==> home/zuul/zuul-output/logs/controller/post_oc_get_builds.log <==
*** [INFO] Showing oc get 'builds'
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/namespaces/service-telemetry/builds?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
[INFO] oc get 'builds' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/namespaces/service-telemetry/builds?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
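Note: every InternalError in the controller logs below fails the same way. The aggregated OpenShift API servers (build.openshift.io, image.openshift.io) cannot POST their SubjectAccessReview authorization checks to https://10.217.4.1:443 (presumably the cluster-internal kubernetes service address) and get connection refused, while plain core-API queries such as 'oc get pods' still answer. So the failures point at the CRC cluster's internal API endpoint, not at the service-telemetry resources themselves. Two suggested checks, not part of the captured job and assuming a kubeconfig for the same cluster:

  # overall apiserver health through the client; 'oc get --raw' requests the given API path verbatim
  oc get --raw /readyz
  # whether the aggregated OpenShift API services report Available
  oc get apiservice v1.build.openshift.io v1.image.openshift.io
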
==> home/zuul/zuul-output/logs/controller/post_oc_get_subscriptions.log <==
*** [INFO] Showing oc get 'subscriptions'
No resources found in service-telemetry namespace.
[INFO] oc get 'subscriptions' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

==> home/zuul/zuul-output/logs/controller/post_oc_get_images.log <==
*** [INFO] Showing oc get 'images'
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/images?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get images.image.openshift.io)
[INFO] oc get 'images' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/images?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get images.image.openshift.io)

==> home/zuul/zuul-output/logs/controller/post_oc_get_imagestream.log <==
*** [INFO] Showing oc get 'imagestream'
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/service-telemetry/imagestreams?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
[INFO] oc get 'imagestream' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/service-telemetry/imagestreams?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)

==> home/zuul/zuul-output/logs/controller/post_oc_get_pods.log <==
*** [INFO] Showing oc get 'pods'
No resources found in service-telemetry namespace.
[INFO] oc get 'pods' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

==> home/zuul/zuul-output/logs/controller/post_oc_describe_subscriptions_STO.log <==
Error from server (NotFound): subscriptions.operators.coreos.com "service-telemetry-operator" not found

==> home/zuul/zuul-output/logs/controller/describe_sto.log <==
No resources found in service-telemetry namespace.

==> home/zuul/zuul-output/logs/controller/post_question_deployment.log <==
What images were created in the internal registry?
Usage: grep [OPTION]... PATTERNS [FILE]...
Try 'grep --help' for more information.
What state is the STO csv in?
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

==> home/zuul/zuul-output/logs/controller/post_pv.log <==
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                  STORAGECLASS                   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97   30Gi       RWX            Retain           Bound    openshift-image-registry/crc-image-registry-storage   crc-csi-hostpath-provisioner                                    472d

==> home/zuul/zuul-output/logs/controller/post_pvc.log <==
No resources found in service-telemetry namespace.

==> home/zuul/zuul-output/logs/controller/logs_sto.log <==
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'oc logs -h' for help and examples

==> home/zuul/zuul-output/logs/controller/logs_sgo.log <==
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'oc logs -h' for help and examples

==> home/zuul/zuul-output/logs/controller/logs_qdr.log <==
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'oc logs -h' for help and examples
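Note: the three logs_*.log captures above fail because 'oc logs' was invoked with no target; as the usage message says, a POD or TYPE/NAME is required. A minimal sketch of working invocations, assuming the usual deployment names behind the sto/sgo/qdr abbreviations (the names are an assumption, not taken from these logs, and no pods existed in this run anyway):

  oc logs deployment/service-telemetry-operator -n service-telemetry
  oc logs deployment/smart-gateway-operator -n service-telemetry
  oc logs deployment/default-interconnect -n service-telemetry
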
==> home/zuul/zuul-output/logs/controller/ansible.log <==
2025-10-13 00:19:31,046 p=27673 u=zuul n=ansible | Starting galaxy collection install process
2025-10-13 00:19:31,047 p=27673 u=zuul n=ansible | Process install dependency map
2025-10-13 00:19:46,129 p=27673 u=zuul n=ansible | Starting collection install process
2025-10-13 00:19:46,129 p=27673 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+07f6a4f6' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+07f6a4f6 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | cifmw.general:1.0.0+07f6a4f6 was installed successfully
2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-10-13 00:19:47,526 p=27673 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-10-13 00:19:47,527 p=27673 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-10-13 00:19:47,527 p=27673 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-10-13 00:19:48,128 p=27673 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-10-13 00:19:48,128 p=27673 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-10-13 00:19:48,129 p=27673 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-10-13 00:19:48,534 p=27673 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-10-13 00:19:48,534 p=27673 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
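Note: the block above is the job bootstrapping its Ansible collection set into the default ~/.ansible/collections path. To reproduce a comparable install outside the job, the usual approach is a pinned requirements file fed to ansible-galaxy; a minimal sketch (the file layout is assumed, only a few Galaxy-published entries are shown, and cifmw.general plus the @NAMESPACE@.@NAME@ placeholder do not look like regular Galaxy releases, so they are left out here):

  cat > /tmp/cifmw-requirements.yml <<'EOF'
  collections:
    - name: containers.podman
      version: "1.16.2"
    - name: community.general
      version: "10.0.1"
    - name: ansible.posix
      version: "1.6.2"
    - name: kubernetes.core
      version: "5.0.0"
    - name: community.okd
      version: "4.0.0"
  EOF
  ansible-galaxy collection install -r /tmp/cifmw-requirements.yml
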
2025-10-13 00:19:57,011 p=28260 u=zuul n=ansible | PLAY [Bootstrap playbook] ****************************************************** 2025-10-13
00:19:57,028 p=28260 u=zuul n=ansible | TASK [Gathering Facts ] ******************************************************** 2025-10-13 00:19:57,028 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:57 +0000 (0:00:00.033) 0:00:00.033 ******** 2025-10-13 00:19:58,125 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,159 p=28260 u=zuul n=ansible | TASK [Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] *** 2025-10-13 00:19:58,159 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:01.131) 0:00:01.164 ******** 2025-10-13 00:19:58,193 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,202 p=28260 u=zuul n=ansible | TASK [Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] *** 2025-10-13 00:19:58,202 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.042) 0:00:01.207 ******** 2025-10-13 00:19:58,253 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,259 p=28260 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] *** 2025-10-13 00:19:58,260 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.057) 0:00:01.264 ******** 2025-10-13 00:19:58,623 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,631 p=28260 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] *** 2025-10-13 00:19:58,631 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.371) 0:00:01.636 ******** 2025-10-13 00:19:58,654 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,663 p=28260 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] *** 2025-10-13 00:19:58,663 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.031) 0:00:01.668 ******** 2025-10-13 00:19:58,685 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,692 p=28260 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] *** 2025-10-13 00:19:58,692 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.029) 0:00:01.697 ******** 2025-10-13 00:19:58,723 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,732 p=28260 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] *************** 2025-10-13 00:19:58,732 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.039) 0:00:01.737 ******** 2025-10-13 00:20:00,253 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:00,276 p=28260 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] *** 2025-10-13 00:20:00,276 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:00 +0000 (0:00:01.544) 0:00:03.281 
******** 2025-10-13 00:20:00,537 p=28260 u=zuul n=ansible | changed: [localhost] => (item=tmp) 2025-10-13 00:20:00,986 p=28260 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories) 2025-10-13 00:20:01,282 p=28260 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup) 2025-10-13 00:20:01,294 p=28260 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] *** 2025-10-13 00:20:01,295 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:01 +0000 (0:00:01.018) 0:00:04.299 ******** 2025-10-13 00:20:02,907 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:02,920 p=28260 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] *** 2025-10-13 00:20:02,921 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:02 +0000 (0:00:01.625) 0:00:05.925 ******** 2025-10-13 00:20:06,072 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:06,080 p=28260 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] *** 2025-10-13 00:20:06,081 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:06 +0000 (0:00:03.159) 0:00:09.085 ******** 2025-10-13 00:20:14,582 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:14,595 p=28260 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] *** 2025-10-13 00:20:14,595 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:14 +0000 (0:00:08.514) 0:00:17.600 ******** 2025-10-13 00:20:15,509 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:15,519 p=28260 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] *** 2025-10-13 00:20:15,519 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:15 +0000 (0:00:00.924) 0:00:18.524 ******** 2025-10-13 00:20:15,542 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:15,551 p=28260 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] *** 2025-10-13 00:20:15,551 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:15 +0000 (0:00:00.031) 0:00:18.556 ******** 2025-10-13 00:20:16,215 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:16,227 p=28260 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ 
cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] *** 2025-10-13 00:20:16,227 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.676) 0:00:19.232 ******** 2025-10-13 00:20:16,277 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,286 p=28260 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] *** 2025-10-13 00:20:16,286 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.058) 0:00:19.291 ******** 2025-10-13 00:20:16,339 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,346 p=28260 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] *** 2025-10-13 00:20:16,346 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.059) 0:00:19.351 ******** 2025-10-13 00:20:16,374 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,381 p=28260 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] *** 2025-10-13 00:20:16,381 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.034) 0:00:19.386 ******** 2025-10-13 00:20:16,841 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:16,855 p=28260 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2025-10-13 00:20:16,855 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.473) 0:00:19.860 ******** 2025-10-13 00:20:17,707 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:17,713 p=28260 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2025-10-13 00:20:17,713 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.858) 0:00:20.718 ******** 2025-10-13 00:20:17,733 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,740 p=28260 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] *** 2025-10-13 00:20:17,740 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 
(0:00:00.027) 0:00:20.745 ******** 2025-10-13 00:20:17,760 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,767 p=28260 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] *** 2025-10-13 00:20:17,767 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.026) 0:00:20.772 ******** 2025-10-13 00:20:17,786 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,792 p=28260 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] *** 2025-10-13 00:20:17,792 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.025) 0:00:20.797 ******** 2025-10-13 00:20:17,817 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:17,823 p=28260 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] *** 2025-10-13 00:20:17,823 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.030) 0:00:20.828 ******** 2025-10-13 00:20:17,845 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,852 p=28260 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] *** 2025-10-13 00:20:17,852 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.029) 0:00:20.857 ******** 2025-10-13 00:20:17,877 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,883 p=28260 u=zuul n=ansible | TASK [Download the RPM name=krb_request] *************************************** 2025-10-13 00:20:17,884 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.031) 0:00:20.888 ******** 2025-10-13 00:20:17,909 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,916 p=28260 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] *** 2025-10-13 00:20:17,916 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.032) 0:00:20.921 ******** 2025-10-13 00:20:17,941 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,947 p=28260 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] *** 2025-10-13 00:20:17,947 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.031) 0:00:20.952 ******** 2025-10-13 00:20:17,963 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,974 p=28260 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] *** 2025-10-13 00:20:17,974 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.027) 0:00:20.979 ******** 2025-10-13 00:20:17,989 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:18,000 p=28260 u=zuul n=ansible | 
TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] *** 2025-10-13 00:20:18,000 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.025) 0:00:21.005 ******** 2025-10-13 00:20:18,023 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:18,032 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] *** 2025-10-13 00:20:18,033 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.032) 0:00:21.037 ******** 2025-10-13 00:20:18,449 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:18,456 p=28260 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] *** 2025-10-13 00:20:18,456 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.423) 0:00:21.461 ******** 2025-10-13 00:20:18,860 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:18,873 p=28260 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] *** 2025-10-13 00:20:18,873 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.416) 0:00:21.878 ******** 2025-10-13 00:20:19,280 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:19,286 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] *** 2025-10-13 00:20:19,287 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.413) 0:00:22.291 ******** 2025-10-13 00:20:19,316 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,323 p=28260 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] *** 2025-10-13 00:20:19,323 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.036) 0:00:22.328 ******** 2025-10-13 00:20:19,367 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,372 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] *** 2025-10-13 00:20:19,373 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.049) 0:00:22.377 ******** 2025-10-13 00:20:19,410 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,416 p=28260 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] *** 2025-10-13 00:20:19,416 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.043) 0:00:22.421 ******** 2025-10-13 00:20:19,437 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,443 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] *** 2025-10-13 
00:20:19,443 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.027) 0:00:22.448 ******** 2025-10-13 00:20:19,481 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,487 p=28260 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] *** 2025-10-13 00:20:19,487 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.043) 0:00:22.492 ******** 2025-10-13 00:20:19,508 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,515 p=28260 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] *** 2025-10-13 00:20:19,515 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.027) 0:00:22.519 ******** 2025-10-13 00:20:20,207 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:20,215 p=28260 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] *** 2025-10-13 00:20:20,216 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:20 +0000 (0:00:00.700) 0:00:23.220 ******** 2025-10-13 00:20:20,597 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo) 2025-10-13 00:20:21,567 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo) 2025-10-13 00:20:21,574 p=28260 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] *** 2025-10-13 00:20:21,574 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:21 +0000 (0:00:01.358) 0:00:24.579 ******** 2025-10-13 00:20:22,779 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:22,791 p=28260 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] *** 2025-10-13 00:20:22,792 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:22 +0000 (0:00:01.217) 0:00:25.796 ******** 2025-10-13 00:20:23,314 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:23,341 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] *** 2025-10-13 00:20:23,341 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.549) 0:00:26.346 ******** 2025-10-13 00:20:23,400 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml) 2025-10-13 00:20:23,415 p=28260 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] ********* 2025-10-13 00:20:23,415 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.073) 0:00:26.420 ******** 2025-10-13 00:20:23,446 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages: - bash-completion - ca-certificates - git-core - make - tar - tmux - python3-pip 2025-10-13 00:20:23,463 p=28260 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] *** 2025-10-13 00:20:23,463 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.048) 0:00:26.468 ******** 2025-10-13 00:20:54,714 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:54,722 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift 
client _raw_params=oc version --client -o yaml] *** 2025-10-13 00:20:54,722 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:54 +0000 (0:00:31.258) 0:00:57.727 ******** 2025-10-13 00:20:54,914 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:54,920 p=28260 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] *** 2025-10-13 00:20:54,920 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:54 +0000 (0:00:00.198) 0:00:57.925 ******** 2025-10-13 00:20:55,105 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:55,112 p=28260 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] *** 2025-10-13 00:20:55,112 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:55 +0000 (0:00:00.192) 0:00:58.117 ******** 2025-10-13 00:21:00,193 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:00,208 p=28260 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2025-10-13 00:21:00,208 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:05.096) 0:01:03.213 ******** 2025-10-13 00:21:00,231 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:00,246 p=28260 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2025-10-13 00:21:00,246 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:00.037) 0:01:03.251 ******** 2025-10-13 00:21:00,681 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:00,689 p=28260 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2025-10-13 00:21:00,689 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:00.443) 0:01:03.694 ******** 2025-10-13 00:21:01,077 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:01,092 p=28260 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2025-10-13 00:21:01,092 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.402) 0:01:04.097 ******** 2025-10-13 00:21:01,117 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,131 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2025-10-13 00:21:01,131 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.038) 0:01:04.136 ******** 2025-10-13 00:21:01,155 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,169 p=28260 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. 
name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2025-10-13 00:21:01,169 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.038) 0:01:04.174 ******** 2025-10-13 00:21:01,191 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,205 p=28260 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2025-10-13 00:21:01,205 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.035) 0:01:04.210 ******** 2025-10-13 00:21:01,224 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,232 p=28260 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2025-10-13 00:21:01,232 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.027) 0:01:04.237 ******** 2025-10-13 00:21:01,249 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,257 p=28260 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2025-10-13 00:21:01,257 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.025) 0:01:04.262 ******** 2025-10-13 00:21:01,280 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,288 p=28260 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2025-10-13 00:21:01,288 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.030) 0:01:04.293 ******** 2025-10-13 00:21:01,517 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2025-10-13 00:21:01,711 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs) 2025-10-13 00:21:01,906 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp) 2025-10-13 00:21:02,134 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes) 2025-10-13 00:21:02,311 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-10-13 00:21:02,336 p=28260 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] *** 2025-10-13 00:21:02,336 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:01.047) 0:01:05.341 ******** 2025-10-13 00:21:02,451 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2025-10-13 00:21:02,452 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:00.115) 0:01:05.456 ******** 2025-10-13 00:21:02,662 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2025-10-13 00:21:02,818 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2025-10-13 00:21:02,971 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-10-13 00:21:02,978 p=28260 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 
2025-10-13 00:21:02,978 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:00.526) 0:01:05.983 ******** 2025-10-13 00:21:03,005 p=28260 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2025-10-13 00:21:03,005 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.027) 0:01:06.010 ******** 2025-10-13 00:21:03,026 p=28260 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-10-13 00:21:03,028 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,033 p=28260 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2025-10-13 00:21:03,033 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.028) 0:01:06.038 ******** 2025-10-13 00:21:03,055 p=28260 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-10-13 00:21:03,057 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,067 p=28260 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2025-10-13 00:21:03,067 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.033) 0:01:06.072 ******** 2025-10-13 00:21:03,110 p=28260 u=zuul n=ansible | ok: [localhost] => (item={}) 2025-10-13 00:21:03,117 p=28260 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2025-10-13 00:21:03,117 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.050) 0:01:06.122 ******** 2025-10-13 00:21:03,147 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,152 p=28260 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2025-10-13 00:21:03,153 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.035) 0:01:06.157 ******** 
2025-10-13 00:21:03,800 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,812 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2025-10-13 00:21:03,813 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.660) 0:01:06.817 ******** 2025-10-13 00:21:03,832 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,845 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2025-10-13 00:21:03,845 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.032) 0:01:06.850 ******** 2025-10-13 00:21:03,865 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,885 p=28260 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2025-10-13 00:21:03,885 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.039) 0:01:06.890 ******** 2025-10-13 00:21:03,904 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,913 p=28260 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2025-10-13 00:21:03,913 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.028) 0:01:06.918 ******** 2025-10-13 00:21:03,954 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,965 p=28260 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2025-10-13 00:21:03,965 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.051) 0:01:06.969 ******** 2025-10-13 00:21:03,982 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm 2025-10-13 00:21:03,991 p=28260 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2025-10-13 00:21:03,991 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.026) 0:01:06.996 ******** 2025-10-13 00:21:04,031 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml 
ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 
IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: 
manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '1234567842' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: 
kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12345678' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: osp-secret SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: tests/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' 2025-10-13 00:21:04,038 p=28260 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2025-10-13 00:21:04,038 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.046) 0:01:07.043 ******** 2025-10-13 00:21:04,375 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:04,390 p=28260 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2025-10-13 00:21:04,390 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.351) 0:01:07.395 ******** 2025-10-13 00:21:04,414 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - octavia_deploy - 
octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - cinder_kuttl_run - cinder_kuttl - neutron_kuttl_run - neutron_kuttl - octavia_kuttl_run - octavia_kuttl - designate_kuttl - designate_kuttl_run - ovn_kuttl_run - ovn_kuttl - infra_kuttl_run - infra_kuttl - ironic_kuttl_run - ironic_kuttl - ironic_kuttl_crc - heat_kuttl_run - heat_kuttl - heat_kuttl_crc - ansibleee_kuttl_run - ansibleee_kuttl_cleanup - ansibleee_kuttl_prep - ansibleee_kuttl - glance_kuttl_run - glance_kuttl - manila_kuttl_run - manila_kuttl - swift_kuttl_run - swift_kuttl - horizon_kuttl_run - horizon_kuttl - openstack_kuttl_run - openstack_kuttl - mariadb_chainsaw_run - mariadb_chainsaw - horizon_prep - horizon - horizon_cleanup - horizon_deploy_prep - horizon_deploy - horizon_deploy_cleanup - heat_prep - heat - heat_cleanup - heat_deploy_prep - heat_deploy - heat_deploy_cleanup - ansibleee_prep - ansibleee - ansibleee_cleanup - baremetal_prep - baremetal - baremetal_cleanup - ceph_help - ceph - ceph_cleanup - rook_prep - rook - rook_deploy_prep - rook_deploy - rook_crc_disk - rook_cleanup - lvms - nmstate - nncp - nncp_cleanup - netattach - netattach_cleanup - metallb - metallb_config - metallb_config_cleanup - metallb_cleanup - loki - loki_cleanup - loki_deploy - loki_deploy_cleanup - netobserv - netobserv_cleanup - netobserv_deploy - netobserv_deploy_cleanup - manila_prep - manila - manila_cleanup - manila_deploy_prep - manila_deploy - manila_deploy_cleanup - telemetry_prep - telemetry - telemetry_cleanup - telemetry_deploy_prep - telemetry_deploy - telemetry_deploy_cleanup - telemetry_kuttl_run - telemetry_kuttl - swift_prep - swift - swift_cleanup - swift_deploy_prep - swift_deploy - swift_deploy_cleanup - certmanager - certmanager_cleanup - validate_marketplace - redis_deploy_prep - redis_deploy - redis_deploy_cleanup - set_slower_etcd_profile /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile: - help - download_tools - nfs - nfs_cleanup - crc - crc_cleanup - crc_scrub - crc_attach_default_interface - crc_attach_default_interface_cleanup - ipv6_lab_network - ipv6_lab_network_cleanup - ipv6_lab_nat64_router - ipv6_lab_nat64_router_cleanup - ipv6_lab_sno - ipv6_lab_sno_cleanup - ipv6_lab - ipv6_lab_cleanup - attach_default_interface - attach_default_interface_cleanup - network_isolation_bridge - network_isolation_bridge_cleanup - edpm_baremetal_compute - edpm_compute - edpm_compute_bootc - edpm_ansible_runner - edpm_computes_bgp - edpm_compute_repos - edpm_compute_cleanup - edpm_networker - edpm_networker_cleanup - edpm_deploy_instance - tripleo_deploy - standalone_deploy - standalone_sync - standalone - standalone_cleanup - standalone_snapshot - standalone_revert - cifmw_prepare - cifmw_cleanup - bmaas_network - bmaas_network_cleanup - bmaas_route_crc_and_crc_bmaas_networks - bmaas_route_crc_and_crc_bmaas_networks_cleanup - bmaas_crc_attach_network - bmaas_crc_attach_network_cleanup - bmaas_crc_baremetal_bridge - bmaas_crc_baremetal_bridge_cleanup - bmaas_baremetal_net_nad - bmaas_baremetal_net_nad_cleanup - bmaas_metallb - bmaas_metallb_cleanup - bmaas_virtual_bms - 
bmaas_virtual_bms_cleanup - bmaas_sushy_emulator - bmaas_sushy_emulator_cleanup - bmaas_sushy_emulator_wait - bmaas_generate_nodes_yaml - bmaas - bmaas_cleanup failed: false success: true 2025-10-13 00:21:04,421 p=28260 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] *** 2025-10-13 00:21:04,421 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.031) 0:01:07.426 ******** 2025-10-13 00:21:04,850 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:04,857 p=28260 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] *** 2025-10-13 00:21:04,857 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.435) 0:01:07.862 ******** 2025-10-13 00:21:04,873 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:04,887 p=28260 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] *** 2025-10-13 00:21:04,887 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.030) 0:01:07.892 ******** 2025-10-13 00:21:05,384 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:05,393 p=28260 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2025-10-13 00:21:05,393 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.506) 0:01:08.398 ******** 2025-10-13 00:21:05,417 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:05,440 p=28260 u=zuul n=ansible | TASK [Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2025-10-13 00:21:05,440 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.046) 0:01:08.445 ******** 2025-10-13 00:21:05,820 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:05,851 p=28260 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | localhost : ok=43 changed=23 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.411) 0:01:08.856 ******** 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | =============================================================================== 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 31.26s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 8.51s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Install 
openshift client ------------------------------------- 5.10s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 3.16s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 1.63s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.54s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Remove existing repos from /etc/yum.repos.d directory ------ 1.36s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 1.22s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.13s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.05s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 1.02s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.92s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 0.86s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Find existing repos from /etc/yum.repos.d directory -------- 0.70s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Run repo-setup --------------------------------------------- 0.68s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_yamls : Get environment structure ------------------------------- 0.66s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Copy generated repos to /etc/yum.repos.d directory --------- 0.55s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_yamls : Ensure directories exist -------------------------------- 0.53s 2025-10-13 00:21:05,853 p=28260 u=zuul n=ansible | discover_latest_image : Get latest image -------------------------------- 0.51s 2025-10-13 00:21:05,853 p=28260 u=zuul n=ansible | repo_setup : Run repo-setup-get-hash ------------------------------------ 0.47s 2025-10-13 00:21:07,183 p=29086 u=zuul n=ansible | PLAY [Run pre_infra hooks] ***************************************************** 2025-10-13 00:21:07,214 p=29086 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-10-13 00:21:07,214 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.045) 0:00:00.045 ******** 2025-10-13 00:21:07,306 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,316 p=29086 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-10-13 00:21:07,316 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.102) 0:00:00.147 ******** 2025-10-13 00:21:07,369 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,379 p=29086 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2025-10-13 00:21:07,379 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.062) 0:00:00.209 ******** 2025-10-13 00:21:07,428 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,469 p=29086 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2025-10-13 00:21:07,493 p=29086 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-10-13 00:21:07,493 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.114) 0:00:00.324 ******** 2025-10-13 00:21:07,581 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,591 p=29086 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 2025-10-13 00:21:07,591 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.098) 0:00:00.422 ******** 2025-10-13 00:21:07,615 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,623 p=29086 u=zuul n=ansible | TASK [Perpare OpenShift provisioner node name=openshift_provisioner_node] ****** 2025-10-13 00:21:07,623 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.032) 0:00:00.454 ******** 2025-10-13 00:21:07,642 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,677 p=29086 u=zuul n=ansible | PLAY [Prepare the platform] **************************************************** 2025-10-13 00:21:07,702 p=29086 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-10-13 00:21:07,702 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.078) 0:00:00.532 ******** 2025-10-13 00:21:07,738 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,747 p=29086 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-10-13 00:21:07,747 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.045) 0:00:00.578 ******** 2025-10-13 00:21:08,019 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,028 p=29086 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existance that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2025-10-13 00:21:08,028 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.281) 0:00:00.859 ******** 2025-10-13 00:21:08,047 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,057 p=29086 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-10-13 00:21:08,057 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:00.888 ******** 2025-10-13 00:21:08,076 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,086 p=29086 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | 
b64decode | from_yaml }}, cacheable=True] *** 2025-10-13 00:21:08,086 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:00.916 ******** 2025-10-13 00:21:08,104 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,121 p=29086 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2025-10-13 00:21:08,121 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.035) 0:00:00.952 ******** 2025-10-13 00:21:08,141 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,151 p=29086 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2025-10-13 00:21:08,151 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.030) 0:00:00.982 ******** 2025-10-13 00:21:08,171 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,180 p=29086 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2025-10-13 00:21:08,180 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.029) 0:00:01.011 ******** 2025-10-13 00:21:08,199 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,209 p=29086 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2025-10-13 00:21:08,210 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.029) 0:00:01.040 ******** 2025-10-13 00:21:08,515 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,525 p=29086 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2025-10-13 00:21:08,525 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.315) 0:00:01.356 ******** 2025-10-13 00:21:08,553 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost 2025-10-13 00:21:08,566 p=29086 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-10-13 00:21:08,566 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.041) 0:00:01.397 ******** 2025-10-13 00:21:08,586 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,595 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-10-13 00:21:08,595 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:01.426 ******** 2025-10-13 00:21:08,615 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,624 p=29086 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2025-10-13 00:21:08,624 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:01.455 ******** 2025-10-13 00:21:08,644 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,656 p=29086 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else 
cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2025-10-13 00:21:08,657 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.032) 0:00:01.487 ******** 2025-10-13 00:21:08,684 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,697 p=29086 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2025-10-13 00:21:08,697 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.040) 0:00:01.528 ******** 2025-10-13 00:21:08,942 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,953 p=29086 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2025-10-13 00:21:08,953 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.255) 0:00:01.784 ******** 2025-10-13 00:21:08,983 p=29086 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2025-10-13 00:21:08,993 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] *** 2025-10-13 00:21:08,993 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.040) 0:00:01.824 ******** 2025-10-13 00:21:09,013 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,025 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). 
users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2025-10-13 00:21:09,025 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.032) 0:00:01.856 ******** 2025-10-13 00:21:09,048 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,061 p=29086 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2025-10-13 00:21:09,061 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.035) 0:00:01.892 ******** 2025-10-13 00:21:09,080 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,090 p=29086 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2025-10-13 00:21:09,090 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.028) 0:00:01.921 ******** 2025-10-13 00:21:09,113 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:09,123 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] ***************** 2025-10-13 00:21:09,123 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.033) 0:00:01.954 ******** 2025-10-13 00:21:09,153 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost 2025-10-13 00:21:09,166 p=29086 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2025-10-13 00:21:09,166 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.042) 0:00:01.996 ******** 2025-10-13 00:21:09,181 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,194 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2025-10-13 00:21:09,194 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.028) 0:00:02.025 ******** 2025-10-13 00:21:09,236 p=29086 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log 2025-10-13 00:21:09,663 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:09,676 p=29086 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2025-10-13 00:21:09,676 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.481) 0:00:02.506 ******** 2025-10-13 00:21:09,696 p=29086 u=zuul n=ansible | ok: [localhost] => changed: false msg: 
All assertions passed 2025-10-13 00:21:09,709 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2025-10-13 00:21:09,709 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.032) 0:00:02.539 ******** 2025-10-13 00:21:10,195 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,208 p=29086 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2025-10-13 00:21:10,209 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.499) 0:00:03.039 ******** 2025-10-13 00:21:10,254 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:10,272 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2025-10-13 00:21:10,272 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.063) 0:00:03.103 ******** 2025-10-13 00:21:10,637 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,646 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2025-10-13 00:21:10,646 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.373) 0:00:03.477 ******** 2025-10-13 00:21:10,900 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,919 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2025-10-13 00:21:10,920 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.273) 0:00:03.750 ******** 2025-10-13 00:21:11,252 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:11,275 p=29086 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2025-10-13 00:21:11,275 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.355) 0:00:04.106 ******** 2025-10-13 00:21:11,348 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:11,360 p=29086 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2025-10-13 00:21:11,360 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.084) 0:00:04.190 ******** 2025-10-13 00:21:11,893 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:11,904 p=29086 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] *** 2025-10-13 00:21:11,904 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.543) 0:00:04.734 ******** 2025-10-13 00:21:12,195 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,205 p=29086 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2025-10-13 00:21:12,205 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.300) 0:00:05.035 ******** 2025-10-13 00:21:12,641 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:12,678 p=29086 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2025-10-13 00:21:12,678 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.473) 0:00:05.509 ******** 2025-10-13 00:21:12,899 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,912 p=29086 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2025-10-13 00:21:12,912 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.233) 0:00:05.743 ******** 2025-10-13 00:21:12,936 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,952 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2025-10-13 00:21:12,952 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.039) 0:00:05.782 ******** 2025-10-13 00:21:13,926 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-10-13 00:21:14,610 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-10-13 00:21:14,634 p=29086 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2025-10-13 00:21:14,634 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:14 +0000 (0:00:01.682) 0:00:07.465 ******** 2025-10-13 00:21:15,723 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:15,733 p=29086 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': 
{'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2025-10-13 00:21:15,733 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:15 +0000 (0:00:01.098) 0:00:08.564 ******** 2025-10-13 00:21:16,498 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-10-13 00:21:17,268 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-10-13 00:21:17,284 p=29086 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2025-10-13 00:21:17,284 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:17 +0000 (0:00:01.550) 0:00:10.114 ******** 2025-10-13 00:21:18,202 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:18,217 p=29086 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2025-10-13 00:21:18,217 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.933) 0:00:11.048 ******** 2025-10-13 00:21:18,272 p=29086 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log 2025-10-13 00:21:18,460 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:18,471 p=29086 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2025-10-13 00:21:18,471 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.254) 0:00:11.302 ******** 2025-10-13 00:21:18,503 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,536 p=29086 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] *** 2025-10-13 00:21:18,536 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.064) 0:00:11.367 ******** 2025-10-13 00:21:18,554 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,564 p=29086 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2025-10-13 00:21:18,564 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.027) 0:00:11.394 ******** 2025-10-13 00:21:18,586 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,596 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2025-10-13 00:21:18,596 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.032) 0:00:11.427 ******** 
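[editor's note] For reference, the "Allow anonymous image-pulls in CRC registry for targeted namespaces" task shown above expands, for each namespace in its loop (openstack and openstack-operators in this run), to roughly the RoleBinding below. This is a minimal sketch rendered as a standalone manifest rather than the module call from the role, with the namespace fixed to openstack and the standard apiGroup fields added so it could also be applied by hand with oc apply -f:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:image-puller
  namespace: openstack            # the loop above also applies this in openstack-operators
subjects:
  - kind: User
    name: system:anonymous
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: system:unauthenticated
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:image-puller
  apiGroup: rbac.authorization.k8s.io
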
2025-10-13 00:21:18,629 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,640 p=29086 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2025-10-13 00:21:18,640 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.044) 0:00:11.471 ******** 2025-10-13 00:21:18,664 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,675 p=29086 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2025-10-13 00:21:18,675 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.034) 0:00:11.506 ******** 2025-10-13 00:21:18,699 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,711 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] *** 2025-10-13 00:21:18,711 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.036) 0:00:11.542 ******** 2025-10-13 00:21:18,742 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,754 p=29086 u=zuul n=ansible | TASK [openshift_setup : Metal3 tweaks _raw_params=metal3_config.yml] *********** 2025-10-13 00:21:18,754 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.042) 0:00:11.585 ******** 2025-10-13 00:21:18,786 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_setup/tasks/metal3_config.yml for localhost 2025-10-13 00:21:18,803 p=29086 u=zuul n=ansible | TASK [openshift_setup : Fetch Metal3 configuration name _raw_params=oc get Provisioning -o name] *** 2025-10-13 00:21:18,803 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.048) 0:00:11.633 ******** 2025-10-13 00:21:18,817 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,828 p=29086 u=zuul n=ansible | TASK [openshift_setup : Apply the patch to Metal3 Provisioning _raw_params=oc patch {{ _cifmw_openshift_setup_provisioning_name.stdout }} --type='json' -p='[{"op": "replace", "path": "/spec/watchAllNamespaces", "value": true}]'] *** 2025-10-13 00:21:18,828 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.025) 0:00:11.658 ******** 2025-10-13 00:21:18,843 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,852 p=29086 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ 
cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] *** 2025-10-13 00:21:18,853 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.024) 0:00:11.683 ******** 2025-10-13 00:21:19,567 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:19,580 p=29086 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] *** 2025-10-13 00:21:19,580 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:19 +0000 (0:00:00.727) 0:00:12.411 ******** 2025-10-13 00:21:20,513 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:20,535 p=29086 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] *** 2025-10-13 00:21:20,535 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:20 +0000 (0:00:00.954) 0:00:13.366 ******** 2025-10-13 00:21:21,226 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:21,247 p=29086 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] *** 2025-10-13 00:21:21,247 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.711) 0:00:14.077 ******** 2025-10-13 00:21:21,262 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,272 p=29086 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] *** 2025-10-13 00:21:21,273 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.025) 0:00:14.103 ******** 2025-10-13 00:21:21,294 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,319 p=29086 u=zuul n=ansible | TASK [Deploy Observability operator. 
name=openshift_obs] *********************** 2025-10-13 00:21:21,320 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.046) 0:00:14.150 ******** 2025-10-13 00:21:21,343 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,352 p=29086 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2025-10-13 00:21:21,352 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.032) 0:00:14.182 ******** 2025-10-13 00:21:21,379 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,394 p=29086 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2025-10-13 00:21:21,394 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.042) 0:00:14.225 ******** 2025-10-13 00:21:21,418 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,431 p=29086 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2025-10-13 00:21:21,431 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.036) 0:00:14.262 ******** 2025-10-13 00:21:21,448 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,457 p=29086 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 2025-10-13 00:21:21,457 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.025) 0:00:14.288 ******** 2025-10-13 00:21:21,472 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,483 p=29086 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2025-10-13 00:21:21,483 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.026) 0:00:14.314 ******** 2025-10-13 00:21:21,498 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,507 p=29086 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2025-10-13 00:21:21,507 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.024) 0:00:14.338 ******** 2025-10-13 00:21:21,534 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,547 p=29086 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2025-10-13 00:21:21,547 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.039) 0:00:14.377 ******** 2025-10-13 00:21:21,576 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,591 p=29086 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-10-13 00:21:21,591 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.044) 0:00:14.422 ******** 2025-10-13 00:21:21,665 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:21,674 p=29086 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-10-13 00:21:21,674 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.082) 0:00:14.505 ******** 2025-10-13 00:21:21,731 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:21,741 p=29086 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2025-10-13 00:21:21,741 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.066) 0:00:14.572 ******** 2025-10-13 00:21:21,815 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,853 p=29086 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-10-13 00:21:21,853 p=29086 u=zuul n=ansible | localhost : ok=36 changed=12 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.112) 0:00:14.684 ******** 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | =============================================================================== 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.68s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces --- 1.55s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Get internal OpenShift registry route ----------------- 1.10s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 0.95s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Wait for the image registry to be ready --------------- 0.93s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.73s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Patch samples registry configuration ------------------ 0.71s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Create the openshift_login parameters file ------------ 0.54s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch new OpenShift access token ---------------------- 0.50s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift token --------------------------------- 0.48s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Append the KUBECONFIG to the install yamls parameters --- 0.47s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift API URL ------------------------------- 0.37s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift current user -------------------------- 0.36s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Ensure output directory exists ------------------------ 0.32s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Read the install yamls parameters file ---------------- 0.30s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | networking_mapper : Check for Networking Environment Definition file existence --- 0.28s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift kubeconfig context -------------------- 0.27s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Check if kubeconfig exists ---------------------------- 0.26s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Login into OpenShift internal registry 
---------------- 0.25s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Ensure output directory exists ------------------------ 0.23s 2025-10-13 00:21:38,687 p=29689 u=zuul n=ansible | Starting galaxy collection install process 2025-10-13 00:21:38,713 p=29689 u=zuul n=ansible | Process install dependency map 2025-10-13 00:21:53,440 p=29689 u=zuul n=ansible | Starting collection install process 2025-10-13 00:21:53,440 p=29689 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+07f6a4f6' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+07f6a4f6 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | cifmw.general:1.0.0+07f6a4f6 was installed successfully 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2025-10-13 00:21:55,423 p=29689 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2025-10-13 00:21:55,423 p=29689 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2025-10-13 00:21:55,424 p=29689 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to 
'/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2025-10-13 00:21:55,694 p=29689 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2025-10-13 00:21:55,695 p=29689 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2025-10-13 00:21:55,695 p=29689 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2025-10-13 00:21:56,034 p=29689 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2025-10-13 00:21:56,035 p=29689 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2025-10-13 00:21:56,035 p=29689 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2025-10-13 00:21:56,354 p=29689 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2025-10-13 00:21:56,355 p=29689 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2025-10-13 00:21:56,355 p=29689 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2025-10-13 00:21:56,580 p=29689 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2025-10-13 00:21:56,580 p=29689 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully home/zuul/zuul-output/logs/ci-framework-data/0000755000175000017500000000000015073043303020361 5ustar 
zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/0000755000175000017500000000000015073043271021331 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_fetch_openshift.log0000644000175000017500000000035215073042765027631 0ustar zuulzuulWARNING: Using insecure TLS client config. Setting this option is not supported! Login successful. You have access to 64 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project "default". home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log0000644000175000017500000000002115073042776032571 0ustar zuulzuulLogin Succeeded! home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_check_for_oc.log0000644000175000017500000000002215073043207027047 0ustar zuulzuul/home/zuul/bin/oc home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_run_openstack_must_gather.log0000644000175000017500000001060715073043212031726 0ustar zuulzuul[must-gather ] OUT 2025-10-13T00:23:36.738050793Z Using must-gather plug-in image: quay.io/openstack-k8s-operators/openstack-must-gather:latest When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304 ClientVersion: 4.19.14 ClusterVersion: Stable at "4.16.0" ClusterOperators: clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13 clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused clusteroperator/cloud-credential is missing clusteroperator/cluster-autoscaler is missing clusteroperator/insights is missing clusteroperator/monitoring is missing clusteroperator/storage is missing [must-gather ] OUT 2025-10-13T00:23:36.787875107Z namespace/openshift-must-gather-xwq75 created [must-gather ] OUT 2025-10-13T00:23:36.794218703Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-xmtph created [must-gather ] OUT 2025-10-13T00:23:38.431690265Z namespace/openshift-must-gather-xwq75 deleted Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 
a84dabf3-edcf-4828-b6a1-f9d3a6f02304 ClientVersion: 4.19.14 ClusterVersion: Stable at "4.16.0" ClusterOperators: clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13 clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused clusteroperator/cloud-credential is missing clusteroperator/cluster-autoscaler is missing clusteroperator/insights is missing clusteroperator/monitoring is missing clusteroperator/storage is missing Error from server (Forbidden): pods "must-gather-" is forbidden: error looking up service account openshift-must-gather-xwq75/default: serviceaccount "default" not found home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_prepare_root_ssh.log0000644000175000017500000045036115073043231030035 0ustar zuulzuulPseudo-terminal will not be allocated because stdin is not a terminal. Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 Part of OpenShift 4.16, RHCOS is a Kubernetes-native operating system managed by the Machine Config Operator (`clusteroperator/machine-config`). 
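The ci_script_000_run_openstack_must_gather.log output above was produced by the ci-framework's must-gather wrapper. A minimal manual equivalent is sketched below; the plug-in image is taken from that log, while the destination directory is an assumed example path, not something recorded in the run.

# Sketch only: invoke the same must-gather plug-in by hand.
# Image name comes from the log above; --dest-dir is an assumed example path.
oc adm must-gather \
  --image=quay.io/openstack-k8s-operators/openstack-must-gather:latest \
  --dest-dir=/tmp/openstack-must-gather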
WARNING: Direct SSH access to machines is not recommended; instead, make configuration changes via `machineconfig` objects: https://docs.openshift.com/container-platform/4.16/architecture/architecture-rhcos.html --- + test -d /etc/ssh/sshd_config.d/ + sudo sed -ri 's/PermitRootLogin no/PermitRootLogin prohibit-password/' '/etc/ssh/sshd_config.d/*' sed: can't read /etc/ssh/sshd_config.d/*: No such file or directory + true + sudo sed -i 's/PermitRootLogin no/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config + sudo systemctl restart sshd + sudo cp -r .ssh /root/ + sudo chown -R root: /root/.ssh + mkdir -p /tmp/crc-logs-artifacts + sudo cp -av /ostree/deploy/rhcos/var/log/pods /tmp/crc-logs-artifacts/ '/ostree/deploy/rhcos/var/log/pods' -> '/tmp/crc-logs-artifacts/pods' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns' -> 
'/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f' 
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log' 
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log' '/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator' -> 
'/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311/image-pruner' -> 
'/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311/image-pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311/image-pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311/image-pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller' -> 
'/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7' -> 
'/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79' 
'/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/northd' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/northd' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/northd/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/northd/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/nbdb' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/nbdb' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/nbdb/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/nbdb/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kubecfg-setup' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kubecfg-setup' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kubecfg-setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kubecfg-setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/sbdb' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/sbdb' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/sbdb/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/sbdb/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f' -> 
'/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager' 
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch' -> 
'/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2' -> 
'/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce' 
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7/registry' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7/registry' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7/registry/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7/registry/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed' -> 
'/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/registry-server/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07/installer/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log' + sudo chown -R core:core /tmp/crc-logs-artifacts home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_copy_logs_from_crc.log0000644000175000017500000003475115073043234030333 0ustar zuulzuulExecuting: program /usr/bin/ssh host api.crc.testing, user core, command sftp OpenSSH_9.9p1, OpenSSL 3.5.1 1 Jul 2025 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: configuration requests final Match pass debug1: re-parsing configuration debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: Connecting to api.crc.testing [38.102.83.180] port 22. debug1: Connection established. 
debug1: identity file /home/zuul/.ssh/id_cifw type 2 debug1: identity file /home/zuul/.ssh/id_cifw-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.9 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.7 debug1: compat_banner: match: OpenSSH_8.7 pat OpenSSH* compat 0x04000000 debug1: Authenticating to api.crc.testing:22 as 'core' debug1: load_hostkeys: fopen /home/zuul/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: aes256-gcm@openssh.com MAC: compression: none debug1: kex: client->server cipher: aes256-gcm@openssh.com MAC: compression: none debug1: kex: curve25519-sha256 need=32 dh_need=32 debug1: kex: curve25519-sha256 need=32 dh_need=32 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:/ZfZ15bRL0d31T2CAq03Iw4h8DAqA2+9vySbGcnzmJo debug1: load_hostkeys: fopen /home/zuul/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: Host 'api.crc.testing' is known and matches the ED25519 host key. debug1: Found key in /home/zuul/.ssh/known_hosts:21 debug1: ssh_packet_send2_wrapped: resetting send seqnr 3 debug1: rekey out after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: ssh_packet_read_poll2: resetting read seqnr 3 debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 4294967296 blocks debug1: SSH2_MSG_EXT_INFO received debug1: kex_ext_info_client_parse: server-sig-algs= debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: No credentials were supplied, or the credentials were unavailable or inaccessible No Kerberos credentials available (default cache: KCM:) debug1: No credentials were supplied, or the credentials were unavailable or inaccessible No Kerberos credentials available (default cache: KCM:) debug1: Next authentication method: publickey debug1: Will attempt key: /home/zuul/.ssh/id_cifw ECDSA SHA256:enDo/v02hd2HvcNTNax3UXNNT1Xbeaa83PPEn0Ri9Pg explicit debug1: Offering public key: /home/zuul/.ssh/id_cifw ECDSA SHA256:enDo/v02hd2HvcNTNax3UXNNT1Xbeaa83PPEn0Ri9Pg explicit debug1: Server accepts key: /home/zuul/.ssh/id_cifw ECDSA SHA256:enDo/v02hd2HvcNTNax3UXNNT1Xbeaa83PPEn0Ri9Pg explicit Authenticated to api.crc.testing ([38.102.83.180]:22) using "publickey". debug1: pkcs11_del_provider: called, provider_id = (null) debug1: channel 0: new session [client-session] (inactive timeout: 0) debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. 
debug1: pledge: filesystem debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0 debug1: client_input_hostkeys: searching /home/zuul/.ssh/known_hosts for api.crc.testing / (none) debug1: client_input_hostkeys: searching /home/zuul/.ssh/known_hosts2 for api.crc.testing / (none) debug1: client_input_hostkeys: hostkeys file /home/zuul/.ssh/known_hosts2 does not exist debug1: client_input_hostkeys: no new or deprecated keys from server debug1: Remote: /var/home/core/.ssh/authorized_keys:28: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug1: Remote: /var/home/core/.ssh/authorized_keys:28: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug1: Sending subsystem: sftp debug1: pledge: fork scp: debug1: Fetching /tmp/crc-logs-artifacts/ to /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts scp: debug1: truncating at 59795 scp: debug1: truncating at 31608 scp: debug1: truncating at 61484 scp: debug1: truncating at 1917 scp: debug1: truncating at 736 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 60637 scp: debug1: truncating at 1976 scp: debug1: truncating at 307194 scp: debug1: truncating at 265 scp: debug1: truncating at 16930 scp: debug1: truncating at 116 scp: debug1: truncating at 17631 scp: debug1: truncating at 1648 scp: debug1: truncating at 1973 scp: debug1: truncating at 19721 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 59909 scp: debug1: truncating at 43448 scp: debug1: truncating at 5370 scp: debug1: truncating at 9930 scp: debug1: truncating at 5142 scp: debug1: truncating at 130989 scp: debug1: truncating at 75290 scp: debug1: truncating at 59236 scp: debug1: truncating at 85 scp: debug1: truncating at 85 scp: debug1: truncating at 85 scp: debug1: truncating at 7823 scp: debug1: truncating at 6351 scp: debug1: truncating at 5934 scp: debug1: truncating at 23257 scp: debug1: truncating at 25796 scp: debug1: truncating at 120 scp: debug1: truncating at 381 scp: debug1: truncating at 28851 scp: debug1: truncating at 1875 scp: debug1: truncating at 2038 scp: debug1: truncating at 9955 scp: debug1: truncating at 4976 scp: debug1: truncating at 110417 scp: debug1: truncating at 93408 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 23827 scp: debug1: truncating at 28297 scp: debug1: truncating at 96 scp: debug1: truncating at 0 scp: debug1: truncating at 122585 scp: debug1: truncating at 78087 scp: debug1: truncating at 32762 scp: debug1: truncating at 574827 scp: debug1: truncating at 311627 scp: debug1: truncating at 411501 scp: debug1: truncating at 25403 scp: debug1: truncating at 26381 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 11526 scp: debug1: truncating at 51603 scp: debug1: truncating at 75 scp: debug1: truncating at 44076 scp: debug1: truncating at 127371 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 7599 scp: debug1: truncating at 7732 scp: debug1: truncating at 12999 scp: debug1: truncating at 12595 scp: debug1: truncating at 10641 scp: debug1: truncating at 571 scp: debug1: truncating at 909 scp: debug1: truncating at 9654 scp: debug1: truncating at 14032 scp: debug1: truncating at 22065 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 1187 
scp: debug1: truncating at 1054 scp: debug1: truncating at 14404 scp: debug1: truncating at 6805 scp: debug1: truncating at 664 scp: debug1: truncating at 4037 scp: debug1: truncating at 114883 scp: debug1: truncating at 112225 scp: debug1: truncating at 30933 scp: debug1: truncating at 83 scp: debug1: truncating at 0 scp: debug1: truncating at 1212 scp: debug1: truncating at 1345 scp: debug1: truncating at 46226 scp: debug1: truncating at 16333 scp: debug1: truncating at 25182 scp: debug1: truncating at 39824 scp: debug1: truncating at 28909 scp: debug1: truncating at 246832 scp: debug1: truncating at 28696 scp: debug1: truncating at 26036 scp: debug1: truncating at 58729 scp: debug1: truncating at 3086 scp: debug1: truncating at 760212 scp: debug1: truncating at 14943 scp: debug1: truncating at 11687 scp: debug1: truncating at 12256 scp: debug1: truncating at 1042 scp: debug1: truncating at 0 scp: debug1: truncating at 1042 scp: debug1: truncating at 16578 scp: debug1: truncating at 11354 scp: debug1: truncating at 6142 scp: debug1: truncating at 6726 scp: debug1: truncating at 6407 scp: debug1: truncating at 12063 scp: debug1: truncating at 2076 scp: debug1: truncating at 132818 scp: debug1: truncating at 281558 scp: debug1: truncating at 737559 scp: debug1: truncating at 693531 scp: debug1: truncating at 60344 scp: debug1: truncating at 58713 scp: debug1: truncating at 602 scp: debug1: truncating at 2922 scp: debug1: truncating at 443327 scp: debug1: truncating at 1508074 scp: debug1: truncating at 491271 scp: debug1: truncating at 217484 scp: debug1: truncating at 242413 scp: debug1: truncating at 4640 scp: debug1: truncating at 4680 scp: debug1: truncating at 5955 scp: debug1: truncating at 1956828 scp: debug1: truncating at 8190 scp: debug1: truncating at 2347 scp: debug1: truncating at 0 scp: debug1: truncating at 2415 scp: debug1: truncating at 4672 scp: debug1: truncating at 26064 scp: debug1: truncating at 33371 scp: debug1: truncating at 31684 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 2031 scp: debug1: truncating at 46063 scp: debug1: truncating at 17236 scp: debug1: truncating at 391605 scp: debug1: truncating at 890 scp: debug1: truncating at 3764 scp: debug1: truncating at 160066 scp: debug1: truncating at 61 scp: debug1: truncating at 61 scp: debug1: truncating at 31683 scp: debug1: truncating at 7296 scp: debug1: truncating at 53031 scp: debug1: truncating at 3574 scp: debug1: truncating at 52224 scp: debug1: truncating at 37860 scp: debug1: truncating at 23357 scp: debug1: truncating at 22931 scp: debug1: truncating at 46647 scp: debug1: truncating at 296671 scp: debug1: truncating at 100273 scp: debug1: truncating at 178107 scp: debug1: truncating at 29228 scp: debug1: truncating at 34528 scp: debug1: truncating at 39921 scp: debug1: truncating at 37249 scp: debug1: truncating at 91140 scp: debug1: truncating at 1664102 scp: debug1: truncating at 11627525 scp: debug1: truncating at 7599 scp: debug1: truncating at 7732 scp: debug1: truncating at 5648 scp: debug1: truncating at 7598 scp: debug1: truncating at 10792 scp: debug1: truncating at 1212 scp: debug1: truncating at 1345 scp: debug1: truncating at 49782 scp: debug1: truncating at 136260 scp: debug1: truncating at 1173 scp: debug1: truncating at 1040 scp: debug1: truncating at 1276 scp: debug1: truncating at 1386 scp: debug1: truncating at 736 scp: debug1: truncating at 27628 scp: debug1: truncating at 132871 scp: debug1: truncating at 475401 scp: debug1: truncating at 
222883 scp: debug1: truncating at 739 scp: debug1: truncating at 408 scp: debug1: truncating at 408 scp: debug1: truncating at 411 scp: debug1: truncating at 411 scp: debug1: truncating at 392 scp: debug1: truncating at 392 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 404 scp: debug1: truncating at 404 scp: debug1: truncating at 414 scp: debug1: truncating at 414 scp: debug1: truncating at 80 scp: debug1: truncating at 80 scp: debug1: truncating at 157417 scp: debug1: truncating at 56701 scp: debug1: truncating at 1316 scp: debug1: truncating at 1183 scp: debug1: truncating at 1437 scp: debug1: truncating at 153781 scp: debug1: truncating at 38969 scp: debug1: truncating at 172802 scp: debug1: truncating at 769174 scp: debug1: truncating at 227835 scp: debug1: truncating at 27699 scp: debug1: truncating at 61149 scp: debug1: truncating at 214174 scp: debug1: truncating at 84973 scp: debug1: truncating at 26870 scp: debug1: truncating at 40893 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 65035 scp: debug1: truncating at 15196 scp: debug1: truncating at 1533 scp: debug1: truncating at 1533 scp: debug1: truncating at 47479 scp: debug1: truncating at 180933 scp: debug1: truncating at 396 scp: debug1: truncating at 396 scp: debug1: truncating at 11285 scp: debug1: truncating at 13994 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 43448 scp: debug1: truncating at 27628 scp: debug1: truncating at 23802 scp: debug1: truncating at 33464 scp: debug1: truncating at 8727966 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 74 scp: debug1: truncating at 74 scp: debug1: truncating at 74 scp: debug1: truncating at 17964 scp: debug1: truncating at 20589 scp: debug1: truncating at 20593 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 240 scp: debug1: truncating at 725 scp: debug1: truncating at 518 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 25004 scp: debug1: truncating at 51524 scp: debug1: truncating at 29309 scp: debug1: truncating at 48554 scp: debug1: truncating at 2300 scp: debug1: truncating at 614882 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 83755 scp: debug1: truncating at 324520 scp: debug1: truncating at 193188 scp: debug1: truncating at 2192 scp: debug1: truncating at 1509 scp: debug1: truncating at 5045 scp: debug1: truncating at 101 scp: debug1: truncating at 101 scp: debug1: truncating at 101 scp: debug1: truncating at 20217 scp: debug1: truncating at 18729 scp: debug1: truncating at 79965 scp: debug1: truncating at 15339 scp: debug1: truncating at 1212 scp: debug1: truncating at 1345 scp: debug1: truncating at 8134 scp: debug1: truncating at 6025 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 Transferred: sent 170056, received 41078832 bytes, in 2.0 seconds Bytes per second: sent 85191.2, received 20578849.6 debug1: Exit status 0 home/zuul/zuul-output/logs/ci-framework-data/logs/ansible.log0000644000175000017500000040021415073043044023450 0ustar zuulzuul2025-10-13 00:19:31,046 p=27673 u=zuul n=ansible | Starting galaxy collection install process 2025-10-13 00:19:31,047 p=27673 
u=zuul n=ansible | Process install dependency map 2025-10-13 00:19:46,129 p=27673 u=zuul n=ansible | Starting collection install process 2025-10-13 00:19:46,129 p=27673 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+07f6a4f6' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+07f6a4f6 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | cifmw.general:1.0.0+07f6a4f6 was installed successfully 2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2025-10-13 00:19:47,526 p=27673 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2025-10-13 00:19:47,527 p=27673 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2025-10-13 00:19:47,527 p=27673 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | 
kubernetes.core:5.0.0 was installed successfully 2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2025-10-13 00:19:48,128 p=27673 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2025-10-13 00:19:48,128 p=27673 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2025-10-13 00:19:48,129 p=27673 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2025-10-13 00:19:48,534 p=27673 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2025-10-13 00:19:48,534 p=27673 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully 2025-10-13 00:19:57,011 p=28260 u=zuul n=ansible | PLAY [Bootstrap playbook] ****************************************************** 2025-10-13 00:19:57,028 p=28260 u=zuul n=ansible | TASK [Gathering Facts ] ******************************************************** 2025-10-13 00:19:57,028 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:57 +0000 (0:00:00.033) 0:00:00.033 ******** 2025-10-13 00:19:58,125 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 
00:19:58,159 p=28260 u=zuul n=ansible | TASK [Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] *** 2025-10-13 00:19:58,159 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:01.131) 0:00:01.164 ******** 2025-10-13 00:19:58,193 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,202 p=28260 u=zuul n=ansible | TASK [Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] *** 2025-10-13 00:19:58,202 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.042) 0:00:01.207 ******** 2025-10-13 00:19:58,253 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,259 p=28260 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] *** 2025-10-13 00:19:58,260 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.057) 0:00:01.264 ******** 2025-10-13 00:19:58,623 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,631 p=28260 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] *** 2025-10-13 00:19:58,631 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.371) 0:00:01.636 ******** 2025-10-13 00:19:58,654 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,663 p=28260 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] *** 2025-10-13 00:19:58,663 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.031) 0:00:01.668 ******** 2025-10-13 00:19:58,685 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,692 p=28260 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] *** 2025-10-13 00:19:58,692 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.029) 0:00:01.697 ******** 2025-10-13 00:19:58,723 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,732 p=28260 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] *************** 2025-10-13 00:19:58,732 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.039) 0:00:01.737 ******** 2025-10-13 00:20:00,253 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:00,276 p=28260 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] *** 2025-10-13 00:20:00,276 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:00 +0000 (0:00:01.544) 0:00:03.281 ******** 2025-10-13 00:20:00,537 p=28260 u=zuul n=ansible | changed: [localhost] => (item=tmp) 2025-10-13 00:20:00,986 p=28260 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories) 2025-10-13 00:20:01,282 p=28260 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup) 2025-10-13 00:20:01,294 
p=28260 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] *** 2025-10-13 00:20:01,295 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:01 +0000 (0:00:01.018) 0:00:04.299 ******** 2025-10-13 00:20:02,907 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:02,920 p=28260 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] *** 2025-10-13 00:20:02,921 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:02 +0000 (0:00:01.625) 0:00:05.925 ******** 2025-10-13 00:20:06,072 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:06,080 p=28260 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] *** 2025-10-13 00:20:06,081 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:06 +0000 (0:00:03.159) 0:00:09.085 ******** 2025-10-13 00:20:14,582 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:14,595 p=28260 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] *** 2025-10-13 00:20:14,595 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:14 +0000 (0:00:08.514) 0:00:17.600 ******** 2025-10-13 00:20:15,509 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:15,519 p=28260 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] *** 2025-10-13 00:20:15,519 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:15 +0000 (0:00:00.924) 0:00:18.524 ******** 2025-10-13 00:20:15,542 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:15,551 p=28260 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] *** 2025-10-13 00:20:15,551 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:15 +0000 (0:00:00.031) 0:00:18.556 ******** 2025-10-13 00:20:16,215 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:16,227 p=28260 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] *** 2025-10-13 00:20:16,227 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.676) 0:00:19.232 ******** 2025-10-13 00:20:16,277 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,286 p=28260 u=zuul n=ansible | 
TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] *** 2025-10-13 00:20:16,286 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.058) 0:00:19.291 ******** 2025-10-13 00:20:16,339 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,346 p=28260 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] *** 2025-10-13 00:20:16,346 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.059) 0:00:19.351 ******** 2025-10-13 00:20:16,374 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,381 p=28260 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] *** 2025-10-13 00:20:16,381 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.034) 0:00:19.386 ******** 2025-10-13 00:20:16,841 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:16,855 p=28260 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2025-10-13 00:20:16,855 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.473) 0:00:19.860 ******** 2025-10-13 00:20:17,707 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:17,713 p=28260 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2025-10-13 00:20:17,713 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.858) 0:00:20.718 ******** 2025-10-13 00:20:17,733 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,740 p=28260 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] *** 2025-10-13 00:20:17,740 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.027) 0:00:20.745 ******** 2025-10-13 00:20:17,760 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,767 p=28260 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] *** 2025-10-13 00:20:17,767 
p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.026) 0:00:20.772 ******** 2025-10-13 00:20:17,786 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,792 p=28260 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] *** 2025-10-13 00:20:17,792 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.025) 0:00:20.797 ******** 2025-10-13 00:20:17,817 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:17,823 p=28260 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] *** 2025-10-13 00:20:17,823 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.030) 0:00:20.828 ******** 2025-10-13 00:20:17,845 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,852 p=28260 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] *** 2025-10-13 00:20:17,852 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.029) 0:00:20.857 ******** 2025-10-13 00:20:17,877 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,883 p=28260 u=zuul n=ansible | TASK [Download the RPM name=krb_request] *************************************** 2025-10-13 00:20:17,884 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.031) 0:00:20.888 ******** 2025-10-13 00:20:17,909 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,916 p=28260 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] *** 2025-10-13 00:20:17,916 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.032) 0:00:20.921 ******** 2025-10-13 00:20:17,941 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,947 p=28260 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] *** 2025-10-13 00:20:17,947 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.031) 0:00:20.952 ******** 2025-10-13 00:20:17,963 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,974 p=28260 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] *** 2025-10-13 00:20:17,974 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.027) 0:00:20.979 ******** 2025-10-13 00:20:17,989 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:18,000 p=28260 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] *** 2025-10-13 00:20:18,000 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.025) 0:00:21.005 ******** 
2025-10-13 00:20:18,023 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:18,032 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] *** 2025-10-13 00:20:18,033 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.032) 0:00:21.037 ******** 2025-10-13 00:20:18,449 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:18,456 p=28260 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] *** 2025-10-13 00:20:18,456 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.423) 0:00:21.461 ******** 2025-10-13 00:20:18,860 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:18,873 p=28260 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] *** 2025-10-13 00:20:18,873 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.416) 0:00:21.878 ******** 2025-10-13 00:20:19,280 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:19,286 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] *** 2025-10-13 00:20:19,287 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.413) 0:00:22.291 ******** 2025-10-13 00:20:19,316 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,323 p=28260 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] *** 2025-10-13 00:20:19,323 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.036) 0:00:22.328 ******** 2025-10-13 00:20:19,367 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,372 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] *** 2025-10-13 00:20:19,373 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.049) 0:00:22.377 ******** 2025-10-13 00:20:19,410 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,416 p=28260 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] *** 2025-10-13 00:20:19,416 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.043) 0:00:22.421 ******** 2025-10-13 00:20:19,437 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,443 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] *** 2025-10-13 00:20:19,443 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.027) 0:00:22.448 ******** 2025-10-13 00:20:19,481 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,487 p=28260 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo 
path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] *** 2025-10-13 00:20:19,487 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.043) 0:00:22.492 ******** 2025-10-13 00:20:19,508 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,515 p=28260 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] *** 2025-10-13 00:20:19,515 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.027) 0:00:22.519 ******** 2025-10-13 00:20:20,207 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:20,215 p=28260 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] *** 2025-10-13 00:20:20,216 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:20 +0000 (0:00:00.700) 0:00:23.220 ******** 2025-10-13 00:20:20,597 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo) 2025-10-13 00:20:21,567 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo) 2025-10-13 00:20:21,574 p=28260 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] *** 2025-10-13 00:20:21,574 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:21 +0000 (0:00:01.358) 0:00:24.579 ******** 2025-10-13 00:20:22,779 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:22,791 p=28260 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] *** 2025-10-13 00:20:22,792 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:22 +0000 (0:00:01.217) 0:00:25.796 ******** 2025-10-13 00:20:23,314 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:23,341 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] *** 2025-10-13 00:20:23,341 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.549) 0:00:26.346 ******** 2025-10-13 00:20:23,400 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml) 2025-10-13 00:20:23,415 p=28260 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] ********* 2025-10-13 00:20:23,415 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.073) 0:00:26.420 ******** 2025-10-13 00:20:23,446 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages: - bash-completion - ca-certificates - git-core - make - tar - tmux - python3-pip 2025-10-13 00:20:23,463 p=28260 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] *** 2025-10-13 00:20:23,463 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.048) 0:00:26.468 ******** 2025-10-13 00:20:54,714 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:54,722 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] *** 2025-10-13 00:20:54,722 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:54 +0000 (0:00:31.258) 0:00:57.727 ******** 2025-10-13 00:20:54,914 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:54,920 p=28260 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client 
install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] *** 2025-10-13 00:20:54,920 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:54 +0000 (0:00:00.198) 0:00:57.925 ******** 2025-10-13 00:20:55,105 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:55,112 p=28260 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] *** 2025-10-13 00:20:55,112 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:55 +0000 (0:00:00.192) 0:00:58.117 ******** 2025-10-13 00:21:00,193 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:00,208 p=28260 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2025-10-13 00:21:00,208 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:05.096) 0:01:03.213 ******** 2025-10-13 00:21:00,231 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:00,246 p=28260 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2025-10-13 00:21:00,246 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:00.037) 0:01:03.251 ******** 2025-10-13 00:21:00,681 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:00,689 p=28260 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2025-10-13 00:21:00,689 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:00.443) 0:01:03.694 ******** 2025-10-13 00:21:01,077 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:01,092 p=28260 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2025-10-13 00:21:01,092 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.402) 0:01:04.097 ******** 2025-10-13 00:21:01,117 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,131 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2025-10-13 00:21:01,131 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.038) 0:01:04.136 ******** 2025-10-13 00:21:01,155 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,169 p=28260 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. 
name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2025-10-13 00:21:01,169 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.038) 0:01:04.174 ******** 2025-10-13 00:21:01,191 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,205 p=28260 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2025-10-13 00:21:01,205 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.035) 0:01:04.210 ******** 2025-10-13 00:21:01,224 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,232 p=28260 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2025-10-13 00:21:01,232 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.027) 0:01:04.237 ******** 2025-10-13 00:21:01,249 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,257 p=28260 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2025-10-13 00:21:01,257 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.025) 0:01:04.262 ******** 2025-10-13 00:21:01,280 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,288 p=28260 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2025-10-13 00:21:01,288 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.030) 0:01:04.293 ******** 2025-10-13 00:21:01,517 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2025-10-13 00:21:01,711 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs) 2025-10-13 00:21:01,906 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp) 2025-10-13 00:21:02,134 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes) 2025-10-13 00:21:02,311 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-10-13 00:21:02,336 p=28260 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] *** 2025-10-13 00:21:02,336 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:01.047) 0:01:05.341 ******** 2025-10-13 00:21:02,451 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2025-10-13 00:21:02,452 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:00.115) 0:01:05.456 ******** 2025-10-13 00:21:02,662 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2025-10-13 00:21:02,818 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2025-10-13 00:21:02,971 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-10-13 00:21:02,978 p=28260 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 
2025-10-13 00:21:02,978 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:00.526) 0:01:05.983 ******** 2025-10-13 00:21:03,005 p=28260 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2025-10-13 00:21:03,005 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.027) 0:01:06.010 ******** 2025-10-13 00:21:03,026 p=28260 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-10-13 00:21:03,028 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,033 p=28260 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2025-10-13 00:21:03,033 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.028) 0:01:06.038 ******** 2025-10-13 00:21:03,055 p=28260 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-10-13 00:21:03,057 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,067 p=28260 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2025-10-13 00:21:03,067 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.033) 0:01:06.072 ******** 2025-10-13 00:21:03,110 p=28260 u=zuul n=ansible | ok: [localhost] => (item={}) 2025-10-13 00:21:03,117 p=28260 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2025-10-13 00:21:03,117 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.050) 0:01:06.122 ******** 2025-10-13 00:21:03,147 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,152 p=28260 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2025-10-13 00:21:03,153 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.035) 0:01:06.157 ******** 
2025-10-13 00:21:03,800 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,812 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2025-10-13 00:21:03,813 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.660) 0:01:06.817 ******** 2025-10-13 00:21:03,832 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,845 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2025-10-13 00:21:03,845 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.032) 0:01:06.850 ******** 2025-10-13 00:21:03,865 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,885 p=28260 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2025-10-13 00:21:03,885 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.039) 0:01:06.890 ******** 2025-10-13 00:21:03,904 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,913 p=28260 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2025-10-13 00:21:03,913 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.028) 0:01:06.918 ******** 2025-10-13 00:21:03,954 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,965 p=28260 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2025-10-13 00:21:03,965 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.051) 0:01:06.969 ******** 2025-10-13 00:21:03,982 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm 2025-10-13 00:21:03,991 p=28260 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2025-10-13 00:21:03,991 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.026) 0:01:06.996 ******** 2025-10-13 00:21:04,031 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml 
ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 
IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: 
manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '1234567842' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: 
kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12345678' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: osp-secret SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: tests/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' 2025-10-13 00:21:04,038 p=28260 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2025-10-13 00:21:04,038 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.046) 0:01:07.043 ******** 2025-10-13 00:21:04,375 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:04,390 p=28260 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2025-10-13 00:21:04,390 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.351) 0:01:07.395 ******** 2025-10-13 00:21:04,414 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - octavia_deploy - 
octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - cinder_kuttl_run - cinder_kuttl - neutron_kuttl_run - neutron_kuttl - octavia_kuttl_run - octavia_kuttl - designate_kuttl - designate_kuttl_run - ovn_kuttl_run - ovn_kuttl - infra_kuttl_run - infra_kuttl - ironic_kuttl_run - ironic_kuttl - ironic_kuttl_crc - heat_kuttl_run - heat_kuttl - heat_kuttl_crc - ansibleee_kuttl_run - ansibleee_kuttl_cleanup - ansibleee_kuttl_prep - ansibleee_kuttl - glance_kuttl_run - glance_kuttl - manila_kuttl_run - manila_kuttl - swift_kuttl_run - swift_kuttl - horizon_kuttl_run - horizon_kuttl - openstack_kuttl_run - openstack_kuttl - mariadb_chainsaw_run - mariadb_chainsaw - horizon_prep - horizon - horizon_cleanup - horizon_deploy_prep - horizon_deploy - horizon_deploy_cleanup - heat_prep - heat - heat_cleanup - heat_deploy_prep - heat_deploy - heat_deploy_cleanup - ansibleee_prep - ansibleee - ansibleee_cleanup - baremetal_prep - baremetal - baremetal_cleanup - ceph_help - ceph - ceph_cleanup - rook_prep - rook - rook_deploy_prep - rook_deploy - rook_crc_disk - rook_cleanup - lvms - nmstate - nncp - nncp_cleanup - netattach - netattach_cleanup - metallb - metallb_config - metallb_config_cleanup - metallb_cleanup - loki - loki_cleanup - loki_deploy - loki_deploy_cleanup - netobserv - netobserv_cleanup - netobserv_deploy - netobserv_deploy_cleanup - manila_prep - manila - manila_cleanup - manila_deploy_prep - manila_deploy - manila_deploy_cleanup - telemetry_prep - telemetry - telemetry_cleanup - telemetry_deploy_prep - telemetry_deploy - telemetry_deploy_cleanup - telemetry_kuttl_run - telemetry_kuttl - swift_prep - swift - swift_cleanup - swift_deploy_prep - swift_deploy - swift_deploy_cleanup - certmanager - certmanager_cleanup - validate_marketplace - redis_deploy_prep - redis_deploy - redis_deploy_cleanup - set_slower_etcd_profile /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile: - help - download_tools - nfs - nfs_cleanup - crc - crc_cleanup - crc_scrub - crc_attach_default_interface - crc_attach_default_interface_cleanup - ipv6_lab_network - ipv6_lab_network_cleanup - ipv6_lab_nat64_router - ipv6_lab_nat64_router_cleanup - ipv6_lab_sno - ipv6_lab_sno_cleanup - ipv6_lab - ipv6_lab_cleanup - attach_default_interface - attach_default_interface_cleanup - network_isolation_bridge - network_isolation_bridge_cleanup - edpm_baremetal_compute - edpm_compute - edpm_compute_bootc - edpm_ansible_runner - edpm_computes_bgp - edpm_compute_repos - edpm_compute_cleanup - edpm_networker - edpm_networker_cleanup - edpm_deploy_instance - tripleo_deploy - standalone_deploy - standalone_sync - standalone - standalone_cleanup - standalone_snapshot - standalone_revert - cifmw_prepare - cifmw_cleanup - bmaas_network - bmaas_network_cleanup - bmaas_route_crc_and_crc_bmaas_networks - bmaas_route_crc_and_crc_bmaas_networks_cleanup - bmaas_crc_attach_network - bmaas_crc_attach_network_cleanup - bmaas_crc_baremetal_bridge - bmaas_crc_baremetal_bridge_cleanup - bmaas_baremetal_net_nad - bmaas_baremetal_net_nad_cleanup - bmaas_metallb - bmaas_metallb_cleanup - bmaas_virtual_bms - 
bmaas_virtual_bms_cleanup - bmaas_sushy_emulator - bmaas_sushy_emulator_cleanup - bmaas_sushy_emulator_wait - bmaas_generate_nodes_yaml - bmaas - bmaas_cleanup failed: false success: true 2025-10-13 00:21:04,421 p=28260 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] *** 2025-10-13 00:21:04,421 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.031) 0:01:07.426 ******** 2025-10-13 00:21:04,850 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:04,857 p=28260 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] *** 2025-10-13 00:21:04,857 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.435) 0:01:07.862 ******** 2025-10-13 00:21:04,873 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:04,887 p=28260 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] *** 2025-10-13 00:21:04,887 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.030) 0:01:07.892 ******** 2025-10-13 00:21:05,384 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:05,393 p=28260 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2025-10-13 00:21:05,393 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.506) 0:01:08.398 ******** 2025-10-13 00:21:05,417 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:05,440 p=28260 u=zuul n=ansible | TASK [Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2025-10-13 00:21:05,440 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.046) 0:01:08.445 ******** 2025-10-13 00:21:05,820 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:05,851 p=28260 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | localhost : ok=43 changed=23 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.411) 0:01:08.856 ******** 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | =============================================================================== 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 31.26s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 8.51s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Install 
openshift client ------------------------------------- 5.10s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 3.16s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 1.63s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.54s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Remove existing repos from /etc/yum.repos.d directory ------ 1.36s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 1.22s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.13s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.05s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 1.02s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.92s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 0.86s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Find existing repos from /etc/yum.repos.d directory -------- 0.70s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Run repo-setup --------------------------------------------- 0.68s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_yamls : Get environment structure ------------------------------- 0.66s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Copy generated repos to /etc/yum.repos.d directory --------- 0.55s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_yamls : Ensure directories exist -------------------------------- 0.53s 2025-10-13 00:21:05,853 p=28260 u=zuul n=ansible | discover_latest_image : Get latest image -------------------------------- 0.51s 2025-10-13 00:21:05,853 p=28260 u=zuul n=ansible | repo_setup : Run repo-setup-get-hash ------------------------------------ 0.47s 2025-10-13 00:21:07,183 p=29086 u=zuul n=ansible | PLAY [Run pre_infra hooks] ***************************************************** 2025-10-13 00:21:07,214 p=29086 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-10-13 00:21:07,214 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.045) 0:00:00.045 ******** 2025-10-13 00:21:07,306 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,316 p=29086 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-10-13 00:21:07,316 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.102) 0:00:00.147 ******** 2025-10-13 00:21:07,369 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,379 p=29086 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2025-10-13 00:21:07,379 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.062) 0:00:00.209 ******** 2025-10-13 00:21:07,428 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,469 p=29086 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2025-10-13 00:21:07,493 p=29086 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-10-13 00:21:07,493 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.114) 0:00:00.324 ******** 2025-10-13 00:21:07,581 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,591 p=29086 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 2025-10-13 00:21:07,591 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.098) 0:00:00.422 ******** 2025-10-13 00:21:07,615 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,623 p=29086 u=zuul n=ansible | TASK [Perpare OpenShift provisioner node name=openshift_provisioner_node] ****** 2025-10-13 00:21:07,623 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.032) 0:00:00.454 ******** 2025-10-13 00:21:07,642 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,677 p=29086 u=zuul n=ansible | PLAY [Prepare the platform] **************************************************** 2025-10-13 00:21:07,702 p=29086 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-10-13 00:21:07,702 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.078) 0:00:00.532 ******** 2025-10-13 00:21:07,738 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,747 p=29086 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-10-13 00:21:07,747 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.045) 0:00:00.578 ******** 2025-10-13 00:21:08,019 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,028 p=29086 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existance that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2025-10-13 00:21:08,028 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.281) 0:00:00.859 ******** 2025-10-13 00:21:08,047 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,057 p=29086 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-10-13 00:21:08,057 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:00.888 ******** 2025-10-13 00:21:08,076 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,086 p=29086 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | 
b64decode | from_yaml }}, cacheable=True] *** 2025-10-13 00:21:08,086 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:00.916 ******** 2025-10-13 00:21:08,104 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,121 p=29086 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2025-10-13 00:21:08,121 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.035) 0:00:00.952 ******** 2025-10-13 00:21:08,141 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,151 p=29086 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2025-10-13 00:21:08,151 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.030) 0:00:00.982 ******** 2025-10-13 00:21:08,171 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,180 p=29086 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2025-10-13 00:21:08,180 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.029) 0:00:01.011 ******** 2025-10-13 00:21:08,199 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,209 p=29086 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2025-10-13 00:21:08,210 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.029) 0:00:01.040 ******** 2025-10-13 00:21:08,515 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,525 p=29086 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2025-10-13 00:21:08,525 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.315) 0:00:01.356 ******** 2025-10-13 00:21:08,553 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost 2025-10-13 00:21:08,566 p=29086 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-10-13 00:21:08,566 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.041) 0:00:01.397 ******** 2025-10-13 00:21:08,586 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,595 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-10-13 00:21:08,595 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:01.426 ******** 2025-10-13 00:21:08,615 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,624 p=29086 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2025-10-13 00:21:08,624 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:01.455 ******** 2025-10-13 00:21:08,644 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,656 p=29086 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else 
cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2025-10-13 00:21:08,657 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.032) 0:00:01.487 ******** 2025-10-13 00:21:08,684 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,697 p=29086 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2025-10-13 00:21:08,697 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.040) 0:00:01.528 ******** 2025-10-13 00:21:08,942 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,953 p=29086 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2025-10-13 00:21:08,953 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.255) 0:00:01.784 ******** 2025-10-13 00:21:08,983 p=29086 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2025-10-13 00:21:08,993 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] *** 2025-10-13 00:21:08,993 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.040) 0:00:01.824 ******** 2025-10-13 00:21:09,013 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,025 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). 
users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2025-10-13 00:21:09,025 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.032) 0:00:01.856 ******** 2025-10-13 00:21:09,048 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,061 p=29086 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2025-10-13 00:21:09,061 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.035) 0:00:01.892 ******** 2025-10-13 00:21:09,080 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,090 p=29086 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2025-10-13 00:21:09,090 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.028) 0:00:01.921 ******** 2025-10-13 00:21:09,113 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:09,123 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] ***************** 2025-10-13 00:21:09,123 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.033) 0:00:01.954 ******** 2025-10-13 00:21:09,153 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost 2025-10-13 00:21:09,166 p=29086 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2025-10-13 00:21:09,166 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.042) 0:00:01.996 ******** 2025-10-13 00:21:09,181 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,194 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2025-10-13 00:21:09,194 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.028) 0:00:02.025 ******** 2025-10-13 00:21:09,236 p=29086 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log 2025-10-13 00:21:09,663 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:09,676 p=29086 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2025-10-13 00:21:09,676 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.481) 0:00:02.506 ******** 2025-10-13 00:21:09,696 p=29086 u=zuul n=ansible | ok: [localhost] => changed: false msg: 
All assertions passed 2025-10-13 00:21:09,709 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2025-10-13 00:21:09,709 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.032) 0:00:02.539 ******** 2025-10-13 00:21:10,195 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,208 p=29086 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2025-10-13 00:21:10,209 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.499) 0:00:03.039 ******** 2025-10-13 00:21:10,254 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:10,272 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2025-10-13 00:21:10,272 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.063) 0:00:03.103 ******** 2025-10-13 00:21:10,637 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,646 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2025-10-13 00:21:10,646 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.373) 0:00:03.477 ******** 2025-10-13 00:21:10,900 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,919 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2025-10-13 00:21:10,920 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.273) 0:00:03.750 ******** 2025-10-13 00:21:11,252 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:11,275 p=29086 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2025-10-13 00:21:11,275 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.355) 0:00:04.106 ******** 2025-10-13 00:21:11,348 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:11,360 p=29086 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2025-10-13 00:21:11,360 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.084) 0:00:04.190 ******** 2025-10-13 00:21:11,893 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:11,904 p=29086 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] *** 2025-10-13 00:21:11,904 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.543) 0:00:04.734 ******** 2025-10-13 00:21:12,195 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,205 p=29086 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2025-10-13 00:21:12,205 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.300) 0:00:05.035 ******** 2025-10-13 00:21:12,641 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:12,678 p=29086 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2025-10-13 00:21:12,678 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.473) 0:00:05.509 ******** 2025-10-13 00:21:12,899 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,912 p=29086 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2025-10-13 00:21:12,912 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.233) 0:00:05.743 ******** 2025-10-13 00:21:12,936 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,952 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2025-10-13 00:21:12,952 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.039) 0:00:05.782 ******** 2025-10-13 00:21:13,926 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-10-13 00:21:14,610 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-10-13 00:21:14,634 p=29086 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2025-10-13 00:21:14,634 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:14 +0000 (0:00:01.682) 0:00:07.465 ******** 2025-10-13 00:21:15,723 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:15,733 p=29086 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': 
{'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2025-10-13 00:21:15,733 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:15 +0000 (0:00:01.098) 0:00:08.564 ******** 2025-10-13 00:21:16,498 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-10-13 00:21:17,268 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-10-13 00:21:17,284 p=29086 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2025-10-13 00:21:17,284 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:17 +0000 (0:00:01.550) 0:00:10.114 ******** 2025-10-13 00:21:18,202 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:18,217 p=29086 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2025-10-13 00:21:18,217 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.933) 0:00:11.048 ******** 2025-10-13 00:21:18,272 p=29086 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log 2025-10-13 00:21:18,460 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:18,471 p=29086 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2025-10-13 00:21:18,471 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.254) 0:00:11.302 ******** 2025-10-13 00:21:18,503 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,536 p=29086 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] *** 2025-10-13 00:21:18,536 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.064) 0:00:11.367 ******** 2025-10-13 00:21:18,554 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,564 p=29086 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2025-10-13 00:21:18,564 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.027) 0:00:11.394 ******** 2025-10-13 00:21:18,586 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,596 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2025-10-13 00:21:18,596 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.032) 0:00:11.427 ******** 
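For reference, the login and registry-access steps recorded in the tasks above reduce to a handful of oc/podman commands. A minimal sketch that reproduces the sequence by hand, assuming a reachable CRC cluster: the kubeadmin password comes from the KUBEADMIN_PWD default shown earlier, while the API URL is a typical CRC placeholder, not a value captured from this run.

  # Log in and capture a session token (what the generated ci_script_000_fetch_openshift.log step does)
  oc login -u kubeadmin -p 12345678 --insecure-skip-tls-verify=true https://api.crc.testing:6443
  TOKEN=$(oc whoami -t)              # recorded by the role as cifmw_openshift_login_token
  API=$(oc whoami --show-server)     # API URL fact
  CONTEXT=$(oc whoami -c)            # kubeconfig context fact

  # Log in to the internal image registry through its default route
  REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
  podman login -u kubeadmin -p "$TOKEN" --tls-verify=false "$REGISTRY"

  # Roughly what the anonymous image-pull RoleBinding task grants, per target namespace
  oc policy add-role-to-user system:image-puller system:anonymous -n openstack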
2025-10-13 00:21:18,629 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:18,640 p=29086 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] ***
2025-10-13 00:21:18,640 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.044) 0:00:11.471 ********
2025-10-13 00:21:18,664 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:18,675 p=29086 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] ***
2025-10-13 00:21:18,675 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.034) 0:00:11.506 ********
2025-10-13 00:21:18,699 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:18,711 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] ***
2025-10-13 00:21:18,711 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.036) 0:00:11.542 ********
2025-10-13 00:21:18,742 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:18,754 p=29086 u=zuul n=ansible | TASK [openshift_setup : Metal3 tweaks _raw_params=metal3_config.yml] ***********
2025-10-13 00:21:18,754 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.048) 0:00:11.585 ********
2025-10-13 00:21:18,786 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_setup/tasks/metal3_config.yml for localhost
2025-10-13 00:21:18,803 p=29086 u=zuul n=ansible | TASK [openshift_setup : Fetch Metal3 configuration name _raw_params=oc get Provisioning -o name] ***
2025-10-13 00:21:18,803 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.048) 0:00:11.633 ********
2025-10-13 00:21:18,817 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:18,828 p=29086 u=zuul n=ansible | TASK [openshift_setup : Apply the patch to Metal3 Provisioning _raw_params=oc patch {{ _cifmw_openshift_setup_provisioning_name.stdout }} --type='json' -p='[{"op": "replace", "path": "/spec/watchAllNamespaces", "value": true}]'] ***
2025-10-13 00:21:18,828 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.025) 0:00:11.658 ********
2025-10-13 00:21:18,843 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:18,852 p=29086 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] ***
2025-10-13 00:21:18,853 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.024) 0:00:11.683 ********
2025-10-13 00:21:19,567 p=29086 u=zuul n=ansible | ok: [localhost]
2025-10-13 00:21:19,580 p=29086 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] ***
2025-10-13 00:21:19,580 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:19 +0000 (0:00:00.727) 0:00:12.411 ********
2025-10-13 00:21:20,513 p=29086 u=zuul n=ansible | changed: [localhost]
2025-10-13 00:21:20,535 p=29086 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] ***
2025-10-13 00:21:20,535 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:20 +0000 (0:00:00.954) 0:00:13.366 ********
2025-10-13 00:21:21,226 p=29086 u=zuul n=ansible | changed: [localhost]
2025-10-13 00:21:21,247 p=29086 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] ***
2025-10-13 00:21:21,247 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.711) 0:00:14.077 ********
2025-10-13 00:21:21,262 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,272 p=29086 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] ***
2025-10-13 00:21:21,273 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.025) 0:00:14.103 ********
2025-10-13 00:21:21,294 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,319 p=29086 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] ***********************
2025-10-13 00:21:21,320 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.046) 0:00:14.150 ********
2025-10-13 00:21:21,343 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,352 p=29086 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] **************************************
2025-10-13 00:21:21,352 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.032) 0:00:14.182 ********
2025-10-13 00:21:21,379 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,394 p=29086 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] *********************
2025-10-13 00:21:21,394 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.042) 0:00:14.225 ********
2025-10-13 00:21:21,418 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,431 p=29086 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] ****************
2025-10-13 00:21:21,431 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.036) 0:00:14.262 ********
2025-10-13 00:21:21,448 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,457 p=29086 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ********************************
2025-10-13 00:21:21,457 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.025) 0:00:14.288 ********
2025-10-13 00:21:21,472 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,483 p=29086 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] *******************
2025-10-13 00:21:21,483 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.026) 0:00:14.314 ********
2025-10-13 00:21:21,498 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,507 p=29086 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************
2025-10-13 00:21:21,507 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.024) 0:00:14.338 ********
2025-10-13 00:21:21,534 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,547 p=29086 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************
2025-10-13 00:21:21,547 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.039) 0:00:14.377 ********
2025-10-13 00:21:21,576 p=29086 u=zuul n=ansible | skipping: [localhost]
2025-10-13 00:21:21,591 p=29086 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2025-10-13 00:21:21,591 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.044) 0:00:14.422 ********
2025-10-13 00:21:21,665 p=29086 u=zuul n=ansible | ok: [localhost]
2025-10-13 00:21:21,674 p=29086 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.]
*** 2025-10-13 00:21:21,674 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.082) 0:00:14.505 ******** 2025-10-13 00:21:21,731 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:21,741 p=29086 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2025-10-13 00:21:21,741 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.066) 0:00:14.572 ******** 2025-10-13 00:21:21,815 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,853 p=29086 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-10-13 00:21:21,853 p=29086 u=zuul n=ansible | localhost : ok=36 changed=12 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.112) 0:00:14.684 ******** 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | =============================================================================== 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.68s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces --- 1.55s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Get internal OpenShift registry route ----------------- 1.10s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 0.95s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Wait for the image registry to be ready --------------- 0.93s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.73s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Patch samples registry configuration ------------------ 0.71s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Create the openshift_login parameters file ------------ 0.54s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch new OpenShift access token ---------------------- 0.50s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift token --------------------------------- 0.48s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Append the KUBECONFIG to the install yamls parameters --- 0.47s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift API URL ------------------------------- 0.37s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift current user -------------------------- 0.36s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Ensure output directory exists ------------------------ 0.32s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Read the install yamls parameters file ---------------- 0.30s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | networking_mapper : Check for Networking Environment Definition file existence --- 0.28s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift kubeconfig context -------------------- 0.27s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Check if kubeconfig exists ---------------------------- 0.26s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Login into OpenShift internal registry 
---------------- 0.25s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Ensure output directory exists ------------------------ 0.23s 2025-10-13 00:21:38,687 p=29689 u=zuul n=ansible | Starting galaxy collection install process 2025-10-13 00:21:38,713 p=29689 u=zuul n=ansible | Process install dependency map 2025-10-13 00:21:53,440 p=29689 u=zuul n=ansible | Starting collection install process 2025-10-13 00:21:53,440 p=29689 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+07f6a4f6' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+07f6a4f6 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | cifmw.general:1.0.0+07f6a4f6 was installed successfully 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2025-10-13 00:21:55,423 p=29689 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2025-10-13 00:21:55,423 p=29689 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2025-10-13 00:21:55,424 p=29689 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to 
'/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2025-10-13 00:21:55,694 p=29689 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2025-10-13 00:21:55,695 p=29689 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2025-10-13 00:21:55,695 p=29689 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2025-10-13 00:21:56,034 p=29689 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2025-10-13 00:21:56,035 p=29689 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2025-10-13 00:21:56,035 p=29689 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2025-10-13 00:21:56,354 p=29689 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2025-10-13 00:21:56,355 p=29689 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2025-10-13 00:21:56,355 p=29689 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2025-10-13 00:21:56,580 p=29689 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2025-10-13 00:21:56,580 p=29689 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/0000755000175000017500000000000015073043212025551 5ustar 
zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/must-gather.logs0000644000175000017500000001033215073043212030676 0ustar zuulzuul[must-gather ] OUT 2025-10-13T00:23:36.73835945Z Using must-gather plug-in image: quay.io/openstack-k8s-operators/openstack-must-gather:latest When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304 ClientVersion: 4.19.14 ClusterVersion: Stable at "4.16.0" ClusterOperators: clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13 clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused clusteroperator/cloud-credential is missing clusteroperator/cluster-autoscaler is missing clusteroperator/insights is missing clusteroperator/monitoring is missing clusteroperator/storage is missing [must-gather ] OUT 2025-10-13T00:23:36.787875107Z namespace/openshift-must-gather-xwq75 created [must-gather ] OUT 2025-10-13T00:23:36.794218703Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-xmtph created [must-gather ] OUT 2025-10-13T00:23:38.431690265Z namespace/openshift-must-gather-xwq75 deleted Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304 ClientVersion: 4.19.14 ClusterVersion: Stable at "4.16.0" ClusterOperators: clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13 clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused clusteroperator/cloud-credential is missing clusteroperator/cluster-autoscaler is missing clusteroperator/insights is missing clusteroperator/monitoring is missing clusteroperator/storage is missing home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/timestamp0000644000175000017500000000015615073043212027501 0ustar zuulzuul2025-10-13 00:23:36.798176026 +0000 UTC m=+0.204457747 2025-10-13 00:23:38.428332678 +0000 UTC m=+1.834614379 home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/event-filter.html0000644000175000017500000000641015073043212031044 0ustar zuulzuul Events
[event-filter.html: empty events table with columns Time, Namespace, Component, RelatedObject, Reason, Message]
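For reference, the openstack-must-gather capture recorded above (including the empty events table in event-filter.html) is the output of an oc adm must-gather run. A minimal sketch of that kind of invocation is shown below, assuming the plug-in image named in must-gather.logs; the --dest-dir value is an illustrative placeholder, not taken from the job definition.

    # Sketch only: gather OpenStack operator diagnostics using the plug-in image
    # reported in must-gather.logs above. The destination directory is assumed.
    oc adm must-gather \
        --image=quay.io/openstack-k8s-operators/openstack-must-gather:latest \
        --dest-dir=./openstack-must-gather

The namespace/openshift-must-gather-xwq75 and clusterrolebinding/must-gather-xmtph entries in must-gather.logs are created and cleaned up by oc itself as part of this workflow.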
home/zuul/zuul-output/logs/ci-framework-data/logs/2025-10-13_00-23/0000775000175000017500000000000015073043272023104 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/2025-10-13_00-23/ansible.log0000666000175000017500000040021415073043044025224 0ustar zuulzuul2025-10-13 00:19:31,046 p=27673 u=zuul n=ansible | Starting galaxy collection install process 2025-10-13 00:19:31,047 p=27673 u=zuul n=ansible | Process install dependency map 2025-10-13 00:19:46,129 p=27673 u=zuul n=ansible | Starting collection install process 2025-10-13 00:19:46,129 p=27673 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+07f6a4f6' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+07f6a4f6 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | cifmw.general:1.0.0+07f6a4f6 was installed successfully 2025-10-13 00:19:46,602 p=27673 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-10-13 00:19:46,657 p=27673 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2025-10-13 00:19:47,382 p=27673 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2025-10-13 00:19:47,434 p=27673 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2025-10-13 00:19:47,526 p=27673 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2025-10-13 00:19:47,527 p=27673 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2025-10-13 00:19:47,527 p=27673 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2025-10-13 00:19:47,549 p=27673 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | community.crypto:2.22.3 was installed 
successfully 2025-10-13 00:19:47,695 p=27673 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2025-10-13 00:19:47,809 p=27673 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2025-10-13 00:19:47,874 p=27673 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2025-10-13 00:19:47,891 p=27673 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2025-10-13 00:19:48,128 p=27673 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2025-10-13 00:19:48,128 p=27673 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2025-10-13 00:19:48,129 p=27673 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2025-10-13 00:19:48,388 p=27673 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2025-10-13 00:19:48,420 p=27673 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2025-10-13 00:19:48,449 p=27673 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2025-10-13 00:19:48,534 p=27673 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2025-10-13 00:19:48,534 p=27673 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully 2025-10-13 00:19:57,011 p=28260 u=zuul n=ansible | PLAY [Bootstrap playbook] 
****************************************************** 2025-10-13 00:19:57,028 p=28260 u=zuul n=ansible | TASK [Gathering Facts ] ******************************************************** 2025-10-13 00:19:57,028 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:57 +0000 (0:00:00.033) 0:00:00.033 ******** 2025-10-13 00:19:58,125 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,159 p=28260 u=zuul n=ansible | TASK [Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] *** 2025-10-13 00:19:58,159 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:01.131) 0:00:01.164 ******** 2025-10-13 00:19:58,193 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,202 p=28260 u=zuul n=ansible | TASK [Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] *** 2025-10-13 00:19:58,202 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.042) 0:00:01.207 ******** 2025-10-13 00:19:58,253 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,259 p=28260 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] *** 2025-10-13 00:19:58,260 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.057) 0:00:01.264 ******** 2025-10-13 00:19:58,623 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:19:58,631 p=28260 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] *** 2025-10-13 00:19:58,631 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.371) 0:00:01.636 ******** 2025-10-13 00:19:58,654 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,663 p=28260 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] *** 2025-10-13 00:19:58,663 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.031) 0:00:01.668 ******** 2025-10-13 00:19:58,685 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,692 p=28260 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] *** 2025-10-13 00:19:58,692 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.029) 0:00:01.697 ******** 2025-10-13 00:19:58,723 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:19:58,732 p=28260 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] *************** 2025-10-13 00:19:58,732 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:19:58 +0000 (0:00:00.039) 0:00:01.737 ******** 2025-10-13 00:20:00,253 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:00,276 p=28260 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] *** 2025-10-13 00:20:00,276 p=28260 u=zuul n=ansible | 
Monday 13 October 2025 00:20:00 +0000 (0:00:01.544) 0:00:03.281 ******** 2025-10-13 00:20:00,537 p=28260 u=zuul n=ansible | changed: [localhost] => (item=tmp) 2025-10-13 00:20:00,986 p=28260 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories) 2025-10-13 00:20:01,282 p=28260 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup) 2025-10-13 00:20:01,294 p=28260 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] *** 2025-10-13 00:20:01,295 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:01 +0000 (0:00:01.018) 0:00:04.299 ******** 2025-10-13 00:20:02,907 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:02,920 p=28260 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] *** 2025-10-13 00:20:02,921 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:02 +0000 (0:00:01.625) 0:00:05.925 ******** 2025-10-13 00:20:06,072 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:06,080 p=28260 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] *** 2025-10-13 00:20:06,081 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:06 +0000 (0:00:03.159) 0:00:09.085 ******** 2025-10-13 00:20:14,582 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:14,595 p=28260 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] *** 2025-10-13 00:20:14,595 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:14 +0000 (0:00:08.514) 0:00:17.600 ******** 2025-10-13 00:20:15,509 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:15,519 p=28260 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] *** 2025-10-13 00:20:15,519 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:15 +0000 (0:00:00.924) 0:00:18.524 ******** 2025-10-13 00:20:15,542 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:15,551 p=28260 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] *** 2025-10-13 00:20:15,551 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:15 +0000 (0:00:00.031) 0:00:18.556 ******** 2025-10-13 00:20:16,215 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:16,227 p=28260 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ 
cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] *** 2025-10-13 00:20:16,227 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.676) 0:00:19.232 ******** 2025-10-13 00:20:16,277 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,286 p=28260 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] *** 2025-10-13 00:20:16,286 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.058) 0:00:19.291 ******** 2025-10-13 00:20:16,339 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,346 p=28260 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] *** 2025-10-13 00:20:16,346 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.059) 0:00:19.351 ******** 2025-10-13 00:20:16,374 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:16,381 p=28260 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] *** 2025-10-13 00:20:16,381 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.034) 0:00:19.386 ******** 2025-10-13 00:20:16,841 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:16,855 p=28260 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2025-10-13 00:20:16,855 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:16 +0000 (0:00:00.473) 0:00:19.860 ******** 2025-10-13 00:20:17,707 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:17,713 p=28260 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] *** 2025-10-13 00:20:17,713 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.858) 0:00:20.718 ******** 2025-10-13 00:20:17,733 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,740 p=28260 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] *** 2025-10-13 00:20:17,740 
p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.027) 0:00:20.745 ******** 2025-10-13 00:20:17,760 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,767 p=28260 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] *** 2025-10-13 00:20:17,767 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.026) 0:00:20.772 ******** 2025-10-13 00:20:17,786 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,792 p=28260 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] *** 2025-10-13 00:20:17,792 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.025) 0:00:20.797 ******** 2025-10-13 00:20:17,817 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:17,823 p=28260 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] *** 2025-10-13 00:20:17,823 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.030) 0:00:20.828 ******** 2025-10-13 00:20:17,845 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,852 p=28260 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] *** 2025-10-13 00:20:17,852 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.029) 0:00:20.857 ******** 2025-10-13 00:20:17,877 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,883 p=28260 u=zuul n=ansible | TASK [Download the RPM name=krb_request] *************************************** 2025-10-13 00:20:17,884 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.031) 0:00:20.888 ******** 2025-10-13 00:20:17,909 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,916 p=28260 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] *** 2025-10-13 00:20:17,916 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.032) 0:00:20.921 ******** 2025-10-13 00:20:17,941 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,947 p=28260 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] *** 2025-10-13 00:20:17,947 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.031) 0:00:20.952 ******** 2025-10-13 00:20:17,963 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:17,974 p=28260 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] *** 2025-10-13 00:20:17,974 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:17 +0000 (0:00:00.027) 0:00:20.979 ******** 2025-10-13 00:20:17,989 p=28260 u=zuul n=ansible | skipping: 
[localhost] 2025-10-13 00:20:18,000 p=28260 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] *** 2025-10-13 00:20:18,000 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.025) 0:00:21.005 ******** 2025-10-13 00:20:18,023 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:18,032 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] *** 2025-10-13 00:20:18,033 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.032) 0:00:21.037 ******** 2025-10-13 00:20:18,449 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:18,456 p=28260 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] *** 2025-10-13 00:20:18,456 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.423) 0:00:21.461 ******** 2025-10-13 00:20:18,860 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:18,873 p=28260 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] *** 2025-10-13 00:20:18,873 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:18 +0000 (0:00:00.416) 0:00:21.878 ******** 2025-10-13 00:20:19,280 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:19,286 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] *** 2025-10-13 00:20:19,287 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.413) 0:00:22.291 ******** 2025-10-13 00:20:19,316 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,323 p=28260 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] *** 2025-10-13 00:20:19,323 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.036) 0:00:22.328 ******** 2025-10-13 00:20:19,367 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,372 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] *** 2025-10-13 00:20:19,373 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.049) 0:00:22.377 ******** 2025-10-13 00:20:19,410 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,416 p=28260 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] *** 2025-10-13 00:20:19,416 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.043) 0:00:22.421 ******** 2025-10-13 00:20:19,437 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,443 p=28260 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ 
cifmw_repo_setup_output }}/{{ _comp_repo }}] *** 2025-10-13 00:20:19,443 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.027) 0:00:22.448 ******** 2025-10-13 00:20:19,481 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,487 p=28260 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] *** 2025-10-13 00:20:19,487 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.043) 0:00:22.492 ******** 2025-10-13 00:20:19,508 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:20:19,515 p=28260 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] *** 2025-10-13 00:20:19,515 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:19 +0000 (0:00:00.027) 0:00:22.519 ******** 2025-10-13 00:20:20,207 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:20,215 p=28260 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] *** 2025-10-13 00:20:20,216 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:20 +0000 (0:00:00.700) 0:00:23.220 ******** 2025-10-13 00:20:20,597 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo) 2025-10-13 00:20:21,567 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo) 2025-10-13 00:20:21,574 p=28260 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] *** 2025-10-13 00:20:21,574 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:21 +0000 (0:00:01.358) 0:00:24.579 ******** 2025-10-13 00:20:22,779 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:22,791 p=28260 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] *** 2025-10-13 00:20:22,792 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:22 +0000 (0:00:01.217) 0:00:25.796 ******** 2025-10-13 00:20:23,314 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:23,341 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] *** 2025-10-13 00:20:23,341 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.549) 0:00:26.346 ******** 2025-10-13 00:20:23,400 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml) 2025-10-13 00:20:23,415 p=28260 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] ********* 2025-10-13 00:20:23,415 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.073) 0:00:26.420 ******** 2025-10-13 00:20:23,446 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages: - bash-completion - ca-certificates - git-core - make - tar - tmux - python3-pip 2025-10-13 00:20:23,463 p=28260 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] *** 2025-10-13 00:20:23,463 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:23 +0000 (0:00:00.048) 0:00:26.468 ******** 2025-10-13 00:20:54,714 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:54,722 p=28260 
u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] *** 2025-10-13 00:20:54,722 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:54 +0000 (0:00:31.258) 0:00:57.727 ******** 2025-10-13 00:20:54,914 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:20:54,920 p=28260 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] *** 2025-10-13 00:20:54,920 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:54 +0000 (0:00:00.198) 0:00:57.925 ******** 2025-10-13 00:20:55,105 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:20:55,112 p=28260 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] *** 2025-10-13 00:20:55,112 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:20:55 +0000 (0:00:00.192) 0:00:58.117 ******** 2025-10-13 00:21:00,193 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:00,208 p=28260 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2025-10-13 00:21:00,208 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:05.096) 0:01:03.213 ******** 2025-10-13 00:21:00,231 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:00,246 p=28260 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2025-10-13 00:21:00,246 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:00.037) 0:01:03.251 ******** 2025-10-13 00:21:00,681 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:00,689 p=28260 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2025-10-13 00:21:00,689 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:00 +0000 (0:00:00.443) 0:01:03.694 ******** 2025-10-13 00:21:01,077 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:01,092 p=28260 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2025-10-13 00:21:01,092 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.402) 0:01:04.097 ******** 2025-10-13 00:21:01,117 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,131 p=28260 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2025-10-13 00:21:01,131 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.038) 0:01:04.136 ******** 2025-10-13 00:21:01,155 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,169 p=28260 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. 
name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2025-10-13 00:21:01,169 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.038) 0:01:04.174 ******** 2025-10-13 00:21:01,191 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,205 p=28260 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2025-10-13 00:21:01,205 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.035) 0:01:04.210 ******** 2025-10-13 00:21:01,224 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,232 p=28260 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2025-10-13 00:21:01,232 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.027) 0:01:04.237 ******** 2025-10-13 00:21:01,249 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,257 p=28260 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2025-10-13 00:21:01,257 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.025) 0:01:04.262 ******** 2025-10-13 00:21:01,280 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:01,288 p=28260 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2025-10-13 00:21:01,288 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:01 +0000 (0:00:00.030) 0:01:04.293 ******** 2025-10-13 00:21:01,517 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2025-10-13 00:21:01,711 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs) 2025-10-13 00:21:01,906 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp) 2025-10-13 00:21:02,134 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes) 2025-10-13 00:21:02,311 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-10-13 00:21:02,336 p=28260 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] *** 2025-10-13 00:21:02,336 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:01.047) 0:01:05.341 ******** 2025-10-13 00:21:02,451 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2025-10-13 00:21:02,452 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:00.115) 0:01:05.456 ******** 2025-10-13 00:21:02,662 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2025-10-13 00:21:02,818 p=28260 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2025-10-13 00:21:02,971 p=28260 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-10-13 00:21:02,978 p=28260 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 
2025-10-13 00:21:02,978 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:02 +0000 (0:00:00.526) 0:01:05.983 ******** 2025-10-13 00:21:03,005 p=28260 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2025-10-13 00:21:03,005 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.027) 0:01:06.010 ******** 2025-10-13 00:21:03,026 p=28260 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-10-13 00:21:03,028 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,033 p=28260 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2025-10-13 00:21:03,033 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.028) 0:01:06.038 ******** 2025-10-13 00:21:03,055 p=28260 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-10-13 00:21:03,057 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,067 p=28260 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2025-10-13 00:21:03,067 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.033) 0:01:06.072 ******** 2025-10-13 00:21:03,110 p=28260 u=zuul n=ansible | ok: [localhost] => (item={}) 2025-10-13 00:21:03,117 p=28260 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2025-10-13 00:21:03,117 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.050) 0:01:06.122 ******** 2025-10-13 00:21:03,147 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,152 p=28260 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2025-10-13 00:21:03,153 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.035) 0:01:06.157 ******** 
2025-10-13 00:21:03,800 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,812 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2025-10-13 00:21:03,813 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.660) 0:01:06.817 ******** 2025-10-13 00:21:03,832 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,845 p=28260 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2025-10-13 00:21:03,845 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.032) 0:01:06.850 ******** 2025-10-13 00:21:03,865 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,885 p=28260 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2025-10-13 00:21:03,885 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.039) 0:01:06.890 ******** 2025-10-13 00:21:03,904 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:03,913 p=28260 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2025-10-13 00:21:03,913 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.028) 0:01:06.918 ******** 2025-10-13 00:21:03,954 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:03,965 p=28260 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2025-10-13 00:21:03,965 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.051) 0:01:06.969 ******** 2025-10-13 00:21:03,982 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm 2025-10-13 00:21:03,991 p=28260 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2025-10-13 00:21:03,991 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:03 +0000 (0:00:00.026) 0:01:06.996 ******** 2025-10-13 00:21:04,031 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml 
ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 
IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: 
manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '1234567842' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: 
kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12345678' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: osp-secret SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: tests/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' 2025-10-13 00:21:04,038 p=28260 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2025-10-13 00:21:04,038 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.046) 0:01:07.043 ******** 2025-10-13 00:21:04,375 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:04,390 p=28260 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2025-10-13 00:21:04,390 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.351) 0:01:07.395 ******** 2025-10-13 00:21:04,414 p=28260 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - octavia_deploy - 
octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - cinder_kuttl_run - cinder_kuttl - neutron_kuttl_run - neutron_kuttl - octavia_kuttl_run - octavia_kuttl - designate_kuttl - designate_kuttl_run - ovn_kuttl_run - ovn_kuttl - infra_kuttl_run - infra_kuttl - ironic_kuttl_run - ironic_kuttl - ironic_kuttl_crc - heat_kuttl_run - heat_kuttl - heat_kuttl_crc - ansibleee_kuttl_run - ansibleee_kuttl_cleanup - ansibleee_kuttl_prep - ansibleee_kuttl - glance_kuttl_run - glance_kuttl - manila_kuttl_run - manila_kuttl - swift_kuttl_run - swift_kuttl - horizon_kuttl_run - horizon_kuttl - openstack_kuttl_run - openstack_kuttl - mariadb_chainsaw_run - mariadb_chainsaw - horizon_prep - horizon - horizon_cleanup - horizon_deploy_prep - horizon_deploy - horizon_deploy_cleanup - heat_prep - heat - heat_cleanup - heat_deploy_prep - heat_deploy - heat_deploy_cleanup - ansibleee_prep - ansibleee - ansibleee_cleanup - baremetal_prep - baremetal - baremetal_cleanup - ceph_help - ceph - ceph_cleanup - rook_prep - rook - rook_deploy_prep - rook_deploy - rook_crc_disk - rook_cleanup - lvms - nmstate - nncp - nncp_cleanup - netattach - netattach_cleanup - metallb - metallb_config - metallb_config_cleanup - metallb_cleanup - loki - loki_cleanup - loki_deploy - loki_deploy_cleanup - netobserv - netobserv_cleanup - netobserv_deploy - netobserv_deploy_cleanup - manila_prep - manila - manila_cleanup - manila_deploy_prep - manila_deploy - manila_deploy_cleanup - telemetry_prep - telemetry - telemetry_cleanup - telemetry_deploy_prep - telemetry_deploy - telemetry_deploy_cleanup - telemetry_kuttl_run - telemetry_kuttl - swift_prep - swift - swift_cleanup - swift_deploy_prep - swift_deploy - swift_deploy_cleanup - certmanager - certmanager_cleanup - validate_marketplace - redis_deploy_prep - redis_deploy - redis_deploy_cleanup - set_slower_etcd_profile /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile: - help - download_tools - nfs - nfs_cleanup - crc - crc_cleanup - crc_scrub - crc_attach_default_interface - crc_attach_default_interface_cleanup - ipv6_lab_network - ipv6_lab_network_cleanup - ipv6_lab_nat64_router - ipv6_lab_nat64_router_cleanup - ipv6_lab_sno - ipv6_lab_sno_cleanup - ipv6_lab - ipv6_lab_cleanup - attach_default_interface - attach_default_interface_cleanup - network_isolation_bridge - network_isolation_bridge_cleanup - edpm_baremetal_compute - edpm_compute - edpm_compute_bootc - edpm_ansible_runner - edpm_computes_bgp - edpm_compute_repos - edpm_compute_cleanup - edpm_networker - edpm_networker_cleanup - edpm_deploy_instance - tripleo_deploy - standalone_deploy - standalone_sync - standalone - standalone_cleanup - standalone_snapshot - standalone_revert - cifmw_prepare - cifmw_cleanup - bmaas_network - bmaas_network_cleanup - bmaas_route_crc_and_crc_bmaas_networks - bmaas_route_crc_and_crc_bmaas_networks_cleanup - bmaas_crc_attach_network - bmaas_crc_attach_network_cleanup - bmaas_crc_baremetal_bridge - bmaas_crc_baremetal_bridge_cleanup - bmaas_baremetal_net_nad - bmaas_baremetal_net_nad_cleanup - bmaas_metallb - bmaas_metallb_cleanup - bmaas_virtual_bms - 
bmaas_virtual_bms_cleanup - bmaas_sushy_emulator - bmaas_sushy_emulator_cleanup - bmaas_sushy_emulator_wait - bmaas_generate_nodes_yaml - bmaas - bmaas_cleanup failed: false success: true 2025-10-13 00:21:04,421 p=28260 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] *** 2025-10-13 00:21:04,421 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.031) 0:01:07.426 ******** 2025-10-13 00:21:04,850 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:04,857 p=28260 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] *** 2025-10-13 00:21:04,857 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.435) 0:01:07.862 ******** 2025-10-13 00:21:04,873 p=28260 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:04,887 p=28260 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] *** 2025-10-13 00:21:04,887 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:04 +0000 (0:00:00.030) 0:01:07.892 ******** 2025-10-13 00:21:05,384 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:05,393 p=28260 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2025-10-13 00:21:05,393 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.506) 0:01:08.398 ******** 2025-10-13 00:21:05,417 p=28260 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:05,440 p=28260 u=zuul n=ansible | TASK [Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2025-10-13 00:21:05,440 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.046) 0:01:08.445 ******** 2025-10-13 00:21:05,820 p=28260 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:05,851 p=28260 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | localhost : ok=43 changed=23 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | Monday 13 October 2025 00:21:05 +0000 (0:00:00.411) 0:01:08.856 ******** 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | =============================================================================== 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 31.26s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 8.51s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Install 
openshift client ------------------------------------- 5.10s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 3.16s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 1.63s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.54s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Remove existing repos from /etc/yum.repos.d directory ------ 1.36s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 1.22s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.13s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.05s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 1.02s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.92s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 0.86s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Find existing repos from /etc/yum.repos.d directory -------- 0.70s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Run repo-setup --------------------------------------------- 0.68s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_yamls : Get environment structure ------------------------------- 0.66s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | repo_setup : Copy generated repos to /etc/yum.repos.d directory --------- 0.55s 2025-10-13 00:21:05,852 p=28260 u=zuul n=ansible | install_yamls : Ensure directories exist -------------------------------- 0.53s 2025-10-13 00:21:05,853 p=28260 u=zuul n=ansible | discover_latest_image : Get latest image -------------------------------- 0.51s 2025-10-13 00:21:05,853 p=28260 u=zuul n=ansible | repo_setup : Run repo-setup-get-hash ------------------------------------ 0.47s 2025-10-13 00:21:07,183 p=29086 u=zuul n=ansible | PLAY [Run pre_infra hooks] ***************************************************** 2025-10-13 00:21:07,214 p=29086 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-10-13 00:21:07,214 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.045) 0:00:00.045 ******** 2025-10-13 00:21:07,306 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,316 p=29086 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-10-13 00:21:07,316 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.102) 0:00:00.147 ******** 2025-10-13 00:21:07,369 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,379 p=29086 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2025-10-13 00:21:07,379 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.062) 0:00:00.209 ******** 2025-10-13 00:21:07,428 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,469 p=29086 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2025-10-13 00:21:07,493 p=29086 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-10-13 00:21:07,493 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.114) 0:00:00.324 ******** 2025-10-13 00:21:07,581 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,591 p=29086 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 2025-10-13 00:21:07,591 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.098) 0:00:00.422 ******** 2025-10-13 00:21:07,615 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,623 p=29086 u=zuul n=ansible | TASK [Perpare OpenShift provisioner node name=openshift_provisioner_node] ****** 2025-10-13 00:21:07,623 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.032) 0:00:00.454 ******** 2025-10-13 00:21:07,642 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:07,677 p=29086 u=zuul n=ansible | PLAY [Prepare the platform] **************************************************** 2025-10-13 00:21:07,702 p=29086 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-10-13 00:21:07,702 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.078) 0:00:00.532 ******** 2025-10-13 00:21:07,738 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:07,747 p=29086 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-10-13 00:21:07,747 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:07 +0000 (0:00:00.045) 0:00:00.578 ******** 2025-10-13 00:21:08,019 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,028 p=29086 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existance that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2025-10-13 00:21:08,028 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.281) 0:00:00.859 ******** 2025-10-13 00:21:08,047 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,057 p=29086 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-10-13 00:21:08,057 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:00.888 ******** 2025-10-13 00:21:08,076 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,086 p=29086 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | 
b64decode | from_yaml }}, cacheable=True] *** 2025-10-13 00:21:08,086 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:00.916 ******** 2025-10-13 00:21:08,104 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,121 p=29086 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2025-10-13 00:21:08,121 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.035) 0:00:00.952 ******** 2025-10-13 00:21:08,141 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,151 p=29086 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2025-10-13 00:21:08,151 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.030) 0:00:00.982 ******** 2025-10-13 00:21:08,171 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,180 p=29086 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2025-10-13 00:21:08,180 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.029) 0:00:01.011 ******** 2025-10-13 00:21:08,199 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,209 p=29086 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2025-10-13 00:21:08,210 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.029) 0:00:01.040 ******** 2025-10-13 00:21:08,515 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,525 p=29086 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2025-10-13 00:21:08,525 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.315) 0:00:01.356 ******** 2025-10-13 00:21:08,553 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost 2025-10-13 00:21:08,566 p=29086 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-10-13 00:21:08,566 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.041) 0:00:01.397 ******** 2025-10-13 00:21:08,586 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,595 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-10-13 00:21:08,595 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:01.426 ******** 2025-10-13 00:21:08,615 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,624 p=29086 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2025-10-13 00:21:08,624 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.028) 0:00:01.455 ******** 2025-10-13 00:21:08,644 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:08,656 p=29086 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else 
cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2025-10-13 00:21:08,657 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.032) 0:00:01.487 ******** 2025-10-13 00:21:08,684 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,697 p=29086 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2025-10-13 00:21:08,697 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.040) 0:00:01.528 ******** 2025-10-13 00:21:08,942 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:08,953 p=29086 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2025-10-13 00:21:08,953 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.255) 0:00:01.784 ******** 2025-10-13 00:21:08,983 p=29086 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2025-10-13 00:21:08,993 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] *** 2025-10-13 00:21:08,993 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:08 +0000 (0:00:00.040) 0:00:01.824 ******** 2025-10-13 00:21:09,013 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,025 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). 
users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2025-10-13 00:21:09,025 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.032) 0:00:01.856 ******** 2025-10-13 00:21:09,048 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,061 p=29086 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2025-10-13 00:21:09,061 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.035) 0:00:01.892 ******** 2025-10-13 00:21:09,080 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,090 p=29086 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2025-10-13 00:21:09,090 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.028) 0:00:01.921 ******** 2025-10-13 00:21:09,113 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:09,123 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] ***************** 2025-10-13 00:21:09,123 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.033) 0:00:01.954 ******** 2025-10-13 00:21:09,153 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost 2025-10-13 00:21:09,166 p=29086 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2025-10-13 00:21:09,166 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.042) 0:00:01.996 ******** 2025-10-13 00:21:09,181 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:09,194 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2025-10-13 00:21:09,194 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.028) 0:00:02.025 ******** 2025-10-13 00:21:09,236 p=29086 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log 2025-10-13 00:21:09,663 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:09,676 p=29086 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2025-10-13 00:21:09,676 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.481) 0:00:02.506 ******** 2025-10-13 00:21:09,696 p=29086 u=zuul n=ansible | ok: [localhost] => changed: false msg: 
All assertions passed 2025-10-13 00:21:09,709 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2025-10-13 00:21:09,709 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:09 +0000 (0:00:00.032) 0:00:02.539 ******** 2025-10-13 00:21:10,195 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,208 p=29086 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2025-10-13 00:21:10,209 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.499) 0:00:03.039 ******** 2025-10-13 00:21:10,254 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:10,272 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2025-10-13 00:21:10,272 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.063) 0:00:03.103 ******** 2025-10-13 00:21:10,637 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,646 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2025-10-13 00:21:10,646 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.373) 0:00:03.477 ******** 2025-10-13 00:21:10,900 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:10,919 p=29086 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2025-10-13 00:21:10,920 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:10 +0000 (0:00:00.273) 0:00:03.750 ******** 2025-10-13 00:21:11,252 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:11,275 p=29086 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2025-10-13 00:21:11,275 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.355) 0:00:04.106 ******** 2025-10-13 00:21:11,348 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:11,360 p=29086 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2025-10-13 00:21:11,360 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.084) 0:00:04.190 ******** 2025-10-13 00:21:11,893 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:11,904 p=29086 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] *** 2025-10-13 00:21:11,904 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:11 +0000 (0:00:00.543) 0:00:04.734 ******** 2025-10-13 00:21:12,195 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,205 p=29086 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2025-10-13 00:21:12,205 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.300) 0:00:05.035 ******** 2025-10-13 00:21:12,641 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:12,678 p=29086 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2025-10-13 00:21:12,678 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.473) 0:00:05.509 ******** 2025-10-13 00:21:12,899 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,912 p=29086 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2025-10-13 00:21:12,912 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.233) 0:00:05.743 ******** 2025-10-13 00:21:12,936 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:12,952 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2025-10-13 00:21:12,952 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:12 +0000 (0:00:00.039) 0:00:05.782 ******** 2025-10-13 00:21:13,926 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-10-13 00:21:14,610 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-10-13 00:21:14,634 p=29086 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2025-10-13 00:21:14,634 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:14 +0000 (0:00:01.682) 0:00:07.465 ******** 2025-10-13 00:21:15,723 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:15,733 p=29086 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': 
{'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2025-10-13 00:21:15,733 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:15 +0000 (0:00:01.098) 0:00:08.564 ******** 2025-10-13 00:21:16,498 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-10-13 00:21:17,268 p=29086 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-10-13 00:21:17,284 p=29086 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2025-10-13 00:21:17,284 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:17 +0000 (0:00:01.550) 0:00:10.114 ******** 2025-10-13 00:21:18,202 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:18,217 p=29086 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2025-10-13 00:21:18,217 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.933) 0:00:11.048 ******** 2025-10-13 00:21:18,272 p=29086 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log 2025-10-13 00:21:18,460 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:18,471 p=29086 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2025-10-13 00:21:18,471 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.254) 0:00:11.302 ******** 2025-10-13 00:21:18,503 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,536 p=29086 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] *** 2025-10-13 00:21:18,536 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.064) 0:00:11.367 ******** 2025-10-13 00:21:18,554 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,564 p=29086 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2025-10-13 00:21:18,564 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.027) 0:00:11.394 ******** 2025-10-13 00:21:18,586 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,596 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2025-10-13 00:21:18,596 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.032) 0:00:11.427 ******** 
2025-10-13 00:21:18,629 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,640 p=29086 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2025-10-13 00:21:18,640 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.044) 0:00:11.471 ******** 2025-10-13 00:21:18,664 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,675 p=29086 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2025-10-13 00:21:18,675 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.034) 0:00:11.506 ******** 2025-10-13 00:21:18,699 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,711 p=29086 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] *** 2025-10-13 00:21:18,711 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.036) 0:00:11.542 ******** 2025-10-13 00:21:18,742 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,754 p=29086 u=zuul n=ansible | TASK [openshift_setup : Metal3 tweaks _raw_params=metal3_config.yml] *********** 2025-10-13 00:21:18,754 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.042) 0:00:11.585 ******** 2025-10-13 00:21:18,786 p=29086 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_setup/tasks/metal3_config.yml for localhost 2025-10-13 00:21:18,803 p=29086 u=zuul n=ansible | TASK [openshift_setup : Fetch Metal3 configuration name _raw_params=oc get Provisioning -o name] *** 2025-10-13 00:21:18,803 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.048) 0:00:11.633 ******** 2025-10-13 00:21:18,817 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,828 p=29086 u=zuul n=ansible | TASK [openshift_setup : Apply the patch to Metal3 Provisioning _raw_params=oc patch {{ _cifmw_openshift_setup_provisioning_name.stdout }} --type='json' -p='[{"op": "replace", "path": "/spec/watchAllNamespaces", "value": true}]'] *** 2025-10-13 00:21:18,828 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.025) 0:00:11.658 ******** 2025-10-13 00:21:18,843 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:18,852 p=29086 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ 
cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] *** 2025-10-13 00:21:18,853 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:18 +0000 (0:00:00.024) 0:00:11.683 ******** 2025-10-13 00:21:19,567 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:19,580 p=29086 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] *** 2025-10-13 00:21:19,580 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:19 +0000 (0:00:00.727) 0:00:12.411 ******** 2025-10-13 00:21:20,513 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:20,535 p=29086 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] *** 2025-10-13 00:21:20,535 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:20 +0000 (0:00:00.954) 0:00:13.366 ******** 2025-10-13 00:21:21,226 p=29086 u=zuul n=ansible | changed: [localhost] 2025-10-13 00:21:21,247 p=29086 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] *** 2025-10-13 00:21:21,247 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.711) 0:00:14.077 ******** 2025-10-13 00:21:21,262 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,272 p=29086 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] *** 2025-10-13 00:21:21,273 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.025) 0:00:14.103 ******** 2025-10-13 00:21:21,294 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,319 p=29086 u=zuul n=ansible | TASK [Deploy Observability operator. 
name=openshift_obs] *********************** 2025-10-13 00:21:21,320 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.046) 0:00:14.150 ******** 2025-10-13 00:21:21,343 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,352 p=29086 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2025-10-13 00:21:21,352 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.032) 0:00:14.182 ******** 2025-10-13 00:21:21,379 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,394 p=29086 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2025-10-13 00:21:21,394 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.042) 0:00:14.225 ******** 2025-10-13 00:21:21,418 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,431 p=29086 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2025-10-13 00:21:21,431 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.036) 0:00:14.262 ******** 2025-10-13 00:21:21,448 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,457 p=29086 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 2025-10-13 00:21:21,457 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.025) 0:00:14.288 ******** 2025-10-13 00:21:21,472 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,483 p=29086 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2025-10-13 00:21:21,483 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.026) 0:00:14.314 ******** 2025-10-13 00:21:21,498 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,507 p=29086 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2025-10-13 00:21:21,507 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.024) 0:00:14.338 ******** 2025-10-13 00:21:21,534 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,547 p=29086 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2025-10-13 00:21:21,547 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.039) 0:00:14.377 ******** 2025-10-13 00:21:21,576 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,591 p=29086 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-10-13 00:21:21,591 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.044) 0:00:14.422 ******** 2025-10-13 00:21:21,665 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:21,674 p=29086 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-10-13 00:21:21,674 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.082) 0:00:14.505 ******** 2025-10-13 00:21:21,731 p=29086 u=zuul n=ansible | ok: [localhost] 2025-10-13 00:21:21,741 p=29086 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2025-10-13 00:21:21,741 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.066) 0:00:14.572 ******** 2025-10-13 00:21:21,815 p=29086 u=zuul n=ansible | skipping: [localhost] 2025-10-13 00:21:21,853 p=29086 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-10-13 00:21:21,853 p=29086 u=zuul n=ansible | localhost : ok=36 changed=12 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | Monday 13 October 2025 00:21:21 +0000 (0:00:00.112) 0:00:14.684 ******** 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | =============================================================================== 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.68s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces --- 1.55s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Get internal OpenShift registry route ----------------- 1.10s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 0.95s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Wait for the image registry to be ready --------------- 0.93s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.73s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Patch samples registry configuration ------------------ 0.71s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Create the openshift_login parameters file ------------ 0.54s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch new OpenShift access token ---------------------- 0.50s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift token --------------------------------- 0.48s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Append the KUBECONFIG to the install yamls parameters --- 0.47s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift API URL ------------------------------- 0.37s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift current user -------------------------- 0.36s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Ensure output directory exists ------------------------ 0.32s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Read the install yamls parameters file ---------------- 0.30s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | networking_mapper : Check for Networking Environment Definition file existence --- 0.28s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Fetch OpenShift kubeconfig context -------------------- 0.27s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_login : Check if kubeconfig exists ---------------------------- 0.26s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Login into OpenShift internal registry 
---------------- 0.25s 2025-10-13 00:21:21,854 p=29086 u=zuul n=ansible | openshift_setup : Ensure output directory exists ------------------------ 0.23s 2025-10-13 00:21:38,687 p=29689 u=zuul n=ansible | Starting galaxy collection install process 2025-10-13 00:21:38,713 p=29689 u=zuul n=ansible | Process install dependency map 2025-10-13 00:21:53,440 p=29689 u=zuul n=ansible | Starting collection install process 2025-10-13 00:21:53,440 p=29689 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+07f6a4f6' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+07f6a4f6 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | cifmw.general:1.0.0+07f6a4f6 was installed successfully 2025-10-13 00:21:54,045 p=29689 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-10-13 00:21:54,114 p=29689 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2025-10-13 00:21:55,036 p=29689 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2025-10-13 00:21:55,095 p=29689 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2025-10-13 00:21:55,214 p=29689 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2025-10-13 00:21:55,247 p=29689 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2025-10-13 00:21:55,423 p=29689 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2025-10-13 00:21:55,423 p=29689 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2025-10-13 00:21:55,424 p=29689 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to 
'/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2025-10-13 00:21:55,567 p=29689 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2025-10-13 00:21:55,661 p=29689 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2025-10-13 00:21:55,694 p=29689 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2025-10-13 00:21:55,695 p=29689 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2025-10-13 00:21:55,695 p=29689 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2025-10-13 00:21:56,034 p=29689 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2025-10-13 00:21:56,035 p=29689 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2025-10-13 00:21:56,035 p=29689 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2025-10-13 00:21:56,354 p=29689 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2025-10-13 00:21:56,355 p=29689 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2025-10-13 00:21:56,355 p=29689 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2025-10-13 00:21:56,401 p=29689 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2025-10-13 00:21:56,450 p=29689 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2025-10-13 00:21:56,580 p=29689 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2025-10-13 00:21:56,580 p=29689 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully home/zuul/zuul-output/logs/ci-framework-data/logs/crc/0000755000175000017500000000000015073043232022075 5ustar 
zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/0000755000175000017500000000000015073043232025564 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/0000755000175000017500000000000015073043234026533 5ustar zuulzuul
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log
2025-08-13T20:11:02.960889364+00:00 stderr F I0813 20:11:02.960329 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:11:02.962080249+00:00 stderr F I0813 20:11:02.961753 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.16.0-202406131906.p0.g1432fe0.assembly.stream.el9-1432fe0) 2025-08-13T20:11:02.976121501+00:00 stderr F I0813 20:11:02.976055 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3deb112ca908d86a8b7f07feb4e0da8204aab510e2799d2dccdab5e5905d1b24" 2025-08-13T20:11:02.976121501+00:00 stderr F I0813 20:11:02.976091 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f69708d7711c9fdc19d1b60591f04a3061fe1f796853e2daea9edea688b2e086" 2025-08-13T20:11:02.976374458+00:00 stderr F I0813 20:11:02.976349 1 standalone_apiserver.go:105] Started health checks at 0.0.0.0:8443 2025-08-13T20:11:02.977037737+00:00 stderr F I0813 20:11:02.976985 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...
2025-08-13T20:11:03.001296893+00:00 stderr F I0813 20:11:03.000593 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager/openshift-master-controllers 2025-08-13T20:11:03.001400006+00:00 stderr F I0813 20:11:03.001133 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"05722deb-fc4c-4763-8689-9fea6b1f7ec9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-778975cc4f-x5vcf became leader 2025-08-13T20:11:03.004702171+00:00 stderr F I0813 20:11:03.004599 1 controller_manager.go:145] Starting "openshift.io/serviceaccount" 2025-08-13T20:11:03.004702171+00:00 stderr F I0813 20:11:03.004686 1 serviceaccount.go:16] openshift.io/serviceaccount: no managed names specified 2025-08-13T20:11:03.004732031+00:00 stderr F W0813 20:11:03.004699 1 controller_manager.go:152] Skipping "openshift.io/serviceaccount" 2025-08-13T20:11:03.004732031+00:00 stderr F I0813 20:11:03.004708 1 controller_manager.go:145] Starting "openshift.io/origin-namespace" 2025-08-13T20:11:03.012836514+00:00 stderr F I0813 20:11:03.012692 1 controller_manager.go:155] Started "openshift.io/origin-namespace" 2025-08-13T20:11:03.012836514+00:00 stderr F I0813 20:11:03.012731 1 controller_manager.go:145] Starting "openshift.io/image-import" 2025-08-13T20:11:03.017441236+00:00 stderr F I0813 20:11:03.017343 1 imagestream_controller.go:66] Starting image stream controller 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021134 1 controller_manager.go:155] Started "openshift.io/image-import" 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021241 1 controller_manager.go:145] Starting "openshift.io/templateinstancefinalizer" 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021350 1 scheduled_image_controller.go:68] Starting scheduled import controller 2025-08-13T20:11:03.028759010+00:00 stderr F I0813 20:11:03.028660 1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer" 2025-08-13T20:11:03.028759010+00:00 stderr F I0813 20:11:03.028734 1 controller_manager.go:145] Starting "openshift.io/unidling" 2025-08-13T20:11:03.029112391+00:00 stderr F I0813 20:11:03.028885 1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync 2025-08-13T20:11:03.037609334+00:00 stderr F I0813 20:11:03.037503 1 controller_manager.go:155] Started "openshift.io/unidling" 2025-08-13T20:11:03.038359756+00:00 stderr F I0813 20:11:03.038263 1 controller_manager.go:145] Starting "openshift.io/builder-serviceaccount" 2025-08-13T20:11:03.041640580+00:00 stderr F I0813 20:11:03.041595 1 controller_manager.go:155] Started "openshift.io/builder-serviceaccount" 2025-08-13T20:11:03.041756663+00:00 stderr F I0813 20:11:03.041706 1 controller_manager.go:145] Starting "openshift.io/deployer-serviceaccount" 2025-08-13T20:11:03.042480464+00:00 stderr F I0813 20:11:03.042400 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:11:03.042550836+00:00 stderr F I0813 20:11:03.042535 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:11:03.045358456+00:00 stderr F I0813 20:11:03.045299 1 controller_manager.go:155] Started "openshift.io/deployer-serviceaccount" 2025-08-13T20:11:03.045358456+00:00 stderr F I0813 20:11:03.045337 1 controller_manager.go:145] Starting "openshift.io/deploymentconfig" 
2025-08-13T20:11:03.045713826+00:00 stderr F I0813 20:11:03.045604 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:11:03.045736167+00:00 stderr F I0813 20:11:03.045724 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:11:03.051008128+00:00 stderr F I0813 20:11:03.050886 1 controller_manager.go:155] Started "openshift.io/deploymentconfig" 2025-08-13T20:11:03.051008128+00:00 stderr F I0813 20:11:03.050944 1 controller_manager.go:145] Starting "openshift.io/templateinstance" 2025-08-13T20:11:03.051386649+00:00 stderr F I0813 20:11:03.051242 1 factory.go:78] Starting deploymentconfig controller 2025-08-13T20:11:03.065455903+00:00 stderr F I0813 20:11:03.065352 1 controller_manager.go:155] Started "openshift.io/templateinstance" 2025-08-13T20:11:03.065455903+00:00 stderr F I0813 20:11:03.065397 1 controller_manager.go:145] Starting "openshift.io/serviceaccount-pull-secrets" 2025-08-13T20:11:03.068100168+00:00 stderr F I0813 20:11:03.068064 1 controller_manager.go:155] Started "openshift.io/serviceaccount-pull-secrets" 2025-08-13T20:11:03.068212132+00:00 stderr F I0813 20:11:03.068186 1 controller_manager.go:145] Starting "openshift.io/deployer-rolebindings" 2025-08-13T20:11:03.068317465+00:00 stderr F I0813 20:11:03.068249 1 registry_urls_observation_controller.go:139] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-08-13T20:11:03.068317465+00:00 stderr F I0813 20:11:03.068310 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_urls 2025-08-13T20:11:03.068393337+00:00 stderr F I0813 20:11:03.068341 1 keyid_observation_controller.go:164] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2025-08-13T20:11:03.068393337+00:00 stderr F I0813 20:11:03.068373 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_kids 2025-08-13T20:11:03.068410747+00:00 stderr F I0813 20:11:03.068401 1 legacy_token_secret_controller.go:109] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2025-08-13T20:11:03.068422458+00:00 stderr F I0813 20:11:03.068408 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret 2025-08-13T20:11:03.068434378+00:00 stderr F I0813 20:11:03.068426 1 image_pull_secret_controller.go:301] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2025-08-13T20:11:03.068446038+00:00 stderr F I0813 20:11:03.068432 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_image-pull-secret 2025-08-13T20:11:03.068457779+00:00 stderr F I0813 20:11:03.068448 1 legacy_image_pull_secret_controller.go:131] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2025-08-13T20:11:03.068457779+00:00 stderr F I0813 20:11:03.068454 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret 2025-08-13T20:11:03.068704496+00:00 stderr F I0813 20:11:03.068203 1 service_account_controller.go:336] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 2025-08-13T20:11:03.068704496+00:00 stderr F I0813 20:11:03.068622 1 shared_informer.go:311] Waiting for caches to sync for 
openshift.io/internal-image-registry-pull-secrets_service-account 2025-08-13T20:11:03.076382426+00:00 stderr F I0813 20:11:03.076273 1 controller_manager.go:155] Started "openshift.io/deployer-rolebindings" 2025-08-13T20:11:03.076555351+00:00 stderr F I0813 20:11:03.076483 1 controller_manager.go:145] Starting "openshift.io/image-signature-import" 2025-08-13T20:11:03.077265761+00:00 stderr F I0813 20:11:03.076414 1 defaultrolebindings.go:154] Starting DeployerRoleBindingController 2025-08-13T20:11:03.077265761+00:00 stderr F I0813 20:11:03.077204 1 shared_informer.go:311] Waiting for caches to sync for DeployerRoleBindingController 2025-08-13T20:11:03.080546645+00:00 stderr F I0813 20:11:03.080439 1 controller_manager.go:155] Started "openshift.io/image-signature-import" 2025-08-13T20:11:03.080546645+00:00 stderr F I0813 20:11:03.080472 1 controller_manager.go:145] Starting "openshift.io/deployer" 2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094601 1 controller_manager.go:155] Started "openshift.io/deployer" 2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094726 1 controller_manager.go:145] Starting "openshift.io/image-trigger" 2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094683 1 factory.go:73] Starting deployer controller 2025-08-13T20:11:03.113845090+00:00 stderr F I0813 20:11:03.113658 1 controller_manager.go:155] Started "openshift.io/image-trigger" 2025-08-13T20:11:03.113946443+00:00 stderr F I0813 20:11:03.113871 1 image_trigger_controller.go:229] Starting trigger controller 2025-08-13T20:11:03.114174979+00:00 stderr F I0813 20:11:03.114046 1 controller_manager.go:145] Starting "openshift.io/image-puller-rolebindings" 2025-08-13T20:11:03.118686329+00:00 stderr F I0813 20:11:03.118620 1 controller_manager.go:155] Started "openshift.io/image-puller-rolebindings" 2025-08-13T20:11:03.118686329+00:00 stderr F W0813 20:11:03.118641 1 controller_manager.go:142] "openshift.io/default-rolebindings" is disabled 2025-08-13T20:11:03.118686329+00:00 stderr F I0813 20:11:03.118648 1 controller_manager.go:145] Starting "openshift.io/build" 2025-08-13T20:11:03.119373948+00:00 stderr F I0813 20:11:03.119263 1 defaultrolebindings.go:154] Starting ImagePullerRoleBindingController 2025-08-13T20:11:03.119373948+00:00 stderr F I0813 20:11:03.119301 1 shared_informer.go:311] Waiting for caches to sync for ImagePullerRoleBindingController 2025-08-13T20:11:03.126268746+00:00 stderr F I0813 20:11:03.126188 1 controller_manager.go:155] Started "openshift.io/build" 2025-08-13T20:11:03.126268746+00:00 stderr F I0813 20:11:03.126228 1 controller_manager.go:145] Starting "openshift.io/build-config-change" 2025-08-13T20:11:03.135650695+00:00 stderr F I0813 20:11:03.135601 1 controller_manager.go:155] Started "openshift.io/build-config-change" 2025-08-13T20:11:03.135730517+00:00 stderr F I0813 20:11:03.135712 1 controller_manager.go:145] Starting "openshift.io/builder-rolebindings" 2025-08-13T20:11:03.139568287+00:00 stderr F I0813 20:11:03.139490 1 controller_manager.go:155] Started "openshift.io/builder-rolebindings" 2025-08-13T20:11:03.139568287+00:00 stderr F I0813 20:11:03.139538 1 controller_manager.go:157] Started Origin Controllers 2025-08-13T20:11:03.139651340+00:00 stderr F I0813 20:11:03.139628 1 defaultrolebindings.go:154] Starting BuilderRoleBindingController 2025-08-13T20:11:03.139741142+00:00 stderr F I0813 20:11:03.139723 1 shared_informer.go:311] Waiting for caches to sync for BuilderRoleBindingController 2025-08-13T20:11:03.169260069+00:00 stderr F I0813 
20:11:03.169193 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.173518971+00:00 stderr F I0813 20:11:03.173458 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.176474546+00:00 stderr F I0813 20:11:03.176362 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.177665040+00:00 stderr F I0813 20:11:03.177622 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.180083009+00:00 stderr F I0813 20:11:03.180003 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.181215371+00:00 stderr F I0813 20:11:03.181106 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.181507830+00:00 stderr F I0813 20:11:03.181444 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.182751745+00:00 stderr F I0813 20:11:03.182712 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.185134084+00:00 stderr F I0813 20:11:03.183022 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.190246390+00:00 stderr F I0813 20:11:03.190190 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.194325617+00:00 stderr F I0813 20:11:03.194225 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.198378524+00:00 stderr F W0813 20:11:03.198291 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:11:03.199765823+00:00 stderr F I0813 20:11:03.199544 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.202071779+00:00 stderr F I0813 20:11:03.201023 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.214545197+00:00 stderr F I0813 20:11:03.210094 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.221080394+00:00 stderr F I0813 20:11:03.220985 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.222953158+00:00 stderr F I0813 20:11:03.222883 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.223201495+00:00 stderr F I0813 20:11:03.223176 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.224708568+00:00 stderr F I0813 20:11:03.224663 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.227238901+00:00 stderr F I0813 20:11:03.227174 1 reflector.go:351] Caches populated for *v1.RoleBinding from 
k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.227449147+00:00 stderr F I0813 20:11:03.227385 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.231337349+00:00 stderr F I0813 20:11:03.231271 1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller 2025-08-13T20:11:03.234503929+00:00 stderr F I0813 20:11:03.234457 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.239160863+00:00 stderr F I0813 20:11:03.238714 1 buildconfig_controller.go:212] Starting buildconfig controller 2025-08-13T20:11:03.255859232+00:00 stderr F I0813 20:11:03.255749 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers. 2025-08-13T20:11:03.258393844+00:00 stderr F I0813 20:11:03.258294 1 shared_informer.go:318] Caches are synced for service account 2025-08-13T20:11:03.269350588+00:00 stderr F I0813 20:11:03.269203 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.269546314+00:00 stderr F I0813 20:11:03.269467 1 shared_informer.go:318] Caches are synced for service account 2025-08-13T20:11:03.279697525+00:00 stderr F I0813 20:11:03.279610 1 templateinstance_controller.go:297] Starting TemplateInstance controller 2025-08-13T20:11:03.279697525+00:00 stderr F I0813 20:11:03.279670 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_urls 2025-08-13T20:11:03.279753797+00:00 stderr F I0813 20:11:03.279691 1 registry_urls_observation_controller.go:146] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-08-13T20:11:03.280877749+00:00 stderr F I0813 20:11:03.280840 1 shared_informer.go:318] Caches are synced for DeployerRoleBindingController 2025-08-13T20:11:03.282687271+00:00 stderr F W0813 20:11:03.282655 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:11:03.286173031+00:00 stderr F I0813 20:11:03.286089 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.300431929+00:00 stderr F I0813 20:11:03.300296 1 factory.go:80] Deployer controller caches are synced. Starting workers. 
2025-08-13T20:11:03.323603564+00:00 stderr F I0813 20:11:03.323502 1 shared_informer.go:318] Caches are synced for ImagePullerRoleBindingController 2025-08-13T20:11:03.366003709+00:00 stderr F I0813 20:11:03.365879 1 shared_informer.go:318] Caches are synced for BuilderRoleBindingController 2025-08-13T20:11:03.371257520+00:00 stderr F I0813 20:11:03.369556 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.392538160+00:00 stderr F I0813 20:11:03.392303 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.500976489+00:00 stderr F I0813 20:11:03.497362 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.514602170+00:00 stderr F I0813 20:11:03.513510 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.527231062+00:00 stderr F I0813 20:11:03.526879 1 build_controller.go:503] Starting build controller 2025-08-13T20:11:03.527231062+00:00 stderr F I0813 20:11:03.526940 1 build_controller.go:505] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000 2025-08-13T20:11:03.568669160+00:00 stderr F I0813 20:11:03.568575 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret 2025-08-13T20:11:03.568727282+00:00 stderr F I0813 20:11:03.568691 1 legacy_image_pull_secret_controller.go:138] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2025-08-13T20:11:03.568727282+00:00 stderr F I0813 20:11:03.568720 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_service-account 2025-08-13T20:11:03.568827745+00:00 stderr F I0813 20:11:03.568734 1 service_account_controller.go:343] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.568993 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_kids 2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569026 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret 2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569044 1 legacy_token_secret_controller.go:116] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569038 1 keyid_observation_controller.go:172] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2025-08-13T20:11:03.569341419+00:00 stderr F I0813 20:11:03.569012 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_image-pull-secret 2025-08-13T20:11:03.569357520+00:00 stderr F I0813 20:11:03.569339 1 image_pull_secret_controller.go:327] Waiting for service account token signing cert to be observed 2025-08-13T20:11:03.569367700+00:00 stderr F I0813 20:11:03.569352 1 image_pull_secret_controller.go:313] Waiting for image registry urls to be observed 2025-08-13T20:11:03.569367700+00:00 stderr F I0813 20:11:03.569356 1 image_pull_secret_controller.go:330] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 
2025-08-13T20:11:03.569382131+00:00 stderr F I0813 20:11:03.569370 1 image_pull_secret_controller.go:317] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2025-08-13T20:11:03.569448183+00:00 stderr F I0813 20:11:03.569389 1 image_pull_secret_controller.go:374] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2025-08-13T20:19:56.300590604+00:00 stderr F W0813 20:19:56.295341 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:20:24.817863086+00:00 stderr F I0813 20:20:24.810877 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:21:03.289154765+00:00 stderr F I0813 20:21:03.287183 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2025-08-13T20:21:03.519750035+00:00 stderr F I0813 20:21:03.519604 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 2025-08-13T20:25:55.305263486+00:00 stderr F W0813 20:25:55.303637 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:29:57.085327979+00:00 stderr F I0813 20:29:57.083719 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:31:03.288080738+00:00 stderr F I0813 20:31:03.284011 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2025-08-13T20:31:03.518212103+00:00 stderr F I0813 20:31:03.518089 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 2025-08-13T20:34:21.325502573+00:00 stderr F W0813 20:34:21.324495 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:41:03.284951366+00:00 stderr F I0813 20:41:03.282391 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2025-08-13T20:41:03.572943138+00:00 stderr F I0813 20:41:03.572884 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 2025-08-13T20:42:35.632999370+00:00 stderr F I0813 20:42:35.631768 1 keyid_observation_controller.go:174] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2025-08-13T20:42:35.634115802+00:00 stderr F I0813 20:42:35.633951 1 project_finalizer_controller.go:74] Shutting down 2025-08-13T20:42:35.634352309+00:00 stderr F I0813 20:42:35.634323 1 legacy_token_secret_controller.go:118] "Shutting down controller" 
name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2025-08-13T20:42:35.634432091+00:00 stderr F I0813 20:42:35.634416 1 service_account_controller.go:345] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 2025-08-13T20:42:35.634506093+00:00 stderr F I0813 20:42:35.634490 1 legacy_image_pull_secret_controller.go:140] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2025-08-13T20:42:35.635735189+00:00 stderr F I0813 20:42:35.635662 1 serviceaccounts_controller.go:123] "Shutting down service account controller" 2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636859 1 scheduled_image_controller.go:81] Shutting down image stream controller 2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636904 1 imagestream_controller.go:81] Shutting down image stream controller 2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636963 1 image_trigger_controller.go:245] Shutting down trigger controller 2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638352 1 image_pull_secret_controller.go:376] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638478 1 serviceaccounts_controller.go:123] "Shutting down service account controller" 2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638527 1 buildconfig_controller.go:219] Shutting down buildconfig controller 2025-08-13T20:42:35.638753276+00:00 stderr F I0813 20:42:35.638736 1 build_controller.go:521] Shutting down build controller 2025-08-13T20:42:35.638896900+00:00 stderr F I0813 20:42:35.638876 1 signature_import_controller.go:81] Shutting down 2025-08-13T20:42:35.638987433+00:00 stderr F I0813 20:42:35.638942 1 factory.go:88] Shutting down deployer controller 2025-08-13T20:42:35.639000763+00:00 stderr F I0813 20:42:35.638986 1 defaultrolebindings.go:166] Shutting down DeployerRoleBindingController 2025-08-13T20:42:35.639026874+00:00 stderr F I0813 20:42:35.634119 1 registry_urls_observation_controller.go:148] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-08-13T20:42:35.639449976+00:00 stderr F I0813 20:42:35.638947 1 defaultrolebindings.go:166] Shutting down BuilderRoleBindingController 2025-08-13T20:42:35.639600830+00:00 stderr F I0813 20:42:35.639579 1 templateinstance_finalizer.go:201] Stopping TemplateInstanceFinalizer controller 2025-08-13T20:42:35.639742184+00:00 stderr F I0813 20:42:35.639722 1 defaultrolebindings.go:166] Shutting down ImagePullerRoleBindingController 2025-08-13T20:42:35.640471215+00:00 stderr F I0813 20:42:35.640011 1 factory.go:95] Shutting down deploymentconfig controller 2025-08-13T20:42:35.652143072+00:00 stderr F W0813 20:42:35.649302 1 controller_manager.go:107] Controller Manager received stop signal: leaderelection lost ././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-m0000644000175000017500000074206015073043233033100 0ustar zuulzuul2025-10-13T00:15:00.460390141+00:00 stderr F I1013 00:15:00.459126 1 leaderelection.go:122] The leader election gives 4 retries 
and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:15:00.510056529+00:00 stderr F I1013 00:15:00.509992 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.16.0-202406131906.p0.g1432fe0.assembly.stream.el9-1432fe0) 2025-10-13T00:15:00.541760019+00:00 stderr F I1013 00:15:00.541697 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3deb112ca908d86a8b7f07feb4e0da8204aab510e2799d2dccdab5e5905d1b24" 2025-10-13T00:15:00.541760019+00:00 stderr F I1013 00:15:00.541727 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f69708d7711c9fdc19d1b60591f04a3061fe1f796853e2daea9edea688b2e086" 2025-10-13T00:15:00.545315175+00:00 stderr F I1013 00:15:00.545262 1 standalone_apiserver.go:105] Started health checks at 0.0.0.0:8443 2025-10-13T00:15:00.557498720+00:00 stderr F I1013 00:15:00.557440 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers... 2025-10-13T00:15:00.598434357+00:00 stderr F I1013 00:15:00.598084 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager/openshift-master-controllers 2025-10-13T00:15:00.598934642+00:00 stderr F I1013 00:15:00.598682 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"05722deb-fc4c-4763-8689-9fea6b1f7ec9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40018", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-778975cc4f-x5vcf became leader 2025-10-13T00:15:00.612455567+00:00 stderr F W1013 00:15:00.612394 1 controller_manager.go:142] "openshift.io/default-rolebindings" is disabled 2025-10-13T00:15:00.612516229+00:00 stderr F I1013 00:15:00.612505 1 controller_manager.go:145] Starting "openshift.io/origin-namespace" 2025-10-13T00:15:00.622828848+00:00 stderr F I1013 00:15:00.622762 1 controller_manager.go:155] Started "openshift.io/origin-namespace" 2025-10-13T00:15:00.622828848+00:00 stderr F I1013 00:15:00.622787 1 controller_manager.go:145] Starting "openshift.io/builder-rolebindings" 2025-10-13T00:15:00.630787446+00:00 stderr F I1013 00:15:00.630739 1 controller_manager.go:155] Started "openshift.io/builder-rolebindings" 2025-10-13T00:15:00.630851178+00:00 stderr F I1013 00:15:00.630839 1 controller_manager.go:145] Starting "openshift.io/deploymentconfig" 2025-10-13T00:15:00.636848858+00:00 stderr F I1013 00:15:00.636802 1 defaultrolebindings.go:154] Starting BuilderRoleBindingController 2025-10-13T00:15:00.636941491+00:00 stderr F I1013 00:15:00.636920 1 shared_informer.go:311] Waiting for caches to sync for BuilderRoleBindingController 2025-10-13T00:15:00.637937640+00:00 stderr F I1013 00:15:00.637904 1 controller_manager.go:155] Started "openshift.io/deploymentconfig" 2025-10-13T00:15:00.637937640+00:00 stderr F I1013 00:15:00.637931 1 controller_manager.go:145] Starting "openshift.io/image-trigger" 2025-10-13T00:15:00.639342503+00:00 stderr F I1013 00:15:00.638097 1 factory.go:78] Starting deploymentconfig controller 2025-10-13T00:15:00.648553068+00:00 stderr F I1013 00:15:00.648509 1 controller_manager.go:155] Started "openshift.io/image-trigger" 2025-10-13T00:15:00.648604450+00:00 stderr F I1013 00:15:00.648593 1 controller_manager.go:145] Starting 
"openshift.io/image-puller-rolebindings" 2025-10-13T00:15:00.650457805+00:00 stderr F I1013 00:15:00.650441 1 image_trigger_controller.go:229] Starting trigger controller 2025-10-13T00:15:00.651193268+00:00 stderr F I1013 00:15:00.651162 1 controller_manager.go:155] Started "openshift.io/image-puller-rolebindings" 2025-10-13T00:15:00.651193268+00:00 stderr F I1013 00:15:00.651185 1 controller_manager.go:145] Starting "openshift.io/serviceaccount" 2025-10-13T00:15:00.651216048+00:00 stderr F I1013 00:15:00.651193 1 serviceaccount.go:16] openshift.io/serviceaccount: no managed names specified 2025-10-13T00:15:00.651216048+00:00 stderr F W1013 00:15:00.651200 1 controller_manager.go:152] Skipping "openshift.io/serviceaccount" 2025-10-13T00:15:00.651216048+00:00 stderr F I1013 00:15:00.651206 1 controller_manager.go:145] Starting "openshift.io/builder-serviceaccount" 2025-10-13T00:15:00.651354322+00:00 stderr F I1013 00:15:00.651316 1 defaultrolebindings.go:154] Starting ImagePullerRoleBindingController 2025-10-13T00:15:00.651373063+00:00 stderr F I1013 00:15:00.651351 1 shared_informer.go:311] Waiting for caches to sync for ImagePullerRoleBindingController 2025-10-13T00:15:00.653804116+00:00 stderr F I1013 00:15:00.653773 1 controller_manager.go:155] Started "openshift.io/builder-serviceaccount" 2025-10-13T00:15:00.653804116+00:00 stderr F I1013 00:15:00.653791 1 controller_manager.go:145] Starting "openshift.io/build" 2025-10-13T00:15:00.659026382+00:00 stderr F I1013 00:15:00.658982 1 controller_manager.go:155] Started "openshift.io/build" 2025-10-13T00:15:00.659026382+00:00 stderr F I1013 00:15:00.659006 1 controller_manager.go:145] Starting "openshift.io/deployer-serviceaccount" 2025-10-13T00:15:00.660454385+00:00 stderr F I1013 00:15:00.660406 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-10-13T00:15:00.660454385+00:00 stderr F I1013 00:15:00.660447 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-10-13T00:15:00.661260549+00:00 stderr F I1013 00:15:00.661217 1 controller_manager.go:155] Started "openshift.io/deployer-serviceaccount" 2025-10-13T00:15:00.661260549+00:00 stderr F I1013 00:15:00.661247 1 controller_manager.go:145] Starting "openshift.io/deployer-rolebindings" 2025-10-13T00:15:00.661914999+00:00 stderr F I1013 00:15:00.661433 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-10-13T00:15:00.661914999+00:00 stderr F I1013 00:15:00.661450 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-10-13T00:15:00.663539047+00:00 stderr F I1013 00:15:00.663500 1 controller_manager.go:155] Started "openshift.io/deployer-rolebindings" 2025-10-13T00:15:00.663539047+00:00 stderr F I1013 00:15:00.663524 1 controller_manager.go:145] Starting "openshift.io/unidling" 2025-10-13T00:15:00.663673351+00:00 stderr F I1013 00:15:00.663654 1 defaultrolebindings.go:154] Starting DeployerRoleBindingController 2025-10-13T00:15:00.663673351+00:00 stderr F I1013 00:15:00.663665 1 shared_informer.go:311] Waiting for caches to sync for DeployerRoleBindingController 2025-10-13T00:15:00.670164906+00:00 stderr F I1013 00:15:00.670099 1 controller_manager.go:155] Started "openshift.io/unidling" 2025-10-13T00:15:00.670164906+00:00 stderr F I1013 00:15:00.670121 1 controller_manager.go:145] Starting "openshift.io/image-import" 2025-10-13T00:15:00.673263049+00:00 stderr F I1013 00:15:00.673166 1 imagestream_controller.go:66] Starting image stream controller 2025-10-13T00:15:00.675994001+00:00 stderr 
F I1013 00:15:00.675960 1 controller_manager.go:155] Started "openshift.io/image-import" 2025-10-13T00:15:00.675994001+00:00 stderr F I1013 00:15:00.675981 1 controller_manager.go:145] Starting "openshift.io/templateinstance" 2025-10-13T00:15:00.676139195+00:00 stderr F I1013 00:15:00.676111 1 scheduled_image_controller.go:68] Starting scheduled import controller 2025-10-13T00:15:00.698862856+00:00 stderr F I1013 00:15:00.698804 1 controller_manager.go:155] Started "openshift.io/templateinstance" 2025-10-13T00:15:00.698862856+00:00 stderr F I1013 00:15:00.698824 1 controller_manager.go:145] Starting "openshift.io/serviceaccount-pull-secrets" 2025-10-13T00:15:00.702994540+00:00 stderr F I1013 00:15:00.702944 1 controller_manager.go:155] Started "openshift.io/serviceaccount-pull-secrets" 2025-10-13T00:15:00.702994540+00:00 stderr F I1013 00:15:00.702969 1 controller_manager.go:145] Starting "openshift.io/build-config-change" 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703180 1 image_pull_secret_controller.go:301] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703199 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_image-pull-secret 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703209 1 legacy_token_secret_controller.go:109] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703220 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703223 1 legacy_image_pull_secret_controller.go:131] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703230 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703246 1 keyid_observation_controller.go:164] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703246 1 service_account_controller.go:336] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703254 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_service-account 2025-10-13T00:15:00.703359731+00:00 stderr F I1013 00:15:00.703272 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_kids 2025-10-13T00:15:00.704711101+00:00 stderr F I1013 00:15:00.704641 1 registry_urls_observation_controller.go:139] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-10-13T00:15:00.704711101+00:00 stderr F I1013 00:15:00.704690 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_urls 2025-10-13T00:15:00.717065351+00:00 stderr F I1013 00:15:00.716676 1 controller_manager.go:155] Started "openshift.io/build-config-change" 2025-10-13T00:15:00.717065351+00:00 stderr F I1013 00:15:00.716703 1 controller_manager.go:145] Starting "openshift.io/deployer" 
2025-10-13T00:15:00.723906476+00:00 stderr F I1013 00:15:00.723699 1 controller_manager.go:155] Started "openshift.io/deployer" 2025-10-13T00:15:00.723906476+00:00 stderr F I1013 00:15:00.723727 1 controller_manager.go:145] Starting "openshift.io/image-signature-import" 2025-10-13T00:15:00.723906476+00:00 stderr F I1013 00:15:00.723884 1 factory.go:73] Starting deployer controller 2025-10-13T00:15:00.730628988+00:00 stderr F I1013 00:15:00.730574 1 controller_manager.go:155] Started "openshift.io/image-signature-import" 2025-10-13T00:15:00.730628988+00:00 stderr F I1013 00:15:00.730599 1 controller_manager.go:145] Starting "openshift.io/templateinstancefinalizer" 2025-10-13T00:15:00.741631967+00:00 stderr F I1013 00:15:00.741546 1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer" 2025-10-13T00:15:00.741631967+00:00 stderr F I1013 00:15:00.741574 1 controller_manager.go:157] Started Origin Controllers 2025-10-13T00:15:00.741986848+00:00 stderr F I1013 00:15:00.741968 1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync 2025-10-13T00:15:00.745712770+00:00 stderr F I1013 00:15:00.743850 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.755409320+00:00 stderr F I1013 00:15:00.752826 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.755605636+00:00 stderr F I1013 00:15:00.755574 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.767804191+00:00 stderr F I1013 00:15:00.767306 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.768460011+00:00 stderr F I1013 00:15:00.768422 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.768653077+00:00 stderr F W1013 00:15:00.768600 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-10-13T00:15:00.768666517+00:00 stderr F E1013 00:15:00.768653 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-10-13T00:15:00.768709859+00:00 stderr F W1013 00:15:00.768685 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:00.768827732+00:00 stderr F W1013 00:15:00.768799 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:00.768827732+00:00 stderr F E1013 00:15:00.768822 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:00.769786101+00:00 stderr F I1013 00:15:00.769757 1 reflector.go:351] Caches 
populated for *v1.Proxy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.774358578+00:00 stderr F I1013 00:15:00.771961 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.776629986+00:00 stderr F I1013 00:15:00.776579 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.777360698+00:00 stderr F I1013 00:15:00.777306 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.777568404+00:00 stderr F W1013 00:15:00.777515 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:00.777611255+00:00 stderr F E1013 00:15:00.777578 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:00.785464991+00:00 stderr F I1013 00:15:00.784491 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.785464991+00:00 stderr F E1013 00:15:00.768706 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:00.785464991+00:00 stderr F W1013 00:15:00.785128 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-10-13T00:15:00.785464991+00:00 stderr F E1013 00:15:00.785185 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-10-13T00:15:00.799997666+00:00 stderr F I1013 00:15:00.799924 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.800223453+00:00 stderr F I1013 00:15:00.800192 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.800278434+00:00 stderr F W1013 00:15:00.800250 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-10-13T00:15:00.800289165+00:00 stderr F E1013 00:15:00.800280 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-10-13T00:15:00.800373757+00:00 stderr F I1013 00:15:00.800355 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.803466550+00:00 stderr F I1013 00:15:00.803413 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.806453669+00:00 stderr F I1013 00:15:00.806258 1 
shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_urls 2025-10-13T00:15:00.806453669+00:00 stderr F I1013 00:15:00.806283 1 registry_urls_observation_controller.go:146] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-10-13T00:15:00.817651115+00:00 stderr F I1013 00:15:00.816273 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.826586553+00:00 stderr F I1013 00:15:00.822682 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.852612632+00:00 stderr F I1013 00:15:00.851772 1 shared_informer.go:318] Caches are synced for ImagePullerRoleBindingController 2025-10-13T00:15:00.852612632+00:00 stderr F I1013 00:15:00.852583 1 shared_informer.go:318] Caches are synced for BuilderRoleBindingController 2025-10-13T00:15:00.856348494+00:00 stderr F I1013 00:15:00.855111 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.861982813+00:00 stderr F I1013 00:15:00.861930 1 shared_informer.go:318] Caches are synced for service account 2025-10-13T00:15:00.863374955+00:00 stderr F I1013 00:15:00.861947 1 shared_informer.go:318] Caches are synced for service account 2025-10-13T00:15:00.915520967+00:00 stderr F I1013 00:15:00.895249 1 shared_informer.go:318] Caches are synced for DeployerRoleBindingController 2025-10-13T00:15:00.915797066+00:00 stderr F I1013 00:15:00.903878 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.915973511+00:00 stderr F I1013 00:15:00.910496 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:00.925547828+00:00 stderr F I1013 00:15:00.925473 1 factory.go:80] Deployer controller caches are synced. Starting workers. 
2025-10-13T00:15:01.033810492+00:00 stderr F I1013 00:15:01.033473 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:01.085319885+00:00 stderr F I1013 00:15:01.085187 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:01.103790438+00:00 stderr F I1013 00:15:01.103676 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_kids 2025-10-13T00:15:01.103790438+00:00 stderr F I1013 00:15:01.103727 1 keyid_observation_controller.go:172] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2025-10-13T00:15:01.103972124+00:00 stderr F I1013 00:15:01.103935 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_image-pull-secret 2025-10-13T00:15:01.103972124+00:00 stderr F I1013 00:15:01.103958 1 image_pull_secret_controller.go:327] Waiting for service account token signing cert to be observed 2025-10-13T00:15:01.103982784+00:00 stderr F I1013 00:15:01.103963 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret 2025-10-13T00:15:01.103982784+00:00 stderr F I1013 00:15:01.103977 1 image_pull_secret_controller.go:330] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 2025-10-13T00:15:01.104008215+00:00 stderr F I1013 00:15:01.103989 1 legacy_image_pull_secret_controller.go:138] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2025-10-13T00:15:01.104031405+00:00 stderr F I1013 00:15:01.104013 1 image_pull_secret_controller.go:313] Waiting for image registry urls to be observed 2025-10-13T00:15:01.104031405+00:00 stderr F I1013 00:15:01.104021 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret 2025-10-13T00:15:01.104039836+00:00 stderr F I1013 00:15:01.104031 1 legacy_token_secret_controller.go:116] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2025-10-13T00:15:01.104047876+00:00 stderr F I1013 00:15:01.104015 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_service-account 2025-10-13T00:15:01.104047876+00:00 stderr F I1013 00:15:01.104031 1 image_pull_secret_controller.go:317] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"] 2025-10-13T00:15:01.104056236+00:00 stderr F I1013 00:15:01.104048 1 service_account_controller.go:343] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 2025-10-13T00:15:01.104063996+00:00 stderr F I1013 00:15:01.104057 1 image_pull_secret_controller.go:374] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2025-10-13T00:15:01.108687285+00:00 stderr F I1013 00:15:01.108095 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="builder-dockercfg-68c6h" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.956773274 +0000 UTC" 
2025-10-13T00:15:01.108687285+00:00 stderr F I1013 00:15:01.108487 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="builder-dockercfg-68c6h" serviceaccount="builder" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112090 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="deployer-dockercfg-rxncs" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.95518075 +0000 UTC" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112122 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="deployer-dockercfg-rxncs" serviceaccount="deployer" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112465 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="default-dockercfg-rwmqp" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.955022181 +0000 UTC" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112481 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="default-dockercfg-rwmqp" serviceaccount="default" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112539 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="builder-dockercfg-hn9nn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.956764853 +0000 UTC" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112551 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="builder-dockercfg-hn9nn" serviceaccount="builder" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.108095 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="csi-hostpath-provisioner-sa-dockercfg-nqbbq" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.956773278 +0000 UTC" 2025-10-13T00:15:01.113632503+00:00 stderr F I1013 00:15:01.112643 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="csi-hostpath-provisioner-sa-dockercfg-nqbbq" serviceaccount="csi-hostpath-provisioner-sa" 2025-10-13T00:15:01.180027712+00:00 stderr F I1013 00:15:01.179708 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="csi-provisioner-dockercfg-m4vbf" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.928130707 +0000 UTC" 2025-10-13T00:15:01.180027712+00:00 stderr F I1013 00:15:01.179741 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="csi-provisioner-dockercfg-m4vbf" serviceaccount="csi-provisioner" 2025-10-13T00:15:01.180186167+00:00 stderr F I1013 00:15:01.180150 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="default-dockercfg-svxcm" 
url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-07-20 19:41:32.927954566 +0000 UTC" 2025-10-13T00:15:01.180197068+00:00 stderr F I1013 00:15:01.180184 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="default-dockercfg-svxcm" serviceaccount="default" 2025-10-13T00:15:01.180320111+00:00 stderr F I1013 00:15:01.180293 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="deployer-dockercfg-xtrqb" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.327890364 +0000 UTC" 2025-10-13T00:15:01.180320111+00:00 stderr F I1013 00:15:01.180314 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="deployer-dockercfg-xtrqb" serviceaccount="deployer" 2025-10-13T00:15:01.188564598+00:00 stderr F I1013 00:15:01.188487 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="builder-dockercfg-fhvt9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.324647035 +0000 UTC" 2025-10-13T00:15:01.188564598+00:00 stderr F I1013 00:15:01.188544 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="builder-dockercfg-fhvt9" serviceaccount="builder" 2025-10-13T00:15:01.198101234+00:00 stderr F I1013 00:15:01.198045 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="default-dockercfg-dp7cf" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.320795088 +0000 UTC" 2025-10-13T00:15:01.198101234+00:00 stderr F I1013 00:15:01.198076 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="default-dockercfg-dp7cf" serviceaccount="default" 2025-10-13T00:15:01.228251677+00:00 stderr F I1013 00:15:01.228162 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="deployer-dockercfg-l8zq8" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.308751735 +0000 UTC" 2025-10-13T00:15:01.228251677+00:00 stderr F I1013 00:15:01.228205 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="deployer-dockercfg-l8zq8" serviceaccount="deployer" 2025-10-13T00:15:01.228251677+00:00 stderr F I1013 00:15:01.228211 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="builder-dockercfg-pq2fn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.30872975 +0000 UTC" 2025-10-13T00:15:01.228251677+00:00 stderr F I1013 00:15:01.228235 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="builder-dockercfg-pq2fn" serviceaccount="builder" 2025-10-13T00:15:01.229127034+00:00 stderr F I1013 00:15:01.229080 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" 
reason="auth token needs to be refreshed" ns="kube-public" name="default-dockercfg-mg7xn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.30837804 +0000 UTC" 2025-10-13T00:15:01.229127034+00:00 stderr F I1013 00:15:01.229107 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="default-dockercfg-mg7xn" serviceaccount="default" 2025-10-13T00:15:01.229294539+00:00 stderr F I1013 00:15:01.229201 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="deployer-dockercfg-4blxw" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.30832657 +0000 UTC" 2025-10-13T00:15:01.229294539+00:00 stderr F I1013 00:15:01.229225 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="deployer-dockercfg-4blxw" serviceaccount="deployer" 2025-10-13T00:15:01.254474833+00:00 stderr F I1013 00:15:01.253015 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="attachdetach-controller-dockercfg-fdtjb" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-07-20 19:41:34.298807905 +0000 UTC" 2025-10-13T00:15:01.254474833+00:00 stderr F I1013 00:15:01.253047 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="attachdetach-controller-dockercfg-fdtjb" serviceaccount="attachdetach-controller" 2025-10-13T00:15:01.261664268+00:00 stderr F I1013 00:15:01.258138 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="builder-dockercfg-kkqp2" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.696759463 +0000 UTC" 2025-10-13T00:15:01.261664268+00:00 stderr F I1013 00:15:01.258173 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="builder-dockercfg-kkqp2" serviceaccount="builder" 2025-10-13T00:15:01.267369949+00:00 stderr F I1013 00:15:01.264475 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="certificate-controller-dockercfg-9v2kj" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.694223463 +0000 UTC" 2025-10-13T00:15:01.267369949+00:00 stderr F I1013 00:15:01.264515 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="certificate-controller-dockercfg-9v2kj" serviceaccount="certificate-controller" 2025-10-13T00:15:01.277670288+00:00 stderr F I1013 00:15:01.277616 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="clusterrole-aggregation-controller-dockercfg-2tcfh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.690018837 +0000 UTC" 2025-10-13T00:15:01.277692039+00:00 stderr F I1013 00:15:01.277676 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" 
name="clusterrole-aggregation-controller-dockercfg-2tcfh" serviceaccount="clusterrole-aggregation-controller" 2025-10-13T00:15:01.281427911+00:00 stderr F I1013 00:15:01.278788 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="cronjob-controller-dockercfg-g2sp5" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.68849788 +0000 UTC" 2025-10-13T00:15:01.281427911+00:00 stderr F I1013 00:15:01.278814 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="cronjob-controller-dockercfg-g2sp5" serviceaccount="cronjob-controller" 2025-10-13T00:15:01.306419569+00:00 stderr F I1013 00:15:01.306338 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="daemon-set-controller-dockercfg-pjkzz" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.677481402 +0000 UTC" 2025-10-13T00:15:01.306419569+00:00 stderr F I1013 00:15:01.306366 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="daemon-set-controller-dockercfg-pjkzz" serviceaccount="daemon-set-controller" 2025-10-13T00:15:01.306513962+00:00 stderr F I1013 00:15:01.306476 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="default-dockercfg-q6b6n" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.677418928 +0000 UTC" 2025-10-13T00:15:01.306513962+00:00 stderr F I1013 00:15:01.306503 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="default-dockercfg-q6b6n" serviceaccount="default" 2025-10-13T00:15:01.315817001+00:00 stderr F I1013 00:15:01.315776 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="deployer-dockercfg-bscn9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.673701189 +0000 UTC" 2025-10-13T00:15:01.315817001+00:00 stderr F I1013 00:15:01.315804 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="deployer-dockercfg-bscn9" serviceaccount="deployer" 2025-10-13T00:15:01.319355027+00:00 stderr F I1013 00:15:01.316444 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="deployment-controller-dockercfg-xwj9s" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.673438045 +0000 UTC" 2025-10-13T00:15:01.319355027+00:00 stderr F I1013 00:15:01.316478 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="deployment-controller-dockercfg-xwj9s" serviceaccount="deployment-controller" 2025-10-13T00:15:01.329429509+00:00 stderr F I1013 00:15:01.329320 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="disruption-controller-dockercfg-27hxh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" 
refreshTime="2025-07-20 19:41:35.668288055 +0000 UTC" 2025-10-13T00:15:01.329429509+00:00 stderr F I1013 00:15:01.329378 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="disruption-controller-dockercfg-27hxh" serviceaccount="disruption-controller" 2025-10-13T00:15:01.333379467+00:00 stderr F I1013 00:15:01.329949 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpoint-controller-dockercfg-fnmd9" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.668028587 +0000 UTC" 2025-10-13T00:15:01.333379467+00:00 stderr F I1013 00:15:01.329973 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpoint-controller-dockercfg-fnmd9" serviceaccount="endpoint-controller" 2025-10-13T00:15:01.340751638+00:00 stderr F I1013 00:15:01.340714 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpointslice-controller-dockercfg-kvrd9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.663725098 +0000 UTC" 2025-10-13T00:15:01.340751638+00:00 stderr F I1013 00:15:01.340742 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpointslice-controller-dockercfg-kvrd9" serviceaccount="endpointslice-controller" 2025-10-13T00:15:01.362260662+00:00 stderr F I1013 00:15:01.362128 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpointslicemirroring-controller-dockercfg-skzmn" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-07-20 19:41:35.655162025 +0000 UTC" 2025-10-13T00:15:01.362260662+00:00 stderr F I1013 00:15:01.362151 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpointslicemirroring-controller-dockercfg-skzmn" serviceaccount="endpointslicemirroring-controller" 2025-10-13T00:15:01.364998814+00:00 stderr F I1013 00:15:01.364965 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="ephemeral-volume-controller-dockercfg-jfqhh" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:44 +0000 UTC" refreshTime="2025-07-20 19:41:37.054029811 +0000 UTC" 2025-10-13T00:15:01.365057596+00:00 stderr F I1013 00:15:01.365046 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="ephemeral-volume-controller-dockercfg-jfqhh" serviceaccount="ephemeral-volume-controller" 2025-10-13T00:15:01.366218611+00:00 stderr F I1013 00:15:01.366155 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="expand-controller-dockercfg-ls7wp" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:44 +0000 UTC" refreshTime="2025-07-20 19:41:37.053556297 +0000 UTC" 2025-10-13T00:15:01.366218611+00:00 stderr F I1013 00:15:01.366203 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="expand-controller-dockercfg-ls7wp" 
serviceaccount="expand-controller" 2025-10-13T00:15:01.383745696+00:00 stderr F I1013 00:15:01.383614 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="horizontal-pod-autoscaler-dockercfg-5mlhd" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-07-20 19:41:39.846577244 +0000 UTC" 2025-10-13T00:15:01.383745696+00:00 stderr F I1013 00:15:01.383647 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="horizontal-pod-autoscaler-dockercfg-5mlhd" serviceaccount="horizontal-pod-autoscaler" 2025-10-13T00:15:01.384705115+00:00 stderr F I1013 00:15:01.384311 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="generic-garbage-collector-dockercfg-wqxkz" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-07-20 19:41:39.846715992 +0000 UTC" 2025-10-13T00:15:01.384705115+00:00 stderr F I1013 00:15:01.384359 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="generic-garbage-collector-dockercfg-wqxkz" serviceaccount="generic-garbage-collector" 2025-10-13T00:15:01.396963132+00:00 stderr F I1013 00:15:01.396286 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="job-controller-dockercfg-wq5x7" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-07-20 19:41:39.841508808 +0000 UTC" 2025-10-13T00:15:01.396963132+00:00 stderr F I1013 00:15:01.396346 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="job-controller-dockercfg-wq5x7" serviceaccount="job-controller" 2025-10-13T00:15:01.406408415+00:00 stderr F I1013 00:15:01.406356 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="legacy-service-account-token-cleaner-dockercfg-qqxct" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-07-20 19:41:39.837482979 +0000 UTC" 2025-10-13T00:15:01.406408415+00:00 stderr F I1013 00:15:01.406393 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="legacy-service-account-token-cleaner-dockercfg-qqxct" serviceaccount="legacy-service-account-token-cleaner" 2025-10-13T00:15:01.407186448+00:00 stderr F I1013 00:15:01.407155 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="namespace-controller-dockercfg-5hkmr" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-07-20 19:41:39.837156118 +0000 UTC" 2025-10-13T00:15:01.407244120+00:00 stderr F I1013 00:15:01.407230 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="namespace-controller-dockercfg-5hkmr" serviceaccount="namespace-controller" 2025-10-13T00:15:01.425886479+00:00 stderr F I1013 00:15:01.425831 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" 
name="node-controller-dockercfg-r8598" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-07-20 19:41:41.22968289 +0000 UTC" 2025-10-13T00:15:01.425972911+00:00 stderr F I1013 00:15:01.425959 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="node-controller-dockercfg-r8598" serviceaccount="node-controller" 2025-10-13T00:15:01.439822686+00:00 stderr F I1013 00:15:01.439746 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="persistent-volume-binder-dockercfg-49lxl" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-07-20 19:41:41.224117339 +0000 UTC" 2025-10-13T00:15:01.439822686+00:00 stderr F I1013 00:15:01.439784 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="persistent-volume-binder-dockercfg-49lxl" serviceaccount="persistent-volume-binder" 2025-10-13T00:15:01.443640821+00:00 stderr F I1013 00:15:01.442899 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pod-garbage-collector-dockercfg-9jzsm" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-07-20 19:41:41.222858153 +0000 UTC" 2025-10-13T00:15:01.443640821+00:00 stderr F I1013 00:15:01.442937 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pod-garbage-collector-dockercfg-9jzsm" serviceaccount="pod-garbage-collector" 2025-10-13T00:15:01.445431664+00:00 stderr F I1013 00:15:01.444075 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pv-protection-controller-dockercfg-r2lrg" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-07-20 19:41:41.222381615 +0000 UTC" 2025-10-13T00:15:01.445431664+00:00 stderr F I1013 00:15:01.444110 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pv-protection-controller-dockercfg-r2lrg" serviceaccount="pv-protection-controller" 2025-10-13T00:15:01.460457915+00:00 stderr F I1013 00:15:01.459862 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pvc-protection-controller-dockercfg-zqpk9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-07-20 19:41:42.616068082 +0000 UTC" 2025-10-13T00:15:01.460457915+00:00 stderr F I1013 00:15:01.460204 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pvc-protection-controller-dockercfg-zqpk9" serviceaccount="pvc-protection-controller" 2025-10-13T00:15:01.485314219+00:00 stderr F I1013 00:15:01.483143 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="replicaset-controller-dockercfg-m7w7t" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-07-20 19:41:42.606756883 +0000 UTC" 2025-10-13T00:15:01.485314219+00:00 stderr F I1013 00:15:01.483177 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="replicaset-controller-dockercfg-m7w7t" 
serviceaccount="replicaset-controller" 2025-10-13T00:15:01.502574297+00:00 stderr F I1013 00:15:01.502484 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="replication-controller-dockercfg-zx22f" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-07-20 19:41:42.599025931 +0000 UTC" 2025-10-13T00:15:01.502574297+00:00 stderr F I1013 00:15:01.502512 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="replication-controller-dockercfg-zx22f" serviceaccount="replication-controller" 2025-10-13T00:15:01.508633638+00:00 stderr F I1013 00:15:01.508593 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="resourcequota-controller-dockercfg-f7clv" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-07-20 19:41:42.59658335 +0000 UTC" 2025-10-13T00:15:01.508633638+00:00 stderr F I1013 00:15:01.508617 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="resourcequota-controller-dockercfg-f7clv" serviceaccount="resourcequota-controller" 2025-10-13T00:15:01.510193735+00:00 stderr F I1013 00:15:01.508949 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="root-ca-cert-publisher-dockercfg-4z4hh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-07-20 19:41:42.596434457 +0000 UTC" 2025-10-13T00:15:01.510193735+00:00 stderr F I1013 00:15:01.508984 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="root-ca-cert-publisher-dockercfg-4z4hh" serviceaccount="root-ca-cert-publisher" 2025-10-13T00:15:01.529745641+00:00 stderr F I1013 00:15:01.526647 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-account-controller-dockercfg-wvw6s" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-07-20 19:41:42.589359918 +0000 UTC" 2025-10-13T00:15:01.529745641+00:00 stderr F I1013 00:15:01.526691 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-account-controller-dockercfg-wvw6s" serviceaccount="service-account-controller" 2025-10-13T00:15:01.533585986+00:00 stderr F I1013 00:15:01.533523 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-ca-cert-publisher-dockercfg-npjg7" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.986604347 +0000 UTC" 2025-10-13T00:15:01.533585986+00:00 stderr F I1013 00:15:01.533564 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-ca-cert-publisher-dockercfg-npjg7" serviceaccount="service-ca-cert-publisher" 2025-10-13T00:15:01.574569284+00:00 stderr F I1013 00:15:01.574082 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-controller-dockercfg-4cv62" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 
+0000 UTC" refreshTime="2025-07-20 19:41:43.970379153 +0000 UTC" 2025-10-13T00:15:01.574569284+00:00 stderr F I1013 00:15:01.574526 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-controller-dockercfg-4cv62" serviceaccount="service-controller" 2025-10-13T00:15:01.574635356+00:00 stderr F I1013 00:15:01.574625 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="ttl-after-finished-controller-dockercfg-7wg62" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.970155822 +0000 UTC" 2025-10-13T00:15:01.574645366+00:00 stderr F I1013 00:15:01.574398 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="statefulset-controller-dockercfg-ndvv5" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.970248585 +0000 UTC" 2025-10-13T00:15:01.574653026+00:00 stderr F I1013 00:15:01.574639 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="ttl-after-finished-controller-dockercfg-7wg62" serviceaccount="ttl-after-finished-controller" 2025-10-13T00:15:01.574660516+00:00 stderr F I1013 00:15:01.574651 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="statefulset-controller-dockercfg-ndvv5" serviceaccount="statefulset-controller" 2025-10-13T00:15:01.592050387+00:00 stderr F I1013 00:15:01.590927 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="builder-dockercfg-fcp4f" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.963643115 +0000 UTC" 2025-10-13T00:15:01.592050387+00:00 stderr F I1013 00:15:01.590960 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="builder-dockercfg-fcp4f" serviceaccount="builder" 2025-10-13T00:15:01.600528141+00:00 stderr F I1013 00:15:01.599597 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="default-dockercfg-qknsb" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.960178375 +0000 UTC" 2025-10-13T00:15:01.600528141+00:00 stderr F I1013 00:15:01.599630 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="default-dockercfg-qknsb" serviceaccount="default" 2025-10-13T00:15:01.612479879+00:00 stderr F W1013 00:15:01.610021 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:01.612479879+00:00 stderr F E1013 00:15:01.610054 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:01.613428938+00:00 stderr F I1013 00:15:01.612916 1 image_pull_secret_controller.go:286] "Internal registry pull secret 
needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="deployer-dockercfg-rk5zr" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.954846247 +0000 UTC" 2025-10-13T00:15:01.613428938+00:00 stderr F I1013 00:15:01.612947 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="deployer-dockercfg-rk5zr" serviceaccount="deployer" 2025-10-13T00:15:01.619625244+00:00 stderr F W1013 00:15:01.618951 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-10-13T00:15:01.619625244+00:00 stderr F E1013 00:15:01.618983 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-10-13T00:15:01.625604333+00:00 stderr F I1013 00:15:01.624094 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="openshift-apiserver-operator-dockercfg-vw4hh" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:50 +0000 UTC" refreshTime="2025-07-20 19:41:45.350375842 +0000 UTC" 2025-10-13T00:15:01.625604333+00:00 stderr F I1013 00:15:01.624124 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="openshift-apiserver-operator-dockercfg-vw4hh" serviceaccount="openshift-apiserver-operator" 2025-10-13T00:15:01.625604333+00:00 stderr F I1013 00:15:01.624857 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="builder-dockercfg-bsrrx" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-07-20 19:41:43.950078917 +0000 UTC" 2025-10-13T00:15:01.625604333+00:00 stderr F I1013 00:15:01.624885 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="builder-dockercfg-bsrrx" serviceaccount="builder" 2025-10-13T00:15:01.645889881+00:00 stderr F I1013 00:15:01.645633 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="default-dockercfg-hxncm" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:50 +0000 UTC" refreshTime="2025-07-20 19:41:45.341757335 +0000 UTC" 2025-10-13T00:15:01.645889881+00:00 stderr F I1013 00:15:01.645660 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="default-dockercfg-hxncm" serviceaccount="default" 2025-10-13T00:15:01.645948202+00:00 stderr F I1013 00:15:01.645934 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="deployer-dockercfg-qkt4v" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-07-20 19:41:46.741632048 +0000 UTC" 2025-10-13T00:15:01.645956513+00:00 stderr F I1013 00:15:01.645945 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="deployer-dockercfg-qkt4v" serviceaccount="deployer" 2025-10-13T00:15:01.647096897+00:00 stderr F I1013 00:15:01.646987 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="openshift-apiserver-sa-dockercfg-r9fjc" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-07-20 19:41:46.741217643 +0000 UTC" 2025-10-13T00:15:01.647096897+00:00 stderr F I1013 00:15:01.647018 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="openshift-apiserver-sa-dockercfg-r9fjc" serviceaccount="openshift-apiserver-sa" 2025-10-13T00:15:01.659522379+00:00 stderr F I1013 00:15:01.658950 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="authentication-operator-dockercfg-7rvdq" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-07-20 19:41:46.73643202 +0000 UTC" 2025-10-13T00:15:01.659522379+00:00 stderr F I1013 00:15:01.658996 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="authentication-operator-dockercfg-7rvdq" serviceaccount="authentication-operator" 2025-10-13T00:15:01.668600751+00:00 stderr F I1013 00:15:01.667810 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="builder-dockercfg-gr58d" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-07-20 19:41:46.732889258 +0000 UTC" 2025-10-13T00:15:01.668600751+00:00 stderr F I1013 00:15:01.668043 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="builder-dockercfg-gr58d" serviceaccount="builder" 2025-10-13T00:15:01.672218969+00:00 stderr F I1013 00:15:01.671898 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="default-dockercfg-mpz9v" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-07-20 19:41:46.731252481 +0000 UTC" 2025-10-13T00:15:01.672218969+00:00 stderr F I1013 00:15:01.671930 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="default-dockercfg-mpz9v" serviceaccount="default" 2025-10-13T00:15:01.685468796+00:00 stderr F I1013 00:15:01.685404 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="deployer-dockercfg-7xqgr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-07-20 19:42:56.725849795 +0000 UTC" 2025-10-13T00:15:01.685468796+00:00 stderr F I1013 00:15:01.685430 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="deployer-dockercfg-7xqgr" serviceaccount="deployer" 2025-10-13T00:15:01.699599130+00:00 stderr F I1013 00:15:01.694353 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-authentication" name="builder-dockercfg-wbrzn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-07-20 19:42:56.722277864 +0000 UTC" 2025-10-13T00:15:01.699640861+00:00 stderr F I1013 00:15:01.699595 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="builder-dockercfg-wbrzn" serviceaccount="builder" 2025-10-13T00:15:01.704208608+00:00 stderr F W1013 00:15:01.703806 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-10-13T00:15:01.704208608+00:00 stderr F E1013 00:15:01.703838 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-10-13T00:15:01.704208608+00:00 stderr F I1013 00:15:01.704097 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="default-dockercfg-8smsw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-07-20 19:42:56.718371651 +0000 UTC" 2025-10-13T00:15:01.704208608+00:00 stderr F I1013 00:15:01.704113 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="default-dockercfg-8smsw" serviceaccount="default" 2025-10-13T00:15:01.704520807+00:00 stderr F I1013 00:15:01.704503 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="deployer-dockercfg-txlvt" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-07-20 19:42:56.718205398 +0000 UTC" 2025-10-13T00:15:01.704548288+00:00 stderr F I1013 00:15:01.704518 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="deployer-dockercfg-txlvt" serviceaccount="deployer" 2025-10-13T00:15:01.709129995+00:00 stderr F I1013 00:15:01.709101 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="oauth-openshift-dockercfg-6sd5l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-07-20 19:42:56.716367487 +0000 UTC" 2025-10-13T00:15:01.709129995+00:00 stderr F I1013 00:15:01.709120 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="oauth-openshift-dockercfg-6sd5l" serviceaccount="oauth-openshift" 2025-10-13T00:15:01.716961710+00:00 stderr F I1013 00:15:01.715920 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="builder-dockercfg-4stzg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-07-20 19:43:02.313646393 +0000 UTC" 2025-10-13T00:15:01.716961710+00:00 stderr F I1013 00:15:01.715965 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="builder-dockercfg-4stzg" serviceaccount="builder" 2025-10-13T00:15:01.745373741+00:00 stderr F I1013 00:15:01.742099 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="default-dockercfg-bswg4" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-07-20 19:43:02.303174767 +0000 UTC" 2025-10-13T00:15:01.745373741+00:00 stderr F I1013 00:15:01.742129 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="default-dockercfg-bswg4" serviceaccount="default" 2025-10-13T00:15:01.745373741+00:00 stderr F I1013 00:15:01.742738 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="deployer-dockercfg-95h82" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-07-20 19:43:02.302911335 +0000 UTC" 2025-10-13T00:15:01.745373741+00:00 stderr F I1013 00:15:01.742752 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="deployer-dockercfg-95h82" serviceaccount="deployer" 2025-10-13T00:15:01.746528466+00:00 stderr F I1013 00:15:01.745766 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="builder-dockercfg-88rrx" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-07-20 19:43:02.301708355 +0000 UTC" 2025-10-13T00:15:01.746528466+00:00 stderr F I1013 00:15:01.745799 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="builder-dockercfg-88rrx" serviceaccount="builder" 2025-10-13T00:15:01.775820624+00:00 stderr F I1013 00:15:01.775559 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="default-dockercfg-7xbdb" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-07-20 19:43:02.289792423 +0000 UTC" 2025-10-13T00:15:01.775820624+00:00 stderr F I1013 00:15:01.775608 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="default-dockercfg-7xbdb" serviceaccount="default" 2025-10-13T00:15:01.793803002+00:00 stderr F I1013 00:15:01.793755 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="deployer-dockercfg-d4ldp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-07-20 19:43:03.68250946 +0000 UTC" 2025-10-13T00:15:01.793867274+00:00 stderr F I1013 00:15:01.793856 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="deployer-dockercfg-d4ldp" serviceaccount="deployer" 2025-10-13T00:15:01.825941135+00:00 stderr F I1013 00:15:01.825893 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="builder-dockercfg-dkg74" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:00:46 +0000 UTC" 
refreshTime="2025-07-20 19:43:03.669652675 +0000 UTC" 2025-10-13T00:15:01.826007037+00:00 stderr F I1013 00:15:01.825995 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="builder-dockercfg-dkg74" serviceaccount="builder" 2025-10-13T00:15:01.826671897+00:00 stderr F I1013 00:15:01.826652 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="default-dockercfg-89xjf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-07-20 19:43:03.669345282 +0000 UTC" 2025-10-13T00:15:01.826713368+00:00 stderr F I1013 00:15:01.826703 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="default-dockercfg-89xjf" serviceaccount="default" 2025-10-13T00:15:01.827002257+00:00 stderr F I1013 00:15:01.826986 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="deployer-dockercfg-vb2qm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-07-20 19:43:03.66921209 +0000 UTC" 2025-10-13T00:15:01.827036968+00:00 stderr F I1013 00:15:01.827027 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="deployer-dockercfg-vb2qm" serviceaccount="deployer" 2025-10-13T00:15:01.871991685+00:00 stderr F I1013 00:15:01.871068 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="machine-approver-sa-dockercfg-6nbmk" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-07-20 19:43:03.651583718 +0000 UTC" 2025-10-13T00:15:01.871991685+00:00 stderr F I1013 00:15:01.871092 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="machine-approver-sa-dockercfg-6nbmk" serviceaccount="machine-approver-sa" 2025-10-13T00:15:01.872146140+00:00 stderr F I1013 00:15:01.872100 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="builder-dockercfg-bgnkz" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-07-20 19:43:05.051177201 +0000 UTC" 2025-10-13T00:15:01.872173870+00:00 stderr F I1013 00:15:01.872140 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="builder-dockercfg-bgnkz" serviceaccount="builder" 2025-10-13T00:15:01.891963663+00:00 stderr F I1013 00:15:01.891892 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="cluster-samples-operator-dockercfg-q289q" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-07-20 19:43:05.043255076 +0000 UTC" 2025-10-13T00:15:01.891963663+00:00 stderr F I1013 00:15:01.891935 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" 
name="cluster-samples-operator-dockercfg-q289q" serviceaccount="cluster-samples-operator" 2025-10-13T00:15:01.892400386+00:00 stderr F I1013 00:15:01.892273 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="default-dockercfg-78cjw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-07-20 19:43:05.043099991 +0000 UTC" 2025-10-13T00:15:01.892400386+00:00 stderr F I1013 00:15:01.892290 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="default-dockercfg-78cjw" serviceaccount="default" 2025-10-13T00:15:01.906881350+00:00 stderr F W1013 00:15:01.906811 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:01.906881350+00:00 stderr F E1013 00:15:01.906849 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:01.907133938+00:00 stderr F I1013 00:15:01.907087 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="deployer-dockercfg-hx9zf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-07-20 19:43:05.03717687 +0000 UTC" 2025-10-13T00:15:01.907133938+00:00 stderr F I1013 00:15:01.907129 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="deployer-dockercfg-hx9zf" serviceaccount="deployer" 2025-10-13T00:15:01.934388465+00:00 stderr F I1013 00:15:01.932472 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="builder-dockercfg-l8dbc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-07-20 19:43:06.427024554 +0000 UTC" 2025-10-13T00:15:01.934388465+00:00 stderr F I1013 00:15:01.932508 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="builder-dockercfg-l8dbc" serviceaccount="builder" 2025-10-13T00:15:01.948386164+00:00 stderr F I1013 00:15:01.946068 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="default-dockercfg-l44fb" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-07-20 19:43:06.421587523 +0000 UTC" 2025-10-13T00:15:01.948386164+00:00 stderr F I1013 00:15:01.946104 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="default-dockercfg-l44fb" serviceaccount="default" 2025-10-13T00:15:01.965385443+00:00 stderr F I1013 00:15:01.965074 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="deployer-dockercfg-g97l8" url="image-registry.openshift-image-registry.svc:5000" 
expirtyTime="2025-08-13 21:00:49 +0000 UTC" refreshTime="2025-07-20 19:43:07.813987239 +0000 UTC" 2025-10-13T00:15:01.965385443+00:00 stderr F I1013 00:15:01.965115 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="deployer-dockercfg-g97l8" serviceaccount="deployer" 2025-10-13T00:15:01.965624950+00:00 stderr F I1013 00:15:01.965583 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="builder-dockercfg-l4k9s" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-07-20 19:43:06.413786661 +0000 UTC" 2025-10-13T00:15:01.965624950+00:00 stderr F I1013 00:15:01.965610 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="builder-dockercfg-l4k9s" serviceaccount="builder" 2025-10-13T00:15:02.000251418+00:00 stderr F I1013 00:15:02.000167 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="default-dockercfg-5wpfz" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:00:49 +0000 UTC" refreshTime="2025-07-20 19:43:07.799956451 +0000 UTC" 2025-10-13T00:15:02.000377692+00:00 stderr F I1013 00:15:02.000360 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="default-dockercfg-5wpfz" serviceaccount="default" 2025-10-13T00:15:02.023059731+00:00 stderr F I1013 00:15:02.022984 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="deployer-dockercfg-r7kd4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-07-20 19:43:09.190826721 +0000 UTC" 2025-10-13T00:15:02.023159104+00:00 stderr F I1013 00:15:02.023146 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="deployer-dockercfg-r7kd4" serviceaccount="deployer" 2025-10-13T00:15:02.032804433+00:00 stderr F I1013 00:15:02.032714 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="builder-dockercfg-nndcv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-07-20 19:43:09.186932609 +0000 UTC" 2025-10-13T00:15:02.032804433+00:00 stderr F I1013 00:15:02.032750 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="builder-dockercfg-nndcv" serviceaccount="builder" 2025-10-13T00:15:02.058021339+00:00 stderr F I1013 00:15:02.057923 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="default-dockercfg-5zsff" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-07-20 19:43:09.176845198 +0000 UTC" 2025-10-13T00:15:02.058021339+00:00 stderr F I1013 00:15:02.057957 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="default-dockercfg-5zsff" serviceaccount="default" 2025-10-13T00:15:02.062676248+00:00 stderr F W1013 00:15:02.062631 1 reflector.go:539] 
k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-10-13T00:15:02.062676248+00:00 stderr F E1013 00:15:02.062665 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-10-13T00:15:02.087742709+00:00 stderr F I1013 00:15:02.084729 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="deployer-dockercfg-v47lz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:56 +0000 UTC" refreshTime="2025-07-20 19:43:17.566120301 +0000 UTC" 2025-10-13T00:15:02.087742709+00:00 stderr F I1013 00:15:02.084754 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="deployer-dockercfg-v47lz" serviceaccount="deployer" 2025-10-13T00:15:02.139516230+00:00 stderr F I1013 00:15:02.139448 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="builder-dockercfg-lbblj" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-07-20 19:43:09.144237601 +0000 UTC" 2025-10-13T00:15:02.139516230+00:00 stderr F I1013 00:15:02.139483 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="builder-dockercfg-lbblj" serviceaccount="builder" 2025-10-13T00:15:02.159836349+00:00 stderr F I1013 00:15:02.159771 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="default-dockercfg-rltwn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-07-20 19:43:30.136107262 +0000 UTC" 2025-10-13T00:15:02.159836349+00:00 stderr F I1013 00:15:02.159809 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="default-dockercfg-rltwn" serviceaccount="default" 2025-10-13T00:15:02.173128998+00:00 stderr F I1013 00:15:02.173036 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="deployer-dockercfg-8tp68" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-07-20 19:43:30.130796955 +0000 UTC" 2025-10-13T00:15:02.173128998+00:00 stderr F I1013 00:15:02.173068 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="deployer-dockercfg-8tp68" serviceaccount="deployer" 2025-10-13T00:15:02.202891089+00:00 stderr F W1013 00:15:02.202830 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:02.202891089+00:00 stderr F E1013 00:15:02.202877 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get 
imagestreams.image.openshift.io) 2025-10-13T00:15:02.207201919+00:00 stderr F I1013 00:15:02.207142 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="openshift-config-operator-dockercfg-6jthd" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-07-20 19:43:30.117155378 +0000 UTC" 2025-10-13T00:15:02.207201919+00:00 stderr F I1013 00:15:02.207170 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="openshift-config-operator-dockercfg-6jthd" serviceaccount="openshift-config-operator" 2025-10-13T00:15:02.219383893+00:00 stderr F I1013 00:15:02.219321 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="builder-dockercfg-c75dg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-07-20 19:43:30.112284088 +0000 UTC" 2025-10-13T00:15:02.219383893+00:00 stderr F I1013 00:15:02.219364 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="builder-dockercfg-c75dg" serviceaccount="builder" 2025-10-13T00:15:02.274796184+00:00 stderr F I1013 00:15:02.269938 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="default-dockercfg-hbnsp" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-07-20 19:43:30.092038217 +0000 UTC" 2025-10-13T00:15:02.274796184+00:00 stderr F I1013 00:15:02.269971 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="default-dockercfg-hbnsp" serviceaccount="default" 2025-10-13T00:15:02.290765502+00:00 stderr F I1013 00:15:02.290658 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="deployer-dockercfg-q8mb8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.883750197 +0000 UTC" 2025-10-13T00:15:02.290765502+00:00 stderr F I1013 00:15:02.290688 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="deployer-dockercfg-q8mb8" serviceaccount="deployer" 2025-10-13T00:15:02.332423140+00:00 stderr F I1013 00:15:02.325422 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="builder-dockercfg-5h26t" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.869847653 +0000 UTC" 2025-10-13T00:15:02.332423140+00:00 stderr F I1013 00:15:02.325459 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="builder-dockercfg-5h26t" serviceaccount="builder" 2025-10-13T00:15:02.339508503+00:00 stderr F I1013 00:15:02.338104 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="console-operator-dockercfg-lwp4z" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 
19:43:32.864775173 +0000 UTC" 2025-10-13T00:15:02.339508503+00:00 stderr F I1013 00:15:02.338145 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="console-operator-dockercfg-lwp4z" serviceaccount="console-operator" 2025-10-13T00:15:02.383397228+00:00 stderr F I1013 00:15:02.379847 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="default-dockercfg-vgw7h" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.84807706 +0000 UTC" 2025-10-13T00:15:02.383397228+00:00 stderr F I1013 00:15:02.379888 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="default-dockercfg-vgw7h" serviceaccount="default" 2025-10-13T00:15:02.407906962+00:00 stderr F I1013 00:15:02.405107 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="deployer-dockercfg-cgf7g" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.837970321 +0000 UTC" 2025-10-13T00:15:02.409812509+00:00 stderr F I1013 00:15:02.405138 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="deployer-dockercfg-cgf7g" serviceaccount="deployer" 2025-10-13T00:15:02.438546710+00:00 stderr F I1013 00:15:02.435095 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="builder-dockercfg-s9kk5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.225974954 +0000 UTC" 2025-10-13T00:15:02.438546710+00:00 stderr F I1013 00:15:02.435126 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="builder-dockercfg-s9kk5" serviceaccount="builder" 2025-10-13T00:15:02.453464507+00:00 stderr F I1013 00:15:02.452541 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="default-dockercfg-mkcsd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.818995573 +0000 UTC" 2025-10-13T00:15:02.453464507+00:00 stderr F I1013 00:15:02.452576 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="default-dockercfg-mkcsd" serviceaccount="default" 2025-10-13T00:15:02.469432495+00:00 stderr F I1013 00:15:02.468968 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="deployer-dockercfg-s5pld" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.812426136 +0000 UTC" 2025-10-13T00:15:02.469432495+00:00 stderr F I1013 00:15:02.468998 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="deployer-dockercfg-s5pld" serviceaccount="deployer" 2025-10-13T00:15:02.525456844+00:00 stderr F I1013 00:15:02.523826 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to 
be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="builder-dockercfg-nmnq6" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-07-20 19:43:32.790493319 +0000 UTC" 2025-10-13T00:15:02.525456844+00:00 stderr F I1013 00:15:02.524054 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="builder-dockercfg-nmnq6" serviceaccount="builder" 2025-10-13T00:15:02.540376541+00:00 stderr F I1013 00:15:02.539779 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="console-dockercfg-ng44q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.184116176 +0000 UTC" 2025-10-13T00:15:02.540376541+00:00 stderr F I1013 00:15:02.539827 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="console-dockercfg-ng44q" serviceaccount="console" 2025-10-13T00:15:02.564373090+00:00 stderr F I1013 00:15:02.563929 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="default-dockercfg-bv4gd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.174439166 +0000 UTC" 2025-10-13T00:15:02.564373090+00:00 stderr F I1013 00:15:02.563962 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="default-dockercfg-bv4gd" serviceaccount="default" 2025-10-13T00:15:02.588461422+00:00 stderr F I1013 00:15:02.588068 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="deployer-dockercfg-mpsf7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.164784337 +0000 UTC" 2025-10-13T00:15:02.588535814+00:00 stderr F I1013 00:15:02.588523 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="deployer-dockercfg-mpsf7" serviceaccount="deployer" 2025-10-13T00:15:02.610383349+00:00 stderr F I1013 00:15:02.610016 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="builder-dockercfg-rn5hk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.156004989 +0000 UTC" 2025-10-13T00:15:02.610383349+00:00 stderr F I1013 00:15:02.610040 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="builder-dockercfg-rn5hk" serviceaccount="builder" 2025-10-13T00:15:02.655896282+00:00 stderr F I1013 00:15:02.655830 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="default-dockercfg-hmzqd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.137680121 +0000 UTC" 2025-10-13T00:15:02.655896282+00:00 stderr F I1013 00:15:02.655857 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="default-dockercfg-hmzqd" serviceaccount="default" 
2025-10-13T00:15:02.676688695+00:00 stderr F I1013 00:15:02.673998 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="deployer-dockercfg-8mhlz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.130415827 +0000 UTC" 2025-10-13T00:15:02.676688695+00:00 stderr F I1013 00:15:02.674033 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="deployer-dockercfg-8mhlz" serviceaccount="deployer" 2025-10-13T00:15:02.699633783+00:00 stderr F I1013 00:15:02.699286 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="openshift-controller-manager-operator-dockercfg-zx7mb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.120300792 +0000 UTC" 2025-10-13T00:15:02.699633783+00:00 stderr F I1013 00:15:02.699319 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="openshift-controller-manager-operator-dockercfg-zx7mb" serviceaccount="openshift-controller-manager-operator" 2025-10-13T00:15:02.727758135+00:00 stderr F I1013 00:15:02.727432 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="builder-dockercfg-gmnbf" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.10903823 +0000 UTC" 2025-10-13T00:15:02.727758135+00:00 stderr F I1013 00:15:02.727455 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="builder-dockercfg-gmnbf" serviceaccount="builder" 2025-10-13T00:15:02.743400104+00:00 stderr F I1013 00:15:02.741649 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="default-dockercfg-vdmzk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-07-20 19:43:34.10335606 +0000 UTC" 2025-10-13T00:15:02.743400104+00:00 stderr F I1013 00:15:02.741680 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="default-dockercfg-vdmzk" serviceaccount="default" 2025-10-13T00:15:02.787665820+00:00 stderr F I1013 00:15:02.787597 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="deployer-dockercfg-q4jdx" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-07-20 19:43:35.484976993 +0000 UTC" 2025-10-13T00:15:02.787665820+00:00 stderr F I1013 00:15:02.787632 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="deployer-dockercfg-q4jdx" serviceaccount="deployer" 2025-10-13T00:15:02.813153324+00:00 stderr F I1013 00:15:02.813091 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="openshift-controller-manager-sa-dockercfg-58g82" url="10.217.4.41:5000" 
expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-07-20 19:43:35.474780468 +0000 UTC" 2025-10-13T00:15:02.813153324+00:00 stderr F I1013 00:15:02.813128 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="openshift-controller-manager-sa-dockercfg-58g82" serviceaccount="openshift-controller-manager-sa" 2025-10-13T00:15:02.845486853+00:00 stderr F I1013 00:15:02.845365 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="builder-dockercfg-pnlmc" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-07-20 19:43:35.461872669 +0000 UTC" 2025-10-13T00:15:02.845486853+00:00 stderr F I1013 00:15:02.845389 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="builder-dockercfg-pnlmc" serviceaccount="builder" 2025-10-13T00:15:02.856840123+00:00 stderr F I1013 00:15:02.856790 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="default-dockercfg-zzdtv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-07-20 19:43:35.45729738 +0000 UTC" 2025-10-13T00:15:02.856840123+00:00 stderr F I1013 00:15:02.856819 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="default-dockercfg-zzdtv" serviceaccount="default" 2025-10-13T00:15:02.865784431+00:00 stderr F I1013 00:15:02.865691 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="deployer-dockercfg-ft65g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-07-20 19:43:35.453732913 +0000 UTC" 2025-10-13T00:15:02.865784431+00:00 stderr F I1013 00:15:02.865713 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="deployer-dockercfg-ft65g" serviceaccount="deployer" 2025-10-13T00:15:02.924797239+00:00 stderr F I1013 00:15:02.924413 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="dns-operator-dockercfg-wgzbx" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-07-20 19:43:36.830248514 +0000 UTC" 2025-10-13T00:15:02.924797239+00:00 stderr F I1013 00:15:02.924446 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="dns-operator-dockercfg-wgzbx" serviceaccount="dns-operator" 2025-10-13T00:15:02.951097417+00:00 stderr F I1013 00:15:02.950839 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="builder-dockercfg-hlsv2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-07-20 19:43:36.819676137 +0000 UTC" 2025-10-13T00:15:02.951097417+00:00 stderr F I1013 00:15:02.950867 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="builder-dockercfg-hlsv2" serviceaccount="builder" 2025-10-13T00:15:02.977419326+00:00 stderr F I1013 00:15:02.977245 1 image_pull_secret_controller.go:286] "Internal registry 
pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="default-dockercfg-4pr8h" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-07-20 19:43:36.809114709 +0000 UTC" 2025-10-13T00:15:02.977419326+00:00 stderr F I1013 00:15:02.977272 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="default-dockercfg-4pr8h" serviceaccount="default" 2025-10-13T00:15:02.995373904+00:00 stderr F I1013 00:15:02.991990 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="deployer-dockercfg-45hhc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-07-20 19:43:36.803224416 +0000 UTC" 2025-10-13T00:15:02.995373904+00:00 stderr F I1013 00:15:02.992018 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="deployer-dockercfg-45hhc" serviceaccount="deployer" 2025-10-13T00:15:03.008265550+00:00 stderr F I1013 00:15:03.007745 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="dns-dockercfg-dff28" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:18 +0000 UTC" refreshTime="2025-07-20 19:43:47.996915043 +0000 UTC" 2025-10-13T00:15:03.008265550+00:00 stderr F I1013 00:15:03.007773 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="dns-dockercfg-dff28" serviceaccount="dns" 2025-10-13T00:15:03.068251577+00:00 stderr F I1013 00:15:03.067755 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="node-resolver-dockercfg-5kr6x" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:18 +0000 UTC" refreshTime="2025-07-20 19:43:47.972911308 +0000 UTC" 2025-10-13T00:15:03.068251577+00:00 stderr F I1013 00:15:03.067783 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="node-resolver-dockercfg-5kr6x" serviceaccount="node-resolver" 2025-10-13T00:15:03.088090922+00:00 stderr F I1013 00:15:03.087886 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="builder-dockercfg-sf67n" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-07-20 19:43:50.764860423 +0000 UTC" 2025-10-13T00:15:03.088090922+00:00 stderr F I1013 00:15:03.087920 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="builder-dockercfg-sf67n" serviceaccount="builder" 2025-10-13T00:15:03.114797602+00:00 stderr F I1013 00:15:03.113845 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="default-dockercfg-xdg4w" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-07-20 19:43:50.754473255 +0000 UTC" 2025-10-13T00:15:03.114797602+00:00 stderr F I1013 00:15:03.113871 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="default-dockercfg-xdg4w" serviceaccount="default" 2025-10-13T00:15:03.132027618+00:00 
stderr F I1013 00:15:03.131858 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="deployer-dockercfg-zmpgs" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-07-20 19:43:50.747267187 +0000 UTC" 2025-10-13T00:15:03.132027618+00:00 stderr F I1013 00:15:03.131885 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="deployer-dockercfg-zmpgs" serviceaccount="deployer" 2025-10-13T00:15:03.136405939+00:00 stderr F I1013 00:15:03.136176 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="etcd-operator-dockercfg-hwzhz" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.145542128 +0000 UTC" 2025-10-13T00:15:03.136405939+00:00 stderr F I1013 00:15:03.136207 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="etcd-operator-dockercfg-hwzhz" serviceaccount="etcd-operator" 2025-10-13T00:15:03.186548692+00:00 stderr F I1013 00:15:03.185995 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="builder-dockercfg-sqwsk" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.125614043 +0000 UTC" 2025-10-13T00:15:03.186548692+00:00 stderr F I1013 00:15:03.186024 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="builder-dockercfg-sqwsk" serviceaccount="builder" 2025-10-13T00:15:03.227941902+00:00 stderr F I1013 00:15:03.227875 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="default-dockercfg-vd62w" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-07-20 19:43:50.708862625 +0000 UTC" 2025-10-13T00:15:03.227941902+00:00 stderr F I1013 00:15:03.227902 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="default-dockercfg-vd62w" serviceaccount="default" 2025-10-13T00:15:03.251883189+00:00 stderr F I1013 00:15:03.251821 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="deployer-dockercfg-p6hbm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.099290802 +0000 UTC" 2025-10-13T00:15:03.251883189+00:00 stderr F I1013 00:15:03.251861 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="deployer-dockercfg-p6hbm" serviceaccount="deployer" 2025-10-13T00:15:03.259242580+00:00 stderr F I1013 00:15:03.259185 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="etcd-backup-sa-dockercfg-rd8b5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.096342933 +0000 UTC" 2025-10-13T00:15:03.259242580+00:00 stderr F I1013 00:15:03.259218 1 image_pull_secret_controller.go:163] "Refreshing image pull 
secret" ns="openshift-etcd" name="etcd-backup-sa-dockercfg-rd8b5" serviceaccount="etcd-backup-sa" 2025-10-13T00:15:03.270371393+00:00 stderr F I1013 00:15:03.268948 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="etcd-sa-dockercfg-cgskw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.092447945 +0000 UTC" 2025-10-13T00:15:03.270371393+00:00 stderr F I1013 00:15:03.268995 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="etcd-sa-dockercfg-cgskw" serviceaccount="etcd-sa" 2025-10-13T00:15:03.320851996+00:00 stderr F I1013 00:15:03.320782 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="installer-sa-dockercfg-gxvhz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.071707815 +0000 UTC" 2025-10-13T00:15:03.320851996+00:00 stderr F I1013 00:15:03.320814 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="installer-sa-dockercfg-gxvhz" serviceaccount="installer-sa" 2025-10-13T00:15:03.377165422+00:00 stderr F I1013 00:15:03.375684 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="builder-dockercfg-h5pg5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.04974337 +0000 UTC" 2025-10-13T00:15:03.377165422+00:00 stderr F I1013 00:15:03.375721 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="builder-dockercfg-h5pg5" serviceaccount="builder" 2025-10-13T00:15:03.381351017+00:00 stderr F I1013 00:15:03.381285 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="default-dockercfg-swwqf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-07-20 19:43:52.047503571 +0000 UTC" 2025-10-13T00:15:03.381351017+00:00 stderr F I1013 00:15:03.381341 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="default-dockercfg-swwqf" serviceaccount="default" 2025-10-13T00:15:03.401231603+00:00 stderr F I1013 00:15:03.401172 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="deployer-dockercfg-ddh74" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-07-20 19:49:37.83954428 +0000 UTC" 2025-10-13T00:15:03.401231603+00:00 stderr F I1013 00:15:03.401203 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="deployer-dockercfg-ddh74" serviceaccount="deployer" 2025-10-13T00:15:03.409405508+00:00 stderr F I1013 00:15:03.409076 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="builder-dockercfg-2jkwc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-07-20 19:49:37.836382017 +0000 UTC" 2025-10-13T00:15:03.409405508+00:00 stderr F I1013 00:15:03.409103 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="builder-dockercfg-2jkwc" serviceaccount="builder" 2025-10-13T00:15:03.454473008+00:00 stderr F I1013 00:15:03.454406 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="cluster-image-registry-operator-dockercfg-ddjzq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-07-20 19:49:37.818286215 +0000 UTC" 2025-10-13T00:15:03.454473008+00:00 stderr F I1013 00:15:03.454450 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="cluster-image-registry-operator-dockercfg-ddjzq" serviceaccount="cluster-image-registry-operator" 2025-10-13T00:15:03.503028253+00:00 stderr F I1013 00:15:03.502926 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="default-dockercfg-w58sb" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-07-20 19:49:37.798851797 +0000 UTC" 2025-10-13T00:15:03.503028253+00:00 stderr F I1013 00:15:03.502971 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="default-dockercfg-w58sb" serviceaccount="default" 2025-10-13T00:15:03.522496036+00:00 stderr F I1013 00:15:03.522414 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="deployer-dockercfg-5sk9l" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-07-20 19:49:37.791046563 +0000 UTC" 2025-10-13T00:15:03.522496036+00:00 stderr F I1013 00:15:03.522443 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="deployer-dockercfg-5sk9l" serviceaccount="deployer" 2025-10-13T00:15:03.536084623+00:00 stderr F I1013 00:15:03.534719 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="node-ca-dockercfg-mcgx9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.186125344 +0000 UTC" 2025-10-13T00:15:03.536084623+00:00 stderr F I1013 00:15:03.534750 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="node-ca-dockercfg-mcgx9" serviceaccount="node-ca" 2025-10-13T00:15:03.538760603+00:00 stderr F I1013 00:15:03.538715 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="pruner-dockercfg-nzhll" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.184533072 +0000 UTC" 2025-10-13T00:15:03.538778904+00:00 stderr F I1013 00:15:03.538755 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="pruner-dockercfg-nzhll" serviceaccount="pruner" 2025-10-13T00:15:03.592033860+00:00 stderr F I1013 00:15:03.591844 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" 
reason="auth token needs to be refreshed" ns="openshift-image-registry" name="registry-dockercfg-q786x" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.163276447 +0000 UTC" 2025-10-13T00:15:03.592033860+00:00 stderr F I1013 00:15:03.591875 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="registry-dockercfg-q786x" serviceaccount="registry" 2025-10-13T00:15:03.649125540+00:00 stderr F I1013 00:15:03.647793 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="build-config-change-controller-dockercfg-x9cbn" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.140910457 +0000 UTC" 2025-10-13T00:15:03.649125540+00:00 stderr F I1013 00:15:03.647846 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="build-config-change-controller-dockercfg-x9cbn" serviceaccount="build-config-change-controller" 2025-10-13T00:15:03.649125540+00:00 stderr F I1013 00:15:03.648202 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="build-controller-dockercfg-6s44z" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.140728351 +0000 UTC" 2025-10-13T00:15:03.649125540+00:00 stderr F I1013 00:15:03.648216 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="build-controller-dockercfg-6s44z" serviceaccount="build-controller" 2025-10-13T00:15:03.664397278+00:00 stderr F I1013 00:15:03.664320 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="builder-dockercfg-ztkx9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.134288459 +0000 UTC" 2025-10-13T00:15:03.664397278+00:00 stderr F I1013 00:15:03.664369 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="builder-dockercfg-ztkx9" serviceaccount="builder" 2025-10-13T00:15:03.678461139+00:00 stderr F I1013 00:15:03.678163 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="cluster-csr-approver-controller-dockercfg-4n58l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.128755365 +0000 UTC" 2025-10-13T00:15:03.678461139+00:00 stderr F I1013 00:15:03.678191 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="cluster-csr-approver-controller-dockercfg-4n58l" serviceaccount="cluster-csr-approver-controller" 2025-10-13T00:15:03.715210810+00:00 stderr F W1013 00:15:03.715156 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:03.715210810+00:00 stderr F E1013 00:15:03.715193 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed 
to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:03.736760876+00:00 stderr F I1013 00:15:03.736700 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="cluster-quota-reconciliation-controller-dockercfg-6clv4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-07-20 19:49:39.105331987 +0000 UTC" 2025-10-13T00:15:03.736760876+00:00 stderr F I1013 00:15:03.736746 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="cluster-quota-reconciliation-controller-dockercfg-6clv4" serviceaccount="cluster-quota-reconciliation-controller" 2025-10-13T00:15:03.772840187+00:00 stderr F I1013 00:15:03.772781 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="default-dockercfg-qcclx" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.490899303 +0000 UTC" 2025-10-13T00:15:03.772878488+00:00 stderr F I1013 00:15:03.772844 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="default-dockercfg-qcclx" serviceaccount="default" 2025-10-13T00:15:03.782619450+00:00 stderr F I1013 00:15:03.782556 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="default-rolebindings-controller-dockercfg-mjvl7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.486990629 +0000 UTC" 2025-10-13T00:15:03.782619450+00:00 stderr F I1013 00:15:03.782588 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="default-rolebindings-controller-dockercfg-mjvl7" serviceaccount="default-rolebindings-controller" 2025-10-13T00:15:03.796160406+00:00 stderr F I1013 00:15:03.796096 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deployer-controller-dockercfg-nps8b" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.481573603 +0000 UTC" 2025-10-13T00:15:03.796160406+00:00 stderr F I1013 00:15:03.796138 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deployer-controller-dockercfg-nps8b" serviceaccount="deployer-controller" 2025-10-13T00:15:03.824201556+00:00 stderr F I1013 00:15:03.824139 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deployer-dockercfg-fjtnq" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.470358975 +0000 UTC" 2025-10-13T00:15:03.824201556+00:00 stderr F I1013 00:15:03.824172 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deployer-dockercfg-fjtnq" serviceaccount="deployer" 2025-10-13T00:15:03.886380209+00:00 stderr F I1013 00:15:03.886310 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-infra" name="deploymentconfig-controller-dockercfg-7sjgp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.44548799 +0000 UTC" 2025-10-13T00:15:03.886380209+00:00 stderr F I1013 00:15:03.886355 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deploymentconfig-controller-dockercfg-7sjgp" serviceaccount="deploymentconfig-controller" 2025-10-13T00:15:03.909062838+00:00 stderr F I1013 00:15:03.908999 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="image-import-controller-dockercfg-wtcck" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.436418598 +0000 UTC" 2025-10-13T00:15:03.909062838+00:00 stderr F I1013 00:15:03.909030 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="image-import-controller-dockercfg-wtcck" serviceaccount="image-import-controller" 2025-10-13T00:15:03.928388247+00:00 stderr F I1013 00:15:03.925434 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="image-trigger-controller-dockercfg-75z9g" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.429839581 +0000 UTC" 2025-10-13T00:15:03.928388247+00:00 stderr F I1013 00:15:03.925720 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="image-trigger-controller-dockercfg-75z9g" serviceaccount="image-trigger-controller" 2025-10-13T00:15:03.942442979+00:00 stderr F I1013 00:15:03.939759 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="ingress-to-route-controller-dockercfg-486s5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.424110857 +0000 UTC" 2025-10-13T00:15:03.942442979+00:00 stderr F I1013 00:15:03.939791 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="ingress-to-route-controller-dockercfg-486s5" serviceaccount="ingress-to-route-controller" 2025-10-13T00:15:03.950593253+00:00 stderr F W1013 00:15:03.950540 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-10-13T00:15:03.950593253+00:00 stderr F E1013 00:15:03.950569 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-10-13T00:15:03.963376336+00:00 stderr F W1013 00:15:03.959135 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:03.963376336+00:00 stderr F E1013 00:15:03.959165 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get 
builds.build.openshift.io) 2025-10-13T00:15:03.999616062+00:00 stderr F I1013 00:15:03.999462 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="namespace-security-allocation-controller-dockercfg-d9nzv" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.400227482 +0000 UTC" 2025-10-13T00:15:03.999616062+00:00 stderr F I1013 00:15:03.999487 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="namespace-security-allocation-controller-dockercfg-d9nzv" serviceaccount="namespace-security-allocation-controller" 2025-10-13T00:15:04.024435005+00:00 stderr F I1013 00:15:04.023695 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="node-bootstrapper-dockercfg-mj85j" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.390534411 +0000 UTC" 2025-10-13T00:15:04.024435005+00:00 stderr F I1013 00:15:04.023722 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="node-bootstrapper-dockercfg-mj85j" serviceaccount="node-bootstrapper" 2025-10-13T00:15:04.044379963+00:00 stderr F I1013 00:15:04.044266 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="origin-namespace-controller-dockercfg-5s4zt" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.382306449 +0000 UTC" 2025-10-13T00:15:04.044379963+00:00 stderr F I1013 00:15:04.044296 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="origin-namespace-controller-dockercfg-5s4zt" serviceaccount="origin-namespace-controller" 2025-10-13T00:15:04.077889487+00:00 stderr F I1013 00:15:04.077825 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="podsecurity-admission-label-syncer-controller-dockercfg-b5pxh" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.368883671 +0000 UTC" 2025-10-13T00:15:04.077889487+00:00 stderr F I1013 00:15:04.077856 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="podsecurity-admission-label-syncer-controller-dockercfg-b5pxh" serviceaccount="podsecurity-admission-label-syncer-controller" 2025-10-13T00:15:04.085025481+00:00 stderr F I1013 00:15:04.084971 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="privileged-namespaces-psa-label-syncer-dockercfg-lm8jh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.366023013 +0000 UTC" 2025-10-13T00:15:04.085025481+00:00 stderr F I1013 00:15:04.085001 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="privileged-namespaces-psa-label-syncer-dockercfg-lm8jh" serviceaccount="privileged-namespaces-psa-label-syncer" 2025-10-13T00:15:04.110968918+00:00 stderr F I1013 00:15:04.110857 1 image_pull_secret_controller.go:286] "Internal 
registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="pv-recycler-controller-dockercfg-d76pz" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.355671767 +0000 UTC" 2025-10-13T00:15:04.110968918+00:00 stderr F I1013 00:15:04.110894 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="pv-recycler-controller-dockercfg-d76pz" serviceaccount="pv-recycler-controller" 2025-10-13T00:15:04.164784000+00:00 stderr F I1013 00:15:04.164442 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="resourcequota-controller-dockercfg-mlv87" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.334237905 +0000 UTC" 2025-10-13T00:15:04.164809451+00:00 stderr F I1013 00:15:04.164759 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="resourcequota-controller-dockercfg-mlv87" serviceaccount="resourcequota-controller" 2025-10-13T00:15:04.177585164+00:00 stderr F W1013 00:15:04.177485 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-10-13T00:15:04.177585164+00:00 stderr F E1013 00:15:04.177537 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-10-13T00:15:04.183523352+00:00 stderr F I1013 00:15:04.183451 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="serviceaccount-controller-dockercfg-l8hfk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.326633141 +0000 UTC" 2025-10-13T00:15:04.183523352+00:00 stderr F I1013 00:15:04.183479 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="serviceaccount-controller-dockercfg-l8hfk" serviceaccount="serviceaccount-controller" 2025-10-13T00:15:04.211164320+00:00 stderr F I1013 00:15:04.211112 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="serviceaccount-pull-secrets-controller-dockercfg-hshqh" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.315569321 +0000 UTC" 2025-10-13T00:15:04.211164320+00:00 stderr F I1013 00:15:04.211145 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="serviceaccount-pull-secrets-controller-dockercfg-hshqh" serviceaccount="serviceaccount-pull-secrets-controller" 2025-10-13T00:15:04.217017335+00:00 stderr F I1013 00:15:04.216967 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="template-instance-controller-dockercfg-f72bl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 
19:49:41.713225079 +0000 UTC" 2025-10-13T00:15:04.217017335+00:00 stderr F I1013 00:15:04.216996 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="template-instance-controller-dockercfg-f72bl" serviceaccount="template-instance-controller" 2025-10-13T00:15:04.232095897+00:00 stderr F W1013 00:15:04.232036 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-10-13T00:15:04.232095897+00:00 stderr F E1013 00:15:04.232069 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-10-13T00:15:04.250687784+00:00 stderr F I1013 00:15:04.250625 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="template-instance-finalizer-controller-dockercfg-xwvr9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-07-20 19:49:40.29976436 +0000 UTC" 2025-10-13T00:15:04.250687784+00:00 stderr F I1013 00:15:04.250653 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="template-instance-finalizer-controller-dockercfg-xwvr9" serviceaccount="template-instance-finalizer-controller" 2025-10-13T00:15:04.297463296+00:00 stderr F I1013 00:15:04.297394 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="unidling-controller-dockercfg-ndddq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.681055787 +0000 UTC" 2025-10-13T00:15:04.297463296+00:00 stderr F I1013 00:15:04.297422 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="unidling-controller-dockercfg-ndddq" serviceaccount="unidling-controller" 2025-10-13T00:15:04.303052263+00:00 stderr F I1013 00:15:04.303014 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="builder-dockercfg-jjc4r" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.678802584 +0000 UTC" 2025-10-13T00:15:04.303052263+00:00 stderr F I1013 00:15:04.303038 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="builder-dockercfg-jjc4r" serviceaccount="builder" 2025-10-13T00:15:04.342838805+00:00 stderr F I1013 00:15:04.342772 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="default-dockercfg-4clxc" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.662903879 +0000 UTC" 2025-10-13T00:15:04.342838805+00:00 stderr F I1013 00:15:04.342803 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="default-dockercfg-4clxc" serviceaccount="default" 2025-10-13T00:15:04.357242057+00:00 stderr F I1013 00:15:04.357175 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" 
reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="deployer-dockercfg-njf4l" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.657149547 +0000 UTC" 2025-10-13T00:15:04.357242057+00:00 stderr F I1013 00:15:04.357203 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="deployer-dockercfg-njf4l" serviceaccount="deployer" 2025-10-13T00:15:04.402758850+00:00 stderr F I1013 00:15:04.402690 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="builder-dockercfg-qnlh9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.643956008 +0000 UTC" 2025-10-13T00:15:04.402758850+00:00 stderr F I1013 00:15:04.402733 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="builder-dockercfg-qnlh9" serviceaccount="builder" 2025-10-13T00:15:04.431905384+00:00 stderr F I1013 00:15:04.431844 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="default-dockercfg-dbsd9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.62727594 +0000 UTC" 2025-10-13T00:15:04.431905384+00:00 stderr F I1013 00:15:04.431875 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="default-dockercfg-dbsd9" serviceaccount="default" 2025-10-13T00:15:04.449279754+00:00 stderr F I1013 00:15:04.449219 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="deployer-dockercfg-m9j7c" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.620329364 +0000 UTC" 2025-10-13T00:15:04.449312995+00:00 stderr F I1013 00:15:04.449278 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="deployer-dockercfg-m9j7c" serviceaccount="deployer" 2025-10-13T00:15:04.485456768+00:00 stderr F I1013 00:15:04.485370 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="ingress-operator-dockercfg-sxxwd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.605876187 +0000 UTC" 2025-10-13T00:15:04.485456768+00:00 stderr F I1013 00:15:04.485406 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="ingress-operator-dockercfg-sxxwd" serviceaccount="ingress-operator" 2025-10-13T00:15:04.488554971+00:00 stderr F W1013 00:15:04.488518 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:04.488554971+00:00 stderr F E1013 00:15:04.488549 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to 
handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:04.489864590+00:00 stderr F I1013 00:15:04.489823 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="builder-dockercfg-dc6f6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.604091709 +0000 UTC" 2025-10-13T00:15:04.489864590+00:00 stderr F I1013 00:15:04.489852 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="builder-dockercfg-dc6f6" serviceaccount="builder" 2025-10-13T00:15:04.543627901+00:00 stderr F I1013 00:15:04.543570 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="default-dockercfg-dvqwl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.582585745 +0000 UTC" 2025-10-13T00:15:04.543627901+00:00 stderr F I1013 00:15:04.543600 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="default-dockercfg-dvqwl" serviceaccount="default" 2025-10-13T00:15:04.563642971+00:00 stderr F I1013 00:15:04.563558 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="deployer-dockercfg-6cpmp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.574591745 +0000 UTC" 2025-10-13T00:15:04.563642971+00:00 stderr F I1013 00:15:04.563590 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="deployer-dockercfg-6cpmp" serviceaccount="deployer" 2025-10-13T00:15:04.577499126+00:00 stderr F I1013 00:15:04.577390 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="router-dockercfg-n864z" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.569054039 +0000 UTC" 2025-10-13T00:15:04.577499126+00:00 stderr F I1013 00:15:04.577415 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="router-dockercfg-n864z" serviceaccount="router" 2025-10-13T00:15:04.617402712+00:00 stderr F I1013 00:15:04.617315 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="builder-dockercfg-pzdvv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.553085881 +0000 UTC" 2025-10-13T00:15:04.617402712+00:00 stderr F I1013 00:15:04.617358 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="builder-dockercfg-pzdvv" serviceaccount="builder" 2025-10-13T00:15:04.630637558+00:00 stderr F I1013 00:15:04.629826 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="default-dockercfg-2zsnk" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.548080295 +0000 UTC" 2025-10-13T00:15:04.630637558+00:00 stderr F I1013 00:15:04.629852 1 image_pull_secret_controller.go:163] 
"Refreshing image pull secret" ns="openshift-kni-infra" name="default-dockercfg-2zsnk" serviceaccount="default" 2025-10-13T00:15:04.689528513+00:00 stderr F I1013 00:15:04.687879 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="deployer-dockercfg-v52pl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.524860375 +0000 UTC" 2025-10-13T00:15:04.689528513+00:00 stderr F I1013 00:15:04.687910 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="deployer-dockercfg-v52pl" serviceaccount="deployer" 2025-10-13T00:15:04.697856912+00:00 stderr F I1013 00:15:04.697083 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="builder-dockercfg-2cs69" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.521176517 +0000 UTC" 2025-10-13T00:15:04.697856912+00:00 stderr F I1013 00:15:04.697107 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="builder-dockercfg-2cs69" serviceaccount="builder" 2025-10-13T00:15:04.721125019+00:00 stderr F I1013 00:15:04.721048 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="default-dockercfg-7dskq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-07-20 19:49:41.511592618 +0000 UTC" 2025-10-13T00:15:04.721125019+00:00 stderr F I1013 00:15:04.721077 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="default-dockercfg-7dskq" serviceaccount="default" 2025-10-13T00:15:04.748284043+00:00 stderr F I1013 00:15:04.747589 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="deployer-dockercfg-xjjdg" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.900999564 +0000 UTC" 2025-10-13T00:15:04.748284043+00:00 stderr F I1013 00:15:04.747617 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="deployer-dockercfg-xjjdg" serviceaccount="deployer" 2025-10-13T00:15:04.781625002+00:00 stderr F I1013 00:15:04.781566 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="kube-apiserver-operator-dockercfg-n4bm9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.887391489 +0000 UTC" 2025-10-13T00:15:04.781700734+00:00 stderr F I1013 00:15:04.781687 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="kube-apiserver-operator-dockercfg-n4bm9" serviceaccount="kube-apiserver-operator" 2025-10-13T00:15:04.820934350+00:00 stderr F I1013 00:15:04.818059 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" 
name="builder-dockercfg-rrcrf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.872796691 +0000 UTC" 2025-10-13T00:15:04.820934350+00:00 stderr F I1013 00:15:04.818088 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="builder-dockercfg-rrcrf" serviceaccount="builder" 2025-10-13T00:15:04.831898928+00:00 stderr F I1013 00:15:04.831456 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="default-dockercfg-dlw8f" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.867429079 +0000 UTC" 2025-10-13T00:15:04.831898928+00:00 stderr F I1013 00:15:04.831484 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="default-dockercfg-dlw8f" serviceaccount="default" 2025-10-13T00:15:04.864045571+00:00 stderr F I1013 00:15:04.863945 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="deployer-dockercfg-vr4tw" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.854445709 +0000 UTC" 2025-10-13T00:15:04.864045571+00:00 stderr F I1013 00:15:04.864031 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="deployer-dockercfg-vr4tw" serviceaccount="deployer" 2025-10-13T00:15:04.877969459+00:00 stderr F I1013 00:15:04.877892 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="installer-sa-dockercfg-4kgh8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.848858162 +0000 UTC" 2025-10-13T00:15:04.877969459+00:00 stderr F I1013 00:15:04.877922 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="installer-sa-dockercfg-4kgh8" serviceaccount="installer-sa" 2025-10-13T00:15:04.928996148+00:00 stderr F I1013 00:15:04.928947 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="localhost-recovery-client-dockercfg-qll5d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.828432218 +0000 UTC" 2025-10-13T00:15:04.929077390+00:00 stderr F I1013 00:15:04.929061 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="localhost-recovery-client-dockercfg-qll5d" serviceaccount="localhost-recovery-client" 2025-10-13T00:15:04.963365917+00:00 stderr F I1013 00:15:04.963289 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="builder-dockercfg-lfl8l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.814699899 +0000 UTC" 2025-10-13T00:15:04.963365917+00:00 stderr F I1013 00:15:04.963322 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="builder-dockercfg-lfl8l" 
serviceaccount="builder" 2025-10-13T00:15:04.972471600+00:00 stderr F I1013 00:15:04.972033 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="default-dockercfg-ztmg5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.81120247 +0000 UTC" 2025-10-13T00:15:04.972471600+00:00 stderr F I1013 00:15:04.972072 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="default-dockercfg-ztmg5" serviceaccount="default" 2025-10-13T00:15:04.984567263+00:00 stderr F I1013 00:15:04.984429 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="deployer-dockercfg-qpmjq" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.806245553 +0000 UTC" 2025-10-13T00:15:04.984567263+00:00 stderr F I1013 00:15:04.984464 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="deployer-dockercfg-qpmjq" serviceaccount="deployer" 2025-10-13T00:15:05.018021675+00:00 stderr F I1013 00:15:05.017946 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="kube-controller-manager-operator-dockercfg-mwmd7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.792839436 +0000 UTC" 2025-10-13T00:15:05.018021675+00:00 stderr F I1013 00:15:05.017990 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="kube-controller-manager-operator-dockercfg-mwmd7" serviceaccount="kube-controller-manager-operator" 2025-10-13T00:15:05.069299851+00:00 stderr F I1013 00:15:05.069227 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="builder-dockercfg-4xp92" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.772331889 +0000 UTC" 2025-10-13T00:15:05.069299851+00:00 stderr F I1013 00:15:05.069272 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="builder-dockercfg-4xp92" serviceaccount="builder" 2025-10-13T00:15:05.098604219+00:00 stderr F I1013 00:15:05.098538 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="default-dockercfg-8rtql" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.760609222 +0000 UTC" 2025-10-13T00:15:05.098639470+00:00 stderr F I1013 00:15:05.098599 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="default-dockercfg-8rtql" serviceaccount="default" 2025-10-13T00:15:05.112512196+00:00 stderr F I1013 00:15:05.111427 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-kube-controller-manager" name="deployer-dockercfg-bnp5r" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.755445527 +0000 UTC" 2025-10-13T00:15:05.112512196+00:00 stderr F I1013 00:15:05.111466 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="deployer-dockercfg-bnp5r" serviceaccount="deployer" 2025-10-13T00:15:05.122049252+00:00 stderr F I1013 00:15:05.118923 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="installer-sa-dockercfg-dl9g2" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.752442529 +0000 UTC" 2025-10-13T00:15:05.122049252+00:00 stderr F I1013 00:15:05.118987 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="installer-sa-dockercfg-dl9g2" serviceaccount="installer-sa" 2025-10-13T00:15:05.158398611+00:00 stderr F I1013 00:15:05.158294 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="kube-controller-manager-sa-dockercfg-4jsp6" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.736693873 +0000 UTC" 2025-10-13T00:15:05.158398611+00:00 stderr F I1013 00:15:05.158359 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="kube-controller-manager-sa-dockercfg-4jsp6" serviceaccount="kube-controller-manager-sa" 2025-10-13T00:15:05.198467341+00:00 stderr F I1013 00:15:05.197600 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="localhost-recovery-client-dockercfg-862fd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.720980703 +0000 UTC" 2025-10-13T00:15:05.198467341+00:00 stderr F I1013 00:15:05.197639 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="localhost-recovery-client-dockercfg-862fd" serviceaccount="localhost-recovery-client" 2025-10-13T00:15:05.243370987+00:00 stderr F I1013 00:15:05.239802 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="builder-dockercfg-2h8s9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.704091057 +0000 UTC" 2025-10-13T00:15:05.243370987+00:00 stderr F I1013 00:15:05.239835 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="builder-dockercfg-2h8s9" serviceaccount="builder" 2025-10-13T00:15:05.257992115+00:00 stderr F I1013 00:15:05.257922 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="default-dockercfg-vmhb5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 
19:49:42.696843563 +0000 UTC" 2025-10-13T00:15:05.257992115+00:00 stderr F I1013 00:15:05.257950 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="default-dockercfg-vmhb5" serviceaccount="default" 2025-10-13T00:15:05.262579582+00:00 stderr F I1013 00:15:05.262155 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="deployer-dockercfg-ltvsw" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.695145121 +0000 UTC" 2025-10-13T00:15:05.262579582+00:00 stderr F I1013 00:15:05.262174 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="deployer-dockercfg-ltvsw" serviceaccount="deployer" 2025-10-13T00:15:05.297665994+00:00 stderr F I1013 00:15:05.297602 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="openshift-kube-scheduler-operator-dockercfg-w67m2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.680979028 +0000 UTC" 2025-10-13T00:15:05.297665994+00:00 stderr F I1013 00:15:05.297648 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="openshift-kube-scheduler-operator-dockercfg-w67m2" serviceaccount="openshift-kube-scheduler-operator" 2025-10-13T00:15:05.335780586+00:00 stderr F I1013 00:15:05.335719 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="builder-dockercfg-wmtqz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.665724611 +0000 UTC" 2025-10-13T00:15:05.335780586+00:00 stderr F I1013 00:15:05.335773 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="builder-dockercfg-wmtqz" serviceaccount="builder" 2025-10-13T00:15:05.376156445+00:00 stderr F I1013 00:15:05.376092 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="default-dockercfg-m6rz5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.649577391 +0000 UTC" 2025-10-13T00:15:05.376156445+00:00 stderr F I1013 00:15:05.376122 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="default-dockercfg-m6rz5" serviceaccount="default" 2025-10-13T00:15:05.383719572+00:00 stderr F I1013 00:15:05.383016 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="deployer-dockercfg-mks8v" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-07-20 19:49:42.646800756 +0000 UTC" 2025-10-13T00:15:05.383719572+00:00 stderr F I1013 00:15:05.383035 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="deployer-dockercfg-mks8v" serviceaccount="deployer" 2025-10-13T00:15:05.391419063+00:00 stderr F 
I1013 00:15:05.391355 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="installer-sa-dockercfg-9ln8g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:44.043473541 +0000 UTC" 2025-10-13T00:15:05.391419063+00:00 stderr F I1013 00:15:05.391372 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="installer-sa-dockercfg-9ln8g" serviceaccount="installer-sa" 2025-10-13T00:15:05.437588816+00:00 stderr F I1013 00:15:05.437523 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="localhost-recovery-client-dockercfg-b5dfm" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:44.025001943 +0000 UTC" 2025-10-13T00:15:05.437588816+00:00 stderr F I1013 00:15:05.437551 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="localhost-recovery-client-dockercfg-b5dfm" serviceaccount="localhost-recovery-client" 2025-10-13T00:15:05.484803601+00:00 stderr F I1013 00:15:05.484713 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="openshift-kube-scheduler-sa-dockercfg-d9dtc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:44.006130665 +0000 UTC" 2025-10-13T00:15:05.484880323+00:00 stderr F I1013 00:15:05.484865 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="openshift-kube-scheduler-sa-dockercfg-d9dtc" serviceaccount="openshift-kube-scheduler-sa" 2025-10-13T00:15:05.514135849+00:00 stderr F I1013 00:15:05.514070 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="builder-dockercfg-cp4s8" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.994386218 +0000 UTC" 2025-10-13T00:15:05.514135849+00:00 stderr F I1013 00:15:05.514105 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="builder-dockercfg-cp4s8" serviceaccount="builder" 2025-10-13T00:15:05.536510940+00:00 stderr F I1013 00:15:05.536456 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="default-dockercfg-47tpp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.985433413 +0000 UTC" 2025-10-13T00:15:05.536613753+00:00 stderr F I1013 00:15:05.536598 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="default-dockercfg-47tpp" serviceaccount="default" 2025-10-13T00:15:05.537169940+00:00 stderr F I1013 00:15:05.537148 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" 
name="deployer-dockercfg-vphfw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.985148897 +0000 UTC" 2025-10-13T00:15:05.537215111+00:00 stderr F I1013 00:15:05.537201 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="deployer-dockercfg-vphfw" serviceaccount="deployer" 2025-10-13T00:15:05.573852579+00:00 stderr F I1013 00:15:05.573784 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="kube-storage-version-migrator-operator-dockercfg-8l4fr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.970499122 +0000 UTC" 2025-10-13T00:15:05.573852579+00:00 stderr F I1013 00:15:05.573816 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="kube-storage-version-migrator-operator-dockercfg-8l4fr" serviceaccount="kube-storage-version-migrator-operator" 2025-10-13T00:15:05.623438004+00:00 stderr F I1013 00:15:05.623372 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="builder-dockercfg-lvk8s" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.95067558 +0000 UTC" 2025-10-13T00:15:05.623438004+00:00 stderr F I1013 00:15:05.623411 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="builder-dockercfg-lvk8s" serviceaccount="builder" 2025-10-13T00:15:05.662463684+00:00 stderr F I1013 00:15:05.662378 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="default-dockercfg-4vhnw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.935073769 +0000 UTC" 2025-10-13T00:15:05.662463684+00:00 stderr F I1013 00:15:05.662416 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="default-dockercfg-4vhnw" serviceaccount="default" 2025-10-13T00:15:05.664783553+00:00 stderr F I1013 00:15:05.664741 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="deployer-dockercfg-tzgw6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.934126228 +0000 UTC" 2025-10-13T00:15:05.664805514+00:00 stderr F I1013 00:15:05.664794 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="deployer-dockercfg-tzgw6" serviceaccount="deployer" 2025-10-13T00:15:05.693700059+00:00 stderr F I1013 00:15:05.691619 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="kube-storage-version-migrator-sa-dockercfg-5v9xj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.92337401 +0000 UTC" 2025-10-13T00:15:05.693700059+00:00 stderr F I1013 00:15:05.691659 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="kube-storage-version-migrator-sa-dockercfg-5v9xj" serviceaccount="kube-storage-version-migrator-sa" 2025-10-13T00:15:05.717933176+00:00 stderr F I1013 00:15:05.717127 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="builder-dockercfg-t2jdt" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.913159716 +0000 UTC" 2025-10-13T00:15:05.717933176+00:00 stderr F I1013 00:15:05.717155 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="builder-dockercfg-t2jdt" serviceaccount="builder" 2025-10-13T00:15:05.743066979+00:00 stderr F I1013 00:15:05.742999 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="cluster-autoscaler-dockercfg-5ld89" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.902813166 +0000 UTC" 2025-10-13T00:15:05.743066979+00:00 stderr F I1013 00:15:05.743028 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="cluster-autoscaler-dockercfg-5ld89" serviceaccount="cluster-autoscaler" 2025-10-13T00:15:05.791940073+00:00 stderr F I1013 00:15:05.791876 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="cluster-autoscaler-operator-dockercfg-nlvfr" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.883264512 +0000 UTC" 2025-10-13T00:15:05.791940073+00:00 stderr F I1013 00:15:05.791908 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="cluster-autoscaler-operator-dockercfg-nlvfr" serviceaccount="cluster-autoscaler-operator" 2025-10-13T00:15:05.816303293+00:00 stderr F I1013 00:15:05.816239 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="control-plane-machine-set-operator-dockercfg-7wtgv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.873517412 +0000 UTC" 2025-10-13T00:15:05.816303293+00:00 stderr F I1013 00:15:05.816274 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="control-plane-machine-set-operator-dockercfg-7wtgv" serviceaccount="control-plane-machine-set-operator" 2025-10-13T00:15:05.831613032+00:00 stderr F I1013 00:15:05.831528 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="default-dockercfg-t6f6m" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.867399265 +0000 UTC" 2025-10-13T00:15:05.831613032+00:00 stderr F I1013 00:15:05.831556 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="default-dockercfg-t6f6m" serviceaccount="default" 2025-10-13T00:15:05.860378493+00:00 stderr F I1013 00:15:05.859862 1 image_pull_secret_controller.go:286] 
"Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="deployer-dockercfg-z5zkh" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.856068081 +0000 UTC" 2025-10-13T00:15:05.860378493+00:00 stderr F I1013 00:15:05.859893 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="deployer-dockercfg-z5zkh" serviceaccount="deployer" 2025-10-13T00:15:05.873194797+00:00 stderr F I1013 00:15:05.873128 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-controllers-dockercfg-5gkdn" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.850764935 +0000 UTC" 2025-10-13T00:15:05.873194797+00:00 stderr F I1013 00:15:05.873165 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-controllers-dockercfg-5gkdn" serviceaccount="machine-api-controllers" 2025-10-13T00:15:05.937605347+00:00 stderr F I1013 00:15:05.937546 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-operator-dockercfg-q7fmc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.825006327 +0000 UTC" 2025-10-13T00:15:05.937605347+00:00 stderr F I1013 00:15:05.937577 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-operator-dockercfg-q7fmc" serviceaccount="machine-api-operator" 2025-10-13T00:15:05.957161823+00:00 stderr F I1013 00:15:05.955611 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-termination-handler-dockercfg-86pgj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.817768402 +0000 UTC" 2025-10-13T00:15:05.957161823+00:00 stderr F I1013 00:15:05.955642 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-termination-handler-dockercfg-86pgj" serviceaccount="machine-api-termination-handler" 2025-10-13T00:15:05.963355939+00:00 stderr F I1013 00:15:05.962728 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="builder-dockercfg-kjnvp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.814921169 +0000 UTC" 2025-10-13T00:15:05.963355939+00:00 stderr F I1013 00:15:05.962775 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="builder-dockercfg-kjnvp" serviceaccount="builder" 2025-10-13T00:15:05.993610905+00:00 stderr F I1013 00:15:05.993360 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="default-dockercfg-lh7f8" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" 
refreshTime="2025-07-20 19:49:43.802677386 +0000 UTC" 2025-10-13T00:15:05.993610905+00:00 stderr F I1013 00:15:05.993389 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="default-dockercfg-lh7f8" serviceaccount="default" 2025-10-13T00:15:06.014886083+00:00 stderr F I1013 00:15:06.012507 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="deployer-dockercfg-8qpnv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.795013672 +0000 UTC" 2025-10-13T00:15:06.014886083+00:00 stderr F I1013 00:15:06.012547 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="deployer-dockercfg-8qpnv" serviceaccount="deployer" 2025-10-13T00:15:06.069563651+00:00 stderr F I1013 00:15:06.069453 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-controller-dockercfg-wtlbj" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.772232693 +0000 UTC" 2025-10-13T00:15:06.069563651+00:00 stderr F I1013 00:15:06.069531 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-controller-dockercfg-wtlbj" serviceaccount="machine-config-controller" 2025-10-13T00:15:06.089719005+00:00 stderr F I1013 00:15:06.089423 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-daemon-dockercfg-rfwqs" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.764279407 +0000 UTC" 2025-10-13T00:15:06.089719005+00:00 stderr F I1013 00:15:06.089456 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-daemon-dockercfg-rfwqs" serviceaccount="machine-config-daemon" 2025-10-13T00:15:06.104026344+00:00 stderr F I1013 00:15:06.103805 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-operator-dockercfg-vhshz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.758527807 +0000 UTC" 2025-10-13T00:15:06.104026344+00:00 stderr F I1013 00:15:06.103839 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-operator-dockercfg-vhshz" serviceaccount="machine-config-operator" 2025-10-13T00:15:06.124513357+00:00 stderr F I1013 00:15:06.124438 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-server-dockercfg-xm5kr" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.750236855 +0000 UTC" 2025-10-13T00:15:06.124513357+00:00 stderr F I1013 00:15:06.124464 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openshift-machine-config-operator" name="machine-config-server-dockercfg-xm5kr" serviceaccount="machine-config-server" 2025-10-13T00:15:06.150562588+00:00 stderr F I1013 00:15:06.150475 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-os-builder-dockercfg-p6ljl" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.739825835 +0000 UTC" 2025-10-13T00:15:06.150562588+00:00 stderr F I1013 00:15:06.150511 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-os-builder-dockercfg-p6ljl" serviceaccount="machine-os-builder" 2025-10-13T00:15:06.211467563+00:00 stderr F I1013 00:15:06.210992 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-os-puller-dockercfg-bkkmv" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.715614197 +0000 UTC" 2025-10-13T00:15:06.211467563+00:00 stderr F I1013 00:15:06.211022 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-os-puller-dockercfg-bkkmv" serviceaccount="machine-os-puller" 2025-10-13T00:15:06.230441381+00:00 stderr F I1013 00:15:06.230061 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="node-bootstrapper-dockercfg-8hvnd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.707987601 +0000 UTC" 2025-10-13T00:15:06.230441381+00:00 stderr F I1013 00:15:06.230090 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="node-bootstrapper-dockercfg-8hvnd" serviceaccount="node-bootstrapper" 2025-10-13T00:15:06.243998637+00:00 stderr F I1013 00:15:06.243939 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="builder-dockercfg-w5l7k" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.702435386 +0000 UTC" 2025-10-13T00:15:06.243998637+00:00 stderr F I1013 00:15:06.243963 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="builder-dockercfg-w5l7k" serviceaccount="builder" 2025-10-13T00:15:06.257032878+00:00 stderr F I1013 00:15:06.256976 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="certified-operators-dockercfg-twmwc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.69722113 +0000 UTC" 2025-10-13T00:15:06.257032878+00:00 stderr F I1013 00:15:06.257000 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="certified-operators-dockercfg-twmwc" serviceaccount="certified-operators" 2025-10-13T00:15:06.286558473+00:00 stderr F I1013 00:15:06.286510 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be 
refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="community-operators-dockercfg-sv888" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.6854099 +0000 UTC" 2025-10-13T00:15:06.286629335+00:00 stderr F I1013 00:15:06.286616 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="community-operators-dockercfg-sv888" serviceaccount="community-operators" 2025-10-13T00:15:06.342784897+00:00 stderr F I1013 00:15:06.342713 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="default-dockercfg-4w6pc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.662941099 +0000 UTC" 2025-10-13T00:15:06.342784897+00:00 stderr F I1013 00:15:06.342739 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="default-dockercfg-4w6pc" serviceaccount="default" 2025-10-13T00:15:06.364643542+00:00 stderr F I1013 00:15:06.364585 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="deployer-dockercfg-wdpgc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.654182037 +0000 UTC" 2025-10-13T00:15:06.364643542+00:00 stderr F I1013 00:15:06.364618 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="deployer-dockercfg-wdpgc" serviceaccount="deployer" 2025-10-13T00:15:06.384711053+00:00 stderr F I1013 00:15:06.384658 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="marketplace-operator-dockercfg-b4zbk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.64615108 +0000 UTC" 2025-10-13T00:15:06.384711053+00:00 stderr F I1013 00:15:06.384689 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="marketplace-operator-dockercfg-b4zbk" serviceaccount="marketplace-operator" 2025-10-13T00:15:06.396904119+00:00 stderr F I1013 00:15:06.396806 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="redhat-marketplace-dockercfg-kpdvz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.641291598 +0000 UTC" 2025-10-13T00:15:06.396904119+00:00 stderr F I1013 00:15:06.396835 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="redhat-marketplace-dockercfg-kpdvz" serviceaccount="redhat-marketplace" 2025-10-13T00:15:06.423872147+00:00 stderr F I1013 00:15:06.423815 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="redhat-operators-dockercfg-dwn4s" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.630491767 +0000 UTC" 2025-10-13T00:15:06.423872147+00:00 stderr F I1013 00:15:06.423855 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openshift-marketplace" name="redhat-operators-dockercfg-dwn4s" serviceaccount="redhat-operators" 2025-10-13T00:15:06.484144113+00:00 stderr F I1013 00:15:06.484085 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="builder-dockercfg-c82h4" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.60637878 +0000 UTC" 2025-10-13T00:15:06.484144113+00:00 stderr F I1013 00:15:06.484115 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="builder-dockercfg-c82h4" serviceaccount="builder" 2025-10-13T00:15:06.517920385+00:00 stderr F I1013 00:15:06.517762 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="cluster-monitoring-operator-dockercfg-vg26t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.592905773 +0000 UTC" 2025-10-13T00:15:06.517920385+00:00 stderr F I1013 00:15:06.517806 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="cluster-monitoring-operator-dockercfg-vg26t" serviceaccount="cluster-monitoring-operator" 2025-10-13T00:15:06.522920144+00:00 stderr F I1013 00:15:06.522862 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="default-dockercfg-vffxx" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.590867787 +0000 UTC" 2025-10-13T00:15:06.522920144+00:00 stderr F I1013 00:15:06.522888 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="default-dockercfg-vffxx" serviceaccount="default" 2025-10-13T00:15:06.539685197+00:00 stderr F I1013 00:15:06.539617 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="deployer-dockercfg-fzkn2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.584170695 +0000 UTC" 2025-10-13T00:15:06.539685197+00:00 stderr F I1013 00:15:06.539643 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="deployer-dockercfg-fzkn2" serviceaccount="deployer" 2025-10-13T00:15:06.561413808+00:00 stderr F I1013 00:15:06.560793 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="builder-dockercfg-wqmk7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.975699781 +0000 UTC" 2025-10-13T00:15:06.561413808+00:00 stderr F I1013 00:15:06.560817 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="builder-dockercfg-wqmk7" serviceaccount="builder" 2025-10-13T00:15:06.639205819+00:00 stderr F I1013 00:15:06.639144 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="default-dockercfg-smth4" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 
UTC" refreshTime="2025-07-20 19:49:44.944357646 +0000 UTC" 2025-10-13T00:15:06.639205819+00:00 stderr F I1013 00:15:06.639173 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="default-dockercfg-smth4" serviceaccount="default" 2025-10-13T00:15:06.650952961+00:00 stderr F I1013 00:15:06.650896 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="deployer-dockercfg-lbcm2" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-07-20 19:49:43.539657641 +0000 UTC" 2025-10-13T00:15:06.650952961+00:00 stderr F I1013 00:15:06.650924 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="deployer-dockercfg-lbcm2" serviceaccount="deployer" 2025-10-13T00:15:06.660218908+00:00 stderr F I1013 00:15:06.660169 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="metrics-daemon-sa-dockercfg-22xbz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.935945222 +0000 UTC" 2025-10-13T00:15:06.660218908+00:00 stderr F I1013 00:15:06.660196 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="metrics-daemon-sa-dockercfg-22xbz" serviceaccount="metrics-daemon-sa" 2025-10-13T00:15:06.684383032+00:00 stderr F I1013 00:15:06.684312 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-ac-dockercfg-ltm2q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.926288649 +0000 UTC" 2025-10-13T00:15:06.684383032+00:00 stderr F I1013 00:15:06.684365 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-ac-dockercfg-ltm2q" serviceaccount="multus-ac" 2025-10-13T00:15:06.690141555+00:00 stderr F I1013 00:15:06.690087 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-ancillary-tools-dockercfg-6hnwp" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.923995548 +0000 UTC" 2025-10-13T00:15:06.690141555+00:00 stderr F I1013 00:15:06.690122 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-ancillary-tools-dockercfg-6hnwp" serviceaccount="multus-ancillary-tools" 2025-10-13T00:15:06.777342197+00:00 stderr F I1013 00:15:06.777257 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-dockercfg-2hmjh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.889110459 +0000 UTC" 2025-10-13T00:15:06.777342197+00:00 stderr F I1013 00:15:06.777287 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-dockercfg-2hmjh" serviceaccount="multus" 2025-10-13T00:15:06.783151221+00:00 stderr F I1013 00:15:06.783090 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth 
token needs to be refreshed" ns="openshift-network-diagnostics" name="builder-dockercfg-v84zj" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.886777341 +0000 UTC" 2025-10-13T00:15:06.783151221+00:00 stderr F I1013 00:15:06.783120 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="builder-dockercfg-v84zj" serviceaccount="builder" 2025-10-13T00:15:06.797954425+00:00 stderr F I1013 00:15:06.797880 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="default-dockercfg-l7mph" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.880868312 +0000 UTC" 2025-10-13T00:15:06.797954425+00:00 stderr F I1013 00:15:06.797920 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="default-dockercfg-l7mph" serviceaccount="default" 2025-10-13T00:15:06.817854491+00:00 stderr F I1013 00:15:06.817790 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="deployer-dockercfg-xfj6f" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.872896493 +0000 UTC" 2025-10-13T00:15:06.817854491+00:00 stderr F I1013 00:15:06.817835 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="deployer-dockercfg-xfj6f" serviceaccount="deployer" 2025-10-13T00:15:06.831507030+00:00 stderr F I1013 00:15:06.831445 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="network-diagnostics-dockercfg-lpz5v" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.867435653 +0000 UTC" 2025-10-13T00:15:06.831507030+00:00 stderr F I1013 00:15:06.831476 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="network-diagnostics-dockercfg-lpz5v" serviceaccount="network-diagnostics" 2025-10-13T00:15:06.909654841+00:00 stderr F I1013 00:15:06.909537 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="builder-dockercfg-jflnh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.83619737 +0000 UTC" 2025-10-13T00:15:06.909654841+00:00 stderr F I1013 00:15:06.909629 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="builder-dockercfg-jflnh" serviceaccount="builder" 2025-10-13T00:15:06.924343761+00:00 stderr F I1013 00:15:06.924272 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="default-dockercfg-75lxp" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.830309349 +0000 UTC" 2025-10-13T00:15:06.924343761+00:00 stderr F I1013 
00:15:06.924299 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="default-dockercfg-75lxp" serviceaccount="default" 2025-10-13T00:15:06.938499325+00:00 stderr F I1013 00:15:06.938443 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="deployer-dockercfg-rj2df" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.824633683 +0000 UTC" 2025-10-13T00:15:06.938499325+00:00 stderr F I1013 00:15:06.938467 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="deployer-dockercfg-rj2df" serviceaccount="deployer" 2025-10-13T00:15:06.956775113+00:00 stderr F I1013 00:15:06.956712 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="network-node-identity-dockercfg-q58sj" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.817326436 +0000 UTC" 2025-10-13T00:15:06.956775113+00:00 stderr F I1013 00:15:06.956740 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="network-node-identity-dockercfg-q58sj" serviceaccount="network-node-identity" 2025-10-13T00:15:06.971096672+00:00 stderr F I1013 00:15:06.971033 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="builder-dockercfg-rm8cp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.811602149 +0000 UTC" 2025-10-13T00:15:06.971096672+00:00 stderr F I1013 00:15:06.971064 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="builder-dockercfg-rm8cp" serviceaccount="builder" 2025-10-13T00:15:07.039117380+00:00 stderr F W1013 00:15:07.039060 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:07.039117380+00:00 stderr F E1013 00:15:07.039093 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-10-13T00:15:07.053372907+00:00 stderr F I1013 00:15:07.053294 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="cluster-network-operator-dockercfg-fknq9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.778694299 +0000 UTC" 2025-10-13T00:15:07.053372907+00:00 stderr F I1013 00:15:07.053319 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="cluster-network-operator-dockercfg-fknq9" serviceaccount="cluster-network-operator" 2025-10-13T00:15:07.060907883+00:00 stderr F I1013 00:15:07.060843 1 image_pull_secret_controller.go:286] "Internal registry pull secret 
needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="default-dockercfg-qbblb" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.775678741 +0000 UTC" 2025-10-13T00:15:07.060907883+00:00 stderr F I1013 00:15:07.060874 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="default-dockercfg-qbblb" serviceaccount="default" 2025-10-13T00:15:07.087388476+00:00 stderr F I1013 00:15:07.085566 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="deployer-dockercfg-dxzsm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.765789724 +0000 UTC" 2025-10-13T00:15:07.087388476+00:00 stderr F I1013 00:15:07.085600 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="deployer-dockercfg-dxzsm" serviceaccount="deployer" 2025-10-13T00:15:07.113868679+00:00 stderr F I1013 00:15:07.113800 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="iptables-alerter-dockercfg-m85pb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.754499902 +0000 UTC" 2025-10-13T00:15:07.113868679+00:00 stderr F I1013 00:15:07.113832 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="iptables-alerter-dockercfg-m85pb" serviceaccount="iptables-alerter" 2025-10-13T00:15:07.118342573+00:00 stderr F I1013 00:15:07.118261 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="builder-dockercfg-x5dlr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.752709756 +0000 UTC" 2025-10-13T00:15:07.118342573+00:00 stderr F I1013 00:15:07.118294 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="builder-dockercfg-x5dlr" serviceaccount="builder" 2025-10-13T00:15:07.186338891+00:00 stderr F I1013 00:15:07.186257 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="default-dockercfg-rkcl2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.725508753 +0000 UTC" 2025-10-13T00:15:07.186338891+00:00 stderr F I1013 00:15:07.186286 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="default-dockercfg-rkcl2" serviceaccount="default" 2025-10-13T00:15:07.195994770+00:00 stderr F I1013 00:15:07.195930 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="deployer-dockercfg-ng566" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.721641009 +0000 UTC" 2025-10-13T00:15:07.195994770+00:00 stderr F I1013 00:15:07.195962 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="deployer-dockercfg-ng566" 
serviceaccount="deployer" 2025-10-13T00:15:07.224205175+00:00 stderr F I1013 00:15:07.224138 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="builder-dockercfg-zj94t" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.710356327 +0000 UTC" 2025-10-13T00:15:07.224205175+00:00 stderr F I1013 00:15:07.224166 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="builder-dockercfg-zj94t" serviceaccount="builder" 2025-10-13T00:15:07.236395311+00:00 stderr F I1013 00:15:07.236314 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="default-dockercfg-w25km" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-07-20 19:49:44.705489589 +0000 UTC" 2025-10-13T00:15:07.236395311+00:00 stderr F I1013 00:15:07.236364 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="default-dockercfg-w25km" serviceaccount="default" 2025-10-13T00:15:07.264782621+00:00 stderr F I1013 00:15:07.264718 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="deployer-dockercfg-dgzbc" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.094129201 +0000 UTC" 2025-10-13T00:15:07.264782621+00:00 stderr F I1013 00:15:07.264750 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="deployer-dockercfg-dgzbc" serviceaccount="deployer" 2025-10-13T00:15:07.317613424+00:00 stderr F I1013 00:15:07.317549 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="builder-dockercfg-ssklj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.072994268 +0000 UTC" 2025-10-13T00:15:07.317613424+00:00 stderr F I1013 00:15:07.317580 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="builder-dockercfg-ssklj" serviceaccount="builder" 2025-10-13T00:15:07.331571112+00:00 stderr F I1013 00:15:07.331502 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="default-dockercfg-m5zsx" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.067409845 +0000 UTC" 2025-10-13T00:15:07.331571112+00:00 stderr F I1013 00:15:07.331530 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="default-dockercfg-m5zsx" serviceaccount="default" 2025-10-13T00:15:07.366491449+00:00 stderr F I1013 00:15:07.366202 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="deployer-dockercfg-s7krv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.053530177 
+0000 UTC" 2025-10-13T00:15:07.366491449+00:00 stderr F I1013 00:15:07.366229 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="deployer-dockercfg-s7krv" serviceaccount="deployer" 2025-10-13T00:15:07.376785487+00:00 stderr F I1013 00:15:07.376731 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="oauth-apiserver-sa-dockercfg-qvbzg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.049321187 +0000 UTC" 2025-10-13T00:15:07.376874530+00:00 stderr F I1013 00:15:07.376858 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="oauth-apiserver-sa-dockercfg-qvbzg" serviceaccount="oauth-apiserver-sa" 2025-10-13T00:15:07.391138187+00:00 stderr F I1013 00:15:07.391087 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="builder-dockercfg-74x9t" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.043576638 +0000 UTC" 2025-10-13T00:15:07.391138187+00:00 stderr F I1013 00:15:07.391130 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="builder-dockercfg-74x9t" serviceaccount="builder" 2025-10-13T00:15:07.451572868+00:00 stderr F I1013 00:15:07.451528 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="default-dockercfg-n4wxs" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.019401374 +0000 UTC" 2025-10-13T00:15:07.451636940+00:00 stderr F I1013 00:15:07.451623 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="default-dockercfg-n4wxs" serviceaccount="default" 2025-10-13T00:15:07.463171785+00:00 stderr F I1013 00:15:07.463123 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="deployer-dockercfg-ddvf7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:46.014763631 +0000 UTC" 2025-10-13T00:15:07.463257118+00:00 stderr F I1013 00:15:07.463243 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="deployer-dockercfg-ddvf7" serviceaccount="deployer" 2025-10-13T00:15:07.504557295+00:00 stderr F I1013 00:15:07.504485 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="builder-dockercfg-kvbw2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.998219599 +0000 UTC" 2025-10-13T00:15:07.504557295+00:00 stderr F I1013 00:15:07.504513 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="builder-dockercfg-kvbw2" serviceaccount="builder" 2025-10-13T00:15:07.523162763+00:00 stderr F I1013 00:15:07.523092 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be 
refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="collect-profiles-dockercfg-45g9d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.990775956 +0000 UTC" 2025-10-13T00:15:07.523162763+00:00 stderr F I1013 00:15:07.523123 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="collect-profiles-dockercfg-45g9d" serviceaccount="collect-profiles" 2025-10-13T00:15:07.531606756+00:00 stderr F I1013 00:15:07.531550 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="default-dockercfg-sps9x" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.987390027 +0000 UTC" 2025-10-13T00:15:07.531606756+00:00 stderr F I1013 00:15:07.531578 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="default-dockercfg-sps9x" serviceaccount="default" 2025-10-13T00:15:07.582817820+00:00 stderr F I1013 00:15:07.582753 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="deployer-dockercfg-fm6b6" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.966910921 +0000 UTC" 2025-10-13T00:15:07.582817820+00:00 stderr F I1013 00:15:07.582781 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="deployer-dockercfg-fm6b6" serviceaccount="deployer" 2025-10-13T00:15:07.602712196+00:00 stderr F I1013 00:15:07.602633 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="olm-operator-serviceaccount-dockercfg-ncpbj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.958961299 +0000 UTC" 2025-10-13T00:15:07.602712196+00:00 stderr F I1013 00:15:07.602670 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="olm-operator-serviceaccount-dockercfg-ncpbj" serviceaccount="olm-operator-serviceaccount" 2025-10-13T00:15:07.638456097+00:00 stderr F I1013 00:15:07.638372 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="builder-dockercfg-bmp44" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.944680639 +0000 UTC" 2025-10-13T00:15:07.638456097+00:00 stderr F I1013 00:15:07.638399 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="builder-dockercfg-bmp44" serviceaccount="builder" 2025-10-13T00:15:07.651024034+00:00 stderr F I1013 00:15:07.650950 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="default-dockercfg-6cjkw" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.93963558 
+0000 UTC" 2025-10-13T00:15:07.651024034+00:00 stderr F I1013 00:15:07.650985 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="default-dockercfg-6cjkw" serviceaccount="default" 2025-10-13T00:15:07.664675823+00:00 stderr F I1013 00:15:07.664612 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="deployer-dockercfg-kwdj9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.934166676 +0000 UTC" 2025-10-13T00:15:07.664675823+00:00 stderr F I1013 00:15:07.664640 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="deployer-dockercfg-kwdj9" serviceaccount="deployer" 2025-10-13T00:15:07.709701272+00:00 stderr F I1013 00:15:07.709627 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="builder-dockercfg-686g2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.916172153 +0000 UTC" 2025-10-13T00:15:07.709701272+00:00 stderr F I1013 00:15:07.709659 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="builder-dockercfg-686g2" serviceaccount="builder" 2025-10-13T00:15:07.743150674+00:00 stderr F I1013 00:15:07.743097 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="default-dockercfg-j8jz7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.902771007 +0000 UTC" 2025-10-13T00:15:07.743150674+00:00 stderr F I1013 00:15:07.743124 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="default-dockercfg-j8jz7" serviceaccount="default" 2025-10-13T00:15:07.777315248+00:00 stderr F I1013 00:15:07.777252 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="deployer-dockercfg-kmtk7" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.889111786 +0000 UTC" 2025-10-13T00:15:07.777315248+00:00 stderr F I1013 00:15:07.777280 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="deployer-dockercfg-kmtk7" serviceaccount="deployer" 2025-10-13T00:15:07.790701249+00:00 stderr F I1013 00:15:07.790598 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="builder-dockercfg-gvwbd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.883775779 +0000 UTC" 2025-10-13T00:15:07.790701249+00:00 stderr F I1013 00:15:07.790667 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="builder-dockercfg-gvwbd" serviceaccount="builder" 2025-10-13T00:15:07.798401869+00:00 stderr F I1013 00:15:07.798318 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-ovn-kubernetes" name="default-dockercfg-gd6sg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.880693734 +0000 UTC" 2025-10-13T00:15:07.798447631+00:00 stderr F I1013 00:15:07.798413 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="default-dockercfg-gd6sg" serviceaccount="default" 2025-10-13T00:15:07.844032316+00:00 stderr F I1013 00:15:07.843960 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="deployer-dockercfg-nhhff" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.862431295 +0000 UTC" 2025-10-13T00:15:07.844032316+00:00 stderr F I1013 00:15:07.843995 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="deployer-dockercfg-nhhff" serviceaccount="deployer" 2025-10-13T00:15:07.884213280+00:00 stderr F I1013 00:15:07.884142 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-control-plane-dockercfg-76h6h" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.846355169 +0000 UTC" 2025-10-13T00:15:07.884213280+00:00 stderr F I1013 00:15:07.884174 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-control-plane-dockercfg-76h6h" serviceaccount="ovn-kubernetes-control-plane" 2025-10-13T00:15:07.909182688+00:00 stderr F I1013 00:15:07.909128 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-node-dockercfg-jpwlq" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-07-20 19:49:45.836361442 +0000 UTC" 2025-10-13T00:15:07.909182688+00:00 stderr F I1013 00:15:07.909156 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-node-dockercfg-jpwlq" serviceaccount="ovn-kubernetes-node" 2025-10-13T00:15:07.937754645+00:00 stderr F I1013 00:15:07.937685 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="builder-dockercfg-6nwr2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.224938147 +0000 UTC" 2025-10-13T00:15:07.937754645+00:00 stderr F I1013 00:15:07.937713 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="builder-dockercfg-6nwr2" serviceaccount="builder" 2025-10-13T00:15:07.939958331+00:00 stderr F I1013 00:15:07.939640 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="default-dockercfg-gd9rc" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.224158696 +0000 UTC" 2025-10-13T00:15:07.939958331+00:00 stderr F I1013 00:15:07.939669 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="default-dockercfg-gd9rc" serviceaccount="default" 2025-10-13T00:15:07.976526876+00:00 stderr F I1013 00:15:07.976447 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="deployer-dockercfg-6wg24" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.209447491 +0000 UTC" 2025-10-13T00:15:07.976526876+00:00 stderr F I1013 00:15:07.976486 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="deployer-dockercfg-6wg24" serviceaccount="deployer" 2025-10-13T00:15:08.032233355+00:00 stderr F I1013 00:15:08.029757 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="route-controller-manager-sa-dockercfg-9r4gl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.188111372 +0000 UTC" 2025-10-13T00:15:08.032233355+00:00 stderr F I1013 00:15:08.029803 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="route-controller-manager-sa-dockercfg-9r4gl" serviceaccount="route-controller-manager-sa" 2025-10-13T00:15:08.050661627+00:00 stderr F I1013 00:15:08.050624 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="builder-dockercfg-bdttl" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.17976769 +0000 UTC" 2025-10-13T00:15:08.050740330+00:00 stderr F I1013 00:15:08.050728 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="builder-dockercfg-bdttl" serviceaccount="builder" 2025-10-13T00:15:08.067162332+00:00 stderr F I1013 00:15:08.066692 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="default-dockercfg-ptcxl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.173334409 +0000 UTC" 2025-10-13T00:15:08.067162332+00:00 stderr F I1013 00:15:08.066719 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="default-dockercfg-ptcxl" serviceaccount="default" 2025-10-13T00:15:08.079198992+00:00 stderr F I1013 00:15:08.078811 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="deployer-dockercfg-2k6vp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.168497593 +0000 UTC" 2025-10-13T00:15:08.079198992+00:00 stderr F I1013 00:15:08.078836 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="deployer-dockercfg-2k6vp" serviceaccount="deployer" 2025-10-13T00:15:08.110729677+00:00 stderr F I1013 00:15:08.110455 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token 
needs to be refreshed" ns="openshift-service-ca-operator" name="service-ca-operator-dockercfg-zrj8d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.155832589 +0000 UTC" 2025-10-13T00:15:08.110729677+00:00 stderr F I1013 00:15:08.110484 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="service-ca-operator-dockercfg-zrj8d" serviceaccount="service-ca-operator" 2025-10-13T00:15:08.112579303+00:00 stderr F W1013 00:15:08.111223 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:08.112579303+00:00 stderr F E1013 00:15:08.111250 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-10-13T00:15:08.163454257+00:00 stderr F I1013 00:15:08.163311 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="builder-dockercfg-hklq6" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.134733725 +0000 UTC" 2025-10-13T00:15:08.163525959+00:00 stderr F I1013 00:15:08.163509 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="builder-dockercfg-hklq6" serviceaccount="builder" 2025-10-13T00:15:08.192048074+00:00 stderr F I1013 00:15:08.191892 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="default-dockercfg-r6bvc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.123256681 +0000 UTC" 2025-10-13T00:15:08.192182368+00:00 stderr F I1013 00:15:08.192146 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="default-dockercfg-r6bvc" serviceaccount="default" 2025-10-13T00:15:08.202602660+00:00 stderr F I1013 00:15:08.202564 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="deployer-dockercfg-k7zp8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.118999753 +0000 UTC" 2025-10-13T00:15:08.202688872+00:00 stderr F I1013 00:15:08.202674 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="deployer-dockercfg-k7zp8" serviceaccount="deployer" 2025-10-13T00:15:08.209525177+00:00 stderr F I1013 00:15:08.209483 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="service-ca-dockercfg-79vsd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.116220278 +0000 UTC" 2025-10-13T00:15:08.209525177+00:00 stderr F I1013 00:15:08.209509 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="service-ca-dockercfg-79vsd" serviceaccount="service-ca" 2025-10-13T00:15:08.250593298+00:00 stderr F I1013 00:15:08.250468 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="builder-dockercfg-dktpk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.099824319 +0000 UTC" 2025-10-13T00:15:08.250593298+00:00 stderr F I1013 00:15:08.250495 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="builder-dockercfg-dktpk" serviceaccount="builder" 2025-10-13T00:15:08.304600786+00:00 stderr F I1013 00:15:08.304500 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="default-dockercfg-qbbwv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.078247645 +0000 UTC" 2025-10-13T00:15:08.304695739+00:00 stderr F I1013 00:15:08.304683 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="default-dockercfg-qbbwv" serviceaccount="default" 2025-10-13T00:15:08.329378538+00:00 stderr F I1013 00:15:08.329281 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="deployer-dockercfg-cxqvw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.068314231 +0000 UTC" 2025-10-13T00:15:08.329717258+00:00 stderr F I1013 00:15:08.329701 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="deployer-dockercfg-cxqvw" serviceaccount="deployer" 2025-10-13T00:15:08.337275495+00:00 stderr F I1013 00:15:08.337217 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="builder-dockercfg-d58tr" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.065124403 +0000 UTC" 2025-10-13T00:15:08.337275495+00:00 stderr F I1013 00:15:08.337244 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="builder-dockercfg-d58tr" serviceaccount="builder" 2025-10-13T00:15:08.349831021+00:00 stderr F I1013 00:15:08.349710 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="default-dockercfg-mvb5j" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.060135806 +0000 UTC" 2025-10-13T00:15:08.349831021+00:00 stderr F I1013 00:15:08.349742 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="default-dockercfg-mvb5j" serviceaccount="default" 2025-10-13T00:15:08.396837580+00:00 stderr F I1013 00:15:08.396709 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="deployer-dockercfg-tlw89" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.041338787 +0000 UTC" 2025-10-13T00:15:08.396837580+00:00 stderr F I1013 00:15:08.396741 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="deployer-dockercfg-tlw89" serviceaccount="deployer" 2025-10-13T00:15:08.450197608+00:00 stderr F I1013 00:15:08.450066 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="builder-dockercfg-7bl85" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.019987079 +0000 UTC" 2025-10-13T00:15:08.450297781+00:00 stderr F I1013 00:15:08.450269 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="builder-dockercfg-7bl85" serviceaccount="builder" 2025-10-13T00:15:08.466007332+00:00 stderr F I1013 00:15:08.465927 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="default-dockercfg-mqnf2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.013654129 +0000 UTC" 2025-10-13T00:15:08.466117635+00:00 stderr F I1013 00:15:08.466102 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="default-dockercfg-mqnf2" serviceaccount="default" 2025-10-13T00:15:08.479070993+00:00 stderr F I1013 00:15:08.479006 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="deployer-dockercfg-xhxvc" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-07-20 19:49:47.008409504 +0000 UTC" 2025-10-13T00:15:08.479070993+00:00 stderr F I1013 00:15:08.479034 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="deployer-dockercfg-xhxvc" serviceaccount="deployer" 2025-10-13T00:15:09.177097178+00:00 stderr F I1013 00:15:09.176772 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:09.613371009+00:00 stderr F W1013 00:15:09.613008 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-10-13T00:15:09.615364359+00:00 stderr F I1013 00:15:09.615067 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:09.620209894+00:00 stderr F W1013 00:15:09.620135 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-10-13T00:15:09.638716959+00:00 stderr F I1013 00:15:09.638650 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers. 
2025-10-13T00:15:09.974958623+00:00 stderr F I1013 00:15:09.974904 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:10.582931598+00:00 stderr F I1013 00:15:10.578830 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:10.617213315+00:00 stderr F I1013 00:15:10.617090 1 buildconfig_controller.go:212] Starting buildconfig controller 2025-10-13T00:15:15.278373531+00:00 stderr F I1013 00:15:15.275656 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:15.419453908+00:00 stderr F I1013 00:15:15.418879 1 build_controller.go:503] Starting build controller 2025-10-13T00:15:15.419453908+00:00 stderr F I1013 00:15:15.419154 1 build_controller.go:505] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000 2025-10-13T00:15:19.766688929+00:00 stderr F I1013 00:15:19.766289 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:15:19.800082289+00:00 stderr F I1013 00:15:19.800007 1 templateinstance_controller.go:297] Starting TemplateInstance controller 2025-10-13T00:15:19.842457439+00:00 stderr F I1013 00:15:19.842389 1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller 2025-10-13T00:21:13.941020419+00:00 stderr F I1013 00:21:13.940234 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="deployer-dockercfg-r6lk2" expected=4 actual=0 2025-10-13T00:21:13.941091191+00:00 stderr F I1013 00:21:13.940296 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="default-dockercfg-tfwf4" expected=4 actual=0 2025-10-13T00:21:13.941091191+00:00 stderr F I1013 00:21:13.941005 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="deployer-dockercfg-r6lk2" serviceaccount="deployer" 2025-10-13T00:21:13.941091191+00:00 stderr F I1013 00:21:13.941074 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="default-dockercfg-tfwf4" serviceaccount="default" 2025-10-13T00:21:13.942059807+00:00 stderr F I1013 00:21:13.941462 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="builder-dockercfg-j6p5d" expected=4 actual=0 2025-10-13T00:21:13.942059807+00:00 stderr F I1013 00:21:13.941479 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="builder-dockercfg-j6p5d" serviceaccount="builder" 2025-10-13T00:21:14.528527068+00:00 stderr F I1013 00:21:14.528167 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="deployer-dockercfg-dg646" expected=4 actual=0 2025-10-13T00:21:14.528527068+00:00 stderr F I1013 00:21:14.528189 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="deployer-dockercfg-dg646" serviceaccount="deployer" 2025-10-13T00:21:14.530172513+00:00 stderr F I1013 00:21:14.530103 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" 
ns="openstack-operators" name="default-dockercfg-hv62z" expected=4 actual=0 2025-10-13T00:21:14.530172513+00:00 stderr F I1013 00:21:14.530119 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="default-dockercfg-hv62z" serviceaccount="default" 2025-10-13T00:21:14.530338957+00:00 stderr F I1013 00:21:14.530260 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="builder-dockercfg-cbr26" expected=4 actual=0 2025-10-13T00:21:14.530338957+00:00 stderr F I1013 00:21:14.530275 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="builder-dockercfg-cbr26" serviceaccount="builder" 2025-10-13T00:21:14.551133696+00:00 stderr F I1013 00:21:14.551057 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="deployer-dockercfg-dg646" expected=4 actual=0 2025-10-13T00:21:14.551133696+00:00 stderr F I1013 00:21:14.551081 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="deployer-dockercfg-dg646" serviceaccount="deployer" 2025-10-13T00:22:22.796546484+00:00 stderr F E1013 00:22:22.796041 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager/leases/openshift-master-controllers": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:48.798887913+00:00 stderr F E1013 00:22:48.798301 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager/leases/openshift-master-controllers": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:24.954676785+00:00 stderr F I1013 00:23:24.954106 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:23:24.959565161+00:00 stderr F I1013 00:23:24.959502 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"] 2025-10-13T00:23:28.753752898+00:00 stderr F W1013 00:23:28.753185 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-10-13T00:23:45.804266982+00:00 stderr F I1013 00:23:45.803741 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-10-13T00:23:53.073802643+00:00 stderr F I1013 00:23:53.073273 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000023300000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015073043233033025 5ustar zuulzuul././@LongLink0000644000000000000000000000024300000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015073043233033025 5ustar zuulzuul././@LongLink0000644000000000000000000000025000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000007570315073043233033043 0ustar zuulzuul2025-08-13T19:50:44.179030078+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:50:44.237874850+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:50:44.289116574+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:50:44.432193094+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:50:44.488575685+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:50:44.596138679+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:51:44.793865665+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:51:44.800948797+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:51:44.807599916+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:51:44.816644794+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:51:44.820681709+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:51:44.824596800+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:52:44.840933394+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:52:44.848563401+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:52:44.857305630+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:52:44.866752939+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:52:44.873025757+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:52:44.877576937+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:53:44.895479215+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:53:44.904981736+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:53:44.916047412+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:53:44.927692085+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:53:44.931149313+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:53:44.935619241+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:54:44.950291420+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:54:44.959097171+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:54:44.972143633+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:54:44.985855865+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:54:44.989359554+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:54:44.993524073+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:55:45.010639720+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:55:45.019616857+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:55:45.025727891+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:55:45.034734568+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:55:45.038501826+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:55:45.042243723+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:56:45.054496589+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:56:45.061991153+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:56:45.072677698+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:56:45.095294644+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:56:45.102084498+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:56:45.109215231+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:57:45.130749729+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:57:45.137631605+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:57:45.144302516+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:57:45.154681432+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:57:45.158736538+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:57:45.163941517+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:58:45.177593190+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:58:45.184123356+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:58:45.190436746+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:58:45.202169050+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:58:45.205926076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:58:45.213955125+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:59:45.467363579+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:59:45.990733368+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:59:46.348749022+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:59:46.413734585+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:59:46.497041809+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:59:46.855909219+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:00:47.400675889+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:00:47.671347326+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:00:47.768266490+00:00 stdout F 
image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:00:47.917191257+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:00:47.985897916+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:00:48.049021466+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:01:48.345872873+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:01:48.925360996+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:01:48.933652583+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:01:48.944651026+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:01:48.949217056+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:01:48.953745906+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:02:49.892068063+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:02:49.995503284+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:02:50.002994117+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:02:50.014342111+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:02:50.018820879+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:02:50.024269954+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:03:50.089607897+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:03:50.099923071+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:03:50.109966648+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:03:50.119533921+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:03:50.127032885+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:03:50.130664498+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:04:50.144878272+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:04:50.205034795+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:04:50.216842483+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:04:50.230380341+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:04:50.234838258+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:04:50.242270821+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:05:50.258590982+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:05:50.269693380+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:05:50.281888759+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:05:50.300749569+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:05:50.305997930+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:05:50.310677554+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:06:50.341001987+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:06:50.367363093+00:00 stdout F 
image-registry.openshift-image-registry.svc..5000 2025-08-13T20:06:50.379169482+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:06:50.392109473+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:06:50.395078188+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:06:50.398663201+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:07:50.415535003+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:50.426037825+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:07:50.440053076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:07:50.457313681+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:50.476738488+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:07:50.481346290+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:08:50.505933265+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:08:50.511639988+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:08:50.519055261+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:08:50.532878397+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:08:50.536563493+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:08:50.542156553+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:09:50.561316124+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:09:50.588121533+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:09:50.647838885+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:09:50.702960175+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:09:50.714519997+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:09:50.719568922+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:10:50.739488450+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:10:50.747188040+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:10:50.754209522+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:10:50.767261916+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:10:50.772190487+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:10:50.776316666+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:11:50.794322108+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:11:50.803628205+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:11:50.813347114+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:11:50.825701918+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:11:50.830052603+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:11:50.834189241+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:12:50.848728773+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:12:50.854349454+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:12:50.863721563+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:12:50.870427995+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:12:50.874595945+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:12:50.878092745+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:13:50.889843871+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:13:50.897546822+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:13:50.903580425+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:13:50.913621003+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:13:50.917484983+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:13:50.922079615+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:14:50.934660344+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:14:50.946599046+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:14:50.955119770+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:14:50.965687243+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:14:50.970257264+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:14:50.974985180+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:15:50.990328799+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:15:50.998571764+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:15:51.007188290+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:15:51.021448067+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:15:51.026611565+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:15:51.031947957+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:16:51.046189254+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:16:51.054469721+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:16:51.064193108+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:16:51.075767989+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:16:51.079844505+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:16:51.083611393+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:17:51.095301291+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:17:51.106125390+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:17:51.122001684+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:17:51.132767541+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:17:51.140506572+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:17:51.148720447+00:00 stdout F 
image-registry.openshift-image-registry.svc:5000 2025-08-13T20:18:51.164139785+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:18:51.171831135+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:18:51.184226349+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:18:51.204513368+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:18:51.215682157+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:18:51.223433189+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:19:51.242413375+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:19:51.255073497+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:19:51.264893077+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:19:51.276414546+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:19:51.280402130+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:19:51.287049410+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:20:51.299959742+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:20:51.305878951+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:20:51.312962754+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:20:51.323135154+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:20:51.326248163+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:20:51.330379541+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:21:51.343764978+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:21:51.352820006+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:21:51.361837164+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:21:51.372254532+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:21:51.376860923+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:21:51.381518287+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:22:51.398739807+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:22:51.406085316+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:22:51.412600773+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:22:51.421683252+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:22:51.425453870+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:22:51.429822705+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:23:51.445523366+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:23:51.453693890+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:23:51.460924166+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:23:51.471057946+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:23:51.478256292+00:00 stdout F 
image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:23:51.481915147+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:24:51.495866769+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:24:51.503283901+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:24:51.512755712+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:24:51.527875515+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:24:51.532710953+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:24:51.536374068+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:25:51.549034970+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:25:51.557078810+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:25:51.565216283+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:25:51.575481146+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:25:51.580249903+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:25:51.584875085+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:26:51.598649328+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:26:51.605657798+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:26:51.612983138+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:26:51.622559972+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:26:51.627584345+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:26:51.631536758+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:27:51.647962447+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:27:51.658085056+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:27:51.665942891+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:27:51.677450850+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:27:51.681587858+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:27:51.685494920+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:28:51.706042488+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:28:51.720733561+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:28:51.729951246+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:28:51.755563072+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:28:51.768132063+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:28:51.772989693+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:29:51.793283737+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:29:51.806941069+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:29:51.816148324+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:29:51.830997841+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:29:51.837600431+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:29:51.842635045+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:30:51.859244339+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:30:51.871837001+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:30:51.879857372+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:30:51.892769683+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:30:51.898760945+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:30:51.902899894+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:31:51.915497901+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:31:51.926827037+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:31:51.934752705+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:31:51.946344638+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:31:51.950888159+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:31:51.954856063+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:32:51.967487272+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:32:51.973539756+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:32:51.981221237+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:32:51.991501942+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:32:51.995170888+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:32:51.999110381+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:33:52.015352083+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:33:52.028486171+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:33:52.036493631+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:33:52.053462279+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:33:52.062454957+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:33:52.069294024+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:34:52.087208945+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:34:52.094613198+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:34:52.102524996+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:34:52.111355859+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:34:52.122145500+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:34:52.126930647+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:35:52.148494694+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:35:52.156479084+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:35:52.169187349+00:00 stdout F 
image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:35:52.182749539+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:35:52.188410682+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:35:52.194130316+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:36:52.212706927+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:36:52.225700141+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:36:52.236669598+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:36:52.248599752+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:36:52.252548165+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:36:52.257011264+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:37:52.271169813+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:37:52.282038636+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:37:52.291913851+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:37:52.315675506+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:37:52.324174341+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:37:52.329417192+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:38:52.345967174+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:38:52.353704657+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:38:52.364194029+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:38:52.374217218+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:38:52.378944234+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:38:52.382480376+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:39:52.408991716+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:39:52.419278352+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:39:52.432388350+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:39:52.445580601+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:39:52.451324546+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:39:52.456467214+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:40:52.475429215+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:40:52.484263909+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:40:52.492461906+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:40:52.503455673+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:40:52.507405917+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:40:52.511266108+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:41:52.525956568+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:41:52.538127589+00:00 stdout F 
image-registry.openshift-image-registry.svc..5000 2025-08-13T20:41:52.556161139+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:41:52.577509234+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:41:52.582425076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:41:52.590598602+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:42:46.781617096+00:00 stdout F shutting down node-ca
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log
2025-10-13T00:12:49.123409067+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:12:49.129738228+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:12:49.135423220+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:12:49.142939154+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:12:49.145856457+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:12:49.152128276+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:13:49.162043453+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:13:49.169320339+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:13:49.176520123+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:13:49.189762198+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:13:49.192984529+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:13:49.198258698+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:14:49.210918816+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:14:49.219672289+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:14:49.228909555+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:14:49.239190073+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:14:49.243277536+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:14:49.248786581+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:15:49.261359191+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:15:49.267089435+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:15:49.272507159+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:15:49.280698082+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:15:49.283183271+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:15:49.285845647+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:16:49.296088206+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:16:49.302540223+00:00 stdout F
image-registry.openshift-image-registry.svc..5000 2025-10-13T00:16:49.310026173+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:16:49.318190905+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:16:49.322735621+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:16:49.326132480+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:17:49.336036276+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:17:49.342822036+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:17:49.349212564+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:17:49.358707748+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:17:49.364469977+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:17:49.369919566+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:18:49.383071234+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:18:49.390635439+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:18:49.396977197+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:18:49.407927993+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:18:49.411558761+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:18:49.417650573+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:19:49.430670395+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:19:49.436933851+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:19:49.442960330+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:19:49.455668018+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:19:49.459572044+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:19:49.465781599+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:20:49.477140327+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:20:49.484320689+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:20:49.491752197+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:20:49.501793259+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:20:49.506820370+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:20:49.512962892+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:21:49.526868585+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:21:49.534113289+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:21:49.541117148+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:21:49.549966516+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:21:49.554984261+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:21:49.560124999+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:22:49.572418670+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:22:49.584746234+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:22:49.595563335+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:22:49.612020454+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:22:49.617595429+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:22:49.622920197+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-10-13T00:23:49.635095178+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:23:49.643167493+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-10-13T00:23:49.651911656+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:23:49.659814897+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:23:49.664376984+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-10-13T00:23:49.668633352+00:00 stdout F image-registry.openshift-image-registry.svc:5000
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07/installer/0.log
2025-10-13T00:21:32.328545612+00:00 stderr F I1013 00:21:32.328232 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a17ae0 cert-dir:0xc000a17cc0 cert-secrets:0xc000a17a40 configmaps:0xc000a175e0 namespace:0xc000a17400 optional-cert-configmaps:0xc000a17c20 optional-cert-secrets:0xc000a17b80 optional-configmaps:0xc000a17720 optional-secrets:0xc000a17680 pod:0xc000a174a0 pod-manifest-dir:0xc000a17860 resource-dir:0xc000a177c0 revision:0xc000a17360 secrets:0xc000a17540 v:0xc000a2f0e0] [0xc000a2f0e0 0xc000a17360 0xc000a17400 0xc000a174a0 0xc000a177c0 0xc000a17860 0xc000a175e0 0xc000a17720 0xc000a17540 0xc000a17680 0xc000a17cc0 0xc000a17ae0 0xc000a17c20 0xc000a17a40 0xc000a17b80] [] map[cert-configmaps:0xc000a17ae0 cert-dir:0xc000a17cc0 cert-secrets:0xc000a17a40 configmaps:0xc000a175e0 help:0xc000a2f4a0 kubeconfig:0xc000a172c0 log-flush-frequency:0xc000a2f040 namespace:0xc000a17400 optional-cert-configmaps:0xc000a17c20 optional-cert-secrets:0xc000a17b80 optional-configmaps:0xc000a17720 optional-secrets:0xc000a17680 pod:0xc000a174a0 pod-manifest-dir:0xc000a17860 pod-manifests-lock-file:0xc000a179a0 resource-dir:0xc000a177c0
revision:0xc000a17360 secrets:0xc000a17540 timeout-duration:0xc000a17900 v:0xc000a2f0e0 vmodule:0xc000a2f180] [0xc000a172c0 0xc000a17360 0xc000a17400 0xc000a174a0 0xc000a17540 0xc000a175e0 0xc000a17680 0xc000a17720 0xc000a177c0 0xc000a17860 0xc000a17900 0xc000a179a0 0xc000a17a40 0xc000a17ae0 0xc000a17b80 0xc000a17c20 0xc000a17cc0 0xc000a2f040 0xc000a2f0e0 0xc000a2f180 0xc000a2f4a0] [0xc000a17ae0 0xc000a17cc0 0xc000a17a40 0xc000a175e0 0xc000a2f4a0 0xc000a172c0 0xc000a2f040 0xc000a17400 0xc000a17c20 0xc000a17b80 0xc000a17720 0xc000a17680 0xc000a174a0 0xc000a17860 0xc000a179a0 0xc000a177c0 0xc000a17360 0xc000a17540 0xc000a17900 0xc000a2f0e0 0xc000a2f180] map[104:0xc000a2f4a0 118:0xc000a2f0e0] [] -1 0 0xc0009dbec0 true 0xa51380 []} 2025-10-13T00:21:32.328756617+00:00 stderr F I1013 00:21:32.328604 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000a1c1a0)({ 2025-10-13T00:21:32.328756617+00:00 stderr F KubeConfig: (string) "", 2025-10-13T00:21:32.328756617+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-10-13T00:21:32.328756617+00:00 stderr F Revision: (string) (len=2) "13", 2025-10-13T00:21:32.328756617+00:00 stderr F NodeName: (string) "", 2025-10-13T00:21:32.328756617+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-10-13T00:21:32.328756617+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-10-13T00:21:32.328756617+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=11) "etcd-client", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=17) "encryption-config", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=6) "config", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=12) "cloud-config", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=17) "aggregator-client", 
2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=14) "kubelet-client", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-001", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=9) "client-ca", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-10-13T00:21:32.328756617+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-10-13T00:21:32.328756617+00:00 stderr F }, 2025-10-13T00:21:32.328756617+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2025-10-13T00:21:32.328756617+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-10-13T00:21:32.328756617+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-10-13T00:21:32.328756617+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-10-13T00:21:32.328756617+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-10-13T00:21:32.328756617+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-10-13T00:21:32.328756617+00:00 stderr F KubeletVersion: (string) "" 2025-10-13T00:21:32.328756617+00:00 stderr F }) 2025-10-13T00:21:32.329643021+00:00 stderr F 
I1013 00:21:32.329597 1 cmd.go:410] Getting controller reference for node crc 2025-10-13T00:21:32.430371570+00:00 stderr F I1013 00:21:32.430273 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-10-13T00:21:32.433757931+00:00 stderr F I1013 00:21:32.433709 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-10-13T00:22:02.433947777+00:00 stderr F I1013 00:22:02.433851 1 cmd.go:521] Getting installer pods for node crc 2025-10-13T00:22:02.446886015+00:00 stderr F I1013 00:22:02.446781 1 cmd.go:539] Latest installer revision for node crc is: 13 2025-10-13T00:22:02.446886015+00:00 stderr F I1013 00:22:02.446834 1 cmd.go:428] Querying kubelet version for node crc 2025-10-13T00:22:02.450074081+00:00 stderr F I1013 00:22:02.450006 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-10-13T00:22:02.450074081+00:00 stderr F I1013 00:22:02.450032 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13" ... 2025-10-13T00:22:02.450526933+00:00 stderr F I1013 00:22:02.450450 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13" ... 2025-10-13T00:22:02.450526933+00:00 stderr F I1013 00:22:02.450466 1 cmd.go:226] Getting secrets ... 2025-10-13T00:22:02.453184844+00:00 stderr F I1013 00:22:02.453099 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-13 2025-10-13T00:22:02.455989660+00:00 stderr F I1013 00:22:02.455962 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-13 2025-10-13T00:22:02.457854850+00:00 stderr F I1013 00:22:02.457806 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-13 2025-10-13T00:22:02.459598407+00:00 stderr F I1013 00:22:02.459515 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-13: secrets "encryption-config-13" not found 2025-10-13T00:22:02.461263721+00:00 stderr F I1013 00:22:02.461194 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-13 2025-10-13T00:22:02.461285432+00:00 stderr F I1013 00:22:02.461265 1 cmd.go:239] Getting config maps ... 
2025-10-13T00:22:02.463272415+00:00 stderr F I1013 00:22:02.463228 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-13 2025-10-13T00:22:02.465349141+00:00 stderr F I1013 00:22:02.465291 1 copy.go:60] Got configMap openshift-kube-apiserver/config-13 2025-10-13T00:22:02.467447258+00:00 stderr F I1013 00:22:02.467401 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-13 2025-10-13T00:22:02.638308093+00:00 stderr F I1013 00:22:02.638211 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-13 2025-10-13T00:22:02.837931511+00:00 stderr F I1013 00:22:02.837834 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-13 2025-10-13T00:22:03.037032165+00:00 stderr F I1013 00:22:03.036971 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-13 2025-10-13T00:22:03.237476075+00:00 stderr F I1013 00:22:03.237417 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-13 2025-10-13T00:22:03.437214587+00:00 stderr F I1013 00:22:03.437144 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-13 2025-10-13T00:22:03.636420804+00:00 stderr F I1013 00:22:03.636315 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-13: configmaps "cloud-config-13" not found 2025-10-13T00:22:03.837804600+00:00 stderr F I1013 00:22:03.837718 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-13 2025-10-13T00:22:04.038667261+00:00 stderr F I1013 00:22:04.038155 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-13 2025-10-13T00:22:04.038667261+00:00 stderr F I1013 00:22:04.038630 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client" ... 2025-10-13T00:22:04.038985070+00:00 stderr F I1013 00:22:04.038921 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client/tls.crt" ... 2025-10-13T00:22:04.039432652+00:00 stderr F I1013 00:22:04.039347 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client/tls.key" ... 2025-10-13T00:22:04.039544165+00:00 stderr F I1013 00:22:04.039511 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token" ... 2025-10-13T00:22:04.039614057+00:00 stderr F I1013 00:22:04.039587 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/token" ... 2025-10-13T00:22:04.039728270+00:00 stderr F I1013 00:22:04.039700 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/ca.crt" ... 2025-10-13T00:22:04.039840663+00:00 stderr F I1013 00:22:04.039813 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/namespace" ... 2025-10-13T00:22:04.039948316+00:00 stderr F I1013 00:22:04.039920 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-10-13T00:22:04.040184402+00:00 stderr F I1013 00:22:04.040106 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey" ... 
2025-10-13T00:22:04.040237583+00:00 stderr F I1013 00:22:04.040209 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey/tls.crt" ... 2025-10-13T00:22:04.040374267+00:00 stderr F I1013 00:22:04.040344 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey/tls.key" ... 2025-10-13T00:22:04.040491600+00:00 stderr F I1013 00:22:04.040464 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/webhook-authenticator" ... 2025-10-13T00:22:04.040556782+00:00 stderr F I1013 00:22:04.040531 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/webhook-authenticator/kubeConfig" ... 2025-10-13T00:22:04.040674375+00:00 stderr F I1013 00:22:04.040647 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/bound-sa-token-signing-certs" ... 2025-10-13T00:22:04.040793988+00:00 stderr F I1013 00:22:04.040766 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ... 2025-10-13T00:22:04.040907432+00:00 stderr F I1013 00:22:04.040881 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/config" ... 2025-10-13T00:22:04.040985514+00:00 stderr F I1013 00:22:04.040960 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/config/config.yaml" ... 2025-10-13T00:22:04.041105467+00:00 stderr F I1013 00:22:04.041078 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/etcd-serving-ca" ... 2025-10-13T00:22:04.041170569+00:00 stderr F I1013 00:22:04.041144 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/etcd-serving-ca/ca-bundle.crt" ... 2025-10-13T00:22:04.041283342+00:00 stderr F I1013 00:22:04.041256 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-audit-policies" ... 2025-10-13T00:22:04.041421025+00:00 stderr F I1013 00:22:04.041320 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-audit-policies/policy.yaml" ... 2025-10-13T00:22:04.041773975+00:00 stderr F I1013 00:22:04.041729 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-cert-syncer-kubeconfig" ... 2025-10-13T00:22:04.041871938+00:00 stderr F I1013 00:22:04.041830 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ... 2025-10-13T00:22:04.042044712+00:00 stderr F I1013 00:22:04.042003 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod" ... 2025-10-13T00:22:04.042135955+00:00 stderr F I1013 00:22:04.042097 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/version" ... 2025-10-13T00:22:04.042309349+00:00 stderr F I1013 00:22:04.042268 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/forceRedeploymentReason" ... 
2025-10-13T00:22:04.042488564+00:00 stderr F I1013 00:22:04.042446 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ... 2025-10-13T00:22:04.042755311+00:00 stderr F I1013 00:22:04.042713 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/pod.yaml" ... 2025-10-13T00:22:04.043005048+00:00 stderr F I1013 00:22:04.042964 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kubelet-serving-ca" ... 2025-10-13T00:22:04.043180393+00:00 stderr F I1013 00:22:04.043141 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kubelet-serving-ca/ca-bundle.crt" ... 2025-10-13T00:22:04.043435890+00:00 stderr F I1013 00:22:04.043394 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs" ... 2025-10-13T00:22:04.043539442+00:00 stderr F I1013 00:22:04.043500 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-003.pub" ... 2025-10-13T00:22:04.043711657+00:00 stderr F I1013 00:22:04.043671 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-001.pub" ... 2025-10-13T00:22:04.043888392+00:00 stderr F I1013 00:22:04.043846 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-002.pub" ... 2025-10-13T00:22:04.044055346+00:00 stderr F I1013 00:22:04.044028 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-server-ca" ... 2025-10-13T00:22:04.044194840+00:00 stderr F I1013 00:22:04.044148 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ... 2025-10-13T00:22:04.044403566+00:00 stderr F I1013 00:22:04.044373 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/oauth-metadata" ... 2025-10-13T00:22:04.044501798+00:00 stderr F I1013 00:22:04.044475 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/oauth-metadata/oauthMetadata" ... 2025-10-13T00:22:04.044988571+00:00 stderr F I1013 00:22:04.044944 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ... 2025-10-13T00:22:04.044988571+00:00 stderr F I1013 00:22:04.044966 1 cmd.go:226] Getting secrets ... 
2025-10-13T00:22:04.237052806+00:00 stderr F I1013 00:22:04.236974 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client 2025-10-13T00:22:04.436521020+00:00 stderr F I1013 00:22:04.436478 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key 2025-10-13T00:22:04.637561237+00:00 stderr F I1013 00:22:04.637434 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key 2025-10-13T00:22:04.837173255+00:00 stderr F I1013 00:22:04.837112 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key 2025-10-13T00:22:05.037941174+00:00 stderr F I1013 00:22:05.037843 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey 2025-10-13T00:22:05.237869841+00:00 stderr F I1013 00:22:05.237807 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey 2025-10-13T00:22:05.437155080+00:00 stderr F I1013 00:22:05.437075 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client 2025-10-13T00:22:05.639688876+00:00 stderr F I1013 00:22:05.639631 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey 2025-10-13T00:22:05.839602631+00:00 stderr F I1013 00:22:05.839462 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs 2025-10-13T00:22:06.036352132+00:00 stderr F I1013 00:22:06.036289 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey 2025-10-13T00:22:06.239528476+00:00 stderr F I1013 00:22:06.236575 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found 2025-10-13T00:22:06.437008487+00:00 stderr F I1013 00:22:06.436925 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found 2025-10-13T00:22:06.636887522+00:00 stderr F I1013 00:22:06.636823 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found 2025-10-13T00:22:06.837523348+00:00 stderr F I1013 00:22:06.837440 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found 2025-10-13T00:22:07.037405003+00:00 stderr F I1013 00:22:07.037254 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found 2025-10-13T00:22:07.237096333+00:00 stderr F I1013 00:22:07.237014 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found 2025-10-13T00:22:07.437360739+00:00 stderr F I1013 00:22:07.437260 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found 2025-10-13T00:22:07.636489084+00:00 stderr F I1013 00:22:07.636416 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found 2025-10-13T00:22:07.838646980+00:00 stderr F I1013 00:22:07.838583 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found 2025-10-13T00:22:08.037629801+00:00 stderr F I1013 00:22:08.037539 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found 2025-10-13T00:22:08.237552118+00:00 stderr F I1013 00:22:08.236730 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets 
"user-serving-cert-009" not found 2025-10-13T00:22:08.237552118+00:00 stderr F I1013 00:22:08.236754 1 cmd.go:239] Getting config maps ... 2025-10-13T00:22:08.437004541+00:00 stderr F I1013 00:22:08.436958 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca 2025-10-13T00:22:08.637152714+00:00 stderr F I1013 00:22:08.637115 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig 2025-10-13T00:22:08.836981067+00:00 stderr F I1013 00:22:08.836906 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca 2025-10-13T00:22:09.037611683+00:00 stderr F I1013 00:22:09.037557 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig 2025-10-13T00:22:09.246100129+00:00 stderr F I1013 00:22:09.246050 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle 2025-10-13T00:22:09.246476059+00:00 stderr F I1013 00:22:09.246382 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ... 2025-10-13T00:22:09.246476059+00:00 stderr F I1013 00:22:09.246435 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ... 2025-10-13T00:22:09.246701366+00:00 stderr F I1013 00:22:09.246660 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ... 2025-10-13T00:22:09.248442192+00:00 stderr F I1013 00:22:09.247653 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ... 2025-10-13T00:22:09.248442192+00:00 stderr F I1013 00:22:09.248428 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ... 2025-10-13T00:22:09.248643498+00:00 stderr F I1013 00:22:09.248564 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ... 2025-10-13T00:22:09.248719740+00:00 stderr F I1013 00:22:09.248695 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ... 2025-10-13T00:22:09.248719740+00:00 stderr F I1013 00:22:09.248715 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ... 2025-10-13T00:22:09.248857823+00:00 stderr F I1013 00:22:09.248829 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ... 2025-10-13T00:22:09.249022938+00:00 stderr F I1013 00:22:09.248938 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ... 2025-10-13T00:22:09.249022938+00:00 stderr F I1013 00:22:09.248956 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ... 2025-10-13T00:22:09.249082850+00:00 stderr F I1013 00:22:09.249066 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ... 
2025-10-13T00:22:09.249206273+00:00 stderr F I1013 00:22:09.249172 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ... 2025-10-13T00:22:09.249206273+00:00 stderr F I1013 00:22:09.249191 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ... 2025-10-13T00:22:09.250527978+00:00 stderr F I1013 00:22:09.249321 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ... 2025-10-13T00:22:09.250629961+00:00 stderr F I1013 00:22:09.250608 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ... 2025-10-13T00:22:09.250629961+00:00 stderr F I1013 00:22:09.250624 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ... 2025-10-13T00:22:09.250793626+00:00 stderr F I1013 00:22:09.250739 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ... 2025-10-13T00:22:09.250992221+00:00 stderr F I1013 00:22:09.250871 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ... 2025-10-13T00:22:09.250992221+00:00 stderr F I1013 00:22:09.250889 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ... 2025-10-13T00:22:09.251064963+00:00 stderr F I1013 00:22:09.251044 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ... 2025-10-13T00:22:09.251191326+00:00 stderr F I1013 00:22:09.251162 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ... 2025-10-13T00:22:09.251191326+00:00 stderr F I1013 00:22:09.251182 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ... 2025-10-13T00:22:09.251454253+00:00 stderr F I1013 00:22:09.251314 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ... 2025-10-13T00:22:09.251528795+00:00 stderr F I1013 00:22:09.251471 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ... 2025-10-13T00:22:09.251528795+00:00 stderr F I1013 00:22:09.251489 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ... 2025-10-13T00:22:09.251651989+00:00 stderr F I1013 00:22:09.251623 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ... 2025-10-13T00:22:09.251772352+00:00 stderr F I1013 00:22:09.251744 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ... 
2025-10-13T00:22:09.251888645+00:00 stderr F I1013 00:22:09.251861 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ... 2025-10-13T00:22:09.252007128+00:00 stderr F I1013 00:22:09.251980 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ... 2025-10-13T00:22:09.252007128+00:00 stderr F I1013 00:22:09.251997 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ... 2025-10-13T00:22:09.252231294+00:00 stderr F I1013 00:22:09.252126 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ... 2025-10-13T00:22:09.252375158+00:00 stderr F I1013 00:22:09.252250 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ... 2025-10-13T00:22:09.252375158+00:00 stderr F I1013 00:22:09.252269 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-10-13T00:22:09.252560913+00:00 stderr F I1013 00:22:09.252443 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ... 2025-10-13T00:22:09.252560913+00:00 stderr F I1013 00:22:09.252474 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ... 2025-10-13T00:22:09.252625055+00:00 stderr F I1013 00:22:09.252604 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ... 2025-10-13T00:22:09.252635725+00:00 stderr F I1013 00:22:09.252625 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-10-13T00:22:09.253059676+00:00 stderr F I1013 00:22:09.252976 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ... 2025-10-13T00:22:09.253059676+00:00 stderr F I1013 00:22:09.253000 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ... 2025-10-13T00:22:09.253151019+00:00 stderr F I1013 00:22:09.253121 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ... 2025-10-13T00:22:09.253180480+00:00 stderr F I1013 00:22:09.253164 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-10-13T00:22:09.253513739+00:00 stderr F I1013 00:22:09.253487 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-13 -n openshift-kube-apiserver 2025-10-13T00:22:09.438021799+00:00 stderr F I1013 00:22:09.437977 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 
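Editor's note: the copy phase logged above repeats one pattern throughout: fetch a secret or config map, create a per-resource directory under the static-pod resource dir, and write each data key as a file; missing optional resources such as encryption-config-13 or the user-serving-cert-* secrets are only logged and skipped. A minimal sketch of that pattern, assuming client-go; the function signature and permissions are illustrative, not the installer's actual code:

```go
package installer

import (
	"context"
	"os"
	"path/filepath"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// copySecretToDisk mirrors the "Got secret ... / Writing secret manifest ..."
// entries above: each key of the secret becomes a file under
// <resourceDir>/secrets/<name>/. Optional secrets that do not exist are
// skipped, matching the "Failed to get secret ...: not found" lines.
func copySecretToDisk(client kubernetes.Interface, ns, name, resourceDir string, optional bool) error {
	sec, err := client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) && optional {
		return nil
	}
	if err != nil {
		return err
	}
	dir := filepath.Join(resourceDir, "secrets", name)
	if err := os.MkdirAll(dir, 0o700); err != nil {
		return err
	}
	for key, value := range sec.Data {
		if err := os.WriteFile(filepath.Join(dir, key), value, 0o600); err != nil {
			return err
		}
	}
	return nil
}
```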
2025-10-13T00:22:09.438116792+00:00 stderr F I1013 00:22:09.438103 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-10-13T00:22:09.438116792+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"13"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=13","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-10-13T00:22:09.441585955+00:00 stderr F I1013 00:22:09.441561 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/kube-apiserver-startup-monitor-pod.yaml" ... 2025-10-13T00:22:09.441826122+00:00 stderr F I1013 00:22:09.441810 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 
2025-10-13T00:22:09.441826122+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"13"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=13","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-10-13T00:22:09.441970586+00:00 stderr F I1013 00:22:09.441950 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2025-10-13T00:22:09.441970586+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"13"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to 
exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. 
This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"13"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"
/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50M 2025-10-13T00:22:09.442089779+00:00 stderr F i"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-10-13T00:22:09.523414946+00:00 stderr F I1013 00:22:09.523349 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/kube-apiserver-pod.yaml" ... 2025-10-13T00:22:09.524190237+00:00 stderr F I1013 00:22:09.524163 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-10-13T00:22:09.524257458+00:00 stderr F I1013 00:22:09.524221 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
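Editor's note: the pod spec dumped above guards kube-apiserver startup with an exclusive flock on /var/log/kube-apiserver/.lock: the setup init container waits up to 15s for the old process to release it, and the main container re-acquires it with a 30s budget before exec'ing the apiserver. A minimal Go sketch of that acquire-with-timeout idea using the stdlib flock syscall; the polling loop stands in for `flock -w N` and is an assumption, not the operator's implementation:

```go
package lockwait

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireExclusive takes an exclusive, non-blocking flock on path, retrying
// until the deadline expires, roughly what the `flock --verbose -w N` calls
// in the container scripts above achieve.
func acquireExclusive(path string, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
		if err == nil {
			// Caller releases with syscall.Flock(int(f.Fd()), syscall.LOCK_UN).
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("lock %s still held after %s: %w", path, timeout, err)
		}
		time.Sleep(time.Second)
	}
}
```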
2025-10-13T00:22:09.524257458+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"13"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. 
There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"13"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resour
ces":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"re 2025-10-13T00:22:09.524282149+00:00 stderr F quests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-10-13T00:22:09.641386718+00:00 stderr F W1013 00:22:09.639762 1 recorder.go:205] Error creating event &Event{ObjectMeta:{installer-13-crc.186de51a131f5c36 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-13-crc,UID:b0a4ec02-9b6b-400a-9633-c11280799f07,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerCompleted,Message:Successfully installed revision 
13,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-10-13 00:22:09.524464694 +0000 UTC m=+37.769354374,LastTimestamp:2025-10-13 00:22:09.524464694 +0000 UTC m=+37.769354374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311/image-pruner/0.log
2025-10-13T00:16:04.094593536+00:00 stderr F Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-10-13T00:16:04.441041297+00:00 stderr F error: failed to ping registry https://image-registry.openshift-image-registry.svc:5000: Get "https://image-registry.openshift-image-registry.svc:5000/": dial tcp 10.217.4.41:5000: connect: connection refused
2025-10-13T00:16:04.449215439+00:00 stderr F attempt #1 has failed (exit code 1), going to make another attempt...
2025-10-13T00:16:34.755962133+00:00 stderr F Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-10-13T00:16:34.953586191+00:00 stderr F error: failed to ping registry https://image-registry.openshift-image-registry.svc:5000: Get "https://image-registry.openshift-image-registry.svc:5000/": dial tcp 10.217.4.41:5000: connect: connection refused
2025-10-13T00:16:34.964816951+00:00 stderr F attempt #2 has failed (exit code 1), going to make another attempt...
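Editor's note: the image-pruner job wrapper simply retries the whole run until the registry answers; attempt #3 below succeeds once the registry is back. A minimal sketch of a ping-with-retry loop of that shape, stdlib only; the endpoint and retry wording come from the log, the helper itself is illustrative:

```go
package pruneretry

import (
	"fmt"
	"net/http"
	"time"
)

// pingRegistry retries a GET against the registry endpoint until it responds
// or attempts run out, mirroring the "attempt #N has failed ... going to make
// another attempt" wrapper around the pruner job.
func pingRegistry(url string, attempts int, wait time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second}
	var lastErr error
	for i := 1; i <= attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("attempt #%d has failed (%v), going to make another attempt...\n", i, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("registry %s unreachable after %d attempts: %w", url, attempts, lastErr)
}
```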
2025-10-13T00:17:35.195555521+00:00 stderr F Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-10-13T00:17:35.455630856+00:00 stderr F I1013 00:17:35.455579 39 prune.go:348] Creating image pruner with keepYoungerThan=1h0m0s, keepTagRevisions=3, pruneOverSizeLimit=, allImages=true
2025-10-13T00:17:35.570408741+00:00 stdout F Summary: deleted 0 objects
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log
2025-10-13T00:15:09.576493804+00:00 stdout F /etc/hosts.tmp /etc/hosts differ: char 159, line 3
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log
2025-10-13T00:15:01.146002893+00:00 stderr F I1013 00:15:01.144431 1 cmd.go:233] Using 
service-serving-cert provided certificates 2025-10-13T00:15:01.146002893+00:00 stderr F I1013 00:15:01.144977 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:15:01.146002893+00:00 stderr F I1013 00:15:01.145785 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:01.309559813+00:00 stderr F I1013 00:15:01.309409 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-10-13T00:15:01.324403428+00:00 stderr F I1013 00:15:01.323803 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:01.918500798+00:00 stderr F I1013 00:15:01.916223 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:01.934453226+00:00 stderr F I1013 00:15:01.932820 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:01.934453226+00:00 stderr F I1013 00:15:01.932839 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:01.934453226+00:00 stderr F I1013 00:15:01.932854 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:15:01.934453226+00:00 stderr F I1013 00:15:01.932859 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:15:01.939710204+00:00 stderr F I1013 00:15:01.939234 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:01.939710204+00:00 stderr F W1013 00:15:01.939253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:01.939710204+00:00 stderr F W1013 00:15:01.939262 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:01.939710204+00:00 stderr F I1013 00:15:01.939446 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:01.945168287+00:00 stderr F I1013 00:15:01.945138 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:01.951269560+00:00 stderr F I1013 00:15:01.951231 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 
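Editor's note: the operator start-up entries above load a serving cert/key pair from /var/run/secrets/serving-cert and, shortly after, a set of client CAs taken from the extension-apiserver-authentication config map. A minimal stdlib sketch of assembling such a tls.Config; the file paths reuse the ones in the log, and the helper is a static approximation, not the operator's dynamic-reloading code:

```go
package servingtls

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// newServingTLSConfig loads a serving certificate and an optional client-CA
// bundle, the same inputs the "Loaded serving cert" and "Loaded client CA"
// entries above refer to (without their hot reloading).
func newServingTLSConfig(certFile, keyFile, clientCAFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   tls.VersionTLS12,
	}
	if clientCAFile != "" {
		pem, err := os.ReadFile(clientCAFile)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("no client CA certificates found in %s", clientCAFile)
		}
		cfg.ClientCAs = pool
		cfg.ClientAuth = tls.VerifyClientCertIfGiven
	}
	return cfg, nil
}
```

Example use: serve metrics on :8443 with http.ListenAndServeTLS after wiring this config into an http.Server, which matches the "Serving securely on [::]:8443" entry below.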
2025-10-13T00:15:01.951575619+00:00 stderr F I1013 00:15:01.951547 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:01.951575619+00:00 stderr F I1013 00:15:01.951567 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:01.951756435+00:00 stderr F I1013 00:15:01.951637 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:01.951756435+00:00 stderr F I1013 00:15:01.951657 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:01.951756435+00:00 stderr F I1013 00:15:01.951679 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:01.951756435+00:00 stderr F I1013 00:15:01.951684 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:01.952439835+00:00 stderr F I1013 00:15:01.952415 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-10-13 00:15:01.952319642 +0000 UTC))" 2025-10-13T00:15:01.952697633+00:00 stderr F I1013 00:15:01.952681 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314501\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314501\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:01.952663102 +0000 UTC))" 2025-10-13T00:15:01.952707233+00:00 stderr F I1013 00:15:01.952700 1 secure_serving.go:210] Serving securely on [::]:8443 2025-10-13T00:15:01.952748905+00:00 stderr F I1013 00:15:01.952720 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:01.952748905+00:00 stderr F I1013 00:15:01.952740 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:01.952958451+00:00 stderr F I1013 00:15:01.952938 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:02.052802762+00:00 stderr F I1013 00:15:02.052736 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:02.052935086+00:00 stderr F I1013 00:15:02.052905 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.052948417+00:00 stderr F I1013 00:15:02.052942 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.053557775+00:00 stderr F I1013 00:15:02.053526 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:02.053490653 +0000 UTC))" 2025-10-13T00:15:02.053571335+00:00 stderr F I1013 00:15:02.053560 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:02.053543215 +0000 UTC))" 2025-10-13T00:15:02.053611627+00:00 stderr F I1013 00:15:02.053585 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.053566385 +0000 UTC))" 2025-10-13T00:15:02.053621497+00:00 stderr F I1013 00:15:02.053613 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.053595496 +0000 UTC))" 2025-10-13T00:15:02.053654538+00:00 stderr F I1013 00:15:02.053637 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.053619737 +0000 UTC))" 2025-10-13T00:15:02.053675969+00:00 stderr F I1013 00:15:02.053663 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.053646968 +0000 UTC))" 2025-10-13T00:15:02.053700149+00:00 stderr F I1013 00:15:02.053684 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.053668458 +0000 UTC))" 2025-10-13T00:15:02.053725080+00:00 stderr F I1013 00:15:02.053709 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.053693349 +0000 UTC))" 2025-10-13T00:15:02.053769081+00:00 stderr F I1013 00:15:02.053744 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:02.05371847 +0000 UTC))" 2025-10-13T00:15:02.053777022+00:00 stderr F I1013 00:15:02.053771 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:02.053756971 +0000 UTC))" 2025-10-13T00:15:02.054417321+00:00 stderr F I1013 00:15:02.054387 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-10-13 00:15:02.054365209 +0000 UTC))" 2025-10-13T00:15:02.054968847+00:00 stderr F I1013 00:15:02.054930 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314501\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314501\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.054855294 +0000 UTC))" 2025-10-13T00:15:02.055177304+00:00 stderr F I1013 00:15:02.055141 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:02.055123902 +0000 UTC))" 2025-10-13T00:15:02.055196854+00:00 stderr F I1013 00:15:02.055174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:02.055156863 +0000 UTC))" 2025-10-13T00:15:02.055206234+00:00 stderr F I1013 00:15:02.055197 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.055180744 +0000 UTC))" 2025-10-13T00:15:02.055237085+00:00 stderr F I1013 00:15:02.055218 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.055203174 +0000 UTC))" 2025-10-13T00:15:02.055265126+00:00 stderr F I1013 00:15:02.055245 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.055228435 +0000 UTC))" 2025-10-13T00:15:02.055278877+00:00 stderr F I1013 00:15:02.055271 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.055255326 +0000 UTC))" 2025-10-13T00:15:02.055311838+00:00 stderr F I1013 00:15:02.055294 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.055276866 +0000 UTC))" 2025-10-13T00:15:02.055360459+00:00 stderr F I1013 00:15:02.055340 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.055303267 +0000 UTC))" 2025-10-13T00:15:02.055415731+00:00 stderr F I1013 00:15:02.055386 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:02.055350919 +0000 UTC))" 2025-10-13T00:15:02.055435711+00:00 stderr F I1013 00:15:02.055417 1 tlsconfig.go:178] "Loaded client CA" index=9 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:02.05540179 +0000 UTC))" 2025-10-13T00:15:02.055447052+00:00 stderr F I1013 00:15:02.055440 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.055423201 +0000 UTC))" 2025-10-13T00:15:02.055959317+00:00 stderr F I1013 00:15:02.055924 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-10-13 00:15:02.055901825 +0000 UTC))" 2025-10-13T00:15:02.057169313+00:00 stderr F I1013 00:15:02.056856 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314501\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314501\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.056775091 +0000 UTC))" 2025-10-13T00:19:35.141774759+00:00 stderr F I1013 00:19:35.140643 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2025-10-13T00:19:35.142728327+00:00 stderr F I1013 00:19:35.140917 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41785", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_d6faa6dd-bae0-47ca-929a-7b87b5300c45 became leader 2025-10-13T00:19:35.153075595+00:00 stderr F I1013 00:19:35.153005 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:19:35.153361564+00:00 stderr F I1013 00:19:35.153314 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-10-13T00:19:35.153433636+00:00 stderr F I1013 00:19:35.153416 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:19:35.155982611+00:00 stderr F I1013 00:19:35.155954 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2025-10-13T00:19:35.253531032+00:00 stderr F I1013 00:19:35.253453 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:19:35.253531032+00:00 stderr F I1013 00:19:35.253483 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-10-13T00:19:35.253731768+00:00 stderr F I1013 00:19:35.253690 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:19:35.253731768+00:00 stderr F I1013 00:19:35.253725 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:19:35.257095828+00:00 stderr F I1013 00:19:35.257065 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-10-13T00:19:35.257095828+00:00 stderr F I1013 00:19:35.257082 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-10-13T00:19:35.654441164+00:00 stderr F I1013 00:19:35.654339 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-10-13T00:19:35.654441164+00:00 stderr F I1013 00:19:35.654362 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937171 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.93713123 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937653 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.937629844 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937678 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.937660925 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937725 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.937683705 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937750 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937732157 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937770 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937754737 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937789 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937774118 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937809 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937794098 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937831 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.937813969 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937868 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.937838909 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937892 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.93787424 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.937925 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937908751 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.938390 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-10-13 00:21:11.938369954 +0000 UTC))" 2025-10-13T00:21:11.939562386+00:00 stderr F I1013 00:21:11.938798 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314501\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314501\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:21:11.938780165 +0000 UTC))" 2025-10-13T00:22:35.177304656+00:00 stderr F E1013 00:22:35.176821 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca-operator/service-ca-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000032000000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-o0000644000175000017500000006266415073043233032745 0ustar zuulzuul2025-08-13T20:05:46.695171711+00:00 stderr F I0813 20:05:46.693109 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T20:05:46.698320051+00:00 stderr F I0813 20:05:46.697106 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:05:46.701930994+00:00 stderr F I0813 20:05:46.700578 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:46.766204245+00:00 stderr F I0813 20:05:46.766110 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T20:05:46.768050687+00:00 stderr F I0813 20:05:46.768021 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:47.202189140+00:00 stderr F I0813 20:05:47.202074 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:05:47.209249832+00:00 stderr F I0813 20:05:47.209203 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:05:47.209327574+00:00 stderr F I0813 20:05:47.209313 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:05:47.209388826+00:00 stderr F I0813 20:05:47.209374 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:05:47.209432707+00:00 stderr F I0813 20:05:47.209417 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:05:47.234427463+00:00 stderr F I0813 20:05:47.234338 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:47.234427463+00:00 stderr F W0813 20:05:47.234389 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:47.234427463+00:00 stderr F W0813 20:05:47.234399 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:47.234479114+00:00 stderr F I0813 20:05:47.234405 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:05:47.239726024+00:00 stderr F I0813 20:05:47.239644 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:47.239726024+00:00 stderr F I0813 20:05:47.239690 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:47.239852348+00:00 stderr F I0813 20:05:47.239830 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:47.240207368+00:00 stderr F I0813 20:05:47.240155 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.240124436 +0000 UTC))" 2025-08-13T20:05:47.240207368+00:00 stderr F I0813 20:05:47.240178 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:47.240223789+00:00 stderr F I0813 20:05:47.240207 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240600 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" 
certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.240573979 +0000 UTC))" 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240685 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240714 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:05:47.241303220+00:00 stderr F I0813 20:05:47.241198 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:47.242046351+00:00 stderr F I0813 20:05:47.241956 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:47.242115163+00:00 stderr F I0813 20:05:47.242060 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:47.242115163+00:00 stderr F I0813 20:05:47.242100 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:47.243732009+00:00 stderr F I0813 20:05:47.243709 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 2025-08-13T20:05:47.269716273+00:00 stderr F I0813 20:05:47.269602 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2025-08-13T20:05:47.272188094+00:00 stderr F I0813 20:05:47.272007 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_cd77d967-43e4-48e5-af41-05935a4b105a became leader 2025-08-13T20:05:47.306324662+00:00 stderr F I0813 20:05:47.306185 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:47.328919019+00:00 stderr F I0813 20:05:47.328769 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-08-13T20:05:47.330111923+00:00 stderr F I0813 20:05:47.329537 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:47.330460283+00:00 stderr F I0813 20:05:47.330404 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:47.330460283+00:00 stderr F I0813 20:05:47.330431 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:05:47.336307180+00:00 stderr F I0813 20:05:47.336161 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2025-08-13T20:05:47.341560371+00:00 stderr F I0813 20:05:47.341414 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:47.342178988+00:00 stderr F I0813 20:05:47.342066 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.342018904 +0000 UTC))" 2025-08-13T20:05:47.345069291+00:00 stderr F I0813 20:05:47.344954 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:47.345166924+00:00 stderr F I0813 20:05:47.344954 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:47.345447712+00:00 stderr F I0813 20:05:47.345315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.345247906 +0000 UTC))" 2025-08-13T20:05:47.345686099+00:00 stderr F I0813 20:05:47.345621 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.345601186 +0000 UTC))" 2025-08-13T20:05:47.347929553+00:00 stderr F I0813 20:05:47.347827 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:05:47.347745868 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353588 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:05:47.348304684 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353744 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 
2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:05:47.353713179 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353765 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:05:47.3537514 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353925 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353820102 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353957 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353941695 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353974 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353962766 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354005 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353985766 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354030 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:05:47.354012267 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354054 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:05:47.354040398 
+0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354182 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.35412156 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354568 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.354544052 +0000 UTC))" 2025-08-13T20:05:47.358087384+00:00 stderr F I0813 20:05:47.357824 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.357737924 +0000 UTC))" 2025-08-13T20:05:47.430618321+00:00 stderr F I0813 20:05:47.430480 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-08-13T20:05:47.430668902+00:00 stderr F I0813 20:05:47.430627 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 2025-08-13T20:05:47.437831297+00:00 stderr F I0813 20:05:47.437741 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-08-13T20:05:47.437999972+00:00 stderr F I0813 20:05:47.437834 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-08-13T20:05:47.807892884+00:00 stderr F I0813 20:05:47.807418 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:47.807892884+00:00 stderr F I0813 20:05:47.807476 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:08:49.261077294+00:00 stderr F E0813 20:08:49.260139 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca-operator/service-ca-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386102 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.385986 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386126 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386147 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.396431 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410947878+00:00 stderr F I0813 20:42:36.410909 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.430056448+00:00 stderr F I0813 20:42:36.429968 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.444700 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.444988 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.445099 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.445265 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454712929+00:00 stderr F I0813 20:42:36.453927 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454983407+00:00 stderr F I0813 20:42:36.454916 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.507069389+00:00 stderr F I0813 20:42:36.503562 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516213932+00:00 stderr F I0813 20:42:36.515748 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531605046+00:00 stderr F I0813 20:42:36.516940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531735290+00:00 stderr F I0813 20:42:36.516985 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538946848+00:00 stderr F I0813 20:42:36.519622 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538946848+00:00 stderr F I0813 20:42:36.519747 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.273148956+00:00 stderr F I0813 20:42:40.269755 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:40.275012839+00:00 stderr F I0813 20:42:40.274951 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.275037130+00:00 stderr F I0813 20:42:40.275010 1 base_controller.go:172] Shutting down StatusSyncer_service-ca ... 2025-08-13T20:42:40.275901815+00:00 stderr F I0813 20:42:40.275017 1 base_controller.go:150] All StatusSyncer_service-ca post start hooks have been terminated 2025-08-13T20:42:40.275901815+00:00 stderr F I0813 20:42:40.275890 1 base_controller.go:172] Shutting down ServiceCAOperator ... 2025-08-13T20:42:40.276134082+00:00 stderr F I0813 20:42:40.274141 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.281716873+00:00 stderr F I0813 20:42:40.281678 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:40.281880867+00:00 stderr F I0813 20:42:40.281865 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.281702 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.281741 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.282934 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.273874 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.281762 1 base_controller.go:114] Shutting down worker of ServiceCAOperator controller ... 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.283146 1 base_controller.go:104] All ServiceCAOperator workers have been terminated 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.281754 1 base_controller.go:114] Shutting down worker of StatusSyncer_service-ca controller ... 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.283157 1 base_controller.go:104] All StatusSyncer_service-ca workers have been terminated 2025-08-13T20:42:40.283275118+00:00 stderr F I0813 20:42:40.281769 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:40.283275118+00:00 stderr F I0813 20:42:40.283200 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.284330578+00:00 stderr F I0813 20:42:40.284206 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285683 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285725 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285838 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.285946314+00:00 stderr F I0813 20:42:40.285886 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.287334595+00:00 stderr F I0813 20:42:40.287275 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.287426957+00:00 stderr F I0813 20:42:40.287375 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.287506089+00:00 stderr F I0813 20:42:40.287457 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:40.287506089+00:00 stderr F I0813 20:42:40.287477 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:40.287523390+00:00 stderr F I0813 20:42:40.287504 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:40.287700335+00:00 stderr F I0813 20:42:40.285932 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.288313053+00:00 stderr F I0813 20:42:40.288252 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.289258460+00:00 stderr F I0813 20:42:40.289175 1 builder.go:302] server exited 2025-08-13T20:42:40.290335971+00:00 stderr F E0813 20:42:40.290246 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.291335530+00:00 stderr F I0813 20:42:40.291154 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_cd77d967-43e4-48e5-af41-05935a4b105a stopped leading 2025-08-13T20:42:40.292880134+00:00 stderr F W0813 20:42:40.292702 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log
2025-08-13T19:59:15.609666557+00:00 stderr F I0813 19:59:15.581369 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T19:59:15.609666557+00:00 stderr F I0813 19:59:15.596283 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:15.899439107+00:00 stderr F I0813 19:59:15.895681 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:17.925502651+00:00 stderr F I0813 19:59:17.918436 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T19:59:19.416925586+00:00 stderr F I0813 19:59:19.416426 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:27.199472670+00:00 stderr F I0813 19:59:27.164373 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359754 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359870 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359974 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359983 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:27.539080441+00:00 stderr F I0813 19:59:27.537724 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:27.539080441+00:00 stderr F W0813 19:59:27.538022 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:27.539080441+00:00 stderr F W0813 19:59:27.538031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:27.544388722+00:00 stderr F I0813 19:59:27.541588 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:27.754735019+00:00 stderr F I0813 19:59:27.754668 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:27.783809777+00:00 stderr F I0813 19:59:27.782997 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 
2025-08-13T19:59:27.811830216+00:00 stderr F I0813 19:59:27.811703 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:27.811669361 +0000 UTC))" 2025-08-13T19:59:27.825500576+00:00 stderr F I0813 19:59:27.825462 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:27.825998880+00:00 stderr F I0813 19:59:27.825978 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:27.836639763+00:00 stderr F I0813 19:59:27.836563 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:27.837165378+00:00 stderr F I0813 19:59:27.837143 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:27.838887667+00:00 stderr F I0813 19:59:27.838569 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.838887667+00:00 stderr F I0813 19:59:27.838654 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:28.034964727+00:00 stderr F I0813 19:59:28.032713 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.053879 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:27.812142195 +0000 UTC))" 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054053 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054180 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054368 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:28.207045091+00:00 stderr F I0813 19:59:28.204297 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:28.327321909+00:00 stderr F I0813 19:59:28.291761 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:28.425916980+00:00 stderr F E0813 19:59:28.423919 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.425916980+00:00 stderr F E0813 19:59:28.424055 1 configmap_cafile_content.go:243] key failed with 
: missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.425916980+00:00 stderr F I0813 19:59:28.425035 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:28.425002914 +0000 UTC))" 2025-08-13T19:59:28.425916980+00:00 stderr F I0813 19:59:28.425070 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:28.425050155 +0000 UTC))" 2025-08-13T19:59:28.436945004+00:00 stderr F E0813 19:59:28.436254 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.508061931+00:00 stderr F E0813 19:59:28.507546 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.511589642+00:00 stderr F I0813 19:59:28.511486 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:28.523985505+00:00 stderr F E0813 19:59:28.522980 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.523985505+00:00 stderr F E0813 19:59:28.523131 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.533477246+00:00 stderr F I0813 19:59:28.533326 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:28.425082696 +0000 UTC))" 2025-08-13T19:59:28.533477246+00:00 stderr F I0813 19:59:28.533400 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:28.533377223 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533661 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533417554 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533718 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533696162 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533754 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533733783 +0000 UTC))" 2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533870 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533761984 +0000 UTC))" 2025-08-13T19:59:28.534476044+00:00 stderr F I0813 19:59:28.534395 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:28.534367511 +0000 UTC))" 2025-08-13T19:59:28.562541554+00:00 stderr F E0813 19:59:28.559065 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.562541554+00:00 stderr F E0813 19:59:28.559198 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.600922078+00:00 stderr F E0813 19:59:28.600126 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.629967166+00:00 stderr F E0813 19:59:28.627224 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.687441405+00:00 
stderr F E0813 19:59:28.686150 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.708043192+00:00 stderr F E0813 19:59:28.707397 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.756934746+00:00 stderr F I0813 19:59:28.756063 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2025-08-13T19:59:28.776867694+00:00 stderr F I0813 19:59:28.758535 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28107", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_c8767c2b-8c87-42c2-ac6c-176a996e5e2c became leader 2025-08-13T19:59:28.819010675+00:00 stderr F I0813 19:59:28.813946 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:28.534766172 +0000 UTC))" 2025-08-13T19:59:28.894882238+00:00 stderr F I0813 19:59:28.877043 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:28.910128372+00:00 stderr F E0813 19:59:28.901710 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.910128372+00:00 stderr F E0813 19:59:28.901828 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.951357188+00:00 stderr F I0813 19:59:28.951299 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-08-13T19:59:28.951456871+00:00 stderr F I0813 19:59:28.951439 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:28.951975935+00:00 stderr F I0813 19:59:28.951949 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2025-08-13T19:59:31.323743733+00:00 stderr F I0813 19:59:31.292566 1 request.go:697] Waited for 2.247873836s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0 2025-08-13T19:59:31.532284157+00:00 stderr F I0813 19:59:31.453284 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:31.532284157+00:00 stderr F I0813 19:59:31.532072 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:33.853457023+00:00 stderr F E0813 19:59:33.847947 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.853457023+00:00 stderr F E0813 19:59:33.848557 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.154580 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.154670 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.155324 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.155334 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 2025-08-13T19:59:34.504016287+00:00 stderr F E0813 19:59:34.497677 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.504016287+00:00 stderr F E0813 19:59:34.497764 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.801623925+00:00 stderr F E0813 19:59:35.778348 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.825044193+00:00 stderr F E0813 19:59:35.824071 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.955154397+00:00 stderr F I0813 19:59:36.955035 1 rotate.go:99] Rotating service CA due to the CA being past the mid-point of its validity. 2025-08-13T19:59:38.336152503+00:00 stderr F I0813 19:59:38.333455 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:38.336152503+00:00 stderr F I0813 19:59:38.334696 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:38.364878932+00:00 stderr F E0813 19:59:38.364613 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.386488598+00:00 stderr F E0813 19:59:38.385301 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.234761047+00:00 stderr F I0813 19:59:39.234566 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/signing-key -n openshift-service-ca because it changed 2025-08-13T19:59:39.234761047+00:00 stderr F I0813 19:59:39.234628 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ServiceCARotated' Rotating service CA due to the CA being past the mid-point of its validity. 2025-08-13T19:59:39.234761047+00:00 stderr F The previous CA will be trusted until 2026-09-12T19:59:38Z 2025-08-13T19:59:39.508502940+00:00 stderr F I0813 19:59:39.461379 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/service-ca -n openshift-config-managed: 2025-08-13T19:59:39.508502940+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:39.508502940+00:00 stderr F I0813 19:59:39.477989 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/signing-cabundle -n openshift-service-ca: 2025-08-13T19:59:39.508502940+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:39.756660394+00:00 stderr F I0813 19:59:39.756568 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/service-ca -n openshift-service-ca because it changed 2025-08-13T19:59:39.943240583+00:00 stderr F I0813 19:59:39.941911 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:15Z","message":"Progressing: \nProgressing: service-ca is updating","reason":"_ManagedDeploymentsAvailable","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}} 2025-08-13T19:59:40.032656072+00:00 stderr F I0813 19:59:40.032507 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing message changed from "Progressing: \nProgressing: service-ca does not have available replicas" to "Progressing: \nProgressing: service-ca is updating" 2025-08-13T19:59:40.563249286+00:00 stderr F I0813 19:59:40.559852 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/service-ca -n openshift-service-ca because it changed 2025-08-13T19:59:40.887673554+00:00 stderr F I0813 19:59:40.887078 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:15Z","message":"Progressing: \nProgressing: service-ca does not have available replicas","reason":"_ManagedDeploymentsAvailable","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}} 2025-08-13T19:59:41.121658674+00:00 stderr F I0813 19:59:41.119997 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing message changed from "Progressing: \nProgressing: service-ca is updating" to "Progressing: \nProgressing: service-ca does not have available replicas" 2025-08-13T19:59:43.486698869+00:00 stderr F E0813 19:59:43.485454 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.520859453+00:00 stderr F E0813 19:59:43.515370 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.907020147+00:00 stderr F I0813 19:59:51.906115 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.905972337 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916595 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.906769359 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916660 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.916633141 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916891 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.916724353 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916936 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916913229 +0000 UTC))" 2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916967 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.91694589 +0000 UTC))" 2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942353 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942279082 +0000 UTC))" 2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942455 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942436646 +0000 UTC))" 2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942475 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942462417 +0000 UTC))" 2025-08-13T19:59:51.945436062+00:00 stderr F I0813 19:59:51.945304 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:51.945250866 +0000 UTC))" 2025-08-13T19:59:51.955585631+00:00 stderr F I0813 19:59:51.955361 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:51.95521261 +0000 UTC))" 2025-08-13T19:59:52.001613473+00:00 stderr F I0813 19:59:51.999483 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:53.136393990+00:00 stderr F I0813 19:59:53.135266 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Progressing: All service-ca-operator deployments updated","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}} 2025-08-13T19:59:53.166916290+00:00 stderr F I0813 19:59:53.166657 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") 2025-08-13T20:00:05.796004788+00:00 stderr F I0813 20:00:05.775547 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.775474452 +0000 UTC))" 2025-08-13T20:00:05.796196303+00:00 stderr F I0813 20:00:05.796174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] 
issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.796110161 +0000 UTC))" 2025-08-13T20:00:05.796261825+00:00 stderr F I0813 20:00:05.796245 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.796221094 +0000 UTC))" 2025-08-13T20:00:05.796323487+00:00 stderr F I0813 20:00:05.796308 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.796285846 +0000 UTC))" 2025-08-13T20:00:05.796417939+00:00 stderr F I0813 20:00:05.796360 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796344797 +0000 UTC))" 2025-08-13T20:00:05.796482201+00:00 stderr F I0813 20:00:05.796468 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.79644076 +0000 UTC))" 2025-08-13T20:00:05.796551873+00:00 stderr F I0813 20:00:05.796533 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796501622 +0000 UTC))" 2025-08-13T20:00:05.796614985+00:00 stderr F I0813 20:00:05.796599 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796577774 +0000 UTC))" 2025-08-13T20:00:05.796687467+00:00 stderr F I0813 20:00:05.796674 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" 
[] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.796658416 +0000 UTC))" 2025-08-13T20:00:05.796747519+00:00 stderr F I0813 20:00:05.796732 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796708588 +0000 UTC))" 2025-08-13T20:00:05.797270254+00:00 stderr F I0813 20:00:05.797212 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 20:00:05.797190811 +0000 UTC))" 2025-08-13T20:00:05.797677035+00:00 stderr F I0813 20:00:05.797646 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:00:05.797621284 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.968609 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.968081498 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997119 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.997060114 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997182 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997132747 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997201 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 
20:00:59.997189298 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997284 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997211129 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997369 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997293401 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997459 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997377464 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997485 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997469856 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997511 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.997495797 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997553 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.997524678 +0000 UTC))" 2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997573 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997561949 +0000 UTC))" 
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.998251 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 20:00:59.998212147 +0000 UTC))" 2025-08-13T20:01:00.024263490+00:00 stderr F I0813 20:01:00.015459 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:01:00.015395877 +0000 UTC))" 2025-08-13T20:01:31.257242288+00:00 stderr F I0813 20:01:31.255926 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.257353131+00:00 stderr F I0813 20:01:31.257289 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:31.258978588+00:00 stderr F I0813 20:01:31.258095 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259200 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:31.259092921 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259309 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:31.259262486 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259346 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:31.259318087 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259371 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:31.259352518 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259397 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259385379 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259415 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.25940386 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259432 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.25942063 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259483 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259437051 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259500 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:31.259488302 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259542 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:31.259529983 +0000 UTC))" 2025-08-13T20:01:31.259630776+00:00 stderr F I0813 20:01:31.259608 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259554514 +0000 UTC))" 2025-08-13T20:01:31.260928813+00:00 stderr F I0813 20:01:31.260054 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:01:31.260027158 +0000 UTC))" 2025-08-13T20:01:31.260928813+00:00 stderr F I0813 20:01:31.260391 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:01:31.260373778 +0000 UTC))" 2025-08-13T20:01:36.007462308+00:00 stderr F I0813 20:01:36.007012 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="f269835458ddba2137165862fff7fd217525c9e40a9c6f489e3967be4db60372", new="cfa284ff196c7f240b2c9d12ecb58ca59660cd15e39fec28f4ebbfa7f9c29fb3") 2025-08-13T20:01:36.007530580+00:00 stderr F W0813 20:01:36.007467 1 builder.go:132] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:36.007639663+00:00 stderr F I0813 20:01:36.007582 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="6f1dcb91ce339e53ed60d5e5e4ed5a83d1b7ad3e174dad355b404f5099a93f97", new="31318292ca2ee5278a39f1090b20b1f370fcbc25269641a3cee345c20dd36b58") 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008876 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008946 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008994 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009336 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009375 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009395 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009449 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009486 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011416 1 configmap_cafile_content.go:223] "Shutting down controller" 
name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011631 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011682 1 base_controller.go:172] Shutting down ServiceCAOperator ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011699 1 base_controller.go:172] Shutting down StatusSyncer_service-ca ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012644 1 base_controller.go:150] All StatusSyncer_service-ca post start hooks have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011709 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012354 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012677 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012363 1 base_controller.go:114] Shutting down worker of ServiceCAOperator controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012712 1 base_controller.go:104] All ServiceCAOperator workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012569 1 base_controller.go:114] Shutting down worker of StatusSyncer_service-ca controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012762 1 base_controller.go:104] All StatusSyncer_service-ca workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012813 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012888 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012916 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012923 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:01:36.013602943+00:00 stderr F I0813 20:01:36.013542 1 secure_serving.go:255] Stopped listening on [::]:8443
2025-08-13T20:01:36.013618823+00:00 stderr F I0813 20:01:36.013595 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:01:36.013618823+00:00 stderr F I0813 20:01:36.013613 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:01:36.014059546+00:00 stderr F I0813 20:01:36.014014 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:01:36.014082177+00:00 stderr F I0813 20:01:36.014063 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting
2025-08-13T20:01:36.014145768+00:00 stderr F I0813 20:01:36.014102 1 builder.go:302] server exited
2025-08-13T20:01:40.236433072+00:00 stderr F W0813 20:01:40.231558 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log
2025-10-13T00:12:49.803621744+00:00 stderr F I1013 00:12:49.803356 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n"
2025-10-13T00:12:49.819759294+00:00 stderr F I1013 00:12:49.819583 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics"
2025-10-13T00:12:49.863117050+00:00 stderr F I1013 00:12:49.863009 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy"
2025-10-13T00:12:49.863146151+00:00 stderr F I1013 00:12:49.863124 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template"
2025-10-13T00:12:49.866803665+00:00 stderr F I1013 00:12:49.864405 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private"
2025-10-13T00:12:49.866803665+00:00 stderr F I1013 00:12:49.864488 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router"
2025-10-13T00:12:50.573733264+00:00 stderr
F I1013 00:12:50.573660 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:50.573733264+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:50.573733264+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:51.578375811+00:00 stderr F I1013 00:12:51.575096 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:51.578375811+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:51.578375811+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:52.569769241+00:00 stderr F I1013 00:12:52.569687 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:52.569769241+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:52.569769241+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:53.570854537+00:00 stderr F I1013 00:12:53.570775 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:53.570854537+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:53.570854537+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:54.571644464+00:00 stderr F I1013 00:12:54.571586 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:54.571644464+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:54.571644464+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:55.570421175+00:00 stderr F I1013 00:12:55.570362 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:55.570421175+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:55.570421175+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:56.570863643+00:00 stderr F I1013 00:12:56.570772 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:56.570863643+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:56.570863643+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:57.571922229+00:00 stderr F I1013 00:12:57.571860 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:57.571922229+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:57.571922229+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:58.570740729+00:00 stderr F I1013 00:12:58.570630 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:58.570740729+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:58.570740729+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:12:59.570686363+00:00 stderr F I1013 00:12:59.570601 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:12:59.570686363+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:12:59.570686363+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:00.571322657+00:00 stderr F I1013 00:13:00.570697 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:00.571322657+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:00.571322657+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:01.569660405+00:00 stderr F I1013 00:13:01.569604 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:01.569660405+00:00 stderr F 
[-]backend-http failed: backend reported failure 2025-10-13T00:13:01.569660405+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:02.570766911+00:00 stderr F I1013 00:13:02.570677 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:02.570766911+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:02.570766911+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:03.571407805+00:00 stderr F I1013 00:13:03.571281 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:03.571407805+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:03.571407805+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:04.571463422+00:00 stderr F I1013 00:13:04.571319 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:04.571463422+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:04.571463422+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:05.570591482+00:00 stderr F I1013 00:13:05.570515 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:05.570591482+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:05.570591482+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:06.570856295+00:00 stderr F I1013 00:13:06.570793 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:06.570856295+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:06.570856295+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:07.570396227+00:00 stderr F I1013 00:13:07.570294 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:07.570396227+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:07.570396227+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:08.571086193+00:00 stderr F I1013 00:13:08.571033 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:08.571086193+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:08.571086193+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:09.570820190+00:00 stderr F I1013 00:13:09.570116 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:09.570820190+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:09.570820190+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:10.570809185+00:00 stderr F I1013 00:13:10.570689 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:10.570809185+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:10.570809185+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:11.571186712+00:00 stderr F I1013 00:13:11.570924 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:11.571186712+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:11.571186712+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:12.570268660+00:00 stderr F I1013 00:13:12.570214 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:12.570268660+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:12.570268660+00:00 stderr F [-]has-synced failed: Router not synced 
2025-10-13T00:13:13.571028947+00:00 stderr F I1013 00:13:13.570919 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:13.571028947+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:13.571028947+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:14.570428686+00:00 stderr F I1013 00:13:14.570356 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:14.570428686+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:14.570428686+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:15.570449622+00:00 stderr F I1013 00:13:15.570364 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:15.570449622+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:15.570449622+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:16.571272040+00:00 stderr F I1013 00:13:16.571130 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:16.571272040+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:16.571272040+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:17.571878463+00:00 stderr F I1013 00:13:17.571772 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:17.571878463+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:17.571878463+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:18.570076867+00:00 stderr F I1013 00:13:18.570021 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:18.570076867+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:18.570076867+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:19.571165252+00:00 stderr F I1013 00:13:19.571050 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:19.571165252+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:19.571165252+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:19.863192920+00:00 stderr F W1013 00:13:19.863075 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.863192920+00:00 stderr F I1013 00:13:19.863156 1 trace.go:236] Trace[1479605919]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Oct-2025 00:12:49.858) (total time: 30004ms): 2025-10-13T00:13:19.863192920+00:00 stderr F Trace[1479605919]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30004ms (00:13:19.863) 2025-10-13T00:13:19.863192920+00:00 stderr F Trace[1479605919]: [30.004625269s] [30.004625269s] END 2025-10-13T00:13:19.863192920+00:00 stderr F E1013 00:13:19.863181 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.873885805+00:00 stderr F W1013 00:13:19.873748 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list 
*v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.873885805+00:00 stderr F W1013 00:13:19.873793 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.873885805+00:00 stderr F I1013 00:13:19.873860 1 trace.go:236] Trace[1609579954]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Oct-2025 00:12:49.870) (total time: 30003ms): 2025-10-13T00:13:19.873885805+00:00 stderr F Trace[1609579954]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (00:13:19.873) 2025-10-13T00:13:19.873885805+00:00 stderr F Trace[1609579954]: [30.003162078s] [30.003162078s] END 2025-10-13T00:13:19.873972557+00:00 stderr F E1013 00:13:19.873892 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.873972557+00:00 stderr F I1013 00:13:19.873895 1 trace.go:236] Trace[1291147250]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Oct-2025 00:12:49.870) (total time: 30003ms): 2025-10-13T00:13:19.873972557+00:00 stderr F Trace[1291147250]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (00:13:19.873) 2025-10-13T00:13:19.873972557+00:00 stderr F Trace[1291147250]: [30.003126206s] [30.003126206s] END 2025-10-13T00:13:19.873972557+00:00 stderr F E1013 00:13:19.873913 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:20.571270951+00:00 stderr F I1013 00:13:20.571177 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:20.571270951+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:20.571270951+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:21.570430623+00:00 stderr F I1013 00:13:21.570361 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:21.570430623+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:21.570430623+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:22.571823508+00:00 stderr F I1013 00:13:22.571664 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:22.571823508+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:22.571823508+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:23.570070863+00:00 stderr F I1013 00:13:23.569994 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:23.570070863+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-10-13T00:13:23.570070863+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:24.570197012+00:00 stderr F I1013 00:13:24.570129 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:24.570197012+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:24.570197012+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:25.571074793+00:00 stderr F I1013 00:13:25.571003 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:25.571074793+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:25.571074793+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:26.570825671+00:00 stderr F I1013 00:13:26.570749 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:26.570825671+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:26.570825671+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:27.570263018+00:00 stderr F I1013 00:13:27.570158 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:27.570263018+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:27.570263018+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:28.569903124+00:00 stderr F I1013 00:13:28.569837 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:28.569903124+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:28.569903124+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:29.570607390+00:00 stderr F I1013 00:13:29.570071 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:29.570607390+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:29.570607390+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:30.571359106+00:00 stderr F I1013 00:13:30.571261 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:30.571359106+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:30.571359106+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:31.571023342+00:00 stderr F I1013 00:13:31.570958 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:31.571023342+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:31.571023342+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:32.570080192+00:00 stderr F I1013 00:13:32.569957 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:32.570080192+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:32.570080192+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:33.570904602+00:00 stderr F I1013 00:13:33.570835 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:33.570904602+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:33.570904602+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:34.570265229+00:00 stderr F I1013 00:13:34.570189 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:34.570265229+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:34.570265229+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:35.571287503+00:00 stderr F I1013 
00:13:35.571195 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:35.571287503+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:35.571287503+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:36.571102044+00:00 stderr F I1013 00:13:36.571009 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:36.571102044+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:36.571102044+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:37.571389383+00:00 stderr F I1013 00:13:37.571258 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:37.571389383+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:37.571389383+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:38.571562296+00:00 stderr F I1013 00:13:38.571472 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:38.571562296+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:38.571562296+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:39.571165752+00:00 stderr F I1013 00:13:39.571077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:39.571165752+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:39.571165752+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:40.571514510+00:00 stderr F I1013 00:13:40.571379 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:40.571514510+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:40.571514510+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:41.570864068+00:00 stderr F I1013 00:13:41.570790 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:41.570864068+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:41.570864068+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:42.569880618+00:00 stderr F I1013 00:13:42.569825 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:42.569880618+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:42.569880618+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:43.571161362+00:00 stderr F I1013 00:13:43.571104 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:43.571161362+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:43.571161362+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:44.570683725+00:00 stderr F I1013 00:13:44.570586 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:44.570683725+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:44.570683725+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:45.571098764+00:00 stderr F I1013 00:13:45.571034 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:45.571098764+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:45.571098764+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:46.570932016+00:00 stderr F I1013 00:13:46.570870 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:46.570932016+00:00 stderr F [-]backend-http 
failed: backend reported failure 2025-10-13T00:13:46.570932016+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:47.570779890+00:00 stderr F I1013 00:13:47.570688 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:47.570779890+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:47.570779890+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:48.570610352+00:00 stderr F I1013 00:13:48.570545 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:48.570610352+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:48.570610352+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:49.570568978+00:00 stderr F I1013 00:13:49.570496 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:49.570568978+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:49.570568978+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:50.570578066+00:00 stderr F I1013 00:13:50.570517 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:50.570578066+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:50.570578066+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:50.695553594+00:00 stderr F W1013 00:13:50.695448 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:50.695642026+00:00 stderr F I1013 00:13:50.695586 1 trace.go:236] Trace[1517108168]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Oct-2025 00:13:20.694) (total time: 30000ms): 2025-10-13T00:13:50.695642026+00:00 stderr F Trace[1517108168]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:13:50.695) 2025-10-13T00:13:50.695642026+00:00 stderr F Trace[1517108168]: [30.000799651s] [30.000799651s] END 2025-10-13T00:13:50.695642026+00:00 stderr F E1013 00:13:50.695613 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:50.960403431+00:00 stderr F W1013 00:13:50.960293 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:50.960523714+00:00 stderr F I1013 00:13:50.960509 1 trace.go:236] Trace[1380432896]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Oct-2025 00:13:20.958) (total time: 30001ms): 2025-10-13T00:13:50.960523714+00:00 stderr F Trace[1380432896]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:13:50.960) 2025-10-13T00:13:50.960523714+00:00 stderr F Trace[1380432896]: [30.001667651s] 
[30.001667651s] END 2025-10-13T00:13:50.960564795+00:00 stderr F E1013 00:13:50.960552 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.120571035+00:00 stderr F W1013 00:13:51.120501 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.120701219+00:00 stderr F I1013 00:13:51.120683 1 trace.go:236] Trace[1155942673]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Oct-2025 00:13:21.119) (total time: 30000ms): 2025-10-13T00:13:51.120701219+00:00 stderr F Trace[1155942673]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:13:51.120) 2025-10-13T00:13:51.120701219+00:00 stderr F Trace[1155942673]: [30.000919937s] [30.000919937s] END 2025-10-13T00:13:51.120748710+00:00 stderr F E1013 00:13:51.120734 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.570710737+00:00 stderr F I1013 00:13:51.570628 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:51.570710737+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:51.570710737+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:52.571437244+00:00 stderr F I1013 00:13:52.571316 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:52.571437244+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:52.571437244+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:53.570699530+00:00 stderr F I1013 00:13:53.570587 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:53.570699530+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:53.570699530+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:54.571419459+00:00 stderr F I1013 00:13:54.571297 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:54.571419459+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:54.571419459+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:55.570094168+00:00 stderr F I1013 00:13:55.570036 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:55.570094168+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:55.570094168+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:56.570631450+00:00 stderr F I1013 00:13:56.570520 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:56.570631450+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:56.570631450+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:57.569874676+00:00 stderr F I1013 00:13:57.569812 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-10-13T00:13:57.569874676+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:57.569874676+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:58.570680926+00:00 stderr F I1013 00:13:58.570612 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:58.570680926+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:58.570680926+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:13:59.570759636+00:00 stderr F I1013 00:13:59.570713 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:13:59.570759636+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:13:59.570759636+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:00.570647501+00:00 stderr F I1013 00:14:00.570588 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:00.570647501+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:00.570647501+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:01.570010580+00:00 stderr F I1013 00:14:01.569947 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:01.570010580+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:01.570010580+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:02.570812618+00:00 stderr F I1013 00:14:02.570762 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:02.570812618+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:02.570812618+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:03.569675083+00:00 stderr F I1013 00:14:03.569626 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:03.569675083+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:03.569675083+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:04.570754952+00:00 stderr F I1013 00:14:04.570679 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:04.570754952+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:04.570754952+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:05.569838193+00:00 stderr F I1013 00:14:05.569737 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:05.569838193+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:05.569838193+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:06.570730485+00:00 stderr F I1013 00:14:06.570654 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:06.570730485+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:06.570730485+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:07.570409564+00:00 stderr F I1013 00:14:07.570358 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:07.570409564+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:07.570409564+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:08.570577326+00:00 stderr F I1013 00:14:08.570514 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:08.570577326+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:08.570577326+00:00 stderr F 
[-]has-synced failed: Router not synced 2025-10-13T00:14:09.571630914+00:00 stderr F I1013 00:14:09.571532 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:09.571630914+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:09.571630914+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:10.578545575+00:00 stderr F I1013 00:14:10.577830 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:10.578545575+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:10.578545575+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:11.570652139+00:00 stderr F I1013 00:14:11.570594 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:11.570652139+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:11.570652139+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:12.570301988+00:00 stderr F I1013 00:14:12.570237 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:12.570301988+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:12.570301988+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:13.571360135+00:00 stderr F I1013 00:14:13.571260 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:13.571360135+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:13.571360135+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:14.571077965+00:00 stderr F I1013 00:14:14.571006 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:14.571077965+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:14.571077965+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:15.569803816+00:00 stderr F I1013 00:14:15.569754 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:15.569803816+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:15.569803816+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:16.570527384+00:00 stderr F I1013 00:14:16.570458 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:16.570527384+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:16.570527384+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:17.571474757+00:00 stderr F I1013 00:14:17.570655 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:17.571474757+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:17.571474757+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:18.570372913+00:00 stderr F I1013 00:14:18.570318 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:18.570372913+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:18.570372913+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:19.570843934+00:00 stderr F I1013 00:14:19.570178 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:19.570843934+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:19.570843934+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:20.569174133+00:00 stderr F I1013 00:14:20.569130 1 healthz.go:261] 
backend-http,has-synced check failed: healthz 2025-10-13T00:14:20.569174133+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:20.569174133+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:21.570322474+00:00 stderr F I1013 00:14:21.570250 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:21.570322474+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:21.570322474+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:22.570227358+00:00 stderr F I1013 00:14:22.570026 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:22.570227358+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:22.570227358+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:23.001523787+00:00 stderr F W1013 00:14:23.001437 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:23.001523787+00:00 stderr F I1013 00:14:23.001501 1 trace.go:236] Trace[368069490]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Oct-2025 00:13:53.000) (total time: 30001ms): 2025-10-13T00:14:23.001523787+00:00 stderr F Trace[368069490]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:14:23.001) 2025-10-13T00:14:23.001523787+00:00 stderr F Trace[368069490]: [30.001280684s] [30.001280684s] END 2025-10-13T00:14:23.001589939+00:00 stderr F E1013 00:14:23.001516 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:23.570940255+00:00 stderr F I1013 00:14:23.570869 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:23.570940255+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:23.570940255+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:23.729844993+00:00 stderr F W1013 00:14:23.729736 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:23.729844993+00:00 stderr F I1013 00:14:23.729811 1 trace.go:236] Trace[926512130]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Oct-2025 00:13:53.728) (total time: 30001ms): 2025-10-13T00:14:23.729844993+00:00 stderr F Trace[926512130]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:14:23.729) 2025-10-13T00:14:23.729844993+00:00 stderr F Trace[926512130]: [30.001688055s] [30.001688055s] END 2025-10-13T00:14:23.729902215+00:00 stderr F E1013 00:14:23.729836 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list 
*v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.260099071+00:00 stderr F W1013 00:14:24.260019 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.260146853+00:00 stderr F I1013 00:14:24.260106 1 trace.go:236] Trace[1830883302]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Oct-2025 00:13:54.258) (total time: 30001ms): 2025-10-13T00:14:24.260146853+00:00 stderr F Trace[1830883302]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:14:24.259) 2025-10-13T00:14:24.260146853+00:00 stderr F Trace[1830883302]: [30.001065205s] [30.001065205s] END 2025-10-13T00:14:24.260146853+00:00 stderr F E1013 00:14:24.260132 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.570416025+00:00 stderr F I1013 00:14:24.569996 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:24.570416025+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:24.570416025+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:25.570994569+00:00 stderr F I1013 00:14:25.570869 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:25.570994569+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:25.570994569+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:26.571203303+00:00 stderr F I1013 00:14:26.571095 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:26.571203303+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:26.571203303+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:27.488864039+00:00 stderr F I1013 00:14:27.488756 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-10-13T00:14:27.570824218+00:00 stderr F I1013 00:14:27.570717 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:27.570824218+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:27.570824218+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:28.269927728+00:00 stderr F I1013 00:14:28.269857 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-10-13T00:14:28.570548248+00:00 stderr F I1013 00:14:28.570479 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:28.570548248+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:28.570548248+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:28.827512932+00:00 stderr F W1013 00:14:28.827432 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get 
routes.route.openshift.io) 2025-10-13T00:14:28.827512932+00:00 stderr F E1013 00:14:28.827477 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:14:29.571057130+00:00 stderr F I1013 00:14:29.570955 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:29.571057130+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:29.571057130+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:30.571425207+00:00 stderr F I1013 00:14:30.571276 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:30.571425207+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:30.571425207+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:31.570563508+00:00 stderr F I1013 00:14:31.570487 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:31.570563508+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:31.570563508+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:32.569665190+00:00 stderr F I1013 00:14:32.569593 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:32.569665190+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:32.569665190+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:33.570031098+00:00 stderr F I1013 00:14:33.569931 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:33.570031098+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:33.570031098+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:34.572119745+00:00 stderr F I1013 00:14:34.572023 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:34.572119745+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:34.572119745+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:35.571516394+00:00 stderr F I1013 00:14:35.571428 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:35.571516394+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:35.571516394+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:36.570901154+00:00 stderr F I1013 00:14:36.570807 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:36.570901154+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:36.570901154+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:37.570916232+00:00 stderr F I1013 00:14:37.570836 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:37.570916232+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:37.570916232+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:38.571227398+00:00 stderr F I1013 00:14:38.571174 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:38.571227398+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:38.571227398+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:39.231290722+00:00 stderr F W1013 00:14:39.231197 1 reflector.go:539] 
github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:14:39.231290722+00:00 stderr F E1013 00:14:39.231274 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:14:39.571189154+00:00 stderr F I1013 00:14:39.571017 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:39.571189154+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:39.571189154+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:40.570739289+00:00 stderr F I1013 00:14:40.570633 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:40.570739289+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:40.570739289+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:41.571532819+00:00 stderr F I1013 00:14:41.571455 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:41.571532819+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:41.571532819+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:42.571252986+00:00 stderr F I1013 00:14:42.571179 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:42.571252986+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:42.571252986+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:43.570192247+00:00 stderr F I1013 00:14:43.570100 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:43.570192247+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:43.570192247+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:44.571780626+00:00 stderr F I1013 00:14:44.571699 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:44.571780626+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:44.571780626+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:45.570741947+00:00 stderr F I1013 00:14:45.570661 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:45.570741947+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:45.570741947+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:46.570218494+00:00 stderr F I1013 00:14:46.570104 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:46.570218494+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:46.570218494+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:47.570803016+00:00 stderr F I1013 00:14:47.570728 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:47.570803016+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:47.570803016+00:00 stderr F [-]has-synced failed: Router not synced 2025-10-13T00:14:48.579566121+00:00 stderr F I1013 00:14:48.579487 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-10-13T00:14:48.579566121+00:00 stderr F [-]backend-http failed: backend reported failure 2025-10-13T00:14:48.579566121+00:00 stderr F 
[-]has-synced failed: Router not synced 2025-10-13T00:14:48.595947981+00:00 stderr F E1013 00:14:48.595874 1 factory.go:130] failed to sync cache for *v1.Route shared informer 2025-10-13T00:14:48.597712194+00:00 stderr F I1013 00:14:48.597672 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router" 2025-10-13T00:14:48.601993792+00:00 stderr F E1013 00:14:48.601927 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-10-13T00:14:48.715893505+00:00 stderr F I1013 00:14:48.715794 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-10-13T00:15:33.598901546+00:00 stderr F I1013 00:15:33.598533 1 template.go:846] "msg"="Instructing the template router to terminate" "logger"="router" 2025-10-13T00:15:33.613789602+00:00 stderr F I1013 00:15:33.613749 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Shutting down\n" 2025-10-13T00:15:33.613814083+00:00 stderr F I1013 00:15:33.613793 1 template.go:850] "msg"="Shutdown complete, exiting" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log
2025-10-13T00:15:58.415043444+00:00 stderr F I1013 00:15:58.414808 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-10-13T00:15:58.417804632+00:00 stderr F I1013 00:15:58.417767 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-10-13T00:15:58.421500281+00:00 stderr F I1013 00:15:58.421471 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-10-13T00:15:58.421549143+00:00 stderr F I1013 00:15:58.421527 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-10-13T00:15:58.422004497+00:00 stderr F I1013 00:15:58.421983 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-10-13T00:15:58.422068959+00:00 stderr F I1013 00:15:58.422051 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-10-13T00:15:58.452525906+00:00 stderr F I1013 00:15:58.452473 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-10-13T00:15:58.456931467+00:00 stderr F I1013 00:15:58.456891 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-10-13T00:15:58.462273129+00:00 stderr F I1013 00:15:58.462240 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-10-13T00:15:58.529018769+00:00 stderr F E1013 00:15:58.528976 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-10-13T00:15:58.683539015+00:00 stderr F I1013 00:15:58.683474 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-10-13T00:16:04.316394679+00:00 stderr F I1013 00:16:04.316308 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-10-13T00:17:20.642986147+00:00 stderr F I1013 00:17:20.642913 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-10-13T00:19:59.348956516+00:00 stderr F I1013 00:19:59.348877 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-10-13T00:20:04.333908400+00:00 stderr F I1013 00:20:04.333797 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-10-13T00:22:58.970352881+00:00 stderr F I1013 00:22:58.969897 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-10-13T00:22:59.073593647+00:00 stderr F I1013 00:22:59.073543 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log
2025-08-13T19:56:16.897628970+00:00 stderr F I0813 19:56:16.897259 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-08-13T19:56:16.903853848+00:00 stderr F I0813 19:56:16.902128 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-08-13T19:56:16.906242926+00:00 stderr F I0813 19:56:16.906162 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-08-13T19:56:16.906321448+00:00 stderr F I0813 19:56:16.906281 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-08-13T19:56:16.906988387+00:00 stderr F I0813 19:56:16.906917 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-08-13T19:56:16.907098210+00:00 stderr F I0813 19:56:16.907056 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-08-13T19:56:17.432753830+00:00 stderr F I0813 19:56:17.432644 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:17.432753830+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:17.432753830+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:18.431893231+00:00 stderr F I0813
19:56:18.431619 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:18.431893231+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:18.431893231+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:19.432022349+00:00 stderr F I0813 19:56:19.431929 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:19.432022349+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:19.432022349+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:20.432639101+00:00 stderr F I0813 19:56:20.432527 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:20.432639101+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:20.432639101+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:21.432452011+00:00 stderr F I0813 19:56:21.432309 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:21.432452011+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:21.432452011+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:22.432538597+00:00 stderr F I0813 19:56:22.432355 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:22.432538597+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:22.432538597+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:23.432382067+00:00 stderr F I0813 19:56:23.432308 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:23.432382067+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:23.432382067+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:24.433473033+00:00 stderr F I0813 19:56:24.433330 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:24.433473033+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:24.433473033+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:25.433471328+00:00 stderr F I0813 19:56:25.431263 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:25.433471328+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:25.433471328+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:26.432059721+00:00 stderr F I0813 19:56:26.431979 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:26.432059721+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:26.432059721+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:27.434503986+00:00 stderr F I0813 19:56:27.434363 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:27.434503986+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:27.434503986+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:28.432741230+00:00 stderr F I0813 19:56:28.432682 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:28.432741230+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:28.432741230+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:29.439648112+00:00 stderr F I0813 19:56:29.439264 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:29.439648112+00:00 stderr F [-]backend-http 
failed: backend reported failure 2025-08-13T19:56:29.439648112+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:30.435213880+00:00 stderr F I0813 19:56:30.433077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:30.435213880+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:30.435213880+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:31.433340482+00:00 stderr F I0813 19:56:31.433017 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:31.433340482+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:31.433340482+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:32.433555683+00:00 stderr F I0813 19:56:32.433494 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:32.433555683+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:32.433555683+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:33.434362750+00:00 stderr F I0813 19:56:33.434286 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:33.434362750+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:33.434362750+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:34.432458420+00:00 stderr F I0813 19:56:34.432307 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:34.432458420+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:34.432458420+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:35.432746463+00:00 stderr F I0813 19:56:35.432248 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:35.432746463+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:35.432746463+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:36.433244751+00:00 stderr F I0813 19:56:36.433192 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:36.433244751+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:36.433244751+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:37.431705222+00:00 stderr F I0813 19:56:37.431613 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:37.431705222+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:37.431705222+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:38.432325045+00:00 stderr F I0813 19:56:38.432166 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:38.432325045+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:38.432325045+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:39.431535137+00:00 stderr F I0813 19:56:39.431441 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:39.431535137+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:39.431535137+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:40.431416609+00:00 stderr F I0813 19:56:40.431310 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:40.431416609+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:40.431416609+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:56:41.431708842+00:00 stderr F I0813 19:56:41.431601 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:41.431708842+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:41.431708842+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:42.433355784+00:00 stderr F I0813 19:56:42.433239 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:42.433355784+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:42.433355784+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:43.433293677+00:00 stderr F I0813 19:56:43.433204 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:43.433293677+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:43.433293677+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:44.437035918+00:00 stderr F I0813 19:56:44.436975 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:44.437035918+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:44.437035918+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:45.434593002+00:00 stderr F I0813 19:56:45.434106 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:45.434593002+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:45.434593002+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:46.433540697+00:00 stderr F I0813 19:56:46.433398 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:46.433540697+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:46.433540697+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:46.905667848+00:00 stderr F W0813 19:56:46.905508 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.905970527+00:00 stderr F I0813 19:56:46.905939 1 trace.go:236] Trace[1642337220]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:56:16.903) (total time: 30002ms): 2025-08-13T19:56:46.905970527+00:00 stderr F Trace[1642337220]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.905) 2025-08-13T19:56:46.905970527+00:00 stderr F Trace[1642337220]: [30.002025157s] [30.002025157s] END 2025-08-13T19:56:46.906068190+00:00 stderr F E0813 19:56:46.906047 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.908969293+00:00 stderr F W0813 19:56:46.908900 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909091246+00:00 stderr F I0813 19:56:46.909070 1 trace.go:236] Trace[436705070]: "Reflector ListAndWatch" 
name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:16.907) (total time: 30001ms): 2025-08-13T19:56:46.909091246+00:00 stderr F Trace[436705070]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.908) 2025-08-13T19:56:46.909091246+00:00 stderr F Trace[436705070]: [30.001298066s] [30.001298066s] END 2025-08-13T19:56:46.909150168+00:00 stderr F E0813 19:56:46.909134 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909259201+00:00 stderr F W0813 19:56:46.909072 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909357404+00:00 stderr F I0813 19:56:46.909284 1 trace.go:236] Trace[2037442418]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:16.907) (total time: 30001ms): 2025-08-13T19:56:46.909357404+00:00 stderr F Trace[2037442418]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.908) 2025-08-13T19:56:46.909357404+00:00 stderr F Trace[2037442418]: [30.001572733s] [30.001572733s] END 2025-08-13T19:56:46.909357404+00:00 stderr F E0813 19:56:46.909336 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:47.432589043+00:00 stderr F I0813 19:56:47.432529 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:47.432589043+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:47.432589043+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:48.432951947+00:00 stderr F I0813 19:56:48.432525 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:48.432951947+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:48.432951947+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:49.432066177+00:00 stderr F I0813 19:56:49.431941 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:49.432066177+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:49.432066177+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:50.432125873+00:00 stderr F I0813 19:56:50.432032 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:50.432125873+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:50.432125873+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:51.432001853+00:00 stderr F I0813 19:56:51.431906 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:51.432001853+00:00 stderr F [-]backend-http failed: 
backend reported failure 2025-08-13T19:56:51.432001853+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:52.432737878+00:00 stderr F I0813 19:56:52.432631 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:52.432737878+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:52.432737878+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:53.432868787+00:00 stderr F I0813 19:56:53.432673 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:53.432868787+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:53.432868787+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:54.433275973+00:00 stderr F I0813 19:56:54.432875 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:54.433275973+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:54.433275973+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:55.432271579+00:00 stderr F I0813 19:56:55.432202 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:55.432271579+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:55.432271579+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:56.432325345+00:00 stderr F I0813 19:56:56.432186 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:56.432325345+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:56.432325345+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:57.432645998+00:00 stderr F I0813 19:56:57.432569 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:57.432645998+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:57.432645998+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:58.432199589+00:00 stderr F I0813 19:56:58.432124 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:58.432199589+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:58.432199589+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:59.433020537+00:00 stderr F I0813 19:56:59.432864 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:59.433020537+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:59.433020537+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:00.433573017+00:00 stderr F I0813 19:57:00.433509 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:00.433573017+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:00.433573017+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:01.432562092+00:00 stderr F I0813 19:57:01.432418 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:01.432562092+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:01.432562092+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:02.433124814+00:00 stderr F I0813 19:57:02.433003 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:02.433124814+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:02.433124814+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:57:03.432258824+00:00 stderr F I0813 19:57:03.432162 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:03.432258824+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:03.432258824+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:04.432722113+00:00 stderr F I0813 19:57:04.432145 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:04.432722113+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:04.432722113+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:05.431260714+00:00 stderr F I0813 19:57:05.431205 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:05.431260714+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:05.431260714+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:06.432030181+00:00 stderr F I0813 19:57:06.431977 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:06.432030181+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:06.432030181+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:07.431464501+00:00 stderr F I0813 19:57:07.431406 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:07.431464501+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:07.431464501+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:08.432085384+00:00 stderr F I0813 19:57:08.431989 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:08.432085384+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:08.432085384+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:09.432053706+00:00 stderr F I0813 19:57:09.431951 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:09.432053706+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:09.432053706+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:10.432654007+00:00 stderr F I0813 19:57:10.432610 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:10.432654007+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:10.432654007+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:11.431640704+00:00 stderr F I0813 19:57:11.431588 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:11.431640704+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:11.431640704+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:12.434203290+00:00 stderr F I0813 19:57:12.434111 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:12.434203290+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:12.434203290+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:13.433521056+00:00 stderr F I0813 19:57:13.433423 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:13.433521056+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:13.433521056+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:14.433498481+00:00 stderr F I0813 19:57:14.433149 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-08-13T19:57:14.433498481+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:14.433498481+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:15.432030014+00:00 stderr F I0813 19:57:15.431954 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:15.432030014+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:15.432030014+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:16.434512699+00:00 stderr F I0813 19:57:16.434364 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:16.434512699+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:16.434512699+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:17.433485565+00:00 stderr F I0813 19:57:17.432334 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:17.433485565+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:17.433485565+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:17.783031196+00:00 stderr F W0813 19:57:17.780368 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:17.783031196+00:00 stderr F I0813 19:57:17.780622 1 trace.go:236] Trace[1748179681]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:47.772) (total time: 30007ms): 2025-08-13T19:57:17.783031196+00:00 stderr F Trace[1748179681]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30007ms (19:57:17.780) 2025-08-13T19:57:17.783031196+00:00 stderr F Trace[1748179681]: [30.007586633s] [30.007586633s] END 2025-08-13T19:57:17.783031196+00:00 stderr F E0813 19:57:17.780688 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.038305706+00:00 stderr F W0813 19:57:18.037956 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.038305706+00:00 stderr F I0813 19:57:18.038270 1 trace.go:236] Trace[1900261663]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:48.036) (total time: 30001ms): 2025-08-13T19:57:18.038305706+00:00 stderr F Trace[1900261663]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:57:18.037) 2025-08-13T19:57:18.038305706+00:00 stderr F Trace[1900261663]: [30.001957303s] [30.001957303s] END 2025-08-13T19:57:18.038418509+00:00 stderr F E0813 19:57:18.038308 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get 
"https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.135368408+00:00 stderr F W0813 19:57:18.135253 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.135478171+00:00 stderr F I0813 19:57:18.135462 1 trace.go:236] Trace[1715725877]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:56:48.134) (total time: 30001ms): 2025-08-13T19:57:18.135478171+00:00 stderr F Trace[1715725877]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:57:18.135) 2025-08-13T19:57:18.135478171+00:00 stderr F Trace[1715725877]: [30.001251833s] [30.001251833s] END 2025-08-13T19:57:18.135532372+00:00 stderr F E0813 19:57:18.135510 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:18.431943426+00:00 stderr F I0813 19:57:18.431887 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:18.431943426+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:18.431943426+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:19.431645322+00:00 stderr F I0813 19:57:19.431575 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:19.431645322+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:19.431645322+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:20.435646542+00:00 stderr F I0813 19:57:20.435520 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:20.435646542+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:20.435646542+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:21.431757945+00:00 stderr F I0813 19:57:21.431628 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:21.431757945+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:21.431757945+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:22.433349495+00:00 stderr F I0813 19:57:22.433199 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:22.433349495+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:22.433349495+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:23.431476856+00:00 stderr F I0813 19:57:23.431395 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:23.431476856+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:23.431476856+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:24.431854371+00:00 stderr F I0813 19:57:24.431497 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:24.431854371+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:24.431854371+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:25.433028709+00:00 stderr F I0813 
19:57:25.432858 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:25.433028709+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:25.433028709+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:26.433458567+00:00 stderr F I0813 19:57:26.433214 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:26.433458567+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:26.433458567+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:27.433281236+00:00 stderr F I0813 19:57:27.433191 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:27.433281236+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:27.433281236+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:28.432026316+00:00 stderr F I0813 19:57:28.431902 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:28.432026316+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:28.432026316+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:29.433555235+00:00 stderr F I0813 19:57:29.433273 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:29.433555235+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:29.433555235+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:30.431537989+00:00 stderr F I0813 19:57:30.431465 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:30.431537989+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:30.431537989+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:31.431923854+00:00 stderr F I0813 19:57:31.431866 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:31.431923854+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:31.431923854+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:32.431204570+00:00 stderr F I0813 19:57:32.431059 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:32.431204570+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:32.431204570+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:33.433366346+00:00 stderr F I0813 19:57:33.433269 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:33.433366346+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:33.433366346+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:34.432642709+00:00 stderr F I0813 19:57:34.432592 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:34.432642709+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:34.432642709+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:35.432859600+00:00 stderr F I0813 19:57:35.432686 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:35.432859600+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:35.432859600+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:36.437029824+00:00 stderr F I0813 19:57:36.436741 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:36.437029824+00:00 stderr F [-]backend-http 
failed: backend reported failure 2025-08-13T19:57:36.437029824+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:36.933443039+00:00 stderr F W0813 19:57:36.933286 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:36.933583323+00:00 stderr F I0813 19:57:36.933566 1 trace.go:236] Trace[2081447682]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:57:19.880) (total time: 17053ms): 2025-08-13T19:57:36.933583323+00:00 stderr F Trace[2081447682]: ---"Objects listed" error:the server is currently unable to handle the request (get routes.route.openshift.io) 17052ms (19:57:36.933) 2025-08-13T19:57:36.933583323+00:00 stderr F Trace[2081447682]: [17.053173756s] [17.053173756s] END 2025-08-13T19:57:36.933668565+00:00 stderr F E0813 19:57:36.933651 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:36.968356316+00:00 stderr F I0813 19:57:36.968289 1 trace.go:236] Trace[1148232741]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:57:19.880) (total time: 17087ms): 2025-08-13T19:57:36.968356316+00:00 stderr F Trace[1148232741]: ---"Objects listed" error: 17087ms (19:57:36.967) 2025-08-13T19:57:36.968356316+00:00 stderr F Trace[1148232741]: [17.087755934s] [17.087755934s] END 2025-08-13T19:57:36.968468509+00:00 stderr F I0813 19:57:36.968427 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T19:57:37.032534479+00:00 stderr F I0813 19:57:37.032393 1 trace.go:236] Trace[287589973]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:57:21.283) (total time: 15748ms): 2025-08-13T19:57:37.032534479+00:00 stderr F Trace[287589973]: ---"Objects listed" error: 15748ms (19:57:37.031) 2025-08-13T19:57:37.032534479+00:00 stderr F Trace[287589973]: [15.748753569s] [15.748753569s] END 2025-08-13T19:57:37.032609361+00:00 stderr F I0813 19:57:37.032593 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-08-13T19:57:37.434524076+00:00 stderr F I0813 19:57:37.434316 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:37.434524076+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:37.434524076+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:38.433052679+00:00 stderr F I0813 19:57:38.432706 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:38.433052679+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:38.433052679+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:39.435744381+00:00 stderr F I0813 19:57:39.435644 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:39.435744381+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:39.435744381+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:40.432219425+00:00 stderr F I0813 
19:57:40.432159 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:40.432219425+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:40.432219425+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:41.766582397+00:00 stderr F I0813 19:57:41.766530 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:41.766582397+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:41.766582397+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:42.001205487+00:00 stderr F W0813 19:57:42.001067 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:42.001205487+00:00 stderr F E0813 19:57:42.001137 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:42.434103768+00:00 stderr F I0813 19:57:42.434003 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:42.434103768+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:42.434103768+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:43.432131466+00:00 stderr F I0813 19:57:43.432032 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:43.432131466+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:43.432131466+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:44.431976806+00:00 stderr F I0813 19:57:44.431854 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:44.431976806+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:44.431976806+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:45.432675090+00:00 stderr F I0813 19:57:45.432578 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:45.432675090+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:45.432675090+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:46.433509920+00:00 stderr F I0813 19:57:46.433334 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:46.433509920+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:46.433509920+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:47.432967569+00:00 stderr F I0813 19:57:47.432819 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:47.432967569+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:47.432967569+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:48.447547809+00:00 stderr F I0813 19:57:48.447211 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:48.447547809+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:48.447547809+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:49.433520123+00:00 stderr F I0813 19:57:49.433466 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:49.433520123+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:57:49.433520123+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:50.439672484+00:00 stderr F I0813 19:57:50.438318 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:50.439672484+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:50.439672484+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:50.477344599+00:00 stderr F W0813 19:57:50.477079 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:50.477344599+00:00 stderr F E0813 19:57:50.477160 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:57:51.433433181+00:00 stderr F I0813 19:57:51.433291 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:51.433433181+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:51.433433181+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:52.431983943+00:00 stderr F I0813 19:57:52.431923 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:52.431983943+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:52.431983943+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:53.432056270+00:00 stderr F I0813 19:57:53.431981 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:53.432056270+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:53.432056270+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:54.433287283+00:00 stderr F I0813 19:57:54.433181 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:54.433287283+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:54.433287283+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:55.434947235+00:00 stderr F I0813 19:57:55.434648 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:55.434947235+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:55.434947235+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:56.432574223+00:00 stderr F I0813 19:57:56.432093 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:56.432574223+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:56.432574223+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:57.433262348+00:00 stderr F I0813 19:57:57.433165 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:57.433262348+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:57.433262348+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:58.431917455+00:00 stderr F I0813 19:57:58.431696 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:57:58.431917455+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:58.431917455+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:57:59.432912138+00:00 stderr F I0813 19:57:59.432742 1 healthz.go:261] backend-http,has-synced 
check failed: healthz 2025-08-13T19:57:59.432912138+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:57:59.432912138+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:00.431893395+00:00 stderr F I0813 19:58:00.431689 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:00.431893395+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:00.431893395+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:01.434020642+00:00 stderr F I0813 19:58:01.433872 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:01.434020642+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:01.434020642+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:02.435928520+00:00 stderr F I0813 19:58:02.435742 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:02.435928520+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:02.435928520+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:03.433875287+00:00 stderr F I0813 19:58:03.433666 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:03.433875287+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:03.433875287+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:04.431119524+00:00 stderr F I0813 19:58:04.430898 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:04.431119524+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:04.431119524+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:05.432938093+00:00 stderr F I0813 19:58:05.432765 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:05.432938093+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:05.432938093+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:06.435314875+00:00 stderr F I0813 19:58:06.434429 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:06.435314875+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:06.435314875+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:07.434281442+00:00 stderr F I0813 19:58:07.434059 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:07.434281442+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:07.434281442+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:08.433708781+00:00 stderr F I0813 19:58:08.433055 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:08.433708781+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:08.433708781+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:09.437898085+00:00 stderr F I0813 19:58:09.437671 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:09.437898085+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:09.437898085+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:10.432124936+00:00 stderr F I0813 19:58:10.431734 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:10.432124936+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:58:10.432124936+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:11.034272901+00:00 stderr F W0813 19:58:11.034023 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:58:11.034272901+00:00 stderr F E0813 19:58:11.034095 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:58:11.432869003+00:00 stderr F I0813 19:58:11.432656 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:11.432869003+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:11.432869003+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:12.433933519+00:00 stderr F I0813 19:58:12.432599 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:12.433933519+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:12.433933519+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:13.431929237+00:00 stderr F I0813 19:58:13.431765 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:13.431929237+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:13.431929237+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:14.431767508+00:00 stderr F I0813 19:58:14.431622 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:14.431767508+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:14.431767508+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:15.434103371+00:00 stderr F I0813 19:58:15.434008 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:15.434103371+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:15.434103371+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:16.433465119+00:00 stderr F I0813 19:58:16.433332 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:58:16.433465119+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:58:16.433465119+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:58:16.457355050+00:00 stderr F E0813 19:58:16.457214 1 factory.go:130] failed to sync cache for *v1.Route shared informer 2025-08-13T19:58:16.463553086+00:00 stderr F I0813 19:58:16.463474 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router" 2025-08-13T19:58:16.466863901+00:00 stderr F E0813 19:58:16.466693 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-08-13T19:58:16.525526943+00:00 stderr F I0813 19:58:16.525381 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T19:59:01.471118640+00:00 stderr F I0813 19:59:01.470384 1 template.go:846] "msg"="Instructing the template router to terminate" "logger"="router" 2025-08-13T19:59:02.673417482+00:00 stderr F I0813 19:59:02.673063 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Shutting down\n" 
2025-08-13T19:59:02.673417482+00:00 stderr F I0813 19:59:02.673291 1 template.go:850] "msg"="Shutdown complete, exiting" "logger"="router" ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_rout0000644000175000017500000011174415073043233033205 0ustar zuulzuul2025-08-13T19:59:11.304677918+00:00 stderr F I0813 19:59:11.297505 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-08-13T19:59:11.673366627+00:00 stderr F I0813 19:59:11.672883 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-08-13T19:59:11.769373724+00:00 stderr F I0813 19:59:11.769189 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-08-13T19:59:11.771932617+00:00 stderr F I0813 19:59:11.769613 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-08-13T19:59:11.777891327+00:00 stderr F I0813 19:59:11.773270 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-08-13T19:59:11.777891327+00:00 stderr F I0813 19:59:11.773414 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-08-13T19:59:11.886728879+00:00 stderr F W0813 19:59:11.886634 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:11.886728879+00:00 stderr F E0813 19:59:11.886705 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:12.672132258+00:00 stderr F I0813 19:59:12.671738 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:12.672132258+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:12.672132258+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:12.681237917+00:00 stderr F I0813 19:59:12.678724 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-08-13T19:59:12.851562443+00:00 stderr F I0813 19:59:12.851175 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T19:59:13.380116350+00:00 stderr F W0813 19:59:13.379504 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:13.380116350+00:00 stderr F E0813 19:59:13.380097 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable 
to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:13.459080341+00:00 stderr F I0813 19:59:13.456206 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:13.459080341+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:13.459080341+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:14.445501809+00:00 stderr F I0813 19:59:14.439664 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:14.445501809+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:14.445501809+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:15.468994886+00:00 stderr F I0813 19:59:15.459972 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:15.468994886+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:15.468994886+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:16.104399630+00:00 stderr F W0813 19:59:16.103133 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:16.104399630+00:00 stderr F E0813 19:59:16.103188 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:16.447703496+00:00 stderr F I0813 19:59:16.446220 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:16.447703496+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:16.447703496+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:17.449471752+00:00 stderr F I0813 19:59:17.447716 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:17.449471752+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:17.449471752+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:18.455398967+00:00 stderr F I0813 19:59:18.454926 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:18.455398967+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:18.455398967+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:19.478456090+00:00 stderr F I0813 19:59:19.478324 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:19.478456090+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:19.478456090+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:20.443736896+00:00 stderr F I0813 19:59:20.443497 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:20.443736896+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:20.443736896+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:21.154386143+00:00 stderr F W0813 19:59:21.149472 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:21.154386143+00:00 stderr F E0813 19:59:21.149615 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch 
*v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:21.441014694+00:00 stderr F I0813 19:59:21.438582 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:21.441014694+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:21.441014694+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:22.436881922+00:00 stderr F I0813 19:59:22.436077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:22.436881922+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:22.436881922+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:23.444874115+00:00 stderr F I0813 19:59:23.444249 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:23.444874115+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:23.444874115+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:24.443249214+00:00 stderr F I0813 19:59:24.441639 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:24.443249214+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:24.443249214+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:25.434856409+00:00 stderr F I0813 19:59:25.433416 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:25.434856409+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:25.434856409+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:26.455137963+00:00 stderr F I0813 19:59:26.452756 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:26.455137963+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:26.455137963+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:27.442377615+00:00 stderr F I0813 19:59:27.441945 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:27.442377615+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:27.442377615+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:28.441913556+00:00 stderr F I0813 19:59:28.440770 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:28.441913556+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:28.441913556+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:29.432264236+00:00 stderr F I0813 19:59:29.432104 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:29.432264236+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:29.432264236+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:30.434530676+00:00 stderr F I0813 19:59:30.434323 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:30.434530676+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:30.434530676+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:31.436502367+00:00 stderr F I0813 19:59:31.436416 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:31.436502367+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:31.436502367+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:59:32.438185499+00:00 stderr F I0813 19:59:32.436917 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:32.438185499+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:32.438185499+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:32.780568869+00:00 stderr F W0813 19:59:32.779563 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:32.780568869+00:00 stderr F E0813 19:59:32.779636 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:33.442515279+00:00 stderr F I0813 19:59:33.439470 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:33.442515279+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:33.442515279+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:34.440001973+00:00 stderr F I0813 19:59:34.432664 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:34.440001973+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:34.440001973+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:35.437748283+00:00 stderr F I0813 19:59:35.437596 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:35.437748283+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:35.437748283+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:36.432709335+00:00 stderr F I0813 19:59:36.432555 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:36.432709335+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:36.432709335+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:37.441529821+00:00 stderr F I0813 19:59:37.441270 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:37.441529821+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:37.441529821+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:38.437191323+00:00 stderr F I0813 19:59:38.435413 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:38.437191323+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:38.437191323+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:39.443362104+00:00 stderr F I0813 19:59:39.443272 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:39.443362104+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:39.443362104+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:40.463715509+00:00 stderr F I0813 19:59:40.439513 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:40.463715509+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:40.463715509+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:41.444454315+00:00 stderr F I0813 19:59:41.444077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:41.444454315+00:00 stderr F [-]backend-http 
failed: backend reported failure 2025-08-13T19:59:41.444454315+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:42.447673783+00:00 stderr F I0813 19:59:42.447382 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:42.447673783+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:42.447673783+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:43.439405871+00:00 stderr F I0813 19:59:43.439305 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:43.439405871+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:43.439405871+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:44.440923330+00:00 stderr F I0813 19:59:44.438757 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:44.440923330+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:44.440923330+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:45.438150496+00:00 stderr F I0813 19:59:45.436473 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:45.438150496+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:45.438150496+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:46.442893156+00:00 stderr F I0813 19:59:46.437085 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:46.442893156+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:46.442893156+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:47.574656328+00:00 stderr F I0813 19:59:47.573480 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:47.574656328+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:47.574656328+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:48.435955780+00:00 stderr F I0813 19:59:48.434292 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:48.435955780+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:48.435955780+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:49.444657025+00:00 stderr F I0813 19:59:49.441019 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:49.444657025+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:49.444657025+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:50.441014186+00:00 stderr F I0813 19:59:50.440726 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:50.441014186+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:50.441014186+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:51.438074809+00:00 stderr F I0813 19:59:51.437388 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:51.438074809+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:51.438074809+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:52.271503646+00:00 stderr F W0813 19:59:52.271401 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:52.271503646+00:00 stderr F E0813 
19:59:52.271467 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:52.437958611+00:00 stderr F I0813 19:59:52.435764 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:52.437958611+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:52.437958611+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:53.437003279+00:00 stderr F I0813 19:59:53.436582 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:53.437003279+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:53.437003279+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:54.435994195+00:00 stderr F I0813 19:59:54.435870 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:54.435994195+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:54.435994195+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:55.447610142+00:00 stderr F I0813 19:59:55.444133 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:55.447610142+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:55.447610142+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:56.435445371+00:00 stderr F I0813 19:59:56.435342 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:56.435445371+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:56.435445371+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:57.437007470+00:00 stderr F I0813 19:59:57.436719 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:57.437007470+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:57.437007470+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:58.441676389+00:00 stderr F I0813 19:59:58.441568 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:58.441676389+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:58.441676389+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:59.441219781+00:00 stderr F I0813 19:59:59.435551 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:59.441219781+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:59.441219781+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:00.449216644+00:00 stderr F I0813 20:00:00.449017 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:00.449216644+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:00.449216644+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:01.435257071+00:00 stderr F I0813 20:00:01.435136 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:01.435257071+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:01.435257071+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:02.435075695+00:00 stderr F I0813 20:00:02.434217 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:02.435075695+00:00 stderr F [-]backend-http failed: 
backend reported failure 2025-08-13T20:00:02.435075695+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:03.435159681+00:00 stderr F I0813 20:00:03.435040 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:03.435159681+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:03.435159681+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:04.434374502+00:00 stderr F I0813 20:00:04.433405 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:04.434374502+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:04.434374502+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:05.435923430+00:00 stderr F I0813 20:00:05.434204 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:05.435923430+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:05.435923430+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:06.436268404+00:00 stderr F I0813 20:00:06.435113 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:06.436268404+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:06.436268404+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:07.444277557+00:00 stderr F I0813 20:00:07.444061 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:07.444277557+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:07.444277557+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:08.440445100+00:00 stderr F I0813 20:00:08.440318 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:08.440445100+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:08.440445100+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:09.436634906+00:00 stderr F I0813 20:00:09.436496 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:09.436634906+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:09.436634906+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:10.444501654+00:00 stderr F I0813 20:00:10.443430 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:10.444501654+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:10.444501654+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:11.447671037+00:00 stderr F I0813 20:00:11.442535 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:11.447671037+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:11.447671037+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:12.432586351+00:00 stderr F I0813 20:00:12.432076 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:12.432586351+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:12.432586351+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:13.443036294+00:00 stderr F I0813 20:00:13.441571 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:13.443036294+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:13.443036294+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T20:00:14.439365563+00:00 stderr F I0813 20:00:14.437341 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:14.439365563+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:14.439365563+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:15.439457808+00:00 stderr F I0813 20:00:15.439401 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:15.439457808+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:15.439457808+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:16.445945307+00:00 stderr F I0813 20:00:16.445316 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:16.445945307+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:16.445945307+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:17.448372851+00:00 stderr F I0813 20:00:17.448314 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:17.448372851+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:17.448372851+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:18.458895964+00:00 stderr F I0813 20:00:18.452920 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:18.458895964+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:18.458895964+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:19.443228891+00:00 stderr F I0813 20:00:19.439909 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:19.443228891+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:19.443228891+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:20.452264183+00:00 stderr F I0813 20:00:20.451613 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:20.452264183+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:20.452264183+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:21.444279109+00:00 stderr F I0813 20:00:21.441573 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:21.444279109+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:21.444279109+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:22.442027309+00:00 stderr F I0813 20:00:22.441223 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:22.442027309+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:22.442027309+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:23.442934930+00:00 stderr F I0813 20:00:23.441492 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:23.442934930+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:23.442934930+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:24.460454004+00:00 stderr F I0813 20:00:24.460307 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T20:00:24.460454004+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:24.460454004+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:25.448935401+00:00 stderr F I0813 20:00:25.441689 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-08-13T20:00:25.448935401+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:25.448935401+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T20:00:26.031207313+00:00 stderr F I0813 20:00:26.031093 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T20:00:26.092081849+00:00 stderr F E0813 20:00:26.090484 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-08-13T20:00:26.742089514+00:00 stderr F I0813 20:00:26.741483 1 healthz.go:261] backend-http check failed: healthz 2025-08-13T20:00:26.742089514+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T20:00:27.296769290+00:00 stderr F I0813 20:00:27.296070 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:00:31.257363322+00:00 stderr F I0813 20:00:31.255569 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:00:41.640227457+00:00 stderr F I0813 20:00:41.636935 1 template.go:941] "msg"="reloaded metrics certificate" "cert"="/etc/pki/tls/metrics-certs/tls.crt" "key"="/etc/pki/tls/metrics-certs/tls.key" "logger"="router" 2025-08-13T20:01:09.895922795+00:00 stderr F I0813 20:01:09.894921 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:01:24.804112015+00:00 stderr F W0813 20:01:24.803826 1 reflector.go:462] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR; received from peer") has prevented the request from succeeding 2025-08-13T20:02:17.794050969+00:00 stderr F W0813 20:02:17.793041 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:17.794050969+00:00 stderr F E0813 20:02:17.793139 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:03:19.527530230+00:00 stderr F W0813 20:03:19.527086 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:03:19.527530230+00:00 stderr F I0813 20:03:19.527406 1 trace.go:236] Trace[310903481]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 20:02:49.524) (total time: 30002ms): 2025-08-13T20:03:19.527530230+00:00 stderr F Trace[310903481]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 30002ms (20:03:19.527) 2025-08-13T20:03:19.527530230+00:00 stderr F Trace[310903481]: [30.002341853s] [30.002341853s] END 2025-08-13T20:03:19.527530230+00:00 stderr F E0813 
20:03:19.527464 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:28.566676368+00:00 stderr F W0813 20:04:28.566243 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:28.566676368+00:00 stderr F I0813 20:04:28.566510 1 trace.go:236] Trace[1141895289]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 20:03:58.471) (total time: 30095ms): 2025-08-13T20:04:28.566676368+00:00 stderr F Trace[1141895289]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 30082ms (20:04:28.553) 2025-08-13T20:04:28.566676368+00:00 stderr F Trace[1141895289]: [30.095336028s] [30.095336028s] END 2025-08-13T20:04:28.566676368+00:00 stderr F E0813 20:04:28.566571 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:05:22.531648891+00:00 stderr F W0813 20:05:22.531122 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:22.533378770+00:00 stderr F E0813 20:05:22.533163 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:23.709339045+00:00 stderr F I0813 20:05:23.709111 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-08-13T20:05:23.943489560+00:00 stderr F I0813 20:05:23.943369 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T20:05:24.175916606+00:00 stderr F I0813 20:05:24.175825 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:05:29.171169261+00:00 stderr F I0813 20:05:29.171023 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:05:40.055113516+00:00 stderr F I0813 20:05:40.055049 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:05:45.053675044+00:00 stderr F I0813 20:05:45.050701 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-08-13T20:06:04.550607868+00:00 stderr F W0813 20:06:04.550499 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list 
*v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:04.551886485+00:00 stderr F E0813 20:06:04.551731 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:53.870533142+00:00 stderr F W0813 20:06:53.870282 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:53.870533142+00:00 stderr F E0813 20:06:53.870427 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:40.667625093+00:00 stderr F I0813 20:07:40.666123 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T20:09:05.305116521+00:00 stderr F I0813 20:09:05.304952 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-08-13T20:09:05.342759711+00:00 stderr F I0813 20:09:05.342612 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351003 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351200 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351247 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:42.965144927+00:00 stderr F I0813 20:42:42.964922 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log
2025-10-13T00:19:23.488963465+00:00
stderr F I1013 00:19:23.488866 23313 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:19:23.489448809+00:00 stderr F I1013 00:19:23.489129 23313 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-10-13T00:19:23.494272362+00:00 stderr F I1013 00:19:23.494177 23313 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-10-13T00:19:23.498007243+00:00 stderr F I1013 00:19:23.497916 23313 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-10-13T00:19:23.586384712+00:00 stderr F I1013 00:19:23.586257 23313 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-10-13T00:19:23.636845432+00:00 stderr F I1013 00:19:23.636728 23313 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:19:23.637356588+00:00 stderr F E1013 00:19:23.637288 23313 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2025-10-13T00:19:23.637532433+00:00 stderr F I1013 00:19:23.637488 23313 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-10-13T00:19:23.883501017+00:00 stderr F I1013 00:19:23.883359 23313 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-10-13T00:19:23.885135696+00:00 stderr F I1013 00:19:23.885059 23313 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-10-13T00:19:23.886938430+00:00 stderr F I1013 00:19:23.886836 23313 metrics.go:100] Registering Prometheus metrics 2025-10-13T00:19:23.887060553+00:00 stderr F I1013 00:19:23.887009 23313 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-10-13T00:19:23.909786079+00:00 stderr F I1013 00:19:23.909712 23313 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:19:23.912913882+00:00 stderr F I1013 00:19:23.912861 23313 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-10-13T00:19:23.921977982+00:00 stderr F I1013 00:19:23.921869 23313 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", 
"EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:19:23.921977982+00:00 stderr F I1013 00:19:23.921923 23313 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:19:23.922055034+00:00 stderr F I1013 00:19:23.921984 23313 update.go:2610] Starting to manage node: crc 2025-10-13T00:19:23.931784163+00:00 stderr F I1013 00:19:23.931079 23313 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-10-13T00:19:24.000288251+00:00 stderr F I1013 00:19:24.000162 23313 daemon.go:1727] State: idle 2025-10-13T00:19:24.000288251+00:00 stderr F Deployments: 2025-10-13T00:19:24.000288251+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-10-13T00:19:24.000288251+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-10-13T00:19:24.000288251+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-10-13T00:19:24.000288251+00:00 stderr F LocalPackages: 
hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-10-13T00:19:24.000288251+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-10-13T00:19:24.000288251+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-10-13T00:19:24.000288251+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-10-13T00:19:24.000288251+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-10-13T00:19:24.000844807+00:00 stderr F I1013 00:19:24.000734 23313 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-10-13T00:19:24.000844807+00:00 stderr F { 2025-10-13T00:19:24.000844807+00:00 stderr F "container-image": { 2025-10-13T00:19:24.000844807+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-10-13T00:19:24.000844807+00:00 stderr F "image-labels": { 2025-10-13T00:19:24.000844807+00:00 stderr F "containers.bootc": "1", 2025-10-13T00:19:24.000844807+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-10-13T00:19:24.000844807+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-10-13T00:19:24.000844807+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-10-13T00:19:24.000844807+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-10-13T00:19:24.000844807+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-10-13T00:19:24.000844807+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-10-13T00:19:24.000844807+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-10-13T00:19:24.000844807+00:00 stderr F "ostree.bootable": "true", 2025-10-13T00:19:24.000844807+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-10-13T00:19:24.000844807+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-10-13T00:19:24.000844807+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-10-13T00:19:24.000844807+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-10-13T00:19:24.000844807+00:00 stderr F }, 2025-10-13T00:19:24.000844807+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-10-13T00:19:24.000844807+00:00 stderr F }, 2025-10-13T00:19:24.000844807+00:00 stderr F "osbuild-version": "114", 2025-10-13T00:19:24.000844807+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-10-13T00:19:24.000844807+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-10-13T00:19:24.000844807+00:00 stderr F "version": "416.94.202405291527-0" 2025-10-13T00:19:24.000844807+00:00 stderr F } 2025-10-13T00:19:24.001001122+00:00 stderr F I1013 00:19:24.000935 23313 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-10-13T00:19:24.001001122+00:00 stderr F I1013 00:19:24.000974 23313 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-10-13T00:19:24.018678868+00:00 stderr F I1013 00:19:24.018559 23313 daemon.go:1736] journalctl --list-boots: 2025-10-13T00:19:24.018678868+00:00 stderr F IDX BOOT ID FIRST 
ENTRY LAST ENTRY 2025-10-13T00:19:24.018678868+00:00 stderr F -4 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-10-13T00:19:24.018678868+00:00 stderr F -3 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-10-13T00:19:24.018678868+00:00 stderr F -2 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 20:42:52 UTC 2025-10-13T00:19:24.018678868+00:00 stderr F -1 bc816dcb45bb41c5aa65b8d774745fb2 Mon 2025-10-13 00:07:58 UTC Mon 2025-10-13 00:11:45 UTC 2025-10-13T00:19:24.018678868+00:00 stderr F 0 52d3147337074197b2aa2769ec9105af Mon 2025-10-13 00:11:53 UTC Mon 2025-10-13 00:19:23 UTC 2025-10-13T00:19:24.018678868+00:00 stderr F I1013 00:19:24.018592 23313 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-10-13T00:19:24.053424781+00:00 stderr F I1013 00:19:24.051500 23313 daemon.go:1751] systemd service state: OK 2025-10-13T00:19:24.053424781+00:00 stderr F I1013 00:19:24.051535 23313 daemon.go:1327] Starting MachineConfigDaemon 2025-10-13T00:19:24.053424781+00:00 stderr F I1013 00:19:24.051687 23313 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-10-13T00:19:24.933357247+00:00 stderr F I1013 00:19:24.933137 23313 daemon.go:647] Node crc is part of the control plane 2025-10-13T00:19:24.987401795+00:00 stderr F I1013 00:19:24.987257 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3976586942 --cleanup 2025-10-13T00:19:24.991941760+00:00 stderr F [2025-10-13T00:19:24Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:19:24.992097294+00:00 stdout F 2025-10-13T00:19:24.992113125+00:00 stderr F [2025-10-13T00:19:24Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:19:25.008234684+00:00 stderr F I1013 00:19:25.008054 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:19:25.008234684+00:00 stderr F E1013 00:19:25.008176 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:19:25.008234684+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:19:27.026160614+00:00 stderr F I1013 00:19:27.026036 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs120782674 --cleanup 2025-10-13T00:19:27.029229176+00:00 stderr F [2025-10-13T00:19:27Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:19:27.029267977+00:00 stderr F [2025-10-13T00:19:27Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:19:27.029288147+00:00 stdout F 2025-10-13T00:19:27.044533811+00:00 stderr F I1013 00:19:27.044411 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:19:27.044597783+00:00 stderr F E1013 00:19:27.044515 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:19:27.044597783+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:19:31.063129566+00:00 stderr F I1013 00:19:31.062987 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out 
/tmp/nmstate-kargs4023080676 --cleanup 2025-10-13T00:19:31.067763634+00:00 stderr F [2025-10-13T00:19:31Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:19:31.067938589+00:00 stdout F 2025-10-13T00:19:31.067963270+00:00 stderr F [2025-10-13T00:19:31Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:19:31.078014479+00:00 stderr F I1013 00:19:31.077933 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:19:31.078014479+00:00 stderr F E1013 00:19:31.077995 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:19:31.078014479+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:19:39.102849653+00:00 stderr F I1013 00:19:39.102755 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3234824532 --cleanup 2025-10-13T00:19:39.105716429+00:00 stderr F [2025-10-13T00:19:39Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:19:39.105777360+00:00 stdout F 2025-10-13T00:19:39.105783581+00:00 stderr F [2025-10-13T00:19:39Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:19:39.114617403+00:00 stderr F I1013 00:19:39.114548 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:19:39.114651994+00:00 stderr F E1013 00:19:39.114621 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:19:39.114651994+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:19:55.126691974+00:00 stderr F I1013 00:19:55.126583 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs4037756419 --cleanup 2025-10-13T00:19:55.130195528+00:00 stderr F [2025-10-13T00:19:55Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:19:55.130231249+00:00 stderr F [2025-10-13T00:19:55Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:19:55.130247969+00:00 stdout F 2025-10-13T00:19:55.139149524+00:00 stderr F I1013 00:19:55.139094 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:19:55.139229716+00:00 stderr F E1013 00:19:55.139191 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:19:55.139229716+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:19.783942967+00:00 stderr F I1013 00:20:19.783832 23313 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 41955 2025-10-13T00:20:27.151160230+00:00 stderr F I1013 00:20:27.150749 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs755043496 --cleanup 2025-10-13T00:20:27.153657550+00:00 stderr F [2025-10-13T00:20:27Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:20:27.153796944+00:00 stderr F [2025-10-13T00:20:27Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:20:27.153838295+00:00 
stdout F 2025-10-13T00:20:27.160903384+00:00 stderr F I1013 00:20:27.160858 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:20:27.160934155+00:00 stderr F E1013 00:20:27.160900 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:20:27.160934155+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:28.232080051+00:00 stderr F I1013 00:20:28.231648 23313 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 41993 2025-10-13T00:20:29.088362448+00:00 stderr F I1013 00:20:29.086980 23313 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 42002 2025-10-13T00:21:27.172020054+00:00 stderr F I1013 00:21:27.171904 23313 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs4022862954 --cleanup 2025-10-13T00:21:27.174484720+00:00 stderr F [2025-10-13T00:21:27Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:21:27.174560592+00:00 stdout F 2025-10-13T00:21:27.174567422+00:00 stderr F [2025-10-13T00:21:27Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:21:27.181959001+00:00 stderr F I1013 00:21:27.181914 23313 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:21:27.182005223+00:00 stderr F E1013 00:21:27.181976 23313 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:21:27.182005223+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:22:23.077592152+00:00 stderr F I1013 00:22:23.077479 23313 daemon.go:1363] Shutting down MachineConfigDaemon
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log
2025-10-13T00:22:23.401666387+00:00 stderr F I1013 00:22:23.401571 30088 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:22:23.401799661+00:00 stderr F I1013 00:22:23.401792 30088 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-10-13T00:22:23.404631787+00:00 stderr F I1013 00:22:23.404608 30088 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-10-13T00:22:23.406741774+00:00 stderr F I1013 00:22:23.406718 30088 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-10-13T00:22:23.476077108+00:00 stderr F I1013 00:22:23.476041 30088 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-10-13T00:22:23.550184411+00:00 stderr F I1013 00:22:23.550125 30088 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:22:23.550489440+00:00 stderr F E1013 00:22:23.550467 30088 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull
secret 2025-10-13T00:22:23.550639104+00:00 stderr F I1013 00:22:23.550617 30088 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-10-13T00:22:23.808949869+00:00 stderr F I1013 00:22:23.808898 30088 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-10-13T00:22:23.809690759+00:00 stderr F I1013 00:22:23.809661 30088 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-10-13T00:22:23.810601764+00:00 stderr F I1013 00:22:23.810553 30088 metrics.go:100] Registering Prometheus metrics 2025-10-13T00:22:23.810710736+00:00 stderr F I1013 00:22:23.810683 30088 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-10-13T00:22:29.437878582+00:00 stderr F I1013 00:22:29.436470 30088 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:22:29.439213138+00:00 stderr F I1013 00:22:29.438447 30088 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-10-13T00:22:29.452934267+00:00 stderr F I1013 00:22:29.452208 30088 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:22:29.452934267+00:00 stderr F I1013 00:22:29.452263 30088 update.go:2610] Starting to manage node: crc 2025-10-13T00:22:29.453095391+00:00 stderr F I1013 00:22:29.453043 30088 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", 
"ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:22:29.471122236+00:00 stderr F I1013 00:22:29.467582 30088 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-10-13T00:22:29.543404730+00:00 stderr F I1013 00:22:29.543342 30088 daemon.go:1727] State: idle 2025-10-13T00:22:29.543404730+00:00 stderr F Deployments: 2025-10-13T00:22:29.543404730+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-10-13T00:22:29.543404730+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-10-13T00:22:29.543404730+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-10-13T00:22:29.543404730+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-10-13T00:22:29.543404730+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-10-13T00:22:29.543404730+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-10-13T00:22:29.543404730+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-10-13T00:22:29.543404730+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-10-13T00:22:29.543668617+00:00 stderr F I1013 00:22:29.543624 30088 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-10-13T00:22:29.543668617+00:00 stderr F { 2025-10-13T00:22:29.543668617+00:00 stderr F "container-image": { 2025-10-13T00:22:29.543668617+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-10-13T00:22:29.543668617+00:00 stderr F "image-labels": { 2025-10-13T00:22:29.543668617+00:00 stderr F "containers.bootc": "1", 2025-10-13T00:22:29.543668617+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-10-13T00:22:29.543668617+00:00 stderr F 
"coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-10-13T00:22:29.543668617+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-10-13T00:22:29.543668617+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-10-13T00:22:29.543668617+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-10-13T00:22:29.543668617+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-10-13T00:22:29.543668617+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-10-13T00:22:29.543668617+00:00 stderr F "ostree.bootable": "true", 2025-10-13T00:22:29.543668617+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-10-13T00:22:29.543668617+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-10-13T00:22:29.543668617+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-10-13T00:22:29.543668617+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-10-13T00:22:29.543668617+00:00 stderr F }, 2025-10-13T00:22:29.543668617+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-10-13T00:22:29.543668617+00:00 stderr F }, 2025-10-13T00:22:29.543668617+00:00 stderr F "osbuild-version": "114", 2025-10-13T00:22:29.543668617+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-10-13T00:22:29.543668617+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-10-13T00:22:29.543668617+00:00 stderr F "version": "416.94.202405291527-0" 2025-10-13T00:22:29.543668617+00:00 stderr F } 2025-10-13T00:22:29.543722228+00:00 stderr F I1013 00:22:29.543699 30088 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-10-13T00:22:29.543722228+00:00 stderr F I1013 00:22:29.543710 30088 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-10-13T00:22:29.554359774+00:00 stderr F I1013 00:22:29.552402 30088 daemon.go:1736] journalctl --list-boots: 2025-10-13T00:22:29.554359774+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-10-13T00:22:29.554359774+00:00 stderr F -4 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-10-13T00:22:29.554359774+00:00 stderr F -3 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-10-13T00:22:29.554359774+00:00 stderr F -2 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 20:42:52 UTC 2025-10-13T00:22:29.554359774+00:00 stderr F -1 bc816dcb45bb41c5aa65b8d774745fb2 Mon 2025-10-13 00:07:58 UTC Mon 2025-10-13 00:11:45 UTC 2025-10-13T00:22:29.554359774+00:00 stderr F 0 52d3147337074197b2aa2769ec9105af Mon 2025-10-13 00:11:53 UTC Mon 2025-10-13 00:22:29 UTC 2025-10-13T00:22:29.554359774+00:00 stderr F I1013 00:22:29.552431 30088 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-10-13T00:22:29.569369878+00:00 stderr F I1013 00:22:29.566500 30088 daemon.go:1751] systemd service state: OK 2025-10-13T00:22:29.569369878+00:00 stderr F I1013 00:22:29.566520 30088 daemon.go:1327] Starting MachineConfigDaemon 2025-10-13T00:22:29.569369878+00:00 
stderr F I1013 00:22:29.566579 30088 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-10-13T00:22:30.465507607+00:00 stderr F I1013 00:22:30.465429 30088 daemon.go:647] Node crc is part of the control plane 2025-10-13T00:22:30.515116391+00:00 stderr F I1013 00:22:30.515029 30088 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1894670412 --cleanup 2025-10-13T00:22:30.517240048+00:00 stderr F [2025-10-13T00:22:30Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:22:30.517260249+00:00 stderr F [2025-10-13T00:22:30Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:22:30.517270959+00:00 stdout F 2025-10-13T00:22:30.536798904+00:00 stderr F I1013 00:22:30.536735 30088 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:22:30.536872096+00:00 stderr F E1013 00:22:30.536833 30088 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:22:30.536872096+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:22:32.548576174+00:00 stderr F I1013 00:22:32.548476 30088 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3437371764 --cleanup 2025-10-13T00:22:32.550171577+00:00 stderr F [2025-10-13T00:22:32Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:22:32.550197768+00:00 stdout F 2025-10-13T00:22:32.550205218+00:00 stderr F [2025-10-13T00:22:32Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:22:32.562785327+00:00 stderr F I1013 00:22:32.562645 30088 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:22:32.562855488+00:00 stderr F E1013 00:22:32.562765 30088 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:22:32.562855488+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:22:36.577065245+00:00 stderr F I1013 00:22:36.576988 30088 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1867841258 --cleanup 2025-10-13T00:22:36.578710451+00:00 stderr F [2025-10-13T00:22:36Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:22:36.578730272+00:00 stderr F [2025-10-13T00:22:36Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:22:36.578739632+00:00 stdout F 2025-10-13T00:22:36.585750517+00:00 stderr F I1013 00:22:36.585698 30088 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:22:36.585777788+00:00 stderr F E1013 00:22:36.585754 30088 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:22:36.585777788+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:22:44.602182485+00:00 stderr F I1013 00:22:44.602075 30088 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs33098946 --cleanup 2025-10-13T00:22:44.604775427+00:00 stderr F [2025-10-13T00:22:44Z INFO 
nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:22:44.604857999+00:00 stderr F [2025-10-13T00:22:44Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:22:44.604888940+00:00 stdout F 2025-10-13T00:22:44.621096502+00:00 stderr F I1013 00:22:44.621020 30088 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:22:44.621253246+00:00 stderr F E1013 00:22:44.621205 30088 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:22:44.621253246+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:23:00.643202829+00:00 stderr F I1013 00:23:00.643087 30088 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs4067112959 --cleanup 2025-10-13T00:23:00.645649277+00:00 stderr F [2025-10-13T00:23:00Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:23:00.645757590+00:00 stdout F 2025-10-13T00:23:00.645771160+00:00 stderr F [2025-10-13T00:23:00Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:23:00.653200687+00:00 stderr F I1013 00:23:00.653053 30088 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:23:00.653200687+00:00 stderr F E1013 00:23:00.653133 30088 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:23:00.653200687+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:23:29.681014018+00:00 stderr F I1013 00:23:29.680952 30088 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 42002 2025-10-13T00:23:32.668214266+00:00 stderr F I1013 00:23:32.668147 30088 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1680678618 --cleanup 2025-10-13T00:23:32.672009612+00:00 stderr F [2025-10-13T00:23:32Z INFO nmstatectl] Nmstate version: 2.2.29 2025-10-13T00:23:32.672176237+00:00 stdout F 2025-10-13T00:23:32.672184487+00:00 stderr F [2025-10-13T00:23:32Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-10-13T00:23:32.681833066+00:00 stderr F I1013 00:23:32.681794 30088 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-10-13T00:23:32.681907828+00:00 stderr F E1013 00:23:32.681862 30088 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:23:32.681907828+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log
2025-08-13T19:57:10.537128831+00:00 stderr F I0813 19:57:10.536908 22232 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty
(9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:57:10.537343497+00:00 stderr F I0813 19:57:10.537327 22232 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-08-13T19:57:10.544102180+00:00 stderr F I0813 19:57:10.543965 22232 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-08-13T19:57:10.548261819+00:00 stderr F I0813 19:57:10.548151 22232 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-08-13T19:57:10.660138773+00:00 stderr F I0813 19:57:10.660004 22232 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-08-13T19:57:10.729900775+00:00 stderr F I0813 19:57:10.729682 22232 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:57:10.730192564+00:00 stderr F E0813 19:57:10.730121 22232 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2025-08-13T19:57:10.730305507+00:00 stderr F I0813 19:57:10.730252 22232 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-08-13T19:57:10.952575284+00:00 stderr F I0813 19:57:10.952524 22232 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-08-13T19:57:10.953974444+00:00 stderr F I0813 19:57:10.953931 22232 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-08-13T19:57:10.955283391+00:00 stderr F I0813 19:57:10.955215 22232 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:57:10.955349303+00:00 stderr F I0813 19:57:10.955309 22232 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:57:10.986460181+00:00 stderr F I0813 19:57:10.985143 22232 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:57:10.986460181+00:00 stderr F I0813 19:57:10.985317 22232 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.000562 22232 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", 
"GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.001140 22232 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.001213 22232 update.go:2610] Starting to manage node: crc 2025-08-13T19:57:11.013211645+00:00 stderr F I0813 19:57:11.009509 22232 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-08-13T19:57:11.071088538+00:00 stderr F I0813 19:57:11.070943 22232 daemon.go:1727] State: idle 2025-08-13T19:57:11.071088538+00:00 stderr F Deployments: 2025-08-13T19:57:11.071088538+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:57:11.071088538+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:57:11.071088538+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-08-13T19:57:11.071088538+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 
2025-08-13T19:57:11.071088538+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071903531+00:00 stderr F I0813 19:57:11.071718 22232 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-08-13T19:57:11.071903531+00:00 stderr F { 2025-08-13T19:57:11.071903531+00:00 stderr F "container-image": { 2025-08-13T19:57:11.071903531+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-08-13T19:57:11.071903531+00:00 stderr F "image-labels": { 2025-08-13T19:57:11.071903531+00:00 stderr F "containers.bootc": "1", 2025-08-13T19:57:11.071903531+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-08-13T19:57:11.071903531+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-08-13T19:57:11.071903531+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-08-13T19:57:11.071903531+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.bootable": "true", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-08-13T19:57:11.071903531+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-08-13T19:57:11.071903531+00:00 stderr F }, 2025-08-13T19:57:11.071903531+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-08-13T19:57:11.071903531+00:00 stderr F }, 2025-08-13T19:57:11.071903531+00:00 stderr F "osbuild-version": "114", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:57:11.071903531+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-08-13T19:57:11.071903531+00:00 stderr F "version": "416.94.202405291527-0" 2025-08-13T19:57:11.071903531+00:00 stderr F } 2025-08-13T19:57:11.072011204+00:00 stderr F I0813 19:57:11.071965 22232 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-08-13T19:57:11.072011204+00:00 stderr F I0813 19:57:11.071980 22232 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-08-13T19:57:11.084910483+00:00 stderr F I0813 19:57:11.084726 22232 daemon.go:1736] journalctl --list-boots: 2025-08-13T19:57:11.084910483+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-08-13T19:57:11.084910483+00:00 stderr F -2 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 
13:36:39 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F -1 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F 0 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 19:57:11 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F I0813 19:57:11.084819 22232 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.097905 22232 daemon.go:1751] systemd service state: OK 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.097949 22232 daemon.go:1327] Starting MachineConfigDaemon 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.098084 22232 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-08-13T19:57:11.990345587+00:00 stderr F I0813 19:57:11.990162 22232 daemon.go:647] Node crc is part of the control plane 2025-08-13T19:57:12.010350119+00:00 stderr F I0813 19:57:12.010192 22232 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1986687374 --cleanup 2025-08-13T19:57:12.013985292+00:00 stderr F [2025-08-13T19:57:12Z INFO nmstatectl] Nmstate version: 2.2.29 2025-08-13T19:57:12.014047174+00:00 stdout F 2025-08-13T19:57:12.014055814+00:00 stderr F [2025-08-13T19:57:12Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025258 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025316 22232 daemon.go:1680] Current+desired config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025328 22232 daemon.go:1695] state: Done 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025374 22232 update.go:2595] Running: rpm-ostree cleanup -r 2025-08-13T19:57:12.086057180+00:00 stdout F Deployments unchanged. 2025-08-13T19:57:12.096181789+00:00 stderr F I0813 19:57:12.096073 22232 daemon.go:2096] Validating against current config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:12.096716985+00:00 stderr F I0813 19:57:12.096505 22232 daemon.go:2008] SSH key location ("/home/core/.ssh/authorized_keys.d/ignition") up-to-date! 
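The entries above, together with the newer excerpt at the top of this file, name specific rendered MachineConfigs: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 as the node's current config and rendered-master-ef556ead28ddfad01c34ac56c7adfb5a as the one later reported missing when the node is marked Degraded. A minimal sketch, assuming admin access to the same CRC cluster (the commands are standard oc / Machine Config Operator usage, not taken from this log), of how one might cross-check the node's machine-config annotations against the rendered configs that actually exist:

# Which rendered config the node reports as current/desired, and its MCD state
# (the machineconfiguration.openshift.io/* annotations are standard MCO node annotations).
oc describe node crc | grep 'machineconfiguration.openshift.io/'

# Does the rendered config reported as missing actually exist, and what does the
# master pool currently target?
oc get machineconfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
oc get machineconfigpool master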
2025-08-13T19:57:12.367529387+00:00 stderr F W0813 19:57:12.367366 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:57:12.367529387+00:00 stderr F I0813 19:57:12.367431 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:57:12.438064461+00:00 stderr F I0813 19:57:12.437991 22232 update.go:2610] Validated on-disk state 2025-08-13T19:57:12.443242619+00:00 stderr F I0813 19:57:12.443192 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:22.467338806+00:00 stderr F I0813 19:57:22.467156 22232 update.go:2610] Update completed for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 and node has been successfully uncordoned 2025-08-13T19:57:22.488642294+00:00 stderr F I0813 19:57:22.487546 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:22.505041773+00:00 stderr F I0813 19:57:22.504737 22232 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T19:57:22.505041773+00:00 stderr F I0813 19:57:22.504953 22232 daemon.go:735] Transitioned from state: -> Done 2025-08-13T19:58:11.115676832+00:00 stderr F I0813 19:58:11.115363 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 23037 2025-08-13T19:58:22.507690785+00:00 stderr F I0813 19:58:22.507325 22232 daemon.go:858] Starting health listener on 127.0.0.1:8798 2025-08-13T19:59:31.927707858+00:00 stderr F I0813 19:59:31.927502 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28133 2025-08-13T19:59:36.357925063+00:00 stderr F I0813 19:59:36.357581 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28190 2025-08-13T19:59:40.727962252+00:00 stderr F W0813 19:59:40.718365 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:59:40.727962252+00:00 stderr F I0813 19:59:40.718636 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:59:41.458408873+00:00 stderr F I0813 19:59:41.458316 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28276 2025-08-13T19:59:45.463438017+00:00 stderr F I0813 19:59:45.461866 22232 daemon.go:921] Preflight config drift check successful (took 5.102890289s) 2025-08-13T19:59:45.473484843+00:00 stderr F I0813 19:59:45.469582 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down 2025-08-13T19:59:46.475458464+00:00 stderr F I0813 19:59:46.475394 22232 update.go:2632] Adding SIGTERM protection 2025-08-13T19:59:46.535535217+00:00 stderr F I0813 19:59:46.535393 22232 update.go:1011] Checking Reconcilable for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 to 
rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T19:59:47.009685103+00:00 stderr F I0813 19:59:47.009130 22232 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] } 2025-08-13T19:59:47.009685103+00:00 stderr F I0813 19:59:47.009225 22232 reconcile.go:151] SSH Keys reconcilable 2025-08-13T19:59:47.357071315+00:00 stderr F I0813 19:59:47.349732 22232 update.go:2610] Starting update from rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a: &{osUpdate:false kargs:false fips:false passwd:true files:false units:false kernelType:false extensions:false} 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928599 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928671 22232 update.go:1135] Changes do not require drain, skipping. 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928751 22232 update.go:1824] Updating files 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928761 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh" 2025-08-13T19:59:48.033316733+00:00 stderr F I0813 19:59:48.033244 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf" 2025-08-13T19:59:48.696336273+00:00 stderr F I0813 19:59:48.689565 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf" 2025-08-13T19:59:49.030039376+00:00 stderr F I0813 19:59:49.022894 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt" 2025-08-13T19:59:49.484090729+00:00 stderr F I0813 19:59:49.483886 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env" 2025-08-13T19:59:49.585269533+00:00 stderr F I0813 19:59:49.585015 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules" 2025-08-13T19:59:49.689499053+00:00 stderr F I0813 19:59:49.687103 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf" 2025-08-13T19:59:49.956253737+00:00 stderr F I0813 19:59:49.949356 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh" 2025-08-13T19:59:50.261551160+00:00 stderr F I0813 19:59:50.261371 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf" 2025-08-13T19:59:50.385114742+00:00 stderr F I0813 19:59:50.385051 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env" 2025-08-13T19:59:50.741647616+00:00 stderr F I0813 19:59:50.722828 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf" 2025-08-13T19:59:51.109315807+00:00 stderr F I0813 19:59:51.109245 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf" 2025-08-13T19:59:51.192529159+00:00 stderr F I0813 19:59:51.192456 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env" 2025-08-13T19:59:51.218742786+00:00 stderr F I0813 19:59:51.215769 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh" 2025-08-13T19:59:51.460395665+00:00 stderr F I0813 19:59:51.460311 22232 file_writers.go:233] Writing file 
"/etc/systemd/system.conf.d/kubelet-cgroups.conf" 2025-08-13T19:59:51.521491636+00:00 stderr F I0813 19:59:51.521209 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf" 2025-08-13T19:59:51.562914567+00:00 stderr F I0813 19:59:51.560100 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf" 2025-08-13T19:59:51.696166476+00:00 stderr F I0813 19:59:51.684283 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf" 2025-08-13T19:59:51.777634438+00:00 stderr F I0813 19:59:51.777471 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh" 2025-08-13T19:59:52.071225977+00:00 stderr F I0813 19:59:52.069309 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json" 2025-08-13T19:59:52.171214088+00:00 stderr F I0813 19:59:52.170641 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt" 2025-08-13T19:59:52.276324934+00:00 stderr F I0813 19:59:52.276183 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf" 2025-08-13T19:59:52.514677088+00:00 stderr F I0813 19:59:52.513265 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix" 2025-08-13T19:59:52.705937040+00:00 stderr F I0813 19:59:52.704355 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf" 2025-08-13T19:59:52.794668479+00:00 stderr F I0813 19:59:52.793886 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf" 2025-08-13T19:59:52.920257429+00:00 stderr F I0813 19:59:52.920192 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf" 2025-08-13T19:59:53.087749554+00:00 stderr F I0813 19:59:53.087687 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf" 2025-08-13T19:59:53.152336605+00:00 stderr F I0813 19:59:53.152274 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf" 2025-08-13T19:59:53.237315557+00:00 stderr F I0813 19:59:53.237175 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname" 2025-08-13T19:59:53.267968770+00:00 stderr F I0813 19:59:53.267902 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh" 2025-08-13T19:59:53.297734859+00:00 stderr F I0813 19:59:53.296534 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy" 2025-08-13T19:59:53.435545397+00:00 stderr F I0813 19:59:53.435193 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl" 2025-08-13T19:59:53.530363950+00:00 stderr F I0813 19:59:53.530307 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh" 2025-08-13T19:59:53.563872605+00:00 stderr F I0813 19:59:53.563294 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf" 2025-08-13T19:59:53.635688322+00:00 stderr F I0813 19:59:53.635414 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default" 2025-08-13T19:59:53.689104175+00:00 stderr F I0813 19:59:53.684418 22232 file_writers.go:233] Writing file "/etc/containers/policy.json" 2025-08-13T19:59:53.737416492+00:00 stderr F I0813 19:59:53.737245 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf" 2025-08-13T19:59:53.975887440+00:00 stderr F I0813 19:59:53.975753 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg" 2025-08-13T19:59:54.027015427+00:00 stderr F I0813 19:59:54.026269 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml" 
2025-08-13T19:59:54.359685090+00:00 stderr F I0813 19:59:54.358593 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf" 2025-08-13T19:59:54.453024270+00:00 stderr F I0813 19:59:54.452862 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper" 2025-08-13T19:59:54.508676137+00:00 stderr F I0813 19:59:54.508492 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service" 2025-08-13T19:59:54.524150608+00:00 stderr F I0813 19:59:54.520664 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T19:59:54.552559448+00:00 stderr F I0813 19:59:54.552348 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T19:59:54.552559448+00:00 stderr F I0813 19:59:54.552424 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf" 2025-08-13T19:59:54.585310891+00:00 stderr F I0813 19:59:54.575814 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf" 2025-08-13T19:59:54.590633523+00:00 stderr F I0813 19:59:54.590146 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T19:59:57.032018376+00:00 stderr F I0813 19:59:57.026027 22232 update.go:2118] Preset systemd unit crio.service 2025-08-13T19:59:57.032018376+00:00 stderr F I0813 19:59:57.026074 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service" 2025-08-13T19:59:57.066021635+00:00 stderr F I0813 19:59:57.061688 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T19:59:57.207558150+00:00 stderr F I0813 19:59:57.201523 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist. 
2025-08-13T19:59:57.207558150+00:00 stderr F ) 2025-08-13T19:59:57.207558150+00:00 stderr F I0813 19:59:57.201627 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target" 2025-08-13T19:59:57.491331609+00:00 stderr F I0813 19:59:57.484415 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service" 2025-08-13T19:59:57.503493765+00:00 stderr F I0813 19:59:57.503151 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target" 2025-08-13T19:59:59.829564801+00:00 stderr F I0813 19:59:59.826592 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target 2025-08-13T19:59:59.829664244+00:00 stderr F I0813 19:59:59.829642 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T19:59:59.838453104+00:00 stderr F I0813 19:59:59.838391 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T19:59:59.838532937+00:00 stderr F I0813 19:59:59.838512 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T19:59:59.843176079+00:00 stderr F I0813 19:59:59.843066 22232 file_writers.go:293] Writing systemd unit "kubelet.service" 2025-08-13T19:59:59.857110866+00:00 stderr F I0813 19:59:59.857036 22232 file_writers.go:293] Writing systemd unit "kubens.service" 2025-08-13T19:59:59.871352322+00:00 stderr F I0813 19:59:59.870212 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service" 2025-08-13T19:59:59.894870933+00:00 stderr F I0813 19:59:59.894672 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service" 2025-08-13T19:59:59.910213540+00:00 stderr F I0813 19:59:59.909423 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service" 2025-08-13T19:59:59.917510698+00:00 stderr F I0813 19:59:59.917338 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service" 2025-08-13T19:59:59.931979601+00:00 stderr F I0813 19:59:59.930098 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service" 2025-08-13T19:59:59.933565056+00:00 stderr F I0813 19:59:59.933409 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf" 2025-08-13T20:00:02.559116822+00:00 stderr F I0813 20:00:02.552812 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service 2025-08-13T20:00:02.559116822+00:00 stderr F I0813 20:00:02.552900 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf" 2025-08-13T20:00:02.794046180+00:00 stderr F I0813 20:00:02.793485 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064320 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist. 
2025-08-13T20:00:03.064705348+00:00 stderr F ) 2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064467 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064495 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf" 2025-08-13T20:00:05.470235199+00:00 stderr F I0813 20:00:05.470067 22232 update.go:2118] Preset systemd unit rpm-ostreed.service 2025-08-13T20:00:05.470235199+00:00 stderr F I0813 20:00:05.470165 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service" 2025-08-13T20:00:05.512909075+00:00 stderr F I0813 20:00:05.511641 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:00:05.621250585+00:00 stderr F I0813 20:00:05.621138 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist. 2025-08-13T20:00:05.621250585+00:00 stderr F ) 2025-08-13T20:00:05.621250585+00:00 stderr F I0813 20:00:05.621185 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service" 2025-08-13T20:00:05.630887549+00:00 stderr F I0813 20:00:05.630133 22232 file_writers.go:293] Writing systemd unit "dummy-network.service" 2025-08-13T20:00:07.534870020+00:00 stderr F I0813 20:00:07.525150 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service] 2025-08-13T20:00:09.652570903+00:00 stderr F I0813 20:00:09.652173 22232 update.go:2107] Disabled systemd units [kubens.service] 2025-08-13T20:00:09.652648545+00:00 stderr F I0813 20:00:09.652607 22232 update.go:1887] Deleting stale data 2025-08-13T20:00:09.653243112+00:00 stderr F I0813 20:00:09.652960 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600 2025-08-13T20:00:09.653243112+00:00 stderr F I0813 20:00:09.653074 22232 update.go:2316] updating SSH keys 2025-08-13T20:00:09.661636461+00:00 stderr F I0813 20:00:09.655549 22232 update.go:2217] Writing SSH keys to "/home/core/.ssh/authorized_keys.d/ignition" 2025-08-13T20:00:09.669466795+00:00 stderr F I0813 20:00:09.668341 22232 update.go:2259] Checking if absent users need to be disconfigured 2025-08-13T20:00:09.947245965+00:00 stderr F I0813 20:00:09.947112 22232 update.go:2284] Password has been configured 2025-08-13T20:00:10.034967376+00:00 stderr F I0813 20:00:10.029610 22232 update.go:2610] Node has Desired Config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, skipping reboot 2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081628 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081684 22232 daemon.go:1686] Current config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081695 22232 daemon.go:1687] Desired config: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081701 22232 daemon.go:1695] state: Done 2025-08-13T20:00:10.140755523+00:00 
stderr F I0813 20:00:10.139823 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:00:20.784312371+00:00 stderr F I0813 20:00:20.784137 22232 update.go:2610] Update completed for config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a and node has been successfully uncordoned 2025-08-13T20:00:20.937038686+00:00 stderr F I0813 20:00:20.936759 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:00:21.054077223+00:00 stderr F I0813 20:00:21.053931 22232 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T20:00:21.054077223+00:00 stderr F I0813 20:00:21.054062 22232 update.go:2640] Removing SIGTERM protection 2025-08-13T20:00:28.570968623+00:00 stderr F W0813 20:00:28.569716 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T20:00:28.570968623+00:00 stderr F I0813 20:00:28.569765 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T20:00:29.061416487+00:00 stderr F I0813 20:00:29.058613 22232 daemon.go:921] Preflight config drift check successful (took 838.048625ms) 2025-08-13T20:00:29.061416487+00:00 stderr F I0813 20:00:29.059008 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down 2025-08-13T20:00:29.153688818+00:00 stderr F I0813 20:00:29.153551 22232 update.go:2632] Adding SIGTERM protection 2025-08-13T20:00:29.590755131+00:00 stderr F I0813 20:00:29.584560 22232 update.go:1011] Checking Reconcilable for config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a to rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:00:29.828121039+00:00 stderr F I0813 20:00:29.826580 22232 update.go:2610] Starting update from rendered-master-ef556ead28ddfad01c34ac56c7adfb5a to rendered-master-11405dc064e9fc83a779a06d1cd665b3: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false} 2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858244 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes 2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858329 22232 update.go:1135] Changes do not require drain, skipping. 
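Each update pass in this log prints the computed diff (for example &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false}) followed by the resulting decisions: whether a drain is required, whether a reboot can be skipped, and when the update completes. A small sketch, assuming the file has been extracted under the relative path from the tar header above (ending in machine-config-daemon/2.log), for pulling just those decision lines out of the raw log:

# Show only the per-update decisions: reconcile check, computed diff, drain choice,
# reboot choice, and completion.
grep -E 'Checking Reconcilable for config|Starting update from|Changes do not require drain|skipping reboot|Update completed for config' \
    machine-config-daemon/2.log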
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858389 22232 update.go:1824] Updating files 2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858397 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh" 2025-08-13T20:00:29.928929103+00:00 stderr F I0813 20:00:29.927253 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf" 2025-08-13T20:00:29.956200881+00:00 stderr F I0813 20:00:29.953938 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf" 2025-08-13T20:00:29.986563087+00:00 stderr F I0813 20:00:29.986401 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt" 2025-08-13T20:00:30.003994594+00:00 stderr F I0813 20:00:30.002598 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env" 2025-08-13T20:00:30.058935210+00:00 stderr F I0813 20:00:30.058265 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules" 2025-08-13T20:00:30.138138529+00:00 stderr F I0813 20:00:30.137114 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf" 2025-08-13T20:00:30.180542928+00:00 stderr F I0813 20:00:30.180426 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh" 2025-08-13T20:00:30.205171880+00:00 stderr F I0813 20:00:30.205111 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf" 2025-08-13T20:00:30.320662783+00:00 stderr F I0813 20:00:30.318958 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env" 2025-08-13T20:00:30.446189862+00:00 stderr F I0813 20:00:30.446118 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf" 2025-08-13T20:00:30.712471235+00:00 stderr F I0813 20:00:30.712360 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf" 2025-08-13T20:00:30.749099069+00:00 stderr F I0813 20:00:30.749001 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env" 2025-08-13T20:00:30.793060323+00:00 stderr F I0813 20:00:30.792857 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh" 2025-08-13T20:00:30.822506863+00:00 stderr F I0813 20:00:30.822361 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf" 2025-08-13T20:00:30.855605566+00:00 stderr F I0813 20:00:30.855477 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf" 2025-08-13T20:00:30.893684752+00:00 stderr F I0813 20:00:30.893516 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf" 2025-08-13T20:00:30.930010118+00:00 stderr F I0813 20:00:30.929909 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf" 2025-08-13T20:00:30.949467253+00:00 stderr F I0813 20:00:30.949256 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh" 2025-08-13T20:00:30.975131595+00:00 stderr F I0813 20:00:30.975007 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json" 2025-08-13T20:00:31.026386506+00:00 stderr F I0813 20:00:31.023025 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt" 2025-08-13T20:00:31.054892399+00:00 stderr F I0813 20:00:31.053008 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf" 2025-08-13T20:00:31.236716744+00:00 stderr F I0813 20:00:31.236482 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix" 2025-08-13T20:00:31.287532863+00:00 
stderr F I0813 20:00:31.287281 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf" 2025-08-13T20:00:31.367567625+00:00 stderr F I0813 20:00:31.367362 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf" 2025-08-13T20:00:31.458897719+00:00 stderr F I0813 20:00:31.458746 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf" 2025-08-13T20:00:31.519898518+00:00 stderr F I0813 20:00:31.518075 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf" 2025-08-13T20:00:31.640943030+00:00 stderr F I0813 20:00:31.639712 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf" 2025-08-13T20:00:31.995951702+00:00 stderr F I0813 20:00:31.995010 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname" 2025-08-13T20:00:32.941604476+00:00 stderr F I0813 20:00:32.941205 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh" 2025-08-13T20:00:33.099537179+00:00 stderr F I0813 20:00:33.098742 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy" 2025-08-13T20:00:33.259858511+00:00 stderr F I0813 20:00:33.259051 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl" 2025-08-13T20:00:33.358342259+00:00 stderr F I0813 20:00:33.357730 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh" 2025-08-13T20:00:33.429253221+00:00 stderr F I0813 20:00:33.421085 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf" 2025-08-13T20:00:33.718915470+00:00 stderr F I0813 20:00:33.714664 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default" 2025-08-13T20:00:33.950983087+00:00 stderr F I0813 20:00:33.949050 22232 file_writers.go:233] Writing file "/etc/containers/policy.json" 2025-08-13T20:00:34.153575824+00:00 stderr F I0813 20:00:34.152013 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf" 2025-08-13T20:00:34.337065366+00:00 stderr F I0813 20:00:34.335199 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg" 2025-08-13T20:00:34.391819897+00:00 stderr F I0813 20:00:34.391466 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml" 2025-08-13T20:00:34.773422499+00:00 stderr F I0813 20:00:34.769368 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf" 2025-08-13T20:00:34.913323098+00:00 stderr F I0813 20:00:34.912048 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper" 2025-08-13T20:00:34.991521507+00:00 stderr F I0813 20:00:34.989402 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service" 2025-08-13T20:00:35.019358491+00:00 stderr F I0813 20:00:35.019218 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:00:35.085055785+00:00 stderr F I0813 20:00:35.082897 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:00:35.085055785+00:00 stderr F I0813 20:00:35.083651 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf" 2025-08-13T20:00:35.103228133+00:00 stderr F I0813 20:00:35.101299 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf" 2025-08-13T20:00:35.157459499+00:00 stderr F I0813 20:00:35.157364 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:00:37.250278123+00:00 stderr F I0813 20:00:37.250022 22232 update.go:2118] 
Preset systemd unit crio.service 2025-08-13T20:00:37.250278123+00:00 stderr F I0813 20:00:37.250190 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service" 2025-08-13T20:00:37.253953048+00:00 stderr F I0813 20:00:37.253910 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:00:37.405302973+00:00 stderr F I0813 20:00:37.405192 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist. 2025-08-13T20:00:37.405302973+00:00 stderr F ) 2025-08-13T20:00:37.405302973+00:00 stderr F I0813 20:00:37.405273 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target" 2025-08-13T20:00:37.420331222+00:00 stderr F I0813 20:00:37.417654 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service" 2025-08-13T20:00:37.423037109+00:00 stderr F I0813 20:00:37.422320 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target" 2025-08-13T20:00:39.911409521+00:00 stderr F I0813 20:00:39.911340 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target 2025-08-13T20:00:39.911522365+00:00 stderr F I0813 20:00:39.911506 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:00:39.919367808+00:00 stderr F I0813 20:00:39.919290 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:00:39.919452881+00:00 stderr F I0813 20:00:39.919431 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:00:39.927199332+00:00 stderr F I0813 20:00:39.927170 22232 file_writers.go:293] Writing systemd unit "kubelet.service" 2025-08-13T20:00:39.932952896+00:00 stderr F I0813 20:00:39.932926 22232 file_writers.go:293] Writing systemd unit "kubens.service" 2025-08-13T20:00:39.946758159+00:00 stderr F I0813 20:00:39.946698 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service" 2025-08-13T20:00:39.957654460+00:00 stderr F I0813 20:00:39.956581 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service" 2025-08-13T20:00:39.981065418+00:00 stderr F I0813 20:00:39.980511 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service" 2025-08-13T20:00:39.999995307+00:00 stderr F I0813 20:00:39.999741 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service" 2025-08-13T20:00:40.023938250+00:00 stderr F I0813 20:00:40.023650 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service" 2025-08-13T20:00:40.040043639+00:00 stderr F I0813 20:00:40.035400 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf" 2025-08-13T20:00:42.156462427+00:00 stderr F I0813 20:00:42.155441 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service 2025-08-13T20:00:42.156462427+00:00 stderr F I0813 20:00:42.155486 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf" 2025-08-13T20:00:42.566969882+00:00 stderr F I0813 20:00:42.561559 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670285 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist. 
2025-08-13T20:00:42.671871473+00:00 stderr F ) 2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670332 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670357 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf" 2025-08-13T20:00:46.439115612+00:00 stderr F I0813 20:00:46.435710 22232 update.go:2118] Preset systemd unit rpm-ostreed.service 2025-08-13T20:00:46.439115612+00:00 stderr F I0813 20:00:46.435881 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service" 2025-08-13T20:00:46.556598492+00:00 stderr F I0813 20:00:46.553101 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:00:46.749134532+00:00 stderr F I0813 20:00:46.744992 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist. 2025-08-13T20:00:46.749134532+00:00 stderr F ) 2025-08-13T20:00:46.749134532+00:00 stderr F I0813 20:00:46.745047 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service" 2025-08-13T20:00:46.979914161+00:00 stderr F I0813 20:00:46.960227 22232 file_writers.go:293] Writing systemd unit "dummy-network.service" 2025-08-13T20:00:50.349518792+00:00 stderr F I0813 20:00:50.332944 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service] 2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701092 22232 update.go:2107] Disabled systemd units [kubens.service] 2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701211 22232 update.go:1887] Deleting stale data 2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701287 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600 2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701558 22232 update.go:2259] Checking if absent users need to be disconfigured 2025-08-13T20:01:04.903053940+00:00 stderr F I0813 20:01:04.902693 22232 update.go:2284] Password has been configured 2025-08-13T20:01:05.406232137+00:00 stderr F I0813 20:01:05.404157 22232 update.go:2610] Node has Desired Config rendered-master-11405dc064e9fc83a779a06d1cd665b3, skipping reboot 2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666377 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666613 22232 daemon.go:1686] Current config: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666626 22232 daemon.go:1687] Desired config: rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666638 22232 daemon.go:1695] state: Done 2025-08-13T20:01:05.722514235+00:00 stderr F I0813 20:01:05.721976 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:01:29.333812924+00:00 stderr F I0813 20:01:29.332455 22232 update.go:2610] Update completed for config 
rendered-master-11405dc064e9fc83a779a06d1cd665b3 and node has been successfully uncordoned 2025-08-13T20:01:31.494362400+00:00 stderr F I0813 20:01:31.488741 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:01:31.628122714+00:00 stderr F I0813 20:01:31.627395 22232 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T20:01:31.628122714+00:00 stderr F I0813 20:01:31.627505 22232 update.go:2640] Removing SIGTERM protection 2025-08-13T20:05:15.716149813+00:00 stderr F I0813 20:05:15.715368 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28276 2025-08-13T20:06:07.027530456+00:00 stderr F I0813 20:06:07.027467 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 31988 2025-08-13T20:06:14.811137908+00:00 stderr F I0813 20:06:14.810074 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32019 2025-08-13T20:06:49.883441178+00:00 stderr F I0813 20:06:49.877623 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32205 2025-08-13T20:06:50.782933748+00:00 stderr F I0813 20:06:50.782658 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32210 2025-08-13T20:06:51.459098074+00:00 stderr F I0813 20:06:51.458152 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257 2025-08-13T20:09:06.622937465+00:00 stderr F I0813 20:09:06.622671 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257 2025-08-13T20:36:38.651731771+00:00 stderr F I0813 20:36:38.651488 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257 2025-08-13T20:42:13.301299505+00:00 stderr F I0813 20:42:13.300560 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37415 2025-08-13T20:42:14.048707112+00:00 stderr F I0813 20:42:14.026597 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37419 2025-08-13T20:42:16.625653205+00:00 stderr F I0813 20:42:16.625182 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37431 2025-08-13T20:42:23.469674098+00:00 stderr F I0813 20:42:23.469267 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T20:42:24.066938248+00:00 stderr F I0813 20:42:24.065572 22232 daemon.go:921] Preflight config drift check successful (took 1.190182413s) 2025-08-13T20:42:24.072380705+00:00 stderr F I0813 20:42:24.072300 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down 2025-08-13T20:42:24.105901431+00:00 stderr F I0813 20:42:24.104044 22232 update.go:2632] Adding SIGTERM protection 2025-08-13T20:42:24.149125037+00:00 stderr F I0813 20:42:24.149020 22232 update.go:1011] Checking Reconcilable for config rendered-master-11405dc064e9fc83a779a06d1cd665b3 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:24.198179062+00:00 stderr F I0813 20:42:24.198116 22232 update.go:2610] Starting update from rendered-master-11405dc064e9fc83a779a06d1cd665b3 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false} 2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214755 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. 
Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes 2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214848 22232 update.go:1135] Changes do not require drain, skipping. 2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214909 22232 update.go:1824] Updating files 2025-08-13T20:42:24.214956025+00:00 stderr F I0813 20:42:24.214916 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh" 2025-08-13T20:42:24.252122577+00:00 stderr F I0813 20:42:24.251988 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf" 2025-08-13T20:42:24.263958918+00:00 stderr F I0813 20:42:24.263917 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf" 2025-08-13T20:42:24.277697514+00:00 stderr F I0813 20:42:24.277629 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt" 2025-08-13T20:42:24.290403600+00:00 stderr F I0813 20:42:24.290214 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env" 2025-08-13T20:42:24.301490240+00:00 stderr F I0813 20:42:24.301378 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules" 2025-08-13T20:42:24.311638183+00:00 stderr F I0813 20:42:24.311569 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf" 2025-08-13T20:42:24.322919988+00:00 stderr F I0813 20:42:24.322846 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh" 2025-08-13T20:42:24.336051967+00:00 stderr F I0813 20:42:24.335914 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf" 2025-08-13T20:42:24.361308765+00:00 stderr F I0813 20:42:24.361173 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env" 2025-08-13T20:42:24.382915808+00:00 stderr F I0813 20:42:24.382510 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf" 2025-08-13T20:42:24.404915332+00:00 stderr F I0813 20:42:24.403312 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf" 2025-08-13T20:42:24.431580261+00:00 stderr F I0813 20:42:24.430042 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env" 2025-08-13T20:42:24.449065225+00:00 stderr F I0813 20:42:24.449000 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh" 2025-08-13T20:42:24.467327221+00:00 stderr F I0813 20:42:24.467204 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf" 2025-08-13T20:42:24.479612755+00:00 stderr F I0813 20:42:24.479511 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf" 2025-08-13T20:42:24.497209363+00:00 stderr F I0813 20:42:24.494684 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf" 2025-08-13T20:42:24.511488174+00:00 stderr F I0813 20:42:24.511369 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf" 2025-08-13T20:42:24.525574921+00:00 stderr F I0813 20:42:24.525487 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh" 2025-08-13T20:42:24.542909090+00:00 stderr F I0813 20:42:24.542836 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json" 2025-08-13T20:42:24.555617787+00:00 stderr F I0813 20:42:24.555531 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt" 2025-08-13T20:42:24.571710431+00:00 stderr F I0813 20:42:24.571625 22232 file_writers.go:233] Writing file 
"/etc/dnsmasq.conf" 2025-08-13T20:42:24.597107223+00:00 stderr F I0813 20:42:24.596981 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix" 2025-08-13T20:42:24.612088595+00:00 stderr F I0813 20:42:24.611990 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf" 2025-08-13T20:42:24.626033247+00:00 stderr F I0813 20:42:24.625139 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf" 2025-08-13T20:42:24.640163794+00:00 stderr F I0813 20:42:24.640072 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf" 2025-08-13T20:42:24.654876238+00:00 stderr F I0813 20:42:24.654702 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf" 2025-08-13T20:42:24.678018696+00:00 stderr F I0813 20:42:24.677958 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf" 2025-08-13T20:42:24.691753702+00:00 stderr F I0813 20:42:24.691701 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname" 2025-08-13T20:42:24.710099500+00:00 stderr F I0813 20:42:24.709966 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh" 2025-08-13T20:42:24.722011874+00:00 stderr F I0813 20:42:24.721912 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy" 2025-08-13T20:42:24.745032968+00:00 stderr F I0813 20:42:24.744917 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl" 2025-08-13T20:42:24.755827509+00:00 stderr F I0813 20:42:24.755676 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh" 2025-08-13T20:42:24.770988076+00:00 stderr F I0813 20:42:24.770880 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf" 2025-08-13T20:42:24.790473008+00:00 stderr F I0813 20:42:24.789876 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default" 2025-08-13T20:42:24.807441977+00:00 stderr F I0813 20:42:24.807336 22232 file_writers.go:233] Writing file "/etc/containers/policy.json" 2025-08-13T20:42:24.844291519+00:00 stderr F I0813 20:42:24.844124 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf" 2025-08-13T20:42:24.855415970+00:00 stderr F I0813 20:42:24.855268 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg" 2025-08-13T20:42:24.866124419+00:00 stderr F I0813 20:42:24.866022 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml" 2025-08-13T20:42:24.881380209+00:00 stderr F I0813 20:42:24.879962 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf" 2025-08-13T20:42:24.894569969+00:00 stderr F I0813 20:42:24.894399 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper" 2025-08-13T20:42:24.908705326+00:00 stderr F I0813 20:42:24.908483 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service" 2025-08-13T20:42:24.911642411+00:00 stderr F I0813 20:42:24.911520 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:42:24.917033356+00:00 stderr F I0813 20:42:24.916871 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:24.917033356+00:00 stderr F I0813 20:42:24.916973 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf" 2025-08-13T20:42:24.919952361+00:00 stderr F I0813 20:42:24.919753 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf" 
2025-08-13T20:42:24.924153292+00:00 stderr F I0813 20:42:24.923975 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:42:26.791141357+00:00 stderr F I0813 20:42:26.791027 22232 update.go:2118] Preset systemd unit crio.service 2025-08-13T20:42:26.791141357+00:00 stderr F I0813 20:42:26.791080 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service" 2025-08-13T20:42:26.794389151+00:00 stderr F I0813 20:42:26.794326 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:42:26.850068746+00:00 stderr F I0813 20:42:26.849861 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist. 2025-08-13T20:42:26.850068746+00:00 stderr F ) 2025-08-13T20:42:26.850068746+00:00 stderr F I0813 20:42:26.849932 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target" 2025-08-13T20:42:26.853442923+00:00 stderr F I0813 20:42:26.853333 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service" 2025-08-13T20:42:26.859037604+00:00 stderr F I0813 20:42:26.858955 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target" 2025-08-13T20:42:28.576991194+00:00 stderr F I0813 20:42:28.575017 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target 2025-08-13T20:42:28.576991194+00:00 stderr F I0813 20:42:28.575063 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:42:28.580716541+00:00 stderr F I0813 20:42:28.580693 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:28.580854265+00:00 stderr F I0813 20:42:28.580751 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:42:28.584480740+00:00 stderr F I0813 20:42:28.584458 22232 file_writers.go:293] Writing systemd unit "kubelet.service" 2025-08-13T20:42:28.593538641+00:00 stderr F I0813 20:42:28.593516 22232 file_writers.go:293] Writing systemd unit "kubens.service" 2025-08-13T20:42:28.601360256+00:00 stderr F I0813 20:42:28.599059 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service" 2025-08-13T20:42:28.602883280+00:00 stderr F I0813 20:42:28.602706 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service" 2025-08-13T20:42:28.611321354+00:00 stderr F I0813 20:42:28.611279 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service" 2025-08-13T20:42:28.614991489+00:00 stderr F I0813 20:42:28.614967 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service" 2025-08-13T20:42:28.618508081+00:00 stderr F I0813 20:42:28.618438 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service" 2025-08-13T20:42:28.621332872+00:00 stderr F I0813 20:42:28.621310 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf" 2025-08-13T20:42:30.087391678+00:00 stderr F I0813 20:42:30.086209 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service 2025-08-13T20:42:30.087510012+00:00 stderr F I0813 20:42:30.087491 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf" 2025-08-13T20:42:30.093930477+00:00 stderr F I0813 20:42:30.093554 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.114339 22232 
update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist. 2025-08-13T20:42:30.115671134+00:00 stderr F ) 2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.115519 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.115562 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf" 2025-08-13T20:42:31.509335524+00:00 stderr F I0813 20:42:31.509270 22232 update.go:2118] Preset systemd unit rpm-ostreed.service 2025-08-13T20:42:31.509424616+00:00 stderr F I0813 20:42:31.509409 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service" 2025-08-13T20:42:31.513537035+00:00 stderr F I0813 20:42:31.513011 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:42:31.546347121+00:00 stderr F I0813 20:42:31.546288 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist. 2025-08-13T20:42:31.546347121+00:00 stderr F ) 2025-08-13T20:42:31.546415863+00:00 stderr F I0813 20:42:31.546400 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service" 2025-08-13T20:42:31.552157308+00:00 stderr F I0813 20:42:31.552085 22232 file_writers.go:293] Writing systemd unit "dummy-network.service" 2025-08-13T20:42:33.062583914+00:00 stderr F I0813 20:42:33.061640 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service] 2025-08-13T20:42:34.628042927+00:00 stderr F I0813 20:42:34.627976 22232 update.go:2107] Disabled systemd units [kubens.service] 2025-08-13T20:42:34.628143799+00:00 stderr F I0813 20:42:34.628129 22232 update.go:1887] Deleting stale data 2025-08-13T20:42:34.628275183+00:00 stderr F I0813 20:42:34.628255 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600 2025-08-13T20:42:34.629196670+00:00 stderr F I0813 20:42:34.629177 22232 update.go:2259] Checking if absent users need to be disconfigured 2025-08-13T20:42:34.902454208+00:00 stderr F I0813 20:42:34.901737 22232 update.go:2284] Password has been configured 2025-08-13T20:42:34.910419238+00:00 stderr F I0813 20:42:34.910383 22232 update.go:2610] Node has Desired Config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, skipping reboot 2025-08-13T20:42:34.974693461+00:00 stderr F I0813 20:42:34.974482 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T20:42:34.984181794+00:00 stderr F I0813 20:42:34.984085 22232 update.go:2259] Checking if absent users need to be disconfigured 2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064034 22232 update.go:2284] Password has been configured 2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064092 22232 update.go:1824] Updating files 2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064105 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh" 
2025-08-13T20:42:35.080327306+00:00 stderr F I0813 20:42:35.080203 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf" 2025-08-13T20:42:35.092598230+00:00 stderr F I0813 20:42:35.092489 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf" 2025-08-13T20:42:35.103657929+00:00 stderr F I0813 20:42:35.103567 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt" 2025-08-13T20:42:35.120481844+00:00 stderr F I0813 20:42:35.120434 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env" 2025-08-13T20:42:35.135935139+00:00 stderr F I0813 20:42:35.135849 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules" 2025-08-13T20:42:35.149484510+00:00 stderr F I0813 20:42:35.149389 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf" 2025-08-13T20:42:35.161465315+00:00 stderr F I0813 20:42:35.161379 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh" 2025-08-13T20:42:35.182567244+00:00 stderr F I0813 20:42:35.182459 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf" 2025-08-13T20:42:35.199287336+00:00 stderr F I0813 20:42:35.199095 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env" 2025-08-13T20:42:35.213657570+00:00 stderr F I0813 20:42:35.213564 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf" 2025-08-13T20:42:35.225871972+00:00 stderr F I0813 20:42:35.225709 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf" 2025-08-13T20:42:35.250539514+00:00 stderr F I0813 20:42:35.250122 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env" 2025-08-13T20:42:35.273525386+00:00 stderr F I0813 20:42:35.271163 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh" 2025-08-13T20:42:35.301671208+00:00 stderr F I0813 20:42:35.301016 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf" 2025-08-13T20:42:35.328518312+00:00 stderr F I0813 20:42:35.328357 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf" 2025-08-13T20:42:35.352840333+00:00 stderr F I0813 20:42:35.350893 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf" 2025-08-13T20:42:35.373667023+00:00 stderr F I0813 20:42:35.371918 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf" 2025-08-13T20:42:35.404121861+00:00 stderr F I0813 20:42:35.403939 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh" 2025-08-13T20:42:35.436752152+00:00 stderr F I0813 20:42:35.436632 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json" 2025-08-13T20:42:35.453877856+00:00 stderr F I0813 20:42:35.453638 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt" 2025-08-13T20:42:35.468862658+00:00 stderr F I0813 20:42:35.468286 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf" 2025-08-13T20:42:35.491322875+00:00 stderr F I0813 20:42:35.490614 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix" 2025-08-13T20:42:35.504189526+00:00 stderr F I0813 20:42:35.504055 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf" 2025-08-13T20:42:35.520377993+00:00 stderr F I0813 20:42:35.518283 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf" 
2025-08-13T20:42:35.539960518+00:00 stderr F I0813 20:42:35.538639 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf" 2025-08-13T20:42:35.556549106+00:00 stderr F I0813 20:42:35.556450 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf" 2025-08-13T20:42:35.568008026+00:00 stderr F I0813 20:42:35.567730 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf" 2025-08-13T20:42:35.577947273+00:00 stderr F I0813 20:42:35.577848 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname" 2025-08-13T20:42:35.588734984+00:00 stderr F I0813 20:42:35.588631 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh" 2025-08-13T20:42:35.601888393+00:00 stderr F I0813 20:42:35.601827 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy" 2025-08-13T20:42:35.615974709+00:00 stderr F I0813 20:42:35.615912 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl" 2025-08-13T20:42:35.629195690+00:00 stderr F I0813 20:42:35.629134 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh" 2025-08-13T20:42:35.651689649+00:00 stderr F I0813 20:42:35.651629 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf" 2025-08-13T20:42:35.669093641+00:00 stderr F I0813 20:42:35.669030 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default" 2025-08-13T20:42:35.682464926+00:00 stderr F I0813 20:42:35.682366 22232 file_writers.go:233] Writing file "/etc/containers/policy.json" 2025-08-13T20:42:35.696846231+00:00 stderr F I0813 20:42:35.696688 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf" 2025-08-13T20:42:35.709859166+00:00 stderr F I0813 20:42:35.708958 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg" 2025-08-13T20:42:35.721989996+00:00 stderr F I0813 20:42:35.721926 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml" 2025-08-13T20:42:35.736007530+00:00 stderr F I0813 20:42:35.735745 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf" 2025-08-13T20:42:35.753092332+00:00 stderr F I0813 20:42:35.752857 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper" 2025-08-13T20:42:35.780277376+00:00 stderr F I0813 20:42:35.779309 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service" 2025-08-13T20:42:35.783961322+00:00 stderr F I0813 20:42:35.783555 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:42:35.789884023+00:00 stderr F I0813 20:42:35.788276 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:35.789884023+00:00 stderr F I0813 20:42:35.788335 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf" 2025-08-13T20:42:35.791366746+00:00 stderr F I0813 20:42:35.791309 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf" 2025-08-13T20:42:35.794091574+00:00 stderr F I0813 20:42:35.794017 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:42:37.702915466+00:00 stderr F I0813 20:42:37.702382 22232 update.go:2118] Preset systemd unit crio.service 2025-08-13T20:42:37.702915466+00:00 stderr F I0813 20:42:37.702519 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service" 2025-08-13T20:42:37.707136597+00:00 stderr F I0813 
20:42:37.706349 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:42:37.893965314+00:00 stderr F I0813 20:42:37.893585 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist. 2025-08-13T20:42:37.893965314+00:00 stderr F ) 2025-08-13T20:42:37.893965314+00:00 stderr F I0813 20:42:37.893629 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target" 2025-08-13T20:42:37.897539277+00:00 stderr F I0813 20:42:37.897475 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service" 2025-08-13T20:42:37.901289415+00:00 stderr F I0813 20:42:37.901217 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target" 2025-08-13T20:42:39.316065013+00:00 stderr F I0813 20:42:39.315961 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target 2025-08-13T20:42:39.316114875+00:00 stderr F I0813 20:42:39.316068 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:42:39.321291884+00:00 stderr F I0813 20:42:39.320616 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:39.321291884+00:00 stderr F I0813 20:42:39.320721 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:42:39.324873447+00:00 stderr F I0813 20:42:39.323944 22232 file_writers.go:293] Writing systemd unit "kubelet.service" 2025-08-13T20:42:39.327420141+00:00 stderr F I0813 20:42:39.327371 22232 file_writers.go:293] Writing systemd unit "kubens.service" 2025-08-13T20:42:39.330551491+00:00 stderr F I0813 20:42:39.330511 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service" 2025-08-13T20:42:39.334212687+00:00 stderr F I0813 20:42:39.334104 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service" 2025-08-13T20:42:39.337603454+00:00 stderr F I0813 20:42:39.337539 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service" 2025-08-13T20:42:39.339667454+00:00 stderr F I0813 20:42:39.339601 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service" 2025-08-13T20:42:39.341844787+00:00 stderr F I0813 20:42:39.341732 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service" 2025-08-13T20:42:39.344166914+00:00 stderr F I0813 20:42:39.344100 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf" 2025-08-13T20:42:41.135897899+00:00 stderr F I0813 20:42:41.135709 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service 2025-08-13T20:42:41.135897899+00:00 stderr F I0813 20:42:41.135751 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf" 2025-08-13T20:42:41.139032159+00:00 stderr F I0813 20:42:41.138978 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:41.405824051+00:00 stderr F W0813 20:42:41.405685 22232 daemon.go:1366] Got an error from auxiliary tools: kubelet health check has failed 1 times: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused 2025-08-13T20:42:42.159850680+00:00 stderr F I0813 20:42:42.159719 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist. 
2025-08-13T20:42:42.159850680+00:00 stderr F )
2025-08-13T20:42:42.159850680+00:00 stderr F I0813 20:42:42.159768 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:42.159983064+00:00 stderr F I0813 20:42:42.159942 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:42:43.488849285+00:00 stderr F I0813 20:42:43.488711 22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:42:43.488849285+00:00 stderr F I0813 20:42:43.488755 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:42:43.492146470+00:00 stderr F I0813 20:42:43.492068 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:42:44.411025122+00:00 stderr F I0813 20:42:44.410920 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:42:44.411025122+00:00 stderr F )
2025-08-13T20:42:44.411025122+00:00 stderr F I0813 20:42:44.410979 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:42:44.415625774+00:00 stderr F I0813 20:42:44.415545 22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:42:44.951528605+00:00 stderr F I0813 20:42:44.951462 22232 daemon.go:1302] Got SIGTERM, but actively updating
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log
2025-08-13T19:54:10.556685779+00:00 stderr F I0813 19:54:10.556542 19129 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:54:10.557371878+00:00 stderr F I0813 19:54:10.557253 19129 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets
2025-08-13T19:54:10.560882359+00:00 stderr F I0813 19:54:10.560852 19129 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin
2025-08-13T19:54:10.564159432+00:00 stderr F I0813 19:54:10.564115 19129 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9
2025-08-13T19:54:10.671602010+00:00 stderr F I0813 19:54:10.671421 19129 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon
2025-08-13T19:54:10.760550110+00:00 stderr F I0813 19:54:10.759528 19129 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:54:10.761339942+00:00 stderr F E0813 19:54:10.761303 19129 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret
2025-08-13T19:54:10.761511917+00:00 stderr F I0813 19:54:10.761490 19129 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json
2025-08-13T19:54:10.973048117+00:00 stderr F I0813 19:54:10.972984 19129 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24
2025-08-13T19:54:10.975268441+00:00 stderr F I0813 19:54:10.975225 19129 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-08-13T19:54:10.976896327+00:00 stderr F I0813 19:54:10.976617 19129 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:54:10.978138493+00:00 stderr F I0813 19:54:10.978067 19129 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:54:11.001263983+00:00 stderr F I0813 19:54:11.001140 19129 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:54:11.004521666+00:00 stderr F I0813 19:54:11.004417 19129 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-08-13T19:54:11.016089966+00:00 stderr F I0813 19:54:11.015988 19129 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:54:11.017386113+00:00 stderr F I0813 19:54:11.016256 19129 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example 
ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:54:11.018157775+00:00 stderr F I0813 19:54:11.018022 19129 update.go:2610] Starting to manage node: crc 2025-08-13T19:54:11.022896291+00:00 stderr F I0813 19:54:11.022232 19129 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-08-13T19:54:11.074973758+00:00 stderr F I0813 19:54:11.074899 19129 daemon.go:1727] State: idle 2025-08-13T19:54:11.074973758+00:00 stderr F Deployments: 2025-08-13T19:54:11.074973758+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:54:11.074973758+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:54:11.074973758+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-08-13T19:54:11.074973758+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-08-13T19:54:11.074973758+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.075414900+00:00 stderr F I0813 19:54:11.075378 19129 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-08-13T19:54:11.075414900+00:00 stderr F { 2025-08-13T19:54:11.075414900+00:00 stderr F "container-image": { 2025-08-13T19:54:11.075414900+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-08-13T19:54:11.075414900+00:00 stderr F "image-labels": { 2025-08-13T19:54:11.075414900+00:00 stderr F "containers.bootc": "1", 2025-08-13T19:54:11.075414900+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-08-13T19:54:11.075414900+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-08-13T19:54:11.075414900+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-08-13T19:54:11.075414900+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.revision": 
"b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.bootable": "true", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-08-13T19:54:11.075414900+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-08-13T19:54:11.075414900+00:00 stderr F }, 2025-08-13T19:54:11.075414900+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-08-13T19:54:11.075414900+00:00 stderr F }, 2025-08-13T19:54:11.075414900+00:00 stderr F "osbuild-version": "114", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:54:11.075414900+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-08-13T19:54:11.075414900+00:00 stderr F "version": "416.94.202405291527-0" 2025-08-13T19:54:11.075414900+00:00 stderr F } 2025-08-13T19:54:11.075551544+00:00 stderr F I0813 19:54:11.075531 19129 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-08-13T19:54:11.075591915+00:00 stderr F I0813 19:54:11.075578 19129 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-08-13T19:54:11.085158798+00:00 stderr F I0813 19:54:11.085050 19129 daemon.go:1736] journalctl --list-boots: 2025-08-13T19:54:11.085158798+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-08-13T19:54:11.085158798+00:00 stderr F -2 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F -1 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F 0 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 19:54:11 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F I0813 19:54:11.085107 19129 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096023 19129 daemon.go:1751] systemd service state: OK 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096062 19129 daemon.go:1327] Starting MachineConfigDaemon 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096159 19129 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-08-13T19:54:12.018003324+00:00 stderr F I0813 19:54:12.017747 19129 daemon.go:647] Node crc is part of the control plane 2025-08-13T19:54:12.039676423+00:00 stderr F I0813 19:54:12.039574 19129 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs7331210 --cleanup 2025-08-13T19:54:12.044998325+00:00 stderr F [2025-08-13T19:54:12Z INFO nmstatectl] Nmstate version: 2.2.29 2025-08-13T19:54:12.045105768+00:00 stdout F 2025-08-13T19:54:12.045116079+00:00 stderr F [2025-08-13T19:54:12Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, 
no need to clean up
2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053225 19129 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful
2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053264 19129 daemon.go:1680] Current+desired config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5
2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053273 19129 daemon.go:1695] state: Done
2025-08-13T19:54:12.053318993+00:00 stderr F I0813 19:54:12.053307 19129 update.go:2595] Running: rpm-ostree cleanup -r
2025-08-13T19:54:12.115456337+00:00 stdout F Deployments unchanged.
2025-08-13T19:54:12.126749860+00:00 stderr F I0813 19:54:12.126665 19129 daemon.go:2096] Validating against current config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5
2025-08-13T19:54:12.127261514+00:00 stderr F I0813 19:54:12.127202 19129 daemon.go:2008] SSH key location ("/home/core/.ssh/authorized_keys.d/ignition") up-to-date!
2025-08-13T19:54:12.425502300+00:00 stderr F W0813 19:54:12.425404 19129 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized
2025-08-13T19:54:12.425502300+00:00 stderr F I0813 19:54:12.425443 19129 rpm-ostree.go:308] Running captured: rpm-ostree kargs
2025-08-13T19:54:12.499133842+00:00 stderr F I0813 19:54:12.498978 19129 update.go:2610] Validated on-disk state
2025-08-13T19:54:12.504581478+00:00 stderr F I0813 19:54:12.504490 19129 daemon.go:2198] Completing update to target MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5
2025-08-13T19:54:22.534493943+00:00 stderr F I0813 19:54:22.534332 19129 update.go:2610] Update completed for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 and node has been successfully uncordoned
2025-08-13T19:54:22.551916871+00:00 stderr F I0813 19:54:22.551858 19129 daemon.go:2223] In desired state MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5
2025-08-13T19:54:22.567300330+00:00 stderr F I0813 19:54:22.567229 19129 config_drift_monitor.go:246] Config Drift Monitor started
2025-08-13T19:55:11.143020151+00:00 stderr F I0813 19:55:11.142956 19129 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 23037
2025-08-13T19:57:10.161621328+00:00 stderr F I0813 19:57:10.160577 19129 daemon.go:1363] Shutting down MachineConfigDaemon
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log
2025-10-13T00:12:49.263846332+00:00 stderr F W1013 00:12:49.263471 5869 deprecated.go:66]
2025-10-13T00:12:49.263846332+00:00 stderr F ==== Removed Flag Warning ======================
2025-10-13T00:12:49.263846332+00:00 stderr F
2025-10-13T00:12:49.263846332+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-10-13T00:12:49.263846332+00:00 stderr F
2025-10-13T00:12:49.263846332+00:00 stderr F ===============================================
2025-10-13T00:12:49.263846332+00:00 stderr F
2025-10-13T00:12:49.263846332+00:00 stderr F I1013 00:12:49.263646 5869 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml
2025-10-13T00:12:49.264920422+00:00 stderr F I1013 00:12:49.264885 5869 kube-rbac-proxy.go:233] Valid token audiences:
2025-10-13T00:12:49.265067806+00:00 stderr F I1013 00:12:49.264941 5869 kube-rbac-proxy.go:347] Reading certificate files
2025-10-13T00:12:49.265641573+00:00 stderr F I1013 00:12:49.265528 5869 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001
2025-10-13T00:12:49.266369014+00:00 stderr F I1013 00:12:49.266352 5869 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log
2025-08-13T19:50:45.604729576+00:00 stderr F W0813 19:50:45.602545 13767 deprecated.go:66]
2025-08-13T19:50:45.604729576+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:50:45.604729576+00:00 stderr F
2025-08-13T19:50:45.604729576+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:50:45.604729576+00:00 stderr F
2025-08-13T19:50:45.604729576+00:00 stderr F ===============================================
2025-08-13T19:50:45.604729576+00:00 stderr F
2025-08-13T19:50:45.604729576+00:00 stderr F I0813 19:50:45.602752 13767 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml
2025-08-13T19:50:45.618370216+00:00 stderr F I0813 19:50:45.617282 13767 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:50:45.618370216+00:00 stderr F I0813 19:50:45.617490 13767 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:50:45.649070553+00:00 stderr F I0813 19:50:45.647523 13767 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001
2025-08-13T19:50:45.662264789+00:00 stderr F I0813 19:50:45.660720 13767 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001
2025-08-13T20:42:46.043998571+00:00 stderr F I0813 20:42:46.043741 13767 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log
2025-08-13T20:07:25.650906231+00:00 stderr F I0813 20:07:25.647078 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc00086f720 max-eligible-revision:0xc00086f4a0 protected-revisions:0xc00086f540 resource-dir:0xc00086f5e0 static-pod-name:0xc00086f680 v:0xc00086fe00] [0xc00086fe00 0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f720 0xc00086f680] [] map[cert-dir:0xc00086f720 help:0xc0008821e0 log-flush-frequency:0xc00086fd60 max-eligible-revision:0xc00086f4a0 protected-revisions:0xc00086f540 resource-dir:0xc00086f5e0 static-pod-name:0xc00086f680 v:0xc00086fe00 vmodule:0xc00086fea0] [0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f680 0xc00086f720 0xc00086fd60 0xc00086fe00 0xc00086fea0 0xc0008821e0] [0xc00086f720 0xc0008821e0 0xc00086fd60 0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f680 0xc00086fe00 0xc00086fea0] map[104:0xc0008821e0 118:0xc00086fe00] [] -1 0 0xc00085e870 true 0x73b100 []}
2025-08-13T20:07:25.650906231+00:00 stderr F I0813 20:07:25.648509 1 cmd.go:41] (*prune.PruneOptions)(0xc00085d8b0)({
2025-08-13T20:07:25.650906231+00:00 stderr F MaxEligibleRevision: (int) 11,
2025-08-13T20:07:25.650906231+00:00 stderr F ProtectedRevisions: ([]int) (len=6 cap=6) {
2025-08-13T20:07:25.650906231+00:00 stderr F (int) 6,
2025-08-13T20:07:25.650906231+00:00 stderr F (int) 7,
2025-08-13T20:07:25.650906231+00:00 stderr F (int) 8,
2025-08-13T20:07:25.650906231+00:00 stderr F (int) 9,
2025-08-13T20:07:25.650906231+00:00 stderr F (int) 10,
2025-08-13T20:07:25.650906231+00:00 stderr F (int) 11
2025-08-13T20:07:25.650906231+00:00 stderr F },
2025-08-13T20:07:25.650906231+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:07:25.650906231+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs",
2025-08-13T20:07:25.650906231+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod"
2025-08-13T20:07:25.650906231+00:00 stderr F })
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log
2025-10-13T00:15:02.217541118+00:00 stderr F I1013 00:15:02.216951 1 cmd.go:241] Using service-serving-cert provided certificates
2025-10-13T00:15:02.221286721+00:00 stderr F I1013 00:15:02.221233 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-10-13T00:15:02.226081424+00:00 stderr F I1013 00:15:02.226048 1 observer_polling.go:159] Starting file observer
2025-10-13T00:15:02.352504732+00:00 stderr F I1013 00:15:02.345681 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b
2025-10-13T00:15:03.160427879+00:00 stderr F I1013 00:15:03.159808 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-10-13T00:15:03.160427879+00:00 stderr F W1013 00:15:03.160281 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-10-13T00:15:03.160427879+00:00 stderr F W1013 00:15:03.160288 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-10-13T00:15:03.169157171+00:00 stderr F I1013 00:15:03.168562 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:03.169157171+00:00 stderr F I1013 00:15:03.168925 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 2025-10-13T00:15:03.176313915+00:00 stderr F I1013 00:15:03.175322 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:03.177178791+00:00 stderr F I1013 00:15:03.176945 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:03.177178791+00:00 stderr F I1013 00:15:03.176963 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:03.177178791+00:00 stderr F I1013 00:15:03.177014 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:03.177178791+00:00 stderr F I1013 00:15:03.177028 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.177178791+00:00 stderr F I1013 00:15:03.177042 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:03.177178791+00:00 stderr F I1013 00:15:03.177046 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.178173431+00:00 stderr F I1013 00:15:03.177887 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:03.186104028+00:00 stderr F I1013 00:15:03.183405 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:03.278049683+00:00 stderr F I1013 00:15:03.277204 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.278049683+00:00 stderr F I1013 00:15:03.277278 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:03.278049683+00:00 stderr F I1013 00:15:03.277384 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:19:38.884612384+00:00 stderr F I1013 00:19:38.883507 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-10-13T00:19:38.891494809+00:00 stderr F I1013 00:19:38.890864 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41807", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_ab29c58b-dcf0-4746-8326-d6930b825507 became leader 2025-10-13T00:19:38.909974728+00:00 stderr F I1013 00:19:38.908108 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:19:38.909974728+00:00 stderr F I1013 00:19:38.909157 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy 
AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:19:38.909974728+00:00 stderr F I1013 00:19:38.909781 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", 
"ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:19:39.019064492+00:00 stderr F I1013 00:19:39.018949 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController 2025-10-13T00:19:39.020454273+00:00 stderr F I1013 00:19:39.020408 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-10-13T00:19:39.023581696+00:00 stderr F I1013 00:19:39.021495 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:19:39.023705090+00:00 stderr F I1013 00:19:39.022208 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:19:39.023745131+00:00 stderr F I1013 00:19:39.022209 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:19:39.023793542+00:00 stderr F I1013 00:19:39.022227 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-10-13T00:19:39.032637425+00:00 stderr F I1013 00:19:39.032570 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-10-13T00:19:39.064071140+00:00 stderr F I1013 00:19:39.063155 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-10-13T00:19:39.132075242+00:00 stderr F I1013 00:19:39.131979 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:19:39.132075242+00:00 stderr F I1013 00:19:39.132020 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:19:39.132075242+00:00 stderr F I1013 00:19:39.132034 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:19:39.132075242+00:00 stderr F I1013 00:19:39.132059 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:19:39.132120224+00:00 stderr F I1013 00:19:39.132062 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:19:39.132120224+00:00 stderr F I1013 00:19:39.132091 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-10-13T00:19:39.132822175+00:00 stderr F I1013 00:19:39.132766 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-10-13T00:19:39.132822175+00:00 stderr F I1013 00:19:39.132783 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 2025-10-13T00:19:39.133012690+00:00 stderr F I1013 00:19:39.132953 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-10-13T00:19:39.133012690+00:00 stderr F I1013 00:19:39.132975 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 2025-10-13T00:19:39.164134766+00:00 stderr F I1013 00:19:39.164060 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-10-13T00:19:39.164217818+00:00 stderr F I1013 00:19:39.164198 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 
2025-10-13T00:19:39.170691621+00:00 stderr F I1013 00:19:39.169548 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:39.219937245+00:00 stderr F I1013 00:19:39.219871 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-10-13T00:19:39.220026088+00:00 stderr F I1013 00:19:39.220010 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.938199 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.938149758 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950478 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.950422458 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950513 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.95049176 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950538 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.95052185 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950561 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950545961 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950580 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950567252 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950600 1 tlsconfig.go:178] "Loaded 
client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950585932 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950633 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950618093 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950654 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.950639694 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950681 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.950667364 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950707 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.950690695 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.950728 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950712926 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 00:21:11.951122 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:14 +0000 UTC to 2027-08-13 20:00:15 +0000 UTC (now=2025-10-13 00:21:11.951100116 +0000 UTC))" 2025-10-13T00:21:11.952227186+00:00 stderr F I1013 
00:21:11.951492 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314503\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:21:11.951444215 +0000 UTC))" 2025-10-13T00:22:38.914621487+00:00 stderr F E1013 00:22:38.914041 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager-operator/openshift-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.209037668+00:00 stderr P E1013 00:22:39.208886 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:39.209127340+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:39.253109125+00:00 stderr P E1013 00:22:39.253025 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:39.253181637+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:39.509182928+00:00 stderr P E1013 00:22:39.509071 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:39.509259760+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:40.289175495+00:00 stderr P E1013 00:22:40.289022 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:40.289280998+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:41.073501913+00:00 stderr P E1013 00:22:41.073028 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:41.073554004+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-10-13T00:22:41.849741044+00:00 stderr P E1013 00:22:41.849611 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:41.849810866+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:42.631696956+00:00 stderr P E1013 00:22:42.631579 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:42.631776359+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:43.409963585+00:00 stderr P E1013 00:22:43.409654 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:43.410051767+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:44.190509477+00:00 stderr P E1013 00:22:44.190386 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:44.190936919+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:45.539473332+00:00 stderr P E1013 00:22:45.539311 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:45.539579015+00:00 
stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:48.133724506+00:00 stderr P E1013 00:22:48.133112 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:48.133779308+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:53.290912750+00:00 stderr P E1013 00:22:53.290334 1 
base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-10-13T00:22:53.290971661+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:23:27.864356155+00:00 stderr F I1013 00:23:27.863749 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:46.434453314+00:00 stderr F I1013 00:23:46.433738 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:46.834626291+00:00 stderr F I1013 00:23:46.834272 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000037400000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-m0000644000175000017500000011407115073043234033074 0ustar zuulzuul2025-08-13T20:05:39.477713001+00:00 stderr F I0813 20:05:39.477011 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:39.480944024+00:00 stderr F I0813 20:05:39.480813 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:39.482761686+00:00 stderr F I0813 20:05:39.482640 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:39.616820695+00:00 stderr F I0813 20:05:39.612634 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2025-08-13T20:05:39.971388769+00:00 stderr F I0813 20:05:39.970377 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:39.971388769+00:00 stderr F W0813 20:05:39.970434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:39.971388769+00:00 stderr F W0813 20:05:39.970443 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:39.979965674+00:00 stderr F I0813 20:05:39.978309 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:39.986559793+00:00 stderr F I0813 20:05:39.986511 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 
2025-08-13T20:05:39.991060022+00:00 stderr F I0813 20:05:39.991028 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:39.991487234+00:00 stderr F I0813 20:05:39.991445 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:39.991591387+00:00 stderr F I0813 20:05:39.991531 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:39.992067701+00:00 stderr F I0813 20:05:39.992036 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:39.993505212+00:00 stderr F I0813 20:05:39.993450 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:39.993673207+00:00 stderr F I0813 20:05:39.992697 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:39.993687587+00:00 stderr F I0813 20:05:39.993668 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:39.993845892+00:00 stderr F I0813 20:05:39.992951 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:39.993888643+00:00 stderr F I0813 20:05:39.993837 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:40.095002829+00:00 stderr F I0813 20:05:40.094054 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:40.095197894+00:00 stderr F I0813 20:05:40.095170 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:40.095346198+00:00 stderr F I0813 20:05:40.095286 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:08:48.315010930+00:00 stderr F E0813 20:08:48.305568 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager-operator/openshift-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:10:59.203663912+00:00 stderr F I0813 20:10:59.202174 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-08-13T20:10:59.208330496+00:00 stderr F I0813 20:10:59.205651 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33251", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_97a869b3-b1e5-4caf-ad32-912e51043cee became leader 2025-08-13T20:10:59.232052376+00:00 stderr F I0813 20:10:59.231988 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:10:59.243593237+00:00 stderr F I0813 20:10:59.243456 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:10:59.244106501+00:00 stderr F I0813 20:10:59.243119 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding 
ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:10:59.298685346+00:00 stderr F I0813 20:10:59.298610 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController 2025-08-13T20:10:59.309429584+00:00 stderr F I0813 20:10:59.309371 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-08-13T20:10:59.309539297+00:00 stderr F I0813 20:10:59.309523 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-08-13T20:10:59.310213146+00:00 stderr F I0813 20:10:59.310182 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:10:59.310296649+00:00 stderr F I0813 20:10:59.310279 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:10:59.310347910+00:00 stderr F I0813 20:10:59.310335 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-08-13T20:10:59.310550696+00:00 stderr F I0813 20:10:59.310525 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-08-13T20:10:59.310699400+00:00 stderr F I0813 20:10:59.310676 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:10:59.411721487+00:00 stderr F I0813 20:10:59.411566 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:10:59.411721487+00:00 stderr F I0813 20:10:59.411639 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:10:59.411844520+00:00 stderr F I0813 20:10:59.411737 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-08-13T20:10:59.411844520+00:00 stderr F I0813 20:10:59.411745 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 
2025-08-13T20:10:59.412006155+00:00 stderr F I0813 20:10:59.411946 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.418689607+00:00 stderr F I0813 20:10:59.418610 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.431131613+00:00 stderr F I0813 20:10:59.430873 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.442558351+00:00 stderr F I0813 20:10:59.442445 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.443436006+00:00 stderr F I0813 20:10:59.443371 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.448866902+00:00 stderr F I0813 20:10:59.448819 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.455659307+00:00 stderr F I0813 20:10:59.455614 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.459617080+00:00 stderr F I0813 20:10:59.458001 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.469865814+00:00 stderr F I0813 20:10:59.469573 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.510179590+00:00 stderr F I0813 20:10:59.510092 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-08-13T20:10:59.510280923+00:00 stderr F I0813 20:10:59.510258 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511488 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511529 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511581 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511589 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511616 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511629 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T20:10:59.552331418+00:00 stderr F I0813 20:10:59.552188 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.563880430+00:00 stderr F I0813 20:10:59.563622 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f6a079a2c81073c36b72bd771673556c9f87406689988ef3e993138968845bcc"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"30489"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:10:59.587185468+00:00 stderr F I0813 20:10:59.585561 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:10:59.596895606+00:00 stderr F I0813 20:10:59.596737 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"a8b1e7668d4b183445825c8d9d8daba3bc69d22add1b4a1e25e5081d7b9c2cd7"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"30534"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:10:59.601860628+00:00 stderr F I0813 20:10:59.601714 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-08-13T20:10:59.602059954+00:00 stderr F I0813 20:10:59.602030 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 
2025-08-13T20:10:59.614457720+00:00 stderr F I0813 20:10:59.614278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:10:59.650180144+00:00 stderr F I0813 20:10:59.650071 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:59.677105436+00:00 stderr F I0813 20:10:59.676700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14.",Available changed from False to True ("All is well") 2025-08-13T20:10:59.996206105+00:00 stderr F I0813 20:10:59.996116 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:11:00.047997790+00:00 stderr F I0813 20:11:00.047867 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14." to "Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no route controller manager deployment pods available on any node.") 2025-08-13T20:11:00.317103545+00:00 stderr F I0813 20:11:00.316543 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:11:00.340647020+00:00 stderr F I0813 20:11:00.339734 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no route controller manager deployment pods available on any node." to "Available: no pods available on any node." 
2025-08-13T20:11:19.410645645+00:00 stderr F I0813 20:11:19.409794 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:11:19.441531050+00:00 stderr F I0813 20:11:19.436405 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.420037 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430567 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430625 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430641 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430655 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430667 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430679 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430691 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469924708+00:00 stderr F I0813 20:42:36.430703 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.640999140+00:00 stderr F I0813 20:42:36.430724 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.641265608+00:00 stderr F I0813 20:42:36.430733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.641460023+00:00 stderr F I0813 20:42:36.430743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.642325258+00:00 stderr F I0813 20:42:36.430752 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.644339656+00:00 stderr F I0813 20:42:36.422097 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.644450870+00:00 stderr F I0813 20:42:36.430828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430849 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430860 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430891 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430908 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430919 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645688405+00:00 stderr F I0813 20:42:36.430950 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.645845310+00:00 stderr F I0813 20:42:36.430960 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646290823+00:00 stderr F I0813 20:42:36.430970 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646290823+00:00 stderr F I0813 20:42:36.430981 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646484298+00:00 stderr F I0813 20:42:36.430991 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646500289+00:00 stderr F I0813 20:42:36.431008 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646677554+00:00 stderr F I0813 20:42:36.431018 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646696514+00:00 stderr F I0813 20:42:36.431038 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.646708345+00:00 stderr F I0813 20:42:36.431049 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.651420531+00:00 stderr F I0813 20:42:36.431059 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.651566525+00:00 stderr F I0813 20:42:36.431071 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.652165712+00:00 stderr F I0813 20:42:36.431081 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.652720058+00:00 stderr F I0813 20:42:36.431096 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.652883633+00:00 stderr F I0813 20:42:36.431111 1 streamwatcher.go:111] 
Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431122 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431132 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431147 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.655889539+00:00 stderr F I0813 20:42:36.431164 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.656338182+00:00 stderr F I0813 20:42:36.422033 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.657733093+00:00 stderr F I0813 20:42:36.422154 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.659246806+00:00 stderr F I0813 20:42:36.422194 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.659583316+00:00 stderr F I0813 20:42:36.422249 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.377512944+00:00 stderr F I0813 20:42:40.375202 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.377512944+00:00 stderr F I0813 20:42:40.376636 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.377736721+00:00 stderr F I0813 20:42:40.377656 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379633 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379656 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379685 1 base_controller.go:172] Shutting down ImagePullSecretCleanupController ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379689 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ... 2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379697 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:40.379735989+00:00 stderr F I0813 20:42:40.379712 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.379735989+00:00 stderr F I0813 20:42:40.379719 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.380063278+00:00 stderr F I0813 20:42:40.379990 1 base_controller.go:172] Shutting down UserCAObservationController ... 2025-08-13T20:42:40.380163951+00:00 stderr F I0813 20:42:40.380116 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.380163951+00:00 stderr F I0813 20:42:40.380153 1 base_controller.go:114] Shutting down worker of ImagePullSecretCleanupController controller ... 2025-08-13T20:42:40.380180221+00:00 stderr F I0813 20:42:40.380162 1 base_controller.go:104] All ImagePullSecretCleanupController workers have been terminated 2025-08-13T20:42:40.380191982+00:00 stderr F I0813 20:42:40.380171 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:42:40.380206042+00:00 stderr F I0813 20:42:40.380196 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:40.380252273+00:00 stderr F I0813 20:42:40.380210 1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ... 2025-08-13T20:42:40.380378737+00:00 stderr F I0813 20:42:40.380219 1 base_controller.go:104] All UserCAObservationController workers have been terminated 2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.379696 1 base_controller.go:150] All StatusSyncer_openshift-controller-manager post start hooks have been terminated 2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.381115 1 base_controller.go:114] Shutting down worker of OpenshiftControllerManagerStaticResources controller ... 2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.381125 1 base_controller.go:104] All OpenshiftControllerManagerStaticResources workers have been terminated 2025-08-13T20:42:40.381155830+00:00 stderr F I0813 20:42:40.381134 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ... 2025-08-13T20:42:40.381155830+00:00 stderr F I0813 20:42:40.381140 1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated 2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.380217 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.381248 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.381248 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.381904751+00:00 stderr F I0813 20:42:40.380269 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator 2025-08-13T20:42:40.383730464+00:00 stderr F I0813 20:42:40.382525 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.383730464+00:00 stderr F W0813 20:42:40.382565 1 builder.go:131] graceful termination failed, controllers failed with error: stopped home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log0000644000175000017500000052140215073043234033074 0ustar zuulzuul2025-08-13T19:59:22.501014350+00:00 stderr F I0813 19:59:22.498949 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:59:22.518155678+00:00 stderr F I0813 19:59:22.517397 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T19:59:22.740731243+00:00 stderr F I0813 19:59:22.705993 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:24.642256226+00:00 stderr F I0813 19:59:24.638870 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2025-08-13T19:59:34.801442426+00:00 stderr F I0813 19:59:34.794211 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:34.801442426+00:00 stderr F W0813 19:59:34.799728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:34.801442426+00:00 stderr F W0813 19:59:34.799743 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:34.853910601+00:00 stderr F I0813 19:59:34.852730 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.930903836+00:00 stderr F I0813 19:59:34.924662 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 2025-08-13T19:59:35.156718343+00:00 stderr F I0813 19:59:35.156653 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:35.274368056+00:00 stderr F I0813 19:59:35.237965 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:35.274496450+00:00 stderr F I0813 19:59:35.198935 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:35.274530241+00:00 stderr F I0813 19:59:35.199918 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:35.274556002+00:00 stderr F I0813 19:59:35.204331 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.274581893+00:00 stderr F I0813 19:59:35.253733 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:35.284174546+00:00 stderr F I0813 19:59:35.284131 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.352969556+00:00 stderr F I0813 19:59:35.296585 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.596705014+00:00 stderr F I0813 19:59:35.309268 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:35.605019651+00:00 stderr F I0813 19:59:35.584214 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:35.834367478+00:00 stderr F I0813 19:59:35.584244 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.843013895+00:00 stderr F I0813 19:59:35.842575 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.843500579+00:00 stderr F E0813 19:59:35.843458 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing 
content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.843619232+00:00 stderr F E0813 19:59:35.843597 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.852731992+00:00 stderr F E0813 19:59:35.852661 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.879277559+00:00 stderr F E0813 19:59:35.876410 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.879277559+00:00 stderr F E0813 19:59:35.876472 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.887279527+00:00 stderr F E0813 19:59:35.887219 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.904343823+00:00 stderr F E0813 19:59:35.904291 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.912429184+00:00 stderr F E0813 19:59:35.912348 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.959033952+00:00 stderr F E0813 19:59:35.958973 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.959146965+00:00 stderr F E0813 19:59:35.959131 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.040763242+00:00 stderr F E0813 19:59:36.040676 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.054607717+00:00 stderr F E0813 19:59:36.054573 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.232656712+00:00 stderr F E0813 19:59:36.232421 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.331167670+00:00 stderr F I0813 19:59:36.329754 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-08-13T19:59:36.344607243+00:00 stderr F I0813 19:59:36.344427 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28188", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_9d027477-b136-47ef-a304-2dd35bc9cd4d became leader 2025-08-13T19:59:36.434342421+00:00 stderr F E0813 19:59:36.434276 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.557910153+00:00 stderr F E0813 19:59:36.555387 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.789066893+00:00 stderr F E0813 19:59:36.782173 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.792564442+00:00 stderr F I0813 19:59:36.792518 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:36.888852057+00:00 stderr F I0813 19:59:36.847317 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:36.995937550+00:00 stderr F I0813 19:59:36.848765 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", 
"BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:37.271882766+00:00 stderr F E0813 19:59:37.268698 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:37.436072326+00:00 stderr F E0813 19:59:37.422409 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:37.628636835+00:00 stderr F I0813 19:59:37.627170 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController 2025-08-13T19:59:37.691184378+00:00 stderr F I0813 19:59:37.674414 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:38.100083794+00:00 stderr F I0813 19:59:38.097705 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-08-13T19:59:39.611923168+00:00 stderr F E0813 19:59:39.610157 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.611923168+00:00 stderr F E0813 19:59:39.610561 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899051 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899362 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-08-13T19:59:39.904962902+00:00 
stderr F I0813 19:59:39.899512 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.900209 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.901615 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-08-13T19:59:42.993629804+00:00 stderr F E0813 19:59:42.978620 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.993629804+00:00 stderr F E0813 19:59:42.993327 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.024014960+00:00 stderr F I0813 19:59:43.021103 1 trace.go:236] Trace[411137344]: "DeltaFIFO Pop Process" ID:default,Depth:63,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.802) (total time: 214ms): 2025-08-13T19:59:43.024014960+00:00 stderr F Trace[411137344]: [214.00944ms] [214.00944ms] END 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.032975 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033125 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033291 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033300 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.107927 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.108215 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.109536 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.124359101+00:00 stderr F I0813 19:59:43.123596 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.219533754+00:00 stderr F I0813 19:59:43.217513 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.361143030+00:00 stderr F I0813 19:59:43.355828 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.368662565+00:00 stderr F I0813 19:59:43.368109 1 trace.go:236] Trace[1825624301]: "DeltaFIFO Pop Process" ID:cluster-admin,Depth:188,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.255) (total time: 112ms): 2025-08-13T19:59:43.368662565+00:00 stderr F Trace[1825624301]: [112.148267ms] [112.148267ms] END 2025-08-13T19:59:43.477358833+00:00 stderr F I0813 19:59:43.476446 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:43.637191349+00:00 stderr F I0813 19:59:43.636024 1 trace.go:236] Trace[430823319]: "DeltaFIFO Pop Process" ID:multus-group,Depth:146,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.432) (total time: 202ms): 2025-08-13T19:59:43.637191349+00:00 stderr F Trace[430823319]: [202.973466ms] [202.973466ms] END 2025-08-13T19:59:45.230277591+00:00 stderr F I0813 19:59:45.131770 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:45.264485136+00:00 stderr F I0813 19:59:45.235273 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:46.966283855+00:00 stderr F I0813 19:59:46.929670 1 trace.go:236] Trace[1910610313]: "DeltaFIFO Pop Process" ID:aggregate-olm-edit,Depth:223,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.235) (total time: 1693ms): 2025-08-13T19:59:46.966283855+00:00 stderr F Trace[1910610313]: [1.693771071s] [1.693771071s] END 2025-08-13T19:59:47.040871171+00:00 stderr F I0813 19:59:46.973674 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.190433995+00:00 stderr F I0813 19:59:47.190257 1 trace.go:236] Trace[287803092]: "DeltaFIFO Pop Process" ID:net-attach-def-project,Depth:179,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:47.076) (total time: 113ms): 2025-08-13T19:59:47.190433995+00:00 stderr F Trace[287803092]: [113.420843ms] [113.420843ms] END 2025-08-13T19:59:47.267690217+00:00 stderr F I0813 19:59:47.079030 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:47.304922349+00:00 stderr F I0813 19:59:47.293440 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:47.340553874+00:00 stderr F I0813 19:59:47.340491 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.342163810+00:00 stderr F I0813 19:59:47.341746 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401080 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401137 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401095 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401182 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.409057 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.409075 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 2025-08-13T19:59:48.127951600+00:00 stderr F E0813 19:59:48.115094 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:48.128310830+00:00 stderr F E0813 19:59:48.128272 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.115961985+00:00 stderr P I0813 19:59:49.109919 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ 2025-08-13T19:59:49.116028847+00:00 stderr F /NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:49.116028847+00:00 stderr F I0813 19:59:49.111960 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T19:59:49.116028847+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:49.865857840+00:00 stderr P I0813 19:59:49.863221 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/ 2025-08-13T19:59:49.865993994+00:00 stderr F 
iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:49.899495549+00:00 stderr F I0813 19:59:49.899305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T19:59:49.899495549+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.414174951+00:00 stderr F I0813 19:59:50.410733 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"1accb1ac836aa33f8208a614d8657e2894b298fdc84501b2ced8f0aea7081a7e"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28451"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T19:59:50.764362733+00:00 stderr F I0813 19:59:50.763635 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T19:59:51.005057045+00:00 stderr F I0813 19:59:51.002574 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"bd022aaa08f7a35114f39954475616ebb6669ca5cfd704016d15dc0be2736d06"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28474"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.802982 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.802735174 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803080 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.803061153 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803107 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.803090464 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803124 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 
19:59:51.803113735 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803143 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803131585 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803214 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803148676 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803326 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803221628 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803352 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803334771 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803370 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803359372 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.803714 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:51.803696981 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.865124 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 19:59:51.865080941 +0000 
UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.862192 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T19:59:51.908679914+00:00 stderr F I0813 19:59:51.908319 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.226393681+00:00 stderr F I0813 19:59:52.224708 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:52Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.309720466+00:00 stderr F I0813 19:59:52.307397 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.",Available changed from False to True ("All is well") 2025-08-13T19:59:52.632061214+00:00 stderr F I0813 19:59:52.630311 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:52Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.011012106+00:00 stderr F I0813 19:59:52.950889 1 trace.go:236] Trace[587110061]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:37.636) (total time: 15314ms): 2025-08-13T19:59:53.011012106+00:00 stderr F Trace[587110061]: ---"Objects listed" error: 15314ms (19:59:52.950) 2025-08-13T19:59:53.011012106+00:00 stderr F Trace[587110061]: [15.31460899s] [15.31460899s] END 2025-08-13T19:59:53.011402667+00:00 stderr F I0813 19:59:53.011356 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.012131518+00:00 stderr F E0813 19:59:52.962146 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.033095616+00:00 stderr F I0813 19:59:53.033026 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-08-13T19:59:53.039407866+00:00 stderr F I0813 19:59:53.039368 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-08-13T19:59:53.294618370+00:00 stderr F I0813 19:59:53.294426 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"MfAL0g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T19:59:53.309522275+00:00 stderr F I0813 19:59:53.295151 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T19:59:53.309522275+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2025-08-13T19:59:53.404664057+00:00 stderr F I0813 19:59:53.402032 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"MfAL0g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T19:59:53.404664057+00:00 stderr F I0813 19:59:53.402516 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T19:59:53.404664057+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T19:59:53.610534275+00:00 stderr F I0813 19:59:53.610241 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"9560e1ebfed279fa4b00d6622d2d0c7548ddaafda3bf7adb90c2675e98237adc"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"28609"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T19:59:53.839948774+00:00 stderr F I0813 19:59:53.792171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T19:59:53.839948774+00:00 stderr F I0813 19:59:53.817658 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"55202ddf9b00b1660ea2cb5f3c4a6bcc3fe4ffc25c2e72e085bdaf7de2334698"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"28614"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T19:59:53.899455651+00:00 stderr F I0813 19:59:53.858690 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T19:59:54.035239521+00:00 stderr F I0813 19:59:54.035105 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.195940292+00:00 stderr F I0813 19:59:54.195299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." to "Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") 2025-08-13T19:59:56.033043570+00:00 stderr F I0813 19:59:56.025824 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:56.508439492+00:00 stderr F I0813 19:59:56.501318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" 2025-08-13T19:59:57.112488860+00:00 stderr F I0813 19:59:57.111042 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available 
replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.287965542+00:00 stderr F E0813 19:59:57.279216 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757458 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.757412947 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757647 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.757632283 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757667 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.757654564 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757719 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.757672245 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757753 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.757728736 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773048 1 
tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77295724 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773142 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773125205 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773167 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773149976 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.773176797 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773207 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773196747 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773564 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.773546847 +0000 UTC))" 2025-08-13T20:00:05.829599936+00:00 stderr F I0813 20:00:05.828483 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:05.828419362 +0000 UTC))" 2025-08-13T20:00:08.126868879+00:00 stderr P I0813 20:00:08.115317 1 core.go:341] ConfigMap 
"openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IB 2025-08-13T20:00:08.126971252+00:00 stderr F 
DwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.143879924+00:00 stderr F I0813 20:00:08.143169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T20:00:08.143879924+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.734625589+00:00 stderr P I0813 20:00:08.729751 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END 
CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQ 2025-08-13T20:00:08.734699641+00:00 stderr F 
UAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.735633667+00:00 stderr F I0813 20:00:08.735589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T20:00:08.735633667+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.868503696+00:00 stderr F I0813 20:00:08.868378 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"fc47afb69400ec93f9f5988897f8473ca4c4bdea046fab70a0e165e5b4cc33c1"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28980"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:00:09.328665487+00:00 stderr F I0813 20:00:09.303747 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:00:09.761062066+00:00 stderr F I0813 20:00:09.759946 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"2997ec2c10eb5744ab5a4358602bc88f99f56641a9b7132faff9749e82773557"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28996"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:00:10.271157441+00:00 stderr F I0813 20:00:10.267309 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:00:10.956011969+00:00 stderr F I0813 20:00:10.955606 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:10Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:11.317181517+00:00 stderr F I0813 20:00:11.315079 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: 
Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11.",Available changed from False to True ("All is well") 2025-08-13T20:00:20.807714778+00:00 stderr F I0813 20:00:20.797011 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"nkDd2A==","openshift-controller-manager.serving-cert.secret":"inPi3w=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:20.807714778+00:00 stderr F I0813 20:00:20.799219 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T20:00:20.807714778+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap,data.openshift-controller-manager.serving-cert.secret 2025-08-13T20:00:22.570906954+00:00 stderr F I0813 20:00:22.561663 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"nkDd2A=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:22.570906954+00:00 stderr F I0813 20:00:22.563114 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:00:22.570906954+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T20:00:23.296960378+00:00 stderr F I0813 20:00:23.294715 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"6bcc007c303cd189832d7956655ea4a765a39149dbd0e31991e8b4f86ad92eeb"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"29406","configmaps/openshift-service-ca":"29219"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:00:23.454507270+00:00 stderr F I0813 20:00:23.453199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:00:23.804053877+00:00 stderr F I0813 20:00:23.802669 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"d2cfbecabf7957093938fbbf20602fed627c8dcf88c7baa3039d1aa76706feb6"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"29435"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:00:24.288062459+00:00 stderr F I0813 20:00:24.286413 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:00:24.762012823+00:00 stderr F I0813 20:00:24.761237 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:24.887286025+00:00 stderr F I0813 20:00:24.885987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11." to "Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") 2025-08-13T20:00:40.856666175+00:00 stderr F I0813 20:00:40.845673 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:40.990130500+00:00 stderr F I0813 20:00:40.969350 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node." 
2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049126 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.048880262 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049726 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.049707486 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049761 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049732806 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050027 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049766497 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050108 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050042495 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050128 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050114927 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050146 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050134378 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050166 1 
tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050152008 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050184 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.050172309 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050252 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.05019623 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050294 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050261171 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050992 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:01:00.050970852 +0000 UTC))" 2025-08-13T20:01:00.053948177+00:00 stderr F I0813 20:01:00.051276 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:01:00.05126006 +0000 UTC))" 2025-08-13T20:01:08.673277712+00:00 stderr P I0813 20:01:08.666614 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1Nj 2025-08-13T20:01:08.674489597+00:00 stderr F 
YwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:01:08.713898331+00:00 stderr F I0813 20:01:08.711258 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T20:01:08.713898331+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:09.637633520+00:00 stderr F I0813 20:01:09.637216 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.serving-cert.secret":"6wVDCg=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:09.654632895+00:00 stderr F I0813 20:01:09.637763 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:01:09.654632895+00:00 stderr F cause by changes in data.openshift-route-controller-manager.serving-cert.secret 2025-08-13T20:01:10.338074073+00:00 stderr P I0813 20:01:10.337043 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUx 2025-08-13T20:01:10.338280109+00:00 stderr F MTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:01:10.344929248+00:00 stderr F I0813 20:01:10.344119 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T20:01:10.344929248+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:14.401913999+00:00 stderr F I0813 20:01:14.394388 1 apps.go:154] Deployment 
"openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f04f287b763c8ca46f7254ec00d7f77f509cbf1bfd94b52bf4b4d93869c665c0"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"30255"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:01:17.951922354+00:00 stderr F I0813 20:01:17.951238 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:01:18.815583180+00:00 stderr F I0813 20:01:18.814730 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"b9a81854272e0c5424d4034a7ee3633ceff604e77368302720feb1f03d857755"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"30328","configmaps/config":"30299"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:01:20.174150267+00:00 stderr F I0813 20:01:20.173618 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:01:21.295577804+00:00 stderr F I0813 20:01:21.295110 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:21.648275290+00:00 stderr F I0813 20:01:21.638715 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", 
UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" 2025-08-13T20:01:23.289898379+00:00 stderr F I0813 20:01:23.275156 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"d_XbdQ=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:23.304622698+00:00 stderr F I0813 20:01:23.293916 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T20:01:23.304622698+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2025-08-13T20:01:28.280990974+00:00 stderr F I0813 20:01:28.280362 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.281032975+00:00 stderr F I0813 20:01:28.280998 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:28.281407776+00:00 stderr F I0813 20:01:28.281358 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.282386073+00:00 stderr F I0813 20:01:28.282302 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:28.28226776 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282362 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:28.282335452 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282384 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.282369833 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282414 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.282402004 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282431 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282420344 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282451 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282438615 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282474 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282456705 +0000 UTC))" 2025-08-13T20:01:28.282513407+00:00 stderr F I0813 20:01:28.282495 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282482476 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282523 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:28.282501107 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282564 1 tlsconfig.go:178] "Loaded client CA" 
index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:28.282548048 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282589 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282577829 +0000 UTC))" 2025-08-13T20:01:28.289939799+00:00 stderr F I0813 20:01:28.283343 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:14 +0000 UTC to 2027-08-13 20:00:15 +0000 UTC (now=2025-08-13 20:01:28.28332429 +0000 UTC))" 2025-08-13T20:01:28.289939799+00:00 stderr F I0813 20:01:28.284285 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:01:28.284143234 +0000 UTC))" 2025-08-13T20:01:31.519181458+00:00 stderr F I0813 20:01:31.515303 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"d_XbdQ=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:31.520220907+00:00 stderr F I0813 20:01:31.520148 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:01:31.520220907+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T20:01:32.710407505+00:00 stderr F I0813 20:01:32.709976 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="f4b72f648a02bf4d745720b461c43dc88e5b533156c427b7905f426178ca53a1", new="d241a06236d5f1f5f86885717c7d346103e02b5d1ed9dcf4c19f7f338250fbcb") 2025-08-13T20:01:32.711219518+00:00 stderr F W0813 20:01:32.710474 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.710576 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9fa7e5fbef9e286ed42003219ce81736b0a30e8ce2f7dd520c0c149b834fa6a0", 
new="db6902c5c5fee4f9a52663b228002d42646911159d139a2d4d9110064da348fd") 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.710987 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.711074 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.711163 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:32.711678161+00:00 stderr F I0813 20:01:32.711622 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ... 2025-08-13T20:01:32.711678161+00:00 stderr F I0813 20:01:32.711623 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ... 2025-08-13T20:01:32.711946829+00:00 stderr F I0813 20:01:32.711872 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator 2025-08-13T20:01:32.712037742+00:00 stderr F I0813 20:01:32.711949 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:32.712098493+00:00 stderr F I0813 20:01:32.711995 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:32.713875504+00:00 stderr F I0813 20:01:32.712115 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:32.713875504+00:00 stderr F W0813 20:01:32.712173 1 builder.go:131] graceful termination failed, controllers failed with error: stopped ././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043233033023 5ustar zuulzuul././@LongLink0000644000000000000000000000035300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043233033023 5ustar zuulzuul././@LongLink0000644000000000000000000000036000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000035733115073043233033041 0ustar zuulzuul2025-08-13T20:05:30.576738510+00:00 stderr F I0813 20:05:30.564427 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:30.576738510+00:00 stderr F I0813 20:05:30.572354 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:05:30.585268894+00:00 stderr F I0813 20:05:30.583436 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:30.862237025+00:00 stderr F I0813 20:05:30.862163 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6 2025-08-13T20:05:31.764230725+00:00 stderr F I0813 20:05:31.762474 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:31.764230725+00:00 stderr F W0813 20:05:31.762888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:31.764230725+00:00 stderr F W0813 20:05:31.762898 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:31.771968887+00:00 stderr F I0813 20:05:31.771303 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:31.771968887+00:00 stderr F I0813 20:05:31.771722 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock... 2025-08-13T20:05:31.842474786+00:00 stderr F I0813 20:05:31.841086 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:31.844878425+00:00 stderr F I0813 20:05:31.844819 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:31.845188073+00:00 stderr F I0813 20:05:31.844915 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:31.846711397+00:00 stderr F I0813 20:05:31.846567 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:31.850923628+00:00 stderr F I0813 20:05:31.846618 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:31.851387891+00:00 stderr F I0813 20:05:31.846641 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:31.851501344+00:00 stderr F I0813 20:05:31.851481 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:31.856524428+00:00 stderr F I0813 20:05:31.856485 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:31.856905299+00:00 stderr F I0813 20:05:31.856602 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:31.873882465+00:00 stderr F I0813 20:05:31.873732 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2025-08-13T20:05:31.875052119+00:00 stderr F I0813 20:05:31.875001 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31332", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_e7b654d9-77cd-448f-885c-1fad32cba9ad became leader 
2025-08-13T20:05:31.893305981+00:00 stderr F I0813 20:05:31.893203 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:31.945610709+00:00 stderr F I0813 20:05:31.945524 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:31.958186119+00:00 stderr F I0813 20:05:31.958120 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.954502 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", 
"InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.962226 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.962346 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:32.186166178+00:00 stderr F I0813 20:05:32.181264 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:05:32.207337004+00:00 stderr F I0813 20:05:32.206298 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:32.209975339+00:00 stderr F I0813 20:05:32.208241 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:05:32.225986218+00:00 stderr F I0813 20:05:32.224997 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.237841 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238049 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238171 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238193 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238208 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238241 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238265 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238281 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238297 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:05:32.260341982+00:00 stderr F I0813 20:05:32.260284 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2025-08-13T20:05:32.260746383+00:00 stderr F I0813 20:05:32.260726 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T20:05:32.261051902+00:00 stderr F I0813 20:05:32.261025 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:05:32.261267278+00:00 stderr F I0813 20:05:32.261248 1 base_controller.go:67] Waiting for caches to sync for 
BackingResourceController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541429 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541471 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541501 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541506 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.542500 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.542511 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:05:32.550244423+00:00 stderr F I0813 20:05:32.550159 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:32.550244423+00:00 stderr F I0813 20:05:32.550199 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:32.550411688+00:00 stderr F I0813 20:05:32.550356 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:05:32.550952684+00:00 stderr F I0813 20:05:32.550889 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.582093436+00:00 stderr F I0813 20:05:32.581994 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:05:32.582234650+00:00 stderr F I0813 20:05:32.582216 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:05:32.609923302+00:00 stderr F I0813 20:05:32.603422 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.641180047+00:00 stderr F I0813 20:05:32.639060 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:05:32.641180047+00:00 stderr F I0813 20:05:32.639217 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:05:32.648563229+00:00 stderr F I0813 20:05:32.648487 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.649318871+00:00 stderr F I0813 20:05:32.649295 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:05:32.649368642+00:00 stderr F I0813 20:05:32.649354 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T20:05:32.649417263+00:00 stderr F I0813 20:05:32.649404 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:05:32.649448994+00:00 stderr F I0813 20:05:32.649437 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.650958 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.679140 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.695711 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.679216 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.696984 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 2025-08-13T20:05:32.738932587+00:00 stderr F I0813 20:05:32.738728 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:05:32.738932587+00:00 stderr F I0813 20:05:32.738891 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T20:05:32.751229279+00:00 stderr F I0813 20:05:32.750144 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:05:32.751229279+00:00 stderr F I0813 20:05:32.750230 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:05:32.814209622+00:00 stderr F I0813 20:05:32.814019 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.995206675+00:00 stderr F I0813 20:05:32.986392 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:33.056409658+00:00 stderr F I0813 20:05:33.056318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:33.072895720+00:00 stderr F I0813 20:05:33.072733 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:05:33.072895720+00:00 stderr F I0813 20:05:33.072839 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:05:33.119484604+00:00 stderr F I0813 20:05:33.119347 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:33.119484604+00:00 stderr F I0813 20:05:33.119392 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:05:33.124444846+00:00 stderr F I0813 20:05:33.120767 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:05:33.124444846+00:00 stderr F I0813 20:05:33.123645 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T20:05:33.127741881+00:00 stderr F I0813 20:05:33.127651 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:05:33.127984328+00:00 stderr F I0813 20:05:33.127958 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:05:33.193213226+00:00 stderr F I0813 20:05:33.193157 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:33.261045287+00:00 stderr F I0813 20:05:33.260983 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T20:05:33.261153770+00:00 stderr F I0813 20:05:33.261137 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T20:05:33.780178083+00:00 stderr F I0813 20:05:33.779760 1 request.go:697] Waited for 1.082746184s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc 2025-08-13T20:05:35.270288154+00:00 stderr F I0813 20:05:35.269500 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:05:35.270288154+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:35.270288154+00:00 stderr F CurrentRevision: (int32) 7, 2025-08-13T20:05:35.270288154+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0016d1d70)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:35.270288154+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:05:35.270288154+00:00 stderr F } 2025-08-13T20:05:35.270288154+00:00 stderr F } 2025-08-13T20:05:35.270288154+00:00 stderr F because static pod is ready 2025-08-13T20:05:35.360014093+00:00 stderr F I0813 20:05:35.359946 1 prune_controller.go:269] Nothing 
to prune 2025-08-13T20:05:35.361982280+00:00 stderr F I0813 20:05:35.361950 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 6 to 7 because static pod is ready 2025-08-13T20:05:35.372351637+00:00 stderr F I0813 20:05:35.369665 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:05:35Z","message":"NodeInstallerProgressing: 1 node is at revision 7","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:35.451418841+00:00 stderr F I0813 20:05:35.451335 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7" 2025-08-13T20:07:15.704486660+00:00 stderr F I0813 20:07:15.701599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 8 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.783097454+00:00 stderr F I0813 20:07:15.780987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-pod-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:15.815624836+00:00 stderr F I0813 20:07:15.815563 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:16.977272212+00:00 stderr F I0813 20:07:16.976932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:17.180451136+00:00 stderr F I0813 20:07:17.173849 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/scheduler-kubeconfig-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:17.505357562+00:00 stderr F I0813 20:07:17.505279 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:18.719316847+00:00 stderr F I0813 20:07:18.718327 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:19.941996393+00:00 stderr F I0813 20:07:19.938574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:19.981566097+00:00 stderr F I0813 20:07:19.979750 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 8 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:20.101976720+00:00 stderr F I0813 20:07:20.100453 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:20.114002764+00:00 stderr F I0813 20:07:20.112277 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 8 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:21.320572926+00:00 stderr F I0813 20:07:21.318488 1 installer_controller.go:524] node crc with revision 7 is the oldest and needs new revision 8 2025-08-13T20:07:21.320572926+00:00 stderr F I0813 20:07:21.319194 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:21.320572926+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:21.320572926+00:00 
stderr F CurrentRevision: (int32) 7, 2025-08-13T20:07:21.320572926+00:00 stderr F TargetRevision: (int32) 8, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001c32198)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:21.320572926+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:07:21.320572926+00:00 stderr F } 2025-08-13T20:07:21.320572926+00:00 stderr F } 2025-08-13T20:07:21.364919068+00:00 stderr F I0813 20:07:21.362636 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 7 to 8 because node crc with revision 7 is the oldest 2025-08-13T20:07:21.401748604+00:00 stderr F I0813 20:07:21.400658 1 status_controller.go:218] clusteroperator/kube-scheduler diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:21Z","message":"NodeInstallerProgressing: 1 node is at revision 7; 0 nodes have achieved new revision 8","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:21.405192112+00:00 stderr F I0813 20:07:21.405162 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:21.431219419+00:00 stderr F I0813 20:07:21.429589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 7; 0 nodes have achieved new revision 8"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8" 2025-08-13T20:07:23.084966174+00:00 stderr F I0813 20:07:23.081293 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-8-crc -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:23.920610573+00:00 stderr F I0813 20:07:23.915736 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:26.094867570+00:00 stderr F I0813 20:07:26.093717 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:27.701233176+00:00 stderr F I0813 20:07:27.691110 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:58.161748364+00:00 stderr F I0813 20:07:58.161265 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:59.618478561+00:00 stderr F I0813 20:07:59.618418 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because waiting for static pod of revision 8, found 7 2025-08-13T20:08:00.260228759+00:00 stderr F I0813 20:08:00.257628 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because waiting for static pod of revision 8, found 7 2025-08-13T20:08:08.302031825+00:00 stderr F I0813 20:08:08.299863 1 installer_controller.go:512] "crc" is in transition to 8, but has 
not made progress because static pod is pending 2025-08-13T20:08:08.669468880+00:00 stderr F I0813 20:08:08.668167 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:09.661708029+00:00 stderr F I0813 20:08:09.660459 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:11.298218219+00:00 stderr F I0813 20:08:11.296588 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:32.423503917+00:00 stderr F E0813 20:08:32.421764 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.657075264+00:00 stderr F E0813 20:08:32.656916 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.659262606+00:00 stderr F E0813 20:08:32.659211 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.665237858+00:00 stderr F E0813 20:08:32.664273 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.671484387+00:00 stderr F E0813 20:08:32.670156 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.676878851+00:00 stderr F E0813 20:08:32.676854 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.684176151+00:00 stderr F E0813 20:08:32.684095 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.700397636+00:00 stderr F E0813 20:08:32.700290 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.709248700+00:00 stderr F E0813 20:08:32.709170 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.742218485+00:00 stderr F E0813 20:08:32.742106 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.754565889+00:00 stderr F E0813 20:08:32.754514 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.857328175+00:00 stderr F E0813 20:08:32.857281 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.058487083+00:00 stderr F E0813 20:08:33.058429 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.256689975+00:00 stderr F E0813 20:08:33.256581 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.460638473+00:00 stderr F E0813 20:08:33.460541 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:33.859451947+00:00 stderr F E0813 20:08:33.859324 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.259980410+00:00 stderr F E0813 20:08:34.259926 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:34.459226493+00:00 stderr F E0813 20:08:34.458756 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.065538257+00:00 stderr F E0813 20:08:35.065047 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.260978180+00:00 stderr F E0813 20:08:35.259879 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:35.857700678+00:00 stderr F E0813 20:08:35.857534 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.062962473+00:00 stderr F E0813 20:08:36.061522 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.460504571+00:00 stderr F E0813 20:08:36.460427 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.657974753+00:00 stderr F E0813 20:08:36.657874 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.863096394+00:00 stderr F E0813 20:08:36.862953 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.060307248+00:00 stderr F E0813 20:08:37.060175 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465059463+00:00 stderr F E0813 20:08:37.464972 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.860871651+00:00 stderr F E0813 20:08:37.858753 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.260733035+00:00 stderr F E0813 20:08:38.260682 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.663861994+00:00 stderr F E0813 20:08:38.662819 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.262952820+00:00 stderr F E0813 20:08:39.260829 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:39.462171091+00:00 stderr F E0813 20:08:39.462085 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:40.063432860+00:00 stderr F E0813 20:08:40.059750 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.486148519+00:00 stderr F E0813 20:08:40.481360 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.059009174+00:00 stderr F E0813 20:08:41.058291 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.466963720+00:00 stderr F E0813 20:08:41.465427 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.659714477+00:00 stderr F E0813 20:08:41.659572 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.265556167+00:00 stderr F E0813 20:08:42.265423 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.861740440+00:00 stderr F E0813 20:08:42.861671 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.469264278+00:00 stderr F E0813 20:08:43.463841 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.859357553+00:00 stderr F E0813 20:08:44.859243 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.105574872+00:00 stderr F E0813 20:08:45.105237 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.260272658+00:00 stderr F E0813 
20:08:45.260163 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.657439815+00:00 stderr F E0813 20:08:46.657116 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.060711047+00:00 stderr F E0813 20:08:47.060353 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:47.264939003+00:00 stderr F E0813 20:08:47.262274 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.466948065+00:00 stderr F E0813 20:08:47.464918 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.860878800+00:00 stderr F E0813 20:08:48.860045 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.261750484+00:00 stderr F E0813 20:08:49.261652 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.664740178+00:00 stderr F E0813 20:08:50.664349 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.865964007+00:00 stderr F E0813 20:08:50.864296 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.260165459+00:00 stderr F E0813 20:08:51.260058 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.462859232+00:00 stderr F E0813 20:08:52.462645 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.061524716+00:00 stderr F E0813 20:08:53.061168 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.261631214+00:00 stderr F E0813 20:08:54.261472 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.864943431+00:00 stderr F E0813 20:08:54.861968 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:55.861874535+00:00 stderr F E0813 20:08:55.861383 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.900825473+00:00 stderr F E0813 20:08:56.900663 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.440125344+00:00 stderr F E0813 20:08:57.440012 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:57.507969159+00:00 stderr F E0813 20:08:57.507842 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.434416582+00:00 stderr F E0813 20:08:58.433978 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:28.101218495+00:00 stderr F I0813 20:09:28.100409 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:29.927306530+00:00 stderr F I0813 20:09:29.926836 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:29.930692147+00:00 stderr F I0813 20:09:29.930523 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:29.952076110+00:00 stderr F I0813 20:09:29.951739 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:29.952076110+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:29.952076110+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:09:29.952076110+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00042e9d8)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:29.952076110+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n 
OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:09:29.952076110+00:00 stderr F } 2025-08-13T20:09:29.952076110+00:00 stderr F } 2025-08-13T20:09:29.952076110+00:00 stderr F because static pod is ready 2025-08-13T20:09:29.974254396+00:00 stderr F I0813 20:09:29.974145 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 7 to 8 because static pod is ready 2025-08-13T20:09:29.975662596+00:00 stderr F I0813 20:09:29.975627 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:29.976646144+00:00 stderr F I0813 20:09:29.976583 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:29.995650949+00:00 stderr F I0813 20:09:29.995595 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status 
for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 8"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" 2025-08-13T20:09:30.537031651+00:00 stderr F I0813 20:09:30.536506 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.543282740+00:00 stderr F E0813 20:09:30.543234 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.544034722+00:00 stderr F I0813 20:09:30.544004 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.550363613+00:00 stderr F E0813 20:09:30.550266 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.551084374+00:00 stderr F I0813 20:09:30.550996 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.556504790+00:00 stderr F E0813 20:09:30.556470 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.561870313+00:00 stderr F I0813 20:09:30.561683 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.568154224+00:00 stderr F E0813 20:09:30.568032 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.609735966+00:00 stderr F I0813 20:09:30.609603 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.616439068+00:00 stderr F E0813 20:09:30.616301 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.699459548+00:00 stderr F I0813 20:09:30.697607 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at 
revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.707994923+00:00 stderr F E0813 20:09:30.707681 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.869500534+00:00 stderr F I0813 20:09:30.869428 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.880470038+00:00 stderr F E0813 20:09:30.880354 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:31.203732086+00:00 stderr F I0813 20:09:31.203624 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:31Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:31.213663921+00:00 stderr F E0813 20:09:31.213471 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:31.855883135+00:00 stderr F I0813 20:09:31.855749 1 status_controller.go:218] clusteroperator/kube-scheduler diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:31Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:31.862935347+00:00 stderr F E0813 20:09:31.862764 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:32.274533878+00:00 stderr F I0813 20:09:32.274336 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:32.348989672+00:00 stderr F I0813 20:09:32.347245 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:32.702465977+00:00 stderr F I0813 20:09:32.702301 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:32.709356515+00:00 stderr F E0813 20:09:32.709256 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:33.136957823+00:00 stderr F I0813 20:09:33.136156 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:33Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:33.147025742+00:00 stderr F E0813 20:09:33.146737 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:33.148276978+00:00 stderr F I0813 20:09:33.148199 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:33Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:33.157408120+00:00 stderr F E0813 20:09:33.157256 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:34.113335487+00:00 stderr F I0813 20:09:34.113272 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:34.534085630+00:00 stderr F I0813 20:09:34.534017 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:34.774416151+00:00 stderr F I0813 20:09:34.774049 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:35.933337538+00:00 stderr F I0813 20:09:35.932834 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:36.134601698+00:00 stderr F I0813 20:09:36.134500 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:36.873630366+00:00 stderr F I0813 20:09:36.873530 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:37.075117703+00:00 stderr F I0813 20:09:37.074269 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:37.130367177+00:00 stderr F I0813 20:09:37.130258 1 request.go:697] Waited for 1.19496227s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc 2025-08-13T20:09:37.136475272+00:00 stderr F I0813 20:09:37.136238 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes 
are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:37Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:37.142530496+00:00 stderr F E0813 20:09:37.142432 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:37.143245286+00:00 stderr F I0813 20:09:37.143168 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:37Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:37.149396453+00:00 stderr F E0813 20:09:37.149312 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:37.334031207+00:00 stderr F I0813 20:09:37.333836 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:38.268554420+00:00 stderr F I0813 20:09:38.268410 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.274959354+00:00 stderr F E0813 20:09:38.274735 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.330433744+00:00 stderr F I0813 20:09:38.330277 1 request.go:697] Waited for 1.195071584s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc 2025-08-13T20:09:38.340314518+00:00 stderr F I0813 20:09:38.340212 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.346515885+00:00 stderr F E0813 20:09:38.346338 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.347238156+00:00 stderr F I0813 20:09:38.347124 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.357465189+00:00 stderr F E0813 20:09:38.357309 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.533927949+00:00 stderr F I0813 20:09:38.533022 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:41.658292706+00:00 stderr F I0813 20:09:41.658105 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:41.798998570+00:00 stderr F I0813 20:09:41.798438 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:43.796436379+00:00 stderr F I0813 20:09:43.795019 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:44.200848973+00:00 stderr F I0813 20:09:44.198357 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:49.203368260+00:00 stderr F I0813 20:09:49.202253 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:51.434943531+00:00 stderr F I0813 20:09:51.431532 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:52.016265048+00:00 stderr F I0813 20:09:52.016174 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:12.320140828+00:00 stderr F I0813 20:10:12.317306 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:14.400878154+00:00 stderr F I0813 20:10:14.399461 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:18.062113022+00:00 stderr F I0813 20:10:18.058972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:19.598567013+00:00 stderr F I0813 20:10:19.598491 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:26.215610350+00:00 stderr F I0813 20:10:26.214704 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:27.797339089+00:00 stderr F I0813 20:10:27.797234 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:28.405947179+00:00 stderr F I0813 20:10:28.405471 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:31.021413336+00:00 stderr F I0813 20:10:31.021171 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:40.552074109+00:00 stderr F I0813 20:10:40.543346 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:10:41.705436466+00:00 stderr F I0813 20:10:41.705315 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:36.355339714+00:00 stderr F I0813 20:42:36.341575 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350385 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350426 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350439 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350460 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.373428316+00:00 stderr F I0813 20:42:36.350472 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350487 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350501 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350524 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350535 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350575 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.320254 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350642 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350656 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350666 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350677 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350687 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350723 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350738 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350754 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350883 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350908 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350917 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.393459673+00:00 stderr F I0813 
20:42:36.350935 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.412702238+00:00 stderr F I0813 20:42:36.412551 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:39.402538706+00:00 stderr F I0813 20:42:39.401878 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:39.403501624+00:00 stderr F I0813 20:42:39.402611 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:39.403579226+00:00 stderr F E0813 20:42:39.403535 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.403682989+00:00 stderr F W0813 20:42:39.403643 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log0000644000175000017500000023040715073043233033033 0ustar zuulzuul
2025-08-13T19:59:06.750257233+00:00 stderr F I0813 19:59:06.742937 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T19:59:06.786971529+00:00 stderr F I0813 19:59:06.782877 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:06.942901614+00:00 stderr F I0813 19:59:06.940217 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:07.926525343+00:00 stderr F I0813 19:59:07.924691 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6
2025-08-13T19:59:12.720179267+00:00 stderr F I0813 19:59:12.719111 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:12.820604490+00:00 stderr F I0813 19:59:12.820420 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:12.820604490+00:00 stderr F W0813 19:59:12.820512 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:12.820604490+00:00 stderr F W0813 19:59:12.820521 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:12.823878544+00:00 stderr F I0813 19:59:12.822329 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock...
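The "leader election lost" warning that closes the previous container log and the "attempting to acquire leader lease ..." line that opens this one both come from client-go's leaderelection package. As a rough illustration only, not the operator's actual wiring, a minimal Go sketch of that lease-based acquisition loop follows; the kubeconfig path, namespace, lease name, and timing values are assumptions (the durations merely echo the retry/clock-skew figures hinted at in the log line above).

// Hypothetical sketch: acquire a Lease-backed leader lock the way the
// "attempting to acquire leader lease ..." log lines describe.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Load a kubeconfig from the default location (assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			// Illustrative names mirroring the lease seen in the log.
			Namespace: "openshift-kube-scheduler-operator",
			Name:      "openshift-cluster-kube-scheduler-operator-lock",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		// Placeholder durations; the operator's real values are set elsewhere.
		LeaseDuration: 137 * time.Second,
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting controllers")
			},
			OnStoppedLeading: func() {
				// Fires when the lease cannot be renewed, matching the
				// "leader election lost" warning above.
				log.Println("leader election lost")
			},
		},
	})
}

When the apiserver becomes unreachable, renewing or releasing the lease fails just like the "Failed to release lock ... connection refused" entry above, and OnStoppedLeading runs.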
2025-08-13T19:59:14.074160714+00:00 stderr F I0813 19:59:14.073686 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2025-08-13T19:59:14.086618029+00:00 stderr F I0813 19:59:14.082412 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"27949", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_b7a0aeae-1f73-4439-9109-a6d072941331 became leader 2025-08-13T19:59:14.277518981+00:00 stderr F I0813 19:59:14.276568 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:14.277518981+00:00 stderr F I0813 19:59:14.277119 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563198 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563643 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563239 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:15.587516315+00:00 stderr F I0813 19:59:15.582508 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:15.607910957+00:00 stderr F I0813 19:59:15.605047 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:15.607910957+00:00 stderr F I0813 19:59:15.605131 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:15.620592328+00:00 stderr F I0813 19:59:15.620545 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:15.620755023+00:00 stderr F I0813 19:59:15.620734 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:15.845312024+00:00 stderr F I0813 19:59:15.821569 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet 
MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:15.845858660+00:00 stderr F I0813 19:59:15.845737 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:15.846489558+00:00 stderr F I0813 19:59:15.846218 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:15.847163457+00:00 stderr F I0813 19:59:15.846455 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:16.319895263+00:00 stderr F I0813 19:59:16.318321 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:16.319895263+00:00 stderr F E0813 19:59:16.318472 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication 
failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.319895263+00:00 stderr F E0813 19:59:16.318510 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.334965852+00:00 stderr F E0813 19:59:16.326704 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.360658714+00:00 stderr F E0813 19:59:16.360212 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.360658714+00:00 stderr F E0813 19:59:16.360280 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.381641183+00:00 stderr F E0813 19:59:16.378296 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.445417691+00:00 stderr F E0813 19:59:16.436610 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.488232291+00:00 stderr F E0813 19:59:16.483222 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.568202701+00:00 stderr F E0813 19:59:16.558084 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.654130840+00:00 stderr F E0813 19:59:16.651388 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.654130840+00:00 stderr F E0813 19:59:16.651483 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.861331357+00:00 stderr F E0813 19:59:16.813732 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.861331357+00:00 stderr F E0813 19:59:16.827618 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:17.177242403+00:00 stderr F E0813 19:59:17.034141 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:17.177242403+00:00 stderr F E0813 19:59:17.176599 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.088712 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.118470 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.161507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.175011 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:19.941138489+00:00 stderr F I0813 19:59:19.923556 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T19:59:19.984238058+00:00 stderr F I0813 19:59:19.938257 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:19.989219850+00:00 stderr F I0813 19:59:19.988407 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:19.989219850+00:00 stderr F E0813 19:59:19.988689 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154621 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154758 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154911 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154937 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155286 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155309 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155322 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155766 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2025-08-13T19:59:20.217902409+00:00 stderr F I0813 19:59:20.217548 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:20.238744153+00:00 stderr F I0813 19:59:20.238127 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:20.571913630+00:00 stderr F E0813 19:59:20.571743 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027082 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027502 1 base_controller.go:110] Starting #1 worker of 
InstallerStateController controller ... 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027548 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027556 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T19:59:24.075747208+00:00 stderr F E0813 19:59:24.031037 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.075747208+00:00 stderr F E0813 19:59:24.031073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173715 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173875 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173900 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:24.180871845+00:00 stderr F I0813 19:59:24.178934 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:24.197711115+00:00 stderr F I0813 19:59:24.197643 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:24.252630560+00:00 stderr F I0813 19:59:24.252208 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T19:59:24.285859177+00:00 stderr F I0813 19:59:24.282293 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:24.298903559+00:00 stderr F I0813 19:59:24.286007 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:24.308657957+00:00 stderr F I0813 19:59:24.308498 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:24.308755230+00:00 stderr F I0813 19:59:24.308735 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:24.309183862+00:00 stderr F I0813 19:59:24.309119 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:24.309183862+00:00 stderr F I0813 19:59:24.309171 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:24.309538402+00:00 stderr F I0813 19:59:24.308546 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:24.309596684+00:00 stderr F I0813 19:59:24.309577 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:24.313347991+00:00 stderr F I0813 19:59:24.286034 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:24.313418613+00:00 stderr F I0813 19:59:24.313402 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T19:59:24.337597902+00:00 stderr F I0813 19:59:24.337309 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:24.338316453+00:00 stderr F I0813 19:59:24.338235 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T19:59:24.339141446+00:00 stderr F I0813 19:59:24.338647 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:24.345083176+00:00 stderr F I0813 19:59:24.345016 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.356059319+00:00 stderr F I0813 19:59:24.355992 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.357731286+00:00 stderr F I0813 19:59:24.357700 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-08-13T19:59:24.360130935+00:00 stderr F I0813 19:59:24.360021 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 2025-08-13T19:59:24.362277496+00:00 stderr F I0813 19:59:24.362238 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:24Z","message":"NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:58:01Z","message":"NodeInstallerProgressing: 1 node is at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:24.371413966+00:00 stderr F I0813 19:59:24.364198 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.419431825+00:00 stderr F I0813 19:59:24.419309 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:24.419431825+00:00 stderr F I0813 19:59:24.419354 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:24.461939797+00:00 stderr F I0813 19:59:24.458873 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T19:59:24.461939797+00:00 stderr F I0813 19:59:24.458941 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T19:59:25.315461666+00:00 stderr F E0813 19:59:25.312038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.516356702+00:00 stderr F I0813 19:59:25.504740 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:25.820550084+00:00 stderr F I0813 19:59:25.820094 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:58:01Z","message":"NodeInstallerProgressing: 1 node is at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:26.105266230+00:00 stderr F I0813 19:59:26.105188 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:26.237973862+00:00 stderr F I0813 19:59:26.192630 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:26.593444855+00:00 stderr F E0813 19:59:26.593093 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.892963843+00:00 stderr F I0813 19:59:26.892221 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.979948 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.979994 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.980052 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.980059 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-08-13T19:59:27.875125470+00:00 stderr F E0813 19:59:27.872973 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.919694851+00:00 stderr F I0813 19:59:27.919459 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:27.919999479+00:00 stderr F I0813 19:59:27.919974 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:31.716486518+00:00 stderr F E0813 19:59:31.716107 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:32.995353672+00:00 stderr F E0813 19:59:32.994615 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.965062365+00:00 stderr F E0813 19:59:41.962572 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.238992878+00:00 stderr F E0813 19:59:43.238112 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.822111606+00:00 stderr F I0813 19:59:51.815170 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915673 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.915501758 +0000 UTC))" 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915890 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.915755946 +0000 UTC))" 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915924 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.91590089 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915949 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.915930981 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915958592 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915988 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915976512 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916011 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915993482 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916029 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916018183 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916055 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916034914 +0000 UTC))" 2025-08-13T19:59:51.917876096+00:00 stderr F I0813 19:59:51.916576 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 19:59:51.916552738 +0000 UTC))" 2025-08-13T19:59:52.021908452+00:00 stderr F I0813 19:59:52.021159 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 19:59:52.021103789 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.728508 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.728468532 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729386 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.729306736 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729417 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.729396818 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729494 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.72946132 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729511 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.729500051 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.729770 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.729519012 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799009 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" 
[] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.7988868 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799112 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799092906 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799206 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.799118016 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799338 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799217719 +0000 UTC))" 2025-08-13T20:00:05.825056146+00:00 stderr F I0813 20:00:05.822722 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:00:05.822678308 +0000 UTC))" 2025-08-13T20:00:05.825056146+00:00 stderr F I0813 20:00:05.823980 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:00:05.823609605 +0000 UTC))" 2025-08-13T20:00:30.817609083+00:00 stderr F I0813 20:00:30.815856 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 7 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:30.880127836+00:00 stderr F I0813 20:00:30.874647 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-pod-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:30.901642529+00:00 
stderr F I0813 20:00:30.900057 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:31.038110090+00:00 stderr F I0813 20:00:31.035252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:31.628087483+00:00 stderr F I0813 20:00:31.626200 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/scheduler-kubeconfig-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.047671160+00:00 stderr F I0813 20:00:33.034887 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.234393285+00:00 stderr F I0813 20:00:33.230541 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.384646049+00:00 stderr F I0813 20:00:33.383876 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.794477925+00:00 stderr F I0813 20:00:33.794386 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 7 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:33.994328463+00:00 stderr F I0813 20:00:33.993397 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 7 created because optional secret/serving-cert has changed 
2025-08-13T20:00:33.997412101+00:00 stderr F I0813 20:00:33.997345 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:35.638695201+00:00 stderr F I0813 20:00:35.635725 1 installer_controller.go:524] node crc with revision 6 is the oldest and needs new revision 7 2025-08-13T20:00:35.638695201+00:00 stderr F I0813 20:00:35.636690 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:35.638695201+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:35.638695201+00:00 stderr F CurrentRevision: (int32) 6, 2025-08-13T20:00:35.638695201+00:00 stderr F TargetRevision: (int32) 7, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0017426d8)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:35.638695201+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:00:35.638695201+00:00 stderr F } 2025-08-13T20:00:35.638695201+00:00 stderr F } 2025-08-13T20:00:35.744594801+00:00 stderr F I0813 20:00:35.744447 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 6 to 7 because node crc with revision 6 is the oldest 2025-08-13T20:00:35.765922529+00:00 stderr F I0813 20:00:35.762651 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:35Z","message":"NodeInstallerProgressing: 1 node is at revision 6; 0 nodes have achieved new revision 7","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:35.843115420+00:00 stderr F I0813 20:00:35.842474 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:35.907574728+00:00 stderr F I0813 20:00:35.905035 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 6; 0 nodes have achieved new revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 7" 2025-08-13T20:00:37.153230506+00:00 stderr F I0813 20:00:37.136051 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-7-crc -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:38.492115353+00:00 stderr F I0813 20:00:38.485670 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:39.595945127+00:00 stderr F I0813 20:00:39.578461 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:46.881302911+00:00 stderr F I0813 20:00:46.866452 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.981038 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.980942085 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998578 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.994291525 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998644 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.998610639 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998677 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.99865393 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998719 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998701701 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998744 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998726352 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999094 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998750653 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999209 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.999111613 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999251 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.999224796 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999302 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.999281778 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999330 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.999312609 +0000 UTC))" 2025-08-13T20:01:00.028106330+00:00 stderr F I0813 20:01:00.016271 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:01:00.016232431 +0000 UTC))" 2025-08-13T20:01:00.028106330+00:00 stderr F I0813 20:01:00.020201 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:01:00.020135532 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.375177 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.375957 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.376642 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382332 1 
tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:20.382291382 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382367 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:20.382353574 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382388 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:20.382372944 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382413 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:20.382396385 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382431 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382419856 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382454 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382441626 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382474 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382461357 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382505 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382480117 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382523 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:20.382512488 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382541 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:20.382530409 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382567 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.38255572 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382990 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-08-13 20:01:20.382970791 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.383322 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:01:20.383296541 +0000 UTC))" 2025-08-13T20:01:21.951018043+00:00 stderr F I0813 20:01:21.950025 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="bb1e896cccd42fdf3b040bf941474d8b98c2e1008b067302988ef61fbb74af9d", new="f2ef63f4fe25e3a28410fbdc21c93cf27decbc59c0810ae7fae1df548bef3156") 2025-08-13T20:01:21.951018043+00:00 stderr F W0813 20:01:21.950997 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was modified 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951354 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified 
(old="af4bed0830029e9d25ab9fe2cabce4be2989fe96db922a2b2f78449a73557f43", new="fd988104fafb68af302e48a9bb235ee14d5ce3d51ab6bad4c4b27339f3fa7c47") 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951708 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951904 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951966 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952066 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952097 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952166 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952182 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952302 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952312 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952406 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952419 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952426 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952430 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952437 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952448 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952452 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952461 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952508 1 base_controller.go:172] Shutting down StatusSyncer_kube-scheduler ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952533 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952555 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952570 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952584 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952597 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952609 1 base_controller.go:172] Shutting down LoggingSyncer ... 
2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952624 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952641 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952694 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952829 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952872 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952884 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.953768 1 base_controller.go:150] All StatusSyncer_kube-scheduler post start hooks have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962390 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962408 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962418 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962424 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962495 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962552 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962565 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962602 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962665 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:21.962742357+00:00 stderr F E0813 20:01:21.962704 1 request.go:1116] Unexpected error when reading response body: context canceled 2025-08-13T20:01:21.962967624+00:00 stderr F I0813 20:01:21.962524 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963093 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963127 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963139 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963148 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 
2025-08-13T20:01:21.963176590+00:00 stderr F I0813 20:01:21.963155 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:21.963176590+00:00 stderr F I0813 20:01:21.963160 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:21.963189860+00:00 stderr F I0813 20:01:21.963180 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963187 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963193 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963201 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963209 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963085 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963231 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963236 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963248 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-scheduler controller ... 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963255 1 base_controller.go:104] All StatusSyncer_kube-scheduler workers have been terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963265 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:21.963284713+00:00 stderr F I0813 20:01:21.963272 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:21.963284713+00:00 stderr F I0813 20:01:21.963277 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:21.963441377+00:00 stderr F I0813 20:01:21.963377 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:21.963531170+00:00 stderr F I0813 20:01:21.963478 1 builder.go:330] server exited 2025-08-13T20:01:21.963705225+00:00 stderr F I0813 20:01:21.963681 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:21.969730416+00:00 stderr F I0813 20:01:21.964094 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:01:21.969887911+00:00 stderr F I0813 20:01:21.964110 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:21.969938282+00:00 stderr F I0813 20:01:21.964128 1 base_controller.go:114] Shutting down worker of KubeControllerManagerStaticResources controller ... 2025-08-13T20:01:21.969994814+00:00 stderr F I0813 20:01:21.969976 1 base_controller.go:104] All KubeControllerManagerStaticResources workers have been terminated 2025-08-13T20:01:21.970029175+00:00 stderr F I0813 20:01:21.964139 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 
2025-08-13T20:01:21.970070696+00:00 stderr F I0813 20:01:21.970054 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:21.970116817+00:00 stderr F I0813 20:01:21.970100 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:21.970154508+00:00 stderr F I0813 20:01:21.964419 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:21.970195970+00:00 stderr F I0813 20:01:21.970180 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:21.970282332+00:00 stderr F I0813 20:01:21.964467 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:21.970337104+00:00 stderr F I0813 20:01:21.964538 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:21.970382325+00:00 stderr F I0813 20:01:21.970365 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:21.997038065+00:00 stderr F E0813 20:01:21.972707 1 base_controller.go:268] TargetConfigController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:21.997038065+00:00 stderr F I0813 20:01:21.972864 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:21.997038065+00:00 stderr F I0813 20:01:21.972905 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:22.298082319+00:00 stderr F W0813 20:01:22.298025 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log
2025-10-13T00:14:57.954349596+00:00 stderr F I1013 00:14:57.954176 1 cmd.go:241] Using service-serving-cert provided certificates 2025-10-13T00:14:57.954349596+00:00 stderr F I1013 00:14:57.954279 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:57.955739528+00:00 stderr F I1013 00:14:57.955705 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:57.975930233+00:00 stderr F I1013 00:14:57.975877 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6 2025-10-13T00:14:58.279513849+00:00 stderr F I1013 00:14:58.278881 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:14:58.279581641+00:00 stderr F W1013 00:14:58.279570 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:58.279611582+00:00 stderr F W1013 00:14:58.279602 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-10-13T00:14:58.284931051+00:00 stderr F I1013 00:14:58.284471 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:14:58.284931051+00:00 stderr F I1013 00:14:58.284827 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:14:58.285075835+00:00 stderr F I1013 00:14:58.285044 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock... 2025-10-13T00:14:58.288356434+00:00 stderr F I1013 00:14:58.286912 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:14:58.288356434+00:00 stderr F I1013 00:14:58.287991 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:14:58.288356434+00:00 stderr F I1013 00:14:58.288315 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:14:58.288380934+00:00 stderr F I1013 00:14:58.288371 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:14:58.288472067+00:00 stderr F I1013 00:14:58.288442 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:14:58.288472067+00:00 stderr F I1013 00:14:58.288460 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:58.288495808+00:00 stderr F I1013 00:14:58.288480 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:14:58.288495808+00:00 stderr F I1013 00:14:58.288488 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:58.391171454+00:00 stderr F I1013 00:14:58.391071 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:14:58.391171454+00:00 stderr F I1013 00:14:58.391091 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:58.391171454+00:00 stderr F I1013 00:14:58.391150 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:21:06.953130993+00:00 stderr F I1013 00:21:06.952584 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2025-10-13T00:21:06.953182775+00:00 stderr F I1013 00:21:06.952672 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42184", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_9f27b437-0ee0-453f-bed3-9d23b1bc6b01 became leader 2025-10-13T00:21:06.957021533+00:00 stderr F I1013 00:21:06.956959 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:21:06.959616425+00:00 stderr F I1013 00:21:06.959567 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:21:06.959705298+00:00 stderr F I1013 00:21:06.959645 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation 
ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:21:06.968629568+00:00 stderr F I1013 00:21:06.968558 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:21:06.968738141+00:00 stderr F I1013 00:21:06.968698 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-10-13T00:21:06.969074661+00:00 stderr F I1013 00:21:06.969039 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:21:06.969213355+00:00 stderr F I1013 00:21:06.969180 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-10-13T00:21:06.969213355+00:00 stderr F I1013 00:21:06.969197 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2025-10-13T00:21:06.969213355+00:00 stderr F I1013 00:21:06.969207 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:21:06.969224675+00:00 stderr F I1013 00:21:06.969212 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-10-13T00:21:06.969282877+00:00 stderr F I1013 00:21:06.969250 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-10-13T00:21:06.969308317+00:00 stderr F I1013 00:21:06.969294 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-10-13T00:21:06.969347508+00:00 stderr F I1013 00:21:06.969316 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-10-13T00:21:06.969359299+00:00 stderr F I1013 00:21:06.969350 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-10-13T00:21:06.969389440+00:00 stderr F I1013 00:21:06.969372 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-10-13T00:21:06.969425411+00:00 stderr F I1013 00:21:06.969401 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:21:06.969425411+00:00 stderr F I1013 00:21:06.969421 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:21:06.969457612+00:00 stderr F I1013 00:21:06.969433 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-10-13T00:21:06.969520243+00:00 stderr F I1013 00:21:06.969493 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-10-13T00:21:06.969528894+00:00 stderr F I1013 00:21:06.969521 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-10-13T00:21:07.068838080+00:00 stderr F I1013 00:21:07.068771 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:21:07.068838080+00:00 stderr F I1013 00:21:07.068811 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:21:07.069316614+00:00 stderr F I1013 00:21:07.069289 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:21:07.069316614+00:00 stderr F I1013 00:21:07.069301 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-10-13T00:21:07.069376655+00:00 stderr F I1013 00:21:07.069303 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-10-13T00:21:07.069392586+00:00 stderr F I1013 00:21:07.069384 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 2025-10-13T00:21:07.069425707+00:00 stderr F I1013 00:21:07.069402 1 base_controller.go:73] Caches are synced for NodeController 2025-10-13T00:21:07.069425707+00:00 stderr F I1013 00:21:07.069413 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-10-13T00:21:07.069454127+00:00 stderr F I1013 00:21:07.069439 1 base_controller.go:73] Caches are synced for PruneController 2025-10-13T00:21:07.069461548+00:00 stderr F I1013 00:21:07.069452 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-10-13T00:21:07.069623362+00:00 stderr F I1013 00:21:07.069604 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:21:07.069623362+00:00 stderr F I1013 00:21:07.069616 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:21:07.069675154+00:00 stderr F I1013 00:21:07.069659 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:21:07.069675154+00:00 stderr F I1013 00:21:07.069668 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:21:07.069780837+00:00 stderr F I1013 00:21:07.069756 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:21:07.171199682+00:00 stderr F I1013 00:21:07.171132 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:21:07.371448831+00:00 stderr F I1013 00:21:07.371385 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:21:07.469616656+00:00 stderr F I1013 00:21:07.469543 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-10-13T00:21:07.469616656+00:00 stderr F I1013 00:21:07.469588 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-10-13T00:21:07.574729215+00:00 stderr F I1013 00:21:07.574672 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:21:07.669594587+00:00 stderr F I1013 00:21:07.669532 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-10-13T00:21:07.669594587+00:00 stderr F I1013 00:21:07.669553 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-10-13T00:21:07.669594587+00:00 stderr F I1013 00:21:07.669575 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-10-13T00:21:07.669594587+00:00 stderr F I1013 00:21:07.669577 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-10-13T00:21:07.669594587+00:00 stderr F I1013 00:21:07.669562 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-10-13T00:21:07.669615708+00:00 stderr F I1013 00:21:07.669593 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 
2025-10-13T00:21:07.669671379+00:00 stderr F I1013 00:21:07.669653 1 base_controller.go:73] Caches are synced for InstallerController 2025-10-13T00:21:07.669671379+00:00 stderr F I1013 00:21:07.669664 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-10-13T00:21:07.669741651+00:00 stderr F I1013 00:21:07.669715 1 base_controller.go:73] Caches are synced for GuardController 2025-10-13T00:21:07.669741651+00:00 stderr F I1013 00:21:07.669734 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-10-13T00:21:07.670131242+00:00 stderr F I1013 00:21:07.670090 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2025-10-13T00:21:07.676354447+00:00 stderr F E1013 00:21:07.676278 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-10-13T00:21:07.677181010+00:00 stderr F I1013 00:21:07.677149 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:21:07.677314764+00:00 stderr F I1013 00:21:07.677268 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2025-10-13T00:21:07.677440057+00:00 stderr F E1013 00:21:07.677401 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-10-13T00:21:07.678219249+00:00 stderr F I1013 00:21:07.678168 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:07.681974185+00:00 stderr F I1013 00:21:07.681932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: 
kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2025-10-13T00:21:07.682190501+00:00 stderr F E1013 00:21:07.682142 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-10-13T00:21:07.686984825+00:00 stderr F I1013 00:21:07.686933 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" 2025-10-13T00:21:07.703020525+00:00 stderr F E1013 00:21:07.702962 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-10-13T00:21:07.703020525+00:00 stderr F I1013 00:21:07.702981 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2025-10-13T00:21:07.744150849+00:00 stderr F I1013 00:21:07.744075 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2025-10-13T00:21:07.744150849+00:00 stderr F E1013 00:21:07.744094 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-10-13T00:21:07.773667948+00:00 stderr F I1013 00:21:07.773534 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:21:07.870188896+00:00 stderr F I1013 00:21:07.869562 1 base_controller.go:73] Caches are synced for RevisionController 2025-10-13T00:21:07.870188896+00:00 stderr F I1013 00:21:07.869590 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-10-13T00:21:07.870188896+00:00 stderr F I1013 00:21:07.869824 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:21:07.870188896+00:00 stderr F I1013 00:21:07.869834 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-10-13T00:21:07.972470746+00:00 stderr F I1013 00:21:07.972402 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:21:08.069422817+00:00 stderr F I1013 00:21:08.069301 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-10-13T00:21:08.069422817+00:00 stderr F I1013 00:21:08.069395 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-10-13T00:21:08.069465878+00:00 stderr F I1013 00:21:08.069352 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-10-13T00:21:08.069473428+00:00 stderr F I1013 00:21:08.069461 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 2025-10-13T00:21:09.169607757+00:00 stderr F I1013 00:21:09.169535 1 request.go:697] Waited for 1.098863663s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config 2025-10-13T00:21:09.780738205+00:00 stderr F I1013 00:21:09.780673 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:09.780784106+00:00 stderr F I1013 00:21:09.780763 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:21:09.786771034+00:00 stderr F I1013 00:21:09.786575 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946277 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.946233485 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946864 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.946843011 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946889 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.946871522 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946914 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.946898003 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946934 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.946920274 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946953 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.946939284 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946974 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.946959745 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.946999 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.946978665 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.947019 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.947004986 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.947041 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.947027676 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.947066 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.947048797 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.947086 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947071428 +0000 UTC))" 2025-10-13T00:21:11.947770556+00:00 stderr F I1013 00:21:11.947555 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-10-13 00:21:11.94753203 +0000 UTC))" 2025-10-13T00:21:11.948234689+00:00 stderr F I1013 00:21:11.947944 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314498\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314498\" (2025-10-12 23:14:57 +0000 UTC to 2026-10-12 23:14:57 +0000 UTC (now=2025-10-13 00:21:11.947926331 +0000 UTC))" 2025-10-13T00:23:38.671636393+00:00 stderr F I1013 00:23:38.671154 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:23:41.537973065+00:00 stderr F I1013 00:23:41.537466 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:23:46.328006199+00:00 stderr F I1013 00:23:46.327602 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000026000000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000755000175000017500000000000015073043233033064 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000755000175000017500000000000015073043233033064 5ustar zuulzuul././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000644000175000017500000000601615073043233033071 0ustar zuulzuul2025-10-13T00:12:49.043213310+00:00 stderr F + [[ -f /env/_master ]] 2025-10-13T00:12:49.043482878+00:00 stderr F + ho_enable=--enable-hybrid-overlay 2025-10-13T00:12:49.044022723+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-10-13T00:12:49.071916469+00:00 stderr F + echo 'I1013 00:12:49.058435534 - network-node-identity - start webhook' 2025-10-13T00:12:49.071987141+00:00 stdout F I1013 00:12:49.058435534 - network-node-identity - start webhook 2025-10-13T00:12:49.072146585+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --webhook-cert-dir=/etc/webhook-cert --webhook-host=127.0.0.1 --webhook-port=9743 --enable-hybrid-overlay --enable-interconnect --disable-approver --extra-allowed-user=system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane --wait-for-kubernetes-api=200s --pod-admission-conditions=/var/run/ovnkube-identity-config/additional-pod-admission-cond.json --loglevel=2 2025-10-13T00:12:49.731122956+00:00 stderr F I1013 00:12:49.730956 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:2 port:9743 host:127.0.0.1 certDir:/etc/webhook-cert metricsAddress:0 leaseNamespace: enableInterconnect:true enableHybridOverlay:true disableWebhook:false disableApprover:true waitForKAPIDuration:200000000000 localKAPIPort:6443 extraAllowedUsers:{slice:[system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane] hasBeenSet:true} csrAcceptanceConditionFile: csrAcceptanceConditions:[] podAdmissionConditionFile:/var/run/ovnkube-identity-config/additional-pod-admission-cond.json podAdmissionConditions:[]} 2025-10-13T00:12:49.731122956+00:00 stderr F W1013 00:12:49.731102 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-10-13T00:12:49.804095847+00:00 stderr F I1013 00:12:49.804007 1 ovnkubeidentity.go:351] Waiting for caches to sync 2025-10-13T00:12:49.817407347+00:00 stderr F I1013 00:12:49.817313 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:49.907019132+00:00 stderr F I1013 00:12:49.906940 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-10-13T00:12:49.907289190+00:00 stderr F I1013 00:12:49.907252 1 ovnkubeidentity.go:430] Starting the webhook server 2025-10-13T00:12:49.907799664+00:00 stderr F I1013 00:12:49.907412 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-10-13T00:14:56.989487457+00:00 stderr F 2025/10/13 00:14:56 http: TLS handshake error from 127.0.0.1:40554: EOF 2025-10-13T00:14:56.991878789+00:00 stderr F 2025/10/13 00:14:56 http: TLS handshake error from 127.0.0.1:40566: read tcp 127.0.0.1:9743->127.0.0.1:40566: read: connection reset by peer 2025-10-13T00:22:45.486243270+00:00 stderr F I1013 00:22:45.486127 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log
2025-08-13T19:50:43.127891376+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:43.128098882+00:00 stderr F + ho_enable=--enable-hybrid-overlay 2025-08-13T19:50:43.129082180+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:43.184222956+00:00 stderr F + echo 'I0813 19:50:43.132736804 - network-node-identity - start webhook' 2025-08-13T19:50:43.184322418+00:00 stdout F I0813 19:50:43.132736804 - network-node-identity - start webhook 2025-08-13T19:50:43.184928436+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --webhook-cert-dir=/etc/webhook-cert --webhook-host=127.0.0.1 --webhook-port=9743 --enable-hybrid-overlay --enable-interconnect --disable-approver --extra-allowed-user=system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane --wait-for-kubernetes-api=200s --pod-admission-conditions=/var/run/ovnkube-identity-config/additional-pod-admission-cond.json --loglevel=2 2025-08-13T19:50:47.188251283+00:00 stderr F I0813 19:50:47.185908 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:2 port:9743 host:127.0.0.1 certDir:/etc/webhook-cert metricsAddress:0 leaseNamespace: enableInterconnect:true enableHybridOverlay:true disableWebhook:false disableApprover:true waitForKAPIDuration:200000000000 localKAPIPort:6443 extraAllowedUsers:{slice:[system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane] hasBeenSet:true} csrAcceptanceConditionFile: csrAcceptanceConditions:[] podAdmissionConditionFile:/var/run/ovnkube-identity-config/additional-pod-admission-cond.json podAdmissionConditions:[]} 2025-08-13T19:50:47.188251283+00:00 stderr F W0813 19:50:47.188191 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:50:47.261974120+00:00 stderr F I0813 19:50:47.259425 1 ovnkubeidentity.go:351] Waiting for caches to sync 2025-08-13T19:50:47.517742120+00:00 stderr F I0813 19:50:47.517497 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:47.577168279+00:00 stderr F I0813 19:50:47.576751 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T19:50:47.590146560+00:00 stderr F I0813 19:50:47.587224 1 ovnkubeidentity.go:430] Starting the webhook server 2025-08-13T19:50:47.594226226+00:00 stderr F I0813 19:50:47.582558 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-08-13T19:50:47.745047647+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53894: remote error: tls: bad certificate 2025-08-13T19:50:47.793929724+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53908: remote error: tls: bad certificate 2025-08-13T19:50:47.868213187+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53924: remote error: tls: bad certificate 2025-08-13T19:50:47.964301333+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53936: remote error: tls: bad certificate 2025-08-13T19:50:48.029546398+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53952: remote error: tls: bad certificate 2025-08-13T19:50:48.147259312+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53954: remote error: tls: bad certificate 2025-08-13T19:50:48.215592515+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53960: remote error: tls: bad certificate 2025-08-13T19:50:48.280282874+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53968: remote error: tls: bad certificate 2025-08-13T19:50:48.342619216+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53984: remote error: tls: bad certificate 2025-08-13T19:50:48.687006018+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:53992: remote error: tls: bad certificate 2025-08-13T19:50:48.866582011+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:51718: remote error: tls: bad certificate 2025-08-13T19:50:48.912027150+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:51724: remote error: tls: bad certificate 2025-08-13T19:50:48.944431296+00:00 stderr F 2025/08/13 19:50:48 http: TLS handshake error from 127.0.0.1:51736: remote error: tls: bad certificate 2025-08-13T19:50:49.378656735+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51750: remote error: tls: bad certificate 2025-08-13T19:50:49.710110938+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51754: remote error: tls: bad certificate 2025-08-13T19:50:49.824538099+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51762: remote error: tls: bad certificate 2025-08-13T19:50:49.900767758+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51766: remote error: tls: bad certificate 2025-08-13T19:50:49.986411945+00:00 stderr F 2025/08/13 19:50:49 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:50:50.647326135+00:00 stderr F 2025/08/13 19:50:50 http: TLS handshake error from 127.0.0.1:51790: remote error: tls: bad certificate 
2025-08-13T19:50:51.312891567+00:00 stderr F 2025/08/13 19:50:51 http: TLS handshake error from 127.0.0.1:51806: remote error: tls: bad certificate 2025-08-13T19:50:52.077974724+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51822: remote error: tls: bad certificate 2025-08-13T19:50:52.149405405+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 2025-08-13T19:50:52.196607595+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51832: remote error: tls: bad certificate 2025-08-13T19:50:52.244962827+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51846: remote error: tls: bad certificate 2025-08-13T19:50:52.314900675+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51862: remote error: tls: bad certificate 2025-08-13T19:50:52.379511692+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51864: remote error: tls: bad certificate 2025-08-13T19:50:52.434951447+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51872: remote error: tls: bad certificate 2025-08-13T19:50:52.503497206+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51882: remote error: tls: bad certificate 2025-08-13T19:50:52.594189118+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51894: remote error: tls: bad certificate 2025-08-13T19:50:52.645166325+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51902: remote error: tls: bad certificate 2025-08-13T19:50:52.700091544+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51906: remote error: tls: bad certificate 2025-08-13T19:50:52.759619795+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51916: remote error: tls: bad certificate 2025-08-13T19:50:52.790890399+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51920: remote error: tls: bad certificate 2025-08-13T19:50:52.808982886+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51934: remote error: tls: bad certificate 2025-08-13T19:50:52.857934075+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51944: remote error: tls: bad certificate 2025-08-13T19:50:52.898973088+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51952: remote error: tls: bad certificate 2025-08-13T19:50:52.920450271+00:00 stderr F 2025/08/13 19:50:52 http: TLS handshake error from 127.0.0.1:51976: remote error: tls: bad certificate 2025-08-13T19:50:53.225985724+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:51992: remote error: tls: bad certificate 2025-08-13T19:50:53.241171268+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:51998: remote error: tls: bad certificate 2025-08-13T19:50:53.277023692+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:51964: remote error: tls: bad certificate 2025-08-13T19:50:53.294267605+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52008: remote error: tls: bad certificate 2025-08-13T19:50:53.327105514+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52024: remote error: tls: bad certificate 2025-08-13T19:50:53.330707707+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52038: remote error: tls: bad certificate 
2025-08-13T19:50:53.362941238+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52044: remote error: tls: bad certificate 2025-08-13T19:50:53.377062762+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52056: remote error: tls: bad certificate 2025-08-13T19:50:53.412945867+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52064: remote error: tls: bad certificate 2025-08-13T19:50:53.441219756+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52068: remote error: tls: bad certificate 2025-08-13T19:50:53.451243732+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52076: remote error: tls: bad certificate 2025-08-13T19:50:53.478115140+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52078: remote error: tls: bad certificate 2025-08-13T19:50:53.520115450+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52090: remote error: tls: bad certificate 2025-08-13T19:50:53.605220803+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52092: remote error: tls: bad certificate 2025-08-13T19:50:53.639766180+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52106: remote error: tls: bad certificate 2025-08-13T19:50:53.704171171+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52120: remote error: tls: bad certificate 2025-08-13T19:50:53.753149931+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52136: remote error: tls: bad certificate 2025-08-13T19:50:53.821236107+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52152: remote error: tls: bad certificate 2025-08-13T19:50:53.864738370+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52158: remote error: tls: bad certificate 2025-08-13T19:50:53.962376541+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52168: remote error: tls: bad certificate 2025-08-13T19:50:53.974479196+00:00 stderr F 2025/08/13 19:50:53 http: TLS handshake error from 127.0.0.1:52174: remote error: tls: bad certificate 2025-08-13T19:50:54.024067274+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52186: remote error: tls: bad certificate 2025-08-13T19:50:54.030571270+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52178: remote error: tls: bad certificate 2025-08-13T19:50:54.079695624+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52202: remote error: tls: bad certificate 2025-08-13T19:50:54.114244271+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52204: remote error: tls: bad certificate 2025-08-13T19:50:54.160390340+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52210: remote error: tls: bad certificate 2025-08-13T19:50:54.251052751+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52212: remote error: tls: bad certificate 2025-08-13T19:50:54.313880267+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52228: remote error: tls: bad certificate 2025-08-13T19:50:54.570898253+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52240: remote error: tls: bad certificate 2025-08-13T19:50:54.755031685+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52252: remote error: tls: bad certificate 
2025-08-13T19:50:54.912921418+00:00 stderr F 2025/08/13 19:50:54 http: TLS handshake error from 127.0.0.1:52262: remote error: tls: bad certificate 2025-08-13T19:50:55.003989171+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52272: remote error: tls: bad certificate 2025-08-13T19:50:55.046258119+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52288: remote error: tls: bad certificate 2025-08-13T19:50:55.117683540+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52294: remote error: tls: bad certificate 2025-08-13T19:50:55.312921210+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52310: remote error: tls: bad certificate 2025-08-13T19:50:55.382659673+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52316: remote error: tls: bad certificate 2025-08-13T19:50:55.451897042+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52326: remote error: tls: bad certificate 2025-08-13T19:50:55.533005970+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52334: remote error: tls: bad certificate 2025-08-13T19:50:55.849677271+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52348: remote error: tls: bad certificate 2025-08-13T19:50:55.917101448+00:00 stderr F 2025/08/13 19:50:55 http: TLS handshake error from 127.0.0.1:52364: remote error: tls: bad certificate 2025-08-13T19:50:56.051583702+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52366: remote error: tls: bad certificate 2025-08-13T19:50:56.185426907+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52378: remote error: tls: bad certificate 2025-08-13T19:50:56.239143822+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52384: remote error: tls: bad certificate 2025-08-13T19:50:56.289679247+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52386: remote error: tls: bad certificate 2025-08-13T19:50:56.334149327+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52390: remote error: tls: bad certificate 2025-08-13T19:50:56.375012175+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52394: remote error: tls: bad certificate 2025-08-13T19:50:56.439319143+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52410: remote error: tls: bad certificate 2025-08-13T19:50:56.645922828+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52426: remote error: tls: bad certificate 2025-08-13T19:50:56.699151349+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52432: remote error: tls: bad certificate 2025-08-13T19:50:56.994924873+00:00 stderr F 2025/08/13 19:50:56 http: TLS handshake error from 127.0.0.1:52440: remote error: tls: bad certificate 2025-08-13T19:50:57.237949208+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52454: remote error: tls: bad certificate 2025-08-13T19:50:57.305384326+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52456: remote error: tls: bad certificate 2025-08-13T19:50:57.386773162+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52472: remote error: tls: bad certificate 2025-08-13T19:50:57.465000898+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52488: remote error: tls: bad certificate 
2025-08-13T19:50:57.496446756+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52496: remote error: tls: bad certificate 2025-08-13T19:50:57.522851991+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52506: remote error: tls: bad certificate 2025-08-13T19:50:57.553314542+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52522: remote error: tls: bad certificate 2025-08-13T19:50:57.598395340+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52524: remote error: tls: bad certificate 2025-08-13T19:50:57.665761646+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52528: remote error: tls: bad certificate 2025-08-13T19:50:57.732241096+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52540: remote error: tls: bad certificate 2025-08-13T19:50:57.800867447+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52550: remote error: tls: bad certificate 2025-08-13T19:50:57.842209629+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52560: remote error: tls: bad certificate 2025-08-13T19:50:57.889692366+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52576: remote error: tls: bad certificate 2025-08-13T19:50:57.934844296+00:00 stderr F 2025/08/13 19:50:57 http: TLS handshake error from 127.0.0.1:52588: remote error: tls: bad certificate 2025-08-13T19:50:58.005091664+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52604: remote error: tls: bad certificate 2025-08-13T19:50:58.042565145+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52620: remote error: tls: bad certificate 2025-08-13T19:50:58.213664785+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52632: remote error: tls: bad certificate 2025-08-13T19:50:58.300443125+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52634: remote error: tls: bad certificate 2025-08-13T19:50:58.342754055+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52646: remote error: tls: bad certificate 2025-08-13T19:50:58.397146519+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52662: remote error: tls: bad certificate 2025-08-13T19:50:58.424341976+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52674: remote error: tls: bad certificate 2025-08-13T19:50:58.462741084+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52686: remote error: tls: bad certificate 2025-08-13T19:50:58.499094283+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52700: remote error: tls: bad certificate 2025-08-13T19:50:58.532325083+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52702: remote error: tls: bad certificate 2025-08-13T19:50:58.559133629+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52704: remote error: tls: bad certificate 2025-08-13T19:50:58.598263777+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52714: remote error: tls: bad certificate 2025-08-13T19:50:58.618370812+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52722: remote error: tls: bad certificate 2025-08-13T19:50:58.651226101+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52726: remote error: tls: bad certificate 
2025-08-13T19:50:58.682213447+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52730: remote error: tls: bad certificate 2025-08-13T19:50:58.707888900+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:52746: remote error: tls: bad certificate 2025-08-13T19:50:58.742193181+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36542: remote error: tls: bad certificate 2025-08-13T19:50:58.767019280+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36550: remote error: tls: bad certificate 2025-08-13T19:50:58.793288891+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36562: remote error: tls: bad certificate 2025-08-13T19:50:58.820872670+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36570: remote error: tls: bad certificate 2025-08-13T19:50:58.871392534+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36586: remote error: tls: bad certificate 2025-08-13T19:50:58.904159000+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36594: remote error: tls: bad certificate 2025-08-13T19:50:58.940449017+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36608: remote error: tls: bad certificate 2025-08-13T19:50:58.970211598+00:00 stderr F 2025/08/13 19:50:58 http: TLS handshake error from 127.0.0.1:36616: remote error: tls: bad certificate 2025-08-13T19:50:59.017749157+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36624: remote error: tls: bad certificate 2025-08-13T19:50:59.071537814+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36626: remote error: tls: bad certificate 2025-08-13T19:50:59.123889600+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36634: remote error: tls: bad certificate 2025-08-13T19:50:59.173079226+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36640: remote error: tls: bad certificate 2025-08-13T19:50:59.227739088+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36650: remote error: tls: bad certificate 2025-08-13T19:50:59.281849325+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36664: remote error: tls: bad certificate 2025-08-13T19:50:59.317373600+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36668: remote error: tls: bad certificate 2025-08-13T19:50:59.348519680+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36674: remote error: tls: bad certificate 2025-08-13T19:50:59.406069255+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36682: remote error: tls: bad certificate 2025-08-13T19:50:59.445916724+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36694: remote error: tls: bad certificate 2025-08-13T19:50:59.470085825+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36700: remote error: tls: bad certificate 2025-08-13T19:50:59.497681314+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36706: remote error: tls: bad certificate 2025-08-13T19:50:59.530281755+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36710: remote error: tls: bad certificate 2025-08-13T19:50:59.599178285+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36716: remote error: tls: bad certificate 
2025-08-13T19:50:59.649423641+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36732: remote error: tls: bad certificate 2025-08-13T19:50:59.684307848+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36740: remote error: tls: bad certificate 2025-08-13T19:50:59.718094433+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36756: remote error: tls: bad certificate 2025-08-13T19:50:59.754352070+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36768: remote error: tls: bad certificate 2025-08-13T19:50:59.800177539+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36776: remote error: tls: bad certificate 2025-08-13T19:50:59.847697058+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36780: remote error: tls: bad certificate 2025-08-13T19:50:59.890215563+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36788: remote error: tls: bad certificate 2025-08-13T19:50:59.934275701+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36790: remote error: tls: bad certificate 2025-08-13T19:50:59.991349152+00:00 stderr F 2025/08/13 19:50:59 http: TLS handshake error from 127.0.0.1:36800: remote error: tls: bad certificate 2025-08-13T19:51:00.049017011+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36802: remote error: tls: bad certificate 2025-08-13T19:51:00.132532968+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36818: remote error: tls: bad certificate 2025-08-13T19:51:00.189720412+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36822: remote error: tls: bad certificate 2025-08-13T19:51:00.223725634+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36830: remote error: tls: bad certificate 2025-08-13T19:51:00.261105912+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36838: remote error: tls: bad certificate 2025-08-13T19:51:00.300171249+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36842: remote error: tls: bad certificate 2025-08-13T19:51:00.316661440+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36856: remote error: tls: bad certificate 2025-08-13T19:51:00.357302802+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36858: remote error: tls: bad certificate 2025-08-13T19:51:00.386042854+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36874: remote error: tls: bad certificate 2025-08-13T19:51:00.420893710+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36888: remote error: tls: bad certificate 2025-08-13T19:51:00.454148510+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36898: remote error: tls: bad certificate 2025-08-13T19:51:00.504189320+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36914: remote error: tls: bad certificate 2025-08-13T19:51:00.550072572+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36926: remote error: tls: bad certificate 2025-08-13T19:51:00.574640284+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36938: remote error: tls: bad certificate 2025-08-13T19:51:00.617011395+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36940: remote error: tls: bad certificate 
2025-08-13T19:51:00.650405229+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36950: remote error: tls: bad certificate 2025-08-13T19:51:00.686280005+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36966: remote error: tls: bad certificate 2025-08-13T19:51:00.725739813+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36982: remote error: tls: bad certificate 2025-08-13T19:51:00.773126677+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36990: remote error: tls: bad certificate 2025-08-13T19:51:00.829127177+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36992: remote error: tls: bad certificate 2025-08-13T19:51:00.853702340+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:36994: remote error: tls: bad certificate 2025-08-13T19:51:00.897999775+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37000: remote error: tls: bad certificate 2025-08-13T19:51:00.925969375+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37012: remote error: tls: bad certificate 2025-08-13T19:51:00.956533788+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37026: remote error: tls: bad certificate 2025-08-13T19:51:00.994699869+00:00 stderr F 2025/08/13 19:51:00 http: TLS handshake error from 127.0.0.1:37042: remote error: tls: bad certificate 2025-08-13T19:51:01.033348614+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37050: remote error: tls: bad certificate 2025-08-13T19:51:01.060967123+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37066: remote error: tls: bad certificate 2025-08-13T19:51:01.100954416+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37068: remote error: tls: bad certificate 2025-08-13T19:51:01.144876961+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37076: remote error: tls: bad certificate 2025-08-13T19:51:01.175751554+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37090: remote error: tls: bad certificate 2025-08-13T19:51:01.207250064+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37104: remote error: tls: bad certificate 2025-08-13T19:51:01.247654899+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37114: remote error: tls: bad certificate 2025-08-13T19:51:01.276133483+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37118: remote error: tls: bad certificate 2025-08-13T19:51:01.306406678+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37130: remote error: tls: bad certificate 2025-08-13T19:51:01.339034530+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37138: remote error: tls: bad certificate 2025-08-13T19:51:01.360545975+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37148: remote error: tls: bad certificate 2025-08-13T19:51:01.395433632+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37154: remote error: tls: bad certificate 2025-08-13T19:51:01.511654254+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37164: remote error: tls: bad certificate 2025-08-13T19:51:01.516197744+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37176: remote error: tls: bad certificate 
2025-08-13T19:51:01.612159086+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37184: remote error: tls: bad certificate 2025-08-13T19:51:01.653710183+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37192: remote error: tls: bad certificate 2025-08-13T19:51:01.710394323+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37200: remote error: tls: bad certificate 2025-08-13T19:51:01.746680500+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37210: remote error: tls: bad certificate 2025-08-13T19:51:01.803880064+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37214: remote error: tls: bad certificate 2025-08-13T19:51:01.839914504+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37230: remote error: tls: bad certificate 2025-08-13T19:51:01.870634442+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37234: remote error: tls: bad certificate 2025-08-13T19:51:01.949026672+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37236: remote error: tls: bad certificate 2025-08-13T19:51:01.998071504+00:00 stderr F 2025/08/13 19:51:01 http: TLS handshake error from 127.0.0.1:37244: remote error: tls: bad certificate 2025-08-13T19:51:02.030856891+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37246: remote error: tls: bad certificate 2025-08-13T19:51:02.066442158+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37252: remote error: tls: bad certificate 2025-08-13T19:51:02.105622638+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37268: remote error: tls: bad certificate 2025-08-13T19:51:02.146911037+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37280: remote error: tls: bad certificate 2025-08-13T19:51:02.177227054+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37286: remote error: tls: bad certificate 2025-08-13T19:51:02.195575838+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37292: remote error: tls: bad certificate 2025-08-13T19:51:02.227222943+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37304: remote error: tls: bad certificate 2025-08-13T19:51:02.254015388+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37310: remote error: tls: bad certificate 2025-08-13T19:51:02.282593005+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37322: remote error: tls: bad certificate 2025-08-13T19:51:02.313024875+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37330: remote error: tls: bad certificate 2025-08-13T19:51:02.339564593+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37342: remote error: tls: bad certificate 2025-08-13T19:51:02.372934127+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37350: remote error: tls: bad certificate 2025-08-13T19:51:02.391770735+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37358: remote error: tls: bad certificate 2025-08-13T19:51:02.465381079+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37366: remote error: tls: bad certificate 2025-08-13T19:51:02.492945957+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37378: remote error: tls: bad certificate 
2025-08-13T19:51:02.522199363+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37394: remote error: tls: bad certificate 2025-08-13T19:51:02.562970878+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37400: remote error: tls: bad certificate 2025-08-13T19:51:02.611675860+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37408: remote error: tls: bad certificate 2025-08-13T19:51:02.646425693+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37416: remote error: tls: bad certificate 2025-08-13T19:51:02.675100923+00:00 stderr F 2025/08/13 19:51:02 http: TLS handshake error from 127.0.0.1:37420: remote error: tls: bad certificate 2025-08-13T19:51:03.107905973+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37426: remote error: tls: bad certificate 2025-08-13T19:51:03.165928801+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37434: remote error: tls: bad certificate 2025-08-13T19:51:03.216223679+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37442: remote error: tls: bad certificate 2025-08-13T19:51:03.248332746+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37448: remote error: tls: bad certificate 2025-08-13T19:51:03.313901890+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37452: remote error: tls: bad certificate 2025-08-13T19:51:03.355765376+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37468: remote error: tls: bad certificate 2025-08-13T19:51:03.383241102+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37470: remote error: tls: bad certificate 2025-08-13T19:51:03.406117275+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37482: remote error: tls: bad certificate 2025-08-13T19:51:03.462346822+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37498: remote error: tls: bad certificate 2025-08-13T19:51:03.512550436+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37510: remote error: tls: bad certificate 2025-08-13T19:51:03.556443761+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37514: remote error: tls: bad certificate 2025-08-13T19:51:03.592369398+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37516: remote error: tls: bad certificate 2025-08-13T19:51:03.629024485+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37518: remote error: tls: bad certificate 2025-08-13T19:51:03.692179381+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37524: remote error: tls: bad certificate 2025-08-13T19:51:03.730339411+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37538: remote error: tls: bad certificate 2025-08-13T19:51:03.860483790+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37546: remote error: tls: bad certificate 2025-08-13T19:51:03.886262596+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37548: remote error: tls: bad certificate 2025-08-13T19:51:03.922530403+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37554: remote error: tls: bad certificate 2025-08-13T19:51:03.963665218+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37566: remote error: tls: bad certificate 
2025-08-13T19:51:03.969899456+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37570: remote error: tls: bad certificate 2025-08-13T19:51:03.998223616+00:00 stderr F 2025/08/13 19:51:03 http: TLS handshake error from 127.0.0.1:37572: remote error: tls: bad certificate 2025-08-13T19:51:04.014296155+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37586: remote error: tls: bad certificate 2025-08-13T19:51:04.035582264+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37602: remote error: tls: bad certificate 2025-08-13T19:51:04.067182027+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37606: remote error: tls: bad certificate 2025-08-13T19:51:04.088258739+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37622: remote error: tls: bad certificate 2025-08-13T19:51:04.096409982+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37628: remote error: tls: bad certificate 2025-08-13T19:51:04.142221921+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37638: remote error: tls: bad certificate 2025-08-13T19:51:04.172312281+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37644: remote error: tls: bad certificate 2025-08-13T19:51:04.212474329+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37658: remote error: tls: bad certificate 2025-08-13T19:51:04.263190889+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37668: remote error: tls: bad certificate 2025-08-13T19:51:04.299631800+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37682: remote error: tls: bad certificate 2025-08-13T19:51:04.387549273+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37690: remote error: tls: bad certificate 2025-08-13T19:51:04.424341654+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37698: remote error: tls: bad certificate 2025-08-13T19:51:04.497659110+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37704: remote error: tls: bad certificate 2025-08-13T19:51:04.610452053+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37716: remote error: tls: bad certificate 2025-08-13T19:51:04.640043379+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37720: remote error: tls: bad certificate 2025-08-13T19:51:04.668125171+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37732: remote error: tls: bad certificate 2025-08-13T19:51:04.746185803+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37734: remote error: tls: bad certificate 2025-08-13T19:51:04.946409465+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37748: remote error: tls: bad certificate 2025-08-13T19:51:05.034016969+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37754: remote error: tls: bad certificate 2025-08-13T19:51:05.061009981+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37766: remote error: tls: bad certificate 2025-08-13T19:51:05.091070990+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37770: remote error: tls: bad certificate 2025-08-13T19:51:05.131356681+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37776: remote error: tls: bad certificate 
2025-08-13T19:51:05.205135240+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37780: remote error: tls: bad certificate 2025-08-13T19:51:05.255356405+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37794: remote error: tls: bad certificate 2025-08-13T19:51:05.387906103+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37810: remote error: tls: bad certificate 2025-08-13T19:51:05.443694848+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37816: remote error: tls: bad certificate 2025-08-13T19:51:05.519501824+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37824: remote error: tls: bad certificate 2025-08-13T19:51:05.543353656+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37834: remote error: tls: bad certificate 2025-08-13T19:51:05.620419769+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37848: remote error: tls: bad certificate 2025-08-13T19:51:05.643328973+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37864: remote error: tls: bad certificate 2025-08-13T19:51:05.746574714+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37876: remote error: tls: bad certificate 2025-08-13T19:51:05.817102220+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37888: remote error: tls: bad certificate 2025-08-13T19:51:05.962096764+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37898: remote error: tls: bad certificate 2025-08-13T19:51:06.000694597+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37910: remote error: tls: bad certificate 2025-08-13T19:51:06.087982192+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37920: remote error: tls: bad certificate 2025-08-13T19:51:06.151663462+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37926: remote error: tls: bad certificate 2025-08-13T19:51:07.520441000+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37936: remote error: tls: bad certificate 2025-08-13T19:51:07.584917952+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37942: remote error: tls: bad certificate 2025-08-13T19:51:07.637642999+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37950: remote error: tls: bad certificate 2025-08-13T19:51:07.664908569+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37956: remote error: tls: bad certificate 2025-08-13T19:51:07.688984427+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37960: remote error: tls: bad certificate 2025-08-13T19:51:07.736749202+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37976: remote error: tls: bad certificate 2025-08-13T19:51:07.798507217+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37990: remote error: tls: bad certificate 2025-08-13T19:51:07.826221579+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37994: remote error: tls: bad certificate 2025-08-13T19:51:07.859079428+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38000: remote error: tls: bad certificate 2025-08-13T19:51:07.896648992+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38008: remote error: tls: bad certificate 
2025-08-13T19:51:07.936917273+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38018: remote error: tls: bad certificate 2025-08-13T19:51:07.967011883+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38030: remote error: tls: bad certificate 2025-08-13T19:51:08.022760096+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38032: remote error: tls: bad certificate 2025-08-13T19:51:08.058379924+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38046: remote error: tls: bad certificate 2025-08-13T19:51:08.094905998+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38048: remote error: tls: bad certificate 2025-08-13T19:51:08.119050568+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38058: remote error: tls: bad certificate 2025-08-13T19:51:08.146479512+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38066: remote error: tls: bad certificate 2025-08-13T19:51:08.177530120+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38070: remote error: tls: bad certificate 2025-08-13T19:51:08.201698570+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38078: remote error: tls: bad certificate 2025-08-13T19:51:08.231027769+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38094: remote error: tls: bad certificate 2025-08-13T19:51:08.259730369+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38102: remote error: tls: bad certificate 2025-08-13T19:51:08.289092248+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38110: remote error: tls: bad certificate 2025-08-13T19:51:08.309397008+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38114: remote error: tls: bad certificate 2025-08-13T19:51:08.936181992+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:55404: remote error: tls: bad certificate 2025-08-13T19:51:08.979532561+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:55406: remote error: tls: bad certificate 2025-08-13T19:51:09.008619833+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55422: remote error: tls: bad certificate 2025-08-13T19:51:09.045163547+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55424: remote error: tls: bad certificate 2025-08-13T19:51:09.072073746+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55434: remote error: tls: bad certificate 2025-08-13T19:51:09.111767841+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55442: remote error: tls: bad certificate 2025-08-13T19:51:09.139935806+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55444: remote error: tls: bad certificate 2025-08-13T19:51:09.168216894+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55456: remote error: tls: bad certificate 2025-08-13T19:51:09.200169877+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55470: remote error: tls: bad certificate 2025-08-13T19:51:09.232715287+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:51:09.267215783+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55494: remote error: tls: bad certificate 
2025-08-13T19:51:09.296091539+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55504: remote error: tls: bad certificate 2025-08-13T19:51:09.343610397+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate 2025-08-13T19:51:09.848109466+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55528: remote error: tls: bad certificate 2025-08-13T19:51:09.878282038+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55530: remote error: tls: bad certificate 2025-08-13T19:51:09.916370777+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:51:09.945860180+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:51:09.970298888+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55548: remote error: tls: bad certificate 2025-08-13T19:51:09.989120466+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55554: remote error: tls: bad certificate 2025-08-13T19:51:10.010124886+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55564: remote error: tls: bad certificate 2025-08-13T19:51:10.029567152+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:51:10.082567337+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55582: remote error: tls: bad certificate 2025-08-13T19:51:10.200974171+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55590: remote error: tls: bad certificate 2025-08-13T19:51:10.269140019+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55596: remote error: tls: bad certificate 2025-08-13T19:51:10.526060142+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate 2025-08-13T19:51:10.565277933+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55614: remote error: tls: bad certificate 2025-08-13T19:51:10.621054507+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55624: remote error: tls: bad certificate 2025-08-13T19:51:10.680709671+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:51:10.708190187+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55650: remote error: tls: bad certificate 2025-08-13T19:51:10.765766342+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55658: remote error: tls: bad certificate 2025-08-13T19:51:10.813054613+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55670: remote error: tls: bad certificate 2025-08-13T19:51:10.851977196+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55672: remote error: tls: bad certificate 2025-08-13T19:51:10.909139640+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:51:11.112006178+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55678: remote error: tls: bad certificate 2025-08-13T19:51:11.134733157+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55682: remote error: tls: bad certificate 
2025-08-13T19:51:11.179219989+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55692: remote error: tls: bad certificate 2025-08-13T19:51:11.199674053+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:51:11.236478535+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55710: remote error: tls: bad certificate 2025-08-13T19:51:11.258368291+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55724: remote error: tls: bad certificate 2025-08-13T19:51:11.277408425+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55740: remote error: tls: bad certificate 2025-08-13T19:51:11.310586903+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55750: remote error: tls: bad certificate 2025-08-13T19:51:11.337531274+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55762: remote error: tls: bad certificate 2025-08-13T19:51:11.359350087+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55764: remote error: tls: bad certificate 2025-08-13T19:51:11.384598969+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55778: remote error: tls: bad certificate 2025-08-13T19:51:11.405238439+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55794: remote error: tls: bad certificate 2025-08-13T19:51:11.441067473+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55804: remote error: tls: bad certificate 2025-08-13T19:51:11.466377236+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55816: remote error: tls: bad certificate 2025-08-13T19:51:11.488383965+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55832: remote error: tls: bad certificate 2025-08-13T19:51:11.513172993+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55842: remote error: tls: bad certificate 2025-08-13T19:51:11.541749410+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55856: remote error: tls: bad certificate 2025-08-13T19:51:11.569414231+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55858: remote error: tls: bad certificate 2025-08-13T19:51:11.583654368+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:51:11.604279097+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55888: remote error: tls: bad certificate 2025-08-13T19:51:11.623957230+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55890: remote error: tls: bad certificate 2025-08-13T19:51:11.645020082+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55902: remote error: tls: bad certificate 2025-08-13T19:51:11.662619235+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55914: remote error: tls: bad certificate 2025-08-13T19:51:11.677941633+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55922: remote error: tls: bad certificate 2025-08-13T19:51:11.696610586+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55924: remote error: tls: bad certificate 2025-08-13T19:51:11.720107488+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55936: remote error: tls: bad certificate 
2025-08-13T19:51:11.740327816+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55952: remote error: tls: bad certificate 2025-08-13T19:51:11.757151677+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55962: remote error: tls: bad certificate 2025-08-13T19:51:11.778065764+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55976: remote error: tls: bad certificate 2025-08-13T19:51:11.800046323+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55982: remote error: tls: bad certificate 2025-08-13T19:51:11.818430788+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55986: remote error: tls: bad certificate 2025-08-13T19:51:11.835852666+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55996: remote error: tls: bad certificate 2025-08-13T19:51:11.869709524+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56006: remote error: tls: bad certificate 2025-08-13T19:51:11.901452691+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56022: remote error: tls: bad certificate 2025-08-13T19:51:11.925191249+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56032: remote error: tls: bad certificate 2025-08-13T19:51:11.944592524+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56038: remote error: tls: bad certificate 2025-08-13T19:51:11.964132062+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56044: remote error: tls: bad certificate 2025-08-13T19:51:11.987549822+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56050: remote error: tls: bad certificate 2025-08-13T19:51:12.007037978+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56062: remote error: tls: bad certificate 2025-08-13T19:51:12.026225957+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56070: remote error: tls: bad certificate 2025-08-13T19:51:12.049928134+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56074: remote error: tls: bad certificate 2025-08-13T19:51:12.066472967+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56080: remote error: tls: bad certificate 2025-08-13T19:51:12.094082986+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56084: remote error: tls: bad certificate 2025-08-13T19:51:12.121739217+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56098: remote error: tls: bad certificate 2025-08-13T19:51:12.157597582+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56100: remote error: tls: bad certificate 2025-08-13T19:51:12.177716267+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56116: remote error: tls: bad certificate 2025-08-13T19:51:12.207557610+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:51:12.232195604+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56140: remote error: tls: bad certificate 2025-08-13T19:51:12.247984205+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56142: remote error: tls: bad certificate 2025-08-13T19:51:12.263884989+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56148: remote error: tls: bad certificate 
2025-08-13T19:51:12.283077628+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56150: remote error: tls: bad certificate 2025-08-13T19:51:12.301991789+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56166: remote error: tls: bad certificate 2025-08-13T19:51:12.315766972+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56176: remote error: tls: bad certificate 2025-08-13T19:51:12.331283186+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56178: remote error: tls: bad certificate 2025-08-13T19:51:12.349030813+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56182: remote error: tls: bad certificate 2025-08-13T19:51:12.366279616+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56186: remote error: tls: bad certificate 2025-08-13T19:51:12.391048224+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56188: remote error: tls: bad certificate 2025-08-13T19:51:12.407750971+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56196: remote error: tls: bad certificate 2025-08-13T19:51:12.424480839+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56198: remote error: tls: bad certificate 2025-08-13T19:51:12.441757163+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56200: remote error: tls: bad certificate 2025-08-13T19:51:12.460493379+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56216: remote error: tls: bad certificate 2025-08-13T19:51:12.487909002+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56218: remote error: tls: bad certificate 2025-08-13T19:51:12.503107247+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56222: remote error: tls: bad certificate 2025-08-13T19:51:12.519890386+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56232: remote error: tls: bad certificate 2025-08-13T19:51:12.541567386+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56248: remote error: tls: bad certificate 2025-08-13T19:51:12.559672193+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56250: remote error: tls: bad certificate 2025-08-13T19:51:12.574982911+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56252: remote error: tls: bad certificate 2025-08-13T19:51:12.599274365+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56258: remote error: tls: bad certificate 2025-08-13T19:51:12.622022115+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56262: remote error: tls: bad certificate 2025-08-13T19:51:12.643713105+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56268: remote error: tls: bad certificate 2025-08-13T19:51:12.660965638+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56284: remote error: tls: bad certificate 2025-08-13T19:51:12.676703558+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56296: remote error: tls: bad certificate 2025-08-13T19:51:12.695582158+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56308: remote error: tls: bad certificate 2025-08-13T19:51:12.712148481+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56312: remote error: tls: bad certificate 
2025-08-13T19:51:12.729627371+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56328: remote error: tls: bad certificate 2025-08-13T19:51:12.746057320+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56340: remote error: tls: bad certificate 2025-08-13T19:51:12.769516241+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56348: remote error: tls: bad certificate 2025-08-13T19:51:12.795541405+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56358: remote error: tls: bad certificate 2025-08-13T19:51:12.811990965+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56368: remote error: tls: bad certificate 2025-08-13T19:51:12.829954018+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56382: remote error: tls: bad certificate 2025-08-13T19:51:12.846641755+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 2025-08-13T19:51:12.873191424+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56402: remote error: tls: bad certificate 2025-08-13T19:51:12.893746191+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56418: remote error: tls: bad certificate 2025-08-13T19:51:12.909586664+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56430: remote error: tls: bad certificate 2025-08-13T19:51:12.935738082+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56434: remote error: tls: bad certificate 2025-08-13T19:51:12.951402209+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56450: remote error: tls: bad certificate 2025-08-13T19:51:12.971053681+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56458: remote error: tls: bad certificate 2025-08-13T19:51:12.988277633+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:51:13.002164440+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56470: remote error: tls: bad certificate 2025-08-13T19:51:13.018524058+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56480: remote error: tls: bad certificate 2025-08-13T19:51:13.037715756+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56484: remote error: tls: bad certificate 2025-08-13T19:51:13.055989508+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56492: remote error: tls: bad certificate 2025-08-13T19:51:13.077931436+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56502: remote error: tls: bad certificate 2025-08-13T19:51:13.094364215+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56516: remote error: tls: bad certificate 2025-08-13T19:51:13.108995613+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56532: remote error: tls: bad certificate 2025-08-13T19:51:13.126513384+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56538: remote error: tls: bad certificate 2025-08-13T19:51:13.139469984+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56554: remote error: tls: bad certificate 2025-08-13T19:51:13.157605813+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56558: remote error: tls: bad certificate 
2025-08-13T19:51:13.176663217+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56566: remote error: tls: bad certificate 2025-08-13T19:51:13.193339994+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56568: remote error: tls: bad certificate 2025-08-13T19:51:13.210146094+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56570: remote error: tls: bad certificate 2025-08-13T19:51:13.237200328+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56586: remote error: tls: bad certificate 2025-08-13T19:51:13.257216900+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56602: remote error: tls: bad certificate 2025-08-13T19:51:13.275069500+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56614: remote error: tls: bad certificate 2025-08-13T19:51:13.296548644+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56618: remote error: tls: bad certificate 2025-08-13T19:51:13.336249349+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56632: remote error: tls: bad certificate 2025-08-13T19:51:13.375628944+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56642: remote error: tls: bad certificate 2025-08-13T19:51:13.417585093+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56654: remote error: tls: bad certificate 2025-08-13T19:51:13.460748847+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56662: remote error: tls: bad certificate 2025-08-13T19:51:13.496887470+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56672: remote error: tls: bad certificate 2025-08-13T19:51:13.538101458+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56674: remote error: tls: bad certificate 2025-08-13T19:51:13.577150764+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56676: remote error: tls: bad certificate 2025-08-13T19:51:13.614976155+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56692: remote error: tls: bad certificate 2025-08-13T19:51:13.654620938+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56704: remote error: tls: bad certificate 2025-08-13T19:51:13.696578717+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56720: remote error: tls: bad certificate 2025-08-13T19:51:13.736185909+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56726: remote error: tls: bad certificate 2025-08-13T19:51:13.776889442+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56738: remote error: tls: bad certificate 2025-08-13T19:51:13.825333577+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56750: remote error: tls: bad certificate 2025-08-13T19:51:13.878017402+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56754: remote error: tls: bad certificate 2025-08-13T19:51:13.918018146+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56762: remote error: tls: bad certificate 2025-08-13T19:51:13.940852788+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56778: remote error: tls: bad certificate 2025-08-13T19:51:13.975715405+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56784: remote error: tls: bad certificate 
2025-08-13T19:51:14.017318674+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56788: remote error: tls: bad certificate 2025-08-13T19:51:14.055215897+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56796: remote error: tls: bad certificate 2025-08-13T19:51:14.104613509+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56808: remote error: tls: bad certificate 2025-08-13T19:51:14.136732097+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56814: remote error: tls: bad certificate 2025-08-13T19:51:14.173913969+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56822: remote error: tls: bad certificate 2025-08-13T19:51:14.220596933+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56832: remote error: tls: bad certificate 2025-08-13T19:51:14.255966324+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56846: remote error: tls: bad certificate 2025-08-13T19:51:14.301126084+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56858: remote error: tls: bad certificate 2025-08-13T19:51:14.330879975+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56862: remote error: tls: bad certificate 2025-08-13T19:51:14.337535445+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56872: remote error: tls: bad certificate 2025-08-13T19:51:14.356650221+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56882: remote error: tls: bad certificate 2025-08-13T19:51:14.385160856+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56894: remote error: tls: bad certificate 2025-08-13T19:51:14.386954837+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56906: remote error: tls: bad certificate 2025-08-13T19:51:14.405027844+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56910: remote error: tls: bad certificate 2025-08-13T19:51:14.418855549+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56920: remote error: tls: bad certificate 2025-08-13T19:51:14.430440620+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56926: remote error: tls: bad certificate 2025-08-13T19:51:14.455565418+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56930: remote error: tls: bad certificate 2025-08-13T19:51:14.496387855+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56934: remote error: tls: bad certificate 2025-08-13T19:51:14.537382497+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56950: remote error: tls: bad certificate 2025-08-13T19:51:14.578863402+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56966: remote error: tls: bad certificate 2025-08-13T19:51:14.625207267+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56968: remote error: tls: bad certificate 2025-08-13T19:51:14.666023853+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56970: remote error: tls: bad certificate 2025-08-13T19:51:14.697620196+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56982: remote error: tls: bad certificate 2025-08-13T19:51:14.737162267+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56998: remote error: tls: bad certificate 
2025-08-13T19:51:14.779391794+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57008: remote error: tls: bad certificate 2025-08-13T19:51:14.832110740+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57016: remote error: tls: bad certificate 2025-08-13T19:51:14.857645800+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57024: remote error: tls: bad certificate 2025-08-13T19:51:14.899008222+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57028: remote error: tls: bad certificate 2025-08-13T19:51:14.942346031+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57038: remote error: tls: bad certificate 2025-08-13T19:51:14.975859139+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57040: remote error: tls: bad certificate 2025-08-13T19:51:15.016744567+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57046: remote error: tls: bad certificate 2025-08-13T19:51:15.054408254+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57056: remote error: tls: bad certificate 2025-08-13T19:51:15.094640014+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57060: remote error: tls: bad certificate 2025-08-13T19:51:15.139407263+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57066: remote error: tls: bad certificate 2025-08-13T19:51:15.176716129+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57068: remote error: tls: bad certificate 2025-08-13T19:51:15.220768898+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57076: remote error: tls: bad certificate 2025-08-13T19:51:15.252941698+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57092: remote error: tls: bad certificate 2025-08-13T19:51:15.358891256+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57104: remote error: tls: bad certificate 2025-08-13T19:51:15.619594007+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57112: remote error: tls: bad certificate 2025-08-13T19:51:15.684559674+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57120: remote error: tls: bad certificate 2025-08-13T19:51:15.848595482+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57122: remote error: tls: bad certificate 2025-08-13T19:51:15.893192097+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57130: remote error: tls: bad certificate 2025-08-13T19:51:16.000179064+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57140: remote error: tls: bad certificate 2025-08-13T19:51:16.049123203+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57144: remote error: tls: bad certificate 2025-08-13T19:51:16.079272745+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57146: remote error: tls: bad certificate 2025-08-13T19:51:16.288498265+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57162: remote error: tls: bad certificate 2025-08-13T19:51:16.361176402+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57172: remote error: tls: bad certificate 2025-08-13T19:51:16.380248147+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57180: remote error: tls: bad certificate 
2025-08-13T19:51:16.402147563+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57182: remote error: tls: bad certificate 2025-08-13T19:51:16.419362805+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57190: remote error: tls: bad certificate 2025-08-13T19:51:16.437168534+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57194: remote error: tls: bad certificate 2025-08-13T19:51:16.470389923+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57204: remote error: tls: bad certificate 2025-08-13T19:51:16.492042662+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57218: remote error: tls: bad certificate 2025-08-13T19:51:16.512763464+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57220: remote error: tls: bad certificate 2025-08-13T19:51:16.532359734+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57224: remote error: tls: bad certificate 2025-08-13T19:51:16.552450229+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57230: remote error: tls: bad certificate 2025-08-13T19:51:16.572054709+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57238: remote error: tls: bad certificate 2025-08-13T19:51:16.594323935+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57242: remote error: tls: bad certificate 2025-08-13T19:51:16.637283303+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57254: remote error: tls: bad certificate 2025-08-13T19:51:16.652766666+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57256: remote error: tls: bad certificate 2025-08-13T19:51:16.669300648+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57260: remote error: tls: bad certificate 2025-08-13T19:51:16.700094658+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57262: remote error: tls: bad certificate 2025-08-13T19:51:16.724684431+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57268: remote error: tls: bad certificate 2025-08-13T19:51:16.744451996+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57272: remote error: tls: bad certificate 2025-08-13T19:51:16.758405375+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57280: remote error: tls: bad certificate 2025-08-13T19:51:16.773868067+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57290: remote error: tls: bad certificate 2025-08-13T19:51:16.795180356+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57296: remote error: tls: bad certificate 2025-08-13T19:51:16.830660130+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57310: remote error: tls: bad certificate 2025-08-13T19:51:16.847641125+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57314: remote error: tls: bad certificate 2025-08-13T19:51:16.864753594+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57326: remote error: tls: bad certificate 2025-08-13T19:51:16.886215948+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57330: remote error: tls: bad certificate 2025-08-13T19:51:16.907208558+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate 
2025-08-13T19:51:16.923319178+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57348: remote error: tls: bad certificate 2025-08-13T19:51:16.937390440+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57362: remote error: tls: bad certificate 2025-08-13T19:51:16.953968324+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57376: remote error: tls: bad certificate 2025-08-13T19:51:16.969876929+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57384: remote error: tls: bad certificate 2025-08-13T19:51:16.988606234+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57398: remote error: tls: bad certificate 2025-08-13T19:51:17.004299883+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57414: remote error: tls: bad certificate 2025-08-13T19:51:17.019194058+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57422: remote error: tls: bad certificate 2025-08-13T19:51:17.037623665+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57424: remote error: tls: bad certificate 2025-08-13T19:51:17.054081786+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57440: remote error: tls: bad certificate 2025-08-13T19:51:17.070039562+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate 2025-08-13T19:51:17.095944982+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57470: remote error: tls: bad certificate 2025-08-13T19:51:17.136189822+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57474: remote error: tls: bad certificate 2025-08-13T19:51:17.175875847+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate 2025-08-13T19:51:17.218141185+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57494: remote error: tls: bad certificate 2025-08-13T19:51:17.255710168+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57498: remote error: tls: bad certificate 2025-08-13T19:51:17.297227785+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57500: remote error: tls: bad certificate 2025-08-13T19:51:17.336637281+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57506: remote error: tls: bad certificate 2025-08-13T19:51:17.377241142+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57518: remote error: tls: bad certificate 2025-08-13T19:51:17.419906141+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57526: remote error: tls: bad certificate 2025-08-13T19:51:17.454640834+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57540: remote error: tls: bad certificate 2025-08-13T19:51:17.495319557+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57542: remote error: tls: bad certificate 2025-08-13T19:51:17.545379047+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57548: remote error: tls: bad certificate 2025-08-13T19:51:17.574591122+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57556: remote error: tls: bad certificate 2025-08-13T19:51:17.617194900+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57564: remote error: tls: bad certificate 
2025-08-13T19:51:17.658094319+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57578: remote error: tls: bad certificate 2025-08-13T19:51:17.696045924+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57588: remote error: tls: bad certificate 2025-08-13T19:51:17.736190631+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57598: remote error: tls: bad certificate 2025-08-13T19:51:17.775011651+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57610: remote error: tls: bad certificate 2025-08-13T19:51:17.816130585+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57618: remote error: tls: bad certificate 2025-08-13T19:51:17.857471686+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57622: remote error: tls: bad certificate 2025-08-13T19:51:17.893332051+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57634: remote error: tls: bad certificate 2025-08-13T19:51:17.937021800+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57646: remote error: tls: bad certificate 2025-08-13T19:51:17.974630585+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57658: remote error: tls: bad certificate 2025-08-13T19:51:18.018132358+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57660: remote error: tls: bad certificate 2025-08-13T19:51:18.055549548+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57674: remote error: tls: bad certificate 2025-08-13T19:51:18.098351461+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57676: remote error: tls: bad certificate 2025-08-13T19:51:18.140087574+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57690: remote error: tls: bad certificate 2025-08-13T19:51:18.175060213+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57702: remote error: tls: bad certificate 2025-08-13T19:51:18.218624428+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate 2025-08-13T19:51:18.258133277+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57734: remote error: tls: bad certificate 2025-08-13T19:51:18.296634998+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57740: remote error: tls: bad certificate 2025-08-13T19:51:18.337196447+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57748: remote error: tls: bad certificate 2025-08-13T19:51:18.378149598+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57764: remote error: tls: bad certificate 2025-08-13T19:51:18.415622969+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57774: remote error: tls: bad certificate 2025-08-13T19:51:18.458387461+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57790: remote error: tls: bad certificate 2025-08-13T19:51:18.495199003+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57798: remote error: tls: bad certificate 2025-08-13T19:51:18.539216271+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57800: remote error: tls: bad certificate 2025-08-13T19:51:18.577979439+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57804: remote error: tls: bad certificate 
2025-08-13T19:51:18.614594906+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57808: remote error: tls: bad certificate 2025-08-13T19:51:18.655243847+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57812: remote error: tls: bad certificate 2025-08-13T19:51:18.696197528+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57816: remote error: tls: bad certificate 2025-08-13T19:51:18.734933055+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48794: remote error: tls: bad certificate 2025-08-13T19:51:18.782486814+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48798: remote error: tls: bad certificate 2025-08-13T19:51:18.815491857+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48814: remote error: tls: bad certificate 2025-08-13T19:51:18.853877375+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48826: remote error: tls: bad certificate 2025-08-13T19:51:18.896680738+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48842: remote error: tls: bad certificate 2025-08-13T19:51:18.938149613+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48844: remote error: tls: bad certificate 2025-08-13T19:51:18.973016829+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48858: remote error: tls: bad certificate 2025-08-13T19:51:19.018100608+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48872: remote error: tls: bad certificate 2025-08-13T19:51:19.057138824+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48888: remote error: tls: bad certificate 2025-08-13T19:51:19.095294264+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48902: remote error: tls: bad certificate 2025-08-13T19:51:19.136037609+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48904: remote error: tls: bad certificate 2025-08-13T19:51:19.175518737+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48910: remote error: tls: bad certificate 2025-08-13T19:51:19.218533266+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48918: remote error: tls: bad certificate 2025-08-13T19:51:19.256003887+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48928: remote error: tls: bad certificate 2025-08-13T19:51:19.298466001+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48932: remote error: tls: bad certificate 2025-08-13T19:51:19.335958282+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48946: remote error: tls: bad certificate 2025-08-13T19:51:19.378441817+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48956: remote error: tls: bad certificate 2025-08-13T19:51:19.416966898+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48964: remote error: tls: bad certificate 2025-08-13T19:51:19.456156398+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48970: remote error: tls: bad certificate 2025-08-13T19:51:19.497225832+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48984: remote error: tls: bad certificate 2025-08-13T19:51:19.535277549+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate 
2025-08-13T19:51:19.574767638+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48988: remote error: tls: bad certificate 2025-08-13T19:51:19.619140096+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49002: remote error: tls: bad certificate 2025-08-13T19:51:19.760504665+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49018: remote error: tls: bad certificate 2025-08-13T19:51:19.797073871+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49034: remote error: tls: bad certificate 2025-08-13T19:51:19.819899484+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49046: remote error: tls: bad certificate 2025-08-13T19:51:19.845909868+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49054: remote error: tls: bad certificate 2025-08-13T19:51:19.870769268+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49056: remote error: tls: bad certificate 2025-08-13T19:51:19.890556244+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49070: remote error: tls: bad certificate 2025-08-13T19:51:19.907127727+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49076: remote error: tls: bad certificate 2025-08-13T19:51:19.935927590+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49082: remote error: tls: bad certificate 2025-08-13T19:51:19.975422599+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49092: remote error: tls: bad certificate 2025-08-13T19:51:20.014550047+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49106: remote error: tls: bad certificate 2025-08-13T19:51:20.054914621+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate 2025-08-13T19:51:20.095512841+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate 2025-08-13T19:51:20.136422211+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49130: remote error: tls: bad certificate 2025-08-13T19:51:20.179887973+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49138: remote error: tls: bad certificate 2025-08-13T19:51:20.216285713+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate 2025-08-13T19:51:20.256577895+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49154: remote error: tls: bad certificate 2025-08-13T19:51:20.294250642+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49166: remote error: tls: bad certificate 2025-08-13T19:51:20.338264850+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49176: remote error: tls: bad certificate 2025-08-13T19:51:20.378175010+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49192: remote error: tls: bad certificate 2025-08-13T19:51:20.420151640+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49208: remote error: tls: bad certificate 2025-08-13T19:51:20.455171891+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49224: remote error: tls: bad certificate 2025-08-13T19:51:20.497421358+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49230: remote error: tls: bad certificate 
2025-08-13T19:51:20.539308466+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49232: remote error: tls: bad certificate 2025-08-13T19:51:20.576921541+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49242: remote error: tls: bad certificate 2025-08-13T19:51:20.620756943+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49248: remote error: tls: bad certificate 2025-08-13T19:51:20.655605999+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49256: remote error: tls: bad certificate 2025-08-13T19:51:20.701613924+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49262: remote error: tls: bad certificate 2025-08-13T19:51:20.737487670+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49268: remote error: tls: bad certificate 2025-08-13T19:51:20.775267579+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49272: remote error: tls: bad certificate 2025-08-13T19:51:20.814435639+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49284: remote error: tls: bad certificate 2025-08-13T19:51:20.861732111+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:51:20.897545024+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49306: remote error: tls: bad certificate 2025-08-13T19:51:20.935412567+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49312: remote error: tls: bad certificate 2025-08-13T19:51:20.976219453+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49314: remote error: tls: bad certificate 2025-08-13T19:51:21.015857946+00:00 stderr F 2025/08/13 19:51:21 http: TLS handshake error from 127.0.0.1:49324: remote error: tls: bad certificate 2025-08-13T19:51:22.820721319+00:00 stderr F 2025/08/13 19:51:22 http: TLS handshake error from 127.0.0.1:49336: remote error: tls: bad certificate 2025-08-13T19:51:24.650938228+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49344: remote error: tls: bad certificate 2025-08-13T19:51:24.671006000+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:51:24.693388507+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49364: remote error: tls: bad certificate 2025-08-13T19:51:24.713368767+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49370: remote error: tls: bad certificate 2025-08-13T19:51:24.735238740+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49382: remote error: tls: bad certificate 2025-08-13T19:51:25.229155691+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49384: remote error: tls: bad certificate 2025-08-13T19:51:25.246324780+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49398: remote error: tls: bad certificate 2025-08-13T19:51:25.263464138+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49406: remote error: tls: bad certificate 2025-08-13T19:51:25.277217700+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate 2025-08-13T19:51:25.293293068+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49422: remote error: tls: bad certificate 
2025-08-13T19:51:25.310585241+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:51:25.325511446+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49426: remote error: tls: bad certificate 2025-08-13T19:51:25.341518872+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49432: remote error: tls: bad certificate 2025-08-13T19:51:25.364286111+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate 2025-08-13T19:51:25.381632405+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49450: remote error: tls: bad certificate 2025-08-13T19:51:25.398575458+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49462: remote error: tls: bad certificate 2025-08-13T19:51:25.414258465+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49464: remote error: tls: bad certificate 2025-08-13T19:51:25.430355163+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49466: remote error: tls: bad certificate 2025-08-13T19:51:25.454371238+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49478: remote error: tls: bad certificate 2025-08-13T19:51:25.471703371+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate 2025-08-13T19:51:25.487219043+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49490: remote error: tls: bad certificate 2025-08-13T19:51:25.505701240+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49500: remote error: tls: bad certificate 2025-08-13T19:51:25.524707501+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49502: remote error: tls: bad certificate 2025-08-13T19:51:25.543520727+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49504: remote error: tls: bad certificate 2025-08-13T19:51:25.560540912+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49520: remote error: tls: bad certificate 2025-08-13T19:51:25.578019030+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49530: remote error: tls: bad certificate 2025-08-13T19:51:25.610920948+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49540: remote error: tls: bad certificate 2025-08-13T19:51:25.627284454+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49556: remote error: tls: bad certificate 2025-08-13T19:51:25.645934675+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate 2025-08-13T19:51:25.663733742+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49574: remote error: tls: bad certificate 2025-08-13T19:51:25.681895430+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49584: remote error: tls: bad certificate 2025-08-13T19:51:25.705085211+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49598: remote error: tls: bad certificate 2025-08-13T19:51:25.722750824+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate 2025-08-13T19:51:25.741721714+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49606: remote error: tls: bad certificate 
2025-08-13T19:51:25.763099613+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49616: remote error: tls: bad certificate 2025-08-13T19:51:25.782352652+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49624: remote error: tls: bad certificate 2025-08-13T19:51:25.802023502+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49626: remote error: tls: bad certificate 2025-08-13T19:51:25.821910699+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate 2025-08-13T19:51:25.841640791+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49642: remote error: tls: bad certificate 2025-08-13T19:51:25.862193877+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49654: remote error: tls: bad certificate 2025-08-13T19:51:25.877658537+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49656: remote error: tls: bad certificate 2025-08-13T19:51:25.892992604+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49662: remote error: tls: bad certificate 2025-08-13T19:51:25.918943874+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49670: remote error: tls: bad certificate 2025-08-13T19:51:25.945947213+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate 2025-08-13T19:51:25.977615655+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49694: remote error: tls: bad certificate 2025-08-13T19:51:26.003864763+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49708: remote error: tls: bad certificate 2025-08-13T19:51:26.020383534+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49720: remote error: tls: bad certificate 2025-08-13T19:51:26.038353276+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49726: remote error: tls: bad certificate 2025-08-13T19:51:26.058040357+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49732: remote error: tls: bad certificate 2025-08-13T19:51:26.081340790+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49734: remote error: tls: bad certificate 2025-08-13T19:51:26.097624354+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49744: remote error: tls: bad certificate 2025-08-13T19:51:26.113924419+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49758: remote error: tls: bad certificate 2025-08-13T19:51:26.129926265+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49764: remote error: tls: bad certificate 2025-08-13T19:51:26.144420338+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate 2025-08-13T19:51:26.167900957+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49776: remote error: tls: bad certificate 2025-08-13T19:51:26.181471043+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate 2025-08-13T19:51:26.195262196+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49796: remote error: tls: bad certificate 2025-08-13T19:51:26.213324421+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49802: remote error: tls: bad certificate 
2025-08-13T19:51:26.230124529+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49810: remote error: tls: bad certificate 2025-08-13T19:51:26.246231448+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49816: remote error: tls: bad certificate 2025-08-13T19:51:26.269661956+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49820: remote error: tls: bad certificate 2025-08-13T19:51:26.286405443+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49824: remote error: tls: bad certificate 2025-08-13T19:51:26.303175101+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49836: remote error: tls: bad certificate 2025-08-13T19:51:26.318382014+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49838: remote error: tls: bad certificate 2025-08-13T19:51:26.335882373+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49846: remote error: tls: bad certificate 2025-08-13T19:51:26.350144109+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49860: remote error: tls: bad certificate 2025-08-13T19:51:26.365631970+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49874: remote error: tls: bad certificate 2025-08-13T19:51:26.383921891+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49882: remote error: tls: bad certificate 2025-08-13T19:51:26.400607027+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49896: remote error: tls: bad certificate 2025-08-13T19:51:26.419436783+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49906: remote error: tls: bad certificate 2025-08-13T19:51:26.438519207+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49918: remote error: tls: bad certificate 2025-08-13T19:51:26.454055969+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49920: remote error: tls: bad certificate 2025-08-13T19:51:34.888909375+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44154: remote error: tls: bad certificate 2025-08-13T19:51:34.911553090+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44164: remote error: tls: bad certificate 2025-08-13T19:51:34.932418284+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44174: remote error: tls: bad certificate 2025-08-13T19:51:34.953989569+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44182: remote error: tls: bad certificate 2025-08-13T19:51:34.977100287+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44194: remote error: tls: bad certificate 2025-08-13T19:51:35.230890378+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44204: remote error: tls: bad certificate 2025-08-13T19:51:35.246219745+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44220: remote error: tls: bad certificate 2025-08-13T19:51:35.260329997+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44228: remote error: tls: bad certificate 2025-08-13T19:51:35.278439343+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44238: remote error: tls: bad certificate 2025-08-13T19:51:35.298312789+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44250: remote error: tls: bad certificate 
2025-08-13T19:51:35.324349951+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44252: remote error: tls: bad certificate 2025-08-13T19:51:35.341884941+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44268: remote error: tls: bad certificate 2025-08-13T19:51:35.358668909+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44274: remote error: tls: bad certificate 2025-08-13T19:51:35.374070068+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44276: remote error: tls: bad certificate 2025-08-13T19:51:35.387741277+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44282: remote error: tls: bad certificate 2025-08-13T19:51:35.412084041+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44294: remote error: tls: bad certificate 2025-08-13T19:51:35.428288072+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44298: remote error: tls: bad certificate 2025-08-13T19:51:35.445766120+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44314: remote error: tls: bad certificate 2025-08-13T19:51:35.461766916+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44318: remote error: tls: bad certificate 2025-08-13T19:51:35.481981802+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44334: remote error: tls: bad certificate 2025-08-13T19:51:35.506534571+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44342: remote error: tls: bad certificate 2025-08-13T19:51:35.521755915+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44348: remote error: tls: bad certificate 2025-08-13T19:51:35.536405812+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44360: remote error: tls: bad certificate 2025-08-13T19:51:35.551632586+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44374: remote error: tls: bad certificate 2025-08-13T19:51:35.572519391+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44376: remote error: tls: bad certificate 2025-08-13T19:51:35.596243597+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44388: remote error: tls: bad certificate 2025-08-13T19:51:35.613958172+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44394: remote error: tls: bad certificate 2025-08-13T19:51:35.631295326+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44410: remote error: tls: bad certificate 2025-08-13T19:51:35.650620427+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44412: remote error: tls: bad certificate 2025-08-13T19:51:35.667202959+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44426: remote error: tls: bad certificate 2025-08-13T19:51:35.680952901+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44442: remote error: tls: bad certificate 2025-08-13T19:51:35.699039085+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44446: remote error: tls: bad certificate 2025-08-13T19:51:35.714517136+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44460: remote error: tls: bad certificate 2025-08-13T19:51:35.734411193+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44472: remote error: tls: bad certificate 
2025-08-13T19:51:35.749755970+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44480: remote error: tls: bad certificate 2025-08-13T19:51:35.769064090+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44486: remote error: tls: bad certificate 2025-08-13T19:51:35.793466566+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44494: remote error: tls: bad certificate 2025-08-13T19:51:35.809748190+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44504: remote error: tls: bad certificate 2025-08-13T19:51:35.823169672+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44508: remote error: tls: bad certificate 2025-08-13T19:51:35.844928822+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44522: remote error: tls: bad certificate 2025-08-13T19:51:35.862579885+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44532: remote error: tls: bad certificate 2025-08-13T19:51:35.878615402+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44536: remote error: tls: bad certificate 2025-08-13T19:51:35.893603579+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44552: remote error: tls: bad certificate 2025-08-13T19:51:35.908233105+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44554: remote error: tls: bad certificate 2025-08-13T19:51:35.924323204+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44558: remote error: tls: bad certificate 2025-08-13T19:51:35.938981422+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44572: remote error: tls: bad certificate 2025-08-13T19:51:35.953170416+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44582: remote error: tls: bad certificate 2025-08-13T19:51:35.968663917+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44596: remote error: tls: bad certificate 2025-08-13T19:51:35.998586760+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44604: remote error: tls: bad certificate 2025-08-13T19:51:36.013735551+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44620: remote error: tls: bad certificate 2025-08-13T19:51:36.033660039+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate 2025-08-13T19:51:36.048400469+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44628: remote error: tls: bad certificate 2025-08-13T19:51:36.063665594+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44640: remote error: tls: bad certificate 2025-08-13T19:51:36.105005242+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44654: remote error: tls: bad certificate 2025-08-13T19:51:36.170414595+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44664: remote error: tls: bad certificate 2025-08-13T19:51:36.185508785+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44680: remote error: tls: bad certificate 2025-08-13T19:51:36.210122447+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44696: remote error: tls: bad certificate 2025-08-13T19:51:36.225264118+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44710: remote error: tls: bad certificate 
2025-08-13T19:51:36.240088881+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate 2025-08-13T19:51:36.254623595+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44740: remote error: tls: bad certificate 2025-08-13T19:51:36.265593497+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44746: remote error: tls: bad certificate 2025-08-13T19:51:36.279867284+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44754: remote error: tls: bad certificate 2025-08-13T19:51:36.294984375+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44760: remote error: tls: bad certificate 2025-08-13T19:51:36.315271863+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44766: remote error: tls: bad certificate 2025-08-13T19:51:36.331644569+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44770: remote error: tls: bad certificate 2025-08-13T19:51:36.349393065+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44776: remote error: tls: bad certificate 2025-08-13T19:51:36.367290895+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44778: remote error: tls: bad certificate 2025-08-13T19:51:36.383337162+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44790: remote error: tls: bad certificate 2025-08-13T19:51:36.398226026+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44798: remote error: tls: bad certificate 2025-08-13T19:51:36.415058326+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44804: remote error: tls: bad certificate 2025-08-13T19:51:36.429761165+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44818: remote error: tls: bad certificate 2025-08-13T19:51:36.447405417+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate 2025-08-13T19:51:45.229423954+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58312: remote error: tls: bad certificate 2025-08-13T19:51:45.247171210+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58324: remote error: tls: bad certificate 2025-08-13T19:51:45.257149764+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58336: remote error: tls: bad certificate 2025-08-13T19:51:45.273425098+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58352: remote error: tls: bad certificate 2025-08-13T19:51:45.283935737+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58366: remote error: tls: bad certificate 2025-08-13T19:51:45.295551288+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58372: remote error: tls: bad certificate 2025-08-13T19:51:45.308979121+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58378: remote error: tls: bad certificate 2025-08-13T19:51:45.312430609+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58382: remote error: tls: bad certificate 2025-08-13T19:51:45.331969776+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58394: remote error: tls: bad certificate 2025-08-13T19:51:45.331969776+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58404: remote error: tls: bad certificate 
2025-08-13T19:51:45.349488195+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58406: remote error: tls: bad certificate 2025-08-13T19:51:45.352435989+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58416: remote error: tls: bad certificate 2025-08-13T19:51:45.369491465+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58422: remote error: tls: bad certificate 2025-08-13T19:51:45.388581459+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58436: remote error: tls: bad certificate 2025-08-13T19:51:45.407101377+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58452: remote error: tls: bad certificate 2025-08-13T19:51:45.424728109+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58454: remote error: tls: bad certificate 2025-08-13T19:51:45.441388033+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58464: remote error: tls: bad certificate 2025-08-13T19:51:45.457253205+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58472: remote error: tls: bad certificate 2025-08-13T19:51:45.480961341+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58486: remote error: tls: bad certificate 2025-08-13T19:51:45.500220840+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58488: remote error: tls: bad certificate 2025-08-13T19:51:45.520414815+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58502: remote error: tls: bad certificate 2025-08-13T19:51:45.539251772+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58518: remote error: tls: bad certificate 2025-08-13T19:51:45.557504182+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58526: remote error: tls: bad certificate 2025-08-13T19:51:45.571286554+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58536: remote error: tls: bad certificate 2025-08-13T19:51:45.589704359+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58542: remote error: tls: bad certificate 2025-08-13T19:51:45.607948879+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58548: remote error: tls: bad certificate 2025-08-13T19:51:45.625083857+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58560: remote error: tls: bad certificate 2025-08-13T19:51:45.646020864+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58568: remote error: tls: bad certificate 2025-08-13T19:51:45.667046433+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58584: remote error: tls: bad certificate 2025-08-13T19:51:45.690323306+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58598: remote error: tls: bad certificate 2025-08-13T19:51:45.709131022+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58604: remote error: tls: bad certificate 2025-08-13T19:51:45.725559170+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58620: remote error: tls: bad certificate 2025-08-13T19:51:45.747176896+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58636: remote error: tls: bad certificate 2025-08-13T19:51:45.765553239+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58638: remote error: tls: bad certificate 
2025-08-13T19:51:45.785663762+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58642: remote error: tls: bad certificate 2025-08-13T19:51:45.804608922+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58648: remote error: tls: bad certificate 2025-08-13T19:51:45.820687760+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58662: remote error: tls: bad certificate 2025-08-13T19:51:45.837835759+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58678: remote error: tls: bad certificate 2025-08-13T19:51:45.855400129+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58694: remote error: tls: bad certificate 2025-08-13T19:51:45.874041070+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58706: remote error: tls: bad certificate 2025-08-13T19:51:45.895304306+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58718: remote error: tls: bad certificate 2025-08-13T19:51:45.918644221+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58722: remote error: tls: bad certificate 2025-08-13T19:51:45.937194459+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58728: remote error: tls: bad certificate 2025-08-13T19:51:45.956282993+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58742: remote error: tls: bad certificate 2025-08-13T19:51:45.973995498+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58752: remote error: tls: bad certificate 2025-08-13T19:51:45.991052044+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58758: remote error: tls: bad certificate 2025-08-13T19:51:46.014019558+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58772: remote error: tls: bad certificate 2025-08-13T19:51:46.036255592+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58782: remote error: tls: bad certificate 2025-08-13T19:51:46.060187624+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58790: remote error: tls: bad certificate 2025-08-13T19:51:46.079926496+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58806: remote error: tls: bad certificate 2025-08-13T19:51:46.100487972+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58818: remote error: tls: bad certificate 2025-08-13T19:51:46.128986464+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58834: remote error: tls: bad certificate 2025-08-13T19:51:46.144252709+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58844: remote error: tls: bad certificate 2025-08-13T19:51:46.158254278+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58852: remote error: tls: bad certificate 2025-08-13T19:51:46.178462863+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58854: remote error: tls: bad certificate 2025-08-13T19:51:46.198764252+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58862: remote error: tls: bad certificate 2025-08-13T19:51:46.213966015+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58874: remote error: tls: bad certificate 2025-08-13T19:51:46.227996355+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58882: remote error: tls: bad certificate 
[... identical "http: TLS handshake error from 127.0.0.1:<port>: remote error: tls: bad certificate" entries from local ephemeral ports repeat continuously, several hundred more times, between 2025-08-13T19:51:46 and 19:52:09 UTC ...]
2025-08-13T19:52:09.790123386+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39130: remote error: tls: bad certificate 2025-08-13T19:52:09.810916228+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39136: remote error: tls: bad certificate 2025-08-13T19:52:09.835099087+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39148: remote error: tls: bad certificate 2025-08-13T19:52:09.852948146+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39160: remote error: tls: bad certificate 2025-08-13T19:52:09.868696204+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39176: remote error: tls: bad certificate 2025-08-13T19:52:09.889639671+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39180: remote error: tls: bad certificate 2025-08-13T19:52:09.907500820+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39186: remote error: tls: bad certificate 2025-08-13T19:52:09.923646710+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39198: remote error: tls: bad certificate 2025-08-13T19:52:09.939259035+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39210: remote error: tls: bad certificate 2025-08-13T19:52:09.953406518+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39218: remote error: tls: bad certificate 2025-08-13T19:52:09.977557946+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39234: remote error: tls: bad certificate 2025-08-13T19:52:10.413194018+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39238: remote error: tls: bad certificate 2025-08-13T19:52:10.434192256+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39242: remote error: tls: bad certificate 2025-08-13T19:52:10.458764126+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39256: remote error: tls: bad certificate 2025-08-13T19:52:10.491059726+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39268: remote error: tls: bad certificate 2025-08-13T19:52:10.510424878+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39278: remote error: tls: bad certificate 2025-08-13T19:52:10.530265113+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39290: remote error: tls: bad certificate 2025-08-13T19:52:10.556875341+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39294: remote error: tls: bad certificate 2025-08-13T19:52:10.576465279+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39308: remote error: tls: bad certificate 2025-08-13T19:52:10.596180591+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39324: remote error: tls: bad certificate 2025-08-13T19:52:10.611123507+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39330: remote error: tls: bad certificate 2025-08-13T19:52:10.627052181+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39336: remote error: tls: bad certificate 2025-08-13T19:52:10.647875534+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39342: remote error: tls: bad certificate 2025-08-13T19:52:10.669904852+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39344: remote error: tls: bad certificate 
2025-08-13T19:52:10.690860479+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39346: remote error: tls: bad certificate 2025-08-13T19:52:10.708047158+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39362: remote error: tls: bad certificate 2025-08-13T19:52:10.727132012+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39372: remote error: tls: bad certificate 2025-08-13T19:52:10.743498018+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39378: remote error: tls: bad certificate 2025-08-13T19:52:10.762885421+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39380: remote error: tls: bad certificate 2025-08-13T19:52:10.777832007+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39384: remote error: tls: bad certificate 2025-08-13T19:52:10.797945690+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39400: remote error: tls: bad certificate 2025-08-13T19:52:10.814191273+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39410: remote error: tls: bad certificate 2025-08-13T19:52:10.823028084+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39416: remote error: tls: bad certificate 2025-08-13T19:52:10.831588288+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39418: remote error: tls: bad certificate 2025-08-13T19:52:10.852153644+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39424: remote error: tls: bad certificate 2025-08-13T19:52:10.871182816+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39430: remote error: tls: bad certificate 2025-08-13T19:52:10.886941185+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39446: remote error: tls: bad certificate 2025-08-13T19:52:10.907397778+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39462: remote error: tls: bad certificate 2025-08-13T19:52:10.927762728+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39474: remote error: tls: bad certificate 2025-08-13T19:52:10.944996149+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39490: remote error: tls: bad certificate 2025-08-13T19:52:10.963029773+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39502: remote error: tls: bad certificate 2025-08-13T19:52:10.978043161+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39512: remote error: tls: bad certificate 2025-08-13T19:52:10.994343835+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39524: remote error: tls: bad certificate 2025-08-13T19:52:11.009925899+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39540: remote error: tls: bad certificate 2025-08-13T19:52:11.035461807+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39552: remote error: tls: bad certificate 2025-08-13T19:52:11.054002075+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39564: remote error: tls: bad certificate 2025-08-13T19:52:11.071367320+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39572: remote error: tls: bad certificate 2025-08-13T19:52:11.089543108+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39584: remote error: tls: bad certificate 
2025-08-13T19:52:11.107528200+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39594: remote error: tls: bad certificate 2025-08-13T19:52:11.127368165+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39610: remote error: tls: bad certificate 2025-08-13T19:52:11.144221796+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39618: remote error: tls: bad certificate 2025-08-13T19:52:11.161006134+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39624: remote error: tls: bad certificate 2025-08-13T19:52:11.179588303+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39626: remote error: tls: bad certificate 2025-08-13T19:52:11.196653949+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39630: remote error: tls: bad certificate 2025-08-13T19:52:11.221924519+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39644: remote error: tls: bad certificate 2025-08-13T19:52:11.236727971+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39658: remote error: tls: bad certificate 2025-08-13T19:52:11.257653857+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39674: remote error: tls: bad certificate 2025-08-13T19:52:11.281644401+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39682: remote error: tls: bad certificate 2025-08-13T19:52:11.298472340+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39688: remote error: tls: bad certificate 2025-08-13T19:52:11.315032192+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39702: remote error: tls: bad certificate 2025-08-13T19:52:11.331735458+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39712: remote error: tls: bad certificate 2025-08-13T19:52:11.347877758+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39720: remote error: tls: bad certificate 2025-08-13T19:52:11.364930624+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39722: remote error: tls: bad certificate 2025-08-13T19:52:11.381492806+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39730: remote error: tls: bad certificate 2025-08-13T19:52:11.396977417+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39746: remote error: tls: bad certificate 2025-08-13T19:52:11.411919512+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39748: remote error: tls: bad certificate 2025-08-13T19:52:11.425877970+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39752: remote error: tls: bad certificate 2025-08-13T19:52:11.442925426+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39768: remote error: tls: bad certificate 2025-08-13T19:52:11.457592704+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39780: remote error: tls: bad certificate 2025-08-13T19:52:11.474499765+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39794: remote error: tls: bad certificate 2025-08-13T19:52:11.489083600+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39798: remote error: tls: bad certificate 2025-08-13T19:52:11.504001685+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39812: remote error: tls: bad certificate 
2025-08-13T19:52:11.527297039+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39824: remote error: tls: bad certificate 2025-08-13T19:52:11.544857089+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39838: remote error: tls: bad certificate 2025-08-13T19:52:11.561618747+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39842: remote error: tls: bad certificate 2025-08-13T19:52:11.578022934+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39856: remote error: tls: bad certificate 2025-08-13T19:52:11.592737703+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39862: remote error: tls: bad certificate 2025-08-13T19:52:11.605950840+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39864: remote error: tls: bad certificate 2025-08-13T19:52:11.620057401+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39874: remote error: tls: bad certificate 2025-08-13T19:52:11.632685761+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39884: remote error: tls: bad certificate 2025-08-13T19:52:11.644263651+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39900: remote error: tls: bad certificate 2025-08-13T19:52:11.657287642+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39906: remote error: tls: bad certificate 2025-08-13T19:52:11.669849710+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39920: remote error: tls: bad certificate 2025-08-13T19:52:11.684449876+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39932: remote error: tls: bad certificate 2025-08-13T19:52:11.700539075+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39946: remote error: tls: bad certificate 2025-08-13T19:52:11.717894039+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39948: remote error: tls: bad certificate 2025-08-13T19:52:11.741139591+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39958: remote error: tls: bad certificate 2025-08-13T19:52:11.783948171+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39972: remote error: tls: bad certificate 2025-08-13T19:52:11.825070713+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39984: remote error: tls: bad certificate 2025-08-13T19:52:11.862937661+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39986: remote error: tls: bad certificate 2025-08-13T19:52:11.902239011+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39996: remote error: tls: bad certificate 2025-08-13T19:52:11.942315013+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:40006: remote error: tls: bad certificate 2025-08-13T19:52:11.983443985+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:40012: remote error: tls: bad certificate 2025-08-13T19:52:12.021448508+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40024: remote error: tls: bad certificate 2025-08-13T19:52:12.060711606+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40030: remote error: tls: bad certificate 2025-08-13T19:52:12.102512157+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40046: remote error: tls: bad certificate 
2025-08-13T19:52:12.167097067+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40048: remote error: tls: bad certificate 2025-08-13T19:52:12.217665528+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40052: remote error: tls: bad certificate 2025-08-13T19:52:12.240474918+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40054: remote error: tls: bad certificate 2025-08-13T19:52:12.261388544+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40056: remote error: tls: bad certificate 2025-08-13T19:52:12.301742003+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40060: remote error: tls: bad certificate 2025-08-13T19:52:12.341023962+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40066: remote error: tls: bad certificate 2025-08-13T19:52:12.381319700+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40080: remote error: tls: bad certificate 2025-08-13T19:52:12.422914886+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40092: remote error: tls: bad certificate 2025-08-13T19:52:12.463697788+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40102: remote error: tls: bad certificate 2025-08-13T19:52:12.504071048+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40116: remote error: tls: bad certificate 2025-08-13T19:52:12.547703351+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40122: remote error: tls: bad certificate 2025-08-13T19:52:12.581839253+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40128: remote error: tls: bad certificate 2025-08-13T19:52:12.622860002+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40138: remote error: tls: bad certificate 2025-08-13T19:52:12.661340779+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40140: remote error: tls: bad certificate 2025-08-13T19:52:12.701079951+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40148: remote error: tls: bad certificate 2025-08-13T19:52:12.740683089+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40152: remote error: tls: bad certificate 2025-08-13T19:52:12.781331177+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40166: remote error: tls: bad certificate 2025-08-13T19:52:12.822731237+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40178: remote error: tls: bad certificate 2025-08-13T19:52:12.864116206+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40192: remote error: tls: bad certificate 2025-08-13T19:52:12.902372336+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40206: remote error: tls: bad certificate 2025-08-13T19:52:12.940494812+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40216: remote error: tls: bad certificate 2025-08-13T19:52:12.981758598+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40232: remote error: tls: bad certificate 2025-08-13T19:52:13.031258298+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40248: remote error: tls: bad certificate 2025-08-13T19:52:13.066912604+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40262: remote error: tls: bad certificate 
2025-08-13T19:52:13.107282584+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40268: remote error: tls: bad certificate 2025-08-13T19:52:13.142567149+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40280: remote error: tls: bad certificate 2025-08-13T19:52:13.183164186+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40288: remote error: tls: bad certificate 2025-08-13T19:52:13.223954458+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40300: remote error: tls: bad certificate 2025-08-13T19:52:13.264955716+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40308: remote error: tls: bad certificate 2025-08-13T19:52:13.309511016+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40320: remote error: tls: bad certificate 2025-08-13T19:52:13.348260370+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40330: remote error: tls: bad certificate 2025-08-13T19:52:13.382583788+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40344: remote error: tls: bad certificate 2025-08-13T19:52:13.422758932+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40350: remote error: tls: bad certificate 2025-08-13T19:52:13.464926023+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40366: remote error: tls: bad certificate 2025-08-13T19:52:13.503688118+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40374: remote error: tls: bad certificate 2025-08-13T19:52:13.545931881+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40386: remote error: tls: bad certificate 2025-08-13T19:52:13.582022830+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40398: remote error: tls: bad certificate 2025-08-13T19:52:13.621601367+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40406: remote error: tls: bad certificate 2025-08-13T19:52:13.664280403+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40422: remote error: tls: bad certificate 2025-08-13T19:52:13.703613304+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40436: remote error: tls: bad certificate 2025-08-13T19:52:13.744616232+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40440: remote error: tls: bad certificate 2025-08-13T19:52:13.785130986+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40454: remote error: tls: bad certificate 2025-08-13T19:52:13.822426749+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40456: remote error: tls: bad certificate 2025-08-13T19:52:13.861883983+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate 2025-08-13T19:52:13.910098867+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40472: remote error: tls: bad certificate 2025-08-13T19:52:13.939739281+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40488: remote error: tls: bad certificate 2025-08-13T19:52:13.983495428+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40490: remote error: tls: bad certificate 2025-08-13T19:52:14.021704327+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40494: remote error: tls: bad certificate 
2025-08-13T19:52:14.063676833+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40504: remote error: tls: bad certificate 2025-08-13T19:52:14.104156576+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40516: remote error: tls: bad certificate 2025-08-13T19:52:14.144366011+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40520: remote error: tls: bad certificate 2025-08-13T19:52:14.191540715+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40534: remote error: tls: bad certificate 2025-08-13T19:52:14.221933991+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40544: remote error: tls: bad certificate 2025-08-13T19:52:14.268723094+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40560: remote error: tls: bad certificate 2025-08-13T19:52:14.308665693+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate 2025-08-13T19:52:14.342111615+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40566: remote error: tls: bad certificate 2025-08-13T19:52:14.388072655+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate 2025-08-13T19:52:14.421615230+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40582: remote error: tls: bad certificate 2025-08-13T19:52:14.461890188+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate 2025-08-13T19:52:14.501881407+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40598: remote error: tls: bad certificate 2025-08-13T19:52:14.543302267+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate 2025-08-13T19:52:14.588453424+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate 2025-08-13T19:52:14.623759570+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40624: remote error: tls: bad certificate 2025-08-13T19:52:14.661446144+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40638: remote error: tls: bad certificate 2025-08-13T19:52:14.701385821+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40648: remote error: tls: bad certificate 2025-08-13T19:52:14.742138102+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40658: remote error: tls: bad certificate 2025-08-13T19:52:14.781102153+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40666: remote error: tls: bad certificate 2025-08-13T19:52:14.822133612+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40678: remote error: tls: bad certificate 2025-08-13T19:52:14.861476223+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40688: remote error: tls: bad certificate 2025-08-13T19:52:14.902266185+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40690: remote error: tls: bad certificate 2025-08-13T19:52:14.942502491+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40696: remote error: tls: bad certificate 2025-08-13T19:52:14.989922982+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40702: remote error: tls: bad certificate 
2025-08-13T19:52:15.035294025+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40716: remote error: tls: bad certificate 2025-08-13T19:52:15.062418068+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40718: remote error: tls: bad certificate 2025-08-13T19:52:15.101994224+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40730: remote error: tls: bad certificate 2025-08-13T19:52:15.142621022+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40742: remote error: tls: bad certificate 2025-08-13T19:52:15.194532501+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40756: remote error: tls: bad certificate 2025-08-13T19:52:15.224587177+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40758: remote error: tls: bad certificate 2025-08-13T19:52:15.266966404+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40772: remote error: tls: bad certificate 2025-08-13T19:52:15.307219001+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate 2025-08-13T19:52:15.346391457+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40786: remote error: tls: bad certificate 2025-08-13T19:52:15.388867288+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40788: remote error: tls: bad certificate 2025-08-13T19:52:15.426451058+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40798: remote error: tls: bad certificate 2025-08-13T19:52:15.462024732+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40800: remote error: tls: bad certificate 2025-08-13T19:52:15.504317767+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40808: remote error: tls: bad certificate 2025-08-13T19:52:15.541993750+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40824: remote error: tls: bad certificate 2025-08-13T19:52:15.587716863+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40830: remote error: tls: bad certificate 2025-08-13T19:52:15.620622270+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40840: remote error: tls: bad certificate 2025-08-13T19:52:15.662918125+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40844: remote error: tls: bad certificate 2025-08-13T19:52:15.700840836+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40850: remote error: tls: bad certificate 2025-08-13T19:52:15.743609805+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40866: remote error: tls: bad certificate 2025-08-13T19:52:15.788287597+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40876: remote error: tls: bad certificate 2025-08-13T19:52:15.821371540+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40892: remote error: tls: bad certificate 2025-08-13T19:52:15.862442170+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40908: remote error: tls: bad certificate 2025-08-13T19:52:15.901707219+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40910: remote error: tls: bad certificate 2025-08-13T19:52:15.944092526+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40920: remote error: tls: bad certificate 
2025-08-13T19:52:15.989336395+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40936: remote error: tls: bad certificate 2025-08-13T19:52:16.021855652+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40946: remote error: tls: bad certificate 2025-08-13T19:52:16.063159029+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40950: remote error: tls: bad certificate 2025-08-13T19:52:16.104336372+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40952: remote error: tls: bad certificate 2025-08-13T19:52:16.143946810+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40968: remote error: tls: bad certificate 2025-08-13T19:52:16.185765572+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40972: remote error: tls: bad certificate 2025-08-13T19:52:16.224690171+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40988: remote error: tls: bad certificate 2025-08-13T19:52:16.271144184+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate 2025-08-13T19:52:16.303604989+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41002: remote error: tls: bad certificate 2025-08-13T19:52:16.343666201+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41018: remote error: tls: bad certificate 2025-08-13T19:52:16.384617087+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate 2025-08-13T19:52:16.421874179+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41046: remote error: tls: bad certificate 2025-08-13T19:52:16.427717065+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41060: remote error: tls: bad certificate 2025-08-13T19:52:16.450360310+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41076: remote error: tls: bad certificate 2025-08-13T19:52:16.463030341+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41078: remote error: tls: bad certificate 2025-08-13T19:52:16.474139578+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41088: remote error: tls: bad certificate 2025-08-13T19:52:16.495102035+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41092: remote error: tls: bad certificate 2025-08-13T19:52:16.508591420+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41094: remote error: tls: bad certificate 2025-08-13T19:52:16.516873826+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41096: remote error: tls: bad certificate 2025-08-13T19:52:16.543182675+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41104: remote error: tls: bad certificate 2025-08-13T19:52:16.583160114+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41118: remote error: tls: bad certificate 2025-08-13T19:52:16.622651059+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41134: remote error: tls: bad certificate 2025-08-13T19:52:16.662400672+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41150: remote error: tls: bad certificate 2025-08-13T19:52:16.702245907+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41156: remote error: tls: bad certificate 
2025-08-13T19:52:16.742052621+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41168: remote error: tls: bad certificate 2025-08-13T19:52:16.784443129+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41172: remote error: tls: bad certificate 2025-08-13T19:52:16.822364699+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41182: remote error: tls: bad certificate 2025-08-13T19:52:16.863114030+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41194: remote error: tls: bad certificate 2025-08-13T19:52:16.904113078+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41198: remote error: tls: bad certificate 2025-08-13T19:52:16.942268855+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41204: remote error: tls: bad certificate 2025-08-13T19:52:16.981872174+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41206: remote error: tls: bad certificate 2025-08-13T19:52:17.024432966+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41208: remote error: tls: bad certificate 2025-08-13T19:52:17.069047137+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41214: remote error: tls: bad certificate 2025-08-13T19:52:17.108159092+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41226: remote error: tls: bad certificate 2025-08-13T19:52:17.151381943+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41230: remote error: tls: bad certificate 2025-08-13T19:52:17.189513860+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41240: remote error: tls: bad certificate 2025-08-13T19:52:17.228182411+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41248: remote error: tls: bad certificate 2025-08-13T19:52:17.266487893+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41264: remote error: tls: bad certificate 2025-08-13T19:52:17.307342997+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41272: remote error: tls: bad certificate 2025-08-13T19:52:17.346971366+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41276: remote error: tls: bad certificate 2025-08-13T19:52:17.383013183+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41278: remote error: tls: bad certificate 2025-08-13T19:52:17.425935346+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41282: remote error: tls: bad certificate 2025-08-13T19:52:17.464476354+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41286: remote error: tls: bad certificate 2025-08-13T19:52:17.502745114+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41296: remote error: tls: bad certificate 2025-08-13T19:52:17.544722150+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41308: remote error: tls: bad certificate 2025-08-13T19:52:17.584005599+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41316: remote error: tls: bad certificate 2025-08-13T19:52:17.629548597+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41318: remote error: tls: bad certificate 2025-08-13T19:52:17.665853351+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41334: remote error: tls: bad certificate 
2025-08-13T19:52:17.703093872+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41348: remote error: tls: bad certificate 2025-08-13T19:52:17.744529713+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41354: remote error: tls: bad certificate 2025-08-13T19:52:17.785493290+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41364: remote error: tls: bad certificate 2025-08-13T19:52:17.823942645+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41380: remote error: tls: bad certificate 2025-08-13T19:52:17.871266064+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41384: remote error: tls: bad certificate 2025-08-13T19:52:17.904524381+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate 2025-08-13T19:52:17.947506846+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41408: remote error: tls: bad certificate 2025-08-13T19:52:22.825275536+00:00 stderr F 2025/08/13 19:52:22 http: TLS handshake error from 127.0.0.1:58334: remote error: tls: bad certificate 2025-08-13T19:52:25.223326149+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58338: remote error: tls: bad certificate 2025-08-13T19:52:25.240954791+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58350: remote error: tls: bad certificate 2025-08-13T19:52:25.296952207+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58364: remote error: tls: bad certificate 2025-08-13T19:52:25.338490320+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58368: remote error: tls: bad certificate 2025-08-13T19:52:25.356747301+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58384: remote error: tls: bad certificate 2025-08-13T19:52:25.372677344+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58386: remote error: tls: bad certificate 2025-08-13T19:52:25.387531868+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58402: remote error: tls: bad certificate 2025-08-13T19:52:25.407517437+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58412: remote error: tls: bad certificate 2025-08-13T19:52:25.424520621+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58416: remote error: tls: bad certificate 2025-08-13T19:52:25.442419241+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58430: remote error: tls: bad certificate 2025-08-13T19:52:25.459381585+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58434: remote error: tls: bad certificate 2025-08-13T19:52:25.475724650+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58440: remote error: tls: bad certificate 2025-08-13T19:52:25.501438643+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58442: remote error: tls: bad certificate 2025-08-13T19:52:25.516681827+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58450: remote error: tls: bad certificate 2025-08-13T19:52:25.531246612+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58458: remote error: tls: bad certificate 2025-08-13T19:52:25.545003804+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58466: remote error: tls: bad certificate 
2025-08-13T19:52:25.561746321+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58472: remote error: tls: bad certificate 2025-08-13T19:52:25.579395714+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58480: remote error: tls: bad certificate 2025-08-13T19:52:25.593453455+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58490: remote error: tls: bad certificate 2025-08-13T19:52:25.609185443+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58492: remote error: tls: bad certificate 2025-08-13T19:52:25.625933600+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58498: remote error: tls: bad certificate 2025-08-13T19:52:25.639068834+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58514: remote error: tls: bad certificate 2025-08-13T19:52:25.652001703+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58524: remote error: tls: bad certificate 2025-08-13T19:52:25.666618059+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58534: remote error: tls: bad certificate 2025-08-13T19:52:25.682511092+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58536: remote error: tls: bad certificate 2025-08-13T19:52:25.701233115+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58552: remote error: tls: bad certificate 2025-08-13T19:52:25.716989544+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58560: remote error: tls: bad certificate 2025-08-13T19:52:25.735908663+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58566: remote error: tls: bad certificate 2025-08-13T19:52:25.754454232+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58582: remote error: tls: bad certificate 2025-08-13T19:52:25.769279854+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58594: remote error: tls: bad certificate 2025-08-13T19:52:25.783194450+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58600: remote error: tls: bad certificate 2025-08-13T19:52:25.799534336+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58602: remote error: tls: bad certificate 2025-08-13T19:52:25.818184086+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58608: remote error: tls: bad certificate 2025-08-13T19:52:25.843068765+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58618: remote error: tls: bad certificate 2025-08-13T19:52:25.858664330+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58632: remote error: tls: bad certificate 2025-08-13T19:52:25.876926030+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58642: remote error: tls: bad certificate 2025-08-13T19:52:25.900269615+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58644: remote error: tls: bad certificate 2025-08-13T19:52:25.916197429+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58654: remote error: tls: bad certificate 2025-08-13T19:52:25.933082790+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58666: remote error: tls: bad certificate 2025-08-13T19:52:25.950171327+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58678: remote error: tls: bad certificate 
2025-08-13T19:52:25.968612212+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58694: remote error: tls: bad certificate 2025-08-13T19:52:25.985738720+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58706: remote error: tls: bad certificate 2025-08-13T19:52:26.003170677+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58708: remote error: tls: bad certificate 2025-08-13T19:52:26.027500480+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58716: remote error: tls: bad certificate 2025-08-13T19:52:26.054754387+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58718: remote error: tls: bad certificate 2025-08-13T19:52:26.069362183+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58730: remote error: tls: bad certificate 2025-08-13T19:52:26.085567544+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58740: remote error: tls: bad certificate 2025-08-13T19:52:26.098739770+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58742: remote error: tls: bad certificate 2025-08-13T19:52:26.115668642+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58756: remote error: tls: bad certificate 2025-08-13T19:52:26.135727873+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58764: remote error: tls: bad certificate 2025-08-13T19:52:26.149984610+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58766: remote error: tls: bad certificate 2025-08-13T19:52:26.167277312+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58776: remote error: tls: bad certificate 2025-08-13T19:52:26.181919249+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58786: remote error: tls: bad certificate 2025-08-13T19:52:26.198191933+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58798: remote error: tls: bad certificate 2025-08-13T19:52:26.216253928+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58804: remote error: tls: bad certificate 2025-08-13T19:52:26.234710754+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58816: remote error: tls: bad certificate 2025-08-13T19:52:26.255699392+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58826: remote error: tls: bad certificate 2025-08-13T19:52:26.275589568+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58832: remote error: tls: bad certificate 2025-08-13T19:52:26.293391465+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58836: remote error: tls: bad certificate 2025-08-13T19:52:26.309984128+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58850: remote error: tls: bad certificate 2025-08-13T19:52:26.327238480+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58856: remote error: tls: bad certificate 2025-08-13T19:52:26.350983526+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58864: remote error: tls: bad certificate 2025-08-13T19:52:26.367071035+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58870: remote error: tls: bad certificate 2025-08-13T19:52:26.386034035+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58880: remote error: tls: bad certificate 
2025-08-13T19:52:26.404960874+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58896: remote error: tls: bad certificate 2025-08-13T19:52:26.473646541+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58900: remote error: tls: bad certificate 2025-08-13T19:52:26.487644650+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58912: remote error: tls: bad certificate 2025-08-13T19:52:26.693322640+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58926: remote error: tls: bad certificate 2025-08-13T19:52:26.712385813+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58936: remote error: tls: bad certificate 2025-08-13T19:52:26.738643721+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58950: remote error: tls: bad certificate 2025-08-13T19:52:26.760013040+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58962: remote error: tls: bad certificate 2025-08-13T19:52:26.793966847+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58968: remote error: tls: bad certificate 2025-08-13T19:52:35.227220003+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:59988: remote error: tls: bad certificate 2025-08-13T19:52:35.249591660+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:59992: remote error: tls: bad certificate 2025-08-13T19:52:35.268651212+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60004: remote error: tls: bad certificate 2025-08-13T19:52:35.292355107+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60018: remote error: tls: bad certificate 2025-08-13T19:52:35.314572919+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60030: remote error: tls: bad certificate 2025-08-13T19:52:35.338897801+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60046: remote error: tls: bad certificate 2025-08-13T19:52:35.361975338+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60048: remote error: tls: bad certificate 2025-08-13T19:52:35.380907637+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60058: remote error: tls: bad certificate 2025-08-13T19:52:35.404499658+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60070: remote error: tls: bad certificate 2025-08-13T19:52:35.451247219+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60082: remote error: tls: bad certificate 2025-08-13T19:52:35.479362279+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60096: remote error: tls: bad certificate 2025-08-13T19:52:35.501621333+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60106: remote error: tls: bad certificate 2025-08-13T19:52:35.523306260+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60114: remote error: tls: bad certificate 2025-08-13T19:52:35.540165600+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60118: remote error: tls: bad certificate 2025-08-13T19:52:35.558233844+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60130: remote error: tls: bad certificate 2025-08-13T19:52:35.580755125+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60142: remote error: tls: bad certificate 
2025-08-13T19:52:35.599640012+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60156: remote error: tls: bad certificate 2025-08-13T19:52:35.617128680+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60170: remote error: tls: bad certificate 2025-08-13T19:52:35.634223507+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60182: remote error: tls: bad certificate 2025-08-13T19:52:35.652244880+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60192: remote error: tls: bad certificate 2025-08-13T19:52:35.670488279+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60200: remote error: tls: bad certificate 2025-08-13T19:52:35.694291766+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60202: remote error: tls: bad certificate 2025-08-13T19:52:35.718700431+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60210: remote error: tls: bad certificate 2025-08-13T19:52:35.740032238+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60226: remote error: tls: bad certificate 2025-08-13T19:52:35.761107728+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60240: remote error: tls: bad certificate 2025-08-13T19:52:35.785249005+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60248: remote error: tls: bad certificate 2025-08-13T19:52:35.805376058+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60262: remote error: tls: bad certificate 2025-08-13T19:52:35.830883094+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60276: remote error: tls: bad certificate 2025-08-13T19:52:35.848616099+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60290: remote error: tls: bad certificate 2025-08-13T19:52:35.869591326+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60294: remote error: tls: bad certificate 2025-08-13T19:52:35.886356883+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60298: remote error: tls: bad certificate 2025-08-13T19:52:35.908531544+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60314: remote error: tls: bad certificate 2025-08-13T19:52:35.935897313+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60328: remote error: tls: bad certificate 2025-08-13T19:52:35.959551676+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60338: remote error: tls: bad certificate 2025-08-13T19:52:35.979486513+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60348: remote error: tls: bad certificate 2025-08-13T19:52:36.010430514+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60360: remote error: tls: bad certificate 2025-08-13T19:52:36.028902670+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60372: remote error: tls: bad certificate 2025-08-13T19:52:36.047386496+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate 2025-08-13T19:52:36.064362099+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60390: remote error: tls: bad certificate 2025-08-13T19:52:36.088390823+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60402: remote error: tls: bad certificate 
2025-08-13T19:52:36.102733541+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60408: remote error: tls: bad certificate 2025-08-13T19:52:36.117908473+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60414: remote error: tls: bad certificate 2025-08-13T19:52:36.135445892+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60426: remote error: tls: bad certificate 2025-08-13T19:52:36.160731252+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60436: remote error: tls: bad certificate 2025-08-13T19:52:36.176887242+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60442: remote error: tls: bad certificate 2025-08-13T19:52:36.191049525+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60452: remote error: tls: bad certificate 2025-08-13T19:52:36.215687576+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60462: remote error: tls: bad certificate 2025-08-13T19:52:36.285396160+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60476: remote error: tls: bad certificate 2025-08-13T19:52:36.310347920+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60478: remote error: tls: bad certificate 2025-08-13T19:52:36.337593086+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60486: remote error: tls: bad certificate 2025-08-13T19:52:36.356948637+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate 2025-08-13T19:52:36.380633021+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60496: remote error: tls: bad certificate 2025-08-13T19:52:36.399318903+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60500: remote error: tls: bad certificate 2025-08-13T19:52:36.422595145+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60504: remote error: tls: bad certificate 2025-08-13T19:52:36.438301642+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60516: remote error: tls: bad certificate 2025-08-13T19:52:36.460266597+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60522: remote error: tls: bad certificate 2025-08-13T19:52:36.494429050+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60530: remote error: tls: bad certificate 2025-08-13T19:52:36.519513724+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60536: remote error: tls: bad certificate 2025-08-13T19:52:36.542643321+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60552: remote error: tls: bad certificate 2025-08-13T19:52:36.561230200+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60564: remote error: tls: bad certificate 2025-08-13T19:52:36.583634798+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60570: remote error: tls: bad certificate 2025-08-13T19:52:36.607348422+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60582: remote error: tls: bad certificate 2025-08-13T19:52:36.631427048+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60598: remote error: tls: bad certificate 2025-08-13T19:52:36.654232857+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60614: remote error: tls: bad certificate 
2025-08-13T19:52:36.671708964+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60622: remote error: tls: bad certificate 2025-08-13T19:52:36.689869311+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60636: remote error: tls: bad certificate 2025-08-13T19:52:36.707753290+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60650: remote error: tls: bad certificate 2025-08-13T19:52:37.193249368+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60660: remote error: tls: bad certificate 2025-08-13T19:52:37.227900124+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60672: remote error: tls: bad certificate 2025-08-13T19:52:37.253324988+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60682: remote error: tls: bad certificate 2025-08-13T19:52:37.289531118+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60698: remote error: tls: bad certificate 2025-08-13T19:52:37.320870740+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60702: remote error: tls: bad certificate 2025-08-13T19:52:37.359422918+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate 2025-08-13T19:52:37.385714426+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60724: remote error: tls: bad certificate 2025-08-13T19:52:37.406349833+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60736: remote error: tls: bad certificate 2025-08-13T19:52:37.439943789+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60740: remote error: tls: bad certificate 2025-08-13T19:52:37.465069355+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60754: remote error: tls: bad certificate 2025-08-13T19:52:37.484755785+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60768: remote error: tls: bad certificate 2025-08-13T19:52:37.511383423+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60780: remote error: tls: bad certificate 2025-08-13T19:52:37.540501142+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60794: remote error: tls: bad certificate 2025-08-13T19:52:37.560563783+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60800: remote error: tls: bad certificate 2025-08-13T19:52:37.586075869+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60804: remote error: tls: bad certificate 2025-08-13T19:52:37.602590359+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60808: remote error: tls: bad certificate 2025-08-13T19:52:37.622554317+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60824: remote error: tls: bad certificate 2025-08-13T19:52:37.644898103+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60826: remote error: tls: bad certificate 2025-08-13T19:52:37.657870252+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60838: remote error: tls: bad certificate 2025-08-13T19:52:37.663043059+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60852: remote error: tls: bad certificate 2025-08-13T19:52:37.680747583+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60864: remote error: tls: bad certificate 
2025-08-13T19:52:37.699417155+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60868: remote error: tls: bad certificate 2025-08-13T19:52:37.716896672+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60884: remote error: tls: bad certificate 2025-08-13T19:52:37.735016848+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60898: remote error: tls: bad certificate 2025-08-13T19:52:37.752608598+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60908: remote error: tls: bad certificate 2025-08-13T19:52:37.772235147+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60910: remote error: tls: bad certificate 2025-08-13T19:52:37.791632809+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60926: remote error: tls: bad certificate 2025-08-13T19:52:37.808453488+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60934: remote error: tls: bad certificate 2025-08-13T19:52:37.829219909+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60936: remote error: tls: bad certificate 2025-08-13T19:52:37.843633199+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60948: remote error: tls: bad certificate 2025-08-13T19:52:37.859916393+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60950: remote error: tls: bad certificate 2025-08-13T19:52:37.877126023+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60956: remote error: tls: bad certificate 2025-08-13T19:52:37.893380835+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60964: remote error: tls: bad certificate 2025-08-13T19:52:37.909553905+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60966: remote error: tls: bad certificate 2025-08-13T19:52:37.927916698+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60982: remote error: tls: bad certificate 2025-08-13T19:52:37.940262830+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60996: remote error: tls: bad certificate 2025-08-13T19:52:37.958360625+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32774: remote error: tls: bad certificate 2025-08-13T19:52:37.976464500+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32790: remote error: tls: bad certificate 2025-08-13T19:52:37.996353576+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32798: remote error: tls: bad certificate 2025-08-13T19:52:38.013785722+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32800: remote error: tls: bad certificate 2025-08-13T19:52:38.031618750+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32806: remote error: tls: bad certificate 2025-08-13T19:52:38.048144550+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32818: remote error: tls: bad certificate 2025-08-13T19:52:38.066642077+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32832: remote error: tls: bad certificate 2025-08-13T19:52:38.087573122+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32848: remote error: tls: bad certificate 2025-08-13T19:52:38.107236172+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32858: remote error: tls: bad certificate 
2025-08-13T19:52:38.121904749+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32862: remote error: tls: bad certificate 2025-08-13T19:52:38.147047965+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32870: remote error: tls: bad certificate 2025-08-13T19:52:38.160669693+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32878: remote error: tls: bad certificate 2025-08-13T19:52:38.178071618+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32886: remote error: tls: bad certificate 2025-08-13T19:52:38.197938123+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32900: remote error: tls: bad certificate 2025-08-13T19:52:38.220079204+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32904: remote error: tls: bad certificate 2025-08-13T19:52:38.239529217+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32918: remote error: tls: bad certificate 2025-08-13T19:52:38.263114808+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32930: remote error: tls: bad certificate 2025-08-13T19:52:38.283980442+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32932: remote error: tls: bad certificate 2025-08-13T19:52:38.301495141+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32946: remote error: tls: bad certificate 2025-08-13T19:52:38.318188146+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32958: remote error: tls: bad certificate 2025-08-13T19:52:38.334608393+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32966: remote error: tls: bad certificate 2025-08-13T19:52:38.368160578+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32970: remote error: tls: bad certificate 2025-08-13T19:52:38.390010810+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32984: remote error: tls: bad certificate 2025-08-13T19:52:38.413753726+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32986: remote error: tls: bad certificate 2025-08-13T19:52:38.432204501+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33000: remote error: tls: bad certificate 2025-08-13T19:52:38.456990616+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33004: remote error: tls: bad certificate 2025-08-13T19:52:38.472303832+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33006: remote error: tls: bad certificate 2025-08-13T19:52:38.486705692+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33016: remote error: tls: bad certificate 2025-08-13T19:52:38.502739549+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33030: remote error: tls: bad certificate 2025-08-13T19:52:38.519135215+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33044: remote error: tls: bad certificate 2025-08-13T19:52:38.532735662+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33058: remote error: tls: bad certificate 2025-08-13T19:52:38.549972963+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33074: remote error: tls: bad certificate 2025-08-13T19:52:38.565916947+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33090: remote error: tls: bad certificate 
2025-08-13T19:52:38.579729910+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33104: remote error: tls: bad certificate 2025-08-13T19:52:38.595466578+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33118: remote error: tls: bad certificate 2025-08-13T19:52:38.617000141+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33132: remote error: tls: bad certificate 2025-08-13T19:52:38.633136970+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33148: remote error: tls: bad certificate 2025-08-13T19:52:38.649141945+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33162: remote error: tls: bad certificate 2025-08-13T19:52:38.664753450+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33172: remote error: tls: bad certificate 2025-08-13T19:52:38.704004367+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33180: remote error: tls: bad certificate 2025-08-13T19:52:38.738288043+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49942: remote error: tls: bad certificate 2025-08-13T19:52:38.778373084+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49958: remote error: tls: bad certificate 2025-08-13T19:52:38.818572478+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49974: remote error: tls: bad certificate 2025-08-13T19:52:38.860590264+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49988: remote error: tls: bad certificate 2025-08-13T19:52:38.911176863+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50004: remote error: tls: bad certificate 2025-08-13T19:52:38.937672828+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50008: remote error: tls: bad certificate 2025-08-13T19:52:38.978188481+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50014: remote error: tls: bad certificate 2025-08-13T19:52:39.019203178+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50024: remote error: tls: bad certificate 2025-08-13T19:52:39.062579053+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50036: remote error: tls: bad certificate 2025-08-13T19:52:39.100899163+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50040: remote error: tls: bad certificate 2025-08-13T19:52:39.140337246+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50044: remote error: tls: bad certificate 2025-08-13T19:52:39.179467790+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50048: remote error: tls: bad certificate 2025-08-13T19:52:39.223054360+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50064: remote error: tls: bad certificate 2025-08-13T19:52:39.258021305+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50072: remote error: tls: bad certificate 2025-08-13T19:52:39.297485249+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50086: remote error: tls: bad certificate 2025-08-13T19:52:39.341536392+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50092: remote error: tls: bad certificate 2025-08-13T19:52:39.384896026+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50094: remote error: tls: bad certificate 
2025-08-13T19:52:39.420532681+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50106: remote error: tls: bad certificate 2025-08-13T19:52:39.460883209+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50114: remote error: tls: bad certificate 2025-08-13T19:52:39.500563549+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50116: remote error: tls: bad certificate 2025-08-13T19:52:39.539577799+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50122: remote error: tls: bad certificate 2025-08-13T19:52:39.580488363+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50130: remote error: tls: bad certificate 2025-08-13T19:52:39.620439841+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50144: remote error: tls: bad certificate 2025-08-13T19:52:39.660903792+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50146: remote error: tls: bad certificate 2025-08-13T19:52:39.699542742+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50148: remote error: tls: bad certificate 2025-08-13T19:52:39.739991853+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50152: remote error: tls: bad certificate 2025-08-13T19:52:39.781197136+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50158: remote error: tls: bad certificate 2025-08-13T19:52:39.818201909+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50164: remote error: tls: bad certificate 2025-08-13T19:52:39.859011151+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate 2025-08-13T19:52:39.900138121+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50184: remote error: tls: bad certificate 2025-08-13T19:52:39.940674485+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50198: remote error: tls: bad certificate 2025-08-13T19:52:39.981173628+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50208: remote error: tls: bad certificate 2025-08-13T19:52:40.021504846+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50210: remote error: tls: bad certificate 2025-08-13T19:52:40.060499815+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate 2025-08-13T19:52:40.099227238+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50218: remote error: tls: bad certificate 2025-08-13T19:52:40.138207596+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50234: remote error: tls: bad certificate 2025-08-13T19:52:40.179364457+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50248: remote error: tls: bad certificate 2025-08-13T19:52:40.220187179+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50250: remote error: tls: bad certificate 2025-08-13T19:52:40.266923459+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50262: remote error: tls: bad certificate 2025-08-13T19:52:40.300463004+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50278: remote error: tls: bad certificate 2025-08-13T19:52:40.338439655+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate 
2025-08-13T19:52:40.380567274+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50296: remote error: tls: bad certificate 2025-08-13T19:52:40.418242506+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50310: remote error: tls: bad certificate 2025-08-13T19:52:40.465546533+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50320: remote error: tls: bad certificate 2025-08-13T19:52:40.496763391+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate 2025-08-13T19:52:40.538459038+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate 2025-08-13T19:52:40.578203929+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50340: remote error: tls: bad certificate 2025-08-13T19:52:40.620458722+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50354: remote error: tls: bad certificate 2025-08-13T19:52:40.660864622+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50360: remote error: tls: bad certificate 2025-08-13T19:52:40.699334777+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50366: remote error: tls: bad certificate 2025-08-13T19:52:40.739588543+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate 2025-08-13T19:52:40.779523379+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50388: remote error: tls: bad certificate 2025-08-13T19:52:40.822485122+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50392: remote error: tls: bad certificate 2025-08-13T19:52:40.860665839+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50400: remote error: tls: bad certificate 2025-08-13T19:52:40.900133812+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50404: remote error: tls: bad certificate 2025-08-13T19:52:40.941152159+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50416: remote error: tls: bad certificate 2025-08-13T19:52:40.978907494+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50424: remote error: tls: bad certificate 2025-08-13T19:52:41.019480659+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate 2025-08-13T19:52:41.058350335+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50436: remote error: tls: bad certificate 2025-08-13T19:52:41.098664902+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50438: remote error: tls: bad certificate 2025-08-13T19:52:41.142159720+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50450: remote error: tls: bad certificate 2025-08-13T19:52:41.185477073+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate 2025-08-13T19:52:41.238757810+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50470: remote error: tls: bad certificate 2025-08-13T19:52:41.258080630+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50480: remote error: tls: bad certificate 2025-08-13T19:52:41.298741647+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50486: remote error: tls: bad certificate 
2025-08-13T19:52:41.341063322+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50496: remote error: tls: bad certificate 2025-08-13T19:52:41.382123200+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50508: remote error: tls: bad certificate 2025-08-13T19:52:41.418920798+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50520: remote error: tls: bad certificate 2025-08-13T19:52:41.464397882+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50536: remote error: tls: bad certificate 2025-08-13T19:52:41.498985476+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50538: remote error: tls: bad certificate 2025-08-13T19:52:41.538845161+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50546: remote error: tls: bad certificate 2025-08-13T19:52:41.578564341+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50554: remote error: tls: bad certificate 2025-08-13T19:52:41.619309401+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate 2025-08-13T19:52:41.665264039+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50578: remote error: tls: bad certificate 2025-08-13T19:52:41.698925307+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate 2025-08-13T19:52:41.738637707+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50600: remote error: tls: bad certificate 2025-08-13T19:52:41.783739051+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50616: remote error: tls: bad certificate 2025-08-13T19:52:41.826310292+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50624: remote error: tls: bad certificate 2025-08-13T19:52:41.860895847+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50628: remote error: tls: bad certificate 2025-08-13T19:52:41.898189188+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50632: remote error: tls: bad certificate 2025-08-13T19:52:41.938887107+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50638: remote error: tls: bad certificate 2025-08-13T19:52:41.979832382+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50648: remote error: tls: bad certificate 2025-08-13T19:52:42.020386316+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50664: remote error: tls: bad certificate 2025-08-13T19:52:42.085024836+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50674: remote error: tls: bad certificate 2025-08-13T19:52:42.108896596+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50684: remote error: tls: bad certificate 2025-08-13T19:52:42.142605725+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate 2025-08-13T19:52:42.181068060+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50706: remote error: tls: bad certificate 2025-08-13T19:52:42.220756259+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50716: remote error: tls: bad certificate 2025-08-13T19:52:42.262327832+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50730: remote error: tls: bad certificate 
2025-08-13T19:52:42.301191519+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50734: remote error: tls: bad certificate 2025-08-13T19:52:42.339704595+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50738: remote error: tls: bad certificate 2025-08-13T19:52:42.381139314+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50754: remote error: tls: bad certificate 2025-08-13T19:52:42.419863786+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50766: remote error: tls: bad certificate 2025-08-13T19:52:42.465001591+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50776: remote error: tls: bad certificate 2025-08-13T19:52:42.498351280+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50790: remote error: tls: bad certificate 2025-08-13T19:52:42.539761019+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50802: remote error: tls: bad certificate 2025-08-13T19:52:42.579370336+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50804: remote error: tls: bad certificate 2025-08-13T19:52:42.617901903+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50814: remote error: tls: bad certificate 2025-08-13T19:52:42.658504158+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50816: remote error: tls: bad certificate 2025-08-13T19:52:42.700273977+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50818: remote error: tls: bad certificate 2025-08-13T19:52:42.739716560+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50832: remote error: tls: bad certificate 2025-08-13T19:52:42.783097475+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50840: remote error: tls: bad certificate 2025-08-13T19:52:42.818829582+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate 2025-08-13T19:52:42.858970954+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50854: remote error: tls: bad certificate 2025-08-13T19:52:42.898699005+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50862: remote error: tls: bad certificate 2025-08-13T19:52:42.938124517+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate 2025-08-13T19:52:42.978704422+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50880: remote error: tls: bad certificate 2025-08-13T19:52:43.021602073+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50884: remote error: tls: bad certificate 2025-08-13T19:52:43.058596086+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50896: remote error: tls: bad certificate 2025-08-13T19:52:43.101869527+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50906: remote error: tls: bad certificate 2025-08-13T19:52:43.139448487+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50912: remote error: tls: bad certificate 2025-08-13T19:52:43.177199172+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50920: remote error: tls: bad certificate 2025-08-13T19:52:43.233126223+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50926: remote error: tls: bad certificate 
2025-08-13T19:52:43.260025009+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50930: remote error: tls: bad certificate 2025-08-13T19:52:43.304552106+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50932: remote error: tls: bad certificate 2025-08-13T19:52:43.337638108+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50936: remote error: tls: bad certificate 2025-08-13T19:52:43.384202933+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50952: remote error: tls: bad certificate 2025-08-13T19:52:43.421442143+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate 2025-08-13T19:52:43.467144854+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50968: remote error: tls: bad certificate 2025-08-13T19:52:43.484164738+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50984: remote error: tls: bad certificate 2025-08-13T19:52:43.501910653+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50990: remote error: tls: bad certificate 2025-08-13T19:52:43.510504498+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50994: remote error: tls: bad certificate 2025-08-13T19:52:43.541136590+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50996: remote error: tls: bad certificate 2025-08-13T19:52:43.577963828+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51010: remote error: tls: bad certificate 2025-08-13T19:52:43.581344194+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51020: remote error: tls: bad certificate 2025-08-13T19:52:43.623530135+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51022: remote error: tls: bad certificate 2025-08-13T19:52:43.660688303+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51038: remote error: tls: bad certificate 2025-08-13T19:52:43.699336413+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51042: remote error: tls: bad certificate 2025-08-13T19:52:43.739646039+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51058: remote error: tls: bad certificate 2025-08-13T19:52:43.781623654+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51060: remote error: tls: bad certificate 2025-08-13T19:52:43.819892993+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51066: remote error: tls: bad certificate 2025-08-13T19:52:43.851440761+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51076: remote error: tls: bad certificate 2025-08-13T19:52:43.859924972+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51086: remote error: tls: bad certificate 2025-08-13T19:52:43.899604992+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51094: remote error: tls: bad certificate 2025-08-13T19:52:43.944966503+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51098: remote error: tls: bad certificate 2025-08-13T19:52:45.225947302+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51112: remote error: tls: bad certificate 2025-08-13T19:52:45.240247699+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51122: remote error: tls: bad certificate 
2025-08-13T19:52:45.256047379+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51130: remote error: tls: bad certificate 2025-08-13T19:52:45.271318753+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51146: remote error: tls: bad certificate 2025-08-13T19:52:45.296477029+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51162: remote error: tls: bad certificate 2025-08-13T19:52:45.313750891+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51176: remote error: tls: bad certificate 2025-08-13T19:52:45.330859518+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51182: remote error: tls: bad certificate 2025-08-13T19:52:45.347684477+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51192: remote error: tls: bad certificate 2025-08-13T19:52:45.365724360+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51204: remote error: tls: bad certificate 2025-08-13T19:52:45.382989652+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51210: remote error: tls: bad certificate 2025-08-13T19:52:45.399849591+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51224: remote error: tls: bad certificate 2025-08-13T19:52:45.416096524+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51236: remote error: tls: bad certificate 2025-08-13T19:52:45.432255704+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51246: remote error: tls: bad certificate 2025-08-13T19:52:45.448523286+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51254: remote error: tls: bad certificate 2025-08-13T19:52:45.464452620+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51270: remote error: tls: bad certificate 2025-08-13T19:52:45.481724901+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51272: remote error: tls: bad certificate 2025-08-13T19:52:45.502720799+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51288: remote error: tls: bad certificate 2025-08-13T19:52:45.523698776+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51300: remote error: tls: bad certificate 2025-08-13T19:52:45.542021538+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51316: remote error: tls: bad certificate 2025-08-13T19:52:45.558209038+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51328: remote error: tls: bad certificate 2025-08-13T19:52:45.573304198+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51338: remote error: tls: bad certificate 2025-08-13T19:52:45.590321372+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51342: remote error: tls: bad certificate 2025-08-13T19:52:45.605003560+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51352: remote error: tls: bad certificate 2025-08-13T19:52:45.621551881+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51362: remote error: tls: bad certificate 2025-08-13T19:52:45.638193525+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51374: remote error: tls: bad certificate 2025-08-13T19:52:45.653460929+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51388: remote error: tls: bad certificate 
2025-08-13T19:52:45.670676119+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51392: remote error: tls: bad certificate 2025-08-13T19:52:45.687166648+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51400: remote error: tls: bad certificate 2025-08-13T19:52:45.704924204+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51402: remote error: tls: bad certificate 2025-08-13T19:52:45.724972404+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51412: remote error: tls: bad certificate 2025-08-13T19:52:45.741738981+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51428: remote error: tls: bad certificate 2025-08-13T19:52:45.761212126+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51444: remote error: tls: bad certificate 2025-08-13T19:52:45.779491976+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51460: remote error: tls: bad certificate 2025-08-13T19:52:45.796329015+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51472: remote error: tls: bad certificate 2025-08-13T19:52:45.811374093+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51474: remote error: tls: bad certificate 2025-08-13T19:52:45.826917786+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51480: remote error: tls: bad certificate 2025-08-13T19:52:45.844705732+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51486: remote error: tls: bad certificate 2025-08-13T19:52:45.868352425+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51496: remote error: tls: bad certificate 2025-08-13T19:52:45.889509207+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51504: remote error: tls: bad certificate 2025-08-13T19:52:45.907740935+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51506: remote error: tls: bad certificate 2025-08-13T19:52:45.924041009+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51516: remote error: tls: bad certificate 2025-08-13T19:52:45.938378668+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51530: remote error: tls: bad certificate 2025-08-13T19:52:45.957604385+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51532: remote error: tls: bad certificate 2025-08-13T19:52:45.974873586+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51542: remote error: tls: bad certificate 2025-08-13T19:52:45.995423291+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51546: remote error: tls: bad certificate 2025-08-13T19:52:46.012261460+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51560: remote error: tls: bad certificate 2025-08-13T19:52:46.028545314+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51576: remote error: tls: bad certificate 2025-08-13T19:52:46.052997680+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51582: remote error: tls: bad certificate 2025-08-13T19:52:46.068505791+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51596: remote error: tls: bad certificate 2025-08-13T19:52:46.083164058+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51608: remote error: tls: bad certificate 
2025-08-13T19:52:46.097340252+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51622: remote error: tls: bad certificate 2025-08-13T19:52:46.116855027+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51636: remote error: tls: bad certificate 2025-08-13T19:52:46.134343505+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51648: remote error: tls: bad certificate 2025-08-13T19:52:46.149321121+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51654: remote error: tls: bad certificate 2025-08-13T19:52:46.164659358+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51670: remote error: tls: bad certificate 2025-08-13T19:52:46.182469634+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51674: remote error: tls: bad certificate 2025-08-13T19:52:46.219869249+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51684: remote error: tls: bad certificate 2025-08-13T19:52:46.260067963+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51696: remote error: tls: bad certificate 2025-08-13T19:52:46.302070268+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51712: remote error: tls: bad certificate 2025-08-13T19:52:46.339118923+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51726: remote error: tls: bad certificate 2025-08-13T19:52:46.381685234+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51742: remote error: tls: bad certificate 2025-08-13T19:52:46.424425790+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51744: remote error: tls: bad certificate 2025-08-13T19:52:46.460672882+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51746: remote error: tls: bad certificate 2025-08-13T19:52:46.499684202+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51758: remote error: tls: bad certificate 2025-08-13T19:52:46.539385302+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51760: remote error: tls: bad certificate 2025-08-13T19:52:46.577085205+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51770: remote error: tls: bad certificate 2025-08-13T19:52:46.618899655+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:52:47.525561588+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51788: remote error: tls: bad certificate 2025-08-13T19:52:47.549911571+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51800: remote error: tls: bad certificate 2025-08-13T19:52:47.571483325+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51804: remote error: tls: bad certificate 2025-08-13T19:52:47.591991639+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51818: remote error: tls: bad certificate 2025-08-13T19:52:47.613573423+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 2025-08-13T19:52:52.822261126+00:00 stderr F 2025/08/13 19:52:52 http: TLS handshake error from 127.0.0.1:41480: remote error: tls: bad certificate 2025-08-13T19:52:53.250892205+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41492: remote error: tls: bad certificate 
2025-08-13T19:52:53.288911887+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41498: remote error: tls: bad certificate 2025-08-13T19:52:53.306545969+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41502: remote error: tls: bad certificate 2025-08-13T19:52:53.329355968+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41510: remote error: tls: bad certificate 2025-08-13T19:52:53.371528159+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41522: remote error: tls: bad certificate 2025-08-13T19:52:53.388894133+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41532: remote error: tls: bad certificate 2025-08-13T19:52:53.409691325+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41534: remote error: tls: bad certificate 2025-08-13T19:52:53.428690276+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41542: remote error: tls: bad certificate 2025-08-13T19:52:53.448461298+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41556: remote error: tls: bad certificate 2025-08-13T19:52:53.465886094+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41558: remote error: tls: bad certificate 2025-08-13T19:52:53.489946899+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41564: remote error: tls: bad certificate 2025-08-13T19:52:53.524639517+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41572: remote error: tls: bad certificate 2025-08-13T19:52:53.544525362+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41582: remote error: tls: bad certificate 2025-08-13T19:52:53.561312900+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41594: remote error: tls: bad certificate 2025-08-13T19:52:53.595445612+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41600: remote error: tls: bad certificate 2025-08-13T19:52:53.621247736+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41614: remote error: tls: bad certificate 2025-08-13T19:52:53.642441509+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41624: remote error: tls: bad certificate 2025-08-13T19:52:53.658166437+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41638: remote error: tls: bad certificate 2025-08-13T19:52:53.676036176+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41640: remote error: tls: bad certificate 2025-08-13T19:52:53.701036027+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41646: remote error: tls: bad certificate 2025-08-13T19:52:53.720359397+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41652: remote error: tls: bad certificate 2025-08-13T19:52:53.744577396+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41660: remote error: tls: bad certificate 2025-08-13T19:52:53.763927677+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41674: remote error: tls: bad certificate 2025-08-13T19:52:53.782188737+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41686: remote error: tls: bad certificate 2025-08-13T19:52:53.799675384+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41700: remote error: tls: bad certificate 
2025-08-13T19:52:53.823228715+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41704: remote error: tls: bad certificate 2025-08-13T19:52:53.874706880+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41716: remote error: tls: bad certificate 2025-08-13T19:52:53.907436572+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41726: remote error: tls: bad certificate 2025-08-13T19:52:53.929080088+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41738: remote error: tls: bad certificate 2025-08-13T19:52:53.951725902+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41754: remote error: tls: bad certificate 2025-08-13T19:52:53.982139838+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41766: remote error: tls: bad certificate 2025-08-13T19:52:54.005000498+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41772: remote error: tls: bad certificate 2025-08-13T19:52:54.032761258+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41776: remote error: tls: bad certificate 2025-08-13T19:52:54.055918458+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41780: remote error: tls: bad certificate 2025-08-13T19:52:54.080633471+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41782: remote error: tls: bad certificate 2025-08-13T19:52:54.100256039+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41784: remote error: tls: bad certificate 2025-08-13T19:52:54.121094663+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41794: remote error: tls: bad certificate 2025-08-13T19:52:54.143504460+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41800: remote error: tls: bad certificate 2025-08-13T19:52:54.175703647+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41808: remote error: tls: bad certificate 2025-08-13T19:52:54.201592554+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41810: remote error: tls: bad certificate 2025-08-13T19:52:54.247903882+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41826: remote error: tls: bad certificate 2025-08-13T19:52:54.286569942+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41834: remote error: tls: bad certificate 2025-08-13T19:52:54.327975851+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41842: remote error: tls: bad certificate 2025-08-13T19:52:54.355567666+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41854: remote error: tls: bad certificate 2025-08-13T19:52:54.381564386+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41868: remote error: tls: bad certificate 2025-08-13T19:52:54.432900547+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41880: remote error: tls: bad certificate 2025-08-13T19:52:54.461137820+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41896: remote error: tls: bad certificate 2025-08-13T19:52:54.484679820+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41908: remote error: tls: bad certificate 2025-08-13T19:52:54.507568811+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41910: remote error: tls: bad certificate 
2025-08-13T19:52:54.530932366+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41922: remote error: tls: bad certificate 2025-08-13T19:52:54.555939808+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41930: remote error: tls: bad certificate 2025-08-13T19:52:54.578209102+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41934: remote error: tls: bad certificate 2025-08-13T19:52:54.598142099+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41940: remote error: tls: bad certificate 2025-08-13T19:52:54.618382465+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41948: remote error: tls: bad certificate 2025-08-13T19:52:54.640995009+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41964: remote error: tls: bad certificate 2025-08-13T19:52:54.658542898+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41980: remote error: tls: bad certificate 2025-08-13T19:52:54.677479797+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41982: remote error: tls: bad certificate 2025-08-13T19:52:54.694943944+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41992: remote error: tls: bad certificate 2025-08-13T19:52:54.714329486+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41998: remote error: tls: bad certificate 2025-08-13T19:52:54.734549962+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42008: remote error: tls: bad certificate 2025-08-13T19:52:54.753707347+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42018: remote error: tls: bad certificate 2025-08-13T19:52:54.771011529+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42032: remote error: tls: bad certificate 2025-08-13T19:52:54.793527740+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42036: remote error: tls: bad certificate 2025-08-13T19:52:54.810987377+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42044: remote error: tls: bad certificate 2025-08-13T19:52:54.830154683+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42050: remote error: tls: bad certificate 2025-08-13T19:52:54.856083541+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42066: remote error: tls: bad certificate 2025-08-13T19:52:54.870171852+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42082: remote error: tls: bad certificate 2025-08-13T19:52:54.898071926+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42098: remote error: tls: bad certificate 2025-08-13T19:52:54.914767671+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42114: remote error: tls: bad certificate 2025-08-13T19:52:54.936620233+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42118: remote error: tls: bad certificate 2025-08-13T19:52:54.953233386+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42120: remote error: tls: bad certificate 2025-08-13T19:52:54.985152344+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42132: remote error: tls: bad certificate 2025-08-13T19:52:55.002014864+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42148: remote error: tls: bad certificate 
2025-08-13T19:52:55.054629911+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42156: remote error: tls: bad certificate 2025-08-13T19:52:55.071086060+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42162: remote error: tls: bad certificate 2025-08-13T19:52:55.092919321+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42170: remote error: tls: bad certificate 2025-08-13T19:52:55.118983953+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42186: remote error: tls: bad certificate 2025-08-13T19:52:55.133997631+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42192: remote error: tls: bad certificate 2025-08-13T19:52:55.151021575+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42208: remote error: tls: bad certificate 2025-08-13T19:52:55.170148129+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42212: remote error: tls: bad certificate 2025-08-13T19:52:55.242229321+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42224: remote error: tls: bad certificate 2025-08-13T19:52:55.273872862+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42238: remote error: tls: bad certificate 2025-08-13T19:52:55.291481883+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42252: remote error: tls: bad certificate 2025-08-13T19:52:55.308587490+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42260: remote error: tls: bad certificate 2025-08-13T19:52:55.323694750+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42276: remote error: tls: bad certificate 2025-08-13T19:52:55.339687145+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42282: remote error: tls: bad certificate 2025-08-13T19:52:55.358117329+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42294: remote error: tls: bad certificate 2025-08-13T19:52:55.375193855+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42306: remote error: tls: bad certificate 2025-08-13T19:52:55.394539286+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42310: remote error: tls: bad certificate 2025-08-13T19:52:55.412068405+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42316: remote error: tls: bad certificate 2025-08-13T19:52:55.427551085+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42322: remote error: tls: bad certificate 2025-08-13T19:52:55.443897061+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42332: remote error: tls: bad certificate 2025-08-13T19:52:55.461866062+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42334: remote error: tls: bad certificate 2025-08-13T19:52:55.479754801+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42348: remote error: tls: bad certificate 2025-08-13T19:52:55.495305974+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42358: remote error: tls: bad certificate 2025-08-13T19:52:55.512859304+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42370: remote error: tls: bad certificate 2025-08-13T19:52:55.528691324+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42374: remote error: tls: bad certificate 
2025-08-13T19:52:55.544188645+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42382: remote error: tls: bad certificate 2025-08-13T19:52:55.558555384+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42396: remote error: tls: bad certificate 2025-08-13T19:52:55.574377544+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42408: remote error: tls: bad certificate 2025-08-13T19:52:55.589147665+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42410: remote error: tls: bad certificate 2025-08-13T19:52:55.603646977+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42418: remote error: tls: bad certificate 2025-08-13T19:52:55.620698043+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42420: remote error: tls: bad certificate 2025-08-13T19:52:55.638494469+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42432: remote error: tls: bad certificate 2025-08-13T19:52:55.653157467+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42442: remote error: tls: bad certificate 2025-08-13T19:52:55.668553795+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42446: remote error: tls: bad certificate 2025-08-13T19:52:55.685881678+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42462: remote error: tls: bad certificate 2025-08-13T19:52:55.699141055+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42466: remote error: tls: bad certificate 2025-08-13T19:52:55.716705675+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42474: remote error: tls: bad certificate 2025-08-13T19:52:55.731095095+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42478: remote error: tls: bad certificate 2025-08-13T19:52:55.748325765+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42488: remote error: tls: bad certificate 2025-08-13T19:52:55.764921978+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42492: remote error: tls: bad certificate 2025-08-13T19:52:55.780853921+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42508: remote error: tls: bad certificate 2025-08-13T19:52:55.804416932+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42522: remote error: tls: bad certificate 2025-08-13T19:52:55.845388358+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42536: remote error: tls: bad certificate 2025-08-13T19:52:55.887732713+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42552: remote error: tls: bad certificate 2025-08-13T19:52:55.924672094+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42556: remote error: tls: bad certificate 2025-08-13T19:52:55.965096165+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42562: remote error: tls: bad certificate 2025-08-13T19:52:56.006509904+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42574: remote error: tls: bad certificate 2025-08-13T19:52:56.048707594+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42576: remote error: tls: bad certificate 2025-08-13T19:52:56.085449970+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42582: remote error: tls: bad certificate 
2025-08-13T19:52:56.126100917+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42590: remote error: tls: bad certificate 2025-08-13T19:52:56.164636584+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42606: remote error: tls: bad certificate 2025-08-13T19:52:56.214036610+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42608: remote error: tls: bad certificate 2025-08-13T19:52:56.248078759+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42622: remote error: tls: bad certificate 2025-08-13T19:52:56.284706301+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42624: remote error: tls: bad certificate 2025-08-13T19:52:56.334890570+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42636: remote error: tls: bad certificate 2025-08-13T19:52:56.364250135+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42646: remote error: tls: bad certificate 2025-08-13T19:52:56.407463465+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42654: remote error: tls: bad certificate 2025-08-13T19:52:56.445212330+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42670: remote error: tls: bad certificate 2025-08-13T19:52:56.485499106+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42676: remote error: tls: bad certificate 2025-08-13T19:52:56.524645670+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42692: remote error: tls: bad certificate 2025-08-13T19:52:56.565708889+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42700: remote error: tls: bad certificate 2025-08-13T19:52:56.602967379+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42702: remote error: tls: bad certificate 2025-08-13T19:52:56.642951327+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42714: remote error: tls: bad certificate 2025-08-13T19:52:56.684217562+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42720: remote error: tls: bad certificate 2025-08-13T19:52:56.724915080+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42722: remote error: tls: bad certificate 2025-08-13T19:52:56.766377500+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42730: remote error: tls: bad certificate 2025-08-13T19:52:56.806280176+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42738: remote error: tls: bad certificate 2025-08-13T19:52:56.848622651+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42742: remote error: tls: bad certificate 2025-08-13T19:52:56.887936530+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42756: remote error: tls: bad certificate 2025-08-13T19:52:56.925047886+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42760: remote error: tls: bad certificate 2025-08-13T19:52:56.966627020+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42776: remote error: tls: bad certificate 2025-08-13T19:52:57.004946620+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42782: remote error: tls: bad certificate 2025-08-13T19:52:57.046337938+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42784: remote error: tls: bad certificate 
2025-08-13T19:52:57.083754583+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42796: remote error: tls: bad certificate 2025-08-13T19:52:57.123685920+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42800: remote error: tls: bad certificate 2025-08-13T19:52:57.165114619+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42810: remote error: tls: bad certificate 2025-08-13T19:52:57.217762127+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42826: remote error: tls: bad certificate 2025-08-13T19:52:57.245899898+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42834: remote error: tls: bad certificate 2025-08-13T19:52:57.286054221+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42838: remote error: tls: bad certificate 2025-08-13T19:52:57.322986532+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42842: remote error: tls: bad certificate 2025-08-13T19:52:57.366049608+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42852: remote error: tls: bad certificate 2025-08-13T19:52:57.404543844+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42864: remote error: tls: bad certificate 2025-08-13T19:52:57.443362368+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42874: remote error: tls: bad certificate 2025-08-13T19:52:57.484454008+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42886: remote error: tls: bad certificate 2025-08-13T19:52:57.523669134+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42890: remote error: tls: bad certificate 2025-08-13T19:52:57.563286882+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42904: remote error: tls: bad certificate 2025-08-13T19:52:57.617717321+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42914: remote error: tls: bad certificate 2025-08-13T19:52:57.646665185+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42930: remote error: tls: bad certificate 2025-08-13T19:52:57.692141989+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42932: remote error: tls: bad certificate 2025-08-13T19:52:57.723772439+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42940: remote error: tls: bad certificate 2025-08-13T19:52:57.764528949+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42956: remote error: tls: bad certificate 2025-08-13T19:52:57.813446482+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42962: remote error: tls: bad certificate 2025-08-13T19:52:57.840955855+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42964: remote error: tls: bad certificate 2025-08-13T19:52:57.848026926+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42976: remote error: tls: bad certificate 2025-08-13T19:52:57.864354691+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42978: remote error: tls: bad certificate 2025-08-13T19:52:57.889663811+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42986: remote error: tls: bad certificate 2025-08-13T19:52:57.889663811+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42992: remote error: tls: bad certificate 
2025-08-13T19:52:57.908096476+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43006: remote error: tls: bad certificate 2025-08-13T19:52:57.928752624+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43014: remote error: tls: bad certificate 2025-08-13T19:52:57.931942654+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43018: remote error: tls: bad certificate 2025-08-13T19:52:57.972696274+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43022: remote error: tls: bad certificate 2025-08-13T19:52:58.004026516+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43036: remote error: tls: bad certificate 2025-08-13T19:52:58.077624340+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43040: remote error: tls: bad certificate 2025-08-13T19:52:58.099196944+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43048: remote error: tls: bad certificate 2025-08-13T19:52:58.144243276+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43056: remote error: tls: bad certificate 2025-08-13T19:52:58.164125772+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43062: remote error: tls: bad certificate 2025-08-13T19:52:58.206422075+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43070: remote error: tls: bad certificate 2025-08-13T19:52:58.246240849+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43078: remote error: tls: bad certificate 2025-08-13T19:52:58.288872202+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43084: remote error: tls: bad certificate 2025-08-13T19:52:58.341151290+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43100: remote error: tls: bad certificate 2025-08-13T19:52:58.365887304+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43106: remote error: tls: bad certificate 2025-08-13T19:52:58.409760853+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43118: remote error: tls: bad certificate 2025-08-13T19:52:58.448663510+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43132: remote error: tls: bad certificate 2025-08-13T19:52:58.486195918+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43140: remote error: tls: bad certificate 2025-08-13T19:52:58.526132395+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43154: remote error: tls: bad certificate 2025-08-13T19:52:58.565579728+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43158: remote error: tls: bad certificate 2025-08-13T19:52:58.612442612+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43170: remote error: tls: bad certificate 2025-08-13T19:52:58.647630263+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43186: remote error: tls: bad certificate 2025-08-13T19:52:58.688646890+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43188: remote error: tls: bad certificate 2025-08-13T19:52:58.725170550+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43196: remote error: tls: bad certificate 2025-08-13T19:52:58.766512877+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54328: remote error: tls: bad certificate 
2025-08-13T19:52:58.805249169+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54332: remote error: tls: bad certificate 2025-08-13T19:52:58.845984689+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54340: remote error: tls: bad certificate 2025-08-13T19:52:58.885730500+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54350: remote error: tls: bad certificate 2025-08-13T19:52:58.926610643+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54362: remote error: tls: bad certificate 2025-08-13T19:52:58.967522538+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54376: remote error: tls: bad certificate 2025-08-13T19:52:59.010137541+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54386: remote error: tls: bad certificate 2025-08-13T19:52:59.045977381+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54398: remote error: tls: bad certificate 2025-08-13T19:52:59.084213709+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54408: remote error: tls: bad certificate 2025-08-13T19:52:59.123999671+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54422: remote error: tls: bad certificate 2025-08-13T19:52:59.166915983+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54438: remote error: tls: bad certificate 2025-08-13T19:52:59.203270908+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54452: remote error: tls: bad certificate 2025-08-13T19:52:59.248928677+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54462: remote error: tls: bad certificate 2025-08-13T19:52:59.286589269+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54466: remote error: tls: bad certificate 2025-08-13T19:53:05.229994755+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54472: remote error: tls: bad certificate 2025-08-13T19:53:05.246737951+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54488: remote error: tls: bad certificate 2025-08-13T19:53:05.263674723+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54504: remote error: tls: bad certificate 2025-08-13T19:53:05.285403972+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54516: remote error: tls: bad certificate 2025-08-13T19:53:05.301223082+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54520: remote error: tls: bad certificate 2025-08-13T19:53:05.322422625+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54536: remote error: tls: bad certificate 2025-08-13T19:53:05.342665881+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54552: remote error: tls: bad certificate 2025-08-13T19:53:05.362065353+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54558: remote error: tls: bad certificate 2025-08-13T19:53:05.382523416+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54560: remote error: tls: bad certificate 2025-08-13T19:53:05.398259064+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54572: remote error: tls: bad certificate 2025-08-13T19:53:05.418216132+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54578: remote error: tls: bad certificate 
2025-08-13T19:53:05.435298288+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54594: remote error: tls: bad certificate 2025-08-13T19:53:05.453285800+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54606: remote error: tls: bad certificate 2025-08-13T19:53:05.471257461+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54618: remote error: tls: bad certificate 2025-08-13T19:53:05.498908108+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54626: remote error: tls: bad certificate 2025-08-13T19:53:05.520762580+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54640: remote error: tls: bad certificate 2025-08-13T19:53:05.539638158+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54642: remote error: tls: bad certificate 2025-08-13T19:53:05.556648482+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54656: remote error: tls: bad certificate 2025-08-13T19:53:05.574329645+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54664: remote error: tls: bad certificate 2025-08-13T19:53:05.592428570+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54678: remote error: tls: bad certificate 2025-08-13T19:53:05.608616891+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54688: remote error: tls: bad certificate 2025-08-13T19:53:05.625933634+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54698: remote error: tls: bad certificate 2025-08-13T19:53:05.641921509+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54710: remote error: tls: bad certificate 2025-08-13T19:53:05.658406988+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54720: remote error: tls: bad certificate 2025-08-13T19:53:05.683173693+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54724: remote error: tls: bad certificate 2025-08-13T19:53:05.703444870+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54738: remote error: tls: bad certificate 2025-08-13T19:53:05.724500489+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54750: remote error: tls: bad certificate 2025-08-13T19:53:05.742389318+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54754: remote error: tls: bad certificate 2025-08-13T19:53:05.758951460+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54762: remote error: tls: bad certificate 2025-08-13T19:53:05.790780435+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54774: remote error: tls: bad certificate 2025-08-13T19:53:05.807976765+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54780: remote error: tls: bad certificate 2025-08-13T19:53:05.823508947+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54782: remote error: tls: bad certificate 2025-08-13T19:53:05.841977013+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54794: remote error: tls: bad certificate 2025-08-13T19:53:05.863257048+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54798: remote error: tls: bad certificate 2025-08-13T19:53:05.878192283+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54804: remote error: tls: bad certificate 
2025-08-13T19:53:05.895533517+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54810: remote error: tls: bad certificate 2025-08-13T19:53:05.921506496+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54812: remote error: tls: bad certificate 2025-08-13T19:53:05.941344901+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54816: remote error: tls: bad certificate 2025-08-13T19:53:05.956712448+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54824: remote error: tls: bad certificate 2025-08-13T19:53:05.974279778+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54834: remote error: tls: bad certificate 2025-08-13T19:53:05.989635535+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54838: remote error: tls: bad certificate 2025-08-13T19:53:06.007107003+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54846: remote error: tls: bad certificate 2025-08-13T19:53:06.025421104+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54854: remote error: tls: bad certificate 2025-08-13T19:53:06.048625044+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54860: remote error: tls: bad certificate 2025-08-13T19:53:06.071913707+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54874: remote error: tls: bad certificate 2025-08-13T19:53:06.089215219+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54890: remote error: tls: bad certificate 2025-08-13T19:53:06.105432961+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54904: remote error: tls: bad certificate 2025-08-13T19:53:06.125697978+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54918: remote error: tls: bad certificate 2025-08-13T19:53:06.144250206+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54924: remote error: tls: bad certificate 2025-08-13T19:53:06.174491027+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54930: remote error: tls: bad certificate 2025-08-13T19:53:06.191339976+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54936: remote error: tls: bad certificate 2025-08-13T19:53:06.206277961+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54944: remote error: tls: bad certificate 2025-08-13T19:53:06.223619465+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54956: remote error: tls: bad certificate 2025-08-13T19:53:06.245045025+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54964: remote error: tls: bad certificate 2025-08-13T19:53:06.259867487+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54970: remote error: tls: bad certificate 2025-08-13T19:53:06.277542920+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54978: remote error: tls: bad certificate 2025-08-13T19:53:06.295119290+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54986: remote error: tls: bad certificate 2025-08-13T19:53:06.312107523+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54992: remote error: tls: bad certificate 2025-08-13T19:53:06.327216773+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54998: remote error: tls: bad certificate 
2025-08-13T19:53:06.343608990+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55012: remote error: tls: bad certificate 2025-08-13T19:53:06.361256012+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55020: remote error: tls: bad certificate 2025-08-13T19:53:06.381916410+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55036: remote error: tls: bad certificate 2025-08-13T19:53:06.396906137+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55044: remote error: tls: bad certificate 2025-08-13T19:53:06.413956552+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55048: remote error: tls: bad certificate 2025-08-13T19:53:06.428983080+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55052: remote error: tls: bad certificate 2025-08-13T19:53:06.444305856+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55066: remote error: tls: bad certificate 2025-08-13T19:53:06.461992179+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55074: remote error: tls: bad certificate 2025-08-13T19:53:08.124377283+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55090: remote error: tls: bad certificate 2025-08-13T19:53:08.144708872+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55098: remote error: tls: bad certificate 2025-08-13T19:53:08.169200089+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55100: remote error: tls: bad certificate 2025-08-13T19:53:08.191587256+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55108: remote error: tls: bad certificate 2025-08-13T19:53:08.215764564+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55122: remote error: tls: bad certificate 2025-08-13T19:53:15.230254308+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57320: remote error: tls: bad certificate 2025-08-13T19:53:15.254519958+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57336: remote error: tls: bad certificate 2025-08-13T19:53:15.273246881+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate 2025-08-13T19:53:15.290993626+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57344: remote error: tls: bad certificate 2025-08-13T19:53:15.308042922+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57360: remote error: tls: bad certificate 2025-08-13T19:53:15.323416099+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57376: remote error: tls: bad certificate 2025-08-13T19:53:15.348010609+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57378: remote error: tls: bad certificate 2025-08-13T19:53:15.364569510+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57382: remote error: tls: bad certificate 2025-08-13T19:53:15.383879610+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57388: remote error: tls: bad certificate 2025-08-13T19:53:15.403854139+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57404: remote error: tls: bad certificate 2025-08-13T19:53:15.421906672+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57406: remote error: tls: bad certificate 
2025-08-13T19:53:15.441069268+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57422: remote error: tls: bad certificate 2025-08-13T19:53:15.472902974+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57438: remote error: tls: bad certificate 2025-08-13T19:53:15.488501388+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57442: remote error: tls: bad certificate 2025-08-13T19:53:15.502286220+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57454: remote error: tls: bad certificate 2025-08-13T19:53:15.519585602+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate 2025-08-13T19:53:15.544101080+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57458: remote error: tls: bad certificate 2025-08-13T19:53:15.561890296+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57462: remote error: tls: bad certificate 2025-08-13T19:53:15.579199579+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57464: remote error: tls: bad certificate 2025-08-13T19:53:15.599265390+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57468: remote error: tls: bad certificate 2025-08-13T19:53:15.617419847+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate 2025-08-13T19:53:15.633664659+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57486: remote error: tls: bad certificate 2025-08-13T19:53:15.648979215+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57502: remote error: tls: bad certificate 2025-08-13T19:53:15.668605694+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57504: remote error: tls: bad certificate 2025-08-13T19:53:15.685357471+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57516: remote error: tls: bad certificate 2025-08-13T19:53:15.702692944+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57530: remote error: tls: bad certificate 2025-08-13T19:53:15.720031608+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57532: remote error: tls: bad certificate 2025-08-13T19:53:15.738600986+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57534: remote error: tls: bad certificate 2025-08-13T19:53:15.761599861+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57536: remote error: tls: bad certificate 2025-08-13T19:53:15.776962568+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57546: remote error: tls: bad certificate 2025-08-13T19:53:15.791767109+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57558: remote error: tls: bad certificate 2025-08-13T19:53:15.809112343+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57568: remote error: tls: bad certificate 2025-08-13T19:53:15.825866270+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57574: remote error: tls: bad certificate 2025-08-13T19:53:15.841345910+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57580: remote error: tls: bad certificate 2025-08-13T19:53:15.855971407+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57584: remote error: tls: bad certificate 
2025-08-13T19:53:15.871254862+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57590: remote error: tls: bad certificate 2025-08-13T19:53:15.888957645+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57596: remote error: tls: bad certificate 2025-08-13T19:53:15.907379430+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57608: remote error: tls: bad certificate 2025-08-13T19:53:15.925199656+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57620: remote error: tls: bad certificate 2025-08-13T19:53:15.942733185+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57626: remote error: tls: bad certificate 2025-08-13T19:53:15.960165881+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57630: remote error: tls: bad certificate 2025-08-13T19:53:15.976287160+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57636: remote error: tls: bad certificate 2025-08-13T19:53:15.994373955+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57644: remote error: tls: bad certificate 2025-08-13T19:53:16.010107503+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57648: remote error: tls: bad certificate 2025-08-13T19:53:16.024446411+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57654: remote error: tls: bad certificate 2025-08-13T19:53:16.040982671+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57670: remote error: tls: bad certificate 2025-08-13T19:53:16.056222665+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57686: remote error: tls: bad certificate 2025-08-13T19:53:16.073181838+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57690: remote error: tls: bad certificate 2025-08-13T19:53:16.091155159+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57696: remote error: tls: bad certificate 2025-08-13T19:53:16.107908546+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57710: remote error: tls: bad certificate 2025-08-13T19:53:16.130296923+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate 2025-08-13T19:53:16.147045200+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57728: remote error: tls: bad certificate 2025-08-13T19:53:16.161113440+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57736: remote error: tls: bad certificate 2025-08-13T19:53:16.179346959+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57744: remote error: tls: bad certificate 2025-08-13T19:53:16.194681446+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57750: remote error: tls: bad certificate 2025-08-13T19:53:16.214370226+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57754: remote error: tls: bad certificate 2025-08-13T19:53:16.230876996+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57766: remote error: tls: bad certificate 2025-08-13T19:53:16.247018645+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57768: remote error: tls: bad certificate 2025-08-13T19:53:16.264169344+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57772: remote error: tls: bad certificate 
2025-08-13T19:53:16.280287652+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57776: remote error: tls: bad certificate 2025-08-13T19:53:16.300380054+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57792: remote error: tls: bad certificate 2025-08-13T19:53:16.315936757+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57808: remote error: tls: bad certificate 2025-08-13T19:53:16.334189337+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57816: remote error: tls: bad certificate 2025-08-13T19:53:16.353718072+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57818: remote error: tls: bad certificate 2025-08-13T19:53:16.373083804+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57824: remote error: tls: bad certificate 2025-08-13T19:53:16.399998900+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57838: remote error: tls: bad certificate 2025-08-13T19:53:16.418068144+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57844: remote error: tls: bad certificate 2025-08-13T19:53:18.646465778+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57856: remote error: tls: bad certificate 2025-08-13T19:53:18.680082655+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57872: remote error: tls: bad certificate 2025-08-13T19:53:18.706134956+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57882: remote error: tls: bad certificate 2025-08-13T19:53:18.728389240+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57884: remote error: tls: bad certificate 2025-08-13T19:53:18.749899032+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:34698: remote error: tls: bad certificate 2025-08-13T19:53:22.591021295+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34710: remote error: tls: bad certificate 2025-08-13T19:53:22.605655702+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34724: remote error: tls: bad certificate 2025-08-13T19:53:22.620566116+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34736: remote error: tls: bad certificate 2025-08-13T19:53:22.645705832+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34746: remote error: tls: bad certificate 2025-08-13T19:53:22.698604287+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34750: remote error: tls: bad certificate 2025-08-13T19:53:22.723939048+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34754: remote error: tls: bad certificate 2025-08-13T19:53:22.751387410+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34760: remote error: tls: bad certificate 2025-08-13T19:53:22.776338900+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34774: remote error: tls: bad certificate 2025-08-13T19:53:22.803865243+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34788: remote error: tls: bad certificate 2025-08-13T19:53:22.825047576+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34800: remote error: tls: bad certificate 2025-08-13T19:53:22.826663162+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34796: remote error: tls: bad certificate 
2025-08-13T19:53:22.844406477+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34804: remote error: tls: bad certificate 2025-08-13T19:53:22.862061109+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34820: remote error: tls: bad certificate 2025-08-13T19:53:22.880748861+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34824: remote error: tls: bad certificate 2025-08-13T19:53:22.902638684+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34832: remote error: tls: bad certificate 2025-08-13T19:53:22.921644185+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34838: remote error: tls: bad certificate 2025-08-13T19:53:22.933935435+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34840: remote error: tls: bad certificate 2025-08-13T19:53:22.950113506+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34850: remote error: tls: bad certificate 2025-08-13T19:53:22.965595246+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34858: remote error: tls: bad certificate 2025-08-13T19:53:22.981860829+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34870: remote error: tls: bad certificate 2025-08-13T19:53:23.001581990+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34884: remote error: tls: bad certificate 2025-08-13T19:53:23.021316992+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34896: remote error: tls: bad certificate 2025-08-13T19:53:23.039603773+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34900: remote error: tls: bad certificate 2025-08-13T19:53:23.063196324+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34910: remote error: tls: bad certificate 2025-08-13T19:53:23.080436494+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34914: remote error: tls: bad certificate 2025-08-13T19:53:23.100300379+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34922: remote error: tls: bad certificate 2025-08-13T19:53:23.122367917+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34938: remote error: tls: bad certificate 2025-08-13T19:53:23.141321797+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34944: remote error: tls: bad certificate 2025-08-13T19:53:23.159916806+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34948: remote error: tls: bad certificate 2025-08-13T19:53:23.178090393+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34954: remote error: tls: bad certificate 2025-08-13T19:53:23.196761804+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34968: remote error: tls: bad certificate 2025-08-13T19:53:23.220670975+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34976: remote error: tls: bad certificate 2025-08-13T19:53:23.248884208+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34990: remote error: tls: bad certificate 2025-08-13T19:53:23.266045596+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34992: remote error: tls: bad certificate 2025-08-13T19:53:23.285069508+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35000: remote error: tls: bad certificate 
2025-08-13T19:53:23.307900188+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35008: remote error: tls: bad certificate 2025-08-13T19:53:23.326323412+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35020: remote error: tls: bad certificate 2025-08-13T19:53:23.346357272+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35026: remote error: tls: bad certificate 2025-08-13T19:53:23.362327797+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35038: remote error: tls: bad certificate 2025-08-13T19:53:23.379161196+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35044: remote error: tls: bad certificate 2025-08-13T19:53:23.402203582+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35046: remote error: tls: bad certificate 2025-08-13T19:53:23.417482587+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35056: remote error: tls: bad certificate 2025-08-13T19:53:23.447920933+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35058: remote error: tls: bad certificate 2025-08-13T19:53:23.473523032+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35074: remote error: tls: bad certificate 2025-08-13T19:53:23.500123999+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35084: remote error: tls: bad certificate 2025-08-13T19:53:23.518177983+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35098: remote error: tls: bad certificate 2025-08-13T19:53:23.536484004+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35112: remote error: tls: bad certificate 2025-08-13T19:53:23.563257856+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35122: remote error: tls: bad certificate 2025-08-13T19:53:23.585182160+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35130: remote error: tls: bad certificate 2025-08-13T19:53:23.605052625+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35134: remote error: tls: bad certificate 2025-08-13T19:53:23.631633512+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35148: remote error: tls: bad certificate 2025-08-13T19:53:23.653284638+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35156: remote error: tls: bad certificate 2025-08-13T19:53:23.674903663+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35162: remote error: tls: bad certificate 2025-08-13T19:53:23.693084581+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35168: remote error: tls: bad certificate 2025-08-13T19:53:23.716563189+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35180: remote error: tls: bad certificate 2025-08-13T19:53:23.734459658+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35184: remote error: tls: bad certificate 2025-08-13T19:53:23.753267944+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35186: remote error: tls: bad certificate 2025-08-13T19:53:23.771244175+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35188: remote error: tls: bad certificate 2025-08-13T19:53:23.786530830+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35194: remote error: tls: bad certificate 
2025-08-13T19:53:23.808616299+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35210: remote error: tls: bad certificate 2025-08-13T19:53:23.833299832+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35218: remote error: tls: bad certificate 2025-08-13T19:53:23.850043338+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35228: remote error: tls: bad certificate 2025-08-13T19:53:23.865424936+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35242: remote error: tls: bad certificate 2025-08-13T19:53:23.885454916+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35250: remote error: tls: bad certificate 2025-08-13T19:53:23.903011546+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35260: remote error: tls: bad certificate 2025-08-13T19:53:23.921135541+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35262: remote error: tls: bad certificate 2025-08-13T19:53:23.933987027+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35268: remote error: tls: bad certificate 2025-08-13T19:53:23.939688169+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35282: remote error: tls: bad certificate 2025-08-13T19:53:23.961271644+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35290: remote error: tls: bad certificate 2025-08-13T19:53:24.608612958+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35304: remote error: tls: bad certificate 2025-08-13T19:53:24.627416963+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35308: remote error: tls: bad certificate 2025-08-13T19:53:24.648132722+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35312: remote error: tls: bad certificate 2025-08-13T19:53:24.667632877+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35314: remote error: tls: bad certificate 2025-08-13T19:53:24.687181144+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35326: remote error: tls: bad certificate 2025-08-13T19:53:24.708126890+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35334: remote error: tls: bad certificate 2025-08-13T19:53:24.726086821+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35338: remote error: tls: bad certificate 2025-08-13T19:53:24.749867818+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35344: remote error: tls: bad certificate 2025-08-13T19:53:24.769401404+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35346: remote error: tls: bad certificate 2025-08-13T19:53:24.788411295+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35348: remote error: tls: bad certificate 2025-08-13T19:53:24.806189681+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35362: remote error: tls: bad certificate 2025-08-13T19:53:24.823518654+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35364: remote error: tls: bad certificate 2025-08-13T19:53:24.839507479+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35380: remote error: tls: bad certificate 2025-08-13T19:53:24.863265816+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35382: remote error: tls: bad certificate 
2025-08-13T19:53:24.888878835+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35384: remote error: tls: bad certificate 2025-08-13T19:53:24.903199092+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35390: remote error: tls: bad certificate 2025-08-13T19:53:24.922199463+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35392: remote error: tls: bad certificate 2025-08-13T19:53:24.938519207+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35408: remote error: tls: bad certificate 2025-08-13T19:53:24.964687082+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35412: remote error: tls: bad certificate 2025-08-13T19:53:24.984069714+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35428: remote error: tls: bad certificate 2025-08-13T19:53:25.010507486+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35442: remote error: tls: bad certificate 2025-08-13T19:53:25.027988314+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35456: remote error: tls: bad certificate 2025-08-13T19:53:25.046068598+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35460: remote error: tls: bad certificate 2025-08-13T19:53:25.066701436+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35464: remote error: tls: bad certificate 2025-08-13T19:53:25.092472169+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35466: remote error: tls: bad certificate 2025-08-13T19:53:25.110581865+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35478: remote error: tls: bad certificate 2025-08-13T19:53:25.127249079+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35484: remote error: tls: bad certificate 2025-08-13T19:53:25.144859750+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35494: remote error: tls: bad certificate 2025-08-13T19:53:25.168981347+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35510: remote error: tls: bad certificate 2025-08-13T19:53:25.187260547+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35522: remote error: tls: bad certificate 2025-08-13T19:53:25.203738546+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35536: remote error: tls: bad certificate 2025-08-13T19:53:25.233270696+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35544: remote error: tls: bad certificate 2025-08-13T19:53:25.250883778+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35558: remote error: tls: bad certificate 2025-08-13T19:53:25.266268146+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35560: remote error: tls: bad certificate 2025-08-13T19:53:25.281882170+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35570: remote error: tls: bad certificate 2025-08-13T19:53:25.301460557+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35580: remote error: tls: bad certificate 2025-08-13T19:53:25.319433589+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35582: remote error: tls: bad certificate 2025-08-13T19:53:25.339662705+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35590: remote error: tls: bad certificate 
2025-08-13T19:53:25.362344160+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35592: remote error: tls: bad certificate 2025-08-13T19:53:25.381756173+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35608: remote error: tls: bad certificate 2025-08-13T19:53:25.404191501+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35610: remote error: tls: bad certificate 2025-08-13T19:53:25.420409283+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35612: remote error: tls: bad certificate 2025-08-13T19:53:25.434623817+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35626: remote error: tls: bad certificate 2025-08-13T19:53:25.454376730+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35634: remote error: tls: bad certificate 2025-08-13T19:53:25.472355041+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35636: remote error: tls: bad certificate 2025-08-13T19:53:25.492913216+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35650: remote error: tls: bad certificate 2025-08-13T19:53:25.507830691+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35654: remote error: tls: bad certificate 2025-08-13T19:53:25.523375113+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35668: remote error: tls: bad certificate 2025-08-13T19:53:25.538526555+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35676: remote error: tls: bad certificate 2025-08-13T19:53:25.555080726+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35682: remote error: tls: bad certificate 2025-08-13T19:53:25.568680863+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35698: remote error: tls: bad certificate 2025-08-13T19:53:25.585471281+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35710: remote error: tls: bad certificate 2025-08-13T19:53:25.611140111+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35712: remote error: tls: bad certificate 2025-08-13T19:53:25.626273972+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35722: remote error: tls: bad certificate 2025-08-13T19:53:25.642274498+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35730: remote error: tls: bad certificate 2025-08-13T19:53:25.659359294+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35732: remote error: tls: bad certificate 2025-08-13T19:53:25.680177646+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35734: remote error: tls: bad certificate 2025-08-13T19:53:25.700996869+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35740: remote error: tls: bad certificate 2025-08-13T19:53:25.716110819+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35742: remote error: tls: bad certificate 2025-08-13T19:53:25.740514744+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35748: remote error: tls: bad certificate 2025-08-13T19:53:25.759281618+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35764: remote error: tls: bad certificate 2025-08-13T19:53:25.775606942+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35776: remote error: tls: bad certificate 
2025-08-13T19:53:25.797337521+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35792: remote error: tls: bad certificate 2025-08-13T19:53:25.826253004+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35796: remote error: tls: bad certificate 2025-08-13T19:53:25.866070427+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35800: remote error: tls: bad certificate 2025-08-13T19:53:25.909065421+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35806: remote error: tls: bad certificate 2025-08-13T19:53:25.945585070+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35814: remote error: tls: bad certificate 2025-08-13T19:53:25.985547478+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35824: remote error: tls: bad certificate 2025-08-13T19:53:26.029852279+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35836: remote error: tls: bad certificate 2025-08-13T19:53:26.068288143+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35840: remote error: tls: bad certificate 2025-08-13T19:53:26.105680077+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35846: remote error: tls: bad certificate 2025-08-13T19:53:26.146863929+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35856: remote error: tls: bad certificate 2025-08-13T19:53:26.190884572+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35870: remote error: tls: bad certificate 2025-08-13T19:53:26.228761950+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35880: remote error: tls: bad certificate 2025-08-13T19:53:26.266262067+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35896: remote error: tls: bad certificate 2025-08-13T19:53:26.309062875+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35908: remote error: tls: bad certificate 2025-08-13T19:53:26.351186044+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35912: remote error: tls: bad certificate 2025-08-13T19:53:26.386908341+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35924: remote error: tls: bad certificate 2025-08-13T19:53:26.425640023+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35938: remote error: tls: bad certificate 2025-08-13T19:53:26.465864668+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35950: remote error: tls: bad certificate 2025-08-13T19:53:26.511654321+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35966: remote error: tls: bad certificate 2025-08-13T19:53:26.550470566+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35974: remote error: tls: bad certificate 2025-08-13T19:53:26.590697781+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35984: remote error: tls: bad certificate 2025-08-13T19:53:26.626270743+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35988: remote error: tls: bad certificate 2025-08-13T19:53:26.666348433+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35994: remote error: tls: bad certificate 2025-08-13T19:53:26.704407896+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36008: remote error: tls: bad certificate 
2025-08-13T19:53:26.744889409+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36014: remote error: tls: bad certificate 2025-08-13T19:53:26.785546426+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36016: remote error: tls: bad certificate 2025-08-13T19:53:26.823026432+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36032: remote error: tls: bad certificate 2025-08-13T19:53:26.864359949+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36040: remote error: tls: bad certificate 2025-08-13T19:53:26.937361086+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate 2025-08-13T19:53:26.985947649+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36050: remote error: tls: bad certificate 2025-08-13T19:53:27.005917928+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36052: remote error: tls: bad certificate 2025-08-13T19:53:27.023608801+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36062: remote error: tls: bad certificate 2025-08-13T19:53:27.064141945+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36066: remote error: tls: bad certificate 2025-08-13T19:53:27.107856529+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36078: remote error: tls: bad certificate 2025-08-13T19:53:27.147151868+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36090: remote error: tls: bad certificate 2025-08-13T19:53:27.186036994+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36106: remote error: tls: bad certificate 2025-08-13T19:53:27.225337063+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36120: remote error: tls: bad certificate 2025-08-13T19:53:27.267027029+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36122: remote error: tls: bad certificate 2025-08-13T19:53:27.305330799+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36126: remote error: tls: bad certificate 2025-08-13T19:53:27.344620998+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36134: remote error: tls: bad certificate 2025-08-13T19:53:27.387692233+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36144: remote error: tls: bad certificate 2025-08-13T19:53:27.428065963+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36148: remote error: tls: bad certificate 2025-08-13T19:53:27.465972601+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36154: remote error: tls: bad certificate 2025-08-13T19:53:27.505528007+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36158: remote error: tls: bad certificate 2025-08-13T19:53:27.545444804+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36172: remote error: tls: bad certificate 2025-08-13T19:53:27.583631270+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36182: remote error: tls: bad certificate 2025-08-13T19:53:27.626555522+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36190: remote error: tls: bad certificate 2025-08-13T19:53:27.664970126+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36202: remote error: tls: bad certificate 
2025-08-13T19:53:27.706243150+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36208: remote error: tls: bad certificate 2025-08-13T19:53:27.750895711+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36212: remote error: tls: bad certificate 2025-08-13T19:53:27.785310780+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36222: remote error: tls: bad certificate 2025-08-13T19:53:27.824358372+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36234: remote error: tls: bad certificate 2025-08-13T19:53:27.863650190+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36242: remote error: tls: bad certificate 2025-08-13T19:53:27.907160328+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36248: remote error: tls: bad certificate 2025-08-13T19:53:27.944426589+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36260: remote error: tls: bad certificate 2025-08-13T19:53:27.988175244+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36266: remote error: tls: bad certificate 2025-08-13T19:53:28.025438775+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36276: remote error: tls: bad certificate 2025-08-13T19:53:28.066518224+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36292: remote error: tls: bad certificate 2025-08-13T19:53:28.104514646+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36302: remote error: tls: bad certificate 2025-08-13T19:53:28.147359285+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36304: remote error: tls: bad certificate 2025-08-13T19:53:28.183104632+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36316: remote error: tls: bad certificate 2025-08-13T19:53:28.228401082+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36324: remote error: tls: bad certificate 2025-08-13T19:53:28.268228015+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36332: remote error: tls: bad certificate 2025-08-13T19:53:28.304863848+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36338: remote error: tls: bad certificate 2025-08-13T19:53:28.345934627+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36342: remote error: tls: bad certificate 2025-08-13T19:53:28.387258953+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36358: remote error: tls: bad certificate 2025-08-13T19:53:28.426484299+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36374: remote error: tls: bad certificate 2025-08-13T19:53:28.467051074+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36382: remote error: tls: bad certificate 2025-08-13T19:53:28.507511066+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36398: remote error: tls: bad certificate 2025-08-13T19:53:28.542378358+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36410: remote error: tls: bad certificate 2025-08-13T19:53:28.587055820+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36418: remote error: tls: bad certificate 2025-08-13T19:53:28.623354253+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36426: remote error: tls: bad certificate 
2025-08-13T19:53:29.089904451+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55254: remote error: tls: bad certificate 2025-08-13T19:53:29.108303885+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55260: remote error: tls: bad certificate 2025-08-13T19:53:29.126511273+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55276: remote error: tls: bad certificate 2025-08-13T19:53:29.143966440+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55288: remote error: tls: bad certificate 2025-08-13T19:53:29.161856669+00:00 stderr F 2025/08/13 19:53:29 http: TLS handshake error from 127.0.0.1:55294: remote error: tls: bad certificate 2025-08-13T19:53:30.648552861+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55300: remote error: tls: bad certificate 2025-08-13T19:53:30.670918988+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55302: remote error: tls: bad certificate 2025-08-13T19:53:30.690852175+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate 2025-08-13T19:53:30.706744047+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55324: remote error: tls: bad certificate 2025-08-13T19:53:30.730114503+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55330: remote error: tls: bad certificate 2025-08-13T19:53:30.777334826+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55332: remote error: tls: bad certificate 2025-08-13T19:53:30.797171361+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:53:30.815413220+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55344: remote error: tls: bad certificate 2025-08-13T19:53:30.830752917+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55356: remote error: tls: bad certificate 2025-08-13T19:53:30.848394119+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55362: remote error: tls: bad certificate 2025-08-13T19:53:30.865723282+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:53:30.882869460+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55376: remote error: tls: bad certificate 2025-08-13T19:53:30.900918944+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:53:30.918315269+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55392: remote error: tls: bad certificate 2025-08-13T19:53:30.936917549+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55408: remote error: tls: bad certificate 2025-08-13T19:53:30.955001983+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55420: remote error: tls: bad certificate 2025-08-13T19:53:30.973015396+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55434: remote error: tls: bad certificate 2025-08-13T19:53:30.991390179+00:00 stderr F 2025/08/13 19:53:30 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 2025-08-13T19:53:31.007022344+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55450: remote error: tls: bad certificate 
2025-08-13T19:53:31.022090353+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55466: remote error: tls: bad certificate 2025-08-13T19:53:31.042383720+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55474: remote error: tls: bad certificate 2025-08-13T19:53:31.061742501+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55484: remote error: tls: bad certificate 2025-08-13T19:53:31.079064354+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:53:31.099677331+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate 2025-08-13T19:53:31.124328733+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55502: remote error: tls: bad certificate 2025-08-13T19:53:31.142139209+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate 2025-08-13T19:53:31.161342126+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55524: remote error: tls: bad certificate 2025-08-13T19:53:31.181322705+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:53:31.204093203+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55538: remote error: tls: bad certificate 2025-08-13T19:53:31.226852321+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate 2025-08-13T19:53:31.244687408+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55558: remote error: tls: bad certificate 2025-08-13T19:53:31.263547545+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate 2025-08-13T19:53:31.285508470+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:53:31.300322562+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate 2025-08-13T19:53:31.315855434+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate 2025-08-13T19:53:31.337696365+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55616: remote error: tls: bad certificate 2025-08-13T19:53:31.361398320+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55622: remote error: tls: bad certificate 2025-08-13T19:53:31.382212752+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:53:31.398568328+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate 2025-08-13T19:53:31.413845023+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55646: remote error: tls: bad certificate 2025-08-13T19:53:31.429848348+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55650: remote error: tls: bad certificate 2025-08-13T19:53:31.444301550+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55652: remote error: tls: bad certificate 2025-08-13T19:53:31.461504869+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate 
2025-08-13T19:53:31.476961669+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55684: remote error: tls: bad certificate 2025-08-13T19:53:31.493117119+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55694: remote error: tls: bad certificate 2025-08-13T19:53:31.514311192+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55696: remote error: tls: bad certificate 2025-08-13T19:53:31.533068836+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55712: remote error: tls: bad certificate 2025-08-13T19:53:31.548439704+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate 2025-08-13T19:53:31.565037486+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55734: remote error: tls: bad certificate 2025-08-13T19:53:31.580173317+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55750: remote error: tls: bad certificate 2025-08-13T19:53:31.604257492+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55760: remote error: tls: bad certificate 2025-08-13T19:53:31.622169452+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55772: remote error: tls: bad certificate 2025-08-13T19:53:31.640706120+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55780: remote error: tls: bad certificate 2025-08-13T19:53:31.668234673+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:53:31.686745880+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55804: remote error: tls: bad certificate 2025-08-13T19:53:31.702044525+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate 2025-08-13T19:53:31.715996002+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55814: remote error: tls: bad certificate 2025-08-13T19:53:31.731558275+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate 2025-08-13T19:53:31.755003593+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55824: remote error: tls: bad certificate 2025-08-13T19:53:31.775730363+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55836: remote error: tls: bad certificate 2025-08-13T19:53:31.793231691+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55838: remote error: tls: bad certificate 2025-08-13T19:53:31.818394737+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55854: remote error: tls: bad certificate 2025-08-13T19:53:31.836188003+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55858: remote error: tls: bad certificate 2025-08-13T19:53:31.858885919+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55870: remote error: tls: bad certificate 2025-08-13T19:53:31.875597495+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:53:31.892421774+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55888: remote error: tls: bad certificate 2025-08-13T19:53:31.908754679+00:00 stderr F 2025/08/13 19:53:31 http: TLS handshake error from 127.0.0.1:55890: remote error: tls: bad certificate 
2025-08-13T19:53:35.224718462+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55896: remote error: tls: bad certificate 2025-08-13T19:53:35.244349553+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55906: remote error: tls: bad certificate 2025-08-13T19:53:35.261050260+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55920: remote error: tls: bad certificate 2025-08-13T19:53:35.279110096+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55926: remote error: tls: bad certificate 2025-08-13T19:53:35.303117371+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55938: remote error: tls: bad certificate 2025-08-13T19:53:35.321457375+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55952: remote error: tls: bad certificate 2025-08-13T19:53:35.337327778+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55958: remote error: tls: bad certificate 2025-08-13T19:53:35.355110416+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55968: remote error: tls: bad certificate 2025-08-13T19:53:35.368436766+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55978: remote error: tls: bad certificate 2025-08-13T19:53:35.389464507+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55994: remote error: tls: bad certificate 2025-08-13T19:53:35.408029257+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55996: remote error: tls: bad certificate 2025-08-13T19:53:35.425146375+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:55998: remote error: tls: bad certificate 2025-08-13T19:53:35.441105791+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56000: remote error: tls: bad certificate 2025-08-13T19:53:35.457208691+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56004: remote error: tls: bad certificate 2025-08-13T19:53:35.473743393+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56010: remote error: tls: bad certificate 2025-08-13T19:53:35.498664745+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56020: remote error: tls: bad certificate 2025-08-13T19:53:35.513682983+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56034: remote error: tls: bad certificate 2025-08-13T19:53:35.526658354+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56040: remote error: tls: bad certificate 2025-08-13T19:53:35.549298110+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56046: remote error: tls: bad certificate 2025-08-13T19:53:35.567544191+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56052: remote error: tls: bad certificate 2025-08-13T19:53:35.584734132+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56066: remote error: tls: bad certificate 2025-08-13T19:53:35.600997377+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56068: remote error: tls: bad certificate 2025-08-13T19:53:35.618671231+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56074: remote error: tls: bad certificate 2025-08-13T19:53:35.634733770+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56076: remote error: tls: bad certificate 
2025-08-13T19:53:35.654704980+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56088: remote error: tls: bad certificate 2025-08-13T19:53:35.669576245+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56090: remote error: tls: bad certificate 2025-08-13T19:53:35.687559288+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56102: remote error: tls: bad certificate 2025-08-13T19:53:35.705010066+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56112: remote error: tls: bad certificate 2025-08-13T19:53:35.722108005+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56118: remote error: tls: bad certificate 2025-08-13T19:53:35.743603298+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:53:35.766235015+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56138: remote error: tls: bad certificate 2025-08-13T19:53:35.786905845+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56146: remote error: tls: bad certificate 2025-08-13T19:53:35.802561162+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56154: remote error: tls: bad certificate 2025-08-13T19:53:35.818548608+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56168: remote error: tls: bad certificate 2025-08-13T19:53:35.838656143+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56172: remote error: tls: bad certificate 2025-08-13T19:53:35.857985005+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56186: remote error: tls: bad certificate 2025-08-13T19:53:35.875248337+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56198: remote error: tls: bad certificate 2025-08-13T19:53:35.891375408+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56212: remote error: tls: bad certificate 2025-08-13T19:53:35.906640524+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56216: remote error: tls: bad certificate 2025-08-13T19:53:35.928942981+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56232: remote error: tls: bad certificate 2025-08-13T19:53:35.947115659+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56244: remote error: tls: bad certificate 2025-08-13T19:53:35.968036007+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56254: remote error: tls: bad certificate 2025-08-13T19:53:35.987885954+00:00 stderr F 2025/08/13 19:53:35 http: TLS handshake error from 127.0.0.1:56270: remote error: tls: bad certificate 2025-08-13T19:53:36.005971000+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56284: remote error: tls: bad certificate 2025-08-13T19:53:36.063898344+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56288: remote error: tls: bad certificate 2025-08-13T19:53:36.100912221+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56302: remote error: tls: bad certificate 2025-08-13T19:53:36.113892492+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56308: remote error: tls: bad certificate 2025-08-13T19:53:36.133520782+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56312: remote error: tls: bad certificate 
2025-08-13T19:53:36.152039901+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56318: remote error: tls: bad certificate 2025-08-13T19:53:36.172991059+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56334: remote error: tls: bad certificate 2025-08-13T19:53:36.193922207+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56336: remote error: tls: bad certificate 2025-08-13T19:53:36.214219296+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56350: remote error: tls: bad certificate 2025-08-13T19:53:36.236563664+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56360: remote error: tls: bad certificate 2025-08-13T19:53:36.254482196+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56362: remote error: tls: bad certificate 2025-08-13T19:53:36.275089304+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56370: remote error: tls: bad certificate 2025-08-13T19:53:36.290217346+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56378: remote error: tls: bad certificate 2025-08-13T19:53:36.308409306+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56388: remote error: tls: bad certificate 2025-08-13T19:53:36.327734598+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 2025-08-13T19:53:36.349240702+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56402: remote error: tls: bad certificate 2025-08-13T19:53:36.370983732+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56410: remote error: tls: bad certificate 2025-08-13T19:53:36.390080348+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56426: remote error: tls: bad certificate 2025-08-13T19:53:36.407403552+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56442: remote error: tls: bad certificate 2025-08-13T19:53:36.424618184+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56450: remote error: tls: bad certificate 2025-08-13T19:53:36.464289917+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:53:36.482019983+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56472: remote error: tls: bad certificate 2025-08-13T19:53:36.499596495+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56484: remote error: tls: bad certificate 2025-08-13T19:53:36.518895376+00:00 stderr F 2025/08/13 19:53:36 http: TLS handshake error from 127.0.0.1:56498: remote error: tls: bad certificate 2025-08-13T19:53:39.228603146+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate 2025-08-13T19:53:39.242932135+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50224: remote error: tls: bad certificate 2025-08-13T19:53:39.258147480+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50226: remote error: tls: bad certificate 2025-08-13T19:53:39.273697894+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50240: remote error: tls: bad certificate 2025-08-13T19:53:39.297952886+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50244: remote error: tls: bad certificate 
2025-08-13T19:53:39.314940781+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50256: remote error: tls: bad certificate 2025-08-13T19:53:39.330761093+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50268: remote error: tls: bad certificate 2025-08-13T19:53:39.345921536+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50274: remote error: tls: bad certificate 2025-08-13T19:53:39.366085762+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate 2025-08-13T19:53:39.381858312+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50288: remote error: tls: bad certificate 2025-08-13T19:53:39.399841156+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50290: remote error: tls: bad certificate 2025-08-13T19:53:39.414869175+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50298: remote error: tls: bad certificate 2025-08-13T19:53:39.434983749+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50312: remote error: tls: bad certificate 2025-08-13T19:53:39.441425853+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50324: remote error: tls: bad certificate 2025-08-13T19:53:39.454354192+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate 2025-08-13T19:53:39.460229950+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50348: remote error: tls: bad certificate 2025-08-13T19:53:39.473339114+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50362: remote error: tls: bad certificate 2025-08-13T19:53:39.485412799+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50372: remote error: tls: bad certificate 2025-08-13T19:53:39.488173648+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50376: remote error: tls: bad certificate 2025-08-13T19:53:39.506087959+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate 2025-08-13T19:53:39.508954771+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50396: remote error: tls: bad certificate 2025-08-13T19:53:39.523573769+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50400: remote error: tls: bad certificate 2025-08-13T19:53:39.535036366+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50410: remote error: tls: bad certificate 2025-08-13T19:53:39.540852022+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50416: remote error: tls: bad certificate 2025-08-13T19:53:39.556114958+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate 2025-08-13T19:53:39.570016065+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50430: remote error: tls: bad certificate 2025-08-13T19:53:39.591465867+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50436: remote error: tls: bad certificate 2025-08-13T19:53:39.609619156+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50446: remote error: tls: bad certificate 2025-08-13T19:53:39.624335206+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50454: remote error: tls: bad certificate 
2025-08-13T19:53:39.646403826+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50458: remote error: tls: bad certificate 2025-08-13T19:53:39.662891037+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate 2025-08-13T19:53:39.679004807+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50472: remote error: tls: bad certificate 2025-08-13T19:53:39.693375467+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50488: remote error: tls: bad certificate 2025-08-13T19:53:39.710139846+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50492: remote error: tls: bad certificate 2025-08-13T19:53:39.731186017+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50506: remote error: tls: bad certificate 2025-08-13T19:53:39.745092314+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50514: remote error: tls: bad certificate 2025-08-13T19:53:39.759182706+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50526: remote error: tls: bad certificate 2025-08-13T19:53:39.785442716+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50536: remote error: tls: bad certificate 2025-08-13T19:53:39.813161948+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50550: remote error: tls: bad certificate 2025-08-13T19:53:39.838641005+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate 2025-08-13T19:53:39.864312688+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50572: remote error: tls: bad certificate 2025-08-13T19:53:39.880470369+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50584: remote error: tls: bad certificate 2025-08-13T19:53:39.896138737+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50598: remote error: tls: bad certificate 2025-08-13T19:53:39.912435242+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50606: remote error: tls: bad certificate 2025-08-13T19:53:39.932437133+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50612: remote error: tls: bad certificate 2025-08-13T19:53:39.949203182+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50622: remote error: tls: bad certificate 2025-08-13T19:53:39.966415514+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50636: remote error: tls: bad certificate 2025-08-13T19:53:39.981556116+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50648: remote error: tls: bad certificate 2025-08-13T19:53:39.997675256+00:00 stderr F 2025/08/13 19:53:39 http: TLS handshake error from 127.0.0.1:50656: remote error: tls: bad certificate 2025-08-13T19:53:40.011871321+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate 2025-08-13T19:53:40.034566529+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50686: remote error: tls: bad certificate 2025-08-13T19:53:40.049210498+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate 2025-08-13T19:53:40.064974488+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50702: remote error: tls: bad certificate 
2025-08-13T19:53:40.079085701+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50712: remote error: tls: bad certificate 2025-08-13T19:53:40.096647742+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50728: remote error: tls: bad certificate 2025-08-13T19:53:40.113604686+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50732: remote error: tls: bad certificate 2025-08-13T19:53:40.131424605+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50740: remote error: tls: bad certificate 2025-08-13T19:53:40.147917386+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50744: remote error: tls: bad certificate 2025-08-13T19:53:40.167720461+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50758: remote error: tls: bad certificate 2025-08-13T19:53:40.181520045+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50762: remote error: tls: bad certificate 2025-08-13T19:53:40.195728551+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50770: remote error: tls: bad certificate 2025-08-13T19:53:40.212724847+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50784: remote error: tls: bad certificate 2025-08-13T19:53:40.232612634+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50800: remote error: tls: bad certificate 2025-08-13T19:53:40.256121036+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50806: remote error: tls: bad certificate 2025-08-13T19:53:40.280362708+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50822: remote error: tls: bad certificate 2025-08-13T19:53:40.298548077+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50836: remote error: tls: bad certificate 2025-08-13T19:53:40.317684724+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate 2025-08-13T19:53:40.334333439+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50850: remote error: tls: bad certificate 2025-08-13T19:53:40.348950486+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50866: remote error: tls: bad certificate 2025-08-13T19:53:40.370865132+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50876: remote error: tls: bad certificate 2025-08-13T19:53:40.386952351+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50890: remote error: tls: bad certificate 2025-08-13T19:53:40.404317317+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50896: remote error: tls: bad certificate 2025-08-13T19:53:40.682183542+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50906: remote error: tls: bad certificate 2025-08-13T19:53:40.830138346+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50912: remote error: tls: bad certificate 2025-08-13T19:53:40.852362851+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50916: remote error: tls: bad certificate 2025-08-13T19:53:40.867715729+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50924: remote error: tls: bad certificate 2025-08-13T19:53:40.893137955+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50936: remote error: tls: bad certificate 
2025-08-13T19:53:40.908983197+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50950: remote error: tls: bad certificate 2025-08-13T19:53:40.924633474+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate 2025-08-13T19:53:40.940548709+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50970: remote error: tls: bad certificate 2025-08-13T19:53:40.956331789+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50984: remote error: tls: bad certificate 2025-08-13T19:53:40.970675268+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:50996: remote error: tls: bad certificate 2025-08-13T19:53:40.984748710+00:00 stderr F 2025/08/13 19:53:40 http: TLS handshake error from 127.0.0.1:51012: remote error: tls: bad certificate 2025-08-13T19:53:41.000306144+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51018: remote error: tls: bad certificate 2025-08-13T19:53:41.016050553+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51024: remote error: tls: bad certificate 2025-08-13T19:53:41.029655492+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51036: remote error: tls: bad certificate 2025-08-13T19:53:41.046054150+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51042: remote error: tls: bad certificate 2025-08-13T19:53:41.063399065+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51050: remote error: tls: bad certificate 2025-08-13T19:53:41.077921450+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51052: remote error: tls: bad certificate 2025-08-13T19:53:41.095677637+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51062: remote error: tls: bad certificate 2025-08-13T19:53:41.110105979+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51068: remote error: tls: bad certificate 2025-08-13T19:53:41.125640423+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51082: remote error: tls: bad certificate 2025-08-13T19:53:41.145631493+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51096: remote error: tls: bad certificate 2025-08-13T19:53:41.164570324+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51098: remote error: tls: bad certificate 2025-08-13T19:53:41.184034170+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51104: remote error: tls: bad certificate 2025-08-13T19:53:41.203053723+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51118: remote error: tls: bad certificate 2025-08-13T19:53:41.232431612+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51126: remote error: tls: bad certificate 2025-08-13T19:53:41.251340712+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51132: remote error: tls: bad certificate 2025-08-13T19:53:41.267706039+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51148: remote error: tls: bad certificate 2025-08-13T19:53:41.287701610+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51152: remote error: tls: bad certificate 2025-08-13T19:53:41.305005814+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51162: remote error: tls: bad certificate 
2025-08-13T19:53:41.324337866+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51170: remote error: tls: bad certificate 2025-08-13T19:53:41.340895179+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51172: remote error: tls: bad certificate 2025-08-13T19:53:41.358911143+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51186: remote error: tls: bad certificate 2025-08-13T19:53:41.378574595+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51202: remote error: tls: bad certificate 2025-08-13T19:53:41.394209601+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51212: remote error: tls: bad certificate 2025-08-13T19:53:41.415108168+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51216: remote error: tls: bad certificate 2025-08-13T19:53:41.438500306+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51226: remote error: tls: bad certificate 2025-08-13T19:53:41.455054438+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51228: remote error: tls: bad certificate 2025-08-13T19:53:41.473008991+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51232: remote error: tls: bad certificate 2025-08-13T19:53:41.488947096+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51236: remote error: tls: bad certificate 2025-08-13T19:53:41.505689324+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51248: remote error: tls: bad certificate 2025-08-13T19:53:41.542899427+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51260: remote error: tls: bad certificate 2025-08-13T19:53:41.580967164+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51276: remote error: tls: bad certificate 2025-08-13T19:53:41.621004757+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51290: remote error: tls: bad certificate 2025-08-13T19:53:41.659405283+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51294: remote error: tls: bad certificate 2025-08-13T19:53:41.701855455+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51302: remote error: tls: bad certificate 2025-08-13T19:53:41.750594647+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51310: remote error: tls: bad certificate 2025-08-13T19:53:41.780458050+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51318: remote error: tls: bad certificate 2025-08-13T19:53:41.819269118+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51330: remote error: tls: bad certificate 2025-08-13T19:53:41.859475406+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51336: remote error: tls: bad certificate 2025-08-13T19:53:41.904050659+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51350: remote error: tls: bad certificate 2025-08-13T19:53:41.941043335+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51358: remote error: tls: bad certificate 2025-08-13T19:53:41.980527062+00:00 stderr F 2025/08/13 19:53:41 http: TLS handshake error from 127.0.0.1:51370: remote error: tls: bad certificate 2025-08-13T19:53:42.021474882+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51384: remote error: tls: bad certificate 
2025-08-13T19:53:42.061280108+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51388: remote error: tls: bad certificate 2025-08-13T19:53:42.103641688+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51390: remote error: tls: bad certificate 2025-08-13T19:53:42.138520574+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51394: remote error: tls: bad certificate 2025-08-13T19:53:42.179604217+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51410: remote error: tls: bad certificate 2025-08-13T19:53:42.223700516+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51426: remote error: tls: bad certificate 2025-08-13T19:53:42.266453537+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51440: remote error: tls: bad certificate 2025-08-13T19:53:42.413671650+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51450: remote error: tls: bad certificate 2025-08-13T19:53:42.439923510+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51454: remote error: tls: bad certificate 2025-08-13T19:53:42.482879177+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51464: remote error: tls: bad certificate 2025-08-13T19:53:42.510876456+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51466: remote error: tls: bad certificate 2025-08-13T19:53:42.546915215+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51474: remote error: tls: bad certificate 2025-08-13T19:53:42.564044934+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51482: remote error: tls: bad certificate 2025-08-13T19:53:42.579032892+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51486: remote error: tls: bad certificate 2025-08-13T19:53:42.597240472+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51492: remote error: tls: bad certificate 2025-08-13T19:53:45.226381044+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51494: remote error: tls: bad certificate 2025-08-13T19:53:45.241320290+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51500: remote error: tls: bad certificate 2025-08-13T19:53:45.263977617+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51506: remote error: tls: bad certificate 2025-08-13T19:53:45.279577513+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51522: remote error: tls: bad certificate 2025-08-13T19:53:45.299569743+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51524: remote error: tls: bad certificate 2025-08-13T19:53:45.315553690+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51536: remote error: tls: bad certificate 2025-08-13T19:53:45.341116639+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51546: remote error: tls: bad certificate 2025-08-13T19:53:45.361326947+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51562: remote error: tls: bad certificate 2025-08-13T19:53:45.386511596+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51574: remote error: tls: bad certificate 2025-08-13T19:53:45.405372184+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51590: remote error: tls: bad certificate 
2025-08-13T19:53:45.426649522+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51602: remote error: tls: bad certificate 2025-08-13T19:53:45.442340130+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51604: remote error: tls: bad certificate 2025-08-13T19:53:45.460097047+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51612: remote error: tls: bad certificate 2025-08-13T19:53:45.479731118+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51624: remote error: tls: bad certificate 2025-08-13T19:53:45.533474902+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51640: remote error: tls: bad certificate 2025-08-13T19:53:45.551358853+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51654: remote error: tls: bad certificate 2025-08-13T19:53:45.565606420+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51670: remote error: tls: bad certificate 2025-08-13T19:53:45.580892986+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51682: remote error: tls: bad certificate 2025-08-13T19:53:45.598843629+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51684: remote error: tls: bad certificate 2025-08-13T19:53:45.614548227+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51700: remote error: tls: bad certificate 2025-08-13T19:53:45.631554293+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51712: remote error: tls: bad certificate 2025-08-13T19:53:45.646499990+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51724: remote error: tls: bad certificate 2025-08-13T19:53:45.664373710+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51740: remote error: tls: bad certificate 2025-08-13T19:53:45.679107241+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51748: remote error: tls: bad certificate 2025-08-13T19:53:45.701559222+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51764: remote error: tls: bad certificate 2025-08-13T19:53:45.720105771+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51772: remote error: tls: bad certificate 2025-08-13T19:53:45.737989852+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:53:45.756665025+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51796: remote error: tls: bad certificate 2025-08-13T19:53:45.775487953+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51806: remote error: tls: bad certificate 2025-08-13T19:53:45.795995138+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51808: remote error: tls: bad certificate 2025-08-13T19:53:45.815296229+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51816: remote error: tls: bad certificate 2025-08-13T19:53:45.837343949+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51818: remote error: tls: bad certificate 2025-08-13T19:53:45.854453197+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51822: remote error: tls: bad certificate 2025-08-13T19:53:45.872035029+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51824: remote error: tls: bad certificate 
2025-08-13T19:53:45.889322273+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 2025-08-13T19:53:45.910040195+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51828: remote error: tls: bad certificate 2025-08-13T19:53:45.935540523+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51842: remote error: tls: bad certificate 2025-08-13T19:53:45.954573896+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51856: remote error: tls: bad certificate 2025-08-13T19:53:45.972642602+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51858: remote error: tls: bad certificate 2025-08-13T19:53:45.992005025+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51860: remote error: tls: bad certificate 2025-08-13T19:53:46.008844656+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51868: remote error: tls: bad certificate 2025-08-13T19:53:46.024552044+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51870: remote error: tls: bad certificate 2025-08-13T19:53:46.040129379+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51882: remote error: tls: bad certificate 2025-08-13T19:53:46.054839769+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51894: remote error: tls: bad certificate 2025-08-13T19:53:46.069429396+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51898: remote error: tls: bad certificate 2025-08-13T19:53:46.084429934+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51914: remote error: tls: bad certificate 2025-08-13T19:53:46.106535985+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51928: remote error: tls: bad certificate 2025-08-13T19:53:46.134769421+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51930: remote error: tls: bad certificate 2025-08-13T19:53:46.159466717+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51934: remote error: tls: bad certificate 2025-08-13T19:53:46.178661315+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51948: remote error: tls: bad certificate 2025-08-13T19:53:46.195898767+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51956: remote error: tls: bad certificate 2025-08-13T19:53:46.219998745+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51958: remote error: tls: bad certificate 2025-08-13T19:53:46.241092777+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51972: remote error: tls: bad certificate 2025-08-13T19:53:46.260916013+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51976: remote error: tls: bad certificate 2025-08-13T19:53:46.278870666+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51980: remote error: tls: bad certificate 2025-08-13T19:53:46.299576877+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51992: remote error: tls: bad certificate 2025-08-13T19:53:46.316179501+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51998: remote error: tls: bad certificate 2025-08-13T19:53:46.333503016+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52006: remote error: tls: bad certificate 
2025-08-13T19:53:46.349693378+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52022: remote error: tls: bad certificate 2025-08-13T19:53:46.368871606+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52034: remote error: tls: bad certificate 2025-08-13T19:53:46.389220157+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52038: remote error: tls: bad certificate 2025-08-13T19:53:46.409970849+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52044: remote error: tls: bad certificate 2025-08-13T19:53:46.429712483+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52056: remote error: tls: bad certificate 2025-08-13T19:53:46.448750617+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52058: remote error: tls: bad certificate 2025-08-13T19:53:46.480049740+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52068: remote error: tls: bad certificate 2025-08-13T19:53:46.501206315+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52076: remote error: tls: bad certificate 2025-08-13T19:53:46.522698028+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52088: remote error: tls: bad certificate 2025-08-13T19:53:49.617164196+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44584: remote error: tls: bad certificate 2025-08-13T19:53:49.636610121+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44600: remote error: tls: bad certificate 2025-08-13T19:53:49.659212496+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44614: remote error: tls: bad certificate 2025-08-13T19:53:49.678895508+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate 2025-08-13T19:53:49.696625414+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44630: remote error: tls: bad certificate 2025-08-13T19:53:52.227095057+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44646: remote error: tls: bad certificate 2025-08-13T19:53:52.268926331+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44660: remote error: tls: bad certificate 2025-08-13T19:53:52.294504561+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44676: remote error: tls: bad certificate 2025-08-13T19:53:52.314733569+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44684: remote error: tls: bad certificate 2025-08-13T19:53:52.333155155+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44686: remote error: tls: bad certificate 2025-08-13T19:53:52.353277979+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44702: remote error: tls: bad certificate 2025-08-13T19:53:52.380062434+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44710: remote error: tls: bad certificate 2025-08-13T19:53:52.414549829+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44718: remote error: tls: bad certificate 2025-08-13T19:53:52.440060697+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44720: remote error: tls: bad certificate 2025-08-13T19:53:52.457727922+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate 
2025-08-13T19:53:52.472683809+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44734: remote error: tls: bad certificate 2025-08-13T19:53:52.496383826+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44750: remote error: tls: bad certificate 2025-08-13T19:53:52.517896320+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44758: remote error: tls: bad certificate 2025-08-13T19:53:52.541147764+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44768: remote error: tls: bad certificate 2025-08-13T19:53:52.557736047+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44778: remote error: tls: bad certificate 2025-08-13T19:53:52.574990300+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44792: remote error: tls: bad certificate 2025-08-13T19:53:52.603636518+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44806: remote error: tls: bad certificate 2025-08-13T19:53:52.619749288+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44812: remote error: tls: bad certificate 2025-08-13T19:53:52.635091136+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate 2025-08-13T19:53:52.660011058+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44830: remote error: tls: bad certificate 2025-08-13T19:53:52.680188254+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44836: remote error: tls: bad certificate 2025-08-13T19:53:52.696991764+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44838: remote error: tls: bad certificate 2025-08-13T19:53:52.711068896+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44854: remote error: tls: bad certificate 2025-08-13T19:53:52.729148802+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44866: remote error: tls: bad certificate 2025-08-13T19:53:52.745949452+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44868: remote error: tls: bad certificate 2025-08-13T19:53:52.762474943+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44872: remote error: tls: bad certificate 2025-08-13T19:53:52.778204163+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44888: remote error: tls: bad certificate 2025-08-13T19:53:52.796036292+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44898: remote error: tls: bad certificate 2025-08-13T19:53:52.818504463+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44902: remote error: tls: bad certificate 2025-08-13T19:53:52.837562267+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44916: remote error: tls: bad certificate 2025-08-13T19:53:52.856105647+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44930: remote error: tls: bad certificate 2025-08-13T19:53:52.875528422+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44940: remote error: tls: bad certificate 2025-08-13T19:53:52.894423241+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44942: remote error: tls: bad certificate 2025-08-13T19:53:52.916570783+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44950: remote error: tls: bad certificate 
2025-08-13T19:53:52.936099211+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44956: remote error: tls: bad certificate 2025-08-13T19:53:52.955720791+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44972: remote error: tls: bad certificate 2025-08-13T19:53:52.972533801+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44984: remote error: tls: bad certificate 2025-08-13T19:53:52.988563959+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44994: remote error: tls: bad certificate 2025-08-13T19:53:53.005485722+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45008: remote error: tls: bad certificate 2025-08-13T19:53:53.022024175+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45012: remote error: tls: bad certificate 2025-08-13T19:53:53.038862085+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45016: remote error: tls: bad certificate 2025-08-13T19:53:53.056149359+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45032: remote error: tls: bad certificate 2025-08-13T19:53:53.076240013+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45038: remote error: tls: bad certificate 2025-08-13T19:53:53.093720342+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45054: remote error: tls: bad certificate 2025-08-13T19:53:53.110699807+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45056: remote error: tls: bad certificate 2025-08-13T19:53:53.129115762+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45068: remote error: tls: bad certificate 2025-08-13T19:53:53.145554232+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45076: remote error: tls: bad certificate 2025-08-13T19:53:53.161638391+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45088: remote error: tls: bad certificate 2025-08-13T19:53:53.178665857+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45098: remote error: tls: bad certificate 2025-08-13T19:53:53.194976583+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45102: remote error: tls: bad certificate 2025-08-13T19:53:53.217332771+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45116: remote error: tls: bad certificate 2025-08-13T19:53:53.232177375+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45126: remote error: tls: bad certificate 2025-08-13T19:53:53.247967496+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45140: remote error: tls: bad certificate 2025-08-13T19:53:53.273479805+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45152: remote error: tls: bad certificate 2025-08-13T19:53:53.291633863+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45166: remote error: tls: bad certificate 2025-08-13T19:53:53.305135728+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45178: remote error: tls: bad certificate 2025-08-13T19:53:53.322149134+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45192: remote error: tls: bad certificate 2025-08-13T19:53:53.338193382+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45194: remote error: tls: bad certificate 
2025-08-13T19:53:53.357901645+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45202: remote error: tls: bad certificate 2025-08-13T19:53:53.380898872+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45214: remote error: tls: bad certificate 2025-08-13T19:53:53.399923025+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45228: remote error: tls: bad certificate 2025-08-13T19:53:53.418956578+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45240: remote error: tls: bad certificate 2025-08-13T19:53:53.438110095+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45248: remote error: tls: bad certificate 2025-08-13T19:53:53.456393227+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45252: remote error: tls: bad certificate 2025-08-13T19:53:53.473543627+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45254: remote error: tls: bad certificate 2025-08-13T19:53:53.493036634+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45270: remote error: tls: bad certificate 2025-08-13T19:53:53.512355815+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45276: remote error: tls: bad certificate 2025-08-13T19:53:55.236033843+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45290: remote error: tls: bad certificate 2025-08-13T19:53:55.252968866+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45298: remote error: tls: bad certificate 2025-08-13T19:53:55.270724493+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45310: remote error: tls: bad certificate 2025-08-13T19:53:55.287538742+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45326: remote error: tls: bad certificate 2025-08-13T19:53:55.302703325+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45338: remote error: tls: bad certificate 2025-08-13T19:53:55.321409719+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45352: remote error: tls: bad certificate 2025-08-13T19:53:55.338625911+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45356: remote error: tls: bad certificate 2025-08-13T19:53:55.358347094+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45358: remote error: tls: bad certificate 2025-08-13T19:53:55.373379503+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45370: remote error: tls: bad certificate 2025-08-13T19:53:55.390440810+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45382: remote error: tls: bad certificate 2025-08-13T19:53:55.436477805+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45392: remote error: tls: bad certificate 2025-08-13T19:53:55.465944856+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45396: remote error: tls: bad certificate 2025-08-13T19:53:55.491653850+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45400: remote error: tls: bad certificate 2025-08-13T19:53:55.511147616+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45408: remote error: tls: bad certificate 2025-08-13T19:53:55.526959548+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45410: remote error: tls: bad certificate 
2025-08-13T19:53:55.544041555+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45416: remote error: tls: bad certificate 2025-08-13T19:53:55.560329050+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45430: remote error: tls: bad certificate 2025-08-13T19:53:55.575998688+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45434: remote error: tls: bad certificate 2025-08-13T19:53:55.590063159+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45442: remote error: tls: bad certificate 2025-08-13T19:53:55.602979578+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45444: remote error: tls: bad certificate 2025-08-13T19:53:55.618206203+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45446: remote error: tls: bad certificate 2025-08-13T19:53:55.632362677+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45458: remote error: tls: bad certificate 2025-08-13T19:53:55.652016208+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45462: remote error: tls: bad certificate 2025-08-13T19:53:55.670677731+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45476: remote error: tls: bad certificate 2025-08-13T19:53:55.692228507+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45478: remote error: tls: bad certificate 2025-08-13T19:53:55.707612656+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45492: remote error: tls: bad certificate 2025-08-13T19:53:55.725362833+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45500: remote error: tls: bad certificate 2025-08-13T19:53:55.743701086+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45506: remote error: tls: bad certificate 2025-08-13T19:53:55.759030924+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45520: remote error: tls: bad certificate 2025-08-13T19:53:55.776724589+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45528: remote error: tls: bad certificate 2025-08-13T19:53:55.795650500+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45534: remote error: tls: bad certificate 2025-08-13T19:53:55.815980670+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45546: remote error: tls: bad certificate 2025-08-13T19:53:55.834473528+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45556: remote error: tls: bad certificate 2025-08-13T19:53:55.849135396+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45564: remote error: tls: bad certificate 2025-08-13T19:53:55.864593148+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45580: remote error: tls: bad certificate 2025-08-13T19:53:55.890759775+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45590: remote error: tls: bad certificate 2025-08-13T19:53:55.909515351+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45596: remote error: tls: bad certificate 2025-08-13T19:53:55.926053343+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45608: remote error: tls: bad certificate 2025-08-13T19:53:55.941849684+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45612: remote error: tls: bad certificate 
2025-08-13T19:53:55.961326830+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45616: remote error: tls: bad certificate 2025-08-13T19:53:55.979036006+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45620: remote error: tls: bad certificate 2025-08-13T19:53:55.995602028+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45632: remote error: tls: bad certificate 2025-08-13T19:53:56.012747358+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45646: remote error: tls: bad certificate 2025-08-13T19:53:56.030061782+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45648: remote error: tls: bad certificate 2025-08-13T19:53:56.046427580+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45656: remote error: tls: bad certificate 2025-08-13T19:53:56.069072727+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45672: remote error: tls: bad certificate 2025-08-13T19:53:56.085597829+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45678: remote error: tls: bad certificate 2025-08-13T19:53:56.101531444+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45690: remote error: tls: bad certificate 2025-08-13T19:53:56.118885179+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45700: remote error: tls: bad certificate 2025-08-13T19:53:56.133745703+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45702: remote error: tls: bad certificate 2025-08-13T19:53:56.158297445+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45712: remote error: tls: bad certificate 2025-08-13T19:53:56.179437368+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45724: remote error: tls: bad certificate 2025-08-13T19:53:56.194996012+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45726: remote error: tls: bad certificate 2025-08-13T19:53:56.215381685+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45740: remote error: tls: bad certificate 2025-08-13T19:53:56.232753340+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45744: remote error: tls: bad certificate 2025-08-13T19:53:56.247559913+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45754: remote error: tls: bad certificate 2025-08-13T19:53:56.261352467+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45770: remote error: tls: bad certificate 2025-08-13T19:53:56.280958337+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45784: remote error: tls: bad certificate 2025-08-13T19:53:56.298734144+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45786: remote error: tls: bad certificate 2025-08-13T19:53:56.314708520+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45802: remote error: tls: bad certificate 2025-08-13T19:53:56.330703157+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45804: remote error: tls: bad certificate 2025-08-13T19:53:56.347122716+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45814: remote error: tls: bad certificate 2025-08-13T19:53:56.372862041+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45828: remote error: tls: bad certificate 
2025-08-13T19:53:56.393062768+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45838: remote error: tls: bad certificate 2025-08-13T19:53:56.414088778+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45848: remote error: tls: bad certificate 2025-08-13T19:53:56.431538687+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45864: remote error: tls: bad certificate 2025-08-13T19:53:56.448332786+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45872: remote error: tls: bad certificate 2025-08-13T19:53:59.994288411+00:00 stderr F 2025/08/13 19:53:59 http: TLS handshake error from 127.0.0.1:55278: remote error: tls: bad certificate 2025-08-13T19:54:00.014664192+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55280: remote error: tls: bad certificate 2025-08-13T19:54:00.034988413+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55296: remote error: tls: bad certificate 2025-08-13T19:54:00.055701424+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55310: remote error: tls: bad certificate 2025-08-13T19:54:00.075603002+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55314: remote error: tls: bad certificate 2025-08-13T19:54:03.801344573+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55322: remote error: tls: bad certificate 2025-08-13T19:54:03.819261265+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55328: remote error: tls: bad certificate 2025-08-13T19:54:03.840228804+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:54:03.858451464+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55348: remote error: tls: bad certificate 2025-08-13T19:54:03.877581670+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55360: remote error: tls: bad certificate 2025-08-13T19:54:03.896873111+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:54:03.915740430+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55372: remote error: tls: bad certificate 2025-08-13T19:54:03.934873826+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55376: remote error: tls: bad certificate 2025-08-13T19:54:03.951560583+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55378: remote error: tls: bad certificate 2025-08-13T19:54:03.967153948+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:54:03.986569662+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55396: remote error: tls: bad certificate 2025-08-13T19:54:04.011360750+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55404: remote error: tls: bad certificate 2025-08-13T19:54:04.028307304+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55410: remote error: tls: bad certificate 2025-08-13T19:54:04.045060242+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55422: remote error: tls: bad certificate 2025-08-13T19:54:04.061246445+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 
2025-08-13T19:54:04.076940903+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55440: remote error: tls: bad certificate 2025-08-13T19:54:04.089880132+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55446: remote error: tls: bad certificate 2025-08-13T19:54:04.107173836+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55448: remote error: tls: bad certificate 2025-08-13T19:54:04.121222957+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55460: remote error: tls: bad certificate 2025-08-13T19:54:04.139381656+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55464: remote error: tls: bad certificate 2025-08-13T19:54:04.156322149+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55478: remote error: tls: bad certificate 2025-08-13T19:54:04.175953050+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:54:04.193653455+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55492: remote error: tls: bad certificate 2025-08-13T19:54:04.216049095+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55500: remote error: tls: bad certificate 2025-08-13T19:54:04.232299699+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55510: remote error: tls: bad certificate 2025-08-13T19:54:04.249470209+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55514: remote error: tls: bad certificate 2025-08-13T19:54:04.266014841+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55516: remote error: tls: bad certificate 2025-08-13T19:54:04.283309595+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55522: remote error: tls: bad certificate 2025-08-13T19:54:04.298301583+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:54:04.314131095+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:54:04.332145940+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55548: remote error: tls: bad certificate 2025-08-13T19:54:04.349726382+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55556: remote error: tls: bad certificate 2025-08-13T19:54:04.366151321+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:54:04.384687490+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55574: remote error: tls: bad certificate 2025-08-13T19:54:04.399412240+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55584: remote error: tls: bad certificate 2025-08-13T19:54:04.422123929+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate 2025-08-13T19:54:04.440418811+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55602: remote error: tls: bad certificate 2025-08-13T19:54:04.458900569+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55610: remote error: tls: bad certificate 2025-08-13T19:54:04.473187077+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55624: remote error: tls: bad certificate 
2025-08-13T19:54:04.486955510+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55626: remote error: tls: bad certificate 2025-08-13T19:54:04.508549017+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:54:04.529548556+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate 2025-08-13T19:54:04.546664235+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55642: remote error: tls: bad certificate 2025-08-13T19:54:04.562135227+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55654: remote error: tls: bad certificate 2025-08-13T19:54:04.582879889+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55666: remote error: tls: bad certificate 2025-08-13T19:54:04.609154709+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:54:04.628435650+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:54:04.645911439+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:54:04.669073030+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55700: remote error: tls: bad certificate 2025-08-13T19:54:04.687314331+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55704: remote error: tls: bad certificate 2025-08-13T19:54:04.701906648+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate 2025-08-13T19:54:04.724113212+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55732: remote error: tls: bad certificate 2025-08-13T19:54:04.744842514+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55736: remote error: tls: bad certificate 2025-08-13T19:54:04.761768077+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55748: remote error: tls: bad certificate 2025-08-13T19:54:04.791839526+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55758: remote error: tls: bad certificate 2025-08-13T19:54:04.809122739+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55766: remote error: tls: bad certificate 2025-08-13T19:54:04.824963751+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55774: remote error: tls: bad certificate 2025-08-13T19:54:04.842395249+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:54:04.862607836+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55792: remote error: tls: bad certificate 2025-08-13T19:54:04.883321408+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55800: remote error: tls: bad certificate 2025-08-13T19:54:04.910074012+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55810: remote error: tls: bad certificate 2025-08-13T19:54:04.934903461+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55812: remote error: tls: bad certificate 2025-08-13T19:54:04.956612600+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55828: remote error: tls: bad certificate 
2025-08-13T19:54:04.984290721+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55840: remote error: tls: bad certificate 2025-08-13T19:54:05.004661692+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55852: remote error: tls: bad certificate 2025-08-13T19:54:05.029091190+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55854: remote error: tls: bad certificate 2025-08-13T19:54:05.068919787+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55866: remote error: tls: bad certificate 2025-08-13T19:54:05.231187881+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55868: remote error: tls: bad certificate 2025-08-13T19:54:05.253435246+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:54:05.283089523+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55874: remote error: tls: bad certificate 2025-08-13T19:54:05.297448933+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55884: remote error: tls: bad certificate 2025-08-13T19:54:05.313188692+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55894: remote error: tls: bad certificate 2025-08-13T19:54:05.339418061+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55898: remote error: tls: bad certificate 2025-08-13T19:54:05.360628287+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55910: remote error: tls: bad certificate 2025-08-13T19:54:05.377985413+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55918: remote error: tls: bad certificate 2025-08-13T19:54:05.395087051+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55928: remote error: tls: bad certificate 2025-08-13T19:54:05.411553941+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55944: remote error: tls: bad certificate 2025-08-13T19:54:05.429766171+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55958: remote error: tls: bad certificate 2025-08-13T19:54:05.449028661+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55966: remote error: tls: bad certificate 2025-08-13T19:54:05.467946011+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55980: remote error: tls: bad certificate 2025-08-13T19:54:05.485629346+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55994: remote error: tls: bad certificate 2025-08-13T19:54:05.501872070+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56010: remote error: tls: bad certificate 2025-08-13T19:54:05.516638442+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56014: remote error: tls: bad certificate 2025-08-13T19:54:05.531454335+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56016: remote error: tls: bad certificate 2025-08-13T19:54:05.545691651+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56028: remote error: tls: bad certificate 2025-08-13T19:54:05.562223723+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56042: remote error: tls: bad certificate 2025-08-13T19:54:05.578884149+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56058: remote error: tls: bad certificate 
2025-08-13T19:54:05.590260494+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56066: remote error: tls: bad certificate 2025-08-13T19:54:05.605460888+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56080: remote error: tls: bad certificate 2025-08-13T19:54:05.620248910+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56086: remote error: tls: bad certificate 2025-08-13T19:54:05.638069889+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56102: remote error: tls: bad certificate 2025-08-13T19:54:05.653660004+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56106: remote error: tls: bad certificate 2025-08-13T19:54:05.673205262+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56116: remote error: tls: bad certificate 2025-08-13T19:54:05.726669629+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56122: remote error: tls: bad certificate 2025-08-13T19:54:05.768411221+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56124: remote error: tls: bad certificate 2025-08-13T19:54:05.774735111+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56126: remote error: tls: bad certificate 2025-08-13T19:54:05.795135264+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:54:05.812221972+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56134: remote error: tls: bad certificate 2025-08-13T19:54:05.830135333+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56142: remote error: tls: bad certificate 2025-08-13T19:54:05.845655206+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56158: remote error: tls: bad certificate 2025-08-13T19:54:05.861531970+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56172: remote error: tls: bad certificate 2025-08-13T19:54:05.877125745+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56184: remote error: tls: bad certificate 2025-08-13T19:54:05.910217470+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56194: remote error: tls: bad certificate 2025-08-13T19:54:05.948091351+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56208: remote error: tls: bad certificate 2025-08-13T19:54:05.988656779+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56210: remote error: tls: bad certificate 2025-08-13T19:54:06.029546066+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56214: remote error: tls: bad certificate 2025-08-13T19:54:06.076607450+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56226: remote error: tls: bad certificate 2025-08-13T19:54:06.107183223+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56238: remote error: tls: bad certificate 2025-08-13T19:54:06.146499386+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56252: remote error: tls: bad certificate 2025-08-13T19:54:06.194484836+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56266: remote error: tls: bad certificate 2025-08-13T19:54:06.228528168+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56282: remote error: tls: bad certificate 
2025-08-13T19:54:06.271680090+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56292: remote error: tls: bad certificate 2025-08-13T19:54:06.309391997+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56298: remote error: tls: bad certificate 2025-08-13T19:54:06.350234173+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56310: remote error: tls: bad certificate 2025-08-13T19:54:06.386162899+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56314: remote error: tls: bad certificate 2025-08-13T19:54:06.428316272+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56326: remote error: tls: bad certificate 2025-08-13T19:54:06.476694274+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56334: remote error: tls: bad certificate 2025-08-13T19:54:06.509834350+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56346: remote error: tls: bad certificate 2025-08-13T19:54:06.553684632+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56358: remote error: tls: bad certificate 2025-08-13T19:54:06.591259765+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56374: remote error: tls: bad certificate 2025-08-13T19:54:06.630498745+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56380: remote error: tls: bad certificate 2025-08-13T19:54:06.666598066+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56388: remote error: tls: bad certificate 2025-08-13T19:54:06.705427025+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 2025-08-13T19:54:06.748031691+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56406: remote error: tls: bad certificate 2025-08-13T19:54:06.789061583+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56416: remote error: tls: bad certificate 2025-08-13T19:54:06.828297713+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56426: remote error: tls: bad certificate 2025-08-13T19:54:06.869535760+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56440: remote error: tls: bad certificate 2025-08-13T19:54:06.906655370+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56452: remote error: tls: bad certificate 2025-08-13T19:54:06.947239849+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:54:06.990194796+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56474: remote error: tls: bad certificate 2025-08-13T19:54:07.028611323+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56480: remote error: tls: bad certificate 2025-08-13T19:54:07.072629619+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56496: remote error: tls: bad certificate 2025-08-13T19:54:07.110073149+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56498: remote error: tls: bad certificate 2025-08-13T19:54:07.147519778+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56504: remote error: tls: bad certificate 2025-08-13T19:54:10.324143599+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53800: remote error: tls: bad certificate 
2025-08-13T19:54:10.351052607+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53810: remote error: tls: bad certificate 2025-08-13T19:54:10.379028716+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53826: remote error: tls: bad certificate 2025-08-13T19:54:10.411750200+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53836: remote error: tls: bad certificate 2025-08-13T19:54:10.438112913+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53842: remote error: tls: bad certificate 2025-08-13T19:54:10.843878839+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53844: remote error: tls: bad certificate 2025-08-13T19:54:10.870279393+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53860: remote error: tls: bad certificate 2025-08-13T19:54:10.892950700+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53868: remote error: tls: bad certificate 2025-08-13T19:54:10.913163997+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53882: remote error: tls: bad certificate 2025-08-13T19:54:10.931237763+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53892: remote error: tls: bad certificate 2025-08-13T19:54:10.955366742+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53900: remote error: tls: bad certificate 2025-08-13T19:54:10.974190990+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53908: remote error: tls: bad certificate 2025-08-13T19:54:10.995896150+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53916: remote error: tls: bad certificate 2025-08-13T19:54:11.025141415+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53924: remote error: tls: bad certificate 2025-08-13T19:54:11.046967078+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53926: remote error: tls: bad certificate 2025-08-13T19:54:11.066142645+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53942: remote error: tls: bad certificate 2025-08-13T19:54:11.084764397+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53958: remote error: tls: bad certificate 2025-08-13T19:54:11.106043095+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53972: remote error: tls: bad certificate 2025-08-13T19:54:11.132503240+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53982: remote error: tls: bad certificate 2025-08-13T19:54:11.154251721+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53986: remote error: tls: bad certificate 2025-08-13T19:54:11.173139731+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53994: remote error: tls: bad certificate 2025-08-13T19:54:11.194903502+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54004: remote error: tls: bad certificate 2025-08-13T19:54:11.220897794+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54008: remote error: tls: bad certificate 2025-08-13T19:54:11.239054393+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54022: remote error: tls: bad certificate 2025-08-13T19:54:11.257567391+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54032: remote error: tls: bad certificate 
2025-08-13T19:54:11.273608709+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54042: remote error: tls: bad certificate 2025-08-13T19:54:11.289194604+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54048: remote error: tls: bad certificate 2025-08-13T19:54:11.305607913+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54064: remote error: tls: bad certificate 2025-08-13T19:54:11.321356573+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54068: remote error: tls: bad certificate 2025-08-13T19:54:11.347699525+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54076: remote error: tls: bad certificate 2025-08-13T19:54:11.366058569+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54086: remote error: tls: bad certificate 2025-08-13T19:54:11.385265668+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54094: remote error: tls: bad certificate 2025-08-13T19:54:11.408454650+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54106: remote error: tls: bad certificate 2025-08-13T19:54:11.428514993+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54110: remote error: tls: bad certificate 2025-08-13T19:54:11.449096430+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54114: remote error: tls: bad certificate 2025-08-13T19:54:11.464211522+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54130: remote error: tls: bad certificate 2025-08-13T19:54:11.481318800+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54134: remote error: tls: bad certificate 2025-08-13T19:54:11.499429527+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54148: remote error: tls: bad certificate 2025-08-13T19:54:11.516029891+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54164: remote error: tls: bad certificate 2025-08-13T19:54:11.534594972+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54176: remote error: tls: bad certificate 2025-08-13T19:54:11.549282941+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54190: remote error: tls: bad certificate 2025-08-13T19:54:11.567407278+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54194: remote error: tls: bad certificate 2025-08-13T19:54:11.587741259+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54208: remote error: tls: bad certificate 2025-08-13T19:54:11.606065312+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54222: remote error: tls: bad certificate 2025-08-13T19:54:11.622622815+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54236: remote error: tls: bad certificate 2025-08-13T19:54:11.639475956+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54240: remote error: tls: bad certificate 2025-08-13T19:54:11.653602620+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54250: remote error: tls: bad certificate 2025-08-13T19:54:11.669066781+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54256: remote error: tls: bad certificate 2025-08-13T19:54:11.688043553+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54268: remote error: tls: bad certificate 
2025-08-13T19:54:11.704404330+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54278: remote error: tls: bad certificate 2025-08-13T19:54:11.719577643+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54284: remote error: tls: bad certificate 2025-08-13T19:54:11.734508770+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54290: remote error: tls: bad certificate 2025-08-13T19:54:11.750367353+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54294: remote error: tls: bad certificate 2025-08-13T19:54:11.766342269+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54306: remote error: tls: bad certificate 2025-08-13T19:54:11.783522849+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54316: remote error: tls: bad certificate 2025-08-13T19:54:11.800200285+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54328: remote error: tls: bad certificate 2025-08-13T19:54:11.817072067+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54330: remote error: tls: bad certificate 2025-08-13T19:54:11.833085354+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54336: remote error: tls: bad certificate 2025-08-13T19:54:11.849252126+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54352: remote error: tls: bad certificate 2025-08-13T19:54:11.865088588+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54360: remote error: tls: bad certificate 2025-08-13T19:54:11.881589549+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54374: remote error: tls: bad certificate 2025-08-13T19:54:11.897978597+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54382: remote error: tls: bad certificate 2025-08-13T19:54:11.915336733+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54388: remote error: tls: bad certificate 2025-08-13T19:54:11.931051652+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54390: remote error: tls: bad certificate 2025-08-13T19:54:11.948364826+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54404: remote error: tls: bad certificate 2025-08-13T19:54:11.968452890+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54420: remote error: tls: bad certificate 2025-08-13T19:54:11.989436089+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54424: remote error: tls: bad certificate 2025-08-13T19:54:12.004551540+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54432: remote error: tls: bad certificate 2025-08-13T19:54:12.026195218+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54446: remote error: tls: bad certificate 2025-08-13T19:54:12.037979565+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54452: remote error: tls: bad certificate 2025-08-13T19:54:12.052939342+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54466: remote error: tls: bad certificate 2025-08-13T19:54:12.071769910+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54470: remote error: tls: bad certificate 2025-08-13T19:54:15.853328536+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54476: remote error: tls: bad certificate 
2025-08-13T19:54:15.966140586+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54490: remote error: tls: bad certificate 2025-08-13T19:54:15.987193217+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54496: remote error: tls: bad certificate 2025-08-13T19:54:16.003514784+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54500: remote error: tls: bad certificate 2025-08-13T19:54:16.022766333+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54510: remote error: tls: bad certificate 2025-08-13T19:54:16.038674267+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54524: remote error: tls: bad certificate 2025-08-13T19:54:16.054370536+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54540: remote error: tls: bad certificate 2025-08-13T19:54:16.143197972+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54552: remote error: tls: bad certificate 2025-08-13T19:54:16.158607582+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54566: remote error: tls: bad certificate 2025-08-13T19:54:16.175517075+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54574: remote error: tls: bad certificate 2025-08-13T19:54:16.191456580+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54580: remote error: tls: bad certificate 2025-08-13T19:54:16.205409408+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54592: remote error: tls: bad certificate 2025-08-13T19:54:16.223006201+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54602: remote error: tls: bad certificate 2025-08-13T19:54:16.239594454+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54608: remote error: tls: bad certificate 2025-08-13T19:54:16.258661369+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54622: remote error: tls: bad certificate 2025-08-13T19:54:16.276867389+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54628: remote error: tls: bad certificate 2025-08-13T19:54:16.294151662+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54644: remote error: tls: bad certificate 2025-08-13T19:54:16.312158906+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54654: remote error: tls: bad certificate 2025-08-13T19:54:16.329057329+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54670: remote error: tls: bad certificate 2025-08-13T19:54:16.347042602+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54682: remote error: tls: bad certificate 2025-08-13T19:54:16.367316011+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54686: remote error: tls: bad certificate 2025-08-13T19:54:16.386183990+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54688: remote error: tls: bad certificate 2025-08-13T19:54:16.402416114+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54702: remote error: tls: bad certificate 2025-08-13T19:54:16.417756592+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54716: remote error: tls: bad certificate 2025-08-13T19:54:16.433322186+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54724: remote error: tls: bad certificate 
2025-08-13T19:54:16.457360412+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54740: remote error: tls: bad certificate 2025-08-13T19:54:16.479533656+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54754: remote error: tls: bad certificate 2025-08-13T19:54:16.499071644+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54768: remote error: tls: bad certificate 2025-08-13T19:54:16.524058997+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54784: remote error: tls: bad certificate 2025-08-13T19:54:16.540699042+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54790: remote error: tls: bad certificate 2025-08-13T19:54:16.560202489+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54806: remote error: tls: bad certificate 2025-08-13T19:54:16.576866315+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54814: remote error: tls: bad certificate 2025-08-13T19:54:16.594332203+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54822: remote error: tls: bad certificate 2025-08-13T19:54:16.609170517+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54828: remote error: tls: bad certificate 2025-08-13T19:54:16.626135522+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54830: remote error: tls: bad certificate 2025-08-13T19:54:16.650720813+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54834: remote error: tls: bad certificate 2025-08-13T19:54:16.675662296+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54848: remote error: tls: bad certificate 2025-08-13T19:54:16.695085490+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54850: remote error: tls: bad certificate 2025-08-13T19:54:16.711583271+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54852: remote error: tls: bad certificate 2025-08-13T19:54:16.725719775+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54860: remote error: tls: bad certificate 2025-08-13T19:54:16.744172602+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54870: remote error: tls: bad certificate 2025-08-13T19:54:16.761632880+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54882: remote error: tls: bad certificate 2025-08-13T19:54:16.780635192+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54884: remote error: tls: bad certificate 2025-08-13T19:54:16.798684607+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54898: remote error: tls: bad certificate 2025-08-13T19:54:16.817738631+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54904: remote error: tls: bad certificate 2025-08-13T19:54:16.835916381+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54918: remote error: tls: bad certificate 2025-08-13T19:54:16.859974438+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54928: remote error: tls: bad certificate 2025-08-13T19:54:16.915907125+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54938: remote error: tls: bad certificate 2025-08-13T19:54:16.931963703+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54940: remote error: tls: bad certificate 
2025-08-13T19:54:16.952153640+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54954: remote error: tls: bad certificate 2025-08-13T19:54:16.969020321+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54962: remote error: tls: bad certificate 2025-08-13T19:54:17.086860076+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54970: remote error: tls: bad certificate 2025-08-13T19:54:17.104701715+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54978: remote error: tls: bad certificate 2025-08-13T19:54:17.123251545+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54980: remote error: tls: bad certificate 2025-08-13T19:54:17.143195575+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54986: remote error: tls: bad certificate 2025-08-13T19:54:17.161661582+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54998: remote error: tls: bad certificate 2025-08-13T19:54:17.179342307+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55010: remote error: tls: bad certificate 2025-08-13T19:54:17.199657357+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55018: remote error: tls: bad certificate 2025-08-13T19:54:17.218228467+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55030: remote error: tls: bad certificate 2025-08-13T19:54:17.237230160+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55044: remote error: tls: bad certificate 2025-08-13T19:54:17.251893088+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55046: remote error: tls: bad certificate 2025-08-13T19:54:17.270647944+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55062: remote error: tls: bad certificate 2025-08-13T19:54:17.287085233+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55070: remote error: tls: bad certificate 2025-08-13T19:54:17.308525895+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55072: remote error: tls: bad certificate 2025-08-13T19:54:17.324706747+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55088: remote error: tls: bad certificate 2025-08-13T19:54:17.341763604+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55102: remote error: tls: bad certificate 2025-08-13T19:54:17.360314454+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55116: remote error: tls: bad certificate 2025-08-13T19:54:20.721525557+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53216: remote error: tls: bad certificate 2025-08-13T19:54:20.746320355+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53224: remote error: tls: bad certificate 2025-08-13T19:54:20.771123713+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53234: remote error: tls: bad certificate 2025-08-13T19:54:20.796913209+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53236: remote error: tls: bad certificate 2025-08-13T19:54:20.821871722+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53250: remote error: tls: bad certificate 2025-08-13T19:54:22.817745111+00:00 stderr F 2025/08/13 19:54:22 http: TLS handshake error from 127.0.0.1:53254: remote error: tls: bad certificate 
2025-08-13T19:54:25.227300862+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53266: remote error: tls: bad certificate 2025-08-13T19:54:25.245855302+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53276: remote error: tls: bad certificate 2025-08-13T19:54:25.262362583+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53282: remote error: tls: bad certificate 2025-08-13T19:54:25.281349155+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53292: remote error: tls: bad certificate 2025-08-13T19:54:25.301179791+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53302: remote error: tls: bad certificate 2025-08-13T19:54:25.324967410+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53318: remote error: tls: bad certificate 2025-08-13T19:54:25.364442558+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53326: remote error: tls: bad certificate 2025-08-13T19:54:25.397158082+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53336: remote error: tls: bad certificate 2025-08-13T19:54:25.419758267+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53338: remote error: tls: bad certificate 2025-08-13T19:54:25.436679680+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53350: remote error: tls: bad certificate 2025-08-13T19:54:25.453617054+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53364: remote error: tls: bad certificate 2025-08-13T19:54:25.469684553+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53380: remote error: tls: bad certificate 2025-08-13T19:54:25.490530948+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53388: remote error: tls: bad certificate 2025-08-13T19:54:25.508708857+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53400: remote error: tls: bad certificate 2025-08-13T19:54:25.525215248+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53404: remote error: tls: bad certificate 2025-08-13T19:54:25.537947032+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53412: remote error: tls: bad certificate 2025-08-13T19:54:25.554302069+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53422: remote error: tls: bad certificate 2025-08-13T19:54:25.568329099+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53436: remote error: tls: bad certificate 2025-08-13T19:54:25.582991348+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53444: remote error: tls: bad certificate 2025-08-13T19:54:25.596693629+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53456: remote error: tls: bad certificate 2025-08-13T19:54:25.616365821+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53470: remote error: tls: bad certificate 2025-08-13T19:54:25.632041678+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53482: remote error: tls: bad certificate 2025-08-13T19:54:25.653220783+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53488: remote error: tls: bad certificate 2025-08-13T19:54:25.670881997+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53492: remote error: tls: bad certificate 
2025-08-13T19:54:25.688280574+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53508: remote error: tls: bad certificate 2025-08-13T19:54:25.706843854+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53520: remote error: tls: bad certificate 2025-08-13T19:54:25.722669636+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53536: remote error: tls: bad certificate 2025-08-13T19:54:25.740935798+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53550: remote error: tls: bad certificate 2025-08-13T19:54:25.757637885+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53556: remote error: tls: bad certificate 2025-08-13T19:54:25.773219450+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53568: remote error: tls: bad certificate 2025-08-13T19:54:25.788331711+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53584: remote error: tls: bad certificate 2025-08-13T19:54:25.810117233+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53600: remote error: tls: bad certificate 2025-08-13T19:54:25.827526600+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53604: remote error: tls: bad certificate 2025-08-13T19:54:25.843408304+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53616: remote error: tls: bad certificate 2025-08-13T19:54:25.858583567+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53622: remote error: tls: bad certificate 2025-08-13T19:54:25.874072539+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53624: remote error: tls: bad certificate 2025-08-13T19:54:25.890998383+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53640: remote error: tls: bad certificate 2025-08-13T19:54:25.905653711+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53646: remote error: tls: bad certificate 2025-08-13T19:54:25.926635510+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53648: remote error: tls: bad certificate 2025-08-13T19:54:25.942071211+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53656: remote error: tls: bad certificate 2025-08-13T19:54:25.957762569+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53666: remote error: tls: bad certificate 2025-08-13T19:54:25.975213977+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53668: remote error: tls: bad certificate 2025-08-13T19:54:25.992678176+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53678: remote error: tls: bad certificate 2025-08-13T19:54:26.010514455+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53692: remote error: tls: bad certificate 2025-08-13T19:54:26.025077611+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53706: remote error: tls: bad certificate 2025-08-13T19:54:26.039340648+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53708: remote error: tls: bad certificate 2025-08-13T19:54:26.066560865+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53724: remote error: tls: bad certificate 2025-08-13T19:54:26.089577813+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53728: remote error: tls: bad certificate 
2025-08-13T19:54:26.107120494+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53744: remote error: tls: bad certificate 2025-08-13T19:54:26.126047744+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53754: remote error: tls: bad certificate 2025-08-13T19:54:26.142558895+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53756: remote error: tls: bad certificate 2025-08-13T19:54:26.158527671+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53766: remote error: tls: bad certificate 2025-08-13T19:54:26.176040571+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53770: remote error: tls: bad certificate 2025-08-13T19:54:26.192314476+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53772: remote error: tls: bad certificate 2025-08-13T19:54:26.210127685+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53776: remote error: tls: bad certificate 2025-08-13T19:54:26.228091938+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53786: remote error: tls: bad certificate 2025-08-13T19:54:26.244063774+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53792: remote error: tls: bad certificate 2025-08-13T19:54:26.259651109+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53800: remote error: tls: bad certificate 2025-08-13T19:54:26.278102256+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53804: remote error: tls: bad certificate 2025-08-13T19:54:26.297222942+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53816: remote error: tls: bad certificate 2025-08-13T19:54:26.316242055+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53832: remote error: tls: bad certificate 2025-08-13T19:54:26.332867529+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53848: remote error: tls: bad certificate 2025-08-13T19:54:26.350388950+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53864: remote error: tls: bad certificate 2025-08-13T19:54:26.365764699+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53870: remote error: tls: bad certificate 2025-08-13T19:54:26.382657631+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53872: remote error: tls: bad certificate 2025-08-13T19:54:26.404027901+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53884: remote error: tls: bad certificate 2025-08-13T19:54:26.415088637+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53886: remote error: tls: bad certificate 2025-08-13T19:54:31.046433198+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41316: remote error: tls: bad certificate 2025-08-13T19:54:31.068376685+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41320: remote error: tls: bad certificate 2025-08-13T19:54:31.089971020+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41322: remote error: tls: bad certificate 2025-08-13T19:54:31.111292849+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41338: remote error: tls: bad certificate 2025-08-13T19:54:31.131290610+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41344: remote error: tls: bad certificate 
2025-08-13T19:54:35.227171052+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41346: remote error: tls: bad certificate 2025-08-13T19:54:35.252922997+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41360: remote error: tls: bad certificate 2025-08-13T19:54:35.276163581+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41362: remote error: tls: bad certificate 2025-08-13T19:54:35.304159070+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41376: remote error: tls: bad certificate 2025-08-13T19:54:35.340938550+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41378: remote error: tls: bad certificate 2025-08-13T19:54:35.369984870+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41384: remote error: tls: bad certificate 2025-08-13T19:54:35.397360532+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41386: remote error: tls: bad certificate 2025-08-13T19:54:35.413134552+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate 2025-08-13T19:54:35.427249325+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41410: remote error: tls: bad certificate 2025-08-13T19:54:35.443855749+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41412: remote error: tls: bad certificate 2025-08-13T19:54:35.459520286+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41420: remote error: tls: bad certificate 2025-08-13T19:54:35.477164140+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41422: remote error: tls: bad certificate 2025-08-13T19:54:35.496001008+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41436: remote error: tls: bad certificate 2025-08-13T19:54:35.513180579+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41450: remote error: tls: bad certificate 2025-08-13T19:54:35.530507003+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41462: remote error: tls: bad certificate 2025-08-13T19:54:35.545536492+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41466: remote error: tls: bad certificate 2025-08-13T19:54:35.563984219+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41470: remote error: tls: bad certificate 2025-08-13T19:54:35.584946118+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41482: remote error: tls: bad certificate 2025-08-13T19:54:35.602649913+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41490: remote error: tls: bad certificate 2025-08-13T19:54:35.616981962+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41500: remote error: tls: bad certificate 2025-08-13T19:54:35.632596948+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41508: remote error: tls: bad certificate 2025-08-13T19:54:35.649045698+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41518: remote error: tls: bad certificate 2025-08-13T19:54:35.666645051+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41520: remote error: tls: bad certificate 2025-08-13T19:54:35.684569622+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41524: remote error: tls: bad certificate 
2025-08-13T19:54:35.704045878+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41534: remote error: tls: bad certificate 2025-08-13T19:54:35.726147510+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41542: remote error: tls: bad certificate 2025-08-13T19:54:35.742761164+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41548: remote error: tls: bad certificate 2025-08-13T19:54:35.761563161+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41560: remote error: tls: bad certificate 2025-08-13T19:54:35.777918448+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41574: remote error: tls: bad certificate 2025-08-13T19:54:35.792654319+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41580: remote error: tls: bad certificate 2025-08-13T19:54:35.815639335+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41588: remote error: tls: bad certificate 2025-08-13T19:54:35.829742187+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41594: remote error: tls: bad certificate 2025-08-13T19:54:35.845423815+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41610: remote error: tls: bad certificate 2025-08-13T19:54:35.861880605+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41626: remote error: tls: bad certificate 2025-08-13T19:54:35.877927283+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41636: remote error: tls: bad certificate 2025-08-13T19:54:35.895002341+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41642: remote error: tls: bad certificate 2025-08-13T19:54:35.908226958+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41646: remote error: tls: bad certificate 2025-08-13T19:54:35.930885865+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41656: remote error: tls: bad certificate 2025-08-13T19:54:35.945323178+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41672: remote error: tls: bad certificate 2025-08-13T19:54:35.963176427+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41682: remote error: tls: bad certificate 2025-08-13T19:54:35.982345695+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41688: remote error: tls: bad certificate 2025-08-13T19:54:35.997435986+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41698: remote error: tls: bad certificate 2025-08-13T19:54:36.015206053+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41702: remote error: tls: bad certificate 2025-08-13T19:54:36.030573252+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41704: remote error: tls: bad certificate 2025-08-13T19:54:36.047594838+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41712: remote error: tls: bad certificate 2025-08-13T19:54:36.067366642+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41728: remote error: tls: bad certificate 2025-08-13T19:54:36.082614408+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41730: remote error: tls: bad certificate 2025-08-13T19:54:36.101377354+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41744: remote error: tls: bad certificate 
2025-08-13T19:54:36.117705550+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41746: remote error: tls: bad certificate 2025-08-13T19:54:36.133877822+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41752: remote error: tls: bad certificate 2025-08-13T19:54:36.148239472+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41756: remote error: tls: bad certificate 2025-08-13T19:54:36.164068494+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41768: remote error: tls: bad certificate 2025-08-13T19:54:36.181239164+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41778: remote error: tls: bad certificate 2025-08-13T19:54:36.205109676+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41784: remote error: tls: bad certificate 2025-08-13T19:54:36.225618141+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41786: remote error: tls: bad certificate 2025-08-13T19:54:36.244268714+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41796: remote error: tls: bad certificate 2025-08-13T19:54:36.259244651+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41806: remote error: tls: bad certificate 2025-08-13T19:54:36.276200615+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41812: remote error: tls: bad certificate 2025-08-13T19:54:36.301162898+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41824: remote error: tls: bad certificate 2025-08-13T19:54:36.325165024+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41838: remote error: tls: bad certificate 2025-08-13T19:54:36.349232481+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41852: remote error: tls: bad certificate 2025-08-13T19:54:36.367540803+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41868: remote error: tls: bad certificate 2025-08-13T19:54:36.391947470+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41880: remote error: tls: bad certificate 2025-08-13T19:54:36.427244898+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41884: remote error: tls: bad certificate 2025-08-13T19:54:36.451516341+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41900: remote error: tls: bad certificate 2025-08-13T19:54:36.468951009+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41904: remote error: tls: bad certificate 2025-08-13T19:54:36.491550874+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41912: remote error: tls: bad certificate 2025-08-13T19:54:41.519071725+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50242: remote error: tls: bad certificate 2025-08-13T19:54:41.538992693+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50258: remote error: tls: bad certificate 2025-08-13T19:54:41.563178683+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50266: remote error: tls: bad certificate 2025-08-13T19:54:41.626025906+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50276: remote error: tls: bad certificate 2025-08-13T19:54:41.651701259+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate 
2025-08-13T19:54:45.236328752+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50292: remote error: tls: bad certificate 2025-08-13T19:54:45.257041153+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50296: remote error: tls: bad certificate 2025-08-13T19:54:45.273909114+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50298: remote error: tls: bad certificate 2025-08-13T19:54:45.288248913+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50302: remote error: tls: bad certificate 2025-08-13T19:54:45.305045482+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50306: remote error: tls: bad certificate 2025-08-13T19:54:45.320968267+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50312: remote error: tls: bad certificate 2025-08-13T19:54:45.339462534+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50326: remote error: tls: bad certificate 2025-08-13T19:54:45.367084432+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate 2025-08-13T19:54:45.384769607+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate 2025-08-13T19:54:45.399928829+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50346: remote error: tls: bad certificate 2025-08-13T19:54:45.419268830+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50354: remote error: tls: bad certificate 2025-08-13T19:54:45.436260775+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50370: remote error: tls: bad certificate 2025-08-13T19:54:45.452531150+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate 2025-08-13T19:54:45.472156780+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50384: remote error: tls: bad certificate 2025-08-13T19:54:45.488235818+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50386: remote error: tls: bad certificate 2025-08-13T19:54:45.505031358+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50396: remote error: tls: bad certificate 2025-08-13T19:54:45.519045817+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50412: remote error: tls: bad certificate 2025-08-13T19:54:45.536148925+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate 2025-08-13T19:54:45.552672057+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate 2025-08-13T19:54:45.567490320+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50440: remote error: tls: bad certificate 2025-08-13T19:54:45.586848402+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50446: remote error: tls: bad certificate 2025-08-13T19:54:45.609313923+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50448: remote error: tls: bad certificate 2025-08-13T19:54:45.629726306+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50460: remote error: tls: bad certificate 2025-08-13T19:54:45.646340430+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate 
2025-08-13T19:54:45.661195803+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50478: remote error: tls: bad certificate 2025-08-13T19:54:45.675670796+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50480: remote error: tls: bad certificate 2025-08-13T19:54:45.691987512+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50496: remote error: tls: bad certificate 2025-08-13T19:54:45.707129494+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50502: remote error: tls: bad certificate 2025-08-13T19:54:45.721192075+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50508: remote error: tls: bad certificate 2025-08-13T19:54:45.736476481+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50514: remote error: tls: bad certificate 2025-08-13T19:54:45.751938933+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50516: remote error: tls: bad certificate 2025-08-13T19:54:45.767759134+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50528: remote error: tls: bad certificate 2025-08-13T19:54:45.783994357+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50542: remote error: tls: bad certificate 2025-08-13T19:54:45.797493492+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50548: remote error: tls: bad certificate 2025-08-13T19:54:45.813383646+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate 2025-08-13T19:54:45.827280712+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50564: remote error: tls: bad certificate 2025-08-13T19:54:45.844484013+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50572: remote error: tls: bad certificate 2025-08-13T19:54:45.858049520+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50580: remote error: tls: bad certificate 2025-08-13T19:54:45.874998724+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50584: remote error: tls: bad certificate 2025-08-13T19:54:45.896306262+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50588: remote error: tls: bad certificate 2025-08-13T19:54:45.913262116+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate 2025-08-13T19:54:45.929432387+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50596: remote error: tls: bad certificate 2025-08-13T19:54:45.948160311+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50606: remote error: tls: bad certificate 2025-08-13T19:54:45.964073845+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50612: remote error: tls: bad certificate 2025-08-13T19:54:46.006916808+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50616: remote error: tls: bad certificate 2025-08-13T19:54:46.020915227+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50628: remote error: tls: bad certificate 2025-08-13T19:54:46.055171725+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50642: remote error: tls: bad certificate 2025-08-13T19:54:46.069256957+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50654: remote error: tls: bad certificate 
2025-08-13T19:54:46.092022466+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate 2025-08-13T19:54:46.116726671+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50676: remote error: tls: bad certificate 2025-08-13T19:54:46.136084183+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50678: remote error: tls: bad certificate 2025-08-13T19:54:46.151573485+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50694: remote error: tls: bad certificate 2025-08-13T19:54:46.166209213+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate 2025-08-13T19:54:46.184561257+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50714: remote error: tls: bad certificate 2025-08-13T19:54:46.202747295+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50724: remote error: tls: bad certificate 2025-08-13T19:54:46.227492002+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50738: remote error: tls: bad certificate 2025-08-13T19:54:46.255444979+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50742: remote error: tls: bad certificate 2025-08-13T19:54:46.274494723+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50744: remote error: tls: bad certificate 2025-08-13T19:54:46.296945013+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50758: remote error: tls: bad certificate 2025-08-13T19:54:46.318310713+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50770: remote error: tls: bad certificate 2025-08-13T19:54:46.348319239+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50786: remote error: tls: bad certificate 2025-08-13T19:54:46.367607499+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50790: remote error: tls: bad certificate 2025-08-13T19:54:46.389904406+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50798: remote error: tls: bad certificate 2025-08-13T19:54:46.409265978+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50802: remote error: tls: bad certificate 2025-08-13T19:54:46.433139139+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50814: remote error: tls: bad certificate 2025-08-13T19:54:46.452182303+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50820: remote error: tls: bad certificate 2025-08-13T19:54:46.467259783+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50826: remote error: tls: bad certificate 2025-08-13T19:54:47.001052853+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50840: remote error: tls: bad certificate 2025-08-13T19:54:47.019595873+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate 2025-08-13T19:54:47.039075658+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50848: remote error: tls: bad certificate 2025-08-13T19:54:47.056672000+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50854: remote error: tls: bad certificate 2025-08-13T19:54:47.073365597+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50860: remote error: tls: bad certificate 
2025-08-13T19:54:47.091295068+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50862: remote error: tls: bad certificate 2025-08-13T19:54:47.111718101+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate 2025-08-13T19:54:47.137031453+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50874: remote error: tls: bad certificate 2025-08-13T19:54:47.157653012+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50890: remote error: tls: bad certificate 2025-08-13T19:54:47.176022046+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50892: remote error: tls: bad certificate 2025-08-13T19:54:47.196487130+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50908: remote error: tls: bad certificate 2025-08-13T19:54:47.227302899+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50920: remote error: tls: bad certificate 2025-08-13T19:54:47.404909387+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50924: remote error: tls: bad certificate 2025-08-13T19:54:47.423644541+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50930: remote error: tls: bad certificate 2025-08-13T19:54:47.441867851+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50942: remote error: tls: bad certificate 2025-08-13T19:54:47.457562599+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50950: remote error: tls: bad certificate 2025-08-13T19:54:47.474138762+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate 2025-08-13T19:54:47.495737168+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50978: remote error: tls: bad certificate 2025-08-13T19:54:47.514976567+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50992: remote error: tls: bad certificate 2025-08-13T19:54:47.526215338+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51002: remote error: tls: bad certificate 2025-08-13T19:54:47.537221872+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51008: remote error: tls: bad certificate 2025-08-13T19:54:47.554406162+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51012: remote error: tls: bad certificate 2025-08-13T19:54:47.575909946+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51016: remote error: tls: bad certificate 2025-08-13T19:54:47.599926841+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51024: remote error: tls: bad certificate 2025-08-13T19:54:47.616442193+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51030: remote error: tls: bad certificate 2025-08-13T19:54:47.632249434+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51032: remote error: tls: bad certificate 2025-08-13T19:54:47.648476656+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51046: remote error: tls: bad certificate 2025-08-13T19:54:47.663969029+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51062: remote error: tls: bad certificate 2025-08-13T19:54:47.679760609+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51064: remote error: tls: bad certificate 
2025-08-13T19:54:47.697938208+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51078: remote error: tls: bad certificate 2025-08-13T19:54:47.711034242+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51092: remote error: tls: bad certificate 2025-08-13T19:54:47.729134198+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51100: remote error: tls: bad certificate 2025-08-13T19:54:47.743552369+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51104: remote error: tls: bad certificate 2025-08-13T19:54:47.762242143+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51116: remote error: tls: bad certificate 2025-08-13T19:54:47.782497591+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51122: remote error: tls: bad certificate 2025-08-13T19:54:47.799375712+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51132: remote error: tls: bad certificate 2025-08-13T19:54:47.817075917+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51136: remote error: tls: bad certificate 2025-08-13T19:54:47.838958022+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51146: remote error: tls: bad certificate 2025-08-13T19:54:47.855899665+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51152: remote error: tls: bad certificate 2025-08-13T19:54:47.876687658+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51166: remote error: tls: bad certificate 2025-08-13T19:54:47.898643715+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51180: remote error: tls: bad certificate 2025-08-13T19:54:47.921446705+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51190: remote error: tls: bad certificate 2025-08-13T19:54:47.935888657+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51198: remote error: tls: bad certificate 2025-08-13T19:54:47.960274003+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51214: remote error: tls: bad certificate 2025-08-13T19:54:47.978524504+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51230: remote error: tls: bad certificate 2025-08-13T19:54:48.005522014+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51244: remote error: tls: bad certificate 2025-08-13T19:54:48.046443822+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51260: remote error: tls: bad certificate 2025-08-13T19:54:48.064538418+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51262: remote error: tls: bad certificate 2025-08-13T19:54:48.080613007+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51274: remote error: tls: bad certificate 2025-08-13T19:54:48.099628939+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51282: remote error: tls: bad certificate 2025-08-13T19:54:48.121064661+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51294: remote error: tls: bad certificate 2025-08-13T19:54:48.145690804+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51304: remote error: tls: bad certificate 2025-08-13T19:54:48.167332531+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51318: remote error: tls: bad certificate 
2025-08-13T19:54:48.185262613+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51334: remote error: tls: bad certificate 2025-08-13T19:54:48.202229307+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51336: remote error: tls: bad certificate 2025-08-13T19:54:48.220639022+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51342: remote error: tls: bad certificate 2025-08-13T19:54:48.238681507+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51344: remote error: tls: bad certificate 2025-08-13T19:54:48.266258334+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51346: remote error: tls: bad certificate 2025-08-13T19:54:48.282623311+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51354: remote error: tls: bad certificate 2025-08-13T19:54:48.297842775+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51368: remote error: tls: bad certificate 2025-08-13T19:54:48.312705359+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51376: remote error: tls: bad certificate 2025-08-13T19:54:48.366947119+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51378: remote error: tls: bad certificate 2025-08-13T19:54:48.383752057+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51390: remote error: tls: bad certificate 2025-08-13T19:54:48.420479774+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51404: remote error: tls: bad certificate 2025-08-13T19:54:48.460301511+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51416: remote error: tls: bad certificate 2025-08-13T19:54:48.503072411+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51418: remote error: tls: bad certificate 2025-08-13T19:54:48.539468199+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51434: remote error: tls: bad certificate 2025-08-13T19:54:48.580702536+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51450: remote error: tls: bad certificate 2025-08-13T19:54:48.622874469+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51462: remote error: tls: bad certificate 2025-08-13T19:54:48.665528796+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51470: remote error: tls: bad certificate 2025-08-13T19:54:48.699564097+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51476: remote error: tls: bad certificate 2025-08-13T19:54:48.743502741+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37316: remote error: tls: bad certificate 2025-08-13T19:54:48.781037412+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37318: remote error: tls: bad certificate 2025-08-13T19:54:48.820597231+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37334: remote error: tls: bad certificate 2025-08-13T19:54:48.861330103+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37348: remote error: tls: bad certificate 2025-08-13T19:54:48.900106410+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37354: remote error: tls: bad certificate 2025-08-13T19:54:48.941313065+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37366: remote error: tls: bad certificate 
2025-08-13T19:54:48.982903331+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37368: remote error: tls: bad certificate 2025-08-13T19:54:49.025480786+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37378: remote error: tls: bad certificate 2025-08-13T19:54:49.061073002+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37384: remote error: tls: bad certificate 2025-08-13T19:54:49.101488645+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37390: remote error: tls: bad certificate 2025-08-13T19:54:49.139364575+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37400: remote error: tls: bad certificate 2025-08-13T19:54:49.181347173+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37414: remote error: tls: bad certificate 2025-08-13T19:54:49.228421736+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37418: remote error: tls: bad certificate 2025-08-13T19:54:49.264503406+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37420: remote error: tls: bad certificate 2025-08-13T19:54:49.303318783+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37426: remote error: tls: bad certificate 2025-08-13T19:54:49.341191304+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37434: remote error: tls: bad certificate 2025-08-13T19:54:49.385224931+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37442: remote error: tls: bad certificate 2025-08-13T19:54:49.421119105+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37448: remote error: tls: bad certificate 2025-08-13T19:54:49.463034371+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37456: remote error: tls: bad certificate 2025-08-13T19:54:49.507885881+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37462: remote error: tls: bad certificate 2025-08-13T19:54:49.541032956+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37464: remote error: tls: bad certificate 2025-08-13T19:54:49.594906904+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37466: remote error: tls: bad certificate 2025-08-13T19:54:49.622143321+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37472: remote error: tls: bad certificate 2025-08-13T19:54:49.666360792+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37482: remote error: tls: bad certificate 2025-08-13T19:54:49.699256001+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37488: remote error: tls: bad certificate 2025-08-13T19:54:49.742683830+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37492: remote error: tls: bad certificate 2025-08-13T19:54:49.787688754+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37506: remote error: tls: bad certificate 2025-08-13T19:54:49.827469789+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37518: remote error: tls: bad certificate 2025-08-13T19:54:49.863624121+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37534: remote error: tls: bad certificate 2025-08-13T19:54:49.901086060+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37544: remote error: tls: bad certificate 
2025-08-13T19:54:49.940695440+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37548: remote error: tls: bad certificate 2025-08-13T19:54:49.980859066+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37550: remote error: tls: bad certificate 2025-08-13T19:54:50.050907135+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37554: remote error: tls: bad certificate 2025-08-13T19:54:50.137901097+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37564: remote error: tls: bad certificate 2025-08-13T19:54:50.151417743+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37572: remote error: tls: bad certificate 2025-08-13T19:54:50.165392081+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37584: remote error: tls: bad certificate 2025-08-13T19:54:50.184057924+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37592: remote error: tls: bad certificate 2025-08-13T19:54:50.232257059+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37602: remote error: tls: bad certificate 2025-08-13T19:54:50.263172202+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37612: remote error: tls: bad certificate 2025-08-13T19:54:50.299941171+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37620: remote error: tls: bad certificate 2025-08-13T19:54:50.339362235+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37628: remote error: tls: bad certificate 2025-08-13T19:54:50.383579637+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37642: remote error: tls: bad certificate 2025-08-13T19:54:50.422077116+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37644: remote error: tls: bad certificate 2025-08-13T19:54:50.459365299+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37646: remote error: tls: bad certificate 2025-08-13T19:54:50.503114028+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37654: remote error: tls: bad certificate 2025-08-13T19:54:50.542227284+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37662: remote error: tls: bad certificate 2025-08-13T19:54:50.580443804+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37666: remote error: tls: bad certificate 2025-08-13T19:54:50.618663025+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37670: remote error: tls: bad certificate 2025-08-13T19:54:50.662091074+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37674: remote error: tls: bad certificate 2025-08-13T19:54:50.703324070+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37690: remote error: tls: bad certificate 2025-08-13T19:54:50.742994632+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37706: remote error: tls: bad certificate 2025-08-13T19:54:50.781174782+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37716: remote error: tls: bad certificate 2025-08-13T19:54:50.822514951+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37724: remote error: tls: bad certificate 2025-08-13T19:54:50.864965152+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37734: remote error: tls: bad certificate 
2025-08-13T19:54:50.903082800+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37744: remote error: tls: bad certificate 2025-08-13T19:54:50.939730446+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37746: remote error: tls: bad certificate 2025-08-13T19:54:50.980327724+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37762: remote error: tls: bad certificate 2025-08-13T19:54:51.028358395+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37776: remote error: tls: bad certificate 2025-08-13T19:54:51.063714993+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37786: remote error: tls: bad certificate 2025-08-13T19:54:51.099078432+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37798: remote error: tls: bad certificate 2025-08-13T19:54:51.138421105+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37810: remote error: tls: bad certificate 2025-08-13T19:54:51.184977953+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37822: remote error: tls: bad certificate 2025-08-13T19:54:51.224947554+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37824: remote error: tls: bad certificate 2025-08-13T19:54:51.259583092+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37838: remote error: tls: bad certificate 2025-08-13T19:54:51.301650752+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37840: remote error: tls: bad certificate 2025-08-13T19:54:51.344008331+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37850: remote error: tls: bad certificate 2025-08-13T19:54:51.381655405+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37854: remote error: tls: bad certificate 2025-08-13T19:54:51.422474030+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37870: remote error: tls: bad certificate 2025-08-13T19:54:51.461519404+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37886: remote error: tls: bad certificate 2025-08-13T19:54:51.502654738+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37900: remote error: tls: bad certificate 2025-08-13T19:54:51.542258418+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37916: remote error: tls: bad certificate 2025-08-13T19:54:51.594379955+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37932: remote error: tls: bad certificate 2025-08-13T19:54:51.622457366+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37948: remote error: tls: bad certificate 2025-08-13T19:54:51.660415729+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37960: remote error: tls: bad certificate 2025-08-13T19:54:51.703872139+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37974: remote error: tls: bad certificate 2025-08-13T19:54:51.741578755+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37988: remote error: tls: bad certificate 2025-08-13T19:54:51.784254163+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37992: remote error: tls: bad certificate 2025-08-13T19:54:51.819879569+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38000: remote error: tls: bad certificate 
2025-08-13T19:54:51.859965063+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38010: remote error: tls: bad certificate 2025-08-13T19:54:51.878121091+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38026: remote error: tls: bad certificate 2025-08-13T19:54:51.902353582+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38034: remote error: tls: bad certificate 2025-08-13T19:54:51.906515861+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38036: remote error: tls: bad certificate 2025-08-13T19:54:51.927728697+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38052: remote error: tls: bad certificate 2025-08-13T19:54:51.942359554+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38058: remote error: tls: bad certificate 2025-08-13T19:54:51.948343675+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38066: remote error: tls: bad certificate 2025-08-13T19:54:51.967201263+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38076: remote error: tls: bad certificate 2025-08-13T19:54:51.981193772+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38082: remote error: tls: bad certificate 2025-08-13T19:54:52.020268547+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38088: remote error: tls: bad certificate 2025-08-13T19:54:52.061667128+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38096: remote error: tls: bad certificate 2025-08-13T19:54:52.107280290+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38106: remote error: tls: bad certificate 2025-08-13T19:54:52.141223088+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38122: remote error: tls: bad certificate 2025-08-13T19:54:52.184956296+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38136: remote error: tls: bad certificate 2025-08-13T19:54:52.223043423+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38138: remote error: tls: bad certificate 2025-08-13T19:54:52.262898920+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38150: remote error: tls: bad certificate 2025-08-13T19:54:52.302521061+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38156: remote error: tls: bad certificate 2025-08-13T19:54:52.417076959+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38158: remote error: tls: bad certificate 2025-08-13T19:54:52.437246314+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38162: remote error: tls: bad certificate 2025-08-13T19:54:52.456128063+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38168: remote error: tls: bad certificate 2025-08-13T19:54:52.490100172+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38172: remote error: tls: bad certificate 2025-08-13T19:54:52.513515161+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38182: remote error: tls: bad certificate 2025-08-13T19:54:52.577266329+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38188: remote error: tls: bad certificate 2025-08-13T19:54:52.614880222+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38190: remote error: tls: bad certificate 
2025-08-13T19:54:52.636364625+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38196: remote error: tls: bad certificate 2025-08-13T19:54:52.661284256+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38200: remote error: tls: bad certificate 2025-08-13T19:54:52.702261005+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38208: remote error: tls: bad certificate 2025-08-13T19:54:52.740693602+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38212: remote error: tls: bad certificate 2025-08-13T19:54:52.782375921+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38224: remote error: tls: bad certificate 2025-08-13T19:54:52.820380875+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38228: remote error: tls: bad certificate 2025-08-13T19:54:52.859233114+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38236: remote error: tls: bad certificate 2025-08-13T19:54:52.909319983+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38238: remote error: tls: bad certificate 2025-08-13T19:54:52.939708260+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38244: remote error: tls: bad certificate 2025-08-13T19:54:52.980378011+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38254: remote error: tls: bad certificate 2025-08-13T19:54:53.020623659+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38270: remote error: tls: bad certificate 2025-08-13T19:54:53.062672749+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38276: remote error: tls: bad certificate 2025-08-13T19:54:53.102471034+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38290: remote error: tls: bad certificate 2025-08-13T19:54:53.140476859+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38296: remote error: tls: bad certificate 2025-08-13T19:54:53.179913034+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38308: remote error: tls: bad certificate 2025-08-13T19:54:53.222325944+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38312: remote error: tls: bad certificate 2025-08-13T19:54:53.258608689+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38314: remote error: tls: bad certificate 2025-08-13T19:54:53.298877468+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38328: remote error: tls: bad certificate 2025-08-13T19:54:53.347174317+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38338: remote error: tls: bad certificate 2025-08-13T19:54:53.397025109+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38346: remote error: tls: bad certificate 2025-08-13T19:54:53.444224336+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38350: remote error: tls: bad certificate 2025-08-13T19:54:53.497524887+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38366: remote error: tls: bad certificate 2025-08-13T19:54:53.533090441+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38378: remote error: tls: bad certificate 2025-08-13T19:54:53.553177204+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38386: remote error: tls: bad certificate 
2025-08-13T19:54:53.582106060+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38392: remote error: tls: bad certificate 2025-08-13T19:54:53.674882677+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38400: remote error: tls: bad certificate 2025-08-13T19:54:53.702545147+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38410: remote error: tls: bad certificate 2025-08-13T19:54:53.724945836+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38416: remote error: tls: bad certificate 2025-08-13T19:54:53.765981567+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38422: remote error: tls: bad certificate 2025-08-13T19:54:53.787168921+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38426: remote error: tls: bad certificate 2025-08-13T19:54:53.819082872+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38430: remote error: tls: bad certificate 2025-08-13T19:54:53.861335407+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38446: remote error: tls: bad certificate 2025-08-13T19:54:53.948517495+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38454: remote error: tls: bad certificate 2025-08-13T19:54:53.966378695+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38466: remote error: tls: bad certificate 2025-08-13T19:54:55.227851349+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38480: remote error: tls: bad certificate 2025-08-13T19:54:55.242359483+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38484: remote error: tls: bad certificate 2025-08-13T19:54:55.257741052+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38486: remote error: tls: bad certificate 2025-08-13T19:54:55.273574374+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38502: remote error: tls: bad certificate 2025-08-13T19:54:55.290432665+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38506: remote error: tls: bad certificate 2025-08-13T19:54:55.308278864+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38516: remote error: tls: bad certificate 2025-08-13T19:54:55.331014593+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38522: remote error: tls: bad certificate 2025-08-13T19:54:55.345660111+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38524: remote error: tls: bad certificate 2025-08-13T19:54:55.362567503+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38528: remote error: tls: bad certificate 2025-08-13T19:54:55.379289260+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38532: remote error: tls: bad certificate 2025-08-13T19:54:55.396632185+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38536: remote error: tls: bad certificate 2025-08-13T19:54:55.414524296+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38540: remote error: tls: bad certificate 2025-08-13T19:54:55.443186163+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38556: remote error: tls: bad certificate 2025-08-13T19:54:55.459181660+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38572: remote error: tls: bad certificate 
2025-08-13T19:54:55.475361311+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38586: remote error: tls: bad certificate 2025-08-13T19:54:55.498190953+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38602: remote error: tls: bad certificate 2025-08-13T19:54:55.523388592+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38610: remote error: tls: bad certificate 2025-08-13T19:54:55.539278625+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38622: remote error: tls: bad certificate 2025-08-13T19:54:55.556160687+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38634: remote error: tls: bad certificate 2025-08-13T19:54:55.574049117+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38642: remote error: tls: bad certificate 2025-08-13T19:54:55.588104238+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38652: remote error: tls: bad certificate 2025-08-13T19:54:55.604112455+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38664: remote error: tls: bad certificate 2025-08-13T19:54:55.617740784+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38668: remote error: tls: bad certificate 2025-08-13T19:54:55.636686545+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38680: remote error: tls: bad certificate 2025-08-13T19:54:55.670200451+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38686: remote error: tls: bad certificate 2025-08-13T19:54:55.688522254+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38702: remote error: tls: bad certificate 2025-08-13T19:54:55.710979924+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38706: remote error: tls: bad certificate 2025-08-13T19:54:55.763993497+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38712: remote error: tls: bad certificate 2025-08-13T19:54:55.783688559+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38716: remote error: tls: bad certificate 2025-08-13T19:54:55.800447597+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38732: remote error: tls: bad certificate 2025-08-13T19:54:55.818536713+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38738: remote error: tls: bad certificate 2025-08-13T19:54:55.837187016+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38752: remote error: tls: bad certificate 2025-08-13T19:54:55.853265814+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38764: remote error: tls: bad certificate 2025-08-13T19:54:55.870504146+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38772: remote error: tls: bad certificate 2025-08-13T19:54:55.889106467+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38788: remote error: tls: bad certificate 2025-08-13T19:54:55.909333034+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38796: remote error: tls: bad certificate 2025-08-13T19:54:55.926467863+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38806: remote error: tls: bad certificate 2025-08-13T19:54:55.945926278+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38810: remote error: tls: bad certificate 
2025-08-13T19:54:55.973565937+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38816: remote error: tls: bad certificate 2025-08-13T19:54:55.999894898+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38826: remote error: tls: bad certificate 2025-08-13T19:54:56.016873703+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38838: remote error: tls: bad certificate 2025-08-13T19:54:56.035196045+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38848: remote error: tls: bad certificate 2025-08-13T19:54:56.051654435+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38856: remote error: tls: bad certificate 2025-08-13T19:54:56.067487627+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38862: remote error: tls: bad certificate 2025-08-13T19:54:56.088916998+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38878: remote error: tls: bad certificate 2025-08-13T19:54:56.111201194+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38884: remote error: tls: bad certificate 2025-08-13T19:54:56.127637483+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38894: remote error: tls: bad certificate 2025-08-13T19:54:56.142056813+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38906: remote error: tls: bad certificate 2025-08-13T19:54:56.166355907+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38912: remote error: tls: bad certificate 2025-08-13T19:54:56.190515486+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38922: remote error: tls: bad certificate 2025-08-13T19:54:56.207329826+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38924: remote error: tls: bad certificate 2025-08-13T19:54:56.224884987+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38938: remote error: tls: bad certificate 2025-08-13T19:54:56.248384287+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38948: remote error: tls: bad certificate 2025-08-13T19:54:56.266689340+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38958: remote error: tls: bad certificate 2025-08-13T19:54:56.282990775+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38964: remote error: tls: bad certificate 2025-08-13T19:54:56.298635591+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38968: remote error: tls: bad certificate 2025-08-13T19:54:56.315967746+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38972: remote error: tls: bad certificate 2025-08-13T19:54:56.333487666+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38984: remote error: tls: bad certificate 2025-08-13T19:54:56.350389798+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38992: remote error: tls: bad certificate 2025-08-13T19:54:56.368765802+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38994: remote error: tls: bad certificate 2025-08-13T19:54:56.384156131+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38996: remote error: tls: bad certificate 2025-08-13T19:54:56.419667275+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39004: remote error: tls: bad certificate 
2025-08-13T19:54:56.461618872+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39012: remote error: tls: bad certificate 2025-08-13T19:54:56.501147229+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39024: remote error: tls: bad certificate 2025-08-13T19:54:56.543489418+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39036: remote error: tls: bad certificate 2025-08-13T19:54:56.581312257+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39042: remote error: tls: bad certificate 2025-08-13T19:54:56.620140895+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39048: remote error: tls: bad certificate 2025-08-13T19:55:02.072369324+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49228: remote error: tls: bad certificate 2025-08-13T19:55:02.095266718+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49244: remote error: tls: bad certificate 2025-08-13T19:55:02.121954589+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:55:02.145891702+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49256: remote error: tls: bad certificate 2025-08-13T19:55:02.181856968+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49268: remote error: tls: bad certificate 2025-08-13T19:55:03.229036497+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49270: remote error: tls: bad certificate 2025-08-13T19:55:03.245168697+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49278: remote error: tls: bad certificate 2025-08-13T19:55:03.263196582+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49286: remote error: tls: bad certificate 2025-08-13T19:55:03.281314219+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:55:03.300451624+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49310: remote error: tls: bad certificate 2025-08-13T19:55:03.319897959+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49312: remote error: tls: bad certificate 2025-08-13T19:55:03.347261069+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49316: remote error: tls: bad certificate 2025-08-13T19:55:03.364128491+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49330: remote error: tls: bad certificate 2025-08-13T19:55:03.379416757+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49338: remote error: tls: bad certificate 2025-08-13T19:55:03.394573079+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49344: remote error: tls: bad certificate 2025-08-13T19:55:03.415680942+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:55:03.431020299+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49366: remote error: tls: bad certificate 2025-08-13T19:55:03.450436733+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49368: remote error: tls: bad certificate 2025-08-13T19:55:03.468695354+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49380: remote error: tls: bad certificate 
2025-08-13T19:55:03.486017728+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49392: remote error: tls: bad certificate 2025-08-13T19:55:03.552110164+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49402: remote error: tls: bad certificate 2025-08-13T19:55:03.569330636+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49404: remote error: tls: bad certificate 2025-08-13T19:55:03.586444334+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate 2025-08-13T19:55:03.600984759+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:55:03.618684434+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49436: remote error: tls: bad certificate 2025-08-13T19:55:03.634714241+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49448: remote error: tls: bad certificate 2025-08-13T19:55:03.651106139+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49460: remote error: tls: bad certificate 2025-08-13T19:55:03.669949057+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49464: remote error: tls: bad certificate 2025-08-13T19:55:03.686266662+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49468: remote error: tls: bad certificate 2025-08-13T19:55:03.703419112+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate 2025-08-13T19:55:03.724175074+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49484: remote error: tls: bad certificate 2025-08-13T19:55:03.740313314+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49492: remote error: tls: bad certificate 2025-08-13T19:55:03.756300640+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49508: remote error: tls: bad certificate 2025-08-13T19:55:03.778144294+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49510: remote error: tls: bad certificate 2025-08-13T19:55:03.796837767+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49518: remote error: tls: bad certificate 2025-08-13T19:55:03.811734952+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:55:03.828271844+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49528: remote error: tls: bad certificate 2025-08-13T19:55:03.843378505+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49534: remote error: tls: bad certificate 2025-08-13T19:55:03.861767820+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49542: remote error: tls: bad certificate 2025-08-13T19:55:03.897177870+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49556: remote error: tls: bad certificate 2025-08-13T19:55:03.917193541+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate 2025-08-13T19:55:03.933047284+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49564: remote error: tls: bad certificate 2025-08-13T19:55:03.950366808+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49574: remote error: tls: bad certificate 
2025-08-13T19:55:03.971187182+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49580: remote error: tls: bad certificate 2025-08-13T19:55:03.987624301+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49590: remote error: tls: bad certificate 2025-08-13T19:55:04.003668679+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate 2025-08-13T19:55:04.019499970+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49610: remote error: tls: bad certificate 2025-08-13T19:55:04.035059394+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49626: remote error: tls: bad certificate 2025-08-13T19:55:04.050753602+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49638: remote error: tls: bad certificate 2025-08-13T19:55:04.067880621+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49640: remote error: tls: bad certificate 2025-08-13T19:55:04.097679831+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:55:04.114539012+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49654: remote error: tls: bad certificate 2025-08-13T19:55:04.129129608+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49670: remote error: tls: bad certificate 2025-08-13T19:55:04.142750187+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate 2025-08-13T19:55:04.160052891+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49686: remote error: tls: bad certificate 2025-08-13T19:55:04.176010726+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49690: remote error: tls: bad certificate 2025-08-13T19:55:04.190972713+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49694: remote error: tls: bad certificate 2025-08-13T19:55:04.206735953+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49702: remote error: tls: bad certificate 2025-08-13T19:55:04.226151637+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49706: remote error: tls: bad certificate 2025-08-13T19:55:04.240415524+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49722: remote error: tls: bad certificate 2025-08-13T19:55:04.254145705+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49724: remote error: tls: bad certificate 2025-08-13T19:55:04.270585565+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49730: remote error: tls: bad certificate 2025-08-13T19:55:04.292159220+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49738: remote error: tls: bad certificate 2025-08-13T19:55:04.309279679+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49740: remote error: tls: bad certificate 2025-08-13T19:55:04.326258183+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49752: remote error: tls: bad certificate 2025-08-13T19:55:04.344220726+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49766: remote error: tls: bad certificate 2025-08-13T19:55:04.362106736+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate 
2025-08-13T19:55:04.378886265+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49776: remote error: tls: bad certificate 2025-08-13T19:55:04.395214081+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate 2025-08-13T19:55:04.412095992+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49786: remote error: tls: bad certificate 2025-08-13T19:55:04.428080138+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49798: remote error: tls: bad certificate 2025-08-13T19:55:04.443888319+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49812: remote error: tls: bad certificate 2025-08-13T19:55:05.227723685+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49822: remote error: tls: bad certificate 2025-08-13T19:55:05.247336744+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49836: remote error: tls: bad certificate 2025-08-13T19:55:05.263668160+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49840: remote error: tls: bad certificate 2025-08-13T19:55:05.298717480+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49846: remote error: tls: bad certificate 2025-08-13T19:55:05.324918628+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49862: remote error: tls: bad certificate 2025-08-13T19:55:05.352543047+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49876: remote error: tls: bad certificate 2025-08-13T19:55:05.369881471+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49886: remote error: tls: bad certificate 2025-08-13T19:55:05.390610883+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49900: remote error: tls: bad certificate 2025-08-13T19:55:05.415905724+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49902: remote error: tls: bad certificate 2025-08-13T19:55:05.433602269+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49914: remote error: tls: bad certificate 2025-08-13T19:55:05.449245916+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49918: remote error: tls: bad certificate 2025-08-13T19:55:05.479003685+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49932: remote error: tls: bad certificate 2025-08-13T19:55:05.495592618+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49948: remote error: tls: bad certificate 2025-08-13T19:55:05.510164614+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49960: remote error: tls: bad certificate 2025-08-13T19:55:05.525165502+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49976: remote error: tls: bad certificate 2025-08-13T19:55:05.544950246+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49992: remote error: tls: bad certificate 2025-08-13T19:55:05.563106044+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50008: remote error: tls: bad certificate 2025-08-13T19:55:05.579723289+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50014: remote error: tls: bad certificate 2025-08-13T19:55:05.600977585+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50024: remote error: tls: bad certificate 
2025-08-13T19:55:05.621222902+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50038: remote error: tls: bad certificate 2025-08-13T19:55:05.643762036+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50050: remote error: tls: bad certificate 2025-08-13T19:55:05.661943755+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50060: remote error: tls: bad certificate 2025-08-13T19:55:05.683520490+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50074: remote error: tls: bad certificate 2025-08-13T19:55:05.705586820+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50078: remote error: tls: bad certificate 2025-08-13T19:55:05.727709081+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50088: remote error: tls: bad certificate 2025-08-13T19:55:05.744994374+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50090: remote error: tls: bad certificate 2025-08-13T19:55:05.763105891+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50096: remote error: tls: bad certificate 2025-08-13T19:55:05.780056955+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50104: remote error: tls: bad certificate 2025-08-13T19:55:05.798433359+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50114: remote error: tls: bad certificate 2025-08-13T19:55:05.820980692+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50128: remote error: tls: bad certificate 2025-08-13T19:55:05.844742590+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50142: remote error: tls: bad certificate 2025-08-13T19:55:05.859925023+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50150: remote error: tls: bad certificate 2025-08-13T19:55:05.877955148+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50160: remote error: tls: bad certificate 2025-08-13T19:55:05.896653292+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50166: remote error: tls: bad certificate 2025-08-13T19:55:05.915192051+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50170: remote error: tls: bad certificate 2025-08-13T19:55:05.932388981+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate 2025-08-13T19:55:05.950755436+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50176: remote error: tls: bad certificate 2025-08-13T19:55:05.967581306+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50180: remote error: tls: bad certificate 2025-08-13T19:55:05.986260369+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50196: remote error: tls: bad certificate 2025-08-13T19:55:06.001455402+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50202: remote error: tls: bad certificate 2025-08-13T19:55:06.025355184+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate 2025-08-13T19:55:06.042577586+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50222: remote error: tls: bad certificate 2025-08-13T19:55:06.056247496+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50236: remote error: tls: bad certificate 
2025-08-13T19:55:06.072909791+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50248: remote error: tls: bad certificate 2025-08-13T19:55:06.092756498+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50264: remote error: tls: bad certificate 2025-08-13T19:55:06.109374162+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50270: remote error: tls: bad certificate 2025-08-13T19:55:06.123302560+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50284: remote error: tls: bad certificate 2025-08-13T19:55:06.139014788+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate 2025-08-13T19:55:06.155313473+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50302: remote error: tls: bad certificate 2025-08-13T19:55:06.169356584+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50314: remote error: tls: bad certificate 2025-08-13T19:55:06.182737056+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50316: remote error: tls: bad certificate 2025-08-13T19:55:06.197899328+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50326: remote error: tls: bad certificate 2025-08-13T19:55:06.215977304+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50330: remote error: tls: bad certificate 2025-08-13T19:55:06.234231425+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate 2025-08-13T19:55:06.253554847+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate 2025-08-13T19:55:06.270428588+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50358: remote error: tls: bad certificate 2025-08-13T19:55:06.286947419+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50374: remote error: tls: bad certificate 2025-08-13T19:55:06.303024518+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50380: remote error: tls: bad certificate 2025-08-13T19:55:06.329432042+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50384: remote error: tls: bad certificate 2025-08-13T19:55:06.348380412+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50390: remote error: tls: bad certificate 2025-08-13T19:55:06.366329395+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50392: remote error: tls: bad certificate 2025-08-13T19:55:06.386994805+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50408: remote error: tls: bad certificate 2025-08-13T19:55:06.425499093+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50420: remote error: tls: bad certificate 2025-08-13T19:55:06.465500825+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50422: remote error: tls: bad certificate 2025-08-13T19:55:06.506290269+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50434: remote error: tls: bad certificate 2025-08-13T19:55:06.545177628+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50444: remote error: tls: bad certificate 2025-08-13T19:55:06.594091194+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50460: remote error: tls: bad certificate 
2025-08-13T19:55:12.331142743+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46964: remote error: tls: bad certificate 2025-08-13T19:55:12.354480659+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46968: remote error: tls: bad certificate 2025-08-13T19:55:12.378760032+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46982: remote error: tls: bad certificate 2025-08-13T19:55:12.403076225+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46992: remote error: tls: bad certificate 2025-08-13T19:55:12.426234696+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46996: remote error: tls: bad certificate 2025-08-13T19:55:15.225173818+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47006: remote error: tls: bad certificate 2025-08-13T19:55:15.285497839+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47020: remote error: tls: bad certificate 2025-08-13T19:55:15.311115810+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47036: remote error: tls: bad certificate 2025-08-13T19:55:15.337115672+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47048: remote error: tls: bad certificate 2025-08-13T19:55:15.358567974+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47050: remote error: tls: bad certificate 2025-08-13T19:55:15.379595424+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47052: remote error: tls: bad certificate 2025-08-13T19:55:15.394880050+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47066: remote error: tls: bad certificate 2025-08-13T19:55:15.408566091+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47078: remote error: tls: bad certificate 2025-08-13T19:55:15.423299021+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47092: remote error: tls: bad certificate 2025-08-13T19:55:15.439522234+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47106: remote error: tls: bad certificate 2025-08-13T19:55:15.454868982+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47110: remote error: tls: bad certificate 2025-08-13T19:55:15.471000052+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47114: remote error: tls: bad certificate 2025-08-13T19:55:15.488863622+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47122: remote error: tls: bad certificate 2025-08-13T19:55:15.515423570+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47128: remote error: tls: bad certificate 2025-08-13T19:55:15.534169575+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47134: remote error: tls: bad certificate 2025-08-13T19:55:15.549044499+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47150: remote error: tls: bad certificate 2025-08-13T19:55:15.564432328+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47158: remote error: tls: bad certificate 2025-08-13T19:55:15.583125322+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47160: remote error: tls: bad certificate 2025-08-13T19:55:15.596480623+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47168: remote error: tls: bad certificate 
2025-08-13T19:55:15.615202497+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47176: remote error: tls: bad certificate 2025-08-13T19:55:15.636354100+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47180: remote error: tls: bad certificate 2025-08-13T19:55:15.654612161+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47192: remote error: tls: bad certificate 2025-08-13T19:55:15.674444857+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47206: remote error: tls: bad certificate 2025-08-13T19:55:15.691401601+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47214: remote error: tls: bad certificate 2025-08-13T19:55:15.712371689+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47220: remote error: tls: bad certificate 2025-08-13T19:55:15.730596589+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47226: remote error: tls: bad certificate 2025-08-13T19:55:15.753963686+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47230: remote error: tls: bad certificate 2025-08-13T19:55:15.774295636+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47232: remote error: tls: bad certificate 2025-08-13T19:55:15.796067738+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47242: remote error: tls: bad certificate 2025-08-13T19:55:15.820903106+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47248: remote error: tls: bad certificate 2025-08-13T19:55:15.840719152+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47256: remote error: tls: bad certificate 2025-08-13T19:55:15.861375081+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47260: remote error: tls: bad certificate 2025-08-13T19:55:15.881214097+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47270: remote error: tls: bad certificate 2025-08-13T19:55:15.900874318+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47282: remote error: tls: bad certificate 2025-08-13T19:55:15.930890674+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47292: remote error: tls: bad certificate 2025-08-13T19:55:15.952244104+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47304: remote error: tls: bad certificate 2025-08-13T19:55:15.969910698+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47306: remote error: tls: bad certificate 2025-08-13T19:55:15.985747920+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47312: remote error: tls: bad certificate 2025-08-13T19:55:16.001598612+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47324: remote error: tls: bad certificate 2025-08-13T19:55:16.021975563+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47328: remote error: tls: bad certificate 2025-08-13T19:55:16.040118041+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47342: remote error: tls: bad certificate 2025-08-13T19:55:16.056385355+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47356: remote error: tls: bad certificate 2025-08-13T19:55:16.076656634+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47366: remote error: tls: bad certificate 
2025-08-13T19:55:16.094177364+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47374: remote error: tls: bad certificate 2025-08-13T19:55:16.116261034+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47386: remote error: tls: bad certificate 2025-08-13T19:55:16.142067610+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47392: remote error: tls: bad certificate 2025-08-13T19:55:16.158906441+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47402: remote error: tls: bad certificate 2025-08-13T19:55:16.174759553+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47414: remote error: tls: bad certificate 2025-08-13T19:55:16.192625763+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47424: remote error: tls: bad certificate 2025-08-13T19:55:16.211655506+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47440: remote error: tls: bad certificate 2025-08-13T19:55:16.234517378+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47452: remote error: tls: bad certificate 2025-08-13T19:55:16.253576982+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47468: remote error: tls: bad certificate 2025-08-13T19:55:16.275850697+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47474: remote error: tls: bad certificate 2025-08-13T19:55:16.296477066+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47476: remote error: tls: bad certificate 2025-08-13T19:55:16.312267216+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47488: remote error: tls: bad certificate 2025-08-13T19:55:16.329667203+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47502: remote error: tls: bad certificate 2025-08-13T19:55:16.348455249+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47516: remote error: tls: bad certificate 2025-08-13T19:55:16.365290759+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47520: remote error: tls: bad certificate 2025-08-13T19:55:16.382462859+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47526: remote error: tls: bad certificate 2025-08-13T19:55:16.399512616+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47528: remote error: tls: bad certificate 2025-08-13T19:55:16.414212135+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47544: remote error: tls: bad certificate 2025-08-13T19:55:16.430592093+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47552: remote error: tls: bad certificate 2025-08-13T19:55:16.449546983+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47554: remote error: tls: bad certificate 2025-08-13T19:55:16.468413702+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47558: remote error: tls: bad certificate 2025-08-13T19:55:16.489704129+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47562: remote error: tls: bad certificate 2025-08-13T19:55:16.508568797+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47574: remote error: tls: bad certificate 2025-08-13T19:55:16.528029303+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47588: remote error: tls: bad certificate 
2025-08-13T19:55:22.724258620+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60124: remote error: tls: bad certificate 2025-08-13T19:55:22.747266076+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60134: remote error: tls: bad certificate 2025-08-13T19:55:22.773129974+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60142: remote error: tls: bad certificate 2025-08-13T19:55:22.796035168+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60154: remote error: tls: bad certificate 2025-08-13T19:55:22.822198264+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60158: remote error: tls: bad certificate 2025-08-13T19:55:25.226013992+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60168: remote error: tls: bad certificate 2025-08-13T19:55:25.243915683+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60176: remote error: tls: bad certificate 2025-08-13T19:55:25.263236734+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60184: remote error: tls: bad certificate 2025-08-13T19:55:25.279889680+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60194: remote error: tls: bad certificate 2025-08-13T19:55:25.311380298+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60204: remote error: tls: bad certificate 2025-08-13T19:55:25.335893618+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60216: remote error: tls: bad certificate 2025-08-13T19:55:25.352574434+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60232: remote error: tls: bad certificate 2025-08-13T19:55:25.374067167+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60242: remote error: tls: bad certificate 2025-08-13T19:55:25.390984790+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60250: remote error: tls: bad certificate 2025-08-13T19:55:25.408735036+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60262: remote error: tls: bad certificate 2025-08-13T19:55:25.425133544+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60270: remote error: tls: bad certificate 2025-08-13T19:55:25.444543878+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60280: remote error: tls: bad certificate 2025-08-13T19:55:25.460848663+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60296: remote error: tls: bad certificate 2025-08-13T19:55:25.476048517+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60304: remote error: tls: bad certificate 2025-08-13T19:55:25.498246390+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60310: remote error: tls: bad certificate 2025-08-13T19:55:25.522613145+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60318: remote error: tls: bad certificate 2025-08-13T19:55:25.542240625+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60330: remote error: tls: bad certificate 2025-08-13T19:55:25.559260111+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60342: remote error: tls: bad certificate 2025-08-13T19:55:25.574486505+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60354: remote error: tls: bad certificate 
2025-08-13T19:55:25.593993892+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60368: remote error: tls: bad certificate 2025-08-13T19:55:25.614568859+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60374: remote error: tls: bad certificate 2025-08-13T19:55:25.631331347+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60376: remote error: tls: bad certificate 2025-08-13T19:55:25.647363455+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate 2025-08-13T19:55:25.666637255+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60398: remote error: tls: bad certificate 2025-08-13T19:55:25.682739784+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60404: remote error: tls: bad certificate 2025-08-13T19:55:25.698456613+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60408: remote error: tls: bad certificate 2025-08-13T19:55:25.715578541+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60416: remote error: tls: bad certificate 2025-08-13T19:55:25.731499376+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60430: remote error: tls: bad certificate 2025-08-13T19:55:25.748157641+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60444: remote error: tls: bad certificate 2025-08-13T19:55:25.764416645+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60460: remote error: tls: bad certificate 2025-08-13T19:55:25.783475489+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60468: remote error: tls: bad certificate 2025-08-13T19:55:25.806465025+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60470: remote error: tls: bad certificate 2025-08-13T19:55:25.827758752+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60482: remote error: tls: bad certificate 2025-08-13T19:55:25.845141138+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60484: remote error: tls: bad certificate 2025-08-13T19:55:25.865158879+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate 2025-08-13T19:55:25.886289762+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60496: remote error: tls: bad certificate 2025-08-13T19:55:25.906937921+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60504: remote error: tls: bad certificate 2025-08-13T19:55:25.923766471+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60520: remote error: tls: bad certificate 2025-08-13T19:55:25.942594599+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60522: remote error: tls: bad certificate 2025-08-13T19:55:25.961747565+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60528: remote error: tls: bad certificate 2025-08-13T19:55:25.982764275+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60540: remote error: tls: bad certificate 2025-08-13T19:55:25.998073442+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60550: remote error: tls: bad certificate 2025-08-13T19:55:26.019044340+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60564: remote error: tls: bad certificate 
2025-08-13T19:55:26.036205540+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60578: remote error: tls: bad certificate 2025-08-13T19:55:26.052680450+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60592: remote error: tls: bad certificate 2025-08-13T19:55:26.070468387+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60594: remote error: tls: bad certificate 2025-08-13T19:55:26.088758559+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60602: remote error: tls: bad certificate 2025-08-13T19:55:26.107552705+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60608: remote error: tls: bad certificate 2025-08-13T19:55:26.126840926+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60622: remote error: tls: bad certificate 2025-08-13T19:55:26.144071408+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60626: remote error: tls: bad certificate 2025-08-13T19:55:26.163052349+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60634: remote error: tls: bad certificate 2025-08-13T19:55:26.178105229+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60640: remote error: tls: bad certificate 2025-08-13T19:55:26.194324331+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60652: remote error: tls: bad certificate 2025-08-13T19:55:26.215030852+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60658: remote error: tls: bad certificate 2025-08-13T19:55:26.233077127+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60662: remote error: tls: bad certificate 2025-08-13T19:55:26.246435338+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60666: remote error: tls: bad certificate 2025-08-13T19:55:26.263378962+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60670: remote error: tls: bad certificate 2025-08-13T19:55:26.278150433+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60686: remote error: tls: bad certificate 2025-08-13T19:55:26.298293728+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60688: remote error: tls: bad certificate 2025-08-13T19:55:26.316858798+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60704: remote error: tls: bad certificate 2025-08-13T19:55:26.338270129+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate 2025-08-13T19:55:26.355897422+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60720: remote error: tls: bad certificate 2025-08-13T19:55:26.373379100+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60722: remote error: tls: bad certificate 2025-08-13T19:55:26.393035921+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60730: remote error: tls: bad certificate 2025-08-13T19:55:26.409610044+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60742: remote error: tls: bad certificate 2025-08-13T19:55:26.426176577+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60756: remote error: tls: bad certificate 2025-08-13T19:55:26.445216310+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60770: remote error: tls: bad certificate 
2025-08-13T19:55:30.187158407+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40300: remote error: tls: bad certificate 2025-08-13T19:55:30.210267796+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40304: remote error: tls: bad certificate 2025-08-13T19:55:30.227160748+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40316: remote error: tls: bad certificate 2025-08-13T19:55:30.250921076+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40322: remote error: tls: bad certificate 2025-08-13T19:55:30.276694852+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40332: remote error: tls: bad certificate 2025-08-13T19:55:30.297585858+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40340: remote error: tls: bad certificate 2025-08-13T19:55:30.313942174+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40344: remote error: tls: bad certificate 2025-08-13T19:55:30.330537588+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40352: remote error: tls: bad certificate 2025-08-13T19:55:30.350941500+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40360: remote error: tls: bad certificate 2025-08-13T19:55:30.369636034+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40370: remote error: tls: bad certificate 2025-08-13T19:55:30.388164132+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40382: remote error: tls: bad certificate 2025-08-13T19:55:30.405098565+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40394: remote error: tls: bad certificate 2025-08-13T19:55:30.422513152+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40400: remote error: tls: bad certificate 2025-08-13T19:55:30.443720767+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40410: remote error: tls: bad certificate 2025-08-13T19:55:30.462170194+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40426: remote error: tls: bad certificate 2025-08-13T19:55:30.478958423+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40432: remote error: tls: bad certificate 2025-08-13T19:55:30.493470887+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40448: remote error: tls: bad certificate 2025-08-13T19:55:30.508128785+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40464: remote error: tls: bad certificate 2025-08-13T19:55:30.524500612+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate 2025-08-13T19:55:30.539152440+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40474: remote error: tls: bad certificate 2025-08-13T19:55:30.555514727+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40478: remote error: tls: bad certificate 2025-08-13T19:55:30.572329757+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40488: remote error: tls: bad certificate 2025-08-13T19:55:30.593456840+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40490: remote error: tls: bad certificate 2025-08-13T19:55:30.610402143+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40502: remote error: tls: bad certificate 
2025-08-13T19:55:30.634690056+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40508: remote error: tls: bad certificate 2025-08-13T19:55:30.655562932+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40514: remote error: tls: bad certificate 2025-08-13T19:55:30.674533753+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40528: remote error: tls: bad certificate 2025-08-13T19:55:30.695425009+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40534: remote error: tls: bad certificate 2025-08-13T19:55:30.713093333+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40540: remote error: tls: bad certificate 2025-08-13T19:55:30.731038856+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40544: remote error: tls: bad certificate 2025-08-13T19:55:30.746182788+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40548: remote error: tls: bad certificate 2025-08-13T19:55:30.761851995+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40550: remote error: tls: bad certificate 2025-08-13T19:55:30.779347764+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate 2025-08-13T19:55:30.803917115+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate 2025-08-13T19:55:30.825400208+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate 2025-08-13T19:55:30.843969128+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40586: remote error: tls: bad certificate 2025-08-13T19:55:30.861194199+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40590: remote error: tls: bad certificate 2025-08-13T19:55:30.877293299+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40594: remote error: tls: bad certificate 2025-08-13T19:55:30.895121527+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40598: remote error: tls: bad certificate 2025-08-13T19:55:30.910407503+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40606: remote error: tls: bad certificate 2025-08-13T19:55:30.935099758+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate 2025-08-13T19:55:30.952342530+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40614: remote error: tls: bad certificate 2025-08-13T19:55:30.967850072+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate 2025-08-13T19:55:30.987941256+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40634: remote error: tls: bad certificate 2025-08-13T19:55:31.004267672+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40648: remote error: tls: bad certificate 2025-08-13T19:55:31.019576318+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40664: remote error: tls: bad certificate 2025-08-13T19:55:31.035443981+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40672: remote error: tls: bad certificate 2025-08-13T19:55:31.050174521+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40676: remote error: tls: bad certificate 
2025-08-13T19:55:31.075765112+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40678: remote error: tls: bad certificate 2025-08-13T19:55:31.095842565+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40694: remote error: tls: bad certificate 2025-08-13T19:55:31.111585564+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40700: remote error: tls: bad certificate 2025-08-13T19:55:31.130693119+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40706: remote error: tls: bad certificate 2025-08-13T19:55:31.149737572+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40720: remote error: tls: bad certificate 2025-08-13T19:55:31.167994773+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40734: remote error: tls: bad certificate 2025-08-13T19:55:31.186055539+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40740: remote error: tls: bad certificate 2025-08-13T19:55:31.202670953+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40748: remote error: tls: bad certificate 2025-08-13T19:55:31.222940151+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40750: remote error: tls: bad certificate 2025-08-13T19:55:31.241208052+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40752: remote error: tls: bad certificate 2025-08-13T19:55:31.260454701+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40766: remote error: tls: bad certificate 2025-08-13T19:55:31.276256072+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40782: remote error: tls: bad certificate 2025-08-13T19:55:31.296312925+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate 2025-08-13T19:55:31.320436283+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40798: remote error: tls: bad certificate 2025-08-13T19:55:31.341494914+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40804: remote error: tls: bad certificate 2025-08-13T19:55:31.360622269+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40810: remote error: tls: bad certificate 2025-08-13T19:55:31.385463358+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40824: remote error: tls: bad certificate 2025-08-13T19:55:31.410011959+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40830: remote error: tls: bad certificate 2025-08-13T19:55:31.429662579+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40838: remote error: tls: bad certificate 2025-08-13T19:55:33.027972933+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40840: remote error: tls: bad certificate 2025-08-13T19:55:33.057389732+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40850: remote error: tls: bad certificate 2025-08-13T19:55:33.085435843+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40854: remote error: tls: bad certificate 2025-08-13T19:55:33.104620350+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40866: remote error: tls: bad certificate 2025-08-13T19:55:33.126115503+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40870: remote error: tls: bad certificate 
2025-08-13T19:55:35.227045600+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40884: remote error: tls: bad certificate 2025-08-13T19:55:35.246573448+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40898: remote error: tls: bad certificate 2025-08-13T19:55:35.268547395+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40906: remote error: tls: bad certificate 2025-08-13T19:55:35.284942182+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40914: remote error: tls: bad certificate 2025-08-13T19:55:35.302182604+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40930: remote error: tls: bad certificate 2025-08-13T19:55:35.322002280+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40936: remote error: tls: bad certificate 2025-08-13T19:55:35.344218634+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40948: remote error: tls: bad certificate 2025-08-13T19:55:35.361602570+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40950: remote error: tls: bad certificate 2025-08-13T19:55:35.383517945+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40966: remote error: tls: bad certificate 2025-08-13T19:55:35.400439768+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40976: remote error: tls: bad certificate 2025-08-13T19:55:35.417424883+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate 2025-08-13T19:55:35.436431915+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40994: remote error: tls: bad certificate 2025-08-13T19:55:35.454947543+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41010: remote error: tls: bad certificate 2025-08-13T19:55:35.471143235+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41014: remote error: tls: bad certificate 2025-08-13T19:55:35.502896571+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41022: remote error: tls: bad certificate 2025-08-13T19:55:35.575462881+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41030: remote error: tls: bad certificate 2025-08-13T19:55:35.575754439+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41026: remote error: tls: bad certificate 2025-08-13T19:55:35.603601564+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate 2025-08-13T19:55:35.621914196+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41044: remote error: tls: bad certificate 2025-08-13T19:55:35.642222516+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41050: remote error: tls: bad certificate 2025-08-13T19:55:35.662885595+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41064: remote error: tls: bad certificate 2025-08-13T19:55:35.681107765+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41066: remote error: tls: bad certificate 2025-08-13T19:55:35.699474799+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41074: remote error: tls: bad certificate 2025-08-13T19:55:35.716397602+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41090: remote error: tls: bad certificate 
2025-08-13T19:55:35.735190078+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41098: remote error: tls: bad certificate 2025-08-13T19:55:35.753157131+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41110: remote error: tls: bad certificate 2025-08-13T19:55:35.772507463+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41118: remote error: tls: bad certificate 2025-08-13T19:55:35.786490422+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41132: remote error: tls: bad certificate 2025-08-13T19:55:35.803189179+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41142: remote error: tls: bad certificate 2025-08-13T19:55:35.819016400+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41156: remote error: tls: bad certificate 2025-08-13T19:55:35.844026044+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41168: remote error: tls: bad certificate 2025-08-13T19:55:35.860004930+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41180: remote error: tls: bad certificate 2025-08-13T19:55:35.876589063+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41190: remote error: tls: bad certificate 2025-08-13T19:55:35.900619959+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41194: remote error: tls: bad certificate 2025-08-13T19:55:35.916129001+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41208: remote error: tls: bad certificate 2025-08-13T19:55:35.935918986+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41224: remote error: tls: bad certificate 2025-08-13T19:55:35.956083701+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41236: remote error: tls: bad certificate 2025-08-13T19:55:35.974952029+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41238: remote error: tls: bad certificate 2025-08-13T19:55:35.994320032+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41250: remote error: tls: bad certificate 2025-08-13T19:55:36.011561524+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41254: remote error: tls: bad certificate 2025-08-13T19:55:36.027715485+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41270: remote error: tls: bad certificate 2025-08-13T19:55:36.045503063+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41274: remote error: tls: bad certificate 2025-08-13T19:55:36.063375353+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41286: remote error: tls: bad certificate 2025-08-13T19:55:36.088994874+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41302: remote error: tls: bad certificate 2025-08-13T19:55:36.108676985+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41310: remote error: tls: bad certificate 2025-08-13T19:55:36.132025821+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41314: remote error: tls: bad certificate 2025-08-13T19:55:36.149121519+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41324: remote error: tls: bad certificate 2025-08-13T19:55:36.164097096+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41326: remote error: tls: bad certificate 
2025-08-13T19:55:36.177856509+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41332: remote error: tls: bad certificate 2025-08-13T19:55:36.190974313+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41336: remote error: tls: bad certificate 2025-08-13T19:55:36.207574167+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41338: remote error: tls: bad certificate 2025-08-13T19:55:36.224421308+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41342: remote error: tls: bad certificate 2025-08-13T19:55:36.238508850+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41358: remote error: tls: bad certificate 2025-08-13T19:55:36.258044717+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41364: remote error: tls: bad certificate 2025-08-13T19:55:36.286055726+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41376: remote error: tls: bad certificate 2025-08-13T19:55:36.300371135+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41382: remote error: tls: bad certificate 2025-08-13T19:55:36.313492559+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate 2025-08-13T19:55:36.328174548+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41404: remote error: tls: bad certificate 2025-08-13T19:55:36.345859293+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41410: remote error: tls: bad certificate 2025-08-13T19:55:36.365382020+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41412: remote error: tls: bad certificate 2025-08-13T19:55:36.380294025+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41426: remote error: tls: bad certificate 2025-08-13T19:55:36.396448136+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41440: remote error: tls: bad certificate 2025-08-13T19:55:36.411076403+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41446: remote error: tls: bad certificate 2025-08-13T19:55:36.425283589+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41454: remote error: tls: bad certificate 2025-08-13T19:55:36.439167365+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41466: remote error: tls: bad certificate 2025-08-13T19:55:36.456911581+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41478: remote error: tls: bad certificate 2025-08-13T19:55:36.477905750+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41486: remote error: tls: bad certificate 2025-08-13T19:55:43.542359063+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39088: remote error: tls: bad certificate 2025-08-13T19:55:43.562344944+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39090: remote error: tls: bad certificate 2025-08-13T19:55:43.579504574+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39096: remote error: tls: bad certificate 2025-08-13T19:55:43.600220455+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39102: remote error: tls: bad certificate 2025-08-13T19:55:43.623744817+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39106: remote error: tls: bad certificate 
2025-08-13T19:55:45.230706044+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39110: remote error: tls: bad certificate 2025-08-13T19:55:45.249200712+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39120: remote error: tls: bad certificate 2025-08-13T19:55:45.271050146+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39130: remote error: tls: bad certificate 2025-08-13T19:55:45.290850271+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39144: remote error: tls: bad certificate 2025-08-13T19:55:45.310144452+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39152: remote error: tls: bad certificate 2025-08-13T19:55:45.323583086+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39154: remote error: tls: bad certificate 2025-08-13T19:55:45.341274101+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39160: remote error: tls: bad certificate 2025-08-13T19:55:45.362244860+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39164: remote error: tls: bad certificate 2025-08-13T19:55:45.381071508+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39170: remote error: tls: bad certificate 2025-08-13T19:55:45.403018454+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39174: remote error: tls: bad certificate 2025-08-13T19:55:45.421341798+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39188: remote error: tls: bad certificate 2025-08-13T19:55:45.437453518+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39202: remote error: tls: bad certificate 2025-08-13T19:55:45.452191819+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39210: remote error: tls: bad certificate 2025-08-13T19:55:45.469058160+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39224: remote error: tls: bad certificate 2025-08-13T19:55:45.488306020+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39226: remote error: tls: bad certificate 2025-08-13T19:55:45.506644104+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39236: remote error: tls: bad certificate 2025-08-13T19:55:45.524887524+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39244: remote error: tls: bad certificate 2025-08-13T19:55:45.544297949+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39250: remote error: tls: bad certificate 2025-08-13T19:55:45.560512862+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39262: remote error: tls: bad certificate 2025-08-13T19:55:45.577144087+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39276: remote error: tls: bad certificate 2025-08-13T19:55:45.594452551+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39288: remote error: tls: bad certificate 2025-08-13T19:55:45.610481149+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39294: remote error: tls: bad certificate 2025-08-13T19:55:45.625259311+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39306: remote error: tls: bad certificate 2025-08-13T19:55:45.642920215+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39322: remote error: tls: bad certificate 
2025-08-13T19:55:45.660028343+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39324: remote error: tls: bad certificate 2025-08-13T19:55:45.676483103+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39334: remote error: tls: bad certificate 2025-08-13T19:55:45.692538492+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39344: remote error: tls: bad certificate 2025-08-13T19:55:45.711744340+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39356: remote error: tls: bad certificate 2025-08-13T19:55:45.728074646+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39366: remote error: tls: bad certificate 2025-08-13T19:55:45.743032844+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39378: remote error: tls: bad certificate 2025-08-13T19:55:45.759152424+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39384: remote error: tls: bad certificate 2025-08-13T19:55:45.775766568+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39396: remote error: tls: bad certificate 2025-08-13T19:55:45.796582783+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39406: remote error: tls: bad certificate 2025-08-13T19:55:45.818418716+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39412: remote error: tls: bad certificate 2025-08-13T19:55:45.836564444+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39426: remote error: tls: bad certificate 2025-08-13T19:55:45.854195358+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39438: remote error: tls: bad certificate 2025-08-13T19:55:45.879333456+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39454: remote error: tls: bad certificate 2025-08-13T19:55:45.899477641+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39468: remote error: tls: bad certificate 2025-08-13T19:55:45.918489234+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39484: remote error: tls: bad certificate 2025-08-13T19:55:45.945675230+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39500: remote error: tls: bad certificate 2025-08-13T19:55:45.968765109+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39504: remote error: tls: bad certificate 2025-08-13T19:55:45.987233797+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39510: remote error: tls: bad certificate 2025-08-13T19:55:46.008955997+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39514: remote error: tls: bad certificate 2025-08-13T19:55:46.027873827+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39530: remote error: tls: bad certificate 2025-08-13T19:55:46.052034017+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39546: remote error: tls: bad certificate 2025-08-13T19:55:46.077634698+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39554: remote error: tls: bad certificate 2025-08-13T19:55:46.099153273+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39560: remote error: tls: bad certificate 2025-08-13T19:55:46.116990972+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39572: remote error: tls: bad certificate 
2025-08-13T19:55:46.135143360+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39576: remote error: tls: bad certificate 2025-08-13T19:55:46.154638557+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39582: remote error: tls: bad certificate 2025-08-13T19:55:46.175216695+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39588: remote error: tls: bad certificate 2025-08-13T19:55:46.204208832+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39604: remote error: tls: bad certificate 2025-08-13T19:55:46.224925874+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39616: remote error: tls: bad certificate 2025-08-13T19:55:46.246351985+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39620: remote error: tls: bad certificate 2025-08-13T19:55:46.270031801+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39624: remote error: tls: bad certificate 2025-08-13T19:55:46.291949157+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39640: remote error: tls: bad certificate 2025-08-13T19:55:46.321080139+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39650: remote error: tls: bad certificate 2025-08-13T19:55:46.339222077+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39664: remote error: tls: bad certificate 2025-08-13T19:55:46.355744949+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39666: remote error: tls: bad certificate 2025-08-13T19:55:46.374868885+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39672: remote error: tls: bad certificate 2025-08-13T19:55:46.398128389+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39678: remote error: tls: bad certificate 2025-08-13T19:55:46.420764495+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39690: remote error: tls: bad certificate 2025-08-13T19:55:46.439880921+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39700: remote error: tls: bad certificate 2025-08-13T19:55:46.460968123+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39712: remote error: tls: bad certificate 2025-08-13T19:55:46.491667340+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39716: remote error: tls: bad certificate 2025-08-13T19:55:46.511737343+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39726: remote error: tls: bad certificate 2025-08-13T19:55:46.529936343+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39742: remote error: tls: bad certificate 2025-08-13T19:55:52.824556573+00:00 stderr F 2025/08/13 19:55:52 http: TLS handshake error from 127.0.0.1:44446: remote error: tls: bad certificate 2025-08-13T19:55:53.780167759+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44456: remote error: tls: bad certificate 2025-08-13T19:55:53.807762677+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44472: remote error: tls: bad certificate 2025-08-13T19:55:53.845223767+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44484: remote error: tls: bad certificate 2025-08-13T19:55:53.883531911+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44494: remote error: tls: bad certificate 
2025-08-13T19:55:53.910349246+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44510: remote error: tls: bad certificate 2025-08-13T19:55:55.224732629+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44516: remote error: tls: bad certificate 2025-08-13T19:55:55.241277971+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44532: remote error: tls: bad certificate 2025-08-13T19:55:55.261690894+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44538: remote error: tls: bad certificate 2025-08-13T19:55:55.280748038+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44550: remote error: tls: bad certificate 2025-08-13T19:55:55.299029930+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44552: remote error: tls: bad certificate 2025-08-13T19:55:55.316595082+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44564: remote error: tls: bad certificate 2025-08-13T19:55:55.331861858+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44578: remote error: tls: bad certificate 2025-08-13T19:55:55.358071616+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44584: remote error: tls: bad certificate 2025-08-13T19:55:55.375025030+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44586: remote error: tls: bad certificate 2025-08-13T19:55:55.395235878+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44594: remote error: tls: bad certificate 2025-08-13T19:55:55.419379097+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44596: remote error: tls: bad certificate 2025-08-13T19:55:55.439173882+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44604: remote error: tls: bad certificate 2025-08-13T19:55:55.455989842+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44608: remote error: tls: bad certificate 2025-08-13T19:55:55.469614712+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44616: remote error: tls: bad certificate 2025-08-13T19:55:55.496532970+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate 2025-08-13T19:55:55.520431953+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44642: remote error: tls: bad certificate 2025-08-13T19:55:55.536882282+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44650: remote error: tls: bad certificate 2025-08-13T19:55:55.560878418+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44660: remote error: tls: bad certificate 2025-08-13T19:55:55.575270509+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44662: remote error: tls: bad certificate 2025-08-13T19:55:55.591161202+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44672: remote error: tls: bad certificate 2025-08-13T19:55:55.612307966+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44682: remote error: tls: bad certificate 2025-08-13T19:55:55.631002370+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44692: remote error: tls: bad certificate 2025-08-13T19:55:55.648979183+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44696: remote error: tls: bad certificate 
2025-08-13T19:55:55.666100582+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44700: remote error: tls: bad certificate 2025-08-13T19:55:55.682322165+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44706: remote error: tls: bad certificate 2025-08-13T19:55:55.699884547+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44718: remote error: tls: bad certificate 2025-08-13T19:55:55.715290757+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate 2025-08-13T19:55:55.733637551+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44732: remote error: tls: bad certificate 2025-08-13T19:55:55.750501372+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44746: remote error: tls: bad certificate 2025-08-13T19:55:55.764348138+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44760: remote error: tls: bad certificate 2025-08-13T19:55:55.778900093+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44772: remote error: tls: bad certificate 2025-08-13T19:55:55.795024764+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44784: remote error: tls: bad certificate 2025-08-13T19:55:55.812279856+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44796: remote error: tls: bad certificate 2025-08-13T19:55:55.832994698+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44810: remote error: tls: bad certificate 2025-08-13T19:55:55.849993373+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44818: remote error: tls: bad certificate 2025-08-13T19:55:55.866183645+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate 2025-08-13T19:55:55.881169033+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44834: remote error: tls: bad certificate 2025-08-13T19:55:55.900964599+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44840: remote error: tls: bad certificate 2025-08-13T19:55:55.920877277+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44846: remote error: tls: bad certificate 2025-08-13T19:55:55.935490895+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44858: remote error: tls: bad certificate 2025-08-13T19:55:55.950882664+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44864: remote error: tls: bad certificate 2025-08-13T19:55:55.966917262+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44876: remote error: tls: bad certificate 2025-08-13T19:55:55.985651847+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44880: remote error: tls: bad certificate 2025-08-13T19:55:56.000949174+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44884: remote error: tls: bad certificate 2025-08-13T19:55:56.016273631+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44894: remote error: tls: bad certificate 2025-08-13T19:55:56.033253086+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44900: remote error: tls: bad certificate 2025-08-13T19:55:56.052562828+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44906: remote error: tls: bad certificate 
2025-08-13T19:55:56.066392292+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44908: remote error: tls: bad certificate 2025-08-13T19:55:56.083670006+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44916: remote error: tls: bad certificate 2025-08-13T19:55:56.110284246+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44932: remote error: tls: bad certificate 2025-08-13T19:55:56.134524528+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44940: remote error: tls: bad certificate 2025-08-13T19:55:56.156586868+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44944: remote error: tls: bad certificate 2025-08-13T19:55:56.182486117+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44956: remote error: tls: bad certificate 2025-08-13T19:55:56.203323543+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44958: remote error: tls: bad certificate 2025-08-13T19:55:56.225880157+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44962: remote error: tls: bad certificate 2025-08-13T19:55:56.242304366+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44974: remote error: tls: bad certificate 2025-08-13T19:55:56.262851962+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44988: remote error: tls: bad certificate 2025-08-13T19:55:56.283443650+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45000: remote error: tls: bad certificate 2025-08-13T19:55:56.302729671+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45016: remote error: tls: bad certificate 2025-08-13T19:55:56.320443107+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45030: remote error: tls: bad certificate 2025-08-13T19:55:56.338330158+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45038: remote error: tls: bad certificate 2025-08-13T19:55:56.353768378+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45040: remote error: tls: bad certificate 2025-08-13T19:55:56.367973094+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45052: remote error: tls: bad certificate 2025-08-13T19:55:56.384070984+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45058: remote error: tls: bad certificate 2025-08-13T19:55:56.406155034+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45060: remote error: tls: bad certificate 2025-08-13T19:55:56.423310914+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45064: remote error: tls: bad certificate 2025-08-13T19:55:56.439226739+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45074: remote error: tls: bad certificate 2025-08-13T19:56:04.212256863+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34962: remote error: tls: bad certificate 2025-08-13T19:56:04.232744598+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34966: remote error: tls: bad certificate 2025-08-13T19:56:04.251181244+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34972: remote error: tls: bad certificate 2025-08-13T19:56:04.274170991+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34988: remote error: tls: bad certificate 
2025-08-13T19:56:04.295367696+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:35004: remote error: tls: bad certificate 2025-08-13T19:56:05.226453313+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35006: remote error: tls: bad certificate 2025-08-13T19:56:05.244457487+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35022: remote error: tls: bad certificate 2025-08-13T19:56:05.262510113+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35032: remote error: tls: bad certificate 2025-08-13T19:56:05.279905070+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35040: remote error: tls: bad certificate 2025-08-13T19:56:05.297311857+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35046: remote error: tls: bad certificate 2025-08-13T19:56:05.318033838+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35062: remote error: tls: bad certificate 2025-08-13T19:56:05.335469256+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35064: remote error: tls: bad certificate 2025-08-13T19:56:05.353410969+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35080: remote error: tls: bad certificate 2025-08-13T19:56:05.371305510+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35094: remote error: tls: bad certificate 2025-08-13T19:56:05.384066564+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35110: remote error: tls: bad certificate 2025-08-13T19:56:05.401367898+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35114: remote error: tls: bad certificate 2025-08-13T19:56:05.417630852+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35120: remote error: tls: bad certificate 2025-08-13T19:56:05.433924208+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35134: remote error: tls: bad certificate 2025-08-13T19:56:05.458980173+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35144: remote error: tls: bad certificate 2025-08-13T19:56:05.477643286+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35154: remote error: tls: bad certificate 2025-08-13T19:56:05.493881660+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35156: remote error: tls: bad certificate 2025-08-13T19:56:05.509205347+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35162: remote error: tls: bad certificate 2025-08-13T19:56:05.526420229+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35176: remote error: tls: bad certificate 2025-08-13T19:56:05.549349164+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35184: remote error: tls: bad certificate 2025-08-13T19:56:05.563699803+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35200: remote error: tls: bad certificate 2025-08-13T19:56:05.579660349+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35212: remote error: tls: bad certificate 2025-08-13T19:56:05.595921573+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35226: remote error: tls: bad certificate 2025-08-13T19:56:05.611611262+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35240: remote error: tls: bad certificate 
2025-08-13T19:56:05.630486300+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35250: remote error: tls: bad certificate 2025-08-13T19:56:05.646580120+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35262: remote error: tls: bad certificate 2025-08-13T19:56:05.662331250+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35278: remote error: tls: bad certificate 2025-08-13T19:56:05.688963440+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35286: remote error: tls: bad certificate 2025-08-13T19:56:05.708348024+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35290: remote error: tls: bad certificate 2025-08-13T19:56:05.728864560+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35302: remote error: tls: bad certificate 2025-08-13T19:56:05.748190062+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35306: remote error: tls: bad certificate 2025-08-13T19:56:05.767613786+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35320: remote error: tls: bad certificate 2025-08-13T19:56:05.789348337+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35328: remote error: tls: bad certificate 2025-08-13T19:56:05.811254952+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35330: remote error: tls: bad certificate 2025-08-13T19:56:05.833526528+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35336: remote error: tls: bad certificate 2025-08-13T19:56:05.851578174+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35344: remote error: tls: bad certificate 2025-08-13T19:56:05.872663876+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35346: remote error: tls: bad certificate 2025-08-13T19:56:05.899258815+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35350: remote error: tls: bad certificate 2025-08-13T19:56:05.917351002+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35356: remote error: tls: bad certificate 2025-08-13T19:56:05.940849733+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35362: remote error: tls: bad certificate 2025-08-13T19:56:05.961742819+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35378: remote error: tls: bad certificate 2025-08-13T19:56:05.982911784+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35384: remote error: tls: bad certificate 2025-08-13T19:56:06.006127677+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35388: remote error: tls: bad certificate 2025-08-13T19:56:06.022642348+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35392: remote error: tls: bad certificate 2025-08-13T19:56:06.046241482+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35408: remote error: tls: bad certificate 2025-08-13T19:56:06.073366907+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35410: remote error: tls: bad certificate 2025-08-13T19:56:06.094022967+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35422: remote error: tls: bad certificate 2025-08-13T19:56:06.112557836+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35426: remote error: tls: bad certificate 
2025-08-13T19:56:06.133374070+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35432: remote error: tls: bad certificate 2025-08-13T19:56:06.157912101+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35440: remote error: tls: bad certificate 2025-08-13T19:56:06.175482033+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35450: remote error: tls: bad certificate 2025-08-13T19:56:06.197201003+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35452: remote error: tls: bad certificate 2025-08-13T19:56:06.217123102+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35460: remote error: tls: bad certificate 2025-08-13T19:56:06.237505664+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35476: remote error: tls: bad certificate 2025-08-13T19:56:06.255534069+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35484: remote error: tls: bad certificate 2025-08-13T19:56:06.275069526+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35494: remote error: tls: bad certificate 2025-08-13T19:56:06.292607227+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35506: remote error: tls: bad certificate 2025-08-13T19:56:06.312165656+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35522: remote error: tls: bad certificate 2025-08-13T19:56:06.329967944+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35528: remote error: tls: bad certificate 2025-08-13T19:56:06.351057836+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35542: remote error: tls: bad certificate 2025-08-13T19:56:06.367303130+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35558: remote error: tls: bad certificate 2025-08-13T19:56:06.382659809+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35562: remote error: tls: bad certificate 2025-08-13T19:56:06.400181979+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35568: remote error: tls: bad certificate 2025-08-13T19:56:06.416965998+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35578: remote error: tls: bad certificate 2025-08-13T19:56:06.436212358+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35590: remote error: tls: bad certificate 2025-08-13T19:56:06.459046810+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35604: remote error: tls: bad certificate 2025-08-13T19:56:06.478109724+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35614: remote error: tls: bad certificate 2025-08-13T19:56:06.497619531+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35616: remote error: tls: bad certificate 2025-08-13T19:56:14.523282831+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48366: remote error: tls: bad certificate 2025-08-13T19:56:14.543111027+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48368: remote error: tls: bad certificate 2025-08-13T19:56:14.561732609+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48378: remote error: tls: bad certificate 2025-08-13T19:56:14.586077774+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48386: remote error: tls: bad certificate 
2025-08-13T19:56:14.606737354+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48402: remote error: tls: bad certificate 2025-08-13T19:56:15.225634456+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48404: remote error: tls: bad certificate 2025-08-13T19:56:15.241406516+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48406: remote error: tls: bad certificate 2025-08-13T19:56:15.257578618+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48422: remote error: tls: bad certificate 2025-08-13T19:56:15.274052669+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48428: remote error: tls: bad certificate 2025-08-13T19:56:15.291650561+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48444: remote error: tls: bad certificate 2025-08-13T19:56:15.309985504+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48460: remote error: tls: bad certificate 2025-08-13T19:56:15.332499957+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48464: remote error: tls: bad certificate 2025-08-13T19:56:15.352566890+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48472: remote error: tls: bad certificate 2025-08-13T19:56:15.376644158+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48488: remote error: tls: bad certificate 2025-08-13T19:56:15.395280090+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48500: remote error: tls: bad certificate 2025-08-13T19:56:15.412434010+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48502: remote error: tls: bad certificate 2025-08-13T19:56:15.431586287+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48516: remote error: tls: bad certificate 2025-08-13T19:56:15.447224413+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48528: remote error: tls: bad certificate 2025-08-13T19:56:15.462534960+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48536: remote error: tls: bad certificate 2025-08-13T19:56:15.487488243+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48544: remote error: tls: bad certificate 2025-08-13T19:56:15.504228111+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48548: remote error: tls: bad certificate 2025-08-13T19:56:15.520344662+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48558: remote error: tls: bad certificate 2025-08-13T19:56:15.539117308+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48572: remote error: tls: bad certificate 2025-08-13T19:56:15.554470726+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48588: remote error: tls: bad certificate 2025-08-13T19:56:15.570651978+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48602: remote error: tls: bad certificate 2025-08-13T19:56:15.586091099+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48604: remote error: tls: bad certificate 2025-08-13T19:56:15.602721954+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48618: remote error: tls: bad certificate 2025-08-13T19:56:15.618959948+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48622: remote error: tls: bad certificate 
2025-08-13T19:56:15.637066125+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48630: remote error: tls: bad certificate 2025-08-13T19:56:15.652735732+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48642: remote error: tls: bad certificate 2025-08-13T19:56:15.668418590+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48656: remote error: tls: bad certificate 2025-08-13T19:56:15.683131270+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48670: remote error: tls: bad certificate 2025-08-13T19:56:15.696912744+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48686: remote error: tls: bad certificate 2025-08-13T19:56:15.720008783+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48692: remote error: tls: bad certificate 2025-08-13T19:56:15.735206527+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48702: remote error: tls: bad certificate 2025-08-13T19:56:15.758105811+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48704: remote error: tls: bad certificate 2025-08-13T19:56:15.777319080+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48708: remote error: tls: bad certificate 2025-08-13T19:56:15.798733631+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48714: remote error: tls: bad certificate 2025-08-13T19:56:15.814436700+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48726: remote error: tls: bad certificate 2025-08-13T19:56:15.834244675+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48740: remote error: tls: bad certificate 2025-08-13T19:56:15.854860184+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48756: remote error: tls: bad certificate 2025-08-13T19:56:15.872314452+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48770: remote error: tls: bad certificate 2025-08-13T19:56:15.891735747+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48786: remote error: tls: bad certificate 2025-08-13T19:56:15.913677114+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48798: remote error: tls: bad certificate 2025-08-13T19:56:15.942193228+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48810: remote error: tls: bad certificate 2025-08-13T19:56:15.964604088+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48826: remote error: tls: bad certificate 2025-08-13T19:56:15.985579917+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48838: remote error: tls: bad certificate 2025-08-13T19:56:16.019097874+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48848: remote error: tls: bad certificate 2025-08-13T19:56:16.045058055+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48850: remote error: tls: bad certificate 2025-08-13T19:56:16.066288041+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48856: remote error: tls: bad certificate 2025-08-13T19:56:16.091576833+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48868: remote error: tls: bad certificate 2025-08-13T19:56:16.106235992+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48882: remote error: tls: bad certificate 
2025-08-13T19:56:16.122164446+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48892: remote error: tls: bad certificate 2025-08-13T19:56:16.146996326+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48906: remote error: tls: bad certificate 2025-08-13T19:56:16.163858827+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48914: remote error: tls: bad certificate 2025-08-13T19:56:16.181500171+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48920: remote error: tls: bad certificate 2025-08-13T19:56:16.198979520+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48928: remote error: tls: bad certificate 2025-08-13T19:56:16.220998039+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48942: remote error: tls: bad certificate 2025-08-13T19:56:16.237085798+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48958: remote error: tls: bad certificate 2025-08-13T19:56:16.269023870+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48972: remote error: tls: bad certificate 2025-08-13T19:56:16.282916207+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48974: remote error: tls: bad certificate 2025-08-13T19:56:16.300881490+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate 2025-08-13T19:56:16.318491013+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48996: remote error: tls: bad certificate 2025-08-13T19:56:16.337081954+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48998: remote error: tls: bad certificate 2025-08-13T19:56:16.361762798+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49010: remote error: tls: bad certificate 2025-08-13T19:56:16.379760432+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49016: remote error: tls: bad certificate 2025-08-13T19:56:16.397045626+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49020: remote error: tls: bad certificate 2025-08-13T19:56:16.414064142+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49030: remote error: tls: bad certificate 2025-08-13T19:56:16.428882755+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49042: remote error: tls: bad certificate 2025-08-13T19:56:16.443707818+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49056: remote error: tls: bad certificate 2025-08-13T19:56:16.459309754+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49058: remote error: tls: bad certificate 2025-08-13T19:56:16.481357933+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49074: remote error: tls: bad certificate 2025-08-13T19:56:16.501159139+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49086: remote error: tls: bad certificate 2025-08-13T19:56:16.518668019+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49088: remote error: tls: bad certificate 2025-08-13T19:56:16.536877229+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49100: remote error: tls: bad certificate 2025-08-13T19:56:16.556201451+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49114: remote error: tls: bad certificate 
2025-08-13T19:56:16.574138933+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate 2025-08-13T19:56:16.592120766+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate 2025-08-13T19:56:16.608378450+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49132: remote error: tls: bad certificate 2025-08-13T19:56:16.627979080+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate 2025-08-13T19:56:16.661932740+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate 2025-08-13T19:56:16.678177733+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49154: remote error: tls: bad certificate 2025-08-13T19:56:16.713048449+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49168: remote error: tls: bad certificate 2025-08-13T19:56:16.731738853+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49182: remote error: tls: bad certificate 2025-08-13T19:56:16.749534101+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49190: remote error: tls: bad certificate 2025-08-13T19:56:16.766628639+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49202: remote error: tls: bad certificate 2025-08-13T19:56:16.785175379+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49210: remote error: tls: bad certificate 2025-08-13T19:56:16.804865541+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49218: remote error: tls: bad certificate 2025-08-13T19:56:16.833077097+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49234: remote error: tls: bad certificate 2025-08-13T19:56:16.852035838+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:56:16.875039805+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49266: remote error: tls: bad certificate 2025-08-13T19:56:16.901080509+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49274: remote error: tls: bad certificate 2025-08-13T19:56:16.917706683+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49290: remote error: tls: bad certificate 2025-08-13T19:56:16.937721095+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:56:16.956477951+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49298: remote error: tls: bad certificate 2025-08-13T19:56:16.976037679+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49314: remote error: tls: bad certificate 2025-08-13T19:56:16.991848391+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49318: remote error: tls: bad certificate 2025-08-13T19:56:17.005632784+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49334: remote error: tls: bad certificate 2025-08-13T19:56:17.030932497+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49342: remote error: tls: bad certificate 2025-08-13T19:56:17.045555164+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 
2025-08-13T19:56:17.061213182+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49370: remote error: tls: bad certificate 2025-08-13T19:56:17.102422098+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49386: remote error: tls: bad certificate 2025-08-13T19:56:17.141107583+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49394: remote error: tls: bad certificate 2025-08-13T19:56:17.178469190+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49408: remote error: tls: bad certificate 2025-08-13T19:56:17.226950744+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:56:17.259250946+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49426: remote error: tls: bad certificate 2025-08-13T19:56:17.300422942+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate 2025-08-13T19:56:17.340849686+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49444: remote error: tls: bad certificate 2025-08-13T19:56:17.379916752+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49448: remote error: tls: bad certificate 2025-08-13T19:56:17.419144322+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49452: remote error: tls: bad certificate 2025-08-13T19:56:17.458899517+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49454: remote error: tls: bad certificate 2025-08-13T19:56:17.502985276+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49466: remote error: tls: bad certificate 2025-08-13T19:56:17.546481698+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49476: remote error: tls: bad certificate 2025-08-13T19:56:17.578102301+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49492: remote error: tls: bad certificate 2025-08-13T19:56:17.622084926+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49496: remote error: tls: bad certificate 2025-08-13T19:56:17.660241796+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49506: remote error: tls: bad certificate 2025-08-13T19:56:17.700904407+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49512: remote error: tls: bad certificate 2025-08-13T19:56:17.741491176+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49516: remote error: tls: bad certificate 2025-08-13T19:56:17.780104989+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:56:17.817659271+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49530: remote error: tls: bad certificate 2025-08-13T19:56:17.858408045+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49540: remote error: tls: bad certificate 2025-08-13T19:56:17.900224919+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49550: remote error: tls: bad certificate 2025-08-13T19:56:17.938504802+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49562: remote error: tls: bad certificate 2025-08-13T19:56:17.979759980+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49568: remote error: tls: bad certificate 
2025-08-13T19:56:18.024401465+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49576: remote error: tls: bad certificate 2025-08-13T19:56:18.060726722+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49580: remote error: tls: bad certificate 2025-08-13T19:56:18.099416237+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49584: remote error: tls: bad certificate 2025-08-13T19:56:18.141405796+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49592: remote error: tls: bad certificate 2025-08-13T19:56:18.182291913+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49604: remote error: tls: bad certificate 2025-08-13T19:56:18.223781818+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49606: remote error: tls: bad certificate 2025-08-13T19:56:18.260134486+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49612: remote error: tls: bad certificate 2025-08-13T19:56:18.313018516+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49622: remote error: tls: bad certificate 2025-08-13T19:56:18.352276627+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49630: remote error: tls: bad certificate 2025-08-13T19:56:18.385901677+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate 2025-08-13T19:56:18.421409611+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:56:18.460554168+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49662: remote error: tls: bad certificate 2025-08-13T19:56:18.497052690+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49672: remote error: tls: bad certificate 2025-08-13T19:56:18.540230523+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49680: remote error: tls: bad certificate 2025-08-13T19:56:18.580440782+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49688: remote error: tls: bad certificate 2025-08-13T19:56:18.621526565+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49698: remote error: tls: bad certificate 2025-08-13T19:56:18.660588940+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49700: remote error: tls: bad certificate 2025-08-13T19:56:18.699347827+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49702: remote error: tls: bad certificate 2025-08-13T19:56:18.739736240+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60210: remote error: tls: bad certificate 2025-08-13T19:56:18.778538288+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60212: remote error: tls: bad certificate 2025-08-13T19:56:18.822894145+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60224: remote error: tls: bad certificate 2025-08-13T19:56:18.867266522+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60238: remote error: tls: bad certificate 2025-08-13T19:56:18.901464949+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60254: remote error: tls: bad certificate 2025-08-13T19:56:18.941194953+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60260: remote error: tls: bad certificate 
2025-08-13T19:56:18.978890780+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60274: remote error: tls: bad certificate 2025-08-13T19:56:19.023520184+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60282: remote error: tls: bad certificate 2025-08-13T19:56:19.059161562+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60284: remote error: tls: bad certificate 2025-08-13T19:56:19.098975159+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60292: remote error: tls: bad certificate 2025-08-13T19:56:19.141963816+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60304: remote error: tls: bad certificate 2025-08-13T19:56:19.178275523+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60308: remote error: tls: bad certificate 2025-08-13T19:56:19.217360059+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60316: remote error: tls: bad certificate 2025-08-13T19:56:19.260116940+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60320: remote error: tls: bad certificate 2025-08-13T19:56:19.298187827+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60324: remote error: tls: bad certificate 2025-08-13T19:56:19.340386132+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60336: remote error: tls: bad certificate 2025-08-13T19:56:19.382478584+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60348: remote error: tls: bad certificate 2025-08-13T19:56:19.423060843+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60360: remote error: tls: bad certificate 2025-08-13T19:56:19.459342529+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60372: remote error: tls: bad certificate 2025-08-13T19:56:19.502522042+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate 2025-08-13T19:56:19.541055782+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60384: remote error: tls: bad certificate 2025-08-13T19:56:19.580605031+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60386: remote error: tls: bad certificate 2025-08-13T19:56:19.691913910+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60390: remote error: tls: bad certificate 2025-08-13T19:56:19.723640376+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60404: remote error: tls: bad certificate 2025-08-13T19:56:19.748067543+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60414: remote error: tls: bad certificate 2025-08-13T19:56:19.767062166+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60416: remote error: tls: bad certificate 2025-08-13T19:56:19.830188728+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60432: remote error: tls: bad certificate 2025-08-13T19:56:19.847652487+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60442: remote error: tls: bad certificate 2025-08-13T19:56:19.873422673+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60450: remote error: tls: bad certificate 2025-08-13T19:56:19.903512852+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60452: remote error: tls: bad certificate 
2025-08-13T19:56:19.940493338+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60458: remote error: tls: bad certificate 2025-08-13T19:56:19.979206143+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60474: remote error: tls: bad certificate 2025-08-13T19:56:20.019782402+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60480: remote error: tls: bad certificate 2025-08-13T19:56:20.060214907+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate 2025-08-13T19:56:20.097512922+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60502: remote error: tls: bad certificate 2025-08-13T19:56:20.138601255+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60514: remote error: tls: bad certificate 2025-08-13T19:56:20.179310167+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60528: remote error: tls: bad certificate 2025-08-13T19:56:20.224400055+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60544: remote error: tls: bad certificate 2025-08-13T19:56:20.263304946+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60556: remote error: tls: bad certificate 2025-08-13T19:56:20.302375711+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60560: remote error: tls: bad certificate 2025-08-13T19:56:20.352099811+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60574: remote error: tls: bad certificate 2025-08-13T19:56:20.383428046+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60580: remote error: tls: bad certificate 2025-08-13T19:56:20.419989800+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60588: remote error: tls: bad certificate 2025-08-13T19:56:20.493288673+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60600: remote error: tls: bad certificate 2025-08-13T19:56:20.516630530+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60604: remote error: tls: bad certificate 2025-08-13T19:56:20.543469776+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60608: remote error: tls: bad certificate 2025-08-13T19:56:20.589926052+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60624: remote error: tls: bad certificate 2025-08-13T19:56:20.621435993+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60640: remote error: tls: bad certificate 2025-08-13T19:56:20.658229593+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60646: remote error: tls: bad certificate 2025-08-13T19:56:20.697298119+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60660: remote error: tls: bad certificate 2025-08-13T19:56:20.741112860+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60664: remote error: tls: bad certificate 2025-08-13T19:56:20.782320987+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60670: remote error: tls: bad certificate 2025-08-13T19:56:20.819863509+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60672: remote error: tls: bad certificate 2025-08-13T19:56:20.860204190+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60674: remote error: tls: bad certificate 
2025-08-13T19:56:20.897859876+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60680: remote error: tls: bad certificate 2025-08-13T19:56:20.938916668+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60682: remote error: tls: bad certificate 2025-08-13T19:56:20.981937877+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60698: remote error: tls: bad certificate 2025-08-13T19:56:21.020673772+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60714: remote error: tls: bad certificate 2025-08-13T19:56:21.059742628+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate 2025-08-13T19:56:21.103108316+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60722: remote error: tls: bad certificate 2025-08-13T19:56:21.141400180+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60726: remote error: tls: bad certificate 2025-08-13T19:56:21.181011431+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60736: remote error: tls: bad certificate 2025-08-13T19:56:21.221674872+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60738: remote error: tls: bad certificate 2025-08-13T19:56:21.259679737+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60746: remote error: tls: bad certificate 2025-08-13T19:56:24.713609512+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60752: remote error: tls: bad certificate 2025-08-13T19:56:24.736715922+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60764: remote error: tls: bad certificate 2025-08-13T19:56:24.756431835+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60766: remote error: tls: bad certificate 2025-08-13T19:56:24.776895689+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60772: remote error: tls: bad certificate 2025-08-13T19:56:24.795441349+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60782: remote error: tls: bad certificate 2025-08-13T19:56:25.228152195+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60794: remote error: tls: bad certificate 2025-08-13T19:56:25.245030607+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60806: remote error: tls: bad certificate 2025-08-13T19:56:25.266129119+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60808: remote error: tls: bad certificate 2025-08-13T19:56:25.283865286+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60824: remote error: tls: bad certificate 2025-08-13T19:56:25.301257042+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60836: remote error: tls: bad certificate 2025-08-13T19:56:25.315609002+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60844: remote error: tls: bad certificate 2025-08-13T19:56:25.333272366+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60858: remote error: tls: bad certificate 2025-08-13T19:56:25.348940394+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60864: remote error: tls: bad certificate 2025-08-13T19:56:25.370581172+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60872: remote error: tls: bad certificate 
2025-08-13T19:56:25.394051512+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60874: remote error: tls: bad certificate 2025-08-13T19:56:25.405926421+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60888: remote error: tls: bad certificate 2025-08-13T19:56:25.422310069+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60902: remote error: tls: bad certificate 2025-08-13T19:56:25.438460950+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60912: remote error: tls: bad certificate 2025-08-13T19:56:25.454018324+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60924: remote error: tls: bad certificate 2025-08-13T19:56:25.471947896+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60938: remote error: tls: bad certificate 2025-08-13T19:56:25.489410415+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60944: remote error: tls: bad certificate 2025-08-13T19:56:25.506624236+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60952: remote error: tls: bad certificate 2025-08-13T19:56:25.531308851+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60958: remote error: tls: bad certificate 2025-08-13T19:56:25.546933968+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60974: remote error: tls: bad certificate 2025-08-13T19:56:25.561763591+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60984: remote error: tls: bad certificate 2025-08-13T19:56:25.578331604+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60988: remote error: tls: bad certificate 2025-08-13T19:56:25.598173051+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60994: remote error: tls: bad certificate 2025-08-13T19:56:25.620897300+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32778: remote error: tls: bad certificate 2025-08-13T19:56:25.638916323+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32794: remote error: tls: bad certificate 2025-08-13T19:56:25.653611893+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32800: remote error: tls: bad certificate 2025-08-13T19:56:25.668929760+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32808: remote error: tls: bad certificate 2025-08-13T19:56:25.688427757+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32822: remote error: tls: bad certificate 2025-08-13T19:56:25.700366698+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32838: remote error: tls: bad certificate 2025-08-13T19:56:25.713189444+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32840: remote error: tls: bad certificate 2025-08-13T19:56:25.730241761+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32854: remote error: tls: bad certificate 2025-08-13T19:56:25.750001565+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32870: remote error: tls: bad certificate 2025-08-13T19:56:25.766507936+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32878: remote error: tls: bad certificate 2025-08-13T19:56:25.781043662+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32888: remote error: tls: bad certificate 
2025-08-13T19:56:25.799626672+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32902: remote error: tls: bad certificate 2025-08-13T19:56:25.816681149+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32908: remote error: tls: bad certificate 2025-08-13T19:56:25.836270739+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32922: remote error: tls: bad certificate 2025-08-13T19:56:25.852295126+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32938: remote error: tls: bad certificate 2025-08-13T19:56:25.874538131+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32940: remote error: tls: bad certificate 2025-08-13T19:56:25.899968987+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32944: remote error: tls: bad certificate 2025-08-13T19:56:25.918739933+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32956: remote error: tls: bad certificate 2025-08-13T19:56:25.936371367+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32958: remote error: tls: bad certificate 2025-08-13T19:56:25.952628741+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32960: remote error: tls: bad certificate 2025-08-13T19:56:25.971218142+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32964: remote error: tls: bad certificate 2025-08-13T19:56:25.996639018+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32980: remote error: tls: bad certificate 2025-08-13T19:56:26.015352622+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:32982: remote error: tls: bad certificate 2025-08-13T19:56:26.032520442+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:32986: remote error: tls: bad certificate 2025-08-13T19:56:26.048177349+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33000: remote error: tls: bad certificate 2025-08-13T19:56:26.065905696+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33016: remote error: tls: bad certificate 2025-08-13T19:56:26.100657058+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33032: remote error: tls: bad certificate 2025-08-13T19:56:26.161130055+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33036: remote error: tls: bad certificate 2025-08-13T19:56:26.181934369+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33038: remote error: tls: bad certificate 2025-08-13T19:56:26.207480808+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33044: remote error: tls: bad certificate 2025-08-13T19:56:26.228406646+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33052: remote error: tls: bad certificate 2025-08-13T19:56:26.245143444+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33066: remote error: tls: bad certificate 2025-08-13T19:56:26.274306257+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33072: remote error: tls: bad certificate 2025-08-13T19:56:26.294680768+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33080: remote error: tls: bad certificate 2025-08-13T19:56:26.309468851+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33088: remote error: tls: bad certificate 
2025-08-13T19:56:26.324231492+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33094: remote error: tls: bad certificate 2025-08-13T19:56:26.350863583+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33100: remote error: tls: bad certificate 2025-08-13T19:56:26.373687874+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33110: remote error: tls: bad certificate 2025-08-13T19:56:26.394339524+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33126: remote error: tls: bad certificate 2025-08-13T19:56:26.422339264+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33138: remote error: tls: bad certificate 2025-08-13T19:56:26.447093040+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33148: remote error: tls: bad certificate 2025-08-13T19:56:26.463037236+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33162: remote error: tls: bad certificate 2025-08-13T19:56:26.480013631+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33168: remote error: tls: bad certificate 2025-08-13T19:56:26.499265120+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33176: remote error: tls: bad certificate 2025-08-13T19:56:26.516897704+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33190: remote error: tls: bad certificate 2025-08-13T19:56:28.230030422+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33202: remote error: tls: bad certificate 2025-08-13T19:56:28.256028364+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33210: remote error: tls: bad certificate 2025-08-13T19:56:28.276163999+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33222: remote error: tls: bad certificate 2025-08-13T19:56:28.372292724+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33232: remote error: tls: bad certificate 2025-08-13T19:56:28.434955944+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33238: remote error: tls: bad certificate 2025-08-13T19:56:28.462683985+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33246: remote error: tls: bad certificate 2025-08-13T19:56:28.520193828+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33250: remote error: tls: bad certificate 2025-08-13T19:56:28.534392783+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33254: remote error: tls: bad certificate 2025-08-13T19:56:28.558302796+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33268: remote error: tls: bad certificate 2025-08-13T19:56:28.591037830+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33278: remote error: tls: bad certificate 2025-08-13T19:56:28.609239080+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33286: remote error: tls: bad certificate 2025-08-13T19:56:28.632140994+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33290: remote error: tls: bad certificate 2025-08-13T19:56:28.652923158+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33292: remote error: tls: bad certificate 2025-08-13T19:56:28.672149507+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33298: remote error: tls: bad certificate 
2025-08-13T19:56:28.691951712+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33308: remote error: tls: bad certificate 2025-08-13T19:56:28.711067508+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33312: remote error: tls: bad certificate 2025-08-13T19:56:28.725863900+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33326: remote error: tls: bad certificate 2025-08-13T19:56:28.743639948+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40208: remote error: tls: bad certificate 2025-08-13T19:56:28.765675137+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40224: remote error: tls: bad certificate 2025-08-13T19:56:28.788074757+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40232: remote error: tls: bad certificate 2025-08-13T19:56:28.806516774+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40234: remote error: tls: bad certificate 2025-08-13T19:56:28.829392007+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40242: remote error: tls: bad certificate 2025-08-13T19:56:28.851298442+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40248: remote error: tls: bad certificate 2025-08-13T19:56:28.869924884+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40254: remote error: tls: bad certificate 2025-08-13T19:56:28.914021143+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40256: remote error: tls: bad certificate 2025-08-13T19:56:28.934675163+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40260: remote error: tls: bad certificate 2025-08-13T19:56:28.953654265+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40276: remote error: tls: bad certificate 2025-08-13T19:56:28.976501517+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40284: remote error: tls: bad certificate 2025-08-13T19:56:29.022968854+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40294: remote error: tls: bad certificate 2025-08-13T19:56:29.041279527+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40298: remote error: tls: bad certificate 2025-08-13T19:56:29.060927628+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40304: remote error: tls: bad certificate 2025-08-13T19:56:29.078177891+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40308: remote error: tls: bad certificate 2025-08-13T19:56:29.111441111+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40310: remote error: tls: bad certificate 2025-08-13T19:56:29.133208612+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40312: remote error: tls: bad certificate 2025-08-13T19:56:29.149290632+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40328: remote error: tls: bad certificate 2025-08-13T19:56:29.167316776+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40334: remote error: tls: bad certificate 2025-08-13T19:56:29.180703269+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40336: remote error: tls: bad certificate 2025-08-13T19:56:29.198915218+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40346: remote error: tls: bad certificate 
2025-08-13T19:56:29.224107937+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40360: remote error: tls: bad certificate 2025-08-13T19:56:29.237585812+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40376: remote error: tls: bad certificate 2025-08-13T19:56:29.252533569+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40392: remote error: tls: bad certificate 2025-08-13T19:56:29.270678007+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40406: remote error: tls: bad certificate 2025-08-13T19:56:29.286617012+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40416: remote error: tls: bad certificate 2025-08-13T19:56:29.312717657+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40422: remote error: tls: bad certificate 2025-08-13T19:56:29.333581853+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40428: remote error: tls: bad certificate 2025-08-13T19:56:29.356018324+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40438: remote error: tls: bad certificate 2025-08-13T19:56:29.383142538+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40444: remote error: tls: bad certificate 2025-08-13T19:56:29.444007006+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40450: remote error: tls: bad certificate 2025-08-13T19:56:29.594660068+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40464: remote error: tls: bad certificate 2025-08-13T19:56:29.615071811+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate 2025-08-13T19:56:29.638519201+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40484: remote error: tls: bad certificate 2025-08-13T19:56:29.694117008+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40494: remote error: tls: bad certificate 2025-08-13T19:56:29.717957479+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40496: remote error: tls: bad certificate 2025-08-13T19:56:29.734070789+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40510: remote error: tls: bad certificate 2025-08-13T19:56:29.752045142+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40512: remote error: tls: bad certificate 2025-08-13T19:56:29.771756565+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40520: remote error: tls: bad certificate 2025-08-13T19:56:29.792145097+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40524: remote error: tls: bad certificate 2025-08-13T19:56:29.809265456+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40532: remote error: tls: bad certificate 2025-08-13T19:56:29.828091824+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40546: remote error: tls: bad certificate 2025-08-13T19:56:29.851987096+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40556: remote error: tls: bad certificate 2025-08-13T19:56:29.874285573+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate 2025-08-13T19:56:29.899997457+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40568: remote error: tls: bad certificate 
2025-08-13T19:56:29.921379078+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40574: remote error: tls: bad certificate 2025-08-13T19:56:29.951109226+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate 2025-08-13T19:56:29.969078150+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate 2025-08-13T19:56:29.993981281+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40600: remote error: tls: bad certificate 2025-08-13T19:56:30.017090871+00:00 stderr F 2025/08/13 19:56:30 http: TLS handshake error from 127.0.0.1:40604: remote error: tls: bad certificate 2025-08-13T19:56:35.187288604+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate 2025-08-13T19:56:35.213660707+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate 2025-08-13T19:56:35.228910922+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40634: remote error: tls: bad certificate 2025-08-13T19:56:35.249924482+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40650: remote error: tls: bad certificate 2025-08-13T19:56:35.252557028+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40656: remote error: tls: bad certificate 2025-08-13T19:56:35.277141189+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40666: remote error: tls: bad certificate 2025-08-13T19:56:35.284230962+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40670: remote error: tls: bad certificate 2025-08-13T19:56:35.300761394+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40672: remote error: tls: bad certificate 2025-08-13T19:56:35.303906424+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40684: remote error: tls: bad certificate 2025-08-13T19:56:35.316918835+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40700: remote error: tls: bad certificate 2025-08-13T19:56:35.332963674+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40712: remote error: tls: bad certificate 2025-08-13T19:56:35.348473916+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40728: remote error: tls: bad certificate 2025-08-13T19:56:35.365609396+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40736: remote error: tls: bad certificate 2025-08-13T19:56:35.387059708+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40738: remote error: tls: bad certificate 2025-08-13T19:56:35.404192377+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40752: remote error: tls: bad certificate 2025-08-13T19:56:35.425539397+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40768: remote error: tls: bad certificate 2025-08-13T19:56:35.442012117+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40772: remote error: tls: bad certificate 2025-08-13T19:56:35.461092042+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40780: remote error: tls: bad certificate 2025-08-13T19:56:35.477609224+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate 
2025-08-13T19:56:35.496233846+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40792: remote error: tls: bad certificate 2025-08-13T19:56:35.512696136+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40804: remote error: tls: bad certificate 2025-08-13T19:56:35.531024449+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40812: remote error: tls: bad certificate 2025-08-13T19:56:35.548935241+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40828: remote error: tls: bad certificate 2025-08-13T19:56:35.567623174+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40836: remote error: tls: bad certificate 2025-08-13T19:56:35.584547287+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40846: remote error: tls: bad certificate 2025-08-13T19:56:35.607592976+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40856: remote error: tls: bad certificate 2025-08-13T19:56:35.629469020+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40858: remote error: tls: bad certificate 2025-08-13T19:56:35.645192989+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40862: remote error: tls: bad certificate 2025-08-13T19:56:35.661243847+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40870: remote error: tls: bad certificate 2025-08-13T19:56:35.677320057+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40872: remote error: tls: bad certificate 2025-08-13T19:56:35.701640621+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40884: remote error: tls: bad certificate 2025-08-13T19:56:35.721678683+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40898: remote error: tls: bad certificate 2025-08-13T19:56:35.737307960+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40914: remote error: tls: bad certificate 2025-08-13T19:56:35.754155081+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40922: remote error: tls: bad certificate 2025-08-13T19:56:35.769055146+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40926: remote error: tls: bad certificate 2025-08-13T19:56:35.785245618+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40928: remote error: tls: bad certificate 2025-08-13T19:56:35.800938696+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40942: remote error: tls: bad certificate 2025-08-13T19:56:35.822369228+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40956: remote error: tls: bad certificate 2025-08-13T19:56:35.841470544+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40970: remote error: tls: bad certificate 2025-08-13T19:56:35.860976601+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40978: remote error: tls: bad certificate 2025-08-13T19:56:35.881642571+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40988: remote error: tls: bad certificate 2025-08-13T19:56:35.911084762+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate 2025-08-13T19:56:35.929287982+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41000: remote error: tls: bad certificate 
2025-08-13T19:56:35.944178507+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41006: remote error: tls: bad certificate 2025-08-13T19:56:35.962487619+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41008: remote error: tls: bad certificate 2025-08-13T19:56:35.984109867+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41024: remote error: tls: bad certificate 2025-08-13T19:56:36.002488702+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate 2025-08-13T19:56:36.019123537+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41038: remote error: tls: bad certificate 2025-08-13T19:56:36.039185030+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41054: remote error: tls: bad certificate 2025-08-13T19:56:36.126893464+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41068: remote error: tls: bad certificate 2025-08-13T19:56:36.147953065+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41074: remote error: tls: bad certificate 2025-08-13T19:56:36.188928986+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41076: remote error: tls: bad certificate 2025-08-13T19:56:36.191859689+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41084: remote error: tls: bad certificate 2025-08-13T19:56:36.210958865+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41090: remote error: tls: bad certificate 2025-08-13T19:56:36.230747780+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41096: remote error: tls: bad certificate 2025-08-13T19:56:36.247985622+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41098: remote error: tls: bad certificate 2025-08-13T19:56:36.263572937+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41100: remote error: tls: bad certificate 2025-08-13T19:56:36.287882181+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41116: remote error: tls: bad certificate 2025-08-13T19:56:36.309217850+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41126: remote error: tls: bad certificate 2025-08-13T19:56:36.330068596+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41130: remote error: tls: bad certificate 2025-08-13T19:56:36.353389761+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41142: remote error: tls: bad certificate 2025-08-13T19:56:36.372585129+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41144: remote error: tls: bad certificate 2025-08-13T19:56:36.395342199+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41160: remote error: tls: bad certificate 2025-08-13T19:56:36.412297303+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41172: remote error: tls: bad certificate 2025-08-13T19:56:36.431758728+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41188: remote error: tls: bad certificate 2025-08-13T19:56:36.449463374+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41202: remote error: tls: bad certificate 2025-08-13T19:56:36.466698776+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41214: remote error: tls: bad certificate 
2025-08-13T19:56:36.484006140+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41216: remote error: tls: bad certificate 2025-08-13T19:56:36.503870588+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41230: remote error: tls: bad certificate 2025-08-13T19:56:36.522234852+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41246: remote error: tls: bad certificate 2025-08-13T19:56:36.539405542+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41258: remote error: tls: bad certificate 2025-08-13T19:56:36.555686927+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41262: remote error: tls: bad certificate 2025-08-13T19:56:45.232549103+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55276: remote error: tls: bad certificate 2025-08-13T19:56:45.254060377+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55286: remote error: tls: bad certificate 2025-08-13T19:56:45.274263444+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55292: remote error: tls: bad certificate 2025-08-13T19:56:45.295747548+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55304: remote error: tls: bad certificate 2025-08-13T19:56:45.315181213+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55314: remote error: tls: bad certificate 2025-08-13T19:56:45.333756853+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate 2025-08-13T19:56:45.354737182+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55334: remote error: tls: bad certificate 2025-08-13T19:56:45.373444306+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55342: remote error: tls: bad certificate 2025-08-13T19:56:45.393533520+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55350: remote error: tls: bad certificate 2025-08-13T19:56:45.410423982+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55360: remote error: tls: bad certificate 2025-08-13T19:56:45.427273023+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:56:45.444438213+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55378: remote error: tls: bad certificate 2025-08-13T19:56:45.460449321+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55384: remote error: tls: bad certificate 2025-08-13T19:56:45.481026718+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55398: remote error: tls: bad certificate 2025-08-13T19:56:45.505608970+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55414: remote error: tls: bad certificate 2025-08-13T19:56:45.521532935+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55426: remote error: tls: bad certificate 2025-08-13T19:56:45.530583033+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55428: remote error: tls: bad certificate 2025-08-13T19:56:45.544694146+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 2025-08-13T19:56:45.547977270+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55438: remote error: tls: bad certificate 
2025-08-13T19:56:45.563034420+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55452: remote error: tls: bad certificate 2025-08-13T19:56:45.565378167+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55468: remote error: tls: bad certificate 2025-08-13T19:56:45.579891871+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55484: remote error: tls: bad certificate 2025-08-13T19:56:45.586223532+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:56:45.600715616+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate 2025-08-13T19:56:45.609232269+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55496: remote error: tls: bad certificate 2025-08-13T19:56:45.618993548+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate 2025-08-13T19:56:45.635656164+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55526: remote error: tls: bad certificate 2025-08-13T19:56:45.652487294+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55540: remote error: tls: bad certificate 2025-08-13T19:56:45.675504121+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55542: remote error: tls: bad certificate 2025-08-13T19:56:45.705092476+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate 2025-08-13T19:56:45.721005711+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate 2025-08-13T19:56:45.735256278+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55570: remote error: tls: bad certificate 2025-08-13T19:56:45.750643727+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55582: remote error: tls: bad certificate 2025-08-13T19:56:45.777755061+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55598: remote error: tls: bad certificate 2025-08-13T19:56:45.799503372+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55606: remote error: tls: bad certificate 2025-08-13T19:56:45.816620341+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55614: remote error: tls: bad certificate 2025-08-13T19:56:45.835108589+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55620: remote error: tls: bad certificate 2025-08-13T19:56:45.852698741+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55630: remote error: tls: bad certificate 2025-08-13T19:56:45.874041651+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55632: remote error: tls: bad certificate 2025-08-13T19:56:45.890161121+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate 2025-08-13T19:56:45.907023903+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55644: remote error: tls: bad certificate 2025-08-13T19:56:45.925525041+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55648: remote error: tls: bad certificate 2025-08-13T19:56:45.943903096+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55662: remote error: tls: bad certificate 
2025-08-13T19:56:45.963412623+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate 2025-08-13T19:56:45.980134920+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:56:46.000327957+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:56:46.017528248+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55702: remote error: tls: bad certificate 2025-08-13T19:56:46.032597078+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55712: remote error: tls: bad certificate 2025-08-13T19:56:46.047546835+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55722: remote error: tls: bad certificate 2025-08-13T19:56:46.063708677+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55732: remote error: tls: bad certificate 2025-08-13T19:56:46.081696430+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55738: remote error: tls: bad certificate 2025-08-13T19:56:46.097068889+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55746: remote error: tls: bad certificate 2025-08-13T19:56:46.110881664+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55748: remote error: tls: bad certificate 2025-08-13T19:56:46.126720106+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55762: remote error: tls: bad certificate 2025-08-13T19:56:46.140744616+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55766: remote error: tls: bad certificate 2025-08-13T19:56:46.160978344+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55770: remote error: tls: bad certificate 2025-08-13T19:56:46.180014938+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55772: remote error: tls: bad certificate 2025-08-13T19:56:46.200083631+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55776: remote error: tls: bad certificate 2025-08-13T19:56:46.217562270+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55780: remote error: tls: bad certificate 2025-08-13T19:56:46.235698008+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55782: remote error: tls: bad certificate 2025-08-13T19:56:46.251926321+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:56:46.267677251+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55800: remote error: tls: bad certificate 2025-08-13T19:56:46.283754510+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate 2025-08-13T19:56:46.301716153+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate 2025-08-13T19:56:46.315549568+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55828: remote error: tls: bad certificate 2025-08-13T19:56:46.331707829+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55834: remote error: tls: bad certificate 2025-08-13T19:56:46.348641403+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55838: remote error: tls: bad certificate 
2025-08-13T19:56:46.373020389+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55850: remote error: tls: bad certificate 2025-08-13T19:56:46.389893821+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55852: remote error: tls: bad certificate 2025-08-13T19:56:46.402739798+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55862: remote error: tls: bad certificate 2025-08-13T19:56:46.419895958+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55866: remote error: tls: bad certificate 2025-08-13T19:56:46.436435340+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55876: remote error: tls: bad certificate 2025-08-13T19:56:55.251937089+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57292: remote error: tls: bad certificate 2025-08-13T19:56:55.303880243+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57306: remote error: tls: bad certificate 2025-08-13T19:56:55.334475496+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57318: remote error: tls: bad certificate 2025-08-13T19:56:55.358762930+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57324: remote error: tls: bad certificate 2025-08-13T19:56:55.391517465+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57326: remote error: tls: bad certificate 2025-08-13T19:56:55.414621175+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57332: remote error: tls: bad certificate 2025-08-13T19:56:55.434673147+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate 2025-08-13T19:56:55.459612429+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57346: remote error: tls: bad certificate 2025-08-13T19:56:55.481214456+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57362: remote error: tls: bad certificate 2025-08-13T19:56:55.504114070+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57364: remote error: tls: bad certificate 2025-08-13T19:56:55.519919441+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57378: remote error: tls: bad certificate 2025-08-13T19:56:55.540751916+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57386: remote error: tls: bad certificate 2025-08-13T19:56:55.568641093+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57396: remote error: tls: bad certificate 2025-08-13T19:56:55.587000887+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57400: remote error: tls: bad certificate 2025-08-13T19:56:55.604516937+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57410: remote error: tls: bad certificate 2025-08-13T19:56:55.623090237+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57426: remote error: tls: bad certificate 2025-08-13T19:56:55.645844407+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57442: remote error: tls: bad certificate 2025-08-13T19:56:55.667699131+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57452: remote error: tls: bad certificate 2025-08-13T19:56:55.687297951+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate 
2025-08-13T19:56:55.706473568+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57472: remote error: tls: bad certificate 2025-08-13T19:56:55.724226585+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate 2025-08-13T19:56:55.739952854+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57498: remote error: tls: bad certificate 2025-08-13T19:56:55.754345995+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57502: remote error: tls: bad certificate 2025-08-13T19:56:55.774045348+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57508: remote error: tls: bad certificate 2025-08-13T19:56:55.791300540+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57514: remote error: tls: bad certificate 2025-08-13T19:56:55.809028357+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57518: remote error: tls: bad certificate 2025-08-13T19:56:55.839509397+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57530: remote error: tls: bad certificate 2025-08-13T19:56:55.857904082+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57538: remote error: tls: bad certificate 2025-08-13T19:56:55.864706607+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57546: remote error: tls: bad certificate 2025-08-13T19:56:55.880642142+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57548: remote error: tls: bad certificate 2025-08-13T19:56:55.889942997+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57556: remote error: tls: bad certificate 2025-08-13T19:56:55.912389418+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57558: remote error: tls: bad certificate 2025-08-13T19:56:55.919864572+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57570: remote error: tls: bad certificate 2025-08-13T19:56:55.930057443+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57576: remote error: tls: bad certificate 2025-08-13T19:56:55.946863433+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57590: remote error: tls: bad certificate 2025-08-13T19:56:55.953193543+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57596: remote error: tls: bad certificate 2025-08-13T19:56:55.972275648+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57612: remote error: tls: bad certificate 2025-08-13T19:56:55.979193276+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57628: remote error: tls: bad certificate 2025-08-13T19:56:55.999218188+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57630: remote error: tls: bad certificate 2025-08-13T19:56:56.019032933+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57642: remote error: tls: bad certificate 2025-08-13T19:56:56.038596002+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57646: remote error: tls: bad certificate 2025-08-13T19:56:56.059092197+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57660: remote error: tls: bad certificate 2025-08-13T19:56:56.077239265+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57672: remote error: tls: bad certificate 
2025-08-13T19:56:56.093169110+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57684: remote error: tls: bad certificate 2025-08-13T19:56:56.109713403+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57694: remote error: tls: bad certificate 2025-08-13T19:56:56.129161008+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57696: remote error: tls: bad certificate 2025-08-13T19:56:56.145271858+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57706: remote error: tls: bad certificate 2025-08-13T19:56:56.165505986+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate 2025-08-13T19:56:56.184171279+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57726: remote error: tls: bad certificate 2025-08-13T19:56:56.200467834+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57736: remote error: tls: bad certificate 2025-08-13T19:56:56.218955612+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57750: remote error: tls: bad certificate 2025-08-13T19:56:56.238091339+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57754: remote error: tls: bad certificate 2025-08-13T19:56:56.255381722+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57766: remote error: tls: bad certificate 2025-08-13T19:56:56.273644024+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57778: remote error: tls: bad certificate 2025-08-13T19:56:56.292020579+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57792: remote error: tls: bad certificate 2025-08-13T19:56:56.312756671+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57802: remote error: tls: bad certificate 2025-08-13T19:56:56.334546453+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57814: remote error: tls: bad certificate 2025-08-13T19:56:56.357384105+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57820: remote error: tls: bad certificate 2025-08-13T19:56:56.380181586+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57828: remote error: tls: bad certificate 2025-08-13T19:56:56.395532484+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57838: remote error: tls: bad certificate 2025-08-13T19:56:56.415227557+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57854: remote error: tls: bad certificate 2025-08-13T19:56:56.436356060+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57866: remote error: tls: bad certificate 2025-08-13T19:56:56.454692693+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57868: remote error: tls: bad certificate 2025-08-13T19:56:56.474250182+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57874: remote error: tls: bad certificate 2025-08-13T19:56:56.490920798+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57884: remote error: tls: bad certificate 2025-08-13T19:56:56.508707886+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57890: remote error: tls: bad certificate 2025-08-13T19:56:56.528748348+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57892: remote error: tls: bad certificate 
2025-08-13T19:56:56.547876064+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57894: remote error: tls: bad certificate 2025-08-13T19:56:56.567367651+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57906: remote error: tls: bad certificate 2025-08-13T19:56:56.584531361+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57916: remote error: tls: bad certificate 2025-08-13T19:56:56.600718173+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57932: remote error: tls: bad certificate 2025-08-13T19:56:56.617605085+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57948: remote error: tls: bad certificate 2025-08-13T19:57:05.230301916+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45610: remote error: tls: bad certificate 2025-08-13T19:57:05.248021232+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45624: remote error: tls: bad certificate 2025-08-13T19:57:05.273461108+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45634: remote error: tls: bad certificate 2025-08-13T19:57:05.305141713+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45636: remote error: tls: bad certificate 2025-08-13T19:57:05.325868925+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45640: remote error: tls: bad certificate 2025-08-13T19:57:05.342881591+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45656: remote error: tls: bad certificate 2025-08-13T19:57:05.367365400+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45662: remote error: tls: bad certificate 2025-08-13T19:57:05.386722343+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45678: remote error: tls: bad certificate 2025-08-13T19:57:05.407868847+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45686: remote error: tls: bad certificate 2025-08-13T19:57:05.424290465+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45692: remote error: tls: bad certificate 2025-08-13T19:57:05.442185886+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45698: remote error: tls: bad certificate 2025-08-13T19:57:05.460172400+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45712: remote error: tls: bad certificate 2025-08-13T19:57:05.478593426+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45724: remote error: tls: bad certificate 2025-08-13T19:57:05.495328664+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45730: remote error: tls: bad certificate 2025-08-13T19:57:05.511511216+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45742: remote error: tls: bad certificate 2025-08-13T19:57:05.529711046+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45748: remote error: tls: bad certificate 2025-08-13T19:57:05.548125822+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45754: remote error: tls: bad certificate 2025-08-13T19:57:05.570967944+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45758: remote error: tls: bad certificate 2025-08-13T19:57:05.586724984+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45768: remote error: tls: bad certificate 
2025-08-13T19:57:05.605996564+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45780: remote error: tls: bad certificate 2025-08-13T19:57:05.622596058+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45786: remote error: tls: bad certificate 2025-08-13T19:57:05.643546386+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45798: remote error: tls: bad certificate 2025-08-13T19:57:05.660334256+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45802: remote error: tls: bad certificate 2025-08-13T19:57:05.681204832+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45814: remote error: tls: bad certificate 2025-08-13T19:57:05.702463449+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45826: remote error: tls: bad certificate 2025-08-13T19:57:05.723456308+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45840: remote error: tls: bad certificate 2025-08-13T19:57:05.745062205+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45854: remote error: tls: bad certificate 2025-08-13T19:57:05.763269205+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45860: remote error: tls: bad certificate 2025-08-13T19:57:05.788013961+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45870: remote error: tls: bad certificate 2025-08-13T19:57:05.805992145+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45878: remote error: tls: bad certificate 2025-08-13T19:57:05.822549078+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45882: remote error: tls: bad certificate 2025-08-13T19:57:05.846873602+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45896: remote error: tls: bad certificate 2025-08-13T19:57:05.864119685+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45910: remote error: tls: bad certificate 2025-08-13T19:57:05.880618756+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45920: remote error: tls: bad certificate 2025-08-13T19:57:05.898885137+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45926: remote error: tls: bad certificate 2025-08-13T19:57:05.919036103+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45940: remote error: tls: bad certificate 2025-08-13T19:57:05.934533455+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45956: remote error: tls: bad certificate 2025-08-13T19:57:05.951481689+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45966: remote error: tls: bad certificate 2025-08-13T19:57:05.969558015+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45972: remote error: tls: bad certificate 2025-08-13T19:57:05.988580929+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45986: remote error: tls: bad certificate 2025-08-13T19:57:06.008454336+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46002: remote error: tls: bad certificate 2025-08-13T19:57:06.025493193+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46018: remote error: tls: bad certificate 2025-08-13T19:57:06.046371529+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46020: remote error: tls: bad certificate 
2025-08-13T19:57:06.065501075+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46028: remote error: tls: bad certificate 2025-08-13T19:57:06.088051199+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46032: remote error: tls: bad certificate 2025-08-13T19:57:06.105245760+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46036: remote error: tls: bad certificate 2025-08-13T19:57:06.124323665+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46046: remote error: tls: bad certificate 2025-08-13T19:57:06.146168149+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46048: remote error: tls: bad certificate 2025-08-13T19:57:06.166367205+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46064: remote error: tls: bad certificate 2025-08-13T19:57:06.185961905+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46072: remote error: tls: bad certificate 2025-08-13T19:57:06.203137765+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46074: remote error: tls: bad certificate 2025-08-13T19:57:06.284907090+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46082: remote error: tls: bad certificate 2025-08-13T19:57:06.313498497+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46098: remote error: tls: bad certificate 2025-08-13T19:57:06.334065354+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46106: remote error: tls: bad certificate 2025-08-13T19:57:06.337165382+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46116: remote error: tls: bad certificate 2025-08-13T19:57:06.361948500+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46128: remote error: tls: bad certificate 2025-08-13T19:57:06.372693227+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46130: remote error: tls: bad certificate 2025-08-13T19:57:06.398034201+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46142: remote error: tls: bad certificate 2025-08-13T19:57:06.409981882+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46144: remote error: tls: bad certificate 2025-08-13T19:57:06.421979374+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46158: remote error: tls: bad certificate 2025-08-13T19:57:06.431347652+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46166: remote error: tls: bad certificate 2025-08-13T19:57:06.441501522+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46170: remote error: tls: bad certificate 2025-08-13T19:57:06.448276865+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46172: remote error: tls: bad certificate 2025-08-13T19:57:06.464864639+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46188: remote error: tls: bad certificate 2025-08-13T19:57:06.483985125+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46194: remote error: tls: bad certificate 2025-08-13T19:57:06.501266748+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46196: remote error: tls: bad certificate 2025-08-13T19:57:06.520198659+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46210: remote error: tls: bad certificate 
2025-08-13T19:57:06.539083878+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46212: remote error: tls: bad certificate 2025-08-13T19:57:06.559295805+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46218: remote error: tls: bad certificate 2025-08-13T19:57:06.574943602+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46224: remote error: tls: bad certificate 2025-08-13T19:57:06.596600171+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46236: remote error: tls: bad certificate 2025-08-13T19:57:06.614022658+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46252: remote error: tls: bad certificate 2025-08-13T19:57:10.600037687+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55290: remote error: tls: bad certificate 2025-08-13T19:57:10.621431758+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55298: remote error: tls: bad certificate 2025-08-13T19:57:10.635941312+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55308: remote error: tls: bad certificate 2025-08-13T19:57:10.650896939+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate 2025-08-13T19:57:10.669900572+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55320: remote error: tls: bad certificate 2025-08-13T19:57:10.689583654+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:57:10.706357373+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55352: remote error: tls: bad certificate 2025-08-13T19:57:10.724895082+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55354: remote error: tls: bad certificate 2025-08-13T19:57:10.739721396+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:57:10.755175867+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55370: remote error: tls: bad certificate 2025-08-13T19:57:10.777086313+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55374: remote error: tls: bad certificate 2025-08-13T19:57:10.795641893+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:57:10.814930223+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55394: remote error: tls: bad certificate 2025-08-13T19:57:10.832184976+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55400: remote error: tls: bad certificate 2025-08-13T19:57:10.847275607+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55402: remote error: tls: bad certificate 2025-08-13T19:57:10.889940626+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55410: remote error: tls: bad certificate 2025-08-13T19:57:10.907945069+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55420: remote error: tls: bad certificate 2025-08-13T19:57:10.932224823+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55426: remote error: tls: bad certificate 2025-08-13T19:57:10.956453325+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55440: remote error: tls: bad certificate 
2025-08-13T19:57:10.973945124+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55444: remote error: tls: bad certificate 2025-08-13T19:57:10.997486336+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55448: remote error: tls: bad certificate 2025-08-13T19:57:11.018105455+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55452: remote error: tls: bad certificate 2025-08-13T19:57:11.037043386+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55456: remote error: tls: bad certificate 2025-08-13T19:57:11.065667623+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55472: remote error: tls: bad certificate 2025-08-13T19:57:11.085483659+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55482: remote error: tls: bad certificate 2025-08-13T19:57:11.101455615+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate 2025-08-13T19:57:11.124011199+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55502: remote error: tls: bad certificate 2025-08-13T19:57:11.142479637+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55508: remote error: tls: bad certificate 2025-08-13T19:57:11.160491541+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55516: remote error: tls: bad certificate 2025-08-13T19:57:11.181507541+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55520: remote error: tls: bad certificate 2025-08-13T19:57:11.197179268+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55536: remote error: tls: bad certificate 2025-08-13T19:57:11.221586335+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55540: remote error: tls: bad certificate 2025-08-13T19:57:11.240347491+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:57:11.254939198+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55546: remote error: tls: bad certificate 2025-08-13T19:57:11.272016065+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate 2025-08-13T19:57:11.286613462+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate 2025-08-13T19:57:11.306205962+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:57:11.323518106+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate 2025-08-13T19:57:11.342847668+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate 2025-08-13T19:57:11.362680484+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55612: remote error: tls: bad certificate 2025-08-13T19:57:11.380238716+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55626: remote error: tls: bad certificate 2025-08-13T19:57:11.406290580+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55634: remote error: tls: bad certificate 2025-08-13T19:57:11.422362459+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55644: remote error: tls: bad certificate 
2025-08-13T19:57:11.441281279+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55660: remote error: tls: bad certificate 2025-08-13T19:57:11.462956728+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate 2025-08-13T19:57:11.480138568+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55670: remote error: tls: bad certificate 2025-08-13T19:57:11.498075931+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:57:11.515517819+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55676: remote error: tls: bad certificate 2025-08-13T19:57:11.534354167+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:57:11.556291493+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:57:11.576725386+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55704: remote error: tls: bad certificate 2025-08-13T19:57:11.598127228+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate 2025-08-13T19:57:11.613018833+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55722: remote error: tls: bad certificate 2025-08-13T19:57:11.629109082+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55728: remote error: tls: bad certificate 2025-08-13T19:57:11.647121027+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55740: remote error: tls: bad certificate 2025-08-13T19:57:11.665426079+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55756: remote error: tls: bad certificate 2025-08-13T19:57:11.682359963+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55764: remote error: tls: bad certificate 2025-08-13T19:57:11.700356587+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55770: remote error: tls: bad certificate 2025-08-13T19:57:11.713879813+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55786: remote error: tls: bad certificate 2025-08-13T19:57:11.733523184+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55796: remote error: tls: bad certificate 2025-08-13T19:57:11.777744517+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55798: remote error: tls: bad certificate 2025-08-13T19:57:11.802611327+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate 2025-08-13T19:57:11.829885545+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55810: remote error: tls: bad certificate 2025-08-13T19:57:11.858481022+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate 2025-08-13T19:57:11.886875443+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55826: remote error: tls: bad certificate 2025-08-13T19:57:11.910943450+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55830: remote error: tls: bad certificate 2025-08-13T19:57:11.931481526+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55842: remote error: tls: bad certificate 
2025-08-13T19:57:14.325250390+00:00 stderr F I0813 19:57:14.325188 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.crt\"" 2025-08-13T19:57:14.326847276+00:00 stderr F I0813 19:57:14.326768 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T19:57:14.326941248+00:00 stderr F I0813 19:57:14.326923 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.key\"" 2025-08-13T19:57:14.327263998+00:00 stderr F I0813 19:57:14.327247 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T20:04:58.811366355+00:00 stderr F I0813 20:04:58.810626 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.888436763+00:00 stderr F I0813 20:09:03.887738 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:11:01.590145355+00:00 stderr F 2025/08/13 20:11:01 http: TLS handshake error from 127.0.0.1:59200: EOF 2025-08-13T20:41:21.499517961+00:00 stderr F 2025/08/13 20:41:21 http: TLS handshake error from 127.0.0.1:56308: EOF 2025-08-13T20:42:36.402219136+00:00 stderr F I0813 20:42:36.399978 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:46.384981592+00:00 stderr F I0813 20:42:46.384029 1 ovnkubeidentity.go:297] Received signal terminated.... 2025-08-13T20:42:46.384981592+00:00 stderr F I0813 20:42:46.384942 1 ovnkubeidentity.go:77] Waiting (3m20s) for kubernetes-api to stop...
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log
2025-08-13T20:03:52.252101561+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T20:03:52.252902494+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T20:03:52.258513094+00:00 stdout F I0813 20:03:52.257953568 - network-node-identity - start approver 2025-08-13T20:03:52.258534125+00:00 stderr F + echo 'I0813 20:03:52.257953568 - network-node-identity - start approver' 2025-08-13T20:03:52.259139592+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2025-08-13T20:03:52.393001511+00:00 stderr F I0813 20:03:52.392657 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443
extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2025-08-13T20:03:52.393001511+00:00 stderr F W0813 20:03:52.392885 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:03:52.398011554+00:00 stderr F I0813 20:03:52.397928 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2025-08-13T20:03:52.398136927+00:00 stderr F I0813 20:03:52.398089 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity... 2025-08-13T20:03:52.403382227+00:00 stderr F E0813 20:03:52.403301 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:52.403406728+00:00 stderr F I0813 20:03:52.403375 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:04:31.982134314+00:00 stderr F I0813 20:04:31.980755 1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired 2025-08-13T20:04:31.982134314+00:00 stderr F I0813 20:04:31.980925 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:05:00.826537302+00:00 stderr F I0813 20:05:00.826382 1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired 2025-08-13T20:05:00.826537302+00:00 stderr F I0813 20:05:00.826425 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:05:28.356696557+00:00 stderr F I0813 20:05:28.355349 1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired 2025-08-13T20:05:28.356696557+00:00 stderr F I0813 20:05:28.355398 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:06:04.133623897+00:00 stderr F I0813 20:06:04.133466 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2025-08-13T20:06:04.160531788+00:00 stderr F I0813 20:06:04.159733 1 recorder.go:104] "crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"31975"} reason="LeaderElection" 2025-08-13T20:06:04.164734928+00:00 stderr F I0813 20:06:04.164577 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:06:04.164734928+00:00 stderr F I0813 20:06:04.164702 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:06:04.211696293+00:00 stderr F I0813 20:06:04.211552 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h50m56.013963378s) from 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:06:04.211696293+00:00 stderr F I0813 20:06:04.211604 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:06:04.253691615+00:00 stderr F I0813 20:06:04.253587 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:06:04.309922526+00:00 stderr F I0813 20:06:04.309672 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-08-13T20:06:04.346192274+00:00 stderr F I0813 20:06:04.346069 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 60.352µs 2025-08-13T20:06:04.346385140+00:00 stderr F I0813 20:06:04.346317 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 29.561µs 2025-08-13T20:06:04.346551995+00:00 stderr F I0813 20:06:04.346480 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 22.77µs 2025-08-13T20:06:04.346604026+00:00 stderr F I0813 20:06:04.346568 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 19.381µs 2025-08-13T20:06:04.346665638+00:00 stderr F I0813 20:06:04.346625 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 18.661µs 2025-08-13T20:06:04.346920915+00:00 stderr F I0813 20:06:04.346762 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 15.041µs 2025-08-13T20:08:26.272461173+00:00 stderr F E0813 20:08:26.272090 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:26.297370847+00:00 stderr F I0813 20:08:26.293078 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 2 items received 2025-08-13T20:08:26.308511837+00:00 stderr F I0813 20:08:26.303578 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=599&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.178411178+00:00 stderr F I0813 20:08:27.177465 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=541&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.356501355+00:00 stderr F I0813 20:08:29.356300 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=472&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:08:34.382094292+00:00 stderr F I0813 20:08:34.376626 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=456&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:45.578744749+00:00 stderr F I0813 20:08:45.578627 1 reflector.go:449] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest closed with: too old resource version: 32771 (32913) 2025-08-13T20:09:03.647365741+00:00 stderr F I0813 20:09:03.646470 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.660568 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663174 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 62.051µs 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663258 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 38.281µs 2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663320 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 39.452µs 2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.664963 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 1.617777ms 2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.665162 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 37.851µs 2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.665323 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 19.921µs 2025-08-13T20:14:41.664443608+00:00 stderr F I0813 20:14:41.664204 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 6 items received 2025-08-13T20:23:18.672638163+00:00 stderr F I0813 20:23:18.672237 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 10 items received 2025-08-13T20:32:30.677906302+00:00 stderr F I0813 20:32:30.677644 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 11 items received 2025-08-13T20:42:11.687999923+00:00 stderr F I0813 20:42:11.684848 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 11 items received 2025-08-13T20:42:36.392193737+00:00 stderr F I0813 20:42:36.345690 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.412533663+00:00 stderr F I0813 20:42:36.412501 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 0 items received 2025-08-13T20:42:36.547542146+00:00 stderr F I0813 20:42:36.546111 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get 
"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=37397&timeoutSeconds=395&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:38.747969165+00:00 stderr F I0813 20:42:38.747179 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=37397&timeoutSeconds=562&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.143997762+00:00 stderr F I0813 20:42:40.143888 1 ovnkubeidentity.go:297] Received signal terminated.... 2025-08-13T20:42:40.159443517+00:00 stderr F I0813 20:42:40.159310 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:40.160173778+00:00 stderr F I0813 20:42:40.160116 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:40.161560158+00:00 stderr F I0813 20:42:40.161479 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:40.161560158+00:00 stderr F I0813 20:42:40.161527 1 controller.go:242] "All workers finished" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:40.162157096+00:00 stderr F I0813 20:42:40.162096 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:40.162300340+00:00 stderr F I0813 20:42:40.162205 1 reflector.go:295] Stopping reflector *v1.CertificateSigningRequest (9h50m56.013963378s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163570 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163608 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163621 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:40.168672223+00:00 stderr F E0813 20:42:40.167441 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:40.169350333+00:00 stderr F I0813 20:42:40.168700 1 recorder.go:104] "crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"37542"} reason="LeaderElection" ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000644000175000017500000002664715073043233033105 0ustar zuulzuul2025-10-13T00:12:49.284528721+00:00 stderr F + [[ -f /env/_master ]] 
2025-10-13T00:12:49.285844409+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-10-13T00:12:49.287967079+00:00 stdout F I1013 00:12:49.287449125 - network-node-identity - start approver 2025-10-13T00:12:49.287981510+00:00 stderr F + echo 'I1013 00:12:49.287449125 - network-node-identity - start approver' 2025-10-13T00:12:49.288017321+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2025-10-13T00:12:49.730187340+00:00 stderr F I1013 00:12:49.729398 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2025-10-13T00:12:49.731239940+00:00 stderr F W1013 00:12:49.731206 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-10-13T00:12:49.745364812+00:00 stderr F I1013 00:12:49.745268 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2025-10-13T00:12:49.745923058+00:00 stderr F I1013 00:12:49.745888 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity... 2025-10-13T00:12:49.776918962+00:00 stderr F I1013 00:12:49.776862 1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired 2025-10-13T00:12:49.776918962+00:00 stderr F I1013 00:12:49.776886 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-10-13T00:13:11.657967766+00:00 stderr F I1013 00:13:11.657891 1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired 2025-10-13T00:13:11.657967766+00:00 stderr F I1013 00:13:11.657908 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-10-13T00:13:32.609234359+00:00 stderr F I1013 00:13:32.609124 1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired 2025-10-13T00:13:32.609234359+00:00 stderr F I1013 00:13:32.609174 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-10-13T00:14:13.198663475+00:00 stderr F I1013 00:14:13.198575 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2025-10-13T00:14:13.198836640+00:00 stderr F I1013 00:14:13.198792 1 recorder.go:104] "crc_ab94f93a-5178-42b5-b9c9-9fd6c8cedf1e became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"39806"} reason="LeaderElection" 2025-10-13T00:14:13.200283131+00:00 stderr F I1013 00:14:13.200221 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 
2025-10-13T00:14:13.200283131+00:00 stderr F I1013 00:14:13.200254 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-10-13T00:14:13.205893330+00:00 stderr F I1013 00:14:13.205825 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h57m52.625913609s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:14:13.205893330+00:00 stderr F I1013 00:14:13.205852 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:14:13.209015998+00:00 stderr F I1013 00:14:13.208963 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:14:13.306739494+00:00 stderr F I1013 00:14:13.306638 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-10-13T00:14:13.307349841+00:00 stderr F I1013 00:14:13.307278 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 50.282µs 2025-10-13T00:14:13.307546457+00:00 stderr F I1013 00:14:13.307502 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 42.481µs 2025-10-13T00:14:13.307624709+00:00 stderr F I1013 00:14:13.307586 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 42.582µs 2025-10-13T00:14:13.307771593+00:00 stderr F I1013 00:14:13.307733 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 46.502µs 2025-10-13T00:14:13.308362040+00:00 stderr F I1013 00:14:13.308268 1 recorder.go:104] "CSR \"csr-wgcnk\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-wgcnk"} reason="CSRApproved" 2025-10-13T00:14:13.314698339+00:00 stderr F I1013 00:14:13.314601 1 approver.go:230] Finished syncing CSR csr-wgcnk for crc node in 6.823613ms 2025-10-13T00:14:13.315005748+00:00 stderr F I1013 00:14:13.314964 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 38.521µs 2025-10-13T00:14:13.315163192+00:00 stderr F I1013 00:14:13.315131 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 51.191µs 2025-10-13T00:14:13.315270235+00:00 stderr F I1013 00:14:13.315241 1 approver.go:230] Finished syncing CSR csr-wgcnk for unknown node in 38.161µs 2025-10-13T00:14:13.350708638+00:00 stderr F I1013 00:14:13.350480 1 approver.go:230] Finished syncing CSR csr-wgcnk for unknown node in 63.482µs 2025-10-13T00:14:16.452181694+00:00 stderr F I1013 00:14:16.452105 1 recorder.go:104] "CSR \"csr-756z8\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-756z8"} reason="CSRApproved" 2025-10-13T00:14:16.456379513+00:00 stderr F I1013 00:14:16.456199 1 approver.go:230] Finished syncing CSR csr-756z8 for crc node in 4.549409ms 2025-10-13T00:14:16.456379513+00:00 stderr F I1013 00:14:16.456314 1 approver.go:230] Finished syncing CSR csr-756z8 for unknown node in 60.452µs 2025-10-13T00:14:16.463176525+00:00 stderr F I1013 00:14:16.463112 1 approver.go:230] Finished syncing CSR csr-756z8 for unknown node in 63.102µs 2025-10-13T00:20:13.211291275+00:00 stderr F I1013 00:20:13.210521 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 12 
items received 2025-10-13T00:22:11.685088867+00:00 stderr F I1013 00:22:11.683673 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 1 items received 2025-10-13T00:22:11.685088867+00:00 stderr F I1013 00:22:11.684775 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42200&timeoutSeconds=385&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.931126704+00:00 stderr F I1013 00:22:12.931019 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42200&timeoutSeconds=496&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.473775507+00:00 stderr F E1013 00:22:13.473689 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.086032276+00:00 stderr F I1013 00:22:16.085957 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42200&timeoutSeconds=575&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.980167007+00:00 stderr F I1013 00:22:19.980062 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42200&timeoutSeconds=498&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:27.104189185+00:00 stderr F I1013 00:22:27.104085 1 reflector.go:449] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest closed with: too old resource version: 42200 (42861) 2025-10-13T00:22:47.254716131+00:00 stderr F I1013 00:22:47.254588 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:22:47.258514437+00:00 stderr F I1013 00:22:47.258435 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:22:47.259177405+00:00 stderr F I1013 00:22:47.259113 1 approver.go:230] Finished syncing CSR csr-756z8 for unknown node in 19.3µs 2025-10-13T00:22:47.259177405+00:00 stderr F I1013 00:22:47.259168 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 18.111µs 2025-10-13T00:22:47.259229847+00:00 stderr F I1013 00:22:47.259204 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 17.461µs 2025-10-13T00:22:47.259250747+00:00 stderr F I1013 
00:22:47.259239 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 22.161µs 2025-10-13T00:22:47.259312329+00:00 stderr F I1013 00:22:47.259268 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 16.561µs 2025-10-13T00:22:47.259363910+00:00 stderr F I1013 00:22:47.259336 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 26.191µs 2025-10-13T00:22:47.259476833+00:00 stderr F I1013 00:22:47.259433 1 approver.go:230] Finished syncing CSR csr-wgcnk for unknown node in 16.99µs 2025-10-13T00:22:47.259575836+00:00 stderr F I1013 00:22:47.259522 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 21.831µs
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log
2025-08-13T19:50:46.034128438+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:46.061916362+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:46.146976623+00:00 stdout F I0813 19:50:46.088947634 - network-node-identity - start approver 2025-08-13T19:50:46.147024094+00:00 stderr F + echo 'I0813 19:50:46.088947634 - network-node-identity - start approver' 2025-08-13T19:50:46.147024094+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2025-08-13T19:50:47.258007987+00:00 stderr F I0813 19:50:47.257460 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2025-08-13T19:50:47.259446928+00:00 stderr F W0813 19:50:47.259418 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:47.270012330+00:00 stderr F I0813 19:50:47.269622 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2025-08-13T19:50:47.283521506+00:00 stderr F I0813 19:50:47.282897 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity...
2025-08-13T19:50:47.517100522+00:00 stderr F I0813 19:50:47.516520 1 leaderelection.go:354] lock is held by crc_b2366d4a-899d-4575-ad93-10121ab7b42a and has not yet expired 2025-08-13T19:50:47.517190414+00:00 stderr F I0813 19:50:47.517169 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:24.777301158+00:00 stderr F I0813 19:51:24.777094 1 leaderelection.go:354] lock is held by crc_b2366d4a-899d-4575-ad93-10121ab7b42a and has not yet expired 2025-08-13T19:51:24.777301158+00:00 stderr F I0813 19:51:24.777172 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:48.849637807+00:00 stderr F I0813 19:51:48.849570 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:48.850659406+00:00 stderr F I0813 19:51:48.850602 1 recorder.go:104] "crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"313ac7f3-4ba1-4dd0-b6b5-40f9f3a73f08","apiVersion":"coordination.k8s.io/v1","resourceVersion":"26671"} reason="LeaderElection" 2025-08-13T19:51:48.851389587+00:00 stderr F I0813 19:51:48.851289 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:51:48.851494360+00:00 stderr F I0813 19:51:48.851475 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T19:51:48.856428690+00:00 stderr F I0813 19:51:48.856310 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h28m13.239043519s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.856428690+00:00 stderr F I0813 19:51:48.856387 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.869034989+00:00 stderr F I0813 19:51:48.868974 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.957216392+00:00 stderr F I0813 19:51:48.957111 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-08-13T19:51:48.959225619+00:00 stderr F I0813 19:51:48.959157 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 16.16µs 2025-08-13T19:51:48.959737294+00:00 stderr F I0813 19:51:48.959701 1 recorder.go:104] "CSR \"csr-dpjmc\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-dpjmc"} reason="CSRApproved" 2025-08-13T19:51:48.971123478+00:00 stderr F I0813 19:51:48.971041 1 approver.go:230] Finished syncing CSR csr-dpjmc for crc node in 11.734344ms 2025-08-13T19:51:48.971246191+00:00 stderr F I0813 19:51:48.971186 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 59.072µs 2025-08-13T19:51:48.971320084+00:00 stderr F I0813 19:51:48.971288 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 35.321µs 2025-08-13T19:51:48.971392386+00:00 stderr F I0813 19:51:48.971339 1 
approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 24.941µs 2025-08-13T19:51:48.971698394+00:00 stderr F I0813 19:51:48.971647 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 26.731µs 2025-08-13T19:51:48.983918182+00:00 stderr F I0813 19:51:48.983873 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 132.074µs 2025-08-13T19:57:39.305594345+00:00 stderr F I0813 19:57:39.305421 1 recorder.go:104] "CSR \"csr-fxkbs\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-fxkbs"} reason="CSRApproved" 2025-08-13T19:57:39.316617830+00:00 stderr F I0813 19:57:39.316333 1 approver.go:230] Finished syncing CSR csr-fxkbs for crc node in 12.345512ms 2025-08-13T19:57:39.316617830+00:00 stderr F I0813 19:57:39.316501 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 61.332µs 2025-08-13T19:57:39.330480056+00:00 stderr F I0813 19:57:39.330297 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 62.762µs 2025-08-13T19:58:46.872170084+00:00 stderr F I0813 19:58:46.872041 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 15 items received 2025-08-13T20:02:29.640165504+00:00 stderr F I0813 20:02:29.639336 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 6 items received 2025-08-13T20:02:29.652962089+00:00 stderr F I0813 20:02:29.650652 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=373&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.045345338+00:00 stderr F I0813 20:02:31.045086 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=339&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.125400204+00:00 stderr F I0813 20:02:34.125327 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=508&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.664872788+00:00 stderr F I0813 20:02:39.664401 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=503&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:40.049084358+00:00 stderr F E0813 20:02:40.048894 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:02:47.690939252+00:00 stderr F I0813 20:02:47.690717 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=566&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:00.053560022+00:00 stderr F E0813 20:03:00.053423 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:00.839901233+00:00 stderr F I0813 20:03:00.839743 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=591&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:10.047475339+00:00 stderr F I0813 20:03:10.047083 1 leaderelection.go:285] failed to renew lease openshift-network-node-identity/ovnkube-identity: timed out waiting for the condition 2025-08-13T20:03:10.050289689+00:00 stderr F E0813 20:03:10.050206 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:10.050881566+00:00 stderr F I0813 20:03:10.050704 1 recorder.go:104] "crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"30647"} reason="LeaderElection" 2025-08-13T20:03:10.051385910+00:00 stderr F I0813 20:03:10.051306 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051417 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051459 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051469 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051476 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051484 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:03:10.052057009+00:00 stderr F error running approver: leader election lost ././@LongLink0000644000000000000000000000027600000000000011610 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015073043233033025 5ustar zuulzuul././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015073043233033025 5ustar zuulzuul././@LongLink0000644000000000000000000000034300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000005547515073043233033047 0ustar zuulzuul2025-08-13T20:05:31.350875728+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-08-13T20:05:34.560759116+00:00 stderr F I0813 20:05:34.538410 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:34.573181841+00:00 stderr F I0813 20:05:34.572394 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:35.152155431+00:00 stderr F I0813 20:05:35.151753 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:35.153203081+00:00 stderr F I0813 20:05:35.152517 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 2025-08-13T20:05:35.236413874+00:00 stderr F I0813 20:05:35.234124 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-08-13T20:05:35.236594939+00:00 stderr F I0813 20:05:35.236561 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-08-13T20:05:35.236630760+00:00 stderr F I0813 20:05:35.236617 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-08-13T20:05:35.236661851+00:00 stderr F I0813 20:05:35.236649 1 main.go:35] Go OS/Arch: linux/amd64 2025-08-13T20:05:35.236703192+00:00 stderr F I0813 20:05:35.236680 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 
2025-08-13T20:05:35.237208107+00:00 stderr F I0813 20:05:35.236940 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_54cb8f8a-f1fe-4304-818b-373b62828b0d became leader 2025-08-13T20:05:35.432352325+00:00 stderr F I0813 20:05:35.432268 1 metrics.go:88] Starting MetricsController 2025-08-13T20:05:35.433254951+00:00 stderr F I0813 20:05:35.433184 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435108 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435151 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.433210 1 imageconfig.go:86] Starting ImageConfigController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435274 1 azurestackcloud.go:172] Starting AzureStackCloudController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435290 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-08-13T20:05:35.444151743+00:00 stderr F I0813 20:05:35.441931 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900142 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900192 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900349 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-08-13T20:05:35.977097965+00:00 stderr F W0813 20:05:35.881106 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:35.977097965+00:00 stderr F E0813 20:05:35.920501 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:35.977097965+00:00 stderr F I0813 20:05:35.924914 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.977212858+00:00 stderr F W0813 20:05:35.896004 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:35.977301521+00:00 stderr F E0813 20:05:35.977280 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:35.977617480+00:00 stderr F I0813 20:05:35.977441 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-08-13T20:05:35.996186362+00:00 stderr F I0813 20:05:35.988230 1 reflector.go:351] Caches populated for *v1.Pod from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.013418735+00:00 stderr F I0813 20:05:36.013360 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.030196465+00:00 stderr F I0813 20:05:36.028088 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.035365954+00:00 stderr F I0813 20:05:36.035338 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-08-13T20:05:36.036283540+00:00 stderr F I0813 20:05:36.036260 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-08-13T20:05:36.073980569+00:00 stderr F I0813 20:05:36.067391 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.133655429+00:00 stderr F I0813 20:05:36.133375 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-08-13T20:05:36.280345169+00:00 stderr F I0813 20:05:36.280188 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.333671476+00:00 stderr F I0813 20:05:36.333611 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-08-13T20:05:37.020230946+00:00 stderr F W0813 20:05:37.018982 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:37.020230946+00:00 stderr F E0813 20:05:37.019044 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:37.281066526+00:00 stderr F W0813 20:05:37.280612 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:37.281066526+00:00 stderr F E0813 20:05:37.280700 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:39.504821258+00:00 stderr F W0813 20:05:39.504581 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:39.504821258+00:00 stderr F E0813 20:05:39.504637 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:40.202603800+00:00 stderr F W0813 20:05:40.202537 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:40.202693703+00:00 stderr F E0813 20:05:40.202679 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list 
*v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.347291943+00:00 stderr F W0813 20:05:45.346582 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.347291943+00:00 stderr F E0813 20:05:45.347206 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.904703685+00:00 stderr F W0813 20:05:45.904527 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:45.904703685+00:00 stderr F E0813 20:05:45.904584 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:52.005179407+00:00 stderr F I0813 20:05:52.004586 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:05:52.033375184+00:00 stderr F I0813 20:05:52.033311 1 metrics.go:94] Started MetricsController 2025-08-13T20:05:54.197518657+00:00 stderr F I0813 20:05:54.197085 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:05:54.236459062+00:00 stderr F I0813 20:05:54.236312 1 imageconfig.go:93] Started ImageConfigController 2025-08-13T20:05:54.236585266+00:00 stderr F I0813 20:05:54.236562 1 controller.go:452] Starting Controller 2025-08-13T20:07:14.860751489+00:00 stderr F E0813 20:07:14.860066 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:14.864649781+00:00 stderr F E0813 20:07:14.864593 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:30.246382098+00:00 stderr F W0813 20:07:30.245491 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:30.246382098+00:00 stderr F E0813 20:07:30.246168 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:37.313181438+00:00 stderr F I0813 20:07:37.312479 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:08:10.069237273+00:00 stderr F I0813 20:08:10.067066 1 reflector.go:351] Caches populated for 
*v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:08:35.407215323+00:00 stderr F E0813 20:08:35.406443 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.484992848+00:00 stderr F I0813 20:09:29.484498 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:33.942276903+00:00 stderr F I0813 20:09:33.941268 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.313329223+00:00 stderr F I0813 20:09:36.312206 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:37.685525694+00:00 stderr F I0813 20:09:37.685404 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:39.680454581+00:00 stderr F I0813 20:09:39.678471 1 reflector.go:351] Caches populated for *v1.ImagePruner from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T20:09:42.768944279+00:00 stderr F I0813 20:09:42.767471 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.115880377+00:00 stderr F I0813 20:09:43.115025 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:44.458981734+00:00 stderr F I0813 20:09:44.458753 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.363674635+00:00 stderr F I0813 20:09:48.363526 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:49.508340314+00:00 stderr F I0813 20:09:49.507835 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:52.284846749+00:00 stderr F I0813 20:09:52.284085 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.366653799+00:00 stderr F I0813 20:09:57.364300 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:07.683255424+00:00 stderr F I0813 20:10:07.681191 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.606673474+00:00 stderr F I0813 20:10:15.605733 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.953871779+00:00 stderr F I0813 20:10:15.953646 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.627732939+00:00 stderr F I0813 20:10:29.627031 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:30.282694577+00:00 stderr F I0813 20:10:30.282122 1 reflector.go:351] Caches populated for *v1.Config from 
github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T20:10:33.630065369+00:00 stderr F I0813 20:10:33.628984 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:43.053059944+00:00 stderr F I0813 20:10:43.052423 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:43.982593184+00:00 stderr F I0813 20:10:43.981760 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:36.400650211+00:00 stderr F I0813 20:42:36.399905 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.404340367+00:00 stderr F I0813 20:42:36.404002 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415105747+00:00 stderr F I0813 20:42:36.408974 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415105747+00:00 stderr F I0813 20:42:36.409471 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.425743904+00:00 stderr F I0813 20:42:36.398202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.499699596+00:00 stderr F I0813 20:42:36.499303 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516513441+00:00 stderr F I0813 20:42:36.516428 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541373478+00:00 stderr F I0813 20:42:36.541150 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568582222+00:00 stderr F I0813 20:42:36.568013 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.584575623+00:00 stderr F I0813 20:42:36.584446 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.584976065+00:00 stderr F I0813 20:42:36.584877 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585458739+00:00 stderr F I0813 20:42:36.585398 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586120668+00:00 stderr F I0813 20:42:36.586047 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586513629+00:00 stderr F I0813 20:42:36.586436 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586865359+00:00 stderr F I0813 20:42:36.585473 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586865359+00:00 stderr F I0813 20:42:36.586739 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.587276561+00:00 stderr F I0813 20:42:36.587143 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.587560869+00:00 stderr F I0813 20:42:36.586722 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588023573+00:00 stderr F I0813 20:42:36.587939 1 streamwatcher.go:111] Unexpected EOF during watch stream event 
decoding: unexpected EOF 2025-08-13T20:42:36.588713133+00:00 stderr F I0813 20:42:36.588590 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588987971+00:00 stderr F I0813 20:42:36.588931 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.589666410+00:00 stderr F I0813 20:42:36.586706 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.715605201+00:00 stderr F E0813 20:42:36.715085 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.215010560+00:00 stderr F I0813 20:42:39.213109 1 main.go:52] Received SIGTERM or SIGINT signal, shutting down the operator. 2025-08-13T20:42:39.217512372+00:00 stderr F I0813 20:42:39.217454 1 controllerimagepruner.go:390] Shutting down ImagePrunerController ... 2025-08-13T20:42:39.217538183+00:00 stderr F I0813 20:42:39.217517 1 controller.go:456] Shutting down Controller ... 2025-08-13T20:42:39.217561664+00:00 stderr F I0813 20:42:39.217544 1 imageconfig.go:95] Shutting down ImageConfigController 2025-08-13T20:42:39.217573774+00:00 stderr F I0813 20:42:39.217567 1 metrics.go:96] Shutting down MetricsController 2025-08-13T20:42:39.218483870+00:00 stderr F I0813 20:42:39.217577 1 imageregistrycertificates.go:216] Shutting down ImageRegistryCertificatesController 2025-08-13T20:42:39.218851141+00:00 stderr F I0813 20:42:39.218713 1 leaderelection.go:285] failed to renew lease openshift-image-registry/openshift-master-controllers: timed out waiting for the condition 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222866 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222930 1 azurepathfixcontroller.go:326] Shutting down AzurePathFixController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222943 1 clusteroperator.go:152] Shutting down ClusterOperatorStatusController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222953 1 azurestackcloud.go:181] Shutting down AzureStackCloudController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222962 1 nodecadaemon.go:211] Shutting down NodeCADaemonController 2025-08-13T20:42:39.224584866+00:00 stderr F I0813 20:42:39.223740 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:39.226353487+00:00 stderr F I0813 20:42:39.226310 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:39.229605441+00:00 stderr F E0813 20:42:39.229511 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.230585389+00:00 stderr F W0813 20:42:39.230513 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000034300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000005462315073043233033041 0ustar zuulzuul2025-10-13T00:14:57.260435675+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-10-13T00:14:58.681716910+00:00 stderr F I1013 00:14:58.679602 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:58.683867804+00:00 stderr F I1013 00:14:58.683596 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:58.746178351+00:00 stderr F I1013 00:14:58.745996 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:14:58.747045457+00:00 stderr F I1013 00:14:58.746903 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 2025-10-13T00:19:38.217018961+00:00 stderr F I1013 00:19:38.216277 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-10-13T00:19:38.217018961+00:00 stderr F I1013 00:19:38.216407 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41796", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_7411d8d0-3b60-4282-b6be-65da7110a845 became leader 2025-10-13T00:19:38.217131654+00:00 stderr F I1013 00:19:38.217092 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-10-13T00:19:38.217131654+00:00 stderr F I1013 00:19:38.217104 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-10-13T00:19:38.217131654+00:00 stderr F I1013 00:19:38.217109 1 main.go:35] Go OS/Arch: linux/amd64 2025-10-13T00:19:38.217131654+00:00 stderr F I1013 00:19:38.217113 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 
2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.230625 1 metrics.go:88] Starting MetricsController 2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.230664 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.230680 1 imageconfig.go:86] Starting ImageConfigController 2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.231817 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.231842 1 azurestackcloud.go:172] Starting AzureStackCloudController 2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.231862 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-10-13T00:19:38.234567263+00:00 stderr F I1013 00:19:38.230625 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-10-13T00:19:38.236602493+00:00 stderr F I1013 00:19:38.236541 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-10-13T00:19:38.331389502+00:00 stderr F I1013 00:19:38.331261 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-10-13T00:19:38.331389502+00:00 stderr F I1013 00:19:38.331296 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-10-13T00:19:38.331617139+00:00 stderr F I1013 00:19:38.331535 1 imageconfig.go:93] Started ImageConfigController 2025-10-13T00:19:38.331617139+00:00 stderr F I1013 00:19:38.331579 1 metrics.go:94] Started MetricsController 2025-10-13T00:19:38.331911478+00:00 stderr F I1013 00:19:38.331878 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:19:38.331911478+00:00 stderr F I1013 00:19:38.331893 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-10-13T00:19:38.332026281+00:00 stderr F I1013 00:19:38.331994 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-10-13T00:19:38.332567487+00:00 stderr F I1013 00:19:38.332522 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-10-13T00:19:38.332600368+00:00 stderr F I1013 00:19:38.332576 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-10-13T00:19:38.337319069+00:00 stderr F I1013 00:19:38.337272 1 controller.go:452] Starting Controller 2025-10-13T00:19:38.337513084+00:00 stderr F I1013 00:19:38.337461 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-10-13T00:19:38.425607024+00:00 stderr F I1013 00:19:38.425499 1 generator.go:62] object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets updated: changed:data..dockerconfigjson={ -> }, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:134d2023417aa99dc70c099f12731fc3d94cb8fe5fef3d499d5c1ff70d124cfb" -> "sha256:cd9658903c20944eb60992db7d1167845b9660771a71716e1875bc10e5145610"}, changed:metadata.managedFields.0.time={"2025-08-13T20:00:23Z" -> "2025-10-13T00:19:38Z"}, changed:metadata.resourceVersion={"29461" -> "41799"} 2025-10-13T00:19:38.830031651+00:00 stderr F I1013 00:19:38.829940 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"metadata":{"annotations":{"imageregistry.operator.openshift.io/checksum":"sha256:3616891ac97a04dff8f52e8fc01cee609bbfbe0247bfa3ef0f9ebbcc435b27f1","operator.openshift.io/spec-hash":"3abd68f3c2e68f9a4d2c85d68647a58f3da61ebcaeeecc1baedcf649ce0065c8"}},"spec":{"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"imageregistry.operator.openshift.io/dependencies-checksum":"sha256:53869d9c320e001c11f9e0c8b26efab68c1d93a6051736c231681cabec99482e"}},"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec 
/usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-certificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-10-13T00:19:38.844550763+00:00 stderr F I1013 00:19:38.844423 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-10-13T00:19:38.848213412+00:00 
stderr F I1013 00:19:38.848134 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6" -> "sha256:3616891ac97a04dff8f52e8fc01cee609bbfbe0247bfa3ef0f9ebbcc435b27f1"}, changed:metadata.annotations.operator.openshift.io/spec-hash={"2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da" -> "3abd68f3c2e68f9a4d2c85d68647a58f3da61ebcaeeecc1baedcf649ce0065c8"}, changed:metadata.generation={"4.000000" -> "5.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, added:metadata.managedFields.0.subresource="status", changed:metadata.managedFields.0.time={"2025-08-13T20:00:24Z" -> "2025-10-13T00:17:18Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, removed:metadata.managedFields.1.subresource="status", changed:metadata.managedFields.1.time={"2025-10-13T00:17:18Z" -> "2025-10-13T00:19:38Z"}, changed:metadata.resourceVersion={"41456" -> "41801"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10" -> "sha256:53869d9c320e001c11f9e0c8b26efab68c1d93a6051736c231681cabec99482e"} 2025-10-13T00:19:38.851360545+00:00 stderr F I1013 00:19:38.851273 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2025-10-13T00:19:38Z"}, changed:status.conditions.0.message={"The registry is ready" -> "The deployment has not completed"}, changed:status.conditions.0.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.1.message={"The registry is ready" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"Ready" -> "MinimumAvailability"}, changed:status.generations.1.lastGeneration={"4.000000" -> "5.000000"} 2025-10-13T00:19:38.884640515+00:00 stderr F I1013 00:19:38.884541 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: removed:apiVersion="config.openshift.io/v1", removed:kind="ClusterOperator", changed:metadata.managedFields.2.time={"2025-08-13T20:01:21Z" -> "2025-10-13T00:19:38Z"}, changed:metadata.resourceVersion={"30445" -> "41805"}, changed:status.conditions.0.message={"Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"Ready" -> "MinimumAvailability"}, changed:status.conditions.1.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2025-10-13T00:19:38Z"}, changed:status.conditions.1.message={"Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.1.status={"False" -> "True"} 2025-10-13T00:19:39.835297595+00:00 stderr F I1013 00:19:39.835195 1 apps.go:154] Deployment 
"openshift-image-registry/image-registry" changes: {"spec":{"revisionHistoryLimit":null,"template":{"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec /usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-certificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-10-13T00:19:39.846668773+00:00 stderr F I1013 00:19:39.846572 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-10-13T00:19:39.849206439+00:00 stderr F I1013 00:19:39.849150 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: 2025-10-13T00:19:39.850653132+00:00 stderr F I1013 00:19:39.850608 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2025-10-13T00:19:39Z"}, changed:status.conditions.0.message={"The registry is ready" -> "The deployment has not completed"}, changed:status.conditions.0.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.1.message={"The registry is ready" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"Ready" -> "MinimumAvailability"}, changed:status.generations.1.lastGeneration={"4.000000" -> "5.000000"} 2025-10-13T00:19:39.858323150+00:00 stderr F E1013 00:19:39.858263 1 controller.go:377] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing 2025-10-13T00:19:59.311781341+00:00 stderr F I1013 00:19:59.310970 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.readyReplicas={"1.000000" -> "2.000000"} 2025-10-13T00:19:59.347955636+00:00 stderr F I1013 00:19:59.347890 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-10-13T00:19:38Z" -> "2025-10-13T00:19:59Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"} 2025-10-13T00:19:59.352562803+00:00 stderr F E1013 00:19:59.352520 1 controller.go:377] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing 2025-10-13T00:19:59.691661738+00:00 stderr F I1013 00:19:59.691562 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-10-13T00:19:38Z" -> "2025-10-13T00:19:59Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"}, changed:status.readyReplicas={"2.000000" -> "1.000000"} 2025-10-13T00:19:59.708361014+00:00 stderr F I1013 00:19:59.708281 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: 
changed:metadata.managedFields.2.time={"2025-10-13T00:19:38Z" -> "2025-10-13T00:19:59Z"}, changed:metadata.resourceVersion={"41805" -> "41891"}, changed:status.conditions.0.message={"Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.1.lastTransitionTime={"2025-10-13T00:19:38Z" -> "2025-10-13T00:19:59Z"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.1.status={"True" -> "False"} 2025-10-13T00:20:00.495578734+00:00 stderr F I1013 00:20:00.495519 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-10-13T00:19:38Z" -> "2025-10-13T00:20:00Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"}, changed:status.readyReplicas={"2.000000" -> "1.000000"} 2025-10-13T00:20:00.502920062+00:00 stderr F E1013 00:20:00.502874 1 controller.go:377] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing 2025-10-13T00:22:38.246094415+00:00 stderr F E1013 00:22:38.245767 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000034300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000013306715073043233033041 0ustar zuulzuul2025-08-13T19:59:07.238527301+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-08-13T19:59:32.468279537+00:00 stderr F I0813 19:59:32.410410 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T19:59:32.532267201+00:00 stderr F I0813 19:59:32.529815 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:34.559928351+00:00 stderr F I0813 19:59:34.555025 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.559928351+00:00 stderr F I0813 19:59:34.559332 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.698956 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699759 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699856 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699863 1 main.go:35] Go OS/Arch: linux/amd64 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699996 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.706495 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_6e4ef831-9ca6-4914-905d-7214cad92174 became leader 2025-08-13T19:59:36.902238389+00:00 stderr F I0813 19:59:36.899333 1 metrics.go:88] Starting MetricsController 2025-08-13T19:59:36.902238389+00:00 stderr F I0813 19:59:36.901727 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-08-13T19:59:37.182626381+00:00 stderr F I0813 19:59:37.158995 1 azurestackcloud.go:172] Starting AzureStackCloudController 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.189139 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.191135 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.191173 1 imageconfig.go:86] Starting ImageConfigController 2025-08-13T19:59:37.243681972+00:00 stderr F I0813 19:59:37.242191 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-08-13T19:59:37.247402978+00:00 stderr F I0813 19:59:37.247377 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-08-13T19:59:37.413710078+00:00 stderr F I0813 19:59:37.381885 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-08-13T19:59:37.501316355+00:00 stderr F I0813 19:59:37.444165 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-08-13T19:59:38.007381131+00:00 stderr F E0813 19:59:38.007004 1 azurestackcloud.go:76] AzureStackCloudController: unable to sync: config.imageregistry.operator.openshift.io "cluster" not found, requeuing 2025-08-13T19:59:39.197697421+00:00 stderr F I0813 19:59:39.197219 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:39.522447768+00:00 stderr F I0813 19:59:39.197982 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:39.591490816+00:00 stderr F I0813 19:59:39.577516 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.591490816+00:00 stderr F I0813 19:59:39.579410 1 reflector.go:351] Caches populated for *v1.ImagePruner from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T19:59:39.591490816+00:00 stderr F W0813 19:59:39.579746 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:39.591490816+00:00 stderr F E0813 19:59:39.580001 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:39.652526036+00:00 stderr F W0813 19:59:39.652136 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:39.652526036+00:00 stderr F E0813 19:59:39.652274 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:39.781990576+00:00 stderr F I0813 19:59:39.757255 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.881095 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.888743 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.889591 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-08-13T19:59:39.948356619+00:00 stderr F I0813 19:59:39.948219 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.983099779+00:00 stderr F I0813 19:59:39.982996 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-08-13T19:59:40.086146276+00:00 stderr F I0813 19:59:40.079998 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:40.148072042+00:00 stderr F I0813 19:59:40.148002 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:40.246921819+00:00 stderr F I0813 19:59:40.246315 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-08-13T19:59:40.820345945+00:00 stderr F W0813 19:59:40.797598 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:40.820481959+00:00 stderr F E0813 19:59:40.820460 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to 
list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:41.074455408+00:00 stderr F W0813 19:59:41.074352 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:41.074455408+00:00 stderr F E0813 19:59:41.074410 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:41.135983282+00:00 stderr F I0813 19:59:41.135814 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: removed:apiVersion="config.openshift.io/v1", removed:kind="ClusterOperator", changed:metadata.managedFields.2.time={"2024-06-27T13:34:18Z" -> "2025-08-13T19:59:40Z"}, changed:metadata.resourceVersion={"23930" -> "28282"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca does not have available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable::NodeCADaemonNoAvailableReplicas" -> "NoReplicasAvailable"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deploying node pods" -> "Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted::NodeCADaemonUnavailable" -> "DeploymentNotCompleted"} 2025-08-13T19:59:41.141502780+00:00 stderr F I0813 19:59:41.136956 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:41.307344807+00:00 stderr F I0813 19:59:41.306542 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-08-13T19:59:42.817613347+00:00 stderr F W0813 19:59:42.814334 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:42.817613347+00:00 stderr F E0813 19:59:42.815089 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:43.437966550+00:00 stderr F W0813 19:59:43.437445 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:43.437966550+00:00 stderr F E0813 19:59:43.437509 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:47.824966453+00:00 stderr F W0813 19:59:47.824230 1 
reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:47.824966453+00:00 stderr F E0813 19:59:47.824864 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:48.187447036+00:00 stderr F W0813 19:59:48.187101 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:48.187447036+00:00 stderr F E0813 19:59:48.187195 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:54.721302158+00:00 stderr F W0813 19:59:54.720490 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:54.721302158+00:00 stderr F E0813 19:59:54.721160 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:58.226936117+00:00 stderr F W0813 19:59:58.226058 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:58.226936117+00:00 stderr F E0813 19:59:58.226686 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:15.597920077+00:00 stderr F I0813 20:00:15.596690 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:00:15.602244620+00:00 stderr F I0813 20:00:15.602202 1 metrics.go:94] Started MetricsController 2025-08-13T20:00:17.585059728+00:00 stderr F I0813 20:00:17.568887 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-image-registry, Name=serviceca updated: removed:metadata.annotations.openshift.io/description="Configmap is added/updated with a data item containing the CA signing bundle that can be used to verify service-serving certificates", removed:metadata.annotations.openshift.io/owning-component="service-ca", changed:metadata.resourceVersion={"29256" -> "29269"} 2025-08-13T20:00:17.629035862+00:00 stderr F I0813 20:00:17.598084 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-image-registry, Name=image-registry-certificates updated: changed:data.image-registry.openshift-image-registry.svc..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:data.image-registry.openshift-image-registry.svc.cluster.local..5000={"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:f0bdbd4b4b483117b7890bf3f3d072b58bd5c5ac221322fb5c069a25e1e8cb66" -> "sha256:653bf3ca288d3df55507222afe41f33beb70f23514182de2cc295244a1d79403"}, changed:metadata.managedFields.0.time={"2024-06-27T13:18:57Z" -> "2025-08-13T20:00:17Z"}, changed:metadata.resourceVersion={"18030" -> "29265"} 2025-08-13T20:00:17.949627914+00:00 stderr F I0813 20:00:17.948689 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-config-managed, Name=image-registry-ca updated: changed:data.image-registry.openshift-image-registry.svc..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:data.image-registry.openshift-image-registry.svc.cluster.local..5000={"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:d537ad4b8475d1cab44f6a9af72391f500985389b6664e1dba8e546d76b40b02" -> "sha256:8a4b2012ebacc19d57e4aae623e3012d9b0e3cf1957da5588850ec19f97baa40"}, changed:metadata.managedFields.0.time={"2024-06-27T13:18:53Z" -> "2025-08-13T20:00:17Z"}, changed:metadata.resourceVersion={"17963" -> "29281"} 2025-08-13T20:00:22.798386460+00:00 stderr F I0813 20:00:22.779164 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:22.798386460+00:00 stderr F I0813 20:00:22.792146 1 imageconfig.go:93] Started ImageConfigController 2025-08-13T20:00:22.848609373+00:00 stderr F I0813 20:00:22.848251 1 controller.go:452] Starting Controller 2025-08-13T20:00:23.585293700+00:00 stderr F I0813 20:00:23.584565 1 generator.go:62] object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets updated: changed:data..dockerconfigjson={ -> }, 
changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:085fdb2709b57d501872b4e20b38e3618d21be40f24851b4fad2074469e1fa6d" -> "sha256:134d2023417aa99dc70c099f12731fc3d94cb8fe5fef3d499d5c1ff70d124cfb"}, changed:metadata.managedFields.0.time={"2024-06-27T13:34:15Z" -> "2025-08-13T20:00:23Z"}, changed:metadata.resourceVersion={"23543" -> "29461"} 2025-08-13T20:00:24.745952665+00:00 stderr F I0813 20:00:24.745340 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"metadata":{"annotations":{"imageregistry.operator.openshift.io/checksum":"sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6","operator.openshift.io/spec-hash":"2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da"}},"spec":{"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"imageregistry.operator.openshift.io/dependencies-checksum":"sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10"}},"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec /usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-certificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":nu
ll,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-08-13T20:00:24.918966429+00:00 stderr F I0813 20:00:24.918342 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-08-13T20:00:24.975235403+00:00 stderr F I0813 20:00:24.965374 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:f7ef6312b4aa9a5819f99115a43ad18318ade18e78d0d43d8f8db34ee8a97e8d" -> "sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6"}, changed:metadata.annotations.operator.openshift.io/spec-hash={"944e90126c7956be2484e645afe5c783cacf55a40f11cb132e07b25294ee50fa" -> "2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da"}, changed:metadata.generation={"3.000000" -> "4.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, added:metadata.managedFields.0.subresource="status", changed:metadata.managedFields.0.time={"2024-06-27T13:34:15Z" -> "2025-08-13T19:49:58Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, removed:metadata.managedFields.1.subresource="status", changed:metadata.managedFields.1.time={"2025-08-13T19:49:58Z" -> "2025-08-13T20:00:24Z"}, changed:metadata.resourceVersion={"25235" -> "29506"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:c4eb23334aa8e38243d604088f7d430c98cd061e527b1f39182877df0dc8680c" -> "sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10"} 2025-08-13T20:00:25.031080526+00:00 stderr F I0813 20:00:25.029193 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.2.lastTransitionTime={"2024-06-26T12:53:37Z" -> "2025-08-13T20:00:25Z"}, added:status.conditions.2.message="The deployment does not have available replicas", added:status.conditions.2.reason="Unavailable", changed:status.conditions.2.status={"False" -> "True"}, changed:status.generations.1.lastGeneration={"3.000000" -> "4.000000"} 2025-08-13T20:00:25.122118682+00:00 stderr F I0813 20:00:25.122061 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T19:59:40Z" -> "2025-08-13T20:00:25Z"}, changed:metadata.resourceVersion={"28282" -> "29540"}, changed:status.conditions.2.lastTransitionTime={"2024-06-26T12:53:37Z" -> "2025-08-13T20:00:25Z"}, 
added:status.conditions.2.message="Degraded: The deployment does not have available replicas", changed:status.conditions.2.reason={"AsExpected" -> "Unavailable"}, changed:status.conditions.2.status={"False" -> "True"} 2025-08-13T20:00:49.469631363+00:00 stderr F E0813 20:00:49.468721 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:49.493293998+00:00 stderr F E0813 20:00:49.489473 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:09.481028325+00:00 stderr F I0813 20:01:09.479275 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.1.lastTransitionTime={"2024-06-27T13:34:18Z" -> "2025-08-13T20:01:09Z"}, changed:status.conditions.1.message={"The deployment does not have available replicas" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.1.status={"False" -> "True"}, changed:status.conditions.2.lastTransitionTime={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, removed:status.conditions.2.message="The deployment does not have available replicas", removed:status.conditions.2.reason="Unavailable", changed:status.conditions.2.status={"True" -> "False"}, changed:status.readyReplicas={"0.000000" -> "1.000000"} 2025-08-13T20:01:10.204765482+00:00 stderr F I0813 20:01:10.204334 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, changed:metadata.resourceVersion={"29540" -> "30319"}, changed:status.conditions.0.lastTransitionTime={"2024-06-27T13:34:14Z" -> "2025-08-13T20:01:09Z"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.2.lastTransitionTime={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, removed:status.conditions.2.message="Degraded: The deployment does not have available replicas", changed:status.conditions.2.reason={"Unavailable" -> "AsExpected"}, changed:status.conditions.2.status={"True" -> "False"} 2025-08-13T20:01:19.011059104+00:00 stderr F W0813 20:01:19.006176 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:19.011059104+00:00 stderr F E0813 20:01:19.006869 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:21.283886640+00:00 stderr F I0813 20:01:21.283335 1 
controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2024-06-27T13:34:15Z" -> "2025-08-13T20:01:21Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"} 2025-08-13T20:01:21.697944527+00:00 stderr F I0813 20:01:21.697887 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T20:01:09Z" -> "2025-08-13T20:01:21Z"}, changed:metadata.resourceVersion={"30319" -> "30445"}, changed:status.conditions.0.message={"Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.1.lastTransitionTime={"2024-06-27T13:34:14Z" -> "2025-08-13T20:01:21Z"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.1.status={"True" -> "False"} 2025-08-13T20:01:22.535888750+00:00 stderr F I0813 20:01:22.535112 1 observer_polling.go:120] Observed file "/etc/secrets/tls.crt" has been modified (old="1b4fefb1e2cc07f8a890596a1d5f574f6950f0d920567a2f2dc82a253d167d29", new="c608c7a9b6a1e5e93b05aeb838482ea7878695c06fd899cda56bfdd48b1f7613") 2025-08-13T20:01:22.536123607+00:00 stderr F W0813 20:01:22.536099 1 builder.go:154] Restart triggered because of file /etc/secrets/tls.crt was modified 2025-08-13T20:01:22.536476357+00:00 stderr F I0813 20:01:22.536316 1 observer_polling.go:120] Observed file "/etc/secrets/tls.key" has been modified (old="ce0633a34805c4dab1eb6f9f90254ed254d7f3cf0899dcbde466b988b655d7c9", new="2d36d9ce30dbba8b01ea274d2bf93b78a3e49924e61177e7a6e202a5e6fd5047") 2025-08-13T20:01:22.536712773+00:00 stderr F I0813 20:01:22.536659 1 main.go:54] Watched file changed, shutting down the operator. 2025-08-13T20:01:22.538703110+00:00 stderr F I0813 20:01:22.538674 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:22.540514022+00:00 stderr F I0813 20:01:22.538889 1 nodecadaemon.go:211] Shutting down NodeCADaemonController 2025-08-13T20:01:22.565977378+00:00 stderr F I0813 20:01:22.538933 1 azurestackcloud.go:181] Shutting down AzureStackCloudController 2025-08-13T20:01:22.566163403+00:00 stderr F I0813 20:01:22.539232 1 controller.go:456] Shutting down Controller ... 
2025-08-13T20:01:22.566208224+00:00 stderr F I0813 20:01:22.539238 1 clusteroperator.go:152] Shutting down ClusterOperatorStatusController 2025-08-13T20:01:22.566243185+00:00 stderr F I0813 20:01:22.539258 1 imageconfig.go:95] Shutting down ImageConfigController 2025-08-13T20:01:22.566372099+00:00 stderr F I0813 20:01:22.539259 1 azurepathfixcontroller.go:326] Shutting down AzurePathFixController 2025-08-13T20:01:22.566426581+00:00 stderr F I0813 20:01:22.539290 1 metrics.go:96] Shutting down MetricsController 2025-08-13T20:01:22.566454101+00:00 stderr F I0813 20:01:22.539308 1 controllerimagepruner.go:390] Shutting down ImagePrunerController ... 2025-08-13T20:01:22.566491692+00:00 stderr F I0813 20:01:22.539334 1 imageregistrycertificates.go:216] Shutting down ImageRegistryCertificatesController 2025-08-13T20:01:22.566533594+00:00 stderr F I0813 20:01:22.539593 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:22.566898264+00:00 stderr F I0813 20:01:22.566827 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:23.271401421+00:00 stderr F W0813 20:01:23.271351 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000023400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000025300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000112777215073043232033067 0ustar zuulzuul2025-10-13T00:22:22.772476517+00:00 stdout F flock: getting lock took 0.000005 seconds 2025-10-13T00:22:22.772713723+00:00 stdout F Copying system trust bundle ... 
2025-10-13T00:22:22.790000088+00:00 stderr F I1013 00:22:22.789867 1 loader.go:395] Config loaded from file: /etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig 2025-10-13T00:22:22.790595354+00:00 stderr F Copying termination logs to "/var/log/kube-apiserver/termination.log" 2025-10-13T00:22:22.790653686+00:00 stderr F I1013 00:22:22.790628 1 main.go:161] Touching termination lock file "/var/log/kube-apiserver/.terminating" 2025-10-13T00:22:22.791195300+00:00 stderr F I1013 00:22:22.791107 1 main.go:219] Launching sub-process "/usr/bin/hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=192.168.126.11 -v=2 --permit-address-sharing" 2025-10-13T00:22:22.877868231+00:00 stderr F Flag --openshift-config has been deprecated, to be removed 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877854 17 flags.go:64] FLAG: --admission-control="[]" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877924 17 flags.go:64] FLAG: --admission-control-config-file="" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877929 17 flags.go:64] FLAG: --advertise-address="192.168.126.11" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877933 17 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877938 17 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877942 17 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877946 17 flags.go:64] FLAG: --allow-privileged="false" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877949 17 flags.go:64] FLAG: --anonymous-auth="true" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877952 17 flags.go:64] FLAG: --api-audiences="[]" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877956 17 flags.go:64] FLAG: --apiserver-count="1" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877963 17 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877966 17 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877969 17 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877973 17 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877976 17 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-10-13T00:22:22.877992455+00:00 stderr F I1013 00:22:22.877979 17 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-10-13T00:22:22.878007305+00:00 stderr F I1013 00:22:22.877985 17 flags.go:64] FLAG: --audit-log-compress="false" 2025-10-13T00:22:22.878007305+00:00 stderr F I1013 00:22:22.877988 17 flags.go:64] FLAG: --audit-log-format="json" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878017 17 flags.go:64] FLAG: --audit-log-maxage="0" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878024 17 flags.go:64] FLAG: --audit-log-maxbackup="0" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878028 17 flags.go:64] FLAG: --audit-log-maxsize="0" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878031 17 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 
00:22:22.878034 17 flags.go:64] FLAG: --audit-log-path="" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878038 17 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878042 17 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-10-13T00:22:22.878059806+00:00 stderr F I1013 00:22:22.878050 17 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878054 17 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878059 17 flags.go:64] FLAG: --audit-policy-file="" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878062 17 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878066 17 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878070 17 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878074 17 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878077 17 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878081 17 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878084 17 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878089 17 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-10-13T00:22:22.878099388+00:00 stderr F I1013 00:22:22.878092 17 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-10-13T00:22:22.878110128+00:00 stderr F I1013 00:22:22.878096 17 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-10-13T00:22:22.878110128+00:00 stderr F I1013 00:22:22.878099 17 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-10-13T00:22:22.878110128+00:00 stderr F I1013 00:22:22.878103 17 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-10-13T00:22:22.878119488+00:00 stderr F I1013 00:22:22.878106 17 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-10-13T00:22:22.878119488+00:00 stderr F I1013 00:22:22.878110 17 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-10-13T00:22:22.878119488+00:00 stderr F I1013 00:22:22.878114 17 flags.go:64] FLAG: --authentication-config="" 2025-10-13T00:22:22.878127948+00:00 stderr F I1013 00:22:22.878117 17 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" 2025-10-13T00:22:22.878127948+00:00 stderr F I1013 00:22:22.878122 17 flags.go:64] FLAG: --authentication-token-webhook-config-file="" 2025-10-13T00:22:22.878135619+00:00 stderr F I1013 00:22:22.878125 17 flags.go:64] FLAG: --authentication-token-webhook-version="v1beta1" 2025-10-13T00:22:22.878135619+00:00 stderr F I1013 00:22:22.878129 17 flags.go:64] FLAG: --authorization-config="" 2025-10-13T00:22:22.878143109+00:00 stderr F I1013 00:22:22.878133 17 flags.go:64] FLAG: --authorization-mode="[]" 2025-10-13T00:22:22.878143109+00:00 stderr F I1013 00:22:22.878137 17 flags.go:64] FLAG: --authorization-policy-file="" 2025-10-13T00:22:22.878150609+00:00 stderr F I1013 00:22:22.878140 17 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" 
2025-10-13T00:22:22.878150609+00:00 stderr F I1013 00:22:22.878144 17 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" 2025-10-13T00:22:22.878158169+00:00 stderr F I1013 00:22:22.878147 17 flags.go:64] FLAG: --authorization-webhook-config-file="" 2025-10-13T00:22:22.878158169+00:00 stderr F I1013 00:22:22.878151 17 flags.go:64] FLAG: --authorization-webhook-version="v1beta1" 2025-10-13T00:22:22.878165669+00:00 stderr F I1013 00:22:22.878155 17 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:22:22.878165669+00:00 stderr F I1013 00:22:22.878159 17 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:22:22.878173130+00:00 stderr F I1013 00:22:22.878163 17 flags.go:64] FLAG: --client-ca-file="" 2025-10-13T00:22:22.878173130+00:00 stderr F I1013 00:22:22.878166 17 flags.go:64] FLAG: --cloud-config="" 2025-10-13T00:22:22.878184230+00:00 stderr F I1013 00:22:22.878170 17 flags.go:64] FLAG: --cloud-provider="" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878173 17 flags.go:64] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878182 17 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878187 17 flags.go:64] FLAG: --contention-profiling="false" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878191 17 flags.go:64] FLAG: --cors-allowed-origins="[]" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878194 17 flags.go:64] FLAG: --debug-socket-path="" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878197 17 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878201 17 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300" 2025-10-13T00:22:22.878211491+00:00 stderr F I1013 00:22:22.878204 17 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-10-13T00:22:22.878221931+00:00 stderr F I1013 00:22:22.878208 17 flags.go:64] FLAG: --delete-collection-workers="1" 2025-10-13T00:22:22.878221931+00:00 stderr F I1013 00:22:22.878212 17 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-10-13T00:22:22.878229391+00:00 stderr F I1013 00:22:22.878216 17 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:22:22.878229391+00:00 stderr F I1013 00:22:22.878220 17 flags.go:64] FLAG: --egress-selector-config-file="" 2025-10-13T00:22:22.878236801+00:00 stderr F I1013 00:22:22.878223 17 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-10-13T00:22:22.878236801+00:00 stderr F I1013 00:22:22.878228 17 flags.go:64] FLAG: --enable-aggregator-routing="false" 2025-10-13T00:22:22.878236801+00:00 stderr F I1013 00:22:22.878232 17 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" 2025-10-13T00:22:22.878244731+00:00 stderr F I1013 00:22:22.878235 17 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-10-13T00:22:22.878244731+00:00 stderr F I1013 00:22:22.878239 17 flags.go:64] FLAG: --enable-logs-handler="true" 2025-10-13T00:22:22.878252212+00:00 stderr F I1013 00:22:22.878243 17 flags.go:64] FLAG: --enable-priority-and-fairness="true" 2025-10-13T00:22:22.878252212+00:00 stderr F I1013 00:22:22.878246 17 flags.go:64] FLAG: --encryption-provider-config="" 2025-10-13T00:22:22.878259572+00:00 stderr F I1013 00:22:22.878249 17 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 
2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878253 17 flags.go:64] FLAG: --endpoint-reconciler-type="lease" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878269 17 flags.go:64] FLAG: --etcd-cafile="" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878272 17 flags.go:64] FLAG: --etcd-certfile="" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878275 17 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878278 17 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878282 17 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878287 17 flags.go:64] FLAG: --etcd-healthcheck-timeout="2s" 2025-10-13T00:22:22.878299053+00:00 stderr F I1013 00:22:22.878291 17 flags.go:64] FLAG: --etcd-keyfile="" 2025-10-13T00:22:22.878308973+00:00 stderr F I1013 00:22:22.878294 17 flags.go:64] FLAG: --etcd-prefix="/registry" 2025-10-13T00:22:22.878308973+00:00 stderr F I1013 00:22:22.878298 17 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-10-13T00:22:22.878308973+00:00 stderr F I1013 00:22:22.878302 17 flags.go:64] FLAG: --etcd-servers="[]" 2025-10-13T00:22:22.878321594+00:00 stderr F I1013 00:22:22.878306 17 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-10-13T00:22:22.878321594+00:00 stderr F I1013 00:22:22.878310 17 flags.go:64] FLAG: --event-ttl="1h0m0s" 2025-10-13T00:22:22.878321594+00:00 stderr F I1013 00:22:22.878314 17 flags.go:64] FLAG: --external-hostname="" 2025-10-13T00:22:22.878365415+00:00 stderr F I1013 00:22:22.878318 17 flags.go:64] FLAG: --feature-gates="" 2025-10-13T00:22:22.878365415+00:00 stderr F I1013 00:22:22.878340 17 flags.go:64] FLAG: --goaway-chance="0" 2025-10-13T00:22:22.878365415+00:00 stderr F I1013 00:22:22.878345 17 flags.go:64] FLAG: --help="false" 2025-10-13T00:22:22.878365415+00:00 stderr F I1013 00:22:22.878348 17 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-10-13T00:22:22.878365415+00:00 stderr F I1013 00:22:22.878355 17 flags.go:64] FLAG: --kubelet-certificate-authority="" 2025-10-13T00:22:22.878375145+00:00 stderr F I1013 00:22:22.878359 17 flags.go:64] FLAG: --kubelet-client-certificate="" 2025-10-13T00:22:22.878375145+00:00 stderr F I1013 00:22:22.878363 17 flags.go:64] FLAG: --kubelet-client-key="" 2025-10-13T00:22:22.878375145+00:00 stderr F I1013 00:22:22.878366 17 flags.go:64] FLAG: --kubelet-port="10250" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878371 17 flags.go:64] FLAG: --kubelet-preferred-address-types="[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878381 17 flags.go:64] FLAG: --kubelet-read-only-port="10255" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878384 17 flags.go:64] FLAG: --kubelet-timeout="5s" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878387 17 flags.go:64] FLAG: --kubernetes-service-node-port="0" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878390 17 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878393 17 flags.go:64] FLAG: --livez-grace-period="0s" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878397 17 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878400 17 flags.go:64] FLAG: 
--log-json-info-buffer-size="0" 2025-10-13T00:22:22.878413326+00:00 stderr F I1013 00:22:22.878406 17 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:22:22.878423446+00:00 stderr F I1013 00:22:22.878410 17 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:22:22.878423446+00:00 stderr F I1013 00:22:22.878414 17 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" 2025-10-13T00:22:22.878423446+00:00 stderr F I1013 00:22:22.878417 17 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-10-13T00:22:22.878431366+00:00 stderr F I1013 00:22:22.878421 17 flags.go:64] FLAG: --max-requests-inflight="400" 2025-10-13T00:22:22.878431366+00:00 stderr F I1013 00:22:22.878424 17 flags.go:64] FLAG: --min-request-timeout="1800" 2025-10-13T00:22:22.878438927+00:00 stderr F I1013 00:22:22.878428 17 flags.go:64] FLAG: --oidc-ca-file="" 2025-10-13T00:22:22.878438927+00:00 stderr F I1013 00:22:22.878432 17 flags.go:64] FLAG: --oidc-client-id="" 2025-10-13T00:22:22.878446307+00:00 stderr F I1013 00:22:22.878435 17 flags.go:64] FLAG: --oidc-groups-claim="" 2025-10-13T00:22:22.878446307+00:00 stderr F I1013 00:22:22.878439 17 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-10-13T00:22:22.878453977+00:00 stderr F I1013 00:22:22.878442 17 flags.go:64] FLAG: --oidc-issuer-url="" 2025-10-13T00:22:22.878461167+00:00 stderr F I1013 00:22:22.878446 17 flags.go:64] FLAG: --oidc-required-claim="" 2025-10-13T00:22:22.878461167+00:00 stderr F I1013 00:22:22.878450 17 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" 2025-10-13T00:22:22.878461167+00:00 stderr F I1013 00:22:22.878455 17 flags.go:64] FLAG: --oidc-username-claim="sub" 2025-10-13T00:22:22.878473698+00:00 stderr F I1013 00:22:22.878458 17 flags.go:64] FLAG: --oidc-username-prefix="" 2025-10-13T00:22:22.878473698+00:00 stderr F I1013 00:22:22.878462 17 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:22:22.878473698+00:00 stderr F I1013 00:22:22.878466 17 flags.go:64] FLAG: --peer-advertise-ip="" 2025-10-13T00:22:22.878481688+00:00 stderr F I1013 00:22:22.878469 17 flags.go:64] FLAG: --peer-advertise-port="" 2025-10-13T00:22:22.878481688+00:00 stderr F I1013 00:22:22.878472 17 flags.go:64] FLAG: --peer-ca-file="" 2025-10-13T00:22:22.878489298+00:00 stderr F I1013 00:22:22.878475 17 flags.go:64] FLAG: --permit-address-sharing="true" 2025-10-13T00:22:22.878489298+00:00 stderr F I1013 00:22:22.878479 17 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:22:22.878489298+00:00 stderr F I1013 00:22:22.878483 17 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:22:22.878497218+00:00 stderr F I1013 00:22:22.878486 17 flags.go:64] FLAG: --proxy-client-cert-file="" 2025-10-13T00:22:22.878497218+00:00 stderr F I1013 00:22:22.878489 17 flags.go:64] FLAG: --proxy-client-key-file="" 2025-10-13T00:22:22.878504898+00:00 stderr F I1013 00:22:22.878493 17 flags.go:64] FLAG: --request-timeout="1m0s" 2025-10-13T00:22:22.878512099+00:00 stderr F I1013 00:22:22.878497 17 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:22:22.878512099+00:00 stderr F I1013 00:22:22.878502 17 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-10-13T00:22:22.878519519+00:00 stderr F I1013 00:22:22.878505 17 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[]" 2025-10-13T00:22:22.878519519+00:00 stderr F I1013 00:22:22.878509 17 flags.go:64] FLAG: --requestheader-group-headers="[]" 2025-10-13T00:22:22.878526999+00:00 stderr F I1013 00:22:22.878514 17 
flags.go:64] FLAG: --requestheader-username-headers="[]" 2025-10-13T00:22:22.878526999+00:00 stderr F I1013 00:22:22.878518 17 flags.go:64] FLAG: --runtime-config="" 2025-10-13T00:22:22.878534509+00:00 stderr F I1013 00:22:22.878523 17 flags.go:64] FLAG: --secure-port="6443" 2025-10-13T00:22:22.878534509+00:00 stderr F I1013 00:22:22.878527 17 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="false" 2025-10-13T00:22:22.878541839+00:00 stderr F I1013 00:22:22.878530 17 flags.go:64] FLAG: --service-account-extend-token-expiration="true" 2025-10-13T00:22:22.878549160+00:00 stderr F I1013 00:22:22.878534 17 flags.go:64] FLAG: --service-account-issuer="[]" 2025-10-13T00:22:22.878549160+00:00 stderr F I1013 00:22:22.878539 17 flags.go:64] FLAG: --service-account-jwks-uri="" 2025-10-13T00:22:22.878556510+00:00 stderr F I1013 00:22:22.878544 17 flags.go:64] FLAG: --service-account-key-file="[]" 2025-10-13T00:22:22.878556510+00:00 stderr F I1013 00:22:22.878548 17 flags.go:64] FLAG: --service-account-lookup="true" 2025-10-13T00:22:22.878564080+00:00 stderr F I1013 00:22:22.878552 17 flags.go:64] FLAG: --service-account-max-token-expiration="0s" 2025-10-13T00:22:22.878564080+00:00 stderr F I1013 00:22:22.878555 17 flags.go:64] FLAG: --service-account-signing-key-file="" 2025-10-13T00:22:22.878564080+00:00 stderr F I1013 00:22:22.878558 17 flags.go:64] FLAG: --service-cluster-ip-range="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878562 17 flags.go:64] FLAG: --service-node-port-range="30000-32767" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878572 17 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878576 17 flags.go:64] FLAG: --shutdown-delay-duration="0s" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878579 17 flags.go:64] FLAG: --shutdown-send-retry-after="false" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878582 17 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878585 17 flags.go:64] FLAG: --storage-backend="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878587 17 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878591 17 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878596 17 flags.go:64] FLAG: --tls-cert-file="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878599 17 flags.go:64] FLAG: --tls-cipher-suites="[]" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878602 17 flags.go:64] FLAG: --tls-min-version="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878605 17 flags.go:64] FLAG: --tls-private-key-file="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878608 17 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878613 17 flags.go:64] FLAG: --token-auth-file="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878616 17 flags.go:64] FLAG: --tracing-config-file="" 2025-10-13T00:22:22.878627172+00:00 stderr F I1013 00:22:22.878619 17 flags.go:64] FLAG: --v="2" 2025-10-13T00:22:22.878643942+00:00 stderr F I1013 00:22:22.878624 17 flags.go:64] FLAG: --version="false" 2025-10-13T00:22:22.878643942+00:00 stderr F I1013 00:22:22.878629 17 flags.go:64] FLAG: --vmodule="" 
2025-10-13T00:22:22.878643942+00:00 stderr F I1013 00:22:22.878634 17 flags.go:64] FLAG: --watch-cache="true" 2025-10-13T00:22:22.878651832+00:00 stderr F I1013 00:22:22.878637 17 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878684 17 plugins.go:83] "Registered admission plugin" plugin="authorization.openshift.io/RestrictSubjectBindings" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878697 17 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/RouteHostAssignment" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878702 17 plugins.go:83] "Registered admission plugin" plugin="image.openshift.io/ImagePolicy" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878707 17 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/IngressAdmission" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878712 17 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagementCPUsOverride" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878716 17 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagedNode" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878721 17 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/MixedCPUs" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878726 17 plugins.go:83] "Registered admission plugin" plugin="scheduling.openshift.io/OriginPodNodeEnvironment" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878732 17 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ClusterResourceOverride" 2025-10-13T00:22:22.878755175+00:00 stderr F I1013 00:22:22.878737 17 plugins.go:83] "Registered admission plugin" plugin="quota.openshift.io/ClusterResourceQuota" 2025-10-13T00:22:22.878780676+00:00 stderr F I1013 00:22:22.878744 17 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/RunOnceDuration" 2025-10-13T00:22:22.878780676+00:00 stderr F I1013 00:22:22.878755 17 plugins.go:83] "Registered admission plugin" plugin="scheduling.openshift.io/PodNodeConstraints" 2025-10-13T00:22:22.878780676+00:00 stderr F I1013 00:22:22.878761 17 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/SecurityContextConstraint" 2025-10-13T00:22:22.878780676+00:00 stderr F I1013 00:22:22.878767 17 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/SCCExecRestrictions" 2025-10-13T00:22:22.878809317+00:00 stderr F I1013 00:22:22.878773 17 plugins.go:83] "Registered admission plugin" plugin="network.openshift.io/ExternalIPRanger" 2025-10-13T00:22:22.878809317+00:00 stderr F I1013 00:22:22.878783 17 plugins.go:83] "Registered admission plugin" plugin="network.openshift.io/RestrictedEndpointsAdmission" 2025-10-13T00:22:22.878809317+00:00 stderr F I1013 00:22:22.878788 17 plugins.go:83] "Registered admission plugin" plugin="storage.openshift.io/CSIInlineVolumeSecurity" 2025-10-13T00:22:22.878844108+00:00 stderr F I1013 00:22:22.878804 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIServer" 2025-10-13T00:22:22.878844108+00:00 stderr F I1013 00:22:22.878813 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAuthentication" 2025-10-13T00:22:22.878844108+00:00 stderr F I1013 00:22:22.878818 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateFeatureGate" 
2025-10-13T00:22:22.878844108+00:00 stderr F I1013 00:22:22.878822 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateConsole" 2025-10-13T00:22:22.878844108+00:00 stderr F I1013 00:22:22.878827 17 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/ValidateDNS" 2025-10-13T00:22:22.878844108+00:00 stderr F I1013 00:22:22.878833 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateImage" 2025-10-13T00:22:22.878857738+00:00 stderr F I1013 00:22:22.878838 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateOAuth" 2025-10-13T00:22:22.878857738+00:00 stderr F I1013 00:22:22.878843 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateProject" 2025-10-13T00:22:22.878893869+00:00 stderr F I1013 00:22:22.878849 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/DenyDeleteClusterConfiguration" 2025-10-13T00:22:22.878893869+00:00 stderr F I1013 00:22:22.878857 17 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/DenyDeleteClusterOperators" 2025-10-13T00:22:22.878893869+00:00 stderr F I1013 00:22:22.878862 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateScheduler" 2025-10-13T00:22:22.878893869+00:00 stderr F I1013 00:22:22.878867 17 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/ValidateKubeControllerManager" 2025-10-13T00:22:22.878893869+00:00 stderr F I1013 00:22:22.878871 17 plugins.go:83] "Registered admission plugin" plugin="quota.openshift.io/ValidateClusterResourceQuota" 2025-10-13T00:22:22.878893869+00:00 stderr F I1013 00:22:22.878877 17 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/ValidateSecurityContextConstraints" 2025-10-13T00:22:22.878905509+00:00 stderr F I1013 00:22:22.878882 17 plugins.go:83] "Registered admission plugin" plugin="authorization.openshift.io/ValidateRoleBindingRestriction" 2025-10-13T00:22:22.878905509+00:00 stderr F I1013 00:22:22.878890 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateNetwork" 2025-10-13T00:22:22.878905509+00:00 stderr F I1013 00:22:22.878895 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIRequestCount" 2025-10-13T00:22:22.878964041+00:00 stderr F I1013 00:22:22.878902 17 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/RestrictExtremeWorkerLatencyProfile" 2025-10-13T00:22:22.878964041+00:00 stderr F I1013 00:22:22.878912 17 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/DefaultSecurityContextConstraints" 2025-10-13T00:22:22.878964041+00:00 stderr F I1013 00:22:22.878917 17 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/ValidateRoute" 2025-10-13T00:22:22.878964041+00:00 stderr F I1013 00:22:22.878922 17 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/DefaultRoute" 2025-10-13T00:22:22.881084078+00:00 stderr F W1013 00:22:22.880999 17 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-10-13T00:22:22.881084078+00:00 stderr F I1013 00:22:22.881019 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.881101768+00:00 stderr F W1013 00:22:22.881061 17 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-10-13T00:22:22.881101768+00:00 stderr F I1013 00:22:22.881074 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.881146459+00:00 
stderr F W1013 00:22:22.881099 17 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-10-13T00:22:22.881146459+00:00 stderr F I1013 00:22:22.881107 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.881154910+00:00 stderr F W1013 00:22:22.881129 17 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-10-13T00:22:22.881154910+00:00 stderr F I1013 00:22:22.881135 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.881195241+00:00 stderr F W1013 00:22:22.881158 17 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-10-13T00:22:22.881195241+00:00 stderr F I1013 00:22:22.881166 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.881219411+00:00 stderr F W1013 00:22:22.881188 17 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-10-13T00:22:22.881219411+00:00 stderr F I1013 00:22:22.881196 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881223 17 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881231 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881252 17 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881257 17 feature_gate.go:250] feature gates: &{map[]} 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881665 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881732 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881744 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881792 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881801 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881845 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881853 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881898 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881904 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882030563+00:00 stderr F W1013 00:22:22.881968 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-10-13T00:22:22.882030563+00:00 stderr F I1013 00:22:22.881973 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882306831+00:00 stderr F W1013 00:22:22.882018 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-10-13T00:22:22.882306831+00:00 stderr F I1013 00:22:22.882025 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882306831+00:00 stderr F W1013 00:22:22.882067 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-10-13T00:22:22.882306831+00:00 stderr F I1013 00:22:22.882071 17 feature_gate.go:250] feature 
gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882306831+00:00 stderr F W1013 00:22:22.882112 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-10-13T00:22:22.882306831+00:00 stderr F I1013 00:22:22.882116 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882306831+00:00 stderr F W1013 00:22:22.882156 17 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-10-13T00:22:22.882306831+00:00 stderr F I1013 00:22:22.882163 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882306831+00:00 stderr F W1013 00:22:22.882203 17 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-10-13T00:22:22.882306831+00:00 stderr F I1013 00:22:22.882208 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]} 2025-10-13T00:22:22.882306831+00:00 stderr F I1013 00:22:22.882250 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true]} 2025-10-13T00:22:22.882621619+00:00 stderr F I1013 00:22:22.882291 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false]} 2025-10-13T00:22:22.882621619+00:00 stderr F W1013 00:22:22.882363 17 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-10-13T00:22:22.882621619+00:00 stderr F I1013 00:22:22.882368 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false]} 2025-10-13T00:22:22.882621619+00:00 stderr F I1013 00:22:22.882412 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882621619+00:00 stderr F W1013 00:22:22.882456 17 feature_gate.go:227] unrecognized feature gate: Example 2025-10-13T00:22:22.882621619+00:00 stderr F I1013 00:22:22.882460 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882621619+00:00 stderr F W1013 00:22:22.882524 17 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-10-13T00:22:22.882621619+00:00 stderr F I1013 00:22:22.882529 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882621619+00:00 stderr F W1013 00:22:22.882573 17 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-10-13T00:22:22.882621619+00:00 stderr F I1013 00:22:22.882577 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882672450+00:00 stderr F W1013 00:22:22.882622 17 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-10-13T00:22:22.882672450+00:00 stderr F I1013 00:22:22.882628 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882706251+00:00 stderr F W1013 00:22:22.882671 17 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-10-13T00:22:22.882706251+00:00 
stderr F I1013 00:22:22.882677 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882794804+00:00 stderr F W1013 00:22:22.882731 17 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-10-13T00:22:22.882794804+00:00 stderr F I1013 00:22:22.882741 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882836475+00:00 stderr F W1013 00:22:22.882786 17 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-10-13T00:22:22.882836475+00:00 stderr F I1013 00:22:22.882791 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882930077+00:00 stderr F W1013 00:22:22.882870 17 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-10-13T00:22:22.882930077+00:00 stderr F I1013 00:22:22.882880 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.882977319+00:00 stderr F W1013 00:22:22.882922 17 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-10-13T00:22:22.882977319+00:00 stderr F I1013 00:22:22.882928 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883264436+00:00 stderr F W1013 00:22:22.882972 17 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-10-13T00:22:22.883264436+00:00 stderr F I1013 00:22:22.882982 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883264436+00:00 stderr F W1013 00:22:22.883049 17 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-10-13T00:22:22.883264436+00:00 stderr F I1013 00:22:22.883055 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883264436+00:00 stderr F W1013 00:22:22.883100 17 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-10-13T00:22:22.883264436+00:00 stderr F I1013 00:22:22.883107 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883264436+00:00 stderr F W1013 00:22:22.883146 17 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-10-13T00:22:22.883264436+00:00 stderr F I1013 00:22:22.883153 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883264436+00:00 stderr F W1013 00:22:22.883196 17 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-10-13T00:22:22.883544724+00:00 stderr F I1013 00:22:22.883247 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883544724+00:00 
stderr F W1013 00:22:22.883414 17 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-10-13T00:22:22.883544724+00:00 stderr F I1013 00:22:22.883429 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883544724+00:00 stderr F W1013 00:22:22.883495 17 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-10-13T00:22:22.883544724+00:00 stderr F I1013 00:22:22.883502 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]} 2025-10-13T00:22:22.883636986+00:00 stderr F W1013 00:22:22.883583 17 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-10-13T00:22:22.883636986+00:00 stderr F I1013 00:22:22.883599 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2025-10-13T00:22:22.883707078+00:00 stderr F W1013 00:22:22.883671 17 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-10-13T00:22:22.883707078+00:00 stderr F I1013 00:22:22.883683 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2025-10-13T00:22:22.883792791+00:00 stderr F W1013 00:22:22.883752 17 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-10-13T00:22:22.883792791+00:00 stderr F I1013 00:22:22.883767 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2025-10-13T00:22:22.883931024+00:00 stderr F W1013 00:22:22.883883 17 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-10-13T00:22:22.883931024+00:00 stderr F I1013 00:22:22.883896 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2025-10-13T00:22:22.884005886+00:00 stderr F W1013 00:22:22.883965 17 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-10-13T00:22:22.884005886+00:00 stderr F I1013 00:22:22.883976 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]} 2025-10-13T00:22:22.884108889+00:00 stderr F I1013 00:22:22.884054 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884168231+00:00 stderr F W1013 00:22:22.884133 17 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-10-13T00:22:22.884168231+00:00 stderr F I1013 00:22:22.884144 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884260773+00:00 stderr F W1013 00:22:22.884222 17 feature_gate.go:227] unrecognized feature gate: 
MetricsServer 2025-10-13T00:22:22.884260773+00:00 stderr F I1013 00:22:22.884234 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884372076+00:00 stderr F W1013 00:22:22.884303 17 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-10-13T00:22:22.884372076+00:00 stderr F I1013 00:22:22.884314 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884480709+00:00 stderr F W1013 00:22:22.884430 17 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-10-13T00:22:22.884480709+00:00 stderr F I1013 00:22:22.884447 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884563031+00:00 stderr F W1013 00:22:22.884516 17 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-10-13T00:22:22.884563031+00:00 stderr F I1013 00:22:22.884527 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884652474+00:00 stderr F W1013 00:22:22.884602 17 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-10-13T00:22:22.884652474+00:00 stderr F I1013 00:22:22.884609 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884712045+00:00 stderr F W1013 00:22:22.884674 17 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-10-13T00:22:22.884712045+00:00 stderr F I1013 00:22:22.884685 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]} 2025-10-13T00:22:22.884814008+00:00 stderr F I1013 00:22:22.884768 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2025-10-13T00:22:22.884939451+00:00 stderr F W1013 00:22:22.884893 17 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-10-13T00:22:22.884964092+00:00 stderr F I1013 00:22:22.884907 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2025-10-13T00:22:22.885043314+00:00 stderr F W1013 00:22:22.885002 17 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-10-13T00:22:22.885043314+00:00 stderr F I1013 00:22:22.885014 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 
2025-10-13T00:22:22.885213779+00:00 stderr F W1013 00:22:22.885171 17 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-10-13T00:22:22.885213779+00:00 stderr F I1013 00:22:22.885185 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2025-10-13T00:22:22.885319392+00:00 stderr F W1013 00:22:22.885280 17 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-10-13T00:22:22.885319392+00:00 stderr F I1013 00:22:22.885293 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2025-10-13T00:22:22.885766814+00:00 stderr F W1013 00:22:22.885710 17 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-10-13T00:22:22.885766814+00:00 stderr F I1013 00:22:22.885728 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]} 2025-10-13T00:22:22.885842486+00:00 stderr F I1013 00:22:22.885781 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false]} 2025-10-13T00:22:22.885914958+00:00 stderr F I1013 00:22:22.885860 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false]} 2025-10-13T00:22:22.885957829+00:00 stderr F I1013 00:22:22.885916 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false]} 2025-10-13T00:22:22.886059722+00:00 stderr F I1013 00:22:22.885981 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]} 2025-10-13T00:22:22.886118703+00:00 stderr F W1013 00:22:22.886075 17 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-10-13T00:22:22.886118703+00:00 stderr F I1013 00:22:22.886088 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]} 2025-10-13T00:22:22.886192115+00:00 stderr F W1013 00:22:22.886143 17 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 
2025-10-13T00:22:22.886192115+00:00 stderr F I1013 00:22:22.886154 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]} 2025-10-13T00:22:22.886280667+00:00 stderr F I1013 00:22:22.886217 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2025-10-13T00:22:22.886316678+00:00 stderr F W1013 00:22:22.886283 17 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-10-13T00:22:22.886316678+00:00 stderr F I1013 00:22:22.886295 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2025-10-13T00:22:22.886420001+00:00 stderr F W1013 00:22:22.886371 17 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-10-13T00:22:22.886420001+00:00 stderr F I1013 00:22:22.886386 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2025-10-13T00:22:22.886476193+00:00 stderr F W1013 00:22:22.886435 17 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-10-13T00:22:22.886476193+00:00 stderr F I1013 00:22:22.886446 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2025-10-13T00:22:22.886532104+00:00 stderr F W1013 00:22:22.886493 17 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-10-13T00:22:22.886532104+00:00 stderr F I1013 00:22:22.886504 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2025-10-13T00:22:22.886596906+00:00 stderr F W1013 00:22:22.886555 17 feature_gate.go:227] 
unrecognized feature gate: VSphereStaticIPs 2025-10-13T00:22:22.886596906+00:00 stderr F I1013 00:22:22.886566 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]} 2025-10-13T00:22:22.886666208+00:00 stderr F I1013 00:22:22.886610 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2025-10-13T00:22:22.886700339+00:00 stderr F W1013 00:22:22.886663 17 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-10-13T00:22:22.886700339+00:00 stderr F I1013 00:22:22.886674 17 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2025-10-13T00:22:22.886798421+00:00 stderr F Flag --openshift-config has been deprecated, to be removed 2025-10-13T00:22:22.886798421+00:00 stderr F Flag --enable-logs-handler has been deprecated, This flag will be removed in v1.19 2025-10-13T00:22:22.886798421+00:00 stderr F Flag --kubelet-read-only-port has been deprecated, kubelet-read-only-port is deprecated and will be removed. 
2025-10-13T00:22:22.886798421+00:00 stderr F I1013 00:22:22.886774 17 flags.go:64] FLAG: --admission-control="[]" 2025-10-13T00:22:22.886798421+00:00 stderr F I1013 00:22:22.886783 17 flags.go:64] FLAG: --admission-control-config-file="/tmp/kubeapiserver-admission-config.yaml3876501736" 2025-10-13T00:22:22.886814472+00:00 stderr F I1013 00:22:22.886790 17 flags.go:64] FLAG: --advertise-address="192.168.126.11" 2025-10-13T00:22:22.886814472+00:00 stderr F I1013 00:22:22.886797 17 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true" 2025-10-13T00:22:22.886845243+00:00 stderr F I1013 00:22:22.886808 17 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-10-13T00:22:22.886845243+00:00 stderr F I1013 00:22:22.886819 17 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:22:22.886845243+00:00 stderr F I1013 00:22:22.886824 17 flags.go:64] FLAG: --allow-privileged="true" 2025-10-13T00:22:22.886845243+00:00 stderr F I1013 00:22:22.886828 17 flags.go:64] FLAG: --anonymous-auth="true" 2025-10-13T00:22:22.886845243+00:00 stderr F I1013 00:22:22.886833 17 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2025-10-13T00:22:22.886875664+00:00 stderr F I1013 00:22:22.886841 17 flags.go:64] FLAG: --apiserver-count="1" 2025-10-13T00:22:22.886875664+00:00 stderr F I1013 00:22:22.886850 17 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-10-13T00:22:22.886875664+00:00 stderr F I1013 00:22:22.886855 17 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-10-13T00:22:22.886875664+00:00 stderr F I1013 00:22:22.886860 17 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-10-13T00:22:22.886901524+00:00 stderr F I1013 00:22:22.886866 17 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-10-13T00:22:22.886901524+00:00 stderr F I1013 00:22:22.886875 17 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-10-13T00:22:22.886901524+00:00 stderr F I1013 00:22:22.886881 17 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-10-13T00:22:22.886901524+00:00 stderr F I1013 00:22:22.886887 17 flags.go:64] FLAG: --audit-log-compress="false" 2025-10-13T00:22:22.886901524+00:00 stderr F I1013 00:22:22.886892 17 flags.go:64] FLAG: --audit-log-format="json" 2025-10-13T00:22:22.886910724+00:00 stderr F I1013 00:22:22.886898 17 flags.go:64] FLAG: --audit-log-maxage="0" 2025-10-13T00:22:22.886944165+00:00 stderr F I1013 00:22:22.886904 17 flags.go:64] FLAG: --audit-log-maxbackup="10" 2025-10-13T00:22:22.886944165+00:00 stderr F I1013 00:22:22.886912 17 flags.go:64] FLAG: --audit-log-maxsize="200" 2025-10-13T00:22:22.886944165+00:00 stderr F I1013 00:22:22.886917 17 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-10-13T00:22:22.886944165+00:00 stderr F I1013 00:22:22.886922 17 flags.go:64] FLAG: --audit-log-path="/var/log/kube-apiserver/audit.log" 2025-10-13T00:22:22.886944165+00:00 stderr F I1013 00:22:22.886927 17 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-10-13T00:22:22.886944165+00:00 stderr F I1013 00:22:22.886933 17 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886938 17 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886947 17 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886952 17 flags.go:64] FLAG: 
--audit-policy-file="/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-audit-policies/policy.yaml" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886959 17 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886964 17 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886969 17 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886974 17 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-10-13T00:22:22.886991157+00:00 stderr F I1013 00:22:22.886978 17 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-10-13T00:22:22.887006027+00:00 stderr F I1013 00:22:22.886984 17 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-10-13T00:22:22.887006027+00:00 stderr F I1013 00:22:22.886989 17 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-10-13T00:22:22.887017207+00:00 stderr F I1013 00:22:22.886995 17 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-10-13T00:22:22.887017207+00:00 stderr F I1013 00:22:22.887004 17 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-10-13T00:22:22.887024608+00:00 stderr F I1013 00:22:22.887009 17 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-10-13T00:22:22.887024608+00:00 stderr F I1013 00:22:22.887014 17 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-10-13T00:22:22.887032298+00:00 stderr F I1013 00:22:22.887019 17 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-10-13T00:22:22.887032298+00:00 stderr F I1013 00:22:22.887024 17 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-10-13T00:22:22.887039798+00:00 stderr F I1013 00:22:22.887029 17 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-10-13T00:22:22.887046808+00:00 stderr F I1013 00:22:22.887034 17 flags.go:64] FLAG: --authentication-config="" 2025-10-13T00:22:22.887072239+00:00 stderr F I1013 00:22:22.887039 17 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" 2025-10-13T00:22:22.887072239+00:00 stderr F I1013 00:22:22.887049 17 flags.go:64] FLAG: --authentication-token-webhook-config-file="/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig" 2025-10-13T00:22:22.887072239+00:00 stderr F I1013 00:22:22.887055 17 flags.go:64] FLAG: --authentication-token-webhook-version="v1" 2025-10-13T00:22:22.887072239+00:00 stderr F I1013 00:22:22.887060 17 flags.go:64] FLAG: --authorization-config="" 2025-10-13T00:22:22.887108470+00:00 stderr F I1013 00:22:22.887064 17 flags.go:64] FLAG: --authorization-mode="[Scope,SystemMasters,RBAC,Node]" 2025-10-13T00:22:22.887108470+00:00 stderr F I1013 00:22:22.887084 17 flags.go:64] FLAG: --authorization-policy-file="" 2025-10-13T00:22:22.887108470+00:00 stderr F I1013 00:22:22.887088 17 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" 2025-10-13T00:22:22.887108470+00:00 stderr F I1013 00:22:22.887093 17 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" 2025-10-13T00:22:22.887108470+00:00 stderr F I1013 00:22:22.887098 17 flags.go:64] FLAG: --authorization-webhook-config-file="" 2025-10-13T00:22:22.887117720+00:00 stderr F I1013 00:22:22.887104 17 flags.go:64] FLAG: --authorization-webhook-version="v1beta1" 2025-10-13T00:22:22.887117720+00:00 stderr F I1013 00:22:22.887109 17 flags.go:64] FLAG: 
--bind-address="0.0.0.0" 2025-10-13T00:22:22.887155821+00:00 stderr F I1013 00:22:22.887115 17 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:22:22.887155821+00:00 stderr F I1013 00:22:22.887124 17 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:22:22.887155821+00:00 stderr F I1013 00:22:22.887130 17 flags.go:64] FLAG: --cloud-config="" 2025-10-13T00:22:22.887155821+00:00 stderr F I1013 00:22:22.887135 17 flags.go:64] FLAG: --cloud-provider="" 2025-10-13T00:22:22.887186352+00:00 stderr F I1013 00:22:22.887140 17 flags.go:64] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16" 2025-10-13T00:22:22.887186352+00:00 stderr F I1013 00:22:22.887152 17 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-10-13T00:22:22.887186352+00:00 stderr F I1013 00:22:22.887159 17 flags.go:64] FLAG: --contention-profiling="false" 2025-10-13T00:22:22.887186352+00:00 stderr F I1013 00:22:22.887164 17 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2025-10-13T00:22:22.887186352+00:00 stderr F I1013 00:22:22.887170 17 flags.go:64] FLAG: --debug-socket-path="" 2025-10-13T00:22:22.887186352+00:00 stderr F I1013 00:22:22.887175 17 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300" 2025-10-13T00:22:22.887204512+00:00 stderr F I1013 00:22:22.887181 17 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300" 2025-10-13T00:22:22.887204512+00:00 stderr F I1013 00:22:22.887186 17 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-10-13T00:22:22.887237173+00:00 stderr F I1013 00:22:22.887191 17 flags.go:64] FLAG: --delete-collection-workers="1" 2025-10-13T00:22:22.887237173+00:00 stderr F I1013 00:22:22.887200 17 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-10-13T00:22:22.887237173+00:00 stderr F I1013 00:22:22.887206 17 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:22:22.887237173+00:00 stderr F I1013 00:22:22.887211 17 flags.go:64] FLAG: --egress-selector-config-file="" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887217 17 flags.go:64] FLAG: 
--enable-admission-plugins="[CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DefaultIngressClass,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,NodeRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,PersistentVolumeLabel,PodNodeSelector,PodTolerationRestriction,Priority,ResourceQuota,RuntimeClass,ServiceAccount,StorageObjectInUseProtection,TaintNodesByCondition,ValidatingAdmissionWebhook,ValidatingAdmissionPolicy,authorization.openshift.io/RestrictSubjectBindings,authorization.openshift.io/ValidateRoleBindingRestriction,config.openshift.io/DenyDeleteClusterConfiguration,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateConsole,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/ValidateScheduler,image.openshift.io/ImagePolicy,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,quota.openshift.io/ClusterResourceQuota,quota.openshift.io/ValidateClusterResourceQuota,route.openshift.io/IngressAdmission,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/DefaultSecurityContextConstraints,security.openshift.io/SCCExecRestrictions,security.openshift.io/SecurityContextConstraint,security.openshift.io/ValidateSecurityContextConstraints,storage.openshift.io/CSIInlineVolumeSecurity]" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887256 17 flags.go:64] FLAG: --enable-aggregator-routing="true" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887261 17 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887265 17 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887270 17 flags.go:64] FLAG: --enable-logs-handler="false" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887275 17 flags.go:64] FLAG: --enable-priority-and-fairness="true" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887280 17 flags.go:64] FLAG: --encryption-provider-config="" 2025-10-13T00:22:22.887291835+00:00 stderr F I1013 00:22:22.887284 17 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-10-13T00:22:22.887303255+00:00 stderr F I1013 00:22:22.887289 17 flags.go:64] FLAG: --endpoint-reconciler-type="lease" 2025-10-13T00:22:22.887310465+00:00 stderr F I1013 00:22:22.887294 17 flags.go:64] FLAG: --etcd-cafile="/etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-10-13T00:22:22.887310465+00:00 stderr F I1013 00:22:22.887301 17 flags.go:64] FLAG: --etcd-certfile="/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt" 2025-10-13T00:22:22.887317865+00:00 stderr F I1013 00:22:22.887307 17 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-10-13T00:22:22.887342756+00:00 stderr F I1013 00:22:22.887312 17 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-10-13T00:22:22.887342756+00:00 stderr F I1013 00:22:22.887317 17 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-10-13T00:22:22.887381597+00:00 stderr F I1013 00:22:22.887344 17 flags.go:64] FLAG: --etcd-healthcheck-timeout="9s" 2025-10-13T00:22:22.887381597+00:00 stderr F I1013 00:22:22.887353 17 flags.go:64] FLAG: --etcd-keyfile="/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key" 
2025-10-13T00:22:22.887381597+00:00 stderr F I1013 00:22:22.887362 17 flags.go:64] FLAG: --etcd-prefix="kubernetes.io" 2025-10-13T00:22:22.887381597+00:00 stderr F I1013 00:22:22.887368 17 flags.go:64] FLAG: --etcd-readycheck-timeout="9s" 2025-10-13T00:22:22.887390287+00:00 stderr F I1013 00:22:22.887373 17 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379,https://localhost:2379]" 2025-10-13T00:22:22.887427028+00:00 stderr F I1013 00:22:22.887381 17 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-10-13T00:22:22.887427028+00:00 stderr F I1013 00:22:22.887401 17 flags.go:64] FLAG: --event-ttl="3h0m0s" 2025-10-13T00:22:22.887427028+00:00 stderr F I1013 00:22:22.887406 17 flags.go:64] FLAG: --external-hostname="" 2025-10-13T00:22:22.887474000+00:00 stderr F I1013 00:22:22.887411 17 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-10-13T00:22:22.887474000+00:00 stderr F I1013 00:22:22.887440 17 flags.go:64] FLAG: --goaway-chance="0" 2025-10-13T00:22:22.887474000+00:00 stderr F I1013 00:22:22.887447 17 flags.go:64] FLAG: --help="false" 2025-10-13T00:22:22.887474000+00:00 stderr F I1013 00:22:22.887451 17 flags.go:64] FLAG: --http2-max-streams-per-connection="2000" 2025-10-13T00:22:22.887474000+00:00 stderr F I1013 00:22:22.887456 17 flags.go:64] FLAG: --kubelet-certificate-authority="/etc/kubernetes/static-pod-resources/configmaps/kubelet-serving-ca/ca-bundle.crt" 2025-10-13T00:22:22.887474000+00:00 stderr F I1013 00:22:22.887462 17 flags.go:64] FLAG: --kubelet-client-certificate="/etc/kubernetes/static-pod-certs/secrets/kubelet-client/tls.crt" 2025-10-13T00:22:22.887483470+00:00 stderr F I1013 00:22:22.887467 17 flags.go:64] FLAG: --kubelet-client-key="/etc/kubernetes/static-pod-certs/secrets/kubelet-client/tls.key" 2025-10-13T00:22:22.887483470+00:00 stderr F I1013 00:22:22.887473 17 flags.go:64] FLAG: --kubelet-port="10250" 2025-10-13T00:22:22.887512961+00:00 stderr F I1013 00:22:22.887479 17 flags.go:64] FLAG: --kubelet-preferred-address-types="[InternalIP]" 2025-10-13T00:22:22.887512961+00:00 stderr F I1013 00:22:22.887489 17 flags.go:64] FLAG: --kubelet-read-only-port="0" 2025-10-13T00:22:22.887512961+00:00 stderr F I1013 00:22:22.887494 17 flags.go:64] FLAG: --kubelet-timeout="5s" 2025-10-13T00:22:22.887512961+00:00 stderr F I1013 00:22:22.887499 17 flags.go:64] FLAG: --kubernetes-service-node-port="0" 2025-10-13T00:22:22.887512961+00:00 stderr F I1013 00:22:22.887504 17 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-10-13T00:22:22.887545932+00:00 stderr F I1013 00:22:22.887510 17 flags.go:64] FLAG: --livez-grace-period="0s" 2025-10-13T00:22:22.887545932+00:00 stderr F I1013 00:22:22.887532 17 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:22:22.887545932+00:00 stderr F I1013 00:22:22.887537 17 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-10-13T00:22:22.887568932+00:00 stderr F I1013 00:22:22.887543 17 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:22:22.887568932+00:00 stderr F I1013 00:22:22.887548 17 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:22:22.887568932+00:00 stderr F I1013 00:22:22.887553 17 
flags.go:64] FLAG: --max-connection-bytes-per-sec="0" 2025-10-13T00:22:22.887568932+00:00 stderr F I1013 00:22:22.887558 17 flags.go:64] FLAG: --max-mutating-requests-inflight="1000" 2025-10-13T00:22:22.887580402+00:00 stderr F I1013 00:22:22.887562 17 flags.go:64] FLAG: --max-requests-inflight="3000" 2025-10-13T00:22:22.887580402+00:00 stderr F I1013 00:22:22.887567 17 flags.go:64] FLAG: --min-request-timeout="3600" 2025-10-13T00:22:22.887580402+00:00 stderr F I1013 00:22:22.887572 17 flags.go:64] FLAG: --oidc-ca-file="" 2025-10-13T00:22:22.887589113+00:00 stderr F I1013 00:22:22.887577 17 flags.go:64] FLAG: --oidc-client-id="" 2025-10-13T00:22:22.887589113+00:00 stderr F I1013 00:22:22.887582 17 flags.go:64] FLAG: --oidc-groups-claim="" 2025-10-13T00:22:22.887596413+00:00 stderr F I1013 00:22:22.887587 17 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-10-13T00:22:22.887603703+00:00 stderr F I1013 00:22:22.887592 17 flags.go:64] FLAG: --oidc-issuer-url="" 2025-10-13T00:22:22.887603703+00:00 stderr F I1013 00:22:22.887597 17 flags.go:64] FLAG: --oidc-required-claim="" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887603 17 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887614 17 flags.go:64] FLAG: --oidc-username-claim="sub" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887619 17 flags.go:64] FLAG: --oidc-username-prefix="" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887623 17 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887628 17 flags.go:64] FLAG: --peer-advertise-ip="" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887632 17 flags.go:64] FLAG: --peer-advertise-port="" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887637 17 flags.go:64] FLAG: --peer-ca-file="" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887641 17 flags.go:64] FLAG: --permit-address-sharing="true" 2025-10-13T00:22:22.887656805+00:00 stderr F I1013 00:22:22.887646 17 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:22:22.887666685+00:00 stderr F I1013 00:22:22.887650 17 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:22:22.887666685+00:00 stderr F I1013 00:22:22.887655 17 flags.go:64] FLAG: --proxy-client-cert-file="/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt" 2025-10-13T00:22:22.887694306+00:00 stderr F I1013 00:22:22.887661 17 flags.go:64] FLAG: --proxy-client-key-file="/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2025-10-13T00:22:22.887694306+00:00 stderr F I1013 00:22:22.887671 17 flags.go:64] FLAG: --request-timeout="1m0s" 2025-10-13T00:22:22.887694306+00:00 stderr F I1013 00:22:22.887676 17 flags.go:64] FLAG: --requestheader-allowed-names="[kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator]" 2025-10-13T00:22:22.887694306+00:00 stderr F I1013 00:22:22.887683 17 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:22:22.887724186+00:00 stderr F I1013 00:22:22.887689 17 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]" 2025-10-13T00:22:22.887724186+00:00 stderr F I1013 00:22:22.887700 17 flags.go:64] FLAG: --requestheader-group-headers="[X-Remote-Group]" 2025-10-13T00:22:22.887724186+00:00 stderr F I1013 00:22:22.887705 17 flags.go:64] FLAG: 
--requestheader-username-headers="[X-Remote-User]" 2025-10-13T00:22:22.887724186+00:00 stderr F I1013 00:22:22.887712 17 flags.go:64] FLAG: --runtime-config="" 2025-10-13T00:22:22.887739667+00:00 stderr F I1013 00:22:22.887719 17 flags.go:64] FLAG: --secure-port="6443" 2025-10-13T00:22:22.887748347+00:00 stderr F I1013 00:22:22.887725 17 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="true" 2025-10-13T00:22:22.887748347+00:00 stderr F I1013 00:22:22.887730 17 flags.go:64] FLAG: --service-account-extend-token-expiration="true" 2025-10-13T00:22:22.887748347+00:00 stderr F I1013 00:22:22.887735 17 flags.go:64] FLAG: --service-account-issuer="[https://kubernetes.default.svc]" 2025-10-13T00:22:22.887757247+00:00 stderr F I1013 00:22:22.887742 17 flags.go:64] FLAG: --service-account-jwks-uri="https://api.crc.testing:6443/openid/v1/jwks" 2025-10-13T00:22:22.887792508+00:00 stderr F I1013 00:22:22.887748 17 flags.go:64] FLAG: --service-account-key-file="[/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-001.pub,/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-002.pub,/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-003.pub,/etc/kubernetes/static-pod-resources/configmaps/bound-sa-token-signing-certs/service-account-001.pub]" 2025-10-13T00:22:22.887792508+00:00 stderr F I1013 00:22:22.887763 17 flags.go:64] FLAG: --service-account-lookup="true" 2025-10-13T00:22:22.887792508+00:00 stderr F I1013 00:22:22.887768 17 flags.go:64] FLAG: --service-account-max-token-expiration="0s" 2025-10-13T00:22:22.887792508+00:00 stderr F I1013 00:22:22.887773 17 flags.go:64] FLAG: --service-account-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/bound-service-account-signing-key/service-account.key" 2025-10-13T00:22:22.887792508+00:00 stderr F I1013 00:22:22.887779 17 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-10-13T00:22:22.887822319+00:00 stderr F I1013 00:22:22.887784 17 flags.go:64] FLAG: --service-node-port-range="30000-32767" 2025-10-13T00:22:22.887822319+00:00 stderr F I1013 00:22:22.887795 17 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:22:22.887822319+00:00 stderr F I1013 00:22:22.887800 17 flags.go:64] FLAG: --shutdown-delay-duration="0s" 2025-10-13T00:22:22.887822319+00:00 stderr F I1013 00:22:22.887805 17 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-10-13T00:22:22.887822319+00:00 stderr F I1013 00:22:22.887810 17 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-10-13T00:22:22.887822319+00:00 stderr F I1013 00:22:22.887814 17 flags.go:64] FLAG: --storage-backend="etcd3" 2025-10-13T00:22:22.887832669+00:00 stderr F I1013 00:22:22.887819 17 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:22:22.887840009+00:00 stderr F I1013 00:22:22.887824 17 flags.go:64] FLAG: --strict-transport-security-directives="[max-age=31536000,includeSubDomains,preload]" 2025-10-13T00:22:22.887840009+00:00 stderr F I1013 00:22:22.887831 17 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt" 2025-10-13T00:22:22.887885781+00:00 stderr F I1013 00:22:22.887838 17 flags.go:64] FLAG: 
--tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:22:22.887885781+00:00 stderr F I1013 00:22:22.887859 17 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:22:22.887885781+00:00 stderr F I1013 00:22:22.887864 17 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-10-13T00:22:22.887927902+00:00 stderr F I1013 00:22:22.887870 17 flags.go:64] FLAG: --tls-sni-cert-key="[/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key;/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt,/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key]" 2025-10-13T00:22:22.887927902+00:00 stderr F I1013 00:22:22.887896 17 flags.go:64] FLAG: --token-auth-file="" 2025-10-13T00:22:22.887927902+00:00 stderr F I1013 00:22:22.887901 17 flags.go:64] FLAG: --tracing-config-file="" 2025-10-13T00:22:22.887927902+00:00 stderr F I1013 00:22:22.887905 17 flags.go:64] FLAG: --v="2" 2025-10-13T00:22:22.887927902+00:00 stderr F I1013 00:22:22.887911 17 flags.go:64] FLAG: --version="false" 2025-10-13T00:22:22.887927902+00:00 stderr F I1013 00:22:22.887917 17 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:22:22.887941112+00:00 stderr F I1013 00:22:22.887923 17 flags.go:64] FLAG: --watch-cache="true" 2025-10-13T00:22:22.887941112+00:00 stderr F I1013 00:22:22.887928 17 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-10-13T00:22:22.888041415+00:00 stderr F I1013 00:22:22.888001 17 options.go:222] external host was not specified, using 192.168.126.11 2025-10-13T00:22:22.890037939+00:00 stderr F I1013 00:22:22.889952 17 server.go:189] Version: v1.29.5+29c95f3 2025-10-13T00:22:22.890037939+00:00 stderr F I1013 00:22:22.889996 17 server.go:191] "Golang settings" GOGC="100" GOMAXPROCS="" GOTRACEBACK="" 2025-10-13T00:22:22.890993324+00:00 stderr F I1013 00:22:22.890934 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-10-13T00:22:22.891392885+00:00 stderr F I1013 00:22:22.891340 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" 2025-10-13T00:22:22.892041032+00:00 stderr F I1013 00:22:22.891972 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-10-13T00:22:22.892654069+00:00 stderr F I1013 00:22:22.892598 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" 2025-10-13T00:22:22.893305916+00:00 stderr F I1013 00:22:22.893230 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" 2025-10-13T00:22:22.893970854+00:00 stderr F I1013 00:22:22.893897 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" 2025-10-13T00:22:23.436079303+00:00 stderr F I1013 00:22:23.435935 17 apf_controller.go:292] NewTestableController "Controller" with serverConcurrencyLimit=4000, name=Controller, asFieldManager="api-priority-and-fairness-config-consumer-v1" 2025-10-13T00:22:23.436218687+00:00 stderr F I1013 00:22:23.436042 17 apf_controller.go:861] Introducing queues for priority level "exempt": config={"type":"Exempt","exempt":{"nominalConcurrencyShares":0,"lendablePercent":0}}, nominalCL=0, lendableCL=0, borrowingCL=4000, currentCL=0, quiescing=false (shares=0xc00056de70, shareSum=5) 2025-10-13T00:22:23.436218687+00:00 stderr F I1013 00:22:23.436154 17 apf_controller.go:861] Introducing queues for priority level "catch-all": config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=4000, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false (shares=0xc0000178c0, shareSum=5) 2025-10-13T00:22:23.446144603+00:00 stderr F I1013 00:22:23.446008 17 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:22:23.446228206+00:00 stderr F I1013 00:22:23.446126 17 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:22:23.454871378+00:00 stderr F I1013 00:22:23.451314 17 audit.go:340] Using audit backend: ignoreErrors 2025-10-13T00:22:23.454871378+00:00 stderr F I1013 00:22:23.452371 17 shared_informer.go:311] Waiting for caches to sync for node_authorizer 2025-10-13T00:22:23.466451370+00:00 stderr F I1013 00:22:23.466065 17 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2025-10-13T00:22:23.466451370+00:00 stderr F I1013 00:22:23.466354 17 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:22:23.466504391+00:00 stderr F I1013 00:22:23.466429 17 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:22:23.471220708+00:00 stderr F W1013 
00:22:23.469671 17 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 2025-10-13T00:22:23.471220708+00:00 stderr F I1013 00:22:23.469856 17 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. 2025-10-13T00:22:23.471220708+00:00 stderr F I1013 00:22:23.469972 17 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. 2025-10-13T00:22:23.471220708+00:00 stderr F I1013 00:22:23.469984 17 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. 2025-10-13T00:22:23.477480546+00:00 stderr F I1013 00:22:23.477317 17 plugins.go:157] Loaded 23 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,autoscaling.openshift.io/ManagementCPUsOverride,scheduling.openshift.io/OriginPodNodeEnvironment,image.openshift.io/ImagePolicy,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,autoscaling.openshift.io/MixedCPUs,route.openshift.io/DefaultRoute,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. 2025-10-13T00:22:23.477480546+00:00 stderr F I1013 00:22:23.477398 17 plugins.go:160] Loaded 47 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,autoscaling.openshift.io/ManagementCPUsOverride,authorization.openshift.io/RestrictSubjectBindings,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,image.openshift.io/ImagePolicy,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,storage.openshift.io/CSIInlineVolumeSecurity,autoscaling.openshift.io/ManagedNode,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateConsole,operator.openshift.io/ValidateDNS,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/DenyDeleteClusterConfiguration,operator.openshift.io/DenyDeleteClusterOperators,config.openshift.io/ValidateScheduler,quota.openshift.io/ValidateClusterResourceQuota,security.openshift.io/ValidateSecurityContextConstraints,authorization.openshift.io/ValidateRoleBindingRestriction,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota,quota.openshift.io/ClusterResourceQuota. 
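The two "Loaded ... admission controller(s) successfully in the following order: ..." records above enumerate the effective mutating and validating chains (23 and 47 plugins here). A small sketch, assuming that message layout, which extracts the ordered plugin names and checks the list length against the logged count; parse_admission_chain is an illustrative helper, and the sample is the mutating message from this log.

    # Sketch: pull the ordered plugin chain out of a
    # "Loaded N <kind> admission controller(s) successfully in the following order: ..."
    # record and confirm it matches the logged count. Regex inferred from this log.
    import re

    LOADED_RE = re.compile(
        r"Loaded (\d+) (mutating|validating) admission controller\(s\) successfully "
        r"in the following order: (.+?)\.?$"
    )

    def parse_admission_chain(message):
        m = LOADED_RE.search(message)
        if not m:
            return None
        count, kind = int(m.group(1)), m.group(2)
        names = [n.strip() for n in m.group(3).split(",") if n.strip()]
        assert len(names) == count, f"logged {count} {kind} plugins, parsed {len(names)}"
        return kind, names

    sample = ("Loaded 23 mutating admission controller(s) successfully in the following order: "
              "NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,"
              "PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,"
              "PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,"
              "DefaultIngressClass,autoscaling.openshift.io/ManagementCPUsOverride,"
              "scheduling.openshift.io/OriginPodNodeEnvironment,image.openshift.io/ImagePolicy,"
              "security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,"
              "autoscaling.openshift.io/MixedCPUs,route.openshift.io/DefaultRoute,"
              "security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook.")
    kind, chain = parse_admission_chain(sample)
    print(kind, len(chain), chain[0], "...", chain[-1])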
2025-10-13T00:22:23.477912958+00:00 stderr F I1013 00:22:23.477846 17 instance.go:297] Using reconciler: lease 2025-10-13T00:22:23.505482089+00:00 stderr F I1013 00:22:23.498923 17 store.go:1579] "Monitoring resource count at path" resource="customresourcedefinitions.apiextensions.k8s.io" path="//apiextensions.k8s.io/customresourcedefinitions" 2025-10-13T00:22:23.505482089+00:00 stderr F I1013 00:22:23.501743 17 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.505482089+00:00 stderr F W1013 00:22:23.501765 17 genericapiserver.go:774] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.531951101+00:00 stderr F I1013 00:22:23.531813 17 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2025-10-13T00:22:23.540796639+00:00 stderr F I1013 00:22:23.540638 17 store.go:1579] "Monitoring resource count at path" resource="resourcequotas" path="//resourcequotas" 2025-10-13T00:22:23.545580038+00:00 stderr F I1013 00:22:23.543417 17 cacher.go:460] cacher (resourcequotas): initialized 2025-10-13T00:22:23.545580038+00:00 stderr F I1013 00:22:23.543442 17 reflector.go:351] Caches populated for *core.ResourceQuota from storage/cacher.go:/resourcequotas 2025-10-13T00:22:23.549938675+00:00 stderr F I1013 00:22:23.549861 17 store.go:1579] "Monitoring resource count at path" resource="secrets" path="//secrets" 2025-10-13T00:22:23.563364196+00:00 stderr F I1013 00:22:23.562692 17 store.go:1579] "Monitoring resource count at path" resource="configmaps" path="//configmaps" 2025-10-13T00:22:23.579409406+00:00 stderr F I1013 00:22:23.578197 17 store.go:1579] "Monitoring resource count at path" resource="namespaces" path="//namespaces" 2025-10-13T00:22:23.585341556+00:00 stderr F I1013 00:22:23.585212 17 cacher.go:460] cacher (namespaces): initialized 2025-10-13T00:22:23.585341556+00:00 stderr F I1013 00:22:23.585247 17 reflector.go:351] Caches populated for *core.Namespace from storage/cacher.go:/namespaces 2025-10-13T00:22:23.595793927+00:00 stderr F I1013 00:22:23.595676 17 store.go:1579] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts" 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.607788 17 cacher.go:460] cacher (secrets): initialized 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.607833 17 reflector.go:351] Caches populated for *core.Secret from storage/cacher.go:/secrets 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.609633 17 store.go:1579] "Monitoring resource count at path" resource="podtemplates" path="//podtemplates" 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.611384 17 cacher.go:460] cacher (serviceaccounts): initialized 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.611413 17 reflector.go:351] Caches populated for *core.ServiceAccount from storage/cacher.go:/serviceaccounts 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.612253 17 cacher.go:460] cacher (podtemplates): initialized 2025-10-13T00:22:23.613424401+00:00 stderr F I1013 00:22:23.612284 17 reflector.go:351] Caches populated for *core.PodTemplate from storage/cacher.go:/podtemplates 2025-10-13T00:22:23.618379564+00:00 stderr F I1013 00:22:23.618313 17 store.go:1579] "Monitoring resource count at path" resource="limitranges" path="//limitranges" 2025-10-13T00:22:23.619137515+00:00 stderr F I1013 00:22:23.619100 17 cacher.go:460] cacher (limitranges): initialized 2025-10-13T00:22:23.619166625+00:00 stderr F 
I1013 00:22:23.619124 17 reflector.go:351] Caches populated for *core.LimitRange from storage/cacher.go:/limitranges 2025-10-13T00:22:23.627961692+00:00 stderr F I1013 00:22:23.627893 17 store.go:1579] "Monitoring resource count at path" resource="persistentvolumes" path="//persistentvolumes" 2025-10-13T00:22:23.630308675+00:00 stderr F I1013 00:22:23.629927 17 cacher.go:460] cacher (configmaps): initialized 2025-10-13T00:22:23.630308675+00:00 stderr F I1013 00:22:23.630280 17 reflector.go:351] Caches populated for *core.ConfigMap from storage/cacher.go:/configmaps 2025-10-13T00:22:23.631361923+00:00 stderr F I1013 00:22:23.631217 17 cacher.go:460] cacher (persistentvolumes): initialized 2025-10-13T00:22:23.631361923+00:00 stderr F I1013 00:22:23.631252 17 reflector.go:351] Caches populated for *core.PersistentVolume from storage/cacher.go:/persistentvolumes 2025-10-13T00:22:23.636211074+00:00 stderr F I1013 00:22:23.636137 17 store.go:1579] "Monitoring resource count at path" resource="persistentvolumeclaims" path="//persistentvolumeclaims" 2025-10-13T00:22:23.639029500+00:00 stderr F I1013 00:22:23.638975 17 cacher.go:460] cacher (persistentvolumeclaims): initialized 2025-10-13T00:22:23.639029500+00:00 stderr F I1013 00:22:23.638996 17 reflector.go:351] Caches populated for *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims 2025-10-13T00:22:23.644977930+00:00 stderr F I1013 00:22:23.644850 17 store.go:1579] "Monitoring resource count at path" resource="endpoints" path="//services/endpoints" 2025-10-13T00:22:23.646903971+00:00 stderr F I1013 00:22:23.646852 17 cacher.go:460] cacher (endpoints): initialized 2025-10-13T00:22:23.646903971+00:00 stderr F I1013 00:22:23.646872 17 reflector.go:351] Caches populated for *core.Endpoints from storage/cacher.go:/services/endpoints 2025-10-13T00:22:23.653152819+00:00 stderr F I1013 00:22:23.653098 17 store.go:1579] "Monitoring resource count at path" resource="nodes" path="//minions" 2025-10-13T00:22:23.656202511+00:00 stderr F I1013 00:22:23.656145 17 cacher.go:460] cacher (nodes): initialized 2025-10-13T00:22:23.656202511+00:00 stderr F I1013 00:22:23.656179 17 reflector.go:351] Caches populated for *core.Node from storage/cacher.go:/minions 2025-10-13T00:22:23.668505062+00:00 stderr F I1013 00:22:23.668420 17 store.go:1579] "Monitoring resource count at path" resource="pods" path="//pods" 2025-10-13T00:22:23.675877550+00:00 stderr F I1013 00:22:23.675816 17 store.go:1579] "Monitoring resource count at path" resource="services" path="//services/specs" 2025-10-13T00:22:23.676812476+00:00 stderr F I1013 00:22:23.676774 17 cacher.go:460] cacher (pods): initialized 2025-10-13T00:22:23.676812476+00:00 stderr F I1013 00:22:23.676797 17 reflector.go:351] Caches populated for *core.Pod from storage/cacher.go:/pods 2025-10-13T00:22:23.678456150+00:00 stderr F I1013 00:22:23.678211 17 cacher.go:460] cacher (services): initialized 2025-10-13T00:22:23.678456150+00:00 stderr F I1013 00:22:23.678232 17 reflector.go:351] Caches populated for *core.Service from storage/cacher.go:/services/specs 2025-10-13T00:22:23.683143736+00:00 stderr F I1013 00:22:23.683094 17 store.go:1579] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts" 2025-10-13T00:22:23.688860600+00:00 stderr F I1013 00:22:23.688819 17 cacher.go:460] cacher (serviceaccounts): initialized 2025-10-13T00:22:23.688860600+00:00 stderr F I1013 00:22:23.688838 17 reflector.go:351] Caches populated for *core.ServiceAccount from 
storage/cacher.go:/serviceaccounts 2025-10-13T00:22:23.690281798+00:00 stderr F I1013 00:22:23.690245 17 store.go:1579] "Monitoring resource count at path" resource="replicationcontrollers" path="//controllers" 2025-10-13T00:22:23.690977287+00:00 stderr F I1013 00:22:23.690942 17 instance.go:707] Enabling API group "". 2025-10-13T00:22:23.691740287+00:00 stderr F I1013 00:22:23.691702 17 cacher.go:460] cacher (replicationcontrollers): initialized 2025-10-13T00:22:23.691740287+00:00 stderr F I1013 00:22:23.691716 17 reflector.go:351] Caches populated for *core.ReplicationController from storage/cacher.go:/controllers 2025-10-13T00:22:23.705225360+00:00 stderr F I1013 00:22:23.705143 17 handler.go:275] Adding GroupVersion v1 to ResourceManager 2025-10-13T00:22:23.705493957+00:00 stderr F I1013 00:22:23.705463 17 instance.go:694] API group "internal.apiserver.k8s.io" is not enabled, skipping. 2025-10-13T00:22:23.705579559+00:00 stderr F I1013 00:22:23.705559 17 instance.go:707] Enabling API group "authentication.k8s.io". 2025-10-13T00:22:23.705669122+00:00 stderr F I1013 00:22:23.705649 17 instance.go:707] Enabling API group "authorization.k8s.io". 2025-10-13T00:22:23.712531356+00:00 stderr F I1013 00:22:23.712490 17 cacher.go:460] cacher (customresourcedefinitions.apiextensions.k8s.io): initialized 2025-10-13T00:22:23.712631589+00:00 stderr F I1013 00:22:23.712598 17 reflector.go:351] Caches populated for *apiextensions.CustomResourceDefinition from storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions 2025-10-13T00:22:23.716031280+00:00 stderr F I1013 00:22:23.715976 17 store.go:1579] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" 2025-10-13T00:22:23.720176612+00:00 stderr F I1013 00:22:23.720125 17 cacher.go:460] cacher (horizontalpodautoscalers.autoscaling): initialized 2025-10-13T00:22:23.720226063+00:00 stderr F I1013 00:22:23.720152 17 reflector.go:351] Caches populated for *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers 2025-10-13T00:22:23.725457864+00:00 stderr F I1013 00:22:23.725402 17 store.go:1579] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers" 2025-10-13T00:22:23.725500255+00:00 stderr F I1013 00:22:23.725467 17 instance.go:707] Enabling API group "autoscaling". 2025-10-13T00:22:23.726135992+00:00 stderr F I1013 00:22:23.726093 17 cacher.go:460] cacher (horizontalpodautoscalers.autoscaling): initialized 2025-10-13T00:22:23.726135992+00:00 stderr F I1013 00:22:23.726117 17 reflector.go:351] Caches populated for *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers 2025-10-13T00:22:23.732228746+00:00 stderr F I1013 00:22:23.732174 17 store.go:1579] "Monitoring resource count at path" resource="jobs.batch" path="//jobs" 2025-10-13T00:22:23.733692335+00:00 stderr F I1013 00:22:23.733658 17 cacher.go:460] cacher (jobs.batch): initialized 2025-10-13T00:22:23.733771487+00:00 stderr F I1013 00:22:23.733751 17 reflector.go:351] Caches populated for *batch.Job from storage/cacher.go:/jobs 2025-10-13T00:22:23.738841454+00:00 stderr F I1013 00:22:23.738800 17 store.go:1579] "Monitoring resource count at path" resource="cronjobs.batch" path="//cronjobs" 2025-10-13T00:22:23.738879775+00:00 stderr F I1013 00:22:23.738858 17 instance.go:707] Enabling API group "batch". 
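Each entry in this stream is a CRI-O style line -- an RFC 3339 timestamp, the stream name (stderr), a P/F continuation tag, and a payload -- and the payload carries a klog header (severity and date, time, PID, source file:line) before the message. A minimal sketch that splits a raw chunk back into structured records under that assumption; the regexes and field names are inferred from this log, not taken from any official parser, and the sample is two records from the lines above.

    # Sketch: split a raw chunk of this log into individual records and peel off the
    # CRI-O prefix (timestamp, stream, P/F tag) and the klog header
    # ("I1013 00:22:23.690942 17 instance.go:707] ..."). Layout inferred from this log.
    import re

    CRIO_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+\+\d{2}:\d{2}) "
        r"(?P<stream>stdout|stderr) (?P<tag>[PF]) (?P<payload>.*?)"
        r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+\+\d{2}:\d{2} |\Z)",
        re.S,
    )
    KLOG_RE = re.compile(
        r"(?P<sev>[IWEF])(?P<date>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)",
        re.S,
    )

    def records(chunk):
        for crio in CRIO_RE.finditer(chunk):
            rec = crio.groupdict()
            payload = rec.pop("payload").strip()
            klog = KLOG_RE.match(payload)
            rec.update(klog.groupdict() if klog else {"msg": payload})
            yield rec

    sample = (
        '2025-10-13T00:22:23.690977287+00:00 stderr F I1013 00:22:23.690942 17 '
        'instance.go:707] Enabling API group "". '
        '2025-10-13T00:22:23.705225360+00:00 stderr F I1013 00:22:23.705143 17 '
        'handler.go:275] Adding GroupVersion v1 to ResourceManager'
    )
    for r in records(sample):
        print(r["ts"], r["src"], r["msg"])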
2025-10-13T00:22:23.739664906+00:00 stderr F I1013 00:22:23.739632 17 cacher.go:460] cacher (cronjobs.batch): initialized 2025-10-13T00:22:23.739731308+00:00 stderr F I1013 00:22:23.739712 17 reflector.go:351] Caches populated for *batch.CronJob from storage/cacher.go:/cronjobs 2025-10-13T00:22:23.745076961+00:00 stderr F I1013 00:22:23.745016 17 store.go:1579] "Monitoring resource count at path" resource="certificatesigningrequests.certificates.k8s.io" path="//certificatesigningrequests" 2025-10-13T00:22:23.745119453+00:00 stderr F I1013 00:22:23.745075 17 instance.go:707] Enabling API group "certificates.k8s.io". 2025-10-13T00:22:23.747300551+00:00 stderr F I1013 00:22:23.747232 17 cacher.go:460] cacher (certificatesigningrequests.certificates.k8s.io): initialized 2025-10-13T00:22:23.747300551+00:00 stderr F I1013 00:22:23.747259 17 reflector.go:351] Caches populated for *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests 2025-10-13T00:22:23.755726588+00:00 stderr F I1013 00:22:23.755667 17 store.go:1579] "Monitoring resource count at path" resource="leases.coordination.k8s.io" path="//leases" 2025-10-13T00:22:23.755767909+00:00 stderr F I1013 00:22:23.755711 17 instance.go:707] Enabling API group "coordination.k8s.io". 2025-10-13T00:22:23.757884906+00:00 stderr F I1013 00:22:23.757830 17 cacher.go:460] cacher (leases.coordination.k8s.io): initialized 2025-10-13T00:22:23.757884906+00:00 stderr F I1013 00:22:23.757859 17 reflector.go:351] Caches populated for *coordination.Lease from storage/cacher.go:/leases 2025-10-13T00:22:23.764354470+00:00 stderr F I1013 00:22:23.764278 17 store.go:1579] "Monitoring resource count at path" resource="endpointslices.discovery.k8s.io" path="//endpointslices" 2025-10-13T00:22:23.764354470+00:00 stderr F I1013 00:22:23.764315 17 instance.go:707] Enabling API group "discovery.k8s.io". 2025-10-13T00:22:23.769357484+00:00 stderr F I1013 00:22:23.769290 17 cacher.go:460] cacher (endpointslices.discovery.k8s.io): initialized 2025-10-13T00:22:23.769357484+00:00 stderr F I1013 00:22:23.769318 17 reflector.go:351] Caches populated for *discovery.EndpointSlice from storage/cacher.go:/endpointslices 2025-10-13T00:22:23.771738488+00:00 stderr F I1013 00:22:23.771690 17 store.go:1579] "Monitoring resource count at path" resource="networkpolicies.networking.k8s.io" path="//networkpolicies" 2025-10-13T00:22:23.772412516+00:00 stderr F I1013 00:22:23.772368 17 cacher.go:460] cacher (networkpolicies.networking.k8s.io): initialized 2025-10-13T00:22:23.772412516+00:00 stderr F I1013 00:22:23.772395 17 reflector.go:351] Caches populated for *networking.NetworkPolicy from storage/cacher.go:/networkpolicies 2025-10-13T00:22:23.778268384+00:00 stderr F I1013 00:22:23.778229 17 store.go:1579] "Monitoring resource count at path" resource="ingresses.networking.k8s.io" path="//ingress" 2025-10-13T00:22:23.779217030+00:00 stderr F I1013 00:22:23.779172 17 cacher.go:460] cacher (ingresses.networking.k8s.io): initialized 2025-10-13T00:22:23.779217030+00:00 stderr F I1013 00:22:23.779196 17 reflector.go:351] Caches populated for *networking.Ingress from storage/cacher.go:/ingress 2025-10-13T00:22:23.785235611+00:00 stderr F I1013 00:22:23.785026 17 store.go:1579] "Monitoring resource count at path" resource="ingressclasses.networking.k8s.io" path="//ingressclasses" 2025-10-13T00:22:23.785235611+00:00 stderr F I1013 00:22:23.785077 17 instance.go:707] Enabling API group "networking.k8s.io". 
2025-10-13T00:22:23.786723121+00:00 stderr F I1013 00:22:23.786664 17 cacher.go:460] cacher (ingressclasses.networking.k8s.io): initialized 2025-10-13T00:22:23.786770983+00:00 stderr F I1013 00:22:23.786701 17 reflector.go:351] Caches populated for *networking.IngressClass from storage/cacher.go:/ingressclasses 2025-10-13T00:22:23.793127954+00:00 stderr F I1013 00:22:23.793069 17 store.go:1579] "Monitoring resource count at path" resource="runtimeclasses.node.k8s.io" path="//runtimeclasses" 2025-10-13T00:22:23.793178865+00:00 stderr F I1013 00:22:23.793116 17 instance.go:707] Enabling API group "node.k8s.io". 2025-10-13T00:22:23.793908535+00:00 stderr F I1013 00:22:23.793867 17 cacher.go:460] cacher (runtimeclasses.node.k8s.io): initialized 2025-10-13T00:22:23.793908535+00:00 stderr F I1013 00:22:23.793887 17 reflector.go:351] Caches populated for *node.RuntimeClass from storage/cacher.go:/runtimeclasses 2025-10-13T00:22:23.801892959+00:00 stderr F I1013 00:22:23.801796 17 store.go:1579] "Monitoring resource count at path" resource="poddisruptionbudgets.policy" path="//poddisruptionbudgets" 2025-10-13T00:22:23.801892959+00:00 stderr F I1013 00:22:23.801844 17 instance.go:707] Enabling API group "policy". 2025-10-13T00:22:23.805616999+00:00 stderr F I1013 00:22:23.802933 17 cacher.go:460] cacher (poddisruptionbudgets.policy): initialized 2025-10-13T00:22:23.805616999+00:00 stderr F I1013 00:22:23.802952 17 reflector.go:351] Caches populated for *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets 2025-10-13T00:22:23.811885108+00:00 stderr F I1013 00:22:23.811735 17 store.go:1579] "Monitoring resource count at path" resource="roles.rbac.authorization.k8s.io" path="//roles" 2025-10-13T00:22:23.814605601+00:00 stderr F I1013 00:22:23.814546 17 cacher.go:460] cacher (roles.rbac.authorization.k8s.io): initialized 2025-10-13T00:22:23.814605601+00:00 stderr F I1013 00:22:23.814576 17 reflector.go:351] Caches populated for *rbac.Role from storage/cacher.go:/roles 2025-10-13T00:22:23.819742339+00:00 stderr F I1013 00:22:23.819677 17 store.go:1579] "Monitoring resource count at path" resource="rolebindings.rbac.authorization.k8s.io" path="//rolebindings" 2025-10-13T00:22:23.824976550+00:00 stderr F I1013 00:22:23.824909 17 cacher.go:460] cacher (rolebindings.rbac.authorization.k8s.io): initialized 2025-10-13T00:22:23.824976550+00:00 stderr F I1013 00:22:23.824929 17 reflector.go:351] Caches populated for *rbac.RoleBinding from storage/cacher.go:/rolebindings 2025-10-13T00:22:23.825791232+00:00 stderr F I1013 00:22:23.825729 17 store.go:1579] "Monitoring resource count at path" resource="clusterroles.rbac.authorization.k8s.io" path="//clusterroles" 2025-10-13T00:22:23.835074502+00:00 stderr F I1013 00:22:23.834931 17 store.go:1579] "Monitoring resource count at path" resource="clusterrolebindings.rbac.authorization.k8s.io" path="//clusterrolebindings" 2025-10-13T00:22:23.835074502+00:00 stderr F I1013 00:22:23.834997 17 instance.go:707] Enabling API group "rbac.authorization.k8s.io". 
2025-10-13T00:22:23.838741440+00:00 stderr F I1013 00:22:23.838639 17 cacher.go:460] cacher (clusterroles.rbac.authorization.k8s.io): initialized 2025-10-13T00:22:23.838741440+00:00 stderr F I1013 00:22:23.838669 17 reflector.go:351] Caches populated for *rbac.ClusterRole from storage/cacher.go:/clusterroles 2025-10-13T00:22:23.838828423+00:00 stderr F I1013 00:22:23.838696 17 cacher.go:460] cacher (clusterrolebindings.rbac.authorization.k8s.io): initialized 2025-10-13T00:22:23.838828423+00:00 stderr F I1013 00:22:23.838709 17 reflector.go:351] Caches populated for *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings 2025-10-13T00:22:23.843223031+00:00 stderr F I1013 00:22:23.843175 17 store.go:1579] "Monitoring resource count at path" resource="priorityclasses.scheduling.k8s.io" path="//priorityclasses" 2025-10-13T00:22:23.843275592+00:00 stderr F I1013 00:22:23.843204 17 instance.go:707] Enabling API group "scheduling.k8s.io". 2025-10-13T00:22:23.845315697+00:00 stderr F I1013 00:22:23.845274 17 cacher.go:460] cacher (priorityclasses.scheduling.k8s.io): initialized 2025-10-13T00:22:23.845423300+00:00 stderr F I1013 00:22:23.845394 17 reflector.go:351] Caches populated for *scheduling.PriorityClass from storage/cacher.go:/priorityclasses 2025-10-13T00:22:23.849911371+00:00 stderr F I1013 00:22:23.849865 17 store.go:1579] "Monitoring resource count at path" resource="storageclasses.storage.k8s.io" path="//storageclasses" 2025-10-13T00:22:23.851173275+00:00 stderr F I1013 00:22:23.851125 17 cacher.go:460] cacher (storageclasses.storage.k8s.io): initialized 2025-10-13T00:22:23.851173275+00:00 stderr F I1013 00:22:23.851148 17 reflector.go:351] Caches populated for *storage.StorageClass from storage/cacher.go:/storageclasses 2025-10-13T00:22:23.856238111+00:00 stderr F I1013 00:22:23.856195 17 store.go:1579] "Monitoring resource count at path" resource="volumeattachments.storage.k8s.io" path="//volumeattachments" 2025-10-13T00:22:23.857188046+00:00 stderr F I1013 00:22:23.857125 17 cacher.go:460] cacher (volumeattachments.storage.k8s.io): initialized 2025-10-13T00:22:23.857231617+00:00 stderr F I1013 00:22:23.857172 17 reflector.go:351] Caches populated for *storage.VolumeAttachment from storage/cacher.go:/volumeattachments 2025-10-13T00:22:23.863733032+00:00 stderr F I1013 00:22:23.863677 17 store.go:1579] "Monitoring resource count at path" resource="csinodes.storage.k8s.io" path="//csinodes" 2025-10-13T00:22:23.864741889+00:00 stderr F I1013 00:22:23.864696 17 cacher.go:460] cacher (csinodes.storage.k8s.io): initialized 2025-10-13T00:22:23.864781941+00:00 stderr F I1013 00:22:23.864717 17 reflector.go:351] Caches populated for *storage.CSINode from storage/cacher.go:/csinodes 2025-10-13T00:22:23.871140442+00:00 stderr F I1013 00:22:23.871028 17 store.go:1579] "Monitoring resource count at path" resource="csidrivers.storage.k8s.io" path="//csidrivers" 2025-10-13T00:22:23.872210650+00:00 stderr F I1013 00:22:23.872148 17 cacher.go:460] cacher (csidrivers.storage.k8s.io): initialized 2025-10-13T00:22:23.872210650+00:00 stderr F I1013 00:22:23.872174 17 reflector.go:351] Caches populated for *storage.CSIDriver from storage/cacher.go:/csidrivers 2025-10-13T00:22:23.877570664+00:00 stderr F I1013 00:22:23.877494 17 store.go:1579] "Monitoring resource count at path" resource="csistoragecapacities.storage.k8s.io" path="//csistoragecapacities" 2025-10-13T00:22:23.877630796+00:00 stderr F I1013 00:22:23.877589 17 instance.go:707] Enabling API group "storage.k8s.io". 
2025-10-13T00:22:23.878526900+00:00 stderr F I1013 00:22:23.878480 17 cacher.go:460] cacher (csistoragecapacities.storage.k8s.io): initialized 2025-10-13T00:22:23.878526900+00:00 stderr F I1013 00:22:23.878503 17 reflector.go:351] Caches populated for *storage.CSIStorageCapacity from storage/cacher.go:/csistoragecapacities 2025-10-13T00:22:23.884924592+00:00 stderr F I1013 00:22:23.884854 17 store.go:1579] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas" 2025-10-13T00:22:23.886675099+00:00 stderr F I1013 00:22:23.886619 17 cacher.go:460] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized 2025-10-13T00:22:23.886675099+00:00 stderr F I1013 00:22:23.886643 17 reflector.go:351] Caches populated for *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas 2025-10-13T00:22:23.892102215+00:00 stderr F I1013 00:22:23.892053 17 store.go:1579] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations" 2025-10-13T00:22:23.893115352+00:00 stderr F I1013 00:22:23.893074 17 cacher.go:460] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized 2025-10-13T00:22:23.893170114+00:00 stderr F I1013 00:22:23.893096 17 reflector.go:351] Caches populated for *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations 2025-10-13T00:22:23.899282848+00:00 stderr F I1013 00:22:23.899239 17 store.go:1579] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas" 2025-10-13T00:22:23.901031565+00:00 stderr F I1013 00:22:23.900972 17 cacher.go:460] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized 2025-10-13T00:22:23.901031565+00:00 stderr F I1013 00:22:23.900995 17 reflector.go:351] Caches populated for *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas 2025-10-13T00:22:23.906142753+00:00 stderr F I1013 00:22:23.906097 17 store.go:1579] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations" 2025-10-13T00:22:23.906192594+00:00 stderr F I1013 00:22:23.906174 17 instance.go:707] Enabling API group "flowcontrol.apiserver.k8s.io". 
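The apf_controller records earlier in this log report serverConcurrencyLimit=4000 and a nominalCL per priority level derived from its nominalConcurrencyShares (0 for "exempt", 5 for "catch-all") against shareSum=5. A tiny sketch that reproduces those numbers, assuming the proportional split nominalCL = ceil(SCL * shares / shareSum) implied by the logged values; nominal_cl is an illustrative name.

    # Sketch: reproduce the nominalCL values reported by apf_controller.go above,
    # assuming a proportional split of the server concurrency limit
    # (nominalCL = ceil(SCL * shares / shareSum)); the numbers are the ones in this log.
    import math

    def nominal_cl(scl, shares, share_sum):
        return math.ceil(scl * shares / share_sum)

    SCL = 4000                               # serverConcurrencyLimit logged above
    SHARE_SUM = 5                            # shareSum logged for this configuration
    levels = {"exempt": 0, "catch-all": 5}   # nominalConcurrencyShares per level

    for name, shares in levels.items():
        print(name, nominal_cl(SCL, shares, SHARE_SUM))   # exempt -> 0, catch-all -> 4000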
2025-10-13T00:22:23.907506430+00:00 stderr F I1013 00:22:23.907470 17 cacher.go:460] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized 2025-10-13T00:22:23.907578571+00:00 stderr F I1013 00:22:23.907559 17 reflector.go:351] Caches populated for *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations 2025-10-13T00:22:23.914853137+00:00 stderr F I1013 00:22:23.914787 17 store.go:1579] "Monitoring resource count at path" resource="deployments.apps" path="//deployments" 2025-10-13T00:22:23.922960465+00:00 stderr F I1013 00:22:23.922886 17 cacher.go:460] cacher (deployments.apps): initialized 2025-10-13T00:22:23.922960465+00:00 stderr F I1013 00:22:23.922910 17 reflector.go:351] Caches populated for *apps.Deployment from storage/cacher.go:/deployments 2025-10-13T00:22:23.924301811+00:00 stderr F I1013 00:22:23.924259 17 store.go:1579] "Monitoring resource count at path" resource="statefulsets.apps" path="//statefulsets" 2025-10-13T00:22:23.925223936+00:00 stderr F I1013 00:22:23.925154 17 cacher.go:460] cacher (statefulsets.apps): initialized 2025-10-13T00:22:23.925223936+00:00 stderr F I1013 00:22:23.925169 17 reflector.go:351] Caches populated for *apps.StatefulSet from storage/cacher.go:/statefulsets 2025-10-13T00:22:23.930994511+00:00 stderr F I1013 00:22:23.930947 17 store.go:1579] "Monitoring resource count at path" resource="daemonsets.apps" path="//daemonsets" 2025-10-13T00:22:23.934283960+00:00 stderr F I1013 00:22:23.934246 17 cacher.go:460] cacher (daemonsets.apps): initialized 2025-10-13T00:22:23.934394053+00:00 stderr F I1013 00:22:23.934357 17 reflector.go:351] Caches populated for *apps.DaemonSet from storage/cacher.go:/daemonsets 2025-10-13T00:22:23.937268620+00:00 stderr F I1013 00:22:23.937224 17 store.go:1579] "Monitoring resource count at path" resource="replicasets.apps" path="//replicasets" 2025-10-13T00:22:23.944064213+00:00 stderr F I1013 00:22:23.944001 17 store.go:1579] "Monitoring resource count at path" resource="controllerrevisions.apps" path="//controllerrevisions" 2025-10-13T00:22:23.944269678+00:00 stderr F I1013 00:22:23.944244 17 instance.go:707] Enabling API group "apps". 
2025-10-13T00:22:23.944886635+00:00 stderr F I1013 00:22:23.944837 17 cacher.go:460] cacher (replicasets.apps): initialized 2025-10-13T00:22:23.944886635+00:00 stderr F I1013 00:22:23.944865 17 reflector.go:351] Caches populated for *apps.ReplicaSet from storage/cacher.go:/replicasets 2025-10-13T00:22:23.947166866+00:00 stderr F I1013 00:22:23.947124 17 cacher.go:460] cacher (controllerrevisions.apps): initialized 2025-10-13T00:22:23.947166866+00:00 stderr F I1013 00:22:23.947146 17 reflector.go:351] Caches populated for *apps.ControllerRevision from storage/cacher.go:/controllerrevisions 2025-10-13T00:22:23.951063141+00:00 stderr F I1013 00:22:23.950987 17 store.go:1579] "Monitoring resource count at path" resource="validatingwebhookconfigurations.admissionregistration.k8s.io" path="//validatingwebhookconfigurations" 2025-10-13T00:22:23.954132143+00:00 stderr F I1013 00:22:23.954062 17 cacher.go:460] cacher (validatingwebhookconfigurations.admissionregistration.k8s.io): initialized 2025-10-13T00:22:23.954132143+00:00 stderr F I1013 00:22:23.954089 17 reflector.go:351] Caches populated for *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations 2025-10-13T00:22:23.959045875+00:00 stderr F I1013 00:22:23.958956 17 store.go:1579] "Monitoring resource count at path" resource="mutatingwebhookconfigurations.admissionregistration.k8s.io" path="//mutatingwebhookconfigurations" 2025-10-13T00:22:23.959227140+00:00 stderr F I1013 00:22:23.959185 17 instance.go:707] Enabling API group "admissionregistration.k8s.io". 2025-10-13T00:22:23.959847177+00:00 stderr F I1013 00:22:23.959809 17 cacher.go:460] cacher (mutatingwebhookconfigurations.admissionregistration.k8s.io): initialized 2025-10-13T00:22:23.959847177+00:00 stderr F I1013 00:22:23.959822 17 reflector.go:351] Caches populated for *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations 2025-10-13T00:22:23.965934531+00:00 stderr F I1013 00:22:23.965888 17 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2025-10-13T00:22:23.965976072+00:00 stderr F I1013 00:22:23.965929 17 instance.go:707] Enabling API group "events.k8s.io". 2025-10-13T00:22:23.965976072+00:00 stderr F I1013 00:22:23.965940 17 instance.go:694] API group "resource.k8s.io" is not enabled, skipping. 2025-10-13T00:22:23.985814895+00:00 stderr F I1013 00:22:23.985703 17 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.985814895+00:00 stderr F W1013 00:22:23.985728 17 genericapiserver.go:774] Skipping API authentication.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.985814895+00:00 stderr F W1013 00:22:23.985734 17 genericapiserver.go:774] Skipping API authentication.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.986064392+00:00 stderr F I1013 00:22:23.986027 17 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.986064392+00:00 stderr F W1013 00:22:23.986042 17 genericapiserver.go:774] Skipping API authorization.k8s.io/v1beta1 because it has no resources. 
2025-10-13T00:22:23.986741010+00:00 stderr F I1013 00:22:23.986689 17 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager 2025-10-13T00:22:23.987250974+00:00 stderr F I1013 00:22:23.987206 17 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager 2025-10-13T00:22:23.987250974+00:00 stderr F W1013 00:22:23.987219 17 genericapiserver.go:774] Skipping API autoscaling/v2beta1 because it has no resources. 2025-10-13T00:22:23.987250974+00:00 stderr F W1013 00:22:23.987223 17 genericapiserver.go:774] Skipping API autoscaling/v2beta2 because it has no resources. 2025-10-13T00:22:23.988276492+00:00 stderr F I1013 00:22:23.988227 17 handler.go:275] Adding GroupVersion batch v1 to ResourceManager 2025-10-13T00:22:23.988276492+00:00 stderr F W1013 00:22:23.988244 17 genericapiserver.go:774] Skipping API batch/v1beta1 because it has no resources. 2025-10-13T00:22:23.988924509+00:00 stderr F I1013 00:22:23.988884 17 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.988924509+00:00 stderr F W1013 00:22:23.988899 17 genericapiserver.go:774] Skipping API certificates.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.988924509+00:00 stderr F W1013 00:22:23.988903 17 genericapiserver.go:774] Skipping API certificates.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.989474824+00:00 stderr F I1013 00:22:23.989409 17 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.989474824+00:00 stderr F W1013 00:22:23.989424 17 genericapiserver.go:774] Skipping API coordination.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.989515975+00:00 stderr F W1013 00:22:23.989453 17 genericapiserver.go:774] Skipping API discovery.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.989905855+00:00 stderr F I1013 00:22:23.989859 17 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.991520579+00:00 stderr F I1013 00:22:23.991454 17 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.991520579+00:00 stderr F W1013 00:22:23.991471 17 genericapiserver.go:774] Skipping API networking.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.991520579+00:00 stderr F W1013 00:22:23.991476 17 genericapiserver.go:774] Skipping API networking.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.991838417+00:00 stderr F I1013 00:22:23.991783 17 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.991838417+00:00 stderr F W1013 00:22:23.991797 17 genericapiserver.go:774] Skipping API node.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.991838417+00:00 stderr F W1013 00:22:23.991801 17 genericapiserver.go:774] Skipping API node.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.992356861+00:00 stderr F I1013 00:22:23.992300 17 handler.go:275] Adding GroupVersion policy v1 to ResourceManager 2025-10-13T00:22:23.992356861+00:00 stderr F W1013 00:22:23.992315 17 genericapiserver.go:774] Skipping API policy/v1beta1 because it has no resources. 2025-10-13T00:22:23.996407730+00:00 stderr F I1013 00:22:23.994802 17 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.996407730+00:00 stderr F W1013 00:22:23.994819 17 genericapiserver.go:774] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. 
2025-10-13T00:22:23.996407730+00:00 stderr F W1013 00:22:23.994824 17 genericapiserver.go:774] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.996407730+00:00 stderr F I1013 00:22:23.995132 17 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.996407730+00:00 stderr F W1013 00:22:23.995140 17 genericapiserver.go:774] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.996407730+00:00 stderr F W1013 00:22:23.995143 17 genericapiserver.go:774] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.996601625+00:00 stderr F I1013 00:22:23.996559 17 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.996601625+00:00 stderr F W1013 00:22:23.996573 17 genericapiserver.go:774] Skipping API storage.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:23.996601625+00:00 stderr F W1013 00:22:23.996578 17 genericapiserver.go:774] Skipping API storage.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:23.997547841+00:00 stderr F I1013 00:22:23.997503 17 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager 2025-10-13T00:22:23.998280571+00:00 stderr F I1013 00:22:23.998238 17 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager 2025-10-13T00:22:23.998280571+00:00 stderr F W1013 00:22:23.998251 17 genericapiserver.go:774] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources. 2025-10-13T00:22:23.998280571+00:00 stderr F W1013 00:22:23.998256 17 genericapiserver.go:774] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:24.001401814+00:00 stderr F I1013 00:22:24.001279 17 handler.go:275] Adding GroupVersion apps v1 to ResourceManager 2025-10-13T00:22:24.001401814+00:00 stderr F W1013 00:22:24.001297 17 genericapiserver.go:774] Skipping API apps/v1beta2 because it has no resources. 2025-10-13T00:22:24.001401814+00:00 stderr F W1013 00:22:24.001303 17 genericapiserver.go:774] Skipping API apps/v1beta1 because it has no resources. 2025-10-13T00:22:24.002101503+00:00 stderr F I1013 00:22:24.002021 17 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager 2025-10-13T00:22:24.002101503+00:00 stderr F W1013 00:22:24.002040 17 genericapiserver.go:774] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:24.002101503+00:00 stderr F W1013 00:22:24.002045 17 genericapiserver.go:774] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources. 2025-10-13T00:22:24.002601367+00:00 stderr F I1013 00:22:24.002557 17 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager 2025-10-13T00:22:24.002601367+00:00 stderr F W1013 00:22:24.002573 17 genericapiserver.go:774] Skipping API events.k8s.io/v1beta1 because it has no resources. 2025-10-13T00:22:24.014112216+00:00 stderr F I1013 00:22:24.014024 17 store.go:1579] "Monitoring resource count at path" resource="apiservices.apiregistration.k8s.io" path="//apiregistration.k8s.io/apiservices" 2025-10-13T00:22:24.015185565+00:00 stderr F I1013 00:22:24.014956 17 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager 2025-10-13T00:22:24.015185565+00:00 stderr F W1013 00:22:24.014970 17 genericapiserver.go:774] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. 
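The handler and genericapiserver records above announce every served group-version ("Adding GroupVersion ... to ResourceManager") and every version skipped for having no resources. A short sketch that tallies both from the message text, assuming those phrasings; summarize is an illustrative helper, and the four sample messages are taken from this log.

    # Sketch: tally served vs. skipped group-versions from the messages above.
    import re

    ADD_RE = re.compile(r"Adding GroupVersion (.+?) to ResourceManager")
    SKIP_RE = re.compile(r"Skipping API (\S+) because it has no resources")

    def summarize(messages):
        added, skipped = [], []
        for msg in messages:
            if (m := ADD_RE.search(msg)):
                added.append(m.group(1))        # e.g. "batch v1", or "v1" for the core group
            elif (m := SKIP_RE.search(msg)):
                skipped.append(m.group(1))      # e.g. "batch/v1beta1"
        return added, skipped

    sample = [
        "Adding GroupVersion batch v1 to ResourceManager",
        "Skipping API batch/v1beta1 because it has no resources.",
        "Adding GroupVersion storage.k8s.io v1 to ResourceManager",
        "Skipping API storage.k8s.io/v1alpha1 because it has no resources.",
    ]
    served, skipped = summarize(sample)
    print("served:", served)
    print("skipped:", skipped)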
2025-10-13T00:22:24.015840953+00:00 stderr F I1013 00:22:24.015798 17 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2025-10-13T00:22:24.018786792+00:00 stderr F I1013 00:22:24.018729 17 cacher.go:460] cacher (apiservices.apiregistration.k8s.io): initialized 2025-10-13T00:22:24.018786792+00:00 stderr F I1013 00:22:24.018751 17 reflector.go:351] Caches populated for *apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices 2025-10-13T00:22:24.382506043+00:00 stderr F I1013 00:22:24.382373 17 genericapiserver.go:576] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2025-10-13T00:22:24.382645887+00:00 stderr F I1013 00:22:24.382579 17 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:22:24.382680738+00:00 stderr F I1013 00:22:24.382619 17 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:22:24.382953865+00:00 stderr F I1013 00:22:24.382890 17 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-10-13T00:22:24.383274014+00:00 stderr F I1013 00:22:24.383226 17 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" 2025-10-13T00:22:24.383451459+00:00 stderr F I1013 00:22:24.383402 17 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-10-13T00:22:24.383797278+00:00 stderr F I1013 00:22:24.383706 17 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" 2025-10-13T00:22:24.384106856+00:00 stderr F I1013 00:22:24.384045 17 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" 2025-10-13T00:22:24.384303642+00:00 stderr F I1013 00:22:24.384255 17 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" 2025-10-13T00:22:24.384354453+00:00 stderr F I1013 00:22:24.384308 17 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC 
to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:22:24.384281631 +0000 UTC))" 2025-10-13T00:22:24.384391204+00:00 stderr F I1013 00:22:24.384358 17 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:22:24.384341163 +0000 UTC))" 2025-10-13T00:22:24.384426915+00:00 stderr F I1013 00:22:24.384376 17 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:22:24.384365013 +0000 UTC))" 2025-10-13T00:22:24.384426915+00:00 stderr F I1013 00:22:24.384392 17 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:22:24.384381704 +0000 UTC))" 2025-10-13T00:22:24.384468896+00:00 stderr F I1013 00:22:24.384409 17 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:22:24.384397164 +0000 UTC))" 2025-10-13T00:22:24.384468896+00:00 stderr F I1013 00:22:24.384425 17 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.384413845 +0000 UTC))" 2025-10-13T00:22:24.384468896+00:00 stderr F I1013 00:22:24.384440 17 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.384429385 +0000 UTC))" 2025-10-13T00:22:24.384523067+00:00 stderr F I1013 00:22:24.384455 17 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.384444285 +0000 UTC))" 2025-10-13T00:22:24.384523067+00:00 stderr F I1013 00:22:24.384469 17 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:22:24.384459946 +0000 UTC))" 2025-10-13T00:22:24.384523067+00:00 stderr F I1013 00:22:24.384486 17 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.384475846 +0000 UTC))" 2025-10-13T00:22:24.384554988+00:00 stderr F I1013 00:22:24.384504 17 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.384492877 +0000 UTC))" 2025-10-13T00:22:24.384866997+00:00 stderr F I1013 00:22:24.384801 17 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" certDetail="\"10.217.4.1\" [serving] validServingFor=[10.217.4.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.217.4.1] issuer=\"kube-apiserver-service-network-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.384783744 +0000 UTC))" 2025-10-13T00:22:24.385130114+00:00 stderr F I1013 00:22:24.385077 17 named_certificates.go:53] "Loaded SNI cert" index=5 certName="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" certDetail="\"localhost-recovery\" [serving] validServingFor=[localhost-recovery] issuer=\"openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1719406013\" (2024-06-26 12:47:06 +0000 UTC to 2034-06-24 12:46:53 +0000 UTC (now=2025-10-13 00:22:24.385061192 +0000 UTC))" 2025-10-13T00:22:24.385407931+00:00 stderr F I1013 00:22:24.385355 17 named_certificates.go:53] "Loaded SNI cert" index=4 
certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" certDetail="\"api-int.crc.testing\" [serving] validServingFor=[api-int.crc.testing] issuer=\"kube-apiserver-lb-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.385343179 +0000 UTC))" 2025-10-13T00:22:24.385679939+00:00 stderr F I1013 00:22:24.385629 17 named_certificates.go:53] "Loaded SNI cert" index=3 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" certDetail="\"api.crc.testing\" [serving] validServingFor=[api.crc.testing] issuer=\"kube-apiserver-lb-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.385591856 +0000 UTC))" 2025-10-13T00:22:24.385976637+00:00 stderr F I1013 00:22:24.385918 17 named_certificates.go:53] "Loaded SNI cert" index=2 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" certDetail="\"10.217.4.1\" [serving] validServingFor=[10.217.4.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.217.4.1] issuer=\"kube-apiserver-service-network-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.385902975 +0000 UTC))" 2025-10-13T00:22:24.386249244+00:00 stderr F I1013 00:22:24.386190 17 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" certDetail="\"127.0.0.1\" [serving] validServingFor=[127.0.0.1,localhost,127.0.0.1] issuer=\"kube-apiserver-localhost-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:24.386150341 +0000 UTC))" 2025-10-13T00:22:24.386505461+00:00 stderr F I1013 00:22:24.386452 17 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314943\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314943\" (2025-10-12 23:22:22 +0000 UTC to 2026-10-12 23:22:22 +0000 UTC (now=2025-10-13 00:22:24.386438969 +0000 UTC))" 2025-10-13T00:22:24.386505461+00:00 stderr F I1013 00:22:24.386484 17 secure_serving.go:213] Serving securely on [::]:6443 2025-10-13T00:22:24.386544642+00:00 stderr F I1013 00:22:24.386522 17 genericapiserver.go:700] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:22:24.386544642+00:00 stderr F I1013 00:22:24.386522 17 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:22:24.386645385+00:00 stderr F I1013 00:22:24.386584 17 controller.go:78] Starting OpenAPI AggregationController 2025-10-13T00:22:24.386645385+00:00 stderr F I1013 00:22:24.386599 17 available_controller.go:445] Starting AvailableConditionController 2025-10-13T00:22:24.386645385+00:00 stderr F I1013 00:22:24.386607 17 cache.go:32] Waiting for caches to sync for AvailableConditionController controller 
2025-10-13T00:22:24.386687896+00:00 stderr F I1013 00:22:24.386621 17 apf_controller.go:374] Starting API Priority and Fairness config controller 2025-10-13T00:22:24.386687896+00:00 stderr F I1013 00:22:24.386664 17 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2025-10-13T00:22:24.386808129+00:00 stderr F I1013 00:22:24.386669 17 controller.go:80] Starting OpenAPI V3 AggregationController 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386720 17 gc_controller.go:78] Starting apiserver lease garbage collector 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386747 17 system_namespaces_controller.go:67] Starting system namespaces controller 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386765 17 controller.go:116] Starting legacy_token_tracking_controller 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386771 17 apiservice_controller.go:97] Starting APIServiceRegistrationController 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386774 17 shared_informer.go:311] Waiting for caches to sync for configmaps 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386780 17 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller 2025-10-13T00:22:24.386848110+00:00 stderr F I1013 00:22:24.386804 17 handler_discovery.go:412] Starting ResourceDiscoveryManager 2025-10-13T00:22:24.386948413+00:00 stderr F I1013 00:22:24.386900 17 customresource_discovery_controller.go:289] Starting DiscoveryController 2025-10-13T00:22:24.387074866+00:00 stderr F I1013 00:22:24.387023 17 apiaccess_count_controller.go:89] Starting APIRequestCount controller. 2025-10-13T00:22:24.387155418+00:00 stderr F I1013 00:22:24.387101 17 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController 2025-10-13T00:22:24.387231230+00:00 stderr F I1013 00:22:24.387184 17 controller.go:85] Starting OpenAPI V3 controller 2025-10-13T00:22:24.387270991+00:00 stderr F I1013 00:22:24.387217 17 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-10-13T00:22:24.387270991+00:00 stderr F I1013 00:22:24.387220 17 naming_controller.go:291] Starting NamingConditionController 2025-10-13T00:22:24.387270991+00:00 stderr F I1013 00:22:24.387243 17 establishing_controller.go:76] Starting EstablishingController 2025-10-13T00:22:24.387309202+00:00 stderr F I1013 00:22:24.387274 17 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController 2025-10-13T00:22:24.387309202+00:00 stderr F I1013 00:22:24.387189 17 aggregator.go:163] waiting for initial CRD sync... 
2025-10-13T00:22:24.387503268+00:00 stderr F I1013 00:22:24.387443 17 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller 2025-10-13T00:22:24.387503268+00:00 stderr F I1013 00:22:24.387458 17 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller 2025-10-13T00:22:24.387503268+00:00 stderr F I1013 00:22:24.386780 17 controller.go:133] Starting OpenAPI controller 2025-10-13T00:22:24.387817426+00:00 stderr F I1013 00:22:24.387753 17 crd_finalizer.go:266] Starting CRDFinalizer 2025-10-13T00:22:24.388234937+00:00 stderr F I1013 00:22:24.388154 17 crdregistration_controller.go:112] Starting crd-autoregister controller 2025-10-13T00:22:24.388234937+00:00 stderr F I1013 00:22:24.388168 17 shared_informer.go:311] Waiting for caches to sync for crd-autoregister 2025-10-13T00:22:24.388292449+00:00 stderr F I1013 00:22:24.388264 17 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:22:24.388472964+00:00 stderr F I1013 00:22:24.388418 17 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:22:24.393481058+00:00 stderr F W1013 00:22:24.393400 17 patch_genericapiserver.go:204] Request to "/apis/config.openshift.io/v1/clusterversions/version/status" (source IP 38.102.83.180:51018, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/openshift-cluster-version") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:24.396420677+00:00 stderr F W1013 00:22:24.396269 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:24.398902544+00:00 stderr F I1013 00:22:24.398654 17 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/secrets" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-10-13T00:22:24.399228783+00:00 stderr F I1013 00:22:24.399182 17 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/configmaps" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-10-13T00:22:24.399403198+00:00 stderr F W1013 00:22:24.399356 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ovn-kubernetes/pods" (source IP 38.102.83.180:51032, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup. 
2025-10-13T00:22:24.400275671+00:00 stderr F I1013 00:22:24.399812 17 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.400605550+00:00 stderr F I1013 00:22:24.400543 17 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-10-13T00:22:24.400952599+00:00 stderr F I1013 00:22:24.400898 17 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-10-13T00:22:24.401835393+00:00 stderr F W1013 00:22:24.401629 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-machine-config-operator/pods" (source IP 38.102.83.180:51048, user agent "machine-config-daemon/v0.0.0 (linux/amd64) kubernetes/$Format/kube-shared-informer") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:24.402416099+00:00 stderr F I1013 00:22:24.402361 17 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.403573250+00:00 stderr F I1013 00:22:24.403042 17 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 
2025-10-13T00:22:24.403801406+00:00 stderr F E1013 00:22:24.403752 17 sdn_readyz_wait.go:100] api-openshift-apiserver-available did not find any IPs for kubernetes.default.svc endpoint 2025-10-13T00:22:24.404773662+00:00 stderr F I1013 00:22:24.404717 17 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.404868945+00:00 stderr F I1013 00:22:24.404835 17 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.404984338+00:00 stderr F I1013 00:22:24.404942 17 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.405036689+00:00 stderr F I1013 00:22:24.404998 17 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.405201854+00:00 stderr F I1013 00:22:24.405149 17 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.405201854+00:00 stderr F I1013 00:22:24.405172 17 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.405375618+00:00 stderr F I1013 00:22:24.404877 17 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.406319634+00:00 stderr F I1013 00:22:24.406262 17 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.407142106+00:00 stderr F I1013 00:22:24.407064 17 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.407190567+00:00 stderr F I1013 00:22:24.407068 17 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.407624919+00:00 stderr F I1013 00:22:24.407572 17 reflector.go:351] Caches populated for *v1.APIService from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.408537883+00:00 stderr F E1013 00:22:24.408481 17 sdn_readyz_wait.go:100] api-openshift-oauth-apiserver-available did not find any IPs for kubernetes.default.svc endpoint 2025-10-13T00:22:24.408621536+00:00 stderr F I1013 00:22:24.408585 17 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.408833571+00:00 stderr F I1013 00:22:24.408790 17 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.408868022+00:00 stderr F I1013 00:22:24.408810 17 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.408902593+00:00 stderr F I1013 00:22:24.408883 17 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.409150850+00:00 stderr F I1013 00:22:24.409086 17 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.410073295+00:00 stderr F I1013 00:22:24.410020 17 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-10-13T00:22:24.410890447+00:00 stderr F I1013 00:22:24.410495 17 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.410935868+00:00 stderr F I1013 00:22:24.410883 17 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.411255766+00:00 stderr F I1013 00:22:24.411202 17 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.412486159+00:00 stderr F I1013 00:22:24.412419 17 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.413304641+00:00 stderr F I1013 00:22:24.413232 17 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.414889744+00:00 stderr F I1013 00:22:24.414753 17 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.416144928+00:00 stderr F I1013 00:22:24.416059 17 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.422228061+00:00 stderr F I1013 00:22:24.422152 17 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.452621519+00:00 stderr F I1013 00:22:24.452504 17 shared_informer.go:318] Caches are synced for node_authorizer 2025-10-13T00:22:24.480858688+00:00 stderr F I1013 00:22:24.480721 17 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.485746640+00:00 stderr F W1013 00:22:24.485638 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2025-10-13T00:22:24.486754317+00:00 stderr F I1013 00:22:24.486685 17 apf_controller.go:379] Running API Priority and Fairness config worker 2025-10-13T00:22:24.486754317+00:00 stderr F I1013 00:22:24.486705 17 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process 2025-10-13T00:22:24.486754317+00:00 stderr F I1013 00:22:24.486715 17 cache.go:39] Caches are synced for AvailableConditionController controller 2025-10-13T00:22:24.486872880+00:00 stderr F I1013 00:22:24.486825 17 cache.go:39] Caches are synced for APIServiceRegistrationController controller 2025-10-13T00:22:24.486920981+00:00 stderr F I1013 00:22:24.486890 17 shared_informer.go:318] Caches are synced for configmaps 2025-10-13T00:22:24.487086026+00:00 stderr F I1013 00:22:24.487019 17 apf_controller.go:869] Retaining queues for priority level "catch-all": config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=79, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false, numPending=0 (shares=0xc005e0fe00, shareSum=255) 2025-10-13T00:22:24.487137157+00:00 stderr F I1013 00:22:24.487059 17 apf_controller.go:861] Introducing queues for priority level "global-default": config={"type":"Limited","limited":{"nominalConcurrencyShares":20,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=314, lendableCL=157, borrowingCL=4000, currentCL=236, quiescing=false (shares=0xc005e0fe80, shareSum=255) 2025-10-13T00:22:24.487137157+00:00 stderr F I1013 00:22:24.487076 17 apf_controller.go:861] Introducing queues for priority level "system": config={"type":"Limited","limited":{"nominalConcurrencyShares":30,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=471, lendableCL=155, borrowingCL=4000, currentCL=394, quiescing=false (shares=0xc005e0ffe8, shareSum=255) 2025-10-13T00:22:24.487137157+00:00 stderr F I1013 00:22:24.487088 17 apf_controller.go:861] Introducing queues for priority level "openshift-control-plane-operators": config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=157, lendableCL=52, borrowingCL=4000, currentCL=131, quiescing=false (shares=0xc005e0ff98, shareSum=255) 2025-10-13T00:22:24.487137157+00:00 stderr F I1013 00:22:24.487101 17 apf_controller.go:861] Introducing queues for priority level "workload-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=628, lendableCL=314, borrowingCL=4000, currentCL=471, quiescing=false (shares=0xc0063c6500, shareSum=255) 2025-10-13T00:22:24.487189448+00:00 stderr F I1013 00:22:24.487110 17 apf_controller.go:861] Introducing queues for priority level "leader-election": config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":16,"handSize":4,"queueLengthLimit":50}},"lendablePercent":0}}, nominalCL=157, lendableCL=0, borrowingCL=4000, currentCL=157, quiescing=false (shares=0xc005e0fed0, shareSum=255) 2025-10-13T00:22:24.487189448+00:00 stderr F I1013 00:22:24.487121 17 apf_controller.go:861] Introducing queues for priority level "node-high": 
config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":25}}, nominalCL=628, lendableCL=157, borrowingCL=4000, currentCL=550, quiescing=false (shares=0xc005e0ff18, shareSum=255) 2025-10-13T00:22:24.487189448+00:00 stderr F I1013 00:22:24.487135 17 apf_controller.go:861] Introducing queues for priority level "workload-low": config={"type":"Limited","limited":{"nominalConcurrencyShares":100,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":90}}, nominalCL=1569, lendableCL=1412, borrowingCL=4000, currentCL=863, quiescing=false (shares=0xc0063c65b0, shareSum=255) 2025-10-13T00:22:24.487247440+00:00 stderr F I1013 00:22:24.487168 17 apf_controller.go:455] "Update CurrentCL" plName="leader-election" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=357 concurrencyDenominator=357 backstop=false 2025-10-13T00:22:24.487247440+00:00 stderr F I1013 00:22:24.487195 17 apf_controller.go:455] "Update CurrentCL" plName="node-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=1072 concurrencyDenominator=1072 backstop=false 2025-10-13T00:22:24.487288541+00:00 stderr F I1013 00:22:24.487227 17 apf_controller.go:455] "Update CurrentCL" plName="workload-low" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=357 concurrencyDenominator=357 backstop=false 2025-10-13T00:22:24.487351633+00:00 stderr F I1013 00:22:24.487269 17 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.487351633+00:00 stderr F I1013 00:22:24.487295 17 apf_controller.go:455] "Update CurrentCL" plName="catch-all" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=180 concurrencyDenominator=180 backstop=false 2025-10-13T00:22:24.487351633+00:00 stderr F I1013 00:22:24.487309 17 apf_controller.go:455] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=2 seatDemandAvg=2 seatDemandStdev=0 seatDemandSmoothed=2 fairFrac=2.277019340159272 currentCL=5 concurrencyDenominator=5 backstop=false 2025-10-13T00:22:24.487400294+00:00 stderr F I1013 00:22:24.487345 17 apf_controller.go:455] "Update CurrentCL" plName="global-default" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=357 concurrencyDenominator=357 backstop=false 2025-10-13T00:22:24.487521187+00:00 stderr F I1013 00:22:24.487476 17 apf_controller.go:455] "Update CurrentCL" plName="system" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=720 concurrencyDenominator=720 backstop=false 2025-10-13T00:22:24.487562788+00:00 stderr F I1013 00:22:24.487539 17 apf_controller.go:455] "Update CurrentCL" plName="openshift-control-plane-operators" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.277019340159272 currentCL=239 concurrencyDenominator=239 backstop=false 2025-10-13T00:22:24.487641221+00:00 stderr F I1013 00:22:24.487599 17 apf_controller.go:455] "Update CurrentCL" plName="workload-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 
fairFrac=2.277019340159272 currentCL=715 concurrencyDenominator=715 backstop=false 2025-10-13T00:22:24.489128191+00:00 stderr F I1013 00:22:24.489012 17 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller 2025-10-13T00:22:24.508769829+00:00 stderr F I1013 00:22:24.506871 17 healthz.go:261] poststarthook/start-apiextensions-controllers,poststarthook/crd-informer-synced,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,poststarthook/apiservice-registration-controller check failed: readyz 2025-10-13T00:22:24.508769829+00:00 stderr F [-]poststarthook/start-apiextensions-controllers failed: not finished 2025-10-13T00:22:24.508769829+00:00 stderr F [-]poststarthook/crd-informer-synced failed: not finished 2025-10-13T00:22:24.508769829+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:24.508769829+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:24.508769829+00:00 stderr F [-]poststarthook/apiservice-registration-controller failed: not finished 2025-10-13T00:22:24.508769829+00:00 stderr F I1013 00:22:24.507774 17 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io 2025-10-13T00:22:24.510555377+00:00 stderr F I1013 00:22:24.510404 17 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.511075711+00:00 stderr F I1013 00:22:24.510992 17 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.516970239+00:00 stderr F I1013 00:22:24.516899 17 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager 2025-10-13T00:22:24.517728000+00:00 stderr F I1013 00:22:24.517633 17 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.518167971+00:00 stderr F I1013 00:22:24.518077 17 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.518892501+00:00 stderr F I1013 00:22:24.518806 17 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.520208506+00:00 stderr F I1013 00:22:24.520107 17 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.520237117+00:00 stderr F I1013 00:22:24.520183 17 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.521032309+00:00 stderr F I1013 00:22:24.520930 17 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.522353684+00:00 stderr F I1013 00:22:24.522244 17 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.525174370+00:00 stderr F I1013 00:22:24.525043 17 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.528448728+00:00 stderr F I1013 00:22:24.525751 17 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.528448728+00:00 stderr F E1013 00:22:24.527950 17 controller.go:146] Error updating APIService "v1.apps.openshift.io" with err: failed to download v1.apps.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:24.528448728+00:00 stderr F , Header: map[Audit-Id:[52482c48-9c17-401d-8767-84e5635725e9] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.530073342+00:00 stderr F E1013 00:22:24.529921 17 controller.go:146] Error updating APIService "v1.authorization.openshift.io" with err: failed to download v1.authorization.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.530073342+00:00 stderr F , Header: map[Audit-Id:[a4c36c8a-b25f-4ce4-a1de-d6ea538bf8c3] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.530393410+00:00 stderr F I1013 00:22:24.530188 17 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:24.533146284+00:00 stderr F E1013 00:22:24.532719 17 controller.go:146] Error updating APIService "v1.build.openshift.io" with err: failed to download v1.build.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.533146284+00:00 stderr F , Header: map[Audit-Id:[5cb918cd-d685-4caf-8890-ccff9908784e] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.534937462+00:00 stderr F E1013 00:22:24.534841 17 controller.go:146] Error updating APIService "v1.image.openshift.io" with err: failed to download v1.image.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.534937462+00:00 stderr F , Header: map[Audit-Id:[5c779d61-5d02-4d30-affe-e4c1eceb328c] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.538349284+00:00 stderr F E1013 00:22:24.537080 17 controller.go:146] Error updating APIService "v1.oauth.openshift.io" with err: failed to download v1.oauth.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.538349284+00:00 stderr F , Header: map[Audit-Id:[b49c0960-c6a8-4859-b089-3405f1795dbb] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.542859195+00:00 stderr F E1013 00:22:24.542761 17 controller.go:146] Error updating APIService "v1.packages.operators.coreos.com" with err: failed to download 
v1.packages.operators.coreos.com: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.542859195+00:00 stderr F , Header: map[Audit-Id:[dee15742-fd0d-4dfe-af4b-b4a3da4e06ba] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.544585812+00:00 stderr F E1013 00:22:24.544517 17 controller.go:146] Error updating APIService "v1.project.openshift.io" with err: failed to download v1.project.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.544585812+00:00 stderr F , Header: map[Audit-Id:[8dee0890-272c-4595-8ebc-8e159430bb14] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.546841783+00:00 stderr F E1013 00:22:24.546767 17 controller.go:146] Error updating APIService "v1.quota.openshift.io" with err: failed to download v1.quota.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.546841783+00:00 stderr F , Header: map[Audit-Id:[d12a10af-7dc5-4801-a10a-d65dadceafd5] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.548355653+00:00 stderr F E1013 00:22:24.548254 17 controller.go:146] Error updating APIService "v1.route.openshift.io" with err: failed to download v1.route.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.548355653+00:00 stderr F , Header: map[Audit-Id:[192f2871-e148-478e-8959-5ab1825fefee] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.550031678+00:00 stderr F E1013 00:22:24.549947 17 controller.go:146] Error updating APIService "v1.security.openshift.io" with err: failed to download v1.security.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.550031678+00:00 stderr F , Header: map[Audit-Id:[8dc5132c-314d-4e92-8aa3-018b748e7ec6] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.551740394+00:00 stderr F E1013 00:22:24.551659 17 controller.go:146] Error updating 
APIService "v1.template.openshift.io" with err: failed to download v1.template.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.551740394+00:00 stderr F , Header: map[Audit-Id:[dab98d65-e236-4c69-b9cb-157e3b2d628b] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.554277583+00:00 stderr F E1013 00:22:24.554203 17 controller.go:146] Error updating APIService "v1.user.openshift.io" with err: failed to download v1.user.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.554277583+00:00 stderr F , Header: map[Audit-Id:[ab0677be-66b2-4cfc-8029-189f525d60b3] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:24 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:24.582193713+00:00 stderr F W1013 00:22:24.581536 17 patch_genericapiserver.go:204] Request to "/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoleplugins.console.openshift.io" (source IP 38.102.83.180:51018, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:24.588243936+00:00 stderr F I1013 00:22:24.588151 17 genericapiserver.go:527] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:22:24.588369589+00:00 stderr F I1013 00:22:24.588266 17 shared_informer.go:318] Caches are synced for crd-autoregister 2025-10-13T00:22:24.588558464+00:00 stderr F I1013 00:22:24.588499 17 handler.go:275] Adding GroupVersion operator.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.588601596+00:00 stderr F I1013 00:22:24.588561 17 handler.go:275] Adding GroupVersion console.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.588674168+00:00 stderr F I1013 00:22:24.588625 17 handler.go:275] Adding GroupVersion k8s.ovn.org v1 to ResourceManager 2025-10-13T00:22:24.588709319+00:00 stderr F I1013 00:22:24.588662 17 handler.go:275] Adding GroupVersion infrastructure.cluster.x-k8s.io v1alpha5 to ResourceManager 2025-10-13T00:22:24.588754050+00:00 stderr F I1013 00:22:24.588702 17 handler.go:275] Adding GroupVersion infrastructure.cluster.x-k8s.io v1beta1 to ResourceManager 2025-10-13T00:22:24.588754050+00:00 stderr F I1013 00:22:24.588737 17 handler.go:275] Adding GroupVersion network.operator.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.588868013+00:00 stderr F I1013 00:22:24.588560 17 controller.go:222] Updating CRD OpenAPI spec because adminnetworkpolicies.policy.networking.k8s.io changed 2025-10-13T00:22:24.588868013+00:00 stderr F I1013 00:22:24.588818 17 handler.go:275] Adding GroupVersion config.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.588904544+00:00 stderr F I1013 00:22:24.588854 17 controller.go:222] Updating CRD OpenAPI spec because adminpolicybasedexternalroutes.k8s.ovn.org changed 2025-10-13T00:22:24.588904544+00:00 stderr F I1013 00:22:24.588865 17 
handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager 2025-10-13T00:22:24.588904544+00:00 stderr F I1013 00:22:24.588875 17 controller.go:222] Updating CRD OpenAPI spec because alertingrules.monitoring.openshift.io changed 2025-10-13T00:22:24.588918354+00:00 stderr F I1013 00:22:24.588895 17 controller.go:222] Updating CRD OpenAPI spec because alertmanagerconfigs.monitoring.coreos.com changed 2025-10-13T00:22:24.588927804+00:00 stderr F I1013 00:22:24.588912 17 controller.go:222] Updating CRD OpenAPI spec because alertmanagers.monitoring.coreos.com changed 2025-10-13T00:22:24.588970666+00:00 stderr F I1013 00:22:24.588925 17 controller.go:222] Updating CRD OpenAPI spec because alertrelabelconfigs.monitoring.openshift.io changed 2025-10-13T00:22:24.588970666+00:00 stderr F I1013 00:22:24.588942 17 controller.go:222] Updating CRD OpenAPI spec because apirequestcounts.apiserver.openshift.io changed 2025-10-13T00:22:24.588970666+00:00 stderr F I1013 00:22:24.588954 17 controller.go:222] Updating CRD OpenAPI spec because apiservers.config.openshift.io changed 2025-10-13T00:22:24.589004756+00:00 stderr F I1013 00:22:24.588967 17 controller.go:222] Updating CRD OpenAPI spec because authentications.config.openshift.io changed 2025-10-13T00:22:24.589004756+00:00 stderr F I1013 00:22:24.588990 17 controller.go:222] Updating CRD OpenAPI spec because authentications.operator.openshift.io changed 2025-10-13T00:22:24.589064658+00:00 stderr F I1013 00:22:24.589007 17 controller.go:222] Updating CRD OpenAPI spec because baselineadminnetworkpolicies.policy.networking.k8s.io changed 2025-10-13T00:22:24.589064658+00:00 stderr F I1013 00:22:24.589028 17 controller.go:222] Updating CRD OpenAPI spec because builds.config.openshift.io changed 2025-10-13T00:22:24.589064658+00:00 stderr F I1013 00:22:24.589041 17 controller.go:222] Updating CRD OpenAPI spec because catalogsources.operators.coreos.com changed 2025-10-13T00:22:24.589085069+00:00 stderr F I1013 00:22:24.589055 17 controller.go:222] Updating CRD OpenAPI spec because clusterautoscalers.autoscaling.openshift.io changed 2025-10-13T00:22:24.589085069+00:00 stderr F I1013 00:22:24.589074 17 controller.go:222] Updating CRD OpenAPI spec because clustercsidrivers.operator.openshift.io changed 2025-10-13T00:22:24.589133390+00:00 stderr F I1013 00:22:24.589089 17 controller.go:222] Updating CRD OpenAPI spec because clusteroperators.config.openshift.io changed 2025-10-13T00:22:24.589133390+00:00 stderr F I1013 00:22:24.589112 17 controller.go:222] Updating CRD OpenAPI spec because clusterresourcequotas.quota.openshift.io changed 2025-10-13T00:22:24.589147980+00:00 stderr F I1013 00:22:24.589126 17 controller.go:222] Updating CRD OpenAPI spec because clusterserviceversions.operators.coreos.com changed 2025-10-13T00:22:24.589147980+00:00 stderr F I1013 00:22:24.589134 17 handler.go:275] Adding GroupVersion operators.coreos.com v1alpha1 to ResourceManager 2025-10-13T00:22:24.589184581+00:00 stderr F I1013 00:22:24.589143 17 controller.go:222] Updating CRD OpenAPI spec because clusterversions.config.openshift.io changed 2025-10-13T00:22:24.589184581+00:00 stderr F I1013 00:22:24.589166 17 controller.go:222] Updating CRD OpenAPI spec because configs.imageregistry.operator.openshift.io changed 2025-10-13T00:22:24.589196222+00:00 stderr F I1013 00:22:24.589173 17 handler.go:275] Adding GroupVersion ipam.cluster.x-k8s.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.589196222+00:00 stderr F I1013 00:22:24.589183 17 controller.go:222] Updating 
CRD OpenAPI spec because configs.operator.openshift.io changed 2025-10-13T00:22:24.589240113+00:00 stderr F I1013 00:22:24.589195 17 controller.go:222] Updating CRD OpenAPI spec because configs.samples.operator.openshift.io changed 2025-10-13T00:22:24.589240113+00:00 stderr F I1013 00:22:24.589207 17 handler.go:275] Adding GroupVersion ipam.cluster.x-k8s.io v1beta1 to ResourceManager 2025-10-13T00:22:24.589240113+00:00 stderr F I1013 00:22:24.589216 17 controller.go:222] Updating CRD OpenAPI spec because consoleclidownloads.console.openshift.io changed 2025-10-13T00:22:24.589271944+00:00 stderr F I1013 00:22:24.589234 17 controller.go:222] Updating CRD OpenAPI spec because consoleexternalloglinks.console.openshift.io changed 2025-10-13T00:22:24.589271944+00:00 stderr F I1013 00:22:24.589253 17 controller.go:222] Updating CRD OpenAPI spec because consolelinks.console.openshift.io changed 2025-10-13T00:22:24.589338795+00:00 stderr F I1013 00:22:24.589268 17 controller.go:222] Updating CRD OpenAPI spec because consolenotifications.console.openshift.io changed 2025-10-13T00:22:24.589338795+00:00 stderr F I1013 00:22:24.589288 17 controller.go:222] Updating CRD OpenAPI spec because consoleplugins.console.openshift.io changed 2025-10-13T00:22:24.589338795+00:00 stderr F I1013 00:22:24.589302 17 controller.go:222] Updating CRD OpenAPI spec because consolequickstarts.console.openshift.io changed 2025-10-13T00:22:24.589378727+00:00 stderr F I1013 00:22:24.589314 17 controller.go:222] Updating CRD OpenAPI spec because consoles.config.openshift.io changed 2025-10-13T00:22:24.589378727+00:00 stderr F I1013 00:22:24.589235 17 handler.go:275] Adding GroupVersion autoscaling.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.589378727+00:00 stderr F I1013 00:22:24.589346 17 controller.go:222] Updating CRD OpenAPI spec because consoles.operator.openshift.io changed 2025-10-13T00:22:24.589378727+00:00 stderr F I1013 00:22:24.589364 17 controller.go:222] Updating CRD OpenAPI spec because consolesamples.console.openshift.io changed 2025-10-13T00:22:24.589422278+00:00 stderr F I1013 00:22:24.589376 17 controller.go:222] Updating CRD OpenAPI spec because consoleyamlsamples.console.openshift.io changed 2025-10-13T00:22:24.589422278+00:00 stderr F I1013 00:22:24.589394 17 controller.go:222] Updating CRD OpenAPI spec because containerruntimeconfigs.machineconfiguration.openshift.io changed 2025-10-13T00:22:24.589422278+00:00 stderr F I1013 00:22:24.589394 17 handler.go:275] Adding GroupVersion machineconfiguration.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.589440088+00:00 stderr F I1013 00:22:24.589408 17 controller.go:222] Updating CRD OpenAPI spec because controllerconfigs.machineconfiguration.openshift.io changed 2025-10-13T00:22:24.589449768+00:00 stderr F I1013 00:22:24.589426 17 controller.go:222] Updating CRD OpenAPI spec because controlplanemachinesets.machine.openshift.io changed 2025-10-13T00:22:24.589449768+00:00 stderr F I1013 00:22:24.589433 17 handler.go:275] Adding GroupVersion machine.openshift.io v1beta1 to ResourceManager 2025-10-13T00:22:24.589449768+00:00 stderr F I1013 00:22:24.589442 17 controller.go:222] Updating CRD OpenAPI spec because csisnapshotcontrollers.operator.openshift.io changed 2025-10-13T00:22:24.589519210+00:00 stderr F I1013 00:22:24.589457 17 controller.go:222] Updating CRD OpenAPI spec because dnses.config.openshift.io changed 2025-10-13T00:22:24.589519210+00:00 stderr F I1013 00:22:24.589474 17 controller.go:222] Updating CRD OpenAPI spec because 
dnses.operator.openshift.io changed 2025-10-13T00:22:24.589519210+00:00 stderr F I1013 00:22:24.589486 17 controller.go:222] Updating CRD OpenAPI spec because dnsrecords.ingress.operator.openshift.io changed 2025-10-13T00:22:24.589519210+00:00 stderr F I1013 00:22:24.589500 17 controller.go:222] Updating CRD OpenAPI spec because egressfirewalls.k8s.ovn.org changed 2025-10-13T00:22:24.589552101+00:00 stderr F I1013 00:22:24.589514 17 controller.go:222] Updating CRD OpenAPI spec because egressips.k8s.ovn.org changed 2025-10-13T00:22:24.589552101+00:00 stderr F I1013 00:22:24.589531 17 controller.go:222] Updating CRD OpenAPI spec because egressqoses.k8s.ovn.org changed 2025-10-13T00:22:24.589563271+00:00 stderr F I1013 00:22:24.589543 17 controller.go:222] Updating CRD OpenAPI spec because egressrouters.network.operator.openshift.io changed 2025-10-13T00:22:24.589572142+00:00 stderr F I1013 00:22:24.589559 17 controller.go:222] Updating CRD OpenAPI spec because egressservices.k8s.ovn.org changed 2025-10-13T00:22:24.589620493+00:00 stderr F I1013 00:22:24.589576 17 handler.go:275] Adding GroupVersion migration.k8s.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.589620493+00:00 stderr F I1013 00:22:24.589591 17 controller.go:222] Updating CRD OpenAPI spec because etcds.operator.openshift.io changed 2025-10-13T00:22:24.589620493+00:00 stderr F I1013 00:22:24.589605 17 controller.go:222] Updating CRD OpenAPI spec because featuregates.config.openshift.io changed 2025-10-13T00:22:24.589669234+00:00 stderr F I1013 00:22:24.589624 17 controller.go:222] Updating CRD OpenAPI spec because helmchartrepositories.helm.openshift.io changed 2025-10-13T00:22:24.589669234+00:00 stderr F I1013 00:22:24.589644 17 controller.go:222] Updating CRD OpenAPI spec because imagecontentpolicies.config.openshift.io changed 2025-10-13T00:22:24.589681035+00:00 stderr F I1013 00:22:24.589657 17 controller.go:222] Updating CRD OpenAPI spec because imagecontentsourcepolicies.operator.openshift.io changed 2025-10-13T00:22:24.589690195+00:00 stderr F I1013 00:22:24.589677 17 controller.go:222] Updating CRD OpenAPI spec because imagedigestmirrorsets.config.openshift.io changed 2025-10-13T00:22:24.589762087+00:00 stderr F I1013 00:22:24.589691 17 controller.go:222] Updating CRD OpenAPI spec because imagepruners.imageregistry.operator.openshift.io changed 2025-10-13T00:22:24.589762087+00:00 stderr F I1013 00:22:24.589712 17 controller.go:222] Updating CRD OpenAPI spec because images.config.openshift.io changed 2025-10-13T00:22:24.589762087+00:00 stderr F I1013 00:22:24.589725 17 controller.go:222] Updating CRD OpenAPI spec because imagetagmirrorsets.config.openshift.io changed 2025-10-13T00:22:24.589762087+00:00 stderr F I1013 00:22:24.589726 17 handler.go:275] Adding GroupVersion operators.coreos.com v1 to ResourceManager 2025-10-13T00:22:24.589762087+00:00 stderr F I1013 00:22:24.589737 17 controller.go:222] Updating CRD OpenAPI spec because infrastructures.config.openshift.io changed 2025-10-13T00:22:24.589782537+00:00 stderr F I1013 00:22:24.589752 17 controller.go:222] Updating CRD OpenAPI spec because ingresscontrollers.operator.openshift.io changed 2025-10-13T00:22:24.589782537+00:00 stderr F I1013 00:22:24.589770 17 controller.go:222] Updating CRD OpenAPI spec because ingresses.config.openshift.io changed 2025-10-13T00:22:24.589824458+00:00 stderr F I1013 00:22:24.589782 17 controller.go:222] Updating CRD OpenAPI spec because installplans.operators.coreos.com changed 2025-10-13T00:22:24.589824458+00:00 stderr F I1013 
00:22:24.589799 17 controller.go:222] Updating CRD OpenAPI spec because ipaddressclaims.ipam.cluster.x-k8s.io changed 2025-10-13T00:22:24.589857609+00:00 stderr F I1013 00:22:24.589817 17 controller.go:222] Updating CRD OpenAPI spec because ipaddresses.ipam.cluster.x-k8s.io changed 2025-10-13T00:22:24.589857609+00:00 stderr F I1013 00:22:24.589837 17 controller.go:222] Updating CRD OpenAPI spec because ippools.whereabouts.cni.cncf.io changed 2025-10-13T00:22:24.589888030+00:00 stderr F I1013 00:22:24.589854 17 controller.go:222] Updating CRD OpenAPI spec because kubeapiservers.operator.openshift.io changed 2025-10-13T00:22:24.589888030+00:00 stderr F I1013 00:22:24.589871 17 controller.go:222] Updating CRD OpenAPI spec because kubecontrollermanagers.operator.openshift.io changed 2025-10-13T00:22:24.589918321+00:00 stderr F I1013 00:22:24.589882 17 controller.go:222] Updating CRD OpenAPI spec because kubeletconfigs.machineconfiguration.openshift.io changed 2025-10-13T00:22:24.589918321+00:00 stderr F I1013 00:22:24.589899 17 controller.go:222] Updating CRD OpenAPI spec because kubeschedulers.operator.openshift.io changed 2025-10-13T00:22:24.589946762+00:00 stderr F I1013 00:22:24.589913 17 controller.go:222] Updating CRD OpenAPI spec because kubestorageversionmigrators.operator.openshift.io changed 2025-10-13T00:22:24.590018534+00:00 stderr F I1013 00:22:24.589967 17 controller.go:222] Updating CRD OpenAPI spec because machineautoscalers.autoscaling.openshift.io changed 2025-10-13T00:22:24.590018534+00:00 stderr F I1013 00:22:24.589987 17 controller.go:222] Updating CRD OpenAPI spec because machineconfigpools.machineconfiguration.openshift.io changed 2025-10-13T00:22:24.590018534+00:00 stderr F I1013 00:22:24.589999 17 controller.go:222] Updating CRD OpenAPI spec because machineconfigs.machineconfiguration.openshift.io changed 2025-10-13T00:22:24.590080395+00:00 stderr F I1013 00:22:24.590038 17 controller.go:222] Updating CRD OpenAPI spec because machineconfigurations.operator.openshift.io changed 2025-10-13T00:22:24.590092326+00:00 stderr F I1013 00:22:24.590059 17 controller.go:222] Updating CRD OpenAPI spec because machinehealthchecks.machine.openshift.io changed 2025-10-13T00:22:24.590092326+00:00 stderr F I1013 00:22:24.590079 17 controller.go:222] Updating CRD OpenAPI spec because machines.machine.openshift.io changed 2025-10-13T00:22:24.590135237+00:00 stderr F I1013 00:22:24.590096 17 controller.go:222] Updating CRD OpenAPI spec because machinesets.machine.openshift.io changed 2025-10-13T00:22:24.590135237+00:00 stderr F I1013 00:22:24.590115 17 controller.go:222] Updating CRD OpenAPI spec because metal3remediations.infrastructure.cluster.x-k8s.io changed 2025-10-13T00:22:24.590167088+00:00 stderr F I1013 00:22:24.590129 17 controller.go:222] Updating CRD OpenAPI spec because metal3remediationtemplates.infrastructure.cluster.x-k8s.io changed 2025-10-13T00:22:24.590167088+00:00 stderr F I1013 00:22:24.590146 17 controller.go:222] Updating CRD OpenAPI spec because network-attachment-definitions.k8s.cni.cncf.io changed 2025-10-13T00:22:24.590180408+00:00 stderr F I1013 00:22:24.590158 17 controller.go:222] Updating CRD OpenAPI spec because networks.config.openshift.io changed 2025-10-13T00:22:24.590214879+00:00 stderr F I1013 00:22:24.590174 17 controller.go:222] Updating CRD OpenAPI spec because networks.operator.openshift.io changed 2025-10-13T00:22:24.590214879+00:00 stderr F I1013 00:22:24.590200 17 controller.go:222] Updating CRD OpenAPI spec because nodes.config.openshift.io 
changed 2025-10-13T00:22:24.590304601+00:00 stderr F I1013 00:22:24.590246 17 controller.go:222] Updating CRD OpenAPI spec because oauths.config.openshift.io changed 2025-10-13T00:22:24.590304601+00:00 stderr F I1013 00:22:24.590269 17 controller.go:222] Updating CRD OpenAPI spec because olmconfigs.operators.coreos.com changed 2025-10-13T00:22:24.590304601+00:00 stderr F I1013 00:22:24.590284 17 controller.go:222] Updating CRD OpenAPI spec because openshiftapiservers.operator.openshift.io changed 2025-10-13T00:22:24.590319292+00:00 stderr F I1013 00:22:24.590294 17 controller.go:222] Updating CRD OpenAPI spec because openshiftcontrollermanagers.operator.openshift.io changed 2025-10-13T00:22:24.590348503+00:00 stderr F I1013 00:22:24.590312 17 controller.go:222] Updating CRD OpenAPI spec because operatorconditions.operators.coreos.com changed 2025-10-13T00:22:24.590386624+00:00 stderr F I1013 00:22:24.590343 17 controller.go:222] Updating CRD OpenAPI spec because operatorgroups.operators.coreos.com changed 2025-10-13T00:22:24.590386624+00:00 stderr F I1013 00:22:24.590366 17 controller.go:222] Updating CRD OpenAPI spec because operatorhubs.config.openshift.io changed 2025-10-13T00:22:24.590420314+00:00 stderr F I1013 00:22:24.590382 17 controller.go:222] Updating CRD OpenAPI spec because operatorpkis.network.operator.openshift.io changed 2025-10-13T00:22:24.590451945+00:00 stderr F I1013 00:22:24.590413 17 controller.go:222] Updating CRD OpenAPI spec because operators.operators.coreos.com changed 2025-10-13T00:22:24.590451945+00:00 stderr F I1013 00:22:24.590435 17 controller.go:222] Updating CRD OpenAPI spec because overlappingrangeipreservations.whereabouts.cni.cncf.io changed 2025-10-13T00:22:24.590484076+00:00 stderr F I1013 00:22:24.590448 17 controller.go:222] Updating CRD OpenAPI spec because podmonitors.monitoring.coreos.com changed 2025-10-13T00:22:24.590484076+00:00 stderr F I1013 00:22:24.590467 17 controller.go:222] Updating CRD OpenAPI spec because podnetworkconnectivitychecks.controlplane.operator.openshift.io changed 2025-10-13T00:22:24.590515727+00:00 stderr F I1013 00:22:24.590479 17 controller.go:222] Updating CRD OpenAPI spec because probes.monitoring.coreos.com changed 2025-10-13T00:22:24.590515727+00:00 stderr F I1013 00:22:24.590485 17 handler.go:275] Adding GroupVersion monitoring.coreos.com v1alpha1 to ResourceManager 2025-10-13T00:22:24.590531147+00:00 stderr F I1013 00:22:24.590500 17 controller.go:222] Updating CRD OpenAPI spec because projecthelmchartrepositories.helm.openshift.io changed 2025-10-13T00:22:24.590572229+00:00 stderr F I1013 00:22:24.590517 17 controller.go:222] Updating CRD OpenAPI spec because projects.config.openshift.io changed 2025-10-13T00:22:24.590572229+00:00 stderr F I1013 00:22:24.590546 17 controller.go:222] Updating CRD OpenAPI spec because prometheuses.monitoring.coreos.com changed 2025-10-13T00:22:24.590572229+00:00 stderr F I1013 00:22:24.590560 17 controller.go:222] Updating CRD OpenAPI spec because prometheusrules.monitoring.coreos.com changed 2025-10-13T00:22:24.590620510+00:00 stderr F I1013 00:22:24.590582 17 controller.go:222] Updating CRD OpenAPI spec because proxies.config.openshift.io changed 2025-10-13T00:22:24.590620510+00:00 stderr F I1013 00:22:24.590588 17 handler.go:275] Adding GroupVersion monitoring.coreos.com v1beta1 to ResourceManager 2025-10-13T00:22:24.590632310+00:00 stderr F I1013 00:22:24.590610 17 controller.go:222] Updating CRD OpenAPI spec because rangeallocations.security.internal.openshift.io changed 
2025-10-13T00:22:24.590692342+00:00 stderr F I1013 00:22:24.590637 17 controller.go:222] Updating CRD OpenAPI spec because rolebindingrestrictions.authorization.openshift.io changed 2025-10-13T00:22:24.590692342+00:00 stderr F I1013 00:22:24.590661 17 controller.go:222] Updating CRD OpenAPI spec because schedulers.config.openshift.io changed 2025-10-13T00:22:24.590707982+00:00 stderr F I1013 00:22:24.590681 17 controller.go:222] Updating CRD OpenAPI spec because securitycontextconstraints.security.openshift.io changed 2025-10-13T00:22:24.590767134+00:00 stderr F I1013 00:22:24.590711 17 controller.go:222] Updating CRD OpenAPI spec because servicecas.operator.openshift.io changed 2025-10-13T00:22:24.590767134+00:00 stderr F I1013 00:22:24.590734 17 controller.go:222] Updating CRD OpenAPI spec because servicemonitors.monitoring.coreos.com changed 2025-10-13T00:22:24.590779364+00:00 stderr F I1013 00:22:24.590751 17 controller.go:222] Updating CRD OpenAPI spec because storages.operator.openshift.io changed 2025-10-13T00:22:24.590779364+00:00 stderr F I1013 00:22:24.590767 17 controller.go:222] Updating CRD OpenAPI spec because storagestates.migration.k8s.io changed 2025-10-13T00:22:24.590847866+00:00 stderr F I1013 00:22:24.590789 17 controller.go:222] Updating CRD OpenAPI spec because storageversionmigrations.migration.k8s.io changed 2025-10-13T00:22:24.590847866+00:00 stderr F I1013 00:22:24.590811 17 controller.go:222] Updating CRD OpenAPI spec because subscriptions.operators.coreos.com changed 2025-10-13T00:22:24.590847866+00:00 stderr F I1013 00:22:24.590826 17 handler.go:275] Adding GroupVersion autoscaling.openshift.io v1beta1 to ResourceManager 2025-10-13T00:22:24.590985480+00:00 stderr F I1013 00:22:24.590829 17 controller.go:222] Updating CRD OpenAPI spec because thanosrulers.monitoring.coreos.com changed 2025-10-13T00:22:24.592714436+00:00 stderr F I1013 00:22:24.592601 17 handler.go:275] Adding GroupVersion whereabouts.cni.cncf.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.592843480+00:00 stderr F I1013 00:22:24.592789 17 handler.go:275] Adding GroupVersion samples.operator.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.597463894+00:00 stderr F I1013 00:22:24.597237 17 aggregator.go:165] initial CRD sync complete... 
2025-10-13T00:22:24.597463894+00:00 stderr F I1013 00:22:24.597268 17 autoregister_controller.go:141] Starting autoregister controller 2025-10-13T00:22:24.597463894+00:00 stderr F I1013 00:22:24.597282 17 cache.go:32] Waiting for caches to sync for autoregister controller 2025-10-13T00:22:24.597463894+00:00 stderr F I1013 00:22:24.597292 17 cache.go:39] Caches are synced for autoregister controller 2025-10-13T00:22:24.600684001+00:00 stderr F I1013 00:22:24.600575 17 handler.go:275] Adding GroupVersion policy.networking.k8s.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.600684001+00:00 stderr F I1013 00:22:24.600636 17 handler.go:275] Adding GroupVersion imageregistry.operator.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.601947404+00:00 stderr F I1013 00:22:24.601611 17 handler.go:275] Adding GroupVersion machine.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.602513050+00:00 stderr F I1013 00:22:24.602431 17 handler.go:275] Adding GroupVersion helm.openshift.io v1beta1 to ResourceManager 2025-10-13T00:22:24.602713965+00:00 stderr F I1013 00:22:24.602628 17 handler.go:275] Adding GroupVersion console.openshift.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.603136256+00:00 stderr F I1013 00:22:24.603031 17 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.604410051+00:00 stderr F I1013 00:22:24.604340 17 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.612288853+00:00 stderr F I1013 00:22:24.612000 17 handler.go:275] Adding GroupVersion operators.coreos.com v2 to ResourceManager 2025-10-13T00:22:24.620438632+00:00 stderr F I1013 00:22:24.620276 17 healthz.go:261] poststarthook/start-apiextensions-controllers,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:24.620438632+00:00 stderr F [-]poststarthook/start-apiextensions-controllers failed: not finished 2025-10-13T00:22:24.620438632+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:24.620438632+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:24.621491730+00:00 stderr F I1013 00:22:24.621413 17 handler.go:275] Adding GroupVersion security.internal.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.622949409+00:00 stderr F I1013 00:22:24.622856 17 handler.go:275] Adding GroupVersion monitoring.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.623374461+00:00 stderr F I1013 00:22:24.623296 17 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.625126728+00:00 stderr F I1013 00:22:24.625040 17 handler.go:275] Adding GroupVersion k8s.cni.cncf.io v1 to ResourceManager 2025-10-13T00:22:24.625342664+00:00 stderr F I1013 00:22:24.625277 17 handler.go:275] Adding GroupVersion controlplane.operator.openshift.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.625427546+00:00 stderr F I1013 00:22:24.625318 17 handler.go:275] Adding GroupVersion apiserver.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.625707753+00:00 stderr F I1013 00:22:24.625619 17 handler.go:275] Adding GroupVersion ingress.operator.openshift.io v1 to ResourceManager 2025-10-13T00:22:24.625893888+00:00 stderr F I1013 00:22:24.625831 17 handler.go:275] Adding GroupVersion operators.coreos.com v1alpha2 to ResourceManager 2025-10-13T00:22:24.627443120+00:00 stderr F I1013 00:22:24.627314 17 handler.go:275] 
Adding GroupVersion operator.openshift.io v1alpha1 to ResourceManager 2025-10-13T00:22:24.647993083+00:00 stderr F W1013 00:22:24.647862 17 patch_genericapiserver.go:204] Request to "/apis/monitoring.coreos.com/v1/namespaces/openshift-route-controller-manager/servicemonitors/openshift-route-controller-manager" (source IP 38.102.83.180:51018, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:24.684307969+00:00 stderr F W1013 00:22:24.684173 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/events" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:24.709507227+00:00 stderr F I1013 00:22:24.709322 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:24.709507227+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:24.709507227+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:24.808619462+00:00 stderr F I1013 00:22:24.808503 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:24.808619462+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:24.808619462+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:24.915113416+00:00 stderr F I1013 00:22:24.914941 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:24.915113416+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:24.915113416+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:25.009227707+00:00 stderr F I1013 00:22:25.009048 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:25.009227707+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.009227707+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:25.117485388+00:00 stderr F I1013 00:22:25.116799 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:25.117485388+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.117485388+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:25.134940618+00:00 stderr F W1013 00:22:25.134737 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver-operator/secrets" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2025-10-13T00:22:25.178441568+00:00 stderr F W1013 00:22:25.178291 17 patch_genericapiserver.go:204] Request to "/apis/machine.openshift.io/v1beta1/machinesets" (source IP 38.102.83.180:40574, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:25.209045141+00:00 stderr F I1013 00:22:25.208899 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:25.209045141+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.209045141+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:25.309268746+00:00 stderr F I1013 00:22:25.309068 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:25.309268746+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.309268746+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:25.395211047+00:00 stderr F W1013 00:22:25.395069 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-console/configmaps" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:25.405442052+00:00 stderr F W1013 00:22:25.405288 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-cluster-samples-operator/configmaps" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:25.416775377+00:00 stderr F I1013 00:22:25.416660 17 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz 2025-10-13T00:22:25.416775377+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.416775377+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-10-13T00:22:25.456268009+00:00 stderr F I1013 00:22:25.455607 17 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets" (user agent "cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-10-13T00:22:25.461627903+00:00 stderr F I1013 00:22:25.461541 17 storage_scheduling.go:111] all system priority classes are created successfully or already exist. 2025-10-13T00:22:25.464594183+00:00 stderr F W1013 00:22:25.464524 17 patch_genericapiserver.go:204] Request to "/apis/apps/v1/controllerrevisions" (source IP 38.102.83.180:40574, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2025-10-13T00:22:25.495002161+00:00 stderr F E1013 00:22:25.494903 17 controller.go:102] loading OpenAPI spec for "v1.project.openshift.io" failed with: failed to download v1.project.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.495002161+00:00 stderr F , Header: map[Audit-Id:[ee93ded4-565a-4f35-8eef-f5721c7d6332] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.495002161+00:00 stderr F I1013 00:22:25.494920 17 controller.go:109] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.501785053+00:00 stderr F E1013 00:22:25.501707 17 controller.go:102] loading OpenAPI spec for "v1.route.openshift.io" failed with: failed to download v1.route.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.501785053+00:00 stderr F , Header: map[Audit-Id:[e1323817-c006-4f03-9d12-319b17dad56e] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.501785053+00:00 stderr F I1013 00:22:25.501725 17 controller.go:109] OpenAPI AggregationController: action for item v1.route.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.508357070+00:00 stderr F I1013 00:22:25.508239 17 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz 2025-10-13T00:22:25.508357070+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.509989184+00:00 stderr F E1013 00:22:25.509920 17 controller.go:102] loading OpenAPI spec for "v1.user.openshift.io" failed with: failed to download v1.user.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.509989184+00:00 stderr F , Header: map[Audit-Id:[b44db6b6-7220-420e-8093-88126a69f10f] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.509989184+00:00 stderr F I1013 00:22:25.509943 17 controller.go:109] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue. 
2025-10-13T00:22:25.512914052+00:00 stderr F E1013 00:22:25.512836 17 controller.go:102] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to download v1.packages.operators.coreos.com: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.512914052+00:00 stderr F , Header: map[Audit-Id:[1a515550-cfad-4795-a8c6-97dac6a135b8] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.512914052+00:00 stderr F I1013 00:22:25.512847 17 controller.go:109] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. 2025-10-13T00:22:25.517196858+00:00 stderr F E1013 00:22:25.517121 17 controller.go:102] loading OpenAPI spec for "v1.authorization.openshift.io" failed with: failed to download v1.authorization.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.517196858+00:00 stderr F , Header: map[Audit-Id:[ae3bccc7-32a5-4b79-b4e5-79d294c52077] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.517196858+00:00 stderr F I1013 00:22:25.517134 17 controller.go:109] OpenAPI AggregationController: action for item v1.authorization.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.518845012+00:00 stderr F E1013 00:22:25.518770 17 controller.go:113] loading OpenAPI spec for "v1.project.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.518845012+00:00 stderr F I1013 00:22:25.518790 17 controller.go:126] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.519476099+00:00 stderr F E1013 00:22:25.519382 17 controller.go:102] loading OpenAPI spec for "v1.oauth.openshift.io" failed with: failed to download v1.oauth.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.519476099+00:00 stderr F , Header: map[Audit-Id:[a2902116-339b-4296-897d-1b78bc6d449e] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.520382543+00:00 stderr F E1013 00:22:25.520282 17 controller.go:113] loading OpenAPI spec for "v1.route.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.520382543+00:00 stderr F I1013 00:22:25.520283 17 controller.go:109] OpenAPI AggregationController: action for item v1.oauth.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.520474906+00:00 stderr F I1013 00:22:25.520427 17 controller.go:126] OpenAPI AggregationController: action for item v1.route.openshift.io: Rate Limited Requeue. 
2025-10-13T00:22:25.522713636+00:00 stderr F E1013 00:22:25.522630 17 controller.go:113] loading OpenAPI spec for "v1.user.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.522713636+00:00 stderr F I1013 00:22:25.522646 17 controller.go:126] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.525565933+00:00 stderr F E1013 00:22:25.525440 17 controller.go:102] loading OpenAPI spec for "v1.quota.openshift.io" failed with: failed to download v1.quota.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.525565933+00:00 stderr F , Header: map[Audit-Id:[68106c9f-70a5-4432-bb20-4ecb1492afcd] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.525565933+00:00 stderr F I1013 00:22:25.525452 17 controller.go:109] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.527851804+00:00 stderr F E1013 00:22:25.527676 17 controller.go:102] loading OpenAPI spec for "v1.image.openshift.io" failed with: failed to download v1.image.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.527851804+00:00 stderr F , Header: map[Audit-Id:[f199e2ac-086b-4259-a614-53e509f4c1cb] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.527851804+00:00 stderr F I1013 00:22:25.527692 17 controller.go:109] OpenAPI AggregationController: action for item v1.image.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.529503709+00:00 stderr F E1013 00:22:25.529294 17 controller.go:102] loading OpenAPI spec for "v1.security.openshift.io" failed with: failed to download v1.security.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.529503709+00:00 stderr F , Header: map[Audit-Id:[b754c1cd-b520-4a06-ad5f-8e0cc13f51a0] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.529503709+00:00 stderr F I1013 00:22:25.529390 17 controller.go:109] OpenAPI AggregationController: action for item v1.security.openshift.io: Rate Limited Requeue. 
2025-10-13T00:22:25.529653723+00:00 stderr F E1013 00:22:25.529505 17 controller.go:113] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.531737409+00:00 stderr F E1013 00:22:25.531635 17 controller.go:102] loading OpenAPI spec for "v1.apps.openshift.io" failed with: failed to download v1.apps.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.531737409+00:00 stderr F , Header: map[Audit-Id:[e500fa49-d5cd-440b-959a-82d63e1ff4db] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.531737409+00:00 stderr F I1013 00:22:25.531653 17 controller.go:109] OpenAPI AggregationController: action for item v1.apps.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.531996146+00:00 stderr F I1013 00:22:25.531929 17 controller.go:126] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue. 2025-10-13T00:22:25.534415961+00:00 stderr F E1013 00:22:25.534300 17 controller.go:102] loading OpenAPI spec for "v1.build.openshift.io" failed with: failed to download v1.build.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.534415961+00:00 stderr F , Header: map[Audit-Id:[d70ebb63-b63b-4db5-8dc8-570d38329bd9] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.534415961+00:00 stderr F I1013 00:22:25.534313 17 controller.go:109] OpenAPI AggregationController: action for item v1.build.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.535525080+00:00 stderr F E1013 00:22:25.535456 17 controller.go:113] loading OpenAPI spec for "v1.authorization.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.535525080+00:00 stderr F I1013 00:22:25.535474 17 controller.go:126] OpenAPI AggregationController: action for item v1.authorization.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.536615830+00:00 stderr F E1013 00:22:25.536545 17 controller.go:102] loading OpenAPI spec for "v1.template.openshift.io" failed with: failed to download v1.template.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.536615830+00:00 stderr F , Header: map[Audit-Id:[94104033-9b2b-4b9b-b9cc-f755e4cced89] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Mon, 13 Oct 2025 00:22:25 GMT] X-Content-Type-Options:[nosniff]] 2025-10-13T00:22:25.536615830+00:00 stderr F I1013 00:22:25.536557 17 controller.go:109] OpenAPI AggregationController: action for item v1.template.openshift.io: Rate Limited Requeue. 
2025-10-13T00:22:25.538519411+00:00 stderr F E1013 00:22:25.538455 17 controller.go:113] loading OpenAPI spec for "v1.oauth.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.538519411+00:00 stderr F I1013 00:22:25.538471 17 controller.go:126] OpenAPI AggregationController: action for item v1.oauth.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.540208106+00:00 stderr F E1013 00:22:25.540154 17 controller.go:113] loading OpenAPI spec for "v1.quota.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.540208106+00:00 stderr F I1013 00:22:25.540168 17 controller.go:126] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.541685906+00:00 stderr F E1013 00:22:25.541622 17 controller.go:113] loading OpenAPI spec for "v1.image.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.541685906+00:00 stderr F I1013 00:22:25.541646 17 controller.go:126] OpenAPI AggregationController: action for item v1.image.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.543128365+00:00 stderr F E1013 00:22:25.543022 17 controller.go:113] loading OpenAPI spec for "v1.security.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.543128365+00:00 stderr F I1013 00:22:25.543039 17 controller.go:126] OpenAPI AggregationController: action for item v1.security.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.544774459+00:00 stderr F E1013 00:22:25.544724 17 controller.go:113] loading OpenAPI spec for "v1.apps.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.544774459+00:00 stderr F I1013 00:22:25.544742 17 controller.go:126] OpenAPI AggregationController: action for item v1.apps.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.546352872+00:00 stderr F E1013 00:22:25.546271 17 controller.go:113] loading OpenAPI spec for "v1.build.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.546352872+00:00 stderr F I1013 00:22:25.546287 17 controller.go:126] OpenAPI AggregationController: action for item v1.build.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.547623236+00:00 stderr F E1013 00:22:25.547572 17 controller.go:113] loading OpenAPI spec for "v1.template.openshift.io" failed with: Error, could not get list of group versions for APIService 2025-10-13T00:22:25.547623236+00:00 stderr F I1013 00:22:25.547586 17 controller.go:126] OpenAPI AggregationController: action for item v1.template.openshift.io: Rate Limited Requeue. 2025-10-13T00:22:25.589661936+00:00 stderr F W1013 00:22:25.589530 17 patch_genericapiserver.go:204] Request to "/apis/monitoring.openshift.io/v1/alertingrules" (source IP 38.102.83.180:40574, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2025-10-13T00:22:25.608236406+00:00 stderr F I1013 00:22:25.608134 17 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz 2025-10-13T00:22:25.608236406+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-10-13T00:22:25.635087248+00:00 stderr F W1013 00:22:25.634940 17 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-image-registry/configmaps" (source IP 38.102.83.180:51040, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-10-13T00:22:25.711626766+00:00 stderr F I1013 00:22:25.710280 17 patch_genericapiserver.go:93] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-crc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'KubeAPIReadyz' readyz=true 2025-10-13T00:22:25.716999671+00:00 stderr F W1013 00:22:25.716898 17 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.126.11] 2025-10-13T00:22:25.718730087+00:00 stderr F I1013 00:22:25.718656 17 controller.go:624] quota admission added evaluator for: endpoints 2025-10-13T00:22:25.722754635+00:00 stderr F I1013 00:22:25.722691 17 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io 2025-10-13T00:22:25.810577987+00:00 stderr F I1013 00:22:25.810305 17 store.go:1579] "Monitoring resource count at path" resource="featuregates.config.openshift.io" path="//config.openshift.io/featuregates" 2025-10-13T00:22:25.812479548+00:00 stderr F I1013 00:22:25.812396 17 cacher.go:460] cacher (featuregates.config.openshift.io): initialized 2025-10-13T00:22:25.812479548+00:00 stderr F I1013 00:22:25.812415 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=FeatureGate from storage/cacher.go:/config.openshift.io/featuregates 2025-10-13T00:22:26.246863960+00:00 stderr F I1013 00:22:26.246717 17 store.go:1579] "Monitoring resource count at path" resource="networks.config.openshift.io" path="//config.openshift.io/networks" 2025-10-13T00:22:26.249100340+00:00 stderr F I1013 00:22:26.248997 17 cacher.go:460] cacher (networks.config.openshift.io): initialized 2025-10-13T00:22:26.249100340+00:00 stderr F I1013 00:22:26.249028 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Network from storage/cacher.go:/config.openshift.io/networks 2025-10-13T00:22:26.484904831+00:00 stderr F I1013 00:22:26.484742 17 store.go:1579] "Monitoring resource count at path" resource="adminnetworkpolicies.policy.networking.k8s.io" path="//policy.networking.k8s.io/adminnetworkpolicies" 2025-10-13T00:22:26.485779605+00:00 stderr F I1013 00:22:26.485691 17 cacher.go:460] cacher (adminnetworkpolicies.policy.networking.k8s.io): initialized 2025-10-13T00:22:26.485779605+00:00 stderr F I1013 00:22:26.485721 17 reflector.go:351] Caches populated for policy.networking.k8s.io/v1alpha1, Kind=AdminNetworkPolicy from storage/cacher.go:/policy.networking.k8s.io/adminnetworkpolicies 2025-10-13T00:22:26.537728542+00:00 stderr F I1013 00:22:26.537578 17 store.go:1579] "Monitoring resource count at path" resource="egressips.k8s.ovn.org" path="//k8s.ovn.org/egressips" 2025-10-13T00:22:26.539160180+00:00 stderr F I1013 00:22:26.539049 17 cacher.go:460] cacher (egressips.k8s.ovn.org): initialized 2025-10-13T00:22:26.539160180+00:00 stderr F I1013 00:22:26.539076 17 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressIP from 
storage/cacher.go:/k8s.ovn.org/egressips 2025-10-13T00:22:26.622738588+00:00 stderr F I1013 00:22:26.622596 17 store.go:1579] "Monitoring resource count at path" resource="operatorconditions.operators.coreos.com" path="//operators.coreos.com/operatorconditions" 2025-10-13T00:22:26.624276469+00:00 stderr F I1013 00:22:26.624201 17 cacher.go:460] cacher (operatorconditions.operators.coreos.com): initialized 2025-10-13T00:22:26.624276469+00:00 stderr F I1013 00:22:26.624224 17 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OperatorCondition from storage/cacher.go:/operators.coreos.com/operatorconditions 2025-10-13T00:22:26.631523524+00:00 stderr F I1013 00:22:26.631429 17 store.go:1579] "Monitoring resource count at path" resource="operatorconditions.operators.coreos.com" path="//operators.coreos.com/operatorconditions" 2025-10-13T00:22:26.635849450+00:00 stderr F I1013 00:22:26.635783 17 cacher.go:460] cacher (operatorconditions.operators.coreos.com): initialized 2025-10-13T00:22:26.635849450+00:00 stderr F I1013 00:22:26.635806 17 reflector.go:351] Caches populated for operators.coreos.com/v2, Kind=OperatorCondition from storage/cacher.go:/operators.coreos.com/operatorconditions 2025-10-13T00:22:26.644505483+00:00 stderr F I1013 00:22:26.644432 17 store.go:1579] "Monitoring resource count at path" resource="kubecontrollermanagers.operator.openshift.io" path="//operator.openshift.io/kubecontrollermanagers" 2025-10-13T00:22:26.646780754+00:00 stderr F I1013 00:22:26.646717 17 cacher.go:460] cacher (kubecontrollermanagers.operator.openshift.io): initialized 2025-10-13T00:22:26.646780754+00:00 stderr F I1013 00:22:26.646736 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeControllerManager from storage/cacher.go:/operator.openshift.io/kubecontrollermanagers 2025-10-13T00:22:26.831462641+00:00 stderr F I1013 00:22:26.828620 17 store.go:1579] "Monitoring resource count at path" resource="network-attachment-definitions.k8s.cni.cncf.io" path="//k8s.cni.cncf.io/network-attachment-definitions" 2025-10-13T00:22:26.831462641+00:00 stderr F I1013 00:22:26.831344 17 cacher.go:460] cacher (network-attachment-definitions.k8s.cni.cncf.io): initialized 2025-10-13T00:22:26.831462641+00:00 stderr F I1013 00:22:26.831366 17 reflector.go:351] Caches populated for k8s.cni.cncf.io/v1, Kind=NetworkAttachmentDefinition from storage/cacher.go:/k8s.cni.cncf.io/network-attachment-definitions 2025-10-13T00:22:26.893011626+00:00 stderr F I1013 00:22:26.892859 17 store.go:1579] "Monitoring resource count at path" resource="ippools.whereabouts.cni.cncf.io" path="//whereabouts.cni.cncf.io/ippools" 2025-10-13T00:22:26.894844395+00:00 stderr F I1013 00:22:26.894751 17 cacher.go:460] cacher (ippools.whereabouts.cni.cncf.io): initialized 2025-10-13T00:22:26.895471932+00:00 stderr F I1013 00:22:26.894841 17 reflector.go:351] Caches populated for whereabouts.cni.cncf.io/v1alpha1, Kind=IPPool from storage/cacher.go:/whereabouts.cni.cncf.io/ippools 2025-10-13T00:22:27.246446600+00:00 stderr F I1013 00:22:27.246185 17 store.go:1579] "Monitoring resource count at path" resource="servicemonitors.monitoring.coreos.com" path="//monitoring.coreos.com/servicemonitors" 2025-10-13T00:22:27.275006228+00:00 stderr F I1013 00:22:27.274856 17 cacher.go:460] cacher (servicemonitors.monitoring.coreos.com): initialized 2025-10-13T00:22:27.275006228+00:00 stderr F I1013 00:22:27.274902 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=ServiceMonitor from 
storage/cacher.go:/monitoring.coreos.com/servicemonitors 2025-10-13T00:22:27.906318645+00:00 stderr F I1013 00:22:27.905937 17 store.go:1579] "Monitoring resource count at path" resource="alertmanagerconfigs.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagerconfigs" 2025-10-13T00:22:27.909546472+00:00 stderr F I1013 00:22:27.908219 17 cacher.go:460] cacher (alertmanagerconfigs.monitoring.coreos.com): initialized 2025-10-13T00:22:27.909546472+00:00 stderr F I1013 00:22:27.908246 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1alpha1, Kind=AlertmanagerConfig from storage/cacher.go:/monitoring.coreos.com/alertmanagerconfigs 2025-10-13T00:22:27.917696301+00:00 stderr F I1013 00:22:27.917559 17 store.go:1579] "Monitoring resource count at path" resource="alertmanagerconfigs.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagerconfigs" 2025-10-13T00:22:27.921886014+00:00 stderr F I1013 00:22:27.921773 17 cacher.go:460] cacher (alertmanagerconfigs.monitoring.coreos.com): initialized 2025-10-13T00:22:27.921886014+00:00 stderr F I1013 00:22:27.921813 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1beta1, Kind=AlertmanagerConfig from storage/cacher.go:/monitoring.coreos.com/alertmanagerconfigs 2025-10-13T00:22:27.935291374+00:00 stderr F I1013 00:22:27.934971 17 store.go:1579] "Monitoring resource count at path" resource="machineautoscalers.autoscaling.openshift.io" path="//autoscaling.openshift.io/machineautoscalers" 2025-10-13T00:22:27.938314526+00:00 stderr F I1013 00:22:27.937561 17 cacher.go:460] cacher (machineautoscalers.autoscaling.openshift.io): initialized 2025-10-13T00:22:27.938314526+00:00 stderr F I1013 00:22:27.937582 17 reflector.go:351] Caches populated for autoscaling.openshift.io/v1beta1, Kind=MachineAutoscaler from storage/cacher.go:/autoscaling.openshift.io/machineautoscalers 2025-10-13T00:22:28.015414969+00:00 stderr F I1013 00:22:28.015155 17 store.go:1579] "Monitoring resource count at path" resource="ingresscontrollers.operator.openshift.io" path="//operator.openshift.io/ingresscontrollers" 2025-10-13T00:22:28.019088948+00:00 stderr F I1013 00:22:28.017733 17 cacher.go:460] cacher (ingresscontrollers.operator.openshift.io): initialized 2025-10-13T00:22:28.019088948+00:00 stderr F I1013 00:22:28.017784 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=IngressController from storage/cacher.go:/operator.openshift.io/ingresscontrollers 2025-10-13T00:22:28.192439409+00:00 stderr F I1013 00:22:28.191499 17 store.go:1579] "Monitoring resource count at path" resource="clusteroperators.config.openshift.io" path="//config.openshift.io/clusteroperators" 2025-10-13T00:22:28.215672144+00:00 stderr F I1013 00:22:28.215559 17 cacher.go:460] cacher (clusteroperators.config.openshift.io): initialized 2025-10-13T00:22:28.215672144+00:00 stderr F I1013 00:22:28.215586 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ClusterOperator from storage/cacher.go:/config.openshift.io/clusteroperators 2025-10-13T00:22:28.267776725+00:00 stderr F I1013 00:22:28.267623 17 store.go:1579] "Monitoring resource count at path" resource="openshiftapiservers.operator.openshift.io" path="//operator.openshift.io/openshiftapiservers" 2025-10-13T00:22:28.269720698+00:00 stderr F I1013 00:22:28.269639 17 cacher.go:460] cacher (openshiftapiservers.operator.openshift.io): initialized 2025-10-13T00:22:28.269720698+00:00 stderr F I1013 00:22:28.269657 17 reflector.go:351] Caches populated for 
operator.openshift.io/v1, Kind=OpenShiftAPIServer from storage/cacher.go:/operator.openshift.io/openshiftapiservers 2025-10-13T00:22:28.362710618+00:00 stderr F I1013 00:22:28.362587 17 store.go:1579] "Monitoring resource count at path" resource="prometheuses.monitoring.coreos.com" path="//monitoring.coreos.com/prometheuses" 2025-10-13T00:22:28.363941992+00:00 stderr F I1013 00:22:28.363737 17 cacher.go:460] cacher (prometheuses.monitoring.coreos.com): initialized 2025-10-13T00:22:28.363941992+00:00 stderr F I1013 00:22:28.363766 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Prometheus from storage/cacher.go:/monitoring.coreos.com/prometheuses 2025-10-13T00:22:28.374813774+00:00 stderr F I1013 00:22:28.374241 17 store.go:1579] "Monitoring resource count at path" resource="egressfirewalls.k8s.ovn.org" path="//k8s.ovn.org/egressfirewalls" 2025-10-13T00:22:28.375396100+00:00 stderr F I1013 00:22:28.375285 17 cacher.go:460] cacher (egressfirewalls.k8s.ovn.org): initialized 2025-10-13T00:22:28.375470382+00:00 stderr F I1013 00:22:28.375416 17 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressFirewall from storage/cacher.go:/k8s.ovn.org/egressfirewalls 2025-10-13T00:22:28.396921408+00:00 stderr F I1013 00:22:28.396791 17 store.go:1579] "Monitoring resource count at path" resource="thanosrulers.monitoring.coreos.com" path="//monitoring.coreos.com/thanosrulers" 2025-10-13T00:22:28.398593683+00:00 stderr F I1013 00:22:28.398499 17 cacher.go:460] cacher (thanosrulers.monitoring.coreos.com): initialized 2025-10-13T00:22:28.398593683+00:00 stderr F I1013 00:22:28.398532 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=ThanosRuler from storage/cacher.go:/monitoring.coreos.com/thanosrulers 2025-10-13T00:22:28.434081728+00:00 stderr F I1013 00:22:28.433963 17 store.go:1579] "Monitoring resource count at path" resource="clusterserviceversions.operators.coreos.com" path="//operators.coreos.com/clusterserviceversions" 2025-10-13T00:22:28.437216562+00:00 stderr F I1013 00:22:28.437083 17 cacher.go:460] cacher (clusterserviceversions.operators.coreos.com): initialized 2025-10-13T00:22:28.437216562+00:00 stderr F I1013 00:22:28.437107 17 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=ClusterServiceVersion from storage/cacher.go:/operators.coreos.com/clusterserviceversions 2025-10-13T00:22:28.454262711+00:00 stderr F I1013 00:22:28.453958 17 store.go:1579] "Monitoring resource count at path" resource="clusterversions.config.openshift.io" path="//config.openshift.io/clusterversions" 2025-10-13T00:22:28.459679316+00:00 stderr F I1013 00:22:28.459575 17 cacher.go:460] cacher (clusterversions.config.openshift.io): initialized 2025-10-13T00:22:28.459679316+00:00 stderr F I1013 00:22:28.459595 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ClusterVersion from storage/cacher.go:/config.openshift.io/clusterversions 2025-10-13T00:22:28.511543881+00:00 stderr F I1013 00:22:28.510642 17 store.go:1579] "Monitoring resource count at path" resource="proxies.config.openshift.io" path="//config.openshift.io/proxies" 2025-10-13T00:22:28.512823595+00:00 stderr F I1013 00:22:28.512675 17 cacher.go:460] cacher (proxies.config.openshift.io): initialized 2025-10-13T00:22:28.512823595+00:00 stderr F I1013 00:22:28.512691 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Proxy from storage/cacher.go:/config.openshift.io/proxies 2025-10-13T00:22:28.586656951+00:00 stderr F I1013 00:22:28.586549 17 
store.go:1579] "Monitoring resource count at path" resource="networks.operator.openshift.io" path="//operator.openshift.io/networks" 2025-10-13T00:22:28.589430345+00:00 stderr F I1013 00:22:28.589307 17 cacher.go:460] cacher (networks.operator.openshift.io): initialized 2025-10-13T00:22:28.589430345+00:00 stderr F I1013 00:22:28.589342 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Network from storage/cacher.go:/operator.openshift.io/networks 2025-10-13T00:22:28.649198943+00:00 stderr F I1013 00:22:28.648804 17 store.go:1579] "Monitoring resource count at path" resource="infrastructures.config.openshift.io" path="//config.openshift.io/infrastructures" 2025-10-13T00:22:28.650191869+00:00 stderr F I1013 00:22:28.650093 17 cacher.go:460] cacher (infrastructures.config.openshift.io): initialized 2025-10-13T00:22:28.650191869+00:00 stderr F I1013 00:22:28.650114 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Infrastructure from storage/cacher.go:/config.openshift.io/infrastructures 2025-10-13T00:22:28.666736674+00:00 stderr F I1013 00:22:28.666596 17 store.go:1579] "Monitoring resource count at path" resource="egressrouters.network.operator.openshift.io" path="//network.operator.openshift.io/egressrouters" 2025-10-13T00:22:28.667750072+00:00 stderr F I1013 00:22:28.667635 17 cacher.go:460] cacher (egressrouters.network.operator.openshift.io): initialized 2025-10-13T00:22:28.667750072+00:00 stderr F I1013 00:22:28.667677 17 reflector.go:351] Caches populated for network.operator.openshift.io/v1, Kind=EgressRouter from storage/cacher.go:/network.operator.openshift.io/egressrouters 2025-10-13T00:22:28.677379071+00:00 stderr F I1013 00:22:28.677197 17 store.go:1579] "Monitoring resource count at path" resource="machineconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/machineconfigs" 2025-10-13T00:22:28.693134554+00:00 stderr F I1013 00:22:28.693029 17 cacher.go:460] cacher (machineconfigs.machineconfiguration.openshift.io): initialized 2025-10-13T00:22:28.693134554+00:00 stderr F I1013 00:22:28.693056 17 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=MachineConfig from storage/cacher.go:/machineconfiguration.openshift.io/machineconfigs 2025-10-13T00:22:28.793745910+00:00 stderr F I1013 00:22:28.793612 17 store.go:1579] "Monitoring resource count at path" resource="adminpolicybasedexternalroutes.k8s.ovn.org" path="//k8s.ovn.org/adminpolicybasedexternalroutes" 2025-10-13T00:22:28.794441849+00:00 stderr F I1013 00:22:28.794245 17 cacher.go:460] cacher (adminpolicybasedexternalroutes.k8s.ovn.org): initialized 2025-10-13T00:22:28.794441849+00:00 stderr F I1013 00:22:28.794270 17 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=AdminPolicyBasedExternalRoute from storage/cacher.go:/k8s.ovn.org/adminpolicybasedexternalroutes 2025-10-13T00:22:28.837289641+00:00 stderr F I1013 00:22:28.837168 17 store.go:1579] "Monitoring resource count at path" resource="alertmanagers.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagers" 2025-10-13T00:22:28.839868080+00:00 stderr F I1013 00:22:28.839064 17 cacher.go:460] cacher (alertmanagers.monitoring.coreos.com): initialized 2025-10-13T00:22:28.839868080+00:00 stderr F I1013 00:22:28.839184 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Alertmanager from storage/cacher.go:/monitoring.coreos.com/alertmanagers 2025-10-13T00:22:28.952730215+00:00 stderr F I1013 00:22:28.952630 17 store.go:1579] 
"Monitoring resource count at path" resource="baselineadminnetworkpolicies.policy.networking.k8s.io" path="//policy.networking.k8s.io/baselineadminnetworkpolicies" 2025-10-13T00:22:28.953512586+00:00 stderr F I1013 00:22:28.953435 17 cacher.go:460] cacher (baselineadminnetworkpolicies.policy.networking.k8s.io): initialized 2025-10-13T00:22:28.953512586+00:00 stderr F I1013 00:22:28.953458 17 reflector.go:351] Caches populated for policy.networking.k8s.io/v1alpha1, Kind=BaselineAdminNetworkPolicy from storage/cacher.go:/policy.networking.k8s.io/baselineadminnetworkpolicies 2025-10-13T00:22:28.983238386+00:00 stderr F I1013 00:22:28.983138 17 store.go:1579] "Monitoring resource count at path" resource="ipaddressclaims.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddressclaims" 2025-10-13T00:22:28.984232593+00:00 stderr F I1013 00:22:28.984142 17 cacher.go:460] cacher (ipaddressclaims.ipam.cluster.x-k8s.io): initialized 2025-10-13T00:22:28.984232593+00:00 stderr F I1013 00:22:28.984168 17 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddressClaim from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddressclaims 2025-10-13T00:22:28.991665142+00:00 stderr F I1013 00:22:28.991558 17 store.go:1579] "Monitoring resource count at path" resource="ipaddressclaims.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddressclaims" 2025-10-13T00:22:28.992408382+00:00 stderr F I1013 00:22:28.992266 17 cacher.go:460] cacher (ipaddressclaims.ipam.cluster.x-k8s.io): initialized 2025-10-13T00:22:28.992408382+00:00 stderr F I1013 00:22:28.992282 17 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1beta1, Kind=IPAddressClaim from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddressclaims 2025-10-13T00:22:28.995390043+00:00 stderr F I1013 00:22:28.995304 17 controller.go:624] quota admission added evaluator for: namespaces 2025-10-13T00:22:29.004531538+00:00 stderr F I1013 00:22:29.004189 17 store.go:1579] "Monitoring resource count at path" resource="egressqoses.k8s.ovn.org" path="//k8s.ovn.org/egressqoses" 2025-10-13T00:22:29.005182796+00:00 stderr F I1013 00:22:29.005110 17 cacher.go:460] cacher (egressqoses.k8s.ovn.org): initialized 2025-10-13T00:22:29.005182796+00:00 stderr F I1013 00:22:29.005135 17 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressQoS from storage/cacher.go:/k8s.ovn.org/egressqoses 2025-10-13T00:22:29.008368192+00:00 stderr F I1013 00:22:29.008236 17 controller.go:624] quota admission added evaluator for: serviceaccounts 2025-10-13T00:22:29.193158421+00:00 stderr F I1013 00:22:29.193051 17 store.go:1579] "Monitoring resource count at path" resource="controlplanemachinesets.machine.openshift.io" path="//machine.openshift.io/controlplanemachinesets" 2025-10-13T00:22:29.194714263+00:00 stderr F I1013 00:22:29.194661 17 cacher.go:460] cacher (controlplanemachinesets.machine.openshift.io): initialized 2025-10-13T00:22:29.194730143+00:00 stderr F I1013 00:22:29.194715 17 reflector.go:351] Caches populated for machine.openshift.io/v1, Kind=ControlPlaneMachineSet from storage/cacher.go:/machine.openshift.io/controlplanemachinesets 2025-10-13T00:22:29.211120974+00:00 stderr F I1013 00:22:29.211034 17 store.go:1579] "Monitoring resource count at path" resource="alertrelabelconfigs.monitoring.openshift.io" path="//monitoring.openshift.io/alertrelabelconfigs" 2025-10-13T00:22:29.211882285+00:00 stderr F I1013 00:22:29.211837 17 cacher.go:460] cacher (alertrelabelconfigs.monitoring.openshift.io): initialized 
2025-10-13T00:22:29.211916055+00:00 stderr F I1013 00:22:29.211889 17 reflector.go:351] Caches populated for monitoring.openshift.io/v1, Kind=AlertRelabelConfig from storage/cacher.go:/monitoring.openshift.io/alertrelabelconfigs 2025-10-13T00:22:29.408133642+00:00 stderr F I1013 00:22:29.408015 17 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.411417090+00:00 stderr F I1013 00:22:29.411345 17 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.423651039+00:00 stderr F I1013 00:22:29.423533 17 store.go:1579] "Monitoring resource count at path" resource="apirequestcounts.apiserver.openshift.io" path="//apiserver.openshift.io/apirequestcounts" 2025-10-13T00:22:29.440743639+00:00 stderr F I1013 00:22:29.440671 17 store.go:1579] "Monitoring resource count at path" resource="clusterresourcequotas.quota.openshift.io" path="//quota.openshift.io/clusterresourcequotas" 2025-10-13T00:22:29.447337786+00:00 stderr F I1013 00:22:29.446849 17 cacher.go:460] cacher (clusterresourcequotas.quota.openshift.io): initialized 2025-10-13T00:22:29.447337786+00:00 stderr F I1013 00:22:29.446870 17 reflector.go:351] Caches populated for quota.openshift.io/v1, Kind=ClusterResourceQuota from storage/cacher.go:/quota.openshift.io/clusterresourcequotas 2025-10-13T00:22:29.447814419+00:00 stderr F I1013 00:22:29.447740 17 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.452554667+00:00 stderr F I1013 00:22:29.451679 17 store.go:1579] "Monitoring resource count at path" resource="securitycontextconstraints.security.openshift.io" path="//security.openshift.io/securitycontextconstraints" 2025-10-13T00:22:29.459302698+00:00 stderr F I1013 00:22:29.459213 17 cacher.go:460] cacher (securitycontextconstraints.security.openshift.io): initialized 2025-10-13T00:22:29.459302698+00:00 stderr F I1013 00:22:29.459239 17 reflector.go:351] Caches populated for security.openshift.io/v1, Kind=SecurityContextConstraints from storage/cacher.go:/security.openshift.io/securitycontextconstraints 2025-10-13T00:22:29.468132196+00:00 stderr F I1013 00:22:29.468033 17 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.471162367+00:00 stderr F I1013 00:22:29.471013 17 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io 2025-10-13T00:22:29.474229900+00:00 stderr F I1013 00:22:29.474151 17 trace.go:236] Trace[642891268]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:41d6c9dd-7b25-49be-9f61-b6531dfcad02,client:38.102.83.180,api-group:coordination.k8s.io,api-version:v1,name:crc,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc,user-agent:kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21,verb:PUT (13-Oct-2025 00:22:26.953) (total time: 2520ms): 2025-10-13T00:22:29.474229900+00:00 stderr F Trace[642891268]: ["GuaranteedUpdate etcd3" audit-id:41d6c9dd-7b25-49be-9f61-b6531dfcad02,key:/leases/kube-node-lease/crc,type:*coordination.Lease,resource:leases.coordination.k8s.io 2520ms (00:22:26.953) 2025-10-13T00:22:29.474229900+00:00 stderr F Trace[642891268]: ---"About to Encode" 2517ms (00:22:29.471)] 
2025-10-13T00:22:29.474229900+00:00 stderr F Trace[642891268]: [2.520379197s] [2.520379197s] END 2025-10-13T00:22:29.530454552+00:00 stderr F I1013 00:22:29.530354 17 trace.go:236] Trace[454378606]: "Update" accept:application/json, */*,audit-id:4ec173b4-5353-4ae5-ac39-2d96f4c5c883,client:38.102.83.180,api-group:coordination.k8s.io,api-version:v1,name:version,subresource:,namespace:openshift-cluster-version,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-version/leases/version,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (13-Oct-2025 00:22:25.904) (total time: 3625ms): 2025-10-13T00:22:29.530454552+00:00 stderr F Trace[454378606]: ["GuaranteedUpdate etcd3" audit-id:4ec173b4-5353-4ae5-ac39-2d96f4c5c883,key:/leases/openshift-cluster-version/version,type:*coordination.Lease,resource:leases.coordination.k8s.io 3625ms (00:22:25.905) 2025-10-13T00:22:29.530454552+00:00 stderr F Trace[454378606]: ---"About to Encode" 3623ms (00:22:29.528)] 2025-10-13T00:22:29.530454552+00:00 stderr F Trace[454378606]: [3.625424704s] [3.625424704s] END 2025-10-13T00:22:29.547363886+00:00 stderr F I1013 00:22:29.546246 17 store.go:1579] "Monitoring resource count at path" resource="controllerconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/controllerconfigs" 2025-10-13T00:22:29.563261024+00:00 stderr F I1013 00:22:29.563138 17 cacher.go:460] cacher (controllerconfigs.machineconfiguration.openshift.io): initialized 2025-10-13T00:22:29.563261024+00:00 stderr F I1013 00:22:29.563196 17 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=ControllerConfig from storage/cacher.go:/machineconfiguration.openshift.io/controllerconfigs 2025-10-13T00:22:29.585837861+00:00 stderr F I1013 00:22:29.585666 17 store.go:1579] "Monitoring resource count at path" resource="kubeapiservers.operator.openshift.io" path="//operator.openshift.io/kubeapiservers" 2025-10-13T00:22:29.595030488+00:00 stderr F I1013 00:22:29.594875 17 cacher.go:460] cacher (kubeapiservers.operator.openshift.io): initialized 2025-10-13T00:22:29.595030488+00:00 stderr F I1013 00:22:29.594908 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeAPIServer from storage/cacher.go:/operator.openshift.io/kubeapiservers 2025-10-13T00:22:29.608047688+00:00 stderr F I1013 00:22:29.605846 17 store.go:1579] "Monitoring resource count at path" resource="machines.machine.openshift.io" path="//machine.openshift.io/machines" 2025-10-13T00:22:29.608906031+00:00 stderr F I1013 00:22:29.608833 17 cacher.go:460] cacher (machines.machine.openshift.io): initialized 2025-10-13T00:22:29.608906031+00:00 stderr F I1013 00:22:29.608865 17 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=Machine from storage/cacher.go:/machine.openshift.io/machines 2025-10-13T00:22:29.617576264+00:00 stderr F I1013 00:22:29.617488 17 store.go:1579] "Monitoring resource count at path" resource="operatorpkis.network.operator.openshift.io" path="//network.operator.openshift.io/operatorpkis" 2025-10-13T00:22:29.619227239+00:00 stderr F I1013 00:22:29.619165 17 cacher.go:460] cacher (operatorpkis.network.operator.openshift.io): initialized 2025-10-13T00:22:29.619227239+00:00 stderr F I1013 00:22:29.619186 17 reflector.go:351] Caches populated for network.operator.openshift.io/v1, Kind=OperatorPKI from storage/cacher.go:/network.operator.openshift.io/operatorpkis 
2025-10-13T00:22:29.631221011+00:00 stderr F I1013 00:22:29.631106 17 store.go:1579] "Monitoring resource count at path" resource="egressservices.k8s.ovn.org" path="//k8s.ovn.org/egressservices" 2025-10-13T00:22:29.632360702+00:00 stderr F I1013 00:22:29.632212 17 cacher.go:460] cacher (egressservices.k8s.ovn.org): initialized 2025-10-13T00:22:29.632360702+00:00 stderr F I1013 00:22:29.632234 17 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressService from storage/cacher.go:/k8s.ovn.org/egressservices 2025-10-13T00:22:29.640354147+00:00 stderr F I1013 00:22:29.640242 17 store.go:1579] "Monitoring resource count at path" resource="consolequickstarts.console.openshift.io" path="//console.openshift.io/consolequickstarts" 2025-10-13T00:22:29.648659470+00:00 stderr F I1013 00:22:29.648565 17 cacher.go:460] cacher (consolequickstarts.console.openshift.io): initialized 2025-10-13T00:22:29.648659470+00:00 stderr F I1013 00:22:29.648593 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleQuickStart from storage/cacher.go:/console.openshift.io/consolequickstarts 2025-10-13T00:22:29.657298113+00:00 stderr F I1013 00:22:29.655971 17 store.go:1579] "Monitoring resource count at path" resource="prometheusrules.monitoring.coreos.com" path="//monitoring.coreos.com/prometheusrules" 2025-10-13T00:22:29.669516391+00:00 stderr F I1013 00:22:29.668228 17 cacher.go:460] cacher (prometheusrules.monitoring.coreos.com): initialized 2025-10-13T00:22:29.669516391+00:00 stderr F I1013 00:22:29.668255 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=PrometheusRule from storage/cacher.go:/monitoring.coreos.com/prometheusrules 2025-10-13T00:22:29.694933985+00:00 stderr F I1013 00:22:29.694833 17 cacher.go:460] cacher (apirequestcounts.apiserver.openshift.io): initialized 2025-10-13T00:22:29.694933985+00:00 stderr F I1013 00:22:29.694868 17 reflector.go:351] Caches populated for apiserver.openshift.io/v1, Kind=APIRequestCount from storage/cacher.go:/apiserver.openshift.io/apirequestcounts 2025-10-13T00:22:29.851612568+00:00 stderr F I1013 00:22:29.851506 17 store.go:1579] "Monitoring resource count at path" resource="rolebindingrestrictions.authorization.openshift.io" path="//authorization.openshift.io/rolebindingrestrictions" 2025-10-13T00:22:29.852768159+00:00 stderr F I1013 00:22:29.852650 17 cacher.go:460] cacher (rolebindingrestrictions.authorization.openshift.io): initialized 2025-10-13T00:22:29.852768159+00:00 stderr F I1013 00:22:29.852671 17 reflector.go:351] Caches populated for authorization.openshift.io/v1, Kind=RoleBindingRestriction from storage/cacher.go:/authorization.openshift.io/rolebindingrestrictions 2025-10-13T00:22:29.907457240+00:00 stderr F I1013 00:22:29.907320 17 store.go:1579] "Monitoring resource count at path" resource="ipaddresses.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddresses" 2025-10-13T00:22:29.910428330+00:00 stderr F I1013 00:22:29.909081 17 cacher.go:460] cacher (ipaddresses.ipam.cluster.x-k8s.io): initialized 2025-10-13T00:22:29.910428330+00:00 stderr F I1013 00:22:29.909107 17 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddress from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddresses 2025-10-13T00:22:29.914348305+00:00 stderr F I1013 00:22:29.914268 17 store.go:1579] "Monitoring resource count at path" resource="ipaddresses.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddresses" 2025-10-13T00:22:29.915439095+00:00 stderr F I1013 00:22:29.915037 17 
cacher.go:460] cacher (ipaddresses.ipam.cluster.x-k8s.io): initialized 2025-10-13T00:22:29.915439095+00:00 stderr F I1013 00:22:29.915061 17 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1beta1, Kind=IPAddress from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddresses 2025-10-13T00:22:29.933446999+00:00 stderr F I1013 00:22:29.933214 17 store.go:1579] "Monitoring resource count at path" resource="dnsrecords.ingress.operator.openshift.io" path="//ingress.operator.openshift.io/dnsrecords" 2025-10-13T00:22:29.934400615+00:00 stderr F I1013 00:22:29.934263 17 cacher.go:460] cacher (dnsrecords.ingress.operator.openshift.io): initialized 2025-10-13T00:22:29.934400615+00:00 stderr F I1013 00:22:29.934288 17 reflector.go:351] Caches populated for ingress.operator.openshift.io/v1, Kind=DNSRecord from storage/cacher.go:/ingress.operator.openshift.io/dnsrecords 2025-10-13T00:22:29.990074742+00:00 stderr F I1013 00:22:29.989819 17 store.go:1579] "Monitoring resource count at path" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" path="//controlplane.operator.openshift.io/podnetworkconnectivitychecks" 2025-10-13T00:22:29.995958920+00:00 stderr F I1013 00:22:29.995811 17 cacher.go:460] cacher (podnetworkconnectivitychecks.controlplane.operator.openshift.io): initialized 2025-10-13T00:22:29.995958920+00:00 stderr F I1013 00:22:29.995833 17 reflector.go:351] Caches populated for controlplane.operator.openshift.io/v1alpha1, Kind=PodNetworkConnectivityCheck from storage/cacher.go:/controlplane.operator.openshift.io/podnetworkconnectivitychecks 2025-10-13T00:22:30.007535901+00:00 stderr F I1013 00:22:30.007441 17 store.go:1579] "Monitoring resource count at path" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediationtemplates" 2025-10-13T00:22:30.008440616+00:00 stderr F I1013 00:22:30.008385 17 cacher.go:460] cacher (metal3remediationtemplates.infrastructure.cluster.x-k8s.io): initialized 2025-10-13T00:22:30.008456106+00:00 stderr F I1013 00:22:30.008432 17 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1alpha5, Kind=Metal3RemediationTemplate from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediationtemplates 2025-10-13T00:22:30.014438617+00:00 stderr F I1013 00:22:30.014344 17 store.go:1579] "Monitoring resource count at path" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediationtemplates" 2025-10-13T00:22:30.015501696+00:00 stderr F I1013 00:22:30.015445 17 cacher.go:460] cacher (metal3remediationtemplates.infrastructure.cluster.x-k8s.io): initialized 2025-10-13T00:22:30.015501696+00:00 stderr F I1013 00:22:30.015476 17 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1beta1, Kind=Metal3RemediationTemplate from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediationtemplates 2025-10-13T00:22:30.058651446+00:00 stderr F I1013 00:22:30.058542 17 store.go:1579] "Monitoring resource count at path" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" path="//whereabouts.cni.cncf.io/overlappingrangeipreservations" 2025-10-13T00:22:30.059767156+00:00 stderr F I1013 00:22:30.059679 17 cacher.go:460] cacher (overlappingrangeipreservations.whereabouts.cni.cncf.io): initialized 2025-10-13T00:22:30.059767156+00:00 stderr F I1013 00:22:30.059716 17 reflector.go:351] Caches populated for whereabouts.cni.cncf.io/v1alpha1, 
Kind=OverlappingRangeIPReservation from storage/cacher.go:/whereabouts.cni.cncf.io/overlappingrangeipreservations 2025-10-13T00:22:30.100234274+00:00 stderr F I1013 00:22:30.100081 17 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io 2025-10-13T00:22:30.122079142+00:00 stderr F I1013 00:22:30.121947 17 store.go:1579] "Monitoring resource count at path" resource="operatorgroups.operators.coreos.com" path="//operators.coreos.com/operatorgroups" 2025-10-13T00:22:30.124839596+00:00 stderr F I1013 00:22:30.124770 17 cacher.go:460] cacher (operatorgroups.operators.coreos.com): initialized 2025-10-13T00:22:30.124839596+00:00 stderr F I1013 00:22:30.124793 17 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OperatorGroup from storage/cacher.go:/operators.coreos.com/operatorgroups 2025-10-13T00:22:30.130652152+00:00 stderr F I1013 00:22:30.130577 17 store.go:1579] "Monitoring resource count at path" resource="operatorgroups.operators.coreos.com" path="//operators.coreos.com/operatorgroups" 2025-10-13T00:22:30.133052577+00:00 stderr F I1013 00:22:30.132970 17 cacher.go:460] cacher (operatorgroups.operators.coreos.com): initialized 2025-10-13T00:22:30.133052577+00:00 stderr F I1013 00:22:30.132991 17 reflector.go:351] Caches populated for operators.coreos.com/v1alpha2, Kind=OperatorGroup from storage/cacher.go:/operators.coreos.com/operatorgroups 2025-10-13T00:22:30.200941242+00:00 stderr F I1013 00:22:30.200794 17 store.go:1579] "Monitoring resource count at path" resource="machinesets.machine.openshift.io" path="//machine.openshift.io/machinesets" 2025-10-13T00:22:30.202286029+00:00 stderr F I1013 00:22:30.202152 17 cacher.go:460] cacher (machinesets.machine.openshift.io): initialized 2025-10-13T00:22:30.202286029+00:00 stderr F I1013 00:22:30.202215 17 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=MachineSet from storage/cacher.go:/machine.openshift.io/machinesets 2025-10-13T00:22:30.499203583+00:00 stderr F I1013 00:22:30.499040 17 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io 2025-10-13T00:22:30.604263949+00:00 stderr F I1013 00:22:30.604160 17 store.go:1579] "Monitoring resource count at path" resource="alertingrules.monitoring.openshift.io" path="//monitoring.openshift.io/alertingrules" 2025-10-13T00:22:30.605796890+00:00 stderr F I1013 00:22:30.605708 17 cacher.go:460] cacher (alertingrules.monitoring.openshift.io): initialized 2025-10-13T00:22:30.605796890+00:00 stderr F I1013 00:22:30.605730 17 reflector.go:351] Caches populated for monitoring.openshift.io/v1, Kind=AlertingRule from storage/cacher.go:/monitoring.openshift.io/alertingrules 2025-10-13T00:22:30.713286380+00:00 stderr F I1013 00:22:30.713165 17 store.go:1579] "Monitoring resource count at path" resource="probes.monitoring.coreos.com" path="//monitoring.coreos.com/probes" 2025-10-13T00:22:30.714163684+00:00 stderr F I1013 00:22:30.714092 17 cacher.go:460] cacher (probes.monitoring.coreos.com): initialized 2025-10-13T00:22:30.714163684+00:00 stderr F I1013 00:22:30.714123 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Probe from storage/cacher.go:/monitoring.coreos.com/probes 2025-10-13T00:22:30.851496786+00:00 stderr F I1013 00:22:30.850955 17 store.go:1579] "Monitoring resource count at path" resource="machinehealthchecks.machine.openshift.io" path="//machine.openshift.io/machinehealthchecks" 2025-10-13T00:22:30.852840202+00:00 stderr F I1013 00:22:30.852738 17 
cacher.go:460] cacher (machinehealthchecks.machine.openshift.io): initialized 2025-10-13T00:22:30.852840202+00:00 stderr F I1013 00:22:30.852775 17 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=MachineHealthCheck from storage/cacher.go:/machine.openshift.io/machinehealthchecks 2025-10-13T00:22:31.122378761+00:00 stderr F I1013 00:22:31.122207 17 store.go:1579] "Monitoring resource count at path" resource="projecthelmchartrepositories.helm.openshift.io" path="//helm.openshift.io/projecthelmchartrepositories" 2025-10-13T00:22:31.123533912+00:00 stderr F I1013 00:22:31.123423 17 cacher.go:460] cacher (projecthelmchartrepositories.helm.openshift.io): initialized 2025-10-13T00:22:31.123533912+00:00 stderr F I1013 00:22:31.123453 17 reflector.go:351] Caches populated for helm.openshift.io/v1beta1, Kind=ProjectHelmChartRepository from storage/cacher.go:/helm.openshift.io/projecthelmchartrepositories 2025-10-13T00:22:31.312445602+00:00 stderr F I1013 00:22:31.312252 17 store.go:1579] "Monitoring resource count at path" resource="installplans.operators.coreos.com" path="//operators.coreos.com/installplans" 2025-10-13T00:22:31.313823089+00:00 stderr F I1013 00:22:31.313504 17 cacher.go:460] cacher (installplans.operators.coreos.com): initialized 2025-10-13T00:22:31.313823089+00:00 stderr F I1013 00:22:31.313527 17 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=InstallPlan from storage/cacher.go:/operators.coreos.com/installplans 2025-10-13T00:22:31.507437226+00:00 stderr F I1013 00:22:31.507297 17 controller.go:624] quota admission added evaluator for: daemonsets.apps 2025-10-13T00:22:31.624152094+00:00 stderr F I1013 00:22:31.624002 17 store.go:1579] "Monitoring resource count at path" resource="catalogsources.operators.coreos.com" path="//operators.coreos.com/catalogsources" 2025-10-13T00:22:31.629152419+00:00 stderr F I1013 00:22:31.629057 17 cacher.go:460] cacher (catalogsources.operators.coreos.com): initialized 2025-10-13T00:22:31.629152419+00:00 stderr F I1013 00:22:31.629091 17 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=CatalogSource from storage/cacher.go:/operators.coreos.com/catalogsources 2025-10-13T00:22:31.966663045+00:00 stderr F I1013 00:22:31.966541 17 store.go:1579] "Monitoring resource count at path" resource="podmonitors.monitoring.coreos.com" path="//monitoring.coreos.com/podmonitors" 2025-10-13T00:22:31.968287149+00:00 stderr F I1013 00:22:31.968203 17 cacher.go:460] cacher (podmonitors.monitoring.coreos.com): initialized 2025-10-13T00:22:31.968287149+00:00 stderr F I1013 00:22:31.968221 17 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=PodMonitor from storage/cacher.go:/monitoring.coreos.com/podmonitors 2025-10-13T00:22:32.251569997+00:00 stderr F I1013 00:22:32.251435 17 store.go:1579] "Monitoring resource count at path" resource="kubeschedulers.operator.openshift.io" path="//operator.openshift.io/kubeschedulers" 2025-10-13T00:22:32.253628443+00:00 stderr F I1013 00:22:32.253521 17 cacher.go:460] cacher (kubeschedulers.operator.openshift.io): initialized 2025-10-13T00:22:32.253628443+00:00 stderr F I1013 00:22:32.253541 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeScheduler from storage/cacher.go:/operator.openshift.io/kubeschedulers 2025-10-13T00:22:32.702641818+00:00 stderr F I1013 00:22:32.702507 17 store.go:1579] "Monitoring resource count at path" resource="subscriptions.operators.coreos.com" 
path="//operators.coreos.com/subscriptions" 2025-10-13T00:22:32.706105071+00:00 stderr F I1013 00:22:32.705065 17 cacher.go:460] cacher (subscriptions.operators.coreos.com): initialized 2025-10-13T00:22:32.706105071+00:00 stderr F I1013 00:22:32.705097 17 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=Subscription from storage/cacher.go:/operators.coreos.com/subscriptions 2025-10-13T00:22:32.706169042+00:00 stderr F I1013 00:22:32.706053 17 controller.go:624] quota admission added evaluator for: servicemonitors.monitoring.coreos.com 2025-10-13T00:22:32.726828388+00:00 stderr F I1013 00:22:32.725042 17 store.go:1579] "Monitoring resource count at path" resource="metal3remediations.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediations" 2025-10-13T00:22:32.728882253+00:00 stderr F I1013 00:22:32.728781 17 cacher.go:460] cacher (metal3remediations.infrastructure.cluster.x-k8s.io): initialized 2025-10-13T00:22:32.728882253+00:00 stderr F I1013 00:22:32.728816 17 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1alpha5, Kind=Metal3Remediation from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediations 2025-10-13T00:22:32.737885995+00:00 stderr F I1013 00:22:32.737762 17 store.go:1579] "Monitoring resource count at path" resource="metal3remediations.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediations" 2025-10-13T00:22:32.742571041+00:00 stderr F I1013 00:22:32.742307 17 cacher.go:460] cacher (metal3remediations.infrastructure.cluster.x-k8s.io): initialized 2025-10-13T00:22:32.742571041+00:00 stderr F I1013 00:22:32.742389 17 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1beta1, Kind=Metal3Remediation from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediations 2025-10-13T00:22:34.487260045+00:00 stderr F I1013 00:22:34.487098 17 apf_controller.go:455] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=5 seatDemandAvg=0.07820742180770882 seatDemandStdev=0.3251314441406763 seatDemandSmoothed=1.9632767939168128 fairFrac=2.2770669062555537 currentCL=4 concurrencyDenominator=4 backstop=false 2025-10-13T00:22:34.502522320+00:00 stderr F I1013 00:22:34.502420 17 controller.go:624] quota admission added evaluator for: deployments.apps 2025-10-13T00:22:35.300561019+00:00 stderr F I1013 00:22:35.300373 17 controller.go:624] quota admission added evaluator for: prometheusrules.monitoring.coreos.com 2025-10-13T00:22:38.446089186+00:00 stderr F I1013 00:22:38.445943 17 store.go:1579] "Monitoring resource count at path" resource="kubestorageversionmigrators.operator.openshift.io" path="//operator.openshift.io/kubestorageversionmigrators" 2025-10-13T00:22:38.448438721+00:00 stderr F I1013 00:22:38.448317 17 cacher.go:460] cacher (kubestorageversionmigrators.operator.openshift.io): initialized 2025-10-13T00:22:38.448438721+00:00 stderr F I1013 00:22:38.448397 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeStorageVersionMigrator from storage/cacher.go:/operator.openshift.io/kubestorageversionmigrators 2025-10-13T00:22:39.702496623+00:00 stderr F I1013 00:22:39.701741 17 controller.go:624] quota admission added evaluator for: operatorpkis.network.operator.openshift.io 2025-10-13T00:22:40.253481561+00:00 stderr F I1013 00:22:40.253245 17 controller.go:624] quota admission added evaluator for: podnetworkconnectivitychecks.controlplane.operator.openshift.io 2025-10-13T00:22:40.253481561+00:00 stderr 
F I1013 00:22:40.253309 17 controller.go:624] quota admission added evaluator for: podnetworkconnectivitychecks.controlplane.operator.openshift.io 2025-10-13T00:22:43.500210869+00:00 stderr F I1013 00:22:43.500004 17 controller.go:624] quota admission added evaluator for: serviceaccounts 2025-10-13T00:22:43.663354223+00:00 stderr F I1013 00:22:43.663098 17 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.internal.openshift.io" path="//security.internal.openshift.io/rangeallocations" 2025-10-13T00:22:43.665240806+00:00 stderr F I1013 00:22:43.665120 17 cacher.go:460] cacher (rangeallocations.security.internal.openshift.io): initialized 2025-10-13T00:22:43.665240806+00:00 stderr F I1013 00:22:43.665162 17 reflector.go:351] Caches populated for security.internal.openshift.io/v1, Kind=RangeAllocation from storage/cacher.go:/security.internal.openshift.io/rangeallocations 2025-10-13T00:22:43.702296678+00:00 stderr F I1013 00:22:43.702069 17 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io 2025-10-13T00:22:43.899130291+00:00 stderr F I1013 00:22:43.898929 17 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io 2025-10-13T00:22:44.706103209+00:00 stderr F I1013 00:22:44.705863 17 controller.go:624] quota admission added evaluator for: deployments.apps 2025-10-13T00:22:45.102305975+00:00 stderr F I1013 00:22:45.102143 17 controller.go:624] quota admission added evaluator for: servicemonitors.monitoring.coreos.com 2025-10-13T00:22:45.710084335+00:00 stderr F I1013 00:22:45.709861 17 controller.go:624] quota admission added evaluator for: daemonsets.apps 2025-10-13T00:22:48.099430691+00:00 stderr F I1013 00:22:48.099231 17 controller.go:624] quota admission added evaluator for: operatorpkis.network.operator.openshift.io 2025-10-13T00:22:48.695483723+00:00 stderr F I1013 00:22:48.695290 17 store.go:1579] "Monitoring resource count at path" resource="olmconfigs.operators.coreos.com" path="//operators.coreos.com/olmconfigs" 2025-10-13T00:22:48.696605064+00:00 stderr F I1013 00:22:48.696513 17 cacher.go:460] cacher (olmconfigs.operators.coreos.com): initialized 2025-10-13T00:22:48.696605064+00:00 stderr F I1013 00:22:48.696541 17 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OLMConfig from storage/cacher.go:/operators.coreos.com/olmconfigs 2025-10-13T00:22:50.507002583+00:00 stderr F I1013 00:22:50.506854 17 store.go:1579] "Monitoring resource count at path" resource="openshiftcontrollermanagers.operator.openshift.io" path="//operator.openshift.io/openshiftcontrollermanagers" 2025-10-13T00:22:50.511717175+00:00 stderr F I1013 00:22:50.511599 17 cacher.go:460] cacher (openshiftcontrollermanagers.operator.openshift.io): initialized 2025-10-13T00:22:50.511717175+00:00 stderr F I1013 00:22:50.511639 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=OpenShiftControllerManager from storage/cacher.go:/operator.openshift.io/openshiftcontrollermanagers 2025-10-13T00:22:51.846448224+00:00 stderr F I1013 00:22:51.846223 17 store.go:1579] "Monitoring resource count at path" resource="etcds.operator.openshift.io" path="//operator.openshift.io/etcds" 2025-10-13T00:22:51.851051883+00:00 stderr F I1013 00:22:51.850890 17 cacher.go:460] cacher (etcds.operator.openshift.io): initialized 2025-10-13T00:22:51.851051883+00:00 stderr F I1013 00:22:51.850933 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Etcd from 
storage/cacher.go:/operator.openshift.io/etcds 2025-10-13T00:22:54.493992302+00:00 stderr F I1013 00:22:54.493873 17 store.go:1579] "Monitoring resource count at path" resource="authentications.operator.openshift.io" path="//operator.openshift.io/authentications" 2025-10-13T00:22:54.498196639+00:00 stderr F I1013 00:22:54.497768 17 cacher.go:460] cacher (authentications.operator.openshift.io): initialized 2025-10-13T00:22:54.498196639+00:00 stderr F I1013 00:22:54.497802 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Authentication from storage/cacher.go:/operator.openshift.io/authentications 2025-10-13T00:22:55.601486962+00:00 stderr F I1013 00:22:55.601278 17 store.go:1579] "Monitoring resource count at path" resource="storageversionmigrations.migration.k8s.io" path="//migration.k8s.io/storageversionmigrations" 2025-10-13T00:22:55.605264747+00:00 stderr F I1013 00:22:55.605150 17 cacher.go:460] cacher (storageversionmigrations.migration.k8s.io): initialized 2025-10-13T00:22:55.605264747+00:00 stderr F I1013 00:22:55.605185 17 reflector.go:351] Caches populated for migration.k8s.io/v1alpha1, Kind=StorageVersionMigration from storage/cacher.go:/migration.k8s.io/storageversionmigrations 2025-10-13T00:22:55.914717886+00:00 stderr F I1013 00:22:55.914104 17 store.go:1579] "Monitoring resource count at path" resource="schedulers.config.openshift.io" path="//config.openshift.io/schedulers" 2025-10-13T00:22:55.916078304+00:00 stderr F I1013 00:22:55.915977 17 cacher.go:460] cacher (schedulers.config.openshift.io): initialized 2025-10-13T00:22:55.916078304+00:00 stderr F I1013 00:22:55.916001 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Scheduler from storage/cacher.go:/config.openshift.io/schedulers 2025-10-13T00:22:56.705168064+00:00 stderr F I1013 00:22:56.704672 17 store.go:1579] "Monitoring resource count at path" resource="authentications.config.openshift.io" path="//config.openshift.io/authentications" 2025-10-13T00:22:56.706870082+00:00 stderr F I1013 00:22:56.706764 17 cacher.go:460] cacher (authentications.config.openshift.io): initialized 2025-10-13T00:22:56.706870082+00:00 stderr F I1013 00:22:56.706787 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Authentication from storage/cacher.go:/config.openshift.io/authentications 2025-10-13T00:22:58.013110257+00:00 stderr F I1013 00:22:58.012309 17 store.go:1579] "Monitoring resource count at path" resource="machineconfigpools.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/machineconfigpools" 2025-10-13T00:22:58.017943472+00:00 stderr F I1013 00:22:58.016759 17 cacher.go:460] cacher (machineconfigpools.machineconfiguration.openshift.io): initialized 2025-10-13T00:22:58.017943472+00:00 stderr F I1013 00:22:58.016797 17 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=MachineConfigPool from storage/cacher.go:/machineconfiguration.openshift.io/machineconfigpools 2025-10-13T00:23:02.123978675+00:00 stderr F I1013 00:23:02.123737 17 store.go:1579] "Monitoring resource count at path" resource="dnses.operator.openshift.io" path="//operator.openshift.io/dnses" 2025-10-13T00:23:02.126242679+00:00 stderr F I1013 00:23:02.126138 17 cacher.go:460] cacher (dnses.operator.openshift.io): initialized 2025-10-13T00:23:02.126350392+00:00 stderr F I1013 00:23:02.126274 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=DNS from storage/cacher.go:/operator.openshift.io/dnses 
2025-10-13T00:23:02.633407916+00:00 stderr F I1013 00:23:02.633188 17 store.go:1579] "Monitoring resource count at path" resource="consolenotifications.console.openshift.io" path="//console.openshift.io/consolenotifications" 2025-10-13T00:23:02.635162395+00:00 stderr F I1013 00:23:02.634885 17 cacher.go:460] cacher (consolenotifications.console.openshift.io): initialized 2025-10-13T00:23:02.635162395+00:00 stderr F I1013 00:23:02.634923 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleNotification from storage/cacher.go:/console.openshift.io/consolenotifications 2025-10-13T00:23:02.734833961+00:00 stderr F I1013 00:23:02.733604 17 store.go:1579] "Monitoring resource count at path" resource="consoleplugins.console.openshift.io" path="//console.openshift.io/consoleplugins" 2025-10-13T00:23:02.739486591+00:00 stderr F I1013 00:23:02.738998 17 cacher.go:460] cacher (consoleplugins.console.openshift.io): initialized 2025-10-13T00:23:02.739486591+00:00 stderr F I1013 00:23:02.739030 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsolePlugin from storage/cacher.go:/console.openshift.io/consoleplugins 2025-10-13T00:23:02.746791764+00:00 stderr F I1013 00:23:02.746658 17 store.go:1579] "Monitoring resource count at path" resource="consoleplugins.console.openshift.io" path="//console.openshift.io/consoleplugins" 2025-10-13T00:23:02.748057059+00:00 stderr F I1013 00:23:02.747948 17 cacher.go:460] cacher (consoleplugins.console.openshift.io): initialized 2025-10-13T00:23:02.748057059+00:00 stderr F I1013 00:23:02.748011 17 reflector.go:351] Caches populated for console.openshift.io/v1alpha1, Kind=ConsolePlugin from storage/cacher.go:/console.openshift.io/consoleplugins 2025-10-13T00:23:02.758961543+00:00 stderr F I1013 00:23:02.758867 17 store.go:1579] "Monitoring resource count at path" resource="consoleclidownloads.console.openshift.io" path="//console.openshift.io/consoleclidownloads" 2025-10-13T00:23:02.760896197+00:00 stderr F I1013 00:23:02.760787 17 cacher.go:460] cacher (consoleclidownloads.console.openshift.io): initialized 2025-10-13T00:23:02.760896197+00:00 stderr F I1013 00:23:02.760809 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleCLIDownload from storage/cacher.go:/console.openshift.io/consoleclidownloads 2025-10-13T00:23:05.959206596+00:00 stderr F I1013 00:23:05.959040 17 controller.go:624] quota admission added evaluator for: csistoragecapacities.storage.k8s.io 2025-10-13T00:23:05.959206596+00:00 stderr F I1013 00:23:05.959101 17 controller.go:624] quota admission added evaluator for: csistoragecapacities.storage.k8s.io 2025-10-13T00:23:11.177783338+00:00 stderr F I1013 00:23:11.177614 17 store.go:1579] "Monitoring resource count at path" resource="projects.config.openshift.io" path="//config.openshift.io/projects" 2025-10-13T00:23:11.180239637+00:00 stderr F I1013 00:23:11.179603 17 cacher.go:460] cacher (projects.config.openshift.io): initialized 2025-10-13T00:23:11.180239637+00:00 stderr F I1013 00:23:11.179628 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Project from storage/cacher.go:/config.openshift.io/projects 2025-10-13T00:23:11.864707464+00:00 stderr F I1013 00:23:11.864547 17 store.go:1579] "Monitoring resource count at path" resource="apiservers.config.openshift.io" path="//config.openshift.io/apiservers" 2025-10-13T00:23:11.867677506+00:00 stderr F I1013 00:23:11.867465 17 cacher.go:460] cacher (apiservers.config.openshift.io): initialized 
2025-10-13T00:23:11.867677506+00:00 stderr F I1013 00:23:11.867511 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=APIServer from storage/cacher.go:/config.openshift.io/apiservers 2025-10-13T00:23:13.114269240+00:00 stderr F I1013 00:23:13.113820 17 store.go:1579] "Monitoring resource count at path" resource="imagedigestmirrorsets.config.openshift.io" path="//config.openshift.io/imagedigestmirrorsets" 2025-10-13T00:23:13.114756713+00:00 stderr F I1013 00:23:13.114695 17 cacher.go:460] cacher (imagedigestmirrorsets.config.openshift.io): initialized 2025-10-13T00:23:13.114756713+00:00 stderr F I1013 00:23:13.114717 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageDigestMirrorSet from storage/cacher.go:/config.openshift.io/imagedigestmirrorsets 2025-10-13T00:23:13.235013953+00:00 stderr F I1013 00:23:13.231889 17 store.go:1579] "Monitoring resource count at path" resource="operatorhubs.config.openshift.io" path="//config.openshift.io/operatorhubs" 2025-10-13T00:23:13.235013953+00:00 stderr F I1013 00:23:13.234152 17 cacher.go:460] cacher (operatorhubs.config.openshift.io): initialized 2025-10-13T00:23:13.235013953+00:00 stderr F I1013 00:23:13.234173 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=OperatorHub from storage/cacher.go:/config.openshift.io/operatorhubs 2025-10-13T00:23:13.325069791+00:00 stderr F I1013 00:23:13.324965 17 store.go:1579] "Monitoring resource count at path" resource="builds.config.openshift.io" path="//config.openshift.io/builds" 2025-10-13T00:23:13.326156232+00:00 stderr F I1013 00:23:13.326094 17 cacher.go:460] cacher (builds.config.openshift.io): initialized 2025-10-13T00:23:13.326156232+00:00 stderr F I1013 00:23:13.326116 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Build from storage/cacher.go:/config.openshift.io/builds 2025-10-13T00:23:13.503150692+00:00 stderr F I1013 00:23:13.502614 17 store.go:1579] "Monitoring resource count at path" resource="nodes.config.openshift.io" path="//config.openshift.io/nodes" 2025-10-13T00:23:13.504069147+00:00 stderr F I1013 00:23:13.504006 17 cacher.go:460] cacher (nodes.config.openshift.io): initialized 2025-10-13T00:23:13.504069147+00:00 stderr F I1013 00:23:13.504023 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Node from storage/cacher.go:/config.openshift.io/nodes 2025-10-13T00:23:14.261592967+00:00 stderr F I1013 00:23:14.261463 17 store.go:1579] "Monitoring resource count at path" resource="oauths.config.openshift.io" path="//config.openshift.io/oauths" 2025-10-13T00:23:14.263185692+00:00 stderr F I1013 00:23:14.263104 17 cacher.go:460] cacher (oauths.config.openshift.io): initialized 2025-10-13T00:23:14.263185692+00:00 stderr F I1013 00:23:14.263122 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=OAuth from storage/cacher.go:/config.openshift.io/oauths 2025-10-13T00:23:15.014588482+00:00 stderr F I1013 00:23:15.014358 17 store.go:1579] "Monitoring resource count at path" resource="imagetagmirrorsets.config.openshift.io" path="//config.openshift.io/imagetagmirrorsets" 2025-10-13T00:23:15.016059013+00:00 stderr F I1013 00:23:15.015921 17 cacher.go:460] cacher (imagetagmirrorsets.config.openshift.io): initialized 2025-10-13T00:23:15.016059013+00:00 stderr F I1013 00:23:15.015990 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageTagMirrorSet from storage/cacher.go:/config.openshift.io/imagetagmirrorsets 2025-10-13T00:23:15.288116632+00:00 stderr F 
I1013 00:23:15.287913 17 store.go:1579] "Monitoring resource count at path" resource="machineconfigurations.operator.openshift.io" path="//operator.openshift.io/machineconfigurations" 2025-10-13T00:23:15.290003974+00:00 stderr F I1013 00:23:15.289891 17 cacher.go:460] cacher (machineconfigurations.operator.openshift.io): initialized 2025-10-13T00:23:15.290003974+00:00 stderr F I1013 00:23:15.289918 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=MachineConfiguration from storage/cacher.go:/operator.openshift.io/machineconfigurations 2025-10-13T00:23:16.095359268+00:00 stderr F I1013 00:23:16.095157 17 store.go:1579] "Monitoring resource count at path" resource="ingresses.config.openshift.io" path="//config.openshift.io/ingresses" 2025-10-13T00:23:16.097178148+00:00 stderr F I1013 00:23:16.097063 17 cacher.go:460] cacher (ingresses.config.openshift.io): initialized 2025-10-13T00:23:16.097178148+00:00 stderr F I1013 00:23:16.097097 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Ingress from storage/cacher.go:/config.openshift.io/ingresses 2025-10-13T00:23:16.992307083+00:00 stderr F I1013 00:23:16.992081 17 store.go:1579] "Monitoring resource count at path" resource="dnses.config.openshift.io" path="//config.openshift.io/dnses" 2025-10-13T00:23:16.994109323+00:00 stderr F I1013 00:23:16.993902 17 cacher.go:460] cacher (dnses.config.openshift.io): initialized 2025-10-13T00:23:16.994109323+00:00 stderr F I1013 00:23:16.993957 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=DNS from storage/cacher.go:/config.openshift.io/dnses 2025-10-13T00:23:19.011472456+00:00 stderr F I1013 00:23:19.010843 17 store.go:1579] "Monitoring resource count at path" resource="configs.samples.operator.openshift.io" path="//samples.operator.openshift.io/configs" 2025-10-13T00:23:19.012734131+00:00 stderr F I1013 00:23:19.012651 17 cacher.go:460] cacher (configs.samples.operator.openshift.io): initialized 2025-10-13T00:23:19.012734131+00:00 stderr F I1013 00:23:19.012676 17 reflector.go:351] Caches populated for samples.operator.openshift.io/v1, Kind=Config from storage/cacher.go:/samples.operator.openshift.io/configs 2025-10-13T00:23:21.514138718+00:00 stderr F I1013 00:23:21.513970 17 store.go:1579] "Monitoring resource count at path" resource="servicecas.operator.openshift.io" path="//operator.openshift.io/servicecas" 2025-10-13T00:23:21.517810870+00:00 stderr F I1013 00:23:21.515792 17 cacher.go:460] cacher (servicecas.operator.openshift.io): initialized 2025-10-13T00:23:21.517810870+00:00 stderr F I1013 00:23:21.515814 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=ServiceCA from storage/cacher.go:/operator.openshift.io/servicecas 2025-10-13T00:23:21.698753671+00:00 stderr F I1013 00:23:21.698356 17 store.go:1579] "Monitoring resource count at path" resource="helmchartrepositories.helm.openshift.io" path="//helm.openshift.io/helmchartrepositories" 2025-10-13T00:23:21.701017484+00:00 stderr F I1013 00:23:21.700868 17 cacher.go:460] cacher (helmchartrepositories.helm.openshift.io): initialized 2025-10-13T00:23:21.701017484+00:00 stderr F I1013 00:23:21.700904 17 reflector.go:351] Caches populated for helm.openshift.io/v1beta1, Kind=HelmChartRepository from storage/cacher.go:/helm.openshift.io/helmchartrepositories 2025-10-13T00:23:21.729396894+00:00 stderr F I1013 00:23:21.729267 17 store.go:1579] "Monitoring resource count at path" resource="consoles.operator.openshift.io" path="//operator.openshift.io/consoles" 
2025-10-13T00:23:21.731400960+00:00 stderr F I1013 00:23:21.731063 17 cacher.go:460] cacher (consoles.operator.openshift.io): initialized 2025-10-13T00:23:21.731400960+00:00 stderr F I1013 00:23:21.731086 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Console from storage/cacher.go:/operator.openshift.io/consoles 2025-10-13T00:23:21.898952337+00:00 stderr F I1013 00:23:21.898793 17 store.go:1579] "Monitoring resource count at path" resource="configs.imageregistry.operator.openshift.io" path="//imageregistry.operator.openshift.io/configs" 2025-10-13T00:23:21.901299663+00:00 stderr F I1013 00:23:21.901194 17 cacher.go:460] cacher (configs.imageregistry.operator.openshift.io): initialized 2025-10-13T00:23:21.901299663+00:00 stderr F I1013 00:23:21.901221 17 reflector.go:351] Caches populated for imageregistry.operator.openshift.io/v1, Kind=Config from storage/cacher.go:/imageregistry.operator.openshift.io/configs 2025-10-13T00:23:22.881808085+00:00 stderr F I1013 00:23:22.881637 17 store.go:1579] "Monitoring resource count at path" resource="configs.operator.openshift.io" path="//operator.openshift.io/configs" 2025-10-13T00:23:22.884137630+00:00 stderr F I1013 00:23:22.884049 17 cacher.go:460] cacher (configs.operator.openshift.io): initialized 2025-10-13T00:23:22.884137630+00:00 stderr F I1013 00:23:22.884066 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Config from storage/cacher.go:/operator.openshift.io/configs 2025-10-13T00:23:24.086287416+00:00 stderr F I1013 00:23:24.086148 17 store.go:1579] "Monitoring resource count at path" resource="images.config.openshift.io" path="//config.openshift.io/images" 2025-10-13T00:23:24.088623802+00:00 stderr F I1013 00:23:24.088390 17 cacher.go:460] cacher (images.config.openshift.io): initialized 2025-10-13T00:23:24.088623802+00:00 stderr F I1013 00:23:24.088420 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Image from storage/cacher.go:/config.openshift.io/images 2025-10-13T00:23:24.327705061+00:00 stderr F I1013 00:23:24.327586 17 store.go:1579] "Monitoring resource count at path" resource="containerruntimeconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/containerruntimeconfigs" 2025-10-13T00:23:24.328794262+00:00 stderr F I1013 00:23:24.328729 17 cacher.go:460] cacher (containerruntimeconfigs.machineconfiguration.openshift.io): initialized 2025-10-13T00:23:24.328794262+00:00 stderr F I1013 00:23:24.328752 17 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=ContainerRuntimeConfig from storage/cacher.go:/machineconfiguration.openshift.io/containerruntimeconfigs 2025-10-13T00:23:24.789175575+00:00 stderr F I1013 00:23:24.789056 17 store.go:1579] "Monitoring resource count at path" resource="imagecontentsourcepolicies.operator.openshift.io" path="//operator.openshift.io/imagecontentsourcepolicies" 2025-10-13T00:23:24.791112058+00:00 stderr F I1013 00:23:24.791029 17 cacher.go:460] cacher (imagecontentsourcepolicies.operator.openshift.io): initialized 2025-10-13T00:23:24.791112058+00:00 stderr F I1013 00:23:24.791045 17 reflector.go:351] Caches populated for operator.openshift.io/v1alpha1, Kind=ImageContentSourcePolicy from storage/cacher.go:/operator.openshift.io/imagecontentsourcepolicies 2025-10-13T00:23:28.920614086+00:00 stderr F I1013 00:23:28.920452 17 store.go:1579] "Monitoring resource count at path" resource="consoles.config.openshift.io" path="//config.openshift.io/consoles" 
2025-10-13T00:23:28.923076325+00:00 stderr F I1013 00:23:28.922944 17 cacher.go:460] cacher (consoles.config.openshift.io): initialized 2025-10-13T00:23:28.923076325+00:00 stderr F I1013 00:23:28.923005 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Console from storage/cacher.go:/config.openshift.io/consoles 2025-10-13T00:23:32.699869538+00:00 stderr F I1013 00:23:32.699678 17 store.go:1579] "Monitoring resource count at path" resource="operators.operators.coreos.com" path="//operators.coreos.com/operators" 2025-10-13T00:23:32.701109533+00:00 stderr F I1013 00:23:32.700992 17 cacher.go:460] cacher (operators.operators.coreos.com): initialized 2025-10-13T00:23:32.701109533+00:00 stderr F I1013 00:23:32.701014 17 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=Operator from storage/cacher.go:/operators.coreos.com/operators 2025-10-13T00:23:33.830848762+00:00 stderr F I1013 00:23:33.830667 17 store.go:1579] "Monitoring resource count at path" resource="imagepruners.imageregistry.operator.openshift.io" path="//imageregistry.operator.openshift.io/imagepruners" 2025-10-13T00:23:33.833554677+00:00 stderr F I1013 00:23:33.833418 17 cacher.go:460] cacher (imagepruners.imageregistry.operator.openshift.io): initialized 2025-10-13T00:23:33.833554677+00:00 stderr F I1013 00:23:33.833457 17 reflector.go:351] Caches populated for imageregistry.operator.openshift.io/v1, Kind=ImagePruner from storage/cacher.go:/imageregistry.operator.openshift.io/imagepruners 2025-10-13T00:23:36.112481977+00:00 stderr F I1013 00:23:36.111801 17 store.go:1579] "Monitoring resource count at path" resource="kubeletconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/kubeletconfigs" 2025-10-13T00:23:36.112934569+00:00 stderr F I1013 00:23:36.112829 17 cacher.go:460] cacher (kubeletconfigs.machineconfiguration.openshift.io): initialized 2025-10-13T00:23:36.112934569+00:00 stderr F I1013 00:23:36.112852 17 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=KubeletConfig from storage/cacher.go:/machineconfiguration.openshift.io/kubeletconfigs 2025-10-13T00:23:38.426375461+00:00 stderr F I1013 00:23:38.426242 17 trace.go:236] Trace[690527233]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:926679ef-7fd7-439a-b570-c378eaf083bd,client:38.102.83.214,api-group:,api-version:v1,name:,subresource:,namespace:openshift-must-gather-xwq75,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openshift-must-gather-xwq75/pods,user-agent:oc/4.19.0 (linux/amd64) kubernetes/298429b,verb:POST (13-Oct-2025 00:23:36.798) (total time: 1627ms): 2025-10-13T00:23:38.426375461+00:00 stderr F Trace[690527233]: ---"Write to database call failed" len:1624,err:pods "must-gather-" is forbidden: error looking up service account openshift-must-gather-xwq75/default: serviceaccount "default" not found 1627ms (00:23:38.426) 2025-10-13T00:23:38.426375461+00:00 stderr F Trace[690527233]: [1.627806113s] [1.627806113s] END 2025-10-13T00:23:42.815208962+00:00 stderr F I1013 00:23:42.815106 17 store.go:1579] "Monitoring resource count at path" resource="storages.operator.openshift.io" path="//operator.openshift.io/storages" 2025-10-13T00:23:42.821449236+00:00 stderr F I1013 00:23:42.819432 17 cacher.go:460] cacher (storages.operator.openshift.io): initialized 2025-10-13T00:23:42.821449236+00:00 stderr F I1013 00:23:42.819458 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Storage from 
storage/cacher.go:/operator.openshift.io/storages 2025-10-13T00:23:42.824395748+00:00 stderr F I1013 00:23:42.824338 17 store.go:1579] "Monitoring resource count at path" resource="csisnapshotcontrollers.operator.openshift.io" path="//operator.openshift.io/csisnapshotcontrollers" 2025-10-13T00:23:42.825170640+00:00 stderr F I1013 00:23:42.825106 17 cacher.go:460] cacher (csisnapshotcontrollers.operator.openshift.io): initialized 2025-10-13T00:23:42.825170640+00:00 stderr F I1013 00:23:42.825129 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=CSISnapshotController from storage/cacher.go:/operator.openshift.io/csisnapshotcontrollers 2025-10-13T00:23:42.836802424+00:00 stderr F I1013 00:23:42.836708 17 store.go:1579] "Monitoring resource count at path" resource="storagestates.migration.k8s.io" path="//migration.k8s.io/storagestates" 2025-10-13T00:23:42.837737570+00:00 stderr F I1013 00:23:42.837668 17 cacher.go:460] cacher (storagestates.migration.k8s.io): initialized 2025-10-13T00:23:42.837737570+00:00 stderr F I1013 00:23:42.837694 17 reflector.go:351] Caches populated for migration.k8s.io/v1alpha1, Kind=StorageState from storage/cacher.go:/migration.k8s.io/storagestates 2025-10-13T00:23:42.847951514+00:00 stderr F I1013 00:23:42.847880 17 store.go:1579] "Monitoring resource count at path" resource="clustercsidrivers.operator.openshift.io" path="//operator.openshift.io/clustercsidrivers" 2025-10-13T00:23:42.848668644+00:00 stderr F I1013 00:23:42.848613 17 cacher.go:460] cacher (clustercsidrivers.operator.openshift.io): initialized 2025-10-13T00:23:42.848668644+00:00 stderr F I1013 00:23:42.848638 17 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=ClusterCSIDriver from storage/cacher.go:/operator.openshift.io/clustercsidrivers 2025-10-13T00:23:42.859994430+00:00 stderr F I1013 00:23:42.859916 17 store.go:1579] "Monitoring resource count at path" resource="consolesamples.console.openshift.io" path="//console.openshift.io/consolesamples" 2025-10-13T00:23:42.862598922+00:00 stderr F I1013 00:23:42.862538 17 cacher.go:460] cacher (consolesamples.console.openshift.io): initialized 2025-10-13T00:23:42.862598922+00:00 stderr F I1013 00:23:42.862560 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleSample from storage/cacher.go:/console.openshift.io/consolesamples 2025-10-13T00:23:42.870759800+00:00 stderr F I1013 00:23:42.870637 17 store.go:1579] "Monitoring resource count at path" resource="imagecontentpolicies.config.openshift.io" path="//config.openshift.io/imagecontentpolicies" 2025-10-13T00:23:42.872566820+00:00 stderr F I1013 00:23:42.872490 17 cacher.go:460] cacher (imagecontentpolicies.config.openshift.io): initialized 2025-10-13T00:23:42.872566820+00:00 stderr F I1013 00:23:42.872525 17 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageContentPolicy from storage/cacher.go:/config.openshift.io/imagecontentpolicies 2025-10-13T00:23:42.879999197+00:00 stderr F I1013 00:23:42.879894 17 store.go:1579] "Monitoring resource count at path" resource="consoleyamlsamples.console.openshift.io" path="//console.openshift.io/consoleyamlsamples" 2025-10-13T00:23:42.880749398+00:00 stderr F I1013 00:23:42.880682 17 cacher.go:460] cacher (consoleyamlsamples.console.openshift.io): initialized 2025-10-13T00:23:42.880749398+00:00 stderr F I1013 00:23:42.880712 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleYAMLSample from storage/cacher.go:/console.openshift.io/consoleyamlsamples 
2025-10-13T00:23:42.890022346+00:00 stderr F I1013 00:23:42.889924 17 store.go:1579] "Monitoring resource count at path" resource="consoleexternalloglinks.console.openshift.io" path="//console.openshift.io/consoleexternalloglinks" 2025-10-13T00:23:42.890622223+00:00 stderr F I1013 00:23:42.890534 17 cacher.go:460] cacher (consoleexternalloglinks.console.openshift.io): initialized 2025-10-13T00:23:42.890622223+00:00 stderr F I1013 00:23:42.890558 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleExternalLogLink from storage/cacher.go:/console.openshift.io/consoleexternalloglinks 2025-10-13T00:23:42.899952353+00:00 stderr F I1013 00:23:42.899844 17 store.go:1579] "Monitoring resource count at path" resource="consolelinks.console.openshift.io" path="//console.openshift.io/consolelinks" 2025-10-13T00:23:42.901238069+00:00 stderr F I1013 00:23:42.901178 17 cacher.go:460] cacher (consolelinks.console.openshift.io): initialized 2025-10-13T00:23:42.901258919+00:00 stderr F I1013 00:23:42.901238 17 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleLink from storage/cacher.go:/console.openshift.io/consolelinks 2025-10-13T00:23:42.913817579+00:00 stderr F I1013 00:23:42.913716 17 store.go:1579] "Monitoring resource count at path" resource="clusterautoscalers.autoscaling.openshift.io" path="//autoscaling.openshift.io/clusterautoscalers" 2025-10-13T00:23:42.914786086+00:00 stderr F I1013 00:23:42.914713 17 cacher.go:460] cacher (clusterautoscalers.autoscaling.openshift.io): initialized 2025-10-13T00:23:42.914786086+00:00 stderr F I1013 00:23:42.914737 17 reflector.go:351] Caches populated for autoscaling.openshift.io/v1, Kind=ClusterAutoscaler from storage/cacher.go:/autoscaling.openshift.io/clusterautoscalers 2025-10-13T00:23:47.795856596+00:00 stderr F I1013 00:23:47.795668 17 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2025-10-13T00:23:47.800210038+00:00 stderr F I1013 00:23:47.800119 17 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2025-10-13T00:23:47.921681851+00:00 stderr F I1013 00:23:47.921567 17 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] 2025-10-13T00:23:47.925486177+00:00 stderr F I1013 00:23:47.925356 17 httplog.go:92] system:serviceaccount:openshift-apiserver:openshift-apiserver-sa[system:serviceaccounts,system:serviceaccounts:openshift-apiserver,system:authenticated] is impersonating system:serviceaccount:kube-system:namespace-controller[system:serviceaccounts,system:serviceaccounts:kube-system,system:authenticated] ././@LongLink0000644000000000000000000000024200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000000041115073043232033042 0ustar zuulzuul2025-10-13T00:22:21.690316515+00:00 stdout F Fixing audit permissions ... 2025-10-13T00:22:21.698419173+00:00 stdout F Acquiring exclusive lock /var/log/kube-apiserver/.lock ... 2025-10-13T00:22:21.699856762+00:00 stdout F flock: getting lock took 0.000007 seconds ././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000004104215073043232033047 0ustar zuulzuul2025-10-13T00:22:23.885881338+00:00 stderr F W1013 00:22:23.885475 1 cmd.go:245] Using insecure, self-signed certificates 2025-10-13T00:22:23.885881338+00:00 stderr F I1013 00:22:23.885762 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760314943 cert, and key in /tmp/serving-cert-1393998481/serving-signer.crt, /tmp/serving-cert-1393998481/serving-signer.key 2025-10-13T00:22:24.158673414+00:00 stderr F I1013 00:22:24.158554 1 observer_polling.go:159] Starting file observer 2025-10-13T00:22:29.417725130+00:00 stderr F I1013 00:22:29.417675 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-10-13T00:22:29.418872421+00:00 stderr F I1013 00:22:29.418843 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1393998481/tls.crt::/tmp/serving-cert-1393998481/tls.key" 2025-10-13T00:22:29.663450768+00:00 stderr F I1013 00:22:29.663397 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:22:29.665184375+00:00 stderr F I1013 00:22:29.665137 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:22:29.665184375+00:00 stderr F I1013 00:22:29.665158 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:22:29.665184375+00:00 stderr F I1013 00:22:29.665172 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:22:29.665184375+00:00 stderr F I1013 00:22:29.665178 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:22:29.668911195+00:00 stderr F 
I1013 00:22:29.668878 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:22:29.668911195+00:00 stderr F W1013 00:22:29.668904 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:22:29.668934186+00:00 stderr F W1013 00:22:29.668911 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:22:29.670842327+00:00 stderr F I1013 00:22:29.670792 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:22:29.674923927+00:00 stderr F I1013 00:22:29.674879 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:22:29.674923927+00:00 stderr F I1013 00:22:29.674908 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:22:29.674949307+00:00 stderr F I1013 00:22:29.674936 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:22:29.674958548+00:00 stderr F I1013 00:22:29.674947 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:22:29.674991378+00:00 stderr F I1013 00:22:29.674973 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:22:29.674991378+00:00 stderr F I1013 00:22:29.674983 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:22:29.676834288+00:00 stderr F I1013 00:22:29.676786 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1393998481/tls.crt::/tmp/serving-cert-1393998481/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314943\" (2025-10-13 00:22:23 +0000 UTC to 2025-11-12 00:22:24 +0000 UTC (now=2025-10-13 00:22:29.676701194 +0000 UTC))" 2025-10-13T00:22:29.677140526+00:00 stderr F I1013 00:22:29.677116 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-10-13T00:22:29.677378153+00:00 stderr F I1013 00:22:29.677319 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1393998481/tls.crt::/tmp/serving-cert-1393998481/tls.key" 2025-10-13T00:22:29.677604999+00:00 stderr F I1013 00:22:29.677566 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314949\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314949\" (2025-10-12 23:22:29 +0000 UTC to 2026-10-12 23:22:29 +0000 UTC (now=2025-10-13 00:22:29.677197338 +0000 UTC))" 2025-10-13T00:22:29.677655170+00:00 stderr F I1013 00:22:29.677631 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:22:29.682251774+00:00 stderr F I1013 00:22:29.682219 1 secure_serving.go:213] Serving securely on [::]:17697 2025-10-13T00:22:29.682275334+00:00 stderr F I1013 00:22:29.682249 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:22:29.683077156+00:00 stderr F I1013 00:22:29.683050 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-10-13T00:22:29.683302892+00:00 stderr F I1013 00:22:29.683263 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.683719053+00:00 stderr F I1013 00:22:29.683701 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.777458794+00:00 stderr F I1013 00:22:29.777407 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:22:29.777502195+00:00 stderr F I1013 00:22:29.777454 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:22:29.777502195+00:00 stderr F I1013 00:22:29.777480 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:22:29.781474472+00:00 stderr F I1013 00:22:29.781443 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:29.7814047 +0000 UTC))" 2025-10-13T00:22:29.782315725+00:00 stderr F I1013 00:22:29.782290 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1393998481/tls.crt::/tmp/serving-cert-1393998481/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314943\" (2025-10-13 00:22:23 +0000 UTC to 2025-11-12 00:22:24 +0000 UTC (now=2025-10-13 00:22:29.782269193 +0000 UTC))" 2025-10-13T00:22:29.782760277+00:00 stderr F I1013 00:22:29.782737 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314949\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314949\" (2025-10-12 23:22:29 +0000 UTC to 2026-10-12 23:22:29 +0000 UTC (now=2025-10-13 00:22:29.782712035 +0000 UTC))" 2025-10-13T00:22:29.782971562+00:00 stderr F I1013 00:22:29.782950 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:22:29.782932381 +0000 UTC))" 2025-10-13T00:22:29.782997953+00:00 stderr F I1013 00:22:29.782978 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:22:29.782963962 +0000 UTC))" 2025-10-13T00:22:29.783015813+00:00 stderr F I1013 00:22:29.783004 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:22:29.782987683 +0000 UTC))" 2025-10-13T00:22:29.783040544+00:00 stderr F I1013 00:22:29.783024 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:22:29.783008883 +0000 UTC))" 2025-10-13T00:22:29.783067265+00:00 stderr F I1013 00:22:29.783048 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:29.783032844 +0000 UTC))" 2025-10-13T00:22:29.783095876+00:00 stderr F I1013 00:22:29.783079 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:29.783063535 +0000 UTC))" 2025-10-13T00:22:29.783120926+00:00 stderr F I1013 00:22:29.783105 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:29.783089145 +0000 UTC))" 2025-10-13T00:22:29.783146337+00:00 stderr F I1013 00:22:29.783130 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:29.783114026 +0000 UTC))" 2025-10-13T00:22:29.783171158+00:00 stderr F I1013 00:22:29.783155 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:22:29.783138967 +0000 UTC))" 2025-10-13T00:22:29.783197338+00:00 stderr F I1013 00:22:29.783181 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:22:29.783167018 +0000 UTC))" 2025-10-13T00:22:29.783238989+00:00 stderr F I1013 00:22:29.783222 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:22:29.783191438 +0000 UTC))" 2025-10-13T00:22:29.783271170+00:00 stderr F I1013 00:22:29.783249 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:22:29.783231649 +0000 UTC))" 2025-10-13T00:22:29.783709962+00:00 stderr F I1013 00:22:29.783689 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1393998481/tls.crt::/tmp/serving-cert-1393998481/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314943\" (2025-10-13 00:22:23 +0000 UTC to 2025-11-12 00:22:24 +0000 UTC (now=2025-10-13 00:22:29.783664871 +0000 UTC))" 2025-10-13T00:22:29.784091132+00:00 stderr F I1013 00:22:29.784071 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314949\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314949\" (2025-10-12 23:22:29 +0000 UTC to 2026-10-12 23:22:29 +0000 UTC (now=2025-10-13 00:22:29.784051701 +0000 UTC))" 2025-10-13T00:22:29.929770620+00:00 stderr F I1013 00:22:29.929699 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.978220983+00:00 stderr F I1013 00:22:29.978149 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-10-13T00:22:29.978220983+00:00 stderr F I1013 00:22:29.978175 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-10-13T00:22:29.978265044+00:00 stderr F I1013 00:22:29.978247 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-10-13T00:22:29.978265044+00:00 stderr F I1013 00:22:29.978252 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-10-13T00:22:29.978265044+00:00 stderr F I1013 00:22:29.978255 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-10-13T00:22:29.978343236+00:00 stderr F I1013 00:22:29.978298 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-10-13T00:22:29.978343236+00:00 stderr F I1013 00:22:29.978320 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-10-13T00:22:29.978401348+00:00 stderr F I1013 00:22:29.978366 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 
2025-10-13T00:22:29.978401348+00:00 stderr F I1013 00:22:29.978397 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-10-13T00:22:29.986678190+00:00 stderr F I1013 00:22:29.986621 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.992644951+00:00 stderr F I1013 00:22:29.992576 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:30.079451665+00:00 stderr F I1013 00:22:30.079404 1 base_controller.go:73] Caches are synced for check-endpoints 2025-10-13T00:22:30.079451665+00:00 stderr F I1013 00:22:30.079432 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... ././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000000016415073043232033047 0ustar zuulzuul2025-10-13T00:22:23.593981828+00:00 stderr F I1013 00:22:23.593797 1 readyz.go:111] Listening on 0.0.0.0:6080 ././@LongLink0000644000000000000000000000031000000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000004233715073043232033057 0ustar zuulzuul2025-10-13T00:22:23.338493499+00:00 stderr F W1013 00:22:23.338065 1 cmd.go:245] Using insecure, self-signed certificates 2025-10-13T00:22:23.338493499+00:00 stderr F I1013 00:22:23.338406 1 crypto.go:601] Generating new CA for cert-regeneration-controller-signer@1760314943 cert, and key in /tmp/serving-cert-69541581/serving-signer.crt, /tmp/serving-cert-69541581/serving-signer.key 2025-10-13T00:22:23.554983900+00:00 stderr F I1013 00:22:23.554916 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
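The cert-regeneration-controller log that begins above configures leader election ("The leader election gives 4 retries and allows for 30s of clock skew ...") and then acquires the lease openshift-kube-apiserver/cert-regeneration-controller-lock. A hedged sketch of the underlying client-go leader-election primitive, with illustrative timings and a hostname-based identity (the operator itself uses library-go wrappers and its own tuned values):

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname()
        // Lease name and namespace mirror the ones visible in this log.
        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "cert-regeneration-controller-lock",
                Namespace: "openshift-kube-apiserver",
            },
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock: lock,
            // Illustrative values satisfying LeaseDuration > RenewDeadline > RetryPeriod;
            // not necessarily the operator's configuration.
            LeaseDuration:   137 * time.Second,
            RenewDeadline:   107 * time.Second,
            RetryPeriod:     26 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    log.Println("became leader; starting controllers")
                },
                OnStoppedLeading: func() {
                    log.Println("lost leadership; shutting down")
                },
            },
        })
    }
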
2025-10-13T00:22:23.555953276+00:00 stderr F I1013 00:22:23.555903 1 observer_polling.go:159] Starting file observer 2025-10-13T00:22:29.432427285+00:00 stderr F I1013 00:22:29.431227 1 builder.go:299] cert-regeneration-controller version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-10-13T00:22:29.439245499+00:00 stderr F I1013 00:22:29.438960 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:22:29.439451154+00:00 stderr F I1013 00:22:29.439317 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver/cert-regeneration-controller-lock... 2025-10-13T00:22:29.452186637+00:00 stderr F I1013 00:22:29.452143 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver/cert-regeneration-controller-lock 2025-10-13T00:22:29.452957788+00:00 stderr F I1013 00:22:29.452926 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver", Name:"cert-regeneration-controller-lock", UID:"eb250dab-ea81-4164-a3be-f2834c870dea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42870", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_630e14ed-3564-40dc-ab9d-7a5ee4d468a0 became leader 2025-10-13T00:22:29.456731779+00:00 stderr F I1013 00:22:29.456676 1 cabundlesyncer.go:82] Starting CA bundle controller 2025-10-13T00:22:29.456785750+00:00 stderr F I1013 00:22:29.456775 1 shared_informer.go:311] Waiting for caches to sync for CABundleController 2025-10-13T00:22:29.457586412+00:00 stderr F I1013 00:22:29.457569 1 certrotationcontroller.go:886] Starting CertRotation 2025-10-13T00:22:29.457621193+00:00 stderr F I1013 00:22:29.457611 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-10-13T00:22:29.469420900+00:00 stderr F I1013 00:22:29.469371 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.470469638+00:00 stderr F I1013 00:22:29.470449 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.472057651+00:00 stderr F I1013 00:22:29.472014 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.472112343+00:00 stderr F I1013 00:22:29.472087 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.472256396+00:00 stderr F I1013 00:22:29.472236 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.472772420+00:00 stderr F I1013 00:22:29.472729 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.494861314+00:00 stderr F I1013 00:22:29.494800 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.512629202+00:00 stderr F I1013 00:22:29.512558 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.522487007+00:00 stderr F I1013 00:22:29.522434 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:29.557588191+00:00 stderr F I1013 00:22:29.557481 1 
shared_informer.go:318] Caches are synced for CABundleController 2025-10-13T00:22:29.561618720+00:00 stderr F I1013 00:22:29.561523 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-10-13T00:22:29.561618720+00:00 stderr F I1013 00:22:29.561584 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-10-13T00:22:29.561618720+00:00 stderr F I1013 00:22:29.561591 1 internalloadbalancer.go:27] syncing internal loadbalancer hostnames: api-int.crc.testing 2025-10-13T00:22:29.561618720+00:00 stderr F I1013 00:22:29.561596 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2025-10-13T00:22:29.561662541+00:00 stderr F I1013 00:22:29.561628 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561662541+00:00 stderr F I1013 00:22:29.561633 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561662541+00:00 stderr F I1013 00:22:29.561638 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561717912+00:00 stderr F I1013 00:22:29.561681 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-10-13T00:22:29.561717912+00:00 stderr F I1013 00:22:29.561711 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-10-13T00:22:29.561749363+00:00 stderr F I1013 00:22:29.561730 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561749363+00:00 stderr F I1013 00:22:29.561738 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561749363+00:00 stderr F I1013 00:22:29.561741 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561760723+00:00 stderr F I1013 00:22:29.561751 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561760723+00:00 stderr F I1013 00:22:29.561755 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561770704+00:00 stderr F I1013 00:22:29.561758 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561779844+00:00 stderr F I1013 00:22:29.561768 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561779844+00:00 stderr F I1013 00:22:29.561772 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561779844+00:00 stderr F I1013 00:22:29.561776 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561807855+00:00 stderr F I1013 00:22:29.561790 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561807855+00:00 stderr F I1013 00:22:29.561797 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561807855+00:00 stderr F I1013 00:22:29.561801 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-10-13T00:22:29.561835735+00:00 stderr F I1013 00:22:29.561817 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561835735+00:00 stderr F I1013 00:22:29.561823 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561835735+00:00 stderr F I1013 00:22:29.561827 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561858236+00:00 stderr F I1013 00:22:29.561840 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561858236+00:00 stderr F I1013 00:22:29.561844 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561858236+00:00 stderr F I1013 00:22:29.561847 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561870606+00:00 stderr F I1013 00:22:29.561865 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561879697+00:00 stderr F I1013 00:22:29.561869 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561879697+00:00 stderr F I1013 00:22:29.561872 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561891267+00:00 stderr F I1013 00:22:29.561884 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561891267+00:00 stderr F I1013 00:22:29.561888 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561899907+00:00 stderr F I1013 00:22:29.561891 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561907957+00:00 stderr F I1013 00:22:29.561901 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561907957+00:00 stderr F I1013 00:22:29.561904 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561916058+00:00 stderr F I1013 00:22:29.561908 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561924168+00:00 stderr F I1013 00:22:29.561917 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561924168+00:00 stderr F I1013 00:22:29.561921 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561932138+00:00 stderr F I1013 00:22:29.561924 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:22:29.561940788+00:00 stderr F I1013 00:22:29.561934 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:22:29.561940788+00:00 stderr F I1013 00:22:29.561938 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:22:29.561949829+00:00 stderr F I1013 00:22:29.561941 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-10-13T00:22:29.562414151+00:00 stderr F E1013 00:22:29.562367 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.563842669+00:00 stderr F E1013 00:22:29.562788 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.563842669+00:00 stderr F E1013 00:22:29.563272 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.579960153+00:00 stderr F E1013 00:22:29.579893 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.580381314+00:00 stderr F E1013 00:22:29.580342 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.580869447+00:00 stderr F E1013 00:22:29.580808 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.580896638+00:00 stderr F E1013 00:22:29.580864 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.581285569+00:00 stderr F E1013 00:22:29.581255 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.581540575+00:00 stderr F E1013 00:22:29.581511 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.581790132+00:00 stderr F E1013 00:22:29.581763 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.581801472+00:00 stderr F E1013 00:22:29.581794 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.582005578+00:00 stderr F E1013 00:22:29.581979 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.582412839+00:00 stderr F E1013 00:22:29.582366 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.584978868+00:00 stderr F E1013 00:22:29.584946 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.585206224+00:00 stderr F E1013 00:22:29.585174 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.585773689+00:00 stderr F E1013 00:22:29.585750 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.589604812+00:00 stderr F E1013 00:22:29.589550 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.589755536+00:00 stderr F E1013 00:22:29.589724 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 
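The repeated reconciliation failures above and below all report the same condition: kubeapiservers.operator.openshift.io "cluster" is not found. A hedged diagnostic sketch (a hypothetical helper, not part of the operator) that checks for that object with the dynamic client, assuming a local kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "log"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        dyn := dynamic.NewForConfigOrDie(cfg)

        // GVR of the object named in the repeated reconciliation errors.
        gvr := schema.GroupVersionResource{
            Group:    "operator.openshift.io",
            Version:  "v1",
            Resource: "kubeapiservers",
        }

        obj, err := dyn.Resource(gvr).Get(context.Background(), "cluster", metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            // Matches the 'kubeapiservers.operator.openshift.io "cluster" not found'
            // errors in the log: the operator CR simply does not exist yet.
            fmt.Println("kubeapiserver/cluster not found")
        case err != nil:
            log.Fatal(err)
        default:
            fmt.Println("found kubeapiserver/cluster, resourceVersion:", obj.GetResourceVersion())
        }
    }
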
2025-10-13T00:22:29.590077675+00:00 stderr F E1013 00:22:29.590056 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.590116526+00:00 stderr F E1013 00:22:29.590096 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.590417014+00:00 stderr F E1013 00:22:29.590366 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.590506726+00:00 stderr F E1013 00:22:29.590465 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.590619259+00:00 stderr F E1013 00:22:29.590587 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.590682711+00:00 stderr F E1013 00:22:29.590660 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.590835985+00:00 stderr F E1013 00:22:29.590804 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.591009850+00:00 stderr F E1013 00:22:29.590977 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.591033651+00:00 stderr F E1013 00:22:29.591015 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.591191105+00:00 stderr F E1013 00:22:29.591168 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.595994834+00:00 stderr F E1013 00:22:29.595925 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.596090747+00:00 stderr F E1013 00:22:29.596054 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.596464597+00:00 stderr F E1013 00:22:29.596429 1 base_controller.go:268] CertRotationController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-10-13T00:22:29.597725571+00:00 stderr F I1013 00:22:29.597689 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000000316015073043232033046 0ustar zuulzuul2025-10-13T00:22:23.112318856+00:00 stderr F I1013 00:22:23.112141 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-10-13T00:22:23.112318856+00:00 stderr F I1013 00:22:23.112188 1 observer_polling.go:159] Starting file observer 2025-10-13T00:22:29.513138946+00:00 stderr F I1013 00:22:29.512540 1 base_controller.go:73] Caches are synced for CertSyncController 2025-10-13T00:22:29.513138946+00:00 stderr F I1013 00:22:29.512568 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-10-13T00:22:29.513138946+00:00 stderr F I1013 00:22:29.512639 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}] 2025-10-13T00:22:29.513138946+00:00 stderr F I1013 00:22:29.512949 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}] ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000755000175000017500000000000015073043234033056 5ustar zuulzuul././@LongLink0000644000000000000000000000030200000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000755000175000017500000000000015073043234033056 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000644000175000017500000273622615073043234033102 0ustar zuulzuul2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.027560 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.027993 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock 
skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.033219 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:21.804748812+00:00 stderr F I0813 20:01:21.804107 1 builder.go:299] console-operator version - 2025-08-13T20:01:23.971163904+00:00 stderr F I0813 20:01:23.969130 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:23.971163904+00:00 stderr F W0813 20:01:23.969729 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:23.971163904+00:00 stderr F W0813 20:01:23.969910 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.040286 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.040734 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043251 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043292 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043447 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043467 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043492 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043500 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.044628 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:24.108125490+00:00 stderr F I0813 20:01:24.103247 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:24.108125490+00:00 stderr F I0813 20:01:24.103350 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.144496 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.144576 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.145110 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:31.459571568+00:00 stderr F I0813 20:01:31.458825 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 
2025-08-13T20:01:31.509035438+00:00 stderr F I0813 20:01:31.508679 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_26c3518c-99b0-4182-9f74-a6df43fe8de3 became leader 2025-08-13T20:01:31.661391592+00:00 stderr F I0813 20:01:31.656860 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:01:31.683595846+00:00 stderr F I0813 20:01:31.679742 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:01:31.683595846+00:00 stderr F I0813 20:01:31.680373 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", 
"DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:01:36.909432847+00:00 stderr F I0813 20:01:36.897612 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:01:36.909432847+00:00 stderr F I0813 20:01:36.901422 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-08-13T20:01:36.992913177+00:00 stderr F I0813 20:01:36.992598 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.992895 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.992994 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993019 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993031 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993048 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T20:01:36.993626828+00:00 stderr F I0813 20:01:36.993059 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2025-08-13T20:01:36.993626828+00:00 stderr F I0813 20:01:36.993200 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2025-08-13T20:01:36.993643118+00:00 stderr F I0813 20:01:36.993223 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 2025-08-13T20:01:36.993643118+00:00 stderr F I0813 20:01:36.993637 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 2025-08-13T20:01:36.993733931+00:00 stderr F I0813 20:01:36.993692 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 
2025-08-13T20:01:36.994068770+00:00 stderr F I0813 20:01:36.993365 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993446 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993461 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993471 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2025-08-13T20:01:36.994104791+00:00 stderr F I0813 20:01:36.993479 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T20:01:36.994104791+00:00 stderr F I0813 20:01:36.993486 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T20:01:36.994115792+00:00 stderr F I0813 20:01:36.993494 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2025-08-13T20:01:36.994126842+00:00 stderr F I0813 20:01:36.993498 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2025-08-13T20:01:36.994126842+00:00 stderr F I0813 20:01:36.993505 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2025-08-13T20:01:36.994137602+00:00 stderr F I0813 20:01:36.993513 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T20:01:36.994971946+00:00 stderr F I0813 20:01:36.994439 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2025-08-13T20:01:37.000550155+00:00 stderr F I0813 20:01:36.996683 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2025-08-13T20:01:37.000550155+00:00 stderr F I0813 20:01:36.998209 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 2025-08-13T20:01:37.000550155+00:00 stderr F E0813 20:01:36.999530 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T20:01:37.019403323+00:00 stderr F W0813 20:01:37.018234 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:37.019403323+00:00 stderr F E0813 20:01:37.018301 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:37.098673342+00:00 stderr F I0813 20:01:37.093008 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:37.098673342+00:00 stderr F I0813 20:01:37.098659 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.093245 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.098683 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 
2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.093279 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T20:01:37.098721753+00:00 stderr F I0813 20:01:37.098709 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.093289 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.098766 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.093302 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T20:01:37.098878058+00:00 stderr F I0813 20:01:37.098858 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T20:01:37.110930882+00:00 stderr F I0813 20:01:37.110829 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T20:01:37.111337563+00:00 stderr F I0813 20:01:37.111260 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2025-08-13T20:01:37.111337563+00:00 stderr F I0813 20:01:37.111296 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111416 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111304 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111447 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111329 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111483 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 2025-08-13T20:01:37.112073594+00:00 stderr F I0813 20:01:37.111337 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T20:01:37.112073594+00:00 stderr F I0813 20:01:37.112043 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-08-13T20:01:37.112248559+00:00 stderr F I0813 20:01:37.111345 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-08-13T20:01:37.112297170+00:00 stderr F I0813 20:01:37.112281 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 2025-08-13T20:01:37.115871872+00:00 stderr F I0813 20:01:37.115684 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:01:37.115871872+00:00 stderr F I0813 20:01:37.115732 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
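The recurring reflector warnings in this console-operator log ("failed to list *v1.Route: the server is currently unable to handle the request") indicate the aggregated route API is not reachable yet. A hedged sketch of distinguishing that condition with apimachinery error helpers (illustrative check against an assumed openshift-console namespace, not operator code):

    package main

    import (
        "context"
        "fmt"
        "log"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        dyn := dynamic.NewForConfigOrDie(cfg)

        routes := schema.GroupVersionResource{
            Group:    "route.openshift.io",
            Version:  "v1",
            Resource: "routes",
        }

        _, err = dyn.Resource(routes).Namespace("openshift-console").
            List(context.Background(), metav1.ListOptions{})
        switch {
        case apierrors.IsServiceUnavailable(err):
            // Corresponds to "the server is currently unable to handle the request
            // (get routes.route.openshift.io)" in the reflector warnings:
            // the aggregated API serving routes is not available yet.
            fmt.Println("route API currently unavailable; retry later")
        case err != nil:
            log.Fatal(err)
        default:
            fmt.Println("route API reachable")
        }
    }
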
2025-08-13T20:01:37.139159046+00:00 stderr F I0813 20:01:37.138982 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:37.157168150+00:00 stderr F I0813 20:01:37.156393 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:37.184583552+00:00 stderr F I0813 20:01:37.184437 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:37.193553177+00:00 stderr F I0813 20:01:37.193494 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:37.193633830+00:00 stderr F I0813 20:01:37.193619 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:01:37.198707084+00:00 stderr F I0813 20:01:37.198596 1 base_controller.go:73] Caches are synced for OIDCSetupController 2025-08-13T20:01:37.198707084+00:00 stderr F I0813 20:01:37.198656 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 2025-08-13T20:01:38.553891696+00:00 stderr F W0813 20:01:38.553477 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:38.553891696+00:00 stderr F E0813 20:01:38.553542 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:40.636470078+00:00 stderr F W0813 20:01:40.635957 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:40.636470078+00:00 stderr F E0813 20:01:40.636399 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:46.988630633+00:00 stderr F W0813 20:01:46.987604 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:46.988681465+00:00 stderr F E0813 20:01:46.988648 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:51.227743396+00:00 stderr F I0813 20:01:51.224273 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service 
Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:55.704917637+00:00 stderr F W0813 20:01:55.704384 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:55.704917637+00:00 stderr F E0813 20:01:55.704888 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:56.189227326+00:00 stderr F I0813 20:01:56.178481 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" 2025-08-13T20:02:10.877413274+00:00 stderr F W0813 20:02:10.876711 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:10.877413274+00:00 stderr F E0813 20:02:10.877391 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:35.304614522+00:00 stderr F E0813 20:02:35.303652 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.001196281+00:00 stderr F E0813 
20:02:37.001089 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.006719239+00:00 stderr F I0813 20:02:37.006649 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.006719239+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.006719239+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.006719239+00:00 stderr F ... // 43 identical elements 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.006719239+00:00 stderr F - { 2025-08-13T20:02:37.006719239+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.006719239+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.006719239+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.006719239+00:00 stderr F - }, 2025-08-13T20:02:37.006719239+00:00 stderr F + { 2025-08-13T20:02:37.006719239+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.006719239+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.006719239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.001171681 +0000 UTC m=+76.206270985", 2025-08-13T20:02:37.006719239+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.006719239+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.006719239+00:00 stderr F + }, 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.006719239+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.006719239+00:00 stderr F }, 2025-08-13T20:02:37.006719239+00:00 stderr F Version: "", 2025-08-13T20:02:37.006719239+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.006719239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.006719239+00:00 stderr F } 2025-08-13T20:02:37.008377726+00:00 stderr F E0813 20:02:37.008348 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.015011335+00:00 stderr F E0813 20:02:37.014935 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.017540448+00:00 stderr F I0813 20:02:37.017287 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.017540448+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.017540448+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.017540448+00:00 stderr F ... // 43 identical elements 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.017540448+00:00 stderr F - { 2025-08-13T20:02:37.017540448+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.017540448+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.017540448+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.017540448+00:00 stderr F - }, 2025-08-13T20:02:37.017540448+00:00 stderr F + { 2025-08-13T20:02:37.017540448+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.017540448+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.017540448+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.014994335 +0000 UTC m=+76.220093699", 2025-08-13T20:02:37.017540448+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.017540448+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.017540448+00:00 stderr F + }, 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.017540448+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.017540448+00:00 stderr F }, 2025-08-13T20:02:37.017540448+00:00 stderr F Version: "", 2025-08-13T20:02:37.017540448+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.017540448+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.017540448+00:00 stderr F } 2025-08-13T20:02:37.018674320+00:00 stderr F E0813 20:02:37.018650 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.030345693+00:00 stderr F E0813 20:02:37.030317 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.032925026+00:00 stderr F I0813 20:02:37.032590 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.032925026+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.032925026+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.032925026+00:00 stderr F ... // 43 identical elements 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.032925026+00:00 stderr F - { 2025-08-13T20:02:37.032925026+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.032925026+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.032925026+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.032925026+00:00 stderr F - }, 2025-08-13T20:02:37.032925026+00:00 stderr F + { 2025-08-13T20:02:37.032925026+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.032925026+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.032925026+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.030398664 +0000 UTC m=+76.235497818", 2025-08-13T20:02:37.032925026+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.032925026+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.032925026+00:00 stderr F + }, 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.032925026+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.032925026+00:00 stderr F }, 2025-08-13T20:02:37.032925026+00:00 stderr F Version: "", 2025-08-13T20:02:37.032925026+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.032925026+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.032925026+00:00 stderr F } 2025-08-13T20:02:37.035612393+00:00 stderr F E0813 20:02:37.035519 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.056986483+00:00 stderr F E0813 20:02:37.056954 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.059095323+00:00 stderr F I0813 20:02:37.059069 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.059095323+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.059095323+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.059095323+00:00 stderr F ... // 43 identical elements 2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.059095323+00:00 stderr F - { 2025-08-13T20:02:37.059095323+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.059095323+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.059095323+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.059095323+00:00 stderr F - }, 2025-08-13T20:02:37.059095323+00:00 stderr F + { 2025-08-13T20:02:37.059095323+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.059095323+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.059095323+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.057040674 +0000 UTC m=+76.262139848", 2025-08-13T20:02:37.059095323+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.059095323+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.059095323+00:00 stderr F + }, 2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.059095323+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.059095323+00:00 stderr F }, 2025-08-13T20:02:37.059095323+00:00 stderr F Version: "", 2025-08-13T20:02:37.059095323+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.059095323+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.059095323+00:00 stderr F } 2025-08-13T20:02:37.060330618+00:00 stderr F E0813 20:02:37.060220 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.103134429+00:00 stderr F E0813 20:02:37.103081 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.106226368+00:00 stderr F I0813 20:02:37.106197 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.106226368+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.106226368+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.106226368+00:00 stderr F ... // 43 identical elements 2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.106226368+00:00 stderr F - { 2025-08-13T20:02:37.106226368+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.106226368+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.106226368+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.106226368+00:00 stderr F - }, 2025-08-13T20:02:37.106226368+00:00 stderr F + { 2025-08-13T20:02:37.106226368+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.106226368+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.106226368+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.103221472 +0000 UTC m=+76.308320556", 2025-08-13T20:02:37.106226368+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.106226368+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.106226368+00:00 stderr F + }, 2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.106226368+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.106226368+00:00 stderr F }, 2025-08-13T20:02:37.106226368+00:00 stderr F Version: "", 2025-08-13T20:02:37.106226368+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.106226368+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.106226368+00:00 stderr F } 2025-08-13T20:02:37.107951347+00:00 stderr F E0813 20:02:37.107861 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.121601696+00:00 stderr F E0813 20:02:37.121368 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.124528880+00:00 stderr F I0813 20:02:37.124409 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.124528880+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.124528880+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.124528880+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.124528880+00:00 stderr F - { 2025-08-13T20:02:37.124528880+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.124528880+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.124528880+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.124528880+00:00 stderr F - }, 2025-08-13T20:02:37.124528880+00:00 stderr F + { 2025-08-13T20:02:37.124528880+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.124528880+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.124528880+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.121440242 +0000 UTC m=+76.326539426", 2025-08-13T20:02:37.124528880+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.124528880+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:37.124528880+00:00 stderr F + }, 2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.124528880+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:37.124528880+00:00 stderr F }, 2025-08-13T20:02:37.124528880+00:00 stderr F Version: "", 2025-08-13T20:02:37.124528880+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.124528880+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.124528880+00:00 stderr F } 2025-08-13T20:02:37.126343221+00:00 stderr F E0813 20:02:37.126256 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.126520716+00:00 stderr F E0813 20:02:37.126442 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.129926634+00:00 stderr F E0813 20:02:37.129762 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.130551841+00:00 stderr F E0813 20:02:37.130431 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.132014023+00:00 stderr F I0813 20:02:37.131960 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.132014023+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.132014023+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.132014023+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:37.132014023+00:00 stderr F - { 2025-08-13T20:02:37.132014023+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.132014023+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.132014023+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:37.132014023+00:00 stderr F - }, 2025-08-13T20:02:37.132014023+00:00 stderr F + { 2025-08-13T20:02:37.132014023+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.132014023+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.132014023+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.126492286 +0000 UTC m=+76.331591490", 2025-08-13T20:02:37.132014023+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.132014023+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:37.132014023+00:00 stderr F + }, 2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:37.132014023+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:37.132014023+00:00 stderr F }, 2025-08-13T20:02:37.132014023+00:00 stderr F Version: "", 2025-08-13T20:02:37.132014023+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.132014023+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.132014023+00:00 stderr F } 2025-08-13T20:02:37.133736572+00:00 stderr F I0813 20:02:37.133640 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.133736572+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.133736572+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.133736572+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F - { 2025-08-13T20:02:37.133736572+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.133736572+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.133736572+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.133736572+00:00 stderr F - }, 2025-08-13T20:02:37.133736572+00:00 stderr F + { 2025-08-13T20:02:37.133736572+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.133736572+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.133736572+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.130450949 +0000 UTC m=+76.335550003", 2025-08-13T20:02:37.133736572+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.133736572+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.133736572+00:00 stderr F + }, 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.133736572+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.133736572+00:00 stderr F }, 2025-08-13T20:02:37.133736572+00:00 stderr F Version: "", 2025-08-13T20:02:37.133736572+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.133736572+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.133736572+00:00 stderr F } 2025-08-13T20:02:37.134305229+00:00 stderr F E0813 20:02:37.134188 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.134900546+00:00 stderr F E0813 20:02:37.134697 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.135542744+00:00 stderr F I0813 20:02:37.135463 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.135542744+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.135542744+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.135542744+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F - { 2025-08-13T20:02:37.135542744+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.135542744+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.135542744+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.135542744+00:00 stderr F - }, 2025-08-13T20:02:37.135542744+00:00 stderr F + { 2025-08-13T20:02:37.135542744+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.135542744+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.135542744+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.126359392 +0000 UTC m=+76.331458536", 2025-08-13T20:02:37.135542744+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.135542744+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.135542744+00:00 stderr F + }, 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.135542744+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.135542744+00:00 stderr F }, 2025-08-13T20:02:37.135542744+00:00 stderr F Version: "", 2025-08-13T20:02:37.135542744+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.135542744+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.135542744+00:00 stderr F } 2025-08-13T20:02:37.136386588+00:00 stderr F I0813 20:02:37.136322 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.136386588+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.136386588+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.136386588+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.136386588+00:00 stderr F - { 2025-08-13T20:02:37.136386588+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.136386588+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.136386588+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.136386588+00:00 stderr F - }, 2025-08-13T20:02:37.136386588+00:00 stderr F + { 2025-08-13T20:02:37.136386588+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.136386588+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.136386588+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.134232746 +0000 UTC m=+76.339331950", 2025-08-13T20:02:37.136386588+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.136386588+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:37.136386588+00:00 stderr F + }, 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.136386588+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:37.136386588+00:00 stderr F }, 2025-08-13T20:02:37.136386588+00:00 stderr F Version: "", 2025-08-13T20:02:37.136386588+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.136386588+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.136386588+00:00 stderr F } 2025-08-13T20:02:37.136681286+00:00 stderr F E0813 20:02:37.136602 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.136954614+00:00 stderr F E0813 20:02:37.136764 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.137758157+00:00 stderr F E0813 20:02:37.137668 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.138021025+00:00 stderr F E0813 20:02:37.137945 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.138645062+00:00 stderr F I0813 20:02:37.138562 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.138645062+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:37.138645062+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.138645062+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.138645062+00:00 stderr F - { 2025-08-13T20:02:37.138645062+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.138645062+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.138645062+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.138645062+00:00 stderr F - }, 2025-08-13T20:02:37.138645062+00:00 stderr F + { 2025-08-13T20:02:37.138645062+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.138645062+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.138645062+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.136946624 +0000 UTC m=+76.342045848", 2025-08-13T20:02:37.138645062+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.138645062+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:37.138645062+00:00 stderr F + }, 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.138645062+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:37.138645062+00:00 stderr F }, 2025-08-13T20:02:37.138645062+00:00 stderr F Version: "", 2025-08-13T20:02:37.138645062+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.138645062+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.138645062+00:00 stderr F } 2025-08-13T20:02:37.143150611+00:00 stderr F E0813 20:02:37.143051 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.144031406+00:00 stderr F E0813 20:02:37.143962 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.144376696+00:00 stderr F E0813 20:02:37.144241 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.145581960+00:00 stderr F I0813 20:02:37.145476 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.145581960+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.145581960+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.145581960+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F - { 2025-08-13T20:02:37.145581960+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.145581960+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.145581960+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.145581960+00:00 stderr F - }, 2025-08-13T20:02:37.145581960+00:00 stderr F + { 2025-08-13T20:02:37.145581960+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.145581960+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.145581960+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.143086079 +0000 UTC m=+76.348185273", 2025-08-13T20:02:37.145581960+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.145581960+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.145581960+00:00 stderr F + }, 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.145581960+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.145581960+00:00 stderr F }, 2025-08-13T20:02:37.145581960+00:00 stderr F Version: "", 2025-08-13T20:02:37.145581960+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.145581960+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.145581960+00:00 stderr F } 2025-08-13T20:02:37.146268710+00:00 stderr F I0813 20:02:37.146151 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.146268710+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.146268710+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.146268710+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:37.146268710+00:00 stderr F - { 2025-08-13T20:02:37.146268710+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.146268710+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:37.146268710+00:00 stderr F - }, 2025-08-13T20:02:37.146268710+00:00 stderr F + { 2025-08-13T20:02:37.146268710+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.146268710+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.144033216 +0000 UTC m=+76.349132530", 2025-08-13T20:02:37.146268710+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.146268710+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:37.146268710+00:00 stderr F + }, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F }, 2025-08-13T20:02:37.146268710+00:00 stderr F Version: "", 2025-08-13T20:02:37.146268710+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.146268710+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.146268710+00:00 stderr F } 2025-08-13T20:02:37.146268710+00:00 stderr F I0813 20:02:37.146196 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.146268710+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.146268710+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.146268710+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F - { 2025-08-13T20:02:37.146268710+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.146268710+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.146268710+00:00 stderr F - }, 2025-08-13T20:02:37.146268710+00:00 stderr F + { 2025-08-13T20:02:37.146268710+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.146268710+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.146268710+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.144273173 +0000 UTC m=+76.349372367", 2025-08-13T20:02:37.146268710+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.146268710+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.146268710+00:00 stderr F + }, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.146268710+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F }, 2025-08-13T20:02:37.146268710+00:00 stderr F Version: "", 2025-08-13T20:02:37.146268710+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.146268710+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.146268710+00:00 stderr F } 2025-08-13T20:02:37.147033832+00:00 stderr F E0813 20:02:37.146954 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.150736087+00:00 stderr F I0813 20:02:37.150114 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.150736087+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.150736087+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.150736087+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.150736087+00:00 stderr F - { 2025-08-13T20:02:37.150736087+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.150736087+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.150736087+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.150736087+00:00 stderr F - }, 2025-08-13T20:02:37.150736087+00:00 stderr F + { 2025-08-13T20:02:37.150736087+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.150736087+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.150736087+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.147029161 +0000 UTC m=+76.352128676", 2025-08-13T20:02:37.150736087+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.150736087+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:37.150736087+00:00 stderr F + }, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:37.150736087+00:00 stderr F }, 2025-08-13T20:02:37.150736087+00:00 stderr F Version: "", 2025-08-13T20:02:37.150736087+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.150736087+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.150736087+00:00 stderr F } 2025-08-13T20:02:37.189472952+00:00 stderr F E0813 20:02:37.189377 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.192793477+00:00 stderr F I0813 20:02:37.192709 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.192793477+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.192793477+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.192793477+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F - { 2025-08-13T20:02:37.192793477+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.192793477+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.192793477+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.192793477+00:00 stderr F - }, 2025-08-13T20:02:37.192793477+00:00 stderr F + { 2025-08-13T20:02:37.192793477+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.192793477+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.192793477+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.189460582 +0000 UTC m=+76.394559916", 2025-08-13T20:02:37.192793477+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.192793477+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.192793477+00:00 stderr F + }, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.192793477+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:37.192793477+00:00 stderr F }, 2025-08-13T20:02:37.192793477+00:00 stderr F Version: "", 2025-08-13T20:02:37.192793477+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.192793477+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.192793477+00:00 stderr F } 2025-08-13T20:02:37.211320526+00:00 stderr F E0813 20:02:37.211235 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.223465952+00:00 stderr F E0813 20:02:37.223395 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.225205742+00:00 stderr F I0813 20:02:37.225143 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.225205742+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.225205742+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.225205742+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.225205742+00:00 stderr F - { 2025-08-13T20:02:37.225205742+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.225205742+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.225205742+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.225205742+00:00 stderr F - }, 2025-08-13T20:02:37.225205742+00:00 stderr F + { 2025-08-13T20:02:37.225205742+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.225205742+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.225205742+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.223457082 +0000 UTC m=+76.428556296", 2025-08-13T20:02:37.225205742+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.225205742+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:37.225205742+00:00 stderr F + }, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:37.225205742+00:00 stderr F }, 2025-08-13T20:02:37.225205742+00:00 stderr F Version: "", 2025-08-13T20:02:37.225205742+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.225205742+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.225205742+00:00 stderr F } 2025-08-13T20:02:37.410190499+00:00 stderr F E0813 20:02:37.410003 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.422460029+00:00 stderr F E0813 20:02:37.421621 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.428228663+00:00 stderr F I0813 20:02:37.427543 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.428228663+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.428228663+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.428228663+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F - { 2025-08-13T20:02:37.428228663+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.428228663+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.428228663+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.428228663+00:00 stderr F - }, 2025-08-13T20:02:37.428228663+00:00 stderr F + { 2025-08-13T20:02:37.428228663+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.428228663+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.428228663+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.422604473 +0000 UTC m=+76.627703867", 2025-08-13T20:02:37.428228663+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.428228663+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.428228663+00:00 stderr F + }, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.428228663+00:00 stderr F }, 2025-08-13T20:02:37.428228663+00:00 stderr F Version: "", 2025-08-13T20:02:37.428228663+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.428228663+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.428228663+00:00 stderr F } 2025-08-13T20:02:37.610046740+00:00 stderr F E0813 20:02:37.609937 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.633734006+00:00 stderr F E0813 20:02:37.633628 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.635942429+00:00 stderr F I0813 20:02:37.635888 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.635942429+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.635942429+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.635942429+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:37.635942429+00:00 stderr F - { 2025-08-13T20:02:37.635942429+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.635942429+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.635942429+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:37.635942429+00:00 stderr F - }, 2025-08-13T20:02:37.635942429+00:00 stderr F + { 2025-08-13T20:02:37.635942429+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.635942429+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.635942429+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.633691365 +0000 UTC m=+76.838790539", 2025-08-13T20:02:37.635942429+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.635942429+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:37.635942429+00:00 stderr F + }, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:37.635942429+00:00 stderr F }, 2025-08-13T20:02:37.635942429+00:00 stderr F Version: "", 2025-08-13T20:02:37.635942429+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.635942429+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.635942429+00:00 stderr F } 2025-08-13T20:02:37.809628154+00:00 stderr F E0813 20:02:37.809489 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.821830842+00:00 stderr F E0813 20:02:37.821736 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.824464047+00:00 stderr F I0813 20:02:37.824383 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.824464047+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.824464047+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.824464047+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F - { 2025-08-13T20:02:37.824464047+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.824464047+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.824464047+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.824464047+00:00 stderr F - }, 2025-08-13T20:02:37.824464047+00:00 stderr F + { 2025-08-13T20:02:37.824464047+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.824464047+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.824464047+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.821865803 +0000 UTC m=+77.026965177", 2025-08-13T20:02:37.824464047+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.824464047+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.824464047+00:00 stderr F + }, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:37.824464047+00:00 stderr F }, 2025-08-13T20:02:37.824464047+00:00 stderr F Version: "", 2025-08-13T20:02:37.824464047+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.824464047+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.824464047+00:00 stderr F } 2025-08-13T20:02:38.009616618+00:00 stderr F E0813 20:02:38.009553 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.022606408+00:00 stderr F E0813 20:02:38.022533 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.024687358+00:00 stderr F I0813 20:02:38.024601 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.024687358+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.024687358+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.024687358+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.024687358+00:00 stderr F - { 2025-08-13T20:02:38.024687358+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.024687358+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.024687358+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:38.024687358+00:00 stderr F - }, 2025-08-13T20:02:38.024687358+00:00 stderr F + { 2025-08-13T20:02:38.024687358+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.024687358+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.024687358+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.022592148 +0000 UTC m=+77.227691292", 2025-08-13T20:02:38.024687358+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.024687358+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:38.024687358+00:00 stderr F + }, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:38.024687358+00:00 stderr F }, 2025-08-13T20:02:38.024687358+00:00 stderr F Version: "", 2025-08-13T20:02:38.024687358+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.024687358+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.024687358+00:00 stderr F } 2025-08-13T20:02:38.208207273+00:00 stderr F I0813 20:02:38.208072 1 request.go:697] Waited for 1.014815649s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:38.210816507+00:00 stderr F E0813 20:02:38.210699 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.374117816+00:00 stderr F E0813 20:02:38.373953 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.377282966+00:00 stderr F I0813 20:02:38.377135 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.377282966+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.377282966+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.377282966+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F - { 2025-08-13T20:02:38.377282966+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:38.377282966+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.377282966+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:38.377282966+00:00 stderr F - }, 2025-08-13T20:02:38.377282966+00:00 stderr F + { 2025-08-13T20:02:38.377282966+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:38.377282966+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.377282966+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.374052224 +0000 UTC m=+77.579151698", 2025-08-13T20:02:38.377282966+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:38.377282966+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:38.377282966+00:00 stderr F + }, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.377282966+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:38.377282966+00:00 stderr F }, 2025-08-13T20:02:38.377282966+00:00 stderr F Version: "", 2025-08-13T20:02:38.377282966+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.377282966+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.377282966+00:00 stderr F } 2025-08-13T20:02:38.409229868+00:00 stderr F E0813 20:02:38.409114 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.432512932+00:00 stderr F E0813 20:02:38.432459 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.434983522+00:00 stderr F I0813 20:02:38.434956 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.434983522+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.434983522+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.434983522+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.434983522+00:00 stderr F - { 2025-08-13T20:02:38.434983522+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.434983522+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.434983522+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:38.434983522+00:00 stderr F - }, 2025-08-13T20:02:38.434983522+00:00 stderr F + { 2025-08-13T20:02:38.434983522+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.434983522+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.434983522+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.432589994 +0000 UTC m=+77.637689208", 2025-08-13T20:02:38.434983522+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.434983522+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:38.434983522+00:00 stderr F + }, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:38.434983522+00:00 stderr F }, 2025-08-13T20:02:38.434983522+00:00 stderr F Version: "", 2025-08-13T20:02:38.434983522+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.434983522+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.434983522+00:00 stderr F } 2025-08-13T20:02:38.609067568+00:00 stderr F E0813 20:02:38.608938 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.631156048+00:00 stderr F E0813 20:02:38.631058 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.633299470+00:00 stderr F I0813 20:02:38.633058 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.633299470+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.633299470+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.633299470+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F - { 2025-08-13T20:02:38.633299470+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:38.633299470+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.633299470+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:38.633299470+00:00 stderr F - }, 2025-08-13T20:02:38.633299470+00:00 stderr F + { 2025-08-13T20:02:38.633299470+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:38.633299470+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.633299470+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.631231761 +0000 UTC m=+77.836330955", 2025-08-13T20:02:38.633299470+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.633299470+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:38.633299470+00:00 stderr F + }, 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.633299470+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:38.633299470+00:00 stderr F }, 2025-08-13T20:02:38.633299470+00:00 stderr F Version: "", 2025-08-13T20:02:38.633299470+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.633299470+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.633299470+00:00 stderr F } 2025-08-13T20:02:38.809022192+00:00 stderr F E0813 20:02:38.808964 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.831928866+00:00 stderr F E0813 20:02:38.831696 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.833717297+00:00 stderr F I0813 20:02:38.833666 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.833717297+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.833717297+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.833717297+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:38.833717297+00:00 stderr F - { 2025-08-13T20:02:38.833717297+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:38.833717297+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.833717297+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:38.833717297+00:00 stderr F - }, 2025-08-13T20:02:38.833717297+00:00 stderr F + { 2025-08-13T20:02:38.833717297+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:38.833717297+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.833717297+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.831754401 +0000 UTC m=+78.036853625", 2025-08-13T20:02:38.833717297+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.833717297+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:38.833717297+00:00 stderr F + }, 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:38.833717297+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:38.833717297+00:00 stderr F }, 2025-08-13T20:02:38.833717297+00:00 stderr F Version: "", 2025-08-13T20:02:38.833717297+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.833717297+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.833717297+00:00 stderr F } 2025-08-13T20:02:39.008508193+00:00 stderr F E0813 20:02:39.008405 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.033549498+00:00 stderr F E0813 20:02:39.033436 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.036017018+00:00 stderr F I0813 20:02:39.035943 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.036017018+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.036017018+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.036017018+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F - { 2025-08-13T20:02:39.036017018+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:39.036017018+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.036017018+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:39.036017018+00:00 stderr F - }, 2025-08-13T20:02:39.036017018+00:00 stderr F + { 2025-08-13T20:02:39.036017018+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:39.036017018+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.036017018+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.033546278 +0000 UTC m=+78.238645532", 2025-08-13T20:02:39.036017018+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.036017018+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:39.036017018+00:00 stderr F + }, 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.036017018+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:39.036017018+00:00 stderr F }, 2025-08-13T20:02:39.036017018+00:00 stderr F Version: "", 2025-08-13T20:02:39.036017018+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.036017018+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.036017018+00:00 stderr F } 2025-08-13T20:02:39.209076115+00:00 stderr F E0813 20:02:39.209019 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.233264635+00:00 stderr F E0813 20:02:39.233185 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.234977154+00:00 stderr F I0813 20:02:39.234925 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.234977154+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.234977154+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.234977154+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:39.234977154+00:00 stderr F - { 2025-08-13T20:02:39.234977154+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.234977154+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.234977154+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:39.234977154+00:00 stderr F - }, 2025-08-13T20:02:39.234977154+00:00 stderr F + { 2025-08-13T20:02:39.234977154+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.234977154+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.234977154+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.233222854 +0000 UTC m=+78.438322008", 2025-08-13T20:02:39.234977154+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.234977154+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:39.234977154+00:00 stderr F + }, 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.234977154+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:39.234977154+00:00 stderr F }, 2025-08-13T20:02:39.234977154+00:00 stderr F Version: "", 2025-08-13T20:02:39.234977154+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.234977154+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.234977154+00:00 stderr F } 2025-08-13T20:02:39.408274038+00:00 stderr F I0813 20:02:39.407918 1 request.go:697] Waited for 1.030436886s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:39.409755390+00:00 stderr F E0813 20:02:39.409699 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.609879939+00:00 stderr F E0813 20:02:39.609674 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.652287489+00:00 stderr F E0813 20:02:39.652176 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.654922334+00:00 stderr F I0813 20:02:39.654739 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.654922334+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.654922334+00:00 stderr 
F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.654922334+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:39.654922334+00:00 stderr F - { 2025-08-13T20:02:39.654922334+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.654922334+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.654922334+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:39.654922334+00:00 stderr F - }, 2025-08-13T20:02:39.654922334+00:00 stderr F + { 2025-08-13T20:02:39.654922334+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:39.654922334+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.654922334+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.652405342 +0000 UTC m=+78.857504516", 2025-08-13T20:02:39.654922334+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.654922334+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:39.654922334+00:00 stderr F + }, 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.654922334+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:39.654922334+00:00 stderr F }, 2025-08-13T20:02:39.654922334+00:00 stderr F Version: "", 2025-08-13T20:02:39.654922334+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.654922334+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.654922334+00:00 stderr F } 2025-08-13T20:02:39.733002751+00:00 stderr F E0813 20:02:39.732689 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.736083259+00:00 stderr F I0813 20:02:39.735966 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.736083259+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.736083259+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.736083259+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:39.736083259+00:00 stderr F - { 2025-08-13T20:02:39.736083259+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:39.736083259+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.736083259+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:39.736083259+00:00 stderr F - }, 2025-08-13T20:02:39.736083259+00:00 stderr F + { 2025-08-13T20:02:39.736083259+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:39.736083259+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.736083259+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.732833536 +0000 UTC m=+78.937932970", 2025-08-13T20:02:39.736083259+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:39.736083259+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:39.736083259+00:00 stderr F + }, 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:39.736083259+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:39.736083259+00:00 stderr F }, 2025-08-13T20:02:39.736083259+00:00 stderr F Version: "", 2025-08-13T20:02:39.736083259+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.736083259+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.736083259+00:00 stderr F } 2025-08-13T20:02:39.810677247+00:00 stderr F E0813 20:02:39.810556 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.855908507+00:00 stderr F E0813 20:02:39.855723 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.858237484+00:00 stderr F I0813 20:02:39.858171 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:39.858237484+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:39.858237484+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:39.858237484+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F - { 2025-08-13T20:02:39.858237484+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:39.858237484+00:00 stderr F - Status: "False", 2025-08-13T20:02:39.858237484+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:39.858237484+00:00 stderr F - }, 2025-08-13T20:02:39.858237484+00:00 stderr F + { 2025-08-13T20:02:39.858237484+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:39.858237484+00:00 stderr F + Status: "True", 2025-08-13T20:02:39.858237484+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.855873176 +0000 UTC m=+79.060972630", 2025-08-13T20:02:39.858237484+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:39.858237484+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:39.858237484+00:00 stderr F + }, 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:39.858237484+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:39.858237484+00:00 stderr F }, 2025-08-13T20:02:39.858237484+00:00 stderr F Version: "", 2025-08-13T20:02:39.858237484+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:39.858237484+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:39.858237484+00:00 stderr F } 2025-08-13T20:02:40.011393083+00:00 stderr F E0813 20:02:40.010825 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.055184292+00:00 stderr F E0813 20:02:40.055065 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.057880279+00:00 stderr F I0813 20:02:40.057721 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.057880279+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.057880279+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.057880279+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:40.057880279+00:00 stderr F - { 2025-08-13T20:02:40.057880279+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:40.057880279+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.057880279+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:40.057880279+00:00 stderr F - }, 2025-08-13T20:02:40.057880279+00:00 stderr F + { 2025-08-13T20:02:40.057880279+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:40.057880279+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.057880279+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.055165632 +0000 UTC m=+79.260264806", 2025-08-13T20:02:40.057880279+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.057880279+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:40.057880279+00:00 stderr F + }, 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:40.057880279+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:40.057880279+00:00 stderr F }, 2025-08-13T20:02:40.057880279+00:00 stderr F Version: "", 2025-08-13T20:02:40.057880279+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.057880279+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.057880279+00:00 stderr F } 2025-08-13T20:02:40.211032458+00:00 stderr F E0813 20:02:40.210726 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.254105447+00:00 stderr F E0813 20:02:40.253832 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.257162694+00:00 stderr F I0813 20:02:40.257024 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.257162694+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.257162694+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.257162694+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F - { 2025-08-13T20:02:40.257162694+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:40.257162694+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.257162694+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:40.257162694+00:00 stderr F - }, 2025-08-13T20:02:40.257162694+00:00 stderr F + { 2025-08-13T20:02:40.257162694+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:40.257162694+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.257162694+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.253945202 +0000 UTC m=+79.459044626", 2025-08-13T20:02:40.257162694+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.257162694+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:40.257162694+00:00 stderr F + }, 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:40.257162694+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:40.257162694+00:00 stderr F }, 2025-08-13T20:02:40.257162694+00:00 stderr F Version: "", 2025-08-13T20:02:40.257162694+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.257162694+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.257162694+00:00 stderr F } 2025-08-13T20:02:40.408925773+00:00 stderr F I0813 20:02:40.408472 1 request.go:697] Waited for 1.17324116s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:40.410283412+00:00 stderr F E0813 20:02:40.410202 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.453893266+00:00 stderr F E0813 20:02:40.453701 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.459104355+00:00 stderr F I0813 20:02:40.458981 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.459104355+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.459104355+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.459104355+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:40.459104355+00:00 stderr F - { 2025-08-13T20:02:40.459104355+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.459104355+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.459104355+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:40.459104355+00:00 stderr F - }, 2025-08-13T20:02:40.459104355+00:00 stderr F + { 2025-08-13T20:02:40.459104355+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.459104355+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.459104355+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.45403173 +0000 UTC m=+79.659131874", 2025-08-13T20:02:40.459104355+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.459104355+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:40.459104355+00:00 stderr F + }, 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:40.459104355+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:40.459104355+00:00 stderr F }, 2025-08-13T20:02:40.459104355+00:00 stderr F Version: "", 2025-08-13T20:02:40.459104355+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.459104355+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.459104355+00:00 stderr F } 2025-08-13T20:02:40.610141543+00:00 stderr F E0813 20:02:40.609707 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.693087400+00:00 stderr F E0813 20:02:40.693030 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.695862119+00:00 stderr F I0813 20:02:40.695752 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:40.695862119+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:40.695862119+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:40.695862119+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:40.695862119+00:00 stderr F - { 2025-08-13T20:02:40.695862119+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.695862119+00:00 stderr F - Status: "False", 2025-08-13T20:02:40.695862119+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:40.695862119+00:00 stderr F - }, 2025-08-13T20:02:40.695862119+00:00 stderr F + { 2025-08-13T20:02:40.695862119+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:40.695862119+00:00 stderr F + Status: "True", 2025-08-13T20:02:40.695862119+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.693200443 +0000 UTC m=+79.898299647", 2025-08-13T20:02:40.695862119+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:40.695862119+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:40.695862119+00:00 stderr F + }, 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:40.695862119+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:40.695862119+00:00 stderr F }, 2025-08-13T20:02:40.695862119+00:00 stderr F Version: "", 2025-08-13T20:02:40.695862119+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:40.695862119+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:40.695862119+00:00 stderr F } 2025-08-13T20:02:40.809537742+00:00 stderr F E0813 20:02:40.809455 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.009569698+00:00 stderr F E0813 20:02:41.009458 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.092326639+00:00 stderr F E0813 20:02:41.092276 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.094510742+00:00 stderr F I0813 20:02:41.094483 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.094510742+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.094510742+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.094510742+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F - { 2025-08-13T20:02:41.094510742+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:41.094510742+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.094510742+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:41.094510742+00:00 stderr F - }, 2025-08-13T20:02:41.094510742+00:00 stderr F + { 2025-08-13T20:02:41.094510742+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:41.094510742+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.094510742+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.092412442 +0000 UTC m=+80.297511526", 2025-08-13T20:02:41.094510742+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.094510742+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:41.094510742+00:00 stderr F + }, 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.094510742+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:41.094510742+00:00 stderr F }, 2025-08-13T20:02:41.094510742+00:00 stderr F Version: "", 2025-08-13T20:02:41.094510742+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.094510742+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.094510742+00:00 stderr F } 2025-08-13T20:02:41.209867223+00:00 stderr F E0813 20:02:41.209685 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.292589613+00:00 stderr F E0813 20:02:41.292529 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.295547067+00:00 stderr F I0813 20:02:41.295514 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.295547067+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.295547067+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.295547067+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:41.295547067+00:00 stderr F - { 2025-08-13T20:02:41.295547067+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:41.295547067+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.295547067+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:41.295547067+00:00 stderr F - }, 2025-08-13T20:02:41.295547067+00:00 stderr F + { 2025-08-13T20:02:41.295547067+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:41.295547067+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.295547067+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.292687976 +0000 UTC m=+80.497787250", 2025-08-13T20:02:41.295547067+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.295547067+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:41.295547067+00:00 stderr F + }, 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:41.295547067+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:41.295547067+00:00 stderr F }, 2025-08-13T20:02:41.295547067+00:00 stderr F Version: "", 2025-08-13T20:02:41.295547067+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.295547067+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.295547067+00:00 stderr F } 2025-08-13T20:02:41.408646444+00:00 stderr F I0813 20:02:41.408524 1 request.go:697] Waited for 1.151170611s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:41.410157427+00:00 stderr F E0813 20:02:41.410122 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.452286258+00:00 stderr F E0813 20:02:41.452087 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.455177620+00:00 stderr F I0813 20:02:41.455081 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.455177620+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.455177620+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.455177620+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F - { 2025-08-13T20:02:41.455177620+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:41.455177620+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.455177620+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:41.455177620+00:00 stderr F - }, 2025-08-13T20:02:41.455177620+00:00 stderr F + { 2025-08-13T20:02:41.455177620+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:41.455177620+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.455177620+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.452167594 +0000 UTC m=+80.657266869", 2025-08-13T20:02:41.455177620+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:41.455177620+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:41.455177620+00:00 stderr F + }, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.455177620+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:41.455177620+00:00 stderr F }, 2025-08-13T20:02:41.455177620+00:00 stderr F Version: "", 2025-08-13T20:02:41.455177620+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.455177620+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.455177620+00:00 stderr F } 2025-08-13T20:02:41.492750232+00:00 stderr F E0813 20:02:41.492654 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.498085424+00:00 stderr F I0813 20:02:41.498014 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.498085424+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.498085424+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.498085424+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F - { 2025-08-13T20:02:41.498085424+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:41.498085424+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.498085424+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:41.498085424+00:00 stderr F - }, 2025-08-13T20:02:41.498085424+00:00 stderr F + { 2025-08-13T20:02:41.498085424+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:41.498085424+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.498085424+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.492727882 +0000 UTC m=+80.697827116", 2025-08-13T20:02:41.498085424+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.498085424+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:41.498085424+00:00 stderr F + }, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:41.498085424+00:00 stderr F }, 2025-08-13T20:02:41.498085424+00:00 stderr F Version: "", 2025-08-13T20:02:41.498085424+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.498085424+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.498085424+00:00 stderr F } 2025-08-13T20:02:41.609886224+00:00 stderr F E0813 20:02:41.609696 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.692903652+00:00 stderr F E0813 20:02:41.692665 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.695712942+00:00 stderr F I0813 20:02:41.695640 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.695712942+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.695712942+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.695712942+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.695712942+00:00 stderr F - { 2025-08-13T20:02:41.695712942+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.695712942+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.695712942+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:41.695712942+00:00 stderr F - }, 2025-08-13T20:02:41.695712942+00:00 stderr F + { 2025-08-13T20:02:41.695712942+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.695712942+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.695712942+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.692911722 +0000 UTC m=+80.898011126", 2025-08-13T20:02:41.695712942+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.695712942+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:41.695712942+00:00 stderr F + }, 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.695712942+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:41.695712942+00:00 stderr F }, 2025-08-13T20:02:41.695712942+00:00 stderr F Version: "", 2025-08-13T20:02:41.695712942+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.695712942+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.695712942+00:00 stderr F } 2025-08-13T20:02:41.809092337+00:00 stderr F E0813 20:02:41.808992 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.971767138+00:00 stderr F E0813 20:02:41.971645 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.975059611+00:00 stderr F I0813 20:02:41.975003 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.975059611+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.975059611+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.975059611+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.975059611+00:00 stderr F - { 2025-08-13T20:02:41.975059611+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.975059611+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.975059611+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:41.975059611+00:00 stderr F - }, 2025-08-13T20:02:41.975059611+00:00 stderr F + { 2025-08-13T20:02:41.975059611+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.975059611+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.975059611+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.971699266 +0000 UTC m=+81.176798460", 2025-08-13T20:02:41.975059611+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.975059611+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:41.975059611+00:00 stderr F + }, 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.975059611+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:41.975059611+00:00 stderr F }, 2025-08-13T20:02:41.975059611+00:00 stderr F Version: "", 2025-08-13T20:02:41.975059611+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.975059611+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.975059611+00:00 stderr F } 2025-08-13T20:02:42.009924016+00:00 stderr F E0813 20:02:42.009727 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.173314598+00:00 stderr F E0813 20:02:42.173193 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.175037337+00:00 stderr F I0813 20:02:42.174736 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.175037337+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.175037337+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.175037337+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F - { 2025-08-13T20:02:42.175037337+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:42.175037337+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.175037337+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:42.175037337+00:00 stderr F - }, 2025-08-13T20:02:42.175037337+00:00 stderr F + { 2025-08-13T20:02:42.175037337+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:42.175037337+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.175037337+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.173233336 +0000 UTC m=+81.378332610", 2025-08-13T20:02:42.175037337+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.175037337+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:42.175037337+00:00 stderr F + }, 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:42.175037337+00:00 stderr F }, 2025-08-13T20:02:42.175037337+00:00 stderr F Version: "", 2025-08-13T20:02:42.175037337+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.175037337+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.175037337+00:00 stderr F } 2025-08-13T20:02:42.209069618+00:00 stderr F E0813 20:02:42.208979 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.373713945+00:00 stderr F E0813 20:02:42.373274 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.376973758+00:00 stderr F I0813 20:02:42.376690 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.376973758+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.376973758+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.376973758+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:42.376973758+00:00 stderr F - { 2025-08-13T20:02:42.376973758+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:42.376973758+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.376973758+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:42.376973758+00:00 stderr F - }, 2025-08-13T20:02:42.376973758+00:00 stderr F + { 2025-08-13T20:02:42.376973758+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:42.376973758+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.376973758+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.373340805 +0000 UTC m=+81.578439969", 2025-08-13T20:02:42.376973758+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.376973758+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:42.376973758+00:00 stderr F + }, 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:42.376973758+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:42.376973758+00:00 stderr F }, 2025-08-13T20:02:42.376973758+00:00 stderr F Version: "", 2025-08-13T20:02:42.376973758+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.376973758+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.376973758+00:00 stderr F } 2025-08-13T20:02:42.410688000+00:00 stderr F E0813 20:02:42.410560 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.608154843+00:00 stderr F I0813 20:02:42.607562 1 request.go:697] Waited for 1.109243645s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:42.610078638+00:00 stderr F E0813 20:02:42.609974 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.772931454+00:00 stderr F E0813 20:02:42.772878 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.775012764+00:00 stderr F I0813 20:02:42.774987 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.775012764+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:42.775012764+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.775012764+00:00 stderr F ... // 19 identical elements 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F - { 2025-08-13T20:02:42.775012764+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:42.775012764+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.775012764+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:42.775012764+00:00 stderr F - }, 2025-08-13T20:02:42.775012764+00:00 stderr F + { 2025-08-13T20:02:42.775012764+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:42.775012764+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.775012764+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.773033487 +0000 UTC m=+81.978132701", 2025-08-13T20:02:42.775012764+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.775012764+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:42.775012764+00:00 stderr F + }, 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:42.775012764+00:00 stderr F }, 2025-08-13T20:02:42.775012764+00:00 stderr F Version: "", 2025-08-13T20:02:42.775012764+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.775012764+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.775012764+00:00 stderr F } 2025-08-13T20:02:42.809667162+00:00 stderr F E0813 20:02:42.809511 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.972950400+00:00 stderr F E0813 20:02:42.972830 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.983392638+00:00 stderr F I0813 20:02:42.983251 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.983392638+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.983392638+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.983392638+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:42.983392638+00:00 stderr F - { 2025-08-13T20:02:42.983392638+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:42.983392638+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.983392638+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:42.983392638+00:00 stderr F - }, 2025-08-13T20:02:42.983392638+00:00 stderr F + { 2025-08-13T20:02:42.983392638+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:42.983392638+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.983392638+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.972919719 +0000 UTC m=+82.178018933", 2025-08-13T20:02:42.983392638+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.983392638+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:42.983392638+00:00 stderr F + }, 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:42.983392638+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:42.983392638+00:00 stderr F }, 2025-08-13T20:02:42.983392638+00:00 stderr F Version: "", 2025-08-13T20:02:42.983392638+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.983392638+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.983392638+00:00 stderr F } 2025-08-13T20:02:43.009577656+00:00 stderr F E0813 20:02:43.009483 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.211282169+00:00 stderr F E0813 20:02:43.210955 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.332547909+00:00 stderr F E0813 20:02:43.332470 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.335764841+00:00 stderr F I0813 20:02:43.335650 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.335764841+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.335764841+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.335764841+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:43.335764841+00:00 stderr F - { 2025-08-13T20:02:43.335764841+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:43.335764841+00:00 stderr F - Status: "False", 2025-08-13T20:02:43.335764841+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:43.335764841+00:00 stderr F - }, 2025-08-13T20:02:43.335764841+00:00 stderr F + { 2025-08-13T20:02:43.335764841+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:43.335764841+00:00 stderr F + Status: "True", 2025-08-13T20:02:43.335764841+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.332531919 +0000 UTC m=+82.537631143", 2025-08-13T20:02:43.335764841+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:43.335764841+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:43.335764841+00:00 stderr F + }, 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:43.335764841+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:43.335764841+00:00 stderr F }, 2025-08-13T20:02:43.335764841+00:00 stderr F Version: "", 2025-08-13T20:02:43.335764841+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:43.335764841+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:43.335764841+00:00 stderr F } 2025-08-13T20:02:43.410632006+00:00 stderr F E0813 20:02:43.410544 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.533812061+00:00 stderr F E0813 20:02:43.533681 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.535882560+00:00 stderr F I0813 20:02:43.535707 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.535882560+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.535882560+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.535882560+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:43.535882560+00:00 stderr F - { 2025-08-13T20:02:43.535882560+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:43.535882560+00:00 stderr F - Status: "False", 2025-08-13T20:02:43.535882560+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:43.535882560+00:00 stderr F - }, 2025-08-13T20:02:43.535882560+00:00 stderr F + { 2025-08-13T20:02:43.535882560+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:43.535882560+00:00 stderr F + Status: "True", 2025-08-13T20:02:43.535882560+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.533750569 +0000 UTC m=+82.738849793", 2025-08-13T20:02:43.535882560+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:43.535882560+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:43.535882560+00:00 stderr F + }, 2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:43.535882560+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:43.535882560+00:00 stderr F }, 2025-08-13T20:02:43.535882560+00:00 stderr F Version: "", 2025-08-13T20:02:43.535882560+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:43.535882560+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:43.535882560+00:00 stderr F } 2025-08-13T20:02:43.613427862+00:00 stderr F E0813 20:02:43.613301 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.692654802+00:00 stderr F E0813 20:02:43.692552 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.694875885+00:00 stderr F I0813 20:02:43.694731 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.694875885+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.694875885+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.694875885+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:43.694875885+00:00 stderr F - { 2025-08-13T20:02:43.694875885+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:43.694875885+00:00 stderr F - Status: "False", 2025-08-13T20:02:43.694875885+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:43.694875885+00:00 stderr F - }, 2025-08-13T20:02:43.694875885+00:00 stderr F + { 2025-08-13T20:02:43.694875885+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:43.694875885+00:00 stderr F + Status: "True", 2025-08-13T20:02:43.694875885+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.692635021 +0000 UTC m=+82.897734245", 2025-08-13T20:02:43.694875885+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:43.694875885+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:43.694875885+00:00 stderr F + }, 2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:43.694875885+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:43.694875885+00:00 stderr F }, 2025-08-13T20:02:43.694875885+00:00 stderr F Version: "", 2025-08-13T20:02:43.694875885+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:43.694875885+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:43.694875885+00:00 stderr F } 2025-08-13T20:02:43.738092648+00:00 stderr F E0813 20:02:43.738038 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.740215249+00:00 stderr F I0813 20:02:43.740186 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.740215249+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.740215249+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.740215249+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:43.740215249+00:00 stderr F - { 2025-08-13T20:02:43.740215249+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:43.740215249+00:00 stderr F - Status: "False", 2025-08-13T20:02:43.740215249+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:43.740215249+00:00 stderr F - }, 2025-08-13T20:02:43.740215249+00:00 stderr F + { 2025-08-13T20:02:43.740215249+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:43.740215249+00:00 stderr F + Status: "True", 2025-08-13T20:02:43.740215249+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.738236632 +0000 UTC m=+82.943335826", 2025-08-13T20:02:43.740215249+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:43.740215249+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:43.740215249+00:00 stderr F + }, 2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:43.740215249+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:43.740215249+00:00 stderr F }, 2025-08-13T20:02:43.740215249+00:00 stderr F Version: "", 2025-08-13T20:02:43.740215249+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:43.740215249+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:43.740215249+00:00 stderr F } 2025-08-13T20:02:43.810450213+00:00 stderr F E0813 20:02:43.810300 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.937389464+00:00 stderr F E0813 20:02:43.937257 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.943501168+00:00 stderr F I0813 20:02:43.943424 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.943501168+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.943501168+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.943501168+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:43.943501168+00:00 stderr F - { 2025-08-13T20:02:43.943501168+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:43.943501168+00:00 stderr F - Status: "False", 2025-08-13T20:02:43.943501168+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:43.943501168+00:00 stderr F - }, 2025-08-13T20:02:43.943501168+00:00 stderr F + { 2025-08-13T20:02:43.943501168+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:43.943501168+00:00 stderr F + Status: "True", 2025-08-13T20:02:43.943501168+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.937455325 +0000 UTC m=+83.142555230", 2025-08-13T20:02:43.943501168+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:43.943501168+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:43.943501168+00:00 stderr F + }, 2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:43.943501168+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:43.943501168+00:00 stderr F }, 2025-08-13T20:02:43.943501168+00:00 stderr F Version: "", 2025-08-13T20:02:43.943501168+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:43.943501168+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:43.943501168+00:00 stderr F } 2025-08-13T20:02:44.009754338+00:00 stderr F E0813 20:02:44.009698 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.133475947+00:00 stderr F E0813 20:02:44.133326 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.136048801+00:00 stderr F I0813 20:02:44.136012 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:44.136048801+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:44.136048801+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:44.136048801+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:44.136048801+00:00 stderr F - { 2025-08-13T20:02:44.136048801+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:44.136048801+00:00 stderr F - Status: "False", 2025-08-13T20:02:44.136048801+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:44.136048801+00:00 stderr F - }, 2025-08-13T20:02:44.136048801+00:00 stderr F + { 2025-08-13T20:02:44.136048801+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:44.136048801+00:00 stderr F + Status: "True", 2025-08-13T20:02:44.136048801+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.133948111 +0000 UTC m=+83.339047395", 2025-08-13T20:02:44.136048801+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:44.136048801+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:44.136048801+00:00 stderr F + }, 2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:44.136048801+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:44.136048801+00:00 stderr F }, 2025-08-13T20:02:44.136048801+00:00 stderr F Version: "", 2025-08-13T20:02:44.136048801+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:44.136048801+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:44.136048801+00:00 stderr F } 2025-08-13T20:02:44.210293679+00:00 stderr F E0813 20:02:44.210220 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.410241533+00:00 stderr F E0813 20:02:44.410145 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.610735652+00:00 stderr F E0813 20:02:44.610572 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.653021509+00:00 stderr F E0813 20:02:44.652960 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.654728947+00:00 stderr F I0813 20:02:44.654702 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:44.654728947+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:44.654728947+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:44.654728947+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:44.654728947+00:00 stderr F - { 2025-08-13T20:02:44.654728947+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:44.654728947+00:00 stderr F - Status: "False", 2025-08-13T20:02:44.654728947+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:44.654728947+00:00 stderr F - }, 2025-08-13T20:02:44.654728947+00:00 stderr F + { 2025-08-13T20:02:44.654728947+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:44.654728947+00:00 stderr F + Status: "True", 2025-08-13T20:02:44.654728947+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.653127182 +0000 UTC m=+83.858226276", 2025-08-13T20:02:44.654728947+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:44.654728947+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:44.654728947+00:00 stderr F + }, 2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:44.654728947+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:44.654728947+00:00 stderr F }, 2025-08-13T20:02:44.654728947+00:00 stderr F Version: "", 2025-08-13T20:02:44.654728947+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:44.654728947+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:44.654728947+00:00 stderr F } 2025-08-13T20:02:44.809875933+00:00 stderr F E0813 20:02:44.809720 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.852525840+00:00 stderr F E0813 20:02:44.852477 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.854682591+00:00 stderr F I0813 20:02:44.854656 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:44.854682591+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:44.854682591+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:44.854682591+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:44.854682591+00:00 stderr F - { 2025-08-13T20:02:44.854682591+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:44.854682591+00:00 stderr F - Status: "False", 2025-08-13T20:02:44.854682591+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:44.854682591+00:00 stderr F - }, 2025-08-13T20:02:44.854682591+00:00 stderr F + { 2025-08-13T20:02:44.854682591+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:44.854682591+00:00 stderr F + Status: "True", 2025-08-13T20:02:44.854682591+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.852597902 +0000 UTC m=+84.057697016", 2025-08-13T20:02:44.854682591+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:44.854682591+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:44.854682591+00:00 stderr F + }, 2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:44.854682591+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:44.854682591+00:00 stderr F }, 2025-08-13T20:02:44.854682591+00:00 stderr F Version: "", 2025-08-13T20:02:44.854682591+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:44.854682591+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:44.854682591+00:00 stderr F } 2025-08-13T20:02:45.010542128+00:00 stderr F E0813 20:02:45.010409 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.209175334+00:00 stderr F E0813 20:02:45.208632 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.254010902+00:00 stderr F E0813 20:02:45.253914 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.256010770+00:00 stderr F I0813 20:02:45.255942 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:45.256010770+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:45.256010770+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:45.256010770+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:45.256010770+00:00 stderr F - { 2025-08-13T20:02:45.256010770+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:45.256010770+00:00 stderr F - Status: "False", 2025-08-13T20:02:45.256010770+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:45.256010770+00:00 stderr F - }, 2025-08-13T20:02:45.256010770+00:00 stderr F + { 2025-08-13T20:02:45.256010770+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:45.256010770+00:00 stderr F + Status: "True", 2025-08-13T20:02:45.256010770+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.253957231 +0000 UTC m=+84.459056385", 2025-08-13T20:02:45.256010770+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:45.256010770+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:45.256010770+00:00 stderr F + }, 2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:45.256010770+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:45.256010770+00:00 stderr F }, 2025-08-13T20:02:45.256010770+00:00 stderr F Version: "", 2025-08-13T20:02:45.256010770+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:45.256010770+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:45.256010770+00:00 stderr F } 2025-08-13T20:02:45.409763546+00:00 stderr F E0813 20:02:45.409605 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.453226156+00:00 stderr F E0813 20:02:45.453104 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.456240132+00:00 stderr F I0813 20:02:45.456165 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:45.456240132+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:45.456240132+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:45.456240132+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:45.456240132+00:00 stderr F - { 2025-08-13T20:02:45.456240132+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:45.456240132+00:00 stderr F - Status: "False", 2025-08-13T20:02:45.456240132+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:45.456240132+00:00 stderr F - }, 2025-08-13T20:02:45.456240132+00:00 stderr F + { 2025-08-13T20:02:45.456240132+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:45.456240132+00:00 stderr F + Status: "True", 2025-08-13T20:02:45.456240132+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.453183404 +0000 UTC m=+84.658282768", 2025-08-13T20:02:45.456240132+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:45.456240132+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:45.456240132+00:00 stderr F + }, 2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:45.456240132+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:45.456240132+00:00 stderr F }, 2025-08-13T20:02:45.456240132+00:00 stderr F Version: "", 2025-08-13T20:02:45.456240132+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:45.456240132+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:45.456240132+00:00 stderr F } 2025-08-13T20:02:45.610164553+00:00 stderr F E0813 20:02:45.609866 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.654445656+00:00 stderr F E0813 20:02:45.654335 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.658755559+00:00 stderr F I0813 20:02:45.658340 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:45.658755559+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:45.658755559+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:45.658755559+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:45.658755559+00:00 stderr F - { 2025-08-13T20:02:45.658755559+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:45.658755559+00:00 stderr F - Status: "False", 2025-08-13T20:02:45.658755559+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:45.658755559+00:00 stderr F - }, 2025-08-13T20:02:45.658755559+00:00 stderr F + { 2025-08-13T20:02:45.658755559+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:45.658755559+00:00 stderr F + Status: "True", 2025-08-13T20:02:45.658755559+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.654403195 +0000 UTC m=+84.859502529", 2025-08-13T20:02:45.658755559+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:45.658755559+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:45.658755559+00:00 stderr F + }, 2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:45.658755559+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:45.658755559+00:00 stderr F }, 2025-08-13T20:02:45.658755559+00:00 stderr F Version: "", 2025-08-13T20:02:45.658755559+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:45.658755559+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:45.658755559+00:00 stderr F } 2025-08-13T20:02:45.812534816+00:00 stderr F E0813 20:02:45.812435 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.009827914+00:00 stderr F E0813 20:02:46.009709 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.491641599+00:00 stderr F E0813 20:02:46.491579 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.493878643+00:00 stderr F I0813 20:02:46.493768 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:46.493878643+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:46.493878643+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:46.493878643+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:46.493878643+00:00 stderr F - { 2025-08-13T20:02:46.493878643+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:46.493878643+00:00 stderr F - Status: "False", 2025-08-13T20:02:46.493878643+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:46.493878643+00:00 stderr F - }, 2025-08-13T20:02:46.493878643+00:00 stderr F + { 2025-08-13T20:02:46.493878643+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:46.493878643+00:00 stderr F + Status: "True", 2025-08-13T20:02:46.493878643+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.491770803 +0000 UTC m=+85.696870017", 2025-08-13T20:02:46.493878643+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:46.493878643+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:46.493878643+00:00 stderr F + }, 2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:46.493878643+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:46.493878643+00:00 stderr F }, 2025-08-13T20:02:46.493878643+00:00 stderr F Version: "", 2025-08-13T20:02:46.493878643+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:46.493878643+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:46.493878643+00:00 stderr F } 2025-08-13T20:02:46.495978243+00:00 stderr F E0813 20:02:46.495218 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.692088328+00:00 stderr F E0813 20:02:46.692031 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.694663531+00:00 stderr F I0813 20:02:46.694637 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:46.694663531+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:46.694663531+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:46.694663531+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:46.694663531+00:00 stderr F - { 2025-08-13T20:02:46.694663531+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:46.694663531+00:00 stderr F - Status: "False", 2025-08-13T20:02:46.694663531+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:46.694663531+00:00 stderr F - }, 2025-08-13T20:02:46.694663531+00:00 stderr F + { 2025-08-13T20:02:46.694663531+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:46.694663531+00:00 stderr F + Status: "True", 2025-08-13T20:02:46.694663531+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.692192771 +0000 UTC m=+85.897291995", 2025-08-13T20:02:46.694663531+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:46.694663531+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:46.694663531+00:00 stderr F + }, 2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:46.694663531+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:46.694663531+00:00 stderr F }, 2025-08-13T20:02:46.694663531+00:00 stderr F Version: "", 2025-08-13T20:02:46.694663531+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:46.694663531+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:46.694663531+00:00 stderr F } 2025-08-13T20:02:46.696121993+00:00 stderr F E0813 20:02:46.696027 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.893951606+00:00 stderr F E0813 20:02:46.893726 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.896709045+00:00 stderr F I0813 20:02:46.896600 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:46.896709045+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:46.896709045+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:46.896709045+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:46.896709045+00:00 stderr F - { 2025-08-13T20:02:46.896709045+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:46.896709045+00:00 stderr F - Status: "False", 2025-08-13T20:02:46.896709045+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:46.896709045+00:00 stderr F - }, 2025-08-13T20:02:46.896709045+00:00 stderr F + { 2025-08-13T20:02:46.896709045+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:46.896709045+00:00 stderr F + Status: "True", 2025-08-13T20:02:46.896709045+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.893873064 +0000 UTC m=+86.098972518", 2025-08-13T20:02:46.896709045+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:46.896709045+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:46.896709045+00:00 stderr F + }, 2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:46.896709045+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:46.896709045+00:00 stderr F }, 2025-08-13T20:02:46.896709045+00:00 stderr F Version: "", 2025-08-13T20:02:46.896709045+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:46.896709045+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:46.896709045+00:00 stderr F } 2025-08-13T20:02:46.898198017+00:00 stderr F E0813 20:02:46.898159 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.972571309+00:00 stderr F E0813 20:02:46.972327 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.975397639+00:00 stderr F I0813 20:02:46.975340 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:46.975397639+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:46.975397639+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:46.975397639+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:46.975397639+00:00 stderr F - { 2025-08-13T20:02:46.975397639+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:46.975397639+00:00 stderr F - Status: "False", 2025-08-13T20:02:46.975397639+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:46.975397639+00:00 stderr F - }, 2025-08-13T20:02:46.975397639+00:00 stderr F + { 2025-08-13T20:02:46.975397639+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:46.975397639+00:00 stderr F + Status: "True", 2025-08-13T20:02:46.975397639+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.972388134 +0000 UTC m=+86.177487448", 2025-08-13T20:02:46.975397639+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:46.975397639+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:46.975397639+00:00 stderr F + }, 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:46.975397639+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:46.975397639+00:00 stderr F }, 2025-08-13T20:02:46.975397639+00:00 stderr F Version: "", 2025-08-13T20:02:46.975397639+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:46.975397639+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:46.975397639+00:00 stderr F } 2025-08-13T20:02:46.977408187+00:00 stderr F E0813 20:02:46.977378 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.094554669+00:00 stderr F E0813 20:02:47.094426 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.096831594+00:00 stderr F I0813 20:02:47.096699 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:47.096831594+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:47.096831594+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:47.096831594+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F - { 2025-08-13T20:02:47.096831594+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:47.096831594+00:00 stderr F - Status: "False", 2025-08-13T20:02:47.096831594+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:47.096831594+00:00 stderr F - }, 2025-08-13T20:02:47.096831594+00:00 stderr F + { 2025-08-13T20:02:47.096831594+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:47.096831594+00:00 stderr F + Status: "True", 2025-08-13T20:02:47.096831594+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:47.094501647 +0000 UTC m=+86.299600961", 2025-08-13T20:02:47.096831594+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:47.096831594+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:47.096831594+00:00 stderr F + }, 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:47.096831594+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:47.096831594+00:00 stderr F }, 2025-08-13T20:02:47.096831594+00:00 stderr F Version: "", 2025-08-13T20:02:47.096831594+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:47.096831594+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:47.096831594+00:00 stderr F } 2025-08-13T20:02:47.098274195+00:00 stderr F E0813 20:02:47.098170 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.292954108+00:00 stderr F E0813 20:02:47.292596 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.295254294+00:00 stderr F I0813 20:02:47.295053 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:47.295254294+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:47.295254294+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:47.295254294+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:47.295254294+00:00 stderr F - { 2025-08-13T20:02:47.295254294+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:47.295254294+00:00 stderr F - Status: "False", 2025-08-13T20:02:47.295254294+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:47.295254294+00:00 stderr F - }, 2025-08-13T20:02:47.295254294+00:00 stderr F + { 2025-08-13T20:02:47.295254294+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:47.295254294+00:00 stderr F + Status: "True", 2025-08-13T20:02:47.295254294+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:47.29266478 +0000 UTC m=+86.497763994", 2025-08-13T20:02:47.295254294+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:47.295254294+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:47.295254294+00:00 stderr F + }, 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:47.295254294+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:47.295254294+00:00 stderr F }, 2025-08-13T20:02:47.295254294+00:00 stderr F Version: "", 2025-08-13T20:02:47.295254294+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:47.295254294+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:47.295254294+00:00 stderr F } 2025-08-13T20:02:47.296515290+00:00 stderr F E0813 20:02:47.296303 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.058882924+00:00 stderr F E0813 20:02:49.058373 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.060875371+00:00 stderr F I0813 20:02:49.060752 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.060875371+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.060875371+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.060875371+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:49.060875371+00:00 stderr F - { 2025-08-13T20:02:49.060875371+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.060875371+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.060875371+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:49.060875371+00:00 stderr F - }, 2025-08-13T20:02:49.060875371+00:00 stderr F + { 2025-08-13T20:02:49.060875371+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.060875371+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.060875371+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.058833793 +0000 UTC m=+88.263933037", 2025-08-13T20:02:49.060875371+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.060875371+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:49.060875371+00:00 stderr F + }, 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.060875371+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:49.060875371+00:00 stderr F }, 2025-08-13T20:02:49.060875371+00:00 stderr F Version: "", 2025-08-13T20:02:49.060875371+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.060875371+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.060875371+00:00 stderr F } 2025-08-13T20:02:49.061996513+00:00 stderr F E0813 20:02:49.061969 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.258492338+00:00 stderr F E0813 20:02:49.258437 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.261381521+00:00 stderr F I0813 20:02:49.261351 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.261381521+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.261381521+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.261381521+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F - { 2025-08-13T20:02:49.261381521+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:49.261381521+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.261381521+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:49.261381521+00:00 stderr F - }, 2025-08-13T20:02:49.261381521+00:00 stderr F + { 2025-08-13T20:02:49.261381521+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:49.261381521+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.261381521+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.258576731 +0000 UTC m=+88.463675915", 2025-08-13T20:02:49.261381521+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.261381521+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:49.261381521+00:00 stderr F + }, 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.261381521+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:49.261381521+00:00 stderr F }, 2025-08-13T20:02:49.261381521+00:00 stderr F Version: "", 2025-08-13T20:02:49.261381521+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.261381521+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.261381521+00:00 stderr F } 2025-08-13T20:02:49.262738109+00:00 stderr F E0813 20:02:49.262649 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.463326122+00:00 stderr F E0813 20:02:49.463212 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.464921058+00:00 stderr F I0813 20:02:49.464835 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.464921058+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.464921058+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.464921058+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:49.464921058+00:00 stderr F - { 2025-08-13T20:02:49.464921058+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:49.464921058+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.464921058+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:49.464921058+00:00 stderr F - }, 2025-08-13T20:02:49.464921058+00:00 stderr F + { 2025-08-13T20:02:49.464921058+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:49.464921058+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.464921058+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.463280471 +0000 UTC m=+88.668379695", 2025-08-13T20:02:49.464921058+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.464921058+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:49.464921058+00:00 stderr F + }, 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:49.464921058+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:49.464921058+00:00 stderr F }, 2025-08-13T20:02:49.464921058+00:00 stderr F Version: "", 2025-08-13T20:02:49.464921058+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.464921058+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.464921058+00:00 stderr F } 2025-08-13T20:02:49.466155773+00:00 stderr F E0813 20:02:49.466097 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.662081552+00:00 stderr F E0813 20:02:49.661961 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.665182730+00:00 stderr F I0813 20:02:49.664728 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.665182730+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.665182730+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.665182730+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F - { 2025-08-13T20:02:49.665182730+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:49.665182730+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.665182730+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:49.665182730+00:00 stderr F - }, 2025-08-13T20:02:49.665182730+00:00 stderr F + { 2025-08-13T20:02:49.665182730+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:49.665182730+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.665182730+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.662041651 +0000 UTC m=+88.867141105", 2025-08-13T20:02:49.665182730+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.665182730+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:49.665182730+00:00 stderr F + }, 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.665182730+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:49.665182730+00:00 stderr F }, 2025-08-13T20:02:49.665182730+00:00 stderr F Version: "", 2025-08-13T20:02:49.665182730+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.665182730+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.665182730+00:00 stderr F } 2025-08-13T20:02:49.666707054+00:00 stderr F E0813 20:02:49.666621 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.859386661+00:00 stderr F E0813 20:02:49.859269 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.862996234+00:00 stderr F I0813 20:02:49.862758 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:49.862996234+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:49.862996234+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:49.862996234+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:49.862996234+00:00 stderr F - { 2025-08-13T20:02:49.862996234+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.862996234+00:00 stderr F - Status: "False", 2025-08-13T20:02:49.862996234+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:49.862996234+00:00 stderr F - }, 2025-08-13T20:02:49.862996234+00:00 stderr F + { 2025-08-13T20:02:49.862996234+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:49.862996234+00:00 stderr F + Status: "True", 2025-08-13T20:02:49.862996234+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.85935953 +0000 UTC m=+89.064458744", 2025-08-13T20:02:49.862996234+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:49.862996234+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:49.862996234+00:00 stderr F + }, 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:49.862996234+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:49.862996234+00:00 stderr F }, 2025-08-13T20:02:49.862996234+00:00 stderr F Version: "", 2025-08-13T20:02:49.862996234+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:49.862996234+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:49.862996234+00:00 stderr F } 2025-08-13T20:02:49.864553388+00:00 stderr F E0813 20:02:49.864490 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.100349989+00:00 stderr F E0813 20:02:52.100206 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.102302825+00:00 stderr F I0813 20:02:52.102244 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:52.102302825+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:52.102302825+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:52.102302825+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F - { 2025-08-13T20:02:52.102302825+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:52.102302825+00:00 stderr F - Status: "False", 2025-08-13T20:02:52.102302825+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:52.102302825+00:00 stderr F - }, 2025-08-13T20:02:52.102302825+00:00 stderr F + { 2025-08-13T20:02:52.102302825+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:52.102302825+00:00 stderr F + Status: "True", 2025-08-13T20:02:52.102302825+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:52.100318508 +0000 UTC m=+91.305417732", 2025-08-13T20:02:52.102302825+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:52.102302825+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:52.102302825+00:00 stderr F + }, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:52.102302825+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:52.102302825+00:00 stderr F }, 2025-08-13T20:02:52.102302825+00:00 stderr F Version: "", 2025-08-13T20:02:52.102302825+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:52.102302825+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:52.102302825+00:00 stderr F } 2025-08-13T20:02:52.103716805+00:00 stderr F E0813 20:02:52.103639 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.922128069+00:00 stderr F W0813 20:02:53.921703 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.922128069+00:00 stderr F E0813 20:02:53.921904 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.185693288+00:00 stderr F E0813 20:02:54.185571 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:02:54.188333574+00:00 stderr F I0813 20:02:54.188235 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.188333574+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.188333574+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.188333574+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:54.188333574+00:00 stderr F - { 2025-08-13T20:02:54.188333574+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.188333574+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.188333574+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:54.188333574+00:00 stderr F - }, 2025-08-13T20:02:54.188333574+00:00 stderr F + { 2025-08-13T20:02:54.188333574+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.188333574+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.188333574+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.185654827 +0000 UTC m=+93.390754171", 2025-08-13T20:02:54.188333574+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.188333574+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:54.188333574+00:00 stderr F + }, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:54.188333574+00:00 stderr F }, 2025-08-13T20:02:54.188333574+00:00 stderr F Version: "", 2025-08-13T20:02:54.188333574+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.188333574+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.188333574+00:00 stderr F } 2025-08-13T20:02:54.190895287+00:00 stderr F E0813 20:02:54.190263 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.385528939+00:00 stderr F E0813 20:02:54.385395 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.389274516+00:00 stderr F I0813 20:02:54.389174 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.389274516+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.389274516+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.389274516+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F - { 2025-08-13T20:02:54.389274516+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:54.389274516+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.389274516+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:54.389274516+00:00 stderr F - }, 2025-08-13T20:02:54.389274516+00:00 stderr F + { 2025-08-13T20:02:54.389274516+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:54.389274516+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.389274516+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.385467807 +0000 UTC m=+93.590566981", 2025-08-13T20:02:54.389274516+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.389274516+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:54.389274516+00:00 stderr F + }, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:54.389274516+00:00 stderr F }, 2025-08-13T20:02:54.389274516+00:00 stderr F Version: "", 2025-08-13T20:02:54.389274516+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.389274516+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.389274516+00:00 stderr F } 2025-08-13T20:02:54.394194976+00:00 stderr F E0813 20:02:54.394097 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.590049464+00:00 stderr F E0813 20:02:54.589960 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.592184545+00:00 stderr F I0813 20:02:54.592117 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.592184545+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.592184545+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.592184545+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:54.592184545+00:00 stderr F - { 2025-08-13T20:02:54.592184545+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:54.592184545+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.592184545+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:54.592184545+00:00 stderr F - }, 2025-08-13T20:02:54.592184545+00:00 stderr F + { 2025-08-13T20:02:54.592184545+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:54.592184545+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.592184545+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.590033023 +0000 UTC m=+93.795132307", 2025-08-13T20:02:54.592184545+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.592184545+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:54.592184545+00:00 stderr F + }, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F ... // 55 identical elements 2025-08-13T20:02:54.592184545+00:00 stderr F }, 2025-08-13T20:02:54.592184545+00:00 stderr F Version: "", 2025-08-13T20:02:54.592184545+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.592184545+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.592184545+00:00 stderr F } 2025-08-13T20:02:54.593226934+00:00 stderr F E0813 20:02:54.593183 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.789140294+00:00 stderr F E0813 20:02:54.789091 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.791350607+00:00 stderr F I0813 20:02:54.791303 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.791350607+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.791350607+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.791350607+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F - { 2025-08-13T20:02:54.791350607+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:54.791350607+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.791350607+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:54.791350607+00:00 stderr F - }, 2025-08-13T20:02:54.791350607+00:00 stderr F + { 2025-08-13T20:02:54.791350607+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:54.791350607+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.791350607+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.789221796 +0000 UTC m=+93.994320850", 2025-08-13T20:02:54.791350607+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.791350607+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:54.791350607+00:00 stderr F + }, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F ... // 38 identical elements 2025-08-13T20:02:54.791350607+00:00 stderr F }, 2025-08-13T20:02:54.791350607+00:00 stderr F Version: "", 2025-08-13T20:02:54.791350607+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.791350607+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.791350607+00:00 stderr F } 2025-08-13T20:02:54.793879449+00:00 stderr F E0813 20:02:54.793257 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.987382889+00:00 stderr F E0813 20:02:54.987331 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.990219700+00:00 stderr F I0813 20:02:54.990194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.990219700+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.990219700+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.990219700+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:54.990219700+00:00 stderr F - { 2025-08-13T20:02:54.990219700+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.990219700+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.990219700+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:54.990219700+00:00 stderr F - }, 2025-08-13T20:02:54.990219700+00:00 stderr F + { 2025-08-13T20:02:54.990219700+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.990219700+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.990219700+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.987465371 +0000 UTC m=+94.192564445", 2025-08-13T20:02:54.990219700+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.990219700+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:54.990219700+00:00 stderr F + }, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F ... // 47 identical elements 2025-08-13T20:02:54.990219700+00:00 stderr F }, 2025-08-13T20:02:54.990219700+00:00 stderr F Version: "", 2025-08-13T20:02:54.990219700+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.990219700+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.990219700+00:00 stderr F } 2025-08-13T20:02:54.991403564+00:00 stderr F E0813 20:02:54.991380 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.347261235+00:00 stderr F E0813 20:03:02.346578 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.348916012+00:00 stderr F I0813 20:03:02.348863 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:02.348916012+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:02.348916012+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:02.348916012+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F - { 2025-08-13T20:03:02.348916012+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:02.348916012+00:00 stderr F - Status: "False", 2025-08-13T20:03:02.348916012+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:02.348916012+00:00 stderr F - }, 2025-08-13T20:03:02.348916012+00:00 stderr F + { 2025-08-13T20:03:02.348916012+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:02.348916012+00:00 stderr F + Status: "True", 2025-08-13T20:03:02.348916012+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:02.347210973 +0000 UTC m=+101.552310187", 2025-08-13T20:03:02.348916012+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:02.348916012+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:02.348916012+00:00 stderr F + }, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:02.348916012+00:00 stderr F ... // 14 identical elements 2025-08-13T20:03:02.348916012+00:00 stderr F }, 2025-08-13T20:03:02.348916012+00:00 stderr F Version: "", 2025-08-13T20:03:02.348916012+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:02.348916012+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:02.348916012+00:00 stderr F } 2025-08-13T20:03:02.350906699+00:00 stderr F E0813 20:03:02.350729 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.433210680+00:00 stderr F E0813 20:03:04.433095 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.435199707+00:00 stderr F I0813 20:03:04.435118 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.435199707+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.435199707+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.435199707+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:04.435199707+00:00 stderr F - { 2025-08-13T20:03:04.435199707+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:04.435199707+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.435199707+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:04.435199707+00:00 stderr F - }, 2025-08-13T20:03:04.435199707+00:00 stderr F + { 2025-08-13T20:03:04.435199707+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:04.435199707+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.435199707+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.433156558 +0000 UTC m=+103.638255912", 2025-08-13T20:03:04.435199707+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.435199707+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:04.435199707+00:00 stderr F + }, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F ... // 47 identical elements 2025-08-13T20:03:04.435199707+00:00 stderr F }, 2025-08-13T20:03:04.435199707+00:00 stderr F Version: "", 2025-08-13T20:03:04.435199707+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.435199707+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.435199707+00:00 stderr F } 2025-08-13T20:03:04.436316628+00:00 stderr F E0813 20:03:04.436239 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.637671522+00:00 stderr F E0813 20:03:04.637543 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.640534914+00:00 stderr F I0813 20:03:04.640422 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.640534914+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.640534914+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.640534914+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F - { 2025-08-13T20:03:04.640534914+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:04.640534914+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.640534914+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:04.640534914+00:00 stderr F - }, 2025-08-13T20:03:04.640534914+00:00 stderr F + { 2025-08-13T20:03:04.640534914+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:04.640534914+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.640534914+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.637684293 +0000 UTC m=+103.842783657", 2025-08-13T20:03:04.640534914+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.640534914+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:04.640534914+00:00 stderr F + }, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:04.640534914+00:00 stderr F }, 2025-08-13T20:03:04.640534914+00:00 stderr F Version: "", 2025-08-13T20:03:04.640534914+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.640534914+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.640534914+00:00 stderr F } 2025-08-13T20:03:04.642434758+00:00 stderr F E0813 20:03:04.642398 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.838517592+00:00 stderr F E0813 20:03:04.838410 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.840338214+00:00 stderr F I0813 20:03:04.840273 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.840338214+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.840338214+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.840338214+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:04.840338214+00:00 stderr F - { 2025-08-13T20:03:04.840338214+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:04.840338214+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.840338214+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:04.840338214+00:00 stderr F - }, 2025-08-13T20:03:04.840338214+00:00 stderr F + { 2025-08-13T20:03:04.840338214+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:04.840338214+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.840338214+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.838476711 +0000 UTC m=+104.043575975", 2025-08-13T20:03:04.840338214+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.840338214+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:04.840338214+00:00 stderr F + }, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F ... // 55 identical elements 2025-08-13T20:03:04.840338214+00:00 stderr F }, 2025-08-13T20:03:04.840338214+00:00 stderr F Version: "", 2025-08-13T20:03:04.840338214+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.840338214+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.840338214+00:00 stderr F } 2025-08-13T20:03:04.841729014+00:00 stderr F E0813 20:03:04.841647 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.036209562+00:00 stderr F E0813 20:03:05.036004 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.041749930+00:00 stderr F I0813 20:03:05.041664 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:05.041749930+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:05.041749930+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:05.041749930+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F - { 2025-08-13T20:03:05.041749930+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:05.041749930+00:00 stderr F - Status: "False", 2025-08-13T20:03:05.041749930+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:05.041749930+00:00 stderr F - }, 2025-08-13T20:03:05.041749930+00:00 stderr F + { 2025-08-13T20:03:05.041749930+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:05.041749930+00:00 stderr F + Status: "True", 2025-08-13T20:03:05.041749930+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:05.036119899 +0000 UTC m=+104.241219113", 2025-08-13T20:03:05.041749930+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:05.041749930+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:05.041749930+00:00 stderr F + }, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:05.041749930+00:00 stderr F }, 2025-08-13T20:03:05.041749930+00:00 stderr F Version: "", 2025-08-13T20:03:05.041749930+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:05.041749930+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:05.041749930+00:00 stderr F } 2025-08-13T20:03:05.042971324+00:00 stderr F E0813 20:03:05.042921 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.234516949+00:00 stderr F E0813 20:03:05.234399 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.237211016+00:00 stderr F I0813 20:03:05.237104 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:05.237211016+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:05.237211016+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:05.237211016+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:05.237211016+00:00 stderr F - { 2025-08-13T20:03:05.237211016+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:05.237211016+00:00 stderr F - Status: "False", 2025-08-13T20:03:05.237211016+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:05.237211016+00:00 stderr F - }, 2025-08-13T20:03:05.237211016+00:00 stderr F + { 2025-08-13T20:03:05.237211016+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:05.237211016+00:00 stderr F + Status: "True", 2025-08-13T20:03:05.237211016+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:05.234450737 +0000 UTC m=+104.439549901", 2025-08-13T20:03:05.237211016+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:05.237211016+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:05.237211016+00:00 stderr F + }, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F ... // 47 identical elements 2025-08-13T20:03:05.237211016+00:00 stderr F }, 2025-08-13T20:03:05.237211016+00:00 stderr F Version: "", 2025-08-13T20:03:05.237211016+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:05.237211016+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:05.237211016+00:00 stderr F } 2025-08-13T20:03:05.239046088+00:00 stderr F E0813 20:03:05.238768 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.836279379+00:00 stderr F E0813 20:03:22.835474 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.839112150+00:00 stderr F I0813 20:03:22.839011 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:22.839112150+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:22.839112150+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:22.839112150+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F - { 2025-08-13T20:03:22.839112150+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:22.839112150+00:00 stderr F - Status: "False", 2025-08-13T20:03:22.839112150+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:22.839112150+00:00 stderr F - }, 2025-08-13T20:03:22.839112150+00:00 stderr F + { 2025-08-13T20:03:22.839112150+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:22.839112150+00:00 stderr F + Status: "True", 2025-08-13T20:03:22.839112150+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:22.83629378 +0000 UTC m=+122.041393044", 2025-08-13T20:03:22.839112150+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:22.839112150+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:22.839112150+00:00 stderr F + }, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:22.839112150+00:00 stderr F ... // 14 identical elements 2025-08-13T20:03:22.839112150+00:00 stderr F }, 2025-08-13T20:03:22.839112150+00:00 stderr F Version: "", 2025-08-13T20:03:22.839112150+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:22.839112150+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:22.839112150+00:00 stderr F } 2025-08-13T20:03:22.841314113+00:00 stderr F E0813 20:03:22.841211 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.921173105+00:00 stderr F E0813 20:03:24.921029 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.923902473+00:00 stderr F I0813 20:03:24.923866 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:24.923902473+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:24.923902473+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:24.923902473+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:24.923902473+00:00 stderr F - { 2025-08-13T20:03:24.923902473+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:24.923902473+00:00 stderr F - Status: "False", 2025-08-13T20:03:24.923902473+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:24.923902473+00:00 stderr F - }, 2025-08-13T20:03:24.923902473+00:00 stderr F + { 2025-08-13T20:03:24.923902473+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:24.923902473+00:00 stderr F + Status: "True", 2025-08-13T20:03:24.923902473+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:24.921269628 +0000 UTC m=+124.126368962", 2025-08-13T20:03:24.923902473+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:24.923902473+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:24.923902473+00:00 stderr F + }, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F ... // 47 identical elements 2025-08-13T20:03:24.923902473+00:00 stderr F }, 2025-08-13T20:03:24.923902473+00:00 stderr F Version: "", 2025-08-13T20:03:24.923902473+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:24.923902473+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:24.923902473+00:00 stderr F } 2025-08-13T20:03:24.925448307+00:00 stderr F E0813 20:03:24.925394 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.126030619+00:00 stderr F E0813 20:03:25.125969 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.129151838+00:00 stderr F I0813 20:03:25.129119 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.129151838+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.129151838+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.129151838+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F - { 2025-08-13T20:03:25.129151838+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:25.129151838+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.129151838+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:25.129151838+00:00 stderr F - }, 2025-08-13T20:03:25.129151838+00:00 stderr F + { 2025-08-13T20:03:25.129151838+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:25.129151838+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.129151838+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.126180074 +0000 UTC m=+124.331279348", 2025-08-13T20:03:25.129151838+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.129151838+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:25.129151838+00:00 stderr F + }, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:25.129151838+00:00 stderr F }, 2025-08-13T20:03:25.129151838+00:00 stderr F Version: "", 2025-08-13T20:03:25.129151838+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.129151838+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.129151838+00:00 stderr F } 2025-08-13T20:03:25.130926699+00:00 stderr F E0813 20:03:25.130880 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.326744395+00:00 stderr F E0813 20:03:25.326646 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.329996568+00:00 stderr F I0813 20:03:25.329836 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.329996568+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.329996568+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.329996568+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:25.329996568+00:00 stderr F - { 2025-08-13T20:03:25.329996568+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:25.329996568+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.329996568+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:25.329996568+00:00 stderr F - }, 2025-08-13T20:03:25.329996568+00:00 stderr F + { 2025-08-13T20:03:25.329996568+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:25.329996568+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.329996568+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.326953871 +0000 UTC m=+124.532053195", 2025-08-13T20:03:25.329996568+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.329996568+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:25.329996568+00:00 stderr F + }, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F ... // 55 identical elements 2025-08-13T20:03:25.329996568+00:00 stderr F }, 2025-08-13T20:03:25.329996568+00:00 stderr F Version: "", 2025-08-13T20:03:25.329996568+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.329996568+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.329996568+00:00 stderr F } 2025-08-13T20:03:25.332283563+00:00 stderr F E0813 20:03:25.332211 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.525548166+00:00 stderr F E0813 20:03:25.525415 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.528057577+00:00 stderr F I0813 20:03:25.527970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.528057577+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.528057577+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.528057577+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F - { 2025-08-13T20:03:25.528057577+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:25.528057577+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.528057577+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:25.528057577+00:00 stderr F - }, 2025-08-13T20:03:25.528057577+00:00 stderr F + { 2025-08-13T20:03:25.528057577+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:25.528057577+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.528057577+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.525488714 +0000 UTC m=+124.730587908", 2025-08-13T20:03:25.528057577+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.528057577+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:25.528057577+00:00 stderr F + }, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:25.528057577+00:00 stderr F }, 2025-08-13T20:03:25.528057577+00:00 stderr F Version: "", 2025-08-13T20:03:25.528057577+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.528057577+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.528057577+00:00 stderr F } 2025-08-13T20:03:25.529081187+00:00 stderr F E0813 20:03:25.528986 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.722506365+00:00 stderr F E0813 20:03:25.722404 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.724716808+00:00 stderr F I0813 20:03:25.724619 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.724716808+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.724716808+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.724716808+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:25.724716808+00:00 stderr F - { 2025-08-13T20:03:25.724716808+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:25.724716808+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.724716808+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:25.724716808+00:00 stderr F - }, 2025-08-13T20:03:25.724716808+00:00 stderr F + { 2025-08-13T20:03:25.724716808+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:25.724716808+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.724716808+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.722476354 +0000 UTC m=+124.927575578", 2025-08-13T20:03:25.724716808+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.724716808+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:25.724716808+00:00 stderr F + }, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F ... // 47 identical elements 2025-08-13T20:03:25.724716808+00:00 stderr F }, 2025-08-13T20:03:25.724716808+00:00 stderr F Version: "", 2025-08-13T20:03:25.724716808+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.724716808+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.724716808+00:00 stderr F } 2025-08-13T20:03:25.726247031+00:00 stderr F E0813 20:03:25.726156 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.309131285+00:00 stderr F E0813 20:03:35.308076 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.002192285+00:00 stderr F E0813 20:03:37.002088 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.005224771+00:00 stderr F I0813 20:03:37.005161 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.005224771+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.005224771+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.005224771+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F - { 2025-08-13T20:03:37.005224771+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:37.005224771+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.005224771+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:37.005224771+00:00 stderr F - }, 2025-08-13T20:03:37.005224771+00:00 stderr F + { 2025-08-13T20:03:37.005224771+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:37.005224771+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.005224771+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.002153564 +0000 UTC m=+136.207252708", 2025-08-13T20:03:37.005224771+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:37.005224771+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:37.005224771+00:00 stderr F + }, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.005224771+00:00 stderr F ... // 14 identical elements 2025-08-13T20:03:37.005224771+00:00 stderr F }, 2025-08-13T20:03:37.005224771+00:00 stderr F Version: "", 2025-08-13T20:03:37.005224771+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.005224771+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.005224771+00:00 stderr F } 2025-08-13T20:03:37.006494927+00:00 stderr F E0813 20:03:37.006414 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.115226139+00:00 stderr F E0813 20:03:37.115175 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.117290808+00:00 stderr F I0813 20:03:37.117251 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.117290808+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.117290808+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.117290808+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F - { 2025-08-13T20:03:37.117290808+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:37.117290808+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.117290808+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:37.117290808+00:00 stderr F - }, 2025-08-13T20:03:37.117290808+00:00 stderr F + { 2025-08-13T20:03:37.117290808+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:37.117290808+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.117290808+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.115319432 +0000 UTC m=+136.320418566", 2025-08-13T20:03:37.117290808+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.117290808+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:37.117290808+00:00 stderr F + }, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:37.117290808+00:00 stderr F }, 2025-08-13T20:03:37.117290808+00:00 stderr F Version: "", 2025-08-13T20:03:37.117290808+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.117290808+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.117290808+00:00 stderr F } 2025-08-13T20:03:37.118891614+00:00 stderr F E0813 20:03:37.118757 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.119163772+00:00 stderr F E0813 20:03:37.117534 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.120903341+00:00 stderr F I0813 20:03:37.120758 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.120903341+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.120903341+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.120903341+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.120903341+00:00 stderr F - { 2025-08-13T20:03:37.120903341+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.120903341+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.120903341+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:37.120903341+00:00 stderr F - }, 2025-08-13T20:03:37.120903341+00:00 stderr F + { 2025-08-13T20:03:37.120903341+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.120903341+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.120903341+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.118886654 +0000 UTC m=+136.323986058", 2025-08-13T20:03:37.120903341+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.120903341+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:37.120903341+00:00 stderr F + }, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F ... // 47 identical elements 2025-08-13T20:03:37.120903341+00:00 stderr F }, 2025-08-13T20:03:37.120903341+00:00 stderr F Version: "", 2025-08-13T20:03:37.120903341+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.120903341+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.120903341+00:00 stderr F } 2025-08-13T20:03:37.121382915+00:00 stderr F I0813 20:03:37.121358 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.121382915+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.121382915+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.121382915+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:37.121382915+00:00 stderr F - { 2025-08-13T20:03:37.121382915+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:37.121382915+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.121382915+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:37.121382915+00:00 stderr F - }, 2025-08-13T20:03:37.121382915+00:00 stderr F + { 2025-08-13T20:03:37.121382915+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:37.121382915+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.121382915+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.119211223 +0000 UTC m=+136.324310487", 2025-08-13T20:03:37.121382915+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.121382915+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:37.121382915+00:00 stderr F + }, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F ... // 55 identical elements 2025-08-13T20:03:37.121382915+00:00 stderr F }, 2025-08-13T20:03:37.121382915+00:00 stderr F Version: "", 2025-08-13T20:03:37.121382915+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.121382915+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.121382915+00:00 stderr F } 2025-08-13T20:03:37.121892799+00:00 stderr F E0813 20:03:37.121868 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.122761554+00:00 stderr F E0813 20:03:37.122662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.123134605+00:00 stderr F E0813 20:03:37.123101 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.128750475+00:00 stderr F E0813 20:03:37.128724 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.131482963+00:00 stderr F I0813 20:03:37.131423 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.131482963+00:00 stderr F 
ObservedGeneration: 1, 2025-08-13T20:03:37.131482963+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.131482963+00:00 stderr F ... // 19 identical elements 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F - { 2025-08-13T20:03:37.131482963+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:37.131482963+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.131482963+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:37.131482963+00:00 stderr F - }, 2025-08-13T20:03:37.131482963+00:00 stderr F + { 2025-08-13T20:03:37.131482963+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:37.131482963+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.131482963+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.128898429 +0000 UTC m=+136.333997793", 2025-08-13T20:03:37.131482963+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.131482963+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:37.131482963+00:00 stderr F + }, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:37.131482963+00:00 stderr F }, 2025-08-13T20:03:37.131482963+00:00 stderr F Version: "", 2025-08-13T20:03:37.131482963+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.131482963+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.131482963+00:00 stderr F } 2025-08-13T20:03:37.132311937+00:00 stderr F E0813 20:03:37.132267 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.136277390+00:00 stderr F E0813 20:03:37.136211 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.136968619+00:00 stderr F I0813 20:03:37.136907 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.136968619+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.136968619+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.136968619+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.136968619+00:00 stderr F - { 2025-08-13T20:03:37.136968619+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.136968619+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.136968619+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:37.136968619+00:00 stderr F - }, 2025-08-13T20:03:37.136968619+00:00 stderr F + { 2025-08-13T20:03:37.136968619+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.136968619+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.136968619+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.132306286 +0000 UTC m=+136.337405491", 2025-08-13T20:03:37.136968619+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.136968619+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:37.136968619+00:00 stderr F + }, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F ... // 47 identical elements 2025-08-13T20:03:37.136968619+00:00 stderr F }, 2025-08-13T20:03:37.136968619+00:00 stderr F Version: "", 2025-08-13T20:03:37.136968619+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.136968619+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.136968619+00:00 stderr F } 2025-08-13T20:03:37.146010717+00:00 stderr F E0813 20:03:37.145622 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:50.832318355+00:00 stderr F W0813 20:03:50.831668 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:50.832318355+00:00 stderr F E0813 20:03:50.832276 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:03.811403786+00:00 stderr F E0813 20:04:03.807034 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:04:03.825416906+00:00 stderr F I0813 20:04:03.824582 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:03.825416906+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:03.825416906+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:03.825416906+00:00 stderr F ... // 43 identical elements 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F - { 2025-08-13T20:04:03.825416906+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:03.825416906+00:00 stderr F - Status: "False", 2025-08-13T20:04:03.825416906+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:04:03.825416906+00:00 stderr F - }, 2025-08-13T20:04:03.825416906+00:00 stderr F + { 2025-08-13T20:04:03.825416906+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:03.825416906+00:00 stderr F + Status: "True", 2025-08-13T20:04:03.825416906+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:03.807627468 +0000 UTC m=+163.012726792", 2025-08-13T20:04:03.825416906+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:04:03.825416906+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:04:03.825416906+00:00 stderr F + }, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:03.825416906+00:00 stderr F ... // 14 identical elements 2025-08-13T20:04:03.825416906+00:00 stderr F }, 2025-08-13T20:04:03.825416906+00:00 stderr F Version: "", 2025-08-13T20:04:03.825416906+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:03.825416906+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:03.825416906+00:00 stderr F } 2025-08-13T20:04:03.858934092+00:00 stderr F E0813 20:04:03.853628 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.896446866+00:00 stderr F E0813 20:04:05.893516 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.923424836+00:00 stderr F I0813 20:04:05.923124 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:05.923424836+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:05.923424836+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:05.923424836+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:05.923424836+00:00 stderr F - { 2025-08-13T20:04:05.923424836+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:05.923424836+00:00 stderr F - Status: "False", 2025-08-13T20:04:05.923424836+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:05.923424836+00:00 stderr F - }, 2025-08-13T20:04:05.923424836+00:00 stderr F + { 2025-08-13T20:04:05.923424836+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:05.923424836+00:00 stderr F + Status: "True", 2025-08-13T20:04:05.923424836+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:05.894241363 +0000 UTC m=+165.099340748", 2025-08-13T20:04:05.923424836+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:05.923424836+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:04:05.923424836+00:00 stderr F + }, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F ... // 47 identical elements 2025-08-13T20:04:05.923424836+00:00 stderr F }, 2025-08-13T20:04:05.923424836+00:00 stderr F Version: "", 2025-08-13T20:04:05.923424836+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:05.923424836+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:05.923424836+00:00 stderr F } 2025-08-13T20:04:05.937612201+00:00 stderr F E0813 20:04:05.936704 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.095120444+00:00 stderr F E0813 20:04:06.095053 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.098044487+00:00 stderr F I0813 20:04:06.098014 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.098044487+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.098044487+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.098044487+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F - { 2025-08-13T20:04:06.098044487+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:06.098044487+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.098044487+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:06.098044487+00:00 stderr F - }, 2025-08-13T20:04:06.098044487+00:00 stderr F + { 2025-08-13T20:04:06.098044487+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:06.098044487+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.098044487+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.095211036 +0000 UTC m=+165.300310130", 2025-08-13T20:04:06.098044487+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.098044487+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:04:06.098044487+00:00 stderr F + }, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F ... // 38 identical elements 2025-08-13T20:04:06.098044487+00:00 stderr F }, 2025-08-13T20:04:06.098044487+00:00 stderr F Version: "", 2025-08-13T20:04:06.098044487+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.098044487+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.098044487+00:00 stderr F } 2025-08-13T20:04:06.102716631+00:00 stderr F E0813 20:04:06.100421 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.296891860+00:00 stderr F E0813 20:04:06.296625 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.300179564+00:00 stderr F I0813 20:04:06.300020 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.300179564+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.300179564+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.300179564+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:04:06.300179564+00:00 stderr F - { 2025-08-13T20:04:06.300179564+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:06.300179564+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.300179564+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:04:06.300179564+00:00 stderr F - }, 2025-08-13T20:04:06.300179564+00:00 stderr F + { 2025-08-13T20:04:06.300179564+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:06.300179564+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.300179564+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.296729665 +0000 UTC m=+165.501828979", 2025-08-13T20:04:06.300179564+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.300179564+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:04:06.300179564+00:00 stderr F + }, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F ... // 55 identical elements 2025-08-13T20:04:06.300179564+00:00 stderr F }, 2025-08-13T20:04:06.300179564+00:00 stderr F Version: "", 2025-08-13T20:04:06.300179564+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.300179564+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.300179564+00:00 stderr F } 2025-08-13T20:04:06.301719637+00:00 stderr F E0813 20:04:06.301617 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.491970365+00:00 stderr F E0813 20:04:06.491887 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.494330812+00:00 stderr F I0813 20:04:06.494219 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.494330812+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.494330812+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.494330812+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F - { 2025-08-13T20:04:06.494330812+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:06.494330812+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.494330812+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:06.494330812+00:00 stderr F - }, 2025-08-13T20:04:06.494330812+00:00 stderr F + { 2025-08-13T20:04:06.494330812+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:06.494330812+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.494330812+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.491934914 +0000 UTC m=+165.697034178", 2025-08-13T20:04:06.494330812+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.494330812+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:04:06.494330812+00:00 stderr F + }, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F ... // 38 identical elements 2025-08-13T20:04:06.494330812+00:00 stderr F }, 2025-08-13T20:04:06.494330812+00:00 stderr F Version: "", 2025-08-13T20:04:06.494330812+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.494330812+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.494330812+00:00 stderr F } 2025-08-13T20:04:06.495895007+00:00 stderr F E0813 20:04:06.495730 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.689370306+00:00 stderr F E0813 20:04:06.689248 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.691285151+00:00 stderr F I0813 20:04:06.691211 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.691285151+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.691285151+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.691285151+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:06.691285151+00:00 stderr F - { 2025-08-13T20:04:06.691285151+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:06.691285151+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.691285151+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:06.691285151+00:00 stderr F - }, 2025-08-13T20:04:06.691285151+00:00 stderr F + { 2025-08-13T20:04:06.691285151+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:06.691285151+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.691285151+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.689319605 +0000 UTC m=+165.894418939", 2025-08-13T20:04:06.691285151+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.691285151+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:04:06.691285151+00:00 stderr F + }, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F ... // 47 identical elements 2025-08-13T20:04:06.691285151+00:00 stderr F }, 2025-08-13T20:04:06.691285151+00:00 stderr F Version: "", 2025-08-13T20:04:06.691285151+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.691285151+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.691285151+00:00 stderr F } 2025-08-13T20:04:06.692658550+00:00 stderr F E0813 20:04:06.692545 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:35.324389024+00:00 stderr F E0813 20:04:35.323337 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.015702996+00:00 stderr F E0813 20:04:37.015359 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.032889778+00:00 stderr F I0813 20:04:37.032444 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.032889778+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.032889778+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.032889778+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F - { 2025-08-13T20:04:37.032889778+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:37.032889778+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.032889778+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:04:37.032889778+00:00 stderr F - }, 2025-08-13T20:04:37.032889778+00:00 stderr F + { 2025-08-13T20:04:37.032889778+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:37.032889778+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.032889778+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.015561332 +0000 UTC m=+196.220660786", 2025-08-13T20:04:37.032889778+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:04:37.032889778+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:04:37.032889778+00:00 stderr F + }, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.032889778+00:00 stderr F ... // 14 identical elements 2025-08-13T20:04:37.032889778+00:00 stderr F }, 2025-08-13T20:04:37.032889778+00:00 stderr F Version: "", 2025-08-13T20:04:37.032889778+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.032889778+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.032889778+00:00 stderr F } 2025-08-13T20:04:37.040484775+00:00 stderr F E0813 20:04:37.040421 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.127144927+00:00 stderr F E0813 20:04:37.126951 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.130468242+00:00 stderr F I0813 20:04:37.130394 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.130468242+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.130468242+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.130468242+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F - { 2025-08-13T20:04:37.130468242+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:37.130468242+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.130468242+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:37.130468242+00:00 stderr F - }, 2025-08-13T20:04:37.130468242+00:00 stderr F + { 2025-08-13T20:04:37.130468242+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:37.130468242+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.130468242+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.127039864 +0000 UTC m=+196.332139128", 2025-08-13T20:04:37.130468242+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.130468242+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:04:37.130468242+00:00 stderr F + }, 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.130468242+00:00 stderr F ... // 38 identical elements 2025-08-13T20:04:37.130468242+00:00 stderr F }, 2025-08-13T20:04:37.130468242+00:00 stderr F Version: "", 2025-08-13T20:04:37.130468242+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.130468242+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.130468242+00:00 stderr F } 2025-08-13T20:04:37.141613431+00:00 stderr F E0813 20:04:37.141491 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.141957491+00:00 stderr F E0813 20:04:37.141922 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.142348962+00:00 stderr F E0813 20:04:37.142323 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.142905478+00:00 stderr F E0813 20:04:37.142812 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.145037209+00:00 stderr F I0813 20:04:37.144970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.145037209+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.145037209+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:04:37.145037209+00:00 stderr F ... // 19 identical elements 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F - { 2025-08-13T20:04:37.145037209+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:37.145037209+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.145037209+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:37.145037209+00:00 stderr F - }, 2025-08-13T20:04:37.145037209+00:00 stderr F + { 2025-08-13T20:04:37.145037209+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:37.145037209+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.145037209+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.142403284 +0000 UTC m=+196.347502578", 2025-08-13T20:04:37.145037209+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.145037209+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:04:37.145037209+00:00 stderr F + }, 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.145037209+00:00 stderr F ... // 38 identical elements 2025-08-13T20:04:37.145037209+00:00 stderr F }, 2025-08-13T20:04:37.145037209+00:00 stderr F Version: "", 2025-08-13T20:04:37.145037209+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.145037209+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.145037209+00:00 stderr F } 2025-08-13T20:04:37.145271676+00:00 stderr F I0813 20:04:37.145211 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.145271676+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.145271676+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.145271676+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:04:37.145271676+00:00 stderr F - { 2025-08-13T20:04:37.145271676+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:37.145271676+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.145271676+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:04:37.145271676+00:00 stderr F - }, 2025-08-13T20:04:37.145271676+00:00 stderr F + { 2025-08-13T20:04:37.145271676+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:37.145271676+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.145271676+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.142885227 +0000 UTC m=+196.347984652", 2025-08-13T20:04:37.145271676+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.145271676+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:04:37.145271676+00:00 stderr F + }, 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:04:37.145271676+00:00 stderr F ... // 55 identical elements 2025-08-13T20:04:37.145271676+00:00 stderr F }, 2025-08-13T20:04:37.145271676+00:00 stderr F Version: "", 2025-08-13T20:04:37.145271676+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.145271676+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.145271676+00:00 stderr F } 2025-08-13T20:04:37.145656917+00:00 stderr F I0813 20:04:37.145549 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.145656917+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.145656917+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.145656917+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.145656917+00:00 stderr F - { 2025-08-13T20:04:37.145656917+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.145656917+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.145656917+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:37.145656917+00:00 stderr F - }, 2025-08-13T20:04:37.145656917+00:00 stderr F + { 2025-08-13T20:04:37.145656917+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.145656917+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.145656917+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.141552419 +0000 UTC m=+196.346651603", 2025-08-13T20:04:37.145656917+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.145656917+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:04:37.145656917+00:00 stderr F + }, 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.145656917+00:00 stderr F ... // 47 identical elements 2025-08-13T20:04:37.145656917+00:00 stderr F }, 2025-08-13T20:04:37.145656917+00:00 stderr F Version: "", 2025-08-13T20:04:37.145656917+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.145656917+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.145656917+00:00 stderr F } 2025-08-13T20:04:37.148239511+00:00 stderr F E0813 20:04:37.148127 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.148386505+00:00 stderr F E0813 20:04:37.148301 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.174762180+00:00 stderr F E0813 20:04:37.174421 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.174762180+00:00 stderr F E0813 20:04:37.174495 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.222108896+00:00 stderr F I0813 20:04:37.219913 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.222108896+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:04:37.222108896+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.222108896+00:00 stderr F ... // 10 identical elements 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.222108896+00:00 stderr F - { 2025-08-13T20:04:37.222108896+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.222108896+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.222108896+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:37.222108896+00:00 stderr F - }, 2025-08-13T20:04:37.222108896+00:00 stderr F + { 2025-08-13T20:04:37.222108896+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:37.222108896+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.222108896+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.174489833 +0000 UTC m=+196.379589187", 2025-08-13T20:04:37.222108896+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:37.222108896+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:04:37.222108896+00:00 stderr F + }, 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:37.222108896+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:37.222108896+00:00 stderr F }, 2025-08-13T20:04:37.222108896+00:00 stderr F Version: "", 2025-08-13T20:04:37.222108896+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:37.222108896+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:37.222108896+00:00 stderr F } 2025-08-13T20:04:37.240222785+00:00 stderr F E0813 20:04:37.239626 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:40.159436499+00:00 stderr F W0813 20:04:40.159270 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:40.159436499+00:00 stderr F E0813 20:04:40.159417 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.939682414+00:00 stderr F W0813 20:05:15.939006 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.939682414+00:00 stderr F E0813 20:05:15.939606 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:58.894029135+00:00 stderr F I0813 20:05:58.893245 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:00.833503405+00:00 stderr F W0813 20:06:00.833311 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:00.833503405+00:00 stderr F E0813 20:06:00.833411 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:07.070995261+00:00 stderr F I0813 20:06:07.070544 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:09.506747891+00:00 stderr F I0813 20:06:09.505328 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:06:10.008389275+00:00 stderr F I0813 20:06:10.008146 1 
reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:11.858828575+00:00 stderr F I0813 20:06:11.858679 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.574120261+00:00 stderr F I0813 20:06:14.571912 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:16.430994894+00:00 stderr F I0813 20:06:16.428932 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:16.503308295+00:00 stderr F I0813 20:06:16.502856 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.745989123+00:00 stderr F I0813 20:06:19.745756 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:21.584488059+00:00 stderr F I0813 20:06:21.583599 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:24.265860302+00:00 stderr F I0813 20:06:24.265660 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:27.103264534+00:00 stderr F I0813 20:06:27.103083 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:30.102166851+00:00 stderr F I0813 20:06:30.101372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:30.696617163+00:00 stderr F I0813 20:06:30.692061 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:32.583329881+00:00 stderr F I0813 20:06:32.577091 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:34.743077822+00:00 stderr F I0813 20:06:34.741619 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:37.031253456+00:00 stderr F I0813 20:06:37.029264 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:38.867945805+00:00 stderr F I0813 20:06:38.867053 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:06:39.042915812+00:00 stderr F I0813 20:06:39.041057 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:39.826960871+00:00 stderr F I0813 20:06:39.826296 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:40.339728663+00:00 stderr F I0813 20:06:40.339674 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.448014411+00:00 stderr F I0813 20:06:44.446671 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:46.224728850+00:00 stderr F W0813 20:06:46.224243 1 
reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:46.224728850+00:00 stderr F E0813 20:06:46.224304 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:06:48.474750951+00:00 stderr F I0813 20:06:48.471933 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:48.536239163+00:00 stderr F I0813 20:06:48.536187 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:50.189428322+00:00 stderr F I0813 20:06:50.188710 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:51.116409329+00:00 stderr F I0813 20:06:51.116171 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:55.690126201+00:00 stderr F I0813 20:06:55.687737 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:57.820930254+00:00 stderr F I0813 20:06:57.814209 1 reflector.go:351] Caches populated for operators.coreos.com/v1, Resource=olmconfigs from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:07:04.256997701+00:00 stderr F I0813 20:07:04.256271 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.194607533+00:00 stderr F I0813 20:07:05.193920 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:29.199852862+00:00 stderr F W0813 20:07:29.198739 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:29.199852862+00:00 stderr F E0813 20:07:29.199674 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:08:11.536021817+00:00 stderr F I0813 20:08:11.534969 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:08:11.599255470+00:00 stderr F I0813 20:08:11.599129 1 base_controller.go:73] Caches are synced for HealthCheckController 2025-08-13T20:08:11.599615240+00:00 stderr F I0813 20:08:11.599512 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2025-08-13T20:08:11.599682932+00:00 stderr F I0813 20:08:11.599608 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2025-08-13T20:08:11.599682932+00:00 stderr F I0813 20:08:11.599612 1 base_controller.go:73] Caches are synced for ConsoleOperator 2025-08-13T20:08:11.600076293+00:00 stderr F I0813 20:08:11.599243 1 base_controller.go:110] Starting #1 worker of 
HealthCheckController controller ... 2025-08-13T20:08:11.600330041+00:00 stderr F I0813 20:08:11.599912 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:08:11.600393192+00:00 stderr F I0813 20:08:11.600358 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-08-13T20:08:11.601047241+00:00 stderr F I0813 20:08:11.599595 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 2025-08-13T20:08:11.601182305+00:00 stderr F I0813 20:08:11.599678 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2025-08-13T20:08:11.601873235+00:00 stderr F I0813 20:08:11.599678 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 2025-08-13T20:08:11.609975107+00:00 stderr F I0813 20:08:11.609743 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2025-08-13T20:08:11.609975107+00:00 stderr F I0813 20:08:11.609956 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 2025-08-13T20:08:11.611633065+00:00 stderr F I0813 20:08:11.611351 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2025-08-13T20:08:11.647366239+00:00 stderr F I0813 20:08:11.647226 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.647366239+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.647366239+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.647366239+00:00 stderr F ... 
// 7 identical elements 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F - { 2025-08-13T20:08:11.647366239+00:00 stderr F - Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:08:11.647366239+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.647366239+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:07 +0000 UTC", 2025-08-13T20:08:11.647366239+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647366239+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:08:11.647366239+00:00 stderr F - }, 2025-08-13T20:08:11.647366239+00:00 stderr F + { 2025-08-13T20:08:11.647366239+00:00 stderr F + Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:08:11.647366239+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.647366239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.642454678 +0000 UTC m=+410.847553882", 2025-08-13T20:08:11.647366239+00:00 stderr F + }, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F - { 2025-08-13T20:08:11.647366239+00:00 stderr F - Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647366239+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.647366239+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:07 +0000 UTC", 2025-08-13T20:08:11.647366239+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647366239+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:08:11.647366239+00:00 stderr F - }, 2025-08-13T20:08:11.647366239+00:00 stderr F + { 2025-08-13T20:08:11.647366239+00:00 stderr F + Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647366239+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.647366239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.642456158 +0000 UTC m=+410.847555162", 2025-08-13T20:08:11.647366239+00:00 stderr F + }, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:11.647366239+00:00 stderr F ... 
// 48 identical elements 2025-08-13T20:08:11.647366239+00:00 stderr F }, 2025-08-13T20:08:11.647366239+00:00 stderr F Version: "", 2025-08-13T20:08:11.647366239+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:08:11.647366239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.647366239+00:00 stderr F } 2025-08-13T20:08:11.647426461+00:00 stderr F I0813 20:08:11.647372 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.647426461+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.647426461+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.647426461+00:00 stderr F ... // 45 identical elements 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F - { 2025-08-13T20:08:11.647426461+00:00 stderr F - Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.647426461+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.647426461+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.647426461+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647426461+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.647426461+00:00 stderr F - }, 2025-08-13T20:08:11.647426461+00:00 stderr F + { 2025-08-13T20:08:11.647426461+00:00 stderr F + Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.647426461+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.647426461+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.644182268 +0000 UTC m=+410.849281462", 2025-08-13T20:08:11.647426461+00:00 stderr F + }, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F - { 2025-08-13T20:08:11.647426461+00:00 stderr F - Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647426461+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.647426461+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.647426461+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.647426461+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.647426461+00:00 stderr F - }, 2025-08-13T20:08:11.647426461+00:00 stderr F + { 2025-08-13T20:08:11.647426461+00:00 stderr F + Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.647426461+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.647426461+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.644184158 +0000 UTC m=+410.849283172", 2025-08-13T20:08:11.647426461+00:00 stderr F + }, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F {Type: 
"ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.647426461+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:11.647426461+00:00 stderr F }, 2025-08-13T20:08:11.647426461+00:00 stderr F Version: "", 2025-08-13T20:08:11.647426461+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:08:11.647426461+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.647426461+00:00 stderr F } 2025-08-13T20:08:11.684401991+00:00 stderr F I0813 20:08:11.684097 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.684401991+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.684401991+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.684401991+00:00 stderr F ... // 13 identical elements 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F - { 2025-08-13T20:08:11.684401991+00:00 stderr F - Type: "SyncLoopRefreshProgressing", 2025-08-13T20:08:11.684401991+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.684401991+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.684401991+00:00 stderr F - Reason: "InProgress", 2025-08-13T20:08:11.684401991+00:00 stderr F - Message: "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:08:11.684401991+00:00 stderr F - }, 2025-08-13T20:08:11.684401991+00:00 stderr F + { 2025-08-13T20:08:11.684401991+00:00 stderr F + Type: "SyncLoopRefreshProgressing", 2025-08-13T20:08:11.684401991+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.684401991+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.681641202 +0000 UTC m=+410.886740456", 2025-08-13T20:08:11.684401991+00:00 stderr F + }, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F ... 
// 6 identical elements 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "ServiceCASyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "TrustedCASyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F - { 2025-08-13T20:08:11.684401991+00:00 stderr F - Type: "DeploymentAvailable", 2025-08-13T20:08:11.684401991+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.684401991+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.684401991+00:00 stderr F - Reason: "InsufficientReplicas", 2025-08-13T20:08:11.684401991+00:00 stderr F - Message: "0 replicas available for console deployment", 2025-08-13T20:08:11.684401991+00:00 stderr F - }, 2025-08-13T20:08:11.684401991+00:00 stderr F + { 2025-08-13T20:08:11.684401991+00:00 stderr F + Type: "DeploymentAvailable", 2025-08-13T20:08:11.684401991+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.684401991+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.681643252 +0000 UTC m=+410.886742386", 2025-08-13T20:08:11.684401991+00:00 stderr F + }, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "TrustedCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthServingCertValidationProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:11.684401991+00:00 stderr F ... // 33 identical elements 2025-08-13T20:08:11.684401991+00:00 stderr F }, 2025-08-13T20:08:11.684401991+00:00 stderr F Version: "", 2025-08-13T20:08:11.684401991+00:00 stderr F - ReadyReplicas: 0, 2025-08-13T20:08:11.684401991+00:00 stderr F + ReadyReplicas: 1, 2025-08-13T20:08:11.684401991+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.684401991+00:00 stderr F } 2025-08-13T20:08:11.726243231+00:00 stderr F I0813 20:08:11.724414 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.738741229+00:00 stderr F I0813 20:08:11.737607 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.738741229+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.738741229+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.738741229+00:00 stderr F ... // 45 identical elements 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F - { 2025-08-13T20:08:11.738741229+00:00 stderr F - Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.738741229+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.738741229+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.738741229+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.738741229+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.738741229+00:00 stderr F - }, 2025-08-13T20:08:11.738741229+00:00 stderr F + { 2025-08-13T20:08:11.738741229+00:00 stderr F + Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.738741229+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.738741229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.705518836 +0000 UTC m=+410.910617990", 2025-08-13T20:08:11.738741229+00:00 stderr F + }, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F - { 2025-08-13T20:08:11.738741229+00:00 stderr F - Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.738741229+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.738741229+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.738741229+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.738741229+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.738741229+00:00 stderr F - }, 2025-08-13T20:08:11.738741229+00:00 stderr F + { 2025-08-13T20:08:11.738741229+00:00 stderr F + Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.738741229+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.738741229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.705520676 +0000 UTC m=+410.910619680", 2025-08-13T20:08:11.738741229+00:00 stderr F + }, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:11.738741229+00:00 stderr F }, 2025-08-13T20:08:11.738741229+00:00 stderr F Version: "", 2025-08-13T20:08:11.738741229+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:11.738741229+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.738741229+00:00 stderr F } 2025-08-13T20:08:11.765700862+00:00 stderr F I0813 20:08:11.764955 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" 2025-08-13T20:08:11.787871757+00:00 stderr F I0813 20:08:11.787363 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.798038939+00:00 stderr F I0813 20:08:11.797846 1 helpers.go:201] Operator status 
changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.798038939+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.798038939+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:08:11.798038939+00:00 stderr F - { 2025-08-13T20:08:11.798038939+00:00 stderr F - Type: "RouteHealthDegraded", 2025-08-13T20:08:11.798038939+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.798038939+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.798038939+00:00 stderr F - Reason: "StatusError", 2025-08-13T20:08:11.798038939+00:00 stderr F - Message: "route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'", 2025-08-13T20:08:11.798038939+00:00 stderr F - }, 2025-08-13T20:08:11.798038939+00:00 stderr F + { 2025-08-13T20:08:11.798038939+00:00 stderr F + Type: "RouteHealthDegraded", 2025-08-13T20:08:11.798038939+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.798038939+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.795014552 +0000 UTC m=+411.000113566", 2025-08-13T20:08:11.798038939+00:00 stderr F + }, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:11.798038939+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F - { 2025-08-13T20:08:11.798038939+00:00 stderr F - Type: "RouteHealthAvailable", 2025-08-13T20:08:11.798038939+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.798038939+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.798038939+00:00 stderr F - Reason: "StatusError", 2025-08-13T20:08:11.798038939+00:00 stderr F - Message: "route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'", 2025-08-13T20:08:11.798038939+00:00 stderr F - }, 2025-08-13T20:08:11.798038939+00:00 stderr F + { 2025-08-13T20:08:11.798038939+00:00 stderr F + Type: "RouteHealthAvailable", 2025-08-13T20:08:11.798038939+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.798038939+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.795013432 +0000 UTC m=+411.000112516", 2025-08-13T20:08:11.798038939+00:00 stderr F + }, 2025-08-13T20:08:11.798038939+00:00 stderr F }, 2025-08-13T20:08:11.798038939+00:00 stderr F Version: "", 2025-08-13T20:08:11.798038939+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:11.798038939+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.798038939+00:00 stderr F } 2025-08-13T20:08:11.810129396+00:00 stderr F 
I0813 20:08:11.810025 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" 2025-08-13T20:08:11.817676502+00:00 stderr F I0813 20:08:11.815021 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.829639345+00:00 stderr F E0813 20:08:11.828392 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:08:12.214932842+00:00 stderr F I0813 20:08:12.214692 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:12.240848715+00:00 stderr F I0813 20:08:12.240681 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well"),Upgradeable changed from False to True ("All is well") 2025-08-13T20:08:35.466638407+00:00 stderr F E0813 20:08:35.465262 1 leaderelection.go:332] error 
retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.009647895+00:00 stderr F E0813 20:08:37.009556 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.012931140+00:00 stderr F I0813 20:08:37.012550 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.012931140+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.012931140+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.012931140+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F - { 2025-08-13T20:08:37.012931140+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.012931140+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.012931140+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.012931140+00:00 stderr F - }, 2025-08-13T20:08:37.012931140+00:00 stderr F + { 2025-08-13T20:08:37.012931140+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.012931140+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.012931140+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.009736018 +0000 UTC m=+436.214835452", 2025-08-13T20:08:37.012931140+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.012931140+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.012931140+00:00 stderr F + }, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.012931140+00:00 stderr F }, 2025-08-13T20:08:37.012931140+00:00 stderr F Version: "", 2025-08-13T20:08:37.012931140+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.012931140+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.012931140+00:00 stderr F } 2025-08-13T20:08:37.014415062+00:00 stderr F E0813 20:08:37.014389 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.024056229+00:00 stderr F E0813 20:08:37.024023 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.025952823+00:00 stderr F I0813 20:08:37.025888 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.025952823+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.025952823+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.025952823+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F - { 2025-08-13T20:08:37.025952823+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.025952823+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.025952823+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.025952823+00:00 stderr F - }, 2025-08-13T20:08:37.025952823+00:00 stderr F + { 2025-08-13T20:08:37.025952823+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.025952823+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.025952823+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.02412334 +0000 UTC m=+436.229222504", 2025-08-13T20:08:37.025952823+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.025952823+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.025952823+00:00 stderr F + }, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.025952823+00:00 stderr F }, 2025-08-13T20:08:37.025952823+00:00 stderr F Version: "", 2025-08-13T20:08:37.025952823+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.025952823+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.025952823+00:00 stderr F } 2025-08-13T20:08:37.027692853+00:00 stderr F E0813 20:08:37.027653 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.039249144+00:00 stderr F E0813 20:08:37.039190 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.041532590+00:00 stderr F I0813 20:08:37.041481 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.041532590+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.041532590+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.041532590+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F - { 2025-08-13T20:08:37.041532590+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.041532590+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.041532590+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.041532590+00:00 stderr F - }, 2025-08-13T20:08:37.041532590+00:00 stderr F + { 2025-08-13T20:08:37.041532590+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.041532590+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.041532590+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.039340387 +0000 UTC m=+436.244439561", 2025-08-13T20:08:37.041532590+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.041532590+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.041532590+00:00 stderr F + }, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.041532590+00:00 stderr F }, 2025-08-13T20:08:37.041532590+00:00 stderr F Version: "", 2025-08-13T20:08:37.041532590+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.041532590+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.041532590+00:00 stderr F } 2025-08-13T20:08:37.043141576+00:00 stderr F E0813 20:08:37.043110 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.068983347+00:00 stderr F E0813 20:08:37.068842 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.071530750+00:00 stderr F I0813 20:08:37.071381 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.071530750+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.071530750+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.071530750+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F - { 2025-08-13T20:08:37.071530750+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.071530750+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.071530750+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.071530750+00:00 stderr F - }, 2025-08-13T20:08:37.071530750+00:00 stderr F + { 2025-08-13T20:08:37.071530750+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.071530750+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.071530750+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.068923635 +0000 UTC m=+436.274022979", 2025-08-13T20:08:37.071530750+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.071530750+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.071530750+00:00 stderr F + }, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.071530750+00:00 stderr F }, 2025-08-13T20:08:37.071530750+00:00 stderr F Version: "", 2025-08-13T20:08:37.071530750+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.071530750+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.071530750+00:00 stderr F } 2025-08-13T20:08:37.074563647+00:00 stderr F E0813 20:08:37.074482 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.118723633+00:00 stderr F E0813 20:08:37.118547 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.120936056+00:00 stderr F I0813 20:08:37.120846 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.120936056+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.120936056+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.120936056+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F - { 2025-08-13T20:08:37.120936056+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.120936056+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.120936056+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.120936056+00:00 stderr F - }, 2025-08-13T20:08:37.120936056+00:00 stderr F + { 2025-08-13T20:08:37.120936056+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.120936056+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.120936056+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.118607149 +0000 UTC m=+436.323706353", 2025-08-13T20:08:37.120936056+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.120936056+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.120936056+00:00 stderr F + }, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.120936056+00:00 stderr F }, 2025-08-13T20:08:37.120936056+00:00 stderr F Version: "", 2025-08-13T20:08:37.120936056+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.120936056+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.120936056+00:00 stderr F } 2025-08-13T20:08:37.122380188+00:00 stderr F E0813 20:08:37.122297 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.126693301+00:00 stderr F E0813 20:08:37.126620 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.128860763+00:00 stderr F E0813 20:08:37.128828 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.129044779+00:00 stderr F I0813 20:08:37.128986 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.129044779+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.129044779+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.129044779+00:00 stderr F ... // 2 identical elements 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.129044779+00:00 stderr F - { 2025-08-13T20:08:37.129044779+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.129044779+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.129044779+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.129044779+00:00 stderr F - }, 2025-08-13T20:08:37.129044779+00:00 stderr F + { 2025-08-13T20:08:37.129044779+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.129044779+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.129044779+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.126777714 +0000 UTC m=+436.331918699", 2025-08-13T20:08:37.129044779+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.129044779+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.129044779+00:00 stderr F + }, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.129044779+00:00 stderr F }, 2025-08-13T20:08:37.129044779+00:00 stderr F Version: "", 2025-08-13T20:08:37.129044779+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.129044779+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.129044779+00:00 stderr F } 2025-08-13T20:08:37.129133261+00:00 stderr F E0813 20:08:37.129080 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.130269294+00:00 stderr F E0813 20:08:37.130192 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.131451618+00:00 stderr F I0813 20:08:37.131424 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.131451618+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.131451618+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.131451618+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F - { 2025-08-13T20:08:37.131451618+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.131451618+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.131451618+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.131451618+00:00 stderr F - }, 2025-08-13T20:08:37.131451618+00:00 stderr F + { 2025-08-13T20:08:37.131451618+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.131451618+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.131451618+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.128947986 +0000 UTC m=+436.334047360", 2025-08-13T20:08:37.131451618+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.131451618+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.131451618+00:00 stderr F + }, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.131451618+00:00 stderr F }, 2025-08-13T20:08:37.131451618+00:00 stderr F Version: "", 2025-08-13T20:08:37.131451618+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.131451618+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.131451618+00:00 stderr F } 2025-08-13T20:08:37.132122847+00:00 stderr F E0813 20:08:37.131759 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.132520558+00:00 stderr F E0813 20:08:37.132497 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.134950188+00:00 stderr F E0813 20:08:37.134854 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.135505744+00:00 stderr F I0813 20:08:37.135447 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.135505744+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.135505744+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.135505744+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F - { 2025-08-13T20:08:37.135505744+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.135505744+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.135505744+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.135505744+00:00 stderr F - }, 2025-08-13T20:08:37.135505744+00:00 stderr F + { 2025-08-13T20:08:37.135505744+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.135505744+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.135505744+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.131959352 +0000 UTC m=+436.337599672", 2025-08-13T20:08:37.135505744+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.135505744+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.135505744+00:00 stderr F + }, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.135505744+00:00 stderr F }, 2025-08-13T20:08:37.135505744+00:00 stderr F Version: "", 2025-08-13T20:08:37.135505744+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.135505744+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.135505744+00:00 stderr F } 2025-08-13T20:08:37.136976046+00:00 stderr F E0813 20:08:37.136520 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.136976046+00:00 stderr F I0813 20:08:37.136883 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.136976046+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.136976046+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.136976046+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F - { 2025-08-13T20:08:37.136976046+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.136976046+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.136976046+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.136976046+00:00 stderr F - }, 2025-08-13T20:08:37.136976046+00:00 stderr F + { 2025-08-13T20:08:37.136976046+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.136976046+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.136976046+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.129233434 +0000 UTC m=+436.334332668", 2025-08-13T20:08:37.136976046+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.136976046+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.136976046+00:00 stderr F + }, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.136976046+00:00 stderr F }, 2025-08-13T20:08:37.136976046+00:00 stderr F Version: "", 2025-08-13T20:08:37.136976046+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.136976046+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.136976046+00:00 stderr F } 2025-08-13T20:08:37.137052298+00:00 stderr F I0813 20:08:37.137032 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.137052298+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.137052298+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.137052298+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F - { 2025-08-13T20:08:37.137052298+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.137052298+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.137052298+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.137052298+00:00 stderr F - }, 2025-08-13T20:08:37.137052298+00:00 stderr F + { 2025-08-13T20:08:37.137052298+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.137052298+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.137052298+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.13256414 +0000 UTC m=+436.337663404", 2025-08-13T20:08:37.137052298+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.137052298+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:37.137052298+00:00 stderr F + }, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.137052298+00:00 stderr F }, 2025-08-13T20:08:37.137052298+00:00 stderr F Version: "", 2025-08-13T20:08:37.137052298+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.137052298+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.137052298+00:00 stderr F } 2025-08-13T20:08:37.138667285+00:00 stderr F E0813 20:08:37.138621 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.138667285+00:00 stderr F E0813 20:08:37.138645 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.138865790+00:00 stderr F E0813 20:08:37.138778 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.140730034+00:00 stderr F I0813 20:08:37.140654 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.140730034+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.140730034+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.140730034+00:00 stderr F ... // 2 identical elements 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.140730034+00:00 stderr F - { 2025-08-13T20:08:37.140730034+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.140730034+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.140730034+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.140730034+00:00 stderr F - }, 2025-08-13T20:08:37.140730034+00:00 stderr F + { 2025-08-13T20:08:37.140730034+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.140730034+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.140730034+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.138652834 +0000 UTC m=+436.343751888", 2025-08-13T20:08:37.140730034+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.140730034+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.140730034+00:00 stderr F + }, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.140730034+00:00 stderr F }, 2025-08-13T20:08:37.140730034+00:00 stderr F Version: "", 2025-08-13T20:08:37.140730034+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.140730034+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.140730034+00:00 stderr F } 2025-08-13T20:08:37.141507606+00:00 stderr F E0813 20:08:37.141454 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.145027597+00:00 stderr F E0813 20:08:37.144866 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.147193069+00:00 stderr F I0813 20:08:37.147113 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.147193069+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.147193069+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.147193069+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F - { 2025-08-13T20:08:37.147193069+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.147193069+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.147193069+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.147193069+00:00 stderr F - }, 2025-08-13T20:08:37.147193069+00:00 stderr F + { 2025-08-13T20:08:37.147193069+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.147193069+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.147193069+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.141502156 +0000 UTC m=+436.346601450", 2025-08-13T20:08:37.147193069+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.147193069+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.147193069+00:00 stderr F + }, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.147193069+00:00 stderr F }, 2025-08-13T20:08:37.147193069+00:00 stderr F Version: "", 2025-08-13T20:08:37.147193069+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.147193069+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.147193069+00:00 stderr F } 2025-08-13T20:08:37.147287632+00:00 stderr F E0813 20:08:37.147229 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.147414245+00:00 stderr F E0813 20:08:37.147333 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.149523346+00:00 stderr F I0813 20:08:37.149277 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.149523346+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.149523346+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.149523346+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F - { 2025-08-13T20:08:37.149523346+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149523346+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.149523346+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.149523346+00:00 stderr F - }, 2025-08-13T20:08:37.149523346+00:00 stderr F + { 2025-08-13T20:08:37.149523346+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149523346+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.149523346+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.144932974 +0000 UTC m=+436.350032228", 2025-08-13T20:08:37.149523346+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.149523346+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.149523346+00:00 stderr F + }, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.149523346+00:00 stderr F }, 2025-08-13T20:08:37.149523346+00:00 stderr F Version: "", 2025-08-13T20:08:37.149523346+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.149523346+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.149523346+00:00 stderr F } 2025-08-13T20:08:37.149849525+00:00 stderr F I0813 20:08:37.149707 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.149849525+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.149849525+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.149849525+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F - { 2025-08-13T20:08:37.149849525+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149849525+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.149849525+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.149849525+00:00 stderr F - }, 2025-08-13T20:08:37.149849525+00:00 stderr F + { 2025-08-13T20:08:37.149849525+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149849525+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.149849525+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.147289752 +0000 UTC m=+436.352388986", 2025-08-13T20:08:37.149849525+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.149849525+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.149849525+00:00 stderr F + }, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:37.149849525+00:00 stderr F }, 2025-08-13T20:08:37.149849525+00:00 stderr F Version: "", 2025-08-13T20:08:37.149849525+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.149849525+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.149849525+00:00 stderr F } 2025-08-13T20:08:37.150974307+00:00 stderr F I0813 20:08:37.150709 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.150974307+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.150974307+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.150974307+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F - { 2025-08-13T20:08:37.150974307+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.150974307+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.150974307+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.150974307+00:00 stderr F - }, 2025-08-13T20:08:37.150974307+00:00 stderr F + { 2025-08-13T20:08:37.150974307+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.150974307+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.150974307+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.147427256 +0000 UTC m=+436.352526510", 2025-08-13T20:08:37.150974307+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.150974307+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:37.150974307+00:00 stderr F + }, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:37.150974307+00:00 stderr F }, 2025-08-13T20:08:37.150974307+00:00 stderr F Version: "", 2025-08-13T20:08:37.150974307+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.150974307+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.150974307+00:00 stderr F } 2025-08-13T20:08:37.205003486+00:00 stderr F E0813 20:08:37.204868 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.217414432+00:00 stderr F I0813 20:08:37.214036 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.217414432+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.217414432+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.217414432+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F - { 2025-08-13T20:08:37.217414432+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.217414432+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.217414432+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.217414432+00:00 stderr F - }, 2025-08-13T20:08:37.217414432+00:00 stderr F + { 2025-08-13T20:08:37.217414432+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.217414432+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.217414432+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.204941165 +0000 UTC m=+436.410040379", 2025-08-13T20:08:37.217414432+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.217414432+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.217414432+00:00 stderr F + }, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:37.217414432+00:00 stderr F }, 2025-08-13T20:08:37.217414432+00:00 stderr F Version: "", 2025-08-13T20:08:37.217414432+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.217414432+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.217414432+00:00 stderr F } 2025-08-13T20:08:37.224473165+00:00 stderr F E0813 20:08:37.224400 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.241217205+00:00 stderr F E0813 20:08:37.241152 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.243916362+00:00 stderr F I0813 20:08:37.243773 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.243916362+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.243916362+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.243916362+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.243916362+00:00 stderr F - { 2025-08-13T20:08:37.243916362+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.243916362+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.243916362+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.243916362+00:00 stderr F - }, 2025-08-13T20:08:37.243916362+00:00 stderr F + { 2025-08-13T20:08:37.243916362+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.243916362+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.243916362+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.241369539 +0000 UTC m=+436.446468853", 2025-08-13T20:08:37.243916362+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.243916362+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.243916362+00:00 stderr F + }, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:37.243916362+00:00 stderr F }, 2025-08-13T20:08:37.243916362+00:00 stderr F Version: "", 2025-08-13T20:08:37.243916362+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.243916362+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.243916362+00:00 stderr F } 2025-08-13T20:08:37.416690046+00:00 stderr F E0813 20:08:37.416629 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.431698516+00:00 stderr F E0813 20:08:37.431585 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.434256229+00:00 stderr F I0813 20:08:37.434128 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.434256229+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.434256229+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.434256229+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F - { 2025-08-13T20:08:37.434256229+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.434256229+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.434256229+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.434256229+00:00 stderr F - }, 2025-08-13T20:08:37.434256229+00:00 stderr F + { 2025-08-13T20:08:37.434256229+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.434256229+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.434256229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.431642004 +0000 UTC m=+436.636741178", 2025-08-13T20:08:37.434256229+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.434256229+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.434256229+00:00 stderr F + }, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:37.434256229+00:00 stderr F }, 2025-08-13T20:08:37.434256229+00:00 stderr F Version: "", 2025-08-13T20:08:37.434256229+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.434256229+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.434256229+00:00 stderr F } 2025-08-13T20:08:37.615607029+00:00 stderr F E0813 20:08:37.615445 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.629245430+00:00 stderr F E0813 20:08:37.629132 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.632642387+00:00 stderr F I0813 20:08:37.632497 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.632642387+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.632642387+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.632642387+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F - { 2025-08-13T20:08:37.632642387+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.632642387+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.632642387+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.632642387+00:00 stderr F - }, 2025-08-13T20:08:37.632642387+00:00 stderr F + { 2025-08-13T20:08:37.632642387+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.632642387+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.632642387+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.629412355 +0000 UTC m=+436.834511619", 2025-08-13T20:08:37.632642387+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.632642387+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.632642387+00:00 stderr F + }, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:37.632642387+00:00 stderr F }, 2025-08-13T20:08:37.632642387+00:00 stderr F Version: "", 2025-08-13T20:08:37.632642387+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.632642387+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.632642387+00:00 stderr F } 2025-08-13T20:08:37.817262301+00:00 stderr F E0813 20:08:37.816548 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.832922820+00:00 stderr F E0813 20:08:37.832497 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.838182340+00:00 stderr F I0813 20:08:37.835045 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.838182340+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.838182340+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.838182340+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F - { 2025-08-13T20:08:37.838182340+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.838182340+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.838182340+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.838182340+00:00 stderr F - }, 2025-08-13T20:08:37.838182340+00:00 stderr F + { 2025-08-13T20:08:37.838182340+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.838182340+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.838182340+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.83257012 +0000 UTC m=+437.037669464", 2025-08-13T20:08:37.838182340+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.838182340+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.838182340+00:00 stderr F + }, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:37.838182340+00:00 stderr F }, 2025-08-13T20:08:37.838182340+00:00 stderr F Version: "", 2025-08-13T20:08:37.838182340+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.838182340+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.838182340+00:00 stderr F } 2025-08-13T20:08:38.017333607+00:00 stderr F E0813 20:08:38.016268 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.028539988+00:00 stderr F E0813 20:08:38.028429 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.030472094+00:00 stderr F I0813 20:08:38.030194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.030472094+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.030472094+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.030472094+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F - { 2025-08-13T20:08:38.030472094+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.030472094+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.030472094+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:38.030472094+00:00 stderr F - }, 2025-08-13T20:08:38.030472094+00:00 stderr F + { 2025-08-13T20:08:38.030472094+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.030472094+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.030472094+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.028485107 +0000 UTC m=+437.233584351", 2025-08-13T20:08:38.030472094+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.030472094+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:38.030472094+00:00 stderr F + }, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:38.030472094+00:00 stderr F }, 2025-08-13T20:08:38.030472094+00:00 stderr F Version: "", 2025-08-13T20:08:38.030472094+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.030472094+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.030472094+00:00 stderr F } 2025-08-13T20:08:38.214811839+00:00 stderr F E0813 20:08:38.214667 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.377653358+00:00 stderr F E0813 20:08:38.377601 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.379842010+00:00 stderr F I0813 20:08:38.379755 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.379842010+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.379842010+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.379842010+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F - { 2025-08-13T20:08:38.379842010+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:38.379842010+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.379842010+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:38.379842010+00:00 stderr F - }, 2025-08-13T20:08:38.379842010+00:00 stderr F + { 2025-08-13T20:08:38.379842010+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:38.379842010+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.379842010+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.37774217 +0000 UTC m=+437.582841404", 2025-08-13T20:08:38.379842010+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:38.379842010+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:38.379842010+00:00 stderr F + }, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:38.379842010+00:00 stderr F }, 2025-08-13T20:08:38.379842010+00:00 stderr F Version: "", 2025-08-13T20:08:38.379842010+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.379842010+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.379842010+00:00 stderr F } 2025-08-13T20:08:38.420703232+00:00 stderr F I0813 20:08:38.420561 1 request.go:697] Waited for 1.16981886s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:38.422954186+00:00 stderr F E0813 20:08:38.422867 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.446282985+00:00 stderr F E0813 20:08:38.446243 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.448131668+00:00 stderr F I0813 20:08:38.448104 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.448131668+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.448131668+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.448131668+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:38.448131668+00:00 stderr F - { 2025-08-13T20:08:38.448131668+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:38.448131668+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.448131668+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:38.448131668+00:00 stderr F - }, 2025-08-13T20:08:38.448131668+00:00 stderr F + { 2025-08-13T20:08:38.448131668+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:38.448131668+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.448131668+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.446349037 +0000 UTC m=+437.651448241", 2025-08-13T20:08:38.448131668+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.448131668+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:38.448131668+00:00 stderr F + }, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:38.448131668+00:00 stderr F }, 2025-08-13T20:08:38.448131668+00:00 stderr F Version: "", 2025-08-13T20:08:38.448131668+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.448131668+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.448131668+00:00 stderr F } 2025-08-13T20:08:38.616531016+00:00 stderr F E0813 20:08:38.616131 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.641005418+00:00 stderr F E0813 20:08:38.640926 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.642727708+00:00 stderr F I0813 20:08:38.642666 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.642727708+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.642727708+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.642727708+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F - { 2025-08-13T20:08:38.642727708+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.642727708+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.642727708+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:38.642727708+00:00 stderr F - }, 2025-08-13T20:08:38.642727708+00:00 stderr F + { 2025-08-13T20:08:38.642727708+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.642727708+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.642727708+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.640990858 +0000 UTC m=+437.846090092", 2025-08-13T20:08:38.642727708+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.642727708+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:38.642727708+00:00 stderr F + }, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:38.642727708+00:00 stderr F }, 2025-08-13T20:08:38.642727708+00:00 stderr F Version: "", 2025-08-13T20:08:38.642727708+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.642727708+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.642727708+00:00 stderr F } 2025-08-13T20:08:38.815355887+00:00 stderr F E0813 20:08:38.815104 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.838994335+00:00 stderr F E0813 20:08:38.838944 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.841204708+00:00 stderr F I0813 20:08:38.841152 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.841204708+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.841204708+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.841204708+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F - { 2025-08-13T20:08:38.841204708+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:38.841204708+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.841204708+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:38.841204708+00:00 stderr F - }, 2025-08-13T20:08:38.841204708+00:00 stderr F + { 2025-08-13T20:08:38.841204708+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:38.841204708+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.841204708+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.839076657 +0000 UTC m=+438.044175821", 2025-08-13T20:08:38.841204708+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.841204708+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:38.841204708+00:00 stderr F + }, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:38.841204708+00:00 stderr F }, 2025-08-13T20:08:38.841204708+00:00 stderr F Version: "", 2025-08-13T20:08:38.841204708+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.841204708+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.841204708+00:00 stderr F } 2025-08-13T20:08:39.020182970+00:00 stderr F E0813 20:08:39.019603 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.043126507+00:00 stderr F E0813 20:08:39.043075 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.044739084+00:00 stderr F I0813 20:08:39.044699 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.044739084+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.044739084+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.044739084+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F - { 2025-08-13T20:08:39.044739084+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:39.044739084+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.044739084+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:39.044739084+00:00 stderr F - }, 2025-08-13T20:08:39.044739084+00:00 stderr F + { 2025-08-13T20:08:39.044739084+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:39.044739084+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.044739084+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.04320704 +0000 UTC m=+438.248306194", 2025-08-13T20:08:39.044739084+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.044739084+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:39.044739084+00:00 stderr F + }, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:39.044739084+00:00 stderr F }, 2025-08-13T20:08:39.044739084+00:00 stderr F Version: "", 2025-08-13T20:08:39.044739084+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.044739084+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.044739084+00:00 stderr F } 2025-08-13T20:08:39.216042325+00:00 stderr F E0813 20:08:39.215998 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.240292330+00:00 stderr F E0813 20:08:39.238950 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.241651919+00:00 stderr F I0813 20:08:39.241606 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.241651919+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.241651919+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.241651919+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F - { 2025-08-13T20:08:39.241651919+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.241651919+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.241651919+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:39.241651919+00:00 stderr F - }, 2025-08-13T20:08:39.241651919+00:00 stderr F + { 2025-08-13T20:08:39.241651919+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.241651919+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.241651919+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.239087206 +0000 UTC m=+438.444186720", 2025-08-13T20:08:39.241651919+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.241651919+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:39.241651919+00:00 stderr F + }, 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.241651919+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:39.241651919+00:00 stderr F }, 2025-08-13T20:08:39.241651919+00:00 stderr F Version: "", 2025-08-13T20:08:39.241651919+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.241651919+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.241651919+00:00 stderr F } 2025-08-13T20:08:39.415781821+00:00 stderr F E0813 20:08:39.415305 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.614970782+00:00 stderr F I0813 20:08:39.613913 1 request.go:697] Waited for 1.165510665s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:39.615019093+00:00 stderr F E0813 20:08:39.614983 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.666494069+00:00 stderr F E0813 20:08:39.666435 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.672193102+00:00 stderr F I0813 20:08:39.671638 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.672193102+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.672193102+00:00 stderr F 
Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.672193102+00:00 stderr F ... // 2 identical elements 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:39.672193102+00:00 stderr F - { 2025-08-13T20:08:39.672193102+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:39.672193102+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.672193102+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:39.672193102+00:00 stderr F - }, 2025-08-13T20:08:39.672193102+00:00 stderr F + { 2025-08-13T20:08:39.672193102+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:39.672193102+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.672193102+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.666922771 +0000 UTC m=+438.872022185", 2025-08-13T20:08:39.672193102+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.672193102+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:39.672193102+00:00 stderr F + }, 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:39.672193102+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:39.672193102+00:00 stderr F }, 2025-08-13T20:08:39.672193102+00:00 stderr F Version: "", 2025-08-13T20:08:39.672193102+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.672193102+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.672193102+00:00 stderr F } 2025-08-13T20:08:39.738606097+00:00 stderr F E0813 20:08:39.738379 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.740856061+00:00 stderr F I0813 20:08:39.740551 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.740856061+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.740856061+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.740856061+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F - { 2025-08-13T20:08:39.740856061+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:39.740856061+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.740856061+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:39.740856061+00:00 stderr F - }, 2025-08-13T20:08:39.740856061+00:00 stderr F + { 2025-08-13T20:08:39.740856061+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:39.740856061+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.740856061+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.738432722 +0000 UTC m=+438.943531886", 2025-08-13T20:08:39.740856061+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:39.740856061+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:39.740856061+00:00 stderr F + }, 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:39.740856061+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:39.740856061+00:00 stderr F }, 2025-08-13T20:08:39.740856061+00:00 stderr F Version: "", 2025-08-13T20:08:39.740856061+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.740856061+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.740856061+00:00 stderr F } 2025-08-13T20:08:39.815139801+00:00 stderr F E0813 20:08:39.815043 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.861647434+00:00 stderr F E0813 20:08:39.861102 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.863310732+00:00 stderr F I0813 20:08:39.863196 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.863310732+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.863310732+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.863310732+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F - { 2025-08-13T20:08:39.863310732+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.863310732+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.863310732+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:39.863310732+00:00 stderr F - }, 2025-08-13T20:08:39.863310732+00:00 stderr F + { 2025-08-13T20:08:39.863310732+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:39.863310732+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.863310732+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.861184181 +0000 UTC m=+439.066283455", 2025-08-13T20:08:39.863310732+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.863310732+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:39.863310732+00:00 stderr F + }, 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.863310732+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:39.863310732+00:00 stderr F }, 2025-08-13T20:08:39.863310732+00:00 stderr F Version: "", 2025-08-13T20:08:39.863310732+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.863310732+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.863310732+00:00 stderr F } 2025-08-13T20:08:40.023122134+00:00 stderr F E0813 20:08:40.022978 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.068376981+00:00 stderr F E0813 20:08:40.068301 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.070879393+00:00 stderr F I0813 20:08:40.070814 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.070879393+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.070879393+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.070879393+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F - { 2025-08-13T20:08:40.070879393+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:40.070879393+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.070879393+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:40.070879393+00:00 stderr F - }, 2025-08-13T20:08:40.070879393+00:00 stderr F + { 2025-08-13T20:08:40.070879393+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:40.070879393+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.070879393+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.068369611 +0000 UTC m=+439.273468915", 2025-08-13T20:08:40.070879393+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.070879393+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:40.070879393+00:00 stderr F + }, 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:40.070879393+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:40.070879393+00:00 stderr F }, 2025-08-13T20:08:40.070879393+00:00 stderr F Version: "", 2025-08-13T20:08:40.070879393+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.070879393+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.070879393+00:00 stderr F } 2025-08-13T20:08:40.218648330+00:00 stderr F E0813 20:08:40.218518 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.262499887+00:00 stderr F E0813 20:08:40.262217 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.270760884+00:00 stderr F I0813 20:08:40.267420 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.270760884+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.270760884+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.270760884+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F - { 2025-08-13T20:08:40.270760884+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:40.270760884+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.270760884+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:40.270760884+00:00 stderr F - }, 2025-08-13T20:08:40.270760884+00:00 stderr F + { 2025-08-13T20:08:40.270760884+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:40.270760884+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.270760884+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.262340883 +0000 UTC m=+439.467440147", 2025-08-13T20:08:40.270760884+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.270760884+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:40.270760884+00:00 stderr F + }, 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:40.270760884+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:40.270760884+00:00 stderr F }, 2025-08-13T20:08:40.270760884+00:00 stderr F Version: "", 2025-08-13T20:08:40.270760884+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.270760884+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.270760884+00:00 stderr F } 2025-08-13T20:08:40.416578585+00:00 stderr F E0813 20:08:40.416357 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.477135771+00:00 stderr F E0813 20:08:40.476206 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.519601119+00:00 stderr F I0813 20:08:40.518976 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.519601119+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.519601119+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.519601119+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F - { 2025-08-13T20:08:40.519601119+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:40.519601119+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.519601119+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:40.519601119+00:00 stderr F - }, 2025-08-13T20:08:40.519601119+00:00 stderr F + { 2025-08-13T20:08:40.519601119+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:40.519601119+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.519601119+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.476273256 +0000 UTC m=+439.681372500", 2025-08-13T20:08:40.519601119+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.519601119+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:40.519601119+00:00 stderr F + }, 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:40.519601119+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:40.519601119+00:00 stderr F }, 2025-08-13T20:08:40.519601119+00:00 stderr F Version: "", 2025-08-13T20:08:40.519601119+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.519601119+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.519601119+00:00 stderr F } 2025-08-13T20:08:40.622538090+00:00 stderr F E0813 20:08:40.622405 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.705527799+00:00 stderr F E0813 20:08:40.705212 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.707585248+00:00 stderr F I0813 20:08:40.707517 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:40.707585248+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:40.707585248+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:40.707585248+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:40.707585248+00:00 stderr F - { 2025-08-13T20:08:40.707585248+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:40.707585248+00:00 stderr F - Status: "False", 2025-08-13T20:08:40.707585248+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:40.707585248+00:00 stderr F - }, 2025-08-13T20:08:40.707585248+00:00 stderr F + { 2025-08-13T20:08:40.707585248+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:40.707585248+00:00 stderr F + Status: "True", 2025-08-13T20:08:40.707585248+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.705510879 +0000 UTC m=+439.910610083", 2025-08-13T20:08:40.707585248+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:40.707585248+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:40.707585248+00:00 stderr F + }, 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:40.707585248+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:40.707585248+00:00 stderr F }, 2025-08-13T20:08:40.707585248+00:00 stderr F Version: "", 2025-08-13T20:08:40.707585248+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:40.707585248+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:40.707585248+00:00 stderr F } 2025-08-13T20:08:40.814590936+00:00 stderr F I0813 20:08:40.814038 1 request.go:697] Waited for 1.073262681s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:40.830016859+00:00 stderr F E0813 20:08:40.826096 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.020581012+00:00 stderr F E0813 20:08:41.020441 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.106359572+00:00 stderr F E0813 20:08:41.106226 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.108849493+00:00 stderr F I0813 20:08:41.108071 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.108849493+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:08:41.108849493+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.108849493+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F - { 2025-08-13T20:08:41.108849493+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.108849493+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.108849493+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:41.108849493+00:00 stderr F - }, 2025-08-13T20:08:41.108849493+00:00 stderr F + { 2025-08-13T20:08:41.108849493+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.108849493+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.108849493+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.10630736 +0000 UTC m=+440.311406594", 2025-08-13T20:08:41.108849493+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.108849493+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:41.108849493+00:00 stderr F + }, 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.108849493+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:41.108849493+00:00 stderr F }, 2025-08-13T20:08:41.108849493+00:00 stderr F Version: "", 2025-08-13T20:08:41.108849493+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.108849493+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.108849493+00:00 stderr F } 2025-08-13T20:08:41.220472763+00:00 stderr F E0813 20:08:41.219218 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.312987176+00:00 stderr F E0813 20:08:41.310744 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.317692401+00:00 stderr F I0813 20:08:41.317583 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.317692401+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.317692401+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.317692401+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F - { 2025-08-13T20:08:41.317692401+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:41.317692401+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.317692401+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:41.317692401+00:00 stderr F - }, 2025-08-13T20:08:41.317692401+00:00 stderr F + { 2025-08-13T20:08:41.317692401+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:41.317692401+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.317692401+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.310887966 +0000 UTC m=+440.516016311", 2025-08-13T20:08:41.317692401+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.317692401+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:41.317692401+00:00 stderr F + }, 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.317692401+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:41.317692401+00:00 stderr F }, 2025-08-13T20:08:41.317692401+00:00 stderr F Version: "", 2025-08-13T20:08:41.317692401+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.317692401+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.317692401+00:00 stderr F } 2025-08-13T20:08:41.415911757+00:00 stderr F E0813 20:08:41.415712 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.470984576+00:00 stderr F E0813 20:08:41.470499 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.474142146+00:00 stderr F I0813 20:08:41.474075 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.474142146+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.474142146+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.474142146+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F - { 2025-08-13T20:08:41.474142146+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:41.474142146+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.474142146+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:41.474142146+00:00 stderr F - }, 2025-08-13T20:08:41.474142146+00:00 stderr F + { 2025-08-13T20:08:41.474142146+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:41.474142146+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.474142146+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.470928164 +0000 UTC m=+440.676027688", 2025-08-13T20:08:41.474142146+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:41.474142146+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:41.474142146+00:00 stderr F + }, 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:41.474142146+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:41.474142146+00:00 stderr F }, 2025-08-13T20:08:41.474142146+00:00 stderr F Version: "", 2025-08-13T20:08:41.474142146+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.474142146+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.474142146+00:00 stderr F } 2025-08-13T20:08:41.499540955+00:00 stderr F E0813 20:08:41.499430 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.501244953+00:00 stderr F I0813 20:08:41.501116 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.501244953+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.501244953+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.501244953+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F - { 2025-08-13T20:08:41.501244953+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:41.501244953+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.501244953+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:41.501244953+00:00 stderr F - }, 2025-08-13T20:08:41.501244953+00:00 stderr F + { 2025-08-13T20:08:41.501244953+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:41.501244953+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.501244953+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.499509634 +0000 UTC m=+440.704608858", 2025-08-13T20:08:41.501244953+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.501244953+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:41.501244953+00:00 stderr F + }, 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.501244953+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:41.501244953+00:00 stderr F }, 2025-08-13T20:08:41.501244953+00:00 stderr F Version: "", 2025-08-13T20:08:41.501244953+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.501244953+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.501244953+00:00 stderr F } 2025-08-13T20:08:41.617100685+00:00 stderr F E0813 20:08:41.617046 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.702115823+00:00 stderr F E0813 20:08:41.700869 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.703451601+00:00 stderr F I0813 20:08:41.703292 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.703451601+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.703451601+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.703451601+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F - { 2025-08-13T20:08:41.703451601+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.703451601+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.703451601+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:41.703451601+00:00 stderr F - }, 2025-08-13T20:08:41.703451601+00:00 stderr F + { 2025-08-13T20:08:41.703451601+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:41.703451601+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.703451601+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.700951129 +0000 UTC m=+440.906050523", 2025-08-13T20:08:41.703451601+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.703451601+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:41.703451601+00:00 stderr F + }, 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:41.703451601+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:41.703451601+00:00 stderr F }, 2025-08-13T20:08:41.703451601+00:00 stderr F Version: "", 2025-08-13T20:08:41.703451601+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.703451601+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.703451601+00:00 stderr F } 2025-08-13T20:08:41.815672598+00:00 stderr F E0813 20:08:41.815214 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.988818273+00:00 stderr F E0813 20:08:41.988457 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.998696056+00:00 stderr F I0813 20:08:41.990208 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:41.998696056+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:41.998696056+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:41.998696056+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:41.998696056+00:00 stderr F - { 2025-08-13T20:08:41.998696056+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:41.998696056+00:00 stderr F - Status: "False", 2025-08-13T20:08:41.998696056+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:41.998696056+00:00 stderr F - }, 2025-08-13T20:08:41.998696056+00:00 stderr F + { 2025-08-13T20:08:41.998696056+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:41.998696056+00:00 stderr F + Status: "True", 2025-08-13T20:08:41.998696056+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.988499663 +0000 UTC m=+441.193598837", 2025-08-13T20:08:41.998696056+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:41.998696056+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:41.998696056+00:00 stderr F + }, 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:41.998696056+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:41.998696056+00:00 stderr F }, 2025-08-13T20:08:41.998696056+00:00 stderr F Version: "", 2025-08-13T20:08:41.998696056+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:41.998696056+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:41.998696056+00:00 stderr F } 2025-08-13T20:08:42.021301974+00:00 stderr F E0813 20:08:42.018061 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.186702406+00:00 stderr F E0813 20:08:42.186597 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.191558915+00:00 stderr F I0813 20:08:42.190611 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.191558915+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.191558915+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.191558915+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F - { 2025-08-13T20:08:42.191558915+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.191558915+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.191558915+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:42.191558915+00:00 stderr F - }, 2025-08-13T20:08:42.191558915+00:00 stderr F + { 2025-08-13T20:08:42.191558915+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.191558915+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.191558915+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.186660165 +0000 UTC m=+441.391759379", 2025-08-13T20:08:42.191558915+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.191558915+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:42.191558915+00:00 stderr F + }, 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.191558915+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:42.191558915+00:00 stderr F }, 2025-08-13T20:08:42.191558915+00:00 stderr F Version: "", 2025-08-13T20:08:42.191558915+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.191558915+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.191558915+00:00 stderr F } 2025-08-13T20:08:42.217340755+00:00 stderr F E0813 20:08:42.216746 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.387145983+00:00 stderr F E0813 20:08:42.386163 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.389334026+00:00 stderr F I0813 20:08:42.387949 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.389334026+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.389334026+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.389334026+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F - { 2025-08-13T20:08:42.389334026+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:42.389334026+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.389334026+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:42.389334026+00:00 stderr F - }, 2025-08-13T20:08:42.389334026+00:00 stderr F + { 2025-08-13T20:08:42.389334026+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:42.389334026+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.389334026+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.386224797 +0000 UTC m=+441.591324171", 2025-08-13T20:08:42.389334026+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.389334026+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:42.389334026+00:00 stderr F + }, 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.389334026+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:42.389334026+00:00 stderr F }, 2025-08-13T20:08:42.389334026+00:00 stderr F Version: "", 2025-08-13T20:08:42.389334026+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.389334026+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.389334026+00:00 stderr F } 2025-08-13T20:08:42.420863810+00:00 stderr F E0813 20:08:42.418609 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.614367508+00:00 stderr F I0813 20:08:42.613673 1 request.go:697] Waited for 1.112255989s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:42.619195186+00:00 stderr F E0813 20:08:42.616075 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.777729212+00:00 stderr F E0813 20:08:42.777624 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.781402937+00:00 stderr F I0813 20:08:42.781368 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.781402937+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.781402937+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:08:42.781402937+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F - { 2025-08-13T20:08:42.781402937+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:42.781402937+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.781402937+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:42.781402937+00:00 stderr F - }, 2025-08-13T20:08:42.781402937+00:00 stderr F + { 2025-08-13T20:08:42.781402937+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:42.781402937+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.781402937+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.777866465 +0000 UTC m=+441.982965660", 2025-08-13T20:08:42.781402937+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.781402937+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:42.781402937+00:00 stderr F + }, 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.781402937+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:42.781402937+00:00 stderr F }, 2025-08-13T20:08:42.781402937+00:00 stderr F Version: "", 2025-08-13T20:08:42.781402937+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.781402937+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.781402937+00:00 stderr F } 2025-08-13T20:08:42.820097516+00:00 stderr F E0813 20:08:42.817337 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.980056452+00:00 stderr F E0813 20:08:42.979975 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.984561011+00:00 stderr F I0813 20:08:42.984516 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:42.984561011+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:42.984561011+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:42.984561011+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F - { 2025-08-13T20:08:42.984561011+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.984561011+00:00 stderr F - Status: "False", 2025-08-13T20:08:42.984561011+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:42.984561011+00:00 stderr F - }, 2025-08-13T20:08:42.984561011+00:00 stderr F + { 2025-08-13T20:08:42.984561011+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:42.984561011+00:00 stderr F + Status: "True", 2025-08-13T20:08:42.984561011+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.980152504 +0000 UTC m=+442.185251748", 2025-08-13T20:08:42.984561011+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:42.984561011+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:42.984561011+00:00 stderr F + }, 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:42.984561011+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:42.984561011+00:00 stderr F }, 2025-08-13T20:08:42.984561011+00:00 stderr F Version: "", 2025-08-13T20:08:42.984561011+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:42.984561011+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:42.984561011+00:00 stderr F } 2025-08-13T20:08:43.026203515+00:00 stderr F E0813 20:08:43.026107 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.219599940+00:00 stderr F E0813 20:08:43.219361 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.351032518+00:00 stderr F E0813 20:08:43.350753 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.358561944+00:00 stderr F I0813 20:08:43.357865 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.358561944+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.358561944+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.358561944+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:43.358561944+00:00 stderr F - { 2025-08-13T20:08:43.358561944+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:43.358561944+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.358561944+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:43.358561944+00:00 stderr F - }, 2025-08-13T20:08:43.358561944+00:00 stderr F + { 2025-08-13T20:08:43.358561944+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:43.358561944+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.358561944+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.350980336 +0000 UTC m=+442.556079610", 2025-08-13T20:08:43.358561944+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.358561944+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:43.358561944+00:00 stderr F + }, 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:43.358561944+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:43.358561944+00:00 stderr F }, 2025-08-13T20:08:43.358561944+00:00 stderr F Version: "", 2025-08-13T20:08:43.358561944+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.358561944+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.358561944+00:00 stderr F } 2025-08-13T20:08:43.420359366+00:00 stderr F E0813 20:08:43.420292 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.544543156+00:00 stderr F E0813 20:08:43.541670 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.544543156+00:00 stderr F I0813 20:08:43.543481 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.544543156+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.544543156+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.544543156+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F - { 2025-08-13T20:08:43.544543156+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:43.544543156+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.544543156+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:43.544543156+00:00 stderr F - }, 2025-08-13T20:08:43.544543156+00:00 stderr F + { 2025-08-13T20:08:43.544543156+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:43.544543156+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.544543156+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.541760616 +0000 UTC m=+442.746859820", 2025-08-13T20:08:43.544543156+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.544543156+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:43.544543156+00:00 stderr F + }, 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:43.544543156+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:43.544543156+00:00 stderr F }, 2025-08-13T20:08:43.544543156+00:00 stderr F Version: "", 2025-08-13T20:08:43.544543156+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.544543156+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.544543156+00:00 stderr F } 2025-08-13T20:08:43.621723299+00:00 stderr F E0813 20:08:43.617077 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.704040239+00:00 stderr F E0813 20:08:43.700450 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.707824157+00:00 stderr F I0813 20:08:43.707671 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.707824157+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.707824157+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.707824157+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F - { 2025-08-13T20:08:43.707824157+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:43.707824157+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.707824157+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:43.707824157+00:00 stderr F - }, 2025-08-13T20:08:43.707824157+00:00 stderr F + { 2025-08-13T20:08:43.707824157+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:43.707824157+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.707824157+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.700504517 +0000 UTC m=+442.905603752", 2025-08-13T20:08:43.707824157+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:43.707824157+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:43.707824157+00:00 stderr F + }, 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:43.707824157+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:43.707824157+00:00 stderr F }, 2025-08-13T20:08:43.707824157+00:00 stderr F Version: "", 2025-08-13T20:08:43.707824157+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.707824157+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.707824157+00:00 stderr F } 2025-08-13T20:08:43.747565477+00:00 stderr F E0813 20:08:43.747500 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.754464205+00:00 stderr F I0813 20:08:43.749557 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.754464205+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.754464205+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.754464205+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F - { 2025-08-13T20:08:43.754464205+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:43.754464205+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.754464205+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:43.754464205+00:00 stderr F - }, 2025-08-13T20:08:43.754464205+00:00 stderr F + { 2025-08-13T20:08:43.754464205+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:43.754464205+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.754464205+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.74766571 +0000 UTC m=+442.952764914", 2025-08-13T20:08:43.754464205+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.754464205+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:43.754464205+00:00 stderr F + }, 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:43.754464205+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:43.754464205+00:00 stderr F }, 2025-08-13T20:08:43.754464205+00:00 stderr F Version: "", 2025-08-13T20:08:43.754464205+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.754464205+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.754464205+00:00 stderr F } 2025-08-13T20:08:43.817070210+00:00 stderr F E0813 20:08:43.816973 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.954648214+00:00 stderr F E0813 20:08:43.954369 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.958497534+00:00 stderr F I0813 20:08:43.958002 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:43.958497534+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:43.958497534+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:43.958497534+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F - { 2025-08-13T20:08:43.958497534+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:43.958497534+00:00 stderr F - Status: "False", 2025-08-13T20:08:43.958497534+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:43.958497534+00:00 stderr F - }, 2025-08-13T20:08:43.958497534+00:00 stderr F + { 2025-08-13T20:08:43.958497534+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:43.958497534+00:00 stderr F + Status: "True", 2025-08-13T20:08:43.958497534+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.954440868 +0000 UTC m=+443.159540072", 2025-08-13T20:08:43.958497534+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:43.958497534+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:43.958497534+00:00 stderr F + }, 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:43.958497534+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:43.958497534+00:00 stderr F }, 2025-08-13T20:08:43.958497534+00:00 stderr F Version: "", 2025-08-13T20:08:43.958497534+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:43.958497534+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:43.958497534+00:00 stderr F } 2025-08-13T20:08:44.016321952+00:00 stderr F E0813 20:08:44.015605 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.140135912+00:00 stderr F E0813 20:08:44.140038 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.142288394+00:00 stderr F I0813 20:08:44.142194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:44.142288394+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:44.142288394+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:44.142288394+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F - { 2025-08-13T20:08:44.142288394+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.142288394+00:00 stderr F - Status: "False", 2025-08-13T20:08:44.142288394+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:44.142288394+00:00 stderr F - }, 2025-08-13T20:08:44.142288394+00:00 stderr F + { 2025-08-13T20:08:44.142288394+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.142288394+00:00 stderr F + Status: "True", 2025-08-13T20:08:44.142288394+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.140098821 +0000 UTC m=+443.345198165", 2025-08-13T20:08:44.142288394+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:44.142288394+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:44.142288394+00:00 stderr F + }, 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:44.142288394+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:44.142288394+00:00 stderr F }, 2025-08-13T20:08:44.142288394+00:00 stderr F Version: "", 2025-08-13T20:08:44.142288394+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:44.142288394+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:44.142288394+00:00 stderr F } 2025-08-13T20:08:44.217677275+00:00 stderr F E0813 20:08:44.216206 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.414622152+00:00 stderr F E0813 20:08:44.414500 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.627485295+00:00 stderr F E0813 20:08:44.625858 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.660118291+00:00 stderr F E0813 20:08:44.659512 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.664169017+00:00 stderr F I0813 20:08:44.661294 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:44.664169017+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:44.664169017+00:00 
stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:44.664169017+00:00 stderr F ... // 2 identical elements 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:44.664169017+00:00 stderr F - { 2025-08-13T20:08:44.664169017+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:44.664169017+00:00 stderr F - Status: "False", 2025-08-13T20:08:44.664169017+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:44.664169017+00:00 stderr F - }, 2025-08-13T20:08:44.664169017+00:00 stderr F + { 2025-08-13T20:08:44.664169017+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:44.664169017+00:00 stderr F + Status: "True", 2025-08-13T20:08:44.664169017+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.659587885 +0000 UTC m=+443.864687109", 2025-08-13T20:08:44.664169017+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:44.664169017+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:44.664169017+00:00 stderr F + }, 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:44.664169017+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:44.664169017+00:00 stderr F }, 2025-08-13T20:08:44.664169017+00:00 stderr F Version: "", 2025-08-13T20:08:44.664169017+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:44.664169017+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:44.664169017+00:00 stderr F } 2025-08-13T20:08:44.816568856+00:00 stderr F E0813 20:08:44.816386 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.858538260+00:00 stderr F E0813 20:08:44.858437 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.861991979+00:00 stderr F I0813 20:08:44.860457 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:44.861991979+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:44.861991979+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:44.861991979+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F - { 2025-08-13T20:08:44.861991979+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.861991979+00:00 stderr F - Status: "False", 2025-08-13T20:08:44.861991979+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:44.861991979+00:00 stderr F - }, 2025-08-13T20:08:44.861991979+00:00 stderr F + { 2025-08-13T20:08:44.861991979+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:44.861991979+00:00 stderr F + Status: "True", 2025-08-13T20:08:44.861991979+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.858510859 +0000 UTC m=+444.063610093", 2025-08-13T20:08:44.861991979+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:44.861991979+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:44.861991979+00:00 stderr F + }, 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:44.861991979+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:44.861991979+00:00 stderr F }, 2025-08-13T20:08:44.861991979+00:00 stderr F Version: "", 2025-08-13T20:08:44.861991979+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:44.861991979+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:44.861991979+00:00 stderr F } 2025-08-13T20:08:45.015636224+00:00 stderr F E0813 20:08:45.015347 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.215994528+00:00 stderr F E0813 20:08:45.214927 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.268234216+00:00 stderr F E0813 20:08:45.268094 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.270711397+00:00 stderr F I0813 20:08:45.270626 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:45.270711397+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:45.270711397+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:45.270711397+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F - { 2025-08-13T20:08:45.270711397+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:45.270711397+00:00 stderr F - Status: "False", 2025-08-13T20:08:45.270711397+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:45.270711397+00:00 stderr F - }, 2025-08-13T20:08:45.270711397+00:00 stderr F + { 2025-08-13T20:08:45.270711397+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:45.270711397+00:00 stderr F + Status: "True", 2025-08-13T20:08:45.270711397+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.268154714 +0000 UTC m=+444.473253928", 2025-08-13T20:08:45.270711397+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:45.270711397+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:45.270711397+00:00 stderr F + }, 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:45.270711397+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:45.270711397+00:00 stderr F }, 2025-08-13T20:08:45.270711397+00:00 stderr F Version: "", 2025-08-13T20:08:45.270711397+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:45.270711397+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:45.270711397+00:00 stderr F } 2025-08-13T20:08:45.422093407+00:00 stderr F E0813 20:08:45.414446 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.459687385+00:00 stderr F E0813 20:08:45.459567 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.462676851+00:00 stderr F I0813 20:08:45.462418 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:45.462676851+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:45.462676851+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:45.462676851+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F - { 2025-08-13T20:08:45.462676851+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:45.462676851+00:00 stderr F - Status: "False", 2025-08-13T20:08:45.462676851+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:45.462676851+00:00 stderr F - }, 2025-08-13T20:08:45.462676851+00:00 stderr F + { 2025-08-13T20:08:45.462676851+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:45.462676851+00:00 stderr F + Status: "True", 2025-08-13T20:08:45.462676851+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.459644254 +0000 UTC m=+444.664743638", 2025-08-13T20:08:45.462676851+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:45.462676851+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:45.462676851+00:00 stderr F + }, 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:45.462676851+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:45.462676851+00:00 stderr F }, 2025-08-13T20:08:45.462676851+00:00 stderr F Version: "", 2025-08-13T20:08:45.462676851+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:45.462676851+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:45.462676851+00:00 stderr F } 2025-08-13T20:08:45.618344284+00:00 stderr F E0813 20:08:45.617682 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.657507677+00:00 stderr F E0813 20:08:45.657306 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.663505239+00:00 stderr F I0813 20:08:45.663430 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:45.663505239+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:45.663505239+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:45.663505239+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F - { 2025-08-13T20:08:45.663505239+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:45.663505239+00:00 stderr F - Status: "False", 2025-08-13T20:08:45.663505239+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:45.663505239+00:00 stderr F - }, 2025-08-13T20:08:45.663505239+00:00 stderr F + { 2025-08-13T20:08:45.663505239+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:45.663505239+00:00 stderr F + Status: "True", 2025-08-13T20:08:45.663505239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.657411114 +0000 UTC m=+444.862510488", 2025-08-13T20:08:45.663505239+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:45.663505239+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:45.663505239+00:00 stderr F + }, 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:45.663505239+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:45.663505239+00:00 stderr F }, 2025-08-13T20:08:45.663505239+00:00 stderr F Version: "", 2025-08-13T20:08:45.663505239+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:45.663505239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:45.663505239+00:00 stderr F } 2025-08-13T20:08:45.815375213+00:00 stderr F E0813 20:08:45.815224 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.015615404+00:00 stderr F E0813 20:08:46.015490 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.501005391+00:00 stderr F E0813 20:08:46.500924 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.502990298+00:00 stderr F I0813 20:08:46.502964 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.502990298+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.502990298+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.502990298+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:46.502990298+00:00 stderr F - { 2025-08-13T20:08:46.502990298+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:46.502990298+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.502990298+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:46.502990298+00:00 stderr F - }, 2025-08-13T20:08:46.502990298+00:00 stderr F + { 2025-08-13T20:08:46.502990298+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:46.502990298+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.502990298+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.501100054 +0000 UTC m=+445.706199228", 2025-08-13T20:08:46.502990298+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:46.502990298+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:46.502990298+00:00 stderr F + }, 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:46.502990298+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:46.502990298+00:00 stderr F }, 2025-08-13T20:08:46.502990298+00:00 stderr F Version: "", 2025-08-13T20:08:46.502990298+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.502990298+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.502990298+00:00 stderr F } 2025-08-13T20:08:46.505478419+00:00 stderr F E0813 20:08:46.505412 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.706044639+00:00 stderr F E0813 20:08:46.705954 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.708050136+00:00 stderr F I0813 20:08:46.707877 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.708050136+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.708050136+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.708050136+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F - { 2025-08-13T20:08:46.708050136+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:46.708050136+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.708050136+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:46.708050136+00:00 stderr F - }, 2025-08-13T20:08:46.708050136+00:00 stderr F + { 2025-08-13T20:08:46.708050136+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:46.708050136+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.708050136+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.706148572 +0000 UTC m=+445.911247926", 2025-08-13T20:08:46.708050136+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:46.708050136+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:46.708050136+00:00 stderr F + }, 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:46.708050136+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:46.708050136+00:00 stderr F }, 2025-08-13T20:08:46.708050136+00:00 stderr F Version: "", 2025-08-13T20:08:46.708050136+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.708050136+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.708050136+00:00 stderr F } 2025-08-13T20:08:46.709550969+00:00 stderr F E0813 20:08:46.709501 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.901179643+00:00 stderr F E0813 20:08:46.900695 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.906398613+00:00 stderr F I0813 20:08:46.906310 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.906398613+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.906398613+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.906398613+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F - { 2025-08-13T20:08:46.906398613+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:46.906398613+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.906398613+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:46.906398613+00:00 stderr F - }, 2025-08-13T20:08:46.906398613+00:00 stderr F + { 2025-08-13T20:08:46.906398613+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:46.906398613+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.906398613+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.901114832 +0000 UTC m=+446.106214156", 2025-08-13T20:08:46.906398613+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:46.906398613+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:46.906398613+00:00 stderr F + }, 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:46.906398613+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:46.906398613+00:00 stderr F }, 2025-08-13T20:08:46.906398613+00:00 stderr F Version: "", 2025-08-13T20:08:46.906398613+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.906398613+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.906398613+00:00 stderr F } 2025-08-13T20:08:46.907769112+00:00 stderr F E0813 20:08:46.907662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.977342047+00:00 stderr F E0813 20:08:46.977226 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.979552040+00:00 stderr F I0813 20:08:46.979470 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:46.979552040+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:46.979552040+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:46.979552040+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F - { 2025-08-13T20:08:46.979552040+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:46.979552040+00:00 stderr F - Status: "False", 2025-08-13T20:08:46.979552040+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:46.979552040+00:00 stderr F - }, 2025-08-13T20:08:46.979552040+00:00 stderr F + { 2025-08-13T20:08:46.979552040+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:46.979552040+00:00 stderr F + Status: "True", 2025-08-13T20:08:46.979552040+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.977278655 +0000 UTC m=+446.182377829", 2025-08-13T20:08:46.979552040+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:46.979552040+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:46.979552040+00:00 stderr F + }, 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:46.979552040+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:46.979552040+00:00 stderr F }, 2025-08-13T20:08:46.979552040+00:00 stderr F Version: "", 2025-08-13T20:08:46.979552040+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:46.979552040+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:46.979552040+00:00 stderr F } 2025-08-13T20:08:46.983096882+00:00 stderr F E0813 20:08:46.982938 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.098157441+00:00 stderr F E0813 20:08:47.098074 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.102136425+00:00 stderr F I0813 20:08:47.100739 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:47.102136425+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:47.102136425+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:47.102136425+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F - { 2025-08-13T20:08:47.102136425+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:47.102136425+00:00 stderr F - Status: "False", 2025-08-13T20:08:47.102136425+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:47.102136425+00:00 stderr F - }, 2025-08-13T20:08:47.102136425+00:00 stderr F + { 2025-08-13T20:08:47.102136425+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:47.102136425+00:00 stderr F + Status: "True", 2025-08-13T20:08:47.102136425+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:47.098144541 +0000 UTC m=+446.303243835", 2025-08-13T20:08:47.102136425+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:47.102136425+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:47.102136425+00:00 stderr F + }, 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:47.102136425+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:47.102136425+00:00 stderr F }, 2025-08-13T20:08:47.102136425+00:00 stderr F Version: "", 2025-08-13T20:08:47.102136425+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:47.102136425+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:47.102136425+00:00 stderr F } 2025-08-13T20:08:47.105107300+00:00 stderr F E0813 20:08:47.105013 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.299446792+00:00 stderr F E0813 20:08:47.298300 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.314666088+00:00 stderr F I0813 20:08:47.299923 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:47.314666088+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:47.314666088+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:47.314666088+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F - { 2025-08-13T20:08:47.314666088+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:47.314666088+00:00 stderr F - Status: "False", 2025-08-13T20:08:47.314666088+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:47.314666088+00:00 stderr F - }, 2025-08-13T20:08:47.314666088+00:00 stderr F + { 2025-08-13T20:08:47.314666088+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:47.314666088+00:00 stderr F + Status: "True", 2025-08-13T20:08:47.314666088+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:47.298356561 +0000 UTC m=+446.503455745", 2025-08-13T20:08:47.314666088+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:47.314666088+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:47.314666088+00:00 stderr F + }, 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:47.314666088+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:47.314666088+00:00 stderr F }, 2025-08-13T20:08:47.314666088+00:00 stderr F Version: "", 2025-08-13T20:08:47.314666088+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:47.314666088+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:47.314666088+00:00 stderr F } 2025-08-13T20:08:47.314666088+00:00 stderr F E0813 20:08:47.301488 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.073730023+00:00 stderr F E0813 20:08:49.073678 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.078610473+00:00 stderr F I0813 20:08:49.077425 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.078610473+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.078610473+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.078610473+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:49.078610473+00:00 stderr F - { 2025-08-13T20:08:49.078610473+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:49.078610473+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.078610473+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:49.078610473+00:00 stderr F - }, 2025-08-13T20:08:49.078610473+00:00 stderr F + { 2025-08-13T20:08:49.078610473+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:49.078610473+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.078610473+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.073921938 +0000 UTC m=+448.279021332", 2025-08-13T20:08:49.078610473+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.078610473+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:49.078610473+00:00 stderr F + }, 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:49.078610473+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:49.078610473+00:00 stderr F }, 2025-08-13T20:08:49.078610473+00:00 stderr F Version: "", 2025-08-13T20:08:49.078610473+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.078610473+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.078610473+00:00 stderr F } 2025-08-13T20:08:49.082722501+00:00 stderr F E0813 20:08:49.082646 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.272062159+00:00 stderr F E0813 20:08:49.271885 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.275436466+00:00 stderr F I0813 20:08:49.275396 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.275436466+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.275436466+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.275436466+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F - { 2025-08-13T20:08:49.275436466+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.275436466+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.275436466+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:49.275436466+00:00 stderr F - }, 2025-08-13T20:08:49.275436466+00:00 stderr F + { 2025-08-13T20:08:49.275436466+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.275436466+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.275436466+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.271969677 +0000 UTC m=+448.477068891", 2025-08-13T20:08:49.275436466+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.275436466+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:49.275436466+00:00 stderr F + }, 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.275436466+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:49.275436466+00:00 stderr F }, 2025-08-13T20:08:49.275436466+00:00 stderr F Version: "", 2025-08-13T20:08:49.275436466+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.275436466+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.275436466+00:00 stderr F } 2025-08-13T20:08:49.277923097+00:00 stderr F E0813 20:08:49.276996 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.470053486+00:00 stderr F E0813 20:08:49.469879 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.472027282+00:00 stderr F I0813 20:08:49.471955 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.472027282+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.472027282+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.472027282+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F - { 2025-08-13T20:08:49.472027282+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:49.472027282+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.472027282+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:49.472027282+00:00 stderr F - }, 2025-08-13T20:08:49.472027282+00:00 stderr F + { 2025-08-13T20:08:49.472027282+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:49.472027282+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.472027282+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.469974794 +0000 UTC m=+448.675074108", 2025-08-13T20:08:49.472027282+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.472027282+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:49.472027282+00:00 stderr F + }, 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.472027282+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:49.472027282+00:00 stderr F }, 2025-08-13T20:08:49.472027282+00:00 stderr F Version: "", 2025-08-13T20:08:49.472027282+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.472027282+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.472027282+00:00 stderr F } 2025-08-13T20:08:49.473334110+00:00 stderr F E0813 20:08:49.473274 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.667066634+00:00 stderr F E0813 20:08:49.666953 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.672081948+00:00 stderr F I0813 20:08:49.671872 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.672081948+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.672081948+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.672081948+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F - { 2025-08-13T20:08:49.672081948+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:49.672081948+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.672081948+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:49.672081948+00:00 stderr F - }, 2025-08-13T20:08:49.672081948+00:00 stderr F + { 2025-08-13T20:08:49.672081948+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:49.672081948+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.672081948+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.667166027 +0000 UTC m=+448.872265331", 2025-08-13T20:08:49.672081948+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.672081948+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:49.672081948+00:00 stderr F + }, 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.672081948+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:49.672081948+00:00 stderr F }, 2025-08-13T20:08:49.672081948+00:00 stderr F Version: "", 2025-08-13T20:08:49.672081948+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.672081948+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.672081948+00:00 stderr F } 2025-08-13T20:08:49.673729856+00:00 stderr F E0813 20:08:49.673622 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.863610400+00:00 stderr F E0813 20:08:49.863525 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.865616637+00:00 stderr F I0813 20:08:49.865399 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:49.865616637+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:49.865616637+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:49.865616637+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F - { 2025-08-13T20:08:49.865616637+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.865616637+00:00 stderr F - Status: "False", 2025-08-13T20:08:49.865616637+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:49.865616637+00:00 stderr F - }, 2025-08-13T20:08:49.865616637+00:00 stderr F + { 2025-08-13T20:08:49.865616637+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:49.865616637+00:00 stderr F + Status: "True", 2025-08-13T20:08:49.865616637+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.863575439 +0000 UTC m=+449.068674603", 2025-08-13T20:08:49.865616637+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:49.865616637+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:49.865616637+00:00 stderr F + }, 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:49.865616637+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:49.865616637+00:00 stderr F }, 2025-08-13T20:08:49.865616637+00:00 stderr F Version: "", 2025-08-13T20:08:49.865616637+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:49.865616637+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:49.865616637+00:00 stderr F } 2025-08-13T20:08:49.867161861+00:00 stderr F E0813 20:08:49.867084 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.105158806+00:00 stderr F E0813 20:08:52.105106 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.107373990+00:00 stderr F I0813 20:08:52.107315 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:52.107373990+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:52.107373990+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:52.107373990+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F - { 2025-08-13T20:08:52.107373990+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:52.107373990+00:00 stderr F - Status: "False", 2025-08-13T20:08:52.107373990+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:52.107373990+00:00 stderr F - }, 2025-08-13T20:08:52.107373990+00:00 stderr F + { 2025-08-13T20:08:52.107373990+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:52.107373990+00:00 stderr F + Status: "True", 2025-08-13T20:08:52.107373990+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:52.105270139 +0000 UTC m=+451.310369213", 2025-08-13T20:08:52.107373990+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:52.107373990+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:52.107373990+00:00 stderr F + }, 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:52.107373990+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:52.107373990+00:00 stderr F }, 2025-08-13T20:08:52.107373990+00:00 stderr F Version: "", 2025-08-13T20:08:52.107373990+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:52.107373990+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:52.107373990+00:00 stderr F } 2025-08-13T20:08:52.109545982+00:00 stderr F E0813 20:08:52.108731 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.207325077+00:00 stderr F E0813 20:08:54.206946 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.210384064+00:00 stderr F I0813 20:08:54.209528 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.210384064+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.210384064+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.210384064+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:54.210384064+00:00 stderr F - { 2025-08-13T20:08:54.210384064+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:54.210384064+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.210384064+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:54.210384064+00:00 stderr F - }, 2025-08-13T20:08:54.210384064+00:00 stderr F + { 2025-08-13T20:08:54.210384064+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:54.210384064+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.210384064+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.207307786 +0000 UTC m=+453.412407020", 2025-08-13T20:08:54.210384064+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.210384064+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:54.210384064+00:00 stderr F + }, 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:54.210384064+00:00 stderr F ... // 55 identical elements 2025-08-13T20:08:54.210384064+00:00 stderr F }, 2025-08-13T20:08:54.210384064+00:00 stderr F Version: "", 2025-08-13T20:08:54.210384064+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.210384064+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.210384064+00:00 stderr F } 2025-08-13T20:08:54.212992689+00:00 stderr F E0813 20:08:54.211585 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.401921646+00:00 stderr F E0813 20:08:54.399433 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.403230154+00:00 stderr F I0813 20:08:54.401987 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.403230154+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.403230154+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.403230154+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F - { 2025-08-13T20:08:54.403230154+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.403230154+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.403230154+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:54.403230154+00:00 stderr F - }, 2025-08-13T20:08:54.403230154+00:00 stderr F + { 2025-08-13T20:08:54.403230154+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.403230154+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.403230154+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.399525887 +0000 UTC m=+453.604625211", 2025-08-13T20:08:54.403230154+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.403230154+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:54.403230154+00:00 stderr F + }, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:54.403230154+00:00 stderr F }, 2025-08-13T20:08:54.403230154+00:00 stderr F Version: "", 2025-08-13T20:08:54.403230154+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.403230154+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.403230154+00:00 stderr F } 2025-08-13T20:08:54.404868690+00:00 stderr F E0813 20:08:54.404648 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.598589065+00:00 stderr F E0813 20:08:54.598467 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.600546971+00:00 stderr F I0813 20:08:54.600449 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.600546971+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.600546971+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.600546971+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F - { 2025-08-13T20:08:54.600546971+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:54.600546971+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.600546971+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:54.600546971+00:00 stderr F - }, 2025-08-13T20:08:54.600546971+00:00 stderr F + { 2025-08-13T20:08:54.600546971+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:54.600546971+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.600546971+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.598694108 +0000 UTC m=+453.803793192", 2025-08-13T20:08:54.600546971+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.600546971+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:54.600546971+00:00 stderr F + }, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:54.600546971+00:00 stderr F }, 2025-08-13T20:08:54.600546971+00:00 stderr F Version: "", 2025-08-13T20:08:54.600546971+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.600546971+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.600546971+00:00 stderr F } 2025-08-13T20:08:54.606496561+00:00 stderr F E0813 20:08:54.605662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.803139279+00:00 stderr F E0813 20:08:54.803041 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.804947011+00:00 stderr F I0813 20:08:54.804771 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.804947011+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.804947011+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.804947011+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F - { 2025-08-13T20:08:54.804947011+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:54.804947011+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.804947011+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:54.804947011+00:00 stderr F - }, 2025-08-13T20:08:54.804947011+00:00 stderr F + { 2025-08-13T20:08:54.804947011+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:54.804947011+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.804947011+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.803096458 +0000 UTC m=+454.008195642", 2025-08-13T20:08:54.804947011+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.804947011+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:54.804947011+00:00 stderr F + }, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:54.804947011+00:00 stderr F }, 2025-08-13T20:08:54.804947011+00:00 stderr F Version: "", 2025-08-13T20:08:54.804947011+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.804947011+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.804947011+00:00 stderr F } 2025-08-13T20:08:54.812567070+00:00 stderr F E0813 20:08:54.812132 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.994513956+00:00 stderr F E0813 20:08:54.994469 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.998052418+00:00 stderr F I0813 20:08:54.998024 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.998052418+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.998052418+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.998052418+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F - { 2025-08-13T20:08:54.998052418+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.998052418+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.998052418+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:54.998052418+00:00 stderr F - }, 2025-08-13T20:08:54.998052418+00:00 stderr F + { 2025-08-13T20:08:54.998052418+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.998052418+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.998052418+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.994837066 +0000 UTC m=+454.199936400", 2025-08-13T20:08:54.998052418+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.998052418+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:54.998052418+00:00 stderr F + }, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:54.998052418+00:00 stderr F }, 2025-08-13T20:08:54.998052418+00:00 stderr F Version: "", 2025-08-13T20:08:54.998052418+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.998052418+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.998052418+00:00 stderr F } 2025-08-13T20:08:55.000666323+00:00 stderr F E0813 20:08:54.999518 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.132447831+00:00 stderr F I0813 20:09:29.131961 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:09:31.727617477+00:00 stderr F I0813 20:09:31.726940 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:33.466439120+00:00 stderr F I0813 20:09:33.466285 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:38.293980459+00:00 stderr F I0813 20:09:38.293268 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:38.438579575+00:00 stderr F I0813 20:09:38.438168 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:39.328846060+00:00 stderr F I0813 20:09:39.328717 1 
reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:41.223935102+00:00 stderr F I0813 20:09:41.221758 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:09:42.504541109+00:00 stderr F I0813 20:09:42.504138 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.362113407+00:00 stderr F I0813 20:09:43.361586 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.809874274+00:00 stderr F I0813 20:09:43.808495 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:44.589466095+00:00 stderr F I0813 20:09:44.589375 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:44.848149892+00:00 stderr F I0813 20:09:44.848068 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.227115527+00:00 stderr F I0813 20:09:45.227056 1 reflector.go:351] Caches populated for operators.coreos.com/v1, Resource=olmconfigs from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:09:45.377858239+00:00 stderr F I0813 20:09:45.376607 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:45.788857763+00:00 stderr F I0813 20:09:45.786291 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.932375290+00:00 stderr F I0813 20:09:48.931130 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.007225758+00:00 stderr F I0813 20:09:50.006990 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:51.632549997+00:00 stderr F I0813 20:09:51.628985 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.739357640+00:00 stderr F I0813 20:09:52.737293 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:53.232652774+00:00 stderr F I0813 20:09:53.232458 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:08.318075215+00:00 stderr F I0813 20:10:08.307610 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:09.116330181+00:00 stderr F I0813 20:10:09.115726 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:13.496759312+00:00 stderr F I0813 20:10:13.496314 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:10:14.844105511+00:00 stderr F I0813 20:10:14.842060 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:21.201057198+00:00 stderr F I0813 20:10:21.200251 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:26.969289168+00:00 stderr F I0813 20:10:26.960073 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.566027517+00:00 stderr F I0813 20:10:27.565375 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:28.128064702+00:00 stderr F I0813 20:10:28.127899 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.661418915+00:00 stderr F I0813 20:10:29.639889 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:38.203711119+00:00 stderr F I0813 20:10:38.202869 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.413613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.411381 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.417730 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.411546 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.420250 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.470292238+00:00 stderr F I0813 20:42:36.424386 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.479606457+00:00 stderr F I0813 20:42:36.478993 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.479606457+00:00 stderr F I0813 20:42:36.479403 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486942459+00:00 stderr F I0813 20:42:36.482209 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.487206916+00:00 stderr F I0813 20:42:36.487174 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.488106742+00:00 stderr F I0813 20:42:36.488085 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.488608537+00:00 stderr F I0813 20:42:36.488588 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.489345088+00:00 stderr F I0813 20:42:36.489320 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.489868833+00:00 stderr F I0813 20:42:36.489841 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.490397378+00:00 stderr F I0813 20:42:36.490371 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.490919933+00:00 stderr F I0813 20:42:36.490891 
1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491293134+00:00 stderr F I0813 20:42:36.491263 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.498563 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.499140 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.499433 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.500829 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501329 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501902 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.502180 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.502471 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.513653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516357957+00:00 stderr F I0813 20:42:36.516322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516766668+00:00 stderr F I0813 20:42:36.516740 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517208181+00:00 stderr F I0813 20:42:36.517182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517532220+00:00 stderr F I0813 20:42:36.517509 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.535073116+00:00 stderr F I0813 20:42:36.411567 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.724747085+00:00 stderr F E0813 20:42:36.724674 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.047531220+00:00 stderr F E0813 20:42:37.047077 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.072970634+00:00 stderr F I0813 20:42:37.072881 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.072970634+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.072970634+00:00 stderr F Conditions: []v1.OperatorCondition{ 
2025-08-13T20:42:37.072970634+00:00 stderr F ... // 43 identical elements 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F - { 2025-08-13T20:42:37.072970634+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.072970634+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.072970634+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.072970634+00:00 stderr F - }, 2025-08-13T20:42:37.072970634+00:00 stderr F + { 2025-08-13T20:42:37.072970634+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.072970634+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.072970634+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.047612663 +0000 UTC m=+2476.252711917", 2025-08-13T20:42:37.072970634+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.072970634+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.072970634+00:00 stderr F + }, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.072970634+00:00 stderr F }, 2025-08-13T20:42:37.072970634+00:00 stderr F Version: "", 2025-08-13T20:42:37.072970634+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.072970634+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.072970634+00:00 stderr F } 2025-08-13T20:42:37.080679256+00:00 stderr F E0813 20:42:37.080588 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.091382095+00:00 stderr F E0813 20:42:37.088612 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.093880137+00:00 stderr F I0813 20:42:37.091708 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.093880137+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.093880137+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.093880137+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F - { 2025-08-13T20:42:37.093880137+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.093880137+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.093880137+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.093880137+00:00 stderr F - }, 2025-08-13T20:42:37.093880137+00:00 stderr F + { 2025-08-13T20:42:37.093880137+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.093880137+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.093880137+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.088681547 +0000 UTC m=+2476.293780771", 2025-08-13T20:42:37.093880137+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.093880137+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.093880137+00:00 stderr F + }, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.093880137+00:00 stderr F }, 2025-08-13T20:42:37.093880137+00:00 stderr F Version: "", 2025-08-13T20:42:37.093880137+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.093880137+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.093880137+00:00 stderr F } 2025-08-13T20:42:37.093880137+00:00 stderr F E0813 20:42:37.093400 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.104514653+00:00 stderr F E0813 20:42:37.104469 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.107567551+00:00 stderr F I0813 20:42:37.107531 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.107567551+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.107567551+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.107567551+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F - { 2025-08-13T20:42:37.107567551+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.107567551+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.107567551+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.107567551+00:00 stderr F - }, 2025-08-13T20:42:37.107567551+00:00 stderr F + { 2025-08-13T20:42:37.107567551+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.107567551+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.107567551+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.104589836 +0000 UTC m=+2476.309689100", 2025-08-13T20:42:37.107567551+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.107567551+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.107567551+00:00 stderr F + }, 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.107567551+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.107567551+00:00 stderr F }, 2025-08-13T20:42:37.107567551+00:00 stderr F Version: "", 2025-08-13T20:42:37.107567551+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.107567551+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.107567551+00:00 stderr F } 2025-08-13T20:42:37.108659843+00:00 stderr F E0813 20:42:37.108634 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.130076070+00:00 stderr F E0813 20:42:37.130008 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.134904380+00:00 stderr F I0813 20:42:37.134145 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.134904380+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.134904380+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.134904380+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F - { 2025-08-13T20:42:37.134904380+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.134904380+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.134904380+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.134904380+00:00 stderr F - }, 2025-08-13T20:42:37.134904380+00:00 stderr F + { 2025-08-13T20:42:37.134904380+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.134904380+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.134904380+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.130170743 +0000 UTC m=+2476.335269987", 2025-08-13T20:42:37.134904380+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.134904380+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.134904380+00:00 stderr F + }, 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.134904380+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.134904380+00:00 stderr F }, 2025-08-13T20:42:37.134904380+00:00 stderr F Version: "", 2025-08-13T20:42:37.134904380+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.134904380+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.134904380+00:00 stderr F } 2025-08-13T20:42:37.136082974+00:00 stderr F E0813 20:42:37.136006 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.152401314+00:00 stderr F E0813 20:42:37.152276 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.153696011+00:00 stderr F E0813 20:42:37.153636 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.154406892+00:00 stderr F I0813 20:42:37.154344 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.154406892+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.154406892+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.154406892+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.154406892+00:00 stderr F - { 2025-08-13T20:42:37.154406892+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.154406892+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.154406892+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.154406892+00:00 stderr F - }, 2025-08-13T20:42:37.154406892+00:00 stderr F + { 2025-08-13T20:42:37.154406892+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.154406892+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.154406892+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.152329982 +0000 UTC m=+2476.357429176", 2025-08-13T20:42:37.154406892+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.154406892+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.154406892+00:00 stderr F + }, 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.154406892+00:00 stderr F ... // 55 identical elements 2025-08-13T20:42:37.154406892+00:00 stderr F }, 2025-08-13T20:42:37.154406892+00:00 stderr F Version: "", 2025-08-13T20:42:37.154406892+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.154406892+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.154406892+00:00 stderr F } 2025-08-13T20:42:37.154951978+00:00 stderr F E0813 20:42:37.154902 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.156953415+00:00 stderr F E0813 20:42:37.156911 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.157254174+00:00 stderr F I0813 20:42:37.157149 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.157254174+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.157254174+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.157254174+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F - { 2025-08-13T20:42:37.157254174+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.157254174+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.157254174+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.157254174+00:00 stderr F - }, 2025-08-13T20:42:37.157254174+00:00 stderr F + { 2025-08-13T20:42:37.157254174+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.157254174+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.157254174+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.153704092 +0000 UTC m=+2476.358803346", 2025-08-13T20:42:37.157254174+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.157254174+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.157254174+00:00 stderr F + }, 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.157254174+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.157254174+00:00 stderr F }, 2025-08-13T20:42:37.157254174+00:00 stderr F Version: "", 2025-08-13T20:42:37.157254174+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.157254174+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.157254174+00:00 stderr F } 2025-08-13T20:42:37.157768069+00:00 stderr F E0813 20:42:37.157694 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.158715096+00:00 stderr F I0813 20:42:37.158674 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.158715096+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.158715096+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.158715096+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F - { 2025-08-13T20:42:37.158715096+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.158715096+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.158715096+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.158715096+00:00 stderr F - }, 2025-08-13T20:42:37.158715096+00:00 stderr F + { 2025-08-13T20:42:37.158715096+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.158715096+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.158715096+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.156949085 +0000 UTC m=+2476.362048299", 2025-08-13T20:42:37.158715096+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.158715096+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.158715096+00:00 stderr F + }, 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.158715096+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:37.158715096+00:00 stderr F }, 2025-08-13T20:42:37.158715096+00:00 stderr F Version: "", 2025-08-13T20:42:37.158715096+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.158715096+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.158715096+00:00 stderr F } 2025-08-13T20:42:37.161049123+00:00 stderr F E0813 20:42:37.160953 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.162561037+00:00 stderr F E0813 20:42:37.162500 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.168692684+00:00 stderr F E0813 20:42:37.166942 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.170063273+00:00 stderr F I0813 20:42:37.169966 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.170063273+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.170063273+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.170063273+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.170063273+00:00 stderr F - { 2025-08-13T20:42:37.170063273+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.170063273+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.170063273+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.170063273+00:00 stderr F - }, 2025-08-13T20:42:37.170063273+00:00 stderr F + { 2025-08-13T20:42:37.170063273+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.170063273+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.170063273+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.162545296 +0000 UTC m=+2476.367644531", 2025-08-13T20:42:37.170063273+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.170063273+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.170063273+00:00 stderr F + }, 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.170063273+00:00 stderr F ... // 55 identical elements 2025-08-13T20:42:37.170063273+00:00 stderr F }, 2025-08-13T20:42:37.170063273+00:00 stderr F Version: "", 2025-08-13T20:42:37.170063273+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.170063273+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.170063273+00:00 stderr F } 2025-08-13T20:42:37.170813665+00:00 stderr F E0813 20:42:37.170710 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.171744182+00:00 stderr F E0813 20:42:37.171719 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.174842371+00:00 stderr F I0813 20:42:37.174733 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.174842371+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.174842371+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.174842371+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F - { 2025-08-13T20:42:37.174842371+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.174842371+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.174842371+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.174842371+00:00 stderr F - }, 2025-08-13T20:42:37.174842371+00:00 stderr F + { 2025-08-13T20:42:37.174842371+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.174842371+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.174842371+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.169093895 +0000 UTC m=+2476.374193069", 2025-08-13T20:42:37.174842371+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.174842371+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.174842371+00:00 stderr F + }, 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.174842371+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:37.174842371+00:00 stderr F }, 2025-08-13T20:42:37.174842371+00:00 stderr F Version: "", 2025-08-13T20:42:37.174842371+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.174842371+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.174842371+00:00 stderr F } 2025-08-13T20:42:37.175020506+00:00 stderr F E0813 20:42:37.174961 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.175419478+00:00 stderr F E0813 20:42:37.175365 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.175579562+00:00 stderr F I0813 20:42:37.175559 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.175579562+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.175579562+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.175579562+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F - { 2025-08-13T20:42:37.175579562+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.175579562+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.175579562+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.175579562+00:00 stderr F - }, 2025-08-13T20:42:37.175579562+00:00 stderr F + { 2025-08-13T20:42:37.175579562+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.175579562+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.175579562+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.172106842 +0000 UTC m=+2476.377206116", 2025-08-13T20:42:37.175579562+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.175579562+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:37.175579562+00:00 stderr F + }, 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.175579562+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.175579562+00:00 stderr F }, 2025-08-13T20:42:37.175579562+00:00 stderr F Version: "", 2025-08-13T20:42:37.175579562+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.175579562+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.175579562+00:00 stderr F } 2025-08-13T20:42:37.177169998+00:00 stderr F E0813 20:42:37.177145 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.178307351+00:00 stderr F E0813 20:42:37.178282 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.181095601+00:00 stderr F I0813 20:42:37.181070 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.181095601+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.181095601+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.181095601+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F - { 2025-08-13T20:42:37.181095601+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.181095601+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.181095601+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.181095601+00:00 stderr F - }, 2025-08-13T20:42:37.181095601+00:00 stderr F + { 2025-08-13T20:42:37.181095601+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.181095601+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.181095601+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.178357482 +0000 UTC m=+2476.383456656", 2025-08-13T20:42:37.181095601+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.181095601+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.181095601+00:00 stderr F + }, 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.181095601+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.181095601+00:00 stderr F }, 2025-08-13T20:42:37.181095601+00:00 stderr F Version: "", 2025-08-13T20:42:37.181095601+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.181095601+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.181095601+00:00 stderr F } 2025-08-13T20:42:37.181326208+00:00 stderr F E0813 20:42:37.181273 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.182698968+00:00 stderr F E0813 20:42:37.182673 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.184335625+00:00 stderr F E0813 20:42:37.183135 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.193748266+00:00 stderr F I0813 20:42:37.193682 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.193748266+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.193748266+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.193748266+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F - { 2025-08-13T20:42:37.193748266+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.193748266+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.193748266+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.193748266+00:00 stderr F - }, 2025-08-13T20:42:37.193748266+00:00 stderr F + { 2025-08-13T20:42:37.193748266+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.193748266+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.193748266+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.181330718 +0000 UTC m=+2476.386429932", 2025-08-13T20:42:37.193748266+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.193748266+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.193748266+00:00 stderr F + }, 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.193748266+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.193748266+00:00 stderr F }, 2025-08-13T20:42:37.193748266+00:00 stderr F Version: "", 2025-08-13T20:42:37.193748266+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.193748266+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.193748266+00:00 stderr F } 2025-08-13T20:42:37.196264659+00:00 stderr F E0813 20:42:37.195842 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.197475353+00:00 stderr F I0813 20:42:37.197447 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.197475353+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.197475353+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.197475353+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.197475353+00:00 stderr F - { 2025-08-13T20:42:37.197475353+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.197475353+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.197475353+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.197475353+00:00 stderr F - }, 2025-08-13T20:42:37.197475353+00:00 stderr F + { 2025-08-13T20:42:37.197475353+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.197475353+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.197475353+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.182755359 +0000 UTC m=+2476.387854593", 2025-08-13T20:42:37.197475353+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.197475353+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.197475353+00:00 stderr F + }, 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.197475353+00:00 stderr F ... // 55 identical elements 2025-08-13T20:42:37.197475353+00:00 stderr F }, 2025-08-13T20:42:37.197475353+00:00 stderr F Version: "", 2025-08-13T20:42:37.197475353+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.197475353+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.197475353+00:00 stderr F } 2025-08-13T20:42:37.197866805+00:00 stderr F I0813 20:42:37.197734 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.197866805+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.197866805+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.197866805+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F - { 2025-08-13T20:42:37.197866805+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.197866805+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.197866805+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.197866805+00:00 stderr F - }, 2025-08-13T20:42:37.197866805+00:00 stderr F + { 2025-08-13T20:42:37.197866805+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.197866805+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.197866805+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.195891338 +0000 UTC m=+2476.400990662", 2025-08-13T20:42:37.197866805+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.197866805+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.197866805+00:00 stderr F + }, 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.197866805+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:37.197866805+00:00 stderr F }, 2025-08-13T20:42:37.197866805+00:00 stderr F Version: "", 2025-08-13T20:42:37.197866805+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.197866805+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.197866805+00:00 stderr F } 2025-08-13T20:42:37.201089658+00:00 stderr F I0813 20:42:37.199658 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.201089658+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.201089658+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F - { 2025-08-13T20:42:37.201089658+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.201089658+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.201089658+00:00 stderr F - }, 2025-08-13T20:42:37.201089658+00:00 stderr F + { 2025-08-13T20:42:37.201089658+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.201089658+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.175048387 +0000 UTC m=+2476.380147631", 2025-08-13T20:42:37.201089658+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.201089658+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:37.201089658+00:00 stderr F + }, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F }, 2025-08-13T20:42:37.201089658+00:00 stderr F Version: "", 2025-08-13T20:42:37.201089658+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.201089658+00:00 stderr F } 2025-08-13T20:42:37.201089658+00:00 stderr F I0813 20:42:37.200041 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.201089658+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.201089658+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F - { 2025-08-13T20:42:37.201089658+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.201089658+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.201089658+00:00 stderr F - }, 2025-08-13T20:42:37.201089658+00:00 stderr F + { 2025-08-13T20:42:37.201089658+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.201089658+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.201089658+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.183194532 +0000 UTC m=+2476.388293796", 2025-08-13T20:42:37.201089658+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.201089658+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:37.201089658+00:00 stderr F + }, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F }, 2025-08-13T20:42:37.201089658+00:00 stderr F Version: "", 2025-08-13T20:42:37.201089658+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.201089658+00:00 stderr F } 2025-08-13T20:42:37.278750157+00:00 stderr F E0813 20:42:37.278081 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.361374069+00:00 stderr F E0813 20:42:37.360021 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.362399578+00:00 stderr F I0813 20:42:37.361872 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.362399578+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.362399578+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.362399578+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F - { 2025-08-13T20:42:37.362399578+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.362399578+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.362399578+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.362399578+00:00 stderr F - }, 2025-08-13T20:42:37.362399578+00:00 stderr F + { 2025-08-13T20:42:37.362399578+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.362399578+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.362399578+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.360093372 +0000 UTC m=+2476.565192556", 2025-08-13T20:42:37.362399578+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.362399578+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.362399578+00:00 stderr F + }, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.362399578+00:00 stderr F }, 2025-08-13T20:42:37.362399578+00:00 stderr F Version: "", 2025-08-13T20:42:37.362399578+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.362399578+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.362399578+00:00 stderr F } 2025-08-13T20:42:37.480029010+00:00 stderr F E0813 20:42:37.478833 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.499478650+00:00 stderr F E0813 20:42:37.499402 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.508584733+00:00 stderr F I0813 20:42:37.508555 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.508584733+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.508584733+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.508584733+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F - { 2025-08-13T20:42:37.508584733+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.508584733+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.508584733+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.508584733+00:00 stderr F - }, 2025-08-13T20:42:37.508584733+00:00 stderr F + { 2025-08-13T20:42:37.508584733+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.508584733+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.508584733+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.499967515 +0000 UTC m=+2476.705066729", 2025-08-13T20:42:37.508584733+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.508584733+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.508584733+00:00 stderr F + }, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.508584733+00:00 stderr F }, 2025-08-13T20:42:37.508584733+00:00 stderr F Version: "", 2025-08-13T20:42:37.508584733+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.508584733+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.508584733+00:00 stderr F } 2025-08-13T20:42:37.678252325+00:00 stderr F E0813 20:42:37.678114 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.709321850+00:00 stderr F E0813 20:42:37.709216 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.714349285+00:00 stderr F I0813 20:42:37.714323 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.714349285+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.714349285+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.714349285+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.714349285+00:00 stderr F - { 2025-08-13T20:42:37.714349285+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.714349285+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.714349285+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.714349285+00:00 stderr F - }, 2025-08-13T20:42:37.714349285+00:00 stderr F + { 2025-08-13T20:42:37.714349285+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.714349285+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.714349285+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.709431683 +0000 UTC m=+2476.914531117", 2025-08-13T20:42:37.714349285+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.714349285+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.714349285+00:00 stderr F + }, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F ... // 55 identical elements 2025-08-13T20:42:37.714349285+00:00 stderr F }, 2025-08-13T20:42:37.714349285+00:00 stderr F Version: "", 2025-08-13T20:42:37.714349285+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.714349285+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.714349285+00:00 stderr F } 2025-08-13T20:42:37.880558617+00:00 stderr F E0813 20:42:37.880503 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.902973793+00:00 stderr F E0813 20:42:37.902903 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.905525307+00:00 stderr F I0813 20:42:37.905036 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.905525307+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.905525307+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.905525307+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F - { 2025-08-13T20:42:37.905525307+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.905525307+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.905525307+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.905525307+00:00 stderr F - }, 2025-08-13T20:42:37.905525307+00:00 stderr F + { 2025-08-13T20:42:37.905525307+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.905525307+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.905525307+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.902964833 +0000 UTC m=+2477.108064057", 2025-08-13T20:42:37.905525307+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.905525307+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.905525307+00:00 stderr F + }, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:37.905525307+00:00 stderr F }, 2025-08-13T20:42:37.905525307+00:00 stderr F Version: "", 2025-08-13T20:42:37.905525307+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.905525307+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.905525307+00:00 stderr F } 2025-08-13T20:42:38.079416610+00:00 stderr F E0813 20:42:38.079292 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.087560055+00:00 stderr F E0813 20:42:38.087532 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.095856864+00:00 stderr F I0813 20:42:38.095829 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.095856864+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.095856864+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.095856864+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F - { 2025-08-13T20:42:38.095856864+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:38.095856864+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.095856864+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:38.095856864+00:00 stderr F - }, 2025-08-13T20:42:38.095856864+00:00 stderr F + { 2025-08-13T20:42:38.095856864+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:38.095856864+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.095856864+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.087688979 +0000 UTC m=+2477.292788183", 2025-08-13T20:42:38.095856864+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.095856864+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:38.095856864+00:00 stderr F + }, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:38.095856864+00:00 stderr F }, 2025-08-13T20:42:38.095856864+00:00 stderr F Version: "", 2025-08-13T20:42:38.095856864+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.095856864+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.095856864+00:00 stderr F } 2025-08-13T20:42:38.281930229+00:00 stderr F I0813 20:42:38.281864 1 request.go:697] Waited for 1.077706361s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:38.284905095+00:00 stderr F E0813 20:42:38.284725 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.297294822+00:00 stderr F E0813 20:42:38.297210 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.299001191+00:00 stderr F I0813 20:42:38.298970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.299001191+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.299001191+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.299001191+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F - { 2025-08-13T20:42:38.299001191+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.299001191+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.299001191+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:38.299001191+00:00 stderr F - }, 2025-08-13T20:42:38.299001191+00:00 stderr F + { 2025-08-13T20:42:38.299001191+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.299001191+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.299001191+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.297415085 +0000 UTC m=+2477.502514279", 2025-08-13T20:42:38.299001191+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.299001191+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:38.299001191+00:00 stderr F + }, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:38.299001191+00:00 stderr F }, 2025-08-13T20:42:38.299001191+00:00 stderr F Version: "", 2025-08-13T20:42:38.299001191+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.299001191+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.299001191+00:00 stderr F } 2025-08-13T20:42:38.481760610+00:00 stderr F E0813 20:42:38.481054 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.645268924+00:00 stderr F E0813 20:42:38.643682 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.647826998+00:00 stderr F I0813 20:42:38.645476 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.647826998+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.647826998+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.647826998+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F - { 2025-08-13T20:42:38.647826998+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:38.647826998+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.647826998+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:38.647826998+00:00 stderr F - }, 2025-08-13T20:42:38.647826998+00:00 stderr F + { 2025-08-13T20:42:38.647826998+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:38.647826998+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.647826998+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.64373663 +0000 UTC m=+2477.848835834", 2025-08-13T20:42:38.647826998+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:38.647826998+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:38.647826998+00:00 stderr F + }, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:38.647826998+00:00 stderr F }, 2025-08-13T20:42:38.647826998+00:00 stderr F Version: "", 2025-08-13T20:42:38.647826998+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.647826998+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.647826998+00:00 stderr F } 2025-08-13T20:42:38.679048798+00:00 stderr F E0813 20:42:38.677606 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.699143977+00:00 stderr F E0813 20:42:38.699048 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.702053281+00:00 stderr F I0813 20:42:38.700674 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.702053281+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.702053281+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.702053281+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F - { 2025-08-13T20:42:38.702053281+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.702053281+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.702053281+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:38.702053281+00:00 stderr F - }, 2025-08-13T20:42:38.702053281+00:00 stderr F + { 2025-08-13T20:42:38.702053281+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.702053281+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.702053281+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.699106996 +0000 UTC m=+2477.904206200", 2025-08-13T20:42:38.702053281+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.702053281+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:38.702053281+00:00 stderr F + }, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:38.702053281+00:00 stderr F }, 2025-08-13T20:42:38.702053281+00:00 stderr F Version: "", 2025-08-13T20:42:38.702053281+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.702053281+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.702053281+00:00 stderr F } 2025-08-13T20:42:38.877903141+00:00 stderr F E0813 20:42:38.877464 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.920318844+00:00 stderr F E0813 20:42:38.920260 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.922484606+00:00 stderr F I0813 20:42:38.922457 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.922484606+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.922484606+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.922484606+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:38.922484606+00:00 stderr F - { 2025-08-13T20:42:38.922484606+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:38.922484606+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.922484606+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:38.922484606+00:00 stderr F - }, 2025-08-13T20:42:38.922484606+00:00 stderr F + { 2025-08-13T20:42:38.922484606+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:38.922484606+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.922484606+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.920400506 +0000 UTC m=+2478.125499590", 2025-08-13T20:42:38.922484606+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.922484606+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:38.922484606+00:00 stderr F + }, 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:38.922484606+00:00 stderr F ... // 55 identical elements 2025-08-13T20:42:38.922484606+00:00 stderr F }, 2025-08-13T20:42:38.922484606+00:00 stderr F Version: "", 2025-08-13T20:42:38.922484606+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.922484606+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.922484606+00:00 stderr F } 2025-08-13T20:42:39.078488314+00:00 stderr F E0813 20:42:39.077955 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.122207924+00:00 stderr F E0813 20:42:39.122164 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.123761269+00:00 stderr F I0813 20:42:39.123735 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.123761269+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.123761269+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.123761269+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F - { 2025-08-13T20:42:39.123761269+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:39.123761269+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.123761269+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:39.123761269+00:00 stderr F - }, 2025-08-13T20:42:39.123761269+00:00 stderr F + { 2025-08-13T20:42:39.123761269+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:39.123761269+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.123761269+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.122300367 +0000 UTC m=+2478.327399581", 2025-08-13T20:42:39.123761269+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.123761269+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:39.123761269+00:00 stderr F + }, 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.123761269+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:39.123761269+00:00 stderr F }, 2025-08-13T20:42:39.123761269+00:00 stderr F Version: "", 2025-08-13T20:42:39.123761269+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.123761269+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.123761269+00:00 stderr F } 2025-08-13T20:42:39.279080997+00:00 stderr F E0813 20:42:39.278603 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.291431193+00:00 stderr F E0813 20:42:39.291116 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.295216782+00:00 stderr F I0813 20:42:39.294507 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.295216782+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.295216782+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.295216782+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F - { 2025-08-13T20:42:39.295216782+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:39.295216782+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.295216782+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:39.295216782+00:00 stderr F - }, 2025-08-13T20:42:39.295216782+00:00 stderr F + { 2025-08-13T20:42:39.295216782+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:39.295216782+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.295216782+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.291218057 +0000 UTC m=+2478.496317251", 2025-08-13T20:42:39.295216782+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.295216782+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:39.295216782+00:00 stderr F + }, 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.295216782+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:39.295216782+00:00 stderr F }, 2025-08-13T20:42:39.295216782+00:00 stderr F Version: "", 2025-08-13T20:42:39.295216782+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.295216782+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.295216782+00:00 stderr F } 2025-08-13T20:42:39.478261130+00:00 stderr F I0813 20:42:39.477886 1 request.go:697] Waited for 1.178505017s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:39.479417903+00:00 stderr F E0813 20:42:39.479296 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.501115318+00:00 stderr F E0813 20:42:39.501043 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.503024893+00:00 stderr F I0813 20:42:39.502969 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.503024893+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.503024893+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.503024893+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F - { 2025-08-13T20:42:39.503024893+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.503024893+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.503024893+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:39.503024893+00:00 stderr F - }, 2025-08-13T20:42:39.503024893+00:00 stderr F + { 2025-08-13T20:42:39.503024893+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.503024893+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.503024893+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.501092018 +0000 UTC m=+2478.706191252", 2025-08-13T20:42:39.503024893+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.503024893+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:39.503024893+00:00 stderr F + }, 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.503024893+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:39.503024893+00:00 stderr F }, 2025-08-13T20:42:39.503024893+00:00 stderr F Version: "", 2025-08-13T20:42:39.503024893+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.503024893+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.503024893+00:00 stderr F } 2025-08-13T20:42:39.678143251+00:00 stderr F E0813 20:42:39.678091 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.877598372+00:00 stderr F E0813 20:42:39.877538 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.919911252+00:00 stderr F E0813 20:42:39.919849 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.922002062+00:00 stderr F I0813 20:42:39.921973 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:39.922002062+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:39.922002062+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:39.922002062+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F - { 2025-08-13T20:42:39.922002062+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.922002062+00:00 stderr F - Status: "False", 2025-08-13T20:42:39.922002062+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:39.922002062+00:00 stderr F - }, 2025-08-13T20:42:39.922002062+00:00 stderr F + { 2025-08-13T20:42:39.922002062+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:39.922002062+00:00 stderr F + Status: "True", 2025-08-13T20:42:39.922002062+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.920004324 +0000 UTC m=+2479.125103398", 2025-08-13T20:42:39.922002062+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:39.922002062+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:39.922002062+00:00 stderr F + }, 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:39.922002062+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:39.922002062+00:00 stderr F }, 2025-08-13T20:42:39.922002062+00:00 stderr F Version: "", 2025-08-13T20:42:39.922002062+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:39.922002062+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:39.922002062+00:00 stderr F } 2025-08-13T20:42:39.999384503+00:00 stderr F E0813 20:42:39.999332 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.001633568+00:00 stderr F I0813 20:42:40.001606 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.001633568+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.001633568+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.001633568+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F - { 2025-08-13T20:42:40.001633568+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:40.001633568+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.001633568+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:40.001633568+00:00 stderr F - }, 2025-08-13T20:42:40.001633568+00:00 stderr F + { 2025-08-13T20:42:40.001633568+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:40.001633568+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.001633568+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.999473585 +0000 UTC m=+2479.204572669", 2025-08-13T20:42:40.001633568+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:40.001633568+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:40.001633568+00:00 stderr F + }, 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:40.001633568+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:40.001633568+00:00 stderr F }, 2025-08-13T20:42:40.001633568+00:00 stderr F Version: "", 2025-08-13T20:42:40.001633568+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.001633568+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.001633568+00:00 stderr F } 2025-08-13T20:42:40.077993219+00:00 stderr F E0813 20:42:40.077941 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.160942641+00:00 stderr F E0813 20:42:40.160874 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.163209416+00:00 stderr F I0813 20:42:40.163164 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.163209416+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.163209416+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.163209416+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:40.163209416+00:00 stderr F - { 2025-08-13T20:42:40.163209416+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:40.163209416+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.163209416+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:40.163209416+00:00 stderr F - }, 2025-08-13T20:42:40.163209416+00:00 stderr F + { 2025-08-13T20:42:40.163209416+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:40.163209416+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.163209416+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.16092296 +0000 UTC m=+2479.366022164", 2025-08-13T20:42:40.163209416+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.163209416+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:40.163209416+00:00 stderr F + }, 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:40.163209416+00:00 stderr F ... // 55 identical elements 2025-08-13T20:42:40.163209416+00:00 stderr F }, 2025-08-13T20:42:40.163209416+00:00 stderr F Version: "", 2025-08-13T20:42:40.163209416+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.163209416+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.163209416+00:00 stderr F } 2025-08-13T20:42:40.278471909+00:00 stderr F E0813 20:42:40.278269 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.359349001+00:00 stderr F E0813 20:42:40.359265 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.361268266+00:00 stderr F I0813 20:42:40.361120 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.361268266+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.361268266+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.361268266+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F - { 2025-08-13T20:42:40.361268266+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:40.361268266+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.361268266+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:40.361268266+00:00 stderr F - }, 2025-08-13T20:42:40.361268266+00:00 stderr F + { 2025-08-13T20:42:40.361268266+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:40.361268266+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.361268266+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.35931355 +0000 UTC m=+2479.564412834", 2025-08-13T20:42:40.361268266+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.361268266+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:40.361268266+00:00 stderr F + }, 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:40.361268266+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:40.361268266+00:00 stderr F }, 2025-08-13T20:42:40.361268266+00:00 stderr F Version: "", 2025-08-13T20:42:40.361268266+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.361268266+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.361268266+00:00 stderr F } 2025-08-13T20:42:40.478274389+00:00 stderr F E0813 20:42:40.478160 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.500108769+00:00 stderr F E0813 20:42:40.499999 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.502410005+00:00 stderr F I0813 20:42:40.502353 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.502410005+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.502410005+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.502410005+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F - { 2025-08-13T20:42:40.502410005+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:40.502410005+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.502410005+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:40.502410005+00:00 stderr F - }, 2025-08-13T20:42:40.502410005+00:00 stderr F + { 2025-08-13T20:42:40.502410005+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:40.502410005+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.502410005+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.500058417 +0000 UTC m=+2479.705157681", 2025-08-13T20:42:40.502410005+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.502410005+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:40.502410005+00:00 stderr F + }, 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:40.502410005+00:00 stderr F ... // 38 identical elements 2025-08-13T20:42:40.502410005+00:00 stderr F }, 2025-08-13T20:42:40.502410005+00:00 stderr F Version: "", 2025-08-13T20:42:40.502410005+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.502410005+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.502410005+00:00 stderr F } 2025-08-13T20:42:40.677891154+00:00 stderr F I0813 20:42:40.677742 1 request.go:697] Waited for 1.174569142s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:40.678918274+00:00 stderr F E0813 20:42:40.678855 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.721323527+00:00 stderr F E0813 20:42:40.721165 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.724381145+00:00 stderr F I0813 20:42:40.724319 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:40.724381145+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:40.724381145+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:40.724381145+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F - { 2025-08-13T20:42:40.724381145+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:40.724381145+00:00 stderr F - Status: "False", 2025-08-13T20:42:40.724381145+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:40.724381145+00:00 stderr F - }, 2025-08-13T20:42:40.724381145+00:00 stderr F + { 2025-08-13T20:42:40.724381145+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:40.724381145+00:00 stderr F + Status: "True", 2025-08-13T20:42:40.724381145+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.721275235 +0000 UTC m=+2479.926374889", 2025-08-13T20:42:40.724381145+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:40.724381145+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:40.724381145+00:00 stderr F + }, 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:40.724381145+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:40.724381145+00:00 stderr F }, 2025-08-13T20:42:40.724381145+00:00 stderr F Version: "", 2025-08-13T20:42:40.724381145+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:40.724381145+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:40.724381145+00:00 stderr F } 2025-08-13T20:42:40.799625384+00:00 stderr F I0813 20:42:40.798651 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.801046815+00:00 stderr F I0813 20:42:40.799884 1 leaderelection.go:285] failed to renew lease openshift-console-operator/console-operator-lock: timed out waiting for the condition 2025-08-13T20:42:40.801210510+00:00 stderr F I0813 20:42:40.801130 1 base_controller.go:172] Shutting down ConsoleCLIDownloadsController ... 2025-08-13T20:42:40.801210510+00:00 stderr F I0813 20:42:40.801202 1 base_controller.go:172] Shutting down HealthCheckController ... 2025-08-13T20:42:40.801294602+00:00 stderr F I0813 20:42:40.801250 1 base_controller.go:172] Shutting down ConsoleOperator ... 2025-08-13T20:42:40.801307573+00:00 stderr F I0813 20:42:40.801290 1 base_controller.go:172] Shutting down DownloadsRouteController ... 2025-08-13T20:42:40.801317543+00:00 stderr F I0813 20:42:40.801310 1 base_controller.go:172] Shutting down ConsoleRouteController ... 2025-08-13T20:42:40.801368394+00:00 stderr F I0813 20:42:40.801326 1 base_controller.go:172] Shutting down OAuthClientsController ... 2025-08-13T20:42:40.801368394+00:00 stderr F I0813 20:42:40.801361 1 base_controller.go:172] Shutting down OIDCSetupController ... 
2025-08-13T20:42:40.801384975+00:00 stderr F I0813 20:42:40.801377 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.801433016+00:00 stderr F I0813 20:42:40.801394 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:40.801433016+00:00 stderr F I0813 20:42:40.801424 1 base_controller.go:172] Shutting down OAuthClientSecretController ... 2025-08-13T20:42:40.801445507+00:00 stderr F I0813 20:42:40.801439 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:42:40.801457967+00:00 stderr F I0813 20:42:40.801450 1 base_controller.go:172] Shutting down ConsoleDownloadsDeploymentSyncController ... 2025-08-13T20:42:40.801508098+00:00 stderr F I0813 20:42:40.801468 1 base_controller.go:172] Shutting down StatusSyncer_console ... 2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801569 1 base_controller.go:172] Shutting down CLIOIDCClientStatusController ... 2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801605 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801618 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:42:40.801650342+00:00 stderr F I0813 20:42:40.801630 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:40.801650342+00:00 stderr F I0813 20:42:40.801642 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:42:40.801660693+00:00 stderr F I0813 20:42:40.801654 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:42:40.801673243+00:00 stderr F I0813 20:42:40.801665 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.801683563+00:00 stderr F I0813 20:42:40.801676 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ... 2025-08-13T20:42:40.801695694+00:00 stderr F I0813 20:42:40.801689 1 base_controller.go:172] Shutting down InformerWithSwitchController ... 2025-08-13T20:42:40.801754715+00:00 stderr F I0813 20:42:40.801711 1 base_controller.go:114] Shutting down worker of CLIOIDCClientStatusController controller ... 2025-08-13T20:42:40.801754715+00:00 stderr F I0813 20:42:40.801736 1 base_controller.go:104] All CLIOIDCClientStatusController workers have been terminated 2025-08-13T20:42:40.801767756+00:00 stderr F I0813 20:42:40.801753 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:42:40.801767756+00:00 stderr F I0813 20:42:40.801760 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:40.801870469+00:00 stderr F I0813 20:42:40.801851 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:42:40.801870469+00:00 stderr F I0813 20:42:40.801860 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801868 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801874 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801881 1 base_controller.go:114] Shutting down worker of InformerWithSwitchController controller ... 
2025-08-13T20:42:40.801899380+00:00 stderr F I0813 20:42:40.801886 1 base_controller.go:104] All InformerWithSwitchController workers have been terminated 2025-08-13T20:42:40.802006443+00:00 stderr F E0813 20:42:40.801959 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802019993+00:00 stderr F I0813 20:42:40.802004 1 base_controller.go:114] Shutting down worker of PodDisruptionBudgetController controller ... 2025-08-13T20:42:40.802019993+00:00 stderr F I0813 20:42:40.802013 1 base_controller.go:104] All PodDisruptionBudgetController workers have been terminated 2025-08-13T20:42:40.802182328+00:00 stderr F E0813 20:42:40.802113 1 base_controller.go:268] ConsoleServiceController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.801490 1 base_controller.go:150] All StatusSyncer_console post start hooks have been terminated 2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.802308 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.802328 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:42:40.802351263+00:00 stderr F I0813 20:42:40.802338 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.802351263+00:00 stderr F I0813 20:42:40.802330 1 base_controller.go:114] Shutting down worker of OIDCSetupController controller ... 2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802367 1 base_controller.go:104] All OIDCSetupController workers have been terminated 2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802347 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802406 1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ... 2025-08-13T20:42:40.802441615+00:00 stderr F I0813 20:42:40.802433 1 base_controller.go:114] Shutting down worker of ConsoleCLIDownloadsController controller ... 2025-08-13T20:42:40.802454836+00:00 stderr F I0813 20:42:40.802445 1 base_controller.go:104] All ConsoleCLIDownloadsController workers have been terminated 2025-08-13T20:42:40.802464726+00:00 stderr F I0813 20:42:40.802447 1 base_controller.go:104] All StatusSyncer_console workers have been terminated 2025-08-13T20:42:40.802464726+00:00 stderr F I0813 20:42:40.802396 1 base_controller.go:114] Shutting down worker of OAuthClientSecretController controller ... 2025-08-13T20:42:40.802474956+00:00 stderr F I0813 20:42:40.802462 1 base_controller.go:114] Shutting down worker of HealthCheckController controller ... 2025-08-13T20:42:40.802474956+00:00 stderr F I0813 20:42:40.802470 1 base_controller.go:104] All OAuthClientSecretController workers have been terminated 2025-08-13T20:42:40.802485237+00:00 stderr F I0813 20:42:40.802387 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 
2025-08-13T20:42:40.802485237+00:00 stderr F I0813 20:42:40.802477 1 base_controller.go:104] All HealthCheckController workers have been terminated 2025-08-13T20:42:40.802495347+00:00 stderr F I0813 20:42:40.802480 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:40.802505137+00:00 stderr F I0813 20:42:40.802494 1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ... 2025-08-13T20:42:40.802514787+00:00 stderr F I0813 20:42:40.802506 1 base_controller.go:104] All ConsoleOperator workers have been terminated 2025-08-13T20:42:40.802607070+00:00 stderr F I0813 20:42:40.802552 1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ... 2025-08-13T20:42:40.802607070+00:00 stderr F E0813 20:42:40.802586 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802607070+00:00 stderr F I0813 20:42:40.802593 1 base_controller.go:104] All DownloadsRouteController workers have been terminated 2025-08-13T20:42:40.802622521+00:00 stderr F I0813 20:42:40.802610 1 base_controller.go:114] Shutting down worker of ConsoleRouteController controller ... 2025-08-13T20:42:40.802632471+00:00 stderr F I0813 20:42:40.802621 1 base_controller.go:104] All ConsoleRouteController workers have been terminated 2025-08-13T20:42:40.803512196+00:00 stderr F W0813 20:42:40.803457 1 builder.go:131] graceful termination failed, controllers failed with error: stopped ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000644000175000017500000067477315073043234033110 0ustar zuulzuul2025-08-13T19:59:35.430117225+00:00 stderr F I0813 19:59:35.366203 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:59:35.430117225+00:00 stderr F I0813 19:59:35.428254 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:35.614651095+00:00 stderr F I0813 19:59:35.614467 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:37.273990016+00:00 stderr F I0813 19:59:37.272267 1 builder.go:299] console-operator version - 2025-08-13T19:59:42.268464924+00:00 stderr F I0813 19:59:42.264480 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:42.268464924+00:00 stderr F W0813 19:59:42.265251 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:42.268464924+00:00 stderr F W0813 19:59:42.265265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:42.417642327+00:00 stderr F I0813 19:59:42.417146 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:42.447585060+00:00 stderr F I0813 19:59:42.447491 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:42.460890169+00:00 stderr F I0813 19:59:42.447747 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.464867 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.470576 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.471496 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:42.484648657+00:00 stderr F I0813 19:59:42.479569 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:42.485093609+00:00 stderr F I0813 19:59:42.485053 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:42.485156761+00:00 stderr F I0813 19:59:42.485136 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.494534208+00:00 stderr F I0813 19:59:42.493042 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.494534208+00:00 stderr F I0813 19:59:42.493284 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.624227255+00:00 stderr F I0813 19:59:42.570766 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:42.689951678+00:00 stderr F I0813 19:59:42.685458 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.742681721+00:00 stderr F I0813 19:59:42.742026 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.742923228+00:00 stderr F E0813 19:59:42.742750 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.742923228+00:00 stderr F E0813 19:59:42.742910 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.749667020+00:00 stderr F E0813 19:59:42.749548 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.749667020+00:00 stderr F E0813 19:59:42.749610 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.761053454+00:00 stderr F E0813 19:59:42.760935 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.762733642+00:00 stderr F E0813 19:59:42.762656 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.788828846+00:00 stderr F E0813 19:59:42.788226 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.788828846+00:00 stderr F E0813 19:59:42.788471 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.857348119+00:00 stderr F E0813 19:59:42.828627 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.857348119+00:00 stderr F E0813 19:59:42.837936 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.918957066+00:00 stderr F E0813 19:59:42.915022 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.926281904+00:00 stderr F E0813 19:59:42.926087 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.077938407+00:00 stderr F E0813 19:59:43.077073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.096293421+00:00 stderr F E0813 19:59:43.093707 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.140896612+00:00 stderr F I0813 19:59:43.138759 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2025-08-13T19:59:43.140896612+00:00 stderr F I0813 19:59:43.139101 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28335", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_b845076a-9ad5-4a9c-bbd4-efec7e6dc1b0 became leader 2025-08-13T19:59:43.398872586+00:00 stderr F E0813 19:59:43.398742 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.414467350+00:00 stderr F E0813 19:59:43.414400 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.612315180+00:00 stderr F I0813 19:59:43.603718 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:43.613618117+00:00 stderr F I0813 19:59:43.613557 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:43.623642823+00:00 stderr F I0813 19:59:43.614377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", 
"GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:45.601309287+00:00 stderr F E0813 19:59:45.600542 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:45.601434231+00:00 stderr F E0813 19:59:45.601415 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.883192447+00:00 stderr F E0813 19:59:46.883025 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.898282427+00:00 stderr F E0813 19:59:46.898118 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.387849482+00:00 stderr F I0813 19:59:47.387336 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388575 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388614 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388689 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 
2025-08-13T19:59:47.389757287+00:00 stderr F I0813 19:59:47.389689 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:47.391083334+00:00 stderr F I0813 19:59:47.390899 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2025-08-13T19:59:47.391309231+00:00 stderr F I0813 19:59:47.391175 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:47.391326521+00:00 stderr F I0813 19:59:47.391312 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T19:59:47.391346322+00:00 stderr F I0813 19:59:47.391331 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:47.391359512+00:00 stderr F I0813 19:59:47.391352 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T19:59:47.391435754+00:00 stderr F I0813 19:59:47.391367 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391538 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391745 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391763 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.392019 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.392231 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398725 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398868 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398891 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398909 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398926 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398940 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398953 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398968 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398973 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398978 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 
2025-08-13T19:59:47.401900803+00:00 stderr F E0813 19:59:47.401812 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.410046175+00:00 stderr F E0813 19:59:47.409978 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.420544574+00:00 stderr F E0813 19:59:47.420492 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.447700848+00:00 stderr F E0813 19:59:47.442074 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.489532211+00:00 stderr F E0813 19:59:47.489400 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.585230899+00:00 stderr F E0813 19:59:47.574356 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.742574994+00:00 stderr F W0813 19:59:47.742493 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:47.798358105+00:00 stderr F E0813 19:59:47.794599 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.823880362+00:00 stderr F W0813 19:59:47.822633 1 reflector.go:539] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:47.830907522+00:00 stderr F E0813 19:59:47.830857 1 reflector.go:147] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:47.860633060+00:00 stderr F E0813 19:59:47.860553 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.508335 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509186 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509247 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509255 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:49.515921776+00:00 stderr F I0813 19:59:49.515608 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T19:59:49.515921776+00:00 stderr F I0813 19:59:49.515678 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-08-13T19:59:49.520307491+00:00 stderr F I0813 19:59:49.520186 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.531943 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.532167 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.532184 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T19:59:49.535538356+00:00 stderr F I0813 19:59:49.533502 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T19:59:49.535538356+00:00 stderr F I0813 19:59:49.533548 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T19:59:49.541890497+00:00 stderr F I0813 19:59:49.536144 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:49.541890497+00:00 stderr F I0813 19:59:49.536295 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:49.618398968+00:00 stderr F I0813 19:59:49.618299 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T19:59:49.618398968+00:00 stderr F I0813 19:59:49.618337 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-08-13T19:59:49.646890379+00:00 stderr F I0813 19:59:49.644990 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T19:59:49.646890379+00:00 stderr F I0813 19:59:49.645071 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 
2025-08-13T19:59:49.721671560+00:00 stderr F I0813 19:59:49.718432 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:49.721671560+00:00 stderr F I0813 19:59:49.718955 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.721671560+00:00 stderr F E0813 19:59:49.719549 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.721671560+00:00 stderr F E0813 19:59:49.719590 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.898184732+00:00 stderr F I0813 19:59:49.898121 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.940917540+00:00 stderr F I0813 19:59:49.928583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.940917540+00:00 stderr F I0813 19:59:49.936265 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.942210487+00:00 stderr F I0813 19:59:49.942177 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.948889987+00:00 stderr F I0813 19:59:49.945321 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:50.002587738+00:00 stderr F I0813 19:59:50.001186 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2025-08-13T19:59:50.002587738+00:00 stderr F I0813 19:59:50.001291 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2025-08-13T19:59:50.398113313+00:00 stderr F I0813 19:59:50.330334 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:50.402557329+00:00 stderr F I0813 19:59:50.402519 1 base_controller.go:73] Caches are synced for OIDCSetupController 2025-08-13T19:59:50.402662812+00:00 stderr F I0813 19:59:50.402640 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 2025-08-13T19:59:50.402731154+00:00 stderr F I0813 19:59:50.402517 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2025-08-13T19:59:50.402823577+00:00 stderr F I0813 19:59:50.402760 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 
2025-08-13T19:59:50.678549937+00:00 stderr F W0813 19:59:50.636559 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:50.678549937+00:00 stderr F E0813 19:59:50.636746 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:50.678549937+00:00 stderr F W0813 19:59:50.637078 1 reflector.go:539] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:50.678549937+00:00 stderr F E0813 19:59:50.637099 1 reflector.go:147] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:51.714595091+00:00 stderr F I0813 19:59:51.713173 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io 
downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.756014067+00:00 stderr F I0813 19:59:52.744048 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.766049034+00:00 stderr F I0813 19:59:52.744062 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.986403855+00:00 stderr F I0813 19:59:52.978266 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-08-13T19:59:52.986403855+00:00 stderr F I0813 19:59:52.978319 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 2025-08-13T19:59:53.101033032+00:00 stderr F I0813 19:59:53.096560 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.118272194+00:00 stderr F I0813 19:59:53.106464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded changed from False to True ("ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:47256->10.217.4.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:53.128385752+00:00 stderr F I0813 19:59:53.128276 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the 
request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.174648771+00:00 stderr F E0813 19:59:53.145522 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.174648771+00:00 stderr F I0813 19:59:53.172768 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: 
read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.216904325+00:00 stderr F E0813 19:59:53.208544 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.220559200+00:00 stderr F I0813 19:59:53.217599 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:53.221030863+00:00 stderr F I0813 19:59:53.220678 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:53.221165027+00:00 stderr F I0813 19:59:53.221102 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:53.229032401+00:00 stderr F I0813 19:59:53.222660 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:53.222200806 +0000 UTC))" 2025-08-13T19:59:53.240469716+00:00 stderr F I0813 19:59:53.240433 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:53.240379524 +0000 UTC))" 2025-08-13T19:59:53.240560859+00:00 stderr F I0813 19:59:53.240540 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.240514928 +0000 UTC))" 2025-08-13T19:59:53.241113115+00:00 stderr F I0813 19:59:53.241084 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.240698013 +0000 UTC))" 2025-08-13T19:59:53.241191047+00:00 stderr F I0813 19:59:53.241170 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241152076 +0000 UTC))" 2025-08-13T19:59:53.241241448+00:00 stderr F I0813 19:59:53.241228 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241212937 +0000 UTC))" 2025-08-13T19:59:53.241293460+00:00 stderr F I0813 19:59:53.241280 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241260189 +0000 UTC))" 2025-08-13T19:59:53.241341011+00:00 stderr F I0813 19:59:53.241325 1 
tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.24131169 +0000 UTC))" 2025-08-13T19:59:53.241386082+00:00 stderr F I0813 19:59:53.241373 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241359553 +0000 UTC))" 2025-08-13T19:59:53.243396370+00:00 stderr F I0813 19:59:53.243370 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 19:59:53.243349938 +0000 UTC))" 2025-08-13T19:59:53.244109660+00:00 stderr F I0813 19:59:53.244085 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.303105192+00:00 stderr F W0813 19:59:53.303040 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.303400070+00:00 stderr F E0813 19:59:53.303374 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.304328677+00:00 stderr F I0813 19:59:53.244764 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 19:59:53.24376803 +0000 UTC))" 2025-08-13T19:59:54.584047775+00:00 stderr F I0813 19:59:54.583331 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106 2025-08-13T19:59:59.114234360+00:00 stderr F W0813 19:59:59.112599 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:59.114711074+00:00 stderr F E0813 19:59:59.114643 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.736477 1 tlsconfig.go:178] "Loaded 
client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.736397648 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.818989 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.818933081 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819019 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.819004153 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819043 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.819026214 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819062 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819050635 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819085 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819068345 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819112 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819093806 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819157 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819119837 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.819171098 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819220 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819203849 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819560 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 20:00:05.819537779 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819995 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:05.81991988 +0000 UTC))" 2025-08-13T20:00:09.419606640+00:00 stderr F I0813 20:00:09.418744 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:00:10.061828092+00:00 stderr F I0813 20:00:10.056742 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092230 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092282 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092324 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092329 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 
2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092343 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092348 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092366 1 base_controller.go:73] Caches are synced for ConsoleOperator 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092733 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 2025-08-13T20:00:10.121956577+00:00 stderr F I0813 20:00:10.121143 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:00:10.121956577+00:00 stderr F I0813 20:00:10.121235 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-08-13T20:00:10.151497129+00:00 stderr F I0813 20:00:10.151095 1 base_controller.go:73] Caches are synced for HealthCheckController 2025-08-13T20:00:10.151558671+00:00 stderr F I0813 20:00:10.151535 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ... 2025-08-13T20:00:10.205152159+00:00 stderr F I0813 20:00:10.169180 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2025-08-13T20:00:11.778877921+00:00 stderr F I0813 20:00:11.778245 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:11.778877921+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:11.778877921+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:11.778877921+00:00 stderr F    ... 
// 38 identical elements 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F -  { 2025-08-13T20:00:11.778877921+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-08-13T20:00:11.778877921+00:00 stderr F -  Status: "True", 2025-08-13T20:00:11.778877921+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.778877921+00:00 stderr F -  Reason: "FailedRegister", 2025-08-13T20:00:11.778877921+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)", 2025-08-13T20:00:11.778877921+00:00 stderr F -  }, 2025-08-13T20:00:11.778877921+00:00 stderr F +  { 2025-08-13T20:00:11.778877921+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-08-13T20:00:11.778877921+00:00 stderr F +  Status: "False", 2025-08-13T20:00:11.778877921+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.35249456 +0000 UTC m=+46.642382196", 2025-08-13T20:00:11.778877921+00:00 stderr F +  }, 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F    ... // 19 identical elements 2025-08-13T20:00:11.778877921+00:00 stderr F    }, 2025-08-13T20:00:11.778877921+00:00 stderr F    Version: "", 2025-08-13T20:00:11.778877921+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:11.778877921+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:11.778877921+00:00 stderr F   } 2025-08-13T20:00:11.869674490+00:00 stderr F I0813 20:00:11.864551 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:11.869674490+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:11.869674490+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:11.869674490+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F -  { 2025-08-13T20:00:11.869674490+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Status: "True", 2025-08-13T20:00:11.869674490+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:11.869674490+00:00 stderr F -  }, 2025-08-13T20:00:11.869674490+00:00 stderr F +  { 2025-08-13T20:00:11.869674490+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:11.869674490+00:00 stderr F +  Status: "False", 2025-08-13T20:00:11.869674490+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.776893212 +0000 UTC m=+47.066780948", 2025-08-13T20:00:11.869674490+00:00 stderr F +  }, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F -  { 2025-08-13T20:00:11.869674490+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Status: "False", 2025-08-13T20:00:11.869674490+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:11.869674490+00:00 stderr F -  }, 2025-08-13T20:00:11.869674490+00:00 stderr F +  { 2025-08-13T20:00:11.869674490+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:11.869674490+00:00 stderr F +  Status: "True", 2025-08-13T20:00:11.869674490+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.776895672 +0000 UTC m=+47.066782988", 2025-08-13T20:00:11.869674490+00:00 stderr F +  }, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F    ... 
// 10 identical elements 2025-08-13T20:00:11.869674490+00:00 stderr F    }, 2025-08-13T20:00:11.869674490+00:00 stderr F    Version: "", 2025-08-13T20:00:11.869674490+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:11.869674490+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:11.869674490+00:00 stderr F   } 2025-08-13T20:00:11.906421638+00:00 stderr F I0813 20:00:11.906049 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:11.906421638+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:11.906421638+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "True", LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, Reason: "FailedGet", ...}, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:11.906421638+00:00 stderr F -  { 2025-08-13T20:00:11.906421638+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:11.906421638+00:00 stderr F -  Status: "True", 2025-08-13T20:00:11.906421638+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.906421638+00:00 stderr F -  Reason: "SyncError", 2025-08-13T20:00:11.906421638+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)", 2025-08-13T20:00:11.906421638+00:00 stderr F -  }, 2025-08-13T20:00:11.906421638+00:00 stderr F +  { 2025-08-13T20:00:11.906421638+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:11.906421638+00:00 stderr F +  Status: "False", 2025-08-13T20:00:11.906421638+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:11.893081228 +0000 UTC m=+48.182968804", 2025-08-13T20:00:11.906421638+00:00 stderr F +  Reason: "AsExpected", 2025-08-13T20:00:11.906421638+00:00 stderr F +  }, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:11.906421638+00:00 stderr F    ... 
// 56 identical elements 2025-08-13T20:00:11.906421638+00:00 stderr F    }, 2025-08-13T20:00:11.906421638+00:00 stderr F    Version: "", 2025-08-13T20:00:11.906421638+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:11.906421638+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:11.906421638+00:00 stderr F   } 2025-08-13T20:00:12.014816399+00:00 stderr F I0813 20:00:12.014451 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.014816399+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.014816399+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.014816399+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F -  { 2025-08-13T20:00:12.014816399+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.014816399+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.014816399+00:00 stderr F -  }, 2025-08-13T20:00:12.014816399+00:00 stderr F +  { 2025-08-13T20:00:12.014816399+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.014816399+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.014816399+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.999440878 +0000 UTC m=+47.289328174", 2025-08-13T20:00:12.014816399+00:00 stderr F +  }, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F -  { 2025-08-13T20:00:12.014816399+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.014816399+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.014816399+00:00 stderr F -  }, 2025-08-13T20:00:12.014816399+00:00 stderr F +  { 2025-08-13T20:00:12.014816399+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.014816399+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.014816399+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.999434448 +0000 UTC m=+47.289325784", 2025-08-13T20:00:12.014816399+00:00 stderr F +  }, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 
2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:00:12.014816399+00:00 stderr F    }, 2025-08-13T20:00:12.014816399+00:00 stderr F    Version: "", 2025-08-13T20:00:12.014816399+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.014816399+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.014816399+00:00 stderr F   } 2025-08-13T20:00:12.096945711+00:00 stderr F I0813 20:00:12.094858 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.096945711+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.096945711+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "True", LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, Reason: "FailedGet", ...}, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.096945711+00:00 stderr F -  { 2025-08-13T20:00:12.096945711+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:12.096945711+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.096945711+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.096945711+00:00 stderr F -  Reason: "SyncError", 2025-08-13T20:00:12.096945711+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)", 2025-08-13T20:00:12.096945711+00:00 stderr F -  }, 2025-08-13T20:00:12.096945711+00:00 stderr F +  { 2025-08-13T20:00:12.096945711+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:12.096945711+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.096945711+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.082043956 +0000 UTC m=+48.371931642", 2025-08-13T20:00:12.096945711+00:00 stderr F +  Reason: "AsExpected", 2025-08-13T20:00:12.096945711+00:00 stderr F +  }, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.096945711+00:00 stderr F    ... 
// 56 identical elements 2025-08-13T20:00:12.096945711+00:00 stderr F    }, 2025-08-13T20:00:12.096945711+00:00 stderr F    Version: "", 2025-08-13T20:00:12.096945711+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.096945711+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.096945711+00:00 stderr F   } 2025-08-13T20:00:12.279647800+00:00 stderr F I0813 20:00:12.275503 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:12.279647800+00:00 stderr F I0813 20:00:12.278446 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.279647800+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.279647800+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.279647800+00:00 stderr F    ... 
// 7 identical elements 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F -  { 2025-08-13T20:00:12.279647800+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.279647800+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.279647800+00:00 stderr F -  }, 2025-08-13T20:00:12.279647800+00:00 stderr F +  { 2025-08-13T20:00:12.279647800+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.279647800+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.279647800+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.16985366 +0000 UTC m=+48.459741076", 2025-08-13T20:00:12.279647800+00:00 stderr F +  }, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F -  { 2025-08-13T20:00:12.279647800+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.279647800+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.279647800+00:00 stderr F -  }, 2025-08-13T20:00:12.279647800+00:00 stderr F +  { 2025-08-13T20:00:12.279647800+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.279647800+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.279647800+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.16985055 +0000 UTC m=+48.459738236", 2025-08-13T20:00:12.279647800+00:00 stderr F +  }, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:00:12.279647800+00:00 stderr F    }, 2025-08-13T20:00:12.279647800+00:00 stderr F    Version: "", 2025-08-13T20:00:12.279647800+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.279647800+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.279647800+00:00 stderr F   } 2025-08-13T20:00:12.297038556+00:00 stderr F I0813 20:00:12.295727 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.297038556+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.297038556+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.297038556+00:00 stderr F    ... // 45 identical elements 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F -  { 2025-08-13T20:00:12.297038556+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.297038556+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:12.297038556+00:00 stderr F -  }, 2025-08-13T20:00:12.297038556+00:00 stderr F +  { 2025-08-13T20:00:12.297038556+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:12.297038556+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.297038556+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.286220938 +0000 UTC m=+48.576108394", 2025-08-13T20:00:12.297038556+00:00 stderr F +  }, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F -  { 2025-08-13T20:00:12.297038556+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.297038556+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:12.297038556+00:00 stderr F -  }, 2025-08-13T20:00:12.297038556+00:00 stderr F +  { 2025-08-13T20:00:12.297038556+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.297038556+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.297038556+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.286222948 +0000 UTC m=+48.576110244", 2025-08-13T20:00:12.297038556+00:00 stderr F +  }, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 
2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    ... // 10 identical elements 2025-08-13T20:00:12.297038556+00:00 stderr F    }, 2025-08-13T20:00:12.297038556+00:00 stderr F    Version: "", 2025-08-13T20:00:12.297038556+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.297038556+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.297038556+00:00 stderr F   } 2025-08-13T20:00:12.378818518+00:00 stderr F I0813 20:00:12.378663 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:12.536265268+00:00 stderr F E0813 20:00:12.534526 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:12.536265268+00:00 stderr F E0813 20:00:12.534603 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:12.545689446+00:00 stderr F I0813 20:00:12.539000 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.545689446+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.545689446+00:00 
stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.545689446+00:00 stderr F    { 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.545689446+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.545689446+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.545689446+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.545689446+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.545689446+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.545689446+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":", 2025-08-13T20:00:12.545689446+00:00 stderr F    " ", 2025-08-13T20:00:12.545689446+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.545689446+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.545689446+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.545689446+00:00 stderr F    }, ""), 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    { 2025-08-13T20:00:12.545689446+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:12.545689446+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.545689446+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.545689446+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.545689446+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.545689446+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.545689446+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":", 2025-08-13T20:00:12.545689446+00:00 stderr F    " ", 2025-08-13T20:00:12.545689446+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.545689446+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.545689446+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.545689446+00:00 stderr F    }, ""), 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    Version: "", 2025-08-13T20:00:12.545689446+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.545689446+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.545689446+00:00 stderr F   } 2025-08-13T20:00:12.888645326+00:00 stderr F I0813 20:00:12.851613 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.888645326+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.888645326+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.888645326+00:00 stderr F    { 2025-08-13T20:00:12.888645326+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:12.888645326+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.888645326+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.888645326+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.888645326+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.888645326+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.888645326+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":", 2025-08-13T20:00:12.888645326+00:00 stderr F    " ", 2025-08-13T20:00:12.888645326+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.888645326+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.888645326+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.888645326+00:00 stderr F    }, ""), 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:12.888645326+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    { 2025-08-13T20:00:12.888645326+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:12.888645326+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.888645326+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.888645326+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.888645326+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.888645326+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.888645326+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":", 2025-08-13T20:00:12.888645326+00:00 stderr F    " ", 2025-08-13T20:00:12.888645326+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.888645326+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.888645326+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.888645326+00:00 stderr F    }, ""), 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    Version: "", 2025-08-13T20:00:12.888645326+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.888645326+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.888645326+00:00 stderr F   } 2025-08-13T20:00:12.890976112+00:00 stderr F I0813 20:00:12.865542 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:12.969377458+00:00 stderr F I0813 20:00:12.966556 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.969377458+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.969377458+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.969377458+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F -  { 2025-08-13T20:00:12.969377458+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.969377458+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.969377458+00:00 stderr F -  }, 2025-08-13T20:00:12.969377458+00:00 stderr F +  { 2025-08-13T20:00:12.969377458+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.969377458+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.969377458+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.962050399 +0000 UTC m=+49.251937695", 2025-08-13T20:00:12.969377458+00:00 stderr F +  }, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F -  { 2025-08-13T20:00:12.969377458+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.969377458+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.969377458+00:00 stderr F -  }, 2025-08-13T20:00:12.969377458+00:00 stderr F +  { 2025-08-13T20:00:12.969377458+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.969377458+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.969377458+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.962047889 +0000 UTC m=+49.251935475", 2025-08-13T20:00:12.969377458+00:00 stderr F +  }, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: 
"False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:00:12.969377458+00:00 stderr F    }, 2025-08-13T20:00:12.969377458+00:00 stderr F    Version: "", 2025-08-13T20:00:12.969377458+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.969377458+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.969377458+00:00 stderr F   } 2025-08-13T20:00:13.033632120+00:00 stderr F E0813 20:00:13.029315 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:13.033632120+00:00 stderr F E0813 20:00:13.029355 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:13.073359003+00:00 stderr F I0813 20:00:13.072674 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:13.091020806+00:00 stderr F E0813 20:00:13.090678 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.260640553+00:00 stderr F I0813 20:00:13.258400 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: 
connection refused","reason":"DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.344880425+00:00 stderr F I0813 20:00:13.344591 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io 
downloads-custom)" to "DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" 2025-08-13T20:00:13.484965749+00:00 stderr F I0813 20:00:13.483043 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:13.484965749+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:13.484965749+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:13.484965749+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F -  { 2025-08-13T20:00:13.484965749+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Status: "True", 2025-08-13T20:00:13.484965749+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:13.484965749+00:00 stderr F -  }, 2025-08-13T20:00:13.484965749+00:00 stderr F +  { 2025-08-13T20:00:13.484965749+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:13.484965749+00:00 stderr F +  Status: "False", 2025-08-13T20:00:13.484965749+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:13.440922653 +0000 UTC m=+49.730810019", 2025-08-13T20:00:13.484965749+00:00 stderr F +  }, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F -  { 2025-08-13T20:00:13.484965749+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Status: "False", 2025-08-13T20:00:13.484965749+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:13.484965749+00:00 stderr F -  }, 2025-08-13T20:00:13.484965749+00:00 stderr F +  { 2025-08-13T20:00:13.484965749+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:13.484965749+00:00 stderr F +  Status: "True", 2025-08-13T20:00:13.484965749+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:13.440919713 +0000 UTC m=+49.730807239", 2025-08-13T20:00:13.484965749+00:00 stderr F +  }, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:00:13.484965749+00:00 stderr F    }, 2025-08-13T20:00:13.484965749+00:00 stderr F    Version: "", 2025-08-13T20:00:13.484965749+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:13.484965749+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:13.484965749+00:00 stderr F   } 2025-08-13T20:00:13.587009919+00:00 stderr F E0813 20:00:13.586324 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.587009919+00:00 stderr F E0813 20:00:13.586656 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.587743220+00:00 stderr F E0813 20:00:13.587122 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.752083226+00:00 stderr F I0813 20:00:13.751974 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:14.129461516+00:00 stderr F I0813 20:00:14.122744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 
192.168.130.11:443: connect: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Upgradeable changed from False to True ("All is well") 2025-08-13T20:00:14.175457698+00:00 stderr F E0813 20:00:14.175390 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.175553251+00:00 stderr F E0813 20:00:14.175539 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.175988813+00:00 stderr F E0813 20:00:14.175880 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.526271251+00:00 stderr F E0813 20:00:14.494881 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:14.526425155+00:00 stderr F E0813 20:00:14.526399 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:14.584962424+00:00 stderr F I0813 20:00:14.581817 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:14Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742362 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742404 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 
192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742578 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.394056514+00:00 stderr F E0813 20:00:15.393954 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.394115755+00:00 stderr F E0813 20:00:15.394101 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.399979303+00:00 stderr F E0813 20:00:15.399954 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.741117730+00:00 stderr F E0813 20:00:15.740427 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.741117730+00:00 stderr F E0813 20:00:15.741014 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.750376894+00:00 stderr F E0813 20:00:15.750217 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005193 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005266 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005459 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200377335+00:00 stderr F E0813 20:00:16.200319 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200448337+00:00 stderr F E0813 20:00:16.200431 1 
status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200679204+00:00 stderr F E0813 20:00:16.200650 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.452760992+00:00 stderr F E0813 20:00:16.452659 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.452908646+00:00 stderr F E0813 20:00:16.452858 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.453274616+00:00 stderr F E0813 20:00:16.453131 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.705107057+00:00 stderr F E0813 20:00:16.703434 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.638125962+00:00 stderr F I0813 20:00:17.637402 1 apps.go:154] Deployment "openshift-console/console" changes: 
{"metadata":{"annotations":{"console.openshift.io/service-ca-config-version":"29218","operator.openshift.io/spec-hash":"b2372c4f2f3d3abb58592f8e229a7b3901addc8a288a978cd753c769ea967ca8"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"metadata":{"annotations":{"console.openshift.io/service-ca-config-version":"29218"}},"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:00:17.981299097+00:00 stderr F E0813 20:00:17.980371 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:00:17.981299097+00:00 stderr F E0813 20:00:17.980549 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:17.984023174+00:00 stderr F I0813 20:00:17.983958 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:00:18.129528263+00:00 
stderr F I0813 20:00:18.129456 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:18.129528263+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:18.129528263+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 13 identical elements 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    { 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:18.129528263+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:18.129528263+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:00:18.129528263+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:18.129528263+00:00 stderr F -  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:00:18.129528263+00:00 stderr F +  "changes made during sync updates, additional sync expected", 2025-08-13T20:00:18.129528263+00:00 stderr F    }, ""), 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    Version: "", 2025-08-13T20:00:18.129528263+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:18.129528263+00:00 stderr F    Generations: []v1.GenerationStatus{ 2025-08-13T20:00:18.129528263+00:00 stderr F    {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, 2025-08-13T20:00:18.129528263+00:00 stderr F    { 2025-08-13T20:00:18.129528263+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:18.129528263+00:00 stderr F    Namespace: "openshift-console", 2025-08-13T20:00:18.129528263+00:00 stderr F    Name: "console", 2025-08-13T20:00:18.129528263+00:00 stderr F -  LastGeneration: 3, 2025-08-13T20:00:18.129528263+00:00 stderr F +  LastGeneration: 4, 2025-08-13T20:00:18.129528263+00:00 stderr F    Hash: "", 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F   } 2025-08-13T20:00:18.203709259+00:00 stderr F E0813 20:00:18.203654 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.203857193+00:00 stderr F E0813 20:00:18.203763 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.204171062+00:00 stderr F E0813 20:00:18.204143 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.594160241+00:00 stderr F I0813 20:00:18.560202 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.906190398+00:00 stderr F I0813 20:00:18.905262 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" 2025-08-13T20:00:19.181957811+00:00 stderr F I0813 20:00:19.180766 1 apps.go:154] Deployment "openshift-console/console" 
changes: {"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:00:19.273735058+00:00 stderr F E0813 20:00:19.273406 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.273735058+00:00 stderr F E0813 20:00:19.273483 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.273930014+00:00 stderr F E0813 20:00:19.273739 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.396090437+00:00 stderr F E0813 20:00:19.393079 1 status.go:130] SyncLoopRefreshProgressing InProgress 
changes made during sync updates, additional sync expected 2025-08-13T20:00:19.396090437+00:00 stderr F E0813 20:00:19.393134 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:19.396090437+00:00 stderr F I0813 20:00:19.394369 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:00:19.941306583+00:00 stderr F E0813 20:00:19.928673 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:19.941306583+00:00 stderr F E0813 20:00:19.938591 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:20.063366034+00:00 stderr F I0813 20:00:20.058701 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:20.063366034+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:20.063366034+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:20.063366034+00:00 stderr F    ... // 13 identical elements 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    { 2025-08-13T20:00:20.063366034+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:20.063366034+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:20.063366034+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:00:20.063366034+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:20.063366034+00:00 stderr F -  "changes made during sync updates, additional sync expected", 2025-08-13T20:00:20.063366034+00:00 stderr F +  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:00:20.063366034+00:00 stderr F    }, ""), 2025-08-13T20:00:20.063366034+00:00 stderr F    }, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    ... 
// 44 identical elements 2025-08-13T20:00:20.063366034+00:00 stderr F    }, 2025-08-13T20:00:20.063366034+00:00 stderr F    Version: "", 2025-08-13T20:00:20.063366034+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:20.063366034+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:20.063366034+00:00 stderr F   } 2025-08-13T20:00:20.461970770+00:00 stderr F I0813 20:00:20.459046 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:20.641330564+00:00 stderr F I0813 20:00:20.641200 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" 2025-08-13T20:00:20.795446338+00:00 stderr F E0813 20:00:20.788140 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:20.795446338+00:00 stderr F E0813 20:00:20.788183 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.155039 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.155745 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.156032 1 base_controller.go:268] 
HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705293 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705709 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705972 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:22.124634068+00:00 stderr F E0813 20:00:22.117354 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:22.124634068+00:00 stderr F E0813 20:00:22.117397 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:27.342212836+00:00 stderr F E0813 20:00:27.338400 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:27.342212836+00:00 stderr F E0813 20:00:27.338884 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:29.403930083+00:00 stderr F E0813 20:00:29.403323 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:29.404013726+00:00 stderr F E0813 20:00:29.403997 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:32.183362086+00:00 stderr F E0813 20:00:32.174301 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:32.183362086+00:00 stderr F E0813 20:00:32.174954 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:32.187702610+00:00 stderr F I0813 20:00:32.187561 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:32.187702610+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:32.187702610+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:32.187702610+00:00 stderr F    { 2025-08-13T20:00:32.187702610+00:00 stderr F    Type: "RouteHealthDegraded", 2025-08-13T20:00:32.187702610+00:00 stderr F    Status: "True", 2025-08-13T20:00:32.187702610+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:32.187702610+00:00 stderr F -  Reason: "FailedGet", 
2025-08-13T20:00:32.187702610+00:00 stderr F +  Reason: "StatusError", 2025-08-13T20:00:32.187702610+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:32.187702610+00:00 stderr F -  "failed to GET route (", 2025-08-13T20:00:32.187702610+00:00 stderr F +  "route not yet available, ", 2025-08-13T20:00:32.187702610+00:00 stderr F    "https://console-openshift-console.apps-crc.testing", 2025-08-13T20:00:32.187702610+00:00 stderr F -  `): Get "https://console-openshift-console.apps-crc.testing": dia`, 2025-08-13T20:00:32.187702610+00:00 stderr F -  "l tcp 192.168.130.11:443: connect: connection refused", 2025-08-13T20:00:32.187702610+00:00 stderr F +  " returns '503 Service Unavailable'", 2025-08-13T20:00:32.187702610+00:00 stderr F    }, ""), 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:32.187702610+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    { 2025-08-13T20:00:32.187702610+00:00 stderr F    Type: "RouteHealthAvailable", 2025-08-13T20:00:32.187702610+00:00 stderr F    Status: "False", 2025-08-13T20:00:32.187702610+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:32.187702610+00:00 stderr F -  Reason: "FailedGet", 2025-08-13T20:00:32.187702610+00:00 stderr F +  Reason: "StatusError", 2025-08-13T20:00:32.187702610+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:32.187702610+00:00 stderr F -  "failed to GET route (", 2025-08-13T20:00:32.187702610+00:00 stderr F +  "route not yet available, ", 2025-08-13T20:00:32.187702610+00:00 stderr F    "https://console-openshift-console.apps-crc.testing", 2025-08-13T20:00:32.187702610+00:00 stderr F -  `): Get "https://console-openshift-console.apps-crc.testing": dia`, 2025-08-13T20:00:32.187702610+00:00 stderr F -  "l tcp 192.168.130.11:443: connect: connection refused", 2025-08-13T20:00:32.187702610+00:00 stderr F +  " returns '503 Service Unavailable'", 2025-08-13T20:00:32.187702610+00:00 stderr F    }, ""), 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    Version: "", 2025-08-13T20:00:32.187702610+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:32.187702610+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:32.187702610+00:00 stderr F   } 2025-08-13T20:00:33.048627008+00:00 stderr F E0813 20:00:33.030179 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:33.426894334+00:00 stderr F I0813 20:00:33.413078 
1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:33.651450317+00:00 stderr F I0813 20:00:33.646207 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" 2025-08-13T20:00:33.658043525+00:00 stderr F E0813 20:00:33.640606 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:33.658668352+00:00 stderr F E0813 20:00:33.658630 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:33.870570415+00:00 stderr F I0813 20:00:33.870467 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: 
route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:33.996030952+00:00 stderr F E0813 20:00:33.993652 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:34.083989390+00:00 stderr F E0813 20:00:34.083684 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.083989390+00:00 stderr F E0813 20:00:34.083728 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.084042942+00:00 stderr F E0813 20:00:34.084025 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.166760700+00:00 stderr F E0813 20:00:34.165395 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:34.166760700+00:00 stderr F E0813 20:00:34.165741 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:35.974390653+00:00 stderr F E0813 20:00:35.963976 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:35.974390653+00:00 stderr F E0813 20:00:35.974218 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:36.148861788+00:00 stderr F E0813 20:00:36.148466 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:36.148861788+00:00 stderr F E0813 20:00:36.148707 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:36.149084144+00:00 stderr F E0813 20:00:36.148959 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.791228523+00:00 stderr F E0813 20:00:41.768136 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.791362577+00:00 stderr F E0813 20:00:41.791342 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.797892713+00:00 stderr F E0813 20:00:41.791605 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service 
Unavailable' 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.961555 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.955325084 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.962859 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.962733346 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.962930 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.962911841 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963081 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.963064525 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963120 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963105756 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963142 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963126837 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963159 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963147707 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963195 
1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963164118 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963214 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.963201899 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963235 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.9632251 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963332 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963315972 +0000 UTC))" 2025-08-13T20:00:59.979997458+00:00 stderr F I0813 20:00:59.970434 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 20:00:59.964032113 +0000 UTC))" 2025-08-13T20:00:59.979997458+00:00 stderr F I0813 20:00:59.978952 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:59.978871596 +0000 UTC))" 2025-08-13T20:01:06.989884093+00:00 stderr F E0813 20:01:06.988167 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:06.989884093+00:00 stderr F E0813 20:01:06.988728 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:06.993574719+00:00 stderr F E0813 20:01:06.988515 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to 
handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:06.993646521+00:00 stderr F E0813 20:01:06.993631 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.015583326+00:00 stderr F I0813 20:01:07.015493 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.015583326+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.015583326+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.015583326+00:00 stderr F    ... // 45 identical elements 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F -  { 2025-08-13T20:01:07.015583326+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.015583326+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.015583326+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.015583326+00:00 stderr F -  }, 2025-08-13T20:01:07.015583326+00:00 stderr F +  { 2025-08-13T20:01:07.015583326+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.015583326+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.993883917 +0000 UTC m=+103.283771773", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.015583326+00:00 stderr F +  }, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F -  { 2025-08-13T20:01:07.015583326+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.015583326+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.015583326+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.015583326+00:00 stderr F -  }, 2025-08-13T20:01:07.015583326+00:00 stderr F +  { 2025-08-13T20:01:07.015583326+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.015583326+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.993886447 +0000 UTC m=+103.283773743", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.015583326+00:00 stderr F +  }, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 
stderr F    ... // 10 identical elements 2025-08-13T20:01:07.015583326+00:00 stderr F    }, 2025-08-13T20:01:07.015583326+00:00 stderr F    Version: "", 2025-08-13T20:01:07.015583326+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.015583326+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.015583326+00:00 stderr F   } 2025-08-13T20:01:07.078335875+00:00 stderr F I0813 20:01:07.065635 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.078335875+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.078335875+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.078335875+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F -  { 2025-08-13T20:01:07.078335875+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.078335875+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.078335875+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.078335875+00:00 stderr F -  }, 2025-08-13T20:01:07.078335875+00:00 stderr F +  { 2025-08-13T20:01:07.078335875+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.078335875+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.988753571 +0000 UTC m=+103.278640987", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.078335875+00:00 stderr F +  }, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F -  { 2025-08-13T20:01:07.078335875+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.078335875+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.078335875+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.078335875+00:00 stderr F -  }, 2025-08-13T20:01:07.078335875+00:00 stderr F +  { 2025-08-13T20:01:07.078335875+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.078335875+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.988755081 +0000 UTC m=+103.278642367", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.078335875+00:00 stderr F +  }, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 
UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:01:07.078335875+00:00 stderr F    }, 2025-08-13T20:01:07.078335875+00:00 stderr F    Version: "", 2025-08-13T20:01:07.078335875+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.078335875+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.078335875+00:00 stderr F   } 2025-08-13T20:01:07.264500694+00:00 stderr F E0813 20:01:07.264400 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.275896009+00:00 stderr F E0813 20:01:07.272629 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.275896009+00:00 stderr F E0813 20:01:07.272826 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.278903904+00:00 stderr F I0813 20:01:07.276103 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.278903904+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.278903904+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.278903904+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F -  { 2025-08-13T20:01:07.278903904+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.278903904+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.278903904+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.278903904+00:00 stderr F -  }, 2025-08-13T20:01:07.278903904+00:00 stderr F +  { 2025-08-13T20:01:07.278903904+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.278903904+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.273043147 +0000 UTC m=+103.562930533", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.278903904+00:00 stderr F +  }, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F -  { 2025-08-13T20:01:07.278903904+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.278903904+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.278903904+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.278903904+00:00 stderr F -  }, 2025-08-13T20:01:07.278903904+00:00 stderr F +  { 2025-08-13T20:01:07.278903904+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.278903904+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.273040577 +0000 UTC m=+103.562928533", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.278903904+00:00 stderr F +  }, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    ... 
// 10 identical elements 2025-08-13T20:01:07.278903904+00:00 stderr F    }, 2025-08-13T20:01:07.278903904+00:00 stderr F    Version: "", 2025-08-13T20:01:07.278903904+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.278903904+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.278903904+00:00 stderr F   } 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.538002 1 core.go:341] ConfigMap "openshift-console/console-config" changes: {"apiVersion":"v1","data":{"console-config.yaml":"apiVersion: console.openshift.io/v1\nauth:\n authType: openshift\n clientID: console\n clientSecretFile: /var/oauth-config/clientSecret\n oauthEndpointCAFile: /var/oauth-serving-cert/ca-bundle.crt\nclusterInfo:\n consoleBaseAddress: https://console-openshift-console.apps-crc.testing\n controlPlaneTopology: SingleReplica\n masterPublicURL: https://api.crc.testing:6443\n nodeArchitectures:\n - amd64\n nodeOperatingSystems:\n - linux\n releaseVersion: 4.16.0\ncustomization:\n branding: ocp\n documentationBaseURL: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.16/\nkind: ConsoleConfig\nproviders: {}\nservingInfo:\n bindAddress: https://[::]:8443\n certFile: /var/serving-cert/tls.crt\n keyFile: /var/serving-cert/tls.key\nsession: {}\ntelemetry:\n CLUSTER_ID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304\n SEGMENT_API_HOST: console.redhat.com/connections/api/v1\n SEGMENT_JS_HOST: console.redhat.com/connections/cdn\n SEGMENT_PUBLIC_API_KEY: BnuS1RP39EmLQjP21ko67oDjhbl9zpNU\n TELEMETER_CLIENT_DISABLED: \"true\"\n"},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.543698 1 helpers.go:184] lister was stale at resourceVersion=29730, live get showed resourceVersion=30184 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.544621 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.551336 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.587016780+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.587016780+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.587016780+00:00 stderr F    ... 
// 7 identical elements 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F -  { 2025-08-13T20:01:07.587016780+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.587016780+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.587016780+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.587016780+00:00 stderr F -  }, 2025-08-13T20:01:07.587016780+00:00 stderr F +  { 2025-08-13T20:01:07.587016780+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.587016780+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.544159288 +0000 UTC m=+103.834046734", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.587016780+00:00 stderr F +  }, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F -  { 2025-08-13T20:01:07.587016780+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.587016780+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.587016780+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.587016780+00:00 stderr F -  }, 2025-08-13T20:01:07.587016780+00:00 stderr F +  { 2025-08-13T20:01:07.587016780+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.587016780+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.544161238 +0000 UTC m=+103.834048644", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.587016780+00:00 stderr F +  }, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:01:07.587016780+00:00 stderr F    }, 2025-08-13T20:01:07.587016780+00:00 stderr F    Version: "", 2025-08-13T20:01:07.587016780+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.587016780+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.587016780+00:00 stderr F   } 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.551749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/console-config -n openshift-console: 2025-08-13T20:01:07.587016780+00:00 stderr F cause by changes in data.console-config.yaml 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555037 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555048 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555177 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558442 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558451 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558591 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646400 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646455 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646626 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696232 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696304 1 status.go:130] ConsoleCustomRouteSyncUpgradeable 
FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696511 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.869646059+00:00 stderr F I0813 20:01:07.823208 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:07.881203309+00:00 stderr F E0813 20:01:07.880855 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.894914909+00:00 stderr F E0813 20:01:07.894527 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.904815762+00:00 stderr F E0813 20:01:07.898203 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.904815762+00:00 stderr F E0813 20:01:07.884409 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.910492344+00:00 stderr F E0813 20:01:07.909147 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.910667659+00:00 stderr F E0813 20:01:07.910649 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.911087881+00:00 stderr F E0813 20:01:07.911062 1 base_controller.go:268] ConsoleRouteController reconciliation 
failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.916939968+00:00 stderr F E0813 20:01:07.916908 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.917004159+00:00 stderr F E0813 20:01:07.916989 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.917282507+00:00 stderr F E0813 20:01:07.917254 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084310 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084362 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084596 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110863267+00:00 stderr F I0813 20:01:08.110664 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable changed from True to False ("ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)") 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118295 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118382 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118624 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.177380183+00:00 stderr F I0813 20:01:08.177300 1 apps.go:154] Deployment "openshift-console/console" changes: 
{"metadata":{"annotations":{"console.openshift.io/console-config-version":"30193","operator.openshift.io/spec-hash":"f1efe610e88a03d177d7da7c48414aef173b52463548255cc8d0eb8b0da2387b"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"metadata":{"annotations":{"console.openshift.io/console-config-version":"30193"}},"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:01:08.227915294+00:00 stderr F E0813 20:01:08.225041 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.262308 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263064 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 
20:01:08.263077 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263497 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.276528661+00:00 stderr F I0813 20:01:08.274511 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:08.283445968+00:00 stderr F E0813 20:01:08.282072 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.324892079+00:00 stderr F E0813 20:01:08.324712 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:01:08.324892079+00:00 stderr F E0813 20:01:08.324766 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:01:08.327935766+00:00 stderr F I0813 20:01:08.327887 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:01:08.350113509+00:00 stderr F I0813 20:01:08.347209 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have 
been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:08.350113509+00:00 stderr F I0813 20:01:08.347910 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:08.409921724+00:00 stderr F I0813 20:01:08.405262 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:08.414879955+00:00 stderr F I0813 20:01:08.414034 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:08.41398487 +0000 UTC))" 2025-08-13T20:01:08.415876944+00:00 stderr F I0813 20:01:08.414953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:08.414931167 +0000 UTC))" 2025-08-13T20:01:08.415969196+00:00 stderr F I0813 20:01:08.415945 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:08.415914925 +0000 UTC))" 2025-08-13T20:01:08.416032318+00:00 stderr F I0813 20:01:08.416013 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:08.415991547 +0000 UTC))" 2025-08-13T20:01:08.416092010+00:00 stderr F I0813 20:01:08.416073 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416057379 +0000 UTC))" 2025-08-13T20:01:08.416142221+00:00 stderr F I0813 20:01:08.416129 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC 
to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416112411 +0000 UTC))" 2025-08-13T20:01:08.416246064+00:00 stderr F I0813 20:01:08.416230 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416210733 +0000 UTC))" 2025-08-13T20:01:08.416304896+00:00 stderr F I0813 20:01:08.416288 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416267875 +0000 UTC))" 2025-08-13T20:01:08.416354447+00:00 stderr F I0813 20:01:08.416342 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:08.416327387 +0000 UTC))" 2025-08-13T20:01:08.416457460+00:00 stderr F I0813 20:01:08.416437 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:08.416421569 +0000 UTC))" 2025-08-13T20:01:08.416518042+00:00 stderr F I0813 20:01:08.416497 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416480951 +0000 UTC))" 2025-08-13T20:01:08.417026957+00:00 stderr F I0813 20:01:08.416992 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:12 +0000 UTC to 2027-08-13 20:00:13 +0000 UTC (now=2025-08-13 20:01:08.416966885 +0000 UTC))" 2025-08-13T20:01:08.420489225+00:00 stderr F I0813 20:01:08.420460 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:01:08.420425854 +0000 UTC))" 
2025-08-13T20:01:08.738228045+00:00 stderr F E0813 20:01:08.733037 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.738228045+00:00 stderr F E0813 20:01:08.733079 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.752475751+00:00 stderr F E0813 20:01:08.746554 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.826232414+00:00 stderr F I0813 20:01:08.765352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" 2025-08-13T20:01:08.826232414+00:00 stderr F I0813 20:01:08.817376 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:08.826232414+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:08.826232414+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 13 identical elements 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    { 2025-08-13T20:01:08.826232414+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:01:08.826232414+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:01:08.826232414+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:01:08.826232414+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:01:08.826232414+00:00 stderr F -  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:01:08.826232414+00:00 stderr F +  "changes made during sync updates, additional sync expected", 2025-08-13T20:01:08.826232414+00:00 stderr F    }, ""), 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    Version: "", 2025-08-13T20:01:08.826232414+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:08.826232414+00:00 stderr F    Generations: []v1.GenerationStatus{ 2025-08-13T20:01:08.826232414+00:00 stderr F    {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, 2025-08-13T20:01:08.826232414+00:00 stderr F    { 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:01:08.826232414+00:00 stderr F    Namespace: "openshift-console", 2025-08-13T20:01:08.826232414+00:00 stderr F    Name: "console", 2025-08-13T20:01:08.826232414+00:00 stderr F -  LastGeneration: 4, 2025-08-13T20:01:08.826232414+00:00 stderr F +  LastGeneration: 5, 2025-08-13T20:01:08.826232414+00:00 stderr F    Hash: "", 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F   } 2025-08-13T20:01:09.226328592+00:00 stderr F E0813 20:01:09.174533 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.226500987+00:00 stderr F E0813 20:01:09.226463 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.250915843+00:00 stderr F E0813 20:01:09.250812 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.409214567+00:00 stderr F E0813 20:01:09.409153 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.409302429+00:00 stderr F E0813 20:01:09.409287 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.412097119+00:00 stderr F E0813 20:01:09.410035 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.450065452+00:00 
stderr F E0813 20:01:09.449911 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.450065452+00:00 stderr F E0813 20:01:09.449991 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.450697530+00:00 stderr F E0813 20:01:09.450576 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.500530601+00:00 stderr F I0813 20:01:09.493874 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614090 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614158 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614374 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.645963 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes 
the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.646046 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.646219 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.924185441+00:00 stderr F I0813 20:01:09.915994 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" 2025-08-13T20:01:09.958638953+00:00 stderr F E0813 20:01:09.958438 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:09.958638953+00:00 stderr F E0813 20:01:09.958476 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:09.958869540+00:00 stderr F E0813 20:01:09.958723 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199527 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199568 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199727 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199932 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199941 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.200059 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.235524539+00:00 stderr F E0813 20:01:10.235060 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 
2025-08-13T20:01:10.235524539+00:00 stderr F E0813 20:01:10.235387 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:01:10.443988903+00:00 stderr F I0813 20:01:10.426019 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:10.443988903+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:10.443988903+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:10.443988903+00:00 stderr F    ... // 13 identical elements 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    { 2025-08-13T20:01:10.443988903+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:01:10.443988903+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:01:10.443988903+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:01:10.443988903+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:01:10.443988903+00:00 stderr F -  "changes made during sync updates, additional sync expected", 2025-08-13T20:01:10.443988903+00:00 stderr F +  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:01:10.443988903+00:00 stderr F    }, ""), 2025-08-13T20:01:10.443988903+00:00 stderr F    }, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    ... 
// 44 identical elements 2025-08-13T20:01:10.443988903+00:00 stderr F    }, 2025-08-13T20:01:10.443988903+00:00 stderr F    Version: "", 2025-08-13T20:01:10.443988903+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:10.443988903+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:10.443988903+00:00 stderr F   } 2025-08-13T20:01:10.648973078+00:00 stderr F I0813 20:01:10.648051 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="986026bc94c265a214cb3459ff9cc01d5aa0eabbc41959f11d26b6222c432f4b", new="c8d612f3b74dc6507c61e4d04d4ecf5c547ff292af799c7a689fe7a15e5377e0") 2025-08-13T20:01:10.684128900+00:00 stderr F W0813 20:01:10.679640 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.680909 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="4b5d87903056afff0f59aa1059503707e0decf9c5ece89d2e759b1a6adbf089a", new="b9e8e76d9d6343210f883954e57c9ccdef1698a4fed96aca367288053d3b1f02") 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.683590 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.683741 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:10.684188392+00:00 stderr F I0813 20:01:10.684120 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:10.684188392+00:00 stderr F I0813 20:01:10.684129 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:01:10.684349227+00:00 stderr F I0813 20:01:10.684313 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:01:10.684398598+00:00 stderr F I0813 20:01:10.684385 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:10.684434139+00:00 stderr F I0813 20:01:10.684408 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ... 2025-08-13T20:01:10.684480740+00:00 stderr F I0813 20:01:10.684468 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:01:10.684521261+00:00 stderr F I0813 20:01:10.684509 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:01:10.684556742+00:00 stderr F I0813 20:01:10.684517 1 base_controller.go:172] Shutting down InformerWithSwitchController ... 
2025-08-13T20:01:10.684587203+00:00 stderr F W0813 20:01:10.684548 1 builder.go:131] graceful termination failed, controllers failed with error: stopped 2025-08-13T20:01:10.685148869+00:00 stderr F I0813 20:01:10.684633 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log

2025-10-13T00:15:00.352897460+00:00 stderr F I1013 00:15:00.352106 1 cmd.go:241] Using service-serving-cert provided certificates 2025-10-13T00:15:00.353276111+00:00 stderr F I1013 00:15:00.353250 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:15:00.355957552+00:00 stderr F I1013 00:15:00.355278 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:00.457008869+00:00 stderr F I1013 00:15:00.456277 1 builder.go:299] console-operator version - 2025-10-13T00:15:01.322146891+00:00 stderr F I1013 00:15:01.321563 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:01.322146891+00:00 stderr F W1013 00:15:01.322107 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:01.322146891+00:00 stderr F W1013 00:15:01.322117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:01.334405998+00:00 stderr F I1013 00:15:01.333662 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:01.334405998+00:00 stderr F I1013 00:15:01.334036 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock...
2025-10-13T00:15:01.334537252+00:00 stderr F I1013 00:15:01.334504 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:01.334546302+00:00 stderr F I1013 00:15:01.334534 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:01.334600274+00:00 stderr F I1013 00:15:01.334575 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:01.334600274+00:00 stderr F I1013 00:15:01.334594 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:01.334634575+00:00 stderr F I1013 00:15:01.334586 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:01.334670616+00:00 stderr F I1013 00:15:01.334660 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:01.337099579+00:00 stderr F I1013 00:15:01.335259 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:01.337099579+00:00 stderr F I1013 00:15:01.335348 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:01.337099579+00:00 stderr F I1013 00:15:01.335481 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:01.449417674+00:00 stderr F I1013 00:15:01.449350 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:01.449502466+00:00 stderr F I1013 00:15:01.449481 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:01.449643141+00:00 stderr F I1013 00:15:01.449624 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:20:02.594491313+00:00 stderr F I1013 00:20:02.593840 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2025-10-13T00:20:02.594491313+00:00 stderr F I1013 00:20:02.594121 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41902", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_32d8801b-d930-4a30-86de-a10511566b73 became leader 2025-10-13T00:20:02.601600235+00:00 stderr F I1013 00:20:02.601543 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:20:02.605286934+00:00 stderr F I1013 00:20:02.605218 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", 
"ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:20:02.605385277+00:00 stderr F I1013 00:20:02.605297 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:20:02.616305822+00:00 stderr F I1013 00:20:02.616219 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:20:02.616679503+00:00 stderr F I1013 00:20:02.616631 1 base_controller.go:67] Waiting for 
caches to sync for ConsoleServiceController 2025-10-13T00:20:02.616704084+00:00 stderr F I1013 00:20:02.616679 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2025-10-13T00:20:02.616745335+00:00 stderr F I1013 00:20:02.616715 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2025-10-13T00:20:02.616745335+00:00 stderr F I1013 00:20:02.616728 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-10-13T00:20:02.616803057+00:00 stderr F I1013 00:20:02.616763 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2025-10-13T00:20:02.616803057+00:00 stderr F I1013 00:20:02.616786 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2025-10-13T00:20:02.616803057+00:00 stderr F I1013 00:20:02.616792 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2025-10-13T00:20:02.616803057+00:00 stderr F I1013 00:20:02.616798 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 2025-10-13T00:20:02.616821988+00:00 stderr F I1013 00:20:02.616805 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2025-10-13T00:20:02.616821988+00:00 stderr F I1013 00:20:02.616764 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-10-13T00:20:02.616834748+00:00 stderr F I1013 00:20:02.616717 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2025-10-13T00:20:02.616847268+00:00 stderr F I1013 00:20:02.616839 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2025-10-13T00:20:02.616859169+00:00 stderr F I1013 00:20:02.616772 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2025-10-13T00:20:02.616871109+00:00 stderr F I1013 00:20:02.616862 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-10-13T00:20:02.616883579+00:00 stderr F E1013 00:20:02.616871 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-10-13T00:20:02.616883579+00:00 stderr F I1013 00:20:02.616878 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-10-13T00:20:02.617090395+00:00 stderr F I1013 00:20:02.617027 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2025-10-13T00:20:02.617090395+00:00 stderr F I1013 00:20:02.617042 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:20:02.617090395+00:00 stderr F I1013 00:20:02.617070 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-10-13T00:20:02.617110276+00:00 stderr F I1013 00:20:02.617095 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:20:02.617122256+00:00 stderr F I1013 00:20:02.617115 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-10-13T00:20:02.617186298+00:00 stderr F I1013 00:20:02.617133 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2025-10-13T00:20:02.617292451+00:00 stderr F I1013 00:20:02.617261 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 2025-10-13T00:20:02.617292451+00:00 stderr F I1013 00:20:02.617268 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 
2025-10-13T00:20:02.617292451+00:00 stderr F I1013 00:20:02.617273 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 2025-10-13T00:20:02.617643792+00:00 stderr F I1013 00:20:02.617614 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:20:02.622243289+00:00 stderr F E1013 00:20:02.622171 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-10-13T00:20:02.716828151+00:00 stderr F I1013 00:20:02.716766 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2025-10-13T00:20:02.716828151+00:00 stderr F I1013 00:20:02.716800 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 2025-10-13T00:20:02.716945645+00:00 stderr F I1013 00:20:02.716910 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2025-10-13T00:20:02.716945645+00:00 stderr F I1013 00:20:02.716936 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2025-10-13T00:20:02.716991346+00:00 stderr F I1013 00:20:02.716904 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:20:02.716991346+00:00 stderr F I1013 00:20:02.716984 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:20:02.717089439+00:00 stderr F I1013 00:20:02.717065 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2025-10-13T00:20:02.717089439+00:00 stderr F I1013 00:20:02.717079 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2025-10-13T00:20:02.717238683+00:00 stderr F I1013 00:20:02.717179 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-10-13T00:20:02.717238683+00:00 stderr F I1013 00:20:02.717196 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-10-13T00:20:02.717238683+00:00 stderr F I1013 00:20:02.717198 1 base_controller.go:73] Caches are synced for HealthCheckController 2025-10-13T00:20:02.717264794+00:00 stderr F I1013 00:20:02.717252 1 base_controller.go:73] Caches are synced for OIDCSetupController 2025-10-13T00:20:02.717264794+00:00 stderr F I1013 00:20:02.717261 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 2025-10-13T00:20:02.717364027+00:00 stderr F I1013 00:20:02.717297 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ... 2025-10-13T00:20:02.717463660+00:00 stderr F I1013 00:20:02.717405 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-10-13T00:20:02.717463660+00:00 stderr F I1013 00:20:02.717425 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 2025-10-13T00:20:02.717809260+00:00 stderr F I1013 00:20:02.717211 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2025-10-13T00:20:02.717809260+00:00 stderr F I1013 00:20:02.717785 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2025-10-13T00:20:02.717809260+00:00 stderr F I1013 00:20:02.717803 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-10-13T00:20:02.717824541+00:00 stderr F I1013 00:20:02.717808 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 
2025-10-13T00:20:02.717824541+00:00 stderr F I1013 00:20:02.717219 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2025-10-13T00:20:02.717824541+00:00 stderr F I1013 00:20:02.717821 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 2025-10-13T00:20:02.718368207+00:00 stderr F I1013 00:20:02.717172 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-10-13T00:20:02.718368207+00:00 stderr F I1013 00:20:02.718356 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-10-13T00:20:02.718908553+00:00 stderr F I1013 00:20:02.718818 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2025-10-13T00:20:02.718908553+00:00 stderr F I1013 00:20:02.718886 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:20:02.718908553+00:00 stderr F I1013 00:20:02.718899 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:20:02.718957555+00:00 stderr F I1013 00:20:02.718924 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:20:02.718957555+00:00 stderr F I1013 00:20:02.718939 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:20:02.718985905+00:00 stderr F I1013 00:20:02.718967 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2025-10-13T00:20:02.718985905+00:00 stderr F I1013 00:20:02.718980 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 2025-10-13T00:20:02.719029187+00:00 stderr F I1013 00:20:02.719001 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-10-13T00:20:02.719029187+00:00 stderr F I1013 00:20:02.719014 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-10-13T00:20:02.719039717+00:00 stderr F I1013 00:20:02.719032 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:20:02.719048387+00:00 stderr F I1013 00:20:02.719038 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:20:02.818683690+00:00 stderr F I1013 00:20:02.818599 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:02.917474038+00:00 stderr F I1013 00:20:02.917372 1 base_controller.go:73] Caches are synced for ConsoleOperator 2025-10-13T00:20:02.917474038+00:00 stderr F I1013 00:20:02.917409 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 2025-10-13T00:20:02.917474038+00:00 stderr F I1013 00:20:02.917439 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-10-13T00:20:02.917474038+00:00 stderr F I1013 00:20:02.917461 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 
2025-10-13T00:20:02.917542980+00:00 stderr F I1013 00:20:02.917503 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-10-13T00:20:02.917563451+00:00 stderr F I1013 00:20:02.917547 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938459 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.938409925 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938638 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.93862038 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938660 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.938643791 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938681 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.938665442 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938704 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.938686992 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938726 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.938710103 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938753 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.938731333 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938774 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.938758274 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938796 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.938781115 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938817 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.938804415 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938839 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.938823316 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.938859 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.938844556 +0000 UTC))" 2025-10-13T00:21:11.939641678+00:00 stderr F I1013 00:21:11.939289 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:12 +0000 UTC to 2027-08-13 20:00:13 +0000 UTC (now=2025-10-13 00:21:11.939270608 +0000 UTC))" 2025-10-13T00:21:11.949534564+00:00 stderr F I1013 00:21:11.949183 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314501\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\"apiserver-loopback-client-ca@1760314501\" (2025-10-12 23:15:00 +0000 UTC to 2026-10-12 23:15:00 +0000 UTC (now=2025-10-13 00:21:11.949151924 +0000 UTC))" 2025-10-13T00:23:21.503194313+00:00 stderr F I1013 00:23:21.502489 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:24.807155235+00:00 stderr F I1013 00:23:24.806767 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:51.707173776+00:00 stderr F I1013 00:23:51.706669 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-10-13T00:23:52.939803012+00:00 stderr F I1013 00:23:52.939235 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 ././@LongLink0000644000000000000000000000025700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043234033100 5ustar zuulzuul././@LongLink0000644000000000000000000000030400000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043234033100 5ustar zuulzuul././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000001770615073043234033115 0ustar zuulzuul2025-10-13T00:16:11.626147434+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:16:11.626147434+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="Go OS/Arch: linux/amd64" 2025-10-13T00:16:11.626147434+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[metrics] Registering marketplace metrics" 2025-10-13T00:16:11.626477214+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[metrics] Serving marketplace metrics" 2025-10-13T00:16:11.626477214+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="TLS keys set, using https for metrics" 2025-10-13T00:16:11.669617998+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="Config API is available" 2025-10-13T00:16:11.669617998+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="setting up scheme" 2025-10-13T00:16:11.722153863+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="setting up health checks" 2025-10-13T00:16:11.725042705+00:00 stderr F I1013 00:16:11.725007 1 leaderelection.go:250] attempting to acquire leader lease openshift-marketplace/marketplace-operator-lock... 
2025-10-13T00:16:11.734396865+00:00 stderr F I1013 00:16:11.734256 1 leaderelection.go:260] successfully acquired lease openshift-marketplace/marketplace-operator-lock 2025-10-13T00:16:11.743219778+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="became leader: marketplace-operator-8b455464d-29pzg" 2025-10-13T00:16:11.743219778+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="registering components" 2025-10-13T00:16:11.743219778+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="setting up the marketplace clusteroperator status reporter" 2025-10-13T00:16:11.755514453+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="setting up controllers" 2025-10-13T00:16:11.755899305+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="starting the marketplace clusteroperator status reporter" 2025-10-13T00:16:11.755941596+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="starting manager" 2025-10-13T00:16:11.757084133+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"starting server","kind":"pprof","addr":"[::]:6060"} 2025-10-13T00:16:11.757439065+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting EventSource","controller":"catalogsource-controller","source":"kind source: *v1alpha1.CatalogSource"} 2025-10-13T00:16:11.757464965+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting Controller","controller":"catalogsource-controller"} 2025-10-13T00:16:11.759436729+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting EventSource","controller":"operatorhub-controller","source":"kind source: *v1.OperatorHub"} 2025-10-13T00:16:11.761378431+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting Controller","controller":"operatorhub-controller"} 2025-10-13T00:16:11.761378431+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting EventSource","controller":"configmap-controller","source":"kind source: *v1.ConfigMap"} 2025-10-13T00:16:11.761378431+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting Controller","controller":"configmap-controller"} 2025-10-13T00:16:11.864220299+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting workers","controller":"catalogsource-controller","worker count":1} 2025-10-13T00:16:11.876042618+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting workers","controller":"configmap-controller","worker count":1} 2025-10-13T00:16:11.876263935+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="Reconciling ConfigMap openshift-marketplace/marketplace-trusted-ca" 2025-10-13T00:16:11.878985623+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[ca] Certificate Authorization ConfigMap openshift-marketplace/marketplace-trusted-ca is in sync with disk." 
name=marketplace-trusted-ca type=ConfigMap 2025-10-13T00:16:11.879353825+00:00 stderr F {"level":"info","ts":"2025-10-13T00:16:11Z","msg":"Starting workers","controller":"operatorhub-controller","worker count":1} 2025-10-13T00:16:11.879509730+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="Reconciling OperatorHub cluster" 2025-10-13T00:16:11.879777958+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:11.879858581+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:11.879907102+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:11.879964164+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2025-10-13T00:16:23.679118731+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:27.430602667+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:29.037522554+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:50.944913426+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2025-10-13T00:16:57.740266975+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:58.267644537+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:16:59.736238181+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:17:16.820202314+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:17:59.656595619+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:18:09.995503410+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:22:11.953185637+00:00 stderr F E1013 00:22:11.952678 1 leaderelection.go:332] error retrieving resource lock openshift-marketplace/marketplace-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-marketplace/leases/marketplace-operator-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:41.958075862+00:00 stderr F E1013 00:22:41.957704 1 leaderelection.go:332] error retrieving resource lock openshift-marketplace/marketplace-operator-lock: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-marketplace/leases/marketplace-operator-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:11.951465130+00:00 stderr F I1013 00:23:11.950654 1 leaderelection.go:285] failed to renew lease openshift-marketplace/marketplace-operator-lock: timed out waiting for the condition 2025-10-13T00:23:11.982158515+00:00 stderr F time="2025-10-13T00:23:11Z" level=warning msg="leader election lost for marketplace-operator-8b455464d-29pzg identity" ././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000001361115073043234033104 0ustar zuulzuul2025-10-13T00:23:13.108388126+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:23:13.109265340+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="Go OS/Arch: linux/amd64" 2025-10-13T00:23:13.109408044+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[metrics] Registering marketplace metrics" 2025-10-13T00:23:13.109466476+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[metrics] Serving marketplace metrics" 2025-10-13T00:23:13.109661791+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="TLS keys set, using https for metrics" 2025-10-13T00:23:13.146077476+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="Config API is available" 2025-10-13T00:23:13.146077476+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="setting up scheme" 2025-10-13T00:23:13.181964525+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="setting up health checks" 2025-10-13T00:23:13.187062097+00:00 stderr F I1013 00:23:13.187014 1 leaderelection.go:250] attempting to acquire leader lease openshift-marketplace/marketplace-operator-lock... 
2025-10-13T00:23:13.196510141+00:00 stderr F I1013 00:23:13.196447 1 leaderelection.go:260] successfully acquired lease openshift-marketplace/marketplace-operator-lock 2025-10-13T00:23:13.196544711+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="became leader: marketplace-operator-8b455464d-29pzg" 2025-10-13T00:23:13.196544711+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="registering components" 2025-10-13T00:23:13.196573922+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="setting up the marketplace clusteroperator status reporter" 2025-10-13T00:23:13.204781321+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="setting up controllers" 2025-10-13T00:23:13.204865763+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="starting the marketplace clusteroperator status reporter" 2025-10-13T00:23:13.204865763+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="starting manager" 2025-10-13T00:23:13.206168790+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"starting server","kind":"pprof","addr":"[::]:6060"} 2025-10-13T00:23:13.206418357+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting EventSource","controller":"operatorhub-controller","source":"kind source: *v1.OperatorHub"} 2025-10-13T00:23:13.206418357+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting EventSource","controller":"catalogsource-controller","source":"kind source: *v1alpha1.CatalogSource"} 2025-10-13T00:23:13.206432247+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting Controller","controller":"catalogsource-controller"} 2025-10-13T00:23:13.206488288+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting Controller","controller":"operatorhub-controller"} 2025-10-13T00:23:13.207132616+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting EventSource","controller":"configmap-controller","source":"kind source: *v1.ConfigMap"} 2025-10-13T00:23:13.207132616+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting Controller","controller":"configmap-controller"} 2025-10-13T00:23:13.313863369+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting workers","controller":"catalogsource-controller","worker count":1} 2025-10-13T00:23:13.323109167+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting workers","controller":"configmap-controller","worker count":1} 2025-10-13T00:23:13.323408515+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="Reconciling ConfigMap openshift-marketplace/marketplace-trusted-ca" 2025-10-13T00:23:13.327927501+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[ca] Certificate Authorization ConfigMap openshift-marketplace/marketplace-trusted-ca is in sync with disk." 
name=marketplace-trusted-ca type=ConfigMap 2025-10-13T00:23:13.327990653+00:00 stderr F {"level":"info","ts":"2025-10-13T00:23:13Z","msg":"Starting workers","controller":"operatorhub-controller","worker count":1} 2025-10-13T00:23:13.328156837+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="Reconciling OperatorHub cluster" 2025-10-13T00:23:13.328353663+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.328420305+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.328478446+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.328540408+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.337618111+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="Reconciling OperatorHub cluster" 2025-10-13T00:23:13.337782455+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.337928720+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.337998691+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec" 2025-10-13T00:23:13.338039473+00:00 stderr F time="2025-10-13T00:23:13Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec" ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015073043233033104 5ustar zuulzuul././@LongLink0000644000000000000000000000030200000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015073043233033104 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000214255315073043233033122 0ustar zuulzuul2025-08-13T20:05:42.847593472+00:00 stderr F I0813 20:05:42.847360 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:42.849079444+00:00 stderr F I0813 20:05:42.848832 1 leaderelection.go:122] The leader election gives 4 retries and 
allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:42.850582657+00:00 stderr F I0813 20:05:42.850357 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:42.959341362+00:00 stderr F I0813 20:05:42.959227 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-08-13T20:05:43.835638706+00:00 stderr F I0813 20:05:43.832980 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:43.835638706+00:00 stderr F W0813 20:05:43.833036 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:43.835638706+00:00 stderr F W0813 20:05:43.833046 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:43.839256959+00:00 stderr F I0813 20:05:43.837451 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:43.842104231+00:00 stderr F I0813 20:05:43.841213 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:43.842104231+00:00 stderr F I0813 20:05:43.841744 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:43.842415920+00:00 stderr F I0813 20:05:43.842362 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:43.842824481+00:00 stderr F I0813 20:05:43.842538 1 secure_serving.go:213] Serving securely on [::]:9104 2025-08-13T20:05:43.842953695+00:00 stderr F I0813 20:05:43.842935 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:43.843316465+00:00 stderr F I0813 20:05:43.843296 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:43.847064563+00:00 stderr F I0813 20:05:43.846944 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 
2025-08-13T20:05:43.849444071+00:00 stderr F I0813 20:05:43.847922 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:43.849444071+00:00 stderr F I0813 20:05:43.848425 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:43.849493042+00:00 stderr F I0813 20:05:43.849475 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:43.950929837+00:00 stderr F I0813 20:05:43.950757 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:43.950983549+00:00 stderr F I0813 20:05:43.950918 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:43.952076690+00:00 stderr F I0813 20:05:43.951937 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:10:14.618367979+00:00 stderr F I0813 20:10:14.615016 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock 2025-08-13T20:10:14.619163142+00:00 stderr F I0813 20:10:14.619073 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33148", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_c4643de4-0a66-40c8-abff-4239e04f61ab became leader 2025-08-13T20:10:14.783630177+00:00 stderr F I0813 20:10:14.783197 1 operator.go:97] Creating status manager for stand-alone cluster 2025-08-13T20:10:14.784895684+00:00 stderr F I0813 20:10:14.784846 1 operator.go:102] Adding controller-runtime controllers 2025-08-13T20:10:14.806739700+00:00 stderr F I0813 20:10:14.805093 1 operconfig_controller.go:102] Waiting for feature gates initialization... 
2025-08-13T20:10:14.813119033+00:00 stderr F I0813 20:10:14.812083 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:10:14.823420598+00:00 stderr F I0813 20:10:14.823371 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:10:14.825390405+00:00 stderr F I0813 20:10:14.825331 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", 
"MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:10:14.838390267+00:00 stderr F I0813 20:10:14.838340 1 client.go:239] Starting informers... 2025-08-13T20:10:14.839898711+00:00 stderr F I0813 20:10:14.839877 1 client.go:250] Waiting for informers to sync... 2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.041276 1 client.go:271] Informers started and synced 2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.041345 1 operator.go:126] Starting controller-manager 2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.042155 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T20:10:15.043980672+00:00 stderr F I0813 20:10:15.043893 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:10:15.044043254+00:00 stderr F I0813 20:10:15.044026 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:10:15.044104816+00:00 stderr F I0813 20:10:15.044071 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:10:15.045016342+00:00 stderr F I0813 20:10:15.044954 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2025-08-13T20:10:15.045158616+00:00 stderr F I0813 20:10:15.045136 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T20:10:15.045206317+00:00 stderr F I0813 20:10:15.045189 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T20:10:15.045246928+00:00 stderr F I0813 20:10:15.045232 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046648 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046696 1 controller.go:186] "Starting Controller" controller="pki-controller" 2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046726 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046768 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047049 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047177 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047487 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047536 1 controller.go:186] "Starting Controller" controller="signer-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047670 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000b7bb80" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048008 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000b7bc30" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048136 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048165 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048265 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048279 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2025-08-13T20:10:15.052039263+00:00 stderr F I0813 20:10:15.049571 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:10:15.052097065+00:00 stderr F I0813 20:10:15.047085 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T20:10:15.052127916+00:00 stderr F I0813 20:10:15.050615 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController" 2025-08-13T20:10:15.052158247+00:00 stderr F I0813 20:10:15.050697 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc000b7bd90" 2025-08-13T20:10:15.052283460+00:00 stderr F I0813 20:10:15.052266 1 controller.go:186] "Starting Controller" controller="dashboard-controller" 2025-08-13T20:10:15.052378773+00:00 stderr F I0813 20:10:15.052362 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2025-08-13T20:10:15.055895624+00:00 stderr F I0813 20:10:15.052956 1 controller.go:178] "Starting EventSource" 
controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T20:10:15.064980024+00:00 stderr F I0813 20:10:15.056021 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc000b7bad0" 2025-08-13T20:10:15.065230501+00:00 stderr F I0813 20:10:15.065197 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 2025-08-13T20:10:15.065286083+00:00 stderr F I0813 20:10:15.065268 1 controller.go:186] "Starting Controller" controller="operconfig-controller" 2025-08-13T20:10:15.065560431+00:00 stderr F I0813 20:10:15.056196 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T20:10:15.066300372+00:00 stderr F I0813 20:10:15.066210 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.069116893+00:00 stderr F I0813 20:10:15.068249 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055259 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055426 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc000b7bce0" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074287 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074304 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055632 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000b7be40" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074346 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000b7bef0" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074372 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc0002ae790" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074386 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074392 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055668 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.058618 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.080641983+00:00 stderr F I0813 20:10:15.079665 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.080641983+00:00 stderr F I0813 20:10:15.080391 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081068 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081116 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081450 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081473 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081485 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081490 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081497 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081502 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.058942 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle 2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.082569 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.084970727+00:00 stderr F I0813 20:10:15.083428 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.084970727+00:00 stderr F I0813 20:10:15.084043 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.085413390+00:00 stderr F I0813 20:10:15.085380 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.085639496+00:00 stderr F I0813 20:10:15.059371 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.085742399+00:00 stderr F I0813 20:10:15.059579 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.086010897+00:00 stderr F I0813 20:10:15.054200 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000b7b6b0" 2025-08-13T20:10:15.086122110+00:00 stderr F I0813 20:10:15.086106 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2025-08-13T20:10:15.086163021+00:00 stderr F I0813 20:10:15.086150 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 2025-08-13T20:10:15.088455547+00:00 stderr F I0813 20:10:15.086260 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.088631592+00:00 stderr F I0813 20:10:15.059959 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.088903120+00:00 stderr F I0813 20:10:15.088853 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.088969962+00:00 stderr F I0813 20:10:15.060065 1 reflector.go:351] Caches populated for *v1.Endpoints from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089088075+00:00 stderr F I0813 20:10:15.060137 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089189328+00:00 stderr F I0813 20:10:15.060191 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089282461+00:00 stderr F I0813 20:10:15.060315 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089355763+00:00 stderr F I0813 20:10:15.061244 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089459266+00:00 stderr F I0813 20:10:15.061712 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089474106+00:00 stderr F I0813 20:10:15.059849 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089570569+00:00 stderr F I0813 20:10:15.086609 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.089879838+00:00 stderr F I0813 20:10:15.089859 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation 2025-08-13T20:10:15.089982301+00:00 stderr F I0813 20:10:15.089963 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:10:15.090061853+00:00 stderr F I0813 20:10:15.090013 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2025-08-13T20:10:15.090105684+00:00 stderr F I0813 20:10:15.090093 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:10:15.100008418+00:00 stderr F I0813 20:10:15.099132 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist 2025-08-13T20:10:15.141355304+00:00 stderr F I0813 20:10:15.137225 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:15.145938415+00:00 stderr F I0813 20:10:15.144502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150383 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150424 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150547 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1 2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150641 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:10:15.158031662+00:00 stderr F I0813 20:10:15.157991 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.158970089+00:00 stderr F I0813 20:10:15.158871 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1 
2025-08-13T20:10:15.159072092+00:00 stderr F I0813 20:10:15.159011 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T20:10:15.159352200+00:00 stderr F I0813 20:10:15.159283 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1 2025-08-13T20:10:15.159451862+00:00 stderr F I0813 20:10:15.159384 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1 2025-08-13T20:10:15.159468423+00:00 stderr F I0813 20:10:15.159449 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.159886 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger" 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.159941 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger" 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.166186 1 log.go:245] /crc changed, triggering operconf reconciliation 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.171500 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.172980 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.188693941+00:00 stderr F I0813 20:10:15.188566 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1 2025-08-13T20:10:15.191884182+00:00 stderr F I0813 20:10:15.188865 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2025-08-13T20:10:15.197997997+00:00 stderr F I0813 20:10:15.193331 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T20:10:15.197997997+00:00 stderr F I0813 20:10:15.193401 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T20:10:15.214512021+00:00 stderr F I0813 20:10:15.214316 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T20:10:15.224471057+00:00 stderr F I0813 20:10:15.222161 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.224471057+00:00 stderr F I0813 20:10:15.222263 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 2025-08-13T20:10:15.229861391+00:00 stderr F I0813 20:10:15.228541 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle 2025-08-13T20:10:15.240978060+00:00 stderr F I0813 20:10:15.233518 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:15.265250526+00:00 stderr F I0813 20:10:15.260705 1 log.go:245] successful reconciliation 2025-08-13T20:10:15.269867948+00:00 stderr F I0813 20:10:15.267717 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1 2025-08-13T20:10:15.269867948+00:00 stderr F I0813 20:10:15.267902 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.276520 1 
log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.276604 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.278257 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle 2025-08-13T20:10:15.308118515+00:00 stderr F I0813 20:10:15.305285 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:15.308468075+00:00 stderr F I0813 20:10:15.308386 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.308486515+00:00 stderr F I0813 20:10:15.308478 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-08-13T20:10:15.338965739+00:00 stderr F I0813 20:10:15.338564 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.338965739+00:00 stderr F I0813 20:10:15.338664 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-08-13T20:10:15.353518877+00:00 stderr F I0813 20:10:15.353420 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.353565208+00:00 stderr F I0813 20:10:15.353520 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-08-13T20:10:15.360472336+00:00 stderr F I0813 20:10:15.360382 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.360514647+00:00 stderr F I0813 20:10:15.360482 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2025-08-13T20:10:15.381515149+00:00 stderr F I0813 20:10:15.379059 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.381515149+00:00 stderr F I0813 20:10:15.379154 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2025-08-13T20:10:15.387460300+00:00 stderr F I0813 20:10:15.387420 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.387566783+00:00 stderr F I0813 20:10:15.387553 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2025-08-13T20:10:15.520354019+00:00 stderr F I0813 20:10:15.520245 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.520690129+00:00 stderr F I0813 20:10:15.520661 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-08-13T20:10:15.551942095+00:00 stderr F I0813 20:10:15.551594 1 reflector.go:351] Caches 
populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.553274773+00:00 stderr F I0813 20:10:15.553210 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:10:15.553274773+00:00 stderr F I0813 20:10:15.553242 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T20:10:15.600361053+00:00 stderr F I0813 20:10:15.600133 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.625530425+00:00 stderr F I0813 20:10:15.624979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.635443619+00:00 stderr F I0813 20:10:15.634847 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.657638876+00:00 stderr F I0813 20:10:15.657545 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.696231862+00:00 stderr F I0813 20:10:15.695244 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.710192442+00:00 stderr F I0813 20:10:15.707050 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:15.713141237+00:00 stderr F I0813 20:10:15.713054 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.713343193+00:00 stderr F I0813 20:10:15.713319 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:10:15.713431175+00:00 stderr F I0813 20:10:15.713417 1 log.go:245] Reconciling proxy 'cluster' 2025-08-13T20:10:15.727638152+00:00 stderr F I0813 20:10:15.727582 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.745508955+00:00 stderr F I0813 20:10:15.740998 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T20:10:15.746159484+00:00 stderr F I0813 20:10:15.746132 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:10:15.746209645+00:00 stderr F I0813 20:10:15.746197 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T20:10:15.747837812+00:00 stderr F I0813 20:10:15.746759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.756361906+00:00 stderr F I0813 20:10:15.755365 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:10:15.787690624+00:00 stderr F I0813 20:10:15.787166 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-08-13T20:10:15.905854422+00:00 
stderr F I0813 20:10:15.905622 1 log.go:245] Reconciling proxy 'cluster' complete 2025-08-13T20:10:16.493622982+00:00 stderr F I0813 20:10:16.492608 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T20:10:16.504737851+00:00 stderr F I0813 20:10:16.504547 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T20:10:16.521899913+00:00 stderr F I0813 20:10:16.521531 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T20:10:16.537375427+00:00 stderr F I0813 20:10:16.537311 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T20:10:16.537504620+00:00 stderr F I0813 20:10:16.537487 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T20:10:16.551889763+00:00 stderr F I0813 20:10:16.551720 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T20:10:16.692362681+00:00 stderr F I0813 20:10:16.692297 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:10:16.700454323+00:00 stderr F I0813 20:10:16.700272 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.702027648+00:00 stderr F I0813 20:10:16.700874 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.795859278+00:00 stderr F I0813 20:10:16.795186 1 log.go:245] successful reconciliation 2025-08-13T20:10:17.097212428+00:00 stderr F I0813 20:10:17.096540 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca 2025-08-13T20:10:17.100598595+00:00 stderr F I0813 20:10:17.100378 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:17.300866087+00:00 stderr F I0813 20:10:17.300728 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:10:17.304838281+00:00 stderr F I0813 20:10:17.304748 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:10:17.310176314+00:00 stderr F I0813 20:10:17.310118 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:10:17.310201994+00:00 stderr F I0813 20:10:17.310159 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000940380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:10:17.318161843+00:00 stderr F I0813 20:10:17.317414 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:10:17.318235395+00:00 stderr F I0813 20:10:17.318219 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:10:17.318275836+00:00 stderr F I0813 20:10:17.318262 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:10:17.322330792+00:00 stderr F I0813 20:10:17.322283 1 ovn_kubernetes.go:1626] daemonset 
openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:17.322399284+00:00 stderr F I0813 20:10:17.322386 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:17.322429815+00:00 stderr F I0813 20:10:17.322418 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:17.322458676+00:00 stderr F I0813 20:10:17.322448 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:17.322533848+00:00 stderr F I0813 20:10:17.322507 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:10:17.491410320+00:00 stderr F I0813 20:10:17.491325 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T20:10:17.703263844+00:00 stderr F I0813 20:10:17.703141 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:17.719277103+00:00 stderr F I0813 20:10:17.719118 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:17.719277103+00:00 stderr F E0813 20:10:17.719231 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="1f50cad5-60d2-4f3d-b8ab-18987764a41f" 2025-08-13T20:10:17.725032788+00:00 stderr F I0813 20:10:17.724614 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:10:17.897303557+00:00 stderr F I0813 20:10:17.896154 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T20:10:17.902614249+00:00 stderr F I0813 20:10:17.902576 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T20:10:17.923173799+00:00 stderr F I0813 20:10:17.922722 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T20:10:17.943853801+00:00 stderr F I0813 20:10:17.943707 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T20:10:17.943853801+00:00 stderr F I0813 20:10:17.943755 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T20:10:17.957862963+00:00 stderr F I0813 20:10:17.957742 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T20:10:18.093558154+00:00 stderr F I0813 20:10:18.093248 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:10:18.099429212+00:00 stderr F I0813 20:10:18.099385 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.109217933+00:00 stderr F I0813 20:10:18.108590 1 reflector.go:351] Caches populated 
for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.178884190+00:00 stderr F I0813 20:10:18.178330 1 allowlist_controller.go:146] Successfully updated sysctl allowlist 2025-08-13T20:10:18.200862150+00:00 stderr F I0813 20:10:18.198317 1 log.go:245] successful reconciliation 2025-08-13T20:10:18.289874792+00:00 stderr F I0813 20:10:18.289321 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:18.313854600+00:00 stderr F I0813 20:10:18.313708 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:10:18.313854600+00:00 stderr F I0813 20:10:18.313768 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T20:10:18.315971261+00:00 stderr F I0813 20:10:18.315882 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T20:10:18.492054889+00:00 stderr F I0813 20:10:18.491952 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle 2025-08-13T20:10:18.495456387+00:00 stderr F I0813 20:10:18.495370 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:19.498520906+00:00 stderr F I0813 20:10:19.498423 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:10:19.503122328+00:00 stderr F I0813 20:10:19.503082 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:10:19.507103102+00:00 stderr F I0813 20:10:19.507068 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:10:19.507187084+00:00 stderr F I0813 20:10:19.507146 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003acdb80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:10:19.517054227+00:00 stderr F I0813 20:10:19.517007 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:10:19.517129689+00:00 stderr F I0813 20:10:19.517115 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:10:19.517163580+00:00 stderr F I0813 20:10:19.517152 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:10:19.521718241+00:00 stderr F I0813 20:10:19.521682 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:19.521859855+00:00 stderr F I0813 20:10:19.521764 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:19.521903126+00:00 stderr F I0813 20:10:19.521888 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:19.522002509+00:00 stderr F I0813 20:10:19.521986 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:19.522061621+00:00 stderr F I0813 20:10:19.522049 1 
ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:10:19.703470441+00:00 stderr F I0813 20:10:19.703361 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:19.720904881+00:00 stderr F I0813 20:10:19.720475 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:19.720904881+00:00 stderr F E0813 20:10:19.720610 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="046702fa-95f0-4511-9599-1da40328714e" 2025-08-13T20:10:19.731743692+00:00 stderr F I0813 20:10:19.731668 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:10:19.892101659+00:00 stderr F I0813 20:10:19.890303 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle 2025-08-13T20:10:19.893996923+00:00 stderr F I0813 20:10:19.893943 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:20.091372732+00:00 stderr F I0813 20:10:20.091233 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2025-08-13T20:10:20.093704069+00:00 stderr F I0813 20:10:20.093616 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:20.493945505+00:00 stderr F I0813 20:10:20.493604 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca 2025-08-13T20:10:20.498337961+00:00 stderr F I0813 20:10:20.498289 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:20.891459272+00:00 stderr F I0813 20:10:20.891337 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2025-08-13T20:10:20.905319379+00:00 stderr F I0813 20:10:20.905262 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.103277445+00:00 stderr F I0813 20:10:21.103186 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:10:21.106639701+00:00 stderr F I0813 20:10:21.106580 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:10:21.108664629+00:00 stderr F I0813 20:10:21.108585 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:10:21.108664629+00:00 stderr F I0813 20:10:21.108619 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0010c5800 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114159 1 
ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114206 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114224 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.117969 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118007 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118026 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118033 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:21.118318786+00:00 stderr F I0813 20:10:21.118274 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:10:21.295458115+00:00 stderr F I0813 20:10:21.293016 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T20:10:21.297510904+00:00 stderr F I0813 20:10:21.297171 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.491060843+00:00 stderr F I0813 20:10:21.490953 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle 2025-08-13T20:10:21.494009628+00:00 stderr F I0813 20:10:21.493448 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.504048745+00:00 stderr F I0813 20:10:21.503017 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:21.523490663+00:00 stderr F I0813 20:10:21.523122 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:10:21.523490663+00:00 stderr F I0813 20:10:21.523183 1 log.go:245] Starting render phase 2025-08-13T20:10:21.561189264+00:00 stderr F I0813 20:10:21.559944 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601509 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601549 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601586 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601624 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:10:21.691747777+00:00 stderr F I0813 20:10:21.691660 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2025-08-13T20:10:21.695859975+00:00 stderr F I0813 20:10:21.694041 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.707107557+00:00 stderr F I0813 20:10:21.707020 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:10:21.707107557+00:00 stderr F I0813 20:10:21.707053 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:10:21.894252693+00:00 stderr F I0813 20:10:21.893607 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-08-13T20:10:21.897754513+00:00 stderr F I0813 20:10:21.897684 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-08-13T20:10:21.897838216+00:00 stderr F I0813 20:10:21.897816 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897857 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897952 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897985 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898029 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898082 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898122 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898151 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898173 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898203 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898244 1 log.go:245] ConfigMap 
openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898265 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.939293424+00:00 stderr F I0813 20:10:21.938402 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:10:22.312176035+00:00 stderr F I0813 20:10:22.310021 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:10:22.325019283+00:00 stderr F I0813 20:10:22.323766 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:10:22.325019283+00:00 stderr F I0813 20:10:22.324015 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:10:22.336750280+00:00 stderr F I0813 20:10:22.336655 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:10:22.336750280+00:00 stderr F I0813 20:10:22.336710 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:10:22.346664414+00:00 stderr F I0813 20:10:22.346573 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:10:22.346753177+00:00 stderr F I0813 20:10:22.346663 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:10:22.362561200+00:00 stderr F I0813 20:10:22.362508 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:10:22.362671523+00:00 stderr F I0813 20:10:22.362656 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:10:22.370158458+00:00 stderr F I0813 20:10:22.369302 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:10:22.370158458+00:00 stderr F I0813 20:10:22.369394 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:10:22.382879762+00:00 stderr F I0813 20:10:22.382733 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:10:22.382879762+00:00 stderr F I0813 20:10:22.382865 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:10:22.397029888+00:00 stderr F I0813 20:10:22.396511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:10:22.397029888+00:00 stderr F I0813 20:10:22.396601 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:10:22.409078684+00:00 stderr F I0813 20:10:22.408017 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:10:22.409078684+00:00 stderr F I0813 20:10:22.408086 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:10:22.421082108+00:00 stderr F I0813 20:10:22.419645 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/multus-transient was successful 2025-08-13T20:10:22.421082108+00:00 stderr F I0813 20:10:22.419731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:10:22.428624814+00:00 stderr F I0813 20:10:22.427127 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:10:22.428624814+00:00 stderr F I0813 20:10:22.427193 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:10:22.520892639+00:00 stderr F I0813 20:10:22.520085 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:10:22.520892639+00:00 stderr F I0813 20:10:22.520153 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:10:22.721411529+00:00 stderr F I0813 20:10:22.721251 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:10:22.721411529+00:00 stderr F I0813 20:10:22.721321 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:10:22.927994422+00:00 stderr F I0813 20:10:22.927876 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:10:22.927994422+00:00 stderr F I0813 20:10:22.927981 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:10:23.123548518+00:00 stderr F I0813 20:10:23.121703 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:10:23.123548518+00:00 stderr F I0813 20:10:23.121845 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:10:23.323863840+00:00 stderr F I0813 20:10:23.322137 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:10:23.323863840+00:00 stderr F I0813 20:10:23.322283 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:10:23.524312658+00:00 stderr F I0813 20:10:23.521103 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:10:23.524312658+00:00 stderr F I0813 20:10:23.521193 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:10:23.721742038+00:00 stderr F I0813 20:10:23.721581 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:10:23.721742038+00:00 stderr F I0813 20:10:23.721722 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:10:23.924133671+00:00 stderr F I0813 20:10:23.922598 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:10:23.924133671+00:00 stderr F I0813 20:10:23.922687 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:10:24.119658897+00:00 stderr F I0813 20:10:24.119602 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:10:24.119879823+00:00 stderr F I0813 20:10:24.119855 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:10:24.319565018+00:00 stderr F I0813 20:10:24.319507 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:10:24.319760864+00:00 stderr F I0813 20:10:24.319741 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:10:24.522948220+00:00 stderr F I0813 20:10:24.522836 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:10:24.522948220+00:00 stderr F I0813 20:10:24.522904 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:10:24.799190120+00:00 stderr F I0813 20:10:24.799067 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:10:24.799372835+00:00 stderr F I0813 20:10:24.799350 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:10:24.931274677+00:00 stderr F I0813 20:10:24.931220 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:10:24.931429361+00:00 stderr F I0813 20:10:24.931414 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:10:25.124266860+00:00 stderr F I0813 20:10:25.123539 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:10:25.124382073+00:00 stderr F I0813 20:10:25.124367 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:10:25.320061254+00:00 stderr F I0813 20:10:25.320004 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:10:25.320175797+00:00 stderr F I0813 20:10:25.320161 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:10:25.519275435+00:00 stderr F I0813 20:10:25.518652 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:10:25.519275435+00:00 stderr F I0813 20:10:25.518731 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:10:25.736134673+00:00 stderr F I0813 20:10:25.736036 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:10:25.736134673+00:00 stderr F I0813 20:10:25.736108 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:10:25.925590275+00:00 stderr F I0813 20:10:25.925203 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:10:25.925590275+00:00 stderr F I0813 20:10:25.925275 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:10:26.123879040+00:00 stderr F I0813 20:10:26.123028 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:10:26.123879040+00:00 stderr F I0813 20:10:26.123099 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:10:26.323700779+00:00 stderr F I0813 20:10:26.323533 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:26.323700779+00:00 stderr F I0813 20:10:26.323636 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:10:26.527292577+00:00 stderr F I0813 20:10:26.527233 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:26.527426310+00:00 stderr F I0813 20:10:26.527402 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:10:26.724639985+00:00 stderr F I0813 20:10:26.724538 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:10:26.724699056+00:00 stderr F I0813 20:10:26.724643 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:10:26.923624379+00:00 stderr F I0813 20:10:26.923318 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:10:26.923887836+00:00 stderr F I0813 20:10:26.923867 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:10:27.128076081+00:00 stderr F I0813 20:10:27.127842 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:10:27.128076081+00:00 stderr F I0813 20:10:27.128023 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:10:27.325535142+00:00 stderr F I0813 20:10:27.325407 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:10:27.325535142+00:00 stderr F I0813 20:10:27.325510 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:10:27.529757257+00:00 stderr F I0813 20:10:27.529598 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:10:27.529757257+00:00 stderr F I0813 20:10:27.529682 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:10:27.724759538+00:00 stderr F I0813 20:10:27.724633 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:10:27.724759538+00:00 stderr F I0813 20:10:27.724711 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:10:27.920503770+00:00 stderr F I0813 20:10:27.920391 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:10:27.920503770+00:00 stderr F I0813 20:10:27.920468 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:10:28.124461178+00:00 stderr F I0813 20:10:28.123842 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:28.124461178+00:00 stderr F I0813 20:10:28.124025 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:10:28.319762888+00:00 stderr F I0813 20:10:28.319650 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:28.319762888+00:00 stderr F I0813 20:10:28.319733 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:10:28.522318335+00:00 stderr F I0813 20:10:28.522195 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:10:28.522318335+00:00 stderr F I0813 20:10:28.522260 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:10:28.721852396+00:00 stderr F I0813 20:10:28.721622 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:10:28.721852396+00:00 stderr F I0813 20:10:28.721715 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:10:28.927618676+00:00 stderr F I0813 20:10:28.927482 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:10:28.927618676+00:00 stderr F I0813 20:10:28.927562 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:10:29.136455113+00:00 stderr F I0813 20:10:29.136266 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:10:29.136455113+00:00 stderr F I0813 20:10:29.136352 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:10:29.326397239+00:00 stderr F I0813 20:10:29.326310 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:10:29.326440441+00:00 stderr F I0813 20:10:29.326399 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:10:29.529527483+00:00 stderr F I0813 20:10:29.529420 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:10:29.529527483+00:00 stderr F I0813 20:10:29.529505 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:10:29.724242336+00:00 stderr F I0813 20:10:29.724035 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:10:29.724242336+00:00 stderr F I0813 20:10:29.724102 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:10:29.952650554+00:00 stderr F I0813 20:10:29.952593 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:10:29.952941233+00:00 stderr F I0813 20:10:29.952896 1 log.go:245] reconciling (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:10:30.156097907+00:00 stderr F I0813 20:10:30.155900 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:10:30.156097907+00:00 stderr F I0813 20:10:30.156025 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:10:30.319976035+00:00 stderr F I0813 20:10:30.319680 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:10:30.319976035+00:00 stderr F I0813 20:10:30.319769 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:10:30.520501744+00:00 stderr F I0813 20:10:30.520394 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:10:30.520563216+00:00 stderr F I0813 20:10:30.520555 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:10:30.734934362+00:00 stderr F I0813 20:10:30.734182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:10:30.734934362+00:00 stderr F I0813 20:10:30.734256 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:10:30.921047819+00:00 stderr F I0813 20:10:30.920898 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:10:30.921177432+00:00 stderr F I0813 20:10:30.921161 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:10:31.119574791+00:00 stderr F I0813 20:10:31.119426 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:10:31.119574791+00:00 stderr F I0813 20:10:31.119505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:10:31.322412276+00:00 stderr F I0813 20:10:31.322271 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:10:31.322412276+00:00 stderr F I0813 20:10:31.322363 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:10:31.521363790+00:00 stderr F I0813 20:10:31.521222 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:10:31.521363790+00:00 stderr F I0813 20:10:31.521297 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:10:31.723192067+00:00 stderr F I0813 20:10:31.723043 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:10:31.723192067+00:00 stderr F I0813 20:10:31.723129 1 log.go:245] reconciling 
(/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:10:31.920969337+00:00 stderr F I0813 20:10:31.920680 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:10:31.920969337+00:00 stderr F I0813 20:10:31.920767 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:10:32.121627650+00:00 stderr F I0813 20:10:32.121471 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:10:32.121627650+00:00 stderr F I0813 20:10:32.121564 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:10:32.324740024+00:00 stderr F I0813 20:10:32.324619 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:10:32.324740024+00:00 stderr F I0813 20:10:32.324722 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:10:32.518459638+00:00 stderr F I0813 20:10:32.518327 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:10:32.518459638+00:00 stderr F I0813 20:10:32.518405 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:10:32.718275557+00:00 stderr F I0813 20:10:32.718118 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:10:32.718275557+00:00 stderr F I0813 20:10:32.718222 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:10:32.922672908+00:00 stderr F I0813 20:10:32.922513 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:10:32.922672908+00:00 stderr F I0813 20:10:32.922624 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:10:33.121058765+00:00 stderr F I0813 20:10:33.120958 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:10:33.121058765+00:00 stderr F I0813 20:10:33.121044 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:10:33.322137191+00:00 stderr F I0813 20:10:33.321998 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:10:33.322137191+00:00 stderr F I0813 20:10:33.322076 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:10:33.521951420+00:00 stderr F I0813 20:10:33.521861 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:10:33.522072753+00:00 stderr F I0813 20:10:33.522056 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:10:33.721470860+00:00 stderr F I0813 20:10:33.721344 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:10:33.721470860+00:00 stderr F I0813 20:10:33.721428 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:10:33.924454139+00:00 stderr F I0813 20:10:33.924308 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:10:33.924660925+00:00 stderr F I0813 20:10:33.924645 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:10:34.122531938+00:00 stderr F I0813 20:10:34.122431 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:10:34.122590230+00:00 stderr F I0813 20:10:34.122528 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:10:34.318450145+00:00 stderr F I0813 20:10:34.318305 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:10:34.318450145+00:00 stderr F I0813 20:10:34.318389 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:10:34.527260152+00:00 stderr F I0813 20:10:34.527157 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:10:34.528939790+00:00 stderr F I0813 20:10:34.528564 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:10:34.725633460+00:00 stderr F I0813 20:10:34.725198 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:10:34.725633460+00:00 stderr F I0813 20:10:34.725356 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:10:34.925560072+00:00 stderr F I0813 20:10:34.925500 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:10:34.925682115+00:00 stderr F I0813 20:10:34.925667 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:10:35.119538073+00:00 stderr F I0813 20:10:35.119417 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:10:35.119578044+00:00 stderr F I0813 20:10:35.119548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:10:35.318718504+00:00 stderr F I0813 20:10:35.318551 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:10:35.319024753+00:00 stderr F I0813 20:10:35.319001 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:10:35.519099909+00:00 stderr F I0813 20:10:35.518651 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:10:35.519099909+00:00 stderr F I0813 20:10:35.518748 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:10:35.722389858+00:00 stderr F I0813 20:10:35.722324 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:10:35.722567473+00:00 stderr F I0813 20:10:35.722546 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:10:35.921644231+00:00 stderr F I0813 20:10:35.921586 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:10:35.922010341+00:00 stderr F I0813 20:10:35.921740 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:10:36.129045217+00:00 stderr F I0813 20:10:36.128878 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:10:36.129045217+00:00 stderr F I0813 20:10:36.128967 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:10:36.342385344+00:00 stderr F I0813 20:10:36.342256 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:10:36.342385344+00:00 stderr F I0813 20:10:36.342348 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:10:36.519449990+00:00 stderr F I0813 20:10:36.519340 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:10:36.519449990+00:00 stderr F I0813 20:10:36.519424 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:10:36.719878037+00:00 stderr F I0813 20:10:36.719706 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:10:36.719948579+00:00 stderr F I0813 20:10:36.719876 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:10:36.919298924+00:00 stderr F I0813 20:10:36.919195 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:10:36.919298924+00:00 stderr F I0813 20:10:36.919269 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:10:37.118161506+00:00 stderr F I0813 20:10:37.117957 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:10:37.118161506+00:00 stderr F I0813 20:10:37.118039 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:10:37.322144624+00:00 stderr F I0813 20:10:37.322025 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:10:37.322144624+00:00 stderr F I0813 20:10:37.322107 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:10:37.519550883+00:00 stderr F I0813 20:10:37.519429 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:10:37.519550883+00:00 stderr F I0813 20:10:37.519499 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:10:37.718666492+00:00 stderr F I0813 20:10:37.718559 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:10:37.718666492+00:00 stderr F I0813 20:10:37.718648 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:10:37.940935225+00:00 stderr F I0813 20:10:37.938980 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:10:37.940935225+00:00 stderr F I0813 20:10:37.939317 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:10:38.123136988+00:00 stderr F I0813 20:10:38.123011 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:10:38.123136988+00:00 stderr F I0813 20:10:38.123099 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:10:38.327121127+00:00 stderr F I0813 20:10:38.326997 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:10:38.327121127+00:00 stderr F I0813 20:10:38.327105 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:10:38.518325349+00:00 stderr F I0813 20:10:38.518189 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:10:38.518325349+00:00 stderr F I0813 20:10:38.518302 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:10:38.720081113+00:00 stderr F I0813 20:10:38.719936 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:10:38.720081113+00:00 stderr F I0813 20:10:38.720022 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:10:38.924877685+00:00 stderr F I0813 20:10:38.924686 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:10:38.924877685+00:00 stderr F I0813 20:10:38.924830 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:10:39.119301820+00:00 stderr F I0813 20:10:39.119180 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:10:39.119301820+00:00 stderr F I0813 20:10:39.119262 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:10:39.322502676+00:00 stderr F I0813 20:10:39.322362 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:10:39.322502676+00:00 stderr F I0813 20:10:39.322449 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:10:39.522842260+00:00 stderr F I0813 20:10:39.522711 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:10:39.522905202+00:00 stderr F I0813 20:10:39.522864 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:10:39.720545158+00:00 stderr F I0813 20:10:39.720445 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:10:39.720545158+00:00 stderr F I0813 20:10:39.720510 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:10:39.925620748+00:00 stderr F I0813 20:10:39.923488 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:10:39.925620748+00:00 stderr F I0813 20:10:39.923561 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:10:40.118724484+00:00 stderr F I0813 20:10:40.118611 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:10:40.118724484+00:00 stderr F I0813 20:10:40.118691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:10:40.320312704+00:00 stderr F I0813 20:10:40.320183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:10:40.320312704+00:00 stderr F I0813 20:10:40.320273 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:10:40.530647374+00:00 stderr F I0813 20:10:40.530503 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:10:40.530647374+00:00 stderr F I0813 20:10:40.530601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:10:40.719393536+00:00 stderr F I0813 20:10:40.719282 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:10:40.719446957+00:00 stderr F I0813 20:10:40.719386 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:10:40.919968957+00:00 stderr F I0813 20:10:40.919588 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:10:40.919968957+00:00 stderr F I0813 20:10:40.919670 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:10:41.120396652+00:00 stderr F I0813 20:10:41.120317 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:10:41.120438443+00:00 stderr F I0813 20:10:41.120409 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) 
openshift-network-node-identity/network-node-identity 2025-08-13T20:10:41.327983544+00:00 stderr F I0813 20:10:41.327640 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:10:41.328054846+00:00 stderr F I0813 20:10:41.327997 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:10:41.522964074+00:00 stderr F I0813 20:10:41.522755 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:10:41.523020465+00:00 stderr F I0813 20:10:41.522971 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:10:41.725399998+00:00 stderr F I0813 20:10:41.725267 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:10:41.725399998+00:00 stderr F I0813 20:10:41.725342 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:10:41.919321038+00:00 stderr F I0813 20:10:41.919192 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:10:41.919472992+00:00 stderr F I0813 20:10:41.919407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:10:42.120738593+00:00 stderr F I0813 20:10:42.120473 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:10:42.120738593+00:00 stderr F I0813 20:10:42.120550 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:10:42.321723816+00:00 stderr F I0813 20:10:42.321611 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:10:42.321841499+00:00 stderr F I0813 20:10:42.321711 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:10:42.526454565+00:00 stderr F I0813 20:10:42.526331 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:10:42.526454565+00:00 stderr F I0813 20:10:42.526424 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:10:42.719186791+00:00 stderr F I0813 20:10:42.719072 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:10:42.719252073+00:00 stderr F I0813 20:10:42.719180 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:10:42.922417198+00:00 stderr F I0813 20:10:42.922270 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:10:42.939641362+00:00 stderr F I0813 20:10:42.939553 1 log.go:245] Operconfig Controller complete 2025-08-13T20:13:15.311294346+00:00 stderr F I0813 20:13:15.310918 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 
2025-08-13T20:13:42.940568520+00:00 stderr F I0813 20:13:42.940483 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:13:43.374828721+00:00 stderr F I0813 20:13:43.374716 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:13:43.377434225+00:00 stderr F I0813 20:13:43.377324 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:13:43.380157543+00:00 stderr F I0813 20:13:43.380092 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:13:43.380179564+00:00 stderr F I0813 20:13:43.380129 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00074a600 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385539 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385589 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385600 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389128 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389169 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389192 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389200 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389330 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:13:43.403831402+00:00 stderr F I0813 20:13:43.403740 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:13:43.418676848+00:00 stderr F I0813 20:13:43.418573 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:13:43.418676848+00:00 stderr F I0813 20:13:43.418656 1 log.go:245] Starting render phase 2025-08-13T20:13:43.432573196+00:00 stderr F I0813 20:13:43.432482 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469024 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469065 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469136 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:13:43.469218897+00:00 stderr F I0813 20:13:43.469166 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:13:43.487704557+00:00 stderr F I0813 20:13:43.487597 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:13:43.487704557+00:00 stderr F I0813 20:13:43.487653 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:13:43.503889401+00:00 stderr F I0813 20:13:43.503672 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:13:43.520217339+00:00 stderr F I0813 20:13:43.520092 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:13:43.527586020+00:00 stderr F I0813 20:13:43.527422 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:13:43.527586020+00:00 stderr F I0813 20:13:43.527494 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:13:43.536987710+00:00 stderr F I0813 20:13:43.536676 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:13:43.536987710+00:00 stderr F I0813 20:13:43.536759 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:13:43.545637138+00:00 stderr F I0813 20:13:43.545533 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:13:43.545637138+00:00 stderr F I0813 20:13:43.545579 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:13:43.554432040+00:00 stderr F I0813 20:13:43.554328 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:13:43.554432040+00:00 stderr F I0813 20:13:43.554370 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:13:43.563117879+00:00 stderr F I0813 20:13:43.562860 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:13:43.563117879+00:00 stderr F I0813 20:13:43.562923 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:13:43.570672736+00:00 stderr F I0813 20:13:43.570530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:13:43.570672736+00:00 stderr F I0813 20:13:43.570618 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:13:43.578617994+00:00 stderr F I0813 20:13:43.578499 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:13:43.578617994+00:00 stderr F I0813 20:13:43.578605 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:13:43.584363808+00:00 stderr F I0813 20:13:43.584218 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:13:43.584363808+00:00 stderr F I0813 20:13:43.584292 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:13:43.590532865+00:00 stderr F I0813 20:13:43.590429 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:13:43.590532865+00:00 stderr F I0813 20:13:43.590502 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:13:43.615068418+00:00 stderr F I0813 20:13:43.614748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:13:43.615068418+00:00 stderr F I0813 20:13:43.615055 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:13:43.816850564+00:00 stderr F I0813 20:13:43.816620 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:13:43.816850564+00:00 stderr F I0813 20:13:43.816732 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:13:44.045211281+00:00 stderr F I0813 20:13:44.045101 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:13:44.045211281+00:00 stderr F I0813 20:13:44.045173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:13:44.214676170+00:00 stderr F I0813 20:13:44.214560 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:13:44.214676170+00:00 stderr F I0813 20:13:44.214633 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:13:44.414017305+00:00 stderr F I0813 20:13:44.413907 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:13:44.414017305+00:00 stderr F I0813 20:13:44.414004 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:13:44.616854340+00:00 stderr F I0813 20:13:44.616651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:13:44.617054796+00:00 stderr F I0813 20:13:44.616892 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:13:44.816605287+00:00 stderr F I0813 20:13:44.815293 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:13:44.816605287+00:00 stderr F I0813 20:13:44.816217 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/whereabouts-cni 2025-08-13T20:13:45.018822195+00:00 stderr F I0813 20:13:45.018280 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:13:45.018822195+00:00 stderr F I0813 20:13:45.018359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:13:45.216616916+00:00 stderr F I0813 20:13:45.216222 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:13:45.216616916+00:00 stderr F I0813 20:13:45.216293 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:13:45.417322111+00:00 stderr F I0813 20:13:45.417262 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:13:45.417534628+00:00 stderr F I0813 20:13:45.417515 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:13:45.615292018+00:00 stderr F I0813 20:13:45.615154 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:13:45.615292018+00:00 stderr F I0813 20:13:45.615220 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:13:45.814613313+00:00 stderr F I0813 20:13:45.814560 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:13:45.814888881+00:00 stderr F I0813 20:13:45.814867 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:13:46.025141219+00:00 stderr F I0813 20:13:46.024971 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:13:46.025141219+00:00 stderr F I0813 20:13:46.025052 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:13:46.225747440+00:00 stderr F I0813 20:13:46.225614 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:13:46.225747440+00:00 stderr F I0813 20:13:46.225690 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:13:46.414999516+00:00 stderr F I0813 20:13:46.414885 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:13:46.415107769+00:00 stderr F I0813 20:13:46.415093 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:13:46.614743393+00:00 stderr F I0813 20:13:46.614643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:13:46.614913388+00:00 stderr F I0813 20:13:46.614882 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:13:46.815699004+00:00 stderr F I0813 20:13:46.815560 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:13:46.815699004+00:00 stderr F I0813 20:13:46.815642 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:13:47.022632887+00:00 stderr F I0813 20:13:47.022324 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:13:47.022632887+00:00 stderr F I0813 20:13:47.022434 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:13:47.218154962+00:00 stderr F I0813 20:13:47.217983 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:13:47.218154962+00:00 stderr F I0813 20:13:47.218104 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:13:47.420447001+00:00 stderr F I0813 20:13:47.420287 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:13:47.420447001+00:00 stderr F I0813 20:13:47.420392 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:13:47.615309948+00:00 stderr F I0813 20:13:47.615199 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:47.615309948+00:00 stderr F I0813 20:13:47.615294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:13:47.815000913+00:00 stderr F I0813 20:13:47.814648 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:47.815000913+00:00 stderr F I0813 20:13:47.814727 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:13:48.018637922+00:00 stderr F I0813 20:13:48.018534 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:13:48.018637922+00:00 stderr F I0813 20:13:48.018579 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:13:48.217180944+00:00 stderr F I0813 20:13:48.216324 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:13:48.217180944+00:00 stderr F I0813 20:13:48.216413 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:13:48.414719087+00:00 stderr F I0813 20:13:48.414622 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:13:48.414719087+00:00 stderr F I0813 20:13:48.414691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:13:48.614108514+00:00 stderr F I0813 20:13:48.613413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:13:48.614149025+00:00 stderr F I0813 20:13:48.614131 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:13:48.817015652+00:00 stderr F I0813 20:13:48.816916 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:13:48.817055383+00:00 stderr F I0813 20:13:48.817009 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:13:49.019493637+00:00 stderr F I0813 20:13:49.019344 1 log.go:245] 
Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:13:49.019493637+00:00 stderr F I0813 20:13:49.019461 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:13:49.217746911+00:00 stderr F I0813 20:13:49.216564 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:13:49.217746911+00:00 stderr F I0813 20:13:49.216637 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:13:49.443239016+00:00 stderr F I0813 20:13:49.442371 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:49.443239016+00:00 stderr F I0813 20:13:49.443043 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:13:49.618431979+00:00 stderr F I0813 20:13:49.617657 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:49.618431979+00:00 stderr F I0813 20:13:49.618405 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:13:49.816567380+00:00 stderr F I0813 20:13:49.816398 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:13:49.816667963+00:00 stderr F I0813 20:13:49.816646 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:13:50.014527865+00:00 stderr F I0813 20:13:50.014388 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:13:50.014527865+00:00 stderr F I0813 20:13:50.014460 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:13:50.225897456+00:00 stderr F I0813 20:13:50.225706 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:13:50.225897456+00:00 stderr F I0813 20:13:50.225863 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:13:50.420505806+00:00 stderr F I0813 20:13:50.420379 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:13:50.420505806+00:00 stderr F I0813 20:13:50.420455 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:13:50.625005169+00:00 stderr F I0813 20:13:50.624863 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:13:50.625005169+00:00 stderr F I0813 20:13:50.624933 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:13:50.824855698+00:00 stderr F I0813 20:13:50.824663 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:13:50.824855698+00:00 stderr F I0813 20:13:50.824758 1 log.go:245] reconciling 
(apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:13:51.017863611+00:00 stderr F I0813 20:13:51.017673 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:13:51.017863611+00:00 stderr F I0813 20:13:51.017749 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:13:51.289760177+00:00 stderr F I0813 20:13:51.287445 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:13:51.289760177+00:00 stderr F I0813 20:13:51.287519 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:13:51.453939414+00:00 stderr F I0813 20:13:51.453768 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:13:51.453939414+00:00 stderr F I0813 20:13:51.453894 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:13:51.615114005+00:00 stderr F I0813 20:13:51.615004 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:13:51.615114005+00:00 stderr F I0813 20:13:51.615074 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:13:51.817841177+00:00 stderr F I0813 20:13:51.817663 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:13:51.817841177+00:00 stderr F I0813 20:13:51.817744 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:13:52.016338579+00:00 stderr F I0813 20:13:52.016238 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:13:52.016338579+00:00 stderr F I0813 20:13:52.016312 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:13:52.216276261+00:00 stderr F I0813 20:13:52.216221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:13:52.216375604+00:00 stderr F I0813 20:13:52.216361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:13:52.414331500+00:00 stderr F I0813 20:13:52.414205 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:13:52.414331500+00:00 stderr F I0813 20:13:52.414278 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:13:52.615858418+00:00 stderr F I0813 20:13:52.615651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 
2025-08-13T20:13:52.615858418+00:00 stderr F I0813 20:13:52.615720 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:13:52.816060237+00:00 stderr F I0813 20:13:52.815899 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:13:52.816060237+00:00 stderr F I0813 20:13:52.816013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:13:53.019017056+00:00 stderr F I0813 20:13:53.018889 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:13:53.019066518+00:00 stderr F I0813 20:13:53.019021 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:13:53.216182769+00:00 stderr F I0813 20:13:53.216074 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:13:53.216182769+00:00 stderr F I0813 20:13:53.216144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:53.418753757+00:00 stderr F I0813 20:13:53.418643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:53.418753757+00:00 stderr F I0813 20:13:53.418731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:53.625225817+00:00 stderr F I0813 20:13:53.625005 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:53.625225817+00:00 stderr F I0813 20:13:53.625073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:53.815445371+00:00 stderr F I0813 20:13:53.815317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:53.815445371+00:00 stderr F I0813 20:13:53.815387 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:54.014515318+00:00 stderr F I0813 20:13:54.014401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:54.014515318+00:00 stderr F I0813 20:13:54.014468 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:13:54.231292933+00:00 stderr F I0813 20:13:54.231165 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:13:54.231500299+00:00 stderr F I0813 20:13:54.231279 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:13:54.418158130+00:00 stderr F I0813 20:13:54.417727 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was 
successful 2025-08-13T20:13:54.418158130+00:00 stderr F I0813 20:13:54.417841 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:13:54.618489853+00:00 stderr F I0813 20:13:54.618262 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:13:54.618489853+00:00 stderr F I0813 20:13:54.618344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:13:54.815864362+00:00 stderr F I0813 20:13:54.815675 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:13:54.815864362+00:00 stderr F I0813 20:13:54.815760 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:13:55.017643068+00:00 stderr F I0813 20:13:55.017533 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:13:55.017643068+00:00 stderr F I0813 20:13:55.017618 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:13:55.221649977+00:00 stderr F I0813 20:13:55.221007 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:13:55.221649977+00:00 stderr F I0813 20:13:55.221077 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:13:55.420225810+00:00 stderr F I0813 20:13:55.419610 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:13:55.420225810+00:00 stderr F I0813 20:13:55.419684 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:13:55.616715154+00:00 stderr F I0813 20:13:55.616622 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:13:55.616715154+00:00 stderr F I0813 20:13:55.616674 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:13:55.819831198+00:00 stderr F I0813 20:13:55.819646 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:13:55.820014403+00:00 stderr F I0813 20:13:55.819929 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:13:56.020237654+00:00 stderr F I0813 20:13:56.020119 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:13:56.020237654+00:00 stderr F I0813 20:13:56.020193 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:13:56.218649183+00:00 stderr F I0813 20:13:56.217976 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:13:56.218649183+00:00 stderr F I0813 20:13:56.218612 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 
2025-08-13T20:13:56.443436028+00:00 stderr F I0813 20:13:56.442659 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:13:56.443436028+00:00 stderr F I0813 20:13:56.442737 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:13:56.620633618+00:00 stderr F I0813 20:13:56.620546 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:13:56.620681910+00:00 stderr F I0813 20:13:56.620650 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:13:56.817469731+00:00 stderr F I0813 20:13:56.817347 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:13:56.817704398+00:00 stderr F I0813 20:13:56.817688 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:13:57.016028065+00:00 stderr F I0813 20:13:57.015855 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:13:57.016028065+00:00 stderr F I0813 20:13:57.015967 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:13:57.220501447+00:00 stderr F I0813 20:13:57.219443 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:13:57.220501447+00:00 stderr F I0813 20:13:57.220307 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:13:57.424032872+00:00 stderr F I0813 20:13:57.423901 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:13:57.424093534+00:00 stderr F I0813 20:13:57.424028 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:13:57.637005778+00:00 stderr F I0813 20:13:57.636883 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:13:57.637005778+00:00 stderr F I0813 20:13:57.636933 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:13:57.815596899+00:00 stderr F I0813 20:13:57.815465 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:13:57.815596899+00:00 stderr F I0813 20:13:57.815547 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:13:58.015258352+00:00 stderr F I0813 20:13:58.015162 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:13:58.015258352+00:00 stderr F I0813 20:13:58.015233 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:13:58.221014882+00:00 stderr F I0813 20:13:58.220477 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:13:58.221014882+00:00 stderr F I0813 20:13:58.220645 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 
2025-08-13T20:13:58.416857407+00:00 stderr F I0813 20:13:58.416681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:13:58.416857407+00:00 stderr F I0813 20:13:58.416762 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:13:58.614262667+00:00 stderr F I0813 20:13:58.614088 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:13:58.614262667+00:00 stderr F I0813 20:13:58.614173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:13:58.816146785+00:00 stderr F I0813 20:13:58.816089 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:13:58.816263938+00:00 stderr F I0813 20:13:58.816250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:13:59.015859581+00:00 stderr F I0813 20:13:59.015758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:13:59.015904282+00:00 stderr F I0813 20:13:59.015867 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:13:59.226871771+00:00 stderr F I0813 20:13:59.226666 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:13:59.226871771+00:00 stderr F I0813 20:13:59.226742 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:13:59.419289098+00:00 stderr F I0813 20:13:59.419152 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:13:59.419289098+00:00 stderr F I0813 20:13:59.419230 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:13:59.619827387+00:00 stderr F I0813 20:13:59.619650 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:13:59.619827387+00:00 stderr F I0813 20:13:59.619724 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:13:59.815385904+00:00 stderr F I0813 20:13:59.815249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:13:59.815385904+00:00 stderr F I0813 20:13:59.815325 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:14:00.015931724+00:00 stderr F I0813 20:14:00.015717 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:14:00.015931724+00:00 stderr F I0813 20:14:00.015885 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:14:00.234377297+00:00 stderr F I0813 20:14:00.234281 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 
2025-08-13T20:14:00.234377297+00:00 stderr F I0813 20:14:00.234351 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:14:00.414505691+00:00 stderr F I0813 20:14:00.414410 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:14:00.414505691+00:00 stderr F I0813 20:14:00.414484 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:14:00.615417922+00:00 stderr F I0813 20:14:00.615262 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:14:00.615417922+00:00 stderr F I0813 20:14:00.615346 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:14:00.815018555+00:00 stderr F I0813 20:14:00.814875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:14:00.815018555+00:00 stderr F I0813 20:14:00.814972 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:14:01.019400795+00:00 stderr F I0813 20:14:01.018477 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:14:01.019612631+00:00 stderr F I0813 20:14:01.019521 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:14:01.217608247+00:00 stderr F I0813 20:14:01.216765 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:14:01.217608247+00:00 stderr F I0813 20:14:01.216920 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:14:01.415330817+00:00 stderr F I0813 20:14:01.415212 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:14:01.415330817+00:00 stderr F I0813 20:14:01.415284 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:14:01.617152582+00:00 stderr F I0813 20:14:01.617097 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:14:01.617282986+00:00 stderr F I0813 20:14:01.617268 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:14:01.814648095+00:00 stderr F I0813 20:14:01.814586 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:14:01.814864021+00:00 stderr F I0813 20:14:01.814760 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:14:02.014336540+00:00 stderr F I0813 20:14:02.014160 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:14:02.015155753+00:00 stderr F I0813 20:14:02.015102 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:14:02.215230920+00:00 stderr F I0813 20:14:02.214761 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:14:02.215286621+00:00 stderr F I0813 20:14:02.215233 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:14:02.415911973+00:00 stderr F I0813 20:14:02.415713 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:14:02.416126300+00:00 stderr F I0813 20:14:02.416052 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:14:02.615518257+00:00 stderr F I0813 20:14:02.615394 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:14:02.615518257+00:00 stderr F I0813 20:14:02.615476 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:14:02.838552571+00:00 stderr F I0813 20:14:02.838390 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:14:02.838617163+00:00 stderr F I0813 20:14:02.838584 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:14:03.027283372+00:00 stderr F I0813 20:14:03.027133 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:14:03.027283372+00:00 stderr F I0813 20:14:03.027242 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:14:03.222765997+00:00 stderr F I0813 20:14:03.222560 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:14:03.222765997+00:00 stderr F I0813 20:14:03.222681 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:14:03.420601300+00:00 stderr F I0813 20:14:03.420453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:14:03.421113364+00:00 stderr F I0813 20:14:03.421057 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:14:03.617066523+00:00 stderr F I0813 20:14:03.616967 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:14:03.617121434+00:00 stderr F I0813 20:14:03.617064 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:14:03.818043865+00:00 stderr F I0813 20:14:03.817875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:14:03.818043865+00:00 stderr F I0813 20:14:03.817982 1 log.go:245] reconciling (/v1, Kind=ConfigMap) 
openshift-network-operator/iptables-alerter-script 2025-08-13T20:14:04.021085856+00:00 stderr F I0813 20:14:04.020926 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:14:04.021085856+00:00 stderr F I0813 20:14:04.021019 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:14:04.222729668+00:00 stderr F I0813 20:14:04.222669 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:14:04.245133170+00:00 stderr F I0813 20:14:04.245074 1 log.go:245] Operconfig Controller complete 2025-08-13T20:14:46.226385533+00:00 stderr F I0813 20:14:46.226016 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.236012789+00:00 stderr F I0813 20:14:46.235859 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.243878934+00:00 stderr F I0813 20:14:46.243702 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.255624171+00:00 stderr F I0813 20:14:46.255576 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.263317642+00:00 stderr F I0813 20:14:46.263194 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.277946161+00:00 stderr F I0813 20:14:46.276878 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.290137251+00:00 stderr F I0813 20:14:46.289856 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.301030933+00:00 stderr F I0813 20:14:46.300886 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:14:46.311577085+00:00 stderr F I0813 20:14:46.311445 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:15:16.693640856+00:00 stderr F I0813 20:15:16.693425 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:15:16.694565642+00:00 stderr F I0813 20:15:16.694470 1 log.go:245] successful reconciliation 2025-08-13T20:15:18.098319579+00:00 stderr F I0813 20:15:18.093704 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:15:18.099662928+00:00 stderr F I0813 20:15:18.099572 1 log.go:245] successful reconciliation 2025-08-13T20:15:19.293654066+00:00 stderr F I0813 20:15:19.291996 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:15:19.293654066+00:00 stderr F I0813 20:15:19.292752 1 log.go:245] successful reconciliation 
2025-08-13T20:16:15.330726733+00:00 stderr F I0813 20:16:15.330619 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:17:04.248934446+00:00 stderr F I0813 20:17:04.247105 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:17:04.923761548+00:00 stderr F I0813 20:17:04.923308 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:17:04.928896624+00:00 stderr F I0813 20:17:04.927616 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:17:04.933402173+00:00 stderr F I0813 20:17:04.933366 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:17:04.933532467+00:00 stderr F I0813 20:17:04.933442 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0038ed480 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948374 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948451 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948460 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:17:04.953758454+00:00 stderr F I0813 20:17:04.953665 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:17:04.953864467+00:00 stderr F I0813 20:17:04.953849 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:17:04.953902188+00:00 stderr F I0813 20:17:04.953887 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:17:04.953931809+00:00 stderr F I0813 20:17:04.953920 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:17:04.954125055+00:00 stderr F I0813 20:17:04.954104 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:17:04.975956108+00:00 stderr F I0813 20:17:04.975644 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:17:05.003896586+00:00 stderr F I0813 20:17:05.002820 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:17:05.003896586+00:00 stderr F I0813 20:17:05.002885 1 log.go:245] Starting render phase 2025-08-13T20:17:05.018018340+00:00 stderr F I0813 20:17:05.016832 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:17:05.095545084+00:00 stderr F I0813 20:17:05.095477 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:17:05.095627966+00:00 stderr F I0813 20:17:05.095611 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:17:05.095709958+00:00 stderr F I0813 20:17:05.095682 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:17:05.095826582+00:00 stderr F I0813 20:17:05.095761 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:17:05.114408032+00:00 stderr F I0813 20:17:05.114126 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:17:05.114408032+00:00 stderr F I0813 20:17:05.114169 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:17:05.128207176+00:00 stderr F I0813 20:17:05.126949 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:17:05.148047803+00:00 stderr F I0813 20:17:05.146815 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:17:05.155462774+00:00 stderr F I0813 20:17:05.155385 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:17:05.155462774+00:00 stderr F I0813 20:17:05.155434 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:17:05.165527762+00:00 stderr F I0813 20:17:05.165441 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:17:05.165630375+00:00 stderr F I0813 20:17:05.165615 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:17:05.189557328+00:00 stderr F I0813 20:17:05.187764 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:17:05.189557328+00:00 stderr F I0813 20:17:05.187892 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:17:05.199769360+00:00 stderr F I0813 20:17:05.198740 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:17:05.200000276+00:00 stderr F I0813 20:17:05.199944 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:17:05.215723615+00:00 stderr F I0813 20:17:05.215615 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:17:05.216057955+00:00 stderr F I0813 20:17:05.216006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:17:05.229190230+00:00 stderr F I0813 20:17:05.229001 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:17:05.231911968+00:00 stderr F I0813 20:17:05.231476 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:17:05.249879071+00:00 stderr F I0813 20:17:05.247699 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:17:05.249879071+00:00 stderr F I0813 20:17:05.247765 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:17:05.262497681+00:00 stderr F I0813 20:17:05.262441 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:17:05.262631155+00:00 stderr F I0813 20:17:05.262611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:17:05.269325766+00:00 stderr F I0813 20:17:05.269269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:17:05.269427789+00:00 stderr F I0813 20:17:05.269414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:17:05.278072156+00:00 stderr F I0813 20:17:05.276175 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:17:05.278072156+00:00 stderr F I0813 20:17:05.276242 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:17:05.396580740+00:00 stderr F I0813 20:17:05.396502 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:17:05.396580740+00:00 stderr F I0813 20:17:05.396569 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:17:05.600000119+00:00 stderr F I0813 20:17:05.599714 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:17:05.600124203+00:00 stderr F I0813 20:17:05.600107 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:17:05.800443824+00:00 stderr F I0813 20:17:05.799029 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:17:05.800443824+00:00 stderr F I0813 20:17:05.799429 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:17:05.994423033+00:00 stderr F I0813 20:17:05.994317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:17:05.995766151+00:00 stderr F I0813 20:17:05.994432 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:17:06.200320373+00:00 stderr F I0813 20:17:06.199748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:17:06.200430726+00:00 stderr F I0813 20:17:06.200412 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:17:06.397920626+00:00 stderr F I0813 20:17:06.396137 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:17:06.397920626+00:00 stderr F I0813 20:17:06.396208 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/whereabouts-cni 2025-08-13T20:17:06.597172416+00:00 stderr F I0813 20:17:06.595172 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:17:06.597172416+00:00 stderr F I0813 20:17:06.595253 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:17:06.961050417+00:00 stderr F I0813 20:17:06.960676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:17:06.961753458+00:00 stderr F I0813 20:17:06.961479 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:17:07.025874209+00:00 stderr F I0813 20:17:07.025442 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:17:07.025874209+00:00 stderr F I0813 20:17:07.025856 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:17:07.600142918+00:00 stderr F I0813 20:17:07.596664 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:17:07.600142918+00:00 stderr F I0813 20:17:07.596734 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:17:08.303768101+00:00 stderr F I0813 20:17:08.303595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:17:08.303768101+00:00 stderr F I0813 20:17:08.303664 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:17:09.114260336+00:00 stderr F I0813 20:17:09.114146 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:17:09.114260336+00:00 stderr F I0813 20:17:09.114227 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:17:09.141484114+00:00 stderr F I0813 20:17:09.141308 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:17:09.141484114+00:00 stderr F I0813 20:17:09.141358 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:17:09.151986274+00:00 stderr F I0813 20:17:09.149873 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:17:09.151986274+00:00 stderr F I0813 20:17:09.150046 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:17:09.436568531+00:00 stderr F I0813 20:17:09.436517 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:17:09.436715335+00:00 stderr F I0813 20:17:09.436697 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:17:09.450451017+00:00 stderr F I0813 20:17:09.450399 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:17:09.450556220+00:00 stderr F I0813 20:17:09.450539 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:17:09.467747911+00:00 stderr F I0813 20:17:09.467593 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:17:09.467747911+00:00 stderr F I0813 20:17:09.467664 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:17:09.491358585+00:00 stderr F I0813 20:17:09.490353 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:17:09.492730374+00:00 stderr F I0813 20:17:09.492588 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:17:09.513332593+00:00 stderr F I0813 20:17:09.513225 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:17:09.513332593+00:00 stderr F I0813 20:17:09.513291 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:17:09.521854906+00:00 stderr F I0813 20:17:09.519131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:09.521854906+00:00 stderr F I0813 20:17:09.519202 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:17:09.525744807+00:00 stderr F I0813 20:17:09.525647 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:09.525744807+00:00 stderr F I0813 20:17:09.525719 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:17:09.623143019+00:00 stderr F I0813 20:17:09.622999 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:17:09.623143019+00:00 stderr F I0813 20:17:09.623069 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:17:09.809079268+00:00 stderr F I0813 20:17:09.808507 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:17:09.809079268+00:00 stderr F I0813 20:17:09.808594 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:17:10.025402256+00:00 stderr F I0813 20:17:10.025194 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:17:10.025402256+00:00 stderr F I0813 20:17:10.025275 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:17:10.209152154+00:00 stderr F I0813 20:17:10.209003 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:17:10.209152154+00:00 stderr F I0813 20:17:10.209075 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:17:10.399028096+00:00 stderr F I0813 20:17:10.398950 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:17:10.399139459+00:00 stderr F I0813 20:17:10.399124 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:17:10.624912547+00:00 stderr F I0813 20:17:10.624513 1 log.go:245] 
Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:17:10.625088892+00:00 stderr F I0813 20:17:10.625068 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:17:10.819764451+00:00 stderr F I0813 20:17:10.819627 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:17:10.819845573+00:00 stderr F I0813 20:17:10.819766 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:17:11.011765324+00:00 stderr F I0813 20:17:11.011626 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:11.011765324+00:00 stderr F I0813 20:17:11.011698 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:17:11.206567386+00:00 stderr F I0813 20:17:11.206413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:17:11.206567386+00:00 stderr F I0813 20:17:11.206509 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:17:11.409652566+00:00 stderr F I0813 20:17:11.409552 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:17:11.409652566+00:00 stderr F I0813 20:17:11.409636 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:17:11.599700713+00:00 stderr F I0813 20:17:11.599627 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:17:11.599743204+00:00 stderr F I0813 20:17:11.599697 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:17:11.828678432+00:00 stderr F I0813 20:17:11.828561 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:17:11.828678432+00:00 stderr F I0813 20:17:11.828666 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:17:12.007482418+00:00 stderr F I0813 20:17:12.007375 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:17:12.007482418+00:00 stderr F I0813 20:17:12.007460 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:17:12.246413281+00:00 stderr F I0813 20:17:12.245524 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:17:12.246413281+00:00 stderr F I0813 20:17:12.245593 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:17:12.421217743+00:00 stderr F I0813 20:17:12.421129 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:17:12.421217743+00:00 stderr F I0813 20:17:12.421201 1 log.go:245] reconciling 
(apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:17:12.615136101+00:00 stderr F I0813 20:17:12.615078 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:17:12.615335817+00:00 stderr F I0813 20:17:12.615316 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:17:12.841163046+00:00 stderr F I0813 20:17:12.841047 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:17:12.841163046+00:00 stderr F I0813 20:17:12.841126 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:17:13.041407154+00:00 stderr F I0813 20:17:13.041305 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:17:13.041407154+00:00 stderr F I0813 20:17:13.041386 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:17:13.195388202+00:00 stderr F I0813 20:17:13.195274 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:17:13.195388202+00:00 stderr F I0813 20:17:13.195345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:17:13.397167614+00:00 stderr F I0813 20:17:13.395151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:17:13.397167614+00:00 stderr F I0813 20:17:13.395395 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:17:14.550767769+00:00 stderr F I0813 20:17:14.546523 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:17:14.550767769+00:00 stderr F I0813 20:17:14.546616 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:17:14.751016676+00:00 stderr F I0813 20:17:14.750921 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:17:14.751136970+00:00 stderr F I0813 20:17:14.751116 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:17:15.457592522+00:00 stderr F I0813 20:17:15.457543 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:17:15.458085836+00:00 stderr F I0813 20:17:15.457691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:17:16.081516709+00:00 stderr F I0813 20:17:16.080115 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 
2025-08-13T20:17:16.081516709+00:00 stderr F I0813 20:17:16.080196 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:17:18.510213935+00:00 stderr F I0813 20:17:18.509540 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:17:18.510213935+00:00 stderr F I0813 20:17:18.509656 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:17:20.689240763+00:00 stderr F I0813 20:17:20.687479 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:17:20.689240763+00:00 stderr F I0813 20:17:20.687544 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.815747 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.815890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.827718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.828089 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:21.110686038+00:00 stderr F I0813 20:17:21.110587 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.110732109+00:00 stderr F I0813 20:17:21.110719 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:21.455195706+00:00 stderr F I0813 20:17:21.455105 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.455195706+00:00 stderr F I0813 20:17:21.455173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:17:21.468591539+00:00 stderr F I0813 20:17:21.468122 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.468591539+00:00 stderr F I0813 20:17:21.468188 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:17:21.479685966+00:00 stderr F I0813 20:17:21.477249 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:17:21.479685966+00:00 stderr F I0813 20:17:21.477320 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:17:21.609719289+00:00 stderr F I0813 20:17:21.609615 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was 
successful 2025-08-13T20:17:21.609759620+00:00 stderr F I0813 20:17:21.609712 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:17:21.643704690+00:00 stderr F I0813 20:17:21.642515 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:17:21.643704690+00:00 stderr F I0813 20:17:21.642611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:17:22.156243025+00:00 stderr F I0813 20:17:22.155518 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:17:22.163195764+00:00 stderr F I0813 20:17:22.162545 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:17:22.175694211+00:00 stderr F I0813 20:17:22.174350 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:17:22.175940878+00:00 stderr F I0813 20:17:22.175917 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:17:24.354742039+00:00 stderr F I0813 20:17:24.354686 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:17:24.354908564+00:00 stderr F I0813 20:17:24.354893 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:17:25.198849754+00:00 stderr F I0813 20:17:25.196380 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:17:25.198849754+00:00 stderr F I0813 20:17:25.196470 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:17:25.367419628+00:00 stderr F I0813 20:17:25.367017 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:17:25.367419628+00:00 stderr F I0813 20:17:25.367087 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:17:25.611864598+00:00 stderr F I0813 20:17:25.610414 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:17:25.611864598+00:00 stderr F I0813 20:17:25.610533 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:17:25.865057108+00:00 stderr F I0813 20:17:25.864496 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:17:25.865057108+00:00 stderr F I0813 20:17:25.864581 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:17:25.964360494+00:00 stderr F I0813 20:17:25.964059 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:17:25.964360494+00:00 stderr F I0813 20:17:25.964140 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 
2025-08-13T20:17:25.976997645+00:00 stderr F I0813 20:17:25.976861 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:17:25.976997645+00:00 stderr F I0813 20:17:25.976925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:17:25.993494326+00:00 stderr F I0813 20:17:25.993267 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:17:25.993494326+00:00 stderr F I0813 20:17:25.993333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:17:26.007510616+00:00 stderr F I0813 20:17:26.006831 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:17:26.007510616+00:00 stderr F I0813 20:17:26.006899 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:17:26.015620538+00:00 stderr F I0813 20:17:26.015577 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:17:26.015652549+00:00 stderr F I0813 20:17:26.015626 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:17:26.028443164+00:00 stderr F I0813 20:17:26.028273 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:17:26.028443164+00:00 stderr F I0813 20:17:26.028365 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:17:26.156008707+00:00 stderr F I0813 20:17:26.155864 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:17:26.156008707+00:00 stderr F I0813 20:17:26.155938 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:17:26.192566731+00:00 stderr F I0813 20:17:26.192221 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:17:26.192566731+00:00 stderr F I0813 20:17:26.192307 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:17:26.202242798+00:00 stderr F I0813 20:17:26.202193 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:17:26.202305549+00:00 stderr F I0813 20:17:26.202262 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:17:26.206278563+00:00 stderr F I0813 20:17:26.206218 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:17:26.206525590+00:00 stderr F I0813 20:17:26.206505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:17:26.210906375+00:00 stderr F I0813 20:17:26.210881 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:17:26.211096360+00:00 stderr F I0813 20:17:26.211077 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 
2025-08-13T20:17:26.272601667+00:00 stderr F I0813 20:17:26.272547 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:17:26.272877395+00:00 stderr F I0813 20:17:26.272858 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:17:26.497129599+00:00 stderr F I0813 20:17:26.497077 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:17:26.497238062+00:00 stderr F I0813 20:17:26.497223 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:17:26.671165119+00:00 stderr F I0813 20:17:26.671060 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:17:26.671215510+00:00 stderr F I0813 20:17:26.671180 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:17:26.878282874+00:00 stderr F I0813 20:17:26.876824 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:17:26.878282874+00:00 stderr F I0813 20:17:26.876943 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:17:27.078192682+00:00 stderr F I0813 20:17:27.077948 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:17:27.078192682+00:00 stderr F I0813 20:17:27.078042 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:17:27.281010014+00:00 stderr F I0813 20:17:27.280909 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:17:27.281151648+00:00 stderr F I0813 20:17:27.281132 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:17:27.475283332+00:00 stderr F I0813 20:17:27.475194 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:17:27.475283332+00:00 stderr F I0813 20:17:27.475264 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:17:27.673828522+00:00 stderr F I0813 20:17:27.673727 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:17:27.673934355+00:00 stderr F I0813 20:17:27.673919 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:17:27.882406239+00:00 stderr F I0813 20:17:27.881724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:17:27.882453420+00:00 stderr F I0813 20:17:27.882413 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:17:28.089025749+00:00 stderr F I0813 20:17:28.088921 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 
2025-08-13T20:17:28.089025749+00:00 stderr F I0813 20:17:28.089011 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:17:28.316000211+00:00 stderr F I0813 20:17:28.312312 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:17:28.316000211+00:00 stderr F I0813 20:17:28.312405 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:17:28.474872858+00:00 stderr F I0813 20:17:28.474171 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:17:28.474872858+00:00 stderr F I0813 20:17:28.474241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:17:28.677523755+00:00 stderr F I0813 20:17:28.677094 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:17:28.677523755+00:00 stderr F I0813 20:17:28.677162 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:17:28.878186476+00:00 stderr F I0813 20:17:28.877952 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:17:28.878186476+00:00 stderr F I0813 20:17:28.878077 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:17:29.074164322+00:00 stderr F I0813 20:17:29.072985 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:17:29.074345078+00:00 stderr F I0813 20:17:29.074323 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:17:29.810163930+00:00 stderr F I0813 20:17:29.808755 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:17:29.810163930+00:00 stderr F I0813 20:17:29.808885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:17:29.979875406+00:00 stderr F I0813 20:17:29.979530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:17:29.980053091+00:00 stderr F I0813 20:17:29.980026 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:17:29.987354000+00:00 stderr F I0813 20:17:29.987307 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:17:29.987504534+00:00 stderr F I0813 20:17:29.987452 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:17:30.003726477+00:00 stderr F I0813 20:17:30.003608 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:17:30.003848001+00:00 stderr F I0813 20:17:30.003737 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:17:30.081601471+00:00 stderr F I0813 20:17:30.081307 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:17:30.081601471+00:00 stderr F I0813 20:17:30.081480 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:17:30.285579766+00:00 stderr F I0813 20:17:30.285454 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:17:30.285700750+00:00 stderr F I0813 20:17:30.285616 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:17:30.476206270+00:00 stderr F I0813 20:17:30.474574 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:17:30.476206270+00:00 stderr F I0813 20:17:30.474644 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:17:30.681186174+00:00 stderr F I0813 20:17:30.679527 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:17:30.681186174+00:00 stderr F I0813 20:17:30.679616 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:17:30.899633072+00:00 stderr F I0813 20:17:30.898764 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:17:30.899633072+00:00 stderr F I0813 20:17:30.898896 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:17:31.086475278+00:00 stderr F I0813 20:17:31.086416 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:17:31.086651643+00:00 stderr F I0813 20:17:31.086601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:17:31.278907913+00:00 stderr F I0813 20:17:31.276175 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:17:31.278907913+00:00 stderr F I0813 20:17:31.276245 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:17:31.489349833+00:00 stderr F I0813 20:17:31.488684 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:17:31.489349833+00:00 stderr F I0813 20:17:31.489295 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:17:31.673177362+00:00 stderr F I0813 20:17:31.672490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:17:31.673177362+00:00 stderr F I0813 20:17:31.672744 1 log.go:245] reconciling (/v1, Kind=ConfigMap) 
openshift-network-operator/iptables-alerter-script 2025-08-13T20:17:32.992174549+00:00 stderr F I0813 20:17:32.991855 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:17:32.992174549+00:00 stderr F I0813 20:17:32.991920 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:17:34.146403610+00:00 stderr F I0813 20:17:34.141673 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:17:34.418636985+00:00 stderr F I0813 20:17:34.418051 1 log.go:245] Operconfig Controller complete 2025-08-13T20:19:15.390012368+00:00 stderr F I0813 20:19:15.389744 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:20:15.101084387+00:00 stderr F I0813 20:20:15.100748 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.114815029+00:00 stderr F I0813 20:20:15.114695 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.125764452+00:00 stderr F I0813 20:20:15.125638 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.137224420+00:00 stderr F I0813 20:20:15.137120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.152676501+00:00 stderr F I0813 20:20:15.152593 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.164924782+00:00 stderr F I0813 20:20:15.164672 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.176847732+00:00 stderr F I0813 20:20:15.176734 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.187516697+00:00 stderr F I0813 20:20:15.187416 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.198853051+00:00 stderr F I0813 20:20:15.198680 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.220993194+00:00 stderr F I0813 20:20:15.220128 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.300436784+00:00 stderr F I0813 20:20:15.300324 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.503327633+00:00 stderr F I0813 20:20:15.503190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 
2025-08-13T20:20:15.707100297+00:00 stderr F I0813 20:20:15.706874 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.899922868+00:00 stderr F I0813 20:20:15.899841 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.109863998+00:00 stderr F I0813 20:20:16.109035 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.300858926+00:00 stderr F I0813 20:20:16.300648 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.502383996+00:00 stderr F I0813 20:20:16.502267 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.700403735+00:00 stderr F I0813 20:20:16.700085 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.709526806+00:00 stderr F I0813 20:20:16.709440 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:20:16.722279010+00:00 stderr F I0813 20:20:16.722178 1 log.go:245] successful reconciliation 2025-08-13T20:20:16.903428678+00:00 stderr F I0813 20:20:16.903340 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.101131458+00:00 stderr F I0813 20:20:17.101067 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.302029259+00:00 stderr F I0813 20:20:17.301849 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.506245175+00:00 stderr F I0813 20:20:17.506137 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.698525250+00:00 stderr F I0813 20:20:17.698399 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.901593583+00:00 stderr F I0813 20:20:17.901476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:20:18.098873941+00:00 stderr F I0813 20:20:18.098756 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:20:18.112693666+00:00 stderr F I0813 20:20:18.112663 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:20:18.113472659+00:00 stderr F I0813 20:20:18.113449 1 log.go:245] successful reconciliation 2025-08-13T20:20:18.300057921+00:00 stderr F I0813 20:20:18.299926 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:18.499598024+00:00 stderr F I0813 20:20:18.499481 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:19.309851151+00:00 stderr F I0813 20:20:19.309652 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:20:19.318428196+00:00 stderr F I0813 20:20:19.318244 1 log.go:245] successful reconciliation 2025-08-13T20:20:34.428447821+00:00 stderr F I0813 20:20:34.419477 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:20:34.851948604+00:00 stderr F I0813 20:20:34.851346 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:20:34.855412533+00:00 stderr F I0813 20:20:34.855289 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:20:34.858759729+00:00 stderr F I0813 20:20:34.858709 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:20:34.858873532+00:00 stderr F I0813 20:20:34.858750 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000ff3900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866657 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866693 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866702 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:20:34.870574977+00:00 stderr F I0813 20:20:34.870536 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870556 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870577 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870582 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:20:34.873440519+00:00 stderr F I0813 20:20:34.873276 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:20:34.900480641+00:00 stderr F I0813 20:20:34.900355 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:20:34.919302469+00:00 stderr F I0813 20:20:34.919179 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:20:34.919409652+00:00 stderr F I0813 20:20:34.919351 1 log.go:245] Starting render phase 
2025-08-13T20:20:34.938492368+00:00 stderr F I0813 20:20:34.938343 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990139 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990184 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990234 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990288 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:20:35.004262808+00:00 stderr F I0813 20:20:35.004169 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:20:35.004262808+00:00 stderr F I0813 20:20:35.004218 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:20:35.017341791+00:00 stderr F I0813 20:20:35.017274 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:20:35.051393225+00:00 stderr F I0813 20:20:35.051231 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:20:35.058551829+00:00 stderr F I0813 20:20:35.058446 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:20:35.059034963+00:00 stderr F I0813 20:20:35.058900 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:20:35.079694123+00:00 stderr F I0813 20:20:35.079607 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:20:35.079694123+00:00 stderr F I0813 20:20:35.079679 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:20:35.090843102+00:00 stderr F I0813 20:20:35.090591 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:20:35.090989686+00:00 stderr F I0813 20:20:35.090906 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:20:35.108679192+00:00 stderr F I0813 20:20:35.108582 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:20:35.108679192+00:00 stderr F I0813 20:20:35.108661 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:20:35.115992141+00:00 stderr F I0813 20:20:35.115879 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:20:35.115992141+00:00 stderr F I0813 20:20:35.115945 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:20:35.123428763+00:00 stderr F I0813 20:20:35.123263 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was 
successful 2025-08-13T20:20:35.123428763+00:00 stderr F I0813 20:20:35.123331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:20:35.129516987+00:00 stderr F I0813 20:20:35.129375 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:20:35.129516987+00:00 stderr F I0813 20:20:35.129442 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:20:35.135237861+00:00 stderr F I0813 20:20:35.135154 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:20:35.135276982+00:00 stderr F I0813 20:20:35.135234 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:20:35.140937714+00:00 stderr F I0813 20:20:35.140874 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:20:35.140998655+00:00 stderr F I0813 20:20:35.140939 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:20:35.146605056+00:00 stderr F I0813 20:20:35.146536 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:20:35.146636316+00:00 stderr F I0813 20:20:35.146606 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:20:35.316872931+00:00 stderr F I0813 20:20:35.316759 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:20:35.316911002+00:00 stderr F I0813 20:20:35.316867 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:20:35.515310052+00:00 stderr F I0813 20:20:35.515201 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:20:35.515310052+00:00 stderr F I0813 20:20:35.515289 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:20:35.716497682+00:00 stderr F I0813 20:20:35.714520 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:20:35.716497682+00:00 stderr F I0813 20:20:35.714574 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:20:35.914878012+00:00 stderr F I0813 20:20:35.914749 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:20:35.914878012+00:00 stderr F I0813 20:20:35.914866 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:20:36.115593748+00:00 stderr F I0813 20:20:36.115430 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:20:36.115593748+00:00 stderr F I0813 20:20:36.115478 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:20:36.314676358+00:00 stderr F I0813 20:20:36.314602 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 
2025-08-13T20:20:36.314676358+00:00 stderr F I0813 20:20:36.314651 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:20:36.516143806+00:00 stderr F I0813 20:20:36.516030 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:20:36.516143806+00:00 stderr F I0813 20:20:36.516127 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:20:36.714144305+00:00 stderr F I0813 20:20:36.714085 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:20:36.714343080+00:00 stderr F I0813 20:20:36.714319 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:20:36.914930013+00:00 stderr F I0813 20:20:36.914717 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:20:36.915104228+00:00 stderr F I0813 20:20:36.915080 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:20:37.115703191+00:00 stderr F I0813 20:20:37.115575 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:20:37.115703191+00:00 stderr F I0813 20:20:37.115659 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:20:37.318935769+00:00 stderr F I0813 20:20:37.318860 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:20:37.319066933+00:00 stderr F I0813 20:20:37.319052 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:20:37.525262466+00:00 stderr F I0813 20:20:37.525170 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:20:37.525304217+00:00 stderr F I0813 20:20:37.525259 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:20:37.732249852+00:00 stderr F I0813 20:20:37.732083 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:20:37.732249852+00:00 stderr F I0813 20:20:37.732159 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:20:37.914341566+00:00 stderr F I0813 20:20:37.914221 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:20:37.914404717+00:00 stderr F I0813 20:20:37.914348 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:20:38.118032347+00:00 stderr F I0813 20:20:38.117908 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:20:38.118032347+00:00 stderr F I0813 20:20:38.118019 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:20:38.318535957+00:00 stderr F I0813 20:20:38.318407 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:20:38.318535957+00:00 stderr F I0813 20:20:38.318516 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) 
openshift-multus/network-metrics-daemon 2025-08-13T20:20:38.522085155+00:00 stderr F I0813 20:20:38.522006 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:20:38.522085155+00:00 stderr F I0813 20:20:38.522072 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:20:38.729408640+00:00 stderr F I0813 20:20:38.729323 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:20:38.729408640+00:00 stderr F I0813 20:20:38.729392 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:20:38.919334157+00:00 stderr F I0813 20:20:38.919168 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:20:38.919334157+00:00 stderr F I0813 20:20:38.919238 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:20:39.114197166+00:00 stderr F I0813 20:20:39.113946 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:20:39.114197166+00:00 stderr F I0813 20:20:39.114030 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:20:39.314688925+00:00 stderr F I0813 20:20:39.314561 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:20:39.314688925+00:00 stderr F I0813 20:20:39.314633 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:20:39.516921546+00:00 stderr F I0813 20:20:39.516837 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:20:39.516921546+00:00 stderr F I0813 20:20:39.516910 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:20:39.713355890+00:00 stderr F I0813 20:20:39.713214 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:20:39.713355890+00:00 stderr F I0813 20:20:39.713285 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:20:39.913859030+00:00 stderr F I0813 20:20:39.913665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:20:39.913859030+00:00 stderr F I0813 20:20:39.913753 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:20:40.119089585+00:00 stderr F I0813 20:20:40.118954 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:20:40.119089585+00:00 stderr F I0813 20:20:40.119072 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:20:40.317057773+00:00 stderr F I0813 20:20:40.316939 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:20:40.317107275+00:00 stderr F I0813 20:20:40.317073 1 log.go:245] reconciling 
(apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:20:40.522411752+00:00 stderr F I0813 20:20:40.522288 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:20:40.522411752+00:00 stderr F I0813 20:20:40.522356 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:20:40.717497637+00:00 stderr F I0813 20:20:40.717372 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:20:40.717497637+00:00 stderr F I0813 20:20:40.717442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:20:40.915160267+00:00 stderr F I0813 20:20:40.915061 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:20:40.915160267+00:00 stderr F I0813 20:20:40.915130 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:20:41.114289977+00:00 stderr F I0813 20:20:41.114186 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:20:41.114289977+00:00 stderr F I0813 20:20:41.114261 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:20:41.316610860+00:00 stderr F I0813 20:20:41.316514 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:20:41.316610860+00:00 stderr F I0813 20:20:41.316597 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:20:41.514700131+00:00 stderr F I0813 20:20:41.514583 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:20:41.514700131+00:00 stderr F I0813 20:20:41.514658 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:20:41.746578498+00:00 stderr F I0813 20:20:41.746453 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:20:41.746578498+00:00 stderr F I0813 20:20:41.746523 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:20:41.920291183+00:00 stderr F I0813 20:20:41.920172 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:20:41.920291183+00:00 stderr F I0813 20:20:41.920282 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:20:42.123346296+00:00 stderr F I0813 20:20:42.123249 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:20:42.123346296+00:00 stderr F I0813 20:20:42.123323 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:20:42.360555405+00:00 stderr F I0813 20:20:42.359752 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:20:42.360555405+00:00 stderr F I0813 20:20:42.359862 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:20:42.521421362+00:00 stderr F I0813 20:20:42.521174 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:20:42.521421362+00:00 stderr F I0813 20:20:42.521269 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:20:42.764649163+00:00 stderr F I0813 20:20:42.764515 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:20:42.764649163+00:00 stderr F I0813 20:20:42.764587 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:20:42.947993663+00:00 stderr F I0813 20:20:42.947726 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:20:42.947993663+00:00 stderr F I0813 20:20:42.947869 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:20:43.115195341+00:00 stderr F I0813 20:20:43.115053 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:20:43.115195341+00:00 stderr F I0813 20:20:43.115161 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:20:43.314743734+00:00 stderr F I0813 20:20:43.314552 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:20:43.314743734+00:00 stderr F I0813 20:20:43.314640 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:20:43.538945162+00:00 stderr F I0813 20:20:43.536506 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:20:43.538945162+00:00 stderr F I0813 20:20:43.536611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:20:43.714613733+00:00 stderr F I0813 20:20:43.714499 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:20:43.714613733+00:00 stderr F I0813 20:20:43.714569 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:20:43.916191274+00:00 stderr F I0813 20:20:43.916093 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:20:43.916425280+00:00 stderr F I0813 20:20:43.916375 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:20:44.112591947+00:00 stderr F I0813 20:20:44.112526 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:20:44.112694870+00:00 stderr F I0813 20:20:44.112680 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:20:44.313680614+00:00 stderr F I0813 20:20:44.313552 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:20:44.313680614+00:00 stderr F I0813 20:20:44.313627 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:20:44.521593585+00:00 stderr F I0813 20:20:44.521003 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:20:44.521593585+00:00 stderr F I0813 20:20:44.521564 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:20:44.714654432+00:00 stderr F I0813 20:20:44.714600 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:20:44.714764915+00:00 stderr F I0813 20:20:44.714751 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:44.915892243+00:00 stderr F I0813 20:20:44.915660 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:44.915892243+00:00 stderr F I0813 20:20:44.915760 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:45.115664543+00:00 stderr F I0813 20:20:45.115534 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:45.115664543+00:00 stderr F I0813 20:20:45.115620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:45.341162817+00:00 stderr F I0813 20:20:45.341077 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:45.341162817+00:00 stderr F I0813 20:20:45.341146 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:20:45.516009704+00:00 stderr F I0813 20:20:45.515460 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:20:45.516009704+00:00 stderr F I0813 20:20:45.515531 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:20:45.718480460+00:00 stderr F I0813 20:20:45.718322 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:20:45.718480460+00:00 stderr F I0813 20:20:45.718406 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:20:45.917883789+00:00 stderr F I0813 
20:20:45.917660 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:20:45.917883789+00:00 stderr F I0813 20:20:45.917741 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:20:46.115838446+00:00 stderr F I0813 20:20:46.115696 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:20:46.115838446+00:00 stderr F I0813 20:20:46.115764 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:20:46.317691464+00:00 stderr F I0813 20:20:46.317581 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:20:46.317691464+00:00 stderr F I0813 20:20:46.317634 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:20:46.523676001+00:00 stderr F I0813 20:20:46.523581 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:20:46.523720803+00:00 stderr F I0813 20:20:46.523669 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:20:46.719160048+00:00 stderr F I0813 20:20:46.718504 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:20:46.719160048+00:00 stderr F I0813 20:20:46.718650 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:20:46.918250678+00:00 stderr F I0813 20:20:46.918163 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:20:46.918250678+00:00 stderr F I0813 20:20:46.918244 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:20:47.114304512+00:00 stderr F I0813 20:20:47.114169 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:20:47.114304512+00:00 stderr F I0813 20:20:47.114274 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:20:47.317070466+00:00 stderr F I0813 20:20:47.316924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:20:47.317070466+00:00 stderr F I0813 20:20:47.317018 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:20:47.524885645+00:00 stderr F I0813 20:20:47.524665 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:20:47.524885645+00:00 stderr F I0813 20:20:47.524762 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:20:47.719041623+00:00 stderr F I0813 20:20:47.718885 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:20:47.719041623+00:00 
stderr F I0813 20:20:47.719023 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:20:47.916272160+00:00 stderr F I0813 20:20:47.916161 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:20:47.916272160+00:00 stderr F I0813 20:20:47.916232 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:20:48.114940447+00:00 stderr F I0813 20:20:48.114849 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:20:48.114940447+00:00 stderr F I0813 20:20:48.114925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:20:48.318125074+00:00 stderr F I0813 20:20:48.318029 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:20:48.318125074+00:00 stderr F I0813 20:20:48.318095 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:20:48.515077522+00:00 stderr F I0813 20:20:48.514919 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:20:48.515077522+00:00 stderr F I0813 20:20:48.515033 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:20:48.716356645+00:00 stderr F I0813 20:20:48.716255 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:20:48.716400156+00:00 stderr F I0813 20:20:48.716355 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:20:48.943259860+00:00 stderr F I0813 20:20:48.943095 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:20:48.943259860+00:00 stderr F I0813 20:20:48.943245 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:20:49.169943599+00:00 stderr F I0813 20:20:49.168227 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:20:49.169943599+00:00 stderr F I0813 20:20:49.168305 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:20:49.317727692+00:00 stderr F I0813 20:20:49.317655 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:20:49.317727692+00:00 stderr F I0813 20:20:49.317717 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:20:49.518447409+00:00 stderr F I0813 20:20:49.518139 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:20:49.518447409+00:00 stderr F I0813 20:20:49.518411 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:20:49.715203211+00:00 stderr F I0813 20:20:49.715070 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:20:49.715203211+00:00 stderr F I0813 20:20:49.715160 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:20:49.923103433+00:00 stderr F I0813 20:20:49.922956 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:20:49.923204656+00:00 stderr F I0813 20:20:49.923166 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:20:50.116761217+00:00 stderr F I0813 20:20:50.116517 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:20:50.116761217+00:00 stderr F I0813 20:20:50.116673 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:20:50.314908870+00:00 stderr F I0813 20:20:50.314731 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:20:50.315132137+00:00 stderr F I0813 20:20:50.315102 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:20:50.514398462+00:00 stderr F I0813 20:20:50.514290 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:20:50.514398462+00:00 stderr F I0813 20:20:50.514382 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:20:50.726556925+00:00 stderr F I0813 20:20:50.726420 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:20:50.726556925+00:00 stderr F I0813 20:20:50.726535 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:20:50.919371475+00:00 stderr F I0813 20:20:50.919234 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:20:50.919371475+00:00 stderr F I0813 20:20:50.919331 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:20:51.116153669+00:00 stderr F I0813 20:20:51.116044 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:20:51.116153669+00:00 stderr F I0813 20:20:51.116120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:20:51.316356891+00:00 stderr F I0813 20:20:51.316310 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:20:51.316471514+00:00 stderr F I0813 20:20:51.316456 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:20:51.516163331+00:00 stderr F I0813 20:20:51.516066 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:20:51.516163331+00:00 stderr F I0813 20:20:51.516133 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:20:51.719646336+00:00 stderr F I0813 20:20:51.719546 1 
log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:20:51.719770710+00:00 stderr F I0813 20:20:51.719756 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:20:51.916366349+00:00 stderr F I0813 20:20:51.915645 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:20:51.918048257+00:00 stderr F I0813 20:20:51.918017 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:20:52.117860137+00:00 stderr F I0813 20:20:52.117626 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:20:52.117860137+00:00 stderr F I0813 20:20:52.117750 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:20:52.314295181+00:00 stderr F I0813 20:20:52.314191 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:20:52.314295181+00:00 stderr F I0813 20:20:52.314259 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:20:52.516091738+00:00 stderr F I0813 20:20:52.516039 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:20:52.516239083+00:00 stderr F I0813 20:20:52.516222 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:20:52.719680517+00:00 stderr F I0813 20:20:52.719550 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:20:52.719680517+00:00 stderr F I0813 20:20:52.719668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:20:52.914934307+00:00 stderr F I0813 20:20:52.914744 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:20:52.915017569+00:00 stderr F I0813 20:20:52.914998 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:20:53.121269393+00:00 stderr F I0813 20:20:53.121151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:20:53.121269393+00:00 stderr F I0813 20:20:53.121221 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:20:53.318071618+00:00 stderr F I0813 20:20:53.317222 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:20:53.318071618+00:00 stderr F I0813 20:20:53.317285 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:20:53.515249083+00:00 stderr F I0813 20:20:53.515093 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 
2025-08-13T20:20:53.515249083+00:00 stderr F I0813 20:20:53.515188 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:20:53.715506386+00:00 stderr F I0813 20:20:53.715429 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:20:53.715561408+00:00 stderr F I0813 20:20:53.715502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:20:53.928710710+00:00 stderr F I0813 20:20:53.928579 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:20:53.928710710+00:00 stderr F I0813 20:20:53.928647 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:20:54.115359034+00:00 stderr F I0813 20:20:54.115214 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:20:54.115431756+00:00 stderr F I0813 20:20:54.115353 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:20:54.329936146+00:00 stderr F I0813 20:20:54.329755 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:20:54.329936146+00:00 stderr F I0813 20:20:54.329912 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:20:54.530932861+00:00 stderr F I0813 20:20:54.529003 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:20:54.530932861+00:00 stderr F I0813 20:20:54.530847 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:20:54.721034324+00:00 stderr F I0813 20:20:54.720904 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:20:54.721113416+00:00 stderr F I0813 20:20:54.721035 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:20:54.915171662+00:00 stderr F I0813 20:20:54.915090 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:20:54.915229324+00:00 stderr F I0813 20:20:54.915182 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:20:55.115471167+00:00 stderr F I0813 20:20:55.115377 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:20:55.115552259+00:00 stderr F I0813 20:20:55.115490 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:20:55.315049650+00:00 stderr F I0813 20:20:55.314920 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 
2025-08-13T20:20:55.315049650+00:00 stderr F I0813 20:20:55.315022 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:20:55.515115428+00:00 stderr F I0813 20:20:55.514999 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:20:55.515172839+00:00 stderr F I0813 20:20:55.515132 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:20:55.721548628+00:00 stderr F I0813 20:20:55.721333 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:20:55.742750874+00:00 stderr F I0813 20:20:55.742651 1 log.go:245] Operconfig Controller complete 2025-08-13T20:22:15.414660491+00:00 stderr F I0813 20:22:15.414332 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:23:55.745175459+00:00 stderr F I0813 20:23:55.744924 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:23:56.056716478+00:00 stderr F I0813 20:23:56.056606 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:23:56.062176634+00:00 stderr F I0813 20:23:56.062079 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:23:56.066000683+00:00 stderr F I0813 20:23:56.065853 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:23:56.066000683+00:00 stderr F I0813 20:23:56.065883 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003b9ea80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.071972 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.072031 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.072040 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077868 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077895 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077901 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077907 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.078022 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:23:56.091823722+00:00 stderr F I0813 20:23:56.091671 1 log.go:245] 
reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:23:56.106619705+00:00 stderr F I0813 20:23:56.106510 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:23:56.106619705+00:00 stderr F I0813 20:23:56.106587 1 log.go:245] Starting render phase 2025-08-13T20:23:56.124137446+00:00 stderr F I0813 20:23:56.124053 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161549 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161596 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161653 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:23:56.161763471+00:00 stderr F I0813 20:23:56.161690 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:23:56.175322409+00:00 stderr F I0813 20:23:56.175221 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:23:56.175322409+00:00 stderr F I0813 20:23:56.175269 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:23:56.189743842+00:00 stderr F I0813 20:23:56.189626 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:23:56.223138826+00:00 stderr F I0813 20:23:56.223025 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:23:56.229318543+00:00 stderr F I0813 20:23:56.229226 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:23:56.229318543+00:00 stderr F I0813 20:23:56.229284 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:23:56.238065373+00:00 stderr F I0813 20:23:56.238029 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:23:56.238157806+00:00 stderr F I0813 20:23:56.238144 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:23:56.248264415+00:00 stderr F I0813 20:23:56.248236 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:23:56.248351667+00:00 stderr F I0813 20:23:56.248335 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:23:56.256419778+00:00 stderr F I0813 20:23:56.256333 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:23:56.256419778+00:00 stderr F I0813 20:23:56.256408 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:23:56.263159241+00:00 stderr F I0813 20:23:56.263114 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) 
/openshift-multus was successful 2025-08-13T20:23:56.263267974+00:00 stderr F I0813 20:23:56.263249 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:23:56.271216531+00:00 stderr F I0813 20:23:56.271161 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:23:56.271329714+00:00 stderr F I0813 20:23:56.271314 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:23:56.279186849+00:00 stderr F I0813 20:23:56.279141 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:23:56.279288662+00:00 stderr F I0813 20:23:56.279274 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:23:56.285667304+00:00 stderr F I0813 20:23:56.285512 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:23:56.285667304+00:00 stderr F I0813 20:23:56.285593 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:23:56.292482569+00:00 stderr F I0813 20:23:56.292447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:23:56.292577362+00:00 stderr F I0813 20:23:56.292562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:23:56.301649901+00:00 stderr F I0813 20:23:56.301509 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:23:56.301649901+00:00 stderr F I0813 20:23:56.301564 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:23:56.506046986+00:00 stderr F I0813 20:23:56.505885 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:23:56.506090177+00:00 stderr F I0813 20:23:56.506040 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:23:56.703672107+00:00 stderr F I0813 20:23:56.703336 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:23:56.703672107+00:00 stderr F I0813 20:23:56.703429 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:23:56.903017297+00:00 stderr F I0813 20:23:56.902868 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:23:56.903017297+00:00 stderr F I0813 20:23:56.902937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:23:57.105524158+00:00 stderr F I0813 20:23:57.105419 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:23:57.105524158+00:00 stderr F I0813 20:23:57.105497 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:23:57.304444006+00:00 stderr F I0813 20:23:57.304332 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 
2025-08-13T20:23:57.304444006+00:00 stderr F I0813 20:23:57.304407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:23:57.503685003+00:00 stderr F I0813 20:23:57.503636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:23:57.503957881+00:00 stderr F I0813 20:23:57.503938 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:23:57.704355091+00:00 stderr F I0813 20:23:57.704301 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:23:57.704573877+00:00 stderr F I0813 20:23:57.704552 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:23:57.905446501+00:00 stderr F I0813 20:23:57.902869 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:23:57.905587625+00:00 stderr F I0813 20:23:57.905570 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:23:58.103037451+00:00 stderr F I0813 20:23:58.102875 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:23:58.103037451+00:00 stderr F I0813 20:23:58.102951 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:23:58.305037147+00:00 stderr F I0813 20:23:58.304951 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:23:58.305097379+00:00 stderr F I0813 20:23:58.305086 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:23:58.504179291+00:00 stderr F I0813 20:23:58.504046 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:23:58.504179291+00:00 stderr F I0813 20:23:58.504118 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:23:58.716846142+00:00 stderr F I0813 20:23:58.716743 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:23:58.716968416+00:00 stderr F I0813 20:23:58.716953 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:23:58.915933825+00:00 stderr F I0813 20:23:58.915878 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:23:58.916067789+00:00 stderr F I0813 20:23:58.916051 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:23:59.107436621+00:00 stderr F I0813 20:23:59.107367 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:23:59.107547914+00:00 stderr F I0813 20:23:59.107531 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:23:59.303224768+00:00 stderr F I0813 20:23:59.303168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:23:59.303363672+00:00 stderr F I0813 20:23:59.303344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/metrics-daemon-sa-rolebinding 2025-08-13T20:23:59.504193185+00:00 stderr F I0813 20:23:59.504044 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:23:59.504193185+00:00 stderr F I0813 20:23:59.504108 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:23:59.707910500+00:00 stderr F I0813 20:23:59.707758 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:23:59.708084205+00:00 stderr F I0813 20:23:59.708067 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:23:59.905622053+00:00 stderr F I0813 20:23:59.905511 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:23:59.905622053+00:00 stderr F I0813 20:23:59.905586 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:24:00.111376367+00:00 stderr F I0813 20:24:00.111228 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:24:00.111376367+00:00 stderr F I0813 20:24:00.111333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:24:00.309879493+00:00 stderr F I0813 20:24:00.307164 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:00.309879493+00:00 stderr F I0813 20:24:00.307251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:24:00.502397528+00:00 stderr F I0813 20:24:00.502198 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:00.502397528+00:00 stderr F I0813 20:24:00.502279 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:24:00.703889160+00:00 stderr F I0813 20:24:00.703701 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:24:00.703889160+00:00 stderr F I0813 20:24:00.703847 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:24:00.932875887+00:00 stderr F I0813 20:24:00.932711 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:24:00.932875887+00:00 stderr F I0813 20:24:00.932857 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:24:01.109561740+00:00 stderr F I0813 20:24:01.107418 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:24:01.109561740+00:00 stderr F I0813 20:24:01.107519 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:24:01.303446083+00:00 stderr F I0813 20:24:01.303328 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:24:01.303446083+00:00 stderr F I0813 20:24:01.303412 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, 
Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:24:01.505444009+00:00 stderr F I0813 20:24:01.505299 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:24:01.505444009+00:00 stderr F I0813 20:24:01.505379 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:24:01.707515097+00:00 stderr F I0813 20:24:01.707435 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:24:01.707628561+00:00 stderr F I0813 20:24:01.707610 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:24:01.903363987+00:00 stderr F I0813 20:24:01.903289 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:24:01.903363987+00:00 stderr F I0813 20:24:01.903342 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:24:02.104367165+00:00 stderr F I0813 20:24:02.104131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:02.104571011+00:00 stderr F I0813 20:24:02.104553 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:24:02.304961631+00:00 stderr F I0813 20:24:02.304722 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:02.304961631+00:00 stderr F I0813 20:24:02.304918 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:24:02.504882538+00:00 stderr F I0813 20:24:02.504097 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:24:02.504882538+00:00 stderr F I0813 20:24:02.504765 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:24:02.704090464+00:00 stderr F I0813 20:24:02.703946 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:24:02.704171946+00:00 stderr F I0813 20:24:02.704101 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:24:02.910926357+00:00 stderr F I0813 20:24:02.910395 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:24:02.910926357+00:00 stderr F I0813 20:24:02.910473 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:24:03.109578138+00:00 stderr F I0813 20:24:03.109479 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:24:03.109650070+00:00 stderr F I0813 20:24:03.109604 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:24:03.311612165+00:00 stderr F I0813 20:24:03.311452 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 
2025-08-13T20:24:03.311612165+00:00 stderr F I0813 20:24:03.311527 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:24:03.518483780+00:00 stderr F I0813 20:24:03.518204 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:24:03.518483780+00:00 stderr F I0813 20:24:03.518275 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:24:03.709574234+00:00 stderr F I0813 20:24:03.709513 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:24:03.709700518+00:00 stderr F I0813 20:24:03.709686 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:24:03.941181797+00:00 stderr F I0813 20:24:03.941042 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:24:03.941181797+00:00 stderr F I0813 20:24:03.941116 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:24:04.148030891+00:00 stderr F I0813 20:24:04.147255 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:24:04.148075283+00:00 stderr F I0813 20:24:04.148031 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:24:04.303652371+00:00 stderr F I0813 20:24:04.303504 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:24:04.303652371+00:00 stderr F I0813 20:24:04.303550 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:24:04.506547913+00:00 stderr F I0813 20:24:04.506449 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:24:04.506547913+00:00 stderr F I0813 20:24:04.506515 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:24:04.706940153+00:00 stderr F I0813 20:24:04.706715 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:24:04.706940153+00:00 stderr F I0813 20:24:04.706865 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:24:04.906138199+00:00 stderr F I0813 20:24:04.905277 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:24:04.906138199+00:00 stderr F I0813 20:24:04.905349 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:24:05.102190895+00:00 stderr F I0813 20:24:05.102072 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:24:05.102190895+00:00 stderr F I0813 20:24:05.102163 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:24:05.303721187+00:00 stderr F I0813 20:24:05.303579 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:24:05.303721187+00:00 stderr F I0813 20:24:05.303651 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:24:05.504195840+00:00 stderr F I0813 20:24:05.504088 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:24:05.504195840+00:00 stderr F I0813 20:24:05.504182 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:24:05.703151339+00:00 stderr F I0813 20:24:05.703030 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:24:05.703295753+00:00 stderr F I0813 20:24:05.703218 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:24:05.905630218+00:00 stderr F I0813 20:24:05.905487 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:24:05.905630218+00:00 stderr F I0813 20:24:05.905566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.104202557+00:00 stderr F I0813 20:24:06.104087 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.104202557+00:00 stderr F I0813 20:24:06.104178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.304850084+00:00 stderr F I0813 20:24:06.304742 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.304896585+00:00 stderr F I0813 20:24:06.304870 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.502601377+00:00 stderr F I0813 20:24:06.502497 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.502601377+00:00 stderr F I0813 20:24:06.502574 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:24:06.707914818+00:00 stderr F I0813 20:24:06.707659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:24:06.707914818+00:00 stderr F I0813 20:24:06.707874 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:24:06.905040635+00:00 stderr F I0813 
20:24:06.904866 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:24:06.905040635+00:00 stderr F I0813 20:24:06.904934 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:24:07.104293682+00:00 stderr F I0813 20:24:07.104200 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:24:07.104293682+00:00 stderr F I0813 20:24:07.104283 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:24:07.305269639+00:00 stderr F I0813 20:24:07.305107 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:24:07.305269639+00:00 stderr F I0813 20:24:07.305193 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:24:07.505268098+00:00 stderr F I0813 20:24:07.505131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:24:07.505268098+00:00 stderr F I0813 20:24:07.505260 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:24:07.707370547+00:00 stderr F I0813 20:24:07.706715 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:24:07.707370547+00:00 stderr F I0813 20:24:07.707020 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:24:07.908327433+00:00 stderr F I0813 20:24:07.908202 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:24:07.908327433+00:00 stderr F I0813 20:24:07.908274 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:24:08.107377655+00:00 stderr F I0813 20:24:08.107259 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:24:08.107377655+00:00 stderr F I0813 20:24:08.107332 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:24:08.304240254+00:00 stderr F I0813 20:24:08.304146 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:24:08.304240254+00:00 stderr F I0813 20:24:08.304217 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:24:08.506155518+00:00 stderr F I0813 20:24:08.506100 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:24:08.506360984+00:00 stderr F I0813 20:24:08.506343 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:24:08.708078002+00:00 stderr F I0813 20:24:08.707897 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:24:08.708078002+00:00 stderr F 
I0813 20:24:08.708005 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:24:08.905852117+00:00 stderr F I0813 20:24:08.905680 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:24:08.905852117+00:00 stderr F I0813 20:24:08.905748 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:24:09.105362862+00:00 stderr F I0813 20:24:09.105229 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:24:09.105362862+00:00 stderr F I0813 20:24:09.105302 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:24:09.308393187+00:00 stderr F I0813 20:24:09.308281 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:24:09.308393187+00:00 stderr F I0813 20:24:09.308351 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:24:09.503841136+00:00 stderr F I0813 20:24:09.503616 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:24:09.503841136+00:00 stderr F I0813 20:24:09.503689 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:24:09.710423843+00:00 stderr F I0813 20:24:09.710291 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:24:09.710423843+00:00 stderr F I0813 20:24:09.710370 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:24:09.904903784+00:00 stderr F I0813 20:24:09.904143 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:24:09.905225733+00:00 stderr F I0813 20:24:09.905103 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:24:10.117640306+00:00 stderr F I0813 20:24:10.117359 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:24:10.117640306+00:00 stderr F I0813 20:24:10.117462 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:24:10.334751444+00:00 stderr F I0813 20:24:10.334449 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:24:10.334751444+00:00 stderr F I0813 20:24:10.334553 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:24:10.505470656+00:00 stderr F I0813 20:24:10.505341 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:24:10.505470656+00:00 stderr F I0813 20:24:10.505420 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:24:10.705679281+00:00 stderr F I0813 20:24:10.705551 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:24:10.705679281+00:00 stderr F I0813 20:24:10.705638 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:24:10.902584351+00:00 stderr F I0813 20:24:10.902460 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:24:10.902584351+00:00 stderr F I0813 20:24:10.902548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:24:11.104597097+00:00 stderr F I0813 20:24:11.104095 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:24:11.104635988+00:00 stderr F I0813 20:24:11.104593 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:24:11.303297819+00:00 stderr F I0813 20:24:11.303215 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:24:11.303340690+00:00 stderr F I0813 20:24:11.303301 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:24:11.507098977+00:00 stderr F I0813 20:24:11.506921 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:24:11.507098977+00:00 stderr F I0813 20:24:11.507039 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:24:11.704892512+00:00 stderr F I0813 20:24:11.704748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:24:11.704962174+00:00 stderr F I0813 20:24:11.704935 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:24:11.908494334+00:00 stderr F I0813 20:24:11.908373 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:24:11.908494334+00:00 stderr F I0813 20:24:11.908476 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:24:12.107716141+00:00 stderr F I0813 20:24:12.107620 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:24:12.107922847+00:00 stderr F I0813 20:24:12.107719 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:24:12.306970778+00:00 stderr F I0813 20:24:12.306848 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:24:12.306970778+00:00 stderr F I0813 20:24:12.306943 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:24:12.510876339+00:00 stderr F I0813 20:24:12.510737 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:24:12.510937191+00:00 stderr F I0813 20:24:12.510900 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:24:12.703612550+00:00 stderr F I0813 20:24:12.703499 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:24:12.703612550+00:00 stderr F I0813 20:24:12.703577 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:24:12.908060046+00:00 stderr F I0813 20:24:12.907951 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:24:12.908060046+00:00 stderr F I0813 20:24:12.908041 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:24:13.106532681+00:00 stderr F I0813 20:24:13.106404 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:24:13.106532681+00:00 stderr F I0813 20:24:13.106494 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:24:13.304753239+00:00 stderr F I0813 20:24:13.304631 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:24:13.304753239+00:00 stderr F I0813 20:24:13.304735 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:24:13.505024716+00:00 stderr F I0813 20:24:13.504860 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:24:13.505024716+00:00 stderr F I0813 20:24:13.505011 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:24:13.705484167+00:00 stderr F I0813 20:24:13.705360 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:24:13.705484167+00:00 stderr F I0813 20:24:13.705464 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:24:13.903625263+00:00 stderr F I0813 20:24:13.903529 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:24:13.903625263+00:00 stderr F I0813 20:24:13.903602 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:24:14.104092195+00:00 stderr F I0813 20:24:14.103966 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:24:14.104133356+00:00 stderr F I0813 20:24:14.104092 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:24:14.303117126+00:00 stderr F I0813 20:24:14.302972 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:24:14.303117126+00:00 stderr F I0813 20:24:14.303097 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:24:14.508547180+00:00 stderr F I0813 20:24:14.508316 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:24:14.508547180+00:00 stderr F 
I0813 20:24:14.508405 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:24:14.706561422+00:00 stderr F I0813 20:24:14.706437 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:24:14.706561422+00:00 stderr F I0813 20:24:14.706527 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:24:14.904234084+00:00 stderr F I0813 20:24:14.904096 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:24:14.904234084+00:00 stderr F I0813 20:24:14.904220 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:24:15.102945636+00:00 stderr F I0813 20:24:15.102856 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:24:15.102945636+00:00 stderr F I0813 20:24:15.102920 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:24:15.304903511+00:00 stderr F I0813 20:24:15.304838 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:24:15.305076096+00:00 stderr F I0813 20:24:15.305056 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:24:15.507388510+00:00 stderr F I0813 20:24:15.507226 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:24:15.507492543+00:00 stderr F I0813 20:24:15.507399 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:24:15.720216476+00:00 stderr F I0813 20:24:15.719621 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:24:15.720216476+00:00 stderr F I0813 20:24:15.719699 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:24:15.904202717+00:00 stderr F I0813 20:24:15.904044 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:24:15.904202717+00:00 stderr F I0813 20:24:15.904115 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:24:16.105078811+00:00 stderr F I0813 20:24:16.104583 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:24:16.105078811+00:00 stderr F I0813 20:24:16.104659 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:24:16.303380021+00:00 stderr F I0813 20:24:16.303289 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:24:16.303380021+00:00 
stderr F I0813 20:24:16.303362 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:24:16.504512272+00:00 stderr F I0813 20:24:16.504309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:24:16.504512272+00:00 stderr F I0813 20:24:16.504421 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:24:16.704225613+00:00 stderr F I0813 20:24:16.704172 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:24:16.704360707+00:00 stderr F I0813 20:24:16.704345 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:24:16.911404137+00:00 stderr F I0813 20:24:16.911266 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:24:16.937133472+00:00 stderr F I0813 20:24:16.936845 1 log.go:245] Operconfig Controller complete 2025-08-13T20:25:15.429967135+00:00 stderr F I0813 20:25:15.429662 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:25:16.740137398+00:00 stderr F I0813 20:25:16.739949 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:25:16.742323171+00:00 stderr F I0813 20:25:16.742244 1 log.go:245] successful reconciliation 2025-08-13T20:25:18.126867019+00:00 stderr F I0813 20:25:18.126717 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:25:18.127921029+00:00 stderr F I0813 20:25:18.127892 1 log.go:245] successful reconciliation 2025-08-13T20:25:19.346569016+00:00 stderr F I0813 20:25:19.345537 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:25:19.346569016+00:00 stderr F I0813 20:25:19.346393 1 log.go:245] successful reconciliation 2025-08-13T20:27:16.938669622+00:00 stderr F I0813 20:27:16.938467 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:27:19.812921138+00:00 stderr F I0813 20:27:19.812057 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:27:19.816840280+00:00 stderr F I0813 20:27:19.814837 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:27:19.819847356+00:00 stderr F I0813 20:27:19.817458 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:27:19.819847356+00:00 stderr F I0813 20:27:19.817517 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a8ed00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824289 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824340 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane 
rollout complete 2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824379 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828891 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828939 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828948 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828956 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:27:19.829950725+00:00 stderr F I0813 20:27:19.829083 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:27:19.861508177+00:00 stderr F I0813 20:27:19.860090 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:27:19.882350973+00:00 stderr F I0813 20:27:19.879935 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:27:19.882350973+00:00 stderr F I0813 20:27:19.880025 1 log.go:245] Starting render phase 2025-08-13T20:27:19.896873369+00:00 stderr F I0813 20:27:19.896220 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931254 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931296 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931322 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931349 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:27:19.944909902+00:00 stderr F I0813 20:27:19.943601 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:27:19.944909902+00:00 stderr F I0813 20:27:19.943649 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:27:19.957856962+00:00 stderr F I0813 20:27:19.955095 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:27:19.990336141+00:00 stderr F I0813 20:27:19.982749 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:27:19.999886924+00:00 stderr F I0813 20:27:19.995326 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:27:19.999886924+00:00 stderr F I0813 20:27:19.995387 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:27:20.006681439+00:00 stderr F I0813 20:27:20.004174 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:27:20.006681439+00:00 stderr F I0813 20:27:20.004253 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:27:20.019916377+00:00 stderr F I0813 20:27:20.019718 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:27:20.019916377+00:00 stderr F I0813 20:27:20.019844 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:27:20.032081685+00:00 stderr F I0813 20:27:20.030848 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:27:20.032081685+00:00 stderr F I0813 20:27:20.030919 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:27:20.039156817+00:00 stderr F I0813 20:27:20.038392 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:27:20.039156817+00:00 stderr F I0813 20:27:20.038461 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:27:20.048527175+00:00 stderr F I0813 20:27:20.046105 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:27:20.048527175+00:00 stderr F I0813 20:27:20.046172 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:27:20.052883210+00:00 stderr F I0813 20:27:20.051561 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:27:20.052883210+00:00 stderr F I0813 20:27:20.051642 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:27:20.058851640+00:00 stderr F I0813 20:27:20.057424 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:27:20.058851640+00:00 stderr F I0813 20:27:20.057470 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:27:20.065037407+00:00 stderr F I0813 20:27:20.062875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:27:20.065037407+00:00 stderr F I0813 20:27:20.062920 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:27:20.072289915+00:00 stderr F I0813 20:27:20.072255 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:27:20.072368487+00:00 stderr F I0813 20:27:20.072355 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:27:20.277151312+00:00 stderr F I0813 20:27:20.277043 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:27:20.277151312+00:00 stderr F I0813 20:27:20.277109 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:27:20.479363295+00:00 stderr F I0813 20:27:20.479249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was 
successful 2025-08-13T20:27:20.479363295+00:00 stderr F I0813 20:27:20.479316 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:27:20.674626708+00:00 stderr F I0813 20:27:20.674501 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:27:20.674626708+00:00 stderr F I0813 20:27:20.674573 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:27:20.877436387+00:00 stderr F I0813 20:27:20.877296 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:27:20.877436387+00:00 stderr F I0813 20:27:20.877385 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:27:21.073222746+00:00 stderr F I0813 20:27:21.073047 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:27:21.073222746+00:00 stderr F I0813 20:27:21.073108 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:27:21.275581292+00:00 stderr F I0813 20:27:21.275420 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:27:21.275581292+00:00 stderr F I0813 20:27:21.275493 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:27:21.475888709+00:00 stderr F I0813 20:27:21.475668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:27:21.476053114+00:00 stderr F I0813 20:27:21.475950 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:27:21.674091187+00:00 stderr F I0813 20:27:21.673969 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:27:21.674091187+00:00 stderr F I0813 20:27:21.674079 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:27:21.877266086+00:00 stderr F I0813 20:27:21.877110 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:27:21.877266086+00:00 stderr F I0813 20:27:21.877249 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:27:22.075151775+00:00 stderr F I0813 20:27:22.075092 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:27:22.075265938+00:00 stderr F I0813 20:27:22.075247 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:27:22.275664958+00:00 stderr F I0813 20:27:22.275608 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:27:22.275879375+00:00 stderr F I0813 20:27:22.275859 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:27:22.492454097+00:00 stderr F I0813 20:27:22.492318 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:27:22.492579351+00:00 stderr F I0813 20:27:22.492565 1 log.go:245] 
reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:27:22.686749513+00:00 stderr F I0813 20:27:22.686621 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:27:22.686749513+00:00 stderr F I0813 20:27:22.686691 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:27:22.873680618+00:00 stderr F I0813 20:27:22.873593 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:27:22.873680618+00:00 stderr F I0813 20:27:22.873661 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:27:23.072481833+00:00 stderr F I0813 20:27:23.072305 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:27:23.072481833+00:00 stderr F I0813 20:27:23.072374 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:27:23.276458574+00:00 stderr F I0813 20:27:23.276323 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:27:23.276458574+00:00 stderr F I0813 20:27:23.276436 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:27:23.484912135+00:00 stderr F I0813 20:27:23.483658 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:27:23.484912135+00:00 stderr F I0813 20:27:23.484887 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:27:23.677401369+00:00 stderr F I0813 20:27:23.677220 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:27:23.677401369+00:00 stderr F I0813 20:27:23.677313 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:27:23.880580669+00:00 stderr F I0813 20:27:23.880454 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:27:23.880580669+00:00 stderr F I0813 20:27:23.880539 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:27:24.074331689+00:00 stderr F I0813 20:27:24.074174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:27:24.074331689+00:00 stderr F I0813 20:27:24.074250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:27:24.275878012+00:00 stderr F I0813 20:27:24.275651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:27:24.275878012+00:00 stderr F I0813 20:27:24.275734 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:27:24.490976322+00:00 stderr F I0813 20:27:24.490877 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:27:24.490976322+00:00 stderr F I0813 20:27:24.490949 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) 
openshift-multus/multus-ac 2025-08-13T20:27:24.678108233+00:00 stderr F I0813 20:27:24.677687 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:27:24.678108233+00:00 stderr F I0813 20:27:24.677785 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:27:24.879197503+00:00 stderr F I0813 20:27:24.879127 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:27:24.879323967+00:00 stderr F I0813 20:27:24.879309 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:27:25.074599921+00:00 stderr F I0813 20:27:25.074447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:27:25.074599921+00:00 stderr F I0813 20:27:25.074543 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:27:25.278032578+00:00 stderr F I0813 20:27:25.277904 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:27:25.278091839+00:00 stderr F I0813 20:27:25.278028 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:27:25.482462533+00:00 stderr F I0813 20:27:25.481667 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:27:25.482462533+00:00 stderr F I0813 20:27:25.481758 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:27:25.675832623+00:00 stderr F I0813 20:27:25.675602 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:27:25.675832623+00:00 stderr F I0813 20:27:25.675686 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:27:25.875290516+00:00 stderr F I0813 20:27:25.875144 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:27:25.875290516+00:00 stderr F I0813 20:27:25.875220 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:27:26.074473591+00:00 stderr F I0813 20:27:26.074356 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:27:26.074473591+00:00 stderr F I0813 20:27:26.074429 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:27:26.275488309+00:00 stderr F I0813 20:27:26.275383 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:27:26.275488309+00:00 stderr F I0813 20:27:26.275470 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:27:26.475374105+00:00 stderr F I0813 20:27:26.475280 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 
2025-08-13T20:27:26.475374105+00:00 stderr F I0813 20:27:26.475356 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:27:26.682038284+00:00 stderr F I0813 20:27:26.681942 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:27:26.682249990+00:00 stderr F I0813 20:27:26.682232 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:27:26.880741565+00:00 stderr F I0813 20:27:26.880534 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:27:26.880741565+00:00 stderr F I0813 20:27:26.880624 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:27:27.085580382+00:00 stderr F I0813 20:27:27.085487 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:27:27.085580382+00:00 stderr F I0813 20:27:27.085559 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:27:27.294074424+00:00 stderr F I0813 20:27:27.293871 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:27:27.294074424+00:00 stderr F I0813 20:27:27.293955 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:27:27.479526467+00:00 stderr F I0813 20:27:27.479420 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:27:27.479526467+00:00 stderr F I0813 20:27:27.479508 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:27:27.713295301+00:00 stderr F I0813 20:27:27.713142 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:27:27.713479096+00:00 stderr F I0813 20:27:27.713458 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:27:27.905872938+00:00 stderr F I0813 20:27:27.905709 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:27:27.905872938+00:00 stderr F I0813 20:27:27.905783 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:27:28.076200558+00:00 stderr F I0813 20:27:28.075234 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:27:28.076200558+00:00 stderr F I0813 20:27:28.075333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:27:28.272687606+00:00 stderr F I0813 20:27:28.272579 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:27:28.272983305+00:00 
stderr F I0813 20:27:28.272963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:27:28.475242958+00:00 stderr F I0813 20:27:28.475165 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:27:28.475400733+00:00 stderr F I0813 20:27:28.475385 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:27:28.676608776+00:00 stderr F I0813 20:27:28.676464 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:27:28.676866624+00:00 stderr F I0813 20:27:28.676840 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:27:28.874520685+00:00 stderr F I0813 20:27:28.874390 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:27:28.874520685+00:00 stderr F I0813 20:27:28.874488 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:27:29.078941491+00:00 stderr F I0813 20:27:29.078704 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:27:29.078941491+00:00 stderr F I0813 20:27:29.078885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:27:29.274753910+00:00 stderr F I0813 20:27:29.274270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:27:29.274753910+00:00 stderr F I0813 20:27:29.274357 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:27:29.478494395+00:00 stderr F I0813 20:27:29.476646 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:27:29.478494395+00:00 stderr F I0813 20:27:29.476729 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:27:29.673460880+00:00 stderr F I0813 20:27:29.673334 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:27:29.673460880+00:00 stderr F I0813 20:27:29.673448 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:29.903861329+00:00 stderr F I0813 20:27:29.903288 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:29.903924220+00:00 stderr F I0813 20:27:29.903904 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.075440425+00:00 stderr F I0813 20:27:30.075353 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 
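Every apply captured in this run reports success. To confirm that at a glance on a longer log, one rough approach is to collect the object references from the "reconciling" lines and subtract the ones that later appear in an "Apply / Create ... was successful" line; whatever remains never got a recorded apply. A sketch of that check, assuming the operator log has been saved to a hypothetical network-operator.log file:

import re
import sys

RECONCILING = re.compile(r"reconciling (\([^)]*\) \S+)")
SUCCEEDED = re.compile(r"Apply / Create of (\([^)]*\) \S+) was successful")

def unconfirmed_applies(path: str) -> list[str]:
    # Object references announced as "reconciling" but never confirmed as applied.
    announced: list[str] = []
    confirmed: set[str] = set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # finditer copes with lines that hold several log entries run together.
            announced.extend(m.group(1) for m in RECONCILING.finditer(line))
            confirmed.update(m.group(1) for m in SUCCEEDED.finditer(line))
    return [ref for ref in announced if ref not in confirmed]

if __name__ == "__main__":
    # "network-operator.log" is a hypothetical filename for this saved log.
    for ref in unconfirmed_applies(sys.argv[1] if len(sys.argv) > 1 else "network-operator.log"):
        print("no successful apply recorded for", ref)

Duplicate references are expected, since the operator runs the same render-and-apply pass repeatedly (here once around 20:24 and again around 20:27), so confirmed references are kept in a set; anything the script prints would point at an object whose apply either failed or was cut off in the captured log.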
2025-08-13T20:27:30.075577639+00:00 stderr F I0813 20:27:30.075562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.275364651+00:00 stderr F I0813 20:27:30.275309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.275477125+00:00 stderr F I0813 20:27:30.275462 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.475919065+00:00 stderr F I0813 20:27:30.475812 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.475919065+00:00 stderr F I0813 20:27:30.475891 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:27:30.676348546+00:00 stderr F I0813 20:27:30.676239 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:27:30.676403658+00:00 stderr F I0813 20:27:30.676357 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:27:30.875751948+00:00 stderr F I0813 20:27:30.875595 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:27:30.875751948+00:00 stderr F I0813 20:27:30.875706 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:27:31.077562149+00:00 stderr F I0813 20:27:31.077426 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:27:31.077562149+00:00 stderr F I0813 20:27:31.077503 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:27:31.275134298+00:00 stderr F I0813 20:27:31.274718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:27:31.275134298+00:00 stderr F I0813 20:27:31.274879 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:27:31.478038670+00:00 stderr F I0813 20:27:31.477907 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:27:31.478038670+00:00 stderr F I0813 20:27:31.477991 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:27:31.677351119+00:00 stderr F I0813 20:27:31.677256 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:27:31.677411741+00:00 stderr F I0813 20:27:31.677367 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:27:31.902942360+00:00 stderr F I0813 20:27:31.902022 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:27:31.902942360+00:00 stderr F I0813 20:27:31.902168 1 
log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:27:32.074235438+00:00 stderr F I0813 20:27:32.074163 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:27:32.074288709+00:00 stderr F I0813 20:27:32.074270 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:27:32.275389269+00:00 stderr F I0813 20:27:32.274892 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:27:32.275389269+00:00 stderr F I0813 20:27:32.274967 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:27:32.478609180+00:00 stderr F I0813 20:27:32.478481 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:27:32.478609180+00:00 stderr F I0813 20:27:32.478551 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:27:32.676428627+00:00 stderr F I0813 20:27:32.676267 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:27:32.676428627+00:00 stderr F I0813 20:27:32.676390 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:27:32.875700675+00:00 stderr F I0813 20:27:32.875605 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:27:32.875700675+00:00 stderr F I0813 20:27:32.875675 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:27:33.074303934+00:00 stderr F I0813 20:27:33.074195 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:27:33.074303934+00:00 stderr F I0813 20:27:33.074294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:27:33.275156507+00:00 stderr F I0813 20:27:33.275042 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:27:33.275156507+00:00 stderr F I0813 20:27:33.275119 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:27:33.486121099+00:00 stderr F I0813 20:27:33.485938 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:27:33.486121099+00:00 stderr F I0813 20:27:33.486059 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:27:33.676598736+00:00 stderr F I0813 20:27:33.676496 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:27:33.676658168+00:00 stderr F I0813 20:27:33.676624 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:27:33.884867211+00:00 stderr F I0813 20:27:33.884746 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was 
successful 2025-08-13T20:27:33.884867211+00:00 stderr F I0813 20:27:33.884851 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:27:34.117624456+00:00 stderr F I0813 20:27:34.117501 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:27:34.117624456+00:00 stderr F I0813 20:27:34.117581 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:27:34.274883432+00:00 stderr F I0813 20:27:34.274724 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:27:34.274883432+00:00 stderr F I0813 20:27:34.274855 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.475435797+00:00 stderr F I0813 20:27:34.474983 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.475435797+00:00 stderr F I0813 20:27:34.475083 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.674946292+00:00 stderr F I0813 20:27:34.674724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.674946292+00:00 stderr F I0813 20:27:34.674915 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.877277537+00:00 stderr F I0813 20:27:34.877183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.877430331+00:00 stderr F I0813 20:27:34.877415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:27:35.074172647+00:00 stderr F I0813 20:27:35.074074 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:27:35.074302281+00:00 stderr F I0813 20:27:35.074286 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:27:35.324443254+00:00 stderr F I0813 20:27:35.323637 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:27:35.324562067+00:00 stderr F I0813 20:27:35.324546 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:27:35.476392218+00:00 stderr F I0813 20:27:35.476338 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:27:35.476532242+00:00 stderr F I0813 20:27:35.476514 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:35.679270359+00:00 stderr F I0813 20:27:35.679214 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:35.679430624+00:00 stderr F I0813 20:27:35.679415 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:35.878030423+00:00 stderr F I0813 20:27:35.877335 1 log.go:245] Apply / Create of (/v1, Kind=Service) 
openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:35.878030423+00:00 stderr F I0813 20:27:35.877973 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:36.078351921+00:00 stderr F I0813 20:27:36.078295 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:36.078520286+00:00 stderr F I0813 20:27:36.078497 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:27:36.277145725+00:00 stderr F I0813 20:27:36.276983 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:27:36.277145725+00:00 stderr F I0813 20:27:36.277125 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:27:36.473252673+00:00 stderr F I0813 20:27:36.473140 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:27:36.473252673+00:00 stderr F I0813 20:27:36.473234 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:27:36.682613589+00:00 stderr F I0813 20:27:36.682452 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:27:36.682848456+00:00 stderr F I0813 20:27:36.682747 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:27:36.875639039+00:00 stderr F I0813 20:27:36.875564 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:27:36.875764662+00:00 stderr F I0813 20:27:36.875750 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:27:37.077621424+00:00 stderr F I0813 20:27:37.077553 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:27:37.077683456+00:00 stderr F I0813 20:27:37.077622 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:27:37.275223275+00:00 stderr F I0813 20:27:37.275132 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:27:37.275264046+00:00 stderr F I0813 20:27:37.275234 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:27:37.476175620+00:00 stderr F I0813 20:27:37.475378 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:27:37.476175620+00:00 stderr F I0813 20:27:37.475454 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:37.675347645+00:00 stderr F I0813 20:27:37.675220 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:37.675347645+00:00 stderr F I0813 20:27:37.675305 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:27:37.876045223+00:00 stderr F I0813 20:27:37.875901 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:27:37.876045223+00:00 stderr F I0813 20:27:37.876018 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:27:38.075255540+00:00 stderr F I0813 20:27:38.075132 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:27:38.075255540+00:00 stderr F I0813 20:27:38.075206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:27:38.275369412+00:00 stderr F I0813 20:27:38.275255 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:27:38.275369412+00:00 stderr F I0813 20:27:38.275327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:27:38.476051670+00:00 stderr F I0813 20:27:38.475905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:27:38.476093981+00:00 stderr F I0813 20:27:38.476044 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:27:38.673141516+00:00 stderr F I0813 20:27:38.672967 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:27:38.673141516+00:00 stderr F I0813 20:27:38.673065 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:27:38.874939536+00:00 stderr F I0813 20:27:38.874841 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:27:38.874983707+00:00 stderr F I0813 20:27:38.874942 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:39.080548935+00:00 stderr F I0813 20:27:39.080478 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:39.080742101+00:00 stderr F I0813 20:27:39.080726 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:27:39.275239472+00:00 stderr F I0813 20:27:39.275107 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:27:39.275320675+00:00 stderr F I0813 20:27:39.275260 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:39.479622147+00:00 stderr F I0813 20:27:39.479412 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:39.479622147+00:00 stderr F I0813 
20:27:39.479482 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:27:39.678976277+00:00 stderr F I0813 20:27:39.678915 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:27:39.679137262+00:00 stderr F I0813 20:27:39.679119 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:27:39.874529669+00:00 stderr F I0813 20:27:39.874368 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:27:39.874529669+00:00 stderr F I0813 20:27:39.874466 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:27:40.074724013+00:00 stderr F I0813 20:27:40.074608 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:27:40.074724013+00:00 stderr F I0813 20:27:40.074684 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:27:40.277202343+00:00 stderr F I0813 20:27:40.277103 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:27:40.277240424+00:00 stderr F I0813 20:27:40.277203 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:27:40.475867144+00:00 stderr F I0813 20:27:40.475539 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:27:40.476030788+00:00 stderr F I0813 20:27:40.475980 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:27:40.684934542+00:00 stderr F I0813 20:27:40.684749 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:27:40.717508413+00:00 stderr F I0813 20:27:40.717296 1 log.go:245] Operconfig Controller complete 2025-08-13T20:28:15.445211430+00:00 stderr F I0813 20:28:15.444915 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:30:15.103488261+00:00 stderr F I0813 20:30:15.102262 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.120589893+00:00 stderr F I0813 20:30:15.120430 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.131220018+00:00 stderr F I0813 20:30:15.131144 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.144720936+00:00 stderr F I0813 20:30:15.144172 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.162873468+00:00 stderr F I0813 20:30:15.160154 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n 
openshift-network-diagnostics is applied 2025-08-13T20:30:15.176998854+00:00 stderr F I0813 20:30:15.175634 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.238151032+00:00 stderr F I0813 20:30:15.235906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.253899975+00:00 stderr F I0813 20:30:15.253546 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.271076388+00:00 stderr F I0813 20:30:15.270870 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.319889072+00:00 stderr F I0813 20:30:15.319191 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.333916245+00:00 stderr F I0813 20:30:15.332979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.501238095+00:00 stderr F I0813 20:30:15.501167 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.701316776+00:00 stderr F I0813 20:30:15.700247 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.899667878+00:00 stderr F I0813 20:30:15.899606 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.105764482+00:00 stderr F I0813 20:30:16.105626 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.308067017+00:00 stderr F I0813 20:30:16.307675 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.500222871+00:00 stderr F I0813 20:30:16.500105 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.707581772+00:00 stderr F I0813 20:30:16.707413 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.765385643+00:00 stderr F I0813 20:30:16.765326 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:30:16.766400673+00:00 stderr F I0813 20:30:16.766196 1 log.go:245] successful reconciliation 2025-08-13T20:30:16.903174684+00:00 stderr F I0813 20:30:16.903121 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.102322299+00:00 stderr F I0813 20:30:17.101879 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.308739373+00:00 stderr F I0813 20:30:17.307947 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.504629714+00:00 stderr F I0813 20:30:17.503536 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.700326829+00:00 stderr F I0813 20:30:17.700165 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.902967744+00:00 stderr F I0813 20:30:17.902587 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:18.101906923+00:00 stderr F I0813 20:30:18.101174 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:18.146436613+00:00 stderr F I0813 20:30:18.146369 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:30:18.147537174+00:00 stderr F I0813 20:30:18.147506 1 log.go:245] successful reconciliation 2025-08-13T20:30:18.301902702+00:00 stderr F I0813 20:30:18.301414 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:18.497639668+00:00 stderr F I0813 20:30:18.497531 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:19.369631973+00:00 stderr F I0813 20:30:19.368156 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:30:19.369722486+00:00 stderr F I0813 20:30:19.369627 1 log.go:245] successful reconciliation 2025-08-13T20:30:40.720631459+00:00 stderr F I0813 20:30:40.720502 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:30:41.019113419+00:00 stderr F I0813 20:30:41.018948 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:30:41.023017072+00:00 stderr F I0813 20:30:41.022967 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:30:41.027077108+00:00 stderr F I0813 20:30:41.026934 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:30:41.027077108+00:00 stderr F I0813 20:30:41.026973 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a58f80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:30:41.033230705+00:00 stderr F I0813 20:30:41.033178 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:30:41.033311127+00:00 stderr F I0813 
20:30:41.033296 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:30:41.033345358+00:00 stderr F I0813 20:30:41.033333 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:30:41.037094376+00:00 stderr F I0813 20:30:41.037018 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:30:41.037164898+00:00 stderr F I0813 20:30:41.037147 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:30:41.037224280+00:00 stderr F I0813 20:30:41.037206 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:30:41.037263221+00:00 stderr F I0813 20:30:41.037248 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:30:41.037358094+00:00 stderr F I0813 20:30:41.037340 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:30:41.053752475+00:00 stderr F I0813 20:30:41.053611 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:30:41.072225106+00:00 stderr F I0813 20:30:41.072124 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:30:41.072312749+00:00 stderr F I0813 20:30:41.072299 1 log.go:245] Starting render phase 2025-08-13T20:30:41.085439336+00:00 stderr F I0813 20:30:41.085383 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:30:41.132865619+00:00 stderr F I0813 20:30:41.132699 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:30:41.132865619+00:00 stderr F I0813 20:30:41.132756 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:30:41.132968252+00:00 stderr F I0813 20:30:41.132901 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:30:41.132968252+00:00 stderr F I0813 20:30:41.132950 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:30:41.151236047+00:00 stderr F I0813 20:30:41.151108 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:30:41.151236047+00:00 stderr F I0813 20:30:41.151155 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:30:41.164714875+00:00 stderr F I0813 20:30:41.164602 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:30:41.181887598+00:00 stderr F I0813 20:30:41.181699 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:30:41.189108226+00:00 stderr F I0813 20:30:41.188930 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:30:41.189108226+00:00 stderr F I0813 20:30:41.189004 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:30:41.203751137+00:00 stderr F I0813 
20:30:41.203609 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:30:41.203751137+00:00 stderr F I0813 20:30:41.203682 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:30:41.213886538+00:00 stderr F I0813 20:30:41.213712 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:30:41.213886538+00:00 stderr F I0813 20:30:41.213845 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:30:41.224961757+00:00 stderr F I0813 20:30:41.224729 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:30:41.224961757+00:00 stderr F I0813 20:30:41.224884 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:30:41.231178145+00:00 stderr F I0813 20:30:41.231089 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:30:41.231178145+00:00 stderr F I0813 20:30:41.231141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:30:41.236415726+00:00 stderr F I0813 20:30:41.236325 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:30:41.236415726+00:00 stderr F I0813 20:30:41.236383 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:30:41.241243505+00:00 stderr F I0813 20:30:41.241148 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:30:41.241243505+00:00 stderr F I0813 20:30:41.241199 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:30:41.246169636+00:00 stderr F I0813 20:30:41.246013 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:30:41.246169636+00:00 stderr F I0813 20:30:41.246093 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:30:41.251924512+00:00 stderr F I0813 20:30:41.251822 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:30:41.251924512+00:00 stderr F I0813 20:30:41.251873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:30:41.265869343+00:00 stderr F I0813 20:30:41.265736 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:30:41.265869343+00:00 stderr F I0813 20:30:41.265849 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:30:41.466570622+00:00 stderr F I0813 20:30:41.466472 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:30:41.466570622+00:00 stderr F I0813 20:30:41.466545 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:30:41.669875076+00:00 stderr F I0813 20:30:41.668975 1 log.go:245] 
Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:30:41.669875076+00:00 stderr F I0813 20:30:41.669075 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:30:41.871059470+00:00 stderr F I0813 20:30:41.870961 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:30:41.871245545+00:00 stderr F I0813 20:30:41.871225 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:30:42.066910000+00:00 stderr F I0813 20:30:42.066720 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:30:42.066982132+00:00 stderr F I0813 20:30:42.066927 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:30:42.265595481+00:00 stderr F I0813 20:30:42.265493 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:30:42.265595481+00:00 stderr F I0813 20:30:42.265566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:30:42.467688321+00:00 stderr F I0813 20:30:42.467458 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:30:42.467688321+00:00 stderr F I0813 20:30:42.467506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:30:42.664977302+00:00 stderr F I0813 20:30:42.664915 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:30:42.664977302+00:00 stderr F I0813 20:30:42.664963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:30:42.865076634+00:00 stderr F I0813 20:30:42.864938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:30:42.865076634+00:00 stderr F I0813 20:30:42.865013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:30:43.064592409+00:00 stderr F I0813 20:30:43.064424 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:30:43.064592409+00:00 stderr F I0813 20:30:43.064503 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:30:43.266429742+00:00 stderr F I0813 20:30:43.266315 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:30:43.266429742+00:00 stderr F I0813 20:30:43.266384 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:30:43.467720158+00:00 stderr F I0813 20:30:43.467559 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:30:43.467720158+00:00 stderr F I0813 20:30:43.467643 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:30:43.682762459+00:00 stderr F I0813 20:30:43.682707 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was 
successful 2025-08-13T20:30:43.682942494+00:00 stderr F I0813 20:30:43.682926 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:30:43.901669821+00:00 stderr F I0813 20:30:43.899014 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:30:43.901669821+00:00 stderr F I0813 20:30:43.899096 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:30:44.067638902+00:00 stderr F I0813 20:30:44.067574 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:30:44.067746905+00:00 stderr F I0813 20:30:44.067732 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:30:44.268109565+00:00 stderr F I0813 20:30:44.268021 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:30:44.268392203+00:00 stderr F I0813 20:30:44.268377 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:30:44.467892528+00:00 stderr F I0813 20:30:44.466314 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:30:44.467892528+00:00 stderr F I0813 20:30:44.466385 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:30:44.672502490+00:00 stderr F I0813 20:30:44.672371 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:30:44.672502490+00:00 stderr F I0813 20:30:44.672441 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:30:44.866728623+00:00 stderr F I0813 20:30:44.866626 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:30:44.866728623+00:00 stderr F I0813 20:30:44.866699 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:30:45.068913005+00:00 stderr F I0813 20:30:45.068734 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:30:45.068913005+00:00 stderr F I0813 20:30:45.068884 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:30:45.266746562+00:00 stderr F I0813 20:30:45.266691 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:45.267020740+00:00 stderr F I0813 20:30:45.266993 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:30:45.467309668+00:00 stderr F I0813 20:30:45.467206 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:45.467309668+00:00 stderr F I0813 20:30:45.467294 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:30:45.668306165+00:00 stderr F I0813 20:30:45.667615 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 
2025-08-13T20:30:45.668306165+00:00 stderr F I0813 20:30:45.668254 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:30:45.868098279+00:00 stderr F I0813 20:30:45.867943 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:30:45.868159231+00:00 stderr F I0813 20:30:45.868026 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:30:46.066620786+00:00 stderr F I0813 20:30:46.066456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:30:46.066620786+00:00 stderr F I0813 20:30:46.066532 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:30:46.266411099+00:00 stderr F I0813 20:30:46.266159 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:30:46.266411099+00:00 stderr F I0813 20:30:46.266255 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:30:46.476275982+00:00 stderr F I0813 20:30:46.476164 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:30:46.476275982+00:00 stderr F I0813 20:30:46.476242 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:30:46.669208458+00:00 stderr F I0813 20:30:46.669152 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:30:46.669312791+00:00 stderr F I0813 20:30:46.669298 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:30:46.866456978+00:00 stderr F I0813 20:30:46.865708 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:30:46.866456978+00:00 stderr F I0813 20:30:46.866417 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:30:47.065006916+00:00 stderr F I0813 20:30:47.064927 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:47.065006916+00:00 stderr F I0813 20:30:47.064995 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:30:47.266742004+00:00 stderr F I0813 20:30:47.266681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:47.266862928+00:00 stderr F I0813 20:30:47.266745 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:30:47.470235984+00:00 stderr F I0813 20:30:47.470126 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:30:47.470235984+00:00 stderr F I0813 20:30:47.470209 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:30:47.666509226+00:00 stderr F I0813 20:30:47.666430 1 
log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:30:47.666509226+00:00 stderr F I0813 20:30:47.666477 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:30:47.875247236+00:00 stderr F I0813 20:30:47.875142 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:30:47.875247236+00:00 stderr F I0813 20:30:47.875215 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:30:48.071547479+00:00 stderr F I0813 20:30:48.071454 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:30:48.071547479+00:00 stderr F I0813 20:30:48.071525 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:30:48.272230598+00:00 stderr F I0813 20:30:48.272099 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:30:48.272230598+00:00 stderr F I0813 20:30:48.272166 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:30:48.476084348+00:00 stderr F I0813 20:30:48.475878 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:30:48.476084348+00:00 stderr F I0813 20:30:48.475947 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:30:48.670865127+00:00 stderr F I0813 20:30:48.670753 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:30:48.670932099+00:00 stderr F I0813 20:30:48.670882 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:30:48.907618253+00:00 stderr F I0813 20:30:48.907289 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:30:48.907618253+00:00 stderr F I0813 20:30:48.907398 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:30:49.098693795+00:00 stderr F I0813 20:30:49.098640 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:30:49.098886131+00:00 stderr F I0813 20:30:49.098866 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:30:49.265969404+00:00 stderr F I0813 20:30:49.265883 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:30:49.265969404+00:00 stderr F I0813 20:30:49.265948 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:30:49.472286255+00:00 stderr F I0813 20:30:49.470468 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:30:49.472286255+00:00 stderr F I0813 20:30:49.470607 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:30:49.667573459+00:00 stderr F I0813 20:30:49.667472 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:30:49.667630500+00:00 stderr F I0813 20:30:49.667601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:30:49.866920589+00:00 stderr F I0813 20:30:49.866592 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:30:49.866920589+00:00 stderr F I0813 20:30:49.866668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:30:50.067448213+00:00 stderr F I0813 20:30:50.067332 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:30:50.067448213+00:00 stderr F I0813 20:30:50.067404 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:30:50.270901692+00:00 stderr F I0813 20:30:50.269555 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:30:50.270901692+00:00 stderr F I0813 20:30:50.269634 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:30:50.465650800+00:00 stderr F I0813 20:30:50.465403 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:30:50.465650800+00:00 stderr F I0813 20:30:50.465480 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:30:50.664696432+00:00 stderr F I0813 20:30:50.664595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:30:50.664696432+00:00 stderr F I0813 20:30:50.664675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:30:50.864598677+00:00 stderr F I0813 20:30:50.864493 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:30:50.864598677+00:00 stderr F I0813 20:30:50.864558 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.068003334+00:00 stderr F I0813 20:30:51.067922 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.068003334+00:00 stderr F I0813 20:30:51.067976 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.268356824+00:00 stderr F I0813 20:30:51.268193 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.268356824+00:00 stderr F I0813 20:30:51.268267 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.465676606+00:00 stderr F I0813 20:30:51.465523 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.465676606+00:00 stderr F I0813 20:30:51.465609 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.667382744+00:00 stderr F I0813 20:30:51.667285 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.667382744+00:00 stderr F I0813 20:30:51.667357 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:30:51.872449269+00:00 stderr F I0813 20:30:51.872009 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:30:51.872449269+00:00 stderr F I0813 20:30:51.872198 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:30:52.067623169+00:00 stderr F I0813 20:30:52.067490 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:30:52.067623169+00:00 stderr F I0813 20:30:52.067578 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:30:52.266402163+00:00 stderr F I0813 20:30:52.266348 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:30:52.266521707+00:00 stderr F I0813 20:30:52.266506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:30:52.465656291+00:00 stderr F I0813 20:30:52.465489 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:30:52.465873007+00:00 stderr F I0813 20:30:52.465847 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:30:52.667109222+00:00 stderr F I0813 20:30:52.667051 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:30:52.667232086+00:00 stderr F I0813 20:30:52.667218 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:30:52.870098207+00:00 stderr F I0813 20:30:52.869973 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:30:52.870146849+00:00 stderr F I0813 20:30:52.870093 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:30:53.069201471+00:00 stderr F I0813 20:30:53.069143 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) 
openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:30:53.069335705+00:00 stderr F I0813 20:30:53.069321 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:30:53.269534440+00:00 stderr F I0813 20:30:53.269479 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:30:53.269729705+00:00 stderr F I0813 20:30:53.269682 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:30:53.465182654+00:00 stderr F I0813 20:30:53.465126 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:30:53.465304477+00:00 stderr F I0813 20:30:53.465290 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:30:53.668848778+00:00 stderr F I0813 20:30:53.668682 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:30:53.668913010+00:00 stderr F I0813 20:30:53.668852 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:30:53.868876008+00:00 stderr F I0813 20:30:53.868747 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:30:53.868934110+00:00 stderr F I0813 20:30:53.868885 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:30:54.068198808+00:00 stderr F I0813 20:30:54.068087 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:30:54.068198808+00:00 stderr F I0813 20:30:54.068153 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:30:54.268524407+00:00 stderr F I0813 20:30:54.268393 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:30:54.268524407+00:00 stderr F I0813 20:30:54.268475 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:30:54.465879089+00:00 stderr F I0813 20:30:54.465370 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:30:54.465879089+00:00 stderr F I0813 20:30:54.465444 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:30:54.670215923+00:00 stderr F I0813 20:30:54.670087 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:30:54.670215923+00:00 stderr F I0813 20:30:54.670201 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:30:54.867530015+00:00 stderr F I0813 20:30:54.867414 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:30:54.867530015+00:00 stderr F I0813 20:30:54.867497 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:30:55.074336970+00:00 stderr F I0813 
20:30:55.074217 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:30:55.074336970+00:00 stderr F I0813 20:30:55.074299 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:30:55.287739444+00:00 stderr F I0813 20:30:55.287549 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:30:55.287739444+00:00 stderr F I0813 20:30:55.287648 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:30:55.466752630+00:00 stderr F I0813 20:30:55.466509 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:30:55.467058379+00:00 stderr F I0813 20:30:55.466932 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:30:55.667846611+00:00 stderr F I0813 20:30:55.667238 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:30:55.668121898+00:00 stderr F I0813 20:30:55.668065 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:30:55.865593585+00:00 stderr F I0813 20:30:55.865535 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:30:55.865732009+00:00 stderr F I0813 20:30:55.865716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:30:56.065572483+00:00 stderr F I0813 20:30:56.065470 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:30:56.065572483+00:00 stderr F I0813 20:30:56.065543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:30:56.265318505+00:00 stderr F I0813 20:30:56.265244 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:30:56.265318505+00:00 stderr F I0813 20:30:56.265297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:30:56.464912203+00:00 stderr F I0813 20:30:56.464767 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:30:56.465016976+00:00 stderr F I0813 20:30:56.465001 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:30:56.667616009+00:00 stderr F I0813 20:30:56.667003 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:30:56.667877337+00:00 stderr F I0813 20:30:56.667855 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:30:56.872565761+00:00 stderr F I0813 20:30:56.872507 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:30:56.872705995+00:00 stderr F I0813 20:30:56.872686 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 
2025-08-13T20:30:57.069824261+00:00 stderr F I0813 20:30:57.069557 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:30:57.069824261+00:00 stderr F I0813 20:30:57.069631 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:30:57.267319578+00:00 stderr F I0813 20:30:57.267193 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:30:57.267319578+00:00 stderr F I0813 20:30:57.267269 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:30:57.467141612+00:00 stderr F I0813 20:30:57.466963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:30:57.467141612+00:00 stderr F I0813 20:30:57.467073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:30:57.665717791+00:00 stderr F I0813 20:30:57.665665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:30:57.665977288+00:00 stderr F I0813 20:30:57.665939 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:30:57.871306161+00:00 stderr F I0813 20:30:57.871173 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:30:57.871306161+00:00 stderr F I0813 20:30:57.871241 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:30:58.065934614+00:00 stderr F I0813 20:30:58.065885 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:30:58.066085039+00:00 stderr F I0813 20:30:58.066066 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:30:58.264672847+00:00 stderr F I0813 20:30:58.264573 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:30:58.264672847+00:00 stderr F I0813 20:30:58.264642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:30:58.465591573+00:00 stderr F I0813 20:30:58.465490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:30:58.465735327+00:00 stderr F I0813 20:30:58.465714 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:30:58.667376133+00:00 stderr F I0813 20:30:58.667275 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:30:58.667376133+00:00 stderr F I0813 20:30:58.667355 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:30:58.865918911+00:00 stderr F I0813 20:30:58.865764 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) 
openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:30:58.865918911+00:00 stderr F I0813 20:30:58.865873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:30:59.065382604+00:00 stderr F I0813 20:30:59.065328 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:30:59.065550759+00:00 stderr F I0813 20:30:59.065531 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:30:59.266353811+00:00 stderr F I0813 20:30:59.266260 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:30:59.266526286+00:00 stderr F I0813 20:30:59.266505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:30:59.467545325+00:00 stderr F I0813 20:30:59.467453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:30:59.467727930+00:00 stderr F I0813 20:30:59.467713 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:30:59.668898103+00:00 stderr F I0813 20:30:59.668676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:30:59.668898103+00:00 stderr F I0813 20:30:59.668761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:30:59.867370678+00:00 stderr F I0813 20:30:59.866937 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:30:59.867370678+00:00 stderr F I0813 20:30:59.867013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:31:00.067346366+00:00 stderr F I0813 20:31:00.067238 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:31:00.067346366+00:00 stderr F I0813 20:31:00.067294 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:31:00.267185481+00:00 stderr F I0813 20:31:00.266536 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:31:00.267185481+00:00 stderr F I0813 20:31:00.266619 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:31:00.468525989+00:00 stderr F I0813 20:31:00.468416 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:31:00.468525989+00:00 stderr F I0813 20:31:00.468500 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:31:00.679869654+00:00 stderr F I0813 20:31:00.677152 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:31:00.679869654+00:00 stderr F I0813 20:31:00.677235 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:31:00.866958832+00:00 stderr F I0813 20:31:00.866765 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:31:00.867020224+00:00 stderr F I0813 20:31:00.866957 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:31:01.066267221+00:00 stderr F I0813 20:31:01.066207 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:31:01.066368744+00:00 stderr F I0813 20:31:01.066354 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:31:01.266406504+00:00 stderr F I0813 20:31:01.266312 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:31:01.266406504+00:00 stderr F I0813 20:31:01.266382 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:31:01.468601207+00:00 stderr F I0813 20:31:01.468511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:31:01.468601207+00:00 stderr F I0813 20:31:01.468561 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:31:01.668015968+00:00 stderr F I0813 20:31:01.667907 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:31:01.668015968+00:00 stderr F I0813 20:31:01.667978 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:31:01.874729560+00:00 stderr F I0813 20:31:01.874545 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:31:01.897344430+00:00 stderr F I0813 20:31:01.897256 1 log.go:245] Operconfig Controller complete 2025-08-13T20:31:15.463724055+00:00 stderr F I0813 20:31:15.463641 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:34:01.898846109+00:00 stderr F I0813 20:34:01.898266 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:34:02.347061513+00:00 stderr F I0813 20:34:02.346995 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:34:02.349605686+00:00 stderr F I0813 20:34:02.349583 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:34:02.353003844+00:00 stderr F I0813 20:34:02.352974 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:34:02.353157229+00:00 stderr F I0813 20:34:02.353045 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002c81300 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] 
SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:34:02.358610565+00:00 stderr F I0813 20:34:02.358578 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:34:02.358666277+00:00 stderr F I0813 20:34:02.358653 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:34:02.358710538+00:00 stderr F I0813 20:34:02.358698 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:34:02.362291441+00:00 stderr F I0813 20:34:02.362252 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:34:02.362351813+00:00 stderr F I0813 20:34:02.362339 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:34:02.362383404+00:00 stderr F I0813 20:34:02.362371 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:34:02.362413865+00:00 stderr F I0813 20:34:02.362401 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:34:02.362584459+00:00 stderr F I0813 20:34:02.362561 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:34:02.376896681+00:00 stderr F I0813 20:34:02.376855 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:34:02.394217089+00:00 stderr F I0813 20:34:02.394175 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:34:02.394371933+00:00 stderr F I0813 20:34:02.394355 1 log.go:245] Starting render phase 2025-08-13T20:34:02.408479919+00:00 stderr F I0813 20:34:02.408389 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459339 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459359 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459390 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459415 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:34:02.481868178+00:00 stderr F I0813 20:34:02.477364 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:34:02.481868178+00:00 stderr F I0813 20:34:02.477405 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:34:02.490105455+00:00 stderr F I0813 20:34:02.490028 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:34:02.516704930+00:00 stderr F I0813 20:34:02.515292 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:34:02.525922575+00:00 stderr F I0813 20:34:02.525857 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:34:02.526061659+00:00 stderr F I0813 20:34:02.526039 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:34:02.538826236+00:00 stderr F I0813 20:34:02.538702 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:34:02.538885037+00:00 stderr F I0813 20:34:02.538825 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:34:02.549844182+00:00 stderr F I0813 20:34:02.549367 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:34:02.549844182+00:00 stderr F I0813 20:34:02.549439 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:34:02.557690168+00:00 stderr F I0813 20:34:02.557571 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:34:02.557690168+00:00 stderr F I0813 20:34:02.557644 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:34:02.564064961+00:00 stderr F I0813 20:34:02.563968 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:34:02.564064961+00:00 stderr F I0813 20:34:02.564018 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:34:02.571366011+00:00 stderr F I0813 20:34:02.571235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:34:02.571528926+00:00 stderr F I0813 20:34:02.571361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:34:02.580674579+00:00 stderr F I0813 20:34:02.580567 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:34:02.580925406+00:00 stderr F I0813 20:34:02.580869 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:34:02.590179692+00:00 stderr F I0813 20:34:02.590113 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:34:02.590179692+00:00 stderr F I0813 20:34:02.590160 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:34:02.596106172+00:00 stderr F I0813 20:34:02.595989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:34:02.596217205+00:00 stderr F I0813 20:34:02.596144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:34:02.605350648+00:00 stderr F I0813 20:34:02.605284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:34:02.605471461+00:00 stderr F I0813 20:34:02.605412 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:34:02.791759386+00:00 stderr F I0813 20:34:02.791696 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:34:02.791857339+00:00 stderr F I0813 20:34:02.791826 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:34:02.992460636+00:00 stderr F I0813 20:34:02.992338 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:34:02.992460636+00:00 stderr F I0813 20:34:02.992427 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:34:03.190330584+00:00 stderr F I0813 20:34:03.190203 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:34:03.190330584+00:00 stderr F I0813 20:34:03.190270 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:34:03.392275479+00:00 stderr F I0813 20:34:03.391893 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:34:03.392275479+00:00 stderr F I0813 20:34:03.391985 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:34:03.590293521+00:00 stderr F I0813 20:34:03.590185 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:34:03.590293521+00:00 stderr F I0813 20:34:03.590276 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:34:03.793694477+00:00 stderr F I0813 20:34:03.793546 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:34:03.793694477+00:00 stderr F I0813 20:34:03.793631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/whereabouts-cni 2025-08-13T20:34:03.991153244+00:00 stderr F I0813 20:34:03.990960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:34:03.991420951+00:00 stderr F I0813 20:34:03.991281 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:34:04.193105408+00:00 stderr F I0813 20:34:04.192850 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:34:04.194444156+00:00 stderr F I0813 20:34:04.193990 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:34:04.390930445+00:00 stderr F I0813 20:34:04.390837 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:34:04.390930445+00:00 stderr F I0813 20:34:04.390908 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:34:04.592739116+00:00 stderr F I0813 20:34:04.592578 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:34:04.592739116+00:00 stderr F I0813 20:34:04.592648 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:34:04.791824868+00:00 stderr F I0813 20:34:04.791398 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:34:04.791824868+00:00 stderr F I0813 20:34:04.791478 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:34:05.006237832+00:00 stderr F I0813 20:34:05.006111 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:34:05.006237832+00:00 stderr F I0813 20:34:05.006213 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:34:05.205585162+00:00 stderr F I0813 20:34:05.205455 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:34:05.205585162+00:00 stderr F I0813 20:34:05.205563 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:34:05.391462976+00:00 stderr F I0813 20:34:05.391375 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:34:05.391462976+00:00 stderr F I0813 20:34:05.391447 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:34:05.592371751+00:00 stderr F I0813 20:34:05.592168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:34:05.592371751+00:00 stderr F I0813 20:34:05.592327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:34:05.792461253+00:00 stderr F I0813 20:34:05.792350 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:34:05.792461253+00:00 stderr F I0813 20:34:05.792418 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:34:05.997855747+00:00 stderr F I0813 20:34:05.997661 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:34:05.997855747+00:00 stderr F I0813 20:34:05.997731 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:34:06.192750000+00:00 stderr F I0813 20:34:06.192023 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:34:06.192750000+00:00 stderr F I0813 20:34:06.192131 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:34:06.405499526+00:00 stderr F I0813 20:34:06.405402 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:34:06.405499526+00:00 stderr F I0813 20:34:06.405474 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:34:06.591590125+00:00 stderr F I0813 20:34:06.591475 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:06.591590125+00:00 stderr F I0813 20:34:06.591543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:34:06.791561694+00:00 stderr F I0813 20:34:06.791434 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:06.791599175+00:00 stderr F I0813 20:34:06.791554 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:34:06.990274817+00:00 stderr F I0813 20:34:06.990146 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:34:06.990274817+00:00 stderr F I0813 20:34:06.990229 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:34:07.193212091+00:00 stderr F I0813 20:34:07.193041 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:34:07.193212091+00:00 stderr F I0813 20:34:07.193178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:34:07.390697817+00:00 stderr F I0813 20:34:07.390573 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:34:07.390697817+00:00 stderr F I0813 20:34:07.390674 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:34:07.590741787+00:00 stderr F I0813 20:34:07.590633 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:34:07.590741787+00:00 stderr F I0813 20:34:07.590711 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:34:07.793677420+00:00 stderr F I0813 20:34:07.793387 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:34:07.793677420+00:00 stderr F I0813 20:34:07.793517 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:34:07.997853869+00:00 stderr F I0813 20:34:07.997702 1 log.go:245] 
Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:34:07.997900471+00:00 stderr F I0813 20:34:07.997890 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:34:08.192646179+00:00 stderr F I0813 20:34:08.192560 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:34:08.192688900+00:00 stderr F I0813 20:34:08.192641 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:34:08.391741752+00:00 stderr F I0813 20:34:08.391636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:08.391741752+00:00 stderr F I0813 20:34:08.391731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:34:08.590589948+00:00 stderr F I0813 20:34:08.590476 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:34:08.590589948+00:00 stderr F I0813 20:34:08.590547 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:34:08.792295437+00:00 stderr F I0813 20:34:08.792186 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:34:08.792295437+00:00 stderr F I0813 20:34:08.792256 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:34:08.994584142+00:00 stderr F I0813 20:34:08.994441 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:34:08.994584142+00:00 stderr F I0813 20:34:08.994536 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:34:09.200691657+00:00 stderr F I0813 20:34:09.200536 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:34:09.200691657+00:00 stderr F I0813 20:34:09.200660 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:34:09.401433477+00:00 stderr F I0813 20:34:09.401329 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:34:09.401433477+00:00 stderr F I0813 20:34:09.401420 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:34:09.603680051+00:00 stderr F I0813 20:34:09.602910 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:34:09.603680051+00:00 stderr F I0813 20:34:09.603043 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:34:09.803002570+00:00 stderr F I0813 20:34:09.802890 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:34:09.803002570+00:00 stderr F I0813 20:34:09.802975 1 log.go:245] reconciling 
(apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:34:09.996217015+00:00 stderr F I0813 20:34:09.995977 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:34:09.996217015+00:00 stderr F I0813 20:34:09.996101 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:34:10.230448068+00:00 stderr F I0813 20:34:10.229725 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:34:10.230448068+00:00 stderr F I0813 20:34:10.230408 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:34:10.428630135+00:00 stderr F I0813 20:34:10.428498 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:34:10.428630135+00:00 stderr F I0813 20:34:10.428569 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:34:10.589824859+00:00 stderr F I0813 20:34:10.589701 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:34:10.589869290+00:00 stderr F I0813 20:34:10.589847 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:34:10.792166955+00:00 stderr F I0813 20:34:10.792015 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:34:10.792166955+00:00 stderr F I0813 20:34:10.792135 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:34:10.990958880+00:00 stderr F I0813 20:34:10.990854 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:34:10.990958880+00:00 stderr F I0813 20:34:10.990926 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:34:11.194214743+00:00 stderr F I0813 20:34:11.194059 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:34:11.194214743+00:00 stderr F I0813 20:34:11.194150 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:34:11.391465992+00:00 stderr F I0813 20:34:11.391317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:34:11.391465992+00:00 stderr F I0813 20:34:11.391386 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:34:11.590546464+00:00 stderr F I0813 20:34:11.590394 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 
2025-08-13T20:34:11.590546464+00:00 stderr F I0813 20:34:11.590443 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:34:11.792278223+00:00 stderr F I0813 20:34:11.792183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:34:11.792278223+00:00 stderr F I0813 20:34:11.792257 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:34:11.994766304+00:00 stderr F I0813 20:34:11.994636 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:34:11.994878057+00:00 stderr F I0813 20:34:11.994733 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:34:12.189735708+00:00 stderr F I0813 20:34:12.189641 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:34:12.189735708+00:00 stderr F I0813 20:34:12.189727 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.393122385+00:00 stderr F I0813 20:34:12.392978 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.393122385+00:00 stderr F I0813 20:34:12.393100 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.590111908+00:00 stderr F I0813 20:34:12.589937 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.590111908+00:00 stderr F I0813 20:34:12.590006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.790832658+00:00 stderr F I0813 20:34:12.790703 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.790873779+00:00 stderr F I0813 20:34:12.790835 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.990068935+00:00 stderr F I0813 20:34:12.989969 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.990068935+00:00 stderr F I0813 20:34:12.990038 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:34:13.193573535+00:00 stderr F I0813 20:34:13.193472 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:34:13.193573535+00:00 stderr F I0813 20:34:13.193543 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:34:13.392244116+00:00 stderr F I0813 20:34:13.392125 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was 
successful 2025-08-13T20:34:13.392244116+00:00 stderr F I0813 20:34:13.392208 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:34:13.592920595+00:00 stderr F I0813 20:34:13.592819 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:34:13.592920595+00:00 stderr F I0813 20:34:13.592891 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:34:13.791756610+00:00 stderr F I0813 20:34:13.791647 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:34:13.791756610+00:00 stderr F I0813 20:34:13.791744 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:34:13.994705574+00:00 stderr F I0813 20:34:13.994533 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:34:13.994705574+00:00 stderr F I0813 20:34:13.994646 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:34:14.198914114+00:00 stderr F I0813 20:34:14.198527 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:34:14.199246014+00:00 stderr F I0813 20:34:14.199219 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:34:14.396138834+00:00 stderr F I0813 20:34:14.395985 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:34:14.396138834+00:00 stderr F I0813 20:34:14.396098 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:34:14.591892783+00:00 stderr F I0813 20:34:14.591738 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:34:14.591939165+00:00 stderr F I0813 20:34:14.591899 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:34:14.791929321+00:00 stderr F I0813 20:34:14.791768 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:34:14.791992423+00:00 stderr F I0813 20:34:14.791889 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:34:14.999191288+00:00 stderr F I0813 20:34:14.999125 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:34:14.999324092+00:00 stderr F I0813 20:34:14.999307 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:34:15.193444792+00:00 stderr F I0813 20:34:15.193317 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:34:15.193444792+00:00 stderr F I0813 20:34:15.193407 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 
2025-08-13T20:34:15.391907047+00:00 stderr F I0813 20:34:15.391685 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:34:15.391907047+00:00 stderr F I0813 20:34:15.391845 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:34:15.472661219+00:00 stderr F I0813 20:34:15.472518 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:34:15.590637810+00:00 stderr F I0813 20:34:15.590511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:34:15.590637810+00:00 stderr F I0813 20:34:15.590611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:34:15.792261286+00:00 stderr F I0813 20:34:15.792159 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:34:15.792261286+00:00 stderr F I0813 20:34:15.792251 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:34:15.991879234+00:00 stderr F I0813 20:34:15.991680 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:34:15.991879234+00:00 stderr F I0813 20:34:15.991757 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:34:16.192925793+00:00 stderr F I0813 20:34:16.192759 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:34:16.193001135+00:00 stderr F I0813 20:34:16.192915 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:34:16.398388169+00:00 stderr F I0813 20:34:16.398285 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:34:16.398388169+00:00 stderr F I0813 20:34:16.398371 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:34:16.626511537+00:00 stderr F I0813 20:34:16.626381 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:34:16.626511537+00:00 stderr F I0813 20:34:16.626463 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:34:16.790697576+00:00 stderr F I0813 20:34:16.790576 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:34:16.790697576+00:00 stderr F I0813 20:34:16.790651 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:34:16.990958463+00:00 stderr F I0813 20:34:16.990747 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:34:16.990958463+00:00 stderr F I0813 20:34:16.990950 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:34:17.190279383+00:00 stderr F I0813 20:34:17.190166 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:34:17.190279383+00:00 stderr F 
I0813 20:34:17.190238 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:34:17.391147327+00:00 stderr F I0813 20:34:17.391009 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:34:17.391212799+00:00 stderr F I0813 20:34:17.391144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:34:17.593873105+00:00 stderr F I0813 20:34:17.593713 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:34:17.593957077+00:00 stderr F I0813 20:34:17.593935 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:34:17.790564209+00:00 stderr F I0813 20:34:17.790444 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:34:17.790564209+00:00 stderr F I0813 20:34:17.790539 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:34:17.990932369+00:00 stderr F I0813 20:34:17.990765 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:34:17.990932369+00:00 stderr F I0813 20:34:17.990915 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:34:18.197471136+00:00 stderr F I0813 20:34:18.197310 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:34:18.197471136+00:00 stderr F I0813 20:34:18.197417 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:34:18.395934641+00:00 stderr F I0813 20:34:18.395759 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:34:18.395934641+00:00 stderr F I0813 20:34:18.395897 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:34:18.595005712+00:00 stderr F I0813 20:34:18.594883 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:34:18.595005712+00:00 stderr F I0813 20:34:18.594963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:34:18.792035076+00:00 stderr F I0813 20:34:18.791938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:34:18.792035076+00:00 stderr F I0813 20:34:18.792008 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:34:18.991953233+00:00 stderr F I0813 20:34:18.991677 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:34:18.992025365+00:00 stderr F I0813 20:34:18.991976 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:34:19.199985023+00:00 stderr F I0813 
20:34:19.199520 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:34:19.199985023+00:00 stderr F I0813 20:34:19.199645 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:34:19.392242189+00:00 stderr F I0813 20:34:19.392145 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:34:19.392242189+00:00 stderr F I0813 20:34:19.392214 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:34:19.590447337+00:00 stderr F I0813 20:34:19.590324 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:34:19.590447337+00:00 stderr F I0813 20:34:19.590411 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:34:19.790043175+00:00 stderr F I0813 20:34:19.789893 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:34:19.790303302+00:00 stderr F I0813 20:34:19.790283 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:34:19.991140495+00:00 stderr F I0813 20:34:19.990982 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:34:19.991140495+00:00 stderr F I0813 20:34:19.991069 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:34:20.190366642+00:00 stderr F I0813 20:34:20.190237 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:34:20.190366642+00:00 stderr F I0813 20:34:20.190307 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:34:20.391485854+00:00 stderr F I0813 20:34:20.391364 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:34:20.391527305+00:00 stderr F I0813 20:34:20.391494 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:34:20.590484144+00:00 stderr F I0813 20:34:20.590359 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:34:20.590484144+00:00 stderr F I0813 20:34:20.590468 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:34:20.792546193+00:00 stderr F I0813 20:34:20.792493 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:34:20.792848241+00:00 stderr F I0813 20:34:20.792768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:34:20.993187900+00:00 stderr F I0813 20:34:20.992968 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was 
successful 2025-08-13T20:34:20.993187900+00:00 stderr F I0813 20:34:20.993038 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:34:21.190684068+00:00 stderr F I0813 20:34:21.190623 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:34:21.190863913+00:00 stderr F I0813 20:34:21.190838 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:34:21.391707346+00:00 stderr F I0813 20:34:21.391595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:34:21.391707346+00:00 stderr F I0813 20:34:21.391685 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:34:21.600415226+00:00 stderr F I0813 20:34:21.600355 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:34:21.600548830+00:00 stderr F I0813 20:34:21.600533 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:34:21.794062293+00:00 stderr F I0813 20:34:21.793997 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:34:21.794323100+00:00 stderr F I0813 20:34:21.794298 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:34:21.999035905+00:00 stderr F I0813 20:34:21.998981 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:34:21.999185979+00:00 stderr F I0813 20:34:21.999170 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:34:22.195112490+00:00 stderr F I0813 20:34:22.194997 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:34:22.195171272+00:00 stderr F I0813 20:34:22.195105 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:34:22.391537307+00:00 stderr F I0813 20:34:22.391375 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:34:22.391537307+00:00 stderr F I0813 20:34:22.391449 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:34:22.590581448+00:00 stderr F I0813 20:34:22.590471 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:34:22.590581448+00:00 stderr F I0813 20:34:22.590564 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:34:22.791332809+00:00 stderr F I0813 20:34:22.791203 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 
2025-08-13T20:34:22.791332809+00:00 stderr F I0813 20:34:22.791295 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:34:22.994927962+00:00 stderr F I0813 20:34:22.994657 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:34:22.994927962+00:00 stderr F I0813 20:34:22.994740 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:34:23.196711822+00:00 stderr F I0813 20:34:23.196646 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:34:23.217697855+00:00 stderr F I0813 20:34:23.217574 1 log.go:245] Operconfig Controller complete 2025-08-13T20:35:16.785492872+00:00 stderr F I0813 20:35:16.785143 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:35:16.787115989+00:00 stderr F I0813 20:35:16.787010 1 log.go:245] successful reconciliation 2025-08-13T20:35:18.170311261+00:00 stderr F I0813 20:35:18.170171 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:35:18.174400889+00:00 stderr F I0813 20:35:18.174311 1 log.go:245] successful reconciliation 2025-08-13T20:35:19.385155273+00:00 stderr F I0813 20:35:19.385001 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:35:19.385899974+00:00 stderr F I0813 20:35:19.385759 1 log.go:245] successful reconciliation 2025-08-13T20:37:15.481051441+00:00 stderr F I0813 20:37:15.480749 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:37:23.219393589+00:00 stderr F I0813 20:37:23.219216 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:37:23.607865939+00:00 stderr F I0813 20:37:23.605470 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:37:23.614544421+00:00 stderr F I0813 20:37:23.613518 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:37:23.629073360+00:00 stderr F I0813 20:37:23.627334 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:37:23.629073360+00:00 stderr F I0813 20:37:23.627379 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003acd900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.636944 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.637162 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.637178 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642346 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 
scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642392 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642416 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642423 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642612 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:37:23.695010751+00:00 stderr F I0813 20:37:23.694666 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:37:23.717315084+00:00 stderr F I0813 20:37:23.717177 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:37:23.717315084+00:00 stderr F I0813 20:37:23.717248 1 log.go:245] Starting render phase 2025-08-13T20:37:23.737492456+00:00 stderr F I0813 20:37:23.737362 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784170 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784204 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784244 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:37:23.784358457+00:00 stderr F I0813 20:37:23.784270 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:37:23.800890364+00:00 stderr F I0813 20:37:23.799868 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:37:23.800890364+00:00 stderr F I0813 20:37:23.799906 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:37:23.815894706+00:00 stderr F I0813 20:37:23.814725 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:37:23.842947236+00:00 stderr F I0813 20:37:23.842823 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:37:23.848839156+00:00 stderr F I0813 20:37:23.848721 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:37:23.848839156+00:00 stderr F I0813 20:37:23.848759 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:37:23.858370741+00:00 stderr F I0813 20:37:23.858253 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:37:23.858370741+00:00 stderr F I0813 20:37:23.858323 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:37:23.874865737+00:00 stderr F I0813 20:37:23.874692 1 log.go:245] Apply / 
Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:37:23.875210397+00:00 stderr F I0813 20:37:23.874977 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:37:23.887039078+00:00 stderr F I0813 20:37:23.886972 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:37:23.887209623+00:00 stderr F I0813 20:37:23.887190 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:37:23.894961726+00:00 stderr F I0813 20:37:23.894886 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:37:23.895193193+00:00 stderr F I0813 20:37:23.895165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:37:23.902042160+00:00 stderr F I0813 20:37:23.901964 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:37:23.902236726+00:00 stderr F I0813 20:37:23.902211 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:37:23.910201475+00:00 stderr F I0813 20:37:23.910042 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:37:23.910201475+00:00 stderr F I0813 20:37:23.910184 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:37:23.916724563+00:00 stderr F I0813 20:37:23.916579 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:37:23.916724563+00:00 stderr F I0813 20:37:23.916673 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:37:23.924086736+00:00 stderr F I0813 20:37:23.923638 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:37:23.924270911+00:00 stderr F I0813 20:37:23.924248 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:37:23.931562381+00:00 stderr F I0813 20:37:23.931519 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:37:23.931682785+00:00 stderr F I0813 20:37:23.931662 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:37:24.107742351+00:00 stderr F I0813 20:37:24.107614 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:37:24.107742351+00:00 stderr F I0813 20:37:24.107698 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:37:24.307965673+00:00 stderr F I0813 20:37:24.307867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:37:24.308059906+00:00 stderr F I0813 20:37:24.307958 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:37:24.507947379+00:00 stderr F I0813 20:37:24.507853 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:37:24.507947379+00:00 stderr F I0813 20:37:24.507926 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:37:24.710617201+00:00 stderr F I0813 20:37:24.710550 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:37:24.710695853+00:00 stderr F I0813 20:37:24.710630 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:37:24.908331961+00:00 stderr F I0813 20:37:24.908164 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:37:24.908331961+00:00 stderr F I0813 20:37:24.908232 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:37:25.107497343+00:00 stderr F I0813 20:37:25.107416 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:37:25.107497343+00:00 stderr F I0813 20:37:25.107465 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:37:25.307848999+00:00 stderr F I0813 20:37:25.307732 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:37:25.307909041+00:00 stderr F I0813 20:37:25.307859 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:37:25.511641345+00:00 stderr F I0813 20:37:25.511530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:37:25.511641345+00:00 stderr F I0813 20:37:25.511627 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:37:25.707709358+00:00 stderr F I0813 20:37:25.707622 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:37:25.707759669+00:00 stderr F I0813 20:37:25.707702 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:37:25.912432070+00:00 stderr F I0813 20:37:25.912270 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:37:25.912432070+00:00 stderr F I0813 20:37:25.912342 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:37:26.107642188+00:00 stderr F I0813 20:37:26.107510 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:37:26.107642188+00:00 stderr F I0813 20:37:26.107596 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:37:26.316556871+00:00 stderr F I0813 20:37:26.316473 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:37:26.316599892+00:00 stderr F I0813 20:37:26.316557 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:37:26.519348457+00:00 stderr F I0813 20:37:26.519242 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:37:26.519348457+00:00 stderr F I0813 
20:37:26.519323 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:37:26.707399369+00:00 stderr F I0813 20:37:26.707282 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:37:26.707399369+00:00 stderr F I0813 20:37:26.707350 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:37:26.908764955+00:00 stderr F I0813 20:37:26.908643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:37:26.908880428+00:00 stderr F I0813 20:37:26.908761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:37:27.108116552+00:00 stderr F I0813 20:37:27.107985 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:37:27.108116552+00:00 stderr F I0813 20:37:27.108061 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:37:27.316842560+00:00 stderr F I0813 20:37:27.316700 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:37:27.316842560+00:00 stderr F I0813 20:37:27.316832 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:37:27.510084171+00:00 stderr F I0813 20:37:27.509947 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:37:27.510084171+00:00 stderr F I0813 20:37:27.510036 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:37:27.713203738+00:00 stderr F I0813 20:37:27.713073 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:37:27.713203738+00:00 stderr F I0813 20:37:27.713185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:37:27.913630256+00:00 stderr F I0813 20:37:27.913477 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:27.913680007+00:00 stderr F I0813 20:37:27.913631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:37:28.107087573+00:00 stderr F I0813 20:37:28.106960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:28.107087573+00:00 stderr F I0813 20:37:28.107035 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:37:28.309265712+00:00 stderr F I0813 20:37:28.309159 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:37:28.309265712+00:00 stderr F I0813 20:37:28.309246 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:37:28.508677781+00:00 stderr F I0813 20:37:28.508560 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:37:28.508677781+00:00 stderr F I0813 20:37:28.508632 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:37:28.707441951+00:00 stderr F I0813 20:37:28.707314 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:37:28.707441951+00:00 stderr F I0813 20:37:28.707387 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:37:28.910568438+00:00 stderr F I0813 20:37:28.910450 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:37:28.910568438+00:00 stderr F I0813 20:37:28.910523 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:37:29.114977161+00:00 stderr F I0813 20:37:29.113892 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:37:29.114977161+00:00 stderr F I0813 20:37:29.113979 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:37:29.316423349+00:00 stderr F I0813 20:37:29.316314 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:37:29.316423349+00:00 stderr F I0813 20:37:29.316386 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:37:29.510629207+00:00 stderr F I0813 20:37:29.510455 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:37:29.510629207+00:00 stderr F I0813 20:37:29.510544 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:37:29.708267586+00:00 stderr F I0813 20:37:29.708112 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:29.708267586+00:00 stderr F I0813 20:37:29.708211 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:37:29.909850897+00:00 stderr F I0813 20:37:29.909753 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:29.909956990+00:00 stderr F I0813 20:37:29.909879 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:37:30.109516274+00:00 stderr F I0813 20:37:30.109431 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:37:30.109516274+00:00 stderr F I0813 20:37:30.109502 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:37:30.307823971+00:00 stderr F I0813 20:37:30.307667 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:37:30.307823971+00:00 stderr F I0813 20:37:30.307760 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:37:30.514521560+00:00 stderr F I0813 20:37:30.514407 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:37:30.514582472+00:00 stderr F I0813 20:37:30.514548 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:37:30.717672507+00:00 stderr F I0813 20:37:30.717559 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:37:30.717672507+00:00 stderr F I0813 20:37:30.717643 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:37:30.917608311+00:00 stderr F I0813 20:37:30.917472 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:37:30.917608311+00:00 stderr F I0813 20:37:30.917582 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:37:31.118597046+00:00 stderr F I0813 20:37:31.118382 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:37:31.118597046+00:00 stderr F I0813 20:37:31.118471 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:37:31.313304599+00:00 stderr F I0813 20:37:31.313188 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:37:31.313304599+00:00 stderr F I0813 20:37:31.313259 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:37:31.549347115+00:00 stderr F I0813 20:37:31.549115 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:37:31.549347115+00:00 stderr F I0813 20:37:31.549216 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:37:31.742304547+00:00 stderr F I0813 20:37:31.742197 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:37:31.742304547+00:00 stderr F I0813 20:37:31.742290 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:37:31.908476348+00:00 stderr F I0813 20:37:31.908341 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:37:31.908476348+00:00 stderr F I0813 20:37:31.908430 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:37:32.110072410+00:00 stderr F I0813 20:37:32.109951 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:37:32.110072410+00:00 stderr F I0813 20:37:32.110045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:37:32.311913049+00:00 stderr F I0813 20:37:32.311826 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:37:32.311963980+00:00 stderr F I0813 20:37:32.311911 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:37:32.508538458+00:00 stderr F I0813 20:37:32.508483 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:37:32.508677642+00:00 stderr F I0813 20:37:32.508661 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:37:32.707224186+00:00 stderr F I0813 20:37:32.707114 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:37:32.707224186+00:00 stderr F I0813 20:37:32.707206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:37:32.907333975+00:00 stderr F I0813 20:37:32.907219 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:37:32.907333975+00:00 stderr F I0813 20:37:32.907320 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:37:33.107524727+00:00 stderr F I0813 20:37:33.107397 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:37:33.107524727+00:00 stderr F I0813 20:37:33.107482 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:37:33.308833780+00:00 stderr F I0813 20:37:33.308683 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:37:33.308883852+00:00 stderr F I0813 20:37:33.308767 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:37:33.508307632+00:00 stderr F I0813 20:37:33.508186 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:37:33.508307632+00:00 stderr F I0813 20:37:33.508259 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:33.711968483+00:00 stderr F I0813 20:37:33.711904 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:33.712018365+00:00 stderr F I0813 20:37:33.711971 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:33.909602181+00:00 stderr F I0813 20:37:33.909480 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:33.909602181+00:00 stderr F I0813 20:37:33.909548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:34.108732982+00:00 stderr F I0813 20:37:34.108526 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:34.108732982+00:00 stderr F I0813 20:37:34.108624 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:34.308305936+00:00 stderr F I0813 20:37:34.308054 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:34.308305936+00:00 stderr F I0813 20:37:34.308156 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:37:34.510729862+00:00 stderr F I0813 20:37:34.510621 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:37:34.510729862+00:00 stderr F I0813 20:37:34.510707 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:37:34.708272368+00:00 stderr F I0813 20:37:34.708107 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:37:34.708272368+00:00 stderr F I0813 20:37:34.708211 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:37:34.908726827+00:00 stderr F I0813 20:37:34.908629 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:37:34.908726827+00:00 stderr F I0813 20:37:34.908697 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:37:35.106672564+00:00 stderr F I0813 20:37:35.106564 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:37:35.106672564+00:00 stderr F I0813 20:37:35.106637 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:37:35.311843897+00:00 stderr F I0813 20:37:35.311735 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:37:35.311906899+00:00 stderr F I0813 20:37:35.311858 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:37:35.512505492+00:00 stderr F I0813 20:37:35.512411 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:37:35.512505492+00:00 stderr F I0813 20:37:35.512488 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:37:35.722896327+00:00 stderr F I0813 20:37:35.722061 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:37:35.722896327+00:00 stderr F I0813 20:37:35.722160 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:37:35.908469097+00:00 stderr F I0813 20:37:35.908325 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:37:35.908469097+00:00 stderr F I0813 
20:37:35.908411 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:37:36.109765210+00:00 stderr F I0813 20:37:36.109588 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:37:36.109765210+00:00 stderr F I0813 20:37:36.109734 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:37:36.311869647+00:00 stderr F I0813 20:37:36.311711 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:37:36.311936499+00:00 stderr F I0813 20:37:36.311867 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:37:36.511670837+00:00 stderr F I0813 20:37:36.511531 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:37:36.511670837+00:00 stderr F I0813 20:37:36.511653 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:37:36.712617920+00:00 stderr F I0813 20:37:36.711973 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:37:36.712617920+00:00 stderr F I0813 20:37:36.712579 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:37:36.913230494+00:00 stderr F I0813 20:37:36.913096 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:37:36.913230494+00:00 stderr F I0813 20:37:36.913195 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:37:37.108671609+00:00 stderr F I0813 20:37:37.108545 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:37:37.108671609+00:00 stderr F I0813 20:37:37.108643 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:37:37.311879127+00:00 stderr F I0813 20:37:37.311227 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:37:37.311879127+00:00 stderr F I0813 20:37:37.311295 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:37:37.511670577+00:00 stderr F I0813 20:37:37.511561 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:37:37.511716399+00:00 stderr F I0813 20:37:37.511668 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:37:37.718119269+00:00 stderr F I0813 20:37:37.717978 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:37:37.718119269+00:00 stderr F I0813 20:37:37.718057 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:37:37.931684726+00:00 stderr F I0813 20:37:37.931575 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was 
successful 2025-08-13T20:37:37.931684726+00:00 stderr F I0813 20:37:37.931669 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:37:38.110214683+00:00 stderr F I0813 20:37:38.109307 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:37:38.110278385+00:00 stderr F I0813 20:37:38.110221 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.310597560+00:00 stderr F I0813 20:37:38.310485 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.310597560+00:00 stderr F I0813 20:37:38.310559 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.517357531+00:00 stderr F I0813 20:37:38.516962 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.517881746+00:00 stderr F I0813 20:37:38.517768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.713851576+00:00 stderr F I0813 20:37:38.713639 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.713851576+00:00 stderr F I0813 20:37:38.713761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:37:38.913720767+00:00 stderr F I0813 20:37:38.913474 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:37:38.913720767+00:00 stderr F I0813 20:37:38.913587 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:37:39.111837119+00:00 stderr F I0813 20:37:39.111638 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:37:39.111965403+00:00 stderr F I0813 20:37:39.111890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:37:39.308543710+00:00 stderr F I0813 20:37:39.308471 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:37:39.308610902+00:00 stderr F I0813 20:37:39.308549 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.516062593+00:00 stderr F I0813 20:37:39.515940 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:37:39.516062593+00:00 stderr F I0813 20:37:39.516029 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.720623170+00:00 stderr F I0813 20:37:39.720563 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:37:39.721215527+00:00 stderr F I0813 20:37:39.721163 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.908000432+00:00 stderr F I0813 20:37:39.907906 1 log.go:245] Apply / 
Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:37:39.908000432+00:00 stderr F I0813 20:37:39.907979 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:37:40.107714410+00:00 stderr F I0813 20:37:40.107612 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:37:40.107714410+00:00 stderr F I0813 20:37:40.107682 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:37:40.310193258+00:00 stderr F I0813 20:37:40.310015 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:37:40.311905397+00:00 stderr F I0813 20:37:40.310253 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:37:40.513283703+00:00 stderr F I0813 20:37:40.513228 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:37:40.513401556+00:00 stderr F I0813 20:37:40.513388 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:37:40.708574643+00:00 stderr F I0813 20:37:40.708517 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:37:40.708689806+00:00 stderr F I0813 20:37:40.708674 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:37:40.909344761+00:00 stderr F I0813 20:37:40.909216 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:37:40.909344761+00:00 stderr F I0813 20:37:40.909309 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:37:41.107342230+00:00 stderr F I0813 20:37:41.107235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:37:41.107509094+00:00 stderr F I0813 20:37:41.107328 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:37:41.309322663+00:00 stderr F I0813 20:37:41.309263 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:37:41.309322663+00:00 stderr F I0813 20:37:41.309315 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:37:41.508740131+00:00 stderr F I0813 20:37:41.508433 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:37:41.508740131+00:00 stderr F I0813 20:37:41.508505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:37:41.709441598+00:00 stderr F I0813 20:37:41.709284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:37:41.709441598+00:00 stderr F I0813 
20:37:41.709355 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:37:41.908863267+00:00 stderr F I0813 20:37:41.908758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:37:41.908921099+00:00 stderr F I0813 20:37:41.908905 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:37:42.112349844+00:00 stderr F I0813 20:37:42.112233 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:37:42.112349844+00:00 stderr F I0813 20:37:42.112308 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:37:42.308936191+00:00 stderr F I0813 20:37:42.308859 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:37:42.309073755+00:00 stderr F I0813 20:37:42.309013 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:37:42.520145489+00:00 stderr F I0813 20:37:42.511742 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:37:42.520145489+00:00 stderr F I0813 20:37:42.511854 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:37:42.708728676+00:00 stderr F I0813 20:37:42.708617 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:37:42.708728676+00:00 stderr F I0813 20:37:42.708693 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:37:42.916895448+00:00 stderr F I0813 20:37:42.915329 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:37:42.916895448+00:00 stderr F I0813 20:37:42.915397 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:37:43.108367398+00:00 stderr F I0813 20:37:43.108240 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:37:43.108367398+00:00 stderr F I0813 20:37:43.108322 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:37:43.314610504+00:00 stderr F I0813 20:37:43.314521 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:37:43.314610504+00:00 stderr F I0813 20:37:43.314597 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:37:43.510358067+00:00 stderr F I0813 20:37:43.510242 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) 
openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:37:43.510358067+00:00 stderr F I0813 20:37:43.510319 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:37:43.707886952+00:00 stderr F I0813 20:37:43.707754 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:37:43.708051337+00:00 stderr F I0813 20:37:43.707980 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:37:43.909417972+00:00 stderr F I0813 20:37:43.909287 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:37:43.909460144+00:00 stderr F I0813 20:37:43.909423 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:37:44.108268745+00:00 stderr F I0813 20:37:44.107990 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:37:44.108268745+00:00 stderr F I0813 20:37:44.108067 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:37:44.310035742+00:00 stderr F I0813 20:37:44.309955 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:37:44.310080864+00:00 stderr F I0813 20:37:44.310047 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:37:44.522115036+00:00 stderr F I0813 20:37:44.520425 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:37:44.546810448+00:00 stderr F I0813 20:37:44.546650 1 log.go:245] Operconfig Controller complete 2025-08-13T20:40:15.097755834+00:00 stderr F I0813 20:40:15.096603 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.106656070+00:00 stderr F I0813 20:40:15.106593 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.116356380+00:00 stderr F I0813 20:40:15.116120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.126956326+00:00 stderr F I0813 20:40:15.126523 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.135594905+00:00 stderr F I0813 20:40:15.135452 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.145745757+00:00 stderr F I0813 20:40:15.145707 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.155562220+00:00 stderr F I0813 20:40:15.155518 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 
2025-08-13T20:40:15.164362444+00:00 stderr F I0813 20:40:15.164243 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.172441127+00:00 stderr F I0813 20:40:15.172395 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.187869062+00:00 stderr F I0813 20:40:15.187749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.297835262+00:00 stderr F I0813 20:40:15.297718 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.494120081+00:00 stderr F I0813 20:40:15.494020 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:40:15.495948704+00:00 stderr F I0813 20:40:15.495463 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.699396089+00:00 stderr F I0813 20:40:15.699217 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:15.895930785+00:00 stderr F I0813 20:40:15.895768 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.098507356+00:00 stderr F I0813 20:40:16.098373 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.297440331+00:00 stderr F I0813 20:40:16.297311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.494455340+00:00 stderr F I0813 20:40:16.494324 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.698251816+00:00 stderr F I0813 20:40:16.698096 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:16.803980614+00:00 stderr F I0813 20:40:16.803839 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:40:16.805148378+00:00 stderr F I0813 20:40:16.805001 1 log.go:245] successful reconciliation 2025-08-13T20:40:16.901600799+00:00 stderr F I0813 20:40:16.901504 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.097096905+00:00 stderr F I0813 20:40:17.097019 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.297375289+00:00 stderr F I0813 20:40:17.297223 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n 
openshift-network-diagnostics is applied 2025-08-13T20:40:17.499045973+00:00 stderr F I0813 20:40:17.498826 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.696763303+00:00 stderr F I0813 20:40:17.696577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:17.896643316+00:00 stderr F I0813 20:40:17.896577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:40:18.097033654+00:00 stderr F I0813 20:40:18.096890 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:40:18.191251460+00:00 stderr F I0813 20:40:18.191147 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:40:18.192369292+00:00 stderr F I0813 20:40:18.192299 1 log.go:245] successful reconciliation 2025-08-13T20:40:18.297909155+00:00 stderr F I0813 20:40:18.297078 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:40:18.495371918+00:00 stderr F I0813 20:40:18.495316 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:40:19.402151561+00:00 stderr F I0813 20:40:19.402033 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:40:19.402839470+00:00 stderr F I0813 20:40:19.402737 1 log.go:245] successful reconciliation 2025-08-13T20:40:44.550953322+00:00 stderr F I0813 20:40:44.547924 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:40:44.880082911+00:00 stderr F I0813 20:40:44.879976 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:40:44.885301581+00:00 stderr F I0813 20:40:44.885179 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:40:44.888364619+00:00 stderr F I0813 20:40:44.888287 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:40:44.888387960+00:00 stderr F I0813 20:40:44.888319 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a58f80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893531 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893575 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893586 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 
20:40:44.900643 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900709 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900726 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900732 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900864 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:40:44.922083981+00:00 stderr F I0813 20:40:44.921997 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:40:44.939979497+00:00 stderr F I0813 20:40:44.938052 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:40:44.939979497+00:00 stderr F I0813 20:40:44.938126 1 log.go:245] Starting render phase 2025-08-13T20:40:44.959439848+00:00 stderr F I0813 20:40:44.959229 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:40:45.000872032+00:00 stderr F I0813 20:40:45.000752 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:40:45.000995475+00:00 stderr F I0813 20:40:45.000950 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:40:45.001087238+00:00 stderr F I0813 20:40:45.001035 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:40:45.001255463+00:00 stderr F I0813 20:40:45.001101 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:40:45.021410054+00:00 stderr F I0813 20:40:45.021335 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:40:45.021410054+00:00 stderr F I0813 20:40:45.021374 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:40:45.031547496+00:00 stderr F I0813 20:40:45.031435 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:40:45.047656371+00:00 stderr F I0813 20:40:45.047569 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:40:45.054137838+00:00 stderr F I0813 20:40:45.054082 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:40:45.054293752+00:00 stderr F I0813 20:40:45.054245 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:40:45.066159814+00:00 stderr F I0813 20:40:45.066090 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:40:45.066159814+00:00 stderr F I0813 20:40:45.066154 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/ippools.whereabouts.cni.cncf.io 2025-08-13T20:40:45.076235345+00:00 stderr F I0813 20:40:45.075669 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:40:45.076235345+00:00 stderr F I0813 20:40:45.075868 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:40:45.083839714+00:00 stderr F I0813 20:40:45.083750 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:40:45.083937557+00:00 stderr F I0813 20:40:45.083899 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:40:45.090948699+00:00 stderr F I0813 20:40:45.090864 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:40:45.090948699+00:00 stderr F I0813 20:40:45.090934 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:40:45.096176520+00:00 stderr F I0813 20:40:45.096122 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:40:45.096229091+00:00 stderr F I0813 20:40:45.096178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:40:45.103018037+00:00 stderr F I0813 20:40:45.102976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:40:45.103082479+00:00 stderr F I0813 20:40:45.103024 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:40:45.108667400+00:00 stderr F I0813 20:40:45.108546 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:40:45.108667400+00:00 stderr F I0813 20:40:45.108620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:40:45.113478218+00:00 stderr F I0813 20:40:45.113383 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:40:45.113478218+00:00 stderr F I0813 20:40:45.113442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:40:45.132231419+00:00 stderr F I0813 20:40:45.132133 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:40:45.132263690+00:00 stderr F I0813 20:40:45.132233 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:40:45.334941113+00:00 stderr F I0813 20:40:45.333055 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:40:45.334941113+00:00 stderr F I0813 20:40:45.333120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:40:45.534044994+00:00 stderr F I0813 20:40:45.533935 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:40:45.534044994+00:00 stderr F I0813 20:40:45.534020 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 
2025-08-13T20:40:45.734093761+00:00 stderr F I0813 20:40:45.733991 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:40:45.734093761+00:00 stderr F I0813 20:40:45.734056 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:40:45.934989923+00:00 stderr F I0813 20:40:45.934862 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:40:45.934989923+00:00 stderr F I0813 20:40:45.934937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:40:46.136050940+00:00 stderr F I0813 20:40:46.135386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:40:46.136261176+00:00 stderr F I0813 20:40:46.136241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:40:46.334704357+00:00 stderr F I0813 20:40:46.334389 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:40:46.334704357+00:00 stderr F I0813 20:40:46.334492 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:40:46.534071955+00:00 stderr F I0813 20:40:46.533976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:40:46.534071955+00:00 stderr F I0813 20:40:46.534046 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:40:46.734065191+00:00 stderr F I0813 20:40:46.733923 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:40:46.734065191+00:00 stderr F I0813 20:40:46.734018 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:40:46.935165579+00:00 stderr F I0813 20:40:46.935051 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:40:46.935165579+00:00 stderr F I0813 20:40:46.935123 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:40:47.133985861+00:00 stderr F I0813 20:40:47.133850 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:40:47.133985861+00:00 stderr F I0813 20:40:47.133914 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:40:47.336224892+00:00 stderr F I0813 20:40:47.336084 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:40:47.337163039+00:00 stderr F I0813 20:40:47.336992 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:40:47.556753560+00:00 stderr F I0813 20:40:47.556557 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:40:47.556753560+00:00 stderr F I0813 20:40:47.556645 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:40:47.744716309+00:00 stderr F I0813 20:40:47.744594 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:40:47.744716309+00:00 stderr F I0813 20:40:47.744664 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:40:47.933715097+00:00 stderr F I0813 20:40:47.933590 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:40:47.933715097+00:00 stderr F I0813 20:40:47.933657 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:40:48.133974231+00:00 stderr F I0813 20:40:48.133292 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:40:48.133974231+00:00 stderr F I0813 20:40:48.133946 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:40:48.333302858+00:00 stderr F I0813 20:40:48.333182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:40:48.333302858+00:00 stderr F I0813 20:40:48.333279 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:40:48.539219144+00:00 stderr F I0813 20:40:48.539086 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:40:48.539219144+00:00 stderr F I0813 20:40:48.539181 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:40:48.735148462+00:00 stderr F I0813 20:40:48.735023 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:40:48.735148462+00:00 stderr F I0813 20:40:48.735103 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:40:48.942234152+00:00 stderr F I0813 20:40:48.942114 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:40:48.942272123+00:00 stderr F I0813 20:40:48.942231 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:40:49.133289281+00:00 stderr F I0813 20:40:49.133155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:49.133289281+00:00 stderr F I0813 20:40:49.133256 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:40:49.338485556+00:00 stderr F I0813 20:40:49.338022 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:49.338485556+00:00 stderr F I0813 20:40:49.338108 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:40:49.536014531+00:00 stderr F I0813 20:40:49.535911 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:40:49.536014531+00:00 stderr F I0813 20:40:49.535998 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:40:49.732605459+00:00 stderr F I0813 20:40:49.732506 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 
2025-08-13T20:40:49.732605459+00:00 stderr F I0813 20:40:49.732579 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:40:49.934277163+00:00 stderr F I0813 20:40:49.934067 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:40:49.934277163+00:00 stderr F I0813 20:40:49.934237 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:40:50.133117616+00:00 stderr F I0813 20:40:50.132972 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:40:50.133117616+00:00 stderr F I0813 20:40:50.133083 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:40:50.336270013+00:00 stderr F I0813 20:40:50.336095 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:40:50.336270013+00:00 stderr F I0813 20:40:50.336180 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:40:50.541051407+00:00 stderr F I0813 20:40:50.540908 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:40:50.541051407+00:00 stderr F I0813 20:40:50.540998 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:40:50.738020606+00:00 stderr F I0813 20:40:50.737874 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:40:50.738020606+00:00 stderr F I0813 20:40:50.737961 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:40:50.933706277+00:00 stderr F I0813 20:40:50.933583 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:50.933767719+00:00 stderr F I0813 20:40:50.933716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:40:51.134281030+00:00 stderr F I0813 20:40:51.134102 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:40:51.134281030+00:00 stderr F I0813 20:40:51.134177 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:40:51.337694324+00:00 stderr F I0813 20:40:51.337469 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:40:51.338152948+00:00 stderr F I0813 20:40:51.337707 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:40:51.535758445+00:00 stderr F I0813 20:40:51.535658 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:40:51.535758445+00:00 stderr F I0813 20:40:51.535737 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:40:51.741660291+00:00 stderr F 
I0813 20:40:51.741520 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:40:51.741905518+00:00 stderr F I0813 20:40:51.741876 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:40:51.942284875+00:00 stderr F I0813 20:40:51.942181 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:40:51.942435259+00:00 stderr F I0813 20:40:51.942409 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:40:52.141424596+00:00 stderr F I0813 20:40:52.141239 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:40:52.141424596+00:00 stderr F I0813 20:40:52.141311 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:40:52.344283844+00:00 stderr F I0813 20:40:52.344100 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:40:52.344283844+00:00 stderr F I0813 20:40:52.344176 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:40:52.543569269+00:00 stderr F I0813 20:40:52.543477 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:40:52.543725784+00:00 stderr F I0813 20:40:52.543709 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:40:52.775044693+00:00 stderr F I0813 20:40:52.774900 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:40:52.775302360+00:00 stderr F I0813 20:40:52.775289 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:40:52.965434652+00:00 stderr F I0813 20:40:52.965380 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:40:52.965575536+00:00 stderr F I0813 20:40:52.965557 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:40:53.133901969+00:00 stderr F I0813 20:40:53.133753 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:40:53.134093365+00:00 stderr F I0813 20:40:53.134070 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:40:53.333476783+00:00 stderr F I0813 20:40:53.333361 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:40:53.333476783+00:00 stderr F I0813 20:40:53.333426 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:40:53.532567543+00:00 stderr F 
I0813 20:40:53.532463 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:40:53.532567543+00:00 stderr F I0813 20:40:53.532532 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:40:53.733328031+00:00 stderr F I0813 20:40:53.733183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:40:53.733328031+00:00 stderr F I0813 20:40:53.733301 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:40:53.934893202+00:00 stderr F I0813 20:40:53.934723 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:40:53.934893202+00:00 stderr F I0813 20:40:53.934877 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:40:54.140545551+00:00 stderr F I0813 20:40:54.140415 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:40:54.140545551+00:00 stderr F I0813 20:40:54.140522 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:40:54.334310287+00:00 stderr F I0813 20:40:54.334151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:40:54.334310287+00:00 stderr F I0813 20:40:54.334278 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:40:54.540096500+00:00 stderr F I0813 20:40:54.539963 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:40:54.540096500+00:00 stderr F I0813 20:40:54.540064 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:40:54.737360797+00:00 stderr F I0813 20:40:54.737277 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:40:54.737448070+00:00 stderr F I0813 20:40:54.737359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:40:54.936895240+00:00 stderr F I0813 20:40:54.936435 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:40:54.936895240+00:00 stderr F I0813 20:40:54.936568 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:40:55.135324501+00:00 stderr F I0813 20:40:55.135174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:40:55.135324501+00:00 stderr F I0813 20:40:55.135311 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:40:55.333496524+00:00 
stderr F I0813 20:40:55.333401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:40:55.333496524+00:00 stderr F I0813 20:40:55.333474 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:40:55.534464098+00:00 stderr F I0813 20:40:55.533916 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:40:55.534464098+00:00 stderr F I0813 20:40:55.534436 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:40:55.733257638+00:00 stderr F I0813 20:40:55.733132 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:40:55.733315220+00:00 stderr F I0813 20:40:55.733259 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:40:55.935170400+00:00 stderr F I0813 20:40:55.935001 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:40:55.935170400+00:00 stderr F I0813 20:40:55.935125 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:40:56.137363189+00:00 stderr F I0813 20:40:56.137224 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:40:56.137363189+00:00 stderr F I0813 20:40:56.137345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:40:56.333316829+00:00 stderr F I0813 20:40:56.333179 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:40:56.333379720+00:00 stderr F I0813 20:40:56.333317 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:40:56.537067023+00:00 stderr F I0813 20:40:56.536396 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:40:56.537067023+00:00 stderr F I0813 20:40:56.537048 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:40:56.739909091+00:00 stderr F I0813 20:40:56.739827 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:40:56.739955312+00:00 stderr F I0813 20:40:56.739917 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:40:56.939033072+00:00 stderr F I0813 20:40:56.938924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:40:56.939033072+00:00 stderr F I0813 20:40:56.939011 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:40:57.134730513+00:00 stderr F I0813 20:40:57.134621 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-config-managed/openshift-network-features was successful 2025-08-13T20:40:57.134730513+00:00 stderr F I0813 20:40:57.134702 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:40:57.335601675+00:00 stderr F I0813 20:40:57.335497 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:40:57.335601675+00:00 stderr F I0813 20:40:57.335564 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:40:57.536922069+00:00 stderr F I0813 20:40:57.536749 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:40:57.537086954+00:00 stderr F I0813 20:40:57.537025 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:40:57.735013100+00:00 stderr F I0813 20:40:57.734861 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:40:57.735013100+00:00 stderr F I0813 20:40:57.734952 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:40:57.935669555+00:00 stderr F I0813 20:40:57.935559 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:40:57.935669555+00:00 stderr F I0813 20:40:57.935638 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:40:58.134578770+00:00 stderr F I0813 20:40:58.134440 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:40:58.134578770+00:00 stderr F I0813 20:40:58.134514 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:40:58.334404561+00:00 stderr F I0813 20:40:58.334294 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:40:58.334404561+00:00 stderr F I0813 20:40:58.334365 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:40:58.534003205+00:00 stderr F I0813 20:40:58.533921 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:40:58.534003205+00:00 stderr F I0813 20:40:58.533985 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:40:58.735044331+00:00 stderr F I0813 20:40:58.734959 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:40:58.735145574+00:00 stderr F I0813 20:40:58.735044 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:40:58.940868225+00:00 stderr F I0813 20:40:58.940706 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:40:58.940868225+00:00 stderr F I0813 20:40:58.940847 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:40:59.169112596+00:00 stderr F I0813 
20:40:59.168295 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:40:59.169112596+00:00 stderr F I0813 20:40:59.168393 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:40:59.333872705+00:00 stderr F I0813 20:40:59.333697 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:40:59.333930886+00:00 stderr F I0813 20:40:59.333875 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:40:59.533867841+00:00 stderr F I0813 20:40:59.533268 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:40:59.533867841+00:00 stderr F I0813 20:40:59.533351 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:40:59.733832806+00:00 stderr F I0813 20:40:59.733672 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:40:59.733832806+00:00 stderr F I0813 20:40:59.733746 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:40:59.933404709+00:00 stderr F I0813 20:40:59.933267 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:40:59.933404709+00:00 stderr F I0813 20:40:59.933356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:41:00.135516177+00:00 stderr F I0813 20:41:00.135412 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:41:00.135516177+00:00 stderr F I0813 20:41:00.135481 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:41:00.337475089+00:00 stderr F I0813 20:41:00.337316 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:41:00.337475089+00:00 stderr F I0813 20:41:00.337384 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:41:00.532647036+00:00 stderr F I0813 20:41:00.532587 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:41:00.532765110+00:00 stderr F I0813 20:41:00.532749 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:41:00.753741351+00:00 stderr F I0813 20:41:00.752926 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:41:00.753741351+00:00 stderr F I0813 20:41:00.753011 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:41:00.939638220+00:00 stderr F I0813 20:41:00.939578 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:41:00.939826206+00:00 stderr F I0813 20:41:00.939761 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) 
openshift-network-diagnostics/network-check-source 2025-08-13T20:41:01.134297502+00:00 stderr F I0813 20:41:01.134230 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:41:01.134439496+00:00 stderr F I0813 20:41:01.134425 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:41:01.335296097+00:00 stderr F I0813 20:41:01.335059 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:41:01.335296097+00:00 stderr F I0813 20:41:01.335145 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:41:01.534527091+00:00 stderr F I0813 20:41:01.534407 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:41:01.534527091+00:00 stderr F I0813 20:41:01.534504 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:41:01.740891471+00:00 stderr F I0813 20:41:01.739968 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:41:01.740891471+00:00 stderr F I0813 20:41:01.740139 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:41:01.935963905+00:00 stderr F I0813 20:41:01.935834 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:41:01.935963905+00:00 stderr F I0813 20:41:01.935913 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:41:02.133495580+00:00 stderr F I0813 20:41:02.133434 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:41:02.133631094+00:00 stderr F I0813 20:41:02.133612 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:41:02.334090013+00:00 stderr F I0813 20:41:02.333947 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:41:02.334090013+00:00 stderr F I0813 20:41:02.334034 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:41:02.540864845+00:00 stderr F I0813 20:41:02.540755 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:41:02.541058460+00:00 stderr F I0813 20:41:02.541038 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:41:02.733239701+00:00 stderr F I0813 20:41:02.733155 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:41:02.733383725+00:00 stderr F I0813 20:41:02.733345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:41:02.934099401+00:00 stderr F I0813 20:41:02.933992 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:41:02.934099401+00:00 stderr F I0813 20:41:02.934069 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:41:03.138715190+00:00 stderr F I0813 20:41:03.138572 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:41:03.138715190+00:00 stderr F I0813 20:41:03.138639 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:41:03.333914997+00:00 stderr F I0813 20:41:03.332724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:41:03.333914997+00:00 stderr F I0813 20:41:03.332971 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:41:03.532882163+00:00 stderr F I0813 20:41:03.532038 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:41:03.532882163+00:00 stderr F I0813 20:41:03.532120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:41:03.735113184+00:00 stderr F I0813 20:41:03.734248 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:41:03.735113184+00:00 stderr F I0813 20:41:03.734328 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:41:03.934974305+00:00 stderr F I0813 20:41:03.933942 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:41:03.934974305+00:00 stderr F I0813 20:41:03.934050 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:41:04.137248687+00:00 stderr F I0813 20:41:04.136951 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:41:04.137248687+00:00 stderr F I0813 20:41:04.137087 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:41:04.334109462+00:00 stderr F I0813 20:41:04.333452 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:41:04.334325979+00:00 stderr F I0813 20:41:04.334306 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:41:04.556272377+00:00 stderr F I0813 20:41:04.555674 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:41:04.556420642+00:00 stderr F I0813 20:41:04.556405 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:41:04.749264212+00:00 stderr F I0813 
20:41:04.745995 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:41:04.749264212+00:00 stderr F I0813 20:41:04.746078 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:41:04.939856926+00:00 stderr F I0813 20:41:04.939308 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:41:04.939856926+00:00 stderr F I0813 20:41:04.939425 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:41:05.134035275+00:00 stderr F I0813 20:41:05.133413 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:41:05.134089146+00:00 stderr F I0813 20:41:05.134032 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:41:05.333707351+00:00 stderr F I0813 20:41:05.332665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:41:05.333707351+00:00 stderr F I0813 20:41:05.332833 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:41:05.536474187+00:00 stderr F I0813 20:41:05.536400 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:41:05.536603541+00:00 stderr F I0813 20:41:05.536587 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:41:05.737175183+00:00 stderr F I0813 20:41:05.737070 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:41:05.758628642+00:00 stderr F I0813 20:41:05.758569 1 log.go:245] Operconfig Controller complete 2025-08-13T20:42:36.391312212+00:00 stderr F I0813 20:42:36.391113 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.391312212+00:00 stderr F I0813 20:42:36.389176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.391453956+00:00 stderr F I0813 20:42:36.389413 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.392183 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.389336 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.395504 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.440454488+00:00 stderr F I0813 20:42:36.440420 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460945159+00:00 stderr F I0813 20:42:36.460876 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.512163326+00:00 stderr F I0813 20:42:36.512108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.512654420+00:00 stderr F I0813 20:42:36.512632 
1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.512876606+00:00 stderr F I0813 20:42:36.512854 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.513069152+00:00 stderr F I0813 20:42:36.513050 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.513195025+00:00 stderr F I0813 20:42:36.513178 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.513426082+00:00 stderr F I0813 20:42:36.513406 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.513558686+00:00 stderr F I0813 20:42:36.513541 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.513675699+00:00 stderr F I0813 20:42:36.513659 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.513892706+00:00 stderr F I0813 20:42:36.513871 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514085401+00:00 stderr F I0813 20:42:36.514061 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514206105+00:00 stderr F I0813 20:42:36.514189 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514409870+00:00 stderr F I0813 20:42:36.514388 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514547994+00:00 stderr F I0813 20:42:36.514529 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514919475+00:00 stderr F I0813 20:42:36.514897 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515101870+00:00 stderr F I0813 20:42:36.515083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515254945+00:00 stderr F I0813 20:42:36.515205 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515385399+00:00 stderr F I0813 20:42:36.515367 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515515932+00:00 stderr F I0813 20:42:36.515499 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515645166+00:00 stderr F I0813 20:42:36.515624 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515849292+00:00 stderr F I0813 20:42:36.515828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515985746+00:00 stderr F I0813 20:42:36.515967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516156521+00:00 stderr F I0813 20:42:36.516139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516309315+00:00 stderr F I0813 20:42:36.516288 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516422969+00:00 stderr F I0813 20:42:36.516406 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.522322549+00:00 stderr F I0813 20:42:36.520212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522322549+00:00 stderr F I0813 20:42:36.520650 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522705880+00:00 stderr F I0813 20:42:36.522676 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522979318+00:00 stderr F I0813 20:42:36.522958 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523127952+00:00 stderr F I0813 20:42:36.523111 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523295327+00:00 stderr F I0813 20:42:36.523274 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523473222+00:00 stderr F I0813 20:42:36.523454 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523632686+00:00 stderr F I0813 20:42:36.523613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523770170+00:00 stderr F I0813 20:42:36.523753 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.524150731+00:00 stderr F I0813 20:42:36.524127 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.524397088+00:00 stderr F I0813 20:42:36.524376 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.534004525+00:00 stderr F I0813 20:42:36.524513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556483373+00:00 stderr F I0813 20:42:36.555278 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.495414124+00:00 stderr F I0813 20:42:40.494156 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.498079220+00:00 stderr F I0813 20:42:40.498000 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.499157992+00:00 stderr F I0813 20:42:40.499087 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.502500498+00:00 stderr F I0813 20:42:40.502429 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:42:40.502583810+00:00 stderr F I0813 20:42:40.502556 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.502687953+00:00 stderr F I0813 20:42:40.502645 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:42:40.503464056+00:00 stderr F I0813 20:42:40.503402 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:42:40.503464056+00:00 stderr F I0813 20:42:40.503449 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:42:40.503516897+00:00 stderr F I0813 20:42:40.503468 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:40.503516897+00:00 stderr F I0813 20:42:40.503477 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.505134334+00:00 stderr F I0813 20:42:40.504123 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:40.505327259+00:00 stderr F I0813 20:42:40.505277 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:42:40.505347790+00:00 stderr F I0813 20:42:40.505336 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:42:40.505361020+00:00 stderr F I0813 20:42:40.505342 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.508271484+00:00 stderr F E0813 20:42:40.507176 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:40.508370927+00:00 stderr F I0813 20:42:40.508323 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509205 1 secure_serving.go:258] Stopped listening on [::]:9104 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509260 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509264 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509298 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.509335995+00:00 stderr F I0813 20:42:40.509314 1 builder.go:330] server exited 2025-08-13T20:42:40.509335995+00:00 stderr F I0813 20:42:40.509321 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.509346375+00:00 stderr F I0813 20:42:40.509338 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.510411026+00:00 stderr F I0813 20:42:40.510349 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.510652763+00:00 stderr F I0813 20:42:40.510631 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:40.511647552+00:00 stderr F I0813 20:42:40.511625 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="operconfig-controller" 2025-08-13T20:42:40.511714894+00:00 stderr F I0813 20:42:40.511700 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="proxyconfig-controller" 2025-08-13T20:42:40.511752765+00:00 stderr F I0813 20:42:40.511741 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="ingress-config-controller" 2025-08-13T20:42:40.511843317+00:00 stderr F I0813 20:42:40.511825 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="infrastructureconfig-controller" 2025-08-13T20:42:40.511911469+00:00 stderr F I0813 20:42:40.511898 1 controller.go:240] 
"Shutdown signal received, waiting for all workers to finish" controller="clusterconfig-controller" 2025-08-13T20:42:40.511946980+00:00 stderr F I0813 20:42:40.511935 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pki-controller" 2025-08-13T20:42:40.511981081+00:00 stderr F I0813 20:42:40.511969 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="egress-router-controller" 2025-08-13T20:42:40.512019012+00:00 stderr F I0813 20:42:40.512007 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="signer-controller" 2025-08-13T20:42:40.512057463+00:00 stderr F I0813 20:42:40.512046 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pod-watcher" 2025-08-13T20:42:40.512091064+00:00 stderr F I0813 20:42:40.512079 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="allowlist-controller" 2025-08-13T20:42:40.512144096+00:00 stderr F I0813 20:42:40.512132 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="dashboard-controller" 2025-08-13T20:42:40.512217028+00:00 stderr F I0813 20:42:40.512204 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:42:40.512325451+00:00 stderr F I0813 20:42:40.512311 1 controller.go:242] "All workers finished" controller="pod-watcher" 2025-08-13T20:42:40.512364382+00:00 stderr F I0813 20:42:40.512352 1 controller.go:242] "All workers finished" controller="allowlist-controller" 2025-08-13T20:42:40.512399143+00:00 stderr F I0813 20:42:40.512388 1 controller.go:242] "All workers finished" controller="proxyconfig-controller" 2025-08-13T20:42:40.512432934+00:00 stderr F I0813 20:42:40.512421 1 controller.go:242] "All workers finished" controller="infrastructureconfig-controller" 2025-08-13T20:42:40.512473035+00:00 stderr F I0813 20:42:40.512458 1 controller.go:242] "All workers finished" controller="dashboard-controller" 2025-08-13T20:42:40.512513917+00:00 stderr F I0813 20:42:40.512501 1 controller.go:242] "All workers finished" controller="ingress-config-controller" 2025-08-13T20:42:40.512548758+00:00 stderr F I0813 20:42:40.512537 1 controller.go:242] "All workers finished" controller="clusterconfig-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512597 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512608 1 controller.go:242] "All workers finished" controller="operconfig-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512684 1 controller.go:242] "All workers finished" controller="signer-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512685 1 controller.go:242] "All workers finished" controller="egress-router-controller" 2025-08-13T20:42:40.512716922+00:00 stderr F I0813 20:42:40.512699 1 controller.go:242] "All workers finished" controller="pki-controller" 2025-08-13T20:42:40.512754053+00:00 stderr F I0813 20:42:40.512739 1 controller.go:242] "All workers finished" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:42:40.512845306+00:00 stderr F I0813 20:42:40.512827 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:40.513871366+00:00 stderr F I0813 20:42:40.513704 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", 
Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_c4643de4-0a66-40c8-abff-4239e04f61ab stopped leading 2025-08-13T20:42:40.514043371+00:00 stderr F I0813 20:42:40.513966 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:40.514043371+00:00 stderr F I0813 20:42:40.514017 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:40.516137041+00:00 stderr F W0813 20:42:40.516017 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000114051315073043233033113 0ustar zuulzuul2025-10-13T00:12:50.145435671+00:00 stderr F I1013 00:12:50.139716 1 cmd.go:241] Using service-serving-cert provided certificates 2025-10-13T00:12:50.145604875+00:00 stderr F I1013 00:12:50.145446 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:12:50.147017006+00:00 stderr F I1013 00:12:50.146191 1 observer_polling.go:159] Starting file observer 2025-10-13T00:12:50.247779249+00:00 stderr F I1013 00:12:50.247707 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-10-13T00:12:51.116779368+00:00 stderr F I1013 00:12:51.115976 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:12:51.116779368+00:00 stderr F W1013 00:12:51.116710 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:12:51.116779368+00:00 stderr F W1013 00:12:51.116727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-10-13T00:12:51.120297088+00:00 stderr F I1013 00:12:51.120244 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:12:51.121823672+00:00 stderr F I1013 00:12:51.121766 1 secure_serving.go:213] Serving securely on [::]:9104 2025-10-13T00:12:51.122363277+00:00 stderr F I1013 00:12:51.122270 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:12:51.122363277+00:00 stderr F I1013 00:12:51.122347 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:12:51.122453680+00:00 stderr F I1013 00:12:51.122402 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:12:51.122494621+00:00 stderr F I1013 00:12:51.122282 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:12:51.122546002+00:00 stderr F I1013 00:12:51.122503 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:12:51.123642064+00:00 stderr F I1013 00:12:51.123590 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 2025-10-13T00:12:51.125486326+00:00 stderr F I1013 00:12:51.125400 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:12:51.125745854+00:00 stderr F I1013 00:12:51.125691 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:12:51.125760714+00:00 stderr F I1013 00:12:51.125743 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:12:51.227242698+00:00 stderr F I1013 00:12:51.227141 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:12:51.227677240+00:00 stderr F I1013 00:12:51.227443 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:12:51.227796674+00:00 stderr F I1013 00:12:51.227762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:19:02.081971147+00:00 stderr F I1013 00:19:02.081833 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock 2025-10-13T00:19:02.082397700+00:00 stderr F I1013 00:19:02.082273 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_9124d9dd-1853-4162-b4bd-f4bf86ccaa6d became leader 2025-10-13T00:19:02.095550101+00:00 stderr F I1013 00:19:02.095517 1 operator.go:97] Creating status manager for stand-alone cluster 2025-10-13T00:19:02.095624543+00:00 stderr F I1013 00:19:02.095613 1 operator.go:102] Adding controller-runtime controllers 2025-10-13T00:19:02.097966193+00:00 stderr F I1013 00:19:02.097925 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:19:02.099233221+00:00 
stderr F I1013 00:19:02.099124 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:19:02.102858378+00:00 stderr F I1013 00:19:02.102819 1 operconfig_controller.go:102] Waiting for feature gates initialization... 
2025-10-13T00:19:02.103134877+00:00 stderr F I1013 00:19:02.103022 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:19:02.104188668+00:00 stderr F I1013 00:19:02.104147 1 client.go:239] Starting informers... 2025-10-13T00:19:02.110314920+00:00 stderr F I1013 00:19:02.110273 1 client.go:250] Waiting for informers to sync... 
2025-10-13T00:19:02.312075890+00:00 stderr F I1013 00:19:02.311936 1 client.go:271] Informers started and synced 2025-10-13T00:19:02.312075890+00:00 stderr F I1013 00:19:02.311986 1 operator.go:126] Starting controller-manager 2025-10-13T00:19:02.312526523+00:00 stderr F I1013 00:19:02.312460 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2025-10-13T00:19:02.312526523+00:00 stderr F I1013 00:19:02.312490 1 controller.go:186] "Starting Controller" controller="pki-controller" 2025-10-13T00:19:02.312584595+00:00 stderr F I1013 00:19:02.312532 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000ca91e0" 2025-10-13T00:19:02.312584595+00:00 stderr F I1013 00:19:02.312555 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2025-10-13T00:19:02.312605186+00:00 stderr F I1013 00:19:02.312580 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2025-10-13T00:19:02.312605186+00:00 stderr F I1013 00:19:02.312597 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2025-10-13T00:19:02.312622996+00:00 stderr F I1013 00:19:02.312608 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 2025-10-13T00:19:02.312753640+00:00 stderr F I1013 00:19:02.312700 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2025-10-13T00:19:02.312753640+00:00 stderr F I1013 00:19:02.312718 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2025-10-13T00:19:02.312834663+00:00 stderr F I1013 00:19:02.312798 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-10-13T00:19:02.312834663+00:00 stderr F I1013 00:19:02.312817 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-10-13T00:19:02.312912615+00:00 stderr F I1013 00:19:02.312877 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc000ca9550" 2025-10-13T00:19:02.312931025+00:00 stderr F I1013 00:19:02.312909 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 2025-10-13T00:19:02.312931025+00:00 stderr F I1013 00:19:02.312917 1 controller.go:186] "Starting Controller" controller="operconfig-controller" 2025-10-13T00:19:02.313219524+00:00 stderr F I1013 00:19:02.313164 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2025-10-13T00:19:02.313219524+00:00 stderr F I1013 00:19:02.313184 1 controller.go:186] "Starting Controller" controller="signer-controller" 2025-10-13T00:19:02.313238895+00:00 stderr F I1013 00:19:02.313221 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000ca9600" 2025-10-13T00:19:02.313281326+00:00 stderr F I1013 00:19:02.313258 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000ca96b0" 2025-10-13T00:19:02.313281326+00:00 stderr F I1013 00:19:02.313261 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 
2025-10-13T00:19:02.313281326+00:00 stderr F I1013 00:19:02.313269 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2025-10-13T00:19:02.313301946+00:00 stderr F I1013 00:19:02.313292 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2025-10-13T00:19:02.313323447+00:00 stderr F I1013 00:19:02.313310 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2025-10-13T00:19:02.313469631+00:00 stderr F I1013 00:19:02.313409 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2025-10-13T00:19:02.313469631+00:00 stderr F I1013 00:19:02.313435 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-10-13T00:19:02.313469631+00:00 stderr F I1013 00:19:02.313439 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController" 2025-10-13T00:19:02.313469631+00:00 stderr F I1013 00:19:02.313443 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation 2025-10-13T00:19:02.313469631+00:00 stderr F I1013 00:19:02.313448 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2025-10-13T00:19:02.313469631+00:00 stderr F I1013 00:19:02.313451 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2025-10-13T00:19:02.313680958+00:00 stderr F I1013 00:19:02.313588 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca 2025-10-13T00:19:02.313771750+00:00 stderr F I1013 00:19:02.313723 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc000ca9760" 2025-10-13T00:19:02.313771750+00:00 stderr F I1013 00:19:02.313737 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc000ca9810" 2025-10-13T00:19:02.313771750+00:00 stderr F I1013 00:19:02.313765 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2025-10-13T00:19:02.313798581+00:00 stderr F I1013 00:19:02.313780 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2025-10-13T00:19:02.313895814+00:00 stderr F I1013 00:19:02.313843 1 controller.go:186] "Starting Controller" controller="dashboard-controller" 2025-10-13T00:19:02.313895814+00:00 stderr F I1013 00:19:02.313878 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2025-10-13T00:19:02.314045739+00:00 stderr F I1013 00:19:02.314005 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000ca98c0" 2025-10-13T00:19:02.314045739+00:00 stderr F I1013 00:19:02.314039 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000ca9970" 2025-10-13T00:19:02.314170632+00:00 stderr F I1013 00:19:02.314092 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-10-13T00:19:02.314170632+00:00 stderr F I1013 00:19:02.314115 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-10-13T00:19:02.314170632+00:00 stderr F I1013 00:19:02.314124 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-10-13T00:19:02.314316367+00:00 stderr F I1013 00:19:02.314275 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:19:02.314316367+00:00 stderr F I1013 00:19:02.314301 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-10-13T00:19:02.314316367+00:00 stderr F I1013 00:19:02.314309 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-10-13T00:19:02.314394039+00:00 stderr F I1013 00:19:02.314314 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-10-13T00:19:02.314394039+00:00 stderr F I1013 00:19:02.314320 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-10-13T00:19:02.314394039+00:00 stderr F I1013 00:19:02.314364 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:19:02.314539153+00:00 stderr F I1013 00:19:02.314469 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:19:02.314593165+00:00 stderr F I1013 00:19:02.314560 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:19:02.314651537+00:00 stderr F I1013 00:19:02.314581 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:19:02.314711238+00:00 stderr F I1013 00:19:02.314487 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-10-13T00:19:02.315744339+00:00 stderr F I1013 00:19:02.315667 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000ca9a20" 2025-10-13T00:19:02.315744339+00:00 stderr F I1013 00:19:02.315696 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2025-10-13T00:19:02.315744339+00:00 stderr F I1013 00:19:02.315709 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2025-10-13T00:19:02.315786360+00:00 stderr F I1013 00:19:02.315746 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-10-13T00:19:02.315786360+00:00 stderr F I1013 00:19:02.315776 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-10-13T00:19:02.316666026+00:00 stderr F I1013 00:19:02.316604 1 dashboard_controller.go:113] Reconcile dashboards 2025-10-13T00:19:02.317918154+00:00 stderr F I1013 00:19:02.317845 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:02.318008556+00:00 stderr F I1013 00:19:02.317954 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-10-13T00:19:02.318295035+00:00 stderr F I1013 00:19:02.318249 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2025-10-13T00:19:02.320040967+00:00 stderr F I1013 00:19:02.319988 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.320582063+00:00 stderr F I1013 00:19:02.320498 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.320750418+00:00 stderr F I1013 00:19:02.320714 1 reflector.go:351] Caches populated for *v1.Network from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.320906543+00:00 stderr F I1013 00:19:02.320867 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.321271383+00:00 stderr F I1013 00:19:02.321229 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.322317995+00:00 stderr F I1013 00:19:02.322264 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.322473149+00:00 stderr F I1013 00:19:02.322428 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist 2025-10-13T00:19:02.323066487+00:00 stderr F I1013 00:19:02.323014 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.323248412+00:00 stderr F I1013 00:19:02.323073 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.323645874+00:00 stderr F I1013 00:19:02.323569 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.325276422+00:00 stderr F I1013 00:19:02.325196 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.327829018+00:00 stderr F I1013 00:19:02.327754 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.328351594+00:00 stderr F I1013 00:19:02.328279 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.328351594+00:00 stderr F I1013 00:19:02.328296 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.328702354+00:00 stderr F I1013 00:19:02.328659 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.328702354+00:00 stderr F I1013 00:19:02.328675 1 dashboard_controller.go:139] Applying dashboards manifests 2025-10-13T00:19:02.328907450+00:00 stderr F I1013 00:19:02.328866 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.329135557+00:00 stderr F I1013 00:19:02.329093 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.329201909+00:00 stderr F I1013 00:19:02.328892 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.329779636+00:00 stderr F I1013 00:19:02.329738 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.329987443+00:00 stderr F I1013 00:19:02.329741 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.330186248+00:00 stderr F I1013 00:19:02.330098 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.330657302+00:00 stderr F I1013 
00:19:02.330576 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.330924060+00:00 stderr F I1013 00:19:02.330843 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.341532906+00:00 stderr F I1013 00:19:02.341437 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-10-13T00:19:02.346044360+00:00 stderr F I1013 00:19:02.345939 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger" 2025-10-13T00:19:02.346044360+00:00 stderr F I1013 00:19:02.345968 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger" 2025-10-13T00:19:02.361910882+00:00 stderr F I1013 00:19:02.361855 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-10-13T00:19:02.361910882+00:00 stderr F I1013 00:19:02.361896 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-10-13T00:19:02.372451195+00:00 stderr F I1013 00:19:02.372390 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-10-13T00:19:02.412919049+00:00 stderr F I1013 00:19:02.412851 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1 2025-10-13T00:19:02.412968801+00:00 stderr F I1013 00:19:02.412961 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-10-13T00:19:02.414093234+00:00 stderr F I1013 00:19:02.414016 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1 2025-10-13T00:19:02.414221428+00:00 stderr F I1013 00:19:02.414163 1 log.go:245] /crc changed, triggering operconf reconciliation 2025-10-13T00:19:02.414353742+00:00 stderr F I1013 00:19:02.414292 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1 2025-10-13T00:19:02.414398083+00:00 stderr F I1013 00:19:02.414372 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1 2025-10-13T00:19:02.414476165+00:00 stderr F I1013 00:19:02.414440 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-10-13T00:19:02.414655171+00:00 stderr F I1013 00:19:02.414614 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1 2025-10-13T00:19:02.414696082+00:00 stderr F I1013 00:19:02.414669 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1 2025-10-13T00:19:02.414746243+00:00 stderr F I1013 00:19:02.414720 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-10-13T00:19:02.414902278+00:00 stderr F I1013 00:19:02.414862 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1 2025-10-13T00:19:02.414931449+00:00 stderr F I1013 00:19:02.414915 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-10-13T00:19:02.415869067+00:00 stderr F I1013 00:19:02.415509 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.416946409+00:00 stderr F I1013 00:19:02.416888 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.420067472+00:00 stderr F I1013 00:19:02.420019 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.514002515+00:00 stderr F I1013 00:19:02.513947 1 log.go:245] successful reconciliation 2025-10-13T00:19:02.515874980+00:00 stderr F I1013 00:19:02.515813 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1 2025-10-13T00:19:02.515920752+00:00 stderr F I1013 00:19:02.515904 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2025-10-13T00:19:02.524379623+00:00 stderr F I1013 00:19:02.523452 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.524379623+00:00 stderr F I1013 00:19:02.523521 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2025-10-13T00:19:02.529602399+00:00 stderr F I1013 00:19:02.529564 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.529602399+00:00 stderr F I1013 00:19:02.529596 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-10-13T00:19:02.532984889+00:00 stderr F I1013 00:19:02.532941 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.533007770+00:00 stderr F I1013 00:19:02.533001 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2025-10-13T00:19:02.536996769+00:00 stderr F I1013 00:19:02.536954 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.536996769+00:00 stderr F I1013 00:19:02.536979 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2025-10-13T00:19:02.541465692+00:00 stderr F I1013 00:19:02.541422 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.541487122+00:00 stderr F I1013 00:19:02.541478 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-10-13T00:19:02.546398238+00:00 stderr F I1013 00:19:02.546342 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.546398238+00:00 stderr F I1013 00:19:02.546369 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-10-13T00:19:02.550936603+00:00 stderr F I1013 00:19:02.550897 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.550936603+00:00 stderr F I1013 00:19:02.550922 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 2025-10-13T00:19:02.558624762+00:00 stderr F I1013 00:19:02.558500 1 reflector.go:351] Caches populated for 
*v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:02.558957062+00:00 stderr F I1013 00:19:02.558900 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.559019393+00:00 stderr F I1013 00:19:02.558990 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2025-10-13T00:19:02.617071340+00:00 stderr F I1013 00:19:02.616369 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-10-13T00:19:02.617071340+00:00 stderr F I1013 00:19:02.617032 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-10-13T00:19:02.646364681+00:00 stderr F I1013 00:19:02.646282 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.655780721+00:00 stderr F I1013 00:19:02.655759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.663503051+00:00 stderr F I1013 00:19:02.663320 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.673456397+00:00 stderr F I1013 00:19:02.673356 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.680135995+00:00 stderr F I1013 00:19:02.680071 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.687359280+00:00 stderr F I1013 00:19:02.687260 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.695014518+00:00 stderr F I1013 00:19:02.694952 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.704539221+00:00 stderr F I1013 00:19:02.703476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.712443126+00:00 stderr F I1013 00:19:02.712108 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:19:02.719405103+00:00 stderr F I1013 00:19:02.719355 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:19:02.727414361+00:00 stderr F I1013 00:19:02.727370 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.727433662+00:00 stderr F I1013 00:19:02.727415 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-10-13T00:19:02.731318308+00:00 stderr F I1013 00:19:02.731258 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:19:02.732079870+00:00 stderr F I1013 
00:19:02.732058 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-10-13T00:19:02.732079870+00:00 stderr F I1013 00:19:02.732070 1 log.go:245] Successfully updated Operator config from Cluster config 2025-10-13T00:19:02.928606805+00:00 stderr F I1013 00:19:02.928558 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:19:02.928655496+00:00 stderr F I1013 00:19:02.928603 1 log.go:245] Reconciling proxy 'cluster' 2025-10-13T00:19:03.025441794+00:00 stderr F I1013 00:19:03.025344 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-10-13T00:19:03.321472357+00:00 stderr F I1013 00:19:03.321422 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2025-10-13T00:19:03.324062944+00:00 stderr F I1013 00:19:03.324030 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:03.325486076+00:00 stderr F I1013 00:19:03.325468 1 log.go:245] Reconciling proxy 'cluster' complete 2025-10-13T00:19:03.537406599+00:00 stderr F I1013 00:19:03.533983 1 dashboard_controller.go:113] Reconcile dashboards 2025-10-13T00:19:03.543608653+00:00 stderr F I1013 00:19:03.543466 1 dashboard_controller.go:139] Applying dashboards manifests 2025-10-13T00:19:03.572441580+00:00 stderr F I1013 00:19:03.572362 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-10-13T00:19:03.598340381+00:00 stderr F I1013 00:19:03.597717 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-10-13T00:19:03.598340381+00:00 stderr F I1013 00:19:03.597765 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-10-13T00:19:03.614470370+00:00 stderr F I1013 00:19:03.610642 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-10-13T00:19:03.721808482+00:00 stderr F I1013 00:19:03.721729 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-10-13T00:19:03.728117520+00:00 stderr F I1013 00:19:03.728027 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:03.731774659+00:00 stderr F I1013 00:19:03.731707 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:03.823470316+00:00 stderr F I1013 00:19:03.823370 1 log.go:245] successful reconciliation 2025-10-13T00:19:04.123857129+00:00 stderr F I1013 00:19:04.123756 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-10-13T00:19:04.526154092+00:00 stderr F I1013 00:19:04.526082 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle 2025-10-13T00:19:04.529139111+00:00 stderr F I1013 00:19:04.529068 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-10-13T00:19:04.531654226+00:00 stderr F I1013 00:19:04.531605 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:04.532479690+00:00 stderr F I1013 00:19:04.532407 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes 
are [] 2025-10-13T00:19:04.534887982+00:00 stderr F I1013 00:19:04.534846 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-10-13T00:19:04.535008246+00:00 stderr F I1013 00:19:04.534947 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0018b2980 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-10-13T00:19:04.721279515+00:00 stderr F I1013 00:19:04.721129 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:19:04.728714416+00:00 stderr F I1013 00:19:04.728629 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-10-13T00:19:04.728714416+00:00 stderr F I1013 00:19:04.728651 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-10-13T00:19:04.728714416+00:00 stderr F I1013 00:19:04.728662 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-10-13T00:19:04.734019734+00:00 stderr F I1013 00:19:04.733956 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:19:04.734019734+00:00 stderr F I1013 00:19:04.733976 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:19:04.734019734+00:00 stderr F I1013 00:19:04.733983 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:19:04.734019734+00:00 stderr F I1013 00:19:04.733989 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:19:04.734063185+00:00 stderr F I1013 00:19:04.734028 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-10-13T00:19:04.745442534+00:00 stderr F I1013 00:19:04.745316 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-10-13T00:19:04.745442534+00:00 stderr F I1013 00:19:04.745398 1 log.go:245] Successfully updated Operator config from Cluster config 2025-10-13T00:19:04.746215487+00:00 stderr F I1013 00:19:04.745914 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:19:04.924641903+00:00 stderr F I1013 00:19:04.924552 1 dashboard_controller.go:113] Reconcile dashboards 2025-10-13T00:19:04.929442756+00:00 stderr F I1013 00:19:04.929308 1 dashboard_controller.go:139] Applying dashboards manifests 2025-10-13T00:19:04.949019188+00:00 stderr F I1013 00:19:04.948971 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-10-13T00:19:04.969139286+00:00 stderr F I1013 00:19:04.969046 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-10-13T00:19:04.969139286+00:00 stderr F I1013 00:19:04.969106 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-10-13T00:19:04.986275436+00:00 stderr F I1013 00:19:04.986201 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-config-managed/grafana-dashboard-network-stats was successful 2025-10-13T00:19:05.127515586+00:00 stderr F I1013 00:19:05.127400 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:19:05.144041158+00:00 stderr F I1013 00:19:05.143947 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:19:05.144179402+00:00 stderr F E1013 00:19:05.144109 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="29419321-9989-4d34-a42c-749a3de30783" 2025-10-13T00:19:05.144210223+00:00 stderr F I1013 00:19:05.144197 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-10-13T00:19:05.325783452+00:00 stderr F I1013 00:19:05.325650 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-10-13T00:19:05.332181023+00:00 stderr F I1013 00:19:05.332100 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:05.333899594+00:00 stderr F I1013 00:19:05.333832 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:05.426938091+00:00 stderr F I1013 00:19:05.426860 1 log.go:245] successful reconciliation 2025-10-13T00:19:05.527491901+00:00 stderr F I1013 00:19:05.525138 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle 2025-10-13T00:19:05.527491901+00:00 stderr F I1013 00:19:05.527220 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:06.531453388+00:00 stderr F I1013 00:19:06.531355 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-10-13T00:19:06.534951152+00:00 stderr F I1013 00:19:06.534807 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-10-13T00:19:06.538726174+00:00 stderr F I1013 00:19:06.538613 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-10-13T00:19:06.538726174+00:00 stderr F I1013 00:19:06.538643 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000ec5480 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-10-13T00:19:06.545709042+00:00 stderr F I1013 00:19:06.545620 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-10-13T00:19:06.545709042+00:00 stderr F I1013 00:19:06.545645 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 
2025-10-13T00:19:06.545709042+00:00 stderr F I1013 00:19:06.545655 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-10-13T00:19:06.549685360+00:00 stderr F I1013 00:19:06.549583 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:19:06.549685360+00:00 stderr F I1013 00:19:06.549638 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:19:06.549685360+00:00 stderr F I1013 00:19:06.549647 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:19:06.549685360+00:00 stderr F I1013 00:19:06.549654 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:19:06.549729631+00:00 stderr F I1013 00:19:06.549688 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-10-13T00:19:06.926498084+00:00 stderr F I1013 00:19:06.926366 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle 2025-10-13T00:19:06.927659509+00:00 stderr F I1013 00:19:06.927569 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:19:06.931372629+00:00 stderr F I1013 00:19:06.931275 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:06.949616342+00:00 stderr F I1013 00:19:06.949522 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-10-13T00:19:06.949616342+00:00 stderr F I1013 00:19:06.949582 1 log.go:245] Starting render phase 2025-10-13T00:19:07.014221543+00:00 stderr F I1013 00:19:07.014112 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-10-13T00:19:07.122579706+00:00 stderr F I1013 00:19:07.122474 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-10-13T00:19:07.122579706+00:00 stderr F I1013 00:19:07.122517 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-10-13T00:19:07.122579706+00:00 stderr F I1013 00:19:07.122557 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-10-13T00:19:07.122649568+00:00 stderr F I1013 00:19:07.122608 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-10-13T00:19:07.127858783+00:00 stderr F I1013 00:19:07.127816 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle 2025-10-13T00:19:07.130034047+00:00 stderr F I1013 00:19:07.129996 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:07.138785757+00:00 stderr F I1013 00:19:07.138745 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-10-13T00:19:07.138785757+00:00 stderr F I1013 00:19:07.138770 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-10-13T00:19:07.325157180+00:00 stderr F I1013 00:19:07.324128 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2025-10-13T00:19:07.326794379+00:00 stderr F I1013 00:19:07.326747 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:07.345318630+00:00 stderr F I1013 00:19:07.345247 1 log.go:245] Render phase done, rendered 112 objects 2025-10-13T00:19:07.523709265+00:00 stderr F I1013 00:19:07.523620 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca 2025-10-13T00:19:07.525610871+00:00 stderr F I1013 00:19:07.525522 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:07.927057690+00:00 stderr F I1013 00:19:07.926988 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2025-10-13T00:19:07.927427161+00:00 stderr F I1013 00:19:07.927394 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-10-13T00:19:07.934524002+00:00 stderr F I1013 00:19:07.934461 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:07.934696537+00:00 stderr F I1013 00:19:07.934656 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-10-13T00:19:07.934737108+00:00 stderr F I1013 00:19:07.934712 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-10-13T00:19:07.942863220+00:00 stderr F I1013 00:19:07.942830 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-10-13T00:19:07.942941482+00:00 stderr F I1013 00:19:07.942927 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-10-13T00:19:07.949879449+00:00 stderr F I1013 00:19:07.949837 1 
log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-10-13T00:19:07.949904249+00:00 stderr F I1013 00:19:07.949884 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-10-13T00:19:07.959540796+00:00 stderr F I1013 00:19:07.959511 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-10-13T00:19:07.959613988+00:00 stderr F I1013 00:19:07.959598 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-10-13T00:19:07.965386970+00:00 stderr F I1013 00:19:07.965282 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-10-13T00:19:07.965464352+00:00 stderr F I1013 00:19:07.965403 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-10-13T00:19:07.970568814+00:00 stderr F I1013 00:19:07.970536 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-10-13T00:19:07.970648536+00:00 stderr F I1013 00:19:07.970633 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-10-13T00:19:07.975126219+00:00 stderr F I1013 00:19:07.975099 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-10-13T00:19:07.975197972+00:00 stderr F I1013 00:19:07.975183 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-10-13T00:19:07.979187520+00:00 stderr F I1013 00:19:07.979140 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-10-13T00:19:07.979238902+00:00 stderr F I1013 00:19:07.979193 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-10-13T00:19:07.982790187+00:00 stderr F I1013 00:19:07.982765 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-10-13T00:19:07.982869410+00:00 stderr F I1013 00:19:07.982844 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-10-13T00:19:07.986196189+00:00 stderr F I1013 00:19:07.986156 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-10-13T00:19:07.986196189+00:00 stderr F I1013 00:19:07.986184 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-10-13T00:19:08.127116069+00:00 stderr F I1013 00:19:08.127008 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle 2025-10-13T00:19:08.129839060+00:00 stderr F I1013 00:19:08.129781 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.131873971+00:00 stderr F I1013 00:19:08.131835 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-10-13T00:19:08.131873971+00:00 stderr F I1013 00:19:08.131866 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-10-13T00:19:08.327896601+00:00 stderr F I1013 00:19:08.327732 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle 
2025-10-13T00:19:08.330089066+00:00 stderr F I1013 00:19:08.329978 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.334179508+00:00 stderr F I1013 00:19:08.334099 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-10-13T00:19:08.334179508+00:00 stderr F I1013 00:19:08.334172 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-10-13T00:19:08.531668010+00:00 stderr F I1013 00:19:08.531576 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle 2025-10-13T00:19:08.535888086+00:00 stderr F I1013 00:19:08.535839 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.536782172+00:00 stderr F I1013 00:19:08.536740 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-10-13T00:19:08.536807523+00:00 stderr F I1013 00:19:08.536791 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-10-13T00:19:08.727304488+00:00 stderr F I1013 00:19:08.726409 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-10-13T00:19:08.731862414+00:00 stderr F I1013 00:19:08.731801 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-10-13T00:19:08.731862414+00:00 stderr F I1013 00:19:08.731856 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-10-13T00:19:08.732440781+00:00 stderr F I1013 00:19:08.732401 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-10-13T00:19:08.732554555+00:00 stderr F I1013 00:19:08.732529 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.732650737+00:00 stderr F I1013 00:19:08.732624 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.732760421+00:00 stderr F I1013 00:19:08.732737 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.732890555+00:00 stderr F I1013 00:19:08.732847 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733006138+00:00 stderr F I1013 00:19:08.732981 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733098601+00:00 stderr F I1013 00:19:08.733075 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733204234+00:00 stderr F I1013 00:19:08.733165 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733382119+00:00 stderr F I1013 00:19:08.733307 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733495242+00:00 stderr F I1013 00:19:08.733470 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733609306+00:00 stderr F I1013 00:19:08.733584 1 log.go:245] ConfigMap 
openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733696608+00:00 stderr F I1013 00:19:08.733672 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.733782141+00:00 stderr F I1013 00:19:08.733758 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:19:08.936117398+00:00 stderr F I1013 00:19:08.935958 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-10-13T00:19:08.936117398+00:00 stderr F I1013 00:19:08.936058 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-10-13T00:19:09.134276561+00:00 stderr F I1013 00:19:09.134154 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-10-13T00:19:09.134276561+00:00 stderr F I1013 00:19:09.134216 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-10-13T00:19:09.332272149+00:00 stderr F I1013 00:19:09.332188 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-10-13T00:19:09.332272149+00:00 stderr F I1013 00:19:09.332228 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-10-13T00:19:09.534480642+00:00 stderr F I1013 00:19:09.534319 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-10-13T00:19:09.534480642+00:00 stderr F I1013 00:19:09.534465 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-10-13T00:19:09.734793919+00:00 stderr F I1013 00:19:09.734694 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-10-13T00:19:09.734793919+00:00 stderr F I1013 00:19:09.734742 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-10-13T00:19:09.933072436+00:00 stderr F I1013 00:19:09.933027 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-10-13T00:19:09.933165459+00:00 stderr F I1013 00:19:09.933143 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-10-13T00:19:10.133404254+00:00 stderr F I1013 00:19:10.133315 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-10-13T00:19:10.133404254+00:00 stderr F I1013 00:19:10.133378 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-10-13T00:19:10.354905279+00:00 stderr F I1013 00:19:10.354831 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-10-13T00:19:10.354905279+00:00 stderr F I1013 00:19:10.354894 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-10-13T00:19:10.543438356+00:00 stderr F I1013 00:19:10.543350 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-10-13T00:19:10.543625252+00:00 stderr F I1013 00:19:10.543601 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-10-13T00:19:10.732923661+00:00 stderr F I1013 00:19:10.732814 1 log.go:245] Apply / 
Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-10-13T00:19:10.732923661+00:00 stderr F I1013 00:19:10.732863 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-10-13T00:19:10.933111435+00:00 stderr F I1013 00:19:10.932999 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-10-13T00:19:10.933111435+00:00 stderr F I1013 00:19:10.933069 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-10-13T00:19:11.132639999+00:00 stderr F I1013 00:19:11.132515 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-10-13T00:19:11.132639999+00:00 stderr F I1013 00:19:11.132583 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-10-13T00:19:11.338853141+00:00 stderr F I1013 00:19:11.338740 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-10-13T00:19:11.338853141+00:00 stderr F I1013 00:19:11.338817 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-10-13T00:19:11.539403076+00:00 stderr F I1013 00:19:11.539245 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-10-13T00:19:11.539403076+00:00 stderr F I1013 00:19:11.539314 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-10-13T00:19:11.736576749+00:00 stderr F I1013 00:19:11.736494 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-10-13T00:19:11.736749274+00:00 stderr F I1013 00:19:11.736724 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:19:11.934025041+00:00 stderr F I1013 00:19:11.933960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:11.934162305+00:00 stderr F I1013 00:19:11.934151 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:19:12.141168841+00:00 stderr F I1013 00:19:12.141086 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:12.141409328+00:00 stderr F I1013 00:19:12.141382 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-10-13T00:19:12.337039556+00:00 stderr F I1013 00:19:12.336954 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-10-13T00:19:12.337244022+00:00 stderr F I1013 00:19:12.337220 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-10-13T00:19:12.533853829+00:00 stderr F I1013 00:19:12.533768 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-10-13T00:19:12.534049865+00:00 stderr F I1013 00:19:12.534025 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-10-13T00:19:12.734626310+00:00 stderr F I1013 00:19:12.734511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/multus-admission-controller-webhook was successful 2025-10-13T00:19:12.734711322+00:00 stderr F I1013 00:19:12.734627 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-10-13T00:19:12.933742061+00:00 stderr F I1013 00:19:12.933636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-10-13T00:19:12.933742061+00:00 stderr F I1013 00:19:12.933698 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-10-13T00:19:13.145089976+00:00 stderr F I1013 00:19:13.145015 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-10-13T00:19:13.145274062+00:00 stderr F I1013 00:19:13.145249 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-10-13T00:19:13.344287780+00:00 stderr F I1013 00:19:13.344183 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-10-13T00:19:13.344287780+00:00 stderr F I1013 00:19:13.344257 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-10-13T00:19:13.356649918+00:00 stderr F I1013 00:19:13.356588 1 allowlist_controller.go:146] Successfully updated sysctl allowlist 2025-10-13T00:19:13.540641930+00:00 stderr F I1013 00:19:13.539687 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-10-13T00:19:13.540641930+00:00 stderr F I1013 00:19:13.539743 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:19:13.735203966+00:00 stderr F I1013 00:19:13.735096 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:13.735203966+00:00 stderr F I1013 00:19:13.735144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:19:13.935473350+00:00 stderr F I1013 00:19:13.935380 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:13.935473350+00:00 stderr F I1013 00:19:13.935439 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-10-13T00:19:14.138543459+00:00 stderr F I1013 00:19:14.138454 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-10-13T00:19:14.138543459+00:00 stderr F I1013 00:19:14.138522 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-10-13T00:19:14.332968291+00:00 stderr F I1013 00:19:14.332855 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-10-13T00:19:14.332968291+00:00 stderr F I1013 00:19:14.332921 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-10-13T00:19:14.545399188+00:00 stderr F I1013 00:19:14.545220 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-10-13T00:19:14.545399188+00:00 
stderr F I1013 00:19:14.545312 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-10-13T00:19:14.764755622+00:00 stderr F I1013 00:19:14.764621 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-10-13T00:19:14.764755622+00:00 stderr F I1013 00:19:14.764717 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-10-13T00:19:14.949219128+00:00 stderr F I1013 00:19:14.949122 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-10-13T00:19:14.949219128+00:00 stderr F I1013 00:19:14.949208 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-10-13T00:19:15.181411663+00:00 stderr F I1013 00:19:15.181145 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-10-13T00:19:15.189380360+00:00 stderr F I1013 00:19:15.188395 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-10-13T00:19:15.341640028+00:00 stderr F I1013 00:19:15.341523 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-10-13T00:19:15.341640028+00:00 stderr F I1013 00:19:15.341582 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:19:15.561896358+00:00 stderr F I1013 00:19:15.561810 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:19:15.562083104+00:00 stderr F I1013 00:19:15.562059 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:19:15.774153730+00:00 stderr F I1013 00:19:15.774025 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:19:15.774153730+00:00 stderr F I1013 00:19:15.774113 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:19:15.937079035+00:00 stderr F I1013 00:19:15.936963 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:19:15.937079035+00:00 stderr F I1013 00:19:15.937056 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-10-13T00:19:16.135105575+00:00 stderr F I1013 00:19:16.134985 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:19:16.135105575+00:00 stderr F I1013 00:19:16.135060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-10-13T00:19:16.334991759+00:00 stderr F I1013 00:19:16.334866 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 
2025-10-13T00:19:16.334991759+00:00 stderr F I1013 00:19:16.334957 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-10-13T00:19:16.537143530+00:00 stderr F I1013 00:19:16.537036 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:19:16.537143530+00:00 stderr F I1013 00:19:16.537088 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-10-13T00:19:16.735623463+00:00 stderr F I1013 00:19:16.735475 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-10-13T00:19:16.735623463+00:00 stderr F I1013 00:19:16.735533 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-10-13T00:19:16.935461306+00:00 stderr F I1013 00:19:16.935388 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-10-13T00:19:16.935544598+00:00 stderr F I1013 00:19:16.935463 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-10-13T00:19:17.135177445+00:00 stderr F I1013 00:19:17.135112 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-10-13T00:19:17.135177445+00:00 stderr F I1013 00:19:17.135169 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-10-13T00:19:17.335790211+00:00 stderr F I1013 00:19:17.335691 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-10-13T00:19:17.335858433+00:00 stderr F I1013 00:19:17.335783 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:19:17.539259551+00:00 stderr F I1013 00:19:17.538846 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:19:17.539259551+00:00 stderr F I1013 00:19:17.538912 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:17.737816906+00:00 stderr F I1013 00:19:17.737745 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:17.737816906+00:00 stderr F I1013 00:19:17.737798 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:17.934281619+00:00 stderr F I1013 00:19:17.933748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:17.934281619+00:00 stderr F I1013 00:19:17.934242 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:18.136000837+00:00 stderr F I1013 00:19:18.135923 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 
2025-10-13T00:19:18.136000837+00:00 stderr F I1013 00:19:18.135972 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:18.334588023+00:00 stderr F I1013 00:19:18.334466 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:18.334588023+00:00 stderr F I1013 00:19:18.334559 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-10-13T00:19:18.535597061+00:00 stderr F I1013 00:19:18.535513 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-10-13T00:19:18.535597061+00:00 stderr F I1013 00:19:18.535561 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-10-13T00:19:18.737582967+00:00 stderr F I1013 00:19:18.737472 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-10-13T00:19:18.737582967+00:00 stderr F I1013 00:19:18.737534 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-10-13T00:19:18.939666327+00:00 stderr F I1013 00:19:18.939590 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-10-13T00:19:18.939730459+00:00 stderr F I1013 00:19:18.939670 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-10-13T00:19:19.135396257+00:00 stderr F I1013 00:19:19.135245 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-10-13T00:19:19.135396257+00:00 stderr F I1013 00:19:19.135357 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-10-13T00:19:19.340387474+00:00 stderr F I1013 00:19:19.340220 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-10-13T00:19:19.340387474+00:00 stderr F I1013 00:19:19.340285 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-10-13T00:19:19.544418561+00:00 stderr F I1013 00:19:19.544273 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-10-13T00:19:19.544418561+00:00 stderr F I1013 00:19:19.544372 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-10-13T00:19:19.742252114+00:00 stderr F I1013 00:19:19.742141 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-10-13T00:19:19.742368998+00:00 stderr F I1013 00:19:19.742255 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-10-13T00:19:19.934793470+00:00 stderr F I1013 00:19:19.934676 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-10-13T00:19:19.934793470+00:00 stderr F I1013 00:19:19.934741 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) 
openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-10-13T00:19:20.139620012+00:00 stderr F I1013 00:19:20.139442 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-10-13T00:19:20.139620012+00:00 stderr F I1013 00:19:20.139531 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:19:20.339170586+00:00 stderr F I1013 00:19:20.339069 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:19:20.339218238+00:00 stderr F I1013 00:19:20.339161 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-10-13T00:19:20.538858735+00:00 stderr F I1013 00:19:20.538745 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-10-13T00:19:20.538858735+00:00 stderr F I1013 00:19:20.538832 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:19:20.735396699+00:00 stderr F I1013 00:19:20.735291 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:19:20.735458541+00:00 stderr F I1013 00:19:20.735407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:19:20.935555902+00:00 stderr F I1013 00:19:20.935443 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:19:20.935555902+00:00 stderr F I1013 00:19:20.935529 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:19:21.134102645+00:00 stderr F I1013 00:19:21.134014 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:19:21.134146857+00:00 stderr F I1013 00:19:21.134106 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-10-13T00:19:21.337389821+00:00 stderr F I1013 00:19:21.337291 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-10-13T00:19:21.337453233+00:00 stderr F I1013 00:19:21.337426 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-10-13T00:19:21.538052478+00:00 stderr F I1013 00:19:21.537967 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-10-13T00:19:21.538090069+00:00 stderr F I1013 00:19:21.538067 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-10-13T00:19:21.755648719+00:00 stderr F I1013 00:19:21.755573 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-10-13T00:19:21.755648719+00:00 stderr F I1013 00:19:21.755630 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-10-13T00:19:21.952979407+00:00 stderr F I1013 00:19:21.952927 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-10-13T00:19:21.953122042+00:00 stderr F I1013 00:19:21.953108 1 log.go:245] 
reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-10-13T00:19:22.134648750+00:00 stderr F I1013 00:19:22.134579 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-10-13T00:19:22.134648750+00:00 stderr F I1013 00:19:22.134627 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:19:22.336402550+00:00 stderr F I1013 00:19:22.336319 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:19:22.336402550+00:00 stderr F I1013 00:19:22.336394 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:19:22.534635844+00:00 stderr F I1013 00:19:22.534527 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:19:22.534635844+00:00 stderr F I1013 00:19:22.534577 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:19:22.732806808+00:00 stderr F I1013 00:19:22.732690 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:19:22.732806808+00:00 stderr F I1013 00:19:22.732744 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-10-13T00:19:22.934874387+00:00 stderr F I1013 00:19:22.934809 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-10-13T00:19:22.934874387+00:00 stderr F I1013 00:19:22.934856 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-10-13T00:19:23.133681529+00:00 stderr F I1013 00:19:23.133617 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-10-13T00:19:23.133876475+00:00 stderr F I1013 00:19:23.133680 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-10-13T00:19:23.333358327+00:00 stderr F I1013 00:19:23.333215 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-10-13T00:19:23.333358327+00:00 stderr F I1013 00:19:23.333304 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-10-13T00:19:23.538646122+00:00 stderr F I1013 00:19:23.538559 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:19:23.538697244+00:00 stderr F I1013 00:19:23.538650 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-10-13T00:19:23.737856486+00:00 stderr F I1013 00:19:23.737215 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:19:23.737890307+00:00 stderr F I1013 00:19:23.737858 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-10-13T00:19:23.933513675+00:00 stderr F I1013 00:19:23.933441 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) 
openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:19:23.934073361+00:00 stderr F I1013 00:19:23.933547 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:19:24.134565634+00:00 stderr F I1013 00:19:24.134467 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:19:24.134565634+00:00 stderr F I1013 00:19:24.134521 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:19:24.333380296+00:00 stderr F I1013 00:19:24.333283 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:19:24.333437738+00:00 stderr F I1013 00:19:24.333376 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-10-13T00:19:24.544538536+00:00 stderr F I1013 00:19:24.544416 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:19:24.544538536+00:00 stderr F I1013 00:19:24.544503 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-10-13T00:19:24.736136883+00:00 stderr F I1013 00:19:24.736031 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:19:24.736136883+00:00 stderr F I1013 00:19:24.736096 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-10-13T00:19:24.936780319+00:00 stderr F I1013 00:19:24.936669 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-10-13T00:19:24.936780319+00:00 stderr F I1013 00:19:24.936740 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-10-13T00:19:25.136231241+00:00 stderr F I1013 00:19:25.135937 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-10-13T00:19:25.136231241+00:00 stderr F I1013 00:19:25.136170 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-10-13T00:19:25.334505007+00:00 stderr F I1013 00:19:25.334307 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-10-13T00:19:25.334505007+00:00 stderr F I1013 00:19:25.334437 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-10-13T00:19:25.535311039+00:00 stderr F I1013 00:19:25.535211 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:19:25.535311039+00:00 stderr F I1013 00:19:25.535263 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-10-13T00:19:25.735175602+00:00 stderr F I1013 00:19:25.735078 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-10-13T00:19:25.735175602+00:00 stderr F I1013 00:19:25.735132 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-10-13T00:19:25.935705356+00:00 stderr F I1013 00:19:25.935615 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-10-13T00:19:25.935705356+00:00 stderr F I1013 00:19:25.935689 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:19:26.136426445+00:00 stderr F I1013 00:19:26.136153 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:19:26.136426445+00:00 stderr F I1013 00:19:26.136261 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:19:26.336210446+00:00 stderr F I1013 00:19:26.336109 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:19:26.336210446+00:00 stderr F I1013 00:19:26.336165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-10-13T00:19:26.534400490+00:00 stderr F I1013 00:19:26.534284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-10-13T00:19:26.534504273+00:00 stderr F I1013 00:19:26.534400 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-10-13T00:19:26.735444839+00:00 stderr F I1013 00:19:26.735318 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-10-13T00:19:26.735444839+00:00 stderr F I1013 00:19:26.735415 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-10-13T00:19:26.936680073+00:00 stderr F I1013 00:19:26.936579 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:19:26.936680073+00:00 stderr F I1013 00:19:26.936662 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-10-13T00:19:27.137555977+00:00 stderr F I1013 00:19:27.137445 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-10-13T00:19:27.137555977+00:00 stderr F I1013 00:19:27.137516 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-10-13T00:19:27.346180991+00:00 stderr F I1013 00:19:27.346071 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:19:27.346269744+00:00 stderr F I1013 00:19:27.346173 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-10-13T00:19:27.534891543+00:00 stderr F I1013 00:19:27.534753 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 
2025-10-13T00:19:27.534891543+00:00 stderr F I1013 00:19:27.534837 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-10-13T00:19:27.735847819+00:00 stderr F I1013 00:19:27.735721 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-10-13T00:19:27.735847819+00:00 stderr F I1013 00:19:27.735808 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-10-13T00:19:27.932979662+00:00 stderr F I1013 00:19:27.932901 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:19:27.932979662+00:00 stderr F I1013 00:19:27.932971 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-10-13T00:19:28.132600939+00:00 stderr F I1013 00:19:28.132528 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-10-13T00:19:28.132600939+00:00 stderr F I1013 00:19:28.132576 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-10-13T00:19:28.333149032+00:00 stderr F I1013 00:19:28.333060 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-10-13T00:19:28.333149032+00:00 stderr F I1013 00:19:28.333138 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-10-13T00:19:28.540137937+00:00 stderr F I1013 00:19:28.540070 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:19:28.553381391+00:00 stderr F I1013 00:19:28.553272 1 log.go:245] Operconfig Controller complete 2025-10-13T00:19:28.553381391+00:00 stderr F I1013 00:19:28.553357 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-10-13T00:19:28.817866276+00:00 stderr F I1013 00:19:28.817160 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-10-13T00:19:28.819849825+00:00 stderr F I1013 00:19:28.819779 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-10-13T00:19:28.821603397+00:00 stderr F I1013 00:19:28.821537 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-10-13T00:19:28.821629468+00:00 stderr F I1013 00:19:28.821572 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0036fc800 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-10-13T00:19:28.825633867+00:00 stderr F I1013 00:19:28.825559 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-10-13T00:19:28.825633867+00:00 stderr F I1013 00:19:28.825594 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-10-13T00:19:28.825633867+00:00 stderr F I1013 00:19:28.825609 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-10-13T00:19:28.827924396+00:00 stderr F I1013 
00:19:28.827849 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:19:28.827924396+00:00 stderr F I1013 00:19:28.827888 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:19:28.827924396+00:00 stderr F I1013 00:19:28.827900 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:19:28.827924396+00:00 stderr F I1013 00:19:28.827914 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:19:28.828019378+00:00 stderr F I1013 00:19:28.827964 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-10-13T00:19:28.839628334+00:00 stderr F I1013 00:19:28.839549 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:19:28.852113485+00:00 stderr F I1013 00:19:28.852054 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-10-13T00:19:28.852113485+00:00 stderr F I1013 00:19:28.852086 1 log.go:245] Starting render phase 2025-10-13T00:19:28.869036828+00:00 stderr F I1013 00:19:28.868944 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-10-13T00:19:28.891641290+00:00 stderr F I1013 00:19:28.891582 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-10-13T00:19:28.891641290+00:00 stderr F I1013 00:19:28.891606 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-10-13T00:19:28.891641290+00:00 stderr F I1013 00:19:28.891630 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-10-13T00:19:28.891708252+00:00 stderr F I1013 00:19:28.891658 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-10-13T00:19:28.899181705+00:00 stderr F I1013 00:19:28.899147 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-10-13T00:19:28.899181705+00:00 stderr F I1013 00:19:28.899160 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-10-13T00:19:28.904504433+00:00 stderr F I1013 00:19:28.904459 1 log.go:245] Render phase done, rendered 112 objects 2025-10-13T00:19:28.918741776+00:00 stderr F I1013 00:19:28.918697 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-10-13T00:19:28.931432924+00:00 stderr F I1013 00:19:28.931400 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-10-13T00:19:28.931432924+00:00 stderr F I1013 00:19:28.931424 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-10-13T00:19:29.134204124+00:00 stderr F I1013 00:19:29.134117 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-10-13T00:19:29.134204124+00:00 stderr F I1013 00:19:29.134158 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/ippools.whereabouts.cni.cncf.io 2025-10-13T00:19:29.333540242+00:00 stderr F I1013 00:19:29.333474 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-10-13T00:19:29.333573463+00:00 stderr F I1013 00:19:29.333536 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-10-13T00:19:29.539119075+00:00 stderr F I1013 00:19:29.539019 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-10-13T00:19:29.539119075+00:00 stderr F I1013 00:19:29.539101 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-10-13T00:19:29.734200557+00:00 stderr F I1013 00:19:29.734097 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-10-13T00:19:29.734200557+00:00 stderr F I1013 00:19:29.734143 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-10-13T00:19:29.937462901+00:00 stderr F I1013 00:19:29.937364 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-10-13T00:19:29.937462901+00:00 stderr F I1013 00:19:29.937446 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-10-13T00:19:30.133843661+00:00 stderr F I1013 00:19:30.133762 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-10-13T00:19:30.133843661+00:00 stderr F I1013 00:19:30.133825 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-10-13T00:19:30.334223040+00:00 stderr F I1013 00:19:30.334045 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-10-13T00:19:30.334223040+00:00 stderr F I1013 00:19:30.334103 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-10-13T00:19:30.532770394+00:00 stderr F I1013 00:19:30.532690 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-10-13T00:19:30.532927339+00:00 stderr F I1013 00:19:30.532907 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-10-13T00:19:30.732206855+00:00 stderr F I1013 00:19:30.732148 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-10-13T00:19:30.732206855+00:00 stderr F I1013 00:19:30.732197 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-10-13T00:19:30.939396767+00:00 stderr F I1013 00:19:30.935825 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-10-13T00:19:30.939396767+00:00 stderr F I1013 00:19:30.935912 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-10-13T00:19:31.135220890+00:00 stderr F I1013 00:19:31.135130 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-10-13T00:19:31.135220890+00:00 stderr F I1013 00:19:31.135186 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 
2025-10-13T00:19:31.334726883+00:00 stderr F I1013 00:19:31.334659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-10-13T00:19:31.334763694+00:00 stderr F I1013 00:19:31.334726 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-10-13T00:19:31.538117792+00:00 stderr F I1013 00:19:31.538015 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-10-13T00:19:31.538117792+00:00 stderr F I1013 00:19:31.538066 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-10-13T00:19:31.736124240+00:00 stderr F I1013 00:19:31.735823 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-10-13T00:19:31.736124240+00:00 stderr F I1013 00:19:31.735875 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-10-13T00:19:31.935637582+00:00 stderr F I1013 00:19:31.935530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-10-13T00:19:31.935637582+00:00 stderr F I1013 00:19:31.935601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-10-13T00:19:32.136609659+00:00 stderr F I1013 00:19:32.136192 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-10-13T00:19:32.136609659+00:00 stderr F I1013 00:19:32.136278 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-10-13T00:19:32.335932956+00:00 stderr F I1013 00:19:32.335824 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-10-13T00:19:32.335932956+00:00 stderr F I1013 00:19:32.335904 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-10-13T00:19:32.536548183+00:00 stderr F I1013 00:19:32.536430 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-10-13T00:19:32.536548183+00:00 stderr F I1013 00:19:32.536536 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-10-13T00:19:32.733815159+00:00 stderr F I1013 00:19:32.733676 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-10-13T00:19:32.733815159+00:00 stderr F I1013 00:19:32.733747 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-10-13T00:19:32.949853344+00:00 stderr F I1013 00:19:32.949739 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-10-13T00:19:32.949853344+00:00 stderr F I1013 00:19:32.949794 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-10-13T00:19:33.142918015+00:00 stderr F I1013 00:19:33.142781 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-10-13T00:19:33.142918015+00:00 stderr F I1013 00:19:33.142885 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-10-13T00:19:33.349145999+00:00 stderr F I1013 00:19:33.349063 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-10-13T00:19:33.349145999+00:00 stderr F I1013 00:19:33.349113 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-10-13T00:19:33.532903123+00:00 stderr F I1013 00:19:33.532776 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-10-13T00:19:33.532903123+00:00 stderr F I1013 00:19:33.532826 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-10-13T00:19:33.735511209+00:00 stderr F I1013 00:19:33.735400 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-10-13T00:19:33.735511209+00:00 stderr F I1013 00:19:33.735464 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-10-13T00:19:33.935070013+00:00 stderr F I1013 00:19:33.934993 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-10-13T00:19:33.935116384+00:00 stderr F I1013 00:19:33.935083 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-10-13T00:19:34.139807822+00:00 stderr F I1013 00:19:34.139695 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-10-13T00:19:34.139807822+00:00 stderr F I1013 00:19:34.139750 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-10-13T00:19:34.336271484+00:00 stderr F I1013 00:19:34.336197 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-10-13T00:19:34.336420989+00:00 stderr F I1013 00:19:34.336403 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-10-13T00:19:34.540362834+00:00 stderr F I1013 00:19:34.540252 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-10-13T00:19:34.540427136+00:00 stderr F I1013 00:19:34.540376 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:19:34.732538749+00:00 stderr F I1013 00:19:34.732449 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:34.732538749+00:00 stderr F I1013 00:19:34.732506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:19:34.933597518+00:00 stderr F I1013 00:19:34.933508 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:34.933597518+00:00 stderr F I1013 00:19:34.933578 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-10-13T00:19:35.135713619+00:00 stderr F I1013 00:19:35.135663 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-10-13T00:19:35.135828612+00:00 stderr F I1013 00:19:35.135813 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-10-13T00:19:35.334947963+00:00 stderr F I1013 00:19:35.334844 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 
2025-10-13T00:19:35.334947963+00:00 stderr F I1013 00:19:35.334901 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-10-13T00:19:35.538084714+00:00 stderr F I1013 00:19:35.537996 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-10-13T00:19:35.538251989+00:00 stderr F I1013 00:19:35.538228 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-10-13T00:19:35.734543856+00:00 stderr F I1013 00:19:35.734452 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-10-13T00:19:35.734543856+00:00 stderr F I1013 00:19:35.734516 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-10-13T00:19:35.937258895+00:00 stderr F I1013 00:19:35.937153 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-10-13T00:19:35.937258895+00:00 stderr F I1013 00:19:35.937223 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-10-13T00:19:36.145246150+00:00 stderr F I1013 00:19:36.145127 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-10-13T00:19:36.145246150+00:00 stderr F I1013 00:19:36.145209 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-10-13T00:19:36.337228829+00:00 stderr F I1013 00:19:36.337143 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-10-13T00:19:36.337512087+00:00 stderr F I1013 00:19:36.337484 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:19:36.534546027+00:00 stderr F I1013 00:19:36.534416 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:36.534546027+00:00 stderr F I1013 00:19:36.534524 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:19:36.734545385+00:00 stderr F I1013 00:19:36.734448 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:19:36.734583977+00:00 stderr F I1013 00:19:36.734542 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-10-13T00:19:36.939105148+00:00 stderr F I1013 00:19:36.938466 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-10-13T00:19:36.939105148+00:00 stderr F I1013 00:19:36.939089 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-10-13T00:19:37.133906081+00:00 stderr F I1013 00:19:37.133822 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-10-13T00:19:37.133945722+00:00 stderr F I1013 00:19:37.133935 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-10-13T00:19:37.339838475+00:00 stderr F 
I1013 00:19:37.339728 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-10-13T00:19:37.339838475+00:00 stderr F I1013 00:19:37.339809 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-10-13T00:19:37.540919214+00:00 stderr F I1013 00:19:37.540831 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-10-13T00:19:37.540919214+00:00 stderr F I1013 00:19:37.540882 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-10-13T00:19:37.740404127+00:00 stderr F I1013 00:19:37.740258 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-10-13T00:19:37.740404127+00:00 stderr F I1013 00:19:37.740315 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-10-13T00:19:37.939810697+00:00 stderr F I1013 00:19:37.939743 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-10-13T00:19:37.939810697+00:00 stderr F I1013 00:19:37.939791 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-10-13T00:19:38.137710342+00:00 stderr F I1013 00:19:38.137643 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-10-13T00:19:38.137710342+00:00 stderr F I1013 00:19:38.137687 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:19:38.356916661+00:00 stderr F I1013 00:19:38.356858 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:19:38.357019504+00:00 stderr F I1013 00:19:38.357008 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:19:38.573024468+00:00 stderr F I1013 00:19:38.566984 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:19:38.573317677+00:00 stderr F I1013 00:19:38.573275 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:19:38.734986504+00:00 stderr F I1013 00:19:38.734915 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:19:38.734986504+00:00 stderr F I1013 00:19:38.734973 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-10-13T00:19:38.933456986+00:00 stderr F I1013 00:19:38.933400 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:19:38.933494117+00:00 stderr F I1013 00:19:38.933452 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-10-13T00:19:39.135355800+00:00 stderr F 
I1013 00:19:39.135291 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-10-13T00:19:39.135444563+00:00 stderr F I1013 00:19:39.135428 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-10-13T00:19:39.335390579+00:00 stderr F I1013 00:19:39.335316 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:19:39.335390579+00:00 stderr F I1013 00:19:39.335370 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-10-13T00:19:39.533760068+00:00 stderr F I1013 00:19:39.533072 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-10-13T00:19:39.533760068+00:00 stderr F I1013 00:19:39.533736 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-10-13T00:19:39.732901110+00:00 stderr F I1013 00:19:39.732828 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-10-13T00:19:39.732901110+00:00 stderr F I1013 00:19:39.732883 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-10-13T00:19:39.932807095+00:00 stderr F I1013 00:19:39.932764 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-10-13T00:19:39.932899298+00:00 stderr F I1013 00:19:39.932883 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-10-13T00:19:40.133375510+00:00 stderr F I1013 00:19:40.133257 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-10-13T00:19:40.133375510+00:00 stderr F I1013 00:19:40.133308 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:19:40.334438499+00:00 stderr F I1013 00:19:40.334376 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:19:40.334574423+00:00 stderr F I1013 00:19:40.334549 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:40.537758655+00:00 stderr F I1013 00:19:40.537669 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:40.537758655+00:00 stderr F I1013 00:19:40.537740 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:40.735453374+00:00 stderr F I1013 00:19:40.735387 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:40.735614229+00:00 stderr F I1013 00:19:40.735590 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:40.935178874+00:00 
stderr F I1013 00:19:40.935111 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:40.935372250+00:00 stderr F I1013 00:19:40.935311 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:19:41.134388498+00:00 stderr F I1013 00:19:41.134286 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:19:41.134474120+00:00 stderr F I1013 00:19:41.134406 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-10-13T00:19:41.333989063+00:00 stderr F I1013 00:19:41.333897 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-10-13T00:19:41.333989063+00:00 stderr F I1013 00:19:41.333949 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-10-13T00:19:41.535416274+00:00 stderr F I1013 00:19:41.535365 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-10-13T00:19:41.535451635+00:00 stderr F I1013 00:19:41.535427 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-10-13T00:19:41.734076451+00:00 stderr F I1013 00:19:41.734018 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-10-13T00:19:41.734076451+00:00 stderr F I1013 00:19:41.734060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-10-13T00:19:41.934226133+00:00 stderr F I1013 00:19:41.934134 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-10-13T00:19:41.934226133+00:00 stderr F I1013 00:19:41.934184 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-10-13T00:19:42.135067725+00:00 stderr F I1013 00:19:42.134996 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-10-13T00:19:42.135067725+00:00 stderr F I1013 00:19:42.135053 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-10-13T00:19:42.336989190+00:00 stderr F I1013 00:19:42.336916 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-10-13T00:19:42.336989190+00:00 stderr F I1013 00:19:42.336960 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-10-13T00:19:42.533164273+00:00 stderr F I1013 00:19:42.533106 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-10-13T00:19:42.533164273+00:00 stderr F I1013 00:19:42.533151 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-10-13T00:19:42.737242142+00:00 stderr F I1013 00:19:42.737171 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-config-managed/openshift-network-features was successful 2025-10-13T00:19:42.737242142+00:00 stderr F I1013 00:19:42.737222 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-10-13T00:19:42.933543050+00:00 stderr F I1013 00:19:42.933462 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-10-13T00:19:42.933543050+00:00 stderr F I1013 00:19:42.933508 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:19:43.136315240+00:00 stderr F I1013 00:19:43.136202 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:19:43.136315240+00:00 stderr F I1013 00:19:43.136282 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-10-13T00:19:43.335551575+00:00 stderr F I1013 00:19:43.335441 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-10-13T00:19:43.335551575+00:00 stderr F I1013 00:19:43.335488 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:19:43.536087150+00:00 stderr F I1013 00:19:43.535973 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:19:43.536087150+00:00 stderr F I1013 00:19:43.536022 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:19:43.733182231+00:00 stderr F I1013 00:19:43.733069 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:19:43.733182231+00:00 stderr F I1013 00:19:43.733152 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:19:43.934396935+00:00 stderr F I1013 00:19:43.934273 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:19:43.934456517+00:00 stderr F I1013 00:19:43.934416 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-10-13T00:19:44.136830915+00:00 stderr F I1013 00:19:44.136707 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-10-13T00:19:44.136830915+00:00 stderr F I1013 00:19:44.136767 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-10-13T00:19:44.334895365+00:00 stderr F I1013 00:19:44.334796 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-10-13T00:19:44.334981128+00:00 stderr F I1013 00:19:44.334887 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-10-13T00:19:44.547568140+00:00 stderr F I1013 00:19:44.547472 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-10-13T00:19:44.547568140+00:00 stderr F I1013 00:19:44.547528 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-10-13T00:19:44.761715638+00:00 stderr F I1013 
00:19:44.761087 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-10-13T00:19:44.761715638+00:00 stderr F I1013 00:19:44.761698 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-10-13T00:19:44.935740694+00:00 stderr F I1013 00:19:44.935647 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-10-13T00:19:44.935740694+00:00 stderr F I1013 00:19:44.935715 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:19:45.133767393+00:00 stderr F I1013 00:19:45.133646 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:19:45.133767393+00:00 stderr F I1013 00:19:45.133721 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:19:45.333632766+00:00 stderr F I1013 00:19:45.333490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:19:45.333632766+00:00 stderr F I1013 00:19:45.333562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:19:45.536244521+00:00 stderr F I1013 00:19:45.536133 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:19:45.536244521+00:00 stderr F I1013 00:19:45.536199 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-10-13T00:19:45.737885668+00:00 stderr F I1013 00:19:45.737771 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-10-13T00:19:45.737885668+00:00 stderr F I1013 00:19:45.737850 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-10-13T00:19:45.935404171+00:00 stderr F I1013 00:19:45.935292 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-10-13T00:19:45.935492904+00:00 stderr F I1013 00:19:45.935407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-10-13T00:19:46.136558112+00:00 stderr F I1013 00:19:46.136432 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-10-13T00:19:46.136558112+00:00 stderr F I1013 00:19:46.136514 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-10-13T00:19:46.342688282+00:00 stderr F I1013 00:19:46.342027 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:19:46.342688282+00:00 stderr F I1013 00:19:46.342115 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-10-13T00:19:46.536908638+00:00 stderr F I1013 00:19:46.536787 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:19:46.536908638+00:00 stderr F I1013 00:19:46.536838 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) 
openshift-network-diagnostics/network-check-source 2025-10-13T00:19:46.736985268+00:00 stderr F I1013 00:19:46.736859 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:19:46.736985268+00:00 stderr F I1013 00:19:46.736921 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:19:46.935443420+00:00 stderr F I1013 00:19:46.935377 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:19:46.935443420+00:00 stderr F I1013 00:19:46.935434 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:19:47.133882301+00:00 stderr F I1013 00:19:47.133764 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:19:47.133882301+00:00 stderr F I1013 00:19:47.133844 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-10-13T00:19:47.342240537+00:00 stderr F I1013 00:19:47.342127 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:19:47.342240537+00:00 stderr F I1013 00:19:47.342199 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-10-13T00:19:47.535866256+00:00 stderr F I1013 00:19:47.535749 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:19:47.535866256+00:00 stderr F I1013 00:19:47.535835 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-10-13T00:19:47.734420930+00:00 stderr F I1013 00:19:47.734359 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-10-13T00:19:47.734420930+00:00 stderr F I1013 00:19:47.734404 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-10-13T00:19:47.934044827+00:00 stderr F I1013 00:19:47.933989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-10-13T00:19:47.934084248+00:00 stderr F I1013 00:19:47.934057 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-10-13T00:19:48.136394814+00:00 stderr F I1013 00:19:48.136317 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-10-13T00:19:48.136394814+00:00 stderr F I1013 00:19:48.136386 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-10-13T00:19:48.333375912+00:00 stderr F I1013 00:19:48.333242 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:19:48.333375912+00:00 stderr F I1013 00:19:48.333294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-10-13T00:19:48.533313878+00:00 stderr F I1013 00:19:48.533185 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-10-13T00:19:48.533313878+00:00 stderr F I1013 00:19:48.533269 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-10-13T00:19:48.734270064+00:00 stderr F I1013 00:19:48.734172 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-10-13T00:19:48.734270064+00:00 stderr F I1013 00:19:48.734223 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:19:48.934975953+00:00 stderr F I1013 00:19:48.934866 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:19:48.934975953+00:00 stderr F I1013 00:19:48.934935 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:19:49.134316311+00:00 stderr F I1013 00:19:49.134220 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:19:49.134316311+00:00 stderr F I1013 00:19:49.134283 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-10-13T00:19:49.332376451+00:00 stderr F I1013 00:19:49.332279 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-10-13T00:19:49.332376451+00:00 stderr F I1013 00:19:49.332364 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-10-13T00:19:49.534662337+00:00 stderr F I1013 00:19:49.534558 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-10-13T00:19:49.534662337+00:00 stderr F I1013 00:19:49.534638 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-10-13T00:19:49.737185159+00:00 stderr F I1013 00:19:49.737082 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:19:49.737185159+00:00 stderr F I1013 00:19:49.737146 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-10-13T00:19:49.937446495+00:00 stderr F I1013 00:19:49.937274 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-10-13T00:19:49.937446495+00:00 stderr F I1013 00:19:49.937419 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-10-13T00:19:50.140649957+00:00 stderr F I1013 00:19:50.140600 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:19:50.140763611+00:00 stderr F I1013 00:19:50.140748 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-10-13T00:19:50.335621445+00:00 stderr F I1013 
00:19:50.335538 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-10-13T00:19:50.335621445+00:00 stderr F I1013 00:19:50.335605 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-10-13T00:19:50.535707206+00:00 stderr F I1013 00:19:50.535246 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-10-13T00:19:50.535707206+00:00 stderr F I1013 00:19:50.535365 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-10-13T00:19:50.734146207+00:00 stderr F I1013 00:19:50.734059 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:19:50.734146207+00:00 stderr F I1013 00:19:50.734108 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-10-13T00:19:50.932272989+00:00 stderr F I1013 00:19:50.931674 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-10-13T00:19:50.932272989+00:00 stderr F I1013 00:19:50.932233 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-10-13T00:19:51.137002957+00:00 stderr F I1013 00:19:51.136863 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-10-13T00:19:51.137002957+00:00 stderr F I1013 00:19:51.136953 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-10-13T00:19:51.337042516+00:00 stderr F I1013 00:19:51.336957 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:19:51.354791714+00:00 stderr F I1013 00:19:51.354718 1 log.go:245] Operconfig Controller complete 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943007 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.942965217 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943061 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.943042629 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943083 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.94306631 +0000 UTC))" 
2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943104 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.943087601 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943124 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943108651 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943143 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943128742 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943164 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943148772 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943185 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943170373 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943206 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.943189803 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943230 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.943215774 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 
00:21:11.943256 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.943236485 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943278 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943261525 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943676 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-10-13 00:21:11.943652126 +0000 UTC))" 2025-10-13T00:21:11.944484638+00:00 stderr F I1013 00:21:11.943975 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314371\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314370\" (2025-10-12 23:12:50 +0000 UTC to 2026-10-12 23:12:50 +0000 UTC (now=2025-10-13 00:21:11.943958114 +0000 UTC))" 2025-10-13T00:21:20.413104855+00:00 stderr F I1013 00:21:20.413065 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-10-13T00:21:20.606668961+00:00 stderr F I1013 00:21:20.606603 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-10-13T00:21:20.608869070+00:00 stderr F I1013 00:21:20.608805 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-10-13T00:21:20.611551312+00:00 stderr F I1013 00:21:20.611509 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-10-13T00:21:20.611551312+00:00 stderr F I1013 00:21:20.611524 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0018b3080 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-10-13T00:21:20.615877768+00:00 stderr F I1013 00:21:20.615828 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-10-13T00:21:20.615877768+00:00 stderr F I1013 00:21:20.615850 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-10-13T00:21:20.615877768+00:00 stderr F I1013 
00:21:20.615858 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-10-13T00:21:20.618354485+00:00 stderr F I1013 00:21:20.618288 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:21:20.618354485+00:00 stderr F I1013 00:21:20.618306 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:21:20.618354485+00:00 stderr F I1013 00:21:20.618311 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-10-13T00:21:20.618354485+00:00 stderr F I1013 00:21:20.618318 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:21:20.618402986+00:00 stderr F I1013 00:21:20.618373 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-10-13T00:21:20.630250895+00:00 stderr F I1013 00:21:20.630199 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:21:20.642247257+00:00 stderr F I1013 00:21:20.642192 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-10-13T00:21:20.642294839+00:00 stderr F I1013 00:21:20.642255 1 log.go:245] Starting render phase 2025-10-13T00:21:20.644416916+00:00 stderr F I1013 00:21:20.644381 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:21:20.666381686+00:00 stderr F I1013 00:21:20.666284 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-10-13T00:21:20.703891105+00:00 stderr F I1013 00:21:20.703752 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-10-13T00:21:20.703891105+00:00 stderr F I1013 00:21:20.703790 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-10-13T00:21:20.703891105+00:00 stderr F I1013 00:21:20.703817 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-10-13T00:21:20.703891105+00:00 stderr F I1013 00:21:20.703855 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-10-13T00:21:20.712652231+00:00 stderr F I1013 00:21:20.712581 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-10-13T00:21:20.712652231+00:00 stderr F I1013 00:21:20.712604 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-10-13T00:21:20.721224921+00:00 stderr F I1013 00:21:20.721179 1 log.go:245] Render phase done, rendered 112 objects 2025-10-13T00:21:20.777288309+00:00 stderr F I1013 00:21:20.777197 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-10-13T00:21:20.782118659+00:00 stderr F I1013 00:21:20.781988 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-10-13T00:21:20.782814658+00:00 stderr F I1013 00:21:20.782468 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-10-13T00:21:20.791461350+00:00 stderr F I1013 00:21:20.791410 1 
log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-10-13T00:21:20.791461350+00:00 stderr F I1013 00:21:20.791454 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-10-13T00:21:20.797120112+00:00 stderr F I1013 00:21:20.797083 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-10-13T00:21:20.797203254+00:00 stderr F I1013 00:21:20.797190 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-10-13T00:21:20.802428725+00:00 stderr F I1013 00:21:20.802411 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-10-13T00:21:20.802480366+00:00 stderr F I1013 00:21:20.802468 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-10-13T00:21:20.805850607+00:00 stderr F I1013 00:21:20.805817 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-10-13T00:21:20.805850607+00:00 stderr F I1013 00:21:20.805839 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-10-13T00:21:20.809418273+00:00 stderr F I1013 00:21:20.809366 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-10-13T00:21:20.809521296+00:00 stderr F I1013 00:21:20.809507 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-10-13T00:21:20.812754933+00:00 stderr F I1013 00:21:20.812737 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-10-13T00:21:20.812804724+00:00 stderr F I1013 00:21:20.812795 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-10-13T00:21:20.815453135+00:00 stderr F I1013 00:21:20.815436 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-10-13T00:21:20.815516377+00:00 stderr F I1013 00:21:20.815506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-10-13T00:21:20.818788695+00:00 stderr F I1013 00:21:20.818771 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-10-13T00:21:20.818838186+00:00 stderr F I1013 00:21:20.818827 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-10-13T00:21:20.837251101+00:00 stderr F I1013 00:21:20.837197 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-10-13T00:21:20.837251101+00:00 stderr F I1013 00:21:20.837243 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-10-13T00:21:21.040701713+00:00 stderr F I1013 00:21:21.040520 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-10-13T00:21:21.040701713+00:00 stderr F I1013 00:21:21.040560 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-10-13T00:21:21.237981708+00:00 stderr F I1013 00:21:21.237940 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-10-13T00:21:21.238075901+00:00 stderr F I1013 00:21:21.238065 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-10-13T00:21:21.438749337+00:00 stderr F I1013 00:21:21.438682 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-10-13T00:21:21.438784138+00:00 stderr F I1013 00:21:21.438744 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-10-13T00:21:21.639995189+00:00 stderr F I1013 00:21:21.639900 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-10-13T00:21:21.639995189+00:00 stderr F I1013 00:21:21.639961 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-10-13T00:21:21.838713653+00:00 stderr F I1013 00:21:21.838605 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-10-13T00:21:21.838713653+00:00 stderr F I1013 00:21:21.838668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-10-13T00:21:22.037567241+00:00 stderr F I1013 00:21:22.037495 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-10-13T00:21:22.037567241+00:00 stderr F I1013 00:21:22.037543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-10-13T00:21:22.237284221+00:00 stderr F I1013 00:21:22.237200 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-10-13T00:21:22.237284221+00:00 stderr F I1013 00:21:22.237241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-10-13T00:21:22.440044664+00:00 stderr F I1013 00:21:22.439453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-10-13T00:21:22.440044664+00:00 stderr F I1013 00:21:22.439501 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-10-13T00:21:22.637660198+00:00 stderr F I1013 00:21:22.637607 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-10-13T00:21:22.637660198+00:00 stderr F I1013 00:21:22.637650 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-10-13T00:21:22.838906079+00:00 stderr F I1013 00:21:22.838832 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-10-13T00:21:22.838906079+00:00 stderr F I1013 00:21:22.838885 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-10-13T00:21:23.038984119+00:00 stderr F I1013 00:21:23.038822 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-10-13T00:21:23.038984119+00:00 stderr F I1013 00:21:23.038883 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-10-13T00:21:23.246055377+00:00 stderr F I1013 00:21:23.245969 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 
2025-10-13T00:21:23.246055377+00:00 stderr F I1013 00:21:23.246011 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-10-13T00:21:23.449912679+00:00 stderr F I1013 00:21:23.449848 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-10-13T00:21:23.449912679+00:00 stderr F I1013 00:21:23.449903 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-10-13T00:21:23.638128571+00:00 stderr F I1013 00:21:23.638068 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-10-13T00:21:23.638174672+00:00 stderr F I1013 00:21:23.638121 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-10-13T00:21:23.841392377+00:00 stderr F I1013 00:21:23.841283 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-10-13T00:21:23.841427638+00:00 stderr F I1013 00:21:23.841392 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-10-13T00:21:24.040618465+00:00 stderr F I1013 00:21:24.039340 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-10-13T00:21:24.040618465+00:00 stderr F I1013 00:21:24.039878 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-10-13T00:21:24.241542088+00:00 stderr F I1013 00:21:24.241487 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-10-13T00:21:24.241542088+00:00 stderr F I1013 00:21:24.241531 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-10-13T00:21:24.439287686+00:00 stderr F I1013 00:21:24.439211 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-10-13T00:21:24.439287686+00:00 stderr F I1013 00:21:24.439254 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-10-13T00:21:24.640645061+00:00 stderr F I1013 00:21:24.640523 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-10-13T00:21:24.640645061+00:00 stderr F I1013 00:21:24.640572 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:21:24.838478332+00:00 stderr F I1013 00:21:24.838409 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:21:24.838478332+00:00 stderr F I1013 00:21:24.838448 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:21:25.037902854+00:00 stderr F I1013 00:21:25.037833 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:21:25.038027668+00:00 stderr F I1013 00:21:25.038012 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-10-13T00:21:25.244138791+00:00 stderr F I1013 00:21:25.244054 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-10-13T00:21:25.244138791+00:00 stderr F 
I1013 00:21:25.244105 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-10-13T00:21:25.443168013+00:00 stderr F I1013 00:21:25.443105 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-10-13T00:21:25.443168013+00:00 stderr F I1013 00:21:25.443146 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-10-13T00:21:25.639559114+00:00 stderr F I1013 00:21:25.639469 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-10-13T00:21:25.639615506+00:00 stderr F I1013 00:21:25.639562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-10-13T00:21:25.839166342+00:00 stderr F I1013 00:21:25.839060 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-10-13T00:21:25.839166342+00:00 stderr F I1013 00:21:25.839141 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-10-13T00:21:26.039222302+00:00 stderr F I1013 00:21:26.039154 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-10-13T00:21:26.039222302+00:00 stderr F I1013 00:21:26.039194 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-10-13T00:21:26.244253785+00:00 stderr F I1013 00:21:26.244188 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-10-13T00:21:26.244253785+00:00 stderr F I1013 00:21:26.244240 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-10-13T00:21:26.438638872+00:00 stderr F I1013 00:21:26.438584 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-10-13T00:21:26.438739615+00:00 stderr F I1013 00:21:26.438728 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:21:26.638618470+00:00 stderr F I1013 00:21:26.638532 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:21:26.638618470+00:00 stderr F I1013 00:21:26.638577 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:21:26.838688180+00:00 stderr F I1013 00:21:26.838609 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:21:26.838688180+00:00 stderr F I1013 00:21:26.838653 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-10-13T00:21:27.038703129+00:00 stderr F I1013 00:21:27.038615 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-10-13T00:21:27.038703129+00:00 stderr F I1013 00:21:27.038665 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-10-13T00:21:27.238645686+00:00 stderr F I1013 00:21:27.238592 1 log.go:245] Apply / Create of (/v1, 
Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-10-13T00:21:27.238645686+00:00 stderr F I1013 00:21:27.238636 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-10-13T00:21:27.442607241+00:00 stderr F I1013 00:21:27.442543 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-10-13T00:21:27.442607241+00:00 stderr F I1013 00:21:27.442598 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-10-13T00:21:27.644258004+00:00 stderr F I1013 00:21:27.644213 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-10-13T00:21:27.644380277+00:00 stderr F I1013 00:21:27.644364 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-10-13T00:21:27.847160610+00:00 stderr F I1013 00:21:27.847116 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-10-13T00:21:27.847266573+00:00 stderr F I1013 00:21:27.847252 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-10-13T00:21:28.045303879+00:00 stderr F I1013 00:21:28.045177 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-10-13T00:21:28.045422252+00:00 stderr F I1013 00:21:28.045409 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-10-13T00:21:28.241374042+00:00 stderr F I1013 00:21:28.241297 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-10-13T00:21:28.241407382+00:00 stderr F I1013 00:21:28.241370 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:21:28.483804261+00:00 stderr F I1013 00:21:28.483736 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:21:28.483804261+00:00 stderr F I1013 00:21:28.483788 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:21:28.663468132+00:00 stderr F I1013 00:21:28.663425 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:21:28.663567385+00:00 stderr F I1013 00:21:28.663554 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:28.839161597+00:00 stderr F I1013 00:21:28.838624 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:21:28.839277100+00:00 stderr F I1013 00:21:28.839258 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-10-13T00:21:29.038888638+00:00 stderr F I1013 00:21:29.038793 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:21:29.038888638+00:00 stderr F I1013 00:21:29.038849 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-10-13T00:21:29.241733613+00:00 stderr F I1013 00:21:29.241655 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-10-13T00:21:29.241733613+00:00 stderr F I1013 00:21:29.241703 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-10-13T00:21:29.439991674+00:00 stderr F I1013 00:21:29.439500 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:21:29.439991674+00:00 stderr F I1013 00:21:29.439548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-10-13T00:21:29.637450214+00:00 stderr F I1013 00:21:29.637405 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-10-13T00:21:29.637555247+00:00 stderr F I1013 00:21:29.637541 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-10-13T00:21:29.839494697+00:00 stderr F I1013 00:21:29.839417 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-10-13T00:21:29.839494697+00:00 stderr F I1013 00:21:29.839485 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-10-13T00:21:30.040065400+00:00 stderr F I1013 00:21:30.039989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-10-13T00:21:30.040065400+00:00 stderr F I1013 00:21:30.040031 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-10-13T00:21:30.237848358+00:00 stderr F I1013 00:21:30.237791 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-10-13T00:21:30.237953131+00:00 stderr F I1013 00:21:30.237928 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:30.441531536+00:00 stderr F I1013 00:21:30.441131 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:21:30.441627559+00:00 stderr F I1013 00:21:30.441612 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:21:30.639121749+00:00 stderr F I1013 00:21:30.639061 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:21:30.639121749+00:00 stderr F I1013 00:21:30.639102 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:21:30.837879564+00:00 stderr F I1013 00:21:30.837821 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:21:30.837879564+00:00 stderr F I1013 00:21:30.837873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:21:31.038101479+00:00 stderr F I1013 00:21:31.038030 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:21:31.038101479+00:00 stderr F I1013 00:21:31.038074 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:21:31.238610291+00:00 stderr F I1013 00:21:31.238544 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:21:31.238610291+00:00 stderr F I1013 00:21:31.238595 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-10-13T00:21:31.438359633+00:00 stderr F I1013 00:21:31.438303 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-10-13T00:21:31.438451245+00:00 stderr F I1013 00:21:31.438440 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-10-13T00:21:31.640632832+00:00 stderr F I1013 00:21:31.640576 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-10-13T00:21:31.640662183+00:00 stderr F I1013 00:21:31.640648 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-10-13T00:21:31.840379714+00:00 stderr F I1013 00:21:31.839580 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-10-13T00:21:31.840439645+00:00 stderr F I1013 00:21:31.840394 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-10-13T00:21:32.038316997+00:00 stderr F I1013 00:21:32.038246 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-10-13T00:21:32.038316997+00:00 stderr F I1013 00:21:32.038293 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-10-13T00:21:32.243129835+00:00 stderr F I1013 00:21:32.243054 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-10-13T00:21:32.244422499+00:00 stderr F I1013 00:21:32.244399 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-10-13T00:21:32.442123846+00:00 stderr F I1013 00:21:32.442081 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-10-13T00:21:32.442217778+00:00 stderr F I1013 00:21:32.442203 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-10-13T00:21:32.641493287+00:00 stderr F I1013 00:21:32.641451 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) 
openshift-ovn-kubernetes/networking-rules was successful 2025-10-13T00:21:32.641603550+00:00 stderr F I1013 00:21:32.641587 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-10-13T00:21:32.840303404+00:00 stderr F I1013 00:21:32.840179 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-10-13T00:21:32.840303404+00:00 stderr F I1013 00:21:32.840231 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-10-13T00:21:33.040647852+00:00 stderr F I1013 00:21:33.040588 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-10-13T00:21:33.040680503+00:00 stderr F I1013 00:21:33.040666 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:33.240370782+00:00 stderr F I1013 00:21:33.240295 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:21:33.240370782+00:00 stderr F I1013 00:21:33.240355 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-10-13T00:21:33.439103017+00:00 stderr F I1013 00:21:33.439048 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-10-13T00:21:33.439193609+00:00 stderr F I1013 00:21:33.439180 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:33.641226711+00:00 stderr F I1013 00:21:33.641167 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:21:33.641399836+00:00 stderr F I1013 00:21:33.641377 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:21:33.840725486+00:00 stderr F I1013 00:21:33.840669 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:21:33.840865580+00:00 stderr F I1013 00:21:33.840843 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:21:34.043537990+00:00 stderr F I1013 00:21:34.043448 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:21:34.043537990+00:00 stderr F I1013 00:21:34.043517 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-10-13T00:21:34.238619556+00:00 stderr F I1013 00:21:34.238460 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-10-13T00:21:34.238619556+00:00 stderr F I1013 00:21:34.238509 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-10-13T00:21:34.438884802+00:00 stderr F I1013 00:21:34.438842 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-10-13T00:21:34.438984385+00:00 stderr F I1013 00:21:34.438973 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-10-13T00:21:34.646016922+00:00 stderr F I1013 
00:21:34.645949 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-10-13T00:21:34.646016922+00:00 stderr F I1013 00:21:34.645995 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-10-13T00:21:34.860633493+00:00 stderr F I1013 00:21:34.860581 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:34.860633493+00:00 stderr F I1013 00:21:34.860618 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:34.860946022+00:00 stderr F I1013 00:21:34.860928 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-10-13T00:21:34.861006914+00:00 stderr F I1013 00:21:34.860995 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-10-13T00:21:34.899686944+00:00 stderr F I1013 00:21:34.899490 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:34.899686944+00:00 stderr F I1013 00:21:34.899526 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:34.916634620+00:00 stderr F I1013 00:21:34.916573 1 log.go:245] Network operator config updated with conditions: 2025-10-13T00:21:34.916634620+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:34.916634620+00:00 stderr F status: "False" 2025-10-13T00:21:34.916634620+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:34.916634620+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:34.916634620+00:00 stderr F status: "False" 2025-10-13T00:21:34.916634620+00:00 stderr F type: Degraded 2025-10-13T00:21:34.916634620+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:34.916634620+00:00 stderr F status: "True" 2025-10-13T00:21:34.916634620+00:00 stderr F type: Upgradeable 2025-10-13T00:21:34.916634620+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:34.916634620+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is being processed 2025-10-13T00:21:34.916634620+00:00 stderr F (generation 3, observed generation 2) 2025-10-13T00:21:34.916634620+00:00 stderr F reason: Deploying 2025-10-13T00:21:34.916634620+00:00 stderr F status: "True" 2025-10-13T00:21:34.916634620+00:00 stderr F type: Progressing 2025-10-13T00:21:34.916634620+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:34.916634620+00:00 stderr F status: "True" 2025-10-13T00:21:34.916634620+00:00 stderr F type: Available 2025-10-13T00:21:34.917393400+00:00 stderr F I1013 00:21:34.916872 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:21:34.934946302+00:00 stderr F I1013 00:21:34.934894 1 log.go:245] ClusterOperator config status updated with conditions: 2025-10-13T00:21:34.934946302+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:34.934946302+00:00 stderr F status: "False" 2025-10-13T00:21:34.934946302+00:00 stderr F type: Degraded 2025-10-13T00:21:34.934946302+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:34.934946302+00:00 stderr F status: "True" 2025-10-13T00:21:34.934946302+00:00 stderr F type: Upgradeable 2025-10-13T00:21:34.934946302+00:00 stderr F - 
lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:34.934946302+00:00 stderr F status: "False" 2025-10-13T00:21:34.934946302+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:34.934946302+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:34.934946302+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is being processed 2025-10-13T00:21:34.934946302+00:00 stderr F (generation 3, observed generation 2) 2025-10-13T00:21:34.934946302+00:00 stderr F reason: Deploying 2025-10-13T00:21:34.934946302+00:00 stderr F status: "True" 2025-10-13T00:21:34.934946302+00:00 stderr F type: Progressing 2025-10-13T00:21:34.934946302+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:34.934946302+00:00 stderr F status: "True" 2025-10-13T00:21:34.934946302+00:00 stderr F type: Available 2025-10-13T00:21:35.017457631+00:00 stderr F I1013 00:21:35.017390 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:21:35.018457988+00:00 stderr F I1013 00:21:35.018422 1 log.go:245] Network operator config updated with conditions: 2025-10-13T00:21:35.018457988+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:35.018457988+00:00 stderr F status: "False" 2025-10-13T00:21:35.018457988+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:35.018457988+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:35.018457988+00:00 stderr F status: "False" 2025-10-13T00:21:35.018457988+00:00 stderr F type: Degraded 2025-10-13T00:21:35.018457988+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:35.018457988+00:00 stderr F status: "True" 2025-10-13T00:21:35.018457988+00:00 stderr F type: Upgradeable 2025-10-13T00:21:35.018457988+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:35.018457988+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out 2025-10-13T00:21:35.018457988+00:00 stderr F (0 out of 1 updated) 2025-10-13T00:21:35.018457988+00:00 stderr F reason: Deploying 2025-10-13T00:21:35.018457988+00:00 stderr F status: "True" 2025-10-13T00:21:35.018457988+00:00 stderr F type: Progressing 2025-10-13T00:21:35.018457988+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:35.018457988+00:00 stderr F status: "True" 2025-10-13T00:21:35.018457988+00:00 stderr F type: Available 2025-10-13T00:21:35.038554168+00:00 stderr F I1013 00:21:35.038510 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-10-13T00:21:35.038586539+00:00 stderr F I1013 00:21:35.038559 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:21:35.072104310+00:00 stderr F I1013 00:21:35.070891 1 log.go:245] ClusterOperator config status updated with conditions: 2025-10-13T00:21:35.072104310+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:35.072104310+00:00 stderr F status: "False" 2025-10-13T00:21:35.072104310+00:00 stderr F type: Degraded 2025-10-13T00:21:35.072104310+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:35.072104310+00:00 stderr F status: "True" 2025-10-13T00:21:35.072104310+00:00 stderr F type: Upgradeable 2025-10-13T00:21:35.072104310+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:35.072104310+00:00 stderr F status: "False" 
2025-10-13T00:21:35.072104310+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:35.072104310+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:35.072104310+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out 2025-10-13T00:21:35.072104310+00:00 stderr F (0 out of 1 updated) 2025-10-13T00:21:35.072104310+00:00 stderr F reason: Deploying 2025-10-13T00:21:35.072104310+00:00 stderr F status: "True" 2025-10-13T00:21:35.072104310+00:00 stderr F type: Progressing 2025-10-13T00:21:35.072104310+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:35.072104310+00:00 stderr F status: "True" 2025-10-13T00:21:35.072104310+00:00 stderr F type: Available 2025-10-13T00:21:35.213983186+00:00 stderr F I1013 00:21:35.213918 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:21:35.213983186+00:00 stderr F I1013 00:21:35.213943 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:21:35.237787466+00:00 stderr F I1013 00:21:35.237463 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:21:35.237787466+00:00 stderr F I1013 00:21:35.237771 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:21:35.305844616+00:00 stderr F I1013 00:21:35.305755 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:35.305844616+00:00 stderr F I1013 00:21:35.305783 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:35.332351059+00:00 stderr F I1013 00:21:35.332260 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:35.332351059+00:00 stderr F I1013 00:21:35.332283 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:21:35.437496357+00:00 stderr F I1013 00:21:35.437353 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:21:35.437496357+00:00 stderr F I1013 00:21:35.437404 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:21:35.639056937+00:00 stderr F I1013 00:21:35.638987 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:21:35.639056937+00:00 stderr F I1013 00:21:35.639027 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-10-13T00:21:35.839077796+00:00 stderr F I1013 00:21:35.839017 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-10-13T00:21:35.839108107+00:00 stderr F I1013 00:21:35.839072 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-10-13T00:21:35.889881082+00:00 stderr F I1013 00:21:35.889825 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:21:35.893883800+00:00 stderr F I1013 00:21:35.893855 1 log.go:245] Network operator config updated with conditions: 
2025-10-13T00:21:35.893883800+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:35.893883800+00:00 stderr F status: "False" 2025-10-13T00:21:35.893883800+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:35.893883800+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:35.893883800+00:00 stderr F status: "False" 2025-10-13T00:21:35.893883800+00:00 stderr F type: Degraded 2025-10-13T00:21:35.893883800+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:35.893883800+00:00 stderr F status: "True" 2025-10-13T00:21:35.893883800+00:00 stderr F type: Upgradeable 2025-10-13T00:21:35.893883800+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:35.893883800+00:00 stderr F message: |- 2025-10-13T00:21:35.893883800+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out (0 out of 1 updated) 2025-10-13T00:21:35.893883800+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-10-13T00:21:35.893883800+00:00 stderr F reason: Deploying 2025-10-13T00:21:35.893883800+00:00 stderr F status: "True" 2025-10-13T00:21:35.893883800+00:00 stderr F type: Progressing 2025-10-13T00:21:35.893883800+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:35.893883800+00:00 stderr F status: "True" 2025-10-13T00:21:35.893883800+00:00 stderr F type: Available 2025-10-13T00:21:36.038292193+00:00 stderr F I1013 00:21:36.038240 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-10-13T00:21:36.038339575+00:00 stderr F I1013 00:21:36.038288 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-10-13T00:21:36.245420633+00:00 stderr F I1013 00:21:36.243617 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-10-13T00:21:36.245420633+00:00 stderr F I1013 00:21:36.243673 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-10-13T00:21:36.282532531+00:00 stderr F I1013 00:21:36.280201 1 log.go:245] ClusterOperator config status updated with conditions: 2025-10-13T00:21:36.282532531+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:36.282532531+00:00 stderr F status: "False" 2025-10-13T00:21:36.282532531+00:00 stderr F type: Degraded 2025-10-13T00:21:36.282532531+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:36.282532531+00:00 stderr F status: "True" 2025-10-13T00:21:36.282532531+00:00 stderr F type: Upgradeable 2025-10-13T00:21:36.282532531+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:36.282532531+00:00 stderr F status: "False" 2025-10-13T00:21:36.282532531+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:36.282532531+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:36.282532531+00:00 stderr F message: |- 2025-10-13T00:21:36.282532531+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out (0 out of 1 updated) 2025-10-13T00:21:36.282532531+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-10-13T00:21:36.282532531+00:00 stderr F reason: Deploying 2025-10-13T00:21:36.282532531+00:00 stderr F status: "True" 2025-10-13T00:21:36.282532531+00:00 stderr F type: 
Progressing 2025-10-13T00:21:36.282532531+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:36.282532531+00:00 stderr F status: "True" 2025-10-13T00:21:36.282532531+00:00 stderr F type: Available 2025-10-13T00:21:36.442693839+00:00 stderr F I1013 00:21:36.442633 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:21:36.442693839+00:00 stderr F I1013 00:21:36.442678 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-10-13T00:21:36.639973434+00:00 stderr F I1013 00:21:36.639902 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:21:36.639973434+00:00 stderr F I1013 00:21:36.639950 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-10-13T00:21:36.838504403+00:00 stderr F I1013 00:21:36.838042 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:21:36.838504403+00:00 stderr F I1013 00:21:36.838092 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:21:37.039429626+00:00 stderr F I1013 00:21:37.039361 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:21:37.039429626+00:00 stderr F I1013 00:21:37.039408 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:21:37.096765547+00:00 stderr F I1013 00:21:37.096582 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:21:37.096941592+00:00 stderr F I1013 00:21:37.096874 1 log.go:245] Network operator config updated with conditions: 2025-10-13T00:21:37.096941592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:37.096941592+00:00 stderr F status: "False" 2025-10-13T00:21:37.096941592+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:37.096941592+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:37.096941592+00:00 stderr F status: "False" 2025-10-13T00:21:37.096941592+00:00 stderr F type: Degraded 2025-10-13T00:21:37.096941592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:37.096941592+00:00 stderr F status: "True" 2025-10-13T00:21:37.096941592+00:00 stderr F type: Upgradeable 2025-10-13T00:21:37.096941592+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:37.096941592+00:00 stderr F message: |- 2025-10-13T00:21:37.096941592+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-10-13T00:21:37.096941592+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-10-13T00:21:37.096941592+00:00 stderr F reason: Deploying 2025-10-13T00:21:37.096941592+00:00 stderr F status: "True" 2025-10-13T00:21:37.096941592+00:00 stderr F type: Progressing 2025-10-13T00:21:37.096941592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:37.096941592+00:00 stderr F status: "True" 2025-10-13T00:21:37.096941592+00:00 stderr F type: Available 2025-10-13T00:21:37.238021476+00:00 stderr F I1013 00:21:37.237954 1 log.go:245] Apply / 
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:21:37.238021476+00:00 stderr F I1013 00:21:37.238008 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-10-13T00:21:37.444515529+00:00 stderr F I1013 00:21:37.444472 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:21:37.444515529+00:00 stderr F I1013 00:21:37.444509 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-10-13T00:21:37.474991878+00:00 stderr F I1013 00:21:37.473701 1 log.go:245] ClusterOperator config status updated with conditions: 2025-10-13T00:21:37.474991878+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:37.474991878+00:00 stderr F status: "False" 2025-10-13T00:21:37.474991878+00:00 stderr F type: Degraded 2025-10-13T00:21:37.474991878+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:37.474991878+00:00 stderr F status: "True" 2025-10-13T00:21:37.474991878+00:00 stderr F type: Upgradeable 2025-10-13T00:21:37.474991878+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:37.474991878+00:00 stderr F status: "False" 2025-10-13T00:21:37.474991878+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:37.474991878+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:37.474991878+00:00 stderr F message: |- 2025-10-13T00:21:37.474991878+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-10-13T00:21:37.474991878+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-10-13T00:21:37.474991878+00:00 stderr F reason: Deploying 2025-10-13T00:21:37.474991878+00:00 stderr F status: "True" 2025-10-13T00:21:37.474991878+00:00 stderr F type: Progressing 2025-10-13T00:21:37.474991878+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:37.474991878+00:00 stderr F status: "True" 2025-10-13T00:21:37.474991878+00:00 stderr F type: Available 2025-10-13T00:21:37.639062151+00:00 stderr F I1013 00:21:37.639005 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:21:37.639062151+00:00 stderr F I1013 00:21:37.639051 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-10-13T00:21:37.841423303+00:00 stderr F I1013 00:21:37.840675 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-10-13T00:21:37.841423303+00:00 stderr F I1013 00:21:37.840720 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-10-13T00:21:38.037810404+00:00 stderr F I1013 00:21:38.037744 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-10-13T00:21:38.037810404+00:00 stderr F I1013 00:21:38.037791 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-10-13T00:21:38.238857080+00:00 stderr F I1013 00:21:38.238792 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was 
successful 2025-10-13T00:21:38.238857080+00:00 stderr F I1013 00:21:38.238837 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-10-13T00:21:38.439090285+00:00 stderr F I1013 00:21:38.439026 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:21:38.439129996+00:00 stderr F I1013 00:21:38.439101 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-10-13T00:21:38.638344663+00:00 stderr F I1013 00:21:38.638239 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-10-13T00:21:38.638344663+00:00 stderr F I1013 00:21:38.638297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-10-13T00:21:38.839247076+00:00 stderr F I1013 00:21:38.839178 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-10-13T00:21:38.839247076+00:00 stderr F I1013 00:21:38.839216 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:21:39.039191463+00:00 stderr F I1013 00:21:39.039134 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:21:39.039191463+00:00 stderr F I1013 00:21:39.039182 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:21:39.239397027+00:00 stderr F I1013 00:21:39.239208 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:21:39.239397027+00:00 stderr F I1013 00:21:39.239251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-10-13T00:21:39.438797979+00:00 stderr F I1013 00:21:39.438734 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-10-13T00:21:39.438797979+00:00 stderr F I1013 00:21:39.438779 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-10-13T00:21:39.638559611+00:00 stderr F I1013 00:21:39.638503 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-10-13T00:21:39.638559611+00:00 stderr F I1013 00:21:39.638546 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-10-13T00:21:39.840136912+00:00 stderr F I1013 00:21:39.840069 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:21:39.840187844+00:00 stderr F I1013 00:21:39.840129 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-10-13T00:21:40.039312119+00:00 stderr F I1013 00:21:40.039223 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) 
/network-node-identity.openshift.io was successful 2025-10-13T00:21:40.039312119+00:00 stderr F I1013 00:21:40.039276 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-10-13T00:21:40.246743807+00:00 stderr F I1013 00:21:40.246671 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:21:40.246743807+00:00 stderr F I1013 00:21:40.246725 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-10-13T00:21:40.442237944+00:00 stderr F I1013 00:21:40.442137 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-10-13T00:21:40.442237944+00:00 stderr F I1013 00:21:40.442230 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-10-13T00:21:40.648079619+00:00 stderr F I1013 00:21:40.648000 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-10-13T00:21:40.648079619+00:00 stderr F I1013 00:21:40.648043 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-10-13T00:21:40.839720942+00:00 stderr F I1013 00:21:40.839609 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:21:40.839720942+00:00 stderr F I1013 00:21:40.839692 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-10-13T00:21:41.037664316+00:00 stderr F I1013 00:21:41.037581 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-10-13T00:21:41.037664316+00:00 stderr F I1013 00:21:41.037624 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-10-13T00:21:41.238528267+00:00 stderr F I1013 00:21:41.238465 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-10-13T00:21:41.238576359+00:00 stderr F I1013 00:21:41.238523 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-10-13T00:21:41.441572137+00:00 stderr F I1013 00:21:41.441491 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:21:41.456920280+00:00 stderr F I1013 00:21:41.456849 1 log.go:245] Operconfig Controller complete 2025-10-13T00:21:47.314023117+00:00 stderr F I1013 00:21:47.313961 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:21:47.314023117+00:00 stderr F I1013 00:21:47.313996 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:21:47.362487071+00:00 stderr F I1013 00:21:47.362420 1 log.go:245] Network operator config updated with conditions: 2025-10-13T00:21:47.362487071+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:47.362487071+00:00 stderr F status: "False" 2025-10-13T00:21:47.362487071+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:47.362487071+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:47.362487071+00:00 
stderr F status: "False" 2025-10-13T00:21:47.362487071+00:00 stderr F type: Degraded 2025-10-13T00:21:47.362487071+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:47.362487071+00:00 stderr F status: "True" 2025-10-13T00:21:47.362487071+00:00 stderr F type: Upgradeable 2025-10-13T00:21:47.362487071+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:47.362487071+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2025-10-13T00:21:47.362487071+00:00 stderr F 1 nodes) 2025-10-13T00:21:47.362487071+00:00 stderr F reason: Deploying 2025-10-13T00:21:47.362487071+00:00 stderr F status: "True" 2025-10-13T00:21:47.362487071+00:00 stderr F type: Progressing 2025-10-13T00:21:47.362487071+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:47.362487071+00:00 stderr F status: "True" 2025-10-13T00:21:47.362487071+00:00 stderr F type: Available 2025-10-13T00:21:47.363717204+00:00 stderr F I1013 00:21:47.363683 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:21:47.378733907+00:00 stderr F I1013 00:21:47.378677 1 log.go:245] ClusterOperator config status updated with conditions: 2025-10-13T00:21:47.378733907+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:21:47.378733907+00:00 stderr F status: "False" 2025-10-13T00:21:47.378733907+00:00 stderr F type: Degraded 2025-10-13T00:21:47.378733907+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:47.378733907+00:00 stderr F status: "True" 2025-10-13T00:21:47.378733907+00:00 stderr F type: Upgradeable 2025-10-13T00:21:47.378733907+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:21:47.378733907+00:00 stderr F status: "False" 2025-10-13T00:21:47.378733907+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:21:47.378733907+00:00 stderr F - lastTransitionTime: "2025-10-13T00:21:34Z" 2025-10-13T00:21:47.378733907+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2025-10-13T00:21:47.378733907+00:00 stderr F 1 nodes) 2025-10-13T00:21:47.378733907+00:00 stderr F reason: Deploying 2025-10-13T00:21:47.378733907+00:00 stderr F status: "True" 2025-10-13T00:21:47.378733907+00:00 stderr F type: Progressing 2025-10-13T00:21:47.378733907+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:21:47.378733907+00:00 stderr F status: "True" 2025-10-13T00:21:47.378733907+00:00 stderr F type: Available 2025-10-13T00:22:02.526003652+00:00 stderr F I1013 00:22:02.525928 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-10-13T00:22:05.714982930+00:00 stderr F I1013 00:22:05.714879 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:22:05.714982930+00:00 stderr F I1013 00:22:05.714905 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:22:05.752549570+00:00 stderr F I1013 00:22:05.752483 1 log.go:245] Network operator config updated with conditions: 2025-10-13T00:22:05.752549570+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:22:05.752549570+00:00 stderr F status: "False" 2025-10-13T00:22:05.752549570+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:22:05.752549570+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 
2025-10-13T00:22:05.752549570+00:00 stderr F status: "False" 2025-10-13T00:22:05.752549570+00:00 stderr F type: Degraded 2025-10-13T00:22:05.752549570+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:22:05.752549570+00:00 stderr F status: "True" 2025-10-13T00:22:05.752549570+00:00 stderr F type: Upgradeable 2025-10-13T00:22:05.752549570+00:00 stderr F - lastTransitionTime: "2025-10-13T00:22:05Z" 2025-10-13T00:22:05.752549570+00:00 stderr F status: "False" 2025-10-13T00:22:05.752549570+00:00 stderr F type: Progressing 2025-10-13T00:22:05.752549570+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:22:05.752549570+00:00 stderr F status: "True" 2025-10-13T00:22:05.752549570+00:00 stderr F type: Available 2025-10-13T00:22:05.752598922+00:00 stderr F I1013 00:22:05.752579 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:22:05.770687238+00:00 stderr F I1013 00:22:05.770624 1 log.go:245] ClusterOperator config status updated with conditions: 2025-10-13T00:22:05.770687238+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-10-13T00:22:05.770687238+00:00 stderr F status: "False" 2025-10-13T00:22:05.770687238+00:00 stderr F type: Degraded 2025-10-13T00:22:05.770687238+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:22:05.770687238+00:00 stderr F status: "True" 2025-10-13T00:22:05.770687238+00:00 stderr F type: Upgradeable 2025-10-13T00:22:05.770687238+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-10-13T00:22:05.770687238+00:00 stderr F status: "False" 2025-10-13T00:22:05.770687238+00:00 stderr F type: ManagementStateDegraded 2025-10-13T00:22:05.770687238+00:00 stderr F - lastTransitionTime: "2025-10-13T00:22:05Z" 2025-10-13T00:22:05.770687238+00:00 stderr F status: "False" 2025-10-13T00:22:05.770687238+00:00 stderr F type: Progressing 2025-10-13T00:22:05.770687238+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-10-13T00:22:05.770687238+00:00 stderr F status: "True" 2025-10-13T00:22:05.770687238+00:00 stderr F type: Available 2025-10-13T00:22:28.553861219+00:00 stderr F I1013 00:22:28.553782 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-10-13T00:22:28.877456901+00:00 stderr F I1013 00:22:28.877397 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-10-13T00:22:28.879244339+00:00 stderr F I1013 00:22:28.879208 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-10-13T00:22:28.880756960+00:00 stderr F I1013 00:22:28.880725 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-10-13T00:22:28.880756960+00:00 stderr F I1013 00:22:28.880740 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003cdf380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-10-13T00:22:28.883975276+00:00 stderr F I1013 00:22:28.883936 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-10-13T00:22:28.883975276+00:00 stderr F I1013 00:22:28.883953 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout 
complete 2025-10-13T00:22:28.883975276+00:00 stderr F I1013 00:22:28.883960 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-10-13T00:22:28.885723213+00:00 stderr F I1013 00:22:28.885689 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 3 -> 3 2025-10-13T00:22:28.885723213+00:00 stderr F I1013 00:22:28.885703 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:22:28.885723213+00:00 stderr F I1013 00:22:28.885707 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 3 -> 3 2025-10-13T00:22:28.885723213+00:00 stderr F I1013 00:22:28.885712 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-10-13T00:22:28.885738084+00:00 stderr F I1013 00:22:28.885731 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-10-13T00:22:28.893156973+00:00 stderr F I1013 00:22:28.893136 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-10-13T00:22:28.903342897+00:00 stderr F I1013 00:22:28.903302 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-10-13T00:22:28.903363278+00:00 stderr F I1013 00:22:28.903348 1 log.go:245] Starting render phase 2025-10-13T00:22:28.913535681+00:00 stderr F I1013 00:22:28.913492 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-10-13T00:22:28.945368577+00:00 stderr F I1013 00:22:28.943687 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-10-13T00:22:28.945368577+00:00 stderr F I1013 00:22:28.943707 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-10-13T00:22:28.945368577+00:00 stderr F I1013 00:22:28.943725 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-10-13T00:22:28.945368577+00:00 stderr F I1013 00:22:28.943746 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-10-13T00:22:28.951902683+00:00 stderr F I1013 00:22:28.951204 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-10-13T00:22:28.951902683+00:00 stderr F I1013 00:22:28.951219 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-10-13T00:22:28.956288651+00:00 stderr F I1013 00:22:28.956253 1 log.go:245] Render phase done, rendered 112 objects 2025-10-13T00:22:28.965742275+00:00 stderr F I1013 00:22:28.965697 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-10-13T00:22:28.969401354+00:00 stderr F I1013 00:22:28.969355 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-10-13T00:22:28.969418704+00:00 stderr F I1013 00:22:28.969401 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-10-13T00:22:28.975596570+00:00 stderr F I1013 00:22:28.975550 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/network-attachment-definitions.k8s.cni.cncf.io was successful 2025-10-13T00:22:28.975596570+00:00 stderr F I1013 00:22:28.975579 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-10-13T00:22:28.983371549+00:00 stderr F I1013 00:22:28.980856 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-10-13T00:22:28.983371549+00:00 stderr F I1013 00:22:28.980876 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-10-13T00:22:28.987186052+00:00 stderr F I1013 00:22:28.987140 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-10-13T00:22:28.987186052+00:00 stderr F I1013 00:22:28.987163 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-10-13T00:22:28.996946644+00:00 stderr F I1013 00:22:28.996898 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-10-13T00:22:28.996946644+00:00 stderr F I1013 00:22:28.996942 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-10-13T00:22:29.001019594+00:00 stderr F I1013 00:22:29.000979 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-10-13T00:22:29.001019594+00:00 stderr F I1013 00:22:29.001007 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-10-13T00:22:29.005782242+00:00 stderr F I1013 00:22:29.005728 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-10-13T00:22:29.005782242+00:00 stderr F I1013 00:22:29.005772 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-10-13T00:22:29.009541663+00:00 stderr F I1013 00:22:29.009516 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-10-13T00:22:29.009559554+00:00 stderr F I1013 00:22:29.009539 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-10-13T00:22:29.012904404+00:00 stderr F I1013 00:22:29.012867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-10-13T00:22:29.012919204+00:00 stderr F I1013 00:22:29.012910 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-10-13T00:22:29.100682334+00:00 stderr F I1013 00:22:29.100634 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-10-13T00:22:29.100682334+00:00 stderr F I1013 00:22:29.100675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-10-13T00:22:29.300293482+00:00 stderr F I1013 00:22:29.300241 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-10-13T00:22:29.300293482+00:00 stderr F I1013 00:22:29.300280 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-10-13T00:22:29.507039742+00:00 stderr F I1013 00:22:29.506915 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was 
successful 2025-10-13T00:22:29.507039742+00:00 stderr F I1013 00:22:29.506966 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-10-13T00:22:29.705515439+00:00 stderr F I1013 00:22:29.705168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-10-13T00:22:29.705515439+00:00 stderr F I1013 00:22:29.705207 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-10-13T00:22:29.899810474+00:00 stderr F I1013 00:22:29.899771 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-10-13T00:22:29.899897677+00:00 stderr F I1013 00:22:29.899887 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-10-13T00:22:30.104550450+00:00 stderr F I1013 00:22:30.103390 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-10-13T00:22:30.104550450+00:00 stderr F I1013 00:22:30.103447 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-10-13T00:22:30.301166378+00:00 stderr F I1013 00:22:30.301049 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-10-13T00:22:30.301166378+00:00 stderr F I1013 00:22:30.301139 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-10-13T00:22:30.500599501+00:00 stderr F I1013 00:22:30.500524 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-10-13T00:22:30.500599501+00:00 stderr F I1013 00:22:30.500582 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-10-13T00:22:30.699769657+00:00 stderr F I1013 00:22:30.699712 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-10-13T00:22:30.699816538+00:00 stderr F I1013 00:22:30.699773 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-10-13T00:22:30.901339467+00:00 stderr F I1013 00:22:30.901107 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-10-13T00:22:30.901368647+00:00 stderr F I1013 00:22:30.901350 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-10-13T00:22:31.102419214+00:00 stderr F I1013 00:22:31.102361 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-10-13T00:22:31.102419214+00:00 stderr F I1013 00:22:31.102409 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-10-13T00:22:31.302238828+00:00 stderr F I1013 00:22:31.302147 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-10-13T00:22:31.302238828+00:00 stderr F I1013 00:22:31.302213 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-10-13T00:22:31.510120638+00:00 stderr F I1013 00:22:31.509997 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-10-13T00:22:31.510120638+00:00 stderr F I1013 00:22:31.510086 1 log.go:245] 
reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-10-13T00:22:31.712285935+00:00 stderr F I1013 00:22:31.712224 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-10-13T00:22:31.712285935+00:00 stderr F I1013 00:22:31.712279 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-10-13T00:22:31.901558545+00:00 stderr F I1013 00:22:31.901484 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-10-13T00:22:31.901558545+00:00 stderr F I1013 00:22:31.901537 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-10-13T00:22:32.100436723+00:00 stderr F I1013 00:22:32.100367 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-10-13T00:22:32.100436723+00:00 stderr F I1013 00:22:32.100419 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-10-13T00:22:32.301249703+00:00 stderr F I1013 00:22:32.301169 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-10-13T00:22:32.301249703+00:00 stderr F I1013 00:22:32.301215 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-10-13T00:22:32.509784281+00:00 stderr F I1013 00:22:32.509689 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-10-13T00:22:32.509784281+00:00 stderr F I1013 00:22:32.509755 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-10-13T00:22:32.708694780+00:00 stderr F I1013 00:22:32.708591 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-10-13T00:22:32.708694780+00:00 stderr F I1013 00:22:32.708665 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-10-13T00:22:32.908462682+00:00 stderr F I1013 00:22:32.908391 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-10-13T00:22:32.908494453+00:00 stderr F I1013 00:22:32.908472 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:22:33.100625490+00:00 stderr F I1013 00:22:33.100562 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:22:33.100625490+00:00 stderr F I1013 00:22:33.100616 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:22:33.300415897+00:00 stderr F I1013 00:22:33.299813 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:22:33.300485609+00:00 stderr F I1013 00:22:33.300413 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-10-13T00:22:33.501891448+00:00 stderr F I1013 00:22:33.501832 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-10-13T00:22:33.501948030+00:00 stderr F I1013 00:22:33.501891 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) 
openshift-multus/multus-ac 2025-10-13T00:22:33.703595987+00:00 stderr F I1013 00:22:33.703504 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-10-13T00:22:33.703595987+00:00 stderr F I1013 00:22:33.703568 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-10-13T00:22:33.900766179+00:00 stderr F I1013 00:22:33.900286 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-10-13T00:22:33.900766179+00:00 stderr F I1013 00:22:33.900748 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-10-13T00:22:34.103084015+00:00 stderr F I1013 00:22:34.102957 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-10-13T00:22:34.103084015+00:00 stderr F I1013 00:22:34.103042 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-10-13T00:22:34.307967191+00:00 stderr F I1013 00:22:34.307925 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-10-13T00:22:34.308075904+00:00 stderr F I1013 00:22:34.308060 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-10-13T00:22:34.505397780+00:00 stderr F I1013 00:22:34.505284 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-10-13T00:22:34.505397780+00:00 stderr F I1013 00:22:34.505384 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-10-13T00:22:34.703824827+00:00 stderr F I1013 00:22:34.703760 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-10-13T00:22:34.703862708+00:00 stderr F I1013 00:22:34.703826 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-10-13T00:22:34.901026930+00:00 stderr F I1013 00:22:34.900972 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-10-13T00:22:34.901137303+00:00 stderr F I1013 00:22:34.901122 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-10-13T00:22:35.101270318+00:00 stderr F I1013 00:22:35.101227 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-10-13T00:22:35.101510474+00:00 stderr F I1013 00:22:35.101479 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-10-13T00:22:35.302844812+00:00 stderr F I1013 00:22:35.302770 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-10-13T00:22:35.302844812+00:00 stderr F I1013 00:22:35.302819 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-10-13T00:22:35.509094887+00:00 stderr F I1013 00:22:35.508996 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 
2025-10-13T00:22:35.509094887+00:00 stderr F I1013 00:22:35.509044 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-10-13T00:22:35.714670333+00:00 stderr F I1013 00:22:35.714575 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-10-13T00:22:35.714789877+00:00 stderr F I1013 00:22:35.714773 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-10-13T00:22:35.922401250+00:00 stderr F I1013 00:22:35.922274 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-10-13T00:22:35.922401250+00:00 stderr F I1013 00:22:35.922323 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-10-13T00:22:36.105558422+00:00 stderr F I1013 00:22:36.105507 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-10-13T00:22:36.105675585+00:00 stderr F I1013 00:22:36.105660 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-10-13T00:22:36.323083670+00:00 stderr F I1013 00:22:36.322992 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-10-13T00:22:36.323083670+00:00 stderr F I1013 00:22:36.323041 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-10-13T00:22:36.505484841+00:00 stderr F I1013 00:22:36.505420 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-10-13T00:22:36.505484841+00:00 stderr F I1013 00:22:36.505461 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:22:36.782893838+00:00 stderr F I1013 00:22:36.782842 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:22:36.783010652+00:00 stderr F I1013 00:22:36.782991 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-10-13T00:22:36.943220724+00:00 stderr F I1013 00:22:36.943153 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-10-13T00:22:36.943266576+00:00 stderr F I1013 00:22:36.943256 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:22:37.101697298+00:00 stderr F I1013 00:22:37.101629 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:22:37.101697298+00:00 stderr F I1013 00:22:37.101677 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-10-13T00:22:37.301509034+00:00 stderr F I1013 00:22:37.301463 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:22:37.301615367+00:00 
stderr F I1013 00:22:37.301600 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-10-13T00:22:37.501936417+00:00 stderr F I1013 00:22:37.501855 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-10-13T00:22:37.501936417+00:00 stderr F I1013 00:22:37.501905 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-10-13T00:22:37.701730822+00:00 stderr F I1013 00:22:37.701640 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-10-13T00:22:37.701730822+00:00 stderr F I1013 00:22:37.701687 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-10-13T00:22:37.900904680+00:00 stderr F I1013 00:22:37.900840 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-10-13T00:22:37.900904680+00:00 stderr F I1013 00:22:37.900898 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-10-13T00:22:38.100933601+00:00 stderr F I1013 00:22:38.100873 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-10-13T00:22:38.100933601+00:00 stderr F I1013 00:22:38.100923 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-10-13T00:22:38.301550149+00:00 stderr F I1013 00:22:38.301507 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-10-13T00:22:38.301657232+00:00 stderr F I1013 00:22:38.301642 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-10-13T00:22:38.502536398+00:00 stderr F I1013 00:22:38.502458 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-10-13T00:22:38.502536398+00:00 stderr F I1013 00:22:38.502506 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:22:38.701105619+00:00 stderr F I1013 00:22:38.701062 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:22:38.701197602+00:00 stderr F I1013 00:22:38.701182 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:22:38.904191156+00:00 stderr F I1013 00:22:38.904131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:22:38.904220977+00:00 stderr F I1013 00:22:38.904189 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:22:39.102082958+00:00 stderr F I1013 00:22:39.102019 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 
2025-10-13T00:22:39.102253483+00:00 stderr F I1013 00:22:39.102223 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:22:39.301104452+00:00 stderr F I1013 00:22:39.300635 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:22:39.301242266+00:00 stderr F I1013 00:22:39.301223 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-10-13T00:22:39.501272128+00:00 stderr F I1013 00:22:39.501215 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-10-13T00:22:39.501449523+00:00 stderr F I1013 00:22:39.501426 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-10-13T00:22:39.704665894+00:00 stderr F I1013 00:22:39.704587 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-10-13T00:22:39.704665894+00:00 stderr F I1013 00:22:39.704639 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-10-13T00:22:39.901486976+00:00 stderr F I1013 00:22:39.901412 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-10-13T00:22:39.901486976+00:00 stderr F I1013 00:22:39.901476 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-10-13T00:22:40.103874864+00:00 stderr F I1013 00:22:40.103800 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-10-13T00:22:40.103874864+00:00 stderr F I1013 00:22:40.103859 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-10-13T00:22:40.240813758+00:00 stderr F I1013 00:22:40.240731 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:40.258273554+00:00 stderr F I1013 00:22:40.258207 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.267246694+00:00 stderr F I1013 00:22:40.267182 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.276467041+00:00 stderr F I1013 00:22:40.276402 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.285534454+00:00 stderr F I1013 00:22:40.285477 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.294232486+00:00 stderr F I1013 00:22:40.294158 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.304752019+00:00 stderr F I1013 00:22:40.304652 
1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-10-13T00:22:40.304752019+00:00 stderr F I1013 00:22:40.304724 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-10-13T00:22:40.306036045+00:00 stderr F I1013 00:22:40.305964 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.317530645+00:00 stderr F I1013 00:22:40.317474 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.326317040+00:00 stderr F I1013 00:22:40.326215 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.334986061+00:00 stderr F I1013 00:22:40.334911 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:40.505717487+00:00 stderr F I1013 00:22:40.505628 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-10-13T00:22:40.505717487+00:00 stderr F I1013 00:22:40.505696 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-10-13T00:22:40.711934661+00:00 stderr F I1013 00:22:40.711896 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-10-13T00:22:40.712040074+00:00 stderr F I1013 00:22:40.712029 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-10-13T00:22:40.905475262+00:00 stderr F I1013 00:22:40.905426 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-10-13T00:22:40.905597586+00:00 stderr F I1013 00:22:40.905575 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-10-13T00:22:41.052202310+00:00 stderr F I1013 00:22:41.052101 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:41.101447321+00:00 stderr F I1013 00:22:41.101389 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-10-13T00:22:41.101568705+00:00 stderr F I1013 00:22:41.101552 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-10-13T00:22:41.305265519+00:00 stderr F I1013 00:22:41.305217 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-10-13T00:22:41.305409883+00:00 stderr F I1013 00:22:41.305392 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:22:41.509127207+00:00 stderr F I1013 00:22:41.509035 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-10-13T00:22:41.509127207+00:00 stderr F I1013 00:22:41.509080 1 log.go:245] reconciling (monitoring.coreos.com/v1, 
Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-10-13T00:22:41.567438331+00:00 stderr F I1013 00:22:41.567363 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:41.585297718+00:00 stderr F I1013 00:22:41.585189 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.595405250+00:00 stderr F I1013 00:22:41.595350 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.605653975+00:00 stderr F I1013 00:22:41.605616 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.613636908+00:00 stderr F I1013 00:22:41.613597 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.619142261+00:00 stderr F I1013 00:22:41.619107 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.630784325+00:00 stderr F I1013 00:22:41.630706 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.637815411+00:00 stderr F I1013 00:22:41.637779 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.654162737+00:00 stderr F I1013 00:22:41.654119 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.702879914+00:00 stderr F I1013 00:22:41.702787 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-10-13T00:22:41.702879914+00:00 stderr F I1013 00:22:41.702854 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:22:41.855195536+00:00 stderr F I1013 00:22:41.855085 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:41.902771202+00:00 stderr F I1013 00:22:41.902684 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-10-13T00:22:41.902771202+00:00 stderr F I1013 00:22:41.902729 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:22:41.962284309+00:00 stderr F I1013 00:22:41.962182 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:41.980457236+00:00 stderr F I1013 00:22:41.980311 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:42.103303878+00:00 stderr F I1013 00:22:42.101979 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 
2025-10-13T00:22:42.103303878+00:00 stderr F I1013 00:22:42.102028 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-10-13T00:22:42.103405671+00:00 stderr F I1013 00:22:42.103300 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:42.118085329+00:00 stderr F I1013 00:22:42.117966 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:42.130236868+00:00 stderr F I1013 00:22:42.130126 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:42.253481191+00:00 stderr F I1013 00:22:42.253429 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:42.303991128+00:00 stderr F I1013 00:22:42.303433 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-10-13T00:22:42.303991128+00:00 stderr F I1013 00:22:42.303479 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-10-13T00:22:42.455094747+00:00 stderr F I1013 00:22:42.455010 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:42.509144933+00:00 stderr F I1013 00:22:42.509068 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-10-13T00:22:42.509194644+00:00 stderr F I1013 00:22:42.509155 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-10-13T00:22:42.660361025+00:00 stderr F I1013 00:22:42.660263 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:42.705179013+00:00 stderr F I1013 00:22:42.705104 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-10-13T00:22:42.705179013+00:00 stderr F I1013 00:22:42.705153 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-10-13T00:22:42.855212752+00:00 stderr F I1013 00:22:42.855153 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:42.917558059+00:00 stderr F I1013 00:22:42.917465 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-10-13T00:22:42.917558059+00:00 stderr F I1013 00:22:42.917509 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-10-13T00:22:43.059512233+00:00 stderr F I1013 00:22:43.059416 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:43.118389243+00:00 stderr F I1013 00:22:43.118278 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-10-13T00:22:43.118450475+00:00 stderr F I1013 00:22:43.118385 1 log.go:245] reconciling (/v1, 
Kind=Namespace) /openshift-network-diagnostics 2025-10-13T00:22:43.220270751+00:00 stderr F I1013 00:22:43.220188 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.220413345+00:00 stderr F I1013 00:22:43.220362 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-10-13T00:22:43.255243665+00:00 stderr F I1013 00:22:43.255171 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:43.301407081+00:00 stderr F I1013 00:22:43.301313 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-10-13T00:22:43.301438242+00:00 stderr F I1013 00:22:43.301413 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:22:43.330780729+00:00 stderr F I1013 00:22:43.330718 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.344972575+00:00 stderr F I1013 00:22:43.344913 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.454986419+00:00 stderr F I1013 00:22:43.454911 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:43.502988606+00:00 stderr F I1013 00:22:43.502920 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:22:43.503020257+00:00 stderr F I1013 00:22:43.502986 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:22:43.615691156+00:00 stderr F I1013 00:22:43.615592 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.657644544+00:00 stderr F I1013 00:22:43.657557 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:43.704822958+00:00 stderr F I1013 00:22:43.704723 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:22:43.704822958+00:00 stderr F I1013 00:22:43.704768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-10-13T00:22:43.865248507+00:00 stderr F I1013 00:22:43.865146 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:43.902103234+00:00 stderr F I1013 00:22:43.902011 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-10-13T00:22:43.902103234+00:00 stderr F I1013 00:22:43.902065 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-10-13T00:22:44.058403517+00:00 stderr F I1013 00:22:44.058300 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 
2025-10-13T00:22:44.106755904+00:00 stderr F I1013 00:22:44.106624 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-10-13T00:22:44.106755904+00:00 stderr F I1013 00:22:44.106722 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-10-13T00:22:44.259171810+00:00 stderr F I1013 00:22:44.259062 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:44.302922379+00:00 stderr F I1013 00:22:44.302831 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-10-13T00:22:44.302922379+00:00 stderr F I1013 00:22:44.302890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-10-13T00:22:44.448399401+00:00 stderr F I1013 00:22:44.448156 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:44.461741313+00:00 stderr F I1013 00:22:44.461549 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:44.502626032+00:00 stderr F I1013 00:22:44.502520 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-10-13T00:22:44.502626032+00:00 stderr F I1013 00:22:44.502607 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-10-13T00:22:44.586154058+00:00 stderr F I1013 00:22:44.586034 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:44.655062158+00:00 stderr F I1013 00:22:44.654931 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:44.709456843+00:00 stderr F I1013 00:22:44.709317 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:22:44.709456843+00:00 stderr F I1013 00:22:44.709397 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-10-13T00:22:44.856887150+00:00 stderr F I1013 00:22:44.856578 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:44.903441586+00:00 stderr F I1013 00:22:44.903359 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:22:44.903441586+00:00 stderr F I1013 00:22:44.903407 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-10-13T00:22:44.913237099+00:00 stderr F I1013 00:22:44.913154 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:44.978151057+00:00 stderr F I1013 00:22:44.978044 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:45.022416740+00:00 stderr F I1013 00:22:45.019487 1 reflector.go:351] Caches populated for *v1.Network 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:45.061484818+00:00 stderr F I1013 00:22:45.061381 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:45.105557425+00:00 stderr F I1013 00:22:45.105481 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-10-13T00:22:45.105557425+00:00 stderr F I1013 00:22:45.105544 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:22:45.259008020+00:00 stderr F I1013 00:22:45.257616 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:45.304376104+00:00 stderr F I1013 00:22:45.304253 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:22:45.304426285+00:00 stderr F I1013 00:22:45.304383 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-10-13T00:22:45.319390742+00:00 stderr F I1013 00:22:45.319316 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:45.319771182+00:00 stderr F I1013 00:22:45.319723 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2025-10-13T00:22:45.325349908+00:00 stderr F I1013 00:22:45.325307 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.325370168+00:00 stderr F I1013 00:22:45.325363 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-10-13T00:22:45.332275171+00:00 stderr F I1013 00:22:45.332249 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.332315062+00:00 stderr F I1013 00:22:45.332274 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-10-13T00:22:45.338472053+00:00 stderr F I1013 00:22:45.338432 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.338569526+00:00 stderr F I1013 00:22:45.338549 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2025-10-13T00:22:45.344020008+00:00 stderr F I1013 00:22:45.343707 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.344020008+00:00 stderr F I1013 00:22:45.343768 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-10-13T00:22:45.349305285+00:00 stderr F I1013 00:22:45.349250 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.349385577+00:00 stderr F I1013 00:22:45.349330 1 log.go:245] Reconciling additional trust bundle 
configmap 'openshift-config/initial-kube-apiserver-server-ca' 2025-10-13T00:22:45.354531841+00:00 stderr F I1013 00:22:45.354477 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.354560501+00:00 stderr F I1013 00:22:45.354537 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2025-10-13T00:22:45.359880180+00:00 stderr F I1013 00:22:45.359844 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.359922311+00:00 stderr F I1013 00:22:45.359895 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2025-10-13T00:22:45.364043886+00:00 stderr F I1013 00:22:45.364001 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.364062836+00:00 stderr F I1013 00:22:45.364054 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-10-13T00:22:45.368030767+00:00 stderr F I1013 00:22:45.367991 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.368051137+00:00 stderr F I1013 00:22:45.368044 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2025-10-13T00:22:45.374034684+00:00 stderr F I1013 00:22:45.374007 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-10-13T00:22:45.457941421+00:00 stderr F I1013 00:22:45.457876 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:45.505652270+00:00 stderr F I1013 00:22:45.505513 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-10-13T00:22:45.505652270+00:00 stderr F I1013 00:22:45.505604 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-10-13T00:22:45.728887118+00:00 stderr F I1013 00:22:45.728806 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:22:45.729059023+00:00 stderr F I1013 00:22:45.729034 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-10-13T00:22:45.901812795+00:00 stderr F I1013 00:22:45.901714 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-10-13T00:22:45.901812795+00:00 stderr F I1013 00:22:45.901805 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-10-13T00:22:45.964284235+00:00 stderr F I1013 00:22:45.964179 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:45.978182673+00:00 stderr F I1013 00:22:45.978144 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:45.989936630+00:00 stderr F I1013 00:22:45.989893 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:46.057852092+00:00 stderr F I1013 00:22:46.057753 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:46.104718297+00:00 stderr F I1013 00:22:46.104613 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-10-13T00:22:46.104808560+00:00 stderr F I1013 00:22:46.104727 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-10-13T00:22:46.260762654+00:00 stderr F I1013 00:22:46.260641 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:46.308380020+00:00 stderr F I1013 00:22:46.308261 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-10-13T00:22:46.308380020+00:00 stderr F I1013 00:22:46.308332 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-10-13T00:22:46.455319353+00:00 stderr F I1013 00:22:46.455279 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:46.500951565+00:00 stderr F I1013 00:22:46.500899 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-10-13T00:22:46.501054337+00:00 stderr F I1013 00:22:46.501038 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-10-13T00:22:46.656171958+00:00 stderr F I1013 00:22:46.656106 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:46.701590123+00:00 stderr F I1013 00:22:46.701489 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:22:46.701590123+00:00 stderr F I1013 00:22:46.701537 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-10-13T00:22:46.794620405+00:00 stderr F I1013 00:22:46.794525 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:46.859618675+00:00 stderr F I1013 00:22:46.859399 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:46.905424461+00:00 stderr F I1013 00:22:46.903323 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-10-13T00:22:46.905424461+00:00 stderr F I1013 00:22:46.903401 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-10-13T00:22:47.056642353+00:00 stderr 
F I1013 00:22:47.056546 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:47.101407460+00:00 stderr F I1013 00:22:47.101342 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-10-13T00:22:47.101407460+00:00 stderr F I1013 00:22:47.101384 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:22:47.254525936+00:00 stderr F I1013 00:22:47.254450 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:47.256105780+00:00 stderr F I1013 00:22:47.256062 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.299091087+00:00 stderr F I1013 00:22:47.299028 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.299246351+00:00 stderr F I1013 00:22:47.299206 1 log.go:245] Reconciling proxy 'cluster' 2025-10-13T00:22:47.300620990+00:00 stderr F I1013 00:22:47.300584 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:22:47.300642040+00:00 stderr F I1013 00:22:47.300620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-10-13T00:22:47.301381281+00:00 stderr F I1013 00:22:47.301328 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-10-13T00:22:47.310961798+00:00 stderr F I1013 00:22:47.310921 1 log.go:245] Reconciling proxy 'cluster' complete 2025-10-13T00:22:47.460312558+00:00 stderr F I1013 00:22:47.460190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:47.504063547+00:00 stderr F I1013 00:22:47.503973 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-10-13T00:22:47.504063547+00:00 stderr F I1013 00:22:47.504019 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-10-13T00:22:47.659892257+00:00 stderr F I1013 00:22:47.659804 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:47.701302741+00:00 stderr F I1013 00:22:47.701249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-10-13T00:22:47.701368913+00:00 stderr F I1013 00:22:47.701304 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-10-13T00:22:47.793589331+00:00 stderr F I1013 00:22:47.793531 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.855514116+00:00 stderr F I1013 00:22:47.855449 1 log.go:245] 
The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:47.901258361+00:00 stderr F I1013 00:22:47.901186 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-10-13T00:22:47.901258361+00:00 stderr F I1013 00:22:47.901229 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-10-13T00:22:47.938299802+00:00 stderr F I1013 00:22:47.938233 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.055048704+00:00 stderr F I1013 00:22:48.054992 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:48.101804077+00:00 stderr F I1013 00:22:48.101733 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:22:48.101804077+00:00 stderr F I1013 00:22:48.101774 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-10-13T00:22:48.255752975+00:00 stderr F I1013 00:22:48.255686 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:48.296758577+00:00 stderr F I1013 00:22:48.296695 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.303079133+00:00 stderr F I1013 00:22:48.303032 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-10-13T00:22:48.303116974+00:00 stderr F I1013 00:22:48.303082 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-10-13T00:22:48.408065668+00:00 stderr F I1013 00:22:48.407985 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.457663419+00:00 stderr F I1013 00:22:48.456924 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:48.506194011+00:00 stderr F I1013 00:22:48.506128 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-10-13T00:22:48.506194011+00:00 stderr F I1013 00:22:48.506170 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-10-13T00:22:48.507246671+00:00 stderr F I1013 00:22:48.507206 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.654454940+00:00 stderr F I1013 00:22:48.654321 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:48.701095839+00:00 stderr F I1013 00:22:48.700744 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) 
openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-10-13T00:22:48.701095839+00:00 stderr F I1013 00:22:48.700790 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-10-13T00:22:48.842804447+00:00 stderr F I1013 00:22:48.842726 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.859482891+00:00 stderr F I1013 00:22:48.855945 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:48.902723276+00:00 stderr F I1013 00:22:48.902656 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-10-13T00:22:48.902723276+00:00 stderr F I1013 00:22:48.902699 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-10-13T00:22:48.969872636+00:00 stderr F I1013 00:22:48.969805 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.969921818+00:00 stderr F I1013 00:22:48.969909 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:22:48.969945308+00:00 stderr F I1013 00:22:48.969928 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-10-13T00:22:48.969945308+00:00 stderr F I1013 00:22:48.969937 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-10-13T00:22:48.969945308+00:00 stderr F I1013 00:22:48.969941 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-10-13T00:22:48.969956519+00:00 stderr F I1013 00:22:48.969952 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-10-13T00:22:48.969972359+00:00 stderr F I1013 00:22:48.969959 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-10-13T00:22:48.969972359+00:00 stderr F I1013 00:22:48.969962 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-10-13T00:22:48.969972359+00:00 stderr F I1013 00:22:48.969967 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-10-13T00:22:48.970063972+00:00 stderr F I1013 00:22:48.970033 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-10-13T00:22:48.970063972+00:00 stderr F I1013 00:22:48.970055 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-10-13T00:22:48.970063972+00:00 stderr F I1013 00:22:48.970061 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:22:48.970072692+00:00 stderr F I1013 00:22:48.970065 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-10-13T00:22:49.053871486+00:00 stderr F I1013 00:22:49.053792 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 
2025-10-13T00:22:49.060076619+00:00 stderr F I1013 00:22:49.060019 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:49.101916104+00:00 stderr F I1013 00:22:49.101838 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:22:49.101916104+00:00 stderr F I1013 00:22:49.101897 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-10-13T00:22:49.151914827+00:00 stderr F I1013 00:22:49.151838 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:49.152244306+00:00 stderr F I1013 00:22:49.152210 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-10-13T00:22:49.154943531+00:00 stderr F I1013 00:22:49.154905 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-10-13T00:22:49.154965532+00:00 stderr F I1013 00:22:49.154943 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155068155+00:00 stderr F I1013 00:22:49.155039 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155079685+00:00 stderr F I1013 00:22:49.155066 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155110436+00:00 stderr F I1013 00:22:49.155087 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155123776+00:00 stderr F I1013 00:22:49.155117 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155160297+00:00 stderr F I1013 00:22:49.155141 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155191188+00:00 stderr F I1013 00:22:49.155173 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155202179+00:00 stderr F I1013 00:22:49.155196 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155222679+00:00 stderr F I1013 00:22:49.155215 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155253270+00:00 stderr F I1013 00:22:49.155234 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155264260+00:00 stderr F I1013 00:22:49.155258 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.155297921+00:00 stderr F I1013 00:22:49.155279 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-10-13T00:22:49.258137016+00:00 stderr F I1013 00:22:49.258052 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:49.301759491+00:00 stderr F I1013 00:22:49.301681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-10-13T00:22:49.301826233+00:00 stderr F I1013 00:22:49.301749 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-10-13T00:22:49.459156765+00:00 stderr F I1013 00:22:49.459051 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:49.504298273+00:00 stderr F I1013 00:22:49.504213 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-10-13T00:22:49.504298273+00:00 stderr F I1013 00:22:49.504290 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-10-13T00:22:49.550384747+00:00 stderr F I1013 00:22:49.550257 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:49.658455387+00:00 stderr F I1013 00:22:49.657581 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:49.705660352+00:00 stderr F I1013 00:22:49.705590 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-10-13T00:22:49.745011278+00:00 stderr F I1013 00:22:49.744437 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:49.745011278+00:00 stderr F I1013 00:22:49.744638 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-10-13T00:22:49.745450710+00:00 stderr F I1013 00:22:49.745408 1 log.go:245] successful reconciliation 2025-10-13T00:22:49.788340055+00:00 stderr F I1013 00:22:49.787163 1 log.go:245] Operconfig Controller complete 2025-10-13T00:22:49.830151800+00:00 stderr F I1013 00:22:49.828680 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:49.857687947+00:00 stderr F I1013 00:22:49.857623 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:49.965524801+00:00 stderr F I1013 00:22:49.965445 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:49.976631390+00:00 stderr F I1013 00:22:49.976576 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-10-13T00:22:49.977107293+00:00 stderr F I1013 00:22:49.977063 1 log.go:245] successful reconciliation 2025-10-13T00:22:50.053949744+00:00 stderr F I1013 00:22:50.053876 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:50.178864713+00:00 stderr F I1013 00:22:50.178806 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-10-13T00:22:50.179380438+00:00 stderr F I1013 00:22:50.179312 1 log.go:245] successful reconciliation 2025-10-13T00:22:50.257233066+00:00 stderr F I1013 00:22:50.257159 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:50.462279188+00:00 stderr F I1013 00:22:50.462170 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:50.658052901+00:00 stderr F I1013 00:22:50.657947 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:50.856433447+00:00 stderr F I1013 00:22:50.855262 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:51.655914377+00:00 stderr F I1013 00:22:51.655823 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:51.877752066+00:00 stderr F I1013 00:22:51.877608 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:51.897097965+00:00 stderr F I1013 00:22:51.897038 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:51.903788062+00:00 stderr F I1013 00:22:51.903712 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:51.913519613+00:00 stderr F I1013 00:22:51.913460 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:51.924836028+00:00 stderr F I1013 00:22:51.924768 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:51.931070192+00:00 stderr F I1013 00:22:51.931028 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:51.935322630+00:00 stderr F I1013 00:22:51.935287 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:52.054752677+00:00 stderr F I1013 00:22:52.054680 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:52.235662865+00:00 stderr F I1013 00:22:52.235595 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:52.254188011+00:00 stderr F I1013 00:22:52.254101 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:52.453116072+00:00 stderr F I1013 00:22:52.453035 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:52.657135385+00:00 stderr F I1013 00:22:52.657063 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:52.854644787+00:00 stderr F I1013 00:22:52.854577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 
2025-10-13T00:22:53.046045829+00:00 stderr F I1013 00:22:53.046000 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:53.046139811+00:00 stderr F I1013 00:22:53.046113 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-10-13T00:22:53.046139811+00:00 stderr F I1013 00:22:53.046135 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-10-13T00:22:53.046149442+00:00 stderr F I1013 00:22:53.046142 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-10-13T00:22:53.046149442+00:00 stderr F I1013 00:22:53.046146 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-10-13T00:22:53.048174918+00:00 stderr F I1013 00:22:53.048107 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:53.053959609+00:00 stderr F I1013 00:22:53.053921 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:53.107936743+00:00 stderr F I1013 00:22:53.107856 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:53.253157368+00:00 stderr F I1013 00:22:53.253088 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:53.458849318+00:00 stderr F I1013 00:22:53.458795 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:53.653283184+00:00 stderr F I1013 00:22:53.653227 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:53.854179060+00:00 stderr F I1013 00:22:53.854120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:54.068721546+00:00 stderr F I1013 00:22:54.067969 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:54.256152417+00:00 stderr F I1013 00:22:54.256112 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:54.454270265+00:00 stderr F I1013 00:22:54.454212 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:54.652476886+00:00 stderr F I1013 00:22:54.652419 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:55.400816362+00:00 stderr F I1013 00:22:55.400759 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:55.566396694+00:00 stderr F I1013 00:22:55.566293 1 reflector.go:351] Caches populated for *v1.Network 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:55.566478746+00:00 stderr F I1013 00:22:55.566446 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-10-13T00:22:56.085155503+00:00 stderr F I1013 00:22:56.085104 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:58.060069435+00:00 stderr F I1013 00:22:58.059985 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.069105577+00:00 stderr F I1013 00:22:58.069024 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.078414536+00:00 stderr F I1013 00:22:58.078349 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.086843101+00:00 stderr F I1013 00:22:58.086764 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.093839416+00:00 stderr F I1013 00:22:58.093754 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.100956914+00:00 stderr F I1013 00:22:58.100887 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.107835596+00:00 stderr F I1013 00:22:58.107786 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.114015998+00:00 stderr F I1013 00:22:58.113968 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:22:58.119497841+00:00 stderr F I1013 00:22:58.119457 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.881703024+00:00 stderr F I1013 00:23:42.881617 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.889673766+00:00 stderr F I1013 00:23:42.889638 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.896395704+00:00 stderr F I1013 00:23:42.896358 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.903396189+00:00 stderr F I1013 00:23:42.902631 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.908457620+00:00 stderr F I1013 00:23:42.908426 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 
2025-10-13T00:23:42.916032211+00:00 stderr F I1013 00:23:42.915983 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.921817742+00:00 stderr F I1013 00:23:42.921781 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.926215724+00:00 stderr F I1013 00:23:42.926195 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-10-13T00:23:42.930536625+00:00 stderr F I1013 00:23:42.930509 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000144355515073043233033127 0ustar zuulzuul2025-08-13T19:50:51.526005638+00:00 stderr F I0813 19:50:51.524133 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:50:51.805548568+00:00 stderr F I0813 19:50:51.796567 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:50:51.907527603+00:00 stderr F I0813 19:50:51.906451 1 observer_polling.go:159] Starting file observer 2025-08-13T19:50:52.430967163+00:00 stderr F I0813 19:50:52.430191 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-08-13T19:50:53.696147402+00:00 stderr F I0813 19:50:53.658721 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:50:53.696515842+00:00 stderr F W0813 19:50:53.696475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:50:53.696565593+00:00 stderr F W0813 19:50:53.696546 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:50:53.711051698+00:00 stderr F I0813 19:50:53.688429 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:50:53.729164305+00:00 stderr F I0813 19:50:53.728532 1 secure_serving.go:213] Serving securely on [::]:9104 2025-08-13T19:50:53.732060848+00:00 stderr F I0813 19:50:53.731301 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:50:53.733718265+00:00 stderr F I0813 19:50:53.733681 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.737647 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.739633 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748216 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748340 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748333 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748369 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748549 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 
2025-08-13T19:50:53.855680041+00:00 stderr F I0813 19:50:53.851938 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:50:53.855680041+00:00 stderr F I0813 19:50:53.852293 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:50:53.871563255+00:00 stderr F I0813 19:50:53.871012 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:50:53.878122182+00:00 stderr F E0813 19:50:53.878073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.878459512+00:00 stderr F E0813 19:50:53.878441 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.884662709+00:00 stderr F E0813 19:50:53.883761 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.888043106+00:00 stderr F E0813 19:50:53.887730 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.894232243+00:00 stderr F E0813 19:50:53.894194 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.921112841+00:00 stderr F E0813 19:50:53.921050 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.921221634+00:00 stderr F E0813 19:50:53.921202 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.944473119+00:00 stderr F E0813 19:50:53.944405 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.965390587+00:00 stderr F E0813 19:50:53.964305 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:53.990391071+00:00 stderr F E0813 19:50:53.989304 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:54.053048102+00:00 stderr F E0813 19:50:54.051645 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:54.076296547+00:00 stderr F E0813 19:50:54.076033 1 configmap_cafile_content.go:243] 
key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:54.214466776+00:00 stderr F E0813 19:50:54.213642 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:54.237106333+00:00 stderr F E0813 19:50:54.236942 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:54.539054753+00:00 stderr F E0813 19:50:54.535234 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:54.558006074+00:00 stderr F E0813 19:50:54.557434 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:55.176672496+00:00 stderr F E0813 19:50:55.176032 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:55.199176329+00:00 stderr F E0813 19:50:55.198548 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:56.458594364+00:00 stderr F E0813 19:50:56.458543 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:56.485217234+00:00 stderr F E0813 19:50:56.481127 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:59.034562497+00:00 stderr F E0813 19:50:59.034344 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:50:59.044633055+00:00 stderr F E0813 19:50:59.044467 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:04.165674402+00:00 stderr F E0813 19:51:04.162623 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:04.166167186+00:00 stderr F E0813 19:51:04.166139 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:14.403445769+00:00 stderr F E0813 19:51:14.403252 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:14.407896356+00:00 stderr F E0813 19:51:14.406714 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:34.885304072+00:00 stderr F E0813 19:51:34.884178 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:34.887387631+00:00 stderr F E0813 19:51:34.887255 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:51:53.853097978+00:00 stderr F E0813 19:51:53.852977 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:52:15.845380614+00:00 stderr F E0813 19:52:15.845240 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:52:15.847854545+00:00 stderr F E0813 19:52:15.847751 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:52:53.852904639+00:00 stderr F E0813 19:52:53.852557 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:53:37.766395565+00:00 stderr F E0813 19:53:37.766247 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:53:53.853033193+00:00 stderr F E0813 19:53:53.852902 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:54:53.853306268+00:00 stderr F E0813 19:54:53.853107 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:54:59.688769553+00:00 stderr F E0813 19:54:59.688621 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:55:53.852650979+00:00 stderr F E0813 19:55:53.852507 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:56:21.606934133+00:00 stderr F E0813 19:56:21.606713 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:56:27.298752589+00:00 stderr F I0813 19:56:27.298520 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", 
Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"f2f09683-2189-4368-ac3d-7dc7538da4b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"27121", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_3457bc4e-7009-4fb9-bd36-e077ad27f0dd became leader 2025-08-13T19:56:27.299471170+00:00 stderr F I0813 19:56:27.299360 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock 2025-08-13T19:56:27.351114894+00:00 stderr F I0813 19:56:27.351001 1 operator.go:97] Creating status manager for stand-alone cluster 2025-08-13T19:56:27.351565257+00:00 stderr F I0813 19:56:27.351512 1 operator.go:102] Adding controller-runtime controllers 2025-08-13T19:56:27.354990565+00:00 stderr F I0813 19:56:27.354921 1 operconfig_controller.go:102] Waiting for feature gates initialization... 2025-08-13T19:56:27.358337231+00:00 stderr F I0813 19:56:27.358191 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:56:27.363699364+00:00 stderr F I0813 19:56:27.363602 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:56:27.364136466+00:00 stderr F I0813 19:56:27.364072 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", 
"NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:56:27.366122993+00:00 stderr F I0813 19:56:27.366051 1 client.go:239] Starting informers... 2025-08-13T19:56:27.366292048+00:00 stderr F I0813 19:56:27.366232 1 client.go:250] Waiting for informers to sync... 2025-08-13T19:56:27.567890954+00:00 stderr F I0813 19:56:27.567745 1 client.go:271] Informers started and synced 2025-08-13T19:56:27.567980227+00:00 stderr F I0813 19:56:27.567965 1 operator.go:126] Starting controller-manager 2025-08-13T19:56:27.569429138+00:00 stderr F I0813 19:56:27.569399 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:56:27.569530191+00:00 stderr F I0813 19:56:27.569512 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:56:27.569658345+00:00 stderr F I0813 19:56:27.569634 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:56:27.569953743+00:00 stderr F I0813 19:56:27.569405 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T19:56:27.569976414+00:00 stderr F I0813 19:56:27.569955 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T19:56:27.569988794+00:00 stderr F I0813 19:56:27.569970 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-08-13T19:56:27.570112128+00:00 stderr F I0813 19:56:27.570015 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T19:56:27.571338433+00:00 stderr F I0813 19:56:27.570882 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2025-08-13T19:56:27.573138374+00:00 stderr F I0813 19:56:27.573062 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000d2d970" 2025-08-13T19:56:27.573237987+00:00 stderr F I0813 19:56:27.573176 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2025-08-13T19:56:27.573380101+00:00 stderr F I0813 19:56:27.573272 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2025-08-13T19:56:27.573393931+00:00 stderr F I0813 19:56:27.573376 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 2025-08-13T19:56:27.573393931+00:00 stderr F I0813 19:56:27.573376 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.573928 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.574014 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.573086 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2025-08-13T19:56:27.574103762+00:00 stderr F I0813 19:56:27.574085 1 controller.go:186] "Starting Controller" controller="pki-controller" 2025-08-13T19:56:27.574197864+00:00 stderr F I0813 19:56:27.574118 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T19:56:27.574211965+00:00 stderr F I0813 19:56:27.574195 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T19:56:27.574211965+00:00 stderr F I0813 19:56:27.574207 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc001030000" 2025-08-13T19:56:27.574395470+00:00 stderr F I0813 19:56:27.574278 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 2025-08-13T19:56:27.574395470+00:00 stderr F I0813 19:56:27.574335 1 controller.go:186] "Starting Controller" controller="operconfig-controller" 2025-08-13T19:56:27.574414261+00:00 stderr F I0813 19:56:27.574389 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 2025-08-13T19:56:27.574426401+00:00 stderr F I0813 19:56:27.574417 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2025-08-13T19:56:27.574506623+00:00 stderr F I0813 19:56:27.574434 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc0010302c0" 2025-08-13T19:56:27.574542484+00:00 stderr F I0813 19:56:27.574279 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:56:27.574664318+00:00 stderr F I0813 19:56:27.574565 1 controller.go:186] "Starting Controller" 
controller="dashboard-controller" 2025-08-13T19:56:27.574776281+00:00 stderr F I0813 19:56:27.574601 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc001030210" 2025-08-13T19:56:27.574888574+00:00 stderr F I0813 19:56:27.574574 1 controller.go:186] "Starting Controller" controller="signer-controller" 2025-08-13T19:56:27.575969945+00:00 stderr F I0813 19:56:27.574542 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController" 2025-08-13T19:56:27.576037697+00:00 stderr F I0813 19:56:27.576022 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2025-08-13T19:56:27.576094398+00:00 stderr F I0813 19:56:27.574714 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2025-08-13T19:56:27.576491250+00:00 stderr F I0813 19:56:27.574875 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2025-08-13T19:56:27.576617133+00:00 stderr F I0813 19:56:27.576531 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2025-08-13T19:56:27.576926262+00:00 stderr F I0813 19:56:27.574307 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc0010300b0" 2025-08-13T19:56:27.576951953+00:00 stderr F I0813 19:56:27.576927 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc001030160" 2025-08-13T19:56:27.577041156+00:00 stderr F I0813 19:56:27.576992 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2025-08-13T19:56:27.577074326+00:00 stderr F I0813 19:56:27.576996 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T19:56:27.577215551+00:00 stderr F I0813 19:56:27.577133 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2025-08-13T19:56:27.577215551+00:00 stderr F I0813 19:56:27.574959 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc001030370" 2025-08-13T19:56:27.577232791+00:00 stderr F I0813 19:56:27.577215 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc001030420" 2025-08-13T19:56:27.577244481+00:00 stderr F I0813 19:56:27.577233 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc0010304d0" 2025-08-13T19:56:27.577254382+00:00 stderr F I0813 19:56:27.577246 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2025-08-13T19:56:27.577264902+00:00 stderr F I0813 19:56:27.577254 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2025-08-13T19:56:27.577437857+00:00 stderr F I0813 19:56:27.575918 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577667 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577720 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577732 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F 
I0813 19:56:27.577737 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577744 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577750 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577767 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-08-13T19:56:27.578450466+00:00 stderr F I0813 19:56:27.577775 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:27.578575999+00:00 stderr F I0813 19:56:27.578480 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.578575999+00:00 stderr F I0813 19:56:27.578544 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.578921419+00:00 stderr F I0813 19:56:27.575299 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2025-08-13T19:56:27.579011332+00:00 stderr F I0813 19:56:27.578991 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2025-08-13T19:56:27.579067203+00:00 stderr F I0813 19:56:27.579050 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T19:56:27.579356502+00:00 stderr F I0813 19:56:27.579333 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2025-08-13T19:56:27.580948257+00:00 stderr F I0813 19:56:27.579506 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.587549346+00:00 stderr F I0813 19:56:27.584463 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation 2025-08-13T19:56:27.591530269+00:00 stderr F I0813 19:56:27.590312 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T19:56:27.591530269+00:00 stderr F I0813 19:56:27.591221 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.591860269+00:00 stderr F I0813 19:56:27.591702 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.592230659+00:00 stderr F I0813 19:56:27.592106 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:27.592852457+00:00 stderr F I0813 19:56:27.592740 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.593769843+00:00 stderr F I0813 19:56:27.593747 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.594589167+00:00 stderr F I0813 19:56:27.594464 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.595849663+00:00 stderr F I0813 19:56:27.595733 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.596215333+00:00 stderr F I0813 19:56:27.596118 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.599428015+00:00 stderr F I0813 19:56:27.599166 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.599659351+00:00 stderr F I0813 19:56:27.599636 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.601773632+00:00 stderr F I0813 19:56:27.600107 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.602556664+00:00 stderr F I0813 19:56:27.602266 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.603653376+00:00 stderr F I0813 19:56:27.603261 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.605116727+00:00 stderr F I0813 19:56:27.605009 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.605654523+00:00 stderr F I0813 19:56:27.605606 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.606413 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.606613 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.607599 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.608162284+00:00 stderr F I0813 19:56:27.608138 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.609076180+00:00 stderr F I0813 19:56:27.609050 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist 2025-08-13T19:56:27.609481932+00:00 stderr F I0813 19:56:27.609420 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.612131368+00:00 stderr F I0813 19:56:27.612050 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T19:56:27.635883836+00:00 stderr F I0813 19:56:27.635730 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:56:27.635883836+00:00 stderr F I0813 19:56:27.635768 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:56:27.639028526+00:00 stderr F I0813 19:56:27.638976 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T19:56:27.639028526+00:00 stderr F I0813 19:56:27.639008 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T19:56:27.644852602+00:00 stderr F I0813 
19:56:27.644681 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger" 2025-08-13T19:56:27.644852602+00:00 stderr F I0813 19:56:27.644740 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger" 2025-08-13T19:56:27.662414693+00:00 stderr F I0813 19:56:27.661700 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T19:56:27.677311729+00:00 stderr F I0813 19:56:27.675885 1 log.go:245] /crc changed, triggering operconf reconciliation 2025-08-13T19:56:27.677694760+00:00 stderr F I0813 19:56:27.677612 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1 2025-08-13T19:56:27.677762442+00:00 stderr F I0813 19:56:27.677703 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1 2025-08-13T19:56:27.677977208+00:00 stderr F I0813 19:56:27.677866 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1 2025-08-13T19:56:27.678407880+00:00 stderr F I0813 19:56:27.678174 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.683757 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.683940 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684316 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684541 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684572 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684543 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684871 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1 2025-08-13T19:56:27.685271766+00:00 stderr F I0813 19:56:27.685248 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685387 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685465 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685503 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685712 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T19:56:27.698346949+00:00 stderr F I0813 19:56:27.698189 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.701887971+00:00 stderr F I0813 19:56:27.700366 1 reflector.go:351] Caches populated for *v1.Namespace from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.728995845+00:00 stderr F I0813 19:56:27.728910 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.729443867+00:00 stderr F I0813 19:56:27.729417 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2025-08-13T19:56:27.743084397+00:00 stderr F I0813 19:56:27.742986 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:56:27.743084397+00:00 stderr F I0813 19:56:27.743033 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:56:27.749590293+00:00 stderr F I0813 19:56:27.749370 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.751598850+00:00 stderr F I0813 19:56:27.749864 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2025-08-13T19:56:27.796282896+00:00 stderr F I0813 19:56:27.793146 1 log.go:245] "network-node-identity-cert" in "openshift-network-node-identity" requires a new target cert/key pair: already expired 2025-08-13T19:56:27.814100495+00:00 stderr F I0813 19:56:27.808184 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.814100495+00:00 stderr F I0813 19:56:27.808279 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838241 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838324 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838474 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838488 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:56:27.849897277+00:00 stderr F I0813 19:56:27.849302 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.849897277+00:00 stderr F I0813 19:56:27.849392 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-08-13T19:56:27.865566954+00:00 stderr F I0813 19:56:27.864192 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.865566954+00:00 stderr F I0813 19:56:27.864295 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2025-08-13T19:56:27.872170163+00:00 stderr F I0813 19:56:27.871893 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or 
trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.872170163+00:00 stderr F I0813 19:56:27.872007 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-08-13T19:56:27.915571142+00:00 stderr F I0813 19:56:27.915478 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:27.917293291+00:00 stderr F I0813 19:56:27.917181 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:27.917293291+00:00 stderr F I0813 19:56:27.917230 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:27.979374604+00:00 stderr F I0813 19:56:27.979044 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T19:56:27.979374604+00:00 stderr F I0813 19:56:27.979088 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T19:56:27.984852501+00:00 stderr F I0813 19:56:27.980469 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.002926967+00:00 stderr F I0813 19:56:28.002764 1 log.go:245] Updated Secret/network-node-identity-cert -n openshift-network-node-identity because it changed 2025-08-13T19:56:28.002926967+00:00 stderr F I0813 19:56:28.002871 1 log.go:245] successful reconciliation 2025-08-13T19:56:28.027777056+00:00 stderr F I0813 19:56:28.023051 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.027777056+00:00 stderr F I0813 19:56:28.025626 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.028953530+00:00 stderr F I0813 19:56:28.028774 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:28.029014962+00:00 stderr F I0813 19:56:28.028982 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-08-13T19:56:28.045195174+00:00 stderr F I0813 19:56:28.045140 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.045630416+00:00 stderr F I0813 19:56:28.045572 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "False" 2025-08-13T19:56:28.045630416+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "False" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Degraded 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.045630416+00:00 stderr F message: |- 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available 
(awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.045630416+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Progressing 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Available 2025-08-13T19:56:28.048961891+00:00 stderr F I0813 19:56:28.048120 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:28.078560756+00:00 stderr F I0813 19:56:28.078446 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:56:28.078560756+00:00 stderr F I0813 19:56:28.078494 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2025-08-13T19:56:28.109633104+00:00 stderr F I0813 19:56:28.109512 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:33Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "False" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Degraded 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "False" 2025-08-13T19:56:28.109633104+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.109633104+00:00 stderr F message: |- 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Progressing 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Available 2025-08-13T19:56:28.169379430+00:00 stderr F I0813 19:56:28.168772 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.171271324+00:00 stderr F I0813 19:56:28.171043 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "False" 2025-08-13T19:56:28.171271324+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.171271324+00:00 stderr F message: |- 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not 
making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.171271324+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" rollout is not making progress - last change 2024-06-27T13:34:19Z 2025-08-13T19:56:28.171271324+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Degraded 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.171271324+00:00 stderr F message: |- 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.171271324+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.171271324+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Progressing 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "True" 2025-08-13T19:56:28.171271324+00:00 stderr F type: Available 2025-08-13T19:56:28.182717181+00:00 stderr F I0813 19:56:28.182605 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.190129682+00:00 stderr F I0813 19:56:28.190008 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.215646811+00:00 stderr F I0813 19:56:28.215536 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.221927250+00:00 stderr F I0813 19:56:28.221689 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.221927250+00:00 stderr F message: |- 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 
2024-06-27T13:34:15Z 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.221927250+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" rollout is not making progress - last change 2024-06-27T13:34:19Z 2025-08-13T19:56:28.221927250+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Degraded 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.221927250+00:00 stderr F status: "False" 2025-08-13T19:56:28.221927250+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.221927250+00:00 stderr F message: |- 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.221927250+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.221927250+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.221927250+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Progressing 2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.221927250+00:00 stderr F status: "True" 2025-08-13T19:56:28.221927250+00:00 stderr F type: Available 2025-08-13T19:56:28.226875732+00:00 stderr F I0813 19:56:28.226700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.235752065+00:00 stderr F I0813 19:56:28.235649 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.251752272+00:00 stderr F I0813 19:56:28.251700 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:28.418311498+00:00 stderr F I0813 19:56:28.417090 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.418311498+00:00 stderr F I0813 19:56:28.417135 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.446930105+00:00 stderr F I0813 19:56:28.446671 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.446930105+00:00 stderr F I0813 19:56:28.446723 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.447300036+00:00 stderr F I0813 19:56:28.446986 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:28.447300036+00:00 stderr F I0813 19:56:28.447064 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-08-13T19:56:28.541006542+00:00 stderr F I0813 19:56:28.540908 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.541331291+00:00 stderr F I0813 19:56:28.541228 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.541331291+00:00 stderr F status: "False" 2025-08-13T19:56:28.541331291+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.541331291+00:00 stderr F message: |- 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.541331291+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Degraded 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.541331291+00:00 stderr F message: |- 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 
2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.541331291+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.541331291+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.541331291+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Progressing 2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.541331291+00:00 stderr F status: "True" 2025-08-13T19:56:28.541331291+00:00 stderr F type: Available 2025-08-13T19:56:28.819877075+00:00 stderr F I0813 19:56:28.819654 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:28.819877075+00:00 stderr F I0813 19:56:28.819756 1 log.go:245] Reconciling proxy 'cluster' 2025-08-13T19:56:28.822445798+00:00 stderr F I0813 19:56:28.822344 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-08-13T19:56:28.896447592+00:00 stderr F I0813 19:56:28.896354 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.896447592+00:00 stderr F message: |- 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:28.896447592+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Degraded 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.896447592+00:00 stderr F status: "False" 2025-08-13T19:56:28.896447592+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.896447592+00:00 stderr F message: |- 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet 
"/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.896447592+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.896447592+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.896447592+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Progressing 2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.896447592+00:00 stderr F status: "True" 2025-08-13T19:56:28.896447592+00:00 stderr F type: Available 2025-08-13T19:56:29.014293887+00:00 stderr F I0813 19:56:29.014225 1 log.go:245] Reconciling proxy 'cluster' complete 2025-08-13T19:56:29.259978901+00:00 stderr F I0813 19:56:29.259705 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle 2025-08-13T19:56:29.262194434+00:00 stderr F I0813 19:56:29.262113 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:29.568310526+00:00 stderr F I0813 19:56:29.568258 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T19:56:29.573930976+00:00 stderr F I0813 19:56:29.573686 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T19:56:29.629145853+00:00 stderr F I0813 19:56:29.629050 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T19:56:29.717455055+00:00 stderr F I0813 19:56:29.717192 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T19:56:29.717455055+00:00 stderr F I0813 19:56:29.717251 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T19:56:29.725126934+00:00 stderr F I0813 19:56:29.724986 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T19:56:29.727114510+00:00 stderr F I0813 19:56:29.727056 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T19:56:29.731300240+00:00 stderr F I0813 19:56:29.730746 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T19:56:29.733667117+00:00 stderr F I0813 19:56:29.732921 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T19:56:29.737509137+00:00 stderr F I0813 19:56:29.737396 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T19:56:29.737509137+00:00 stderr F I0813 19:56:29.737431 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: 
HyperShiftConfig:0xc000b25200 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T19:56:29.863906976+00:00 stderr F I0813 19:56:29.861509 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T19:56:29.869447885+00:00 stderr F I0813 19:56:29.868308 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:29.944962301+00:00 stderr F I0813 19:56:29.944068 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:29.963897282+00:00 stderr F I0813 19:56:29.962888 1 log.go:245] "ovn-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: already expired 2025-08-13T19:56:30.021370463+00:00 stderr F I0813 19:56:30.021290 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3 2025-08-13T19:56:30.021404664+00:00 stderr F I0813 19:56:30.021394 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true 2025-08-13T19:56:30.024986796+00:00 stderr F I0813 19:56:30.024932 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:30.024986796+00:00 stderr F W0813 19:56:30.024966 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:30.025103729+00:00 stderr F I0813 19:56:30.025059 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:30.025103729+00:00 stderr F W0813 19:56:30.025087 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:30.025313625+00:00 stderr F I0813 19:56:30.025270 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T19:56:30.257542217+00:00 stderr F I0813 19:56:30.257476 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:30.304158318+00:00 stderr F I0813 19:56:30.303717 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T19:56:30.304158318+00:00 stderr F I0813 19:56:30.303893 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T19:56:30.305343021+00:00 stderr F I0813 19:56:30.305227 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:30.407072026+00:00 stderr F I0813 19:56:30.406632 1 log.go:245] Updated Secret/ovn-cert -n openshift-ovn-kubernetes because it changed 2025-08-13T19:56:30.407072026+00:00 stderr F I0813 19:56:30.406702 1 log.go:245] successful reconciliation 2025-08-13T19:56:30.414902470+00:00 stderr F I0813 19:56:30.414745 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:30.433026738+00:00 stderr F I0813 19:56:30.432771 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update 
(operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:56:30.434142459+00:00 stderr F E0813 19:56:30.434050 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="3ac04967-f509-46f0-a407-9eb8ddb74597" 2025-08-13T19:56:30.440151681+00:00 stderr F I0813 19:56:30.439985 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T19:56:30.705619982+00:00 stderr F I0813 19:56:30.705474 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:30.705619982+00:00 stderr F status: "False" 2025-08-13T19:56:30.705619982+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:30.705619982+00:00 stderr F message: |- 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:30.705619982+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:30.705619982+00:00 stderr F status: "True" 2025-08-13T19:56:30.705619982+00:00 stderr F type: Degraded 2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:30.705619982+00:00 stderr F status: "True" 2025-08-13T19:56:30.705619982+00:00 stderr F type: Upgradeable 2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:30.705619982+00:00 stderr F message: |- 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:30.705619982+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:30.705619982+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators 
to become ready 2025-08-13T19:56:30.705619982+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:30.705619982+00:00 stderr F reason: Deploying 2025-08-13T19:56:30.705619982+00:00 stderr F status: "True" 2025-08-13T19:56:30.705619982+00:00 stderr F type: Progressing 2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:30.705619982+00:00 stderr F status: "True" 2025-08-13T19:56:30.705619982+00:00 stderr F type: Available 2025-08-13T19:56:30.706326832+00:00 stderr F I0813 19:56:30.706243 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:31.279067037+00:00 stderr F I0813 19:56:31.278972 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:31.279067037+00:00 stderr F message: |- 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:31.279067037+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:31.279067037+00:00 stderr F status: "True" 2025-08-13T19:56:31.279067037+00:00 stderr F type: Degraded 2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:31.279067037+00:00 stderr F status: "True" 2025-08-13T19:56:31.279067037+00:00 stderr F type: Upgradeable 2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:31.279067037+00:00 stderr F status: "False" 2025-08-13T19:56:31.279067037+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:31.279067037+00:00 stderr F message: |- 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.279067037+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.279067037+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:31.279067037+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 
2025-08-13T19:56:31.279067037+00:00 stderr F reason: Deploying 2025-08-13T19:56:31.279067037+00:00 stderr F status: "True" 2025-08-13T19:56:31.279067037+00:00 stderr F type: Progressing 2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:31.279067037+00:00 stderr F status: "True" 2025-08-13T19:56:31.279067037+00:00 stderr F type: Available 2025-08-13T19:56:31.307715565+00:00 stderr F I0813 19:56:31.307639 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:31.307715565+00:00 stderr F status: "False" 2025-08-13T19:56:31.307715565+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:31.307715565+00:00 stderr F message: |- 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:31.307715565+00:00 stderr F reason: RolloutHung 2025-08-13T19:56:31.307715565+00:00 stderr F status: "True" 2025-08-13T19:56:31.307715565+00:00 stderr F type: Degraded 2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:31.307715565+00:00 stderr F status: "True" 2025-08-13T19:56:31.307715565+00:00 stderr F type: Upgradeable 2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:31.307715565+00:00 stderr F message: |- 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.307715565+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.307715565+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:31.307715565+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:31.307715565+00:00 stderr F reason: Deploying 2025-08-13T19:56:31.307715565+00:00 stderr F status: "True" 2025-08-13T19:56:31.307715565+00:00 stderr F type: Progressing 2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:31.307715565+00:00 stderr F 
status: "True" 2025-08-13T19:56:31.307715565+00:00 stderr F type: Available 2025-08-13T19:56:31.308573909+00:00 stderr F I0813 19:56:31.308483 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:31.870004360+00:00 stderr F I0813 19:56:31.869375 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T19:56:31.871996847+00:00 stderr F I0813 19:56:31.871895 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T19:56:31.874093797+00:00 stderr F I0813 19:56:31.873954 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T19:56:31.874184540+00:00 stderr F I0813 19:56:31.874125 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003cc6380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T19:56:31.880532731+00:00 stderr F I0813 19:56:31.880410 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3 2025-08-13T19:56:31.880532731+00:00 stderr F I0813 19:56:31.880446 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true 2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886341 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:31.886454200+00:00 stderr F W0813 19:56:31.886377 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886385 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:31.886454200+00:00 stderr F W0813 19:56:31.886390 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886414 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T19:56:32.073586814+00:00 stderr F I0813 19:56:32.073229 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:32.073586814+00:00 stderr F message: |- 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:56:32.073586814+00:00 stderr F reason: RolloutHung 
2025-08-13T19:56:32.073586814+00:00 stderr F status: "True" 2025-08-13T19:56:32.073586814+00:00 stderr F type: Degraded 2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:32.073586814+00:00 stderr F status: "True" 2025-08-13T19:56:32.073586814+00:00 stderr F type: Upgradeable 2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:32.073586814+00:00 stderr F status: "False" 2025-08-13T19:56:32.073586814+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:32.073586814+00:00 stderr F message: |- 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:32.073586814+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:32.073586814+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:32.073586814+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:32.073586814+00:00 stderr F reason: Deploying 2025-08-13T19:56:32.073586814+00:00 stderr F status: "True" 2025-08-13T19:56:32.073586814+00:00 stderr F type: Progressing 2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:32.073586814+00:00 stderr F status: "True" 2025-08-13T19:56:32.073586814+00:00 stderr F type: Available 2025-08-13T19:56:32.215632320+00:00 stderr F I0813 19:56:32.215466 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:32.231190424+00:00 stderr F I0813 19:56:32.231059 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:56:32.231261466+00:00 stderr F E0813 19:56:32.231196 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="e3de9d8e-7ea9-4fab-b6a7-656f1b43f054" 2025-08-13T19:56:32.241853199+00:00 stderr F I0813 19:56:32.241740 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T19:56:32.461214022+00:00 stderr F I0813 19:56:32.460764 1 log.go:245] Reconciling configmap from 
openshift-machine-api/mao-trusted-ca 2025-08-13T19:56:32.463244990+00:00 stderr F I0813 19:56:32.463203 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:32.669238543+00:00 stderr F I0813 19:56:32.669132 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T19:56:32.674038810+00:00 stderr F I0813 19:56:32.673902 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T19:56:32.683527121+00:00 stderr F I0813 19:56:32.683422 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T19:56:32.699001412+00:00 stderr F I0813 19:56:32.698891 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T19:56:32.699001412+00:00 stderr F I0813 19:56:32.698936 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T19:56:32.710297945+00:00 stderr F I0813 19:56:32.710190 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T19:56:33.461166035+00:00 stderr F I0813 19:56:33.461019 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T19:56:33.466487867+00:00 stderr F I0813 19:56:33.466357 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:33.468734431+00:00 stderr F I0813 19:56:33.468632 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:33.563471056+00:00 stderr F I0813 19:56:33.563150 1 log.go:245] "signer-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: already expired 2025-08-13T19:56:33.664681716+00:00 stderr F I0813 19:56:33.664587 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T19:56:33.667218929+00:00 stderr F I0813 19:56:33.667162 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T19:56:33.670093191+00:00 stderr F I0813 19:56:33.670062 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T19:56:33.670153243+00:00 stderr F I0813 19:56:33.670127 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000abf900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T19:56:33.675535246+00:00 stderr F I0813 19:56:33.675435 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3 2025-08-13T19:56:33.675535246+00:00 stderr F I0813 19:56:33.675515 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true 2025-08-13T19:56:33.678138111+00:00 stderr F I0813 19:56:33.678080 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:33.678138111+00:00 stderr F W0813 19:56:33.678122 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have 
hung with 0/1 behind, force-continuing 2025-08-13T19:56:33.678138111+00:00 stderr F I0813 19:56:33.678131 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:33.678163041+00:00 stderr F W0813 19:56:33.678138 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:33.678172702+00:00 stderr F I0813 19:56:33.678163 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T19:56:33.702260569+00:00 stderr F I0813 19:56:33.702159 1 log.go:245] Updated Secret/signer-cert -n openshift-ovn-kubernetes because it changed 2025-08-13T19:56:33.702260569+00:00 stderr F I0813 19:56:33.702186 1 log.go:245] successful reconciliation 2025-08-13T19:56:33.859683725+00:00 stderr F I0813 19:56:33.859617 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle 2025-08-13T19:56:33.862468614+00:00 stderr F I0813 19:56:33.862444 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:34.017071259+00:00 stderr F I0813 19:56:34.017011 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:34.036715309+00:00 stderr F I0813 19:56:34.036622 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T19:56:34.036914765+00:00 stderr F I0813 19:56:34.036872 1 log.go:245] Starting render phase 2025-08-13T19:56:34.037773970+00:00 stderr F I0813 19:56:34.037655 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:34.068159047+00:00 stderr F I0813 19:56:34.068024 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106431 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106484 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106516 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T19:56:34.106600245+00:00 stderr F I0813 19:56:34.106554 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T19:56:34.218480220+00:00 stderr F I0813 19:56:34.218361 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 1 -> 1 2025-08-13T19:56:34.421917649+00:00 stderr F I0813 19:56:34.421720 1 node_identity.go:204] network-node-identity webhook will not be applied, the deployment/daemonset is not ready 2025-08-13T19:56:34.421917649+00:00 stderr F I0813 19:56:34.421856 1 node_identity.go:208] network-node-identity webhook will not be applied, if it already exists it won't be removed 2025-08-13T19:56:34.426493139+00:00 stderr F I0813 19:56:34.426420 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T19:56:34.461098758+00:00 stderr F I0813 19:56:34.460764 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca 2025-08-13T19:56:34.464623398+00:00 stderr F I0813 19:56:34.464156 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:34.862418207+00:00 stderr F I0813 19:56:34.862291 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle 2025-08-13T19:56:34.863104167+00:00 stderr F I0813 19:56:34.863025 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T19:56:34.864596619+00:00 stderr F I0813 19:56:34.864483 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:34.870510848+00:00 stderr F I0813 19:56:34.870429 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T19:56:34.870543839+00:00 stderr F I0813 19:56:34.870507 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T19:56:34.880596586+00:00 stderr F I0813 19:56:34.880486 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T19:56:34.880596586+00:00 stderr F I0813 19:56:34.880545 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T19:56:34.891750095+00:00 stderr F I0813 19:56:34.891663 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T19:56:34.892159396+00:00 stderr F I0813 19:56:34.892081 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T19:56:34.902209913+00:00 stderr F I0813 19:56:34.902054 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T19:56:34.902209913+00:00 stderr F I0813 19:56:34.902128 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T19:56:34.910589693+00:00 stderr F I0813 19:56:34.910560 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T19:56:34.910863740+00:00 stderr F I0813 19:56:34.910730 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T19:56:34.919874608+00:00 stderr F I0813 19:56:34.919635 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T19:56:34.919874608+00:00 stderr F I0813 19:56:34.919709 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T19:56:34.926683742+00:00 stderr F I0813 19:56:34.926177 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T19:56:34.927102754+00:00 stderr F I0813 19:56:34.927021 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T19:56:34.934686861+00:00 stderr F I0813 19:56:34.934559 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T19:56:34.934686861+00:00 stderr F I0813 19:56:34.934619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T19:56:34.940050964+00:00 stderr F I0813 19:56:34.939867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T19:56:34.940050964+00:00 stderr F I0813 19:56:34.939909 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T19:56:34.944758798+00:00 stderr F I0813 19:56:34.944607 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T19:56:34.944758798+00:00 stderr F I0813 19:56:34.944675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T19:56:35.060988547+00:00 stderr F I0813 19:56:35.060754 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle 2025-08-13T19:56:35.063339344+00:00 stderr F I0813 19:56:35.063294 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.069140020+00:00 stderr F I0813 19:56:35.068999 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T19:56:35.069140020+00:00 stderr F I0813 19:56:35.069061 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T19:56:35.269428489+00:00 stderr F I0813 19:56:35.269371 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2025-08-13T19:56:35.273852706+00:00 stderr F I0813 19:56:35.273718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T19:56:35.273878236+00:00 stderr F I0813 19:56:35.273861 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T19:56:35.274124703+00:00 stderr F I0813 19:56:35.274103 1 log.go:245] 
ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.463171062+00:00 stderr F I0813 19:56:35.463013 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T19:56:35.466748384+00:00 stderr F I0813 19:56:35.466671 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.473264810+00:00 stderr F I0813 19:56:35.473220 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T19:56:35.473337272+00:00 stderr F I0813 19:56:35.473284 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T19:56:35.661679340+00:00 stderr F I0813 19:56:35.661587 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2025-08-13T19:56:35.666281531+00:00 stderr F I0813 19:56:35.666181 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.670559483+00:00 stderr F I0813 19:56:35.669432 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T19:56:35.670559483+00:00 stderr F I0813 19:56:35.669496 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T19:56:35.863934805+00:00 stderr F I0813 19:56:35.863675 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle 2025-08-13T19:56:35.868876786+00:00 stderr F I0813 19:56:35.866928 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.877013529+00:00 stderr F I0813 19:56:35.873385 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T19:56:35.877013529+00:00 stderr F I0813 19:56:35.873492 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T19:56:36.114440068+00:00 stderr F I0813 19:56:36.107927 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle 2025-08-13T19:56:36.114440068+00:00 stderr F I0813 19:56:36.112344 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.115323574+00:00 stderr F I0813 19:56:36.115043 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T19:56:36.115475538+00:00 stderr F I0813 19:56:36.115455 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T19:56:36.260204401+00:00 stderr F I0813 19:56:36.260087 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-08-13T19:56:36.262636010+00:00 stderr F I0813 19:56:36.262575 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-08-13T19:56:36.262660061+00:00 stderr F I0813 19:56:36.262630 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262660061+00:00 stderr F I0813 19:56:36.262656 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262716893+00:00 stderr F I0813 19:56:36.262679 1 log.go:245] ConfigMap 
openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262721 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262743 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262765 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262859 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263059382+00:00 stderr F I0813 19:56:36.262890 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263074453+00:00 stderr F I0813 19:56:36.263057 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263221087+00:00 stderr F I0813 19:56:36.263138 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263367011+00:00 stderr F I0813 19:56:36.263260 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263367011+00:00 stderr F I0813 19:56:36.263308 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.270019731+00:00 stderr F I0813 19:56:36.269948 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T19:56:36.270156515+00:00 stderr F I0813 19:56:36.270141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T19:56:36.468757485+00:00 stderr F I0813 19:56:36.468668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T19:56:36.468907739+00:00 stderr F I0813 19:56:36.468753 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T19:56:36.669737104+00:00 stderr F I0813 19:56:36.669618 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T19:56:36.669737104+00:00 stderr F I0813 19:56:36.669696 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T19:56:36.870671062+00:00 stderr F I0813 19:56:36.870397 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T19:56:36.870671062+00:00 stderr F I0813 19:56:36.870476 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T19:56:37.070240691+00:00 stderr F I0813 19:56:37.070104 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T19:56:37.070585640+00:00 stderr F I0813 19:56:37.070449 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T19:56:37.282758169+00:00 stderr F I0813 19:56:37.282586 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T19:56:37.282758169+00:00 stderr F I0813 19:56:37.282706 1 log.go:245] reconciling (apps/v1, 
Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T19:56:37.486565189+00:00 stderr F I0813 19:56:37.486446 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T19:56:37.486565189+00:00 stderr F I0813 19:56:37.486513 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T19:56:37.671652084+00:00 stderr F I0813 19:56:37.671551 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T19:56:37.671652084+00:00 stderr F I0813 19:56:37.671620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T19:56:37.870397229+00:00 stderr F I0813 19:56:37.870274 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T19:56:37.870397229+00:00 stderr F I0813 19:56:37.870350 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T19:56:38.070926605+00:00 stderr F I0813 19:56:38.070534 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T19:56:38.070926605+00:00 stderr F I0813 19:56:38.070592 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T19:56:38.280464439+00:00 stderr F I0813 19:56:38.280129 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T19:56:38.280464439+00:00 stderr F I0813 19:56:38.280180 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T19:56:38.473032357+00:00 stderr F I0813 19:56:38.472924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T19:56:38.473032357+00:00 stderr F I0813 19:56:38.473016 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T19:56:38.672108052+00:00 stderr F I0813 19:56:38.671995 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T19:56:38.672211555+00:00 stderr F I0813 19:56:38.672197 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T19:56:38.870135517+00:00 stderr F I0813 19:56:38.869996 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:38.870135517+00:00 stderr F I0813 19:56:38.870081 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T19:56:39.069669964+00:00 stderr F I0813 19:56:39.069522 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:39.069669964+00:00 stderr F I0813 19:56:39.069628 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T19:56:39.270930261+00:00 stderr F I0813 19:56:39.270701 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T19:56:39.270930261+00:00 stderr F I0813 19:56:39.270901 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 
2025-08-13T19:56:39.469740118+00:00 stderr F I0813 19:56:39.469665 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T19:56:39.469952214+00:00 stderr F I0813 19:56:39.469936 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T19:56:39.669101311+00:00 stderr F I0813 19:56:39.668988 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T19:56:39.669101311+00:00 stderr F I0813 19:56:39.669087 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T19:56:39.869489454+00:00 stderr F I0813 19:56:39.869368 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T19:56:39.869489454+00:00 stderr F I0813 19:56:39.869457 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T19:56:40.080668163+00:00 stderr F I0813 19:56:40.079891 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T19:56:40.080668163+00:00 stderr F I0813 19:56:40.079990 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T19:56:40.274378724+00:00 stderr F I0813 19:56:40.274266 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T19:56:40.274378724+00:00 stderr F I0813 19:56:40.274335 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T19:56:40.470202326+00:00 stderr F I0813 19:56:40.470092 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T19:56:40.470202326+00:00 stderr F I0813 19:56:40.470165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T19:56:40.669492477+00:00 stderr F I0813 19:56:40.669386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:40.669492477+00:00 stderr F I0813 19:56:40.669450 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T19:56:40.872887145+00:00 stderr F I0813 19:56:40.871054 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:40.872887145+00:00 stderr F I0813 19:56:40.871135 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T19:56:41.072220557+00:00 stderr F I0813 19:56:41.072099 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T19:56:41.072279518+00:00 stderr F I0813 19:56:41.072217 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T19:56:41.271660712+00:00 stderr F I0813 19:56:41.271529 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T19:56:41.271660712+00:00 stderr F I0813 
19:56:41.271631 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T19:56:41.477445898+00:00 stderr F I0813 19:56:41.477261 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T19:56:41.477445898+00:00 stderr F I0813 19:56:41.477350 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T19:56:41.689719429+00:00 stderr F I0813 19:56:41.689618 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T19:56:41.689719429+00:00 stderr F I0813 19:56:41.689703 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T19:56:41.875882805+00:00 stderr F I0813 19:56:41.875677 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T19:56:41.875882805+00:00 stderr F I0813 19:56:41.875862 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T19:56:42.083596536+00:00 stderr F I0813 19:56:42.083236 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T19:56:42.083596536+00:00 stderr F I0813 19:56:42.083547 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T19:56:42.289346682+00:00 stderr F I0813 19:56:42.289193 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T19:56:42.289346682+00:00 stderr F I0813 19:56:42.289326 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T19:56:42.505345469+00:00 stderr F I0813 19:56:42.505191 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T19:56:42.505345469+00:00 stderr F I0813 19:56:42.505331 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T19:56:42.726707130+00:00 stderr F I0813 19:56:42.726650 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T19:56:42.726974988+00:00 stderr F I0813 19:56:42.726959 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T19:56:42.869456647+00:00 stderr F I0813 19:56:42.869380 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T19:56:42.869582880+00:00 stderr F I0813 19:56:42.869565 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T19:56:43.070341413+00:00 stderr F I0813 19:56:43.070201 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T19:56:43.070341413+00:00 stderr F I0813 19:56:43.070278 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T19:56:43.270219901+00:00 stderr F I0813 19:56:43.270102 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T19:56:43.270219901+00:00 stderr F I0813 19:56:43.270191 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T19:56:43.473703151+00:00 stderr F I0813 19:56:43.473570 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T19:56:43.473703151+00:00 stderr F I0813 19:56:43.473654 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T19:56:43.671879419+00:00 stderr F I0813 19:56:43.671643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T19:56:43.671879419+00:00 stderr F I0813 19:56:43.671717 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T19:56:43.870044978+00:00 stderr F I0813 19:56:43.869963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T19:56:43.870044978+00:00 stderr F I0813 19:56:43.870036 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T19:56:44.071046257+00:00 stderr F I0813 19:56:44.070938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T19:56:44.071046257+00:00 stderr F I0813 19:56:44.071025 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T19:56:44.270071800+00:00 stderr F I0813 19:56:44.269958 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T19:56:44.270071800+00:00 stderr F I0813 19:56:44.270023 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T19:56:44.470025050+00:00 stderr F I0813 19:56:44.469913 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T19:56:44.470025050+00:00 stderr F I0813 19:56:44.469980 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:44.669212908+00:00 stderr F I0813 19:56:44.669080 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:44.669212908+00:00 stderr F I0813 19:56:44.669163 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:44.868872549+00:00 stderr F I0813 19:56:44.868710 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:44.868993452+00:00 stderr F I0813 19:56:44.868978 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:45.069440356+00:00 stderr F I0813 19:56:45.069285 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:45.069440356+00:00 stderr F I0813 19:56:45.069366 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T19:56:45.274258124+00:00 stderr F I0813 19:56:45.273359 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T19:56:45.274258124+00:00 stderr F I0813 19:56:45.273617 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T19:56:45.474058959+00:00 stderr F I0813 19:56:45.473945 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T19:56:45.474058959+00:00 stderr F I0813 19:56:45.474024 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T19:56:45.672082224+00:00 stderr F I0813 19:56:45.671497 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T19:56:45.672082224+00:00 stderr F I0813 19:56:45.671605 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T19:56:45.872153397+00:00 stderr F I0813 19:56:45.872041 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T19:56:45.872153397+00:00 stderr F I0813 19:56:45.872110 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T19:56:46.071185660+00:00 stderr F I0813 19:56:46.071082 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T19:56:46.071185660+00:00 stderr F I0813 19:56:46.071169 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T19:56:46.272108338+00:00 stderr F I0813 19:56:46.271639 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T19:56:46.272108338+00:00 stderr F I0813 19:56:46.271731 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T19:56:46.475244508+00:00 stderr F I0813 19:56:46.475168 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T19:56:46.475364051+00:00 stderr F I0813 19:56:46.475347 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T19:56:46.672464330+00:00 stderr F I0813 19:56:46.672407 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T19:56:46.672563062+00:00 stderr F I0813 19:56:46.672549 1 log.go:245] reconciling (/v1, Kind=ConfigMap) 
openshift-config-managed/openshift-network-features 2025-08-13T19:56:46.870759002+00:00 stderr F I0813 19:56:46.870673 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T19:56:46.871024059+00:00 stderr F I0813 19:56:46.871000 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T19:56:47.070729842+00:00 stderr F I0813 19:56:47.070569 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T19:56:47.070769013+00:00 stderr F I0813 19:56:47.070732 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T19:56:47.272746319+00:00 stderr F I0813 19:56:47.272608 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T19:56:47.272746319+00:00 stderr F I0813 19:56:47.272689 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T19:56:47.472268006+00:00 stderr F I0813 19:56:47.472172 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T19:56:47.472268006+00:00 stderr F I0813 19:56:47.472243 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T19:56:47.669651062+00:00 stderr F I0813 19:56:47.669352 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T19:56:47.669651062+00:00 stderr F I0813 19:56:47.669530 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T19:56:47.869551281+00:00 stderr F I0813 19:56:47.869456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T19:56:47.869551281+00:00 stderr F I0813 19:56:47.869518 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T19:56:48.070491658+00:00 stderr F I0813 19:56:48.070388 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T19:56:48.070491658+00:00 stderr F I0813 19:56:48.070460 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T19:56:48.270906480+00:00 stderr F I0813 19:56:48.270569 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T19:56:48.270906480+00:00 stderr F I0813 19:56:48.270648 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T19:56:48.471754555+00:00 stderr F I0813 19:56:48.471633 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T19:56:48.471754555+00:00 stderr F I0813 19:56:48.471707 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T19:56:48.679738444+00:00 stderr F I0813 19:56:48.679667 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T19:56:48.680020982+00:00 
stderr F I0813 19:56:48.679998 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T19:56:48.927119287+00:00 stderr F I0813 19:56:48.919690 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T19:56:48.927631082+00:00 stderr F I0813 19:56:48.927557 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T19:56:49.053428524+00:00 stderr F I0813 19:56:49.053356 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.067288300+00:00 stderr F I0813 19:56:49.067134 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.069858223+00:00 stderr F I0813 19:56:49.069697 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T19:56:49.070064309+00:00 stderr F I0813 19:56:49.069941 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T19:56:49.075114563+00:00 stderr F I0813 19:56:49.075070 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.091433090+00:00 stderr F I0813 19:56:49.091319 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.098720678+00:00 stderr F I0813 19:56:49.098614 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.108031803+00:00 stderr F I0813 19:56:49.106716 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:56:49.269661349+00:00 stderr F I0813 19:56:49.269561 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T19:56:49.269661349+00:00 stderr F I0813 19:56:49.269630 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T19:56:49.471665687+00:00 stderr F I0813 19:56:49.471548 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T19:56:49.471665687+00:00 stderr F I0813 19:56:49.471642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T19:56:49.669349102+00:00 stderr F I0813 19:56:49.669279 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T19:56:49.669378623+00:00 stderr F I0813 19:56:49.669344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T19:56:49.871717901+00:00 stderr F I0813 19:56:49.871571 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T19:56:49.871717901+00:00 stderr F I0813 19:56:49.871636 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T19:56:50.069382135+00:00 stderr F I0813 19:56:50.069261 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T19:56:50.069382135+00:00 stderr F I0813 19:56:50.069333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T19:56:50.294058591+00:00 stderr F I0813 19:56:50.293963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T19:56:50.294058591+00:00 stderr F I0813 19:56:50.294035 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T19:56:50.474174344+00:00 stderr F I0813 19:56:50.474124 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T19:56:50.474286327+00:00 stderr F I0813 19:56:50.474266 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T19:56:50.678359292+00:00 stderr F I0813 19:56:50.678236 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T19:56:50.678404604+00:00 stderr F I0813 19:56:50.678369 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T19:56:50.876748957+00:00 stderr F I0813 19:56:50.876645 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T19:56:50.876748957+00:00 stderr F I0813 19:56:50.876715 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T19:56:51.084106038+00:00 stderr F I0813 19:56:51.083989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T19:56:51.084106038+00:00 stderr F I0813 19:56:51.084060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T19:56:51.268857194+00:00 stderr F I0813 19:56:51.268668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T19:56:51.268857194+00:00 stderr F I0813 19:56:51.268735 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T19:56:51.476582676+00:00 stderr F I0813 19:56:51.476441 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T19:56:51.476582676+00:00 stderr F I0813 19:56:51.476509 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T19:56:51.670874374+00:00 stderr F I0813 19:56:51.670615 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T19:56:51.670874374+00:00 stderr F I0813 19:56:51.670759 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T19:56:51.869480295+00:00 stderr F I0813 19:56:51.869386 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T19:56:51.869480295+00:00 stderr F I0813 19:56:51.869456 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T19:56:52.072013618+00:00 stderr F I0813 19:56:52.071894 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T19:56:52.072013618+00:00 stderr F I0813 19:56:52.071968 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T19:56:52.272030859+00:00 stderr F I0813 19:56:52.271918 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T19:56:52.272030859+00:00 stderr F I0813 19:56:52.271999 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T19:56:52.469568890+00:00 stderr F I0813 19:56:52.469439 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T19:56:52.469568890+00:00 stderr F I0813 19:56:52.469506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T19:56:52.669634632+00:00 stderr F I0813 19:56:52.669507 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T19:56:52.669634632+00:00 stderr F I0813 19:56:52.669602 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T19:56:52.870676873+00:00 stderr F I0813 19:56:52.870269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T19:56:52.870676873+00:00 stderr F I0813 19:56:52.870349 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T19:56:53.071203919+00:00 stderr F I0813 19:56:53.070908 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T19:56:53.071203919+00:00 stderr F I0813 19:56:53.070992 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T19:56:53.272353583+00:00 stderr F I0813 19:56:53.272206 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T19:56:53.272353583+00:00 stderr F I0813 19:56:53.272315 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T19:56:53.470096510+00:00 stderr F I0813 19:56:53.469939 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T19:56:53.470096510+00:00 stderr F I0813 19:56:53.470021 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T19:56:53.670002818+00:00 stderr F I0813 19:56:53.669895 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T19:56:53.670002818+00:00 stderr F I0813 19:56:53.669958 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T19:56:53.853625041+00:00 stderr F E0813 19:56:53.853468 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:56:53.872491880+00:00 stderr F I0813 19:56:53.872351 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T19:56:53.872491880+00:00 stderr F I0813 19:56:53.872427 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T19:56:53.872585833+00:00 stderr F I0813 19:56:53.872497 1 log.go:245] Object (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io has create-wait annotation, skipping apply. 2025-08-13T19:56:53.872585833+00:00 stderr F I0813 19:56:53.872529 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T19:56:54.079313836+00:00 stderr F I0813 19:56:54.079125 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T19:56:54.079313836+00:00 stderr F I0813 19:56:54.079196 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T19:56:54.274740046+00:00 stderr F I0813 19:56:54.274540 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T19:56:54.274740046+00:00 stderr F I0813 19:56:54.274617 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T19:56:54.474113979+00:00 stderr F I0813 19:56:54.473973 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T19:56:54.474113979+00:00 stderr F I0813 19:56:54.474062 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T19:56:54.670616190+00:00 stderr F I0813 19:56:54.670498 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T19:56:54.670616190+00:00 stderr F I0813 19:56:54.670566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T19:56:54.871342992+00:00 stderr F I0813 19:56:54.871161 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T19:56:54.871342992+00:00 stderr F I0813 19:56:54.871256 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T19:56:55.073747421+00:00 stderr F I0813 19:56:55.073604 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T19:56:55.073747421+00:00 stderr F I0813 19:56:55.073706 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) 
openshift-network-operator/iptables-alerter 2025-08-13T19:56:55.282257015+00:00 stderr F I0813 19:56:55.281910 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T19:56:55.309945396+00:00 stderr F I0813 19:56:55.309311 1 log.go:245] Operconfig Controller complete 2025-08-13T19:57:16.034315282+00:00 stderr F I0813 19:57:16.034152 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.042003061+00:00 stderr F I0813 19:57:16.041865 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.070489014+00:00 stderr F I0813 19:57:16.070373 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.088582771+00:00 stderr F I0813 19:57:16.088455 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.192429817+00:00 stderr F I0813 19:57:16.192311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.202311909+00:00 stderr F I0813 19:57:16.202193 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.225711577+00:00 stderr F I0813 19:57:16.225656 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.245669617+00:00 stderr F I0813 19:57:16.245515 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.254938681+00:00 stderr F I0813 19:57:16.254894 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.281678735+00:00 stderr F I0813 19:57:16.281563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.339818425+00:00 stderr F I0813 19:57:16.339719 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.431200795+00:00 stderr F I0813 19:57:16.431143 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:16.761302101+00:00 stderr F I0813 19:57:16.761186 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:57:16.761302101+00:00 stderr F I0813 19:57:16.761234 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:57:16.802314282+00:00 stderr F I0813 19:57:16.800418 1 pod_watcher.go:131] Operand /, Kind= 
openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:57:16.802314282+00:00 stderr F I0813 19:57:16.800463 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:57:16.808145299+00:00 stderr F I0813 19:57:16.808077 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:57:16.808291143+00:00 stderr F I0813 19:57:16.808189 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:57:16.969932258+00:00 stderr F I0813 19:57:16.969745 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:16.969932258+00:00 stderr F status: "False" 2025-08-13T19:57:16.969932258+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:16.969932258+00:00 stderr F message: |- 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:16.969932258+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:16.969932258+00:00 stderr F status: "True" 2025-08-13T19:57:16.969932258+00:00 stderr F type: Degraded 2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:16.969932258+00:00 stderr F status: "True" 2025-08-13T19:57:16.969932258+00:00 stderr F type: Upgradeable 2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:16.969932258+00:00 stderr F message: |- 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:16.969932258+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:16.969932258+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:16.969932258+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:16.969932258+00:00 stderr F reason: Deploying 2025-08-13T19:57:16.969932258+00:00 stderr F status: "True" 2025-08-13T19:57:16.969932258+00:00 stderr F type: Progressing 2025-08-13T19:57:16.969932258+00:00 
stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:16.969932258+00:00 stderr F status: "True" 2025-08-13T19:57:16.969932258+00:00 stderr F type: Available 2025-08-13T19:57:16.972714958+00:00 stderr F I0813 19:57:16.972639 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.003603950+00:00 stderr F I0813 19:57:17.003501 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.003603950+00:00 stderr F message: |- 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.003603950+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.003603950+00:00 stderr F status: "True" 2025-08-13T19:57:17.003603950+00:00 stderr F type: Degraded 2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.003603950+00:00 stderr F status: "True" 2025-08-13T19:57:17.003603950+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.003603950+00:00 stderr F status: "False" 2025-08-13T19:57:17.003603950+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.003603950+00:00 stderr F message: |- 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.003603950+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.003603950+00:00 stderr F status: "True" 2025-08-13T19:57:17.003603950+00:00 stderr F type: Progressing 2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.003603950+00:00 stderr F status: "True" 2025-08-13T19:57:17.003603950+00:00 stderr F type: Available 2025-08-13T19:57:17.027331517+00:00 stderr F I0813 19:57:17.025409 1 log.go:245] Network operator config updated with conditions: 
2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.027331517+00:00 stderr F status: "False" 2025-08-13T19:57:17.027331517+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.027331517+00:00 stderr F message: |- 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.027331517+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Degraded 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.027331517+00:00 stderr F message: |- 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.027331517+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Progressing 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Available 2025-08-13T19:57:17.027331517+00:00 stderr F I0813 19:57:17.025657 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.064305253+00:00 stderr F I0813 19:57:17.064072 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.064305253+00:00 stderr F message: |- 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 
2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.064305253+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Degraded 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.064305253+00:00 stderr F status: "False" 2025-08-13T19:57:17.064305253+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.064305253+00:00 stderr F message: |- 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.064305253+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Progressing 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Available 2025-08-13T19:57:17.088951607+00:00 stderr F I0813 19:57:17.088886 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:57:17.089022279+00:00 stderr F I0813 19:57:17.089009 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:57:17.201901152+00:00 stderr F I0813 19:57:17.201844 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.201901152+00:00 stderr F status: "False" 2025-08-13T19:57:17.201901152+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.201901152+00:00 stderr F message: |- 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 
2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.201901152+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Degraded 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.201901152+00:00 stderr F message: |- 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.201901152+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.201901152+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.201901152+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Progressing 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Available 2025-08-13T19:57:17.202354635+00:00 stderr F I0813 19:57:17.202188 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.223129958+00:00 stderr F I0813 19:57:17.222995 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T19:57:17.232103015+00:00 stderr F I0813 19:57:17.232074 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.245811336+00:00 stderr F I0813 19:57:17.245701 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.245811336+00:00 stderr F message: |- 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.245811336+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Degraded 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: 
"2024-06-26T12:45:34Z" 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.245811336+00:00 stderr F status: "False" 2025-08-13T19:57:17.245811336+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.245811336+00:00 stderr F message: |- 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.245811336+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.245811336+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.245811336+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Progressing 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Available 2025-08-13T19:57:17.249137321+00:00 stderr F I0813 19:57:17.249110 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.259308692+00:00 stderr F I0813 19:57:17.259190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.270118710+00:00 stderr F I0813 19:57:17.269988 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.274218327+00:00 stderr F I0813 19:57:17.274162 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.275952317+00:00 stderr F I0813 19:57:17.274720 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.275952317+00:00 stderr F status: "False" 2025-08-13T19:57:17.275952317+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.275952317+00:00 stderr F message: |- 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 
2024-06-27T13:34:15Z 2025-08-13T19:57:17.275952317+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Degraded 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.275952317+00:00 stderr F message: |- 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.275952317+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.275952317+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.275952317+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Progressing 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Available 2025-08-13T19:57:17.438910890+00:00 stderr F I0813 19:57:17.438515 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.591118266+00:00 stderr F I0813 19:57:17.587982 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.591118266+00:00 stderr F message: |- 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.591118266+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Degraded 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.591118266+00:00 stderr F status: "False" 2025-08-13T19:57:17.591118266+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.591118266+00:00 stderr F 
message: |- 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.591118266+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.591118266+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.591118266+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Progressing 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Available 2025-08-13T19:57:17.632087666+00:00 stderr F I0813 19:57:17.631991 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.642120213+00:00 stderr F I0813 19:57:17.641978 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T19:57:17.831496050+00:00 stderr F I0813 19:57:17.831406 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.036197856+00:00 stderr F I0813 19:57:18.035861 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.133008850+00:00 stderr F I0813 19:57:18.132935 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:57:18.133102643+00:00 stderr F I0813 19:57:18.133088 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:57:18.196054480+00:00 stderr F I0813 19:57:18.195709 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.196054480+00:00 stderr F status: "False" 2025-08-13T19:57:18.196054480+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.196054480+00:00 stderr F message: |- 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.196054480+00:00 stderr F reason: RolloutHung 
2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Degraded 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.196054480+00:00 stderr F message: |- 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.196054480+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.196054480+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.196054480+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Progressing 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Available 2025-08-13T19:57:18.202886366+00:00 stderr F I0813 19:57:18.202753 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:18.236058863+00:00 stderr F I0813 19:57:18.235671 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.430881896+00:00 stderr F I0813 19:57:18.430665 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.579966113+00:00 stderr F I0813 19:57:18.579881 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.579966113+00:00 stderr F message: |- 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.579966113+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Degraded 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: 
"2024-06-26T12:45:34Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "False" 2025-08-13T19:57:18.579966113+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.579966113+00:00 stderr F message: |- 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Progressing 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Available 2025-08-13T19:57:18.603711061+00:00 stderr F I0813 19:57:18.603625 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "False" 2025-08-13T19:57:18.603711061+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.603711061+00:00 stderr F message: |- 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.603711061+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Degraded 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.603711061+00:00 stderr F message: |- 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.603711061+00:00 stderr F 
DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.603711061+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Progressing 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Available 2025-08-13T19:57:18.604328479+00:00 stderr F I0813 19:57:18.604264 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:18.630157446+00:00 stderr F I0813 19:57:18.630055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.831180777+00:00 stderr F I0813 19:57:18.831084 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.982905299+00:00 stderr F I0813 19:57:18.982710 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.982905299+00:00 stderr F message: |- 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.982905299+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Degraded 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.982905299+00:00 stderr F status: "False" 2025-08-13T19:57:18.982905299+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.982905299+00:00 stderr F message: |- 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet 
"/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.982905299+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.982905299+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.982905299+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Progressing 2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.982905299+00:00 stderr F status: "True" 2025-08-13T19:57:18.982905299+00:00 stderr F type: Available 2025-08-13T19:57:19.031076985+00:00 stderr F I0813 19:57:19.030963 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:19.674537108+00:00 stderr F I0813 19:57:19.674439 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.674537108+00:00 stderr F status: "False" 2025-08-13T19:57:19.674537108+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:19.674537108+00:00 stderr F message: |- 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:19.674537108+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Degraded 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Upgradeable 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:19.674537108+00:00 stderr F message: |- 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:19.674537108+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 
2025-08-13T19:57:19.674537108+00:00 stderr F reason: Deploying 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Progressing 2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:19.674537108+00:00 stderr F status: "True" 2025-08-13T19:57:19.674537108+00:00 stderr F type: Available 2025-08-13T19:57:19.675118124+00:00 stderr F I0813 19:57:19.675020 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:19.984504589+00:00 stderr F I0813 19:57:19.984386 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:19.984504589+00:00 stderr F message: |- 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:19.984504589+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Degraded 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Upgradeable 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:19.984504589+00:00 stderr F status: "False" 2025-08-13T19:57:19.984504589+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:19.984504589+00:00 stderr F message: |- 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:19.984504589+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:19.984504589+00:00 stderr F reason: Deploying 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Progressing 2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:19.984504589+00:00 stderr F status: "True" 2025-08-13T19:57:19.984504589+00:00 stderr F type: Available 2025-08-13T19:57:20.007128255+00:00 stderr F I0813 19:57:20.006991 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: 
"2024-06-26T12:45:34Z" 2025-08-13T19:57:20.007128255+00:00 stderr F status: "False" 2025-08-13T19:57:20.007128255+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:20.007128255+00:00 stderr F message: |- 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:20.007128255+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Degraded 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Upgradeable 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:20.007128255+00:00 stderr F message: |- 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:20.007128255+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:20.007128255+00:00 stderr F reason: Deploying 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Progressing 2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:20.007128255+00:00 stderr F status: "True" 2025-08-13T19:57:20.007128255+00:00 stderr F type: Available 2025-08-13T19:57:20.007128255+00:00 stderr F I0813 19:57:20.007109 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:20.384928903+00:00 stderr F I0813 19:57:20.384687 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:20.384928903+00:00 stderr F message: |- 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:20.384928903+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Degraded 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Upgradeable 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:20.384928903+00:00 stderr F status: "False" 2025-08-13T19:57:20.384928903+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:20.384928903+00:00 stderr F message: |- 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:20.384928903+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:20.384928903+00:00 stderr F reason: Deploying 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Progressing 2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:20.384928903+00:00 stderr F status: "True" 2025-08-13T19:57:20.384928903+00:00 stderr F type: Available 2025-08-13T19:57:27.646199035+00:00 stderr F E0813 19:57:27.645984 1 allowlist_controller.go:142] Failed to verify ready status on allowlist daemonset pods: client rate limiter Wait returned an error: context deadline exceeded 2025-08-13T19:57:36.833565267+00:00 stderr F I0813 19:57:36.829316 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:36.834918476+00:00 stderr F I0813 19:57:36.834168 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:36.943902088+00:00 stderr F I0813 19:57:36.943626 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:36.943902088+00:00 stderr F I0813 19:57:36.943743 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:57:37.094172179+00:00 stderr F I0813 19:57:37.088743 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:37.094172179+00:00 stderr F I0813 19:57:37.090964 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.094172179+00:00 stderr F status: "False" 2025-08-13T19:57:37.094172179+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.094172179+00:00 stderr F message: |- 
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:37.094172179+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Degraded 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.094172179+00:00 stderr F message: |- 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.094172179+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.094172179+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Progressing 2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.094172179+00:00 stderr F status: "True" 2025-08-13T19:57:37.094172179+00:00 stderr F type: Available 2025-08-13T19:57:37.165531076+00:00 stderr F I0813 19:57:37.165462 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.165531076+00:00 stderr F message: |- 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:37.165531076+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Degraded 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.165531076+00:00 stderr F - 
lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.165531076+00:00 stderr F status: "False" 2025-08-13T19:57:37.165531076+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.165531076+00:00 stderr F message: |- 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.165531076+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.165531076+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Progressing 2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.165531076+00:00 stderr F status: "True" 2025-08-13T19:57:37.165531076+00:00 stderr F type: Available 2025-08-13T19:57:37.199699932+00:00 stderr F I0813 19:57:37.199578 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:37.200026730+00:00 stderr F I0813 19:57:37.199884 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.200026730+00:00 stderr F status: "False" 2025-08-13T19:57:37.200026730+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.200026730+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making 2025-08-13T19:57:37.200026730+00:00 stderr F progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.200026730+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Degraded 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.200026730+00:00 stderr F message: |- 2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.200026730+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.200026730+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.200026730+00:00 stderr F status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Progressing 2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.200026730+00:00 stderr F 
status: "True" 2025-08-13T19:57:37.200026730+00:00 stderr F type: Available 2025-08-13T19:57:37.267659872+00:00 stderr F I0813 19:57:37.267557 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:37.267659872+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making 2025-08-13T19:57:37.267659872+00:00 stderr F progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:37.267659872+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:37.267659872+00:00 stderr F status: "True" 2025-08-13T19:57:37.267659872+00:00 stderr F type: Degraded 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.267659872+00:00 stderr F status: "True" 2025-08-13T19:57:37.267659872+00:00 stderr F type: Upgradeable 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:37.267659872+00:00 stderr F status: "False" 2025-08-13T19:57:37.267659872+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:37.267659872+00:00 stderr F message: |- 2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:37.267659872+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:37.267659872+00:00 stderr F reason: Deploying 2025-08-13T19:57:37.267659872+00:00 stderr F status: "True" 2025-08-13T19:57:37.267659872+00:00 stderr F type: Progressing 2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:37.267659872+00:00 stderr F status: "True" 2025-08-13T19:57:37.267659872+00:00 stderr F type: Available 2025-08-13T19:57:40.331406117+00:00 stderr F I0813 19:57:40.331298 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:57:40.331406117+00:00 stderr F I0813 19:57:40.331339 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:57:40.363359359+00:00 stderr F I0813 19:57:40.362143 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:57:40.363359359+00:00 stderr F I0813 19:57:40.362189 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:57:40.422123417+00:00 stderr F I0813 19:57:40.421990 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:40.422558630+00:00 stderr F I0813 19:57:40.422454 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.422558630+00:00 stderr F status: "False" 2025-08-13T19:57:40.422558630+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 
2025-08-13T19:57:40.422558630+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making 2025-08-13T19:57:40.422558630+00:00 stderr F progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:40.422558630+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:40.422558630+00:00 stderr F status: "True" 2025-08-13T19:57:40.422558630+00:00 stderr F type: Degraded 2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.422558630+00:00 stderr F status: "True" 2025-08-13T19:57:40.422558630+00:00 stderr F type: Upgradeable 2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:40.422558630+00:00 stderr F message: |- 2025-08-13T19:57:40.422558630+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:40.422558630+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:40.422558630+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:40.422558630+00:00 stderr F reason: Deploying 2025-08-13T19:57:40.422558630+00:00 stderr F status: "True" 2025-08-13T19:57:40.422558630+00:00 stderr F type: Progressing 2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:40.422558630+00:00 stderr F status: "True" 2025-08-13T19:57:40.422558630+00:00 stderr F type: Available 2025-08-13T19:57:40.451011842+00:00 stderr F I0813 19:57:40.450905 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:40.451011842+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making 2025-08-13T19:57:40.451011842+00:00 stderr F progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:40.451011842+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:40.451011842+00:00 stderr F status: "True" 2025-08-13T19:57:40.451011842+00:00 stderr F type: Degraded 2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.451011842+00:00 stderr F status: "True" 2025-08-13T19:57:40.451011842+00:00 stderr F type: Upgradeable 2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.451011842+00:00 stderr F status: "False" 2025-08-13T19:57:40.451011842+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:40.451011842+00:00 stderr F message: |- 2025-08-13T19:57:40.451011842+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:40.451011842+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:40.451011842+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:40.451011842+00:00 stderr F reason: Deploying 2025-08-13T19:57:40.451011842+00:00 stderr F status: "True" 2025-08-13T19:57:40.451011842+00:00 stderr F type: Progressing 2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True" 2025-08-13T19:57:40.451011842+00:00 stderr F type: Available 2025-08-13T19:57:40.483402657+00:00 stderr F I0813 19:57:40.483342 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:40.485476046+00:00 stderr F I0813 19:57:40.484150 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.485476046+00:00 stderr F status: "False" 2025-08-13T19:57:40.485476046+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:57:40.485476046+00:00 stderr F status: "False" 2025-08-13T19:57:40.485476046+00:00 stderr F type: Degraded 2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.485476046+00:00 stderr F status: "True" 2025-08-13T19:57:40.485476046+00:00 stderr F type: Upgradeable 2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:40.485476046+00:00 stderr F message: |- 2025-08-13T19:57:40.485476046+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:40.485476046+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:40.485476046+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:40.485476046+00:00 stderr F reason: Deploying 2025-08-13T19:57:40.485476046+00:00 stderr F status: "True" 2025-08-13T19:57:40.485476046+00:00 stderr F type: Progressing 2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:40.485476046+00:00 stderr F status: "True" 2025-08-13T19:57:40.485476046+00:00 stderr F type: Available 2025-08-13T19:57:40.515236706+00:00 stderr F I0813 19:57:40.515082 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:57:40.515236706+00:00 stderr F status: "False" 2025-08-13T19:57:40.515236706+00:00 stderr F type: Degraded 2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.515236706+00:00 stderr F status: "True" 2025-08-13T19:57:40.515236706+00:00 stderr F type: Upgradeable 2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:40.515236706+00:00 stderr F status: "False" 2025-08-13T19:57:40.515236706+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:40.515236706+00:00 stderr F message: |- 2025-08-13T19:57:40.515236706+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:40.515236706+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:40.515236706+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:40.515236706+00:00 stderr F reason: Deploying 2025-08-13T19:57:40.515236706+00:00 stderr F status: "True" 
2025-08-13T19:57:40.515236706+00:00 stderr F type: Progressing 2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:40.515236706+00:00 stderr F status: "True" 2025-08-13T19:57:40.515236706+00:00 stderr F type: Available 2025-08-13T19:57:48.718553717+00:00 stderr F I0813 19:57:48.718162 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-08-13T19:57:48.718553717+00:00 stderr F I0813 19:57:48.718217 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-08-13T19:57:50.186632608+00:00 stderr F I0813 19:57:50.186549 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:50.186632608+00:00 stderr F status: "False" 2025-08-13T19:57:50.186632608+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:57:50.186632608+00:00 stderr F status: "False" 2025-08-13T19:57:50.186632608+00:00 stderr F type: Degraded 2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:50.186632608+00:00 stderr F status: "True" 2025-08-13T19:57:50.186632608+00:00 stderr F type: Upgradeable 2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:50.186632608+00:00 stderr F message: |- 2025-08-13T19:57:50.186632608+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:50.186632608+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:50.186632608+00:00 stderr F reason: Deploying 2025-08-13T19:57:50.186632608+00:00 stderr F status: "True" 2025-08-13T19:57:50.186632608+00:00 stderr F type: Progressing 2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:50.186632608+00:00 stderr F status: "True" 2025-08-13T19:57:50.186632608+00:00 stderr F type: Available 2025-08-13T19:57:50.188397668+00:00 stderr F I0813 19:57:50.187123 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:50.711039923+00:00 stderr F I0813 19:57:50.710052 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:57:50.711039923+00:00 stderr F status: "False" 2025-08-13T19:57:50.711039923+00:00 stderr F type: Degraded 2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:50.711039923+00:00 stderr F status: "True" 2025-08-13T19:57:50.711039923+00:00 stderr F type: Upgradeable 2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:50.711039923+00:00 stderr F status: "False" 2025-08-13T19:57:50.711039923+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:50.711039923+00:00 stderr F message: |- 2025-08-13T19:57:50.711039923+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:50.711039923+00:00 stderr F Deployment 
"/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:50.711039923+00:00 stderr F reason: Deploying 2025-08-13T19:57:50.711039923+00:00 stderr F status: "True" 2025-08-13T19:57:50.711039923+00:00 stderr F type: Progressing 2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:50.711039923+00:00 stderr F status: "True" 2025-08-13T19:57:50.711039923+00:00 stderr F type: Available 2025-08-13T19:57:53.854169574+00:00 stderr F E0813 19:57:53.854019 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:58:53.853220130+00:00 stderr F E0813 19:58:53.852971 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:16.933536645+00:00 stderr F I0813 19:59:16.928079 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T19:59:17.449368829+00:00 stderr F I0813 19:59:17.449298 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:17.540009033+00:00 stderr F I0813 19:59:17.539733 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:17.797601196+00:00 stderr F I0813 19:59:17.794238 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:17.849392622+00:00 stderr F I0813 19:59:17.849009 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:18.021281852+00:00 stderr F I0813 19:59:18.020347 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:18.078086281+00:00 stderr F I0813 19:59:18.076133 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-08-13T19:59:18.078086281+00:00 stderr F I0813 19:59:18.076275 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-08-13T19:59:18.247723307+00:00 stderr F I0813 19:59:18.247439 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:18.718923629+00:00 stderr F I0813 19:59:18.716957 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:19.320117607+00:00 stderr F I0813 19:59:19.294730 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T19:59:19.725952816+00:00 stderr F I0813 19:59:19.703422 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:20.361934375+00:00 stderr 
F I0813 19:59:20.359979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:20.786513098+00:00 stderr F I0813 19:59:20.784268 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:21.111025047+00:00 stderr F I0813 19:59:21.108740 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:21.648204880+00:00 stderr F I0813 19:59:21.648113 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:21.852747820+00:00 stderr F I0813 19:59:21.852489 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:21.966195494+00:00 stderr F I0813 19:59:21.945572 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:22.005721751+00:00 stderr F I0813 19:59:22.003273 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:59:22.007893113+00:00 stderr F I0813 19:59:22.006732 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:22.007893113+00:00 stderr F status: "False" 2025-08-13T19:59:22.007893113+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:59:22.007893113+00:00 stderr F status: "False" 2025-08-13T19:59:22.007893113+00:00 stderr F type: Degraded 2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:22.007893113+00:00 stderr F status: "True" 2025-08-13T19:59:22.007893113+00:00 stderr F type: Upgradeable 2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:59:22.007893113+00:00 stderr F message: Deployment "/openshift-multus/multus-admission-controller" is waiting for 2025-08-13T19:59:22.007893113+00:00 stderr F other operators to become ready 2025-08-13T19:59:22.007893113+00:00 stderr F reason: Deploying 2025-08-13T19:59:22.007893113+00:00 stderr F status: "True" 2025-08-13T19:59:22.007893113+00:00 stderr F type: Progressing 2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:59:22.007893113+00:00 stderr F status: "True" 2025-08-13T19:59:22.007893113+00:00 stderr F type: Available 2025-08-13T19:59:22.623140311+00:00 stderr F I0813 19:59:22.622565 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:59:22.623140311+00:00 stderr F status: "False" 2025-08-13T19:59:22.623140311+00:00 stderr F type: Degraded 2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:22.623140311+00:00 stderr F status: "True" 2025-08-13T19:59:22.623140311+00:00 stderr F type: Upgradeable 2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: 
"2024-06-26T12:45:34Z" 2025-08-13T19:59:22.623140311+00:00 stderr F status: "False" 2025-08-13T19:59:22.623140311+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:59:22.623140311+00:00 stderr F message: Deployment "/openshift-multus/multus-admission-controller" is waiting for 2025-08-13T19:59:22.623140311+00:00 stderr F other operators to become ready 2025-08-13T19:59:22.623140311+00:00 stderr F reason: Deploying 2025-08-13T19:59:22.623140311+00:00 stderr F status: "True" 2025-08-13T19:59:22.623140311+00:00 stderr F type: Progressing 2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:59:22.623140311+00:00 stderr F status: "True" 2025-08-13T19:59:22.623140311+00:00 stderr F type: Available 2025-08-13T19:59:23.111462501+00:00 stderr F I0813 19:59:23.104684 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:23.350067482+00:00 stderr F I0813 19:59:23.315737 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:23.446624725+00:00 stderr F I0813 19:59:23.425940 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:23.962965053+00:00 stderr F I0813 19:59:23.962769 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:24.943743630+00:00 stderr F I0813 19:59:24.935070 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:25.812553016+00:00 stderr F I0813 19:59:25.811360 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:26.292940969+00:00 stderr F I0813 19:59:26.292674 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:26.995629630+00:00 stderr F I0813 19:59:26.995568 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:27.126353436+00:00 stderr F I0813 19:59:27.126299 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:27.330258469+00:00 stderr F I0813 19:59:27.330199 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:27.764258490+00:00 stderr F I0813 19:59:27.760309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:27.805641620+00:00 stderr F I0813 19:59:27.802246 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T19:59:28.078679293+00:00 stderr F I0813 19:59:28.078586 
1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:28.101021519+00:00 stderr F I0813 19:59:28.098559 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-08-13T19:59:28.101021519+00:00 stderr F I0813 19:59:28.098616 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-08-13T19:59:28.363168061+00:00 stderr F I0813 19:59:28.363039 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:28.696077801+00:00 stderr F I0813 19:59:28.692985 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:28.843750770+00:00 stderr F I0813 19:59:28.839683 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:29.196290770+00:00 stderr F I0813 19:59:29.196230 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:29.349409284+00:00 stderr F I0813 19:59:29.348558 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:29.477965059+00:00 stderr F I0813 19:59:29.477700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:29.641111209+00:00 stderr F I0813 19:59:29.641055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:29.699073332+00:00 stderr F I0813 19:59:29.685576 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:59:29.709884170+00:00 stderr F I0813 19:59:29.709004 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:29.709884170+00:00 stderr F status: "False" 2025-08-13T19:59:29.709884170+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:59:29.709884170+00:00 stderr F status: "False" 2025-08-13T19:59:29.709884170+00:00 stderr F type: Degraded 2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:29.709884170+00:00 stderr F status: "True" 2025-08-13T19:59:29.709884170+00:00 stderr F type: Upgradeable 2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2025-08-13T19:59:29Z" 2025-08-13T19:59:29.709884170+00:00 stderr F status: "False" 2025-08-13T19:59:29.709884170+00:00 stderr F type: Progressing 2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:59:29.709884170+00:00 stderr F status: "True" 2025-08-13T19:59:29.709884170+00:00 stderr F type: Available 2025-08-13T19:59:30.241056071+00:00 stderr F I0813 19:59:30.241003 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:31.334293824+00:00 stderr F I0813 19:59:31.332858 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:31.510259090+00:00 stderr F I0813 19:59:31.509102 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-08-13T19:59:31.510259090+00:00 stderr F status: "False" 2025-08-13T19:59:31.510259090+00:00 stderr F type: Degraded 2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:31.510259090+00:00 stderr F status: "True" 2025-08-13T19:59:31.510259090+00:00 stderr F type: Upgradeable 2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:59:31.510259090+00:00 stderr F status: "False" 2025-08-13T19:59:31.510259090+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2025-08-13T19:59:30Z" 2025-08-13T19:59:31.510259090+00:00 stderr F status: "False" 2025-08-13T19:59:31.510259090+00:00 stderr F type: Progressing 2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:59:31.510259090+00:00 stderr F status: "True" 2025-08-13T19:59:31.510259090+00:00 stderr F type: Available 2025-08-13T19:59:48.140554559+00:00 stderr F I0813 19:59:48.131293 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:48.391260996+00:00 stderr F I0813 19:59:48.389276 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:48.570048173+00:00 stderr F I0813 19:59:48.568410 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:48.833236585+00:00 stderr F I0813 19:59:48.832707 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:48.974349148+00:00 stderr F I0813 19:59:48.973569 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:49.216714687+00:00 stderr F I0813 19:59:49.205400 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:49.290936663+00:00 stderr F I0813 19:59:49.287212 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:49.426012323+00:00 stderr F I0813 19:59:49.425152 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:50.346580474+00:00 stderr F I0813 19:59:50.346523 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n 
openshift-network-diagnostics is applied 2025-08-13T19:59:50.613374570+00:00 stderr F I0813 19:59:50.609724 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:50.864720594+00:00 stderr F I0813 19:59:50.864066 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:51.243916454+00:00 stderr F I0813 19:59:51.243825 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:51.804263647+00:00 stderr F I0813 19:59:51.804204 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:51.834284163+00:00 stderr F I0813 19:59:51.834085 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.979645897+00:00 stderr F I0813 19:59:51.979480 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.954065868 +0000 UTC))" 2025-08-13T19:59:51.979891574+00:00 stderr F I0813 19:59:51.979865 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.979721539 +0000 UTC))" 2025-08-13T19:59:51.980096460+00:00 stderr F I0813 19:59:51.979996 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.979966846 +0000 UTC))" 2025-08-13T19:59:51.980178442+00:00 stderr F I0813 19:59:51.980157 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.98011923 +0000 UTC))" 2025-08-13T19:59:51.980242584+00:00 stderr F I0813 19:59:51.980217 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
19:59:51.980200393 +0000 UTC))" 2025-08-13T19:59:51.980446290+00:00 stderr F I0813 19:59:51.980416 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980275205 +0000 UTC))" 2025-08-13T19:59:51.980535682+00:00 stderr F I0813 19:59:51.980508 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.98047528 +0000 UTC))" 2025-08-13T19:59:51.980603604+00:00 stderr F I0813 19:59:51.980583 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980560143 +0000 UTC))" 2025-08-13T19:59:51.980667716+00:00 stderr F I0813 19:59:51.980653 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980630925 +0000 UTC))" 2025-08-13T19:59:52.032246636+00:00 stderr F I0813 19:59:52.030423 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 19:59:52.030310541 +0000 UTC))" 2025-08-13T19:59:52.032246636+00:00 stderr F I0813 19:59:52.031124 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 19:59:52.031027672 +0000 UTC))" 2025-08-13T19:59:52.146350009+00:00 stderr F I0813 19:59:52.144680 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:52.231451655+00:00 stderr F I0813 19:59:52.230944 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is 
applied 2025-08-13T19:59:52.309745156+00:00 stderr F I0813 19:59:52.309454 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:52.637698115+00:00 stderr F I0813 19:59:52.634134 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T19:59:55.316949758+00:00 stderr F I0813 19:59:55.316873 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T19:59:58.072087613+00:00 stderr F I0813 19:59:58.071428 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.305632231+00:00 stderr F I0813 19:59:58.304088 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.377723176+00:00 stderr F I0813 19:59:58.367820 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.434523765+00:00 stderr F I0813 19:59:58.434098 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.482619416+00:00 stderr F I0813 19:59:58.480009 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.528341879+00:00 stderr F I0813 19:59:58.522759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.545087616+00:00 stderr F I0813 19:59:58.543626 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.600185317+00:00 stderr F I0813 19:59:58.589599 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.703059880+00:00 stderr F I0813 19:59:58.701218 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.738516400+00:00 stderr F I0813 19:59:58.737600 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.787730443+00:00 stderr F I0813 19:59:58.787055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.841489106+00:00 stderr F I0813 19:59:58.838749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.884679367+00:00 stderr F I0813 19:59:58.856358 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:59:58.963088732+00:00 stderr F I0813 19:59:58.961511 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:59:59.131081701+00:00 stderr F I0813 19:59:59.125180 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:59:59.191984487+00:00 stderr F I0813 19:59:59.189505 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.016409427+00:00 stderr F I0813 20:00:00.015619 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:00:00.044321713+00:00 stderr F I0813 20:00:00.039911 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:00:00.051241160+00:00 stderr F I0813 20:00:00.047119 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:00:00.051241160+00:00 stderr F I0813 20:00:00.047155 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000abff80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085465 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085513 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085531 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.100949 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101022 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101030 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101036 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101079 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:00:00.240105194+00:00 stderr F I0813 20:00:00.236030 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:00:00.415568186+00:00 stderr F I0813 20:00:00.415503 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:00:00.419339323+00:00 stderr F I0813 20:00:00.419293 1 log.go:245] Starting render phase 2025-08-13T20:00:00.537154070+00:00 stderr F I0813 20:00:00.537099 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n 
openshift-network-diagnostics is applied 2025-08-13T20:00:00.583753909+00:00 stderr F I0813 20:00:00.576994 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T20:00:00.585098887+00:00 stderr F I0813 20:00:00.585069 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.606038084+00:00 stderr F I0813 20:00:00.605336 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.704095379+00:00 stderr F I0813 20:00:00.685563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.755946427+00:00 stderr F I0813 20:00:00.750192 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:00:00.756553244+00:00 stderr F I0813 20:00:00.756405 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.810522553+00:00 stderr F I0813 20:00:00.810427 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.935619299+00:00 stderr F I0813 20:00:00.925007 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.966938491+00:00 stderr F I0813 20:00:00.966348 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.012044437+00:00 stderr F I0813 20:00:00.994077 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.198850282+00:00 stderr F I0813 20:00:01.196550 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.378529464+00:00 stderr F I0813 20:00:01.373556 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.526869373+00:00 stderr F I0813 20:00:01.526736 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.793436441+00:00 stderr F I0813 20:00:01.793339 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.974855962+00:00 stderr F I0813 20:00:01.963018 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.048943 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.048980 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, 
updateControlPlane=true 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.049010 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.049058 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:00:02.145098276+00:00 stderr F I0813 20:00:02.144755 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.293526629+00:00 stderr F I0813 20:00:02.293464 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:00:02.293683323+00:00 stderr F I0813 20:00:02.293670 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:00:02.323525774+00:00 stderr F I0813 20:00:02.323471 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.433367366+00:00 stderr F I0813 20:00:02.425473 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:00:02.833232198+00:00 stderr F I0813 20:00:02.833170 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.912416596+00:00 stderr F I0813 20:00:02.912359 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.960892078+00:00 stderr F I0813 20:00:02.959429 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:00:02.990424180+00:00 stderr F I0813 20:00:02.990367 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.003689948+00:00 stderr F I0813 20:00:03.003291 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:00:03.003689948+00:00 stderr F I0813 20:00:03.003423 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:00:03.024762009+00:00 stderr F I0813 20:00:03.024709 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:00:03.024974555+00:00 stderr F I0813 20:00:03.024956 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:00:03.157048601+00:00 stderr F I0813 20:00:03.156993 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.204146314+00:00 stderr F I0813 20:00:03.203526 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:03.204146314+00:00 stderr F I0813 20:00:03.203594 1 log.go:245] reconciling (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:00:03.428984805+00:00 stderr F I0813 20:00:03.428928 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:03.429097758+00:00 stderr F I0813 20:00:03.429084 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:00:03.441752649+00:00 stderr F I0813 20:00:03.440235 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.529341597+00:00 stderr F I0813 20:00:03.528243 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:00:03.538519809+00:00 stderr F I0813 20:00:03.528329 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:00:03.588427262+00:00 stderr F I0813 20:00:03.588335 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.640600989+00:00 stderr F I0813 20:00:03.640539 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:00:03.641026771+00:00 stderr F I0813 20:00:03.641007 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:00:03.772039317+00:00 stderr F I0813 20:00:03.765621 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:00:03.772039317+00:00 stderr F I0813 20:00:03.769461 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:00:03.782352661+00:00 stderr F I0813 20:00:03.772442 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.871938865+00:00 stderr F I0813 20:00:03.866133 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:00:03.872073069+00:00 stderr F I0813 20:00:03.872053 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:00:03.917299689+00:00 stderr F I0813 20:00:03.914318 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:00:03.920611853+00:00 stderr F I0813 20:00:03.917435 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:00:03.929444985+00:00 stderr F I0813 20:00:03.929282 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:00:03.929554168+00:00 stderr F I0813 20:00:03.929539 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:00:03.961414587+00:00 stderr F I0813 20:00:03.961358 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:00:03.961531410+00:00 stderr F I0813 20:00:03.961513 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:00:03.985005849+00:00 stderr F I0813 20:00:03.982309 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:00:04.003910608+00:00 stderr F I0813 20:00:03.999925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:00:04.005635377+00:00 stderr F I0813 20:00:04.005601 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.027927552+00:00 stderr F I0813 20:00:04.027069 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:00:04.045293458+00:00 stderr F I0813 20:00:04.045115 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:00:04.063575569+00:00 stderr F I0813 20:00:04.063518 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:00:04.063912578+00:00 stderr F I0813 20:00:04.063892 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:00:04.088456888+00:00 stderr F I0813 20:00:04.088401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:00:04.089140188+00:00 stderr F I0813 20:00:04.088955 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:00:04.131601498+00:00 stderr F I0813 20:00:04.131542 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.182439358+00:00 stderr F I0813 20:00:04.182377 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:00:04.182577902+00:00 stderr F I0813 20:00:04.182554 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:00:04.322262595+00:00 stderr F I0813 20:00:04.322190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.371423647+00:00 stderr F I0813 20:00:04.371362 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:00:04.371567621+00:00 stderr F I0813 20:00:04.371546 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:00:04.563670579+00:00 stderr F I0813 20:00:04.563619 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.579273074+00:00 stderr F I0813 20:00:04.577416 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:00:04.579526481+00:00 stderr F I0813 20:00:04.579502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:00:04.768828728+00:00 stderr F I0813 20:00:04.768573 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:00:04.768828728+00:00 stderr F I0813 
20:00:04.768703 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:00:05.089912124+00:00 stderr F I0813 20:00:05.089594 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:00:05.090033748+00:00 stderr F I0813 20:00:05.090018 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:00:05.245299915+00:00 stderr F I0813 20:00:05.243752 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:00:05.245299915+00:00 stderr F I0813 20:00:05.243892 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:00:05.402293141+00:00 stderr F I0813 20:00:05.402240 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:00:05.402397694+00:00 stderr F I0813 20:00:05.402383 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:00:05.494256684+00:00 stderr F I0813 20:00:05.494179 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.521273424+00:00 stderr F I0813 20:00:05.521228 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.571085004+00:00 stderr F I0813 20:00:05.571031 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.585370512+00:00 stderr F I0813 20:00:05.585318 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.592692180+00:00 stderr F I0813 20:00:05.592578 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.613300498+00:00 stderr F I0813 20:00:05.612470 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:00:05.613300498+00:00 stderr F I0813 20:00:05.612559 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740411 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.740375171 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740519 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.740496435 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740546 1 
tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.740529816 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740565 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.740551536 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740584 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740572197 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740624 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740610108 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740643 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740629369 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740660 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740648819 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740682 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.74066537 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740703 
1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.74069001 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.741475 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 20:00:05.741448102 +0000 UTC))" 2025-08-13T20:00:05.748960716+00:00 stderr F I0813 20:00:05.748933 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.782825402+00:00 stderr F I0813 20:00:05.782349 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:00:05.782298587 +0000 UTC))" 2025-08-13T20:00:05.829491792+00:00 stderr F I0813 20:00:05.827515 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:00:05.829491792+00:00 stderr F I0813 20:00:05.827590 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:00:05.936560876+00:00 stderr F I0813 20:00:05.935582 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.973523859+00:00 stderr F I0813 20:00:05.973349 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:00:05.973523859+00:00 stderr F I0813 20:00:05.973409 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:00:06.147090158+00:00 stderr F I0813 20:00:06.146975 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.173986895+00:00 stderr F I0813 20:00:06.171235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:00:06.174203062+00:00 stderr F I0813 20:00:06.174180 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:00:06.335594323+00:00 stderr F I0813 20:00:06.335530 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.413332790+00:00 stderr F I0813 20:00:06.411275 1 log.go:245] Apply / Create of 
(apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:00:06.413332790+00:00 stderr F I0813 20:00:06.411388 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:00:06.543155352+00:00 stderr F I0813 20:00:06.538471 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.591404678+00:00 stderr F I0813 20:00:06.587102 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:00:06.591567032+00:00 stderr F I0813 20:00:06.591551 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:00:06.727749705+00:00 stderr F I0813 20:00:06.726725 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.834137209+00:00 stderr F I0813 20:00:06.833488 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:00:06.834137209+00:00 stderr F I0813 20:00:06.833576 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:06.972059312+00:00 stderr F I0813 20:00:06.969446 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.068523202+00:00 stderr F I0813 20:00:07.067173 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:07.068523202+00:00 stderr F I0813 20:00:07.067241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:07.126004661+00:00 stderr F I0813 20:00:07.125906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.181911975+00:00 stderr F I0813 20:00:07.179116 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:07.181911975+00:00 stderr F I0813 20:00:07.179189 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:00:07.367050765+00:00 stderr F I0813 20:00:07.364935 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.439404058+00:00 stderr F I0813 20:00:07.439299 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:07.439404058+00:00 stderr F I0813 20:00:07.439362 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:00:07.570584558+00:00 stderr F I0813 20:00:07.566959 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.610458055+00:00 stderr F I0813 20:00:07.609288 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:00:07.610458055+00:00 stderr F 
I0813 20:00:07.609359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:00:07.740640546+00:00 stderr F I0813 20:00:07.740525 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.913240478+00:00 stderr F I0813 20:00:07.908830 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:00:07.913240478+00:00 stderr F I0813 20:00:07.908932 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:00:08.024148710+00:00 stderr F I0813 20:00:08.021659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:00:08.024148710+00:00 stderr F I0813 20:00:08.021742 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:00:08.076240345+00:00 stderr F I0813 20:00:08.072585 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.234126367+00:00 stderr F I0813 20:00:08.232925 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:00:08.234126367+00:00 stderr F I0813 20:00:08.232991 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:00:08.246128620+00:00 stderr F I0813 20:00:08.245571 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.484883697+00:00 stderr F I0813 20:00:08.484422 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:08.484883697+00:00 stderr F I0813 20:00:08.484544 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:00:08.509748366+00:00 stderr F I0813 20:00:08.507183 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.724308484+00:00 stderr F I0813 20:00:08.723940 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.890706279+00:00 stderr F I0813 20:00:08.890426 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:00:08.906373966+00:00 stderr F I0813 20:00:08.905441 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:08.923653378+00:00 stderr F I0813 20:00:08.920199 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:09.990633902+00:00 stderr F I0813 20:00:09.985965 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:09.990633902+00:00 stderr F I0813 20:00:09.986155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:10.110526881+00:00 stderr F I0813 20:00:10.110462 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:10.206356513+00:00 stderr F I0813 20:00:10.206050 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:10.206356513+00:00 stderr F I0813 20:00:10.206099 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:00:10.221050462+00:00 stderr F I0813 20:00:10.220763 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:10.611933238+00:00 stderr F I0813 20:00:10.607622 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:10.612359180+00:00 stderr F I0813 20:00:10.612334 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:00:10.612448893+00:00 stderr F I0813 20:00:10.612432 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:00:10.825699854+00:00 stderr F I0813 20:00:10.825565 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:00:10.830917772+00:00 stderr F I0813 20:00:10.825762 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:00:10.852116287+00:00 stderr F I0813 20:00:10.848476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:11.308090267+00:00 stderr F I0813 20:00:11.307681 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:00:11.308090267+00:00 stderr F I0813 20:00:11.307755 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:00:11.316324702+00:00 stderr F I0813 20:00:11.316292 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:11.695420422+00:00 stderr F I0813 20:00:11.695361 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:11.700181848+00:00 stderr F I0813 20:00:11.698073 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:00:11.700181848+00:00 stderr F I0813 20:00:11.698161 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:00:11.924462803+00:00 stderr F I0813 20:00:11.924415 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 
2025-08-13T20:00:11.924557115+00:00 stderr F I0813 20:00:11.924540 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:00:12.057868176+00:00 stderr F I0813 20:00:12.057643 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:12.138956629+00:00 stderr F I0813 20:00:12.138102 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.219931 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.220000 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.225358 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:12.341434712+00:00 stderr F I0813 20:00:12.338142 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:12.382584996+00:00 stderr F I0813 20:00:12.381413 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:00:12.382584996+00:00 stderr F I0813 20:00:12.381486 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:00:12.822989293+00:00 stderr F I0813 20:00:12.816201 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:12.912339241+00:00 stderr F I0813 20:00:12.902022 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:00:12.912339241+00:00 stderr F I0813 20:00:12.902123 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:00:13.015617166+00:00 stderr F I0813 20:00:13.012751 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:13.232013697+00:00 stderr F I0813 20:00:13.223038 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:00:13.232013697+00:00 stderr F I0813 20:00:13.223117 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:00:13.250970737+00:00 stderr F I0813 20:00:13.239234 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:13.306879791+00:00 stderr F I0813 20:00:13.298383 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 
2025-08-13T20:00:13.306879791+00:00 stderr F I0813 20:00:13.298451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:00:13.338236715+00:00 stderr F I0813 20:00:13.335162 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:13.359024598+00:00 stderr F I0813 20:00:13.358209 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:00:13.359024598+00:00 stderr F I0813 20:00:13.358297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471283 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471658 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:13.638533468+00:00 stderr F I0813 20:00:13.634291 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:00:13.638533468+00:00 stderr F I0813 20:00:13.634535 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:00:13.678753645+00:00 stderr F I0813 20:00:13.677949 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:00:13.678753645+00:00 stderr F I0813 20:00:13.678384 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:00:14.228906282+00:00 stderr F I0813 20:00:14.228151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:00:14.228906282+00:00 stderr F I0813 20:00:14.228229 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:00:14.997396433+00:00 stderr F I0813 20:00:14.993453 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-08-13T20:00:15.392155450+00:00 stderr F I0813 20:00:15.389447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:00:15.392155450+00:00 stderr F I0813 20:00:15.389661 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:00:16.733912929+00:00 stderr F I0813 20:00:16.729412 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:00:16.733912929+00:00 stderr F I0813 20:00:16.729568 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) 
openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:00:17.055233911+00:00 stderr F I0813 20:00:17.046488 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:00:17.055233911+00:00 stderr F I0813 20:00:17.046589 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:17.084964209+00:00 stderr F I0813 20:00:17.080672 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:00:17.396449320+00:00 stderr F I0813 20:00:17.396312 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:17.396449320+00:00 stderr F I0813 20:00:17.396395 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:17.518882001+00:00 stderr F I0813 20:00:17.509358 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-08-13T20:00:17.608238829+00:00 stderr F I0813 20:00:17.607567 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:17.608238829+00:00 stderr F I0813 20:00:17.607642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:17.971545149+00:00 stderr F I0813 20:00:17.971490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:17.972118835+00:00 stderr F I0813 20:00:17.971982 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:17.999936048+00:00 stderr F I0813 20:00:17.976912 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:00:18.198934632+00:00 stderr F I0813 20:00:18.198525 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:18.198934632+00:00 stderr F I0813 20:00:18.198631 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:00:18.375177797+00:00 stderr F I0813 20:00:18.370384 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:00:18.375177797+00:00 stderr F I0813 20:00:18.370537 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:00:18.554072558+00:00 stderr F I0813 20:00:18.553209 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:00:18.554072558+00:00 stderr F I0813 20:00:18.553370 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:00:18.920695282+00:00 stderr F 
I0813 20:00:18.920636 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:00:18.920943779+00:00 stderr F I0813 20:00:18.920917 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:00:19.139574513+00:00 stderr F I0813 20:00:19.137568 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:00:19.139574513+00:00 stderr F I0813 20:00:19.137636 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:00:19.404905919+00:00 stderr F I0813 20:00:19.402702 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:00:19.404905919+00:00 stderr F I0813 20:00:19.402860 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:00:19.573699642+00:00 stderr F I0813 20:00:19.573642 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:00:19.574088273+00:00 stderr F I0813 20:00:19.574064 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:00:19.724215163+00:00 stderr F I0813 20:00:19.723080 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:19.874118948+00:00 stderr F I0813 20:00:19.872469 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:00:19.874118948+00:00 stderr F I0813 20:00:19.872687 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:00:19.946009208+00:00 stderr F I0813 20:00:19.922655 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.075301534+00:00 stderr F I0813 20:00:20.075240 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.090052095+00:00 stderr F I0813 20:00:20.076416 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:00:20.117163328+00:00 stderr F I0813 20:00:20.105138 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:00:20.251120698+00:00 stderr F I0813 20:00:20.251042 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.314602038+00:00 stderr F I0813 20:00:20.314148 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:00:20.314602038+00:00 stderr F I0813 20:00:20.314290 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:00:20.361443863+00:00 stderr F I0813 
20:00:20.361386 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.584204975+00:00 stderr F I0813 20:00:20.584143 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:00:20.584352459+00:00 stderr F I0813 20:00:20.584337 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:00:20.607314304+00:00 stderr F I0813 20:00:20.599333 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.794267785+00:00 stderr F I0813 20:00:20.794008 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:00:20.794267785+00:00 stderr F I0813 20:00:20.794119 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:00:20.801894842+00:00 stderr F I0813 20:00:20.800106 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.932976790+00:00 stderr F I0813 20:00:20.932921 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:20.967903216+00:00 stderr F I0813 20:00:20.960389 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:00:20.967903216+00:00 stderr F I0813 20:00:20.963155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:00:21.002545324+00:00 stderr F I0813 20:00:21.001559 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:21.317566856+00:00 stderr F I0813 20:00:21.316563 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:00:21.317566856+00:00 stderr F I0813 20:00:21.316644 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.014466 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.014514 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.015032 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.015078 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:00:22.106024227+00:00 stderr F I0813 20:00:22.105687 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:00:22.106024227+00:00 stderr F I0813 20:00:22.105765 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 
2025-08-13T20:00:22.130968929+00:00 stderr F I0813 20:00:22.130918 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:00:22.131218176+00:00 stderr F I0813 20:00:22.131200 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:00:22.191311289+00:00 stderr F I0813 20:00:22.174576 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:00:22.191311289+00:00 stderr F I0813 20:00:22.174677 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:00:22.548052572+00:00 stderr F I0813 20:00:22.547727 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:00:22.548052572+00:00 stderr F I0813 20:00:22.547934 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:00:22.649770082+00:00 stderr F I0813 20:00:22.649264 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:00:22.649770082+00:00 stderr F I0813 20:00:22.649389 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:00:23.170936145+00:00 stderr F I0813 20:00:23.169369 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:00:23.170936145+00:00 stderr F I0813 20:00:23.169451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:00:23.219909761+00:00 stderr F I0813 20:00:23.216758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:00:23.219909761+00:00 stderr F I0813 20:00:23.216930 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:00:23.277926796+00:00 stderr F I0813 20:00:23.277766 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:00:23.278022798+00:00 stderr F I0813 20:00:23.278009 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T20:00:23.311979927+00:00 stderr F I0813 20:00:23.311917 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:00:23.312134401+00:00 stderr F I0813 20:00:23.312106 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:00:23.448147939+00:00 stderr F I0813 20:00:23.446470 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:00:23.448147939+00:00 stderr F I0813 20:00:23.446518 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:00:23.809745610+00:00 stderr F I0813 20:00:23.808217 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:00:23.809745610+00:00 stderr F I0813 20:00:23.808331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:00:24.459348903+00:00 stderr F I0813 20:00:24.455964 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:00:24.459348903+00:00 stderr F I0813 20:00:24.456049 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:00:24.754925571+00:00 stderr F I0813 20:00:24.744957 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:00:24.754925571+00:00 stderr F I0813 20:00:24.745021 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:00:24.983936881+00:00 stderr F I0813 20:00:24.982250 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:00:24.983936881+00:00 stderr F I0813 20:00:24.982318 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:00:25.021522503+00:00 stderr F I0813 20:00:25.017295 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:00:25.021522503+00:00 stderr F I0813 20:00:25.017361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:00:25.081994857+00:00 stderr F I0813 20:00:25.081905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:00:25.082097940+00:00 stderr F I0813 20:00:25.082084 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:00:25.145287482+00:00 stderr F I0813 20:00:25.145235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:00:25.145432446+00:00 stderr F I0813 20:00:25.145414 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:00:25.237749949+00:00 stderr F I0813 20:00:25.237611 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:00:25.237953855+00:00 stderr F I0813 20:00:25.237936 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:00:25.276163404+00:00 stderr F I0813 20:00:25.276107 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:00:25.276324179+00:00 stderr F I0813 20:00:25.276304 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:00:25.304971936+00:00 stderr F I0813 20:00:25.304829 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:00:25.305650655+00:00 stderr F I0813 20:00:25.305185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:00:25.352379178+00:00 stderr F I0813 20:00:25.352324 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:00:25.356604208+00:00 stderr F I0813 20:00:25.356569 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:00:25.519980045+00:00 stderr F I0813 20:00:25.519607 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:00:25.519980045+00:00 stderr F I0813 20:00:25.519720 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:00:25.547145310+00:00 stderr F I0813 20:00:25.546995 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:00:25.547145310+00:00 stderr F I0813 20:00:25.547060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:00:25.626914625+00:00 stderr F I0813 20:00:25.621566 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:00:25.626914625+00:00 stderr F I0813 20:00:25.621871 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:00:25.678964769+00:00 stderr F I0813 20:00:25.677521 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:00:25.678964769+00:00 stderr F I0813 20:00:25.677575 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:00:25.724325682+00:00 stderr F I0813 20:00:25.724270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:00:25.724439176+00:00 stderr F I0813 20:00:25.724418 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:00:25.755367988+00:00 stderr F I0813 20:00:25.755303 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:00:25.755481341+00:00 stderr F I0813 20:00:25.755466 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:00:25.810361636+00:00 stderr F I0813 20:00:25.810284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:00:25.810565262+00:00 stderr F I0813 20:00:25.810536 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:00:26.008240528+00:00 stderr F I0813 20:00:26.006854 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:00:26.008240528+00:00 stderr F I0813 20:00:26.006968 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:00:26.200354916+00:00 stderr F I0813 20:00:26.200288 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) 
openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:00:26.200509281+00:00 stderr F I0813 20:00:26.200482 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:00:26.409191671+00:00 stderr F I0813 20:00:26.407084 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:00:26.409191671+00:00 stderr F I0813 20:00:26.407611 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:00:26.663284947+00:00 stderr F I0813 20:00:26.663029 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:00:26.663441071+00:00 stderr F I0813 20:00:26.663423 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:00:26.803330350+00:00 stderr F I0813 20:00:26.803278 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:00:26.803432753+00:00 stderr F I0813 20:00:26.803415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:00:27.017349632+00:00 stderr F I0813 20:00:27.017209 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:00:27.019970497+00:00 stderr F I0813 20:00:27.017413 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:00:27.224954812+00:00 stderr F I0813 20:00:27.221811 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:00:27.224954812+00:00 stderr F I0813 20:00:27.221904 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:00:27.410078641+00:00 stderr F I0813 20:00:27.410001 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:00:27.410425941+00:00 stderr F I0813 20:00:27.410404 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:00:27.604440143+00:00 stderr F I0813 20:00:27.604250 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:00:27.604440143+00:00 stderr F I0813 20:00:27.604339 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:00:27.835231354+00:00 stderr F I0813 20:00:27.833215 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:00:28.068597348+00:00 stderr F I0813 20:00:28.064478 1 log.go:245] Operconfig Controller complete 2025-08-13T20:00:28.068597348+00:00 stderr F I0813 20:00:28.064713 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:00:29.775416506+00:00 stderr F I0813 20:00:29.771208 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:00:29.775416506+00:00 stderr F I0813 20:00:29.775167 
1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:00:29.794362896+00:00 stderr F I0813 20:00:29.784451 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:00:29.794362896+00:00 stderr F I0813 20:00:29.784499 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00428ad00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798045 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798086 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798095 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823867 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823908 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823915 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823921 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823948 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:00:29.932167476+00:00 stderr F I0813 20:00:29.918701 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:00:29.980936696+00:00 stderr F I0813 20:00:29.975894 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:00:29.989299175+00:00 stderr F I0813 20:00:29.989037 1 log.go:245] Starting render phase 2025-08-13T20:00:30.016067508+00:00 stderr F I0813 20:00:30.013338 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107529 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107568 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107595 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107618 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:00:30.125986172+00:00 stderr F I0813 20:00:30.124066 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:00:30.125986172+00:00 stderr F I0813 20:00:30.124102 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:00:30.144326355+00:00 stderr F I0813 20:00:30.144236 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:00:30.213997792+00:00 stderr F I0813 20:00:30.212562 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:00:30.237987136+00:00 stderr F I0813 20:00:30.237936 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:00:30.238115149+00:00 stderr F I0813 20:00:30.238100 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:00:30.273752145+00:00 stderr F I0813 20:00:30.273704 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:00:30.273929241+00:00 stderr F I0813 20:00:30.273912 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:00:30.287072325+00:00 stderr F I0813 20:00:30.286516 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:30.287072325+00:00 stderr F I0813 20:00:30.286581 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:00:30.299328935+00:00 stderr F I0813 20:00:30.299291 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:30.299507580+00:00 stderr F I0813 20:00:30.299451 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:00:30.306408037+00:00 stderr F I0813 20:00:30.306354 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:00:30.306479469+00:00 stderr F I0813 20:00:30.306465 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:00:30.315886657+00:00 stderr F I0813 20:00:30.314876 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:00:30.315886657+00:00 stderr F I0813 20:00:30.315010 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:00:30.329411403+00:00 stderr F I0813 20:00:30.329281 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:00:30.329411403+00:00 stderr F I0813 20:00:30.329360 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:00:30.342455165+00:00 stderr F I0813 20:00:30.342345 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:00:30.342687001+00:00 stderr F I0813 20:00:30.342668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:00:30.353904471+00:00 stderr F I0813 20:00:30.353878 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:00:30.354048575+00:00 stderr F I0813 20:00:30.353994 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:00:30.368868138+00:00 stderr F I0813 20:00:30.366356 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:00:30.368868138+00:00 stderr F I0813 20:00:30.366421 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:00:30.457129684+00:00 stderr F I0813 20:00:30.454154 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:00:30.457129684+00:00 stderr F I0813 20:00:30.454266 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:00:30.734759971+00:00 stderr F I0813 20:00:30.732826 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:00:30.734759971+00:00 stderr F I0813 20:00:30.732939 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:00:30.846214329+00:00 stderr F I0813 20:00:30.846151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:00:30.846328932+00:00 stderr F I0813 20:00:30.846314 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:00:31.033400936+00:00 stderr F I0813 20:00:31.033259 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:00:31.033400936+00:00 stderr F I0813 20:00:31.033344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:00:31.264141776+00:00 stderr F I0813 20:00:31.263064 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:00:31.264141776+00:00 stderr F I0813 20:00:31.263128 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:00:31.423354566+00:00 stderr F I0813 20:00:31.423155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:00:31.423354566+00:00 stderr F I0813 20:00:31.423228 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/whereabouts-cni 2025-08-13T20:00:31.642241897+00:00 stderr F I0813 20:00:31.641700 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:00:31.642241897+00:00 stderr F I0813 20:00:31.641940 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:00:31.830724141+00:00 stderr F I0813 20:00:31.829337 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:00:31.830724141+00:00 stderr F I0813 20:00:31.829470 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:00:33.021246527+00:00 stderr F I0813 20:00:33.017039 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:00:33.021246527+00:00 stderr F I0813 20:00:33.017201 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:00:33.214942320+00:00 stderr F I0813 20:00:33.211612 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:00:33.225679446+00:00 stderr F I0813 20:00:33.223342 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:00:33.291981177+00:00 stderr F I0813 20:00:33.291921 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:00:33.292109290+00:00 stderr F I0813 20:00:33.292076 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:00:33.518331391+00:00 stderr F I0813 20:00:33.511285 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:00:33.518331391+00:00 stderr F I0813 20:00:33.511361 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:00:33.618065725+00:00 stderr F I0813 20:00:33.613622 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:00:33.618065725+00:00 stderr F I0813 20:00:33.613729 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:00:33.781043142+00:00 stderr F I0813 20:00:33.779456 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:00:33.781043142+00:00 stderr F I0813 20:00:33.779529 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:00:33.809218685+00:00 stderr F I0813 20:00:33.808024 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:00:33.809218685+00:00 stderr F I0813 20:00:33.808281 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:00:33.976896016+00:00 stderr F I0813 20:00:33.976325 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:00:33.980126638+00:00 stderr F I0813 20:00:33.978085 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:00:34.074260693+00:00 stderr F I0813 20:00:34.074164 1 log.go:245] Apply / Create of (apps/v1, 
Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:00:34.076908508+00:00 stderr F I0813 20:00:34.074419 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:00:34.180443360+00:00 stderr F I0813 20:00:34.179269 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:00:34.180443360+00:00 stderr F I0813 20:00:34.179367 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:00:34.240868913+00:00 stderr F I0813 20:00:34.239584 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:00:34.240868913+00:00 stderr F I0813 20:00:34.239660 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:34.345699852+00:00 stderr F I0813 20:00:34.345343 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:34.345699852+00:00 stderr F I0813 20:00:34.345407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:34.494032052+00:00 stderr F I0813 20:00:34.493918 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:34.494032052+00:00 stderr F I0813 20:00:34.493988 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:00:34.638011747+00:00 stderr F I0813 20:00:34.636415 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:34.638011747+00:00 stderr F I0813 20:00:34.636571 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:00:34.941888462+00:00 stderr F I0813 20:00:34.940449 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:00:34.941888462+00:00 stderr F I0813 20:00:34.940613 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:00:35.142183403+00:00 stderr F I0813 20:00:35.129461 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:00:35.142183403+00:00 stderr F I0813 20:00:35.129619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:00:35.292967053+00:00 stderr F I0813 20:00:35.292333 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:00:35.292967053+00:00 stderr F I0813 20:00:35.292402 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:00:35.599894075+00:00 stderr F I0813 20:00:35.597497 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:00:35.599894075+00:00 stderr F I0813 20:00:35.597575 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:00:35.786772323+00:00 stderr F I0813 20:00:35.786550 1 log.go:245] 
Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:35.786951618+00:00 stderr F I0813 20:00:35.786915 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:00:35.919109847+00:00 stderr F I0813 20:00:35.908447 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:00:35.919109847+00:00 stderr F I0813 20:00:35.908521 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:36.280135760+00:00 stderr F I0813 20:00:36.279934 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:36.280135760+00:00 stderr F I0813 20:00:36.279997 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:36.304390692+00:00 stderr F I0813 20:00:36.304280 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:36.304390692+00:00 stderr F I0813 20:00:36.304353 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:00:36.527641798+00:00 stderr F I0813 20:00:36.526617 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:00:36.527641798+00:00 stderr F I0813 20:00:36.526714 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:00:36.872335626+00:00 stderr F I0813 20:00:36.870403 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:00:36.872335626+00:00 stderr F I0813 20:00:36.870490 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:00:37.039218725+00:00 stderr F I0813 20:00:37.035778 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:00:37.039218725+00:00 stderr F I0813 20:00:37.035911 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:00:37.061460459+00:00 stderr F I0813 20:00:37.060257 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:00:37.061460459+00:00 stderr F I0813 20:00:37.060813 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:00:37.405100147+00:00 stderr F I0813 20:00:37.404219 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:00:37.405100147+00:00 stderr F I0813 20:00:37.404308 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:00:37.677505535+00:00 stderr F I0813 20:00:37.676925 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:00:37.677505535+00:00 stderr F I0813 20:00:37.677028 1 log.go:245] reconciling 
(apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:00:38.481051027+00:00 stderr F I0813 20:00:38.480998 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:00:38.481223452+00:00 stderr F I0813 20:00:38.481204 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:00:39.484972473+00:00 stderr F I0813 20:00:39.482212 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:00:39.484972473+00:00 stderr F I0813 20:00:39.482313 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:00:39.657008589+00:00 stderr F I0813 20:00:39.655702 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:39.718378388+00:00 stderr F I0813 20:00:39.716749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:39.811242325+00:00 stderr F I0813 20:00:39.809485 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:39.926295526+00:00 stderr F I0813 20:00:39.925431 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.132291870+00:00 stderr F I0813 20:00:40.127019 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.192144436+00:00 stderr F I0813 20:00:40.190426 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.247094393+00:00 stderr F I0813 20:00:40.246116 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.264164620+00:00 stderr F I0813 20:00:40.260585 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:00:40.264164620+00:00 stderr F I0813 20:00:40.260662 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.294411 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.294480 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.295031 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.343954825+00:00 stderr F I0813 20:00:40.341401 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:00:40.343954825+00:00 stderr F I0813 20:00:40.341469 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:00:40.350194063+00:00 stderr F I0813 20:00:40.348485 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.386627212+00:00 stderr F I0813 20:00:40.385264 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:00:40.386627212+00:00 stderr F I0813 20:00:40.385333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:00:40.447356633+00:00 stderr F I0813 20:00:40.446253 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:00:40.447356633+00:00 stderr F I0813 20:00:40.446327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:00:40.512614574+00:00 stderr F I0813 20:00:40.512450 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:00:40.512614574+00:00 stderr F I0813 20:00:40.512525 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:00:40.564477983+00:00 stderr F I0813 20:00:40.564392 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:00:40.574181020+00:00 stderr F I0813 20:00:40.573921 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:00:40.617123694+00:00 stderr F I0813 20:00:40.616045 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:00:40.617123694+00:00 stderr F I0813 20:00:40.616113 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:00:40.665397691+00:00 stderr F I0813 20:00:40.665036 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:00:40.665397691+00:00 stderr F I0813 20:00:40.665126 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:00:40.817811727+00:00 stderr F I0813 20:00:40.768086 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:00:40.825312231+00:00 stderr F I0813 20:00:40.817770 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:40.865493456+00:00 stderr F I0813 20:00:40.864759 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:40.893745262+00:00 
stderr F I0813 20:00:40.893356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:40.940753422+00:00 stderr F I0813 20:00:40.933778 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:40.940753422+00:00 stderr F I0813 20:00:40.933933 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:40.959415614+00:00 stderr F I0813 20:00:40.953109 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:40.959415614+00:00 stderr F I0813 20:00:40.953208 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:00:41.088007401+00:00 stderr F I0813 20:00:41.078313 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:00:41.088007401+00:00 stderr F I0813 20:00:41.078427 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:00:41.111112900+00:00 stderr F I0813 20:00:41.107727 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:00:41.111112900+00:00 stderr F I0813 20:00:41.107887 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:00:41.287864020+00:00 stderr F I0813 20:00:41.287155 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:00:41.287864020+00:00 stderr F I0813 20:00:41.287242 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:00:41.573910536+00:00 stderr F I0813 20:00:41.571755 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:00:41.573910536+00:00 stderr F I0813 20:00:41.571885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:00:41.589723997+00:00 stderr F I0813 20:00:41.589567 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T20:00:41.804696177+00:00 stderr F I0813 20:00:41.799270 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:41.884044949+00:00 stderr F I0813 20:00:41.882960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:00:41.884044949+00:00 stderr F I0813 20:00:41.883028 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:00:42.031881395+00:00 stderr F I0813 20:00:42.029309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n 
openshift-network-diagnostics is applied 2025-08-13T20:00:45.426981842+00:00 stderr F I0813 20:00:45.426759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:45.465658575+00:00 stderr F I0813 20:00:45.460570 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:00:45.465658575+00:00 stderr F I0813 20:00:45.460646 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:00:45.854964905+00:00 stderr F I0813 20:00:45.847386 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:00:45.854964905+00:00 stderr F I0813 20:00:45.847471 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:00:46.049895544+00:00 stderr F I0813 20:00:46.049222 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:46.185547212+00:00 stderr F I0813 20:00:46.177358 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:00:46.185547212+00:00 stderr F I0813 20:00:46.177434 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:00:46.357890236+00:00 stderr F I0813 20:00:46.355693 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:46.557055535+00:00 stderr F I0813 20:00:46.556373 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:00:46.557055535+00:00 stderr F I0813 20:00:46.556443 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:00:46.638880888+00:00 stderr F I0813 20:00:46.638271 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:46.862627028+00:00 stderr F I0813 20:00:46.861683 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:00:46.862627028+00:00 stderr F I0813 20:00:46.861773 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:00:47.049860316+00:00 stderr F I0813 20:00:47.046662 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:47.103034452+00:00 stderr F I0813 20:00:47.099528 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:00:47.103034452+00:00 stderr F I0813 20:00:47.099593 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:00:47.445677632+00:00 stderr F I0813 20:00:47.442375 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, 
Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:00:47.445677632+00:00 stderr F I0813 20:00:47.442477 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:00:47.461549324+00:00 stderr F I0813 20:00:47.460309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:47.653116347+00:00 stderr F I0813 20:00:47.640629 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-08-13T20:00:48.149622854+00:00 stderr F I0813 20:00:48.147700 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:00:48.149622854+00:00 stderr F I0813 20:00:48.147769 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:00:48.711000051+00:00 stderr F I0813 20:00:48.710428 1 log.go:245] Deleted PodNetworkConnectivityCheck.controlplane.operator.openshift.io/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics because it is no more valid. 2025-08-13T20:00:48.751613939+00:00 stderr F I0813 20:00:48.751242 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T20:00:48.766983538+00:00 stderr F I0813 20:00:48.763258 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:00:48.766983538+00:00 stderr F I0813 20:00:48.763335 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:00:48.785319110+00:00 stderr F I0813 20:00:48.783615 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T20:00:49.312439251+00:00 stderr F I0813 20:00:49.311670 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:00:49.312439251+00:00 stderr F I0813 20:00:49.311743 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:00:49.381325275+00:00 stderr F I0813 20:00:49.377963 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:49.780431785+00:00 stderr F I0813 20:00:49.780116 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:49.828949339+00:00 stderr F I0813 20:00:49.828457 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:00:49.828949339+00:00 stderr F I0813 20:00:49.828519 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:00:50.410139431+00:00 stderr F I0813 20:00:50.408428 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:50.447921398+00:00 stderr F I0813 20:00:50.446764 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:00:50.447921398+00:00 stderr F I0813 20:00:50.447337 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.311867 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.311985 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.312066 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:59.931671730+00:00 stderr F I0813 20:00:59.928650 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.928441377 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997081 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.928746796 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997209 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997146407 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997306 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997279051 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997471 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997384424 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997510 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997484367 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997536 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997519628 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997585 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997542278 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997621 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.99759535 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997647 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.997632671 +0000 UTC))" 2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997905 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997665592 +0000 UTC))" 2025-08-13T20:01:00.013692589+00:00 stderr F I0813 20:01:00.013315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 20:01:00.012919427 +0000 UTC))" 2025-08-13T20:01:00.013764041+00:00 
stderr F I0813 20:01:00.013712 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:01:00.013687419 +0000 UTC))" 2025-08-13T20:01:03.338570499+00:00 stderr F I0813 20:01:03.337009 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:01:03.338570499+00:00 stderr F I0813 20:01:03.337223 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:01:03.347989128+00:00 stderr F I0813 20:01:03.340281 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:01:05.775416984+00:00 stderr F I0813 20:01:05.773578 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:01:05.851148293+00:00 stderr F I0813 20:01:05.848382 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:01:05.851148293+00:00 stderr F I0813 20:01:05.848503 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:01:06.799594387+00:00 stderr F I0813 20:01:06.795264 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:01:06.975266097+00:00 stderr F I0813 20:01:06.965133 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:01:06.975266097+00:00 stderr F I0813 20:01:06.965226 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:01:07.408913532+00:00 stderr F I0813 20:01:07.408155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:01:07.408913532+00:00 stderr F I0813 20:01:07.408221 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:01:07.505907187+00:00 stderr F I0813 20:01:07.496222 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:01:07.544588930+00:00 stderr F I0813 20:01:07.542569 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:01:07.544588930+00:00 stderr F I0813 20:01:07.542660 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:01:07.713043074+00:00 stderr F I0813 20:01:07.705253 1 log.go:245] PodNetworkConnectivityCheck.controlplane.operator.openshift.io/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics: podnetworkconnectivitychecks.controlplane.operator.openshift.io "network-check-source-crc-to-openshift-apiserver-endpoint-crc" not found 2025-08-13T20:01:07.800526638+00:00 stderr F I0813 20:01:07.798978 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:01:07.800526638+00:00 stderr F I0813 20:01:07.799081 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:01:07.868560558+00:00 stderr F I0813 20:01:07.867235 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T20:01:07.997000220+00:00 stderr F I0813 20:01:07.996687 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:01:08.001494768+00:00 stderr F I0813 20:01:07.997167 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:01:08.001494768+00:00 stderr F I0813 20:01:07.997252 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:01:08.111965388+00:00 stderr F I0813 20:01:08.111873 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:01:08.179292768+00:00 stderr F I0813 20:01:08.178584 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:01:08.179292768+00:00 stderr F I0813 20:01:08.178677 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:01:08.267942886+00:00 stderr F I0813 20:01:08.262127 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:01:08.424896791+00:00 stderr F I0813 20:01:08.421744 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:01:08.425373585+00:00 stderr F I0813 20:01:08.425345 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:01:08.425494888+00:00 stderr F I0813 20:01:08.425473 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:01:08.557829191+00:00 stderr F I0813 20:01:08.557427 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:01:08.557973345+00:00 stderr F I0813 20:01:08.557950 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:01:08.574873937+00:00 stderr F I0813 20:01:08.570390 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:01:08.658270425+00:00 stderr F I0813 20:01:08.653255 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:01:08.658270425+00:00 stderr F I0813 20:01:08.653325 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:01:08.809928729+00:00 stderr F I0813 20:01:08.809867 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:01:08.810065923+00:00 stderr F I0813 20:01:08.810045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:01:08.850252439+00:00 stderr F I0813 20:01:08.850045 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:01:09.377500883+00:00 stderr F I0813 20:01:09.377442 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:01:09.377669027+00:00 stderr F I0813 20:01:09.377650 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:01:09.495525358+00:00 stderr F I0813 20:01:09.495391 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:01:09.495525358+00:00 stderr F I0813 20:01:09.495459 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:01:09.585283377+00:00 stderr F I0813 20:01:09.578555 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:01:09.674311386+00:00 stderr F I0813 20:01:09.670623 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:01:09.764236940+00:00 stderr F I0813 20:01:09.763933 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:01:09.957706507+00:00 stderr F I0813 20:01:09.957533 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:01:09.982307818+00:00 stderr F I0813 20:01:09.982249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:01:09.982460653+00:00 stderr F I0813 20:01:09.982442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:01:10.205196494+00:00 stderr F I0813 20:01:10.204294 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:01:10.205196494+00:00 stderr F I0813 20:01:10.204407 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:01:14.365655855+00:00 stderr F I0813 20:01:14.365324 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:01:14.365655855+00:00 stderr F I0813 20:01:14.365415 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:01:18.807159000+00:00 stderr F I0813 20:01:18.807070 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:01:18.807159000+00:00 stderr F I0813 20:01:18.807143 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/network-node-identity 2025-08-13T20:01:20.173082827+00:00 stderr F I0813 20:01:20.171163 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:01:20.173082827+00:00 stderr F I0813 20:01:20.171250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:01:20.947087297+00:00 stderr F I0813 20:01:20.945174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:01:20.947087297+00:00 stderr F I0813 20:01:20.945275 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:01:21.297330493+00:00 stderr F I0813 20:01:21.296989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:01:21.297330493+00:00 stderr F I0813 20:01:21.297045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:01:21.660375606+00:00 stderr F I0813 20:01:21.659963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:01:21.660375606+00:00 stderr F I0813 20:01:21.660031 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:01:21.927616236+00:00 stderr F I0813 20:01:21.922315 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:01:21.927616236+00:00 stderr F I0813 20:01:21.922433 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:01:21.978028653+00:00 stderr F I0813 20:01:21.975980 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:01:22.195034151+00:00 stderr F I0813 20:01:22.194690 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:01:22.195234016+00:00 stderr F I0813 20:01:22.195212 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:01:22.838528918+00:00 stderr F I0813 20:01:22.832272 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:01:22.838528918+00:00 stderr F I0813 20:01:22.832354 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:01:23.444975610+00:00 stderr F I0813 20:01:23.443616 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:01:23.444975610+00:00 stderr F I0813 20:01:23.443759 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:01:23.878311757+00:00 stderr F I0813 20:01:23.872714 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 
was successful 2025-08-13T20:01:23.879092539+00:00 stderr F I0813 20:01:23.878991 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:01:23.995899970+00:00 stderr F I0813 20:01:23.994182 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:01:23.995899970+00:00 stderr F I0813 20:01:23.994291 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:01:28.283247368+00:00 stderr F I0813 20:01:28.283056 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.283877656+00:00 stderr F I0813 20:01:28.283516 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:28.284144654+00:00 stderr F I0813 20:01:28.284080 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284554 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:28.284501904 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284631 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:28.284602557 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284655 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.284638218 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284674 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.284660188 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284729 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284692069 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284747 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.28473534 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284860 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284752491 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284887 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284871524 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284907 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:28.284895155 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284936 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:28.284925406 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284957 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284944016 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.285444 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:01:28.285370599 +0000 UTC))" 2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.285903 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:01:28.285885023 +0000 UTC))" 2025-08-13T20:01:29.312658341+00:00 stderr F I0813 20:01:29.310941 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:01:29.312658341+00:00 stderr F I0813 20:01:29.311047 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:01:29.863595680+00:00 stderr F I0813 20:01:29.863511 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:01:29.868243463+00:00 stderr F I0813 20:01:29.868114 1 log.go:245] successful reconciliation 2025-08-13T20:01:31.577566643+00:00 stderr F I0813 20:01:31.573682 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:01:31.577566643+00:00 stderr F I0813 20:01:31.574006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:01:31.911241717+00:00 stderr F I0813 20:01:31.911140 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cb00b31757c1394caec2ab807eb732759e4f60c33864abe1196343a32306fb6", new="9ad6ea9e8750b1797a2290d936071e1b6afeb9dca994b721070d9e8357ccc62d") 2025-08-13T20:01:31.911347140+00:00 stderr F W0813 20:01:31.911330 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:31.911447063+00:00 stderr F I0813 20:01:31.911429 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="4b32fe525f439299dff9032d1fedd04523f56def3ae405a8eb1dcca9b4fa85c6", new="87b482cbe679adfeab619a3107dca513400610c55f0dbc209ea08b45f985b260") 2025-08-13T20:01:31.911757862+00:00 stderr F E0813 20:01:31.911723 1 leaderelection.go:369] Failed to update lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": context canceled 2025-08-13T20:01:31.911910516+00:00 stderr F I0813 20:01:31.911891 1 leaderelection.go:285] failed to renew lease openshift-network-operator/network-operator-lock: timed out waiting for the condition 2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.914450 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.916186 1 genericapiserver.go:536] "[graceful-termination] shutdown event" 
name="ShutdownInitiated" 2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.916519 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.919079251+00:00 stderr F I0813 20:01:31.919055 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:31.919271436+00:00 stderr F I0813 20:01:31.919254 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:31.924649409+00:00 stderr F I0813 20:01:31.919711 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:31.925018960+00:00 stderr F I0813 20:01:31.924991 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:31.925073401+00:00 stderr F I0813 20:01:31.925059 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:01:31.925122453+00:00 stderr F I0813 20:01:31.925106 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:31.925156124+00:00 stderr F I0813 20:01:31.925144 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:31.925197425+00:00 stderr F I0813 20:01:31.925182 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:01:31.925228356+00:00 stderr F I0813 20:01:31.925217 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:01:31.925347939+00:00 stderr F I0813 20:01:31.925333 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:31.926140032+00:00 stderr F I0813 20:01:31.926120 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:01:31.926218674+00:00 stderr F I0813 20:01:31.926205 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 
2025-08-13T20:01:31.926305776+00:00 stderr F I0813 20:01:31.926240 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:01:31.927496950+00:00 stderr F I0813 20:01:31.927469 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:31.927554022+00:00 stderr F I0813 20:01:31.927541 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929071 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929147 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929151 1 secure_serving.go:258] Stopped listening on [::]:9104 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929197 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929514 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/openshift-iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": context canceled 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929620 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929647 1 builder.go:330] server exited 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929764 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="dashboard-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929929 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="operconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929942 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="infrastructureconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929954 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="ingress-config-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929946 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929976 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="clusterconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930177 1 log.go:245] could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930268 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 
20:01:31.930294 1 controller.go:242] "All workers finished" controller="dashboard-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930368 1 controller.go:242] "All workers finished" controller="ingress-config-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930381 1 controller.go:242] "All workers finished" controller="infrastructureconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930419 1 controller.go:242] "All workers finished" controller="clusterconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930448 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="signer-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930454 1 controller.go:242] "All workers finished" controller="signer-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930465 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pki-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930474 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="egress-router-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930479 1 controller.go:242] "All workers finished" controller="egress-router-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930489 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930496 1 controller.go:242] "All workers finished" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930505 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pod-watcher" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930510 1 controller.go:242] "All workers finished" controller="pod-watcher" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930519 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="allowlist-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930525 1 controller.go:242] "All workers finished" controller="allowlist-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929965 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="proxyconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.931928 1 controller.go:242] "All workers finished" controller="proxyconfig-controller" 2025-08-13T20:01:31.937819265+00:00 stderr F I0813 20:01:31.937655 1 log.go:245] could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:35.298495932+00:00 stderr F E0813 20:01:35.297231 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "network-operator-lock": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:35.298495932+00:00 stderr F W0813 20:01:35.297515 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000025300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operato0000755000175000017500000000000015073043233033036 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operato0000755000175000017500000000000015073043233033036 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operato0000644000175000017500000110333715073043233033050 0ustar zuulzuul2025-08-13T20:00:59.204934877+00:00 stderr F I0813 20:00:59.201500 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/ 2025-08-13T20:00:59.221560951+00:00 stderr F I0813 20:00:59.220554 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:59.227991934+00:00 stderr F I0813 20:00:59.227901 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:59.230144035+00:00 stderr F I0813 20:00:59.229979 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:59.237435533+00:00 stderr F I0813 20:00:59.235747 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:06.985203130+00:00 stderr F I0813 20:01:06.952113 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145 2025-08-13T20:01:20.230947497+00:00 stderr F I0813 20:01:20.230035 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.230947497+00:00 stderr F W0813 20:01:20.230635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.230947497+00:00 stderr F W0813 20:01:20.230645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.949569307+00:00 stderr F I0813 20:01:20.949138 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:20.950898435+00:00 stderr F I0813 20:01:20.950870 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock... 
2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957052 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957150 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957211 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957229 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957253 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957261 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.962225238+00:00 stderr F I0813 20:01:20.962192 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:20.962340592+00:00 stderr F I0813 20:01:20.962318 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.962574838+00:00 stderr F I0813 20:01:20.962552 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:21.071599137+00:00 stderr F I0813 20:01:21.071521 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.072017149+00:00 stderr F I0813 20:01:21.071742 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:21.072623786+00:00 stderr F I0813 20:01:21.071762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.420290630+00:00 stderr F I0813 20:01:21.417525 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock 2025-08-13T20:01:21.420502616+00:00 stderr F I0813 20:01:21.420447 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_41a3e9d3-dcbb-4c99-b08b-92fe107d13b5 became leader 2025-08-13T20:01:21.693231682+00:00 stderr F I0813 20:01:21.693176 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0] 2025-08-13T20:01:21.803696622+00:00 stderr F I0813 20:01:21.803631 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114639 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", 
"AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114639 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114760 1 starter.go:499] waiting 
for cluster version informer sync... 2025-08-13T20:01:22.835208314+00:00 stderr F I0813 20:01:22.833505 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers... 2025-08-13T20:01:22.835208314+00:00 stderr F I0813 20:01:22.834925 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835586 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835631 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835942 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838357 1 base_controller.go:67] Waiting for caches to sync for FSyncController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838389 1 base_controller.go:73] Caches are synced for FSyncController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838398 1 base_controller.go:110] Starting #1 worker of FSyncController controller ... 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838614 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838633 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843569 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843631 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843638 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843644 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ... 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843715 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843734 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844119 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844149 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844166 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844171 1 base_controller.go:73] Caches are synced for EtcdMembersController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844176 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ... 
2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844207 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844222 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844905 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844929 1 base_controller.go:67] Waiting for caches to sync for DefragController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844961 1 envvarcontroller.go:193] Starting EnvVarController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844980 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:01:22.845886058+00:00 stderr F E0813 20:01:22.845191 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845358 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845681 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.846999 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847159 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847181 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847196 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847225 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:01:22.851604501+00:00 stderr F I0813 20:01:22.851481 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:01:22.851868739+00:00 stderr F E0813 20:01:22.851701 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:22.852231279+00:00 stderr F I0813 20:01:22.852034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.865982 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.866018 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2025-08-13T20:01:22.866823725+00:00 stderr F 
I0813 20:01:22.866027 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 2025-08-13T20:01:22.907728821+00:00 stderr F E0813 20:01:22.907578 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:22.945245131+00:00 stderr F I0813 20:01:22.945145 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:22.945245131+00:00 stderr F I0813 20:01:22.945194 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:22.945523019+00:00 stderr F I0813 20:01:22.945492 1 base_controller.go:73] Caches are synced for DefragController 2025-08-13T20:01:22.945563420+00:00 stderr F I0813 20:01:22.945549 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2025-08-13T20:01:22.947247808+00:00 stderr F I0813 20:01:22.947181 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:01:22.947302060+00:00 stderr F I0813 20:01:22.947287 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:01:22.948105423+00:00 stderr F I0813 20:01:22.948050 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:01:22.948105423+00:00 stderr F I0813 20:01:22.948083 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:01:22.948126773+00:00 stderr F I0813 20:01:22.948109 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:22.948126773+00:00 stderr F I0813 20:01:22.948114 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:22.948223566+00:00 stderr F I0813 20:01:22.948206 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:22.948262877+00:00 stderr F I0813 20:01:22.948248 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:22.952234560+00:00 stderr F I0813 20:01:22.952136 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:01:22.952234560+00:00 stderr F I0813 20:01:22.952166 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 
2025-08-13T20:01:22.952550329+00:00 stderr F I0813 20:01:22.952431 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:22.956176783+00:00 stderr F E0813 20:01:22.956032 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:22.964645374+00:00 stderr F I0813 20:01:22.964470 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:22.988731301+00:00 stderr F E0813 20:01:22.988570 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.022307729+00:00 stderr F E0813 20:01:23.022169 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.040754585+00:00 stderr F I0813 20:01:23.040666 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.045177371+00:00 stderr F I0813 20:01:23.045113 1 base_controller.go:73] Caches are synced for ScriptController 2025-08-13T20:01:23.045177371+00:00 stderr F I0813 20:01:23.045158 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2025-08-13T20:01:23.045208242+00:00 stderr F I0813 20:01:23.045200 1 envvarcontroller.go:199] caches synced 2025-08-13T20:01:23.082200166+00:00 stderr F E0813 20:01:23.082082 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.183869165+00:00 stderr F E0813 20:01:23.183735 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.240423948+00:00 stderr F I0813 20:01:23.240267 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.244659499+00:00 stderr F I0813 20:01:23.244611 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:01:23.244714180+00:00 stderr F I0813 20:01:23.244699 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.248015 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.248116 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.252637 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253148 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253159 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253185 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253189 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T20:01:23.363347963+00:00 stderr F E0813 20:01:23.363263 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.475565093+00:00 stderr F I0813 20:01:23.475449 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.478628840+00:00 stderr F E0813 20:01:23.478489 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:23.480253096+00:00 stderr F I0813 20:01:23.480151 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.481074550+00:00 stderr F I0813 20:01:23.481010 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:23.539188437+00:00 stderr F I0813 20:01:23.539021 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-08-13T20:01:23.539188437+00:00 stderr F I0813 20:01:23.539124 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 
2025-08-13T20:01:23.671176050+00:00 stderr F I0813 20:01:23.661078 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.711893141+00:00 stderr F E0813 20:01:23.710921 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737021 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737068 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737113 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737119 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 2025-08-13T20:01:23.739571481+00:00 stderr F I0813 20:01:23.739135 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-08-13T20:01:23.739571481+00:00 stderr F I0813 20:01:23.739217 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-08-13T20:01:23.739752336+00:00 stderr F I0813 20:01:23.739723 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:01:23.739863029+00:00 stderr F I0813 20:01:23.739821 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T20:01:23.740017183+00:00 stderr F E0813 20:01:23.739997 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.745144849+00:00 stderr F I0813 20:01:23.745064 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2025-08-13T20:01:23.745144849+00:00 stderr F I0813 20:01:23.745101 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 2025-08-13T20:01:23.745293434+00:00 stderr F I0813 20:01:23.745208 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:23.745365916+00:00 stderr F I0813 20:01:23.745308 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:01:23.745365916+00:00 stderr F I0813 20:01:23.745348 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.746990 1 base_controller.go:73] Caches are synced for ClusterMemberController 2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.747024 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 
2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.747092 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:23.760225729+00:00 stderr F E0813 20:01:23.760115 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.770430700+00:00 stderr F E0813 20:01:23.770333 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.794754584+00:00 stderr F E0813 20:01:23.794566 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.837589226+00:00 stderr F E0813 20:01:23.835259 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.845963064+00:00 stderr F I0813 20:01:23.841472 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.876753702+00:00 stderr F E0813 20:01:23.876682 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:23.887741586+00:00 stderr F E0813 20:01:23.886505 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.903355931+00:00 stderr F I0813 20:01:23.893974 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:23.905662336+00:00 stderr F I0813 20:01:23.905547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:23.909276520+00:00 stderr F I0813 20:01:23.905930 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.917638608+00:00 stderr F E0813 20:01:23.917231 1 
base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.984460943+00:00 stderr F I0813 20:01:23.984397 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:24.011359700+00:00 stderr F E0813 20:01:24.008314 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced 2025-08-13T20:01:24.011359700+00:00 stderr F E0813 20:01:24.009051 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:24.012053910+00:00 stderr F I0813 20:01:24.012017 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:24.030213498+00:00 stderr F E0813 20:01:24.026310 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.030213498+00:00 stderr F E0813 20:01:24.026659 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.038711430+00:00 stderr F I0813 20:01:24.037373 1 request.go:697] Waited for 1.198785213s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/secrets?limit=500&resourceVersion=0 2025-08-13T20:01:24.042444197+00:00 stderr F E0813 20:01:24.041417 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.043651721+00:00 stderr F I0813 20:01:24.043530 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:24.046381469+00:00 stderr F I0813 20:01:24.043917 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.048881210+00:00 stderr F I0813 20:01:24.048655 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:24.086625656+00:00 stderr F E0813 20:01:24.086488 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.169315404+00:00 stderr F E0813 20:01:24.169212 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.240874095+00:00 stderr F I0813 20:01:24.239677 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244433 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244464 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244495 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244500 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244696 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:24.331661633+00:00 stderr F E0813 20:01:24.330450 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.337326115+00:00 stderr F I0813 20:01:24.336511 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2025-08-13T20:01:24.337326115+00:00 stderr F I0813 20:01:24.336565 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 2025-08-13T20:01:24.359629601+00:00 stderr F E0813 20:01:24.358162 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:24.445604942+00:00 stderr F I0813 20:01:24.445404 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.551146152+00:00 stderr F I0813 20:01:24.547065 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:24.551146152+00:00 stderr F I0813 20:01:24.547110 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:01:24.652969385+00:00 stderr F E0813 20:01:24.652424 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:25.238667776+00:00 stderr F I0813 20:01:25.236481 1 request.go:697] Waited for 1.696867805s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd 2025-08-13T20:01:25.293606912+00:00 stderr F E0813 20:01:25.293553 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:25.678198389+00:00 stderr F E0813 20:01:25.678017 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:26.575472982+00:00 stderr F E0813 20:01:26.575394 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:28.261052465+00:00 stderr F E0813 20:01:28.258990 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:29.139588316+00:00 stderr F E0813 20:01:29.136873 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:29.233407391+00:00 stderr F I0813 20:01:29.233029 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:29.234221804+00:00 stderr F E0813 20:01:29.234128 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:29.261491642+00:00 stderr F E0813 20:01:29.261353 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3] 2025-08-13T20:01:29.263634923+00:00 stderr F E0813 20:01:29.262756 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found 2025-08-13T20:01:29.263896160+00:00 stderr F I0813 20:01:29.263863 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EnvVarControllerUpdatingStatus' Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:29.364256102+00:00 stderr F E0813 20:01:29.360701 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:29.364256102+00:00 stderr F I0813 20:01:29.361877 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not 
synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:31.538098707+00:00 stderr F I0813 20:01:31.502003 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:31.538098707+00:00 stderr F I0813 20:01:31.502226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy 
members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:33.450883531+00:00 stderr F E0813 20:01:33.448764 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:35.546125383+00:00 stderr F I0813 20:01:35.545906 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:36.913351769+00:00 stderr F E0813 20:01:36.909745 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:36.913351769+00:00 stderr F I0813 20:01:36.913015 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:42.986081825+00:00 stderr F I0813 20:01:42.983366 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:42.998117788+00:00 stderr F I0813 20:01:42.995881 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at 
revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:43.031426538+00:00 stderr F I0813 20:01:43.030573 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:43.705192780+00:00 stderr F E0813 20:01:43.705110 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:56.176110932+00:00 stderr F E0813 20:01:56.135500 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:56.176110932+00:00 stderr F I0813 20:01:56.146405 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are 
available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:56.176110932+00:00 stderr F I0813 20:01:56.168699 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:58.241888256+00:00 stderr F I0813 20:01:58.240961 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:58.244003966+00:00 stderr F I0813 20:01:58.243317 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:59.464618610+00:00 stderr F E0813 20:01:59.454164 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:02:04.200344733+00:00 stderr F E0813 20:02:04.199129 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:22.853589283+00:00 stderr F E0813 20:02:22.852691 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:27.916072140+00:00 stderr F E0813 20:02:27.915438 1 base_controller.go:268] 
EtcdStaticResources reconciliation failed: ["etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused - error from a previous attempt: dial tcp 10.217.4.1:443: connect: connection reset by peer, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.944047368+00:00 stderr F E0813 20:02:27.943599 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.310388068+00:00 stderr F E0813 20:02:28.310273 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.121359753+00:00 stderr F E0813 20:02:29.121111 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.910865446+00:00 stderr F E0813 20:02:29.910411 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.712047130+00:00 stderr F E0813 20:02:30.711543 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection 
refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.521644476+00:00 stderr F E0813 20:02:31.521476 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.904943841+00:00 stderr F E0813 20:02:31.904879 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error on serving cert sync for node crc: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-serving-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.905151887+00:00 stderr F I0813 20:02:31.905066 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.909971884+00:00 stderr F E0813 20:02:31.909916 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:31.913650089+00:00 stderr F E0813 20:02:31.913236 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.914249526+00:00 stderr F I0813 20:02:31.914150 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917480528+00:00 stderr F E0813 20:02:31.917423 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: 
error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917612332+00:00 stderr F I0813 20:02:31.917558 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.930318194+00:00 stderr F E0813 20:02:31.930241 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.930375156+00:00 stderr F I0813 20:02:31.930349 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.975917815+00:00 stderr F E0813 20:02:31.975647 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.975917815+00:00 stderr F I0813 20:02:31.975896 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.059535441+00:00 stderr F E0813 20:02:32.059417 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.059535441+00:00 stderr F I0813 20:02:32.059488 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.223615501+00:00 stderr F E0813 20:02:32.223472 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.223615501+00:00 stderr F I0813 20:02:32.223557 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.310012026+00:00 stderr F E0813 20:02:32.309952 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.548999514+00:00 stderr F E0813 20:02:32.548871 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.548999514+00:00 stderr F I0813 20:02:32.548968 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.111589303+00:00 stderr F E0813 20:02:33.111451 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:33.194473277+00:00 stderr F E0813 20:02:33.194344 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.194708674+00:00 stderr F I0813 20:02:33.194606 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.407945453+00:00 stderr F E0813 20:02:34.407394 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": 
dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.481051079+00:00 stderr F E0813 20:02:34.480960 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.481220563+00:00 stderr F I0813 20:02:34.481118 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.983359442+00:00 stderr F E0813 20:02:36.983231 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:37.045231388+00:00 stderr F E0813 20:02:37.045168 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.045450154+00:00 stderr F I0813 20:02:37.045403 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.823228592+00:00 stderr F E0813 20:02:40.822097 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:42.119568465+00:00 stderr F E0813 20:02:42.119470 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.172278588+00:00 stderr F E0813 20:02:42.172223 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.172419712+00:00 stderr F I0813 20:02:42.172387 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.181326239+00:00 stderr F E0813 20:02:45.181006 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:50.826440458+00:00 stderr F E0813 20:02:50.826086 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:52.378155643+00:00 stderr F E0813 20:02:52.378050 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.440554073+00:00 stderr F E0813 20:02:52.440430 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.440626755+00:00 stderr F I0813 20:02:52.440586 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.828872299+00:00 stderr F E0813 20:03:00.828215 1 event.go:355] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:10.831582506+00:00 stderr F E0813 20:03:10.831026 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:12.883654656+00:00 stderr F E0813 20:03:12.883340 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:12.929214286+00:00 stderr F E0813 20:03:12.929073 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.929364070+00:00 stderr F I0813 20:03:12.929250 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:20.838115277+00:00 stderr F E0813 20:03:20.837631 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:22.859631275+00:00 stderr F E0813 20:03:22.859033 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:03:22.957431345+00:00 stderr F E0813 
20:03:22.957273 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:22.967341118+00:00 stderr F E0813 20:03:22.967259 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:22.982678106+00:00 stderr F E0813 20:03:22.982628 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.008378959+00:00 stderr F E0813 20:03:23.008287 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.053133056+00:00 stderr F E0813 20:03:23.053046 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.053411613+00:00 stderr F I0813 20:03:23.053338 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:23.056392899+00:00 stderr F E0813 20:03:23.056266 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.057256643+00:00 stderr F E0813 20:03:23.057197 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.057374826+00:00 stderr F E0813 20:03:23.057317 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:23.057744617+00:00 stderr F E0813 20:03:23.057688 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.057887711+00:00 stderr F I0813 20:03:23.057738 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.062345748+00:00 stderr F E0813 20:03:23.062284 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.062372019+00:00 stderr F I0813 20:03:23.062353 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", 
UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.356343475+00:00 stderr F E0813 20:03:23.356244 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.356714196+00:00 stderr F I0813 20:03:23.356653 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.557486883+00:00 stderr F E0813 20:03:23.557332 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.757595502+00:00 stderr F E0813 20:03:23.757413 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.757595502+00:00 stderr F I0813 20:03:23.757553 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.780075193+00:00 stderr F E0813 20:03:23.780005 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.786199928+00:00 stderr F E0813 20:03:23.786101 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.786246539+00:00 stderr F I0813 20:03:23.786202 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.787520656+00:00 stderr F E0813 20:03:23.787490 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:23.957981659+00:00 stderr F E0813 20:03:23.957886 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.957981659+00:00 stderr F I0813 20:03:23.957943 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.154655539+00:00 stderr F E0813 20:03:24.154590 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.158468888+00:00 stderr F E0813 20:03:24.158439 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.357587489+00:00 stderr F E0813 20:03:24.357513 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.357626330+00:00 stderr F I0813 20:03:24.357589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.505373634+00:00 stderr F E0813 20:03:24.505108 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.551723986+00:00 stderr F I0813 20:03:24.551623 1 request.go:697] Waited for 1.009458816s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd 2025-08-13T20:03:24.558413797+00:00 stderr F E0813 20:03:24.558307 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.558517610+00:00 stderr F I0813 20:03:24.558484 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.757933178+00:00 stderr F E0813 20:03:24.757746 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.758035851+00:00 stderr F I0813 20:03:24.757993 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.958230252+00:00 stderr F E0813 20:03:24.958137 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:25.153714949+00:00 stderr F E0813 20:03:25.153597 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.158041973+00:00 stderr F E0813 20:03:25.157992 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:03:25.158129965+00:00 stderr F I0813 20:03:25.158072 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.358476540+00:00 stderr F E0813 20:03:25.358158 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.358615464+00:00 stderr F I0813 20:03:25.358296 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.558516896+00:00 stderr F E0813 20:03:25.558383 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.752531091+00:00 stderr F I0813 20:03:25.752453 1 request.go:697] Waited for 1.193181188s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2025-08-13T20:03:25.758576004+00:00 stderr F E0813 20:03:25.757732 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.758576004+00:00 stderr F I0813 20:03:25.757892 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.857071064+00:00 stderr F E0813 20:03:25.856945 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 
20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:25.958207469+00:00 stderr F E0813 20:03:25.958059 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.958207469+00:00 stderr F I0813 20:03:25.958158 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.153532231+00:00 stderr F E0813 20:03:26.153414 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.158027719+00:00 stderr F E0813 20:03:26.157943 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.357712246+00:00 stderr F E0813 20:03:26.357613 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.357827169+00:00 stderr F I0813 20:03:26.357705 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.557380822+00:00 stderr F E0813 20:03:26.557258 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:26.758231992+00:00 stderr F E0813 20:03:26.758118 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.958319360+00:00 stderr F E0813 20:03:26.958176 1 
base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.958319360+00:00 stderr F I0813 20:03:26.958226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.157973746+00:00 stderr F E0813 20:03:27.157640 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.157973746+00:00 stderr F I0813 20:03:27.157736 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.352905736+00:00 stderr F I0813 20:03:27.352124 1 request.go:697] Waited for 1.178087108s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2025-08-13T20:03:27.355495730+00:00 stderr F E0813 20:03:27.355381 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.558159942+00:00 stderr F E0813 20:03:27.558037 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:27.754411790+00:00 stderr F E0813 20:03:27.754272 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.957162104+00:00 stderr F E0813 20:03:27.957062 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.957613027+00:00 stderr F I0813 20:03:27.957217 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.156063508+00:00 stderr F E0813 20:03:28.155964 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.355594340+00:00 stderr F E0813 20:03:28.355485 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:28.444511917+00:00 stderr F E0813 20:03:28.444455 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.444640480+00:00 stderr F I0813 20:03:28.444615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:03:28.551331324+00:00 stderr F I0813 20:03:28.551276 1 request.go:697] Waited for 1.154931357s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2025-08-13T20:03:28.553475925+00:00 stderr F E0813 20:03:28.553451 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.762089566+00:00 stderr F E0813 20:03:28.761998 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.957700287+00:00 stderr F E0813 20:03:28.957497 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.957700287+00:00 stderr F I0813 20:03:28.957571 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.355682610+00:00 stderr F E0813 20:03:29.355320 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.537526057+00:00 stderr F E0813 20:03:29.537405 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:29.556090137+00:00 stderr F E0813 20:03:29.555992 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.765203002+00:00 stderr F E0813 20:03:29.765039 1 base_controller.go:266] "ScriptController" controller 
failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.765297575+00:00 stderr F I0813 20:03:29.765179 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.958615090+00:00 stderr F E0813 20:03:29.958470 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.153994804+00:00 stderr F E0813 20:03:30.153764 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.359384223+00:00 stderr F E0813 20:03:30.359136 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:30.555817727+00:00 stderr F E0813 20:03:30.555628 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.755600527+00:00 stderr F E0813 20:03:30.755477 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.755941116+00:00 stderr F I0813 20:03:30.755697 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.840899200+00:00 stderr F E0813 20:03:30.840733 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:31.012560357+00:00 stderr F E0813 20:03:31.012387 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.012560357+00:00 stderr F I0813 20:03:31.012536 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.159309693+00:00 stderr F E0813 20:03:31.159218 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.356281642+00:00 stderr F E0813 20:03:31.356157 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.356618142+00:00 stderr F I0813 20:03:31.356536 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.554473645+00:00 stderr F E0813 20:03:31.554337 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.755144050+00:00 stderr F E0813 20:03:31.755021 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.955864116+00:00 stderr F E0813 20:03:31.955695 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.956168485+00:00 stderr F I0813 20:03:31.956101 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.353411897+00:00 stderr F E0813 20:03:32.353282 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.555996746+00:00 stderr F E0813 20:03:32.555895 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.556484370+00:00 stderr F I0813 20:03:32.556404 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.757106004+00:00 stderr F E0813 20:03:32.756929 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.958043466+00:00 stderr F E0813 20:03:32.957913 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.160642345+00:00 stderr F E0813 20:03:33.156525 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.160642345+00:00 stderr F I0813 20:03:33.158978 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.559447462+00:00 stderr F E0813 20:03:33.558753 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:33.757573125+00:00 stderr F E0813 20:03:33.757461 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.757573125+00:00 stderr F I0813 20:03:33.757547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:33.957753815+00:00 stderr F E0813 20:03:33.957575 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.161032784+00:00 stderr F E0813 20:03:34.159741 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.355140732+00:00 stderr F E0813 20:03:34.355038 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.355239124+00:00 stderr F I0813 20:03:34.355179 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.755950416+00:00 stderr F E0813 20:03:34.755750 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.956662761+00:00 stderr F E0813 20:03:34.956237 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:34.956662761+00:00 stderr F I0813 20:03:34.956341 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.169619416+00:00 stderr F E0813 20:03:35.169293 1 base_controller.go:268] TargetConfigController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.557111580+00:00 stderr F E0813 20:03:35.557005 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.861442162+00:00 stderr F E0813 20:03:35.861273 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:35.962182946+00:00 stderr F E0813 20:03:35.962038 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.962252108+00:00 stderr F I0813 20:03:35.962169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.139711820+00:00 stderr F E0813 20:03:36.139611 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.139925116+00:00 stderr F I0813 20:03:36.139700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.157481337+00:00 stderr F E0813 20:03:36.157376 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:36.556533341+00:00 stderr F E0813 20:03:36.556330 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.556533341+00:00 stderr F I0813 20:03:36.556415 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.759466900+00:00 stderr F E0813 20:03:36.759334 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.953633649+00:00 stderr F E0813 20:03:36.953531 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.356673327+00:00 stderr F E0813 20:03:37.356579 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.555997593+00:00 stderr F E0813 20:03:37.555915 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.959219406+00:00 stderr F E0813 20:03:37.959149 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.355437569+00:00 stderr F E0813 20:03:38.355380 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.556167655+00:00 stderr F E0813 20:03:38.556111 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.556307989+00:00 stderr F I0813 20:03:38.556219 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.955301350+00:00 stderr F E0813 20:03:38.955074 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:39.156957833+00:00 stderr F E0813 20:03:39.156656 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.356349261+00:00 stderr F E0813 20:03:39.356168 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.356349261+00:00 stderr F I0813 20:03:39.356305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.541460462+00:00 stderr F E0813 20:03:39.541323 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:39.760090389+00:00 stderr F E0813 20:03:39.758278 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:40.410572265+00:00 stderr F E0813 20:03:40.410156 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:40.554832701+00:00 stderr F E0813 20:03:40.554724 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:40.844255727+00:00 stderr F E0813 20:03:40.844164 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:41.697942270+00:00 stderr F E0813 20:03:41.696580 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.076366176+00:00 stderr F E0813 20:03:42.076310 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.683909466+00:00 stderr F E0813 20:03:42.683724 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.122655472+00:00 stderr F E0813 20:03:43.122546 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.681258448+00:00 stderr F E0813 20:03:43.681139 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.681320709+00:00 stderr F I0813 20:03:43.681273 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.263280191+00:00 stderr F E0813 20:03:44.263223 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.482826264+00:00 stderr F E0813 20:03:44.482146 1 
base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.482826264+00:00 stderr F I0813 20:03:44.482228 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:45.864068586+00:00 stderr F E0813 20:03:45.863953 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:46.393626573+00:00 stderr F E0813 20:03:46.393442 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.393626573+00:00 stderr F I0813 20:03:46.393548 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:48.262920558+00:00 stderr F E0813 20:03:48.262499 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.207182985+00:00 stderr F E0813 20:03:49.207077 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:49.391407061+00:00 stderr F E0813 20:03:49.391305 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.545107794+00:00 stderr F E0813 20:03:49.544819 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:50.846910431+00:00 stderr F E0813 20:03:50.846526 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:52.325943768+00:00 stderr F E0813 20:03:52.322551 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.940969334+00:00 stderr F E0813 20:03:52.940373 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.932722016+00:00 stderr F E0813 20:03:53.931117 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:53.950185344+00:00 stderr F E0813 20:03:53.946198 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.950185344+00:00 stderr F I0813 20:03:53.946277 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.974173028+00:00 stderr F E0813 20:03:53.974113 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.974330663+00:00 stderr F I0813 20:03:53.974301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.735887598+00:00 stderr F E0813 20:03:54.734474 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.735887598+00:00 stderr F I0813 20:03:54.735026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:55.873922234+00:00 stderr F E0813 20:03:55.873512 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:58.509095808+00:00 stderr F E0813 20:03:58.508630 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.547894352+00:00 stderr F E0813 20:03:59.547725 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:59.658252940+00:00 stderr F E0813 20:03:59.647921 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:00.859073585+00:00 stderr F E0813 20:04:00.855063 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:05.881238883+00:00 stderr F E0813 20:04:05.878083 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:07.554363661+00:00 stderr F E0813 20:04:07.554312 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:04:07.554545866+00:00 stderr F I0813 20:04:07.554507 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.551030160+00:00 stderr F E0813 20:04:09.550761 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:09.724299903+00:00 stderr F E0813 20:04:09.722238 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:10.862056161+00:00 stderr F E0813 20:04:10.861468 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:12.806143739+00:00 stderr F E0813 20:04:12.806031 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:13.424606892+00:00 stderr F E0813 20:04:13.424481 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:14.434228454+00:00 stderr F E0813 20:04:14.434114 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:14.434322137+00:00 stderr F I0813 20:04:14.434260 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.221155572+00:00 stderr F E0813 20:04:15.221031 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.221155572+00:00 stderr F I0813 20:04:15.221124 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.894947624+00:00 stderr F E0813 20:04:15.892361 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:18.996137771+00:00 stderr F E0813 20:04:18.995688 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.556094465+00:00 stderr F E0813 20:04:19.555673 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:20.142189435+00:00 stderr F E0813 20:04:20.142129 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.869223985+00:00 stderr F E0813 20:04:20.867149 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:20.869223985+00:00 stderr F E0813 20:04:20.867241 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:20.869580806+00:00 stderr F E0813 20:04:20.869524 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:22.857735944+00:00 stderr F E0813 20:04:22.857600 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:04:22.961238868+00:00 stderr F E0813 20:04:22.959978 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.051968907+00:00 stderr F E0813 20:04:23.051911 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.052106241+00:00 stderr F I0813 20:04:23.052049 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.052967065+00:00 stderr F E0813 20:04:23.052883 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.054510849+00:00 stderr F E0813 20:04:23.054456 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.054532100+00:00 stderr F I0813 20:04:23.054510 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.261045995+00:00 stderr F E0813 20:04:23.260917 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.262444295+00:00 stderr F E0813 20:04:23.262379 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.559501720+00:00 stderr F E0813 20:04:23.559378 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.760595529+00:00 stderr F E0813 20:04:23.760494 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.761087813+00:00 stderr F E0813 20:04:23.760634 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.761243628+00:00 stderr F I0813 20:04:23.760744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.238534455+00:00 stderr F E0813 20:04:24.236428 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.238534455+00:00 stderr F I0813 20:04:24.237021 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.511558044+00:00 stderr F E0813 20:04:24.510976 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.895454472+00:00 stderr F E0813 20:04:25.895088 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:29.439894403+00:00 stderr F E0813 20:04:29.438675 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:29.557961674+00:00 stderr F E0813 20:04:29.557907 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:35.898907446+00:00 stderr F E0813 20:04:35.898239 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:39.444014303+00:00 stderr F E0813 20:04:39.442629 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:39.582918490+00:00 stderr F E0813 20:04:39.582396 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:45.905067071+00:00 stderr F E0813 20:04:45.903678 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:48.532965973+00:00 stderr F E0813 20:04:48.528929 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:48.532965973+00:00 stderr F I0813 20:04:48.532009 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:49.445322549+00:00 stderr F E0813 20:04:49.444886 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:49.585467953+00:00 stderr F 
E0813 20:04:49.585240 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:50.693385169+00:00 stderr F E0813 20:04:50.693311 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:53.773511932+00:00 stderr F E0813 20:04:53.773174 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:54.390655633+00:00 stderr F E0813 20:04:54.390251 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.404886357+00:00 stderr F E0813 20:04:55.404737 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.405273519+00:00 stderr F I0813 20:04:55.404969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.909447746+00:00 stderr F E0813 20:04:55.909312 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" 
event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:56.190326910+00:00 stderr F E0813 20:04:56.190154 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.190381151+00:00 stderr F I0813 20:04:56.190310 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.452388172+00:00 stderr F E0813 20:04:59.451291 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:59.593054480+00:00 stderr F E0813 20:04:59.591542 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:00.060713062+00:00 stderr F E0813 20:05:00.058154 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:01.130496844+00:00 stderr F E0813 20:05:01.130339 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.967346406+00:00 stderr F E0813 20:05:05.966528 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:05.967413718+00:00 stderr F E0813 20:05:05.967379 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC 
m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:05.969130917+00:00 stderr F E0813 20:05:05.969101 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c29267bd768\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.957751582 +0000 UTC m=+145.691815432,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:09.457429598+00:00 stderr F E0813 20:05:09.456992 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:05:09.598327073+00:00 stderr F E0813 20:05:09.598138 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put 
\"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:09.598327073+00:00 stderr F E0813 20:05:09.598283 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:09.600359401+00:00 stderr F E0813 20:05:09.600235 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c28facabbfa\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.057670635 +0000 UTC m=+144.791734625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:13.643070308+00:00 stderr F E0813 20:05:13.642637 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c28facabbfa\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.057670635 +0000 UTC m=+144.791734625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:13.767411739+00:00 stderr F E0813 20:05:13.767326 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c29267bd768\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.957751582 +0000 UTC m=+145.691815432,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:19.462051600+00:00 stderr F E0813 20:05:19.461334 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:05:22.879651976+00:00 stderr F E0813 20:05:22.878646 1 base_controller.go:268] FSyncController reconciliation 
failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:05:29.048246551+00:00 stderr F E0813 20:05:29.046023 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:06:02.210131115+00:00 stderr F I0813 20:06:02.208595 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:03.033576196+00:00 stderr F I0813 20:06:03.033438 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.445424067+00:00 stderr F I0813 20:06:06.444339 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:07.608175334+00:00 stderr F I0813 20:06:07.608122 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.612634827+00:00 stderr F I0813 20:06:08.611959 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:09.414593552+00:00 stderr F I0813 20:06:09.414529 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:12.348169958+00:00 stderr F I0813 20:06:12.348000 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:12.836973505+00:00 stderr F I0813 20:06:12.835622 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:15.988378780+00:00 stderr F I0813 20:06:15.985500 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:22.843903184+00:00 stderr F I0813 20:06:22.842449 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:22.861297962+00:00 stderr F E0813 20:06:22.861183 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:06:23.072250683+00:00 stderr F I0813 20:06:23.072111 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:23.617974219+00:00 stderr F I0813 20:06:23.617182 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.659640189+00:00 stderr F I0813 20:06:24.659469 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=etcds from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.676183782+00:00 stderr F I0813 20:06:24.675972 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:27.158032882+00:00 stderr F I0813 20:06:27.157843 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:27.558221692+00:00 stderr F I0813 20:06:27.558152 1 reflector.go:351] Caches populated for 
*v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.358379415+00:00 stderr F I0813 20:06:28.358311 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.174883817+00:00 stderr F I0813 20:06:29.174307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.207098629+00:00 stderr F I0813 20:06:29.206943 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.561481258+00:00 stderr F I0813 20:06:29.557522 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:30.560253238+00:00 stderr F I0813 20:06:30.560145 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.778853495+00:00 stderr F I0813 20:06:31.706217 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:32.160490047+00:00 stderr F I0813 20:06:32.160293 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.148306849+00:00 stderr F I0813 20:06:33.147728 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.372511267+00:00 stderr F I0813 20:06:33.365420 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:34.644663810+00:00 stderr F I0813 20:06:34.644608 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:37.201202349+00:00 stderr F I0813 20:06:37.200137 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:39.541999251+00:00 stderr F I0813 20:06:39.522707 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:42.650540216+00:00 stderr F I0813 20:06:42.649910 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.907959937+00:00 stderr F I0813 20:06:43.897571 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.809039123+00:00 stderr F I0813 20:06:45.808925 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:50.511690451+00:00 stderr F I0813 20:06:50.502367 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:50.553990194+00:00 stderr F I0813 20:06:50.552587 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:57.566845679+00:00 stderr F I0813 20:06:57.561917 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:02.371117412+00:00 stderr F I0813 20:07:02.370179 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:04.722915769+00:00 stderr F I0813 
20:07:04.722764 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:10.046854770+00:00 stderr F I0813 20:07:10.044642 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:22.928281282+00:00 stderr F E0813 20:07:22.926421 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:08:22.866636195+00:00 stderr F E0813 20:08:22.865703 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:08:24.161995414+00:00 stderr F E0813 20:08:24.161937 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error on peer cert sync for node crc: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-peer-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.184943762+00:00 stderr F I0813 20:08:24.182280 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.223274251+00:00 stderr F E0813 20:08:24.222855 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:24.359258560+00:00 stderr F I0813 20:08:24.358504 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.359258560+00:00 stderr F E0813 20:08:24.359213 1 
base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.569482947+00:00 stderr F E0813 20:08:24.567947 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.569482947+00:00 stderr F I0813 20:08:24.568045 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.690223139+00:00 stderr F E0813 20:08:24.689972 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.757401485+00:00 stderr F E0813 20:08:24.757221 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.757401485+00:00 stderr F I0813 20:08:24.757345 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.956251467+00:00 stderr F E0813 20:08:24.956169 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.956301158+00:00 stderr F I0813 20:08:24.956261 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.153401538+00:00 stderr F E0813 20:08:25.153177 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.155126058+00:00 stderr F I0813 20:08:25.153268 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.350054066+00:00 stderr F E0813 20:08:25.350001 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.350190790+00:00 stderr F I0813 20:08:25.350166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.716429421+00:00 stderr F E0813 20:08:25.714732 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.716429421+00:00 stderr F I0813 20:08:25.714869 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.387466040+00:00 stderr F E0813 20:08:26.377075 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.387466040+00:00 stderr F I0813 20:08:26.377191 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.679662479+00:00 stderr F E0813 20:08:27.678724 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.679662479+00:00 stderr F I0813 20:08:27.678885 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial 
tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268107832+00:00 stderr F E0813 20:08:30.267513 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268107832+00:00 stderr F I0813 20:08:30.267695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.645977526+00:00 stderr F E0813 20:08:31.645840 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:35.396656560+00:00 stderr F E0813 20:08:35.393708 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.396656560+00:00 stderr F I0813 20:08:35.394473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.650405050+00:00 stderr F E0813 20:08:41.648539 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:45.655396576+00:00 stderr F E0813 20:08:45.654638 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.655446358+00:00 stderr F I0813 20:08:45.654841 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.655745521+00:00 stderr F E0813 20:08:51.655347 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:09:22.866896272+00:00 stderr F E0813 20:09:22.866157 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:09:28.690253803+00:00 stderr F I0813 20:09:28.689154 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:29.796471329+00:00 stderr F I0813 20:09:29.796019 1 reflector.go:351] Caches populated for *v1.ClusterRole from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:29.950505895+00:00 stderr F I0813 20:09:29.950372 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:30.868878596+00:00 stderr F I0813 20:09:30.868418 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.896868311+00:00 stderr F I0813 20:09:32.896678 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:33.825653849+00:00 stderr F I0813 20:09:33.825145 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:34.341240701+00:00 stderr F I0813 20:09:34.341160 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:34.508565869+00:00 stderr F I0813 20:09:34.508463 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.550575104+00:00 stderr F I0813 20:09:35.550203 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.763696894+00:00 stderr F I0813 20:09:36.763292 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:37.445863483+00:00 stderr F I0813 20:09:37.445703 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.041151681+00:00 stderr F I0813 20:09:39.040874 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.059841067+00:00 stderr F I0813 20:09:39.059728 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.775376112+00:00 stderr F I0813 20:09:39.775289 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.877276473+00:00 stderr F I0813 20:09:39.877139 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:42.554569883+00:00 stderr F I0813 20:09:42.553989 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:44.886981005+00:00 stderr F I0813 20:09:44.885150 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.269155223+00:00 stderr F I0813 20:09:45.269000 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.299185674+00:00 stderr F I0813 20:09:45.291372 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=etcds from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.301321245+00:00 stderr F I0813 20:09:45.301283 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.321293898+00:00 stderr F I0813 20:09:45.305596 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:45.683142992+00:00 stderr F I0813 20:09:45.683015 1 reflector.go:351] Caches populated for 
*v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.246210666+00:00 stderr F I0813 20:09:46.246035 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.483549671+00:00 stderr F I0813 20:09:46.480153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.069358597+00:00 stderr F I0813 20:09:48.066045 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.655359978+00:00 stderr F I0813 20:09:48.655206 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.226945617+00:00 stderr F I0813 20:09:50.224154 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.332282769+00:00 stderr F I0813 20:09:52.331846 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:54.468656401+00:00 stderr F I0813 20:09:54.468339 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.661100629+00:00 stderr F I0813 20:09:55.652730 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:56.423555399+00:00 stderr F I0813 20:09:56.423197 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:08.511851241+00:00 stderr F I0813 20:10:08.507574 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:14.081605350+00:00 stderr F I0813 20:10:14.079826 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:21.924516261+00:00 stderr F I0813 20:10:21.923721 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:22.870503533+00:00 stderr F E0813 20:10:22.870263 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:10:25.574907120+00:00 stderr F I0813 20:10:25.574734 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:35.172134291+00:00 stderr F I0813 20:10:35.168313 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:43.224549840+00:00 stderr F I0813 20:10:43.223556 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:11:22.868676659+00:00 stderr F E0813 20:11:22.868202 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:12:22.881343790+00:00 stderr F E0813 20:12:22.877458 1 base_controller.go:268] FSyncController reconciliation failed: Post 
"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:13:22.867875010+00:00 stderr F E0813 20:13:22.867581 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:14:22.868843436+00:00 stderr F E0813 20:14:22.868197 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:15:22.867512094+00:00 stderr F E0813 20:15:22.867177 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:16:22.870828367+00:00 stderr F E0813 20:16:22.870141 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:17:22.876213876+00:00 stderr F E0813 20:17:22.875455 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:18:22.877519331+00:00 stderr F E0813 20:18:22.876956 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:19:22.880135864+00:00 stderr F E0813 20:19:22.879267 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:20:22.877295837+00:00 stderr F E0813 20:20:22.876172 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:21:22.884254785+00:00 stderr F E0813 20:21:22.878816 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:22:09.064314212+00:00 stderr F E0813 20:22:09.063425 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:22:22.884322808+00:00 stderr F E0813 20:22:22.882273 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:23:22.874296613+00:00 stderr F E0813 20:23:22.873563 1 base_controller.go:268] FSyncController reconciliation failed: Post 
"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:24:22.880233946+00:00 stderr F E0813 20:24:22.879473 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:25:22.877911382+00:00 stderr F E0813 20:25:22.877520 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:26:22.890656328+00:00 stderr F E0813 20:26:22.889040 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:27:22.881600995+00:00 stderr F E0813 20:27:22.881267 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:28:22.881465280+00:00 stderr F E0813 20:28:22.881095 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:29:22.880966008+00:00 stderr F E0813 20:29:22.880293 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:30:22.883764059+00:00 stderr F E0813 20:30:22.883122 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:31:22.885060755+00:00 stderr F E0813 20:31:22.884194 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:32:22.891502994+00:00 stderr F E0813 20:32:22.890695 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:33:22.884361515+00:00 stderr F E0813 20:33:22.883390 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:34:22.885392643+00:00 stderr F E0813 20:34:22.884208 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:35:22.886406289+00:00 stderr F E0813 20:35:22.885518 1 base_controller.go:268] FSyncController reconciliation failed: Post 
"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:36:22.891921201+00:00 stderr F E0813 20:36:22.889617 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:37:22.886728388+00:00 stderr F E0813 20:37:22.886091 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:38:22.891078907+00:00 stderr F E0813 20:38:22.890725 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:38:49.076961309+00:00 stderr F E0813 20:38:49.076496 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:39:22.889738503+00:00 stderr F E0813 20:39:22.888686 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:40:22.889029618+00:00 stderr F E0813 20:40:22.888462 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:41:22.895517819+00:00 stderr F E0813 20:41:22.893211 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:42:22.891578222+00:00 stderr F E0813 20:42:22.891138 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:42:36.405189682+00:00 stderr F I0813 20:42:36.341327 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.344825 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.352873 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.352895 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418628739+00:00 stderr F I0813 20:42:36.352915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.352971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353002 1 streamwatcher.go:111] 
Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353046 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353086 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353132 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353217 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353266 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450603191+00:00 stderr F I0813 20:42:36.353283 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451313861+00:00 stderr F I0813 20:42:36.353299 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451663361+00:00 stderr F I0813 20:42:36.353316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.463306937+00:00 stderr F I0813 20:42:36.353361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.463971006+00:00 stderr F I0813 20:42:36.353378 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464496041+00:00 stderr F I0813 20:42:36.353393 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464896573+00:00 stderr F I0813 20:42:36.353411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.465287254+00:00 stderr F I0813 20:42:36.353428 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.465534021+00:00 stderr F I0813 20:42:36.353445 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.466008965+00:00 stderr F I0813 20:42:36.353460 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.466395156+00:00 stderr F I0813 20:42:36.353475 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.466717065+00:00 stderr F I0813 20:42:36.353491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467177499+00:00 stderr F I0813 20:42:36.353506 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467517179+00:00 stderr F I0813 20:42:36.353521 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467915070+00:00 stderr F I0813 20:42:36.353535 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.468287871+00:00 stderr F I0813 20:42:36.353552 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.468517787+00:00 stderr F I0813 20:42:36.353568 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.468972860+00:00 stderr F I0813 20:42:36.353585 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469270839+00:00 stderr F I0813 20:42:36.353602 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469545767+00:00 stderr F I0813 20:42:36.353617 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511689472+00:00 stderr F I0813 20:42:36.353630 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531136833+00:00 stderr F I0813 20:42:36.353647 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531136833+00:00 stderr F I0813 20:42:36.342482 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.120750 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125509 1 base_controller.go:172] Shutting down ClusterMemberController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125549 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125596 1 base_controller.go:172] Shutting down EtcdEndpointsController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125613 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125628 1 base_controller.go:172] Shutting down EtcdCertSignerController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125644 1 base_controller.go:172] Shutting down ClusterMemberRemovalController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125663 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125685 1 base_controller.go:172] Shutting down EtcdStaticResources ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125703 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125717 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125732 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125750 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:42:41.126171969+00:00 stderr F I0813 20:42:41.126131 1 base_controller.go:172] Shutting down ScriptController ... 2025-08-13T20:42:41.126296192+00:00 stderr F I0813 20:42:41.126270 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:42:41.126367504+00:00 stderr F I0813 20:42:41.126349 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.126429406+00:00 stderr F I0813 20:42:41.126412 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:41.126499908+00:00 stderr F I0813 20:42:41.126481 1 base_controller.go:172] Shutting down BackingResourceController ... 
2025-08-13T20:42:41.126563860+00:00 stderr F I0813 20:42:41.126545 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:41.126642342+00:00 stderr F I0813 20:42:41.126618 1 base_controller.go:172] Shutting down DefragController ... 2025-08-13T20:42:41.126717944+00:00 stderr F I0813 20:42:41.126697 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:41.126853818+00:00 stderr F I0813 20:42:41.126765 1 base_controller.go:172] Shutting down StatusSyncer_etcd ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127575 1 base_controller.go:172] Shutting down BootstrapTeardownController ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127632 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127646 1 base_controller.go:172] Shutting down MachineDeletionHooksController ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127671 1 base_controller.go:114] Shutting down worker of ClusterMemberController controller ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127675 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127706 1 base_controller.go:114] Shutting down worker of BootstrapTeardownController controller ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.128508 1 envvarcontroller.go:209] Shutting down EnvVarController 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127684 1 base_controller.go:104] All ClusterMemberController workers have been terminated 2025-08-13T20:42:41.129861045+00:00 stderr F E0813 20:42:41.128859 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127889 1 base_controller.go:104] All BootstrapTeardownController workers have been terminated 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129352 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129526 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129543 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129560 1 base_controller.go:114] Shutting down worker of MachineDeletionHooksController controller ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129570 1 base_controller.go:104] All MachineDeletionHooksController workers have been terminated 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129630 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129654 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129830 1 base_controller.go:172] Shutting down EtcdMembersController ... 2025-08-13T20:42:41.129907136+00:00 stderr F I0813 20:42:41.129897 1 base_controller.go:172] Shutting down EtcdCertCleanerController ... 
2025-08-13T20:42:41.129981228+00:00 stderr F I0813 20:42:41.126895 1 base_controller.go:150] All StatusSyncer_etcd post start hooks have been terminated 2025-08-13T20:42:41.132899923+00:00 stderr F I0813 20:42:41.130971 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.132899923+00:00 stderr F W0813 20:42:41.132287 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operato0000644000175000017500000030366115073043233033051 0ustar zuulzuul2025-08-13T19:59:19.439582302+00:00 stderr F I0813 19:59:18.611601 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:19.439582302+00:00 stderr F I0813 19:59:18.511681 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/ 2025-08-13T19:59:19.497175674+00:00 stderr F I0813 19:59:18.676219 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T19:59:19.497590426+00:00 stderr F I0813 19:59:19.497506 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:19.742684212+00:00 stderr F I0813 19:59:19.687992 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:24.021392759+00:00 stderr F I0813 19:59:24.020506 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145 2025-08-13T19:59:29.136189326+00:00 stderr F I0813 19:59:29.134492 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:29.136189326+00:00 stderr F W0813 19:59:29.135111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:29.136189326+00:00 stderr F W0813 19:59:29.135122 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:29.236993950+00:00 stderr F I0813 19:59:29.235641 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:29.242432165+00:00 stderr F I0813 19:59:29.240557 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:29.242432165+00:00 stderr F I0813 19:59:29.241447 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:29.273372357+00:00 stderr F I0813 19:59:29.272043 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:29.313331306+00:00 stderr F I0813 19:59:29.312113 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.313331306+00:00 stderr F I0813 19:59:29.312268 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:29.364638288+00:00 stderr F I0813 19:59:29.364375 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:29.367155910+00:00 stderr F I0813 19:59:29.365767 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:29.555894250+00:00 stderr F I0813 19:59:29.546100 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:29.555894250+00:00 stderr F I0813 19:59:29.546695 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock... 
2025-08-13T19:59:29.569083816+00:00 stderr F I0813 19:59:29.567958 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:29.569083816+00:00 stderr F I0813 19:59:29.569001 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:29.713290977+00:00 stderr F I0813 19:59:29.713230 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:29.724559348+00:00 stderr F E0813 19:59:29.722298 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.725038032+00:00 stderr F E0813 19:59:29.725012 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.736378395+00:00 stderr F E0813 19:59:29.730717 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.736378395+00:00 stderr F E0813 19:59:29.734086 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.743035945+00:00 stderr F E0813 19:59:29.742379 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.744939749+00:00 stderr F E0813 19:59:29.744270 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.766698299+00:00 stderr F E0813 19:59:29.763559 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.766698299+00:00 stderr F E0813 19:59:29.766038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.774469311+00:00 stderr F I0813 19:59:29.774188 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:29.912187596+00:00 stderr F E0813 19:59:29.908002 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.912187596+00:00 stderr F E0813 19:59:29.908064 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.990180180+00:00 stderr F E0813 19:59:29.990117 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:30.039085414+00:00 stderr F E0813 19:59:30.039024 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:30.167941217+00:00 stderr F E0813 19:59:30.167862 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:30.204724855+00:00 stderr F E0813 19:59:30.204571 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:30.489536614+00:00 stderr F E0813 19:59:30.488746 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:30.525665924+00:00 stderr F E0813 19:59:30.525598 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:31.151526054+00:00 stderr F E0813 19:59:31.151456 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:31.175321802+00:00 stderr F I0813 19:59:31.175063 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock 2025-08-13T19:59:31.175363553+00:00 stderr F E0813 19:59:31.175316 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:31.208255471+00:00 stderr F I0813 19:59:31.208171 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28128", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_13b0a2d9-9b76-44b9-abad-e471c2f65ca3 became leader 2025-08-13T19:59:31.550569339+00:00 stderr F I0813 19:59:31.549964 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0] 2025-08-13T19:59:32.441695689+00:00 stderr F E0813 19:59:32.437879 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:32.458201790+00:00 stderr F E0813 19:59:32.458107 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.874763240+00:00 stderr F I0813 19:59:33.874019 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:34.190295535+00:00 stderr F I0813 19:59:34.184537 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity 
BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:34.190295535+00:00 stderr F I0813 19:59:34.184682 1 starter.go:499] waiting for cluster version informer sync... 2025-08-13T19:59:34.282931475+00:00 stderr F I0813 19:59:34.280679 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", 
"RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:34.412808117+00:00 stderr F I0813 19:59:34.403536 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers... 2025-08-13T19:59:34.599604162+00:00 stderr F I0813 19:59:34.599224 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController 2025-08-13T19:59:35.074728296+00:00 stderr F I0813 19:59:35.074419 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController 2025-08-13T19:59:35.080904012+00:00 stderr F I0813 19:59:35.080535 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.115441 1 base_controller.go:67] Waiting for caches to sync for FSyncController 2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.247896 1 base_controller.go:73] Caches are synced for FSyncController 2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.247922 1 base_controller.go:110] Starting #1 worker of FSyncController controller ... 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243497 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243594 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243610 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313378 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313393 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ... 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243630 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243649 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243908 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243926 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243939 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243951 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313995 1 base_controller.go:73] Caches are synced for EtcdMembersController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.314001 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ... 
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243962 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2025-08-13T19:59:35.315551700+00:00 stderr F E0813 19:59:35.314409 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243973 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.314517 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243992 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2025-08-13T19:59:35.388949042+00:00 stderr F I0813 19:59:35.244004 1 base_controller.go:67] Waiting for caches to sync for DefragController 2025-08-13T19:59:35.389179068+00:00 stderr F E0813 19:59:35.389154 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.389382674+00:00 stderr F I0813 19:59:35.244225 1 envvarcontroller.go:193] Starting EnvVarController 2025-08-13T19:59:35.391058182+00:00 stderr F I0813 19:59:35.391020 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.468019336+00:00 stderr F I0813 19:59:35.244247 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:35.483227819+00:00 stderr F E0813 19:59:35.476989 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244441 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.478066 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244455 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244539 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244551 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244567 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244736 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244751 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244761 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244770 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.245278 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.245323 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.504348 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.509325 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.529198640+00:00 stderr F I0813 19:59:35.509567 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229019 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229573 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229618 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229624 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229640 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229645 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:38.315580626+00:00 stderr F I0813 19:59:38.315147 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2025-08-13T19:59:38.315580626+00:00 stderr F I0813 19:59:38.315168 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 
2025-08-13T19:59:38.445108599+00:00 stderr F E0813 19:59:38.444373 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:38.515916647+00:00 stderr F E0813 19:59:38.503438 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:38.566434227+00:00 stderr F I0813 19:59:38.566370 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:38.575941538+00:00 stderr F I0813 19:59:38.575907 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:38.577320148+00:00 stderr F I0813 19:59:38.577282 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:38.606453528+00:00 stderr F I0813 19:59:38.597541 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.681984 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.682102 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.682111 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.654724 1 base_controller.go:73] Caches are synced for DefragController 2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.683248 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2025-08-13T19:59:38.838939705+00:00 stderr F I0813 19:59:38.836534 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:38.843890516+00:00 stderr F I0813 19:59:38.842074 1 base_controller.go:73] Caches are synced for ScriptController 2025-08-13T19:59:38.874057006+00:00 stderr F I0813 19:59:38.872051 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2025-08-13T19:59:38.982047663+00:00 stderr F I0813 19:59:38.967712 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:38.982047663+00:00 stderr F E0813 19:59:38.970342 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:38.982047663+00:00 stderr F I0813 19:59:38.971545 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:39.036536197+00:00 stderr F I0813 19:59:39.036468 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:39.036630399+00:00 stderr F I0813 19:59:39.036612 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 
2025-08-13T19:59:39.098609596+00:00 stderr F I0813 19:59:39.098186 1 envvarcontroller.go:199] caches synced 2025-08-13T19:59:39.512175835+00:00 stderr F E0813 19:59:39.511575 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:39.554505982+00:00 stderr F I0813 19:59:39.554323 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:39.555444998+00:00 stderr F E0813 19:59:39.555415 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:39.597947860+00:00 stderr F E0813 19:59:39.597723 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:39.614323977+00:00 stderr F E0813 19:59:39.614245 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:41.444218429+00:00 stderr F I0813 19:59:41.402415 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:41.444218429+00:00 stderr F I0813 19:59:41.410329 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:41.510703544+00:00 stderr F E0813 19:59:41.453610 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:41.575286225+00:00 stderr F E0813 19:59:41.491014 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:41.688451451+00:00 stderr F E0813 19:59:41.686147 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.716241883+00:00 stderr F E0813 19:59:41.714508 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:41.772432005+00:00 stderr F E0813 19:59:41.772374 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found 2025-08-13T19:59:41.792715813+00:00 stderr F I0813 19:59:41.792615 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:41.797374666+00:00 stderr F E0813 19:59:41.793568 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:41.813208317+00:00 stderr F I0813 19:59:41.802685 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T19:59:41.815346458+00:00 stderr F I0813 19:59:41.815115 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.030002477+00:00 stderr F I0813 19:59:42.029895 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.032700264+00:00 stderr F I0813 19:59:42.032424 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.045939881+00:00 stderr F E0813 19:59:42.045747 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-08-13T19:59:42.050357667+00:00 stderr F I0813 19:59:42.050321 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:42Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"EnvVarController_Error::NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:42.064411728+00:00 stderr F I0813 19:59:42.051461 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:42.073188328+00:00 stderr F E0813 19:59:42.070405 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:42.084309165+00:00 stderr F I0813 19:59:42.057153 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.084309165+00:00 stderr F I0813 19:59:42.059321 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.104760 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.220958 1 
base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.239738 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.221066 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.239829 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.221247 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.240287 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T19:59:42.244982595+00:00 stderr F I0813 19:59:42.241586 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.244982595+00:00 stderr F I0813 19:59:42.242611 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.267741034+00:00 stderr F I0813 19:59:42.267670 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.274360852+00:00 stderr F I0813 19:59:42.274307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.364353727+00:00 stderr F I0813 19:59:42.221285 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:42.364470211+00:00 stderr F I0813 19:59:42.364447 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T19:59:42.381169387+00:00 stderr F I0813 19:59:42.368972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.385226692+00:00 stderr F I0813 19:59:42.221433 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:42.411953034+00:00 stderr F I0813 19:59:42.411890 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:42.415565527+00:00 stderr F I0813 19:59:42.415474 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:42.415733612+00:00 stderr F I0813 19:59:42.415686 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2025-08-13T19:59:42.415963909+00:00 stderr F I0813 19:59:42.415763 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 2025-08-13T19:59:42.439443888+00:00 stderr F I0813 19:59:42.438475 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:42.455607459+00:00 stderr F E0813 19:59:42.455540 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:42.455983369+00:00 stderr F I0813 19:59:42.455959 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2025-08-13T19:59:42.456028521+00:00 stderr F I0813 19:59:42.456015 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 
2025-08-13T19:59:42.456351400+00:00 stderr F I0813 19:59:42.456328 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:42.456389951+00:00 stderr F I0813 19:59:42.456377 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:42.456532865+00:00 stderr F I0813 19:59:42.456516 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-08-13T19:59:42.456565556+00:00 stderr F I0813 19:59:42.456553 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-08-13T19:59:42.459898211+00:00 stderr F I0813 19:59:42.459871 1 trace.go:236] Trace[761031083]: "DeltaFIFO Pop Process" ID:system:controller:pvc-protection-controller,Depth:106,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.274) (total time: 185ms): 2025-08-13T19:59:42.459898211+00:00 stderr F Trace[761031083]: [185.063576ms] [185.063576ms] END 2025-08-13T19:59:42.515986220+00:00 stderr F I0813 19:59:42.514103 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:42.515986220+00:00 stderr F I0813 19:59:42.515190 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.519198 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.520748 1 base_controller.go:73] Caches are synced for ClusterMemberController 2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.520766 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 2025-08-13T19:59:42.529508635+00:00 stderr F I0813 19:59:42.529367 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:42.541220119+00:00 stderr F I0813 19:59:42.535950 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:42.541220119+00:00 stderr F I0813 19:59:42.536164 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:42.542379502+00:00 stderr F I0813 19:59:42.542265 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:43.388393107+00:00 stderr F I0813 19:59:43.174330 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:43.437977401+00:00 stderr F I0813 19:59:43.437366 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:43.442007086+00:00 stderr F I0813 19:59:43.190035 1 trace.go:236] Trace[790508828]: "DeltaFIFO Pop Process" ID:system:openshift:controller:serviceaccount-controller,Depth:41,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.552) (total time: 637ms): 2025-08-13T19:59:43.442007086+00:00 stderr F Trace[790508828]: [637.43805ms] [637.43805ms] END 2025-08-13T19:59:43.461142281+00:00 stderr F I0813 19:59:43.190583 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:43.461142281+00:00 stderr F I0813 19:59:43.460085 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T19:59:43.469627043+00:00 stderr F I0813 19:59:43.235059 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:43.469885710+00:00 stderr F E0813 19:59:43.240476 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T19:59:43.470704823+00:00 stderr F E0813 19:59:43.240674 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:43.470889159+00:00 stderr F I0813 19:59:43.248768 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T19:59:43.477025134+00:00 stderr F I0813 19:59:43.476267 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"EnvVarController_Error::EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:43.494418570+00:00 stderr F I0813 19:59:43.255933 1 helpers.go:184] lister was stale at resourceVersion=28242, live get showed resourceVersion=28318 2025-08-13T19:59:43.494418570+00:00 stderr F E0813 19:59:43.329586 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found 2025-08-13T19:59:43.494418570+00:00 stderr F E0813 19:59:43.353066 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.513639467+00:00 stderr F I0813 19:59:43.512527 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-08-13T19:59:43.513639467+00:00 stderr F I0813 19:59:43.512590 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 
2025-08-13T19:59:43.687120012+00:00 stderr F E0813 19:59:43.683712 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:43.926955579+00:00 stderr F E0813 19:59:43.891241 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T19:59:44.024683875+00:00 stderr F I0813 19:59:44.017254 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values","reason":"EnvVarController_Error::EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady::ScriptController_Error","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:44.115764081+00:00 stderr F I0813 19:59:44.040471 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" 2025-08-13T19:59:44.115764081+00:00 stderr F I0813 19:59:44.066649 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:44.115764081+00:00 stderr F E0813 19:59:44.076233 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:44.460925990+00:00 stderr F I0813 19:59:44.406579 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:44.491474701+00:00 stderr F E0813 19:59:44.491373 1 base_controller.go:268] FSyncController 
reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:44.501905558+00:00 stderr F I0813 19:59:44.499984 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values","reason":"EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady::ScriptController_Error","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:44.510090791+00:00 stderr F I0813 19:59:44.507255 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" 2025-08-13T19:59:45.405026942+00:00 stderr F E0813 19:59:45.402517 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:46.080965999+00:00 stderr F I0813 19:59:46.077699 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not 
found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" 2025-08-13T19:59:46.527543259+00:00 stderr F I0813 19:59:46.523634 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:46.544948805+00:00 stderr F I0813 19:59:46.544689 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:46.832987076+00:00 stderr F E0813 19:59:46.821501 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:47.102551730+00:00 stderr F I0813 19:59:47.102160 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:47.363246801+00:00 stderr F I0813 19:59:47.317985 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:47.363246801+00:00 stderr F I0813 19:59:47.334353 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded changed from True to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found") 2025-08-13T19:59:47.767981449+00:00 stderr F I0813 19:59:47.766996 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T19:59:48.064737628+00:00 stderr F I0813 19:59:48.051566 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:48.064737628+00:00 stderr F I0813 19:59:48.055625 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:48.611643899+00:00 stderr F I0813 19:59:48.609906 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members 
found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:48.611643899+00:00 stderr F I0813 19:59:48.610301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T19:59:48.812943397+00:00 stderr F E0813 19:59:48.806732 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:49.572577771+00:00 stderr F E0813 19:59:49.572209 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T19:59:51.830068133+00:00 stderr F I0813 19:59:51.825763 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.928955572+00:00 stderr F E0813 19:59:51.927523 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.961419857+00:00 stderr F I0813 19:59:51.961204 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.961109858 +0000 UTC))" 2025-08-13T19:59:51.961559271+00:00 stderr F I0813 19:59:51.961543 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.96152121 +0000 UTC))" 2025-08-13T19:59:51.961682245+00:00 stderr F I0813 19:59:51.961664 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.961640153 +0000 UTC))" 2025-08-13T19:59:51.961743026+00:00 stderr F I0813 19:59:51.961724 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.961704295 +0000 UTC))" 2025-08-13T19:59:51.961958763+00:00 stderr F I0813 19:59:51.961828 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.961762997 +0000 UTC))" 2025-08-13T19:59:51.962028875+00:00 stderr F I0813 19:59:51.962009 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.961990933 +0000 UTC))" 2025-08-13T19:59:51.962091646+00:00 stderr F I0813 19:59:51.962076 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.962050705 +0000 UTC))" 2025-08-13T19:59:51.962191349+00:00 stderr F I0813 19:59:51.962170 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.962119247 +0000 UTC))" 2025-08-13T19:59:51.962244911+00:00 stderr F I0813 19:59:51.962231 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.96221247 +0000 UTC))" 2025-08-13T19:59:51.962662473+00:00 stderr F I0813 19:59:51.962641 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] 
validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 19:59:51.962616281 +0000 UTC))" 2025-08-13T19:59:51.963046684+00:00 stderr F I0813 19:59:51.963019 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 19:59:51.963000952 +0000 UTC))" 2025-08-13T19:59:54.765899009+00:00 stderr F E0813 19:59:54.737133 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:05.196145423+00:00 stderr F E0813 20:00:05.186288 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:05.765550329+00:00 stderr F I0813 20:00:05.765069 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.765023964 +0000 UTC))" 2025-08-13T20:00:05.783205133+00:00 stderr F I0813 20:00:05.783177 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.783096489 +0000 UTC))" 2025-08-13T20:00:05.783328776+00:00 stderr F I0813 20:00:05.783309 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783252924 +0000 UTC))" 2025-08-13T20:00:05.783385568+00:00 stderr F I0813 20:00:05.783368 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783351887 +0000 UTC))" 2025-08-13T20:00:05.783446519+00:00 stderr F I0813 20:00:05.783426 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783409788 +0000 UTC))" 2025-08-13T20:00:05.783515761+00:00 stderr F I0813 20:00:05.783502 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783485201 +0000 UTC))" 2025-08-13T20:00:05.783586993+00:00 stderr F I0813 20:00:05.783570 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783552522 +0000 UTC))" 2025-08-13T20:00:05.783645745+00:00 stderr F I0813 20:00:05.783629 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783610724 +0000 UTC))" 2025-08-13T20:00:05.783709567+00:00 stderr F I0813 20:00:05.783694 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.783672236 +0000 UTC))" 2025-08-13T20:00:05.783764869+00:00 stderr F I0813 20:00:05.783746 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783731698 +0000 UTC))" 2025-08-13T20:00:05.784556051+00:00 stderr F I0813 20:00:05.784526 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.78450408 +0000 UTC))" 2025-08-13T20:00:05.838385956+00:00 stderr F I0813 20:00:05.834488 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:05.784988084 +0000 UTC))" 2025-08-13T20:00:25.750118848+00:00 stderr F E0813 20:00:25.749354 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:35.469254510+00:00 stderr F E0813 20:00:35.468465 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:40.271403626+00:00 stderr F I0813 20:00:40.269711 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.307409 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308144 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:40.308098593 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:40.308159874 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308233 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:40.308183055 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308251 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:40.308239327 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308269 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308257217 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308288 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308276678 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308307 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308293358 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308333 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308313139 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308353 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:40.30834121 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308399 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.30836468 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308688 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:18 +0000 UTC to 2027-08-13 20:00:19 +0000 UTC (now=2025-08-13 20:00:40.308670389 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.309022 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:40.309005079 +0000 UTC))" 2025-08-13T20:00:40.309778251+00:00 stderr F I0813 20:00:40.309716 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.342161 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cc073c73c8c431e58bd599b22534e379416474b407ab24fcc4003eb26d4d413", new="d7210eb97e9fdfd0251de57fa3139593b93cc605b31992f9f8fc921c7054baed") 2025-08-13T20:00:41.345958686+00:00 stderr F W0813 20:00:41.342771 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.342183 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="052d72315f99f9ec238bcbb8fe50e397b66ccc142d1ac794929899df735a889d", new="8fce782bf4ffc195a605185b97044a1cf85fb8408671072386211fc5d0f7f9b4") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.343639 1 cmd.go:160] exiting because "/var/run/secrets/serving-cert/tls.key" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344070 1 observer_polling.go:120] Observed file "/var/run/configmaps/etcd-service-ca/service-ca.crt" has been modified (old="4aae21eb5e7288ebd1f51edb8217b701366dd5aec958415476bca84ab942e90c", new="51e7a388d2ba2794fb8e557a4c52a736ae262d754d30e8729e9392e40100869b") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344086 1 cmd.go:160] exiting because "/var/run/configmaps/etcd-service-ca/service-ca.crt" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344144 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cc073c73c8c431e58bd599b22534e379416474b407ab24fcc4003eb26d4d413", new="d7210eb97e9fdfd0251de57fa3139593b93cc605b31992f9f8fc921c7054baed") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344153 1 cmd.go:160] exiting because "/var/run/secrets/serving-cert/tls.crt" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344617 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344649 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:41.353961434+00:00 stderr F I0813 20:00:41.352308 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:41.387389708+00:00 stderr F I0813 20:00:41.385105 1 base_controller.go:172] Shutting down EtcdStaticResources ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.393930 1 envvarcontroller.go:209] Shutting down EnvVarController 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394029 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394084 1 base_controller.go:172] Shutting down ScriptController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394116 1 base_controller.go:172] Shutting down DefragController ... 
2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394132 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394148 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394171 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394376 1 base_controller.go:172] Shutting down MachineDeletionHooksController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394470 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394490 1 base_controller.go:172] Shutting down BootstrapTeardownController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394512 1 base_controller.go:172] Shutting down StatusSyncer_etcd ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394526 1 base_controller.go:150] All StatusSyncer_etcd post start hooks have been terminated 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394649 1 base_controller.go:172] Shutting down FSyncController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394665 1 base_controller.go:172] Shutting down EtcdMembersController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394682 1 base_controller.go:172] Shutting down EtcdCertCleanerController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.395094 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450279 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450362 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450540 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450579 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:41.457009103+00:00 stderr F I0813 20:00:41.456673 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:00:41.457009103+00:00 stderr F I0813 20:00:41.456755 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465575 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465631 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465673 1 base_controller.go:172] Shutting down ClusterMemberController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465719 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465751 1 base_controller.go:172] Shutting down EtcdCertSignerController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465860 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465892 1 base_controller.go:172] Shutting down EtcdEndpointsController ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466643 1 base_controller.go:172] Shutting down ClusterMemberRemovalController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466720 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466750 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467204 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467253 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467270 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467280 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467341 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467357 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467454 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467468 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467483 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467495 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467501 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467515 1 base_controller.go:114] Shutting down worker of ClusterMemberController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467527 1 base_controller.go:104] All ClusterMemberController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467548 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467558 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467563 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467576 1 base_controller.go:114] Shutting down worker of EtcdCertSignerController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467586 1 base_controller.go:104] All EtcdCertSignerController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467602 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467618 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467633 1 base_controller.go:114] Shutting down worker of EtcdEndpointsController controller ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467647 1 base_controller.go:104] All EtcdEndpointsController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471415 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471485 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471493 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471525 1 base_controller.go:114] Shutting down worker of MachineDeletionHooksController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471534 1 base_controller.go:104] All MachineDeletionHooksController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471623 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471632 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471642 1 base_controller.go:114] Shutting down worker of BootstrapTeardownController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471650 1 base_controller.go:104] All BootstrapTeardownController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471660 1 base_controller.go:114] Shutting down worker of StatusSyncer_etcd controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471670 1 base_controller.go:104] All StatusSyncer_etcd workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471680 1 base_controller.go:114] Shutting down worker of FSyncController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471688 1 base_controller.go:104] All FSyncController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471697 1 base_controller.go:114] Shutting down worker of EtcdMembersController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471707 1 base_controller.go:104] All EtcdMembersController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471716 1 base_controller.go:114] Shutting down worker of EtcdCertCleanerController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471724 1 base_controller.go:104] All EtcdCertCleanerController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472098 1 base_controller.go:114] Shutting down worker of ClusterMemberRemovalController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472111 1 base_controller.go:104] All ClusterMemberRemovalController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472298 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472906 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 
2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472940 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472949 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473459 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473707 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473762 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474073 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474175 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474217 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474233 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474250 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474259 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475635 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475688 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475742 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:41.475882591+00:00 stderr F I0813 20:00:41.475872 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530027 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530740 1 base_controller.go:114] Shutting down worker of EtcdStaticResources controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530756 1 base_controller.go:104] All EtcdStaticResources workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530780 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530956 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530971 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534080 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534227 1 base_controller.go:114] Shutting down worker of DefragController controller ... 
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534240 1 base_controller.go:104] All DefragController workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534251 1 base_controller.go:114] Shutting down worker of ScriptController controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534258 1 base_controller.go:104] All ScriptController workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534280 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534311 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534320 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534325 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:00:41.541754909+00:00 stderr F I0813 20:00:41.541461 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:41.541754909+00:00 stderr F I0813 20:00:41.541554 1 builder.go:329] server exited 2025-08-13T20:00:41.551133457+00:00 stderr F I0813 20:00:41.542312 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="052d72315f99f9ec238bcbb8fe50e397b66ccc142d1ac794929899df735a889d", new="8fce782bf4ffc195a605185b97044a1cf85fb8408671072386211fc5d0f7f9b4") 2025-08-13T20:00:41.551133457+00:00 stderr F W0813 20:00:41.542993 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operato0000644000175000017500000053367315073043233033061 0ustar zuulzuul2025-10-13T00:14:59.716712730+00:00 stderr F I1013 00:14:59.716516 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/ 2025-10-13T00:14:59.719675209+00:00 stderr F I1013 00:14:59.719106 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:59.719675209+00:00 stderr F I1013 00:14:59.719573 1 cmd.go:240] Using service-serving-cert provided certificates 2025-10-13T00:14:59.719675209+00:00 stderr F I1013 00:14:59.719638 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:59.720665678+00:00 stderr F I1013 00:14:59.720093 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:59.879450645+00:00 stderr F I1013 00:14:59.879379 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145 2025-10-13T00:15:00.270457250+00:00 stderr F I1013 00:15:00.268526 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:00.270457250+00:00 stderr F W1013 00:15:00.269207 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-10-13T00:15:00.270457250+00:00 stderr F W1013 00:15:00.269218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:00.275375497+00:00 stderr F I1013 00:15:00.275291 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:00.275757499+00:00 stderr F I1013 00:15:00.275677 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:00.275795570+00:00 stderr F I1013 00:15:00.275760 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:00.275980135+00:00 stderr F I1013 00:15:00.275927 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:00.276485191+00:00 stderr F I1013 00:15:00.276422 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:00.276761599+00:00 stderr F I1013 00:15:00.275954 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:00.279422809+00:00 stderr F I1013 00:15:00.279391 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:00.279422809+00:00 stderr F I1013 00:15:00.279401 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:00.279492451+00:00 stderr F I1013 00:15:00.279454 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:00.279538242+00:00 stderr F I1013 00:15:00.279509 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:00.280559263+00:00 stderr F I1013 00:15:00.280503 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock... 
2025-10-13T00:15:00.379928870+00:00 stderr F I1013 00:15:00.379876 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:00.380219469+00:00 stderr F I1013 00:15:00.380189 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:00.402627140+00:00 stderr F I1013 00:15:00.401681 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:20:27.648422864+00:00 stderr F I1013 00:20:27.645743 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock 2025-10-13T00:20:27.649504644+00:00 stderr F I1013 00:20:27.649421 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41974", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_e5610b9d-a0f9-47cb-916f-9d97b535241f became leader 2025-10-13T00:20:27.657701954+00:00 stderr F I1013 00:20:27.657639 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0] 2025-10-13T00:20:27.668625060+00:00 stderr F I1013 00:20:27.668388 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:20:27.671149371+00:00 stderr F I1013 00:20:27.671087 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:20:27.671149371+00:00 stderr F I1013 00:20:27.671140 1 starter.go:499] waiting for cluster version informer sync... 
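In the records above, the freshly started operator begins attempting to acquire the openshift-cluster-etcd-operator-lock lease at 00:15:00 and only becomes leader at 00:20:27. The lock is a standard coordination.k8s.io Lease object, so its current holder and renew time can be inspected directly:

$ oc get lease -n openshift-etcd-operator openshift-cluster-etcd-operator-lock -o yaml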
2025-10-13T00:20:27.671192372+00:00 stderr F I1013 00:20:27.671175 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:20:27.773910625+00:00 stderr F I1013 00:20:27.773814 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers... 
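The "FeatureGates initialized" line and the FeatureGatesInitialized event above record which gates the operator considers enabled or disabled for this 4.16 payload, derived from the cluster-wide FeatureGate configuration. To see the object the operator is consuming (the singleton is conventionally named cluster; that name is an assumption here, not something printed in this log):

$ oc get featuregate cluster -o yaml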
2025-10-13T00:20:27.774549573+00:00 stderr F I1013 00:20:27.774374 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController 2025-10-13T00:20:27.774549573+00:00 stderr F I1013 00:20:27.774400 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController 2025-10-13T00:20:27.779961205+00:00 stderr F I1013 00:20:27.779871 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController 2025-10-13T00:20:27.779961205+00:00 stderr F I1013 00:20:27.779888 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController 2025-10-13T00:20:27.779961205+00:00 stderr F I1013 00:20:27.779918 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:20:27.780534391+00:00 stderr F I1013 00:20:27.780491 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd 2025-10-13T00:20:27.780534391+00:00 stderr F I1013 00:20:27.780520 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:20:27.780554581+00:00 stderr F I1013 00:20:27.780535 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController 2025-10-13T00:20:27.780554581+00:00 stderr F I1013 00:20:27.780551 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController 2025-10-13T00:20:27.780564362+00:00 stderr F I1013 00:20:27.780556 1 base_controller.go:73] Caches are synced for EtcdMembersController 2025-10-13T00:20:27.780572912+00:00 stderr F I1013 00:20:27.780561 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ... 2025-10-13T00:20:27.780615243+00:00 stderr F I1013 00:20:27.780594 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2025-10-13T00:20:27.780648994+00:00 stderr F I1013 00:20:27.780625 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:20:27.780648994+00:00 stderr F I1013 00:20:27.780644 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2025-10-13T00:20:27.780674315+00:00 stderr F I1013 00:20:27.780657 1 base_controller.go:67] Waiting for caches to sync for DefragController 2025-10-13T00:20:27.780697785+00:00 stderr F I1013 00:20:27.780685 1 envvarcontroller.go:193] Starting EnvVarController 2025-10-13T00:20:27.780705746+00:00 stderr F I1013 00:20:27.780700 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-10-13T00:20:27.780854820+00:00 stderr F I1013 00:20:27.780822 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-10-13T00:20:27.780854820+00:00 stderr F I1013 00:20:27.780841 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-10-13T00:20:27.780867620+00:00 stderr F I1013 00:20:27.780859 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-10-13T00:20:27.780889221+00:00 stderr F I1013 00:20:27.780884 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-10-13T00:20:27.780917512+00:00 stderr F I1013 00:20:27.780899 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-10-13T00:20:27.781024045+00:00 stderr F I1013 00:20:27.780994 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-10-13T00:20:27.781024045+00:00 stderr F I1013 00:20:27.781013 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:20:27.781036865+00:00 stderr F I1013 00:20:27.781026 
1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:20:27.781832387+00:00 stderr F I1013 00:20:27.781755 1 base_controller.go:67] Waiting for caches to sync for FSyncController 2025-10-13T00:20:27.781855438+00:00 stderr F I1013 00:20:27.781834 1 base_controller.go:73] Caches are synced for FSyncController 2025-10-13T00:20:27.781855438+00:00 stderr F I1013 00:20:27.781842 1 base_controller.go:110] Starting #1 worker of FSyncController controller ... 2025-10-13T00:20:27.782557128+00:00 stderr F I1013 00:20:27.781985 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2025-10-13T00:20:27.782557128+00:00 stderr F I1013 00:20:27.782022 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-10-13T00:20:27.782557128+00:00 stderr F I1013 00:20:27.782034 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController 2025-10-13T00:20:27.782882217+00:00 stderr F I1013 00:20:27.779894 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController 2025-10-13T00:20:27.782882217+00:00 stderr F I1013 00:20:27.782860 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ... 2025-10-13T00:20:27.782895227+00:00 stderr F I1013 00:20:27.779868 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-10-13T00:20:27.789564024+00:00 stderr F I1013 00:20:27.789496 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-10-13T00:20:27.794897624+00:00 stderr F E1013 00:20:27.794813 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:27.810334427+00:00 stderr F E1013 00:20:27.808460 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:27.810334427+00:00 stderr F E1013 00:20:27.808897 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.815503492+00:00 stderr F E1013 00:20:27.815457 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.822218200+00:00 stderr F E1013 00:20:27.822169 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:27.825939305+00:00 stderr F E1013 00:20:27.825900 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.846369008+00:00 stderr F E1013 00:20:27.846310 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:27.846429000+00:00 stderr F E1013 00:20:27.846379 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 
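The reconciliation failures above fall into two groups. The "node lister not synced" errors come from controllers whose workers start before the Node informer cache has populated; they are start-up races and typically clear once the "Caches populated for *v1.Node" message appears further down. The FSyncController errors are different: it queries Prometheus through the thanos-querier service in openshift-monitoring, and DNS cannot resolve that service here. Whether the service and its endpoints exist can be checked directly (in a minimal CRC deployment the monitoring stack may simply not be running):

$ oc -n openshift-monitoring get svc thanos-querier
$ oc -n openshift-monitoring get endpoints thanos-querier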
2025-10-13T00:20:27.874629451+00:00 stderr F I1013 00:20:27.874583 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2025-10-13T00:20:27.874629451+00:00 stderr F I1013 00:20:27.874609 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 2025-10-13T00:20:27.876751491+00:00 stderr F I1013 00:20:27.876722 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2025-10-13T00:20:27.876751491+00:00 stderr F I1013 00:20:27.876736 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 2025-10-13T00:20:27.880574198+00:00 stderr F I1013 00:20:27.880525 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2025-10-13T00:20:27.880574198+00:00 stderr F I1013 00:20:27.880556 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 2025-10-13T00:20:27.881517054+00:00 stderr F I1013 00:20:27.881471 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:20:27.881517054+00:00 stderr F I1013 00:20:27.881509 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:20:27.881545515+00:00 stderr F I1013 00:20:27.881513 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:20:27.881545515+00:00 stderr F I1013 00:20:27.881526 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:20:27.881545515+00:00 stderr F I1013 00:20:27.881537 1 base_controller.go:73] Caches are synced for PruneController 2025-10-13T00:20:27.881556085+00:00 stderr F I1013 00:20:27.881545 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-10-13T00:20:27.881625637+00:00 stderr F I1013 00:20:27.881594 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2025-10-13T00:20:27.881625637+00:00 stderr F I1013 00:20:27.881618 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 2025-10-13T00:20:27.881778132+00:00 stderr F I1013 00:20:27.881759 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:27.881778132+00:00 stderr F I1013 00:20:27.881494 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2025-10-13T00:20:27.881790472+00:00 stderr F I1013 00:20:27.881778 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 2025-10-13T00:20:27.881790472+00:00 stderr F I1013 00:20:27.881785 1 base_controller.go:73] Caches are synced for RevisionController 2025-10-13T00:20:27.881799222+00:00 stderr F I1013 00:20:27.881790 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-10-13T00:20:27.881819323+00:00 stderr F I1013 00:20:27.881764 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:20:27.881828833+00:00 stderr F I1013 00:20:27.881817 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:20:27.881872494+00:00 stderr F I1013 00:20:27.881848 1 base_controller.go:73] Caches are synced for InstallerController 2025-10-13T00:20:27.881872494+00:00 stderr F I1013 00:20:27.881861 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2025-10-13T00:20:27.882032819+00:00 stderr F I1013 00:20:27.882009 1 base_controller.go:73] Caches are synced for ScriptController 2025-10-13T00:20:27.882032819+00:00 stderr F I1013 00:20:27.882022 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2025-10-13T00:20:27.882059870+00:00 stderr F I1013 00:20:27.882039 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-10-13T00:20:27.882059870+00:00 stderr F I1013 00:20:27.882045 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-10-13T00:20:27.882069070+00:00 stderr F I1013 00:20:27.882064 1 base_controller.go:73] Caches are synced for DefragController 2025-10-13T00:20:27.882089340+00:00 stderr F I1013 00:20:27.882069 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2025-10-13T00:20:27.882233644+00:00 stderr F I1013 00:20:27.882207 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-10-13T00:20:27.882233644+00:00 stderr F I1013 00:20:27.882222 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-10-13T00:20:27.882348258+00:00 stderr F I1013 00:20:27.881811 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-10-13T00:20:27.882363588+00:00 stderr F I1013 00:20:27.882343 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-10-13T00:20:27.882478441+00:00 stderr F I1013 00:20:27.882441 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:27.882478441+00:00 stderr F E1013 00:20:27.882460 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.882490292+00:00 stderr F I1013 00:20:27.881517 1 base_controller.go:73] Caches are synced for ClusterMemberController 2025-10-13T00:20:27.882565284+00:00 stderr F I1013 00:20:27.882497 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 
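The clusteroperator/etcd diff entries that start here show the status controller computing condition changes (extra Degraded messages while the caches are still syncing) and StatusSyncer_etcd writing them back; the "the object has been modified" errors that follow are ordinary optimistic-concurrency conflicts that the controller retries. The live conditions can be read from the ClusterOperator object; the jsonpath below is just one way to flatten them:

$ oc get clusteroperator etcd \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'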
2025-10-13T00:20:27.882864192+00:00 stderr F E1013 00:20:27.882839 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.883627684+00:00 stderr F E1013 00:20:27.883607 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.889937671+00:00 stderr F I1013 00:20:27.889895 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:27.890260390+00:00 stderr F I1013 00:20:27.890226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:27.891771992+00:00 stderr F E1013 00:20:27.891361 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.891771992+00:00 stderr F E1013 00:20:27.891401 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.891771992+00:00 stderr F E1013 00:20:27.891520 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed 
to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.891771992+00:00 stderr F I1013 00:20:27.891562 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-10-13T00:20:27.891771992+00:00 stderr F I1013 00:20:27.891572 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-10-13T00:20:27.891831124+00:00 stderr F E1013 00:20:27.891810 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.892661997+00:00 stderr F E1013 00:20:27.892634 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:27.896387162+00:00 stderr F E1013 00:20:27.896339 1 base_controller.go:268] ClusterMemberRemovalController reconciliation failed: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.897092351+00:00 stderr F E1013 00:20:27.897041 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.897226355+00:00 stderr F I1013 00:20:27.897178 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:27.897260786+00:00 stderr F E1013 00:20:27.897237 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.900518948+00:00 stderr F E1013 00:20:27.900473 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:20:27.900744744+00:00 stderr F E1013 00:20:27.900714 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.901152965+00:00 stderr F I1013 00:20:27.901122 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members 
found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:27.901832584+00:00 stderr F E1013 00:20:27.901803 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.902109632+00:00 stderr F E1013 00:20:27.902078 1 base_controller.go:268] ClusterMemberRemovalController reconciliation failed: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.903597374+00:00 stderr F E1013 00:20:27.903567 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.903597374+00:00 stderr F E1013 00:20:27.903577 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.908041789+00:00 stderr F I1013 00:20:27.906804 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:27.908041789+00:00 stderr F I1013 00:20:27.907235 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not 
synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:27.910618341+00:00 stderr F E1013 00:20:27.910578 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:20:27.913971255+00:00 stderr F E1013 00:20:27.913934 1 base_controller.go:268] ClusterMemberRemovalController reconciliation failed: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.923529653+00:00 stderr F E1013 00:20:27.923484 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.923761910+00:00 stderr F E1013 00:20:27.923734 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.923946945+00:00 stderr F I1013 00:20:27.923919 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:27.924864601+00:00 stderr F I1013 00:20:27.924833 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members 
found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:27.924953283+00:00 stderr F E1013 00:20:27.924910 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.925686704+00:00 stderr F E1013 00:20:27.925651 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.926168837+00:00 stderr F E1013 00:20:27.926142 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.927251468+00:00 stderr F E1013 00:20:27.927235 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.932718391+00:00 stderr F E1013 00:20:27.932676 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.935097248+00:00 stderr F E1013 00:20:27.935070 1 base_controller.go:268] ClusterMemberRemovalController reconciliation failed: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.943149144+00:00 stderr F E1013 00:20:27.943123 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.944440670+00:00 stderr F E1013 00:20:27.944423 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.945776347+00:00 stderr F E1013 00:20:27.945722 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.953429892+00:00 stderr F E1013 
00:20:27.953249 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.973816854+00:00 stderr F E1013 00:20:27.973743 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.975496421+00:00 stderr F E1013 00:20:27.975459 1 base_controller.go:268] ClusterMemberRemovalController reconciliation failed: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.978989619+00:00 stderr F I1013 00:20:27.978947 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:27.980085060+00:00 stderr F E1013 00:20:27.980047 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:27.981174701+00:00 stderr F I1013 00:20:27.981150 1 base_controller.go:73] Caches are synced for NodeController 2025-10-13T00:20:27.981174701+00:00 stderr F I1013 00:20:27.981168 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-10-13T00:20:27.989084693+00:00 stderr F E1013 00:20:27.989037 1 base_controller.go:268] ClusterMemberController reconciliation failed: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced 2025-10-13T00:20:27.989119744+00:00 stderr F I1013 00:20:27.989089 1 etcdcli_pool.go:70] creating a new cached client 2025-10-13T00:20:27.989601537+00:00 stderr F I1013 00:20:27.989581 1 base_controller.go:73] Caches are synced for GuardController 2025-10-13T00:20:27.989637998+00:00 stderr F I1013 00:20:27.989627 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-10-13T00:20:27.989743581+00:00 stderr F I1013 00:20:27.989719 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:27.990510243+00:00 stderr F E1013 00:20:27.990481 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:27.990549844+00:00 stderr F I1013 00:20:27.990532 1 etcdcli_pool.go:70] creating a new cached client 2025-10-13T00:20:27.990746579+00:00 stderr F I1013 00:20:27.989590 1 etcdcli_pool.go:70] creating a new cached client 2025-10-13T00:20:28.062802381+00:00 stderr F I1013 00:20:28.062689 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:28.064484908+00:00 stderr F I1013 00:20:28.064442 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:28.106753324+00:00 stderr F E1013 00:20:28.106683 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-10-13T00:20:28.143478075+00:00 stderr F E1013 00:20:28.143408 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:28.182537611+00:00 stderr F I1013 00:20:28.182183 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:28.192808869+00:00 stderr F I1013 00:20:28.190517 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:28.260510639+00:00 stderr F E1013 00:20:28.260034 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:20:28.261306361+00:00 stderr F I1013 00:20:28.261289 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:28.382377279+00:00 stderr F I1013 00:20:28.379751 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:28.475033129+00:00 stderr F I1013 00:20:28.474217 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not 
retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:28.476543511+00:00 stderr F E1013 00:20:28.476448 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:28.584522581+00:00 stderr F I1013 00:20:28.584469 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:28.682160561+00:00 stderr F I1013 00:20:28.682093 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-10-13T00:20:28.682160561+00:00 stderr F I1013 00:20:28.682121 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-10-13T00:20:28.792945109+00:00 stderr F I1013 00:20:28.792865 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:28.881654759+00:00 stderr F I1013 00:20:28.881588 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:20:28.881654759+00:00 stderr F I1013 00:20:28.881614 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-10-13T00:20:28.977088546+00:00 stderr F I1013 00:20:28.977000 1 request.go:697] Waited for 1.195067674s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0 2025-10-13T00:20:28.984402042+00:00 stderr F I1013 00:20:28.982355 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:28.993542488+00:00 stderr F I1013 00:20:28.992411 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:28.993542488+00:00 stderr F E1013 00:20:28.992470 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-10-13T00:20:28.993542488+00:00 stderr F I1013 00:20:28.993430 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:29.004749593+00:00 stderr F I1013 00:20:29.003301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var 
values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:29.080954080+00:00 stderr F I1013 00:20:29.080876 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:20:29.080954080+00:00 stderr F I1013 00:20:29.080934 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:20:29.121049955+00:00 stderr F E1013 00:20:29.120987 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:29.180422071+00:00 stderr F I1013 00:20:29.178846 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:29.181633605+00:00 stderr F I1013 00:20:29.181600 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-10-13T00:20:29.181633605+00:00 stderr F I1013 00:20:29.181619 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-10-13T00:20:29.379637991+00:00 stderr F I1013 00:20:29.379536 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:20:29.382703847+00:00 stderr F I1013 00:20:29.382668 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-10-13T00:20:29.382703847+00:00 stderr F I1013 00:20:29.382689 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 
2025-10-13T00:20:29.809210155+00:00 stderr F I1013 00:20:29.809076 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:29.810094520+00:00 stderr F I1013 00:20:29.810012 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:29.824076022+00:00 stderr F I1013 00:20:29.823972 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:29.977069805+00:00 stderr F I1013 00:20:29.976978 1 request.go:697] Waited for 2.094266115s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-10-13T00:20:30.401381781+00:00 stderr F I1013 00:20:30.398738 1 prune_controller.go:269] Nothing to prune 
2025-10-13T00:20:30.401381781+00:00 stderr F I1013 00:20:30.400998 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:30.407077961+00:00 stderr F E1013 00:20:30.407019 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:30.421690071+00:00 stderr F I1013 00:20:30.421583 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:30.798141164+00:00 stderr F I1013 00:20:30.798069 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 
members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:30.798817773+00:00 stderr F I1013 00:20:30.798778 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:30.809661858+00:00 stderr F I1013 00:20:30.808740 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:30.977453846+00:00 stderr F I1013 00:20:30.977363 1 request.go:697] Waited for 1.396857536s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-10-13T00:20:31.192428788+00:00 stderr F I1013 00:20:31.192301 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:20:31.192624474+00:00 stderr F I1013 00:20:31.192528 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:20:31.203906800+00:00 stderr F I1013 00:20:31.203808 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-10-13T00:20:32.377410859+00:00 stderr F I1013 00:20:32.377269 1 request.go:697] Waited for 1.181254976s due to client-side 
throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2025-10-13T00:20:32.972068454+00:00 stderr F E1013 00:20:32.971991 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:33.577359279+00:00 stderr F I1013 00:20:33.577234 1 request.go:697] Waited for 1.196338369s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2025-10-13T00:20:38.096567817+00:00 stderr F E1013 00:20:38.096004 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:20:48.341957833+00:00 stderr F E1013 00:20:48.341400 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:21:08.829211416+00:00 stderr F E1013 00:21:08.828550 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951241 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.951204269 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951775 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.951754114 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951799 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.951782014 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951824 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.951804685 +0000 UTC))" 
2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951841 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.951828686 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951858 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.951845086 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951874 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.951862686 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951890 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.951878177 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951907 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.951894297 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951927 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.951915618 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.951944 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.951931868 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 
stderr F I1013 00:21:11.951962 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.951949989 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.952480 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:18 +0000 UTC to 2027-08-13 20:00:19 +0000 UTC (now=2025-10-13 00:21:11.952464423 +0000 UTC))" 2025-10-13T00:21:11.953385337+00:00 stderr F I1013 00:21:11.952729 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314500\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314500\" (2025-10-12 23:14:59 +0000 UTC to 2026-10-12 23:14:59 +0000 UTC (now=2025-10-13 00:21:11.952714599 +0000 UTC))" 2025-10-13T00:21:27.787367362+00:00 stderr F E1013 00:21:27.787026 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:21:49.796644540+00:00 stderr F E1013 00:21:49.796002 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:22:27.660034692+00:00 stderr F E1013 00:22:27.659420 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.791266891+00:00 stderr F E1013 00:22:27.791198 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:22:27.888201238+00:00 stderr F E1013 00:22:27.888059 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.888201238+00:00 stderr F E1013 00:22:27.888188 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.890211302+00:00 stderr F E1013 00:22:27.890157 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-10-13T00:22:27.891543498+00:00 stderr F I1013 00:22:27.891462 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.892288898+00:00 stderr F E1013 00:22:27.892244 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.892467293+00:00 stderr F I1013 00:22:27.892428 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.892707619+00:00 stderr F E1013 00:22:27.892655 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59cd234a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,LastTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-10-13T00:22:27.893525401+00:00 stderr F E1013 00:22:27.893476 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59ed21aa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.892224426 +0000 UTC m=+448.275472082,LastTimestamp:2025-10-13 00:22:27.892224426 
+0000 UTC m=+448.275472082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-10-13T00:22:27.893587943+00:00 stderr F E1013 00:22:27.893566 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.894520338+00:00 stderr F E1013 00:22:27.894482 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.894520338+00:00 stderr F I1013 00:22:27.894506 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.897154419+00:00 stderr F E1013 00:22:27.897103 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.902103352+00:00 stderr F E1013 00:22:27.902055 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.902462221+00:00 stderr F I1013 00:22:27.902417 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.902816221+00:00 stderr F E1013 00:22:27.902779 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.912109051+00:00 stderr F E1013 00:22:27.912039 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.912235584+00:00 stderr F I1013 00:22:27.912166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.929638262+00:00 stderr F E1013 00:22:27.929549 1 
base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.929699994+00:00 stderr F I1013 00:22:27.929629 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.955876828+00:00 stderr F E1013 00:22:27.955797 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.955940320+00:00 stderr F I1013 00:22:27.955865 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.085766851+00:00 stderr F E1013 00:22:28.085689 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.085766851+00:00 stderr F I1013 00:22:28.085744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.285712688+00:00 stderr F E1013 00:22:28.285654 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.485281185+00:00 stderr F E1013 00:22:28.485233 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.485315676+00:00 stderr F I1013 00:22:28.485289 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.684580884+00:00 stderr F E1013 00:22:28.684467 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.687568625+00:00 stderr F E1013 00:22:28.687516 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.687592485+00:00 stderr F I1013 00:22:28.687547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.885600100+00:00 stderr F E1013 00:22:28.885549 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.885623661+00:00 stderr F I1013 00:22:28.885594 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.087991663+00:00 stderr F E1013 00:22:29.087889 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.088039854+00:00 stderr F I1013 00:22:29.088003 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.089047841+00:00 stderr F E1013 00:22:29.089010 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51ea130f909 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-10-13 00:22:29.087852809 +0000 UTC 
m=+449.471100465,LastTimestamp:2025-10-13 00:22:29.087852809 +0000 UTC m=+449.471100465,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-10-13T00:22:29.285893095+00:00 stderr F E1013 00:22:29.285819 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.487506937+00:00 stderr F E1013 00:22:29.486895 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.487506937+00:00 stderr F E1013 00:22:29.487032 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.690378942+00:00 stderr F E1013 00:22:29.687280 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.690378942+00:00 stderr F I1013 00:22:29.687538 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.886014213+00:00 stderr F E1013 00:22:29.885932 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.886014213+00:00 stderr F I1013 00:22:29.885985 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.085264462+00:00 stderr F E1013 00:22:30.085162 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.085560190+00:00 stderr F I1013 00:22:30.085496 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:30.285923468+00:00 stderr F E1013 00:22:30.285840 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:30.486372138+00:00 stderr F E1013 00:22:30.485600 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.486372138+00:00 stderr F E1013 00:22:30.485964 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.486372138+00:00 stderr F I1013 00:22:30.486001 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.685798101+00:00 stderr F E1013 00:22:30.685750 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.886181129+00:00 stderr F E1013 00:22:30.886088 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.886219030+00:00 stderr F I1013 00:22:30.886180 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.086426294+00:00 stderr F E1013 00:22:31.086315 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.086426294+00:00 stderr F I1013 00:22:31.086398 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.286087483+00:00 stderr F E1013 00:22:31.285713 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.286087483+00:00 stderr F I1013 00:22:31.286008 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.484677844+00:00 stderr F E1013 00:22:31.484411 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.486503583+00:00 stderr F E1013 00:22:31.486455 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.486577375+00:00 stderr F I1013 00:22:31.486535 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.686178943+00:00 stderr F E1013 00:22:31.686093 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.883868499+00:00 stderr F I1013 00:22:31.883798 1 request.go:697] Waited for 1.157542108s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-10-13T00:22:31.885634906+00:00 stderr F E1013 00:22:31.885584 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:32.085483251+00:00 stderr F E1013 00:22:32.085384 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.085530932+00:00 stderr F I1013 00:22:32.085456 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.285490859+00:00 stderr F E1013 00:22:32.285413 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.285953022+00:00 stderr F E1013 00:22:32.285914 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.486671910+00:00 stderr F E1013 00:22:32.486573 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.486745602+00:00 stderr F I1013 00:22:32.486668 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.685784534+00:00 stderr F E1013 00:22:32.685674 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.685842406+00:00 stderr F I1013 00:22:32.685774 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.886111051+00:00 stderr F E1013 00:22:32.886017 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.886111051+00:00 stderr F I1013 00:22:32.886090 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-10-13T00:22:33.085912424+00:00 stderr F E1013 00:22:33.085866 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:33.286462518+00:00 stderr F E1013 00:22:33.286383 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:33.486943982+00:00 stderr F E1013 00:22:33.486631 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.486943982+00:00 stderr F I1013 00:22:33.486711 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.683445645+00:00 stderr F I1013 00:22:33.683318 1 request.go:697] Waited for 1.07718002s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2025-10-13T00:22:33.684683400+00:00 stderr F E1013 00:22:33.684635 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.686560752+00:00 stderr F E1013 00:22:33.686464 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.886444420+00:00 stderr F E1013 00:22:33.886377 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.089441414+00:00 stderr F E1013 00:22:34.089379 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.089523537+00:00 stderr F I1013 00:22:34.089456 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.286645498+00:00 stderr F E1013 00:22:34.286285 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:34.467314909+00:00 stderr F E1013 00:22:34.466958 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59cd234a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: 
connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,LastTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-10-13T00:22:34.486084332+00:00 stderr F E1013 00:22:34.486010 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.486197775+00:00 stderr F I1013 00:22:34.486124 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.686358060+00:00 stderr F E1013 00:22:34.686269 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.887774741+00:00 stderr F E1013 00:22:34.887709 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.887981567+00:00 stderr F I1013 00:22:34.887942 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.085140768+00:00 stderr F E1013 00:22:35.085075 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.289272264+00:00 stderr F E1013 00:22:35.289200 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:35.453870449+00:00 stderr F E1013 00:22:35.453773 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.453936501+00:00 stderr F I1013 00:22:35.453881 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.694301216+00:00 stderr F E1013 00:22:35.694230 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.771536197+00:00 stderr F E1013 00:22:35.771437 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.771586919+00:00 stderr F I1013 00:22:35.771531 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.813031584+00:00 stderr F E1013 00:22:35.812988 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51ea130f909 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-10-13 00:22:29.087852809 +0000 UTC m=+449.471100465,LastTimestamp:2025-10-13 00:22:29.087852809 +0000 UTC m=+449.471100465,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-10-13T00:22:35.887796876+00:00 stderr F E1013 00:22:35.887659 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.888145996+00:00 stderr F I1013 00:22:35.888026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.083875048+00:00 stderr F I1013 00:22:36.083815 1 request.go:697] Waited for 1.076200797s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-10-13T00:22:36.087698134+00:00 stderr F E1013 00:22:36.087631 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.289747692+00:00 stderr F E1013 00:22:36.289686 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:36.489281760+00:00 stderr F E1013 00:22:36.489174 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:36.886620518+00:00 stderr F E1013 00:22:36.886550 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.886669779+00:00 stderr F I1013 00:22:36.886620 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.084973782+00:00 stderr F E1013 00:22:37.084907 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.289466069+00:00 stderr F E1013 00:22:37.289231 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:37.290188699+00:00 stderr F E1013 00:22:37.289560 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59ed21aa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.892224426 +0000 UTC m=+448.275472082,LastTimestamp:2025-10-13 00:22:27.892224426 +0000 UTC m=+448.275472082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-10-13T00:22:37.686617511+00:00 stderr F E1013 00:22:37.686362 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.883671601+00:00 stderr F I1013 00:22:37.883567 1 request.go:697] Waited for 1.15452853s 
due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-10-13T00:22:37.886200731+00:00 stderr F E1013 00:22:37.885714 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.087865277+00:00 stderr F E1013 00:22:38.087769 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.087933289+00:00 stderr F I1013 00:22:38.087848 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.336954246+00:00 stderr F E1013 00:22:38.336629 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.336954246+00:00 stderr F I1013 00:22:38.336681 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.488604220+00:00 stderr F E1013 00:22:38.488524 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:38.887434509+00:00 stderr F E1013 00:22:38.887366 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.887477351+00:00 stderr F I1013 00:22:38.887438 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.287196335+00:00 stderr F 
E1013 00:22:39.287130 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.487676649+00:00 stderr F E1013 00:22:39.487616 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.487829723+00:00 stderr F I1013 00:22:39.487773 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.689720357+00:00 stderr F E1013 00:22:39.689658 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:39.890652464+00:00 stderr F E1013 00:22:39.890589 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:40.086180671+00:00 stderr F E1013 00:22:40.086130 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.487024476+00:00 stderr F E1013 00:22:40.486938 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.487270723+00:00 stderr F I1013 00:22:40.487161 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.581004334+00:00 stderr F E1013 00:22:40.580948 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.581166529+00:00 stderr F I1013 00:22:40.581129 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.683950312+00:00 stderr F I1013 00:22:40.683880 1 request.go:697] Waited for 1.038376774s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller 2025-10-13T00:22:40.685452194+00:00 stderr F E1013 00:22:40.685416 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:41.087639037+00:00 stderr F E1013 00:22:41.087551 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:41.287049421+00:00 stderr F E1013 00:22:41.286983 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:41.287383921+00:00 stderr F I1013 00:22:41.287062 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", 
Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:41.890044547+00:00 stderr F E1013 00:22:41.889991 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:42.092299121+00:00 stderr F E1013 00:22:42.091851 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.092299121+00:00 stderr F I1013 00:22:42.092175 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.490935705+00:00 stderr F E1013 00:22:42.490129 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.687003147+00:00 stderr F E1013 00:22:42.686690 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.687047278+00:00 stderr F I1013 00:22:42.686772 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.889088856+00:00 stderr F E1013 00:22:42.889023 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:43.287400751+00:00 stderr F E1013 00:22:43.287349 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:43.461996944+00:00 stderr F E1013 00:22:43.461891 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:43.462061756+00:00 stderr F I1013 00:22:43.461996 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:43.688770381+00:00 stderr F E1013 00:22:43.688581 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:43.890035978+00:00 stderr F E1013 00:22:43.889962 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:43.890096259+00:00 stderr F I1013 00:22:43.890053 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:44.470717433+00:00 stderr F E1013 00:22:44.470646 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59cd234a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,LastTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-10-13T00:22:44.489596649+00:00 stderr F E1013 00:22:44.488190 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:44.489596649+00:00 stderr F I1013 00:22:44.488364 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:44.886647949+00:00 stderr F E1013 00:22:44.886559 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.090852816+00:00 stderr F E1013 00:22:45.090791 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:45.294356074+00:00 stderr F E1013 00:22:45.294230 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:45.815669406+00:00 stderr F E1013 00:22:45.815611 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51ea130f909 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-10-13 00:22:29.087852809 +0000 UTC m=+449.471100465,LastTimestamp:2025-10-13 00:22:29.087852809 +0000 UTC m=+449.471100465,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-10-13T00:22:45.887230599+00:00 stderr F E1013 00:22:45.887151 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:46.287202170+00:00 stderr F E1013 00:22:46.285819 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:46.893835288+00:00 stderr F E1013 00:22:46.893216 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:46.893924221+00:00 stderr F I1013 00:22:46.893405 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:47.088497221+00:00 stderr F E1013 00:22:47.088414 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:47.290718234+00:00 stderr F E1013 00:22:47.290652 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:47.290926359+00:00 stderr F E1013 00:22:47.290894 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection 
refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59ed21aa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.892224426 +0000 UTC m=+448.275472082,LastTimestamp:2025-10-13 00:22:27.892224426 +0000 UTC m=+448.275472082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-10-13T00:22:47.487573937+00:00 stderr F E1013 00:22:47.487012 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:47.487573937+00:00 stderr F I1013 00:22:47.487064 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:48.087394495+00:00 stderr F E1013 00:22:48.087286 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:48.886354480+00:00 stderr F E1013 00:22:48.886241 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.089645273+00:00 stderr F E1013 00:22:49.087138 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.290097736+00:00 stderr F E1013 00:22:49.289604 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:49.689459611+00:00 stderr F E1013 00:22:49.688083 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.487014707+00:00 stderr F E1013 00:22:50.486926 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.689738574+00:00 stderr F E1013 00:22:50.689673 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:50.826597876+00:00 stderr F E1013 00:22:50.826539 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.826641247+00:00 stderr F I1013 00:22:50.826619 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.090378924+00:00 stderr F E1013 00:22:51.090309 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:51.487083744+00:00 stderr F E1013 00:22:51.486992 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.487372937+00:00 stderr F E1013 00:22:52.486673 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.487372937+00:00 stderr F I1013 00:22:52.486726 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.685841005+00:00 stderr F E1013 00:22:52.685770 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.888725276+00:00 stderr F E1013 00:22:52.888582 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:53.087339179+00:00 stderr F E1013 00:22:53.087248 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.087402201+00:00 stderr F I1013 00:22:53.087310 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.705424976+00:00 stderr F E1013 00:22:53.705368 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.705473397+00:00 stderr F I1013 00:22:53.705426 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial 
tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:54.056730962+00:00 stderr F E1013 00:22:54.056667 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:54.289052253+00:00 stderr F E1013 00:22:54.288989 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:54.473111500+00:00 stderr F E1013 00:22:54.473057 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.186de51e59cd234a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC m=+448.273375386,LastTimestamp:2025-10-13 00:22:27.89012769 +0000 UTC 
m=+448.273375386,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-10-13T00:22:55.251046140+00:00 stderr F E1013 00:22:55.250954 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:27.792179044+00:00 stderr F E1013 00:23:27.791764 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-10-13T00:23:47.727441671+00:00 stderr F I1013 00:23:47.726760 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:50.489964570+00:00 stderr F I1013 00:23:50.489824 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log
2025-08-13T20:05:30.075912298+00:00 stderr F W0813 20:05:30.075190 1 authoptions.go:112] Flag inactivity-timeout is set to less then 300 seconds and will be ignored! 2025-08-13T20:05:30.788889885+00:00 stderr F I0813 20:05:30.788210 1 main.go:605] Binding to [::]:8443... 2025-08-13T20:05:30.788889885+00:00 stderr F I0813 20:05:30.788264 1 main.go:607] using TLS 2025-08-13T20:05:33.795639346+00:00 stderr F I0813 20:05:33.789942 1 metrics.go:128] serverconfig.Metrics: Update ConsolePlugin metrics... 2025-08-13T20:05:33.904070491+00:00 stderr F I0813 20:05:33.895041 1 metrics.go:138] serverconfig.Metrics: Update ConsolePlugin metrics: &map[] (took 103.776412ms) 2025-08-13T20:05:35.792924520+00:00 stderr F I0813 20:05:35.791269 1 metrics.go:80] usage.Metrics: Count console users... 
2025-08-13T20:05:36.280627718+00:00 stderr F I0813 20:05:36.280048 1 metrics.go:156] usage.Metrics: Update console users metrics: 0 kubeadmin, 0 cluster-admins, 0 developers, 0 unknown/errors (took 488.51125ms) 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log
2025-10-13T00:15:02.130482010+00:00 stderr F W1013 00:15:02.127982 1 authoptions.go:112] Flag inactivity-timeout is set to less then 300 seconds and will be ignored! 2025-10-13T00:15:12.390845577+00:00 stderr F I1013 00:15:12.390175 1 main.go:605] Binding to [::]:8443... 2025-10-13T00:15:12.390845577+00:00 stderr F I1013 00:15:12.390830 1 main.go:607] using TLS 2025-10-13T00:15:15.392483590+00:00 stderr F I1013 00:15:15.392433 1 metrics.go:128] serverconfig.Metrics: Update ConsolePlugin metrics... 2025-10-13T00:15:15.416539371+00:00 stderr F I1013 00:15:15.416487 1 metrics.go:138] serverconfig.Metrics: Update ConsolePlugin metrics: &map[] (took 23.954028ms) 2025-10-13T00:15:17.391240267+00:00 stderr F I1013 00:15:17.390645 1 metrics.go:80] usage.Metrics: Count console users... 
2025-10-13T00:15:17.949814612+00:00 stderr F I1013 00:15:17.949721 1 metrics.go:156] usage.Metrics: Update console users metrics: 0 kubeadmin, 0 cluster-admins, 0 developers, 0 unknown/errors (took 558.463252ms) 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log
2025-10-13T00:14:58.764035606+00:00 stderr F I1013 00:14:58.763523 1 cmd.go:240] Using service-serving-cert provided certificates 2025-10-13T00:14:58.764279863+00:00 stderr F I1013 00:14:58.764249 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:58.765987295+00:00 stderr F I1013 00:14:58.765955 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:58.889483895+00:00 stderr F I1013 00:14:58.888072 1 builder.go:298] openshift-apiserver-operator version - 2025-10-13T00:14:59.526832491+00:00 stderr F I1013 00:14:59.525997 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:14:59.526971385+00:00 stderr F W1013 00:14:59.526956 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:59.527003856+00:00 stderr F W1013 00:14:59.526993 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:59.537935333+00:00 stderr F I1013 00:14:59.537884 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:14:59.538545182+00:00 stderr F I1013 00:14:59.538518 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 
2025-10-13T00:14:59.545347675+00:00 stderr F I1013 00:14:59.544701 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:14:59.545347675+00:00 stderr F I1013 00:14:59.544770 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:14:59.546036306+00:00 stderr F I1013 00:14:59.545706 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:14:59.546036306+00:00 stderr F I1013 00:14:59.545777 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:14:59.546036306+00:00 stderr F I1013 00:14:59.545907 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:14:59.548188810+00:00 stderr F I1013 00:14:59.548165 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:14:59.548231802+00:00 stderr F I1013 00:14:59.548221 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:59.548506750+00:00 stderr F I1013 00:14:59.548484 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:14:59.548688465+00:00 stderr F I1013 00:14:59.548677 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:59.646499646+00:00 stderr F I1013 00:14:59.646166 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:14:59.650452335+00:00 stderr F I1013 00:14:59.650401 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:59.650452335+00:00 stderr F I1013 00:14:59.650429 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:19:31.187954108+00:00 stderr F I1013 00:19:31.186437 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-10-13T00:19:31.188301369+00:00 stderr F I1013 00:19:31.187791 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41770", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_ed4eb833-40e4-4d04-90e7-1eb6b4811839 became leader 2025-10-13T00:19:31.198932075+00:00 stderr F I1013 00:19:31.198767 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:19:31.202814680+00:00 stderr F I1013 00:19:31.202719 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", 
"ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:19:31.202855432+00:00 stderr F I1013 00:19:31.202776 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.223875 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 
00:19:31.223895 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.223936 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.223931 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.223989 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.224031 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.224035 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225196 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225251 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225267 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225283 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225323 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225425 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-10-13T00:19:31.225488795+00:00 stderr F I1013 00:19:31.225472 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-10-13T00:19:31.225570447+00:00 stderr F I1013 00:19:31.225488 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-10-13T00:19:31.225570447+00:00 stderr F I1013 00:19:31.225512 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:19:31.225570447+00:00 stderr F I1013 00:19:31.225559 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-10-13T00:19:31.225625569+00:00 stderr F I1013 00:19:31.225590 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-10-13T00:19:31.225681620+00:00 stderr F I1013 00:19:31.225611 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-10-13T00:19:31.324434267+00:00 stderr F I1013 00:19:31.324313 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:19:31.324434267+00:00 stderr F I1013 00:19:31.324378 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:19:31.324434267+00:00 stderr F I1013 00:19:31.324388 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-10-13T00:19:31.324517730+00:00 stderr F I1013 00:19:31.324444 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 
2025-10-13T00:19:31.325520360+00:00 stderr F I1013 00:19:31.325466 1 base_controller.go:73] Caches are synced for RevisionController 2025-10-13T00:19:31.325520360+00:00 stderr F I1013 00:19:31.325480 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-10-13T00:19:31.325639623+00:00 stderr F I1013 00:19:31.325553 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:19:31.325639623+00:00 stderr F I1013 00:19:31.325623 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:19:31.325689635+00:00 stderr F I1013 00:19:31.325652 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:19:31.325689635+00:00 stderr F I1013 00:19:31.325675 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:19:31.325689635+00:00 stderr F I1013 00:19:31.325668 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-10-13T00:19:31.325713945+00:00 stderr F I1013 00:19:31.325703 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2025-10-13T00:19:31.402408236+00:00 stderr F I1013 00:19:31.402280 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:19:31.426459151+00:00 stderr F I1013 00:19:31.426375 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-10-13T00:19:31.426459151+00:00 stderr F I1013 00:19:31.426407 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-10-13T00:19:31.426770240+00:00 stderr F I1013 00:19:31.426706 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-10-13T00:19:31.433604634+00:00 stderr F I1013 00:19:31.433526 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:31.484987932+00:00 stderr F I1013 00:19:31.484879 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:19:31.627526721+00:00 stderr F I1013 00:19:31.627456 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:31.725756082+00:00 stderr F I1013 00:19:31.725635 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-10-13T00:19:31.725945267+00:00 stderr F I1013 00:19:31.725850 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-10-13T00:19:31.725945267+00:00 stderr F I1013 00:19:31.725884 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-10-13T00:19:31.725945267+00:00 stderr F I1013 00:19:31.725667 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-10-13T00:19:31.725945267+00:00 stderr F I1013 00:19:31.725906 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 
2025-10-13T00:19:31.726103552+00:00 stderr F I1013 00:19:31.725884 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-10-13T00:19:31.827408283+00:00 stderr F I1013 00:19:31.827300 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:32.027059351+00:00 stderr F I1013 00:19:32.026930 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:32.227324227+00:00 stderr F I1013 00:19:32.227198 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:32.424647745+00:00 stderr F I1013 00:19:32.424519 1 request.go:697] Waited for 1.199388747s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps?limit=500&resourceVersion=0 2025-10-13T00:19:32.430971753+00:00 stderr F I1013 00:19:32.430888 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:32.524939387+00:00 stderr F I1013 00:19:32.524837 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:19:32.525126683+00:00 stderr F I1013 00:19:32.525084 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-10-13T00:19:32.628640051+00:00 stderr F I1013 00:19:32.628596 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:32.834002779+00:00 stderr F I1013 00:19:32.833927 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:33.026211405+00:00 stderr F I1013 00:19:33.026104 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:33.124524258+00:00 stderr F I1013 00:19:33.124444 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-10-13T00:19:33.124646982+00:00 stderr F I1013 00:19:33.124624 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2025-10-13T00:19:33.227216962+00:00 stderr F I1013 00:19:33.227091 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:33.436263569+00:00 stderr F I1013 00:19:33.436153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:33.624054394+00:00 stderr F I1013 00:19:33.623984 1 request.go:697] Waited for 2.398772505s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps?limit=500&resourceVersion=0 2025-10-13T00:19:33.641589315+00:00 stderr F I1013 00:19:33.641475 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:33.827466963+00:00 stderr F I1013 00:19:33.827395 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:34.027458981+00:00 stderr F I1013 00:19:34.027301 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:34.125715423+00:00 stderr F I1013 00:19:34.125646 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-10-13T00:19:34.125821076+00:00 stderr F I1013 00:19:34.125804 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-10-13T00:19:34.126563148+00:00 stderr F I1013 00:19:34.126501 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-10-13T00:19:34.226414457+00:00 stderr F I1013 00:19:34.226350 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:19:34.324503434+00:00 stderr F I1013 00:19:34.324424 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:19:34.324503434+00:00 stderr F I1013 00:19:34.324474 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:19:34.324589807+00:00 stderr F I1013 00:19:34.324523 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-10-13T00:19:34.324589807+00:00 stderr F I1013 00:19:34.324536 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-10-13T00:19:34.325709000+00:00 stderr F I1013 00:19:34.325662 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-10-13T00:19:34.325709000+00:00 stderr F I1013 00:19:34.325689 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-10-13T00:19:34.325709000+00:00 stderr F I1013 00:19:34.325700 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-10-13T00:19:34.325725461+00:00 stderr F I1013 00:19:34.325706 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 
2025-10-13T00:19:34.325763562+00:00 stderr F I1013 00:19:34.325739 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-10-13T00:19:34.325763562+00:00 stderr F I1013 00:19:34.325758 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-10-13T00:19:34.325799753+00:00 stderr F I1013 00:19:34.325782 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-10-13T00:19:34.325799753+00:00 stderr F I1013 00:19:34.325793 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-10-13T00:19:34.823848414+00:00 stderr F I1013 00:19:34.823787 1 request.go:697] Waited for 3.097696781s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.940853 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.940770268 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.947838 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.947757616 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.947873 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.947857229 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.947937 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.947922971 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.947965 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947944481 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.947995 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947973482 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.948015 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948002733 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.948036 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948021403 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.948054 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.948042134 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.948087 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.948069344 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.948124 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.948099775 +0000 UTC))" 2025-10-13T00:21:11.948240239+00:00 stderr F I1013 00:21:11.948146 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948129136 +0000 UTC))" 2025-10-13T00:21:11.948688501+00:00 stderr F I1013 00:21:11.948524 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-10-13 00:21:11.948508056 +0000 UTC))" 2025-10-13T00:21:11.948886266+00:00 stderr F I1013 00:21:11.948803 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314499\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314499\" (2025-10-12 23:14:58 +0000 UTC to 2026-10-12 23:14:58 +0000 UTC (now=2025-10-13 00:21:11.948786364 +0000 UTC))" 2025-10-13T00:22:11.444741433+00:00 stderr F E1013 00:22:11.444119 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.481743928+00:00 stderr F E1013 00:22:11.481670 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.516814442+00:00 stderr F E1013 00:22:11.515976 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.541415203+00:00 stderr F E1013 00:22:11.539569 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.586175237+00:00 stderr F E1013 00:22:11.586119 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.715873395+00:00 stderr F E1013 00:22:11.715778 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.880753779+00:00 stderr F E1013 00:22:11.880199 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.205173683+00:00 stderr F E1013 00:22:12.205094 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.848314518+00:00 stderr F E1013 00:22:12.848260 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:14.132483811+00:00 stderr F E1013 00:22:14.132308 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:14.145605654+00:00 stderr F E1013 00:22:14.145513 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:15.739845267+00:00 stderr F E1013 00:22:15.739303 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.696918683+00:00 stderr F E1013 00:22:16.696840 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:17.540030416+00:00 stderr F E1013 00:22:17.539972 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.340747871+00:00 stderr F E1013 00:22:19.340555 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.143053538+00:00 stderr F E1013 00:22:21.142726 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.444530976+00:00 stderr F E1013 00:22:21.443999 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.821103323+00:00 stderr F E1013 00:22:21.821029 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.940493785+00:00 stderr F E1013 00:22:22.940423 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.739856663+00:00 stderr F E1013 00:22:24.739226 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.541038911+00:00 stderr F E1013 00:22:26.540393 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.339678379+00:00 stderr F E1013 00:22:28.339612 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation 
failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.139674715+00:00 stderr F E1013 00:22:30.139141 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.215603948+00:00 stderr F E1013 00:22:31.215519 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.335725318+00:00 stderr F E1013 00:22:31.335672 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:31.349246552+00:00 stderr F E1013 00:22:31.349178 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:31.366260749+00:00 
stderr F E1013 00:22:31.366195 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:31.730603537+00:00 stderr F E1013 00:22:31.730559 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.934246284+00:00 stderr F E1013 00:22:31.934192 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:31.940218124+00:00 stderr F E1013 00:22:31.940177 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.128626371+00:00 stderr F E1013 00:22:32.128544 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.528833194+00:00 stderr F E1013 
00:22:32.528775 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.928855191+00:00 stderr F E1013 00:22:32.928802 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.131405357+00:00 stderr F E1013 00:22:33.131321 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:33.329217699+00:00 stderr F E1013 00:22:33.329176 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.729438687+00:00 stderr F E1013 00:22:33.729385 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.739678792+00:00 stderr F E1013 00:22:33.739647 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.129217342+00:00 stderr F E1013 00:22:34.128906 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.331868866+00:00 stderr F E1013 00:22:34.331787 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: 
connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:34.529041968+00:00 stderr F E1013 00:22:34.528976 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.129265148+00:00 stderr F E1013 00:22:35.128839 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.333412574+00:00 stderr F E1013 00:22:35.333315 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:35.541994894+00:00 stderr F E1013 00:22:35.541937 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.933719965+00:00 stderr F E1013 00:22:35.933602 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection 
refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:36.129423616+00:00 stderr F E1013 00:22:36.129376 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.747814311+00:00 stderr F E1013 00:22:36.744075 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:37.411765545+00:00 stderr F E1013 00:22:37.411676 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.032098844+00:00 stderr F E1013 00:22:38.032036 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:39.973534843+00:00 stderr F E1013 00:22:39.973441 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.599876470+00:00 stderr F E1013 00:22:40.599638 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:41.444918729+00:00 stderr F E1013 00:22:41.444823 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.305241163+00:00 stderr F E1013 00:22:42.304649 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:43.992505242+00:00 stderr F E1013 00:22:43.991970 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.094857847+00:00 stderr F E1013 00:22:45.094776 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.586486402+00:00 stderr F E1013 00:22:45.586252 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.728299732+00:00 stderr F E1013 00:22:45.728254 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:51.446859464+00:00 stderr F E1013 00:22:51.446393 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:54.150204926+00:00 stderr F E1013 00:22:54.150155 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:55.335740079+00:00 stderr F E1013 00:22:55.335691 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:25.663393406+00:00 stderr F I1013 00:23:25.659126 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:28.083938210+00:00 stderr F I1013 00:23:28.083428 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:42.646062091+00:00 stderr F I1013 00:23:42.645468 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:47.406963644+00:00 stderr F I1013 00:23:47.406411 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:49.873953821+00:00 stderr F I1013 00:23:49.873841 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:50.059612913+00:00 stderr F I1013 00:23:50.059528 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:50.308932368+00:00 stderr F I1013 00:23:50.308875 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:51.062844868+00:00 stderr F I1013 
00:23:51.062787 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:23:53.340569434+00:00 stderr F I1013 00:23:53.340054 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 ././@LongLink0000644000000000000000000000034100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-op0000644000175000017500000064223615073043234033104 0ustar zuulzuul2025-08-13T20:00:32.623453154+00:00 stderr F I0813 20:00:32.622897 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:32.623453154+00:00 stderr F I0813 20:00:32.623330 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:32.625343068+00:00 stderr F I0813 20:00:32.623961 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:33.276056553+00:00 stderr F I0813 20:00:33.274072 1 builder.go:298] openshift-apiserver-operator version - 2025-08-13T20:00:36.573424133+00:00 stderr F I0813 20:00:36.572654 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:36.573424133+00:00 stderr F W0813 20:00:36.573276 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:36.573424133+00:00 stderr F W0813 20:00:36.573285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.880697 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.883661 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.883892 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.884303 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.884422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.886040 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.886058 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:36.904856823+00:00 stderr F I0813 20:00:36.903498 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:36.904856823+00:00 stderr F I0813 20:00:36.903553 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:36.923594798+00:00 stderr F I0813 20:00:36.920666 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:36.923594798+00:00 stderr F I0813 20:00:36.921349 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 
2025-08-13T20:00:36.989631781+00:00 stderr F I0813 20:00:36.986773 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:36.990036712+00:00 stderr F I0813 20:00:36.990004 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:37.010309200+00:00 stderr F I0813 20:00:37.010248 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:37.044255768+00:00 stderr F I0813 20:00:37.044199 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-08-13T20:00:37.064193987+00:00 stderr F I0813 20:00:37.046499 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"29854", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_a533f076-9102-4f1c-ac58-2cc3fe6b65c6 became leader 2025-08-13T20:00:37.064279739+00:00 stderr F I0813 20:00:37.059666 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:00:37.151322491+00:00 stderr F I0813 20:00:37.151228 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:00:37.203134919+00:00 stderr F I0813 20:00:37.201263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", 
"BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:00:37.642070444+00:00 stderr F I0813 20:00:37.637452 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:00:37.650333960+00:00 stderr F I0813 20:00:37.643338 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:00:37.659630705+00:00 stderr F I0813 20:00:37.650486 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:00:37.659732778+00:00 stderr F I0813 20:00:37.658070 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:00:37.659787410+00:00 stderr F I0813 20:00:37.658093 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:00:37.680132630+00:00 stderr F I0813 20:00:37.680081 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:00:37.680234053+00:00 stderr F I0813 20:00:37.680217 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-08-13T20:00:38.720224967+00:00 stderr F I0813 20:00:38.719249 1 request.go:697] Waited for 1.078839913s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver-operator/secrets?limit=500&resourceVersion=0 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738681 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738733 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738746 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 
2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738760 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738772 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738827 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738905 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739020 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739036 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739050 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:00:38.805124978+00:00 stderr F I0813 20:00:38.797689 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T20:00:38.805124978+00:00 stderr F I0813 20:00:38.798463 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:00:41.851229294+00:00 stderr F I0813 20:00:41.800206 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T20:00:42.094738887+00:00 stderr F I0813 20:00:42.094226 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:41.805143 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:42.094944 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:41.848831 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:00:42.095051716+00:00 stderr F I0813 20:00:42.095011 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T20:00:42.101372966+00:00 stderr F I0813 20:00:42.101290 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:00:42.101372966+00:00 stderr F I0813 20:00:42.101339 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:00:42.102589461+00:00 stderr F I0813 20:00:41.849214 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-08-13T20:00:42.102589461+00:00 stderr F I0813 20:00:42.102575 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:00:42.102870659+00:00 stderr F I0813 20:00:41.849228 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T20:00:42.102870659+00:00 stderr F I0813 20:00:42.102854 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:00:42.103007643+00:00 stderr F I0813 20:00:41.849236 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:00:42.103007643+00:00 stderr F I0813 20:00:42.102939 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T20:00:42.103378783+00:00 stderr F I0813 20:00:41.849559 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:00:42.103378783+00:00 stderr F I0813 20:00:42.103367 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:42.103666 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:41.849576 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:42.103710 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:41.849585 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110192 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:41.849679 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110352 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.849703 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.150683 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.961021 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.150763 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.961108 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151139 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.021017 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151487 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.032626 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151738 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:00:42.203779096+00:00 stderr F I0813 20:00:42.202033 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-08-13T20:00:42.203779096+00:00 stderr F I0813 20:00:42.202077 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T20:00:42.260567555+00:00 stderr F I0813 20:00:42.260259 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.299249098+00:00 stderr F I0813 20:00:42.298608 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.322060519+00:00 stderr F I0813 20:00:42.321963 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.327430152+00:00 stderr F I0813 20:00:42.323555 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.362673617+00:00 stderr F I0813 20:00:42.360270 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:00:42.362673617+00:00 stderr F I0813 20:00:42.360314 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:00:42.670889245+00:00 stderr F I0813 20:00:42.669476 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:00:42.743163146+00:00 stderr F I0813 20:00:42.742372 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:00:42.743163146+00:00 stderr F I0813 20:00:42.742670 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2025-08-13T20:00:45.524920664+00:00 stderr F I0813 20:00:45.508635 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServicesAvailable: PreconditionNotReady","reason":"APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:45.821627595+00:00 stderr F I0813 20:00:45.812347 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: PreconditionNotReady") 2025-08-13T20:01:00.051369613+00:00 stderr F I0813 20:00:59.999945 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.999559696 +0000 UTC))" 2025-08-13T20:01:00.051369613+00:00 stderr F I0813 20:01:00.051334 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.051282991 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051370 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.051347462 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051403 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.051380133 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051429 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051415584 +0000 UTC))" 2025-08-13T20:01:00.051566589+00:00 stderr F I0813 20:01:00.051481 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051435195 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051592 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051564049 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051651 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.05162805 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051670 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.051659281 +0000 UTC))" 2025-08-13T20:01:00.051711783+00:00 stderr F I0813 20:01:00.051697 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.051680092 +0000 UTC))" 2025-08-13T20:01:00.051812326+00:00 stderr F I0813 20:01:00.051723 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051707923 +0000 UTC))" 2025-08-13T20:01:00.052406013+00:00 stderr F I0813 20:01:00.052327 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-08-13 20:01:00.05230566 +0000 UTC))" 2025-08-13T20:01:00.052915467+00:00 stderr F I0813 20:01:00.052691 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115236\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115234\" (2025-08-13 19:00:33 +0000 UTC to 2026-08-13 19:00:33 +0000 UTC (now=2025-08-13 20:01:00.05265489 +0000 UTC))" 2025-08-13T20:01:00.075203073+00:00 stderr F I0813 20:01:00.074877 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:04.674153274+00:00 stderr F I0813 20:01:04.672724 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" 2025-08-13T20:02:16.819652382+00:00 stderr F I0813 20:02:16.817488 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no 
apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:17.836711546+00:00 stderr F I0813 20:02:17.836540 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)" 2025-08-13T20:02:31.903143649+00:00 stderr F E0813 20:02:31.902389 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.915566814+00:00 stderr F E0813 20:02:31.915523 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.929668616+00:00 stderr F E0813 20:02:31.929568 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.953661520+00:00 stderr F E0813 20:02:31.953490 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.998468599+00:00 stderr F E0813 20:02:31.998309 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.094028095+00:00 stderr F E0813 20:02:32.091120 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.123160126+00:00 stderr F E0813 20:02:32.123058 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.132639056+00:00 stderr F E0813 20:02:32.132542 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.149736984+00:00 stderr F E0813 20:02:32.149581 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.174235313+00:00 stderr F E0813 20:02:32.174067 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.218270729+00:00 stderr F E0813 20:02:32.218136 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.300349550+00:00 stderr F E0813 20:02:32.300297 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.500620033+00:00 stderr F E0813 20:02:32.500564 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.700608449+00:00 stderr F E0813 20:02:32.700509 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.901161410+00:00 stderr F E0813 20:02:32.901109 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.225520973+00:00 stderr F E0813 20:02:33.225466 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.345324831+00:00 stderr F E0813 20:02:33.345274 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.888991740+00:00 stderr F E0813 20:02:33.888337 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.632116838+00:00 stderr F E0813 20:02:34.632014 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.172962687+00:00 stderr F E0813 20:02:35.172879 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:37.197114260+00:00 stderr F E0813 20:02:37.197026 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.737456145+00:00 stderr F E0813 20:02:37.737335 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.123017143+00:00 stderr F E0813 20:02:42.122325 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.155753357+00:00 stderr F E0813 20:02:42.155690 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.163199439+00:00 stderr F E0813 20:02:42.162939 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.163199439+00:00 stderr F E0813 20:02:42.163106 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.174492282+00:00 stderr F E0813 20:02:42.174391 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.179982768+00:00 stderr F E0813 20:02:42.179920 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial 
tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.322272097+00:00 stderr F E0813 20:02:42.322144 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.521540853+00:00 stderr F E0813 20:02:42.521307 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.922004457+00:00 stderr F E0813 20:02:42.921908 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.124500944+00:00 stderr F E0813 20:02:43.124391 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:43.323042348+00:00 stderr F E0813 20:02:43.322908 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.521237212+00:00 stderr F E0813 20:02:43.521044 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:02:43.920910073+00:00 stderr F E0813 20:02:43.920768 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.323879629+00:00 stderr F E0813 20:02:44.323719 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:44.521968140+00:00 stderr F E0813 20:02:44.521871 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.124535789+00:00 stderr F E0813 20:02:45.124346 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.319572743+00:00 stderr F E0813 20:02:45.319381 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:45.924420698+00:00 stderr F E0813 20:02:45.924237 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:46.523057565+00:00 stderr F E0813 20:02:46.522935 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:46.720616821+00:00 stderr F E0813 20:02:46.720477 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.336036357+00:00 stderr F E0813 20:02:47.335935 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:47.994396188+00:00 stderr F E0813 20:02:47.993511 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:49.283347107+00:00 stderr F E0813 20:02:49.283216 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.283472481+00:00 stderr F E0813 20:02:49.283445 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:51.855959557+00:00 stderr F E0813 20:02:51.855716 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.123110748+00:00 stderr F E0813 20:02:52.122730 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.567385021+00:00 stderr F E0813 20:02:52.567277 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.409958606+00:00 stderr F E0813 20:02:54.409592 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.099987657+00:00 stderr F E0813 20:02:56.099735 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.992247701+00:00 stderr F E0813 20:02:56.991649 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:02.124640714+00:00 stderr F E0813 20:03:02.123737 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.807227862+00:00 stderr F E0813 20:03:03.807117 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.652390122+00:00 stderr F E0813 20:03:04.652187 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:07.242069628+00:00 stderr F E0813 20:03:07.241944 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:12.125445946+00:00 stderr F E0813 20:03:12.124657 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.051087142+00:00 stderr F E0813 20:03:13.050957 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.132117281+00:00 stderr F E0813 20:03:22.131173 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:25.136296422+00:00 stderr F E0813 20:03:25.136123 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.738604899+00:00 stderr F E0813 20:03:27.738450 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:32.124766154+00:00 stderr F E0813 20:03:32.124621 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.126310690+00:00 stderr F E0813 20:03:42.125532 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.154910106+00:00 stderr F E0813 20:03:42.154730 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.161279778+00:00 stderr F E0813 20:03:42.161177 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:42.205909101+00:00 stderr F I0813 20:03:42.205800 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.207388163+00:00 stderr F E0813 20:03:42.207359 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.213896209+00:00 stderr F I0813 20:03:42.213740 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.215527656+00:00 stderr F E0813 20:03:42.215513 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.227506847+00:00 stderr F I0813 20:03:42.227401 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for 
apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.229187455+00:00 stderr F E0813 20:03:42.229161 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.250562135+00:00 stderr F I0813 20:03:42.250442 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.252163971+00:00 stderr F E0813 20:03:42.252100 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.293449997+00:00 stderr F I0813 20:03:42.293392 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.295256739+00:00 stderr F E0813 20:03:42.295181 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.377169276+00:00 stderr F I0813 20:03:42.377103 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.379393929+00:00 stderr F E0813 20:03:42.379299 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.541071301+00:00 stderr F I0813 20:03:42.540941 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.545997002+00:00 stderr F E0813 20:03:42.545764 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.866756742+00:00 stderr F I0813 20:03:42.866645 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.867915695+00:00 stderr F E0813 20:03:42.867756 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.509672403+00:00 stderr F I0813 20:03:43.509555 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:43Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:43.511247028+00:00 stderr F E0813 20:03:43.511182 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.793050554+00:00 stderr F I0813 20:03:44.792933 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:44Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:44.795308088+00:00 stderr F E0813 20:03:44.795247 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:47.357068477+00:00 stderr F I0813 20:03:47.356945 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:47Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:47.359056754+00:00 stderr F E0813 20:03:47.358985 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.133416455+00:00 stderr F E0813 20:03:52.132768 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.493938720+00:00 stderr F I0813 20:03:52.486096 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:52Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on 
any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:52.493938720+00:00 stderr F E0813 20:03:52.488188 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.024984158+00:00 stderr F E0813 20:03:54.024284 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.105893981+00:00 stderr F E0813 20:03:56.104408 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.137089403+00:00 stderr F E0813 20:04:02.135923 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.739498108+00:00 stderr F I0813 20:04:02.734664 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:02Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:02.740609460+00:00 stderr F E0813 20:04:02.740483 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.102662429+00:00 stderr F E0813 20:04:06.101089 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:08.723089732+00:00 stderr F E0813 20:04:08.722924 1 base_controller.go:268] APIServerStaticResources 
reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:12.136377423+00:00 stderr F E0813 20:04:12.135667 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:22.140944498+00:00 stderr F E0813 20:04:22.139880 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.242015481+00:00 stderr F I0813 20:04:23.241952 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:23Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:23.246984593+00:00 stderr F E0813 20:04:23.246831 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:32.136452333+00:00 stderr F E0813 20:04:32.135741 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.157979619+00:00 stderr F E0813 
20:04:42.157223 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.178114396+00:00 stderr F E0813 20:04:42.177968 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.183583663+00:00 stderr F E0813 20:04:42.183519 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:42.205522291+00:00 stderr F I0813 20:04:42.205452 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:42.207164318+00:00 stderr F E0813 20:04:42.207094 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.143327400+00:00 stderr F E0813 20:04:52.142325 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:04:56.135917062+00:00 stderr F E0813 20:04:56.134039 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:02.150079333+00:00 stderr F E0813 20:05:02.149400 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:04.226529326+00:00 stderr F I0813 20:05:04.223354 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:05:04Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:04.226756632+00:00 stderr F E0813 20:05:04.226667 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.436760004+00:00 stderr F E0813 20:05:12.436000 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.982682525+00:00 stderr F E0813 20:05:15.982157 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.214735049+00:00 stderr F I0813 20:05:42.213508 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:05:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.275706025+00:00 stderr F I0813 20:05:42.275473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)") 2025-08-13T20:05:57.285378861+00:00 stderr F I0813 20:05:57.284355 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:57.523034657+00:00 stderr F I0813 20:05:57.519246 1 core.go:359] ConfigMap "openshift-apiserver/image-import-ca" changes: {"data":{"image-registry.openshift-image-registry.svc..5000":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n","image-registry.openshift-image-registry.svc.cluster.local..5000":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:05:57.525101056+00:00 stderr F I0813 20:05:57.523954 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/image-import-ca -n openshift-apiserver: 2025-08-13T20:05:57.525101056+00:00 stderr F cause by changes in data.image-registry.openshift-image-registry.svc..5000,data.image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:05:57.677947233+00:00 stderr F I0813 20:05:57.676320 1 apps.go:154] Deployment "openshift-apiserver/apiserver" changes: {"metadata":{"annotations":{"operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":"ZjlHVA==","operator.openshift.io/spec-hash":"7538696d7771eb6997d5f9627023b75abea5bcd941bd000eddd83452c44c117a"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":"ZjlHVA=="}},"spec":{"containers":[{"args":["if [ -s /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec openshift-apiserver start --config=/var/run/configmaps/config/config.yaml 
-v=2\n"],"command":["/bin/bash","-ec"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"openshift-apiserver","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":1,"httpGet":{"path":"readyz","port":8443,"scheme":"HTTPS"},"periodSeconds":5,"successThreshold":1,"timeoutSeconds":10},"resources":{"requests":{"cpu":"100m","memory":"200Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"periodSeconds":5,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/lib/kubelet/","name":"node-pullsecrets","readOnly":true},{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/audit","name":"audit"},{"mountPath":"/var/run/secrets/etcd-client","name":"etcd-client"},{"mountPath":"/var/run/configmaps/etcd-serving-ca","name":"etcd-serving-ca"},{"mountPath":"/var/run/configmaps/image-import-ca","name":"image-import-ca"},{"mountPath":"/var/run/configmaps/trusted-ca-bundle","name":"trusted-ca-bundle"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/var/run/secrets/encryption-config","name":"encryption-config"},{"mountPath":"/var/log/openshift-apiserver","name":"audit-dir"}]},{"args":["--listen","0.0.0.0:17698","--namespace","$(POD_NAMESPACE)","--v","2"],"command":["cluster-kube-apiserver-operator","check-endpoints"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","imagePullPolicy":"IfNotPresent","name":"openshift-apiserver-check-endpoints","ports":[{"containerPort":17698,"name":"check-endpoints","protocol":"TCP"}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError"}],"dnsPolicy":null,"initContainers":[{"command":["sh","-c","chmod 0700 /var/log/openshift-apiserver \u0026\u0026 touch /var/log/openshift-apiserver/audit.log \u0026\u0026 chmod 0600 
/var/log/openshift-apiserver/*"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78","imagePullPolicy":"IfNotPresent","name":"fix-audit-permissions","resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"securityContext":{"privileged":true,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/log/openshift-apiserver","name":"audit-dir"}]}],"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"hostPath":{"path":"/var/lib/kubelet/","type":"Directory"},"name":"node-pullsecrets"},{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"audit-1"},"name":"audit"},{"name":"etcd-client","secret":{"defaultMode":384,"secretName":"etcd-client"}},{"configMap":{"name":"etcd-serving-ca"},"name":"etcd-serving-ca"},{"configMap":{"name":"image-import-ca","optional":true},"name":"image-import-ca"},{"name":"serving-cert","secret":{"defaultMode":384,"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle","optional":true},"name":"trusted-ca-bundle"},{"name":"encryption-config","secret":{"defaultMode":384,"optional":true,"secretName":"encryption-config-1"}},{"hostPath":{"path":"/var/log/openshift-apiserver"},"name":"audit-dir"}]}}}} 2025-08-13T20:05:57.712527173+00:00 stderr F I0813 20:05:57.710377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/apiserver -n openshift-apiserver because it changed 2025-08-13T20:05:57.792728349+00:00 stderr F I0813 20:05:57.792468 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31903 2025-08-13T20:06:06.054104191+00:00 stderr F I0813 20:06:06.052307 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.357041686+00:00 stderr F I0813 20:06:06.356939 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.445729146+00:00 stderr F I0813 20:06:06.445316 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31909 2025-08-13T20:06:06.477306380+00:00 stderr F I0813 20:06:06.477196 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31909 2025-08-13T20:06:11.032088889+00:00 stderr F I0813 20:06:11.031411 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:11.299406344+00:00 stderr F I0813 20:06:11.299320 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:13.285884901+00:00 stderr F I0813 20:06:13.284224 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.486063610+00:00 stderr F I0813 20:06:14.485923 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:15.055726412+00:00 stderr F I0813 20:06:15.053898 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 
2025-08-13T20:06:15.459610248+00:00 stderr F I0813 20:06:15.459453 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:17.607433323+00:00 stderr F I0813 20:06:17.607118 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.355976544+00:00 stderr F I0813 20:06:19.354945 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.425608978+00:00 stderr F I0813 20:06:19.425257 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.623244878+00:00 stderr F I0813 20:06:19.623160 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31984 2025-08-13T20:06:19.669516183+00:00 stderr F I0813 20:06:19.668198 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31984 2025-08-13T20:06:19.768531358+00:00 stderr F I0813 20:06:19.768080 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:20.567335762+00:00 stderr F I0813 20:06:20.566920 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:21.953215278+00:00 stderr F I0813 20:06:21.952766 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.176324287+00:00 stderr F I0813 20:06:22.176251 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:06:23.357625934+00:00 stderr F I0813 20:06:23.357518 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:23.807047764+00:00 stderr F I0813 20:06:23.806931 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.827925682+00:00 stderr F E0813 20:06:23.827635 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.829383633+00:00 stderr F I0813 20:06:23.829291 1 reflector.go:351] Caches populated for *v1.Ingress from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:23.834485579+00:00 stderr F I0813 20:06:23.834379 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.842751926+00:00 stderr F E0813 20:06:23.842631 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.855149561+00:00 stderr F I0813 20:06:23.855012 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.862187083+00:00 stderr F E0813 20:06:23.862066 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.884900333+00:00 stderr F I0813 20:06:23.884209 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.891101781+00:00 stderr F E0813 20:06:23.891048 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.932631530+00:00 stderr F I0813 20:06:23.932329 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.940998389+00:00 stderr F E0813 20:06:23.940698 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.024163381+00:00 stderr F I0813 20:06:24.023531 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:06:24.031439149+00:00 stderr F E0813 20:06:24.031375 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.193903412+00:00 stderr F I0813 20:06:24.193399 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.201835099+00:00 stderr F E0813 20:06:24.201172 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.524439637+00:00 stderr F I0813 20:06:24.522204 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.536657377+00:00 stderr F E0813 20:06:24.536589 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.971001895+00:00 stderr F I0813 20:06:24.970928 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:25.177934641+00:00 stderr F I0813 20:06:25.177836 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:25.185500097+00:00 stderr F E0813 20:06:25.185333 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:26.476481046+00:00 stderr F I0813 20:06:26.476125 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:26.492605468+00:00 stderr F E0813 20:06:26.492414 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:29.054767127+00:00 stderr F I0813 20:06:29.053969 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any 
node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:29.065161865+00:00 stderr F E0813 20:06:29.063297 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:32.761019095+00:00 stderr F I0813 20:06:32.749752 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.373923428+00:00 stderr F I0813 20:06:33.370632 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:34.188020078+00:00 stderr F I0813 20:06:34.186083 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:34.201345500+00:00 stderr F E0813 20:06:34.201140 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:34.629962859+00:00 stderr F I0813 20:06:34.629748 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:35.628992572+00:00 stderr F I0813 20:06:35.623979 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:06:37.573096771+00:00 stderr F I0813 20:06:37.559256 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:39.863063066+00:00 stderr F I0813 20:06:39.854404 1 reflector.go:351] Caches populated for *v1.OpenShiftAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:40.874002091+00:00 stderr F I0813 20:06:40.873474 1 reflector.go:351] Caches populated for *v1.Image from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:41.517049087+00:00 stderr F I0813 20:06:41.516981 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:42.215088761+00:00 stderr F I0813 20:06:42.212846 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:42.230288416+00:00 stderr F E0813 20:06:42.228844 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:42.403380339+00:00 stderr F I0813 20:06:42.402470 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.618395174+00:00 stderr F I0813 20:06:42.618271 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:43.381188904+00:00 stderr F I0813 20:06:43.380529 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:43.702493616+00:00 stderr F I0813 20:06:43.702335 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.447973600+00:00 stderr F I0813 20:06:44.446315 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:44.709153819+00:00 stderr F E0813 20:06:44.707647 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:45.663485390+00:00 stderr F I0813 20:06:45.663401 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:46.013983068+00:00 stderr F I0813 20:06:46.013676 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:46.630691410+00:00 stderr F I0813 20:06:46.629730 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:47.052339409+00:00 stderr F I0813 20:06:47.051586 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:48.619540462+00:00 stderr F I0813 20:06:48.619377 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:49.788030533+00:00 stderr F I0813 20:06:49.784534 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:50.214571492+00:00 stderr F I0813 20:06:50.214240 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.329369285+00:00 stderr F I0813 20:06:52.329004 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.581238027+00:00 stderr F I0813 20:06:52.581124 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:53.108701330+00:00 stderr F I0813 20:06:53.106313 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:53.850514579+00:00 stderr F I0813 20:06:53.850039 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:53.852128345+00:00 stderr F I0813 20:06:53.851354 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:54.563951803+00:00 stderr F I0813 20:06:54.563666 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)") 2025-08-13T20:06:57.320102145+00:00 stderr F I0813 20:06:57.319606 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:01.233182636+00:00 stderr F I0813 20:07:01.232496 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:17.890728631+00:00 stderr F I0813 20:07:17.881267 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:18.637895843+00:00 stderr F I0813 20:07:18.635400 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" 2025-08-13T20:07:21.193390350+00:00 stderr F I0813 20:07:21.178719 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:21.265029694+00:00 stderr F I0813 20:07:21.263587 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" 2025-08-13T20:07:24.311204460+00:00 stderr F I0813 20:07:24.310365 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:24.368491813+00:00 stderr F I0813 20:07:24.367473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" 2025-08-13T20:07:27.080709085+00:00 stderr F I0813 20:07:27.078242 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested 
instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:27.121975998+00:00 stderr F I0813 20:07:27.118623 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)" 2025-08-13T20:07:32.597969319+00:00 stderr F I0813 20:07:32.596873 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.667490962+00:00 stderr F I0813 20:07:32.658855 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" 2025-08-13T20:07:32.750329137+00:00 stderr F E0813 20:07:32.750225 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:07:32.750329137+00:00 stderr F apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:07:32.759322405+00:00 stderr F I0813 20:07:32.752724 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServerDeployment_NoPod::APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.792450235+00:00 stderr F I0813 20:07:32.792049 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:07:34.176923719+00:00 stderr F I0813 20:07:34.170758 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods 
available on any node.","reason":"APIServerDeployment_NoPod","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:34.195911053+00:00 stderr F I0813 20:07:34.195827 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node." 2025-08-13T20:07:36.311843249+00:00 stderr F I0813 20:07:36.311067 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:07:36Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:36.346835552+00:00 stderr F I0813 20:07:36.346490 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "All is well",Available changed from False to True ("All is well") 2025-08-13T20:08:32.179388330+00:00 stderr F E0813 20:08:32.178583 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.189343265+00:00 stderr F E0813 20:08:32.189232 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.203144140+00:00 stderr F E0813 20:08:32.203070 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.228753714+00:00 stderr F E0813 20:08:32.228697 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.275595507+00:00 stderr F E0813 20:08:32.275537 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.289332641+00:00 stderr F E0813 20:08:32.289208 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.360314446+00:00 stderr F E0813 20:08:32.360136 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.527515109+00:00 stderr F E0813 20:08:32.527362 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.856120041+00:00 stderr F E0813 20:08:32.855691 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.501560646+00:00 stderr F E0813 20:08:33.501472 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.881626283+00:00 stderr F E0813 20:08:33.881532 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.787856165+00:00 stderr F E0813 20:08:34.786299 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.683511705+00:00 stderr F E0813 20:08:35.682585 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.352861596+00:00 stderr F E0813 20:08:37.352634 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.486967511+00:00 stderr F E0813 20:08:37.486760 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:08:39.283191570+00:00 stderr F E0813 20:08:39.282318 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.096391326+00:00 stderr F E0813 20:08:41.095863 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.181862167+00:00 stderr F E0813 20:08:42.179122 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.207859713+00:00 stderr F E0813 20:08:42.207708 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.208449180+00:00 stderr F E0813 20:08:42.207955 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.229404461+00:00 stderr F E0813 20:08:42.229189 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.229404461+00:00 stderr F E0813 20:08:42.229343 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.268871782+00:00 stderr F E0813 20:08:42.267704 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.382185411+00:00 stderr F E0813 20:08:42.381628 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766076528+00:00 stderr F E0813 20:08:42.766028 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.891857444+00:00 stderr F E0813 20:08:42.889247 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.973855744+00:00 stderr F E0813 20:08:42.971113 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.367543341+00:00 stderr F E0813 20:08:43.367458 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.570761568+00:00 stderr F E0813 20:08:43.570658 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection 
refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.769219928+00:00 stderr F E0813 20:08:43.769164 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.368482849+00:00 stderr F E0813 20:08:44.366601 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.576396050+00:00 stderr F E0813 20:08:44.572921 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.687210467+00:00 stderr F E0813 20:08:44.687127 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.170720400+00:00 stderr F E0813 20:08:45.170091 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.367202854+00:00 stderr F E0813 20:08:45.366522 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.970722057+00:00 stderr F E0813 20:08:45.970545 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.481114281+00:00 stderr F E0813 20:08:46.481039 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.570690708+00:00 stderr F E0813 20:08:46.570554 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.766928924+00:00 stderr F E0813 20:08:46.766834 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation 
failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.389880155+00:00 stderr F E0813 20:08:47.387262 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:48.059283578+00:00 stderr F E0813 20:08:48.059137 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:48.287271614+00:00 stderr F E0813 20:08:48.287221 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.328361563+00:00 stderr F E0813 20:08:49.328167 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.350525429+00:00 stderr F E0813 20:08:49.350239 1 base_controller.go:268] APIServerStaticResources reconciliation failed: 
["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.082487095+00:00 stderr F E0813 20:08:50.081155 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.883630395+00:00 stderr F E0813 20:08:51.883481 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.921413918+00:00 stderr F E0813 20:08:51.921209 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:52.180256249+00:00 stderr F E0813 20:08:52.180202 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.684069314+00:00 stderr F E0813 20:08:53.682653 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation 
failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.453376391+00:00 stderr F E0813 20:08:54.451082 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.419142742+00:00 stderr F E0813 20:08:56.418869 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.052167092+00:00 stderr F E0813 20:08:57.052043 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:09:29.824090490+00:00 stderr F I0813 20:09:29.822973 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:31.240366727+00:00 stderr F I0813 20:09:31.239942 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.295474821+00:00 stderr F I0813 20:09:36.294940 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.472729763+00:00 stderr F I0813 20:09:36.472581 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.011372797+00:00 stderr F I0813 20:09:39.011140 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.190142743+00:00 stderr F I0813 20:09:39.189974 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.949937737+00:00 stderr F I0813 20:09:39.949751 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:40.632052473+00:00 stderr F I0813 20:09:40.631458 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:09:42.697111970+00:00 stderr F I0813 20:09:42.696118 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:09:44.948264682+00:00 stderr F I0813 20:09:44.947880 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.090233883+00:00 stderr F I0813 20:09:45.090158 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.949887510+00:00 stderr F I0813 20:09:45.949400 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:46.360890224+00:00 stderr F I0813 20:09:46.360329 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:48.613127477+00:00 stderr F I0813 20:09:48.612972 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:09:48.923736133+00:00 stderr F I0813 20:09:48.922579 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:49.794622532+00:00 stderr F I0813 20:09:49.793994 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.355175894+00:00 stderr F I0813 20:09:50.352768 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.502105847+00:00 stderr F I0813 20:09:50.500868 1 reflector.go:351] Caches populated for *v1.OpenShiftAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:52.014953081+00:00 stderr F I0813 20:09:52.013366 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.469163333+00:00 stderr F I0813 20:09:52.468239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:09:52.841878240+00:00 stderr F I0813 20:09:52.840162 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:09:53.419225933+00:00 stderr F I0813 20:09:53.416296 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.028120271+00:00 stderr F I0813 20:09:54.026244 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.472161952+00:00 stderr F I0813 20:09:54.471226 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.804352795+00:00 stderr F I0813 20:09:54.803583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:55.208708598+00:00 stderr F I0813 20:09:55.208626 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:56.289377132+00:00 stderr F I0813 20:09:56.288969 1 reflector.go:351] Caches populated for *v1.Image from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.105866032+00:00 stderr F I0813 20:09:57.105686 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:05.900208772+00:00 stderr F I0813 20:10:05.899418 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:12.147163668+00:00 stderr F I0813 20:10:12.146472 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.256298709+00:00 stderr F I0813 20:10:15.242093 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:20.755480033+00:00 stderr F I0813 20:10:20.754899 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:21.980853556+00:00 stderr F I0813 20:10:21.980244 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:23.527055176+00:00 stderr F I0813 20:10:23.526690 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:23.859439076+00:00 stderr F I0813 20:10:23.859356 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:24.663610433+00:00 stderr F I0813 20:10:24.662181 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:10:26.217124734+00:00 stderr F I0813 20:10:26.216375 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.979341987+00:00 stderr F I0813 20:10:27.979198 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.016644348+00:00 stderr F I0813 20:10:29.016063 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:33.261888743+00:00 stderr F I0813 20:10:33.261552 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:34.479462892+00:00 stderr F I0813 20:10:34.478962 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:34.698461010+00:00 stderr F I0813 20:10:34.697436 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:36.213468297+00:00 stderr F I0813 20:10:36.213066 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:37.684223695+00:00 stderr F I0813 20:10:37.684042 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:11:00.169609486+00:00 stderr F I0813 20:11:00.165362 1 request.go:697] Waited for 1.001815503s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/route.openshift.io/v1 2025-08-13T20:21:41.161724381+00:00 stderr F I0813 
20:21:41.160393 1 request.go:697] Waited for 1.000116633s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/template.openshift.io/v1 2025-08-13T20:42:36.411058991+00:00 stderr F I0813 20:42:36.401963 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415098707+00:00 stderr F I0813 20:42:36.414695 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.416342 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.419879 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.421094 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441999143+00:00 stderr F I0813 20:42:36.436903 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454961217+00:00 stderr F I0813 20:42:36.452746 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464151 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464389 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464482 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464527 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464646 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.465696 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.465866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466178 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466359 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466497 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466640 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466873 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466952 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.467003 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.467068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.483443 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.483970 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484189 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484396 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508609013+00:00 stderr F I0813 20:42:36.507478 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.547376721+00:00 stderr F I0813 20:42:36.547282 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.563360252+00:00 stderr F I0813 20:42:36.563269 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.563690941+00:00 stderr F I0813 20:42:36.563595 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.571200458+00:00 stderr F I0813 20:42:36.571058 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.579639761+00:00 stderr F I0813 20:42:36.579547 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.579848497+00:00 stderr F I0813 20:42:36.579727 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.580176156+00:00 stderr F I0813 20:42:36.580117 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585343085+00:00 stderr F I0813 20:42:36.584706 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585343085+00:00 stderr F I0813 20:42:36.585101 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.601554873+00:00 stderr F I0813 20:42:36.601410 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604017204+00:00 stderr F I0813 20:42:36.603073 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604017204+00:00 stderr F I0813 20:42:36.603646 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604410115+00:00 stderr F I0813 20:42:36.604202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.605026003+00:00 stderr F I0813 20:42:36.604971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:41.560395498+00:00 stderr F I0813 20:42:41.559614 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.564204617+00:00 stderr F I0813 20:42:41.562965 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.565885616+00:00 stderr F I0813 20:42:41.565738 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.567214934+00:00 stderr F I0813 20:42:41.567135 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:41.567214934+00:00 stderr F I0813 20:42:41.567198 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-apiserver ... 2025-08-13T20:42:41.567470661+00:00 stderr F I0813 20:42:41.567393 1 base_controller.go:172] Shutting down StatusSyncer_openshift-apiserver ... 2025-08-13T20:42:41.567531943+00:00 stderr F I0813 20:42:41.567493 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:41.567531943+00:00 stderr F I0813 20:42:41.567512 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:41.567577785+00:00 stderr F I0813 20:42:41.567549 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:42:41.567591225+00:00 stderr F I0813 20:42:41.567554 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:42:41.568050328+00:00 stderr F I0813 20:42:41.567536 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.568158161+00:00 stderr F I0813 20:42:41.567213 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:41.568158161+00:00 stderr F I0813 20:42:41.568152 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:42:41.568176892+00:00 stderr F I0813 20:42:41.568166 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.568189682+00:00 stderr F I0813 20:42:41.568179 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:41.568201383+00:00 stderr F I0813 20:42:41.568193 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:42:41.568213603+00:00 stderr F I0813 20:42:41.568206 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:41.568263164+00:00 stderr F I0813 20:42:41.568220 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:41.568278355+00:00 stderr F I0813 20:42:41.568267 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:42:41.568290495+00:00 stderr F I0813 20:42:41.568279 1 base_controller.go:172] Shutting down OpenShiftAPIServerWorkloadController ... 2025-08-13T20:42:41.568302435+00:00 stderr F I0813 20:42:41.568293 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.568314006+00:00 stderr F I0813 20:42:41.568305 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.567455 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.567589 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 
2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.568842 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.568869 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568101 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568948 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568319 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568964 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568974 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568981 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568982 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:41.569004836+00:00 stderr F I0813 20:42:41.568996 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:42:41.569016266+00:00 stderr F I0813 20:42:41.569008 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:41.569027306+00:00 stderr F I0813 20:42:41.569019 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:41.569038277+00:00 stderr F I0813 20:42:41.569030 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:42:41.569049237+00:00 stderr F I0813 20:42:41.569038 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:42:41.569060767+00:00 stderr F I0813 20:42:41.569046 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:42:41.569072158+00:00 stderr F I0813 20:42:41.569059 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:42:41.569083528+00:00 stderr F I0813 20:42:41.569067 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:42:41.569130949+00:00 stderr F I0813 20:42:41.569092 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:42:41.569130949+00:00 stderr F I0813 20:42:41.569126 1 base_controller.go:114] Shutting down worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:42:41.569143190+00:00 stderr F I0813 20:42:41.569135 1 base_controller.go:104] All OpenShiftAPIServerWorkloadController workers have been terminated 2025-08-13T20:42:41.569152760+00:00 stderr F I0813 20:42:41.569145 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:41.569163740+00:00 stderr F I0813 20:42:41.569153 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 
2025-08-13T20:42:41.569217322+00:00 stderr F I0813 20:42:41.569184 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.569217322+00:00 stderr F I0813 20:42:41.569211 1 base_controller.go:104] All NamespaceFinalizerController_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.569276544+00:00 stderr F I0813 20:42:41.569251 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.569276544+00:00 stderr F I0813 20:42:41.569263 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:42:41.569290064+00:00 stderr F I0813 20:42:41.569274 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:42:41.569290064+00:00 stderr F I0813 20:42:41.569284 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:42:41.569302034+00:00 stderr F I0813 20:42:41.569293 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:42:41.569313445+00:00 stderr F I0813 20:42:41.569303 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:42:41.570580971+00:00 stderr F I0813 20:42:41.570511 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.571554339+00:00 stderr F I0813 20:42:41.571477 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:42:41.571822437+00:00 stderr F I0813 20:42:41.571727 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:42:41.571822437+00:00 stderr F I0813 20:42:41.571759 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:42:41.572262430+00:00 stderr F I0813 20:42:41.572191 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.572279030+00:00 stderr F I0813 20:42:41.572263 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:42:41.572570778+00:00 stderr F I0813 20:42:41.572519 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:41.572683722+00:00 stderr F I0813 20:42:41.572279 1 base_controller.go:150] All StatusSyncer_openshift-apiserver post start hooks have been terminated 2025-08-13T20:42:41.572683722+00:00 stderr F I0813 20:42:41.572664 1 base_controller.go:104] All StatusSyncer_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.573062053+00:00 stderr F I0813 20:42:41.572997 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.574001360+00:00 stderr F I0813 20:42:41.573930 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.574739081+00:00 stderr F I0813 20:42:41.574010 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.574865255+00:00 stderr F I0813 20:42:41.574102 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.574896446+00:00 stderr F I0813 20:42:41.574375 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:41.574984268+00:00 stderr F E0813 
20:42:41.574407 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574941 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574958 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574974 1 builder.go:329] server exited 2025-08-13T20:42:41.575146793+00:00 stderr F I0813 20:42:41.574458 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:41.575146793+00:00 stderr F I0813 20:42:41.573201 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:41.576441030+00:00 stderr F I0813 20:42:41.576366 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_a533f076-9102-4f1c-ac58-2cc3fe6b65c6 stopped leading 2025-08-13T20:42:41.578623853+00:00 stderr F W0813 20:42:41.578497 1 leaderelection.go:84] leader election lost home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log0000644000175000017500000024575515073043234033105 0ustar zuulzuul2025-08-13T19:59:34.135123902+00:00 stderr F I0813 19:59:34.115398 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T19:59:34.136554623+00:00 stderr F I0813 19:59:34.136493 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:34.219283961+00:00 stderr F I0813 19:59:34.207593 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:35.552098122+00:00 stderr F I0813 19:59:35.549458 1 builder.go:298] openshift-apiserver-operator version - 2025-08-13T19:59:41.023017032+00:00 stderr F I0813 19:59:41.021715 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:41.023017032+00:00 stderr F W0813 19:59:41.022603 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:41.023017032+00:00 stderr F W0813 19:59:41.022613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:41.121703825+00:00 stderr F I0813 19:59:41.096993 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:41.121703825+00:00 stderr F I0813 19:59:41.097644 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:41.124078033+00:00 stderr F I0813 19:59:41.123190 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:41.133035848+00:00 stderr F I0813 19:59:41.132945 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 2025-08-13T19:59:41.144280409+00:00 stderr F I0813 19:59:41.136102 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.144507235+00:00 stderr F I0813 19:59:41.144422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.145183715+00:00 stderr F I0813 19:59:41.141975 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:41.145266797+00:00 stderr F I0813 19:59:41.145244 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:41.230075334+00:00 stderr F I0813 19:59:41.206005 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:41.230075334+00:00 stderr F I0813 19:59:41.207491 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:41.258358011+00:00 stderr F I0813 19:59:41.234516 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:41.507550514+00:00 stderr F I0813 19:59:41.490492 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:41.622481470+00:00 stderr F I0813 19:59:41.562304 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.562884 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.562933 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.594311 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.638918819+00:00 stderr F E0813 19:59:41.638768 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.642404768+00:00 stderr F E0813 19:59:41.642245 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.687986337+00:00 stderr F E0813 19:59:41.686170 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.792514437+00:00 stderr F E0813 19:59:41.790374 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.854587096+00:00 stderr F E0813 19:59:41.854483 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.854587096+00:00 stderr F E0813 19:59:41.854544 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.924636323+00:00 stderr F I0813 19:59:41.889206 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-08-13T19:59:41.950578953+00:00 stderr F I0813 19:59:41.948675 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.022083821+00:00 stderr F I0813 19:59:42.018917 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28297", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_f4e743f1-18d7-4ed5-bf22-f0d7b2e289da became leader 2025-08-13T19:59:42.022083821+00:00 stderr F E0813 19:59:42.019038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.022083821+00:00 stderr F E0813 19:59:42.019069 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.100370823+00:00 stderr F E0813 19:59:42.100311 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.184075459+00:00 stderr F E0813 19:59:42.184009 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.305423498+00:00 stderr F E0813 19:59:42.266711 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.308877906+00:00 stderr F I0813 19:59:42.308767 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:42.324312196+00:00 stderr F I0813 
19:59:42.314029 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:42.325739457+00:00 stderr F I0813 19:59:42.325498 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", 
"PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:42.505465430+00:00 stderr F E0813 19:59:42.505124 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.587987932+00:00 stderr F E0813 19:59:42.587167 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.157083323+00:00 stderr F E0813 19:59:43.155147 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.243961490+00:00 stderr F E0813 19:59:43.242357 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.423531584+00:00 stderr F I0813 19:59:44.350010 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T19:59:44.423531584+00:00 stderr F I0813 19:59:44.370098 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:44.425127769+00:00 stderr F I0813 19:59:44.425086 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-08-13T19:59:44.425207902+00:00 stderr F I0813 19:59:44.425189 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.428650 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452637 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452689 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452706 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452940 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453112 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453130 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453280 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453303 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453360 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T19:59:44.463005019+00:00 
stderr F I0813 19:59:44.453383 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453447 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453460 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T19:59:44.499400267+00:00 stderr F E0813 19:59:44.497287 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.529484614+00:00 stderr F E0813 19:59:44.528348 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.637050180+00:00 stderr F I0813 19:59:44.636109 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T19:59:44.680926361+00:00 stderr F I0813 19:59:44.678565 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-08-13T19:59:45.346473993+00:00 stderr F I0813 19:59:45.333954 1 trace.go:236] Trace[880847307]: "DeltaFIFO Pop Process" ID:config-operator,Depth:22,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.152) (total time: 180ms): 2025-08-13T19:59:45.346473993+00:00 stderr F Trace[880847307]: [180.65508ms] [180.65508ms] END 2025-08-13T19:59:45.347405809+00:00 stderr F I0813 19:59:45.347363 1 request.go:697] Waited for 1.074015845s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:46.881152029+00:00 stderr F I0813 19:59:46.880389 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.911315 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.880450 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.911661 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T19:59:47.039904944+00:00 stderr F I0813 19:59:47.038817 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T19:59:47.039904944+00:00 stderr F I0813 19:59:47.038944 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T19:59:47.050216138+00:00 stderr F I0813 19:59:47.049173 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:47.050216138+00:00 stderr F I0813 19:59:47.049312 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:47.268890551+00:00 stderr F I0813 19:59:47.268767 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T19:59:47.268993164+00:00 stderr F I0813 19:59:47.268972 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 
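The "Waiting for caches to sync" / "Caches are synced" pairs above are the standard shared-informer bracketing from client-go: each controller blocks until its informers have completed an initial list against the API server. A minimal sketch of that pattern, assuming an in-cluster config and an arbitrary ConfigMap informer (not the operator's actual controller set):

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory with a 10-minute resync period (arbitrary here).
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	factory.Start(ctx.Done())

	// This wait is what produces the "Waiting for caches to sync" /
	// "Caches are synced" messages seen in the log.
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		log.Fatal("caches did not sync")
	}
	log.Println("caches are synced")
}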
2025-08-13T19:59:47.269063136+00:00 stderr F I0813 19:59:47.269043 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T19:59:47.269097157+00:00 stderr F I0813 19:59:47.269085 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T19:59:47.318959819+00:00 stderr F E0813 19:59:47.282138 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.318959819+00:00 stderr F E0813 19:59:47.282332 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.359016360+00:00 stderr F I0813 19:59:47.358950 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T19:59:47.359111993+00:00 stderr F I0813 19:59:47.359088 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T19:59:47.385992089+00:00 stderr F I0813 19:59:47.365671 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:47.385992089+00:00 stderr F I0813 19:59:47.365900 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.147306 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.152040 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151264 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.152162 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151273 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.153572 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151281 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.153614 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:48.178754988+00:00 stderr F I0813 19:59:48.178520 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T19:59:48.178984295+00:00 stderr F I0813 19:59:48.178887 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
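The recurring "missing content for CA bundle ... requestheader-client-ca-file" errors mean the operator is watching the kube-system/extension-apiserver-authentication ConfigMap but the requestheader CA data had not yet been republished after the kube-apiserver restart; the errors stop once the bundle appears. A hedged way to inspect that ConfigMap with client-go (the namespace, name, and keys come from the log; the kubeconfig handling is an assumption for running the check outside the cluster):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig; the operator itself uses in-cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cm, err := client.CoreV1().ConfigMaps("kube-system").
		Get(context.TODO(), "extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// The operator complains while these keys have no content.
	for _, key := range []string{"client-ca-file", "requestheader-client-ca-file"} {
		fmt.Printf("%s: %d bytes\n", key, len(cm.Data[key]))
	}
}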
2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.189988 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151349 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190079 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151410 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190101 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151426 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190641 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T19:59:48.201255449+00:00 stderr F I0813 19:59:48.201208 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.201310881+00:00 stderr F I0813 19:59:48.151535 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:48.201365282+00:00 stderr F I0813 19:59:48.201348 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
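The earlier "Waited for 1.074015845s due to client-side throttling, not priority and fairness" entry and the FastControllerResync warnings about a 10s resync interval both point at client-go's client-side token-bucket rate limiter rather than server-side API priority and fairness. That budget is configured per rest.Config; a hedged sketch follows (the values are illustrative, not what the operator uses):

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: in-cluster
	if err != nil {
		log.Fatal(err)
	}

	// The "client-side throttling" waits come from the per-config token bucket.
	// Raising QPS/Burst (or using a custom RateLimiter) loosens it.
	cfg.QPS = 50
	cfg.Burst = 100

	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // shared by whatever controllers use this config
	log.Println("client configured with QPS=50 Burst=100")
}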
2025-08-13T19:59:48.379765438+00:00 stderr F I0813 19:59:48.377924 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.679660601+00:00 stderr F I0813 19:59:52.678581 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()") 2025-08-13T19:59:52.815968576+00:00 stderr F I0813 19:59:52.814213 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.856690037+00:00 stderr F I0813 19:59:52.856631 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.947888527+00:00 stderr F I0813 19:59:52.930025 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985768 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:52.985684334 +0000 UTC))" 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985892 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:52.98587307 +0000 UTC))" 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985914 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 
13:05:20 +0000 UTC (now=2025-08-13 19:59:52.98589904 +0000 UTC))" 2025-08-13T19:59:52.986020194+00:00 stderr F I0813 19:59:52.985939 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.985919701 +0000 UTC))" 2025-08-13T19:59:52.989767381+00:00 stderr F I0813 19:59:52.989708 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.985948382 +0000 UTC))" 2025-08-13T19:59:52.989870384+00:00 stderr F I0813 19:59:52.989827 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.98974213 +0000 UTC))" 2025-08-13T19:59:52.989922125+00:00 stderr F I0813 19:59:52.989879 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989863123 +0000 UTC))" 2025-08-13T19:59:52.989934526+00:00 stderr F I0813 19:59:52.989918 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989903795 +0000 UTC))" 2025-08-13T19:59:52.989986507+00:00 stderr F I0813 19:59:52.989946 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989924315 +0000 UTC))" 2025-08-13T19:59:52.990365098+00:00 stderr F I0813 19:59:52.990315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 
12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:52.990295666 +0000 UTC))" 2025-08-13T19:59:53.015943127+00:00 stderr F I0813 19:59:52.990714 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 19:59:52.990651836 +0000 UTC))" 2025-08-13T19:59:53.044103870+00:00 stderr F I0813 19:59:52.941961 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.150721773+00:00 stderr F I0813 19:59:54.136572 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.150721773+00:00 stderr F I0813 19:59:54.141652 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.186460882+00:00 stderr F I0813 19:59:54.186325 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:54.186460882+00:00 stderr F I0813 19:59:54.186427 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:55.132750746+00:00 stderr F I0813 19:59:55.131992 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.849174409+00:00 stderr F I0813 19:59:55.847137 1 trace.go:236] Trace[896855870]: "Reflector ListAndWatch" name:k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 (13-Aug-2025 19:59:44.428) (total time: 11413ms): 2025-08-13T19:59:55.849174409+00:00 stderr F Trace[896855870]: ---"Objects listed" error: 11412ms (19:59:55.841) 2025-08-13T19:59:55.849174409+00:00 stderr F Trace[896855870]: [11.413184959s] [11.413184959s] END 2025-08-13T19:59:55.849174409+00:00 stderr F I0813 19:59:55.847586 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T19:59:55.852177834+00:00 stderr F I0813 19:59:55.850951 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:59:55.852177834+00:00 stderr F I0813 19:59:55.851008 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
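The status_controller "diff" entries above show the operator updating the Degraded/Progressing/Available/Upgradeable conditions on the clusteroperator/openshift-apiserver object. A hedged sketch of reading those conditions back with the dynamic client (the resource and object name come from the log; everything else is an assumption):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: in-cluster
	if err != nil {
		log.Fatal(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	// clusteroperators.config.openshift.io is cluster-scoped.
	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
	co, err := dyn.Resource(gvr).Get(context.TODO(), "openshift-apiserver", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
	for _, c := range conds {
		m, ok := c.(map[string]interface{})
		if !ok {
			continue
		}
		fmt.Printf("%v=%v (%v): %v\n", m["type"], m["status"], m["reason"], m["message"])
	}
}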
2025-08-13T19:59:56.081976655+00:00 stderr F I0813 19:59:56.063059 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)" 2025-08-13T20:00:00.935404843+00:00 stderr F I0813 20:00:00.926421 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServerDeployment_NoPod::APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
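The Available=False message is specific: every aggregated openshift.io APIService is unavailable because endpoints for service/api in "openshift-apiserver" have no addresses behind a port named "https", i.e. no ready openshift-apiserver pod was backing the service yet. A hedged sketch of checking exactly that Endpoints object (the namespace, name, and port name come from the log; the rest is assumed):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The availability check in the log looks for a ready address behind
	// a port named "https" on endpoints "api" in "openshift-apiserver".
	ep, err := client.CoreV1().Endpoints("openshift-apiserver").
		Get(context.TODO(), "api", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	ready := 0
	for _, subset := range ep.Subsets {
		for _, port := range subset.Ports {
			if port.Name == "https" {
				ready += len(subset.Addresses)
			}
		}
	}
	fmt.Printf("addresses behind port %q: %d\n", "https", ready)
}

Once the apiserver pod reports ready, the address count goes above zero, which matches the later transition to Available=True in the log.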
2025-08-13T20:00:01.007640322+00:00 stderr F E0813 20:00:01.007204 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.324598297+00:00 stderr F I0813 20:00:01.321168 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not 
available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:01.389367993+00:00 stderr F I0813 20:00:01.389240 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.733756640+00:00 stderr F I0813 20:00:01.729465 1 status_controller.go:218] clusteroperator/openshift-apiserver diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.733756640+00:00 stderr F I0813 20:00:01.702742 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port 
name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:01.893116002+00:00 stderr F E0813 20:00:01.892352 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.341122666+00:00 stderr F I0813 20:00:02.338716 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:02.380369065+00:00 stderr F E0813 20:00:02.378934 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:02.898135048+00:00 stderr F I0813 20:00:02.894040 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:02.960055244+00:00 stderr F E0813 20:00:02.959041 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.961464364+00:00 stderr F I0813 20:00:02.960628 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in 
\"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:05.445899825+00:00 stderr F I0813 20:00:05.444491 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:05Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:05.525944197+00:00 stderr F I0813 20:00:05.520820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") 2025-08-13T20:00:05.783362207+00:00 stderr F I0813 20:00:05.767722 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.767683 +0000 UTC))" 2025-08-13T20:00:05.783543412+00:00 stderr F I0813 20:00:05.783514 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.78345736 +0000 UTC))" 2025-08-13T20:00:05.783626085+00:00 stderr F I0813 20:00:05.783603 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783570633 +0000 UTC))" 2025-08-13T20:00:05.783685886+00:00 stderr F I0813 20:00:05.783668 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783647445 +0000 UTC))" 2025-08-13T20:00:05.783761848+00:00 stderr F I0813 20:00:05.783741 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783705937 +0000 UTC))" 2025-08-13T20:00:05.784002945+00:00 stderr F I0813 20:00:05.783985 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783963184 +0000 UTC))" 2025-08-13T20:00:05.784071537+00:00 stderr F I0813 20:00:05.784056 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.784040176 +0000 UTC))" 2025-08-13T20:00:05.798856269+00:00 stderr F I0813 20:00:05.784390 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.784092188 +0000 UTC))" 2025-08-13T20:00:05.799622201+00:00 stderr F I0813 20:00:05.799596 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.799511798 +0000 UTC))" 2025-08-13T20:00:05.799693793+00:00 stderr F I0813 20:00:05.799679 1 tlsconfig.go:178] "Loaded client CA" 
index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799652172 +0000 UTC))" 2025-08-13T20:00:05.800171056+00:00 stderr F I0813 20:00:05.800144 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.800123445 +0000 UTC))" 2025-08-13T20:00:05.800500196+00:00 stderr F I0813 20:00:05.800476 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 20:00:05.800458495 +0000 UTC))" 2025-08-13T20:00:28.273069989+00:00 stderr F I0813 20:00:28.266938 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:28.273069989+00:00 stderr F I0813 20:00:28.272188 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.272753 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:28.272702608 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275316 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:28.275274432 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275353 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:28.275331893 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275377 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:28.275358604 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275399 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275385875 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275428 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275405685 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275448 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275435676 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275467 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275454277 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275494 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:28.275476947 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275524 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275506228 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275954 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-08-13 20:00:28.27593466 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.279726 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 20:00:28.279546053 +0000 UTC))" 2025-08-13T20:00:28.309754775+00:00 stderr F I0813 20:00:28.308236 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:29.270989773+00:00 stderr F I0813 20:00:29.263950 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="32d40e5d8e7856640e9aa0fa689da20ce17efb0f940d3f20467677d758499f97", new="619bd1166efa93cd4a6ecce49845ad14ea28915925e46f2a1ae6f0f79bf4e301") 2025-08-13T20:00:29.270989773+00:00 stderr F W0813 20:00:29.264566 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:29.270989773+00:00 stderr F I0813 20:00:29.264640 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="2deeedad0b20e8edd995d2b2452184a3a6d229c608ee3d8e51038616f572694e", new="cc755c6895764b0695467ba8a77e5ba8598a858e253fc2c3e013de460f5584c5") 2025-08-13T20:00:29.271215499+00:00 stderr F I0813 20:00:29.271181 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:29.271295661+00:00 stderr F I0813 20:00:29.271281 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:29.271485157+00:00 stderr F I0813 20:00:29.271434 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:00:29.271501527+00:00 stderr F I0813 20:00:29.271493 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:00:29.271551519+00:00 stderr F I0813 20:00:29.271514 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:00:29.271562879+00:00 stderr F I0813 20:00:29.271550 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:00:29.271682982+00:00 stderr F I0813 20:00:29.271572 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:00:29.271682982+00:00 stderr F I0813 20:00:29.271586 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-apiserver ... 2025-08-13T20:00:29.271738294+00:00 stderr F I0813 20:00:29.271698 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 
2025-08-13T20:00:29.271738294+00:00 stderr F I0813 20:00:29.271732 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271748 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271770 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271828 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:29.271880868+00:00 stderr F I0813 20:00:29.271858 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:00:29.271880868+00:00 stderr F I0813 20:00:29.271874 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:00:29.271890288+00:00 stderr F I0813 20:00:29.271881 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:00:29.271899219+00:00 stderr F I0813 20:00:29.271889 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:29.271899219+00:00 stderr F I0813 20:00:29.271895 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:00:29.271908319+00:00 stderr F I0813 20:00:29.271902 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:00:29.271917359+00:00 stderr F I0813 20:00:29.271908 1 base_controller.go:104] All NamespaceFinalizerController_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.271983701+00:00 stderr F I0813 20:00:29.271959 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:29.272080664+00:00 stderr F I0813 20:00:29.272047 1 base_controller.go:172] Shutting down OpenShiftAPIServerWorkloadController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273043 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273101 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273117 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273129 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273144 1 base_controller.go:172] Shutting down StatusSyncer_openshift-apiserver ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273149 1 base_controller.go:150] All StatusSyncer_openshift-apiserver post start hooks have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273175 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273188 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273201 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273210 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 
2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273217 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273226 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273235 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273240 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273249 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273255 1 base_controller.go:104] All StatusSyncer_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273263 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273267 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273274 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273279 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273307 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273318 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273323 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273412 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273463 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273552 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273577 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273602 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273725 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 
2025-08-13T20:00:29.276727666+00:00 stderr F I0813 20:00:29.276689 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:29.277242271+00:00 stderr F I0813 20:00:29.277217 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:29.277347174+00:00 stderr F I0813 20:00:29.277326 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:29.280376320+00:00 stderr F I0813 20:00:29.277485 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:00:29.281461021+00:00 stderr F E0813 20:00:29.281299 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:29.281558024+00:00 stderr F I0813 20:00:29.281539 1 base_controller.go:114] Shutting down worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.281357 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282340 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282541 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282553 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294267 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294408 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294470 1 builder.go:329] server exited 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294494 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294505 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:00:29.325711713+00:00 stderr F I0813 20:00:29.325682 1 base_controller.go:104] All OpenShiftAPIServerWorkloadController workers have been terminated 2025-08-13T20:00:29.325923179+00:00 stderr F I0813 20:00:29.325901 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:29.326444954+00:00 stderr F I0813 20:00:29.326421 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:00:29.326606559+00:00 stderr F E0813 20:00:29.326582 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-apiserver/podnetworkconnectivitychecks": context canceled 2025-08-13T20:00:29.326673950+00:00 stderr F I0813 20:00:29.326657 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 
2025-08-13T20:00:29.326720562+00:00 stderr F I0813 20:00:29.326704 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:00:29.326878466+00:00 stderr F I0813 20:00:29.326859 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:00:29.326955608+00:00 stderr F I0813 20:00:29.326939 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:00:29.406027283+00:00 stderr F W0813 20:00:29.405506 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log
2025-08-13T20:11:02.865431268+00:00 stderr F I0813 20:11:02.864881 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:11:02.865957823+00:00 stderr F I0813 20:11:02.865900 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:11:02.869571566+00:00 stderr F I0813 20:11:02.869444 1 observer_polling.go:159] Starting file observer 2025-08-13T20:11:02.871168652+00:00 stderr F I0813 20:11:02.871084 1 builder.go:299] route-controller-manager version 4.16.0-202406131906.p0.g3112b45.assembly.stream.el9-3112b45-3112b458983c6fca6f77d5a945fb0026186dace6 2025-08-13T20:11:02.873305093+00:00 stderr F I0813 20:11:02.872602 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:11:03.231962636+00:00 stderr F I0813 20:11:03.230544 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:11:03.241153190+00:00 stderr F I0813 20:11:03.241104 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:11:03.241245003+00:00 stderr F I0813 20:11:03.241227 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:11:03.241307574+00:00 stderr F I0813 20:11:03.241291 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:11:03.241343445+00:00 stderr F I0813 20:11:03.241331 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:11:03.248281334+00:00 stderr F I0813 20:11:03.248242 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:11:03.248348536+00:00 stderr F W0813 20:11:03.248335 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:11:03.248401628+00:00 stderr F W0813 20:11:03.248388 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:11:03.248685216+00:00 stderr F I0813 20:11:03.248665 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:11:03.252619539+00:00 stderr F I0813 20:11:03.252579 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.252521796 +0000 UTC))" 2025-08-13T20:11:03.252898317+00:00 stderr F I0813 20:11:03.252834 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:11:03.253358230+00:00 stderr F I0813 20:11:03.253304 1 leaderelection.go:250] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 
2025-08-13T20:11:03.253598927+00:00 stderr F I0813 20:11:03.253574 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.253525135 +0000 UTC))" 2025-08-13T20:11:03.253748771+00:00 stderr F I0813 20:11:03.253659 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:11:03.253748771+00:00 stderr F I0813 20:11:03.253740 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:11:03.253970027+00:00 stderr F I0813 20:11:03.253765 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:11:03.253970027+00:00 stderr F I0813 20:11:03.253821 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:11:03.254063190+00:00 stderr F I0813 20:11:03.253670 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:11:03.254248195+00:00 stderr F I0813 20:11:03.254181 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:11:03.254374039+00:00 stderr F I0813 20:11:03.254319 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.253603 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.254609 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.254372 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:11:03.256473759+00:00 stderr F I0813 20:11:03.256445 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.257269042+00:00 stderr F I0813 20:11:03.257244 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.263174311+00:00 stderr F I0813 20:11:03.263124 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.266734733+00:00 stderr F I0813 20:11:03.266693 1 leaderelection.go:260] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers 2025-08-13T20:11:03.268959017+00:00 stderr F I0813 20:11:03.268857 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"2ba9fc4c-f1d7-4b43-b8a4-0a6afbf10f5f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' route-controller-manager-776b8b7477-sfpvs_f8c2bc95-1e3b-4dd4-b71e-62fdd54204d3 became leader 2025-08-13T20:11:03.285414649+00:00 stderr F I0813 20:11:03.273726 1 controller_manager.go:36] Starting "openshift.io/ingress-to-route" 
2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297503 1 ingress.go:262] ingress-to-route metrics registered with prometheus 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297553 1 controller_manager.go:46] Started "openshift.io/ingress-to-route" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297567 1 controller_manager.go:36] Starting "openshift.io/ingress-ip" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297574 1 controller_manager.go:46] Started "openshift.io/ingress-ip" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297580 1 controller_manager.go:48] Started Route Controllers 2025-08-13T20:11:03.298100563+00:00 stderr F I0813 20:11:03.298077 1 ingress.go:313] Starting controller 2025-08-13T20:11:03.313641928+00:00 stderr F I0813 20:11:03.307439 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.313641928+00:00 stderr F I0813 20:11:03.308068 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.332055726+00:00 stderr F I0813 20:11:03.331962 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.342666120+00:00 stderr F I0813 20:11:03.342509 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.355204820+00:00 stderr F I0813 20:11:03.355148 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:11:03.355473718+00:00 stderr F I0813 20:11:03.355452 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:11:03.355534719+00:00 stderr F I0813 20:11:03.355520 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:11:03.356633191+00:00 stderr F I0813 20:11:03.356544 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:11:03.356434885 +0000 UTC))" 2025-08-13T20:11:03.356708193+00:00 stderr F I0813 20:11:03.356692 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:11:03.356671482 +0000 UTC))" 2025-08-13T20:11:03.356771135+00:00 stderr F I0813 20:11:03.356754 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.356729844 +0000 UTC))" 2025-08-13T20:11:03.356883988+00:00 stderr F I0813 20:11:03.356868 1 
tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.356843687 +0000 UTC))" 2025-08-13T20:11:03.356991411+00:00 stderr F I0813 20:11:03.356971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.356904939 +0000 UTC))" 2025-08-13T20:11:03.357045013+00:00 stderr F I0813 20:11:03.357032 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357015692 +0000 UTC))" 2025-08-13T20:11:03.357212707+00:00 stderr F I0813 20:11:03.357191 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357064243 +0000 UTC))" 2025-08-13T20:11:03.357282179+00:00 stderr F I0813 20:11:03.357268 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357250468 +0000 UTC))" 2025-08-13T20:11:03.357327551+00:00 stderr F I0813 20:11:03.357315 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:11:03.35730128 +0000 UTC))" 2025-08-13T20:11:03.357505346+00:00 stderr F I0813 20:11:03.357488 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:11:03.357469695 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.359232 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.359196934 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.359974 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.359899514 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360194 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:11:03.360175212 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360217 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:11:03.360204343 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360276 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.360222524 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360296 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.360283565 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360314 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360302636 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360332 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360319096 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360368 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360336977 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360390 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360377138 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360410 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:11:03.360399279 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360430 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:11:03.360416149 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360447 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.36043704 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360751 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.360726238 +0000 UTC))" 2025-08-13T20:11:03.363035544+00:00 stderr F I0813 
20:11:03.362972 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.362947502 +0000 UTC))" 2025-08-13T20:11:03.771896587+00:00 stderr F I0813 20:11:03.771480 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:35.517719936+00:00 stderr F I0813 20:42:35.515539 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:35.521577918+00:00 stderr F I0813 20:42:35.521486 1 ingress.go:325] Shutting down controller 2025-08-13T20:42:35.522140354+00:00 stderr F I0813 20:42:35.520517 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:35.538315390+00:00 stderr F I0813 20:42:35.538116 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:35.538315390+00:00 stderr F I0813 20:42:35.538286 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:35.538345431+00:00 stderr F I0813 20:42:35.538320 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:35.538396633+00:00 stderr F I0813 20:42:35.538343 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:35.538396633+00:00 stderr F I0813 20:42:35.538362 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:35.539195936+00:00 stderr F I0813 20:42:35.539088 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:35.539195936+00:00 stderr F I0813 20:42:35.539178 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:35.539902056+00:00 stderr F I0813 20:42:35.539840 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:35.539902056+00:00 stderr F I0813 20:42:35.539891 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:35.540137253+00:00 stderr F I0813 20:42:35.540075 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:35.540467262+00:00 stderr F I0813 20:42:35.540410 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:35.541122221+00:00 stderr F I0813 20:42:35.541044 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:35.541445950+00:00 stderr F I0813 20:42:35.541373 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:35.541445950+00:00 stderr F I0813 20:42:35.541415 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:35.543525750+00:00 stderr F I0813 20:42:35.543421 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:35.544800087+00:00 stderr F I0813 
20:42:35.544735 1 builder.go:330] server exited 2025-08-13T20:42:35.552278363+00:00 stderr F W0813 20:42:35.552143 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log
2025-10-13T00:15:01.911887850+00:00 stderr F I1013 00:15:01.910904 1 cmd.go:241] Using service-serving-cert provided certificates 2025-10-13T00:15:01.914610932+00:00 stderr F I1013 00:15:01.914552 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:15:01.924378285+00:00 stderr F I1013 00:15:01.921429 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:01.991717672+00:00 stderr F I1013 00:15:01.991671 1 builder.go:299] route-controller-manager version 4.16.0-202406131906.p0.g3112b45.assembly.stream.el9-3112b45-3112b458983c6fca6f77d5a945fb0026186dace6 2025-10-13T00:15:01.995554667+00:00 stderr F I1013 00:15:01.995379 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:02.734957421+00:00 stderr F I1013 00:15:02.733413 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:02.744454116+00:00 stderr F I1013 00:15:02.744373 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:02.744454116+00:00 stderr F I1013 00:15:02.744400 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:02.744485467+00:00 stderr F I1013 00:15:02.744473 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:15:02.744485467+00:00 stderr F I1013 00:15:02.744482 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:15:02.750807726+00:00 stderr F I1013 00:15:02.749082 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:02.750807726+00:00 stderr F W1013 00:15:02.749104 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:02.750807726+00:00 stderr F W1013 00:15:02.749109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-10-13T00:15:02.750807726+00:00 stderr F I1013 00:15:02.749304 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:02.755598060+00:00 stderr F I1013 00:15:02.755557 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-10-13 00:15:02.755498007 +0000 UTC))" 2025-10-13T00:15:02.756014512+00:00 stderr F I1013 00:15:02.755985 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.75594164 +0000 UTC))" 2025-10-13T00:15:02.756022702+00:00 stderr F I1013 00:15:02.756013 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:02.756062383+00:00 stderr F I1013 00:15:02.756039 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:02.756083974+00:00 stderr F I1013 00:15:02.756069 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:02.757392883+00:00 stderr F I1013 00:15:02.757362 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:02.757415714+00:00 stderr F I1013 00:15:02.757406 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:02.757461205+00:00 stderr F I1013 00:15:02.757414 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.761297810+00:00 stderr F I1013 00:15:02.760792 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:02.761297810+00:00 stderr F I1013 00:15:02.760858 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.763360452+00:00 stderr F I1013 00:15:02.762931 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:02.763360452+00:00 stderr F I1013 00:15:02.763145 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:02.766802855+00:00 stderr F I1013 00:15:02.766763 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:02.785368231+00:00 stderr F I1013 00:15:02.782844 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:02.785368231+00:00 stderr F I1013 00:15:02.783135 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-route-controller-manager/openshift-route-controllers... 2025-10-13T00:15:02.785368231+00:00 stderr F I1013 00:15:02.783769 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:02.785368231+00:00 stderr F I1013 00:15:02.783955 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:02.804000400+00:00 stderr F I1013 00:15:02.803683 1 leaderelection.go:260] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers 2025-10-13T00:15:02.807632929+00:00 stderr F I1013 00:15:02.807581 1 controller_manager.go:36] Starting "openshift.io/ingress-ip" 2025-10-13T00:15:02.807632929+00:00 stderr F I1013 00:15:02.807607 1 controller_manager.go:46] Started "openshift.io/ingress-ip" 2025-10-13T00:15:02.807632929+00:00 stderr F I1013 00:15:02.807612 1 controller_manager.go:36] Starting "openshift.io/ingress-to-route" 2025-10-13T00:15:02.808405372+00:00 stderr F I1013 00:15:02.807661 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"2ba9fc4c-f1d7-4b43-b8a4-0a6afbf10f5f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40296", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' route-controller-manager-776b8b7477-sfpvs_5cc3a86b-f82e-4a8f-b20b-5ea2428b7489 became leader 2025-10-13T00:15:02.815747522+00:00 stderr F I1013 00:15:02.815656 1 ingress.go:262] ingress-to-route metrics registered with prometheus 2025-10-13T00:15:02.815747522+00:00 stderr F I1013 00:15:02.815707 1 controller_manager.go:46] Started "openshift.io/ingress-to-route" 2025-10-13T00:15:02.815747522+00:00 stderr F I1013 00:15:02.815717 1 controller_manager.go:48] Started Route Controllers 2025-10-13T00:15:02.821439852+00:00 stderr F I1013 00:15:02.818069 1 ingress.go:313] Starting controller 2025-10-13T00:15:02.821439852+00:00 stderr F I1013 00:15:02.820980 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:02.821439852+00:00 stderr F W1013 00:15:02.821161 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:15:02.821439852+00:00 stderr F E1013 00:15:02.821197 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:15:02.821439852+00:00 stderr F I1013 00:15:02.821411 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:02.870081610+00:00 stderr F I1013 00:15:02.869997 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.870213754+00:00 stderr F I1013 00:15:02.870195 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.870447161+00:00 stderr F I1013 00:15:02.870414 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:02.870815172+00:00 stderr F I1013 00:15:02.870782 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.870718359 +0000 UTC))" 2025-10-13T00:15:02.871163962+00:00 stderr F I1013 00:15:02.871143 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-10-13 00:15:02.87110531 +0000 UTC))" 2025-10-13T00:15:02.871486362+00:00 stderr F I1013 00:15:02.871465 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.871447071 +0000 UTC))" 2025-10-13T00:15:02.871765730+00:00 stderr F I1013 00:15:02.871743 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:02.871722219 +0000 UTC))" 2025-10-13T00:15:02.871765730+00:00 stderr F I1013 00:15:02.871740 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:02.871801591+00:00 stderr F I1013 00:15:02.871772 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:02.87175865 +0000 UTC))" 2025-10-13T00:15:02.871809981+00:00 stderr F I1013 00:15:02.871799 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.871781061 +0000 UTC))" 2025-10-13T00:15:02.871825252+00:00 stderr F I1013 00:15:02.871815 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 
00:15:02.871804321 +0000 UTC))" 2025-10-13T00:15:02.871859223+00:00 stderr F I1013 00:15:02.871838 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.871820942 +0000 UTC))" 2025-10-13T00:15:02.871884594+00:00 stderr F I1013 00:15:02.871868 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.871852333 +0000 UTC))" 2025-10-13T00:15:02.871892994+00:00 stderr F I1013 00:15:02.871886 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.871875563 +0000 UTC))" 2025-10-13T00:15:02.871919915+00:00 stderr F I1013 00:15:02.871903 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.871891314 +0000 UTC))" 2025-10-13T00:15:02.871943485+00:00 stderr F I1013 00:15:02.871926 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:02.871911764 +0000 UTC))" 2025-10-13T00:15:02.871952616+00:00 stderr F I1013 00:15:02.871947 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:02.871936985 +0000 UTC))" 2025-10-13T00:15:02.871986137+00:00 stderr F I1013 00:15:02.871966 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.871954026 +0000 UTC))" 
2025-10-13T00:15:02.872365898+00:00 stderr F I1013 00:15:02.872345 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-10-13 00:15:02.872313196 +0000 UTC))" 2025-10-13T00:15:02.894892443+00:00 stderr F I1013 00:15:02.894537 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.875559964 +0000 UTC))" 2025-10-13T00:15:03.191917822+00:00 stderr F I1013 00:15:03.191793 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:04.372017869+00:00 stderr F W1013 00:15:04.371465 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:15:04.372017869+00:00 stderr F E1013 00:15:04.371986 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:15:06.971412461+00:00 stderr F W1013 00:15:06.971352 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:15:06.971412461+00:00 stderr F E1013 00:15:06.971392 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-10-13T00:15:13.208858376+00:00 stderr F I1013 00:15:13.207908 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:21:11.941954550+00:00 stderr F I1013 00:21:11.941436 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.941384735 +0000 UTC))" 2025-10-13T00:21:11.942012672+00:00 stderr F I1013 00:21:11.941949 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.941923429 +0000 UTC))" 2025-10-13T00:21:11.942012672+00:00 stderr F I1013 00:21:11.941987 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.94195777 +0000 UTC))" 2025-10-13T00:21:11.942035722+00:00 stderr F I1013 00:21:11.942017 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.941994101 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942048 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.942024262 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942082 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.942058783 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942110 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.942088524 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942139 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.942117174 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942167 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.942145315 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942195 1 tlsconfig.go:178] "Loaded client CA" 
index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.942175446 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942226 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.942202527 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942257 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.942234118 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.942826 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-10-13 00:21:11.942793803 +0000 UTC))" 2025-10-13T00:21:11.943449960+00:00 stderr F I1013 00:21:11.943378 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:21:11.943324777 +0000 UTC))" ././@LongLink0000644000000000000000000000033500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000755000175000017500000000000015073043233033040 5ustar zuulzuul././@LongLink0000644000000000000000000000040400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000755000175000017500000000000015073043233033040 5ustar zuulzuul././@LongLink0000644000000000000000000000041100000000000011577 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000644000175000017500000007105415073043233033051 0ustar zuulzuul2025-10-13T00:14:58.332568439+00:00 stderr F I1013 00:14:58.328781 1 cmd.go:233] Using service-serving-cert provided certificates 2025-10-13T00:14:58.335501746+00:00 stderr F I1013 00:14:58.335456 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:58.335958710+00:00 stderr F I1013 00:14:58.335915 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:58.430928826+00:00 stderr F I1013 00:14:58.430264 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-10-13T00:14:58.830984512+00:00 stderr F I1013 00:14:58.829293 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:14:58.830984512+00:00 stderr F W1013 00:14:58.829608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:58.830984512+00:00 stderr F W1013 00:14:58.829616 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:58.853389733+00:00 stderr F I1013 00:14:58.844188 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:14:58.865639060+00:00 stderr F I1013 00:14:58.858884 1 secure_serving.go:210] Serving securely on [::]:8443 2025-10-13T00:14:58.865788245+00:00 stderr F I1013 00:14:58.862268 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:14:58.866066793+00:00 stderr F I1013 00:14:58.862369 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:14:58.866117645+00:00 stderr F I1013 00:14:58.866101 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:58.866645660+00:00 stderr F I1013 00:14:58.862387 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:14:58.866687672+00:00 stderr F I1013 00:14:58.866675 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:58.869525177+00:00 stderr F I1013 00:14:58.862421 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:14:58.869525177+00:00 stderr F I1013 00:14:58.867822 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:14:58.869847926+00:00 stderr F I1013 00:14:58.863019 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:14:58.870075023+00:00 stderr F I1013 
00:14:58.864418 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 2025-10-13T00:14:58.972102680+00:00 stderr F I1013 00:14:58.971824 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:58.972102680+00:00 stderr F I1013 00:14:58.972028 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:58.972737389+00:00 stderr F I1013 00:14:58.972055 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:21:19.190680721+00:00 stderr F I1013 00:21:19.190125 1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock 2025-10-13T00:21:19.190757293+00:00 stderr F I1013 00:21:19.190201 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_b3bfd9de-5529-4405-b1f9-256520de1020 became leader 2025-10-13T00:21:19.203389053+00:00 stderr F I1013 00:21:19.202039 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:21:19.203389053+00:00 stderr F I1013 00:21:19.202068 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator 2025-10-13T00:21:19.203389053+00:00 stderr F I1013 00:21:19.202081 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:21:19.203389053+00:00 stderr F I1013 00:21:19.202112 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController 2025-10-13T00:21:19.203389053+00:00 stderr F I1013 00:21:19.202264 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources 2025-10-13T00:21:19.203389053+00:00 stderr F I1013 00:21:19.202415 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator 2025-10-13T00:21:19.302506008+00:00 stderr F I1013 00:21:19.302415 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources 2025-10-13T00:21:19.302506008+00:00 stderr F I1013 00:21:19.302449 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator 2025-10-13T00:21:19.302506008+00:00 stderr F I1013 00:21:19.302451 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ... 2025-10-13T00:21:19.302506008+00:00 stderr F I1013 00:21:19.302463 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ... 
2025-10-13T00:21:19.302506008+00:00 stderr F I1013 00:21:19.302472 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:21:19.302574670+00:00 stderr F I1013 00:21:19.302537 1 base_controller.go:73] Caches are synced for StaticConditionsController 2025-10-13T00:21:19.302574670+00:00 stderr F I1013 00:21:19.302537 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator 2025-10-13T00:21:19.302574670+00:00 stderr F I1013 00:21:19.302560 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ... 2025-10-13T00:21:19.302574670+00:00 stderr F I1013 00:21:19.302493 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:21:19.302620421+00:00 stderr F I1013 00:21:19.302547 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ... 2025-10-13T00:21:19.302633442+00:00 stderr F I1013 00:21:19.302428 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:21:19.302645472+00:00 stderr F I1013 00:21:19.302631 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:22:19.197755896+00:00 stderr F E1013 00:22:19.197197 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.306960663+00:00 stderr F W1013 00:22:19.306700 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.306960663+00:00 stderr F E1013 00:22:19.306736 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.307958460+00:00 stderr F E1013 00:22:19.307930 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:19.314931267+00:00 stderr F W1013 00:22:19.314882 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.314994639+00:00 stderr F E1013 00:22:19.314978 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.318547404+00:00 stderr F E1013 00:22:19.318523 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:19.330688781+00:00 stderr F W1013 00:22:19.330641 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.330688781+00:00 stderr F E1013 00:22:19.330678 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.333302431+00:00 stderr F E1013 00:22:19.333278 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:19.353669609+00:00 stderr F W1013 00:22:19.353610 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.353738801+00:00 stderr F E1013 00:22:19.353722 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-10-13T00:22:19.357688247+00:00 stderr F E1013 00:22:19.357659 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:19.397263761+00:00 stderr F W1013 00:22:19.397185 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.397263761+00:00 stderr F E1013 00:22:19.397230 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.402973585+00:00 stderr F E1013 00:22:19.402915 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:19.506541320+00:00 stderr F W1013 00:22:19.506429 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.506541320+00:00 stderr F E1013 00:22:19.506503 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.706284341+00:00 stderr F W1013 00:22:19.706193 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:19.706284341+00:00 stderr F E1013 00:22:19.706258 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.908830878+00:00 stderr F E1013 00:22:19.908221 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:20.106570245+00:00 stderr F W1013 00:22:20.106508 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.106570245+00:00 stderr F E1013 00:22:20.106562 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.306204654+00:00 stderr F E1013 00:22:20.306140 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:20.630525705+00:00 stderr F E1013 00:22:20.630474 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:20.750822210+00:00 stderr F W1013 00:22:20.750753 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.750822210+00:00 stderr F E1013 00:22:20.750786 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.276149677+00:00 stderr F E1013 00:22:21.276029 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:22.034471980+00:00 stderr F W1013 00:22:22.034360 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.034471980+00:00 stderr F E1013 00:22:22.034420 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.562130520+00:00 stderr F E1013 00:22:22.561982 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial 
tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:24.603671121+00:00 stderr F W1013 00:22:24.601348 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.603671121+00:00 stderr F E1013 00:22:24.601863 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.131013332+00:00 stderr F E1013 00:22:25.130761 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:29.728501477+00:00 stderr F W1013 00:22:29.726343 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.728501477+00:00 stderr F E1013 00:22:29.726555 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.254698868+00:00 stderr F E1013 00:22:30.254631 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:39.974106239+00:00 stderr F W1013 00:22:39.973474 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-10-13T00:22:39.974106239+00:00 stderr F E1013 00:22:39.974046 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.501009356+00:00 stderr F E1013 00:22:40.500911 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] ././@LongLink0000644000000000000000000000041100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000644000175000017500000010334015073043233033043 0ustar zuulzuul2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.616690 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.617047 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.617822 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:46.189065082+00:00 stderr F I0813 20:00:46.188028 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-08-13T20:00:57.392828055+00:00 stderr F I0813 20:00:57.391890 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:57.392828055+00:00 stderr F W0813 20:00:57.392512 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:57.392828055+00:00 stderr F W0813 20:00:57.392521 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:01:03.400228587+00:00 stderr F I0813 20:01:03.372176 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:03.400228587+00:00 stderr F I0813 20:01:03.376636 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:03.402490702+00:00 stderr F I0813 20:01:03.382316 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:03.408512604+00:00 stderr F I0813 20:01:03.382385 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:03.409650426+00:00 stderr F I0813 20:01:03.382414 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:03.433956279+00:00 stderr F I0813 20:01:03.427085 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:03.463050499+00:00 stderr F I0813 20:01:03.447590 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:03.467044513+00:00 stderr F I0813 20:01:03.464488 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:03.467044513+00:00 stderr F I0813 20:01:03.464725 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:01:03.551232493+00:00 stderr F I0813 20:01:03.549119 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:03.586958362+00:00 stderr F I0813 20:01:03.585324 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:03.602420253+00:00 stderr F I0813 20:01:03.598102 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 
2025-08-13T20:01:03.646296424+00:00 stderr F I0813 20:01:03.644984 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:03.665302456+00:00 stderr F I0813 20:01:03.665177 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:02:58.875037003+00:00 stderr F E0813 20:02:58.874152 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.552341092+00:00 stderr F E0813 20:04:47.551352 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:25.498325085+00:00 stderr F I0813 20:06:25.496973 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32049", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_7f35bdb1-fde8-47f2-9c84-83ee0362fb0d became leader 2025-08-13T20:06:25.499967312+00:00 stderr F I0813 20:06:25.499398 1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587371 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587401 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587485 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587504 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587368 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595316 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595372 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595391 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ... 
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687663 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687688 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687728 1 base_controller.go:73] Caches are synced for StaticConditionsController
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687709 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687739 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ...
2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.687731 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ...
2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.688646 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources
2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.688661 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T20:06:25.691854777+00:00 stderr F I0813 20:06:25.689721 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:06:25.691854777+00:00 stderr F I0813 20:06:25.689750 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:08:25.599533059+00:00 stderr F E0813 20:08:25.597985 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.734448848+00:00 stderr F W0813 20:08:25.734004 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.735539009+00:00 stderr F E0813 20:08:25.734976 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.735539009+00:00 stderr F E0813 20:08:25.735112 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:25.745218906+00:00 stderr F W0813 20:08:25.745141 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.745249227+00:00 stderr F E0813 20:08:25.745211 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.750312662+00:00 stderr F E0813 20:08:25.750288 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:25.759633450+00:00 stderr F W0813 20:08:25.759430 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.759633450+00:00 stderr F E0813 20:08:25.759503 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.770068159+00:00 stderr F E0813 20:08:25.770022 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:25.789868556+00:00 stderr F W0813 20:08:25.788442 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.789868556+00:00 stderr F E0813 20:08:25.788577 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.806198235+00:00 stderr F E0813 20:08:25.806145 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:25.857742602+00:00 stderr F W0813 20:08:25.852403 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.857742602+00:00 stderr F E0813 20:08:25.852463 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.861860991+00:00 stderr F E0813 20:08:25.860102 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:25.943490411+00:00 stderr F W0813 20:08:25.943359 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.943548423+00:00 stderr F E0813 20:08:25.943538 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.167700009+00:00 stderr F W0813 20:08:26.147031 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.168963606+00:00 stderr F E0813 20:08:26.168842 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.336574241+00:00 stderr F E0813 20:08:26.336238 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:26.535848075+00:00 stderr F W0813 20:08:26.534337 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.535848075+00:00 stderr F E0813 20:08:26.534557 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.734114189+00:00 stderr F E0813 20:08:26.734020 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:27.063517294+00:00 stderr F E0813 20:08:27.063454 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:27.184202064+00:00 stderr F W0813 20:08:27.184100 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.184202064+00:00 stderr F E0813 20:08:27.184168 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.720215052+00:00 stderr F E0813 20:08:27.719910 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:28.473667634+00:00 stderr F W0813 20:08:28.473522 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.473667634+00:00 stderr F E0813 20:08:28.473628 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.023002023+00:00 stderr F E0813 20:08:29.022568 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:31.039953282+00:00 stderr F W0813 20:08:31.039353 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:31.039953282+00:00 stderr F E0813 20:08:31.039685 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:31.591380821+00:00 stderr F E0813 20:08:31.591277 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:36.165164983+00:00 stderr F W0813 20:08:36.164486 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:36.165164983+00:00 stderr F E0813 20:08:36.165119 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:36.719395434+00:00 stderr F E0813 20:08:36.718222 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:46.418918077+00:00 stderr F W0813 20:08:46.416514 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.418918077+00:00 stderr F E0813 20:08:46.418697 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.965185829+00:00 stderr F E0813 20:08:46.964570 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:42:36.461846675+00:00 stderr F I0813 20:42:36.430750 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.461846675+00:00 stderr F I0813 20:42:36.438450 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.467606601+00:00 stderr F I0813 20:42:36.443218 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.471980047+00:00 stderr F I0813 20:42:36.438644 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.472057619+00:00 stderr F I0813 20:42:36.451160 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.478446954+00:00 stderr F I0813 20:42:36.441451 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.483736176+00:00 stderr F I0813 20:42:36.451208 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:41.244733697+00:00 stderr F I0813 20:42:41.242977 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:41.245123478+00:00 stderr F I0813 20:42:41.245089 1 base_controller.go:172] Shutting down KubeStorageVersionMigrator ...
2025-08-13T20:42:41.245201180+00:00 stderr F I0813 20:42:41.245181 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:41.245334404+00:00 stderr F I0813 20:42:41.245305 1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ...
2025-08-13T20:42:41.245404646+00:00 stderr F I0813 20:42:41.245314 1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ...
2025-08-13T20:42:41.246118767+00:00 stderr F I0813 20:42:41.246045 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:42:41.246118767+00:00 stderr F I0813 20:42:41.245437 1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated
2025-08-13T20:42:41.246338343+00:00 stderr F I0813 20:42:41.246165 1 base_controller.go:172] Shutting down StaticConditionsController ...
2025-08-13T20:42:41.246648952+00:00 stderr F I0813 20:42:41.246111 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:42:41.246648952+00:00 stderr F I0813 20:42:41.246205 1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ...
2025-08-13T20:42:41.246720334+00:00 stderr F I0813 20:42:41.246375 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...
2025-08-13T20:42:41.247062304+00:00 stderr F I0813 20:42:41.245736 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:41.247193198+00:00 stderr F I0813 20:42:41.247165 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:41.247711623+00:00 stderr F I0813 20:42:41.247664 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:42:41.247711623+00:00 stderr F I0813 20:42:41.247686 1 base_controller.go:104] All StaticConditionsController workers have been terminated
2025-08-13T20:42:41.247734993+00:00 stderr F I0813 20:42:41.247715 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T20:42:41.247734993+00:00 stderr F I0813 20:42:41.247725 1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated
2025-08-13T20:42:41.247749164+00:00 stderr F I0813 20:42:41.247733 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:41.247749164+00:00 stderr F I0813 20:42:41.247739 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:41.248002231+00:00 stderr F W0813 20:42:41.247762 1 builder.go:109] graceful termination failed, controllers failed with error: stopped
././@LongLink0000644000000000000000000000041100000000000011597 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000644000175000017500000011576115073043233033055 0ustar zuulzuul
2025-08-13T19:59:35.330178547+00:00 stderr F I0813 19:59:35.328428 1 cmd.go:233] Using service-serving-cert provided certificates
2025-08-13T19:59:35.355461738+00:00 stderr F I0813 19:59:35.332273 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:35.355461738+00:00 stderr F I0813 19:59:35.339700 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:37.266163302+00:00 stderr F I0813 19:59:37.198407 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad
2025-08-13T19:59:43.151264028+00:00 stderr F I0813 19:59:43.111663 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:43.151420742+00:00 stderr F W0813 19:59:43.151395 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:43.151455293+00:00 stderr F W0813 19:59:43.151443 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:43.215601542+00:00 stderr F I0813 19:59:43.215510 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:43.243873157+00:00 stderr F I0813 19:59:43.243742 1 secure_serving.go:210] Serving securely on [::]:8443
2025-08-13T19:59:43.244636699+00:00 stderr F I0813 19:59:43.244613 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.342200 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.245514 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.246310 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.343769 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:43.386934916+00:00 stderr F I0813 19:59:43.386311 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock...
2025-08-13T19:59:43.406232186+00:00 stderr F I0813 19:59:43.315336 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.406232186+00:00 stderr F I0813 19:59:43.405896 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:43.480296707+00:00 stderr F I0813 19:59:43.268900 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:44.354915128+00:00 stderr F I0813 19:59:44.351258 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:44.432892331+00:00 stderr F I0813 19:59:44.432397 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:44.467855027+00:00 stderr F I0813 19:59:44.467672 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:44.490205385+00:00 stderr F E0813 19:59:44.488873 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.490205385+00:00 stderr F E0813 19:59:44.488994 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.520984552+00:00 stderr F E0813 19:59:44.519629 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.554039574+00:00 stderr F E0813 19:59:44.553983 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.564088650+00:00 stderr F E0813 19:59:44.564054 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.573408966+00:00 stderr F E0813 19:59:44.572182 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.613393246+00:00 stderr F E0813 19:59:44.588701 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.613393246+00:00 stderr F E0813 19:59:44.604003 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.633157629+00:00 stderr F E0813 19:59:44.632735 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.654976221+00:00 stderr F E0813 19:59:44.654441 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.713615723+00:00 stderr F E0813 19:59:44.713422 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.736544557+00:00 stderr F E0813 19:59:44.736012 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.877195426+00:00 stderr F E0813 19:59:44.873915 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.900236113+00:00 stderr F E0813 19:59:44.900115 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.213142002+00:00 stderr F E0813 19:59:45.212320 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.238307950+00:00 stderr F E0813 19:59:45.235450 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.856013558+00:00 stderr F E0813 19:59:45.855286 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.879065245+00:00 stderr F E0813 19:59:45.878010 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:46.461090794+00:00 stderr F I0813 19:59:46.460595 1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock
2025-08-13T19:59:46.462002060+00:00 stderr F I0813 19:59:46.461413 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_1a321c6b-3aae-44eb-a5dc-fa9e08493642 became leader
2025-08-13T19:59:47.139745260+00:00 stderr F E0813 19:59:47.136166 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.160170902+00:00 stderr F E0813 19:59:47.160114 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.211266 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212603 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212710 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212710 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.231932 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator
2025-08-13T19:59:47.250364453+00:00 stderr F I0813 19:59:47.208997 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:47.356337384+00:00 stderr F I0813 19:59:47.356281 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:47.356431077+00:00 stderr F I0813 19:59:47.356417 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:47.413240606+00:00 stderr F I0813 19:59:47.413189 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources
2025-08-13T19:59:47.413321108+00:00 stderr F I0813 19:59:47.413301 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T19:59:47.413666138+00:00 stderr F I0813 19:59:47.413500 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:47.413666138+00:00 stderr F I0813 19:59:47.413658 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:47.418444654+00:00 stderr F I0813 19:59:47.415580 1 base_controller.go:73] Caches are synced for StaticConditionsController
2025-08-13T19:59:47.418444654+00:00 stderr F I0813 19:59:47.415616 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ...
2025-08-13T19:59:47.834501895+00:00 stderr F I0813 19:59:47.819048 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator
2025-08-13T19:59:47.834501895+00:00 stderr F I0813 19:59:47.819436 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ...
2025-08-13T19:59:47.933420705+00:00 stderr F I0813 19:59:47.932095 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator
2025-08-13T19:59:47.933420705+00:00 stderr F I0813 19:59:47.932166 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ...
2025-08-13T19:59:48.570223548+00:00 stderr F I0813 19:59:48.569636 1 status_controller.go:213] clusteroperator/kube-storage-version-migrator diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
2025-08-13T19:59:48.812035401+00:00 stderr F I0813 19:59:48.811900 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
2025-08-13T19:59:49.703053540+00:00 stderr F E0813 19:59:49.702187 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:49.722226326+00:00 stderr F E0813 19:59:49.721050 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:51.828009674+00:00 stderr F I0813 19:59:51.826737 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876354 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.876167977 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876408 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.876391944 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876427 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.876414654 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876444 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.876432905 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876461 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876449915 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876480 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876466376 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876496 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876484806 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876517 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876500697 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876539 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876525097 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.877084 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 19:59:51.877056793 +0000 UTC))"
2025-08-13T19:59:51.877674800+00:00 stderr F I0813 19:59:51.877505 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 19:59:51.877488145 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.730045 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.729990125 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747565 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.747497084 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747598 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.747581067 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747621 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.747605227 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747644 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747631328 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747663 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747650479 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747705 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747667939 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747727 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747714131 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747743 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.747731741 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747766 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747755002 +0000 UTC))"
2025-08-13T20:00:05.749320426+00:00 stderr F I0813 20:00:05.748276 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:00:05.748243846 +0000 UTC))"
2025-08-13T20:00:05.772445416+00:00 stderr F I0813 20:00:05.769337 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:05.769289016 +0000 UTC))"
2025-08-13T20:00:34.342539512+00:00 stderr F I0813 20:00:34.340074 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:34.342539512+00:00 stderr F I0813 20:00:34.342506 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.crt"
2025-08-13T20:00:34.347134543+00:00 stderr F I0813 20:00:34.346908 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347876 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:34.347766291 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347908 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:34.347894655 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347944 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:34.347915466 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347964 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:34.347950157 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347989 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.347970797 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348009 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.347997468 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348026 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348014418 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348051 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348031179 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348082 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:34.34806024 +0000 UTC))"
2025-08-13T20:00:34.348300997+00:00 stderr F I0813 20:00:34.348159 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348095171 +0000 UTC))"
2025-08-13T20:00:34.351572430+00:00 stderr F I0813 20:00:34.348503 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-08-13 20:00:34.348471461 +0000 UTC))"
2025-08-13T20:00:34.357169469+00:00 stderr F I0813 20:00:34.356184 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:34.356076198 +0000 UTC))"
2025-08-13T20:00:35.369679330+00:00 stderr F I0813 20:00:35.364527 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="43548186e7ce5eab21976aea3b471207a358b9f8fb63bf325b8f4755a5142ae9", new="cdf9d7851715e3205e610dc8b06ddc4b8a158c767e0f50cab6e974e6fee4d6bf")
2025-08-13T20:00:35.369679330+00:00 stderr F W0813 20:00:35.369569 1 builder.go:132] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified
2025-08-13T20:00:35.369753702+00:00 stderr F I0813 20:00:35.369717 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9e10b51cb3256c60ae44b395564462b79050e988d1626d5f34804f849a3655a7", new="f7b6ebeaff863e5f1a2771d98136282ec8f6675eb20222ebefd0d2097785c6f3")
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372131 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372256 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372306 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372714 1 base_controller.go:172] Shutting down KubeStorageVersionMigrator ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372742 1 base_controller.go:172] Shutting down StaticConditionsController ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372894 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373245 1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373320 1 base_controller.go:104] All StaticConditionsController workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373339 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373685 1 secure_serving.go:255] Stopped listening on [::]:8443
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373702 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376760 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376876 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376922 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376984 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377018 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377039 1 builder.go:302] server exited
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377111 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377129 1 base_controller.go:104] All KubeStorageVersionMigrator workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377162 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377182 1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377194 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377277 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377284 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377292 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ... 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377298 1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377307 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377314 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377334 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-storage-version-migrator controller ... 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378324 1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ... 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378427 1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated 2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378437 1 base_controller.go:104] All StatusSyncer_kube-storage-version-migrator workers have been terminated 2025-08-13T20:00:35.384702929+00:00 stderr F W0813 20:00:35.381309 1 builder.go:109] graceful termination failed, controllers failed with error: stopped ././@LongLink0000644000000000000000000000025600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul././@LongLink0000644000000000000000000000033200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000004030215073043233033055 0ustar zuulzuul2025-10-13T00:12:30.565344830+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-10-13T00:12:30.569268660+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-10-13T00:12:30.575444950+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:12:30.576452289+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 
2025-10-13T00:12:30.968688765+00:00 stderr F W1013 00:12:30.968515 1 cmd.go:244] Using insecure, self-signed certificates 2025-10-13T00:12:30.968904566+00:00 stderr F I1013 00:12:30.968864 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1760314350 cert, and key in /tmp/serving-cert-3947563483/serving-signer.crt, /tmp/serving-cert-3947563483/serving-signer.key 2025-10-13T00:12:31.356050835+00:00 stderr F I1013 00:12:31.355938 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:12:31.357266914+00:00 stderr F I1013 00:12:31.356972 1 observer_polling.go:159] Starting file observer 2025-10-13T00:12:31.359889621+00:00 stderr F W1013 00:12:31.359822 1 builder.go:266] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.360305921+00:00 stderr F I1013 00:12:31.360269 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-10-13T00:12:31.365366157+00:00 stderr F W1013 00:12:31.362755 1 builder.go:357] unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.365366157+00:00 stderr F I1013 00:12:31.362928 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.365366157+00:00 stderr F I1013 00:12:31.363216 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 
2025-10-13T00:12:31.365366157+00:00 stderr F E1013 00:12:31.364650 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.368207125+00:00 stderr F E1013 00:12:31.367212 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-controller-manager.186de49375ff2b8c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-controller-manager,Name:openshift-kube-controller-manager,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2025-10-13 00:12:31.362714508 +0000 UTC m=+0.781834575,LastTimestamp:2025-10-13 00:12:31.362714508 +0000 UTC m=+0.781834575,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2025-10-13T00:17:56.572234584+00:00 stderr F I1013 00:17:56.572134 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock 2025-10-13T00:17:56.572519533+00:00 stderr F I1013 00:17:56.572433 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fea77749-6e8a-4e4e-9933-ff0da4b5904e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41539", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_276bc818-c853-4bb7-887e-e55c0e845dd6 became leader 2025-10-13T00:17:56.582354185+00:00 stderr F I1013 00:17:56.581359 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:17:56.583020755+00:00 stderr F I1013 00:17:56.582895 1 csrcontroller.go:102] Starting CSR controller 2025-10-13T00:17:56.583020755+00:00 stderr F I1013 00:17:56.583011 1 shared_informer.go:311] Waiting for caches to sync for CSRController 2025-10-13T00:17:56.601403412+00:00 stderr F I1013 00:17:56.601315 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.601776334+00:00 stderr F I1013 00:17:56.601729 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.603149934+00:00 stderr F I1013 00:17:56.603103 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.604827964+00:00 stderr F I1013 00:17:56.604692 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.606673719+00:00 stderr F I1013 00:17:56.606569 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.611439621+00:00 stderr F I1013 00:17:56.611308 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.620029427+00:00 stderr F I1013 00:17:56.619906 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.621154280+00:00 stderr F I1013 00:17:56.621055 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.630765316+00:00 stderr F I1013 00:17:56.630696 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:17:56.681826046+00:00 stderr F I1013 00:17:56.681763 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:17:56.681910758+00:00 stderr F I1013 00:17:56.681893 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:17:56.683550927+00:00 stderr F I1013 00:17:56.683516 1 shared_informer.go:318] Caches are synced for CSRController 2025-10-13T00:17:56.683639710+00:00 stderr F I1013 00:17:56.683608 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:17:56.683639710+00:00 stderr F I1013 00:17:56.683629 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:17:56.683655780+00:00 stderr F I1013 00:17:56.683638 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:21:10.717439300+00:00 stderr F I1013 00:21:10.716957 1 core.go:359] ConfigMap "openshift-config-managed/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:13Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-10-13T00:21:10Z"}],"resourceVersion":null,"uid":"4aabbce1-72f4-478a-b382-9ed7c988ad76"}} 2025-10-13T00:21:10.722753023+00:00 stderr F I1013 00:21:10.720471 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-controller-ca -n openshift-config-managed: 2025-10-13T00:21:10.722753023+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:22:16.661653305+00:00 stderr F E1013 00:22:16.661557 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:22:41.958708350+00:00 stderr F I1013 00:22:41.958604 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:42.094793921+00:00 stderr F I1013 00:22:42.094730 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.274996635+00:00 stderr F I1013 00:22:43.274913 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.922971175+00:00 stderr F I1013 00:22:43.922864 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:46.302987580+00:00 stderr F I1013 00:22:46.301500 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.817456896+00:00 stderr F I1013 00:22:47.816079 1 
reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.878407584+00:00 stderr F I1013 00:22:47.877564 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:53.033849049+00:00 stderr F I1013 00:22:53.033381 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:56.152296704+00:00 stderr F I1013 00:22:56.152179 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000002613215073043233033062 0ustar zuulzuul2025-08-13T20:08:14.197956647+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-08-13T20:08:14.202063545+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-08-13T20:08:14.221748139+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:14.225107916+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2025-08-13T20:08:14.457003233+00:00 stderr F W0813 20:08:14.455918 1 cmd.go:244] Using insecure, self-signed certificates 2025-08-13T20:08:14.457003233+00:00 stderr F I0813 20:08:14.456485 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1755115694 cert, and key in /tmp/serving-cert-54853747/serving-signer.crt, /tmp/serving-cert-54853747/serving-signer.key 2025-08-13T20:08:15.002454652+00:00 stderr F I0813 20:08:15.002044 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:08:15.006327353+00:00 stderr F I0813 20:08:15.006199 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:15.025571455+00:00 stderr F I0813 20:08:15.025458 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T20:08:15.032073211+00:00 stderr F I0813 20:08:15.032017 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T20:08:15.034477840+00:00 stderr F I0813 20:08:15.032351 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 
2025-08-13T20:08:15.047637117+00:00 stderr F I0813 20:08:15.046947 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock 2025-08-13T20:08:15.050152229+00:00 stderr F I0813 20:08:15.047956 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fea77749-6e8a-4e4e-9933-ff0da4b5904e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32883", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_4dcd504f-4413-4e50-8836-0f9844860e38 became leader 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.072722 1 csrcontroller.go:102] Starting CSR controller 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.072814 1 shared_informer.go:311] Waiting for caches to sync for CSRController 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.073619 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:08:15.110874860+00:00 stderr F I0813 20:08:15.110647 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.111544390+00:00 stderr F I0813 20:08:15.111477 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.125419677+00:00 stderr F I0813 20:08:15.125251 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.136199657+00:00 stderr F I0813 20:08:15.136093 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.140250753+00:00 stderr F I0813 20:08:15.140161 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.149215380+00:00 stderr F I0813 20:08:15.149097 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.157014833+00:00 stderr F I0813 20:08:15.153563 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.157014833+00:00 stderr F I0813 20:08:15.155864 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.163627113+00:00 stderr F I0813 20:08:15.163577 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.174963438+00:00 stderr F I0813 20:08:15.174910 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:08:15.175033700+00:00 stderr F I0813 20:08:15.175018 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:08:15.175122503+00:00 stderr F I0813 20:08:15.175107 1 shared_informer.go:318] Caches are synced for CSRController 2025-08-13T20:08:15.175204425+00:00 stderr F I0813 20:08:15.175189 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:08:15.175239226+00:00 stderr F I0813 20:08:15.175227 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:08:15.175270617+00:00 stderr F I0813 20:08:15.175259 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:08:25.185664033+00:00 stderr F E0813 20:08:25.185445 1 csrcontroller.go:146] key failed with : Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:08:35.194931186+00:00 stderr F E0813 20:08:35.194614 1 csrcontroller.go:146] key failed with : Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:09:00.927591932+00:00 stderr F I0813 20:09:00.927364 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.126142517+00:00 stderr F I0813 20:09:03.125981 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.735864809+00:00 stderr F I0813 20:09:03.733930 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.052092650+00:00 stderr F I0813 20:09:07.050311 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:08.250621750+00:00 stderr F I0813 20:09:08.250516 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:08.999427729+00:00 stderr F I0813 20:09:08.999355 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:09.482069647+00:00 stderr F I0813 20:09:09.481699 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.710151277+00:00 stderr F I0813 20:09:10.710080 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:12.143121140+00:00 stderr F I0813 20:09:12.141232 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.401701 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.376416 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.473142 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483130 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483496 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483591 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483732 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.485449 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.485893 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.839045900+00:00 stderr F I0813 20:42:36.838854 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:36.844614870+00:00 stderr F E0813 20:42:36.844372 1 leaderelection.go:308] Failed to release lock: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:42:36.848355598+00:00 stderr F I0813 20:42:36.848208 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848420 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848449 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848475 1 csrcontroller.go:104] Shutting down CSR controller 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848485 1 csrcontroller.go:106] CSR controller shut down 2025-08-13T20:42:36.849512342+00:00 stderr F I0813 20:42:36.848300 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:36.849512342+00:00 stderr F I0813 20:42:36.849486 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:36.849651526+00:00 stderr F I0813 20:42:36.848500 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:36.849651526+00:00 stderr F I0813 20:42:36.849609 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:36.851173319+00:00 stderr F W0813 20:42:36.850995 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000001377615073043233033074 0ustar zuulzuul2025-10-13T00:08:26.904592700+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-10-13T00:08:26.913980836+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-10-13T00:08:26.918692319+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:08:26.919524575+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2025-10-13T00:08:27.733866429+00:00 stderr F W1013 00:08:27.733653 1 cmd.go:244] Using insecure, self-signed certificates 2025-10-13T00:08:27.740811640+00:00 stderr F I1013 00:08:27.740756 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1760314107 cert, and key in /tmp/serving-cert-1984696415/serving-signer.crt, /tmp/serving-cert-1984696415/serving-signer.key 2025-10-13T00:08:28.119859521+00:00 stderr F I1013 00:08:28.119773 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. 
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:08:28.120673776+00:00 stderr F I1013 00:08:28.120635 1 observer_polling.go:159] Starting file observer 2025-10-13T00:08:28.125043059+00:00 stderr F W1013 00:08:28.124990 1 builder.go:266] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.125208694+00:00 stderr F I1013 00:08:28.125169 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-10-13T00:08:28.130170275+00:00 stderr F W1013 00:08:28.130113 1 builder.go:357] unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.130334050+00:00 stderr F I1013 00:08:28.130259 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.130542196+00:00 stderr F I1013 00:08:28.130509 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 
2025-10-13T00:08:28.131118144+00:00 stderr F E1013 00:08:28.131074 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.147745800+00:00 stderr F E1013 00:08:28.147638 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-controller-manager.186de45ad433bbce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-controller-manager,Name:openshift-kube-controller-manager,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2025-10-13 00:08:28.130081742 +0000 UTC m=+1.202217844,LastTimestamp:2025-10-13 00:08:28.130081742 +0000 UTC m=+1.202217844,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2025-10-13T00:08:42.567278566+00:00 stderr F E1013 00:08:42.565693 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{openshift-kube-controller-manager.186de45ad433bbce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-controller-manager,Name:openshift-kube-controller-manager,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2025-10-13 00:08:28.130081742 +0000 UTC m=+1.202217844,LastTimestamp:2025-10-13 00:08:28.130081742 +0000 UTC m=+1.202217844,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2025-10-13T00:11:29.662011194+00:00 stderr F I1013 00:11:29.661931 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-10-13T00:11:29.662011194+00:00 stderr F W1013 00:11:29.661978 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000032200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000001510615073043233033061 0ustar zuulzuul2025-08-13T20:08:13.912570675+00:00 stderr F I0813 20:08:13.912110 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:13.912570675+00:00 stderr F I0813 20:08:13.912498 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.012990 1 base_controller.go:73] Caches are synced for CertSyncController 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013087 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013232 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013695 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:00.642329395+00:00 stderr F I0813 20:09:00.642073 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:00.642836380+00:00 stderr F I0813 20:09:00.642748 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.039403674+00:00 stderr F I0813 20:09:06.039168 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.040066213+00:00 stderr F I0813 20:09:06.040023 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.040328071+00:00 stderr F I0813 20:09:06.040299 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.040866436+00:00 stderr F I0813 20:09:06.040697 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.041170375+00:00 stderr F I0813 20:09:06.041082 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.041673049+00:00 stderr F I0813 20:09:06.041618 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:00.657764897+00:00 stderr F I0813 20:19:00.657557 1 certsync_controller.go:65] Syncing configmaps: 
[{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:00.669646716+00:00 stderr F I0813 20:19:00.669516 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:00.676291486+00:00 stderr F I0813 20:19:00.676168 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:00.684225793+00:00 stderr F I0813 20:19:00.684075 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:06.041870472+00:00 stderr F I0813 20:19:06.041707 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:06.042281974+00:00 stderr F I0813 20:19:06.042201 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:00.643715058+00:00 stderr F I0813 20:29:00.643482 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:00.644704676+00:00 stderr F I0813 20:29:00.644581 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:00.644983084+00:00 stderr F I0813 20:29:00.644921 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:00.646332343+00:00 stderr F I0813 20:29:00.645357 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:06.042539590+00:00 stderr F I0813 20:29:06.042335 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:06.043743395+00:00 stderr F I0813 20:29:06.043351 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:06.043743395+00:00 stderr F I0813 20:29:06.043572 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:06.044232079+00:00 stderr F I0813 20:29:06.044104 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:00.644997435+00:00 stderr F I0813 20:39:00.644245 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:00.645629583+00:00 stderr F I0813 20:39:00.645535 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:00.645942712+00:00 stderr F I0813 20:39:00.645848 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:00.646226851+00:00 stderr F I0813 20:39:00.646102 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:06.043658059+00:00 stderr F I0813 20:39:06.043487 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:06.049015103+00:00 stderr F I0813 20:39:06.048915 1 certsync_controller.go:169] Syncing secrets: 
[{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:06.049189868+00:00 stderr F I0813 20:39:06.049110 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:06.049439836+00:00 stderr F I0813 20:39:06.049391 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000001440715073043233033064 0ustar zuulzuul2025-10-13T00:08:27.809428879+00:00 stderr F I1013 00:08:27.809116 1 observer_polling.go:159] Starting file observer 2025-10-13T00:08:27.810307056+00:00 stderr F I1013 00:08:27.809121 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-10-13T00:08:27.828455299+00:00 stderr F W1013 00:08:27.828311 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:27.828581373+00:00 stderr F E1013 00:08:27.828541 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:27.828581373+00:00 stderr F W1013 00:08:27.828500 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:27.828647525+00:00 stderr F E1013 00:08:27.828616 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.790923982+00:00 stderr F W1013 00:08:28.790820 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.790923982+00:00 stderr F E1013 00:08:28.790879 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.014571171+00:00 stderr F W1013 00:08:29.014451 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get 
"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.014571171+00:00 stderr F E1013 00:08:29.014516 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:40.832673849+00:00 stderr F W1013 00:08:40.832377 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:08:40.833280967+00:00 stderr F I1013 00:08:40.833245 1 trace.go:236] Trace[921536294]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:08:30.828) (total time: 10003ms): 2025-10-13T00:08:40.833280967+00:00 stderr F Trace[921536294]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (00:08:40.832) 2025-10-13T00:08:40.833280967+00:00 stderr F Trace[921536294]: [10.003726926s] [10.003726926s] END 2025-10-13T00:08:40.833330659+00:00 stderr F E1013 00:08:40.833318 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:08:41.058532276+00:00 stderr F W1013 00:08:41.058342 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:08:41.058532276+00:00 stderr F I1013 00:08:41.058512 1 trace.go:236] Trace[995502492]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:08:31.056) (total time: 10001ms): 2025-10-13T00:08:41.058532276+00:00 stderr F Trace[995502492]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:08:41.058) 2025-10-13T00:08:41.058532276+00:00 stderr F Trace[995502492]: [10.001919381s] [10.001919381s] END 2025-10-13T00:08:41.058612458+00:00 stderr F E1013 00:08:41.058541 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:08:46.810536676+00:00 stderr F I1013 00:08:46.810410 1 base_controller.go:73] Caches are synced for CertSyncController 2025-10-13T00:08:46.810536676+00:00 stderr F I1013 00:08:46.810460 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2025-10-13T00:08:46.811320949+00:00 stderr F I1013 00:08:46.811241 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:08:46.811801223+00:00 stderr F I1013 00:08:46.811739 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000002743715073043233033073 0ustar zuulzuul2025-10-13T00:12:31.008606294+00:00 stderr F I1013 00:12:31.008354 1 observer_polling.go:159] Starting file observer 2025-10-13T00:12:31.012138455+00:00 stderr F I1013 00:12:31.010833 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-10-13T00:12:31.022453356+00:00 stderr F W1013 00:12:31.022355 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.022453356+00:00 stderr F W1013 00:12:31.022356 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.022474707+00:00 stderr F E1013 00:12:31.022458 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.022474707+00:00 stderr F E1013 00:12:31.022466 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.830587107+00:00 stderr F W1013 00:12:31.830523 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.830587107+00:00 stderr F E1013 00:12:31.830568 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.843522096+00:00 stderr F W1013 00:12:31.843443 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 
2025-10-13T00:12:31.843522096+00:00 stderr F E1013 00:12:31.843505 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:48.039735276+00:00 stderr F I1013 00:12:48.039660 1 trace.go:236] Trace[1593644811]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.617) (total time: 14422ms): 2025-10-13T00:12:48.039735276+00:00 stderr F Trace[1593644811]: ---"Objects listed" error: 14421ms (00:12:48.038) 2025-10-13T00:12:48.039735276+00:00 stderr F Trace[1593644811]: [14.422126653s] [14.422126653s] END 2025-10-13T00:12:48.041682591+00:00 stderr F I1013 00:12:48.041625 1 trace.go:236] Trace[1366908370]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.964) (total time: 14077ms): 2025-10-13T00:12:48.041682591+00:00 stderr F Trace[1366908370]: ---"Objects listed" error: 14077ms (00:12:48.041) 2025-10-13T00:12:48.041682591+00:00 stderr F Trace[1366908370]: [14.07747818s] [14.07747818s] END 2025-10-13T00:12:48.117111032+00:00 stderr F I1013 00:12:48.117032 1 base_controller.go:73] Caches are synced for CertSyncController 2025-10-13T00:12:48.117111032+00:00 stderr F I1013 00:12:48.117063 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-10-13T00:12:48.118780840+00:00 stderr F I1013 00:12:48.118727 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:12:48.119091809+00:00 stderr F I1013 00:12:48.119060 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:05.197603055+00:00 stderr F I1013 00:15:05.197523 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:05.198092260+00:00 stderr F I1013 00:15:05.198062 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:05.237754218+00:00 stderr F I1013 00:15:05.237253 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:05.237754218+00:00 stderr F I1013 00:15:05.237496 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:05.257507780+00:00 stderr F I1013 00:15:05.257389 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:05.257730277+00:00 stderr F I1013 00:15:05.257690 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:05.262101568+00:00 stderr F I1013 00:15:05.262032 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:05.262340495+00:00 stderr F I1013 00:15:05.262287 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:05.297420526+00:00 stderr F I1013 00:15:05.297346 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} 
{client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:05.297634653+00:00 stderr F I1013 00:15:05.297601 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:05.335627161+00:00 stderr F I1013 00:15:05.335549 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:05.335838867+00:00 stderr F I1013 00:15:05.335813 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:10.209086178+00:00 stderr F I1013 00:15:10.209024 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:10.209442359+00:00 stderr F I1013 00:15:10.209367 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:10.216205111+00:00 stderr F I1013 00:15:10.216170 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:10.216468209+00:00 stderr F I1013 00:15:10.216446 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:10.225099708+00:00 stderr F I1013 00:15:10.225018 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:10.225269563+00:00 stderr F I1013 00:15:10.225249 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:10.229519000+00:00 stderr F I1013 00:15:10.229493 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:10.229739947+00:00 stderr F I1013 00:15:10.229725 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:10.235847690+00:00 stderr F I1013 00:15:10.235825 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:10.236201960+00:00 stderr F I1013 00:15:10.236177 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:15:10.279675473+00:00 stderr F I1013 00:15:10.279597 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:15:10.279976162+00:00 stderr F I1013 00:15:10.279954 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:21:14.302537261+00:00 stderr F I1013 00:21:14.301980 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:21:14.305921222+00:00 stderr F I1013 00:21:14.304634 1 certsync_controller.go:146] Creating directory "/etc/kubernetes/static-pod-certs/configmaps/client-ca" ... 2025-10-13T00:21:14.305921222+00:00 stderr F I1013 00:21:14.304693 1 certsync_controller.go:159] Writing configmap manifest "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" ... 
2025-10-13T00:21:14.305921222+00:00 stderr F I1013 00:21:14.305237 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:21:14.305921222+00:00 stderr F I1013 00:21:14.305677 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CertificateUpdated' Wrote updated configmap: openshift-kube-controller-manager/client-ca 2025-10-13T00:22:49.593595200+00:00 stderr F I1013 00:22:49.593381 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:22:49.593812346+00:00 stderr F I1013 00:22:49.593742 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:22:49.593870008+00:00 stderr F I1013 00:22:49.593835 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:22:49.594023112+00:00 stderr F I1013 00:22:49.593972 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:22:55.324371842+00:00 stderr F I1013 00:22:55.324252 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:22:55.330812682+00:00 stderr F I1013 00:22:55.330731 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-10-13T00:22:55.331030178+00:00 stderr F I1013 00:22:55.330981 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-10-13T00:22:55.331554902+00:00 stderr F I1013 00:22:55.331490 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] ././@LongLink0000644000000000000000000000031000000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000403415073043233033057 0ustar zuulzuul2025-10-13T00:10:26.499840195+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done' 2025-10-13T00:10:26.503766269+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')' 2025-10-13T00:10:26.508296331+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:10:26.509109514+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2 2025-10-13T00:10:26.574601973+00:00 stderr F I1013 00:10:26.574303 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:10:26.575849740+00:00 stderr F I1013 00:10:26.575772 1 observer_polling.go:159] Starting file observer 2025-10-13T00:10:26.578217518+00:00 stderr F I1013 00:10:26.578092 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88 2025-10-13T00:10:26.579724192+00:00 stderr F I1013 00:10:26.579628 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:10:56.063317577+00:00 stderr F I1013 00:10:56.063240 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-10-13T00:10:56.063369268+00:00 stderr F F1013 00:10:56.063317 1 cmd.go:170] failed checking apiserver connectivity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host ././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000040332215073043233033062 0ustar zuulzuul2025-08-13T20:08:13.208515559+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done' 2025-08-13T20:08:13.220701708+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')' 2025-08-13T20:08:13.241290128+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:13.250386589+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2 2025-08-13T20:08:13.608173907+00:00 stderr F I0813 20:08:13.607874 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:08:13.609914107+00:00 stderr F I0813 20:08:13.609835 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:13.624952348+00:00 stderr F I0813 20:08:13.624655 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88 2025-08-13T20:08:13.626584115+00:00 stderr F I0813 20:08:13.626491 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:14.664548094+00:00 stderr F I0813 20:08:14.662425 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.676873 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678838 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678872 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678879 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:08:14.693006890+00:00 stderr F I0813 20:08:14.691169 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:08:14.693006890+00:00 stderr F I0813 20:08:14.692424 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:08:14.711465439+00:00 stderr F W0813 20:08:14.711350 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2025-08-13T20:08:14.712233371+00:00 stderr F I0813 20:08:14.712027 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cluster-policy-controller-lock... 
2025-08-13T20:08:14.713337843+00:00 stderr F I0813 20:08:14.713274 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:08:14.713531348+00:00 stderr F I0813 20:08:14.713444 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:08:14.713655132+00:00 stderr F I0813 20:08:14.713580 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:08:14.713655132+00:00 stderr F I0813 20:08:14.713635 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:08:14.713698223+00:00 stderr F I0813 20:08:14.713576 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2025-08-13T20:08:14.714049543+00:00 stderr F I0813 20:08:14.714017 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:14.716051860+00:00 stderr F I0813 20:08:14.714734 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:08:14.716264537+00:00 stderr F I0813 20:08:14.716236 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:08:14.717059509+00:00 stderr F I0813 20:08:14.716760 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.71671728 +0000 UTC))" 2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.717945 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.717855672 +0000 UTC))" 2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718004 1 secure_serving.go:213] Serving securely on [::]:10357 2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718032 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718048 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:08:14.720447926+00:00 
stderr F I0813 20:08:14.720412 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:14.724976626+00:00 stderr F I0813 20:08:14.724881 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:14.729246629+00:00 stderr F I0813 20:08:14.729159 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:14.729419954+00:00 stderr F I0813 20:08:14.729391 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cluster-policy-controller-lock 2025-08-13T20:08:14.730022001+00:00 stderr F I0813 20:08:14.729949 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cluster-policy-controller-lock", UID:"bb093f33-8655-47de-8ab9-7ce6fce91fc7", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_2c95bd88-c329-4928-a119-89e7d6436f66 became leader 2025-08-13T20:08:14.738369680+00:00 stderr F I0813 20:08:14.738315 1 policy_controller.go:78] Starting "openshift.io/cluster-quota-reconciliation" 2025-08-13T20:08:14.814328098+00:00 stderr F I0813 20:08:14.814216 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:08:14.814499683+00:00 stderr F I0813 20:08:14.814217 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:08:14.814741960+00:00 stderr F I0813 20:08:14.814693 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.814646487 +0000 UTC))" 2025-08-13T20:08:14.815283746+00:00 stderr F I0813 20:08:14.815214 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.815180533 +0000 UTC))" 2025-08-13T20:08:14.816965214+00:00 stderr F I0813 20:08:14.816924 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:08:14.825132808+00:00 stderr F I0813 20:08:14.825025 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.824993454 +0000 UTC))" 2025-08-13T20:08:14.825274092+00:00 stderr F I0813 20:08:14.825210 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:14.82519215 +0000 UTC))" 2025-08-13T20:08:14.825289022+00:00 stderr F I0813 20:08:14.825274 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:14.825240061 +0000 UTC))" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825400 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825458 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825487 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825506 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825538 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825557 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io" 2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825576 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2025-08-13T20:08:14.825623412+00:00 stderr F I0813 20:08:14.825592 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com" 2025-08-13T20:08:14.825623412+00:00 stderr F I0813 20:08:14.825609 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2025-08-13T20:08:14.825634042+00:00 stderr F I0813 20:08:14.825626 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com" 2025-08-13T20:08:14.825697544+00:00 stderr F I0813 20:08:14.825642 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com" 2025-08-13T20:08:14.825697544+00:00 stderr F I0813 20:08:14.825692 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825709 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825834 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:14.825282982 +0000 UTC))" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825883 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:14.825864139 +0000 UTC))" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825955 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825940541 +0000 UTC))" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825974 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825963172 +0000 UTC))" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825989 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825979022 +0000 UTC))" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.826007 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825994963 +0000 UTC))" 2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.826023 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:14.826012713 +0000 UTC))" 2025-08-13T20:08:14.826067965+00:00 stderr F I0813 20:08:14.826049 1 tlsconfig.go:178] "Loaded client CA" 
index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:14.826030704 +0000 UTC))" 2025-08-13T20:08:14.826133267+00:00 stderr F I0813 20:08:14.826072 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.826060315 +0000 UTC))" 2025-08-13T20:08:14.826567179+00:00 stderr F I0813 20:08:14.826496 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.826462946 +0000 UTC))" 2025-08-13T20:08:14.826931140+00:00 stderr F I0813 20:08:14.826822 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.826762155 +0000 UTC))" 2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830158 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830227 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830250 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830267 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830286 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830302 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-08-13T20:08:14.830329687+00:00 stderr F I0813 20:08:14.830320 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:14.831114599+00:00 stderr F I0813 20:08:14.830389 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 
2025-08-13T20:08:14.831114599+00:00 stderr F I0813 20:08:14.830482 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831415 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831478 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831498 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831521 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831550 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831566 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831597 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831616 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831714 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831753 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831768 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831948 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831969 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831987 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832076 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832113 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="dnsrecords.ingress.operator.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832154 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832179 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832198 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832218 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832234 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832250 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832268 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832292 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832331 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832350 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832390 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832407 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832432 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832447 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832463 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832480 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832496 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832511 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832546 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832583 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832599 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832615 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832641 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832657 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832713 1 policy_controller.go:88] Started "openshift.io/cluster-quota-reconciliation" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832723 1 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver" 2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833019 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833093 1 reconciliation_controller.go:140] Starting the cluster quota reconciliation controller 2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833272 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:14.876513471+00:00 stderr F I0813 20:08:14.876383 1 policy_controller.go:88] Started "openshift.io/cluster-csr-approver" 2025-08-13T20:08:14.876513471+00:00 stderr F I0813 20:08:14.876440 1 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer" 2025-08-13T20:08:14.877069787+00:00 stderr F I0813 20:08:14.877029 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.883489 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories 
image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: [] 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885290 1 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer" 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885311 1 policy_controller.go:78] Starting "openshift.io/privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885539 1 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller 2025-08-13T20:08:14.889977587+00:00 stderr F I0813 20:08:14.889608 1 policy_controller.go:88] Started "openshift.io/privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.889977587+00:00 stderr F I0813 20:08:14.889639 1 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation" 2025-08-13T20:08:14.890300166+00:00 stderr F I0813 20:08:14.890259 1 privileged_namespaces_controller.go:75] "Starting" controller="privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.890355978+00:00 stderr F I0813 20:08:14.890339 1 shared_informer.go:311] Waiting for caches to sync for privileged-namespaces-psa-label-syncer 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904851 1 policy_controller.go:88] Started 
"openshift.io/namespace-security-allocation" 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904911 1 policy_controller.go:78] Starting "openshift.io/resourcequota" 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904982 1 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller 2025-08-13T20:08:15.162401148+00:00 stderr F I0813 20:08:15.161553 1 policy_controller.go:88] Started "openshift.io/resourcequota" 2025-08-13T20:08:15.162401148+00:00 stderr F I0813 20:08:15.161594 1 policy_controller.go:91] Started Origin Controllers 2025-08-13T20:08:15.169223483+00:00 stderr F I0813 20:08:15.162368 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-08-13T20:08:15.169223483+00:00 stderr F I0813 20:08:15.169155 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:15.169337147+00:00 stderr F I0813 20:08:15.169293 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176319 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176639 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176918 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.177210 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.216483758+00:00 stderr F I0813 20:08:15.211262 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.231770197+00:00 stderr F I0813 20:08:15.231706 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.234820014+00:00 stderr F I0813 20:08:15.234745 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.237585623+00:00 stderr F I0813 20:08:15.237558 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.244701137+00:00 stderr F I0813 20:08:15.237756 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.294877586+00:00 stderr F I0813 20:08:15.242155 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.294877586+00:00 stderr F I0813 20:08:15.288151 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295008890+00:00 stderr F I0813 20:08:15.288479 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295342139+00:00 stderr F I0813 20:08:15.295285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295398661+00:00 stderr F I0813 
20:08:15.295372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295510714+00:00 stderr F I0813 20:08:15.294031 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295674509+00:00 stderr F I0813 20:08:15.295656 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.298247583+00:00 stderr F I0813 20:08:15.298219 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.298883051+00:00 stderr F I0813 20:08:15.298858 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.345075025+00:00 stderr F I0813 20:08:15.345008 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.352814607+00:00 stderr F I0813 20:08:15.352734 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.377367631+00:00 stderr F I0813 20:08:15.377297 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.380231 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.383691 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.387926 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.389594 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.427446 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 
2025-08-13T20:08:15.445339760+00:00 stderr F I0813 20:08:15.445274 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.484563625+00:00 stderr F I0813 20:08:15.484393 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.489766 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [image.openshift.io/v1, Resource=imagestreams], removed: []" 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.489952 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.488683 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.492547094+00:00 stderr F I0813 20:08:15.492488 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.494281263+00:00 stderr F I0813 20:08:15.494227 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.494741616+00:00 stderr F I0813 20:08:15.494715 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "basic-user" not found 2025-08-13T20:08:15.496173787+00:00 stderr F I0813 20:08:15.496128 1 trace.go:236] Trace[1789160028]: "DeltaFIFO Pop Process" ID:cluster-autoscaler,Depth:186,Reason:slow event handlers blocking the queue (13-Aug-2025 20:08:15.383) (total time: 112ms): 2025-08-13T20:08:15.496173787+00:00 stderr F Trace[1789160028]: [112.348001ms] [112.348001ms] END 2025-08-13T20:08:15.502290473+00:00 stderr F I0813 20:08:15.502258 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506106 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506151 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-provisioner-cfg" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506159 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-provisioner-cfg" not found 2025-08-13T20:08:15.506199455+00:00 stderr F I0813 20:08:15.506179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found 2025-08-13T20:08:15.506199455+00:00 stderr F I0813 20:08:15.506191 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506198 1 sccrolecache.go:466] failed to 
retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506207 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506214 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506236806+00:00 stderr F I0813 20:08:15.506222 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506236806+00:00 stderr F I0813 20:08:15.506229 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506249946+00:00 stderr F I0813 20:08:15.506238 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506249946+00:00 stderr F I0813 20:08:15.506244 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found 2025-08-13T20:08:15.506262537+00:00 stderr F I0813 20:08:15.506252 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found 2025-08-13T20:08:15.506262537+00:00 stderr F I0813 20:08:15.506259 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found 2025-08-13T20:08:15.506274767+00:00 stderr F I0813 20:08:15.506267 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:cloud-provider" not found 2025-08-13T20:08:15.506285057+00:00 stderr F I0813 20:08:15.506276 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:token-cleaner" not found 2025-08-13T20:08:15.506297818+00:00 stderr F I0813 20:08:15.506291 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506307568+00:00 stderr F I0813 20:08:15.506300 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found 2025-08-13T20:08:15.506319168+00:00 stderr F I0813 20:08:15.506306 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found 
2025-08-13T20:08:15.506319168+00:00 stderr F I0813 20:08:15.506315 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-controller-manager" not found 2025-08-13T20:08:15.506339989+00:00 stderr F I0813 20:08:15.506321 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506339989+00:00 stderr F I0813 20:08:15.506335 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506355549+00:00 stderr F I0813 20:08:15.506346 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506367130+00:00 stderr F I0813 20:08:15.506359 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506380 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-approver" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506408 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506423 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-samples-operator" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506429 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506452792+00:00 stderr F I0813 20:08:15.506444 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "csi-snapshot-controller-operator-role" not found 2025-08-13T20:08:15.506467723+00:00 stderr F I0813 20:08:15.506458 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506478923+00:00 stderr F I0813 20:08:15.506471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-configmap-reader" not found 2025-08-13T20:08:15.506488833+00:00 stderr F I0813 20:08:15.506479 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506488833+00:00 stderr F I0813 20:08:15.506485 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-public" not found 2025-08-13T20:08:15.506501614+00:00 stderr F I0813 20:08:15.506493 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role 
ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.506513854+00:00 stderr F I0813 20:08:15.506500 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-approver" not found 2025-08-13T20:08:15.506513854+00:00 stderr F I0813 20:08:15.506509 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-network-public-role" not found 2025-08-13T20:08:15.506532774+00:00 stderr F I0813 20:08:15.506521 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:oauth-servercert-trust" not found 2025-08-13T20:08:15.506532774+00:00 stderr F I0813 20:08:15.506528 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506548415+00:00 stderr F I0813 20:08:15.506540 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "coreos-pull-secret-reader" not found 2025-08-13T20:08:15.506560575+00:00 stderr F I0813 20:08:15.506549 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506560575+00:00 stderr F I0813 20:08:15.506555 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "ingress-operator" not found 2025-08-13T20:08:15.506572646+00:00 stderr F I0813 20:08:15.506565 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.506588206+00:00 stderr F I0813 20:08:15.506578 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506600286+00:00 stderr F I0813 20:08:15.506585 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506612357+00:00 stderr F I0813 20:08:15.506599 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-user-settings-admin" not found 2025-08-13T20:08:15.506624287+00:00 stderr F I0813 20:08:15.506611 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506624287+00:00 stderr F I0813 20:08:15.506620 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506639688+00:00 stderr F I0813 20:08:15.506632 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506651758+00:00 stderr F I0813 20:08:15.506644 1 sccrolecache.go:466] failed to retrieve 
a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506667328+00:00 stderr F I0813 20:08:15.506658 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-controller-manager" not found 2025-08-13T20:08:15.506679559+00:00 stderr F I0813 20:08:15.506664 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "dns-operator" not found 2025-08-13T20:08:15.506679559+00:00 stderr F I0813 20:08:15.506674 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506747431+00:00 stderr F I0813 20:08:15.506699 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506747431+00:00 stderr F I0813 20:08:15.506733 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506768191+00:00 stderr F I0813 20:08:15.506746 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506768191+00:00 stderr F I0813 20:08:15.506762 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-image-registry-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506770 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "node-ca" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506827 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506852 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-openshift-controller-manager" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506861 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-route-controller-manager" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506873 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "ingress-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506880 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506921 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 
2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506939 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506950 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506965 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506978 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506991 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-election-lock-cluster-policy-controller" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506998 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507009 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507022 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-listing-configmaps" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507039 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-autoscaler" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507045 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-autoscaler-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507053 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "control-plane-machine-set-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507058 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507066 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507075 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s-cluster-autoscaler-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507083 1 sccrolecache.go:466] failed to retrieve a 
role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s-machine-api-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507095 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-config-daemon" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507107 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "mcc-prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507115 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "mcd-prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507122 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507134 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "marketplace-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507141 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-marketplace-metrics" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507153 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-monitoring-operator-alert-customization" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507173 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "whereabouts-cni" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507190 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "network-diagnostics" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507197 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507209 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "network-node-identity-leases" not found 2025-08-13T20:08:15.509924612+00:00 stderr F E0813 20:08:15.507232 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 
2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507241 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507300 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:node-config-reader" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507312 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr P I0813 20:08:15.507330 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "col 2025-08-13T20:08:15.509993354+00:00 stderr F lect-profiles" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507338 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "operator-lifecycle-manager-metrics" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507344 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "packageserver" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507354 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "packageserver-service-cert" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507374 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-ovn-kubernetes-control-plane-limited" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507382 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node-limited" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507389 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507401 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507437 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-route-controller-manager" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507453 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found 2025-08-13T20:08:15.509993354+00:00 stderr 
F I0813 20:08:15.507490 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "copied-csv-viewer" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "shared-resource-viewer" not found 2025-08-13T20:08:15.511094655+00:00 stderr F I0813 20:08:15.510971 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.512133275+00:00 stderr F E0813 20:08:15.511975 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-08-13T20:08:15.514202474+00:00 stderr F E0813 20:08:15.514156 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-08-13T20:08:15.524208091+00:00 stderr F I0813 20:08:15.524124 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.532220851+00:00 stderr F I0813 20:08:15.532160 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.545559513+00:00 stderr F I0813 20:08:15.545469 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.556140527+00:00 stderr F I0813 20:08:15.556044 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.583228713+00:00 stderr F I0813 20:08:15.583171 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.585195870+00:00 stderr F I0813 20:08:15.585160 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.596064472+00:00 stderr F I0813 20:08:15.595457 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:08:15.596064472+00:00 stderr F I0813 20:08:15.595523 1 resource_quota_controller.go:496] "synced quota controller" 2025-08-13T20:08:15.752113946+00:00 stderr F I0813 20:08:15.752055 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.766668683+00:00 stderr F I0813 20:08:15.766543 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.791997529+00:00 stderr F I0813 20:08:15.791017 1 shared_informer.go:318] Caches are synced for privileged-namespaces-psa-label-syncer 2025-08-13T20:08:15.805262569+00:00 stderr F I0813 20:08:15.805174 1 base_controller.go:73] Caches are synced for namespace-security-allocation-controller 2025-08-13T20:08:15.805262569+00:00 stderr F I0813 20:08:15.805219 1 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ... 
2025-08-13T20:08:15.805317911+00:00 stderr F I0813 20:08:15.805294 1 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations 2025-08-13T20:08:15.949999609+00:00 stderr F I0813 20:08:15.949577 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.970305261+00:00 stderr F I0813 20:08:15.968108 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.167957588+00:00 stderr F I0813 20:08:16.167440 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.343548593+00:00 stderr F I0813 20:08:16.343039 1 request.go:697] Waited for 1.173256768s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/resourcequotas?limit=500&resourceVersion=0 2025-08-13T20:08:16.346233830+00:00 stderr F I0813 20:08:16.346152 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.367920662+00:00 stderr F I0813 20:08:16.366461 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.370045532+00:00 stderr F I0813 20:08:16.369989 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:08:16.508977916+00:00 stderr F I0813 20:08:16.508543 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.549842197+00:00 stderr F I0813 20:08:16.549722 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.568407570+00:00 stderr F I0813 20:08:16.568294 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.766668494+00:00 stderr F I0813 20:08:16.766564 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.945633085+00:00 stderr F I0813 20:08:16.945525 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.967295166+00:00 stderr F I0813 20:08:16.967193 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.086702110+00:00 stderr F I0813 20:08:17.085952 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.158487008+00:00 stderr F I0813 20:08:17.157991 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.172618403+00:00 stderr F W0813 20:08:17.172277 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:08:17.172618403+00:00 stderr F I0813 20:08:17.172391 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.181821517+00:00 stderr F W0813 20:08:17.181359 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:08:17.349080902+00:00 stderr 
F I0813 20:08:17.346494 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.363956099+00:00 stderr F I0813 20:08:17.363068 1 request.go:697] Waited for 2.191471622s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/servicemonitors?limit=500&resourceVersion=0 2025-08-13T20:08:17.369678783+00:00 stderr F I0813 20:08:17.369557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.567909807+00:00 stderr F I0813 20:08:17.567537 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.577176012+00:00 stderr F I0813 20:08:17.577067 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.586982193+00:00 stderr F I0813 20:08:17.586130 1 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller 2025-08-13T20:08:17.586982193+00:00 stderr F I0813 20:08:17.586172 1 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ... 2025-08-13T20:08:17.768841898+00:00 stderr F I0813 20:08:17.768711 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.973583317+00:00 stderr F I0813 20:08:17.973528 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.980504715+00:00 stderr F I0813 20:08:17.980440 1 namespace_scc_allocation_controller.go:116] Repair complete 2025-08-13T20:08:18.168683820+00:00 stderr F I0813 20:08:18.168557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.365096742+00:00 stderr F I0813 20:08:18.363927 1 request.go:697] Waited for 3.191556604s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v2/operatorconditions?limit=500&resourceVersion=0 2025-08-13T20:08:18.368047266+00:00 stderr F I0813 20:08:18.367957 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.567371701+00:00 stderr F I0813 20:08:18.567211 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.766389487+00:00 stderr F I0813 20:08:18.766128 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.966073862+00:00 stderr F I0813 20:08:18.965964 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.180138490+00:00 stderr F I0813 20:08:19.179751 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.364296150+00:00 stderr F I0813 20:08:19.364147 1 request.go:697] Waited for 4.191066971s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0 2025-08-13T20:08:19.379091444+00:00 stderr F I0813 20:08:19.379038 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.568422642+00:00 stderr F I0813 20:08:19.568366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.766052129+00:00 stderr F I0813 20:08:19.765998 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.967672679+00:00 stderr F I0813 20:08:19.967043 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.168539889+00:00 stderr F I0813 20:08:20.168358 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.366753172+00:00 stderr F I0813 20:08:20.366609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.563195074+00:00 stderr F I0813 20:08:20.563064 1 request.go:697] Waited for 5.389329947s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v1/operatorgroups?limit=500&resourceVersion=0 2025-08-13T20:08:20.566910400+00:00 stderr F I0813 20:08:20.566494 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.767176112+00:00 stderr F I0813 20:08:20.767083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.970139491+00:00 stderr F I0813 20:08:20.970051 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.166165592+00:00 stderr F I0813 20:08:21.166100 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.366337351+00:00 stderr F I0813 20:08:21.366152 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.564432569+00:00 stderr F I0813 20:08:21.564369 1 request.go:697] Waited for 6.389277006s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs?limit=500&resourceVersion=0 2025-08-13T20:08:21.575270540+00:00 stderr F I0813 20:08:21.575083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.766876404+00:00 stderr F I0813 20:08:21.766682 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.966526208+00:00 stderr F I0813 20:08:21.965911 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:21.970859722+00:00 stderr F I0813 20:08:21.970506 1 reconciliation_controller.go:224] synced cluster resource quota controller 
2025-08-13T20:08:22.034338812+00:00 stderr F I0813 20:08:22.034242 1 reconciliation_controller.go:149] Caches are synced 2025-08-13T20:08:44.520618251+00:00 stderr F E0813 20:08:44.520526 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=32866&timeout=5m7s&timeoutSeconds=307&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:45.257388125+00:00 stderr F E0813 20:08:45.257309 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=32866&timeout=5m53s&timeoutSeconds=353&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:45.783930832+00:00 stderr F E0813 20:08:45.783822 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m30s&timeoutSeconds=390&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:47.641227111+00:00 stderr F E0813 20:08:47.639719 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=32882&timeout=5m56s&timeoutSeconds=356&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:47.889473979+00:00 stderr F E0813 20:08:47.889166 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=32866&timeout=8m34s&timeoutSeconds=514&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:48.277980608+00:00 stderr F E0813 20:08:48.277870 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=32887&timeout=6m7s&timeoutSeconds=367&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:48.487870096+00:00 
stderr F E0813 20:08:48.485466 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=32881&timeout=9m9s&timeoutSeconds=549&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-08-13T20:08:58.353763679+00:00 stderr F I0813 20:08:58.353644 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:58.468278553+00:00 stderr F I0813 20:08:58.468098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:58.701085768+00:00 stderr F I0813 20:08:58.700666 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:58.937601149+00:00 stderr F I0813 20:08:58.937479 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:59.881007247+00:00 stderr F I0813 20:08:59.880767 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:59.987227123+00:00 stderr F I0813 20:08:59.987057 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.003355905+00:00 stderr F I0813 20:09:00.003258 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.150216806+00:00 stderr F I0813 20:09:00.150103 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.425437117+00:00 stderr F I0813 20:09:00.425289 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.492074917+00:00 stderr F I0813 20:09:00.491969 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.705735963+00:00 stderr F I0813 20:09:00.705607 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:00.908423513+00:00 stderr F I0813 20:09:00.908259 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:01.307012931+00:00 stderr F I0813 20:09:01.306948 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:01.332152642+00:00 stderr F I0813 20:09:01.332074 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:01.943401017+00:00 stderr F I0813 20:09:01.943318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.165926027+00:00 stderr F I0813 20:09:02.165756 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-08-13T20:09:02.170353624+00:00 stderr F I0813 20:09:02.169458 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.200222970+00:00 stderr F I0813 20:09:02.198436 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.243060378+00:00 stderr F I0813 20:09:02.243000 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.276498097+00:00 stderr F I0813 20:09:02.276377 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.413029791+00:00 stderr F I0813 20:09:02.412912 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.474952837+00:00 stderr F I0813 20:09:02.471915 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.480103634+00:00 stderr F I0813 20:09:02.479473 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.714094713+00:00 stderr F I0813 20:09:02.713322 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:02.723976247+00:00 stderr F I0813 20:09:02.723848 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.241760852+00:00 stderr F I0813 20:09:03.241565 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.284594570+00:00 stderr F I0813 20:09:03.284437 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.503286530+00:00 stderr F I0813 20:09:03.503146 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.604591535+00:00 stderr F I0813 20:09:03.604525 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.722984019+00:00 stderr F I0813 20:09:03.722920 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.850325700+00:00 stderr F I0813 20:09:03.850225 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:03.850922437+00:00 stderr F I0813 20:09:03.850753 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:09:03.851303858+00:00 stderr F I0813 20:09:03.851165 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:09:04.098252009+00:00 stderr F I0813 20:09:04.097733 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.152472923+00:00 stderr F I0813 20:09:04.152204 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.225751164+00:00 stderr F I0813 20:09:04.225694 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.817556751+00:00 stderr F I0813 20:09:04.817408 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.831556452+00:00 stderr F W0813 20:09:04.831450 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:09:04.831616414+00:00 stderr F I0813 20:09:04.831588 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.840324364+00:00 stderr F W0813 20:09:04.840265 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:09:04.859071671+00:00 stderr F I0813 20:09:04.858997 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:04.894092585+00:00 stderr F I0813 20:09:04.893961 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.441834341+00:00 stderr F I0813 20:09:05.441682 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.474877109+00:00 stderr F I0813 20:09:05.474730 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.562097250+00:00 stderr F I0813 20:09:05.562031 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.640134787+00:00 stderr F I0813 20:09:05.640072 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.838480114+00:00 stderr F I0813 20:09:05.838389 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:05.990681308+00:00 stderr F I0813 20:09:05.990602 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.433181395+00:00 stderr F I0813 20:09:06.433114 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.474216631+00:00 stderr F I0813 20:09:06.474129 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.505855978+00:00 stderr F I0813 20:09:06.505680 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.513476227+00:00 stderr F I0813 20:09:06.513410 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.558408355+00:00 stderr F I0813 20:09:06.558346 1 reflector.go:351] Caches 
populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.571862281+00:00 stderr F I0813 20:09:06.571677 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.699625664+00:00 stderr F I0813 20:09:06.699451 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.951436104+00:00 stderr F I0813 20:09:06.951223 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.370529009+00:00 stderr F I0813 20:09:07.370423 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.459069147+00:00 stderr F I0813 20:09:07.458924 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.471986478+00:00 stderr F I0813 20:09:07.471871 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.600928745+00:00 stderr F I0813 20:09:07.600544 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.149093319+00:00 stderr F I0813 20:09:08.149020 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.218213131+00:00 stderr F I0813 20:09:08.218130 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.324632872+00:00 stderr F I0813 20:09:08.324517 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.624968103+00:00 stderr F I0813 20:09:08.624869 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.710337260+00:00 stderr F I0813 20:09:08.710168 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.856873891+00:00 stderr F I0813 20:09:08.856737 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.951410252+00:00 stderr F I0813 20:09:08.951241 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:08.958402552+00:00 stderr F I0813 20:09:08.958285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:09.342542026+00:00 stderr F I0813 20:09:09.342458 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:09.516463073+00:00 stderr F I0813 20:09:09.516330 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:09.586271414+00:00 stderr F I0813 20:09:09.586068 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:10.136190901+00:00 stderr F I0813 
20:09:10.136083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:10.477340542+00:00 stderr F I0813 20:09:10.476685 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:10.552882668+00:00 stderr F I0813 20:09:10.552464 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.762035674+00:00 stderr F I0813 20:09:11.747513 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.815543008+00:00 stderr F I0813 20:09:11.813580 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.818041440+00:00 stderr F I0813 20:09:11.818002 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:09:11.818113032+00:00 stderr F I0813 20:09:11.818093 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:09:12.066879514+00:00 stderr F I0813 20:09:12.065303 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:12.315147813+00:00 stderr F I0813 20:09:12.303247 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:12.524755562+00:00 stderr F I0813 20:09:12.524662 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:12.720248007+00:00 stderr F I0813 20:09:12.720183 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.244186329+00:00 stderr F I0813 20:09:13.244048 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.257643605+00:00 stderr F I0813 20:09:13.257585 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.616384901+00:00 stderr F I0813 20:09:13.616285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.735556497+00:00 stderr F I0813 20:09:13.735460 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:13.984755492+00:00 stderr F I0813 20:09:13.984631 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:17:31.923443489+00:00 stderr F W0813 20:17:31.922743 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:19:03.872750719+00:00 stderr F I0813 20:19:03.872574 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io 
"crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:19:03.872750719+00:00 stderr F I0813 20:19:03.872691 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:19:11.823569241+00:00 stderr F I0813 20:19:11.823055 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:19:11.823691854+00:00 stderr F I0813 20:19:11.823534 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:23:32.941069167+00:00 stderr F W0813 20:23:32.940701 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:29:03.858920750+00:00 stderr F I0813 20:29:03.858613 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:29:03.858920750+00:00 stderr F I0813 20:29:03.858685 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:29:11.824287689+00:00 stderr F I0813 20:29:11.824144 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:29:11.824287689+00:00 stderr F I0813 20:29:11.824213 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:30:08.952423486+00:00 stderr F W0813 20:30:08.951886 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:38:14.959442937+00:00 stderr F W0813 20:38:14.959259 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:39:03.857190733+00:00 stderr F I0813 20:39:03.856948 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:39:03.857190733+00:00 stderr F I0813 20:39:03.857044 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:39:11.824009178+00:00 stderr F I0813 20:39:11.823497 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:39:11.824009178+00:00 stderr F I0813 20:39:11.823576 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io 
"external-health-monitor-controller-cfg" not found 2025-08-13T20:42:36.401697921+00:00 stderr F I0813 20:42:36.401556 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.402032941+00:00 stderr F I0813 20:42:36.401959 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.452903 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453302 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453415 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453526 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453623 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453837 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454384 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454477 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454593 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454704 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454871 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454963 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.455028 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.455092 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455254165+00:00 stderr F I0813 20:42:36.455171 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.455598315+00:00 stderr F I0813 20:42:36.455311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508540511+00:00 stderr F I0813 20:42:36.476285 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509114978+00:00 stderr F I0813 20:42:36.476311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509639023+00:00 stderr F 
I0813 20:42:36.476325 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510044045+00:00 stderr F I0813 20:42:36.476337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510289942+00:00 stderr F I0813 20:42:36.476350 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510379984+00:00 stderr F I0813 20:42:36.476374 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510477887+00:00 stderr F I0813 20:42:36.476387 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510663292+00:00 stderr F I0813 20:42:36.476398 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510663292+00:00 stderr F I0813 20:42:36.476411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.510860668+00:00 stderr F I0813 20:42:36.476423 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511060774+00:00 stderr F I0813 20:42:36.476435 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511280550+00:00 stderr F I0813 20:42:36.476445 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511485186+00:00 stderr F I0813 20:42:36.476728 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511736753+00:00 stderr F I0813 20:42:36.476742 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531584186+00:00 stderr F I0813 20:42:36.476757 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531934476+00:00 stderr F I0813 20:42:36.476768 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.532140292+00:00 stderr F I0813 20:42:36.476838 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.532311397+00:00 stderr F I0813 20:42:36.476850 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.532883403+00:00 stderr F I0813 20:42:36.476862 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537875467+00:00 stderr F I0813 20:42:36.476872 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538035062+00:00 stderr F I0813 20:42:36.476884 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538194606+00:00 stderr F I0813 20:42:36.476894 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538423403+00:00 stderr F I0813 20:42:36.476905 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538639449+00:00 stderr F I0813 20:42:36.476915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538853175+00:00 stderr F I0813 20:42:36.476927 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: 
unexpected EOF 2025-08-13T20:42:36.539185355+00:00 stderr F I0813 20:42:36.476950 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539359230+00:00 stderr F I0813 20:42:36.476961 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540034479+00:00 stderr F I0813 20:42:36.476973 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540474832+00:00 stderr F I0813 20:42:36.476984 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542741847+00:00 stderr F I0813 20:42:36.476995 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542931643+00:00 stderr F I0813 20:42:36.477110 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543525590+00:00 stderr F I0813 20:42:36.477126 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543853859+00:00 stderr F I0813 20:42:36.484817 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.549611285+00:00 stderr F I0813 20:42:36.484846 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.549737449+00:00 stderr F I0813 20:42:36.484864 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550030577+00:00 stderr F I0813 20:42:36.484875 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550115900+00:00 stderr F I0813 20:42:36.484887 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550366187+00:00 stderr F I0813 20:42:36.484898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550463350+00:00 stderr F I0813 20:42:36.484911 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554275140+00:00 stderr F I0813 20:42:36.484922 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554705332+00:00 stderr F I0813 20:42:36.484934 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554705332+00:00 stderr F I0813 20:42:36.484945 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554825256+00:00 stderr F I0813 20:42:36.484958 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555266918+00:00 stderr F I0813 20:42:36.485013 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555976549+00:00 stderr F I0813 20:42:36.485027 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556657328+00:00 stderr F I0813 20:42:36.485046 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556854264+00:00 stderr F I0813 20:42:36.485063 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557073040+00:00 stderr F I0813 20:42:36.485074 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557320978+00:00 stderr F I0813 20:42:36.485085 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557464872+00:00 stderr F I0813 20:42:36.485095 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557752930+00:00 stderr F I0813 20:42:36.485108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.557977447+00:00 stderr F I0813 20:42:36.485119 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558130701+00:00 stderr F I0813 20:42:36.485138 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558316766+00:00 stderr F I0813 20:42:36.485148 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558438440+00:00 stderr F I0813 20:42:36.485159 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558550093+00:00 stderr F I0813 20:42:36.485173 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568607083+00:00 stderr F I0813 20:42:36.485183 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.570719554+00:00 stderr F I0813 20:42:36.485201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575013368+00:00 stderr F I0813 20:42:36.485212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575123871+00:00 stderr F I0813 20:42:36.485257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575261375+00:00 stderr F I0813 20:42:36.485276 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.043289618+00:00 stderr F I0813 20:42:37.042098 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:37.044261266+00:00 stderr F I0813 20:42:37.044132 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:37.045152852+00:00 stderr F I0813 20:42:37.044963 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:37.046348896+00:00 stderr F I0813 20:42:37.046268 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:37.046475130+00:00 stderr F I0813 20:42:37.046323 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:37.046685336+00:00 stderr F I0813 20:42:37.046626 1 reconciliation_controller.go:159] Shutting down ClusterQuotaReconcilationController 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047620 1 secure_serving.go:258] Stopped listening on [::]:10357 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047658 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047672 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047765 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047186 1 base_controller.go:172] Shutting down pod-security-admission-label-synchronization-controller ... 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048472 1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048504 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048523 1 base_controller.go:172] Shutting down namespace-security-allocation-controller ... 2025-08-13T20:42:37.048654573+00:00 stderr F I0813 20:42:37.048574 1 privileged_namespaces_controller.go:85] "Shutting down" controller="privileged-namespaces-psa-label-syncer" 2025-08-13T20:42:37.048654573+00:00 stderr F I0813 20:42:37.048640 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:37.048713745+00:00 stderr F I0813 20:42:37.048670 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_csr-approver-controller ... 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048937 1 base_controller.go:114] Shutting down worker of namespace-security-allocation-controller controller ... 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048968 1 base_controller.go:104] All namespace-security-allocation-controller workers have been terminated 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048544 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048996 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.049010 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 
2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.049017 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_csr-approver-controller workers have been terminated 2025-08-13T20:42:37.049174998+00:00 stderr F I0813 20:42:37.049126 1 resource_quota_monitor.go:339] "QuotaMonitor stopped monitors" stopped=72 total=72 2025-08-13T20:42:37.049174998+00:00 stderr F I0813 20:42:37.049155 1 resource_quota_monitor.go:340] "QuotaMonitor stopping" 2025-08-13T20:42:37.049209279+00:00 stderr F I0813 20:42:37.049130 1 resource_quota_monitor.go:339] "QuotaMonitor stopped monitors" stopped=1 total=1 2025-08-13T20:42:37.049356803+00:00 stderr F I0813 20:42:37.049337 1 resource_quota_monitor.go:340] "QuotaMonitor stopping" 2025-08-13T20:42:37.049442716+00:00 stderr F I0813 20:42:37.049427 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:37.049846687+00:00 stderr F I0813 20:42:37.049757 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:37.050170747+00:00 stderr F I0813 20:42:37.050063 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.050614049+00:00 stderr F I0813 20:42:37.050553 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050637 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050666 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050675 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050695842+00:00 stderr F I0813 20:42:37.050683 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050711832+00:00 stderr F I0813 20:42:37.050702 1 base_controller.go:114] Shutting down worker of pod-security-admission-label-synchronization-controller controller ... 
2025-08-13T20:42:37.050723803+00:00 stderr F I0813 20:42:37.050714 1 base_controller.go:104] All pod-security-admission-label-synchronization-controller workers have been terminated 2025-08-13T20:42:37.051066102+00:00 stderr F I0813 20:42:37.050975 1 resource_quota_controller.go:317] "Shutting down resource quota controller" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051005 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051076 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051096 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051102 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051106 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051116 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051119 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051131 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051132 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051166095+00:00 stderr F I0813 20:42:37.051141 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051166095+00:00 stderr F I0813 20:42:37.051146 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051279829+00:00 stderr F I0813 20:42:37.051203 1 builder.go:330] server exited 2025-08-13T20:42:37.058025303+00:00 stderr F E0813 20:42:37.056294 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock?timeout=1m47s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:37.058025303+00:00 stderr F W0813 20:42:37.056374 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000104572615073043233033075 0ustar zuulzuul2025-10-13T00:12:29.950673193+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done' 2025-10-13T00:12:29.958978181+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')' 2025-10-13T00:12:29.967483825+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:12:29.968764872+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2 2025-10-13T00:12:30.668781522+00:00 stderr F I1013 00:12:30.668559 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:12:30.673291381+00:00 stderr F I1013 00:12:30.673237 1 observer_polling.go:159] Starting file observer 2025-10-13T00:12:30.732888485+00:00 stderr F I1013 00:12:30.732787 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88 2025-10-13T00:12:30.736883819+00:00 stderr F I1013 00:12:30.736831 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:12:48.019060126+00:00 stderr F I1013 00:12:48.018877 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:12:48.026644572+00:00 stderr F I1013 00:12:48.026566 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:12:48.026644572+00:00 stderr F I1013 00:12:48.026611 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:12:48.026682983+00:00 stderr F I1013 00:12:48.026643 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:12:48.026682983+00:00 stderr F I1013 00:12:48.026648 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:12:48.037031798+00:00 stderr F I1013 00:12:48.036967 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:12:48.037395619+00:00 stderr F I1013 00:12:48.037307 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:12:48.037871732+00:00 stderr F W1013 00:12:48.037831 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2025-10-13T00:12:48.037992266+00:00 stderr F I1013 00:12:48.037940 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2025-10-13T00:12:48.038237183+00:00 stderr F I1013 00:12:48.038199 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cluster-policy-controller-lock... 
2025-10-13T00:12:48.041347842+00:00 stderr F I1013 00:12:48.041278 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:12:48.041347842+00:00 stderr F I1013 00:12:48.041340 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:12:48.041418204+00:00 stderr F I1013 00:12:48.041377 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:12:48.041418204+00:00 stderr F I1013 00:12:48.041400 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:12:48.041418204+00:00 stderr F I1013 00:12:48.041413 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:12:48.041428144+00:00 stderr F I1013 00:12:48.041419 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:12:48.041996080+00:00 stderr F I1013 00:12:48.041918 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:12:48.042250907+00:00 stderr F I1013 00:12:48.042220 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:12:48.042174925 +0000 UTC))" 2025-10-13T00:12:48.042716341+00:00 stderr F I1013 00:12:48.042614 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314350\" (2025-10-12 23:12:30 +0000 UTC to 2026-10-12 23:12:30 +0000 UTC (now=2025-10-13 00:12:48.042591807 +0000 UTC))" 2025-10-13T00:12:48.042735171+00:00 stderr F I1013 00:12:48.042722 1 secure_serving.go:213] Serving securely on [::]:10357 2025-10-13T00:12:48.043967966+00:00 stderr F I1013 00:12:48.042741 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:12:48.043967966+00:00 stderr F I1013 00:12:48.042747 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:12:48.045290264+00:00 stderr F I1013 00:12:48.045239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:12:48.046246951+00:00 stderr F I1013 00:12:48.045665 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:12:48.046246951+00:00 stderr F I1013 00:12:48.045888 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:12:48.142441204+00:00 stderr F I1013 00:12:48.142362 1 
shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:12:48.142493266+00:00 stderr F I1013 00:12:48.142463 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:12:48.142583738+00:00 stderr F I1013 00:12:48.142541 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:12:48.142766714+00:00 stderr F I1013 00:12:48.142739 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:48.142704662 +0000 UTC))" 2025-10-13T00:12:48.143441683+00:00 stderr F I1013 00:12:48.143232 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:12:48.143205366 +0000 UTC))" 2025-10-13T00:12:48.143643859+00:00 stderr F I1013 00:12:48.143593 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314350\" (2025-10-12 23:12:30 +0000 UTC to 2026-10-12 23:12:30 +0000 UTC (now=2025-10-13 00:12:48.143557116 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.143927 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:12:48.143893736 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.143949 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:12:48.143939867 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.143965 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:12:48.143954757 +0000 UTC))" 
2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.143998 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:12:48.143969418 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.144012 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:48.144002599 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.144026 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:48.144016779 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.144040 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:48.14403034 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.144076 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:48.14404405 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.144099 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:12:48.144082371 +0000 UTC))" 2025-10-13T00:12:48.144186164+00:00 stderr F I1013 00:12:48.144159 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:12:48.144111132 +0000 UTC))" 2025-10-13T00:12:48.144211165+00:00 stderr F I1013 00:12:48.144186 
1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:48.144168344 +0000 UTC))" 2025-10-13T00:12:48.146881401+00:00 stderr F I1013 00:12:48.146583 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:12:48.146554562 +0000 UTC))" 2025-10-13T00:12:48.147075636+00:00 stderr F I1013 00:12:48.147044 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314350\" (2025-10-12 23:12:30 +0000 UTC to 2026-10-12 23:12:30 +0000 UTC (now=2025-10-13 00:12:48.147025215 +0000 UTC))" 2025-10-13T00:15:35.218465103+00:00 stderr F I1013 00:15:35.218391 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cluster-policy-controller-lock 2025-10-13T00:15:35.218643968+00:00 stderr F I1013 00:15:35.218583 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cluster-policy-controller-lock", UID:"bb093f33-8655-47de-8ab9-7ce6fce91fc7", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40944", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_ccc0bc3a-e33d-4dac-811f-4b9c3ae4fbbf became leader 2025-10-13T00:15:35.223213425+00:00 stderr F I1013 00:15:35.223156 1 policy_controller.go:78] Starting "openshift.io/privileged-namespaces-psa-label-syncer" 2025-10-13T00:15:35.227801882+00:00 stderr F I1013 00:15:35.227759 1 policy_controller.go:88] Started "openshift.io/privileged-namespaces-psa-label-syncer" 2025-10-13T00:15:35.227871834+00:00 stderr F I1013 00:15:35.227840 1 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation" 2025-10-13T00:15:35.231832543+00:00 stderr F I1013 00:15:35.231804 1 privileged_namespaces_controller.go:75] "Starting" controller="privileged-namespaces-psa-label-syncer" 2025-10-13T00:15:35.231874034+00:00 stderr F I1013 00:15:35.231862 1 shared_informer.go:311] Waiting for caches to sync for privileged-namespaces-psa-label-syncer 2025-10-13T00:15:35.235933516+00:00 stderr F I1013 00:15:35.235869 1 policy_controller.go:88] Started "openshift.io/namespace-security-allocation" 2025-10-13T00:15:35.235933516+00:00 stderr F I1013 00:15:35.235900 1 policy_controller.go:78] Starting "openshift.io/resourcequota" 2025-10-13T00:15:35.236091561+00:00 stderr F I1013 00:15:35.236044 1 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller 2025-10-13T00:15:35.284297255+00:00 stderr F I1013 00:15:35.284229 1 policy_controller.go:88] Started "openshift.io/resourcequota" 2025-10-13T00:15:35.284297255+00:00 
stderr F I1013 00:15:35.284253 1 policy_controller.go:78] Starting "openshift.io/cluster-quota-reconciliation" 2025-10-13T00:15:35.284368707+00:00 stderr F I1013 00:15:35.284284 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-10-13T00:15:35.284611804+00:00 stderr F I1013 00:15:35.284585 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-10-13T00:15:35.284640035+00:00 stderr F I1013 00:15:35.284618 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-10-13T00:15:35.293873442+00:00 stderr F I1013 00:15:35.293804 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [image.openshift.io/v1, Resource=imagestreams], removed: []" 2025-10-13T00:15:35.303364446+00:00 stderr F I1013 00:15:35.303284 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2025-10-13T00:15:35.303364446+00:00 stderr F I1013 00:15:35.303344 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2025-10-13T00:15:35.303394757+00:00 stderr F I1013 00:15:35.303372 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2025-10-13T00:15:35.303403737+00:00 stderr F I1013 00:15:35.303391 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2025-10-13T00:15:35.303442579+00:00 stderr F I1013 00:15:35.303413 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-10-13T00:15:35.303442579+00:00 stderr F I1013 00:15:35.303433 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-10-13T00:15:35.303456559+00:00 stderr F I1013 00:15:35.303450 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-10-13T00:15:35.303481980+00:00 stderr F I1013 00:15:35.303474 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-10-13T00:15:35.303492720+00:00 stderr F I1013 00:15:35.303488 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2025-10-13T00:15:35.303525151+00:00 stderr F I1013 00:15:35.303504 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2025-10-13T00:15:35.303525151+00:00 stderr F I1013 00:15:35.303519 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2025-10-13T00:15:35.303553402+00:00 stderr F I1013 00:15:35.303535 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2025-10-13T00:15:35.308915303+00:00 stderr F I1013 00:15:35.308879 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2025-10-13T00:15:35.308915303+00:00 stderr F I1013 00:15:35.308907 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2025-10-13T00:15:35.308937093+00:00 stderr F I1013 00:15:35.308926 1 resource_quota_monitor.go:224] "QuotaMonitor 
created object count evaluator" resource="endpointslices.discovery.k8s.io" 2025-10-13T00:15:35.308966964+00:00 stderr F I1013 00:15:35.308945 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2025-10-13T00:15:35.308994515+00:00 stderr F I1013 00:15:35.308975 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2025-10-13T00:15:35.309027356+00:00 stderr F I1013 00:15:35.309009 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2025-10-13T00:15:35.309054367+00:00 stderr F I1013 00:15:35.309037 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2025-10-13T00:15:35.309063097+00:00 stderr F I1013 00:15:35.309055 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2025-10-13T00:15:35.309087588+00:00 stderr F I1013 00:15:35.309070 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2025-10-13T00:15:35.309096038+00:00 stderr F I1013 00:15:35.309088 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-10-13T00:15:35.309120469+00:00 stderr F I1013 00:15:35.309104 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2025-10-13T00:15:35.309128829+00:00 stderr F I1013 00:15:35.309123 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2025-10-13T00:15:35.309175920+00:00 stderr F I1013 00:15:35.309154 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2025-10-13T00:15:35.309204181+00:00 stderr F I1013 00:15:35.309183 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2025-10-13T00:15:35.309204181+00:00 stderr F I1013 00:15:35.309200 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2025-10-13T00:15:35.309231962+00:00 stderr F I1013 00:15:35.309215 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2025-10-13T00:15:35.309248263+00:00 stderr F I1013 00:15:35.309234 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io" 2025-10-13T00:15:35.309255453+00:00 stderr F I1013 00:15:35.309248 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-10-13T00:15:35.309281424+00:00 stderr F I1013 00:15:35.309265 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2025-10-13T00:15:35.309289214+00:00 stderr F I1013 00:15:35.309283 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 2025-10-13T00:15:35.309313355+00:00 stderr F I1013 00:15:35.309297 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2025-10-13T00:15:35.309378686+00:00 stderr F 
I1013 00:15:35.309344 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2025-10-13T00:15:35.309423188+00:00 stderr F I1013 00:15:35.309392 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2025-10-13T00:15:35.309423188+00:00 stderr F I1013 00:15:35.309412 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com" 2025-10-13T00:15:35.309433258+00:00 stderr F I1013 00:15:35.309425 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2025-10-13T00:15:35.309460359+00:00 stderr F I1013 00:15:35.309440 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2025-10-13T00:15:35.309469779+00:00 stderr F I1013 00:15:35.309464 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2025-10-13T00:15:35.309498270+00:00 stderr F I1013 00:15:35.309479 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io" 2025-10-13T00:15:35.309507310+00:00 stderr F I1013 00:15:35.309496 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2025-10-13T00:15:35.309515611+00:00 stderr F I1013 00:15:35.309510 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io" 2025-10-13T00:15:35.309541891+00:00 stderr F I1013 00:15:35.309524 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2025-10-13T00:15:35.309550632+00:00 stderr F I1013 00:15:35.309542 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-10-13T00:15:35.309576462+00:00 stderr F I1013 00:15:35.309559 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 2025-10-13T00:15:35.309585593+00:00 stderr F I1013 00:15:35.309577 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2025-10-13T00:15:35.309596653+00:00 stderr F I1013 00:15:35.309591 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2025-10-13T00:15:35.309624684+00:00 stderr F I1013 00:15:35.309607 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2025-10-13T00:15:35.309633234+00:00 stderr F I1013 00:15:35.309625 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-10-13T00:15:35.309674225+00:00 stderr F I1013 00:15:35.309656 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-10-13T00:15:35.309683406+00:00 stderr F I1013 00:15:35.309674 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io" 2025-10-13T00:15:35.309693906+00:00 stderr F I1013 00:15:35.309689 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" 
resource="servicemonitors.monitoring.coreos.com" 2025-10-13T00:15:35.309722257+00:00 stderr F I1013 00:15:35.309702 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2025-10-13T00:15:35.309732057+00:00 stderr F I1013 00:15:35.309726 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-10-13T00:15:35.309779069+00:00 stderr F I1013 00:15:35.309762 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2025-10-13T00:15:35.309787489+00:00 stderr F I1013 00:15:35.309780 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com" 2025-10-13T00:15:35.309821170+00:00 stderr F I1013 00:15:35.309805 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-10-13T00:15:35.309828840+00:00 stderr F I1013 00:15:35.309823 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2025-10-13T00:15:35.309865411+00:00 stderr F I1013 00:15:35.309847 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-10-13T00:15:35.309873031+00:00 stderr F I1013 00:15:35.309864 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2025-10-13T00:15:35.309907242+00:00 stderr F I1013 00:15:35.309883 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-10-13T00:15:35.309907242+00:00 stderr F I1013 00:15:35.309901 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-10-13T00:15:35.309932893+00:00 stderr F I1013 00:15:35.309917 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2025-10-13T00:15:35.309957954+00:00 stderr F I1013 00:15:35.309943 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2025-10-13T00:15:35.310003605+00:00 stderr F I1013 00:15:35.309983 1 policy_controller.go:88] Started "openshift.io/cluster-quota-reconciliation" 2025-10-13T00:15:35.310003605+00:00 stderr F I1013 00:15:35.309992 1 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver" 2025-10-13T00:15:35.310055447+00:00 stderr F I1013 00:15:35.310039 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-10-13T00:15:35.310083998+00:00 stderr F I1013 00:15:35.310065 1 reconciliation_controller.go:140] Starting the cluster quota reconciliation controller 2025-10-13T00:15:35.310115189+00:00 stderr F I1013 00:15:35.310098 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-10-13T00:15:35.315652744+00:00 stderr F I1013 00:15:35.315620 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services 
apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: [] 2025-10-13T00:15:35.427903368+00:00 stderr F I1013 00:15:35.427819 1 policy_controller.go:88] Started "openshift.io/cluster-csr-approver" 2025-10-13T00:15:35.427903368+00:00 stderr F I1013 00:15:35.427843 1 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer" 2025-10-13T00:15:35.427903368+00:00 stderr F I1013 00:15:35.427894 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller 
2025-10-13T00:15:35.627068364+00:00 stderr F I1013 00:15:35.626994 1 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer" 2025-10-13T00:15:35.627068364+00:00 stderr F I1013 00:15:35.627019 1 policy_controller.go:91] Started Origin Controllers 2025-10-13T00:15:35.627068364+00:00 stderr F I1013 00:15:35.627041 1 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller 2025-10-13T00:15:35.631395134+00:00 stderr F I1013 00:15:35.631360 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-10-13T00:15:35.652127555+00:00 stderr F I1013 00:15:35.648843 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.652127555+00:00 stderr F I1013 00:15:35.649238 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.652127555+00:00 stderr F I1013 00:15:35.649484 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.652127555+00:00 stderr F I1013 00:15:35.649903 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.652620540+00:00 stderr F I1013 00:15:35.652593 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.653537687+00:00 stderr F I1013 00:15:35.653504 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.653647181+00:00 stderr F I1013 00:15:35.653622 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.653996701+00:00 stderr F I1013 00:15:35.653510 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.654068373+00:00 stderr F I1013 00:15:35.654030 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.654260999+00:00 stderr F I1013 00:15:35.654236 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.654430904+00:00 stderr F I1013 00:15:35.654411 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.655501386+00:00 stderr F I1013 00:15:35.654768 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.655501386+00:00 stderr F I1013 00:15:35.654930 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.655501386+00:00 stderr F I1013 00:15:35.655133 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.656217148+00:00 stderr F I1013 00:15:35.656176 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.656698292+00:00 stderr F I1013 00:15:35.656051 1 reflector.go:351] Caches populated for *v1.NetworkPolicy 
from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.659450715+00:00 stderr F I1013 00:15:35.657669 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.660147715+00:00 stderr F I1013 00:15:35.660094 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.663410603+00:00 stderr F I1013 00:15:35.662923 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.663410603+00:00 stderr F I1013 00:15:35.663059 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.664034012+00:00 stderr F I1013 00:15:35.663990 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.664561948+00:00 stderr F I1013 00:15:35.664531 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.671568518+00:00 stderr F I1013 00:15:35.671491 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.672692711+00:00 stderr F I1013 00:15:35.672008 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.676274149+00:00 stderr F I1013 00:15:35.676233 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.676529586+00:00 stderr F I1013 00:15:35.676488 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "basic-user" not found 2025-10-13T00:15:35.676591448+00:00 stderr F I1013 00:15:35.676546 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.676591448+00:00 stderr F I1013 00:15:35.676574 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.676711222+00:00 stderr F I1013 00:15:35.676672 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-autoscaler" not found 2025-10-13T00:15:35.676711222+00:00 stderr F I1013 00:15:35.676701 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-autoscaler-operator" not found 2025-10-13T00:15:35.676720712+00:00 stderr F I1013 00:15:35.676714 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-monitoring-operator" not found 2025-10-13T00:15:35.676728032+00:00 stderr F I1013 00:15:35.676722 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 
2025-10-13T00:15:35.676774554+00:00 stderr F I1013 00:15:35.676732 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-reader" not found 2025-10-13T00:15:35.676774554+00:00 stderr F I1013 00:15:35.676741 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator" not found 2025-10-13T00:15:35.676774554+00:00 stderr F I1013 00:15:35.676752 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator-imageconfig-reader" not found 2025-10-13T00:15:35.676774554+00:00 stderr F I1013 00:15:35.676760 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator-proxy-reader" not found 2025-10-13T00:15:35.676774554+00:00 stderr F I1013 00:15:35.676769 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-status" not found 2025-10-13T00:15:35.676795774+00:00 stderr F I1013 00:15:35.676777 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.676795774+00:00 stderr F I1013 00:15:35.676791 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console" not found 2025-10-13T00:15:35.676949799+00:00 stderr F I1013 00:15:35.676918 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found 2025-10-13T00:15:35.676949799+00:00 stderr F I1013 00:15:35.676937 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found 2025-10-13T00:15:35.676958259+00:00 stderr F I1013 00:15:35.676951 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console-operator" not found 2025-10-13T00:15:35.676965239+00:00 stderr F I1013 00:15:35.676958 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found 2025-10-13T00:15:35.676972330+00:00 stderr F I1013 00:15:35.676965 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "control-plane-machine-set-operator" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.676977 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.676990 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from 
role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-provisioner-runner" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.677001 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-provisioner-runner" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.677008 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "csi-snapshot-controller-operator-clusterrole" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.677014 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.677021 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-image-registry-operator" not found 2025-10-13T00:15:35.677037142+00:00 stderr F I1013 00:15:35.677031 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "dns-monitoring" not found 2025-10-13T00:15:35.677048842+00:00 stderr F I1013 00:15:35.677038 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found 2025-10-13T00:15:35.677048842+00:00 stderr F I1013 00:15:35.677045 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "kube-apiserver" not found 2025-10-13T00:15:35.677061582+00:00 stderr F I1013 00:15:35.677055 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.677068902+00:00 stderr F I1013 00:15:35.677062 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-10-13T00:15:35.677075993+00:00 stderr F I1013 00:15:35.677070 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-controllers-metal3-remediation" not found 2025-10-13T00:15:35.677085753+00:00 stderr F I1013 00:15:35.677080 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-operator" not found 2025-10-13T00:15:35.677110514+00:00 stderr F I1013 00:15:35.677088 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-operator-ext-remediation" not found 2025-10-13T00:15:35.677118614+00:00 stderr F I1013 00:15:35.677095 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-controller" not found 2025-10-13T00:15:35.677118614+00:00 stderr F I1013 
00:15:35.677107 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-daemon" not found 2025-10-13T00:15:35.677118614+00:00 stderr F I1013 00:15:35.677114 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-server" not found 2025-10-13T00:15:35.677130374+00:00 stderr F I1013 00:15:35.677121 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-os-builder" not found 2025-10-13T00:15:35.677137795+00:00 stderr F I1013 00:15:35.677132 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:anyuid" not found 2025-10-13T00:15:35.677144915+00:00 stderr F I1013 00:15:35.677140 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "marketplace-operator" not found 2025-10-13T00:15:35.677151895+00:00 stderr F I1013 00:15:35.677147 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "metrics-daemon-role" not found 2025-10-13T00:15:35.677160505+00:00 stderr F I1013 00:15:35.677155 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-admission-controller-webhook" not found 2025-10-13T00:15:35.677167585+00:00 stderr F I1013 00:15:35.677161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found 2025-10-13T00:15:35.677192996+00:00 stderr F I1013 00:15:35.677170 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found 2025-10-13T00:15:35.677192996+00:00 stderr F I1013 00:15:35.677176 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus" not found 2025-10-13T00:15:35.677192996+00:00 stderr F I1013 00:15:35.677184 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found 2025-10-13T00:15:35.677205517+00:00 stderr F I1013 00:15:35.677190 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "whereabouts-cni" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677216 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "network-diagnostics" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677244 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "network-node-identity" not found 
2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677317 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:operator-lifecycle-manager" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677337 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677346 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns-operator" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677419 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-pruner" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677426 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-operator" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677434 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-router" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677440 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-iptables-alerter" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677448 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-control-plane-limited" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677454 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node-limited" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677466 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-kube-rbac-proxy" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677473 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677480 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "prometheus-k8s-scheduler-resources" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677513 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "registry-monitoring" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677532 1 sccrolecache.go:466] failed to retrieve a role for a 
rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:registry" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677543 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "router-monitoring" not found 2025-10-13T00:15:35.677637360+00:00 stderr F I1013 00:15:35.677614 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found 2025-10-13T00:15:35.677738663+00:00 stderr F I1013 00:15:35.677715 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "self-provisioner" not found 2025-10-13T00:15:35.677738663+00:00 stderr F I1013 00:15:35.677727 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.677746723+00:00 stderr F I1013 00:15:35.677738 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-bootstrapper" not found 2025-10-13T00:15:35.677753913+00:00 stderr F I1013 00:15:35.677746 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677876 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677896 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677911 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677923 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677935 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:attachdetach-controller" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677946 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:certificate-controller" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677958 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io 
"system:controller:clusterrole-aggregation-controller" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677976 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:cronjob-controller" not found 2025-10-13T00:15:35.677996590+00:00 stderr F I1013 00:15:35.677987 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:daemon-set-controller" not found 2025-10-13T00:15:35.678013151+00:00 stderr F I1013 00:15:35.677999 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:deployment-controller" not found 2025-10-13T00:15:35.678020161+00:00 stderr F I1013 00:15:35.678010 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:disruption-controller" not found 2025-10-13T00:15:35.678028891+00:00 stderr F I1013 00:15:35.678022 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpoint-controller" not found 2025-10-13T00:15:35.678054492+00:00 stderr F I1013 00:15:35.678033 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslice-controller" not found 2025-10-13T00:15:35.678054492+00:00 stderr F I1013 00:15:35.678050 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslicemirroring-controller" not found 2025-10-13T00:15:35.678075413+00:00 stderr F I1013 00:15:35.678060 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ephemeral-volume-controller" not found 2025-10-13T00:15:35.678082823+00:00 stderr F I1013 00:15:35.678076 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:expand-controller" not found 2025-10-13T00:15:35.678126864+00:00 stderr F I1013 00:15:35.678111 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:generic-garbage-collector" not found 2025-10-13T00:15:35.678134294+00:00 stderr F I1013 00:15:35.678128 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:horizontal-pod-autoscaler" not found 2025-10-13T00:15:35.678156115+00:00 stderr F I1013 00:15:35.678140 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:job-controller" not found 2025-10-13T00:15:35.678163345+00:00 stderr F I1013 00:15:35.678156 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:controller:legacy-service-account-token-cleaner" not found 2025-10-13T00:15:35.678178776+00:00 stderr F I1013 00:15:35.678171 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:namespace-controller" not found 2025-10-13T00:15:35.678188776+00:00 stderr F I1013 00:15:35.678182 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:node-controller" not found 2025-10-13T00:15:35.678211017+00:00 stderr F I1013 00:15:35.678195 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:persistent-volume-binder" not found 2025-10-13T00:15:35.678218417+00:00 stderr F I1013 00:15:35.678209 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pod-garbage-collector" not found 2025-10-13T00:15:35.678227087+00:00 stderr F I1013 00:15:35.678221 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pv-protection-controller" not found 2025-10-13T00:15:35.678246918+00:00 stderr F I1013 00:15:35.678231 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pvc-protection-controller" not found 2025-10-13T00:15:35.678254138+00:00 stderr F I1013 00:15:35.678246 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replicaset-controller" not found 2025-10-13T00:15:35.678262808+00:00 stderr F I1013 00:15:35.678256 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replication-controller" not found 2025-10-13T00:15:35.678282789+00:00 stderr F I1013 00:15:35.678268 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:resourcequota-controller" not found 2025-10-13T00:15:35.678290019+00:00 stderr F I1013 00:15:35.678282 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:root-ca-cert-publisher" not found 2025-10-13T00:15:35.678314200+00:00 stderr F I1013 00:15:35.678298 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:route-controller" not found 2025-10-13T00:15:35.678321440+00:00 stderr F I1013 00:15:35.678315 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-account-controller" not found 2025-10-13T00:15:35.678371502+00:00 stderr F I1013 00:15:35.678354 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't 
retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-ca-cert-publisher" not found 2025-10-13T00:15:35.678378982+00:00 stderr F I1013 00:15:35.678371 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-controller" not found 2025-10-13T00:15:35.678398522+00:00 stderr F I1013 00:15:35.678383 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:statefulset-controller" not found 2025-10-13T00:15:35.678409003+00:00 stderr F I1013 00:15:35.678399 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-after-finished-controller" not found 2025-10-13T00:15:35.678416143+00:00 stderr F I1013 00:15:35.678410 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-controller" not found 2025-10-13T00:15:35.678435503+00:00 stderr F I1013 00:15:35.678420 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.678442724+00:00 stderr F I1013 00:15:35.678435 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:discovery" not found 2025-10-13T00:15:35.678463264+00:00 stderr F I1013 00:15:35.678447 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.678470494+00:00 stderr F I1013 00:15:35.678464 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.678479115+00:00 stderr F I1013 00:15:35.678473 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found 2025-10-13T00:15:35.678498685+00:00 stderr F I1013 00:15:35.678484 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-dns" not found 2025-10-13T00:15:35.678505886+00:00 stderr F I1013 00:15:35.678498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found 2025-10-13T00:15:35.678514526+00:00 stderr F I1013 00:15:35.678508 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:master" not found 2025-10-13T00:15:35.678535826+00:00 stderr F I1013 00:15:35.678519 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:monitoring" not found 2025-10-13T00:15:35.678544707+00:00 stderr F 
I1013 00:15:35.678535 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node" not found 2025-10-13T00:15:35.678552147+00:00 stderr F I1013 00:15:35.678544 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found 2025-10-13T00:15:35.678563647+00:00 stderr F I1013 00:15:35.678557 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found 2025-10-13T00:15:35.678574038+00:00 stderr F I1013 00:15:35.678567 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-bootstrapper" not found 2025-10-13T00:15:35.678585388+00:00 stderr F I1013 00:15:35.678579 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found 2025-10-13T00:15:35.678594018+00:00 stderr F I1013 00:15:35.678588 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found 2025-10-13T00:15:35.678615569+00:00 stderr F I1013 00:15:35.678599 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found 2025-10-13T00:15:35.678622829+00:00 stderr F I1013 00:15:35.678615 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:build-config-change-controller" not found 2025-10-13T00:15:35.678632279+00:00 stderr F I1013 00:15:35.678626 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:build-controller" not found 2025-10-13T00:15:35.679164765+00:00 stderr F I1013 00:15:35.679131 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-csr-approver-controller" not found 2025-10-13T00:15:35.679173125+00:00 stderr F I1013 00:15:35.679164 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-quota-reconciliation-controller" not found 2025-10-13T00:15:35.679200096+00:00 stderr F I1013 00:15:35.679175 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:default-rolebindings-controller" not found 2025-10-13T00:15:35.679200096+00:00 stderr F I1013 00:15:35.679194 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:deployer-controller" not found 2025-10-13T00:15:35.679211657+00:00 stderr F I1013 00:15:35.679204 1 
sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:deploymentconfig-controller" not found 2025-10-13T00:15:35.679221987+00:00 stderr F I1013 00:15:35.679216 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:horizontal-pod-autoscaler" not found 2025-10-13T00:15:35.679231647+00:00 stderr F I1013 00:15:35.679226 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:image-import-controller" not found 2025-10-13T00:15:35.679274079+00:00 stderr F I1013 00:15:35.679237 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:image-trigger-controller" not found 2025-10-13T00:15:35.679283769+00:00 stderr F I1013 00:15:35.679272 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found 2025-10-13T00:15:35.679292089+00:00 stderr F I1013 00:15:35.679284 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints-crd-reader" not found 2025-10-13T00:15:35.679305279+00:00 stderr F I1013 00:15:35.679294 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints-node-reader" not found 2025-10-13T00:15:35.679433543+00:00 stderr F I1013 00:15:35.679393 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found 2025-10-13T00:15:35.679433543+00:00 stderr F I1013 00:15:35.679421 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:namespace-security-allocation-controller" not found 2025-10-13T00:15:35.679446664+00:00 stderr F I1013 00:15:35.679436 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:origin-namespace-controller" not found 2025-10-13T00:15:35.679453954+00:00 stderr F I1013 00:15:35.679448 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:podsecurity-admission-label-syncer-controller" not found 2025-10-13T00:15:35.679481805+00:00 stderr F I1013 00:15:35.679460 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:privileged-namespaces-psa-label-syncer" not found 2025-10-13T00:15:35.679481805+00:00 stderr F I1013 00:15:35.679477 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve 
clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:pv-recycler-controller" not found 2025-10-13T00:15:35.679513396+00:00 stderr F I1013 00:15:35.679489 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:resourcequota-controller" not found 2025-10-13T00:15:35.679513396+00:00 stderr F I1013 00:15:35.679508 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found 2025-10-13T00:15:35.679537226+00:00 stderr F I1013 00:15:35.679519 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ingress-ip-controller" not found 2025-10-13T00:15:35.679570057+00:00 stderr F I1013 00:15:35.679549 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:serviceaccount-controller" not found 2025-10-13T00:15:35.679577998+00:00 stderr F I1013 00:15:35.679569 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:serviceaccount-pull-secrets-controller" not found 2025-10-13T00:15:35.679586698+00:00 stderr F I1013 00:15:35.679580 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-instance-controller" not found 2025-10-13T00:15:35.679598048+00:00 stderr F I1013 00:15:35.679591 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "admin" not found 2025-10-13T00:15:35.679615319+00:00 stderr F I1013 00:15:35.679601 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-instance-finalizer-controller" not found 2025-10-13T00:15:35.679622369+00:00 stderr F I1013 00:15:35.679612 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "admin" not found 2025-10-13T00:15:35.679652620+00:00 stderr F I1013 00:15:35.679626 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-service-broker" not found 2025-10-13T00:15:35.679722892+00:00 stderr F I1013 00:15:35.679703 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:unidling-controller" not found 2025-10-13T00:15:35.679752703+00:00 stderr F I1013 00:15:35.679743 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found 2025-10-13T00:15:35.679778394+00:00 stderr F I1013 00:15:35.679769 1 
sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.679806074+00:00 stderr F I1013 00:15:35.679796 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.679830275+00:00 stderr F I1013 00:15:35.679821 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.679857336+00:00 stderr F I1013 00:15:35.679847 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager" not found 2025-10-13T00:15:35.679891487+00:00 stderr F I1013 00:15:35.679876 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:image-trigger-controller" not found 2025-10-13T00:15:35.679926418+00:00 stderr F I1013 00:15:35.679914 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:ingress-to-route-controller" not found 2025-10-13T00:15:35.679954659+00:00 stderr F I1013 00:15:35.679945 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:update-buildconfig-status" not found 2025-10-13T00:15:35.679983060+00:00 stderr F I1013 00:15:35.679973 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-route-controller-manager" not found 2025-10-13T00:15:35.680049972+00:00 stderr F I1013 00:15:35.680037 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680083133+00:00 stderr F I1013 00:15:35.680074 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680125424+00:00 stderr F I1013 00:15:35.680116 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:operator:etcd-backup-role" not found 2025-10-13T00:15:35.680152165+00:00 stderr F I1013 00:15:35.680143 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680178826+00:00 stderr F I1013 00:15:35.680170 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680203386+00:00 stderr F I1013 00:15:35.680194 1 sccrolecache.go:466] failed to retrieve a role for a 
rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680235237+00:00 stderr F I1013 00:15:35.680226 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680265988+00:00 stderr F I1013 00:15:35.680257 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680291479+00:00 stderr F I1013 00:15:35.680282 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680318350+00:00 stderr F I1013 00:15:35.680310 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found 2025-10-13T00:15:35.680359261+00:00 stderr F I1013 00:15:35.680349 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680445784+00:00 stderr F I1013 00:15:35.680393 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680445784+00:00 stderr F I1013 00:15:35.680418 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680445784+00:00 stderr F I1013 00:15:35.680428 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680445784+00:00 stderr F I1013 00:15:35.680438 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680456634+00:00 stderr F I1013 00:15:35.680449 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680465254+00:00 stderr F I1013 00:15:35.680459 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680476335+00:00 stderr F I1013 00:15:35.680470 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680508116+00:00 stderr F I1013 00:15:35.680481 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-10-13T00:15:35.680508116+00:00 stderr F I1013 00:15:35.680498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found 2025-10-13T00:15:35.680531636+00:00 stderr F I1013 00:15:35.680516 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found 2025-10-13T00:15:35.680538966+00:00 stderr F I1013 00:15:35.680531 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-controller-manager" not found 2025-10-13T00:15:35.680547807+00:00 stderr F I1013 00:15:35.680542 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-route-controller-manager" not found 2025-10-13T00:15:35.680578538+00:00 stderr F I1013 00:15:35.680556 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:useroauthaccesstoken-manager" not found 2025-10-13T00:15:35.680578538+00:00 stderr F I1013 00:15:35.680573 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found 2025-10-13T00:15:35.680610019+00:00 stderr F I1013 00:15:35.680584 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found 2025-10-13T00:15:35.680618949+00:00 stderr F I1013 00:15:35.680613 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:sdn-reader" not found 2025-10-13T00:15:35.680641350+00:00 stderr F I1013 00:15:35.680625 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found 2025-10-13T00:15:35.680641350+00:00 stderr F I1013 00:15:35.680637 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found 2025-10-13T00:15:35.680663930+00:00 stderr F I1013 00:15:35.680648 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:webhook" not found 2025-10-13T00:15:35.681261668+00:00 stderr F I1013 00:15:35.681225 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.682285149+00:00 stderr F I1013 00:15:35.682235 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.683173185+00:00 stderr F E1013 00:15:35.683137 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:35.683846816+00:00 stderr F E1013 00:15:35.683806 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" 
should be enqueued: namespace "openshift-service-ca" not found 2025-10-13T00:15:35.684782924+00:00 stderr F I1013 00:15:35.684743 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685155 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685284 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-controller-events" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685314 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-daemon-events" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685550 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-os-builder-events" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685587 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685597 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685615 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685630 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685661 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685670 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685679 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685703 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685722 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 
00:15:35.685743 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685757 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685771 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685781 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685856 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685870 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685875 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685892 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685900 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685909 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685928 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685947 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685957 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685970 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 
00:15:35.685982 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.685992 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686004 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686023 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686050 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686059 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686069 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686079 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686088 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686107 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686356821+00:00 stderr F I1013 00:15:35.686123 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686356821+00:00 stderr P I1013 00:15:35.686175 1 sccrolecache.go:466] failed to retrieve a role for a rolebindin 2025-10-13T00:15:35.686396102+00:00 stderr F g ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686191 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686200 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 
2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686222 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686231 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686236 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686250 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686262 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686267 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686290 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686307 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686396102+00:00 stderr F I1013 00:15:35.686318 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686478484+00:00 stderr F I1013 00:15:35.686439 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686478484+00:00 stderr F I1013 00:15:35.686460 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686478484+00:00 stderr F I1013 00:15:35.686470 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686579017+00:00 stderr F I1013 00:15:35.686549 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686588068+00:00 stderr F I1013 00:15:35.686576 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 
2025-10-13T00:15:35.686588068+00:00 stderr F I1013 00:15:35.686583 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686674650+00:00 stderr F I1013 00:15:35.686646 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686674650+00:00 stderr F I1013 00:15:35.686657 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686674650+00:00 stderr F I1013 00:15:35.686664 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686709481+00:00 stderr F I1013 00:15:35.686692 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686716882+00:00 stderr F I1013 00:15:35.686711 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686727302+00:00 stderr F I1013 00:15:35.686717 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686781924+00:00 stderr F I1013 00:15:35.686759 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686781924+00:00 stderr F I1013 00:15:35.686775 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686789544+00:00 stderr F I1013 00:15:35.686782 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686840965+00:00 stderr F I1013 00:15:35.686819 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686840965+00:00 stderr F I1013 00:15:35.686834 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686870696+00:00 stderr F I1013 00:15:35.686853 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686902817+00:00 stderr F I1013 00:15:35.686882 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 
2025-10-13T00:15:35.686931248+00:00 stderr F I1013 00:15:35.686912 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.686931248+00:00 stderr F I1013 00:15:35.686923 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.686968989+00:00 stderr F I1013 00:15:35.686950 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.686968989+00:00 stderr F I1013 00:15:35.686961 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687016951+00:00 stderr F I1013 00:15:35.686994 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687050872+00:00 stderr F I1013 00:15:35.687033 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687059652+00:00 stderr F I1013 00:15:35.687053 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687066752+00:00 stderr F I1013 00:15:35.687061 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687123304+00:00 stderr F I1013 00:15:35.687102 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687123304+00:00 stderr F I1013 00:15:35.687116 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687151955+00:00 stderr F I1013 00:15:35.687135 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687179125+00:00 stderr F I1013 00:15:35.687161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687179125+00:00 stderr F I1013 00:15:35.687173 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687186786+00:00 stderr F I1013 00:15:35.687178 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 
2025-10-13T00:15:35.687216477+00:00 stderr F I1013 00:15:35.687200 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687229027+00:00 stderr F I1013 00:15:35.687221 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687237757+00:00 stderr F I1013 00:15:35.687231 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687246487+00:00 stderr F I1013 00:15:35.687241 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687286169+00:00 stderr F I1013 00:15:35.687266 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687339360+00:00 stderr F I1013 00:15:35.687304 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687350521+00:00 stderr F E1013 00:15:35.687338 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:35.687391822+00:00 stderr F I1013 00:15:35.687368 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687399392+00:00 stderr F I1013 00:15:35.687393 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687406512+00:00 stderr F I1013 00:15:35.687399 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687444033+00:00 stderr F I1013 00:15:35.687423 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687444033+00:00 stderr F I1013 00:15:35.687440 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687470874+00:00 stderr F I1013 00:15:35.687454 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687507075+00:00 stderr F I1013 00:15:35.687489 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 
2025-10-13T00:15:35.687514555+00:00 stderr F I1013 00:15:35.687506 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687521676+00:00 stderr F I1013 00:15:35.687513 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687530316+00:00 stderr F I1013 00:15:35.687525 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687537466+00:00 stderr F I1013 00:15:35.687532 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687546076+00:00 stderr F I1013 00:15:35.687540 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687566977+00:00 stderr F I1013 00:15:35.687551 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687575987+00:00 stderr F I1013 00:15:35.687571 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687583097+00:00 stderr F I1013 00:15:35.687576 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687590508+00:00 stderr F I1013 00:15:35.687582 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687597448+00:00 stderr F I1013 00:15:35.687588 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687604368+00:00 stderr F I1013 00:15:35.687596 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687647619+00:00 stderr F I1013 00:15:35.687627 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687658440+00:00 stderr F I1013 00:15:35.687645 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687658440+00:00 stderr F I1013 00:15:35.687654 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 
2025-10-13T00:15:35.687692721+00:00 stderr F I1013 00:15:35.687673 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687692721+00:00 stderr F I1013 00:15:35.687689 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687701791+00:00 stderr F I1013 00:15:35.687696 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.687729052+00:00 stderr F I1013 00:15:35.687706 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints" not found 2025-10-13T00:15:35.687729052+00:00 stderr F I1013 00:15:35.687722 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.687757653+00:00 stderr F I1013 00:15:35.687734 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.687757653+00:00 stderr F I1013 00:15:35.687750 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.688435903+00:00 stderr F I1013 00:15:35.688409 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.688504935+00:00 stderr F I1013 00:15:35.688486 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.688504935+00:00 stderr F I1013 00:15:35.688498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.688592068+00:00 stderr F I1013 00:15:35.688564 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.688592068+00:00 stderr F I1013 00:15:35.688578 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.688670890+00:00 stderr F I1013 00:15:35.688653 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.688713531+00:00 stderr F I1013 00:15:35.688696 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" 
not found 2025-10-13T00:15:35.688764663+00:00 stderr F I1013 00:15:35.688740 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.688764663+00:00 stderr F I1013 00:15:35.688752 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.688897247+00:00 stderr F I1013 00:15:35.688869 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.688906827+00:00 stderr F I1013 00:15:35.688897 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.689076552+00:00 stderr F I1013 00:15:35.689045 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.689087373+00:00 stderr F I1013 00:15:35.689077 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.689112943+00:00 stderr F I1013 00:15:35.689094 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.689136554+00:00 stderr F I1013 00:15:35.689111 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.689258288+00:00 stderr F I1013 00:15:35.689237 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.689283808+00:00 stderr F I1013 00:15:35.689265 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.689344480+00:00 stderr F I1013 00:15:35.689311 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.689429343+00:00 stderr F I1013 00:15:35.689395 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-controller-events" not found 2025-10-13T00:15:35.689438973+00:00 stderr F I1013 00:15:35.689432 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-daemon-events" not found 2025-10-13T00:15:35.689511065+00:00 stderr F I1013 00:15:35.689490 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io 
"machine-os-builder-events" not found 2025-10-13T00:15:35.689519846+00:00 stderr F I1013 00:15:35.689509 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.689626869+00:00 stderr F I1013 00:15:35.689606 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.689635749+00:00 stderr F I1013 00:15:35.689630 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.689721302+00:00 stderr F I1013 00:15:35.689701 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.689795514+00:00 stderr F I1013 00:15:35.689776 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.689803534+00:00 stderr F I1013 00:15:35.689793 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.689847035+00:00 stderr F I1013 00:15:35.689828 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.689890917+00:00 stderr F I1013 00:15:35.689872 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-monitoring-operator-namespaced" not found 2025-10-13T00:15:35.689980139+00:00 stderr F I1013 00:15:35.689961 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.689988080+00:00 stderr F I1013 00:15:35.689978 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.689998460+00:00 stderr F I1013 00:15:35.689991 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690074162+00:00 stderr F I1013 00:15:35.690054 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690140484+00:00 stderr F I1013 00:15:35.690122 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.690148344+00:00 stderr F I1013 00:15:35.690138 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690225677+00:00 stderr F I1013 00:15:35.690206 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690237347+00:00 stderr F I1013 00:15:35.690230 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.690261718+00:00 stderr F I1013 00:15:35.690244 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690391972+00:00 stderr F I1013 00:15:35.690360 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690391972+00:00 stderr F I1013 00:15:35.690381 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.690404712+00:00 stderr F I1013 00:15:35.690395 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690495645+00:00 stderr F I1013 00:15:35.690465 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:hostnetwork-v2" not found 2025-10-13T00:15:35.690495645+00:00 stderr F I1013 00:15:35.690491 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690580527+00:00 stderr F I1013 00:15:35.690553 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.690580527+00:00 stderr F I1013 00:15:35.690574 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690616808+00:00 stderr F I1013 00:15:35.690596 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690678040+00:00 stderr F I1013 00:15:35.690658 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.690686170+00:00 stderr F I1013 00:15:35.690676 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690764993+00:00 stderr F I1013 00:15:35.690736 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role 
ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690764993+00:00 stderr F I1013 00:15:35.690756 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.690775813+00:00 stderr F I1013 00:15:35.690770 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.690844555+00:00 stderr F I1013 00:15:35.690825 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.690953598+00:00 stderr F I1013 00:15:35.690927 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691074112+00:00 stderr F I1013 00:15:35.691034 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691087202+00:00 stderr F I1013 00:15:35.691069 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691110353+00:00 stderr F I1013 00:15:35.691092 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691119423+00:00 stderr F I1013 00:15:35.691114 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691281768+00:00 stderr F I1013 00:15:35.691266 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691291009+00:00 stderr F I1013 00:15:35.691286 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691299179+00:00 stderr F I1013 00:15:35.691294 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691356771+00:00 stderr F I1013 00:15:35.691319 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691356771+00:00 stderr F I1013 00:15:35.691341 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691377531+00:00 stderr F I1013 00:15:35.691364 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691419712+00:00 stderr F I1013 00:15:35.691406 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691419712+00:00 stderr F I1013 00:15:35.691415 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691427393+00:00 stderr F I1013 00:15:35.691421 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691535966+00:00 stderr F I1013 00:15:35.691489 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691535966+00:00 stderr F I1013 00:15:35.691498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691535966+00:00 stderr F I1013 00:15:35.691505 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691546466+00:00 stderr F I1013 00:15:35.691542 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691557717+00:00 stderr F I1013 00:15:35.691553 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691597628+00:00 stderr F I1013 00:15:35.691571 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691620348+00:00 stderr F I1013 00:15:35.691608 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691620348+00:00 stderr F I1013 00:15:35.691616 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691667700+00:00 stderr F I1013 00:15:35.691655 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691667700+00:00 stderr F I1013 00:15:35.691663 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691675880+00:00 stderr F I1013 00:15:35.691670 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691715581+00:00 stderr F I1013 00:15:35.691703 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691735482+00:00 stderr F E1013 00:15:35.691722 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-10-13T00:15:35.691755792+00:00 stderr F I1013 00:15:35.691735 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-monitoring-operator-namespaced" not found 2025-10-13T00:15:35.691762963+00:00 stderr F I1013 00:15:35.691756 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691771103+00:00 stderr F I1013 00:15:35.691767 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691801934+00:00 stderr F I1013 00:15:35.691789 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691824495+00:00 stderr F I1013 00:15:35.691812 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.691824495+00:00 stderr F I1013 00:15:35.691819 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.691894867+00:00 stderr F I1013 00:15:35.691864 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.691894867+00:00 stderr F I1013 00:15:35.691886 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "edit" not found 2025-10-13T00:15:35.692190386+00:00 stderr F I1013 00:15:35.692142 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-10-13T00:15:35.692267008+00:00 stderr F I1013 00:15:35.692246 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-10-13T00:15:35.692308129+00:00 stderr F I1013 00:15:35.692295 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-10-13T00:15:35.694765803+00:00 stderr F I1013 00:15:35.694738 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-10-13T00:15:35.699096252+00:00 stderr F I1013 00:15:35.699049 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.705067501+00:00 stderr F I1013 00:15:35.705019 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.728871585+00:00 stderr F I1013 00:15:35.728756 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-10-13T00:15:35.728871585+00:00 stderr F I1013 00:15:35.728773 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 2025-10-13T00:15:35.732026729+00:00 stderr F I1013 00:15:35.731946 1 shared_informer.go:318] Caches are synced for resource quota 2025-10-13T00:15:35.732026729+00:00 stderr F I1013 00:15:35.731971 1 resource_quota_controller.go:496] "synced quota controller" 2025-10-13T00:15:35.825683295+00:00 stderr F I1013 00:15:35.825604 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:35.833805489+00:00 stderr F I1013 00:15:35.833730 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.034234364+00:00 stderr F I1013 00:15:36.034174 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.118366385+00:00 stderr F I1013 00:15:36.118301 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.226307439+00:00 stderr F I1013 00:15:36.226241 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.233011690+00:00 stderr F I1013 00:15:36.232965 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.425930760+00:00 stderr F I1013 00:15:36.425853 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.432819926+00:00 stderr F I1013 00:15:36.432767 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.638423977+00:00 stderr F I1013 00:15:36.631698 1 request.go:697] Waited for 1.000197778s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/prometheuses?limit=500&resourceVersion=0 2025-10-13T00:15:36.638423977+00:00 stderr F I1013 00:15:36.633372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.831462481+00:00 stderr F I1013 00:15:36.831343 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.833369418+00:00 stderr F I1013 00:15:36.833268 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:36.842991236+00:00 stderr F I1013 00:15:36.842891 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-10-13T00:15:36.843065048+00:00 stderr F E1013 00:15:36.843019 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "default" should be enqueued: namespace "default" not found 2025-10-13T00:15:36.843082499+00:00 stderr F E1013 00:15:36.843064 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "default" should be enqueued: namespace "default" not found 2025-10-13T00:15:36.843098769+00:00 stderr F E1013 00:15:36.843081 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "default" should be enqueued: namespace "default" not found 2025-10-13T00:15:36.843098769+00:00 stderr F E1013 00:15:36.843090 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-10-13T00:15:36.843116210+00:00 stderr F E1013 00:15:36.843097 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-10-13T00:15:36.843116210+00:00 stderr F E1013 00:15:36.843104 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-10-13T00:15:36.843116210+00:00 stderr F E1013 00:15:36.843111 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-10-13T00:15:36.843136380+00:00 stderr F E1013 00:15:36.843125 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-10-13T00:15:36.843187692+00:00 stderr F E1013 00:15:36.843153 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found 2025-10-13T00:15:36.843187692+00:00 stderr F E1013 00:15:36.843176 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found 2025-10-13T00:15:36.843204452+00:00 stderr F E1013 00:15:36.843187 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found 2025-10-13T00:15:36.843204452+00:00 stderr F E1013 00:15:36.843193 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-public" should be enqueued: namespace "kube-public" not found 2025-10-13T00:15:36.843220793+00:00 stderr F E1013 00:15:36.843207 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-public" should be enqueued: namespace "kube-public" not found 2025-10-13T00:15:36.843236623+00:00 stderr F E1013 00:15:36.843219 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-public" should be enqueued: namespace "kube-public" not found 2025-10-13T00:15:36.843236623+00:00 stderr F E1013 00:15:36.843228 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843271464+00:00 stderr F E1013 00:15:36.843234 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" 
not found 2025-10-13T00:15:36.843271464+00:00 stderr F E1013 00:15:36.843242 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843271464+00:00 stderr F E1013 00:15:36.843263 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843290345+00:00 stderr F E1013 00:15:36.843276 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843290345+00:00 stderr F E1013 00:15:36.843281 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843306425+00:00 stderr F E1013 00:15:36.843287 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843394518+00:00 stderr F E1013 00:15:36.843312 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843394518+00:00 stderr F E1013 00:15:36.843318 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843394518+00:00 stderr F E1013 00:15:36.843348 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843394518+00:00 stderr F E1013 00:15:36.843366 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843418879+00:00 stderr F E1013 00:15:36.843392 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843418879+00:00 stderr F E1013 00:15:36.843403 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843418879+00:00 stderr F E1013 00:15:36.843410 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843436639+00:00 stderr F E1013 00:15:36.843417 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843436639+00:00 stderr F E1013 00:15:36.843424 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843436639+00:00 stderr F E1013 00:15:36.843431 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843502881+00:00 stderr F E1013 00:15:36.843453 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843525972+00:00 stderr F E1013 00:15:36.843507 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843525972+00:00 stderr F E1013 00:15:36.843514 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843525972+00:00 stderr F E1013 00:15:36.843520 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843543553+00:00 stderr F E1013 00:15:36.843530 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843543553+00:00 stderr F E1013 00:15:36.843535 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843559803+00:00 stderr F E1013 00:15:36.843549 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843640565+00:00 stderr F E1013 00:15:36.843585 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843640565+00:00 stderr F E1013 00:15:36.843611 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843658456+00:00 stderr F E1013 00:15:36.843636 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843658456+00:00 stderr F E1013 00:15:36.843646 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843675307+00:00 stderr F E1013 00:15:36.843654 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843675307+00:00 stderr F E1013 00:15:36.843663 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843691657+00:00 stderr F E1013 00:15:36.843671 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843691657+00:00 stderr F E1013 00:15:36.843679 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843712668+00:00 stderr F E1013 00:15:36.843697 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843712668+00:00 stderr F E1013 00:15:36.843707 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-10-13T00:15:36.843729098+00:00 stderr F E1013 00:15:36.843716 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace 
"openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-10-13T00:15:36.843729098+00:00 stderr F E1013 00:15:36.843724 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-10-13T00:15:36.843752569+00:00 stderr F E1013 00:15:36.843733 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-10-13T00:15:36.843752569+00:00 stderr F E1013 00:15:36.843740 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-10-13T00:15:36.843752569+00:00 stderr F E1013 00:15:36.843747 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-10-13T00:15:36.843770159+00:00 stderr F E1013 00:15:36.843755 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-10-13T00:15:36.843770159+00:00 stderr F E1013 00:15:36.843763 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-10-13T00:15:36.843833431+00:00 stderr F E1013 00:15:36.843798 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-10-13T00:15:36.843833431+00:00 stderr F E1013 00:15:36.843816 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-10-13T00:15:36.843833431+00:00 stderr F E1013 00:15:36.843825 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-10-13T00:15:36.843851202+00:00 stderr F E1013 00:15:36.843833 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-10-13T00:15:36.843872142+00:00 stderr F E1013 00:15:36.843858 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-10-13T00:15:36.843891523+00:00 stderr F E1013 00:15:36.843880 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-10-13T00:15:36.843907033+00:00 stderr F E1013 00:15:36.843891 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-10-13T00:15:36.843922944+00:00 stderr F E1013 00:15:36.843910 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not 
found 2025-10-13T00:15:36.843938124+00:00 stderr F E1013 00:15:36.843919 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-10-13T00:15:36.843956855+00:00 stderr F E1013 00:15:36.843946 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-network-config-controller" should be enqueued: namespace "openshift-cloud-network-config-controller" not found 2025-10-13T00:15:36.843972815+00:00 stderr F E1013 00:15:36.843954 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-network-config-controller" should be enqueued: namespace "openshift-cloud-network-config-controller" not found 2025-10-13T00:15:36.843972815+00:00 stderr F E1013 00:15:36.843964 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-network-config-controller" should be enqueued: namespace "openshift-cloud-network-config-controller" not found 2025-10-13T00:15:36.843995686+00:00 stderr F E1013 00:15:36.843983 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-platform-infra" should be enqueued: namespace "openshift-cloud-platform-infra" not found 2025-10-13T00:15:36.844011357+00:00 stderr F E1013 00:15:36.843992 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-platform-infra" should be enqueued: namespace "openshift-cloud-platform-infra" not found 2025-10-13T00:15:36.844011357+00:00 stderr F E1013 00:15:36.844000 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-platform-infra" should be enqueued: namespace "openshift-cloud-platform-infra" not found 2025-10-13T00:15:36.844011357+00:00 stderr F E1013 00:15:36.844006 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-10-13T00:15:36.844028517+00:00 stderr F E1013 00:15:36.844013 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-10-13T00:15:36.844044278+00:00 stderr F E1013 00:15:36.844034 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-10-13T00:15:36.844059828+00:00 stderr F E1013 00:15:36.844040 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-10-13T00:15:36.844059828+00:00 stderr F E1013 00:15:36.844048 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-10-13T00:15:36.844059828+00:00 stderr F E1013 00:15:36.844054 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-10-13T00:15:36.844076859+00:00 stderr F E1013 00:15:36.844062 1 podsecurity_label_sync_controller.go:420] failed to determine 
whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-10-13T00:15:36.844092059+00:00 stderr F E1013 00:15:36.844076 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-10-13T00:15:36.844107659+00:00 stderr F E1013 00:15:36.844091 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-storage-operator" should be enqueued: namespace "openshift-cluster-storage-operator" not found 2025-10-13T00:15:36.844107659+00:00 stderr F E1013 00:15:36.844102 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-storage-operator" should be enqueued: namespace "openshift-cluster-storage-operator" not found 2025-10-13T00:15:36.844171061+00:00 stderr F E1013 00:15:36.844124 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-storage-operator" should be enqueued: namespace "openshift-cluster-storage-operator" not found 2025-10-13T00:15:36.844171061+00:00 stderr F E1013 00:15:36.844139 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-version" should be enqueued: namespace "openshift-cluster-version" not found 2025-10-13T00:15:36.844171061+00:00 stderr F E1013 00:15:36.844147 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-version" should be enqueued: namespace "openshift-cluster-version" not found 2025-10-13T00:15:36.844171061+00:00 stderr F E1013 00:15:36.844156 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-version" should be enqueued: namespace "openshift-cluster-version" not found 2025-10-13T00:15:36.844171061+00:00 stderr F E1013 00:15:36.844164 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-managed" should be enqueued: namespace "openshift-config-managed" not found 2025-10-13T00:15:36.844198282+00:00 stderr F E1013 00:15:36.844171 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-managed" should be enqueued: namespace "openshift-config-managed" not found 2025-10-13T00:15:36.844198282+00:00 stderr F E1013 00:15:36.844181 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-managed" should be enqueued: namespace "openshift-config-managed" not found 2025-10-13T00:15:36.844198282+00:00 stderr F E1013 00:15:36.844192 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-10-13T00:15:36.844215653+00:00 stderr F E1013 00:15:36.844200 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-10-13T00:15:36.844215653+00:00 stderr F E1013 00:15:36.844208 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-10-13T00:15:36.844231833+00:00 stderr F E1013 00:15:36.844215 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace 
"openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-10-13T00:15:36.844299825+00:00 stderr F E1013 00:15:36.844254 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config" should be enqueued: namespace "openshift-config" not found 2025-10-13T00:15:36.844299825+00:00 stderr F E1013 00:15:36.844285 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config" should be enqueued: namespace "openshift-config" not found 2025-10-13T00:15:36.844439669+00:00 stderr F E1013 00:15:36.844371 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config" should be enqueued: namespace "openshift-config" not found 2025-10-13T00:15:36.844439669+00:00 stderr F E1013 00:15:36.844391 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-operator" should be enqueued: namespace "openshift-console-operator" not found 2025-10-13T00:15:36.844439669+00:00 stderr F E1013 00:15:36.844418 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-operator" should be enqueued: namespace "openshift-console-operator" not found 2025-10-13T00:15:36.844439669+00:00 stderr F E1013 00:15:36.844432 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-operator" should be enqueued: namespace "openshift-console-operator" not found 2025-10-13T00:15:36.844470350+00:00 stderr F E1013 00:15:36.844460 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-operator" should be enqueued: namespace "openshift-console-operator" not found 2025-10-13T00:15:36.844492771+00:00 stderr F E1013 00:15:36.844472 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-user-settings" should be enqueued: namespace "openshift-console-user-settings" not found 2025-10-13T00:15:36.844508341+00:00 stderr F E1013 00:15:36.844494 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-user-settings" should be enqueued: namespace "openshift-console-user-settings" not found 2025-10-13T00:15:36.844523912+00:00 stderr F E1013 00:15:36.844505 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-user-settings" should be enqueued: namespace "openshift-console-user-settings" not found 2025-10-13T00:15:36.844523912+00:00 stderr F E1013 00:15:36.844517 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console" should be enqueued: namespace "openshift-console" not found 2025-10-13T00:15:36.844543563+00:00 stderr F E1013 00:15:36.844531 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console" should be enqueued: namespace "openshift-console" not found 2025-10-13T00:15:36.844608294+00:00 stderr F E1013 00:15:36.844572 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console" should be enqueued: namespace "openshift-console" not found 2025-10-13T00:15:36.844608294+00:00 stderr F E1013 00:15:36.844592 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console" should be enqueued: namespace "openshift-console" not found 2025-10-13T00:15:36.844608294+00:00 stderr F E1013 00:15:36.844600 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager-operator" should be enqueued: namespace "openshift-controller-manager-operator" not found 2025-10-13T00:15:36.844626335+00:00 stderr F E1013 00:15:36.844608 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager-operator" should be enqueued: namespace "openshift-controller-manager-operator" not found 2025-10-13T00:15:36.844626335+00:00 stderr F E1013 00:15:36.844614 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager-operator" should be enqueued: namespace "openshift-controller-manager-operator" not found 2025-10-13T00:15:36.844673256+00:00 stderr F E1013 00:15:36.844643 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager-operator" should be enqueued: namespace "openshift-controller-manager-operator" not found 2025-10-13T00:15:36.844673256+00:00 stderr F E1013 00:15:36.844663 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager" should be enqueued: namespace "openshift-controller-manager" not found 2025-10-13T00:15:36.844689717+00:00 stderr F E1013 00:15:36.844673 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager" should be enqueued: namespace "openshift-controller-manager" not found 2025-10-13T00:15:36.844708757+00:00 stderr F E1013 00:15:36.844694 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager" should be enqueued: namespace "openshift-controller-manager" not found 2025-10-13T00:15:36.844755589+00:00 stderr F E1013 00:15:36.844723 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager" should be enqueued: namespace "openshift-controller-manager" not found 2025-10-13T00:15:36.844755589+00:00 stderr F E1013 00:15:36.844742 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns-operator" should be enqueued: namespace "openshift-dns-operator" not found 2025-10-13T00:15:36.844755589+00:00 stderr F E1013 00:15:36.844750 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns-operator" should be enqueued: namespace "openshift-dns-operator" not found 2025-10-13T00:15:36.844780010+00:00 stderr F E1013 00:15:36.844759 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns-operator" should be enqueued: namespace "openshift-dns-operator" not found 2025-10-13T00:15:36.844780010+00:00 stderr F E1013 00:15:36.844766 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns-operator" should be enqueued: namespace "openshift-dns-operator" not found 2025-10-13T00:15:36.844780010+00:00 stderr F E1013 00:15:36.844773 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found 2025-10-13T00:15:36.844797980+00:00 stderr F E1013 00:15:36.844779 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found 2025-10-13T00:15:36.844797980+00:00 stderr F E1013 00:15:36.844786 1 podsecurity_label_sync_controller.go:420] failed to determine whether 
namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found 2025-10-13T00:15:36.844814151+00:00 stderr F E1013 00:15:36.844794 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found 2025-10-13T00:15:36.844829811+00:00 stderr F E1013 00:15:36.844816 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found 2025-10-13T00:15:36.844845062+00:00 stderr F E1013 00:15:36.844830 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd-operator" should be enqueued: namespace "openshift-etcd-operator" not found 2025-10-13T00:15:36.844860692+00:00 stderr F E1013 00:15:36.844850 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd-operator" should be enqueued: namespace "openshift-etcd-operator" not found 2025-10-13T00:15:36.844860692+00:00 stderr F E1013 00:15:36.844856 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd-operator" should be enqueued: namespace "openshift-etcd-operator" not found 2025-10-13T00:15:36.844876563+00:00 stderr F E1013 00:15:36.844867 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd-operator" should be enqueued: namespace "openshift-etcd-operator" not found 2025-10-13T00:15:36.844895363+00:00 stderr F E1013 00:15:36.844885 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found 2025-10-13T00:15:36.844910664+00:00 stderr F E1013 00:15:36.844894 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found 2025-10-13T00:15:36.844926344+00:00 stderr F E1013 00:15:36.844910 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found 2025-10-13T00:15:36.844926344+00:00 stderr F E1013 00:15:36.844918 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found 2025-10-13T00:15:36.844951495+00:00 stderr F E1013 00:15:36.844940 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found 2025-10-13T00:15:36.844978666+00:00 stderr F E1013 00:15:36.844947 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found 2025-10-13T00:15:36.844978666+00:00 stderr F E1013 00:15:36.844956 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-host-network" should be enqueued: namespace "openshift-host-network" not found 2025-10-13T00:15:36.844978666+00:00 stderr F E1013 00:15:36.844963 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-host-network" should be enqueued: namespace "openshift-host-network" not found 2025-10-13T00:15:36.845001596+00:00 stderr F E1013 00:15:36.844974 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-host-network" should be enqueued: namespace "openshift-host-network" not found 
2025-10-13T00:15:36.845001596+00:00 stderr F E1013 00:15:36.844986 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845022557+00:00 stderr F E1013 00:15:36.844996 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845022557+00:00 stderr F E1013 00:15:36.845005 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845022557+00:00 stderr F E1013 00:15:36.845014 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845043488+00:00 stderr F E1013 00:15:36.845031 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845059418+00:00 stderr F E1013 00:15:36.845045 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845059418+00:00 stderr F E1013 00:15:36.845054 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-10-13T00:15:36.845075609+00:00 stderr F E1013 00:15:36.845064 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845094869+00:00 stderr F E1013 00:15:36.845084 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845146151+00:00 stderr F E1013 00:15:36.845109 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845146151+00:00 stderr F E1013 00:15:36.845122 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845162951+00:00 stderr F E1013 00:15:36.845143 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845162951+00:00 stderr F E1013 00:15:36.845156 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845223743+00:00 stderr F E1013 00:15:36.845163 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845223743+00:00 stderr F E1013 00:15:36.845171 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845223743+00:00 stderr F E1013 
00:15:36.845177 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845286805+00:00 stderr F E1013 00:15:36.845251 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845286805+00:00 stderr F E1013 00:15:36.845267 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845286805+00:00 stderr F E1013 00:15:36.845275 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845304965+00:00 stderr F E1013 00:15:36.845292 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845320516+00:00 stderr F E1013 00:15:36.845309 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845381918+00:00 stderr F E1013 00:15:36.845354 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845398128+00:00 stderr F E1013 00:15:36.845381 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845413729+00:00 stderr F E1013 00:15:36.845392 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845413729+00:00 stderr F E1013 00:15:36.845407 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845429879+00:00 stderr F E1013 00:15:36.845417 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845448960+00:00 stderr F E1013 00:15:36.845436 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845464440+00:00 stderr F E1013 00:15:36.845446 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845480841+00:00 stderr F E1013 00:15:36.845469 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845480841+00:00 stderr F E1013 00:15:36.845476 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845497071+00:00 stderr F E1013 00:15:36.845484 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 
2025-10-13T00:15:36.845497071+00:00 stderr F E1013 00:15:36.845491 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found 2025-10-13T00:15:36.845520462+00:00 stderr F E1013 00:15:36.845500 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-canary" should be enqueued: namespace "openshift-ingress-canary" not found 2025-10-13T00:15:36.845520462+00:00 stderr F E1013 00:15:36.845508 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-canary" should be enqueued: namespace "openshift-ingress-canary" not found 2025-10-13T00:15:36.845520462+00:00 stderr F E1013 00:15:36.845516 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-canary" should be enqueued: namespace "openshift-ingress-canary" not found 2025-10-13T00:15:36.845538032+00:00 stderr F E1013 00:15:36.845522 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-operator" should be enqueued: namespace "openshift-ingress-operator" not found 2025-10-13T00:15:36.845538032+00:00 stderr F E1013 00:15:36.845530 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-operator" should be enqueued: namespace "openshift-ingress-operator" not found 2025-10-13T00:15:36.845558743+00:00 stderr F E1013 00:15:36.845549 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-operator" should be enqueued: namespace "openshift-ingress-operator" not found 2025-10-13T00:15:36.845604924+00:00 stderr F E1013 00:15:36.845571 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-operator" should be enqueued: namespace "openshift-ingress-operator" not found 2025-10-13T00:15:36.845604924+00:00 stderr F E1013 00:15:36.845585 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found 2025-10-13T00:15:36.845604924+00:00 stderr F E1013 00:15:36.845592 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found 2025-10-13T00:15:36.845622315+00:00 stderr F E1013 00:15:36.845607 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found 2025-10-13T00:15:36.845686507+00:00 stderr F E1013 00:15:36.845648 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found 2025-10-13T00:15:36.845686507+00:00 stderr F E1013 00:15:36.845674 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kni-infra" should be enqueued: namespace "openshift-kni-infra" not found 2025-10-13T00:15:36.845703277+00:00 stderr F E1013 00:15:36.845685 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kni-infra" should be enqueued: namespace "openshift-kni-infra" not found 2025-10-13T00:15:36.845718858+00:00 stderr F E1013 00:15:36.845700 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kni-infra" should be enqueued: namespace "openshift-kni-infra" not 
found 2025-10-13T00:15:36.845737858+00:00 stderr F E1013 00:15:36.845726 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver-operator" should be enqueued: namespace "openshift-kube-apiserver-operator" not found 2025-10-13T00:15:36.845756899+00:00 stderr F E1013 00:15:36.845743 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver-operator" should be enqueued: namespace "openshift-kube-apiserver-operator" not found 2025-10-13T00:15:36.845790770+00:00 stderr F E1013 00:15:36.845774 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver-operator" should be enqueued: namespace "openshift-kube-apiserver-operator" not found 2025-10-13T00:15:36.845790770+00:00 stderr F E1013 00:15:36.845782 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver-operator" should be enqueued: namespace "openshift-kube-apiserver-operator" not found 2025-10-13T00:15:36.845807120+00:00 stderr F E1013 00:15:36.845790 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver" should be enqueued: namespace "openshift-kube-apiserver" not found 2025-10-13T00:15:36.845855122+00:00 stderr F E1013 00:15:36.845828 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver" should be enqueued: namespace "openshift-kube-apiserver" not found 2025-10-13T00:15:36.845872482+00:00 stderr F E1013 00:15:36.845861 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver" should be enqueued: namespace "openshift-kube-apiserver" not found 2025-10-13T00:15:36.845872482+00:00 stderr F E1013 00:15:36.845867 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver" should be enqueued: namespace "openshift-kube-apiserver" not found 2025-10-13T00:15:36.845888583+00:00 stderr F E1013 00:15:36.845878 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver" should be enqueued: namespace "openshift-kube-apiserver" not found 2025-10-13T00:15:36.845904343+00:00 stderr F E1013 00:15:36.845890 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager-operator" should be enqueued: namespace "openshift-kube-controller-manager-operator" not found 2025-10-13T00:15:36.845904343+00:00 stderr F E1013 00:15:36.845897 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager-operator" should be enqueued: namespace "openshift-kube-controller-manager-operator" not found 2025-10-13T00:15:36.845951205+00:00 stderr F E1013 00:15:36.845922 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager-operator" should be enqueued: namespace "openshift-kube-controller-manager-operator" not found 2025-10-13T00:15:36.845951205+00:00 stderr F E1013 00:15:36.845938 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager-operator" should be enqueued: namespace "openshift-kube-controller-manager-operator" not found 2025-10-13T00:15:36.845967715+00:00 stderr F E1013 00:15:36.845954 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace 
"openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found 2025-10-13T00:15:36.845983276+00:00 stderr F E1013 00:15:36.845964 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found 2025-10-13T00:15:36.845983276+00:00 stderr F E1013 00:15:36.845976 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found 2025-10-13T00:15:36.846048918+00:00 stderr F E1013 00:15:36.846011 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found 2025-10-13T00:15:36.846048918+00:00 stderr F E1013 00:15:36.846029 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found 2025-10-13T00:15:36.846072088+00:00 stderr F E1013 00:15:36.846044 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found 2025-10-13T00:15:36.846072088+00:00 stderr F E1013 00:15:36.846057 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler-operator" should be enqueued: namespace "openshift-kube-scheduler-operator" not found 2025-10-13T00:15:36.846088439+00:00 stderr F E1013 00:15:36.846070 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler-operator" should be enqueued: namespace "openshift-kube-scheduler-operator" not found 2025-10-13T00:15:36.846088439+00:00 stderr F E1013 00:15:36.846081 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler-operator" should be enqueued: namespace "openshift-kube-scheduler-operator" not found 2025-10-13T00:15:36.846104629+00:00 stderr F E1013 00:15:36.846092 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler-operator" should be enqueued: namespace "openshift-kube-scheduler-operator" not found 2025-10-13T00:15:36.846120190+00:00 stderr F E1013 00:15:36.846101 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found 2025-10-13T00:15:36.846183092+00:00 stderr F E1013 00:15:36.846132 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found 2025-10-13T00:15:36.846183092+00:00 stderr F E1013 00:15:36.846160 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found 2025-10-13T00:15:36.846183092+00:00 stderr F E1013 00:15:36.846177 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found 2025-10-13T00:15:36.846201152+00:00 stderr F E1013 00:15:36.846185 1 podsecurity_label_sync_controller.go:420] failed to 
determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found 2025-10-13T00:15:36.846248314+00:00 stderr F E1013 00:15:36.846204 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found 2025-10-13T00:15:36.846248314+00:00 stderr F E1013 00:15:36.846210 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator-operator" should be enqueued: namespace "openshift-kube-storage-version-migrator-operator" not found 2025-10-13T00:15:36.846248314+00:00 stderr F E1013 00:15:36.846238 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator-operator" should be enqueued: namespace "openshift-kube-storage-version-migrator-operator" not found 2025-10-13T00:15:36.846248314+00:00 stderr F E1013 00:15:36.846243 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator-operator" should be enqueued: namespace "openshift-kube-storage-version-migrator-operator" not found 2025-10-13T00:15:36.846267744+00:00 stderr F E1013 00:15:36.846249 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator-operator" should be enqueued: namespace "openshift-kube-storage-version-migrator-operator" not found 2025-10-13T00:15:36.846267744+00:00 stderr F E1013 00:15:36.846255 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator" should be enqueued: namespace "openshift-kube-storage-version-migrator" not found 2025-10-13T00:15:36.846267744+00:00 stderr F E1013 00:15:36.846261 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator" should be enqueued: namespace "openshift-kube-storage-version-migrator" not found 2025-10-13T00:15:36.846291505+00:00 stderr F E1013 00:15:36.846272 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator" should be enqueued: namespace "openshift-kube-storage-version-migrator" not found 2025-10-13T00:15:36.846291505+00:00 stderr F E1013 00:15:36.846286 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator" should be enqueued: namespace "openshift-kube-storage-version-migrator" not found 2025-10-13T00:15:36.846307745+00:00 stderr F E1013 00:15:36.846295 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846356717+00:00 stderr F E1013 00:15:36.846308 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846356717+00:00 stderr F E1013 00:15:36.846317 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846356717+00:00 stderr F E1013 00:15:36.846350 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace 
"openshift-machine-api" not found 2025-10-13T00:15:36.846384308+00:00 stderr F E1013 00:15:36.846375 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846399938+00:00 stderr F E1013 00:15:36.846382 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846399938+00:00 stderr F E1013 00:15:36.846389 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846399938+00:00 stderr F E1013 00:15:36.846394 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846417119+00:00 stderr F E1013 00:15:36.846401 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found 2025-10-13T00:15:36.846417119+00:00 stderr F E1013 00:15:36.846406 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846433409+00:00 stderr F E1013 00:15:36.846416 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846433409+00:00 stderr F E1013 00:15:36.846423 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846494301+00:00 stderr F E1013 00:15:36.846446 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846494301+00:00 stderr F E1013 00:15:36.846481 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846511182+00:00 stderr F E1013 00:15:36.846499 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846526792+00:00 stderr F E1013 00:15:36.846509 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846542442+00:00 stderr F E1013 00:15:36.846526 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846542442+00:00 stderr F E1013 00:15:36.846536 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace 
"openshift-machine-config-operator" not found 2025-10-13T00:15:36.846558553+00:00 stderr F E1013 00:15:36.846548 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found 2025-10-13T00:15:36.846574143+00:00 stderr F E1013 00:15:36.846559 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846593214+00:00 stderr F E1013 00:15:36.846583 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846609384+00:00 stderr F E1013 00:15:36.846596 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846624965+00:00 stderr F E1013 00:15:36.846605 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846624965+00:00 stderr F E1013 00:15:36.846615 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846641285+00:00 stderr F E1013 00:15:36.846625 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846641285+00:00 stderr F E1013 00:15:36.846635 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846657556+00:00 stderr F E1013 00:15:36.846643 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found 2025-10-13T00:15:36.846657556+00:00 stderr F E1013 00:15:36.846652 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-monitoring" should be enqueued: namespace "openshift-monitoring" not found 2025-10-13T00:15:36.846679517+00:00 stderr F E1013 00:15:36.846660 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-monitoring" should be enqueued: namespace "openshift-monitoring" not found 2025-10-13T00:15:36.846679517+00:00 stderr F E1013 00:15:36.846669 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-monitoring" should be enqueued: namespace "openshift-monitoring" not found 2025-10-13T00:15:36.846695637+00:00 stderr F E1013 00:15:36.846682 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-monitoring" should be enqueued: namespace "openshift-monitoring" not found 2025-10-13T00:15:36.846711187+00:00 stderr F E1013 00:15:36.846693 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846727348+00:00 stderr F E1013 00:15:36.846708 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be 
enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846727348+00:00 stderr F E1013 00:15:36.846716 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846743578+00:00 stderr F E1013 00:15:36.846723 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846743578+00:00 stderr F E1013 00:15:36.846736 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846759819+00:00 stderr F E1013 00:15:36.846749 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846809140+00:00 stderr F E1013 00:15:36.846776 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found 2025-10-13T00:15:36.846809140+00:00 stderr F E1013 00:15:36.846787 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued: namespace "openshift-network-diagnostics" not found 2025-10-13T00:15:36.846809140+00:00 stderr F E1013 00:15:36.846795 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued: namespace "openshift-network-diagnostics" not found 2025-10-13T00:15:36.846875732+00:00 stderr F E1013 00:15:36.846834 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued: namespace "openshift-network-diagnostics" not found 2025-10-13T00:15:36.846875732+00:00 stderr F E1013 00:15:36.846861 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued: namespace "openshift-network-diagnostics" not found 2025-10-13T00:15:36.846925114+00:00 stderr F E1013 00:15:36.846894 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-10-13T00:15:36.846925114+00:00 stderr F E1013 00:15:36.846903 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-10-13T00:15:36.846925114+00:00 stderr F E1013 00:15:36.846909 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-10-13T00:15:36.846925114+00:00 stderr F E1013 00:15:36.846915 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-10-13T00:15:36.846949365+00:00 stderr F E1013 00:15:36.846922 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-10-13T00:15:36.846949365+00:00 stderr F E1013 00:15:36.846929 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-10-13T00:15:36.846949365+00:00 stderr F E1013 00:15:36.846936 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-10-13T00:15:36.847016767+00:00 stderr F E1013 00:15:36.846978 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-10-13T00:15:36.847016767+00:00 stderr F E1013 00:15:36.847001 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-10-13T00:15:36.847016767+00:00 stderr F E1013 00:15:36.847011 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-node" should be enqueued: namespace "openshift-node" not found 2025-10-13T00:15:36.847034257+00:00 stderr F E1013 00:15:36.847025 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-node" should be enqueued: namespace "openshift-node" not found 2025-10-13T00:15:36.847049738+00:00 stderr F E1013 00:15:36.847034 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-node" should be enqueued: namespace "openshift-node" not found 2025-10-13T00:15:36.847065288+00:00 stderr F E1013 00:15:36.847048 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-nutanix-infra" should be enqueued: namespace "openshift-nutanix-infra" not found 2025-10-13T00:15:36.847065288+00:00 stderr F E1013 00:15:36.847059 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-nutanix-infra" should be enqueued: namespace "openshift-nutanix-infra" not found 2025-10-13T00:15:36.847081379+00:00 stderr F E1013 00:15:36.847070 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-nutanix-infra" should be enqueued: namespace "openshift-nutanix-infra" not found 2025-10-13T00:15:36.847096979+00:00 stderr F E1013 00:15:36.847080 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found 2025-10-13T00:15:36.847096979+00:00 stderr F E1013 00:15:36.847091 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found 2025-10-13T00:15:36.847112780+00:00 stderr F E1013 00:15:36.847101 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found 2025-10-13T00:15:36.847131870+00:00 stderr F E1013 00:15:36.847120 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found 2025-10-13T00:15:36.847152671+00:00 stderr F E1013 00:15:36.847135 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-openstack-infra" should be enqueued: namespace "openshift-openstack-infra" not found 
2025-10-13T00:15:36.847200072+00:00 stderr F E1013 00:15:36.847170 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-openstack-infra" should be enqueued: namespace "openshift-openstack-infra" not found 2025-10-13T00:15:36.847259284+00:00 stderr F E1013 00:15:36.847201 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-openstack-infra" should be enqueued: namespace "openshift-openstack-infra" not found 2025-10-13T00:15:36.847275534+00:00 stderr F E1013 00:15:36.847252 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found 2025-10-13T00:15:36.847275534+00:00 stderr F E1013 00:15:36.847267 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found 2025-10-13T00:15:36.847291765+00:00 stderr F E1013 00:15:36.847279 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found 2025-10-13T00:15:36.847307695+00:00 stderr F E1013 00:15:36.847289 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found 2025-10-13T00:15:36.847307695+00:00 stderr F E1013 00:15:36.847300 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found 2025-10-13T00:15:36.847323966+00:00 stderr F E1013 00:15:36.847310 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operators" should be enqueued: namespace "openshift-operators" not found 2025-10-13T00:15:36.847376707+00:00 stderr F E1013 00:15:36.847320 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operators" should be enqueued: namespace "openshift-operators" not found 2025-10-13T00:15:36.847393748+00:00 stderr F E1013 00:15:36.847378 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operators" should be enqueued: namespace "openshift-operators" not found 2025-10-13T00:15:36.847410728+00:00 stderr F E1013 00:15:36.847393 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovirt-infra" should be enqueued: namespace "openshift-ovirt-infra" not found 2025-10-13T00:15:36.847410728+00:00 stderr F E1013 00:15:36.847404 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovirt-infra" should be enqueued: namespace "openshift-ovirt-infra" not found 2025-10-13T00:15:36.847469500+00:00 stderr F E1013 00:15:36.847433 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovirt-infra" should be enqueued: namespace "openshift-ovirt-infra" not found 2025-10-13T00:15:36.847469500+00:00 stderr F E1013 00:15:36.847452 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found 2025-10-13T00:15:36.847513962+00:00 stderr F E1013 00:15:36.847482 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found 2025-10-13T00:15:36.847536772+00:00 stderr F E1013 00:15:36.847517 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found 2025-10-13T00:15:36.847536772+00:00 stderr F E1013 00:15:36.847530 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found 2025-10-13T00:15:36.847552713+00:00 stderr F E1013 00:15:36.847541 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found 2025-10-13T00:15:36.847568263+00:00 stderr F E1013 00:15:36.847550 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found 2025-10-13T00:15:36.847568263+00:00 stderr F E1013 00:15:36.847561 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found 2025-10-13T00:15:36.847584474+00:00 stderr F E1013 00:15:36.847571 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found 2025-10-13T00:15:36.847645905+00:00 stderr F E1013 00:15:36.847595 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found 2025-10-13T00:15:36.847645905+00:00 stderr F E1013 00:15:36.847618 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found 2025-10-13T00:15:36.847645905+00:00 stderr F E1013 00:15:36.847628 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found 2025-10-13T00:15:36.847664076+00:00 stderr F E1013 00:15:36.847644 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found 2025-10-13T00:15:36.847664076+00:00 stderr F E1013 00:15:36.847657 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found 2025-10-13T00:15:36.847701447+00:00 stderr F E1013 00:15:36.847677 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-10-13T00:15:36.847701447+00:00 stderr F E1013 00:15:36.847690 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-10-13T00:15:36.847718118+00:00 stderr F E1013 00:15:36.847701 1 podsecurity_label_sync_controller.go:420] failed 
to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-10-13T00:15:36.847718118+00:00 stderr F E1013 00:15:36.847710 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-10-13T00:15:36.847737838+00:00 stderr F E1013 00:15:36.847725 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-user-workload-monitoring" should be enqueued: namespace "openshift-user-workload-monitoring" not found 2025-10-13T00:15:36.847762529+00:00 stderr F E1013 00:15:36.847751 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-user-workload-monitoring" should be enqueued: namespace "openshift-user-workload-monitoring" not found 2025-10-13T00:15:36.847762529+00:00 stderr F E1013 00:15:36.847758 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-user-workload-monitoring" should be enqueued: namespace "openshift-user-workload-monitoring" not found 2025-10-13T00:15:36.847778669+00:00 stderr F E1013 00:15:36.847764 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-vsphere-infra" should be enqueued: namespace "openshift-vsphere-infra" not found 2025-10-13T00:15:36.847778669+00:00 stderr F E1013 00:15:36.847770 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-vsphere-infra" should be enqueued: namespace "openshift-vsphere-infra" not found 2025-10-13T00:15:36.847794800+00:00 stderr F E1013 00:15:36.847777 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-vsphere-infra" should be enqueued: namespace "openshift-vsphere-infra" not found 2025-10-13T00:15:36.847794800+00:00 stderr F E1013 00:15:36.847782 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift" should be enqueued: namespace "openshift" not found 2025-10-13T00:15:36.847794800+00:00 stderr F E1013 00:15:36.847789 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift" should be enqueued: namespace "openshift" not found 2025-10-13T00:15:36.847845501+00:00 stderr F E1013 00:15:36.847813 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift" should be enqueued: namespace "openshift" not found 2025-10-13T00:15:37.033563166+00:00 stderr F I1013 00:15:37.033505 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.034443022+00:00 stderr F I1013 00:15:37.034384 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.037016249+00:00 stderr F I1013 00:15:37.036963 1 base_controller.go:73] Caches are synced for namespace-security-allocation-controller 2025-10-13T00:15:37.037016249+00:00 stderr F I1013 00:15:37.036991 1 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ... 
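The "failed to determine whether namespace ... should be enqueued" error above repeats for dozens of namespaces, differing only in the timestamp. If a quick summary of which namespaces it names is wanted, a tally along the following lines works over a saved copy of this container log; 0.log is only a placeholder for wherever the log has been stored, not a path confirmed by this artifact:
# count how often each namespace appears in the repeated enqueue errors (0.log is a placeholder path)
grep -o 'namespace "[^"]*" not found' 0.log | sort | uniq -c | sort -rn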
2025-10-13T00:15:37.037071991+00:00 stderr F I1013 00:15:37.037041 1 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations 2025-10-13T00:15:37.137461369+00:00 stderr F I1013 00:15:37.137401 1 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller 2025-10-13T00:15:37.137461369+00:00 stderr F I1013 00:15:37.137425 1 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ... 2025-10-13T00:15:37.137800449+00:00 stderr F I1013 00:15:37.137779 1 shared_informer.go:318] Caches are synced for privileged-namespaces-psa-label-syncer 2025-10-13T00:15:37.225345212+00:00 stderr F I1013 00:15:37.225262 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.232671461+00:00 stderr F I1013 00:15:37.232632 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.425912732+00:00 stderr F I1013 00:15:37.425833 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.433379665+00:00 stderr F I1013 00:15:37.433297 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.485800066+00:00 stderr F I1013 00:15:37.485727 1 shared_informer.go:318] Caches are synced for resource quota 2025-10-13T00:15:37.633625475+00:00 stderr F I1013 00:15:37.633556 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.823765172+00:00 stderr F I1013 00:15:37.823425 1 request.go:697] Waited for 2.192330727s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/endpoints?limit=500&resourceVersion=0 2025-10-13T00:15:37.828395961+00:00 stderr F I1013 00:15:37.828350 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.832968958+00:00 stderr F I1013 00:15:37.832916 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:37.880427200+00:00 stderr F I1013 00:15:37.880292 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:38.030774154+00:00 stderr F I1013 00:15:38.030693 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:38.032648950+00:00 stderr F I1013 00:15:38.032609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:38.235765206+00:00 stderr F I1013 00:15:38.235705 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:38.433477600+00:00 stderr F I1013 00:15:38.433386 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:38.451956284+00:00 stderr F I1013 00:15:38.451843 1 namespace_scc_allocation_controller.go:116] Repair complete 2025-10-13T00:15:38.635533714+00:00 stderr F I1013 00:15:38.635458 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:38.831459395+00:00 stderr F I1013 00:15:38.831383 1 request.go:697] Waited for 3.199737971s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/network.operator.openshift.io/v1/operatorpkis?limit=500&resourceVersion=0 2025-10-13T00:15:38.833516826+00:00 stderr F I1013 00:15:38.833467 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:39.034275112+00:00 stderr F I1013 00:15:39.034218 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:39.236550961+00:00 stderr F I1013 00:15:39.235927 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:39.434084020+00:00 stderr F I1013 00:15:39.434022 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:39.635632849+00:00 stderr F I1013 00:15:39.635524 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:39.831647822+00:00 stderr F I1013 00:15:39.831562 1 request.go:697] Waited for 4.199873016s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/machine.openshift.io/v1beta1/machinehealthchecks?limit=500&resourceVersion=0 2025-10-13T00:15:39.834982252+00:00 stderr F I1013 00:15:39.834920 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:40.034785998+00:00 stderr F I1013 00:15:40.034688 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:40.235837382+00:00 stderr F W1013 00:15:40.235758 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-10-13T00:15:40.235906634+00:00 stderr F I1013 00:15:40.235881 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:40.240345337+00:00 stderr F W1013 00:15:40.240277 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-10-13T00:15:40.435915727+00:00 stderr F I1013 00:15:40.435848 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:40.633828626+00:00 stderr F I1013 00:15:40.633707 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:40.833712825+00:00 stderr F I1013 00:15:40.833620 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:41.031493151+00:00 stderr F I1013 00:15:41.031409 1 request.go:697] Waited for 5.399655904s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0 2025-10-13T00:15:41.038083689+00:00 stderr F I1013 00:15:41.038001 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:41.235279507+00:00 stderr F I1013 00:15:41.235175 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:41.434899498+00:00 stderr F I1013 00:15:41.434798 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:41.634316173+00:00 stderr F I1013 00:15:41.634234 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:41.834525392+00:00 stderr F I1013 00:15:41.834378 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:42.032018309+00:00 stderr F I1013 00:15:42.031942 1 request.go:697] Waited for 6.400056868s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0 2025-10-13T00:15:42.034935066+00:00 stderr F I1013 00:15:42.034902 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:42.233462605+00:00 stderr F I1013 00:15:42.233415 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:42.433363864+00:00 stderr F I1013 00:15:42.433249 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:15:42.444620921+00:00 stderr F I1013 00:15:42.444574 1 reconciliation_controller.go:224] synced cluster resource quota controller 2025-10-13T00:15:42.510955479+00:00 stderr F I1013 00:15:42.510881 1 reconciliation_controller.go:149] Caches are synced 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936015 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.935965239 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936051 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.936039071 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936071 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.936057751 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936085 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.936075052 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936101 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936091102 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936120 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936106223 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936134 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936124013 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936148 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936138604 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936168 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.936152964 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936185 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.936175715 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936201 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.936189975 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936216 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936205745 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936553 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:21:11.936537894 +0000 UTC))" 2025-10-13T00:21:11.939456653+00:00 stderr F I1013 00:21:11.936822 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314350\" (2025-10-12 23:12:30 +0000 UTC to 2026-10-12 23:12:30 +0000 UTC (now=2025-10-13 00:21:11.936810422 +0000 UTC))" 2025-10-13T00:21:13.831303669+00:00 stderr F I1013 00:21:13.831225 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "openstack" NS 2025-10-13T00:21:13.834688100+00:00 stderr F I1013 00:21:13.832144 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openstack namespace 2025-10-13T00:21:14.524050848+00:00 stderr F I1013 00:21:14.523801 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openstack-operators namespace 2025-10-13T00:22:12.582099139+00:00 stderr F I1013 00:22:12.582014 1 reconciliation_controller.go:171] error occurred GetQuotableResources err=failed to discover resources: Get "https://api-int.crc.testing:6443/api": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:12.582099139+00:00 stderr F E1013 00:22:12.582062 1 reconciliation_controller.go:172] failed to discover resources: Get "https://api-int.crc.testing:6443/api": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:26.008982973+00:00 stderr F E1013 00:22:26.008885 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch 
*v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=42712&timeout=9m30s&timeoutSeconds=570&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:26.397786558+00:00 stderr F E1013 00:22:26.397714 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=42722&timeout=8m33s&timeoutSeconds=513&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:27.073803688+00:00 stderr F E1013 00:22:27.073714 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=42626&timeout=6m28s&timeoutSeconds=388&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:27.765098397+00:00 stderr F E1013 00:22:27.764989 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=42713&timeout=9m19s&timeoutSeconds=559&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:30.253058914+00:00 stderr F E1013 00:22:30.252964 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=42631&timeout=9m32s&timeoutSeconds=572&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:30.597060705+00:00 stderr F E1013 00:22:30.597003 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=42635&timeout=6m13s&timeoutSeconds=373&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-10-13T00:22:30.647891402+00:00 stderr F E1013 00:22:30.647848 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: 
\"/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=42708&timeout=6m23s&timeoutSeconds=383&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:40.091269413+00:00 stderr F W1013 00:22:40.091210 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?resourceVersion=42722\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:40.091269413+00:00 stderr F E1013 00:22:40.091252 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?resourceVersion=42722\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:41.508071827+00:00 stderr F I1013 00:22:41.506527 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:42.241446216+00:00 stderr F W1013 00:22:42.241343 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?resourceVersion=42712\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:42.241446216+00:00 stderr F E1013 00:22:42.241385 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?resourceVersion=42712\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:42.484887277+00:00 stderr F I1013 00:22:42.484767 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:42.811850695+00:00 stderr F I1013 00:22:42.811727 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:42.988797493+00:00 stderr F I1013 00:22:42.988689 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:43.071101196+00:00 stderr F I1013 00:22:43.070982 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:43.189269248+00:00 stderr F I1013 00:22:43.189161 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:43.408549136+00:00 
stderr F I1013 00:22:43.408445 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:43.639059547+00:00 stderr F I1013 00:22:43.638933 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:43.676703985+00:00 stderr F I1013 00:22:43.676563 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for service-telemetry namespace 2025-10-13T00:22:43.677409245+00:00 stderr F I1013 00:22:43.677311 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:44.556059970+00:00 stderr F I1013 00:22:44.555944 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.043410945+00:00 stderr F I1013 00:22:45.043299 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.045954896+00:00 stderr F I1013 00:22:45.045890 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.287411381+00:00 stderr F I1013 00:22:45.287250 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.676899590+00:00 stderr F I1013 00:22:45.676115 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.685928012+00:00 stderr F I1013 00:22:45.685866 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.803390284+00:00 stderr F I1013 00:22:45.803228 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.837023261+00:00 stderr F I1013 00:22:45.836965 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.865161144+00:00 stderr F I1013 00:22:45.865096 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.866676447+00:00 stderr F I1013 00:22:45.866625 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:45.871849991+00:00 stderr F I1013 00:22:45.871804 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.130691431+00:00 stderr F I1013 00:22:46.130584 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.580312715+00:00 stderr F I1013 00:22:46.580243 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.861140448+00:00 stderr F I1013 00:22:46.861063 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.871933778+00:00 stderr F 
I1013 00:22:46.871845 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.906779069+00:00 stderr F I1013 00:22:46.906714 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.908081405+00:00 stderr F I1013 00:22:46.908030 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-10-13T00:22:46.908107226+00:00 stderr F I1013 00:22:46.908088 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-10-13T00:22:46.908131227+00:00 stderr F I1013 00:22:46.908110 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.909217687+00:00 stderr F I1013 00:22:46.909180 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.910622526+00:00 stderr F I1013 00:22:46.910578 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.911231123+00:00 stderr F I1013 00:22:46.911201 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.912199600+00:00 stderr F I1013 00:22:46.912161 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.912993012+00:00 stderr F I1013 00:22:46.912948 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.913789244+00:00 stderr F I1013 00:22:46.913746 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.914496754+00:00 stderr F I1013 00:22:46.914461 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:46.915026209+00:00 stderr F I1013 00:22:46.914990 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-10-13T00:22:47.064077411+00:00 stderr F I1013 00:22:47.063988 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:47.106040529+00:00 stderr F W1013 00:22:47.105939 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?resourceVersion=42708\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:47.106040529+00:00 stderr F E1013 00:22:47.106004 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?resourceVersion=42708\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:47.147962757+00:00 stderr F I1013 00:22:47.147260 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:47.282094863+00:00 stderr F I1013 00:22:47.281997 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:47.285179399+00:00 stderr F I1013 00:22:47.285105 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:47.668972200+00:00 stderr F I1013 00:22:47.668878 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:47.815405809+00:00 stderr F I1013 00:22:47.815301 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:48.054136929+00:00 stderr F I1013 00:22:48.054006 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:48.364509014+00:00 stderr F I1013 00:22:48.364388 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:48.664900231+00:00 stderr F I1013 00:22:48.664794 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:48.699119084+00:00 stderr F I1013 00:22:48.699018 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:48.714767930+00:00 stderr F I1013 00:22:48.714665 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.005828908+00:00 stderr F I1013 00:22:49.005674 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.019287583+00:00 stderr F I1013 00:22:49.019199 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.373009766+00:00 stderr F I1013 00:22:49.372879 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.389684870+00:00 stderr F I1013 00:22:49.389497 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.389889396+00:00 stderr F I1013 00:22:49.389821 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-10-13T00:22:49.389889396+00:00 stderr F I1013 00:22:49.389879 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-10-13T00:22:49.440075134+00:00 stderr F I1013 00:22:49.439903 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-10-13T00:22:49.552677311+00:00 stderr F I1013 00:22:49.552560 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.681235162+00:00 stderr F I1013 00:22:49.681128 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.815450400+00:00 stderr F I1013 00:22:49.815327 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.828120693+00:00 stderr F I1013 00:22:49.828042 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:49.867453819+00:00 stderr F I1013 00:22:49.867364 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:50.101082547+00:00 stderr F I1013 00:22:50.101006 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:50.172214738+00:00 stderr F I1013 00:22:50.172103 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:50.173136144+00:00 stderr F I1013 00:22:50.173089 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:50.259370376+00:00 stderr F I1013 00:22:50.259301 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:50.305887431+00:00 stderr F W1013 00:22:50.305807 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?resourceVersion=42713\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:50.305887431+00:00 stderr F E1013 00:22:50.305846 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?resourceVersion=42713\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:50.864394609+00:00 stderr F I1013 00:22:50.863537 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:51.006782165+00:00 stderr F I1013 00:22:51.006230 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:51.017556855+00:00 stderr F I1013 00:22:51.017488 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:51.340987105+00:00 stderr F I1013 00:22:51.340905 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:51.539462273+00:00 
stderr F W1013 00:22:51.538243 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?resourceVersion=42626\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:51.539462273+00:00 stderr F E1013 00:22:51.538276 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?resourceVersion=42626\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:51.703483222+00:00 stderr F I1013 00:22:51.703410 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:51.737805898+00:00 stderr F I1013 00:22:51.737759 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:51.863505469+00:00 stderr F I1013 00:22:51.863439 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:52.238256857+00:00 stderr F W1013 00:22:52.238205 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?resourceVersion=42635\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-10-13T00:22:52.238256857+00:00 stderr F E1013 00:22:52.238241 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?resourceVersion=42635\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-10-13T00:22:52.256294660+00:00 stderr F I1013 00:22:52.256217 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:52.297960480+00:00 stderr F I1013 00:22:52.297903 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:52.380517620+00:00 stderr F I1013 00:22:52.380423 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:52.844590017+00:00 stderr F I1013 00:22:52.844522 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:52.957835922+00:00 stderr F I1013 00:22:52.957764 1 reflector.go:351] Caches populated for *v1.CronJob from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:53.007085763+00:00 stderr F I1013 00:22:53.007034 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:53.569192921+00:00 stderr F I1013 00:22:53.568736 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:53.580929748+00:00 stderr F I1013 00:22:53.580891 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:54.140794753+00:00 stderr F W1013 00:22:54.140742 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?resourceVersion=42631\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:54.140794753+00:00 stderr F E1013 00:22:54.140772 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?resourceVersion=42631\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-10-13T00:22:54.148027815+00:00 stderr F I1013 00:22:54.147983 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:54.407156633+00:00 stderr F I1013 00:22:54.407102 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:55.220972112+00:00 stderr F I1013 00:22:55.220900 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:55.542966802+00:00 stderr F I1013 00:22:55.542869 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:55.778555234+00:00 stderr F I1013 00:22:55.778479 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:56.163078104+00:00 stderr F I1013 00:22:56.163010 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:56.520734587+00:00 stderr F I1013 00:22:56.520673 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:56.540300452+00:00 stderr F I1013 00:22:56.540238 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:56.542241556+00:00 stderr F I1013 00:22:56.542217 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:56.639381782+00:00 stderr F I1013 00:22:56.639310 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
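The list/watch failures above all carry the same underlying error: the request to an OpenShift API path fails because the SubjectAccessReview POST to 10.217.4.1:443 is refused. A rough way to confirm that pattern and see which API paths are affected is sketched below; again, 0.log stands in for wherever this container log has been saved:
# total connection-refused failures, then the OpenShift API paths they were reported against (0.log is a placeholder path)
grep -c 'connect: connection refused' 0.log
grep -o '/apis/[a-z.]*\.openshift\.io/v1/[a-z]*' 0.log | sort | uniq -c | sort -rn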
2025-10-13T00:23:24.611810574+00:00 stderr F I1013 00:23:24.611322 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:25.837750153+00:00 stderr F I1013 00:23:25.837706 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:27.327967833+00:00 stderr F I1013 00:23:27.327870 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:27.394228239+00:00 stderr F I1013 00:23:27.394149 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:29.401263255+00:00 stderr F I1013 00:23:29.401200 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:31.091743784+00:00 stderr F I1013 00:23:31.091673 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:36.795804501+00:00 stderr F I1013 00:23:36.795730 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS
2025-10-13T00:23:36.803826114+00:00 stderr F I1013 00:23:36.803763 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openshift-must-gather-xwq75 namespace
2025-10-13T00:23:38.850758382+00:00 stderr F W1013 00:23:38.850719 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-10-13T00:23:38.850787382+00:00 stderr F I1013 00:23:38.850769 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-10-13T00:23:38.853939240+00:00 stderr F W1013 00:23:38.853917 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul
././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000264042715073043233033075 0ustar zuulzuul
2025-08-13T20:08:12.802058725+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done'
2025-08-13T20:08:12.814964715+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')'
2025-08-13T20:08:12.823497100+00:00 stderr F + '[' -n '' ']'
2025-08-13T20:08:12.826307440+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']'
2025-08-13T20:08:12.826307440+00:00 stderr F + echo 'Copying system trust bundle' 2025-08-13T20:08:12.826307440+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2025-08-13T20:08:12.826418804+00:00 stdout F Copying system trust bundle 2025-08-13T20:08:12.833987271+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2025-08-13T20:08:12.834642410+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false 
--feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.217037 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218038 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218143 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218936 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219007 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219115 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219181 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219238 1 feature_gate.go:227] unrecognized feature gate: 
ChunkSizeMiB 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219374 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-08-13T20:08:13.219493834+00:00 stderr F W0813 20:08:13.219435 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-08-13T20:08:13.219503624+00:00 stderr F W0813 20:08:13.219496 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-08-13T20:08:13.219624757+00:00 stderr F W0813 20:08:13.219574 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-08-13T20:08:13.219694409+00:00 stderr F W0813 20:08:13.219655 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-08-13T20:08:13.219957897+00:00 stderr F W0813 20:08:13.219834 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-08-13T20:08:13.219980567+00:00 stderr F W0813 20:08:13.219954 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-08-13T20:08:13.220114111+00:00 stderr F W0813 20:08:13.220028 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-08-13T20:08:13.220130522+00:00 stderr F W0813 20:08:13.220122 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-08-13T20:08:13.220338958+00:00 stderr F W0813 20:08:13.220244 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-08-13T20:08:13.220510753+00:00 stderr F W0813 20:08:13.220445 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-08-13T20:08:13.220660927+00:00 stderr F W0813 20:08:13.220583 1 feature_gate.go:227] unrecognized feature gate: Example 2025-08-13T20:08:13.220759810+00:00 stderr F W0813 20:08:13.220672 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-08-13T20:08:13.220759810+00:00 stderr F W0813 20:08:13.220746 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-08-13T20:08:13.220980366+00:00 stderr F W0813 20:08:13.220930 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-08-13T20:08:13.221078669+00:00 stderr F W0813 20:08:13.221034 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-08-13T20:08:13.221226703+00:00 stderr F W0813 20:08:13.221167 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-08-13T20:08:13.221321016+00:00 stderr F W0813 20:08:13.221259 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-08-13T20:08:13.221435869+00:00 stderr F W0813 20:08:13.221357 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-08-13T20:08:13.221449420+00:00 stderr F W0813 20:08:13.221439 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-08-13T20:08:13.221647945+00:00 stderr F W0813 20:08:13.221535 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-08-13T20:08:13.221647945+00:00 stderr F W0813 20:08:13.221628 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-08-13T20:08:13.221755538+00:00 stderr F W0813 20:08:13.221707 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-08-13T20:08:13.222715416+00:00 stderr F W0813 20:08:13.222675 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-08-13T20:08:13.222878071+00:00 stderr F W0813 20:08:13.222845 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-08-13T20:08:13.223023605+00:00 stderr F W0813 20:08:13.222982 1 feature_gate.go:227] unrecognized feature 
gate: InstallAlternateInfrastructureAWS 2025-08-13T20:08:13.223200670+00:00 stderr F W0813 20:08:13.223143 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-08-13T20:08:13.223277612+00:00 stderr F W0813 20:08:13.223248 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-08-13T20:08:13.223453957+00:00 stderr F W0813 20:08:13.223362 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-08-13T20:08:13.223503038+00:00 stderr F W0813 20:08:13.223473 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-08-13T20:08:13.223605581+00:00 stderr F W0813 20:08:13.223570 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-08-13T20:08:13.223913790+00:00 stderr F W0813 20:08:13.223758 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-08-13T20:08:13.224109056+00:00 stderr F W0813 20:08:13.223957 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-08-13T20:08:13.224109056+00:00 stderr F W0813 20:08:13.224079 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-08-13T20:08:13.224210499+00:00 stderr F W0813 20:08:13.224173 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-08-13T20:08:13.224326502+00:00 stderr F W0813 20:08:13.224289 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-08-13T20:08:13.224450766+00:00 stderr F W0813 20:08:13.224413 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-08-13T20:08:13.224622561+00:00 stderr F W0813 20:08:13.224535 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-08-13T20:08:13.224868638+00:00 stderr F W0813 20:08:13.224731 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-08-13T20:08:13.224955920+00:00 stderr F W0813 20:08:13.224940 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-08-13T20:08:13.225143595+00:00 stderr F W0813 20:08:13.225056 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-08-13T20:08:13.225218398+00:00 stderr F W0813 20:08:13.225163 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-08-13T20:08:13.225303580+00:00 stderr F W0813 20:08:13.225268 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-08-13T20:08:13.226198776+00:00 stderr F W0813 20:08:13.225924 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-08-13T20:08:13.226198776+00:00 stderr F W0813 20:08:13.226080 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-08-13T20:08:13.226355810+00:00 stderr F W0813 20:08:13.226243 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.231603 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.231843 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232047 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232157 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232381 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 
2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232672 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232693 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232707 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232716 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232721 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232727 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232735 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232744 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232749 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232753 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232827 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232841 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232846 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232854 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232865 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232869 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232873 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232882 1 flags.go:64] FLAG: --cloud-config="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232915 1 flags.go:64] FLAG: --cloud-provider="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232920 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232931 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232935 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232942 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232947 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232951 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.232961 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232965 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232969 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232973 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232977 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232981 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232985 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232988 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232994 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233000 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233004 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233008 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233013 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233018 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233022 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233026 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233029 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233033 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233062 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233067 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233071 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233075 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233079 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233082 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233086 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233090 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233094 1 flags.go:64] FLAG: --contention-profiling="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233098 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-08-13T20:08:13.234484243+00:00 stderr 
F I0813 20:08:13.233106 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233137 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233143 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233155 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233160 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233165 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233169 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233174 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233196 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233202 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233206 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233253 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233259 1 flags.go:64] FLAG: --help="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233264 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233268 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233272 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233276 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233279 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233284 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233294 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233298 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233303 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233308 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233312 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-08-13T20:08:13.234484243+00:00 stderr P I0813 20:08:13.233319 1 
flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps 2025-08-13T20:08:13.234855374+00:00 stderr F /controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233326 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233334 1 flags.go:64] FLAG: --leader-elect="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233339 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233343 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233350 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233354 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233358 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233362 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233366 1 flags.go:64] FLAG: --leader-migration-config="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233370 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233374 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233378 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233386 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233390 1 flags.go:64] FLAG: --logging-format="text" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233396 1 flags.go:64] FLAG: --master="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233400 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233404 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233408 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233412 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233420 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233425 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233429 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233433 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233436 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233442 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233447 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233451 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233454 1 
flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233458 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233462 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233467 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233471 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233474 1 flags.go:64] FLAG: --profiling="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233478 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233482 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233488 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233492 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233501 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233507 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233511 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233515 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233524 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233529 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233539 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233550 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233564 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233568 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233573 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233577 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233581 1 flags.go:64] FLAG: --secure-port="10257" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233586 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233590 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233594 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233598 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233602 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233610 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233635 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233723 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233729 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233737 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233743 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233748 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233754 1 flags.go:64] FLAG: --v="2" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233761 1 flags.go:64] FLAG: --version="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233769 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233833 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233840 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-08-13T20:08:13.247379713+00:00 stderr F I0813 20:08:13.246721 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:13.820033282+00:00 stderr F I0813 20:08:13.816639 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.820033282+00:00 stderr F I0813 20:08:13.816752 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:08:13.827928258+00:00 stderr F I0813 20:08:13.827555 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-08-13T20:08:13.827928258+00:00 stderr F I0813 20:08:13.827621 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-08-13T20:08:13.839544281+00:00 stderr F I0813 20:08:13.839462 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 
20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:13.839409377 +0000 UTC))" 2025-08-13T20:08:13.839544281+00:00 stderr F I0813 20:08:13.839535 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:13.83951253 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839555 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:13.839542801 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839572 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:13.839560951 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839587 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839576672 +0000 UTC))" 2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839604 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839593662 +0000 UTC))" 2025-08-13T20:08:13.839631154+00:00 stderr F I0813 20:08:13.839620 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839609253 +0000 UTC))" 2025-08-13T20:08:13.839643864+00:00 stderr F I0813 20:08:13.839635 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:13.839624983 +0000 UTC))" 2025-08-13T20:08:13.839692735+00:00 stderr F I0813 20:08:13.839651 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839640424 +0000 UTC))" 2025-08-13T20:08:13.839711196+00:00 stderr F I0813 20:08:13.839702 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839680665 +0000 UTC))" 2025-08-13T20:08:13.840731445+00:00 stderr F I0813 20:08:13.840547 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:13.840525139 +0000 UTC))" 2025-08-13T20:08:13.841108246+00:00 stderr F I0813 20:08:13.841055 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115693\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:13.841028684 +0000 UTC))" 2025-08-13T20:08:13.841128376+00:00 stderr F I0813 20:08:13.841103 1 secure_serving.go:213] Serving securely on [::]:10257 2025-08-13T20:08:13.841605360+00:00 stderr F I0813 20:08:13.841538 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:08:13.841704753+00:00 stderr F I0813 20:08:13.841650 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:13.841704753+00:00 stderr F I0813 20:08:13.841657 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:08:13.842010682+00:00 stderr F I0813 20:08:13.841932 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.844290897+00:00 stderr F I0813 20:08:13.844225 1 
leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... 2025-08-13T20:08:24.708357459+00:00 stderr F E0813 20:08:24.708183 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:28.619865836+00:00 stderr F E0813 20:08:28.618751 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:33.289069024+00:00 stderr F E0813 20:08:33.288206 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:37.779061605+00:00 stderr F E0813 20:08:37.778956 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:47.305208117+00:00 stderr F I0813 20:08:47.304304 1 leaderelection.go:260] successfully acquired lease kube-system/kube-controller-manager 2025-08-13T20:08:47.307865224+00:00 stderr F I0813 20:08:47.306050 1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="crc_02346a15-e302-4734-adfc-ae3167a2c006 became leader" 2025-08-13T20:08:47.314085082+00:00 stderr F I0813 20:08:47.313970 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-token-controller" 2025-08-13T20:08:47.319254970+00:00 stderr F I0813 20:08:47.319158 1 controllermanager.go:787] "Started controller" controller="serviceaccount-token-controller" 2025-08-13T20:08:47.319254970+00:00 stderr F I0813 20:08:47.319196 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-protection-controller" 2025-08-13T20:08:47.319562919+00:00 stderr F I0813 20:08:47.319326 1 shared_informer.go:311] Waiting for caches to sync for tokens 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.330725 1 controllermanager.go:787] "Started controller" controller="persistentvolume-protection-controller" 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.330769 1 controllermanager.go:756] "Starting controller" controller="cronjob-controller" 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.331075 1 pv_protection_controller.go:78] "Starting PV protection controller" 2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.331085 1 shared_informer.go:311] Waiting for caches to sync for PV protection 2025-08-13T20:08:47.345728149+00:00 stderr F I0813 20:08:47.345645 1 controllermanager.go:787] "Started controller" controller="cronjob-controller" 2025-08-13T20:08:47.345728149+00:00 stderr F I0813 20:08:47.345692 1 controllermanager.go:756] "Starting controller" controller="node-lifecycle-controller" 2025-08-13T20:08:47.348831908+00:00 stderr F I0813 20:08:47.346065 1 
cronjob_controllerv2.go:139] "Starting cronjob controller v2" 2025-08-13T20:08:47.348831908+00:00 stderr F I0813 20:08:47.346100 1 shared_informer.go:311] Waiting for caches to sync for cronjob 2025-08-13T20:08:47.351836084+00:00 stderr F I0813 20:08:47.349161 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:47.359707530+00:00 stderr F I0813 20:08:47.359642 1 node_lifecycle_controller.go:425] "Controller will reconcile labels" 2025-08-13T20:08:47.362833019+00:00 stderr F I0813 20:08:47.360031 1 controllermanager.go:787] "Started controller" controller="node-lifecycle-controller" 2025-08-13T20:08:47.362833019+00:00 stderr F I0813 20:08:47.360095 1 controllermanager.go:756] "Starting controller" controller="service-lb-controller" 2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.380943 1 node_lifecycle_controller.go:459] "Sending events to api server" 2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.380996 1 node_lifecycle_controller.go:470] "Starting node controller" 2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.381006 1 shared_informer.go:311] Waiting for caches to sync for taint 2025-08-13T20:08:47.400273773+00:00 stderr F E0813 20:08:47.400179 1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" 2025-08-13T20:08:47.400273773+00:00 stderr F I0813 20:08:47.400230 1 controllermanager.go:765] "Warning: skipping controller" controller="service-lb-controller" 2025-08-13T20:08:47.400273773+00:00 stderr F I0813 20:08:47.400244 1 controllermanager.go:756] "Starting controller" controller="persistentvolumeclaim-protection-controller" 2025-08-13T20:08:47.409490347+00:00 stderr F I0813 20:08:47.409417 1 controllermanager.go:787] "Started controller" controller="persistentvolumeclaim-protection-controller" 2025-08-13T20:08:47.409490347+00:00 stderr F I0813 20:08:47.409466 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-expander-controller" 2025-08-13T20:08:47.412744711+00:00 stderr F I0813 20:08:47.409730 1 pvc_protection_controller.go:102] "Starting PVC protection controller" 2025-08-13T20:08:47.412744711+00:00 stderr F I0813 20:08:47.409761 1 shared_informer.go:311] Waiting for caches to sync for PVC protection 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413923 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413966 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413977 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413985 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414091 1 controllermanager.go:787] "Started controller" controller="persistentvolume-expander-controller" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414103 1 controllermanager.go:756] "Starting controller" controller="namespace-controller" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414324 1 expand_controller.go:328] "Starting expand controller" 2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414337 1 shared_informer.go:311] Waiting for caches to sync for expand 
2025-08-13T20:08:47.484767685+00:00 stderr F I0813 20:08:47.484662 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.511982 1 controllermanager.go:787] "Started controller" controller="namespace-controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512070 1 controllermanager.go:750] "Warning: controller is disabled" controller="bootstrap-signer-controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512082 1 controllermanager.go:756] "Starting controller" controller="cloud-node-lifecycle-controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512320 1 namespace_controller.go:197] "Starting namespace controller" 2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512331 1 shared_informer.go:311] Waiting for caches to sync for namespace 2025-08-13T20:08:47.518661417+00:00 stderr F E0813 20:08:47.518410 1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518457 1 controllermanager.go:765] "Warning: skipping controller" controller="cloud-node-lifecycle-controller" 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518501 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"] 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518510 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"] 2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518523 1 controllermanager.go:756] "Starting controller" controller="taint-eviction-controller" 2025-08-13T20:08:47.521696374+00:00 stderr F I0813 20:08:47.521606 1 shared_informer.go:318] Caches are synced for tokens 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528370 1 controllermanager.go:787] "Started controller" controller="taint-eviction-controller" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528423 1 controllermanager.go:756] "Starting controller" controller="pod-garbage-collector-controller" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528595 1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528649 1 taint_eviction.go:291] "Sending events to api server" 2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528669 1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller 2025-08-13T20:08:47.533961206+00:00 stderr F I0813 20:08:47.533595 1 controllermanager.go:787] "Started controller" controller="pod-garbage-collector-controller" 2025-08-13T20:08:47.533961206+00:00 stderr F I0813 20:08:47.533691 1 controllermanager.go:756] "Starting controller" controller="job-controller" 2025-08-13T20:08:47.542092599+00:00 stderr F I0813 20:08:47.538074 1 gc_controller.go:101] "Starting GC controller" 2025-08-13T20:08:47.542092599+00:00 stderr F I0813 20:08:47.538152 1 shared_informer.go:311] Waiting for caches to sync for GC 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543498 1 controllermanager.go:787] "Started controller" controller="job-controller" 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543543 1 controllermanager.go:756] "Starting controller" 
controller="deployment-controller" 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543719 1 job_controller.go:224] "Starting job controller" 2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543748 1 shared_informer.go:311] Waiting for caches to sync for job 2025-08-13T20:08:47.556115701+00:00 stderr F I0813 20:08:47.555921 1 controllermanager.go:787] "Started controller" controller="deployment-controller" 2025-08-13T20:08:47.559824737+00:00 stderr F I0813 20:08:47.559712 1 deployment_controller.go:168] "Starting controller" controller="deployment" 2025-08-13T20:08:47.559824737+00:00 stderr F I0813 20:08:47.559810 1 shared_informer.go:311] Waiting for caches to sync for deployment 2025-08-13T20:08:47.565910802+00:00 stderr F I0813 20:08:47.555967 1 controllermanager.go:756] "Starting controller" controller="service-ca-certificate-publisher-controller" 2025-08-13T20:08:47.578665598+00:00 stderr F I0813 20:08:47.578540 1 controllermanager.go:787] "Started controller" controller="service-ca-certificate-publisher-controller" 2025-08-13T20:08:47.578665598+00:00 stderr F I0813 20:08:47.578585 1 controllermanager.go:756] "Starting controller" controller="ephemeral-volume-controller" 2025-08-13T20:08:47.578839603+00:00 stderr F I0813 20:08:47.578749 1 publisher.go:80] Starting service CA certificate configmap publisher 2025-08-13T20:08:47.578839603+00:00 stderr F I0813 20:08:47.578821 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593446 1 controllermanager.go:787] "Started controller" controller="ephemeral-volume-controller" 2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593506 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"] 2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593563 1 controllermanager.go:756] "Starting controller" controller="resourcequota-controller" 2025-08-13T20:08:47.594115581+00:00 stderr F I0813 20:08:47.593956 1 controller.go:169] "Starting ephemeral volume controller" 2025-08-13T20:08:47.594115581+00:00 stderr F I0813 20:08:47.593990 1 shared_informer.go:311] Waiting for caches to sync for ephemeral 2025-08-13T20:08:47.663499480+00:00 stderr F I0813 20:08:47.663405 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2025-08-13T20:08:47.663592503+00:00 stderr F I0813 20:08:47.663576 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2025-08-13T20:08:47.663757647+00:00 stderr F I0813 20:08:47.663737 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2025-08-13T20:08:47.663855670+00:00 stderr F I0813 20:08:47.663838 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2025-08-13T20:08:47.664090207+00:00 stderr F I0813 20:08:47.664064 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2025-08-13T20:08:47.664204100+00:00 stderr F I0813 20:08:47.664180 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2025-08-13T20:08:47.664274882+00:00 stderr F I0813 20:08:47.664256 1 resource_quota_monitor.go:224] "QuotaMonitor created object 
count evaluator" resource="subscriptions.operators.coreos.com" 2025-08-13T20:08:47.665563459+00:00 stderr F I0813 20:08:47.665538 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2025-08-13T20:08:47.672934471+00:00 stderr F I0813 20:08:47.665683 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-08-13T20:08:47.681052003+00:00 stderr F I0813 20:08:47.680988 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2025-08-13T20:08:47.681210638+00:00 stderr F I0813 20:08:47.681193 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-08-13T20:08:47.681275660+00:00 stderr F I0813 20:08:47.681261 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2025-08-13T20:08:47.681527937+00:00 stderr F I0813 20:08:47.681514 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2025-08-13T20:08:47.681640190+00:00 stderr F I0813 20:08:47.681622 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-08-13T20:08:47.681755253+00:00 stderr F I0813 20:08:47.681735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io" 2025-08-13T20:08:47.681918398+00:00 stderr F I0813 20:08:47.681874 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2025-08-13T20:08:47.682005181+00:00 stderr F I0813 20:08:47.681990 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2025-08-13T20:08:47.682088773+00:00 stderr F I0813 20:08:47.682074 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-08-13T20:08:47.682201246+00:00 stderr F I0813 20:08:47.682146 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2025-08-13T20:08:47.682254948+00:00 stderr F I0813 20:08:47.682241 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-08-13T20:08:47.689962349+00:00 stderr F I0813 20:08:47.682335 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 2025-08-13T20:08:47.690404331+00:00 stderr F I0813 20:08:47.690360 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2025-08-13T20:08:47.690549246+00:00 stderr F I0813 20:08:47.690490 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="imagestreams.image.openshift.io" 2025-08-13T20:08:47.690656209+00:00 stderr F I0813 20:08:47.690633 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2025-08-13T20:08:47.690863845+00:00 stderr F I0813 20:08:47.690813 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2025-08-13T20:08:47.690983548+00:00 stderr F I0813 20:08:47.690962 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2025-08-13T20:08:47.691075101+00:00 stderr F I0813 20:08:47.691034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2025-08-13T20:08:47.691162583+00:00 stderr F I0813 20:08:47.691142 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:47.691261316+00:00 stderr F I0813 20:08:47.691239 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2025-08-13T20:08:47.691387000+00:00 stderr F I0813 20:08:47.691364 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-08-13T20:08:47.691459882+00:00 stderr F I0813 20:08:47.691440 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-08-13T20:08:47.691533464+00:00 stderr F I0813 20:08:47.691516 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2025-08-13T20:08:47.691663838+00:00 stderr F I0813 20:08:47.691614 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-08-13T20:08:47.691750050+00:00 stderr F I0813 20:08:47.691730 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io" 2025-08-13T20:08:47.693725277+00:00 stderr F I0813 20:08:47.691869 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com" 2025-08-13T20:08:47.693972164+00:00 stderr F I0813 20:08:47.693888 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-08-13T20:08:47.694075027+00:00 stderr F I0813 20:08:47.694053 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2025-08-13T20:08:47.694980603+00:00 stderr F I0813 20:08:47.694954 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2025-08-13T20:08:47.695050235+00:00 stderr F I0813 20:08:47.695036 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2025-08-13T20:08:47.695113096+00:00 stderr F I0813 20:08:47.695099 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2025-08-13T20:08:47.695176268+00:00 stderr F I0813 20:08:47.695162 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2025-08-13T20:08:47.695549889+00:00 stderr F I0813 20:08:47.695534 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2025-08-13T20:08:47.695709674+00:00 stderr F I0813 20:08:47.695690 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2025-08-13T20:08:47.695829957+00:00 stderr F I0813 20:08:47.695765 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 
2025-08-13T20:08:47.695972371+00:00 stderr F I0813 20:08:47.695949 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io" 2025-08-13T20:08:47.696053863+00:00 stderr F I0813 20:08:47.696037 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2025-08-13T20:08:47.697097493+00:00 stderr F I0813 20:08:47.696100 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2025-08-13T20:08:47.697188466+00:00 stderr F I0813 20:08:47.697170 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com" 2025-08-13T20:08:47.697251688+00:00 stderr F I0813 20:08:47.697237 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2025-08-13T20:08:47.697311319+00:00 stderr F I0813 20:08:47.697298 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2025-08-13T20:08:47.697363921+00:00 stderr F I0813 20:08:47.697350 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-08-13T20:08:47.697460114+00:00 stderr F I0813 20:08:47.697441 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2025-08-13T20:08:47.697514945+00:00 stderr F I0813 20:08:47.697502 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-08-13T20:08:47.699990416+00:00 stderr F I0813 20:08:47.699964 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 2025-08-13T20:08:47.700063808+00:00 stderr F I0813 20:08:47.700049 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com" 2025-08-13T20:08:47.700124190+00:00 stderr F I0813 20:08:47.700110 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-08-13T20:08:47.700181062+00:00 stderr F I0813 20:08:47.700167 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:47.700241273+00:00 stderr F I0813 20:08:47.700227 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2025-08-13T20:08:47.700302275+00:00 stderr F I0813 20:08:47.700289 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-08-13T20:08:47.700361927+00:00 stderr F I0813 20:08:47.700348 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2025-08-13T20:08:47.700418678+00:00 stderr F I0813 20:08:47.700404 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2025-08-13T20:08:47.700488130+00:00 stderr F I0813 20:08:47.700473 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-08-13T20:08:47.700552982+00:00 stderr F I0813 20:08:47.700539 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2025-08-13T20:08:47.700609754+00:00 stderr F I0813 20:08:47.700595 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2025-08-13T20:08:47.708060808+00:00 stderr F I0813 20:08:47.708031 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io" 2025-08-13T20:08:47.708192761+00:00 stderr F I0813 20:08:47.708170 1 controllermanager.go:787] "Started controller" controller="resourcequota-controller" 2025-08-13T20:08:47.708234283+00:00 stderr F I0813 20:08:47.708222 1 controllermanager.go:756] "Starting controller" controller="statefulset-controller" 2025-08-13T20:08:47.708613683+00:00 stderr F I0813 20:08:47.708589 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-08-13T20:08:47.708658455+00:00 stderr F I0813 20:08:47.708645 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:47.708983164+00:00 stderr F I0813 20:08:47.708956 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:47.717629892+00:00 stderr F I0813 20:08:47.717537 1 controllermanager.go:787] "Started controller" controller="statefulset-controller" 2025-08-13T20:08:47.717629892+00:00 stderr F I0813 20:08:47.717590 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-approving-controller" 2025-08-13T20:08:47.717967062+00:00 stderr F I0813 20:08:47.717887 1 stateful_set.go:161] "Starting stateful set controller" 2025-08-13T20:08:47.717967062+00:00 stderr F I0813 20:08:47.717946 1 shared_informer.go:311] Waiting for caches to sync for stateful set 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725686 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-approving-controller" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725735 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-attach-detach-controller" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725962 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, 
Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.726044 1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.726061 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving 2025-08-13T20:08:47.731523350+00:00 stderr F W0813 20:08:47.731379 1 probe.go:268] Flexvolume plugin directory at /etc/kubernetes/kubelet-plugins/volume/exec does not exist. Recreating. 
2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733142 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733180 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733191 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733199 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733236 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2025-08-13T20:08:47.733397194+00:00 stderr F I0813 20:08:47.733310 1 controllermanager.go:787] "Started controller" controller="persistentvolume-attach-detach-controller" 2025-08-13T20:08:47.733397194+00:00 stderr F I0813 20:08:47.733323 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-controller" 2025-08-13T20:08:47.734055003+00:00 stderr F I0813 20:08:47.733559 1 attach_detach_controller.go:337] "Starting attach detach controller" 2025-08-13T20:08:47.734055003+00:00 stderr F I0813 20:08:47.733597 1 shared_informer.go:311] Waiting for caches to sync for attach detach 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742516 1 controllermanager.go:787] "Started controller" controller="serviceaccount-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742608 1 controllermanager.go:756] "Starting controller" controller="node-ipam-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742622 1 controllermanager.go:765] "Warning: skipping controller" controller="node-ipam-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742629 1 controllermanager.go:756] "Starting controller" controller="root-ca-certificate-publisher-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.743502 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.743515 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748654 1 controllermanager.go:787] "Started controller" controller="root-ca-certificate-publisher-controller" 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748695 1 controllermanager.go:750] "Warning: controller is disabled" controller="token-cleaner-controller" 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748706 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-binder-controller" 2025-08-13T20:08:47.751055660+00:00 stderr F I0813 20:08:47.750996 1 publisher.go:102] "Starting root CA cert publisher controller" 2025-08-13T20:08:47.751055660+00:00 stderr F I0813 20:08:47.751038 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769926 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769963 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769979 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769987 1 plugins.go:642] "Loaded volume plugin" 
pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769994 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770003 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770016 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770071 1 controllermanager.go:787] "Started controller" controller="persistentvolume-binder-controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770094 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"] 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770106 1 controllermanager.go:756] "Starting controller" controller="endpoints-controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770573 1 pv_controller_base.go:319] "Starting persistent volume controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770661 1 shared_informer.go:311] Waiting for caches to sync for persistent volume 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.783877 1 controllermanager.go:787] "Started controller" controller="endpoints-controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.783943 1 controllermanager.go:756] "Starting controller" controller="garbage-collector-controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.784181 1 endpoints_controller.go:174] "Starting endpoint controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.784193 1 shared_informer.go:311] Waiting for caches to sync for endpoint 2025-08-13T20:08:47.820883362+00:00 stderr F I0813 20:08:47.820581 1 controllermanager.go:787] "Started controller" controller="garbage-collector-controller" 2025-08-13T20:08:47.820883362+00:00 stderr F I0813 20:08:47.820627 1 controllermanager.go:756] "Starting controller" controller="horizontal-pod-autoscaler-controller" 2025-08-13T20:08:47.830451327+00:00 stderr F I0813 20:08:47.830373 1 garbagecollector.go:155] "Starting controller" controller="garbagecollector" 2025-08-13T20:08:47.830451327+00:00 stderr F I0813 20:08:47.830427 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-08-13T20:08:47.830523069+00:00 stderr F I0813 20:08:47.830478 1 graph_builder.go:302] "Running" component="GraphBuilder" 2025-08-13T20:08:47.877555977+00:00 stderr F I0813 20:08:47.877463 1 controllermanager.go:787] "Started controller" controller="horizontal-pod-autoscaler-controller" 2025-08-13T20:08:47.877555977+00:00 stderr F I0813 20:08:47.877513 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-cleaner-controller" 2025-08-13T20:08:47.877706862+00:00 stderr F I0813 20:08:47.877652 1 horizontal.go:200] "Starting HPA controller" 2025-08-13T20:08:47.877706862+00:00 stderr F I0813 20:08:47.877683 1 shared_informer.go:311] Waiting for caches to sync for HPA 2025-08-13T20:08:47.894572865+00:00 stderr F I0813 20:08:47.894436 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-cleaner-controller" 2025-08-13T20:08:47.894572865+00:00 stderr F I0813 20:08:47.894485 1 controllermanager.go:756] "Starting controller" 
controller="certificatesigningrequest-signing-controller" 2025-08-13T20:08:47.894677348+00:00 stderr F I0813 20:08:47.894627 1 cleaner.go:83] "Starting CSR cleaner controller" 2025-08-13T20:08:47.906217069+00:00 stderr F I0813 20:08:47.906136 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.907055493+00:00 stderr F I0813 20:08:47.907018 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.907687251+00:00 stderr F I0813 20:08:47.907664 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.908426862+00:00 stderr F I0813 20:08:47.908405 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.908849514+00:00 stderr F I0813 20:08:47.908826 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-signing-controller" 2025-08-13T20:08:47.908945487+00:00 stderr F I0813 20:08:47.908923 1 controllermanager.go:756] "Starting controller" controller="node-route-controller" 2025-08-13T20:08:47.909004379+00:00 stderr F I0813 20:08:47.908989 1 core.go:290] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true 2025-08-13T20:08:47.909040140+00:00 stderr F I0813 20:08:47.909028 1 controllermanager.go:765] "Warning: skipping controller" controller="node-route-controller" 2025-08-13T20:08:47.909075031+00:00 stderr F I0813 20:08:47.909063 1 controllermanager.go:756] "Starting controller" controller="clusterrole-aggregation-controller" 2025-08-13T20:08:47.909285347+00:00 stderr F I0813 20:08:47.909267 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving" 2025-08-13T20:08:47.909325288+00:00 stderr F I0813 20:08:47.909313 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving 2025-08-13T20:08:47.909398970+00:00 stderr F I0813 20:08:47.909385 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client" 2025-08-13T20:08:47.909430071+00:00 stderr F I0813 20:08:47.909419 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client 2025-08-13T20:08:47.909487813+00:00 stderr F I0813 20:08:47.909474 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client" 2025-08-13T20:08:47.909518374+00:00 stderr F I0813 20:08:47.909507 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client 2025-08-13T20:08:47.909576075+00:00 stderr F I0813 20:08:47.909562 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown" 2025-08-13T20:08:47.909615386+00:00 stderr F I0813 20:08:47.909600 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown 2025-08-13T20:08:47.909668118+00:00 stderr F I0813 20:08:47.909653 1 dynamic_serving_content.go:132] "Starting controller" 
name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.909880414+00:00 stderr F I0813 20:08:47.909831 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.910015758+00:00 stderr F I0813 20:08:47.909999 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.910103910+00:00 stderr F I0813 20:08:47.910089 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.925983626+00:00 stderr P I0813 20:08:47.925923 1 garbagecollector.go:241] "syncing garbage collector with updated resources from discovery" attempt=1 diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apiserver.openshift.io/v1, Resource=apirequestcounts apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1, Resource=clusterautoscalers autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds certificates.k8s.io/v1, Resource=certificatesigningrequests config.openshift.io/v1, Resource=apiservers config.openshift.io/v1, Resource=authentications config.openshift.io/v1, Resource=builds config.openshift.io/v1, Resource=clusteroperators config.openshift.io/v1, Resource=clusterversions config.openshift.io/v1, Resource=consoles config.openshift.io/v1, Resource=dnses config.openshift.io/v1, Resource=featuregates config.openshift.io/v1, Resource=imagecontentpolicies config.openshift.io/v1, Resource=imagedigestmirrorsets config.openshift.io/v1, Resource=images config.openshift.io/v1, Resource=imagetagmirrorsets config.openshift.io/v1, Resource=infrastructures config.openshift.io/v1, Resource=ingresses config.openshift.io/v1, Resource=networks config.openshift.io/v1, Resource=nodes config.openshift.io/v1, Resource=oauths config.openshift.io/v1, Resource=operatorhubs config.openshift.io/v1, Resource=projects config.openshift.io/v1, Resource=proxies config.openshift.io/v1, Resource=schedulers console.openshift.io/v1, Resource=consoleclidownloads console.openshift.io/v1, Resource=consoleexternalloglinks console.openshift.io/v1, 
Resource=consolelinks console.openshift.io/v1, Resource=consolenotifications console.openshift.io/v1, Resource=consoleplugins console.openshift.io/v1, Resource=consolequickstarts console.openshift.io/v1, Resource=consolesamples console.openshift.io/v1, Resource=consoleyamlsamples controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1, Resource=prioritylevelconfigurations helm.openshift.io/v1beta1, Resource=helmchartrepositories helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=images image.openshift.io/v1, Resource=imagestreams imageregistry.operator.openshift.io/v1, Resource=configs imageregistry.operator.openshift.io/v1, Resource=imagepruners infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=adminpolicybasedexternalroutes k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressips k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets machineconfiguration.openshift.io/v1, Resource=containerruntimeconfigs machineconfiguration.openshift.io/v1, Resource=controllerconfigs machineconfiguration.openshift.io/v1, Resource=kubeletconfigs machineconfiguration.openshift.io/v1, Resource=machineconfigpools machineconfiguration.openshift.io/v1, Resource=machineconfigs migration.k8s.io/v1alpha1, Resource=storagestates migration.k8s.io/v1alpha1, Resource=storageversionmigrations monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses oauth.openshift.io/v1, Resource=oauthaccesstokens oauth.openshift.io/v1, Resource=oauthauthorizetokens oauth.openshift.io/v1, Resource=oauthclientauthorizations oauth.openshift.io/v1, Resource=oauthclients oauth.openshift.io/v1, Resource=useroauthaccesstokens operator.openshift.io/v1, Resource=authentications operator.openshift.io/v1, Resource=clustercsidrivers operator.openshift.io/v1, Resource=configs operator.openshift.io/v1, Resource=consoles operator.openshift.io/v1, Resource=csisnapshotcontrollers operator.openshift.io/v1, Resource=dnses operator.openshift.io/v1, Resource=etcds 
operator.openshift.io/v1, Resource=ingresscontrollers operator.openshift.io/v1, Resource=kubeapiservers operator.openshift.io/v1, Resource=kubecontrollermanagers operator.openshift.io/v1, Resource=kubeschedulers operator.openshift.io/v1, Resource=kubestorageversionmigrators operator.openshift.io/v1, Resource=machineconfigurations operator.openshift.io/v1, Resource=networks operator.openshift.io/v1, Resource=openshiftapiservers operator.openshift.io/v1, Resource=openshiftcontrollermanagers operator.openshift.io/v1, Resource=servicecas operator.openshift.io/v1, Resource=storages operator.openshift.io/v1alpha1, Resource=imagecontentsourcepolicies operators.coreos.com/v1, Resource=olmconfigs operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1, Resource=operators operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy.networking.k8s.io/v1alpha1, Resource=adminnetworkpolicies policy.networking.k8s.io/v1alpha1, Resource=baselineadminnetworkpolicies policy/v1, Resource=poddisruptionbudgets project.openshift.io/v1, Resource=projects quota.openshift.io/v1, Resource=clusterresourcequotas rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes samples.operator.openshift.io/v1, Resource=configs scheduling.k8s.io/v1, Resource=priorityclasses security.internal.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=securitycontextconstraints storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments t 2025-08-13T20:08:47.926037307+00:00 stderr F emplate.openshift.io/v1, Resource=brokertemplateinstances template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates user.openshift.io/v1, Resource=groups user.openshift.io/v1, Resource=identities user.openshift.io/v1, Resource=users whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-08-13T20:08:47.930503085+00:00 stderr F I0813 20:08:47.930474 1 controllermanager.go:787] "Started controller" controller="clusterrole-aggregation-controller" 2025-08-13T20:08:47.930555957+00:00 stderr F I0813 20:08:47.930542 1 controllermanager.go:756] "Starting controller" controller="ttl-after-finished-controller" 2025-08-13T20:08:47.930855115+00:00 stderr F I0813 20:08:47.930766 1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" 2025-08-13T20:08:47.930944148+00:00 stderr F I0813 20:08:47.930920 1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator 2025-08-13T20:08:47.942238932+00:00 stderr F I0813 20:08:47.942157 1 controllermanager.go:787] "Started controller" controller="ttl-after-finished-controller" 2025-08-13T20:08:47.942238932+00:00 stderr F I0813 20:08:47.942206 1 controllermanager.go:756] "Starting controller" controller="endpointslice-controller" 2025-08-13T20:08:47.942862410+00:00 stderr F I0813 20:08:47.942411 1 
ttlafterfinished_controller.go:109] "Starting TTL after finished controller" 2025-08-13T20:08:47.942862410+00:00 stderr F I0813 20:08:47.942443 1 shared_informer.go:311] Waiting for caches to sync for TTL after finished 2025-08-13T20:08:47.951827457+00:00 stderr F I0813 20:08:47.951738 1 controllermanager.go:787] "Started controller" controller="endpointslice-controller" 2025-08-13T20:08:47.951940510+00:00 stderr F I0813 20:08:47.951885 1 controllermanager.go:756] "Starting controller" controller="endpointslice-mirroring-controller" 2025-08-13T20:08:47.952293440+00:00 stderr F I0813 20:08:47.952270 1 endpointslice_controller.go:264] "Starting endpoint slice controller" 2025-08-13T20:08:47.952336421+00:00 stderr F I0813 20:08:47.952324 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice 2025-08-13T20:08:47.965276902+00:00 stderr F I0813 20:08:47.965247 1 controllermanager.go:787] "Started controller" controller="endpointslice-mirroring-controller" 2025-08-13T20:08:47.965335574+00:00 stderr F I0813 20:08:47.965322 1 controllermanager.go:756] "Starting controller" controller="daemonset-controller" 2025-08-13T20:08:47.965650653+00:00 stderr F I0813 20:08:47.965587 1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" 2025-08-13T20:08:47.965947521+00:00 stderr F I0813 20:08:47.965884 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring 2025-08-13T20:08:47.984336619+00:00 stderr F I0813 20:08:47.984295 1 controllermanager.go:787] "Started controller" controller="daemonset-controller" 2025-08-13T20:08:47.984405141+00:00 stderr F I0813 20:08:47.984391 1 controllermanager.go:756] "Starting controller" controller="disruption-controller" 2025-08-13T20:08:47.984636617+00:00 stderr F I0813 20:08:47.984617 1 daemon_controller.go:297] "Starting daemon sets controller" 2025-08-13T20:08:47.984675948+00:00 stderr F I0813 20:08:47.984663 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-08-13T20:08:48.022675888+00:00 stderr F I0813 20:08:48.022550 1 controllermanager.go:787] "Started controller" controller="disruption-controller" 2025-08-13T20:08:48.022675888+00:00 stderr F I0813 20:08:48.022596 1 controllermanager.go:756] "Starting controller" controller="replicationcontroller-controller" 2025-08-13T20:08:48.022881444+00:00 stderr F I0813 20:08:48.022826 1 disruption.go:433] "Sending events to api server." 
2025-08-13T20:08:48.022881444+00:00 stderr F I0813 20:08:48.022874 1 disruption.go:444] "Starting disruption controller" 2025-08-13T20:08:48.022925815+00:00 stderr F I0813 20:08:48.022885 1 shared_informer.go:311] Waiting for caches to sync for disruption 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045213 1 controllermanager.go:787] "Started controller" controller="replicationcontroller-controller" 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045261 1 controllermanager.go:756] "Starting controller" controller="replicaset-controller" 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045467 1 replica_set.go:214] "Starting controller" name="replicationcontroller" 2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045477 1 shared_informer.go:311] Waiting for caches to sync for ReplicationController 2025-08-13T20:08:48.056757625+00:00 stderr F I0813 20:08:48.056717 1 controllermanager.go:787] "Started controller" controller="replicaset-controller" 2025-08-13T20:08:48.056887219+00:00 stderr F I0813 20:08:48.056870 1 controllermanager.go:750] "Warning: controller is disabled" controller="ttl-controller" 2025-08-13T20:08:48.056970581+00:00 stderr F I0813 20:08:48.056953 1 controllermanager.go:756] "Starting controller" controller="legacy-serviceaccount-token-cleaner-controller" 2025-08-13T20:08:48.057249199+00:00 stderr F I0813 20:08:48.057228 1 replica_set.go:214] "Starting controller" name="replicaset" 2025-08-13T20:08:48.057290110+00:00 stderr F I0813 20:08:48.057277 1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet 2025-08-13T20:08:48.074530845+00:00 stderr F I0813 20:08:48.074487 1 controllermanager.go:787] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller" 2025-08-13T20:08:48.077342405+00:00 stderr F I0813 20:08:48.076871 1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" 2025-08-13T20:08:48.077342405+00:00 stderr F I0813 20:08:48.076923 1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner 2025-08-13T20:08:48.086612211+00:00 stderr F I0813 20:08:48.085824 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:48.130402997+00:00 stderr F I0813 20:08:48.130305 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.138358525+00:00 stderr F I0813 20:08:48.138217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.170315431+00:00 stderr F I0813 20:08:48.170265 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.171449313+00:00 stderr F I0813 20:08:48.171427 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.172733700+00:00 stderr F I0813 20:08:48.172666 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.174917463+00:00 stderr F I0813 20:08:48.174283 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.175025206+00:00 stderr F I0813 20:08:48.174973 1 reflector.go:351] Caches populated for *v1.Lease from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.198988943+00:00 stderr F W0813 20:08:48.198926 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:48.199176618+00:00 stderr F E0813 20:08:48.199153 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:48.207232879+00:00 stderr F I0813 20:08:48.207003 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.208400983+00:00 stderr F I0813 20:08:48.208321 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.216556407+00:00 stderr F I0813 20:08:48.215182 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.216579717+00:00 stderr F I0813 20:08:48.216549 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.216974529+00:00 stderr F I0813 20:08:48.216922 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217110883+00:00 stderr F I0813 20:08:48.216938 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217265997+00:00 stderr F I0813 20:08:48.217211 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217400161+00:00 stderr F I0813 20:08:48.217342 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217544645+00:00 stderr F I0813 20:08:48.217463 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217544645+00:00 stderr F I0813 20:08:48.217534 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217710190+00:00 stderr F I0813 20:08:48.217663 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.217933916+00:00 stderr F I0813 20:08:48.217865 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218073750+00:00 stderr F I0813 20:08:48.218022 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218409630+00:00 stderr F I0813 20:08:48.218346 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218512183+00:00 stderr F I0813 20:08:48.218456 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.218690978+00:00 stderr F I0813 20:08:48.218631 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.220661534+00:00 stderr F I0813 20:08:48.220472 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.223578098+00:00 stderr F I0813 20:08:48.220985 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.224551166+00:00 stderr F I0813 20:08:48.221162 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"crc\" does not exist" 2025-08-13T20:08:48.224595457+00:00 stderr F I0813 20:08:48.221204 1 topologycache.go:217] "Ignoring node because it has an excluded label" node="crc" 2025-08-13T20:08:48.224743342+00:00 stderr F I0813 20:08:48.224717 1 topologycache.go:253] "Insufficient node info for topology hints" totalZones=0 totalCPU="0" sufficientNodeInfo=true 2025-08-13T20:08:48.224842944+00:00 stderr F I0813 20:08:48.221292 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225055050+00:00 stderr F I0813 20:08:48.221631 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225223615+00:00 stderr F I0813 20:08:48.221716 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225371769+00:00 stderr F I0813 20:08:48.221719 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.225526074+00:00 stderr F W0813 20:08:48.221748 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:48.225670288+00:00 stderr F E0813 20:08:48.225648 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:48.225749350+00:00 stderr F I0813 20:08:48.221862 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-28658250" 
2025-08-13T20:08:48.225880434+00:00 stderr F I0813 20:08:48.225847 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:08:48.225964516+00:00 stderr F I0813 20:08:48.225950 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:08:48.225994367+00:00 stderr F I0813 20:08:48.221888 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226186543+00:00 stderr F I0813 20:08:48.221990 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226397989+00:00 stderr F I0813 20:08:48.222054 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226537903+00:00 stderr F I0813 20:08:48.222075 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226714268+00:00 stderr F I0813 20:08:48.222116 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.226981866+00:00 stderr F I0813 20:08:48.217472 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.228448028+00:00 stderr F I0813 20:08:48.228393 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.230164247+00:00 stderr F I0813 20:08:48.230139 1 shared_informer.go:318] Caches are synced for taint-eviction-controller 2025-08-13T20:08:48.234610694+00:00 stderr F I0813 20:08:48.231951 1 shared_informer.go:318] Caches are synced for PV protection 2025-08-13T20:08:48.234610694+00:00 stderr F I0813 20:08:48.232593 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.238498946+00:00 stderr F I0813 20:08:48.238401 1 shared_informer.go:318] Caches are synced for GC 2025-08-13T20:08:48.242876641+00:00 stderr F I0813 20:08:48.242709 1 shared_informer.go:318] Caches are synced for TTL after finished 2025-08-13T20:08:48.244119587+00:00 stderr F I0813 20:08:48.244009 1 shared_informer.go:318] Caches are synced for job 2025-08-13T20:08:48.244119587+00:00 stderr F I0813 20:08:48.244088 1 shared_informer.go:318] Caches are synced for service account 2025-08-13T20:08:48.250977604+00:00 stderr F I0813 20:08:48.250638 1 shared_informer.go:318] Caches are synced for ReplicationController 2025-08-13T20:08:48.250977604+00:00 stderr F I0813 20:08:48.250684 1 shared_informer.go:318] Caches are synced for cronjob 2025-08-13T20:08:48.278232365+00:00 stderr F I0813 20:08:48.278135 1 shared_informer.go:318] Caches are synced for HPA 2025-08-13T20:08:48.279346197+00:00 stderr F I0813 20:08:48.279293 1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner 2025-08-13T20:08:48.298048733+00:00 stderr F I0813 20:08:48.297946 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.317593214+00:00 stderr F I0813 20:08:48.317526 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:08:48.318342615+00:00 stderr F I0813 20:08:48.318317 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.325951913+00:00 stderr F I0813 20:08:48.325858 1 shared_informer.go:318] Caches are synced for namespace 2025-08-13T20:08:48.331644416+00:00 stderr F I0813 20:08:48.331605 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.333685375+00:00 stderr F I0813 20:08:48.333658 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.338846663+00:00 stderr F I0813 20:08:48.338697 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.348167540+00:00 stderr F I0813 20:08:48.348048 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator 2025-08-13T20:08:48.350661502+00:00 stderr F I0813 20:08:48.350587 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.350875298+00:00 stderr F W0813 20:08:48.350769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:48.352023321+00:00 stderr F E0813 20:08:48.351990 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:48.352316749+00:00 stderr F I0813 20:08:48.352292 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.352696890+00:00 stderr F I0813 20:08:48.352673 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.353075921+00:00 stderr F W0813 20:08:48.353051 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:48.354002427+00:00 stderr F E0813 20:08:48.353935 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:48.354090910+00:00 stderr F W0813 20:08:48.353565 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:48.354207563+00:00 stderr F E0813 20:08:48.354182 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:48.354292706+00:00 stderr F I0813 20:08:48.353672 1 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.354933774+00:00 stderr F W0813 20:08:48.353699 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:48.358727313+00:00 stderr F I0813 20:08:48.358592 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359022251+00:00 stderr F I0813 20:08:48.358977 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359144925+00:00 stderr F I0813 20:08:48.353742 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359256398+00:00 stderr F I0813 20:08:48.359208 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359385592+00:00 stderr F W0813 20:08:48.359333 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:48.359401552+00:00 stderr F E0813 20:08:48.359384 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: 
failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:48.359477644+00:00 stderr F I0813 20:08:48.356161 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.359677240+00:00 stderr F I0813 20:08:48.359138 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.360197125+00:00 stderr F I0813 20:08:48.360133 1 reflector.go:351] Caches populated for *v1.RoleBindingRestriction from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.361672467+00:00 stderr F E0813 20:08:48.361440 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:48.375195585+00:00 stderr F I0813 20:08:48.375086 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.379486658+00:00 stderr F I0813 20:08:48.379428 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.379542450+00:00 stderr F I0813 20:08:48.379435 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.379935441+00:00 stderr F I0813 20:08:48.379854 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.381218408+00:00 stderr F I0813 20:08:48.381024 1 shared_informer.go:318] Caches are synced for persistent volume 2025-08-13T20:08:48.387830607+00:00 stderr F I0813 20:08:48.387722 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.388398514+00:00 stderr F I0813 20:08:48.388370 1 shared_informer.go:318] Caches are synced for daemon sets 2025-08-13T20:08:48.388496096+00:00 stderr F I0813 20:08:48.388480 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-08-13T20:08:48.388531187+00:00 stderr F I0813 20:08:48.388519 1 shared_informer.go:318] Caches are synced for daemon sets 2025-08-13T20:08:48.388627580+00:00 stderr F I0813 20:08:48.388610 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring 2025-08-13T20:08:48.388703572+00:00 stderr F I0813 20:08:48.388683 1 endpointslicemirroring_controller.go:230] "Starting worker threads" total=5 2025-08-13T20:08:48.389498105+00:00 stderr F I0813 20:08:48.389447 1 shared_informer.go:318] Caches are synced for taint 2025-08-13T20:08:48.389560367+00:00 stderr F I0813 20:08:48.389516 1 node_lifecycle_controller.go:676] "Controller observed a new Node" node="crc" 
2025-08-13T20:08:48.389620629+00:00 stderr F I0813 20:08:48.389566 1 controller_utils.go:173] "Recording event message for node" event="Registered Node crc in Controller" node="crc" 2025-08-13T20:08:48.389633729+00:00 stderr F I0813 20:08:48.389620 1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone="" 2025-08-13T20:08:48.389888996+00:00 stderr F I0813 20:08:48.389735 1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="crc" 2025-08-13T20:08:48.391201484+00:00 stderr F I0813 20:08:48.391145 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.391743690+00:00 stderr F I0813 20:08:48.391691 1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal" 2025-08-13T20:08:48.393003736+00:00 stderr F I0813 20:08:48.392924 1 event.go:376] "Event occurred" object="crc" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node crc event: Registered Node crc in Controller" 2025-08-13T20:08:48.393402597+00:00 stderr F I0813 20:08:48.388428 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.395087505+00:00 stderr F I0813 20:08:48.395035 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.397462214+00:00 stderr F I0813 20:08:48.397421 1 shared_informer.go:318] Caches are synced for ephemeral 2025-08-13T20:08:48.400224593+00:00 stderr F I0813 20:08:48.400144 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.401080097+00:00 stderr F I0813 20:08:48.400979 1 shared_informer.go:318] Caches are synced for endpoint 2025-08-13T20:08:48.410372524+00:00 stderr F I0813 20:08:48.410296 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.410976841+00:00 stderr F I0813 20:08:48.410950 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.411347542+00:00 stderr F I0813 20:08:48.411308 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.411610529+00:00 stderr F I0813 20:08:48.411587 1 shared_informer.go:318] Caches are synced for PVC protection 2025-08-13T20:08:48.411818525+00:00 stderr F I0813 20:08:48.411718 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.412553236+00:00 stderr F I0813 20:08:48.412524 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.417402485+00:00 stderr F I0813 20:08:48.417332 1 shared_informer.go:318] Caches are synced for expand 2025-08-13T20:08:48.417578380+00:00 stderr F I0813 20:08:48.417554 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown 2025-08-13T20:08:48.417712944+00:00 stderr F I0813 20:08:48.417698 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client 2025-08-13T20:08:48.417850498+00:00 stderr F I0813 20:08:48.417732 1 
reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.418007713+00:00 stderr F I0813 20:08:48.417959 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving 2025-08-13T20:08:48.418052554+00:00 stderr F I0813 20:08:48.418037 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client 2025-08-13T20:08:48.418422424+00:00 stderr F I0813 20:08:48.418400 1 shared_informer.go:318] Caches are synced for stateful set 2025-08-13T20:08:48.431114778+00:00 stderr F I0813 20:08:48.423134 1 shared_informer.go:318] Caches are synced for disruption 2025-08-13T20:08:48.435616157+00:00 stderr F I0813 20:08:48.435579 1 shared_informer.go:318] Caches are synced for certificate-csrapproving 2025-08-13T20:08:48.438044067+00:00 stderr F I0813 20:08:48.438016 1 shared_informer.go:318] Caches are synced for attach detach 2025-08-13T20:08:48.454388596+00:00 stderr F I0813 20:08:48.454249 1 shared_informer.go:318] Caches are synced for endpoint_slice 2025-08-13T20:08:48.454388596+00:00 stderr F I0813 20:08:48.454349 1 endpointslice_controller.go:271] "Starting worker threads" total=5 2025-08-13T20:08:48.460518801+00:00 stderr F I0813 20:08:48.460429 1 shared_informer.go:318] Caches are synced for deployment 2025-08-13T20:08:48.478241410+00:00 stderr F I0813 20:08:48.478170 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.503509074+00:00 stderr F I0813 20:08:48.503079 1 shared_informer.go:318] Caches are synced for ReplicaSet 2025-08-13T20:08:48.508104666+00:00 stderr F I0813 20:08:48.508060 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="131.204µs" 2025-08-13T20:08:48.508303131+00:00 stderr F I0813 20:08:48.508281 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="117.054µs" 2025-08-13T20:08:48.508427425+00:00 stderr F I0813 20:08:48.508411 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="74.312µs" 2025-08-13T20:08:48.508714653+00:00 stderr F I0813 20:08:48.508693 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="233.996µs" 2025-08-13T20:08:48.508920109+00:00 stderr F I0813 20:08:48.508873 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="117.344µs" 2025-08-13T20:08:48.509201057+00:00 stderr F I0813 20:08:48.509124 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="150.545µs" 2025-08-13T20:08:48.509321921+00:00 stderr F I0813 20:08:48.509304 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="63.222µs" 2025-08-13T20:08:48.509433834+00:00 stderr F I0813 20:08:48.509416 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="64.282µs" 2025-08-13T20:08:48.509532907+00:00 stderr F I0813 20:08:48.509517 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="53.111µs" 2025-08-13T20:08:48.510116973+00:00 stderr F I0813 20:08:48.509758 1 
replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="193.206µs" 2025-08-13T20:08:48.510340060+00:00 stderr F I0813 20:08:48.510316 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="66.902µs" 2025-08-13T20:08:48.510444173+00:00 stderr F I0813 20:08:48.510428 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="52.671µs" 2025-08-13T20:08:48.510617848+00:00 stderr F I0813 20:08:48.510548 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="47.331µs" 2025-08-13T20:08:48.510865445+00:00 stderr F I0813 20:08:48.510842 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="184.695µs" 2025-08-13T20:08:48.511003849+00:00 stderr F I0813 20:08:48.510980 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="46.281µs" 2025-08-13T20:08:48.511213555+00:00 stderr F I0813 20:08:48.511193 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="67.052µs" 2025-08-13T20:08:48.511316458+00:00 stderr F I0813 20:08:48.511300 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="50.271µs" 2025-08-13T20:08:48.511487413+00:00 stderr F I0813 20:08:48.511471 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="123.113µs" 2025-08-13T20:08:48.511580455+00:00 stderr F I0813 20:08:48.511565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="46.872µs" 2025-08-13T20:08:48.511685768+00:00 stderr F I0813 20:08:48.511670 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="46.642µs" 2025-08-13T20:08:48.511837123+00:00 stderr F I0813 20:08:48.511762 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="44.761µs" 2025-08-13T20:08:48.512094350+00:00 stderr F I0813 20:08:48.512072 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="190.645µs" 2025-08-13T20:08:48.512197953+00:00 stderr F I0813 20:08:48.512179 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="51.971µs" 2025-08-13T20:08:48.512302406+00:00 stderr F I0813 20:08:48.512286 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="48.421µs" 2025-08-13T20:08:48.512395189+00:00 stderr F I0813 20:08:48.512380 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="48.661µs" 2025-08-13T20:08:48.515739085+00:00 stderr F I0813 20:08:48.515698 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="3.261564ms" 2025-08-13T20:08:48.515985072+00:00 stderr F I0813 20:08:48.515957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-console/downloads-65476884b9" duration="99.363µs" 2025-08-13T20:08:48.516149796+00:00 stderr F I0813 20:08:48.516126 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="93.453µs" 2025-08-13T20:08:48.516332402+00:00 stderr F I0813 20:08:48.516313 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="94.853µs" 2025-08-13T20:08:48.516480176+00:00 stderr F I0813 20:08:48.516457 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="61.871µs" 2025-08-13T20:08:48.516847846+00:00 stderr F I0813 20:08:48.516823 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="304.438µs" 2025-08-13T20:08:48.517000331+00:00 stderr F I0813 20:08:48.516976 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="83.823µs" 2025-08-13T20:08:48.517119064+00:00 stderr F I0813 20:08:48.517101 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="60.702µs" 2025-08-13T20:08:48.517214557+00:00 stderr F I0813 20:08:48.517199 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-75cfd5db5d" duration="48.281µs" 2025-08-13T20:08:48.517416273+00:00 stderr F I0813 20:08:48.517399 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="149.344µs" 2025-08-13T20:08:48.517542806+00:00 stderr F I0813 20:08:48.517495 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="47.162µs" 2025-08-13T20:08:48.517640709+00:00 stderr F I0813 20:08:48.517625 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="47.821µs" 2025-08-13T20:08:48.517733182+00:00 stderr F I0813 20:08:48.517718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="45.691µs" 2025-08-13T20:08:48.518174034+00:00 stderr F I0813 20:08:48.518154 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="147.475µs" 2025-08-13T20:08:48.518278607+00:00 stderr F I0813 20:08:48.518262 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="48.322µs" 2025-08-13T20:08:48.518372390+00:00 stderr F I0813 20:08:48.518357 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="47.482µs" 2025-08-13T20:08:48.518461153+00:00 stderr F I0813 20:08:48.518445 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="43.021µs" 2025-08-13T20:08:48.518642008+00:00 stderr F I0813 20:08:48.518625 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="133.144µs" 2025-08-13T20:08:48.518733881+00:00 stderr F I0813 20:08:48.518718 1 replica_set.go:676] "Finished 
syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="45.801µs" 2025-08-13T20:08:48.518917706+00:00 stderr F I0813 20:08:48.518871 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="104.853µs" 2025-08-13T20:08:48.519125312+00:00 stderr F I0813 20:08:48.519013 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="51.771µs" 2025-08-13T20:08:48.519227035+00:00 stderr F I0813 20:08:48.519212 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="52.222µs" 2025-08-13T20:08:48.519330148+00:00 stderr F I0813 20:08:48.519314 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="57.292µs" 2025-08-13T20:08:48.519425300+00:00 stderr F I0813 20:08:48.519409 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="45.112µs" 2025-08-13T20:08:48.519514883+00:00 stderr F I0813 20:08:48.519499 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="44.561µs" 2025-08-13T20:08:48.519698918+00:00 stderr F I0813 20:08:48.519682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="136.063µs" 2025-08-13T20:08:48.519873833+00:00 stderr F I0813 20:08:48.519845 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="106.423µs" 2025-08-13T20:08:48.520035408+00:00 stderr F I0813 20:08:48.520011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="47.032µs" 2025-08-13T20:08:48.520123260+00:00 stderr F I0813 20:08:48.520108 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="38.331µs" 2025-08-13T20:08:48.520224123+00:00 stderr F I0813 20:08:48.520205 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="43.781µs" 2025-08-13T20:08:48.520428139+00:00 stderr F I0813 20:08:48.520411 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="156.254µs" 2025-08-13T20:08:48.520514642+00:00 stderr F I0813 20:08:48.520499 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="39.891µs" 2025-08-13T20:08:48.520614754+00:00 stderr F I0813 20:08:48.520599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="44.941µs" 2025-08-13T20:08:48.520707577+00:00 stderr F I0813 20:08:48.520691 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="46.591µs" 2025-08-13T20:08:48.521066787+00:00 stderr F I0813 20:08:48.521045 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" 
duration="304.829µs" 2025-08-13T20:08:48.521168150+00:00 stderr F I0813 20:08:48.521151 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="44.222µs" 2025-08-13T20:08:48.521297464+00:00 stderr F I0813 20:08:48.521282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="42.281µs" 2025-08-13T20:08:48.521387367+00:00 stderr F I0813 20:08:48.521369 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="40.101µs" 2025-08-13T20:08:48.521618653+00:00 stderr F I0813 20:08:48.521596 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="175.855µs" 2025-08-13T20:08:48.521724346+00:00 stderr F I0813 20:08:48.521709 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="56.512µs" 2025-08-13T20:08:48.521877411+00:00 stderr F I0813 20:08:48.521856 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="97.333µs" 2025-08-13T20:08:48.522025955+00:00 stderr F I0813 20:08:48.522009 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="60.672µs" 2025-08-13T20:08:48.522209420+00:00 stderr F I0813 20:08:48.522190 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="130.343µs" 2025-08-13T20:08:48.522306923+00:00 stderr F I0813 20:08:48.522291 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="47.221µs" 2025-08-13T20:08:48.522396456+00:00 stderr F I0813 20:08:48.522378 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="40.661µs" 2025-08-13T20:08:48.522477938+00:00 stderr F I0813 20:08:48.522462 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="35.511µs" 2025-08-13T20:08:48.522561200+00:00 stderr F I0813 20:08:48.522546 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="38.981µs" 2025-08-13T20:08:48.522763826+00:00 stderr F I0813 20:08:48.522744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="123.553µs" 2025-08-13T20:08:48.523657262+00:00 stderr F I0813 20:08:48.523633 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="58.202µs" 2025-08-13T20:08:48.523737074+00:00 stderr F I0813 20:08:48.523680 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="100.573µs" 2025-08-13T20:08:48.523935630+00:00 stderr F I0813 20:08:48.523817 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="44.211µs" 2025-08-13T20:08:48.523935630+00:00 stderr F I0813 20:08:48.523918 1 replica_set.go:676] 
"Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="41.351µs" 2025-08-13T20:08:48.524028362+00:00 stderr F I0813 20:08:48.524010 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="306.618µs" 2025-08-13T20:08:48.524132005+00:00 stderr F I0813 20:08:48.524115 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="49.082µs" 2025-08-13T20:08:48.524378432+00:00 stderr F I0813 20:08:48.524361 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="198.806µs" 2025-08-13T20:08:48.528627834+00:00 stderr F I0813 20:08:48.528586 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.529736296+00:00 stderr F I0813 20:08:48.524011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="66.492µs" 2025-08-13T20:08:48.529835749+00:00 stderr F I0813 20:08:48.523496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="44.411µs" 2025-08-13T20:08:48.529881410+00:00 stderr F I0813 20:08:48.523422 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-66f66f94cf" duration="76.642µs" 2025-08-13T20:08:48.532723372+00:00 stderr F I0813 20:08:48.532694 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.540652669+00:00 stderr F I0813 20:08:48.540449 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[v1/Node, namespace: openshift-network-diagnostics, name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" observed="[v1/Node, namespace: , name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" 2025-08-13T20:08:48.562388312+00:00 stderr F I0813 20:08:48.562313 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-08-13T20:08:48.566424568+00:00 stderr F I0813 20:08:48.566367 1 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.566628734+00:00 stderr F I0813 20:08:48.566556 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.566831220+00:00 stderr F I0813 20:08:48.566750 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.567184670+00:00 stderr F I0813 20:08:48.567158 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.567446997+00:00 stderr F I0813 20:08:48.566866 1 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.569103285+00:00 stderr F I0813 20:08:48.567739 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.569210628+00:00 stderr F I0813 20:08:48.569149 1 reflector.go:351] Caches populated 
for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.570571407+00:00 stderr F I0813 20:08:48.570495 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.573558342+00:00 stderr F I0813 20:08:48.572326 1 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.573985525+00:00 stderr F I0813 20:08:48.573960 1 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.574140849+00:00 stderr F W0813 20:08:48.574121 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:48.574193961+00:00 stderr F E0813 20:08:48.574176 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:48.576663381+00:00 stderr F I0813 20:08:48.576638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.577426713+00:00 stderr F I0813 20:08:48.577400 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.579201814+00:00 stderr F I0813 20:08:48.579139 1 shared_informer.go:318] Caches are synced for crt configmap 2025-08-13T20:08:48.586915115+00:00 stderr F W0813 20:08:48.585066 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:48.586915115+00:00 stderr F E0813 20:08:48.585116 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:48.591644731+00:00 stderr F I0813 20:08:48.591578 1 reflector.go:351] Caches populated for 
*v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.592452454+00:00 stderr F W0813 20:08:48.592423 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:48.592565347+00:00 stderr F I0813 20:08:48.592512 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.592703811+00:00 stderr F I0813 20:08:48.592654 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.592880516+00:00 stderr F I0813 20:08:48.592831 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593031731+00:00 stderr F I0813 20:08:48.592980 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593171065+00:00 stderr F E0813 20:08:48.592512 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:48.593312349+00:00 stderr F I0813 20:08:48.592871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593403331+00:00 stderr F I0813 20:08:48.593253 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593720100+00:00 stderr F I0813 20:08:48.593307 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.599483286+00:00 stderr F I0813 20:08:48.599421 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602196293+00:00 stderr F I0813 20:08:48.600468 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602296876+00:00 stderr F I0813 20:08:48.600671 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602373079+00:00 stderr F I0813 20:08:48.601187 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602494552+00:00 stderr F I0813 20:08:48.601738 1 graph_builder.go:683] "replacing virtual item with observed item" 
virtual="[config.openshift.io/v1/ClusterVersion, namespace: openshift-monitoring, name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" observed="[config.openshift.io/v1/ClusterVersion, namespace: , name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" 2025-08-13T20:08:48.608744571+00:00 stderr F I0813 20:08:48.608609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.615807384+00:00 stderr F I0813 20:08:48.615638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.622410963+00:00 stderr F I0813 20:08:48.622293 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.623100253+00:00 stderr F I0813 20:08:48.623058 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.623356120+00:00 stderr F I0813 20:08:48.623293 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.623675739+00:00 stderr F I0813 20:08:48.623549 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.628728374+00:00 stderr F I0813 20:08:48.628596 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.652296110+00:00 stderr F I0813 20:08:48.652191 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.658929790+00:00 stderr F I0813 20:08:48.658113 1 shared_informer.go:318] Caches are synced for crt configmap 2025-08-13T20:08:48.676236946+00:00 stderr F I0813 20:08:48.675092 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.708386508+00:00 stderr F I0813 20:08:48.707751 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.741002513+00:00 stderr F I0813 20:08:48.738120 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.741002513+00:00 stderr F I0813 20:08:48.738431 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Console, namespace: openshift-console, name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" observed="[operator.openshift.io/v1/Console, namespace: , name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" 2025-08-13T20:08:48.841039621+00:00 stderr F I0813 20:08:48.838733 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.844560452+00:00 stderr F I0813 20:08:48.844037 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.862288021+00:00 stderr F I0813 20:08:48.862136 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.874573103+00:00 stderr F I0813 20:08:48.874469 1 
reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.893934288+00:00 stderr F I0813 20:08:48.891728 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.916631109+00:00 stderr F I0813 20:08:48.916556 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.927503220+00:00 stderr F I0813 20:08:48.927264 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.941566404+00:00 stderr F I0813 20:08:48.941452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.958976503+00:00 stderr F I0813 20:08:48.958634 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.977092122+00:00 stderr F I0813 20:08:48.976871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.990464056+00:00 stderr F I0813 20:08:48.988703 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.012039984+00:00 stderr F I0813 20:08:49.010217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.014452403+00:00 stderr F W0813 20:08:49.014420 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:49.014524605+00:00 stderr F E0813 20:08:49.014506 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:49.022316009+00:00 stderr F I0813 20:08:49.020263 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.037744521+00:00 stderr F I0813 20:08:49.037694 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.048128789+00:00 stderr F I0813 20:08:49.048016 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.064155498+00:00 stderr F I0813 20:08:49.062308 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:08:49.078476709+00:00 stderr F I0813 20:08:49.078311 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.091086090+00:00 stderr F I0813 20:08:49.090366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.105036000+00:00 stderr F I0813 20:08:49.104982 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.120080202+00:00 stderr F I0813 20:08:49.120025 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.130395058+00:00 stderr F I0813 20:08:49.130346 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.147741075+00:00 stderr F I0813 20:08:49.147222 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.162212570+00:00 stderr F I0813 20:08:49.162098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.176187670+00:00 stderr F I0813 20:08:49.176131 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.190469110+00:00 stderr F I0813 20:08:49.190333 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.208186768+00:00 stderr F I0813 20:08:49.208129 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.208536468+00:00 stderr F I0813 20:08:49.208504 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" 2025-08-13T20:08:49.208618340+00:00 stderr F I0813 20:08:49.208600 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" 2025-08-13T20:08:49.220694246+00:00 stderr F I0813 20:08:49.220638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.236635423+00:00 stderr F I0813 20:08:49.236452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.254709032+00:00 stderr F I0813 20:08:49.254553 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.275917170+00:00 stderr F I0813 20:08:49.275764 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.290558850+00:00 stderr F I0813 20:08:49.290497 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.300722161+00:00 stderr F I0813 20:08:49.300641 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.312479628+00:00 stderr F I0813 20:08:49.312393 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.328198769+00:00 stderr F I0813 20:08:49.328091 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.328658202+00:00 stderr F W0813 20:08:49.328589 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:49.328658202+00:00 stderr F E0813 20:08:49.328644 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:49.344517027+00:00 stderr F I0813 20:08:49.344372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.361311858+00:00 stderr F I0813 20:08:49.360169 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.375651149+00:00 stderr F I0813 20:08:49.375555 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.376022760+00:00 stderr F I0813 20:08:49.375736 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/DNS, namespace: openshift-dns, name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" observed="[operator.openshift.io/v1/DNS, namespace: , name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" 2025-08-13T20:08:49.398016871+00:00 stderr F I0813 20:08:49.397925 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.416319915+00:00 stderr F I0813 20:08:49.416217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.428527415+00:00 stderr F I0813 20:08:49.428372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.442392953+00:00 stderr F I0813 20:08:49.442298 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.455387295+00:00 stderr F I0813 20:08:49.455267 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.466575146+00:00 stderr F I0813 20:08:49.466459 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.515061396+00:00 stderr F I0813 20:08:49.514868 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.515827588+00:00 stderr F I0813 20:08:49.515264 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Network, namespace: openshift-host-network, name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" observed="[operator.openshift.io/v1/Network, namespace: , name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" 2025-08-13T20:08:49.675247759+00:00 stderr F W0813 20:08:49.675128 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:49.675247759+00:00 stderr F E0813 20:08:49.675207 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:49.725230932+00:00 stderr F W0813 20:08:49.725119 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:49.725230932+00:00 stderr F E0813 20:08:49.725191 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:49.822870962+00:00 stderr F W0813 20:08:49.822769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:49.822989725+00:00 stderr F E0813 20:08:49.822972 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:49.825884788+00:00 stderr F W0813 20:08:49.825859 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:49.826011682+00:00 stderr F E0813 20:08:49.825995 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:49.888533904+00:00 stderr F W0813 20:08:49.888484 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:49.888609186+00:00 stderr F E0813 20:08:49.888590 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:49.960108826+00:00 stderr F W0813 20:08:49.958192 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get 
deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:49.960108826+00:00 stderr F E0813 20:08:49.958250 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:49.976144286+00:00 stderr F I0813 20:08:49.976056 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:50.069486842+00:00 stderr F W0813 20:08:50.069337 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:50.069486842+00:00 stderr F E0813 20:08:50.069406 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:50.140110166+00:00 stderr F W0813 20:08:50.139872 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:50.140110166+00:00 stderr F E0813 20:08:50.139952 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:51.308362571+00:00 stderr F W0813 20:08:51.307680 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented 
the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:51.308362571+00:00 stderr F E0813 20:08:51.307763 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:51.773955860+00:00 stderr F W0813 20:08:51.773871 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:51.774141596+00:00 stderr F E0813 20:08:51.774083 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:51.806047811+00:00 stderr F W0813 20:08:51.805161 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:51.806047811+00:00 stderr F E0813 20:08:51.805217 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:52.179231950+00:00 stderr F W0813 20:08:52.179055 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:52.179231950+00:00 stderr F E0813 20:08:52.179115 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: 
\"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:52.265486693+00:00 stderr F W0813 20:08:52.265263 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:52.265486693+00:00 stderr F E0813 20:08:52.265331 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:52.491827473+00:00 stderr F W0813 20:08:52.491636 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:52.491827473+00:00 stderr F E0813 20:08:52.491694 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:52.500160531+00:00 stderr F W0813 20:08:52.500089 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:52.500160531+00:00 stderr F E0813 20:08:52.500138 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 
10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:52.812302311+00:00 stderr F W0813 20:08:52.812161 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:52.812302311+00:00 stderr F E0813 20:08:52.812225 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:52.815945425+00:00 stderr F W0813 20:08:52.815757 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:52.815976186+00:00 stderr F E0813 20:08:52.815945 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:53.031703841+00:00 stderr F W0813 20:08:53.031583 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:53.031703841+00:00 stderr F E0813 20:08:53.031646 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:56.011840344+00:00 stderr F W0813 
20:08:56.011526 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:56.011840344+00:00 stderr F E0813 20:08:56.011692 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:56.601994895+00:00 stderr F W0813 20:08:56.601862 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:56.601994895+00:00 stderr F E0813 20:08:56.601951 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:56.803401380+00:00 stderr F W0813 20:08:56.803277 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:56.803401380+00:00 stderr F E0813 20:08:56.803351 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:57.305835734+00:00 stderr F W0813 20:08:57.305720 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: 
\"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:57.305835734+00:00 stderr F E0813 20:08:57.305815 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io) 2025-08-13T20:08:57.338952673+00:00 stderr F W0813 20:08:57.338861 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:57.339030125+00:00 stderr F E0813 20:08:57.339013 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io) 2025-08-13T20:08:57.612987720+00:00 stderr F W0813 20:08:57.612859 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:57.612987720+00:00 stderr F E0813 20:08:57.612941 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io) 2025-08-13T20:08:57.796716768+00:00 stderr F W0813 20:08:57.796587 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 
2025-08-13T20:08:57.796716768+00:00 stderr F E0813 20:08:57.796653 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io) 2025-08-13T20:08:58.662998036+00:00 stderr F W0813 20:08:58.662024 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:58.662998036+00:00 stderr F E0813 20:08:58.662124 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io) 2025-08-13T20:08:58.715656035+00:00 stderr F W0813 20:08:58.715505 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:58.715656035+00:00 stderr F E0813 20:08:58.715607 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:58.744535883+00:00 stderr F W0813 20:08:58.744405 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:08:58.744535883+00:00 stderr F E0813 20:08:58.744479 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list 
*v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io) 2025-08-13T20:09:03.688126690+00:00 stderr F I0813 20:09:03.688005 1 reflector.go:351] Caches populated for *v1.RangeAllocation from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:04.809332475+00:00 stderr F I0813 20:09:04.809170 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.427590013+00:00 stderr F I0813 20:09:05.427367 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.487728298+00:00 stderr F I0813 20:09:05.487572 1 reflector.go:351] Caches populated for *v1.UserOAuthAccessToken from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.573574079+00:00 stderr F I0813 20:09:05.573457 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.227612332+00:00 stderr F I0813 20:09:07.227471 1 reflector.go:351] Caches populated for *v1.BrokerTemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.547623066+00:00 stderr F I0813 20:09:07.545003 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.547623066+00:00 stderr F I0813 20:09:07.545470 1 graph_builder.go:407] "item references an owner with coordinates that do not match the observed identity" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2025-08-13T20:09:07.868350962+00:00 stderr F I0813 20:09:07.868204 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.443062899+00:00 stderr F I0813 20:09:10.442950 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:11.364711734+00:00 stderr F I0813 20:09:11.364560 1 reflector.go:351] Caches populated for *v1.Template from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:11.388139495+00:00 stderr F I0813 20:09:11.387977 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:09:11.388139495+00:00 stderr F I0813 20:09:11.388056 1 resource_quota_controller.go:496] "synced quota controller" 2025-08-13T20:09:11.410004902+00:00 stderr F I0813 20:09:11.409687 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:09:11.430987904+00:00 stderr F I0813 20:09:11.430874 1 shared_informer.go:318] Caches are synced for garbage collector 2025-08-13T20:09:11.430987904+00:00 stderr F I0813 20:09:11.430946 1 garbagecollector.go:166] "All resource monitors have synced. 
Proceeding to collect garbage" 2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463303 1 shared_informer.go:318] Caches are synced for garbage collector 2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463340 1 garbagecollector.go:290] "synced garbage collector" 2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463419 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" virtual=false 2025-08-13T20:09:11.463690351+00:00 stderr F I0813 20:09:11.463664 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" virtual=false 2025-08-13T20:09:11.463939508+00:00 stderr F I0813 20:09:11.463865 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" virtual=false 2025-08-13T20:09:11.464029561+00:00 stderr F I0813 20:09:11.463976 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" virtual=false 2025-08-13T20:09:11.464126364+00:00 stderr F I0813 20:09:11.464058 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" virtual=false 2025-08-13T20:09:11.464126364+00:00 stderr F I0813 20:09:11.464107 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464183 1 garbagecollector.go:549] "Processing item" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464264 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464287 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464302 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464303 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" virtual=false 2025-08-13T20:09:11.464452453+00:00 stderr F I0813 20:09:11.464379 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" virtual=false 2025-08-13T20:09:11.464533586+00:00 stderr F I0813 20:09:11.464477 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" virtual=false 2025-08-13T20:09:11.464533586+00:00 stderr F 
I0813 20:09:11.463860 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" virtual=false 2025-08-13T20:09:11.464701300+00:00 stderr F I0813 20:09:11.464219 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" virtual=false 2025-08-13T20:09:11.464822804+00:00 stderr F I0813 20:09:11.464225 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464241 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464252 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464266 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464267 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" virtual=false 2025-08-13T20:09:11.485689722+00:00 stderr F I0813 20:09:11.483124 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.485689722+00:00 stderr F I0813 20:09:11.483194 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.486306 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.486367 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488147 1 garbagecollector.go:549] "Processing item" 
item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488431 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488457 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" virtual=false 2025-08-13T20:09:11.489874192+00:00 stderr F I0813 20:09:11.489157 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.489874192+00:00 stderr F I0813 20:09:11.489254 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" virtual=false 2025-08-13T20:09:11.493095934+00:00 stderr F I0813 20:09:11.491170 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.493095934+00:00 stderr F I0813 20:09:11.491207 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" virtual=false 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495106 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495170 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" virtual=false 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495421 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495447 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" virtual=false 2025-08-13T20:09:11.498893421+00:00 stderr F I0813 20:09:11.498853 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.498959612+00:00 stderr F I0813 20:09:11.498929 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" virtual=false 2025-08-13T20:09:11.499281652+00:00 stderr F I0813 20:09:11.499145 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.499281652+00:00 stderr F I0813 20:09:11.499196 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" virtual=false 2025-08-13T20:09:11.530719793+00:00 stderr F I0813 20:09:11.530619 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.530719793+00:00 stderr F I0813 20:09:11.530670 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" virtual=false 2025-08-13T20:09:11.532400661+00:00 stderr F I0813 20:09:11.531008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.532400661+00:00 stderr F I0813 20:09:11.531054 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" virtual=false 2025-08-13T20:09:11.540053941+00:00 stderr F I0813 20:09:11.539760 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.540053941+00:00 stderr F I0813 20:09:11.539925 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" virtual=false 2025-08-13T20:09:11.559864139+00:00 stderr F I0813 20:09:11.559719 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: 
a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.559864139+00:00 stderr F I0813 20:09:11.559764 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" virtual=false 2025-08-13T20:09:11.562980258+00:00 stderr F I0813 20:09:11.561154 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.562980258+00:00 stderr F I0813 20:09:11.561206 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" virtual=false 2025-08-13T20:09:11.565924383+00:00 stderr F I0813 20:09:11.563281 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.565924383+00:00 stderr F I0813 20:09:11.563318 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 9b74ff82-99c7-48fa-852e-a3960e84fedd]" virtual=false 2025-08-13T20:09:11.577965448+00:00 stderr F I0813 20:09:11.575135 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.577965448+00:00 stderr F I0813 20:09:11.575179 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" virtual=false 2025-08-13T20:09:11.591330640+00:00 stderr F I0813 20:09:11.590238 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.591330640+00:00 stderr F I0813 20:09:11.590311 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" virtual=false 2025-08-13T20:09:11.607001829+00:00 stderr F I0813 20:09:11.605842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:11.607001829+00:00 stderr F I0813 20:09:11.605964 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" virtual=false 2025-08-13T20:09:11.640845670+00:00 stderr F I0813 20:09:11.640673 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.640887131+00:00 stderr F I0813 20:09:11.640845 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641360 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641410 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641460 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641560 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641712 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641780 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" virtual=false 
2025-08-13T20:09:11.664780876+00:00 stderr F I0813 20:09:11.664673 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.664780876+00:00 stderr F I0813 20:09:11.664757 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 778598fb-710e-443a-a27d-e077f62db555]" virtual=false 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670364 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670410 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" virtual=false 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670670 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" virtual=false 2025-08-13T20:09:11.673116295+00:00 stderr F I0813 20:09:11.671985 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.673116295+00:00 stderr F I0813 20:09:11.672040 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699000 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699066 1 garbagecollector.go:549] "Processing item" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" virtual=false 2025-08-13T20:09:11.699675736+00:00 
stderr F I0813 20:09:11.699325 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699343 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699650 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.699716667+00:00 stderr F I0813 20:09:11.699668 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]" virtual=false 2025-08-13T20:09:11.702848717+00:00 stderr F I0813 20:09:11.700316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.702848717+00:00 stderr F I0813 20:09:11.700379 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" virtual=false 2025-08-13T20:09:11.705897565+00:00 stderr F I0813 20:09:11.705756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.705897565+00:00 stderr F I0813 20:09:11.705863 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" virtual=false 2025-08-13T20:09:11.723875370+00:00 stderr F I0813 20:09:11.722541 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.723875370+00:00 stderr F I0813 20:09:11.723168 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]" virtual=false 2025-08-13T20:09:11.762132647+00:00 stderr F I0813 20:09:11.762062 1 garbagecollector.go:596] "item doesn't 
have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]" 2025-08-13T20:09:11.762162448+00:00 stderr F I0813 20:09:11.762129 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]" virtual=false 2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.768206 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.768272 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" virtual=false 2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.771153 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 9b74ff82-99c7-48fa-852e-a3960e84fedd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.771186 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" virtual=false 2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.784705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.784743 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" virtual=false 2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.785041 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]" 2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.785059 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]" virtual=false 2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798363 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798441 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" virtual=false 2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798719 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]" virtual=false 2025-08-13T20:09:11.802963598+00:00 stderr F I0813 20:09:11.800572 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.802963598+00:00 stderr F I0813 20:09:11.800629 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]" virtual=false 2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.811035 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.811086 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" virtual=false 2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.812320 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.812348 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" virtual=false 2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836450 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]" 2025-08-13T20:09:11.836568521+00:00 stderr F I0813 
20:09:11.836541 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" virtual=false 2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836481 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.836703275+00:00 stderr F I0813 20:09:11.836599 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" virtual=false 2025-08-13T20:09:11.842381268+00:00 stderr F I0813 20:09:11.842199 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]" 2025-08-13T20:09:11.842381268+00:00 stderr F I0813 20:09:11.842249 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]" virtual=false 2025-08-13T20:09:11.845851897+00:00 stderr F I0813 20:09:11.845357 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.845851897+00:00 stderr F I0813 20:09:11.845410 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" virtual=false 2025-08-13T20:09:11.847981208+00:00 stderr F I0813 20:09:11.845932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.847981208+00:00 stderr F I0813 20:09:11.845999 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" virtual=false 2025-08-13T20:09:11.850861941+00:00 stderr F I0813 20:09:11.848977 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.850861941+00:00 stderr F I0813 20:09:11.849009 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" 
virtual=false 2025-08-13T20:09:11.854377312+00:00 stderr F I0813 20:09:11.854323 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]" 2025-08-13T20:09:11.854398752+00:00 stderr F I0813 20:09:11.854378 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" virtual=false 2025-08-13T20:09:11.860222669+00:00 stderr F I0813 20:09:11.860191 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]" 2025-08-13T20:09:11.860301602+00:00 stderr F I0813 20:09:11.860286 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" virtual=false 2025-08-13T20:09:11.860632361+00:00 stderr F I0813 20:09:11.860607 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.860687503+00:00 stderr F I0813 20:09:11.860673 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]" virtual=false 2025-08-13T20:09:11.860985341+00:00 stderr F I0813 20:09:11.860959 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.861048233+00:00 stderr F I0813 20:09:11.861033 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]" virtual=false 2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888111 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 778598fb-710e-443a-a27d-e077f62db555]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888234 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" virtual=false 2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888673 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]" 2025-08-13T20:09:11.889986623+00:00 stderr 
F I0813 20:09:11.888694 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]" virtual=false 2025-08-13T20:09:11.891468375+00:00 stderr F I0813 20:09:11.891401 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]" 2025-08-13T20:09:11.891832736+00:00 stderr F I0813 20:09:11.891737 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" virtual=false 2025-08-13T20:09:11.892158405+00:00 stderr F I0813 20:09:11.892118 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.892280449+00:00 stderr F I0813 20:09:11.892258 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" virtual=false 2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.902848 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.902960 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" virtual=false 2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.903236 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.903260 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" virtual=false 2025-08-13T20:09:11.903731527+00:00 stderr F I0813 20:09:11.903685 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]" 2025-08-13T20:09:11.903731527+00:00 stderr F I0813 20:09:11.903710 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" virtual=false 2025-08-13T20:09:11.909882383+00:00 stderr F I0813 20:09:11.909254 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.909882383+00:00 stderr F I0813 20:09:11.909310 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" virtual=false 2025-08-13T20:09:11.922423143+00:00 stderr F I0813 20:09:11.922356 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.922534976+00:00 stderr F I0813 20:09:11.922515 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" virtual=false 2025-08-13T20:09:11.923117003+00:00 stderr F I0813 20:09:11.923036 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.923183885+00:00 stderr F I0813 20:09:11.923168 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" virtual=false 2025-08-13T20:09:11.923394271+00:00 stderr F I0813 20:09:11.923374 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]" 2025-08-13T20:09:11.923445722+00:00 stderr F I0813 20:09:11.923432 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" virtual=false 2025-08-13T20:09:11.929533177+00:00 stderr F I0813 20:09:11.929488 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}] 2025-08-13T20:09:11.929614339+00:00 stderr F I0813 20:09:11.929599 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" virtual=false 2025-08-13T20:09:11.942948991+00:00 stderr F I0813 20:09:11.941430 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: 
openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.942948991+00:00 stderr F I0813 20:09:11.941514 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" virtual=false 2025-08-13T20:09:11.943400094+00:00 stderr F I0813 20:09:11.943370 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.943472496+00:00 stderr F I0813 20:09:11.943458 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" virtual=false 2025-08-13T20:09:11.971524541+00:00 stderr F I0813 20:09:11.971460 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.971697176+00:00 stderr F I0813 20:09:11.971675 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" virtual=false 2025-08-13T20:09:11.994898121+00:00 stderr F I0813 20:09:11.992669 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}] 2025-08-13T20:09:11.994898121+00:00 stderr F I0813 20:09:11.992737 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" virtual=false 2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995441 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995501 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" virtual=false 2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995674 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995697 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" virtual=false 2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.996054 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.996080 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" virtual=false 2025-08-13T20:09:12.017887330+00:00 stderr F I0813 20:09:12.015311 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.017887330+00:00 stderr F I0813 20:09:12.015397 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" virtual=false 2025-08-13T20:09:12.041160677+00:00 stderr F I0813 20:09:12.041085 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.041262830+00:00 stderr F I0813 20:09:12.041247 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" virtual=false 2025-08-13T20:09:12.049357052+00:00 stderr F I0813 20:09:12.049267 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.049535217+00:00 stderr F I0813 20:09:12.049515 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: 
kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.049640290+00:00 stderr F I0813 20:09:12.049621 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" virtual=false 2025-08-13T20:09:12.049960139+00:00 stderr F I0813 20:09:12.049932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.050025021+00:00 stderr F I0813 20:09:12.050011 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" virtual=false 2025-08-13T20:09:12.050276048+00:00 stderr F I0813 20:09:12.050206 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.050363321+00:00 stderr F I0813 20:09:12.050349 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" virtual=false 2025-08-13T20:09:12.050620128+00:00 stderr F I0813 20:09:12.050598 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.050672680+00:00 stderr F I0813 20:09:12.050658 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" virtual=false 2025-08-13T20:09:12.053485991+00:00 stderr F I0813 20:09:12.052006 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" virtual=false 2025-08-13T20:09:12.054286883+00:00 stderr F I0813 20:09:12.054263 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.054364546+00:00 stderr F I0813 
20:09:12.054349 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" virtual=false 2025-08-13T20:09:12.054559731+00:00 stderr F I0813 20:09:12.054539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.054612673+00:00 stderr F I0813 20:09:12.054599 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" virtual=false 2025-08-13T20:09:12.054930162+00:00 stderr F I0813 20:09:12.054880 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.054994774+00:00 stderr F I0813 20:09:12.054979 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" virtual=false 2025-08-13T20:09:12.060510762+00:00 stderr F I0813 20:09:12.060461 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.060623125+00:00 stderr F I0813 20:09:12.060605 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" virtual=false 2025-08-13T20:09:12.061691196+00:00 stderr F I0813 20:09:12.061225 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" 2025-08-13T20:09:12.061756458+00:00 stderr F I0813 20:09:12.061741 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" virtual=false 2025-08-13T20:09:12.062305663+00:00 stderr F I0813 20:09:12.061474 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.062326874+00:00 stderr F I0813 20:09:12.062307 1 
garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" virtual=false 2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.068469 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.068538 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" virtual=false 2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.070876 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.070959 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" virtual=false 2025-08-13T20:09:12.071640041+00:00 stderr F I0813 20:09:12.071612 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.072191877+00:00 stderr F I0813 20:09:12.072166 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" virtual=false 2025-08-13T20:09:12.076067098+00:00 stderr F I0813 20:09:12.076018 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.078967051+00:00 stderr F I0813 20:09:12.078882 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" virtual=false 2025-08-13T20:09:12.079216158+00:00 stderr F I0813 20:09:12.079084 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: 
fb798e33-6d4c-4082-a60c-594a9db7124a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.080136485+00:00 stderr F I0813 20:09:12.080010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":false,"blockOwnerDeletion":false}] 2025-08-13T20:09:12.083709447+00:00 stderr F I0813 20:09:12.083590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.097817612+00:00 stderr F I0813 20:09:12.097632 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098618 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098877 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098990 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.099138 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.099174 1 garbagecollector.go:549] "Processing item" 
item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" virtual=false 2025-08-13T20:09:12.099485639+00:00 stderr F I0813 20:09:12.099424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" virtual=false 2025-08-13T20:09:12.099543011+00:00 stderr F I0813 20:09:12.099509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099672715+00:00 stderr F I0813 20:09:12.099598 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" virtual=false 2025-08-13T20:09:12.099819039+00:00 stderr F I0813 20:09:12.099759 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" virtual=false 2025-08-13T20:09:12.102401363+00:00 stderr F I0813 20:09:12.102314 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.102401363+00:00 stderr F I0813 20:09:12.102368 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" virtual=false 2025-08-13T20:09:12.102957679+00:00 stderr F I0813 20:09:12.102870 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.102978670+00:00 stderr F I0813 20:09:12.102968 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" virtual=false 2025-08-13T20:09:12.107458998+00:00 stderr F I0813 20:09:12.107389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.107492199+00:00 stderr F I0813 20:09:12.107454 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" virtual=false 
2025-08-13T20:09:12.108032264+00:00 stderr F I0813 20:09:12.107973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.108065565+00:00 stderr F I0813 20:09:12.108031 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" 2025-08-13T20:09:12.108065565+00:00 stderr F I0813 20:09:12.108050 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" virtual=false 2025-08-13T20:09:12.108075166+00:00 stderr F I0813 20:09:12.108066 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" virtual=false 2025-08-13T20:09:12.108594791+00:00 stderr F I0813 20:09:12.108569 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.108654192+00:00 stderr F I0813 20:09:12.108639 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" virtual=false 2025-08-13T20:09:12.109243969+00:00 stderr F I0813 20:09:12.109217 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.109349772+00:00 stderr F I0813 20:09:12.108590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.109379303+00:00 stderr F I0813 20:09:12.109293 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" virtual=false 2025-08-13T20:09:12.109548158+00:00 stderr F I0813 20:09:12.109372 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" virtual=false 2025-08-13T20:09:12.116526798+00:00 
stderr F I0813 20:09:12.116423 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" 2025-08-13T20:09:12.116526798+00:00 stderr F I0813 20:09:12.116503 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" virtual=false 2025-08-13T20:09:12.116969681+00:00 stderr F I0813 20:09:12.116897 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" 2025-08-13T20:09:12.116989171+00:00 stderr F I0813 20:09:12.116967 1 garbagecollector.go:549] "Processing item" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905, uid: 7f4930dc-9c3e-449f-86b0-c6f776dc6141]" virtual=false 2025-08-13T20:09:12.117066763+00:00 stderr F I0813 20:09:12.117043 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" 2025-08-13T20:09:12.117175677+00:00 stderr F I0813 20:09:12.117156 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" virtual=false 2025-08-13T20:09:12.119927885+00:00 stderr F I0813 20:09:12.118382 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" 2025-08-13T20:09:12.119927885+00:00 stderr F I0813 20:09:12.118434 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" virtual=false 2025-08-13T20:09:12.120191133+00:00 stderr F I0813 20:09:12.120167 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" 2025-08-13T20:09:12.120282096+00:00 stderr F I0813 20:09:12.120267 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" virtual=false 2025-08-13T20:09:12.120491802+00:00 stderr F I0813 20:09:12.120469 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" 2025-08-13T20:09:12.120602845+00:00 stderr F I0813 20:09:12.120585 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124429 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 
20:09:12.124502 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124879 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124929 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125180 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125318 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125336 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125581 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125599 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125743 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125761 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" virtual=false 2025-08-13T20:09:12.127499803+00:00 stderr F I0813 20:09:12.127446 
1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" 2025-08-13T20:09:12.127499803+00:00 stderr F I0813 20:09:12.127492 1 garbagecollector.go:549] "Processing item" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251920, uid: 11f6ecfd-32e5-4666-a477-30fd544386df]" virtual=false 2025-08-13T20:09:12.128876522+00:00 stderr F I0813 20:09:12.127733 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.128876522+00:00 stderr F I0813 20:09:12.127820 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" virtual=false 2025-08-13T20:09:12.129457119+00:00 stderr F I0813 20:09:12.129427 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" 2025-08-13T20:09:12.129578222+00:00 stderr F I0813 20:09:12.129523 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.129631784+00:00 stderr F I0813 20:09:12.129591 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" virtual=false 2025-08-13T20:09:12.130128788+00:00 stderr F I0813 20:09:12.129973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.130128788+00:00 stderr F I0813 20:09:12.130021 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" virtual=false 2025-08-13T20:09:12.130150349+00:00 stderr F I0813 20:09:12.129513 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" virtual=false 2025-08-13T20:09:12.130355574+00:00 stderr F I0813 20:09:12.130273 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:12.130567431+00:00 stderr F I0813 20:09:12.130470 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" virtual=false 2025-08-13T20:09:12.131152237+00:00 stderr F I0813 20:09:12.131126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.131214269+00:00 stderr F I0813 20:09:12.131199 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.131665 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.131710 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132331 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132357 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132490 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132510 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" virtual=false 2025-08-13T20:09:12.140217257+00:00 stderr F I0813 20:09:12.140124 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" 2025-08-13T20:09:12.140217257+00:00 stderr F I0813 20:09:12.140202 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" virtual=false 2025-08-13T20:09:12.140333231+00:00 stderr F I0813 20:09:12.140272 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.140368022+00:00 stderr F I0813 20:09:12.140342 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" virtual=false 2025-08-13T20:09:12.140540736+00:00 stderr F I0813 20:09:12.140471 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.142763940+00:00 stderr F I0813 20:09:12.141077 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" virtual=false 2025-08-13T20:09:12.143075239+00:00 stderr F I0813 20:09:12.141132 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" 2025-08-13T20:09:12.143192553+00:00 stderr F I0813 20:09:12.143172 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" virtual=false 2025-08-13T20:09:12.143303126+00:00 stderr F I0813 20:09:12.140679 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" 2025-08-13T20:09:12.143333507+00:00 stderr F I0813 20:09:12.142878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.143430199+00:00 stderr F I0813 20:09:12.143372 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" virtual=false 2025-08-13T20:09:12.143712207+00:00 stderr F I0813 20:09:12.143690 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" virtual=false 2025-08-13T20:09:12.143986035+00:00 stderr F I0813 20:09:12.140599 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" 2025-08-13T20:09:12.144078658+00:00 stderr F I0813 20:09:12.144061 1 
garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" virtual=false 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148210 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148284 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" virtual=false 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148522 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}] 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148549 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" virtual=false 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148763 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148837 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" virtual=false 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148938 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.149025 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" virtual=false 2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.149252 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.152116508+00:00 stderr F I0813 20:09:12.149954 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905, uid: 7f4930dc-9c3e-449f-86b0-c6f776dc6141]" 
owner=[{"apiVersion":"batch/v1","kind":"CronJob","name":"collect-profiles","uid":"946673ee-e5bd-418a-934e-c38198674faa","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.152259572+00:00 stderr F I0813 20:09:12.152238 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" virtual=false 2025-08-13T20:09:12.152353735+00:00 stderr F I0813 20:09:12.150233 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" virtual=false 2025-08-13T20:09:12.152383816+00:00 stderr F I0813 20:09:12.150263 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" 2025-08-13T20:09:12.152410617+00:00 stderr F I0813 20:09:12.150432 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" 2025-08-13T20:09:12.152436268+00:00 stderr F I0813 20:09:12.150476 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}] 2025-08-13T20:09:12.152518170+00:00 stderr F I0813 20:09:12.150517 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251920, uid: 11f6ecfd-32e5-4666-a477-30fd544386df]" owner=[{"apiVersion":"batch/v1","kind":"CronJob","name":"collect-profiles","uid":"946673ee-e5bd-418a-934e-c38198674faa","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.152577312+00:00 stderr F I0813 20:09:12.150685 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.152605922+00:00 stderr F I0813 20:09:12.150722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.152632433+00:00 stderr F I0813 20:09:12.152010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.152751747+00:00 stderr F I0813 20:09:12.152675 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" virtual=false 2025-08-13T20:09:12.153314163+00:00 stderr F I0813 20:09:12.153198 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.153314163+00:00 stderr F I0813 20:09:12.153248 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" virtual=false 2025-08-13T20:09:12.153599511+00:00 stderr F I0813 20:09:12.153500 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" virtual=false 2025-08-13T20:09:12.155172536+00:00 stderr F I0813 20:09:12.155128 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" virtual=false 2025-08-13T20:09:12.157216495+00:00 stderr F I0813 20:09:12.157163 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" virtual=false 2025-08-13T20:09:12.159301594+00:00 stderr F I0813 20:09:12.159273 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" virtual=false 2025-08-13T20:09:12.159539891+00:00 stderr F I0813 20:09:12.159480 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" 2025-08-13T20:09:12.159560312+00:00 stderr F I0813 20:09:12.159551 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" virtual=false 2025-08-13T20:09:12.159893161+00:00 stderr F I0813 20:09:12.159779 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" 2025-08-13T20:09:12.159893161+00:00 stderr F I0813 20:09:12.159872 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" virtual=false 2025-08-13T20:09:12.160164019+00:00 stderr F I0813 
20:09:12.160109 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" virtual=false 2025-08-13T20:09:12.160282903+00:00 stderr F I0813 20:09:12.160235 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" virtual=false 2025-08-13T20:09:12.160880920+00:00 stderr F I0813 20:09:12.160856 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" 2025-08-13T20:09:12.161076945+00:00 stderr F I0813 20:09:12.161057 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" virtual=false 2025-08-13T20:09:12.161314362+00:00 stderr F I0813 20:09:12.160926 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.161412945+00:00 stderr F I0813 20:09:12.161393 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" virtual=false 2025-08-13T20:09:12.176955860+00:00 stderr F I0813 20:09:12.176871 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.177926438+00:00 stderr F I0813 20:09:12.177854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177168 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 
d26b8449-3866-4f5c-880c-7dab11423e72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177350 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177379 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.178125084+00:00 stderr F I0813 20:09:12.177707 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.178340290+00:00 stderr F I0813 20:09:12.178315 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" virtual=false 2025-08-13T20:09:12.178518585+00:00 stderr F I0813 20:09:12.178500 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" virtual=false 2025-08-13T20:09:12.181700677+00:00 stderr F I0813 20:09:12.181587 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" virtual=false 2025-08-13T20:09:12.181861941+00:00 stderr F I0813 20:09:12.181764 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" virtual=false 2025-08-13T20:09:12.182004805+00:00 stderr F I0813 20:09:12.181957 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" virtual=false 2025-08-13T20:09:12.182129669+00:00 stderr F I0813 20:09:12.182066 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" virtual=false 2025-08-13T20:09:12.182449348+00:00 stderr F I0813 20:09:12.182425 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, 
namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" virtual=false 2025-08-13T20:09:12.192205018+00:00 stderr F I0813 20:09:12.192096 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.192205018+00:00 stderr F I0813 20:09:12.192160 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" virtual=false 2025-08-13T20:09:12.192485266+00:00 stderr F I0813 20:09:12.192441 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.192499226+00:00 stderr F I0813 20:09:12.192484 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" virtual=false 2025-08-13T20:09:12.192868117+00:00 stderr F I0813 20:09:12.192841 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.193029261+00:00 stderr F I0813 20:09:12.192970 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" virtual=false 2025-08-13T20:09:12.193188676+00:00 stderr F I0813 20:09:12.192860 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.193203056+00:00 stderr F I0813 20:09:12.193192 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" virtual=false 2025-08-13T20:09:12.193548976+00:00 stderr F I0813 20:09:12.193526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.193615328+00:00 stderr F I0813 20:09:12.193599 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" virtual=false 2025-08-13T20:09:12.193941588+00:00 stderr F I0813 20:09:12.193854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.193963218+00:00 stderr F I0813 20:09:12.193951 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" virtual=false 2025-08-13T20:09:12.194042570+00:00 stderr F I0813 20:09:12.194024 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.194093642+00:00 stderr F I0813 20:09:12.194079 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" virtual=false 2025-08-13T20:09:12.194156014+00:00 stderr F I0813 20:09:12.194104 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.194156014+00:00 stderr F I0813 20:09:12.194126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.194169124+00:00 stderr F I0813 20:09:12.194156 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" virtual=false 2025-08-13T20:09:12.194290668+00:00 stderr F I0813 20:09:12.194243 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" virtual=false 2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194252 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194378 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" virtual=false 2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194489 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194509 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" virtual=false 2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.195007 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.195045 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" virtual=false 2025-08-13T20:09:12.196044708+00:00 stderr F I0813 20:09:12.196019 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.196108950+00:00 stderr F I0813 20:09:12.196094 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" virtual=false 2025-08-13T20:09:12.208664920+00:00 stderr F I0813 20:09:12.208603 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.210105881+00:00 stderr F I0813 20:09:12.208749 1 garbagecollector.go:549] "Processing item" 
item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" virtual=false 2025-08-13T20:09:12.210307237+00:00 stderr F I0813 20:09:12.210283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.210369729+00:00 stderr F I0813 20:09:12.210355 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" virtual=false 2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216074 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216092 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216132 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" virtual=false 2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216171 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" virtual=false 2025-08-13T20:09:12.216833614+00:00 stderr F I0813 20:09:12.216740 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.216946657+00:00 stderr F I0813 20:09:12.216893 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" virtual=false 2025-08-13T20:09:12.217211575+00:00 stderr F I0813 20:09:12.217190 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 
056bc944-7929-41ba-9874-afcf52028178]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.217284797+00:00 stderr F I0813 20:09:12.217230 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.217297887+00:00 stderr F I0813 20:09:12.217282 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" virtual=false 2025-08-13T20:09:12.217334648+00:00 stderr F I0813 20:09:12.217320 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" virtual=false 2025-08-13T20:09:12.217563315+00:00 stderr F I0813 20:09:12.217197 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.217633817+00:00 stderr F I0813 20:09:12.217617 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" virtual=false 2025-08-13T20:09:12.217772131+00:00 stderr F I0813 20:09:12.217549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.218021748+00:00 stderr F I0813 20:09:12.217975 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.218057489+00:00 stderr F I0813 20:09:12.218019 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" virtual=false 2025-08-13T20:09:12.218309466+00:00 stderr F I0813 20:09:12.218252 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" virtual=false 2025-08-13T20:09:12.218436530+00:00 stderr F I0813 20:09:12.217761 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.218487231+00:00 stderr F I0813 20:09:12.218473 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" virtual=false 2025-08-13T20:09:12.220758446+00:00 stderr F I0813 20:09:12.219119 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.220758446+00:00 stderr F I0813 20:09:12.219254 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" virtual=false 2025-08-13T20:09:12.223625759+00:00 stderr F I0813 20:09:12.223593 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.223704391+00:00 stderr F I0813 20:09:12.223690 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" virtual=false 2025-08-13T20:09:12.223897986+00:00 stderr F I0813 20:09:12.223767 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.223897986+00:00 stderr F I0813 20:09:12.223890 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" virtual=false 2025-08-13T20:09:12.224086352+00:00 stderr F I0813 20:09:12.223641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.224147064+00:00 stderr F I0813 20:09:12.224131 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" virtual=false 2025-08-13T20:09:12.224251647+00:00 stderr F 
I0813 20:09:12.223716 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.224328929+00:00 stderr F I0813 20:09:12.224286 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" virtual=false 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.225486 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.225542 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" virtual=false 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.226890 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.226954 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" virtual=false 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.227110 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.227128 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" virtual=false 2025-08-13T20:09:12.228750075+00:00 stderr F I0813 20:09:12.215968 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.228766906+00:00 stderr F I0813 20:09:12.228754 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" virtual=false 2025-08-13T20:09:12.229110386+00:00 stderr F I0813 20:09:12.229047 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.229110386+00:00 stderr F I0813 20:09:12.229090 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" virtual=false 2025-08-13T20:09:12.240537414+00:00 stderr F I0813 20:09:12.240469 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.240687398+00:00 stderr F I0813 20:09:12.240667 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" virtual=false 2025-08-13T20:09:12.241118140+00:00 stderr F I0813 20:09:12.241091 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.241193892+00:00 stderr F I0813 20:09:12.241176 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" virtual=false 2025-08-13T20:09:12.241581703+00:00 stderr F I0813 20:09:12.241555 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.241669886+00:00 stderr F I0813 20:09:12.241651 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" virtual=false 2025-08-13T20:09:12.242099178+00:00 stderr F I0813 20:09:12.242070 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.242211792+00:00 stderr F I0813 20:09:12.242191 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" virtual=false 2025-08-13T20:09:12.242457019+00:00 stderr F I0813 20:09:12.240588 1 garbagecollector.go:615] "item has at least one existing 
owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.242753617+00:00 stderr F I0813 20:09:12.242730 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" virtual=false 2025-08-13T20:09:12.248399919+00:00 stderr F I0813 20:09:12.248316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.248445300+00:00 stderr F I0813 20:09:12.248396 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" virtual=false 2025-08-13T20:09:12.249083479+00:00 stderr F I0813 20:09:12.249057 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.249158521+00:00 stderr F I0813 20:09:12.249141 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" virtual=false 2025-08-13T20:09:12.249591473+00:00 stderr F I0813 20:09:12.249464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.249591473+00:00 stderr F I0813 20:09:12.249531 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" virtual=false 2025-08-13T20:09:12.249687706+00:00 stderr F I0813 20:09:12.249665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.249750918+00:00 stderr F I0813 20:09:12.249734 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" virtual=false 2025-08-13T20:09:12.250150769+00:00 stderr F I0813 20:09:12.250125 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250220171+00:00 stderr F I0813 20:09:12.250203 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" virtual=false 2025-08-13T20:09:12.250331984+00:00 stderr F I0813 20:09:12.250219 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250386216+00:00 stderr F I0813 20:09:12.250371 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" virtual=false 2025-08-13T20:09:12.250673734+00:00 stderr F I0813 20:09:12.250614 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250673734+00:00 stderr F I0813 20:09:12.250666 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" virtual=false 2025-08-13T20:09:12.250761387+00:00 stderr F I0813 20:09:12.250741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250876190+00:00 stderr F I0813 20:09:12.250854 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" virtual=false 2025-08-13T20:09:12.251012814+00:00 stderr F I0813 20:09:12.250955 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.251098046+00:00 stderr F I0813 20:09:12.250874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:12.251133857+00:00 stderr F I0813 20:09:12.250766 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-08-13T20:09:12.251257711+00:00 stderr F I0813 20:09:12.251203 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" virtual=false 2025-08-13T20:09:12.251440246+00:00 stderr F I0813 20:09:12.251394 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" virtual=false 2025-08-13T20:09:12.251581170+00:00 stderr F I0813 20:09:12.251502 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" virtual=false 2025-08-13T20:09:12.251702794+00:00 stderr F I0813 20:09:12.251649 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.251718564+00:00 stderr F I0813 20:09:12.251700 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" virtual=false 2025-08-13T20:09:12.252037803+00:00 stderr F I0813 20:09:12.251978 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.252037803+00:00 stderr F I0813 20:09:12.252008 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" virtual=false 2025-08-13T20:09:12.266836177+00:00 stderr F I0813 20:09:12.266683 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.266836177+00:00 stderr F I0813 20:09:12.266752 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" virtual=false 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.267722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 
7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.267777 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" virtual=false 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.269128 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.269153 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" virtual=false 2025-08-13T20:09:12.275197617+00:00 stderr F I0813 20:09:12.275109 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.275197617+00:00 stderr F I0813 20:09:12.275171 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" virtual=false 2025-08-13T20:09:12.275375472+00:00 stderr F I0813 20:09:12.275237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.275375472+00:00 stderr F I0813 20:09:12.275290 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" virtual=false 2025-08-13T20:09:12.275645460+00:00 stderr F I0813 20:09:12.275588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.275725782+00:00 stderr F I0813 20:09:12.275686 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" virtual=false 2025-08-13T20:09:12.276086313+00:00 stderr F I0813 20:09:12.276002 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 
2f6bb711-85a4-408c-913a-54f006dcf2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.276086313+00:00 stderr F I0813 20:09:12.276046 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" virtual=false 2025-08-13T20:09:12.276288369+00:00 stderr F I0813 20:09:12.276222 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.276288369+00:00 stderr F I0813 20:09:12.276276 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" virtual=false 2025-08-13T20:09:12.284424682+00:00 stderr F I0813 20:09:12.284287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.284424682+00:00 stderr F I0813 20:09:12.284361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" virtual=false 2025-08-13T20:09:12.284479383+00:00 stderr F I0813 20:09:12.284447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284479383+00:00 stderr F I0813 20:09:12.284471 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" virtual=false 2025-08-13T20:09:12.284707390+00:00 stderr F I0813 20:09:12.284607 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284707390+00:00 stderr F I0813 20:09:12.284662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" virtual=false 2025-08-13T20:09:12.284960887+00:00 stderr F I0813 20:09:12.284851 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 
080e1aaf-7269-495b-ab74-593efe4192ec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284960887+00:00 stderr F I0813 20:09:12.284925 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" virtual=false 2025-08-13T20:09:12.285163303+00:00 stderr F I0813 20:09:12.285098 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285163303+00:00 stderr F I0813 20:09:12.285148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" virtual=false 2025-08-13T20:09:12.285247625+00:00 stderr F I0813 20:09:12.284660 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285326448+00:00 stderr F I0813 20:09:12.285252 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" virtual=false 2025-08-13T20:09:12.285343518+00:00 stderr F I0813 20:09:12.285329 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285381289+00:00 stderr F I0813 20:09:12.285350 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" virtual=false 2025-08-13T20:09:12.285605626+00:00 stderr F I0813 20:09:12.285545 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285605626+00:00 stderr F I0813 20:09:12.285594 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" virtual=false 2025-08-13T20:09:12.285620386+00:00 stderr F I0813 20:09:12.285602 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: 
f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285629906+00:00 stderr F I0813 20:09:12.285622 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" virtual=false 2025-08-13T20:09:12.286560013+00:00 stderr F I0813 20:09:12.286478 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.286560013+00:00 stderr F I0813 20:09:12.286527 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" virtual=false 2025-08-13T20:09:12.286871272+00:00 stderr F I0813 20:09:12.286741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.287074038+00:00 stderr F I0813 20:09:12.286935 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" virtual=false 2025-08-13T20:09:12.287955653+00:00 stderr F I0813 20:09:12.287194 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.287955653+00:00 stderr F I0813 20:09:12.287244 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" virtual=false 2025-08-13T20:09:12.293719958+00:00 stderr F I0813 20:09:12.293056 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.293719958+00:00 stderr F I0813 20:09:12.293155 1 garbagecollector.go:549] "Processing item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" virtual=false 2025-08-13T20:09:12.301522992+00:00 stderr F I0813 20:09:12.297237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, 
name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.301522992+00:00 stderr F I0813 20:09:12.297307 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" virtual=false 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.310588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.310661 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" virtual=false 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311587 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311615 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" virtual=false 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311965 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311994 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" virtual=false 2025-08-13T20:09:12.312497737+00:00 stderr F I0813 20:09:12.312423 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.312497737+00:00 stderr F I0813 20:09:12.312482 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" virtual=false 2025-08-13T20:09:12.312830996+00:00 stderr F I0813 20:09:12.312725 1 garbagecollector.go:615] "item has at 
least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.312961070+00:00 stderr F I0813 20:09:12.312835 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" virtual=false 2025-08-13T20:09:12.313132905+00:00 stderr F I0813 20:09:12.313098 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.313132905+00:00 stderr F I0813 20:09:12.313125 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" virtual=false 2025-08-13T20:09:12.313529186+00:00 stderr F I0813 20:09:12.313444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.313680291+00:00 stderr F I0813 20:09:12.313662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" virtual=false 2025-08-13T20:09:12.313748242+00:00 stderr F I0813 20:09:12.313512 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.313867126+00:00 stderr F I0813 20:09:12.313760 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" virtual=false 2025-08-13T20:09:12.314473653+00:00 stderr F I0813 20:09:12.313558 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" 2025-08-13T20:09:12.314575056+00:00 stderr F I0813 20:09:12.314488 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" virtual=false 2025-08-13T20:09:12.320078794+00:00 stderr F I0813 20:09:12.320053 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.320142526+00:00 stderr F I0813 20:09:12.320128 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" virtual=false 2025-08-13T20:09:12.331265975+00:00 stderr F I0813 20:09:12.331199 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.331377248+00:00 stderr F I0813 20:09:12.331361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" virtual=false 2025-08-13T20:09:12.332702606+00:00 stderr F I0813 20:09:12.332677 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.332766778+00:00 stderr F I0813 20:09:12.332752 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" virtual=false 2025-08-13T20:09:12.342028193+00:00 stderr F I0813 20:09:12.341973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.342183748+00:00 stderr F I0813 20:09:12.342168 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" virtual=false 2025-08-13T20:09:12.349950290+00:00 stderr F I0813 20:09:12.349734 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.349950290+00:00 stderr F I0813 20:09:12.349851 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" virtual=false 2025-08-13T20:09:12.350963259+00:00 stderr F I0813 
20:09:12.350931 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.351649619+00:00 stderr F I0813 20:09:12.351627 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" virtual=false 2025-08-13T20:09:12.351864935+00:00 stderr F I0813 20:09:12.351084 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.351967908+00:00 stderr F I0813 20:09:12.351945 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" virtual=false 2025-08-13T20:09:12.352274527+00:00 stderr F I0813 20:09:12.351370 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.352330339+00:00 stderr F I0813 20:09:12.352316 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" virtual=false 2025-08-13T20:09:12.352467823+00:00 stderr F I0813 20:09:12.351407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.352522534+00:00 stderr F I0813 20:09:12.352506 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" virtual=false 2025-08-13T20:09:12.356563010+00:00 stderr F I0813 20:09:12.356532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.356750355+00:00 stderr F I0813 20:09:12.356732 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" virtual=false 2025-08-13T20:09:12.357100215+00:00 stderr F I0813 
20:09:12.357078 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.357158247+00:00 stderr F I0813 20:09:12.357144 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" virtual=false 2025-08-13T20:09:12.364557999+00:00 stderr F I0813 20:09:12.364447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.364761565+00:00 stderr F I0813 20:09:12.364698 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" virtual=false 2025-08-13T20:09:12.372323552+00:00 stderr F I0813 20:09:12.372201 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.372548008+00:00 stderr F I0813 20:09:12.372528 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" virtual=false 2025-08-13T20:09:12.374115013+00:00 stderr F I0813 20:09:12.374089 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.374261918+00:00 stderr F I0813 20:09:12.374244 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" virtual=false 2025-08-13T20:09:12.375837833+00:00 stderr F I0813 20:09:12.375616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.375837833+00:00 stderr F I0813 20:09:12.375706 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 
5402260d-a88f-4905-b91a-c0ec390d8675]" virtual=false 2025-08-13T20:09:12.376448560+00:00 stderr F I0813 20:09:12.376404 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.376448560+00:00 stderr F I0813 20:09:12.376439 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" virtual=false 2025-08-13T20:09:12.378645203+00:00 stderr F I0813 20:09:12.378560 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.378838279+00:00 stderr F I0813 20:09:12.378746 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" virtual=false 2025-08-13T20:09:12.381070403+00:00 stderr F I0813 20:09:12.381048 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.381133485+00:00 stderr F I0813 20:09:12.381119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" virtual=false 2025-08-13T20:09:12.396499335+00:00 stderr F I0813 20:09:12.396407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.396701841+00:00 stderr F I0813 20:09:12.396623 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" virtual=false 2025-08-13T20:09:12.397560596+00:00 stderr F I0813 20:09:12.397538 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.397691069+00:00 stderr F I0813 20:09:12.397673 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" virtual=false 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.403536 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.403659 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" virtual=false 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404050 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404076 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" virtual=false 2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404267 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.404432743+00:00 stderr F I0813 20:09:12.404291 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" virtual=false 2025-08-13T20:09:12.406146462+00:00 stderr F I0813 20:09:12.406090 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.406170052+00:00 stderr F I0813 20:09:12.406140 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" virtual=false 2025-08-13T20:09:12.406757639+00:00 stderr F I0813 20:09:12.406726 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.406927144+00:00 stderr F 
I0813 20:09:12.406882 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" virtual=false 2025-08-13T20:09:12.408725406+00:00 stderr F I0813 20:09:12.408614 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.410531157+00:00 stderr F I0813 20:09:12.410405 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" virtual=false 2025-08-13T20:09:12.412443682+00:00 stderr F I0813 20:09:12.412409 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.412522745+00:00 stderr F I0813 20:09:12.412508 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" virtual=false 2025-08-13T20:09:12.416593851+00:00 stderr F I0813 20:09:12.416547 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.416688384+00:00 stderr F I0813 20:09:12.416673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" virtual=false 2025-08-13T20:09:12.417007953+00:00 stderr F I0813 20:09:12.416983 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.417067905+00:00 stderr F I0813 20:09:12.417054 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" virtual=false 2025-08-13T20:09:12.417556739+00:00 stderr F I0813 20:09:12.417534 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.417611870+00:00 stderr F I0813 20:09:12.417598 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" virtual=false 2025-08-13T20:09:12.419447023+00:00 stderr F I0813 20:09:12.418165 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.419447023+00:00 stderr F I0813 20:09:12.418219 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" virtual=false 2025-08-13T20:09:12.422225643+00:00 stderr F I0813 20:09:12.422075 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.422225643+00:00 stderr F I0813 20:09:12.422215 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" virtual=false 2025-08-13T20:09:12.423494039+00:00 stderr F I0813 20:09:12.423389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.423494039+00:00 stderr F I0813 20:09:12.423460 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" virtual=false 2025-08-13T20:09:12.428961936+00:00 stderr F I0813 20:09:12.428863 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.429135791+00:00 stderr F I0813 20:09:12.429078 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" virtual=false 2025-08-13T20:09:12.435999688+00:00 stderr F I0813 20:09:12.435876 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.436231434+00:00 stderr F I0813 20:09:12.436211 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" virtual=false 2025-08-13T20:09:12.438677634+00:00 stderr F I0813 20:09:12.438616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.438760677+00:00 stderr F I0813 20:09:12.438742 1 garbagecollector.go:549] "Processing item" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" virtual=false 2025-08-13T20:09:12.441166536+00:00 stderr F I0813 20:09:12.441086 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.441266709+00:00 stderr F I0813 20:09:12.441249 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" virtual=false 2025-08-13T20:09:12.450292927+00:00 stderr F I0813 20:09:12.450201 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.451266095+00:00 stderr F I0813 20:09:12.451242 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.451402359+00:00 stderr F I0813 20:09:12.451385 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" virtual=false 2025-08-13T20:09:12.452016007+00:00 stderr F I0813 20:09:12.451992 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" virtual=false 
2025-08-13T20:09:12.454326373+00:00 stderr F I0813 20:09:12.454300 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.454392435+00:00 stderr F I0813 20:09:12.454377 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" virtual=false 2025-08-13T20:09:12.454642582+00:00 stderr F I0813 20:09:12.454594 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.454700504+00:00 stderr F I0813 20:09:12.454686 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" virtual=false 2025-08-13T20:09:12.455150337+00:00 stderr F I0813 20:09:12.454987 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.455653451+00:00 stderr F I0813 20:09:12.455633 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" virtual=false 2025-08-13T20:09:12.455999951+00:00 stderr F I0813 20:09:12.455969 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.456096404+00:00 stderr F I0813 20:09:12.456078 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" virtual=false 2025-08-13T20:09:12.457175995+00:00 stderr F I0813 20:09:12.457152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.457244887+00:00 stderr F I0813 20:09:12.457226 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" 
virtual=false 2025-08-13T20:09:12.477003863+00:00 stderr F I0813 20:09:12.476842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.477003863+00:00 stderr F I0813 20:09:12.476946 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" virtual=false 2025-08-13T20:09:12.477135457+00:00 stderr F I0813 20:09:12.477107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.477344733+00:00 stderr F I0813 20:09:12.477269 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" 2025-08-13T20:09:12.477344733+00:00 stderr F I0813 20:09:12.477322 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" virtual=false 2025-08-13T20:09:12.477401165+00:00 stderr F I0813 20:09:12.477384 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" virtual=false 2025-08-13T20:09:12.477833677+00:00 stderr F I0813 20:09:12.477752 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" 2025-08-13T20:09:12.477854728+00:00 stderr F I0813 20:09:12.477844 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" virtual=false 2025-08-13T20:09:12.478324681+00:00 stderr F I0813 20:09:12.478265 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.478461955+00:00 stderr F I0813 20:09:12.478443 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" virtual=false 2025-08-13T20:09:12.479538086+00:00 stderr F I0813 20:09:12.479505 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: 
console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.479538086+00:00 stderr F I0813 20:09:12.479532 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" virtual=false 2025-08-13T20:09:12.493480686+00:00 stderr F I0813 20:09:12.493361 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.493740073+00:00 stderr F I0813 20:09:12.493643 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" virtual=false 2025-08-13T20:09:12.498699055+00:00 stderr F I0813 20:09:12.498093 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.498699055+00:00 stderr F I0813 20:09:12.498165 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" virtual=false 2025-08-13T20:09:12.501325531+00:00 stderr F I0813 20:09:12.501260 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.501400523+00:00 stderr F I0813 20:09:12.501324 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" virtual=false 2025-08-13T20:09:12.508439585+00:00 stderr F I0813 20:09:12.508356 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.508491956+00:00 stderr F I0813 20:09:12.508443 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" virtual=false 2025-08-13T20:09:12.508763754+00:00 stderr F I0813 20:09:12.508721 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.508817975+00:00 stderr F I0813 20:09:12.508768 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" virtual=false 2025-08-13T20:09:12.522491778+00:00 stderr F I0813 20:09:12.522362 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.522491778+00:00 stderr F I0813 20:09:12.522440 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" virtual=false 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.526882 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.526958 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" virtual=false 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.528365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.528393 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" virtual=false 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.529008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.529033 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" virtual=false 2025-08-13T20:09:12.550622954+00:00 stderr F I0813 20:09:12.550550 1 garbagecollector.go:615] "item has at 
least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.550622954+00:00 stderr F I0813 20:09:12.550598 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" virtual=false 2025-08-13T20:09:12.551651474+00:00 stderr F I0813 20:09:12.551566 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.551942492+00:00 stderr F I0813 20:09:12.551755 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" virtual=false 2025-08-13T20:09:12.552365464+00:00 stderr F I0813 20:09:12.552307 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.552365464+00:00 stderr F I0813 20:09:12.552355 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" virtual=false 2025-08-13T20:09:12.553244169+00:00 stderr F I0813 20:09:12.553152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.553244169+00:00 stderr F I0813 20:09:12.553197 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" virtual=false 2025-08-13T20:09:12.553442745+00:00 stderr F I0813 20:09:12.553420 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.553509177+00:00 stderr F I0813 20:09:12.553491 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: 
c38274d9-e6e8-4d0b-b766-d07f00c60697]" virtual=false 2025-08-13T20:09:12.553583769+00:00 stderr F I0813 20:09:12.553532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.558109069+00:00 stderr F I0813 20:09:12.557092 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.558109069+00:00 stderr F I0813 20:09:12.557146 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" virtual=false 2025-08-13T20:09:12.559074326+00:00 stderr F I0813 20:09:12.553580 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" virtual=false 2025-08-13T20:09:12.559685664+00:00 stderr F I0813 20:09:12.559656 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.559850299+00:00 stderr F I0813 20:09:12.559764 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" virtual=false 2025-08-13T20:09:12.575375254+00:00 stderr F I0813 20:09:12.575307 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.575933170+00:00 stderr F I0813 20:09:12.575871 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" virtual=false 2025-08-13T20:09:12.576147906+00:00 stderr F I0813 20:09:12.575346 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.576231668+00:00 stderr F I0813 20:09:12.576213 1 garbagecollector.go:549] 
"Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" virtual=false 2025-08-13T20:09:12.576455695+00:00 stderr F I0813 20:09:12.575368 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.576455695+00:00 stderr F I0813 20:09:12.576446 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" virtual=false 2025-08-13T20:09:12.577232417+00:00 stderr F I0813 20:09:12.575465 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.577383341+00:00 stderr F I0813 20:09:12.577361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" virtual=false 2025-08-13T20:09:12.577482354+00:00 stderr F I0813 20:09:12.575524 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.577520175+00:00 stderr F I0813 20:09:12.575563 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.577753982+00:00 stderr F I0813 20:09:12.577675 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" virtual=false 2025-08-13T20:09:12.578699689+00:00 stderr F I0813 20:09:12.577633 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" virtual=false 2025-08-13T20:09:12.582263241+00:00 stderr F I0813 20:09:12.581183 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.582263241+00:00 stderr F I0813 20:09:12.581249 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" virtual=false 2025-08-13T20:09:12.588976464+00:00 stderr F I0813 20:09:12.588896 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.589136238+00:00 stderr F I0813 20:09:12.589075 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" virtual=false 2025-08-13T20:09:12.599089704+00:00 stderr F I0813 20:09:12.598960 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.599089704+00:00 stderr F I0813 20:09:12.599058 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" virtual=false 2025-08-13T20:09:12.605089976+00:00 stderr F I0813 20:09:12.605023 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:12.605274441+00:00 stderr F I0813 20:09:12.605249 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" virtual=false 2025-08-13T20:09:12.608372920+00:00 stderr F I0813 20:09:12.605634 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.608757351+00:00 stderr F I0813 20:09:12.608729 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:12.608943616+00:00 stderr F I0813 20:09:12.608858 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" virtual=false 2025-08-13T20:09:12.609003888+00:00 stderr F I0813 20:09:12.608983 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" virtual=false 2025-08-13T20:09:12.609266195+00:00 stderr F I0813 20:09:12.608610 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:12.609369928+00:00 stderr F I0813 20:09:12.609314 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" virtual=false 2025-08-13T20:09:12.609510002+00:00 stderr F I0813 20:09:12.608682 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.609510002+00:00 stderr F I0813 20:09:12.609497 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" virtual=false 2025-08-13T20:09:12.609528213+00:00 stderr F I0813 20:09:12.609510 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.609539903+00:00 stderr F I0813 20:09:12.609532 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" virtual=false 2025-08-13T20:09:12.609635346+00:00 stderr F I0813 20:09:12.609081 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.609703858+00:00 stderr F I0813 20:09:12.609650 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" virtual=false 2025-08-13T20:09:12.614504206+00:00 stderr F I0813 20:09:12.614402 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.614616709+00:00 stderr F I0813 20:09:12.614597 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" virtual=false 2025-08-13T20:09:12.619052696+00:00 stderr F I0813 20:09:12.617203 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.619052696+00:00 stderr F I0813 20:09:12.617280 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" virtual=false 2025-08-13T20:09:12.636666471+00:00 stderr F I0813 20:09:12.636562 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.636666471+00:00 stderr F I0813 20:09:12.636637 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" virtual=false 2025-08-13T20:09:12.652250608+00:00 stderr F I0813 20:09:12.645874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.652250608+00:00 stderr F I0813 20:09:12.645968 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" virtual=false 2025-08-13T20:09:12.667004601+00:00 stderr F I0813 20:09:12.660727 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.667004601+00:00 stderr F I0813 20:09:12.661014 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" virtual=false 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.670689 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.670818 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" virtual=false 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671445 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" virtual=false 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671686 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671706 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" virtual=false 2025-08-13T20:09:12.752097651+00:00 stderr F I0813 20:09:12.749502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.752097651+00:00 stderr F I0813 20:09:12.749572 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" virtual=false 2025-08-13T20:09:12.766936966+00:00 stderr F I0813 20:09:12.764520 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.766936966+00:00 stderr F I0813 20:09:12.764614 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" virtual=false 
2025-08-13T20:09:12.769938302+00:00 stderr F I0813 20:09:12.769769 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.770103907+00:00 stderr F I0813 20:09:12.769978 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" virtual=false 2025-08-13T20:09:12.770444817+00:00 stderr F I0813 20:09:12.770423 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.770582391+00:00 stderr F I0813 20:09:12.770564 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" virtual=false 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781338 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781442 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" virtual=false 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781986 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.782077 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" virtual=false 2025-08-13T20:09:12.796726050+00:00 stderr F I0813 20:09:12.795367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.796726050+00:00 stderr F I0813 20:09:12.795585 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: 
marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" virtual=false 2025-08-13T20:09:12.807507729+00:00 stderr F I0813 20:09:12.807422 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.809240 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" virtual=false 2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.810134 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.810160 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" virtual=false 2025-08-13T20:09:12.832015512+00:00 stderr F I0813 20:09:12.829524 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.832015512+00:00 stderr F I0813 20:09:12.829567 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" virtual=false 2025-08-13T20:09:12.865013238+00:00 stderr F I0813 20:09:12.864864 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.865013238+00:00 stderr F I0813 20:09:12.864977 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" virtual=false 2025-08-13T20:09:12.873874792+00:00 stderr F I0813 20:09:12.871746 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.873874792+00:00 stderr F I0813 20:09:12.871993 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: 
openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" virtual=false 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.890509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.890661 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" virtual=false 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.892010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.892114 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" virtual=false 2025-08-13T20:09:12.895847312+00:00 stderr F I0813 20:09:12.893068 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.895847312+00:00 stderr F I0813 20:09:12.893184 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" virtual=false 2025-08-13T20:09:12.898930050+00:00 stderr F I0813 20:09:12.896708 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.898930050+00:00 stderr F I0813 20:09:12.896988 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" virtual=false 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906484 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 
20:09:12.906555 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" virtual=false 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906652 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906750 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" virtual=false 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.907040 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.907070 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" virtual=false 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.935885 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936095 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" virtual=false 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936108 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936135 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" virtual=false 2025-08-13T20:09:12.953929447+00:00 stderr F I0813 20:09:12.953257 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.953929447+00:00 stderr F I0813 
20:09:12.953468 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" virtual=false 2025-08-13T20:09:12.965881760+00:00 stderr F I0813 20:09:12.965646 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.965881760+00:00 stderr F I0813 20:09:12.965692 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" virtual=false 2025-08-13T20:09:12.983866316+00:00 stderr F I0813 20:09:12.983705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.983866316+00:00 stderr F I0813 20:09:12.983775 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" virtual=false 2025-08-13T20:09:12.995885660+00:00 stderr F I0813 20:09:12.995107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.995885660+00:00 stderr F I0813 20:09:12.995169 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" virtual=false 2025-08-13T20:09:12.998082183+00:00 stderr F I0813 20:09:12.996410 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.998082183+00:00 stderr F I0813 20:09:12.996455 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" virtual=false 2025-08-13T20:09:12.998882086+00:00 stderr F I0813 20:09:12.998833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.998934438+00:00 stderr F I0813 20:09:12.998876 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" virtual=false 2025-08-13T20:09:13.002491110+00:00 stderr F I0813 20:09:13.002453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.002587312+00:00 stderr F I0813 20:09:13.002566 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" virtual=false 2025-08-13T20:09:13.012008912+00:00 stderr F I0813 20:09:13.011947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.012109495+00:00 stderr F I0813 20:09:13.012094 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" virtual=false 2025-08-13T20:09:13.012169307+00:00 stderr F I0813 20:09:13.011976 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.012261210+00:00 stderr F I0813 20:09:13.012221 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" virtual=false 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.014925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.014968 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" virtual=false 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.015272 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: 
machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.015311 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" virtual=false 2025-08-13T20:09:13.031552413+00:00 stderr F I0813 20:09:13.031318 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.031552413+00:00 stderr F I0813 20:09:13.031407 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" virtual=false 2025-08-13T20:09:13.043345101+00:00 stderr F I0813 20:09:13.042453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.043345101+00:00 stderr F I0813 20:09:13.042538 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" virtual=false 2025-08-13T20:09:13.046583434+00:00 stderr F I0813 20:09:13.046029 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.046583434+00:00 stderr F I0813 20:09:13.046094 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" virtual=false 2025-08-13T20:09:13.060381259+00:00 stderr F I0813 20:09:13.060293 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.060588545+00:00 stderr F I0813 20:09:13.060566 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" virtual=false 2025-08-13T20:09:13.066276918+00:00 stderr F I0813 20:09:13.066136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.066276918+00:00 stderr F I0813 20:09:13.066241 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" virtual=false 2025-08-13T20:09:13.089862925+00:00 stderr F I0813 20:09:13.089744 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.090052650+00:00 stderr F I0813 20:09:13.090028 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" virtual=false 2025-08-13T20:09:13.090622636+00:00 stderr F I0813 20:09:13.090564 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.090684488+00:00 stderr F I0813 20:09:13.090668 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" virtual=false 2025-08-13T20:09:13.091029158+00:00 stderr F I0813 20:09:13.091001 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.091160512+00:00 stderr F I0813 20:09:13.091143 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" virtual=false 2025-08-13T20:09:13.095020513+00:00 stderr F I0813 20:09:13.094981 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.095108125+00:00 stderr F I0813 20:09:13.095090 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" virtual=false 2025-08-13T20:09:13.095408744+00:00 stderr F I0813 
20:09:13.095384 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.095636160+00:00 stderr F I0813 20:09:13.095549 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" virtual=false 2025-08-13T20:09:13.100217561+00:00 stderr F I0813 20:09:13.100180 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.100395677+00:00 stderr F I0813 20:09:13.100329 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" virtual=false 2025-08-13T20:09:13.137387087+00:00 stderr F I0813 20:09:13.137293 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.137387087+00:00 stderr F I0813 20:09:13.137372 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" virtual=false 2025-08-13T20:09:13.137681486+00:00 stderr F I0813 20:09:13.137622 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.137681486+00:00 stderr F I0813 20:09:13.137669 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" virtual=false 2025-08-13T20:09:13.146855939+00:00 stderr F I0813 20:09:13.146467 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.146855939+00:00 stderr F I0813 20:09:13.146753 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" virtual=false 2025-08-13T20:09:13.165397000+00:00 stderr F I0813 20:09:13.163994 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.178827755+00:00 stderr F I0813 20:09:13.167036 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" virtual=false 2025-08-13T20:09:13.179596237+00:00 stderr F I0813 20:09:13.179360 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.179881066+00:00 stderr F I0813 20:09:13.179857 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" virtual=false 2025-08-13T20:09:13.188229685+00:00 stderr F I0813 20:09:13.187076 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.188229685+00:00 stderr F I0813 20:09:13.187157 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" virtual=false 2025-08-13T20:09:13.249248164+00:00 stderr F I0813 20:09:13.248861 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.249248164+00:00 stderr F I0813 20:09:13.248947 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" virtual=false 2025-08-13T20:09:13.251207321+00:00 stderr F I0813 20:09:13.251121 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.254506875+00:00 stderr F I0813 20:09:13.254470 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" 
virtual=false 2025-08-13T20:09:13.254625179+00:00 stderr F I0813 20:09:13.251225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255062961+00:00 stderr F I0813 20:09:13.251340 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255243496+00:00 stderr F I0813 20:09:13.251382 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255411801+00:00 stderr F I0813 20:09:13.251464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255592556+00:00 stderr F I0813 20:09:13.254317 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255737890+00:00 stderr F I0813 20:09:13.255167 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" virtual=false 2025-08-13T20:09:13.256109371+00:00 stderr F I0813 20:09:13.255343 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" virtual=false 2025-08-13T20:09:13.256625366+00:00 stderr F I0813 20:09:13.255515 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" virtual=false 2025-08-13T20:09:13.257069259+00:00 stderr F I0813 20:09:13.255697 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" virtual=false 2025-08-13T20:09:13.258028156+00:00 stderr F I0813 20:09:13.255882 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263495 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263563 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263784 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263864 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264093 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264113 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264260 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264277 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264417 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264434 1 
garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264580 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264598 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" virtual=false 2025-08-13T20:09:13.279019018+00:00 stderr F I0813 20:09:13.278873 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.279211213+00:00 stderr F I0813 20:09:13.279142 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: 3c762506-422e-4421-915e-5872f5c48dbd]" virtual=false 2025-08-13T20:09:13.279737128+00:00 stderr F I0813 20:09:13.279714 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.279859402+00:00 stderr F I0813 20:09:13.279839 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" virtual=false 2025-08-13T20:09:13.280452989+00:00 stderr F I0813 20:09:13.280430 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.280717027+00:00 stderr F I0813 20:09:13.280694 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" virtual=false 2025-08-13T20:09:13.292366881+00:00 stderr F I0813 20:09:13.292254 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.292366881+00:00 stderr F I0813 20:09:13.292319 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" virtual=false 2025-08-13T20:09:13.292857535+00:00 stderr F I0813 20:09:13.292771 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.292878845+00:00 stderr F I0813 20:09:13.292857 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" virtual=false 2025-08-13T20:09:13.293312008+00:00 stderr F I0813 20:09:13.293287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.293394050+00:00 stderr F I0813 20:09:13.293354 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" virtual=false 2025-08-13T20:09:13.298474026+00:00 stderr F I0813 20:09:13.293694 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.298474026+00:00 stderr F I0813 20:09:13.293747 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" virtual=false 2025-08-13T20:09:13.319708115+00:00 stderr F I0813 20:09:13.319464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.320034494+00:00 stderr F I0813 20:09:13.320012 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" virtual=false 2025-08-13T20:09:13.320962761+00:00 stderr F I0813 20:09:13.320839 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, 
uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.320962761+00:00 stderr F I0813 20:09:13.320946 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" virtual=false 2025-08-13T20:09:13.321174957+00:00 stderr F I0813 20:09:13.321097 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.321378752+00:00 stderr F I0813 20:09:13.321353 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.333665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.333743 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334185 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334226 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334292 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334559 1 garbagecollector.go:615] "item has at least one existing owner, 
will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334588 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334659 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335000 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335232 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335265 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" virtual=false 2025-08-13T20:09:13.335495837+00:00 stderr F I0813 20:09:13.335439 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.335512978+00:00 stderr F I0813 20:09:13.335495 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" virtual=false 2025-08-13T20:09:13.336057523+00:00 stderr F I0813 
20:09:13.336029 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.336242219+00:00 stderr F I0813 20:09:13.336219 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" virtual=false 2025-08-13T20:09:13.336650830+00:00 stderr F I0813 20:09:13.336624 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.336814855+00:00 stderr F I0813 20:09:13.336737 1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" virtual=false 2025-08-13T20:09:13.337409332+00:00 stderr F I0813 20:09:13.337334 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.338635057+00:00 stderr F I0813 20:09:13.337484 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.366693 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.366768 1 garbagecollector.go:549] "Processing item" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367313 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367335 1 garbagecollector.go:549] "Processing item" 
item="[operator.openshift.io/v1/KubeControllerManager, namespace: , name: cluster, uid: 466fedf7-9ce3-473e-9b71-9bf08b103d4a]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367564 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367583 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" virtual=false 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367678 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" virtual=false 2025-08-13T20:09:13.374689641+00:00 stderr F I0813 20:09:13.374596 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: 3c762506-422e-4421-915e-5872f5c48dbd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.375007130+00:00 stderr F I0813 20:09:13.374863 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" virtual=false 2025-08-13T20:09:13.376608086+00:00 stderr F I0813 20:09:13.376539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.376703549+00:00 stderr F I0813 20:09:13.376686 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" virtual=false 2025-08-13T20:09:13.377532662+00:00 stderr F I0813 20:09:13.377507 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.377627915+00:00 stderr F I0813 20:09:13.377610 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" virtual=false 2025-08-13T20:09:13.378016906+00:00 stderr F I0813 20:09:13.377993 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.378108909+00:00 stderr F I0813 20:09:13.378091 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" virtual=false 2025-08-13T20:09:13.380415715+00:00 stderr F I0813 20:09:13.380389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.380584200+00:00 stderr F I0813 20:09:13.380480 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.380584200+00:00 stderr F I0813 20:09:13.380551 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" virtual=false 2025-08-13T20:09:13.380759865+00:00 stderr F I0813 20:09:13.380502 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" virtual=false 2025-08-13T20:09:13.380884209+00:00 stderr F I0813 20:09:13.380865 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.381071064+00:00 stderr F I0813 20:09:13.381013 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" virtual=false 2025-08-13T20:09:13.406487093+00:00 stderr F I0813 20:09:13.406365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.406487093+00:00 stderr F I0813 20:09:13.406462 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" virtual=false 2025-08-13T20:09:13.407113531+00:00 stderr F I0813 20:09:13.407055 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.407135321+00:00 stderr F I0813 20:09:13.407113 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" virtual=false 2025-08-13T20:09:13.407333367+00:00 stderr F I0813 20:09:13.407269 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.407333367+00:00 stderr F I0813 20:09:13.407306 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operator.openshift.io/v1/KubeControllerManager, namespace: , name: cluster, uid: 466fedf7-9ce3-473e-9b71-9bf08b103d4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.407440120+00:00 stderr F I0813 20:09:13.407343 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" virtual=false 2025-08-13T20:09:13.407496982+00:00 stderr F I0813 20:09:13.407290 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.407560733+00:00 stderr F I0813 20:09:13.407546 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" virtual=false 2025-08-13T20:09:13.407860632+00:00 stderr F I0813 20:09:13.407695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: 
openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.412673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408221 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.412981 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408256 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413217 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408416 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413340 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408470 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413531 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" 
virtual=false 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408544 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413632 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" virtual=false 2025-08-13T20:09:13.413750331+00:00 stderr F I0813 20:09:13.409166 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.413763121+00:00 stderr F I0813 20:09:13.413753 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" virtual=false 2025-08-13T20:09:13.414027469+00:00 stderr F I0813 20:09:13.411010 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" virtual=false 2025-08-13T20:09:13.424537780+00:00 stderr F I0813 20:09:13.424475 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.424715915+00:00 stderr F I0813 20:09:13.424696 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" virtual=false 2025-08-13T20:09:13.425479357+00:00 stderr F I0813 20:09:13.425452 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.425969221+00:00 stderr F I0813 20:09:13.425946 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" virtual=false 2025-08-13T20:09:13.426696432+00:00 stderr F I0813 20:09:13.426624 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 
6c37d9c1-9f73-4181-99c5-8e303e842810]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.427005881+00:00 stderr F I0813 20:09:13.426982 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" virtual=false 2025-08-13T20:09:13.427273909+00:00 stderr F I0813 20:09:13.427250 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.427334890+00:00 stderr F I0813 20:09:13.427318 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" virtual=false 2025-08-13T20:09:13.435208336+00:00 stderr F I0813 20:09:13.435142 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.435349080+00:00 stderr F I0813 20:09:13.435330 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" virtual=false 2025-08-13T20:09:13.435880395+00:00 stderr F I0813 20:09:13.435715 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.436124982+00:00 stderr F I0813 20:09:13.436095 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" virtual=false 2025-08-13T20:09:13.436457652+00:00 stderr F I0813 20:09:13.436435 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.436545814+00:00 stderr F I0813 20:09:13.436531 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" virtual=false 2025-08-13T20:09:13.441559038+00:00 stderr F I0813 20:09:13.441526 1 garbagecollector.go:615] "item has at least one existing owner, will 
not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.441675811+00:00 stderr F I0813 20:09:13.441660 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" virtual=false 2025-08-13T20:09:13.442136135+00:00 stderr F I0813 20:09:13.442085 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.442270429+00:00 stderr F I0813 20:09:13.442250 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: hostnetwork-v2, uid: 456c5d8f-a352-4d46-a237-b5198b2b47bf]" virtual=false 2025-08-13T20:09:13.442769403+00:00 stderr F I0813 20:09:13.442745 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.442956278+00:00 stderr F I0813 20:09:13.442934 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: machine-api-termination-handler, uid: 0d71aa13-f5f7-4516-9777-54ea4b30344d]" virtual=false 2025-08-13T20:09:13.443288498+00:00 stderr F I0813 20:09:13.443186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.443348979+00:00 stderr F I0813 20:09:13.443334 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: nonroot-v2, uid: 414f5c6e-4eb7-4571-b5b7-7dc6cb18bba6]" virtual=false 2025-08-13T20:09:13.443516034+00:00 stderr F I0813 20:09:13.443497 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.443564586+00:00 stderr F I0813 20:09:13.443551 1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: restricted-v2, uid: ccc1f43b-6b7f-41ce-a399-bc1d0b2e9f0d]" 
virtual=false 2025-08-13T20:09:13.443891595+00:00 stderr F I0813 20:09:13.443838 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.444072160+00:00 stderr F I0813 20:09:13.444056 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" virtual=false 2025-08-13T20:09:13.444386469+00:00 stderr F I0813 20:09:13.444355 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.444536564+00:00 stderr F I0813 20:09:13.444491 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" virtual=false 2025-08-13T20:09:13.444979366+00:00 stderr F I0813 20:09:13.444955 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.445090699+00:00 stderr F I0813 20:09:13.445075 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" virtual=false 2025-08-13T20:09:13.445266624+00:00 stderr F I0813 20:09:13.445246 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.445316236+00:00 stderr F I0813 20:09:13.445303 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" virtual=false 2025-08-13T20:09:13.451400100+00:00 stderr F I0813 20:09:13.451347 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.451501153+00:00 stderr F I0813 20:09:13.451485 1 garbagecollector.go:549] "Processing item" 
item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" virtual=false 2025-08-13T20:09:13.451657248+00:00 stderr F I0813 20:09:13.451637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.451708439+00:00 stderr F I0813 20:09:13.451694 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" virtual=false 2025-08-13T20:09:13.452104661+00:00 stderr F I0813 20:09:13.452081 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.452163852+00:00 stderr F I0813 20:09:13.452150 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" virtual=false 2025-08-13T20:09:13.452397689+00:00 stderr F I0813 20:09:13.452374 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.452450420+00:00 stderr F I0813 20:09:13.452437 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" virtual=false 2025-08-13T20:09:13.455024574+00:00 stderr F I0813 20:09:13.454984 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.455118307+00:00 stderr F I0813 20:09:13.455076 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" virtual=false 2025-08-13T20:09:13.455267011+00:00 stderr F I0813 20:09:13.455247 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.455321923+00:00 stderr F I0813 20:09:13.455307 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" virtual=false 2025-08-13T20:09:13.455840988+00:00 stderr F I0813 20:09:13.455766 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.455956611+00:00 stderr F I0813 20:09:13.455934 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" virtual=false 2025-08-13T20:09:13.461531121+00:00 stderr F I0813 20:09:13.461492 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.461651464+00:00 stderr F I0813 20:09:13.461635 1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" virtual=false 2025-08-13T20:09:13.468554532+00:00 stderr F I0813 20:09:13.468521 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.468869991+00:00 stderr F I0813 20:09:13.468849 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" virtual=false 2025-08-13T20:09:13.473698810+00:00 stderr F I0813 20:09:13.473638 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.474087021+00:00 stderr F I0813 20:09:13.474021 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" virtual=false 2025-08-13T20:09:13.474732509+00:00 stderr F I0813 20:09:13.474675 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, 
namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.474993347+00:00 stderr F I0813 20:09:13.474880 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" virtual=false 2025-08-13T20:09:13.484234522+00:00 stderr F I0813 20:09:13.483741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.484234522+00:00 stderr F I0813 20:09:13.483859 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" virtual=false 2025-08-13T20:09:13.499940822+00:00 stderr F I0813 20:09:13.499769 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: hostnetwork-v2, uid: 456c5d8f-a352-4d46-a237-b5198b2b47bf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.500235710+00:00 stderr F I0813 20:09:13.500146 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" virtual=false 2025-08-13T20:09:13.504300597+00:00 stderr F I0813 20:09:13.504168 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: machine-api-termination-handler, uid: 0d71aa13-f5f7-4516-9777-54ea4b30344d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.504300597+00:00 stderr F I0813 20:09:13.504250 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" virtual=false 2025-08-13T20:09:13.512250795+00:00 stderr F I0813 20:09:13.511857 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.512250795+00:00 stderr F I0813 20:09:13.511992 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" virtual=false 2025-08-13T20:09:13.512566694+00:00 stderr F I0813 20:09:13.512531 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: nonroot-v2, uid: 414f5c6e-4eb7-4571-b5b7-7dc6cb18bba6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.512652596+00:00 stderr F I0813 20:09:13.512631 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" virtual=false 2025-08-13T20:09:13.519372129+00:00 stderr F I0813 20:09:13.519325 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: restricted-v2, uid: ccc1f43b-6b7f-41ce-a399-bc1d0b2e9f0d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.519467732+00:00 stderr F I0813 20:09:13.519452 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" virtual=false 2025-08-13T20:09:13.533261697+00:00 stderr F I0813 20:09:13.533170 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.533261697+00:00 stderr F I0813 20:09:13.533239 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" virtual=false 2025-08-13T20:09:13.533665949+00:00 stderr F I0813 20:09:13.533638 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.533735521+00:00 stderr F I0813 20:09:13.533719 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" virtual=false 2025-08-13T20:09:13.536348366+00:00 stderr F I0813 20:09:13.536318 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.536468969+00:00 stderr F I0813 20:09:13.536423 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 
52425be5-e912-46fb-8a48-8e773a98d5c1]" virtual=false 2025-08-13T20:09:13.536743237+00:00 stderr F I0813 20:09:13.536672 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.536759008+00:00 stderr F I0813 20:09:13.536738 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" virtual=false 2025-08-13T20:09:13.567349115+00:00 stderr F I0813 20:09:13.566833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.568242800+00:00 stderr F I0813 20:09:13.568208 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.568333043+00:00 stderr F I0813 20:09:13.568313 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" virtual=false 2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568416 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568479 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" virtual=false 2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568585 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" virtual=false 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.572339 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.572424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, 
namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" virtual=false 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.575071 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.575139 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" virtual=false 2025-08-13T20:09:13.577416103+00:00 stderr F I0813 20:09:13.576873 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.577416103+00:00 stderr F I0813 20:09:13.576958 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" virtual=false 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.586674 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.586762 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" virtual=false 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.587023 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.587114 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" virtual=false 2025-08-13T20:09:13.597370555+00:00 stderr F I0813 20:09:13.597273 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 
2025-08-13T20:09:13.597370555+00:00 stderr F I0813 20:09:13.597352 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" virtual=false 2025-08-13T20:09:13.608836714+00:00 stderr F I0813 20:09:13.608728 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.608896796+00:00 stderr F I0813 20:09:13.608867 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" virtual=false 2025-08-13T20:09:13.611452139+00:00 stderr F I0813 20:09:13.611353 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.611491640+00:00 stderr F I0813 20:09:13.611409 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" virtual=false 2025-08-13T20:09:13.615759543+00:00 stderr F I0813 20:09:13.615705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.615830865+00:00 stderr F I0813 20:09:13.615766 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" virtual=false 2025-08-13T20:09:13.626977234+00:00 stderr F I0813 20:09:13.626765 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.626977234+00:00 stderr F I0813 20:09:13.626949 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" virtual=false 2025-08-13T20:09:13.637646600+00:00 stderr F I0813 20:09:13.637605 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.637739093+00:00 stderr F I0813 20:09:13.637723 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" virtual=false 2025-08-13T20:09:13.646385521+00:00 stderr F I0813 20:09:13.646352 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.646500994+00:00 stderr F I0813 20:09:13.646465 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" virtual=false 2025-08-13T20:09:13.649411438+00:00 stderr F I0813 20:09:13.649386 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.649478569+00:00 stderr F I0813 20:09:13.649464 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" virtual=false 2025-08-13T20:09:13.657377166+00:00 stderr F I0813 20:09:13.657225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.657377166+00:00 stderr F I0813 20:09:13.657299 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" virtual=false 2025-08-13T20:09:13.661150164+00:00 stderr F I0813 20:09:13.659981 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.661150164+00:00 stderr F I0813 20:09:13.660034 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" virtual=false 2025-08-13T20:09:13.666638971+00:00 stderr F I0813 20:09:13.666528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.666638971+00:00 stderr F I0813 20:09:13.666605 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" virtual=false 2025-08-13T20:09:13.676642828+00:00 stderr F I0813 20:09:13.676136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.676642828+00:00 stderr F I0813 20:09:13.676211 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" virtual=false 2025-08-13T20:09:13.689615640+00:00 stderr F I0813 20:09:13.688406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.689615640+00:00 stderr F I0813 20:09:13.688528 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" virtual=false 2025-08-13T20:09:13.690349261+00:00 stderr F I0813 20:09:13.690264 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.690349261+00:00 stderr F I0813 20:09:13.690337 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" virtual=false 2025-08-13T20:09:13.697501996+00:00 stderr F I0813 20:09:13.697274 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.697501996+00:00 stderr F I0813 20:09:13.697347 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" virtual=false 2025-08-13T20:09:13.700591095+00:00 stderr F I0813 20:09:13.700486 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.700591095+00:00 stderr F I0813 20:09:13.700564 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" virtual=false 2025-08-13T20:09:13.708161272+00:00 stderr F I0813 20:09:13.707979 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.708161272+00:00 stderr F I0813 20:09:13.708069 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" virtual=false 2025-08-13T20:09:13.712588149+00:00 stderr F I0813 20:09:13.712498 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.712836536+00:00 stderr F I0813 20:09:13.712813 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" virtual=false 2025-08-13T20:09:13.719177678+00:00 stderr F I0813 20:09:13.719008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.719177678+00:00 stderr F I0813 20:09:13.719076 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" virtual=false 2025-08-13T20:09:13.723093280+00:00 stderr F I0813 20:09:13.722922 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.723093280+00:00 stderr F I0813 20:09:13.722970 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" virtual=false 2025-08-13T20:09:13.737514924+00:00 stderr F I0813 20:09:13.737391 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.737514924+00:00 stderr F I0813 20:09:13.737480 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" virtual=false 2025-08-13T20:09:13.744522544+00:00 stderr F I0813 20:09:13.744425 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" 2025-08-13T20:09:13.744588796+00:00 stderr F I0813 20:09:13.744524 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" virtual=false 2025-08-13T20:09:13.748353244+00:00 stderr F I0813 20:09:13.748283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.748559260+00:00 stderr F I0813 20:09:13.748531 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" virtual=false 2025-08-13T20:09:13.749489707+00:00 stderr F I0813 20:09:13.749461 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" 2025-08-13T20:09:13.749588160+00:00 stderr F I0813 20:09:13.749568 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" virtual=false 2025-08-13T20:09:13.750179797+00:00 stderr F I0813 20:09:13.750150 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.750269479+00:00 stderr F I0813 20:09:13.750240 1 garbagecollector.go:549] 
"Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" virtual=false 2025-08-13T20:09:13.754986244+00:00 stderr F I0813 20:09:13.754877 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" 2025-08-13T20:09:13.755054466+00:00 stderr F I0813 20:09:13.754996 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" virtual=false 2025-08-13T20:09:13.755302883+00:00 stderr F I0813 20:09:13.755184 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.755302883+00:00 stderr F I0813 20:09:13.755231 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" virtual=false 2025-08-13T20:09:13.758261748+00:00 stderr F I0813 20:09:13.758150 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" 2025-08-13T20:09:13.758261748+00:00 stderr F I0813 20:09:13.758198 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" virtual=false 2025-08-13T20:09:13.759644848+00:00 stderr F I0813 20:09:13.759589 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.759644848+00:00 stderr F I0813 20:09:13.759615 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" virtual=false 2025-08-13T20:09:13.764300921+00:00 stderr F I0813 20:09:13.764230 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" 2025-08-13T20:09:13.764445256+00:00 stderr F I0813 20:09:13.764423 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" virtual=false 2025-08-13T20:09:13.765971299+00:00 stderr F I0813 20:09:13.765741 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" 
2025-08-13T20:09:13.766123394+00:00 stderr F I0813 20:09:13.766055 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" virtual=false 2025-08-13T20:09:13.775135382+00:00 stderr F I0813 20:09:13.775063 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.775284306+00:00 stderr F I0813 20:09:13.775260 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" virtual=false 2025-08-13T20:09:13.780334191+00:00 stderr F I0813 20:09:13.780199 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.780334191+00:00 stderr F I0813 20:09:13.780271 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" virtual=false 2025-08-13T20:09:13.781297219+00:00 stderr F I0813 20:09:13.781215 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.781297219+00:00 stderr F I0813 20:09:13.781263 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" virtual=false 2025-08-13T20:09:13.789884805+00:00 stderr F I0813 20:09:13.789723 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.789884805+00:00 stderr F I0813 20:09:13.789863 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" virtual=false 2025-08-13T20:09:13.796895156+00:00 stderr F I0813 20:09:13.796698 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: 
cb821e01-a19c-4b4c-8409-e70b79a0669a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.796895156+00:00 stderr F I0813 20:09:13.796842 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" virtual=false 2025-08-13T20:09:13.804105813+00:00 stderr F I0813 20:09:13.803146 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.804105813+00:00 stderr F I0813 20:09:13.803218 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" virtual=false 2025-08-13T20:09:13.842101862+00:00 stderr F I0813 20:09:13.841989 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.842101862+00:00 stderr F I0813 20:09:13.842053 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" virtual=false 2025-08-13T20:09:13.845007075+00:00 stderr F I0813 20:09:13.844888 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" 2025-08-13T20:09:13.845007075+00:00 stderr F I0813 20:09:13.844962 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" virtual=false 2025-08-13T20:09:13.854466177+00:00 stderr F I0813 20:09:13.854348 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.854466177+00:00 stderr F I0813 20:09:13.854424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" virtual=false 2025-08-13T20:09:13.856594638+00:00 stderr F I0813 20:09:13.856528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.856594638+00:00 stderr F I0813 20:09:13.856557 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" virtual=false 2025-08-13T20:09:13.865252486+00:00 stderr F I0813 20:09:13.865155 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.865297697+00:00 stderr F I0813 20:09:13.865245 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" virtual=false 2025-08-13T20:09:13.872710760+00:00 stderr F I0813 20:09:13.871890 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.872710760+00:00 stderr F I0813 20:09:13.871974 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" virtual=false 2025-08-13T20:09:13.875620983+00:00 stderr F I0813 20:09:13.874997 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.875620983+00:00 stderr F I0813 20:09:13.875119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" virtual=false 2025-08-13T20:09:13.878346631+00:00 stderr F I0813 20:09:13.878266 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.878346631+00:00 stderr F I0813 20:09:13.878325 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" virtual=false 2025-08-13T20:09:13.883757186+00:00 stderr F I0813 20:09:13.883066 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.883757186+00:00 stderr F I0813 20:09:13.883148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" virtual=false 2025-08-13T20:09:13.886694131+00:00 stderr F I0813 20:09:13.886582 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.886730652+00:00 stderr F I0813 20:09:13.886696 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" virtual=false 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.889455 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.889526 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" virtual=false 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.890261 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.890325895+00:00 stderr F I0813 20:09:13.890313 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" virtual=false 2025-08-13T20:09:13.893479375+00:00 stderr F I0813 20:09:13.893039 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.893479375+00:00 stderr F I0813 20:09:13.893098 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" virtual=false 2025-08-13T20:09:13.897031317+00:00 stderr F I0813 20:09:13.896866 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.897031317+00:00 stderr F I0813 20:09:13.896947 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" virtual=false 2025-08-13T20:09:13.900166967+00:00 stderr F I0813 20:09:13.900055 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.900166967+00:00 stderr F I0813 20:09:13.900142 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" virtual=false 2025-08-13T20:09:13.903428440+00:00 stderr F I0813 20:09:13.903277 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.903428440+00:00 stderr F I0813 20:09:13.903327 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" virtual=false 2025-08-13T20:09:13.906514819+00:00 stderr F I0813 20:09:13.906406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.906514819+00:00 stderr F I0813 20:09:13.906470 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" virtual=false 2025-08-13T20:09:13.924139574+00:00 stderr F I0813 20:09:13.923878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.924139574+00:00 stderr F I0813 20:09:13.923972 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 
8e2ba691-6cd2-489e-929b-7aaef51d1387]" virtual=false 2025-08-13T20:09:13.928879240+00:00 stderr F I0813 20:09:13.928268 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.928879240+00:00 stderr F I0813 20:09:13.928347 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" virtual=false 2025-08-13T20:09:13.935465829+00:00 stderr F I0813 20:09:13.935276 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.935465829+00:00 stderr F I0813 20:09:13.935346 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" virtual=false 2025-08-13T20:09:13.977415072+00:00 stderr F I0813 20:09:13.977322 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.977415072+00:00 stderr F I0813 20:09:13.977387 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" virtual=false 2025-08-13T20:09:13.979169852+00:00 stderr F I0813 20:09:13.979111 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.979265325+00:00 stderr F I0813 20:09:13.979166 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" virtual=false 2025-08-13T20:09:13.988566541+00:00 stderr F I0813 20:09:13.988453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 
2025-08-13T20:09:13.988566541+00:00 stderr F I0813 20:09:13.988525 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" virtual=false 2025-08-13T20:09:13.994508452+00:00 stderr F I0813 20:09:13.994420 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.994508452+00:00 stderr F I0813 20:09:13.994475 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" virtual=false 2025-08-13T20:09:14.001161803+00:00 stderr F I0813 20:09:14.000597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.001161803+00:00 stderr F I0813 20:09:14.000767 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" virtual=false 2025-08-13T20:09:14.006212027+00:00 stderr F I0813 20:09:14.006105 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.006212027+00:00 stderr F I0813 20:09:14.006157 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" virtual=false 2025-08-13T20:09:14.007450383+00:00 stderr F I0813 20:09:14.006640 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.007450383+00:00 stderr F I0813 20:09:14.006713 1 garbagecollector.go:549] "Processing item" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" virtual=false 2025-08-13T20:09:14.009835451+00:00 stderr F I0813 20:09:14.009730 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.009863642+00:00 stderr F I0813 20:09:14.009847 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-11405dc064e9fc83a779a06d1cd665b3, uid: 6c39c9b1-6ae4-4da9-bc28-1744aa4c8a1d]" virtual=false 2025-08-13T20:09:14.013264310+00:00 stderr F I0813 20:09:14.013186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.013264310+00:00 stderr F I0813 20:09:14.013233 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5, uid: c414de32-5f1e-433d-8ddc-6ef8e86afda7]" virtual=false 2025-08-13T20:09:14.017742038+00:00 stderr F I0813 20:09:14.017635 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.017742038+00:00 stderr F I0813 20:09:14.017719 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, uid: 8980258c-3f45-4fdf-85aa-5d7816ef57b0]" virtual=false 2025-08-13T20:09:14.019994403+00:00 stderr F I0813 20:09:14.019883 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.020016583+00:00 stderr F I0813 20:09:14.019999 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-83accf81260e29bcce65a184dd980479, uid: 064c58c6-b7e3-4279-8d84-dee3da8cc701]" virtual=false 2025-08-13T20:09:14.025669655+00:00 stderr F I0813 20:09:14.025112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.025669655+00:00 stderr F I0813 20:09:14.025162 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: 
rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff, uid: 3ef54a1f-2601-44d6-aa6d-d19e1277d0b9]" virtual=false 2025-08-13T20:09:14.027746875+00:00 stderr F I0813 20:09:14.027600 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.027746875+00:00 stderr F I0813 20:09:14.027686 1 garbagecollector.go:549] "Processing item" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" virtual=false 2025-08-13T20:09:14.029419133+00:00 stderr F I0813 20:09:14.029059 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.034613732+00:00 stderr F I0813 20:09:14.034479 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.038099632+00:00 stderr F I0813 20:09:14.037756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.046549154+00:00 stderr F I0813 20:09:14.046371 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.065247960+00:00 stderr F I0813 20:09:14.065136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.065678152+00:00 stderr F I0813 20:09:14.065612 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.066061833+00:00 stderr F I0813 20:09:14.066024 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.097353211+00:00 stderr F I0813 20:09:14.096713 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.100254584+00:00 stderr F I0813 20:09:14.100156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.104339911+00:00 stderr F I0813 20:09:14.103727 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.107078699+00:00 stderr F I0813 20:09:14.106627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.114501082+00:00 stderr F I0813 20:09:14.113938 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.117990042+00:00 stderr F I0813 20:09:14.116225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: 
a99eded9-1e26-4704-8381-dc109f03cc1f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.120355630+00:00 stderr F I0813 20:09:14.119464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.121110752+00:00 stderr F I0813 20:09:14.121034 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-11405dc064e9fc83a779a06d1cd665b3, uid: 6c39c9b1-6ae4-4da9-bc28-1744aa4c8a1d]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.123210882+00:00 stderr F I0813 20:09:14.123012 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5, uid: c414de32-5f1e-433d-8ddc-6ef8e86afda7]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.125406005+00:00 stderr F I0813 20:09:14.125289 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, uid: 8980258c-3f45-4fdf-85aa-5d7816ef57b0]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.130221963+00:00 stderr F I0813 20:09:14.130187 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-83accf81260e29bcce65a184dd980479, uid: 064c58c6-b7e3-4279-8d84-dee3da8cc701]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"worker","uid":"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.132879569+00:00 stderr F I0813 20:09:14.132824 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff, uid: 3ef54a1f-2601-44d6-aa6d-d19e1277d0b9]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"worker","uid":"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.135660319+00:00 stderr F I0813 20:09:14.135627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" 
owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-08-13T20:10:15.242365070+00:00 stderr F I0813 20:10:15.233259 1 event.go:376] "Event occurred" object="openshift-multus/cni-sysctl-allowlist-ds" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cni-sysctl-allowlist-ds-jx5m8" 2025-08-13T20:10:18.196245188+00:00 stderr F I0813 20:10:18.196096 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ControllerRevision, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-56444b9596, uid: 296d6144-09f5-4dc7-9ab3-000f2dd8cf46]" virtual=false 2025-08-13T20:10:18.196610098+00:00 stderr F I0813 20:10:18.196559 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-jx5m8, uid: b78e72e3-8ece-4d66-aa9c-25445bacdc99]" virtual=false 2025-08-13T20:10:18.219466944+00:00 stderr F I0813 20:10:18.219401 1 garbagecollector.go:688] "Deleting item" item="[apps/v1/ControllerRevision, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-56444b9596, uid: 296d6144-09f5-4dc7-9ab3-000f2dd8cf46]" propagationPolicy="Background" 2025-08-13T20:10:18.220159104+00:00 stderr F I0813 20:10:18.219455 1 garbagecollector.go:688] "Deleting item" item="[v1/Pod, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-jx5m8, uid: b78e72e3-8ece-4d66-aa9c-25445bacdc99]" propagationPolicy="Background" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.683724 1 replica_set.go:621] "Too many replicas" replicaSet="openshift-route-controller-manager/route-controller-manager-6884dcf749" need=0 deleting=1 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684108 1 replica_set.go:248] "Found related ReplicaSets" replicaSet="openshift-route-controller-manager/route-controller-manager-6884dcf749" relatedReplicaSets=["openshift-route-controller-manager/route-controller-manager-776b8b7477","openshift-route-controller-manager/route-controller-manager-5446f98575","openshift-route-controller-manager/route-controller-manager-5b77f9fd48","openshift-route-controller-manager/route-controller-manager-5c4dbb8899","openshift-route-controller-manager/route-controller-manager-6884dcf749","openshift-route-controller-manager/route-controller-manager-777dbbb7bb","openshift-route-controller-manager/route-controller-manager-7d967d98df","openshift-route-controller-manager/route-controller-manager-846977c6bc","openshift-route-controller-manager/route-controller-manager-66f66f94cf","openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc","openshift-route-controller-manager/route-controller-manager-7f79969969","openshift-route-controller-manager/route-controller-manager-868695ccb4"] 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684171 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set route-controller-manager-6884dcf749 to 0 from 1" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684410 1 controller_utils.go:609] "Deleting pod" controller="route-controller-manager-6884dcf749" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684749 1 event.go:376] "Event occurred" 
object="openshift-controller-manager/controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set controller-manager-598fc85fd4 to 0 from 1" 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687007 1 replica_set.go:621] "Too many replicas" replicaSet="openshift-controller-manager/controller-manager-598fc85fd4" need=0 deleting=1 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687101 1 replica_set.go:248] "Found related ReplicaSets" replicaSet="openshift-controller-manager/controller-manager-598fc85fd4" relatedReplicaSets=["openshift-controller-manager/controller-manager-659898b96d","openshift-controller-manager/controller-manager-6ff78978b4","openshift-controller-manager/controller-manager-75cfd5db5d","openshift-controller-manager/controller-manager-7bbb4b7f4c","openshift-controller-manager/controller-manager-99c8765d7","openshift-controller-manager/controller-manager-b69786f4f","openshift-controller-manager/controller-manager-5797bcd546","openshift-controller-manager/controller-manager-67685c4459","openshift-controller-manager/controller-manager-78589965b8","openshift-controller-manager/controller-manager-c4dd57946","openshift-controller-manager/controller-manager-778975cc4f","openshift-controller-manager/controller-manager-598fc85fd4"] 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687333 1 controller_utils.go:609] "Deleting pod" controller="controller-manager-598fc85fd4" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" 2025-08-13T20:10:59.695564135+00:00 stderr F I0813 20:10:59.695433 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-controller-manager/controller-manager" err="Operation cannot be fulfilled on deployments.apps \"controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.702011570+00:00 stderr F I0813 20:10:59.701910 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-route-controller-manager/route-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"route-controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.721870629+00:00 stderr F I0813 20:10:59.721668 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set route-controller-manager-776b8b7477 to 1 from 0" 2025-08-13T20:10:59.731856846+00:00 stderr F I0813 20:10:59.728994 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set controller-manager-778975cc4f to 1 from 0" 2025-08-13T20:10:59.748226745+00:00 stderr F I0813 20:10:59.747647 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager-6884dcf749" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: route-controller-manager-6884dcf749-n4qpx" 2025-08-13T20:10:59.800091072+00:00 stderr F I0813 20:10:59.799961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="116.454919ms" 
2025-08-13T20:10:59.807908986+00:00 stderr F I0813 20:10:59.800945 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager-598fc85fd4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: controller-manager-598fc85fd4-8wlsm" 2025-08-13T20:10:59.809548343+00:00 stderr F I0813 20:10:59.809388 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="161.014787ms" 2025-08-13T20:10:59.809574074+00:00 stderr F I0813 20:10:59.809537 1 replica_set.go:585] "Too few replicas" replicaSet="openshift-controller-manager/controller-manager-778975cc4f" need=1 creating=1 2025-08-13T20:10:59.840894912+00:00 stderr F I0813 20:10:59.840650 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="192.344795ms" 2025-08-13T20:10:59.841197671+00:00 stderr F I0813 20:10:59.841143 1 replica_set.go:585] "Too few replicas" replicaSet="openshift-route-controller-manager/route-controller-manager-776b8b7477" need=1 creating=1 2025-08-13T20:10:59.842972571+00:00 stderr F I0813 20:10:59.841585 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="154.722446ms" 2025-08-13T20:10:59.846082041+00:00 stderr F I0813 20:10:59.843119 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-route-controller-manager/route-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"route-controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.847259364+00:00 stderr F I0813 20:10:59.847173 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="47.132021ms" 2025-08-13T20:10:59.850840847+00:00 stderr F I0813 20:10:59.849446 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="121.073µs" 2025-08-13T20:10:59.895562309+00:00 stderr F I0813 20:10:59.893518 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager-778975cc4f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: controller-manager-778975cc4f-x5vcf" 2025-08-13T20:10:59.916033096+00:00 stderr F I0813 20:10:59.913261 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager-776b8b7477" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: route-controller-manager-776b8b7477-sfpvs" 2025-08-13T20:10:59.956079364+00:00 stderr F I0813 20:10:59.955984 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="114.325958ms" 2025-08-13T20:10:59.956286910+00:00 stderr F I0813 20:10:59.956255 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="93.453µs" 2025-08-13T20:10:59.970990092+00:00 stderr F I0813 20:10:59.970868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="161.369486ms" 2025-08-13T20:10:59.994078604+00:00 stderr F 
I0813 20:10:59.993975 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="153.216473ms" 2025-08-13T20:11:00.002982189+00:00 stderr F I0813 20:11:00.002852 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="31.625627ms" 2025-08-13T20:11:00.003505364+00:00 stderr F I0813 20:11:00.003416 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="217.526µs" 2025-08-13T20:11:00.006635064+00:00 stderr F I0813 20:11:00.006602 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="38.101µs" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.020429 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="26.091848ms" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.021165 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="131.754µs" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.021224 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="18.911µs" 2025-08-13T20:11:00.051909872+00:00 stderr F I0813 20:11:00.051612 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="62.532µs" 2025-08-13T20:11:00.399700743+00:00 stderr F I0813 20:11:00.399640 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="77.293µs" 2025-08-13T20:11:00.509064199+00:00 stderr F I0813 20:11:00.508908 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="66.812µs" 2025-08-13T20:11:00.666413040+00:00 stderr F I0813 20:11:00.666271 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="78.033µs" 2025-08-13T20:11:00.744231472+00:00 stderr F I0813 20:11:00.744045 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="304.548µs" 2025-08-13T20:11:00.772302806+00:00 stderr F I0813 20:11:00.772210 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="83.382µs" 2025-08-13T20:11:00.782855379+00:00 stderr F I0813 20:11:00.780878 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="96.213µs" 2025-08-13T20:11:00.820830668+00:00 stderr F I0813 20:11:00.815070 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="59.832µs" 2025-08-13T20:11:00.848323746+00:00 stderr F I0813 20:11:00.845189 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="57.481µs" 2025-08-13T20:11:01.521766564+00:00 stderr F I0813 20:11:01.521384 1 replica_set.go:676] "Finished 
syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="407.931µs" 2025-08-13T20:11:01.553975188+00:00 stderr F I0813 20:11:01.552726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="95.063µs" 2025-08-13T20:11:01.573428735+00:00 stderr F I0813 20:11:01.573000 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="215.366µs" 2025-08-13T20:11:01.597047563+00:00 stderr F I0813 20:11:01.596684 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="84.792µs" 2025-08-13T20:11:01.614690128+00:00 stderr F I0813 20:11:01.612124 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="61.432µs" 2025-08-13T20:11:01.643016831+00:00 stderr F I0813 20:11:01.642674 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="85.323µs" 2025-08-13T20:11:02.189949232+00:00 stderr F I0813 20:11:02.189586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="118.123µs" 2025-08-13T20:11:02.284109911+00:00 stderr F I0813 20:11:02.282662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="160.454µs" 2025-08-13T20:11:02.706573933+00:00 stderr F I0813 20:11:02.706232 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="3.430578ms" 2025-08-13T20:11:02.747130546+00:00 stderr F I0813 20:11:02.742157 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="59.321µs" 2025-08-13T20:11:03.756875606+00:00 stderr F I0813 20:11:03.756699 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="27.958882ms" 2025-08-13T20:11:03.757122423+00:00 stderr F I0813 20:11:03.757071 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="118.803µs" 2025-08-13T20:11:03.831419043+00:00 stderr F I0813 20:11:03.831349 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="48.773649ms" 2025-08-13T20:11:03.831652280+00:00 stderr F I0813 20:11:03.831627 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="92.093µs" 2025-08-13T20:11:03.840501514+00:00 stderr F I0813 20:11:03.837482 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-66f66f94cf" duration="9.26µs" 2025-08-13T20:11:03.878395910+00:00 stderr F I0813 20:11:03.875526 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-75cfd5db5d" duration="11.16µs" 2025-08-13T20:15:00.203593711+00:00 stderr F I0813 20:15:00.203243 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" 
fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job collect-profiles-29251935" 2025-08-13T20:15:00.210739586+00:00 stderr F I0813 20:15:00.210633 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.344609754+00:00 stderr F I0813 20:15:00.339868 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.344609754+00:00 stderr F I0813 20:15:00.340109 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251935" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: collect-profiles-29251935-d7x6j" 2025-08-13T20:15:00.374665496+00:00 stderr F I0813 20:15:00.374329 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.382615054+00:00 stderr F I0813 20:15:00.379152 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.411165522+00:00 stderr F I0813 20:15:00.409102 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.466745326+00:00 stderr F I0813 20:15:00.466507 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:01.020671647+00:00 stderr F I0813 20:15:01.020543 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:02.374771730+00:00 stderr F I0813 20:15:02.374665 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:03.385932521+00:00 stderr F I0813 20:15:03.385873 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:03.395491475+00:00 stderr F I0813 20:15:03.395432 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:04.448076683+00:00 stderr F I0813 20:15:04.446505 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:04.689482515+00:00 stderr F I0813 20:15:04.689325 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.382243267+00:00 stderr F I0813 20:15:05.381489 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.546215138+00:00 stderr F I0813 20:15:05.545919 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.593678289+00:00 stderr F I0813 20:15:05.593530 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.629989270+00:00 stderr F I0813 20:15:05.629819 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251935" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" 2025-08-13T20:15:05.629989270+00:00 stderr F I0813 20:15:05.629766 1 job_controller.go:554] "enqueueing job" 
key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.675380811+00:00 stderr F I0813 20:15:05.675228 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-28658250" 2025-08-13T20:15:05.675952768+00:00 stderr F I0813 20:15:05.675839 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted job collect-profiles-28658250" 2025-08-13T20:15:05.675952768+00:00 stderr F I0813 20:15:05.675883 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: collect-profiles-29251935, status: Complete" 2025-08-13T20:18:48.575668025+00:00 stderr F I0813 20:18:48.575532 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:18:48.576946972+00:00 stderr F I0813 20:18:48.576877 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="501.145µs" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578014 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="252.247µs" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578835 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578877 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:18:48.579479144+00:00 stderr F I0813 20:18:48.579402 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="1.280157ms" 2025-08-13T20:18:48.579602698+00:00 stderr F I0813 20:18:48.579552 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="62.932µs" 2025-08-13T20:18:48.580147903+00:00 stderr F I0813 20:18:48.580037 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="397.592µs" 2025-08-13T20:18:48.580406671+00:00 stderr F I0813 20:18:48.580310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="204.405µs" 2025-08-13T20:18:48.580451922+00:00 stderr F I0813 20:18:48.580415 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="49.102µs" 2025-08-13T20:18:48.580581386+00:00 stderr F I0813 20:18:48.580511 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="37.091µs" 2025-08-13T20:18:48.580660798+00:00 stderr F I0813 20:18:48.580609 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="48.081µs" 2025-08-13T20:18:48.580824103+00:00 stderr F I0813 20:18:48.580726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="39.251µs" 
2025-08-13T20:18:48.581135021+00:00 stderr F I0813 20:18:48.581062 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="175.775µs" 2025-08-13T20:18:48.581301466+00:00 stderr F I0813 20:18:48.581206 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="45.061µs" 2025-08-13T20:18:48.581301466+00:00 stderr F I0813 20:18:48.581288 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="37.992µs" 2025-08-13T20:18:48.581422490+00:00 stderr F I0813 20:18:48.581384 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="62.401µs" 2025-08-13T20:18:48.581541263+00:00 stderr F I0813 20:18:48.581504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="46.972µs" 2025-08-13T20:18:48.581645876+00:00 stderr F I0813 20:18:48.581610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="31.421µs" 2025-08-13T20:18:48.581941554+00:00 stderr F I0813 20:18:48.581832 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="120.523µs" 2025-08-13T20:18:48.581961025+00:00 stderr F I0813 20:18:48.581947 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="37.771µs" 2025-08-13T20:18:48.582184921+00:00 stderr F I0813 20:18:48.582075 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="27.691µs" 2025-08-13T20:18:48.582184921+00:00 stderr F I0813 20:18:48.582163 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="41.281µs" 2025-08-13T20:18:48.582299475+00:00 stderr F I0813 20:18:48.582249 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="37.531µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582409 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="56.532µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582562 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="52.932µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582634 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="38.631µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="81.872µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583189 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="102.563µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583290 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="42.012µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583454 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="44.371µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583506 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="30.511µs" 2025-08-13T20:18:48.583678534+00:00 stderr F I0813 20:18:48.583610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="68.892µs" 2025-08-13T20:18:48.583689784+00:00 stderr F I0813 20:18:48.583675 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="41.401µs" 2025-08-13T20:18:48.583878770+00:00 stderr F I0813 20:18:48.583763 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="44.461µs" 2025-08-13T20:18:48.584012814+00:00 stderr F I0813 20:18:48.583939 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="44.741µs" 2025-08-13T20:18:48.584209389+00:00 stderr F I0813 20:18:48.584158 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="120.563µs" 2025-08-13T20:18:48.584343763+00:00 stderr F I0813 20:18:48.584270 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="37.201µs" 2025-08-13T20:18:48.584480837+00:00 stderr F I0813 20:18:48.584360 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="40.231µs" 2025-08-13T20:18:48.584703693+00:00 stderr F I0813 20:18:48.584654 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="49.902µs" 2025-08-13T20:18:48.585082864+00:00 stderr F I0813 20:18:48.584838 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="122.894µs" 2025-08-13T20:18:48.585082864+00:00 stderr F I0813 20:18:48.585011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="88.173µs" 2025-08-13T20:18:48.585104995+00:00 stderr F I0813 20:18:48.585079 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="41.281µs" 2025-08-13T20:18:48.585226448+00:00 stderr F I0813 20:18:48.585142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="31.771µs" 2025-08-13T20:18:48.585293140+00:00 stderr F I0813 20:18:48.585242 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="57.141µs" 2025-08-13T20:18:48.585418334+00:00 stderr F I0813 20:18:48.585364 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="39.571µs" 2025-08-13T20:18:48.585596599+00:00 stderr F I0813 20:18:48.585510 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-image-registry/image-registry-5b87ddc766" duration="45.891µs" 2025-08-13T20:18:48.585695662+00:00 stderr F I0813 20:18:48.585636 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="57.112µs" 2025-08-13T20:18:48.585842386+00:00 stderr F I0813 20:18:48.585746 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="55.842µs" 2025-08-13T20:18:48.586018651+00:00 stderr F I0813 20:18:48.585930 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="50.101µs" 2025-08-13T20:18:48.586438363+00:00 stderr F I0813 20:18:48.586348 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="59.982µs" 2025-08-13T20:18:48.586755572+00:00 stderr F I0813 20:18:48.586671 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="245.357µs" 2025-08-13T20:18:48.587424411+00:00 stderr F I0813 20:18:48.587357 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="36.031µs" 2025-08-13T20:18:48.594681488+00:00 stderr F I0813 20:18:48.594599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="37.611µs" 2025-08-13T20:18:48.594706819+00:00 stderr F I0813 20:18:48.594697 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="32.811µs" 2025-08-13T20:18:48.594801982+00:00 stderr F I0813 20:18:48.594751 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="24.561µs" 2025-08-13T20:18:48.594943216+00:00 stderr F I0813 20:18:48.594887 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="30.461µs" 2025-08-13T20:18:48.595144622+00:00 stderr F I0813 20:18:48.595051 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="35.071µs" 2025-08-13T20:18:48.595160232+00:00 stderr F I0813 20:18:48.595142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="37.141µs" 2025-08-13T20:18:48.595255485+00:00 stderr F I0813 20:18:48.595197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="23.461µs" 2025-08-13T20:18:48.595335777+00:00 stderr F I0813 20:18:48.595282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="30.451µs" 2025-08-13T20:18:48.595410959+00:00 stderr F I0813 20:18:48.595359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="27.43µs" 2025-08-13T20:18:48.595473371+00:00 stderr F I0813 20:18:48.595435 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" 
duration="25.651µs" 2025-08-13T20:18:48.595512062+00:00 stderr F I0813 20:18:48.595498 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="34.711µs" 2025-08-13T20:18:48.596365926+00:00 stderr F I0813 20:18:48.596286 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="34.821µs" 2025-08-13T20:18:48.596385377+00:00 stderr F I0813 20:18:48.596367 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="36.311µs" 2025-08-13T20:18:48.596489480+00:00 stderr F I0813 20:18:48.596420 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="23.921µs" 2025-08-13T20:18:48.596502440+00:00 stderr F I0813 20:18:48.596485 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="29.291µs" 2025-08-13T20:18:48.596592103+00:00 stderr F I0813 20:18:48.596541 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="25.54µs" 2025-08-13T20:18:48.596678375+00:00 stderr F I0813 20:18:48.596627 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="32.831µs" 2025-08-13T20:18:48.596820069+00:00 stderr F I0813 20:18:48.596731 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="34.631µs" 2025-08-13T20:18:48.597425287+00:00 stderr F I0813 20:18:48.597366 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="592.737µs" 2025-08-13T20:18:48.597515979+00:00 stderr F I0813 20:18:48.597470 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="40.571µs" 2025-08-13T20:18:48.597581281+00:00 stderr F I0813 20:18:48.597543 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="36.811µs" 2025-08-13T20:18:48.597709255+00:00 stderr F I0813 20:18:48.597629 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="30.711µs" 2025-08-13T20:18:48.597852239+00:00 stderr F I0813 20:18:48.597758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="22.721µs" 2025-08-13T20:18:48.597909680+00:00 stderr F I0813 20:18:48.597875 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="41.331µs" 2025-08-13T20:18:48.597958332+00:00 stderr F I0813 20:18:48.597927 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="20.131µs" 2025-08-13T20:18:48.598063635+00:00 stderr F I0813 20:18:48.598017 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="73.102µs" 2025-08-13T20:18:48.598143787+00:00 stderr F I0813 20:18:48.598101 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="31.091µs" 
2025-08-13T20:18:48.598239250+00:00 stderr F I0813 20:18:48.598197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="32.871µs" 2025-08-13T20:18:48.598292361+00:00 stderr F I0813 20:18:48.598251 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="21.781µs" 2025-08-13T20:18:48.598344053+00:00 stderr F I0813 20:18:48.598310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="16.211µs" 2025-08-13T20:18:48.598395504+00:00 stderr F I0813 20:18:48.598359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="27.161µs" 2025-08-13T20:18:48.598407275+00:00 stderr F I0813 20:18:48.598393 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="21.41µs" 2025-08-13T20:18:48.598491587+00:00 stderr F I0813 20:18:48.598449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="34.551µs" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561715 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561942 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561975 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:28:48.572416611+00:00 stderr F I0813 20:28:48.572297 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="85.463µs" 2025-08-13T20:28:48.572499904+00:00 stderr F I0813 20:28:48.572463 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="117.013µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572551 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="34.691µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572558 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="33.061µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572483 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="76.202µs" 2025-08-13T20:28:48.572757081+00:00 stderr F I0813 20:28:48.572662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="75.922µs" 2025-08-13T20:28:48.572873254+00:00 stderr F I0813 20:28:48.572822 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="41.641µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.572909 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="47.451µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573449 1 replica_set.go:676] "Finished syncing" 
kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="47.121µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573518 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="38.131µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573685 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="45.511µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573745 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="43.461µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573854 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="78.192µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573899 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="27.591µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="23.76µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573998 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="24.511µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574109 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="39.352µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574180 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="33.721µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="55.232µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574505 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="205.626µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574598 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="60.452µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574636 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="24.411µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574697 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="29.201µs" 2025-08-13T20:28:48.574873782+00:00 stderr F I0813 20:28:48.574763 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="30.641µs" 2025-08-13T20:28:48.574894543+00:00 stderr F I0813 20:28:48.574884 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="26.231µs" 2025-08-13T20:28:48.574996966+00:00 stderr F I0813 20:28:48.574952 1 replica_set.go:676] "Finished syncing" 
kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="37.291µs" 2025-08-13T20:28:48.575148860+00:00 stderr F I0813 20:28:48.575105 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="61.651µs" 2025-08-13T20:28:48.575220462+00:00 stderr F I0813 20:28:48.575180 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="33.781µs" 2025-08-13T20:28:48.575297414+00:00 stderr F I0813 20:28:48.575258 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="27.651µs" 2025-08-13T20:28:48.575443028+00:00 stderr F I0813 20:28:48.575400 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="54.142µs" 2025-08-13T20:28:48.575522701+00:00 stderr F I0813 20:28:48.575478 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="40.361µs" 2025-08-13T20:28:48.575628464+00:00 stderr F I0813 20:28:48.575586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="44.871µs" 2025-08-13T20:28:48.575731537+00:00 stderr F I0813 20:28:48.575692 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="32.401µs" 2025-08-13T20:28:48.575953403+00:00 stderr F I0813 20:28:48.575902 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="130.744µs" 2025-08-13T20:28:48.576073166+00:00 stderr F I0813 20:28:48.576033 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="50.821µs" 2025-08-13T20:28:48.576170999+00:00 stderr F I0813 20:28:48.576127 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="57.532µs" 2025-08-13T20:28:48.582190482+00:00 stderr F I0813 20:28:48.582120 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="29.911µs" 2025-08-13T20:28:48.582343637+00:00 stderr F I0813 20:28:48.582293 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="129.864µs" 2025-08-13T20:28:48.582499811+00:00 stderr F I0813 20:28:48.582449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="45.631µs" 2025-08-13T20:28:48.583335125+00:00 stderr F I0813 20:28:48.583281 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="113.853µs" 2025-08-13T20:28:48.583938443+00:00 stderr F I0813 20:28:48.583890 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="45.131µs" 2025-08-13T20:28:48.584046436+00:00 stderr F I0813 20:28:48.583990 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="33.051µs" 
2025-08-13T20:28:48.584188930+00:00 stderr F I0813 20:28:48.584142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="99.673µs" 2025-08-13T20:28:48.585535798+00:00 stderr F I0813 20:28:48.584312 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="44.421µs" 2025-08-13T20:28:48.586182467+00:00 stderr F I0813 20:28:48.586133 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="913.937µs" 2025-08-13T20:28:48.586293080+00:00 stderr F I0813 20:28:48.586249 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="59.842µs" 2025-08-13T20:28:48.586364702+00:00 stderr F I0813 20:28:48.586323 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="39.511µs" 2025-08-13T20:28:48.586441614+00:00 stderr F I0813 20:28:48.586400 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="34.391µs" 2025-08-13T20:28:48.586544867+00:00 stderr F I0813 20:28:48.586500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="43.001µs" 2025-08-13T20:28:48.586659321+00:00 stderr F I0813 20:28:48.586616 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="59.341µs" 2025-08-13T20:28:48.586722923+00:00 stderr F I0813 20:28:48.586682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="30.361µs" 2025-08-13T20:28:48.586953449+00:00 stderr F I0813 20:28:48.586879 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="114.814µs" 2025-08-13T20:28:48.587034661+00:00 stderr F I0813 20:28:48.586968 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="51.511µs" 2025-08-13T20:28:48.587130644+00:00 stderr F I0813 20:28:48.587086 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="41.431µs" 2025-08-13T20:28:48.587262428+00:00 stderr F I0813 20:28:48.587178 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="34.911µs" 2025-08-13T20:28:48.587368671+00:00 stderr F I0813 20:28:48.587326 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="75.113µs" 2025-08-13T20:28:48.587453754+00:00 stderr F I0813 20:28:48.587412 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="30.911µs" 2025-08-13T20:28:48.587557306+00:00 stderr F I0813 20:28:48.587514 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="38.641µs" 2025-08-13T20:28:48.587613678+00:00 stderr F I0813 20:28:48.587576 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="23.861µs" 2025-08-13T20:28:48.587703781+00:00 stderr F I0813 20:28:48.587656 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="28.301µs" 2025-08-13T20:28:48.587851305+00:00 stderr F I0813 20:28:48.587761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="45.171µs" 2025-08-13T20:28:48.587911167+00:00 stderr F I0813 20:28:48.587869 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="29.431µs" 2025-08-13T20:28:48.588074651+00:00 stderr F I0813 20:28:48.587997 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="42.771µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588170 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="74.632µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588259 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="37.031µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588304 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="30.261µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588366 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="34.341µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588419 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="21.48µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="23.58µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="39.001µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="91.553µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588606 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="15.021µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588676 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="29.521µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="54.632µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="99.053µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588957 1 replica_set.go:676] "Finished 
syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="39.531µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.589076 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="98.843µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.589133 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="24.191µs" 2025-08-13T20:28:48.589284106+00:00 stderr F I0813 20:28:48.589212 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="49.501µs" 2025-08-13T20:28:48.589284106+00:00 stderr F I0813 20:28:48.589245 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="18.491µs" 2025-08-13T20:28:48.589508483+00:00 stderr F I0813 20:28:48.589442 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="166.574µs" 2025-08-13T20:28:48.589623106+00:00 stderr F I0813 20:28:48.589571 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="61.752µs" 2025-08-13T20:28:48.589721529+00:00 stderr F I0813 20:28:48.589668 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="40.511µs" 2025-08-13T20:30:01.235727093+00:00 stderr F I0813 20:30:01.235606 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job collect-profiles-29251950" 2025-08-13T20:30:01.241730696+00:00 stderr F I0813 20:30:01.241647 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:01.844719909+00:00 stderr F I0813 20:30:01.844619 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:01.844986387+00:00 stderr F I0813 20:30:01.844929 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251950" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: collect-profiles-29251950-x8jjd" 2025-08-13T20:30:01.967460807+00:00 stderr F I0813 20:30:01.966767 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:01.986882936+00:00 stderr F I0813 20:30:01.986767 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:02.037764628+00:00 stderr F I0813 20:30:02.037702 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:02.065930468+00:00 stderr F I0813 20:30:02.065451 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:02.813309422+00:00 stderr F I0813 20:30:02.811373 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:03.324810106+00:00 stderr F I0813 20:30:03.324606 1 job_controller.go:554] 
"enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:04.354378292+00:00 stderr F I0813 20:30:04.354319 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:05.704293336+00:00 stderr F I0813 20:30:05.704129 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:06.998551512+00:00 stderr F I0813 20:30:06.997918 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:07.101296225+00:00 stderr F I0813 20:30:07.100365 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:07.348520052+00:00 stderr F I0813 20:30:07.348345 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.078659269+00:00 stderr F I0813 20:30:08.075462 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.095253666+00:00 stderr F I0813 20:30:08.095157 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.107656023+00:00 stderr F I0813 20:30:08.107555 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251950" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" 2025-08-13T20:30:08.108126586+00:00 stderr F I0813 20:30:08.108002 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:30:08.125248199+00:00 stderr F I0813 20:30:08.125193 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted job collect-profiles-29251905" 2025-08-13T20:30:08.125360992+00:00 stderr F I0813 20:30:08.125312 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: collect-profiles-29251950, status: Complete" 2025-08-13T20:30:08.127420121+00:00 stderr F I0813 20:30:08.127298 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:30:08.131391925+00:00 stderr F I0813 20:30:08.131365 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905-zmjv9, uid: 8500d7bd-50fb-4ca6-af41-b7a24cae43cd]" virtual=false 2025-08-13T20:30:08.156610490+00:00 stderr F I0813 20:30:08.156406 1 garbagecollector.go:688] "Deleting item" item="[v1/Pod, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905-zmjv9, uid: 8500d7bd-50fb-4ca6-af41-b7a24cae43cd]" propagationPolicy="Background" 2025-08-13T20:38:48.563688441+00:00 stderr F I0813 20:38:48.563520 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-08-13T20:38:48.563921368+00:00 stderr F I0813 20:38:48.563891 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:38:48.563995810+00:00 stderr F 
I0813 20:38:48.563976 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:38:48.579185338+00:00 stderr F I0813 20:38:48.579044 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="96.223µs" 2025-08-13T20:38:48.579382724+00:00 stderr F I0813 20:38:48.579325 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="64.582µs" 2025-08-13T20:38:48.579478777+00:00 stderr F I0813 20:38:48.579417 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="70.062µs" 2025-08-13T20:38:48.579595150+00:00 stderr F I0813 20:38:48.579542 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="128.404µs" 2025-08-13T20:38:48.579691783+00:00 stderr F I0813 20:38:48.579644 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="38.571µs" 2025-08-13T20:38:48.579874068+00:00 stderr F I0813 20:38:48.579751 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="23.39µs" 2025-08-13T20:38:48.579874068+00:00 stderr F I0813 20:38:48.579855 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="53.032µs" 2025-08-13T20:38:48.579893449+00:00 stderr F I0813 20:38:48.579555 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="41.712µs" 2025-08-13T20:38:48.579893449+00:00 stderr F I0813 20:38:48.579881 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="32.901µs" 2025-08-13T20:38:48.580004002+00:00 stderr F I0813 20:38:48.579957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="33.591µs" 2025-08-13T20:38:48.580082494+00:00 stderr F I0813 20:38:48.580029 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="35.401µs" 2025-08-13T20:38:48.580273420+00:00 stderr F I0813 20:38:48.580135 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="29.191µs" 2025-08-13T20:38:48.580355612+00:00 stderr F I0813 20:38:48.580309 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="38.441µs" 2025-08-13T20:38:48.580447345+00:00 stderr F I0813 20:38:48.580402 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="35.661µs" 2025-08-13T20:38:48.580511216+00:00 stderr F I0813 20:38:48.580468 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="29.241µs" 2025-08-13T20:38:48.580644020+00:00 stderr F I0813 20:38:48.580575 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="46.271µs" 
2025-08-13T20:38:48.580763494+00:00 stderr F I0813 20:38:48.580716 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="42.351µs" 2025-08-13T20:38:48.581030471+00:00 stderr F I0813 20:38:48.580978 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="71.373µs" 2025-08-13T20:38:48.581085763+00:00 stderr F I0813 20:38:48.581044 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="26.761µs" 2025-08-13T20:38:48.581194846+00:00 stderr F I0813 20:38:48.581123 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="28.801µs" 2025-08-13T20:38:48.581346230+00:00 stderr F I0813 20:38:48.581230 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="34.051µs" 2025-08-13T20:38:48.581507155+00:00 stderr F I0813 20:38:48.581457 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="34.011µs" 2025-08-13T20:38:48.581600998+00:00 stderr F I0813 20:38:48.581520 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="23.7µs" 2025-08-13T20:38:48.581851845+00:00 stderr F I0813 20:38:48.581745 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="91.943µs" 2025-08-13T20:38:48.582722360+00:00 stderr F I0813 20:38:48.582669 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="79.962µs" 2025-08-13T20:38:48.582849384+00:00 stderr F I0813 20:38:48.582757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="27.321µs" 2025-08-13T20:38:48.583107141+00:00 stderr F I0813 20:38:48.583052 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="197.715µs" 2025-08-13T20:38:48.583309887+00:00 stderr F I0813 20:38:48.583255 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="117.753µs" 2025-08-13T20:38:48.583549744+00:00 stderr F I0813 20:38:48.583504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="205.326µs" 2025-08-13T20:38:48.583708129+00:00 stderr F I0813 20:38:48.583662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="28.671µs" 2025-08-13T20:38:48.583830742+00:00 stderr F I0813 20:38:48.583742 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="41.281µs" 2025-08-13T20:38:48.583936325+00:00 stderr F I0813 20:38:48.583891 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="26.831µs" 2025-08-13T20:38:48.584010307+00:00 stderr F I0813 20:38:48.583968 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="37.761µs" 
2025-08-13T20:38:48.584276475+00:00 stderr F I0813 20:38:48.584227 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="196.265µs" 2025-08-13T20:38:48.584376748+00:00 stderr F I0813 20:38:48.584333 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="32.111µs" 2025-08-13T20:38:48.584441130+00:00 stderr F I0813 20:38:48.584401 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="31.011µs" 2025-08-13T20:38:48.584523642+00:00 stderr F I0813 20:38:48.584483 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="27.481µs" 2025-08-13T20:38:48.584710087+00:00 stderr F I0813 20:38:48.584664 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="38.741µs" 2025-08-13T20:38:48.584883842+00:00 stderr F I0813 20:38:48.584758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="26.791µs" 2025-08-13T20:38:48.584902043+00:00 stderr F I0813 20:38:48.584887 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="52.341µs" 2025-08-13T20:38:48.584992956+00:00 stderr F I0813 20:38:48.584949 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="27.581µs" 2025-08-13T20:38:48.585127759+00:00 stderr F I0813 20:38:48.585070 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="53.972µs" 2025-08-13T20:38:48.585459759+00:00 stderr F I0813 20:38:48.585410 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="33.531µs" 2025-08-13T20:38:48.585527751+00:00 stderr F I0813 20:38:48.585485 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="34.571µs" 2025-08-13T20:38:48.585616714+00:00 stderr F I0813 20:38:48.585565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="31.711µs" 2025-08-13T20:38:48.589687711+00:00 stderr F I0813 20:38:48.589631 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="4.005145ms" 2025-08-13T20:38:48.589843235+00:00 stderr F I0813 20:38:48.589742 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="66.022µs" 2025-08-13T20:38:48.590003700+00:00 stderr F I0813 20:38:48.589957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="50.462µs" 2025-08-13T20:38:48.590130664+00:00 stderr F I0813 20:38:48.590062 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="33.121µs" 2025-08-13T20:38:48.590369011+00:00 stderr F I0813 20:38:48.590321 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="223.846µs" 2025-08-13T20:38:48.590464003+00:00 stderr F I0813 20:38:48.590420 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="36.831µs" 2025-08-13T20:38:48.590549516+00:00 stderr F I0813 20:38:48.590505 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="28.981µs" 2025-08-13T20:38:48.590647599+00:00 stderr F I0813 20:38:48.590601 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="38.001µs" 2025-08-13T20:38:48.590744671+00:00 stderr F I0813 20:38:48.590701 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="60.032µs" 2025-08-13T20:38:48.590946127+00:00 stderr F I0813 20:38:48.590847 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="88.072µs" 2025-08-13T20:38:48.590946127+00:00 stderr F I0813 20:38:48.590939 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="30.511µs" 2025-08-13T20:38:48.591036330+00:00 stderr F I0813 20:38:48.590992 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="39.751µs" 2025-08-13T20:38:48.591137643+00:00 stderr F I0813 20:38:48.591096 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="48.141µs" 2025-08-13T20:38:48.591320338+00:00 stderr F I0813 20:38:48.591264 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="39.011µs" 2025-08-13T20:38:48.591434191+00:00 stderr F I0813 20:38:48.591365 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="42.922µs" 2025-08-13T20:38:48.591563155+00:00 stderr F I0813 20:38:48.591507 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="34.601µs" 2025-08-13T20:38:48.591712439+00:00 stderr F I0813 20:38:48.591665 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="63.352µs" 2025-08-13T20:38:48.591858964+00:00 stderr F I0813 20:38:48.591761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="55.722µs" 2025-08-13T20:38:48.592280436+00:00 stderr F I0813 20:38:48.592216 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="112.373µs" 2025-08-13T20:38:48.592280436+00:00 stderr F I0813 20:38:48.592241 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="54.821µs" 2025-08-13T20:38:48.592337147+00:00 stderr F I0813 20:38:48.592292 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="28.831µs" 2025-08-13T20:38:48.592410689+00:00 stderr F I0813 20:38:48.592361 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-console/console-644bb77b49" duration="55.082µs" 2025-08-13T20:38:48.592423270+00:00 stderr F I0813 20:38:48.592407 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="48.321µs" 2025-08-13T20:38:48.592544483+00:00 stderr F I0813 20:38:48.592501 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="46.191µs" 2025-08-13T20:38:48.592544483+00:00 stderr F I0813 20:38:48.592500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="73.452µs" 2025-08-13T20:38:48.592645506+00:00 stderr F I0813 20:38:48.592602 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="35.051µs" 2025-08-13T20:38:48.592645506+00:00 stderr F I0813 20:38:48.592607 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="74.652µs" 2025-08-13T20:38:48.592742669+00:00 stderr F I0813 20:38:48.592700 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="37.121µs" 2025-08-13T20:38:48.592858822+00:00 stderr F I0813 20:38:48.592814 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="32.851µs" 2025-08-13T20:38:48.593329486+00:00 stderr F I0813 20:38:48.593282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="36.211µs" 2025-08-13T20:38:48.593329486+00:00 stderr F I0813 20:38:48.593317 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="71.042µs" 2025-08-13T20:38:48.593463430+00:00 stderr F I0813 20:38:48.593410 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="59.982µs" 2025-08-13T20:38:48.593463430+00:00 stderr F I0813 20:38:48.593430 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="57.031µs" 2025-08-13T20:38:48.593569523+00:00 stderr F I0813 20:38:48.593515 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="33.301µs" 2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593581 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="57.002µs" 2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593520 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="108.993µs" 2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593656 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="93.573µs" 2025-08-13T20:38:48.593840021+00:00 stderr F I0813 20:38:48.593757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="761.432µs" 2025-08-13T20:42:36.319033048+00:00 stderr F I0813 20:42:36.318589 1 streamwatcher.go:111] Unexpected EOF 
during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319139171+00:00 stderr F I0813 20:42:36.319088 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319178332+00:00 stderr F I0813 20:42:36.317754 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319271204+00:00 stderr F I0813 20:42:36.319077 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319420109+00:00 stderr F I0813 20:42:36.317882 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.328448 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.330937 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.330939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331078 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331255 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331304 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331362 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331380 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331430 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331436 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331495 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331521 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331638 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331714 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331767 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331857 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F 
I0813 20:42:36.331953 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331966 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332074 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332104 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332160 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332270 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332294 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332370 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332392 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332495 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332518 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332543 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332633 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332724 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332746 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332906 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333041 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333101 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333156 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334203 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334319 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: 
unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334389 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334458 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334576 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334639 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334689 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334739 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334892535+00:00 stderr F I0813 20:42:36.334863 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.334973467+00:00 stderr F I0813 20:42:36.334926 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335049099+00:00 stderr F I0813 20:42:36.335007 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335274136+00:00 stderr F I0813 20:42:36.335166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335292186+00:00 stderr F I0813 20:42:36.335281 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335344998+00:00 stderr F I0813 20:42:36.335317 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335406310+00:00 stderr F I0813 20:42:36.335360 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335515213+00:00 stderr F I0813 20:42:36.335444 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335531863+00:00 stderr F I0813 20:42:36.335523 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335630546+00:00 stderr F I0813 20:42:36.335586 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335706258+00:00 stderr F I0813 20:42:36.335663 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335871703+00:00 stderr F I0813 20:42:36.335753 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.335890124+00:00 stderr F I0813 20:42:36.335880 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.336009607+00:00 stderr F I0813 20:42:36.335967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346184240+00:00 stderr F I0813 20:42:36.346144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346439938+00:00 stderr F I0813 20:42:36.346417 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346598742+00:00 stderr F I0813 20:42:36.346578 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346698135+00:00 stderr F I0813 20:42:36.346682 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.346955483+00:00 stderr F I0813 20:42:36.346934 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347066456+00:00 stderr F I0813 20:42:36.347050 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347323543+00:00 stderr F I0813 20:42:36.347301 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347448877+00:00 stderr F I0813 20:42:36.347432 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347572440+00:00 stderr F I0813 20:42:36.347555 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347707654+00:00 stderr F I0813 20:42:36.347691 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.347887830+00:00 stderr F I0813 20:42:36.347866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354572862+00:00 stderr F I0813 20:42:36.347987 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354670485+00:00 stderr F I0813 20:42:36.335346 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354752457+00:00 stderr F I0813 20:42:36.347994 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.354893622+00:00 stderr F I0813 20:42:36.348005 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355175890+00:00 stderr F I0813 20:42:36.348010 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355309814+00:00 stderr F I0813 20:42:36.335334 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355389816+00:00 stderr F I0813 20:42:36.348124 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355461678+00:00 stderr F I0813 20:42:36.348139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355558661+00:00 stderr F I0813 20:42:36.348153 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355760807+00:00 stderr F I0813 20:42:36.348167 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353021 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353054 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353097 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353131 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353146 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353163 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353188 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353256 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353275 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.507074939+00:00 stderr F I0813 20:42:36.353287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353300 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353313 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353324 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353349 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353373 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353385 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353404 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353429 1 streamwatcher.go:111] 
Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353441 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353452 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353469 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353480 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539522884+00:00 stderr F I0813 20:42:36.353493 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539643678+00:00 stderr F I0813 20:42:36.353505 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539880275+00:00 stderr F I0813 20:42:36.353532 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539911286+00:00 stderr F I0813 20:42:36.353546 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540170323+00:00 stderr F I0813 20:42:36.353558 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540352968+00:00 stderr F I0813 20:42:36.353570 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541644816+00:00 stderr F I0813 20:42:36.353582 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541845051+00:00 stderr F I0813 20:42:36.353593 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541925174+00:00 stderr F I0813 20:42:36.353685 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542021067+00:00 stderr F I0813 20:42:36.353711 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542147120+00:00 stderr F I0813 20:42:36.353723 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542277294+00:00 stderr F I0813 20:42:36.353733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542345226+00:00 stderr F I0813 20:42:36.353745 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542485010+00:00 stderr F I0813 20:42:36.353824 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542703936+00:00 stderr F I0813 20:42:36.353846 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543057326+00:00 stderr F I0813 20:42:36.353862 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543371405+00:00 stderr F I0813 20:42:36.353884 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543665844+00:00 stderr F I0813 20:42:36.353900 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.543995933+00:00 stderr F I0813 20:42:36.353910 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.544116567+00:00 stderr F I0813 20:42:36.353924 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.544263771+00:00 stderr F I0813 20:42:36.353941 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550576843+00:00 stderr F I0813 20:42:36.353952 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554663011+00:00 stderr F I0813 20:42:36.353961 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554996211+00:00 stderr F I0813 20:42:36.353972 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555433513+00:00 stderr F I0813 20:42:36.353983 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555596668+00:00 stderr F I0813 20:42:36.353998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555743802+00:00 stderr F I0813 20:42:36.354008 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556133413+00:00 stderr F I0813 20:42:36.354020 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556312429+00:00 stderr F I0813 20:42:36.354034 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354045 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354055 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354070 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354081 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354170 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354187 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354203 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354221 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354280 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354307 1 streamwatcher.go:111] 
Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354353 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354924 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.167199781+00:00 stderr F I0813 20:42:37.167091 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.167384846+00:00 stderr F I0813 20:42:37.167363 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.169842477+00:00 stderr F I0813 20:42:37.168712 1 secure_serving.go:258] Stopped listening on [::]:10257 2025-08-13T20:42:37.172441962+00:00 stderr F I0813 20:42:37.172372 1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:42:37.177007703+00:00 stderr F I0813 20:42:37.176268 1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:42:37.179891917+00:00 stderr F I0813 20:42:37.179850 1 publisher.go:114] "Shutting down root CA cert publisher controller" 2025-08-13T20:42:37.181820452+00:00 stderr F I0813 20:42:37.180049 1 publisher.go:92] Shutting down service CA certificate configmap publisher 2025-08-13T20:42:37.185348774+00:00 stderr F I0813 20:42:37.185285 1 garbagecollector.go:175] "Shutting down controller" controller="garbagecollector" 2025-08-13T20:42:37.185440086+00:00 stderr F I0813 20:42:37.185376 1 job_controller.go:238] "Shutting down job controller" home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log0000644000175000017500000251243315073043233033070 0ustar zuulzuul2025-10-13T00:23:23.296297551+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2025-10-13T00:23:23.301171577+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:23.305937379+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:23:23.306682290+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2025-10-13T00:23:23.306731942+00:00 stderr F + echo 'Copying system trust bundle' 2025-10-13T00:23:23.306763532+00:00 stdout F Copying system trust bundle 2025-10-13T00:23:23.306789353+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2025-10-13T00:23:23.310635690+00:00 stderr F + '[' -f
/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2025-10-13T00:23:23.310989150+00:00 stderr P + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false 
--feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false 2025-10-13T00:23:23.311032811+00:00 stderr F --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-10-13T00:23:23.383684445+00:00 stderr F W1013 00:23:23.383518 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-10-13T00:23:23.383684445+00:00 stderr F W1013 00:23:23.383655 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-10-13T00:23:23.383731026+00:00 stderr F W1013 00:23:23.383685 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-10-13T00:23:23.383731026+00:00 stderr F W1013 00:23:23.383709 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-10-13T00:23:23.383742837+00:00 stderr F W1013 00:23:23.383735 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-10-13T00:23:23.383786958+00:00 stderr F W1013 00:23:23.383760 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-10-13T00:23:23.383800698+00:00 stderr F W1013 00:23:23.383793 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-10-13T00:23:23.383839849+00:00 stderr F W1013 00:23:23.383816 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-10-13T00:23:23.383889301+00:00 stderr F W1013 00:23:23.383862 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-10-13T00:23:23.383903351+00:00 stderr F W1013 00:23:23.383891 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-10-13T00:23:23.383936772+00:00 stderr F W1013 00:23:23.383916 1 feature_gate.go:227] 
unrecognized feature gate: ClusterAPIInstallAzure 2025-10-13T00:23:23.383950103+00:00 stderr F W1013 00:23:23.383943 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-10-13T00:23:23.383990984+00:00 stderr F W1013 00:23:23.383968 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-10-13T00:23:23.384016374+00:00 stderr F W1013 00:23:23.383999 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-10-13T00:23:23.384047595+00:00 stderr F W1013 00:23:23.384028 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-10-13T00:23:23.384077846+00:00 stderr F W1013 00:23:23.384055 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-10-13T00:23:23.384090766+00:00 stderr F W1013 00:23:23.384083 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-10-13T00:23:23.384130868+00:00 stderr F W1013 00:23:23.384108 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-10-13T00:23:23.384217980+00:00 stderr F W1013 00:23:23.384181 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-10-13T00:23:23.384301142+00:00 stderr F W1013 00:23:23.384268 1 feature_gate.go:227] unrecognized feature gate: Example 2025-10-13T00:23:23.384312483+00:00 stderr F W1013 00:23:23.384299 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-10-13T00:23:23.384380765+00:00 stderr F W1013 00:23:23.384341 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-10-13T00:23:23.384380765+00:00 stderr F W1013 00:23:23.384373 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-10-13T00:23:23.384418576+00:00 stderr F W1013 00:23:23.384398 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-10-13T00:23:23.384432016+00:00 stderr F W1013 00:23:23.384424 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-10-13T00:23:23.384470267+00:00 stderr F W1013 00:23:23.384448 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-10-13T00:23:23.384483507+00:00 stderr F W1013 00:23:23.384476 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-10-13T00:23:23.384521418+00:00 stderr F W1013 00:23:23.384500 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-10-13T00:23:23.384560870+00:00 stderr F W1013 00:23:23.384531 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-10-13T00:23:23.384560870+00:00 stderr F W1013 00:23:23.384557 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-10-13T00:23:23.384603691+00:00 stderr F W1013 00:23:23.384581 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-10-13T00:23:23.384643172+00:00 stderr F W1013 00:23:23.384619 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-10-13T00:23:23.384695673+00:00 stderr F W1013 00:23:23.384668 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-10-13T00:23:23.384709244+00:00 stderr F W1013 00:23:23.384703 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-10-13T00:23:23.384756525+00:00 stderr F W1013 00:23:23.384734 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
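A minimal triage sketch for the feature-gate warnings above and immediately below, assuming LOG points at a local copy of this container log (the archive stores it as .../kube-controller-manager/4.log): it lists the gates the process reported as unrecognized and the --feature-gates value the flag parser kept (echoed further down in the FLAG: dump), so both can be compared against the exec line above.

LOG=kube-controller-manager/4.log   # hypothetical local copy of this log
# Gates the binary warned about as unrecognized (the feature_gate.go:227 entries).
grep -o 'unrecognized feature gate: [A-Za-z0-9]*' "$LOG" | sort -u
# The gate settings that were actually applied, as echoed by the flag dump.
grep -o 'FLAG: --feature-gates="[^"]*"' "$LOG"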
2025-10-13T00:23:23.384789946+00:00 stderr F W1013 00:23:23.384767 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-10-13T00:23:23.384834357+00:00 stderr F W1013 00:23:23.384809 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-10-13T00:23:23.384864288+00:00 stderr F W1013 00:23:23.384844 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-10-13T00:23:23.384905189+00:00 stderr F W1013 00:23:23.384883 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-10-13T00:23:23.384971581+00:00 stderr F W1013 00:23:23.384950 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-10-13T00:23:23.385002632+00:00 stderr F W1013 00:23:23.384986 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-10-13T00:23:23.385052223+00:00 stderr F W1013 00:23:23.385031 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-10-13T00:23:23.385089324+00:00 stderr F W1013 00:23:23.385068 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-10-13T00:23:23.385128715+00:00 stderr F W1013 00:23:23.385107 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-10-13T00:23:23.385158846+00:00 stderr F W1013 00:23:23.385142 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-10-13T00:23:23.385201757+00:00 stderr F W1013 00:23:23.385180 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-10-13T00:23:23.385296610+00:00 stderr F W1013 00:23:23.385257 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-10-13T00:23:23.385307800+00:00 stderr F W1013 00:23:23.385295 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-10-13T00:23:23.385371982+00:00 stderr F W1013 00:23:23.385349 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-10-13T00:23:23.385407503+00:00 stderr F W1013 00:23:23.385388 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-10-13T00:23:23.385446254+00:00 stderr F W1013 00:23:23.385425 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-10-13T00:23:23.385630269+00:00 stderr F W1013 00:23:23.385598 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-10-13T00:23:23.385661280+00:00 stderr F W1013 00:23:23.385643 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-10-13T00:23:23.385735262+00:00 stderr F W1013 00:23:23.385713 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-10-13T00:23:23.385774573+00:00 stderr F W1013 00:23:23.385753 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-10-13T00:23:23.385821315+00:00 stderr F W1013 00:23:23.385795 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-10-13T00:23:23.385856316+00:00 stderr F W1013 00:23:23.385833 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-10-13T00:23:23.385898997+00:00 stderr F W1013 00:23:23.385877 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-10-13T00:23:23.385980689+00:00 stderr F W1013 00:23:23.385958 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386063 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386078 1 flags.go:64] FLAG: 
--allow-metric-labels="[]" 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386085 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386089 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386092 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386099 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:23:23.386107643+00:00 stderr F I1013 00:23:23.386103 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-10-13T00:23:23.386126743+00:00 stderr F I1013 00:23:23.386106 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-10-13T00:23:23.386126743+00:00 stderr F I1013 00:23:23.386110 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-10-13T00:23:23.386126743+00:00 stderr F I1013 00:23:23.386113 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-10-13T00:23:23.386149994+00:00 stderr F I1013 00:23:23.386123 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:23:23.386149994+00:00 stderr F I1013 00:23:23.386128 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-10-13T00:23:23.386149994+00:00 stderr F I1013 00:23:23.386131 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-10-13T00:23:23.386149994+00:00 stderr F I1013 00:23:23.386137 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:23:23.386149994+00:00 stderr F I1013 00:23:23.386141 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:23:23.386149994+00:00 stderr F I1013 00:23:23.386144 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-10-13T00:23:23.386165164+00:00 stderr F I1013 00:23:23.386147 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:23:23.386165164+00:00 stderr F I1013 00:23:23.386152 1 flags.go:64] FLAG: --cloud-config="" 2025-10-13T00:23:23.386165164+00:00 stderr F I1013 00:23:23.386155 1 flags.go:64] FLAG: --cloud-provider="" 2025-10-13T00:23:23.386165164+00:00 stderr F I1013 00:23:23.386157 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-10-13T00:23:23.386178895+00:00 stderr F I1013 00:23:23.386163 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-10-13T00:23:23.386178895+00:00 stderr F I1013 00:23:23.386167 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-10-13T00:23:23.386178895+00:00 stderr F I1013 00:23:23.386170 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-10-13T00:23:23.386178895+00:00 stderr F I1013 00:23:23.386173 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-10-13T00:23:23.386193165+00:00 stderr F I1013 00:23:23.386177 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:23.386193165+00:00 stderr F I1013 00:23:23.386181 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-10-13T00:23:23.386193165+00:00 stderr F I1013 00:23:23.386184 1 flags.go:64] FLAG: 
--cluster-signing-kube-apiserver-client-key-file="" 2025-10-13T00:23:23.386193165+00:00 stderr F I1013 00:23:23.386186 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-10-13T00:23:23.386193165+00:00 stderr F I1013 00:23:23.386189 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-10-13T00:23:23.386208575+00:00 stderr F I1013 00:23:23.386192 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-10-13T00:23:23.386208575+00:00 stderr F I1013 00:23:23.386195 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-10-13T00:23:23.386208575+00:00 stderr F I1013 00:23:23.386201 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-10-13T00:23:23.386208575+00:00 stderr F I1013 00:23:23.386203 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-10-13T00:23:23.386222906+00:00 stderr F I1013 00:23:23.386206 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-10-13T00:23:23.386222906+00:00 stderr F I1013 00:23:23.386211 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-10-13T00:23:23.386222906+00:00 stderr F I1013 00:23:23.386214 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-10-13T00:23:23.386222906+00:00 stderr F I1013 00:23:23.386216 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-10-13T00:23:23.386237276+00:00 stderr F I1013 00:23:23.386220 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-10-13T00:23:23.386237276+00:00 stderr F I1013 00:23:23.386224 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-10-13T00:23:23.386237276+00:00 stderr F I1013 00:23:23.386227 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-10-13T00:23:23.386237276+00:00 stderr F I1013 00:23:23.386230 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-10-13T00:23:23.386237276+00:00 stderr F I1013 00:23:23.386233 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386236 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386239 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386242 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386245 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386247 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386250 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-10-13T00:23:23.386259277+00:00 stderr F I1013 00:23:23.386253 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-10-13T00:23:23.386275287+00:00 stderr F I1013 00:23:23.386259 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-10-13T00:23:23.386275287+00:00 stderr F I1013 00:23:23.386262 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-10-13T00:23:23.386275287+00:00 stderr F I1013 00:23:23.386265 1 flags.go:64] FLAG: --contention-profiling="false" 2025-10-13T00:23:23.386275287+00:00 stderr F I1013 00:23:23.386268 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-10-13T00:23:23.386289728+00:00 stderr F I1013 00:23:23.386271 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-10-13T00:23:23.386289728+00:00 stderr F I1013 00:23:23.386278 1 
flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386281 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386520 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386525 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386528 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386532 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386536 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-10-13T00:23:23.386546515+00:00 stderr F I1013 00:23:23.386539 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-10-13T00:23:23.386571716+00:00 stderr F I1013 00:23:23.386543 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-10-13T00:23:23.386623897+00:00 stderr F I1013 00:23:23.386548 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-10-13T00:23:23.386623897+00:00 stderr F I1013 00:23:23.386601 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-10-13T00:23:23.386623897+00:00 stderr F I1013 00:23:23.386611 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:23:23.386640437+00:00 stderr F I1013 00:23:23.386620 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-10-13T00:23:23.386671498+00:00 stderr F I1013 00:23:23.386643 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-10-13T00:23:23.386671498+00:00 stderr F I1013 00:23:23.386650 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-10-13T00:23:23.386671498+00:00 stderr F I1013 00:23:23.386657 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-10-13T00:23:23.386671498+00:00 stderr F I1013 00:23:23.386665 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389088 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389123 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389127 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389131 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389135 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389138 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389143 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 
00:23:23.389148 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389151 1 flags.go:64] FLAG: --leader-elect="true" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389154 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389161 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389164 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389167 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389170 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389178 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389181 1 flags.go:64] FLAG: --leader-migration-config="" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389184 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389188 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389191 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389197 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389202 1 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389205 1 flags.go:64] FLAG: --master="" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389208 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389211 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389214 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389217 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389220 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389223 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389226 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389230 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389234 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389237 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389243 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389247 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389250 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389253 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 
00:23:23.389257 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389272 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389275 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389278 1 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389281 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389286 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389289 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389292 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389296 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389300 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389304 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389307 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389318 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389338 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389349 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389357 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389364 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389367 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389371 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389374 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389378 1 flags.go:64] FLAG: --secure-port="10257" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389381 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389385 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389388 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389391 1 flags.go:64] FLAG: 
--terminated-pod-gc-threshold="12500" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389394 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389400 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389410 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389414 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389418 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389422 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389426 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389442 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389445 1 flags.go:64] FLAG: --v="2" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389449 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389458 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389462 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-10-13T00:23:23.390499535+00:00 stderr F I1013 00:23:23.389465 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-10-13T00:23:23.392207663+00:00 stderr F I1013 00:23:23.392158 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:23:23.682746756+00:00 stderr F I1013 00:23:23.682684 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:23:23.682782377+00:00 stderr F I1013 00:23:23.682756 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:23:23.686019607+00:00 stderr F I1013 00:23:23.685951 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-10-13T00:23:23.686122180+00:00 stderr F I1013 00:23:23.686109 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-10-13T00:23:23.688348272+00:00 stderr F I1013 00:23:23.688282 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:23:23.688423074+00:00 stderr F I1013 00:23:23.688293 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:23:23.688818615+00:00 stderr F I1013 00:23:23.688798 1 dynamic_serving_content.go:132] "Starting controller" 
name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:23:23.688934848+00:00 stderr F I1013 00:23:23.688913 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:23:23.688882847 +0000 UTC))" 2025-10-13T00:23:23.688996400+00:00 stderr F I1013 00:23:23.688981 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:23:23.688958009 +0000 UTC))" 2025-10-13T00:23:23.689036441+00:00 stderr F I1013 00:23:23.689026 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:23:23.68901271 +0000 UTC))" 2025-10-13T00:23:23.689073402+00:00 stderr F I1013 00:23:23.689063 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:23:23.689050711 +0000 UTC))" 2025-10-13T00:23:23.689108023+00:00 stderr F I1013 00:23:23.689099 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:23:23.689086892 +0000 UTC))" 2025-10-13T00:23:23.689145414+00:00 stderr F I1013 00:23:23.689134 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:23.689122153 +0000 UTC))" 2025-10-13T00:23:23.689180655+00:00 stderr F I1013 00:23:23.689171 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:23.689158914 +0000 UTC))" 2025-10-13T00:23:23.689227606+00:00 stderr F I1013 00:23:23.689213 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:23.689201275 +0000 UTC))" 2025-10-13T00:23:23.689263107+00:00 stderr F I1013 00:23:23.689253 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:23:23.689241907 +0000 UTC))" 2025-10-13T00:23:23.689301308+00:00 stderr F I1013 00:23:23.689291 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:23.689279128 +0000 UTC))" 2025-10-13T00:23:23.689363640+00:00 stderr F I1013 00:23:23.689352 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:23.689316279 +0000 UTC))" 2025-10-13T00:23:23.689673649+00:00 stderr F I1013 00:23:23.689646 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:23:23.689629387 +0000 UTC))" 2025-10-13T00:23:23.689946366+00:00 stderr F I1013 00:23:23.689934 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760315003\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760315003\" (2025-10-12 23:23:23 +0000 UTC to 2026-10-12 23:23:23 +0000 UTC (now=2025-10-13 00:23:23.689919605 +0000 UTC))" 
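A hedged spot-check of the certificates reported in the "Loaded client CA" and "Loaded serving cert" entries above, assuming shell access to the node: the file paths are the ones passed via --tls-cert-file and --client-ca-file in the exec line earlier, and the output should match the subject/issuer/validity details logged above.

# Serving certificate presented on :10257 (subject, issuer, validity window).
openssl x509 -noout -subject -issuer -dates \
  -in /etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt
# Split the client CA bundle into one file per certificate and summarize each entry.
awk '/BEGIN CERTIFICATE/{n++} {print > ("/tmp/client-ca-" n ".crt")}' \
  /etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt
for c in /tmp/client-ca-*.crt; do echo "== $c"; openssl x509 -noout -subject -enddate -in "$c"; done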
2025-10-13T00:23:23.689987947+00:00 stderr F I1013 00:23:23.689977 1 secure_serving.go:213] Serving securely on [::]:10257 2025-10-13T00:23:23.690016238+00:00 stderr F I1013 00:23:23.689996 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:23:23.690336507+00:00 stderr F I1013 00:23:23.690308 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... 2025-10-13T00:23:42.381455610+00:00 stderr F I1013 00:23:42.381415 1 leaderelection.go:260] successfully acquired lease kube-system/kube-controller-manager 2025-10-13T00:23:42.382110728+00:00 stderr F I1013 00:23:42.382030 1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="crc_2088964b-fd64-44d1-895a-bca54156dc3a became leader" 2025-10-13T00:23:42.383924449+00:00 stderr F I1013 00:23:42.383882 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-token-controller" 2025-10-13T00:23:42.386120600+00:00 stderr F I1013 00:23:42.385142 1 controllermanager.go:787] "Started controller" controller="serviceaccount-token-controller" 2025-10-13T00:23:42.386120600+00:00 stderr F I1013 00:23:42.385157 1 controllermanager.go:756] "Starting controller" controller="pod-garbage-collector-controller" 2025-10-13T00:23:42.386120600+00:00 stderr F I1013 00:23:42.385185 1 shared_informer.go:311] Waiting for caches to sync for tokens 2025-10-13T00:23:42.391668904+00:00 stderr F I1013 00:23:42.391634 1 controllermanager.go:787] "Started controller" controller="pod-garbage-collector-controller" 2025-10-13T00:23:42.391735326+00:00 stderr F I1013 00:23:42.391719 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-signing-controller" 2025-10-13T00:23:42.392940730+00:00 stderr F I1013 00:23:42.391928 1 gc_controller.go:101] "Starting GC controller" 2025-10-13T00:23:42.392940730+00:00 stderr F I1013 00:23:42.391955 1 shared_informer.go:311] Waiting for caches to sync for GC 2025-10-13T00:23:42.398019631+00:00 stderr F I1013 00:23:42.397987 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.410905860+00:00 stderr F I1013 00:23:42.410844 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.412544166+00:00 stderr F I1013 00:23:42.412520 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving" 2025-10-13T00:23:42.412597127+00:00 stderr F I1013 00:23:42.412582 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving 2025-10-13T00:23:42.412648269+00:00 stderr F I1013 00:23:42.412634 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.413449231+00:00 stderr F I1013 00:23:42.413394 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.414368337+00:00 stderr F I1013 00:23:42.414335 1 certificate_controller.go:115] "Starting certificate controller" 
name="csrsigning-kubelet-client" 2025-10-13T00:23:42.414368337+00:00 stderr F I1013 00:23:42.414353 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client 2025-10-13T00:23:42.414393727+00:00 stderr F I1013 00:23:42.414386 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.415004504+00:00 stderr F I1013 00:23:42.414969 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.416279180+00:00 stderr F I1013 00:23:42.416234 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.416947968+00:00 stderr F I1013 00:23:42.416909 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-signing-controller" 2025-10-13T00:23:42.416975899+00:00 stderr F I1013 00:23:42.416945 1 controllermanager.go:756] "Starting controller" controller="node-lifecycle-controller" 2025-10-13T00:23:42.417198745+00:00 stderr F I1013 00:23:42.417158 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client" 2025-10-13T00:23:42.417198745+00:00 stderr F I1013 00:23:42.417191 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client 2025-10-13T00:23:42.417251417+00:00 stderr F I1013 00:23:42.417224 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.419401527+00:00 stderr F I1013 00:23:42.419361 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown" 2025-10-13T00:23:42.419401527+00:00 stderr F I1013 00:23:42.419395 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown 2025-10-13T00:23:42.419461768+00:00 stderr F I1013 00:23:42.419434 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:42.451553782+00:00 stderr F I1013 00:23:42.451030 1 node_lifecycle_controller.go:425] "Controller will reconcile labels" 2025-10-13T00:23:42.451553782+00:00 stderr F I1013 00:23:42.451156 1 controllermanager.go:787] "Started controller" controller="node-lifecycle-controller" 2025-10-13T00:23:42.451553782+00:00 stderr F I1013 00:23:42.451186 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-expander-controller" 2025-10-13T00:23:42.451553782+00:00 stderr F I1013 00:23:42.451413 1 node_lifecycle_controller.go:459] "Sending events to api server" 2025-10-13T00:23:42.451553782+00:00 stderr F I1013 00:23:42.451479 1 node_lifecycle_controller.go:470] "Starting node controller" 2025-10-13T00:23:42.451553782+00:00 stderr F I1013 00:23:42.451492 1 shared_informer.go:311] Waiting for caches to sync for taint 2025-10-13T00:23:42.458266069+00:00 stderr F I1013 00:23:42.458196 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 
2025-10-13T00:23:42.458295970+00:00 stderr F I1013 00:23:42.458271 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-10-13T00:23:42.458347022+00:00 stderr F I1013 00:23:42.458304 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-10-13T00:23:42.458378222+00:00 stderr F I1013 00:23:42.458358 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-10-13T00:23:42.458657560+00:00 stderr F I1013 00:23:42.458626 1 controllermanager.go:787] "Started controller" controller="persistentvolume-expander-controller" 2025-10-13T00:23:42.458693611+00:00 stderr F I1013 00:23:42.458665 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-protection-controller" 2025-10-13T00:23:42.458748933+00:00 stderr F I1013 00:23:42.458711 1 expand_controller.go:328] "Starting expand controller" 2025-10-13T00:23:42.458774524+00:00 stderr F I1013 00:23:42.458755 1 shared_informer.go:311] Waiting for caches to sync for expand 2025-10-13T00:23:42.462431145+00:00 stderr F I1013 00:23:42.462360 1 controllermanager.go:787] "Started controller" controller="persistentvolume-protection-controller" 2025-10-13T00:23:42.462460406+00:00 stderr F I1013 00:23:42.462428 1 controllermanager.go:756] "Starting controller" controller="legacy-serviceaccount-token-cleaner-controller" 2025-10-13T00:23:42.462494477+00:00 stderr F I1013 00:23:42.462456 1 pv_protection_controller.go:78] "Starting PV protection controller" 2025-10-13T00:23:42.462524628+00:00 stderr F I1013 00:23:42.462490 1 shared_informer.go:311] Waiting for caches to sync for PV protection 2025-10-13T00:23:42.465865961+00:00 stderr F I1013 00:23:42.465802 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.469104451+00:00 stderr F I1013 00:23:42.468980 1 controllermanager.go:787] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller" 2025-10-13T00:23:42.469104451+00:00 stderr F I1013 00:23:42.469088 1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" 2025-10-13T00:23:42.469126632+00:00 stderr F I1013 00:23:42.469113 1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner 2025-10-13T00:23:42.469161853+00:00 stderr F I1013 00:23:42.469124 1 controllermanager.go:756] "Starting controller" controller="replicationcontroller-controller" 2025-10-13T00:23:42.472983249+00:00 stderr F I1013 00:23:42.472922 1 controllermanager.go:787] "Started controller" controller="replicationcontroller-controller" 2025-10-13T00:23:42.472983249+00:00 stderr F I1013 00:23:42.472963 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-controller" 2025-10-13T00:23:42.473299558+00:00 stderr F I1013 00:23:42.473248 1 replica_set.go:214] "Starting controller" name="replicationcontroller" 2025-10-13T00:23:42.473299558+00:00 stderr F I1013 00:23:42.473281 1 shared_informer.go:311] Waiting for caches to sync for ReplicationController 2025-10-13T00:23:42.476982271+00:00 stderr F I1013 00:23:42.476922 1 controllermanager.go:787] "Started controller" controller="serviceaccount-controller" 2025-10-13T00:23:42.476982271+00:00 stderr F I1013 00:23:42.476958 1 controllermanager.go:756] "Starting controller" controller="statefulset-controller" 2025-10-13T00:23:42.477427383+00:00 stderr F I1013 00:23:42.477388 1 serviceaccounts_controller.go:111] "Starting service account controller" 
2025-10-13T00:23:42.477427383+00:00 stderr F I1013 00:23:42.477408 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-10-13T00:23:42.480885829+00:00 stderr F I1013 00:23:42.480844 1 controllermanager.go:787] "Started controller" controller="statefulset-controller" 2025-10-13T00:23:42.480885829+00:00 stderr F I1013 00:23:42.480880 1 controllermanager.go:756] "Starting controller" controller="node-ipam-controller" 2025-10-13T00:23:42.480939561+00:00 stderr F I1013 00:23:42.480899 1 controllermanager.go:765] "Warning: skipping controller" controller="node-ipam-controller" 2025-10-13T00:23:42.480950311+00:00 stderr F I1013 00:23:42.480942 1 controllermanager.go:756] "Starting controller" controller="service-lb-controller" 2025-10-13T00:23:42.481015403+00:00 stderr F I1013 00:23:42.480976 1 stateful_set.go:161] "Starting stateful set controller" 2025-10-13T00:23:42.481015403+00:00 stderr F I1013 00:23:42.481002 1 shared_informer.go:311] Waiting for caches to sync for stateful set 2025-10-13T00:23:42.483657767+00:00 stderr F E1013 00:23:42.483609 1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" 2025-10-13T00:23:42.483657767+00:00 stderr F I1013 00:23:42.483645 1 controllermanager.go:765] "Warning: skipping controller" controller="service-lb-controller" 2025-10-13T00:23:42.483680907+00:00 stderr F I1013 00:23:42.483661 1 controllermanager.go:756] "Starting controller" controller="node-route-controller" 2025-10-13T00:23:42.483729929+00:00 stderr F I1013 00:23:42.483694 1 core.go:290] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true 2025-10-13T00:23:42.483729929+00:00 stderr F I1013 00:23:42.483717 1 controllermanager.go:765] "Warning: skipping controller" controller="node-route-controller" 2025-10-13T00:23:42.483752529+00:00 stderr F I1013 00:23:42.483729 1 controllermanager.go:756] "Starting controller" controller="ephemeral-volume-controller" 2025-10-13T00:23:42.485970301+00:00 stderr F I1013 00:23:42.485916 1 shared_informer.go:318] Caches are synced for tokens 2025-10-13T00:23:42.486446884+00:00 stderr F I1013 00:23:42.486409 1 controllermanager.go:787] "Started controller" controller="ephemeral-volume-controller" 2025-10-13T00:23:42.486462275+00:00 stderr F I1013 00:23:42.486444 1 controllermanager.go:756] "Starting controller" controller="disruption-controller" 2025-10-13T00:23:42.486654870+00:00 stderr F I1013 00:23:42.486611 1 controller.go:169] "Starting ephemeral volume controller" 2025-10-13T00:23:42.486654870+00:00 stderr F I1013 00:23:42.486637 1 shared_informer.go:311] Waiting for caches to sync for ephemeral 2025-10-13T00:23:42.496022501+00:00 stderr F I1013 00:23:42.495957 1 controllermanager.go:787] "Started controller" controller="disruption-controller" 2025-10-13T00:23:42.496022501+00:00 stderr F I1013 00:23:42.495993 1 controllermanager.go:756] "Starting controller" controller="cloud-node-lifecycle-controller" 2025-10-13T00:23:42.497127312+00:00 stderr F I1013 00:23:42.496174 1 disruption.go:433] "Sending events to api server." 
2025-10-13T00:23:42.497127312+00:00 stderr F I1013 00:23:42.496252 1 disruption.go:444] "Starting disruption controller" 2025-10-13T00:23:42.497127312+00:00 stderr F I1013 00:23:42.496265 1 shared_informer.go:311] Waiting for caches to sync for disruption 2025-10-13T00:23:42.498473659+00:00 stderr F E1013 00:23:42.498432 1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" 2025-10-13T00:23:42.498473659+00:00 stderr F I1013 00:23:42.498464 1 controllermanager.go:765] "Warning: skipping controller" controller="cloud-node-lifecycle-controller" 2025-10-13T00:23:42.498497960+00:00 stderr F I1013 00:23:42.498482 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-attach-detach-controller" 2025-10-13T00:23:42.501992027+00:00 stderr F W1013 00:23:42.501948 1 probe.go:268] Flexvolume plugin directory at /etc/kubernetes/kubelet-plugins/volume/exec does not exist. Recreating. 2025-10-13T00:23:42.502685577+00:00 stderr F I1013 00:23:42.502654 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-10-13T00:23:42.502715838+00:00 stderr F I1013 00:23:42.502683 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-10-13T00:23:42.502715838+00:00 stderr F I1013 00:23:42.502699 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-10-13T00:23:42.502727048+00:00 stderr F I1013 00:23:42.502713 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" 2025-10-13T00:23:42.502767819+00:00 stderr F I1013 00:23:42.502742 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2025-10-13T00:23:42.502856261+00:00 stderr F I1013 00:23:42.502832 1 controllermanager.go:787] "Started controller" controller="persistentvolume-attach-detach-controller" 2025-10-13T00:23:42.502868682+00:00 stderr F I1013 00:23:42.502856 1 controllermanager.go:756] "Starting controller" controller="taint-eviction-controller" 2025-10-13T00:23:42.503998383+00:00 stderr F I1013 00:23:42.502997 1 attach_detach_controller.go:337] "Starting attach detach controller" 2025-10-13T00:23:42.503998383+00:00 stderr F I1013 00:23:42.503048 1 shared_informer.go:311] Waiting for caches to sync for attach detach 2025-10-13T00:23:42.505482865+00:00 stderr F I1013 00:23:42.505437 1 controllermanager.go:787] "Started controller" controller="taint-eviction-controller" 2025-10-13T00:23:42.505482865+00:00 stderr F I1013 00:23:42.505469 1 controllermanager.go:756] "Starting controller" controller="namespace-controller" 2025-10-13T00:23:42.505701291+00:00 stderr F I1013 00:23:42.505627 1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller" 2025-10-13T00:23:42.506692128+00:00 stderr F I1013 00:23:42.505740 1 taint_eviction.go:291] "Sending events to api server" 2025-10-13T00:23:42.506692128+00:00 stderr F I1013 00:23:42.505777 1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller 2025-10-13T00:23:42.552343240+00:00 stderr F I1013 00:23:42.552257 1 namespace_controller.go:197] "Starting namespace controller" 2025-10-13T00:23:42.552343240+00:00 stderr F I1013 00:23:42.552287 1 shared_informer.go:311] Waiting for caches to sync for namespace 2025-10-13T00:23:42.552397811+00:00 stderr F I1013 00:23:42.552185 1 controllermanager.go:787] "Started controller" controller="namespace-controller" 2025-10-13T00:23:42.552397811+00:00 stderr F I1013 00:23:42.552386 1 controllermanager.go:756] "Starting controller" controller="replicaset-controller" 
2025-10-13T00:23:42.555906189+00:00 stderr F I1013 00:23:42.555852 1 controllermanager.go:787] "Started controller" controller="replicaset-controller" 2025-10-13T00:23:42.555906189+00:00 stderr F I1013 00:23:42.555876 1 controllermanager.go:756] "Starting controller" controller="clusterrole-aggregation-controller" 2025-10-13T00:23:42.556142096+00:00 stderr F I1013 00:23:42.556078 1 replica_set.go:214] "Starting controller" name="replicaset" 2025-10-13T00:23:42.556142096+00:00 stderr F I1013 00:23:42.556117 1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet 2025-10-13T00:23:42.558992345+00:00 stderr F I1013 00:23:42.558896 1 controllermanager.go:787] "Started controller" controller="clusterrole-aggregation-controller" 2025-10-13T00:23:42.559015656+00:00 stderr F I1013 00:23:42.558991 1 controllermanager.go:756] "Starting controller" controller="persistentvolumeclaim-protection-controller" 2025-10-13T00:23:42.559194171+00:00 stderr F I1013 00:23:42.559060 1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" 2025-10-13T00:23:42.559194171+00:00 stderr F I1013 00:23:42.559180 1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator 2025-10-13T00:23:42.566374551+00:00 stderr F I1013 00:23:42.563706 1 controllermanager.go:787] "Started controller" controller="persistentvolumeclaim-protection-controller" 2025-10-13T00:23:42.566374551+00:00 stderr F I1013 00:23:42.563747 1 controllermanager.go:756] "Starting controller" controller="root-ca-certificate-publisher-controller" 2025-10-13T00:23:42.566374551+00:00 stderr F I1013 00:23:42.563763 1 pvc_protection_controller.go:102] "Starting PVC protection controller" 2025-10-13T00:23:42.566374551+00:00 stderr F I1013 00:23:42.563781 1 shared_informer.go:311] Waiting for caches to sync for PVC protection 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567066 1 controllermanager.go:787] "Started controller" controller="root-ca-certificate-publisher-controller" 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567125 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"] 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567143 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"] 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567161 1 controllermanager.go:750] "Warning: controller is disabled" controller="ttl-controller" 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567177 1 controllermanager.go:750] "Warning: controller is disabled" controller="bootstrap-signer-controller" 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567191 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-binder-controller" 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567245 1 publisher.go:102] "Starting root CA cert publisher controller" 2025-10-13T00:23:42.567788190+00:00 stderr F I1013 00:23:42.567273 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-10-13T00:23:42.577455239+00:00 stderr F I1013 00:23:42.577391 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path" 2025-10-13T00:23:42.577455239+00:00 stderr F I1013 00:23:42.577433 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs" 
2025-10-13T00:23:42.577497471+00:00 stderr F I1013 00:23:42.577464 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-10-13T00:23:42.577497471+00:00 stderr F I1013 00:23:42.577484 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-10-13T00:23:42.577535032+00:00 stderr F I1013 00:23:42.577504 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-10-13T00:23:42.577593523+00:00 stderr F I1013 00:23:42.577565 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" 2025-10-13T00:23:42.577659585+00:00 stderr F I1013 00:23:42.577605 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2025-10-13T00:23:42.577708316+00:00 stderr F I1013 00:23:42.577687 1 controllermanager.go:787] "Started controller" controller="persistentvolume-binder-controller" 2025-10-13T00:23:42.577739197+00:00 stderr F I1013 00:23:42.577721 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"] 2025-10-13T00:23:42.577761318+00:00 stderr F I1013 00:23:42.577743 1 controllermanager.go:756] "Starting controller" controller="endpointslice-controller" 2025-10-13T00:23:42.577977884+00:00 stderr F I1013 00:23:42.577912 1 pv_controller_base.go:319] "Starting persistent volume controller" 2025-10-13T00:23:42.577977884+00:00 stderr F I1013 00:23:42.577960 1 shared_informer.go:311] Waiting for caches to sync for persistent volume 2025-10-13T00:23:42.581351168+00:00 stderr F I1013 00:23:42.581296 1 controllermanager.go:787] "Started controller" controller="endpointslice-controller" 2025-10-13T00:23:42.581376109+00:00 stderr F I1013 00:23:42.581355 1 controllermanager.go:756] "Starting controller" controller="endpointslice-mirroring-controller" 2025-10-13T00:23:42.581642346+00:00 stderr F I1013 00:23:42.581609 1 endpointslice_controller.go:264] "Starting endpoint slice controller" 2025-10-13T00:23:42.581642346+00:00 stderr F I1013 00:23:42.581627 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice 2025-10-13T00:23:42.585084762+00:00 stderr F I1013 00:23:42.585048 1 controllermanager.go:787] "Started controller" controller="endpointslice-mirroring-controller" 2025-10-13T00:23:42.585108413+00:00 stderr F I1013 00:23:42.585096 1 controllermanager.go:756] "Starting controller" controller="resourcequota-controller" 2025-10-13T00:23:42.585344289+00:00 stderr F I1013 00:23:42.585308 1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" 2025-10-13T00:23:42.585344289+00:00 stderr F I1013 00:23:42.585321 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring 2025-10-13T00:23:42.623031619+00:00 stderr F I1013 00:23:42.622946 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2025-10-13T00:23:42.623075040+00:00 stderr F I1013 00:23:42.623054 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2025-10-13T00:23:42.623091431+00:00 stderr F I1013 00:23:42.623084 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2025-10-13T00:23:42.623145622+00:00 stderr F I1013 00:23:42.623117 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2025-10-13T00:23:42.623166883+00:00 
stderr F I1013 00:23:42.623149 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com" 2025-10-13T00:23:42.623197394+00:00 stderr F I1013 00:23:42.623178 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2025-10-13T00:23:42.623262455+00:00 stderr F I1013 00:23:42.623229 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com" 2025-10-13T00:23:42.623277946+00:00 stderr F I1013 00:23:42.623270 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2025-10-13T00:23:42.623342228+00:00 stderr F I1013 00:23:42.623300 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2025-10-13T00:23:42.623389309+00:00 stderr F I1013 00:23:42.623364 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2025-10-13T00:23:42.623419750+00:00 stderr F I1013 00:23:42.623398 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io" 2025-10-13T00:23:42.623447041+00:00 stderr F I1013 00:23:42.623427 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2025-10-13T00:23:42.623519583+00:00 stderr F I1013 00:23:42.623491 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io" 2025-10-13T00:23:42.623548643+00:00 stderr F I1013 00:23:42.623528 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com" 2025-10-13T00:23:42.623561134+00:00 stderr F I1013 00:23:42.623554 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2025-10-13T00:23:42.623594575+00:00 stderr F I1013 00:23:42.623575 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-10-13T00:23:42.623625646+00:00 stderr F I1013 00:23:42.623606 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2025-10-13T00:23:42.623659296+00:00 stderr F I1013 00:23:42.623641 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-10-13T00:23:42.623691257+00:00 stderr F I1013 00:23:42.623672 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-10-13T00:23:42.623721008+00:00 stderr F I1013 00:23:42.623703 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2025-10-13T00:23:42.623754299+00:00 stderr F I1013 00:23:42.623735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch" 2025-10-13T00:23:42.623766569+00:00 stderr F I1013 00:23:42.623758 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2025-10-13T00:23:42.623805951+00:00 stderr F I1013 00:23:42.623786 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2025-10-13T00:23:42.623836741+00:00 stderr F I1013 00:23:42.623817 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-10-13T00:23:42.624730996+00:00 stderr F I1013 00:23:42.624680 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2025-10-13T00:23:42.624730996+00:00 stderr F I1013 00:23:42.624715 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2025-10-13T00:23:42.624763567+00:00 stderr F I1013 00:23:42.624751 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="imagestreams.image.openshift.io" 2025-10-13T00:23:42.624780128+00:00 stderr F I1013 00:23:42.624771 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-10-13T00:23:42.624838919+00:00 stderr F I1013 00:23:42.624798 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io" 2025-10-13T00:23:42.624838919+00:00 stderr F I1013 00:23:42.624819 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-10-13T00:23:42.624853940+00:00 stderr F I1013 00:23:42.624840 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-10-13T00:23:42.624866330+00:00 stderr F I1013 00:23:42.624857 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2025-10-13T00:23:42.624881501+00:00 stderr F I1013 00:23:42.624873 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2025-10-13T00:23:42.624922192+00:00 stderr F I1013 00:23:42.624897 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-10-13T00:23:42.624934732+00:00 stderr F I1013 00:23:42.624926 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-10-13T00:23:42.624961173+00:00 stderr F I1013 00:23:42.624941 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2025-10-13T00:23:42.624973743+00:00 stderr F I1013 00:23:42.624967 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com" 2025-10-13T00:23:42.625044205+00:00 stderr F I1013 00:23:42.624990 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com" 2025-10-13T00:23:42.625044205+00:00 stderr F I1013 00:23:42.625019 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-10-13T00:23:42.625044205+00:00 stderr F I1013 00:23:42.625034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-10-13T00:23:42.625058575+00:00 stderr F I1013 00:23:42.625052 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2025-10-13T00:23:42.625106287+00:00 stderr F I1013 00:23:42.625079 1 
resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2025-10-13T00:23:42.625148118+00:00 stderr F I1013 00:23:42.625116 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 2025-10-13T00:23:42.625158748+00:00 stderr F I1013 00:23:42.625146 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-10-13T00:23:42.625221110+00:00 stderr F I1013 00:23:42.625185 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2025-10-13T00:23:42.625221110+00:00 stderr F I1013 00:23:42.625216 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 2025-10-13T00:23:42.625264821+00:00 stderr F I1013 00:23:42.625242 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2025-10-13T00:23:42.625293932+00:00 stderr F I1013 00:23:42.625275 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io" 2025-10-13T00:23:42.625344653+00:00 stderr F I1013 00:23:42.625301 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-10-13T00:23:42.625384795+00:00 stderr F I1013 00:23:42.625363 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2025-10-13T00:23:42.625434306+00:00 stderr F I1013 00:23:42.625392 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io" 2025-10-13T00:23:42.625475347+00:00 stderr F I1013 00:23:42.625454 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2025-10-13T00:23:42.625499458+00:00 stderr F I1013 00:23:42.625485 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2025-10-13T00:23:42.625529019+00:00 stderr F I1013 00:23:42.625510 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io" 2025-10-13T00:23:42.625575520+00:00 stderr F I1013 00:23:42.625556 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org" 2025-10-13T00:23:42.625604791+00:00 stderr F I1013 00:23:42.625586 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2025-10-13T00:23:42.625638332+00:00 stderr F I1013 00:23:42.625619 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2025-10-13T00:23:42.625670433+00:00 stderr F I1013 00:23:42.625652 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2025-10-13T00:23:42.625699263+00:00 stderr F I1013 00:23:42.625680 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-10-13T00:23:42.625727764+00:00 stderr F I1013 00:23:42.625709 1 resource_quota_monitor.go:224] "QuotaMonitor created object 
count evaluator" resource="operatorgroups.operators.coreos.com" 2025-10-13T00:23:42.625754195+00:00 stderr F I1013 00:23:42.625735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2025-10-13T00:23:42.625783996+00:00 stderr F I1013 00:23:42.625765 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2025-10-13T00:23:42.625807016+00:00 stderr F I1013 00:23:42.625793 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io" 2025-10-13T00:23:42.625836957+00:00 stderr F I1013 00:23:42.625818 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2025-10-13T00:23:42.625870358+00:00 stderr F I1013 00:23:42.625845 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io" 2025-10-13T00:23:42.625882788+00:00 stderr F I1013 00:23:42.625875 1 controllermanager.go:787] "Started controller" controller="resourcequota-controller" 2025-10-13T00:23:42.625894859+00:00 stderr F I1013 00:23:42.625887 1 controllermanager.go:756] "Starting controller" controller="daemonset-controller" 2025-10-13T00:23:42.625989561+00:00 stderr F I1013 00:23:42.625947 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-10-13T00:23:42.625989561+00:00 stderr F I1013 00:23:42.625978 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-10-13T00:23:42.626027212+00:00 stderr F I1013 00:23:42.626007 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-10-13T00:23:42.629086468+00:00 stderr F I1013 00:23:42.628915 1 controllermanager.go:787] "Started controller" controller="daemonset-controller" 2025-10-13T00:23:42.629086468+00:00 stderr F I1013 00:23:42.628944 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-cleaner-controller" 2025-10-13T00:23:42.629318884+00:00 stderr F I1013 00:23:42.629135 1 daemon_controller.go:297] "Starting daemon sets controller" 2025-10-13T00:23:42.629318884+00:00 stderr F I1013 00:23:42.629175 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-10-13T00:23:42.631606908+00:00 stderr F I1013 00:23:42.631545 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-cleaner-controller" 2025-10-13T00:23:42.631606908+00:00 stderr F I1013 00:23:42.631594 1 controllermanager.go:756] "Starting controller" controller="ttl-after-finished-controller" 2025-10-13T00:23:42.631794543+00:00 stderr F I1013 00:23:42.631739 1 cleaner.go:83] "Starting CSR cleaner controller" 2025-10-13T00:23:42.634216081+00:00 stderr F I1013 00:23:42.634158 1 controllermanager.go:787] "Started controller" controller="ttl-after-finished-controller" 2025-10-13T00:23:42.634216081+00:00 stderr F I1013 00:23:42.634191 1 controllermanager.go:756] "Starting controller" controller="service-ca-certificate-publisher-controller" 2025-10-13T00:23:42.634242681+00:00 stderr F I1013 00:23:42.634210 1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" 2025-10-13T00:23:42.634242681+00:00 stderr F I1013 00:23:42.634226 1 shared_informer.go:311] Waiting for caches to sync for TTL after finished 2025-10-13T00:23:42.636678949+00:00 stderr F I1013 00:23:42.636632 1 controllermanager.go:787] "Started controller" controller="service-ca-certificate-publisher-controller" 
2025-10-13T00:23:42.636678949+00:00 stderr F I1013 00:23:42.636661 1 controllermanager.go:756] "Starting controller" controller="endpoints-controller" 2025-10-13T00:23:42.636773052+00:00 stderr F I1013 00:23:42.636733 1 publisher.go:80] Starting service CA certificate configmap publisher 2025-10-13T00:23:42.636773052+00:00 stderr F I1013 00:23:42.636750 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-10-13T00:23:42.637830451+00:00 stderr F I1013 00:23:42.637299 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, 
Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-10-13T00:23:42.639673433+00:00 stderr F I1013 00:23:42.639623 1 controllermanager.go:787] "Started controller" controller="endpoints-controller" 2025-10-13T00:23:42.639673433+00:00 stderr F I1013 00:23:42.639659 1 controllermanager.go:756] "Starting controller" controller="job-controller" 2025-10-13T00:23:42.639860418+00:00 stderr F I1013 00:23:42.639814 1 endpoints_controller.go:174] "Starting endpoint controller" 2025-10-13T00:23:42.639871878+00:00 stderr F I1013 00:23:42.639858 1 shared_informer.go:311] Waiting for caches to sync for endpoint 2025-10-13T00:23:42.642140941+00:00 stderr F I1013 00:23:42.642085 1 controllermanager.go:787] "Started controller" controller="job-controller" 2025-10-13T00:23:42.642140941+00:00 stderr F I1013 00:23:42.642114 1 controllermanager.go:756] "Starting controller" controller="deployment-controller" 2025-10-13T00:23:42.642296816+00:00 stderr F I1013 00:23:42.642248 1 job_controller.go:224] "Starting job controller" 2025-10-13T00:23:42.642296816+00:00 stderr F I1013 00:23:42.642272 1 shared_informer.go:311] Waiting for caches to sync for job 2025-10-13T00:23:42.644640231+00:00 stderr F I1013 00:23:42.644596 1 controllermanager.go:787] "Started controller" controller="deployment-controller" 2025-10-13T00:23:42.644640231+00:00 stderr F I1013 00:23:42.644631 1 controllermanager.go:756] "Starting controller" controller="horizontal-pod-autoscaler-controller" 2025-10-13T00:23:42.644886828+00:00 stderr F I1013 00:23:42.644850 1 deployment_controller.go:168] "Starting controller" controller="deployment" 2025-10-13T00:23:42.644886828+00:00 stderr F I1013 00:23:42.644864 1 shared_informer.go:311] Waiting for caches to sync for deployment 2025-10-13T00:23:42.655954916+00:00 stderr F I1013 00:23:42.655901 1 controllermanager.go:787] "Started controller" controller="horizontal-pod-autoscaler-controller" 2025-10-13T00:23:42.655954916+00:00 stderr F I1013 00:23:42.655928 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-approving-controller" 2025-10-13T00:23:42.656024698+00:00 stderr F I1013 00:23:42.655986 1 horizontal.go:200] "Starting HPA controller" 2025-10-13T00:23:42.656024698+00:00 stderr F I1013 00:23:42.656015 1 shared_informer.go:311] Waiting for caches to sync for HPA 2025-10-13T00:23:42.658724553+00:00 stderr F I1013 00:23:42.658676 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-approving-controller" 2025-10-13T00:23:42.658724553+00:00 stderr F I1013 00:23:42.658717 1 controllermanager.go:756] "Starting controller" controller="garbage-collector-controller" 2025-10-13T00:23:42.658960200+00:00 stderr F I1013 00:23:42.658924 1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving" 2025-10-13T00:23:42.658979240+00:00 stderr F I1013 00:23:42.658960 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving 2025-10-13T00:23:42.666349846+00:00 stderr F I1013 00:23:42.666273 1 garbagecollector.go:155] "Starting controller" 
controller="garbagecollector" 2025-10-13T00:23:42.666349846+00:00 stderr F I1013 00:23:42.666309 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-10-13T00:23:42.666405937+00:00 stderr F I1013 00:23:42.666378 1 controllermanager.go:787] "Started controller" controller="garbage-collector-controller" 2025-10-13T00:23:42.666417087+00:00 stderr F I1013 00:23:42.666400 1 graph_builder.go:302] "Running" component="GraphBuilder" 2025-10-13T00:23:42.666427068+00:00 stderr F I1013 00:23:42.666412 1 controllermanager.go:756] "Starting controller" controller="cronjob-controller" 2025-10-13T00:23:42.670839311+00:00 stderr F I1013 00:23:42.670762 1 controllermanager.go:787] "Started controller" controller="cronjob-controller" 2025-10-13T00:23:42.670865881+00:00 stderr F I1013 00:23:42.670835 1 controllermanager.go:750] "Warning: controller is disabled" controller="token-cleaner-controller" 2025-10-13T00:23:42.670898672+00:00 stderr F I1013 00:23:42.670854 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"] 2025-10-13T00:23:42.670957434+00:00 stderr F I1013 00:23:42.670921 1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" 2025-10-13T00:23:42.670957434+00:00 stderr F I1013 00:23:42.670947 1 shared_informer.go:311] Waiting for caches to sync for cronjob 2025-10-13T00:23:42.681933050+00:00 stderr F I1013 00:23:42.681857 1 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.682116305+00:00 stderr F I1013 00:23:42.681864 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-10-13T00:23:42.682429964+00:00 stderr F I1013 00:23:42.682384 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.692629108+00:00 stderr F I1013 00:23:42.692544 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.692629108+00:00 stderr F I1013 00:23:42.692595 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.692784 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.692839 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.692955 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.693456 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.693605 1 job_controller.go:554] "enqueueing job" key="openshift-image-registry/image-pruner-29338560" 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.693643 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.693655 1 job_controller.go:554] "enqueueing job" 
key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.693690 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29338575" 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.694443 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.694585 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.694586 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.696378702+00:00 stderr F I1013 00:23:42.694709 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.700364403+00:00 stderr F I1013 00:23:42.698691 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.700364403+00:00 stderr F I1013 00:23:42.699507 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.700364403+00:00 stderr F I1013 00:23:42.699613 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.701960568+00:00 stderr F I1013 00:23:42.701904 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.702175884+00:00 stderr F I1013 00:23:42.702132 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.703385637+00:00 stderr F I1013 00:23:42.703316 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.704580921+00:00 stderr F I1013 00:23:42.704545 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.711048971+00:00 stderr F I1013 00:23:42.710945 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.712569053+00:00 stderr F I1013 00:23:42.712492 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.713157359+00:00 stderr F I1013 00:23:42.713105 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.713352515+00:00 stderr F I1013 00:23:42.713272 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.714699622+00:00 stderr F I1013 00:23:42.714646 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.715438933+00:00 stderr F I1013 00:23:42.715398 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.717290215+00:00 stderr F I1013 00:23:42.717216 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.717856910+00:00 stderr F I1013 00:23:42.717803 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.720774842+00:00 stderr F I1013 00:23:42.720708 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.720859724+00:00 stderr F I1013 00:23:42.720819 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721046019+00:00 stderr F I1013 00:23:42.721008 1 reflector.go:351] Caches populated for *v1.RoleBindingRestriction from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721133022+00:00 stderr F I1013 00:23:42.720814 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721133022+00:00 stderr F I1013 00:23:42.720838 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721174533+00:00 stderr F I1013 00:23:42.721140 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721399249+00:00 stderr F I1013 00:23:42.721363 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721425500+00:00 stderr F I1013 00:23:42.721403 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721491192+00:00 stderr F I1013 00:23:42.721432 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721505362+00:00 stderr F I1013 00:23:42.721482 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721609995+00:00 stderr F I1013 00:23:42.721573 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721627565+00:00 stderr F I1013 00:23:42.721600 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.721779710+00:00 stderr F I1013 00:23:42.721730 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"crc\" does not exist" 2025-10-13T00:23:42.721828501+00:00 stderr F I1013 00:23:42.721796 1 topologycache.go:217] "Ignoring node because it has an excluded label" node="crc" 2025-10-13T00:23:42.721916743+00:00 stderr F I1013 00:23:42.721883 1 topologycache.go:253] "Insufficient node info for topology hints" totalZones=0 totalCPU="0" sufficientNodeInfo=true 2025-10-13T00:23:42.723070346+00:00 stderr F I1013 00:23:42.723014 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.723186939+00:00 stderr F I1013 00:23:42.723149 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.723318343+00:00 
stderr F I1013 00:23:42.723283 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.723458516+00:00 stderr F I1013 00:23:42.723410 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.723458516+00:00 stderr F I1013 00:23:42.723439 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.723534619+00:00 stderr F I1013 00:23:42.723505 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.723590630+00:00 stderr F I1013 00:23:42.723562 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.724113705+00:00 stderr F I1013 00:23:42.724087 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.724291930+00:00 stderr F I1013 00:23:42.724260 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.724437544+00:00 stderr F I1013 00:23:42.724388 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.724458634+00:00 stderr F I1013 00:23:42.724426 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.724563067+00:00 stderr F I1013 00:23:42.723535 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.725163814+00:00 stderr F I1013 00:23:42.725127 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.725283747+00:00 stderr F I1013 00:23:42.725252 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.725719069+00:00 stderr F I1013 00:23:42.725642 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.725957166+00:00 stderr F I1013 00:23:42.725909 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726065549+00:00 stderr F I1013 00:23:42.726040 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726119301+00:00 stderr F I1013 00:23:42.726084 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726168322+00:00 stderr F I1013 00:23:42.726142 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726225264+00:00 stderr F I1013 00:23:42.726197 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726316286+00:00 stderr F I1013 00:23:42.726227 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726449760+00:00 stderr F I1013 00:23:42.726406 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.726588174+00:00 stderr F I1013 00:23:42.726559 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.727036146+00:00 stderr F I1013 00:23:42.726991 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.727346695+00:00 stderr F I1013 00:23:42.727300 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.727976782+00:00 stderr F I1013 00:23:42.727931 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.728946999+00:00 stderr F I1013 00:23:42.728828 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.729106234+00:00 stderr F I1013 00:23:42.729062 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.729895926+00:00 stderr F I1013 00:23:42.729827 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.731209522+00:00 stderr F I1013 00:23:42.730980 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.733393823+00:00 stderr P I1013 00:23:42.732539 1 garbagecollector.go:241] "syncing garbage collector with updated resources from discovery" attempt=1 diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apiserver.openshift.io/v1, Resource=apirequestcounts apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1, Resource=clusterautoscalers autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds certificates.k8s.io/v1, Resource=certificatesigningrequests config.openshift.io/v1, Resource=apiservers config.openshift.io/v1, Resource=authentications config.openshift.io/v1, Resource=builds config.openshift.io/v1, Resource=clusteroperators config.openshift.io/v1, Resource=clusterversions config.openshift.io/v1, 
Resource=consoles config.openshift.io/v1, Resource=dnses config.openshift.io/v1, Resource=featuregates config.openshift.io/v1, Resource=imagecontentpolicies config.openshift.io/v1, Resource=imagedigestmirrorsets config.openshift.io/v1, Resource=images config.openshift.io/v1, Resource=imagetagmirrorsets config.openshift.io/v1, Resource=infrastructures config.openshift.io/v1, Resource=ingresses config.openshift.io/v1, Resource=networks config.openshift.io/v1, Resource=nodes config.openshift.io/v1, Resource=oauths config.openshift.io/v1, Resource=operatorhubs config.openshift.io/v1, Resource=projects config.openshift.io/v1, Resource=proxies config.openshift.io/v1, Resource=schedulers console.openshift.io/v1, Resource=consoleclidownloads console.openshift.io/v1, Resource=consoleexternalloglinks console.openshift.io/v1, Resource=consolelinks console.openshift.io/v1, Resource=consolenotifications console.openshift.io/v1, Resource=consoleplugins console.openshift.io/v1, Resource=consolequickstarts console.openshift.io/v1, Resource=consolesamples console.openshift.io/v1, Resource=consoleyamlsamples controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1, Resource=prioritylevelconfigurations helm.openshift.io/v1beta1, Resource=helmchartrepositories helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=images image.openshift.io/v1, Resource=imagestreams imageregistry.operator.openshift.io/v1, Resource=configs imageregistry.operator.openshift.io/v1, Resource=imagepruners infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=adminpolicybasedexternalroutes k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressips k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets machineconfiguration.openshift.io/v1, Resource=containerruntimeconfigs machineconfiguration.openshift.io/v1, Resource=controllerconfigs machineconfiguration.openshift.io/v1, Resource=kubeletconfigs machineconfiguration.openshift.io/v1, Resource=machineconfigpools machineconfiguration.openshift.io/v1, Resource=machineconfigs migration.k8s.io/v1alpha1, Resource=storagestates migration.k8s.io/v1alpha1, Resource=storageversionmigrations monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters 
network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses oauth.openshift.io/v1, Resource=oauthaccesstokens oauth.openshift.io/v1, Resource=oauthauthorizetokens oauth.openshift.io/v1, Resource=oauthclientauthorizations oauth.openshift.io/v1, Resource=oauthclients oauth.openshift.io/v1, Resource=useroauthaccesstokens operator.openshift.io/v1, Resource=authentications operator.openshift.io/v1, Resource=clustercsidrivers operator.openshift.io/v1, Resource=configs operator.openshift.io/v1, Resource=consoles operator.openshift.io/v1, Resource=csisnapshotcontrollers operator.openshift.io/v1, Resource=dnses operator.openshift.io/v1, Resource=etcds operator.openshift.io/v1, Resource=ingresscontrollers operator.openshift.io/v1, Resource=kubeapiservers operator.openshift.io/v1, Resource=kubecontrollermanagers operator.openshift.io/v1, Resource=kubeschedulers operator.openshift.io/v1, Resource=kubestorageversionmigrators operator.openshift.io/v1, Resource=machineconfigurations operator.openshift.io/v1, Resource=networks operator.openshift.io/v1, Resource=openshiftapiservers operator.openshift.io/v1, Resource=openshiftcontrollermanagers operator.openshift.io/v1, Resource=servicecas operator.openshift.io/v1, Resource=storages operator.openshift.io/v1alpha1, Resource=imagecontentsourcepolicies operators.coreos.com/v1, Resource=olmconfigs operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1, Resource=operators operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy.networking.k8s.io/v1alpha1, Resource=adminnetworkpolicies policy.networking.k8s.io/v1alpha1, Resource=baselineadminnetworkpolicies policy/v1, Resource=poddisruptionbudgets project.openshift.io/v1, Resource=projects quota.openshift.io/v1, Resource=clusterresourcequotas rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes samples.operator.openshift.io/v1, Resource=configs scheduling.k8s.io/v1, Resource=priorityclasses security.internal.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=securitycontextconstraints storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments t 2025-10-13T00:23:42.733437534+00:00 stderr F emplate.openshift.io/v1, Resource=brokertemplateinstances template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates user.openshift.io/v1, Resource=groups user.openshift.io/v1, Resource=identities user.openshift.io/v1, Resource=users whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-10-13T00:23:42.733437534+00:00 stderr F I1013 00:23:42.732878 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-10-13T00:23:42.733791464+00:00 stderr F I1013 00:23:42.733745 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.735011178+00:00 stderr F I1013 00:23:42.734979 1 shared_informer.go:318] Caches are synced for TTL after finished 2025-10-13T00:23:42.735814121+00:00 stderr F I1013 00:23:42.735749 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.736462549+00:00 stderr F I1013 00:23:42.736426 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.736992663+00:00 stderr F I1013 00:23:42.736935 1 job_controller.go:554] "enqueueing job" key="openshift-image-registry/image-pruner-29338560" 2025-10-13T00:23:42.737386214+00:00 stderr F I1013 00:23:42.737356 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.739156764+00:00 stderr F I1013 00:23:42.739118 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-10-13T00:23:42.739216555+00:00 stderr F I1013 00:23:42.739187 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950" 2025-10-13T00:23:42.739231066+00:00 stderr F I1013 00:23:42.739222 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29338575" 2025-10-13T00:23:42.739295217+00:00 stderr F I1013 00:23:42.739265 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.740378748+00:00 stderr F I1013 00:23:42.740349 1 reflector.go:351] Caches populated for *v1.Template from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.745382177+00:00 stderr F I1013 00:23:42.742480 1 shared_informer.go:318] Caches are synced for job 2025-10-13T00:23:42.745382177+00:00 stderr F I1013 00:23:42.742955 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.745382177+00:00 stderr F I1013 00:23:42.743540 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.750667084+00:00 stderr F I1013 00:23:42.750605 1 shared_informer.go:318] Caches are synced for deployment 2025-10-13T00:23:42.756436155+00:00 stderr F I1013 00:23:42.756367 1 shared_informer.go:318] Caches are synced for ReplicaSet 2025-10-13T00:23:42.756597590+00:00 stderr F I1013 00:23:42.756554 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="119.744µs" 2025-10-13T00:23:42.756696112+00:00 stderr F I1013 00:23:42.756675 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="86.992µs" 2025-10-13T00:23:42.756792965+00:00 stderr F I1013 00:23:42.756771 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="74.472µs" 2025-10-13T00:23:42.756937189+00:00 stderr F I1013 00:23:42.756872 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="82.252µs" 2025-10-13T00:23:42.756996591+00:00 stderr F I1013 00:23:42.756967 1 replica_set.go:676] "Finished 
syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="77.553µs" 2025-10-13T00:23:42.757074963+00:00 stderr F I1013 00:23:42.757041 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="47.682µs" 2025-10-13T00:23:42.757074963+00:00 stderr F I1013 00:23:42.757051 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="156.485µs" 2025-10-13T00:23:42.757190636+00:00 stderr F I1013 00:23:42.757128 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="64.402µs" 2025-10-13T00:23:42.757190636+00:00 stderr F I1013 00:23:42.757178 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="83.202µs" 2025-10-13T00:23:42.757274988+00:00 stderr F I1013 00:23:42.757248 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="54.012µs" 2025-10-13T00:23:42.757274988+00:00 stderr F I1013 00:23:42.757261 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="69.512µs" 2025-10-13T00:23:42.757459534+00:00 stderr F I1013 00:23:42.757345 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="74.172µs" 2025-10-13T00:23:42.757459534+00:00 stderr F I1013 00:23:42.757369 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="89.713µs" 2025-10-13T00:23:42.757459534+00:00 stderr F I1013 00:23:42.757396 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="33.901µs" 2025-10-13T00:23:42.757459534+00:00 stderr F I1013 00:23:42.757430 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="41.191µs" 2025-10-13T00:23:42.757482414+00:00 stderr F I1013 00:23:42.757456 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="41.971µs" 2025-10-13T00:23:42.757500245+00:00 stderr F I1013 00:23:42.757482 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="38.621µs" 2025-10-13T00:23:42.757792203+00:00 stderr F I1013 00:23:42.757547 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="50.932µs" 2025-10-13T00:23:42.757792203+00:00 stderr F I1013 00:23:42.757599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="36.341µs" 2025-10-13T00:23:42.757792203+00:00 stderr F I1013 00:23:42.757651 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="39.991µs" 2025-10-13T00:23:42.760191240+00:00 stderr F I1013 00:23:42.758093 1 shared_informer.go:318] Caches are synced for taint 2025-10-13T00:23:42.760238471+00:00 stderr F I1013 00:23:42.760216 1 node_lifecycle_controller.go:676] "Controller observed a new Node" node="crc" 2025-10-13T00:23:42.760280032+00:00 stderr F I1013 
00:23:42.758194 1 shared_informer.go:318] Caches are synced for HPA 2025-10-13T00:23:42.760280032+00:00 stderr F I1013 00:23:42.760247 1 controller_utils.go:173] "Recording event message for node" event="Registered Node crc in Controller" node="crc" 2025-10-13T00:23:42.760305403+00:00 stderr F I1013 00:23:42.760292 1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone="" 2025-10-13T00:23:42.760305403+00:00 stderr F I1013 00:23:42.758310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="94.803µs" 2025-10-13T00:23:42.760313263+00:00 stderr F I1013 00:23:42.758355 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="54.572µs" 2025-10-13T00:23:42.760313263+00:00 stderr F I1013 00:23:42.759751 1 shared_informer.go:318] Caches are synced for expand 2025-10-13T00:23:42.761024973+00:00 stderr F I1013 00:23:42.760991 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="128.173µs" 2025-10-13T00:23:42.761205548+00:00 stderr F I1013 00:23:42.761165 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="82.742µs" 2025-10-13T00:23:42.761348942+00:00 stderr F I1013 00:23:42.761305 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="102.143µs" 2025-10-13T00:23:42.761364492+00:00 stderr F I1013 00:23:42.759803 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="2.054807ms" 2025-10-13T00:23:42.761495846+00:00 stderr F I1013 00:23:42.761473 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="106.123µs" 2025-10-13T00:23:42.761596919+00:00 stderr F I1013 00:23:42.761539 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="166.315µs" 2025-10-13T00:23:42.761629300+00:00 stderr F I1013 00:23:42.761610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="95.853µs" 2025-10-13T00:23:42.761737353+00:00 stderr F I1013 00:23:42.761688 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="128.703µs" 2025-10-13T00:23:42.761745783+00:00 stderr F I1013 00:23:42.761733 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="94.063µs" 2025-10-13T00:23:42.761815345+00:00 stderr F I1013 00:23:42.761794 1 event.go:376] "Event occurred" object="crc" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node crc event: Registered Node crc in Controller" 2025-10-13T00:23:42.761885937+00:00 stderr F I1013 00:23:42.761864 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="151.894µs" 2025-10-13T00:23:42.762030261+00:00 stderr F I1013 00:23:42.761994 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="100.292µs" 2025-10-13T00:23:42.762030261+00:00 stderr F I1013 00:23:42.762009 1 replica_set.go:676] "Finished syncing" 
kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="130.514µs" 2025-10-13T00:23:42.762164365+00:00 stderr F I1013 00:23:42.762134 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="98.583µs" 2025-10-13T00:23:42.762164365+00:00 stderr F I1013 00:23:42.762141 1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="crc" 2025-10-13T00:23:42.762231586+00:00 stderr F I1013 00:23:42.762171 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="145.574µs" 2025-10-13T00:23:42.762353990+00:00 stderr F I1013 00:23:42.762262 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="86.343µs" 2025-10-13T00:23:42.762353990+00:00 stderr F I1013 00:23:42.762294 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="76.372µs" 2025-10-13T00:23:42.762353990+00:00 stderr F I1013 00:23:42.762296 1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal" 2025-10-13T00:23:42.762387021+00:00 stderr F I1013 00:23:42.759868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="103.083µs" 2025-10-13T00:23:42.762467803+00:00 stderr F I1013 00:23:42.762436 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="132.094µs" 2025-10-13T00:23:42.762616877+00:00 stderr F I1013 00:23:42.762581 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="105.483µs" 2025-10-13T00:23:42.762616877+00:00 stderr F I1013 00:23:42.762588 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="167.515µs" 2025-10-13T00:23:42.762766701+00:00 stderr F I1013 00:23:42.762741 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75b7bb6564" duration="128.313µs" 2025-10-13T00:23:42.762854294+00:00 stderr F I1013 00:23:42.762823 1 shared_informer.go:318] Caches are synced for PV protection 2025-10-13T00:23:42.762944056+00:00 stderr F I1013 00:23:42.762904 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="116.704µs" 2025-10-13T00:23:42.763147192+00:00 stderr F I1013 00:23:42.763104 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="128.644µs" 2025-10-13T00:23:42.763286056+00:00 stderr F I1013 00:23:42.763257 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="948.547µs" 2025-10-13T00:23:42.763286056+00:00 stderr F I1013 00:23:42.763268 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="110.213µs" 2025-10-13T00:23:42.763444860+00:00 stderr F I1013 00:23:42.763382 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="101.992µs" 2025-10-13T00:23:42.763517442+00:00 stderr F I1013 
00:23:42.763482 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="172.545µs" 2025-10-13T00:23:42.763642976+00:00 stderr F I1013 00:23:42.763566 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="164.144µs" 2025-10-13T00:23:42.763642976+00:00 stderr F I1013 00:23:42.763619 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="89.772µs" 2025-10-13T00:23:42.763730868+00:00 stderr F I1013 00:23:42.763704 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="60.221µs" 2025-10-13T00:23:42.763801360+00:00 stderr F I1013 00:23:42.763755 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="134.833µs" 2025-10-13T00:23:42.763814741+00:00 stderr F I1013 00:23:42.763806 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="74.422µs" 2025-10-13T00:23:42.763863582+00:00 stderr F I1013 00:23:42.763838 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="69.772µs" 2025-10-13T00:23:42.763922364+00:00 stderr F I1013 00:23:42.763900 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="78.152µs" 2025-10-13T00:23:42.764018176+00:00 stderr F I1013 00:23:42.761863 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="95.593µs" 2025-10-13T00:23:42.764018176+00:00 stderr F I1013 00:23:42.764003 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="70.252µs" 2025-10-13T00:23:42.764018176+00:00 stderr F I1013 00:23:42.762913 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="275.928µs" 2025-10-13T00:23:42.764027816+00:00 stderr F I1013 00:23:42.759989 1 shared_informer.go:318] Caches are synced for certificate-csrapproving 2025-10-13T00:23:42.764071168+00:00 stderr F I1013 00:23:42.760021 1 shared_informer.go:318] Caches are synced for namespace 2025-10-13T00:23:42.764082298+00:00 stderr F I1013 00:23:42.759888 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="2.346005ms" 2025-10-13T00:23:42.764082298+00:00 stderr F I1013 00:23:42.764037 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="179.125µs" 2025-10-13T00:23:42.764284394+00:00 stderr F I1013 00:23:42.760002 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator 2025-10-13T00:23:42.764369616+00:00 stderr F I1013 00:23:42.764344 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="251.077µs" 2025-10-13T00:23:42.764652064+00:00 stderr F I1013 00:23:42.764604 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="87.453µs" 2025-10-13T00:23:42.764696855+00:00 stderr F I1013 00:23:42.764663 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="76.022µs" 2025-10-13T00:23:42.765296432+00:00 stderr F I1013 00:23:42.765242 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="235.457µs" 2025-10-13T00:23:42.765296432+00:00 stderr F I1013 00:23:42.765257 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="135.664µs" 2025-10-13T00:23:42.765311522+00:00 stderr F I1013 00:23:42.765272 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="405.901µs" 2025-10-13T00:23:42.765456206+00:00 stderr F I1013 00:23:42.765414 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="219.527µs" 2025-10-13T00:23:42.765456206+00:00 stderr F I1013 00:23:42.765443 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="125.263µs" 2025-10-13T00:23:42.765567009+00:00 stderr F I1013 00:23:42.765539 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="812.663µs" 2025-10-13T00:23:42.765595010+00:00 stderr F I1013 00:23:42.765566 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="94.903µs" 2025-10-13T00:23:42.765716494+00:00 stderr F I1013 00:23:42.765684 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="211.946µs" 2025-10-13T00:23:42.765716494+00:00 stderr F I1013 00:23:42.765700 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="97.753µs" 2025-10-13T00:23:42.765888458+00:00 stderr F I1013 00:23:42.765851 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="101.813µs" 2025-10-13T00:23:42.765967641+00:00 stderr F I1013 00:23:42.765937 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="185.815µs" 2025-10-13T00:23:42.766015712+00:00 stderr F I1013 00:23:42.765986 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="85.752µs" 2025-10-13T00:23:42.766101964+00:00 stderr F I1013 00:23:42.766076 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="94.312µs" 2025-10-13T00:23:42.766239498+00:00 stderr F I1013 00:23:42.766214 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="182.395µs" 2025-10-13T00:23:42.766508096+00:00 stderr F I1013 00:23:42.766474 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" 
duration="214.706µs" 2025-10-13T00:23:42.766620849+00:00 stderr F I1013 00:23:42.766600 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="82.802µs" 2025-10-13T00:23:42.767833502+00:00 stderr F I1013 00:23:42.767787 1 shared_informer.go:318] Caches are synced for PVC protection 2025-10-13T00:23:42.769493619+00:00 stderr F I1013 00:23:42.769457 1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner 2025-10-13T00:23:42.771541936+00:00 stderr F I1013 00:23:42.771501 1 shared_informer.go:318] Caches are synced for cronjob 2025-10-13T00:23:42.772164833+00:00 stderr F I1013 00:23:42.772134 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="6.80693ms" 2025-10-13T00:23:42.772177694+00:00 stderr F I1013 00:23:42.772167 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="6.864071ms" 2025-10-13T00:23:42.772466952+00:00 stderr F I1013 00:23:42.772424 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="6.742017ms" 2025-10-13T00:23:42.772511503+00:00 stderr F I1013 00:23:42.772482 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="6.373098ms" 2025-10-13T00:23:42.774098767+00:00 stderr F I1013 00:23:42.774063 1 shared_informer.go:318] Caches are synced for ReplicationController 2025-10-13T00:23:42.780813224+00:00 stderr F I1013 00:23:42.780782 1 shared_informer.go:318] Caches are synced for persistent volume 2025-10-13T00:23:42.781010310+00:00 stderr F I1013 00:23:42.780992 1 shared_informer.go:318] Caches are synced for service account 2025-10-13T00:23:42.781449392+00:00 stderr F I1013 00:23:42.781427 1 shared_informer.go:318] Caches are synced for stateful set 2025-10-13T00:23:42.784059834+00:00 stderr F I1013 00:23:42.784010 1 shared_informer.go:318] Caches are synced for endpoint_slice 2025-10-13T00:23:42.784059834+00:00 stderr F I1013 00:23:42.784034 1 endpointslice_controller.go:271] "Starting worker threads" total=5 2025-10-13T00:23:42.788607711+00:00 stderr F I1013 00:23:42.788547 1 shared_informer.go:318] Caches are synced for ephemeral 2025-10-13T00:23:42.789173337+00:00 stderr F I1013 00:23:42.789135 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring 2025-10-13T00:23:42.789188747+00:00 stderr F I1013 00:23:42.789170 1 endpointslicemirroring_controller.go:230] "Starting worker threads" total=5 2025-10-13T00:23:42.789197638+00:00 stderr F I1013 00:23:42.789183 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.792900561+00:00 stderr F I1013 00:23:42.792873 1 shared_informer.go:318] Caches are synced for GC 2025-10-13T00:23:42.793614611+00:00 stderr F I1013 00:23:42.793582 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[apps/v1/daemonset, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" observed="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2025-10-13T00:23:42.796355387+00:00 stderr F I1013 00:23:42.796314 1 shared_informer.go:318] Caches are synced for disruption 
2025-10-13T00:23:42.796923683+00:00 stderr F I1013 00:23:42.796887 1 graph_builder.go:407] "item references an owner with coordinates that do not match the observed identity" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2025-10-13T00:23:42.797006205+00:00 stderr F I1013 00:23:42.796981 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[v1/Node, namespace: openshift-machine-config-operator, name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" observed="[v1/Node, namespace: , name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" 2025-10-13T00:23:42.797912890+00:00 stderr F I1013 00:23:42.797893 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-10-13T00:23:42.805369188+00:00 stderr F I1013 00:23:42.804290 1 shared_informer.go:318] Caches are synced for attach detach 2025-10-13T00:23:42.808537656+00:00 stderr F I1013 00:23:42.805843 1 shared_informer.go:318] Caches are synced for taint-eviction-controller 2025-10-13T00:23:42.808537656+00:00 stderr F I1013 00:23:42.808151 1 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.808574647+00:00 stderr F I1013 00:23:42.808541 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.811845388+00:00 stderr F I1013 00:23:42.808689 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.811845388+00:00 stderr F I1013 00:23:42.809135 1 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.811845388+00:00 stderr F I1013 00:23:42.809238 1 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.811845388+00:00 stderr F I1013 00:23:42.809377 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.813388531+00:00 stderr F I1013 00:23:42.813364 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.813529115+00:00 stderr F I1013 00:23:42.813511 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814449971+00:00 stderr F I1013 00:23:42.814407 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client 2025-10-13T00:23:42.814498412+00:00 stderr F I1013 00:23:42.814472 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814590265+00:00 stderr F I1013 00:23:42.814414 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814626836+00:00 stderr F I1013 00:23:42.814599 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814703778+00:00 stderr F I1013 00:23:42.814675 1 reflector.go:351] Caches 
populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814733809+00:00 stderr F I1013 00:23:42.814714 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814801691+00:00 stderr F I1013 00:23:42.814785 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814842752+00:00 stderr F I1013 00:23:42.814809 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814905074+00:00 stderr F I1013 00:23:42.814477 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814959855+00:00 stderr F I1013 00:23:42.814923 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.814959855+00:00 stderr F I1013 00:23:42.814943 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815038087+00:00 stderr F I1013 00:23:42.815014 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815038087+00:00 stderr F I1013 00:23:42.815024 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815108019+00:00 stderr F I1013 00:23:42.815086 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815108019+00:00 stderr F I1013 00:23:42.815098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815201302+00:00 stderr F I1013 00:23:42.815175 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815302555+00:00 stderr F I1013 00:23:42.815274 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815339796+00:00 stderr F I1013 00:23:42.815310 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815426218+00:00 stderr F I1013 00:23:42.815407 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815472099+00:00 stderr F I1013 00:23:42.815445 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/DNS, namespace: openshift-dns, name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" observed="[operator.openshift.io/v1/DNS, namespace: , name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" 2025-10-13T00:23:42.815482930+00:00 stderr F I1013 00:23:42.815476 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815559512+00:00 stderr F I1013 00:23:42.815535 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815600923+00:00 stderr F I1013 00:23:42.814884 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815633874+00:00 stderr F I1013 00:23:42.815612 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815696856+00:00 stderr F I1013 00:23:42.815539 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815696856+00:00 stderr F I1013 00:23:42.815690 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815787378+00:00 stderr F I1013 00:23:42.815764 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.815882431+00:00 stderr F I1013 00:23:42.815858 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" 2025-10-13T00:23:42.815893721+00:00 stderr F I1013 00:23:42.815882 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" 2025-10-13T00:23:42.816038625+00:00 stderr F I1013 00:23:42.816000 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816038625+00:00 stderr F I1013 00:23:42.814677 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816105917+00:00 stderr F I1013 00:23:42.816083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816148288+00:00 stderr F I1013 00:23:42.816126 1 reflector.go:351] Caches populated for *v1.UserOAuthAccessToken from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816175439+00:00 stderr F I1013 00:23:42.816158 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816247211+00:00 stderr F I1013 00:23:42.816225 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816315183+00:00 stderr F I1013 00:23:42.816294 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816405845+00:00 stderr F I1013 00:23:42.816382 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816441306+00:00 stderr F I1013 00:23:42.816417 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816465997+00:00 stderr F I1013 00:23:42.816449 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816511678+00:00 stderr F I1013 00:23:42.816494 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816553640+00:00 stderr F I1013 00:23:42.816529 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Console, namespace: openshift-console, name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" observed="[operator.openshift.io/v1/Console, namespace: , name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" 2025-10-13T00:23:42.816562140+00:00 stderr F I1013 00:23:42.816555 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816647962+00:00 stderr F I1013 00:23:42.816632 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816659163+00:00 stderr F I1013 00:23:42.816652 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816710444+00:00 stderr F I1013 00:23:42.816696 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816779436+00:00 stderr F I1013 00:23:42.816765 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816843718+00:00 stderr F I1013 00:23:42.816830 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816907699+00:00 stderr F I1013 00:23:42.816890 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.816962751+00:00 stderr F I1013 00:23:42.816944 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817041803+00:00 stderr F I1013 00:23:42.816995 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817137436+00:00 stderr F I1013 00:23:42.817118 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817146386+00:00 stderr F I1013 00:23:42.817133 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817225978+00:00 stderr F I1013 00:23:42.817204 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817337951+00:00 stderr F I1013 00:23:42.817303 1 reflector.go:351] Caches populated for *v1.BrokerTemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817382633+00:00 stderr F I1013 00:23:42.817317 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.817979849+00:00 stderr F I1013 00:23:42.817940 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.818231306+00:00 stderr F I1013 00:23:42.818193 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.818231306+00:00 stderr F I1013 00:23:42.818216 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.818315399+00:00 stderr F I1013 00:23:42.818282 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.818315399+00:00 stderr F I1013 00:23:42.818302 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.818402321+00:00 stderr F I1013 00:23:42.818370 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.818531695+00:00 stderr F I1013 00:23:42.818497 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.819049789+00:00 stderr F I1013 00:23:42.819011 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.819167192+00:00 stderr F I1013 00:23:42.819132 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.819254515+00:00 stderr F I1013 00:23:42.819222 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.819360228+00:00 stderr F I1013 00:23:42.819339 1 reflector.go:351] Caches populated for *v1.RangeAllocation from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.819646806+00:00 stderr F I1013 00:23:42.819612 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.820317214+00:00 stderr F I1013 00:23:42.820278 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.820724766+00:00 stderr F I1013 00:23:42.820686 1 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.821351673+00:00 stderr F I1013 00:23:42.821293 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown 2025-10-13T00:23:42.821383704+00:00 stderr F I1013 00:23:42.821368 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client 2025-10-13T00:23:42.825398306+00:00 stderr F I1013 00:23:42.823558 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving 2025-10-13T00:23:42.826953379+00:00 stderr F I1013 00:23:42.826921 1 shared_informer.go:318] Caches are synced for resource quota 2025-10-13T00:23:42.827863745+00:00 stderr F I1013 00:23:42.827821 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-10-13T00:23:42.828271466+00:00 stderr F I1013 00:23:42.828226 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.830447267+00:00 stderr F I1013 00:23:42.830404 1 shared_informer.go:318] Caches are synced for daemon sets 2025-10-13T00:23:42.830447267+00:00 stderr F I1013 00:23:42.830418 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-10-13T00:23:42.830447267+00:00 stderr F I1013 00:23:42.830422 1 shared_informer.go:318] Caches are synced for daemon sets 2025-10-13T00:23:42.837225025+00:00 stderr F I1013 00:23:42.837179 1 shared_informer.go:318] Caches are synced for crt configmap 2025-10-13T00:23:42.838655535+00:00 stderr F I1013 00:23:42.838616 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.840032324+00:00 stderr F I1013 00:23:42.840003 1 shared_informer.go:318] Caches are synced for endpoint 2025-10-13T00:23:42.849507418+00:00 stderr F I1013 00:23:42.849464 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.862923161+00:00 stderr F I1013 00:23:42.862880 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.869203386+00:00 stderr F I1013 00:23:42.869185 1 shared_informer.go:318] Caches are synced for crt configmap 2025-10-13T00:23:42.872800816+00:00 stderr F I1013 00:23:42.872778 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.881603162+00:00 stderr F I1013 00:23:42.881579 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.882991280+00:00 stderr F I1013 00:23:42.882969 1 shared_informer.go:318] Caches are synced for resource quota 2025-10-13T00:23:42.883004261+00:00 stderr F I1013 00:23:42.882988 1 resource_quota_controller.go:496] "synced quota controller" 2025-10-13T00:23:42.892188426+00:00 stderr F I1013 00:23:42.892162 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.910125296+00:00 stderr F I1013 00:23:42.910082 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.915738512+00:00 stderr F I1013 00:23:42.915697 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:42.967545936+00:00 stderr F I1013 00:23:42.967482 1 shared_informer.go:318] Caches are synced for garbage collector 2025-10-13T00:23:42.967545936+00:00 stderr F I1013 00:23:42.967520 1 garbagecollector.go:166] "All resource monitors have synced. 
Proceeding to collect garbage" 2025-10-13T00:23:42.998713474+00:00 stderr F I1013 00:23:42.998664 1 shared_informer.go:318] Caches are synced for garbage collector 2025-10-13T00:23:42.998713474+00:00 stderr F I1013 00:23:42.998691 1 garbagecollector.go:290] "synced garbage collector" 2025-10-13T00:23:42.998827557+00:00 stderr F I1013 00:23:42.998755 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: b70f0647-f140-40de-94b5-a719b7eea118]" virtual=false 2025-10-13T00:23:42.999275939+00:00 stderr F I1013 00:23:42.998844 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 46d97ac2-17af-4ffa-903a-c11271097ec9]" virtual=false 2025-10-13T00:23:42.999380082+00:00 stderr F I1013 00:23:42.998861 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-api-controllers, uid: 6e3cba9d-4203-4082-ad69-807338a0bc67]" virtual=false 2025-10-13T00:23:42.999491775+00:00 stderr F I1013 00:23:42.998861 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: c732613d-55c3-4b72-b9c5-e82223355330]" virtual=false 2025-10-13T00:23:42.999546947+00:00 stderr F I1013 00:23:42.998874 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 57452db9-9f49-4e6d-9f15-a66c5d7a3b74]" virtual=false 2025-10-13T00:23:42.999672140+00:00 stderr F I1013 00:23:42.998876 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: ingress-operator, uid: fb403acd-60bb-4e1d-812d-5b3609688224]" virtual=false 2025-10-13T00:23:42.999713812+00:00 stderr F I1013 00:23:42.998902 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" virtual=false 2025-10-13T00:23:42.999817495+00:00 stderr F I1013 00:23:42.998904 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: console-operator, uid: b185912e-a17b-4654-b0d2-b107b556fec5]" virtual=false 2025-10-13T00:23:42.999876736+00:00 stderr F I1013 00:23:42.998905 1 garbagecollector.go:549] "Processing item" item="[config.openshift.io/v1/ClusterVersion, namespace: , name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" virtual=false 2025-10-13T00:23:42.999960989+00:00 stderr F I1013 00:23:42.998921 1 garbagecollector.go:549] "Processing item" item="[operator.openshift.io/v1/Network, namespace: , name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" virtual=false 2025-10-13T00:23:43.000040211+00:00 stderr F I1013 00:23:42.998917 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" virtual=false 2025-10-13T00:23:43.000132463+00:00 stderr F I1013 00:23:42.998920 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: 
openshift-user-workload-monitoring, name: cluster-monitoring-operator, uid: 10195306-33ff-4c68-8c56-cd4ae639cbde]" virtual=false 2025-10-13T00:23:43.000239976+00:00 stderr F I1013 00:23:42.998938 1 garbagecollector.go:549] "Processing item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" virtual=false 2025-10-13T00:23:43.000342929+00:00 stderr F I1013 00:23:42.998937 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: ingress-operator, uid: 10dd3c44-d460-4ab3-b671-c27cf19cd1d5]" virtual=false 2025-10-13T00:23:43.000470173+00:00 stderr F I1013 00:23:42.998953 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" virtual=false 2025-10-13T00:23:43.000605876+00:00 stderr F I1013 00:23:42.998948 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: 6398f534-6d5a-4a68-8bf3-84a463b928f0]" virtual=false 2025-10-13T00:23:43.000734570+00:00 stderr F I1013 00:23:42.998958 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" virtual=false 2025-10-13T00:23:43.000858384+00:00 stderr F I1013 00:23:42.998970 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: aac1809c-7591-4279-9b3a-f54481665b62]" virtual=false 2025-10-13T00:23:43.000982027+00:00 stderr F I1013 00:23:42.998972 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 5ab31ba2-d0b0-4677-80c5-034b6c6541d6]" virtual=false 2025-10-13T00:23:43.001089800+00:00 stderr F I1013 00:23:42.998983 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: b81faa8b-7299-4812-aa0b-64bd21477eb2]" virtual=false 2025-10-13T00:23:43.007381125+00:00 stderr F I1013 00:23:43.007351 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" 2025-10-13T00:23:43.007462387+00:00 stderr F I1013 00:23:43.007447 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-approver, uid: bfc0a838-d2db-4b1b-b2fa-b1b3ed79837f]" virtual=false 2025-10-13T00:23:43.007702344+00:00 stderr F I1013 00:23:43.007681 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operator.openshift.io/v1/Network, namespace: , name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" 2025-10-13T00:23:43.007761706+00:00 stderr F I1013 00:23:43.007746 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: marketplace-operator, uid: ce0496bb-6d27-4a64-b0ea-2858aaaaf38f]" virtual=false 2025-10-13T00:23:43.009086833+00:00 stderr F I1013 00:23:43.009038 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[config.openshift.io/v1/ClusterVersion, namespace: , name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" 2025-10-13T00:23:43.009086833+00:00 stderr F I1013 00:23:43.009063 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: ffe4690e-17be-4ba4-b050-35c28bc0fc83]" virtual=false 2025-10-13T00:23:43.017388104+00:00 stderr F I1013 00:23:43.014251 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: 6398f534-6d5a-4a68-8bf3-84a463b928f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.017388104+00:00 stderr F I1013 00:23:43.014288 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" virtual=false 2025-10-13T00:23:43.018669810+00:00 stderr F I1013 00:23:43.018641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-user-workload-monitoring, name: cluster-monitoring-operator, uid: 10195306-33ff-4c68-8c56-cd4ae639cbde]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.018688960+00:00 stderr F I1013 00:23:43.018675 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: a213bff1-520d-4e2f-927a-40d13b2f1901]" virtual=false 2025-10-13T00:23:43.018812534+00:00 stderr F I1013 00:23:43.018793 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.018862535+00:00 stderr F I1013 00:23:43.018842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: b70f0647-f140-40de-94b5-a719b7eea118]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.018872865+00:00 stderr F I1013 00:23:43.018862 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: prometheus-k8s, uid: a9e56c6d-5ac9-4b79-990a-a087cbdff6b6]" virtual=false 2025-10-13T00:23:43.018901786+00:00 stderr F I1013 00:23:43.018889 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 4f8c14ca-36ee-4b58-aa1f-cac42171174a]" 
virtual=false 2025-10-13T00:23:43.023433632+00:00 stderr F I1013 00:23:43.022657 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: console-operator, uid: b185912e-a17b-4654-b0d2-b107b556fec5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.023433632+00:00 stderr F I1013 00:23:43.022689 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: b230d512-d31a-48b9-b1eb-5b0af8942d72]" virtual=false 2025-10-13T00:23:43.024286416+00:00 stderr F I1013 00:23:43.024045 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.024286416+00:00 stderr F I1013 00:23:43.024073 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" virtual=false 2025-10-13T00:23:43.024348958+00:00 stderr F I1013 00:23:43.024315 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: ffe4690e-17be-4ba4-b050-35c28bc0fc83]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.024364298+00:00 stderr F I1013 00:23:43.024358 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" virtual=false 2025-10-13T00:23:43.025968313+00:00 stderr F I1013 00:23:43.025928 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: b81faa8b-7299-4812-aa0b-64bd21477eb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.025968313+00:00 stderr F I1013 00:23:43.025955 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: e3ac9630-07e1-4aa4-843f-40a4b0edff98]" virtual=false 2025-10-13T00:23:43.026192199+00:00 stderr F I1013 00:23:43.026165 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: ingress-operator, uid: fb403acd-60bb-4e1d-812d-5b3609688224]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.026192199+00:00 stderr F I1013 00:23:43.026186 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: d9bf7097-7fa5-4e09-8381-9bfaa22c2f7a]" virtual=false 2025-10-13T00:23:43.033446311+00:00 stderr F I1013 00:23:43.030530 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 57452db9-9f49-4e6d-9f15-a66c5d7a3b74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.033446311+00:00 stderr F I1013 00:23:43.030557 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-version, name: prometheus-k8s, uid: 9e098fa2-db52-4c33-b39a-02358e3a9e0c]" virtual=false 2025-10-13T00:23:43.033446311+00:00 stderr F I1013 00:23:43.030742 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.033446311+00:00 stderr F I1013 00:23:43.030763 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-configmap-reader, uid: 96f1c1d8-079f-4d3e-ab80-520fea692547]" virtual=false 2025-10-13T00:23:43.050401304+00:00 stderr F I1013 00:23:43.048756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: c732613d-55c3-4b72-b9c5-e82223355330]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.050401304+00:00 stderr F I1013 00:23:43.048808 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: console-operator, uid: 4e4abf45-f8a5-4971-bef5-322574b06e51]" virtual=false 2025-10-13T00:23:43.050401304+00:00 stderr F I1013 00:23:43.049559 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 46d97ac2-17af-4ffa-903a-c11271097ec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.050401304+00:00 stderr F I1013 00:23:43.049580 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: node-ca, uid: 153c121a-eaab-48ca-a2d5-58b32eb3c97b]" virtual=false 2025-10-13T00:23:43.050401304+00:00 stderr F I1013 00:23:43.049774 1 garbagecollector.go:615] "item has at 
least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: prometheus-k8s, uid: a9e56c6d-5ac9-4b79-990a-a087cbdff6b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.050401304+00:00 stderr F I1013 00:23:43.049795 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-controllers, uid: afaa6258-38bd-452b-b419-a7a2631b3090]" virtual=false 2025-10-13T00:23:43.053611133+00:00 stderr F I1013 00:23:43.053542 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: aac1809c-7591-4279-9b3a-f54481665b62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.053611133+00:00 stderr F I1013 00:23:43.053577 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" virtual=false 2025-10-13T00:23:43.061377819+00:00 stderr F I1013 00:23:43.057652 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.061377819+00:00 stderr F I1013 00:23:43.057688 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: prometheus-k8s, uid: 68628535-73c6-4332-9439-4dbc315aff94]" virtual=false 2025-10-13T00:23:43.061377819+00:00 stderr F I1013 00:23:43.057880 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 5ab31ba2-d0b0-4677-80c5-034b6c6541d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.061377819+00:00 stderr F I1013 00:23:43.057900 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" virtual=false 2025-10-13T00:23:43.061377819+00:00 stderr F I1013 00:23:43.058113 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.061377819+00:00 stderr F I1013 00:23:43.058148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: 
console, uid: f23dffb7-2c10-4dad-918c-95f11496c236]" virtual=false 2025-10-13T00:23:43.067206712+00:00 stderr F I1013 00:23:43.066103 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.067206712+00:00 stderr F I1013 00:23:43.066177 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 1d070311-4fbd-4dae-9140-d78d7f4d4575]" virtual=false 2025-10-13T00:23:43.068702683+00:00 stderr F I1013 00:23:43.068625 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 4f8c14ca-36ee-4b58-aa1f-cac42171174a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.068723064+00:00 stderr F I1013 00:23:43.068710 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 25069215-dc01-419a-ad4e-bcf1eeb7deaf]" virtual=false 2025-10-13T00:23:43.068988861+00:00 stderr F I1013 00:23:43.068960 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-marketplace, name: marketplace-operator, uid: ce0496bb-6d27-4a64-b0ea-2858aaaaf38f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.069178977+00:00 stderr F I1013 00:23:43.069067 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: copied-csv-viewers, uid: 51e093b6-6761-48c8-bdf3-e1d1320734a3]" virtual=false 2025-10-13T00:23:43.073054635+00:00 stderr F I1013 00:23:43.072998 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: ingress-operator, uid: 10dd3c44-d460-4ab3-b671-c27cf19cd1d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.073086235+00:00 stderr F I1013 00:23:43.073064 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-operator, name: prometheus-k8s, uid: 7188429c-116a-41d4-b2a2-45b51b794769]" virtual=false 2025-10-13T00:23:43.081116139+00:00 stderr F I1013 00:23:43.081047 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: d9bf7097-7fa5-4e09-8381-9bfaa22c2f7a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-10-13T00:23:43.081154480+00:00 stderr F I1013 00:23:43.081123 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: df902098-e780-4aea-b4ba-386424eac15b]" virtual=false 2025-10-13T00:23:43.084364190+00:00 stderr F I1013 00:23:43.081590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-api-controllers, uid: 6e3cba9d-4203-4082-ad69-807338a0bc67]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.084364190+00:00 stderr F I1013 00:23:43.081639 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: b1006f5c-7799-42f1-904e-88f572ba47b3]" virtual=false 2025-10-13T00:23:43.090150171+00:00 stderr F I1013 00:23:43.089501 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: machine-approver, uid: bfc0a838-d2db-4b1b-b2fa-b1b3ed79837f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.090150171+00:00 stderr F I1013 00:23:43.089580 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-operator, uid: b434b818-3667-41cd-9f86-670cf81c3dc9]" virtual=false 2025-10-13T00:23:43.090150171+00:00 stderr F I1013 00:23:43.090123 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: a213bff1-520d-4e2f-927a-40d13b2f1901]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.090201802+00:00 stderr F I1013 00:23:43.090156 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver, name: prometheus-k8s, uid: aaaa746a-592e-4f64-9701-4c374c38bb62]" virtual=false 2025-10-13T00:23:43.094395759+00:00 stderr F I1013 00:23:43.090808 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: e3ac9630-07e1-4aa4-843f-40a4b0edff98]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.094395759+00:00 stderr F I1013 00:23:43.090859 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: console-operator, uid: fffaa351-d6ff-44c3-9a2e-3f8fb82481ce]" virtual=false 2025-10-13T00:23:43.094395759+00:00 stderr F I1013 00:23:43.091303 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-version, name: prometheus-k8s, uid: 
9e098fa2-db52-4c33-b39a-02358e3a9e0c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.094395759+00:00 stderr F I1013 00:23:43.091349 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 90f553e9-1bdb-4888-8400-8635eb23b719]" virtual=false 2025-10-13T00:23:43.110379644+00:00 stderr F I1013 00:23:43.106664 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-configmap-reader, uid: 96f1c1d8-079f-4d3e-ab80-520fea692547]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.110379644+00:00 stderr F I1013 00:23:43.106698 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: cluster-samples-operator-openshift-config-secret-reader, uid: 8e56c231-008d-4621-9c6e-23f979369b21]" virtual=false 2025-10-13T00:23:43.110379644+00:00 stderr F I1013 00:23:43.106859 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.110379644+00:00 stderr F I1013 00:23:43.106873 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" virtual=false 2025-10-13T00:23:43.110379644+00:00 stderr F I1013 00:23:43.107080 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.110379644+00:00 stderr F I1013 00:23:43.107094 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: 529eb670-d2f5-4a87-a1cc-b0a4cbd22e03]" virtual=false 2025-10-13T00:23:43.112938816+00:00 stderr F I1013 00:23:43.112907 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.112938816+00:00 stderr F I1013 00:23:43.112934 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console-operator, uid: 
4c176e02-35d3-42a6-9509-b0af233a3594]" virtual=false 2025-10-13T00:23:43.116347040+00:00 stderr F I1013 00:23:43.113112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: b230d512-d31a-48b9-b1eb-5b0af8942d72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.116347040+00:00 stderr F I1013 00:23:43.113190 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" virtual=false 2025-10-13T00:23:43.120356582+00:00 stderr F I1013 00:23:43.116587 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: console-operator, uid: 4e4abf45-f8a5-4971-bef5-322574b06e51]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.120356582+00:00 stderr F I1013 00:23:43.116615 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: bbf1a2e5-8b73-4bae-b5a6-d557ecbce86f]" virtual=false 2025-10-13T00:23:43.121542105+00:00 stderr F I1013 00:23:43.120556 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-controllers, uid: afaa6258-38bd-452b-b419-a7a2631b3090]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.121542105+00:00 stderr F I1013 00:23:43.120575 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: dns-operator, uid: f37660aa-b829-4efb-af91-c0475f8a91fd]" virtual=false 2025-10-13T00:23:43.121542105+00:00 stderr F I1013 00:23:43.120865 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-image-registry, name: node-ca, uid: 153c121a-eaab-48ca-a2d5-58b32eb3c97b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.121542105+00:00 stderr F I1013 00:23:43.120880 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/ClusterServiceVersion, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: 0beab272-7637-4d44-b3aa-502dcafbc929]" virtual=false 2025-10-13T00:23:43.121542105+00:00 stderr F I1013 00:23:43.121129 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console, uid: f23dffb7-2c10-4dad-918c-95f11496c236]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-10-13T00:23:43.121542105+00:00 stderr F I1013 00:23:43.121145 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 63fb0acf-79a0-4111-a6ec-5dda2d4bdf26]" virtual=false 2025-10-13T00:23:43.128347335+00:00 stderr F I1013 00:23:43.128231 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 1d070311-4fbd-4dae-9140-d78d7f4d4575]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.128347335+00:00 stderr F I1013 00:23:43.128259 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" virtual=false 2025-10-13T00:23:43.145844062+00:00 stderr F I1013 00:23:43.145784 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.145844062+00:00 stderr F I1013 00:23:43.145819 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: console-operator, uid: 51573cb0-4753-43d5-a80a-ca9398cf9f90]" virtual=false 2025-10-13T00:23:43.149542565+00:00 stderr F I1013 00:23:43.149509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-operator, name: prometheus-k8s, uid: 7188429c-116a-41d4-b2a2-45b51b794769]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.149580076+00:00 stderr F I1013 00:23:43.149540 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-operator, name: prometheus-k8s, uid: 282ae96b-f5ba-4a11-bc5e-9b2688f7715c]" virtual=false 2025-10-13T00:23:43.151145030+00:00 stderr F I1013 00:23:43.151080 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.151145030+00:00 stderr F I1013 00:23:43.151119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" virtual=false 2025-10-13T00:23:43.151569342+00:00 stderr F I1013 00:23:43.151448 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" 
item="[operators.coreos.com/v1alpha1/ClusterServiceVersion, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: 0beab272-7637-4d44-b3aa-502dcafbc929]" 2025-10-13T00:23:43.151569342+00:00 stderr F I1013 00:23:43.151468 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: dcbd6504-f1ec-4cd3-aa4c-d9fa262e8691]" virtual=false 2025-10-13T00:23:43.156207031+00:00 stderr F I1013 00:23:43.155616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: b1006f5c-7799-42f1-904e-88f572ba47b3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.156207031+00:00 stderr F I1013 00:23:43.155654 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6f2a9ce0-e85a-4870-a5b8-641b64dc9c93]" virtual=false 2025-10-13T00:23:43.166182409+00:00 stderr F I1013 00:23:43.166145 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver, name: prometheus-k8s, uid: aaaa746a-592e-4f64-9701-4c374c38bb62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.166214120+00:00 stderr F I1013 00:23:43.166190 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: csi-snapshot-controller-operator-authentication-reader, uid: daae309d-1001-4cca-b012-aa17027ac98c]" virtual=false 2025-10-13T00:23:43.166290572+00:00 stderr F I1013 00:23:43.166231 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 25069215-dc01-419a-ad4e-bcf1eeb7deaf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.166290572+00:00 stderr F I1013 00:23:43.166264 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: a97a87c8-9657-4a6f-88f4-38a7871d42d4]" virtual=false 2025-10-13T00:23:43.169529212+00:00 stderr F I1013 00:23:43.166456 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: cluster-samples-operator-openshift-config-secret-reader, uid: 8e56c231-008d-4621-9c6e-23f979369b21]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.169572 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 5b8609fb-7185-421b-a062-64861725c240]" virtual=false 
2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166571 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.169807 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.169862 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: bbf1a2e5-8b73-4bae-b5a6-d557ecbce86f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.169881 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: fafa61e3-9783-4739-8430-0d7952720a14]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166621 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: prometheus-k8s, uid: 68628535-73c6-4332-9439-4dbc315aff94]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.169951 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-operator, uid: e9525389-2dca-4e82-a0c8-1959e8d3f6ef]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166675 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: console-operator, uid: 4c176e02-35d3-42a6-9509-b0af233a3594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170061 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: prometheus-k8s, uid: 7ed8090b-3718-4d38-abfd-8f495986d2c4]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166701 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 90f553e9-1bdb-4888-8400-8635eb23b719]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 
00:23:43.170177 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: 239e5b05-c173-4227-8370-2ccdce453a8c]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: machine-api-operator, uid: b434b818-3667-41cd-9f86-670cf81c3dc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170202 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: bf0c3d32-dd79-434c-b9f7-cd199254cb05]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166784 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: 529eb670-d2f5-4a87-a1cc-b0a4cbd22e03]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170294 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 0c21355c-3ceb-4fa7-8948-24ef652948e2]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166795 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console, name: console-operator, uid: fffaa351-d6ff-44c3-9a2e-3f8fb82481ce]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170377 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-authentication, name: prometheus-k8s, uid: 7c67d003-8fda-45c8-9db8-f7c2add219eb]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 63fb0acf-79a0-4111-a6ec-5dda2d4bdf26]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170471 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: 5e63327b-7c2f-40c4-bdb1-3382f0df6785]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170508 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-dns-operator, name: dns-operator, uid: 
f37660aa-b829-4efb-af91-c0475f8a91fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170525 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: 9830b203-a5a1-400e-ab46-bc60cd82db13]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.166980 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: copied-csv-viewers, uid: 51e093b6-6761-48c8-bdf3-e1d1320734a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.170576 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b4dbad3d-fcdf-47df-9f99-96752d898355]" virtual=false 2025-10-13T00:23:43.173457881+00:00 stderr F I1013 00:23:43.172646 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6f2a9ce0-e85a-4870-a5b8-641b64dc9c93]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173457881+00:00 stderr P I1013 00:23:43.172692 1 garbagecollector.go:549] "Processi 2025-10-13T00:23:43.173498272+00:00 stderr F ng item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: 8b0b5b45-675e-4709-9b90-74e3f108e035]" virtual=false 2025-10-13T00:23:43.173498272+00:00 stderr F I1013 00:23:43.173315 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-operator, name: prometheus-k8s, uid: 282ae96b-f5ba-4a11-bc5e-9b2688f7715c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173498272+00:00 stderr F I1013 00:23:43.173360 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v2/OperatorCondition, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: e1433908-0107-419f-b096-c81f5e875465]" virtual=false 2025-10-13T00:23:43.173498272+00:00 stderr F I1013 00:23:43.173457 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-monitoring, name: console-operator, uid: 51573cb0-4753-43d5-a80a-ca9398cf9f90]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.173498272+00:00 stderr F I1013 00:23:43.173480 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: cluster-samples-operator-openshift-edit, uid: 51645039-f461-4bd6-b917-c350f2081dd5]" virtual=false 
2025-10-13T00:23:43.175181229+00:00 stderr F I1013 00:23:43.173566 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: df902098-e780-4aea-b4ba-386424eac15b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.175181229+00:00 stderr F I1013 00:23:43.173599 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" virtual=false 2025-10-13T00:23:43.176737043+00:00 stderr F I1013 00:23:43.176700 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: dcbd6504-f1ec-4cd3-aa4c-d9fa262e8691]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.176754523+00:00 stderr F I1013 00:23:43.176743 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" virtual=false 2025-10-13T00:23:43.177023851+00:00 stderr F I1013 00:23:43.176996 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.177090753+00:00 stderr F I1013 00:23:43.177075 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" virtual=false 2025-10-13T00:23:43.181177526+00:00 stderr F I1013 00:23:43.181138 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: csi-snapshot-controller-operator-authentication-reader, uid: daae309d-1001-4cca-b012-aa17027ac98c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.181196077+00:00 stderr F I1013 00:23:43.181183 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" virtual=false 2025-10-13T00:23:43.182455202+00:00 stderr F I1013 00:23:43.181414 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: a97a87c8-9657-4a6f-88f4-38a7871d42d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.182455202+00:00 stderr F I1013 00:23:43.181441 1 garbagecollector.go:549] "Processing item" 
item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" virtual=false 2025-10-13T00:23:43.184696164+00:00 stderr F I1013 00:23:43.183502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.184696164+00:00 stderr F I1013 00:23:43.183550 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" virtual=false 2025-10-13T00:23:43.189664933+00:00 stderr F I1013 00:23:43.188610 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-console-operator, name: prometheus-k8s, uid: 7ed8090b-3718-4d38-abfd-8f495986d2c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.189664933+00:00 stderr F I1013 00:23:43.188647 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]" virtual=false 2025-10-13T00:23:43.189664933+00:00 stderr F I1013 00:23:43.188810 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-authentication, name: prometheus-k8s, uid: 7c67d003-8fda-45c8-9db8-f7c2add219eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.189664933+00:00 stderr F I1013 00:23:43.188824 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.195751 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: 239e5b05-c173-4227-8370-2ccdce453a8c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.195788 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.195972 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 5b8609fb-7185-421b-a062-64861725c240]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.195989 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196310 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v2/OperatorCondition, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: e1433908-0107-419f-b096-c81f5e875465]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":true,"blockOwnerDeletion":false}] 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196350 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: 5e63327b-7c2f-40c4-bdb1-3382f0df6785]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196527 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196791 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]" 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196807 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196968 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-operator, uid: e9525389-2dca-4e82-a0c8-1959e8d3f6ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.196994 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.197134 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 0c21355c-3ceb-4fa7-8948-24ef652948e2]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.197155 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" virtual=false 2025-10-13T00:23:43.197344687+00:00 stderr F I1013 00:23:43.197310 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197391118+00:00 stderr F I1013 00:23:43.197347 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" virtual=false 2025-10-13T00:23:43.197529842+00:00 stderr F I1013 00:23:43.197502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: fafa61e3-9783-4739-8430-0d7952720a14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197542222+00:00 stderr F I1013 00:23:43.197527 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" virtual=false 2025-10-13T00:23:43.197935103+00:00 stderr F I1013 00:23:43.197776 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b4dbad3d-fcdf-47df-9f99-96752d898355]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.197935103+00:00 stderr F I1013 00:23:43.197802 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" virtual=false 2025-10-13T00:23:43.197976464+00:00 stderr F I1013 00:23:43.197948 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" 2025-10-13T00:23:43.197976464+00:00 stderr F I1013 00:23:43.197970 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" virtual=false 2025-10-13T00:23:43.198162819+00:00 stderr F I1013 00:23:43.198141 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: 9830b203-a5a1-400e-ab46-bc60cd82db13]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.198173250+00:00 stderr F I1013 00:23:43.198161 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" virtual=false 2025-10-13T00:23:43.198250312+00:00 stderr F I1013 00:23:43.198231 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.198312754+00:00 stderr F I1013 00:23:43.198301 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" virtual=false 2025-10-13T00:23:43.199777064+00:00 stderr F I1013 00:23:43.198577 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift, name: cluster-samples-operator-openshift-edit, uid: 51645039-f461-4bd6-b917-c350f2081dd5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.199777064+00:00 stderr F I1013 00:23:43.198623 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" virtual=false 2025-10-13T00:23:43.200217537+00:00 stderr F I1013 00:23:43.200187 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: 8b0b5b45-675e-4709-9b90-74e3f108e035]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.200300919+00:00 stderr F I1013 00:23:43.200289 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" virtual=false 2025-10-13T00:23:43.205655458+00:00 stderr F I1013 00:23:43.203945 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" 2025-10-13T00:23:43.205655458+00:00 stderr F I1013 00:23:43.203985 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 8108d04e-011f-45ba-91f8-5a8b6851d41d]" virtual=false 2025-10-13T00:23:43.205655458+00:00 stderr F I1013 00:23:43.204117 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" 2025-10-13T00:23:43.205655458+00:00 stderr F I1013 00:23:43.204145 1 garbagecollector.go:549] "Processing item" 
item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" virtual=false 2025-10-13T00:23:43.205948896+00:00 stderr F I1013 00:23:43.205928 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: bf0c3d32-dd79-434c-b9f7-cd199254cb05]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.205996098+00:00 stderr F I1013 00:23:43.205985 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" virtual=false 2025-10-13T00:23:43.206193943+00:00 stderr F I1013 00:23:43.206177 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.206233324+00:00 stderr F I1013 00:23:43.206223 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" virtual=false 2025-10-13T00:23:43.206459981+00:00 stderr F I1013 00:23:43.206439 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.206505662+00:00 stderr F I1013 00:23:43.206493 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" virtual=false 2025-10-13T00:23:43.206681087+00:00 stderr F I1013 00:23:43.206662 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.206730318+00:00 stderr F I1013 00:23:43.206717 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" virtual=false 2025-10-13T00:23:43.210361129+00:00 stderr F I1013 00:23:43.207453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.210361129+00:00 stderr F I1013 00:23:43.207485 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: 
openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" virtual=false 2025-10-13T00:23:43.210361129+00:00 stderr F I1013 00:23:43.208151 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.210361129+00:00 stderr F I1013 00:23:43.208174 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" virtual=false 2025-10-13T00:23:43.211226843+00:00 stderr F I1013 00:23:43.211205 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.211277785+00:00 stderr F I1013 00:23:43.211266 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" virtual=false 2025-10-13T00:23:43.211443579+00:00 stderr F I1013 00:23:43.211427 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.211489581+00:00 stderr F I1013 00:23:43.211479 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" virtual=false 2025-10-13T00:23:43.211564083+00:00 stderr F I1013 00:23:43.211534 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.211581663+00:00 stderr F I1013 00:23:43.211573 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" virtual=false 2025-10-13T00:23:43.212031336+00:00 stderr F I1013 00:23:43.211996 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.212069647+00:00 stderr F I1013 00:23:43.212044 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, 
uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" virtual=false 2025-10-13T00:23:43.212157719+00:00 stderr F I1013 00:23:43.212128 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.212197750+00:00 stderr F I1013 00:23:43.212187 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" virtual=false 2025-10-13T00:23:43.212350865+00:00 stderr F I1013 00:23:43.212008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.212397426+00:00 stderr F I1013 00:23:43.212386 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" virtual=false 2025-10-13T00:23:43.212496299+00:00 stderr F I1013 00:23:43.212472 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.212542030+00:00 stderr F I1013 00:23:43.212520 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" virtual=false 2025-10-13T00:23:43.213817326+00:00 stderr F I1013 00:23:43.213797 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.213869857+00:00 stderr F I1013 00:23:43.213859 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" virtual=false 2025-10-13T00:23:43.214278148+00:00 stderr F I1013 00:23:43.214263 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.214363811+00:00 stderr F I1013 00:23:43.214314 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" virtual=false 2025-10-13T00:23:43.217147118+00:00 stderr F I1013 
00:23:43.217118 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.217147118+00:00 stderr F I1013 00:23:43.217140 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" virtual=false 2025-10-13T00:23:43.217343574+00:00 stderr F I1013 00:23:43.217307 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.217343574+00:00 stderr F I1013 00:23:43.217339 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" virtual=false 2025-10-13T00:23:43.217586961+00:00 stderr F I1013 00:23:43.217522 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.217586961+00:00 stderr F I1013 00:23:43.217548 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" virtual=false 2025-10-13T00:23:43.217963201+00:00 stderr F I1013 00:23:43.217947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.218003342+00:00 stderr F I1013 00:23:43.217993 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" virtual=false 2025-10-13T00:23:43.218037343+00:00 stderr F I1013 00:23:43.218020 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" 2025-10-13T00:23:43.218044993+00:00 stderr F I1013 00:23:43.218039 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" virtual=false 2025-10-13T00:23:43.218168397+00:00 stderr F I1013 00:23:43.218153 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.218217058+00:00 stderr F I1013 00:23:43.218204 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" virtual=false 2025-10-13T00:23:43.218356902+00:00 stderr F I1013 00:23:43.218341 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.218420984+00:00 stderr F I1013 00:23:43.218410 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" virtual=false 2025-10-13T00:23:43.218863866+00:00 stderr F I1013 00:23:43.218847 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.218904597+00:00 stderr F I1013 00:23:43.218894 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" virtual=false 2025-10-13T00:23:43.219064642+00:00 stderr F I1013 00:23:43.219050 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.219102693+00:00 stderr F I1013 00:23:43.219093 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" virtual=false 2025-10-13T00:23:43.219242787+00:00 stderr F I1013 00:23:43.219193 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 8108d04e-011f-45ba-91f8-5a8b6851d41d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.219253817+00:00 stderr F I1013 00:23:43.219240 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" virtual=false 2025-10-13T00:23:43.221114139+00:00 
stderr F I1013 00:23:43.221064 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.221114139+00:00 stderr F I1013 00:23:43.221097 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" virtual=false 2025-10-13T00:23:43.223923387+00:00 stderr F I1013 00:23:43.223842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.223923387+00:00 stderr F I1013 00:23:43.223899 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" virtual=false 2025-10-13T00:23:43.229230325+00:00 stderr F I1013 00:23:43.229187 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.229258676+00:00 stderr F I1013 00:23:43.229248 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" virtual=false 2025-10-13T00:23:43.229320987+00:00 stderr F I1013 00:23:43.229300 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.229365659+00:00 stderr F I1013 00:23:43.229344 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" virtual=false 2025-10-13T00:23:43.229459531+00:00 stderr F I1013 00:23:43.229444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.229498812+00:00 stderr F I1013 00:23:43.229488 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: 
openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" virtual=false 2025-10-13T00:23:43.229605855+00:00 stderr F I1013 00:23:43.229592 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.229645096+00:00 stderr F I1013 00:23:43.229635 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" virtual=false 2025-10-13T00:23:43.229931884+00:00 stderr F I1013 00:23:43.229911 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.229991816+00:00 stderr F I1013 00:23:43.229976 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" virtual=false 2025-10-13T00:23:43.230180611+00:00 stderr F I1013 00:23:43.230162 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.230231233+00:00 stderr F I1013 00:23:43.230217 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" virtual=false 2025-10-13T00:23:43.232508376+00:00 stderr F I1013 00:23:43.232469 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.232541627+00:00 stderr F I1013 00:23:43.232526 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" virtual=false 2025-10-13T00:23:43.232842366+00:00 stderr F I1013 00:23:43.232813 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.232855166+00:00 stderr F I1013 00:23:43.232842 1 
garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" virtual=false 2025-10-13T00:23:43.232933058+00:00 stderr F I1013 00:23:43.232919 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.233005940+00:00 stderr F I1013 00:23:43.232994 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" virtual=false 2025-10-13T00:23:43.235943022+00:00 stderr F I1013 00:23:43.235488 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.235943022+00:00 stderr F I1013 00:23:43.235509 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" virtual=false 2025-10-13T00:23:43.235943022+00:00 stderr F I1013 00:23:43.235635 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.235943022+00:00 stderr F I1013 00:23:43.235650 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" virtual=false 2025-10-13T00:23:43.235943022+00:00 stderr F I1013 00:23:43.235930 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.235991963+00:00 stderr F I1013 00:23:43.235946 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" virtual=false 2025-10-13T00:23:43.236219820+00:00 stderr F I1013 00:23:43.236060 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.236219820+00:00 stderr F I1013 00:23:43.236077 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" virtual=false 2025-10-13T00:23:43.236278671+00:00 stderr F I1013 00:23:43.236250 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.236278671+00:00 stderr F I1013 00:23:43.236268 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" virtual=false 2025-10-13T00:23:43.236488657+00:00 stderr F I1013 00:23:43.236432 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.236488657+00:00 stderr F I1013 00:23:43.236451 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" virtual=false 2025-10-13T00:23:43.238934495+00:00 stderr F I1013 00:23:43.238914 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.238987887+00:00 stderr F I1013 00:23:43.238977 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" virtual=false 2025-10-13T00:23:43.239248554+00:00 stderr F I1013 00:23:43.239216 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.239263094+00:00 stderr F I1013 00:23:43.239256 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" virtual=false 2025-10-13T00:23:43.239409838+00:00 stderr F I1013 00:23:43.239376 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.239409838+00:00 stderr F I1013 00:23:43.239392 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.239424129+00:00 stderr F I1013 00:23:43.239414 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-public, name: builder, uid: bd66f17f-99cb-4aca-b8fe-2005e825b594]" virtual=false 2025-10-13T00:23:43.239472890+00:00 stderr F I1013 00:23:43.239451 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: namespace-controller, uid: 5cbbe1b8-537f-4db9-8127-40c8e5f13e16]" virtual=false 2025-10-13T00:23:43.241586339+00:00 stderr F I1013 00:23:43.241276 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.241586339+00:00 stderr F I1013 00:23:43.241343 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: deployer, uid: 4ad0491b-4fc6-4956-bc0a-a442ce52e7d5]" virtual=false 2025-10-13T00:23:43.241586339+00:00 stderr F I1013 00:23:43.241525 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.241586339+00:00 stderr F I1013 00:23:43.241543 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: deployer, uid: 77608cbc-3acb-4004-ab27-b446c3f642ec]" virtual=false 2025-10-13T00:23:43.241963560+00:00 stderr F I1013 00:23:43.241918 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.241963560+00:00 stderr F I1013 00:23:43.241947 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" 
virtual=false 2025-10-13T00:23:43.242933537+00:00 stderr F I1013 00:23:43.242374 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-public, name: builder, uid: bd66f17f-99cb-4aca-b8fe-2005e825b594]" 2025-10-13T00:23:43.242978248+00:00 stderr F I1013 00:23:43.242936 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: default, uid: 8f69c38a-de95-441b-9d4e-8b2f658ea073]" virtual=false 2025-10-13T00:23:43.243383439+00:00 stderr F I1013 00:23:43.243350 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.243383439+00:00 stderr F I1013 00:23:43.243376 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openstack-operators, name: builder, uid: aab62fd8-98cc-46a6-b30e-1a1a971cc28a]" virtual=false 2025-10-13T00:23:43.243781730+00:00 stderr F I1013 00:23:43.243749 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: namespace-controller, uid: 5cbbe1b8-537f-4db9-8127-40c8e5f13e16]" 2025-10-13T00:23:43.243781730+00:00 stderr F I1013 00:23:43.243776 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: default, name: builder, uid: b37b6bb4-9c42-40a8-8a21-afbe86d01475]" virtual=false 2025-10-13T00:23:43.244065018+00:00 stderr F I1013 00:23:43.244039 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.244074968+00:00 stderr F I1013 00:23:43.244063 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cloud-platform-infra, name: default, uid: 4b10a7df-b492-44dc-9ed2-bfd068e1978c]" virtual=false 2025-10-13T00:23:43.247522704+00:00 stderr F I1013 00:23:43.247464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.247522704+00:00 stderr F I1013 00:23:43.247511 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: default, uid: d7e4d75d-8e1a-4016-9b12-75dea1c35412]" virtual=false 2025-10-13T00:23:43.247750501+00:00 stderr F I1013 00:23:43.247699 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.247750501+00:00 stderr F I1013 00:23:43.247732 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-host-network, name: deployer, uid: 52a2ba71-083d-4285-9d61-9d47f8311ba5]" virtual=false 2025-10-13T00:23:43.247967727+00:00 stderr F I1013 00:23:43.247923 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: deployer, uid: 4ad0491b-4fc6-4956-bc0a-a442ce52e7d5]" 2025-10-13T00:23:43.247967727+00:00 stderr F I1013 00:23:43.247948 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: origin-namespace-controller, uid: 38dcfe5d-f62c-409c-88a0-5a91312dbb6a]" virtual=false 2025-10-13T00:23:43.248109871+00:00 stderr F I1013 00:23:43.248072 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.248109871+00:00 stderr F I1013 00:23:43.248099 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-7, uid: 29e42176-b5f0-48b5-a766-b1545d67b91c]" virtual=false 2025-10-13T00:23:43.248284196+00:00 stderr F I1013 00:23:43.248246 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openstack-operators, name: builder, uid: aab62fd8-98cc-46a6-b30e-1a1a971cc28a]" 2025-10-13T00:23:43.248284196+00:00 stderr F I1013 00:23:43.248271 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: localhost-recovery-client, uid: cf39a450-93c1-4d67-b015-2957d9be0077]" virtual=false 2025-10-13T00:23:43.248515672+00:00 stderr F I1013 00:23:43.248420 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.248515672+00:00 stderr F I1013 00:23:43.248444 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" virtual=false 2025-10-13T00:23:43.248581914+00:00 stderr F I1013 00:23:43.248556 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cloud-platform-infra, name: default, uid: 4b10a7df-b492-44dc-9ed2-bfd068e1978c]" 2025-10-13T00:23:43.248581914+00:00 stderr F I1013 00:23:43.248576 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: default, uid: d0192a27-10dc-4247-b008-5ee83583fcd2]" virtual=false 2025-10-13T00:23:43.248725888+00:00 stderr F I1013 00:23:43.248699 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: default, uid: 8f69c38a-de95-441b-9d4e-8b2f658ea073]" 2025-10-13T00:23:43.248725888+00:00 stderr F I1013 00:23:43.248719 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: deployer, uid: f74640ad-73ea-44c8-a54b-275c3bc4150d]" virtual=false 2025-10-13T00:23:43.249019156+00:00 stderr F I1013 00:23:43.248901 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.249019156+00:00 stderr F I1013 00:23:43.248925 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: legacy-service-account-token-cleaner, uid: b41cd033-3847-43be-ad4f-8a86f660e901]" virtual=false 2025-10-13T00:23:43.249126669+00:00 stderr F I1013 00:23:43.249063 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.249126669+00:00 stderr F I1013 00:23:43.249087 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" virtual=false 2025-10-13T00:23:43.251032222+00:00 stderr F I1013 00:23:43.250990 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.251052923+00:00 stderr F I1013 00:23:43.251027 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: endpointslice-controller, uid: 96181cff-8acf-48c7-82ae-c942612e088e]" virtual=false 2025-10-13T00:23:43.251142555+00:00 stderr F I1013 00:23:43.251106 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: default, name: builder, uid: b37b6bb4-9c42-40a8-8a21-afbe86d01475]" 2025-10-13T00:23:43.251198337+00:00 stderr F I1013 00:23:43.251153 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: default, uid: 93550023-23c3-49bd-af88-7d358d59731f]" virtual=false 2025-10-13T00:23:43.251310110+00:00 stderr F I1013 00:23:43.251290 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.251402843+00:00 stderr F I1013 00:23:43.251383 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" virtual=false 2025-10-13T00:23:43.251562607+00:00 stderr F I1013 00:23:43.251526 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-7, uid: 29e42176-b5f0-48b5-a766-b1545d67b91c]" 2025-10-13T00:23:43.251582368+00:00 stderr F I1013 00:23:43.251560 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" virtual=false 2025-10-13T00:23:43.251582368+00:00 stderr F I1013 00:23:43.251565 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-host-network, name: deployer, uid: 52a2ba71-083d-4285-9d61-9d47f8311ba5]" 2025-10-13T00:23:43.251630999+00:00 stderr F I1013 00:23:43.251603 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-node, name: builder, uid: d46a6adc-b186-421b-98a7-9360bdb8be9f]" virtual=false 2025-10-13T00:23:43.251709801+00:00 stderr F I1013 00:23:43.251679 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: origin-namespace-controller, uid: 38dcfe5d-f62c-409c-88a0-5a91312dbb6a]" 2025-10-13T00:23:43.251722151+00:00 stderr F I1013 00:23:43.251706 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-nutanix-infra, name: default, uid: 64806580-83f6-4c6a-b286-66920da0bd71]" virtual=false 2025-10-13T00:23:43.251858975+00:00 stderr F I1013 00:23:43.251828 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: localhost-recovery-client, uid: cf39a450-93c1-4d67-b015-2957d9be0077]" 2025-10-13T00:23:43.251900356+00:00 stderr F I1013 00:23:43.251880 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" virtual=false 2025-10-13T00:23:43.251965368+00:00 stderr F I1013 00:23:43.251949 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: default, uid: d7e4d75d-8e1a-4016-9b12-75dea1c35412]" 2025-10-13T00:23:43.252016490+00:00 stderr F I1013 00:23:43.252000 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" virtual=false 2025-10-13T00:23:43.252159284+00:00 stderr F I1013 00:23:43.252085 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: deployer, uid: f74640ad-73ea-44c8-a54b-275c3bc4150d]" 2025-10-13T00:23:43.252159284+00:00 stderr F I1013 00:23:43.252154 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: builder, uid: 
80d44a37-4e98-44c6-9a10-e06e14bdf0cf]" virtual=false 2025-10-13T00:23:43.252273487+00:00 stderr F I1013 00:23:43.252255 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: deployer, uid: 77608cbc-3acb-4004-ab27-b446c3f642ec]" 2025-10-13T00:23:43.252339799+00:00 stderr F I1013 00:23:43.252309 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: deployer, uid: 4f4e13af-84d8-427f-bc28-c6c2f069a474]" virtual=false 2025-10-13T00:23:43.252424811+00:00 stderr F I1013 00:23:43.252400 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.252470152+00:00 stderr F I1013 00:23:43.252450 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openstack, name: default, uid: b82475e5-af35-496a-ac82-0e2dc8879988]" virtual=false 2025-10-13T00:23:43.252752050+00:00 stderr F I1013 00:23:43.252725 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.252789201+00:00 stderr F I1013 00:23:43.252749 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovirt-infra, name: default, uid: 3ab7eb27-6efd-4054-8a22-0675330641fc]" virtual=false 2025-10-13T00:23:43.253051998+00:00 stderr F I1013 00:23:43.252847 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: legacy-service-account-token-cleaner, uid: b41cd033-3847-43be-ad4f-8a86f660e901]" 2025-10-13T00:23:43.253112650+00:00 stderr F I1013 00:23:43.253097 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" virtual=false 2025-10-13T00:23:43.253201223+00:00 stderr F I1013 00:23:43.253176 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.253213713+00:00 stderr F I1013 00:23:43.253199 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-vsphere-infra, name: builder, uid: 4b777e53-4e82-4070-838d-dcb14b0cc53f]" virtual=false 2025-10-13T00:23:43.253315296+00:00 stderr F I1013 00:23:43.253297 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: default, uid: 
d0192a27-10dc-4247-b008-5ee83583fcd2]" 2025-10-13T00:23:43.253391688+00:00 stderr F I1013 00:23:43.253366 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.253433279+00:00 stderr F I1013 00:23:43.253414 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" virtual=false 2025-10-13T00:23:43.253509841+00:00 stderr F I1013 00:23:43.253492 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" 2025-10-13T00:23:43.253560313+00:00 stderr F I1013 00:23:43.253546 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" virtual=false 2025-10-13T00:23:43.253701527+00:00 stderr F I1013 00:23:43.253415 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: resourcequota-controller, uid: dd59173a-8b20-44dd-b67b-8b82c396c74b]" virtual=false 2025-10-13T00:23:43.253800289+00:00 stderr F I1013 00:23:43.253783 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.253850001+00:00 stderr F I1013 00:23:43.253836 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: builder, uid: 5d239f9b-acfb-4be0-9c09-e8e8740bf120]" virtual=false 2025-10-13T00:23:43.254145829+00:00 stderr F I1013 00:23:43.254119 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" 2025-10-13T00:23:43.254145829+00:00 stderr F I1013 00:23:43.254140 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: default, name: default, uid: d048f127-d5e0-4b33-a504-c18c59c46f0b]" virtual=false 2025-10-13T00:23:43.254186010+00:00 stderr F I1013 00:23:43.254171 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" 2025-10-13T00:23:43.254232251+00:00 stderr F I1013 00:23:43.254218 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: openshift-apiserver-sa, uid: 84c1bf4b-b079-43a0-8f2c-aa4f9a5dbe22]" virtual=false 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255343 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: 
openshift-ingress-operator, name: default, uid: 93550023-23c3-49bd-af88-7d358d59731f]" 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255378 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" virtual=false 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255558 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255578 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: redhat-operators, uid: 716d9f03-1d38-4368-80c8-2a28a060a8c4]" virtual=false 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255708 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-nutanix-infra, name: default, uid: 64806580-83f6-4c6a-b286-66920da0bd71]" 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255725 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-4, uid: 932884fb-7f39-49cc-bc5a-34d361a12e91]" virtual=false 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255851 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-node, name: builder, uid: d46a6adc-b186-421b-98a7-9360bdb8be9f]" 2025-10-13T00:23:43.257001898+00:00 stderr F I1013 00:23:43.255866 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operators, name: deployer, uid: 2e4f8711-fcf9-45f4-adf5-e7b856b26c27]" virtual=false 2025-10-13T00:23:43.257613866+00:00 stderr F I1013 00:23:43.257545 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" 2025-10-13T00:23:43.257613866+00:00 stderr F I1013 00:23:43.257569 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: builder, uid: dde9a855-9d63-4b86-8a7e-7c39b987e355]" virtual=false 2025-10-13T00:23:43.257805581+00:00 stderr F I1013 00:23:43.257774 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: builder, uid: 5d239f9b-acfb-4be0-9c09-e8e8740bf120]" 2025-10-13T00:23:43.257805581+00:00 stderr F I1013 00:23:43.257788 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: default, name: default, uid: d048f127-d5e0-4b33-a504-c18c59c46f0b]" 2025-10-13T00:23:43.257805581+00:00 stderr F I1013 00:23:43.257794 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: replication-controller, uid: 8b41c496-3b28-4906-87b1-1e8857eaeeed]" virtual=false 2025-10-13T00:23:43.257858402+00:00 stderr F I1013 00:23:43.257838 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: 
openshift-image-registry, name: deployer, uid: d83a1e61-4d3f-45b4-b796-fdd89361f95b]" virtual=false 2025-10-13T00:23:43.257925564+00:00 stderr F I1013 00:23:43.257900 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-vsphere-infra, name: builder, uid: 4b777e53-4e82-4070-838d-dcb14b0cc53f]" 2025-10-13T00:23:43.257925564+00:00 stderr F I1013 00:23:43.257921 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: kube-storage-version-migrator-sa, uid: f91dc624-c163-416c-a5c6-8a0ac9755b53]" virtual=false 2025-10-13T00:23:43.258290834+00:00 stderr F I1013 00:23:43.258266 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: openshift-apiserver-sa, uid: 84c1bf4b-b079-43a0-8f2c-aa4f9a5dbe22]" 2025-10-13T00:23:43.258290834+00:00 stderr F I1013 00:23:43.258279 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-operators, name: deployer, uid: 2e4f8711-fcf9-45f4-adf5-e7b856b26c27]" 2025-10-13T00:23:43.258306255+00:00 stderr F I1013 00:23:43.258292 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: oauth-openshift, uid: ef9342eb-93c7-4a21-a619-4c023d779706]" virtual=false 2025-10-13T00:23:43.258306255+00:00 stderr F I1013 00:23:43.258297 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: ingress-to-route-controller, uid: da6f1d73-d8f7-4825-b281-bf5dcf7754d6]" virtual=false 2025-10-13T00:23:43.258398907+00:00 stderr F I1013 00:23:43.258381 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-4, uid: 932884fb-7f39-49cc-bc5a-34d361a12e91]" 2025-10-13T00:23:43.258456849+00:00 stderr F I1013 00:23:43.258442 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: default, uid: 1fdd7471-8383-40c7-b422-57debdf5832c]" virtual=false 2025-10-13T00:23:43.258579462+00:00 stderr F I1013 00:23:43.258552 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ovirt-infra, name: default, uid: 3ab7eb27-6efd-4054-8a22-0675330641fc]" 2025-10-13T00:23:43.258617544+00:00 stderr F I1013 00:23:43.258600 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" virtual=false 2025-10-13T00:23:43.259038835+00:00 stderr F I1013 00:23:43.258761 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: resourcequota-controller, uid: dd59173a-8b20-44dd-b67b-8b82c396c74b]" 2025-10-13T00:23:43.259038835+00:00 stderr F I1013 00:23:43.258812 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-user-workload-monitoring, name: default, uid: bf4aba96-e5e8-443d-87ca-912cd44dd6a6]" virtual=false 2025-10-13T00:23:43.259038835+00:00 stderr F I1013 00:23:43.258876 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: deployer, uid: 4f4e13af-84d8-427f-bc28-c6c2f069a474]" 
2025-10-13T00:23:43.259038835+00:00 stderr F I1013 00:23:43.258898 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openstack-operators, name: deployer, uid: 823c9ad6-58e1-4d60-8755-7f8fb446b71a]" virtual=false 2025-10-13T00:23:43.259038835+00:00 stderr F I1013 00:23:43.258813 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" 2025-10-13T00:23:43.259057926+00:00 stderr F I1013 00:23:43.259034 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openstack, name: builder, uid: 65bfb1ce-0653-4241-9f9b-57bfa207b5b7]" virtual=false 2025-10-13T00:23:43.259132138+00:00 stderr F I1013 00:23:43.259110 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: endpointslice-controller, uid: 96181cff-8acf-48c7-82ae-c942612e088e]" 2025-10-13T00:23:43.259144798+00:00 stderr F I1013 00:23:43.259138 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-managed, name: default, uid: cb13ea90-7bd3-48aa-99fd-aa0933fee153]" virtual=false 2025-10-13T00:23:43.259214070+00:00 stderr F I1013 00:23:43.259190 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openstack, name: default, uid: b82475e5-af35-496a-ac82-0e2dc8879988]" 2025-10-13T00:23:43.259226540+00:00 stderr F I1013 00:23:43.259211 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: default, uid: f08ee6e7-bff3-441a-a619-0ccd1300f0fe]" virtual=false 2025-10-13T00:23:43.259286292+00:00 stderr F I1013 00:23:43.259257 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.259298982+00:00 stderr F I1013 00:23:43.259284 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: template-instance-controller, uid: d7ddc0aa-a22f-4301-af6d-b194d5c1f589]" virtual=false 2025-10-13T00:23:43.259348384+00:00 stderr F I1013 00:23:43.259315 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: builder, uid: 80d44a37-4e98-44c6-9a10-e06e14bdf0cf]" 2025-10-13T00:23:43.259407035+00:00 stderr F I1013 00:23:43.259390 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" virtual=false 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260416 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: builder, uid: dde9a855-9d63-4b86-8a7e-7c39b987e355]" 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260442 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" virtual=false 
2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260579 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: replication-controller, uid: 8b41c496-3b28-4906-87b1-1e8857eaeeed]" 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260595 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: builder, uid: 3ea88b4a-06e7-432d-9b8b-7cb9c1cb3c38]" virtual=false 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260731 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: deployer, uid: d83a1e61-4d3f-45b4-b796-fdd89361f95b]" 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260747 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: disruption-controller, uid: 1e3a1d6a-40dc-4fcf-a7b0-c3f2f6e24e21]" virtual=false 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260862 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: kube-storage-version-migrator-sa, uid: f91dc624-c163-416c-a5c6-8a0ac9755b53]" 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.260876 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cloud-platform-infra, name: builder, uid: 813004ef-3fad-46b0-817a-218aa486a381]" virtual=false 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.261148 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" 2025-10-13T00:23:43.261193705+00:00 stderr F I1013 00:23:43.261163 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" virtual=false 2025-10-13T00:23:43.261419072+00:00 stderr F I1013 00:23:43.261310 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: ingress-to-route-controller, uid: da6f1d73-d8f7-4825-b281-bf5dcf7754d6]" 2025-10-13T00:23:43.261419072+00:00 stderr F I1013 00:23:43.261369 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: installer-sa, uid: 9a297264-2a80-4923-819a-12c8d4559a3d]" virtual=false 2025-10-13T00:23:43.262255265+00:00 stderr F I1013 00:23:43.262233 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: template-instance-controller, uid: d7ddc0aa-a22f-4301-af6d-b194d5c1f589]" 2025-10-13T00:23:43.262311336+00:00 stderr F I1013 00:23:43.262297 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: deployer, uid: 3590b7cf-df96-4cca-a1f2-47f3b7d83795]" virtual=false 2025-10-13T00:23:43.262698507+00:00 stderr F I1013 00:23:43.262678 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: default, uid: 1fdd7471-8383-40c7-b422-57debdf5832c]" 2025-10-13T00:23:43.262759279+00:00 stderr F I1013 00:23:43.262746 1 garbagecollector.go:549] "Processing item" 
item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" virtual=false 2025-10-13T00:23:43.262923103+00:00 stderr F I1013 00:23:43.262906 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config-managed, name: default, uid: cb13ea90-7bd3-48aa-99fd-aa0933fee153]" 2025-10-13T00:23:43.262968795+00:00 stderr F I1013 00:23:43.262956 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operators, name: builder, uid: 7543e46c-52ce-4946-9ae0-cf808d47740c]" virtual=false 2025-10-13T00:23:43.263132959+00:00 stderr F I1013 00:23:43.263116 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openstack, name: builder, uid: 65bfb1ce-0653-4241-9f9b-57bfa207b5b7]" 2025-10-13T00:23:43.263176500+00:00 stderr F I1013 00:23:43.263165 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-storage-operator, name: builder, uid: d8e3fa65-7f6d-4665-bb55-7e973ddccebc]" virtual=false 2025-10-13T00:23:43.263364056+00:00 stderr F I1013 00:23:43.263346 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openstack-operators, name: deployer, uid: 823c9ad6-58e1-4d60-8755-7f8fb446b71a]" 2025-10-13T00:23:43.263415657+00:00 stderr F I1013 00:23:43.263402 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-host-network, name: builder, uid: 6e65d5ff-1271-41a9-b8c4-f37321c5346a]" virtual=false 2025-10-13T00:23:43.263593672+00:00 stderr F I1013 00:23:43.263574 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: oauth-openshift, uid: ef9342eb-93c7-4a21-a619-4c023d779706]" 2025-10-13T00:23:43.263642643+00:00 stderr F I1013 00:23:43.263630 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: podsecurity-admission-label-syncer-controller, uid: d546701e-5a7a-4c88-9b40-4c7696d9f32c]" virtual=false 2025-10-13T00:23:43.263852889+00:00 stderr F I1013 00:23:43.263833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: redhat-operators, uid: 716d9f03-1d38-4368-80c8-2a28a060a8c4]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"redhat-operators","uid":"9ba0c63a-ccef-4143-b195-48b1ad0b0bb7","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.263908231+00:00 stderr F I1013 00:23:43.263893 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: etcd-backup-sa, uid: 13a28779-c7ba-4266-8f1d-aa374218f088]" virtual=false 2025-10-13T00:23:43.264090126+00:00 stderr F I1013 00:23:43.264070 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" 2025-10-13T00:23:43.264167208+00:00 stderr F I1013 00:23:43.264152 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" virtual=false 2025-10-13T00:23:43.264524888+00:00 stderr F I1013 00:23:43.264501 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-user-workload-monitoring, name: default, uid: bf4aba96-e5e8-443d-87ca-912cd44dd6a6]" 2025-10-13T00:23:43.264600730+00:00 stderr F I1013 00:23:43.264586 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: oauth-apiserver-sa, uid: 43153168-26f3-49a1-b8ab-c2a8f08ecfce]" virtual=false 2025-10-13T00:23:43.264800706+00:00 stderr F I1013 00:23:43.264781 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: default, uid: f08ee6e7-bff3-441a-a619-0ccd1300f0fe]" 2025-10-13T00:23:43.264851167+00:00 stderr F I1013 00:23:43.264836 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: builder, uid: 08426b5f-d715-4491-bc3e-d58b198d668d]" virtual=false 2025-10-13T00:23:43.265029742+00:00 stderr F I1013 00:23:43.264990 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: installer-sa, uid: 9a297264-2a80-4923-819a-12c8d4559a3d]" 2025-10-13T00:23:43.265044692+00:00 stderr F I1013 00:23:43.265028 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: cronjob-controller, uid: 7b364c77-d33c-4164-9f16-977084c8ac52]" virtual=false 2025-10-13T00:23:43.265159586+00:00 stderr F I1013 00:23:43.265138 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.265212677+00:00 stderr F I1013 00:23:43.265198 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" virtual=false 2025-10-13T00:23:43.265298610+00:00 stderr F I1013 00:23:43.265266 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.265311590+00:00 stderr F I1013 00:23:43.265298 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" virtual=false 2025-10-13T00:23:43.265433283+00:00 stderr F I1013 00:23:43.265390 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.265433283+00:00 stderr F I1013 00:23:43.265420 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: 
openshift-cloud-platform-infra, name: builder, uid: 813004ef-3fad-46b0-817a-218aa486a381]" 2025-10-13T00:23:43.265453104+00:00 stderr F I1013 00:23:43.265425 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" virtual=false 2025-10-13T00:23:43.265453104+00:00 stderr F I1013 00:23:43.265436 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-public, name: deployer, uid: 50560a3b-b057-466d-bcf6-867cc67ab743]" virtual=false 2025-10-13T00:23:43.265507065+00:00 stderr F I1013 00:23:43.265395 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: deployer, uid: 3590b7cf-df96-4cca-a1f2-47f3b7d83795]" 2025-10-13T00:23:43.265525356+00:00 stderr F I1013 00:23:43.265508 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: default, uid: 3fa70294-95e4-4463-9434-7460bb361913]" virtual=false 2025-10-13T00:23:43.266074561+00:00 stderr F I1013 00:23:43.265593 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" 2025-10-13T00:23:43.266074561+00:00 stderr F I1013 00:23:43.265618 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: default, uid: 4abc6b78-3694-4630-9f51-d47dbd877b35]" virtual=false 2025-10-13T00:23:43.266074561+00:00 stderr F I1013 00:23:43.265685 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: builder, uid: 3ea88b4a-06e7-432d-9b8b-7cb9c1cb3c38]" 2025-10-13T00:23:43.266074561+00:00 stderr F I1013 00:23:43.265700 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: builder, uid: 924fdc74-bd59-4cd3-92ac-acf3d9a3ce0b]" virtual=false 2025-10-13T00:23:43.266181124+00:00 stderr F I1013 00:23:43.266159 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: disruption-controller, uid: 1e3a1d6a-40dc-4fcf-a7b0-c3f2f6e24e21]" 2025-10-13T00:23:43.266244836+00:00 stderr F I1013 00:23:43.266230 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: builder, uid: 8a67b2ae-c9a0-448a-991d-17a308d919f5]" virtual=false 2025-10-13T00:23:43.266621886+00:00 stderr F I1013 00:23:43.266597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.266668198+00:00 stderr F I1013 00:23:43.266648 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: default, uid: 229b06fa-c347-468b-aba0-1539568c8230]" virtual=false 2025-10-13T00:23:43.266706209+00:00 stderr F I1013 00:23:43.266678 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, 
namespace: openshift-cluster-storage-operator, name: builder, uid: d8e3fa65-7f6d-4665-bb55-7e973ddccebc]" 2025-10-13T00:23:43.266706209+00:00 stderr F I1013 00:23:43.266693 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" 2025-10-13T00:23:43.266735870+00:00 stderr F I1013 00:23:43.266713 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: deployer, uid: 7041f142-42a5-4786-b6df-c226acb05209]" virtual=false 2025-10-13T00:23:43.266746560+00:00 stderr F I1013 00:23:43.266736 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" 2025-10-13T00:23:43.266756060+00:00 stderr F I1013 00:23:43.266748 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: endpointslicemirroring-controller, uid: 8e9fc4ea-7f37-459a-88c6-6abd7fe697c8]" virtual=false 2025-10-13T00:23:43.266770821+00:00 stderr F I1013 00:23:43.266756 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: etcd-backup-sa, uid: 13a28779-c7ba-4266-8f1d-aa374218f088]" 2025-10-13T00:23:43.266809832+00:00 stderr F I1013 00:23:43.266714 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-os-builder, uid: fed52002-b7ee-413d-809d-e5337f37643d]" virtual=false 2025-10-13T00:23:43.266820132+00:00 stderr F I1013 00:23:43.266807 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: deployer, uid: 5942996f-2808-4cbe-a896-64131fe654a0]" virtual=false 2025-10-13T00:23:43.266864823+00:00 stderr F I1013 00:23:43.266849 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: podsecurity-admission-label-syncer-controller, uid: d546701e-5a7a-4c88-9b40-4c7696d9f32c]" 2025-10-13T00:23:43.266924755+00:00 stderr F I1013 00:23:43.266907 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" virtual=false 2025-10-13T00:23:43.266961336+00:00 stderr F I1013 00:23:43.266940 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-operators, name: builder, uid: 7543e46c-52ce-4946-9ae0-cf808d47740c]" 2025-10-13T00:23:43.267032808+00:00 stderr F I1013 00:23:43.267007 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: builder, uid: 08426b5f-d715-4491-bc3e-d58b198d668d]" 2025-10-13T00:23:43.267072199+00:00 stderr F I1013 00:23:43.266852 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-host-network, name: builder, uid: 6e65d5ff-1271-41a9-b8c4-f37321c5346a]" 2025-10-13T00:23:43.267072199+00:00 stderr F I1013 00:23:43.267067 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" 
virtual=false 2025-10-13T00:23:43.267126751+00:00 stderr F I1013 00:23:43.266855 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: oauth-apiserver-sa, uid: 43153168-26f3-49a1-b8ab-c2a8f08ecfce]" 2025-10-13T00:23:43.267126751+00:00 stderr F I1013 00:23:43.267118 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" virtual=false 2025-10-13T00:23:43.267172892+00:00 stderr F I1013 00:23:43.267027 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: builder, uid: 5b9e173a-7bbc-4b75-aa44-02e0bcef637d]" virtual=false 2025-10-13T00:23:43.267205943+00:00 stderr F I1013 00:23:43.267190 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: cronjob-controller, uid: 7b364c77-d33c-4164-9f16-977084c8ac52]" 2025-10-13T00:23:43.267256044+00:00 stderr F I1013 00:23:43.267242 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: deployer, uid: bfe682c1-db20-42b6-9df6-64610f4153c2]" virtual=false 2025-10-13T00:23:43.267544722+00:00 stderr F I1013 00:23:43.267517 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" virtual=false 2025-10-13T00:23:43.269104296+00:00 stderr F I1013 00:23:43.269083 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.269159497+00:00 stderr F I1013 00:23:43.269148 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" virtual=false 2025-10-13T00:23:43.270419012+00:00 stderr F I1013 00:23:43.270401 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-public, name: deployer, uid: 50560a3b-b057-466d-bcf6-867cc67ab743]" 2025-10-13T00:23:43.270466584+00:00 stderr F I1013 00:23:43.270456 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-vsphere-infra, name: default, uid: 2a370b8f-9c63-48b3-8ebd-bf602778b32c]" virtual=false 2025-10-13T00:23:43.277270673+00:00 stderr F I1013 00:23:43.277252 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: default, uid: 3fa70294-95e4-4463-9434-7460bb361913]" 2025-10-13T00:23:43.277317554+00:00 stderr F I1013 00:23:43.277307 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cloud-network-config-controller, name: builder, uid: effb7ec5-fb8d-4ea5-ab0d-3ac28ab73bed]" virtual=false 2025-10-13T00:23:43.280853453+00:00 stderr F I1013 00:23:43.280816 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: default, 
uid: 4abc6b78-3694-4630-9f51-d47dbd877b35]" 2025-10-13T00:23:43.280853453+00:00 stderr F I1013 00:23:43.280847 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: deployer, uid: 7c6f099c-1291-412d-a1f3-b42552751c1a]" virtual=false 2025-10-13T00:23:43.284414362+00:00 stderr F I1013 00:23:43.284380 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: builder, uid: 924fdc74-bd59-4cd3-92ac-acf3d9a3ce0b]" 2025-10-13T00:23:43.284414362+00:00 stderr F I1013 00:23:43.284403 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: deployer, uid: a175f920-fb5d-4099-81da-23e28b31e201]" virtual=false 2025-10-13T00:23:43.287924400+00:00 stderr F I1013 00:23:43.287899 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: builder, uid: 8a67b2ae-c9a0-448a-991d-17a308d919f5]" 2025-10-13T00:23:43.287944450+00:00 stderr F I1013 00:23:43.287923 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: service-ca, uid: 3a765bcf-2989-4b4a-b4aa-fc63a29876f4]" virtual=false 2025-10-13T00:23:43.291317744+00:00 stderr F I1013 00:23:43.291289 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: default, uid: 229b06fa-c347-468b-aba0-1539568c8230]" 2025-10-13T00:23:43.291353585+00:00 stderr F I1013 00:23:43.291317 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: builder, uid: 6a0ee96f-fbfa-4b3b-8995-5a4c0e14b9dd]" virtual=false 2025-10-13T00:23:43.294728409+00:00 stderr F I1013 00:23:43.294699 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: deployer, uid: 7041f142-42a5-4786-b6df-c226acb05209]" 2025-10-13T00:23:43.294805352+00:00 stderr F I1013 00:23:43.294794 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: default, uid: 256555f5-7ee7-4598-bfc7-d5155a8c60af]" virtual=false 2025-10-13T00:23:43.297955879+00:00 stderr F I1013 00:23:43.297936 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: endpointslicemirroring-controller, uid: 8e9fc4ea-7f37-459a-88c6-6abd7fe697c8]" 2025-10-13T00:23:43.298003321+00:00 stderr F I1013 00:23:43.297993 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: default, uid: 274d2e81-b3fd-4c10-9149-31ee6a57fad1]" virtual=false 2025-10-13T00:23:43.301320953+00:00 stderr F I1013 00:23:43.301303 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-os-builder, uid: fed52002-b7ee-413d-809d-e5337f37643d]" 2025-10-13T00:23:43.301386605+00:00 stderr F I1013 00:23:43.301375 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" virtual=false 2025-10-13T00:23:43.304591584+00:00 stderr F I1013 00:23:43.304568 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: deployer, uid: 5942996f-2808-4cbe-a896-64131fe654a0]" 2025-10-13T00:23:43.304653176+00:00 stderr F I1013 00:23:43.304638 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kni-infra, name: builder, uid: 6b87941c-75e3-4cb4-aa14-65b1ae2f75de]" virtual=false 2025-10-13T00:23:43.311349372+00:00 stderr F I1013 00:23:43.311303 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" 2025-10-13T00:23:43.311397784+00:00 stderr F I1013 00:23:43.311387 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" virtual=false 2025-10-13T00:23:43.317582676+00:00 stderr F I1013 00:23:43.317558 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-multus, name: builder, uid: 5b9e173a-7bbc-4b75-aa44-02e0bcef637d]" 2025-10-13T00:23:43.317600296+00:00 stderr F I1013 00:23:43.317581 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: deployer, uid: c566acbf-8dc6-4355-8841-fdd6aac3739b]" virtual=false 2025-10-13T00:23:43.321009511+00:00 stderr F I1013 00:23:43.320989 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: deployer, uid: bfe682c1-db20-42b6-9df6-64610f4153c2]" 2025-10-13T00:23:43.321027402+00:00 stderr F I1013 00:23:43.321007 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" virtual=false 2025-10-13T00:23:43.324456977+00:00 stderr F I1013 00:23:43.324440 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" 2025-10-13T00:23:43.324503509+00:00 stderr F I1013 00:23:43.324493 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: builder, uid: 46bd11b1-e144-4f00-b9f6-a584df6e9a0e]" virtual=false 2025-10-13T00:23:43.330453064+00:00 stderr F I1013 00:23:43.330409 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.330453064+00:00 stderr F I1013 00:23:43.330433 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: deployer, uid: f4c89bc9-3e01-4478-b3a9-2cc72d081462]" virtual=false 2025-10-13T00:23:43.332940684+00:00 stderr F I1013 00:23:43.332892 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.332958704+00:00 stderr F I1013 00:23:43.332941 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-storage-operator, name: default, uid: f824a836-a7c5-45e4-b08b-fccfdcc65861]" virtual=false 2025-10-13T00:23:43.337205383+00:00 stderr F I1013 00:23:43.337155 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-vsphere-infra, name: default, uid: 2a370b8f-9c63-48b3-8ebd-bf602778b32c]" 2025-10-13T00:23:43.337205383+00:00 stderr F I1013 00:23:43.337186 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 1f20f44d-90c3-49c1-8b97-57b6110e3aff]" virtual=false 2025-10-13T00:23:43.343582040+00:00 stderr F I1013 00:23:43.343526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:43.343582040+00:00 stderr F I1013 00:23:43.343554 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-version, name: builder, uid: a8671f7c-0d12-49e3-ae5c-95cf7b944a29]" virtual=false 2025-10-13T00:23:43.350845013+00:00 stderr F I1013 00:23:43.350795 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cloud-network-config-controller, name: builder, uid: effb7ec5-fb8d-4ea5-ab0d-3ac28ab73bed]" 2025-10-13T00:23:43.350874823+00:00 stderr F I1013 00:23:43.350841 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" virtual=false 2025-10-13T00:23:43.350954366+00:00 stderr F I1013 00:23:43.350930 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: deployer, uid: 7c6f099c-1291-412d-a1f3-b42552751c1a]" 2025-10-13T00:23:43.350954366+00:00 stderr F I1013 00:23:43.350946 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" virtual=false 2025-10-13T00:23:43.351851181+00:00 stderr F I1013 00:23:43.351817 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: deployer, uid: a175f920-fb5d-4099-81da-23e28b31e201]" 2025-10-13T00:23:43.351868521+00:00 stderr F I1013 00:23:43.351856 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: default, uid: c9d1e021-69f2-45aa-b8ad-5cfec705cb75]" virtual=false 2025-10-13T00:23:43.354296669+00:00 stderr F I1013 00:23:43.354249 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: service-ca, uid: 3a765bcf-2989-4b4a-b4aa-fc63a29876f4]" 
2025-10-13T00:23:43.354296669+00:00 stderr F I1013 00:23:43.354275 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-server, uid: 0eb66684-fa46-42f5-b7b4-c1321a39b4c9]" virtual=false 2025-10-13T00:23:43.358086454+00:00 stderr F I1013 00:23:43.358032 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: builder, uid: 6a0ee96f-fbfa-4b3b-8995-5a4c0e14b9dd]" 2025-10-13T00:23:43.358086454+00:00 stderr F I1013 00:23:43.358059 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: builder, uid: 594a7d96-d3fe-4511-9c87-8c2ef267199e]" virtual=false 2025-10-13T00:23:43.360777249+00:00 stderr F I1013 00:23:43.360735 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: default, uid: 256555f5-7ee7-4598-bfc7-d5155a8c60af]" 2025-10-13T00:23:43.360777249+00:00 stderr F I1013 00:23:43.360760 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" virtual=false 2025-10-13T00:23:43.364128743+00:00 stderr F I1013 00:23:43.364083 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca, name: default, uid: 274d2e81-b3fd-4c10-9149-31ee6a57fad1]" 2025-10-13T00:23:43.364128743+00:00 stderr F I1013 00:23:43.364110 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: route-controller-manager-sa, uid: 7efcfd8a-4b2f-43f8-a395-0133337395f5]" virtual=false 2025-10-13T00:23:43.367856466+00:00 stderr F I1013 00:23:43.367810 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" 2025-10-13T00:23:43.367856466+00:00 stderr F I1013 00:23:43.367841 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: service-ca-cert-publisher, uid: 02f90a92-6ded-4e20-a7df-5cc7f2fa21fd]" virtual=false 2025-10-13T00:23:43.371409085+00:00 stderr F I1013 00:23:43.371362 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kni-infra, name: builder, uid: 6b87941c-75e3-4cb4-aa14-65b1ae2f75de]" 2025-10-13T00:23:43.371409085+00:00 stderr F I1013 00:23:43.371388 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: kube-controller-manager-sa, uid: 9d8a8d0b-0400-4437-8ad3-6fe3dd8ed347]" virtual=false 2025-10-13T00:23:43.377791233+00:00 stderr F I1013 00:23:43.377762 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" 2025-10-13T00:23:43.377808584+00:00 stderr F I1013 00:23:43.377787 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.377808584+00:00 stderr F I1013 00:23:43.377793 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: installer-sa, uid: b63a1ffe-4ba6-4841-87bd-4f7cac850088]" virtual=false 2025-10-13T00:23:43.377824774+00:00 stderr F I1013 00:23:43.377813 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" virtual=false 2025-10-13T00:23:43.384360556+00:00 stderr F I1013 00:23:43.384313 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: deployer, uid: c566acbf-8dc6-4355-8841-fdd6aac3739b]" 2025-10-13T00:23:43.384360556+00:00 stderr F I1013 00:23:43.384353 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: replicaset-controller, uid: 8383c687-2c78-4fbd-8477-e7296e85fbb0]" virtual=false 2025-10-13T00:23:43.384405667+00:00 stderr F I1013 00:23:43.384381 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.384415558+00:00 stderr F I1013 00:23:43.384410 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: default, uid: a66582c5-7a62-45dc-bc70-dfbd941826ae]" virtual=false 2025-10-13T00:23:43.391054363+00:00 stderr F I1013 00:23:43.391015 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: builder, uid: 46bd11b1-e144-4f00-b9f6-a584df6e9a0e]" 2025-10-13T00:23:43.391054363+00:00 stderr F I1013 00:23:43.391038 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: default, uid: c3220290-47c8-497c-b8fb-76f2b394a93f]" virtual=false 2025-10-13T00:23:43.394477308+00:00 stderr F I1013 00:23:43.394419 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: deployer, uid: f4c89bc9-3e01-4478-b3a9-2cc72d081462]" 2025-10-13T00:23:43.394477308+00:00 stderr F I1013 00:23:43.394449 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-13, uid: 83d53725-68b7-4a2f-8899-90e4f52a4d27]" virtual=false 2025-10-13T00:23:43.397808751+00:00 stderr F I1013 00:23:43.397765 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-storage-operator, name: default, uid: f824a836-a7c5-45e4-b08b-fccfdcc65861]" 2025-10-13T00:23:43.397808751+00:00 stderr F I1013 00:23:43.397786 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: redhat-marketplace, uid: 24cda2f5-7354-4ddb-adcb-eb6d99a3f134]" virtual=false 2025-10-13T00:23:43.404055385+00:00 stderr F I1013 00:23:43.403873 1 garbagecollector.go:615] "item has at least 
one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.404055385+00:00 stderr F I1013 00:23:43.403903 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: builder, uid: e0f23aad-26ac-45bc-8246-fb5e82862217]" virtual=false 2025-10-13T00:23:43.404483817+00:00 stderr F I1013 00:23:43.404457 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 1f20f44d-90c3-49c1-8b97-57b6110e3aff]" 2025-10-13T00:23:43.404514157+00:00 stderr F I1013 00:23:43.404482 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: service-account-controller, uid: 3d9cedaf-de0b-4c93-81c2-4fd5571709df]" virtual=false 2025-10-13T00:23:43.408485498+00:00 stderr F I1013 00:23:43.408458 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-version, name: builder, uid: a8671f7c-0d12-49e3-ae5c-95cf7b944a29]" 2025-10-13T00:23:43.408485498+00:00 stderr F I1013 00:23:43.408478 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: builder, uid: 6aa5bd61-3b9a-4f36-b946-865df341128c]" virtual=false 2025-10-13T00:23:43.410877975+00:00 stderr F I1013 00:23:43.410845 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" 2025-10-13T00:23:43.410877975+00:00 stderr F I1013 00:23:43.410865 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-user-settings, name: deployer, uid: 678fd191-9c31-4cd1-bc59-14502492b299]" virtual=false 2025-10-13T00:23:43.414712152+00:00 stderr F I1013 00:23:43.414680 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" 2025-10-13T00:23:43.414730102+00:00 stderr F I1013 00:23:43.414715 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: serviceaccount-pull-secrets-controller, uid: 540d616f-b240-4ab1-9a32-b988f3807afe]" virtual=false 2025-10-13T00:23:43.418138287+00:00 stderr F I1013 00:23:43.417938 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: default, uid: c9d1e021-69f2-45aa-b8ad-5cfec705cb75]" 2025-10-13T00:23:43.418138287+00:00 stderr F I1013 00:23:43.417979 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: builder, uid: 502f6763-5236-4d42-8adb-1893f19e8f56]" virtual=false 2025-10-13T00:23:43.421402548+00:00 stderr F I1013 00:23:43.421356 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-server, uid: 0eb66684-fa46-42f5-b7b4-c1321a39b4c9]" 
2025-10-13T00:23:43.421402548+00:00 stderr F I1013 00:23:43.421393 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" virtual=false 2025-10-13T00:23:43.424857204+00:00 stderr F I1013 00:23:43.424818 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: builder, uid: 594a7d96-d3fe-4511-9c87-8c2ef267199e]" 2025-10-13T00:23:43.424857204+00:00 stderr F I1013 00:23:43.424839 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: builder, uid: 51685b4d-84c4-47a6-8ede-6070efc4e895]" virtual=false 2025-10-13T00:23:43.427294342+00:00 stderr F I1013 00:23:43.427266 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" 2025-10-13T00:23:43.427294342+00:00 stderr F I1013 00:23:43.427284 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: builder, uid: fbc62d52-90cd-4a51-b9e3-59db3adc4c20]" virtual=false 2025-10-13T00:23:43.430594194+00:00 stderr F I1013 00:23:43.430554 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: route-controller-manager-sa, uid: 7efcfd8a-4b2f-43f8-a395-0133337395f5]" 2025-10-13T00:23:43.430594194+00:00 stderr F I1013 00:23:43.430574 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: default, uid: d75cdb91-c7bc-4733-8e50-40d4aad14f90]" virtual=false 2025-10-13T00:23:43.434081741+00:00 stderr F I1013 00:23:43.434041 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: service-ca-cert-publisher, uid: 02f90a92-6ded-4e20-a7df-5cc7f2fa21fd]" 2025-10-13T00:23:43.434081741+00:00 stderr F I1013 00:23:43.434062 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: statefulset-controller, uid: f051e323-1133-49de-84bc-b59793b94ae7]" virtual=false 2025-10-13T00:23:43.437357162+00:00 stderr F I1013 00:23:43.437293 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: kube-controller-manager-sa, uid: 9d8a8d0b-0400-4437-8ad3-6fe3dd8ed347]" 2025-10-13T00:23:43.437357162+00:00 stderr F I1013 00:23:43.437317 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: installer-sa, uid: 2c34d780-64da-44ba-a979-d4c38d9bc0ca]" virtual=false 2025-10-13T00:23:43.441028875+00:00 stderr F I1013 00:23:43.440988 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: installer-sa, uid: b63a1ffe-4ba6-4841-87bd-4f7cac850088]" 2025-10-13T00:23:43.441028875+00:00 stderr F I1013 00:23:43.441010 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: localhost-recovery-client, uid: d0e97c67-ef5a-4577-8381-df1235489439]" virtual=false 2025-10-13T00:23:43.448007169+00:00 stderr F I1013 
00:23:43.447983 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: replicaset-controller, uid: 8383c687-2c78-4fbd-8477-e7296e85fbb0]" 2025-10-13T00:23:43.448028790+00:00 stderr F I1013 00:23:43.448006 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" virtual=false 2025-10-13T00:23:43.453860032+00:00 stderr F I1013 00:23:43.453578 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: default, uid: a66582c5-7a62-45dc-bc70-dfbd941826ae]" 2025-10-13T00:23:43.453860032+00:00 stderr F I1013 00:23:43.453611 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns, name: builder, uid: f159e95e-7503-483c-864b-ee75be3c2c0e]" virtual=false 2025-10-13T00:23:43.456930708+00:00 stderr F I1013 00:23:43.456900 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.456930708+00:00 stderr F I1013 00:23:43.456926 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: builder, uid: 69cf57f7-7ebd-43e8-bca6-7e59790367a9]" virtual=false 2025-10-13T00:23:43.468238253+00:00 stderr F I1013 00:23:43.467779 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: default, uid: c3220290-47c8-497c-b8fb-76f2b394a93f]" 2025-10-13T00:23:43.468238253+00:00 stderr F I1013 00:23:43.467817 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" virtual=false 2025-10-13T00:23:43.468809828+00:00 stderr F I1013 00:23:43.468779 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-13, uid: 83d53725-68b7-4a2f-8899-90e4f52a4d27]" 2025-10-13T00:23:43.468809828+00:00 stderr F I1013 00:23:43.468805 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: job-controller, uid: 250b6322-339f-4e0d-bf74-cc5b9d7f8202]" virtual=false 2025-10-13T00:23:43.468885811+00:00 stderr F I1013 00:23:43.468854 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: builder, uid: e0f23aad-26ac-45bc-8246-fb5e82862217]" 2025-10-13T00:23:43.468921232+00:00 stderr F I1013 00:23:43.468901 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: default, uid: bb89d33f-0889-4a1e-8f6a-9da16ea10b43]" virtual=false 2025-10-13T00:23:43.470981869+00:00 stderr F I1013 00:23:43.470956 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: service-account-controller, uid: 3d9cedaf-de0b-4c93-81c2-4fd5571709df]" 
2025-10-13T00:23:43.470998849+00:00 stderr F I1013 00:23:43.470980 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: deployer, uid: 7f339e62-7b79-412c-8321-94b9592d2f45]" virtual=false 2025-10-13T00:23:43.474606600+00:00 stderr F I1013 00:23:43.474579 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console, name: builder, uid: 6aa5bd61-3b9a-4f36-b946-865df341128c]" 2025-10-13T00:23:43.474606600+00:00 stderr F I1013 00:23:43.474598 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: default, uid: 6a20f4ee-b54e-4159-bd01-1edaf4e9a852]" virtual=false 2025-10-13T00:23:43.478063686+00:00 stderr F I1013 00:23:43.478034 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console-user-settings, name: deployer, uid: 678fd191-9c31-4cd1-bc59-14502492b299]" 2025-10-13T00:23:43.478063686+00:00 stderr F I1013 00:23:43.478059 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: default, uid: ef433a66-5abe-439e-a715-d6613daf1c82]" virtual=false 2025-10-13T00:23:43.480803893+00:00 stderr F I1013 00:23:43.480764 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: serviceaccount-pull-secrets-controller, uid: 540d616f-b240-4ab1-9a32-b988f3807afe]" 2025-10-13T00:23:43.480803893+00:00 stderr F I1013 00:23:43.480790 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-openstack-infra, name: builder, uid: d5380a37-7145-4964-87d6-db2083bd9da7]" virtual=false 2025-10-13T00:23:43.484542077+00:00 stderr F I1013 00:23:43.484507 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: builder, uid: 502f6763-5236-4d42-8adb-1893f19e8f56]" 2025-10-13T00:23:43.484542077+00:00 stderr F I1013 00:23:43.484527 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" virtual=false 2025-10-13T00:23:43.488278611+00:00 stderr F I1013 00:23:43.488250 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" 2025-10-13T00:23:43.488278611+00:00 stderr F I1013 00:23:43.488269 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: default, uid: 1b11a024-5d6c-469f-846c-fd8950d06613]" virtual=false 2025-10-13T00:23:43.490589715+00:00 stderr F I1013 00:23:43.490562 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: builder, uid: 51685b4d-84c4-47a6-8ede-6070efc4e895]" 2025-10-13T00:23:43.490589715+00:00 stderr F I1013 00:23:43.490581 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: deployer, uid: 08906078-e9b1-4644-a139-75ce13d77fa4]" virtual=false 2025-10-13T00:23:43.493938598+00:00 stderr F I1013 00:23:43.493911 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: builder, uid: fbc62d52-90cd-4a51-b9e3-59db3adc4c20]" 2025-10-13T00:23:43.493938598+00:00 stderr F I1013 00:23:43.493934 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" virtual=false 2025-10-13T00:23:43.497352424+00:00 stderr F I1013 00:23:43.497313 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: default, uid: d75cdb91-c7bc-4733-8e50-40d4aad14f90]" 2025-10-13T00:23:43.497370484+00:00 stderr F I1013 00:23:43.497361 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: openshift-kube-scheduler-sa, uid: 50e97386-39db-4db4-8ac2-1405079af49a]" virtual=false 2025-10-13T00:23:43.500865831+00:00 stderr F I1013 00:23:43.500821 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: statefulset-controller, uid: f051e323-1133-49de-84bc-b59793b94ae7]" 2025-10-13T00:23:43.500865831+00:00 stderr F I1013 00:23:43.500845 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: pruner, uid: 33d7fb85-9932-408c-b54f-73de75f129b4]" virtual=false 2025-10-13T00:23:43.504384439+00:00 stderr F I1013 00:23:43.504355 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: installer-sa, uid: 2c34d780-64da-44ba-a979-d4c38d9bc0ca]" 2025-10-13T00:23:43.504402090+00:00 stderr F I1013 00:23:43.504380 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: deployer, uid: 8289b59d-a7c1-4307-b929-b84ecd2a47ea]" virtual=false 2025-10-13T00:23:43.507897067+00:00 stderr F I1013 00:23:43.507849 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: localhost-recovery-client, uid: d0e97c67-ef5a-4577-8381-df1235489439]" 2025-10-13T00:23:43.507897067+00:00 stderr F I1013 00:23:43.507888 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-node-lease, name: builder, uid: b98ca068-3954-4ae8-a8c1-8fb2792d2619]" virtual=false 2025-10-13T00:23:43.512677060+00:00 stderr F I1013 00:23:43.512634 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.512696341+00:00 stderr F I1013 00:23:43.512685 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 86c0fe0d-29a3-4976-8e34-622c5487375c]" virtual=false 2025-10-13T00:23:43.513808602+00:00 stderr F I1013 00:23:43.513768 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: 
c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" 2025-10-13T00:23:43.513808602+00:00 stderr F I1013 00:23:43.513794 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: default, uid: a3b8cf3a-2d8e-415f-a00b-11caf3e6a9b1]" virtual=false 2025-10-13T00:23:43.517308839+00:00 stderr F I1013 00:23:43.517270 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns, name: builder, uid: f159e95e-7503-483c-864b-ee75be3c2c0e]" 2025-10-13T00:23:43.517308839+00:00 stderr F I1013 00:23:43.517294 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-public, name: default, uid: a625954b-202b-4f35-9cb5-d6ad934c5cb8]" virtual=false 2025-10-13T00:23:43.520679403+00:00 stderr F I1013 00:23:43.520653 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: builder, uid: 69cf57f7-7ebd-43e8-bca6-7e59790367a9]" 2025-10-13T00:23:43.520679403+00:00 stderr F I1013 00:23:43.520675 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: default, uid: 3e90d461-4ce0-4540-afeb-415795acd757]" virtual=false 2025-10-13T00:23:43.527481183+00:00 stderr F I1013 00:23:43.527457 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: job-controller, uid: 250b6322-339f-4e0d-bf74-cc5b9d7f8202]" 2025-10-13T00:23:43.527501423+00:00 stderr F I1013 00:23:43.527480 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: etcd-sa, uid: 28368ba8-8c81-4622-8a2f-ed2a24741425]" virtual=false 2025-10-13T00:23:43.530757704+00:00 stderr F I1013 00:23:43.530709 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: default, uid: bb89d33f-0889-4a1e-8f6a-9da16ea10b43]" 2025-10-13T00:23:43.530757704+00:00 stderr F I1013 00:23:43.530732 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovirt-infra, name: builder, uid: 5938604c-e1aa-4d02-a94d-f4ac3c17393a]" virtual=false 2025-10-13T00:23:43.534920860+00:00 stderr F I1013 00:23:43.534895 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: redhat-marketplace, uid: 24cda2f5-7354-4ddb-adcb-eb6d99a3f134]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"redhat-marketplace","uid":"6f259421-4edb-49d8-a6ce-aa41dfc64264","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.534948241+00:00 stderr F I1013 00:23:43.534923 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovirt-infra, name: deployer, uid: aa845d60-a851-46d3-be37-c54118ec92da]" virtual=false 2025-10-13T00:23:43.537182003+00:00 stderr F I1013 00:23:43.537159 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: deployer, uid: 7f339e62-7b79-412c-8321-94b9592d2f45]" 2025-10-13T00:23:43.537201654+00:00 stderr F I1013 00:23:43.537182 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: deployer, uid: 3d0c970c-58e1-4e63-aa68-e1d505d58654]" virtual=false 
2025-10-13T00:23:43.540638499+00:00 stderr F I1013 00:23:43.540606 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: default, uid: 6a20f4ee-b54e-4159-bd01-1edaf4e9a852]" 2025-10-13T00:23:43.540638499+00:00 stderr F I1013 00:23:43.540629 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: deployer, uid: 7d13164d-5611-4bc4-a1cb-b633d70d00b2]" virtual=false 2025-10-13T00:23:43.543994793+00:00 stderr F I1013 00:23:43.543972 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: default, uid: ef433a66-5abe-439e-a715-d6613daf1c82]" 2025-10-13T00:23:43.544014493+00:00 stderr F I1013 00:23:43.543996 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kni-infra, name: default, uid: d42c9b8e-a514-4a22-ac6d-40f7fe71e4fa]" virtual=false 2025-10-13T00:23:43.547787268+00:00 stderr F I1013 00:23:43.547687 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-openstack-infra, name: builder, uid: d5380a37-7145-4964-87d6-db2083bd9da7]" 2025-10-13T00:23:43.547787268+00:00 stderr F I1013 00:23:43.547710 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-vsphere-infra, name: deployer, uid: a3f53903-5655-4ea6-ab22-e7f7c8324e55]" virtual=false 2025-10-13T00:23:43.554527576+00:00 stderr F I1013 00:23:43.554492 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: default, uid: 1b11a024-5d6c-469f-846c-fd8950d06613]" 2025-10-13T00:23:43.554553747+00:00 stderr F I1013 00:23:43.554526 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-user-settings, name: builder, uid: eceac60d-7c88-4164-a1c0-7d757c8d667c]" virtual=false 2025-10-13T00:23:43.557826388+00:00 stderr F I1013 00:23:43.557793 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: deployer, uid: 08906078-e9b1-4644-a139-75ce13d77fa4]" 2025-10-13T00:23:43.557826388+00:00 stderr F I1013 00:23:43.557813 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: builder, uid: 3bb6108c-1524-4458-a997-b9343271d310]" virtual=false 2025-10-13T00:23:43.564635588+00:00 stderr F I1013 00:23:43.564582 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: openshift-kube-scheduler-sa, uid: 50e97386-39db-4db4-8ac2-1405079af49a]" 2025-10-13T00:23:43.564635588+00:00 stderr F I1013 00:23:43.564611 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: build-controller, uid: d79ef9d0-a3b9-435c-83dc-f8656a4c5a35]" virtual=false 2025-10-13T00:23:43.567999791+00:00 stderr F I1013 00:23:43.567972 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: pruner, uid: 33d7fb85-9932-408c-b54f-73de75f129b4]" 2025-10-13T00:23:43.567999791+00:00 stderr F I1013 00:23:43.567995 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, 
namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" virtual=false 2025-10-13T00:23:43.572711683+00:00 stderr F I1013 00:23:43.572676 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: deployer, uid: 8289b59d-a7c1-4307-b929-b84ecd2a47ea]" 2025-10-13T00:23:43.572711683+00:00 stderr F I1013 00:23:43.572699 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: deployment-controller, uid: 4681442a-6316-4a67-b22e-8bc7bb143010]" virtual=false 2025-10-13T00:23:43.574234145+00:00 stderr F I1013 00:23:43.574209 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-node-lease, name: builder, uid: b98ca068-3954-4ae8-a8c1-8fb2792d2619]" 2025-10-13T00:23:43.574251816+00:00 stderr F I1013 00:23:43.574239 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: unidling-controller, uid: 3d348dd7-11ce-41a8-bee4-d73cc528669a]" virtual=false 2025-10-13T00:23:43.580639944+00:00 stderr F I1013 00:23:43.580614 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-multus, name: default, uid: a3b8cf3a-2d8e-415f-a00b-11caf3e6a9b1]" 2025-10-13T00:23:43.580657914+00:00 stderr F I1013 00:23:43.580638 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-openstack-infra, name: deployer, uid: ed625bba-8050-41c4-9235-0ac2d5d46e45]" virtual=false 2025-10-13T00:23:43.584250474+00:00 stderr F I1013 00:23:43.584225 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-public, name: default, uid: a625954b-202b-4f35-9cb5-d6ad934c5cb8]" 2025-10-13T00:23:43.584268375+00:00 stderr F I1013 00:23:43.584253 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift, name: deployer, uid: 4795be54-cb97-4e8d-9285-4d3b1203f2e3]" virtual=false 2025-10-13T00:23:43.588769740+00:00 stderr F I1013 00:23:43.588741 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: default, uid: 3e90d461-4ce0-4540-afeb-415795acd757]" 2025-10-13T00:23:43.588788741+00:00 stderr F I1013 00:23:43.588769 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: builder, uid: f203a333-3df9-43b3-a4f7-eb0cc340f660]" virtual=false 2025-10-13T00:23:43.594093378+00:00 stderr F I1013 00:23:43.594045 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.594093378+00:00 stderr F I1013 00:23:43.594075 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: default, uid: 14b1e48e-808c-4a4d-a76a-1fc31b8ec5d0]" virtual=false 2025-10-13T00:23:43.595070946+00:00 stderr F I1013 00:23:43.595031 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" 
item="[v1/ServiceAccount, namespace: openshift-etcd, name: etcd-sa, uid: 28368ba8-8c81-4622-8a2f-ed2a24741425]" 2025-10-13T00:23:43.595070946+00:00 stderr F I1013 00:23:43.595057 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" virtual=false 2025-10-13T00:23:43.598114610+00:00 stderr F I1013 00:23:43.598073 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ovirt-infra, name: builder, uid: 5938604c-e1aa-4d02-a94d-f4ac3c17393a]" 2025-10-13T00:23:43.598114610+00:00 stderr F I1013 00:23:43.598094 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: serviceaccount-controller, uid: 2cdd6f03-ad8d-4ce5-bd91-c2634ce30d94]" virtual=false 2025-10-13T00:23:43.600878717+00:00 stderr F I1013 00:23:43.600316 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ovirt-infra, name: deployer, uid: aa845d60-a851-46d3-be37-c54118ec92da]" 2025-10-13T00:23:43.600878717+00:00 stderr F I1013 00:23:43.600351 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: builder, uid: e98c39f6-8260-4ce8-9daa-f3be18e21432]" virtual=false 2025-10-13T00:23:43.604006024+00:00 stderr F I1013 00:23:43.603975 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: deployer, uid: 3d0c970c-58e1-4e63-aa68-e1d505d58654]" 2025-10-13T00:23:43.604006024+00:00 stderr F I1013 00:23:43.603996 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: installer-sa, uid: 4f7819d7-19cd-4dd3-b8b3-538113e28a63]" virtual=false 2025-10-13T00:23:43.607701017+00:00 stderr F I1013 00:23:43.607655 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: deployer, uid: 7d13164d-5611-4bc4-a1cb-b633d70d00b2]" 2025-10-13T00:23:43.607701017+00:00 stderr F I1013 00:23:43.607677 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: default, name: deployer, uid: e1051944-6500-4ddb-bf82-e38c2d9bd592]" virtual=false 2025-10-13T00:23:43.611285417+00:00 stderr F I1013 00:23:43.611250 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kni-infra, name: default, uid: d42c9b8e-a514-4a22-ac6d-40f7fe71e4fa]" 2025-10-13T00:23:43.611303748+00:00 stderr F I1013 00:23:43.611281 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: default, uid: 9bc3f270-1e87-47e0-9960-44639870da45]" virtual=false 2025-10-13T00:23:43.615031432+00:00 stderr F I1013 00:23:43.614984 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-vsphere-infra, name: deployer, uid: a3f53903-5655-4ea6-ab22-e7f7c8324e55]" 2025-10-13T00:23:43.615031432+00:00 stderr F I1013 00:23:43.615011 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: default, uid: d463847b-4122-4714-b876-291e301e5162]" virtual=false 2025-10-13T00:23:43.620253917+00:00 stderr F I1013 00:23:43.620203 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.620253917+00:00 stderr F I1013 00:23:43.620228 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" virtual=false 2025-10-13T00:23:43.621974695+00:00 stderr F I1013 00:23:43.621935 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console-user-settings, name: builder, uid: eceac60d-7c88-4164-a1c0-7d757c8d667c]" 2025-10-13T00:23:43.621974695+00:00 stderr F I1013 00:23:43.621962 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-openstack-infra, name: default, uid: 67a61792-07e7-42c5-ac91-caa067107b12]" virtual=false 2025-10-13T00:23:43.624227488+00:00 stderr F I1013 00:23:43.624188 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: builder, uid: 3bb6108c-1524-4458-a997-b9343271d310]" 2025-10-13T00:23:43.624227488+00:00 stderr F I1013 00:23:43.624216 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift, name: builder, uid: c96fbe55-0c67-4e96-a1e7-244c2609f912]" virtual=false 2025-10-13T00:23:43.630893173+00:00 stderr F I1013 00:23:43.630855 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: build-controller, uid: d79ef9d0-a3b9-435c-83dc-f8656a4c5a35]" 2025-10-13T00:23:43.630893173+00:00 stderr F I1013 00:23:43.630865 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.630893173+00:00 stderr F I1013 00:23:43.630885 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: default, uid: 9707de75-ce5c-44be-8b74-34b186cd35cc]" virtual=false 2025-10-13T00:23:43.630920584+00:00 stderr F I1013 00:23:43.630888 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-nutanix-infra, name: deployer, uid: 8b1104d8-68c7-4d88-acb3-e16dd3d7e7c2]" virtual=false 2025-10-13T00:23:43.637747134+00:00 stderr F I1013 00:23:43.637709 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: deployment-controller, uid: 4681442a-6316-4a67-b22e-8bc7bb143010]" 2025-10-13T00:23:43.637747134+00:00 stderr F I1013 00:23:43.637731 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: default, uid: b570ad13-5841-4bc7-8bfc-04941e39690b]" virtual=false 2025-10-13T00:23:43.641216421+00:00 stderr F I1013 00:23:43.641168 1 garbagecollector.go:596] "item doesn't have an owner, continue on next 
item" item="[v1/ServiceAccount, namespace: openshift-infra, name: unidling-controller, uid: 3d348dd7-11ce-41a8-bee4-d73cc528669a]" 2025-10-13T00:23:43.641216421+00:00 stderr F I1013 00:23:43.641189 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: persistent-volume-binder, uid: 36fcd6df-30f3-446d-b417-37823a3484ff]" virtual=false 2025-10-13T00:23:43.647030263+00:00 stderr F I1013 00:23:43.646980 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: cluster-monitoring-operator, uid: 86c0fe0d-29a3-4976-8e34-622c5487375c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.647030263+00:00 stderr F I1013 00:23:43.647012 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: pv-protection-controller, uid: acddf128-939a-45bf-96e7-495271e27729]" virtual=false 2025-10-13T00:23:43.648045421+00:00 stderr F I1013 00:23:43.648002 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-openstack-infra, name: deployer, uid: ed625bba-8050-41c4-9235-0ac2d5d46e45]" 2025-10-13T00:23:43.648045421+00:00 stderr F I1013 00:23:43.648023 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: deployer, uid: e2f89c23-5a71-4909-86bf-421aedac4fcf]" virtual=false 2025-10-13T00:23:43.650660314+00:00 stderr F I1013 00:23:43.650614 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift, name: deployer, uid: 4795be54-cb97-4e8d-9285-4d3b1203f2e3]" 2025-10-13T00:23:43.650660314+00:00 stderr F I1013 00:23:43.650640 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: image-trigger-controller, uid: 3e65b559-003b-42d3-bcfe-7f0576f40efb]" virtual=false 2025-10-13T00:23:43.656644401+00:00 stderr F I1013 00:23:43.656592 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: builder, uid: f203a333-3df9-43b3-a4f7-eb0cc340f660]" 2025-10-13T00:23:43.656644401+00:00 stderr F I1013 00:23:43.656625 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" virtual=false 2025-10-13T00:23:43.657299099+00:00 stderr F I1013 00:23:43.657242 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: default, uid: 14b1e48e-808c-4a4d-a76a-1fc31b8ec5d0]" 2025-10-13T00:23:43.657299099+00:00 stderr F I1013 00:23:43.657268 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: deployer, uid: 0dac318c-ae44-4526-9949-82e8fa35096d]" virtual=false 2025-10-13T00:23:43.664072418+00:00 stderr F I1013 00:23:43.664034 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: serviceaccount-controller, uid: 2cdd6f03-ad8d-4ce5-bd91-c2634ce30d94]" 2025-10-13T00:23:43.664072418+00:00 stderr F I1013 00:23:43.664064 1 garbagecollector.go:549] "Processing 
item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: deployer, uid: 4140fa52-817b-49e5-8163-5f8cfcf3125f]" virtual=false 2025-10-13T00:23:43.667524544+00:00 stderr F I1013 00:23:43.667487 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: builder, uid: e98c39f6-8260-4ce8-9daa-f3be18e21432]" 2025-10-13T00:23:43.667544154+00:00 stderr F I1013 00:23:43.667521 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: builder, uid: 376a91ee-9861-4e5a-b33a-7552d503555b]" virtual=false 2025-10-13T00:23:43.671085103+00:00 stderr F I1013 00:23:43.671045 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: installer-sa, uid: 4f7819d7-19cd-4dd3-b8b3-538113e28a63]" 2025-10-13T00:23:43.671122774+00:00 stderr F I1013 00:23:43.671105 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" virtual=false 2025-10-13T00:23:43.676372790+00:00 stderr F I1013 00:23:43.676306 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: default, name: deployer, uid: e1051944-6500-4ddb-bf82-e38c2d9bd592]" 2025-10-13T00:23:43.676372790+00:00 stderr F I1013 00:23:43.676356 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" virtual=false 2025-10-13T00:23:43.678182511+00:00 stderr F I1013 00:23:43.678151 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: default, uid: 9bc3f270-1e87-47e0-9960-44639870da45]" 2025-10-13T00:23:43.678182511+00:00 stderr F I1013 00:23:43.678178 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: builder, uid: 65829925-2f80-4dca-9f8b-9b8d3c81c156]" virtual=false 2025-10-13T00:23:43.680540046+00:00 stderr F I1013 00:23:43.680506 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: default, uid: d463847b-4122-4714-b876-291e301e5162]" 2025-10-13T00:23:43.680540046+00:00 stderr F I1013 00:23:43.680527 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: default, uid: 1e0359df-bf9b-4484-bc1b-40ecc3f031b2]" virtual=false 2025-10-13T00:23:43.687439919+00:00 stderr F I1013 00:23:43.687397 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-openstack-infra, name: default, uid: 67a61792-07e7-42c5-ac91-caa067107b12]" 2025-10-13T00:23:43.687439919+00:00 stderr F I1013 00:23:43.687421 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: deployer, uid: e263b815-420a-4fad-81a5-51a909811982]" virtual=false 2025-10-13T00:23:43.691482901+00:00 stderr F I1013 00:23:43.691436 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift, name: builder, uid: c96fbe55-0c67-4e96-a1e7-244c2609f912]" 
2025-10-13T00:23:43.691482901+00:00 stderr F I1013 00:23:43.691473 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config, name: builder, uid: c1b8ba2e-a3c9-480b-8495-449426a7b3a1]" virtual=false 2025-10-13T00:23:43.693908859+00:00 stderr F I1013 00:23:43.693875 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd, name: default, uid: 9707de75-ce5c-44be-8b74-34b186cd35cc]" 2025-10-13T00:23:43.693908859+00:00 stderr F I1013 00:23:43.693903 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: certificate-controller, uid: 2eaa139f-33a4-430e-a823-c54a248e980a]" virtual=false 2025-10-13T00:23:43.697462618+00:00 stderr F I1013 00:23:43.697411 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-nutanix-infra, name: deployer, uid: 8b1104d8-68c7-4d88-acb3-e16dd3d7e7c2]" 2025-10-13T00:23:43.697462618+00:00 stderr F I1013 00:23:43.697448 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: generic-garbage-collector, uid: 32e156a8-8572-4862-9ca2-c1232de64f64]" virtual=false 2025-10-13T00:23:43.703215998+00:00 stderr F I1013 00:23:43.703177 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.703215998+00:00 stderr F I1013 00:23:43.703202 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: default, uid: 622e0d59-85a6-440a-aa80-830294bd72d4]" virtual=false 2025-10-13T00:23:43.704311089+00:00 stderr F I1013 00:23:43.704277 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: default, uid: b570ad13-5841-4bc7-8bfc-04941e39690b]" 2025-10-13T00:23:43.704311089+00:00 stderr F I1013 00:23:43.704298 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: deployer, uid: a46c9bb8-83c2-4f02-bedc-96f7e62c1c31]" virtual=false 2025-10-13T00:23:43.707757744+00:00 stderr F I1013 00:23:43.707711 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: persistent-volume-binder, uid: 36fcd6df-30f3-446d-b417-37823a3484ff]" 2025-10-13T00:23:43.707757744+00:00 stderr F I1013 00:23:43.707741 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: csi-hostpath-provisioner-sa, uid: e5580f83-c1b1-4a76-a4a4-7ae67d35adbd]" virtual=false 2025-10-13T00:23:43.711031266+00:00 stderr F I1013 00:23:43.710979 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: pv-protection-controller, uid: acddf128-939a-45bf-96e7-495271e27729]" 2025-10-13T00:23:43.711031266+00:00 stderr F I1013 00:23:43.711004 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" 
virtual=false 2025-10-13T00:23:43.714782600+00:00 stderr F I1013 00:23:43.714747 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: deployer, uid: e2f89c23-5a71-4909-86bf-421aedac4fcf]" 2025-10-13T00:23:43.714782600+00:00 stderr F I1013 00:23:43.714769 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: deployer, uid: 8fe3df78-baa8-4c0f-92f1-779dcbd36e0c]" virtual=false 2025-10-13T00:23:43.716966121+00:00 stderr F I1013 00:23:43.716930 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: image-trigger-controller, uid: 3e65b559-003b-42d3-bcfe-7f0576f40efb]" 2025-10-13T00:23:43.716966121+00:00 stderr F I1013 00:23:43.716950 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: expand-controller, uid: c74e901a-55b2-4b90-bcae-452af104a0de]" virtual=false 2025-10-13T00:23:43.721950000+00:00 stderr F I1013 00:23:43.721906 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" 2025-10-13T00:23:43.721950000+00:00 stderr F I1013 00:23:43.721926 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: image-import-controller, uid: d246e709-c05f-4c2b-99bf-9854891708ae]" virtual=false 2025-10-13T00:23:43.725263432+00:00 stderr F I1013 00:23:43.725237 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: deployer, uid: 0dac318c-ae44-4526-9949-82e8fa35096d]" 2025-10-13T00:23:43.725263432+00:00 stderr F I1013 00:23:43.725259 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns, name: default, uid: 226dc9d2-584b-4534-91a1-f54991832c53]" virtual=false 2025-10-13T00:23:43.728534803+00:00 stderr F I1013 00:23:43.728511 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.728559264+00:00 stderr F I1013 00:23:43.728534 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: default, uid: 76e83ab2-9557-4a68-bf85-85479267a323]" virtual=false 2025-10-13T00:23:43.731247239+00:00 stderr F I1013 00:23:43.731187 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: deployer, uid: 4140fa52-817b-49e5-8163-5f8cfcf3125f]" 2025-10-13T00:23:43.731337731+00:00 stderr F I1013 00:23:43.731307 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: csi-provisioner, uid: 9d38c9ea-df18-4655-b6fd-c492ef612ce7]" virtual=false 2025-10-13T00:23:43.734479689+00:00 stderr F I1013 00:23:43.734421 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: 
openshift-kube-apiserver-operator, name: builder, uid: 376a91ee-9861-4e5a-b33a-7552d503555b]" 2025-10-13T00:23:43.734518530+00:00 stderr F I1013 00:23:43.734499 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: default, uid: 91a386dc-6d1d-4e89-bd05-8b6277c5e339]" virtual=false 2025-10-13T00:23:43.741275768+00:00 stderr F I1013 00:23:43.741247 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" 2025-10-13T00:23:43.741294639+00:00 stderr F I1013 00:23:43.741285 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: default, uid: a65d3046-3797-4e1b-930b-9b2fb23dfe1c]" virtual=false 2025-10-13T00:23:43.744487188+00:00 stderr F I1013 00:23:43.744463 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: builder, uid: 65829925-2f80-4dca-9f8b-9b8d3c81c156]" 2025-10-13T00:23:43.744504518+00:00 stderr F I1013 00:23:43.744495 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" virtual=false 2025-10-13T00:23:43.747666746+00:00 stderr F I1013 00:23:43.747645 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: default, uid: 1e0359df-bf9b-4484-bc1b-40ecc3f031b2]" 2025-10-13T00:23:43.747682727+00:00 stderr F I1013 00:23:43.747673 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: deployer, uid: d2891b43-c43d-4301-8b2f-ab810f64e280]" virtual=false 2025-10-13T00:23:43.753791727+00:00 stderr F I1013 00:23:43.753764 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.753810457+00:00 stderr F I1013 00:23:43.753796 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: default, uid: 7539f070-c84c-4fec-855c-36bedaa3a0b4]" virtual=false 2025-10-13T00:23:43.754416794+00:00 stderr F I1013 00:23:43.754391 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: deployer, uid: e263b815-420a-4fad-81a5-51a909811982]" 2025-10-13T00:23:43.754446445+00:00 stderr F I1013 00:23:43.754419 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" virtual=false 2025-10-13T00:23:43.757934522+00:00 stderr F I1013 00:23:43.757905 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config, name: builder, uid: c1b8ba2e-a3c9-480b-8495-449426a7b3a1]" 2025-10-13T00:23:43.757971503+00:00 stderr F I1013 00:23:43.757935 1 garbagecollector.go:549] 
"Processing item" item="[v1/ServiceAccount, namespace: openshift-node, name: default, uid: 5fe6de71-20b4-4aa6-8f8a-11550bac84a3]" virtual=false 2025-10-13T00:23:43.761283315+00:00 stderr F I1013 00:23:43.761253 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: certificate-controller, uid: 2eaa139f-33a4-430e-a823-c54a248e980a]" 2025-10-13T00:23:43.761306786+00:00 stderr F I1013 00:23:43.761281 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: clusterrole-aggregation-controller, uid: ab877d58-60b9-4b8e-b53f-4cbac59ce1a1]" virtual=false 2025-10-13T00:23:43.765354889+00:00 stderr F I1013 00:23:43.765318 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: generic-garbage-collector, uid: 32e156a8-8572-4862-9ca2-c1232de64f64]" 2025-10-13T00:23:43.765378330+00:00 stderr F I1013 00:23:43.765367 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: deployer, uid: 03dcb8d5-5387-4ab7-ac52-21daa21d7657]" virtual=false 2025-10-13T00:23:43.767804407+00:00 stderr F I1013 00:23:43.767777 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-monitoring, name: default, uid: 622e0d59-85a6-440a-aa80-830294bd72d4]" 2025-10-13T00:23:43.767825268+00:00 stderr F I1013 00:23:43.767802 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: attachdetach-controller, uid: 84c7583d-0cff-4679-bed0-fb80e01073ff]" virtual=false 2025-10-13T00:23:43.771165271+00:00 stderr F I1013 00:23:43.771122 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: deployer, uid: a46c9bb8-83c2-4f02-bedc-96f7e62c1c31]" 2025-10-13T00:23:43.771165271+00:00 stderr F I1013 00:23:43.771160 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: node-bootstrapper, uid: 53707188-3ffd-4410-8ac9-c1c5f2472b99]" virtual=false 2025-10-13T00:23:43.774660868+00:00 stderr F I1013 00:23:43.774629 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: csi-hostpath-provisioner-sa, uid: e5580f83-c1b1-4a76-a4a4-7ae67d35adbd]" 2025-10-13T00:23:43.774677949+00:00 stderr F I1013 00:23:43.774660 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-nutanix-infra, name: builder, uid: fac62e7e-8a0e-4408-ba15-724f1742674f]" virtual=false 2025-10-13T00:23:43.781183160+00:00 stderr F I1013 00:23:43.781163 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: deployer, uid: 8fe3df78-baa8-4c0f-92f1-779dcbd36e0c]" 2025-10-13T00:23:43.781201840+00:00 stderr F I1013 00:23:43.781186 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 038ef135-d9c7-4820-b7df-d41eb9c0a4a5]" virtual=false 2025-10-13T00:23:43.784335958+00:00 stderr F I1013 00:23:43.784295 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: expand-controller, uid: 
c74e901a-55b2-4b90-bcae-452af104a0de]" 2025-10-13T00:23:43.784335958+00:00 stderr F I1013 00:23:43.784316 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config, name: default, uid: c1c8f43a-b274-4e76-9c8d-38afdb7af5f0]" virtual=false 2025-10-13T00:23:43.787752163+00:00 stderr F I1013 00:23:43.787726 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: image-import-controller, uid: d246e709-c05f-4c2b-99bf-9854891708ae]" 2025-10-13T00:23:43.787774923+00:00 stderr F I1013 00:23:43.787753 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: builder, uid: 3bd19ce1-db77-44fc-ad8b-0defff2f4dc1]" virtual=false 2025-10-13T00:23:43.791038664+00:00 stderr F I1013 00:23:43.791011 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns, name: default, uid: 226dc9d2-584b-4534-91a1-f54991832c53]" 2025-10-13T00:23:43.791038664+00:00 stderr F I1013 00:23:43.791031 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: deployer, uid: 30e9c873-497c-4428-9ec2-d7ed533e7561]" virtual=false 2025-10-13T00:23:43.794665495+00:00 stderr F I1013 00:23:43.794614 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: default, uid: 76e83ab2-9557-4a68-bf85-85479267a323]" 2025-10-13T00:23:43.794665495+00:00 stderr F I1013 00:23:43.794638 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: node-bootstrapper, uid: 8e90b9f3-0284-4ea2-8fe6-6adcf38a8c3e]" virtual=false 2025-10-13T00:23:43.797842794+00:00 stderr F I1013 00:23:43.797822 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: csi-provisioner, uid: 9d38c9ea-df18-4655-b6fd-c492ef612ce7]" 2025-10-13T00:23:43.797859844+00:00 stderr F I1013 00:23:43.797845 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" virtual=false 2025-10-13T00:23:43.801345791+00:00 stderr F I1013 00:23:43.801301 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: default, uid: 91a386dc-6d1d-4e89-bd05-8b6277c5e339]" 2025-10-13T00:23:43.801345791+00:00 stderr F I1013 00:23:43.801340 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: default, uid: c75d59d6-8d7f-4955-8506-128d6a61d134]" virtual=false 2025-10-13T00:23:43.807414830+00:00 stderr F I1013 00:23:43.807377 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.807436281+00:00 stderr F I1013 00:23:43.807416 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: 
template-instance-finalizer-controller, uid: a23704f3-91ad-4f71-9fff-77e50f39a01d]" virtual=false 2025-10-13T00:23:43.808410058+00:00 stderr F I1013 00:23:43.808388 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: default, uid: a65d3046-3797-4e1b-930b-9b2fb23dfe1c]" 2025-10-13T00:23:43.808445989+00:00 stderr F I1013 00:23:43.808412 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: certified-operators, uid: 3d716352-cf0b-4bde-baa2-6d6e7ea8300a]" virtual=false 2025-10-13T00:23:43.814189899+00:00 stderr F I1013 00:23:43.814158 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: deployer, uid: d2891b43-c43d-4301-8b2f-ab810f64e280]" 2025-10-13T00:23:43.814189899+00:00 stderr F I1013 00:23:43.814179 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config, name: deployer, uid: 94d98fef-7ecf-43f9-8cff-175541232594]" virtual=false 2025-10-13T00:23:43.817541433+00:00 stderr F I1013 00:23:43.817509 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: default, uid: 7539f070-c84c-4fec-855c-36bedaa3a0b4]" 2025-10-13T00:23:43.817541433+00:00 stderr F I1013 00:23:43.817536 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: builder, uid: 9a0c3214-a4d8-4bdb-9ffb-edbd5a6be0e9]" virtual=false 2025-10-13T00:23:43.820941147+00:00 stderr F I1013 00:23:43.820911 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" 2025-10-13T00:23:43.820941147+00:00 stderr F I1013 00:23:43.820937 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: default, uid: 77cc3574-9c4e-448c-a039-779321a01aa0]" virtual=false 2025-10-13T00:23:43.824279670+00:00 stderr F I1013 00:23:43.824250 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-node, name: default, uid: 5fe6de71-20b4-4aa6-8f8a-11550bac84a3]" 2025-10-13T00:23:43.824296791+00:00 stderr F I1013 00:23:43.824277 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: deployer, uid: c41dfd1e-abb2-4d2c-b277-3b26354a5ca6]" virtual=false 2025-10-13T00:23:43.827428448+00:00 stderr F I1013 00:23:43.827403 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: clusterrole-aggregation-controller, uid: ab877d58-60b9-4b8e-b53f-4cbac59ce1a1]" 2025-10-13T00:23:43.827428448+00:00 stderr F I1013 00:23:43.827423 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns, name: deployer, uid: 2e37f17f-e07c-4773-87c3-3687450441df]" virtual=false 2025-10-13T00:23:43.830809042+00:00 stderr F I1013 00:23:43.830784 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication, name: deployer, uid: 03dcb8d5-5387-4ab7-ac52-21daa21d7657]" 2025-10-13T00:23:43.830809042+00:00 stderr F I1013 
00:23:43.830803 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: resourcequota-controller, uid: dbe28b53-0d5c-4d2a-8ff6-1b19545db119]" virtual=false 2025-10-13T00:23:43.834447393+00:00 stderr F I1013 00:23:43.834420 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: attachdetach-controller, uid: 84c7583d-0cff-4679-bed0-fb80e01073ff]" 2025-10-13T00:23:43.834447393+00:00 stderr F I1013 00:23:43.834438 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: builder, uid: b99cae80-55eb-4921-9f95-9ac1dd4fbc4a]" virtual=false 2025-10-13T00:23:43.837768936+00:00 stderr F I1013 00:23:43.837742 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: node-bootstrapper, uid: 53707188-3ffd-4410-8ac9-c1c5f2472b99]" 2025-10-13T00:23:43.837768936+00:00 stderr F I1013 00:23:43.837760 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-os-puller, uid: c64da89b-7c2d-4eb1-98dc-8714438c20f2]" virtual=false 2025-10-13T00:23:43.841007826+00:00 stderr F I1013 00:23:43.840981 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-nutanix-infra, name: builder, uid: fac62e7e-8a0e-4408-ba15-724f1742674f]" 2025-10-13T00:23:43.841007826+00:00 stderr F I1013 00:23:43.840999 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: default, uid: d0e4fff4-4c96-46f2-9d90-7cb2123d135c]" virtual=false 2025-10-13T00:23:43.846823738+00:00 stderr F I1013 00:23:43.846782 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.846823738+00:00 stderr F I1013 00:23:43.846811 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: default, uid: 2535416e-6576-49e1-a74f-6d6adfb61a1b]" virtual=false 2025-10-13T00:23:43.847617530+00:00 stderr F I1013 00:23:43.847585 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 038ef135-d9c7-4820-b7df-d41eb9c0a4a5]" 2025-10-13T00:23:43.847617530+00:00 stderr F I1013 00:23:43.847605 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: build-config-change-controller, uid: 99805550-2568-4d4d-9494-176e5906aa5b]" virtual=false 2025-10-13T00:23:43.851104367+00:00 stderr F I1013 00:23:43.851063 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config, name: default, uid: c1c8f43a-b274-4e76-9c8d-38afdb7af5f0]" 2025-10-13T00:23:43.851104367+00:00 stderr F I1013 00:23:43.851086 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kni-infra, name: deployer, uid: c42fd2a9-7c3b-40d3-9ceb-8a7084420815]" virtual=false 
2025-10-13T00:23:43.854418750+00:00 stderr F I1013 00:23:43.854380 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: builder, uid: 3bd19ce1-db77-44fc-ad8b-0defff2f4dc1]" 2025-10-13T00:23:43.854418750+00:00 stderr F I1013 00:23:43.854403 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: openshift-controller-manager-sa, uid: cbc80d53-2db1-4380-8a7d-3f4f8600b677]" virtual=false 2025-10-13T00:23:43.857754653+00:00 stderr F I1013 00:23:43.857724 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: deployer, uid: 30e9c873-497c-4428-9ec2-d7ed533e7561]" 2025-10-13T00:23:43.857754653+00:00 stderr F I1013 00:23:43.857746 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: builder, uid: 51084ba0-e550-4240-b14a-c93eca499889]" virtual=false 2025-10-13T00:23:43.861213309+00:00 stderr F I1013 00:23:43.861174 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: node-bootstrapper, uid: 8e90b9f3-0284-4ea2-8fe6-6adcf38a8c3e]" 2025-10-13T00:23:43.861213309+00:00 stderr F I1013 00:23:43.861198 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" virtual=false 2025-10-13T00:23:43.867957807+00:00 stderr F I1013 00:23:43.867926 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: default, uid: c75d59d6-8d7f-4955-8506-128d6a61d134]" 2025-10-13T00:23:43.867957807+00:00 stderr F I1013 00:23:43.867950 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: builder, uid: 4fc5ea06-52ff-4e48-bade-47a549889ab6]" virtual=false 2025-10-13T00:23:43.871358772+00:00 stderr F I1013 00:23:43.871276 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: template-instance-finalizer-controller, uid: a23704f3-91ad-4f71-9fff-77e50f39a01d]" 2025-10-13T00:23:43.871358772+00:00 stderr F I1013 00:23:43.871302 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" virtual=false 2025-10-13T00:23:43.879964231+00:00 stderr F I1013 00:23:43.879892 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:43.879964231+00:00 stderr F I1013 00:23:43.879926 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: deployer-controller, uid: a8ac2e98-3289-4616-a62d-093d5419a990]" virtual=false 2025-10-13T00:23:43.881147244+00:00 stderr F I1013 00:23:43.881106 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" 
item="[v1/ServiceAccount, namespace: openshift-config, name: deployer, uid: 94d98fef-7ecf-43f9-8cff-175541232594]" 2025-10-13T00:23:43.881147244+00:00 stderr F I1013 00:23:43.881131 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" virtual=false 2025-10-13T00:23:43.884713454+00:00 stderr F I1013 00:23:43.884685 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: builder, uid: 9a0c3214-a4d8-4bdb-9ffb-edbd5a6be0e9]" 2025-10-13T00:23:43.884713454+00:00 stderr F I1013 00:23:43.884707 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: namespace-security-allocation-controller, uid: 66c10b5f-162f-4487-92b9-070818701ee3]" virtual=false 2025-10-13T00:23:43.886840993+00:00 stderr F I1013 00:23:43.886814 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: default, uid: 77cc3574-9c4e-448c-a039-779321a01aa0]" 2025-10-13T00:23:43.886840993+00:00 stderr F I1013 00:23:43.886833 1 garbagecollector.go:549] "Processing item" item="[batch/v1/CronJob, namespace: openshift-image-registry, name: image-pruner, uid: 4864366f-c43f-4d3a-b201-d3cbf0d7b4e5]" virtual=false 2025-10-13T00:23:43.891276927+00:00 stderr F I1013 00:23:43.891243 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: deployer, uid: c41dfd1e-abb2-4d2c-b277-3b26354a5ca6]" 2025-10-13T00:23:43.891276927+00:00 stderr F I1013 00:23:43.891266 1 garbagecollector.go:549] "Processing item" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" virtual=false 2025-10-13T00:23:43.893531739+00:00 stderr F I1013 00:23:43.893509 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns, name: deployer, uid: 2e37f17f-e07c-4773-87c3-3687450441df]" 2025-10-13T00:23:43.893549460+00:00 stderr F I1013 00:23:43.893531 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-marketplace, name: marketplace-operator-8b455464d, uid: facbd59b-fbe9-42f6-9512-9219bc0f7c59]" virtual=false 2025-10-13T00:23:43.898070326+00:00 stderr F I1013 00:23:43.898043 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: resourcequota-controller, uid: dbe28b53-0d5c-4d2a-8ff6-1b19545db119]" 2025-10-13T00:23:43.898087896+00:00 stderr F I1013 00:23:43.898068 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-operator-5dbbc74dc9, uid: f1923bf4-180b-4023-9749-8f4ab7def832]" virtual=false 2025-10-13T00:23:43.900846193+00:00 stderr F I1013 00:23:43.900821 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: builder, uid: b99cae80-55eb-4921-9f95-9ac1dd4fbc4a]" 2025-10-13T00:23:43.900846193+00:00 stderr F I1013 00:23:43.900842 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-scheduler-operator, name: 
openshift-kube-scheduler-operator-5d9b995f6b, uid: 10507f34-78cd-4b53-87ae-2511ad70bc72]" virtual=false 2025-10-13T00:23:43.904146835+00:00 stderr F I1013 00:23:43.904123 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-os-puller, uid: c64da89b-7c2d-4eb1-98dc-8714438c20f2]" 2025-10-13T00:23:43.904164196+00:00 stderr F I1013 00:23:43.904146 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" virtual=false 2025-10-13T00:23:43.907548540+00:00 stderr F I1013 00:23:43.907525 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: default, uid: d0e4fff4-4c96-46f2-9d90-7cb2123d135c]" 2025-10-13T00:23:43.907574160+00:00 stderr F I1013 00:23:43.907547 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca-operator, name: service-ca-operator-546b4f8984, uid: 9edef8f0-3959-4861-8d92-af5228053363]" virtual=false 2025-10-13T00:23:43.910925704+00:00 stderr F I1013 00:23:43.910871 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console, name: default, uid: 2535416e-6576-49e1-a74f-6d6adfb61a1b]" 2025-10-13T00:23:43.910925704+00:00 stderr F I1013 00:23:43.910917 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-conversion-webhook-595f9969b, uid: 74c18b0a-8cb0-4535-bfe8-26315d38e6da]" virtual=false 2025-10-13T00:23:43.915682036+00:00 stderr F I1013 00:23:43.915581 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: build-config-change-controller, uid: 99805550-2568-4d4d-9494-176e5906aa5b]" 2025-10-13T00:23:43.915682036+00:00 stderr F I1013 00:23:43.915611 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: control-plane-machine-set-operator-649bd778b4, uid: 8b73a9c4-0835-468c-8d91-7b4c54aef119]" virtual=false 2025-10-13T00:23:43.917846367+00:00 stderr F I1013 00:23:43.917747 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kni-infra, name: deployer, uid: c42fd2a9-7c3b-40d3-9ceb-8a7084420815]" 2025-10-13T00:23:43.917846367+00:00 stderr F I1013 00:23:43.917772 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" virtual=false 2025-10-13T00:23:43.921282022+00:00 stderr F I1013 00:23:43.921238 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager, name: openshift-controller-manager-sa, uid: cbc80d53-2db1-4380-8a7d-3f4f8600b677]" 2025-10-13T00:23:43.921282022+00:00 stderr F I1013 00:23:43.921263 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-7c88c4c865, uid: 769e5acf-9de5-406b-a06b-bd222bbd75ef]" virtual=false 2025-10-13T00:23:43.924871182+00:00 stderr F I1013 00:23:43.924825 1 garbagecollector.go:596] "item doesn't have an owner, continue on 
next item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: builder, uid: 51084ba0-e550-4240-b14a-c93eca499889]" 2025-10-13T00:23:43.924891693+00:00 stderr F I1013 00:23:43.924871 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-version, name: cluster-version-operator-6d5d9649f6, uid: c82e10c3-bd71-43ae-bdff-a5dce77ff096]" virtual=false 2025-10-13T00:23:43.932840044+00:00 stderr F I1013 00:23:43.932770 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.932840044+00:00 stderr F I1013 00:23:43.932806 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator-686c6c748c, uid: 2625a7a1-fddd-409b-a1b0-0dcea9e8febb]" virtual=false 2025-10-13T00:23:43.934165831+00:00 stderr F I1013 00:23:43.934128 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: builder, uid: 4fc5ea06-52ff-4e48-bade-47a549889ab6]" 2025-10-13T00:23:43.934182072+00:00 stderr F I1013 00:23:43.934162 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: machine-api-operator-788b7c6b6c, uid: 405b6017-8f85-4458-96f6-8265aa8e42d7]" virtual=false 2025-10-13T00:23:43.941641569+00:00 stderr F I1013 00:23:43.941568 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: certified-operators, uid: 3d716352-cf0b-4bde-baa2-6d6e7ea8300a]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"certified-operators","uid":"16d5fe82-aef0-4700-8b13-e78e71d2a10d","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:43.941641569+00:00 stderr F I1013 00:23:43.941607 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" virtual=false 2025-10-13T00:23:43.944044896+00:00 stderr F I1013 00:23:43.944003 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: deployer-controller, uid: a8ac2e98-3289-4616-a62d-093d5419a990]" 2025-10-13T00:23:43.944044896+00:00 stderr F I1013 00:23:43.944028 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-6f6cb54958, uid: 05830690-2017-4811-86f7-90459442ac07]" virtual=false 2025-10-13T00:23:43.947316688+00:00 stderr F I1013 00:23:43.947269 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" 2025-10-13T00:23:43.947316688+00:00 stderr F I1013 00:23:43.947294 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-kube-scheduler, name: openshift-kube-scheduler-crc, uid: 
bc41b00e-72b1-4d82-a286-aa30fbe4095a]" virtual=false 2025-10-13T00:23:43.951220096+00:00 stderr F I1013 00:23:43.951189 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: namespace-security-allocation-controller, uid: 66c10b5f-162f-4487-92b9-070818701ee3]" 2025-10-13T00:23:43.951220096+00:00 stderr F I1013 00:23:43.951214 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-config-operator, name: openshift-config-operator-77658b5b66, uid: 9cef21ef-8176-4c79-a1b0-a266ead9d4f6]" virtual=false 2025-10-13T00:23:43.955189417+00:00 stderr F I1013 00:23:43.955141 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[batch/v1/CronJob, namespace: openshift-image-registry, name: image-pruner, uid: 4864366f-c43f-4d3a-b201-d3cbf0d7b4e5]" 2025-10-13T00:23:43.955189417+00:00 stderr F I1013 00:23:43.955164 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" virtual=false 2025-10-13T00:23:43.985543142+00:00 stderr F I1013 00:23:43.984923 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" 2025-10-13T00:23:43.985543142+00:00 stderr F I1013 00:23:43.984965 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: cluster-image-registry-operator-7769bd8d7d, uid: 9711a43d-4a1b-4a69-9c39-f52785ebbf0d]" virtual=false 2025-10-13T00:23:44.002411482+00:00 stderr F I1013 00:23:44.002343 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.002411482+00:00 stderr F I1013 00:23:44.002384 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" virtual=false 2025-10-13T00:23:44.008252215+00:00 stderr F I1013 00:23:44.007597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.008252215+00:00 stderr F I1013 00:23:44.007653 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-ingress-operator, name: ingress-operator-7d46d5bb6d, uid: 8836d337-e669-4b70-84af-a8e408527030]" virtual=false 2025-10-13T00:23:44.027971524+00:00 stderr F I1013 00:23:44.027909 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-marketplace, name: marketplace-operator-8b455464d, uid: facbd59b-fbe9-42f6-9512-9219bc0f7c59]" 
owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"marketplace-operator","uid":"0b54327c-0c40-46f3-a17b-0f07f095ccb7","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.027971524+00:00 stderr F I1013 00:23:44.027953 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" virtual=false 2025-10-13T00:23:44.030031642+00:00 stderr F I1013 00:23:44.030004 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.030054212+00:00 stderr F I1013 00:23:44.030029 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver, name: apiserver-7fc54b8dd7, uid: 5f61a04f-ba2f-40ae-a540-5a3f540b9dc2]" virtual=false 2025-10-13T00:23:44.030713811+00:00 stderr F I1013 00:23:44.030685 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-operator-5dbbc74dc9, uid: f1923bf4-180b-4023-9749-8f4ab7def832]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"console-operator","uid":"e977212b-5bb5-4096-9f11-353076a2ebeb","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.030713811+00:00 stderr F I1013 00:23:44.030709 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-machine-approver, name: machine-approver-7874c8775, uid: c132add1-7a11-405a-a206-ad555920dbc3]" virtual=false 2025-10-13T00:23:44.038712833+00:00 stderr F I1013 00:23:44.038676 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-5d9b995f6b, uid: 10507f34-78cd-4b53-87ae-2511ad70bc72]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-kube-scheduler-operator","uid":"f13e36c5-b283-4235-867d-e2ae26d7fa2a","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.038712833+00:00 stderr F I1013 00:23:44.038706 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-etcd-operator, name: etcd-operator-768d5b5d86, uid: 5f81e406-3d11-4aea-acf0-1e40fdd29de2]" virtual=false 2025-10-13T00:23:44.041002487+00:00 stderr F I1013 00:23:44.040948 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.041002487+00:00 stderr F I1013 00:23:44.040993 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" virtual=false 2025-10-13T00:23:44.042734905+00:00 stderr F I1013 00:23:44.042681 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca-operator, name: service-ca-operator-546b4f8984, uid: 9edef8f0-3959-4861-8d92-af5228053363]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"service-ca-operator","uid":"4e10c137-983b-49c4-b977-7d19390e427b","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.042734905+00:00 stderr F I1013 00:23:44.042709 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" virtual=false 2025-10-13T00:23:44.044260458+00:00 stderr F I1013 00:23:44.044212 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console-operator, name: console-conversion-webhook-595f9969b, uid: 74c18b0a-8cb0-4535-bfe8-26315d38e6da]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"console-conversion-webhook","uid":"4dae11c2-6acd-446b-b52c-67345d4c21ea","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.044260458+00:00 stderr F I1013 00:23:44.044238 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane-77c846df58, uid: 435781ac-1a0b-430f-bb65-abd6c1ca968f]" virtual=false 2025-10-13T00:23:44.051723266+00:00 stderr F I1013 00:23:44.051670 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: control-plane-machine-set-operator-649bd778b4, uid: 8b73a9c4-0835-468c-8d91-7b4c54aef119]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"control-plane-machine-set-operator","uid":"e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.051723266+00:00 stderr F I1013 00:23:44.051707 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" virtual=false 2025-10-13T00:23:44.054149083+00:00 stderr F I1013 00:23:44.054104 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-7c88c4c865, uid: 769e5acf-9de5-406b-a06b-bd222bbd75ef]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-apiserver-operator","uid":"bf9e0c20-07cb-4537-b7f9-efae9f964f5e","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.054149083+00:00 stderr F I1013 00:23:44.054126 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-network-diagnostics, name: network-check-source-5c5478f8c, uid: 1983421b-516e-40bf-be2f-d5ca39e5e8f0]" virtual=false 2025-10-13T00:23:44.057851796+00:00 stderr F I1013 00:23:44.057798 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-version, name: cluster-version-operator-6d5d9649f6, uid: c82e10c3-bd71-43ae-bdff-a5dce77ff096]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cluster-version-operator","uid":"b5151a8e-7df7-4f3b-9ada-e1cfd0badda9","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.057851796+00:00 stderr F I1013 00:23:44.057823 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: 
openshift-authentication, name: oauth-openshift-74fc7c67cc, uid: 19328877-ae29-4751-a48b-fd7ef5663e0e]" virtual=false 2025-10-13T00:23:44.062269750+00:00 stderr F I1013 00:23:44.061717 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator-686c6c748c, uid: 2625a7a1-fddd-409b-a1b0-0dcea9e8febb]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"kube-storage-version-migrator-operator","uid":"59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.062269750+00:00 stderr F I1013 00:23:44.061759 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator-bc474d5d6, uid: 47558afd-87c6-4c87-9f4f-0cedb5bf1348]" virtual=false 2025-10-13T00:23:44.069637325+00:00 stderr F I1013 00:23:44.069581 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" 2025-10-13T00:23:44.069637325+00:00 stderr F I1013 00:23:44.069617 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-ingress-canary, name: ingress-canary-2vhcn, uid: 0b5d722a-1123-4935-9740-52a08d018bc9]" virtual=false 2025-10-13T00:23:44.070897290+00:00 stderr F I1013 00:23:44.070866 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-api, name: machine-api-operator-788b7c6b6c, uid: 405b6017-8f85-4458-96f6-8265aa8e42d7]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-api-operator","uid":"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.070917030+00:00 stderr F I1013 00:23:44.070908 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-kube-apiserver, name: kube-apiserver-crc, uid: 79aee24b-2740-4e0e-a8e1-9b052bf1e855]" virtual=false 2025-10-13T00:23:44.078052669+00:00 stderr F I1013 00:23:44.077975 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.078052669+00:00 stderr F I1013 00:23:44.078010 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: olm-operator-6d8474f75f, uid: d58f2cd8-620a-4650-8e4f-65f3b094b59b]" virtual=false 2025-10-13T00:23:44.082633297+00:00 stderr F I1013 00:23:44.082057 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-6f6cb54958, uid: 05830690-2017-4811-86f7-90459442ac07]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"kube-controller-manager-operator","uid":"8a9ccf98-e60f-4580-94d2-1560cf66cd74","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.082633297+00:00 stderr F I1013 00:23:44.082107 1 garbagecollector.go:549] "Processing item" 
item="[apps/v1/ReplicaSet, namespace: openshift-service-ca, name: service-ca-666f99b6f, uid: 8e1c6ac1-0ab1-48bc-9de4-83068455da2e]" virtual=false 2025-10-13T00:23:44.086635178+00:00 stderr F I1013 00:23:44.086593 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-config-operator, name: openshift-config-operator-77658b5b66, uid: 9cef21ef-8176-4c79-a1b0-a266ead9d4f6]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-config-operator","uid":"46cebc51-d29e-4081-9edb-d9f437810b86","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.086659559+00:00 stderr F I1013 00:23:44.086646 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-kube-controller-manager, name: kube-controller-manager-crc, uid: 663515de-9ac9-4c55-8755-a591a2de3714]" virtual=false 2025-10-13T00:23:44.087055750+00:00 stderr F I1013 00:23:44.086204 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Pod, namespace: openshift-kube-scheduler, name: openshift-kube-scheduler-crc, uid: bc41b00e-72b1-4d82-a286-aa30fbe4095a]" owner=[{"apiVersion":"v1","kind":"Node","name":"crc","uid":"c83c88d3-f34d-4083-a59d-1c50f90f89b8","controller":true}] 2025-10-13T00:23:44.087055750+00:00 stderr F I1013 00:23:44.087041 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: packageserver-8464bcc55b, uid: 6ee8a05c-e6f7-4c44-96b6-a21e4fd03368]" virtual=false 2025-10-13T00:23:44.088967303+00:00 stderr F I1013 00:23:44.088292 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}] 2025-10-13T00:23:44.088967303+00:00 stderr F I1013 00:23:44.088337 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-857456c46, uid: d5cc066b-fa23-4d13-982e-a52282290255]" virtual=false 2025-10-13T00:23:44.115866933+00:00 stderr F I1013 00:23:44.115821 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" 2025-10-13T00:23:44.115943985+00:00 stderr F I1013 00:23:44.115932 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: downloads-65476884b9, uid: 8abeeec7-ca20-4671-b076-82bbe28392eb]" virtual=false 2025-10-13T00:23:44.117176649+00:00 stderr F I1013 00:23:44.117146 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: cluster-image-registry-operator-7769bd8d7d, uid: 9711a43d-4a1b-4a69-9c39-f52785ebbf0d]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cluster-image-registry-operator","uid":"485aecbc-d986-4290-a12b-2be6eccbc76c","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.117195870+00:00 stderr F I1013 00:23:44.117180 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: 
package-server-manager-84d578d794, uid: 3c47c3d4-976b-483f-964e-8c46f11ecf15]" virtual=false 2025-10-13T00:23:44.134960954+00:00 stderr F I1013 00:23:44.134914 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-ingress-operator, name: ingress-operator-7d46d5bb6d, uid: 8836d337-e669-4b70-84af-a8e408527030]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"ingress-operator","uid":"a575f0c7-77ce-41f4-a832-11dcd8374f9e","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.134995175+00:00 stderr F I1013 00:23:44.134958 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager, name: controller-manager-778975cc4f, uid: b3903b7b-b29a-4c4c-afdf-e6733a611156]" virtual=false 2025-10-13T00:23:44.158634904+00:00 stderr F I1013 00:23:44.158581 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.158634904+00:00 stderr F I1013 00:23:44.158619 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-78d54458c4, uid: 0653d8bb-773a-4180-8163-2e0e8b1868bc]" virtual=false 2025-10-13T00:23:44.160975009+00:00 stderr F I1013 00:23:44.160954 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-apiserver, name: apiserver-7fc54b8dd7, uid: 5f61a04f-ba2f-40ae-a540-5a3f540b9dc2]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"apiserver","uid":"a424780c-5ff8-49aa-b616-57c2d7958f81","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.161040571+00:00 stderr F I1013 00:23:44.161029 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-oauth-apiserver, name: apiserver-69c565c9b6, uid: 21e763e9-1eef-4800-be82-6a4db146f2d1]" virtual=false 2025-10-13T00:23:44.166454032+00:00 stderr F I1013 00:23:44.165465 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-machine-approver, name: machine-approver-7874c8775, uid: c132add1-7a11-405a-a206-ad555920dbc3]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-approver","uid":"7ee99584-ec5a-490c-bc55-11ed3e60244a","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.166454032+00:00 stderr F I1013 00:23:44.165500 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-dns-operator, name: dns-operator-75f687757b, uid: 89316280-b23a-4936-afc9-ac76646fd5b0]" virtual=false 2025-10-13T00:23:44.170429772+00:00 stderr F I1013 00:23:44.170357 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-etcd-operator, name: etcd-operator-768d5b5d86, uid: 5f81e406-3d11-4aea-acf0-1e40fdd29de2]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"etcd-operator","uid":"fb798e33-6d4c-4082-a60c-594a9db7124a","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.170449893+00:00 
stderr F I1013 00:23:44.170426 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-route-controller-manager, name: route-controller-manager-776b8b7477, uid: 6bdcaed9-ad1b-4cd3-b947-bcc889a39f90]" virtual=false 2025-10-13T00:23:44.174191057+00:00 stderr F I1013 00:23:44.174168 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.174249599+00:00 stderr F I1013 00:23:44.174238 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator, name: migrator-f7c6d88df, uid: dfdeca8a-71c7-4f55-ad85-86a3d4f6ef9d]" virtual=false 2025-10-13T00:23:44.177582422+00:00 stderr F I1013 00:23:44.177532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane-77c846df58, uid: 435781ac-1a0b-430f-bb65-abd6c1ca968f]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"ovnkube-control-plane","uid":"346798bd-68de-4941-ab69-8a5a56dd55f7","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.177582422+00:00 stderr F I1013 00:23:44.177564 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-controller-6df6df6b6b, uid: b8a1c4b8-fe9f-4171-a66c-6b7abb807274]" virtual=false 2025-10-13T00:23:44.177966302+00:00 stderr F I1013 00:23:44.177946 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.178022374+00:00 stderr F I1013 00:23:44.178011 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-multus, name: multus-admission-controller-6c7c885997, uid: cc1283b1-c4b8-45e0-98f4-c53dae453433]" virtual=false 2025-10-13T00:23:44.188629689+00:00 stderr F I1013 00:23:44.188575 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-network-diagnostics, name: network-check-source-5c5478f8c, uid: 1983421b-516e-40bf-be2f-d5ca39e5e8f0]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"network-check-source","uid":"5694fe8b-b5a5-4c14-bc2c-e30718ec8465","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.188660390+00:00 stderr F I1013 00:23:44.188626 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-network-operator, name: network-operator-767c585db5, uid: 032f7ffa-36f3-4edf-882a-6a236dc81772]" virtual=false 2025-10-13T00:23:44.191078308+00:00 stderr F I1013 00:23:44.191042 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-authentication, name: oauth-openshift-74fc7c67cc, uid: 
19328877-ae29-4751-a48b-fd7ef5663e0e]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"oauth-openshift","uid":"5c77e036-b030-4587-8bd4-079bc5e84c22","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.191095918+00:00 stderr F I1013 00:23:44.191078 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: image-registry-75b7bb6564, uid: a0aac66b-10f1-4481-9eca-bcf4119d89c5]" virtual=false 2025-10-13T00:23:44.195070259+00:00 stderr F I1013 00:23:44.194990 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator-bc474d5d6, uid: 47558afd-87c6-4c87-9f4f-0cedb5bf1348]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"cluster-samples-operator","uid":"d75ff515-88a9-4644-8711-c99a391dcc77","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.195070259+00:00 stderr F I1013 00:23:44.195020 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-ingress, name: router-default-5c9bf7bc58, uid: 0efb51ee-9643-4b0f-9d44-4a886579a1c5]" virtual=false 2025-10-13T00:23:44.197693332+00:00 stderr F I1013 00:23:44.197665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Pod, namespace: openshift-ingress-canary, name: ingress-canary-2vhcn, uid: 0b5d722a-1123-4935-9740-52a08d018bc9]" owner=[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.197721763+00:00 stderr F I1013 00:23:44.197693 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-operator-76788bff89, uid: 77c5a195-a240-44aa-beb3-a4a81134728a]" virtual=false 2025-10-13T00:23:44.205754966+00:00 stderr F I1013 00:23:44.205535 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Pod, namespace: openshift-kube-apiserver, name: kube-apiserver-crc, uid: 79aee24b-2740-4e0e-a8e1-9b052bf1e855]" owner=[{"apiVersion":"v1","kind":"Node","name":"crc","uid":"c83c88d3-f34d-4083-a59d-1c50f90f89b8","controller":true}] 2025-10-13T00:23:44.205754966+00:00 stderr F I1013 00:23:44.205567 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: console-644bb77b49, uid: cb02675b-d6fb-40e7-b7d7-0c82671355c4]" virtual=false 2025-10-13T00:23:44.208997607+00:00 stderr F I1013 00:23:44.208865 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: olm-operator-6d8474f75f, uid: d58f2cd8-620a-4650-8e4f-65f3b094b59b]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"olm-operator","uid":"e9ed1986-ebb6-4ce1-af63-63b3f002df9e","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.208997607+00:00 stderr F I1013 00:23:44.208903 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" virtual=false 2025-10-13T00:23:44.211720523+00:00 stderr F I1013 00:23:44.211699 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-service-ca, name: 
service-ca-666f99b6f, uid: 8e1c6ac1-0ab1-48bc-9de4-83068455da2e]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"service-ca","uid":"054eb633-29d2-4eec-90a7-1a83a0e386c1","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.211740283+00:00 stderr F I1013 00:23:44.211719 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" virtual=false 2025-10-13T00:23:44.214735947+00:00 stderr F I1013 00:23:44.214696 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Pod, namespace: openshift-kube-controller-manager, name: kube-controller-manager-crc, uid: 663515de-9ac9-4c55-8755-a591a2de3714]" owner=[{"apiVersion":"v1","kind":"Node","name":"crc","uid":"c83c88d3-f34d-4083-a59d-1c50f90f89b8","controller":true}] 2025-10-13T00:23:44.214751027+00:00 stderr F I1013 00:23:44.214738 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" virtual=false 2025-10-13T00:23:44.219083048+00:00 stderr F I1013 00:23:44.218880 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: packageserver-8464bcc55b, uid: 6ee8a05c-e6f7-4c44-96b6-a21e4fd03368]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"packageserver","uid":"c7e0a213-d3b0-4220-bc12-3e9beb007a7b","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.219083048+00:00 stderr F I1013 00:23:44.218908 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" virtual=false 2025-10-13T00:23:44.220322322+00:00 stderr F I1013 00:23:44.220301 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-857456c46, uid: d5cc066b-fa23-4d13-982e-a52282290255]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"catalog-operator","uid":"4a08358e-6f9b-492b-9df8-8e54f40e2fb4","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.220364383+00:00 stderr F I1013 00:23:44.220341 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-authentication-operator, name: authentication-operator-7cc7ff75d5, uid: 988a2264-fbab-4296-93d4-de8bc5a38df5]" virtual=false 2025-10-13T00:23:44.247729826+00:00 stderr F I1013 00:23:44.247696 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: downloads-65476884b9, uid: 8abeeec7-ca20-4671-b076-82bbe28392eb]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"downloads","uid":"03b2baf0-d10c-4001-94a6-800af015de08","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.247748736+00:00 stderr F I1013 00:23:44.247735 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-7978d7d7f6, uid: 89e69b0e-6261-49ff-80e6-98dbd196bd87]" virtual=false 2025-10-13T00:23:44.251499021+00:00 stderr F I1013 00:23:44.251475 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-84d578d794, uid: 3c47c3d4-976b-483f-964e-8c46f11ecf15]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"package-server-manager","uid":"3368f5bb-29da-4770-b432-4e5d1a8491a9","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.251515541+00:00 stderr F I1013 00:23:44.251496 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-etcd, name: etcd-crc, uid: c5947f21-291a-48d6-85be-6bc67d8adcb5]" virtual=false 2025-10-13T00:23:44.267590929+00:00 stderr F I1013 00:23:44.267549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager, name: controller-manager-778975cc4f, uid: b3903b7b-b29a-4c4c-afdf-e6733a611156]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"controller-manager","uid":"b42ae171-8338-4274-922f-79cfacb9cfe9","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.267610039+00:00 stderr F I1013 00:23:44.267591 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" virtual=false 2025-10-13T00:23:44.285294732+00:00 stderr F I1013 00:23:44.285238 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" 2025-10-13T00:23:44.285294732+00:00 stderr F I1013 00:23:44.285283 1 garbagecollector.go:549] "Processing item" item="[storage.k8s.io/v1/CSINode, namespace: , name: crc, uid: 196d6c92-b529-4faa-bf5b-f558b5823939]" virtual=false 2025-10-13T00:23:44.291191826+00:00 stderr F I1013 00:23:44.291162 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-78d54458c4, uid: 0653d8bb-773a-4180-8163-2e0e8b1868bc]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"kube-apiserver-operator","uid":"1685682f-c45b-43b7-8431-b19c7e8a7d30","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.291218337+00:00 stderr F I1013 00:23:44.291198 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tests, uid: 7778095f-760e-4c45-992d-fa5bfe5ceefd]" virtual=false 2025-10-13T00:23:44.297554524+00:00 stderr F I1013 00:23:44.297448 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-oauth-apiserver, name: apiserver-69c565c9b6, uid: 21e763e9-1eef-4800-be82-6a4db146f2d1]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"apiserver","uid":"8ac71ab9-8c3d-4c89-9962-205eed0149d5","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.297554524+00:00 stderr F I1013 00:23:44.297485 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli, uid: bffe1e0b-75b9-453c-a1d8-5723606ab263]" virtual=false 2025-10-13T00:23:44.298748387+00:00 stderr F I1013 00:23:44.298713 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-dns-operator, name: dns-operator-75f687757b, uid: 89316280-b23a-4936-afc9-ac76646fd5b0]" 
owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"dns-operator","uid":"d7110071-d620-4ed4-b7e1-05c1c458b7f0","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.298771447+00:00 stderr F I1013 00:23:44.298739 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: builder, uid: 5caffbbc-0d80-4043-a509-525d2e98ef7f]" virtual=false 2025-10-13T00:23:44.301225236+00:00 stderr F I1013 00:23:44.301201 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-route-controller-manager, name: route-controller-manager-776b8b7477, uid: 6bdcaed9-ad1b-4cd3-b947-bcc889a39f90]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"route-controller-manager","uid":"db9b5a0e-2471-4a94-bbe5-01c34efec882","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.301253867+00:00 stderr F I1013 00:23:44.301228 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-version, name: default, uid: f66fc0bb-2a49-4c36-aaf9-e6e132096b02]" virtual=false 2025-10-13T00:23:44.305210937+00:00 stderr F I1013 00:23:44.305187 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-kube-storage-version-migrator, name: migrator-f7c6d88df, uid: dfdeca8a-71c7-4f55-ad85-86a3d4f6ef9d]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"migrator","uid":"88da59ff-e1b0-4998-b48b-d9b2e9bee2ae","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.305227527+00:00 stderr F I1013 00:23:44.305214 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns, name: dns, uid: 2113db5b-7943-4341-a992-869814272644]" virtual=false 2025-10-13T00:23:44.307687496+00:00 stderr F I1013 00:23:44.307663 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-controller-6df6df6b6b, uid: b8a1c4b8-fe9f-4171-a66c-6b7abb807274]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-config-controller","uid":"0b1d22b6-78ae-4d00-94ad-381755e08383","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.307704826+00:00 stderr F I1013 00:23:44.307688 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" virtual=false 2025-10-13T00:23:44.311062690+00:00 stderr F I1013 00:23:44.311036 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-multus, name: multus-admission-controller-6c7c885997, uid: cc1283b1-c4b8-45e0-98f4-c53dae453433]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"multus-admission-controller","uid":"3d4e04fb-152e-4b67-b110-7b7edfa1a90a","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.311080010+00:00 stderr F I1013 00:23:44.311061 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: deployer, uid: 9d4365c2-bdb0-4b49-ac1b-ee3d5dda04f2]" virtual=false 2025-10-13T00:23:44.321530291+00:00 stderr F I1013 00:23:44.321494 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-network-operator, name: 
network-operator-767c585db5, uid: 032f7ffa-36f3-4edf-882a-6a236dc81772]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"network-operator","uid":"d09aa085-6368-4540-a8c1-4e4c3e9e7344","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.321548172+00:00 stderr F I1013 00:23:44.321530 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli-artifacts, uid: 942e0ba6-9f7e-4dd1-9976-a373d64fbf65]" virtual=false 2025-10-13T00:23:44.324344380+00:00 stderr F I1013 00:23:44.324309 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-image-registry, name: image-registry-75b7bb6564, uid: a0aac66b-10f1-4481-9eca-bcf4119d89c5]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"image-registry","uid":"ff5a6fbd-d479-457d-86ba-428162a82d5c","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.324364470+00:00 stderr F I1013 00:23:44.324354 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: network-tools, uid: c03fc985-15de-4664-a9fe-10bd1f45afd8]" virtual=false 2025-10-13T00:23:44.328546377+00:00 stderr F I1013 00:23:44.327860 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-ingress, name: router-default-5c9bf7bc58, uid: 0efb51ee-9643-4b0f-9d44-4a886579a1c5]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"router-default","uid":"9ae4d312-7fc4-4344-ab7a-669da95f56bf","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.328546377+00:00 stderr F I1013 00:23:44.327883 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: community-operators, uid: d55598c2-6685-4f1a-889b-a194f4aed791]" virtual=false 2025-10-13T00:23:44.331385806+00:00 stderr F I1013 00:23:44.331355 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-machine-config-operator, name: machine-config-operator-76788bff89, uid: 77c5a195-a240-44aa-beb3-a4a81134728a]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"machine-config-operator","uid":"8cb0f5f7-4dca-477c-8627-e6db485e4cb2","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.331385806+00:00 stderr F I1013 00:23:44.331377 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: must-gather, uid: c474c767-c860-4e73-b1b6-ce1a6f4cf507]" virtual=false 2025-10-13T00:23:44.337933958+00:00 stderr F I1013 00:23:44.337890 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-console, name: console-644bb77b49, uid: cb02675b-d6fb-40e7-b7d7-0c82671355c4]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"console","uid":"acc4559a-2586-4482-947a-aae611d8d9f6","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.337952619+00:00 stderr F I1013 00:23:44.337937 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tools, uid: dfed6660-519c-4896-a7a0-c3ac7cb3f64b]" virtual=false 2025-10-13T00:23:44.341707743+00:00 stderr F I1013 00:23:44.341667 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}] 2025-10-13T00:23:44.341707743+00:00 stderr F I1013 00:23:44.341696 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cloud-network-config-controller, name: deployer, uid: 8f9706ce-dc3d-497e-9910-d1464e35be90]" virtual=false 2025-10-13T00:23:44.345392736+00:00 stderr F I1013 00:23:44.345348 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.345409077+00:00 stderr F I1013 00:23:44.345388 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: oauth-proxy, uid: 9acec097-9868-44f6-a179-49640dd8e719]" virtual=false 2025-10-13T00:23:44.348368759+00:00 stderr F I1013 00:23:44.348322 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.348368759+00:00 stderr F I1013 00:23:44.348360 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" virtual=false 2025-10-13T00:23:44.354836659+00:00 stderr F I1013 00:23:44.354797 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-authentication-operator, name: authentication-operator-7cc7ff75d5, uid: 988a2264-fbab-4296-93d4-de8bc5a38df5]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"authentication-operator","uid":"5e81203d-c202-48ae-b652-35b68d7e5586","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.354836659+00:00 stderr F I1013 00:23:44.354826 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: builder, uid: d025b17e-94e6-4117-a58c-aa794c8f92d6]" virtual=false 2025-10-13T00:23:44.364538309+00:00 stderr F I1013 00:23:44.364452 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: builder, uid: 5caffbbc-0d80-4043-a509-525d2e98ef7f]" 2025-10-13T00:23:44.364538309+00:00 stderr F I1013 00:23:44.364499 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: builder, uid: d6c2298e-46fb-48ee-b60a-d2e29d10a611]" virtual=false 2025-10-13T00:23:44.368460409+00:00 stderr F I1013 00:23:44.368413 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-version, name: default, uid: f66fc0bb-2a49-4c36-aaf9-e6e132096b02]" 2025-10-13T00:23:44.368460409+00:00 stderr F I1013 
00:23:44.368435 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer, uid: 88d58746-7912-47b5-a52e-f0badf538ff2]" virtual=false 2025-10-13T00:23:44.370833855+00:00 stderr F I1013 00:23:44.370773 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns, name: dns, uid: 2113db5b-7943-4341-a992-869814272644]" 2025-10-13T00:23:44.370833855+00:00 stderr F I1013 00:23:44.370804 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: deployer, uid: efac8c96-daad-473a-a712-f4f4f97472b1]" virtual=false 2025-10-13T00:23:44.377422678+00:00 stderr F I1013 00:23:44.377363 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: hostpath-provisioner, name: deployer, uid: 9d4365c2-bdb0-4b49-ac1b-ee3d5dda04f2]" 2025-10-13T00:23:44.377422678+00:00 stderr F I1013 00:23:44.377387 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" virtual=false 2025-10-13T00:23:44.381238015+00:00 stderr F I1013 00:23:44.381208 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/ReplicaSet, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-7978d7d7f6, uid: 89e69b0e-6261-49ff-80e6-98dbd196bd87]" owner=[{"apiVersion":"apps/v1","kind":"Deployment","name":"openshift-controller-manager-operator","uid":"945d64e1-c873-4e9d-b5ff-47904d2b347f","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.381238015+00:00 stderr F I1013 00:23:44.381233 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: driver-toolkit, uid: bf2aafeb-b943-4673-8978-ff549746c6bb]" virtual=false 2025-10-13T00:23:44.385601516+00:00 stderr F I1013 00:23:44.385555 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Pod, namespace: openshift-etcd, name: etcd-crc, uid: c5947f21-291a-48d6-85be-6bc67d8adcb5]" owner=[{"apiVersion":"v1","kind":"Node","name":"crc","uid":"c83c88d3-f34d-4083-a59d-1c50f90f89b8","controller":true}] 2025-10-13T00:23:44.385648397+00:00 stderr F I1013 00:23:44.385601 1 garbagecollector.go:549] "Processing item" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer-artifacts, uid: 897a4e58-3c9c-4859-a14e-e70529b83bbe]" virtual=false 2025-10-13T00:23:44.401052316+00:00 stderr F I1013 00:23:44.400981 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-10-13T00:23:44.401052316+00:00 stderr F I1013 00:23:44.401027 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openstack-operators, name: default, uid: df6f1ae6-3288-4f3a-864f-75c0bf8bde52]" virtual=false 2025-10-13T00:23:44.407963229+00:00 stderr F I1013 00:23:44.407893 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cloud-network-config-controller, name: deployer, uid: 
8f9706ce-dc3d-497e-9910-d1464e35be90]" 2025-10-13T00:23:44.407963229+00:00 stderr F I1013 00:23:44.407922 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openstack, name: deployer, uid: 5f4609ab-6e00-4d76-8f90-62ee3ce6ab4b]" virtual=false 2025-10-13T00:23:44.418054030+00:00 stderr F I1013 00:23:44.417968 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[storage.k8s.io/v1/CSINode, namespace: , name: crc, uid: 196d6c92-b529-4faa-bf5b-f558b5823939]" owner=[{"apiVersion":"v1","kind":"Node","name":"crc","uid":"c83c88d3-f34d-4083-a59d-1c50f90f89b8"}] 2025-10-13T00:23:44.418054030+00:00 stderr F I1013 00:23:44.418031 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" virtual=false 2025-10-13T00:23:44.420591161+00:00 stderr F I1013 00:23:44.420528 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: builder, uid: d025b17e-94e6-4117-a58c-aa794c8f92d6]" 2025-10-13T00:23:44.420591161+00:00 stderr F I1013 00:23:44.420573 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: default-rolebindings-controller, uid: 9b7c5166-ab1d-46ae-9a22-1699c9eb4b12]" virtual=false 2025-10-13T00:23:44.427552005+00:00 stderr F I1013 00:23:44.427494 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tests, uid: 7778095f-760e-4c45-992d-fa5bfe5ceefd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.427552005+00:00 stderr F I1013 00:23:44.427522 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-user-workload-monitoring, name: deployer, uid: 82356b80-bf87-4244-963a-88995d56cb24]" virtual=false 2025-10-13T00:23:44.428426499+00:00 stderr F I1013 00:23:44.428379 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: builder, uid: d6c2298e-46fb-48ee-b60a-d2e29d10a611]" 2025-10-13T00:23:44.428426499+00:00 stderr F I1013 00:23:44.428411 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-managed, name: builder, uid: be3e0515-5bd5-4454-bc98-a0afc45b0dac]" virtual=false 2025-10-13T00:23:44.434186089+00:00 stderr F I1013 00:23:44.433498 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli, uid: bffe1e0b-75b9-453c-a1d8-5723606ab263]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.434186089+00:00 stderr F I1013 00:23:44.433523 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-node, name: deployer, uid: 24b4447a-ee05-4f49-a398-98668937d09e]" virtual=false 2025-10-13T00:23:44.438047927+00:00 stderr F I1013 00:23:44.438018 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-multus, name: deployer, uid: 
efac8c96-daad-473a-a712-f4f4f97472b1]" 2025-10-13T00:23:44.438047927+00:00 stderr F I1013 00:23:44.438037 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: deployer, uid: ae15d72c-4c34-4428-9b01-5cc97fd8406e]" virtual=false 2025-10-13T00:23:44.443078537+00:00 stderr F I1013 00:23:44.443049 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:44.443078537+00:00 stderr F I1013 00:23:44.443067 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: builder, uid: b6b8bd78-379d-4a40-b5df-4f1be0ab27bb]" virtual=false 2025-10-13T00:23:44.457429357+00:00 stderr F I1013 00:23:44.457367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: cli-artifacts, uid: 942e0ba6-9f7e-4dd1-9976-a373d64fbf65]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.457429357+00:00 stderr F I1013 00:23:44.457411 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift, name: default, uid: 283f9a2e-49be-43d9-bb7c-1702f37cd585]" virtual=false 2025-10-13T00:23:44.461258754+00:00 stderr F I1013 00:23:44.461207 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: community-operators, uid: d55598c2-6685-4f1a-889b-a194f4aed791]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","name":"community-operators","uid":"e583c58d-4569-4cab-9192-62c813516208","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.461258754+00:00 stderr F I1013 00:23:44.461240 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-managed, name: deployer, uid: 388b2309-a465-4dec-9ca0-b0009937475c]" virtual=false 2025-10-13T00:23:44.475481020+00:00 stderr F I1013 00:23:44.471571 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: network-tools, uid: c03fc985-15de-4664-a9fe-10bd1f45afd8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.475481020+00:00 stderr F I1013 00:23:44.471595 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: default, uid: a5711013-e20d-4eb3-ae93-30453f0e3621]" virtual=false 2025-10-13T00:23:44.486222889+00:00 stderr F I1013 00:23:44.483578 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openstack, name: deployer, uid: 5f4609ab-6e00-4d76-8f90-62ee3ce6ab4b]" 2025-10-13T00:23:44.486222889+00:00 stderr F I1013 00:23:44.483609 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openstack-operators, name: default, uid: 
df6f1ae6-3288-4f3a-864f-75c0bf8bde52]" 2025-10-13T00:23:44.486222889+00:00 stderr F I1013 00:23:44.483621 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: builder, uid: 86fa1aad-514f-44b6-881d-d635653a0739]" virtual=false 2025-10-13T00:23:44.486222889+00:00 stderr F I1013 00:23:44.483625 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: deployer, uid: 512f21bb-4373-4442-b7a4-d2867db36d68]" virtual=false 2025-10-13T00:23:44.486222889+00:00 stderr F I1013 00:23:44.483580 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: must-gather, uid: c474c767-c860-4e73-b1b6-ce1a6f4cf507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.486222889+00:00 stderr F I1013 00:23:44.483699 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-user-workload-monitoring, name: builder, uid: 547fd83b-97e1-4e39-9465-e767267ae539]" virtual=false 2025-10-13T00:23:44.491625189+00:00 stderr F I1013 00:23:44.491563 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: tools, uid: dfed6660-519c-4896-a7a0-c3ac7cb3f64b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.491625189+00:00 stderr F I1013 00:23:44.491597 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" virtual=false 2025-10-13T00:23:44.499420837+00:00 stderr F I1013 00:23:44.495496 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: default-rolebindings-controller, uid: 9b7c5166-ab1d-46ae-9a22-1699c9eb4b12]" 2025-10-13T00:23:44.499420837+00:00 stderr F I1013 00:23:44.495519 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: registry, uid: 99e50b49-684f-41ea-874f-26873a6c6ef5]" virtual=false 2025-10-13T00:23:44.499420837+00:00 stderr F I1013 00:23:44.497625 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config-managed, name: builder, uid: be3e0515-5bd5-4454-bc98-a0afc45b0dac]" 2025-10-13T00:23:44.499420837+00:00 stderr F I1013 00:23:44.497645 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: daemon-set-controller, uid: f914f44b-c5b4-43e0-8fbd-34eb3aae706c]" virtual=false 2025-10-13T00:23:44.499420837+00:00 stderr F I1013 00:23:44.497720 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-user-workload-monitoring, name: deployer, uid: 82356b80-bf87-4244-963a-88995d56cb24]" 2025-10-13T00:23:44.499420837+00:00 stderr F I1013 00:23:44.497735 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: deployer, uid: cf0a4ff6-9829-4347-8ea8-c1c9fea3f30e]" virtual=false 2025-10-13T00:23:44.501517925+00:00 stderr F I1013 
00:23:44.501469 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-node, name: deployer, uid: 24b4447a-ee05-4f49-a398-98668937d09e]" 2025-10-13T00:23:44.501517925+00:00 stderr F I1013 00:23:44.501497 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" virtual=false 2025-10-13T00:23:44.509467586+00:00 stderr F I1013 00:23:44.505512 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager, name: deployer, uid: ae15d72c-4c34-4428-9b01-5cc97fd8406e]" 2025-10-13T00:23:44.509467586+00:00 stderr F I1013 00:23:44.505544 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: deployer, uid: bc6662b7-4795-4ec2-b063-d5c11ad003cc]" virtual=false 2025-10-13T00:23:44.513401206+00:00 stderr F I1013 00:23:44.510925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.513401206+00:00 stderr F I1013 00:23:44.510962 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cloud-network-config-controller, name: default, uid: ae911303-2e32-409a-8609-7d5e14bb3d24]" virtual=false 2025-10-13T00:23:44.513401206+00:00 stderr F I1013 00:23:44.511088 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: oauth-proxy, uid: 9acec097-9868-44f6-a179-49640dd8e719]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.513401206+00:00 stderr F I1013 00:23:44.511106 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-canary, name: builder, uid: 550cef98-4985-49ca-974e-bb3771bfe049]" virtual=false 2025-10-13T00:23:44.520431872+00:00 stderr F I1013 00:23:44.519256 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: builder, uid: b6b8bd78-379d-4a40-b5df-4f1be0ab27bb]" 2025-10-13T00:23:44.520431872+00:00 stderr F I1013 00:23:44.519294 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: default, uid: 9074fe25-68b9-4491-8977-ffc2cd9aaea8]" virtual=false 2025-10-13T00:23:44.524616478+00:00 stderr F I1013 00:23:44.524537 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift, name: default, uid: 283f9a2e-49be-43d9-bb7c-1702f37cd585]" 2025-10-13T00:23:44.524616478+00:00 stderr F I1013 00:23:44.524573 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: service-controller, uid: ea11a3af-751f-4a63-a42e-37eb56782a64]" virtual=false 2025-10-13T00:23:44.532427456+00:00 stderr F I1013 00:23:44.531574 1 garbagecollector.go:596] "item doesn't have an owner, continue on next 
item" item="[v1/ServiceAccount, namespace: openshift-config-managed, name: deployer, uid: 388b2309-a465-4dec-9ca0-b0009937475c]" 2025-10-13T00:23:44.532427456+00:00 stderr F I1013 00:23:44.531611 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: deployer, uid: eaf2a0b5-f5ed-420a-b307-6591b5715c59]" virtual=false 2025-10-13T00:23:44.537028144+00:00 stderr F I1013 00:23:44.535304 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: deployer, uid: 512f21bb-4373-4442-b7a4-d2867db36d68]" 2025-10-13T00:23:44.537028144+00:00 stderr F I1013 00:23:44.535367 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: localhost-recovery-client, uid: d0c58f48-5242-4a1d-a89a-310015517fc4]" virtual=false 2025-10-13T00:23:44.537028144+00:00 stderr F I1013 00:23:44.535437 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: default, uid: a5711013-e20d-4eb3-ae93-30453f0e3621]" 2025-10-13T00:23:44.537028144+00:00 stderr F I1013 00:23:44.535456 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: deployer, uid: f2f2dea3-e303-4775-bf48-d87896cf58b7]" virtual=false 2025-10-13T00:23:44.543393741+00:00 stderr F I1013 00:23:44.540210 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-apiserver, name: builder, uid: 86fa1aad-514f-44b6-881d-d635653a0739]" 2025-10-13T00:23:44.543393741+00:00 stderr F I1013 00:23:44.540247 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: deployer, uid: ffc21ca4-56d4-4b65-979c-f7a5f08c213c]" virtual=false 2025-10-13T00:23:44.546285002+00:00 stderr F I1013 00:23:44.545353 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: driver-toolkit, uid: bf2aafeb-b943-4673-8978-ff549746c6bb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.546285002+00:00 stderr F I1013 00:23:44.545390 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" virtual=false 2025-10-13T00:23:44.546285002+00:00 stderr F I1013 00:23:44.545527 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer, uid: 88d58746-7912-47b5-a52e-f0badf538ff2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.546285002+00:00 stderr F I1013 00:23:44.545545 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" virtual=false 2025-10-13T00:23:44.546285002+00:00 stderr F I1013 00:23:44.545606 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-user-workload-monitoring, name: 
builder, uid: 547fd83b-97e1-4e39-9465-e767267ae539]" 2025-10-13T00:23:44.546285002+00:00 stderr F I1013 00:23:44.545617 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: builder, uid: 41243c2d-cf84-4420-acb8-6eb33a6f2db9]" virtual=false 2025-10-13T00:23:44.549689657+00:00 stderr F I1013 00:23:44.549630 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[image.openshift.io/v1/ImageStream, namespace: openshift, name: installer-artifacts, uid: 897a4e58-3c9c-4859-a14e-e70529b83bbe]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.549689657+00:00 stderr F I1013 00:23:44.549676 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: cluster-csr-approver-controller, uid: ab736ea5-8514-4922-ba8c-a38770b65ac1]" virtual=false 2025-10-13T00:23:44.550415267+00:00 stderr F I1013 00:23:44.550363 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.550415267+00:00 stderr F I1013 00:23:44.550401 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: default, uid: 1bede0a3-475c-4e7a-882a-fc97e0e194f0]" virtual=false 2025-10-13T00:23:44.550475919+00:00 stderr F I1013 00:23:44.550444 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: registry, uid: 99e50b49-684f-41ea-874f-26873a6c6ef5]" 2025-10-13T00:23:44.550475919+00:00 stderr F I1013 00:23:44.550470 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: default, uid: 0ecb6711-f0d5-41dc-944a-96bbfe551c1f]" virtual=false 2025-10-13T00:23:44.551945600+00:00 stderr F I1013 00:23:44.551906 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: daemon-set-controller, uid: f914f44b-c5b4-43e0-8fbd-34eb3aae706c]" 2025-10-13T00:23:44.551945600+00:00 stderr F I1013 00:23:44.551932 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-user-settings, name: default, uid: 1538199f-0a00-4d3c-859c-9846a8d533c9]" virtual=false 2025-10-13T00:23:44.555000795+00:00 stderr F I1013 00:23:44.554956 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.555000795+00:00 stderr F I1013 00:23:44.554980 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-node-lease, name: default, uid: c20f149a-6064-48c3-afc4-6a5ef492129b]" virtual=false 2025-10-13T00:23:44.556127386+00:00 stderr F I1013 00:23:44.556085 1 garbagecollector.go:596] 
"item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: deployer, uid: cf0a4ff6-9829-4347-8ea8-c1c9fea3f30e]" 2025-10-13T00:23:44.556127386+00:00 stderr F I1013 00:23:44.556114 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-host-network, name: default, uid: d489aa6a-33df-4e82-b5e0-b744668ca153]" virtual=false 2025-10-13T00:23:44.557588967+00:00 stderr F I1013 00:23:44.557553 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" 2025-10-13T00:23:44.557588967+00:00 stderr F I1013 00:23:44.557579 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: deployer, uid: 7333087b-1f9c-4237-944d-e2cc9e4404aa]" virtual=false 2025-10-13T00:23:44.561237199+00:00 stderr F I1013 00:23:44.561192 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: deployer, uid: bc6662b7-4795-4ec2-b063-d5c11ad003cc]" 2025-10-13T00:23:44.561237199+00:00 stderr F I1013 00:23:44.561225 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: builder, uid: 06f6a2b4-fafa-454e-bd21-a4e492bc69a3]" virtual=false 2025-10-13T00:23:44.564553251+00:00 stderr F I1013 00:23:44.564511 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cloud-network-config-controller, name: default, uid: ae911303-2e32-409a-8609-7d5e14bb3d24]" 2025-10-13T00:23:44.564553251+00:00 stderr F I1013 00:23:44.564536 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: deployer, uid: f14f20f6-a6e6-47eb-b7c8-e784690d8815]" virtual=false 2025-10-13T00:23:44.568095830+00:00 stderr F I1013 00:23:44.568052 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress-canary, name: builder, uid: 550cef98-4985-49ca-974e-bb3771bfe049]" 2025-10-13T00:23:44.568095830+00:00 stderr F I1013 00:23:44.568076 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: deployer, uid: 0409a103-84df-4e81-b524-ba9535936b01]" virtual=false 2025-10-13T00:23:44.570761104+00:00 stderr F I1013 00:23:44.570716 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: default, uid: 9074fe25-68b9-4491-8977-ffc2cd9aaea8]" 2025-10-13T00:23:44.570761104+00:00 stderr F I1013 00:23:44.570744 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" virtual=false 2025-10-13T00:23:44.574585820+00:00 stderr F I1013 00:23:44.574541 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: service-controller, uid: ea11a3af-751f-4a63-a42e-37eb56782a64]" 2025-10-13T00:23:44.574585820+00:00 stderr F I1013 00:23:44.574565 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: root-ca-cert-publisher, uid: 
d3a36919-faa6-429b-96b6-9f96846dbfa7]" virtual=false 2025-10-13T00:23:44.578066477+00:00 stderr F I1013 00:23:44.578024 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver, name: deployer, uid: eaf2a0b5-f5ed-420a-b307-6591b5715c59]" 2025-10-13T00:23:44.578066477+00:00 stderr F I1013 00:23:44.578047 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: router, uid: 99445ce8-ddd7-4421-a693-e1d07a2b13e8]" virtual=false 2025-10-13T00:23:44.580651319+00:00 stderr F I1013 00:23:44.580587 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler, name: localhost-recovery-client, uid: d0c58f48-5242-4a1d-a89a-310015517fc4]" 2025-10-13T00:23:44.580651319+00:00 stderr F I1013 00:23:44.580611 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: horizontal-pod-autoscaler, uid: 1c2e2eb6-146b-4596-823e-04a27e7f82a4]" virtual=false 2025-10-13T00:23:44.584485746+00:00 stderr F I1013 00:23:44.584443 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: deployer, uid: f2f2dea3-e303-4775-bf48-d87896cf58b7]" 2025-10-13T00:23:44.584485746+00:00 stderr F I1013 00:23:44.584469 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: builder, uid: 48ae66e4-f4a8-4e18-a659-467a093f9d8d]" virtual=false 2025-10-13T00:23:44.587974823+00:00 stderr F I1013 00:23:44.587940 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: deployer, uid: ffc21ca4-56d4-4b65-979c-f7a5f08c213c]" 2025-10-13T00:23:44.587974823+00:00 stderr F I1013 00:23:44.587964 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: ttl-after-finished-controller, uid: 1450504f-c037-4252-8dcb-f0f55b495d9d]" virtual=false 2025-10-13T00:23:44.598090175+00:00 stderr F I1013 00:23:44.598031 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: builder, uid: 41243c2d-cf84-4420-acb8-6eb33a6f2db9]" 2025-10-13T00:23:44.598090175+00:00 stderr F I1013 00:23:44.598069 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: builder, uid: b8fc58c0-6e6d-4d56-92df-941c665511b7]" virtual=false 2025-10-13T00:23:44.604240216+00:00 stderr F I1013 00:23:44.604186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.604277607+00:00 stderr F I1013 00:23:44.604236 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: endpoint-controller, uid: 6b92463a-04af-404a-b2a2-00000adcb481]" virtual=false 2025-10-13T00:23:44.605391238+00:00 stderr F I1013 00:23:44.605349 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: 
openshift-infra, name: cluster-csr-approver-controller, uid: ab736ea5-8514-4922-ba8c-a38770b65ac1]" 2025-10-13T00:23:44.605391238+00:00 stderr F I1013 00:23:44.605383 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: builder, uid: ce67a32e-cc11-4b7e-8dd7-1676f214d466]" virtual=false 2025-10-13T00:23:44.608423263+00:00 stderr F I1013 00:23:44.608381 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-oauth-apiserver, name: default, uid: 1bede0a3-475c-4e7a-882a-fc97e0e194f0]" 2025-10-13T00:23:44.608423263+00:00 stderr F I1013 00:23:44.608408 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: builder, uid: fea4839a-4e87-44a0-a411-d40850673f0f]" virtual=false 2025-10-13T00:23:44.611112178+00:00 stderr F I1013 00:23:44.611076 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: default, uid: 0ecb6711-f0d5-41dc-944a-96bbfe551c1f]" 2025-10-13T00:23:44.611132648+00:00 stderr F I1013 00:23:44.611110 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: default, uid: baa17716-5369-4afe-afd4-8e8473df9387]" virtual=false 2025-10-13T00:23:44.614792210+00:00 stderr F I1013 00:23:44.614746 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console-user-settings, name: default, uid: 1538199f-0a00-4d3c-859c-9846a8d533c9]" 2025-10-13T00:23:44.614853942+00:00 stderr F I1013 00:23:44.614826 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" virtual=false 2025-10-13T00:23:44.617599789+00:00 stderr F I1013 00:23:44.617563 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-node-lease, name: default, uid: c20f149a-6064-48c3-afc4-6a5ef492129b]" 2025-10-13T00:23:44.617599789+00:00 stderr F I1013 00:23:44.617593 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: builder, uid: 078eb1e7-6355-446b-acac-00bddfd4578a]" virtual=false 2025-10-13T00:23:44.621729134+00:00 stderr F I1013 00:23:44.621579 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-host-network, name: default, uid: d489aa6a-33df-4e82-b5e0-b744668ca153]" 2025-10-13T00:23:44.621729134+00:00 stderr F I1013 00:23:44.621610 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: pv-recycler-controller, uid: f12a85ec-5223-4bb8-b064-20e50e4ec1e2]" virtual=false 2025-10-13T00:23:44.624346626+00:00 stderr F I1013 00:23:44.624289 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: deployer, uid: 7333087b-1f9c-4237-944d-e2cc9e4404aa]" 2025-10-13T00:23:44.624346626+00:00 stderr F I1013 00:23:44.624315 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-canary, name: default, uid: 3999d247-8df5-418d-9615-0a40bec2cc09]" virtual=false 2025-10-13T00:23:44.628265406+00:00 stderr F I1013 00:23:44.628235 1 
garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: builder, uid: 06f6a2b4-fafa-454e-bd21-a4e492bc69a3]" 2025-10-13T00:23:44.628265406+00:00 stderr F I1013 00:23:44.628261 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" virtual=false 2025-10-13T00:23:44.630807866+00:00 stderr F I1013 00:23:44.630768 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: deployer, uid: f14f20f6-a6e6-47eb-b7c8-e784690d8815]" 2025-10-13T00:23:44.630807866+00:00 stderr F I1013 00:23:44.630790 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: default, uid: a75dd68d-f1bd-4ec6-aacb-77be49fce8c0]" virtual=false 2025-10-13T00:23:44.635243180+00:00 stderr F I1013 00:23:44.635201 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: deployer, uid: 0409a103-84df-4e81-b524-ba9535936b01]" 2025-10-13T00:23:44.635243180+00:00 stderr F I1013 00:23:44.635228 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: node-controller, uid: 05a5cda9-553b-4360-b20c-6070426a50e4]" virtual=false 2025-10-13T00:23:44.641682039+00:00 stderr F I1013 00:23:44.641639 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: root-ca-cert-publisher, uid: d3a36919-faa6-429b-96b6-9f96846dbfa7]" 2025-10-13T00:23:44.641682039+00:00 stderr F I1013 00:23:44.641668 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: deployer, uid: 93530fde-b51d-4e8e-ba4e-847ee87d0c99]" virtual=false 2025-10-13T00:23:44.645975239+00:00 stderr F I1013 00:23:44.645362 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress, name: router, uid: 99445ce8-ddd7-4421-a693-e1d07a2b13e8]" 2025-10-13T00:23:44.645975239+00:00 stderr F I1013 00:23:44.645389 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: builder, uid: 047d3497-ac43-4fab-af33-7ca096bf1114]" virtual=false 2025-10-13T00:23:44.648394346+00:00 stderr F I1013 00:23:44.648369 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: horizontal-pod-autoscaler, uid: 1c2e2eb6-146b-4596-823e-04a27e7f82a4]" 2025-10-13T00:23:44.648416047+00:00 stderr F I1013 00:23:44.648392 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: pod-garbage-collector, uid: 7ab28279-04a6-46c7-911d-030bdcb307f1]" virtual=false 2025-10-13T00:23:44.650910796+00:00 stderr F I1013 00:23:44.650877 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: builder, uid: 48ae66e4-f4a8-4e18-a659-467a093f9d8d]" 2025-10-13T00:23:44.650910796+00:00 stderr F I1013 00:23:44.650900 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: 
pvc-protection-controller, uid: 04c25e7a-98e3-48af-9503-d157c4960141]" virtual=false 2025-10-13T00:23:44.654614610+00:00 stderr F I1013 00:23:44.654518 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: ttl-after-finished-controller, uid: 1450504f-c037-4252-8dcb-f0f55b495d9d]" 2025-10-13T00:23:44.654614610+00:00 stderr F I1013 00:23:44.654544 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: deploymentconfig-controller, uid: c102bf10-5b91-48a0-97d1-695fe6421533]" virtual=false 2025-10-13T00:23:44.658941570+00:00 stderr F I1013 00:23:44.658903 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.658941570+00:00 stderr F I1013 00:23:44.658931 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: deployer, uid: 1db8cd6e-7968-49f2-a0d3-b90750fedf20]" virtual=false 2025-10-13T00:23:44.664527236+00:00 stderr F I1013 00:23:44.664486 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.664527236+00:00 stderr F I1013 00:23:44.664517 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: cluster-quota-reconciliation-controller, uid: 3f47da69-a809-457d-b6ed-d6329eaf1d97]" virtual=false 2025-10-13T00:23:44.664752082+00:00 stderr F I1013 00:23:44.664706 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator, name: builder, uid: b8fc58c0-6e6d-4d56-92df-941c665511b7]" 2025-10-13T00:23:44.664763122+00:00 stderr F I1013 00:23:44.664755 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operators, name: default, uid: 3c6f0bfe-4a37-43a8-be5d-22300b6a9ceb]" virtual=false 2025-10-13T00:23:44.668598519+00:00 stderr F I1013 00:23:44.668443 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: endpoint-controller, uid: 6b92463a-04af-404a-b2a2-00000adcb481]" 2025-10-13T00:23:44.668598519+00:00 stderr F I1013 00:23:44.668472 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cloud-platform-infra, name: deployer, uid: aa245dc2-937d-44ac-8ecf-06278d9afd37]" virtual=false 2025-10-13T00:23:44.671735056+00:00 stderr F I1013 00:23:44.671706 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: builder, uid: ce67a32e-cc11-4b7e-8dd7-1676f214d466]" 2025-10-13T00:23:44.671764767+00:00 stderr F I1013 00:23:44.671738 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-version, name: deployer, uid: 
3e64e4ba-9952-4db3-aa4c-baf533fd10f4]" virtual=false 2025-10-13T00:23:44.674220776+00:00 stderr F I1013 00:23:44.674187 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: builder, uid: fea4839a-4e87-44a0-a411-d40850673f0f]" 2025-10-13T00:23:44.674237816+00:00 stderr F I1013 00:23:44.674216 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: builder, uid: de0d3b7c-e767-4e96-864d-3402aa1badb2]" virtual=false 2025-10-13T00:23:44.677783735+00:00 stderr F I1013 00:23:44.677747 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: default, uid: baa17716-5369-4afe-afd4-8e8473df9387]" 2025-10-13T00:23:44.677783735+00:00 stderr F I1013 00:23:44.677777 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-system, name: ephemeral-volume-controller, uid: 0788a9dd-ccc4-4a8f-a9ce-5cf6bf27cd73]" virtual=false 2025-10-13T00:23:44.684583165+00:00 stderr F I1013 00:23:44.684548 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: builder, uid: 078eb1e7-6355-446b-acac-00bddfd4578a]" 2025-10-13T00:23:44.684621276+00:00 stderr F I1013 00:23:44.684590 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-storage-operator, name: deployer, uid: c1ed9a54-dcca-44f4-826a-caeeb2c94bc3]" virtual=false 2025-10-13T00:23:44.689241584+00:00 stderr F I1013 00:23:44.689179 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: pv-recycler-controller, uid: f12a85ec-5223-4bb8-b064-20e50e4ec1e2]" 2025-10-13T00:23:44.689241584+00:00 stderr F I1013 00:23:44.689216 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-infra, name: privileged-namespaces-psa-label-syncer, uid: ccee1f96-f588-4f5c-965a-f40d4ca11ee0]" virtual=false 2025-10-13T00:23:44.690898310+00:00 stderr F I1013 00:23:44.690860 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress-canary, name: default, uid: 3999d247-8df5-418d-9615-0a40bec2cc09]" 2025-10-13T00:23:44.690898310+00:00 stderr F I1013 00:23:44.690888 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: kube-node-lease, name: deployer, uid: f6671aa6-4a25-45d3-bec0-55d3165b52bd]" virtual=false 2025-10-13T00:23:44.697042992+00:00 stderr F I1013 00:23:44.697008 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-route-controller-manager, name: default, uid: a75dd68d-f1bd-4ec6-aacb-77be49fce8c0]" 2025-10-13T00:23:44.697060912+00:00 stderr F I1013 00:23:44.697042 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" virtual=false 2025-10-13T00:23:44.701139746+00:00 stderr F I1013 00:23:44.701112 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: node-controller, uid: 05a5cda9-553b-4360-b20c-6070426a50e4]" 2025-10-13T00:23:44.701158196+00:00 
stderr F I1013 00:23:44.701137 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" virtual=false 2025-10-13T00:23:44.708402058+00:00 stderr F I1013 00:23:44.708057 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: deployer, uid: 93530fde-b51d-4e8e-ba4e-847ee87d0c99]" 2025-10-13T00:23:44.708402058+00:00 stderr F I1013 00:23:44.708084 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns, name: node-resolver, uid: ffab3515-5e15-4244-a1f9-b35c0223ffe7]" virtual=false 2025-10-13T00:23:44.708519391+00:00 stderr F I1013 00:23:44.708492 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.708528491+00:00 stderr F I1013 00:23:44.708519 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" virtual=false 2025-10-13T00:23:44.710927048+00:00 stderr F I1013 00:23:44.710903 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: builder, uid: 047d3497-ac43-4fab-af33-7ca096bf1114]" 2025-10-13T00:23:44.710953099+00:00 stderr F I1013 00:23:44.710927 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-canary, name: deployer, uid: 682cc265-63a9-4e45-af55-564117dad2d3]" virtual=false 2025-10-13T00:23:44.714655982+00:00 stderr F I1013 00:23:44.714631 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: pod-garbage-collector, uid: 7ab28279-04a6-46c7-911d-030bdcb307f1]" 2025-10-13T00:23:44.714681353+00:00 stderr F I1013 00:23:44.714656 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: builder, uid: e5aa0f53-6b29-446f-b20e-4098ddf36d61]" virtual=false 2025-10-13T00:23:44.717120911+00:00 stderr F I1013 00:23:44.717092 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: pvc-protection-controller, uid: 04c25e7a-98e3-48af-9503-d157c4960141]" 2025-10-13T00:23:44.717120911+00:00 stderr F I1013 00:23:44.717113 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" virtual=false 2025-10-13T00:23:44.722173752+00:00 stderr F I1013 00:23:44.721161 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: deploymentconfig-controller, uid: c102bf10-5b91-48a0-97d1-695fe6421533]" 2025-10-13T00:23:44.722173752+00:00 stderr F I1013 00:23:44.721574 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: 
fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" virtual=false 2025-10-13T00:23:44.724726123+00:00 stderr F I1013 00:23:44.724702 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-console, name: deployer, uid: 1db8cd6e-7968-49f2-a0d3-b90750fedf20]" 2025-10-13T00:23:44.724754503+00:00 stderr F I1013 00:23:44.724729 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" virtual=false 2025-10-13T00:23:44.727782468+00:00 stderr F I1013 00:23:44.727744 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: cluster-quota-reconciliation-controller, uid: 3f47da69-a809-457d-b6ed-d6329eaf1d97]" 2025-10-13T00:23:44.727782468+00:00 stderr F I1013 00:23:44.727771 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" virtual=false 2025-10-13T00:23:44.731432630+00:00 stderr F I1013 00:23:44.731405 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-operators, name: default, uid: 3c6f0bfe-4a37-43a8-be5d-22300b6a9ceb]" 2025-10-13T00:23:44.731454320+00:00 stderr F I1013 00:23:44.731429 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" virtual=false 2025-10-13T00:23:44.734667510+00:00 stderr F I1013 00:23:44.734634 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cloud-platform-infra, name: deployer, uid: aa245dc2-937d-44ac-8ecf-06278d9afd37]" 2025-10-13T00:23:44.734667510+00:00 stderr F I1013 00:23:44.734658 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" virtual=false 2025-10-13T00:23:44.737348304+00:00 stderr F I1013 00:23:44.737303 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-version, name: deployer, uid: 3e64e4ba-9952-4db3-aa4c-baf533fd10f4]" 2025-10-13T00:23:44.737368635+00:00 stderr F I1013 00:23:44.737350 1 garbagecollector.go:549] "Processing item" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" virtual=false 2025-10-13T00:23:44.741219722+00:00 stderr F I1013 00:23:44.741192 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: builder, uid: de0d3b7c-e767-4e96-864d-3402aa1badb2]" 2025-10-13T00:23:44.741219722+00:00 stderr F I1013 00:23:44.741215 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" virtual=false 2025-10-13T00:23:44.744656918+00:00 stderr F I1013 00:23:44.744621 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-system, name: ephemeral-volume-controller, uid: 0788a9dd-ccc4-4a8f-a9ce-5cf6bf27cd73]" 
2025-10-13T00:23:44.744656918+00:00 stderr F I1013 00:23:44.744643 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" virtual=false 2025-10-13T00:23:44.748170226+00:00 stderr F I1013 00:23:44.748122 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.748170226+00:00 stderr F I1013 00:23:44.748157 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" virtual=false 2025-10-13T00:23:44.751148259+00:00 stderr F I1013 00:23:44.751107 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-storage-operator, name: deployer, uid: c1ed9a54-dcca-44f4-826a-caeeb2c94bc3]" 2025-10-13T00:23:44.751148259+00:00 stderr F I1013 00:23:44.751144 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" virtual=false 2025-10-13T00:23:44.754905263+00:00 stderr F I1013 00:23:44.754882 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-infra, name: privileged-namespaces-psa-label-syncer, uid: ccee1f96-f588-4f5c-965a-f40d4ca11ee0]" 2025-10-13T00:23:44.754923054+00:00 stderr F I1013 00:23:44.754905 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" virtual=false 2025-10-13T00:23:44.757344261+00:00 stderr F I1013 00:23:44.757306 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: kube-node-lease, name: deployer, uid: f6671aa6-4a25-45d3-bec0-55d3165b52bd]" 2025-10-13T00:23:44.757344261+00:00 stderr F I1013 00:23:44.757339 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" virtual=false 2025-10-13T00:23:44.762038702+00:00 stderr F I1013 00:23:44.762012 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:44.762059433+00:00 stderr F I1013 00:23:44.762036 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" virtual=false 2025-10-13T00:23:44.770511568+00:00 stderr F I1013 00:23:44.770474 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-dns, name: node-resolver, uid: 
ffab3515-5e15-4244-a1f9-b35c0223ffe7]" 2025-10-13T00:23:44.770511568+00:00 stderr F I1013 00:23:44.770497 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" virtual=false 2025-10-13T00:23:44.777682798+00:00 stderr F I1013 00:23:44.777640 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-ingress-canary, name: deployer, uid: 682cc265-63a9-4e45-af55-564117dad2d3]" 2025-10-13T00:23:44.777701179+00:00 stderr F I1013 00:23:44.777680 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" virtual=false 2025-10-13T00:23:44.781141894+00:00 stderr F I1013 00:23:44.781107 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: builder, uid: e5aa0f53-6b29-446f-b20e-4098ddf36d61]" 2025-10-13T00:23:44.781141894+00:00 stderr F I1013 00:23:44.781133 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" virtual=false 2025-10-13T00:23:44.790707611+00:00 stderr F I1013 00:23:44.790660 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" 2025-10-13T00:23:44.790707611+00:00 stderr F I1013 00:23:44.790694 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" virtual=false 2025-10-13T00:23:44.833509873+00:00 stderr F I1013 00:23:44.833433 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.833509873+00:00 stderr F I1013 00:23:44.833470 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" virtual=false 2025-10-13T00:23:44.839651284+00:00 stderr F I1013 00:23:44.839590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.839651284+00:00 stderr F I1013 00:23:44.839628 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" virtual=false 2025-10-13T00:23:44.846709110+00:00 stderr F I1013 00:23:44.846618 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: 
openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.846709110+00:00 stderr F I1013 00:23:44.846659 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" virtual=false 2025-10-13T00:23:44.855734632+00:00 stderr F I1013 00:23:44.855700 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.855734632+00:00 stderr F I1013 00:23:44.855722 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" virtual=false 2025-10-13T00:23:44.860002641+00:00 stderr F I1013 00:23:44.859966 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:44.860002641+00:00 stderr F I1013 00:23:44.859989 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" virtual=false 2025-10-13T00:23:44.864175377+00:00 stderr F I1013 00:23:44.864144 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.864175377+00:00 stderr F I1013 00:23:44.864167 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" virtual=false 2025-10-13T00:23:44.868044415+00:00 stderr F I1013 00:23:44.867998 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.868044415+00:00 stderr F I1013 00:23:44.868025 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" virtual=false 2025-10-13T00:23:44.871234383+00:00 stderr F I1013 00:23:44.871184 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: 
openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:44.871234383+00:00 stderr F I1013 00:23:44.871211 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]" virtual=false 2025-10-13T00:23:44.871896782+00:00 stderr F I1013 00:23:44.871853 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-10-13T00:23:44.871910642+00:00 stderr F I1013 00:23:44.871896 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]" virtual=false 2025-10-13T00:23:44.876674875+00:00 stderr F I1013 00:23:44.876632 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.876674875+00:00 stderr F I1013 00:23:44.876656 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" virtual=false 2025-10-13T00:23:44.880631505+00:00 stderr F I1013 00:23:44.880584 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.880631505+00:00 stderr F I1013 00:23:44.880611 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]" virtual=false 2025-10-13T00:23:44.884382550+00:00 stderr F I1013 00:23:44.884347 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.884382550+00:00 stderr F I1013 00:23:44.884368 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" virtual=false 2025-10-13T00:23:44.887980360+00:00 stderr F I1013 00:23:44.887927 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: 
dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.887980360+00:00 stderr F I1013 00:23:44.887965 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" virtual=false 2025-10-13T00:23:44.891510768+00:00 stderr F I1013 00:23:44.891232 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.891510768+00:00 stderr F I1013 00:23:44.891260 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]" virtual=false 2025-10-13T00:23:44.894598514+00:00 stderr F I1013 00:23:44.894563 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.894649216+00:00 stderr F I1013 00:23:44.894628 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" virtual=false 2025-10-13T00:23:44.899373577+00:00 stderr F I1013 00:23:44.899308 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.899392798+00:00 stderr F I1013 00:23:44.899380 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" virtual=false 2025-10-13T00:23:44.909104538+00:00 stderr F I1013 00:23:44.909065 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.909104538+00:00 stderr F I1013 00:23:44.909089 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" virtual=false 2025-10-13T00:23:44.916394471+00:00 stderr F I1013 00:23:44.916340 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: 
openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.916394471+00:00 stderr F I1013 00:23:44.916383 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]" virtual=false 2025-10-13T00:23:44.917613145+00:00 stderr F I1013 00:23:44.917569 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.917613145+00:00 stderr F I1013 00:23:44.917599 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]" virtual=false 2025-10-13T00:23:44.926864953+00:00 stderr F I1013 00:23:44.926811 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.926864953+00:00 stderr F I1013 00:23:44.926840 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" virtual=false 2025-10-13T00:23:44.934656340+00:00 stderr F I1013 00:23:44.934612 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]" 2025-10-13T00:23:44.934656340+00:00 stderr F I1013 00:23:44.934641 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" virtual=false 2025-10-13T00:23:44.938942519+00:00 stderr F I1013 00:23:44.938903 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]" 2025-10-13T00:23:44.938942519+00:00 stderr F I1013 00:23:44.938924 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" virtual=false 2025-10-13T00:23:44.945124681+00:00 stderr F I1013 00:23:44.945055 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]" 2025-10-13T00:23:44.945144382+00:00 stderr F I1013 00:23:44.945136 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: 
kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" virtual=false 2025-10-13T00:23:44.955551802+00:00 stderr F I1013 00:23:44.955517 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]" 2025-10-13T00:23:44.955551802+00:00 stderr F I1013 00:23:44.955540 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" virtual=false 2025-10-13T00:23:44.965417717+00:00 stderr F I1013 00:23:44.965368 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.965417717+00:00 stderr F I1013 00:23:44.965410 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]" virtual=false 2025-10-13T00:23:44.971772514+00:00 stderr F I1013 00:23:44.971685 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.971772514+00:00 stderr F I1013 00:23:44.971718 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" virtual=false 2025-10-13T00:23:44.974158250+00:00 stderr F I1013 00:23:44.974113 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.974173740+00:00 stderr F I1013 00:23:44.974157 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" virtual=false 2025-10-13T00:23:44.978935593+00:00 stderr F I1013 00:23:44.978888 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]" 2025-10-13T00:23:44.978954843+00:00 stderr F I1013 00:23:44.978940 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" virtual=false 2025-10-13T00:23:44.982274486+00:00 stderr F I1013 00:23:44.982225 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]" 
2025-10-13T00:23:44.982297626+00:00 stderr F I1013 00:23:44.982274 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" virtual=false 2025-10-13T00:23:44.987672066+00:00 stderr F I1013 00:23:44.987636 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.987715247+00:00 stderr F I1013 00:23:44.987690 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" virtual=false 2025-10-13T00:23:44.991658677+00:00 stderr F I1013 00:23:44.991584 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:44.991658677+00:00 stderr F I1013 00:23:44.991612 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" virtual=false 2025-10-13T00:23:44.995371540+00:00 stderr F I1013 00:23:44.995305 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}] 2025-10-13T00:23:44.995480663+00:00 stderr F I1013 00:23:44.995456 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" virtual=false 2025-10-13T00:23:45.000675478+00:00 stderr F I1013 00:23:45.000487 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.000675478+00:00 stderr F I1013 00:23:45.000536 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" virtual=false 2025-10-13T00:23:45.011646284+00:00 stderr F I1013 00:23:45.011590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-10-13T00:23:45.011646284+00:00 stderr F I1013 00:23:45.011636 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" virtual=false 2025-10-13T00:23:45.017518997+00:00 stderr F I1013 00:23:45.017469 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.017564588+00:00 stderr F I1013 00:23:45.017519 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" virtual=false 2025-10-13T00:23:45.021645172+00:00 stderr F I1013 00:23:45.021604 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.021666393+00:00 stderr F I1013 00:23:45.021657 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" virtual=false 2025-10-13T00:23:45.028228785+00:00 stderr F I1013 00:23:45.028172 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.028228785+00:00 stderr F I1013 00:23:45.028205 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" virtual=false 2025-10-13T00:23:45.030540020+00:00 stderr F I1013 00:23:45.030494 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.030540020+00:00 stderr F I1013 00:23:45.030517 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" virtual=false 2025-10-13T00:23:45.031070265+00:00 stderr F I1013 00:23:45.031026 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]" 2025-10-13T00:23:45.031070265+00:00 stderr F I1013 00:23:45.031051 1 
garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" virtual=false 2025-10-13T00:23:45.044997953+00:00 stderr F I1013 00:23:45.044967 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.045062464+00:00 stderr F I1013 00:23:45.045049 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" virtual=false 2025-10-13T00:23:45.059771224+00:00 stderr F I1013 00:23:45.059669 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.059771224+00:00 stderr F I1013 00:23:45.059714 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" virtual=false 2025-10-13T00:23:45.072994083+00:00 stderr F I1013 00:23:45.072930 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.072994083+00:00 stderr F I1013 00:23:45.072966 1 garbagecollector.go:549] "Processing item" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" virtual=false 2025-10-13T00:23:45.073819516+00:00 stderr F I1013 00:23:45.073762 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.073819516+00:00 stderr F I1013 00:23:45.073793 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]" virtual=false 2025-10-13T00:23:45.080911323+00:00 stderr F I1013 00:23:45.080859 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.080911323+00:00 stderr F I1013 00:23:45.080899 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]" virtual=false 2025-10-13T00:23:45.085959754+00:00 stderr F I1013 00:23:45.085700 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.085959754+00:00 stderr F I1013 00:23:45.085728 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" virtual=false 2025-10-13T00:23:45.103110552+00:00 stderr F I1013 00:23:45.103046 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.103110552+00:00 stderr F I1013 00:23:45.103086 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" virtual=false 2025-10-13T00:23:45.106887687+00:00 stderr F I1013 00:23:45.106813 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.106887687+00:00 stderr F I1013 00:23:45.106850 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" virtual=false 2025-10-13T00:23:45.110988751+00:00 stderr F I1013 00:23:45.110556 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.110988751+00:00 stderr F I1013 00:23:45.110581 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" virtual=false 2025-10-13T00:23:45.117962205+00:00 stderr F I1013 00:23:45.117907 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: 
openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.117962205+00:00 stderr F I1013 00:23:45.117949 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" virtual=false 2025-10-13T00:23:45.119054266+00:00 stderr F I1013 00:23:45.119014 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}] 2025-10-13T00:23:45.119054266+00:00 stderr F I1013 00:23:45.119037 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" virtual=false 2025-10-13T00:23:45.126512573+00:00 stderr F I1013 00:23:45.126208 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.126512573+00:00 stderr F I1013 00:23:45.126246 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" virtual=false 2025-10-13T00:23:45.128424527+00:00 stderr F I1013 00:23:45.128378 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.128424527+00:00 stderr F I1013 00:23:45.128411 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" virtual=false 2025-10-13T00:23:45.136141361+00:00 stderr F I1013 00:23:45.136089 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.136161362+00:00 stderr F I1013 00:23:45.136135 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" virtual=false 2025-10-13T00:23:45.137675154+00:00 stderr F I1013 00:23:45.137631 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: 
openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]" 2025-10-13T00:23:45.137675154+00:00 stderr F I1013 00:23:45.137660 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" virtual=false 2025-10-13T00:23:45.141294655+00:00 stderr F I1013 00:23:45.141254 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]" 2025-10-13T00:23:45.141294655+00:00 stderr F I1013 00:23:45.141287 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" virtual=false 2025-10-13T00:23:45.150388098+00:00 stderr F I1013 00:23:45.150319 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.150414359+00:00 stderr F I1013 00:23:45.150397 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" virtual=false 2025-10-13T00:23:45.151219272+00:00 stderr F I1013 00:23:45.151180 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.151219272+00:00 stderr F I1013 00:23:45.151210 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" virtual=false 2025-10-13T00:23:45.157443245+00:00 stderr F I1013 00:23:45.157402 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.157443245+00:00 stderr F I1013 00:23:45.157427 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" virtual=false 2025-10-13T00:23:45.162202118+00:00 stderr F I1013 00:23:45.162158 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.162202118+00:00 stderr F I1013 00:23:45.162184 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" virtual=false 2025-10-13T00:23:45.165054567+00:00 stderr F I1013 00:23:45.164996 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.165054567+00:00 stderr F I1013 00:23:45.165037 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" virtual=false 2025-10-13T00:23:45.168972246+00:00 stderr F I1013 00:23:45.168932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.168972246+00:00 stderr F I1013 00:23:45.168960 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" virtual=false 2025-10-13T00:23:45.181840565+00:00 stderr F I1013 00:23:45.181781 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.181840565+00:00 stderr F I1013 00:23:45.181820 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" virtual=false 2025-10-13T00:23:45.190410283+00:00 stderr F I1013 00:23:45.190365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":false,"blockOwnerDeletion":false}] 2025-10-13T00:23:45.190437094+00:00 stderr F I1013 00:23:45.190408 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" virtual=false 
2025-10-13T00:23:45.199962539+00:00 stderr F I1013 00:23:45.199925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.199962539+00:00 stderr F I1013 00:23:45.199957 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" virtual=false 2025-10-13T00:23:45.220577324+00:00 stderr F I1013 00:23:45.220536 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.220600624+00:00 stderr F I1013 00:23:45.220579 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" virtual=false 2025-10-13T00:23:45.234679436+00:00 stderr F I1013 00:23:45.234626 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.234679436+00:00 stderr F I1013 00:23:45.234668 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" virtual=false 2025-10-13T00:23:45.244649594+00:00 stderr F I1013 00:23:45.244575 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.244649594+00:00 stderr F I1013 00:23:45.244643 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" virtual=false 2025-10-13T00:23:45.247302058+00:00 stderr F I1013 00:23:45.247253 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.247302058+00:00 stderr F I1013 00:23:45.247294 1 garbagecollector.go:549] "Processing item" 
item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" virtual=false 2025-10-13T00:23:45.247934046+00:00 stderr F I1013 00:23:45.247895 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.247934046+00:00 stderr F I1013 00:23:45.247922 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" virtual=false 2025-10-13T00:23:45.256249147+00:00 stderr F I1013 00:23:45.256213 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.256268178+00:00 stderr F I1013 00:23:45.256260 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-monitoring, name: openshift-cluster-monitoring, uid: 58528e0e-a83e-439a-a129-7b0a4ae24a96]" virtual=false 2025-10-13T00:23:45.264186488+00:00 stderr F I1013 00:23:45.264132 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.264186488+00:00 stderr F I1013 00:23:45.264179 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" virtual=false 2025-10-13T00:23:45.265685130+00:00 stderr F I1013 00:23:45.265637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.265703141+00:00 stderr F I1013 00:23:45.265681 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" virtual=false 2025-10-13T00:23:45.272149970+00:00 stderr F I1013 00:23:45.272094 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.272170091+00:00 stderr F I1013 00:23:45.272145 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" virtual=false 2025-10-13T00:23:45.275742990+00:00 stderr F I1013 00:23:45.275684 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.275742990+00:00 stderr F I1013 00:23:45.275725 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" virtual=false 2025-10-13T00:23:45.280592855+00:00 stderr F I1013 00:23:45.280512 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.280617866+00:00 stderr F I1013 00:23:45.280595 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" virtual=false 2025-10-13T00:23:45.282171759+00:00 stderr F I1013 00:23:45.282116 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.282171759+00:00 stderr F I1013 00:23:45.282155 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" virtual=false 2025-10-13T00:23:45.287553999+00:00 stderr F I1013 00:23:45.287493 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.287553999+00:00 stderr F I1013 00:23:45.287541 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" virtual=false 2025-10-13T00:23:45.293361501+00:00 stderr F I1013 00:23:45.293291 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.293391412+00:00 stderr F I1013 00:23:45.293367 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" virtual=false 2025-10-13T00:23:45.293862175+00:00 stderr F I1013 00:23:45.293831 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.293890626+00:00 stderr F I1013 00:23:45.293860 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" virtual=false 2025-10-13T00:23:45.297821435+00:00 stderr F I1013 00:23:45.297780 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.297821435+00:00 stderr F I1013 00:23:45.297803 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" virtual=false 2025-10-13T00:23:45.304458290+00:00 stderr F I1013 00:23:45.304403 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.304458290+00:00 stderr F I1013 00:23:45.304431 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" virtual=false 2025-10-13T00:23:45.317722409+00:00 stderr F I1013 00:23:45.317641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.317722409+00:00 stderr F I1013 00:23:45.317680 1 garbagecollector.go:549] 
"Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" virtual=false 2025-10-13T00:23:45.321215907+00:00 stderr F I1013 00:23:45.321165 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.321215907+00:00 stderr F I1013 00:23:45.321192 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" virtual=false 2025-10-13T00:23:45.331570865+00:00 stderr F I1013 00:23:45.331519 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.331570865+00:00 stderr F I1013 00:23:45.331544 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" virtual=false 2025-10-13T00:23:45.349933907+00:00 stderr F I1013 00:23:45.349837 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.349933907+00:00 stderr F I1013 00:23:45.349865 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" virtual=false 2025-10-13T00:23:45.365897301+00:00 stderr F I1013 00:23:45.365853 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.365941773+00:00 stderr F I1013 00:23:45.365898 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" virtual=false 2025-10-13T00:23:45.373787871+00:00 stderr F I1013 00:23:45.373748 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.373787871+00:00 stderr F I1013 00:23:45.373779 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" virtual=false 2025-10-13T00:23:45.377686760+00:00 stderr F I1013 00:23:45.377645 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.377708990+00:00 stderr F I1013 00:23:45.377683 1 garbagecollector.go:549] "Processing item" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" virtual=false 2025-10-13T00:23:45.382604967+00:00 stderr F I1013 00:23:45.382561 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.382604967+00:00 stderr F I1013 00:23:45.382596 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" virtual=false 2025-10-13T00:23:45.390560518+00:00 stderr F I1013 00:23:45.390499 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-monitoring, name: openshift-cluster-monitoring, uid: 58528e0e-a83e-439a-a129-7b0a4ae24a96]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.390560518+00:00 stderr F I1013 00:23:45.390552 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" virtual=false 2025-10-13T00:23:45.393606753+00:00 stderr F I1013 00:23:45.393562 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.393606753+00:00 stderr F I1013 00:23:45.393595 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" virtual=false 2025-10-13T00:23:45.396580426+00:00 stderr F I1013 00:23:45.396527 1 garbagecollector.go:615] "item has at least 
one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.396580426+00:00 stderr F I1013 00:23:45.396551 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" virtual=false 2025-10-13T00:23:45.403086247+00:00 stderr F I1013 00:23:45.403022 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.403086247+00:00 stderr F I1013 00:23:45.403058 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" virtual=false 2025-10-13T00:23:45.407114189+00:00 stderr F I1013 00:23:45.407051 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.407114189+00:00 stderr F I1013 00:23:45.407087 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" virtual=false 2025-10-13T00:23:45.410048381+00:00 stderr F I1013 00:23:45.409996 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.410048381+00:00 stderr F I1013 00:23:45.410029 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" virtual=false 2025-10-13T00:23:45.414121114+00:00 stderr F I1013 00:23:45.414058 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.414121114+00:00 stderr F I1013 00:23:45.414093 1 garbagecollector.go:549] "Processing item" 
item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" virtual=false 2025-10-13T00:23:45.418347352+00:00 stderr F I1013 00:23:45.418271 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.418347352+00:00 stderr F I1013 00:23:45.418300 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" virtual=false 2025-10-13T00:23:45.425302146+00:00 stderr F I1013 00:23:45.425224 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.425302146+00:00 stderr F I1013 00:23:45.425258 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" virtual=false 2025-10-13T00:23:45.427827726+00:00 stderr F I1013 00:23:45.427784 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.427827726+00:00 stderr F I1013 00:23:45.427811 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" virtual=false 2025-10-13T00:23:45.430633464+00:00 stderr F I1013 00:23:45.430561 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.430633464+00:00 stderr F I1013 00:23:45.430587 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" virtual=false 2025-10-13T00:23:45.437028683+00:00 stderr F I1013 00:23:45.436976 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.437028683+00:00 stderr F I1013 00:23:45.437005 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" virtual=false 2025-10-13T00:23:45.451256799+00:00 stderr F I1013 00:23:45.450393 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.451256799+00:00 stderr F I1013 00:23:45.450433 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" virtual=false 2025-10-13T00:23:45.455211049+00:00 stderr F I1013 00:23:45.455141 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:45.455211049+00:00 stderr F I1013 00:23:45.455180 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" virtual=false 2025-10-13T00:23:45.457729529+00:00 stderr F I1013 00:23:45.457670 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" 2025-10-13T00:23:45.457729529+00:00 stderr F I1013 00:23:45.457702 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" virtual=false 2025-10-13T00:23:45.466948046+00:00 stderr F I1013 00:23:45.466876 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.466948046+00:00 stderr F I1013 00:23:45.466932 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" virtual=false 2025-10-13T00:23:45.486624084+00:00 stderr F I1013 00:23:45.486566 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:45.486624084+00:00 stderr F I1013 00:23:45.486607 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" virtual=false 2025-10-13T00:23:45.501173709+00:00 stderr F I1013 00:23:45.501110 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.501213370+00:00 stderr F I1013 00:23:45.501174 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" virtual=false 2025-10-13T00:23:45.509129191+00:00 stderr F I1013 00:23:45.509091 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.509171432+00:00 stderr F I1013 00:23:45.509129 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" virtual=false 2025-10-13T00:23:45.511636391+00:00 stderr F I1013 00:23:45.511571 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.511636391+00:00 stderr F I1013 00:23:45.511614 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" virtual=false 2025-10-13T00:23:45.515680823+00:00 stderr F I1013 00:23:45.515646 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.515702774+00:00 stderr F I1013 00:23:45.515690 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" virtual=false 2025-10-13T00:23:45.523551193+00:00 stderr F I1013 00:23:45.523449 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: 
openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.523551193+00:00 stderr F I1013 00:23:45.523511 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" virtual=false 2025-10-13T00:23:45.531557176+00:00 stderr F I1013 00:23:45.531496 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.531557176+00:00 stderr F I1013 00:23:45.531545 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" virtual=false 2025-10-13T00:23:45.537979134+00:00 stderr F I1013 00:23:45.537940 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.538001935+00:00 stderr F I1013 00:23:45.537974 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" virtual=false 2025-10-13T00:23:45.540879555+00:00 stderr F I1013 00:23:45.540746 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.540879555+00:00 stderr F I1013 00:23:45.540779 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" virtual=false 2025-10-13T00:23:45.544342542+00:00 stderr F I1013 00:23:45.544274 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.544396123+00:00 stderr F I1013 00:23:45.544361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" virtual=false 
2025-10-13T00:23:45.547258373+00:00 stderr F I1013 00:23:45.547203 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.547258373+00:00 stderr F I1013 00:23:45.547247 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" virtual=false 2025-10-13T00:23:45.551612134+00:00 stderr F I1013 00:23:45.551563 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.551612134+00:00 stderr F I1013 00:23:45.551602 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" virtual=false 2025-10-13T00:23:45.557070616+00:00 stderr F I1013 00:23:45.557011 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.557070616+00:00 stderr F I1013 00:23:45.557053 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" virtual=false 2025-10-13T00:23:45.558848166+00:00 stderr F I1013 00:23:45.558788 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.558848166+00:00 stderr F I1013 00:23:45.558832 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" virtual=false 2025-10-13T00:23:45.564313868+00:00 stderr F I1013 00:23:45.564261 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.564313868+00:00 stderr F I1013 
00:23:45.564285 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" virtual=false 2025-10-13T00:23:45.570881691+00:00 stderr F I1013 00:23:45.570837 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.570881691+00:00 stderr F I1013 00:23:45.570869 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" virtual=false 2025-10-13T00:23:45.574724028+00:00 stderr F I1013 00:23:45.574684 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" 2025-10-13T00:23:45.574747618+00:00 stderr F I1013 00:23:45.574738 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" virtual=false 2025-10-13T00:23:45.582594247+00:00 stderr F I1013 00:23:45.582552 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.582620668+00:00 stderr F I1013 00:23:45.582610 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" virtual=false 2025-10-13T00:23:45.591217437+00:00 stderr F I1013 00:23:45.591180 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.591258998+00:00 stderr F I1013 00:23:45.591212 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" virtual=false 2025-10-13T00:23:45.596522035+00:00 stderr F I1013 00:23:45.596116 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-10-13T00:23:45.596522035+00:00 stderr F I1013 00:23:45.596185 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" virtual=false 2025-10-13T00:23:45.596740421+00:00 stderr F I1013 00:23:45.596694 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.596740421+00:00 stderr F I1013 00:23:45.596731 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" virtual=false 2025-10-13T00:23:45.601011120+00:00 stderr F I1013 00:23:45.600967 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" 2025-10-13T00:23:45.601011120+00:00 stderr F I1013 00:23:45.601002 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" virtual=false 2025-10-13T00:23:45.625128232+00:00 stderr F I1013 00:23:45.625071 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:45.625128232+00:00 stderr F I1013 00:23:45.625107 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" virtual=false 2025-10-13T00:23:45.635132331+00:00 stderr F I1013 00:23:45.635072 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:45.635174692+00:00 stderr F I1013 00:23:45.635132 1 garbagecollector.go:549] "Processing item" item="[v1/ResourceQuota, namespace: openshift-host-network, name: host-network-namespace-quotas, uid: 499f87a8-1221-4cfd-b28c-9ae80d5ba123]" virtual=false 2025-10-13T00:23:45.647456064+00:00 stderr F I1013 00:23:45.647399 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.647456064+00:00 stderr F I1013 00:23:45.647441 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" virtual=false 2025-10-13T00:23:45.648995837+00:00 stderr F I1013 00:23:45.648954 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.648995837+00:00 stderr F I1013 00:23:45.648989 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" virtual=false 2025-10-13T00:23:45.652259958+00:00 stderr F I1013 00:23:45.652218 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.652287548+00:00 stderr F I1013 00:23:45.652259 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" virtual=false 2025-10-13T00:23:45.666624568+00:00 stderr F I1013 00:23:45.666433 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.666624568+00:00 stderr F I1013 00:23:45.666480 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 453b9a7e-62aa-4f3d-9330-918d9f77515b]" virtual=false 2025-10-13T00:23:45.675741232+00:00 stderr F I1013 00:23:45.675664 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.675741232+00:00 stderr F I1013 00:23:45.675719 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: console-operator, uid: 9d4ff0ed-d9b6-4c92-8dab-48c5d5dcf7ae]" virtual=false 2025-10-13T00:23:45.679620600+00:00 stderr F I1013 00:23:45.679582 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.679647761+00:00 stderr F I1013 00:23:45.679628 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" virtual=false 2025-10-13T00:23:45.686012518+00:00 stderr F I1013 00:23:45.685884 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.686012518+00:00 stderr F I1013 00:23:45.685939 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" virtual=false 2025-10-13T00:23:45.691209563+00:00 stderr F I1013 00:23:45.691159 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.691209563+00:00 stderr F I1013 00:23:45.691202 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" virtual=false 2025-10-13T00:23:45.691300635+00:00 stderr F I1013 00:23:45.691247 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.691415718+00:00 stderr F I1013 00:23:45.691366 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" virtual=false 2025-10-13T00:23:45.695623105+00:00 stderr F I1013 00:23:45.695569 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.695757349+00:00 stderr F I1013 00:23:45.695691 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" virtual=false 2025-10-13T00:23:45.699055361+00:00 stderr F I1013 00:23:45.699001 1 garbagecollector.go:615] "item 
has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.699055361+00:00 stderr F I1013 00:23:45.699047 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" virtual=false 2025-10-13T00:23:45.700979775+00:00 stderr F I1013 00:23:45.700931 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.700979775+00:00 stderr F I1013 00:23:45.700970 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" virtual=false 2025-10-13T00:23:45.711589520+00:00 stderr F I1013 00:23:45.711526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.711589520+00:00 stderr F I1013 00:23:45.711555 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" virtual=false 2025-10-13T00:23:45.712631909+00:00 stderr F I1013 00:23:45.712586 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.712631909+00:00 stderr F I1013 00:23:45.712612 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" virtual=false 2025-10-13T00:23:45.723993086+00:00 stderr F I1013 00:23:45.723926 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.723993086+00:00 stderr F I1013 00:23:45.723967 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: 
prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" virtual=false 2025-10-13T00:23:45.727714839+00:00 stderr F I1013 00:23:45.727655 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.727714839+00:00 stderr F I1013 00:23:45.727708 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" virtual=false 2025-10-13T00:23:45.731717511+00:00 stderr F I1013 00:23:45.731651 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.731717511+00:00 stderr F I1013 00:23:45.731690 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" virtual=false 2025-10-13T00:23:45.738804148+00:00 stderr F I1013 00:23:45.738740 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.738804148+00:00 stderr F I1013 00:23:45.738780 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" virtual=false 2025-10-13T00:23:45.752838109+00:00 stderr F I1013 00:23:45.752731 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.752838109+00:00 stderr F I1013 00:23:45.752788 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" virtual=false 2025-10-13T00:23:45.769216525+00:00 stderr F I1013 00:23:45.769117 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ResourceQuota, namespace: openshift-host-network, name: host-network-namespace-quotas, uid: 499f87a8-1221-4cfd-b28c-9ae80d5ba123]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.769216525+00:00 stderr F I1013 
00:23:45.769165 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" virtual=false 2025-10-13T00:23:45.773436123+00:00 stderr F I1013 00:23:45.773399 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.773482794+00:00 stderr F I1013 00:23:45.773432 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" virtual=false 2025-10-13T00:23:45.781688843+00:00 stderr F I1013 00:23:45.781635 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.781688843+00:00 stderr F I1013 00:23:45.781680 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" virtual=false 2025-10-13T00:23:45.786890548+00:00 stderr F I1013 00:23:45.786848 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.786913258+00:00 stderr F I1013 00:23:45.786888 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" virtual=false 2025-10-13T00:23:45.798584244+00:00 stderr F I1013 00:23:45.798540 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: cluster-monitoring-operator-alert-customization, uid: 453b9a7e-62aa-4f3d-9330-918d9f77515b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.798584244+00:00 stderr F I1013 00:23:45.798570 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" virtual=false 2025-10-13T00:23:45.807957335+00:00 stderr F I1013 00:23:45.807903 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-monitoring, name: console-operator, uid: 9d4ff0ed-d9b6-4c92-8dab-48c5d5dcf7ae]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.807957335+00:00 stderr F I1013 00:23:45.807940 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" virtual=false 2025-10-13T00:23:45.811959766+00:00 stderr F I1013 00:23:45.811901 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.811959766+00:00 stderr F I1013 00:23:45.811924 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" virtual=false 2025-10-13T00:23:45.818437037+00:00 stderr F I1013 00:23:45.818371 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.818499588+00:00 stderr F I1013 00:23:45.818458 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" virtual=false 2025-10-13T00:23:45.819048894+00:00 stderr F I1013 00:23:45.819005 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.819060484+00:00 stderr F I1013 00:23:45.819049 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" virtual=false 2025-10-13T00:23:45.825109702+00:00 stderr F I1013 00:23:45.825070 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.825109702+00:00 stderr F I1013 00:23:45.825098 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" virtual=false 2025-10-13T00:23:45.830004739+00:00 stderr F I1013 00:23:45.829955 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.830027639+00:00 stderr F I1013 00:23:45.830001 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" virtual=false 2025-10-13T00:23:45.855407216+00:00 stderr F I1013 00:23:45.852298 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.855407216+00:00 stderr F I1013 00:23:45.852350 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" virtual=false 2025-10-13T00:23:45.859519331+00:00 stderr F I1013 00:23:45.855654 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.859519331+00:00 stderr F I1013 00:23:45.855704 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" virtual=false 2025-10-13T00:23:45.859519331+00:00 stderr F I1013 00:23:45.859496 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.859612504+00:00 stderr F I1013 00:23:45.859525 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" virtual=false 2025-10-13T00:23:45.867441912+00:00 stderr F I1013 00:23:45.867365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.867441912+00:00 stderr F I1013 00:23:45.867396 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" virtual=false 
2025-10-13T00:23:45.867532634+00:00 stderr F I1013 00:23:45.867499 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.867532634+00:00 stderr F I1013 00:23:45.867521 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" virtual=false 2025-10-13T00:23:45.871638999+00:00 stderr F I1013 00:23:45.871506 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.871638999+00:00 stderr F I1013 00:23:45.871529 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" virtual=false 2025-10-13T00:23:45.875570818+00:00 stderr F I1013 00:23:45.875473 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.875570818+00:00 stderr F I1013 00:23:45.875492 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" virtual=false 2025-10-13T00:23:45.880975499+00:00 stderr F I1013 00:23:45.880913 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.880975499+00:00 stderr F I1013 00:23:45.880942 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" virtual=false 2025-10-13T00:23:45.887070248+00:00 stderr F I1013 00:23:45.887015 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.887177491+00:00 stderr F I1013 00:23:45.887155 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" virtual=false 2025-10-13T00:23:45.904236165+00:00 stderr F I1013 00:23:45.904174 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.904380129+00:00 stderr F I1013 00:23:45.904358 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" virtual=false 2025-10-13T00:23:45.904755970+00:00 stderr F I1013 00:23:45.904699 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.904773630+00:00 stderr F I1013 00:23:45.904752 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" virtual=false 2025-10-13T00:23:45.909997986+00:00 stderr F I1013 00:23:45.909947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.909997986+00:00 stderr F I1013 00:23:45.909975 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" virtual=false 2025-10-13T00:23:45.918927635+00:00 stderr F I1013 00:23:45.918885 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.919021947+00:00 stderr F I1013 00:23:45.919005 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" virtual=false 2025-10-13T00:23:45.929071787+00:00 stderr F I1013 00:23:45.929037 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-10-13T00:23:45.929366896+00:00 stderr F I1013 00:23:45.929319 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" virtual=false 2025-10-13T00:23:45.936263498+00:00 stderr F I1013 00:23:45.936065 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.936263498+00:00 stderr F I1013 00:23:45.936090 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" virtual=false 2025-10-13T00:23:45.943849759+00:00 stderr F I1013 00:23:45.943820 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.943916851+00:00 stderr F I1013 00:23:45.943905 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" virtual=false 2025-10-13T00:23:45.948256772+00:00 stderr F I1013 00:23:45.948222 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.948256772+00:00 stderr F I1013 00:23:45.948251 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" virtual=false 2025-10-13T00:23:45.954978819+00:00 stderr F I1013 00:23:45.954947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.955069022+00:00 stderr F I1013 00:23:45.955050 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" virtual=false 2025-10-13T00:23:45.957956742+00:00 stderr F I1013 00:23:45.957925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.958042194+00:00 stderr F I1013 00:23:45.958020 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" virtual=false 2025-10-13T00:23:45.961506661+00:00 stderr F I1013 00:23:45.961477 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.961580143+00:00 stderr F I1013 00:23:45.961561 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" virtual=false 2025-10-13T00:23:45.969982377+00:00 stderr F I1013 00:23:45.969945 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.970083860+00:00 stderr F I1013 00:23:45.970066 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" virtual=false 2025-10-13T00:23:45.975366557+00:00 stderr F I1013 00:23:45.974762 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.975366557+00:00 stderr F I1013 00:23:45.974802 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" virtual=false 2025-10-13T00:23:45.978599167+00:00 stderr F I1013 00:23:45.978566 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:45.978617427+00:00 stderr F I1013 00:23:45.978598 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" virtual=false 2025-10-13T00:23:45.984052499+00:00 stderr F I1013 00:23:45.984028 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.984068409+00:00 stderr F I1013 00:23:45.984048 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" virtual=false 2025-10-13T00:23:45.990522579+00:00 stderr F I1013 00:23:45.990494 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.990557230+00:00 stderr F I1013 00:23:45.990541 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" virtual=false 2025-10-13T00:23:45.993886873+00:00 stderr F I1013 00:23:45.993842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:45.993886873+00:00 stderr F I1013 00:23:45.993869 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" virtual=false 2025-10-13T00:23:46.000692812+00:00 stderr F I1013 00:23:46.000659 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.000712453+00:00 stderr F I1013 00:23:46.000697 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" virtual=false 2025-10-13T00:23:46.006481194+00:00 stderr F I1013 00:23:46.006437 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.006481194+00:00 stderr F I1013 00:23:46.006474 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" virtual=false 2025-10-13T00:23:46.013501279+00:00 stderr F I1013 00:23:46.013451 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.013501279+00:00 stderr F I1013 00:23:46.013486 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" virtual=false 2025-10-13T00:23:46.024824755+00:00 stderr F I1013 00:23:46.024783 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.024824755+00:00 stderr F I1013 00:23:46.024816 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" virtual=false 2025-10-13T00:23:46.033450745+00:00 stderr F I1013 00:23:46.033405 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.033480126+00:00 stderr F I1013 00:23:46.033450 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" virtual=false 2025-10-13T00:23:46.044307837+00:00 stderr F I1013 00:23:46.044270 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.044344478+00:00 stderr F I1013 00:23:46.044304 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" virtual=false 2025-10-13T00:23:46.054098260+00:00 stderr F I1013 00:23:46.054050 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.054098260+00:00 stderr F I1013 00:23:46.054086 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" 
virtual=false 2025-10-13T00:23:46.061848016+00:00 stderr F I1013 00:23:46.061814 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.061848016+00:00 stderr F I1013 00:23:46.061841 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-monitoring, name: telemetry-config, uid: 74c753c8-b52b-48a3-9800-ce294f362b5d]" virtual=false 2025-10-13T00:23:46.070517997+00:00 stderr F I1013 00:23:46.070485 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.070564829+00:00 stderr F I1013 00:23:46.070530 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" virtual=false 2025-10-13T00:23:46.077599765+00:00 stderr F I1013 00:23:46.077554 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.077599765+00:00 stderr F I1013 00:23:46.077588 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" virtual=false 2025-10-13T00:23:46.083761436+00:00 stderr F I1013 00:23:46.083729 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.083785537+00:00 stderr F I1013 00:23:46.083760 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" virtual=false 2025-10-13T00:23:46.087408888+00:00 stderr F I1013 00:23:46.087388 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.087425838+00:00 stderr F I1013 00:23:46.087414 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 
08920289-81e3-422d-8580-e3f323d2bc00]" virtual=false 2025-10-13T00:23:46.088151118+00:00 stderr F I1013 00:23:46.088125 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.088179689+00:00 stderr F I1013 00:23:46.088147 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" virtual=false 2025-10-13T00:23:46.097679294+00:00 stderr F I1013 00:23:46.097640 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.097698564+00:00 stderr F I1013 00:23:46.097684 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" virtual=false 2025-10-13T00:23:46.103621029+00:00 stderr F I1013 00:23:46.103586 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.103636260+00:00 stderr F I1013 00:23:46.103618 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" virtual=false 2025-10-13T00:23:46.107415445+00:00 stderr F I1013 00:23:46.107387 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.107431995+00:00 stderr F I1013 00:23:46.107425 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" virtual=false 2025-10-13T00:23:46.113438903+00:00 stderr F I1013 00:23:46.113399 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.113438903+00:00 stderr F I1013 00:23:46.113428 1 garbagecollector.go:549] "Processing item" 
item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" virtual=false 2025-10-13T00:23:46.117465835+00:00 stderr F I1013 00:23:46.117429 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.117465835+00:00 stderr F I1013 00:23:46.117458 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" virtual=false 2025-10-13T00:23:46.123180974+00:00 stderr F I1013 00:23:46.123156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.123198744+00:00 stderr F I1013 00:23:46.123187 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" virtual=false 2025-10-13T00:23:46.127279268+00:00 stderr F I1013 00:23:46.127259 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.133589604+00:00 stderr F I1013 00:23:46.133540 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.139798217+00:00 stderr F I1013 00:23:46.139758 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:46.147697697+00:00 stderr F I1013 00:23:46.147661 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.160849683+00:00 stderr F I1013 00:23:46.160800 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: 
kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.164471124+00:00 stderr F I1013 00:23:46.164435 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.167788097+00:00 stderr F I1013 00:23:46.167761 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" 2025-10-13T00:23:46.174390311+00:00 stderr F I1013 00:23:46.174361 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.187048953+00:00 stderr F I1013 00:23:46.187016 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.193065091+00:00 stderr F I1013 00:23:46.192994 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-monitoring, name: telemetry-config, uid: 74c753c8-b52b-48a3-9800-ce294f362b5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:46.194852920+00:00 stderr F I1013 00:23:46.194812 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.198153572+00:00 stderr F I1013 00:23:46.198113 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.201624039+00:00 stderr F I1013 00:23:46.201597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.204963302+00:00 stderr F I1013 00:23:46.204934 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.208463730+00:00 stderr F I1013 00:23:46.208444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.211568996+00:00 stderr F I1013 00:23:46.211528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.216455272+00:00 stderr F I1013 00:23:46.216437 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-10-13T00:23:46.218526170+00:00 stderr F I1013 00:23:46.218489 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.222038598+00:00 stderr F I1013 00:23:46.222002 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-10-13T00:23:46.228557489+00:00 stderr F I1013 00:23:46.228515 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-10-13T00:23:48.061380733+00:00 stderr F I1013 00:23:48.061296 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , 
name: must-gather-xmtph, uid: 5f054249-6fe3-4b60-aad0-8b8f46729a0a]" virtual=false 2025-10-13T00:23:48.063201753+00:00 stderr F I1013 00:23:48.063159 1 garbagecollector.go:688] "Deleting item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: must-gather-xmtph, uid: 5f054249-6fe3-4b60-aad0-8b8f46729a0a]" propagationPolicy="Background" 2025-10-13T00:23:53.054596818+00:00 stderr F I1013 00:23:53.054453 1 namespace_controller.go:182] "Namespace has been deleted" namespace="openshift-must-gather-xwq75" ././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000016567015073043233033075 0ustar zuulzuul2025-10-13T00:22:25.806907009+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2025-10-13T00:22:25.812103108+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:25.818688395+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,47sec,0)' ']' 2025-10-13T00:22:25.818688395+00:00 stderr F + sleep 1 2025-10-13T00:22:26.821803781+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:26.831822791+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,46sec,0)' ']' 2025-10-13T00:22:26.831822791+00:00 stderr F + sleep 1 2025-10-13T00:22:27.835593063+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:27.845239743+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,45sec,0)' ']' 2025-10-13T00:22:27.845239743+00:00 stderr F + sleep 1 2025-10-13T00:22:28.848054740+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:28.853176088+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,44sec,0)' ']' 2025-10-13T00:22:28.853199519+00:00 stderr F + sleep 1 2025-10-13T00:22:29.858476893+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:29.863163399+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,43sec,0)' ']' 2025-10-13T00:22:29.863163399+00:00 stderr F + sleep 1 2025-10-13T00:22:30.866386007+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:30.870800305+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,42sec,0)' ']' 2025-10-13T00:22:30.870800305+00:00 stderr F + sleep 1 2025-10-13T00:22:31.874250670+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:31.881252628+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,41sec,0)' ']' 2025-10-13T00:22:31.881252628+00:00 stderr F + sleep 1 2025-10-13T00:22:32.885405762+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:32.891316171+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,40sec,0)' ']' 2025-10-13T00:22:32.891316171+00:00 stderr F + sleep 1 2025-10-13T00:22:33.895453731+00:00 stderr F ++ ss -Htanop '(' 
sport = 10257 ')' 2025-10-13T00:22:33.904839653+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,39sec,0)' ']' 2025-10-13T00:22:33.904839653+00:00 stderr F + sleep 1 2025-10-13T00:22:34.909088515+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:34.917239782+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,38sec,0)' ']' 2025-10-13T00:22:34.917239782+00:00 stderr F + sleep 1 2025-10-13T00:22:35.922575385+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:35.932912863+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,37sec,0)' ']' 2025-10-13T00:22:35.932912863+00:00 stderr F + sleep 1 2025-10-13T00:22:36.934832541+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:36.939871511+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,36sec,0)' ']' 2025-10-13T00:22:36.939959563+00:00 stderr F + sleep 1 2025-10-13T00:22:37.944114113+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:37.948408523+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,35sec,0)' ']' 2025-10-13T00:22:37.948453594+00:00 stderr F + sleep 1 2025-10-13T00:22:38.951679459+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:38.962703216+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,34sec,0)' ']' 2025-10-13T00:22:38.962753227+00:00 stderr F + sleep 1 2025-10-13T00:22:39.965829078+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:39.976693581+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,33sec,0)' ']' 2025-10-13T00:22:39.976753943+00:00 stderr F + sleep 1 2025-10-13T00:22:40.981240073+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:40.990757828+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,32sec,0)' ']' 2025-10-13T00:22:40.990813600+00:00 stderr F + sleep 1 2025-10-13T00:22:41.994114096+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:42.004740912+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,31sec,0)' ']' 2025-10-13T00:22:42.004874206+00:00 stderr F + sleep 1 2025-10-13T00:22:43.011212628+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:43.019758396+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,30sec,0)' ']' 2025-10-13T00:22:43.019856908+00:00 stderr F + sleep 1 2025-10-13T00:22:44.024018370+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:44.033444832+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,29sec,0)' ']' 2025-10-13T00:22:44.033494714+00:00 stderr F + sleep 1 2025-10-13T00:22:45.041797130+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:45.049888975+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,28sec,0)' ']' 2025-10-13T00:22:45.049965857+00:00 stderr F + sleep 1 2025-10-13T00:22:46.055826345+00:00 stderr F ++ ss -Htanop '(' sport = 10257 
')' 2025-10-13T00:22:46.065311760+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,27sec,0)' ']' 2025-10-13T00:22:46.065453444+00:00 stderr F + sleep 1 2025-10-13T00:22:47.069961594+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:47.076243019+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,26sec,0)' ']' 2025-10-13T00:22:47.076344322+00:00 stderr F + sleep 1 2025-10-13T00:22:48.079717312+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:48.085945555+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,25sec,0)' ']' 2025-10-13T00:22:48.086049858+00:00 stderr F + sleep 1 2025-10-13T00:22:49.090508537+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:49.096260037+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,24sec,0)' ']' 2025-10-13T00:22:49.096433592+00:00 stderr F + sleep 1 2025-10-13T00:22:50.099581455+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:50.104008308+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,23sec,0)' ']' 2025-10-13T00:22:50.104069520+00:00 stderr F + sleep 1 2025-10-13T00:22:51.109324311+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:51.115821232+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,22sec,0)' ']' 2025-10-13T00:22:51.115878084+00:00 stderr F + sleep 1 2025-10-13T00:22:52.118551214+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:52.123123291+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,21sec,0)' ']' 2025-10-13T00:22:52.123195233+00:00 stderr F + sleep 1 2025-10-13T00:22:53.126156420+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:53.131154059+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,20sec,0)' ']' 2025-10-13T00:22:53.131237692+00:00 stderr F + sleep 1 2025-10-13T00:22:54.135904157+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:54.143191710+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,19sec,0)' ']' 2025-10-13T00:22:54.143191710+00:00 stderr F + sleep 1 2025-10-13T00:22:55.146221630+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:55.153298597+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,18sec,0)' ']' 2025-10-13T00:22:55.153298597+00:00 stderr F + sleep 1 2025-10-13T00:22:56.156765528+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:56.169057441+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,17sec,0)' ']' 2025-10-13T00:22:56.169057441+00:00 stderr F + sleep 1 2025-10-13T00:22:57.172419010+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:57.177229604+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,16sec,0)' ']' 2025-10-13T00:22:57.177229604+00:00 stderr F + sleep 1 2025-10-13T00:22:58.186380274+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 
2025-10-13T00:22:58.193406710+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,15sec,0)' ']' 2025-10-13T00:22:58.193406710+00:00 stderr F + sleep 1 2025-10-13T00:22:59.195946115+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:22:59.201685035+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,14sec,0)' ']' 2025-10-13T00:22:59.201685035+00:00 stderr F + sleep 1 2025-10-13T00:23:00.204859708+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:00.214535848+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,13sec,0)' ']' 2025-10-13T00:23:00.214535848+00:00 stderr F + sleep 1 2025-10-13T00:23:01.218553025+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:01.224980464+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,12sec,0)' ']' 2025-10-13T00:23:01.225106447+00:00 stderr F + sleep 1 2025-10-13T00:23:02.230119152+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:02.234925626+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,10sec,0)' ']' 2025-10-13T00:23:02.234925626+00:00 stderr F + sleep 1 2025-10-13T00:23:03.239868368+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:03.248694774+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,9.977ms,0)' ']' 2025-10-13T00:23:03.248694774+00:00 stderr F + sleep 1 2025-10-13T00:23:04.253464252+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:04.259663445+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,8.965ms,0)' ']' 2025-10-13T00:23:04.259663445+00:00 stderr F + sleep 1 2025-10-13T00:23:05.262614582+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:05.271292174+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,7.954ms,0)' ']' 2025-10-13T00:23:05.271292174+00:00 stderr F + sleep 1 2025-10-13T00:23:06.277082131+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:06.283493889+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,6.942ms,0)' ']' 2025-10-13T00:23:06.283493889+00:00 stderr F + sleep 1 2025-10-13T00:23:07.288710429+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:07.297860674+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,5.927ms,0)' ']' 2025-10-13T00:23:07.297860674+00:00 stderr F + sleep 1 2025-10-13T00:23:08.301836250+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:08.311674934+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,4.914ms,0)' ']' 2025-10-13T00:23:08.311674934+00:00 stderr F + sleep 1 2025-10-13T00:23:09.315603838+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:09.325713270+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,3.901ms,0)' ']' 2025-10-13T00:23:09.325713270+00:00 stderr F + sleep 1 2025-10-13T00:23:10.330123567+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 
2025-10-13T00:23:10.336201967+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,2.889ms,0)' ']' 2025-10-13T00:23:10.336201967+00:00 stderr F + sleep 1 2025-10-13T00:23:11.340662066+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:11.346343044+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,1.878ms,0)' ']' 2025-10-13T00:23:11.346343044+00:00 stderr F + sleep 1 2025-10-13T00:23:12.350587448+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:12.359721182+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,866ms,0)' ']' 2025-10-13T00:23:12.359721182+00:00 stderr F + sleep 1 2025-10-13T00:23:13.364112989+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:13.368896612+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,,0)' ']' 2025-10-13T00:23:13.368967594+00:00 stderr F + sleep 1 2025-10-13T00:23:14.372414704+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:14.382121355+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,,0)' ']' 2025-10-13T00:23:14.382210707+00:00 stderr F + sleep 1 2025-10-13T00:23:15.387374767+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:15.396400318+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,,0)' ']' 2025-10-13T00:23:15.396496581+00:00 stderr F + sleep 1 2025-10-13T00:23:16.401106894+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:16.409944431+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:44720 timer:(timewait,,0)' ']' 2025-10-13T00:23:16.409944431+00:00 stderr F + sleep 1 2025-10-13T00:23:17.414822961+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:23:17.424072909+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:23:17.425965191+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2025-10-13T00:23:17.426035703+00:00 stderr F + echo 'Copying system trust bundle' 2025-10-13T00:23:17.426049244+00:00 stdout F Copying system trust bundle 2025-10-13T00:23:17.426061724+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2025-10-13T00:23:17.433302916+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2025-10-13T00:23:17.434107928+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt 
--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true 
--feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-10-13T00:23:17.524908467+00:00 stderr F W1013 00:23:17.524670 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-10-13T00:23:17.524908467+00:00 stderr F W1013 00:23:17.524842 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-10-13T00:23:17.524908467+00:00 stderr F W1013 00:23:17.524868 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-10-13T00:23:17.524979549+00:00 stderr F W1013 00:23:17.524940 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-10-13T00:23:17.524979549+00:00 stderr F W1013 00:23:17.524963 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-10-13T00:23:17.525020981+00:00 stderr F W1013 00:23:17.524995 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-10-13T00:23:17.525037241+00:00 stderr F W1013 00:23:17.525027 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-10-13T00:23:17.525090202+00:00 stderr F W1013 00:23:17.525063 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-10-13T00:23:17.525139424+00:00 stderr F W1013 00:23:17.525111 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-10-13T00:23:17.525139424+00:00 stderr F W1013 00:23:17.525135 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-10-13T00:23:17.525188525+00:00 stderr F W1013 00:23:17.525162 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-10-13T00:23:17.525204276+00:00 stderr F W1013 00:23:17.525186 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-10-13T00:23:17.525223266+00:00 stderr F W1013 00:23:17.525212 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-10-13T00:23:17.525271897+00:00 stderr F W1013 00:23:17.525236 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-10-13T00:23:17.525271897+00:00 stderr F W1013 00:23:17.525266 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-10-13T00:23:17.525350860+00:00 stderr F W1013 00:23:17.525289 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-10-13T00:23:17.525350860+00:00 stderr F W1013 00:23:17.525318 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 
2025-10-13T00:23:17.525373140+00:00 stderr F W1013 00:23:17.525360 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-10-13T00:23:17.525480653+00:00 stderr F W1013 00:23:17.525424 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-10-13T00:23:17.525480653+00:00 stderr F W1013 00:23:17.525471 1 feature_gate.go:227] unrecognized feature gate: Example 2025-10-13T00:23:17.525523334+00:00 stderr F W1013 00:23:17.525496 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-10-13T00:23:17.525539255+00:00 stderr F W1013 00:23:17.525521 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-10-13T00:23:17.525554895+00:00 stderr F W1013 00:23:17.525546 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-10-13T00:23:17.525599997+00:00 stderr F W1013 00:23:17.525569 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-10-13T00:23:17.525615787+00:00 stderr F W1013 00:23:17.525597 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-10-13T00:23:17.525631307+00:00 stderr F W1013 00:23:17.525622 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-10-13T00:23:17.525673439+00:00 stderr F W1013 00:23:17.525645 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-10-13T00:23:17.525713650+00:00 stderr F W1013 00:23:17.525688 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-10-13T00:23:17.525733240+00:00 stderr F W1013 00:23:17.525722 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-10-13T00:23:17.525767451+00:00 stderr F W1013 00:23:17.525750 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-10-13T00:23:17.525783222+00:00 stderr F W1013 00:23:17.525774 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-10-13T00:23:17.525824273+00:00 stderr F W1013 00:23:17.525798 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-10-13T00:23:17.525840103+00:00 stderr F W1013 00:23:17.525824 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-10-13T00:23:17.525858904+00:00 stderr F W1013 00:23:17.525852 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-10-13T00:23:17.525910285+00:00 stderr F W1013 00:23:17.525880 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-10-13T00:23:17.525910285+00:00 stderr F W1013 00:23:17.525906 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-10-13T00:23:17.525959577+00:00 stderr F W1013 00:23:17.525929 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-10-13T00:23:17.525959577+00:00 stderr F W1013 00:23:17.525956 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-10-13T00:23:17.526008288+00:00 stderr F W1013 00:23:17.525979 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-10-13T00:23:17.526069190+00:00 stderr F W1013 00:23:17.526024 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-10-13T00:23:17.526069190+00:00 stderr F W1013 00:23:17.526053 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-10-13T00:23:17.526085860+00:00 stderr F W1013 00:23:17.526077 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-10-13T00:23:17.526135752+00:00 stderr F W1013 00:23:17.526106 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-10-13T00:23:17.526135752+00:00 stderr F W1013 00:23:17.526130 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-10-13T00:23:17.526170383+00:00 stderr F W1013 00:23:17.526154 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-10-13T00:23:17.526189473+00:00 stderr F W1013 00:23:17.526179 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-10-13T00:23:17.526254155+00:00 stderr F W1013 00:23:17.526220 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-10-13T00:23:17.526254155+00:00 stderr F W1013 00:23:17.526248 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-10-13T00:23:17.526279866+00:00 stderr F W1013 00:23:17.526271 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-10-13T00:23:17.526320767+00:00 stderr F W1013 00:23:17.526295 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-10-13T00:23:17.526365798+00:00 stderr F W1013 00:23:17.526341 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-10-13T00:23:17.526498042+00:00 stderr F W1013 00:23:17.526450 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-10-13T00:23:17.526498042+00:00 stderr F W1013 00:23:17.526485 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-10-13T00:23:17.526584294+00:00 stderr F W1013 00:23:17.526545 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-10-13T00:23:17.526584294+00:00 stderr F W1013 00:23:17.526577 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-10-13T00:23:17.526628125+00:00 stderr F W1013 00:23:17.526601 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-10-13T00:23:17.526643886+00:00 stderr F W1013 00:23:17.526628 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-10-13T00:23:17.526684497+00:00 stderr F W1013 00:23:17.526658 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-10-13T00:23:17.526738158+00:00 stderr F W1013 00:23:17.526714 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526829 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526848 1 flags.go:64] FLAG: 
--allow-metric-labels="[]" 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526853 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526857 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526860 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526865 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:23:17.526874212+00:00 stderr F I1013 00:23:17.526869 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-10-13T00:23:17.526896983+00:00 stderr F I1013 00:23:17.526872 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-10-13T00:23:17.526896983+00:00 stderr F I1013 00:23:17.526876 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-10-13T00:23:17.526896983+00:00 stderr F I1013 00:23:17.526879 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-10-13T00:23:17.526896983+00:00 stderr F I1013 00:23:17.526886 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:23:17.526896983+00:00 stderr F I1013 00:23:17.526890 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-10-13T00:23:17.526896983+00:00 stderr F I1013 00:23:17.526894 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-10-13T00:23:17.526916483+00:00 stderr F I1013 00:23:17.526896 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:23:17.526916483+00:00 stderr F I1013 00:23:17.526901 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:23:17.526916483+00:00 stderr F I1013 00:23:17.526904 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-10-13T00:23:17.526916483+00:00 stderr F I1013 00:23:17.526907 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:23:17.526916483+00:00 stderr F I1013 00:23:17.526910 1 flags.go:64] FLAG: --cloud-config="" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526913 1 flags.go:64] FLAG: --cloud-provider="" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526917 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526923 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526927 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526930 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526933 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526936 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526939 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-10-13T00:23:17.526947674+00:00 stderr F I1013 00:23:17.526942 1 flags.go:64] FLAG: 
--cluster-signing-kube-apiserver-client-key-file="" 2025-10-13T00:23:17.526968905+00:00 stderr F I1013 00:23:17.526945 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-10-13T00:23:17.526968905+00:00 stderr F I1013 00:23:17.526949 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-10-13T00:23:17.526968905+00:00 stderr F I1013 00:23:17.526951 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-10-13T00:23:17.526968905+00:00 stderr F I1013 00:23:17.526954 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-10-13T00:23:17.526968905+00:00 stderr F I1013 00:23:17.526957 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-10-13T00:23:17.526968905+00:00 stderr F I1013 00:23:17.526960 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526962 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526970 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526973 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526976 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526978 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526982 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-10-13T00:23:17.526988145+00:00 stderr F I1013 00:23:17.526985 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-10-13T00:23:17.527007606+00:00 stderr F I1013 00:23:17.526988 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-10-13T00:23:17.527007606+00:00 stderr F I1013 00:23:17.526991 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-10-13T00:23:17.527007606+00:00 stderr F I1013 00:23:17.527004 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-10-13T00:23:17.527024746+00:00 stderr F I1013 00:23:17.527006 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-10-13T00:23:17.527024746+00:00 stderr F I1013 00:23:17.527010 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-10-13T00:23:17.527024746+00:00 stderr F I1013 00:23:17.527012 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-10-13T00:23:17.527024746+00:00 stderr F I1013 00:23:17.527015 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-10-13T00:23:17.527024746+00:00 stderr F I1013 00:23:17.527018 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-10-13T00:23:17.527024746+00:00 stderr F I1013 00:23:17.527021 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527024 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527027 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527030 1 flags.go:64] FLAG: --contention-profiling="false" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527033 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527035 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527040 1 
flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527042 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:23:17.527051187+00:00 stderr F I1013 00:23:17.527047 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-10-13T00:23:17.527071528+00:00 stderr F I1013 00:23:17.527050 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-10-13T00:23:17.527071528+00:00 stderr F I1013 00:23:17.527053 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-10-13T00:23:17.527071528+00:00 stderr F I1013 00:23:17.527056 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-10-13T00:23:17.527071528+00:00 stderr F I1013 00:23:17.527059 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-10-13T00:23:17.527071528+00:00 stderr F I1013 00:23:17.527061 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-10-13T00:23:17.527071528+00:00 stderr F I1013 00:23:17.527064 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-10-13T00:23:17.527090448+00:00 stderr F I1013 00:23:17.527067 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-10-13T00:23:17.527090448+00:00 stderr F I1013 00:23:17.527086 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-10-13T00:23:17.527106809+00:00 stderr F I1013 00:23:17.527090 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:23:17.527106809+00:00 stderr F I1013 00:23:17.527093 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-10-13T00:23:17.527106809+00:00 stderr F I1013 00:23:17.527096 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-10-13T00:23:17.527106809+00:00 stderr F I1013 00:23:17.527099 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-10-13T00:23:17.527106809+00:00 stderr F I1013 00:23:17.527102 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-10-13T00:23:17.527125489+00:00 stderr F I1013 00:23:17.527105 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-10-13T00:23:17.527125489+00:00 stderr F I1013 00:23:17.527108 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-10-13T00:23:17.527125489+00:00 stderr F I1013 00:23:17.527112 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-10-13T00:23:17.527125489+00:00 stderr F I1013 00:23:17.527116 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-10-13T00:23:17.527125489+00:00 stderr F I1013 00:23:17.527119 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527122 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527126 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527130 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 
00:23:17.527134 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527137 1 flags.go:64] FLAG: --leader-elect="true" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527140 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527143 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-10-13T00:23:17.527150760+00:00 stderr F I1013 00:23:17.527146 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-10-13T00:23:17.527171350+00:00 stderr F I1013 00:23:17.527149 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-10-13T00:23:17.527171350+00:00 stderr F I1013 00:23:17.527152 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-10-13T00:23:17.527171350+00:00 stderr F I1013 00:23:17.527155 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-10-13T00:23:17.527171350+00:00 stderr F I1013 00:23:17.527158 1 flags.go:64] FLAG: --leader-migration-config="" 2025-10-13T00:23:17.527171350+00:00 stderr F I1013 00:23:17.527160 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-10-13T00:23:17.527171350+00:00 stderr F I1013 00:23:17.527163 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:23:17.527190411+00:00 stderr F I1013 00:23:17.527166 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-10-13T00:23:17.527190411+00:00 stderr F I1013 00:23:17.527172 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:23:17.527190411+00:00 stderr F I1013 00:23:17.527175 1 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:23:17.527190411+00:00 stderr F I1013 00:23:17.527179 1 flags.go:64] FLAG: --master="" 2025-10-13T00:23:17.527190411+00:00 stderr F I1013 00:23:17.527182 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-10-13T00:23:17.527190411+00:00 stderr F I1013 00:23:17.527184 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-10-13T00:23:17.527209172+00:00 stderr F I1013 00:23:17.527188 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-10-13T00:23:17.527209172+00:00 stderr F I1013 00:23:17.527191 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-10-13T00:23:17.527209172+00:00 stderr F I1013 00:23:17.527194 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-10-13T00:23:17.527209172+00:00 stderr F I1013 00:23:17.527197 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-10-13T00:23:17.527209172+00:00 stderr F I1013 00:23:17.527200 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-10-13T00:23:17.527209172+00:00 stderr F I1013 00:23:17.527203 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-10-13T00:23:17.527228292+00:00 stderr F I1013 00:23:17.527213 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-10-13T00:23:17.527228292+00:00 stderr F I1013 00:23:17.527216 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-10-13T00:23:17.527228292+00:00 stderr F I1013 00:23:17.527219 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-10-13T00:23:17.527228292+00:00 stderr F I1013 00:23:17.527222 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527225 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527229 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 
00:23:17.527232 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527235 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527238 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527241 1 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527244 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527247 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-10-13T00:23:17.527253723+00:00 stderr F I1013 00:23:17.527250 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-10-13T00:23:17.527274733+00:00 stderr F I1013 00:23:17.527253 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-10-13T00:23:17.527274733+00:00 stderr F I1013 00:23:17.527257 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-10-13T00:23:17.527274733+00:00 stderr F I1013 00:23:17.527260 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-10-13T00:23:17.527274733+00:00 stderr F I1013 00:23:17.527263 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-10-13T00:23:17.527274733+00:00 stderr F I1013 00:23:17.527266 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:23:17.527274733+00:00 stderr F I1013 00:23:17.527270 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:23:17.527293814+00:00 stderr F I1013 00:23:17.527274 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-10-13T00:23:17.527293814+00:00 stderr F I1013 00:23:17.527278 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-10-13T00:23:17.527293814+00:00 stderr F I1013 00:23:17.527282 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-10-13T00:23:17.527293814+00:00 stderr F I1013 00:23:17.527286 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-10-13T00:23:17.527293814+00:00 stderr F I1013 00:23:17.527290 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-10-13T00:23:17.527312634+00:00 stderr F I1013 00:23:17.527293 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-10-13T00:23:17.527312634+00:00 stderr F I1013 00:23:17.527297 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-10-13T00:23:17.527312634+00:00 stderr F I1013 00:23:17.527300 1 flags.go:64] FLAG: --secure-port="10257" 2025-10-13T00:23:17.527312634+00:00 stderr F I1013 00:23:17.527303 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-10-13T00:23:17.527312634+00:00 stderr F I1013 00:23:17.527307 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527310 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527313 1 flags.go:64] FLAG: 
--terminated-pod-gc-threshold="12500" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527316 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527368 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527377 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527381 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527385 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527390 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527393 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527396 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527399 1 flags.go:64] FLAG: --v="2" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527403 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527407 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527411 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-10-13T00:23:17.527479379+00:00 stderr F I1013 00:23:17.527414 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-10-13T00:23:17.529714031+00:00 stderr F I1013 00:23:17.529655 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:23:18.073343434+00:00 stderr F I1013 00:23:18.073222 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:23:18.073343434+00:00 stderr F I1013 00:23:18.073293 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:23:18.078711974+00:00 stderr F I1013 00:23:18.078622 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-10-13T00:23:18.078711974+00:00 stderr F I1013 00:23:18.078667 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-10-13T00:23:18.081427829+00:00 stderr F I1013 00:23:18.081369 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:23:18.081555553+00:00 stderr F I1013 00:23:18.081517 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:23:18.082090898+00:00 stderr F I1013 00:23:18.082009 1 dynamic_serving_content.go:132] "Starting controller" 
name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:23:18.082347395+00:00 stderr F I1013 00:23:18.082263 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:23:18.082211021 +0000 UTC))" 2025-10-13T00:23:18.082426057+00:00 stderr F I1013 00:23:18.082399 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:23:18.082359805 +0000 UTC))" 2025-10-13T00:23:18.082478089+00:00 stderr F I1013 00:23:18.082440 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:23:18.082415467 +0000 UTC))" 2025-10-13T00:23:18.082494489+00:00 stderr F I1013 00:23:18.082481 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:23:18.082456918 +0000 UTC))" 2025-10-13T00:23:18.082540540+00:00 stderr F I1013 00:23:18.082512 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:23:18.082490219 +0000 UTC))" 2025-10-13T00:23:18.082583382+00:00 stderr F I1013 00:23:18.082552 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:18.0825282 +0000 UTC))" 2025-10-13T00:23:18.082599262+00:00 stderr F I1013 00:23:18.082588 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:18.082565711 +0000 UTC))" 2025-10-13T00:23:18.082648833+00:00 stderr F I1013 00:23:18.082621 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:18.082596192 +0000 UTC))" 2025-10-13T00:23:18.082671754+00:00 stderr F I1013 00:23:18.082659 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:23:18.082635353 +0000 UTC))" 2025-10-13T00:23:18.082747006+00:00 stderr F I1013 00:23:18.082694 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:18.082671524 +0000 UTC))" 2025-10-13T00:23:18.082765657+00:00 stderr F I1013 00:23:18.082755 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:23:18.082732036 +0000 UTC))" 2025-10-13T00:23:18.083536618+00:00 stderr F I1013 00:23:18.083472 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:23:18.083422025 +0000 UTC))" 2025-10-13T00:23:18.084244678+00:00 stderr F I1013 00:23:18.084186 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314998\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314997\" (2025-10-12 23:23:17 +0000 UTC to 2026-10-12 23:23:17 +0000 UTC (now=2025-10-13 00:23:18.084146655 +0000 UTC))" 
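[NOTE] The "Loaded client CA" and "Loaded serving cert" entries above enumerate the signer certificates the kube-controller-manager accepts for client authentication and the certificate it serves on the --secure-port (10257), together with their validity windows. A minimal sketch for cross-checking those certificates on the node — an assumption-laden illustration, not part of this job's output; it assumes shell access to the host and that openssl is available, and the file paths are copied verbatim from the --tls-cert-file and --client-ca-file values logged above:

  # Subject, issuer and validity window of the serving certificate
  # reported in the "Loaded serving cert" entry
  openssl x509 -noout -subject -issuer -dates \
    -in /etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt

  # Count the CA certificates in the client-ca bundle; the union of this
  # bundle and the aggregator-client-ca bundle is what produces the
  # numbered "Loaded client CA" index entries above
  grep -c 'BEGIN CERTIFICATE' \
    /etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt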
2025-10-13T00:23:18.084262978+00:00 stderr F I1013 00:23:18.084247 1 secure_serving.go:213] Serving securely on [::]:10257 2025-10-13T00:23:18.084409492+00:00 stderr F I1013 00:23:18.084363 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:23:18.084890116+00:00 stderr F I1013 00:23:18.084842 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... ././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000016253115073043233033066 0ustar zuulzuul2025-10-13T00:08:25.763593977+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2025-10-13T00:08:25.768461383+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-10-13T00:08:25.782248562+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:08:25.783596574+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2025-10-13T00:08:25.783822449+00:00 stdout F Copying system trust bundle 2025-10-13T00:08:25.783831360+00:00 stderr F + echo 'Copying system trust bundle' 2025-10-13T00:08:25.783831360+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2025-10-13T00:08:25.793435339+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2025-10-13T00:08:25.794210937+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true 
--feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050603 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050803 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050830 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050856 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050892 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050920 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050949 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-10-13T00:08:27.051015998+00:00 stderr F W1013 00:08:27.050980 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-10-13T00:08:27.051089381+00:00 stderr F W1013 00:08:27.051027 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-10-13T00:08:27.051089381+00:00 stderr F W1013 00:08:27.051052 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-10-13T00:08:27.051089381+00:00 stderr F W1013 00:08:27.051078 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-10-13T00:08:27.051132932+00:00 stderr F W1013 00:08:27.051102 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-10-13T00:08:27.051142992+00:00 stderr F W1013 00:08:27.051133 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-10-13T00:08:27.051166313+00:00 stderr F W1013 00:08:27.051161 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-10-13T00:08:27.051248335+00:00 stderr F W1013 00:08:27.051212 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-10-13T00:08:27.051265326+00:00 stderr F W1013 00:08:27.051255 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-10-13T00:08:27.051328848+00:00 stderr F W1013 00:08:27.051291 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-10-13T00:08:27.051328848+00:00 stderr F W1013 00:08:27.051325 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-10-13T00:08:27.051428921+00:00 stderr F W1013 00:08:27.051393 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-10-13T00:08:27.051483543+00:00 stderr F W1013 00:08:27.051463 1 feature_gate.go:227] unrecognized feature gate: Example 2025-10-13T00:08:27.051519224+00:00 stderr F W1013 00:08:27.051499 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-10-13T00:08:27.051551525+00:00 stderr F W1013 00:08:27.051533 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-10-13T00:08:27.051585616+00:00 stderr F W1013 00:08:27.051567 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-10-13T00:08:27.051627747+00:00 stderr F W1013 00:08:27.051607 1 
feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-10-13T00:08:27.051664458+00:00 stderr F W1013 00:08:27.051646 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-10-13T00:08:27.051704999+00:00 stderr F W1013 00:08:27.051688 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-10-13T00:08:27.051997128+00:00 stderr F W1013 00:08:27.051962 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-10-13T00:08:27.052008559+00:00 stderr F W1013 00:08:27.051994 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-10-13T00:08:27.052050430+00:00 stderr F W1013 00:08:27.052030 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-10-13T00:08:27.052088561+00:00 stderr F W1013 00:08:27.052072 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-10-13T00:08:27.052127842+00:00 stderr F W1013 00:08:27.052110 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-10-13T00:08:27.053753832+00:00 stderr F W1013 00:08:27.053709 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-10-13T00:08:27.053775712+00:00 stderr F W1013 00:08:27.053758 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-10-13T00:08:27.053804293+00:00 stderr F W1013 00:08:27.053784 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-10-13T00:08:27.053833144+00:00 stderr F W1013 00:08:27.053815 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-10-13T00:08:27.053861135+00:00 stderr F W1013 00:08:27.053844 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-10-13T00:08:27.053888846+00:00 stderr F W1013 00:08:27.053872 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-10-13T00:08:27.053939467+00:00 stderr F W1013 00:08:27.053910 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-10-13T00:08:27.053948538+00:00 stderr F W1013 00:08:27.053938 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-10-13T00:08:27.054010899+00:00 stderr F W1013 00:08:27.053984 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-10-13T00:08:27.054022190+00:00 stderr F W1013 00:08:27.054016 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-10-13T00:08:27.054067081+00:00 stderr F W1013 00:08:27.054050 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-10-13T00:08:27.054096892+00:00 stderr F W1013 00:08:27.054079 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-10-13T00:08:27.054121113+00:00 stderr F W1013 00:08:27.054105 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-10-13T00:08:27.054148624+00:00 stderr F W1013 00:08:27.054133 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-10-13T00:08:27.054175265+00:00 stderr F W1013 00:08:27.054159 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-10-13T00:08:27.054236746+00:00 stderr F W1013 00:08:27.054220 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-10-13T00:08:27.054270307+00:00 stderr F W1013 00:08:27.054252 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-10-13T00:08:27.054294368+00:00 stderr F W1013 00:08:27.054279 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 
2025-10-13T00:08:27.054326199+00:00 stderr F W1013 00:08:27.054310 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-10-13T00:08:27.054353760+00:00 stderr F W1013 00:08:27.054337 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-10-13T00:08:27.054483154+00:00 stderr F W1013 00:08:27.054454 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-10-13T00:08:27.054493244+00:00 stderr F W1013 00:08:27.054483 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-10-13T00:08:27.054551786+00:00 stderr F W1013 00:08:27.054534 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-10-13T00:08:27.054583957+00:00 stderr F W1013 00:08:27.054565 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-10-13T00:08:27.054614258+00:00 stderr F W1013 00:08:27.054594 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-10-13T00:08:27.054646039+00:00 stderr F W1013 00:08:27.054627 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-10-13T00:08:27.054712621+00:00 stderr F W1013 00:08:27.054661 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-10-13T00:08:27.054751102+00:00 stderr F W1013 00:08:27.054732 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-10-13T00:08:27.054920727+00:00 stderr F I1013 00:08:27.054881 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-10-13T00:08:27.054920727+00:00 stderr F I1013 00:08:27.054898 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-10-13T00:08:27.054920727+00:00 stderr F I1013 00:08:27.054905 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:08:27.054920727+00:00 stderr F I1013 00:08:27.054909 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-10-13T00:08:27.054920727+00:00 stderr F I1013 00:08:27.054912 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-10-13T00:08:27.054935558+00:00 stderr F I1013 00:08:27.054917 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:08:27.054935558+00:00 stderr F I1013 00:08:27.054922 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-10-13T00:08:27.054935558+00:00 stderr F I1013 00:08:27.054925 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-10-13T00:08:27.054935558+00:00 stderr F I1013 00:08:27.054929 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-10-13T00:08:27.054945938+00:00 stderr F I1013 00:08:27.054931 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-10-13T00:08:27.054945938+00:00 stderr F I1013 00:08:27.054939 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:08:27.054945938+00:00 stderr F I1013 00:08:27.054943 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-10-13T00:08:27.054962598+00:00 stderr F I1013 00:08:27.054946 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-10-13T00:08:27.054962598+00:00 stderr F I1013 00:08:27.054949 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:08:27.054962598+00:00 stderr F I1013 00:08:27.054955 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:08:27.054962598+00:00 stderr F I1013 00:08:27.054958 1 flags.go:64] FLAG: 
--cidr-allocator-type="RangeAllocator" 2025-10-13T00:08:27.054972559+00:00 stderr F I1013 00:08:27.054961 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:08:27.054972559+00:00 stderr F I1013 00:08:27.054965 1 flags.go:64] FLAG: --cloud-config="" 2025-10-13T00:08:27.054972559+00:00 stderr F I1013 00:08:27.054968 1 flags.go:64] FLAG: --cloud-provider="" 2025-10-13T00:08:27.054982039+00:00 stderr F I1013 00:08:27.054971 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-10-13T00:08:27.054982039+00:00 stderr F I1013 00:08:27.054979 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-10-13T00:08:27.054991379+00:00 stderr F I1013 00:08:27.054982 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-10-13T00:08:27.054991379+00:00 stderr F I1013 00:08:27.054985 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-10-13T00:08:27.055000720+00:00 stderr F I1013 00:08:27.054989 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-10-13T00:08:27.055000720+00:00 stderr F I1013 00:08:27.054993 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-10-13T00:08:27.055000720+00:00 stderr F I1013 00:08:27.054997 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-10-13T00:08:27.055010970+00:00 stderr F I1013 00:08:27.055000 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-10-13T00:08:27.055010970+00:00 stderr F I1013 00:08:27.055003 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-10-13T00:08:27.055010970+00:00 stderr F I1013 00:08:27.055006 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-10-13T00:08:27.055020880+00:00 stderr F I1013 00:08:27.055009 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-10-13T00:08:27.055020880+00:00 stderr F I1013 00:08:27.055012 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-10-13T00:08:27.055020880+00:00 stderr F I1013 00:08:27.055015 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-10-13T00:08:27.055020880+00:00 stderr F I1013 00:08:27.055018 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-10-13T00:08:27.055032481+00:00 stderr F I1013 00:08:27.055021 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-10-13T00:08:27.055032481+00:00 stderr F I1013 00:08:27.055025 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-10-13T00:08:27.055032481+00:00 stderr F I1013 00:08:27.055028 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-10-13T00:08:27.055043151+00:00 stderr F I1013 00:08:27.055032 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-10-13T00:08:27.055043151+00:00 stderr F I1013 00:08:27.055035 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-10-13T00:08:27.055043151+00:00 stderr F I1013 00:08:27.055038 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-10-13T00:08:27.055058781+00:00 stderr F I1013 00:08:27.055041 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-10-13T00:08:27.055058781+00:00 stderr F I1013 00:08:27.055044 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-10-13T00:08:27.055058781+00:00 stderr F I1013 00:08:27.055047 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-10-13T00:08:27.055058781+00:00 stderr F 
I1013 00:08:27.055050 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-10-13T00:08:27.055058781+00:00 stderr F I1013 00:08:27.055052 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-10-13T00:08:27.055058781+00:00 stderr F I1013 00:08:27.055055 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-10-13T00:08:27.055071792+00:00 stderr F I1013 00:08:27.055058 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-10-13T00:08:27.055071792+00:00 stderr F I1013 00:08:27.055061 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-10-13T00:08:27.055071792+00:00 stderr F I1013 00:08:27.055064 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-10-13T00:08:27.055071792+00:00 stderr F I1013 00:08:27.055066 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-10-13T00:08:27.055083122+00:00 stderr F I1013 00:08:27.055069 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-10-13T00:08:27.055083122+00:00 stderr F I1013 00:08:27.055073 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-10-13T00:08:27.055083122+00:00 stderr F I1013 00:08:27.055076 1 flags.go:64] FLAG: --contention-profiling="false" 2025-10-13T00:08:27.055083122+00:00 stderr F I1013 00:08:27.055079 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-10-13T00:08:27.055093512+00:00 stderr F I1013 00:08:27.055083 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-10-13T00:08:27.055093512+00:00 stderr F I1013 00:08:27.055088 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-10-13T00:08:27.055103333+00:00 stderr F I1013 00:08:27.055091 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:08:27.055103333+00:00 stderr F I1013 00:08:27.055096 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-10-13T00:08:27.055103333+00:00 stderr F I1013 00:08:27.055099 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-10-13T00:08:27.055113833+00:00 stderr F I1013 00:08:27.055101 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-10-13T00:08:27.055113833+00:00 stderr F I1013 00:08:27.055105 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-10-13T00:08:27.055113833+00:00 stderr F I1013 00:08:27.055107 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-10-13T00:08:27.055113833+00:00 stderr F I1013 00:08:27.055110 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-10-13T00:08:27.055124183+00:00 stderr F I1013 00:08:27.055113 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-10-13T00:08:27.055156614+00:00 stderr F I1013 00:08:27.055116 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-10-13T00:08:27.055156614+00:00 stderr F I1013 00:08:27.055141 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-10-13T00:08:27.055156614+00:00 stderr F I1013 00:08:27.055146 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:08:27.055156614+00:00 stderr F I1013 00:08:27.055149 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 
2025-10-13T00:08:27.055156614+00:00 stderr F I1013 00:08:27.055153 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-10-13T00:08:27.055176245+00:00 stderr F I1013 00:08:27.055156 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-10-13T00:08:27.055176245+00:00 stderr F I1013 00:08:27.055159 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-10-13T00:08:27.055176245+00:00 stderr F I1013 00:08:27.055163 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-10-13T00:08:27.055176245+00:00 stderr F I1013 00:08:27.055166 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-10-13T00:08:27.055176245+00:00 stderr F I1013 00:08:27.055172 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-10-13T00:08:27.055209436+00:00 stderr F I1013 00:08:27.055175 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-10-13T00:08:27.055209436+00:00 stderr F I1013 00:08:27.055199 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-10-13T00:08:27.055209436+00:00 stderr F I1013 00:08:27.055205 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:08:27.055220096+00:00 stderr F I1013 00:08:27.055210 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-10-13T00:08:27.055229557+00:00 stderr F I1013 00:08:27.055216 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-10-13T00:08:27.055229557+00:00 stderr F I1013 00:08:27.055222 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-10-13T00:08:27.055229557+00:00 stderr F I1013 00:08:27.055225 1 flags.go:64] FLAG: --leader-elect="true" 2025-10-13T00:08:27.055241177+00:00 stderr F I1013 00:08:27.055229 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-10-13T00:08:27.055241177+00:00 stderr F I1013 00:08:27.055233 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-10-13T00:08:27.055241177+00:00 stderr F I1013 00:08:27.055237 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-10-13T00:08:27.055252297+00:00 stderr F I1013 00:08:27.055241 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-10-13T00:08:27.055252297+00:00 stderr F I1013 00:08:27.055245 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-10-13T00:08:27.055252297+00:00 stderr F I1013 00:08:27.055248 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-10-13T00:08:27.055262658+00:00 stderr F I1013 00:08:27.055252 1 flags.go:64] FLAG: --leader-migration-config="" 2025-10-13T00:08:27.055262658+00:00 stderr F I1013 00:08:27.055256 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-10-13T00:08:27.055272768+00:00 stderr F I1013 00:08:27.055261 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:08:27.055272768+00:00 stderr F I1013 00:08:27.055265 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-10-13T00:08:27.055282408+00:00 stderr F I1013 00:08:27.055273 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:08:27.055282408+00:00 stderr F I1013 00:08:27.055278 1 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:08:27.055292888+00:00 stderr F I1013 00:08:27.055282 1 flags.go:64] FLAG: --master="" 2025-10-13T00:08:27.055292888+00:00 stderr F I1013 00:08:27.055285 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-10-13T00:08:27.055292888+00:00 stderr F I1013 00:08:27.055289 1 
flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-10-13T00:08:27.055302959+00:00 stderr F I1013 00:08:27.055293 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-10-13T00:08:27.055302959+00:00 stderr F I1013 00:08:27.055297 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-10-13T00:08:27.055312619+00:00 stderr F I1013 00:08:27.055301 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-10-13T00:08:27.055312619+00:00 stderr F I1013 00:08:27.055305 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-10-13T00:08:27.055312619+00:00 stderr F I1013 00:08:27.055308 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-10-13T00:08:27.055328340+00:00 stderr F I1013 00:08:27.055312 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-10-13T00:08:27.055328340+00:00 stderr F I1013 00:08:27.055317 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-10-13T00:08:27.055328340+00:00 stderr F I1013 00:08:27.055320 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-10-13T00:08:27.055328340+00:00 stderr F I1013 00:08:27.055324 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-10-13T00:08:27.055339560+00:00 stderr F I1013 00:08:27.055328 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-10-13T00:08:27.055339560+00:00 stderr F I1013 00:08:27.055333 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-10-13T00:08:27.055349120+00:00 stderr F I1013 00:08:27.055336 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-10-13T00:08:27.055349120+00:00 stderr F I1013 00:08:27.055343 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:08:27.055359431+00:00 stderr F I1013 00:08:27.055348 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-10-13T00:08:27.055359431+00:00 stderr F I1013 00:08:27.055352 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:08:27.055359431+00:00 stderr F I1013 00:08:27.055356 1 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:08:27.055369481+00:00 stderr F I1013 00:08:27.055360 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-10-13T00:08:27.055369481+00:00 stderr F I1013 00:08:27.055364 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-10-13T00:08:27.055378871+00:00 stderr F I1013 00:08:27.055367 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-10-13T00:08:27.055378871+00:00 stderr F I1013 00:08:27.055371 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-10-13T00:08:27.055388351+00:00 stderr F I1013 00:08:27.055376 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-10-13T00:08:27.055388351+00:00 stderr F I1013 00:08:27.055382 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-10-13T00:08:27.055397762+00:00 stderr F I1013 00:08:27.055386 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-10-13T00:08:27.055397762+00:00 stderr F I1013 00:08:27.055390 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:08:27.055407492+00:00 stderr F I1013 00:08:27.055395 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:08:27.055407492+00:00 stderr F I1013 00:08:27.055401 1 flags.go:64] FLAG: 
--requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-10-13T00:08:27.055417852+00:00 stderr F I1013 00:08:27.055407 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-10-13T00:08:27.055417852+00:00 stderr F I1013 00:08:27.055412 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-10-13T00:08:27.055426963+00:00 stderr F I1013 00:08:27.055418 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-10-13T00:08:27.055426963+00:00 stderr F I1013 00:08:27.055422 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-10-13T00:08:27.055436593+00:00 stderr F I1013 00:08:27.055427 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-10-13T00:08:27.055436593+00:00 stderr F I1013 00:08:27.055431 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-10-13T00:08:27.055451273+00:00 stderr F I1013 00:08:27.055436 1 flags.go:64] FLAG: --secure-port="10257" 2025-10-13T00:08:27.055451273+00:00 stderr F I1013 00:08:27.055441 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-10-13T00:08:27.055451273+00:00 stderr F I1013 00:08:27.055446 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-10-13T00:08:27.055461214+00:00 stderr F I1013 00:08:27.055450 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:08:27.055461214+00:00 stderr F I1013 00:08:27.055454 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500" 2025-10-13T00:08:27.055469994+00:00 stderr F I1013 00:08:27.055458 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-10-13T00:08:27.055479034+00:00 stderr F I1013 00:08:27.055463 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:08:27.055479034+00:00 stderr F I1013 00:08:27.055474 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:08:27.055488224+00:00 stderr F I1013 00:08:27.055478 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:08:27.055497295+00:00 stderr F I1013 00:08:27.055484 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:08:27.055497295+00:00 stderr F I1013 00:08:27.055491 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-10-13T00:08:27.055506215+00:00 stderr F I1013 00:08:27.055496 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-10-13T00:08:27.055506215+00:00 stderr F I1013 00:08:27.055500 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-10-13T00:08:27.055516175+00:00 stderr F I1013 00:08:27.055504 1 flags.go:64] FLAG: --v="2" 2025-10-13T00:08:27.055516175+00:00 stderr F I1013 00:08:27.055510 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:08:27.055525956+00:00 stderr F I1013 00:08:27.055516 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:08:27.055525956+00:00 stderr F I1013 00:08:27.055522 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-10-13T00:08:27.055534606+00:00 stderr F I1013 00:08:27.055526 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-10-13T00:08:27.147704142+00:00 stderr F I1013 00:08:27.147614 1 
dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:08:27.458922798+00:00 stderr F I1013 00:08:27.458791 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:08:27.459570077+00:00 stderr F I1013 00:08:27.459474 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:08:27.486423085+00:00 stderr F I1013 00:08:27.486313 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-10-13T00:08:27.486423085+00:00 stderr F I1013 00:08:27.486364 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-10-13T00:08:27.504530716+00:00 stderr F I1013 00:08:27.504417 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:08:27.504530716+00:00 stderr F I1013 00:08:27.504435 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-10-13T00:08:27.505308920+00:00 stderr F I1013 00:08:27.505200 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:08:27.505476515+00:00 stderr F I1013 00:08:27.505415 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:08:27.505340951 +0000 UTC))" 2025-10-13T00:08:27.505535637+00:00 stderr F I1013 00:08:27.505505 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:08:27.505453114 +0000 UTC))" 2025-10-13T00:08:27.505617759+00:00 stderr F I1013 00:08:27.505546 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:08:27.505522127 +0000 UTC))" 2025-10-13T00:08:27.505617759+00:00 stderr F I1013 00:08:27.505607 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:08:27.505585268 +0000 UTC))" 2025-10-13T00:08:27.505672591+00:00 stderr F I1013 00:08:27.505638 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:08:27.505616489 +0000 UTC))" 2025-10-13T00:08:27.505685021+00:00 stderr F I1013 00:08:27.505676 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:08:27.505654631 +0000 UTC))" 2025-10-13T00:08:27.505741973+00:00 stderr F I1013 00:08:27.505706 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:08:27.505685051 +0000 UTC))" 2025-10-13T00:08:27.505797595+00:00 stderr F I1013 00:08:27.505758 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:08:27.505722253 +0000 UTC))" 2025-10-13T00:08:27.505865197+00:00 stderr F I1013 00:08:27.505832 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:08:27.505791925 +0000 UTC))" 2025-10-13T00:08:27.505924999+00:00 stderr F I1013 00:08:27.505893 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:08:27.505856267 +0000 UTC))" 2025-10-13T00:08:27.506765414+00:00 stderr F I1013 00:08:27.506711 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-10-13 00:08:27.506674962 +0000 UTC))" 2025-10-13T00:08:27.507511967+00:00 stderr F I1013 00:08:27.507453 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314107\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314107\" (2025-10-12 23:08:27 +0000 UTC to 2026-10-12 23:08:27 +0000 UTC (now=2025-10-13 00:08:27.507343862 +0000 UTC))" 2025-10-13T00:08:27.507534618+00:00 stderr F I1013 00:08:27.507517 1 secure_serving.go:213] Serving securely on [::]:10257 2025-10-13T00:08:27.507682232+00:00 stderr F I1013 00:08:27.507629 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:08:27.508160237+00:00 stderr F I1013 00:08:27.508117 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... 2025-10-13T00:08:27.539591834+00:00 stderr F E1013 00:08:27.539506 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:33.722792050+00:00 stderr F E1013 00:08:33.722641 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:39.466908626+00:00 stderr F E1013 00:08:39.466769 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.214734545+00:00 stderr F E1013 00:08:45.214548 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:50.528943566+00:00 stderr F E1013 00:08:50.528844 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:53.742622176+00:00 stderr F E1013 00:08:53.742482 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:57.526283575+00:00 stderr F E1013 00:08:57.526177 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:01.006500298+00:00 stderr F E1013 00:09:01.006388 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:05.938705043+00:00 stderr F E1013 00:09:05.938586 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:09.065282915+00:00 stderr F E1013 00:09:09.065157 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:15.602826556+00:00 stderr F E1013 00:09:15.602739 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:19.446265016+00:00 stderr F E1013 00:09:19.446136 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:22.644811281+00:00 stderr F E1013 00:09:22.644756 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:25.708210554+00:00 stderr F E1013 00:09:25.708114 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:31.612929190+00:00 stderr F E1013 00:09:31.612480 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:37.579482385+00:00 stderr F E1013 00:09:37.579295 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:40.785761805+00:00 stderr F E1013 00:09:40.785655 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:44.951117380+00:00 stderr F E1013 00:09:44.950973 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:48.964995071+00:00 stderr F E1013 00:09:48.964866 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:54.362771470+00:00 stderr F E1013 00:09:54.362672 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:59.488073140+00:00 stderr F E1013 00:09:59.487875 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:05.078972310+00:00 stderr F E1013 00:10:05.078821 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:11.503300892+00:00 stderr F E1013 00:10:11.503169 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:17.454817430+00:00 stderr F E1013 00:10:17.454699 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:21.664148241+00:00 stderr F E1013 00:10:21.664054 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:24.890077009+00:00 stderr F E1013 00:10:24.889906 1 
leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:28.320903451+00:00 stderr F E1013 00:10:28.320819 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:32.763098277+00:00 stderr F E1013 00:10:32.763026 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:38.998151889+00:00 stderr F E1013 00:10:38.997969 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:44.697685576+00:00 stderr F E1013 00:10:44.697571 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:48.590349503+00:00 stderr F E1013 00:10:48.590155 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:54.455171262+00:00 stderr F E1013 00:10:54.455083 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable 2025-10-13T00:10:59.121951957+00:00 stderr F E1013 00:10:59.121817 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:03.502405790+00:00 stderr F E1013 00:11:03.502284 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:09.426800222+00:00 stderr F E1013 00:11:09.426709 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial 
tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:14.416116700+00:00 stderr F E1013 00:11:14.415996 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:19.895572135+00:00 stderr F E1013 00:11:19.895460 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:26.470088674+00:00 stderr F E1013 00:11:26.469983 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:29.647456866+00:00 stderr F I1013 00:11:29.647374 1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-10-13T00:11:29.647456866+00:00 stderr F I1013 00:11:29.647443 1 controllermanager.go:332] Requested to terminate. Exiting.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log
2025-10-13T00:14:57.892733600+00:00 stderr F I1013 00:14:57.891656 1 main.go:45] Version:216149b14d9cb61ae90ac65d839a448fb11075bb 2025-10-13T00:14:57.892733600+00:00 stderr F I1013 00:14:57.892018 1 main.go:46] Starting with config{ :9091 crc} 2025-10-13T00:14:57.894182884+00:00 stderr F W1013 00:14:57.894131 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-10-13T00:14:57.933912514+00:00 stderr F I1013 00:14:57.933168 1 controller.go:42] Setting up event handlers 2025-10-13T00:14:57.933912514+00:00 stderr F I1013 00:14:57.933468 1 podmetrics.go:101] Serving network metrics 2025-10-13T00:14:57.933912514+00:00 stderr F I1013 00:14:57.933480 1 controller.go:101] Starting pod controller 2025-10-13T00:14:57.933912514+00:00 stderr F I1013 00:14:57.933484 1 controller.go:104] Waiting for informer caches to sync 2025-10-13T00:14:58.134960018+00:00 stderr F I1013 00:14:58.134255 1 controller.go:109] Starting workers 2025-10-13T00:14:58.134960018+00:00 stderr F I1013 00:14:58.134886 1 controller.go:114] Started workers 2025-10-13T00:14:58.134960018+00:00 stderr F I1013 00:14:58.134920 1 controller.go:192] Received pod 'csi-hostpathplugin-hvm8g' 2025-10-13T00:14:58.135031340+00:00 stderr F I1013 00:14:58.135008 1 controller.go:151] Successfully synced 'hostpath-provisioner/csi-hostpathplugin-hvm8g' 2025-10-13T00:14:58.135031340+00:00 stderr F I1013 00:14:58.135015 1 controller.go:192] Received pod 'openshift-apiserver-operator-7c88c4c865-kn67m' 2025-10-13T00:14:58.135042170+00:00 stderr F I1013 00:14:58.135029 1 controller.go:151] Successfully synced 'openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m' 2025-10-13T00:14:58.135042170+00:00 stderr F I1013 00:14:58.135035 1 controller.go:192] Received pod 'apiserver-7fc54b8dd7-d2bhp' 2025-10-13T00:14:58.135054661+00:00 stderr F I1013 00:14:58.135047 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-7fc54b8dd7-d2bhp' 2025-10-13T00:14:58.135064041+00:00 stderr F I1013 00:14:58.135052 1 controller.go:192] Received pod 'authentication-operator-7cc7ff75d5-g9qv8' 2025-10-13T00:14:58.135073711+00:00 stderr F I1013 00:14:58.135064 1 controller.go:151] Successfully synced 'openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8' 2025-10-13T00:14:58.135116292+00:00 stderr F I1013 00:14:58.135082 1 controller.go:192] Received pod 'oauth-openshift-74fc7c67cc-xqf8b' 2025-10-13T00:14:58.135116292+00:00 stderr F I1013 00:14:58.135099 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b' 2025-10-13T00:14:58.135116292+00:00 stderr F I1013 00:14:58.135104 1 controller.go:192] Received pod 'cluster-samples-operator-bc474d5d6-wshwg' 2025-10-13T00:14:58.135127623+00:00 stderr F I1013 00:14:58.135115 1 controller.go:151] Successfully synced 'openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg' 2025-10-13T00:14:58.135127623+00:00 stderr F I1013 00:14:58.135120 1 controller.go:192] Received pod 'openshift-config-operator-77658b5b66-dq5sc' 2025-10-13T00:14:58.135139753+00:00 stderr F I1013 00:14:58.135132 1 controller.go:151] Successfully synced 'openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc' 2025-10-13T00:14:58.135139753+00:00 stderr F I1013 00:14:58.135136 1 controller.go:192] Received pod 'console-conversion-webhook-595f9969b-l6z49' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135148 1 controller.go:151] Successfully synced 'openshift-console-operator/console-conversion-webhook-595f9969b-l6z49' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135157 1 controller.go:192] Received pod 'console-operator-5dbbc74dc9-cp5cd' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135171 1 controller.go:151] Successfully synced 'openshift-console-operator/console-operator-5dbbc74dc9-cp5cd' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 
00:14:58.135176 1 controller.go:192] Received pod 'console-644bb77b49-5x5xk' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135188 1 controller.go:151] Successfully synced 'openshift-console/console-644bb77b49-5x5xk' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135192 1 controller.go:192] Received pod 'downloads-65476884b9-9wcvx' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135202 1 controller.go:151] Successfully synced 'openshift-console/downloads-65476884b9-9wcvx' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135206 1 controller.go:192] Received pod 'openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135216 1 controller.go:151] Successfully synced 'openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135220 1 controller.go:192] Received pod 'controller-manager-778975cc4f-x5vcf' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135234 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-778975cc4f-x5vcf' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135240 1 controller.go:192] Received pod 'dns-operator-75f687757b-nz2xb' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135252 1 controller.go:151] Successfully synced 'openshift-dns-operator/dns-operator-75f687757b-nz2xb' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135258 1 controller.go:192] Received pod 'dns-default-gbw49' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135272 1 controller.go:151] Successfully synced 'openshift-dns/dns-default-gbw49' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135278 1 controller.go:192] Received pod 'etcd-operator-768d5b5d86-722mg' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135290 1 controller.go:151] Successfully synced 'openshift-etcd-operator/etcd-operator-768d5b5d86-722mg' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135295 1 controller.go:192] Received pod 'cluster-image-registry-operator-7769bd8d7d-q5cvv' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135305 1 controller.go:151] Successfully synced 'openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135309 1 controller.go:192] Received pod 'image-registry-75779c45fd-v2j2v' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135320 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135365 1 controller.go:192] Received pod 'ingress-canary-2vhcn' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135376 1 controller.go:151] Successfully synced 'openshift-ingress-canary/ingress-canary-2vhcn' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135380 1 controller.go:192] Received pod 'ingress-operator-7d46d5bb6d-rrg6t' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135390 1 controller.go:151] Successfully synced 'openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135394 1 controller.go:192] Received pod 'kube-apiserver-operator-78d54458c4-sc8h7' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135408 1 controller.go:151] Successfully synced 
'openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135412 1 controller.go:192] Received pod 'installer-12-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135422 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-12-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135427 1 controller.go:192] Received pod 'kube-controller-manager-operator-6f6cb54958-rbddb' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135438 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135443 1 controller.go:192] Received pod 'installer-10-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135453 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135457 1 controller.go:192] Received pod 'installer-10-retry-1-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135473 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-retry-1-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135477 1 controller.go:192] Received pod 'installer-11-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135479 1 controller.go:192] Received pod 'installer-9-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135488 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-11-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135500 1 controller.go:192] Received pod 'revision-pruner-10-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135511 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-10-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135515 1 controller.go:192] Received pod 'revision-pruner-11-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135526 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-11-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135531 1 controller.go:192] Received pod 'revision-pruner-8-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135546 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-8-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135551 1 controller.go:192] Received pod 'revision-pruner-9-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135561 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-9-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135563 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-9-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135571 1 controller.go:192] Received pod 'installer-7-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135571 1 controller.go:192] Received pod 'openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135582 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-7-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135588 1 controller.go:192] Received pod 'installer-8-crc' 
2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135587 1 controller.go:151] Successfully synced 'openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135604 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-8-crc' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135609 1 controller.go:192] Received pod 'kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135611 1 controller.go:192] Received pod 'migrator-f7c6d88df-q2fnv' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135621 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135624 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135627 1 controller.go:192] Received pod 'control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135629 1 controller.go:192] Received pod 'machine-api-operator-788b7c6b6c-ctdmb' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135637 1 controller.go:151] Successfully synced 'openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135641 1 controller.go:151] Successfully synced 'openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135642 1 controller.go:192] Received pod 'machine-config-controller-6df6df6b6b-58shh' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135646 1 controller.go:192] Received pod 'machine-config-operator-76788bff89-wkjgm' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135654 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135659 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135659 1 controller.go:192] Received pod 'certified-operators-7287f' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135665 1 controller.go:192] Received pod 'community-operators-8jhz6' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135673 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135677 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135678 1 controller.go:192] Received pod 'community-operators-sdddl' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135682 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-f9xdt' 2025-10-13T00:14:58.136976878+00:00 stderr F I1013 00:14:58.135689 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2025-10-13T00:14:58.136976878+00:00 stderr P I1013 00:14:58.135694 1 controller.go:151] Succe 2025-10-13T00:14:58.137066591+00:00 stderr F ssfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 
2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135694 1 controller.go:192] Received pod 'redhat-marketplace-8s8pc' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135703 1 controller.go:192] Received pod 'redhat-operators-f4jkp' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135710 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135715 1 controller.go:192] Received pod 'multus-admission-controller-6c7c885997-4hbbc' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135720 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135724 1 controller.go:151] Successfully synced 'openshift-multus/multus-admission-controller-6c7c885997-4hbbc' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135727 1 controller.go:192] Received pod 'network-metrics-daemon-qdfr4' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135730 1 controller.go:192] Received pod 'network-check-source-5c5478f8c-vqvt7' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135740 1 controller.go:151] Successfully synced 'openshift-multus/network-metrics-daemon-qdfr4' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135751 1 controller.go:192] Received pod 'network-check-target-v54bt' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135763 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135769 1 controller.go:192] Received pod 'apiserver-69c565c9b6-vbdpd' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135773 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-target-v54bt' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135780 1 controller.go:192] Received pod 'catalog-operator-857456c46-7f5wf' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135781 1 controller.go:151] Successfully synced 'openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135787 1 controller.go:192] Received pod 'collect-profiles-29251920-wcws2' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135793 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135799 1 controller.go:192] Received pod 'collect-profiles-29251935-d7x6j' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135801 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135806 1 controller.go:192] Received pod 'collect-profiles-29251950-x8jjd' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135809 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135816 1 controller.go:192] Received pod 'olm-operator-6d8474f75f-x54mh' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135817 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135823 1 
controller.go:192] Received pod 'package-server-manager-84d578d794-jw7r2' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135830 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135834 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135837 1 controller.go:192] Received pod 'packageserver-8464bcc55b-sjnqz' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135841 1 controller.go:192] Received pod 'route-controller-manager-776b8b7477-sfpvs' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135853 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135859 1 controller.go:192] Received pod 'service-ca-operator-546b4f8984-pwccz' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135874 1 controller.go:151] Successfully synced 'openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135878 1 controller.go:192] Received pod 'service-ca-666f99b6f-kk8kg' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135887 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-kk8kg' 2025-10-13T00:14:58.137066591+00:00 stderr F I1013 00:14:58.135894 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs' 2025-10-13T00:15:58.539416413+00:00 stderr F I1013 00:15:58.536283 1 controller.go:192] Received pod 'collect-profiles-29338575-4qbqw' 2025-10-13T00:15:58.539461594+00:00 stderr F I1013 00:15:58.539366 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw' 2025-10-13T00:15:58.540525388+00:00 stderr F I1013 00:15:58.540495 1 controller.go:192] Received pod 'image-pruner-29338560-zvlxb' 2025-10-13T00:15:58.540556819+00:00 stderr F I1013 00:15:58.540538 1 controller.go:151] Successfully synced 'openshift-image-registry/image-pruner-29338560-zvlxb' 2025-10-13T00:15:58.614635075+00:00 stderr F I1013 00:15:58.612514 1 controller.go:192] Received pod 'redhat-operators-t4sr9' 2025-10-13T00:15:58.614635075+00:00 stderr F I1013 00:15:58.612558 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-t4sr9' 2025-10-13T00:15:58.614635075+00:00 stderr F I1013 00:15:58.614043 1 controller.go:192] Received pod 'redhat-marketplace-jfjbq' 2025-10-13T00:15:58.614635075+00:00 stderr F I1013 00:15:58.614079 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-jfjbq' 2025-10-13T00:15:58.621470144+00:00 stderr F I1013 00:15:58.620727 1 controller.go:192] Received pod 'certified-operators-zqnwb' 2025-10-13T00:15:58.621470144+00:00 stderr F I1013 00:15:58.620772 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-zqnwb' 2025-10-13T00:16:06.321214427+00:00 stderr F I1013 00:16:06.320695 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2025-10-13T00:16:10.857150462+00:00 stderr F I1013 00:16:10.856394 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-29pzg' 2025-10-13T00:16:10.857284526+00:00 stderr F I1013 00:16:10.857268 1 controller.go:151] 
Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-29pzg' 2025-10-13T00:16:12.432169894+00:00 stderr F I1013 00:16:12.431571 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 2025-10-13T00:16:14.756660245+00:00 stderr F I1013 00:16:14.756519 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2025-10-13T00:16:16.527883110+00:00 stderr F I1013 00:16:16.527319 1 controller.go:192] Received pod 'redhat-marketplace-crk87' 2025-10-13T00:16:16.527989884+00:00 stderr F I1013 00:16:16.527966 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-crk87' 2025-10-13T00:16:17.366244608+00:00 stderr F I1013 00:16:17.366144 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-t4sr9' 2025-10-13T00:16:18.367846141+00:00 stderr F I1013 00:16:18.367795 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-jfjbq' 2025-10-13T00:16:19.455207234+00:00 stderr F I1013 00:16:19.455046 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-zqnwb' 2025-10-13T00:16:23.647584280+00:00 stderr F I1013 00:16:23.646578 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2025-10-13T00:16:24.251020383+00:00 stderr F I1013 00:16:24.249588 1 controller.go:192] Received pod 'redhat-operators-hkptr' 2025-10-13T00:16:24.251020383+00:00 stderr F I1013 00:16:24.250169 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-hkptr' 2025-10-13T00:16:24.951481268+00:00 stderr F I1013 00:16:24.951421 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2025-10-13T00:16:24.961207560+00:00 stderr F I1013 00:16:24.961159 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f' 2025-10-13T00:16:27.531060889+00:00 stderr F I1013 00:16:27.530788 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2025-10-13T00:16:27.961152023+00:00 stderr F I1013 00:16:27.959442 1 controller.go:192] Received pod 'community-operators-gjctm' 2025-10-13T00:16:27.961233386+00:00 stderr F I1013 00:16:27.961205 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-gjctm' 2025-10-13T00:16:28.252208588+00:00 stderr F I1013 00:16:28.252139 1 controller.go:192] Received pod 'certified-operators-cms8q' 2025-10-13T00:16:28.252208588+00:00 stderr F I1013 00:16:28.252193 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-cms8q' 2025-10-13T00:16:29.643494378+00:00 stderr F I1013 00:16:29.642656 1 controller.go:192] Received pod 'community-operators-wswq5' 2025-10-13T00:16:29.643494378+00:00 stderr F I1013 00:16:29.643481 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-wswq5' 2025-10-13T00:17:28.935396453+00:00 stderr F I1013 00:17:28.933656 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-wswq5' 2025-10-13T00:19:39.421033595+00:00 stderr F I1013 00:19:39.419892 1 controller.go:192] Received pod 'image-registry-75b7bb6564-2mwg6' 2025-10-13T00:19:39.421163029+00:00 stderr F I1013 00:19:39.421149 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75b7bb6564-2mwg6' 2025-10-13T00:19:45.669506054+00:00 stderr F I1013 00:19:45.669022 1 controller.go:151] Successfully synced 'openshift-multus/cni-sysctl-allowlist-ds-pklng' 
2025-10-13T00:20:25.905598670+00:00 stderr F I1013 00:20:25.904975 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 2025-10-13T00:21:31.052715322+00:00 stderr F I1013 00:21:31.051441 1 controller.go:192] Received pod 'installer-13-crc' 2025-10-13T00:21:31.052715322+00:00 stderr F I1013 00:21:31.051979 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-13-crc' 2025-10-13T00:21:36.284473064+00:00 stderr F I1013 00:21:36.282832 1 controller.go:151] Successfully synced 'openshift-ovn-kubernetes/ovnkube-node-44qcg'
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log
2025-08-13T19:59:14.203072729+00:00 stderr F I0813 19:59:14.200136 1 main.go:45] Version:216149b14d9cb61ae90ac65d839a448fb11075bb 2025-08-13T19:59:14.203072729+00:00 stderr F I0813 19:59:14.201374 1 main.go:46] Starting with config{ :9091 crc} 2025-08-13T19:59:14.243408519+00:00 stderr F W0813 19:59:14.238333 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:15.680406063+00:00 stderr F I0813 19:59:15.679464 1 controller.go:42] Setting up event handlers 2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811144 1 podmetrics.go:101] Serving network metrics 2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811485 1 controller.go:101] Starting pod controller 2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811492 1 controller.go:104] Waiting for informer caches to sync 2025-08-13T19:59:22.235926353+00:00 stderr F I0813 19:59:22.214243 1 controller.go:109] Starting workers 2025-08-13T19:59:22.251194178+00:00 stderr F I0813 19:59:22.239598 1 controller.go:114] Started workers 2025-08-13T19:59:22.251194178+00:00 stderr F I0813 19:59:22.239961 1 controller.go:192] Received pod 'csi-hostpathplugin-hvm8g' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.292973 1 controller.go:192] Received pod 'openshift-apiserver-operator-7c88c4c865-kn67m' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293124 1 controller.go:151] Successfully synced 'openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293138 1 controller.go:192] Received pod 'apiserver-67cbf64bc9-mtx25' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293157 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-mtx25' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293165 1 controller.go:192] Received pod 'authentication-operator-7cc7ff75d5-g9qv8' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293176 1 controller.go:151] Successfully synced 'openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293182 1 controller.go:192] Received pod 'oauth-openshift-765b47f944-n2lhl' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293199 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-765b47f944-n2lhl' 2025-08-13T19:59:22.303066597+00:00 stderr F
I0813 19:59:22.293206 1 controller.go:192] Received pod 'cluster-samples-operator-bc474d5d6-wshwg' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293217 1 controller.go:151] Successfully synced 'openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293221 1 controller.go:192] Received pod 'openshift-config-operator-77658b5b66-dq5sc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293246 1 controller.go:151] Successfully synced 'openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293253 1 controller.go:192] Received pod 'console-conversion-webhook-595f9969b-l6z49' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293269 1 controller.go:151] Successfully synced 'openshift-console-operator/console-conversion-webhook-595f9969b-l6z49' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293276 1 controller.go:192] Received pod 'console-operator-5dbbc74dc9-cp5cd' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293287 1 controller.go:151] Successfully synced 'openshift-console-operator/console-operator-5dbbc74dc9-cp5cd' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293292 1 controller.go:192] Received pod 'console-84fccc7b6-mkncc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293306 1 controller.go:151] Successfully synced 'openshift-console/console-84fccc7b6-mkncc' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293311 1 controller.go:192] Received pod 'downloads-65476884b9-9wcvx' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293327 1 controller.go:151] Successfully synced 'openshift-console/downloads-65476884b9-9wcvx' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293334 1 controller.go:192] Received pod 'openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293345 1 controller.go:151] Successfully synced 'openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z' 2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293350 1 controller.go:192] Received pod 'controller-manager-6ff78978b4-q4vv8' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.350984 1 controller.go:151] Successfully synced 'hostpath-provisioner/csi-hostpathplugin-hvm8g' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403421 1 controller.go:192] Received pod 'dns-operator-75f687757b-nz2xb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403501 1 controller.go:151] Successfully synced 'openshift-dns-operator/dns-operator-75f687757b-nz2xb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403513 1 controller.go:192] Received pod 'dns-default-gbw49' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403534 1 controller.go:151] Successfully synced 'openshift-dns/dns-default-gbw49' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403539 1 controller.go:192] Received pod 'etcd-operator-768d5b5d86-722mg' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403550 1 controller.go:151] Successfully synced 'openshift-etcd-operator/etcd-operator-768d5b5d86-722mg' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403556 1 controller.go:192] Received pod 'cluster-image-registry-operator-7769bd8d7d-q5cvv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403567 1 
controller.go:151] Successfully synced 'openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403572 1 controller.go:192] Received pod 'image-registry-585546dd8b-v5m4t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403583 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-585546dd8b-v5m4t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403587 1 controller.go:192] Received pod 'ingress-canary-2vhcn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403598 1 controller.go:151] Successfully synced 'openshift-ingress-canary/ingress-canary-2vhcn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403607 1 controller.go:192] Received pod 'ingress-operator-7d46d5bb6d-rrg6t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403624 1 controller.go:151] Successfully synced 'openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403629 1 controller.go:192] Received pod 'kube-apiserver-operator-78d54458c4-sc8h7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403659 1 controller.go:151] Successfully synced 'openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403665 1 controller.go:192] Received pod 'kube-controller-manager-operator-6f6cb54958-rbddb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403675 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403680 1 controller.go:192] Received pod 'revision-pruner-8-crc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403691 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-8-crc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403696 1 controller.go:192] Received pod 'openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403721 1 controller.go:151] Successfully synced 'openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403728 1 controller.go:192] Received pod 'kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403744 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403751 1 controller.go:192] Received pod 'migrator-f7c6d88df-q2fnv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403767 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403828 1 controller.go:192] Received pod 'control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403905 1 controller.go:151] Successfully synced 'openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403915 1 controller.go:192] Received pod 'machine-api-operator-788b7c6b6c-ctdmb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403929 
1 controller.go:151] Successfully synced 'openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403935 1 controller.go:192] Received pod 'machine-config-controller-6df6df6b6b-58shh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403944 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403949 1 controller.go:192] Received pod 'machine-config-operator-76788bff89-wkjgm' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403960 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403973 1 controller.go:192] Received pod 'certified-operators-7287f' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403985 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403990 1 controller.go:192] Received pod 'certified-operators-g4v97' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404000 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-g4v97' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404005 1 controller.go:192] Received pod 'community-operators-8jhz6' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404015 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404024 1 controller.go:192] Received pod 'community-operators-k9qqb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404039 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-k9qqb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404043 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-f9xdt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404055 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404060 1 controller.go:192] Received pod 'redhat-marketplace-8s8pc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404070 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404075 1 controller.go:192] Received pod 'redhat-marketplace-rmwfn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404093 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-rmwfn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404099 1 controller.go:192] Received pod 'redhat-operators-dcqzh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404109 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-dcqzh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404114 1 controller.go:192] Received pod 'redhat-operators-f4jkp' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404124 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404128 1 controller.go:192] Received pod 'multus-admission-controller-6c7c885997-4hbbc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 
19:59:22.404141 1 controller.go:151] Successfully synced 'openshift-multus/multus-admission-controller-6c7c885997-4hbbc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404149 1 controller.go:192] Received pod 'network-metrics-daemon-qdfr4' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404162 1 controller.go:151] Successfully synced 'openshift-multus/network-metrics-daemon-qdfr4' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404166 1 controller.go:192] Received pod 'network-check-source-5c5478f8c-vqvt7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404176 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404180 1 controller.go:192] Received pod 'network-check-target-v54bt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404191 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-target-v54bt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404196 1 controller.go:192] Received pod 'apiserver-69c565c9b6-vbdpd' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404212 1 controller.go:151] Successfully synced 'openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404218 1 controller.go:192] Received pod 'catalog-operator-857456c46-7f5wf' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404229 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404233 1 controller.go:192] Received pod 'collect-profiles-29251905-zmjv9' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404243 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404248 1 controller.go:192] Received pod 'olm-operator-6d8474f75f-x54mh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404257 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404262 1 controller.go:192] Received pod 'package-server-manager-84d578d794-jw7r2' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404278 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2' 2025-08-13T19:59:22.415768500+00:00 stderr P I0813 19:59:22.404286 1 con 2025-08-13T19:59:22.416128400+00:00 stderr F troller.go:192] Received pod 'packageserver-8464bcc55b-sjnqz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404298 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404302 1 controller.go:192] Received pod 'route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404318 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404323 1 controller.go:192] Received pod 'service-ca-operator-546b4f8984-pwccz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404345 1 controller.go:151] Successfully synced 
'openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404349 1 controller.go:192] Received pod 'service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404361 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404375 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-6ff78978b4-q4vv8' 2025-08-13T19:59:47.793265519+00:00 stderr F I0813 19:59:47.789858 1 controller.go:192] Received pod 'service-ca-666f99b6f-kk8kg' 2025-08-13T19:59:47.798560020+00:00 stderr F I0813 19:59:47.795202 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-kk8kg' 2025-08-13T19:59:52.481966496+00:00 stderr F I0813 19:59:52.454353 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:54.699326581+00:00 stderr F I0813 19:59:54.695209 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-6ff78978b4-q4vv8' 2025-08-13T20:00:00.033952767+00:00 stderr F I0813 20:00:00.016212 1 controller.go:192] Received pod 'controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:00.033952767+00:00 stderr F I0813 20:00:00.031888 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:01.733428071+00:00 stderr F I0813 20:00:01.686450 1 controller.go:192] Received pod 'route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:01.733428071+00:00 stderr F I0813 20:00:01.687228 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:05.392908974+00:00 stderr F I0813 20:00:05.381095 1 controller.go:192] Received pod 'collect-profiles-29251920-wcws2' 2025-08-13T20:00:05.392908974+00:00 stderr F I0813 20:00:05.390516 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2025-08-13T20:00:12.837175058+00:00 stderr F I0813 20:00:12.831533 1 controller.go:192] Received pod 'revision-pruner-9-crc' 2025-08-13T20:00:12.837175058+00:00 stderr F I0813 20:00:12.832637 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-9-crc' 2025-08-13T20:00:13.415470178+00:00 stderr F I0813 20:00:13.414068 1 controller.go:192] Received pod 'installer-9-crc' 2025-08-13T20:00:13.415470178+00:00 stderr F I0813 20:00:13.414423 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-9-crc' 2025-08-13T20:00:18.087146555+00:00 stderr F I0813 20:00:18.085440 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:19.199895773+00:00 stderr F I0813 20:00:19.199376 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:23.817216183+00:00 stderr F I0813 20:00:23.814290 1 controller.go:192] Received pod 'route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:23.817216183+00:00 stderr F I0813 20:00:23.815556 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.262921 1 controller.go:192] Received pod 'installer-9-crc' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 
20:00:24.263147 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-9-crc' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263157 1 controller.go:192] Received pod 'console-5d9678894c-wx62n' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263168 1 controller.go:151] Successfully synced 'openshift-console/console-5d9678894c-wx62n' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263180 1 controller.go:192] Received pod 'controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263191 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:27.488118776+00:00 stderr F I0813 20:00:27.486613 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T20:00:27.637899127+00:00 stderr F I0813 20:00:27.636553 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-585546dd8b-v5m4t' 2025-08-13T20:00:29.897264330+00:00 stderr F I0813 20:00:29.895923 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:30.862824172+00:00 stderr F I0813 20:00:30.859600 1 controller.go:192] Received pod 'image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:00:30.862824172+00:00 stderr F I0813 20:00:30.860019 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:00:30.904206192+00:00 stderr F I0813 20:00:30.904154 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:30.971050898+00:00 stderr F I0813 20:00:30.970969 1 controller.go:192] Received pod 'image-registry-75779c45fd-v2j2v' 2025-08-13T20:00:30.971139431+00:00 stderr F I0813 20:00:30.971121 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 2025-08-13T20:00:31.082874517+00:00 stderr F I0813 20:00:31.082592 1 controller.go:192] Received pod 'controller-manager-78589965b8-vmcwt' 2025-08-13T20:00:31.082874517+00:00 stderr F I0813 20:00:31.082653 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-78589965b8-vmcwt' 2025-08-13T20:00:36.540416602+00:00 stderr F I0813 20:00:36.533143 1 controller.go:192] Received pod 'route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:00:36.540416602+00:00 stderr F I0813 20:00:36.533597 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:00:41.455954023+00:00 stderr F I0813 20:00:41.454478 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-mtx25' 2025-08-13T20:00:45.801433589+00:00 stderr F I0813 20:00:45.784549 1 controller.go:192] Received pod 'installer-10-crc' 2025-08-13T20:00:45.801433589+00:00 stderr F I0813 20:00:45.798645 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-crc' 2025-08-13T20:00:45.900313318+00:00 stderr F I0813 20:00:45.873345 1 controller.go:192] Received pod 'revision-pruner-10-crc' 2025-08-13T20:00:45.900531905+00:00 stderr F I0813 20:00:45.900504 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-10-crc' 2025-08-13T20:00:45.951490608+00:00 stderr F I0813 20:00:45.949257 1 controller.go:192] Received pod 'installer-7-crc' 2025-08-13T20:00:45.951490608+00:00 stderr 
F I0813 20:00:45.949399 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-7-crc' 2025-08-13T20:00:47.103216737+00:00 stderr F I0813 20:00:47.102333 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-765b47f944-n2lhl' 2025-08-13T20:01:00.041614175+00:00 stderr F I0813 20:01:00.039017 1 controller.go:192] Received pod 'oauth-openshift-74fc7c67cc-xqf8b' 2025-08-13T20:01:00.054891223+00:00 stderr F I0813 20:01:00.054311 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b' 2025-08-13T20:01:00.080921856+00:00 stderr F I0813 20:01:00.078163 1 controller.go:192] Received pod 'apiserver-67cbf64bc9-jjfds' 2025-08-13T20:01:00.080921856+00:00 stderr F I0813 20:01:00.078340 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-jjfds' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.282197 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-9-crc' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.283523 1 controller.go:192] Received pod 'console-644bb77b49-5x5xk' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.283591 1 controller.go:151] Successfully synced 'openshift-console/console-644bb77b49-5x5xk' 2025-08-13T20:06:25.106206287+00:00 stderr F I0813 20:06:25.105569 1 controller.go:192] Received pod 'installer-10-retry-1-crc' 2025-08-13T20:06:25.106544196+00:00 stderr F I0813 20:06:25.105651 1 controller.go:192] Received pod 'controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:06:25.109592163+00:00 stderr F I0813 20:06:25.109563 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-retry-1-crc' 2025-08-13T20:06:25.109648825+00:00 stderr F I0813 20:06:25.109635 1 controller.go:192] Received pod 'route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:06:25.109696206+00:00 stderr F I0813 20:06:25.109682 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:06:25.109731127+00:00 stderr F I0813 20:06:25.109719 1 controller.go:151] Successfully synced 'openshift-console/console-84fccc7b6-mkncc' 2025-08-13T20:06:25.109764118+00:00 stderr F I0813 20:06:25.109753 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-78589965b8-vmcwt' 2025-08-13T20:06:25.109920543+00:00 stderr F I0813 20:06:25.109906 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:06:25.109960774+00:00 stderr F I0813 20:06:25.109949 1 controller.go:151] Successfully synced 'openshift-console/console-5d9678894c-wx62n' 2025-08-13T20:06:25.109990255+00:00 stderr F I0813 20:06:25.109979 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/kube-apiserver-startup-monitor-crc' 2025-08-13T20:06:25.110058797+00:00 stderr F I0813 20:06:25.110002 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:06:25.110085968+00:00 stderr F I0813 20:06:25.110010 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:06:32.084964732+00:00 stderr F I0813 20:06:32.083272 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-rmwfn' 2025-08-13T20:06:34.485249150+00:00 stderr F I0813 20:06:34.484717 1 controller.go:151] Successfully synced 
'openshift-marketplace/redhat-operators-dcqzh' 2025-08-13T20:06:34.779125875+00:00 stderr F I0813 20:06:34.778404 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-g4v97' 2025-08-13T20:06:35.123551180+00:00 stderr F I0813 20:06:35.123213 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-k9qqb' 2025-08-13T20:06:35.928407086+00:00 stderr F I0813 20:06:35.928154 1 controller.go:192] Received pod 'redhat-marketplace-4txfd' 2025-08-13T20:06:35.929223220+00:00 stderr F I0813 20:06:35.929127 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-4txfd' 2025-08-13T20:06:36.643606302+00:00 stderr F I0813 20:06:36.642995 1 controller.go:192] Received pod 'certified-operators-cfdk8' 2025-08-13T20:06:36.643736426+00:00 stderr F I0813 20:06:36.643721 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-cfdk8' 2025-08-13T20:06:37.666895990+00:00 stderr F I0813 20:06:37.665212 1 controller.go:192] Received pod 'redhat-operators-pmqwc' 2025-08-13T20:06:37.666895990+00:00 stderr F I0813 20:06:37.665954 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-pmqwc' 2025-08-13T20:06:39.388929202+00:00 stderr F I0813 20:06:39.388413 1 controller.go:192] Received pod 'community-operators-p7svp' 2025-08-13T20:06:39.389035525+00:00 stderr F I0813 20:06:39.389019 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-p7svp' 2025-08-13T20:06:49.910865775+00:00 stderr F I0813 20:06:49.910061 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/kube-controller-manager-crc' 2025-08-13T20:07:05.606433680+00:00 stderr F I0813 20:07:05.590499 1 controller.go:192] Received pod 'installer-11-crc' 2025-08-13T20:07:05.606433680+00:00 stderr F I0813 20:07:05.591545 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-11-crc' 2025-08-13T20:07:09.571116321+00:00 stderr F I0813 20:07:09.568456 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-4txfd' 2025-08-13T20:07:17.348392622+00:00 stderr F I0813 20:07:17.344598 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-jjfds' 2025-08-13T20:07:21.108945819+00:00 stderr F I0813 20:07:21.104214 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-cfdk8' 2025-08-13T20:07:21.346738817+00:00 stderr F I0813 20:07:21.346147 1 controller.go:192] Received pod 'apiserver-7fc54b8dd7-d2bhp' 2025-08-13T20:07:21.346738817+00:00 stderr F I0813 20:07:21.346207 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-7fc54b8dd7-d2bhp' 2025-08-13T20:07:23.498192142+00:00 stderr F I0813 20:07:23.495542 1 controller.go:192] Received pod 'revision-pruner-11-crc' 2025-08-13T20:07:23.498192142+00:00 stderr F I0813 20:07:23.497749 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-11-crc' 2025-08-13T20:07:24.100475910+00:00 stderr F I0813 20:07:24.094438 1 controller.go:192] Received pod 'installer-8-crc' 2025-08-13T20:07:24.100475910+00:00 stderr F I0813 20:07:24.095461 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-8-crc' 2025-08-13T20:07:26.190039028+00:00 stderr F I0813 20:07:26.184720 1 controller.go:192] Received pod 'installer-11-crc' 2025-08-13T20:07:26.190039028+00:00 stderr F I0813 20:07:26.189167 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-11-crc' 
2025-08-13T20:07:34.911652425+00:00 stderr F I0813 20:07:34.905613 1 controller.go:192] Received pod 'installer-12-crc' 2025-08-13T20:07:34.911652425+00:00 stderr F I0813 20:07:34.906370 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-12-crc' 2025-08-13T20:07:41.042723107+00:00 stderr F I0813 20:07:41.042017 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-11-crc' 2025-08-13T20:07:42.641679050+00:00 stderr F I0813 20:07:42.640862 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-p7svp' 2025-08-13T20:08:01.087979002+00:00 stderr F I0813 20:08:01.087350 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-pmqwc' 2025-08-13T20:08:08.267400173+00:00 stderr F I0813 20:08:08.264471 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/openshift-kube-scheduler-crc' 2025-08-13T20:08:12.291050774+00:00 stderr F I0813 20:08:12.289975 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/kube-controller-manager-crc' 2025-08-13T20:10:21.976031818+00:00 stderr F I0813 20:10:21.975169 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/kube-apiserver-startup-monitor-crc' 2025-08-13T20:10:50.613462976+00:00 stderr F I0813 20:10:50.611261 1 controller.go:151] Successfully synced 'openshift-multus/cni-sysctl-allowlist-ds-jx5m8' 2025-08-13T20:11:00.793692470+00:00 stderr F I0813 20:11:00.789632 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:11:00.852394873+00:00 stderr F I0813 20:11:00.852251 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:11:02.192181446+00:00 stderr F I0813 20:11:02.192032 1 controller.go:192] Received pod 'controller-manager-778975cc4f-x5vcf' 2025-08-13T20:11:02.192437293+00:00 stderr F I0813 20:11:02.192326 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-778975cc4f-x5vcf' 2025-08-13T20:11:02.283304968+00:00 stderr F I0813 20:11:02.281406 1 controller.go:192] Received pod 'route-controller-manager-776b8b7477-sfpvs' 2025-08-13T20:11:02.283304968+00:00 stderr F I0813 20:11:02.281657 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs' 2025-08-13T20:15:01.024125786+00:00 stderr F I0813 20:15:01.017300 1 controller.go:192] Received pod 'collect-profiles-29251935-d7x6j' 2025-08-13T20:15:01.024125786+00:00 stderr F I0813 20:15:01.020210 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j' 2025-08-13T20:16:58.871475492+00:00 stderr F I0813 20:16:58.866052 1 controller.go:192] Received pod 'certified-operators-8bbjz' 2025-08-13T20:16:58.871475492+00:00 stderr F I0813 20:16:58.870147 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-8bbjz' 2025-08-13T20:17:01.049845789+00:00 stderr F I0813 20:17:01.049067 1 controller.go:192] Received pod 'redhat-marketplace-nsk78' 2025-08-13T20:17:01.049906111+00:00 stderr F I0813 20:17:01.049893 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nsk78' 2025-08-13T20:17:21.605438507+00:00 stderr F I0813 20:17:21.604580 1 controller.go:192] Received pod 'redhat-operators-swl5s' 2025-08-13T20:17:21.605556170+00:00 stderr F I0813 20:17:21.605442 1 controller.go:151] Successfully synced 
'openshift-marketplace/redhat-operators-swl5s' 2025-08-13T20:17:31.144134404+00:00 stderr F I0813 20:17:31.142706 1 controller.go:192] Received pod 'community-operators-tfv59' 2025-08-13T20:17:31.144134404+00:00 stderr F I0813 20:17:31.143821 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-tfv59' 2025-08-13T20:17:36.938844574+00:00 stderr F I0813 20:17:36.938059 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nsk78' 2025-08-13T20:17:45.316184057+00:00 stderr F I0813 20:17:45.314435 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-8bbjz' 2025-08-13T20:18:56.581914971+00:00 stderr F I0813 20:18:56.581177 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-tfv59' 2025-08-13T20:19:38.583898355+00:00 stderr F I0813 20:19:38.580924 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-swl5s' 2025-08-13T20:27:06.515561143+00:00 stderr F I0813 20:27:06.514117 1 controller.go:192] Received pod 'redhat-marketplace-jbzn9' 2025-08-13T20:27:06.522213573+00:00 stderr F I0813 20:27:06.521390 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-jbzn9' 2025-08-13T20:27:06.626739732+00:00 stderr F I0813 20:27:06.626334 1 controller.go:192] Received pod 'certified-operators-xldzg' 2025-08-13T20:27:06.626769043+00:00 stderr F I0813 20:27:06.626758 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-xldzg' 2025-08-13T20:27:30.206946955+00:00 stderr F I0813 20:27:30.204841 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-xldzg' 2025-08-13T20:27:30.240926917+00:00 stderr F I0813 20:27:30.239904 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-jbzn9' 2025-08-13T20:28:44.070439440+00:00 stderr F I0813 20:28:44.069725 1 controller.go:192] Received pod 'community-operators-hvwvm' 2025-08-13T20:28:44.071889282+00:00 stderr F I0813 20:28:44.070535 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-hvwvm' 2025-08-13T20:29:07.485652753+00:00 stderr F I0813 20:29:07.484254 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-hvwvm' 2025-08-13T20:29:30.800937402+00:00 stderr F I0813 20:29:30.800342 1 controller.go:192] Received pod 'redhat-operators-zdwjn' 2025-08-13T20:29:30.801141928+00:00 stderr F I0813 20:29:30.801085 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zdwjn' 2025-08-13T20:30:02.812209420+00:00 stderr F I0813 20:30:02.808013 1 controller.go:192] Received pod 'collect-profiles-29251950-x8jjd' 2025-08-13T20:30:02.812209420+00:00 stderr F I0813 20:30:02.808719 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd' 2025-08-13T20:30:08.194448628+00:00 stderr F I0813 20:30:08.193089 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9' 2025-08-13T20:30:34.188255151+00:00 stderr F I0813 20:30:34.187324 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zdwjn' 2025-08-13T20:37:48.902847474+00:00 stderr F I0813 20:37:48.896192 1 controller.go:192] Received pod 'redhat-marketplace-nkzlk' 2025-08-13T20:37:48.902847474+00:00 stderr F I0813 20:37:48.899938 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nkzlk' 2025-08-13T20:38:09.866597412+00:00 stderr F 
I0813 20:38:09.865627 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nkzlk' 2025-08-13T20:38:36.821311697+00:00 stderr F I0813 20:38:36.817256 1 controller.go:192] Received pod 'certified-operators-4kmbv' 2025-08-13T20:38:36.826287260+00:00 stderr F I0813 20:38:36.824009 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-4kmbv' 2025-08-13T20:38:58.234193701+00:00 stderr F I0813 20:38:58.233474 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-4kmbv' 2025-08-13T20:41:22.218162180+00:00 stderr F I0813 20:41:22.216147 1 controller.go:192] Received pod 'redhat-operators-k2tgr' 2025-08-13T20:41:22.218162180+00:00 stderr F I0813 20:41:22.217285 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-k2tgr' 2025-08-13T20:42:15.662994211+00:00 stderr F I0813 20:42:15.659488 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-k2tgr' 2025-08-13T20:42:27.892076227+00:00 stderr F I0813 20:42:27.887166 1 controller.go:192] Received pod 'community-operators-sdddl' 2025-08-13T20:42:27.892076227+00:00 stderr F I0813 20:42:27.888532 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2025-08-13T20:42:44.151210941+00:00 stderr F I0813 20:42:44.150298 1 controller.go:116] Shutting down workers
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log
2025-10-13T00:14:59.050133578+00:00 stderr F W1013 00:14:59.049859 1 deprecated.go:66] 2025-10-13T00:14:59.050133578+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:14:59.050133578+00:00 stderr F 2025-10-13T00:14:59.050133578+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-10-13T00:14:59.050133578+00:00 stderr F 2025-10-13T00:14:59.050133578+00:00 stderr F =============================================== 2025-10-13T00:14:59.050133578+00:00 stderr F 2025-10-13T00:14:59.050975903+00:00 stderr F I1013 00:14:59.050937 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:14:59.050995694+00:00 stderr F I1013 00:14:59.050986 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:14:59.053120108+00:00 stderr F I1013 00:14:59.053068 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-10-13T00:14:59.053592542+00:00 stderr F I1013 00:14:59.053571 1 kube-rbac-proxy.go:402] Listening securely on :8443
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log
2025-08-13T19:59:23.418270946+00:00 stderr F W0813 19:59:23.412030 1 deprecated.go:66] 2025-08-13T19:59:23.418270946+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F =============================================== 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F I0813 19:59:23.413617 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:23.418270946+00:00 stderr F I0813 19:59:23.413677 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:23.432166223+00:00 stderr F I0813 19:59:23.432126 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-08-13T19:59:23.450188556+00:00 stderr F I0813 19:59:23.450142 1 kube-rbac-proxy.go:402] Listening securely on :8443 2025-08-13T20:42:43.082211382+00:00 stderr F I0813 20:42:43.081428 1 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-content/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-utilities/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/registry-server/0.log
2025-10-13T00:16:22.771931476+00:00 stderr F time="2025-10-13T00:16:22Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2025-10-13T00:16:23.666949491+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2025-10-13T00:16:23.667037554+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="stopped caching cpu profile data" address="localhost:6060"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log
2025-10-13T00:14:58.907994519+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="Using in-cluster kube client config" 2025-10-13T00:14:58.924912676+00:00 stderr F
time="2025-10-13T00:14:58Z" level=info msg="Defaulting Interval to '12h0m0s'" 2025-10-13T00:14:58.956293396+00:00 stderr F I1013 00:14:58.955794 1 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager 2025-10-13T00:14:58.957517413+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-10-13T00:14:58.957517413+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="operator ready" 2025-10-13T00:14:58.957517413+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="starting informers..." 2025-10-13T00:14:58.957517413+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="informers started" 2025-10-13T00:14:58.957517413+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="waiting for caches to sync..." 2025-10-13T00:14:59.063802068+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="starting workers..." 2025-10-13T00:14:59.064264991+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="connecting to source" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-10-13T00:14:59.064312853+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="connecting to source" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:14:59.076458767+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-10-13T00:14:59.076458767+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114256 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114289 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114400 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114419 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114436 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114443 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114572 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:14:59.114739094+00:00 stderr F I1013 00:14:59.114611 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:14:59.114793855+00:00 stderr F I1013 00:14:59.114779 1 secure_serving.go:213] Serving securely on [::]:5443 
2025-10-13T00:14:59.120011042+00:00 stderr F I1013 00:14:59.115138 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:14:59.120011042+00:00 stderr F I1013 00:14:59.115154 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:59.120011042+00:00 stderr F I1013 00:14:59.115166 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:14:59.120011042+00:00 stderr F I1013 00:14:59.115171 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:59.120011042+00:00 stderr F I1013 00:14:59.116050 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-10-13T00:14:59.120011042+00:00 stderr F I1013 00:14:59.116515 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:14:59.215158012+00:00 stderr F I1013 00:14:59.214389 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:14:59.215158012+00:00 stderr F I1013 00:14:59.214549 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:59.215158012+00:00 stderr F I1013 00:14:59.214571 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:59.215158012+00:00 stderr F I1013 00:14:59.214738 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:14:59.216879994+00:00 stderr F I1013 00:14:59.215465 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:59.216879994+00:00 stderr F I1013 00:14:59.215572 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:59.286127529+00:00 stderr F W1013 00:14:59.285835 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46758->10.217.4.10:53: read: connection refused" 2025-10-13T00:14:59.287274763+00:00 stderr F W1013 00:14:59.286669 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:36641->10.217.4.10:53: read: connection refused" 2025-10-13T00:14:59.293159689+00:00 stderr F W1013 00:14:59.292087 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:33111->10.217.4.10:53: read: connection refused" 2025-10-13T00:14:59.293664265+00:00 stderr F W1013 00:14:59.293610 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:47013->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:00.315770498+00:00 stderr F W1013 00:15:00.315365 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60022->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:00.315770498+00:00 stderr F W1013 00:15:00.315416 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:51217->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:00.322092287+00:00 stderr F W1013 00:15:00.321952 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:56270->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:00.323085937+00:00 stderr F W1013 00:15:00.323038 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38950->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:01.419193648+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:15:01.419193648+00:00 stderr F time="2025-10-13T00:15:01Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:56270->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2025-10-13T00:15:01.733640190+00:00 stderr F W1013 00:15:01.731378 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46923->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:01.907818598+00:00 stderr F W1013 00:15:01.904188 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:56742->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:01.980427814+00:00 stderr F W1013 00:15:01.980283 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:58140->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:02.017124253+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-10-13T00:15:02.017124253+00:00 stderr F time="2025-10-13T00:15:02Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46923->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}" 2025-10-13T00:15:02.044038280+00:00 stderr F W1013 00:15:02.043972 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40863->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:02.617529303+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-10-13T00:15:02.617529303+00:00 stderr F time="2025-10-13T00:15:02Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:56742->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-10-13T00:15:03.424039566+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-10-13T00:15:03.424039566+00:00 stderr F time="2025-10-13T00:15:03Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40863->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-10-13T00:15:04.013621251+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:15:04.013621251+00:00 stderr F time="2025-10-13T00:15:04Z" level=warning msg="error getting bundle stream" action="refresh cache" 
err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:58140->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2025-10-13T00:15:04.161940505+00:00 stderr F W1013 00:15:04.161886 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:53530->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:04.574963090+00:00 stderr F W1013 00:15:04.574907 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:47716->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:04.614481654+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-10-13T00:15:04.614557316+00:00 stderr F time="2025-10-13T00:15:04Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:53530->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}" 2025-10-13T00:15:04.662796172+00:00 stderr F W1013 00:15:04.662730 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49276->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:05.101390443+00:00 stderr F W1013 00:15:05.101303 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60759->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:05.210289666+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-10-13T00:15:05.210289666+00:00 stderr F time="2025-10-13T00:15:05Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49276->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-10-13T00:15:05.813944102+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-10-13T00:15:05.813944102+00:00 stderr F time="2025-10-13T00:15:05Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60759->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-10-13T00:15:07.795449371+00:00 stderr F W1013 00:15:07.795382 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49786->10.217.4.10:53: read: connection refused" 2025-10-13T00:15:09.212375645+00:00 stderr F W1013 00:15:09.212339 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-10-13T00:15:09.352795492+00:00 stderr F W1013 00:15:09.352737 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-10-13T00:15:09.970412377+00:00 stderr F W1013 00:15:09.970314 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-10-13T00:15:14.615445289+00:00 stderr F W1013 00:15:14.615393 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-10-13T00:15:15.094752279+00:00 stderr F W1013 00:15:15.094684 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-10-13T00:15:15.183001684+00:00 stderr F W1013 00:15:15.182949 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-10-13T00:15:16.475398996+00:00 stderr F W1013 00:15:16.475305 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-10-13T00:15:23.384357832+00:00 stderr F W1013 00:15:23.383838 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-10-13T00:15:25.584850461+00:00 stderr F W1013 00:15:25.584803 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-10-13T00:15:26.318501293+00:00 stderr F W1013 00:15:26.318431 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-10-13T00:15:27.056244417+00:00 stderr F W1013 00:15:27.056160 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-10-13T00:15:39.104959379+00:00 stderr F W1013 00:15:39.104365 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-10-13T00:15:40.654981950+00:00 stderr F W1013 00:15:40.654902 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-10-13T00:15:41.479584107+00:00 stderr F W1013 00:15:41.479482 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-10-13T00:15:41.494734491+00:00 stderr F W1013 00:15:41.494681 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-10-13T00:16:03.317104211+00:00 stderr F W1013 00:16:03.317030 1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-10-13T00:16:06.188501851+00:00 stderr F W1013 00:16:06.188441 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-10-13T00:16:10.501888788+00:00 stderr F W1013 00:16:10.501776 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-10-13T00:16:11.448773065+00:00 stderr F W1013 00:16:11.448446 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-10-13T00:16:23.672825230+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-10-13T00:16:23.672870751+00:00 stderr F time="2025-10-13T00:16:23Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-10-13T00:16:27.428044615+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-10-13T00:16:27.428095697+00:00 stderr F time="2025-10-13T00:16:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused\"" source="{certified-operators openshift-marketplace}" 2025-10-13T00:16:29.035184309+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:16:29.035184309+00:00 stderr F time="2025-10-13T00:16:29Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-10-13T00:16:50.944452482+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-10-13T00:16:51.340562345+00:00 stderr F W1013 00:16:51.340482 1 logging.go:59] [core] [Channel #6 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-10-13T00:16:56.473526663+00:00 stderr F W1013 00:16:56.473086 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-10-13T00:16:57.728637225+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:16:57.728709387+00:00 stderr F time="2025-10-13T00:16:57Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-10-13T00:16:58.262819848+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-10-13T00:16:59.733459525+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:16:59.733459525+00:00 stderr F time="2025-10-13T00:16:59Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-10-13T00:17:16.817505010+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-10-13T00:17:16.817575382+00:00 stderr F time="2025-10-13T00:17:16Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-10-13T00:17:59.643640173+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-10-13T00:18:09.987992977+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-10-13T00:22:24.539353171+00:00 stderr F E1013 00:22:24.538771 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.542433574+00:00 stderr F E1013 00:22:24.542391 1 
errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:22:25.512568433+00:00 stderr F E1013 00:22:25.512043 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:22:25.512568433+00:00 stderr F E1013 00:22:25.512549 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:22:25.524282058+00:00 stderr F E1013 00:22:25.524231 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:22:25.524366290+00:00 stderr F E1013 00:22:25.524281 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log
2025-08-13T19:59:19.965638478+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="Using in-cluster kube client config"
2025-08-13T19:59:20.758915041+00:00 stderr F time="2025-08-13T19:59:20Z" level=info msg="Defaulting Interval to '12h0m0s'"
2025-08-13T19:59:21.874938483+00:00 stderr F I0813 19:59:21.865254 1 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager
2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3"
2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="operator ready"
2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting informers..."
2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="informers started"
2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="waiting for caches to sync..."
2025-08-13T19:59:22.456175441+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting workers..."
2025-08-13T19:59:22.478751865+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T19:59:22.707297850+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T19:59:22.802301208+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T19:59:22.948340281+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T19:59:23.487362766+00:00 stderr F I0813 19:59:23.487270 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:23.487461169+00:00 stderr F I0813 19:59:23.487443 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:23.617713662+00:00 stderr F I0813 19:59:23.616166 1 secure_serving.go:213] Serving securely on [::]:5443 2025-08-13T19:59:23.658459603+00:00 stderr F I0813 19:59:23.658385 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:23.658534185+00:00 stderr F I0813 19:59:23.658520 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:23.658735911+00:00 stderr F I0813 19:59:23.658704 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:23.658818703+00:00 stderr F I0813 19:59:23.658762 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.658948727+00:00 stderr F I0813 19:59:23.658931 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.658987408+00:00 stderr F I0813 19:59:23.658970 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.660177232+00:00 stderr F I0813 19:59:23.659931 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:23.683340952+00:00 stderr F I0813 19:59:23.683297 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.683464446+00:00 stderr F I0813 19:59:23.660065 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-08-13T19:59:23.697110585+00:00 stderr F I0813 19:59:23.693327 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:23.707916413+00:00 stderr F W0813 19:59:23.706281 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: 
"community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:50216->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.749929450+00:00 stderr F W0813 19:59:23.749230 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:59130->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.789565970+00:00 stderr F W0813 19:59:23.774492 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:34813->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.789565970+00:00 stderr F I0813 19:59:23.665072 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.789565970+00:00 stderr F I0813 19:59:23.788885 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.851459845+00:00 stderr F I0813 19:59:23.838251 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.859441 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.862289 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.862499 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.862529 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.872977 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F W0813 19:59:23.959545 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46183->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.959998 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.960030 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960080 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960117 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960260 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:23.977470177+00:00 stderr F E0813 19:59:23.977429 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.977557779+00:00 stderr F E0813 19:59:23.977533 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.978249119+00:00 stderr F E0813 19:59:23.978222 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.988976935+00:00 stderr F E0813 19:59:23.988881 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.996523760+00:00 stderr F E0813 19:59:23.996485 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.999168595+00:00 stderr F E0813 19:59:23.999078 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.003033765+00:00 stderr F E0813 19:59:24.002966 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.015609204+00:00 stderr F E0813 19:59:24.014282 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.033490984+00:00 stderr F E0813 19:59:24.012013 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.033653068+00:00 stderr F E0813 19:59:24.033622 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.044284161+00:00 stderr F E0813 19:59:24.043528 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.054992656+00:00 stderr F E0813 19:59:24.054324 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.054992656+00:00 stderr F E0813 19:59:24.054381 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.078767994+00:00 stderr F E0813 19:59:24.077021 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.097853998+00:00 stderr F E0813 19:59:24.095203 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.125724753+00:00 stderr F E0813 19:59:24.124005 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.136417948+00:00 stderr F E0813 19:59:24.136044 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.160041621+00:00 stderr F E0813 19:59:24.157966 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.176171051+00:00 stderr F E0813 19:59:24.175957 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.288501183+00:00 stderr F E0813 19:59:24.284585 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.300258988+00:00 stderr F E0813 19:59:24.300092 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.322673827+00:00 stderr F E0813 19:59:24.321370 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:24.338433826+00:00 stderr F E0813 19:59:24.338312 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.642557 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.643580 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.646489 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.682534534+00:00 stderr F E0813 19:59:24.678028 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.286624 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.287059 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.287505 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.301457677+00:00 stderr F W0813 19:59:25.298317 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55001->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:25.318372169+00:00 stderr F W0813 19:59:25.318170 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:54845->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:25.318478232+00:00 stderr F W0813 19:59:25.318405 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49886->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:25.318491072+00:00 stderr F W0813 19:59:25.318472 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60072->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:25.318637136+00:00 stderr F E0813 19:59:25.318603 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.249019077+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T19:59:26.256263304+00:00 stderr F time="2025-08-13T19:59:26Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55001->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2025-08-13T19:59:26.295117731+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T19:59:26.295117731+00:00 stderr F time="2025-08-13T19:59:26Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60072->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}" 2025-08-13T19:59:26.583654216+00:00 stderr F E0813 19:59:26.582010 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.583654216+00:00 stderr F E0813 19:59:26.582431 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.584416318+00:00 stderr F E0813 19:59:26.584012 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.605873800+00:00 stderr F E0813 19:59:26.605652 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.095678202+00:00 stderr F E0813 19:59:27.094975 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.123041232+00:00 stderr F E0813 19:59:27.122181 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.123041232+00:00 stderr F E0813 19:59:27.122689 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.123218527+00:00 stderr F E0813 19:59:27.123107 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.176560857+00:00 stderr F E0813 19:59:27.126044 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.247275103+00:00 stderr F W0813 19:59:27.245344 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43093->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:27.489334133+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T19:59:27.489534669+00:00 stderr F time="2025-08-13T19:59:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49886->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-08-13T19:59:27.611118185+00:00 stderr F W0813 19:59:27.610462 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38532->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:27.611508286+00:00 stderr F W0813 19:59:27.611349 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38374->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:27.666886914+00:00 stderr F W0813 19:59:27.662119 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:53325->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:27.722828369+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T19:59:27.722999974+00:00 stderr F time="2025-08-13T19:59:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38532->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-08-13T19:59:27.892387802+00:00 stderr F E0813 19:59:27.892315 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.894210574+00:00 stderr F E0813 19:59:27.893518 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.894714329+00:00 stderr F E0813 19:59:27.893532 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.895250094+00:00 stderr F E0813 19:59:27.893611 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.896173530+00:00 stderr F E0813 19:59:27.893652 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 
2025-08-13T19:59:27.939682790+00:00 stderr F E0813 19:59:27.907594 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.975969835+00:00 stderr F E0813 19:59:27.975907 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:27.979482245+00:00 stderr F E0813 19:59:27.979448 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.285637351+00:00 stderr F E0813 19:59:28.149888 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.305910959+00:00 stderr F E0813 19:59:28.153221 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.305910959+00:00 stderr F E0813 19:59:28.154254 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.321043260+00:00 stderr F E0813 19:59:28.155060 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.509691 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.510964 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.511241 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.538281293+00:00 stderr F E0813 19:59:28.538083 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.539471637+00:00 stderr F E0813 19:59:28.538879 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, 
AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.580161727+00:00 stderr F E0813 19:59:28.576581 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.586629151+00:00 stderr F E0813 19:59:28.583976 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.586629151+00:00 stderr F E0813 19:59:28.586318 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.596542443+00:00 stderr F E0813 19:59:28.595931 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.617760498+00:00 stderr F E0813 19:59:28.615621 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.620897 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.621387 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.621475 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.627189007+00:00 stderr F E0813 19:59:28.626373 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.627189007+00:00 stderr F E0813 19:59:28.626969 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.702275718+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T19:59:28.702328449+00:00 
stderr F time="2025-08-13T19:59:28Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43093->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}" 2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.860255 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.860620 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861078 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861521 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861751 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.142703442+00:00 stderr F E0813 19:59:29.142638 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.153274173+00:00 stderr F E0813 19:59:29.153156 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.155073075+00:00 stderr F E0813 19:59:29.153425 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.166039687+00:00 stderr F E0813 19:59:29.165990 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:29.258589855+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T19:59:29.258742140+00:00 stderr F time="2025-08-13T19:59:29Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = 
\"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38374->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.419873 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420250 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420442 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420622 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420927 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:29.626727279+00:00 stderr F W0813 19:59:29.625254 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49867->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:29.928732508+00:00 stderr F W0813 19:59:29.916278 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52709->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:30.160441743+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T19:59:30.168229755+00:00 stderr F time="2025-08-13T19:59:30Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52709->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-08-13T19:59:30.201098882+00:00 stderr F E0813 19:59:30.201004 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.205076685+00:00 stderr F E0813 19:59:30.201367 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.205076685+00:00 stderr F E0813 19:59:30.201700 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.261903495+00:00 stderr F E0813 19:59:30.242444 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.275320018+00:00 stderr F E0813 19:59:30.275283 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:30.571214002+00:00 stderr F W0813 19:59:30.571157 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40819->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:30.798852981+00:00 stderr F W0813 19:59:30.796635 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43500->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.705416 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706088 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706282 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706505 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.724590 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:33.186124520+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T19:59:33.186291375+00:00 stderr F time="2025-08-13T19:59:33Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43500->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-08-13T19:59:33.731313711+00:00 stderr F W0813 19:59:33.731250 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:34.232634982+00:00 stderr F W0813 19:59:34.231206 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:34.280269209+00:00 stderr F E0813 19:59:34.266200 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.283483771+00:00 stderr F E0813 19:59:34.281889 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.305643193+00:00 stderr F E0813 19:59:34.305574 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.306139547+00:00 stderr F E0813 19:59:34.306120 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.545033677+00:00 stderr F E0813 19:59:34.522194 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.545033677+00:00 stderr F E0813 19:59:34.522523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.674225549+00:00 stderr F E0813 19:59:34.673123 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.706747196+00:00 stderr F E0813 19:59:34.705343 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.754865488+00:00 stderr F E0813 19:59:34.669727 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:34.850960187+00:00 stderr F W0813 19:59:34.850308 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T19:59:35.058219095+00:00 stderr F W0813 19:59:35.056680 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:39.987047992+00:00 stderr F E0813 19:59:39.985498 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.039932499+00:00 stderr F E0813 19:59:40.039605 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.100807584+00:00 stderr F E0813 19:59:40.100483 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.260306171+00:00 stderr F E0813 19:59:40.243282 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.283765190+00:00 stderr F E0813 19:59:40.260700 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.700961392+00:00 stderr F W0813 19:59:40.680350 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:40.788635771+00:00 stderr F W0813 19:59:40.788513 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:41.074215822+00:00 stderr F W0813 19:59:41.073168 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:41.319629407+00:00 stderr F W0813 19:59:41.315176 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T19:59:44.540736135+00:00 stderr F E0813 19:59:44.524881 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.540736135+00:00 stderr F E0813 19:59:44.540258 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.633273543+00:00 stderr F E0813 19:59:44.633111 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.633371135+00:00 stderr F E0813 19:59:44.633328 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.932636194+00:00 stderr F W0813 19:59:49.931997 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:50.531744722+00:00 stderr F E0813 19:59:50.530292 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.844545 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.845564 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.852372 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871571450+00:00 stderr F E0813 19:59:50.871534 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:51.326018234+00:00 stderr F W0813 19:59:51.325587 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:51.450971396+00:00 stderr F W0813 19:59:51.450692 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:52.606671350+00:00 stderr F W0813 19:59:52.605750 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:03.507188075+00:00 stderr F W0813 20:00:03.505514 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:00:06.623020709+00:00 stderr F W0813 20:00:06.622177 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:00:06.623415940+00:00 stderr F W0813 20:00:06.623258 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:00:07.675007495+00:00 stderr F W0813 20:00:07.671547 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:27.945990342+00:00 stderr F W0813 20:00:27.941701 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:00:29.988055289+00:00 stderr F W0813 20:00:29.985630 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:31.935746376+00:00 stderr F W0813 20:00:31.935224 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:00:35.857438308+00:00 stderr F W0813 20:00:35.855636 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:01:10.363073166+00:00 stderr F W0813 20:01:10.361738 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:01:11.010767704+00:00 stderr F W0813 20:01:11.009529 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:01:11.878877308+00:00 stderr F W0813 20:01:11.876332 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:01:21.349283955+00:00 stderr F W0813 20:01:21.348270 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:02:22.615119270+00:00 stderr F W0813 20:02:22.614245 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:02:24.747603143+00:00 stderr F W0813 20:02:24.746366 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:02:26.988925622+00:00 stderr F W0813 20:02:26.988481 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:02:30.453457865+00:00 stderr F W0813 20:02:30.452679 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:03:52.874053825+00:00 stderr F W0813 20:03:52.871907 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:04:04.401934902+00:00 stderr F E0813 20:04:04.401068 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.402998952+00:00 stderr F E0813 20:04:04.402948 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.281572926+00:00 stderr F E0813 20:04:05.280372 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.281572926+00:00 stderr F E0813 20:04:05.281070 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.286935489+00:00 stderr F E0813 20:04:05.284345 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.286935489+00:00 stderr F E0813 20:04:05.284410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.215685534+00:00 stderr F W0813 20:04:09.215602 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:04:29.629577865+00:00 stderr F W0813 20:04:29.629008 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:04:35.640297450+00:00 stderr F W0813 20:04:35.639529 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:05:05.288435794+00:00 stderr F E0813 20:05:05.287900 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.288496986+00:00 stderr F E0813 20:05:05.288432 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.321420159+00:00 stderr F E0813 20:05:05.321363 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.321516132+00:00 stderr F E0813 20:05:05.321498 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:33.199988341+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:06:33.200242358+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:06:46.091914232+00:00 stderr F time="2025-08-13T20:06:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:06:46.092101228+00:00 stderr F time="2025-08-13T20:06:46Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-08-13T20:06:46.092101228+00:00 stderr F time="2025-08-13T20:06:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" 
address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:06:48.336406265+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:07:00.663275096+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:07:19.651681279+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:07:41.934622389+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:07:50.348659746+00:00 stderr F time="2025-08-13T20:07:50Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:08:18.638742578+00:00 stderr F time="2025-08-13T20:08:18Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:08:42.737265511+00:00 stderr F E0813 20:08:42.735492 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.744188410+00:00 stderr F E0813 20:08:42.744107 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.621857893+00:00 stderr F E0813 20:08:43.618322 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.621857893+00:00 stderr F E0813 20:08:43.618410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.625036794+00:00 stderr F E0813 20:08:43.624526 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.625036794+00:00 stderr F E0813 20:08:43.624621 1 errors.go:77] Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:40.151279360+00:00 stderr F time="2025-08-13T20:09:40Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:09:40.151279360+00:00 stderr F time="2025-08-13T20:09:40Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:09:49.496565397+00:00 stderr F time="2025-08-13T20:09:49Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:09:52.875492813+00:00 stderr F time="2025-08-13T20:09:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:16:58.160638342+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:17:00.205634852+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:17:16.072748758+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:17:30.179055164+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:27:05.651766544+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:27:05.841924311+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:28:43.332268651+00:00 stderr F 
time="2025-08-13T20:28:43Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:29:25.916234348+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:29:25.916953379+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:29:33.240381805+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:29:36.514472290+00:00 stderr F time="2025-08-13T20:29:36Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:29:44.206538782+00:00 stderr F time="2025-08-13T20:29:44Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:29:52.126891777+00:00 stderr F time="2025-08-13T20:29:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:37:48.225824045+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:38:36.094882524+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:41:21.481556744+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:42:26.028870400+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" 
address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:42:39.339171660+00:00 stderr F W0813 20:42:39.338527 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:39.734393723+00:00 stderr F W0813 20:42:39.734342 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:39.951022478+00:00 stderr F W0813 20:42:39.950937 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:40.047724586+00:00 stderr F W0813 20:42:40.047677 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:40.344992617+00:00 stderr F W0813 20:42:40.344945 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:40.742693633+00:00 stderr F W0813 20:42:40.742601 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:40.959143163+00:00 stderr F W0813 20:42:40.959053 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:41.055167932+00:00 stderr F W0813 20:42:41.055001 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:41.670657416+00:00 stderr F W0813 20:42:41.670253 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:42.480053712+00:00 stderr F W0813 20:42:42.479949 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:42.551026478+00:00 stderr F W0813 20:42:42.550906 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:42.839825044+00:00 stderr F W0813 20:42:42.839695 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:44.057333295+00:00 stderr F I0813 20:42:44.056154 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:44.057333295+00:00 stderr F I0813 20:42:44.056440 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:44.057445078+00:00 stderr F I0813 20:42:44.057377 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:44.057554841+00:00 stderr F I0813 20:42:44.057509 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:44.057614363+00:00 stderr F I0813 20:42:44.057573 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:44.057673074+00:00 stderr F I0813 20:42:44.057635 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:44.058142168+00:00 stderr F I0813 20:42:44.057951 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-08-13T20:42:44.058402505+00:00 stderr F I0813 20:42:44.058348 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:44.058523589+00:00 stderr F I0813 20:42:44.058456 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:44.059177148+00:00 stderr F 
I0813 20:42:44.059151 1 secure_serving.go:258] Stopped listening on [::]:5443 ././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000366515073043232033067 0ustar zuulzuul2025-08-13T20:00:19.608298628+00:00 stderr F I0813 20:00:19.605121 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc0008f6be0 max-eligible-revision:0xc0008f6960 protected-revisions:0xc0008f6a00 resource-dir:0xc0008f6aa0 static-pod-name:0xc0008f6b40 v:0xc0008f72c0] [0xc0008f72c0 0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6be0 0xc0008f6b40] [] map[cert-dir:0xc0008f6be0 help:0xc0008f7680 log-flush-frequency:0xc0008f7220 max-eligible-revision:0xc0008f6960 protected-revisions:0xc0008f6a00 resource-dir:0xc0008f6aa0 static-pod-name:0xc0008f6b40 v:0xc0008f72c0 vmodule:0xc0008f7360] [0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6b40 0xc0008f6be0 0xc0008f7220 0xc0008f72c0 0xc0008f7360 0xc0008f7680] [0xc0008f6be0 0xc0008f7680 0xc0008f7220 0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6b40 0xc0008f72c0 0xc0008f7360] map[104:0xc0008f7680 118:0xc0008f72c0] [] -1 0 0xc0008adc80 true 0x73b100 []} 2025-08-13T20:00:19.608298628+00:00 stderr F I0813 20:00:19.606611 1 cmd.go:41] (*prune.PruneOptions)(0xc0008e31d0)({ 2025-08-13T20:00:19.608298628+00:00 stderr F MaxEligibleRevision: (int) 9, 2025-08-13T20:00:19.608298628+00:00 stderr F ProtectedRevisions: ([]int) (len=6 cap=6) { 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 4, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 5, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 6, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 7, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 8, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 9 2025-08-13T20:00:19.608298628+00:00 stderr F }, 2025-08-13T20:00:19.608298628+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:00:19.608298628+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:00:19.608298628+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:00:19.608298628+00:00 stderr F }) ././@LongLink0000644000000000000000000000027000000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000755000175000017500000000000015073043233033106 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000755000175000017500000000000015073043313033105 5ustar zuulzuul././@LongLink0000644000000000000000000000032600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000644000175000017500000626214615073043233033131 0ustar zuulzuul2025-10-13T00:12:49.341718772+00:00 stderr F I1013 00:12:49.341483 1 start.go:23] ClusterVersionOperator 4.16.0-202406131906.p0.g6f553e9.assembly.stream.el9-6f553e9 2025-10-13T00:12:49.408427844+00:00 stderr F I1013 00:12:49.408343 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:12:49.470829164+00:00 stderr F I1013 00:12:49.468907 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:49.470829164+00:00 stderr F I1013 00:12:49.469168 1 upgradeable.go:446] ConfigMap openshift-config/admin-acks added. 2025-10-13T00:12:49.491020510+00:00 stderr F I1013 00:12:49.490689 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:12:49.505524363+00:00 stderr F I1013 00:12:49.505403 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:49.505744369+00:00 stderr F I1013 00:12:49.505658 1 upgradeable.go:446] ConfigMap openshift-config-managed/admin-gates added. 2025-10-13T00:12:49.523544187+00:00 stderr F I1013 00:12:49.521599 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:12:49.543113345+00:00 stderr F I1013 00:12:49.543054 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:12:49.558224976+00:00 stderr F I1013 00:12:49.558158 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:12:49.558679969+00:00 stderr F I1013 00:12:49.558377 1 start.go:295] Waiting on 1 outstanding goroutines. 2025-10-13T00:12:49.558732630+00:00 stderr F I1013 00:12:49.558411 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-version/version... 
2025-10-13T00:18:25.864834356+00:00 stderr F I1013 00:18:25.864738 1 leaderelection.go:260] successfully acquired lease openshift-cluster-version/version 2025-10-13T00:18:25.864956120+00:00 stderr F I1013 00:18:25.864928 1 start.go:565] FeatureGate found in cluster, using its feature set "" at startup 2025-10-13T00:18:25.864967440+00:00 stderr F I1013 00:18:25.864874 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-cluster-version", Name:"version", UID:"4c78d446-a3f8-4879-b39d-248de5762283", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_91e422d9-82ec-4e39-a11b-b3a858aee054 became leader 2025-10-13T00:18:25.865019692+00:00 stderr F I1013 00:18:25.864997 1 payload.go:307] Loading updatepayload from "/" 2025-10-13T00:18:25.866561128+00:00 stderr F I1013 00:18:25.866520 1 metrics.go:154] Metrics port listening for HTTPS on 0.0.0.0:9099 2025-10-13T00:18:25.880614556+00:00 stderr F I1013 00:18:25.879804 1 payload.go:403] Architecture from release-metadata (4.16.0) retrieved from runtime: "amd64" 2025-10-13T00:18:25.899212949+00:00 stderr F I1013 00:18:25.898635 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:25.910733552+00:00 stderr F I1013 00:18:25.910673 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.011780830+00:00 stderr F I1013 00:18:26.011704 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-Hypershift.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.015238862+00:00 stderr F I1013 00:18:26.015195 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.022057535+00:00 stderr F I1013 00:18:26.021547 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:26.027045544+00:00 stderr F I1013 00:18:26.026970 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.033054613+00:00 stderr F I1013 00:18:26.033014 1 payload.go:210] excluding Filename: 
"0000_10_config-operator_01_backups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.035048412+00:00 stderr F I1013 00:18:26.034047 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.039440963+00:00 stderr F I1013 00:18:26.039197 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.045191484+00:00 stderr F I1013 00:18:26.045132 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:26.048061339+00:00 stderr F I1013 00:18:26.048037 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.073910429+00:00 stderr F I1013 00:18:26.073859 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.096599764+00:00 stderr F I1013 00:18:26.096551 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:26.110765236+00:00 stderr F I1013 00:18:26.110716 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.131126922+00:00 stderr F I1013 00:18:26.131078 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.133875323+00:00 stderr F I1013 00:18:26.133856 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-DevPreviewNoUpgrade.crd.yaml" Group: 
"apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:26.135342727+00:00 stderr F I1013 00:18:26.135308 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.155934100+00:00 stderr F I1013 00:18:26.155815 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.157307421+00:00 stderr F I1013 00:18:26.157270 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.164954608+00:00 stderr F I1013 00:18:26.164923 1 payload.go:210] excluding Filename: "0000_10_openshift_service-ca_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-service-ca": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.171055220+00:00 stderr F I1013 00:18:26.171036 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.173434371+00:00 stderr F I1013 00:18:26.173419 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:26.174656227+00:00 stderr F I1013 00:18:26.174640 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.176503142+00:00 stderr F I1013 00:18:26.176485 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.179696347+00:00 stderr F I1013 00:18:26.179678 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.402371124+00:00 stderr F I1013 00:18:26.402270 1 payload.go:210] excluding Filename: 
"0000_30_cluster-api-provider-openstack_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.467088270+00:00 stderr F I1013 00:18:26.466480 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_04_infrastructure-components.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.469498952+00:00 stderr F I1013 00:18:26.469457 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-aws": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.469498952+00:00 stderr F I1013 00:18:26.469485 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-azure": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.469521073+00:00 stderr F I1013 00:18:26.469500 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-gcp": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.469521073+00:00 stderr F I1013 00:18:26.469514 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-powervs": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.469546374+00:00 stderr F I1013 00:18:26.469529 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-vsphere": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.470470421+00:00 stderr F I1013 00:18:26.470439 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.471043418+00:00 stderr F I1013 00:18:26.471010 1 payload.go:210] excluding Filename: "0000_30_cluster-api_01_images.configmap.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-images": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.472884243+00:00 stderr F I1013 00:18:26.472830 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2025-10-13T00:18:26.472884243+00:00 stderr F I1013 00:18:26.472868 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "Secret" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-secret": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.473706757+00:00 stderr F I1013 00:18:26.473663 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_webhook-service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-webhook-service": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.474654796+00:00 stderr F I1013 00:18:26.474618 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.474654796+00:00 stderr F I1013 00:18:26.474646 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.477527471+00:00 stderr F I1013 00:18:26.477480 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.core-cluster-api.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.523812349+00:00 stderr F I1013 00:18:26.522965 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-aws.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "aws": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.532685323+00:00 stderr F I1013 00:18:26.532630 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-gcp.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "gcp": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.544395891+00:00 stderr F I1013 00:18:26.543058 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-ibmcloud.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "ibmcloud": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.568311783+00:00 stderr F I1013 00:18:26.568173 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-vsphere.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "vsphere": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631075 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterclasses.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631119 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: 
"CustomResourceDefinition" Name: "clusters.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631132 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machines.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631144 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinesets.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631156 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinedeployments.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631168 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinepools.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631180 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesets.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631192 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesetbindings.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631202 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinehealthchecks.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.631493473+00:00 stderr F I1013 00:18:26.631212 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "extensionconfigs.runtime.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:26.632611347+00:00 stderr F I1013 00:18:26.632456 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.632667188+00:00 stderr F I1013 00:18:26.632655 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: 
"rbac.authorization.k8s.io" Kind: "RoleBinding" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.635793721+00:00 stderr F I1013 00:18:26.634862 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.635793721+00:00 stderr F I1013 00:18:26.634895 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "validating-webhook-configuration": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.640564493+00:00 stderr F I1013 00:18:26.640510 1 payload.go:210] excluding Filename: "0000_30_cluster-api_11_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.642360827+00:00 stderr F I1013 00:18:26.642335 1 payload.go:210] excluding Filename: "0000_30_cluster-api_12_clusteroperator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.659778545+00:00 stderr F I1013 00:18:26.659707 1 payload.go:210] excluding Filename: "0000_30_machine-api-operator_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-machine-api-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.726682586+00:00 stderr F I1013 00:18:26.726617 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "Namespace" Name: "baremetal-operator-system": no annotations 2025-10-13T00:18:26.726733998+00:00 stderr F I1013 00:18:26.726678 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "ConfigMap" Namespace: "baremetal-operator-system" Name: "ironic": no annotations 2025-10-13T00:18:26.832423973+00:00 stderr F I1013 00:18:26.831849 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.833371472+00:00 stderr F I1013 00:18:26.832915 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.834371951+00:00 stderr F I1013 00:18:26.834317 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.835861516+00:00 stderr F I1013 00:18:26.835826 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-Default.yaml" Group: "config.openshift.io" Kind: 
"FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2025-10-13T00:18:26.836888586+00:00 stderr F I1013 00:18:26.836856 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2025-10-13T00:18:26.837769192+00:00 stderr F I1013 00:18:26.837732 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2025-10-13T00:18:26.849379318+00:00 stderr F I1013 00:18:26.849270 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_04_serviceaccount-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "csi-snapshot-controller-operator": no annotations 2025-10-13T00:18:26.856711456+00:00 stderr F I1013 00:18:26.856660 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_05_operator_role-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Name: "csi-snapshot-controller-operator-role": no annotations 2025-10-13T00:18:26.872338941+00:00 stderr F I1013 00:18:26.872275 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_06_operator_rolebinding-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Name: "csi-snapshot-controller-operator-role": no annotations 2025-10-13T00:18:26.876567027+00:00 stderr F I1013 00:18:26.876522 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "csi-snapshot-controller-operator": no annotations 2025-10-13T00:18:26.879491354+00:00 stderr F I1013 00:18:26.879441 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "csi-snapshot-controller-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.893330526+00:00 stderr F I1013 00:18:26.893149 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:26.928017628+00:00 stderr F I1013 00:18:26.927854 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:26.941429468+00:00 stderr F I1013 00:18:26.941256 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:26.965389321+00:00 stderr F I1013 00:18:26.965299 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-image-registry" Name: "cluster-image-registry-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.988901070+00:00 stderr F I1013 00:18:26.988829 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-ibmcloud": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.988901070+00:00 stderr F I1013 00:18:26.988860 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-powervs": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.988901070+00:00 stderr F I1013 00:18:26.988872 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-ingress-operator" Name: "openshift-ingress-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:26.994374433+00:00 stderr F I1013 00:18:26.993329 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-ingress-operator" Name: "ingress-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.000603059+00:00 stderr F I1013 00:18:27.000563 1 payload.go:210] excluding Filename: "0000_50_cluster-kube-storage-version-migrator-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-kube-storage-version-migrator-operator" Name: "kube-storage-version-migrator-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.003353891+00:00 stderr F I1013 00:18:27.003322 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.003365901+00:00 stderr F I1013 00:18:27.003345 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.009710970+00:00 stderr F I1013 00:18:27.008968 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_04-deployment-capi.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-machine-approver" Name: "machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.281730856+00:00 stderr F I1013 00:18:27.281635 1 
payload.go:210] excluding Filename: "0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-monitoring" Name: "cluster-monitoring-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.298733332+00:00 stderr F I1013 00:18:27.297995 1 payload.go:210] excluding Filename: "0000_50_cluster-node-tuning-operator_50-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-node-tuning-operator" Name: "cluster-node-tuning-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.328317912+00:00 stderr F I1013 00:18:27.328083 1 payload.go:210] excluding Filename: "0000_50_cluster-samples-operator_06-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-samples-operator" Name: "cluster-samples-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.363112907+00:00 stderr F I1013 00:18:27.362721 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-CustomNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.368569469+00:00 stderr F I1013 00:18:27.368516 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-TechPreviewNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.373489206+00:00 stderr F I1013 00:18:27.371038 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_06_operator_cr-hypershift.yaml" Group: "operator.openshift.io" Kind: "Storage" Name: "cluster": no annotations 2025-10-13T00:18:27.374252458+00:00 stderr F I1013 00:18:27.374177 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_07_service_account-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "cluster-storage-operator": no annotations 2025-10-13T00:18:27.384428621+00:00 stderr F I1013 00:18:27.384220 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_08_operator_rbac-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-storage-operator-role": no annotations 2025-10-13T00:18:27.404285992+00:00 stderr F I1013 00:18:27.402295 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "cluster-storage-operator": no annotations 2025-10-13T00:18:27.404285992+00:00 stderr F I1013 00:18:27.404122 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "cluster-storage-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.409919330+00:00 stderr F I1013 00:18:27.409871 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_11_cluster_operator-hypershift.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "storage": no annotations 2025-10-13T00:18:27.413745594+00:00 stderr F I1013 00:18:27.413705 1 payload.go:210] excluding Filename: 
"0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "aws-ebs-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413745594+00:00 stderr F I1013 00:18:27.413725 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-disk-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413745594+00:00 stderr F I1013 00:18:27.413736 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-file-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413765484+00:00 stderr F I1013 00:18:27.413749 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ibm-vpc-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413765484+00:00 stderr F I1013 00:18:27.413759 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "powervs-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413793205+00:00 stderr F I1013 00:18:27.413773 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "manila-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413793205+00:00 stderr F I1013 00:18:27.413787 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ovirt-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413814876+00:00 stderr F I1013 00:18:27.413800 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vmware-vsphere-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.413822086+00:00 stderr F I1013 00:18:27.413817 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vsphere-problem-detector": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.464823534+00:00 stderr F I1013 00:18:27.464753 1 payload.go:210] excluding Filename: 
"0000_50_console-operator_07-conversionwebhook-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-conversion-webhook": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.468811733+00:00 stderr F I1013 00:18:27.467726 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.523871321+00:00 stderr F I1013 00:18:27.523348 1 payload.go:210] excluding Filename: "0000_50_insights-operator_03-insightsdatagather-config-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "insightsdatagathers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.530506559+00:00 stderr F I1013 00:18:27.530469 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-datagather-insights-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "datagathers.insights.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.532383745+00:00 stderr F I1013 00:18:27.532348 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-insightsdatagather-config-cr.yaml" Group: "config.openshift.io" Kind: "InsightsDataGather" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.547775623+00:00 stderr F I1013 00:18:27.545758 1 payload.go:210] excluding Filename: "0000_50_insights-operator_06-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-insights" Name: "insights-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.671491095+00:00 stderr F I1013 00:18:27.671426 1 payload.go:210] excluding Filename: "0000_50_olm_06-psm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "package-server-manager": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.675788093+00:00 stderr F I1013 00:18:27.675759 1 payload.go:210] excluding Filename: "0000_50_olm_07-olm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "olm-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.677639848+00:00 stderr F I1013 00:18:27.677604 1 payload.go:210] excluding Filename: "0000_50_olm_08-catalog-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "catalog-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.687124980+00:00 stderr F I1013 00:18:27.687078 1 payload.go:210] excluding Filename: "0000_50_operator-marketplace_09_operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-marketplace" Name: "marketplace-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.694391316+00:00 stderr F I1013 00:18:27.694340 1 payload.go:210] excluding Filename: "0000_50_service-ca-operator_05_deploy-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: 
"openshift-service-ca-operator" Name: "service-ca-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.696180919+00:00 stderr F I1013 00:18:27.696144 1 payload.go:210] excluding Filename: "0000_50_tests_test-reporting.yaml" Group: "config.openshift.io" Kind: "TestReporting" Name: "cluster": no annotations 2025-10-13T00:18:27.696912491+00:00 stderr F I1013 00:18:27.696666 1 payload.go:210] excluding Filename: "0000_51_olm_00-olm-operator.yml" Group: "operator.openshift.io" Kind: "OLM" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.697558870+00:00 stderr F I1013 00:18:27.697130 1 payload.go:210] excluding Filename: "0000_51_olm_01_operator_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.698117547+00:00 stderr F I1013 00:18:27.698080 1 payload.go:210] excluding Filename: "0000_51_olm_02_operator_clusterrole.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.698469778+00:00 stderr F I1013 00:18:27.698438 1 payload.go:210] excluding Filename: "0000_51_olm_03_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.698936581+00:00 stderr F I1013 00:18:27.698905 1 payload.go:210] excluding Filename: "0000_51_olm_04_metrics_service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator-metrics": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.699470837+00:00 stderr F I1013 00:18:27.699422 1 payload.go:210] excluding Filename: "0000_51_olm_05_operator_clusterrolebinding.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-olm-operator-role": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.700469397+00:00 stderr F I1013 00:18:27.700429 1 payload.go:210] excluding Filename: "0000_51_olm_06_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.700937321+00:00 stderr F I1013 00:18:27.700900 1 payload.go:210] excluding Filename: "0000_51_olm_07_cluster_operator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "olm": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.710718342+00:00 stderr F I1013 00:18:27.710675 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_02_rbac.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "default-account-cluster-network-operator": unrecognized value include.release.openshift.io/self-managed-high-availability=false 2025-10-13T00:18:27.713602758+00:00 stderr F I1013 00:18:27.713572 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_03_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-network-operator" Name: "network-operator": include.release.openshift.io/self-managed-high-availability unset 
2025-10-13T00:18:27.737999784+00:00 stderr F I1013 00:18:27.737918 1 payload.go:210] excluding Filename: "0000_70_dns-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-dns-operator" Name: "dns-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:27.741163128+00:00 stderr F I1013 00:18:27.741134 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.743360274+00:00 stderr F I1013 00:18:27.743335 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.756481554+00:00 stderr F I1013 00:18:27.752931 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.790629350+00:00 stderr F I1013 00:18:27.790502 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.844804853+00:00 stderr F I1013 00:18:27.843565 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.857901763+00:00 stderr F I1013 00:18:27.857800 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.885623388+00:00 stderr F I1013 00:18:27.883797 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.891452741+00:00 stderr F I1013 00:18:27.891400 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: 
CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.894194243+00:00 stderr F I1013 00:18:27.894156 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.904395126+00:00 stderr F I1013 00:18:27.904347 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.914758545+00:00 stderr F I1013 00:18:27.914724 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.926279648+00:00 stderr F I1013 00:18:27.926240 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.935221374+00:00 stderr F I1013 00:18:27.935180 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.944153970+00:00 stderr F I1013 00:18:27.944111 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.950072186+00:00 stderr F I1013 00:18:27.950038 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.951930031+00:00 stderr F I1013 00:18:27.951789 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.954216569+00:00 stderr F I1013 00:18:27.954155 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"machineosbuilds.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.958128125+00:00 stderr F I1013 00:18:27.956579 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.961445184+00:00 stderr F I1013 00:18:27.961396 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.964604308+00:00 stderr F I1013 00:18:27.964559 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.967986649+00:00 stderr F I1013 00:18:27.967935 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.969903266+00:00 stderr F I1013 00:18:27.969580 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:27.971121302+00:00 stderr F I1013 00:18:27.970982 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:27.972437991+00:00 stderr F I1013 00:18:27.972264 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:27.983775109+00:00 stderr F I1013 00:18:27.983727 1 payload.go:210] excluding Filename: "0000_90_cluster-baremetal-operator_03_servicemonitor.yaml" Group: "monitoring.coreos.com" Kind: "ServiceMonitor" Namespace: "openshift-machine-api" Name: "cluster-baremetal-operator-servicemonitor": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.218821384+00:00 stderr F I1013 00:18:28.218290 1 cvo.go:315] Verifying release 
authenticity: All release image digests must have GPG signatures from verifier-public-key-redhat (567E347AD0044ADE55BA8A5F199E2F91FD431D51: Red Hat, Inc. (release key 2) , B08B659EE86AF623BC90E8DB938A80CAF21541EB: Red Hat, Inc. (beta key 2) ) - will check for signatures in containers/image format at serial signature store wrapping config maps in openshift-config-managed with label "release.openshift.io/verification-signatures", serial signature store wrapping ClusterVersion signatureStores unset, falling back to default stores, parallel signature store wrapping containers/image signature store under https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release, containers/image signature store under https://storage.googleapis.com/openshift-release/official/signatures/openshift/release 2025-10-13T00:18:28.218821384+00:00 stderr F I1013 00:18:28.218706 1 start.go:590] CVO features for version 4.16.0 enabled at startup: {desiredVersion:4.16.0 unknownVersion:false reconciliationIssuesCondition:false} 2025-10-13T00:18:28.218821384+00:00 stderr F I1013 00:18:28.218771 1 featurechangestopper.go:123] Starting stop-on-features-change controller with startingRequiredFeatureSet="" startingCvoGates={desiredVersion:4.16.0 unknownVersion:false reconciliationIssuesCondition:false} 2025-10-13T00:18:28.218821384+00:00 stderr F I1013 00:18:28.218804 1 cvo.go:415] Starting ClusterVersionOperator with minimum reconcile period 2m57.097487985s 2025-10-13T00:18:28.218881276+00:00 stderr F I1013 00:18:28.218844 1 cvo.go:483] Waiting on 6 outstanding goroutines. 2025-10-13T00:18:28.219014390+00:00 stderr F I1013 00:18:28.218987 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:28.219126953+00:00 stderr F I1013 00:18:28.219079 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:28.219211006+00:00 stderr F I1013 00:18:28.219181 1 sync_worker.go:565] Start: starting sync worker 2025-10-13T00:18:28.219223636+00:00 stderr F I1013 00:18:28.219175 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:28.219248527+00:00 stderr F I1013 00:18:28.219188 1 availableupdates.go:61] First attempt to retrieve available updates 2025-10-13T00:18:28.219333249+00:00 stderr F I1013 00:18:28.219281 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:28.219333249+00:00 stderr F I1013 00:18:28.219327 1 sync_worker.go:461] Initializing prior known value of enabled capabilities from ClusterVersion status. 2025-10-13T00:18:28.219424172+00:00 stderr F I1013 00:18:28.219402 1 sync_worker.go:262] syncPayload: 4.16.0 (force=false) 2025-10-13T00:18:28.219489894+00:00 stderr F I1013 00:18:28.219453 1 payload.go:307] Loading updatepayload from "/" 2025-10-13T00:18:28.219652479+00:00 stderr F I1013 00:18:28.219616 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. 
Please remove overrides before continuing.') 2025-10-13T00:18:28.219652479+00:00 stderr F I1013 00:18:28.219637 1 upgradeable.go:123] Cluster current version=4.16.0 2025-10-13T00:18:28.219665309+00:00 stderr F I1013 00:18:28.219632 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RetrievePayload' Retrieving and verifying payload version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" 2025-10-13T00:18:28.219673400+00:00 stderr F I1013 00:18:28.219661 1 payload.go:403] Architecture from release-metadata (4.16.0) retrieved from runtime: "amd64" 2025-10-13T00:18:28.219673400+00:00 stderr F I1013 00:18:28.219660 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LoadPayload' Loading payload version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" 2025-10-13T00:18:28.219682560+00:00 stderr F I1013 00:18:28.219666 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='PoolUpdating' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are updating, please see `oc get mcp` for further details') 2025-10-13T00:18:28.219712471+00:00 stderr F I1013 00:18:28.219688 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (702.741µs) 2025-10-13T00:18:28.223115912+00:00 stderr F I1013 00:18:28.223093 1 cincinnati.go:114] Using a root CA pool with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2025-10-13T00:18:28.232537622+00:00 stderr F I1013 00:18:28.232481 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.241989324+00:00 stderr F I1013 00:18:28.241894 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.260286268+00:00 stderr F I1013 00:18:28.260213 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-Hypershift.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.262551416+00:00 stderr F I1013 00:18:28.262515 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.265848784+00:00 stderr F 
I1013 00:18:28.265451 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:28.267831893+00:00 stderr F I1013 00:18:28.267803 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.268534294+00:00 stderr F I1013 00:18:28.268508 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.269231865+00:00 stderr F I1013 00:18:28.269206 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.271350918+00:00 stderr F I1013 00:18:28.271078 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.272979926+00:00 stderr F I1013 00:18:28.272945 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:28.274837271+00:00 stderr F I1013 00:18:28.274803 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.295797475+00:00 stderr F I1013 00:18:28.295703 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.323627133+00:00 stderr F I1013 00:18:28.323067 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: 
CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:28.334247909+00:00 stderr F I1013 00:18:28.334166 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.347082781+00:00 stderr F I1013 00:18:28.346670 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.348873625+00:00 stderr F I1013 00:18:28.348220 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:28.352394030+00:00 stderr F I1013 00:18:28.349240 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.358451720+00:00 stderr F I1013 00:18:28.358404 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.359197612+00:00 stderr F I1013 00:18:28.359143 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.361746258+00:00 stderr F I1013 00:18:28.361702 1 payload.go:210] excluding Filename: "0000_10_openshift_service-ca_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-service-ca": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.365437918+00:00 stderr F I1013 00:18:28.365350 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.368272182+00:00 stderr F I1013 00:18:28.366576 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:28.368394396+00:00 stderr F I1013 00:18:28.368346 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-TechPreviewNoUpgrade.crd.yaml" 
Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.370651103+00:00 stderr F I1013 00:18:28.370610 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.374656872+00:00 stderr F I1013 00:18:28.374611 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.415475737+00:00 stderr F I1013 00:18:28.414777 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.450244782+00:00 stderr F I1013 00:18:28.450158 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:28.450244782+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:28.450244782+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:28.450359335+00:00 stderr F I1013 00:18:28.450326 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (231.270033ms) 2025-10-13T00:18:28.466398983+00:00 stderr F I1013 00:18:28.466343 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_04_infrastructure-components.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.467918658+00:00 stderr F I1013 00:18:28.467897 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-aws": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.467963709+00:00 stderr F I1013 00:18:28.467953 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-azure": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.467995560+00:00 stderr F I1013 00:18:28.467986 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-gcp": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2025-10-13T00:18:28.468025161+00:00 stderr F I1013 00:18:28.468016 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-powervs": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.468070562+00:00 stderr F I1013 00:18:28.468061 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-vsphere": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.468281648+00:00 stderr F I1013 00:18:28.468264 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.468582497+00:00 stderr F I1013 00:18:28.468559 1 payload.go:210] excluding Filename: "0000_30_cluster-api_01_images.configmap.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-images": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.468834375+00:00 stderr F I1013 00:18:28.468818 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.468872756+00:00 stderr F I1013 00:18:28.468862 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "Secret" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-secret": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.469088393+00:00 stderr F I1013 00:18:28.469073 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_webhook-service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-webhook-service": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.469417482+00:00 stderr F I1013 00:18:28.469401 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.469463714+00:00 stderr F I1013 00:18:28.469452 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.471121183+00:00 stderr F I1013 00:18:28.471102 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.core-cluster-api.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.527269164+00:00 stderr F I1013 00:18:28.527177 1 payload.go:210] excluding Filename: 
"0000_30_cluster-api_04_cm.infrastructure-aws.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "aws": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.536862010+00:00 stderr F I1013 00:18:28.536808 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-gcp.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "gcp": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.549442044+00:00 stderr F I1013 00:18:28.549348 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-ibmcloud.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "ibmcloud": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.574471259+00:00 stderr F I1013 00:18:28.574409 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-vsphere.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "vsphere": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633311220+00:00 stderr F I1013 00:18:28.633237 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterclasses.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633311220+00:00 stderr F I1013 00:18:28.633275 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusters.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633311220+00:00 stderr F I1013 00:18:28.633287 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machines.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633311220+00:00 stderr F I1013 00:18:28.633298 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinesets.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633384912+00:00 stderr F I1013 00:18:28.633310 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinedeployments.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633384912+00:00 stderr F I1013 00:18:28.633322 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinepools.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633384912+00:00 stderr F I1013 00:18:28.633332 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: 
"CustomResourceDefinition" Name: "clusterresourcesets.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633384912+00:00 stderr F I1013 00:18:28.633345 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesetbindings.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633384912+00:00 stderr F I1013 00:18:28.633369 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinehealthchecks.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633384912+00:00 stderr F I1013 00:18:28.633379 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "extensionconfigs.runtime.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade 2025-10-13T00:18:28.633727483+00:00 stderr F I1013 00:18:28.633696 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.633727483+00:00 stderr F I1013 00:18:28.633717 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.634269309+00:00 stderr F I1013 00:18:28.634239 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.634269309+00:00 stderr F I1013 00:18:28.634257 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "validating-webhook-configuration": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.634702802+00:00 stderr F I1013 00:18:28.634674 1 payload.go:210] excluding Filename: "0000_30_cluster-api_11_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.634940929+00:00 stderr F I1013 00:18:28.634914 1 payload.go:210] excluding Filename: "0000_30_cluster-api_12_clusteroperator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.638257997+00:00 stderr F I1013 00:18:28.638223 1 payload.go:210] excluding Filename: "0000_30_machine-api-operator_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: 
"openshift-machine-api-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.666144527+00:00 stderr F I1013 00:18:28.665214 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "Namespace" Name: "baremetal-operator-system": no annotations 2025-10-13T00:18:28.666144527+00:00 stderr F I1013 00:18:28.665269 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "ConfigMap" Namespace: "baremetal-operator-system" Name: "ironic": no annotations 2025-10-13T00:18:28.687553724+00:00 stderr F I1013 00:18:28.687461 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.687880314+00:00 stderr F I1013 00:18:28.687844 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.688246335+00:00 stderr F I1013 00:18:28.688214 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.690529723+00:00 stderr F I1013 00:18:28.688596 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2025-10-13T00:18:28.690529723+00:00 stderr F I1013 00:18:28.688983 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2025-10-13T00:18:28.690529723+00:00 stderr F I1013 00:18:28.689342 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator 2025-10-13T00:18:28.691196453+00:00 stderr F I1013 00:18:28.691144 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_04_serviceaccount-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "csi-snapshot-controller-operator": no annotations 2025-10-13T00:18:28.693132441+00:00 stderr F I1013 00:18:28.693067 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_05_operator_role-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Name: "csi-snapshot-controller-operator-role": no annotations 2025-10-13T00:18:28.694556943+00:00 stderr F I1013 00:18:28.694499 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_06_operator_rolebinding-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Name: "csi-snapshot-controller-operator-role": no annotations 
2025-10-13T00:18:28.695180701+00:00 stderr F I1013 00:18:28.695138 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "csi-snapshot-controller-operator": no annotations 2025-10-13T00:18:28.695675966+00:00 stderr F I1013 00:18:28.695636 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "csi-snapshot-controller-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.706651913+00:00 stderr F I1013 00:18:28.706512 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:28.737263784+00:00 stderr F I1013 00:18:28.737178 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:28.751394874+00:00 stderr F I1013 00:18:28.751317 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.770105211+00:00 stderr F I1013 00:18:28.770027 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-image-registry" Name: "cluster-image-registry-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.790411966+00:00 stderr F I1013 00:18:28.790325 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-ibmcloud": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.790468457+00:00 stderr F I1013 00:18:28.790428 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-powervs": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.790468457+00:00 stderr F I1013 00:18:28.790445 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-ingress-operator" Name: "openshift-ingress-alibabacloud": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.793205789+00:00 stderr F I1013 00:18:28.793158 1 payload.go:210] excluding Filename: 
"0000_50_cluster-ingress-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-ingress-operator" Name: "ingress-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.798850957+00:00 stderr F I1013 00:18:28.797471 1 payload.go:210] excluding Filename: "0000_50_cluster-kube-storage-version-migrator-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-kube-storage-version-migrator-operator" Name: "kube-storage-version-migrator-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:28.798948420+00:00 stderr F I1013 00:18:28.798922 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.798948420+00:00 stderr F I1013 00:18:28.798941 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:28.800856837+00:00 stderr F I1013 00:18:28.800834 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_04-deployment-capi.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-machine-approver" Name: "machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.026616256+00:00 stderr F I1013 00:18:29.026520 1 payload.go:210] excluding Filename: "0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-monitoring" Name: "cluster-monitoring-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.042547680+00:00 stderr F I1013 00:18:29.040489 1 payload.go:210] excluding Filename: "0000_50_cluster-node-tuning-operator_50-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-node-tuning-operator" Name: "cluster-node-tuning-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.049563469+00:00 stderr F I1013 00:18:29.049509 1 payload.go:210] excluding Filename: "0000_50_cluster-samples-operator_06-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-samples-operator" Name: "cluster-samples-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.056321850+00:00 stderr F I1013 00:18:29.056268 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-CustomNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.060166134+00:00 stderr F I1013 00:18:29.060069 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-TechPreviewNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2025-10-13T00:18:29.061231386+00:00 stderr F I1013 00:18:29.061159 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_06_operator_cr-hypershift.yaml" Group: "operator.openshift.io" Kind: "Storage" Name: "cluster": no annotations 2025-10-13T00:18:29.061460963+00:00 stderr F I1013 00:18:29.061407 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_07_service_account-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "cluster-storage-operator": no annotations 2025-10-13T00:18:29.061733621+00:00 stderr F I1013 00:18:29.061676 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_08_operator_rbac-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-storage-operator-role": no annotations 2025-10-13T00:18:29.065111302+00:00 stderr F I1013 00:18:29.065021 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "cluster-storage-operator": no annotations 2025-10-13T00:18:29.066169353+00:00 stderr F I1013 00:18:29.066096 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "cluster-storage-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.067478902+00:00 stderr F I1013 00:18:29.067406 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_11_cluster_operator-hypershift.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "storage": no annotations 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076251 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "aws-ebs-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076291 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-disk-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076304 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-file-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076319 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ibm-vpc-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076350 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: 
"powervs-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076417 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "manila-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076429 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ovirt-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076454 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vmware-vsphere-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.076664156+00:00 stderr F I1013 00:18:29.076483 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vsphere-problem-detector": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.095074424+00:00 stderr F I1013 00:18:29.094973 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-conversionwebhook-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-conversion-webhook": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.096455325+00:00 stderr F I1013 00:18:29.096368 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.115813041+00:00 stderr F I1013 00:18:29.115447 1 payload.go:210] excluding Filename: "0000_50_insights-operator_03-insightsdatagather-config-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "insightsdatagathers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.118971225+00:00 stderr F I1013 00:18:29.118862 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-datagather-insights-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "datagathers.insights.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.119030957+00:00 stderr F I1013 00:18:29.118999 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-insightsdatagather-config-cr.yaml" Group: "config.openshift.io" Kind: "InsightsDataGather" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.122665805+00:00 stderr F I1013 00:18:29.120066 1 payload.go:210] excluding Filename: 
"0000_50_insights-operator_06-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-insights" Name: "insights-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.204858391+00:00 stderr F I1013 00:18:29.204522 1 payload.go:210] excluding Filename: "0000_50_olm_06-psm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "package-server-manager": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.208536370+00:00 stderr F I1013 00:18:29.206497 1 payload.go:210] excluding Filename: "0000_50_olm_07-olm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "olm-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.208536370+00:00 stderr F I1013 00:18:29.207626 1 payload.go:210] excluding Filename: "0000_50_olm_08-catalog-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "catalog-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.212886990+00:00 stderr F I1013 00:18:29.212847 1 payload.go:210] excluding Filename: "0000_50_operator-marketplace_09_operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-marketplace" Name: "marketplace-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.222263049+00:00 stderr F I1013 00:18:29.222194 1 payload.go:210] excluding Filename: "0000_50_service-ca-operator_05_deploy-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-service-ca-operator" Name: "service-ca-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.225214497+00:00 stderr F I1013 00:18:29.224310 1 payload.go:210] excluding Filename: "0000_50_tests_test-reporting.yaml" Group: "config.openshift.io" Kind: "TestReporting" Name: "cluster": no annotations 2025-10-13T00:18:29.225556227+00:00 stderr F I1013 00:18:29.225505 1 payload.go:210] excluding Filename: "0000_51_olm_00-olm-operator.yml" Group: "operator.openshift.io" Kind: "OLM" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.225776433+00:00 stderr F I1013 00:18:29.225743 1 payload.go:210] excluding Filename: "0000_51_olm_01_operator_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.227183215+00:00 stderr F I1013 00:18:29.227106 1 payload.go:210] excluding Filename: "0000_51_olm_02_operator_clusterrole.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.227339570+00:00 stderr F I1013 00:18:29.227304 1 payload.go:210] excluding Filename: "0000_51_olm_03_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.227639049+00:00 stderr F I1013 00:18:29.227608 1 payload.go:210] excluding Filename: "0000_51_olm_04_metrics_service.yaml" Group: "" Kind: "Service" 
Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator-metrics": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.228293288+00:00 stderr F I1013 00:18:29.228256 1 payload.go:210] excluding Filename: "0000_51_olm_05_operator_clusterrolebinding.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-olm-operator-role": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.230489104+00:00 stderr F I1013 00:18:29.230008 1 payload.go:210] excluding Filename: "0000_51_olm_06_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.230489104+00:00 stderr F I1013 00:18:29.230299 1 payload.go:210] excluding Filename: "0000_51_olm_07_cluster_operator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "olm": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.233178064+00:00 stderr F I1013 00:18:29.232810 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_02_rbac.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "default-account-cluster-network-operator": unrecognized value include.release.openshift.io/self-managed-high-availability=false 2025-10-13T00:18:29.233953437+00:00 stderr F I1013 00:18:29.233739 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_03_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-network-operator" Name: "network-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.239238894+00:00 stderr F I1013 00:18:29.239136 1 payload.go:210] excluding Filename: "0000_70_dns-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-dns-operator" Name: "dns-operator": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.241932444+00:00 stderr F I1013 00:18:29.241127 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.243860512+00:00 stderr F I1013 00:18:29.243733 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.245108839+00:00 stderr F I1013 00:18:29.244889 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.262669991+00:00 stderr F I1013 00:18:29.262583 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.283896483+00:00 stderr F I1013 00:18:29.283758 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.297476437+00:00 stderr F I1013 00:18:29.296057 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.300458956+00:00 stderr F I1013 00:18:29.300156 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.302763515+00:00 stderr F I1013 00:18:29.302709 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.304925509+00:00 stderr F I1013 00:18:29.304873 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.309204386+00:00 stderr F I1013 00:18:29.309055 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.319238965+00:00 stderr F I1013 00:18:29.319191 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.322131971+00:00 stderr F I1013 00:18:29.322078 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.329579253+00:00 stderr F I1013 
00:18:29.328164 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.337232080+00:00 stderr F I1013 00:18:29.337163 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.343311291+00:00 stderr F I1013 00:18:29.343251 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.344734464+00:00 stderr F I1013 00:18:29.344685 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.346126215+00:00 stderr F I1013 00:18:29.346077 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.347500856+00:00 stderr F I1013 00:18:29.347441 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.349140145+00:00 stderr F I1013 00:18:29.349090 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.350893207+00:00 stderr F I1013 00:18:29.350836 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.353494965+00:00 stderr F I1013 00:18:29.352497 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.353494965+00:00 stderr F I1013 00:18:29.353295 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-10-13T00:18:29.354133084+00:00 stderr F I1013 00:18:29.354087 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-10-13T00:18:29.355028550+00:00 stderr F I1013 00:18:29.354994 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-10-13T00:18:29.359321698+00:00 stderr F I1013 00:18:29.359272 1 payload.go:210] excluding Filename: "0000_90_cluster-baremetal-operator_03_servicemonitor.yaml" Group: "monitoring.coreos.com" Kind: "ServiceMonitor" Namespace: "openshift-machine-api" Name: "cluster-baremetal-operator-servicemonitor": include.release.openshift.io/self-managed-high-availability unset 2025-10-13T00:18:29.453167261+00:00 stderr F I1013 00:18:29.451763 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:29.453167261+00:00 stderr F I1013 00:18:29.452750 1 cache.go:131] {"type":"PromQL","promql":{"promql":"(\n group by (type) (cluster_infrastructure_provider{_id=\"\",type=\"Azure\"})\n or\n 0 * group by (type) (cluster_infrastructure_provider{_id=\"\"})\n)\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2025-10-13T00:18:29.454411218+00:00 stderr F I1013 00:18:29.454342 1 sync_worker.go:366] Skipping preconditions for a local operator image payload. 2025-10-13T00:18:29.454793559+00:00 stderr F I1013 00:18:29.454423 1 sync_worker.go:415] Payload loaded from quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 with hash 6WUw5aCbcO4=, architecture amd64 2025-10-13T00:18:29.454793559+00:00 stderr F I1013 00:18:29.454435 1 sync_worker.go:527] Propagating initial target version { 4.16.0 quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 false} to sync worker loop in state Reconciling. 
2025-10-13T00:18:29.454793559+00:00 stderr F I1013 00:18:29.454469 1 sync_worker.go:551] Notify the sync worker that new work is available 2025-10-13T00:18:29.454793559+00:00 stderr F I1013 00:18:29.454458 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PayloadLoaded' Payload loaded version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" architecture="amd64" 2025-10-13T00:18:29.454793559+00:00 stderr F I1013 00:18:29.454612 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:29.454793559+00:00 stderr F I1013 00:18:29.454688 1 sync_worker.go:584] new work is available 2025-10-13T00:18:29.456398127+00:00 stderr F I1013 00:18:29.454850 1 sync_worker.go:807] Detected while calculating next work: version changed (from { false} to { 4.16.0 quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 false}), overrides changed ([] to [{Deployment apps openshift-monitoring cluster-monitoring-operator true} {ClusterOperator config.openshift.io monitoring true} {Deployment apps openshift-cloud-credential-operator cloud-credential-operator true} {ClusterOperator config.openshift.io cloud-credential true} {Deployment apps openshift-machine-api cluster-autoscaler-operator true} {ClusterOperator config.openshift.io cluster-autoscaler true} {Deployment apps openshift-cloud-controller-manager-operator cluster-cloud-controller-manager-operator true} {ClusterOperator config.openshift.io cloud-controller-manager true}]), capabilities changed (enabled map[] not equal to map[Build:{} Console:{} DeploymentConfig:{} ImageRegistry:{} Ingress:{} MachineAPI:{} OperatorLifecycleManager:{} marketplace:{} openshift-samples:{}]) 2025-10-13T00:18:29.456398127+00:00 stderr F I1013 00:18:29.454880 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:error(nil), Done:0, Total:0, Completed:0, Reconciling:true, Initial:false, VersionHash:"", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, 
LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:18:29.456398127+00:00 stderr F I1013 00:18:29.455615 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 0 2025-10-13T00:18:29.456398127+00:00 stderr F I1013 00:18:29.455716 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:29.456398127+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:29.456398127+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:29.456398127+00:00 stderr F I1013 00:18:29.455768 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (4.02218ms) 2025-10-13T00:18:29.463516099+00:00 stderr F I1013 00:18:29.461457 1 task_graph.go:481] Running 0 on worker 1 2025-10-13T00:18:29.463516099+00:00 stderr F I1013 00:18:29.461484 1 task_graph.go:481] Running 1 on worker 1 2025-10-13T00:18:29.463516099+00:00 stderr F I1013 00:18:29.462462 1 task_graph.go:481] Running 2 on worker 0 2025-10-13T00:18:29.479422872+00:00 stderr F W1013 00:18:29.477220 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:29.479422872+00:00 stderr F I1013 00:18:29.478693 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.259522836s) 2025-10-13T00:18:29.479422872+00:00 stderr F I1013 00:18:29.478716 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:29.479422872+00:00 stderr F I1013 00:18:29.478727 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:18:29.479422872+00:00 stderr F I1013 00:18:29.478731 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (17.11µs) 2025-10-13T00:18:29.488817082+00:00 stderr F I1013 00:18:29.488675 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:29.488817082+00:00 stderr F I1013 00:18:29.488726 1 upgradeable.go:69] Upgradeability last checked 1.269038509s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:18:29.488817082+00:00 stderr F I1013 00:18:29.488738 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (64.892µs) 2025-10-13T00:18:29.488817082+00:00 stderr F I1013 00:18:29.488750 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:29.489134091+00:00 stderr F I1013 00:18:29.488859 1 cvo.go:726] Started syncing 
available updates "openshift-cluster-version/version" 2025-10-13T00:18:29.491901413+00:00 stderr F I1013 00:18:29.491816 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:29.491901413+00:00 stderr F I1013 00:18:29.491872 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:29.492412579+00:00 stderr F I1013 00:18:29.492243 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:29.494197812+00:00 stderr F I1013 00:18:29.494131 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:29.494197812+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:29.494197812+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:29.494197812+00:00 stderr F I1013 00:18:29.494188 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (5.353729ms) 2025-10-13T00:18:29.516530577+00:00 stderr F W1013 00:18:29.516458 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:29.520913477+00:00 stderr F I1013 00:18:29.520022 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:29.520913477+00:00 stderr F I1013 00:18:29.520051 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:29.520913477+00:00 stderr F I1013 00:18:29.520067 1 upgradeable.go:69] Upgradeability last checked 1.300377812s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:18:29.520913477+00:00 stderr F I1013 00:18:29.520077 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (65.152µs) 2025-10-13T00:18:29.520913477+00:00 stderr F I1013 00:18:29.520429 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:29.520913477+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:29.520913477+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:29.520913477+00:00 stderr F I1013 00:18:29.520503 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (474.334µs) 2025-10-13T00:18:29.523342409+00:00 stderr F I1013 00:18:29.523298 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.515138ms) 2025-10-13T00:18:29.523463793+00:00 stderr F I1013 00:18:29.523392 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:29.523748141+00:00 stderr F I1013 00:18:29.523722 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:29.523829634+00:00 stderr F I1013 00:18:29.523815 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:29.524177244+00:00 stderr F I1013 00:18:29.524139 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:29.527162353+00:00 stderr F I1013 00:18:29.526934 1 task_graph.go:481] Running 3 on worker 1 2025-10-13T00:18:29.549444596+00:00 stderr F W1013 00:18:29.549372 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:29.551019683+00:00 stderr F I1013 00:18:29.550980 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.586071ms) 2025-10-13T00:18:29.571746370+00:00 stderr F W1013 00:18:29.568746 1 helper.go:97] PrometheusRule "openshift-image-registry/image-registry-operator-alerts" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:18:29.571746370+00:00 stderr F I1013 00:18:29.568781 1 task_graph.go:481] Running 4 on worker 0 2025-10-13T00:18:29.571746370+00:00 stderr F I1013 00:18:29.571724 1 task_graph.go:481] Running 5 on worker 0 2025-10-13T00:18:29.571931525+00:00 stderr F I1013 00:18:29.571900 1 task_graph.go:481] Running 6 on worker 0 2025-10-13T00:18:29.715473557+00:00 stderr F I1013 00:18:29.715381 1 task_graph.go:481] Running 7 on worker 1 2025-10-13T00:18:30.266614850+00:00 stderr F I1013 00:18:30.266528 1 task_graph.go:481] Running 8 on worker 0 2025-10-13T00:18:30.266693072+00:00 stderr F I1013 00:18:30.266627 1 task_graph.go:481] Running 9 on worker 0 2025-10-13T00:18:30.457536702+00:00 stderr F I1013 00:18:30.457377 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:30.457696637+00:00 stderr F I1013 00:18:30.457639 1 cache.go:131] {"type":"PromQL","promql":{"promql":"topk(1,\n group by (label_node_openshift_io_os_id) (kube_node_labels{_id=\"\",label_node_openshift_io_os_id=\"rhel\"})\n or\n 0 * group by (label_node_openshift_io_os_id) (kube_node_labels{_id=\"\"})\n)\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2025-10-13T00:18:30.461389107+00:00 stderr F I1013 00:18:30.461300 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:30.461389107+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:30.461389107+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:30.461411127+00:00 stderr F I1013 00:18:30.461401 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (4.039431ms) 2025-10-13T00:18:30.461434408+00:00 stderr F I1013 00:18:30.461417 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:30.461518730+00:00 stderr F I1013 00:18:30.461480 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:30.461548681+00:00 stderr F I1013 00:18:30.461527 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:30.461803819+00:00 stderr F I1013 00:18:30.461757 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:30.469214969+00:00 stderr F I1013 00:18:30.469159 1 task_graph.go:481] Running 10 on worker 0 2025-10-13T00:18:30.483122553+00:00 stderr F W1013 00:18:30.483069 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:30.487319048+00:00 stderr F I1013 00:18:30.484374 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.955403ms) 2025-10-13T00:18:30.490061020+00:00 stderr F I1013 00:18:30.490010 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:30.490061020+00:00 stderr F I1013 00:18:30.490041 1 upgradeable.go:69] Upgradeability last checked 2.270353159s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:18:30.490061020+00:00 stderr F I1013 00:18:30.490047 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (41.581µs) 2025-10-13T00:18:30.490080480+00:00 stderr F I1013 00:18:30.490056 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:30.490150722+00:00 stderr F I1013 00:18:30.490110 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:30.490177363+00:00 stderr F I1013 00:18:30.490157 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:30.490577015+00:00 stderr F I1013 00:18:30.490520 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), 
CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:30.490577015+00:00 stderr F I1013 00:18:30.490544 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:30.491014498+00:00 stderr F I1013 00:18:30.490798 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:30.491014498+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:30.491014498+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:30.491049399+00:00 stderr F I1013 00:18:30.491029 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (499.525µs) 2025-10-13T00:18:30.512476567+00:00 stderr F W1013 00:18:30.511342 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:30.512746245+00:00 stderr F I1013 00:18:30.512686 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.627113ms) 2025-10-13T00:18:30.512746245+00:00 stderr F I1013 00:18:30.512712 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:30.512809667+00:00 stderr F I1013 00:18:30.512787 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:30.512843398+00:00 stderr F I1013 00:18:30.512827 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:30.513084175+00:00 stderr F I1013 00:18:30.513055 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:30.534787331+00:00 stderr F W1013 00:18:30.534313 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:30.535884554+00:00 stderr F I1013 00:18:30.535845 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.128638ms) 2025-10-13T00:18:30.916384758+00:00 stderr F I1013 00:18:30.916278 1 task_graph.go:481] Running 11 on worker 1 2025-10-13T00:18:31.267203198+00:00 stderr F I1013 00:18:31.267132 1 task_graph.go:481] Running 12 on worker 0 2025-10-13T00:18:31.366079431+00:00 stderr F I1013 00:18:31.366018 1 task_graph.go:481] Running 13 on worker 0 2025-10-13T00:18:31.461597773+00:00 stderr F I1013 00:18:31.461472 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 
2025-10-13T00:18:31.462152520+00:00 stderr F I1013 00:18:31.461757 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group(csv_succeeded{_id=\"\", name=~\"numaresources-operator[.].*\"})\nor\n0 * group(csv_count{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2025-10-13T00:18:31.465679195+00:00 stderr F I1013 00:18:31.465630 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:31.465679195+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:31.465679195+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:31.465729747+00:00 stderr F I1013 00:18:31.465709 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (4.279288ms) 2025-10-13T00:18:31.465736987+00:00 stderr F I1013 00:18:31.465731 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:31.465833480+00:00 stderr F I1013 00:18:31.465803 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:31.465898632+00:00 stderr F I1013 00:18:31.465856 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:31.466902191+00:00 stderr F I1013 00:18:31.466847 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:31.472324633+00:00 stderr F I1013 00:18:31.471614 1 task_graph.go:481] Running 14 on worker 0 2025-10-13T00:18:31.490292988+00:00 stderr F W1013 00:18:31.490210 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:31.493763631+00:00 stderr F I1013 00:18:31.493711 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:31.493814232+00:00 stderr F I1013 00:18:31.493784 1 upgradeable.go:69] Upgradeability last checked 3.274094491s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:18:31.493814232+00:00 stderr F I1013 00:18:31.493798 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (98.703µs) 2025-10-13T00:18:31.493814232+00:00 stderr F I1013 00:18:31.493809 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 
2025-10-13T00:18:31.493841263+00:00 stderr F I1013 00:18:31.493812 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.075776ms) 2025-10-13T00:18:31.493873344+00:00 stderr F I1013 00:18:31.493850 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:31.494026659+00:00 stderr F I1013 00:18:31.493988 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:31.494091631+00:00 stderr F I1013 00:18:31.494058 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:31.494091631+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:31.494091631+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:31.494091631+00:00 stderr F I1013 00:18:31.494070 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:31.494250005+00:00 stderr F I1013 00:18:31.494218 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (408.172µs) 2025-10-13T00:18:31.496111731+00:00 stderr F I1013 00:18:31.496022 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:31.523309740+00:00 stderr F W1013 00:18:31.523149 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:31.524665600+00:00 stderr F I1013 00:18:31.524611 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.759176ms) 2025-10-13T00:18:31.524665600+00:00 stderr F I1013 00:18:31.524637 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:31.524736353+00:00 stderr F I1013 00:18:31.524701 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:31.524765673+00:00 stderr F I1013 00:18:31.524748 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:31.525008721+00:00 stderr F I1013 00:18:31.524966 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:31.551478499+00:00 stderr F W1013 00:18:31.551381 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:31.553074126+00:00 stderr F I1013 00:18:31.552888 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.24656ms) 2025-10-13T00:18:31.568288629+00:00 stderr F I1013 00:18:31.568219 1 task_graph.go:481] Running 15 on worker 0 2025-10-13T00:18:32.066353472+00:00 stderr F I1013 00:18:32.066248 1 task_graph.go:481] Running 16 on worker 0 2025-10-13T00:18:32.066407864+00:00 stderr F I1013 00:18:32.066341 1 task_graph.go:481] Running 17 on worker 0 2025-10-13T00:18:32.466749899+00:00 stderr F I1013 00:18:32.466702 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:32.466893473+00:00 stderr F I1013 00:18:32.466880 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group(csv_succeeded{_id=\"\", name=~\"sriov-network-operator[.].*\"})\nor\n0 * group(csv_count{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2025-10-13T00:18:32.468764999+00:00 stderr F I1013 00:18:32.468744 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:32.468764999+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:32.468764999+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:32.468850191+00:00 stderr F I1013 00:18:32.468836 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (2.140694ms) 2025-10-13T00:18:32.468881512+00:00 stderr F I1013 00:18:32.468872 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:32.468957154+00:00 stderr F I1013 00:18:32.468941 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:32.469014246+00:00 stderr F I1013 00:18:32.469004 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:32.469259723+00:00 stderr F I1013 00:18:32.469234 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:32.492408282+00:00 stderr F W1013 00:18:32.490141 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:32.492408282+00:00 stderr F I1013 00:18:32.491927 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.052566ms) 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.495855 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.495870 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.495886 1 upgradeable.go:69] Upgradeability last checked 4.276198046s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.495894 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (41.381µs) 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.495904 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.495986 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.496039 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.496378 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, 
time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:32.497666329+00:00 stderr F I1013 00:18:32.496918 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:32.497666329+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:32.497666329+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:32.500528384+00:00 stderr F I1013 00:18:32.500485 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (4.579366ms) 2025-10-13T00:18:32.518528600+00:00 stderr F W1013 00:18:32.518483 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:32.520312603+00:00 stderr F I1013 00:18:32.520286 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.415607ms) 2025-10-13T00:18:32.520383525+00:00 stderr F I1013 00:18:32.520367 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:32.520511369+00:00 stderr F I1013 00:18:32.520488 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:32.520616232+00:00 stderr F I1013 00:18:32.520602 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:32.521009953+00:00 stderr F I1013 00:18:32.520975 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:32.544219994+00:00 stderr F W1013 00:18:32.544153 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:32.545539844+00:00 stderr F I1013 00:18:32.545495 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.128128ms) 2025-10-13T00:18:33.469579415+00:00 stderr F I1013 00:18:33.469404 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:33.469579415+00:00 stderr F I1013 00:18:33.469516 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group by (_id, invoker) (cluster_installer{_id=\"\",invoker=\"hypershift\"})\nor\n0 * group by (_id, invoker) (cluster_installer{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) 
(cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2025-10-13T00:18:33.471893123+00:00 stderr F I1013 00:18:33.471839 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:33.471893123+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:33.471893123+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:33.471942975+00:00 stderr F I1013 00:18:33.471888 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (2.492584ms) 2025-10-13T00:18:33.471942975+00:00 stderr F I1013 00:18:33.471913 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:33.471985416+00:00 stderr F I1013 00:18:33.471947 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:33.471996047+00:00 stderr F I1013 00:18:33.471986 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:33.472207153+00:00 stderr F I1013 00:18:33.472159 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:33.494978461+00:00 stderr F W1013 00:18:33.494899 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:33.496746213+00:00 stderr F I1013 00:18:33.496709 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.805088ms) 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497499 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497510 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497531 1 upgradeable.go:69] Upgradeability last checked 5.277842106s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497539 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (44.452µs) 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497578 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 
2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497579 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497621 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497890 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:33.500588097+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:33.500588097+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.497938 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (373.281µs) 2025-10-13T00:18:33.500588097+00:00 stderr F I1013 00:18:33.498005 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:33.523492999+00:00 stderr F W1013 00:18:33.523427 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:33.525474608+00:00 stderr F I1013 00:18:33.525450 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.937631ms) 2025-10-13T00:18:33.525524340+00:00 stderr F I1013 00:18:33.525510 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:33.525649183+00:00 stderr F I1013 00:18:33.525630 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:33.525709575+00:00 stderr F I1013 00:18:33.525699 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:33.526143038+00:00 stderr F I1013 00:18:33.526110 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:33.551198524+00:00 stderr F W1013 00:18:33.551116 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:33.552461691+00:00 stderr F 
I1013 00:18:33.552420 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.907771ms) 2025-10-13T00:18:34.472300507+00:00 stderr F I1013 00:18:34.472227 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:34.472693049+00:00 stderr F I1013 00:18:34.472595 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:34.472693049+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:34.472693049+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:34.472693049+00:00 stderr F I1013 00:18:34.472666 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (464.344µs) 2025-10-13T00:18:34.472693049+00:00 stderr F I1013 00:18:34.472684 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:34.472784972+00:00 stderr F I1013 00:18:34.472740 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:34.472822633+00:00 stderr F I1013 00:18:34.472800 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:34.473120192+00:00 stderr F I1013 00:18:34.473067 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:34.522445699+00:00 stderr F W1013 00:18:34.518883 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:34.522445699+00:00 stderr F I1013 00:18:34.520122 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.43771ms) 2025-10-13T00:18:35.169095484+00:00 stderr F I1013 00:18:35.169004 1 task_graph.go:481] Running 18 on worker 0 2025-10-13T00:18:35.268638707+00:00 stderr F I1013 00:18:35.266230 1 task_graph.go:481] Running 19 on worker 0 2025-10-13T00:18:35.369474728+00:00 stderr F I1013 00:18:35.369393 1 task_graph.go:481] Running 20 on worker 0 2025-10-13T00:18:35.370679654+00:00 stderr F I1013 00:18:35.369547 1 task_graph.go:481] Running 21 on worker 0 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.473568 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.473837 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services 
"thanos-querier" not found) 2025-10-13T00:18:35.474554965+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:35.474554965+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.473890 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (330.019µs) 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.473907 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.473957 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.474007 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:35.474554965+00:00 stderr F I1013 00:18:35.474250 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:35.497449407+00:00 stderr F W1013 00:18:35.497395 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:35.500172628+00:00 stderr F I1013 00:18:35.499308 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.398196ms) 2025-10-13T00:18:35.765902886+00:00 stderr F I1013 00:18:35.765837 1 task_graph.go:481] Running 22 on worker 0 2025-10-13T00:18:35.869723966+00:00 stderr F I1013 00:18:35.869631 1 task_graph.go:481] Running 23 on worker 0 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.474623 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.475308 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:36.476781813+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:36.476781813+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.475378 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (763.732µs) 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.475395 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.475440 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.475482 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:36.476781813+00:00 stderr F I1013 00:18:36.475671 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:36.505792377+00:00 stderr F W1013 00:18:36.505730 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:36.507115836+00:00 stderr F I1013 00:18:36.507080 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.684183ms) 2025-10-13T00:18:36.768843086+00:00 stderr F I1013 00:18:36.768771 1 task_graph.go:481] Running 24 on worker 0 2025-10-13T00:18:36.865396279+00:00 stderr F I1013 00:18:36.864781 1 task_graph.go:481] Running 25 on worker 0 2025-10-13T00:18:37.476503757+00:00 stderr F I1013 00:18:37.476444 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:37.476697462+00:00 stderr F I1013 00:18:37.476676 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:37.476697462+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:37.476697462+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:37.476774925+00:00 stderr F I1013 00:18:37.476730 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (298.229µs) 2025-10-13T00:18:37.476774925+00:00 stderr F I1013 00:18:37.476752 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:37.476817486+00:00 stderr F I1013 00:18:37.476795 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:37.476854487+00:00 stderr F I1013 00:18:37.476841 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:37.477062893+00:00 stderr F I1013 00:18:37.477029 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:37.509075836+00:00 stderr F W1013 00:18:37.508974 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:37.510347244+00:00 stderr F I1013 00:18:37.510282 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.528588ms) 2025-10-13T00:18:38.065319011+00:00 stderr F I1013 00:18:38.065222 1 task_graph.go:481] Running 26 on worker 0 2025-10-13T00:18:38.116446372+00:00 stderr F W1013 00:18:38.116378 1 helper.go:97] ConsoleQuickStart "ocs-install-tour" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:38.476906119+00:00 stderr F I1013 00:18:38.476857 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:38.477234799+00:00 stderr F I1013 00:18:38.477218 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:38.477234799+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:38.477234799+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:38.477300501+00:00 stderr F I1013 00:18:38.477285 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (439.393µs) 2025-10-13T00:18:38.477354343+00:00 stderr F I1013 00:18:38.477320 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:38.477425605+00:00 stderr F I1013 00:18:38.477407 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:38.477482377+00:00 stderr F I1013 00:18:38.477472 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:38.477743324+00:00 stderr F I1013 00:18:38.477705 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:38.499118771+00:00 stderr F W1013 00:18:38.499036 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:38.501235184+00:00 stderr F I1013 00:18:38.501189 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.86119ms) 2025-10-13T00:18:38.717931713+00:00 stderr F I1013 00:18:38.717862 1 task_graph.go:481] Running 27 on worker 1 2025-10-13T00:18:38.815941550+00:00 stderr F I1013 00:18:38.815884 1 task_graph.go:481] Running 28 on worker 1 2025-10-13T00:18:39.066944360+00:00 stderr F I1013 00:18:39.066873 1 task_graph.go:481] Running 29 on worker 0 2025-10-13T00:18:39.216930724+00:00 stderr F I1013 00:18:39.216091 1 task_graph.go:481] Running 30 on worker 1 2025-10-13T00:18:39.366812695+00:00 stderr F I1013 00:18:39.366726 1 task_graph.go:481] Running 31 on worker 0 2025-10-13T00:18:39.417468722+00:00 stderr F I1013 00:18:39.417318 1 task_graph.go:481] Running 32 on worker 1 2025-10-13T00:18:39.478278512+00:00 stderr F I1013 00:18:39.478205 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:39.478485168+00:00 stderr F I1013 00:18:39.478454 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:39.478485168+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:39.478485168+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:39.478505809+00:00 stderr F I1013 00:18:39.478495 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (300.938µs) 2025-10-13T00:18:39.478516479+00:00 stderr F I1013 00:18:39.478509 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:39.478594151+00:00 stderr F I1013 00:18:39.478552 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:39.478619782+00:00 stderr F I1013 00:18:39.478606 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:39.478851869+00:00 stderr F I1013 00:18:39.478809 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:39.508713968+00:00 stderr F W1013 00:18:39.508639 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:39.510085079+00:00 stderr F I1013 00:18:39.510040 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.528448ms) 2025-10-13T00:18:40.015627565+00:00 stderr F I1013 00:18:40.015589 1 task_graph.go:481] Running 33 on worker 1 2025-10-13T00:18:40.116076174+00:00 stderr F I1013 00:18:40.116033 1 task_graph.go:481] Running 34 on worker 1 2025-10-13T00:18:40.479237883+00:00 stderr F I1013 00:18:40.478846 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:40.479237883+00:00 stderr F I1013 00:18:40.479125 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:40.479237883+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:40.479237883+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:40.479237883+00:00 stderr F I1013 00:18:40.479173 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (341.031µs) 2025-10-13T00:18:40.479354886+00:00 stderr F I1013 00:18:40.479291 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:40.479436528+00:00 stderr F I1013 00:18:40.479386 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:40.479448619+00:00 stderr F I1013 00:18:40.479439 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:40.479722357+00:00 stderr F I1013 00:18:40.479651 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:40.516764149+00:00 stderr F W1013 00:18:40.516720 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:40.518061508+00:00 stderr F I1013 00:18:40.518043 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.760923ms) 2025-10-13T00:18:40.867025504+00:00 stderr F I1013 00:18:40.866548 1 task_graph.go:481] Running 35 on worker 0 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.479735 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.480031 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:41.481028838+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:41.481028838+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.480080 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (359.48µs) 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.480096 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.480142 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.480195 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:41.481028838+00:00 stderr F I1013 00:18:41.480507 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:41.505398043+00:00 stderr F W1013 00:18:41.504246 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:41.506141265+00:00 stderr F I1013 00:18:41.505949 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.849269ms) 2025-10-13T00:18:42.480865903+00:00 stderr F I1013 00:18:42.480793 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:42.481092600+00:00 stderr F I1013 00:18:42.481061 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:42.481092600+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:42.481092600+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-10-13T00:18:42.481155672+00:00 stderr F I1013 00:18:42.481127 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (354.09µs)
2025-10-13T00:18:42.481155672+00:00 stderr F I1013 00:18:42.481145 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-10-13T00:18:42.481216294+00:00 stderr F I1013 00:18:42.481186 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-10-13T00:18:42.481249125+00:00 stderr F I1013 00:18:42.481231 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-10-13T00:18:42.482127661+00:00 stderr F I1013 00:18:42.481451 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-10-13T00:18:42.508069943+00:00 stderr F W1013 00:18:42.506603 1 warnings.go:70] unknown field "spec.signatureStores"
2025-10-13T00:18:42.508069943+00:00 stderr F I1013 00:18:42.507879 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.730745ms)
2025-10-13T00:18:42.615708667+00:00 stderr F I1013 00:18:42.615613 1 core.go:138] Updating ConfigMap openshift-machine-config-operator/kube-rbac-proxy due to diff:   &v1.ConfigMap{
2025-10-13T00:18:42.615708667+00:00 stderr F    TypeMeta: v1.TypeMeta{
2025-10-13T00:18:42.615708667+00:00 stderr F -  Kind: "",
2025-10-13T00:18:42.615708667+00:00 stderr F +  Kind: "ConfigMap",
2025-10-13T00:18:42.615708667+00:00 stderr F -  APIVersion: "",
2025-10-13T00:18:42.615708667+00:00 stderr F +  APIVersion: "v1",
2025-10-13T00:18:42.615708667+00:00 stderr F    },
2025-10-13T00:18:42.615708667+00:00 stderr F    ObjectMeta: v1.ObjectMeta{
2025-10-13T00:18:42.615708667+00:00 stderr F    ... // 2 identical fields
2025-10-13T00:18:42.615708667+00:00 stderr F    Namespace: "openshift-machine-config-operator",
2025-10-13T00:18:42.615708667+00:00 stderr F    SelfLink: "",
2025-10-13T00:18:42.615708667+00:00 stderr F -  UID: "ba7edbb4-1ba2-49b6-98a7-d849069e9f80",
2025-10-13T00:18:42.615708667+00:00 stderr F +  UID: "",
2025-10-13T00:18:42.615708667+00:00 stderr F -  ResourceVersion: "37089",
2025-10-13T00:18:42.615708667+00:00 stderr F +  ResourceVersion: "",
2025-10-13T00:18:42.615708667+00:00 stderr F    Generation: 0,
2025-10-13T00:18:42.615708667+00:00 stderr F -  CreationTimestamp: v1.Time{Time: s"2024-06-26 12:39:23 +0000 UTC"},
2025-10-13T00:18:42.615708667+00:00 stderr F +  CreationTimestamp: v1.Time{},
2025-10-13T00:18:42.615708667+00:00 stderr F    DeletionTimestamp: nil,
2025-10-13T00:18:42.615708667+00:00 stderr F    DeletionGracePeriodSeconds: nil,
2025-10-13T00:18:42.615708667+00:00 stderr F    ... // 2 identical fields
2025-10-13T00:18:42.615708667+00:00 stderr F    OwnerReferences: {{APIVersion: "config.openshift.io/v1", Kind: "ClusterVersion", Name: "version", UID: "a73cbaa6-40d3-4694-9b98-c0a6eed45825", ...}},
2025-10-13T00:18:42.615708667+00:00 stderr F    Finalizers: nil,
2025-10-13T00:18:42.615708667+00:00 stderr F -  ManagedFields: []v1.ManagedFieldsEntry{
2025-10-13T00:18:42.615708667+00:00 stderr F -  {
2025-10-13T00:18:42.615708667+00:00 stderr F -  Manager: "cluster-version-operator",
2025-10-13T00:18:42.615708667+00:00 stderr F -  Operation: "Update",
2025-10-13T00:18:42.615708667+00:00 stderr F -  APIVersion: "v1",
2025-10-13T00:18:42.615708667+00:00 stderr F -  Time: s"2025-08-13 20:39:50 +0000 UTC",
2025-10-13T00:18:42.615708667+00:00 stderr F -  FieldsType: "FieldsV1",
2025-10-13T00:18:42.615708667+00:00 stderr F -  FieldsV1: s`{"f:data":{},"f:metadata":{"f:annotations":{".":{},"f:include.re`...,
2025-10-13T00:18:42.615708667+00:00 stderr F -  },
2025-10-13T00:18:42.615708667+00:00 stderr F -  {
2025-10-13T00:18:42.615708667+00:00 stderr F -  Manager: "machine-config-operator",
2025-10-13T00:18:42.615708667+00:00 stderr F -  Operation: "Update",
2025-10-13T00:18:42.615708667+00:00 stderr F -  APIVersion: "v1",
2025-10-13T00:18:42.615708667+00:00 stderr F -  Time: s"2025-08-13 20:39:50 +0000 UTC",
2025-10-13T00:18:42.615708667+00:00 stderr F -  FieldsType: "FieldsV1",
2025-10-13T00:18:42.615708667+00:00 stderr F -  FieldsV1: s`{"f:data":{"f:config-file.yaml":{}}}`,
2025-10-13T00:18:42.615708667+00:00 stderr F -  },
2025-10-13T00:18:42.615708667+00:00 stderr F -  },
2025-10-13T00:18:42.615708667+00:00 stderr F +  ManagedFields: nil,
2025-10-13T00:18:42.615708667+00:00 stderr F    },
2025-10-13T00:18:42.615708667+00:00 stderr F    Immutable: nil,
2025-10-13T00:18:42.615708667+00:00 stderr F    Data: {"config-file.yaml": "authorization:\n resourceAttributes:\n apiVersion: v1\n reso"...},
2025-10-13T00:18:42.615708667+00:00 stderr F    BinaryData: nil,
2025-10-13T00:18:42.615708667+00:00 stderr F   }
2025-10-13T00:18:42.838040774+00:00 stderr F I1013 00:18:42.837956 1 task_graph.go:481] Running 36 on worker 1
2025-10-13T00:18:42.915751516+00:00 stderr F I1013 00:18:42.915646 1 task_graph.go:481] Running 37 on worker 1
2025-10-13T00:18:43.115962075+00:00 stderr F W1013 00:18:43.115882 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-etcd" not found. It either has already been removed or it has never been installed on this cluster.
2025-10-13T00:18:43.115962075+00:00 stderr F I1013 00:18:43.115921 1 task_graph.go:481] Running 38 on worker 1
2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481147 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481458 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-10-13T00:18:43.483479493+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-10-13T00:18:43.483479493+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481514 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (380.091µs) 2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481530 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481576 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481626 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:43.483479493+00:00 stderr F I1013 00:18:43.481858 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:43.506427796+00:00 stderr F W1013 00:18:43.505845 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:43.508868139+00:00 stderr F I1013 00:18:43.507882 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.345904ms) 2025-10-13T00:18:43.515845546+00:00 stderr F I1013 00:18:43.515811 1 task_graph.go:481] Running 39 on worker 1 2025-10-13T00:18:43.769263068+00:00 stderr F W1013 00:18:43.769217 1 helper.go:97] imagestream "openshift/hello-openshift" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:44.067353910+00:00 stderr F I1013 00:18:44.067247 1 task_graph.go:481] Running 40 on worker 0 2025-10-13T00:18:44.364822973+00:00 stderr F W1013 00:18:44.364758 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-cluster-total" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:18:44.417077569+00:00 stderr F I1013 00:18:44.416521 1 task_graph.go:481] Running 41 on worker 1 2025-10-13T00:18:44.456674137+00:00 stderr F I1013 00:18:44.455591 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:44.456674137+00:00 stderr F I1013 00:18:44.455674 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:44.456674137+00:00 stderr F I1013 00:18:44.455731 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:44.456674137+00:00 stderr F I1013 00:18:44.455921 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:44.482485975+00:00 stderr F I1013 00:18:44.482407 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:44.482721182+00:00 stderr F I1013 00:18:44.482693 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:44.482721182+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:44.482721182+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:44.482837676+00:00 stderr F I1013 00:18:44.482772 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (376.132µs) 2025-10-13T00:18:44.486384741+00:00 stderr F W1013 00:18:44.486321 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:44.488191755+00:00 stderr F I1013 00:18:44.488144 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.559349ms) 2025-10-13T00:18:44.488191755+00:00 stderr F I1013 00:18:44.488174 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:44.488267397+00:00 stderr F I1013 00:18:44.488231 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:44.488304378+00:00 stderr F I1013 00:18:44.488283 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:44.488626938+00:00 stderr F I1013 00:18:44.488576 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:44.510968523+00:00 stderr F W1013 00:18:44.510901 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:44.512789557+00:00 stderr F I1013 00:18:44.512741 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.562481ms) 2025-10-13T00:18:44.565495496+00:00 stderr F W1013 00:18:44.565425 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:44.766172218+00:00 stderr F W1013 00:18:44.766116 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:44.965105559+00:00 stderr F W1013 00:18:44.964665 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:45.165360199+00:00 stderr F W1013 00:18:45.165245 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:45.365475724+00:00 stderr F W1013 00:18:45.365400 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:18:45.483694912+00:00 stderr F I1013 00:18:45.483623 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:45.483958130+00:00 stderr F I1013 00:18:45.483916 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:45.483958130+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:45.483958130+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:45.484004561+00:00 stderr F I1013 00:18:45.483976 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (365.091µs) 2025-10-13T00:18:45.484004561+00:00 stderr F I1013 00:18:45.483996 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:45.484085384+00:00 stderr F I1013 00:18:45.484044 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:45.484125835+00:00 stderr F I1013 00:18:45.484103 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:45.484464725+00:00 stderr F I1013 00:18:45.484407 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:45.507588383+00:00 stderr F W1013 00:18:45.507518 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:45.508835940+00:00 stderr F I1013 00:18:45.508806 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.807498ms) 2025-10-13T00:18:45.564577079+00:00 stderr F W1013 00:18:45.564522 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:45.765786018+00:00 stderr F W1013 00:18:45.765722 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:45.818298520+00:00 stderr F I1013 00:18:45.816822 1 task_graph.go:481] Running 42 on worker 1 2025-10-13T00:18:45.965162081+00:00 stderr F W1013 00:18:45.965082 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:18:46.115424313+00:00 stderr F I1013 00:18:46.115367 1 task_graph.go:481] Running 43 on worker 1 2025-10-13T00:18:46.164806893+00:00 stderr F W1013 00:18:46.164752 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:46.365253019+00:00 stderr F W1013 00:18:46.365201 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-pod-total" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:46.484552369+00:00 stderr F I1013 00:18:46.484115 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:46.484552369+00:00 stderr F I1013 00:18:46.484380 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:46.484552369+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:46.484552369+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:46.484552369+00:00 stderr F I1013 00:18:46.484435 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (329.62µs) 2025-10-13T00:18:46.484552369+00:00 stderr F I1013 00:18:46.484449 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:46.484552369+00:00 stderr F I1013 00:18:46.484494 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:46.484552369+00:00 stderr F I1013 00:18:46.484542 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:46.484781386+00:00 stderr F I1013 00:18:46.484744 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:46.507267005+00:00 stderr F W1013 00:18:46.506700 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:46.508018518+00:00 stderr F I1013 00:18:46.507978 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.52556ms) 2025-10-13T00:18:46.567384785+00:00 stderr F W1013 00:18:46.567015 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-prometheus" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:18:46.567384785+00:00 stderr F I1013 00:18:46.567043 1 task_graph.go:481] Running 44 on worker 0 2025-10-13T00:18:46.670046790+00:00 stderr F I1013 00:18:46.669992 1 task_graph.go:481] Running 45 on worker 0 2025-10-13T00:18:47.415480765+00:00 stderr F W1013 00:18:47.415088 1 helper.go:97] PrometheusRule "openshift-kube-apiserver/kube-apiserver-recording-rules" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:47.485260942+00:00 stderr F I1013 00:18:47.485189 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:47.485545721+00:00 stderr F I1013 00:18:47.485444 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:47.485545721+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:47.485545721+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:47.485545721+00:00 stderr F I1013 00:18:47.485491 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (324.33µs) 2025-10-13T00:18:47.485545721+00:00 stderr F I1013 00:18:47.485504 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:47.485570011+00:00 stderr F I1013 00:18:47.485538 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:47.485596732+00:00 stderr F I1013 00:18:47.485582 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:47.485817029+00:00 stderr F I1013 00:18:47.485766 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:47.515198663+00:00 stderr F W1013 00:18:47.515126 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:47.516587485+00:00 stderr F I1013 00:18:47.516541 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.034264ms) 2025-10-13T00:18:47.668943679+00:00 stderr F I1013 00:18:47.668890 1 task_graph.go:481] Running 46 on worker 0 2025-10-13T00:18:47.765505713+00:00 stderr F I1013 00:18:47.765451 1 task_graph.go:481] Running 47 on worker 0 2025-10-13T00:18:48.115050226+00:00 stderr F W1013 00:18:48.114995 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-api-performance" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:18:48.315509822+00:00 stderr F I1013 00:18:48.315441 1 task_graph.go:481] Running 48 on worker 1 2025-10-13T00:18:48.485787490+00:00 stderr F I1013 00:18:48.485705 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:48.485978675+00:00 stderr F I1013 00:18:48.485944 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:48.485978675+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:48.485978675+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:48.486030527+00:00 stderr F I1013 00:18:48.486002 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (304.729µs) 2025-10-13T00:18:48.486030527+00:00 stderr F I1013 00:18:48.486018 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:48.486078128+00:00 stderr F I1013 00:18:48.486053 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:48.486103609+00:00 stderr F I1013 00:18:48.486090 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:48.486348336+00:00 stderr F I1013 00:18:48.486293 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:48.504933910+00:00 stderr F W1013 00:18:48.504892 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:48.506156736+00:00 stderr F I1013 00:18:48.506119 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (20.100329ms) 2025-10-13T00:18:48.816228844+00:00 stderr F I1013 00:18:48.816145 1 task_graph.go:481] Running 49 on worker 1 2025-10-13T00:18:49.118524990+00:00 stderr F I1013 00:18:49.118425 1 task_graph.go:481] Running 50 on worker 1 2025-10-13T00:18:49.367090788+00:00 stderr F I1013 00:18:49.366970 1 task_graph.go:481] Running 51 on worker 0 2025-10-13T00:18:49.486686827+00:00 stderr F I1013 00:18:49.486589 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:49.487180602+00:00 stderr F I1013 00:18:49.487118 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:49.487180602+00:00 stderr F NMStateServiceFailure description: The NMState 
service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:49.487180602+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:49.487306136+00:00 stderr F I1013 00:18:49.487246 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (673.2µs) 2025-10-13T00:18:49.487306136+00:00 stderr F I1013 00:18:49.487284 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:49.487479521+00:00 stderr F I1013 00:18:49.487423 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:49.487560823+00:00 stderr F I1013 00:18:49.487523 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:49.488087799+00:00 stderr F I1013 00:18:49.488010 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:49.538139049+00:00 stderr F W1013 00:18:49.538077 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:49.541310163+00:00 stderr F I1013 00:18:49.541264 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.973416ms) 2025-10-13T00:18:50.487377470+00:00 stderr F I1013 00:18:50.487286 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:50.487749721+00:00 stderr F I1013 00:18:50.487733 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:50.487749721+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:50.487749721+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:50.487821513+00:00 stderr F I1013 00:18:50.487807 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (534.816µs) 2025-10-13T00:18:50.487851774+00:00 stderr F I1013 00:18:50.487841 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:50.487927026+00:00 stderr F I1013 00:18:50.487907 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:50.487986318+00:00 stderr F I1013 00:18:50.487976 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:50.488251896+00:00 stderr F I1013 00:18:50.488225 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:50.525066381+00:00 stderr F W1013 00:18:50.524991 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:50.528176944+00:00 stderr F I1013 00:18:50.528132 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.281769ms) 2025-10-13T00:18:51.417065869+00:00 stderr F I1013 00:18:51.417005 1 task_graph.go:481] Running 52 on worker 1 2025-10-13T00:18:51.488658810+00:00 stderr F I1013 00:18:51.488578 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:51.489007040+00:00 stderr F I1013 00:18:51.488969 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:51.489007040+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:51.489007040+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:51.489096723+00:00 stderr F I1013 00:18:51.489063 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (530.986µs) 2025-10-13T00:18:51.489096723+00:00 stderr F I1013 00:18:51.489090 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:51.489189365+00:00 stderr F I1013 00:18:51.489154 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:51.489264898+00:00 stderr F I1013 00:18:51.489234 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:51.489723981+00:00 stderr F I1013 00:18:51.489632 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:51.515257791+00:00 stderr F W1013 00:18:51.515219 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:51.516971342+00:00 stderr F I1013 00:18:51.516948 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.856099ms) 2025-10-13T00:18:52.415984428+00:00 stderr F W1013 00:18:52.415846 1 helper.go:97] PrometheusRule "openshift-authentication-operator/authentication-operator" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:52.415984428+00:00 stderr F I1013 00:18:52.415892 1 task_graph.go:481] Running 53 on worker 1 2025-10-13T00:18:52.416049799+00:00 stderr F I1013 00:18:52.416035 1 task_graph.go:481] Running 54 on worker 1 2025-10-13T00:18:52.489722062+00:00 stderr F I1013 00:18:52.489637 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:52.489970670+00:00 stderr F I1013 00:18:52.489927 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:52.489970670+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:52.489970670+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:52.490020191+00:00 stderr F I1013 00:18:52.489991 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (364.651µs) 2025-10-13T00:18:52.490020191+00:00 stderr F I1013 00:18:52.490011 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:52.490107474+00:00 stderr F I1013 00:18:52.490060 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:52.490127234+00:00 stderr F I1013 00:18:52.490117 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:52.490452674+00:00 stderr F I1013 00:18:52.490387 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:52.513941603+00:00 stderr F W1013 00:18:52.513865 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:52.515397846+00:00 stderr F I1013 00:18:52.515342 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.312473ms) 2025-10-13T00:18:52.818447526+00:00 stderr F I1013 00:18:52.818311 1 task_graph.go:481] Running 55 on worker 1 2025-10-13T00:18:53.490593320+00:00 stderr F I1013 00:18:53.490504 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:53.491126306+00:00 stderr F I1013 00:18:53.490986 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:53.491126306+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:53.491126306+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:53.491126306+00:00 stderr F I1013 00:18:53.491082 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (591.037µs) 2025-10-13T00:18:53.491126306+00:00 stderr F I1013 00:18:53.491102 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:53.491253469+00:00 stderr F I1013 00:18:53.491188 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:53.491370463+00:00 stderr F I1013 00:18:53.491285 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:53.491857587+00:00 stderr F I1013 00:18:53.491759 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:53.514058398+00:00 stderr F W1013 00:18:53.513943 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:53.516863862+00:00 stderr F I1013 00:18:53.516795 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.687984ms) 2025-10-13T00:18:53.716404290+00:00 stderr F I1013 00:18:53.716284 1 task_graph.go:481] Running 56 on worker 1 2025-10-13T00:18:54.492446197+00:00 stderr F I1013 00:18:54.491955 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:54.492684744+00:00 stderr F I1013 00:18:54.492648 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:54.492684744+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:54.492684744+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:54.492741945+00:00 stderr F I1013 00:18:54.492701 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (756.783µs) 2025-10-13T00:18:54.492741945+00:00 stderr F I1013 00:18:54.492718 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:54.492812457+00:00 stderr F I1013 00:18:54.492777 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:54.492869369+00:00 stderr F I1013 00:18:54.492831 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:54.493103666+00:00 stderr F I1013 00:18:54.493046 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:54.540304601+00:00 stderr F W1013 00:18:54.540230 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:54.544120984+00:00 stderr F I1013 00:18:54.544074 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.342158ms) 2025-10-13T00:18:55.492946603+00:00 stderr F I1013 00:18:55.492830 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:55.493275803+00:00 stderr F I1013 00:18:55.493221 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:55.493275803+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:55.493275803+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:55.493358815+00:00 stderr F I1013 00:18:55.493308 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (490.445µs) 2025-10-13T00:18:55.493386206+00:00 stderr F I1013 00:18:55.493372 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:55.493543041+00:00 stderr F I1013 00:18:55.493467 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:55.493562271+00:00 stderr F I1013 00:18:55.493551 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:55.494061066+00:00 stderr F I1013 00:18:55.493965 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:55.522496212+00:00 stderr F W1013 00:18:55.522418 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:55.525362608+00:00 stderr F I1013 00:18:55.525228 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.854118ms) 2025-10-13T00:18:55.616420068+00:00 stderr F W1013 00:18:55.616228 1 helper.go:97] configmap "openshift-machine-api/cluster-autoscaler-operator-ca" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:55.716403154+00:00 stderr F W1013 00:18:55.716212 1 helper.go:97] PrometheusRule "openshift-machine-api/cluster-autoscaler-operator-rules" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:18:55.716403154+00:00 stderr F I1013 00:18:55.716263 1 task_graph.go:481] Running 57 on worker 1 2025-10-13T00:18:55.818836492+00:00 stderr F I1013 00:18:55.818709 1 task_graph.go:481] Running 58 on worker 1 2025-10-13T00:18:56.067008537+00:00 stderr F I1013 00:18:56.066901 1 task_graph.go:481] Running 59 on worker 0 2025-10-13T00:18:56.494825420+00:00 stderr F I1013 00:18:56.493805 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:56.494825420+00:00 stderr F I1013 00:18:56.494776 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:56.494825420+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:56.494825420+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:56.494916163+00:00 stderr F I1013 00:18:56.494844 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.071282ms) 2025-10-13T00:18:56.494916163+00:00 stderr F I1013 00:18:56.494861 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:56.494938773+00:00 stderr F I1013 00:18:56.494914 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:56.495015026+00:00 stderr F I1013 00:18:56.494973 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:56.495310384+00:00 stderr F I1013 00:18:56.495241 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:56.532138680+00:00 stderr F W1013 00:18:56.532039 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:56.533403168+00:00 stderr F I1013 00:18:56.533343 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.480305ms) 2025-10-13T00:18:56.667277112+00:00 stderr F I1013 00:18:56.667170 1 task_graph.go:481] Running 60 on worker 0 2025-10-13T00:18:56.718735784+00:00 stderr F I1013 00:18:56.718630 1 task_graph.go:481] Running 61 on worker 1 2025-10-13T00:18:56.767525736+00:00 stderr F I1013 00:18:56.767418 1 task_graph.go:481] Running 62 on worker 0 2025-10-13T00:18:57.119089389+00:00 stderr F I1013 00:18:57.118977 1 task_graph.go:481] Running 63 on worker 1 2025-10-13T00:18:57.495513142+00:00 stderr F I1013 00:18:57.495393 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:57.495850282+00:00 stderr F I1013 00:18:57.495770 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:57.495850282+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:57.495850282+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:57.495928895+00:00 stderr F I1013 00:18:57.495884 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (504.845µs) 2025-10-13T00:18:57.495928895+00:00 stderr F I1013 00:18:57.495909 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:57.496041458+00:00 stderr F I1013 00:18:57.495974 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:57.496065709+00:00 stderr F I1013 00:18:57.496051 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:57.496570534+00:00 stderr F I1013 00:18:57.496468 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:57.540206822+00:00 stderr F W1013 00:18:57.540119 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:57.543003966+00:00 stderr F I1013 00:18:57.542915 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.000699ms) 2025-10-13T00:18:58.496426481+00:00 stderr F I1013 00:18:58.496275 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:58.496961927+00:00 stderr F I1013 00:18:58.496886 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:58.496961927+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:18:58.496961927+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:58.497082551+00:00 stderr F I1013 00:18:58.497016 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (761.163µs) 2025-10-13T00:18:58.497082551+00:00 stderr F I1013 00:18:58.497059 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:58.497237325+00:00 stderr F I1013 00:18:58.497161 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:58.497322438+00:00 stderr F I1013 00:18:58.497284 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:58.497916916+00:00 stderr F I1013 00:18:58.497815 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:58.540063660+00:00 stderr F W1013 00:18:58.539901 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:58.541454951+00:00 stderr F I1013 00:18:58.541387 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.327629ms) 2025-10-13T00:18:58.668772281+00:00 stderr F I1013 00:18:58.668685 1 task_graph.go:481] Running 64 on worker 1 2025-10-13T00:18:59.455742432+00:00 stderr F I1013 00:18:59.455626 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:59.455883536+00:00 stderr F I1013 00:18:59.455820 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:59.455972659+00:00 stderr F I1013 00:18:59.455931 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:59.456694510+00:00 stderr F I1013 00:18:59.456572 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:59.497634509+00:00 stderr F I1013 00:18:59.497503 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:18:59.497993530+00:00 stderr F I1013 00:18:59.497918 1 availableupdates.go:145] Requeue 
available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:18:59.497993530+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:18:59.497993530+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:18:59.498092533+00:00 stderr F I1013 00:18:59.498031 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (537.806µs) 2025-10-13T00:18:59.503103302+00:00 stderr F W1013 00:18:59.503027 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:59.507272646+00:00 stderr F I1013 00:18:59.507189 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.575295ms) 2025-10-13T00:18:59.507272646+00:00 stderr F I1013 00:18:59.507234 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:18:59.507474392+00:00 stderr F I1013 00:18:59.507403 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:18:59.507530223+00:00 stderr F I1013 00:18:59.507501 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:18:59.507980737+00:00 stderr F I1013 00:18:59.507902 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:18:59.550915735+00:00 stderr F W1013 00:18:59.550792 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:18:59.554124940+00:00 stderr F I1013 00:18:59.554073 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.833834ms) 2025-10-13T00:18:59.670205414+00:00 stderr F I1013 00:18:59.670111 1 task_graph.go:481] Running 65 on worker 1 2025-10-13T00:19:00.216665148+00:00 stderr F W1013 00:19:00.216542 1 helper.go:97] configmap "openshift-operator-lifecycle-manager/olm-operators" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:19:00.316246512+00:00 stderr F W1013 00:19:00.316172 1 helper.go:97] CatalogSource "openshift-operator-lifecycle-manager/olm-operators" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:19:00.499060583+00:00 stderr F I1013 00:19:00.498978 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:00.499349161+00:00 stderr F I1013 00:19:00.499312 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:00.499349161+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:00.499349161+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:00.499494996+00:00 stderr F I1013 00:19:00.499461 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (514.955µs) 2025-10-13T00:19:00.499494996+00:00 stderr F I1013 00:19:00.499484 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:00.499582858+00:00 stderr F I1013 00:19:00.499539 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:00.499618569+00:00 stderr F I1013 00:19:00.499599 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:00.499927408+00:00 stderr F I1013 00:19:00.499873 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:00.546232206+00:00 stderr F W1013 00:19:00.546138 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:00.547598267+00:00 stderr F I1013 00:19:00.547458 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.971788ms) 2025-10-13T00:19:00.567037506+00:00 stderr F I1013 00:19:00.566950 1 task_graph.go:481] Running 66 on worker 1 2025-10-13T00:19:00.567037506+00:00 stderr F I1013 00:19:00.566979 1 task_graph.go:481] Running 67 on worker 1 2025-10-13T00:19:00.615988332+00:00 stderr F W1013 00:19:00.615911 1 helper.go:97] Subscription "openshift-operator-lifecycle-manager/packageserver" not found. It either has already been removed or it has never been installed on this cluster. 
2025-10-13T00:19:00.818391266+00:00 stderr F I1013 00:19:00.818181 1 task_graph.go:481] Running 68 on worker 0 2025-10-13T00:19:00.966920447+00:00 stderr F I1013 00:19:00.966815 1 task_graph.go:481] Running 69 on worker 1 2025-10-13T00:19:01.017171813+00:00 stderr F I1013 00:19:01.016868 1 task_graph.go:481] Running 70 on worker 0 2025-10-13T00:19:01.017171813+00:00 stderr F I1013 00:19:01.016940 1 task_graph.go:481] Running 71 on worker 0 2025-10-13T00:19:01.068957984+00:00 stderr F I1013 00:19:01.068855 1 task_graph.go:481] Running 72 on worker 1 2025-10-13T00:19:01.117971453+00:00 stderr F I1013 00:19:01.117884 1 task_graph.go:481] Running 73 on worker 0 2025-10-13T00:19:01.499636250+00:00 stderr F I1013 00:19:01.499521 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:01.500015051+00:00 stderr F I1013 00:19:01.499942 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:01.500015051+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:01.500015051+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:01.500080953+00:00 stderr F I1013 00:19:01.500037 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (530.775µs) 2025-10-13T00:19:01.500080953+00:00 stderr F I1013 00:19:01.500066 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:01.500243138+00:00 stderr F I1013 00:19:01.500170 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:01.500266388+00:00 stderr F I1013 00:19:01.500255 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:01.500758073+00:00 stderr F I1013 00:19:01.500677 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:01.535768524+00:00 stderr F W1013 00:19:01.535696 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:01.537675011+00:00 stderr F I1013 00:19:01.537596 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.527036ms) 2025-10-13T00:19:02.266264538+00:00 stderr F I1013 00:19:02.266177 1 task_graph.go:481] Running 74 on worker 1 2025-10-13T00:19:02.500373470+00:00 stderr F I1013 00:19:02.500060 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:02.500373470+00:00 stderr F I1013 00:19:02.500301 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:02.500373470+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:02.500373470+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:02.500433671+00:00 stderr F I1013 00:19:02.500369 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (320.429µs) 2025-10-13T00:19:02.500433671+00:00 stderr F I1013 00:19:02.500382 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:02.500433671+00:00 stderr F I1013 00:19:02.500420 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:02.502875254+00:00 stderr F I1013 00:19:02.500462 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:02.502875254+00:00 stderr F I1013 00:19:02.500673 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:02.522366274+00:00 stderr F W1013 00:19:02.521809 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:02.526364062+00:00 stderr F I1013 00:19:02.524266 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.87643ms) 2025-10-13T00:19:03.501624264+00:00 stderr F I1013 00:19:03.501449 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:03.502073688+00:00 stderr F I1013 00:19:03.502030 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:03.502073688+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:03.502073688+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:03.502204222+00:00 stderr F I1013 00:19:03.502169 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (736.451µs) 2025-10-13T00:19:03.502232143+00:00 stderr F I1013 00:19:03.502212 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:03.502509471+00:00 stderr F I1013 00:19:03.502457 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:03.502581013+00:00 stderr F I1013 00:19:03.502558 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:03.503115829+00:00 stderr F I1013 00:19:03.503045 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:03.568393440+00:00 stderr F W1013 00:19:03.565874 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:03.572352248+00:00 stderr F I1013 00:19:03.571732 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (69.521067ms) 2025-10-13T00:19:03.866535636+00:00 stderr F W1013 00:19:03.866421 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+; use flowcontrol.apiserver.k8s.io/v1 FlowSchema 2025-10-13T00:19:03.866971289+00:00 stderr F I1013 00:19:03.866898 1 task_graph.go:481] Running 75 on worker 1 2025-10-13T00:19:04.366790493+00:00 stderr F I1013 00:19:04.366681 1 task_graph.go:481] Running 76 on worker 1 2025-10-13T00:19:04.468402905+00:00 stderr F I1013 00:19:04.468268 1 task_graph.go:481] Running 77 on worker 1 2025-10-13T00:19:04.468874839+00:00 stderr F I1013 00:19:04.468805 1 task_graph.go:481] Running 78 on worker 1 2025-10-13T00:19:04.502933712+00:00 stderr F I1013 00:19:04.502854 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:04.503353044+00:00 stderr F I1013 00:19:04.503263 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:04.503353044+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:04.503353044+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:04.503478638+00:00 stderr F I1013 00:19:04.503427 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (582.037µs) 2025-10-13T00:19:04.503478638+00:00 stderr F I1013 00:19:04.503463 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:04.503598471+00:00 stderr F I1013 00:19:04.503544 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:04.503663803+00:00 stderr F I1013 00:19:04.503634 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:04.504144838+00:00 stderr F I1013 00:19:04.504058 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:04.550790045+00:00 stderr F W1013 00:19:04.550710 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:04.553689771+00:00 stderr F I1013 00:19:04.553635 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.167512ms) 2025-10-13T00:19:05.266606643+00:00 stderr F W1013 00:19:05.266455 1 helper.go:97] clusterrolebinding "default-account-openshift-machine-config-operator" not found. It either has already been removed or it has never been installed on this cluster. 2025-10-13T00:19:05.266606643+00:00 stderr F I1013 00:19:05.266577 1 task_graph.go:478] Canceled worker 0 while waiting for work 2025-10-13T00:19:05.266676165+00:00 stderr F I1013 00:19:05.266616 1 task_graph.go:478] Canceled worker 1 while waiting for work 2025-10-13T00:19:05.266676165+00:00 stderr F I1013 00:19:05.266661 1 task_graph.go:527] Workers finished 2025-10-13T00:19:05.266693845+00:00 stderr F I1013 00:19:05.266671 1 task_graph.go:550] Result of work: [] 2025-10-13T00:19:05.503723264+00:00 stderr F I1013 00:19:05.503591 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:05.504083885+00:00 stderr F I1013 00:19:05.504026 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:05.504083885+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:05.504083885+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:05.504172308+00:00 stderr F I1013 00:19:05.504121 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (577.737µs) 2025-10-13T00:19:05.504172308+00:00 stderr F I1013 00:19:05.504152 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:05.504270421+00:00 stderr F I1013 00:19:05.504215 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:05.504378194+00:00 stderr F I1013 00:19:05.504294 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:05.504781086+00:00 stderr F I1013 00:19:05.504695 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:05.545616890+00:00 stderr F W1013 00:19:05.545496 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:05.548539877+00:00 stderr F I1013 00:19:05.548448 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.290747ms) 2025-10-13T00:19:06.504612730+00:00 stderr F I1013 00:19:06.504535 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:06.504896518+00:00 stderr F I1013 00:19:06.504860 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:06.504896518+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:06.504896518+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:06.504969720+00:00 stderr F I1013 00:19:06.504935 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (412.212µs) 2025-10-13T00:19:06.505065183+00:00 stderr F I1013 00:19:06.504995 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:06.505114884+00:00 stderr F I1013 00:19:06.505068 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:06.505180606+00:00 stderr F I1013 00:19:06.505123 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:06.505473995+00:00 stderr F I1013 00:19:06.505418 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:06.542756954+00:00 stderr F W1013 00:19:06.542637 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:06.545103654+00:00 stderr F I1013 00:19:06.545031 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.01828ms) 2025-10-13T00:19:07.505484313+00:00 stderr F I1013 00:19:07.505375 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:07.505704689+00:00 stderr F I1013 00:19:07.505638 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:07.505704689+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:07.505704689+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:07.505704689+00:00 stderr F I1013 00:19:07.505690 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (328.93µs) 2025-10-13T00:19:07.505729220+00:00 stderr F I1013 00:19:07.505707 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:07.505800272+00:00 stderr F I1013 00:19:07.505767 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:07.505823373+00:00 stderr F I1013 00:19:07.505814 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:07.506060800+00:00 stderr F I1013 00:19:07.506001 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:07.552898573+00:00 stderr F W1013 00:19:07.552799 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:07.555785519+00:00 stderr F I1013 00:19:07.555716 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.998867ms) 2025-10-13T00:19:08.506042268+00:00 stderr F I1013 00:19:08.505903 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:08.506533263+00:00 stderr F I1013 00:19:08.506462 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:08.506533263+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:08.506533263+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:08.506629626+00:00 stderr F I1013 00:19:08.506566 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (715.902µs) 2025-10-13T00:19:08.506724149+00:00 stderr F I1013 00:19:08.506669 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:08.506867553+00:00 stderr F I1013 00:19:08.506796 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:08.506941195+00:00 stderr F I1013 00:19:08.506901 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:08.507759569+00:00 stderr F I1013 00:19:08.507659 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:08.534380321+00:00 stderr F I1013 00:19:08.531581 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:19:08.534380321+00:00 stderr F I1013 00:19:08.531701 1 upgradeable.go:69] Upgradeability last checked 40.312006981s ago, will not re-check until 2025-10-13T00:20:28Z 2025-10-13T00:19:08.534380321+00:00 stderr F I1013 00:19:08.531716 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (156.595µs) 2025-10-13T00:19:08.534380321+00:00 stderr F I1013 00:19:08.531736 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:08.534380321+00:00 stderr F I1013 00:19:08.532138 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:08.534380321+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:08.534380321+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:08.534380321+00:00 stderr F I1013 00:19:08.532236 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (501.905µs) 2025-10-13T00:19:08.536028940+00:00 stderr F W1013 00:19:08.535683 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:08.538693279+00:00 stderr F I1013 00:19:08.538651 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.976061ms) 2025-10-13T00:19:08.538712080+00:00 stderr F I1013 00:19:08.538697 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:08.538818203+00:00 stderr F I1013 00:19:08.538787 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:08.538899945+00:00 stderr F I1013 00:19:08.538880 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:08.539481343+00:00 stderr F I1013 00:19:08.539298 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:08.574737921+00:00 stderr F W1013 00:19:08.574636 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:08.577189924+00:00 stderr F I1013 00:19:08.577137 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.435573ms) 2025-10-13T00:19:09.507084448+00:00 stderr F I1013 00:19:09.507001 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:09.507342085+00:00 stderr F I1013 00:19:09.507303 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:09.507342085+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:09.507342085+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:09.507428258+00:00 stderr F I1013 00:19:09.507394 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (407.202µs) 2025-10-13T00:19:09.507428258+00:00 stderr F I1013 00:19:09.507417 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:09.507493190+00:00 stderr F I1013 00:19:09.507463 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:09.507553142+00:00 stderr F I1013 00:19:09.507522 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:09.507827160+00:00 stderr F I1013 00:19:09.507776 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:09.538455411+00:00 stderr F W1013 00:19:09.538354 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:09.541276124+00:00 stderr F I1013 00:19:09.541212 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.787455ms) 2025-10-13T00:19:10.508545369+00:00 stderr F I1013 00:19:10.508446 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:10.509026553+00:00 stderr F I1013 00:19:10.508975 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:10.509026553+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:10.509026553+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:10.509172997+00:00 stderr F I1013 00:19:10.509097 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (665.48µs) 2025-10-13T00:19:10.509172997+00:00 stderr F I1013 00:19:10.509130 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:10.509320722+00:00 stderr F I1013 00:19:10.509255 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:10.509414564+00:00 stderr F I1013 00:19:10.509384 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:10.509948520+00:00 stderr F I1013 00:19:10.509865 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:10.540886060+00:00 stderr F W1013 00:19:10.540802 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:10.542263651+00:00 stderr F I1013 00:19:10.542199 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.068943ms) 2025-10-13T00:19:11.510224718+00:00 stderr F I1013 00:19:11.510141 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:11.510741263+00:00 stderr F I1013 00:19:11.510689 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:11.510741263+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:11.510741263+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:11.510867827+00:00 stderr F I1013 00:19:11.510823 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (696.431µs) 2025-10-13T00:19:11.510867827+00:00 stderr F I1013 00:19:11.510860 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:11.511054983+00:00 stderr F I1013 00:19:11.510956 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:11.511119525+00:00 stderr F I1013 00:19:11.511083 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:11.511710662+00:00 stderr F I1013 00:19:11.511620 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:11.556017020+00:00 stderr F W1013 00:19:11.555845 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:11.559798122+00:00 stderr F I1013 00:19:11.559700 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.831532ms) 2025-10-13T00:19:12.511054861+00:00 stderr F I1013 00:19:12.510929 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:12.511449922+00:00 stderr F I1013 00:19:12.511388 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:12.511449922+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:12.511449922+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:12.511559646+00:00 stderr F I1013 00:19:12.511486 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (571.377µs) 2025-10-13T00:19:12.511559646+00:00 stderr F I1013 00:19:12.511516 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:12.511721321+00:00 stderr F I1013 00:19:12.511632 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:12.511901406+00:00 stderr F I1013 00:19:12.511878 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:12.512370120+00:00 stderr F I1013 00:19:12.512285 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:12.560219683+00:00 stderr F W1013 00:19:12.560131 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:12.563264733+00:00 stderr F I1013 00:19:12.563177 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.654156ms) 2025-10-13T00:19:13.512363429+00:00 stderr F I1013 00:19:13.512209 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:13.512948346+00:00 stderr F I1013 00:19:13.512911 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:13.512948346+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:13.512948346+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:13.513109401+00:00 stderr F I1013 00:19:13.513075 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (890.067µs) 2025-10-13T00:19:13.513179973+00:00 stderr F I1013 00:19:13.513156 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:13.513320377+00:00 stderr F I1013 00:19:13.513281 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:13.513488832+00:00 stderr F I1013 00:19:13.513464 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:13.514000797+00:00 stderr F I1013 00:19:13.513942 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:13.543039181+00:00 stderr F W1013 00:19:13.542977 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:13.544797543+00:00 stderr F I1013 00:19:13.544767 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.60993ms) 2025-10-13T00:19:14.455742982+00:00 stderr F I1013 00:19:14.455641 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:14.455933148+00:00 stderr F I1013 00:19:14.455866 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:14.456012100+00:00 stderr F I1013 00:19:14.455981 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:14.456646039+00:00 stderr F I1013 00:19:14.456552 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:14.505247814+00:00 stderr F W1013 00:19:14.505150 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:14.508464640+00:00 stderr F I1013 00:19:14.508412 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.78374ms) 2025-10-13T00:19:14.513471209+00:00 stderr F I1013 00:19:14.513396 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:14.514019405+00:00 stderr F I1013 00:19:14.513984 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:14.514019405+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:14.514019405+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:14.514179610+00:00 stderr F I1013 00:19:14.514149 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (763.223µs) 2025-10-13T00:19:14.514395736+00:00 stderr F I1013 00:19:14.514257 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:14.514525630+00:00 stderr F I1013 00:19:14.514461 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:14.514590342+00:00 stderr F I1013 00:19:14.514557 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:14.515097077+00:00 stderr F I1013 00:19:14.515020 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:14.560851158+00:00 stderr F W1013 00:19:14.560512 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:14.565245919+00:00 stderr F I1013 00:19:14.565188 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.928624ms) 2025-10-13T00:19:15.514619942+00:00 stderr F I1013 00:19:15.514547 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:15.515137327+00:00 stderr F I1013 00:19:15.515105 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:15.515137327+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:15.515137327+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:15.515302762+00:00 stderr F I1013 00:19:15.515270 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (734.742µs) 2025-10-13T00:19:15.515438046+00:00 stderr F I1013 00:19:15.515412 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:15.515582001+00:00 stderr F I1013 00:19:15.515546 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:15.515684964+00:00 stderr F I1013 00:19:15.515663 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:15.516129527+00:00 stderr F I1013 00:19:15.516080 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:15.563904548+00:00 stderr F W1013 00:19:15.563834 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:15.567943668+00:00 stderr F I1013 00:19:15.567894 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.476921ms) 2025-10-13T00:19:16.515911289+00:00 stderr F I1013 00:19:16.515855 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:16.516251079+00:00 stderr F I1013 00:19:16.516230 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:16.516251079+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:16.516251079+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:16.516375153+00:00 stderr F I1013 00:19:16.516352 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (507.045µs) 2025-10-13T00:19:16.516422734+00:00 stderr F I1013 00:19:16.516407 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:16.516513877+00:00 stderr F I1013 00:19:16.516488 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:16.516588569+00:00 stderr F I1013 00:19:16.516574 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:16.516899818+00:00 stderr F I1013 00:19:16.516863 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:16.567368769+00:00 stderr F W1013 00:19:16.567288 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:16.569299917+00:00 stderr F I1013 00:19:16.569266 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.853291ms) 2025-10-13T00:19:17.517443832+00:00 stderr F I1013 00:19:17.517286 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:17.518042190+00:00 stderr F I1013 00:19:17.517990 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:17.518042190+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:17.518042190+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:17.518231236+00:00 stderr F I1013 00:19:17.518171 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (908.717µs) 2025-10-13T00:19:17.518301348+00:00 stderr F I1013 00:19:17.518279 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:17.518483663+00:00 stderr F I1013 00:19:17.518441 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:17.518619307+00:00 stderr F I1013 00:19:17.518597 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:17.519150763+00:00 stderr F I1013 00:19:17.519070 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:17.579630742+00:00 stderr F W1013 00:19:17.579543 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:17.582698323+00:00 stderr F I1013 00:19:17.582635 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (64.351674ms) 2025-10-13T00:19:18.518673058+00:00 stderr F I1013 00:19:18.518580 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:18.519220104+00:00 stderr F I1013 00:19:18.519185 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:18.519220104+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:18.519220104+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:18.519452171+00:00 stderr F I1013 00:19:18.519413 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (846.395µs) 2025-10-13T00:19:18.519627926+00:00 stderr F I1013 00:19:18.519526 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:18.519745369+00:00 stderr F I1013 00:19:18.519675 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:18.519783811+00:00 stderr F I1013 00:19:18.519763 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:18.520226014+00:00 stderr F I1013 00:19:18.520145 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:18.564290404+00:00 stderr F W1013 00:19:18.564203 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:18.567610083+00:00 stderr F I1013 00:19:18.567553 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.035909ms) 2025-10-13T00:19:19.520314394+00:00 stderr F I1013 00:19:19.520239 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:19.521103838+00:00 stderr F I1013 00:19:19.520895 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:19.521103838+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:19.521103838+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:19.521361506+00:00 stderr F I1013 00:19:19.521282 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.051941ms) 2025-10-13T00:19:19.521552431+00:00 stderr F I1013 00:19:19.521454 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:19.521734257+00:00 stderr F I1013 00:19:19.521666 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:19.521851380+00:00 stderr F I1013 00:19:19.521764 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:19.522275423+00:00 stderr F I1013 00:19:19.522200 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:19.570764815+00:00 stderr F W1013 00:19:19.570684 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:19.574064473+00:00 stderr F I1013 00:19:19.573801 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.363618ms) 2025-10-13T00:19:20.521758516+00:00 stderr F I1013 00:19:20.521617 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:20.522188379+00:00 stderr F I1013 00:19:20.522132 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:20.522188379+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:20.522188379+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:20.522298182+00:00 stderr F I1013 00:19:20.522241 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (648.589µs) 2025-10-13T00:19:20.522298182+00:00 stderr F I1013 00:19:20.522279 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:20.522458837+00:00 stderr F I1013 00:19:20.522394 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:20.522562590+00:00 stderr F I1013 00:19:20.522503 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:20.523234990+00:00 stderr F I1013 00:19:20.523149 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:20.562414955+00:00 stderr F W1013 00:19:20.562284 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:20.565471166+00:00 stderr F I1013 00:19:20.565412 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.128473ms) 2025-10-13T00:19:21.523133244+00:00 stderr F I1013 00:19:21.522974 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:21.523598948+00:00 stderr F I1013 00:19:21.523523 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:21.523598948+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:21.523598948+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:21.523673080+00:00 stderr F I1013 00:19:21.523621 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (675.31µs) 2025-10-13T00:19:21.523673080+00:00 stderr F I1013 00:19:21.523658 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:21.523786024+00:00 stderr F I1013 00:19:21.523737 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:21.523871056+00:00 stderr F I1013 00:19:21.523837 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:21.524496725+00:00 stderr F I1013 00:19:21.524381 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:21.570302857+00:00 stderr F W1013 00:19:21.570174 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:21.573119791+00:00 stderr F I1013 00:19:21.573023 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.361378ms) 2025-10-13T00:19:22.524057620+00:00 stderr F I1013 00:19:22.523955 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:22.524563945+00:00 stderr F I1013 00:19:22.524504 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:22.524563945+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:22.524563945+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:22.524665888+00:00 stderr F I1013 00:19:22.524613 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (683.87µs) 2025-10-13T00:19:22.524665888+00:00 stderr F I1013 00:19:22.524651 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:22.524805192+00:00 stderr F I1013 00:19:22.524750 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:22.524949246+00:00 stderr F I1013 00:19:22.524908 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:22.525454721+00:00 stderr F I1013 00:19:22.525376 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:22.553706072+00:00 stderr F W1013 00:19:22.553594 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:22.557101872+00:00 stderr F I1013 00:19:22.557004 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.345272ms) 2025-10-13T00:19:23.525937164+00:00 stderr F I1013 00:19:23.525554 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:23.526132140+00:00 stderr F I1013 00:19:23.526079 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:23.526132140+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:23.526132140+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:23.526235043+00:00 stderr F I1013 00:19:23.526188 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (657.009µs) 2025-10-13T00:19:23.526235043+00:00 stderr F I1013 00:19:23.526226 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:23.526459580+00:00 stderr F I1013 00:19:23.526401 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:23.526542762+00:00 stderr F I1013 00:19:23.526506 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:23.527015356+00:00 stderr F I1013 00:19:23.526942 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:23.550431953+00:00 stderr F W1013 00:19:23.550358 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:23.553566466+00:00 stderr F I1013 00:19:23.553501 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.27249ms) 2025-10-13T00:19:24.526635443+00:00 stderr F I1013 00:19:24.526491 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:24.527182590+00:00 stderr F I1013 00:19:24.527124 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:24.527182590+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:24.527182590+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:24.527369175+00:00 stderr F I1013 00:19:24.527274 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (809.264µs) 2025-10-13T00:19:24.527369175+00:00 stderr F I1013 00:19:24.527317 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:24.527520880+00:00 stderr F I1013 00:19:24.527463 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:24.527624393+00:00 stderr F I1013 00:19:24.527580 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:24.528509489+00:00 stderr F I1013 00:19:24.528106 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:24.565525010+00:00 stderr F W1013 00:19:24.565384 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:24.569074956+00:00 stderr F I1013 00:19:24.568985 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.664189ms) 2025-10-13T00:19:25.527453235+00:00 stderr F I1013 00:19:25.527322 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:25.527714813+00:00 stderr F I1013 00:19:25.527614 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:25.527714813+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:25.527714813+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:25.527714813+00:00 stderr F I1013 00:19:25.527666 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (355.8µs) 2025-10-13T00:19:25.527714813+00:00 stderr F I1013 00:19:25.527680 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:25.527742094+00:00 stderr F I1013 00:19:25.527717 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:25.527783165+00:00 stderr F I1013 00:19:25.527761 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:25.528032312+00:00 stderr F I1013 00:19:25.527983 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:25.557107797+00:00 stderr F W1013 00:19:25.557041 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:25.559154418+00:00 stderr F I1013 00:19:25.559121 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.434265ms) 2025-10-13T00:19:26.528168325+00:00 stderr F I1013 00:19:26.527985 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:26.528619448+00:00 stderr F I1013 00:19:26.528572 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:26.528619448+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:26.528619448+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:26.528754672+00:00 stderr F I1013 00:19:26.528706 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (735.932µs) 2025-10-13T00:19:26.528754672+00:00 stderr F I1013 00:19:26.528744 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:26.528888636+00:00 stderr F I1013 00:19:26.528835 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:26.528969099+00:00 stderr F I1013 00:19:26.528937 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:26.529550906+00:00 stderr F I1013 00:19:26.529477 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:26.584321755+00:00 stderr F W1013 00:19:26.584258 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:26.587170349+00:00 stderr F I1013 00:19:26.587127 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (58.380366ms) 2025-10-13T00:19:27.528906475+00:00 stderr F I1013 00:19:27.528787 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:27.529368789+00:00 stderr F I1013 00:19:27.529279 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:27.529368789+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:27.529368789+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:27.529497423+00:00 stderr F I1013 00:19:27.529425 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (703.951µs) 2025-10-13T00:19:27.529736290+00:00 stderr F I1013 00:19:27.529534 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:27.529736290+00:00 stderr F I1013 00:19:27.529623 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:27.529736290+00:00 stderr F I1013 00:19:27.529721 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:27.530823392+00:00 stderr F I1013 00:19:27.530678 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:27.557222197+00:00 stderr F W1013 00:19:27.557171 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:27.560271548+00:00 stderr F I1013 00:19:27.560228 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.690512ms) 2025-10-13T00:19:28.530391967+00:00 stderr F I1013 00:19:28.530298 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:28.530650375+00:00 stderr F I1013 00:19:28.530620 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:28.530650375+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:28.530650375+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:28.530707367+00:00 stderr F I1013 00:19:28.530682 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (396.102µs) 2025-10-13T00:19:28.530707367+00:00 stderr F I1013 00:19:28.530701 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:28.530797369+00:00 stderr F I1013 00:19:28.530769 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:28.530846381+00:00 stderr F I1013 00:19:28.530828 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:28.531140700+00:00 stderr F I1013 00:19:28.531095 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:28.551315430+00:00 stderr F W1013 00:19:28.551262 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:28.553314869+00:00 stderr F I1013 00:19:28.553285 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.580152ms) 2025-10-13T00:19:29.455879360+00:00 stderr F I1013 00:19:29.455779 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:29.455991513+00:00 stderr F I1013 00:19:29.455940 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:29.456098166+00:00 stderr F I1013 00:19:29.456036 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:29.458377434+00:00 stderr F I1013 00:19:29.458278 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:29.502219818+00:00 stderr F W1013 00:19:29.502110 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:29.504448724+00:00 stderr F I1013 00:19:29.504370 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.661317ms) 2025-10-13T00:19:29.532523339+00:00 stderr F I1013 00:19:29.531744 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:29.532523339+00:00 stderr F I1013 00:19:29.532111 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:29.532523339+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:29.532523339+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:29.532523339+00:00 stderr F I1013 00:19:29.532177 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (447.804µs) 2025-10-13T00:19:29.532523339+00:00 stderr F I1013 00:19:29.532198 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:29.532523339+00:00 stderr F I1013 00:19:29.532253 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:29.532523339+00:00 stderr F I1013 00:19:29.532319 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:29.533421186+00:00 stderr F I1013 00:19:29.532649 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:29.564314065+00:00 stderr F W1013 00:19:29.564218 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:29.565792479+00:00 stderr F I1013 00:19:29.565701 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.501577ms) 2025-10-13T00:19:30.533417343+00:00 stderr F I1013 00:19:30.533277 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:30.533748813+00:00 stderr F I1013 00:19:30.533693 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:30.533748813+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:30.533748813+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:30.533819085+00:00 stderr F I1013 00:19:30.533778 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (530.826µs) 2025-10-13T00:19:30.533819085+00:00 stderr F I1013 00:19:30.533803 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:30.533919698+00:00 stderr F I1013 00:19:30.533864 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:30.533964540+00:00 stderr F I1013 00:19:30.533936 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:30.534490605+00:00 stderr F I1013 00:19:30.534410 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:30.603440496+00:00 stderr F W1013 00:19:30.602478 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:30.607383743+00:00 stderr F I1013 00:19:30.604475 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (70.668491ms) 2025-10-13T00:19:31.534680169+00:00 stderr F I1013 00:19:31.534401 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:31.534825824+00:00 stderr F I1013 00:19:31.534772 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:31.534825824+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:31.534825824+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:31.534856755+00:00 stderr F I1013 00:19:31.534835 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (450.183µs) 2025-10-13T00:19:31.534876585+00:00 stderr F I1013 00:19:31.534852 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:31.534949898+00:00 stderr F I1013 00:19:31.534904 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:31.534972608+00:00 stderr F I1013 00:19:31.534959 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:31.535308798+00:00 stderr F I1013 00:19:31.535214 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:31.576250606+00:00 stderr F W1013 00:19:31.576142 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:31.579395659+00:00 stderr F I1013 00:19:31.579303 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.441822ms) 2025-10-13T00:19:32.535619205+00:00 stderr F I1013 00:19:32.535476 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:32.536860092+00:00 stderr F I1013 00:19:32.535970 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:32.536860092+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:32.536860092+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:32.536860092+00:00 stderr F I1013 00:19:32.536130 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (664.02µs) 2025-10-13T00:19:32.536860092+00:00 stderr F I1013 00:19:32.536267 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:32.536860092+00:00 stderr F I1013 00:19:32.536365 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:32.536860092+00:00 stderr F I1013 00:19:32.536439 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:32.536860092+00:00 stderr F I1013 00:19:32.536704 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:32.559216517+00:00 stderr F W1013 00:19:32.559084 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:32.562146414+00:00 stderr F I1013 00:19:32.562065 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.91106ms) 2025-10-13T00:19:33.537221002+00:00 stderr F I1013 00:19:33.537120 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:33.537599383+00:00 stderr F I1013 00:19:33.537543 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:33.537599383+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:33.537599383+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:33.537655405+00:00 stderr F I1013 00:19:33.537626 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (517.565µs) 2025-10-13T00:19:33.537655405+00:00 stderr F I1013 00:19:33.537649 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:33.537759608+00:00 stderr F I1013 00:19:33.537711 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:33.537809189+00:00 stderr F I1013 00:19:33.537783 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:33.538116868+00:00 stderr F I1013 00:19:33.538052 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:33.565962456+00:00 stderr F W1013 00:19:33.565894 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:33.567831822+00:00 stderr F I1013 00:19:33.567790 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.137486ms) 2025-10-13T00:19:34.539215570+00:00 stderr F I1013 00:19:34.538101 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:34.539215570+00:00 stderr F I1013 00:19:34.538693 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:34.539215570+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:34.539215570+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:34.539215570+00:00 stderr F I1013 00:19:34.538812 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (731.251µs) 2025-10-13T00:19:34.539215570+00:00 stderr F I1013 00:19:34.538847 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:34.539215570+00:00 stderr F I1013 00:19:34.538948 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:34.539215570+00:00 stderr F I1013 00:19:34.539125 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:34.540212309+00:00 stderr F I1013 00:19:34.540129 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:34.578233590+00:00 stderr F W1013 00:19:34.578119 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:34.580299291+00:00 stderr F I1013 00:19:34.580205 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.35705ms) 2025-10-13T00:19:35.540750963+00:00 stderr F I1013 00:19:35.540633 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:35.541305550+00:00 stderr F I1013 00:19:35.541223 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:35.541305550+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:35.541305550+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:35.541440624+00:00 stderr F I1013 00:19:35.541370 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (752.382µs) 2025-10-13T00:19:35.541440624+00:00 stderr F I1013 00:19:35.541420 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:35.541579258+00:00 stderr F I1013 00:19:35.541510 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:35.541639239+00:00 stderr F I1013 00:19:35.541605 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:35.542161005+00:00 stderr F I1013 00:19:35.542058 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:35.589305367+00:00 stderr F W1013 00:19:35.589179 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:35.592261335+00:00 stderr F I1013 00:19:35.592183 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.760899ms) 2025-10-13T00:19:36.541943937+00:00 stderr F I1013 00:19:36.541855 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:36.542308938+00:00 stderr F I1013 00:19:36.542268 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:36.542308938+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:36.542308938+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:36.542451112+00:00 stderr F I1013 00:19:36.542407 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (566.547µs) 2025-10-13T00:19:36.542451112+00:00 stderr F I1013 00:19:36.542443 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:36.542551885+00:00 stderr F I1013 00:19:36.542508 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:36.542613587+00:00 stderr F I1013 00:19:36.542589 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:36.543016299+00:00 stderr F I1013 00:19:36.542959 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:36.574838335+00:00 stderr F W1013 00:19:36.574759 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:36.577646099+00:00 stderr F I1013 00:19:36.577585 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.132804ms) 2025-10-13T00:19:37.543007806+00:00 stderr F I1013 00:19:37.542890 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:37.543443899+00:00 stderr F I1013 00:19:37.543382 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:37.543443899+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:37.543443899+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:37.543543222+00:00 stderr F I1013 00:19:37.543482 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (614.388µs) 2025-10-13T00:19:37.543543222+00:00 stderr F I1013 00:19:37.543518 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:37.543659686+00:00 stderr F I1013 00:19:37.543593 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:37.543733328+00:00 stderr F I1013 00:19:37.543695 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:37.544163271+00:00 stderr F I1013 00:19:37.544081 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:37.583554512+00:00 stderr F W1013 00:19:37.583464 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:37.585287754+00:00 stderr F I1013 00:19:37.585246 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.725431ms) 2025-10-13T00:19:38.544741037+00:00 stderr F I1013 00:19:38.544584 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:38.545107428+00:00 stderr F I1013 00:19:38.545056 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:38.545107428+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:38.545107428+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:38.545195670+00:00 stderr F I1013 00:19:38.545148 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (591.278µs) 2025-10-13T00:19:38.545195670+00:00 stderr F I1013 00:19:38.545179 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:38.545310144+00:00 stderr F I1013 00:19:38.545254 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:38.545425507+00:00 stderr F I1013 00:19:38.545379 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:38.545872541+00:00 stderr F I1013 00:19:38.545783 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:38.591950761+00:00 stderr F W1013 00:19:38.591374 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:38.593307091+00:00 stderr F I1013 00:19:38.593217 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.034298ms) 2025-10-13T00:19:39.546127726+00:00 stderr F I1013 00:19:39.546068 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:39.546313131+00:00 stderr F I1013 00:19:39.546285 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:39.546313131+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:39.546313131+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:39.546394794+00:00 stderr F I1013 00:19:39.546339 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (283.079µs) 2025-10-13T00:19:39.546394794+00:00 stderr F I1013 00:19:39.546384 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:39.546466226+00:00 stderr F I1013 00:19:39.546433 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:39.546502537+00:00 stderr F I1013 00:19:39.546486 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:39.546718153+00:00 stderr F I1013 00:19:39.546681 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:39.567250124+00:00 stderr F W1013 00:19:39.567194 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:39.569107509+00:00 stderr F I1013 00:19:39.569061 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.672704ms) 2025-10-13T00:19:40.547403242+00:00 stderr F I1013 00:19:40.547295 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:40.547809254+00:00 stderr F I1013 00:19:40.547751 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:40.547809254+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:40.547809254+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:40.547887757+00:00 stderr F I1013 00:19:40.547843 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (563.627µs) 2025-10-13T00:19:40.547887757+00:00 stderr F I1013 00:19:40.547870 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:40.547995660+00:00 stderr F I1013 00:19:40.547934 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:40.548042531+00:00 stderr F I1013 00:19:40.548015 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:40.548461944+00:00 stderr F I1013 00:19:40.548390 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:40.586435133+00:00 stderr F W1013 00:19:40.585454 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:40.589617417+00:00 stderr F I1013 00:19:40.589203 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.326369ms) 2025-10-13T00:19:41.548585455+00:00 stderr F I1013 00:19:41.548502 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:41.548872804+00:00 stderr F I1013 00:19:41.548816 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:41.548872804+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:41.548872804+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:41.548917315+00:00 stderr F I1013 00:19:41.548879 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (392.122µs) 2025-10-13T00:19:41.548917315+00:00 stderr F I1013 00:19:41.548895 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:41.548987747+00:00 stderr F I1013 00:19:41.548944 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:41.549039879+00:00 stderr F I1013 00:19:41.549009 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:41.549319357+00:00 stderr F I1013 00:19:41.549252 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:41.602448917+00:00 stderr F W1013 00:19:41.601948 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:41.606422165+00:00 stderr F I1013 00:19:41.605920 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.014176ms) 2025-10-13T00:19:42.549496989+00:00 stderr F I1013 00:19:42.549315 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:42.550179459+00:00 stderr F I1013 00:19:42.550095 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:42.550179459+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:42.550179459+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:42.550179459+00:00 stderr F I1013 00:19:42.550153 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (866.885µs) 2025-10-13T00:19:42.550179459+00:00 stderr F I1013 00:19:42.550169 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:42.550226311+00:00 stderr F I1013 00:19:42.550208 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:42.550294683+00:00 stderr F I1013 00:19:42.550252 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:42.550549100+00:00 stderr F I1013 00:19:42.550461 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:42.571754941+00:00 stderr F W1013 00:19:42.571674 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:42.575987327+00:00 stderr F I1013 00:19:42.575908 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.730055ms) 2025-10-13T00:19:43.550545859+00:00 stderr F I1013 00:19:43.550455 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:43.550722715+00:00 stderr F I1013 00:19:43.550689 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:43.550722715+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:43.550722715+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:43.550759796+00:00 stderr F I1013 00:19:43.550732 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (297.049µs) 2025-10-13T00:19:43.550759796+00:00 stderr F I1013 00:19:43.550746 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:43.550805077+00:00 stderr F I1013 00:19:43.550780 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:43.550841358+00:00 stderr F I1013 00:19:43.550823 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:43.551088956+00:00 stderr F I1013 00:19:43.551035 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:43.592259830+00:00 stderr F W1013 00:19:43.592201 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:43.594170427+00:00 stderr F I1013 00:19:43.594128 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.37746ms) 2025-10-13T00:19:44.455745119+00:00 stderr F I1013 00:19:44.455633 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:44.455956736+00:00 stderr F I1013 00:19:44.455867 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:44.456022048+00:00 stderr F I1013 00:19:44.455985 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:44.456611275+00:00 stderr F I1013 00:19:44.456520 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:44.506815708+00:00 stderr F W1013 00:19:44.506702 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:44.510633992+00:00 stderr F I1013 00:19:44.510567 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.975335ms) 2025-10-13T00:19:44.550981041+00:00 stderr F I1013 00:19:44.550868 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:44.551433995+00:00 stderr F I1013 00:19:44.551387 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:44.551433995+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:44.551433995+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:44.551519677+00:00 stderr F I1013 00:19:44.551484 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (640.939µs) 2025-10-13T00:19:44.551535718+00:00 stderr F I1013 00:19:44.551520 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:44.551641771+00:00 stderr F I1013 00:19:44.551599 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:44.551723724+00:00 stderr F I1013 00:19:44.551699 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:44.552189167+00:00 stderr F I1013 00:19:44.552122 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:44.599715441+00:00 stderr F W1013 00:19:44.599597 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:44.601551695+00:00 stderr F I1013 00:19:44.601509 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.986947ms) 2025-10-13T00:19:45.551832575+00:00 stderr F I1013 00:19:45.551683 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:45.552280028+00:00 stderr F I1013 00:19:45.552201 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:45.552280028+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:45.552280028+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:45.552362671+00:00 stderr F I1013 00:19:45.552300 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (643.21µs) 2025-10-13T00:19:45.552414552+00:00 stderr F I1013 00:19:45.552356 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:45.552521345+00:00 stderr F I1013 00:19:45.552430 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:45.552548176+00:00 stderr F I1013 00:19:45.552531 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:45.553033961+00:00 stderr F I1013 00:19:45.552940 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:45.597207444+00:00 stderr F W1013 00:19:45.597081 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:45.600036098+00:00 stderr F I1013 00:19:45.599962 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.627967ms) 2025-10-13T00:19:46.553157101+00:00 stderr F I1013 00:19:46.553062 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:46.553808390+00:00 stderr F I1013 00:19:46.553768 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:46.553808390+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:46.553808390+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:46.553968635+00:00 stderr F I1013 00:19:46.553936 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (927.938µs) 2025-10-13T00:19:46.554054318+00:00 stderr F I1013 00:19:46.554003 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:46.554182151+00:00 stderr F I1013 00:19:46.554107 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:46.554200312+00:00 stderr F I1013 00:19:46.554178 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:46.554543112+00:00 stderr F I1013 00:19:46.554471 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:46.593821090+00:00 stderr F W1013 00:19:46.593751 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:46.595863791+00:00 stderr F I1013 00:19:46.595788 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.789992ms) 2025-10-13T00:19:47.554126028+00:00 stderr F I1013 00:19:47.554034 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:47.554529570+00:00 stderr F I1013 00:19:47.554440 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:47.554529570+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:47.554529570+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:47.554686225+00:00 stderr F I1013 00:19:47.554615 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (591.398µs) 2025-10-13T00:19:47.554686225+00:00 stderr F I1013 00:19:47.554657 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:47.555487669+00:00 stderr F I1013 00:19:47.555266 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:47.555487669+00:00 stderr F I1013 00:19:47.555414 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:47.557546360+00:00 stderr F I1013 00:19:47.557437 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:47.603306641+00:00 stderr F W1013 00:19:47.603193 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:47.606248009+00:00 stderr F I1013 00:19:47.606159 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.497992ms) 2025-10-13T00:19:48.554796957+00:00 stderr F I1013 00:19:48.554703 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:48.555087686+00:00 stderr F I1013 00:19:48.555036 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:48.555087686+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:48.555087686+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:48.555112687+00:00 stderr F I1013 00:19:48.555095 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (423.653µs) 2025-10-13T00:19:48.555128007+00:00 stderr F I1013 00:19:48.555112 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:48.555222760+00:00 stderr F I1013 00:19:48.555160 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:48.555265631+00:00 stderr F I1013 00:19:48.555225 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:48.555564850+00:00 stderr F I1013 00:19:48.555488 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:48.590087007+00:00 stderr F W1013 00:19:48.589748 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:48.594391175+00:00 stderr F I1013 00:19:48.593624 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.501445ms) 2025-10-13T00:19:49.555502617+00:00 stderr F I1013 00:19:49.555312 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:49.555803966+00:00 stderr F I1013 00:19:49.555746 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:49.555803966+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:49.555803966+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:49.555897389+00:00 stderr F I1013 00:19:49.555849 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (551.216µs) 2025-10-13T00:19:49.555897389+00:00 stderr F I1013 00:19:49.555875 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:49.555996292+00:00 stderr F I1013 00:19:49.555944 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:49.556034373+00:00 stderr F I1013 00:19:49.556016 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:49.556467326+00:00 stderr F I1013 00:19:49.556384 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:49.597562928+00:00 stderr F W1013 00:19:49.597418 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:49.601283308+00:00 stderr F I1013 00:19:49.601222 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.336548ms) 2025-10-13T00:19:50.556499324+00:00 stderr F I1013 00:19:50.556404 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:50.556718821+00:00 stderr F I1013 00:19:50.556681 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:50.556718821+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:50.556718821+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:50.556752542+00:00 stderr F I1013 00:19:50.556734 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (344.23µs) 2025-10-13T00:19:50.556763392+00:00 stderr F I1013 00:19:50.556749 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:50.556832594+00:00 stderr F I1013 00:19:50.556800 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:50.556875935+00:00 stderr F I1013 00:19:50.556851 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:50.557113482+00:00 stderr F I1013 00:19:50.557075 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:50.588901078+00:00 stderr F W1013 00:19:50.588706 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:50.590077093+00:00 stderr F I1013 00:19:50.590040 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.28932ms) 2025-10-13T00:19:51.557219614+00:00 stderr F I1013 00:19:51.557126 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:51.557610146+00:00 stderr F I1013 00:19:51.557579 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:51.557610146+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:51.557610146+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:51.557696768+00:00 stderr F I1013 00:19:51.557667 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (581.117µs) 2025-10-13T00:19:51.557718729+00:00 stderr F I1013 00:19:51.557694 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:51.557796981+00:00 stderr F I1013 00:19:51.557761 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:51.557862103+00:00 stderr F I1013 00:19:51.557840 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:51.558344077+00:00 stderr F I1013 00:19:51.558270 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:51.583560707+00:00 stderr F W1013 00:19:51.583512 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:51.586616738+00:00 stderr F I1013 00:19:51.586579 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.878819ms) 2025-10-13T00:19:52.559377407+00:00 stderr F I1013 00:19:52.558650 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:52.559909422+00:00 stderr F I1013 00:19:52.559851 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:52.559909422+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:52.559909422+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:52.560017466+00:00 stderr F I1013 00:19:52.559978 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.34535ms) 2025-10-13T00:19:52.560034426+00:00 stderr F I1013 00:19:52.560016 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:52.560145349+00:00 stderr F I1013 00:19:52.560102 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:52.560223982+00:00 stderr F I1013 00:19:52.560195 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:52.560857131+00:00 stderr F I1013 00:19:52.560743 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:52.603361255+00:00 stderr F W1013 00:19:52.603261 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:52.604666193+00:00 stderr F I1013 00:19:52.604598 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.582636ms) 2025-10-13T00:19:53.560299701+00:00 stderr F I1013 00:19:53.560219 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:53.560830747+00:00 stderr F I1013 00:19:53.560760 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:53.560830747+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:53.560830747+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:53.560882609+00:00 stderr F I1013 00:19:53.560854 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (647.08µs) 2025-10-13T00:19:53.560882609+00:00 stderr F I1013 00:19:53.560876 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:53.561012843+00:00 stderr F I1013 00:19:53.560942 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:53.561034173+00:00 stderr F I1013 00:19:53.561018 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:53.561458406+00:00 stderr F I1013 00:19:53.561405 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:53.604580518+00:00 stderr F W1013 00:19:53.604455 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:53.607202046+00:00 stderr F I1013 00:19:53.607162 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.281617ms) 2025-10-13T00:19:54.561662870+00:00 stderr F I1013 00:19:54.561584 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:54.562101183+00:00 stderr F I1013 00:19:54.562073 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:54.562101183+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:54.562101183+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:54.562203236+00:00 stderr F I1013 00:19:54.562174 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (612.538µs) 2025-10-13T00:19:54.562235097+00:00 stderr F I1013 00:19:54.562213 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:54.562393252+00:00 stderr F I1013 00:19:54.562289 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:54.562449924+00:00 stderr F I1013 00:19:54.562429 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:54.562867076+00:00 stderr F I1013 00:19:54.562806 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:54.589808977+00:00 stderr F W1013 00:19:54.589728 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:54.592537469+00:00 stderr F I1013 00:19:54.592486 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.26872ms) 2025-10-13T00:19:55.562926196+00:00 stderr F I1013 00:19:55.562851 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:55.563555665+00:00 stderr F I1013 00:19:55.563512 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:55.563555665+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:55.563555665+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:55.563784282+00:00 stderr F I1013 00:19:55.563747 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (927.218µs) 2025-10-13T00:19:55.563870205+00:00 stderr F I1013 00:19:55.563841 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:55.564018329+00:00 stderr F I1013 00:19:55.563981 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:55.564151473+00:00 stderr F I1013 00:19:55.564122 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:55.565460862+00:00 stderr F I1013 00:19:55.565395 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:55.591963120+00:00 stderr F W1013 00:19:55.591913 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:55.594889187+00:00 stderr F I1013 00:19:55.594844 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.999542ms) 2025-10-13T00:19:56.564494562+00:00 stderr F I1013 00:19:56.564415 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:56.565852302+00:00 stderr F I1013 00:19:56.565792 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:56.565852302+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:56.565852302+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:56.566059858+00:00 stderr F I1013 00:19:56.566019 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.617778ms) 2025-10-13T00:19:56.566147141+00:00 stderr F I1013 00:19:56.566117 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:56.566374248+00:00 stderr F I1013 00:19:56.566287 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:56.566520862+00:00 stderr F I1013 00:19:56.566492 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:56.567082139+00:00 stderr F I1013 00:19:56.567015 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:56.609740387+00:00 stderr F W1013 00:19:56.609657 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:56.612638373+00:00 stderr F I1013 00:19:56.612565 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.445871ms) 2025-10-13T00:19:57.566513649+00:00 stderr F I1013 00:19:57.566422 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:57.567205660+00:00 stderr F I1013 00:19:57.567161 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:57.567205660+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:57.567205660+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:57.567476508+00:00 stderr F I1013 00:19:57.567397 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.01925ms) 2025-10-13T00:19:57.567558380+00:00 stderr F I1013 00:19:57.567491 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:57.567708525+00:00 stderr F I1013 00:19:57.567671 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:57.567819998+00:00 stderr F I1013 00:19:57.567799 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:57.568252381+00:00 stderr F I1013 00:19:57.568200 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:57.612302831+00:00 stderr F W1013 00:19:57.612255 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:57.615154616+00:00 stderr F I1013 00:19:57.615108 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.627516ms) 2025-10-13T00:19:58.568253009+00:00 stderr F I1013 00:19:58.568185 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:58.568572349+00:00 stderr F I1013 00:19:58.568519 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:58.568572349+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:19:58.568572349+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:58.568609060+00:00 stderr F I1013 00:19:58.568595 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (437.393µs) 2025-10-13T00:19:58.568627940+00:00 stderr F I1013 00:19:58.568611 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:58.568710353+00:00 stderr F I1013 00:19:58.568658 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:58.568777235+00:00 stderr F I1013 00:19:58.568752 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:58.569066193+00:00 stderr F I1013 00:19:58.569002 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:58.613091333+00:00 stderr F W1013 00:19:58.612101 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:58.615300548+00:00 stderr F I1013 00:19:58.615240 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.624316ms) 2025-10-13T00:19:59.454908567+00:00 stderr F I1013 00:19:59.454845 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:59.454953108+00:00 stderr F I1013 00:19:59.454931 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:59.454998570+00:00 stderr F I1013 00:19:59.454977 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:59.455205896+00:00 stderr F I1013 00:19:59.455172 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:59.473920752+00:00 stderr F W1013 00:19:59.473868 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:59.475166099+00:00 stderr F I1013 00:19:59.475135 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (20.297103ms) 2025-10-13T00:19:59.570574157+00:00 stderr F I1013 00:19:59.570478 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:19:59.570776303+00:00 stderr F I1013 00:19:59.570740 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:19:59.570776303+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:19:59.570776303+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:19:59.570819434+00:00 stderr F I1013 00:19:59.570794 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (328.78µs) 2025-10-13T00:19:59.570819434+00:00 stderr F I1013 00:19:59.570815 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:19:59.570913937+00:00 stderr F I1013 00:19:59.570860 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:19:59.570928317+00:00 stderr F I1013 00:19:59.570918 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:19:59.571236666+00:00 stderr F I1013 00:19:59.571184 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:19:59.614407050+00:00 stderr F W1013 00:19:59.613843 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:19:59.618380518+00:00 stderr F I1013 00:19:59.615593 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.771751ms) 2025-10-13T00:20:00.571315806+00:00 stderr F I1013 00:20:00.571211 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:00.571530813+00:00 stderr F I1013 00:20:00.571482 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:00.571530813+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:00.571530813+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:00.571555193+00:00 stderr F I1013 00:20:00.571532 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (332.64µs) 2025-10-13T00:20:00.571555193+00:00 stderr F I1013 00:20:00.571545 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:00.571640576+00:00 stderr F I1013 00:20:00.571597 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:00.571693888+00:00 stderr F I1013 00:20:00.571647 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:00.571914164+00:00 stderr F I1013 00:20:00.571854 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:00.593323331+00:00 stderr F W1013 00:20:00.593156 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:00.594646210+00:00 stderr F I1013 00:20:00.594558 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.010234ms) 2025-10-13T00:20:01.572307665+00:00 stderr F I1013 00:20:01.572244 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:01.572739208+00:00 stderr F I1013 00:20:01.572708 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:01.572739208+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:01.572739208+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:01.572852171+00:00 stderr F I1013 00:20:01.572827 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (596.958µs) 2025-10-13T00:20:01.572898832+00:00 stderr F I1013 00:20:01.572882 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:01.573005935+00:00 stderr F I1013 00:20:01.572977 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:01.573092678+00:00 stderr F I1013 00:20:01.573075 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:01.573541221+00:00 stderr F I1013 00:20:01.573487 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:01.595656759+00:00 stderr F W1013 00:20:01.595609 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:01.597613507+00:00 stderr F I1013 00:20:01.597584 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.700134ms) 2025-10-13T00:20:02.573566021+00:00 stderr F I1013 00:20:02.573483 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:02.573889191+00:00 stderr F I1013 00:20:02.573838 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:02.573889191+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:02.573889191+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:02.573991454+00:00 stderr F I1013 00:20:02.573941 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (488.884µs) 2025-10-13T00:20:02.573991454+00:00 stderr F I1013 00:20:02.573970 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:02.574076526+00:00 stderr F I1013 00:20:02.574033 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:02.574154609+00:00 stderr F I1013 00:20:02.574110 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:02.574643423+00:00 stderr F I1013 00:20:02.574537 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:02.596787322+00:00 stderr F W1013 00:20:02.596729 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:02.600102880+00:00 stderr F I1013 00:20:02.600048 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.069195ms) 2025-10-13T00:20:03.574729704+00:00 stderr F I1013 00:20:03.574673 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:03.575094975+00:00 stderr F I1013 00:20:03.575062 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:03.575094975+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:03.575094975+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:03.575182988+00:00 stderr F I1013 00:20:03.575153 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (492.465µs) 2025-10-13T00:20:03.575190878+00:00 stderr F I1013 00:20:03.575180 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:03.575282881+00:00 stderr F I1013 00:20:03.575242 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:03.575574389+00:00 stderr F I1013 00:20:03.575554 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:03.576130376+00:00 stderr F I1013 00:20:03.576066 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:03.606519190+00:00 stderr F W1013 00:20:03.606453 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:03.607841569+00:00 stderr F I1013 00:20:03.607794 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.6124ms) 2025-10-13T00:20:04.576159584+00:00 stderr F I1013 00:20:04.576011 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:04.576577487+00:00 stderr F I1013 00:20:04.576420 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:04.576577487+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:04.576577487+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:04.576577487+00:00 stderr F I1013 00:20:04.576513 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (518.436µs) 2025-10-13T00:20:04.576577487+00:00 stderr F I1013 00:20:04.576533 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:04.576676500+00:00 stderr F I1013 00:20:04.576600 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:04.576676500+00:00 stderr F I1013 00:20:04.576670 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:04.577309219+00:00 stderr F I1013 00:20:04.577012 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:04.610273049+00:00 stderr F W1013 00:20:04.610209 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:04.613227207+00:00 stderr F I1013 00:20:04.613148 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.608989ms) 2025-10-13T00:20:05.576829800+00:00 stderr F I1013 00:20:05.576750 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:05.577385116+00:00 stderr F I1013 00:20:05.577308 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:05.577385116+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:05.577385116+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:05.577536744+00:00 stderr F I1013 00:20:05.577491 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (781.898µs) 2025-10-13T00:20:05.577552262+00:00 stderr F I1013 00:20:05.577529 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:05.577685802+00:00 stderr F I1013 00:20:05.577635 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:05.577769635+00:00 stderr F I1013 00:20:05.577737 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:05.578699352+00:00 stderr F I1013 00:20:05.578376 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:05.601081603+00:00 stderr F W1013 00:20:05.601021 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:05.604034640+00:00 stderr F I1013 00:20:05.603982 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.45235ms) 2025-10-13T00:20:06.578377306+00:00 stderr F I1013 00:20:06.577846 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:06.578836809+00:00 stderr F I1013 00:20:06.578559 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:06.578836809+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:06.578836809+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:06.578836809+00:00 stderr F I1013 00:20:06.578608 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (787.168µs) 2025-10-13T00:20:06.578836809+00:00 stderr F I1013 00:20:06.578625 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:06.578836809+00:00 stderr F I1013 00:20:06.578668 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:06.578836809+00:00 stderr F I1013 00:20:06.578716 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:06.579006306+00:00 stderr F I1013 00:20:06.578953 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:06.628182610+00:00 stderr F W1013 00:20:06.627879 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:06.631353560+00:00 stderr F I1013 00:20:06.631279 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.646101ms) 2025-10-13T00:20:07.579217456+00:00 stderr F I1013 00:20:07.579071 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:07.579726376+00:00 stderr F I1013 00:20:07.579675 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:07.579726376+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:07.579726376+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:07.579859936+00:00 stderr F I1013 00:20:07.579800 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (742.541µs) 2025-10-13T00:20:07.579859936+00:00 stderr F I1013 00:20:07.579832 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:07.579975036+00:00 stderr F I1013 00:20:07.579915 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:07.580131384+00:00 stderr F I1013 00:20:07.580008 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:07.580602417+00:00 stderr F I1013 00:20:07.580516 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:07.603427803+00:00 stderr F W1013 00:20:07.603321 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:07.606097872+00:00 stderr F I1013 00:20:07.606038 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.20358ms) 2025-10-13T00:20:08.580684179+00:00 stderr F I1013 00:20:08.580559 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:08.581170430+00:00 stderr F I1013 00:20:08.581104 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:08.581170430+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:08.581170430+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:08.581288311+00:00 stderr F I1013 00:20:08.581220 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (679.606µs) 2025-10-13T00:20:08.581288311+00:00 stderr F I1013 00:20:08.581255 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:08.581450748+00:00 stderr F I1013 00:20:08.581390 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:08.581557290+00:00 stderr F I1013 00:20:08.581501 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:08.582075939+00:00 stderr F I1013 00:20:08.581993 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:08.615096090+00:00 stderr F W1013 00:20:08.615002 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:08.619199155+00:00 stderr F I1013 00:20:08.619136 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.873328ms) 2025-10-13T00:20:09.581735833+00:00 stderr F I1013 00:20:09.581659 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:09.582514822+00:00 stderr F I1013 00:20:09.581992 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:09.582514822+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:09.582514822+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:09.582514822+00:00 stderr F I1013 00:20:09.582069 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (420.796µs) 2025-10-13T00:20:09.582514822+00:00 stderr F I1013 00:20:09.582101 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:09.582514822+00:00 stderr F I1013 00:20:09.582160 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:09.582514822+00:00 stderr F I1013 00:20:09.582213 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:09.582576487+00:00 stderr F I1013 00:20:09.582522 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:09.624117555+00:00 stderr F W1013 00:20:09.624023 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:09.625997696+00:00 stderr F I1013 00:20:09.625937 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.832927ms) 2025-10-13T00:20:10.582591414+00:00 stderr F I1013 00:20:10.582542 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:10.582921508+00:00 stderr F I1013 00:20:10.582901 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:10.582921508+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:10.582921508+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:10.583014731+00:00 stderr F I1013 00:20:10.582993 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (462.164µs) 2025-10-13T00:20:10.583081576+00:00 stderr F I1013 00:20:10.583053 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:10.583176068+00:00 stderr F I1013 00:20:10.583151 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:10.583246722+00:00 stderr F I1013 00:20:10.583233 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:10.583574457+00:00 stderr F I1013 00:20:10.583530 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:10.624001212+00:00 stderr F W1013 00:20:10.623929 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:10.627034133+00:00 stderr F I1013 00:20:10.626970 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.91617ms) 2025-10-13T00:20:11.584069744+00:00 stderr F I1013 00:20:11.583976 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:11.584288357+00:00 stderr F I1013 00:20:11.584240 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:11.584288357+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:11.584288357+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:11.584348622+00:00 stderr F I1013 00:20:11.584300 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (354.701µs) 2025-10-13T00:20:11.584348622+00:00 stderr F I1013 00:20:11.584317 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:11.584416487+00:00 stderr F I1013 00:20:11.584382 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:11.584447835+00:00 stderr F I1013 00:20:11.584439 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:11.584814996+00:00 stderr F I1013 00:20:11.584740 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:11.615274659+00:00 stderr F W1013 00:20:11.615236 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:11.616727934+00:00 stderr F I1013 00:20:11.616707 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.387011ms) 2025-10-13T00:20:12.585300715+00:00 stderr F I1013 00:20:12.585202 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:12.585746730+00:00 stderr F I1013 00:20:12.585672 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:12.585746730+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:12.585746730+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:12.585814015+00:00 stderr F I1013 00:20:12.585775 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (587.243µs) 2025-10-13T00:20:12.585814015+00:00 stderr F I1013 00:20:12.585801 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:12.585917037+00:00 stderr F I1013 00:20:12.585869 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:12.585982232+00:00 stderr F I1013 00:20:12.585950 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:12.586459764+00:00 stderr F I1013 00:20:12.586358 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:12.613622298+00:00 stderr F W1013 00:20:12.613567 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:12.617052257+00:00 stderr F I1013 00:20:12.617003 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.195055ms) 2025-10-13T00:20:13.586646068+00:00 stderr F I1013 00:20:13.586576 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:13.586946454+00:00 stderr F I1013 00:20:13.586911 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:13.586946454+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:13.586946454+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:13.587011879+00:00 stderr F I1013 00:20:13.586979 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (414.877µs) 2025-10-13T00:20:13.587011879+00:00 stderr F I1013 00:20:13.587002 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:13.587094623+00:00 stderr F I1013 00:20:13.587054 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:13.587130510+00:00 stderr F I1013 00:20:13.587111 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:13.587483392+00:00 stderr F I1013 00:20:13.587409 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:13.612077139+00:00 stderr F W1013 00:20:13.612018 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:13.615027046+00:00 stderr F I1013 00:20:13.614929 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.921234ms) 2025-10-13T00:20:14.455507691+00:00 stderr F I1013 00:20:14.454954 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:14.455714717+00:00 stderr F I1013 00:20:14.455684 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:14.455800119+00:00 stderr F I1013 00:20:14.455783 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:14.456151819+00:00 stderr F I1013 00:20:14.456114 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:14.483924028+00:00 stderr F W1013 00:20:14.483865 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:14.485171013+00:00 stderr F I1013 00:20:14.485143 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.201007ms) 2025-10-13T00:20:14.587139515+00:00 stderr F I1013 00:20:14.587034 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:14.587358071+00:00 stderr F I1013 00:20:14.587287 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:14.587358071+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:20:14.587358071+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:14.587399672+00:00 stderr F I1013 00:20:14.587378 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (358.66µs) 2025-10-13T00:20:14.587399672+00:00 stderr F I1013 00:20:14.587394 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:14.587489475+00:00 stderr F I1013 00:20:14.587431 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:14.587489475+00:00 stderr F I1013 00:20:14.587479 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:14.587763552+00:00 stderr F I1013 00:20:14.587672 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:14.619763160+00:00 stderr F W1013 00:20:14.619706 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:14.621059286+00:00 stderr F I1013 00:20:14.620979 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.583262ms) 2025-10-13T00:20:15.588418289+00:00 stderr F I1013 00:20:15.588307 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:15.588870212+00:00 stderr F I1013 00:20:15.588798 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:15.588870212+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:15.588870212+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:15.588957204+00:00 stderr F I1013 00:20:15.588892 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (598.457µs) 2025-10-13T00:20:15.588957204+00:00 stderr F I1013 00:20:15.588922 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:15.589039537+00:00 stderr F I1013 00:20:15.588986 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:15.589095328+00:00 stderr F I1013 00:20:15.589063 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:15.589555131+00:00 stderr F I1013 00:20:15.589469 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:15.634734999+00:00 stderr F W1013 00:20:15.634670 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:15.637899428+00:00 stderr F I1013 00:20:15.637857 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.927883ms) 2025-10-13T00:20:16.589842240+00:00 stderr F I1013 00:20:16.589768 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:16.590160189+00:00 stderr F I1013 00:20:16.590121 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:16.590160189+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:16.590160189+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:16.590220081+00:00 stderr F I1013 00:20:16.590187 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (432.483µs) 2025-10-13T00:20:16.590220081+00:00 stderr F I1013 00:20:16.590212 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:16.590314513+00:00 stderr F I1013 00:20:16.590271 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:16.590421006+00:00 stderr F I1013 00:20:16.590388 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:16.590708054+00:00 stderr F I1013 00:20:16.590655 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:16.620887961+00:00 stderr F W1013 00:20:16.620824 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:16.622911478+00:00 stderr F I1013 00:20:16.622820 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.604385ms) 2025-10-13T00:20:17.590719215+00:00 stderr F I1013 00:20:17.590649 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:17.591110436+00:00 stderr F I1013 00:20:17.591087 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:17.591110436+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:17.591110436+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:17.591270390+00:00 stderr F I1013 00:20:17.591239 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (603.537µs) 2025-10-13T00:20:17.591323442+00:00 stderr F I1013 00:20:17.591307 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:17.591438495+00:00 stderr F I1013 00:20:17.591413 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:17.591515247+00:00 stderr F I1013 00:20:17.591499 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:17.591830836+00:00 stderr F I1013 00:20:17.591796 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:17.614283866+00:00 stderr F W1013 00:20:17.614228 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:17.616580680+00:00 stderr F I1013 00:20:17.616537 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.224178ms) 2025-10-13T00:20:18.591597769+00:00 stderr F I1013 00:20:18.591532 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:18.591833525+00:00 stderr F I1013 00:20:18.591794 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:18.591833525+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:18.591833525+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:18.591888417+00:00 stderr F I1013 00:20:18.591855 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (350.77µs) 2025-10-13T00:20:18.591888417+00:00 stderr F I1013 00:20:18.591874 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:18.591951649+00:00 stderr F I1013 00:20:18.591916 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:18.591998470+00:00 stderr F I1013 00:20:18.591973 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:18.592266788+00:00 stderr F I1013 00:20:18.592219 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:18.615495469+00:00 stderr F W1013 00:20:18.614293 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:18.618263597+00:00 stderr F I1013 00:20:18.618120 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.238087ms) 2025-10-13T00:20:19.592317020+00:00 stderr F I1013 00:20:19.592253 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:19.592777173+00:00 stderr F I1013 00:20:19.592745 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:19.592777173+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:19.592777173+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:19.592917127+00:00 stderr F I1013 00:20:19.592888 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (648.029µs) 2025-10-13T00:20:19.592978088+00:00 stderr F I1013 00:20:19.592956 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:19.593101602+00:00 stderr F I1013 00:20:19.593067 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:19.593203475+00:00 stderr F I1013 00:20:19.593183 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:19.593666788+00:00 stderr F I1013 00:20:19.593613 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:19.628275519+00:00 stderr F W1013 00:20:19.628227 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:19.631503549+00:00 stderr F I1013 00:20:19.631463 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.501921ms) 2025-10-13T00:20:20.593513153+00:00 stderr F I1013 00:20:20.593401 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:20.593920835+00:00 stderr F I1013 00:20:20.593868 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:20.593920835+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:20.593920835+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:20.594045578+00:00 stderr F I1013 00:20:20.593990 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (600.326µs) 2025-10-13T00:20:20.594045578+00:00 stderr F I1013 00:20:20.594030 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:20.594190572+00:00 stderr F I1013 00:20:20.594133 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:20.594247674+00:00 stderr F I1013 00:20:20.594215 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:20.594739988+00:00 stderr F I1013 00:20:20.594662 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:20.622314572+00:00 stderr F W1013 00:20:20.622252 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:20.625541522+00:00 stderr F I1013 00:20:20.625493 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.458762ms) 2025-10-13T00:20:21.595271754+00:00 stderr F I1013 00:20:21.594411 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:21.595271754+00:00 stderr F I1013 00:20:21.594796 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:21.595271754+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:21.595271754+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:21.595271754+00:00 stderr F I1013 00:20:21.594887 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (562.786µs) 2025-10-13T00:20:21.595271754+00:00 stderr F I1013 00:20:21.594911 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:21.595271754+00:00 stderr F I1013 00:20:21.595000 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:21.595271754+00:00 stderr F I1013 00:20:21.595077 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:21.595550311+00:00 stderr F I1013 00:20:21.595471 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:21.643042434+00:00 stderr F W1013 00:20:21.643002 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:21.644833344+00:00 stderr F I1013 00:20:21.644807 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.89541ms) 2025-10-13T00:20:22.595598052+00:00 stderr F I1013 00:20:22.595054 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:22.595864029+00:00 stderr F I1013 00:20:22.595809 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:22.595864029+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:22.595864029+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:22.595941802+00:00 stderr F I1013 00:20:22.595896 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (873.634µs) 2025-10-13T00:20:22.595941802+00:00 stderr F I1013 00:20:22.595927 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:22.596029314+00:00 stderr F I1013 00:20:22.595985 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:22.596076275+00:00 stderr F I1013 00:20:22.596051 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:22.596427445+00:00 stderr F I1013 00:20:22.596359 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:22.633903997+00:00 stderr F W1013 00:20:22.633815 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:22.637318613+00:00 stderr F I1013 00:20:22.637254 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.317329ms) 2025-10-13T00:20:23.596953220+00:00 stderr F I1013 00:20:23.596881 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:23.597405162+00:00 stderr F I1013 00:20:23.597379 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:23.597405162+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:23.597405162+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:23.597580807+00:00 stderr F I1013 00:20:23.597546 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (670.979µs) 2025-10-13T00:20:23.597644809+00:00 stderr F I1013 00:20:23.597625 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:23.597754872+00:00 stderr F I1013 00:20:23.597725 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:23.597846245+00:00 stderr F I1013 00:20:23.597831 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:23.598153173+00:00 stderr F I1013 00:20:23.598117 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:23.629363519+00:00 stderr F W1013 00:20:23.629242 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:23.631533650+00:00 stderr F I1013 00:20:23.631484 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.85836ms) 2025-10-13T00:20:24.598617917+00:00 stderr F I1013 00:20:24.598522 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:24.598814132+00:00 stderr F I1013 00:20:24.598779 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:24.598814132+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:24.598814132+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:24.598852334+00:00 stderr F I1013 00:20:24.598831 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (321.629µs) 2025-10-13T00:20:24.598852334+00:00 stderr F I1013 00:20:24.598844 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:24.598938036+00:00 stderr F I1013 00:20:24.598885 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:24.598938036+00:00 stderr F I1013 00:20:24.598934 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:24.599186913+00:00 stderr F I1013 00:20:24.599138 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:24.622483297+00:00 stderr F W1013 00:20:24.622395 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:24.623771503+00:00 stderr F I1013 00:20:24.623720 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.872238ms) 2025-10-13T00:20:25.599507501+00:00 stderr F I1013 00:20:25.599432 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:25.599714037+00:00 stderr F I1013 00:20:25.599675 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:25.599714037+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:25.599714037+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:25.599742667+00:00 stderr F I1013 00:20:25.599724 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (320.259µs) 2025-10-13T00:20:25.599742667+00:00 stderr F I1013 00:20:25.599737 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:25.599815549+00:00 stderr F I1013 00:20:25.599775 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:25.599828840+00:00 stderr F I1013 00:20:25.599821 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:25.600072797+00:00 stderr F I1013 00:20:25.600025 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:25.622228818+00:00 stderr F W1013 00:20:25.622165 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:25.623532175+00:00 stderr F I1013 00:20:25.623487 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.747127ms) 2025-10-13T00:20:26.600734815+00:00 stderr F I1013 00:20:26.600659 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:26.600944321+00:00 stderr F I1013 00:20:26.600902 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:26.600944321+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:26.600944321+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:26.600981812+00:00 stderr F I1013 00:20:26.600965 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (320.259µs) 2025-10-13T00:20:26.600989203+00:00 stderr F I1013 00:20:26.600979 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:26.601060285+00:00 stderr F I1013 00:20:26.601032 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:26.601094946+00:00 stderr F I1013 00:20:26.601081 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:26.601322042+00:00 stderr F I1013 00:20:26.601286 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:26.635231563+00:00 stderr F W1013 00:20:26.635179 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:26.636531210+00:00 stderr F I1013 00:20:26.636488 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.507147ms) 2025-10-13T00:20:27.602047282+00:00 stderr F I1013 00:20:27.601528 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:27.602473344+00:00 stderr F I1013 00:20:27.602353 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:27.602473344+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:27.602473344+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:27.602522716+00:00 stderr F I1013 00:20:27.602483 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (985.227µs) 2025-10-13T00:20:27.602522716+00:00 stderr F I1013 00:20:27.602505 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:27.602602508+00:00 stderr F I1013 00:20:27.602566 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:27.602659089+00:00 stderr F I1013 00:20:27.602638 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:27.603042110+00:00 stderr F I1013 00:20:27.602989 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:27.626538580+00:00 stderr F W1013 00:20:27.626442 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:27.630816150+00:00 stderr F I1013 00:20:27.630751 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.235932ms) 2025-10-13T00:20:28.602774023+00:00 stderr F I1013 00:20:28.602696 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:28.603093652+00:00 stderr F I1013 00:20:28.603051 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:28.603093652+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:28.603093652+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:28.603154444+00:00 stderr F I1013 00:20:28.603120 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (436.683µs) 2025-10-13T00:20:28.603154444+00:00 stderr F I1013 00:20:28.603142 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:28.603228396+00:00 stderr F I1013 00:20:28.603187 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:28.603273387+00:00 stderr F I1013 00:20:28.603252 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:28.603660148+00:00 stderr F I1013 00:20:28.603603 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:28.630810640+00:00 stderr F W1013 00:20:28.630764 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:28.632666092+00:00 stderr F I1013 00:20:28.632632 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.486148ms) 2025-10-13T00:20:29.455578632+00:00 stderr F I1013 00:20:29.455478 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:29.455635774+00:00 stderr F I1013 00:20:29.455603 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:29.455703726+00:00 stderr F I1013 00:20:29.455677 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:29.456169479+00:00 stderr F I1013 00:20:29.456096 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:29.477301012+00:00 stderr F W1013 00:20:29.477205 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:29.480233574+00:00 stderr F I1013 00:20:29.480154 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.683502ms) 2025-10-13T00:20:29.603936645+00:00 stderr F I1013 00:20:29.603853 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:29.604303575+00:00 stderr F I1013 00:20:29.604267 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:29.604303575+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:20:29.604303575+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:29.604402178+00:00 stderr F I1013 00:20:29.604369 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (528.985µs) 2025-10-13T00:20:29.604413598+00:00 stderr F I1013 00:20:29.604396 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:29.604493191+00:00 stderr F I1013 00:20:29.604457 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:29.604557252+00:00 stderr F I1013 00:20:29.604531 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:29.604940973+00:00 stderr F I1013 00:20:29.604891 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:29.627245379+00:00 stderr F W1013 00:20:29.627155 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:29.630216132+00:00 stderr F I1013 00:20:29.630147 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.745742ms) 2025-10-13T00:20:30.605463968+00:00 stderr F I1013 00:20:30.605412 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:30.605793157+00:00 stderr F I1013 00:20:30.605777 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:30.605793157+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:30.605793157+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:30.605863959+00:00 stderr F I1013 00:20:30.605850 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (448.713µs) 2025-10-13T00:20:30.605894620+00:00 stderr F I1013 00:20:30.605884 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:30.605959542+00:00 stderr F I1013 00:20:30.605941 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:30.606017923+00:00 stderr F I1013 00:20:30.606008 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:30.606258290+00:00 stderr F I1013 00:20:30.606231 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:30.632405084+00:00 stderr F W1013 00:20:30.632308 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:30.633737201+00:00 stderr F I1013 00:20:30.633699 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.812571ms) 2025-10-13T00:20:31.606999401+00:00 stderr F I1013 00:20:31.606217 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:31.606999401+00:00 stderr F I1013 00:20:31.606600 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:31.606999401+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:31.606999401+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:31.606999401+00:00 stderr F I1013 00:20:31.606685 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (483.674µs) 2025-10-13T00:20:31.606999401+00:00 stderr F I1013 00:20:31.606706 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:31.606999401+00:00 stderr F I1013 00:20:31.606769 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:31.606999401+00:00 stderr F I1013 00:20:31.606837 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:31.607407822+00:00 stderr F I1013 00:20:31.607251 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:31.629954395+00:00 stderr F W1013 00:20:31.629883 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:31.633072593+00:00 stderr F I1013 00:20:31.633016 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.299628ms) 2025-10-13T00:20:31.857148450+00:00 stderr F I1013 00:20:31.856713 1 sync_worker.go:234] Notify the sync worker: Cluster operator machine-config changed Degraded from "False" to "True" 2025-10-13T00:20:31.857148450+00:00 stderr F I1013 00:20:31.857092 1 sync_worker.go:584] Cluster operator machine-config changed Degraded from "False" to "True" 2025-10-13T00:20:31.857148450+00:00 stderr F I1013 00:20:31.857102 1 sync_worker.go:592] No change, waiting 2025-10-13T00:20:32.607583147+00:00 stderr F I1013 00:20:32.607500 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:32.607905687+00:00 stderr F I1013 00:20:32.607869 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:32.607905687+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:32.607905687+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:32.608032520+00:00 stderr F I1013 00:20:32.607959 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (493.364µs) 2025-10-13T00:20:32.608032520+00:00 stderr F I1013 00:20:32.607987 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:32.608121113+00:00 stderr F I1013 00:20:32.608075 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:32.608192155+00:00 stderr F I1013 00:20:32.608157 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:32.608629157+00:00 stderr F I1013 00:20:32.608566 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:32.637574258+00:00 stderr F W1013 00:20:32.637506 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:32.640962483+00:00 stderr F I1013 00:20:32.640905 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.913372ms) 2025-10-13T00:20:33.609094989+00:00 stderr F I1013 00:20:33.608510 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:33.609415478+00:00 stderr F I1013 00:20:33.609385 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:33.609415478+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:33.609415478+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:33.609477780+00:00 stderr F I1013 00:20:33.609449 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (964.627µs) 2025-10-13T00:20:33.609477780+00:00 stderr F I1013 00:20:33.609468 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:33.609553922+00:00 stderr F I1013 00:20:33.609522 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:33.609592843+00:00 stderr F I1013 00:20:33.609578 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:33.609892141+00:00 stderr F I1013 00:20:33.609852 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:33.641821637+00:00 stderr F W1013 00:20:33.641763 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:33.643668389+00:00 stderr F I1013 00:20:33.643628 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.156139ms) 2025-10-13T00:20:34.609717757+00:00 stderr F I1013 00:20:34.609629 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:34.609949303+00:00 stderr F I1013 00:20:34.609899 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:34.609949303+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:34.609949303+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:34.610010545+00:00 stderr F I1013 00:20:34.609978 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (359.09µs) 2025-10-13T00:20:34.610010545+00:00 stderr F I1013 00:20:34.609999 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:34.610068527+00:00 stderr F I1013 00:20:34.610035 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:34.610123088+00:00 stderr F I1013 00:20:34.610086 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:34.610400266+00:00 stderr F I1013 00:20:34.610315 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:34.635437738+00:00 stderr F W1013 00:20:34.635354 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:34.636729045+00:00 stderr F I1013 00:20:34.636664 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.662628ms) 2025-10-13T00:20:35.611091835+00:00 stderr F I1013 00:20:35.611003 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:35.611503957+00:00 stderr F I1013 00:20:35.611448 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:35.611503957+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:35.611503957+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:35.611580839+00:00 stderr F I1013 00:20:35.611538 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (548.956µs) 2025-10-13T00:20:35.611580839+00:00 stderr F I1013 00:20:35.611567 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:35.611678262+00:00 stderr F I1013 00:20:35.611633 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:35.611737433+00:00 stderr F I1013 00:20:35.611709 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:35.612153325+00:00 stderr F I1013 00:20:35.612094 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:35.645749588+00:00 stderr F W1013 00:20:35.645674 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:35.649051760+00:00 stderr F I1013 00:20:35.648998 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.423541ms) 2025-10-13T00:20:36.612692689+00:00 stderr F I1013 00:20:36.612603 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:36.613063880+00:00 stderr F I1013 00:20:36.613015 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:36.613063880+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:36.613063880+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:36.613153812+00:00 stderr F I1013 00:20:36.613107 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (517.335µs) 2025-10-13T00:20:36.613153812+00:00 stderr F I1013 00:20:36.613134 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:36.613290046+00:00 stderr F I1013 00:20:36.613229 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:36.613316717+00:00 stderr F I1013 00:20:36.613306 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:36.613781870+00:00 stderr F I1013 00:20:36.613707 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:36.643464353+00:00 stderr F W1013 00:20:36.643370 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:36.644772599+00:00 stderr F I1013 00:20:36.644713 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.576656ms) 2025-10-13T00:20:37.613505862+00:00 stderr F I1013 00:20:37.613421 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:37.613766869+00:00 stderr F I1013 00:20:37.613723 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:37.613766869+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:37.613766869+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:37.613819961+00:00 stderr F I1013 00:20:37.613790 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (381.261µs) 2025-10-13T00:20:37.613819961+00:00 stderr F I1013 00:20:37.613814 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:37.613914223+00:00 stderr F I1013 00:20:37.613862 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:37.613928614+00:00 stderr F I1013 00:20:37.613919 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:37.614252643+00:00 stderr F I1013 00:20:37.614198 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:37.659575985+00:00 stderr F W1013 00:20:37.659512 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:37.660851380+00:00 stderr F I1013 00:20:37.660818 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.002399ms) 2025-10-13T00:20:38.614466839+00:00 stderr F I1013 00:20:38.614395 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:38.614658664+00:00 stderr F I1013 00:20:38.614632 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:38.614658664+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:38.614658664+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:38.614700346+00:00 stderr F I1013 00:20:38.614681 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (298.078µs) 2025-10-13T00:20:38.614708146+00:00 stderr F I1013 00:20:38.614698 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:38.614766358+00:00 stderr F I1013 00:20:38.614737 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:38.614810529+00:00 stderr F I1013 00:20:38.614786 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:38.615053456+00:00 stderr F I1013 00:20:38.615008 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:38.644947194+00:00 stderr F W1013 00:20:38.644875 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:38.646273822+00:00 stderr F I1013 00:20:38.646229 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.528425ms) 2025-10-13T00:20:39.615081027+00:00 stderr F I1013 00:20:39.614990 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:39.615298103+00:00 stderr F I1013 00:20:39.615272 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:39.615298103+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:39.615298103+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:39.615445437+00:00 stderr F I1013 00:20:39.615402 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (424.512µs) 2025-10-13T00:20:39.615445437+00:00 stderr F I1013 00:20:39.615425 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:39.615500719+00:00 stderr F I1013 00:20:39.615470 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:39.615561370+00:00 stderr F I1013 00:20:39.615527 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:39.615826368+00:00 stderr F I1013 00:20:39.615777 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:39.654056651+00:00 stderr F W1013 00:20:39.653977 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:39.655810730+00:00 stderr F I1013 00:20:39.655783 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.356393ms) 2025-10-13T00:20:40.615581099+00:00 stderr F I1013 00:20:40.615503 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:40.615790055+00:00 stderr F I1013 00:20:40.615742 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:40.615790055+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:40.615790055+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:40.615903448+00:00 stderr F I1013 00:20:40.615886 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (395.121µs) 2025-10-13T00:20:40.615935299+00:00 stderr F I1013 00:20:40.615925 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:40.616001771+00:00 stderr F I1013 00:20:40.615983 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:40.616057983+00:00 stderr F I1013 00:20:40.616048 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:40.616299879+00:00 stderr F I1013 00:20:40.616274 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:40.642501765+00:00 stderr F W1013 00:20:40.642439 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:40.644136351+00:00 stderr F I1013 00:20:40.644054 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.126859ms) 2025-10-13T00:20:41.616734822+00:00 stderr F I1013 00:20:41.616182 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:41.617229826+00:00 stderr F I1013 00:20:41.617196 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:41.617229826+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:41.617229826+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:41.617441581+00:00 stderr F I1013 00:20:41.617409 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.238954ms) 2025-10-13T00:20:41.617513293+00:00 stderr F I1013 00:20:41.617491 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:41.617636147+00:00 stderr F I1013 00:20:41.617601 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:41.617753740+00:00 stderr F I1013 00:20:41.617727 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:41.618212283+00:00 stderr F I1013 00:20:41.618159 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:41.648134003+00:00 stderr F W1013 00:20:41.648092 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:41.649485921+00:00 stderr F I1013 00:20:41.649465 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.974897ms) 2025-10-13T00:20:42.617695019+00:00 stderr F I1013 00:20:42.617384 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:42.618000728+00:00 stderr F I1013 00:20:42.617984 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:42.618000728+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:42.618000728+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:42.618073060+00:00 stderr F I1013 00:20:42.618059 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (686.879µs) 2025-10-13T00:20:42.618104731+00:00 stderr F I1013 00:20:42.618094 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:42.618170283+00:00 stderr F I1013 00:20:42.618153 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:42.618225614+00:00 stderr F I1013 00:20:42.618215 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:42.618461821+00:00 stderr F I1013 00:20:42.618436 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:42.638105452+00:00 stderr F W1013 00:20:42.638040 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:42.640905590+00:00 stderr F I1013 00:20:42.640864 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.764389ms) 2025-10-13T00:20:43.619087828+00:00 stderr F I1013 00:20:43.619038 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:43.619414137+00:00 stderr F I1013 00:20:43.619399 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:43.619414137+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:43.619414137+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:43.619509570+00:00 stderr F I1013 00:20:43.619494 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (468.283µs) 2025-10-13T00:20:43.619542181+00:00 stderr F I1013 00:20:43.619530 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:43.619606793+00:00 stderr F I1013 00:20:43.619589 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:43.619662214+00:00 stderr F I1013 00:20:43.619652 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:43.619880660+00:00 stderr F I1013 00:20:43.619856 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:43.649157932+00:00 stderr F W1013 00:20:43.649112 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:43.650581512+00:00 stderr F I1013 00:20:43.650557 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.025901ms) 2025-10-13T00:20:44.454696666+00:00 stderr F I1013 00:20:44.454652 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:44.454886701+00:00 stderr F I1013 00:20:44.454861 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:44.454948873+00:00 stderr F I1013 00:20:44.454938 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:44.455196810+00:00 stderr F I1013 00:20:44.455169 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:44.482531377+00:00 stderr F W1013 00:20:44.482488 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:44.483908286+00:00 stderr F I1013 00:20:44.483884 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.239901ms) 2025-10-13T00:20:44.619808350+00:00 stderr F I1013 00:20:44.619762 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:44.620077537+00:00 stderr F I1013 00:20:44.620062 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:44.620077537+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:20:44.620077537+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:44.620160430+00:00 stderr F I1013 00:20:44.620146 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (409.921µs) 2025-10-13T00:20:44.620191690+00:00 stderr F I1013 00:20:44.620182 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:44.620258702+00:00 stderr F I1013 00:20:44.620240 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:44.620314574+00:00 stderr F I1013 00:20:44.620305 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:44.620583051+00:00 stderr F I1013 00:20:44.620550 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:44.654527944+00:00 stderr F W1013 00:20:44.654469 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:44.656273523+00:00 stderr F I1013 00:20:44.656246 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.056522ms) 2025-10-13T00:20:45.621267810+00:00 stderr F I1013 00:20:45.621145 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:45.621630450+00:00 stderr F I1013 00:20:45.621572 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:45.621630450+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:45.621630450+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:45.621699192+00:00 stderr F I1013 00:20:45.621663 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (533.755µs) 2025-10-13T00:20:45.621699192+00:00 stderr F I1013 00:20:45.621690 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:45.621834456+00:00 stderr F I1013 00:20:45.621782 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:45.621884968+00:00 stderr F I1013 00:20:45.621858 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:45.622275569+00:00 stderr F I1013 00:20:45.622206 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:45.672569160+00:00 stderr F W1013 00:20:45.672466 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:45.676754287+00:00 stderr F I1013 00:20:45.676691 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.992593ms) 2025-10-13T00:20:46.622716682+00:00 stderr F I1013 00:20:46.622650 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:46.623164985+00:00 stderr F I1013 00:20:46.623134 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:46.623164985+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:46.623164985+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:46.623296918+00:00 stderr F I1013 00:20:46.623269 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (632.187µs) 2025-10-13T00:20:46.623386891+00:00 stderr F I1013 00:20:46.623363 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:46.623586516+00:00 stderr F I1013 00:20:46.623501 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:46.623708370+00:00 stderr F I1013 00:20:46.623685 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:46.624213244+00:00 stderr F I1013 00:20:46.624152 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:46.649709279+00:00 stderr F W1013 00:20:46.649653 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:46.651008836+00:00 stderr F I1013 00:20:46.650950 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.588994ms) 2025-10-13T00:20:47.624220733+00:00 stderr F I1013 00:20:47.624156 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:47.624491041+00:00 stderr F I1013 00:20:47.624453 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:47.624491041+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:47.624491041+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:47.624512382+00:00 stderr F I1013 00:20:47.624501 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (372.2µs) 2025-10-13T00:20:47.624532252+00:00 stderr F I1013 00:20:47.624513 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:47.624576973+00:00 stderr F I1013 00:20:47.624547 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:47.624608524+00:00 stderr F I1013 00:20:47.624591 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:47.624817280+00:00 stderr F I1013 00:20:47.624771 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:47.651946901+00:00 stderr F W1013 00:20:47.651885 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:47.653250718+00:00 stderr F I1013 00:20:47.653208 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.690845ms) 2025-10-13T00:20:48.625358475+00:00 stderr F I1013 00:20:48.625253 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:48.625521410+00:00 stderr F I1013 00:20:48.625485 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:48.625521410+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:48.625521410+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:48.625571261+00:00 stderr F I1013 00:20:48.625544 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (301.458µs) 2025-10-13T00:20:48.625571261+00:00 stderr F I1013 00:20:48.625564 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:48.625638493+00:00 stderr F I1013 00:20:48.625611 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:48.625675804+00:00 stderr F I1013 00:20:48.625658 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:48.625892240+00:00 stderr F I1013 00:20:48.625849 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:48.659043060+00:00 stderr F W1013 00:20:48.658982 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:48.660868762+00:00 stderr F I1013 00:20:48.660809 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.243859ms) 2025-10-13T00:20:49.626572771+00:00 stderr F I1013 00:20:49.626502 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:49.627472196+00:00 stderr F I1013 00:20:49.627427 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:49.627472196+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:49.627472196+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:49.627763744+00:00 stderr F I1013 00:20:49.627727 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.261986ms) 2025-10-13T00:20:49.627874497+00:00 stderr F I1013 00:20:49.627812 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:49.628044392+00:00 stderr F I1013 00:20:49.627954 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:49.628044392+00:00 stderr F I1013 00:20:49.628037 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:49.628560237+00:00 stderr F I1013 00:20:49.628466 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:49.651729177+00:00 stderr F W1013 00:20:49.651629 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:49.655181973+00:00 stderr F I1013 00:20:49.655068 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.264445ms) 2025-10-13T00:20:50.629051360+00:00 stderr F I1013 00:20:50.628596 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:50.629350538+00:00 stderr F I1013 00:20:50.629293 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:50.629350538+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:50.629350538+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:50.629419650+00:00 stderr F I1013 00:20:50.629386 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (803.102µs) 2025-10-13T00:20:50.629419650+00:00 stderr F I1013 00:20:50.629411 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:50.629487602+00:00 stderr F I1013 00:20:50.629457 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:50.629541503+00:00 stderr F I1013 00:20:50.629514 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:50.629825591+00:00 stderr F I1013 00:20:50.629777 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:50.649992087+00:00 stderr F W1013 00:20:50.649935 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:50.651282224+00:00 stderr F I1013 00:20:50.651246 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.835912ms) 2025-10-13T00:20:51.629864333+00:00 stderr F I1013 00:20:51.629718 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:51.630058038+00:00 stderr F I1013 00:20:51.629974 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:51.630058038+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:51.630058038+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:51.630058038+00:00 stderr F I1013 00:20:51.630019 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (333.829µs) 2025-10-13T00:20:51.630058038+00:00 stderr F I1013 00:20:51.630032 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:51.630091609+00:00 stderr F I1013 00:20:51.630073 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:51.630171101+00:00 stderr F I1013 00:20:51.630115 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:51.630419808+00:00 stderr F I1013 00:20:51.630310 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:51.650709818+00:00 stderr F W1013 00:20:51.650609 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:51.652024985+00:00 stderr F I1013 00:20:51.651957 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.922165ms) 2025-10-13T00:20:52.630749848+00:00 stderr F I1013 00:20:52.630116 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:52.631093568+00:00 stderr F I1013 00:20:52.631040 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:52.631093568+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:52.631093568+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:52.631263553+00:00 stderr F I1013 00:20:52.631222 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.122962ms) 2025-10-13T00:20:52.631271793+00:00 stderr F I1013 00:20:52.631261 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:52.631432608+00:00 stderr F I1013 00:20:52.631384 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:52.631501520+00:00 stderr F I1013 00:20:52.631478 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:52.632502368+00:00 stderr F I1013 00:20:52.632444 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:52.675704670+00:00 stderr F W1013 00:20:52.675619 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:52.678831848+00:00 stderr F I1013 00:20:52.678764 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.496512ms) 2025-10-13T00:20:53.632480187+00:00 stderr F I1013 00:20:53.632316 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:53.632903529+00:00 stderr F I1013 00:20:53.632823 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:53.632903529+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:53.632903529+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:53.632936940+00:00 stderr F I1013 00:20:53.632916 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (617.527µs) 2025-10-13T00:20:53.632955920+00:00 stderr F I1013 00:20:53.632938 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:53.633047843+00:00 stderr F I1013 00:20:53.632998 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:53.633167806+00:00 stderr F I1013 00:20:53.633087 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:53.633581148+00:00 stderr F I1013 00:20:53.633507 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:53.672863230+00:00 stderr F W1013 00:20:53.672742 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:53.674138296+00:00 stderr F I1013 00:20:53.674076 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.137124ms) 2025-10-13T00:20:54.633529316+00:00 stderr F I1013 00:20:54.633444 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:54.634715809+00:00 stderr F I1013 00:20:54.634673 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:54.634715809+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:54.634715809+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:54.634743440+00:00 stderr F I1013 00:20:54.634723 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.291696ms) 2025-10-13T00:20:54.634743440+00:00 stderr F I1013 00:20:54.634736 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:54.634808722+00:00 stderr F I1013 00:20:54.634771 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:54.634832653+00:00 stderr F I1013 00:20:54.634818 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:54.635576303+00:00 stderr F I1013 00:20:54.635520 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:54.663288691+00:00 stderr F W1013 00:20:54.663220 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:54.664682490+00:00 stderr F I1013 00:20:54.664627 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.888118ms) 2025-10-13T00:20:55.635686876+00:00 stderr F I1013 00:20:55.635593 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:55.635912923+00:00 stderr F I1013 00:20:55.635858 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:55.635912923+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:55.635912923+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:55.635956854+00:00 stderr F I1013 00:20:55.635906 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (325.959µs) 2025-10-13T00:20:55.635956854+00:00 stderr F I1013 00:20:55.635921 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:55.635973644+00:00 stderr F I1013 00:20:55.635953 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:55.636028946+00:00 stderr F I1013 00:20:55.635994 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:55.636250942+00:00 stderr F I1013 00:20:55.636192 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:55.678460487+00:00 stderr F W1013 00:20:55.678374 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:55.679824335+00:00 stderr F I1013 00:20:55.679784 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.860191ms) 2025-10-13T00:20:56.636357766+00:00 stderr F I1013 00:20:56.636268 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:56.636627013+00:00 stderr F I1013 00:20:56.636591 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:56.636627013+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:56.636627013+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:56.636686545+00:00 stderr F I1013 00:20:56.636653 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (396.131µs) 2025-10-13T00:20:56.636686545+00:00 stderr F I1013 00:20:56.636676 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:56.636758227+00:00 stderr F I1013 00:20:56.636720 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:56.636794928+00:00 stderr F I1013 00:20:56.636776 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:56.637063916+00:00 stderr F I1013 00:20:56.637014 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:56.672260803+00:00 stderr F W1013 00:20:56.672198 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:56.676803911+00:00 stderr F I1013 00:20:56.676717 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.031863ms) 2025-10-13T00:20:57.637592591+00:00 stderr F I1013 00:20:57.637503 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:57.637819797+00:00 stderr F I1013 00:20:57.637767 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:57.637819797+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:57.637819797+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:57.637839498+00:00 stderr F I1013 00:20:57.637828 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (337.559µs) 2025-10-13T00:20:57.637854808+00:00 stderr F I1013 00:20:57.637843 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:57.637929080+00:00 stderr F I1013 00:20:57.637886 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:57.637967251+00:00 stderr F I1013 00:20:57.637939 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:57.638262770+00:00 stderr F I1013 00:20:57.638201 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:57.666280996+00:00 stderr F W1013 00:20:57.666201 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:57.668036365+00:00 stderr F I1013 00:20:57.667983 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.135455ms) 2025-10-13T00:20:58.638203317+00:00 stderr F I1013 00:20:58.638095 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:58.638700701+00:00 stderr F I1013 00:20:58.638651 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:58.638700701+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:20:58.638700701+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:58.638798793+00:00 stderr F I1013 00:20:58.638754 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (685.219µs) 2025-10-13T00:20:58.638839305+00:00 stderr F I1013 00:20:58.638814 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:58.638947368+00:00 stderr F I1013 00:20:58.638896 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:58.639043170+00:00 stderr F I1013 00:20:58.639009 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:58.639633197+00:00 stderr F I1013 00:20:58.639542 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:58.661600093+00:00 stderr F W1013 00:20:58.661519 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:58.663826676+00:00 stderr F I1013 00:20:58.663764 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.94618ms) 2025-10-13T00:20:59.455869931+00:00 stderr F I1013 00:20:59.455770 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:59.456197930+00:00 stderr F I1013 00:20:59.456154 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:59.456317933+00:00 stderr F I1013 00:20:59.456300 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:59.456771586+00:00 stderr F I1013 00:20:59.456720 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:59.478035523+00:00 stderr F W1013 00:20:59.477940 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:59.480312547+00:00 stderr F I1013 00:20:59.480274 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.550999ms)
2025-10-13T00:20:59.639843293+00:00 stderr F I1013 00:20:59.639780 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:20:59.640115891+00:00 stderr F I1013 00:20:59.640080 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:20:59.640115891+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:20:59.640115891+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:20:59.640153662+00:00 stderr F I1013 00:20:59.640140 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (373.62µs) 2025-10-13T00:20:59.640164442+00:00 stderr F I1013 00:20:59.640157 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:20:59.640233964+00:00 stderr F I1013 00:20:59.640200 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:20:59.640348397+00:00 stderr F I1013 00:20:59.640274 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:20:59.640621255+00:00 stderr F I1013 00:20:59.640554 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:20:59.668093516+00:00 stderr F W1013 00:20:59.668017 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:20:59.669906266+00:00 stderr F I1013 00:20:59.669861 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.701753ms) 2025-10-13T00:21:00.640865682+00:00 stderr F I1013 00:21:00.640394 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:00.641156580+00:00 stderr F I1013 00:21:00.641141 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:00.641156580+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:00.641156580+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:00.641239823+00:00 stderr F I1013 00:21:00.641220 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (836.924µs) 2025-10-13T00:21:00.641274184+00:00 stderr F I1013 00:21:00.641263 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:00.641358366+00:00 stderr F I1013 00:21:00.641320 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:00.641432298+00:00 stderr F I1013 00:21:00.641418 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:00.641698835+00:00 stderr F I1013 00:21:00.641669 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:00.686646817+00:00 stderr F W1013 00:21:00.686604 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:00.688110278+00:00 stderr F I1013 00:21:00.688086 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.820484ms) 2025-10-13T00:21:01.641460787+00:00 stderr F I1013 00:21:01.641412 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:01.641827068+00:00 stderr F I1013 00:21:01.641807 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:01.641827068+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:01.641827068+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:01.641923120+00:00 stderr F I1013 00:21:01.641903 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (501.794µs) 2025-10-13T00:21:01.641964412+00:00 stderr F I1013 00:21:01.641950 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:01.642049024+00:00 stderr F I1013 00:21:01.642026 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:01.642118206+00:00 stderr F I1013 00:21:01.642104 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:01.642453155+00:00 stderr F I1013 00:21:01.642403 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:01.671256594+00:00 stderr F W1013 00:21:01.671212 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:01.672638892+00:00 stderr F I1013 00:21:01.672616 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.665241ms) 2025-10-13T00:21:02.642836096+00:00 stderr F I1013 00:21:02.642724 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:02.643162785+00:00 stderr F I1013 00:21:02.643103 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:02.643162785+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:02.643162785+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:02.643222967+00:00 stderr F I1013 00:21:02.643185 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (476.444µs) 2025-10-13T00:21:02.643222967+00:00 stderr F I1013 00:21:02.643212 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:02.643372431+00:00 stderr F I1013 00:21:02.643282 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:02.643416032+00:00 stderr F I1013 00:21:02.643383 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:02.643830254+00:00 stderr F I1013 00:21:02.643745 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:02.665833441+00:00 stderr F W1013 00:21:02.665771 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:02.667696914+00:00 stderr F I1013 00:21:02.667641 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.427646ms) 2025-10-13T00:21:03.643733361+00:00 stderr F I1013 00:21:03.643668 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:03.643935307+00:00 stderr F I1013 00:21:03.643901 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:03.643935307+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:03.643935307+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:03.643979028+00:00 stderr F I1013 00:21:03.643951 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (295.728µs) 2025-10-13T00:21:03.643979028+00:00 stderr F I1013 00:21:03.643969 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:03.644035660+00:00 stderr F I1013 00:21:03.644009 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:03.644074421+00:00 stderr F I1013 00:21:03.644055 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:03.644295277+00:00 stderr F I1013 00:21:03.644255 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:03.673729123+00:00 stderr F W1013 00:21:03.673672 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:03.675640657+00:00 stderr F I1013 00:21:03.675614 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.641658ms) 2025-10-13T00:21:04.644890354+00:00 stderr F I1013 00:21:04.644823 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:04.645091760+00:00 stderr F I1013 00:21:04.645059 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:04.645091760+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:04.645091760+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:04.645132051+00:00 stderr F I1013 00:21:04.645107 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (295.898µs) 2025-10-13T00:21:04.645132051+00:00 stderr F I1013 00:21:04.645126 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:04.645197703+00:00 stderr F I1013 00:21:04.645165 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:04.645228324+00:00 stderr F I1013 00:21:04.645211 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:04.645513252+00:00 stderr F I1013 00:21:04.645449 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:04.679855305+00:00 stderr F W1013 00:21:04.679775 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:04.681093320+00:00 stderr F I1013 00:21:04.681044 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.917787ms) 2025-10-13T00:21:05.645817009+00:00 stderr F I1013 00:21:05.645720 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:05.646105158+00:00 stderr F I1013 00:21:05.646020 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:05.646105158+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:05.646105158+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:05.646105158+00:00 stderr F I1013 00:21:05.646083 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (375.621µs) 2025-10-13T00:21:05.646156049+00:00 stderr F I1013 00:21:05.646100 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:05.646175850+00:00 stderr F I1013 00:21:05.646146 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:05.646238271+00:00 stderr F I1013 00:21:05.646197 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:05.646537950+00:00 stderr F I1013 00:21:05.646461 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:05.674310289+00:00 stderr F W1013 00:21:05.674248 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:05.677355834+00:00 stderr F I1013 00:21:05.677279 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.173285ms) 2025-10-13T00:21:06.646945382+00:00 stderr F I1013 00:21:06.646865 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:06.647161948+00:00 stderr F I1013 00:21:06.647121 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:06.647161948+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:06.647161948+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:06.647178928+00:00 stderr F I1013 00:21:06.647167 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (314.879µs) 2025-10-13T00:21:06.647189119+00:00 stderr F I1013 00:21:06.647180 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:06.647243960+00:00 stderr F I1013 00:21:06.647212 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:06.647285411+00:00 stderr F I1013 00:21:06.647263 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:06.647613240+00:00 stderr F I1013 00:21:06.647549 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:06.679525806+00:00 stderr F W1013 00:21:06.679422 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:06.682746756+00:00 stderr F I1013 00:21:06.682672 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.480836ms) 2025-10-13T00:21:07.648214597+00:00 stderr F I1013 00:21:07.648165 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:07.648440964+00:00 stderr F I1013 00:21:07.648419 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:07.648440964+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:07.648440964+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:07.648478825+00:00 stderr F I1013 00:21:07.648465 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (311.479µs) 2025-10-13T00:21:07.648486365+00:00 stderr F I1013 00:21:07.648479 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:07.648548507+00:00 stderr F I1013 00:21:07.648522 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:07.648582548+00:00 stderr F I1013 00:21:07.648572 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:07.648785893+00:00 stderr F I1013 00:21:07.648756 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:07.668422154+00:00 stderr F W1013 00:21:07.668384 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:07.669642429+00:00 stderr F I1013 00:21:07.669624 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.143733ms) 2025-10-13T00:21:08.649249666+00:00 stderr F I1013 00:21:08.649187 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:08.649806031+00:00 stderr F I1013 00:21:08.649758 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:08.649806031+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:08.649806031+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:08.650006097+00:00 stderr F I1013 00:21:08.649970 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (793.782µs) 2025-10-13T00:21:08.650119600+00:00 stderr F I1013 00:21:08.650035 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:08.650177212+00:00 stderr F I1013 00:21:08.650132 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:08.650193412+00:00 stderr F I1013 00:21:08.650184 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:08.650512941+00:00 stderr F I1013 00:21:08.650414 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:08.678729793+00:00 stderr F W1013 00:21:08.678671 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:08.680268866+00:00 stderr F I1013 00:21:08.680208 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.181727ms) 2025-10-13T00:21:09.650773358+00:00 stderr F I1013 00:21:09.650704 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:09.651053446+00:00 stderr F I1013 00:21:09.651015 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:09.651053446+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:09.651053446+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:09.651105368+00:00 stderr F I1013 00:21:09.651079 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (385.751µs) 2025-10-13T00:21:09.651105368+00:00 stderr F I1013 00:21:09.651100 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:09.651175120+00:00 stderr F I1013 00:21:09.651146 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:09.651274702+00:00 stderr F I1013 00:21:09.651204 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:09.651574071+00:00 stderr F I1013 00:21:09.651522 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:09.671975933+00:00 stderr F W1013 00:21:09.671920 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:09.673309041+00:00 stderr F I1013 00:21:09.673270 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.169612ms) 2025-10-13T00:21:10.651599609+00:00 stderr F I1013 00:21:10.651482 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:10.652042401+00:00 stderr F I1013 00:21:10.651969 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:10.652042401+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:10.652042401+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:10.652098563+00:00 stderr F I1013 00:21:10.652061 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (602.047µs) 2025-10-13T00:21:10.652126063+00:00 stderr F I1013 00:21:10.652095 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:10.652213276+00:00 stderr F I1013 00:21:10.652167 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:10.652307818+00:00 stderr F I1013 00:21:10.652262 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:10.653505520+00:00 stderr F I1013 00:21:10.653318 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:10.689466388+00:00 stderr F W1013 00:21:10.689407 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:10.692721845+00:00 stderr F I1013 00:21:10.692672 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.572272ms) 2025-10-13T00:21:11.652706761+00:00 stderr F I1013 00:21:11.652644 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:11.652975429+00:00 stderr F I1013 00:21:11.652944 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:11.652975429+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:11.652975429+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:11.653037880+00:00 stderr F I1013 00:21:11.653002 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (370.37µs) 2025-10-13T00:21:11.653037880+00:00 stderr F I1013 00:21:11.653024 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:11.653140053+00:00 stderr F I1013 00:21:11.653091 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:11.653183704+00:00 stderr F I1013 00:21:11.653164 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:11.653505023+00:00 stderr F I1013 00:21:11.653462 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:11.674305642+00:00 stderr F W1013 00:21:11.674222 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:11.678999778+00:00 stderr F I1013 00:21:11.678763 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.719705ms) 2025-10-13T00:21:12.653184606+00:00 stderr F I1013 00:21:12.653135 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:12.653489334+00:00 stderr F I1013 00:21:12.653473 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:12.653489334+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:12.653489334+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:12.653571776+00:00 stderr F I1013 00:21:12.653557 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (434.552µs) 2025-10-13T00:21:12.653619917+00:00 stderr F I1013 00:21:12.653609 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:12.653706650+00:00 stderr F I1013 00:21:12.653687 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:12.653776642+00:00 stderr F I1013 00:21:12.653766 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:12.654062309+00:00 stderr F I1013 00:21:12.654034 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:12.682028831+00:00 stderr F W1013 00:21:12.681990 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:12.683422419+00:00 stderr F I1013 00:21:12.683399 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.787151ms) 2025-10-13T00:21:13.654454113+00:00 stderr F I1013 00:21:13.654396 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:13.654629248+00:00 stderr F I1013 00:21:13.654610 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:13.654629248+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:13.654629248+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:13.654686679+00:00 stderr F I1013 00:21:13.654663 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (277.767µs) 2025-10-13T00:21:13.654694989+00:00 stderr F I1013 00:21:13.654684 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:13.654765591+00:00 stderr F I1013 00:21:13.654741 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:13.654809292+00:00 stderr F I1013 00:21:13.654794 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:13.655087870+00:00 stderr F I1013 00:21:13.655038 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:13.692712792+00:00 stderr F W1013 00:21:13.692657 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:13.695584619+00:00 stderr F I1013 00:21:13.695548 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.861309ms) 2025-10-13T00:21:14.455314770+00:00 stderr F I1013 00:21:14.455251 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:14.455374581+00:00 stderr F I1013 00:21:14.455354 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:14.455470324+00:00 stderr F I1013 00:21:14.455404 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:14.455655509+00:00 stderr F I1013 00:21:14.455613 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:14.477670211+00:00 stderr F W1013 00:21:14.477626 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:14.479064038+00:00 stderr F I1013 00:21:14.479041 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.79735ms) 2025-10-13T00:21:14.654930658+00:00 stderr F I1013 00:21:14.654832 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:14.655208285+00:00 stderr F I1013 00:21:14.655173 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:14.655208285+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:21:14.655208285+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:14.655289067+00:00 stderr F I1013 00:21:14.655259 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (440.542µs) 2025-10-13T00:21:14.655360159+00:00 stderr F I1013 00:21:14.655286 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:14.655427931+00:00 stderr F I1013 00:21:14.655392 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:14.655490393+00:00 stderr F I1013 00:21:14.655459 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:14.655773881+00:00 stderr F I1013 00:21:14.655721 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:14.678433200+00:00 stderr F W1013 00:21:14.678380 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:14.679847508+00:00 stderr F I1013 00:21:14.679807 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.51979ms) 2025-10-13T00:21:15.655525325+00:00 stderr F I1013 00:21:15.655466 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:15.655747261+00:00 stderr F I1013 00:21:15.655722 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:15.655747261+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:15.655747261+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:15.655811643+00:00 stderr F I1013 00:21:15.655781 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (326.139µs) 2025-10-13T00:21:15.655820343+00:00 stderr F I1013 00:21:15.655814 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:15.655908525+00:00 stderr F I1013 00:21:15.655873 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:15.655943846+00:00 stderr F I1013 00:21:15.655930 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:15.656187913+00:00 stderr F I1013 00:21:15.656146 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:15.703474384+00:00 stderr F W1013 00:21:15.699779 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:15.703474384+00:00 stderr F I1013 00:21:15.702037 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.218162ms) 2025-10-13T00:21:16.656715819+00:00 stderr F I1013 00:21:16.656593 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:16.656942465+00:00 stderr F I1013 00:21:16.656890 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:16.656942465+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:16.656942465+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:16.656963785+00:00 stderr F I1013 00:21:16.656947 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (368.89µs) 2025-10-13T00:21:16.656979376+00:00 stderr F I1013 00:21:16.656965 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:16.657071678+00:00 stderr F I1013 00:21:16.657013 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:16.657092839+00:00 stderr F I1013 00:21:16.657073 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:16.657504110+00:00 stderr F I1013 00:21:16.657376 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:16.703218149+00:00 stderr F W1013 00:21:16.703144 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:16.705149451+00:00 stderr F I1013 00:21:16.705077 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.108724ms) 2025-10-13T00:21:17.657533143+00:00 stderr F I1013 00:21:17.657456 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:17.658564540+00:00 stderr F I1013 00:21:17.658534 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:17.658564540+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:17.658564540+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:17.658682364+00:00 stderr F I1013 00:21:17.658660 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.224593ms) 2025-10-13T00:21:17.658729055+00:00 stderr F I1013 00:21:17.658714 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:17.658833018+00:00 stderr F I1013 00:21:17.658804 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:17.658922330+00:00 stderr F I1013 00:21:17.658907 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:17.659889296+00:00 stderr F I1013 00:21:17.659841 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:17.689768050+00:00 stderr F W1013 00:21:17.689667 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:17.693715116+00:00 stderr F I1013 00:21:17.693634 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.910409ms) 2025-10-13T00:21:18.658751048+00:00 stderr F I1013 00:21:18.658690 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:18.658967433+00:00 stderr F I1013 00:21:18.658942 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:18.658967433+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:18.658967433+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:18.659010845+00:00 stderr F I1013 00:21:18.658993 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (314.168µs) 2025-10-13T00:21:18.659018125+00:00 stderr F I1013 00:21:18.659011 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:18.659087197+00:00 stderr F I1013 00:21:18.659060 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:18.659122198+00:00 stderr F I1013 00:21:18.659107 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:18.659347204+00:00 stderr F I1013 00:21:18.659296 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:18.695291871+00:00 stderr F W1013 00:21:18.695165 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:18.696594485+00:00 stderr F I1013 00:21:18.696549 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.534969ms) 2025-10-13T00:21:19.659757747+00:00 stderr F I1013 00:21:19.659698 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:19.660092436+00:00 stderr F I1013 00:21:19.660067 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:19.660092436+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:19.660092436+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:19.660175258+00:00 stderr F I1013 00:21:19.660160 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (476.073µs) 2025-10-13T00:21:19.660205249+00:00 stderr F I1013 00:21:19.660195 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:19.660271701+00:00 stderr F I1013 00:21:19.660255 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:19.660373063+00:00 stderr F I1013 00:21:19.660319 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:19.660661271+00:00 stderr F I1013 00:21:19.660633 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:19.683110335+00:00 stderr F W1013 00:21:19.683039 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:19.687204985+00:00 stderr F I1013 00:21:19.687155 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.950844ms) 2025-10-13T00:21:20.660500448+00:00 stderr F I1013 00:21:20.660418 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:20.660727464+00:00 stderr F I1013 00:21:20.660688 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:20.660727464+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:20.660727464+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:20.660774886+00:00 stderr F I1013 00:21:20.660745 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (340.319µs) 2025-10-13T00:21:20.660774886+00:00 stderr F I1013 00:21:20.660768 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:20.660855478+00:00 stderr F I1013 00:21:20.660816 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:20.660894579+00:00 stderr F I1013 00:21:20.660873 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:20.661183677+00:00 stderr F I1013 00:21:20.661130 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:20.686010564+00:00 stderr F W1013 00:21:20.685946 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:20.687771282+00:00 stderr F I1013 00:21:20.687731 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.960435ms) 2025-10-13T00:21:21.661478377+00:00 stderr F I1013 00:21:21.661380 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:21.661758235+00:00 stderr F I1013 00:21:21.661702 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:21.661758235+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:21.661758235+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:21.661781355+00:00 stderr F I1013 00:21:21.661765 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (399.771µs) 2025-10-13T00:21:21.661797046+00:00 stderr F I1013 00:21:21.661782 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:21.661876758+00:00 stderr F I1013 00:21:21.661831 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:21.661919599+00:00 stderr F I1013 00:21:21.661892 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:21.662228907+00:00 stderr F I1013 00:21:21.662145 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:21.701746720+00:00 stderr F W1013 00:21:21.701671 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:21.703725873+00:00 stderr F I1013 00:21:21.703659 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.873056ms) 2025-10-13T00:21:22.662711052+00:00 stderr F I1013 00:21:22.662618 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:22.662934448+00:00 stderr F I1013 00:21:22.662897 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:22.662934448+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:22.662934448+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:22.662992369+00:00 stderr F I1013 00:21:22.662961 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (355.119µs) 2025-10-13T00:21:22.662992369+00:00 stderr F I1013 00:21:22.662984 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:22.663057101+00:00 stderr F I1013 00:21:22.663033 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:22.663106522+00:00 stderr F I1013 00:21:22.663086 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:22.664213312+00:00 stderr F I1013 00:21:22.663384 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:22.688079484+00:00 stderr F W1013 00:21:22.688034 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:22.689510822+00:00 stderr F I1013 00:21:22.689461 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.474042ms) 2025-10-13T00:21:23.663518204+00:00 stderr F I1013 00:21:23.663443 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:23.663695668+00:00 stderr F I1013 00:21:23.663660 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:23.663695668+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:23.663695668+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:23.663731839+00:00 stderr F I1013 00:21:23.663705 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (289.818µs) 2025-10-13T00:21:23.663731839+00:00 stderr F I1013 00:21:23.663720 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:23.663792171+00:00 stderr F I1013 00:21:23.663756 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:23.663850013+00:00 stderr F I1013 00:21:23.663814 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:23.664048918+00:00 stderr F I1013 00:21:23.664003 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:23.684837887+00:00 stderr F W1013 00:21:23.684782 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:23.686161083+00:00 stderr F I1013 00:21:23.686113 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.391402ms) 2025-10-13T00:21:24.664573355+00:00 stderr F I1013 00:21:24.664508 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:24.665440408+00:00 stderr F I1013 00:21:24.665393 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:24.665440408+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:24.665440408+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:24.665640963+00:00 stderr F I1013 00:21:24.665590 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.09202ms) 2025-10-13T00:21:24.665753757+00:00 stderr F I1013 00:21:24.665687 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:24.665777327+00:00 stderr F I1013 00:21:24.665751 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:24.665830229+00:00 stderr F I1013 00:21:24.665795 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:24.666127297+00:00 stderr F I1013 00:21:24.665993 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:24.697266614+00:00 stderr F W1013 00:21:24.697187 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:24.700114921+00:00 stderr F I1013 00:21:24.700072 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.380944ms) 2025-10-13T00:21:25.665747068+00:00 stderr F I1013 00:21:25.665649 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:25.665836531+00:00 stderr F I1013 00:21:25.665786 1 availableupdates.go:70] Retrieving available updates again, because more than 2m57.097487985s has elapsed since 2025-10-13T00:18:28Z 2025-10-13T00:21:25.671800311+00:00 stderr F I1013 00:21:25.671724 1 cincinnati.go:114] Using a root CA pool with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2025-10-13T00:21:25.911167208+00:00 stderr F I1013 00:21:25.911103 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:25.911167208+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
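
The cincinnati.go entries above show the CVO refreshing its update graph: more than roughly three minutes had elapsed, so it re-queried https://api.openshift.com/api/upgrades_info/v1/graph for channel stable-4.16. The same graph can be fetched outside the cluster for comparison. A minimal sketch, assuming curl with outbound access to api.openshift.com; the Accept header and dropping the cluster-specific id parameter are assumptions, while the arch and channel values are taken from the log line itself:

  # Fetch the stable-4.16 amd64 update graph that the CVO is polling (JSON response assumed)
  curl -s -H 'Accept: application/json' \
    'https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16'

The response should include the 4.16.49 node along with the conditional edges whose risks (such as NMStateServiceFailure) the operator keeps trying to evaluate in the entries that follow.
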
2025-10-13T00:21:25.911167208+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:25.911210069+00:00 stderr F I1013 00:21:25.911197 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (245.574354ms) 2025-10-13T00:21:25.911217969+00:00 stderr F I1013 00:21:25.911210 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:25.911277571+00:00 stderr F I1013 00:21:25.911247 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:25.911305812+00:00 stderr F I1013 00:21:25.911288 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:25.911524238+00:00 stderr F I1013 00:21:25.911482 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:25.931505245+00:00 stderr F W1013 00:21:25.931441 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:25.932856531+00:00 stderr F I1013 00:21:25.932816 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.603541ms) 2025-10-13T00:21:26.911850588+00:00 stderr F I1013 00:21:26.911781 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:26.912409172+00:00 stderr F I1013 00:21:26.912380 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:26.912409172+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:26.912409172+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:26.912542096+00:00 stderr F I1013 00:21:26.912502 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (726.79µs) 2025-10-13T00:21:26.912550366+00:00 stderr F I1013 00:21:26.912541 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:26.912685050+00:00 stderr F I1013 00:21:26.912659 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:26.912739451+00:00 stderr F I1013 00:21:26.912717 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:26.913126482+00:00 stderr F I1013 00:21:26.913069 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:26.937340673+00:00 stderr F W1013 00:21:26.937261 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:26.939035989+00:00 stderr F I1013 00:21:26.939012 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.466922ms) 2025-10-13T00:21:27.912612110+00:00 stderr F I1013 00:21:27.912544 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:27.913119384+00:00 stderr F I1013 00:21:27.913097 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:27.913119384+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:27.913119384+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:27.913188726+00:00 stderr F I1013 00:21:27.913172 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (641.127µs) 2025-10-13T00:21:27.913219056+00:00 stderr F I1013 00:21:27.913209 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:27.913283378+00:00 stderr F I1013 00:21:27.913266 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:27.913401851+00:00 stderr F I1013 00:21:27.913359 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:27.913636968+00:00 stderr F I1013 00:21:27.913610 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:27.942266618+00:00 stderr F W1013 00:21:27.942222 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:27.943713857+00:00 stderr F I1013 00:21:27.943691 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.47962ms) 2025-10-13T00:21:28.913670450+00:00 stderr F I1013 00:21:28.913609 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:28.913981579+00:00 stderr F I1013 00:21:28.913966 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:28.913981579+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:28.913981579+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:28.914049020+00:00 stderr F I1013 00:21:28.914036 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (440.352µs) 2025-10-13T00:21:28.914079141+00:00 stderr F I1013 00:21:28.914069 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:28.914154613+00:00 stderr F I1013 00:21:28.914137 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:28.914209445+00:00 stderr F I1013 00:21:28.914199 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:28.914476182+00:00 stderr F I1013 00:21:28.914449 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:28.935215000+00:00 stderr F W1013 00:21:28.935170 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:28.936601107+00:00 stderr F I1013 00:21:28.936578 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.506315ms) 2025-10-13T00:21:29.455893812+00:00 stderr F I1013 00:21:29.455410 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:29.455990244+00:00 stderr F I1013 00:21:29.455934 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:29.456060616+00:00 stderr F I1013 00:21:29.456023 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:29.456504008+00:00 stderr F I1013 00:21:29.456436 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:29.485058656+00:00 stderr F W1013 00:21:29.485002 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:29.486349231+00:00 stderr F I1013 00:21:29.486277 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.881451ms) 2025-10-13T00:21:29.914272207+00:00 stderr F I1013 00:21:29.914165 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:29.914738479+00:00 stderr F I1013 00:21:29.914661 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:29.914738479+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:21:29.914738479+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:29.914863653+00:00 stderr F I1013 00:21:29.914787 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (632.977µs) 2025-10-13T00:21:29.914863653+00:00 stderr F I1013 00:21:29.914833 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:29.914978356+00:00 stderr F I1013 00:21:29.914924 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:29.915081969+00:00 stderr F I1013 00:21:29.915022 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:29.915544141+00:00 stderr F I1013 00:21:29.915463 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:29.941559881+00:00 stderr F W1013 00:21:29.941512 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:29.944784638+00:00 stderr F I1013 00:21:29.944740 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.904564ms) 2025-10-13T00:21:30.915495172+00:00 stderr F I1013 00:21:30.915416 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:30.915777239+00:00 stderr F I1013 00:21:30.915738 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:30.915777239+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:53.941620466+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:53.941692498+00:00 stderr F I1013 00:21:53.941638 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (466.853µs) 2025-10-13T00:21:53.941692498+00:00 stderr F I1013 00:21:53.941660 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:53.941770270+00:00 stderr F I1013 00:21:53.941720 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:53.941839442+00:00 stderr F I1013 00:21:53.941790 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:53.942189742+00:00 stderr F I1013 00:21:53.942090 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:53.978096387+00:00 stderr F W1013 00:21:53.978014 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:53.981687914+00:00 stderr F I1013 00:21:53.981617 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.950595ms) 2025-10-13T00:21:54.941953536+00:00 stderr F I1013 00:21:54.941855 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:54.942163102+00:00 stderr F I1013 00:21:54.942120 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:54.942163102+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:54.942163102+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:54.942214043+00:00 stderr F I1013 00:21:54.942186 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (340.789µs) 2025-10-13T00:21:54.942214043+00:00 stderr F I1013 00:21:54.942209 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:54.942294865+00:00 stderr F I1013 00:21:54.942255 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:54.942351317+00:00 stderr F I1013 00:21:54.942311 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:54.942573113+00:00 stderr F I1013 00:21:54.942526 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:54.975095007+00:00 stderr F W1013 00:21:54.974997 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:54.976321000+00:00 stderr F I1013 00:21:54.976265 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.054385ms) 2025-10-13T00:21:55.943279793+00:00 stderr F I1013 00:21:55.943170 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:55.943478179+00:00 stderr F I1013 00:21:55.943425 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:55.943478179+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:55.943478179+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:55.943478179+00:00 stderr F I1013 00:21:55.943468 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (311.158µs) 2025-10-13T00:21:55.943501919+00:00 stderr F I1013 00:21:55.943482 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:55.943562661+00:00 stderr F I1013 00:21:55.943525 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:55.943578221+00:00 stderr F I1013 00:21:55.943569 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:55.943799397+00:00 stderr F I1013 00:21:55.943745 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:55.986607599+00:00 stderr F W1013 00:21:55.986546 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:55.990626597+00:00 stderr F I1013 00:21:55.990575 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.083826ms) 2025-10-13T00:21:56.944498828+00:00 stderr F I1013 00:21:56.944413 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:56.944770555+00:00 stderr F I1013 00:21:56.944727 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:56.944770555+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:56.944770555+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:56.944841907+00:00 stderr F I1013 00:21:56.944798 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (398.261µs) 2025-10-13T00:21:56.944841907+00:00 stderr F I1013 00:21:56.944824 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:56.944938940+00:00 stderr F I1013 00:21:56.944888 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:56.944977901+00:00 stderr F I1013 00:21:56.944956 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:56.945214507+00:00 stderr F I1013 00:21:56.945168 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:56.986251331+00:00 stderr F W1013 00:21:56.986163 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:56.988924053+00:00 stderr F I1013 00:21:56.988866 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.037154ms) 2025-10-13T00:21:57.945592829+00:00 stderr F I1013 00:21:57.945520 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:57.945889297+00:00 stderr F I1013 00:21:57.945850 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:57.945889297+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:57.945889297+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:57.945963029+00:00 stderr F I1013 00:21:57.945928 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (419.861µs) 2025-10-13T00:21:57.945963029+00:00 stderr F I1013 00:21:57.945955 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:57.946066352+00:00 stderr F I1013 00:21:57.946019 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:57.946111163+00:00 stderr F I1013 00:21:57.946089 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:57.946503114+00:00 stderr F I1013 00:21:57.946451 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:57.971648730+00:00 stderr F W1013 00:21:57.971558 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:57.973731386+00:00 stderr F I1013 00:21:57.973688 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.729425ms) 2025-10-13T00:21:58.946206286+00:00 stderr F I1013 00:21:58.946092 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:58.946488374+00:00 stderr F I1013 00:21:58.946407 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:58.946488374+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:21:58.946488374+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:58.946488374+00:00 stderr F I1013 00:21:58.946473 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (394.53µs) 2025-10-13T00:21:58.946539685+00:00 stderr F I1013 00:21:58.946490 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:58.946565216+00:00 stderr F I1013 00:21:58.946539 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:58.946623498+00:00 stderr F I1013 00:21:58.946594 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:58.946892235+00:00 stderr F I1013 00:21:58.946844 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:58.974590980+00:00 stderr F W1013 00:21:58.974516 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:58.976439429+00:00 stderr F I1013 00:21:58.976391 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.898174ms) 2025-10-13T00:21:59.455572975+00:00 stderr F I1013 00:21:59.455490 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:59.455620996+00:00 stderr F I1013 00:21:59.455583 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:59.455649647+00:00 stderr F I1013 00:21:59.455633 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:59.455941965+00:00 stderr F I1013 00:21:59.455887 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:59.476597760+00:00 stderr F W1013 00:21:59.476533 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:59.478976744+00:00 stderr F I1013 00:21:59.478912 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.390309ms) 2025-10-13T00:21:59.947316208+00:00 stderr F I1013 00:21:59.947203 1 cvo.go:726] 
Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:21:59.947709049+00:00 stderr F I1013 00:21:59.947648 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:21:59.947709049+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:21:59.947709049+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:21:59.947818881+00:00 stderr F I1013 00:21:59.947766 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (576.026µs) 2025-10-13T00:21:59.947818881+00:00 stderr F I1013 00:21:59.947799 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:21:59.947947995+00:00 stderr F I1013 00:21:59.947880 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:21:59.947974006+00:00 stderr F I1013 00:21:59.947961 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:21:59.948513730+00:00 stderr F I1013 00:21:59.948423 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:21:59.990632773+00:00 stderr F W1013 00:21:59.990539 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:21:59.993643354+00:00 stderr F I1013 00:21:59.993578 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.773991ms) 2025-10-13T00:22:00.948152732+00:00 stderr F I1013 00:22:00.947993 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:00.948852711+00:00 stderr F I1013 00:22:00.948796 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:00.948852711+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:00.948852711+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:00.948974724+00:00 stderr F I1013 00:22:00.948924 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (947.986µs) 2025-10-13T00:22:00.949010085+00:00 stderr F I1013 00:22:00.948987 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:00.949170730+00:00 stderr F I1013 00:22:00.949113 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:00.949306213+00:00 stderr F I1013 00:22:00.949232 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:00.950096905+00:00 stderr F I1013 00:22:00.949979 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:00.992670829+00:00 stderr F W1013 00:22:00.992597 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:00.997009116+00:00 stderr F I1013 00:22:00.996946 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.919848ms) 2025-10-13T00:22:01.949311425+00:00 stderr F I1013 00:22:01.949182 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:01.950098637+00:00 stderr F I1013 00:22:01.950043 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:01.950098637+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:01.950098637+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:01.950268451+00:00 stderr F I1013 00:22:01.950224 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.1105ms) 2025-10-13T00:22:01.950268451+00:00 stderr F I1013 00:22:01.950247 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:01.950415845+00:00 stderr F I1013 00:22:01.950372 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:01.950472247+00:00 stderr F I1013 00:22:01.950452 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:01.950814446+00:00 stderr F I1013 00:22:01.950761 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:01.970864955+00:00 stderr F W1013 00:22:01.970815 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:01.972234122+00:00 stderr F I1013 00:22:01.972184 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.93752ms) 2025-10-13T00:22:02.364615302+00:00 stderr F I1013 00:22:02.364512 1 sync_worker.go:582] Wait finished 2025-10-13T00:22:02.364691164+00:00 stderr F I1013 00:22:02.364566 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:error(nil), Done:747, Total:955, Completed:1, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(2025, time.October, 13, 0, 19, 5, 266689715, time.Local), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", 
"CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:22:02.364691164+00:00 stderr F I1013 00:22:02.364659 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 0 2025-10-13T00:22:02.369564795+00:00 stderr F I1013 00:22:02.369470 1 task_graph.go:481] Running 0 on worker 1 2025-10-13T00:22:02.369564795+00:00 stderr F I1013 00:22:02.369498 1 task_graph.go:481] Running 1 on worker 1 2025-10-13T00:22:02.369627597+00:00 stderr F I1013 00:22:02.369574 1 task_graph.go:481] Running 2 on worker 0 2025-10-13T00:22:02.408719908+00:00 stderr F I1013 00:22:02.408615 1 task_graph.go:481] Running 3 on worker 1 2025-10-13T00:22:02.448625272+00:00 stderr F I1013 00:22:02.448506 1 task_graph.go:481] Running 4 on worker 1 2025-10-13T00:22:02.453625306+00:00 stderr F I1013 00:22:02.452978 1 task_graph.go:481] Running 5 on worker 1 2025-10-13T00:22:02.453625306+00:00 stderr F I1013 00:22:02.453158 1 task_graph.go:481] Running 6 on worker 1 2025-10-13T00:22:02.674445045+00:00 stderr F I1013 00:22:02.673738 1 task_graph.go:481] Running 7 on worker 0 2025-10-13T00:22:02.951218578+00:00 stderr F I1013 00:22:02.950767 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:02.951527876+00:00 stderr F I1013 00:22:02.951446 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:02.951527876+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:02.951527876+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:02.951527876+00:00 stderr F I1013 00:22:02.951510 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (754.32µs) 2025-10-13T00:22:02.952060140+00:00 stderr F I1013 00:22:02.951527 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:02.952060140+00:00 stderr F I1013 00:22:02.951580 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:02.952060140+00:00 stderr F I1013 00:22:02.951652 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:02.952060140+00:00 stderr F I1013 00:22:02.951973 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:02.976599210+00:00 stderr F W1013 00:22:02.976538 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:02.980815063+00:00 stderr F I1013 00:22:02.980759 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.219926ms) 2025-10-13T00:22:03.124380444+00:00 stderr F I1013 00:22:03.124218 1 task_graph.go:481] Running 8 on worker 1 2025-10-13T00:22:03.124380444+00:00 stderr F I1013 00:22:03.124301 1 task_graph.go:481] Running 9 on worker 1 2025-10-13T00:22:03.325569705+00:00 stderr F I1013 00:22:03.325476 1 task_graph.go:481] Running 10 on worker 1 2025-10-13T00:22:03.874679542+00:00 stderr F I1013 00:22:03.874629 1 task_graph.go:481] Running 11 on worker 0 2025-10-13T00:22:03.952430652+00:00 stderr F I1013 00:22:03.952249 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:03.952824663+00:00 stderr F I1013 00:22:03.952761 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:03.952824663+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:03.952824663+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:03.952886324+00:00 stderr F I1013 00:22:03.952860 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (624.437µs) 2025-10-13T00:22:03.952904835+00:00 stderr F I1013 00:22:03.952882 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:03.952998558+00:00 stderr F I1013 00:22:03.952949 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:03.953062139+00:00 stderr F I1013 00:22:03.953032 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:03.953695606+00:00 stderr F I1013 00:22:03.953578 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:03.981486584+00:00 stderr F W1013 00:22:03.981410 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:03.982751488+00:00 stderr F I1013 00:22:03.982702 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.818352ms) 2025-10-13T00:22:04.124058548+00:00 stderr F I1013 00:22:04.123988 1 task_graph.go:481] Running 12 on worker 1 2025-10-13T00:22:04.224185820+00:00 stderr F I1013 00:22:04.224121 1 task_graph.go:481] Running 13 on worker 1 2025-10-13T00:22:04.327044186+00:00 stderr F I1013 00:22:04.326984 1 task_graph.go:481] Running 14 on worker 1 2025-10-13T00:22:04.423693685+00:00 stderr F I1013 00:22:04.423639 1 task_graph.go:481] Running 15 on worker 1 2025-10-13T00:22:04.925158691+00:00 stderr F I1013 00:22:04.925108 1 task_graph.go:481] Running 16 on worker 1 2025-10-13T00:22:04.925158691+00:00 stderr F I1013 00:22:04.925150 1 task_graph.go:481] Running 17 on worker 1 2025-10-13T00:22:04.952915458+00:00 stderr F I1013 00:22:04.952844 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:04.953103113+00:00 stderr F I1013 00:22:04.953063 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:04.953103113+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:04.953103113+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:04.953127343+00:00 stderr F I1013 00:22:04.953109 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (272.498µs) 2025-10-13T00:22:04.953127343+00:00 stderr F I1013 00:22:04.953121 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:04.953190715+00:00 stderr F I1013 00:22:04.953163 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:04.953223996+00:00 stderr F I1013 00:22:04.953204 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:04.953480133+00:00 stderr F I1013 00:22:04.953409 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:04.988707770+00:00 stderr F W1013 00:22:04.988669 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:04.990521769+00:00 stderr F I1013 00:22:04.990480 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.354755ms) 2025-10-13T00:22:05.954220724+00:00 stderr F I1013 00:22:05.954142 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:05.954548263+00:00 stderr F I1013 00:22:05.954496 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:05.954548263+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:05.954548263+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:05.954580123+00:00 stderr F I1013 00:22:05.954566 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (436.312µs) 2025-10-13T00:22:05.954596024+00:00 stderr F I1013 00:22:05.954583 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:05.954686696+00:00 stderr F I1013 00:22:05.954651 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:05.954730637+00:00 stderr F I1013 00:22:05.954707 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:05.955058906+00:00 stderr F I1013 00:22:05.954960 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:05.981700133+00:00 stderr F W1013 00:22:05.981634 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:05.983126501+00:00 stderr F I1013 00:22:05.982963 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.377783ms) 2025-10-13T00:22:06.955527491+00:00 stderr F I1013 00:22:06.955451 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:06.955749067+00:00 stderr F I1013 00:22:06.955717 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:06.955749067+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:06.955749067+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:06.955810609+00:00 stderr F I1013 00:22:06.955771 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (330.249µs) 2025-10-13T00:22:06.955810609+00:00 stderr F I1013 00:22:06.955792 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:06.955857190+00:00 stderr F I1013 00:22:06.955829 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:06.955893831+00:00 stderr F I1013 00:22:06.955876 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:06.956116827+00:00 stderr F I1013 00:22:06.956080 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:06.994258143+00:00 stderr F W1013 00:22:06.994169 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:06.996514333+00:00 stderr F I1013 00:22:06.996453 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.657184ms) 2025-10-13T00:22:07.956684904+00:00 stderr F I1013 00:22:07.956619 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:07.957345232+00:00 stderr F I1013 00:22:07.956952 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:07.957345232+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:07.957345232+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:07.957345232+00:00 stderr F I1013 00:22:07.957027 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (434.382µs) 2025-10-13T00:22:07.957345232+00:00 stderr F I1013 00:22:07.957043 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:07.957345232+00:00 stderr F I1013 00:22:07.957097 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:07.957345232+00:00 stderr F I1013 00:22:07.957153 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:07.957526057+00:00 stderr F I1013 00:22:07.957457 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:07.984036250+00:00 stderr F W1013 00:22:07.983986 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:07.987218485+00:00 stderr F I1013 00:22:07.987135 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.082899ms) 2025-10-13T00:22:08.023888452+00:00 stderr F I1013 00:22:08.023862 1 task_graph.go:481] Running 18 on worker 1 2025-10-13T00:22:08.124745634+00:00 stderr F I1013 00:22:08.124696 1 task_graph.go:481] Running 19 on worker 1 2025-10-13T00:22:08.226283855+00:00 stderr F I1013 00:22:08.226223 1 task_graph.go:481] Running 20 on worker 1 2025-10-13T00:22:08.226372947+00:00 stderr F I1013 00:22:08.226351 1 task_graph.go:481] Running 21 on worker 1 2025-10-13T00:22:08.624692649+00:00 stderr F I1013 00:22:08.624640 1 task_graph.go:481] Running 22 on worker 1 2025-10-13T00:22:08.725205452+00:00 stderr F I1013 00:22:08.725145 1 task_graph.go:481] Running 23 on worker 1 2025-10-13T00:22:08.957176120+00:00 stderr F I1013 00:22:08.957105 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:08.957656593+00:00 stderr F I1013 00:22:08.957615 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:08.957656593+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:08.957656593+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:08.957761906+00:00 stderr F I1013 00:22:08.957704 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (631.837µs) 2025-10-13T00:22:08.957761906+00:00 stderr F I1013 00:22:08.957751 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:08.957857098+00:00 stderr F I1013 00:22:08.957825 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:08.957922070+00:00 stderr F I1013 00:22:08.957903 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:08.958302920+00:00 stderr F I1013 00:22:08.958259 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:09.000004732+00:00 stderr F W1013 00:22:08.999963 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:09.001361908+00:00 stderr F I1013 00:22:09.001341 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.576162ms) 2025-10-13T00:22:09.958040444+00:00 stderr F I1013 00:22:09.957973 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:09.958284580+00:00 stderr F I1013 00:22:09.958252 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:09.958284580+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:09.958284580+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:09.958354942+00:00 stderr F I1013 00:22:09.958303 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (344.029µs) 2025-10-13T00:22:09.958354942+00:00 stderr F I1013 00:22:09.958340 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:09.958416554+00:00 stderr F I1013 00:22:09.958389 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:09.958475136+00:00 stderr F I1013 00:22:09.958444 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:09.958697752+00:00 stderr F I1013 00:22:09.958657 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:09.959514693+00:00 stderr F I1013 00:22:09.959437 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.11301ms) 2025-10-13T00:22:09.959514693+00:00 stderr F I1013 00:22:09.959456 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:09.964758425+00:00 stderr F I1013 00:22:09.964712 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:09.964865247+00:00 stderr F I1013 00:22:09.964829 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:09.964919609+00:00 stderr F I1013 00:22:09.964897 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:09.965225287+00:00 stderr F I1013 00:22:09.965172 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:09.966462790+00:00 stderr F I1013 00:22:09.966422 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.722596ms) 2025-10-13T00:22:09.966462790+00:00 
stderr F I1013 00:22:09.966439 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:09.976655935+00:00 stderr F I1013 00:22:09.976619 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:09.976713386+00:00 stderr F I1013 00:22:09.976686 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:09.976739797+00:00 stderr F I1013 00:22:09.976729 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:09.976976863+00:00 stderr F I1013 00:22:09.976943 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:09.977702733+00:00 stderr F I1013 00:22:09.977666 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.049889ms) 2025-10-13T00:22:09.977702733+00:00 stderr F I1013 00:22:09.977680 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:09.997998458+00:00 stderr F I1013 00:22:09.997929 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:09.998048990+00:00 stderr F I1013 00:22:09.998030 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:09.998105531+00:00 stderr F I1013 00:22:09.998079 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:09.998393439+00:00 stderr F I1013 00:22:09.998300 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:09.999116949+00:00 stderr F I1013 00:22:09.999082 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.160401ms) 2025-10-13T00:22:09.999116949+00:00 stderr F I1013 00:22:09.999098 1 
cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:10.039413382+00:00 stderr F I1013 00:22:10.039355 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:10.039469344+00:00 stderr F I1013 00:22:10.039441 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:10.039526225+00:00 stderr F I1013 00:22:10.039500 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:10.039734601+00:00 stderr F I1013 00:22:10.039695 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:10.040480821+00:00 stderr F I1013 00:22:10.040439 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.09297ms) 2025-10-13T00:22:10.040480821+00:00 stderr F I1013 00:22:10.040460 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:10.120796161+00:00 stderr F I1013 00:22:10.120726 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:10.120843202+00:00 stderr F I1013 00:22:10.120818 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:10.120903374+00:00 stderr F I1013 00:22:10.120877 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:10.121106309+00:00 stderr F I1013 00:22:10.121068 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:10.121882800+00:00 stderr F I1013 00:22:10.121837 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.11943ms) 2025-10-13T00:22:10.121882800+00:00 stderr F I1013 00:22:10.121855 1 cvo.go:636] Error handling 
openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:10.282451558+00:00 stderr F I1013 00:22:10.282317 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:10.282546980+00:00 stderr F I1013 00:22:10.282508 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:10.282624033+00:00 stderr F I1013 00:22:10.282602 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:10.283202808+00:00 stderr F I1013 00:22:10.282979 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:10.284797761+00:00 stderr F I1013 00:22:10.284758 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.453056ms) 2025-10-13T00:22:10.284812151+00:00 stderr F I1013 00:22:10.284789 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:10.605515816+00:00 stderr F I1013 00:22:10.605430 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:10.605642799+00:00 stderr F I1013 00:22:10.605600 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:10.605725151+00:00 stderr F I1013 00:22:10.605702 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:10.606308417+00:00 stderr F I1013 00:22:10.606243 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:10.608082615+00:00 stderr F I1013 00:22:10.608032 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.60743ms) 2025-10-13T00:22:10.608082615+00:00 stderr F I1013 00:22:10.608063 1 cvo.go:636] Error handling 
openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:10.958463347+00:00 stderr F I1013 00:22:10.958410 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:10.958678733+00:00 stderr F I1013 00:22:10.958657 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:10.958678733+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:10.958678733+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:10.958719014+00:00 stderr F I1013 00:22:10.958700 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (302.138µs) 2025-10-13T00:22:10.958728644+00:00 stderr F I1013 00:22:10.958720 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:10.958792946+00:00 stderr F I1013 00:22:10.958756 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:10.958804276+00:00 stderr F I1013 00:22:10.958799 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:10.959081153+00:00 stderr F I1013 00:22:10.959040 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:10.959819933+00:00 stderr F I1013 00:22:10.959794 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.071439ms) 2025-10-13T00:22:10.959840114+00:00 stderr F I1013 00:22:10.959813 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:11.248401244+00:00 stderr F I1013 00:22:11.248290 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:11.248663931+00:00 stderr F I1013 00:22:11.248609 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:11.248808445+00:00 stderr F I1013 00:22:11.248778 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:11.249517124+00:00 stderr F 
I1013 00:22:11.249442 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:11.251491077+00:00 stderr F I1013 00:22:11.251407 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (3.123974ms) 2025-10-13T00:22:11.251491077+00:00 stderr F I1013 00:22:11.251443 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:11.959544108+00:00 stderr F I1013 00:22:11.959492 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:11.959861606+00:00 stderr F I1013 00:22:11.959840 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:11.959861606+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:11.959861606+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:11.959954389+00:00 stderr F I1013 00:22:11.959924 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (443.922µs) 2025-10-13T00:22:11.959988889+00:00 stderr F I1013 00:22:11.959978 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:11.960057621+00:00 stderr F I1013 00:22:11.960040 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:11.960115473+00:00 stderr F I1013 00:22:11.960106 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:11.960390500+00:00 stderr F I1013 00:22:11.960353 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:11.961305535+00:00 stderr F I1013 00:22:11.961277 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.299515ms) 
2025-10-13T00:22:11.961305535+00:00 stderr F I1013 00:22:11.961294 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:12.960796112+00:00 stderr F I1013 00:22:12.960730 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:12.961201033+00:00 stderr F I1013 00:22:12.961156 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:12.961201033+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:12.961201033+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:12.961302706+00:00 stderr F I1013 00:22:12.961258 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (541.795µs) 2025-10-13T00:22:12.961302706+00:00 stderr F I1013 00:22:12.961286 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:12.961423109+00:00 stderr F I1013 00:22:12.961385 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:12.961491971+00:00 stderr F I1013 00:22:12.961469 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:12.961901702+00:00 stderr F I1013 00:22:12.961853 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:12.963714041+00:00 stderr F I1013 00:22:12.963637 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.327033ms) 2025-10-13T00:22:12.963769092+00:00 stderr F I1013 00:22:12.963715 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:13.812125066+00:00 stderr F I1013 00:22:13.811599 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:13.812125066+00:00 stderr F I1013 00:22:13.812106 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:13.812197478+00:00 stderr F I1013 00:22:13.812156 1 sync_worker.go:479] Update 
work is equal to current target; no change required 2025-10-13T00:22:13.812505037+00:00 stderr F I1013 00:22:13.812431 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:13.813537864+00:00 stderr F I1013 00:22:13.813460 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.85702ms) 2025-10-13T00:22:13.813561095+00:00 stderr F I1013 00:22:13.813516 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:13.961632547+00:00 stderr F I1013 00:22:13.961545 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:13.961885344+00:00 stderr F I1013 00:22:13.961835 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:13.961885344+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:13.961885344+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:13.961932895+00:00 stderr F I1013 00:22:13.961897 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (364.4µs) 2025-10-13T00:22:13.961932895+00:00 stderr F I1013 00:22:13.961915 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:13.962013997+00:00 stderr F I1013 00:22:13.961962 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:13.962037208+00:00 stderr F I1013 00:22:13.962025 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:13.962384327+00:00 stderr F I1013 00:22:13.962281 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:13.963710063+00:00 stderr F I1013 00:22:13.963646 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.719667ms) 2025-10-13T00:22:13.963710063+00:00 stderr F I1013 00:22:13.963688 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:14.455191469+00:00 stderr F I1013 00:22:14.455119 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:14.455379985+00:00 stderr F I1013 00:22:14.455349 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:14.455468617+00:00 stderr F I1013 00:22:14.455454 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:14.455789796+00:00 stderr F I1013 00:22:14.455732 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:14.456796823+00:00 stderr F I1013 00:22:14.456743 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.623834ms) 2025-10-13T00:22:14.456828194+00:00 
stderr F I1013 00:22:14.456784 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:14.579769260+00:00 stderr F E1013 00:22:14.579695 1 task.go:122] error running apply for customresourcedefinition "consoleplugins.console.openshift.io" (634 of 955): Get "https://api-int.crc.testing:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoleplugins.console.openshift.io": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:14.645393934+00:00 stderr F E1013 00:22:14.645299 1 task.go:122] error running apply for servicemonitor "openshift-route-controller-manager/openshift-route-controller-manager" (952 of 955): Get "https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-route-controller-manager/servicemonitors/openshift-route-controller-manager": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:14.962470361+00:00 stderr F I1013 00:22:14.962407 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:14.962720688+00:00 stderr F I1013 00:22:14.962686 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:14.962720688+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:14.962720688+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:14.962801460+00:00 stderr F I1013 00:22:14.962768 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (372.36µs) 2025-10-13T00:22:14.962801460+00:00 stderr F I1013 00:22:14.962794 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:14.962887133+00:00 stderr F I1013 00:22:14.962851 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:14.962930654+00:00 stderr F I1013 00:22:14.962913 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:14.963224042+00:00 stderr F I1013 00:22:14.963176 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:14.964186328+00:00 stderr F I1013 00:22:14.964129 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.323746ms) 2025-10-13T00:22:14.964423714+00:00 stderr F I1013 00:22:14.964323 1 status.go:100] 
merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:14.965588255+00:00 stderr F E1013 00:22:14.965526 1 cvo.go:642] Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:14.965609846+00:00 stderr F I1013 00:22:14.965566 1 cvo.go:643] Dropping "openshift-cluster-version/version" out of the queue &{0xc0002932c0 0xc0000145a0}: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:15.962856714+00:00 stderr F I1013 00:22:15.962770 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:15.963116821+00:00 stderr F I1013 00:22:15.963072 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:15.963116821+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:15.963116821+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:15.963154372+00:00 stderr F I1013 00:22:15.963129 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (372.96µs) 2025-10-13T00:22:15.963154372+00:00 stderr F I1013 00:22:15.963145 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:15.963251575+00:00 stderr F I1013 00:22:15.963207 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:15.963306106+00:00 stderr F I1013 00:22:15.963283 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:15.963582773+00:00 stderr F I1013 00:22:15.963532 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:15.964520379+00:00 stderr F I1013 00:22:15.964458 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.305305ms) 2025-10-13T00:22:15.964535509+00:00 stderr F I1013 00:22:15.964501 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:15.969894683+00:00 stderr F I1013 00:22:15.969789 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:15.969894683+00:00 stderr F I1013 00:22:15.969860 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:15.969961445+00:00 stderr F I1013 00:22:15.969902 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:15.970163100+00:00 stderr F I1013 00:22:15.970107 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:15.970980852+00:00 stderr F I1013 00:22:15.970908 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.11638ms) 2025-10-13T00:22:15.970980852+00:00 
stderr F I1013 00:22:15.970941 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:15.981257579+00:00 stderr F I1013 00:22:15.981198 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:15.981302290+00:00 stderr F I1013 00:22:15.981267 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:15.981369192+00:00 stderr F I1013 00:22:15.981312 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:15.981630959+00:00 stderr F I1013 00:22:15.981557 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:15.982457761+00:00 stderr F I1013 00:22:15.982398 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.197182ms) 2025-10-13T00:22:15.982457761+00:00 stderr F I1013 00:22:15.982431 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.002928252+00:00 stderr F I1013 00:22:16.002833 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:16.003111567+00:00 stderr F I1013 00:22:16.003040 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:16.003203869+00:00 stderr F I1013 00:22:16.003159 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:16.003779064+00:00 stderr F I1013 00:22:16.003686 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:16.005682166+00:00 stderr F I1013 00:22:16.005611 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.794995ms) 2025-10-13T00:22:16.005682166+00:00 stderr F I1013 00:22:16.005652 1 
cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.046137674+00:00 stderr F I1013 00:22:16.046030 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:16.046550955+00:00 stderr F I1013 00:22:16.046219 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:16.046550955+00:00 stderr F I1013 00:22:16.046320 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:16.046836872+00:00 stderr F I1013 00:22:16.046761 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:16.048565529+00:00 stderr F I1013 00:22:16.048480 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.462006ms) 2025-10-13T00:22:16.048565529+00:00 stderr F I1013 00:22:16.048529 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.129005682+00:00 stderr F I1013 00:22:16.128895 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:16.129069944+00:00 stderr F I1013 00:22:16.128993 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:16.129069944+00:00 stderr F I1013 00:22:16.129051 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:16.129408853+00:00 stderr F I1013 00:22:16.129302 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:16.130431980+00:00 stderr F I1013 00:22:16.130379 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.433878ms) 2025-10-13T00:22:16.130525953+00:00 stderr F I1013 00:22:16.130489 1 cvo.go:636] Error handling 
openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.291136722+00:00 stderr F I1013 00:22:16.291056 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:16.291222264+00:00 stderr F I1013 00:22:16.291189 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:16.291302127+00:00 stderr F I1013 00:22:16.291268 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:16.291771519+00:00 stderr F I1013 00:22:16.291704 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:16.293000222+00:00 stderr F I1013 00:22:16.292961 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.916482ms) 2025-10-13T00:22:16.293082654+00:00 stderr F I1013 00:22:16.293050 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.613762087+00:00 stderr F I1013 00:22:16.613678 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:16.613860440+00:00 stderr F I1013 00:22:16.613804 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:16.613918321+00:00 stderr F I1013 00:22:16.613884 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:16.614263941+00:00 stderr F I1013 00:22:16.614197 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:16.615780741+00:00 stderr F I1013 00:22:16.615711 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.026605ms) 2025-10-13T00:22:16.615799942+00:00 stderr F I1013 00:22:16.615769 1 cvo.go:636] Error handling 
openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:16.964087188+00:00 stderr F I1013 00:22:16.964026 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:16.964486379+00:00 stderr F I1013 00:22:16.964463 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:16.964486379+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:16.964486379+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:16.964588912+00:00 stderr F I1013 00:22:16.964570 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (554.225µs) 2025-10-13T00:22:16.964671904+00:00 stderr F I1013 00:22:16.964618 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:16.964755766+00:00 stderr F I1013 00:22:16.964711 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:16.964826408+00:00 stderr F I1013 00:22:16.964797 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:16.965248649+00:00 stderr F I1013 00:22:16.965174 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:16.966760130+00:00 stderr F I1013 00:22:16.966681 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.063576ms) 2025-10-13T00:22:16.966760130+00:00 stderr F I1013 00:22:16.966726 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:17.256422020+00:00 stderr F I1013 00:22:17.256314 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:17.256536213+00:00 stderr F I1013 00:22:17.256483 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:17.256594704+00:00 stderr F I1013 00:22:17.256569 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:17.256978675+00:00 stderr F 
I1013 00:22:17.256915 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:17.258128506+00:00 stderr F I1013 00:22:17.258077 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.777908ms) 2025-10-13T00:22:17.258128506+00:00 stderr F I1013 00:22:17.258106 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:17.964943593+00:00 stderr F I1013 00:22:17.964874 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:17.965232991+00:00 stderr F I1013 00:22:17.965203 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:17.965232991+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:17.965232991+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:17.965300843+00:00 stderr F I1013 00:22:17.965272 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (466.683µs) 2025-10-13T00:22:17.965300843+00:00 stderr F I1013 00:22:17.965293 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:17.965420736+00:00 stderr F I1013 00:22:17.965376 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:17.965463257+00:00 stderr F I1013 00:22:17.965443 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:17.965751205+00:00 stderr F I1013 00:22:17.965691 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:17.966858165+00:00 stderr F I1013 00:22:17.966808 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.505881ms) 
2025-10-13T00:22:17.966881945+00:00 stderr F I1013 00:22:17.966845 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:18.034018271+00:00 stderr F I1013 00:22:18.033967 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:22:18.034136554+00:00 stderr F I1013 00:22:18.034120 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.') 2025-10-13T00:22:18.034164535+00:00 stderr F I1013 00:22:18.034154 1 upgradeable.go:123] Cluster current version=4.16.0 2025-10-13T00:22:18.034204436+00:00 stderr F I1013 00:22:18.034002 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:18.034237777+00:00 stderr F I1013 00:22:18.034227 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='DegradedPool' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are degraded, please see `oc get mcp` for further details and resolve before upgrading') 2025-10-13T00:22:18.034286128+00:00 stderr F I1013 00:22:18.034274 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (317.419µs) 2025-10-13T00:22:18.034347590+00:00 stderr F I1013 00:22:18.034293 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:18.034391811+00:00 stderr F I1013 00:22:18.034023 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:18.034417961+00:00 stderr F I1013 00:22:18.034397 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:18.034810442+00:00 stderr F I1013 00:22:18.034764 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:18.034810442+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:18.034810442+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:18.034871734+00:00 stderr F I1013 00:22:18.034829 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:18.034920335+00:00 stderr F I1013 00:22:18.034889 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (869.243µs) 2025-10-13T00:22:18.035788918+00:00 stderr F I1013 00:22:18.035739 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.748187ms) 2025-10-13T00:22:18.035806669+00:00 stderr F I1013 00:22:18.035770 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:18.035833459+00:00 stderr F I1013 00:22:18.035814 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:18.035939272+00:00 stderr F I1013 00:22:18.035895 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:18.035999114+00:00 stderr F I1013 00:22:18.035973 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:18.036474217+00:00 stderr F I1013 00:22:18.036415 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:18.037404312+00:00 stderr F I1013 00:22:18.037377 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.563092ms) 2025-10-13T00:22:18.037404312+00:00 stderr F I1013 00:22:18.037392 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:18.965422248+00:00 stderr F I1013 00:22:18.965365 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:18.965791248+00:00 stderr F I1013 00:22:18.965769 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate 
exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:18.965791248+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:18.965791248+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:18.965903981+00:00 stderr F I1013 00:22:18.965884 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (538.014µs) 2025-10-13T00:22:18.965945592+00:00 stderr F I1013 00:22:18.965931 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:18.966031274+00:00 stderr F I1013 00:22:18.966008 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:18.966104266+00:00 stderr F I1013 00:22:18.966091 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:18.966445796+00:00 stderr F I1013 00:22:18.966406 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:18.967289798+00:00 stderr F I1013 00:22:18.967240 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.299465ms) 2025-10-13T00:22:18.967315289+00:00 stderr F I1013 00:22:18.967275 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:19.818494209+00:00 stderr F I1013 00:22:19.818409 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:19.818563641+00:00 stderr F I1013 00:22:19.818510 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:19.818580121+00:00 stderr F I1013 00:22:19.818562 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:19.819445225+00:00 stderr F I1013 00:22:19.818764 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:19.819994669+00:00 stderr F I1013 00:22:19.819933 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.522091ms) 2025-10-13T00:22:19.820102692+00:00 stderr F I1013 00:22:19.820057 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:19.966947581+00:00 stderr F I1013 00:22:19.966882 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:19.967147157+00:00 stderr F I1013 00:22:19.967113 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:19.967147157+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:19.967147157+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:19.967167887+00:00 stderr F I1013 00:22:19.967155 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (284.478µs) 2025-10-13T00:22:19.967183138+00:00 stderr F I1013 00:22:19.967167 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:19.967228879+00:00 stderr F I1013 00:22:19.967201 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:19.967270680+00:00 stderr F I1013 00:22:19.967246 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:19.967502876+00:00 stderr F I1013 00:22:19.967457 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:19.968156404+00:00 stderr F I1013 00:22:19.968119 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (947.095µs) 2025-10-13T00:22:19.968377020+00:00 stderr F I1013 00:22:19.968303 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), 
Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:19.969555021+00:00 stderr F E1013 00:22:19.969372 1 cvo.go:642] Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:19.969555021+00:00 stderr F I1013 00:22:19.969407 1 cvo.go:643] Dropping "openshift-cluster-version/version" out of the queue &{0xc0002932c0 0xc0000145a0}: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:20.968300229+00:00 stderr F I1013 00:22:20.968169 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:20.969078240+00:00 stderr F I1013 00:22:20.968691 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:20.969078240+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:20.969078240+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:20.969078240+00:00 stderr F I1013 00:22:20.968787 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (630.757µs) 2025-10-13T00:22:20.969078240+00:00 stderr F I1013 00:22:20.968810 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:20.969078240+00:00 stderr F I1013 00:22:20.968900 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:20.969078240+00:00 stderr F I1013 00:22:20.968987 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:20.969543072+00:00 stderr F I1013 00:22:20.969447 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:20.970643902+00:00 stderr F I1013 00:22:20.970559 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.740587ms) 2025-10-13T00:22:20.970643902+00:00 stderr F I1013 00:22:20.970596 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:20.975945234+00:00 stderr F I1013 00:22:20.975859 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:20.975987225+00:00 stderr F I1013 00:22:20.975960 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:20.976084868+00:00 stderr F I1013 00:22:20.976025 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:20.976453588+00:00 stderr F I1013 00:22:20.976366 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:20.977129046+00:00 stderr F I1013 00:22:20.977068 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.212572ms) 2025-10-13T00:22:20.977129046+00:00 stderr F I1013 00:22:20.977089 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:20.987392802+00:00 stderr F I1013 00:22:20.987313 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:20.987488015+00:00 stderr F I1013 00:22:20.987439 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:20.987506605+00:00 stderr F I1013 00:22:20.987492 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:20.987771682+00:00 stderr F I1013 00:22:20.987718 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:20.988416880+00:00 stderr F I1013 00:22:20.988358 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.047918ms) 2025-10-13T00:22:20.988416880+00:00 stderr F I1013 00:22:20.988376 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:21.008736556+00:00 stderr F I1013 00:22:21.008640 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:21.008787498+00:00 stderr F I1013 00:22:21.008749 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:21.008836589+00:00 stderr F I1013 00:22:21.008809 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:21.009172708+00:00 stderr F I1013 00:22:21.009118 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:21.010039311+00:00 stderr F I1013 00:22:21.009992 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.359107ms) 2025-10-13T00:22:21.010039311+00:00 stderr F I1013 00:22:21.010018 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:21.050774907+00:00 stderr F I1013 00:22:21.050710 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:21.050807718+00:00 stderr F I1013 00:22:21.050789 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:21.050862999+00:00 stderr F I1013 00:22:21.050840 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:21.051078125+00:00 stderr F I1013 00:22:21.051034 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:21.051685181+00:00 stderr F I1013 00:22:21.051628 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (923.155µs) 2025-10-13T00:22:21.051685181+00:00 stderr F I1013 00:22:21.051645 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:21.132098614+00:00 stderr F I1013 00:22:21.132026 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:21.132228577+00:00 stderr F I1013 00:22:21.132178 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:21.132305139+00:00 stderr F I1013 00:22:21.132276 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:21.132720000+00:00 stderr F I1013 00:22:21.132657 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:21.133883962+00:00 stderr F I1013 00:22:21.133837 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.819959ms) 2025-10-13T00:22:21.133883962+00:00 stderr F I1013 00:22:21.133867 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:21.294714317+00:00 stderr F I1013 00:22:21.294388 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:21.294714317+00:00 stderr F I1013 00:22:21.294489 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:21.294714317+00:00 stderr F I1013 00:22:21.294541 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:21.294759678+00:00 stderr F I1013 00:22:21.294735 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:21.295538009+00:00 stderr F I1013 00:22:21.295484 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.093039ms) 2025-10-13T00:22:21.295554039+00:00 stderr F I1013 00:22:21.295523 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:21.616210192+00:00 stderr F I1013 00:22:21.616137 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:21.616333406+00:00 stderr F I1013 00:22:21.616279 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:21.616438019+00:00 stderr F I1013 00:22:21.616406 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:21.616960233+00:00 stderr F I1013 00:22:21.616895 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:21.618295258+00:00 stderr F I1013 00:22:21.618252 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.120327ms) 2025-10-13T00:22:21.618308869+00:00 stderr F I1013 00:22:21.618288 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:21.969740030+00:00 stderr F I1013 00:22:21.969670 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:21.970035828+00:00 stderr F I1013 00:22:21.969981 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:21.970035828+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:21.970035828+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:21.970096669+00:00 stderr F I1013 00:22:21.970054 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (397.991µs) 2025-10-13T00:22:21.970096669+00:00 stderr F I1013 00:22:21.970075 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:21.970164831+00:00 stderr F I1013 00:22:21.970130 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:21.970210242+00:00 stderr F I1013 00:22:21.970191 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:21.970562852+00:00 stderr F I1013 00:22:21.970513 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:21.971396554+00:00 stderr F I1013 00:22:21.971365 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.290215ms) 2025-10-13T00:22:21.971396554+00:00 stderr F I1013 00:22:21.971383 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:22.258969208+00:00 stderr F I1013 00:22:22.258895 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:22.259017159+00:00 stderr F I1013 00:22:22.258984 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:22.259061150+00:00 stderr F I1013 00:22:22.259037 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:22.259268126+00:00 stderr F I1013 00:22:22.259233 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:22.260016496+00:00 stderr F I1013 00:22:22.259963 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.070139ms) 2025-10-13T00:22:22.260016496+00:00 
stderr F I1013 00:22:22.259995 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:22.970474642+00:00 stderr F I1013 00:22:22.970409 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:22.970699618+00:00 stderr F I1013 00:22:22.970672 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:22.970699618+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:22.970699618+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:22.970755969+00:00 stderr F I1013 00:22:22.970734 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (337.09µs) 2025-10-13T00:22:22.970763889+00:00 stderr F I1013 00:22:22.970752 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:22.970824071+00:00 stderr F I1013 00:22:22.970799 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:22.970868962+00:00 stderr F I1013 00:22:22.970855 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:22.971154250+00:00 stderr F I1013 00:22:22.971115 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:23.973458803+00:00 stderr F I1013 00:22:23.973400 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:23.973760631+00:00 stderr F I1013 00:22:23.973743 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:23.973760631+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:23.973760631+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:23.973821213+00:00 stderr F I1013 00:22:23.973808 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (418.511µs) 2025-10-13T00:22:24.974344029+00:00 stderr F I1013 00:22:24.974259 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:24.974678488+00:00 stderr F I1013 00:22:24.974653 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:24.974678488+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:24.974678488+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:24.974764060+00:00 stderr F I1013 00:22:24.974740 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (494.424µs) 2025-10-13T00:22:25.975193774+00:00 stderr F I1013 00:22:25.975150 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:25.976144179+00:00 stderr F I1013 00:22:25.976124 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:25.976144179+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:25.976144179+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:25.976260773+00:00 stderr F I1013 00:22:25.976245 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.10492ms) 2025-10-13T00:22:26.976557373+00:00 stderr F I1013 00:22:26.976439 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:26.976950283+00:00 stderr F I1013 00:22:26.976893 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:26.976950283+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:26.976950283+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:26.977039146+00:00 stderr F I1013 00:22:26.976995 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (572.715µs) 2025-10-13T00:22:27.977830408+00:00 stderr F I1013 00:22:27.977746 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:27.978016503+00:00 stderr F I1013 00:22:27.977990 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:27.978016503+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:27.978016503+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:27.978084245+00:00 stderr F I1013 00:22:27.978055 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (322.908µs) 2025-10-13T00:22:28.979360672+00:00 stderr F I1013 00:22:28.978788 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:28.979360672+00:00 stderr F I1013 00:22:28.979014 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:28.979360672+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:28.979360672+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:28.979360672+00:00 stderr F I1013 00:22:28.979058 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (281.857µs) 2025-10-13T00:22:29.455831225+00:00 stderr F W1013 00:22:29.453653 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456459 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (6.485572959s) 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456503 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456519 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456525 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.55µs) 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456572 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456583 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:29.457167921+00:00 stderr F I1013 00:22:29.456589 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (16.85µs) 2025-10-13T00:22:29.656106941+00:00 stderr F I1013 00:22:29.656032 1 task_graph.go:481] Running 24 on worker 1 2025-10-13T00:22:29.658833064+00:00 stderr F I1013 00:22:29.658774 1 task_graph.go:481] Running 25 on worker 1 2025-10-13T00:22:29.698686306+00:00 stderr F I1013 00:22:29.698625 1 task_graph.go:481] Running 26 on worker 1 2025-10-13T00:22:29.936602054+00:00 stderr F I1013 00:22:29.936562 1 task_graph.go:481] Running 27 on worker 1 2025-10-13T00:22:29.979407265+00:00 stderr F I1013 00:22:29.979355 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:29.979705553+00:00 stderr F I1013 00:22:29.979667 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:29.979705553+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:29.979705553+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:29.979766294+00:00 stderr F I1013 00:22:29.979737 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (394.101µs) 2025-10-13T00:22:29.979766294+00:00 stderr F I1013 00:22:29.979761 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:29.979776485+00:00 stderr F I1013 00:22:29.979770 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:29.979785295+00:00 stderr F I1013 00:22:29.979779 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (17.251µs) 2025-10-13T00:22:29.999867575+00:00 stderr F I1013 00:22:29.997584 1 task_graph.go:481] Running 28 on worker 0 2025-10-13T00:22:30.037951049+00:00 stderr F I1013 00:22:30.037876 1 task_graph.go:481] Running 29 on worker 1 2025-10-13T00:22:30.337902906+00:00 stderr F I1013 00:22:30.337822 1 task_graph.go:481] Running 30 on worker 1 2025-10-13T00:22:30.386907453+00:00 stderr F I1013 00:22:30.386846 1 task_graph.go:481] Running 31 on worker 0 2025-10-13T00:22:30.537638467+00:00 stderr F I1013 00:22:30.537585 1 task_graph.go:481] Running 32 on worker 1 2025-10-13T00:22:30.980595098+00:00 stderr F I1013 00:22:30.980539 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:30.980907376+00:00 stderr F I1013 00:22:30.980869 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:30.980907376+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:30.980907376+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:30.980954988+00:00 stderr F I1013 00:22:30.980927 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (400.321µs) 2025-10-13T00:22:30.980954988+00:00 stderr F I1013 00:22:30.980947 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:30.980977068+00:00 stderr F I1013 00:22:30.980956 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:30.980977068+00:00 stderr F I1013 00:22:30.980962 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (15.551µs) 2025-10-13T00:22:31.135836843+00:00 stderr F I1013 00:22:31.135791 1 task_graph.go:481] Running 33 on worker 1 2025-10-13T00:22:31.236707505+00:00 stderr F I1013 00:22:31.236629 1 task_graph.go:481] Running 34 on worker 1 2025-10-13T00:22:31.888887034+00:00 stderr F I1013 00:22:31.888430 1 task_graph.go:481] Running 35 on worker 0 2025-10-13T00:22:31.981504784+00:00 stderr F I1013 00:22:31.981437 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:31.981899155+00:00 stderr F I1013 00:22:31.981862 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:31.981899155+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:31.981899155+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:31.981993868+00:00 stderr F I1013 00:22:31.981960 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (534.395µs) 2025-10-13T00:22:31.982035939+00:00 stderr F I1013 00:22:31.982010 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:31.982035939+00:00 stderr F I1013 00:22:31.982030 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:31.982045689+00:00 stderr F I1013 00:22:31.982037 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.741µs) 2025-10-13T00:22:32.982769991+00:00 stderr F I1013 00:22:32.982689 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:32.983049038+00:00 stderr F I1013 00:22:32.983016 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:32.983049038+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:32.983049038+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:32.983110990+00:00 stderr F I1013 00:22:32.983078 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (403.901µs) 2025-10-13T00:22:32.983110990+00:00 stderr F I1013 00:22:32.983098 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:32.983121460+00:00 stderr F I1013 00:22:32.983112 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:32.983132090+00:00 stderr F I1013 00:22:32.983122 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.611µs) 2025-10-13T00:22:33.737893182+00:00 stderr F I1013 00:22:33.737817 1 core.go:138] Updating ConfigMap openshift-machine-config-operator/kube-rbac-proxy due to diff:  &v1.ConfigMap{ 2025-10-13T00:22:33.737893182+00:00 stderr F    TypeMeta: v1.TypeMeta{ 2025-10-13T00:22:33.737893182+00:00 stderr F -  Kind: "", 2025-10-13T00:22:33.737893182+00:00 stderr F +  Kind: "ConfigMap", 2025-10-13T00:22:33.737893182+00:00 stderr F -  APIVersion: "", 2025-10-13T00:22:33.737893182+00:00 stderr F +  APIVersion: "v1", 2025-10-13T00:22:33.737893182+00:00 stderr F    }, 2025-10-13T00:22:33.737893182+00:00 stderr F    ObjectMeta: v1.ObjectMeta{ 2025-10-13T00:22:33.737893182+00:00 stderr F    ... // 2 identical fields 2025-10-13T00:22:33.737893182+00:00 stderr F    Namespace: "openshift-machine-config-operator", 2025-10-13T00:22:33.737893182+00:00 stderr F    SelfLink: "", 2025-10-13T00:22:33.737893182+00:00 stderr F -  UID: "ba7edbb4-1ba2-49b6-98a7-d849069e9f80", 2025-10-13T00:22:33.737893182+00:00 stderr F +  UID: "", 2025-10-13T00:22:33.737893182+00:00 stderr F -  ResourceVersion: "41947", 2025-10-13T00:22:33.737893182+00:00 stderr F +  ResourceVersion: "", 2025-10-13T00:22:33.737893182+00:00 stderr F    Generation: 0, 2025-10-13T00:22:33.737893182+00:00 stderr F -  CreationTimestamp: v1.Time{Time: s"2024-06-26 12:39:23 +0000 UTC"}, 2025-10-13T00:22:33.737893182+00:00 stderr F +  CreationTimestamp: v1.Time{}, 2025-10-13T00:22:33.737893182+00:00 stderr F    DeletionTimestamp: nil, 2025-10-13T00:22:33.737893182+00:00 stderr F    DeletionGracePeriodSeconds: nil, 2025-10-13T00:22:33.737893182+00:00 stderr F    ...
// 2 identical fields 2025-10-13T00:22:33.737893182+00:00 stderr F    OwnerReferences: {{APIVersion: "config.openshift.io/v1", Kind: "ClusterVersion", Name: "version", UID: "a73cbaa6-40d3-4694-9b98-c0a6eed45825", ...}}, 2025-10-13T00:22:33.737893182+00:00 stderr F    Finalizers: nil, 2025-10-13T00:22:33.737893182+00:00 stderr F -  ManagedFields: []v1.ManagedFieldsEntry{ 2025-10-13T00:22:33.737893182+00:00 stderr F -  { 2025-10-13T00:22:33.737893182+00:00 stderr F -  Manager: "cluster-version-operator", 2025-10-13T00:22:33.737893182+00:00 stderr F -  Operation: "Update", 2025-10-13T00:22:33.737893182+00:00 stderr F -  APIVersion: "v1", 2025-10-13T00:22:33.737893182+00:00 stderr F -  Time: s"2025-10-13 00:18:42 +0000 UTC", 2025-10-13T00:22:33.737893182+00:00 stderr F -  FieldsType: "FieldsV1", 2025-10-13T00:22:33.737893182+00:00 stderr F -  FieldsV1: s`{"f:data":{},"f:metadata":{"f:annotations":{".":{},"f:include.re`..., 2025-10-13T00:22:33.737893182+00:00 stderr F -  }, 2025-10-13T00:22:33.737893182+00:00 stderr F -  { 2025-10-13T00:22:33.737893182+00:00 stderr F -  Manager: "machine-config-operator", 2025-10-13T00:22:33.737893182+00:00 stderr F -  Operation: "Update", 2025-10-13T00:22:33.737893182+00:00 stderr F -  APIVersion: "v1", 2025-10-13T00:22:33.737893182+00:00 stderr F -  Time: s"2025-10-13 00:20:17 +0000 UTC", 2025-10-13T00:22:33.737893182+00:00 stderr F -  FieldsType: "FieldsV1", 2025-10-13T00:22:33.737893182+00:00 stderr F -  FieldsV1: s`{"f:data":{"f:config-file.yaml":{}}}`, 2025-10-13T00:22:33.737893182+00:00 stderr F -  }, 2025-10-13T00:22:33.737893182+00:00 stderr F -  }, 2025-10-13T00:22:33.737893182+00:00 stderr F +  ManagedFields: nil, 2025-10-13T00:22:33.737893182+00:00 stderr F    }, 2025-10-13T00:22:33.737893182+00:00 stderr F    Immutable: nil, 2025-10-13T00:22:33.737893182+00:00 stderr F    Data: {"config-file.yaml": "authorization:\n resourceAttributes:\n apiVersion: v1\n reso"...}, 2025-10-13T00:22:33.737893182+00:00 stderr F    BinaryData: nil, 2025-10-13T00:22:33.737893182+00:00 stderr F   } 2025-10-13T00:22:33.938089629+00:00 stderr F E1013 00:22:33.937995 1 task.go:122] error running apply for clusteroperator "machine-config" (799 of 955): Cluster operator machine-config is degraded 2025-10-13T00:22:33.938089629+00:00 stderr F I1013 00:22:33.938041 1 task_graph.go:481] Running 36 on worker 1 2025-10-13T00:22:33.983874184+00:00 stderr F I1013 00:22:33.983791 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:33.984113111+00:00 stderr F I1013 00:22:33.984047 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:33.984113111+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:33.984113111+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:33.984149392+00:00 stderr F I1013 00:22:33.984107 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (324.249µs) 2025-10-13T00:22:33.984149392+00:00 stderr F I1013 00:22:33.984123 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:33.984149392+00:00 stderr F I1013 00:22:33.984131 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:33.984149392+00:00 stderr F I1013 00:22:33.984137 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (14.2µs) 2025-10-13T00:22:33.989059399+00:00 stderr F E1013 00:22:33.988758 1 task.go:122] error running apply for imagestream "openshift/cli" (537 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io cli) 2025-10-13T00:22:34.037718324+00:00 stderr F I1013 00:22:34.037648 1 task_graph.go:481] Running 37 on worker 1 2025-10-13T00:22:34.137607076+00:00 stderr F I1013 00:22:34.137533 1 task_graph.go:481] Running 38 on worker 1 2025-10-13T00:22:34.338084529+00:00 stderr F I1013 00:22:34.338018 1 task_graph.go:481] Running 39 on worker 1 2025-10-13T00:22:34.786903371+00:00 stderr F I1013 00:22:34.786836 1 task_graph.go:481] Running 40 on worker 1 2025-10-13T00:22:34.985482413+00:00 stderr F I1013 00:22:34.984756 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:34.985482413+00:00 stderr F I1013 00:22:34.985074 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:34.985482413+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:34.985482413+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:34.985482413+00:00 stderr F I1013 00:22:34.985130 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (388.671µs) 2025-10-13T00:22:34.985482413+00:00 stderr F I1013 00:22:34.985145 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:34.985482413+00:00 stderr F I1013 00:22:34.985154 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:34.985482413+00:00 stderr F I1013 00:22:34.985159 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (15.09µs) 2025-10-13T00:22:35.985856627+00:00 stderr F I1013 00:22:35.985783 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:35.986068433+00:00 stderr F I1013 00:22:35.986010 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:35.986068433+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:35.986068433+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:35.986123455+00:00 stderr F I1013 00:22:35.986067 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (295.398µs) 2025-10-13T00:22:35.986123455+00:00 stderr F I1013 00:22:35.986081 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:35.986123455+00:00 stderr F I1013 00:22:35.986089 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:35.986123455+00:00 stderr F I1013 00:22:35.986093 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (13.32µs) 2025-10-13T00:22:36.037203788+00:00 stderr F I1013 00:22:36.037156 1 task_graph.go:481] Running 41 on worker 1 2025-10-13T00:22:36.747383499+00:00 stderr F I1013 00:22:36.747117 1 task_graph.go:481] Running 42 on worker 1 2025-10-13T00:22:36.888083819+00:00 stderr F I1013 00:22:36.888023 1 task_graph.go:481] Running 43 on worker 1 2025-10-13T00:22:36.986706965+00:00 stderr F I1013 00:22:36.986600 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:36.986986963+00:00 stderr F I1013 00:22:36.986940 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:36.986986963+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:36.986986963+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:36.987071006+00:00 stderr F I1013 00:22:36.987031 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (442.882µs) 2025-10-13T00:22:36.987071006+00:00 stderr F I1013 00:22:36.987058 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:36.987091636+00:00 stderr F I1013 00:22:36.987072 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:36.987091636+00:00 stderr F I1013 00:22:36.987079 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.251µs) 2025-10-13T00:22:37.987268365+00:00 stderr F I1013 00:22:37.987187 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:37.987610535+00:00 stderr F I1013 00:22:37.987577 1 task_graph.go:481] Running 44 on worker 1 2025-10-13T00:22:37.987786500+00:00 stderr F I1013 00:22:37.987758 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:37.987786500+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:37.987786500+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:37.988021576+00:00 stderr F I1013 00:22:37.987929 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (752.11µs) 2025-10-13T00:22:37.988021576+00:00 stderr F I1013 00:22:37.987956 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:37.988021576+00:00 stderr F I1013 00:22:37.987967 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:37.988021576+00:00 stderr F I1013 00:22:37.987973 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (19.261µs) 2025-10-13T00:22:38.038770760+00:00 stderr F I1013 00:22:38.038689 1 task_graph.go:481] Running 45 on worker 1 2025-10-13T00:22:38.538727766+00:00 stderr F I1013 00:22:38.538617 1 task_graph.go:481] Running 46 on worker 1 2025-10-13T00:22:38.587289769+00:00 stderr F I1013 00:22:38.587210 1 task_graph.go:481] Running 47 on worker 1 2025-10-13T00:22:38.987998310+00:00 stderr F I1013 00:22:38.987942 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:38.988186456+00:00 stderr F I1013 00:22:38.988150 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:38.988186456+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:38.988186456+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:38.988246247+00:00 stderr F I1013 00:22:38.988210 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (275.277µs) 2025-10-13T00:22:38.988246247+00:00 stderr F I1013 00:22:38.988233 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:38.988246247+00:00 stderr F I1013 00:22:38.988242 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:38.988258468+00:00 stderr F I1013 00:22:38.988246 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (14.55µs) 2025-10-13T00:22:39.387917110+00:00 stderr F I1013 00:22:39.387860 1 task_graph.go:481] Running 48 on worker 1 2025-10-13T00:22:39.639017725+00:00 stderr F I1013 00:22:39.638906 1 task_graph.go:481] Running 49 on worker 1 2025-10-13T00:22:39.790423242+00:00 stderr F I1013 00:22:39.790300 1 task_graph.go:481] Running 50 on worker 1 2025-10-13T00:22:39.988831709+00:00 stderr F I1013 00:22:39.988700 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:39.989151748+00:00 stderr F I1013 00:22:39.989082 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:39.989151748+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:39.989151748+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:39.989182739+00:00 stderr F I1013 00:22:39.989152 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (462.513µs) 2025-10-13T00:22:39.989203879+00:00 stderr F I1013 00:22:39.989186 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:39.989203879+00:00 stderr F I1013 00:22:39.989198 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:39.989225390+00:00 stderr F I1013 00:22:39.989203 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.201µs) 2025-10-13T00:22:40.888632913+00:00 stderr F I1013 00:22:40.888593 1 task_graph.go:481] Running 51 on worker 1 2025-10-13T00:22:40.989780541+00:00 stderr F I1013 00:22:40.989720 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:40.990258764+00:00 stderr F I1013 00:22:40.990229 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:40.990258764+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:40.990258764+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:40.990431219+00:00 stderr F I1013 00:22:40.990399 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (691.789µs) 2025-10-13T00:22:40.990513901+00:00 stderr F I1013 00:22:40.990486 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:40.990581103+00:00 stderr F I1013 00:22:40.990558 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:40.990637995+00:00 stderr F I1013 00:22:40.990615 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (131.534µs) 2025-10-13T00:22:41.990661030+00:00 stderr F I1013 00:22:41.990590 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:41.990840685+00:00 stderr F I1013 00:22:41.990810 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:41.990840685+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:41.990840685+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:41.990894176+00:00 stderr F I1013 00:22:41.990853 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (277.378µs) 2025-10-13T00:22:41.990894176+00:00 stderr F I1013 00:22:41.990870 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:41.990894176+00:00 stderr F I1013 00:22:41.990877 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:41.990894176+00:00 stderr F I1013 00:22:41.990881 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (11.56µs) 2025-10-13T00:22:42.991489518+00:00 stderr F I1013 00:22:42.991249 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:42.992080285+00:00 stderr F I1013 00:22:42.991998 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:42.992080285+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:42.992080285+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:42.992201328+00:00 stderr F I1013 00:22:42.992140 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (909.865µs) 2025-10-13T00:22:42.992201328+00:00 stderr F I1013 00:22:42.992185 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:42.992225829+00:00 stderr F I1013 00:22:42.992206 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:42.992264630+00:00 stderr F I1013 00:22:42.992217 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.041µs) 2025-10-13T00:22:43.992308046+00:00 stderr F I1013 00:22:43.992198 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:43.992506362+00:00 stderr F I1013 00:22:43.992452 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:43.992506362+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:43.992506362+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:43.992529823+00:00 stderr F I1013 00:22:43.992502 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (312.969µs) 2025-10-13T00:22:43.992529823+00:00 stderr F I1013 00:22:43.992518 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:43.992546463+00:00 stderr F I1013 00:22:43.992526 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:43.992546463+00:00 stderr F I1013 00:22:43.992533 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (15.811µs) 2025-10-13T00:22:44.090071670+00:00 stderr F E1013 00:22:44.089965 1 task.go:122] error running apply for imagestream "openshift/cli" (537 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io cli) 2025-10-13T00:22:44.338281634+00:00 stderr F I1013 00:22:44.338174 1 task_graph.go:481] Running 52 on worker 1 2025-10-13T00:22:44.455610922+00:00 stderr F I1013 00:22:44.455468 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:44.455610922+00:00 stderr F I1013 00:22:44.455518 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:44.455610922+00:00 stderr F I1013 00:22:44.455530 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (76.293µs) 2025-10-13T00:22:44.837298824+00:00 stderr F I1013 00:22:44.837236 1 task_graph.go:481] Running 53 on worker 1 2025-10-13T00:22:44.837430668+00:00 stderr F I1013 00:22:44.837402 1 task_graph.go:481] Running 54 on worker 1 2025-10-13T00:22:44.996498078+00:00 stderr F I1013 00:22:44.993673 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:44.996498078+00:00 stderr F I1013 00:22:44.994205 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:44.996498078+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:44.996498078+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:44.996498078+00:00 stderr F I1013 00:22:44.994323 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (664.089µs) 2025-10-13T00:22:44.996498078+00:00 stderr F I1013 00:22:44.994384 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:44.996498078+00:00 stderr F I1013 00:22:44.994402 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:44.996498078+00:00 stderr F I1013 00:22:44.994413 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.841µs) 2025-10-13T00:22:45.044891676+00:00 stderr F I1013 00:22:45.044806 1 task_graph.go:481] Running 55 on worker 1 2025-10-13T00:22:45.487007751+00:00 stderr F I1013 00:22:45.486889 1 task_graph.go:481] Running 56 on worker 1 2025-10-13T00:22:45.995073013+00:00 stderr F I1013 00:22:45.994913 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:45.995354221+00:00 stderr F I1013 00:22:45.995253 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:45.995354221+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:45.995354221+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:45.995354221+00:00 stderr F I1013 00:22:45.995325 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (425.292µs) 2025-10-13T00:22:45.995386672+00:00 stderr F I1013 00:22:45.995366 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:45.995386672+00:00 stderr F I1013 00:22:45.995377 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:45.995412453+00:00 stderr F I1013 00:22:45.995383 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (18.501µs) 2025-10-13T00:22:46.066048830+00:00 stderr F I1013 00:22:46.065921 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:22:46.486185633+00:00 stderr F I1013 00:22:46.486138 1 task_graph.go:481] Running 57 on worker 1 2025-10-13T00:22:46.538548662+00:00 stderr F I1013 00:22:46.538495 1 task_graph.go:481] Running 58 on worker 1 2025-10-13T00:22:46.987275311+00:00 stderr F I1013 00:22:46.987163 1 task_graph.go:481] Running 59 on worker 1 2025-10-13T00:22:46.995956263+00:00 stderr F I1013 00:22:46.995890 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:46.996262032+00:00 stderr F I1013 00:22:46.996164 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:46.996262032+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:46.996262032+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:46.996262032+00:00 stderr F I1013 00:22:46.996227 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (347.199µs) 2025-10-13T00:22:46.996262032+00:00 stderr F I1013 00:22:46.996244 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:46.996262032+00:00 stderr F I1013 00:22:46.996252 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:46.996305503+00:00 stderr F I1013 00:22:46.996257 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (15.35µs) 2025-10-13T00:22:47.288364638+00:00 stderr F I1013 00:22:47.288287 1 task_graph.go:481] Running 60 on worker 1 2025-10-13T00:22:47.337298551+00:00 stderr F I1013 00:22:47.337229 1 task_graph.go:481] Running 61 on worker 1 2025-10-13T00:22:47.542609750+00:00 stderr F I1013 00:22:47.542515 1 task_graph.go:481] Running 62 on worker 1 2025-10-13T00:22:47.996452142+00:00 stderr F I1013 00:22:47.996375 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:47.996683619+00:00 stderr F I1013 00:22:47.996649 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:47.996683619+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:47.996683619+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:47.996727340+00:00 stderr F I1013 00:22:47.996701 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (339.489µs) 2025-10-13T00:22:47.996727340+00:00 stderr F I1013 00:22:47.996720 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:47.996735430+00:00 stderr F I1013 00:22:47.996727 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:47.996735430+00:00 stderr F I1013 00:22:47.996730 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (12.08µs) 2025-10-13T00:22:48.997732782+00:00 stderr F I1013 00:22:48.997666 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:48.997958059+00:00 stderr F I1013 00:22:48.997927 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:48.997958059+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:48.997958059+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:48.997997890+00:00 stderr F I1013 00:22:48.997974 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (320.249µs) 2025-10-13T00:22:48.997997890+00:00 stderr F I1013 00:22:48.997992 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:48.998008770+00:00 stderr F I1013 00:22:48.997999 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:48.998008770+00:00 stderr F I1013 00:22:48.998003 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (12.641µs) 2025-10-13T00:22:49.539269847+00:00 stderr F I1013 00:22:49.538987 1 task_graph.go:481] Running 63 on worker 1 2025-10-13T00:22:49.567825853+00:00 stderr F I1013 00:22:49.567704 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:22:49.998477568+00:00 stderr F I1013 00:22:49.998392 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:49.998734706+00:00 stderr F I1013 00:22:49.998687 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:49.998734706+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:49.998734706+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:49.998795967+00:00 stderr F I1013 00:22:49.998769 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (398.801µs) 2025-10-13T00:22:49.998795967+00:00 stderr F I1013 00:22:49.998790 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:49.998811308+00:00 stderr F I1013 00:22:49.998802 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-10-13T00:22:49.998823928+00:00 stderr F I1013 00:22:49.998809 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (20.431µs) 2025-10-13T00:22:50.336931936+00:00 stderr F I1013 00:22:50.336860 1 task_graph.go:481] Running 64 on worker 1 2025-10-13T00:22:50.836481421+00:00 stderr F I1013 00:22:50.836431 1 task_graph.go:481] Running 65 on worker 1 2025-10-13T00:22:50.848623630+00:00 stderr F I1013 00:22:50.848577 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848763 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848803 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848813 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848849 1 upgradeable.go:69] Upgradeability last checked 32.814572389s ago, will not 
re-check until 2025-10-13T00:24:18Z 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848861 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (116.703µs) 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848924 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.848987 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.849029 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:50.849174365+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:50.849174365+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:50.849174365+00:00 stderr F I1013 00:22:50.849091 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (290.788µs) 2025-10-13T00:22:50.849395381+00:00 stderr F I1013 00:22:50.849315 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:50.910350049+00:00 stderr F W1013 00:22:50.910265 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:50.911665546+00:00 stderr F I1013 00:22:50.911620 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (62.84043ms) 2025-10-13T00:22:50.911665546+00:00 stderr F I1013 00:22:50.911643 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:50.911727077+00:00 stderr F I1013 00:22:50.911689 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:50.911738488+00:00 stderr F I1013 00:22:50.911729 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:50.911964804+00:00 stderr F I1013 00:22:50.911923 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), 
Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:50.942233547+00:00 stderr F W1013 00:22:50.942165 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:50.944155031+00:00 stderr F I1013 00:22:50.944092 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.443174ms) 2025-10-13T00:22:50.999380339+00:00 stderr F I1013 00:22:50.999270 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:50.999645196+00:00 stderr F I1013 00:22:50.999606 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:50.999645196+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:50.999645196+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:50.999700648+00:00 stderr F I1013 00:22:50.999672 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (415.522µs) 2025-10-13T00:22:50.999700648+00:00 stderr F I1013 00:22:50.999692 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:50.999782910+00:00 stderr F I1013 00:22:50.999743 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:50.999834322+00:00 stderr F I1013 00:22:50.999812 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:51.000147510+00:00 stderr F I1013 00:22:51.000093 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:51.039373573+00:00 stderr F W1013 00:22:51.039288 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:51.042160601+00:00 stderr F I1013 00:22:51.042088 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.38918ms) 2025-10-13T00:22:51.051047908+00:00 stderr F I1013 00:22:51.050950 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-10-13T00:22:51.288847582+00:00 stderr F I1013 00:22:51.287014 1 task_graph.go:481] Running 66 on worker 1 2025-10-13T00:22:51.288847582+00:00 stderr F I1013 00:22:51.287048 1 task_graph.go:481] Running 67 on worker 1 2025-10-13T00:22:51.488168334+00:00 stderr F I1013 00:22:51.488070 1 task_graph.go:481] Running 68 on worker 1 
2025-10-13T00:22:51.588719095+00:00 stderr F I1013 00:22:51.588612 1 task_graph.go:481] Running 69 on worker 1 2025-10-13T00:22:51.640299072+00:00 stderr F I1013 00:22:51.640225 1 task_graph.go:481] Running 70 on worker 1 2025-10-13T00:22:51.640374824+00:00 stderr F I1013 00:22:51.640300 1 task_graph.go:481] Running 71 on worker 1 2025-10-13T00:22:51.688606658+00:00 stderr F I1013 00:22:51.688538 1 task_graph.go:481] Running 72 on worker 1 2025-10-13T00:22:52.000443874+00:00 stderr F I1013 00:22:52.000377 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:52.000629389+00:00 stderr F I1013 00:22:52.000598 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:52.000629389+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:52.000629389+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:52.000664410+00:00 stderr F I1013 00:22:52.000640 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (274.637µs) 2025-10-13T00:22:52.000664410+00:00 stderr F I1013 00:22:52.000657 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:52.000727662+00:00 stderr F I1013 00:22:52.000694 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:52.000751943+00:00 stderr F I1013 00:22:52.000736 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:52.000987419+00:00 stderr F I1013 00:22:52.000943 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:52.024092653+00:00 stderr F W1013 00:22:52.024033 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:52.025561464+00:00 stderr F I1013 00:22:52.025521 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.859672ms) 2025-10-13T00:22:52.287896990+00:00 stderr F I1013 00:22:52.287846 1 task_graph.go:481] Running 73 on worker 1 2025-10-13T00:22:52.345614998+00:00 stderr F I1013 00:22:52.345567 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:52.435036759+00:00 stderr F I1013 00:22:52.434952 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:53.001125677+00:00 stderr F I1013 00:22:53.001068 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-10-13T00:22:53.001478997+00:00 stderr F I1013 00:22:53.001463 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:53.001478997+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:53.001478997+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:53.001560730+00:00 stderr F I1013 00:22:53.001547 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (491.094µs) 2025-10-13T00:22:53.001632812+00:00 stderr F I1013 00:22:53.001607 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:53.001708384+00:00 stderr F I1013 00:22:53.001689 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:53.001764405+00:00 stderr F I1013 00:22:53.001754 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:53.002007512+00:00 stderr F I1013 00:22:53.001977 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:53.022155483+00:00 stderr F W1013 00:22:53.022099 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:53.023515811+00:00 stderr F I1013 00:22:53.023468 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.860329ms) 2025-10-13T00:22:54.002245674+00:00 stderr F I1013 00:22:54.002097 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:54.002720287+00:00 stderr F I1013 00:22:54.002677 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:54.002720287+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:54.002720287+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:54.002788629+00:00 stderr F I1013 00:22:54.002766 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (685.259µs) 2025-10-13T00:22:54.002820300+00:00 stderr F I1013 00:22:54.002792 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:54.002883572+00:00 stderr F I1013 00:22:54.002855 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:54.002960554+00:00 stderr F I1013 00:22:54.002929 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:54.003603822+00:00 stderr F I1013 00:22:54.003528 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:54.033147755+00:00 stderr F W1013 00:22:54.032522 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:54.034565754+00:00 stderr F I1013 00:22:54.034544 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.751225ms) 2025-10-13T00:22:54.288201690+00:00 stderr F I1013 00:22:54.288147 1 task_graph.go:481] Running 74 on worker 1 2025-10-13T00:22:55.003588437+00:00 stderr F I1013 00:22:55.003530 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:55.003893295+00:00 stderr F I1013 00:22:55.003872 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:55.003893295+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:55.003893295+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:55.004006259+00:00 stderr F I1013 00:22:55.003975 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (453.623µs) 2025-10-13T00:22:55.004112612+00:00 stderr F I1013 00:22:55.004044 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:55.004244635+00:00 stderr F I1013 00:22:55.004190 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:55.004353558+00:00 stderr F I1013 00:22:55.004298 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:55.004953475+00:00 stderr F I1013 00:22:55.004865 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:55.025829536+00:00 stderr F W1013 00:22:55.025758 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:55.027144433+00:00 stderr F I1013 00:22:55.027103 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.075143ms) 2025-10-13T00:22:55.087820783+00:00 stderr F W1013 00:22:55.087771 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+; use flowcontrol.apiserver.k8s.io/v1 FlowSchema 2025-10-13T00:22:55.088428400+00:00 stderr F I1013 00:22:55.088403 1 task_graph.go:481] Running 75 on worker 1 2025-10-13T00:22:55.338223048+00:00 stderr F I1013 00:22:55.338150 1 task_graph.go:481] Running 76 on worker 1 2025-10-13T00:22:55.387452940+00:00 stderr F I1013 00:22:55.387374 1 task_graph.go:481] Running 77 on worker 1 2025-10-13T00:22:55.387812680+00:00 stderr F I1013 00:22:55.387752 1 task_graph.go:481] Running 78 on worker 1 2025-10-13T00:22:56.004681932+00:00 stderr F I1013 00:22:56.004613 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:56.004956189+00:00 stderr F I1013 00:22:56.004926 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:56.004956189+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:56.004956189+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:56.005010901+00:00 stderr F I1013 00:22:56.004986 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (387.201µs) 2025-10-13T00:22:56.005010901+00:00 stderr F I1013 00:22:56.005006 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:56.005132694+00:00 stderr F I1013 00:22:56.005062 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:56.005132694+00:00 stderr F I1013 00:22:56.005127 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:56.005436063+00:00 stderr F I1013 00:22:56.005392 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:56.026421207+00:00 stderr F W1013 00:22:56.026372 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:56.027815996+00:00 stderr F I1013 00:22:56.027779 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.771234ms) 2025-10-13T00:22:57.005430868+00:00 stderr F I1013 00:22:57.005344 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:57.005726176+00:00 stderr F I1013 00:22:57.005694 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:57.005726176+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:57.005726176+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:57.005780678+00:00 stderr F I1013 00:22:57.005755 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (436.272µs) 2025-10-13T00:22:57.005789308+00:00 stderr F I1013 00:22:57.005778 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:57.005868170+00:00 stderr F I1013 00:22:57.005834 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:57.005915042+00:00 stderr F I1013 00:22:57.005892 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:57.006236441+00:00 stderr F I1013 00:22:57.006192 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:57.034162848+00:00 stderr F W1013 00:22:57.034117 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:57.035501116+00:00 stderr F I1013 00:22:57.035482 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.702627ms) 2025-10-13T00:22:57.090257551+00:00 stderr F I1013 00:22:57.090214 1 task_graph.go:478] Canceled worker 1 while waiting for work 2025-10-13T00:22:57.090303072+00:00 stderr F I1013 00:22:57.090225 1 task_graph.go:478] Canceled worker 0 while waiting for work 2025-10-13T00:22:57.090360954+00:00 stderr F I1013 00:22:57.090350 1 task_graph.go:527] Workers finished 2025-10-13T00:22:57.090400895+00:00 stderr F I1013 00:22:57.090384 1 task_graph.go:550] Result of work: [Cluster operator machine-config is degraded Could not update imagestream "openshift/cli" (537 of 955): the server is down or not responding] 2025-10-13T00:22:57.090430816+00:00 stderr F I1013 00:22:57.090422 1 sync_worker.go:1167] Summarizing 2 errors 2025-10-13T00:22:57.090459197+00:00 stderr F I1013 00:22:57.090446 1 sync_worker.go:1171] Update error 799 of 955: ClusterOperatorDegraded Cluster operator machine-config is degraded (*errors.errorString: cluster operator machine-config is Degraded=True: RequiredPoolsFailed, Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused) 2025-10-13T00:22:57.090487387+00:00 stderr F I1013 00:22:57.090473 1 sync_worker.go:1171] Update error 537 of 955: UpdatePayloadClusterDown Could not update imagestream "openshift/cli" (537 of 955): the server is down or not responding (*errors.StatusError: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from 
succeeding (get imagestreams.image.openshift.io cli)) 2025-10-13T00:22:57.090538409+00:00 stderr F E1013 00:22:57.090527 1 sync_worker.go:649] unable to synchronize image (waiting 22.137185998s): Multiple errors are preventing progress: 2025-10-13T00:22:57.090538409+00:00 stderr F * Cluster operator machine-config is degraded 2025-10-13T00:22:57.090538409+00:00 stderr F * Could not update imagestream "openshift/cli" (537 of 955): the server is down or not responding 2025-10-13T00:22:58.006833072+00:00 stderr F I1013 00:22:58.006676 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:58.007269845+00:00 stderr F I1013 00:22:58.007109 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:58.007269845+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:58.007269845+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:58.007269845+00:00 stderr F I1013 00:22:58.007180 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (522.605µs) 2025-10-13T00:22:58.007269845+00:00 stderr F I1013 00:22:58.007195 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:58.007269845+00:00 stderr F I1013 00:22:58.007241 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:58.007313076+00:00 stderr F I1013 00:22:58.007296 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:58.007402298+00:00 stderr F I1013 00:22:58.007305 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, 
KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:22:58.007806960+00:00 stderr F I1013 00:22:58.007712 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:58.042390963+00:00 stderr F W1013 00:22:58.038723 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040121 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040178 1 upgradeable.go:69] Upgradeability last checked 40.005903794s ago, will not re-check until 2025-10-13T00:24:18Z 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040186 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (74.662µs) 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040196 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040425 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:58.042390963+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:22:58.042390963+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040496 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (300.528µs) 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040692 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.492113ms) 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040719 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040794 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040845 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.040853 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:22:58.042390963+00:00 stderr F I1013 00:22:58.041159 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:58.078165149+00:00 stderr F W1013 00:22:58.077834 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:58.081441881+00:00 stderr F I1013 00:22:58.079389 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.664797ms) 2025-10-13T00:22:59.008031311+00:00 stderr F I1013 00:22:59.007960 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:22:59.008322289+00:00 stderr F I1013 00:22:59.008292 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:22:59.008322289+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:22:59.008322289+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:22:59.008397121+00:00 stderr F I1013 00:22:59.008374 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (426.382µs) 2025-10-13T00:22:59.008408861+00:00 stderr F I1013 00:22:59.008393 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:59.008465733+00:00 stderr F I1013 00:22:59.008443 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:59.008538745+00:00 stderr F I1013 00:22:59.008516 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:59.008620707+00:00 stderr F I1013 00:22:59.008537 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", 
"CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:22:59.008939406+00:00 stderr F I1013 00:22:59.008901 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:59.030698192+00:00 stderr F W1013 00:22:59.030598 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:59.032739559+00:00 stderr F I1013 00:22:59.032677 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.279956ms) 2025-10-13T00:22:59.455267098+00:00 stderr F I1013 00:22:59.455148 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:22:59.455267098+00:00 stderr F I1013 00:22:59.455243 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:22:59.455368281+00:00 stderr F I1013 00:22:59.455290 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:22:59.455395811+00:00 stderr F I1013 00:22:59.455298 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", 
"DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:22:59.455672029+00:00 stderr F I1013 00:22:59.455604 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:22:59.495288033+00:00 stderr F W1013 00:22:59.495157 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:22:59.497002240+00:00 stderr F I1013 00:22:59.496949 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.807435ms) 2025-10-13T00:23:00.008916120+00:00 stderr F I1013 00:23:00.008831 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:00.009146527+00:00 stderr F I1013 00:23:00.009076 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:00.009146527+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:00.009146527+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:00.009146527+00:00 stderr F I1013 00:23:00.009123 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (303.109µs) 2025-10-13T00:23:00.009146527+00:00 stderr F I1013 00:23:00.009137 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:00.009216878+00:00 stderr F I1013 00:23:00.009173 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:00.009238609+00:00 stderr F I1013 00:23:00.009218 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:00.009295291+00:00 stderr F I1013 00:23:00.009223 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:00.009565268+00:00 stderr F I1013 00:23:00.009515 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:00.036583381+00:00 stderr F W1013 00:23:00.036528 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:00.038485874+00:00 stderr F I1013 00:23:00.038459 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.317727ms) 2025-10-13T00:23:01.009802460+00:00 stderr F I1013 00:23:01.009407 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:01.010186451+00:00 stderr F I1013 00:23:01.010125 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:01.010186451+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:01.010186451+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:01.010217392+00:00 stderr F I1013 00:23:01.010180 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (784.342µs) 2025-10-13T00:23:01.010217392+00:00 stderr F I1013 00:23:01.010194 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:01.010277283+00:00 stderr F I1013 00:23:01.010236 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:01.010297584+00:00 stderr F I1013 00:23:01.010287 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:01.010382286+00:00 stderr F I1013 00:23:01.010295 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:01.010588882+00:00 stderr F I1013 00:23:01.010544 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:01.035573658+00:00 stderr F W1013 00:23:01.035510 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:01.038501659+00:00 stderr F I1013 00:23:01.038453 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.251867ms) 2025-10-13T00:23:01.531927084+00:00 stderr F I1013 00:23:01.531868 1 sync_worker.go:234] Notify the sync worker: Cluster operator authentication changed Available from "True" to "False" 2025-10-13T00:23:01.531987556+00:00 stderr F I1013 00:23:01.531919 1 sync_worker.go:584] Cluster operator authentication changed Available from "True" to "False" 2025-10-13T00:23:01.531987556+00:00 stderr F I1013 00:23:01.531954 1 sync_worker.go:592] No change, waiting 2025-10-13T00:23:02.011132092+00:00 stderr F I1013 00:23:02.011074 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:02.011392629+00:00 stderr F I1013 00:23:02.011358 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:02.011392629+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:02.011392629+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:02.011463641+00:00 stderr F I1013 00:23:02.011421 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (359.52µs) 2025-10-13T00:23:02.011463641+00:00 stderr F I1013 00:23:02.011441 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:02.011540644+00:00 stderr F I1013 00:23:02.011506 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:02.011594385+00:00 stderr F I1013 00:23:02.011569 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:02.011681277+00:00 stderr F I1013 00:23:02.011610 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:02.011932804+00:00 stderr F I1013 00:23:02.011895 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:02.036949911+00:00 stderr F W1013 00:23:02.036884 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:02.040132900+00:00 stderr F I1013 00:23:02.040079 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.632787ms) 2025-10-13T00:23:03.012399902+00:00 stderr F I1013 00:23:03.012297 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:03.012697640+00:00 stderr F I1013 00:23:03.012607 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:03.012697640+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:03.012697640+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:03.012728641+00:00 stderr F I1013 00:23:03.012658 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (374.601µs) 2025-10-13T00:23:03.012728641+00:00 stderr F I1013 00:23:03.012706 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:03.012788773+00:00 stderr F I1013 00:23:03.012752 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:03.012824704+00:00 stderr F I1013 00:23:03.012804 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:03.012885095+00:00 stderr F I1013 00:23:03.012812 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:03.013160023+00:00 stderr F I1013 00:23:03.013050 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:03.053532867+00:00 stderr F W1013 00:23:03.053434 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:03.054766872+00:00 stderr F I1013 00:23:03.054733 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.025151ms) 2025-10-13T00:23:04.013554339+00:00 stderr F I1013 00:23:04.013472 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:04.013750595+00:00 stderr F I1013 00:23:04.013722 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:04.013750595+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:04.013750595+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:04.013811266+00:00 stderr F I1013 00:23:04.013783 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (321.289µs) 2025-10-13T00:23:04.013811266+00:00 stderr F I1013 00:23:04.013806 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:04.013879428+00:00 stderr F I1013 00:23:04.013857 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:04.013926639+00:00 stderr F I1013 00:23:04.013913 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:04.013986181+00:00 stderr F I1013 00:23:04.013924 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), 
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:04.014226058+00:00 stderr F I1013 00:23:04.014188 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:04.058248044+00:00 stderr F W1013 00:23:04.058122 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:04.061120654+00:00 stderr F I1013 00:23:04.061022 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.209685ms) 2025-10-13T00:23:05.014639285+00:00 stderr F I1013 00:23:05.014562 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:05.015033326+00:00 stderr F I1013 00:23:05.014976 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:05.015033326+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:05.015033326+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:05.015128769+00:00 stderr F I1013 00:23:05.015077 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (533.005µs) 2025-10-13T00:23:05.015128769+00:00 stderr F I1013 00:23:05.015107 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:05.015228532+00:00 stderr F I1013 00:23:05.015172 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:05.015275613+00:00 stderr F I1013 00:23:05.015248 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:05.015403786+00:00 stderr F I1013 00:23:05.015261 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:05.015815588+00:00 stderr F I1013 00:23:05.015733 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:05.066724006+00:00 stderr F W1013 00:23:05.066626 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:05.068091094+00:00 stderr F I1013 00:23:05.068000 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.892153ms) 2025-10-13T00:23:06.015881655+00:00 stderr F I1013 00:23:06.015413 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:06.015988908+00:00 stderr F I1013 00:23:06.015963 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:06.015988908+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:06.015988908+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:06.016046860+00:00 stderr F I1013 00:23:06.016013 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (610.127µs) 2025-10-13T00:23:06.016046860+00:00 stderr F I1013 00:23:06.016032 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:06.016120992+00:00 stderr F I1013 00:23:06.016077 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:06.016143312+00:00 stderr F I1013 00:23:06.016130 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:06.016222014+00:00 stderr F I1013 00:23:06.016137 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:06.016482542+00:00 stderr F I1013 00:23:06.016387 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:06.044067790+00:00 stderr F W1013 00:23:06.043956 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:06.046533599+00:00 stderr F I1013 00:23:06.046451 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.412897ms) 2025-10-13T00:23:07.016523957+00:00 stderr F I1013 00:23:07.016104 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:07.016523957+00:00 stderr F I1013 00:23:07.016409 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:07.016523957+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:07.016523957+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:07.016523957+00:00 stderr F I1013 00:23:07.016463 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (369.93µs) 2025-10-13T00:23:07.016523957+00:00 stderr F I1013 00:23:07.016476 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:07.016617849+00:00 stderr F I1013 00:23:07.016549 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:07.016617849+00:00 stderr F I1013 00:23:07.016600 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:07.016682081+00:00 stderr F I1013 00:23:07.016607 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), 
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:07.016898747+00:00 stderr F I1013 00:23:07.016834 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:07.053483136+00:00 stderr F W1013 00:23:07.053425 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:07.056236183+00:00 stderr F I1013 00:23:07.056188 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.705326ms) 2025-10-13T00:23:08.017061137+00:00 stderr F I1013 00:23:08.016963 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:08.017433388+00:00 stderr F I1013 00:23:08.017394 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:08.017433388+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:08.017433388+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:08.017511980+00:00 stderr F I1013 00:23:08.017484 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (535.195µs) 2025-10-13T00:23:08.017523310+00:00 stderr F I1013 00:23:08.017510 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:08.017605772+00:00 stderr F I1013 00:23:08.017573 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:08.017663264+00:00 stderr F I1013 00:23:08.017645 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:08.017742756+00:00 stderr F I1013 00:23:08.017659 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:08.018134437+00:00 stderr F I1013 00:23:08.018077 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:08.038525945+00:00 stderr F W1013 00:23:08.038474 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:08.041436156+00:00 stderr F I1013 00:23:08.041388 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.871805ms) 2025-10-13T00:23:09.018166423+00:00 stderr F I1013 00:23:09.018104 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:09.018528443+00:00 stderr F I1013 00:23:09.018504 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:09.018528443+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:09.018528443+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:09.018631736+00:00 stderr F I1013 00:23:09.018612 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (519.094µs) 2025-10-13T00:23:09.018719499+00:00 stderr F I1013 00:23:09.018669 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:09.018817361+00:00 stderr F I1013 00:23:09.018777 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:09.018889643+00:00 stderr F I1013 00:23:09.018845 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:09.018965635+00:00 stderr F I1013 00:23:09.018889 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:09.019245733+00:00 stderr F I1013 00:23:09.019182 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:09.047945483+00:00 stderr F W1013 00:23:09.047824 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:09.049602129+00:00 stderr F I1013 00:23:09.049568 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.90763ms) 2025-10-13T00:23:10.019313981+00:00 stderr F I1013 00:23:10.019243 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:10.019846465+00:00 stderr F I1013 00:23:10.019803 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:10.019846465+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:10.019846465+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:10.019997250+00:00 stderr F I1013 00:23:10.019942 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (714.49µs) 2025-10-13T00:23:10.019997250+00:00 stderr F I1013 00:23:10.019981 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:10.020122743+00:00 stderr F I1013 00:23:10.020073 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:10.020213506+00:00 stderr F I1013 00:23:10.020175 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:10.020346489+00:00 stderr F I1013 00:23:10.020199 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), 
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:10.021002008+00:00 stderr F I1013 00:23:10.020894 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:10.065104226+00:00 stderr F W1013 00:23:10.064996 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:10.074042255+00:00 stderr F I1013 00:23:10.073949 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.954843ms) 2025-10-13T00:23:11.021033962+00:00 stderr F I1013 00:23:11.020930 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:11.021262679+00:00 stderr F I1013 00:23:11.021213 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:11.021262679+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:11.021262679+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:11.021284159+00:00 stderr F I1013 00:23:11.021272 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (356.83µs) 2025-10-13T00:23:11.021300020+00:00 stderr F I1013 00:23:11.021287 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:11.021408013+00:00 stderr F I1013 00:23:11.021354 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:11.021430303+00:00 stderr F I1013 00:23:11.021413 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:11.021507986+00:00 stderr F I1013 00:23:11.021424 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:11.021765553+00:00 stderr F I1013 00:23:11.021709 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:11.049017532+00:00 stderr F W1013 00:23:11.048933 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:11.051201843+00:00 stderr F I1013 00:23:11.051138 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.846172ms) 2025-10-13T00:23:12.021475571+00:00 stderr F I1013 00:23:12.021381 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:12.021801490+00:00 stderr F I1013 00:23:12.021755 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:12.021801490+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:12.021801490+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:12.021929533+00:00 stderr F I1013 00:23:12.021854 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (489.794µs) 2025-10-13T00:23:12.021929533+00:00 stderr F I1013 00:23:12.021887 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:12.022034336+00:00 stderr F I1013 00:23:12.021973 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:12.022085678+00:00 stderr F I1013 00:23:12.022071 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:12.022247262+00:00 stderr F I1013 00:23:12.022086 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:12.022644263+00:00 stderr F I1013 00:23:12.022560 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:12.066267698+00:00 stderr F W1013 00:23:12.066186 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:12.069113348+00:00 stderr F I1013 00:23:12.069042 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.149593ms) 2025-10-13T00:23:13.022750220+00:00 stderr F I1013 00:23:13.022608 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:13.023222084+00:00 stderr F I1013 00:23:13.023018 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:13.023222084+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:13.023222084+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:13.023222084+00:00 stderr F I1013 00:23:13.023105 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (514.334µs) 2025-10-13T00:23:13.023222084+00:00 stderr F I1013 00:23:13.023124 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:13.023222084+00:00 stderr F I1013 00:23:13.023186 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:13.023275195+00:00 stderr F I1013 00:23:13.023254 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:13.023407679+00:00 stderr F I1013 00:23:13.023271 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), 
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:13.023736478+00:00 stderr F I1013 00:23:13.023684 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:13.046791280+00:00 stderr F W1013 00:23:13.046557 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:13.047952542+00:00 stderr F I1013 00:23:13.047851 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.725808ms) 2025-10-13T00:23:14.024951486+00:00 stderr F I1013 00:23:14.023989 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:14.025373237+00:00 stderr F I1013 00:23:14.025274 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:14.025373237+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:14.025373237+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:14.025430359+00:00 stderr F I1013 00:23:14.025395 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.42019ms) 2025-10-13T00:23:14.025430359+00:00 stderr F I1013 00:23:14.025420 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:14.025548722+00:00 stderr F I1013 00:23:14.025496 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:14.025607004+00:00 stderr F I1013 00:23:14.025576 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:14.025694026+00:00 stderr F I1013 00:23:14.025593 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:14.026166429+00:00 stderr F I1013 00:23:14.026021 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:14.067656255+00:00 stderr F W1013 00:23:14.067603 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:14.069668951+00:00 stderr F I1013 00:23:14.069601 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.17918ms) 2025-10-13T00:23:14.455482978+00:00 stderr F I1013 00:23:14.455421 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:14.455692124+00:00 stderr F I1013 00:23:14.455664 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:14.455777946+00:00 stderr F I1013 00:23:14.455763 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:14.455874239+00:00 stderr F I1013 00:23:14.455801 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:14.456344672+00:00 stderr F I1013 00:23:14.456282 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:14.494412973+00:00 stderr F W1013 00:23:14.493003 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:14.495512943+00:00 stderr F 
I1013 00:23:14.495460 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.047666ms) 2025-10-13T00:23:15.026213726+00:00 stderr F I1013 00:23:15.026156 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:15.026507074+00:00 stderr F I1013 00:23:15.026490 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:15.026507074+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:15.026507074+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:15.026585197+00:00 stderr F I1013 00:23:15.026570 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (427.212µs) 2025-10-13T00:23:15.026629068+00:00 stderr F I1013 00:23:15.026616 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:15.026715900+00:00 stderr F I1013 00:23:15.026696 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:15.026790542+00:00 stderr F I1013 00:23:15.026780 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:15.026862704+00:00 stderr F I1013 00:23:15.026807 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 
2025-10-13T00:23:15.027146602+00:00 stderr F I1013 00:23:15.027123 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:15.061979902+00:00 stderr F W1013 00:23:15.061933 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:15.063515095+00:00 stderr F I1013 00:23:15.063458 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.838546ms) 2025-10-13T00:23:16.027536958+00:00 stderr F I1013 00:23:16.027465 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:16.028006682+00:00 stderr F I1013 00:23:16.027975 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:16.028006682+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:16.028006682+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:16.028183486+00:00 stderr F I1013 00:23:16.028147 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (695.769µs) 2025-10-13T00:23:16.028371992+00:00 stderr F I1013 00:23:16.028298 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:16.028544677+00:00 stderr F I1013 00:23:16.028507 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:16.028721551+00:00 stderr F I1013 00:23:16.028632 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:16.028849255+00:00 stderr F I1013 00:23:16.028758 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:16.029269227+00:00 stderr F I1013 00:23:16.029223 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:16.066623277+00:00 stderr F W1013 00:23:16.066529 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:16.072851101+00:00 stderr F I1013 00:23:16.072559 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.254953ms) 2025-10-13T00:23:17.028254134+00:00 stderr F I1013 00:23:17.028148 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:17.028622454+00:00 stderr F I1013 00:23:17.028573 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:17.028622454+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:17.028622454+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:17.028684406+00:00 stderr F I1013 00:23:17.028646 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (514.955µs) 2025-10-13T00:23:17.028684406+00:00 stderr F I1013 00:23:17.028665 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:17.028774188+00:00 stderr F I1013 00:23:17.028720 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:17.028801529+00:00 stderr F I1013 00:23:17.028789 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:17.028895452+00:00 stderr F I1013 00:23:17.028799 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:17.029205270+00:00 stderr F I1013 00:23:17.029141 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:17.074506772+00:00 stderr F W1013 00:23:17.074444 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:17.079451830+00:00 stderr F I1013 00:23:17.079397 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.722243ms) 2025-10-13T00:23:18.029269627+00:00 stderr F I1013 00:23:18.029221 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:18.029628457+00:00 stderr F I1013 00:23:18.029605 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:18.029628457+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:18.029628457+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:18.029733899+00:00 stderr F I1013 00:23:18.029714 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (503.514µs) 2025-10-13T00:23:18.029775621+00:00 stderr F I1013 00:23:18.029761 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:18.029863023+00:00 stderr F I1013 00:23:18.029840 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:18.029938635+00:00 stderr F I1013 00:23:18.029924 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:18.030029118+00:00 stderr F I1013 00:23:18.029962 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:18.030357157+00:00 stderr F I1013 00:23:18.030303 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:18.059202120+00:00 stderr F W1013 00:23:18.059124 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:18.062974735+00:00 stderr F I1013 00:23:18.062935 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.167744ms) 2025-10-13T00:23:19.030353752+00:00 stderr F I1013 00:23:19.030289 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:19.030598939+00:00 stderr F I1013 00:23:19.030583 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:19.030598939+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:19.030598939+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:19.030663801+00:00 stderr F I1013 00:23:19.030650 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (369.7µs) 2025-10-13T00:23:19.030692322+00:00 stderr F I1013 00:23:19.030682 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:19.030759363+00:00 stderr F I1013 00:23:19.030741 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:19.030815565+00:00 stderr F I1013 00:23:19.030805 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:19.030881737+00:00 stderr F I1013 00:23:19.030832 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), 
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:19.031112923+00:00 stderr F I1013 00:23:19.031089 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:19.059622807+00:00 stderr F W1013 00:23:19.059593 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:19.061008876+00:00 stderr F I1013 00:23:19.060989 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.304334ms) 2025-10-13T00:23:20.031534260+00:00 stderr F I1013 00:23:20.031490 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:20.031838549+00:00 stderr F I1013 00:23:20.031821 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:20.031838549+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:20.031838549+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:20.031928001+00:00 stderr F I1013 00:23:20.031912 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (433.552µs) 2025-10-13T00:23:20.031959252+00:00 stderr F I1013 00:23:20.031949 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:20.032036264+00:00 stderr F I1013 00:23:20.032017 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:20.032104236+00:00 stderr F I1013 00:23:20.032094 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:20.032217480+00:00 stderr F I1013 00:23:20.032143 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:20.032517648+00:00 stderr F I1013 00:23:20.032490 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:20.078475098+00:00 stderr F W1013 00:23:20.078406 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:20.081038049+00:00 stderr F I1013 00:23:20.081007 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.054626ms) 2025-10-13T00:23:21.032498522+00:00 stderr F I1013 00:23:21.031974 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:21.032984905+00:00 stderr F I1013 00:23:21.032951 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:21.032984905+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:21.032984905+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:21.033149750+00:00 stderr F I1013 00:23:21.033118 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.155412ms) 2025-10-13T00:23:21.033216772+00:00 stderr F I1013 00:23:21.033193 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:21.033386607+00:00 stderr F I1013 00:23:21.033316 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:21.033506130+00:00 stderr F I1013 00:23:21.033484 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:21.033642954+00:00 stderr F I1013 00:23:21.033543 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", 
"openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:21.034165438+00:00 stderr F I1013 00:23:21.034108 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:21.061913331+00:00 stderr F W1013 00:23:21.061850 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:21.065037378+00:00 stderr F I1013 00:23:21.064998 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.801146ms) 2025-10-13T00:23:21.610541473+00:00 stderr F I1013 00:23:21.609914 1 sync_worker.go:582] Wait finished 2025-10-13T00:23:21.610836942+00:00 stderr F I1013 00:23:21.610672 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:21.611076498+00:00 stderr F I1013 00:23:21.611041 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 1 2025-10-13T00:23:21.615552223+00:00 stderr F I1013 00:23:21.615515 1 task_graph.go:481] Running 0 on worker 1 2025-10-13T00:23:21.615653196+00:00 stderr F I1013 00:23:21.615625 1 task_graph.go:481] Running 2 on worker 1 2025-10-13T00:23:21.615741828+00:00 stderr F I1013 00:23:21.615680 1 task_graph.go:481] Running 1 on worker 0 2025-10-13T00:23:21.615771719+00:00 stderr F I1013 
00:23:21.615756 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.615833111+00:00 stderr F I1013 00:23:21.615770 1 sync_worker.go:999] Running sync for role "openshift-apiserver-operator/prometheus-k8s" (936 of 955) 2025-10-13T00:23:21.622553688+00:00 stderr F I1013 00:23:21.622497 1 sync_worker.go:1014] Done syncing for role "openshift-apiserver-operator/prometheus-k8s" (936 of 955) 2025-10-13T00:23:21.622646551+00:00 stderr F I1013 00:23:21.622623 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.622713082+00:00 stderr F I1013 00:23:21.622685 1 sync_worker.go:999] Running sync for rolebinding "openshift-apiserver-operator/prometheus-k8s" (937 of 955) 2025-10-13T00:23:21.624252005+00:00 stderr F I1013 00:23:21.624196 1 sync_worker.go:989] Precreated resource clusteroperator "console" (635 of 955) 2025-10-13T00:23:21.624394249+00:00 stderr F I1013 00:23:21.624364 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.624494802+00:00 stderr F I1013 00:23:21.624455 1 sync_worker.go:999] Running sync for customresourcedefinition "consoles.operator.openshift.io" (580 of 955) 2025-10-13T00:23:21.628321699+00:00 stderr F I1013 00:23:21.628271 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-apiserver-operator/prometheus-k8s" (937 of 955) 2025-10-13T00:23:21.628454602+00:00 stderr F I1013 00:23:21.628427 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.628531905+00:00 stderr F I1013 00:23:21.628500 1 sync_worker.go:999] Running sync for servicemonitor "openshift-apiserver-operator/openshift-apiserver-operator" (938 of 955) 2025-10-13T00:23:21.633121342+00:00 stderr F I1013 00:23:21.633072 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-apiserver-operator/openshift-apiserver-operator" (938 of 955) 2025-10-13T00:23:21.633202285+00:00 stderr F I1013 00:23:21.633181 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.633263486+00:00 stderr F I1013 00:23:21.633238 1 sync_worker.go:999] Running sync for role "openshift-apiserver/prometheus-k8s" (939 of 955) 2025-10-13T00:23:21.636126866+00:00 stderr F I1013 00:23:21.636006 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoles.operator.openshift.io" (580 of 955) 2025-10-13T00:23:21.636466856+00:00 stderr F I1013 00:23:21.636440 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.636592869+00:00 stderr F I1013 00:23:21.636553 1 sync_worker.go:999] Running sync for customresourcedefinition "consoleclidownloads.console.openshift.io" (581 of 955) 2025-10-13T00:23:21.637934837+00:00 stderr F I1013 00:23:21.636391 1 sync_worker.go:1014] Done syncing for role "openshift-apiserver/prometheus-k8s" (939 of 955) 2025-10-13T00:23:21.638051380+00:00 stderr F I1013 00:23:21.638028 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.638119012+00:00 stderr F I1013 00:23:21.638090 1 sync_worker.go:999] Running sync for rolebinding "openshift-apiserver/prometheus-k8s" (940 of 955) 2025-10-13T00:23:21.640812237+00:00 stderr F I1013 00:23:21.640767 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoleclidownloads.console.openshift.io" (581 of 955) 2025-10-13T00:23:21.640889419+00:00 stderr F I1013 00:23:21.640869 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.640950491+00:00 stderr F 
I1013 00:23:21.640924 1 sync_worker.go:999] Running sync for customresourcedefinition "consoleexternalloglinks.console.openshift.io" (582 of 955) 2025-10-13T00:23:21.642051741+00:00 stderr F I1013 00:23:21.641983 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-apiserver/prometheus-k8s" (940 of 955) 2025-10-13T00:23:21.642051741+00:00 stderr F I1013 00:23:21.642040 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.642085872+00:00 stderr F I1013 00:23:21.642052 1 sync_worker.go:999] Running sync for servicemonitor "openshift-apiserver/openshift-apiserver" (941 of 955) 2025-10-13T00:23:21.645808936+00:00 stderr F I1013 00:23:21.645727 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoleexternalloglinks.console.openshift.io" (582 of 955) 2025-10-13T00:23:21.645808936+00:00 stderr F I1013 00:23:21.645772 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.645846337+00:00 stderr F I1013 00:23:21.645808 1 sync_worker.go:999] Running sync for customresourcedefinition "consolelinks.console.openshift.io" (583 of 955) 2025-10-13T00:23:21.646626099+00:00 stderr F I1013 00:23:21.646584 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-apiserver/openshift-apiserver" (941 of 955) 2025-10-13T00:23:21.646694251+00:00 stderr F I1013 00:23:21.646674 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.646752902+00:00 stderr F I1013 00:23:21.646728 1 sync_worker.go:999] Running sync for service "openshift-apiserver/check-endpoints" (942 of 955) 2025-10-13T00:23:21.649860699+00:00 stderr F I1013 00:23:21.649821 1 sync_worker.go:1014] Done syncing for service "openshift-apiserver/check-endpoints" (942 of 955) 2025-10-13T00:23:21.649936391+00:00 stderr F I1013 00:23:21.649916 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.650010783+00:00 stderr F I1013 00:23:21.649984 1 sync_worker.go:999] Running sync for servicemonitor "openshift-apiserver/openshift-apiserver-operator-check-endpoints" (943 of 955) 2025-10-13T00:23:21.650681002+00:00 stderr F I1013 00:23:21.650598 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consolelinks.console.openshift.io" (583 of 955) 2025-10-13T00:23:21.650681002+00:00 stderr F I1013 00:23:21.650644 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.650681002+00:00 stderr F I1013 00:23:21.650656 1 sync_worker.go:999] Running sync for customresourcedefinition "consolenotifications.console.openshift.io" (584 of 955) 2025-10-13T00:23:21.653835549+00:00 stderr F I1013 00:23:21.653792 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-apiserver/openshift-apiserver-operator-check-endpoints" (943 of 955) 2025-10-13T00:23:21.653922632+00:00 stderr F I1013 00:23:21.653901 1 task_graph.go:481] Running 3 on worker 0 2025-10-13T00:23:21.654012094+00:00 stderr F I1013 00:23:21.653992 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.654073146+00:00 stderr F I1013 00:23:21.654047 1 sync_worker.go:999] Running sync for operatorhub "cluster" (21 of 955) 2025-10-13T00:23:21.655050623+00:00 stderr F I1013 00:23:21.655008 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consolenotifications.console.openshift.io" (584 of 955) 2025-10-13T00:23:21.655137486+00:00 stderr F I1013 00:23:21.655117 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-10-13T00:23:21.655197907+00:00 stderr F I1013 00:23:21.655172 1 sync_worker.go:999] Running sync for customresourcedefinition "consolequickstarts.console.openshift.io" (585 of 955) 2025-10-13T00:23:21.657875852+00:00 stderr F I1013 00:23:21.657811 1 sync_worker.go:1014] Done syncing for operatorhub "cluster" (21 of 955) 2025-10-13T00:23:21.657875852+00:00 stderr F I1013 00:23:21.657855 1 task_graph.go:481] Running 4 on worker 0 2025-10-13T00:23:21.657912053+00:00 stderr F I1013 00:23:21.657875 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.657912053+00:00 stderr F I1013 00:23:21.657890 1 sync_worker.go:999] Running sync for imagestream "openshift/driver-toolkit" (657 of 955) 2025-10-13T00:23:21.660243508+00:00 stderr F I1013 00:23:21.660201 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consolequickstarts.console.openshift.io" (585 of 955) 2025-10-13T00:23:21.660362431+00:00 stderr F I1013 00:23:21.660308 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.660472314+00:00 stderr F I1013 00:23:21.660436 1 sync_worker.go:999] Running sync for customresourcedefinition "consolesamples.console.openshift.io" (586 of 955) 2025-10-13T00:23:21.665793353+00:00 stderr F I1013 00:23:21.665737 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consolesamples.console.openshift.io" (586 of 955) 2025-10-13T00:23:21.665884625+00:00 stderr F I1013 00:23:21.665858 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.665959797+00:00 stderr F I1013 00:23:21.665929 1 sync_worker.go:999] Running sync for customresourcedefinition "consoleyamlsamples.console.openshift.io" (587 of 955) 2025-10-13T00:23:21.667016207+00:00 stderr F I1013 00:23:21.666925 1 sync_worker.go:1014] Done syncing for imagestream "openshift/driver-toolkit" (657 of 955) 2025-10-13T00:23:21.667016207+00:00 stderr F I1013 00:23:21.666962 1 task_graph.go:481] Running 5 on worker 0 2025-10-13T00:23:21.667016207+00:00 stderr F I1013 00:23:21.666979 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.667016207+00:00 stderr F I1013 00:23:21.666990 1 sync_worker.go:999] Running sync for build "cluster" (68 of 955) 2025-10-13T00:23:21.670415071+00:00 stderr F I1013 00:23:21.670364 1 sync_worker.go:1014] Done syncing for build "cluster" (68 of 955) 2025-10-13T00:23:21.670415071+00:00 stderr F I1013 00:23:21.670396 1 task_graph.go:481] Running 6 on worker 0 2025-10-13T00:23:21.670415071+00:00 stderr F I1013 00:23:21.670409 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.670453002+00:00 stderr F I1013 00:23:21.670420 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-network-config-controller" (466 of 955) 2025-10-13T00:23:21.670789542+00:00 stderr F I1013 00:23:21.670747 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoleyamlsamples.console.openshift.io" (587 of 955) 2025-10-13T00:23:21.670852053+00:00 stderr F I1013 00:23:21.670832 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.670912465+00:00 stderr F I1013 00:23:21.670886 1 sync_worker.go:999] Running sync for customresourcedefinition "helmchartrepositories.helm.openshift.io" (588 of 955) 2025-10-13T00:23:21.674619238+00:00 stderr F I1013 00:23:21.674577 1 sync_worker.go:1014] Done syncing for namespace "openshift-cloud-network-config-controller" (466 of 955) 
2025-10-13T00:23:21.674696451+00:00 stderr F I1013 00:23:21.674676 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.674756052+00:00 stderr F I1013 00:23:21.674731 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-gcp" (467 of 955) 2025-10-13T00:23:21.674819914+00:00 stderr F I1013 00:23:21.674795 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-gcp" (467 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:21.674869165+00:00 stderr F I1013 00:23:21.674850 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.674925327+00:00 stderr F I1013 00:23:21.674901 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-aws" (468 of 955) 2025-10-13T00:23:21.674984789+00:00 stderr F I1013 00:23:21.674962 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-aws" (468 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:21.675100572+00:00 stderr F I1013 00:23:21.675080 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.675159763+00:00 stderr F I1013 00:23:21.675135 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure" (469 of 955) 2025-10-13T00:23:21.675232195+00:00 stderr F I1013 00:23:21.675210 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure" (469 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:21.675281067+00:00 stderr F I1013 00:23:21.675262 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.675366549+00:00 stderr F I1013 00:23:21.675313 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-openstack" (470 of 955) 2025-10-13T00:23:21.675436961+00:00 stderr F I1013 00:23:21.675413 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-openstack" (470 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:21.675486522+00:00 stderr F I1013 00:23:21.675468 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.675544494+00:00 stderr F I1013 00:23:21.675518 1 sync_worker.go:999] Running sync for service "openshift-network-operator/metrics" (471 of 955) 2025-10-13T00:23:21.676184682+00:00 stderr F I1013 00:23:21.676099 1 sync_worker.go:1014] Done syncing for customresourcedefinition "helmchartrepositories.helm.openshift.io" (588 of 955) 2025-10-13T00:23:21.676184682+00:00 stderr F I1013 00:23:21.676145 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.676184682+00:00 stderr F I1013 00:23:21.676158 1 sync_worker.go:999] Running sync for customresourcedefinition "projecthelmchartrepositories.helm.openshift.io" (589 of 955) 2025-10-13T00:23:21.679657879+00:00 stderr F I1013 00:23:21.679617 1 sync_worker.go:1014] Done syncing for service "openshift-network-operator/metrics" (471 of 955) 2025-10-13T00:23:21.679734981+00:00 stderr F I1013 00:23:21.679714 1 sync_worker.go:703] 
Dropping status report from earlier in sync loop 2025-10-13T00:23:21.679792892+00:00 stderr F I1013 00:23:21.679769 1 sync_worker.go:999] Running sync for role "openshift-network-operator/prometheus-k8s" (472 of 955) 2025-10-13T00:23:21.682452687+00:00 stderr F I1013 00:23:21.682408 1 sync_worker.go:1014] Done syncing for customresourcedefinition "projecthelmchartrepositories.helm.openshift.io" (589 of 955) 2025-10-13T00:23:21.682528769+00:00 stderr F I1013 00:23:21.682508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.682612851+00:00 stderr F I1013 00:23:21.682563 1 sync_worker.go:999] Running sync for helmchartrepository "openshift-helm-charts" (590 of 955) 2025-10-13T00:23:21.683322411+00:00 stderr F I1013 00:23:21.683251 1 sync_worker.go:1014] Done syncing for role "openshift-network-operator/prometheus-k8s" (472 of 955) 2025-10-13T00:23:21.683322411+00:00 stderr F I1013 00:23:21.683295 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.683322411+00:00 stderr F I1013 00:23:21.683307 1 sync_worker.go:999] Running sync for rolebinding "openshift-network-operator/prometheus-k8s" (473 of 955) 2025-10-13T00:23:21.686981563+00:00 stderr F I1013 00:23:21.686922 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-network-operator/prometheus-k8s" (473 of 955) 2025-10-13T00:23:21.687080166+00:00 stderr F I1013 00:23:21.687053 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.687165478+00:00 stderr F I1013 00:23:21.687128 1 sync_worker.go:999] Running sync for servicemonitor "openshift-network-operator/network-operator" (474 of 955) 2025-10-13T00:23:21.690503781+00:00 stderr F I1013 00:23:21.690421 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-network-operator/network-operator" (474 of 955) 2025-10-13T00:23:21.690503781+00:00 stderr F I1013 00:23:21.690452 1 task_graph.go:481] Running 7 on worker 0 2025-10-13T00:23:21.690503781+00:00 stderr F I1013 00:23:21.690462 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.690503781+00:00 stderr F I1013 00:23:21.690469 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-credential-operator" (14 of 955) 2025-10-13T00:23:21.690503781+00:00 stderr F I1013 00:23:21.690485 1 sync_worker.go:1002] Skipping namespace "openshift-cloud-credential-operator" (14 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:21.690503781+00:00 stderr F I1013 00:23:21.690493 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.690570183+00:00 stderr F I1013 00:23:21.690499 1 sync_worker.go:999] Running sync for customresourcedefinition "credentialsrequests.cloudcredential.openshift.io" (15 of 955) 2025-10-13T00:23:21.690570183+00:00 stderr F I1013 00:23:21.690514 1 sync_worker.go:1002] Skipping customresourcedefinition "credentialsrequests.cloudcredential.openshift.io" (15 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:21.690570183+00:00 stderr F I1013 00:23:21.690522 1 task_graph.go:481] Running 8 on worker 0 2025-10-13T00:23:21.697019382+00:00 stderr F I1013 00:23:21.696958 1 sync_worker.go:989] Precreated resource clusteroperator "config-operator" (66 of 955) 2025-10-13T00:23:21.697130765+00:00 stderr F I1013 00:23:21.697102 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.697215478+00:00 stderr F I1013 00:23:21.697178 1 sync_worker.go:999] Running sync for 
customresourcedefinition "apiservers.config.openshift.io" (36 of 955) 2025-10-13T00:23:21.702808974+00:00 stderr F I1013 00:23:21.702707 1 sync_worker.go:1014] Done syncing for helmchartrepository "openshift-helm-charts" (590 of 955) 2025-10-13T00:23:21.702808974+00:00 stderr F I1013 00:23:21.702775 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.702808974+00:00 stderr F I1013 00:23:21.702793 1 sync_worker.go:999] Running sync for oauthclient "console" (591 of 955) 2025-10-13T00:23:21.707943227+00:00 stderr F I1013 00:23:21.707860 1 sync_worker.go:1014] Done syncing for customresourcedefinition "apiservers.config.openshift.io" (36 of 955) 2025-10-13T00:23:21.708063130+00:00 stderr F I1013 00:23:21.708038 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.708179673+00:00 stderr F I1013 00:23:21.708144 1 sync_worker.go:999] Running sync for customresourcedefinition "authentications.config.openshift.io" (37 of 955) 2025-10-13T00:23:21.711152436+00:00 stderr F I1013 00:23:21.711058 1 sync_worker.go:1014] Done syncing for oauthclient "console" (591 of 955) 2025-10-13T00:23:21.711152436+00:00 stderr F I1013 00:23:21.711114 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.711152436+00:00 stderr F I1013 00:23:21.711125 1 sync_worker.go:999] Running sync for console "cluster" (592 of 955) 2025-10-13T00:23:21.713704607+00:00 stderr F I1013 00:23:21.713608 1 sync_worker.go:1014] Done syncing for customresourcedefinition "authentications.config.openshift.io" (37 of 955) 2025-10-13T00:23:21.713704607+00:00 stderr F I1013 00:23:21.713647 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.713704607+00:00 stderr F I1013 00:23:21.713658 1 sync_worker.go:999] Running sync for customresourcedefinition "configs.operator.openshift.io" (38 of 955) 2025-10-13T00:23:21.718222483+00:00 stderr F I1013 00:23:21.718162 1 sync_worker.go:1014] Done syncing for customresourcedefinition "configs.operator.openshift.io" (38 of 955) 2025-10-13T00:23:21.718374037+00:00 stderr F I1013 00:23:21.718300 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.718483140+00:00 stderr F I1013 00:23:21.718446 1 sync_worker.go:999] Running sync for customresourcedefinition "consoles.config.openshift.io" (39 of 955) 2025-10-13T00:23:21.723810699+00:00 stderr F I1013 00:23:21.723743 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoles.config.openshift.io" (39 of 955) 2025-10-13T00:23:21.723810699+00:00 stderr F I1013 00:23:21.723788 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.723810699+00:00 stderr F I1013 00:23:21.723799 1 sync_worker.go:999] Running sync for customresourcedefinition "dnses.config.openshift.io" (40 of 955) 2025-10-13T00:23:21.731257966+00:00 stderr F I1013 00:23:21.731205 1 sync_worker.go:1014] Done syncing for customresourcedefinition "dnses.config.openshift.io" (40 of 955) 2025-10-13T00:23:21.731552004+00:00 stderr F I1013 00:23:21.731365 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.731650707+00:00 stderr F I1013 00:23:21.731612 1 sync_worker.go:999] Running sync for customresourcedefinition "featuregates.config.openshift.io" (41 of 955) 2025-10-13T00:23:21.733085077+00:00 stderr F I1013 00:23:21.733010 1 sync_worker.go:1014] Done syncing for console "cluster" (592 of 955) 2025-10-13T00:23:21.733085077+00:00 
stderr F I1013 00:23:21.733041 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.733085077+00:00 stderr F I1013 00:23:21.733049 1 sync_worker.go:999] Running sync for namespace "openshift-console" (593 of 955) 2025-10-13T00:23:21.736677587+00:00 stderr F I1013 00:23:21.736582 1 sync_worker.go:1014] Done syncing for namespace "openshift-console" (593 of 955) 2025-10-13T00:23:21.736677587+00:00 stderr F I1013 00:23:21.736649 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.736735979+00:00 stderr F I1013 00:23:21.736667 1 sync_worker.go:999] Running sync for namespace "openshift-console-operator" (594 of 955) 2025-10-13T00:23:21.738190209+00:00 stderr F I1013 00:23:21.738141 1 sync_worker.go:1014] Done syncing for customresourcedefinition "featuregates.config.openshift.io" (41 of 955) 2025-10-13T00:23:21.738282292+00:00 stderr F I1013 00:23:21.738257 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.738396195+00:00 stderr F I1013 00:23:21.738358 1 sync_worker.go:999] Running sync for customresourcedefinition "imagecontentpolicies.config.openshift.io" (42 of 955) 2025-10-13T00:23:21.739752013+00:00 stderr F I1013 00:23:21.739694 1 sync_worker.go:1014] Done syncing for namespace "openshift-console-operator" (594 of 955) 2025-10-13T00:23:21.739752013+00:00 stderr F I1013 00:23:21.739721 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.739752013+00:00 stderr F I1013 00:23:21.739729 1 sync_worker.go:999] Running sync for namespace "openshift-console-user-settings" (595 of 955) 2025-10-13T00:23:21.741997655+00:00 stderr F I1013 00:23:21.741934 1 sync_worker.go:1014] Done syncing for namespace "openshift-console-user-settings" (595 of 955) 2025-10-13T00:23:21.741997655+00:00 stderr F I1013 00:23:21.741957 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.741997655+00:00 stderr F I1013 00:23:21.741964 1 sync_worker.go:999] Running sync for clusterrole "console-extensions-reader" (596 of 955) 2025-10-13T00:23:21.742265163+00:00 stderr F I1013 00:23:21.742185 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagecontentpolicies.config.openshift.io" (42 of 955) 2025-10-13T00:23:21.742265163+00:00 stderr F I1013 00:23:21.742240 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.742290273+00:00 stderr F I1013 00:23:21.742257 1 sync_worker.go:999] Running sync for customresourcedefinition "imagecontentsourcepolicies.operator.openshift.io" (43 of 955) 2025-10-13T00:23:21.743885088+00:00 stderr F I1013 00:23:21.743820 1 sync_worker.go:1014] Done syncing for clusterrole "console-extensions-reader" (596 of 955) 2025-10-13T00:23:21.743885088+00:00 stderr F I1013 00:23:21.743844 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.743885088+00:00 stderr F I1013 00:23:21.743854 1 sync_worker.go:999] Running sync for clusterrole "console-operator" (597 of 955) 2025-10-13T00:23:21.769351947+00:00 stderr F I1013 00:23:21.769272 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagecontentsourcepolicies.operator.openshift.io" (43 of 955) 2025-10-13T00:23:21.769351947+00:00 stderr F I1013 00:23:21.769300 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.769351947+00:00 stderr F I1013 00:23:21.769308 1 sync_worker.go:999] Running sync for customresourcedefinition 
"imagedigestmirrorsets.config.openshift.io" (44 of 955) 2025-10-13T00:23:21.819997668+00:00 stderr F I1013 00:23:21.819931 1 sync_worker.go:1014] Done syncing for clusterrole "console-operator" (597 of 955) 2025-10-13T00:23:21.819997668+00:00 stderr F I1013 00:23:21.819958 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.819997668+00:00 stderr F I1013 00:23:21.819966 1 sync_worker.go:999] Running sync for clusterrole "console" (598 of 955) 2025-10-13T00:23:21.871733019+00:00 stderr F I1013 00:23:21.871661 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagedigestmirrorsets.config.openshift.io" (44 of 955) 2025-10-13T00:23:21.871844752+00:00 stderr F I1013 00:23:21.871823 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.871906754+00:00 stderr F I1013 00:23:21.871880 1 sync_worker.go:999] Running sync for customresourcedefinition "images.config.openshift.io" (45 of 955) 2025-10-13T00:23:21.920371854+00:00 stderr F I1013 00:23:21.920266 1 sync_worker.go:1014] Done syncing for clusterrole "console" (598 of 955) 2025-10-13T00:23:21.920468587+00:00 stderr F I1013 00:23:21.920447 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.920552569+00:00 stderr F I1013 00:23:21.920504 1 sync_worker.go:999] Running sync for clusterrole "helm-chartrepos-viewer" (599 of 955) 2025-10-13T00:23:21.970597093+00:00 stderr F I1013 00:23:21.970542 1 sync_worker.go:1014] Done syncing for customresourcedefinition "images.config.openshift.io" (45 of 955) 2025-10-13T00:23:21.970679555+00:00 stderr F I1013 00:23:21.970665 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:21.970720056+00:00 stderr F I1013 00:23:21.970702 1 sync_worker.go:999] Running sync for customresourcedefinition "imagetagmirrorsets.config.openshift.io" (46 of 955) 2025-10-13T00:23:22.020434301+00:00 stderr F I1013 00:23:22.020372 1 sync_worker.go:1014] Done syncing for clusterrole "helm-chartrepos-viewer" (599 of 955) 2025-10-13T00:23:22.020523594+00:00 stderr F I1013 00:23:22.020511 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.020562365+00:00 stderr F I1013 00:23:22.020544 1 sync_worker.go:999] Running sync for clusterrole "project-helm-chartrepository-editor" (600 of 955) 2025-10-13T00:23:22.034695258+00:00 stderr F I1013 00:23:22.034425 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:22.034877834+00:00 stderr F I1013 00:23:22.034819 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:22.034877834+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:22.034877834+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:22.034949325+00:00 stderr F I1013 00:23:22.034914 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (527.134µs) 2025-10-13T00:23:22.034949325+00:00 stderr F I1013 00:23:22.034940 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:22.035055228+00:00 stderr F I1013 00:23:22.035005 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:22.035107320+00:00 stderr F I1013 00:23:22.035082 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:22.035232473+00:00 stderr F I1013 00:23:22.035100 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:22.035648725+00:00 stderr F I1013 00:23:22.035585 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:22.057369990+00:00 stderr F W1013 00:23:22.057294 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:22.060082956+00:00 stderr F I1013 00:23:22.060027 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.084119ms) 2025-10-13T00:23:22.073278633+00:00 stderr F I1013 00:23:22.073160 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagetagmirrorsets.config.openshift.io" (46 of 955) 2025-10-13T00:23:22.073306614+00:00 stderr F I1013 00:23:22.073287 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.073323824+00:00 stderr F I1013 00:23:22.073300 1 sync_worker.go:999] Running sync for customresourcedefinition "infrastructures.config.openshift.io" (47 of 955) 2025-10-13T00:23:22.120867989+00:00 stderr F I1013 00:23:22.120817 1 sync_worker.go:1014] Done syncing for clusterrole "project-helm-chartrepository-editor" (600 of 955) 2025-10-13T00:23:22.120937911+00:00 stderr F I1013 00:23:22.120925 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.120970812+00:00 stderr F I1013 00:23:22.120958 1 sync_worker.go:999] Running sync for role "openshift-console/console-operator" (601 of 955) 2025-10-13T00:23:22.178404502+00:00 stderr F I1013 00:23:22.178011 1 sync_worker.go:1014] Done syncing for customresourcedefinition "infrastructures.config.openshift.io" (47 of 955) 2025-10-13T00:23:22.178404502+00:00 stderr F I1013 00:23:22.178063 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.178404502+00:00 stderr F I1013 00:23:22.178077 1 sync_worker.go:999] Running sync for customresourcedefinition "ingresses.config.openshift.io" (48 of 955) 2025-10-13T00:23:22.220732001+00:00 stderr F I1013 00:23:22.220685 1 sync_worker.go:1014] Done syncing for role "openshift-console/console-operator" (601 of 955) 2025-10-13T00:23:22.220848314+00:00 stderr F I1013 00:23:22.220834 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.220898225+00:00 stderr F I1013 00:23:22.220871 1 sync_worker.go:999] Running sync for role "openshift-config-managed/console-operator" (602 of 955) 2025-10-13T00:23:22.272414310+00:00 stderr F I1013 00:23:22.272359 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ingresses.config.openshift.io" (48 of 955) 2025-10-13T00:23:22.272492352+00:00 stderr F I1013 00:23:22.272480 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.272523873+00:00 stderr F I1013 00:23:22.272509 1 sync_worker.go:999] Running sync for customresourcedefinition "networks.config.openshift.io" (49 of 955) 2025-10-13T00:23:22.319847291+00:00 stderr F I1013 00:23:22.319796 1 sync_worker.go:1014] Done syncing for role "openshift-config-managed/console-operator" (602 of 955) 2025-10-13T00:23:22.319928984+00:00 stderr F I1013 00:23:22.319917 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.319958515+00:00 stderr F I1013 00:23:22.319946 1 sync_worker.go:999] Running sync for role "openshift-config-managed/console-public" (603 of 955) 2025-10-13T00:23:22.370980186+00:00 stderr F I1013 00:23:22.370931 1 sync_worker.go:1014] Done syncing for customresourcedefinition "networks.config.openshift.io" (49 of 955) 2025-10-13T00:23:22.371047398+00:00 stderr F I1013 00:23:22.371037 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.371076548+00:00 stderr F I1013 00:23:22.371064 1 sync_worker.go:999] Running sync for customresourcedefinition 
"nodes.config.openshift.io" (50 of 955) 2025-10-13T00:23:22.419561879+00:00 stderr F I1013 00:23:22.419514 1 sync_worker.go:1014] Done syncing for role "openshift-config-managed/console-public" (603 of 955) 2025-10-13T00:23:22.419642091+00:00 stderr F I1013 00:23:22.419632 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.419669702+00:00 stderr F I1013 00:23:22.419658 1 sync_worker.go:999] Running sync for role "openshift-config-managed/console-configmap-reader" (604 of 955) 2025-10-13T00:23:22.470515808+00:00 stderr F I1013 00:23:22.470471 1 sync_worker.go:1014] Done syncing for customresourcedefinition "nodes.config.openshift.io" (50 of 955) 2025-10-13T00:23:22.470584530+00:00 stderr F I1013 00:23:22.470574 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.470613131+00:00 stderr F I1013 00:23:22.470600 1 sync_worker.go:999] Running sync for customresourcedefinition "oauths.config.openshift.io" (51 of 955) 2025-10-13T00:23:22.521103918+00:00 stderr F I1013 00:23:22.521029 1 sync_worker.go:1014] Done syncing for role "openshift-config-managed/console-configmap-reader" (604 of 955) 2025-10-13T00:23:22.521247772+00:00 stderr F I1013 00:23:22.521218 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.521393306+00:00 stderr F I1013 00:23:22.521298 1 sync_worker.go:999] Running sync for role "openshift-config/console-operator" (605 of 955) 2025-10-13T00:23:22.572230052+00:00 stderr F I1013 00:23:22.572174 1 sync_worker.go:1014] Done syncing for customresourcedefinition "oauths.config.openshift.io" (51 of 955) 2025-10-13T00:23:22.572493979+00:00 stderr F I1013 00:23:22.572479 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.572539660+00:00 stderr F I1013 00:23:22.572521 1 sync_worker.go:999] Running sync for namespace "openshift-config-managed" (52 of 955) 2025-10-13T00:23:22.621237547+00:00 stderr F I1013 00:23:22.621179 1 sync_worker.go:1014] Done syncing for role "openshift-config/console-operator" (605 of 955) 2025-10-13T00:23:22.621320979+00:00 stderr F I1013 00:23:22.621303 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.621409162+00:00 stderr F I1013 00:23:22.621385 1 sync_worker.go:999] Running sync for role "openshift-console-user-settings/console-user-settings-admin" (606 of 955) 2025-10-13T00:23:22.670174780+00:00 stderr F I1013 00:23:22.670124 1 sync_worker.go:1014] Done syncing for namespace "openshift-config-managed" (52 of 955) 2025-10-13T00:23:22.670266643+00:00 stderr F I1013 00:23:22.670253 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.670305124+00:00 stderr F I1013 00:23:22.670289 1 sync_worker.go:999] Running sync for namespace "openshift-config" (53 of 955) 2025-10-13T00:23:22.720376508+00:00 stderr F I1013 00:23:22.720300 1 sync_worker.go:1014] Done syncing for role "openshift-console-user-settings/console-user-settings-admin" (606 of 955) 2025-10-13T00:23:22.720458801+00:00 stderr F I1013 00:23:22.720445 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.720498312+00:00 stderr F I1013 00:23:22.720481 1 sync_worker.go:999] Running sync for rolebinding "openshift-console-user-settings/console-user-settings-admin" (607 of 955) 2025-10-13T00:23:22.769566829+00:00 stderr F I1013 00:23:22.769475 1 sync_worker.go:1014] Done syncing for namespace "openshift-config" (53 of 955) 
2025-10-13T00:23:22.769632940+00:00 stderr F I1013 00:23:22.769622 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.769661211+00:00 stderr F I1013 00:23:22.769649 1 sync_worker.go:999] Running sync for config "cluster" (54 of 955) 2025-10-13T00:23:22.820338643+00:00 stderr F I1013 00:23:22.820265 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console-user-settings/console-user-settings-admin" (607 of 955) 2025-10-13T00:23:22.820408035+00:00 stderr F I1013 00:23:22.820394 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.820445646+00:00 stderr F I1013 00:23:22.820429 1 sync_worker.go:999] Running sync for role "openshift-monitoring/console-operator" (608 of 955) 2025-10-13T00:23:22.887887264+00:00 stderr F I1013 00:23:22.887830 1 sync_worker.go:1014] Done syncing for config "cluster" (54 of 955) 2025-10-13T00:23:22.887976287+00:00 stderr F I1013 00:23:22.887957 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.888030498+00:00 stderr F I1013 00:23:22.888007 1 sync_worker.go:999] Running sync for customresourcedefinition "projects.config.openshift.io" (55 of 955) 2025-10-13T00:23:22.920604996+00:00 stderr F I1013 00:23:22.920557 1 sync_worker.go:1014] Done syncing for role "openshift-monitoring/console-operator" (608 of 955) 2025-10-13T00:23:22.920681988+00:00 stderr F I1013 00:23:22.920671 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.920712189+00:00 stderr F I1013 00:23:22.920699 1 sync_worker.go:999] Running sync for role "openshift-console-operator/console-operator" (609 of 955) 2025-10-13T00:23:22.970156166+00:00 stderr F I1013 00:23:22.970106 1 sync_worker.go:1014] Done syncing for customresourcedefinition "projects.config.openshift.io" (55 of 955) 2025-10-13T00:23:22.970232898+00:00 stderr F I1013 00:23:22.970218 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:22.970285520+00:00 stderr F I1013 00:23:22.970262 1 sync_worker.go:999] Running sync for role "openshift-config-operator/prometheus-k8s" (56 of 955) 2025-10-13T00:23:23.020695754+00:00 stderr F I1013 00:23:23.020628 1 sync_worker.go:1014] Done syncing for role "openshift-console-operator/console-operator" (609 of 955) 2025-10-13T00:23:23.020795767+00:00 stderr F I1013 00:23:23.020780 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.020841268+00:00 stderr F I1013 00:23:23.020821 1 sync_worker.go:999] Running sync for clusterrolebinding "console-operator" (610 of 955) 2025-10-13T00:23:23.036007260+00:00 stderr F I1013 00:23:23.035959 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:23.036394981+00:00 stderr F I1013 00:23:23.036372 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:23.036394981+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:23.036394981+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:23.036491594+00:00 stderr F I1013 00:23:23.036472 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (522.154µs) 2025-10-13T00:23:23.036533305+00:00 stderr F I1013 00:23:23.036519 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:23.036622638+00:00 stderr F I1013 00:23:23.036600 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:23.036696350+00:00 stderr F I1013 00:23:23.036682 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:23.036780672+00:00 stderr F I1013 00:23:23.036719 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:23.037111311+00:00 stderr F I1013 00:23:23.037079 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:23.059196486+00:00 stderr F W1013 00:23:23.059149 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:23.062152979+00:00 stderr F I1013 00:23:23.062074 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.548241ms) 2025-10-13T00:23:23.069807882+00:00 stderr F I1013 00:23:23.069761 1 sync_worker.go:1014] Done syncing for role "openshift-config-operator/prometheus-k8s" (56 of 955) 2025-10-13T00:23:23.069887824+00:00 stderr F I1013 00:23:23.069868 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.069945756+00:00 stderr F I1013 00:23:23.069919 1 sync_worker.go:999] Running sync for customresourcedefinition "schedulers.config.openshift.io" (57 of 955) 2025-10-13T00:23:23.121049739+00:00 stderr F I1013 00:23:23.120995 1 sync_worker.go:1014] Done syncing for clusterrolebinding "console-operator" (610 of 955) 2025-10-13T00:23:23.121125871+00:00 stderr F I1013 00:23:23.121112 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.121165823+00:00 stderr F I1013 00:23:23.121148 1 sync_worker.go:999] Running sync for clusterrolebinding "console-extensions-reader" (611 of 955) 2025-10-13T00:23:23.169860559+00:00 stderr F I1013 00:23:23.169806 1 sync_worker.go:1014] Done syncing for customresourcedefinition "schedulers.config.openshift.io" (57 of 955) 2025-10-13T00:23:23.169938921+00:00 stderr F I1013 00:23:23.169925 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.169980422+00:00 stderr F I1013 00:23:23.169962 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:cluster-config-operator:cluster-reader" (58 of 955) 2025-10-13T00:23:23.220822489+00:00 stderr F I1013 00:23:23.220768 1 sync_worker.go:1014] Done syncing for clusterrolebinding "console-extensions-reader" (611 of 955) 2025-10-13T00:23:23.220894481+00:00 stderr F I1013 00:23:23.220882 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.220932862+00:00 stderr F I1013 00:23:23.220914 1 sync_worker.go:999] Running sync for clusterrolebinding "console-operator-auth-delegator" (612 of 955) 2025-10-13T00:23:23.270248815+00:00 stderr F I1013 00:23:23.270194 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:cluster-config-operator:cluster-reader" (58 of 955) 2025-10-13T00:23:23.270318497+00:00 stderr F I1013 00:23:23.270305 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.270392439+00:00 stderr F I1013 00:23:23.270372 1 sync_worker.go:999] Running sync for node "cluster" (59 of 955) 2025-10-13T00:23:23.319879038+00:00 stderr F I1013 00:23:23.319827 1 sync_worker.go:1014] Done syncing for clusterrolebinding "console-operator-auth-delegator" (612 of 955) 2025-10-13T00:23:23.319954050+00:00 stderr F I1013 00:23:23.319940 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.319994081+00:00 stderr F I1013 00:23:23.319977 1 sync_worker.go:999] Running sync for clusterrolebinding "console" (613 of 955) 2025-10-13T00:23:23.371102835+00:00 stderr F I1013 00:23:23.371056 1 sync_worker.go:1014] Done syncing for node "cluster" (59 of 955) 2025-10-13T00:23:23.371174647+00:00 stderr F I1013 00:23:23.371161 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.371214678+00:00 stderr F I1013 00:23:23.371197 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-operator/prometheus-k8s" (60 of 955) 2025-10-13T00:23:23.421119908+00:00 stderr F I1013 00:23:23.421070 
1 sync_worker.go:1014] Done syncing for clusterrolebinding "console" (613 of 955) 2025-10-13T00:23:23.421204090+00:00 stderr F I1013 00:23:23.421189 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.421244951+00:00 stderr F I1013 00:23:23.421228 1 sync_worker.go:999] Running sync for clusterrolebinding "helm-chartrepos-view" (614 of 955) 2025-10-13T00:23:23.469898557+00:00 stderr F I1013 00:23:23.469858 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-operator/prometheus-k8s" (60 of 955) 2025-10-13T00:23:23.469962908+00:00 stderr F I1013 00:23:23.469949 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.470001960+00:00 stderr F I1013 00:23:23.469985 1 sync_worker.go:999] Running sync for service "openshift-config-operator/metrics" (61 of 955) 2025-10-13T00:23:23.520546957+00:00 stderr F I1013 00:23:23.520488 1 sync_worker.go:1014] Done syncing for clusterrolebinding "helm-chartrepos-view" (614 of 955) 2025-10-13T00:23:23.520690191+00:00 stderr F I1013 00:23:23.520667 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.520755523+00:00 stderr F I1013 00:23:23.520729 1 sync_worker.go:999] Running sync for clusterrolebinding "console-auth-delegator" (615 of 955) 2025-10-13T00:23:23.571301391+00:00 stderr F I1013 00:23:23.571224 1 sync_worker.go:1014] Done syncing for service "openshift-config-operator/metrics" (61 of 955) 2025-10-13T00:23:23.571466756+00:00 stderr F I1013 00:23:23.571436 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.571551878+00:00 stderr F I1013 00:23:23.571517 1 sync_worker.go:999] Running sync for servicemonitor "openshift-config-operator/config-operator" (62 of 955) 2025-10-13T00:23:23.620751539+00:00 stderr F I1013 00:23:23.620688 1 sync_worker.go:1014] Done syncing for clusterrolebinding "console-auth-delegator" (615 of 955) 2025-10-13T00:23:23.620885282+00:00 stderr F I1013 00:23:23.620858 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.620963805+00:00 stderr F I1013 00:23:23.620930 1 sync_worker.go:999] Running sync for rolebinding "openshift-console/console-operator" (616 of 955) 2025-10-13T00:23:23.671088361+00:00 stderr F I1013 00:23:23.671031 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-config-operator/config-operator" (62 of 955) 2025-10-13T00:23:23.671172733+00:00 stderr F I1013 00:23:23.671158 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.671215444+00:00 stderr F I1013 00:23:23.671196 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:openshift-config-operator" (63 of 955) 2025-10-13T00:23:23.721038992+00:00 stderr F I1013 00:23:23.720990 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console/console-operator" (616 of 955) 2025-10-13T00:23:23.721314240+00:00 stderr F I1013 00:23:23.721299 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.721379212+00:00 stderr F I1013 00:23:23.721359 1 sync_worker.go:999] Running sync for rolebinding "openshift-console-operator/console-operator" (617 of 955) 2025-10-13T00:23:23.770093399+00:00 stderr F I1013 00:23:23.770042 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:openshift-config-operator" (63 of 955) 2025-10-13T00:23:23.770163651+00:00 stderr F I1013 00:23:23.770149 1 sync_worker.go:703] Dropping status 
report from earlier in sync loop 2025-10-13T00:23:23.770205472+00:00 stderr F I1013 00:23:23.770187 1 sync_worker.go:999] Running sync for serviceaccount "openshift-config-operator/openshift-config-operator" (64 of 955) 2025-10-13T00:23:23.820559334+00:00 stderr F I1013 00:23:23.820502 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console-operator/console-operator" (617 of 955) 2025-10-13T00:23:23.820663927+00:00 stderr F I1013 00:23:23.820645 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.820717789+00:00 stderr F I1013 00:23:23.820695 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/console-operator" (618 of 955) 2025-10-13T00:23:23.869577950+00:00 stderr F I1013 00:23:23.869522 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-config-operator/openshift-config-operator" (64 of 955) 2025-10-13T00:23:23.869665852+00:00 stderr F I1013 00:23:23.869646 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.869719764+00:00 stderr F I1013 00:23:23.869696 1 sync_worker.go:999] Running sync for deployment "openshift-config-operator/openshift-config-operator" (65 of 955) 2025-10-13T00:23:23.920258522+00:00 stderr F I1013 00:23:23.920207 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/console-operator" (618 of 955) 2025-10-13T00:23:23.920385165+00:00 stderr F I1013 00:23:23.920368 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.920431406+00:00 stderr F I1013 00:23:23.920414 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/console-public" (619 of 955) 2025-10-13T00:23:23.970241024+00:00 stderr F I1013 00:23:23.970189 1 sync_worker.go:1014] Done syncing for deployment "openshift-config-operator/openshift-config-operator" (65 of 955) 2025-10-13T00:23:23.970346947+00:00 stderr F I1013 00:23:23.970304 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.970395328+00:00 stderr F I1013 00:23:23.970376 1 sync_worker.go:999] Running sync for clusteroperator "config-operator" (66 of 955) 2025-10-13T00:23:23.970647115+00:00 stderr F I1013 00:23:23.970626 1 sync_worker.go:1014] Done syncing for clusteroperator "config-operator" (66 of 955) 2025-10-13T00:23:23.970752348+00:00 stderr F I1013 00:23:23.970724 1 task_graph.go:481] Running 9 on worker 0 2025-10-13T00:23:23.970856091+00:00 stderr F I1013 00:23:23.970792 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:23.970911793+00:00 stderr F I1013 00:23:23.970892 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-storage-operator" (302 of 955) 2025-10-13T00:23:24.020197105+00:00 stderr F I1013 00:23:24.020140 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/console-public" (619 of 955) 2025-10-13T00:23:24.020259327+00:00 stderr F I1013 00:23:24.020246 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.020297848+00:00 stderr F I1013 00:23:24.020281 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/console-operator" (620 of 955) 2025-10-13T00:23:24.037110097+00:00 stderr F I1013 00:23:24.037074 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:24.037480317+00:00 stderr F I1013 00:23:24.037460 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: 
EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:24.037480317+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:24.037480317+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:24.037631231+00:00 stderr F I1013 00:23:24.037558 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (494.624µs) 2025-10-13T00:23:24.037679142+00:00 stderr F I1013 00:23:24.037604 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:24.037786725+00:00 stderr F I1013 00:23:24.037763 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:24.037859897+00:00 stderr F I1013 00:23:24.037847 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:24.037946450+00:00 stderr F I1013 00:23:24.037883 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:24.038241928+00:00 stderr F I1013 00:23:24.038210 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 
0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:24.069963432+00:00 stderr F W1013 00:23:24.069917 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:24.071925906+00:00 stderr F I1013 00:23:24.071853 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.247184ms) 2025-10-13T00:23:24.072298637+00:00 stderr F I1013 00:23:24.072126 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-storage-operator" (302 of 955) 2025-10-13T00:23:24.072425750+00:00 stderr F I1013 00:23:24.072310 1 task_graph.go:481] Running 10 on worker 0 2025-10-13T00:23:24.072425750+00:00 stderr F I1013 00:23:24.072359 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.072425750+00:00 stderr F I1013 00:23:24.072374 1 sync_worker.go:999] Running sync for customresourcedefinition "controlplanemachinesets.machine.openshift.io" (67 of 955) 2025-10-13T00:23:24.120008956+00:00 stderr F I1013 00:23:24.119951 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/console-operator" (620 of 955) 2025-10-13T00:23:24.120008956+00:00 stderr F I1013 00:23:24.119985 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.120008956+00:00 stderr F I1013 00:23:24.119991 1 sync_worker.go:999] Running sync for rolebinding "openshift-monitoring/console-operator" (621 of 955) 2025-10-13T00:23:24.174662778+00:00 stderr F I1013 00:23:24.174519 1 sync_worker.go:1014] Done syncing for customresourcedefinition "controlplanemachinesets.machine.openshift.io" (67 of 955) 2025-10-13T00:23:24.174753011+00:00 stderr F I1013 00:23:24.174737 1 task_graph.go:481] Running 11 on worker 0 2025-10-13T00:23:24.174804342+00:00 stderr F I1013 00:23:24.174785 1 sync_worker.go:982] Skipping precreation of clusteroperator "baremetal" (300 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.174844143+00:00 stderr F I1013 00:23:24.174831 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.174880604+00:00 stderr F I1013 00:23:24.174865 1 sync_worker.go:999] Running sync for customresourcedefinition "baremetalhosts.metal3.io" (278 of 955) 2025-10-13T00:23:24.174941056+00:00 stderr F I1013 00:23:24.174927 1 sync_worker.go:1002] Skipping customresourcedefinition "baremetalhosts.metal3.io" (278 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.174968647+00:00 stderr F I1013 00:23:24.174958 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.175001138+00:00 stderr F I1013 00:23:24.174986 1 sync_worker.go:999] Running sync for customresourcedefinition "bmceventsubscriptions.metal3.io" (279 of 955) 2025-10-13T00:23:24.175038239+00:00 stderr F I1013 00:23:24.175024 1 sync_worker.go:1002] Skipping customresourcedefinition "bmceventsubscriptions.metal3.io" (279 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177058595+00:00 stderr F I1013 00:23:24.175055 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177112546+00:00 stderr F I1013 00:23:24.177093 1 sync_worker.go:999] Running sync for customresourcedefinition "dataimages.metal3.io" (280 of 955) 2025-10-13T00:23:24.177154558+00:00 stderr F I1013 00:23:24.177138 1 sync_worker.go:1002] Skipping 
customresourcedefinition "dataimages.metal3.io" (280 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177197609+00:00 stderr F I1013 00:23:24.177185 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177237480+00:00 stderr F I1013 00:23:24.177219 1 sync_worker.go:999] Running sync for customresourcedefinition "firmwareschemas.metal3.io" (281 of 955) 2025-10-13T00:23:24.177279421+00:00 stderr F I1013 00:23:24.177263 1 sync_worker.go:1002] Skipping customresourcedefinition "firmwareschemas.metal3.io" (281 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177319282+00:00 stderr F I1013 00:23:24.177309 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177373204+00:00 stderr F I1013 00:23:24.177359 1 sync_worker.go:999] Running sync for customresourcedefinition "hardwaredata.metal3.io" (282 of 955) 2025-10-13T00:23:24.177402475+00:00 stderr F I1013 00:23:24.177392 1 sync_worker.go:1002] Skipping customresourcedefinition "hardwaredata.metal3.io" (282 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177424745+00:00 stderr F I1013 00:23:24.177416 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177451806+00:00 stderr F I1013 00:23:24.177439 1 sync_worker.go:999] Running sync for customresourcedefinition "hostfirmwarecomponents.metal3.io" (283 of 955) 2025-10-13T00:23:24.177491147+00:00 stderr F I1013 00:23:24.177478 1 sync_worker.go:1002] Skipping customresourcedefinition "hostfirmwarecomponents.metal3.io" (283 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177517098+00:00 stderr F I1013 00:23:24.177507 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177553649+00:00 stderr F I1013 00:23:24.177537 1 sync_worker.go:999] Running sync for customresourcedefinition "hostfirmwaresettings.metal3.io" (284 of 955) 2025-10-13T00:23:24.177596120+00:00 stderr F I1013 00:23:24.177580 1 sync_worker.go:1002] Skipping customresourcedefinition "hostfirmwaresettings.metal3.io" (284 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177632021+00:00 stderr F I1013 00:23:24.177620 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177671512+00:00 stderr F I1013 00:23:24.177652 1 sync_worker.go:999] Running sync for customresourcedefinition "preprovisioningimages.metal3.io" (285 of 955) 2025-10-13T00:23:24.177735874+00:00 stderr F I1013 00:23:24.177718 1 sync_worker.go:1002] Skipping customresourcedefinition "preprovisioningimages.metal3.io" (285 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.177769675+00:00 stderr F I1013 00:23:24.177758 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.177808256+00:00 stderr F I1013 00:23:24.177791 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/cluster-baremetal-operator-images" (286 of 955) 2025-10-13T00:23:24.177851727+00:00 stderr F I1013 00:23:24.177836 1 sync_worker.go:1002] Skipping configmap "openshift-machine-api/cluster-baremetal-operator-images" (286 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.178750832+00:00 stderr F I1013 00:23:24.178735 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.178791843+00:00 stderr F I1013 00:23:24.178776 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/cbo-trusted-ca" (287 of 955) 2025-10-13T00:23:24.178821274+00:00 
stderr F I1013 00:23:24.178811 1 sync_worker.go:1002] Skipping configmap "openshift-machine-api/cbo-trusted-ca" (287 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.178843645+00:00 stderr F I1013 00:23:24.178835 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.178869665+00:00 stderr F I1013 00:23:24.178858 1 sync_worker.go:999] Running sync for customresourcedefinition "provisionings.metal3.io" (288 of 955) 2025-10-13T00:23:24.178896946+00:00 stderr F I1013 00:23:24.178887 1 sync_worker.go:1002] Skipping customresourcedefinition "provisionings.metal3.io" (288 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.178918777+00:00 stderr F I1013 00:23:24.178910 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.178944827+00:00 stderr F I1013 00:23:24.178933 1 sync_worker.go:999] Running sync for service "openshift-machine-api/cluster-baremetal-operator-service" (289 of 955) 2025-10-13T00:23:24.178993389+00:00 stderr F I1013 00:23:24.178982 1 sync_worker.go:1002] Skipping service "openshift-machine-api/cluster-baremetal-operator-service" (289 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.179016379+00:00 stderr F I1013 00:23:24.179008 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.179041530+00:00 stderr F I1013 00:23:24.179030 1 sync_worker.go:999] Running sync for service "openshift-machine-api/cluster-baremetal-webhook-service" (290 of 955) 2025-10-13T00:23:24.179067821+00:00 stderr F I1013 00:23:24.179058 1 sync_worker.go:1002] Skipping service "openshift-machine-api/cluster-baremetal-webhook-service" (290 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.179271167+00:00 stderr F I1013 00:23:24.179262 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.179296027+00:00 stderr F I1013 00:23:24.179285 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/cluster-baremetal-operator" (291 of 955) 2025-10-13T00:23:24.179335658+00:00 stderr F I1013 00:23:24.179312 1 sync_worker.go:1002] Skipping serviceaccount "openshift-machine-api/cluster-baremetal-operator" (291 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.179612746+00:00 stderr F I1013 00:23:24.179600 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.179645267+00:00 stderr F I1013 00:23:24.179632 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/baremetal-kube-rbac-proxy" (292 of 955) 2025-10-13T00:23:24.179673158+00:00 stderr F I1013 00:23:24.179663 1 sync_worker.go:1002] Skipping configmap "openshift-machine-api/baremetal-kube-rbac-proxy" (292 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.179695338+00:00 stderr F I1013 00:23:24.179687 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.179720839+00:00 stderr F I1013 00:23:24.179710 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (293 of 955) 2025-10-13T00:23:24.179750820+00:00 stderr F I1013 00:23:24.179739 1 sync_worker.go:1002] Skipping rolebinding "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (293 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.179788751+00:00 stderr F I1013 00:23:24.179777 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.179826022+00:00 stderr F I1013 
00:23:24.179809 1 sync_worker.go:999] Running sync for role "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (294 of 955) 2025-10-13T00:23:24.179874653+00:00 stderr F I1013 00:23:24.179859 1 sync_worker.go:1002] Skipping role "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (294 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.179913454+00:00 stderr F I1013 00:23:24.179902 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.179948595+00:00 stderr F I1013 00:23:24.179933 1 sync_worker.go:999] Running sync for role "openshift-machine-api/cluster-baremetal-operator" (295 of 955) 2025-10-13T00:23:24.179985716+00:00 stderr F I1013 00:23:24.179972 1 sync_worker.go:1002] Skipping role "openshift-machine-api/cluster-baremetal-operator" (295 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.182594199+00:00 stderr F I1013 00:23:24.182544 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.182614430+00:00 stderr F I1013 00:23:24.182589 1 sync_worker.go:999] Running sync for clusterrole "cluster-baremetal-operator" (296 of 955) 2025-10-13T00:23:24.182656381+00:00 stderr F I1013 00:23:24.182631 1 sync_worker.go:1002] Skipping clusterrole "cluster-baremetal-operator" (296 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.182656381+00:00 stderr F I1013 00:23:24.182651 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.182692032+00:00 stderr F I1013 00:23:24.182661 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/cluster-baremetal-operator" (297 of 955) 2025-10-13T00:23:24.182699502+00:00 stderr F I1013 00:23:24.182686 1 sync_worker.go:1002] Skipping rolebinding "openshift-machine-api/cluster-baremetal-operator" (297 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.182706742+00:00 stderr F I1013 00:23:24.182698 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.182736793+00:00 stderr F I1013 00:23:24.182711 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-machine-api/cluster-baremetal-operator" (298 of 955) 2025-10-13T00:23:24.182752234+00:00 stderr F I1013 00:23:24.182742 1 sync_worker.go:1002] Skipping clusterrolebinding "openshift-machine-api/cluster-baremetal-operator" (298 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.182780914+00:00 stderr F I1013 00:23:24.182756 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.182790135+00:00 stderr F I1013 00:23:24.182776 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/cluster-baremetal-operator" (299 of 955) 2025-10-13T00:23:24.182834436+00:00 stderr F I1013 00:23:24.182803 1 sync_worker.go:1002] Skipping deployment "openshift-machine-api/cluster-baremetal-operator" (299 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.182834436+00:00 stderr F I1013 00:23:24.182823 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.182984860+00:00 stderr F I1013 00:23:24.182837 1 sync_worker.go:999] Running sync for clusteroperator "baremetal" (300 of 955) 2025-10-13T00:23:24.182984860+00:00 stderr F I1013 00:23:24.182865 1 sync_worker.go:1002] Skipping clusteroperator "baremetal" (300 of 955): disabled capabilities: baremetal 2025-10-13T00:23:24.182984860+00:00 stderr F I1013 00:23:24.182889 1 task_graph.go:481] Running 12 on worker 0 
2025-10-13T00:23:24.182984860+00:00 stderr F I1013 00:23:24.182900 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.182984860+00:00 stderr F I1013 00:23:24.182942 1 sync_worker.go:999] Running sync for role "openshift-console-operator/prometheus-k8s" (855 of 955) 2025-10-13T00:23:24.220808614+00:00 stderr F I1013 00:23:24.220742 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-monitoring/console-operator" (621 of 955) 2025-10-13T00:23:24.220870405+00:00 stderr F I1013 00:23:24.220859 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.220899446+00:00 stderr F I1013 00:23:24.220886 1 sync_worker.go:999] Running sync for rolebinding "kube-system/console-operator" (622 of 955) 2025-10-13T00:23:24.270258731+00:00 stderr F I1013 00:23:24.270228 1 sync_worker.go:1014] Done syncing for role "openshift-console-operator/prometheus-k8s" (855 of 955) 2025-10-13T00:23:24.270312532+00:00 stderr F I1013 00:23:24.270300 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.270373264+00:00 stderr F I1013 00:23:24.270354 1 sync_worker.go:999] Running sync for rolebinding "openshift-console-operator/prometheus-k8s" (856 of 955) 2025-10-13T00:23:24.329628755+00:00 stderr F I1013 00:23:24.329588 1 sync_worker.go:1014] Done syncing for rolebinding "kube-system/console-operator" (622 of 955) 2025-10-13T00:23:24.329676276+00:00 stderr F I1013 00:23:24.329666 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.329704177+00:00 stderr F I1013 00:23:24.329692 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/console-configmap-reader" (623 of 955) 2025-10-13T00:23:24.371527022+00:00 stderr F I1013 00:23:24.371464 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console-operator/prometheus-k8s" (856 of 955) 2025-10-13T00:23:24.371641155+00:00 stderr F I1013 00:23:24.371620 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.371700917+00:00 stderr F I1013 00:23:24.371676 1 sync_worker.go:999] Running sync for servicemonitor "openshift-console-operator/console-operator" (857 of 955) 2025-10-13T00:23:24.423814627+00:00 stderr F I1013 00:23:24.423751 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/console-configmap-reader" (623 of 955) 2025-10-13T00:23:24.423950971+00:00 stderr F I1013 00:23:24.423900 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.424014073+00:00 stderr F I1013 00:23:24.423989 1 sync_worker.go:999] Running sync for rolebinding "kube-system/console" (624 of 955) 2025-10-13T00:23:24.472076102+00:00 stderr F I1013 00:23:24.472007 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-console-operator/console-operator" (857 of 955) 2025-10-13T00:23:24.472174924+00:00 stderr F I1013 00:23:24.472159 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.472218666+00:00 stderr F I1013 00:23:24.472199 1 sync_worker.go:999] Running sync for storageversionmigration "console-plugin-storage-version-migration" (858 of 955) 2025-10-13T00:23:24.521573970+00:00 stderr F I1013 00:23:24.521513 1 sync_worker.go:1014] Done syncing for rolebinding "kube-system/console" (624 of 955) 2025-10-13T00:23:24.521668533+00:00 stderr F I1013 00:23:24.521653 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.521710834+00:00 stderr F 
I1013 00:23:24.521693 1 sync_worker.go:999] Running sync for configmap "openshift-console-operator/console-operator-config" (625 of 955) 2025-10-13T00:23:24.571104960+00:00 stderr F I1013 00:23:24.571051 1 sync_worker.go:1014] Done syncing for storageversionmigration "console-plugin-storage-version-migration" (858 of 955) 2025-10-13T00:23:24.571196313+00:00 stderr F I1013 00:23:24.571181 1 task_graph.go:481] Running 13 on worker 0 2025-10-13T00:23:24.571239074+00:00 stderr F I1013 00:23:24.571226 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.571293155+00:00 stderr F I1013 00:23:24.571274 1 sync_worker.go:999] Running sync for customresourcedefinition "etcds.operator.openshift.io" (70 of 955) 2025-10-13T00:23:24.619940550+00:00 stderr F I1013 00:23:24.619883 1 sync_worker.go:1014] Done syncing for configmap "openshift-console-operator/console-operator-config" (625 of 955) 2025-10-13T00:23:24.620015533+00:00 stderr F I1013 00:23:24.619998 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.620068354+00:00 stderr F I1013 00:23:24.620045 1 sync_worker.go:999] Running sync for service "openshift-console-operator/metrics" (626 of 955) 2025-10-13T00:23:24.672088733+00:00 stderr F I1013 00:23:24.672031 1 sync_worker.go:1014] Done syncing for customresourcedefinition "etcds.operator.openshift.io" (70 of 955) 2025-10-13T00:23:24.672148055+00:00 stderr F I1013 00:23:24.672137 1 task_graph.go:481] Running 14 on worker 0 2025-10-13T00:23:24.672181055+00:00 stderr F I1013 00:23:24.672172 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.672210726+00:00 stderr F I1013 00:23:24.672196 1 sync_worker.go:999] Running sync for role "openshift-controller-manager-operator/prometheus-k8s" (944 of 955) 2025-10-13T00:23:24.720424579+00:00 stderr F I1013 00:23:24.720387 1 sync_worker.go:1014] Done syncing for service "openshift-console-operator/metrics" (626 of 955) 2025-10-13T00:23:24.720483551+00:00 stderr F I1013 00:23:24.720466 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.720544653+00:00 stderr F I1013 00:23:24.720531 1 sync_worker.go:999] Running sync for service "openshift-console-operator/webhook" (627 of 955) 2025-10-13T00:23:24.770126224+00:00 stderr F I1013 00:23:24.770057 1 sync_worker.go:1014] Done syncing for role "openshift-controller-manager-operator/prometheus-k8s" (944 of 955) 2025-10-13T00:23:24.770126224+00:00 stderr F I1013 00:23:24.770099 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.770126224+00:00 stderr F I1013 00:23:24.770107 1 sync_worker.go:999] Running sync for rolebinding "openshift-controller-manager-operator/prometheus-k8s" (945 of 955) 2025-10-13T00:23:24.819646583+00:00 stderr F I1013 00:23:24.819571 1 sync_worker.go:1014] Done syncing for service "openshift-console-operator/webhook" (627 of 955) 2025-10-13T00:23:24.819646583+00:00 stderr F I1013 00:23:24.819607 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.819646583+00:00 stderr F I1013 00:23:24.819613 1 sync_worker.go:999] Running sync for configmap "openshift-console-operator/telemetry-config" (628 of 955) 2025-10-13T00:23:24.870925092+00:00 stderr F I1013 00:23:24.870187 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-controller-manager-operator/prometheus-k8s" (945 of 955) 2025-10-13T00:23:24.870925092+00:00 stderr F I1013 00:23:24.870230 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.870925092+00:00 stderr F I1013 00:23:24.870239 1 sync_worker.go:999] Running sync for servicemonitor "openshift-controller-manager-operator/openshift-controller-manager-operator" (946 of 955) 2025-10-13T00:23:24.920433711+00:00 stderr F I1013 00:23:24.919679 1 sync_worker.go:1014] Done syncing for configmap "openshift-console-operator/telemetry-config" (628 of 955) 2025-10-13T00:23:24.920433711+00:00 stderr F I1013 00:23:24.919720 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.920433711+00:00 stderr F I1013 00:23:24.919729 1 sync_worker.go:999] Running sync for serviceaccount "openshift-console-operator/console-operator" (629 of 955) 2025-10-13T00:23:24.976401350+00:00 stderr F I1013 00:23:24.974538 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-controller-manager-operator/openshift-controller-manager-operator" (946 of 955) 2025-10-13T00:23:24.976401350+00:00 stderr F I1013 00:23:24.974569 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:24.976401350+00:00 stderr F I1013 00:23:24.974575 1 sync_worker.go:999] Running sync for role "openshift-controller-manager/prometheus-k8s" (947 of 955) 2025-10-13T00:23:25.020723844+00:00 stderr F I1013 00:23:25.020638 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-console-operator/console-operator" (629 of 955) 2025-10-13T00:23:25.020723844+00:00 stderr F I1013 00:23:25.020683 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.020723844+00:00 stderr F I1013 00:23:25.020694 1 sync_worker.go:999] Running sync for serviceaccount "openshift-console/console" (630 of 955) 2025-10-13T00:23:25.037717628+00:00 stderr F I1013 00:23:25.037610 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:25.038003866+00:00 stderr F I1013 00:23:25.037961 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:25.038003866+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:25.038003866+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:25.038197611+00:00 stderr F I1013 00:23:25.038123 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (494.484µs) 2025-10-13T00:23:25.038197611+00:00 stderr F I1013 00:23:25.038145 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:25.038275883+00:00 stderr F I1013 00:23:25.038213 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:25.038302634+00:00 stderr F I1013 00:23:25.038290 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:25.038382416+00:00 stderr F I1013 00:23:25.038297 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:25.038591402+00:00 stderr F I1013 00:23:25.038554 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:25.066413437+00:00 stderr F W1013 00:23:25.065826 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:25.068462994+00:00 stderr F I1013 00:23:25.068381 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.230202ms) 2025-10-13T00:23:25.070456100+00:00 stderr F I1013 00:23:25.070358 1 sync_worker.go:1014] Done syncing for role "openshift-controller-manager/prometheus-k8s" (947 of 955) 2025-10-13T00:23:25.070456100+00:00 stderr F I1013 00:23:25.070393 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.070456100+00:00 stderr F I1013 00:23:25.070401 1 sync_worker.go:999] Running sync for rolebinding "openshift-controller-manager/prometheus-k8s" (948 of 955) 2025-10-13T00:23:25.122884430+00:00 stderr F I1013 00:23:25.122568 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-console/console" (630 of 955) 2025-10-13T00:23:25.122884430+00:00 stderr F I1013 00:23:25.122602 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.122884430+00:00 stderr F I1013 00:23:25.122610 1 sync_worker.go:999] Running sync for deployment "openshift-console-operator/console-conversion-webhook" (631 of 955) 2025-10-13T00:23:25.169655753+00:00 stderr F I1013 00:23:25.169231 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-controller-manager/prometheus-k8s" (948 of 955) 2025-10-13T00:23:25.169655753+00:00 stderr F I1013 00:23:25.169630 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.169655753+00:00 stderr F I1013 00:23:25.169636 1 sync_worker.go:999] Running sync for servicemonitor "openshift-controller-manager/openshift-controller-manager" (949 of 955) 2025-10-13T00:23:25.220209361+00:00 stderr F I1013 00:23:25.219817 1 sync_worker.go:1014] Done syncing for deployment "openshift-console-operator/console-conversion-webhook" (631 of 955) 2025-10-13T00:23:25.220209361+00:00 stderr F I1013 00:23:25.219846 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.220209361+00:00 stderr F I1013 00:23:25.219851 1 sync_worker.go:999] Running sync for consoleclidownload "helm-download-links" (632 of 955) 2025-10-13T00:23:25.277377334+00:00 stderr F I1013 00:23:25.277266 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-controller-manager/openshift-controller-manager" (949 of 955) 2025-10-13T00:23:25.277377334+00:00 stderr F I1013 00:23:25.277293 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.277377334+00:00 stderr F I1013 00:23:25.277299 1 sync_worker.go:999] Running sync for role "openshift-route-controller-manager/prometheus-k8s" (950 of 955) 2025-10-13T00:23:25.321107042+00:00 stderr F I1013 00:23:25.321027 1 sync_worker.go:1014] Done syncing for consoleclidownload "helm-download-links" (632 of 955) 2025-10-13T00:23:25.321107042+00:00 stderr F I1013 00:23:25.321059 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.321107042+00:00 stderr F I1013 00:23:25.321067 1 sync_worker.go:999] Running sync for deployment "openshift-console-operator/console-operator" (633 of 955) 2025-10-13T00:23:25.369451198+00:00 stderr F I1013 00:23:25.369386 1 sync_worker.go:1014] Done syncing for role "openshift-route-controller-manager/prometheus-k8s" (950 of 955) 2025-10-13T00:23:25.369451198+00:00 stderr F I1013 00:23:25.369418 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.369451198+00:00 stderr F I1013 00:23:25.369423 1 
sync_worker.go:999] Running sync for rolebinding "openshift-route-controller-manager/prometheus-k8s" (951 of 955) 2025-10-13T00:23:25.469509275+00:00 stderr F I1013 00:23:25.469415 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-route-controller-manager/prometheus-k8s" (951 of 955) 2025-10-13T00:23:25.469509275+00:00 stderr F I1013 00:23:25.469460 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.469509275+00:00 stderr F I1013 00:23:25.469475 1 sync_worker.go:999] Running sync for servicemonitor "openshift-route-controller-manager/openshift-route-controller-manager" (952 of 955) 2025-10-13T00:23:25.522013618+00:00 stderr F I1013 00:23:25.521913 1 sync_worker.go:1014] Done syncing for deployment "openshift-console-operator/console-operator" (633 of 955) 2025-10-13T00:23:25.522013618+00:00 stderr F I1013 00:23:25.521965 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.522013618+00:00 stderr F I1013 00:23:25.521977 1 sync_worker.go:999] Running sync for customresourcedefinition "consoleplugins.console.openshift.io" (634 of 955) 2025-10-13T00:23:25.572323859+00:00 stderr F I1013 00:23:25.572255 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-route-controller-manager/openshift-route-controller-manager" (952 of 955) 2025-10-13T00:23:25.572323859+00:00 stderr F I1013 00:23:25.572292 1 task_graph.go:481] Running 15 on worker 0 2025-10-13T00:23:25.572323859+00:00 stderr F I1013 00:23:25.572309 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.572389521+00:00 stderr F I1013 00:23:25.572319 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/release-verification" (854 of 955) 2025-10-13T00:23:25.621010986+00:00 stderr F I1013 00:23:25.620945 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoleplugins.console.openshift.io" (634 of 955) 2025-10-13T00:23:25.621010986+00:00 stderr F I1013 00:23:25.620983 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.621010986+00:00 stderr F I1013 00:23:25.620991 1 sync_worker.go:999] Running sync for clusteroperator "console" (635 of 955) 2025-10-13T00:23:25.621228642+00:00 stderr F I1013 00:23:25.621184 1 sync_worker.go:1014] Done syncing for clusteroperator "console" (635 of 955) 2025-10-13T00:23:25.621228642+00:00 stderr F I1013 00:23:25.621201 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.621228642+00:00 stderr F I1013 00:23:25.621208 1 sync_worker.go:999] Running sync for consolequickstart "add-healthchecks" (636 of 955) 2025-10-13T00:23:25.670165255+00:00 stderr F I1013 00:23:25.670100 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/release-verification" (854 of 955) 2025-10-13T00:23:25.670165255+00:00 stderr F I1013 00:23:25.670134 1 task_graph.go:481] Running 16 on worker 0 2025-10-13T00:23:25.721846854+00:00 stderr F I1013 00:23:25.721766 1 sync_worker.go:1014] Done syncing for consolequickstart "add-healthchecks" (636 of 955) 2025-10-13T00:23:25.721846854+00:00 stderr F I1013 00:23:25.721808 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.721846854+00:00 stderr F I1013 00:23:25.721815 1 sync_worker.go:999] Running sync for prometheusrule "openshift-console-operator/cluster-monitoring-prometheus-rules" (637 of 955) 2025-10-13T00:23:25.773216305+00:00 stderr F I1013 00:23:25.773154 1 sync_worker.go:989] 
Precreated resource clusteroperator "machine-approver" (441 of 955) 2025-10-13T00:23:25.773216305+00:00 stderr F I1013 00:23:25.773185 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.773216305+00:00 stderr F I1013 00:23:25.773193 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-machine-approver" (430 of 955) 2025-10-13T00:23:25.820841442+00:00 stderr F I1013 00:23:25.820765 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-console-operator/cluster-monitoring-prometheus-rules" (637 of 955) 2025-10-13T00:23:25.820841442+00:00 stderr F I1013 00:23:25.820801 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.820841442+00:00 stderr F I1013 00:23:25.820810 1 sync_worker.go:999] Running sync for consolequickstart "install-cryostat" (638 of 955) 2025-10-13T00:23:25.870290499+00:00 stderr F I1013 00:23:25.870134 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-machine-approver" (430 of 955) 2025-10-13T00:23:25.870290499+00:00 stderr F I1013 00:23:25.870177 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.870290499+00:00 stderr F I1013 00:23:25.870191 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-machine-approver/machine-approver-sa" (431 of 955) 2025-10-13T00:23:25.921177307+00:00 stderr F I1013 00:23:25.921101 1 sync_worker.go:1014] Done syncing for consolequickstart "install-cryostat" (638 of 955) 2025-10-13T00:23:25.921177307+00:00 stderr F I1013 00:23:25.921137 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.921177307+00:00 stderr F I1013 00:23:25.921146 1 sync_worker.go:999] Running sync for consolequickstart "explore-pipelines" (639 of 955) 2025-10-13T00:23:25.969622296+00:00 stderr F I1013 00:23:25.969554 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-cluster-machine-approver/machine-approver-sa" (431 of 955) 2025-10-13T00:23:25.969622296+00:00 stderr F I1013 00:23:25.969586 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:25.969622296+00:00 stderr F I1013 00:23:25.969592 1 sync_worker.go:999] Running sync for role "openshift-cluster-machine-approver/machine-approver" (432 of 955) 2025-10-13T00:23:26.021498951+00:00 stderr F I1013 00:23:26.021402 1 sync_worker.go:1014] Done syncing for consolequickstart "explore-pipelines" (639 of 955) 2025-10-13T00:23:26.021498951+00:00 stderr F I1013 00:23:26.021451 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.021498951+00:00 stderr F I1013 00:23:26.021464 1 sync_worker.go:999] Running sync for consolequickstart "host-inventory" (640 of 955) 2025-10-13T00:23:26.038389732+00:00 stderr F I1013 00:23:26.038290 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:26.038602428+00:00 stderr F I1013 00:23:26.038550 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:26.038602428+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:26.038602428+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:26.038625518+00:00 stderr F I1013 00:23:26.038600 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (322.549µs) 2025-10-13T00:23:26.038625518+00:00 stderr F I1013 00:23:26.038615 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:26.038698950+00:00 stderr F I1013 00:23:26.038652 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:26.038721871+00:00 stderr F I1013 00:23:26.038700 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:26.038801523+00:00 stderr F I1013 00:23:26.038710 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:26.039046950+00:00 stderr F I1013 00:23:26.038985 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:26.057939896+00:00 stderr F W1013 00:23:26.057798 1 warnings.go:70] 
unknown field "spec.signatureStores" 2025-10-13T00:23:26.059979493+00:00 stderr F I1013 00:23:26.059893 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.273523ms) 2025-10-13T00:23:26.070204558+00:00 stderr F I1013 00:23:26.070125 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-machine-approver/machine-approver" (432 of 955) 2025-10-13T00:23:26.070204558+00:00 stderr F I1013 00:23:26.070160 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.070204558+00:00 stderr F I1013 00:23:26.070170 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-machine-approver/machine-approver" (433 of 955) 2025-10-13T00:23:26.121312122+00:00 stderr F I1013 00:23:26.121200 1 sync_worker.go:1014] Done syncing for consolequickstart "host-inventory" (640 of 955) 2025-10-13T00:23:26.121312122+00:00 stderr F I1013 00:23:26.121234 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.121312122+00:00 stderr F I1013 00:23:26.121239 1 sync_worker.go:999] Running sync for consolequickstart "user-impersonation" (641 of 955) 2025-10-13T00:23:26.170676447+00:00 stderr F I1013 00:23:26.170078 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-machine-approver/machine-approver" (433 of 955) 2025-10-13T00:23:26.170676447+00:00 stderr F I1013 00:23:26.170454 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.170676447+00:00 stderr F I1013 00:23:26.170463 1 sync_worker.go:999] Running sync for role "openshift-config-managed/machine-approver" (434 of 955) 2025-10-13T00:23:26.220450463+00:00 stderr F I1013 00:23:26.220372 1 sync_worker.go:1014] Done syncing for consolequickstart "user-impersonation" (641 of 955) 2025-10-13T00:23:26.220450463+00:00 stderr F I1013 00:23:26.220405 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.220450463+00:00 stderr F I1013 00:23:26.220411 1 sync_worker.go:999] Running sync for consolequickstart "install-helmchartrepo-ns" (642 of 955) 2025-10-13T00:23:26.269463208+00:00 stderr F I1013 00:23:26.269340 1 sync_worker.go:1014] Done syncing for role "openshift-config-managed/machine-approver" (434 of 955) 2025-10-13T00:23:26.269463208+00:00 stderr F I1013 00:23:26.269385 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.269463208+00:00 stderr F I1013 00:23:26.269392 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/machine-approver" (435 of 955) 2025-10-13T00:23:26.321284142+00:00 stderr F I1013 00:23:26.320422 1 sync_worker.go:1014] Done syncing for consolequickstart "install-helmchartrepo-ns" (642 of 955) 2025-10-13T00:23:26.321284142+00:00 stderr F I1013 00:23:26.320454 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.321284142+00:00 stderr F I1013 00:23:26.320460 1 sync_worker.go:999] Running sync for consolequickstart "install-multicluster-engine" (643 of 955) 2025-10-13T00:23:26.371429089+00:00 stderr F I1013 00:23:26.369385 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/machine-approver" (435 of 955) 2025-10-13T00:23:26.371429089+00:00 stderr F I1013 00:23:26.369440 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.371429089+00:00 stderr F I1013 00:23:26.369450 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:controller:machine-approver" (436 of 955) 
2025-10-13T00:23:26.419935600+00:00 stderr F I1013 00:23:26.419872 1 sync_worker.go:1014] Done syncing for consolequickstart "install-multicluster-engine" (643 of 955) 2025-10-13T00:23:26.419935600+00:00 stderr F I1013 00:23:26.419909 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.419935600+00:00 stderr F I1013 00:23:26.419918 1 sync_worker.go:999] Running sync for consolequickstart "odf-install-tour" (644 of 955) 2025-10-13T00:23:26.469131890+00:00 stderr F I1013 00:23:26.469067 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:controller:machine-approver" (436 of 955) 2025-10-13T00:23:26.469131890+00:00 stderr F I1013 00:23:26.469109 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.469131890+00:00 stderr F I1013 00:23:26.469117 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:controller:machine-approver" (437 of 955) 2025-10-13T00:23:26.541314251+00:00 stderr F I1013 00:23:26.540877 1 sync_worker.go:1014] Done syncing for consolequickstart "odf-install-tour" (644 of 955) 2025-10-13T00:23:26.541314251+00:00 stderr F I1013 00:23:26.540917 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.541314251+00:00 stderr F I1013 00:23:26.540925 1 sync_worker.go:999] Running sync for consolequickstart "install-serverless" (645 of 955) 2025-10-13T00:23:26.572500770+00:00 stderr F I1013 00:23:26.570112 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:controller:machine-approver" (437 of 955) 2025-10-13T00:23:26.572500770+00:00 stderr F I1013 00:23:26.570149 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.572500770+00:00 stderr F I1013 00:23:26.570157 1 sync_worker.go:999] Running sync for configmap "openshift-cluster-machine-approver/kube-rbac-proxy" (438 of 955) 2025-10-13T00:23:26.621184886+00:00 stderr F I1013 00:23:26.621095 1 sync_worker.go:1014] Done syncing for consolequickstart "install-serverless" (645 of 955) 2025-10-13T00:23:26.621184886+00:00 stderr F I1013 00:23:26.621140 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.621184886+00:00 stderr F I1013 00:23:26.621148 1 sync_worker.go:999] Running sync for consolequickstart "jboss-eap7-with-helm" (646 of 955) 2025-10-13T00:23:26.669091240+00:00 stderr F I1013 00:23:26.669020 1 sync_worker.go:1014] Done syncing for configmap "openshift-cluster-machine-approver/kube-rbac-proxy" (438 of 955) 2025-10-13T00:23:26.669091240+00:00 stderr F I1013 00:23:26.669055 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.669091240+00:00 stderr F I1013 00:23:26.669061 1 sync_worker.go:999] Running sync for service "openshift-cluster-machine-approver/machine-approver" (439 of 955) 2025-10-13T00:23:26.720807401+00:00 stderr F I1013 00:23:26.720730 1 sync_worker.go:1014] Done syncing for consolequickstart "jboss-eap7-with-helm" (646 of 955) 2025-10-13T00:23:26.720807401+00:00 stderr F I1013 00:23:26.720762 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.720807401+00:00 stderr F I1013 00:23:26.720768 1 sync_worker.go:999] Running sync for consolequickstart "manage-helm-repos" (647 of 955) 2025-10-13T00:23:26.769308342+00:00 stderr F I1013 00:23:26.769247 1 sync_worker.go:1014] Done syncing for service "openshift-cluster-machine-approver/machine-approver" (439 of 955) 
2025-10-13T00:23:26.769308342+00:00 stderr F I1013 00:23:26.769279 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.769308342+00:00 stderr F I1013 00:23:26.769285 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-machine-approver/machine-approver" (440 of 955) 2025-10-13T00:23:26.820377014+00:00 stderr F I1013 00:23:26.820281 1 sync_worker.go:1014] Done syncing for consolequickstart "manage-helm-repos" (647 of 955) 2025-10-13T00:23:26.820377014+00:00 stderr F I1013 00:23:26.820369 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.820412075+00:00 stderr F I1013 00:23:26.820383 1 sync_worker.go:999] Running sync for consolequickstart "monitor-sampleapp" (648 of 955) 2025-10-13T00:23:26.869971426+00:00 stderr F I1013 00:23:26.869832 1 sync_worker.go:1014] Done syncing for deployment "openshift-cluster-machine-approver/machine-approver" (440 of 955) 2025-10-13T00:23:26.869971426+00:00 stderr F I1013 00:23:26.869886 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.869971426+00:00 stderr F I1013 00:23:26.869899 1 sync_worker.go:999] Running sync for clusteroperator "machine-approver" (441 of 955) 2025-10-13T00:23:26.870259654+00:00 stderr F I1013 00:23:26.870183 1 sync_worker.go:1014] Done syncing for clusteroperator "machine-approver" (441 of 955) 2025-10-13T00:23:26.870259654+00:00 stderr F I1013 00:23:26.870219 1 task_graph.go:481] Running 17 on worker 0 2025-10-13T00:23:26.920516304+00:00 stderr F I1013 00:23:26.920415 1 sync_worker.go:1014] Done syncing for consolequickstart "monitor-sampleapp" (648 of 955) 2025-10-13T00:23:26.920516304+00:00 stderr F I1013 00:23:26.920449 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.920516304+00:00 stderr F I1013 00:23:26.920455 1 sync_worker.go:999] Running sync for consolequickstart "node-with-s2i" (649 of 955) 2025-10-13T00:23:26.973008616+00:00 stderr F I1013 00:23:26.972776 1 sync_worker.go:989] Precreated resource clusteroperator "dns" (772 of 955) 2025-10-13T00:23:26.973008616+00:00 stderr F I1013 00:23:26.972823 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:26.973008616+00:00 stderr F I1013 00:23:26.972836 1 sync_worker.go:999] Running sync for clusterrole "openshift-dns-operator" (763 of 955) 2025-10-13T00:23:27.022292359+00:00 stderr F I1013 00:23:27.022178 1 sync_worker.go:1014] Done syncing for consolequickstart "node-with-s2i" (649 of 955) 2025-10-13T00:23:27.022292359+00:00 stderr F I1013 00:23:27.022213 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.022292359+00:00 stderr F I1013 00:23:27.022218 1 sync_worker.go:999] Running sync for consolequickstart "ocs-install-tour" (650 of 955) 2025-10-13T00:23:27.038830439+00:00 stderr F I1013 00:23:27.038724 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:27.039215880+00:00 stderr F I1013 00:23:27.039135 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:27.039215880+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:27.039215880+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:27.039311613+00:00 stderr F I1013 00:23:27.039266 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (608.767µs) 2025-10-13T00:23:27.039311613+00:00 stderr F I1013 00:23:27.039299 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:27.039488088+00:00 stderr F I1013 00:23:27.039413 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:27.039522869+00:00 stderr F I1013 00:23:27.039510 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:27.039669563+00:00 stderr F I1013 00:23:27.039526 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:27.043556101+00:00 stderr F I1013 00:23:27.043453 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:27.069793542+00:00 stderr F I1013 00:23:27.069682 1 
sync_worker.go:1014] Done syncing for clusterrole "openshift-dns-operator" (763 of 955) 2025-10-13T00:23:27.069793542+00:00 stderr F I1013 00:23:27.069743 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.069793542+00:00 stderr F I1013 00:23:27.069756 1 sync_worker.go:999] Running sync for namespace "openshift-dns-operator" (764 of 955) 2025-10-13T00:23:27.082294860+00:00 stderr F W1013 00:23:27.082226 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:27.084375268+00:00 stderr F I1013 00:23:27.084276 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.975583ms) 2025-10-13T00:23:27.120281428+00:00 stderr F I1013 00:23:27.120182 1 sync_worker.go:1014] Done syncing for consolequickstart "ocs-install-tour" (650 of 955) 2025-10-13T00:23:27.120281428+00:00 stderr F I1013 00:23:27.120231 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.120281428+00:00 stderr F I1013 00:23:27.120244 1 sync_worker.go:999] Running sync for consolequickstart "quarkus-with-helm" (651 of 955) 2025-10-13T00:23:27.171525136+00:00 stderr F I1013 00:23:27.171416 1 sync_worker.go:1014] Done syncing for namespace "openshift-dns-operator" (764 of 955) 2025-10-13T00:23:27.171525136+00:00 stderr F I1013 00:23:27.171473 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.171525136+00:00 stderr F I1013 00:23:27.171487 1 sync_worker.go:999] Running sync for customresourcedefinition "dnses.operator.openshift.io" (765 of 955) 2025-10-13T00:23:27.222485705+00:00 stderr F I1013 00:23:27.222304 1 sync_worker.go:1014] Done syncing for consolequickstart "quarkus-with-helm" (651 of 955) 2025-10-13T00:23:27.222485705+00:00 stderr F I1013 00:23:27.222414 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.222485705+00:00 stderr F I1013 00:23:27.222427 1 sync_worker.go:999] Running sync for consolequickstart "quarkus-with-s2i" (652 of 955) 2025-10-13T00:23:27.272716684+00:00 stderr F I1013 00:23:27.272625 1 sync_worker.go:1014] Done syncing for customresourcedefinition "dnses.operator.openshift.io" (765 of 955) 2025-10-13T00:23:27.272716684+00:00 stderr F I1013 00:23:27.272664 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.272716684+00:00 stderr F I1013 00:23:27.272670 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-dns-operator" (766 of 955) 2025-10-13T00:23:27.323994283+00:00 stderr F I1013 00:23:27.323917 1 sync_worker.go:1014] Done syncing for consolequickstart "quarkus-with-s2i" (652 of 955) 2025-10-13T00:23:27.323994283+00:00 stderr F I1013 00:23:27.323957 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.323994283+00:00 stderr F I1013 00:23:27.323966 1 sync_worker.go:999] Running sync for consolequickstart "rhdh-installation-via-helm" (653 of 955) 2025-10-13T00:23:27.370145288+00:00 stderr F I1013 00:23:27.369970 1 sync_worker.go:1014] Done syncing for clusterrolebinding "openshift-dns-operator" (766 of 955) 2025-10-13T00:23:27.370145288+00:00 stderr F I1013 00:23:27.370023 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.370145288+00:00 stderr F I1013 00:23:27.370054 1 sync_worker.go:999] Running sync for rolebinding "openshift-dns-operator/dns-operator" (767 of 955) 2025-10-13T00:23:27.422887017+00:00 stderr F I1013 00:23:27.422804 1 sync_worker.go:1014] Done syncing for 
consolequickstart "rhdh-installation-via-helm" (653 of 955) 2025-10-13T00:23:27.422887017+00:00 stderr F I1013 00:23:27.422850 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.422887017+00:00 stderr F I1013 00:23:27.422863 1 sync_worker.go:999] Running sync for consolequickstart "rhdh-installation-via-operator" (654 of 955) 2025-10-13T00:23:27.470741320+00:00 stderr F I1013 00:23:27.470662 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-dns-operator/dns-operator" (767 of 955) 2025-10-13T00:23:27.470741320+00:00 stderr F I1013 00:23:27.470698 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.470741320+00:00 stderr F I1013 00:23:27.470707 1 sync_worker.go:999] Running sync for role "openshift-dns-operator/dns-operator" (768 of 955) 2025-10-13T00:23:27.522880103+00:00 stderr F I1013 00:23:27.522796 1 sync_worker.go:1014] Done syncing for consolequickstart "rhdh-installation-via-operator" (654 of 955) 2025-10-13T00:23:27.522880103+00:00 stderr F I1013 00:23:27.522838 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.522880103+00:00 stderr F I1013 00:23:27.522849 1 sync_worker.go:999] Running sync for consolequickstart "sample-application" (655 of 955) 2025-10-13T00:23:27.571558029+00:00 stderr F I1013 00:23:27.571456 1 sync_worker.go:1014] Done syncing for role "openshift-dns-operator/dns-operator" (768 of 955) 2025-10-13T00:23:27.571558029+00:00 stderr F I1013 00:23:27.571492 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.571558029+00:00 stderr F I1013 00:23:27.571500 1 sync_worker.go:999] Running sync for serviceaccount "openshift-dns-operator/dns-operator" (769 of 955) 2025-10-13T00:23:27.622101497+00:00 stderr F I1013 00:23:27.622040 1 sync_worker.go:1014] Done syncing for consolequickstart "sample-application" (655 of 955) 2025-10-13T00:23:27.622101497+00:00 stderr F I1013 00:23:27.622084 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.622148028+00:00 stderr F I1013 00:23:27.622094 1 sync_worker.go:999] Running sync for consolequickstart "spring-with-s2i" (656 of 955) 2025-10-13T00:23:27.670924347+00:00 stderr F I1013 00:23:27.670860 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-dns-operator/dns-operator" (769 of 955) 2025-10-13T00:23:27.670924347+00:00 stderr F I1013 00:23:27.670895 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.670924347+00:00 stderr F I1013 00:23:27.670902 1 sync_worker.go:999] Running sync for service "openshift-dns-operator/metrics" (770 of 955) 2025-10-13T00:23:27.720973641+00:00 stderr F I1013 00:23:27.720922 1 sync_worker.go:1014] Done syncing for consolequickstart "spring-with-s2i" (656 of 955) 2025-10-13T00:23:27.721015372+00:00 stderr F I1013 00:23:27.720974 1 task_graph.go:481] Running 18 on worker 1 2025-10-13T00:23:27.721015372+00:00 stderr F I1013 00:23:27.720987 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.721015372+00:00 stderr F I1013 00:23:27.720992 1 sync_worker.go:999] Running sync for customresourcedefinition "openshiftapiservers.operator.openshift.io" (277 of 955) 2025-10-13T00:23:27.769909954+00:00 stderr F I1013 00:23:27.769852 1 sync_worker.go:1014] Done syncing for service "openshift-dns-operator/metrics" (770 of 955) 2025-10-13T00:23:27.769909954+00:00 stderr F I1013 00:23:27.769887 1 sync_worker.go:703] Dropping 
status report from earlier in sync loop 2025-10-13T00:23:27.769909954+00:00 stderr F I1013 00:23:27.769895 1 sync_worker.go:999] Running sync for deployment "openshift-dns-operator/dns-operator" (771 of 955) 2025-10-13T00:23:27.822132369+00:00 stderr F I1013 00:23:27.822044 1 sync_worker.go:1014] Done syncing for customresourcedefinition "openshiftapiservers.operator.openshift.io" (277 of 955) 2025-10-13T00:23:27.822132369+00:00 stderr F I1013 00:23:27.822118 1 task_graph.go:481] Running 19 on worker 1 2025-10-13T00:23:27.822167690+00:00 stderr F I1013 00:23:27.822142 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.822175550+00:00 stderr F I1013 00:23:27.822159 1 sync_worker.go:999] Running sync for role "openshift-dns-operator/prometheus-k8s" (862 of 955) 2025-10-13T00:23:27.870177867+00:00 stderr F I1013 00:23:27.870097 1 sync_worker.go:1014] Done syncing for deployment "openshift-dns-operator/dns-operator" (771 of 955) 2025-10-13T00:23:27.870177867+00:00 stderr F I1013 00:23:27.870159 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.870214898+00:00 stderr F I1013 00:23:27.870176 1 sync_worker.go:999] Running sync for clusteroperator "dns" (772 of 955) 2025-10-13T00:23:27.870583198+00:00 stderr F I1013 00:23:27.870553 1 sync_worker.go:1014] Done syncing for clusteroperator "dns" (772 of 955) 2025-10-13T00:23:27.870594939+00:00 stderr F I1013 00:23:27.870587 1 task_graph.go:481] Running 20 on worker 0 2025-10-13T00:23:27.870628919+00:00 stderr F I1013 00:23:27.870602 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.870638120+00:00 stderr F I1013 00:23:27.870625 1 sync_worker.go:999] Running sync for role "openshift-console/prometheus-k8s" (859 of 955) 2025-10-13T00:23:27.919630334+00:00 stderr F I1013 00:23:27.919557 1 sync_worker.go:1014] Done syncing for role "openshift-dns-operator/prometheus-k8s" (862 of 955) 2025-10-13T00:23:27.919630334+00:00 stderr F I1013 00:23:27.919599 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.919630334+00:00 stderr F I1013 00:23:27.919609 1 sync_worker.go:999] Running sync for rolebinding "openshift-dns-operator/prometheus-k8s" (863 of 955) 2025-10-13T00:23:27.969681019+00:00 stderr F I1013 00:23:27.969622 1 sync_worker.go:1014] Done syncing for role "openshift-console/prometheus-k8s" (859 of 955) 2025-10-13T00:23:27.969681019+00:00 stderr F I1013 00:23:27.969657 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:27.969681019+00:00 stderr F I1013 00:23:27.969666 1 sync_worker.go:999] Running sync for rolebinding "openshift-console/prometheus-k8s" (860 of 955) 2025-10-13T00:23:28.019682570+00:00 stderr F I1013 00:23:28.019602 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-dns-operator/prometheus-k8s" (863 of 955) 2025-10-13T00:23:28.019682570+00:00 stderr F I1013 00:23:28.019637 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.019682570+00:00 stderr F I1013 00:23:28.019643 1 sync_worker.go:999] Running sync for servicemonitor "openshift-dns-operator/dns-operator" (864 of 955) 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.039703 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.039941 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: 
EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:28.040201532+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:28.040201532+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.039982 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (290.028µs) 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.039994 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.040030 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.040072 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:28.040201532+00:00 stderr F I1013 00:23:28.040080 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:28.040397137+00:00 stderr F I1013 00:23:28.040362 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 
0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:28.063781769+00:00 stderr F W1013 00:23:28.063737 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:28.065213919+00:00 stderr F I1013 00:23:28.065192 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.193942ms) 2025-10-13T00:23:28.069821237+00:00 stderr F I1013 00:23:28.069792 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console/prometheus-k8s" (860 of 955) 2025-10-13T00:23:28.069864698+00:00 stderr F I1013 00:23:28.069855 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.069894579+00:00 stderr F I1013 00:23:28.069881 1 sync_worker.go:999] Running sync for servicemonitor "openshift-console/console" (861 of 955) 2025-10-13T00:23:28.120722755+00:00 stderr F I1013 00:23:28.120622 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-dns-operator/dns-operator" (864 of 955) 2025-10-13T00:23:28.120781117+00:00 stderr F I1013 00:23:28.120770 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.120810487+00:00 stderr F I1013 00:23:28.120797 1 sync_worker.go:999] Running sync for prometheusrule "openshift-dns-operator/dns" (865 of 955) 2025-10-13T00:23:28.170398819+00:00 stderr F I1013 00:23:28.170319 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-console/console" (861 of 955) 2025-10-13T00:23:28.170398819+00:00 stderr F I1013 00:23:28.170363 1 task_graph.go:481] Running 21 on worker 0 2025-10-13T00:23:28.170398819+00:00 stderr F I1013 00:23:28.170375 1 sync_worker.go:982] Skipping precreation of clusteroperator "storage" (577 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.170398819+00:00 stderr F I1013 00:23:28.170393 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170440060+00:00 stderr F I1013 00:23:28.170398 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-csi-drivers" (549 of 955) 2025-10-13T00:23:28.170440060+00:00 stderr F I1013 00:23:28.170409 1 sync_worker.go:1002] Skipping namespace "openshift-cluster-csi-drivers" (549 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.170440060+00:00 stderr F I1013 00:23:28.170414 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170440060+00:00 stderr F I1013 00:23:28.170419 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/aws-ebs-csi-driver-operator" (550 of 955) 2025-10-13T00:23:28.170440060+00:00 stderr F I1013 00:23:28.170430 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/aws-ebs-csi-driver-operator" (550 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170440060+00:00 stderr F I1013 00:23:28.170436 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170449940+00:00 stderr F I1013 00:23:28.170441 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/azure-disk-csi-driver-operator" (551 of 955) 2025-10-13T00:23:28.170457350+00:00 stderr F I1013 00:23:28.170450 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/azure-disk-csi-driver-operator" (551 of 955): 
disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170457350+00:00 stderr F I1013 00:23:28.170455 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170466681+00:00 stderr F I1013 00:23:28.170459 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/azure-file-csi-driver-operator" (552 of 955) 2025-10-13T00:23:28.170475401+00:00 stderr F I1013 00:23:28.170469 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/azure-file-csi-driver-operator" (552 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170482601+00:00 stderr F I1013 00:23:28.170474 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170489751+00:00 stderr F I1013 00:23:28.170479 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cluster-csi-drivers" (553 of 955) 2025-10-13T00:23:28.170497011+00:00 stderr F I1013 00:23:28.170488 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cluster-csi-drivers" (553 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170497011+00:00 stderr F I1013 00:23:28.170493 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170517802+00:00 stderr F I1013 00:23:28.170498 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-pd-csi-driver-operator" (554 of 955) 2025-10-13T00:23:28.170525272+00:00 stderr F I1013 00:23:28.170508 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-pd-csi-driver-operator" (554 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170525272+00:00 stderr F I1013 00:23:28.170520 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170532732+00:00 stderr F I1013 00:23:28.170525 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/ibm-vpc-block-csi-driver-operator" (555 of 955) 2025-10-13T00:23:28.170539683+00:00 stderr F I1013 00:23:28.170534 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/ibm-vpc-block-csi-driver-operator" (555 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170556413+00:00 stderr F I1013 00:23:28.170539 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170556413+00:00 stderr F I1013 00:23:28.170543 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/manila-csi-driver-operator" (556 of 955) 2025-10-13T00:23:28.170564013+00:00 stderr F I1013 00:23:28.170554 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/manila-csi-driver-operator" (556 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170564013+00:00 stderr F I1013 00:23:28.170559 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170571344+00:00 stderr F I1013 00:23:28.170564 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/ovirt-csi-driver-operator" (557 of 955) 2025-10-13T00:23:28.170578514+00:00 stderr F I1013 00:23:28.170573 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/ovirt-csi-driver-operator" (557 
of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170585684+00:00 stderr F I1013 00:23:28.170578 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170592984+00:00 stderr F I1013 00:23:28.170582 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/ibm-powervs-block-csi-driver-operator" (558 of 955) 2025-10-13T00:23:28.170600324+00:00 stderr F I1013 00:23:28.170591 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/ibm-powervs-block-csi-driver-operator" (558 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170600324+00:00 stderr F I1013 00:23:28.170596 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170607805+00:00 stderr F I1013 00:23:28.170600 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-vmware-vsphere-csi-driver-operator" (559 of 955) 2025-10-13T00:23:28.170616975+00:00 stderr F I1013 00:23:28.170610 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-vmware-vsphere-csi-driver-operator" (559 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170624555+00:00 stderr F I1013 00:23:28.170615 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170631635+00:00 stderr F I1013 00:23:28.170620 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-problem-detector" (560 of 955) 2025-10-13T00:23:28.170638875+00:00 stderr F I1013 00:23:28.170630 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-problem-detector" (560 of 955): disabled capabilities: Storage, CloudCredential 2025-10-13T00:23:28.170638875+00:00 stderr F I1013 00:23:28.170635 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.170647716+00:00 stderr F I1013 00:23:28.170640 1 sync_worker.go:999] Running sync for customresourcedefinition "clustercsidrivers.operator.openshift.io" (561 of 955) 2025-10-13T00:23:28.222828349+00:00 stderr F I1013 00:23:28.222768 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-dns-operator/dns" (865 of 955) 2025-10-13T00:23:28.222828349+00:00 stderr F I1013 00:23:28.222802 1 task_graph.go:481] Running 22 on worker 1 2025-10-13T00:23:28.271822974+00:00 stderr F I1013 00:23:28.271755 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clustercsidrivers.operator.openshift.io" (561 of 955) 2025-10-13T00:23:28.271822974+00:00 stderr F I1013 00:23:28.271794 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.271822974+00:00 stderr F I1013 00:23:28.271800 1 sync_worker.go:999] Running sync for customresourcedefinition "storages.operator.openshift.io" (562 of 955) 2025-10-13T00:23:28.324074839+00:00 stderr F I1013 00:23:28.323987 1 sync_worker.go:989] Precreated resource clusteroperator "ingress" (419 of 955) 2025-10-13T00:23:28.324074839+00:00 stderr F I1013 00:23:28.324017 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.324074839+00:00 stderr F I1013 00:23:28.324023 1 sync_worker.go:999] Running sync for clusterrole "openshift-ingress-operator" (403 of 955) 2025-10-13T00:23:28.372091507+00:00 stderr F I1013 00:23:28.372003 1 sync_worker.go:1014] Done syncing for 
customresourcedefinition "storages.operator.openshift.io" (562 of 955) 2025-10-13T00:23:28.372091507+00:00 stderr F I1013 00:23:28.372046 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372091507+00:00 stderr F I1013 00:23:28.372054 1 sync_worker.go:999] Running sync for storage "cluster" (563 of 955) 2025-10-13T00:23:28.372091507+00:00 stderr F I1013 00:23:28.372070 1 sync_worker.go:1002] Skipping storage "cluster" (563 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372091507+00:00 stderr F I1013 00:23:28.372077 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372083 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-storage-operator/cluster-storage-operator" (564 of 955) 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372097 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-storage-operator/cluster-storage-operator" (564 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372103 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372109 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-storage-operator-role" (565 of 955) 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372121 1 sync_worker.go:1002] Skipping clusterrolebinding "cluster-storage-operator-role" (565 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372128 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372134 1 sync_worker.go:999] Running sync for service "openshift-cluster-storage-operator/cluster-storage-operator-metrics" (566 of 955) 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372146 1 sync_worker.go:1002] Skipping service "openshift-cluster-storage-operator/cluster-storage-operator-metrics" (566 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372161539+00:00 stderr F I1013 00:23:28.372152 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372184779+00:00 stderr F I1013 00:23:28.372157 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-provisioner-configmap-and-secret-reader-role" (567 of 955) 2025-10-13T00:23:28.372184779+00:00 stderr F I1013 00:23:28.372169 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-provisioner-configmap-and-secret-reader-role" (567 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372184779+00:00 stderr F I1013 00:23:28.372176 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372202830+00:00 stderr F I1013 00:23:28.372182 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-provisioner-volumeattachment-reader-role" (568 of 955) 2025-10-13T00:23:28.372202830+00:00 stderr F I1013 00:23:28.372195 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-provisioner-volumeattachment-reader-role" (568 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372219140+00:00 stderr F I1013 00:23:28.372204 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372219140+00:00 stderr F I1013 00:23:28.372210 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-provisioner-volumesnapshot-reader-role" (569 of 955) 
2025-10-13T00:23:28.372248441+00:00 stderr F I1013 00:23:28.372222 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-provisioner-volumesnapshot-reader-role" (569 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372248441+00:00 stderr F I1013 00:23:28.372231 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372248441+00:00 stderr F I1013 00:23:28.372238 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-resizer-infrastructure-reader-role" (570 of 955) 2025-10-13T00:23:28.372265892+00:00 stderr F I1013 00:23:28.372250 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-resizer-infrastructure-reader-role" (570 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372265892+00:00 stderr F I1013 00:23:28.372258 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372282162+00:00 stderr F I1013 00:23:28.372264 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-resizer-storageclass-reader-role" (571 of 955) 2025-10-13T00:23:28.372299063+00:00 stderr F I1013 00:23:28.372277 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-resizer-storageclass-reader-role" (571 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372299063+00:00 stderr F I1013 00:23:28.372285 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372299063+00:00 stderr F I1013 00:23:28.372291 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-attacher-role" (572 of 955) 2025-10-13T00:23:28.372316393+00:00 stderr F I1013 00:23:28.372304 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-attacher-role" (572 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372316393+00:00 stderr F I1013 00:23:28.372311 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372389355+00:00 stderr F I1013 00:23:28.372317 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-provisioner-role" (573 of 955) 2025-10-13T00:23:28.372389355+00:00 stderr F I1013 00:23:28.372352 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-provisioner-role" (573 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372389355+00:00 stderr F I1013 00:23:28.372360 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372389355+00:00 stderr F I1013 00:23:28.372366 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-resizer-role" (574 of 955) 2025-10-13T00:23:28.372389355+00:00 stderr F I1013 00:23:28.372378 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-resizer-role" (574 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372389355+00:00 stderr F I1013 00:23:28.372385 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372416246+00:00 stderr F I1013 00:23:28.372391 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-snapshotter-role" (575 of 955) 2025-10-13T00:23:28.372416246+00:00 stderr F I1013 00:23:28.372403 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-snapshotter-role" (575 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372416246+00:00 stderr F I1013 00:23:28.372410 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372433186+00:00 stderr F I1013 00:23:28.372417 1 sync_worker.go:999] Running sync for deployment 
"openshift-cluster-storage-operator/cluster-storage-operator" (576 of 955) 2025-10-13T00:23:28.372448767+00:00 stderr F I1013 00:23:28.372429 1 sync_worker.go:1002] Skipping deployment "openshift-cluster-storage-operator/cluster-storage-operator" (576 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372448767+00:00 stderr F I1013 00:23:28.372436 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372448767+00:00 stderr F I1013 00:23:28.372442 1 sync_worker.go:999] Running sync for clusteroperator "storage" (577 of 955) 2025-10-13T00:23:28.372471957+00:00 stderr F I1013 00:23:28.372453 1 sync_worker.go:1002] Skipping clusteroperator "storage" (577 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372471957+00:00 stderr F I1013 00:23:28.372460 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372488078+00:00 stderr F I1013 00:23:28.372466 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-storage-operator/prometheus" (578 of 955) 2025-10-13T00:23:28.372488078+00:00 stderr F I1013 00:23:28.372481 1 sync_worker.go:1002] Skipping prometheusrule "openshift-cluster-storage-operator/prometheus" (578 of 955): disabled capabilities: Storage 2025-10-13T00:23:28.372511009+00:00 stderr F I1013 00:23:28.372500 1 task_graph.go:481] Running 23 on worker 0 2025-10-13T00:23:28.372530069+00:00 stderr F I1013 00:23:28.372512 1 sync_worker.go:982] Skipping precreation of clusteroperator "csi-snapshot-controller" (377 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.372530069+00:00 stderr F I1013 00:23:28.372520 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.372549620+00:00 stderr F I1013 00:23:28.372526 1 sync_worker.go:999] Running sync for customresourcedefinition "csisnapshotcontrollers.operator.openshift.io" (357 of 955) 2025-10-13T00:23:28.420314070+00:00 stderr F I1013 00:23:28.420259 1 sync_worker.go:1014] Done syncing for clusterrole "openshift-ingress-operator" (403 of 955) 2025-10-13T00:23:28.420314070+00:00 stderr F I1013 00:23:28.420290 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.420314070+00:00 stderr F I1013 00:23:28.420296 1 sync_worker.go:999] Running sync for customresourcedefinition "dnsrecords.ingress.operator.openshift.io" (404 of 955) 2025-10-13T00:23:28.470584070+00:00 stderr F I1013 00:23:28.470516 1 sync_worker.go:1014] Done syncing for customresourcedefinition "csisnapshotcontrollers.operator.openshift.io" (357 of 955) 2025-10-13T00:23:28.470584070+00:00 stderr F I1013 00:23:28.470567 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470613471+00:00 stderr F I1013 00:23:28.470579 1 sync_worker.go:999] Running sync for csisnapshotcontroller "cluster" (358 of 955) 2025-10-13T00:23:28.470620881+00:00 stderr F I1013 00:23:28.470609 1 sync_worker.go:1002] Skipping csisnapshotcontroller "cluster" (358 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470627692+00:00 stderr F I1013 00:23:28.470620 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470649142+00:00 stderr F I1013 00:23:28.470631 1 sync_worker.go:999] Running sync for configmap "openshift-cluster-storage-operator/csi-snapshot-controller-operator-config" (359 of 955) 2025-10-13T00:23:28.470674553+00:00 stderr F I1013 00:23:28.470655 1 sync_worker.go:1002] Skipping configmap 
"openshift-cluster-storage-operator/csi-snapshot-controller-operator-config" (359 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470674553+00:00 stderr F I1013 00:23:28.470670 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470698614+00:00 stderr F I1013 00:23:28.470680 1 sync_worker.go:999] Running sync for service "openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics" (360 of 955) 2025-10-13T00:23:28.470719684+00:00 stderr F I1013 00:23:28.470705 1 sync_worker.go:1002] Skipping service "openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics" (360 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470735875+00:00 stderr F I1013 00:23:28.470719 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470735875+00:00 stderr F I1013 00:23:28.470729 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (361 of 955) 2025-10-13T00:23:28.470761965+00:00 stderr F I1013 00:23:28.470747 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (361 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470768816+00:00 stderr F I1013 00:23:28.470761 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470821017+00:00 stderr F I1013 00:23:28.470797 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-snapshot-controller-runner" (362 of 955) 2025-10-13T00:23:28.470839647+00:00 stderr F I1013 00:23:28.470822 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-snapshot-controller-runner" (362 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470846518+00:00 stderr F I1013 00:23:28.470838 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470865478+00:00 stderr F I1013 00:23:28.470848 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-csi-snapshot-controller-role" (363 of 955) 2025-10-13T00:23:28.470886579+00:00 stderr F I1013 00:23:28.470871 1 sync_worker.go:1002] Skipping clusterrolebinding "openshift-csi-snapshot-controller-role" (363 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470893369+00:00 stderr F I1013 00:23:28.470886 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470911509+00:00 stderr F I1013 00:23:28.470896 1 sync_worker.go:999] Running sync for role "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (364 of 955) 2025-10-13T00:23:28.470931370+00:00 stderr F I1013 00:23:28.470917 1 sync_worker.go:1002] Skipping role "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (364 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470938170+00:00 stderr F I1013 00:23:28.470931 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.470956091+00:00 stderr F I1013 00:23:28.470941 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (365 of 955) 2025-10-13T00:23:28.470976521+00:00 stderr F I1013 00:23:28.470962 1 sync_worker.go:1002] Skipping rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (365 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.470983341+00:00 stderr F I1013 00:23:28.470976 1 sync_worker.go:703] 
Dropping status report from earlier in sync loop 2025-10-13T00:23:28.471004462+00:00 stderr F I1013 00:23:28.470986 1 sync_worker.go:999] Running sync for rolebinding "kube-system/csi-snapshot-controller-operator-authentication-reader" (366 of 955) 2025-10-13T00:23:28.525043917+00:00 stderr F I1013 00:23:28.524926 1 sync_worker.go:1014] Done syncing for customresourcedefinition "dnsrecords.ingress.operator.openshift.io" (404 of 955) 2025-10-13T00:23:28.525043917+00:00 stderr F I1013 00:23:28.524984 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.525043917+00:00 stderr F I1013 00:23:28.524997 1 sync_worker.go:999] Running sync for customresourcedefinition "ingresscontrollers.operator.openshift.io" (405 of 955) 2025-10-13T00:23:28.570072362+00:00 stderr F I1013 00:23:28.569997 1 sync_worker.go:1014] Done syncing for rolebinding "kube-system/csi-snapshot-controller-operator-authentication-reader" (366 of 955) 2025-10-13T00:23:28.570072362+00:00 stderr F I1013 00:23:28.570045 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.570072362+00:00 stderr F I1013 00:23:28.570059 1 sync_worker.go:999] Running sync for clusterrole "csi-snapshot-controller-operator-clusterrole" (367 of 955) 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626313 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ingresscontrollers.operator.openshift.io" (405 of 955) 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626361 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626367 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ingress" (406 of 955) 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626381 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ingress" (406 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626387 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626391 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-azure" (407 of 955) 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626400 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-azure" (407 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626405 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626410 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-gcp" (408 of 955) 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626420 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-gcp" (408 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626425 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.626454832+00:00 stderr F I1013 00:23:28.626429 1 sync_worker.go:999] Running sync for namespace "openshift-ingress-operator" (409 of 955) 2025-10-13T00:23:28.670611792+00:00 stderr F I1013 00:23:28.670538 1 sync_worker.go:1014] Done 
syncing for clusterrole "csi-snapshot-controller-operator-clusterrole" (367 of 955) 2025-10-13T00:23:28.670611792+00:00 stderr F I1013 00:23:28.670588 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.670681074+00:00 stderr F I1013 00:23:28.670601 1 sync_worker.go:999] Running sync for role "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (368 of 955) 2025-10-13T00:23:28.722008934+00:00 stderr F I1013 00:23:28.721242 1 sync_worker.go:1014] Done syncing for namespace "openshift-ingress-operator" (409 of 955) 2025-10-13T00:23:28.722008934+00:00 stderr F I1013 00:23:28.721300 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.722008934+00:00 stderr F I1013 00:23:28.721316 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-ingress-operator" (410 of 955) 2025-10-13T00:23:28.771082951+00:00 stderr F I1013 00:23:28.771024 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (368 of 955) 2025-10-13T00:23:28.771082951+00:00 stderr F I1013 00:23:28.771058 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.771082951+00:00 stderr F I1013 00:23:28.771064 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-admin" (369 of 955) 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771080 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-admin" (369 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771087 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771092 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-view" (370 of 955) 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771104 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-view" (370 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771109 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771114 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-basic-user" (371 of 955) 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771124 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-basic-user" (371 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771129 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.771143133+00:00 stderr F I1013 00:23:28.771133 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-storage-admin" (372 of 955) 2025-10-13T00:23:28.771154383+00:00 stderr F I1013 00:23:28.771143 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-storage-admin" (372 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.771154383+00:00 stderr F I1013 00:23:28.771148 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.771162033+00:00 stderr F I1013 00:23:28.771153 1 sync_worker.go:999] Running sync for clusterrolebinding "csi-snapshot-controller-operator-clusterrole" (373 of 955) 
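
Aside (not part of the captured log output): the repeated "Skipping ...: disabled capabilities: ..." entries above show the cluster-version operator's sync worker filtering payload manifests by cluster capability before applying them. The following is a minimal sketch of that gating idea under the assumption that a manifest simply declares the capabilities it requires; the manifest type and disabledFor helper are hypothetical and do not mirror the CVO's actual code.

package main

import "fmt"

// manifest is an illustrative stand-in for a payload manifest that declares
// which cluster capabilities it requires (e.g. "Storage", "CSISnapshot").
type manifest struct {
	Name     string
	Requires []string
}

// disabledFor returns the required capabilities that are not enabled; the
// manifest would be skipped when this list is non-empty, mirroring the
// "Skipping ...: disabled capabilities: ..." messages in the log above.
func disabledFor(m manifest, enabled map[string]bool) []string {
	var missing []string
	for _, c := range m.Requires {
		if !enabled[c] {
			missing = append(missing, c)
		}
	}
	return missing
}

func main() {
	enabled := map[string]bool{"Build": true, "Console": true, "Ingress": true}
	m := manifest{
		Name:     `clusterrolebinding "cluster-storage-operator-role"`,
		Requires: []string{"Storage"},
	}
	if missing := disabledFor(m, enabled); len(missing) != 0 {
		fmt.Printf("Skipping %s: disabled capabilities: %v\n", m.Name, missing)
	}
}
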
2025-10-13T00:23:28.820448416+00:00 stderr F I1013 00:23:28.820393 1 sync_worker.go:1014] Done syncing for clusterrolebinding "openshift-ingress-operator" (410 of 955) 2025-10-13T00:23:28.820524708+00:00 stderr F I1013 00:23:28.820514 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.820554379+00:00 stderr F I1013 00:23:28.820541 1 sync_worker.go:999] Running sync for rolebinding "openshift-ingress-operator/ingress-operator" (411 of 955) 2025-10-13T00:23:28.870430118+00:00 stderr F I1013 00:23:28.870315 1 sync_worker.go:1014] Done syncing for clusterrolebinding "csi-snapshot-controller-operator-clusterrole" (373 of 955) 2025-10-13T00:23:28.870430118+00:00 stderr F I1013 00:23:28.870376 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.870430118+00:00 stderr F I1013 00:23:28.870384 1 sync_worker.go:999] Running sync for clusterrolebinding "csi-snapshot-controller-runner-operator" (374 of 955) 2025-10-13T00:23:28.870430118+00:00 stderr F I1013 00:23:28.870400 1 sync_worker.go:1002] Skipping clusterrolebinding "csi-snapshot-controller-runner-operator" (374 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.870430118+00:00 stderr F I1013 00:23:28.870408 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.870430118+00:00 stderr F I1013 00:23:28.870412 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (375 of 955) 2025-10-13T00:23:28.923453685+00:00 stderr F I1013 00:23:28.922927 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-ingress-operator/ingress-operator" (411 of 955) 2025-10-13T00:23:28.923453685+00:00 stderr F I1013 00:23:28.922961 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.923453685+00:00 stderr F I1013 00:23:28.922967 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/ingress-operator" (412 of 955) 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970769 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (375 of 955) 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970808 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970814 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (376 of 955) 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970827 1 sync_worker.go:1002] Skipping deployment "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (376 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970832 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970837 1 sync_worker.go:999] Running sync for clusteroperator "csi-snapshot-controller" (377 of 955) 2025-10-13T00:23:28.970864516+00:00 stderr F I1013 00:23:28.970848 1 sync_worker.go:1002] Skipping clusteroperator "csi-snapshot-controller" (377 of 955): disabled capabilities: CSISnapshot 2025-10-13T00:23:28.970933198+00:00 stderr F I1013 00:23:28.970863 1 task_graph.go:481] Running 24 on worker 0 2025-10-13T00:23:28.970933198+00:00 stderr F I1013 00:23:28.970870 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
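
Aside (not part of the captured log output): in the SyncWorkerStatus dumps that follow, EnabledCapabilities is a subset of KnownCapabilities, and their difference accounts for the capabilities named in the skip messages above (Storage, CloudCredential, CSISnapshot, among others). The small sketch below reuses the two lists copied verbatim from the status dump and computes that difference; it is illustrative only.

package main

import "fmt"

func main() {
	// Capability lists copied from the ClusterVersionCapabilitiesStatus dump
	// in the surrounding log.
	enabled := []string{
		"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress",
		"MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples",
	}
	known := []string{
		"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential",
		"Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights",
		"MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage",
		"baremetal", "marketplace", "openshift-samples",
	}

	on := make(map[string]bool, len(enabled))
	for _, c := range enabled {
		on[c] = true
	}

	// Everything known but not enabled is what the sync worker reports as
	// "disabled capabilities" when it skips the corresponding manifests.
	var disabled []string
	for _, c := range known {
		if !on[c] {
			disabled = append(disabled, c)
		}
	}
	fmt.Println("disabled capabilities:", disabled)
	// Expected output:
	// disabled capabilities: [CSISnapshot CloudControllerManager CloudCredential Insights NodeTuning Storage baremetal]
}
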
2025-10-13T00:23:28.970933198+00:00 stderr F I1013 00:23:28.970876 1 sync_worker.go:999] Running sync for namespace "openshift-config-operator" (13 of 955) 2025-10-13T00:23:29.021195158+00:00 stderr F I1013 00:23:29.021116 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/ingress-operator" (412 of 955) 2025-10-13T00:23:29.021195158+00:00 stderr F I1013 00:23:29.021157 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.021195158+00:00 stderr F I1013 00:23:29.021165 1 sync_worker.go:999] Running sync for role "openshift-ingress-operator/ingress-operator" (413 of 955) 2025-10-13T00:23:29.040320061+00:00 stderr F I1013 00:23:29.040274 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:29.040850136+00:00 stderr F I1013 00:23:29.040812 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:29.040850136+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:29.040850136+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:29.041019910+00:00 stderr F I1013 00:23:29.040988 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (723.91µs) 2025-10-13T00:23:29.041086672+00:00 stderr F I1013 00:23:29.041063 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:29.041256817+00:00 stderr F I1013 00:23:29.041211 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:29.041418301+00:00 stderr F I1013 00:23:29.041391 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:29.041558475+00:00 stderr F I1013 00:23:29.041465 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, 
CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:29.042027128+00:00 stderr F I1013 00:23:29.041973 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:29.071654554+00:00 stderr F I1013 00:23:29.071493 1 sync_worker.go:1014] Done syncing for namespace "openshift-config-operator" (13 of 955) 2025-10-13T00:23:29.071654554+00:00 stderr F I1013 00:23:29.071531 1 task_graph.go:481] Running 25 on worker 0 2025-10-13T00:23:29.087004181+00:00 stderr F W1013 00:23:29.086925 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:29.091213668+00:00 stderr F I1013 00:23:29.091121 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.051734ms) 2025-10-13T00:23:29.121141002+00:00 stderr F I1013 00:23:29.121069 1 sync_worker.go:1014] Done syncing for role "openshift-ingress-operator/ingress-operator" (413 of 955) 2025-10-13T00:23:29.121141002+00:00 stderr F I1013 00:23:29.121117 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.121141002+00:00 stderr F I1013 00:23:29.121130 1 sync_worker.go:999] Running sync for role "openshift-config/ingress-operator" (414 of 955) 2025-10-13T00:23:29.173374587+00:00 stderr F I1013 00:23:29.173216 1 sync_worker.go:989] Precreated resource clusteroperator "machine-config" (799 of 955) 2025-10-13T00:23:29.173374587+00:00 stderr F I1013 00:23:29.173262 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.173374587+00:00 stderr F I1013 00:23:29.173269 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:machine-config-operator:cluster-reader" (774 of 955) 2025-10-13T00:23:29.220538371+00:00 stderr F I1013 00:23:29.220472 1 sync_worker.go:1014] Done syncing for role "openshift-config/ingress-operator" (414 of 955) 2025-10-13T00:23:29.220614223+00:00 stderr F I1013 00:23:29.220603 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.220643314+00:00 stderr F I1013 00:23:29.220630 1 sync_worker.go:999] Running sync for serviceaccount "openshift-ingress-operator/ingress-operator" (415 of 955) 2025-10-13T00:23:29.270488672+00:00 stderr F I1013 00:23:29.270395 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:machine-config-operator:cluster-reader" (774 of 955) 2025-10-13T00:23:29.270488672+00:00 
stderr F I1013 00:23:29.270439 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.270488672+00:00 stderr F I1013 00:23:29.270448 1 sync_worker.go:999] Running sync for namespace "openshift-machine-config-operator" (775 of 955) 2025-10-13T00:23:29.320355411+00:00 stderr F I1013 00:23:29.320230 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-ingress-operator/ingress-operator" (415 of 955) 2025-10-13T00:23:29.320355411+00:00 stderr F I1013 00:23:29.320266 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.320355411+00:00 stderr F I1013 00:23:29.320274 1 sync_worker.go:999] Running sync for service "openshift-ingress-operator/metrics" (416 of 955) 2025-10-13T00:23:29.370447717+00:00 stderr F I1013 00:23:29.370402 1 sync_worker.go:1014] Done syncing for namespace "openshift-machine-config-operator" (775 of 955) 2025-10-13T00:23:29.370501048+00:00 stderr F I1013 00:23:29.370491 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.370533169+00:00 stderr F I1013 00:23:29.370520 1 sync_worker.go:999] Running sync for namespace "openshift-openstack-infra" (776 of 955) 2025-10-13T00:23:29.420576403+00:00 stderr F I1013 00:23:29.420537 1 sync_worker.go:1014] Done syncing for service "openshift-ingress-operator/metrics" (416 of 955) 2025-10-13T00:23:29.420625454+00:00 stderr F I1013 00:23:29.420615 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.420653965+00:00 stderr F I1013 00:23:29.420641 1 sync_worker.go:999] Running sync for configmap "openshift-ingress-operator/trusted-ca" (417 of 955) 2025-10-13T00:23:29.455693101+00:00 stderr F I1013 00:23:29.455629 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:29.455805404+00:00 stderr F I1013 00:23:29.455757 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:29.455850616+00:00 stderr F I1013 00:23:29.455828 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:29.455925108+00:00 stderr F I1013 00:23:29.455844 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, 
CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:29.456258737+00:00 stderr F I1013 00:23:29.456210 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:29.469896417+00:00 stderr F I1013 00:23:29.469859 1 sync_worker.go:1014] Done syncing for namespace "openshift-openstack-infra" (776 of 955) 2025-10-13T00:23:29.469963619+00:00 stderr F I1013 00:23:29.469953 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.469993239+00:00 stderr F I1013 00:23:29.469980 1 sync_worker.go:999] Running sync for namespace "openshift-kni-infra" (777 of 955) 2025-10-13T00:23:29.479834014+00:00 stderr F W1013 00:23:29.479816 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:29.481097069+00:00 stderr F I1013 00:23:29.481079 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.459859ms) 2025-10-13T00:23:29.529208319+00:00 stderr F I1013 00:23:29.529153 1 sync_worker.go:1014] Done syncing for configmap "openshift-ingress-operator/trusted-ca" (417 of 955) 2025-10-13T00:23:29.529300862+00:00 stderr F I1013 00:23:29.529286 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.529366223+00:00 stderr F I1013 00:23:29.529344 1 sync_worker.go:999] Running sync for deployment "openshift-ingress-operator/ingress-operator" (418 of 955) 2025-10-13T00:23:29.570448728+00:00 stderr F I1013 00:23:29.570379 1 sync_worker.go:1014] Done syncing for namespace "openshift-kni-infra" (777 of 955) 2025-10-13T00:23:29.570560871+00:00 stderr F I1013 00:23:29.570539 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.570621293+00:00 stderr F I1013 00:23:29.570596 1 sync_worker.go:999] Running sync for namespace "openshift-ovirt-infra" (778 of 955) 2025-10-13T00:23:29.669775005+00:00 stderr F I1013 00:23:29.669664 1 sync_worker.go:1014] Done syncing for namespace "openshift-ovirt-infra" (778 of 955) 2025-10-13T00:23:29.669775005+00:00 stderr F I1013 00:23:29.669695 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.669775005+00:00 stderr F I1013 00:23:29.669701 1 sync_worker.go:999] Running sync for namespace "openshift-vsphere-infra" (779 of 955) 2025-10-13T00:23:29.720908929+00:00 stderr F I1013 00:23:29.720833 1 sync_worker.go:1014] Done syncing 
for deployment "openshift-ingress-operator/ingress-operator" (418 of 955) 2025-10-13T00:23:29.720908929+00:00 stderr F I1013 00:23:29.720880 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.720908929+00:00 stderr F I1013 00:23:29.720893 1 sync_worker.go:999] Running sync for clusteroperator "ingress" (419 of 955) 2025-10-13T00:23:29.721365022+00:00 stderr F I1013 00:23:29.721244 1 sync_worker.go:1014] Done syncing for clusteroperator "ingress" (419 of 955) 2025-10-13T00:23:29.721365022+00:00 stderr F I1013 00:23:29.721284 1 task_graph.go:481] Running 26 on worker 1 2025-10-13T00:23:29.770268324+00:00 stderr F I1013 00:23:29.770187 1 sync_worker.go:1014] Done syncing for namespace "openshift-vsphere-infra" (779 of 955) 2025-10-13T00:23:29.770268324+00:00 stderr F I1013 00:23:29.770234 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.770268324+00:00 stderr F I1013 00:23:29.770242 1 sync_worker.go:999] Running sync for namespace "openshift-nutanix-infra" (780 of 955) 2025-10-13T00:23:29.822758836+00:00 stderr F I1013 00:23:29.822701 1 sync_worker.go:989] Precreated resource clusteroperator "openshift-samples" (536 of 955) 2025-10-13T00:23:29.822820618+00:00 stderr F I1013 00:23:29.822810 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.822851689+00:00 stderr F I1013 00:23:29.822836 1 sync_worker.go:999] Running sync for customresourcedefinition "configs.samples.operator.openshift.io" (517 of 955) 2025-10-13T00:23:29.870423664+00:00 stderr F I1013 00:23:29.870309 1 sync_worker.go:1014] Done syncing for namespace "openshift-nutanix-infra" (780 of 955) 2025-10-13T00:23:29.870542697+00:00 stderr F I1013 00:23:29.870520 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.870646080+00:00 stderr F I1013 00:23:29.870618 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-platform-infra" (781 of 955) 2025-10-13T00:23:29.920577981+00:00 stderr F I1013 00:23:29.920525 1 sync_worker.go:1014] Done syncing for customresourcedefinition "configs.samples.operator.openshift.io" (517 of 955) 2025-10-13T00:23:29.920646013+00:00 stderr F I1013 00:23:29.920624 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.920675274+00:00 stderr F I1013 00:23:29.920663 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-samples-operator" (518 of 955) 2025-10-13T00:23:29.969699489+00:00 stderr F I1013 00:23:29.969659 1 sync_worker.go:1014] Done syncing for namespace "openshift-cloud-platform-infra" (781 of 955) 2025-10-13T00:23:29.969771761+00:00 stderr F I1013 00:23:29.969740 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:29.969802022+00:00 stderr F I1013 00:23:29.969788 1 sync_worker.go:999] Running sync for service "openshift-machine-config-operator/machine-config-operator" (782 of 955) 2025-10-13T00:23:30.020500684+00:00 stderr F I1013 00:23:30.020407 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-samples-operator" (518 of 955) 2025-10-13T00:23:30.020500684+00:00 stderr F I1013 00:23:30.020443 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.020500684+00:00 stderr F I1013 00:23:30.020448 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-samples-operator/samples-operator-alerts" (519 of 955) 2025-10-13T00:23:30.041611243+00:00 stderr F I1013 00:23:30.041561 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:30.042054825+00:00 stderr F I1013 00:23:30.042023 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:30.042054825+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:30.042054825+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:30.042206369+00:00 stderr F I1013 00:23:30.042178 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (647.848µs) 2025-10-13T00:23:30.042267721+00:00 stderr F I1013 00:23:30.042246 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:30.042422955+00:00 stderr F I1013 00:23:30.042389 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:30.042533418+00:00 stderr F I1013 00:23:30.042512 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:30.042657152+00:00 stderr F I1013 00:23:30.042566 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:30.043066533+00:00 stderr F I1013 00:23:30.043022 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:30.065243881+00:00 stderr F W1013 00:23:30.065183 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:30.068925763+00:00 stderr F I1013 00:23:30.068776 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.524719ms) 2025-10-13T00:23:30.069714515+00:00 stderr F I1013 00:23:30.069680 1 sync_worker.go:1014] Done syncing for service "openshift-machine-config-operator/machine-config-operator" (782 of 955) 2025-10-13T00:23:30.069714515+00:00 stderr F I1013 00:23:30.069703 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.069744146+00:00 stderr F I1013 00:23:30.069709 1 sync_worker.go:999] Running sync for service "openshift-machine-config-operator/machine-config-controller" (783 of 955) 2025-10-13T00:23:30.171723557+00:00 stderr F I1013 00:23:30.171656 1 sync_worker.go:1014] Done syncing for service "openshift-machine-config-operator/machine-config-controller" (783 of 955) 2025-10-13T00:23:30.171791479+00:00 stderr F I1013 00:23:30.171779 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.171822650+00:00 stderr F I1013 00:23:30.171809 1 sync_worker.go:999] Running sync for service "openshift-machine-config-operator/machine-config-daemon" (784 of 955) 2025-10-13T00:23:30.271477336+00:00 stderr F I1013 00:23:30.271402 1 sync_worker.go:1014] Done syncing for service "openshift-machine-config-operator/machine-config-daemon" (784 of 955) 2025-10-13T00:23:30.271580288+00:00 stderr F I1013 00:23:30.271557 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.271647370+00:00 stderr F I1013 00:23:30.271618 1 sync_worker.go:999] Running sync for customresourcedefinition "containerruntimeconfigs.machineconfiguration.openshift.io" (785 of 955) 2025-10-13T00:23:30.323360661+00:00 stderr F I1013 00:23:30.323257 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-cluster-samples-operator/samples-operator-alerts" (519 of 955) 2025-10-13T00:23:30.323465504+00:00 stderr F I1013 00:23:30.323440 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.323528245+00:00 stderr F I1013 00:23:30.323502 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-samples-operator/cluster-samples-operator" (520 of 955) 2025-10-13T00:23:30.372780677+00:00 stderr F I1013 00:23:30.372697 1 sync_worker.go:1014] Done syncing for customresourcedefinition "containerruntimeconfigs.machineconfiguration.openshift.io" (785 of 955) 2025-10-13T00:23:30.372883640+00:00 stderr F I1013 00:23:30.372861 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.372948362+00:00 stderr F I1013 00:23:30.372921 1 sync_worker.go:999] Running sync for customresourcedefinition "controllerconfigs.machineconfiguration.openshift.io" (786 of 955) 2025-10-13T00:23:30.419199270+00:00 stderr F I1013 00:23:30.419104 1 sync_worker.go:1014] Done syncing for 
serviceaccount "openshift-cluster-samples-operator/cluster-samples-operator" (520 of 955) 2025-10-13T00:23:30.419199270+00:00 stderr F I1013 00:23:30.419136 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.419199270+00:00 stderr F I1013 00:23:30.419142 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-samples-operator-imageconfig-reader" (521 of 955) 2025-10-13T00:23:30.485708013+00:00 stderr F I1013 00:23:30.485564 1 sync_worker.go:1014] Done syncing for customresourcedefinition "controllerconfigs.machineconfiguration.openshift.io" (786 of 955) 2025-10-13T00:23:30.485708013+00:00 stderr F I1013 00:23:30.485606 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.485708013+00:00 stderr F I1013 00:23:30.485612 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeletconfigs.machineconfiguration.openshift.io" (787 of 955) 2025-10-13T00:23:30.520147602+00:00 stderr F I1013 00:23:30.520060 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-samples-operator-imageconfig-reader" (521 of 955) 2025-10-13T00:23:30.520147602+00:00 stderr F I1013 00:23:30.520105 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.520147602+00:00 stderr F I1013 00:23:30.520118 1 sync_worker.go:999] Running sync for clusterrole "cluster-samples-operator-imageconfig-reader" (522 of 955) 2025-10-13T00:23:30.571765520+00:00 stderr F I1013 00:23:30.571655 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeletconfigs.machineconfiguration.openshift.io" (787 of 955) 2025-10-13T00:23:30.571765520+00:00 stderr F I1013 00:23:30.571715 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.571765520+00:00 stderr F I1013 00:23:30.571726 1 sync_worker.go:999] Running sync for customresourcedefinition "machineconfigpools.machineconfiguration.openshift.io" (788 of 955) 2025-10-13T00:23:30.620228940+00:00 stderr F I1013 00:23:30.620135 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-samples-operator-imageconfig-reader" (522 of 955) 2025-10-13T00:23:30.620228940+00:00 stderr F I1013 00:23:30.620178 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.620228940+00:00 stderr F I1013 00:23:30.620189 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-samples-operator-proxy-reader" (523 of 955) 2025-10-13T00:23:30.672743403+00:00 stderr F I1013 00:23:30.672632 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineconfigpools.machineconfiguration.openshift.io" (788 of 955) 2025-10-13T00:23:30.672743403+00:00 stderr F I1013 00:23:30.672671 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.672743403+00:00 stderr F I1013 00:23:30.672676 1 sync_worker.go:999] Running sync for customresourcedefinition "machineconfigs.machineconfiguration.openshift.io" (789 of 955) 2025-10-13T00:23:30.720913774+00:00 stderr F I1013 00:23:30.720848 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-samples-operator-proxy-reader" (523 of 955) 2025-10-13T00:23:30.720913774+00:00 stderr F I1013 00:23:30.720881 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.720913774+00:00 stderr F I1013 00:23:30.720887 1 sync_worker.go:999] Running sync for clusterrole "cluster-samples-operator-proxy-reader" (524 of 955) 2025-10-13T00:23:30.771043411+00:00 stderr F I1013 
00:23:30.770952 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineconfigs.machineconfiguration.openshift.io" (789 of 955) 2025-10-13T00:23:30.771043411+00:00 stderr F I1013 00:23:30.771002 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.771043411+00:00 stderr F I1013 00:23:30.771011 1 sync_worker.go:999] Running sync for customresourcedefinition "machineconfigurations.operator.openshift.io" (790 of 955) 2025-10-13T00:23:30.819812509+00:00 stderr F I1013 00:23:30.819744 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-samples-operator-proxy-reader" (524 of 955) 2025-10-13T00:23:30.819812509+00:00 stderr F I1013 00:23:30.819774 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.819812509+00:00 stderr F I1013 00:23:30.819780 1 sync_worker.go:999] Running sync for role "openshift-cluster-samples-operator/cluster-samples-operator" (525 of 955) 2025-10-13T00:23:30.872015263+00:00 stderr F I1013 00:23:30.871925 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineconfigurations.operator.openshift.io" (790 of 955) 2025-10-13T00:23:30.872015263+00:00 stderr F I1013 00:23:30.871985 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.872050974+00:00 stderr F I1013 00:23:30.872001 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/machine-config-operator-images" (791 of 955) 2025-10-13T00:23:30.920614437+00:00 stderr F I1013 00:23:30.920287 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-samples-operator/cluster-samples-operator" (525 of 955) 2025-10-13T00:23:30.920614437+00:00 stderr F I1013 00:23:30.920564 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.920614437+00:00 stderr F I1013 00:23:30.920578 1 sync_worker.go:999] Running sync for clusterrole "cluster-samples-operator" (526 of 955) 2025-10-13T00:23:30.970449785+00:00 stderr F I1013 00:23:30.970373 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/machine-config-operator-images" (791 of 955) 2025-10-13T00:23:30.970449785+00:00 stderr F I1013 00:23:30.970442 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:30.970500787+00:00 stderr F I1013 00:23:30.970456 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-config-operator/machine-config-operator" (792 of 955) 2025-10-13T00:23:31.019892333+00:00 stderr F I1013 00:23:31.019816 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-samples-operator" (526 of 955) 2025-10-13T00:23:31.019892333+00:00 stderr F I1013 00:23:31.019847 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.019892333+00:00 stderr F I1013 00:23:31.019852 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:cluster-samples-operator:cluster-reader" (527 of 955) 2025-10-13T00:23:31.043076378+00:00 stderr F I1013 00:23:31.042987 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:31.043530861+00:00 stderr F I1013 00:23:31.043471 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:31.043530861+00:00 stderr F NMStateServiceFailure description: The 
NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:31.043530861+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:31.043621834+00:00 stderr F I1013 00:23:31.043569 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (595.136µs) 2025-10-13T00:23:31.043621834+00:00 stderr F I1013 00:23:31.043598 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:31.043749167+00:00 stderr F I1013 00:23:31.043682 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:31.043808289+00:00 stderr F I1013 00:23:31.043772 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:31.043946423+00:00 stderr F I1013 00:23:31.043791 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:31.044426416+00:00 stderr F I1013 00:23:31.044313 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, 
AcceptedRisks:""} 2025-10-13T00:23:31.064187857+00:00 stderr F W1013 00:23:31.064132 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:31.067141949+00:00 stderr F I1013 00:23:31.067085 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.484604ms) 2025-10-13T00:23:31.069409322+00:00 stderr F I1013 00:23:31.069373 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-config-operator/machine-config-operator" (792 of 955) 2025-10-13T00:23:31.069426392+00:00 stderr F I1013 00:23:31.069406 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.069426392+00:00 stderr F I1013 00:23:31.069413 1 sync_worker.go:999] Running sync for clusterrolebinding "custom-account-openshift-machine-config-operator" (793 of 955) 2025-10-13T00:23:31.120870525+00:00 stderr F I1013 00:23:31.120363 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:cluster-samples-operator:cluster-reader" (527 of 955) 2025-10-13T00:23:31.120870525+00:00 stderr F I1013 00:23:31.120838 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.120870525+00:00 stderr F I1013 00:23:31.120847 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-samples-operator/cluster-samples-operator" (528 of 955) 2025-10-13T00:23:31.169616993+00:00 stderr F I1013 00:23:31.169551 1 sync_worker.go:1014] Done syncing for clusterrolebinding "custom-account-openshift-machine-config-operator" (793 of 955) 2025-10-13T00:23:31.169616993+00:00 stderr F I1013 00:23:31.169588 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.169616993+00:00 stderr F I1013 00:23:31.169594 1 sync_worker.go:999] Running sync for role "openshift-machine-config-operator/prometheus-k8s" (794 of 955) 2025-10-13T00:23:31.219691458+00:00 stderr F I1013 00:23:31.219644 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-samples-operator/cluster-samples-operator" (528 of 955) 2025-10-13T00:23:31.219758910+00:00 stderr F I1013 00:23:31.219748 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.219788101+00:00 stderr F I1013 00:23:31.219775 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-samples-operator" (529 of 955) 2025-10-13T00:23:31.269920657+00:00 stderr F I1013 00:23:31.269867 1 sync_worker.go:1014] Done syncing for role "openshift-machine-config-operator/prometheus-k8s" (794 of 955) 2025-10-13T00:23:31.269994769+00:00 stderr F I1013 00:23:31.269984 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.270023220+00:00 stderr F I1013 00:23:31.270010 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-config-operator/prometheus-k8s" (795 of 955) 2025-10-13T00:23:31.326505614+00:00 stderr F I1013 00:23:31.326456 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-samples-operator" (529 of 955) 2025-10-13T00:23:31.326580636+00:00 stderr F I1013 00:23:31.326570 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.326609806+00:00 stderr F I1013 00:23:31.326597 1 sync_worker.go:999] Running sync for rolebinding "openshift/cluster-samples-operator-openshift-edit" (530 of 955) 2025-10-13T00:23:31.370117978+00:00 stderr F I1013 00:23:31.370049 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-config-operator/prometheus-k8s" (795 of 955) 2025-10-13T00:23:31.370205111+00:00 
stderr F I1013 00:23:31.370192 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.370240912+00:00 stderr F I1013 00:23:31.370225 1 sync_worker.go:999] Running sync for deployment "openshift-machine-config-operator/machine-config-operator" (796 of 955) 2025-10-13T00:23:31.420623955+00:00 stderr F I1013 00:23:31.420570 1 sync_worker.go:1014] Done syncing for rolebinding "openshift/cluster-samples-operator-openshift-edit" (530 of 955) 2025-10-13T00:23:31.420697097+00:00 stderr F I1013 00:23:31.420687 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.420724468+00:00 stderr F I1013 00:23:31.420713 1 sync_worker.go:999] Running sync for role "openshift-config/coreos-pull-secret-reader" (531 of 955) 2025-10-13T00:23:31.471823791+00:00 stderr F I1013 00:23:31.471769 1 sync_worker.go:1014] Done syncing for deployment "openshift-machine-config-operator/machine-config-operator" (796 of 955) 2025-10-13T00:23:31.471897323+00:00 stderr F I1013 00:23:31.471886 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.471926224+00:00 stderr F I1013 00:23:31.471913 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/kube-rbac-proxy" (797 of 955) 2025-10-13T00:23:31.524718435+00:00 stderr F I1013 00:23:31.522397 1 sync_worker.go:1014] Done syncing for role "openshift-config/coreos-pull-secret-reader" (531 of 955) 2025-10-13T00:23:31.524718435+00:00 stderr F I1013 00:23:31.522452 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.524718435+00:00 stderr F I1013 00:23:31.522469 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/cluster-samples-operator-openshift-config-secret-reader" (532 of 955) 2025-10-13T00:23:31.575365866+00:00 stderr F I1013 00:23:31.574261 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/kube-rbac-proxy" (797 of 955) 2025-10-13T00:23:31.575365866+00:00 stderr F I1013 00:23:31.574305 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.575365866+00:00 stderr F I1013 00:23:31.574317 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/machine-config-osimageurl" (798 of 955) 2025-10-13T00:23:31.620701797+00:00 stderr F I1013 00:23:31.620605 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/cluster-samples-operator-openshift-config-secret-reader" (532 of 955) 2025-10-13T00:23:31.620701797+00:00 stderr F I1013 00:23:31.620663 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.620701797+00:00 stderr F I1013 00:23:31.620676 1 sync_worker.go:999] Running sync for service "openshift-cluster-samples-operator/metrics" (533 of 955) 2025-10-13T00:23:31.671749289+00:00 stderr F I1013 00:23:31.671660 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/machine-config-osimageurl" (798 of 955) 2025-10-13T00:23:31.671749289+00:00 stderr F I1013 00:23:31.671716 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.671749289+00:00 stderr F I1013 00:23:31.671734 1 sync_worker.go:999] Running sync for clusteroperator "machine-config" (799 of 955) 2025-10-13T00:23:31.672136770+00:00 stderr F E1013 00:23:31.672090 1 task.go:122] error running apply for clusteroperator "machine-config" (799 of 955): Cluster operator machine-config is degraded 2025-10-13T00:23:31.672136770+00:00 
stderr F I1013 00:23:31.672129 1 task_graph.go:481] Running 27 on worker 0 2025-10-13T00:23:31.672150311+00:00 stderr F I1013 00:23:31.672141 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.672179101+00:00 stderr F I1013 00:23:31.672152 1 sync_worker.go:999] Running sync for priorityclass "openshift-user-critical" (579 of 955) 2025-10-13T00:23:31.721900176+00:00 stderr F I1013 00:23:31.721815 1 sync_worker.go:1014] Done syncing for service "openshift-cluster-samples-operator/metrics" (533 of 955) 2025-10-13T00:23:31.721900176+00:00 stderr F I1013 00:23:31.721871 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.721946088+00:00 stderr F I1013 00:23:31.721887 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-samples-operator/cluster-samples-operator" (534 of 955) 2025-10-13T00:23:31.770508810+00:00 stderr F I1013 00:23:31.770430 1 sync_worker.go:1014] Done syncing for priorityclass "openshift-user-critical" (579 of 955) 2025-10-13T00:23:31.770508810+00:00 stderr F I1013 00:23:31.770465 1 task_graph.go:481] Running 28 on worker 0 2025-10-13T00:23:31.770508810+00:00 stderr F I1013 00:23:31.770479 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.770508810+00:00 stderr F I1013 00:23:31.770487 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/etcd-dashboard" (814 of 955) 2025-10-13T00:23:31.821355967+00:00 stderr F I1013 00:23:31.821228 1 sync_worker.go:1014] Done syncing for deployment "openshift-cluster-samples-operator/cluster-samples-operator" (534 of 955) 2025-10-13T00:23:31.821355967+00:00 stderr F I1013 00:23:31.821264 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.821355967+00:00 stderr F I1013 00:23:31.821272 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (535 of 955) 2025-10-13T00:23:31.871458122+00:00 stderr F I1013 00:23:31.871356 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/etcd-dashboard" (814 of 955) 2025-10-13T00:23:31.871458122+00:00 stderr F I1013 00:23:31.871397 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.871458122+00:00 stderr F I1013 00:23:31.871405 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-etcd" (815 of 955) 2025-10-13T00:23:31.920696844+00:00 stderr F I1013 00:23:31.920612 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (535 of 955) 2025-10-13T00:23:31.920696844+00:00 stderr F I1013 00:23:31.920650 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.920696844+00:00 stderr F I1013 00:23:31.920657 1 sync_worker.go:999] Running sync for clusteroperator "openshift-samples" (536 of 955) 2025-10-13T00:23:31.920902200+00:00 stderr F I1013 00:23:31.920871 1 sync_worker.go:1014] Done syncing for clusteroperator "openshift-samples" (536 of 955) 2025-10-13T00:23:31.920902200+00:00 stderr F I1013 00:23:31.920887 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.920902200+00:00 stderr F I1013 00:23:31.920893 1 sync_worker.go:999] Running sync for imagestream "openshift/cli" (537 of 955) 2025-10-13T00:23:31.970487711+00:00 stderr F I1013 00:23:31.970411 1 sync_worker.go:1014] Done syncing for configmap 
"openshift-config-managed/grafana-dashboard-etcd" (815 of 955) 2025-10-13T00:23:31.970487711+00:00 stderr F I1013 00:23:31.970447 1 task_graph.go:481] Running 29 on worker 0 2025-10-13T00:23:31.970487711+00:00 stderr F I1013 00:23:31.970464 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:31.970487711+00:00 stderr F I1013 00:23:31.970473 1 sync_worker.go:999] Running sync for role "openshift-etcd-operator/prometheus-k8s" (866 of 955) 2025-10-13T00:23:32.025153074+00:00 stderr F I1013 00:23:32.025048 1 sync_worker.go:1014] Done syncing for imagestream "openshift/cli" (537 of 955) 2025-10-13T00:23:32.025153074+00:00 stderr F I1013 00:23:32.025099 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.025153074+00:00 stderr F I1013 00:23:32.025112 1 sync_worker.go:999] Running sync for imagestream "openshift/cli-artifacts" (538 of 955) 2025-10-13T00:23:32.044583805+00:00 stderr F I1013 00:23:32.044455 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:32.044994426+00:00 stderr F I1013 00:23:32.044904 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:32.044994426+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:32.044994426+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:32.045104499+00:00 stderr F I1013 00:23:32.045030 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (585.576µs) 2025-10-13T00:23:32.045104499+00:00 stderr F I1013 00:23:32.045069 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:32.045226553+00:00 stderr F I1013 00:23:32.045159 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:32.045297255+00:00 stderr F I1013 00:23:32.045256 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:32.045454509+00:00 stderr F I1013 00:23:32.045277 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:32.045930602+00:00 stderr F I1013 00:23:32.045823 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:32.072134412+00:00 stderr F I1013 00:23:32.071624 1 sync_worker.go:1014] Done syncing for role "openshift-etcd-operator/prometheus-k8s" (866 of 955) 2025-10-13T00:23:32.072134412+00:00 stderr F I1013 00:23:32.071664 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.072134412+00:00 stderr F I1013 00:23:32.071679 1 sync_worker.go:999] Running sync for rolebinding "openshift-etcd-operator/prometheus-k8s" (867 of 955) 2025-10-13T00:23:32.082680436+00:00 stderr F W1013 00:23:32.082596 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:32.085301469+00:00 stderr F I1013 00:23:32.085235 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.165579ms) 2025-10-13T00:23:32.126542788+00:00 stderr F I1013 00:23:32.126460 1 sync_worker.go:1014] Done syncing for imagestream "openshift/cli-artifacts" (538 of 955) 2025-10-13T00:23:32.126542788+00:00 stderr F I1013 00:23:32.126519 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.126578889+00:00 stderr F I1013 00:23:32.126535 1 sync_worker.go:999] Running sync for imagestream "openshift/installer" (539 of 955) 2025-10-13T00:23:32.170886203+00:00 stderr F I1013 00:23:32.170763 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-etcd-operator/prometheus-k8s" (867 of 955) 2025-10-13T00:23:32.170886203+00:00 stderr F I1013 00:23:32.170813 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.170886203+00:00 stderr F I1013 00:23:32.170825 1 sync_worker.go:999] Running sync for prometheusrule "openshift-etcd-operator/etcd-prometheus-rules" (868 of 955) 2025-10-13T00:23:32.224604219+00:00 stderr F I1013 00:23:32.224532 1 sync_worker.go:1014] Done syncing for imagestream "openshift/installer" (539 of 955) 2025-10-13T00:23:32.224604219+00:00 stderr F I1013 00:23:32.224572 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.224604219+00:00 stderr F I1013 
00:23:32.224578 1 sync_worker.go:999] Running sync for imagestream "openshift/installer-artifacts" (540 of 955) 2025-10-13T00:23:32.275265031+00:00 stderr F I1013 00:23:32.274758 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-etcd-operator/etcd-prometheus-rules" (868 of 955) 2025-10-13T00:23:32.275265031+00:00 stderr F I1013 00:23:32.275224 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.275265031+00:00 stderr F I1013 00:23:32.275235 1 sync_worker.go:999] Running sync for servicemonitor "openshift-etcd-operator/etcd-operator" (869 of 955) 2025-10-13T00:23:32.324372939+00:00 stderr F I1013 00:23:32.324249 1 sync_worker.go:1014] Done syncing for imagestream "openshift/installer-artifacts" (540 of 955) 2025-10-13T00:23:32.324372939+00:00 stderr F I1013 00:23:32.324287 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.324372939+00:00 stderr F I1013 00:23:32.324292 1 sync_worker.go:999] Running sync for imagestream "openshift/tests" (541 of 955) 2025-10-13T00:23:32.383488245+00:00 stderr F I1013 00:23:32.383217 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-etcd-operator/etcd-operator" (869 of 955) 2025-10-13T00:23:32.383488245+00:00 stderr F I1013 00:23:32.383285 1 task_graph.go:481] Running 30 on worker 0 2025-10-13T00:23:32.383488245+00:00 stderr F I1013 00:23:32.383309 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.383488245+00:00 stderr F I1013 00:23:32.383374 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-version" (1 of 955) 2025-10-13T00:23:32.425860015+00:00 stderr F I1013 00:23:32.425761 1 sync_worker.go:1014] Done syncing for imagestream "openshift/tests" (541 of 955) 2025-10-13T00:23:32.425860015+00:00 stderr F I1013 00:23:32.425803 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.425860015+00:00 stderr F I1013 00:23:32.425815 1 sync_worker.go:999] Running sync for imagestream "openshift/tools" (542 of 955) 2025-10-13T00:23:32.475152489+00:00 stderr F I1013 00:23:32.475070 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-version" (1 of 955) 2025-10-13T00:23:32.475152489+00:00 stderr F I1013 00:23:32.475116 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.475152489+00:00 stderr F I1013 00:23:32.475129 1 sync_worker.go:999] Running sync for configmap "openshift-config/admin-acks" (2 of 955) 2025-10-13T00:23:32.526514739+00:00 stderr F I1013 00:23:32.526386 1 sync_worker.go:1014] Done syncing for imagestream "openshift/tools" (542 of 955) 2025-10-13T00:23:32.526514739+00:00 stderr F I1013 00:23:32.526449 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.526514739+00:00 stderr F I1013 00:23:32.526465 1 sync_worker.go:999] Running sync for imagestream "openshift/must-gather" (543 of 955) 2025-10-13T00:23:32.571117162+00:00 stderr F I1013 00:23:32.571016 1 sync_worker.go:1014] Done syncing for configmap "openshift-config/admin-acks" (2 of 955) 2025-10-13T00:23:32.571117162+00:00 stderr F I1013 00:23:32.571061 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.571117162+00:00 stderr F I1013 00:23:32.571073 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/admin-gates" (3 of 955) 2025-10-13T00:23:32.627090121+00:00 stderr F I1013 00:23:32.626982 1 sync_worker.go:1014] Done syncing for imagestream 
"openshift/must-gather" (543 of 955) 2025-10-13T00:23:32.627090121+00:00 stderr F I1013 00:23:32.627032 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.627090121+00:00 stderr F I1013 00:23:32.627045 1 sync_worker.go:999] Running sync for imagestream "openshift/oauth-proxy" (544 of 955) 2025-10-13T00:23:32.670406007+00:00 stderr F I1013 00:23:32.670297 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/admin-gates" (3 of 955) 2025-10-13T00:23:32.670406007+00:00 stderr F I1013 00:23:32.670384 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.670469169+00:00 stderr F I1013 00:23:32.670400 1 sync_worker.go:999] Running sync for customresourcedefinition "clusteroperators.config.openshift.io" (4 of 955) 2025-10-13T00:23:32.725046629+00:00 stderr F I1013 00:23:32.724946 1 sync_worker.go:1014] Done syncing for imagestream "openshift/oauth-proxy" (544 of 955) 2025-10-13T00:23:32.725046629+00:00 stderr F I1013 00:23:32.725007 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.725046629+00:00 stderr F I1013 00:23:32.725024 1 sync_worker.go:999] Running sync for imagestream "openshift/hello-openshift" (545 of 955) 2025-10-13T00:23:32.770856076+00:00 stderr F I1013 00:23:32.770770 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusteroperators.config.openshift.io" (4 of 955) 2025-10-13T00:23:32.770856076+00:00 stderr F I1013 00:23:32.770805 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.770856076+00:00 stderr F I1013 00:23:32.770813 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterversions.config.openshift.io" (5 of 955) 2025-10-13T00:23:32.824398977+00:00 stderr F I1013 00:23:32.824294 1 sync_worker.go:1014] Done syncing for imagestream "openshift/hello-openshift" (545 of 955) 2025-10-13T00:23:32.824398977+00:00 stderr F I1013 00:23:32.824339 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.824398977+00:00 stderr F I1013 00:23:32.824345 1 sync_worker.go:999] Running sync for imagestream "openshift/network-tools" (546 of 955) 2025-10-13T00:23:32.874429541+00:00 stderr F I1013 00:23:32.874349 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterversions.config.openshift.io" (5 of 955) 2025-10-13T00:23:32.874429541+00:00 stderr F I1013 00:23:32.874383 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.874429541+00:00 stderr F I1013 00:23:32.874389 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-version-operator" (6 of 955) 2025-10-13T00:23:32.924222768+00:00 stderr F I1013 00:23:32.924155 1 sync_worker.go:1014] Done syncing for imagestream "openshift/network-tools" (546 of 955) 2025-10-13T00:23:32.924222768+00:00 stderr F I1013 00:23:32.924183 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.924222768+00:00 stderr F I1013 00:23:32.924189 1 sync_worker.go:999] Running sync for role "openshift-cluster-samples-operator/prometheus-k8s" (547 of 955) 2025-10-13T00:23:32.970880677+00:00 stderr F I1013 00:23:32.970818 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-version-operator" (6 of 955) 2025-10-13T00:23:32.970880677+00:00 stderr F I1013 00:23:32.970847 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:32.970880677+00:00 stderr F I1013 00:23:32.970853 1 
sync_worker.go:999] Running sync for deployment "openshift-cluster-version/cluster-version-operator" (7 of 955) 2025-10-13T00:23:33.020256003+00:00 stderr F I1013 00:23:33.020226 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-samples-operator/prometheus-k8s" (547 of 955) 2025-10-13T00:23:33.020310744+00:00 stderr F I1013 00:23:33.020301 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.020353875+00:00 stderr F I1013 00:23:33.020340 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-samples-operator/prometheus-k8s" (548 of 955) 2025-10-13T00:23:33.045405053+00:00 stderr F I1013 00:23:33.045362 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:33.045647080+00:00 stderr F I1013 00:23:33.045596 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:33.045647080+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:33.045647080+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:33.045670711+00:00 stderr F I1013 00:23:33.045642 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (287.228µs) 2025-10-13T00:23:33.045670711+00:00 stderr F I1013 00:23:33.045654 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:33.045731502+00:00 stderr F I1013 00:23:33.045698 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:33.045759723+00:00 stderr F I1013 00:23:33.045740 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:33.045804924+00:00 stderr F I1013 00:23:33.045747 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", 
"DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:33.046012190+00:00 stderr F I1013 00:23:33.045959 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:33.083802883+00:00 stderr F W1013 00:23:33.083733 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:33.085002726+00:00 stderr F I1013 00:23:33.084959 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.303125ms) 2025-10-13T00:23:33.121027690+00:00 stderr F I1013 00:23:33.120923 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-samples-operator/prometheus-k8s" (548 of 955) 2025-10-13T00:23:33.121027690+00:00 stderr F I1013 00:23:33.120969 1 task_graph.go:481] Running 31 on worker 1 2025-10-13T00:23:33.121027690+00:00 stderr F I1013 00:23:33.121013 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.121076801+00:00 stderr F I1013 00:23:33.121026 1 sync_worker.go:999] Running sync for operatorgroup "openshift-monitoring/openshift-cluster-monitoring" (826 of 955) 2025-10-13T00:23:33.179851768+00:00 stderr F I1013 00:23:33.179754 1 sync_worker.go:1014] Done syncing for deployment "openshift-cluster-version/cluster-version-operator" (7 of 955) 2025-10-13T00:23:33.179851768+00:00 stderr F I1013 00:23:33.179808 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.179851768+00:00 stderr F I1013 00:23:33.179824 1 sync_worker.go:999] Running sync for service "openshift-cluster-version/cluster-version-operator" (8 of 955) 2025-10-13T00:23:33.221204810+00:00 stderr F I1013 00:23:33.221084 1 sync_worker.go:1014] Done syncing for operatorgroup "openshift-monitoring/openshift-cluster-monitoring" (826 of 955) 2025-10-13T00:23:33.221204810+00:00 stderr F I1013 00:23:33.221150 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.221204810+00:00 stderr F I1013 00:23:33.221166 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-cluster-total" (827 of 955) 2025-10-13T00:23:33.270977527+00:00 stderr F I1013 00:23:33.270880 1 sync_worker.go:1014] Done syncing for service "openshift-cluster-version/cluster-version-operator" (8 of 955) 2025-10-13T00:23:33.270977527+00:00 stderr F I1013 00:23:33.270930 1 task_graph.go:481] Running 32 on worker 0 2025-10-13T00:23:33.270977527+00:00 stderr F I1013 00:23:33.270949 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-10-13T00:23:33.270977527+00:00 stderr F I1013 00:23:33.270965 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager-operator/prometheus-k8s" (896 of 955) 2025-10-13T00:23:33.322490221+00:00 stderr F I1013 00:23:33.322366 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-cluster-total" (827 of 955) 2025-10-13T00:23:33.322490221+00:00 stderr F I1013 00:23:33.322417 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.322490221+00:00 stderr F I1013 00:23:33.322433 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-cluster-total" (828 of 955) 2025-10-13T00:23:33.370721175+00:00 stderr F I1013 00:23:33.370624 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager-operator/prometheus-k8s" (896 of 955) 2025-10-13T00:23:33.370721175+00:00 stderr F I1013 00:23:33.370664 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.370721175+00:00 stderr F I1013 00:23:33.370671 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager-operator/prometheus-k8s" (897 of 955) 2025-10-13T00:23:33.420175522+00:00 stderr F I1013 00:23:33.420075 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-cluster-total" (828 of 955) 2025-10-13T00:23:33.420175522+00:00 stderr F I1013 00:23:33.420106 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.420175522+00:00 stderr F I1013 00:23:33.420112 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-cluster" (829 of 955) 2025-10-13T00:23:33.470653019+00:00 stderr F I1013 00:23:33.470563 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager-operator/prometheus-k8s" (897 of 955) 2025-10-13T00:23:33.470653019+00:00 stderr F I1013 00:23:33.470608 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.470653019+00:00 stderr F I1013 00:23:33.470616 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (898 of 955) 2025-10-13T00:23:33.522128202+00:00 stderr F I1013 00:23:33.521845 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-cluster" (829 of 955) 2025-10-13T00:23:33.522128202+00:00 stderr F I1013 00:23:33.521887 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.522128202+00:00 stderr F I1013 00:23:33.521894 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" (830 of 955) 2025-10-13T00:23:33.570764677+00:00 stderr F I1013 00:23:33.570704 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (898 of 955) 2025-10-13T00:23:33.570841929+00:00 stderr F I1013 00:23:33.570826 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.570874730+00:00 stderr F I1013 00:23:33.570861 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (899 of 955) 2025-10-13T00:23:33.620015519+00:00 stderr F I1013 00:23:33.619962 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" (830 of 955) 2025-10-13T00:23:33.620086801+00:00 stderr F I1013 00:23:33.620073 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.620125802+00:00 stderr F I1013 00:23:33.620109 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-namespace" (831 of 955) 2025-10-13T00:23:33.670925807+00:00 stderr F I1013 00:23:33.670867 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (899 of 955) 2025-10-13T00:23:33.670925807+00:00 stderr F I1013 00:23:33.670898 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.670925807+00:00 stderr F I1013 00:23:33.670904 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (900 of 955) 2025-10-13T00:23:33.721548207+00:00 stderr F I1013 00:23:33.721495 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-namespace" (831 of 955) 2025-10-13T00:23:33.721625520+00:00 stderr F I1013 00:23:33.721612 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.721664161+00:00 stderr F I1013 00:23:33.721648 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" (832 of 955) 2025-10-13T00:23:33.771775566+00:00 stderr F I1013 00:23:33.771726 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (900 of 955) 2025-10-13T00:23:33.771849829+00:00 stderr F I1013 00:23:33.771836 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.771890100+00:00 stderr F I1013 00:23:33.771872 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (901 of 955) 2025-10-13T00:23:33.830692358+00:00 stderr F I1013 00:23:33.830631 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" (832 of 955) 2025-10-13T00:23:33.830782790+00:00 stderr F I1013 00:23:33.830764 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.830935154+00:00 stderr F I1013 00:23:33.830911 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-node" (833 of 955) 2025-10-13T00:23:33.871732921+00:00 stderr F I1013 00:23:33.871641 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (901 of 955) 2025-10-13T00:23:33.871732921+00:00 stderr F I1013 00:23:33.871676 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.871732921+00:00 stderr F I1013 00:23:33.871682 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager/prometheus-k8s" (902 of 955) 2025-10-13T00:23:33.923495823+00:00 stderr F I1013 00:23:33.922978 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-node" (833 of 955) 2025-10-13T00:23:33.923495823+00:00 stderr F I1013 00:23:33.923019 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.923495823+00:00 stderr F I1013 00:23:33.923027 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" (834 of 955) 2025-10-13T00:23:33.977370273+00:00 stderr F I1013 00:23:33.972914 1 sync_worker.go:1014] Done 
syncing for role "openshift-kube-controller-manager/prometheus-k8s" (902 of 955) 2025-10-13T00:23:33.977370273+00:00 stderr F I1013 00:23:33.972957 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:33.977370273+00:00 stderr F I1013 00:23:33.972968 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (903 of 955) 2025-10-13T00:23:34.023525849+00:00 stderr F I1013 00:23:34.021371 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" (834 of 955) 2025-10-13T00:23:34.023525849+00:00 stderr F I1013 00:23:34.021401 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.023525849+00:00 stderr F I1013 00:23:34.021407 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-pod" (835 of 955) 2025-10-13T00:23:34.046523080+00:00 stderr F I1013 00:23:34.046449 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:34.046765396+00:00 stderr F I1013 00:23:34.046721 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:34.046765396+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:34.046765396+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:34.046822688+00:00 stderr F I1013 00:23:34.046783 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (348.54µs) 2025-10-13T00:23:34.046822688+00:00 stderr F I1013 00:23:34.046803 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:34.046880470+00:00 stderr F I1013 00:23:34.046852 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:34.046923391+00:00 stderr F I1013 00:23:34.046908 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:34.046990393+00:00 stderr F I1013 00:23:34.046920 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:34.047281321+00:00 stderr F I1013 00:23:34.047234 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:34.067233077+00:00 stderr F W1013 00:23:34.067171 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:34.068484931+00:00 stderr F I1013 00:23:34.068449 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (21.644053ms) 2025-10-13T00:23:34.070425366+00:00 stderr F I1013 00:23:34.069878 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (903 of 955) 2025-10-13T00:23:34.070425366+00:00 stderr F I1013 00:23:34.069902 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.070425366+00:00 stderr F I1013 00:23:34.069908 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (904 of 955) 2025-10-13T00:23:34.123109503+00:00 stderr F I1013 00:23:34.122383 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-pod" (835 of 955) 2025-10-13T00:23:34.123109503+00:00 stderr F I1013 00:23:34.122419 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.123109503+00:00 stderr F I1013 00:23:34.122425 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" (836 of 955) 2025-10-13T00:23:34.171803389+00:00 stderr F I1013 00:23:34.170955 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (904 of 955) 2025-10-13T00:23:34.171803389+00:00 stderr F I1013 00:23:34.170986 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.171803389+00:00 stderr F I1013 00:23:34.170992 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager/prometheus-k8s" (905 of 955) 2025-10-13T00:23:34.219488488+00:00 stderr F I1013 00:23:34.219430 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" (836 of 955) 2025-10-13T00:23:34.219559430+00:00 stderr F 
I1013 00:23:34.219545 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.219601021+00:00 stderr F I1013 00:23:34.219582 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-workload" (837 of 955) 2025-10-13T00:23:34.271179548+00:00 stderr F I1013 00:23:34.271115 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager/prometheus-k8s" (905 of 955) 2025-10-13T00:23:34.271258960+00:00 stderr F I1013 00:23:34.271243 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.271311461+00:00 stderr F I1013 00:23:34.271292 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (906 of 955) 2025-10-13T00:23:34.319795672+00:00 stderr F I1013 00:23:34.319732 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-workload" (837 of 955) 2025-10-13T00:23:34.319795672+00:00 stderr F I1013 00:23:34.319772 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.319795672+00:00 stderr F I1013 00:23:34.319780 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" (838 of 955) 2025-10-13T00:23:34.370020431+00:00 stderr F I1013 00:23:34.369943 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (906 of 955) 2025-10-13T00:23:34.370020431+00:00 stderr F I1013 00:23:34.369977 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.370020431+00:00 stderr F I1013 00:23:34.369984 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (907 of 955) 2025-10-13T00:23:34.420075275+00:00 stderr F I1013 00:23:34.419992 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" (838 of 955) 2025-10-13T00:23:34.420075275+00:00 stderr F I1013 00:23:34.420028 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.420075275+00:00 stderr F I1013 00:23:34.420037 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-workloads-namespace" (839 of 955) 2025-10-13T00:23:34.470994984+00:00 stderr F I1013 00:23:34.470899 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (907 of 955) 2025-10-13T00:23:34.470994984+00:00 stderr F I1013 00:23:34.470943 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.470994984+00:00 stderr F I1013 00:23:34.470950 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (908 of 955) 2025-10-13T00:23:34.521470980+00:00 stderr F I1013 00:23:34.521375 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-workloads-namespace" (839 of 955) 2025-10-13T00:23:34.521470980+00:00 stderr F I1013 00:23:34.521420 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.521470980+00:00 stderr F I1013 00:23:34.521428 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" (840 of 955) 2025-10-13T00:23:34.570670630+00:00 stderr F I1013 00:23:34.570583 1 sync_worker.go:1014] Done syncing for 
prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (908 of 955) 2025-10-13T00:23:34.570670630+00:00 stderr F I1013 00:23:34.570617 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.570670630+00:00 stderr F I1013 00:23:34.570623 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (909 of 955) 2025-10-13T00:23:34.619726266+00:00 stderr F I1013 00:23:34.619661 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" (840 of 955) 2025-10-13T00:23:34.619726266+00:00 stderr F I1013 00:23:34.619701 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.619726266+00:00 stderr F I1013 00:23:34.619714 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-namespace-by-pod" (841 of 955) 2025-10-13T00:23:34.672653971+00:00 stderr F I1013 00:23:34.672283 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (909 of 955) 2025-10-13T00:23:34.672653971+00:00 stderr F I1013 00:23:34.672359 1 task_graph.go:481] Running 33 on worker 0 2025-10-13T00:23:34.672653971+00:00 stderr F I1013 00:23:34.672377 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.672653971+00:00 stderr F I1013 00:23:34.672389 1 sync_worker.go:999] Running sync for role "openshift-service-ca-operator/prometheus-k8s" (953 of 955) 2025-10-13T00:23:34.722140099+00:00 stderr F I1013 00:23:34.722036 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-namespace-by-pod" (841 of 955) 2025-10-13T00:23:34.722140099+00:00 stderr F I1013 00:23:34.722084 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.722140099+00:00 stderr F I1013 00:23:34.722101 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" (842 of 955) 2025-10-13T00:23:34.770081135+00:00 stderr F I1013 00:23:34.769991 1 sync_worker.go:1014] Done syncing for role "openshift-service-ca-operator/prometheus-k8s" (953 of 955) 2025-10-13T00:23:34.770081135+00:00 stderr F I1013 00:23:34.770039 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.770081135+00:00 stderr F I1013 00:23:34.770054 1 sync_worker.go:999] Running sync for rolebinding "openshift-service-ca-operator/prometheus-k8s" (954 of 955) 2025-10-13T00:23:34.820547320+00:00 stderr F I1013 00:23:34.820493 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" (842 of 955) 2025-10-13T00:23:34.820632103+00:00 stderr F I1013 00:23:34.820617 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.820674884+00:00 stderr F I1013 00:23:34.820656 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-node-cluster-rsrc-use" (843 of 955) 2025-10-13T00:23:34.871130850+00:00 stderr F I1013 00:23:34.871075 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-service-ca-operator/prometheus-k8s" (954 of 955) 2025-10-13T00:23:34.871244713+00:00 stderr F I1013 00:23:34.871229 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.871288814+00:00 stderr F I1013 00:23:34.871271 1 sync_worker.go:999] Running 
sync for servicemonitor "openshift-service-ca-operator/service-ca-operator" (955 of 955) 2025-10-13T00:23:34.921091121+00:00 stderr F I1013 00:23:34.921029 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-node-cluster-rsrc-use" (843 of 955) 2025-10-13T00:23:34.921091121+00:00 stderr F I1013 00:23:34.921060 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.921091121+00:00 stderr F I1013 00:23:34.921065 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" (844 of 955) 2025-10-13T00:23:34.971105884+00:00 stderr F I1013 00:23:34.971008 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-service-ca-operator/service-ca-operator" (955 of 955) 2025-10-13T00:23:34.971168466+00:00 stderr F I1013 00:23:34.971132 1 task_graph.go:481] Running 34 on worker 0 2025-10-13T00:23:34.971168466+00:00 stderr F I1013 00:23:34.971147 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:34.971168466+00:00 stderr F I1013 00:23:34.971154 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver-operator/prometheus-k8s" (874 of 955) 2025-10-13T00:23:35.020853400+00:00 stderr F I1013 00:23:35.020769 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" (844 of 955) 2025-10-13T00:23:35.020853400+00:00 stderr F I1013 00:23:35.020805 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.020853400+00:00 stderr F I1013 00:23:35.020813 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-node-rsrc-use" (845 of 955) 2025-10-13T00:23:35.046995428+00:00 stderr F I1013 00:23:35.046890 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:35.047371399+00:00 stderr F I1013 00:23:35.047306 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:35.047371399+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:35.047371399+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:35.047543213+00:00 stderr F I1013 00:23:35.047507 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (628.737µs) 2025-10-13T00:23:35.047543213+00:00 stderr F I1013 00:23:35.047535 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:35.047655657+00:00 stderr F I1013 00:23:35.047597 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:35.047689018+00:00 stderr F I1013 00:23:35.047671 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:35.047796931+00:00 stderr F I1013 00:23:35.047684 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:35.048228713+00:00 stderr F I1013 00:23:35.048141 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:35.070831812+00:00 stderr F I1013 00:23:35.070714 1 
sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver-operator/prometheus-k8s" (874 of 955) 2025-10-13T00:23:35.070831812+00:00 stderr F I1013 00:23:35.070774 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.070831812+00:00 stderr F I1013 00:23:35.070793 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver-operator/prometheus-k8s" (875 of 955) 2025-10-13T00:23:35.100659043+00:00 stderr F W1013 00:23:35.100600 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:35.104752167+00:00 stderr F I1013 00:23:35.104702 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.158702ms) 2025-10-13T00:23:35.121682419+00:00 stderr F I1013 00:23:35.121595 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-node-rsrc-use" (845 of 955) 2025-10-13T00:23:35.121682419+00:00 stderr F I1013 00:23:35.121632 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.121682419+00:00 stderr F I1013 00:23:35.121640 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" (846 of 955) 2025-10-13T00:23:35.170612411+00:00 stderr F I1013 00:23:35.170520 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver-operator/prometheus-k8s" (875 of 955) 2025-10-13T00:23:35.170612411+00:00 stderr F I1013 00:23:35.170557 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.170612411+00:00 stderr F I1013 00:23:35.170565 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (876 of 955) 2025-10-13T00:23:35.219929904+00:00 stderr F I1013 00:23:35.219846 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" (846 of 955) 2025-10-13T00:23:35.219929904+00:00 stderr F I1013 00:23:35.219881 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.219929904+00:00 stderr F I1013 00:23:35.219889 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-pod-total" (847 of 955) 2025-10-13T00:23:35.270857553+00:00 stderr F I1013 00:23:35.270754 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (876 of 955) 2025-10-13T00:23:35.270857553+00:00 stderr F I1013 00:23:35.270791 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.270857553+00:00 stderr F I1013 00:23:35.270801 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (877 of 955) 2025-10-13T00:23:35.321524014+00:00 stderr F I1013 00:23:35.321428 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-pod-total" (847 of 955) 2025-10-13T00:23:35.321524014+00:00 stderr F I1013 00:23:35.321467 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.321524014+00:00 stderr F I1013 00:23:35.321477 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-pod-total" (848 of 955) 2025-10-13T00:23:35.370096777+00:00 stderr F I1013 00:23:35.370029 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (877 of 955) 2025-10-13T00:23:35.370096777+00:00 stderr F I1013 00:23:35.370075 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
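[editor's note] The records above are dominated by paired sync_worker messages: "Running sync for <kind> "<namespace/name>" (N of 955)" followed roughly 100 ms later by "Done syncing for ..." with the same index. The following is a minimal, hypothetical helper (not part of this CI job) that pairs those two record types and prints how long each manifest took to apply; it assumes Python 3 and that the unwrapped container log (one record per line) is passed as the first argument. The zero-width split also copes with captures like this one, where several records are wrapped onto a single physical line.
#!/usr/bin/env python3
# Hypothetical helper, not part of this CI job: pairs the CVO sync_worker
# "Running sync for ..." and "Done syncing for ..." records and reports how
# long each manifest took to apply.
import re
import sys
from datetime import datetime

# Zero-width split before each container timestamp, so wrapped captures that
# concatenate several records still break apart cleanly.
SPLIT = re.compile(r'(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+\+00:00 )')
EVENT = re.compile(
    r'(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.(?P<frac>\d+)\+00:00 .*?'
    r'(?P<event>Running sync|Done syncing) for '
    r'(?P<kind>\S+) "(?P<name>[^"]+)" \((?P<index>\d+) of (?P<total>\d+)\)'
)

def parse_ts(m):
    # Only differences between timestamps are used, so a naive parse is fine;
    # nanoseconds are truncated to microseconds.
    base = datetime.strptime(m.group("ts"), "%Y-%m-%dT%H:%M:%S")
    return base.timestamp() + float("0." + m.group("frac")[:6])

def main(path):
    started = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for record in SPLIT.split(line):
                m = EVENT.match(record)
                if not m:
                    continue
                key = (m.group("kind"), m.group("name"), m.group("index"))
                if m.group("event") == "Running sync":
                    started[key] = parse_ts(m)
                elif key in started:
                    took_ms = (parse_ts(m) - started.pop(key)) * 1000
                    print(f"{m.group('index'):>3}/{m.group('total')} "
                          f"{m.group('kind')} {m.group('name')}: {took_ms:.0f} ms")

if __name__ == "__main__":
    main(sys.argv[1])
Run against this capture it would report on the order of 100 ms per manifest, matching the cadence of the timestamps above, with two workers interleaving about 50 ms apart.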
2025-10-13T00:23:35.370168149+00:00 stderr F I1013 00:23:35.370088 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (878 of 955) 2025-10-13T00:23:35.421180570+00:00 stderr F I1013 00:23:35.421094 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-pod-total" (848 of 955) 2025-10-13T00:23:35.421180570+00:00 stderr F I1013 00:23:35.421136 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.421180570+00:00 stderr F I1013 00:23:35.421149 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-prometheus" (849 of 955) 2025-10-13T00:23:35.470625618+00:00 stderr F I1013 00:23:35.470544 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (878 of 955) 2025-10-13T00:23:35.470625618+00:00 stderr F I1013 00:23:35.470581 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.470625618+00:00 stderr F I1013 00:23:35.470589 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (879 of 955) 2025-10-13T00:23:35.520971230+00:00 stderr F I1013 00:23:35.520880 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-prometheus" (849 of 955) 2025-10-13T00:23:35.520971230+00:00 stderr F I1013 00:23:35.520917 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.520971230+00:00 stderr F I1013 00:23:35.520923 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-prometheus" (850 of 955) 2025-10-13T00:23:35.577397842+00:00 stderr F I1013 00:23:35.576306 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (879 of 955) 2025-10-13T00:23:35.577397842+00:00 stderr F I1013 00:23:35.576357 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.577397842+00:00 stderr F I1013 00:23:35.576363 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (880 of 955) 2025-10-13T00:23:35.622911260+00:00 stderr F I1013 00:23:35.622791 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-prometheus" (850 of 955) 2025-10-13T00:23:35.622911260+00:00 stderr F I1013 00:23:35.622822 1 task_graph.go:481] Running 35 on worker 1 2025-10-13T00:23:35.622911260+00:00 stderr F I1013 00:23:35.622836 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.622911260+00:00 stderr F I1013 00:23:35.622843 1 sync_worker.go:999] Running sync for customresourcedefinition "networks.operator.openshift.io" (773 of 955) 2025-10-13T00:23:35.672094470+00:00 stderr F I1013 00:23:35.672011 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (880 of 955) 2025-10-13T00:23:35.672094470+00:00 stderr F I1013 00:23:35.672043 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.672094470+00:00 stderr F I1013 00:23:35.672050 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (881 of 955) 2025-10-13T00:23:35.724217442+00:00 stderr F I1013 00:23:35.724125 1 sync_worker.go:1014] Done syncing for customresourcedefinition "networks.operator.openshift.io" 
(773 of 955) 2025-10-13T00:23:35.724217442+00:00 stderr F I1013 00:23:35.724165 1 task_graph.go:481] Running 36 on worker 1 2025-10-13T00:23:35.770553112+00:00 stderr F I1013 00:23:35.770489 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (881 of 955) 2025-10-13T00:23:35.770553112+00:00 stderr F I1013 00:23:35.770523 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.770553112+00:00 stderr F I1013 00:23:35.770529 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver/prometheus-k8s" (882 of 955) 2025-10-13T00:23:35.823346333+00:00 stderr F I1013 00:23:35.823253 1 sync_worker.go:989] Precreated resource clusteroperator "kube-storage-version-migrator" (429 of 955) 2025-10-13T00:23:35.823346333+00:00 stderr F I1013 00:23:35.823284 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.823346333+00:00 stderr F I1013 00:23:35.823290 1 sync_worker.go:999] Running sync for customresourcedefinition "storageversionmigrations.migration.k8s.io" (420 of 955) 2025-10-13T00:23:35.870540247+00:00 stderr F I1013 00:23:35.870468 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver/prometheus-k8s" (882 of 955) 2025-10-13T00:23:35.870540247+00:00 stderr F I1013 00:23:35.870514 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.870540247+00:00 stderr F I1013 00:23:35.870523 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver/prometheus-k8s" (883 of 955) 2025-10-13T00:23:35.920968642+00:00 stderr F I1013 00:23:35.920899 1 sync_worker.go:1014] Done syncing for customresourcedefinition "storageversionmigrations.migration.k8s.io" (420 of 955) 2025-10-13T00:23:35.920968642+00:00 stderr F I1013 00:23:35.920943 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.920968642+00:00 stderr F I1013 00:23:35.920951 1 sync_worker.go:999] Running sync for customresourcedefinition "storagestates.migration.k8s.io" (421 of 955) 2025-10-13T00:23:35.970905673+00:00 stderr F I1013 00:23:35.970789 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver/prometheus-k8s" (883 of 955) 2025-10-13T00:23:35.970905673+00:00 stderr F I1013 00:23:35.970829 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:35.970905673+00:00 stderr F I1013 00:23:35.970837 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver/kube-apiserver" (884 of 955) 2025-10-13T00:23:36.020851094+00:00 stderr F I1013 00:23:36.020747 1 sync_worker.go:1014] Done syncing for customresourcedefinition "storagestates.migration.k8s.io" (421 of 955) 2025-10-13T00:23:36.020851094+00:00 stderr F I1013 00:23:36.020791 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.020851094+00:00 stderr F I1013 00:23:36.020799 1 sync_worker.go:999] Running sync for namespace "openshift-kube-storage-version-migrator-operator" (422 of 955) 2025-10-13T00:23:36.047826246+00:00 stderr F I1013 00:23:36.047738 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:36.048017931+00:00 stderr F I1013 00:23:36.047969 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not 
found) 2025-10-13T00:23:36.048017931+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:36.048017931+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:36.048040942+00:00 stderr F I1013 00:23:36.048017 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (290.728µs) 2025-10-13T00:23:36.048040942+00:00 stderr F I1013 00:23:36.048031 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:36.048122604+00:00 stderr F I1013 00:23:36.048065 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:36.048122604+00:00 stderr F I1013 00:23:36.048110 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:36.048189776+00:00 stderr F I1013 00:23:36.048117 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:36.048435843+00:00 stderr F I1013 00:23:36.048360 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:36.073724127+00:00 stderr F I1013 00:23:36.073610 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver/kube-apiserver" (884 of 955) 2025-10-13T00:23:36.073724127+00:00 stderr F I1013 00:23:36.073650 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.073724127+00:00 stderr F I1013 00:23:36.073658 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (885 of 955) 2025-10-13T00:23:36.075657311+00:00 stderr F W1013 00:23:36.075596 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:36.076956677+00:00 stderr F I1013 00:23:36.076900 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.867435ms) 2025-10-13T00:23:36.121420626+00:00 stderr F I1013 00:23:36.120769 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-storage-version-migrator-operator" (422 of 955) 2025-10-13T00:23:36.121420626+00:00 stderr F I1013 00:23:36.120797 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.121420626+00:00 stderr F I1013 00:23:36.120802 1 sync_worker.go:999] Running sync for configmap "openshift-kube-storage-version-migrator-operator/config" (423 of 955) 2025-10-13T00:23:36.174618418+00:00 stderr F I1013 00:23:36.173708 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (885 of 955) 2025-10-13T00:23:36.174618418+00:00 stderr F I1013 00:23:36.173752 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.174618418+00:00 stderr F I1013 00:23:36.173764 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (886 of 955) 2025-10-13T00:23:36.220360902+00:00 stderr F I1013 00:23:36.220279 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-storage-version-migrator-operator/config" (423 of 955) 2025-10-13T00:23:36.220399933+00:00 stderr F I1013 00:23:36.220359 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.220399933+00:00 stderr F I1013 00:23:36.220372 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (424 of 955) 2025-10-13T00:23:36.269488590+00:00 stderr F I1013 00:23:36.269440 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (886 of 955) 2025-10-13T00:23:36.269488590+00:00 stderr F I1013 00:23:36.269466 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.269488590+00:00 stderr F I1013 00:23:36.269472 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver/prometheus-k8s" (887 of 955) 2025-10-13T00:23:36.320676546+00:00 stderr F I1013 00:23:36.320619 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (424 of 955) 2025-10-13T00:23:36.320676546+00:00 stderr F I1013 00:23:36.320665 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.320720797+00:00 stderr F I1013 00:23:36.320679 1 sync_worker.go:999] Running sync for clusterrolebinding 
"system:openshift:operator:kube-storage-version-migrator-operator" (425 of 955) 2025-10-13T00:23:36.375172174+00:00 stderr F I1013 00:23:36.375127 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver/prometheus-k8s" (887 of 955) 2025-10-13T00:23:36.375172174+00:00 stderr F I1013 00:23:36.375164 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.375195425+00:00 stderr F I1013 00:23:36.375172 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver/prometheus-k8s" (888 of 955) 2025-10-13T00:23:36.421066593+00:00 stderr F I1013 00:23:36.421009 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-storage-version-migrator-operator" (425 of 955) 2025-10-13T00:23:36.421066593+00:00 stderr F I1013 00:23:36.421059 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.421096643+00:00 stderr F I1013 00:23:36.421070 1 sync_worker.go:999] Running sync for kubestorageversionmigrator "cluster" (426 of 955) 2025-10-13T00:23:36.469963195+00:00 stderr F I1013 00:23:36.469899 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver/prometheus-k8s" (888 of 955) 2025-10-13T00:23:36.469963195+00:00 stderr F I1013 00:23:36.469936 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.469963195+00:00 stderr F I1013 00:23:36.469944 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver/kube-apiserver" (889 of 955) 2025-10-13T00:23:36.520944185+00:00 stderr F I1013 00:23:36.520863 1 sync_worker.go:1014] Done syncing for kubestorageversionmigrator "cluster" (426 of 955) 2025-10-13T00:23:36.520944185+00:00 stderr F I1013 00:23:36.520902 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.520944185+00:00 stderr F I1013 00:23:36.520910 1 sync_worker.go:999] Running sync for deployment "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (427 of 955) 2025-10-13T00:23:36.574125976+00:00 stderr F I1013 00:23:36.574037 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver/kube-apiserver" (889 of 955) 2025-10-13T00:23:36.574125976+00:00 stderr F I1013 00:23:36.574094 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.574125976+00:00 stderr F I1013 00:23:36.574107 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (890 of 955) 2025-10-13T00:23:36.621773863+00:00 stderr F I1013 00:23:36.621615 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (427 of 955) 2025-10-13T00:23:36.621773863+00:00 stderr F I1013 00:23:36.621652 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.621773863+00:00 stderr F I1013 00:23:36.621658 1 sync_worker.go:999] Running sync for service "openshift-kube-storage-version-migrator-operator/metrics" (428 of 955) 2025-10-13T00:23:36.673825363+00:00 stderr F I1013 00:23:36.673744 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (890 of 955) 2025-10-13T00:23:36.673825363+00:00 stderr F I1013 00:23:36.673793 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.673825363+00:00 stderr F I1013 00:23:36.673806 1 
sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (891 of 955) 2025-10-13T00:23:36.721722507+00:00 stderr F I1013 00:23:36.721651 1 sync_worker.go:1014] Done syncing for service "openshift-kube-storage-version-migrator-operator/metrics" (428 of 955) 2025-10-13T00:23:36.721722507+00:00 stderr F I1013 00:23:36.721702 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.721752618+00:00 stderr F I1013 00:23:36.721715 1 sync_worker.go:999] Running sync for clusteroperator "kube-storage-version-migrator" (429 of 955) 2025-10-13T00:23:36.722039146+00:00 stderr F I1013 00:23:36.721999 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-storage-version-migrator" (429 of 955) 2025-10-13T00:23:36.722050427+00:00 stderr F I1013 00:23:36.722036 1 task_graph.go:481] Running 37 on worker 1 2025-10-13T00:23:36.722060937+00:00 stderr F I1013 00:23:36.722048 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.722071467+00:00 stderr F I1013 00:23:36.722059 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorhubs.config.openshift.io" (22 of 955) 2025-10-13T00:23:36.770255379+00:00 stderr F I1013 00:23:36.770179 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (891 of 955) 2025-10-13T00:23:36.770255379+00:00 stderr F I1013 00:23:36.770218 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.770255379+00:00 stderr F I1013 00:23:36.770228 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (892 of 955) 2025-10-13T00:23:36.821106976+00:00 stderr F I1013 00:23:36.821020 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorhubs.config.openshift.io" (22 of 955) 2025-10-13T00:23:36.821106976+00:00 stderr F I1013 00:23:36.821076 1 task_graph.go:481] Running 38 on worker 1 2025-10-13T00:23:36.872429575+00:00 stderr F I1013 00:23:36.871869 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (892 of 955) 2025-10-13T00:23:36.872429575+00:00 stderr F I1013 00:23:36.872404 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.872477427+00:00 stderr F I1013 00:23:36.872418 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-api-performance" (893 of 955) 2025-10-13T00:23:36.923598371+00:00 stderr F I1013 00:23:36.923508 1 sync_worker.go:989] Precreated resource clusteroperator "marketplace" (741 of 955) 2025-10-13T00:23:36.923598371+00:00 stderr F I1013 00:23:36.923548 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.923598371+00:00 stderr F I1013 00:23:36.923555 1 sync_worker.go:999] Running sync for namespace "openshift-marketplace" (731 of 955) 2025-10-13T00:23:36.970412135+00:00 stderr F I1013 00:23:36.970308 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-api-performance" (893 of 955) 2025-10-13T00:23:36.970412135+00:00 stderr F I1013 00:23:36.970360 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:36.970412135+00:00 stderr F I1013 00:23:36.970368 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (894 of 955) 
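[editor's note] Interleaved with the manifest syncs, the CVO logs the same availableupdates.go record roughly once per second: "4.16.49" stays Recommended=Unknown because the NMStateServiceFailure risk cannot be evaluated (the thanos-querier service is missing; the monitoring clusteroperator is marked overridden later in this capture). Below is a minimal, hypothetical summarizer, again assuming Python 3 and the unwrapped per-record log as input, that collapses those repeated requeue records into one line per distinct version/risk with a retry count.
#!/usr/bin/env python3
# Hypothetical summarizer, not part of this CI job: collapses the repeated
# "Requeue available-update evaluation" records into distinct reasons.
import re
import sys
from collections import Counter

REQUEUE = re.compile(
    r'Requeue available-update evaluation, because "(?P<version>[^"]+)" is '
    r'(?P<recommended>Recommended=\S+): (?P<reason>\w+): (?P<detail>[^)]*\))'
)

def main(path):
    seen = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in REQUEUE.finditer(line):
                seen[(m.group("version"), m.group("recommended"),
                      m.group("reason"), m.group("detail"))] += 1
    for (version, recommended, reason, detail), count in seen.most_common():
        print(f"{count:>4}x {version} {recommended} {reason}: {detail}")

if __name__ == "__main__":
    main(sys.argv[1])
For this capture it would print a single entry for 4.16.49 Recommended=Unknown / EvaluationFailed, repeated once per sync of the available updates.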
2025-10-13T00:23:37.020082308+00:00 stderr F I1013 00:23:37.019991 1 sync_worker.go:1014] Done syncing for namespace "openshift-marketplace" (731 of 955) 2025-10-13T00:23:37.020082308+00:00 stderr F I1013 00:23:37.020034 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.020082308+00:00 stderr F I1013 00:23:37.020046 1 sync_worker.go:999] Running sync for serviceaccount "openshift-marketplace/marketplace-operator" (732 of 955) 2025-10-13T00:23:37.049203880+00:00 stderr F I1013 00:23:37.049090 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:37.049479257+00:00 stderr F I1013 00:23:37.049405 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:37.049479257+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:37.049479257+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:37.049511408+00:00 stderr F I1013 00:23:37.049494 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.312µs) 2025-10-13T00:23:37.049524799+00:00 stderr F I1013 00:23:37.049514 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:37.049603321+00:00 stderr F I1013 00:23:37.049561 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:37.049650692+00:00 stderr F I1013 00:23:37.049618 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:37.049733614+00:00 stderr F I1013 00:23:37.049633 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, 
KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:37.050046393+00:00 stderr F I1013 00:23:37.049963 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:37.071048968+00:00 stderr F I1013 00:23:37.070951 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (894 of 955) 2025-10-13T00:23:37.071048968+00:00 stderr F I1013 00:23:37.071004 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.071048968+00:00 stderr F I1013 00:23:37.071013 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-api-performance" (895 of 955) 2025-10-13T00:23:37.072687764+00:00 stderr F W1013 00:23:37.072630 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:37.074806593+00:00 stderr F I1013 00:23:37.074740 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.218963ms) 2025-10-13T00:23:37.120955478+00:00 stderr F I1013 00:23:37.120859 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-marketplace/marketplace-operator" (732 of 955) 2025-10-13T00:23:37.120955478+00:00 stderr F I1013 00:23:37.120904 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.120955478+00:00 stderr F I1013 00:23:37.120912 1 sync_worker.go:999] Running sync for clusterrole "marketplace-operator" (733 of 955) 2025-10-13T00:23:37.170161869+00:00 stderr F I1013 00:23:37.170080 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-api-performance" (895 of 955) 2025-10-13T00:23:37.170161869+00:00 stderr F I1013 00:23:37.170117 1 task_graph.go:481] Running 39 on worker 0 2025-10-13T00:23:37.170161869+00:00 stderr F I1013 00:23:37.170129 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.170161869+00:00 stderr F I1013 00:23:37.170135 1 sync_worker.go:999] Running sync for role "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (931 of 955) 2025-10-13T00:23:37.221182010+00:00 stderr F I1013 00:23:37.221091 1 sync_worker.go:1014] Done syncing for clusterrole "marketplace-operator" (733 of 955) 2025-10-13T00:23:37.221182010+00:00 stderr F I1013 00:23:37.221131 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.221182010+00:00 stderr F I1013 00:23:37.221142 1 sync_worker.go:999] Running sync for role "openshift-marketplace/marketplace-operator" (734 of 955) 2025-10-13T00:23:37.270444882+00:00 stderr F I1013 00:23:37.270322 1 sync_worker.go:1014] Done syncing for 
role "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (931 of 955) 2025-10-13T00:23:37.270444882+00:00 stderr F I1013 00:23:37.270399 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.270444882+00:00 stderr F I1013 00:23:37.270408 1 sync_worker.go:999] Running sync for rolebinding "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (932 of 955) 2025-10-13T00:23:37.319988802+00:00 stderr F I1013 00:23:37.319900 1 sync_worker.go:1014] Done syncing for role "openshift-marketplace/marketplace-operator" (734 of 955) 2025-10-13T00:23:37.319988802+00:00 stderr F I1013 00:23:37.319947 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.319988802+00:00 stderr F I1013 00:23:37.319959 1 sync_worker.go:999] Running sync for clusterrole "operatorhub-config-reader" (735 of 955) 2025-10-13T00:23:37.370663424+00:00 stderr F I1013 00:23:37.370596 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (932 of 955) 2025-10-13T00:23:37.370663424+00:00 stderr F I1013 00:23:37.370635 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.370663424+00:00 stderr F I1013 00:23:37.370644 1 sync_worker.go:999] Running sync for servicemonitor "openshift-operator-lifecycle-manager/olm-operator" (933 of 955) 2025-10-13T00:23:37.420792680+00:00 stderr F I1013 00:23:37.420711 1 sync_worker.go:1014] Done syncing for clusterrole "operatorhub-config-reader" (735 of 955) 2025-10-13T00:23:37.420792680+00:00 stderr F I1013 00:23:37.420765 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.420792680+00:00 stderr F I1013 00:23:37.420774 1 sync_worker.go:999] Running sync for clusterrolebinding "marketplace-operator" (736 of 955) 2025-10-13T00:23:37.471176124+00:00 stderr F I1013 00:23:37.471087 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-operator-lifecycle-manager/olm-operator" (933 of 955) 2025-10-13T00:23:37.471176124+00:00 stderr F I1013 00:23:37.471126 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.471176124+00:00 stderr F I1013 00:23:37.471134 1 sync_worker.go:999] Running sync for servicemonitor "openshift-operator-lifecycle-manager/catalog-operator" (934 of 955) 2025-10-13T00:23:37.520343863+00:00 stderr F I1013 00:23:37.520251 1 sync_worker.go:1014] Done syncing for clusterrolebinding "marketplace-operator" (736 of 955) 2025-10-13T00:23:37.520343863+00:00 stderr F I1013 00:23:37.520288 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.520343863+00:00 stderr F I1013 00:23:37.520297 1 sync_worker.go:999] Running sync for rolebinding "openshift-marketplace/marketplace-operator" (737 of 955) 2025-10-13T00:23:37.573389031+00:00 stderr F I1013 00:23:37.572015 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-operator-lifecycle-manager/catalog-operator" (934 of 955) 2025-10-13T00:23:37.573389031+00:00 stderr F I1013 00:23:37.572068 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.573389031+00:00 stderr F I1013 00:23:37.572080 1 sync_worker.go:999] Running sync for prometheusrule "openshift-operator-lifecycle-manager/olm-alert-rules" (935 of 955) 2025-10-13T00:23:37.619817814+00:00 stderr F I1013 00:23:37.619747 1 sync_worker.go:1014] Done syncing for rolebinding 
"openshift-marketplace/marketplace-operator" (737 of 955) 2025-10-13T00:23:37.619817814+00:00 stderr F I1013 00:23:37.619794 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.619817814+00:00 stderr F I1013 00:23:37.619806 1 sync_worker.go:999] Running sync for configmap "openshift-marketplace/marketplace-trusted-ca" (738 of 955) 2025-10-13T00:23:37.672155392+00:00 stderr F I1013 00:23:37.671572 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-operator-lifecycle-manager/olm-alert-rules" (935 of 955) 2025-10-13T00:23:37.672155392+00:00 stderr F I1013 00:23:37.672141 1 task_graph.go:481] Running 40 on worker 0 2025-10-13T00:23:37.672203743+00:00 stderr F I1013 00:23:37.672164 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.672203743+00:00 stderr F I1013 00:23:37.672181 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-api/machine-api-operator" (919 of 955) 2025-10-13T00:23:37.729289924+00:00 stderr F I1013 00:23:37.729169 1 sync_worker.go:1014] Done syncing for configmap "openshift-marketplace/marketplace-trusted-ca" (738 of 955) 2025-10-13T00:23:37.729289924+00:00 stderr F I1013 00:23:37.729262 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.729289924+00:00 stderr F I1013 00:23:37.729271 1 sync_worker.go:999] Running sync for service "openshift-marketplace/marketplace-operator-metrics" (739 of 955) 2025-10-13T00:23:37.775380628+00:00 stderr F I1013 00:23:37.773065 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-api/machine-api-operator" (919 of 955) 2025-10-13T00:23:37.775380628+00:00 stderr F I1013 00:23:37.773118 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.775380628+00:00 stderr F I1013 00:23:37.773140 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-api/machine-api-controllers" (920 of 955) 2025-10-13T00:23:37.821219264+00:00 stderr F I1013 00:23:37.821154 1 sync_worker.go:1014] Done syncing for service "openshift-marketplace/marketplace-operator-metrics" (739 of 955) 2025-10-13T00:23:37.821219264+00:00 stderr F I1013 00:23:37.821191 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.821219264+00:00 stderr F I1013 00:23:37.821199 1 sync_worker.go:999] Running sync for deployment "openshift-marketplace/marketplace-operator" (740 of 955) 2025-10-13T00:23:37.870473866+00:00 stderr F I1013 00:23:37.870377 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-api/machine-api-controllers" (920 of 955) 2025-10-13T00:23:37.870473866+00:00 stderr F I1013 00:23:37.870414 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.870473866+00:00 stderr F I1013 00:23:37.870422 1 sync_worker.go:999] Running sync for prometheusrule "openshift-machine-api/machine-api-operator-prometheus-rules" (921 of 955) 2025-10-13T00:23:37.971607564+00:00 stderr F I1013 00:23:37.971545 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-machine-api/machine-api-operator-prometheus-rules" (921 of 955) 2025-10-13T00:23:37.971607564+00:00 stderr F I1013 00:23:37.971584 1 task_graph.go:481] Running 41 on worker 0 2025-10-13T00:23:37.971636384+00:00 stderr F I1013 00:23:37.971601 1 sync_worker.go:982] Skipping precreation of clusteroperator "monitoring" (465 of 955): overridden 2025-10-13T00:23:37.971636384+00:00 stderr F I1013 00:23:37.971632 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:37.971655525+00:00 stderr F I1013 00:23:37.971640 1 sync_worker.go:999] Running sync for customresourcedefinition "alertingrules.monitoring.openshift.io" (442 of 955) 2025-10-13T00:23:38.021138413+00:00 stderr F I1013 00:23:38.021091 1 sync_worker.go:1014] Done syncing for deployment "openshift-marketplace/marketplace-operator" (740 of 955) 2025-10-13T00:23:38.021229496+00:00 stderr F I1013 00:23:38.021219 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.021259677+00:00 stderr F I1013 00:23:38.021246 1 sync_worker.go:999] Running sync for clusteroperator "marketplace" (741 of 955) 2025-10-13T00:23:38.021461242+00:00 stderr F I1013 00:23:38.021441 1 sync_worker.go:1014] Done syncing for clusteroperator "marketplace" (741 of 955) 2025-10-13T00:23:38.021496363+00:00 stderr F I1013 00:23:38.021487 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.021523884+00:00 stderr F I1013 00:23:38.021512 1 sync_worker.go:999] Running sync for servicemonitor "openshift-marketplace/marketplace-operator" (742 of 955) 2025-10-13T00:23:38.050152581+00:00 stderr F I1013 00:23:38.050086 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:38.050552133+00:00 stderr F I1013 00:23:38.050513 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:38.050552133+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:38.050552133+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:38.050641655+00:00 stderr F I1013 00:23:38.050600 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (526.515µs) 2025-10-13T00:23:38.050641655+00:00 stderr F I1013 00:23:38.050627 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:38.050760368+00:00 stderr F I1013 00:23:38.050691 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:38.050810240+00:00 stderr F I1013 00:23:38.050788 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:38.050906802+00:00 stderr F I1013 00:23:38.050804 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:38.051270473+00:00 stderr F I1013 00:23:38.051219 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:38.070085627+00:00 stderr F I1013 00:23:38.070018 1 
sync_worker.go:1014] Done syncing for customresourcedefinition "alertingrules.monitoring.openshift.io" (442 of 955) 2025-10-13T00:23:38.070085627+00:00 stderr F I1013 00:23:38.070053 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.070085627+00:00 stderr F I1013 00:23:38.070061 1 sync_worker.go:999] Running sync for customresourcedefinition "alertmanagerconfigs.monitoring.coreos.com" (443 of 955) 2025-10-13T00:23:38.073317917+00:00 stderr F W1013 00:23:38.073268 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:38.076254369+00:00 stderr F I1013 00:23:38.076188 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.554091ms) 2025-10-13T00:23:38.120828410+00:00 stderr F I1013 00:23:38.120760 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-marketplace/marketplace-operator" (742 of 955) 2025-10-13T00:23:38.120828410+00:00 stderr F I1013 00:23:38.120795 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.120828410+00:00 stderr F I1013 00:23:38.120803 1 sync_worker.go:999] Running sync for rolebinding "openshift-marketplace/openshift-marketplace-metrics" (743 of 955) 2025-10-13T00:23:38.218991545+00:00 stderr F I1013 00:23:38.218891 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertmanagerconfigs.monitoring.coreos.com" (443 of 955) 2025-10-13T00:23:38.218991545+00:00 stderr F I1013 00:23:38.218945 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.218991545+00:00 stderr F I1013 00:23:38.218956 1 sync_worker.go:999] Running sync for customresourcedefinition "alertmanagers.monitoring.coreos.com" (444 of 955) 2025-10-13T00:23:38.220104216+00:00 stderr F I1013 00:23:38.220046 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-marketplace/openshift-marketplace-metrics" (743 of 955) 2025-10-13T00:23:38.220104216+00:00 stderr F I1013 00:23:38.220079 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.220104216+00:00 stderr F I1013 00:23:38.220084 1 sync_worker.go:999] Running sync for role "openshift-marketplace/openshift-marketplace-metrics" (744 of 955) 2025-10-13T00:23:38.270815558+00:00 stderr F I1013 00:23:38.270715 1 sync_worker.go:1014] Done syncing for role "openshift-marketplace/openshift-marketplace-metrics" (744 of 955) 2025-10-13T00:23:38.270815558+00:00 stderr F I1013 00:23:38.270777 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.270815558+00:00 stderr F I1013 00:23:38.270788 1 sync_worker.go:999] Running sync for prometheusrule "openshift-marketplace/marketplace-alert-rules" (745 of 955) 2025-10-13T00:23:38.356944917+00:00 stderr F I1013 00:23:38.356842 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertmanagers.monitoring.coreos.com" (444 of 955) 2025-10-13T00:23:38.356944917+00:00 stderr F I1013 00:23:38.356899 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.356944917+00:00 stderr F I1013 00:23:38.356905 1 sync_worker.go:999] Running sync for customresourcedefinition "alertrelabelconfigs.monitoring.openshift.io" (445 of 955) 2025-10-13T00:23:38.371113492+00:00 stderr F I1013 00:23:38.371046 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-marketplace/marketplace-alert-rules" (745 of 955) 2025-10-13T00:23:38.371113492+00:00 stderr F I1013 00:23:38.371083 1 task_graph.go:481] Running 42 on worker 1 
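[editor's note] The "task_graph.go:481] Running <node> on worker <id>" records above show the payload task graph being handed out to two sync workers (worker 0 and worker 1). A small, hypothetical tally of nodes per worker, under the same Python 3 / unwrapped-log assumptions as the sketches above:
#!/usr/bin/env python3
# Hypothetical sketch, not part of this CI job: tallies task_graph.go
# "Running <node> on worker <id>" records to show how the CVO spread its
# manifest graph across the sync workers.
import re
import sys
from collections import defaultdict

TASK = re.compile(r'task_graph\.go:\d+\] Running (?P<node>\d+) on worker (?P<worker>\d+)')

def main(path):
    nodes = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in TASK.finditer(line):
                nodes[int(m.group("worker"))].append(int(m.group("node")))
    for worker in sorted(nodes):
        picked = nodes[worker]
        print(f"worker {worker}: {len(picked)} graph nodes "
              f"(e.g. {', '.join(map(str, picked[:5]))} ...)")

if __name__ == "__main__":
    main(sys.argv[1])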
2025-10-13T00:23:38.420639752+00:00 stderr F I1013 00:23:38.420567 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertrelabelconfigs.monitoring.openshift.io" (445 of 955) 2025-10-13T00:23:38.420639752+00:00 stderr F I1013 00:23:38.420605 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.420639752+00:00 stderr F I1013 00:23:38.420611 1 sync_worker.go:999] Running sync for customresourcedefinition "podmonitors.monitoring.coreos.com" (446 of 955) 2025-10-13T00:23:38.472027673+00:00 stderr F I1013 00:23:38.471924 1 sync_worker.go:989] Precreated resource clusteroperator "kube-apiserver" (143 of 955) 2025-10-13T00:23:38.523591489+00:00 stderr F I1013 00:23:38.523499 1 sync_worker.go:1014] Done syncing for customresourcedefinition "podmonitors.monitoring.coreos.com" (446 of 955) 2025-10-13T00:23:38.523591489+00:00 stderr F I1013 00:23:38.523536 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.523591489+00:00 stderr F I1013 00:23:38.523543 1 sync_worker.go:999] Running sync for customresourcedefinition "probes.monitoring.coreos.com" (447 of 955) 2025-10-13T00:23:38.573968273+00:00 stderr F I1013 00:23:38.573892 1 sync_worker.go:989] Precreated resource clusteroperator "kube-apiserver" (144 of 955) 2025-10-13T00:23:38.573968273+00:00 stderr F I1013 00:23:38.573920 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.573968273+00:00 stderr F I1013 00:23:38.573926 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:anyuid" (83 of 955) 2025-10-13T00:23:38.622565036+00:00 stderr F I1013 00:23:38.622481 1 sync_worker.go:1014] Done syncing for customresourcedefinition "probes.monitoring.coreos.com" (447 of 955) 2025-10-13T00:23:38.622565036+00:00 stderr F I1013 00:23:38.622514 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.622565036+00:00 stderr F I1013 00:23:38.622520 1 sync_worker.go:999] Running sync for customresourcedefinition "prometheuses.monitoring.coreos.com" (448 of 955) 2025-10-13T00:23:38.669991147+00:00 stderr F I1013 00:23:38.669889 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:anyuid" (83 of 955) 2025-10-13T00:23:38.669991147+00:00 stderr F I1013 00:23:38.669937 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.669991147+00:00 stderr F I1013 00:23:38.669949 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:anyuid" (84 of 955) 2025-10-13T00:23:38.759924111+00:00 stderr F I1013 00:23:38.759819 1 sync_worker.go:1014] Done syncing for customresourcedefinition "prometheuses.monitoring.coreos.com" (448 of 955) 2025-10-13T00:23:38.759924111+00:00 stderr F I1013 00:23:38.759863 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.759924111+00:00 stderr F I1013 00:23:38.759871 1 sync_worker.go:999] Running sync for customresourcedefinition "prometheusrules.monitoring.coreos.com" (449 of 955) 2025-10-13T00:23:38.769409386+00:00 stderr F I1013 00:23:38.769349 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:anyuid" (84 of 955) 2025-10-13T00:23:38.769409386+00:00 stderr F I1013 00:23:38.769377 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.769409386+00:00 stderr F I1013 00:23:38.769385 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostaccess" (85 of 955) 
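[editor's note] Each "Synchronizing status" dump above repeats the same cvo.SyncWorkerStatus fields (Done:734, Total:955, Completed:0, Reconciling:true). A final hypothetical sketch, under the same assumptions as the earlier ones, that extracts those fields and reports the last dump, which is a quick way to see how far reconciliation got in this run:
#!/usr/bin/env python3
# Hypothetical sketch, not part of this CI job: pulls Done/Total/Reconciling
# out of the repeated cvo.SyncWorkerStatus dumps and reports the last one.
import re
import sys

STATUS = re.compile(
    r'SyncWorkerStatus\{Generation:(?P<gen>\d+).*?'
    r'Done:(?P<done>\d+), Total:(?P<total>\d+), Completed:(?P<completed>\d+), '
    r'Reconciling:(?P<reconciling>\w+)'
)

def main(path):
    last, count = None, 0
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in STATUS.finditer(line):
                count += 1
                last = m
    if last is None:
        print("no SyncWorkerStatus dumps found")
        return
    done, total = int(last.group("done")), int(last.group("total"))
    print(f"{count} status dumps; last: {done}/{total} manifests applied "
          f"({done / total:.0%}), reconciling={last.group('reconciling')}, "
          f"completed cycles={last.group('completed')}")

if __name__ == "__main__":
    main(sys.argv[1])
For this capture the last dump would report 734/955 manifests applied (about 77%) with Reconciling=true and a pending payload.UpdateError, consistent with the overridden monitoring clusteroperator noted above.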
2025-10-13T00:23:38.820183170+00:00 stderr F I1013 00:23:38.820089 1 sync_worker.go:1014] Done syncing for customresourcedefinition "prometheusrules.monitoring.coreos.com" (449 of 955) 2025-10-13T00:23:38.820183170+00:00 stderr F I1013 00:23:38.820125 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.820183170+00:00 stderr F I1013 00:23:38.820133 1 sync_worker.go:999] Running sync for customresourcedefinition "servicemonitors.monitoring.coreos.com" (450 of 955) 2025-10-13T00:23:38.870677677+00:00 stderr F I1013 00:23:38.870601 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostaccess" (85 of 955) 2025-10-13T00:23:38.870677677+00:00 stderr F I1013 00:23:38.870656 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.870705027+00:00 stderr F I1013 00:23:38.870670 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostaccess" (86 of 955) 2025-10-13T00:23:38.922754407+00:00 stderr F I1013 00:23:38.922691 1 sync_worker.go:1014] Done syncing for customresourcedefinition "servicemonitors.monitoring.coreos.com" (450 of 955) 2025-10-13T00:23:38.922754407+00:00 stderr F I1013 00:23:38.922724 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.922754407+00:00 stderr F I1013 00:23:38.922730 1 sync_worker.go:999] Running sync for customresourcedefinition "thanosrulers.monitoring.coreos.com" (451 of 955) 2025-10-13T00:23:38.970462716+00:00 stderr F I1013 00:23:38.970403 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostaccess" (86 of 955) 2025-10-13T00:23:38.970462716+00:00 stderr F I1013 00:23:38.970437 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:38.970462716+00:00 stderr F I1013 00:23:38.970443 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount-anyuid" (87 of 955) 2025-10-13T00:23:39.050732292+00:00 stderr F I1013 00:23:39.050620 1 sync_worker.go:1014] Done syncing for customresourcedefinition "thanosrulers.monitoring.coreos.com" (451 of 955) 2025-10-13T00:23:39.050732292+00:00 stderr F I1013 00:23:39.050689 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.050732292+00:00 stderr F I1013 00:23:39.050695 1 sync_worker.go:999] Running sync for namespace "openshift-monitoring" (452 of 955) 2025-10-13T00:23:39.051348099+00:00 stderr F I1013 00:23:39.051306 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:39.052618055+00:00 stderr F I1013 00:23:39.052556 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:39.052618055+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:39.052618055+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:39.052754518+00:00 stderr F I1013 00:23:39.052711 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.401969ms) 2025-10-13T00:23:39.052764189+00:00 stderr F I1013 00:23:39.052755 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:39.052966974+00:00 stderr F I1013 00:23:39.052916 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:39.053035526+00:00 stderr F I1013 00:23:39.053009 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:39.053222881+00:00 stderr F I1013 00:23:39.053047 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:39.054046604+00:00 stderr F I1013 00:23:39.053969 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:39.072393595+00:00 stderr F I1013 00:23:39.070381 1 
sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount-anyuid" (87 of 955) 2025-10-13T00:23:39.072393595+00:00 stderr F I1013 00:23:39.070466 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.072393595+00:00 stderr F I1013 00:23:39.070484 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount-anyuid" (88 of 955) 2025-10-13T00:23:39.078511296+00:00 stderr F W1013 00:23:39.078451 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:39.079806032+00:00 stderr F I1013 00:23:39.079758 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.005852ms) 2025-10-13T00:23:39.120543877+00:00 stderr F I1013 00:23:39.120439 1 sync_worker.go:1014] Done syncing for namespace "openshift-monitoring" (452 of 955) 2025-10-13T00:23:39.120543877+00:00 stderr F I1013 00:23:39.120485 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.120543877+00:00 stderr F I1013 00:23:39.120492 1 sync_worker.go:999] Running sync for namespace "openshift-user-workload-monitoring" (453 of 955) 2025-10-13T00:23:39.171013373+00:00 stderr F I1013 00:23:39.170903 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount-anyuid" (88 of 955) 2025-10-13T00:23:39.171013373+00:00 stderr F I1013 00:23:39.170950 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.171013373+00:00 stderr F I1013 00:23:39.170957 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount" (89 of 955) 2025-10-13T00:23:39.222407724+00:00 stderr F I1013 00:23:39.221980 1 sync_worker.go:1014] Done syncing for namespace "openshift-user-workload-monitoring" (453 of 955) 2025-10-13T00:23:39.222407724+00:00 stderr F I1013 00:23:39.222034 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.222407724+00:00 stderr F I1013 00:23:39.222040 1 sync_worker.go:999] Running sync for role "openshift-monitoring/cluster-monitoring-operator-alert-customization" (454 of 955) 2025-10-13T00:23:39.270029021+00:00 stderr F I1013 00:23:39.269982 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount" (89 of 955) 2025-10-13T00:23:39.270029021+00:00 stderr F I1013 00:23:39.270013 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.270029021+00:00 stderr F I1013 00:23:39.270019 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount" (90 of 955) 2025-10-13T00:23:39.320100995+00:00 stderr F I1013 00:23:39.320056 1 sync_worker.go:1014] Done syncing for role "openshift-monitoring/cluster-monitoring-operator-alert-customization" (454 of 955) 2025-10-13T00:23:39.320100995+00:00 stderr F I1013 00:23:39.320082 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.320100995+00:00 stderr F I1013 00:23:39.320087 1 sync_worker.go:999] Running sync for clusterrole "cluster-monitoring-operator-namespaced" (455 of 955) 2025-10-13T00:23:39.369589334+00:00 stderr F I1013 00:23:39.369510 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount" (90 of 955) 2025-10-13T00:23:39.369589334+00:00 stderr F I1013 00:23:39.369548 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.369589334+00:00 stderr F I1013 00:23:39.369554 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork-v2" 
(91 of 955) 2025-10-13T00:23:39.420522823+00:00 stderr F I1013 00:23:39.420472 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-monitoring-operator-namespaced" (455 of 955) 2025-10-13T00:23:39.420522823+00:00 stderr F I1013 00:23:39.420506 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.420522823+00:00 stderr F I1013 00:23:39.420513 1 sync_worker.go:999] Running sync for clusterrole "cluster-monitoring-operator" (456 of 955) 2025-10-13T00:23:39.470926987+00:00 stderr F I1013 00:23:39.470845 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork-v2" (91 of 955) 2025-10-13T00:23:39.470926987+00:00 stderr F I1013 00:23:39.470899 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.470926987+00:00 stderr F I1013 00:23:39.470911 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork-v2" (92 of 955) 2025-10-13T00:23:39.520977861+00:00 stderr F I1013 00:23:39.520920 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-monitoring-operator" (456 of 955) 2025-10-13T00:23:39.521025422+00:00 stderr F I1013 00:23:39.520966 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.521033722+00:00 stderr F I1013 00:23:39.521020 1 sync_worker.go:999] Running sync for serviceaccount "openshift-monitoring/cluster-monitoring-operator" (457 of 955) 2025-10-13T00:23:39.569945545+00:00 stderr F I1013 00:23:39.569902 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork-v2" (92 of 955) 2025-10-13T00:23:39.569945545+00:00 stderr F I1013 00:23:39.569936 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.569979946+00:00 stderr F I1013 00:23:39.569941 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork" (93 of 955) 2025-10-13T00:23:39.620933975+00:00 stderr F I1013 00:23:39.620877 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-monitoring/cluster-monitoring-operator" (457 of 955) 2025-10-13T00:23:39.620933975+00:00 stderr F I1013 00:23:39.620909 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.620933975+00:00 stderr F I1013 00:23:39.620915 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-monitoring-operator" (458 of 955) 2025-10-13T00:23:39.670844036+00:00 stderr F I1013 00:23:39.670783 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork" (93 of 955) 2025-10-13T00:23:39.670844036+00:00 stderr F I1013 00:23:39.670827 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.670885867+00:00 stderr F I1013 00:23:39.670842 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork" (94 of 955) 2025-10-13T00:23:39.720233021+00:00 stderr F I1013 00:23:39.720180 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-monitoring-operator" (458 of 955) 2025-10-13T00:23:39.720233021+00:00 stderr F I1013 00:23:39.720208 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.720233021+00:00 stderr F I1013 00:23:39.720214 1 sync_worker.go:999] Running sync for rolebinding "openshift-monitoring/cluster-monitoring-operator" (459 of 955) 2025-10-13T00:23:39.770234644+00:00 stderr F I1013 00:23:39.770175 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork" (94 of 955) 
2025-10-13T00:23:39.770234644+00:00 stderr F I1013 00:23:39.770204 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.770234644+00:00 stderr F I1013 00:23:39.770210 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot-v2" (95 of 955) 2025-10-13T00:23:39.820150385+00:00 stderr F I1013 00:23:39.820086 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-monitoring/cluster-monitoring-operator" (459 of 955) 2025-10-13T00:23:39.820150385+00:00 stderr F I1013 00:23:39.820117 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.820150385+00:00 stderr F I1013 00:23:39.820123 1 sync_worker.go:999] Running sync for rolebinding "openshift-user-workload-monitoring/cluster-monitoring-operator" (460 of 955) 2025-10-13T00:23:39.869990083+00:00 stderr F I1013 00:23:39.869935 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot-v2" (95 of 955) 2025-10-13T00:23:39.869990083+00:00 stderr F I1013 00:23:39.869969 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.870033364+00:00 stderr F I1013 00:23:39.869974 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot-v2" (96 of 955) 2025-10-13T00:23:39.920788528+00:00 stderr F I1013 00:23:39.920729 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-user-workload-monitoring/cluster-monitoring-operator" (460 of 955) 2025-10-13T00:23:39.920788528+00:00 stderr F I1013 00:23:39.920765 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.920788528+00:00 stderr F I1013 00:23:39.920773 1 sync_worker.go:999] Running sync for rolebinding "openshift-monitoring/cluster-monitoring-operator-alert-customization" (461 of 955) 2025-10-13T00:23:39.969745592+00:00 stderr F I1013 00:23:39.969687 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot-v2" (96 of 955) 2025-10-13T00:23:39.969745592+00:00 stderr F I1013 00:23:39.969730 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:39.969796103+00:00 stderr F I1013 00:23:39.969738 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot" (97 of 955) 2025-10-13T00:23:40.020735252+00:00 stderr F I1013 00:23:40.020671 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-monitoring/cluster-monitoring-operator-alert-customization" (461 of 955) 2025-10-13T00:23:40.020735252+00:00 stderr F I1013 00:23:40.020708 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.020735252+00:00 stderr F I1013 00:23:40.020717 1 sync_worker.go:999] Running sync for service "openshift-monitoring/cluster-monitoring-operator" (462 of 955) 2025-10-13T00:23:40.053530665+00:00 stderr F I1013 00:23:40.053485 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:40.053874845+00:00 stderr F I1013 00:23:40.053847 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:40.053874845+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:40.053874845+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:40.053934567+00:00 stderr F I1013 00:23:40.053905 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.812µs) 2025-10-13T00:23:40.053934567+00:00 stderr F I1013 00:23:40.053925 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:40.053999388+00:00 stderr F I1013 00:23:40.053970 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:40.054048320+00:00 stderr F I1013 00:23:40.054024 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:40.054106181+00:00 stderr F I1013 00:23:40.054035 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:40.054404340+00:00 stderr F I1013 00:23:40.054364 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:40.071068404+00:00 stderr F I1013 00:23:40.071015 1 
sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot" (97 of 955) 2025-10-13T00:23:40.071068404+00:00 stderr F I1013 00:23:40.071060 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.071094805+00:00 stderr F I1013 00:23:40.071072 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot" (98 of 955) 2025-10-13T00:23:40.094876047+00:00 stderr F W1013 00:23:40.094829 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:40.096846372+00:00 stderr F I1013 00:23:40.096808 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.879115ms) 2025-10-13T00:23:40.121056456+00:00 stderr F I1013 00:23:40.121009 1 sync_worker.go:1014] Done syncing for service "openshift-monitoring/cluster-monitoring-operator" (462 of 955) 2025-10-13T00:23:40.121079317+00:00 stderr F I1013 00:23:40.121051 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.121079317+00:00 stderr F I1013 00:23:40.121063 1 sync_worker.go:999] Running sync for configmap "openshift-monitoring/telemetry-config" (463 of 955) 2025-10-13T00:23:40.170716200+00:00 stderr F I1013 00:23:40.170640 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot" (98 of 955) 2025-10-13T00:23:40.170716200+00:00 stderr F I1013 00:23:40.170698 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.170749921+00:00 stderr F I1013 00:23:40.170710 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:privileged" (99 of 955) 2025-10-13T00:23:40.225921797+00:00 stderr F I1013 00:23:40.225850 1 sync_worker.go:1014] Done syncing for configmap "openshift-monitoring/telemetry-config" (463 of 955) 2025-10-13T00:23:40.225921797+00:00 stderr F I1013 00:23:40.225895 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.225921797+00:00 stderr F I1013 00:23:40.225908 1 sync_worker.go:999] Running sync for deployment "openshift-monitoring/cluster-monitoring-operator" (464 of 955) 2025-10-13T00:23:40.225958818+00:00 stderr F I1013 00:23:40.225933 1 sync_worker.go:1002] Skipping deployment "openshift-monitoring/cluster-monitoring-operator" (464 of 955): overridden 2025-10-13T00:23:40.225958818+00:00 stderr F I1013 00:23:40.225945 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.225968739+00:00 stderr F I1013 00:23:40.225955 1 sync_worker.go:999] Running sync for clusteroperator "monitoring" (465 of 955) 2025-10-13T00:23:40.225999800+00:00 stderr F I1013 00:23:40.225976 1 sync_worker.go:1002] Skipping clusteroperator "monitoring" (465 of 955): overridden 2025-10-13T00:23:40.226024940+00:00 stderr F I1013 00:23:40.226004 1 task_graph.go:481] Running 43 on worker 0 2025-10-13T00:23:40.226024940+00:00 stderr F I1013 00:23:40.226019 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.226039951+00:00 stderr F I1013 00:23:40.226029 1 sync_worker.go:999] Running sync for role "openshift-authentication-operator/prometheus-k8s" (804 of 955) 2025-10-13T00:23:40.270854209+00:00 stderr F I1013 00:23:40.270797 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:privileged" (99 of 955) 2025-10-13T00:23:40.270854209+00:00 stderr F I1013 00:23:40.270834 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.270878430+00:00 stderr F I1013 00:23:40.270849 1 
sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:privileged" (100 of 955) 2025-10-13T00:23:40.320978605+00:00 stderr F I1013 00:23:40.320910 1 sync_worker.go:1014] Done syncing for role "openshift-authentication-operator/prometheus-k8s" (804 of 955) 2025-10-13T00:23:40.320978605+00:00 stderr F I1013 00:23:40.320952 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.320978605+00:00 stderr F I1013 00:23:40.320961 1 sync_worker.go:999] Running sync for role "openshift-authentication/prometheus-k8s" (805 of 955) 2025-10-13T00:23:40.370835054+00:00 stderr F I1013 00:23:40.370692 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:privileged" (100 of 955) 2025-10-13T00:23:40.370835054+00:00 stderr F I1013 00:23:40.370811 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.370835054+00:00 stderr F I1013 00:23:40.370819 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted-v2" (101 of 955) 2025-10-13T00:23:40.420971941+00:00 stderr F I1013 00:23:40.420914 1 sync_worker.go:1014] Done syncing for role "openshift-authentication/prometheus-k8s" (805 of 955) 2025-10-13T00:23:40.420971941+00:00 stderr F I1013 00:23:40.420953 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.420971941+00:00 stderr F I1013 00:23:40.420962 1 sync_worker.go:999] Running sync for role "openshift-oauth-apiserver/prometheus-k8s" (806 of 955) 2025-10-13T00:23:40.471035495+00:00 stderr F I1013 00:23:40.470975 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted-v2" (101 of 955) 2025-10-13T00:23:40.471035495+00:00 stderr F I1013 00:23:40.471025 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.471058816+00:00 stderr F I1013 00:23:40.471036 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted-v2" (102 of 955) 2025-10-13T00:23:40.521474270+00:00 stderr F I1013 00:23:40.521379 1 sync_worker.go:1014] Done syncing for role "openshift-oauth-apiserver/prometheus-k8s" (806 of 955) 2025-10-13T00:23:40.521474270+00:00 stderr F I1013 00:23:40.521428 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.521474270+00:00 stderr F I1013 00:23:40.521443 1 sync_worker.go:999] Running sync for rolebinding "openshift-authentication-operator/prometheus-k8s" (807 of 955) 2025-10-13T00:23:40.570766443+00:00 stderr F I1013 00:23:40.570666 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted-v2" (102 of 955) 2025-10-13T00:23:40.570766443+00:00 stderr F I1013 00:23:40.570726 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.570766443+00:00 stderr F I1013 00:23:40.570740 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted" (103 of 955) 2025-10-13T00:23:40.620424587+00:00 stderr F I1013 00:23:40.620370 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-authentication-operator/prometheus-k8s" (807 of 955) 2025-10-13T00:23:40.620496448+00:00 stderr F I1013 00:23:40.620486 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.620525639+00:00 stderr F I1013 00:23:40.620513 1 sync_worker.go:999] Running sync for rolebinding "openshift-authentication/prometheus-k8s" (808 of 955) 2025-10-13T00:23:40.670915353+00:00 stderr F I1013 00:23:40.670825 1 sync_worker.go:1014] Done syncing 
for clusterrole "system:openshift:scc:restricted" (103 of 955) 2025-10-13T00:23:40.670915353+00:00 stderr F I1013 00:23:40.670888 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.670968264+00:00 stderr F I1013 00:23:40.670905 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted" (104 of 955) 2025-10-13T00:23:40.720882985+00:00 stderr F I1013 00:23:40.720816 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-authentication/prometheus-k8s" (808 of 955) 2025-10-13T00:23:40.720977897+00:00 stderr F I1013 00:23:40.720961 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.721031079+00:00 stderr F I1013 00:23:40.721008 1 sync_worker.go:999] Running sync for rolebinding "openshift-oauth-apiserver/prometheus-k8s" (809 of 955) 2025-10-13T00:23:40.770455956+00:00 stderr F I1013 00:23:40.770391 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted" (104 of 955) 2025-10-13T00:23:40.770455956+00:00 stderr F I1013 00:23:40.770432 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.770455956+00:00 stderr F I1013 00:23:40.770440 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:scc:restricted-v2" (105 of 955) 2025-10-13T00:23:40.820006896+00:00 stderr F I1013 00:23:40.819941 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-oauth-apiserver/prometheus-k8s" (809 of 955) 2025-10-13T00:23:40.820097468+00:00 stderr F I1013 00:23:40.820079 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.820147480+00:00 stderr F I1013 00:23:40.820127 1 sync_worker.go:999] Running sync for servicemonitor "openshift-authentication-operator/authentication-operator" (810 of 955) 2025-10-13T00:23:40.870749119+00:00 stderr F I1013 00:23:40.870660 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:scc:restricted-v2" (105 of 955) 2025-10-13T00:23:40.870749119+00:00 stderr F I1013 00:23:40.870705 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.870749119+00:00 stderr F I1013 00:23:40.870713 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:scc:restricted-v2" (106 of 955) 2025-10-13T00:23:40.920451954+00:00 stderr F I1013 00:23:40.920395 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-authentication-operator/authentication-operator" (810 of 955) 2025-10-13T00:23:40.920558517+00:00 stderr F I1013 00:23:40.920542 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.920603828+00:00 stderr F I1013 00:23:40.920584 1 sync_worker.go:999] Running sync for servicemonitor "openshift-authentication/oauth-openshift" (811 of 955) 2025-10-13T00:23:40.970695223+00:00 stderr F I1013 00:23:40.970609 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:scc:restricted-v2" (106 of 955) 2025-10-13T00:23:40.970776986+00:00 stderr F I1013 00:23:40.970766 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:40.970806957+00:00 stderr F I1013 00:23:40.970793 1 sync_worker.go:999] Running sync for namespace "openshift-kube-apiserver-operator" (107 of 955) 2025-10-13T00:23:41.020808469+00:00 stderr F I1013 00:23:41.020761 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-authentication/oauth-openshift" (811 of 955) 2025-10-13T00:23:41.020865011+00:00 stderr F I1013 
00:23:41.020854 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.020893152+00:00 stderr F I1013 00:23:41.020881 1 sync_worker.go:999] Running sync for servicemonitor "openshift-oauth-apiserver/openshift-oauth-apiserver" (812 of 955) 2025-10-13T00:23:41.054798646+00:00 stderr F I1013 00:23:41.054718 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:41.055127685+00:00 stderr F I1013 00:23:41.055000 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:41.055127685+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:41.055127685+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:41.055127685+00:00 stderr F I1013 00:23:41.055075 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (366.76µs) 2025-10-13T00:23:41.055127685+00:00 stderr F I1013 00:23:41.055096 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:41.055236018+00:00 stderr F I1013 00:23:41.055181 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:41.055260269+00:00 stderr F I1013 00:23:41.055252 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:41.055407213+00:00 stderr F I1013 00:23:41.055263 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", 
"OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:41.055722392+00:00 stderr F I1013 00:23:41.055667 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:41.070066791+00:00 stderr F I1013 00:23:41.069992 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-apiserver-operator" (107 of 955) 2025-10-13T00:23:41.070140984+00:00 stderr F I1013 00:23:41.070126 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.070181985+00:00 stderr F I1013 00:23:41.070164 1 sync_worker.go:999] Running sync for namespace "openshift-kube-apiserver-operator" (108 of 955) 2025-10-13T00:23:41.081177281+00:00 stderr F W1013 00:23:41.081125 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:41.083130555+00:00 stderr F I1013 00:23:41.083101 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.00206ms) 2025-10-13T00:23:41.122283376+00:00 stderr F I1013 00:23:41.122211 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-oauth-apiserver/openshift-oauth-apiserver" (812 of 955) 2025-10-13T00:23:41.122377759+00:00 stderr F I1013 00:23:41.122362 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.122428520+00:00 stderr F I1013 00:23:41.122410 1 sync_worker.go:999] Running sync for prometheusrule "openshift-authentication-operator/authentication-operator" (813 of 955) 2025-10-13T00:23:41.169938473+00:00 stderr F I1013 00:23:41.169829 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-apiserver-operator" (108 of 955) 2025-10-13T00:23:41.169938473+00:00 stderr F I1013 00:23:41.169882 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.169938473+00:00 stderr F I1013 00:23:41.169891 1 sync_worker.go:999] Running sync for securitycontextconstraints "anyuid" (109 of 955) 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220428 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-authentication-operator/authentication-operator" (813 of 955) 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220460 1 task_graph.go:481] Running 44 on worker 0 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220474 1 sync_worker.go:982] Skipping precreation of clusteroperator "node-tuning" (494 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220490 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220495 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-node-tuning-operator" (475 of 955) 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220506 1 sync_worker.go:1002] Skipping namespace "openshift-cluster-node-tuning-operator" (475 of 955): 
disabled capabilities: NodeTuning 2025-10-13T00:23:41.220520362+00:00 stderr F I1013 00:23:41.220511 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220516 1 sync_worker.go:999] Running sync for customresourcedefinition "performanceprofiles.performance.openshift.io" (476 of 955) 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220526 1 sync_worker.go:1002] Skipping customresourcedefinition "performanceprofiles.performance.openshift.io" (476 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220532 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220536 1 sync_worker.go:999] Running sync for customresourcedefinition "profiles.tuned.openshift.io" (477 of 955) 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220545 1 sync_worker.go:1002] Skipping customresourcedefinition "profiles.tuned.openshift.io" (477 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220550 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220554 1 sync_worker.go:999] Running sync for customresourcedefinition "tuneds.tuned.openshift.io" (478 of 955) 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220580 1 sync_worker.go:1002] Skipping customresourcedefinition "tuneds.tuned.openshift.io" (478 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220590474+00:00 stderr F I1013 00:23:41.220586 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220614485+00:00 stderr F I1013 00:23:41.220591 1 sync_worker.go:999] Running sync for configmap "openshift-cluster-node-tuning-operator/trusted-ca" (479 of 955) 2025-10-13T00:23:41.220614485+00:00 stderr F I1013 00:23:41.220601 1 sync_worker.go:1002] Skipping configmap "openshift-cluster-node-tuning-operator/trusted-ca" (479 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220614485+00:00 stderr F I1013 00:23:41.220605 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220614485+00:00 stderr F I1013 00:23:41.220610 1 sync_worker.go:999] Running sync for service "openshift-cluster-node-tuning-operator/node-tuning-operator" (480 of 955) 2025-10-13T00:23:41.220626935+00:00 stderr F I1013 00:23:41.220618 1 sync_worker.go:1002] Skipping service "openshift-cluster-node-tuning-operator/node-tuning-operator" (480 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220626935+00:00 stderr F I1013 00:23:41.220623 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220637176+00:00 stderr F I1013 00:23:41.220628 1 sync_worker.go:999] Running sync for role "openshift-cluster-node-tuning-operator/prometheus-k8s" (481 of 955) 2025-10-13T00:23:41.220646956+00:00 stderr F I1013 00:23:41.220637 1 sync_worker.go:1002] Skipping role "openshift-cluster-node-tuning-operator/prometheus-k8s" (481 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220646956+00:00 stderr F I1013 00:23:41.220642 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220656996+00:00 stderr F I1013 00:23:41.220646 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-node-tuning-operator/prometheus-k8s" (482 of 955) 
2025-10-13T00:23:41.220667367+00:00 stderr F I1013 00:23:41.220654 1 sync_worker.go:1002] Skipping rolebinding "openshift-cluster-node-tuning-operator/prometheus-k8s" (482 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220667367+00:00 stderr F I1013 00:23:41.220659 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220678567+00:00 stderr F I1013 00:23:41.220663 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-node-tuning-operator/node-tuning-operator" (483 of 955) 2025-10-13T00:23:41.220678567+00:00 stderr F I1013 00:23:41.220672 1 sync_worker.go:1002] Skipping servicemonitor "openshift-cluster-node-tuning-operator/node-tuning-operator" (483 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220690127+00:00 stderr F I1013 00:23:41.220677 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220690127+00:00 stderr F I1013 00:23:41.220682 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-node-tuning-operator/node-tuning-operator" (484 of 955) 2025-10-13T00:23:41.220716628+00:00 stderr F I1013 00:23:41.220690 1 sync_worker.go:1002] Skipping prometheusrule "openshift-cluster-node-tuning-operator/node-tuning-operator" (484 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220716628+00:00 stderr F I1013 00:23:41.220696 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220716628+00:00 stderr F I1013 00:23:41.220700 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (485 of 955) 2025-10-13T00:23:41.220716628+00:00 stderr F I1013 00:23:41.220709 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (485 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220716628+00:00 stderr F I1013 00:23:41.220714 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220732108+00:00 stderr F I1013 00:23:41.220718 1 sync_worker.go:999] Running sync for clusterrole "cluster-node-tuning-operator" (486 of 955) 2025-10-13T00:23:41.220741829+00:00 stderr F I1013 00:23:41.220729 1 sync_worker.go:1002] Skipping clusterrole "cluster-node-tuning-operator" (486 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220741829+00:00 stderr F I1013 00:23:41.220735 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220751839+00:00 stderr F I1013 00:23:41.220741 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-node-tuning-operator" (487 of 955) 2025-10-13T00:23:41.220761589+00:00 stderr F I1013 00:23:41.220751 1 sync_worker.go:1002] Skipping clusterrolebinding "cluster-node-tuning-operator" (487 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220761589+00:00 stderr F I1013 00:23:41.220756 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220771839+00:00 stderr F I1013 00:23:41.220760 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-node-tuning-operator/tuned" (488 of 955) 2025-10-13T00:23:41.220781550+00:00 stderr F I1013 00:23:41.220768 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-node-tuning-operator/tuned" (488 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220781550+00:00 stderr F I1013 00:23:41.220774 1 sync_worker.go:703] Dropping status report from earlier in 
sync loop 2025-10-13T00:23:41.220791670+00:00 stderr F I1013 00:23:41.220778 1 sync_worker.go:999] Running sync for clusterrole "cluster-node-tuning:tuned" (489 of 955) 2025-10-13T00:23:41.220791670+00:00 stderr F I1013 00:23:41.220787 1 sync_worker.go:1002] Skipping clusterrole "cluster-node-tuning:tuned" (489 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220801680+00:00 stderr F I1013 00:23:41.220792 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220801680+00:00 stderr F I1013 00:23:41.220796 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-node-tuning:tuned" (490 of 955) 2025-10-13T00:23:41.220814171+00:00 stderr F I1013 00:23:41.220806 1 sync_worker.go:1002] Skipping clusterrolebinding "cluster-node-tuning:tuned" (490 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220814171+00:00 stderr F I1013 00:23:41.220811 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220824091+00:00 stderr F I1013 00:23:41.220815 1 sync_worker.go:999] Running sync for service "openshift-cluster-node-tuning-operator/performance-addon-operator-service" (491 of 955) 2025-10-13T00:23:41.220833841+00:00 stderr F I1013 00:23:41.220824 1 sync_worker.go:1002] Skipping service "openshift-cluster-node-tuning-operator/performance-addon-operator-service" (491 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220833841+00:00 stderr F I1013 00:23:41.220829 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220843681+00:00 stderr F I1013 00:23:41.220833 1 sync_worker.go:999] Running sync for validatingwebhookconfiguration "performance-addon-operator" (492 of 955) 2025-10-13T00:23:41.220853352+00:00 stderr F I1013 00:23:41.220843 1 sync_worker.go:1002] Skipping validatingwebhookconfiguration "performance-addon-operator" (492 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220853352+00:00 stderr F I1013 00:23:41.220848 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220866682+00:00 stderr F I1013 00:23:41.220852 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (493 of 955) 2025-10-13T00:23:41.220866682+00:00 stderr F I1013 00:23:41.220861 1 sync_worker.go:1002] Skipping deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (493 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220876752+00:00 stderr F I1013 00:23:41.220866 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220876752+00:00 stderr F I1013 00:23:41.220870 1 sync_worker.go:999] Running sync for clusteroperator "node-tuning" (494 of 955) 2025-10-13T00:23:41.220886723+00:00 stderr F I1013 00:23:41.220877 1 sync_worker.go:1002] Skipping clusteroperator "node-tuning" (494 of 955): disabled capabilities: NodeTuning 2025-10-13T00:23:41.220886723+00:00 stderr F I1013 00:23:41.220883 1 task_graph.go:481] Running 45 on worker 0 2025-10-13T00:23:41.220896703+00:00 stderr F I1013 00:23:41.220887 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.220896703+00:00 stderr F I1013 00:23:41.220892 1 sync_worker.go:999] Running sync for role "openshift-ingress-operator/prometheus-k8s" (870 of 955) 2025-10-13T00:23:41.271246135+00:00 stderr F I1013 00:23:41.271168 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "anyuid" (109 of 955) 
2025-10-13T00:23:41.271246135+00:00 stderr F I1013 00:23:41.271194 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.271246135+00:00 stderr F I1013 00:23:41.271200 1 sync_worker.go:999] Running sync for securitycontextconstraints "anyuid" (110 of 955) 2025-10-13T00:23:41.319766047+00:00 stderr F I1013 00:23:41.319690 1 sync_worker.go:1014] Done syncing for role "openshift-ingress-operator/prometheus-k8s" (870 of 955) 2025-10-13T00:23:41.319766047+00:00 stderr F I1013 00:23:41.319728 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.319766047+00:00 stderr F I1013 00:23:41.319736 1 sync_worker.go:999] Running sync for rolebinding "openshift-ingress-operator/prometheus-k8s" (871 of 955) 2025-10-13T00:23:41.371126958+00:00 stderr F I1013 00:23:41.371059 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "anyuid" (110 of 955) 2025-10-13T00:23:41.371126958+00:00 stderr F I1013 00:23:41.371096 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.371126958+00:00 stderr F I1013 00:23:41.371102 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostaccess" (111 of 955) 2025-10-13T00:23:41.420517253+00:00 stderr F I1013 00:23:41.420471 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-ingress-operator/prometheus-k8s" (871 of 955) 2025-10-13T00:23:41.420517253+00:00 stderr F I1013 00:23:41.420500 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.420517253+00:00 stderr F I1013 00:23:41.420506 1 sync_worker.go:999] Running sync for servicemonitor "openshift-ingress-operator/ingress-operator" (872 of 955) 2025-10-13T00:23:41.472533152+00:00 stderr F I1013 00:23:41.472460 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostaccess" (111 of 955) 2025-10-13T00:23:41.472533152+00:00 stderr F I1013 00:23:41.472501 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.472533152+00:00 stderr F I1013 00:23:41.472511 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostaccess" (112 of 955) 2025-10-13T00:23:41.520143929+00:00 stderr F I1013 00:23:41.520053 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-ingress-operator/ingress-operator" (872 of 955) 2025-10-13T00:23:41.520143929+00:00 stderr F I1013 00:23:41.520085 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.520143929+00:00 stderr F I1013 00:23:41.520091 1 sync_worker.go:999] Running sync for prometheusrule "openshift-ingress-operator/ingress-operator" (873 of 955) 2025-10-13T00:23:41.571079507+00:00 stderr F I1013 00:23:41.570992 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostaccess" (112 of 955) 2025-10-13T00:23:41.571079507+00:00 stderr F I1013 00:23:41.571037 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.571079507+00:00 stderr F I1013 00:23:41.571043 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostmount-anyuid" (113 of 955) 2025-10-13T00:23:41.622646734+00:00 stderr F I1013 00:23:41.622557 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-ingress-operator/ingress-operator" (873 of 955) 2025-10-13T00:23:41.622646734+00:00 stderr F I1013 00:23:41.622592 1 task_graph.go:481] Running 46 on worker 0 2025-10-13T00:23:41.622646734+00:00 stderr F I1013 00:23:41.622604 1 sync_worker.go:703] Dropping status report from 
earlier in sync loop 2025-10-13T00:23:41.622646734+00:00 stderr F I1013 00:23:41.622610 1 sync_worker.go:999] Running sync for role "openshift-kube-scheduler-operator/prometheus-k8s" (910 of 955) 2025-10-13T00:23:41.672494252+00:00 stderr F I1013 00:23:41.672384 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostmount-anyuid" (113 of 955) 2025-10-13T00:23:41.672494252+00:00 stderr F I1013 00:23:41.672422 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.672494252+00:00 stderr F I1013 00:23:41.672428 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostmount-anyuid" (114 of 955) 2025-10-13T00:23:41.720876640+00:00 stderr F I1013 00:23:41.720731 1 sync_worker.go:1014] Done syncing for role "openshift-kube-scheduler-operator/prometheus-k8s" (910 of 955) 2025-10-13T00:23:41.720876640+00:00 stderr F I1013 00:23:41.720765 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.720876640+00:00 stderr F I1013 00:23:41.720773 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-scheduler-operator/prometheus-k8s" (911 of 955) 2025-10-13T00:23:41.771850980+00:00 stderr F I1013 00:23:41.771802 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostmount-anyuid" (114 of 955) 2025-10-13T00:23:41.771908292+00:00 stderr F I1013 00:23:41.771898 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.771937652+00:00 stderr F I1013 00:23:41.771924 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork-v2" (115 of 955) 2025-10-13T00:23:41.821118632+00:00 stderr F I1013 00:23:41.821010 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-scheduler-operator/prometheus-k8s" (911 of 955) 2025-10-13T00:23:41.821118632+00:00 stderr F I1013 00:23:41.821045 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.821118632+00:00 stderr F I1013 00:23:41.821053 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-scheduler-operator/kube-scheduler-operator" (912 of 955) 2025-10-13T00:23:41.871848425+00:00 stderr F I1013 00:23:41.871743 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork-v2" (115 of 955) 2025-10-13T00:23:41.871986909+00:00 stderr F I1013 00:23:41.871959 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.872131863+00:00 stderr F I1013 00:23:41.872051 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork-v2" (116 of 955) 2025-10-13T00:23:41.920478730+00:00 stderr F I1013 00:23:41.920352 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-scheduler-operator/kube-scheduler-operator" (912 of 955) 2025-10-13T00:23:41.920478730+00:00 stderr F I1013 00:23:41.920390 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.920478730+00:00 stderr F I1013 00:23:41.920398 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-scheduler-operator/kube-scheduler-operator" (913 of 955) 2025-10-13T00:23:41.972044186+00:00 stderr F I1013 00:23:41.971915 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork-v2" (116 of 955) 2025-10-13T00:23:41.972044186+00:00 stderr F I1013 00:23:41.971972 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:41.972044186+00:00 stderr F I1013 00:23:41.971985 1 sync_worker.go:999] Running sync for 
securitycontextconstraints "hostnetwork" (117 of 955) 2025-10-13T00:23:42.022448200+00:00 stderr F I1013 00:23:42.022312 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-scheduler-operator/kube-scheduler-operator" (913 of 955) 2025-10-13T00:23:42.022448200+00:00 stderr F I1013 00:23:42.022400 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.022448200+00:00 stderr F I1013 00:23:42.022414 1 sync_worker.go:999] Running sync for clusterrole "prometheus-k8s-scheduler-resources" (914 of 955) 2025-10-13T00:23:42.055568473+00:00 stderr F I1013 00:23:42.055503 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:42.056056567+00:00 stderr F I1013 00:23:42.056025 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:42.056056567+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:42.056056567+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:42.056200081+00:00 stderr F I1013 00:23:42.056169 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (676.129µs) 2025-10-13T00:23:42.056261992+00:00 stderr F I1013 00:23:42.056240 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:42.056427647+00:00 stderr F I1013 00:23:42.056393 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:42.056537550+00:00 stderr F I1013 00:23:42.056517 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:42.056695024+00:00 stderr F I1013 00:23:42.056573 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", 
"OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:42.057132857+00:00 stderr F I1013 00:23:42.057084 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:42.072357111+00:00 stderr F I1013 00:23:42.072298 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork" (117 of 955) 2025-10-13T00:23:42.072410002+00:00 stderr F I1013 00:23:42.072399 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.072439013+00:00 stderr F I1013 00:23:42.072426 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork" (118 of 955) 2025-10-13T00:23:42.083840071+00:00 stderr F W1013 00:23:42.083754 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:42.085785025+00:00 stderr F I1013 00:23:42.085735 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.492121ms) 2025-10-13T00:23:42.120910593+00:00 stderr F I1013 00:23:42.120838 1 sync_worker.go:1014] Done syncing for clusterrole "prometheus-k8s-scheduler-resources" (914 of 955) 2025-10-13T00:23:42.120910593+00:00 stderr F I1013 00:23:42.120882 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.120910593+00:00 stderr F I1013 00:23:42.120892 1 sync_worker.go:999] Running sync for role "openshift-kube-scheduler/prometheus-k8s" (915 of 955) 2025-10-13T00:23:42.169889688+00:00 stderr F I1013 00:23:42.169801 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork" (118 of 955) 2025-10-13T00:23:42.169889688+00:00 stderr F I1013 00:23:42.169841 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.169889688+00:00 stderr F I1013 00:23:42.169849 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot-v2" (119 of 955) 2025-10-13T00:23:42.221115354+00:00 stderr F I1013 00:23:42.221035 1 sync_worker.go:1014] Done syncing for role "openshift-kube-scheduler/prometheus-k8s" (915 of 955) 2025-10-13T00:23:42.221115354+00:00 stderr F I1013 00:23:42.221069 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.221115354+00:00 stderr F I1013 00:23:42.221074 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-scheduler/prometheus-k8s" (916 of 955) 2025-10-13T00:23:42.272573818+00:00 stderr F I1013 00:23:42.272452 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot-v2" (119 of 955) 2025-10-13T00:23:42.272573818+00:00 stderr F I1013 00:23:42.272503 1 sync_worker.go:703] Dropping status report 
from earlier in sync loop 2025-10-13T00:23:42.272573818+00:00 stderr F I1013 00:23:42.272514 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot-v2" (120 of 955) 2025-10-13T00:23:42.319741951+00:00 stderr F I1013 00:23:42.319647 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-scheduler/prometheus-k8s" (916 of 955) 2025-10-13T00:23:42.319741951+00:00 stderr F I1013 00:23:42.319685 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.319741951+00:00 stderr F I1013 00:23:42.319691 1 sync_worker.go:999] Running sync for clusterrolebinding "prometheus-k8s-scheduler-resources" (917 of 955) 2025-10-13T00:23:42.371683488+00:00 stderr F I1013 00:23:42.371619 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot-v2" (120 of 955) 2025-10-13T00:23:42.371683488+00:00 stderr F I1013 00:23:42.371654 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.371683488+00:00 stderr F I1013 00:23:42.371662 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot" (121 of 955) 2025-10-13T00:23:42.421566827+00:00 stderr F I1013 00:23:42.421386 1 sync_worker.go:1014] Done syncing for clusterrolebinding "prometheus-k8s-scheduler-resources" (917 of 955) 2025-10-13T00:23:42.421566827+00:00 stderr F I1013 00:23:42.421418 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.421566827+00:00 stderr F I1013 00:23:42.421424 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-scheduler/kube-scheduler" (918 of 955) 2025-10-13T00:23:42.472209518+00:00 stderr F I1013 00:23:42.472081 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot" (121 of 955) 2025-10-13T00:23:42.472209518+00:00 stderr F I1013 00:23:42.472118 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.472209518+00:00 stderr F I1013 00:23:42.472126 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot" (122 of 955) 2025-10-13T00:23:42.520516493+00:00 stderr F I1013 00:23:42.520437 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-scheduler/kube-scheduler" (918 of 955) 2025-10-13T00:23:42.520516493+00:00 stderr F I1013 00:23:42.520474 1 task_graph.go:481] Running 47 on worker 0 2025-10-13T00:23:42.520516493+00:00 stderr F I1013 00:23:42.520496 1 sync_worker.go:982] Skipping precreation of clusteroperator "cluster-autoscaler" (353 of 955): overridden 2025-10-13T00:23:42.520570805+00:00 stderr F I1013 00:23:42.520515 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.520570805+00:00 stderr F I1013 00:23:42.520524 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterautoscalers.autoscaling.openshift.io" (335 of 955) 2025-10-13T00:23:42.570976529+00:00 stderr F I1013 00:23:42.570915 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot" (122 of 955) 2025-10-13T00:23:42.570976529+00:00 stderr F I1013 00:23:42.570952 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.570976529+00:00 stderr F I1013 00:23:42.570960 1 sync_worker.go:999] Running sync for securitycontextconstraints "privileged" (123 of 955) 2025-10-13T00:23:42.621241819+00:00 stderr F I1013 00:23:42.621150 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterautoscalers.autoscaling.openshift.io" (335 of 955) 2025-10-13T00:23:42.621241819+00:00 stderr F I1013 
00:23:42.621190 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.621241819+00:00 stderr F I1013 00:23:42.621197 1 sync_worker.go:999] Running sync for customresourcedefinition "machineautoscalers.autoscaling.openshift.io" (336 of 955) 2025-10-13T00:23:42.672002293+00:00 stderr F I1013 00:23:42.671192 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "privileged" (123 of 955) 2025-10-13T00:23:42.672002293+00:00 stderr F I1013 00:23:42.671247 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.672002293+00:00 stderr F I1013 00:23:42.671264 1 sync_worker.go:999] Running sync for securitycontextconstraints "privileged" (124 of 955) 2025-10-13T00:23:42.723109997+00:00 stderr F I1013 00:23:42.723025 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineautoscalers.autoscaling.openshift.io" (336 of 955) 2025-10-13T00:23:42.723109997+00:00 stderr F I1013 00:23:42.723068 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.723109997+00:00 stderr F I1013 00:23:42.723078 1 sync_worker.go:999] Running sync for clusterrole "cluster-autoscaler-operator" (337 of 955) 2025-10-13T00:23:42.775978269+00:00 stderr F I1013 00:23:42.774602 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "privileged" (124 of 955) 2025-10-13T00:23:42.775978269+00:00 stderr F I1013 00:23:42.774674 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.775978269+00:00 stderr F I1013 00:23:42.774685 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted-v2" (125 of 955) 2025-10-13T00:23:42.830521819+00:00 stderr F I1013 00:23:42.827570 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-autoscaler-operator" (337 of 955) 2025-10-13T00:23:42.830521819+00:00 stderr F I1013 00:23:42.827602 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.830521819+00:00 stderr F I1013 00:23:42.827608 1 sync_worker.go:999] Running sync for role "openshift-machine-api/cluster-autoscaler-operator" (338 of 955) 2025-10-13T00:23:42.872068506+00:00 stderr F I1013 00:23:42.872006 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted-v2" (125 of 955) 2025-10-13T00:23:42.872068506+00:00 stderr F I1013 00:23:42.872046 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.872068506+00:00 stderr F I1013 00:23:42.872053 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted-v2" (126 of 955) 2025-10-13T00:23:42.920022132+00:00 stderr F I1013 00:23:42.919966 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/cluster-autoscaler-operator" (338 of 955) 2025-10-13T00:23:42.920022132+00:00 stderr F I1013 00:23:42.920004 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.920022132+00:00 stderr F I1013 00:23:42.920012 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/cluster-autoscaler-operator" (339 of 955) 2025-10-13T00:23:42.971879736+00:00 stderr F I1013 00:23:42.971756 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted-v2" (126 of 955) 2025-10-13T00:23:42.971879736+00:00 stderr F I1013 00:23:42.971819 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:42.971879736+00:00 stderr F I1013 00:23:42.971828 1 sync_worker.go:999] Running sync for securitycontextconstraints 
"restricted" (127 of 955) 2025-10-13T00:23:43.023380301+00:00 stderr F I1013 00:23:43.021285 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/cluster-autoscaler-operator" (339 of 955) 2025-10-13T00:23:43.023380301+00:00 stderr F I1013 00:23:43.021341 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.023380301+00:00 stderr F I1013 00:23:43.021350 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-autoscaler-operator" (340 of 955) 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056459 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056775 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:43.057448230+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:43.057448230+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056832 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (384.401µs) 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056851 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056903 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056965 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.056975 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", 
"marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:43.057448230+00:00 stderr F I1013 00:23:43.057283 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:43.122245135+00:00 stderr F I1013 00:23:43.122194 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted" (127 of 955) 2025-10-13T00:23:43.122302016+00:00 stderr F I1013 00:23:43.122292 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.122363568+00:00 stderr F I1013 00:23:43.122349 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted" (128 of 955) 2025-10-13T00:23:43.154439221+00:00 stderr F I1013 00:23:43.153597 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-autoscaler-operator" (340 of 955) 2025-10-13T00:23:43.154439221+00:00 stderr F I1013 00:23:43.153632 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.154439221+00:00 stderr F I1013 00:23:43.153637 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/cluster-autoscaler-operator" (341 of 955) 2025-10-13T00:23:43.173892463+00:00 stderr F I1013 00:23:43.173829 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted" (128 of 955) 2025-10-13T00:23:43.173964735+00:00 stderr F I1013 00:23:43.173952 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.174000866+00:00 stderr F I1013 00:23:43.173984 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeapiservers.operator.openshift.io" (129 of 955) 2025-10-13T00:23:43.180469687+00:00 stderr F W1013 00:23:43.180427 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:43.182343739+00:00 stderr F I1013 00:23:43.182218 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (125.364192ms) 2025-10-13T00:23:43.226389176+00:00 stderr F I1013 00:23:43.226314 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/cluster-autoscaler-operator" (341 of 955) 2025-10-13T00:23:43.226461088+00:00 stderr F I1013 00:23:43.226448 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.226496949+00:00 stderr F I1013 00:23:43.226482 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/cluster-autoscaler" (342 of 955) 2025-10-13T00:23:43.270841334+00:00 stderr F I1013 00:23:43.270780 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeapiservers.operator.openshift.io" (129 of 955) 2025-10-13T00:23:43.270916466+00:00 stderr F 
I1013 00:23:43.270903 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.270948757+00:00 stderr F I1013 00:23:43.270934 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeapiservers.operator.openshift.io" (130 of 955) 2025-10-13T00:23:43.319409407+00:00 stderr F I1013 00:23:43.319358 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/cluster-autoscaler" (342 of 955) 2025-10-13T00:23:43.319490109+00:00 stderr F I1013 00:23:43.319480 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.319521100+00:00 stderr F I1013 00:23:43.319507 1 sync_worker.go:999] Running sync for clusterrole "cluster-autoscaler" (343 of 955) 2025-10-13T00:23:43.370265923+00:00 stderr F I1013 00:23:43.370212 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeapiservers.operator.openshift.io" (130 of 955) 2025-10-13T00:23:43.370356916+00:00 stderr F I1013 00:23:43.370345 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.370389737+00:00 stderr F I1013 00:23:43.370376 1 sync_worker.go:999] Running sync for kubeapiserver "cluster" (131 of 955) 2025-10-13T00:23:43.419625028+00:00 stderr F I1013 00:23:43.419572 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-autoscaler" (343 of 955) 2025-10-13T00:23:43.419684670+00:00 stderr F I1013 00:23:43.419674 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.419716601+00:00 stderr F I1013 00:23:43.419701 1 sync_worker.go:999] Running sync for role "openshift-machine-api/cluster-autoscaler" (344 of 955) 2025-10-13T00:23:43.473241822+00:00 stderr F I1013 00:23:43.473171 1 sync_worker.go:1014] Done syncing for kubeapiserver "cluster" (131 of 955) 2025-10-13T00:23:43.473399996+00:00 stderr F I1013 00:23:43.473376 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.473470588+00:00 stderr F I1013 00:23:43.473445 1 sync_worker.go:999] Running sync for kubeapiserver "cluster" (132 of 955) 2025-10-13T00:23:43.518905114+00:00 stderr F I1013 00:23:43.518853 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/cluster-autoscaler" (344 of 955) 2025-10-13T00:23:43.518984916+00:00 stderr F I1013 00:23:43.518957 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.519023387+00:00 stderr F I1013 00:23:43.519006 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-autoscaler" (345 of 955) 2025-10-13T00:23:43.572359213+00:00 stderr F I1013 00:23:43.572264 1 sync_worker.go:1014] Done syncing for kubeapiserver "cluster" (132 of 955) 2025-10-13T00:23:43.572462936+00:00 stderr F I1013 00:23:43.572442 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.572518777+00:00 stderr F I1013 00:23:43.572495 1 sync_worker.go:999] Running sync for service "openshift-kube-apiserver-operator/metrics" (133 of 955) 2025-10-13T00:23:43.621960545+00:00 stderr F I1013 00:23:43.621910 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-autoscaler" (345 of 955) 2025-10-13T00:23:43.622020106+00:00 stderr F I1013 00:23:43.622010 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.622050747+00:00 stderr F I1013 00:23:43.622036 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/cluster-autoscaler" (346 of 955) 2025-10-13T00:23:43.669229411+00:00 stderr F I1013 00:23:43.669189 1 
sync_worker.go:1014] Done syncing for service "openshift-kube-apiserver-operator/metrics" (133 of 955) 2025-10-13T00:23:43.669281953+00:00 stderr F I1013 00:23:43.669272 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.669310774+00:00 stderr F I1013 00:23:43.669298 1 sync_worker.go:999] Running sync for service "openshift-kube-apiserver-operator/metrics" (134 of 955) 2025-10-13T00:23:43.719483601+00:00 stderr F I1013 00:23:43.719417 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/cluster-autoscaler" (346 of 955) 2025-10-13T00:23:43.719483601+00:00 stderr F I1013 00:23:43.719463 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.719483601+00:00 stderr F I1013 00:23:43.719472 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (347 of 955) 2025-10-13T00:23:43.769101033+00:00 stderr F I1013 00:23:43.769041 1 sync_worker.go:1014] Done syncing for service "openshift-kube-apiserver-operator/metrics" (134 of 955) 2025-10-13T00:23:43.769101033+00:00 stderr F I1013 00:23:43.769076 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.769101033+00:00 stderr F I1013 00:23:43.769081 1 sync_worker.go:999] Running sync for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (135 of 955) 2025-10-13T00:23:43.818922811+00:00 stderr F I1013 00:23:43.818478 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (347 of 955) 2025-10-13T00:23:43.818922811+00:00 stderr F I1013 00:23:43.818515 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.818922811+00:00 stderr F I1013 00:23:43.818523 1 sync_worker.go:999] Running sync for role "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (348 of 955) 2025-10-13T00:23:43.869510120+00:00 stderr F I1013 00:23:43.869461 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (135 of 955) 2025-10-13T00:23:43.869576772+00:00 stderr F I1013 00:23:43.869566 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.869606173+00:00 stderr F I1013 00:23:43.869593 1 sync_worker.go:999] Running sync for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (136 of 955) 2025-10-13T00:23:43.919225715+00:00 stderr F I1013 00:23:43.919175 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (348 of 955) 2025-10-13T00:23:43.919300737+00:00 stderr F I1013 00:23:43.919291 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.919352409+00:00 stderr F I1013 00:23:43.919316 1 sync_worker.go:999] Running sync for service "openshift-machine-api/cluster-autoscaler-operator" (349 of 955) 2025-10-13T00:23:43.970293438+00:00 stderr F I1013 00:23:43.970224 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (136 of 955) 2025-10-13T00:23:43.970293438+00:00 stderr F I1013 00:23:43.970257 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:43.970293438+00:00 stderr F I1013 00:23:43.970263 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (137 of 955) 
2025-10-13T00:23:44.021856524+00:00 stderr F I1013 00:23:44.021788 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/cluster-autoscaler-operator" (349 of 955) 2025-10-13T00:23:44.021856524+00:00 stderr F I1013 00:23:44.021831 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.021856524+00:00 stderr F I1013 00:23:44.021839 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator" (350 of 955) 2025-10-13T00:23:44.057988110+00:00 stderr F I1013 00:23:44.057940 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:44.058284949+00:00 stderr F I1013 00:23:44.058265 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:44.058284949+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:44.058284949+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:44.058380081+00:00 stderr F I1013 00:23:44.058359 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (427.061µs) 2025-10-13T00:23:44.058421282+00:00 stderr F I1013 00:23:44.058409 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:44.058560296+00:00 stderr F I1013 00:23:44.058480 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:44.058664079+00:00 stderr F I1013 00:23:44.058651 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:44.058735391+00:00 stderr F I1013 00:23:44.058684 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", 
"marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:44.059023559+00:00 stderr F I1013 00:23:44.058997 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:44.074358096+00:00 stderr F I1013 00:23:44.071010 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (137 of 955) 2025-10-13T00:23:44.074358096+00:00 stderr F I1013 00:23:44.071051 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.074358096+00:00 stderr F I1013 00:23:44.071058 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (138 of 955) 2025-10-13T00:23:44.090566248+00:00 stderr F W1013 00:23:44.090501 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:44.092290816+00:00 stderr F I1013 00:23:44.092250 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.837712ms) 2025-10-13T00:23:44.119195335+00:00 stderr F I1013 00:23:44.119145 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator" (350 of 955) 2025-10-13T00:23:44.119268647+00:00 stderr F I1013 00:23:44.119258 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.119300818+00:00 stderr F I1013 00:23:44.119285 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (351 of 955) 2025-10-13T00:23:44.170627558+00:00 stderr F I1013 00:23:44.170573 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (138 of 955) 2025-10-13T00:23:44.170687490+00:00 stderr F I1013 00:23:44.170677 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.170716020+00:00 stderr F I1013 00:23:44.170703 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (139 of 955) 2025-10-13T00:23:44.220624301+00:00 stderr F I1013 00:23:44.220565 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (351 of 955) 2025-10-13T00:23:44.220694533+00:00 stderr F I1013 00:23:44.220684 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.220723543+00:00 stderr F I1013 00:23:44.220711 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/cluster-autoscaler-operator" (352 of 955) 2025-10-13T00:23:44.220757934+00:00 stderr F I1013 00:23:44.220747 1 sync_worker.go:1002] 
Skipping deployment "openshift-machine-api/cluster-autoscaler-operator" (352 of 955): overridden 2025-10-13T00:23:44.220780525+00:00 stderr F I1013 00:23:44.220772 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.220806946+00:00 stderr F I1013 00:23:44.220795 1 sync_worker.go:999] Running sync for clusteroperator "cluster-autoscaler" (353 of 955) 2025-10-13T00:23:44.220837907+00:00 stderr F I1013 00:23:44.220827 1 sync_worker.go:1002] Skipping clusteroperator "cluster-autoscaler" (353 of 955): overridden 2025-10-13T00:23:44.220861297+00:00 stderr F I1013 00:23:44.220852 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.220886958+00:00 stderr F I1013 00:23:44.220875 1 sync_worker.go:999] Running sync for clusterrole "cluster-autoscaler-operator:cluster-reader" (354 of 955) 2025-10-13T00:23:44.269037789+00:00 stderr F I1013 00:23:44.268976 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (139 of 955) 2025-10-13T00:23:44.269037789+00:00 stderr F I1013 00:23:44.269006 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.269037789+00:00 stderr F I1013 00:23:44.269012 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (140 of 955) 2025-10-13T00:23:44.320067581+00:00 stderr F I1013 00:23:44.320017 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-autoscaler-operator:cluster-reader" (354 of 955) 2025-10-13T00:23:44.320124622+00:00 stderr F I1013 00:23:44.320115 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.320159613+00:00 stderr F I1013 00:23:44.320140 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/cluster-autoscaler-operator-ca" (355 of 955) 2025-10-13T00:23:44.369804476+00:00 stderr F I1013 00:23:44.369736 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (140 of 955) 2025-10-13T00:23:44.369804476+00:00 stderr F I1013 00:23:44.369767 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.369804476+00:00 stderr F I1013 00:23:44.369773 1 sync_worker.go:999] Running sync for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (141 of 955) 2025-10-13T00:23:44.419576302+00:00 stderr F I1013 00:23:44.419530 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/cluster-autoscaler-operator-ca" (355 of 955) 2025-10-13T00:23:44.419576302+00:00 stderr F I1013 00:23:44.419556 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.419576302+00:00 stderr F I1013 00:23:44.419562 1 sync_worker.go:999] Running sync for prometheusrule "openshift-machine-api/cluster-autoscaler-operator-rules" (356 of 955) 2025-10-13T00:23:44.454711101+00:00 stderr F I1013 00:23:44.454672 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:44.454851255+00:00 stderr F I1013 00:23:44.454827 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:44.454908507+00:00 stderr F I1013 00:23:44.454898 1 sync_worker.go:479] Update work is equal to current target; no change required 
2025-10-13T00:23:44.454986419+00:00 stderr F I1013 00:23:44.454933 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:44.455226766+00:00 stderr F I1013 00:23:44.455202 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:44.479413899+00:00 stderr F I1013 00:23:44.476163 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (141 of 955) 2025-10-13T00:23:44.479413899+00:00 stderr F I1013 00:23:44.476210 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.479413899+00:00 stderr F I1013 00:23:44.476222 1 sync_worker.go:999] Running sync for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (142 of 955) 2025-10-13T00:23:44.525291617+00:00 stderr F I1013 00:23:44.524297 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-machine-api/cluster-autoscaler-operator-rules" (356 of 955) 2025-10-13T00:23:44.525291617+00:00 stderr F I1013 00:23:44.524338 1 task_graph.go:481] Running 48 on worker 0 2025-10-13T00:23:44.525291617+00:00 stderr F I1013 00:23:44.524352 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-10-13T00:23:44.525291617+00:00 stderr F I1013 00:23:44.524365 1 sync_worker.go:999] Running sync for customresourcedefinition "kubestorageversionmigrators.operator.openshift.io" (301 of 955) 2025-10-13T00:23:44.545687035+00:00 stderr F W1013 00:23:44.545139 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:44.547165606+00:00 stderr F I1013 00:23:44.547131 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (92.463455ms) 2025-10-13T00:23:44.569837798+00:00 stderr F I1013 00:23:44.569763 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (142 of 955) 2025-10-13T00:23:44.569837798+00:00 stderr F I1013 00:23:44.569790 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.569837798+00:00 stderr F I1013 00:23:44.569796 1 sync_worker.go:999] Running sync for clusteroperator "kube-apiserver" (143 of 955) 2025-10-13T00:23:44.570347312+00:00 stderr F I1013 00:23:44.569994 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-apiserver" (143 of 955) 2025-10-13T00:23:44.570347312+00:00 stderr F I1013 00:23:44.570011 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.570347312+00:00 stderr F I1013 00:23:44.570018 1 sync_worker.go:999] Running sync for clusteroperator "kube-apiserver" (144 of 955) 2025-10-13T00:23:44.570347312+00:00 stderr F I1013 00:23:44.570132 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-apiserver" (144 of 955) 2025-10-13T00:23:44.570347312+00:00 stderr F I1013 00:23:44.570139 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.570347312+00:00 stderr F I1013 00:23:44.570144 1 sync_worker.go:999] Running sync for prioritylevelconfiguration "openshift-control-plane-operators" (145 of 955) 2025-10-13T00:23:44.620446488+00:00 stderr F I1013 00:23:44.620365 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubestorageversionmigrators.operator.openshift.io" (301 of 955) 2025-10-13T00:23:44.620446488+00:00 stderr F I1013 00:23:44.620394 1 task_graph.go:481] Running 49 on worker 0 2025-10-13T00:23:44.669912006+00:00 stderr F I1013 00:23:44.669835 1 sync_worker.go:1014] Done syncing for prioritylevelconfiguration "openshift-control-plane-operators" (145 of 955) 2025-10-13T00:23:44.669912006+00:00 stderr F I1013 00:23:44.669868 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.669912006+00:00 stderr F I1013 00:23:44.669874 1 sync_worker.go:999] Running sync for flowschema "openshift-monitoring-metrics" (146 of 955) 2025-10-13T00:23:44.722447159+00:00 stderr F I1013 00:23:44.722367 1 sync_worker.go:989] Precreated resource clusteroperator "control-plane-machine-set" (226 of 955) 2025-10-13T00:23:44.722447159+00:00 stderr F I1013 00:23:44.722403 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.722447159+00:00 stderr F I1013 00:23:44.722410 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/control-plane-machine-set-operator" (219 of 955) 2025-10-13T00:23:44.770147928+00:00 stderr F I1013 00:23:44.770068 1 sync_worker.go:1014] Done syncing for flowschema "openshift-monitoring-metrics" (146 of 955) 2025-10-13T00:23:44.770147928+00:00 stderr F I1013 00:23:44.770103 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.770147928+00:00 stderr F I1013 00:23:44.770109 1 sync_worker.go:999] 
Running sync for flowschema "openshift-kube-apiserver-operator" (147 of 955) 2025-10-13T00:23:44.819344169+00:00 stderr F I1013 00:23:44.819231 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/control-plane-machine-set-operator" (219 of 955) 2025-10-13T00:23:44.819344169+00:00 stderr F I1013 00:23:44.819261 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.819344169+00:00 stderr F I1013 00:23:44.819268 1 sync_worker.go:999] Running sync for role "openshift-machine-api/control-plane-machine-set-operator" (220 of 955) 2025-10-13T00:23:44.869570977+00:00 stderr F I1013 00:23:44.869483 1 sync_worker.go:1014] Done syncing for flowschema "openshift-kube-apiserver-operator" (147 of 955) 2025-10-13T00:23:44.869570977+00:00 stderr F I1013 00:23:44.869518 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.869570977+00:00 stderr F I1013 00:23:44.869524 1 sync_worker.go:999] Running sync for prioritylevelconfiguration "openshift-control-plane-operators" (148 of 955) 2025-10-13T00:23:44.920040273+00:00 stderr F I1013 00:23:44.919956 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/control-plane-machine-set-operator" (220 of 955) 2025-10-13T00:23:44.920040273+00:00 stderr F I1013 00:23:44.919993 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.920040273+00:00 stderr F I1013 00:23:44.920000 1 sync_worker.go:999] Running sync for clusterrole "control-plane-machine-set-operator" (221 of 955) 2025-10-13T00:23:44.971243349+00:00 stderr F I1013 00:23:44.971168 1 sync_worker.go:1014] Done syncing for prioritylevelconfiguration "openshift-control-plane-operators" (148 of 955) 2025-10-13T00:23:44.971243349+00:00 stderr F I1013 00:23:44.971209 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:44.971243349+00:00 stderr F I1013 00:23:44.971217 1 sync_worker.go:999] Running sync for flowschema "openshift-monitoring-metrics" (149 of 955) 2025-10-13T00:23:45.019344548+00:00 stderr F I1013 00:23:45.019260 1 sync_worker.go:1014] Done syncing for clusterrole "control-plane-machine-set-operator" (221 of 955) 2025-10-13T00:23:45.019344548+00:00 stderr F I1013 00:23:45.019292 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.019344548+00:00 stderr F I1013 00:23:45.019300 1 sync_worker.go:999] Running sync for clusterrolebinding "control-plane-machine-set-operator" (222 of 955) 2025-10-13T00:23:45.059690392+00:00 stderr F I1013 00:23:45.059619 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:45.059927699+00:00 stderr F I1013 00:23:45.059893 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:45.059927699+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:45.059927699+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:45.059982840+00:00 stderr F I1013 00:23:45.059956 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (364.87µs) 2025-10-13T00:23:45.059993150+00:00 stderr F I1013 00:23:45.059979 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:45.060080993+00:00 stderr F I1013 00:23:45.060034 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:45.060145565+00:00 stderr F I1013 00:23:45.060115 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:45.060198476+00:00 stderr F I1013 00:23:45.060129 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:45.060499915+00:00 stderr F I1013 00:23:45.060455 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:45.071241154+00:00 stderr F I1013 00:23:45.070017 1 
sync_worker.go:1014] Done syncing for flowschema "openshift-monitoring-metrics" (149 of 955) 2025-10-13T00:23:45.071241154+00:00 stderr F I1013 00:23:45.070054 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.071241154+00:00 stderr F I1013 00:23:45.070062 1 sync_worker.go:999] Running sync for flowschema "openshift-kube-apiserver-operator" (150 of 955) 2025-10-13T00:23:45.089846612+00:00 stderr F W1013 00:23:45.089781 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:45.091814967+00:00 stderr F I1013 00:23:45.091748 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.764755ms) 2025-10-13T00:23:45.121179835+00:00 stderr F I1013 00:23:45.121085 1 sync_worker.go:1014] Done syncing for clusterrolebinding "control-plane-machine-set-operator" (222 of 955) 2025-10-13T00:23:45.121179835+00:00 stderr F I1013 00:23:45.121125 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.121179835+00:00 stderr F I1013 00:23:45.121134 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/control-plane-machine-set-operator" (223 of 955) 2025-10-13T00:23:45.171601719+00:00 stderr F I1013 00:23:45.171511 1 sync_worker.go:1014] Done syncing for flowschema "openshift-kube-apiserver-operator" (150 of 955) 2025-10-13T00:23:45.171601719+00:00 stderr F I1013 00:23:45.171575 1 task_graph.go:481] Running 50 on worker 1 2025-10-13T00:23:45.171601719+00:00 stderr F I1013 00:23:45.171591 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.171648331+00:00 stderr F I1013 00:23:45.171598 1 sync_worker.go:999] Running sync for clusterrole "registry-monitoring" (816 of 955) 2025-10-13T00:23:45.220373888+00:00 stderr F I1013 00:23:45.220105 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/control-plane-machine-set-operator" (223 of 955) 2025-10-13T00:23:45.220373888+00:00 stderr F I1013 00:23:45.220152 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.220373888+00:00 stderr F I1013 00:23:45.220160 1 sync_worker.go:999] Running sync for service "openshift-machine-api/control-plane-machine-set-operator" (224 of 955) 2025-10-13T00:23:45.269207518+00:00 stderr F I1013 00:23:45.269132 1 sync_worker.go:1014] Done syncing for clusterrole "registry-monitoring" (816 of 955) 2025-10-13T00:23:45.269207518+00:00 stderr F I1013 00:23:45.269165 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.269207518+00:00 stderr F I1013 00:23:45.269171 1 sync_worker.go:999] Running sync for clusterrolebinding "registry-monitoring" (817 of 955) 2025-10-13T00:23:45.319221721+00:00 stderr F I1013 00:23:45.319148 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/control-plane-machine-set-operator" (224 of 955) 2025-10-13T00:23:45.319221721+00:00 stderr F I1013 00:23:45.319186 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.319221721+00:00 stderr F I1013 00:23:45.319195 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/control-plane-machine-set-operator" (225 of 955) 2025-10-13T00:23:45.368905855+00:00 stderr F I1013 00:23:45.368848 1 sync_worker.go:1014] Done syncing for clusterrolebinding "registry-monitoring" (817 of 955) 2025-10-13T00:23:45.368905855+00:00 stderr F I1013 00:23:45.368893 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-10-13T00:23:45.368933596+00:00 stderr F I1013 00:23:45.368899 1 sync_worker.go:999] Running sync for role "openshift-image-registry/prometheus-k8s" (818 of 955) 2025-10-13T00:23:45.420301907+00:00 stderr F I1013 00:23:45.420236 1 sync_worker.go:1014] Done syncing for deployment "openshift-machine-api/control-plane-machine-set-operator" (225 of 955) 2025-10-13T00:23:45.420301907+00:00 stderr F I1013 00:23:45.420277 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.420301907+00:00 stderr F I1013 00:23:45.420287 1 sync_worker.go:999] Running sync for clusteroperator "control-plane-machine-set" (226 of 955) 2025-10-13T00:23:45.420619195+00:00 stderr F I1013 00:23:45.420585 1 sync_worker.go:1014] Done syncing for clusteroperator "control-plane-machine-set" (226 of 955) 2025-10-13T00:23:45.420619195+00:00 stderr F I1013 00:23:45.420610 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.420630356+00:00 stderr F I1013 00:23:45.420620 1 sync_worker.go:999] Running sync for validatingwebhookconfiguration "controlplanemachineset.machine.openshift.io" (227 of 955) 2025-10-13T00:23:45.469761654+00:00 stderr F I1013 00:23:45.469693 1 sync_worker.go:1014] Done syncing for role "openshift-image-registry/prometheus-k8s" (818 of 955) 2025-10-13T00:23:45.469761654+00:00 stderr F I1013 00:23:45.469724 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.469761654+00:00 stderr F I1013 00:23:45.469730 1 sync_worker.go:999] Running sync for rolebinding "openshift-image-registry/prometheus-k8s" (819 of 955) 2025-10-13T00:23:45.525085375+00:00 stderr F I1013 00:23:45.524958 1 sync_worker.go:1014] Done syncing for validatingwebhookconfiguration "controlplanemachineset.machine.openshift.io" (227 of 955) 2025-10-13T00:23:45.525085375+00:00 stderr F I1013 00:23:45.525007 1 task_graph.go:481] Running 51 on worker 0 2025-10-13T00:23:45.525085375+00:00 stderr F I1013 00:23:45.525023 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.525085375+00:00 stderr F I1013 00:23:45.525032 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/coreos-bootimages" (689 of 955) 2025-10-13T00:23:45.570188931+00:00 stderr F I1013 00:23:45.570032 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-image-registry/prometheus-k8s" (819 of 955) 2025-10-13T00:23:45.570188931+00:00 stderr F I1013 00:23:45.570076 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.570188931+00:00 stderr F I1013 00:23:45.570084 1 sync_worker.go:999] Running sync for servicemonitor "openshift-image-registry/image-registry" (820 of 955) 2025-10-13T00:23:45.621417388+00:00 stderr F I1013 00:23:45.620564 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/coreos-bootimages" (689 of 955) 2025-10-13T00:23:45.621417388+00:00 stderr F I1013 00:23:45.620614 1 task_graph.go:481] Running 52 on worker 0 2025-10-13T00:23:45.621417388+00:00 stderr F I1013 00:23:45.620633 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.621417388+00:00 stderr F I1013 00:23:45.620643 1 sync_worker.go:999] Running sync for role "openshift-cluster-version/prometheus-k8s" (9 of 955) 2025-10-13T00:23:45.671068041+00:00 stderr F I1013 00:23:45.670972 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-image-registry/image-registry" (820 of 955) 
2025-10-13T00:23:45.671068041+00:00 stderr F I1013 00:23:45.671005 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.671068041+00:00 stderr F I1013 00:23:45.671013 1 sync_worker.go:999] Running sync for servicemonitor "openshift-image-registry/image-registry-operator" (821 of 955) 2025-10-13T00:23:45.719242073+00:00 stderr F I1013 00:23:45.719135 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-version/prometheus-k8s" (9 of 955) 2025-10-13T00:23:45.719242073+00:00 stderr F I1013 00:23:45.719168 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.719242073+00:00 stderr F I1013 00:23:45.719176 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-version/prometheus-k8s" (10 of 955) 2025-10-13T00:23:45.771037686+00:00 stderr F I1013 00:23:45.770950 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-image-registry/image-registry-operator" (821 of 955) 2025-10-13T00:23:45.771037686+00:00 stderr F I1013 00:23:45.770978 1 task_graph.go:481] Running 53 on worker 1 2025-10-13T00:23:45.819751863+00:00 stderr F I1013 00:23:45.819597 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-version/prometheus-k8s" (10 of 955) 2025-10-13T00:23:45.819751863+00:00 stderr F I1013 00:23:45.819648 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.819751863+00:00 stderr F I1013 00:23:45.819656 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (11 of 955) 2025-10-13T00:23:45.878890131+00:00 stderr F I1013 00:23:45.878827 1 sync_worker.go:989] Precreated resource clusteroperator "operator-lifecycle-manager" (727 of 955) 2025-10-13T00:23:45.920254722+00:00 stderr F I1013 00:23:45.919911 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (11 of 955) 2025-10-13T00:23:45.920254722+00:00 stderr F I1013 00:23:45.919942 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:45.920254722+00:00 stderr F I1013 00:23:45.919948 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-version/cluster-version-operator" (12 of 955) 2025-10-13T00:23:45.971950622+00:00 stderr F I1013 00:23:45.971872 1 sync_worker.go:989] Precreated resource clusteroperator "operator-lifecycle-manager-catalog" (728 of 955) 2025-10-13T00:23:46.020653478+00:00 stderr F I1013 00:23:46.020565 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-cluster-version/cluster-version-operator" (12 of 955) 2025-10-13T00:23:46.020653478+00:00 stderr F I1013 00:23:46.020600 1 task_graph.go:481] Running 54 on worker 0 2025-10-13T00:23:46.061027423+00:00 stderr F I1013 00:23:46.060934 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:46.061220088+00:00 stderr F I1013 00:23:46.061190 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:46.061220088+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:46.061220088+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:46.061243919+00:00 stderr F I1013 00:23:46.061233 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (313.069µs) 2025-10-13T00:23:46.061251359+00:00 stderr F I1013 00:23:46.061246 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:46.061315451+00:00 stderr F I1013 00:23:46.061283 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:46.061363482+00:00 stderr F I1013 00:23:46.061351 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:46.061420104+00:00 stderr F I1013 00:23:46.061359 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:46.061623609+00:00 stderr F I1013 00:23:46.061588 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:46.070940199+00:00 stderr F I1013 00:23:46.070893 1 
sync_worker.go:989] Precreated resource clusteroperator "operator-lifecycle-manager-packageserver" (729 of 955) 2025-10-13T00:23:46.070940199+00:00 stderr F I1013 00:23:46.070920 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.070940199+00:00 stderr F I1013 00:23:46.070925 1 sync_worker.go:999] Running sync for customresourcedefinition "catalogsources.operators.coreos.com" (690 of 955) 2025-10-13T00:23:46.086745289+00:00 stderr F W1013 00:23:46.086696 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:46.088016545+00:00 stderr F I1013 00:23:46.087978 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.728665ms) 2025-10-13T00:23:46.121723533+00:00 stderr F I1013 00:23:46.121642 1 sync_worker.go:989] Precreated resource clusteroperator "kube-controller-manager" (165 of 955) 2025-10-13T00:23:46.174095572+00:00 stderr F I1013 00:23:46.174022 1 sync_worker.go:1014] Done syncing for customresourcedefinition "catalogsources.operators.coreos.com" (690 of 955) 2025-10-13T00:23:46.174095572+00:00 stderr F I1013 00:23:46.174056 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.174095572+00:00 stderr F I1013 00:23:46.174062 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterserviceversions.operators.coreos.com" (691 of 955) 2025-10-13T00:23:46.222028757+00:00 stderr F I1013 00:23:46.221961 1 sync_worker.go:989] Precreated resource clusteroperator "kube-controller-manager" (166 of 955) 2025-10-13T00:23:46.222028757+00:00 stderr F I1013 00:23:46.222009 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.222054538+00:00 stderr F I1013 00:23:46.222022 1 sync_worker.go:999] Running sync for namespace "openshift-kube-controller-manager-operator" (151 of 955) 2025-10-13T00:23:46.320658584+00:00 stderr F I1013 00:23:46.320582 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-controller-manager-operator" (151 of 955) 2025-10-13T00:23:46.320658584+00:00 stderr F I1013 00:23:46.320628 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.320658584+00:00 stderr F I1013 00:23:46.320641 1 sync_worker.go:999] Running sync for namespace "openshift-kube-controller-manager-operator" (152 of 955) 2025-10-13T00:23:46.320900301+00:00 stderr F I1013 00:23:46.320850 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterserviceversions.operators.coreos.com" (691 of 955) 2025-10-13T00:23:46.320900301+00:00 stderr F I1013 00:23:46.320896 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.320926062+00:00 stderr F I1013 00:23:46.320904 1 sync_worker.go:999] Running sync for customresourcedefinition "installplans.operators.coreos.com" (692 of 955) 2025-10-13T00:23:46.370750330+00:00 stderr F I1013 00:23:46.370674 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-controller-manager-operator" (152 of 955) 2025-10-13T00:23:46.370750330+00:00 stderr F I1013 00:23:46.370727 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.370778700+00:00 stderr F I1013 00:23:46.370741 1 sync_worker.go:999] Running sync for kubecontrollermanager "cluster" (153 of 955) 2025-10-13T00:23:46.423895420+00:00 stderr F I1013 00:23:46.423802 1 sync_worker.go:1014] Done syncing for customresourcedefinition "installplans.operators.coreos.com" (692 of 955) 2025-10-13T00:23:46.423895420+00:00 stderr F 
I1013 00:23:46.423862 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.423895420+00:00 stderr F I1013 00:23:46.423878 1 sync_worker.go:999] Running sync for namespace "openshift-operator-lifecycle-manager" (693 of 955) 2025-10-13T00:23:46.471874266+00:00 stderr F I1013 00:23:46.471817 1 sync_worker.go:1014] Done syncing for kubecontrollermanager "cluster" (153 of 955) 2025-10-13T00:23:46.471874266+00:00 stderr F I1013 00:23:46.471853 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.471930348+00:00 stderr F I1013 00:23:46.471875 1 sync_worker.go:999] Running sync for kubecontrollermanager "cluster" (154 of 955) 2025-10-13T00:23:46.520454870+00:00 stderr F I1013 00:23:46.520387 1 sync_worker.go:1014] Done syncing for namespace "openshift-operator-lifecycle-manager" (693 of 955) 2025-10-13T00:23:46.520454870+00:00 stderr F I1013 00:23:46.520430 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.520454870+00:00 stderr F I1013 00:23:46.520437 1 sync_worker.go:999] Running sync for namespace "openshift-operators" (694 of 955) 2025-10-13T00:23:46.571931503+00:00 stderr F I1013 00:23:46.571865 1 sync_worker.go:1014] Done syncing for kubecontrollermanager "cluster" (154 of 955) 2025-10-13T00:23:46.571931503+00:00 stderr F I1013 00:23:46.571907 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.571931503+00:00 stderr F I1013 00:23:46.571916 1 sync_worker.go:999] Running sync for service "openshift-kube-controller-manager-operator/metrics" (155 of 955) 2025-10-13T00:23:46.619321064+00:00 stderr F I1013 00:23:46.619258 1 sync_worker.go:1014] Done syncing for namespace "openshift-operators" (694 of 955) 2025-10-13T00:23:46.619321064+00:00 stderr F I1013 00:23:46.619289 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.619321064+00:00 stderr F I1013 00:23:46.619294 1 sync_worker.go:999] Running sync for customresourcedefinition "olmconfigs.operators.coreos.com" (695 of 955) 2025-10-13T00:23:46.670016276+00:00 stderr F I1013 00:23:46.669947 1 sync_worker.go:1014] Done syncing for service "openshift-kube-controller-manager-operator/metrics" (155 of 955) 2025-10-13T00:23:46.670016276+00:00 stderr F I1013 00:23:46.669986 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.670016276+00:00 stderr F I1013 00:23:46.669992 1 sync_worker.go:999] Running sync for service "openshift-kube-controller-manager-operator/metrics" (156 of 955) 2025-10-13T00:23:46.722158018+00:00 stderr F I1013 00:23:46.722097 1 sync_worker.go:1014] Done syncing for customresourcedefinition "olmconfigs.operators.coreos.com" (695 of 955) 2025-10-13T00:23:46.722158018+00:00 stderr F I1013 00:23:46.722129 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.722158018+00:00 stderr F I1013 00:23:46.722134 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorconditions.operators.coreos.com" (696 of 955) 2025-10-13T00:23:46.769942229+00:00 stderr F I1013 00:23:46.769893 1 sync_worker.go:1014] Done syncing for service "openshift-kube-controller-manager-operator/metrics" (156 of 955) 2025-10-13T00:23:46.769942229+00:00 stderr F I1013 00:23:46.769916 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.769962940+00:00 stderr F I1013 00:23:46.769934 1 sync_worker.go:999] Running sync for configmap 
"openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (157 of 955) 2025-10-13T00:23:46.821635219+00:00 stderr F I1013 00:23:46.821480 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorconditions.operators.coreos.com" (696 of 955) 2025-10-13T00:23:46.821635219+00:00 stderr F I1013 00:23:46.821525 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.821635219+00:00 stderr F I1013 00:23:46.821537 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorgroups.operators.coreos.com" (697 of 955) 2025-10-13T00:23:46.870595713+00:00 stderr F I1013 00:23:46.870527 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (157 of 955) 2025-10-13T00:23:46.870595713+00:00 stderr F I1013 00:23:46.870576 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.870634414+00:00 stderr F I1013 00:23:46.870590 1 sync_worker.go:999] Running sync for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (158 of 955) 2025-10-13T00:23:46.922031566+00:00 stderr F I1013 00:23:46.921946 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorgroups.operators.coreos.com" (697 of 955) 2025-10-13T00:23:46.922031566+00:00 stderr F I1013 00:23:46.921990 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.922031566+00:00 stderr F I1013 00:23:46.921998 1 sync_worker.go:999] Running sync for customresourcedefinition "operators.operators.coreos.com" (698 of 955) 2025-10-13T00:23:46.969417056+00:00 stderr F I1013 00:23:46.969295 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (158 of 955) 2025-10-13T00:23:46.969417056+00:00 stderr F I1013 00:23:46.969337 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:46.969417056+00:00 stderr F I1013 00:23:46.969344 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (159 of 955) 2025-10-13T00:23:47.019867801+00:00 stderr F I1013 00:23:47.019774 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operators.operators.coreos.com" (698 of 955) 2025-10-13T00:23:47.019867801+00:00 stderr F I1013 00:23:47.019810 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.019867801+00:00 stderr F I1013 00:23:47.019816 1 sync_worker.go:999] Running sync for poddisruptionbudget "openshift-operator-lifecycle-manager/packageserver-pdb" (699 of 955) 2025-10-13T00:23:47.061772099+00:00 stderr F I1013 00:23:47.061701 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:47.062343204+00:00 stderr F I1013 00:23:47.062285 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:47.062343204+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:47.062343204+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:47.062492219+00:00 stderr F I1013 00:23:47.062444 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (755.571µs) 2025-10-13T00:23:47.062509529+00:00 stderr F I1013 00:23:47.062483 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:47.062634983+00:00 stderr F I1013 00:23:47.062582 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:47.062686464+00:00 stderr F I1013 00:23:47.062660 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:47.062807607+00:00 stderr F I1013 00:23:47.062679 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:47.063261460+00:00 stderr F I1013 00:23:47.063178 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:47.069713570+00:00 stderr F I1013 00:23:47.069602 1 
sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (159 of 955) 2025-10-13T00:23:47.069713570+00:00 stderr F I1013 00:23:47.069657 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.069713570+00:00 stderr F I1013 00:23:47.069669 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (160 of 955) 2025-10-13T00:23:47.084670006+00:00 stderr F W1013 00:23:47.084567 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:47.086696003+00:00 stderr F I1013 00:23:47.086629 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.149563ms) 2025-10-13T00:23:47.120382051+00:00 stderr F I1013 00:23:47.120305 1 sync_worker.go:1014] Done syncing for poddisruptionbudget "openshift-operator-lifecycle-manager/packageserver-pdb" (699 of 955) 2025-10-13T00:23:47.120382051+00:00 stderr F I1013 00:23:47.120373 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.120422772+00:00 stderr F I1013 00:23:47.120380 1 sync_worker.go:999] Running sync for configmap "openshift-operator-lifecycle-manager/collect-profiles-config" (700 of 955) 2025-10-13T00:23:47.169087178+00:00 stderr F I1013 00:23:47.169022 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (160 of 955) 2025-10-13T00:23:47.169087178+00:00 stderr F I1013 00:23:47.169058 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.169087178+00:00 stderr F I1013 00:23:47.169064 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (161 of 955) 2025-10-13T00:23:47.219929584+00:00 stderr F I1013 00:23:47.219865 1 sync_worker.go:1014] Done syncing for configmap "openshift-operator-lifecycle-manager/collect-profiles-config" (700 of 955) 2025-10-13T00:23:47.219929584+00:00 stderr F I1013 00:23:47.219896 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.219929584+00:00 stderr F I1013 00:23:47.219902 1 sync_worker.go:999] Running sync for role "openshift-operator-lifecycle-manager/collect-profiles" (701 of 955) 2025-10-13T00:23:47.271323796+00:00 stderr F I1013 00:23:47.271231 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (161 of 955) 2025-10-13T00:23:47.271323796+00:00 stderr F I1013 00:23:47.271261 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.271323796+00:00 stderr F I1013 00:23:47.271267 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (162 of 955) 2025-10-13T00:23:47.320082604+00:00 stderr F I1013 00:23:47.319958 1 sync_worker.go:1014] Done syncing for role "openshift-operator-lifecycle-manager/collect-profiles" (701 of 955) 2025-10-13T00:23:47.320082604+00:00 stderr F I1013 00:23:47.319994 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.320082604+00:00 stderr F I1013 00:23:47.320000 1 sync_worker.go:999] Running sync for rolebinding "openshift-operator-lifecycle-manager/collect-profiles" (702 of 955) 2025-10-13T00:23:47.370464287+00:00 stderr F I1013 00:23:47.370378 1 sync_worker.go:1014] Done syncing for serviceaccount 
"openshift-kube-controller-manager-operator/kube-controller-manager-operator" (162 of 955) 2025-10-13T00:23:47.370464287+00:00 stderr F I1013 00:23:47.370407 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.370464287+00:00 stderr F I1013 00:23:47.370412 1 sync_worker.go:999] Running sync for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (163 of 955) 2025-10-13T00:23:47.419961766+00:00 stderr F I1013 00:23:47.419863 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-operator-lifecycle-manager/collect-profiles" (702 of 955) 2025-10-13T00:23:47.419961766+00:00 stderr F I1013 00:23:47.419903 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.419961766+00:00 stderr F I1013 00:23:47.419914 1 sync_worker.go:999] Running sync for serviceaccount "openshift-operator-lifecycle-manager/collect-profiles" (703 of 955) 2025-10-13T00:23:47.471156582+00:00 stderr F I1013 00:23:47.471061 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (163 of 955) 2025-10-13T00:23:47.471156582+00:00 stderr F I1013 00:23:47.471092 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.471156582+00:00 stderr F I1013 00:23:47.471098 1 sync_worker.go:999] Running sync for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (164 of 955) 2025-10-13T00:23:47.520752243+00:00 stderr F I1013 00:23:47.520673 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-operator-lifecycle-manager/collect-profiles" (703 of 955) 2025-10-13T00:23:47.520752243+00:00 stderr F I1013 00:23:47.520704 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.520752243+00:00 stderr F I1013 00:23:47.520710 1 sync_worker.go:999] Running sync for secret "openshift-operator-lifecycle-manager/pprof-cert" (704 of 955) 2025-10-13T00:23:47.570948052+00:00 stderr F I1013 00:23:47.570877 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (164 of 955) 2025-10-13T00:23:47.570948052+00:00 stderr F I1013 00:23:47.570909 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.570948052+00:00 stderr F I1013 00:23:47.570914 1 sync_worker.go:999] Running sync for clusteroperator "kube-controller-manager" (165 of 955) 2025-10-13T00:23:47.571193678+00:00 stderr F I1013 00:23:47.571109 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-controller-manager" (165 of 955) 2025-10-13T00:23:47.571193678+00:00 stderr F I1013 00:23:47.571124 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.571193678+00:00 stderr F I1013 00:23:47.571129 1 sync_worker.go:999] Running sync for clusteroperator "kube-controller-manager" (166 of 955) 2025-10-13T00:23:47.571270980+00:00 stderr F I1013 00:23:47.571222 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-controller-manager" (166 of 955) 2025-10-13T00:23:47.571270980+00:00 stderr F I1013 00:23:47.571236 1 task_graph.go:481] Running 55 on worker 0 2025-10-13T00:23:47.620973335+00:00 stderr F I1013 00:23:47.620869 1 sync_worker.go:1014] Done syncing for secret "openshift-operator-lifecycle-manager/pprof-cert" (704 of 955) 2025-10-13T00:23:47.620973335+00:00 stderr F I1013 00:23:47.620900 1 sync_worker.go:703] Dropping status report from 
earlier in sync loop 2025-10-13T00:23:47.620973335+00:00 stderr F I1013 00:23:47.620906 1 sync_worker.go:999] Running sync for customresourcedefinition "subscriptions.operators.coreos.com" (705 of 955) 2025-10-13T00:23:47.673159759+00:00 stderr F I1013 00:23:47.673056 1 sync_worker.go:989] Precreated resource clusteroperator "openshift-controller-manager" (516 of 955) 2025-10-13T00:23:47.673159759+00:00 stderr F I1013 00:23:47.673114 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.673159759+00:00 stderr F I1013 00:23:47.673120 1 sync_worker.go:999] Running sync for namespace "openshift-controller-manager-operator" (507 of 955) 2025-10-13T00:23:47.734451786+00:00 stderr F I1013 00:23:47.733946 1 sync_worker.go:1014] Done syncing for customresourcedefinition "subscriptions.operators.coreos.com" (705 of 955) 2025-10-13T00:23:47.734451786+00:00 stderr F I1013 00:23:47.733998 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.734451786+00:00 stderr F I1013 00:23:47.734010 1 sync_worker.go:999] Running sync for serviceaccount "openshift-operator-lifecycle-manager/olm-operator-serviceaccount" (706 of 955) 2025-10-13T00:23:47.770596623+00:00 stderr F I1013 00:23:47.770507 1 sync_worker.go:1014] Done syncing for namespace "openshift-controller-manager-operator" (507 of 955) 2025-10-13T00:23:47.770596623+00:00 stderr F I1013 00:23:47.770552 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.770596623+00:00 stderr F I1013 00:23:47.770561 1 sync_worker.go:999] Running sync for openshiftcontrollermanager "cluster" (508 of 955) 2025-10-13T00:23:47.820476552+00:00 stderr F I1013 00:23:47.819674 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-operator-lifecycle-manager/olm-operator-serviceaccount" (706 of 955) 2025-10-13T00:23:47.820476552+00:00 stderr F I1013 00:23:47.819716 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.820476552+00:00 stderr F I1013 00:23:47.819725 1 sync_worker.go:999] Running sync for clusterrole "system:controller:operator-lifecycle-manager" (707 of 955) 2025-10-13T00:23:47.871272087+00:00 stderr F I1013 00:23:47.871192 1 sync_worker.go:1014] Done syncing for openshiftcontrollermanager "cluster" (508 of 955) 2025-10-13T00:23:47.871272087+00:00 stderr F I1013 00:23:47.871235 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.871272087+00:00 stderr F I1013 00:23:47.871243 1 sync_worker.go:999] Running sync for configmap "openshift-controller-manager-operator/openshift-controller-manager-operator-config" (509 of 955) 2025-10-13T00:23:47.920292823+00:00 stderr F I1013 00:23:47.920219 1 sync_worker.go:1014] Done syncing for clusterrole "system:controller:operator-lifecycle-manager" (707 of 955) 2025-10-13T00:23:47.920292823+00:00 stderr F I1013 00:23:47.920263 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:47.920292823+00:00 stderr F I1013 00:23:47.920270 1 sync_worker.go:999] Running sync for clusterrolebinding "olm-operator-binding-openshift-operator-lifecycle-manager" (708 of 955) 2025-10-13T00:23:47.969367900+00:00 stderr F I1013 00:23:47.969278 1 sync_worker.go:1014] Done syncing for configmap "openshift-controller-manager-operator/openshift-controller-manager-operator-config" (509 of 955) 2025-10-13T00:23:47.969367900+00:00 stderr F I1013 00:23:47.969314 1 sync_worker.go:703] Dropping status report from earlier in sync 
loop 2025-10-13T00:23:47.969367900+00:00 stderr F I1013 00:23:47.969340 1 sync_worker.go:999] Running sync for service "openshift-controller-manager-operator/metrics" (510 of 955) 2025-10-13T00:23:48.019969269+00:00 stderr F I1013 00:23:48.019907 1 sync_worker.go:1014] Done syncing for clusterrolebinding "olm-operator-binding-openshift-operator-lifecycle-manager" (708 of 955) 2025-10-13T00:23:48.019969269+00:00 stderr F I1013 00:23:48.019945 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.019969269+00:00 stderr F I1013 00:23:48.019954 1 sync_worker.go:999] Running sync for olmconfig "cluster" (709 of 955) 2025-10-13T00:23:48.063292646+00:00 stderr F I1013 00:23:48.063244 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:48.063584294+00:00 stderr F I1013 00:23:48.063544 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:48.063584294+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:48.063584294+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:48.063643146+00:00 stderr F I1013 00:23:48.063610 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (378.171µs) 2025-10-13T00:23:48.063643146+00:00 stderr F I1013 00:23:48.063630 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:48.063701857+00:00 stderr F I1013 00:23:48.063674 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:48.063745169+00:00 stderr F I1013 00:23:48.063731 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:48.063812950+00:00 stderr F I1013 00:23:48.063742 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, 
CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:48.064089508+00:00 stderr F I1013 00:23:48.064047 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:48.070769384+00:00 stderr F I1013 00:23:48.070705 1 sync_worker.go:1014] Done syncing for service "openshift-controller-manager-operator/metrics" (510 of 955) 2025-10-13T00:23:48.070769384+00:00 stderr F I1013 00:23:48.070747 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.070769384+00:00 stderr F I1013 00:23:48.070758 1 sync_worker.go:999] Running sync for configmap "openshift-controller-manager-operator/openshift-controller-manager-images" (511 of 955) 2025-10-13T00:23:48.085838304+00:00 stderr F W1013 00:23:48.085724 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:48.087050208+00:00 stderr F I1013 00:23:48.086998 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.366471ms) 2025-10-13T00:23:48.119996655+00:00 stderr F I1013 00:23:48.119928 1 sync_worker.go:1014] Done syncing for olmconfig "cluster" (709 of 955) 2025-10-13T00:23:48.119996655+00:00 stderr F I1013 00:23:48.119962 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.119996655+00:00 stderr F I1013 00:23:48.119968 1 sync_worker.go:999] Running sync for service "openshift-operator-lifecycle-manager/olm-operator-metrics" (710 of 955) 2025-10-13T00:23:48.169056762+00:00 stderr F I1013 00:23:48.168984 1 sync_worker.go:1014] Done syncing for configmap "openshift-controller-manager-operator/openshift-controller-manager-images" (511 of 955) 2025-10-13T00:23:48.169056762+00:00 stderr F I1013 00:23:48.169023 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.169056762+00:00 stderr F I1013 00:23:48.169031 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:openshift-controller-manager-operator" (512 of 955) 2025-10-13T00:23:48.220125694+00:00 stderr F I1013 00:23:48.220054 1 sync_worker.go:1014] Done syncing for service "openshift-operator-lifecycle-manager/olm-operator-metrics" (710 of 955) 2025-10-13T00:23:48.220125694+00:00 stderr F I1013 00:23:48.220098 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.220125694+00:00 stderr F I1013 00:23:48.220111 
1 sync_worker.go:999] Running sync for service "openshift-operator-lifecycle-manager/catalog-operator-metrics" (711 of 955) 2025-10-13T00:23:48.269988994+00:00 stderr F I1013 00:23:48.269931 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:openshift-controller-manager-operator" (512 of 955) 2025-10-13T00:23:48.269988994+00:00 stderr F I1013 00:23:48.269961 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.269988994+00:00 stderr F I1013 00:23:48.269967 1 sync_worker.go:999] Running sync for serviceaccount "openshift-controller-manager-operator/openshift-controller-manager-operator" (513 of 955) 2025-10-13T00:23:48.320433099+00:00 stderr F I1013 00:23:48.320379 1 sync_worker.go:1014] Done syncing for service "openshift-operator-lifecycle-manager/catalog-operator-metrics" (711 of 955) 2025-10-13T00:23:48.320433099+00:00 stderr F I1013 00:23:48.320410 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.320433099+00:00 stderr F I1013 00:23:48.320416 1 sync_worker.go:999] Running sync for deployment "openshift-operator-lifecycle-manager/package-server-manager" (712 of 955) 2025-10-13T00:23:48.369716972+00:00 stderr F I1013 00:23:48.369647 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-controller-manager-operator/openshift-controller-manager-operator" (513 of 955) 2025-10-13T00:23:48.369716972+00:00 stderr F I1013 00:23:48.369692 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.369716972+00:00 stderr F I1013 00:23:48.369705 1 sync_worker.go:999] Running sync for deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (514 of 955) 2025-10-13T00:23:48.420450715+00:00 stderr F I1013 00:23:48.420392 1 sync_worker.go:1014] Done syncing for deployment "openshift-operator-lifecycle-manager/package-server-manager" (712 of 955) 2025-10-13T00:23:48.420450715+00:00 stderr F I1013 00:23:48.420429 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.420450715+00:00 stderr F I1013 00:23:48.420437 1 sync_worker.go:999] Running sync for service "openshift-operator-lifecycle-manager/package-server-manager-metrics" (713 of 955) 2025-10-13T00:23:48.469682056+00:00 stderr F I1013 00:23:48.469623 1 sync_worker.go:1014] Done syncing for deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (514 of 955) 2025-10-13T00:23:48.469682056+00:00 stderr F I1013 00:23:48.469660 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.469682056+00:00 stderr F I1013 00:23:48.469667 1 sync_worker.go:999] Running sync for flowschema "openshift-controller-manager" (515 of 955) 2025-10-13T00:23:48.520367948+00:00 stderr F I1013 00:23:48.520293 1 sync_worker.go:1014] Done syncing for service "openshift-operator-lifecycle-manager/package-server-manager-metrics" (713 of 955) 2025-10-13T00:23:48.520367948+00:00 stderr F I1013 00:23:48.520356 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.520405069+00:00 stderr F I1013 00:23:48.520364 1 sync_worker.go:999] Running sync for servicemonitor "openshift-operator-lifecycle-manager/package-server-manager-metrics" (714 of 955) 2025-10-13T00:23:48.569527617+00:00 stderr F I1013 00:23:48.569471 1 sync_worker.go:1014] Done syncing for flowschema "openshift-controller-manager" (515 of 955) 2025-10-13T00:23:48.569527617+00:00 stderr F I1013 
00:23:48.569507 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.569527617+00:00 stderr F I1013 00:23:48.569514 1 sync_worker.go:999] Running sync for clusteroperator "openshift-controller-manager" (516 of 955) 2025-10-13T00:23:48.569748343+00:00 stderr F I1013 00:23:48.569716 1 sync_worker.go:1014] Done syncing for clusteroperator "openshift-controller-manager" (516 of 955) 2025-10-13T00:23:48.569748343+00:00 stderr F I1013 00:23:48.569741 1 task_graph.go:481] Running 56 on worker 0 2025-10-13T00:23:48.621692970+00:00 stderr F I1013 00:23:48.621606 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-operator-lifecycle-manager/package-server-manager-metrics" (714 of 955) 2025-10-13T00:23:48.621692970+00:00 stderr F I1013 00:23:48.621643 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.621692970+00:00 stderr F I1013 00:23:48.621649 1 sync_worker.go:999] Running sync for cronjob "openshift-operator-lifecycle-manager/collect-profiles" (715 of 955) 2025-10-13T00:23:48.671626551+00:00 stderr F I1013 00:23:48.671560 1 sync_worker.go:989] Precreated resource clusteroperator "kube-scheduler" (177 of 955) 2025-10-13T00:23:48.671626551+00:00 stderr F I1013 00:23:48.671590 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.671626551+00:00 stderr F I1013 00:23:48.671598 1 sync_worker.go:999] Running sync for namespace "openshift-kube-scheduler-operator" (169 of 955) 2025-10-13T00:23:48.719857825+00:00 stderr F I1013 00:23:48.719767 1 sync_worker.go:1014] Done syncing for cronjob "openshift-operator-lifecycle-manager/collect-profiles" (715 of 955) 2025-10-13T00:23:48.719857825+00:00 stderr F I1013 00:23:48.719827 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.719857825+00:00 stderr F I1013 00:23:48.719834 1 sync_worker.go:999] Running sync for deployment "openshift-operator-lifecycle-manager/olm-operator" (716 of 955) 2025-10-13T00:23:48.769738604+00:00 stderr F I1013 00:23:48.769672 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-scheduler-operator" (169 of 955) 2025-10-13T00:23:48.769738604+00:00 stderr F I1013 00:23:48.769705 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.769738604+00:00 stderr F I1013 00:23:48.769711 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeschedulers.operator.openshift.io" (170 of 955) 2025-10-13T00:23:48.820468867+00:00 stderr F I1013 00:23:48.820408 1 sync_worker.go:1014] Done syncing for deployment "openshift-operator-lifecycle-manager/olm-operator" (716 of 955) 2025-10-13T00:23:48.820468867+00:00 stderr F I1013 00:23:48.820438 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.820468867+00:00 stderr F I1013 00:23:48.820445 1 sync_worker.go:999] Running sync for deployment "openshift-operator-lifecycle-manager/catalog-operator" (717 of 955) 2025-10-13T00:23:48.872656121+00:00 stderr F I1013 00:23:48.872569 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeschedulers.operator.openshift.io" (170 of 955) 2025-10-13T00:23:48.872656121+00:00 stderr F I1013 00:23:48.872632 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.872698772+00:00 stderr F I1013 00:23:48.872649 1 sync_worker.go:999] Running sync for kubescheduler "cluster" (171 of 955) 2025-10-13T00:23:48.920650977+00:00 stderr F I1013 00:23:48.920585 1 
sync_worker.go:1014] Done syncing for deployment "openshift-operator-lifecycle-manager/catalog-operator" (717 of 955) 2025-10-13T00:23:48.920650977+00:00 stderr F I1013 00:23:48.920614 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.920650977+00:00 stderr F I1013 00:23:48.920623 1 sync_worker.go:999] Running sync for clusterrole "aggregate-olm-edit" (718 of 955) 2025-10-13T00:23:48.972571844+00:00 stderr F I1013 00:23:48.972497 1 sync_worker.go:1014] Done syncing for kubescheduler "cluster" (171 of 955) 2025-10-13T00:23:48.972571844+00:00 stderr F I1013 00:23:48.972547 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:48.972617185+00:00 stderr F I1013 00:23:48.972561 1 sync_worker.go:999] Running sync for service "openshift-kube-scheduler-operator/metrics" (172 of 955) 2025-10-13T00:23:49.019547142+00:00 stderr F I1013 00:23:49.019459 1 sync_worker.go:1014] Done syncing for clusterrole "aggregate-olm-edit" (718 of 955) 2025-10-13T00:23:49.019547142+00:00 stderr F I1013 00:23:49.019495 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.019547142+00:00 stderr F I1013 00:23:49.019500 1 sync_worker.go:999] Running sync for clusterrole "aggregate-olm-view" (719 of 955) 2025-10-13T00:23:49.064696620+00:00 stderr F I1013 00:23:49.064598 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:49.065141792+00:00 stderr F I1013 00:23:49.065052 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:49.065141792+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:49.065141792+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:49.065174783+00:00 stderr F I1013 00:23:49.065154 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (571.215µs) 2025-10-13T00:23:49.065195604+00:00 stderr F I1013 00:23:49.065183 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:49.065388479+00:00 stderr F I1013 00:23:49.065275 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:49.065432490+00:00 stderr F I1013 00:23:49.065417 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:49.065552974+00:00 stderr F I1013 00:23:49.065432 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:49.066095849+00:00 stderr F I1013 00:23:49.066010 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:49.070618805+00:00 stderr F I1013 00:23:49.070368 1 
sync_worker.go:1014] Done syncing for service "openshift-kube-scheduler-operator/metrics" (172 of 955) 2025-10-13T00:23:49.070618805+00:00 stderr F I1013 00:23:49.070401 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.070618805+00:00 stderr F I1013 00:23:49.070407 1 sync_worker.go:999] Running sync for configmap "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config" (173 of 955) 2025-10-13T00:23:49.091147417+00:00 stderr F W1013 00:23:49.091072 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:49.092571186+00:00 stderr F I1013 00:23:49.092477 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.29564ms) 2025-10-13T00:23:49.122262593+00:00 stderr F I1013 00:23:49.122162 1 sync_worker.go:1014] Done syncing for clusterrole "aggregate-olm-view" (719 of 955) 2025-10-13T00:23:49.122262593+00:00 stderr F I1013 00:23:49.122211 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.122262593+00:00 stderr F I1013 00:23:49.122224 1 sync_worker.go:999] Running sync for configmap "openshift-operator-lifecycle-manager/olm-operators" (720 of 955) 2025-10-13T00:23:49.170706583+00:00 stderr F I1013 00:23:49.170625 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config" (173 of 955) 2025-10-13T00:23:49.170706583+00:00 stderr F I1013 00:23:49.170676 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.170706583+00:00 stderr F I1013 00:23:49.170689 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:cluster-kube-scheduler-operator" (174 of 955) 2025-10-13T00:23:49.220590932+00:00 stderr F I1013 00:23:49.220525 1 sync_worker.go:1014] Done syncing for configmap "openshift-operator-lifecycle-manager/olm-operators" (720 of 955) 2025-10-13T00:23:49.220590932+00:00 stderr F I1013 00:23:49.220561 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.220590932+00:00 stderr F I1013 00:23:49.220567 1 sync_worker.go:999] Running sync for catalogsource "openshift-operator-lifecycle-manager/olm-operators" (721 of 955) 2025-10-13T00:23:49.270796301+00:00 stderr F I1013 00:23:49.270711 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:cluster-kube-scheduler-operator" (174 of 955) 2025-10-13T00:23:49.270796301+00:00 stderr F I1013 00:23:49.270776 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.270828102+00:00 stderr F I1013 00:23:49.270792 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (175 of 955) 2025-10-13T00:23:49.319945640+00:00 stderr F I1013 00:23:49.319884 1 sync_worker.go:1014] Done syncing for catalogsource "openshift-operator-lifecycle-manager/olm-operators" (721 of 955) 2025-10-13T00:23:49.319945640+00:00 stderr F I1013 00:23:49.319931 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.319968120+00:00 stderr F I1013 00:23:49.319946 1 sync_worker.go:999] Running sync for operatorgroup "openshift-operators/global-operators" (722 of 955) 2025-10-13T00:23:49.370938500+00:00 stderr F I1013 00:23:49.370877 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (175 of 955) 2025-10-13T00:23:49.370938500+00:00 stderr F I1013 
00:23:49.370912 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.370938500+00:00 stderr F I1013 00:23:49.370918 1 sync_worker.go:999] Running sync for deployment "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (176 of 955) 2025-10-13T00:23:49.421919540+00:00 stderr F I1013 00:23:49.421847 1 sync_worker.go:1014] Done syncing for operatorgroup "openshift-operators/global-operators" (722 of 955) 2025-10-13T00:23:49.421919540+00:00 stderr F I1013 00:23:49.421882 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.421919540+00:00 stderr F I1013 00:23:49.421888 1 sync_worker.go:999] Running sync for operatorgroup "openshift-operator-lifecycle-manager/olm-operators" (723 of 955) 2025-10-13T00:23:49.470577736+00:00 stderr F I1013 00:23:49.470533 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (176 of 955) 2025-10-13T00:23:49.470577736+00:00 stderr F I1013 00:23:49.470563 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.470577736+00:00 stderr F I1013 00:23:49.470570 1 sync_worker.go:999] Running sync for clusteroperator "kube-scheduler" (177 of 955) 2025-10-13T00:23:49.470781572+00:00 stderr F I1013 00:23:49.470756 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-scheduler" (177 of 955) 2025-10-13T00:23:49.470781572+00:00 stderr F I1013 00:23:49.470777 1 task_graph.go:481] Running 57 on worker 0 2025-10-13T00:23:49.470794392+00:00 stderr F I1013 00:23:49.470783 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.470794392+00:00 stderr F I1013 00:23:49.470788 1 sync_worker.go:999] Running sync for role "openshift-cloud-credential-operator/prometheus-k8s" (800 of 955) 2025-10-13T00:23:49.470806232+00:00 stderr F I1013 00:23:49.470799 1 sync_worker.go:1002] Skipping role "openshift-cloud-credential-operator/prometheus-k8s" (800 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:49.470815513+00:00 stderr F I1013 00:23:49.470804 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.470824403+00:00 stderr F I1013 00:23:49.470811 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-credential-operator/prometheus-k8s" (801 of 955) 2025-10-13T00:23:49.470833203+00:00 stderr F I1013 00:23:49.470821 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-credential-operator/prometheus-k8s" (801 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:49.470833203+00:00 stderr F I1013 00:23:49.470826 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.470842213+00:00 stderr F I1013 00:23:49.470831 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (802 of 955) 2025-10-13T00:23:49.470851134+00:00 stderr F I1013 00:23:49.470840 1 sync_worker.go:1002] Skipping servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (802 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:49.470851134+00:00 stderr F I1013 00:23:49.470847 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.470860444+00:00 stderr F I1013 00:23:49.470852 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cloud-credential-operator/cloud-credential-operator-alerts" (803 of 955) 2025-10-13T00:23:49.470869194+00:00 
stderr F I1013 00:23:49.470861 1 sync_worker.go:1002] Skipping prometheusrule "openshift-cloud-credential-operator/cloud-credential-operator-alerts" (803 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:49.470877984+00:00 stderr F I1013 00:23:49.470867 1 task_graph.go:481] Running 58 on worker 0 2025-10-13T00:23:49.470877984+00:00 stderr F I1013 00:23:49.470872 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.470899795+00:00 stderr F I1013 00:23:49.470876 1 sync_worker.go:999] Running sync for role "openshift-cluster-machine-approver/prometheus-k8s" (822 of 955) 2025-10-13T00:23:49.521475323+00:00 stderr F I1013 00:23:49.521374 1 sync_worker.go:1014] Done syncing for operatorgroup "openshift-operator-lifecycle-manager/olm-operators" (723 of 955) 2025-10-13T00:23:49.521475323+00:00 stderr F I1013 00:23:49.521420 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.521475323+00:00 stderr F I1013 00:23:49.521432 1 sync_worker.go:999] Running sync for subscription "openshift-operator-lifecycle-manager/packageserver" (724 of 955) 2025-10-13T00:23:49.569738977+00:00 stderr F I1013 00:23:49.569649 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-machine-approver/prometheus-k8s" (822 of 955) 2025-10-13T00:23:49.569738977+00:00 stderr F I1013 00:23:49.569703 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.569738977+00:00 stderr F I1013 00:23:49.569720 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-machine-approver/prometheus-k8s" (823 of 955) 2025-10-13T00:23:49.620256555+00:00 stderr F I1013 00:23:49.620174 1 sync_worker.go:1014] Done syncing for subscription "openshift-operator-lifecycle-manager/packageserver" (724 of 955) 2025-10-13T00:23:49.620256555+00:00 stderr F I1013 00:23:49.620217 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.620256555+00:00 stderr F I1013 00:23:49.620229 1 sync_worker.go:999] Running sync for role "openshift/copied-csv-viewer" (725 of 955) 2025-10-13T00:23:49.670613317+00:00 stderr F I1013 00:23:49.670480 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-machine-approver/prometheus-k8s" (823 of 955) 2025-10-13T00:23:49.670613317+00:00 stderr F I1013 00:23:49.670530 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.670613317+00:00 stderr F I1013 00:23:49.670542 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-machine-approver/cluster-machine-approver" (824 of 955) 2025-10-13T00:23:49.721243138+00:00 stderr F I1013 00:23:49.721162 1 sync_worker.go:1014] Done syncing for role "openshift/copied-csv-viewer" (725 of 955) 2025-10-13T00:23:49.721243138+00:00 stderr F I1013 00:23:49.721206 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.721243138+00:00 stderr F I1013 00:23:49.721218 1 sync_worker.go:999] Running sync for rolebinding "openshift/copied-csv-viewers" (726 of 955) 2025-10-13T00:23:49.770551271+00:00 stderr F I1013 00:23:49.770471 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-cluster-machine-approver/cluster-machine-approver" (824 of 955) 2025-10-13T00:23:49.770551271+00:00 stderr F I1013 00:23:49.770508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.770551271+00:00 stderr F I1013 00:23:49.770515 1 sync_worker.go:999] Running sync for prometheusrule 
"openshift-cluster-machine-approver/machineapprover-rules" (825 of 955) 2025-10-13T00:23:49.819655389+00:00 stderr F I1013 00:23:49.819581 1 sync_worker.go:1014] Done syncing for rolebinding "openshift/copied-csv-viewers" (726 of 955) 2025-10-13T00:23:49.819655389+00:00 stderr F I1013 00:23:49.819612 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.819655389+00:00 stderr F I1013 00:23:49.819618 1 sync_worker.go:999] Running sync for clusteroperator "operator-lifecycle-manager" (727 of 955) 2025-10-13T00:23:49.819948327+00:00 stderr F I1013 00:23:49.819803 1 sync_worker.go:1014] Done syncing for clusteroperator "operator-lifecycle-manager" (727 of 955) 2025-10-13T00:23:49.819948327+00:00 stderr F I1013 00:23:49.819814 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.819948327+00:00 stderr F I1013 00:23:49.819819 1 sync_worker.go:999] Running sync for clusteroperator "operator-lifecycle-manager-catalog" (728 of 955) 2025-10-13T00:23:49.819948327+00:00 stderr F I1013 00:23:49.819900 1 sync_worker.go:1014] Done syncing for clusteroperator "operator-lifecycle-manager-catalog" (728 of 955) 2025-10-13T00:23:49.819948327+00:00 stderr F I1013 00:23:49.819907 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.819948327+00:00 stderr F I1013 00:23:49.819911 1 sync_worker.go:999] Running sync for clusteroperator "operator-lifecycle-manager-packageserver" (729 of 955) 2025-10-13T00:23:49.820044430+00:00 stderr F I1013 00:23:49.819989 1 sync_worker.go:1014] Done syncing for clusteroperator "operator-lifecycle-manager-packageserver" (729 of 955) 2025-10-13T00:23:49.820044430+00:00 stderr F I1013 00:23:49.820006 1 task_graph.go:481] Running 59 on worker 1 2025-10-13T00:23:49.820044430+00:00 stderr F I1013 00:23:49.820016 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.820044430+00:00 stderr F I1013 00:23:49.820024 1 sync_worker.go:999] Running sync for customresourcedefinition "ipaddresses.ipam.cluster.x-k8s.io" (217 of 955) 2025-10-13T00:23:49.873369055+00:00 stderr F I1013 00:23:49.872474 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-cluster-machine-approver/machineapprover-rules" (825 of 955) 2025-10-13T00:23:49.873369055+00:00 stderr F I1013 00:23:49.872530 1 task_graph.go:481] Running 60 on worker 0 2025-10-13T00:23:49.873369055+00:00 stderr F I1013 00:23:49.872548 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.873369055+00:00 stderr F I1013 00:23:49.872561 1 sync_worker.go:999] Running sync for customresourcedefinition "builds.config.openshift.io" (69 of 955) 2025-10-13T00:23:49.921855305+00:00 stderr F I1013 00:23:49.921792 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ipaddresses.ipam.cluster.x-k8s.io" (217 of 955) 2025-10-13T00:23:49.921855305+00:00 stderr F I1013 00:23:49.921842 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.921905797+00:00 stderr F I1013 00:23:49.921856 1 sync_worker.go:999] Running sync for customresourcedefinition "ipaddressclaims.ipam.cluster.x-k8s.io" (218 of 955) 2025-10-13T00:23:49.971702724+00:00 stderr F I1013 00:23:49.971642 1 sync_worker.go:1014] Done syncing for customresourcedefinition "builds.config.openshift.io" (69 of 955) 2025-10-13T00:23:49.971702724+00:00 stderr F I1013 00:23:49.971679 1 task_graph.go:481] Running 61 on worker 0 2025-10-13T00:23:49.971702724+00:00 stderr 
F I1013 00:23:49.971692 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971697 1 sync_worker.go:999] Running sync for role "openshift-cluster-storage-operator/prometheus" (851 of 955) 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971711 1 sync_worker.go:1002] Skipping role "openshift-cluster-storage-operator/prometheus" (851 of 955): disabled capabilities: Storage 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971718 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971723 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-storage-operator/prometheus" (852 of 955) 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971733 1 sync_worker.go:1002] Skipping rolebinding "openshift-cluster-storage-operator/prometheus" (852 of 955): disabled capabilities: Storage 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971738 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971743 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-storage-operator/cluster-storage-operator" (853 of 955) 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971751 1 sync_worker.go:1002] Skipping servicemonitor "openshift-cluster-storage-operator/cluster-storage-operator" (853 of 955): disabled capabilities: Storage 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971756 1 task_graph.go:481] Running 62 on worker 0 2025-10-13T00:23:49.971763476+00:00 stderr F I1013 00:23:49.971760 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:49.971786686+00:00 stderr F I1013 00:23:49.971764 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/coreos-bootimages" (688 of 955) 2025-10-13T00:23:50.020531454+00:00 stderr F I1013 00:23:50.020459 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ipaddressclaims.ipam.cluster.x-k8s.io" (218 of 955) 2025-10-13T00:23:50.020531454+00:00 stderr F I1013 00:23:50.020507 1 task_graph.go:481] Running 63 on worker 1 2025-10-13T00:23:50.066273278+00:00 stderr F I1013 00:23:50.066198 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:50.066495865+00:00 stderr F I1013 00:23:50.066471 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:50.066495865+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:50.066495865+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:50.066526835+00:00 stderr F I1013 00:23:50.066511 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (325.619µs) 2025-10-13T00:23:50.066534396+00:00 stderr F I1013 00:23:50.066526 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:50.066592897+00:00 stderr F I1013 00:23:50.066562 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:50.066628148+00:00 stderr F I1013 00:23:50.066617 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:50.067013549+00:00 stderr F I1013 00:23:50.066626 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:50.067013549+00:00 stderr F I1013 00:23:50.066902 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:50.073903781+00:00 stderr F I1013 00:23:50.073849 1 
sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/coreos-bootimages" (688 of 955) 2025-10-13T00:23:50.073903781+00:00 stderr F I1013 00:23:50.073894 1 task_graph.go:481] Running 64 on worker 0 2025-10-13T00:23:50.098691971+00:00 stderr F W1013 00:23:50.098613 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:50.100007338+00:00 stderr F I1013 00:23:50.099972 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.442362ms) 2025-10-13T00:23:50.122546166+00:00 stderr F I1013 00:23:50.122479 1 sync_worker.go:989] Precreated resource clusteroperator "etcd" (81 of 955) 2025-10-13T00:23:50.122546166+00:00 stderr F I1013 00:23:50.122520 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.122546166+00:00 stderr F I1013 00:23:50.122528 1 sync_worker.go:999] Running sync for namespace "openshift-etcd-operator" (71 of 955) 2025-10-13T00:23:50.172103506+00:00 stderr F I1013 00:23:50.172044 1 sync_worker.go:989] Precreated resource clusteroperator "machine-api" (275 of 955) 2025-10-13T00:23:50.172103506+00:00 stderr F I1013 00:23:50.172084 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172103506+00:00 stderr F I1013 00:23:50.172091 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-aws" (228 of 955) 2025-10-13T00:23:50.172152938+00:00 stderr F I1013 00:23:50.172110 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-aws" (228 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172152938+00:00 stderr F I1013 00:23:50.172117 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172152938+00:00 stderr F I1013 00:23:50.172122 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-azure" (229 of 955) 2025-10-13T00:23:50.172152938+00:00 stderr F I1013 00:23:50.172134 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-azure" (229 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172152938+00:00 stderr F I1013 00:23:50.172140 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172152938+00:00 stderr F I1013 00:23:50.172145 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-openstack" (230 of 955) 2025-10-13T00:23:50.172164858+00:00 stderr F I1013 00:23:50.172157 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-openstack" (230 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172184908+00:00 stderr F I1013 00:23:50.172163 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172184908+00:00 stderr F I1013 00:23:50.172169 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-gcp" (231 of 955) 2025-10-13T00:23:50.172184908+00:00 stderr F I1013 00:23:50.172180 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-gcp" (231 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172194629+00:00 stderr F I1013 00:23:50.172186 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-10-13T00:23:50.172203009+00:00 stderr F I1013 00:23:50.172192 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ovirt" (232 of 955) 2025-10-13T00:23:50.172213839+00:00 stderr F I1013 00:23:50.172206 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ovirt" (232 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172222170+00:00 stderr F I1013 00:23:50.172213 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172230340+00:00 stderr F I1013 00:23:50.172219 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-vsphere" (233 of 955) 2025-10-13T00:23:50.172238460+00:00 stderr F I1013 00:23:50.172230 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-vsphere" (233 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172268171+00:00 stderr F I1013 00:23:50.172253 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172268171+00:00 stderr F I1013 00:23:50.172262 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ibmcloud" (234 of 955) 2025-10-13T00:23:50.172280451+00:00 stderr F I1013 00:23:50.172273 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ibmcloud" (234 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172288791+00:00 stderr F I1013 00:23:50.172280 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172297172+00:00 stderr F I1013 00:23:50.172285 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-powervs" (235 of 955) 2025-10-13T00:23:50.172305702+00:00 stderr F I1013 00:23:50.172297 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-powervs" (235 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172305702+00:00 stderr F I1013 00:23:50.172303 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172316352+00:00 stderr F I1013 00:23:50.172309 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-nutanix" (236 of 955) 2025-10-13T00:23:50.172361943+00:00 stderr F I1013 00:23:50.172321 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-nutanix" (236 of 955): disabled capabilities: CloudCredential 2025-10-13T00:23:50.172361943+00:00 stderr F I1013 00:23:50.172346 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.172361943+00:00 stderr F I1013 00:23:50.172353 1 sync_worker.go:999] Running sync for namespace "openshift-machine-api" (237 of 955) 2025-10-13T00:23:50.220639698+00:00 stderr F I1013 00:23:50.220584 1 sync_worker.go:1014] Done syncing for namespace "openshift-etcd-operator" (71 of 955) 2025-10-13T00:23:50.220639698+00:00 stderr F I1013 00:23:50.220617 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.220639698+00:00 stderr F I1013 00:23:50.220623 1 sync_worker.go:999] Running sync for etcd "cluster" (72 of 955) 2025-10-13T00:23:50.270243420+00:00 stderr F I1013 
00:23:50.270172 1 sync_worker.go:1014] Done syncing for namespace "openshift-machine-api" (237 of 955) 2025-10-13T00:23:50.270243420+00:00 stderr F I1013 00:23:50.270215 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.270243420+00:00 stderr F I1013 00:23:50.270223 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/machine-api-operator-images" (238 of 955) 2025-10-13T00:23:50.322044293+00:00 stderr F I1013 00:23:50.321973 1 sync_worker.go:1014] Done syncing for etcd "cluster" (72 of 955) 2025-10-13T00:23:50.322044293+00:00 stderr F I1013 00:23:50.322012 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.322044293+00:00 stderr F I1013 00:23:50.322021 1 sync_worker.go:999] Running sync for service "openshift-etcd-operator/metrics" (73 of 955) 2025-10-13T00:23:50.370112472+00:00 stderr F I1013 00:23:50.370046 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/machine-api-operator-images" (238 of 955) 2025-10-13T00:23:50.370112472+00:00 stderr F I1013 00:23:50.370083 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.370112472+00:00 stderr F I1013 00:23:50.370089 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/mao-trusted-ca" (239 of 955) 2025-10-13T00:23:50.420853755+00:00 stderr F I1013 00:23:50.420807 1 sync_worker.go:1014] Done syncing for service "openshift-etcd-operator/metrics" (73 of 955) 2025-10-13T00:23:50.420853755+00:00 stderr F I1013 00:23:50.420828 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.420853755+00:00 stderr F I1013 00:23:50.420834 1 sync_worker.go:999] Running sync for configmap "openshift-etcd-operator/etcd-operator-config" (74 of 955) 2025-10-13T00:23:50.478361207+00:00 stderr F I1013 00:23:50.478289 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/mao-trusted-ca" (239 of 955) 2025-10-13T00:23:50.478459450+00:00 stderr F I1013 00:23:50.478443 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.478504871+00:00 stderr F I1013 00:23:50.478484 1 sync_worker.go:999] Running sync for customresourcedefinition "machines.machine.openshift.io" (240 of 955) 2025-10-13T00:23:50.519933445+00:00 stderr F I1013 00:23:50.519872 1 sync_worker.go:1014] Done syncing for configmap "openshift-etcd-operator/etcd-operator-config" (74 of 955) 2025-10-13T00:23:50.520009407+00:00 stderr F I1013 00:23:50.519994 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.520061909+00:00 stderr F I1013 00:23:50.520033 1 sync_worker.go:999] Running sync for configmap "openshift-etcd-operator/etcd-ca-bundle" (75 of 955) 2025-10-13T00:23:50.572041757+00:00 stderr F I1013 00:23:50.571954 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machines.machine.openshift.io" (240 of 955) 2025-10-13T00:23:50.572041757+00:00 stderr F I1013 00:23:50.572005 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.572041757+00:00 stderr F I1013 00:23:50.572018 1 sync_worker.go:999] Running sync for customresourcedefinition "machinesets.machine.openshift.io" (241 of 955) 2025-10-13T00:23:50.621204686+00:00 stderr F I1013 00:23:50.621121 1 sync_worker.go:1014] Done syncing for configmap "openshift-etcd-operator/etcd-ca-bundle" (75 of 955) 2025-10-13T00:23:50.621204686+00:00 stderr F I1013 00:23:50.621174 1 sync_worker.go:703] Dropping status 
report from earlier in sync loop 2025-10-13T00:23:50.621204686+00:00 stderr F I1013 00:23:50.621188 1 sync_worker.go:999] Running sync for configmap "openshift-etcd-operator/etcd-service-ca-bundle" (76 of 955) 2025-10-13T00:23:50.673760430+00:00 stderr F I1013 00:23:50.673685 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machinesets.machine.openshift.io" (241 of 955) 2025-10-13T00:23:50.673760430+00:00 stderr F I1013 00:23:50.673727 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.673760430+00:00 stderr F I1013 00:23:50.673735 1 sync_worker.go:999] Running sync for customresourcedefinition "machinehealthchecks.machine.openshift.io" (242 of 955) 2025-10-13T00:23:50.720107781+00:00 stderr F I1013 00:23:50.720038 1 sync_worker.go:1014] Done syncing for configmap "openshift-etcd-operator/etcd-service-ca-bundle" (76 of 955) 2025-10-13T00:23:50.720107781+00:00 stderr F I1013 00:23:50.720073 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.720107781+00:00 stderr F I1013 00:23:50.720079 1 sync_worker.go:999] Running sync for secret "openshift-etcd-operator/etcd-client" (77 of 955) 2025-10-13T00:23:50.771767760+00:00 stderr F I1013 00:23:50.771690 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machinehealthchecks.machine.openshift.io" (242 of 955) 2025-10-13T00:23:50.771767760+00:00 stderr F I1013 00:23:50.771730 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.771767760+00:00 stderr F I1013 00:23:50.771739 1 sync_worker.go:999] Running sync for customresourcedefinition "metal3remediations.infrastructure.cluster.x-k8s.io" (243 of 955) 2025-10-13T00:23:50.819916791+00:00 stderr F I1013 00:23:50.819848 1 sync_worker.go:1014] Done syncing for secret "openshift-etcd-operator/etcd-client" (77 of 955) 2025-10-13T00:23:50.819916791+00:00 stderr F I1013 00:23:50.819889 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.819916791+00:00 stderr F I1013 00:23:50.819897 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:etcd-operator" (78 of 955) 2025-10-13T00:23:50.871205130+00:00 stderr F I1013 00:23:50.870764 1 sync_worker.go:1014] Done syncing for customresourcedefinition "metal3remediations.infrastructure.cluster.x-k8s.io" (243 of 955) 2025-10-13T00:23:50.871205130+00:00 stderr F I1013 00:23:50.870805 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.871205130+00:00 stderr F I1013 00:23:50.870813 1 sync_worker.go:999] Running sync for customresourcedefinition "metal3remediationtemplates.infrastructure.cluster.x-k8s.io" (244 of 955) 2025-10-13T00:23:50.920775681+00:00 stderr F I1013 00:23:50.920084 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:etcd-operator" (78 of 955) 2025-10-13T00:23:50.920775681+00:00 stderr F I1013 00:23:50.920118 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:50.920775681+00:00 stderr F I1013 00:23:50.920124 1 sync_worker.go:999] Running sync for serviceaccount "openshift-etcd-operator/etcd-operator" (79 of 955) 2025-10-13T00:23:50.973065227+00:00 stderr F I1013 00:23:50.972991 1 sync_worker.go:1014] Done syncing for customresourcedefinition "metal3remediationtemplates.infrastructure.cluster.x-k8s.io" (244 of 955) 2025-10-13T00:23:50.973065227+00:00 stderr F I1013 00:23:50.973029 1 sync_worker.go:703] Dropping status report from 
earlier in sync loop 2025-10-13T00:23:50.973065227+00:00 stderr F I1013 00:23:50.973035 1 sync_worker.go:999] Running sync for service "openshift-machine-api/machine-api-operator-machine-webhook" (245 of 955) 2025-10-13T00:23:51.020693634+00:00 stderr F I1013 00:23:51.020628 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-etcd-operator/etcd-operator" (79 of 955) 2025-10-13T00:23:51.020693634+00:00 stderr F I1013 00:23:51.020671 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.020693634+00:00 stderr F I1013 00:23:51.020678 1 sync_worker.go:999] Running sync for deployment "openshift-etcd-operator/etcd-operator" (80 of 955) 2025-10-13T00:23:51.066830969+00:00 stderr F I1013 00:23:51.066781 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:51.067149908+00:00 stderr F I1013 00:23:51.067132 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:51.067149908+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:51.067149908+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:51.067221230+00:00 stderr F I1013 00:23:51.067206 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (438.783µs) 2025-10-13T00:23:51.067251921+00:00 stderr F I1013 00:23:51.067241 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:51.067339213+00:00 stderr F I1013 00:23:51.067297 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:51.067400215+00:00 stderr F I1013 00:23:51.067389 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:51.067477637+00:00 stderr F I1013 00:23:51.067417 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, 
CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:51.067713904+00:00 stderr F I1013 00:23:51.067690 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:51.072118526+00:00 stderr F I1013 00:23:51.072090 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/machine-api-operator-machine-webhook" (245 of 955) 2025-10-13T00:23:51.072163658+00:00 stderr F I1013 00:23:51.072153 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.072192028+00:00 stderr F I1013 00:23:51.072179 1 sync_worker.go:999] Running sync for service "openshift-machine-api/machine-api-operator-webhook" (246 of 955) 2025-10-13T00:23:51.097027410+00:00 stderr F W1013 00:23:51.096989 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:51.098372658+00:00 stderr F I1013 00:23:51.098352 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.108437ms) 2025-10-13T00:23:51.120492294+00:00 stderr F I1013 00:23:51.120465 1 sync_worker.go:1014] Done syncing for deployment "openshift-etcd-operator/etcd-operator" (80 of 955) 2025-10-13T00:23:51.120537115+00:00 stderr F I1013 00:23:51.120527 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.120566136+00:00 stderr F I1013 00:23:51.120553 1 sync_worker.go:999] Running sync for clusteroperator "etcd" (81 of 955) 2025-10-13T00:23:51.120781512+00:00 stderr F I1013 00:23:51.120761 1 sync_worker.go:1014] Done syncing for clusteroperator "etcd" (81 of 955) 2025-10-13T00:23:51.120813093+00:00 stderr F I1013 00:23:51.120803 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.120841684+00:00 stderr F I1013 00:23:51.120830 1 sync_worker.go:999] Running sync for flowschema "openshift-etcd-operator" (82 of 955) 2025-10-13T00:23:51.171082623+00:00 stderr F I1013 00:23:51.171001 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/machine-api-operator-webhook" (246 of 955) 2025-10-13T00:23:51.171082623+00:00 stderr F I1013 00:23:51.171058 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.171140055+00:00 stderr F I1013 00:23:51.171075 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/machine-api-operator" (247 of 955) 2025-10-13T00:23:51.221117637+00:00 
stderr F I1013 00:23:51.221043 1 sync_worker.go:1014] Done syncing for flowschema "openshift-etcd-operator" (82 of 955) 2025-10-13T00:23:51.221117637+00:00 stderr F I1013 00:23:51.221085 1 task_graph.go:481] Running 65 on worker 1 2025-10-13T00:23:51.270098421+00:00 stderr F I1013 00:23:51.270029 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/machine-api-operator" (247 of 955) 2025-10-13T00:23:51.270098421+00:00 stderr F I1013 00:23:51.270065 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.270098421+00:00 stderr F I1013 00:23:51.270072 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/machine-api-controllers" (248 of 955) 2025-10-13T00:23:51.330378610+00:00 stderr F I1013 00:23:51.330292 1 sync_worker.go:989] Precreated resource clusteroperator "authentication" (330 of 955) 2025-10-13T00:23:51.330378610+00:00 stderr F I1013 00:23:51.330351 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.330378610+00:00 stderr F I1013 00:23:51.330358 1 sync_worker.go:999] Running sync for namespace "openshift-authentication-operator" (320 of 955) 2025-10-13T00:23:51.370435736+00:00 stderr F I1013 00:23:51.370386 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/machine-api-controllers" (248 of 955) 2025-10-13T00:23:51.370538019+00:00 stderr F I1013 00:23:51.370520 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.370593401+00:00 stderr F I1013 00:23:51.370570 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/machine-api-termination-handler" (249 of 955) 2025-10-13T00:23:51.420642405+00:00 stderr F I1013 00:23:51.420323 1 sync_worker.go:1014] Done syncing for namespace "openshift-authentication-operator" (320 of 955) 2025-10-13T00:23:51.420642405+00:00 stderr F I1013 00:23:51.420393 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.420642405+00:00 stderr F I1013 00:23:51.420406 1 sync_worker.go:999] Running sync for customresourcedefinition "authentications.operator.openshift.io" (321 of 955) 2025-10-13T00:23:51.470105513+00:00 stderr F I1013 00:23:51.470004 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/machine-api-termination-handler" (249 of 955) 2025-10-13T00:23:51.470161014+00:00 stderr F I1013 00:23:51.470098 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.470161014+00:00 stderr F I1013 00:23:51.470116 1 sync_worker.go:999] Running sync for role "openshift-config/machine-api-controllers" (250 of 955) 2025-10-13T00:23:51.521262007+00:00 stderr F I1013 00:23:51.521197 1 sync_worker.go:1014] Done syncing for customresourcedefinition "authentications.operator.openshift.io" (321 of 955) 2025-10-13T00:23:51.521262007+00:00 stderr F I1013 00:23:51.521227 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.521262007+00:00 stderr F I1013 00:23:51.521233 1 sync_worker.go:999] Running sync for authentication "cluster" (322 of 955) 2025-10-13T00:23:51.570680574+00:00 stderr F I1013 00:23:51.570609 1 sync_worker.go:1014] Done syncing for role "openshift-config/machine-api-controllers" (250 of 955) 2025-10-13T00:23:51.570680574+00:00 stderr F I1013 00:23:51.570652 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.570680574+00:00 stderr F I1013 00:23:51.570661 1 sync_worker.go:999] Running 
sync for role "openshift-config-managed/machine-api-controllers" (251 of 955) 2025-10-13T00:23:51.626548990+00:00 stderr F I1013 00:23:51.626463 1 sync_worker.go:1014] Done syncing for authentication "cluster" (322 of 955) 2025-10-13T00:23:51.626548990+00:00 stderr F I1013 00:23:51.626508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.626548990+00:00 stderr F I1013 00:23:51.626520 1 sync_worker.go:999] Running sync for service "openshift-authentication-operator/metrics" (323 of 955) 2025-10-13T00:23:51.669373083+00:00 stderr F I1013 00:23:51.669267 1 sync_worker.go:1014] Done syncing for role "openshift-config-managed/machine-api-controllers" (251 of 955) 2025-10-13T00:23:51.669373083+00:00 stderr F I1013 00:23:51.669309 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.669373083+00:00 stderr F I1013 00:23:51.669321 1 sync_worker.go:999] Running sync for role "openshift-machine-api/machine-api-controllers" (252 of 955) 2025-10-13T00:23:51.719649374+00:00 stderr F I1013 00:23:51.719563 1 sync_worker.go:1014] Done syncing for service "openshift-authentication-operator/metrics" (323 of 955) 2025-10-13T00:23:51.719649374+00:00 stderr F I1013 00:23:51.719603 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.719649374+00:00 stderr F I1013 00:23:51.719615 1 sync_worker.go:999] Running sync for configmap "openshift-authentication-operator/authentication-operator-config" (324 of 955) 2025-10-13T00:23:51.770455689+00:00 stderr F I1013 00:23:51.769957 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/machine-api-controllers" (252 of 955) 2025-10-13T00:23:51.770455689+00:00 stderr F I1013 00:23:51.770343 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.770455689+00:00 stderr F I1013 00:23:51.770355 1 sync_worker.go:999] Running sync for clusterrole "machine-api-controllers" (253 of 955) 2025-10-13T00:23:51.821850991+00:00 stderr F I1013 00:23:51.821744 1 sync_worker.go:1014] Done syncing for configmap "openshift-authentication-operator/authentication-operator-config" (324 of 955) 2025-10-13T00:23:51.821850991+00:00 stderr F I1013 00:23:51.821790 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.821850991+00:00 stderr F I1013 00:23:51.821800 1 sync_worker.go:999] Running sync for configmap "openshift-authentication-operator/service-ca-bundle" (325 of 955) 2025-10-13T00:23:51.871161944+00:00 stderr F I1013 00:23:51.871089 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-controllers" (253 of 955) 2025-10-13T00:23:51.871161944+00:00 stderr F I1013 00:23:51.871139 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.871161944+00:00 stderr F I1013 00:23:51.871148 1 sync_worker.go:999] Running sync for role "openshift-machine-api/machine-api-operator" (254 of 955) 2025-10-13T00:23:51.921484246+00:00 stderr F I1013 00:23:51.921394 1 sync_worker.go:1014] Done syncing for configmap "openshift-authentication-operator/service-ca-bundle" (325 of 955) 2025-10-13T00:23:51.921484246+00:00 stderr F I1013 00:23:51.921454 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.921484246+00:00 stderr F I1013 00:23:51.921463 1 sync_worker.go:999] Running sync for configmap "openshift-authentication-operator/trusted-ca-bundle" (326 of 955) 2025-10-13T00:23:51.970147991+00:00 stderr F I1013 00:23:51.970079 1 
sync_worker.go:1014] Done syncing for role "openshift-machine-api/machine-api-operator" (254 of 955) 2025-10-13T00:23:51.970147991+00:00 stderr F I1013 00:23:51.970114 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:51.970147991+00:00 stderr F I1013 00:23:51.970122 1 sync_worker.go:999] Running sync for clusterrole "machine-api-operator" (255 of 955) 2025-10-13T00:23:52.030755830+00:00 stderr F I1013 00:23:52.030688 1 sync_worker.go:1014] Done syncing for configmap "openshift-authentication-operator/trusted-ca-bundle" (326 of 955) 2025-10-13T00:23:52.030755830+00:00 stderr F I1013 00:23:52.030725 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.030755830+00:00 stderr F I1013 00:23:52.030732 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:authentication" (327 of 955) 2025-10-13T00:23:52.067845943+00:00 stderr F I1013 00:23:52.067760 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:52.068047058+00:00 stderr F I1013 00:23:52.068008 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:52.068047058+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-10-13T00:23:52.068047058+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:52.068063429+00:00 stderr F I1013 00:23:52.068052 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (331.87µs) 2025-10-13T00:23:52.068074519+00:00 stderr F I1013 00:23:52.068066 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:52.068136481+00:00 stderr F I1013 00:23:52.068104 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:52.068160302+00:00 stderr F I1013 00:23:52.068150 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:52.068220443+00:00 stderr F I1013 00:23:52.068157 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:52.068447760+00:00 stderr F I1013 00:23:52.068405 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:52.069976972+00:00 stderr F I1013 00:23:52.069922 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-operator" (255 of 955) 2025-10-13T00:23:52.069976972+00:00 stderr F I1013 00:23:52.069948 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.069976972+00:00 stderr F I1013 00:23:52.069954 1 sync_worker.go:999] Running sync for clusterrole "machine-api-operator-ext-remediation" (256 of 955) 2025-10-13T00:23:52.106016886+00:00 stderr F W1013 00:23:52.105945 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:52.109464192+00:00 stderr F I1013 00:23:52.108839 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.765345ms) 2025-10-13T00:23:52.134372836+00:00 stderr F I1013 00:23:52.130783 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:authentication" (327 of 955) 2025-10-13T00:23:52.134372836+00:00 stderr F I1013 00:23:52.130847 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.134372836+00:00 stderr F I1013 00:23:52.130863 1 sync_worker.go:999] Running sync for serviceaccount "openshift-authentication-operator/authentication-operator" (328 of 955) 2025-10-13T00:23:52.170156183+00:00 stderr F I1013 00:23:52.170092 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-operator-ext-remediation" (256 of 955) 2025-10-13T00:23:52.170156183+00:00 stderr F I1013 00:23:52.170131 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.170156183+00:00 stderr F I1013 00:23:52.170142 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-operator-ext-remediation" (257 of 955) 2025-10-13T00:23:52.220635009+00:00 stderr F I1013 00:23:52.220587 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-authentication-operator/authentication-operator" (328 of 955) 2025-10-13T00:23:52.220635009+00:00 stderr F I1013 00:23:52.220623 1 sync_worker.go:703] Dropping status report from earlier in 
sync loop 2025-10-13T00:23:52.220682660+00:00 stderr F I1013 00:23:52.220631 1 sync_worker.go:999] Running sync for deployment "openshift-authentication-operator/authentication-operator" (329 of 955) 2025-10-13T00:23:52.270040455+00:00 stderr F I1013 00:23:52.269989 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-operator-ext-remediation" (257 of 955) 2025-10-13T00:23:52.270040455+00:00 stderr F I1013 00:23:52.270025 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.270074856+00:00 stderr F I1013 00:23:52.270033 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-controllers" (258 of 955) 2025-10-13T00:23:52.370156864+00:00 stderr F I1013 00:23:52.370093 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-controllers" (258 of 955) 2025-10-13T00:23:52.370156864+00:00 stderr F I1013 00:23:52.370128 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.370156864+00:00 stderr F I1013 00:23:52.370134 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/machine-api-controllers" (259 of 955) 2025-10-13T00:23:52.420697662+00:00 stderr F I1013 00:23:52.420614 1 sync_worker.go:1014] Done syncing for deployment "openshift-authentication-operator/authentication-operator" (329 of 955) 2025-10-13T00:23:52.420697662+00:00 stderr F I1013 00:23:52.420649 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.420697662+00:00 stderr F I1013 00:23:52.420655 1 sync_worker.go:999] Running sync for clusteroperator "authentication" (330 of 955) 2025-10-13T00:23:52.420853036+00:00 stderr F E1013 00:23:52.420808 1 task.go:122] error running apply for clusteroperator "authentication" (330 of 955): Cluster operator authentication is not available 2025-10-13T00:23:52.420853036+00:00 stderr F I1013 00:23:52.420824 1 task_graph.go:481] Running 66 on worker 1 2025-10-13T00:23:52.420853036+00:00 stderr F I1013 00:23:52.420831 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.420853036+00:00 stderr F I1013 00:23:52.420836 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterresourcequotas.quota.openshift.io" (16 of 955) 2025-10-13T00:23:52.469440679+00:00 stderr F I1013 00:23:52.469388 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/machine-api-controllers" (259 of 955) 2025-10-13T00:23:52.469440679+00:00 stderr F I1013 00:23:52.469415 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.469440679+00:00 stderr F I1013 00:23:52.469421 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/machine-api-controllers" (260 of 955) 2025-10-13T00:23:52.521639793+00:00 stderr F I1013 00:23:52.521577 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterresourcequotas.quota.openshift.io" (16 of 955) 2025-10-13T00:23:52.521639793+00:00 stderr F I1013 00:23:52.521608 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.521639793+00:00 stderr F I1013 00:23:52.521613 1 sync_worker.go:999] Running sync for customresourcedefinition "proxies.config.openshift.io" (17 of 955) 2025-10-13T00:23:52.600915522+00:00 stderr F I1013 00:23:52.600848 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/machine-api-controllers" (260 of 955) 2025-10-13T00:23:52.600915522+00:00 stderr F I1013 00:23:52.600885 1 sync_worker.go:703] Dropping status 
report from earlier in sync loop 2025-10-13T00:23:52.600915522+00:00 stderr F I1013 00:23:52.600892 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/machine-api-controllers" (261 of 955) 2025-10-13T00:23:52.620792625+00:00 stderr F I1013 00:23:52.620748 1 sync_worker.go:1014] Done syncing for customresourcedefinition "proxies.config.openshift.io" (17 of 955) 2025-10-13T00:23:52.620859377+00:00 stderr F I1013 00:23:52.620849 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.620888788+00:00 stderr F I1013 00:23:52.620876 1 sync_worker.go:999] Running sync for customresourcedefinition "rolebindingrestrictions.authorization.openshift.io" (18 of 955) 2025-10-13T00:23:52.669969885+00:00 stderr F I1013 00:23:52.669891 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/machine-api-controllers" (261 of 955) 2025-10-13T00:23:52.669969885+00:00 stderr F I1013 00:23:52.669936 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.669969885+00:00 stderr F I1013 00:23:52.669948 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-operator" (262 of 955) 2025-10-13T00:23:52.721536492+00:00 stderr F I1013 00:23:52.721470 1 sync_worker.go:1014] Done syncing for customresourcedefinition "rolebindingrestrictions.authorization.openshift.io" (18 of 955) 2025-10-13T00:23:52.721630274+00:00 stderr F I1013 00:23:52.721615 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.721762168+00:00 stderr F I1013 00:23:52.721728 1 sync_worker.go:999] Running sync for customresourcedefinition "securitycontextconstraints.security.openshift.io" (19 of 955) 2025-10-13T00:23:52.771176034+00:00 stderr F I1013 00:23:52.771114 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-operator" (262 of 955) 2025-10-13T00:23:52.771176034+00:00 stderr F I1013 00:23:52.771163 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.771216006+00:00 stderr F I1013 00:23:52.771176 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/machine-api-operator" (263 of 955) 2025-10-13T00:23:52.821606159+00:00 stderr F I1013 00:23:52.821550 1 sync_worker.go:1014] Done syncing for customresourcedefinition "securitycontextconstraints.security.openshift.io" (19 of 955) 2025-10-13T00:23:52.821606159+00:00 stderr F I1013 00:23:52.821583 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.821606159+00:00 stderr F I1013 00:23:52.821589 1 sync_worker.go:999] Running sync for customresourcedefinition "rangeallocations.security.internal.openshift.io" (20 of 955) 2025-10-13T00:23:52.869468682+00:00 stderr F I1013 00:23:52.869370 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/machine-api-operator" (263 of 955) 2025-10-13T00:23:52.869468682+00:00 stderr F I1013 00:23:52.869415 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.869468682+00:00 stderr F I1013 00:23:52.869428 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/prometheus-k8s-machine-api-operator" (264 of 955) 2025-10-13T00:23:52.920636358+00:00 stderr F I1013 00:23:52.920578 1 sync_worker.go:1014] Done syncing for customresourcedefinition "rangeallocations.security.internal.openshift.io" (20 of 955) 2025-10-13T00:23:52.920713210+00:00 stderr F I1013 00:23:52.920693 1 task_graph.go:481] Running 67 on worker 1 
2025-10-13T00:23:52.920845523+00:00 stderr F I1013 00:23:52.920815 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.920854554+00:00 stderr F I1013 00:23:52.920835 1 sync_worker.go:999] Running sync for customresourcedefinition "openshiftcontrollermanagers.operator.openshift.io" (730 of 955) 2025-10-13T00:23:52.970009313+00:00 stderr F I1013 00:23:52.969884 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/prometheus-k8s-machine-api-operator" (264 of 955) 2025-10-13T00:23:52.970009313+00:00 stderr F I1013 00:23:52.969932 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:52.970009313+00:00 stderr F I1013 00:23:52.969945 1 sync_worker.go:999] Running sync for role "openshift-machine-api/prometheus-k8s-machine-api-operator" (265 of 955) 2025-10-13T00:23:53.022169076+00:00 stderr F I1013 00:23:53.022081 1 sync_worker.go:1014] Done syncing for customresourcedefinition "openshiftcontrollermanagers.operator.openshift.io" (730 of 955) 2025-10-13T00:23:53.022290549+00:00 stderr F I1013 00:23:53.022267 1 task_graph.go:481] Running 68 on worker 1 2025-10-13T00:23:53.022422863+00:00 stderr F I1013 00:23:53.022394 1 sync_worker.go:982] Skipping precreation of clusteroperator "cloud-controller-manager" (209 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.022537936+00:00 stderr F I1013 00:23:53.022512 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.022652449+00:00 stderr F I1013 00:23:53.022617 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-controller-manager-operator" (178 of 955) 2025-10-13T00:23:53.022804824+00:00 stderr F I1013 00:23:53.022747 1 sync_worker.go:1002] Skipping namespace "openshift-cloud-controller-manager-operator" (178 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.022892326+00:00 stderr F I1013 00:23:53.022866 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.023013379+00:00 stderr F I1013 00:23:53.022985 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-controller-manager" (179 of 955) 2025-10-13T00:23:53.023807022+00:00 stderr F I1013 00:23:53.023086 1 sync_worker.go:1002] Skipping namespace "openshift-cloud-controller-manager" (179 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.023989667+00:00 stderr F I1013 00:23:53.023935 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.024126360+00:00 stderr F I1013 00:23:53.024074 1 sync_worker.go:999] Running sync for configmap "openshift-cloud-controller-manager-operator/cloud-controller-manager-images" (180 of 955) 2025-10-13T00:23:53.024250984+00:00 stderr F I1013 00:23:53.024196 1 sync_worker.go:1002] Skipping configmap "openshift-cloud-controller-manager-operator/cloud-controller-manager-images" (180 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.024321396+00:00 stderr F I1013 00:23:53.024298 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.024446689+00:00 stderr F I1013 00:23:53.024412 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (181 of 955) 2025-10-13T00:23:53.024593663+00:00 stderr F I1013 00:23:53.024538 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (181 of 
955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.024681296+00:00 stderr F I1013 00:23:53.024656 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.024812450+00:00 stderr F I1013 00:23:53.024782 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:operator:cloud-controller-manager" (182 of 955) 2025-10-13T00:23:53.024934933+00:00 stderr F I1013 00:23:53.024901 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:operator:cloud-controller-manager" (182 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.025047846+00:00 stderr F I1013 00:23:53.025021 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.025150449+00:00 stderr F I1013 00:23:53.025121 1 sync_worker.go:999] Running sync for role "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (183 of 955) 2025-10-13T00:23:53.025244822+00:00 stderr F I1013 00:23:53.025221 1 sync_worker.go:1002] Skipping role "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (183 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.025349985+00:00 stderr F I1013 00:23:53.025308 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.025432227+00:00 stderr F I1013 00:23:53.025407 1 sync_worker.go:999] Running sync for role "openshift-config/cluster-cloud-controller-manager" (184 of 955) 2025-10-13T00:23:53.025525279+00:00 stderr F I1013 00:23:53.025502 1 sync_worker.go:1002] Skipping role "openshift-config/cluster-cloud-controller-manager" (184 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.025609782+00:00 stderr F I1013 00:23:53.025590 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.025699554+00:00 stderr F I1013 00:23:53.025675 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/cluster-cloud-controller-manager" (185 of 955) 2025-10-13T00:23:53.025793197+00:00 stderr F I1013 00:23:53.025770 1 sync_worker.go:1002] Skipping rolebinding "openshift-config/cluster-cloud-controller-manager" (185 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.025876429+00:00 stderr F I1013 00:23:53.025857 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.025965152+00:00 stderr F I1013 00:23:53.025909 1 sync_worker.go:999] Running sync for role "openshift-config-managed/cluster-cloud-controller-manager" (186 of 955) 2025-10-13T00:23:53.026060074+00:00 stderr F I1013 00:23:53.026037 1 sync_worker.go:1002] Skipping role "openshift-config-managed/cluster-cloud-controller-manager" (186 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.026127116+00:00 stderr F I1013 00:23:53.026091 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.026200718+00:00 stderr F I1013 00:23:53.026176 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/cluster-cloud-controller-manager" (187 of 955) 2025-10-13T00:23:53.026295691+00:00 stderr F I1013 00:23:53.026273 1 sync_worker.go:1002] Skipping rolebinding "openshift-config-managed/cluster-cloud-controller-manager" (187 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.026386203+00:00 stderr F I1013 00:23:53.026366 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.026488876+00:00 stderr F I1013 
00:23:53.026462 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:cloud-controller-manager" (188 of 955) 2025-10-13T00:23:53.026585499+00:00 stderr F I1013 00:23:53.026561 1 sync_worker.go:1002] Skipping clusterrolebinding "system:openshift:operator:cloud-controller-manager" (188 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.026670151+00:00 stderr F I1013 00:23:53.026651 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.026760004+00:00 stderr F I1013 00:23:53.026736 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (189 of 955) 2025-10-13T00:23:53.026862457+00:00 stderr F I1013 00:23:53.026839 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (189 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.026947809+00:00 stderr F I1013 00:23:53.026928 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.027036692+00:00 stderr F I1013 00:23:53.027013 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-controller-manager/cluster-cloud-controller-manager" (190 of 955) 2025-10-13T00:23:53.027134154+00:00 stderr F I1013 00:23:53.027109 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-controller-manager/cluster-cloud-controller-manager" (190 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.027217777+00:00 stderr F I1013 00:23:53.027198 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.027301879+00:00 stderr F I1013 00:23:53.027278 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-controller-manager/cloud-controller-manager" (191 of 955) 2025-10-13T00:23:53.027406352+00:00 stderr F I1013 00:23:53.027380 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-controller-manager/cloud-controller-manager" (191 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.027495254+00:00 stderr F I1013 00:23:53.027476 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.027591067+00:00 stderr F I1013 00:23:53.027566 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-controller-manager/cloud-controller-manager" (192 of 955) 2025-10-13T00:23:53.027685100+00:00 stderr F I1013 00:23:53.027662 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-controller-manager/cloud-controller-manager" (192 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.027768132+00:00 stderr F I1013 00:23:53.027749 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.027856794+00:00 stderr F I1013 00:23:53.027801 1 sync_worker.go:999] Running sync for role "openshift-cloud-controller-manager/cloud-controller-manager" (193 of 955) 2025-10-13T00:23:53.027933967+00:00 stderr F I1013 00:23:53.027895 1 sync_worker.go:1002] Skipping role "openshift-cloud-controller-manager/cloud-controller-manager" (193 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.027997688+00:00 stderr F I1013 00:23:53.027978 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.028053530+00:00 stderr F I1013 00:23:53.028030 1 sync_worker.go:999] Running sync for rolebinding "kube-system/cloud-controller-manager" (194 of 955) 
2025-10-13T00:23:53.028167293+00:00 stderr F I1013 00:23:53.028141 1 sync_worker.go:1002] Skipping rolebinding "kube-system/cloud-controller-manager" (194 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.028236665+00:00 stderr F I1013 00:23:53.028217 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.028296497+00:00 stderr F I1013 00:23:53.028272 1 sync_worker.go:999] Running sync for role "kube-system/cloud-controller-manager" (195 of 955) 2025-10-13T00:23:53.028402810+00:00 stderr F I1013 00:23:53.028378 1 sync_worker.go:1002] Skipping role "kube-system/cloud-controller-manager" (195 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.028481222+00:00 stderr F I1013 00:23:53.028441 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.028543143+00:00 stderr F I1013 00:23:53.028516 1 sync_worker.go:999] Running sync for clusterrole "cloud-controller-manager" (196 of 955) 2025-10-13T00:23:53.028603395+00:00 stderr F I1013 00:23:53.028581 1 sync_worker.go:1002] Skipping clusterrole "cloud-controller-manager" (196 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.028659947+00:00 stderr F I1013 00:23:53.028641 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.028764530+00:00 stderr F I1013 00:23:53.028692 1 sync_worker.go:999] Running sync for clusterrolebinding "cloud-controller-manager" (197 of 955) 2025-10-13T00:23:53.028852792+00:00 stderr F I1013 00:23:53.028828 1 sync_worker.go:1002] Skipping clusterrolebinding "cloud-controller-manager" (197 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.028902913+00:00 stderr F I1013 00:23:53.028884 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.028957905+00:00 stderr F I1013 00:23:53.028935 1 sync_worker.go:999] Running sync for rolebinding "kube-system/cloud-controller-manager:apiserver-authentication-reader" (198 of 955) 2025-10-13T00:23:53.029016107+00:00 stderr F I1013 00:23:53.028994 1 sync_worker.go:1002] Skipping rolebinding "kube-system/cloud-controller-manager:apiserver-authentication-reader" (198 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.029064418+00:00 stderr F I1013 00:23:53.029046 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.029119340+00:00 stderr F I1013 00:23:53.029096 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-controller-manager/cloud-node-manager" (199 of 955) 2025-10-13T00:23:53.029206252+00:00 stderr F I1013 00:23:53.029183 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-controller-manager/cloud-node-manager" (199 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.029255743+00:00 stderr F I1013 00:23:53.029237 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.029311635+00:00 stderr F I1013 00:23:53.029288 1 sync_worker.go:999] Running sync for clusterrole "cloud-node-manager" (200 of 955) 2025-10-13T00:23:53.029434458+00:00 stderr F I1013 00:23:53.029409 1 sync_worker.go:1002] Skipping clusterrole "cloud-node-manager" (200 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.029491490+00:00 stderr F I1013 00:23:53.029472 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.029579172+00:00 stderr F I1013 00:23:53.029553 1 
sync_worker.go:999] Running sync for clusterrolebinding "cloud-node-manager" (201 of 955) 2025-10-13T00:23:53.029643064+00:00 stderr F I1013 00:23:53.029617 1 sync_worker.go:1002] Skipping clusterrolebinding "cloud-node-manager" (201 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.029733017+00:00 stderr F I1013 00:23:53.029702 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.029810819+00:00 stderr F I1013 00:23:53.029777 1 sync_worker.go:999] Running sync for serviceaccount "kube-system/cloud-controller-manager" (202 of 955) 2025-10-13T00:23:53.029886671+00:00 stderr F I1013 00:23:53.029862 1 sync_worker.go:1002] Skipping serviceaccount "kube-system/cloud-controller-manager" (202 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.029937952+00:00 stderr F I1013 00:23:53.029919 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.029994104+00:00 stderr F I1013 00:23:53.029970 1 sync_worker.go:999] Running sync for clusterrole "openstack-cloud-controller-manager" (203 of 955) 2025-10-13T00:23:53.030101297+00:00 stderr F I1013 00:23:53.030055 1 sync_worker.go:1002] Skipping clusterrole "openstack-cloud-controller-manager" (203 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.030168059+00:00 stderr F I1013 00:23:53.030148 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.030255171+00:00 stderr F I1013 00:23:53.030201 1 sync_worker.go:999] Running sync for clusterrolebinding "openstack-cloud-controller-manager" (204 of 955) 2025-10-13T00:23:53.030353724+00:00 stderr F I1013 00:23:53.030304 1 sync_worker.go:1002] Skipping clusterrolebinding "openstack-cloud-controller-manager" (204 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.030418276+00:00 stderr F I1013 00:23:53.030397 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.030478637+00:00 stderr F I1013 00:23:53.030452 1 sync_worker.go:999] Running sync for service "openshift-cloud-controller-manager-operator/cloud-controller-manager-operator" (205 of 955) 2025-10-13T00:23:53.030539909+00:00 stderr F I1013 00:23:53.030518 1 sync_worker.go:1002] Skipping service "openshift-cloud-controller-manager-operator/cloud-controller-manager-operator" (205 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.030588540+00:00 stderr F I1013 00:23:53.030570 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.030645022+00:00 stderr F I1013 00:23:53.030621 1 sync_worker.go:999] Running sync for configmap "openshift-cloud-controller-manager-operator/kube-rbac-proxy" (206 of 955) 2025-10-13T00:23:53.030704324+00:00 stderr F I1013 00:23:53.030682 1 sync_worker.go:1002] Skipping configmap "openshift-cloud-controller-manager-operator/kube-rbac-proxy" (206 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.030752665+00:00 stderr F I1013 00:23:53.030734 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.030810217+00:00 stderr F I1013 00:23:53.030787 1 sync_worker.go:999] Running sync for deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (207 of 955) 2025-10-13T00:23:53.030911509+00:00 stderr F I1013 00:23:53.030853 1 sync_worker.go:1002] Skipping deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (207 of 
955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.031004392+00:00 stderr F I1013 00:23:53.030983 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.031063594+00:00 stderr F I1013 00:23:53.031038 1 sync_worker.go:999] Running sync for deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator" (208 of 955) 2025-10-13T00:23:53.031182277+00:00 stderr F I1013 00:23:53.031101 1 sync_worker.go:1002] Skipping deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator" (208 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.031294600+00:00 stderr F I1013 00:23:53.031274 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.031378892+00:00 stderr F I1013 00:23:53.031352 1 sync_worker.go:999] Running sync for clusteroperator "cloud-controller-manager" (209 of 955) 2025-10-13T00:23:53.031448434+00:00 stderr F I1013 00:23:53.031425 1 sync_worker.go:1002] Skipping clusteroperator "cloud-controller-manager" (209 of 955): disabled capabilities: CloudControllerManager 2025-10-13T00:23:53.031497986+00:00 stderr F I1013 00:23:53.031479 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.031553367+00:00 stderr F I1013 00:23:53.031530 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-openstack-cloud-controller-manager" (210 of 955) 2025-10-13T00:23:53.031613349+00:00 stderr F I1013 00:23:53.031591 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-openstack-cloud-controller-manager" (210 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.031663080+00:00 stderr F I1013 00:23:53.031643 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.031717662+00:00 stderr F I1013 00:23:53.031695 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-azure-cloud-controller-manager" (211 of 955) 2025-10-13T00:23:53.031785744+00:00 stderr F I1013 00:23:53.031764 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-azure-cloud-controller-manager" (211 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.031834425+00:00 stderr F I1013 00:23:53.031816 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.031891527+00:00 stderr F I1013 00:23:53.031866 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ibm-cloud-controller-manager" (212 of 955) 2025-10-13T00:23:53.031968259+00:00 stderr F I1013 00:23:53.031939 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ibm-cloud-controller-manager" (212 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.032028561+00:00 stderr F I1013 00:23:53.032008 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.032087442+00:00 stderr F I1013 00:23:53.032062 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-powervs-cloud-controller-manager" (213 of 955) 2025-10-13T00:23:53.032147884+00:00 stderr F I1013 00:23:53.032126 1 sync_worker.go:1002] Skipping credentialsrequest 
"openshift-cloud-credential-operator/openshift-powervs-cloud-controller-manager" (213 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.032196805+00:00 stderr F I1013 00:23:53.032178 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.032254077+00:00 stderr F I1013 00:23:53.032229 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-ccm" (214 of 955) 2025-10-13T00:23:53.032378760+00:00 stderr F I1013 00:23:53.032346 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-ccm" (214 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.032452842+00:00 stderr F I1013 00:23:53.032430 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.032511944+00:00 stderr F I1013 00:23:53.032487 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-cloud-controller-manager" (215 of 955) 2025-10-13T00:23:53.032738730+00:00 stderr F I1013 00:23:53.032551 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-cloud-controller-manager" (215 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.032798142+00:00 stderr F I1013 00:23:53.032778 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.032856154+00:00 stderr F I1013 00:23:53.032831 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-nutanix-cloud-controller-manager" (216 of 955) 2025-10-13T00:23:53.032917725+00:00 stderr F I1013 00:23:53.032895 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-nutanix-cloud-controller-manager" (216 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-10-13T00:23:53.032983807+00:00 stderr F I1013 00:23:53.032963 1 task_graph.go:481] Running 69 on worker 1 2025-10-13T00:23:53.033035259+00:00 stderr F I1013 00:23:53.033016 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.033091770+00:00 stderr F I1013 00:23:53.033068 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-config-operator/machine-config-operator" (922 of 955) 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069005 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069193 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.49" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-10-13T00:23:53.069540604+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-10-13T00:23:53.069540604+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069235 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (239.137µs) 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069247 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069283 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069336 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-10-13T00:23:53.069540604+00:00 stderr F I1013 00:23:53.069344 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00402c990), Done:734, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc2332f355b15c76a, ext:340638708776, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-10-13T00:23:53.069617297+00:00 stderr F I1013 00:23:53.069579 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-10-13T00:23:53.070551283+00:00 stderr F I1013 00:23:53.070518 1 
sync_worker.go:1014] Done syncing for role "openshift-machine-api/prometheus-k8s-machine-api-operator" (265 of 955) 2025-10-13T00:23:53.070602524+00:00 stderr F I1013 00:23:53.070592 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.070633575+00:00 stderr F I1013 00:23:53.070618 1 sync_worker.go:999] Running sync for clusterrole "machine-api-operator:cluster-reader" (266 of 955) 2025-10-13T00:23:53.112156792+00:00 stderr F W1013 00:23:53.112123 1 warnings.go:70] unknown field "spec.signatureStores" 2025-10-13T00:23:53.115389442+00:00 stderr F I1013 00:23:53.115350 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.070894ms) 2025-10-13T00:23:53.121531883+00:00 stderr F I1013 00:23:53.121481 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-config-operator/machine-config-operator" (922 of 955) 2025-10-13T00:23:53.121531883+00:00 stderr F I1013 00:23:53.121526 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.121561064+00:00 stderr F I1013 00:23:53.121538 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-config-operator/machine-config-controller" (923 of 955) 2025-10-13T00:23:53.171230697+00:00 stderr F I1013 00:23:53.171163 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-operator:cluster-reader" (266 of 955) 2025-10-13T00:23:53.171230697+00:00 stderr F I1013 00:23:53.171222 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.171258018+00:00 stderr F I1013 00:23:53.171238 1 sync_worker.go:999] Running sync for securitycontextconstraints "machine-api-termination-handler" (267 of 955) 2025-10-13T00:23:53.221458426+00:00 stderr F I1013 00:23:53.221068 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-config-operator/machine-config-controller" (923 of 955) 2025-10-13T00:23:53.221458426+00:00 stderr F I1013 00:23:53.221427 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.221458426+00:00 stderr F I1013 00:23:53.221434 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-config-operator/machine-config-daemon" (924 of 955) 2025-10-13T00:23:53.271308335+00:00 stderr F I1013 00:23:53.271248 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "machine-api-termination-handler" (267 of 955) 2025-10-13T00:23:53.271308335+00:00 stderr F I1013 00:23:53.271281 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.271308335+00:00 stderr F I1013 00:23:53.271287 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/kube-rbac-proxy" (268 of 955) 2025-10-13T00:23:53.320853975+00:00 stderr F I1013 00:23:53.320794 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-config-operator/machine-config-daemon" (924 of 955) 2025-10-13T00:23:53.320853975+00:00 stderr F I1013 00:23:53.320824 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.320853975+00:00 stderr F I1013 00:23:53.320830 1 sync_worker.go:999] Running sync for storageversionmigration "machineconfiguration-controllerconfig-storage-version-migration" (925 of 955) 2025-10-13T00:23:53.370561410+00:00 stderr F I1013 00:23:53.370082 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/kube-rbac-proxy" (268 of 955) 2025-10-13T00:23:53.370561410+00:00 stderr F I1013 00:23:53.370523 1 sync_worker.go:703] Dropping 
status report from earlier in sync loop 2025-10-13T00:23:53.370561410+00:00 stderr F I1013 00:23:53.370532 1 sync_worker.go:999] Running sync for clusterrole "machine-api-controllers-metal3-remediation-aggregation" (269 of 955) 2025-10-13T00:23:53.419949585+00:00 stderr F I1013 00:23:53.419891 1 sync_worker.go:1014] Done syncing for storageversionmigration "machineconfiguration-controllerconfig-storage-version-migration" (925 of 955) 2025-10-13T00:23:53.419949585+00:00 stderr F I1013 00:23:53.419928 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.419949585+00:00 stderr F I1013 00:23:53.419935 1 sync_worker.go:999] Running sync for storageversionmigration "machineconfiguration-machineconfigpool-storage-version-migration" (926 of 955) 2025-10-13T00:23:53.470466022+00:00 stderr F I1013 00:23:53.470382 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-controllers-metal3-remediation-aggregation" (269 of 955) 2025-10-13T00:23:53.470466022+00:00 stderr F I1013 00:23:53.470420 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-10-13T00:23:53.470466022+00:00 stderr F I1013 00:23:53.470429 1 sync_worker.go:999] Running sync for clusterrole "machine-api-controllers-metal3-remediation" (270 of 955) ././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log.gzhome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000644000175000017500000244707715073043233033125 0ustar zuulzuul
[gzip-compressed binary log data omitted]
D©ßDUí½¼ªø¨úk´ê½àV–öêèGšÙ.tCTÂÏiÊÀö%V»­t)'~hd0²‹0åCà-h–hß!eGý$CfQÎúIŠÅýq— wïDÈÙ&HqºôEo½EaU ¾gRË>Jïì ÆÉú>‹ùŽªATOÂΪ)Ò çÁ­UL5Ð!ùÕû>Ø ,æÔ¯¹Zmgïζ—Ì €ð¸Jf„¶&••‡­Êì«÷wûéPÿ¨ñ“™€Baâ÷¯ÿÓèüìg«´³·Þ¯tç9ÌæÎÏvj°;?[þý§çg‘ ã#˜hŽõYˆ¤hnü ¥ÔÆ(b ûÝÇŸšáäü,̲p•Y5cû5¡O¦ÈyhV’D"&áÂ÷±I¢A9=?;û÷·Ð¡Àž{o~… H–޾ø¹2Ù¿7&ûÆ óðÂŒ£Gtâ³ÉÄøLCUU4>šŽu8FÊ€W ¡qÉïЇ@¦£NK`ly›%0ƒÌwy8K¨ŸµÁÊl5ø3S`÷M=à|ÛÌö¯Mc)ù¢š¾Ÿ{gÁ,l—Ù‘PÒYMÁ_Usf‹ ‹:ŒÃù¥þ ¡Y^zI †ZûµñÙ_¾|ülÃ[4ªþw®‘ÖÛë"1·I…«.ãÖ$¨…Œ`p8æO0Ūû|£QTü¹€ÿoX©¿†.^^[^ª=ô¯YãšGanƒYræýÎun¹Îc;oH«!©±Ì3¯ê+ÞYãàý`&qZÐgu¿ðÎîWŒêÀðŽË>1øe°HÊ3;X_oÝçÛjE§±„;¿x×ø+üÕûl‡®4Õ’i§³9Bz‡þÅ„Ó÷Åææ9“ßÖŒ-ø¡ÛR‚°Wñ ~+‡•ÄàÇ€¿žq´© À=„•S-üÁAæuKº†½QÔØqj`F[É&™,'Lþ4¾œúÁ5DšËàw‘òtíJL’t…<ȸ÷£ûêÿK]ý¿ëݯì¼_¿H¸ Í¥Ó°ET „þƒ¹îݹ쇑æ&Ì.Óø/®¯ƒda¬FîÕîÌF¥N}lùá{W)„ûõ­ /È¡jj{ÛXÀÛŸ!ÆLÃÛ϶Åe|mÞîxæŒ2íV2Äâz %2„ôßÉöWÇJ *¨K  Ý]OáÕb^ôîΚržÊqSy*´.cœtw[îð#li†¹Û†íì¡3ÌM ÏÀu̳$cS ÷ß)ô©¬[ØÝ¤” » àbúÅ>ÄØüô/d€0WÖŒ¹,Æý*xÈ©]ßkh˜¶a„‘K0'Ý9¶8äAQæ‹j᨟¶QÞ¹3L´Æ 9ÖÜ*9‰Uo°‡ðdûN 3ó ÒÒ¥Vk­Qv­ÖAüWÿiìœI¡rö&•ì {NM´HLÞ¿×ZõT‹KI ܉X`ªû@<„¯rb~!njoeJÎDÊÄCõÚiö‡- “a×ÊÈQ†ƒmÊ0²„íßC­v´Ê-µ<¸W’L ådk tÇvóíð#,X!¢)qº )?Ð÷¯jya“Ä¡ñÃÀnÈ¥Ë ü ´+Œ«þ·õFÿúV½íÛ`©9T:#>©©>œM¥|?‰'&¼ ÓTµòªY2뿳ÛBž®—+ˆ—í3.Æð ÓÞ€<Èù…Š" q…vj“hÕ_Ä…÷ÖmA›SÆeNÚL³ƒf¤äÎ9LÜñ„¾H#¤ÑNë9Lºà9X¬ñÄ–Jg ÕååAŽàCü]ïXþ<Ï®c(¯ª^B U‚–ÙÝÒ )üæ¼êè0ÉÑZì.«ðõm>Õw9kEôîF„Ûùëû­fL[â™võWµJ?”*,Ào»"CRõ£¶±Ôå‡Ùlž¥öèý®øpõ`Ãn¬Ê¼‹|P¸àH©°k"cå(>î³²ôà¦P J#â<…eå$}ejø p§ ã'F{ôMøëRÄe8R Œcg´cå(y]j˜g7&¿Ð5aÌÄ­ LÑëRÅu1Ÿš!Á’kg¤jåx7½â¨e°v„Óé éícIF–nd²ƒà¾Úµ+–+ÿ³`Þi€¸ü.lBñzµ¡.r83¬GIád !¹è€)Y®Í^a˜-ÒrÒϵØwµÔÑ\‡*0eî ƒ2¦† Nñx€£ºÏp<5GXº‡,ª í„çO³ìj¹òï0Ðãn”7YÆfÜ—AÄÐE×§ù8/ò,1›3m{ad™gyüWûÉǵ‡­ïô ~|*Ð)‘—ú©ñ­¹= ª°¢ÚyÄßÊñ.2ØE8ƒAÞ®­¿îe¬Âð‰aB9SÚµ r„uFuc­-¸év‘íøuÃyÉ%RÂålÔÊhOkúlÝç]¦&HÊi85áÕóäMŸI²úi½{WŒš q§#ðÙܹUríô¨Ç@¯VWþˆ5øÿ©ïqëũ𠀼Gð,ÊsŽÅÈ™º ä(é2F˸M¾6‡ta;vn.ã¢Ì·"þGÄQ`3|ÿQߥ^pZÔÒÏiS–a^¯ÄV;›áÁÁèÂ¥àÈI—à.vâ1¾XOÔîdóbÅñÃJêu~"4ЏYrÝÁÊ'&Ë«Ç 6mywc˜úmóÜWw{pঈb{LÞ¸Ââ@lâJ¿m'ò®]iðãÆÌf‹¤Ÿg,¹f’ ;#\›Éø Å*?rcó'wô±úvšÛòŽ>p ïþ ý⊚®4›M³²ÎÌwtÛ;äÕYÛwl í°Úc˳4÷ã!(¢B3å@ŽˆCl¯½Y“ÂÀ:µÉ:{+­ßø¡G8¸V2÷MçÖ[ÿ6PAÊy˜˜i"úgvÿ1Ú“ç§´–Œ9ùa)pßü¶>xêa$¥Êy`ä$A½ÜÌ®Ûm¿Ýòdö$ßO`À6>4×ÌK³œ#¡ÊД+Í4q‚¦º¿Î~:Уš:϶‹êÜðÐ{õ§C]ap·Ümê Ñc"ö‹4˜Ó¬z+V™gIR?™Ú„Ily’»9ÈíO×iêqHÞ’v”µg˜Ä0’9Ó’€U½Óá«„ ÕØÝpsvV¾-m_`TbMˆÓœ\õ LÜÖ,VËíÞÂo—ø˜ JKgˆiOJˆ¾ñÉÖjÎnh.ìtع}¨ØyÖä`j3(™x©’ Û”mNÖàÂ:g½:{¾~cËN›;䱑¦Ä=ÍŽÀ ¥”rv<‚űӽ{(î|2Aº˜?õÎApSøf\TgKêhb£|„&´x5š° øý(.®NU¿2eL )C+C øÊ ôµ(ÃVìzúã$ ¯NVœ¼}4{˧­Å^³‚ê¼Á¡a#öP„”¯EÙuœ—CëÁ>J¥[ºóµ‡ÓÕÃJözv¬3ž®9Q{¨æõŒ-Õ4:™çÙ813?2¥ {U§T ÷Z=È!|èi˜}ÚG`Ú‡%á»æpÄ2éF×Ë…¶°VÖ5Xæýè2X¼FD;!s˜Üu¹·-'Ûþýˆ&TCté$jOgh¶qZ@Êö.õ—_FA\åtý|b×­ï—´|±Ï“lh€ßæZ æ&ÞDtFœù-Ò«ë]³n3^òT”µbˆ 7eMI‡”²ëG,JþÔܿܿ‡ê6[¶" 3"äJTÉœÀ` IÑëÙϺ¸{.¢KÑÔ¹«Lìvæv™%3Ëo^ÌVíµ2Tu s÷ëÒ×Sú ¼²¤ÝK¬lÕ—ûbLRhc“œÁƒäT·kÒ#Y•rnƃÕú²ÊÁñ%Ùå‰ÐݬM„1æÆ$—·9&\£]µwäW¦œ'ä鋾¢ÜV1­U¥ÕwC@•ZJ.¸ªT‡Nšs`·wrÔVÛ5Ñ-…­·-·\ë¬æ‚"œ`õÁÖZš¢,ªÿú¹™g¹M0ùˆ@ê üðÓòw‡ÁÚkÎÅîVªƒ’ùq\{=tw<¹ÝÿˆÑo?ÿòTñ!%6©åÎ1~,ÑtðúN'o´Ý1‚…9‚ÔšL;i\k²¿g‰éôí‹OÄOq¥äüäaOi6üh?¯«½Ãì4€Jµ¢Ú ”|Œoc3SæqX\tózÕ½IúM¹¾Q.Žñ‡|koÒýôq.ñÕæLâ@H©PÌ•— ’ãRTtžÉù; TLíÉ-×¾-ÈÁÔ¿?=•špà®¶¿0ÌÉç®þr‡½Xg«î4ß&[èßÝÝ/ò$üÜ3Dœ{! 
1]ü‡pxêà…x<ˆà †CÚ©iÉÊÌ´¯¦qyOÐÝö×AJ¥];l '{êsÕ«±‡õyõÛ¸OÂç ƒŽÀÒÉ_.zä?¸ÏÛ_/ÄçQ¤CÎÇUAN Ú£¦‡÷yÃt·GhBÚ¤/ÒçQ)H?}®iþÀnoYê“å=²ïµg!'y&PO£ÍŠüàÏIÿ…ø9.ì+ˆ‘ÓÏqFJëûïêzíbÑ%0åÔÄ~]ë +Æ‹8‰ómËòN#ž˜Ø7ë¸vt@”¶f/òú¶GÑ!N¤(¨Ù©c~X¾°½t<¤o¬‹=Bš!ôÿä]koÛH²ý+„¿$Dv¿ÆÍÅ&q2`çN¹3û)0(ŠŽy#‰ZQv6;Øÿ~«ùi=ÜÅnI60ÇR™ÝnãœcÛF ȳ·>ºO èó/ÁŸñ<¹ùa@/»˜b“A'‚¶/ƒ7ãñò›Üô‚QòÕܹ &ðöÁmx?ú9È’¯ÓÜt²àfžN‚{óÜÐ™Ý ÇI4øÿ€é݆‹à%ò=eòÍBŒ½¹zÏùÛ7ê ÿ€µ~O>hüáŠQ|Åñeð¨é—pñ*ø8΃—UGàaù)ø¯,ŽîæÐÍ¿O6zýïWÁ[¤Þºíý{%Þ|„¾}§Ñ{uõVSõF¡wo>súüþíÚ㇠ÄGŸýS0¾'€HtGß‚›tÞ|oàI‚̳‹,™À+Ã`I8®å“7)¾ÏÃÙÌÀ_ ã`Îò'=ÈcòC”öí/n°xœm´¦ù(§ƒAݽ³WÖ.”ááézj9“¡Þ/·ÃWÁM8éa,Ò Œ/–½‚a=‰øñW}ˆÒªäÝì8¸],fÙåÅE™` L6ý ï;Kò|Q#TÂp‘ÞÀ{Ã^Ôo]K]”R¯vm~’Ìç6Mã`ÕܳG›Üwì2V“f¶ç 3z¹Ê|÷çoAIÙYnŽeª¥€cqŽ`ÑáŒÆØŸùCÇâÌÌ¥š/KÑ»i>]WŸæQùñ¦ÐPnM³ fòwét”OL…̶¼FÙïäÀÛiZo_–ï݆SC2é Õ¼,&ôKð»yc?æ‹wõæƒB>¨“ŠQ’•ð¹œ÷>ò¿ÃÜvv¶üöÝ}ú3¸Ùk_À` ÐØÑHªUÊf†²—8<¯Õm˜âÝ'à=Lî&ËŽÆ|¤£€N¨4õ²€5¡Ù¶NR.ˆ5lÑÈ52ûUT ¬„IÞG°E¤w @;¿p|MaZ†¯âì|cۦدÈzÚæ;œeÛ’ˆ °úì¼94YΦ€ÁÝldT½é*V9r.ÊŸg{uÎÈ!±Ú9ÁåzçÊö—#·E×€Égq”£ä•±äÅ]Vxu¥Sµ­÷”+{”¢©£Œù´Œ¯÷þ®p sp÷•I*…´ƒJñšÆ…Ò_‚«bä.AÌ]Ê3®ï=>ÿ#×ý_oæàÉ™lÀ—g€e5°ÏŠñŸ|4³ÂåÙ?ïÂfBLœãâ"fÕLó·ì6$\€Bã›(bØs¤ÁìMòAãéhn£XŒTŒiL",9 C¡(±8Úú‚³ÿaÀ[ Іµž’‘k¤Ž7 º6Qç2¿€—“˜ÄˆÉ¿æfÀóæ-Ho–óEÎ ï3©ü¹Uß 7¹m#3dή¬=ƔԛëËQZÒ\¹ÐëÉ<3Ô"žÌÆá˜›»1økãzkw¨´Ç`›î4 i6,ÛôD“/ÁÕG…_U|.QbP{¹ø1‹_¿ø£þˆóœ'0˼€w³túúŘ¢^°ÉÀ_¿¸J2xXΦߧyƒYpŸ„kÔ‘VÊ|¦à¤—o‘Ÿ ×–* ÖÆ7Æç1&žLïÌ"éÅOÛpä0H­7 ÞRoÃ1ŸÉË·ÀÞC7«x]ŒÀm­KD5ÏÈ ¶uj4Ÿ}*hò²r”^Þ˜qXÜEÜúúJJøÏÞ@iÓ"—" ÿ¿Ã/q‹-¿-†ØÙÅÙ¾- G\ùMË-rP–»0¥Ç¿¬?äOºž\4¿Ë./¯ÀxÍËfmÿ¤1*ƒYRz™3ÈnÓ»ñ(˜¦ 0äê•GðïÅ÷8ž/¨vгË`s§ï{Àâ7(2‘™y£´Ð±Y…çLüÓîÈüë6ʪÏÁËQÞû`žwü<ÿñ`U,Ó¡æMïhbÛ óÑñQÕvÕõ­ô@¦µÏSmN]ÇPCsz.†E9çšD ¡I›¼,ÆìOˉ ?ó; °¢‚Âd$ØÙÞÝ”| i&¤i’Ývw‚‚—ä<¦Õ$ÛŠ%6•½X7s+ÙqQ9l—ËÅN.2[›¦J´˜- Ƶóé&'9Á¾ïÍo/Áçúmø ÊÏñM ÆÅå{ng'=p‚ê=·íð•b—gõï|¼Ê=¸7Ÿ>.¸ i.î1U«KIã|%ñxô)\ÜÂoÿù †Ðßeðâ̦͸¢:øàsihå¬ñ"(?ÈÓQ±Ù•o¢U„^Íh•SYì½öáSî­ÖÆaûÓV«™‹—*­&æãÕ BüX;q Õ8ÌZ®‹Õâê´0^š¨3L8ÝJ_+ޏ5K!ÈIÒåÜÕçUnñú4`Yø«úÊùÕÓ͹ÈÝæ°S‚slJ’ZÝs‚D§+ˆ»îç&¢sä[ŸI+Ä¥m;W2¤:åAÆyîysÐD}yXS†ËþR—¸ë앦¶€íæ K2†8§ÌŽ®Iêí Ýß¡ÿ¿Ýÿås^i¸s^a æ/iMImä:%á齘Švð?( É f‚Ù «"_Jö2…¸d;(j¬}§—SðœzS‚9ǽ›¹+Û9q&$·g0r*™<®[v‡ð6¬GRX'j.u§ò%›a.Á|‰6KÇI”xÈÁ°©ÑCY·@ZÀBÒ:Iƒm3æv/¹fÚCÿD&hA%%ØZ·äÀMv«`ŸK;׃«½”F”ib=ùWJwŠqܬ€dz3³Åü.*"]\sÚJ{¢3…9b [ËŽ™ÀßN)âÚíƒÉÚþ4HLƘ’ÖƒL£ÈÝòÂ_îR{Ø)çBck}c=nždÑm<ºûHTZ7u(¢Bdd…Xt«)e…ØWYa~"4ebÑ0ÂöñbR`¹P¦†r:hÚƒ 3‚É}i»£rBv\—Ç‹hd6?}ív4Ú:X®Ÿ‰òµÞ<9ÞuU¸ [/æëà,X›8FnGY£®Ü¿ìe£˜9Û«ÒQý(gç6Šc¢°Ï‹c­:c³¥>_^jGq~"Χ„hn²9,3mz¡pgÃf´WY«¸¹NÃ0y0Cúñ;ÉEæ˜Ù½9ü<½ù‡¶TÍ•V˜X÷N4¬Kx­Ã{¬ÁlžÞ'#S…Êt}ŽVofNKÃq6(CˆWË6™¢.µØ–úMõc>OÙ^(ËÔˆ©Ÿ·^åe=`ÍtÙk_úõp¤5kÈÑNù[臭lu ¢t2K§æ"È6ÿð]¾òû5œÙ*•5/¾\­…fLZÁÕJ‰½Á=)K¿gG¨Mñ3SÿÎcTãÏK_£Ù1ª?3Zš¥ßãù½?j€—´—³ËU¦ž—*î³Ùm쑜„¹D­——LÀ ígTôRðÛx8½®^í£BX—M GyÖ‹¯‹4RåÎÿ$œõê VŸE¥+^ì6Mz4CÁ²l€ºL‰›šàV\P< ¤¸?GþxjN”5“>Èm¹P¾3žßãámš~ôScÝèJ{þÅXƒ3@¬ÈbLú@–æå˜¯MÑólŸ¢ÌÇ2ÐÛÃ'é¡á«q; %’kdçP"xвëh’g=lbád®?˜û¤ÎJkd¹½’Ëa-úBue¯-üÞï&Ûþû:Ýñ$Œj¬-VšË‘Ná—mð„u|¯xî¿/ÐOÊ‘PÄr{ÑÈ1µ#<“á$_@ö jõP?ü qŘ&–ÓÑ\ŽhäÉr=Ý+ý¬Ñ;[(´ÀÉD¶‰Ò¸'lS/7¢q˜ef¥[üzþ¯Aù €Z²“‚ú4Aî‹'\ƒ\¦ >Eõ–ŒOÇ ²É¯~Š8cvR8âÙ8ý1ÉO¨On"N ç)MÕIÍ„óR"ç‘p4Ê3 œ ìŸ*ìÃ$O7}Òè zRìr‡ãÅm^á¨ðÆB1K¾õJ/§÷òO«ZWeæËci­D Щî‡Ùóäå ßcþmñŒcØ/Þ HFÈ5~SÀ£ð!–ܶ©rõ°XĨ:ˆ[Å7M’Ì ìyü5ÉóÿŽ“Qh’¹þ£xÊ»f¨Óƒ”ÑSô~)¶<Ù|P|Ë#ºÂ”…Ávt9í]|]/ÔÖf²Y¶Äñj)uJƒ_3ŽX KU}ÄðaRýº¼N°4¶á.îÊ4õÛjÜWgp;àÆ$ÁÄr!—¨£ 6Š®5nJyä]³ÓÀÃæÌòˆÄÅK@X"L„”VA"ÔÍ‚ñ£Õ÷ÐõòÓÛ4Û”wô‘0¼Mµó~@S€kš‚ÍNÓE‘™oÛ»ÃÔZ%7wãfR rëŠ\«n‹•1[¿F{ìøiŒ—ÜŠ¬°™kü6Þ(È Õq '™fУEóªGá"ü.nëû‰}Z¯·Tö9Ȇ` QD­±Fõ‡84^~ß7ÖMŒ«F…2‘œ lG™t=¤Þˆòcv½Ã¦äÇò1Wð˜ŸóÇl8>ò„¤Ù°)9ÖŸ½ §±ŸEskážo%5é”(µ~£]oô¥ã‰Áo–M–ïsÞÈPÕ3˜ÛË¥×Kú0úfHs–˜l_»ÃlÑÄ^Y1´ë½cp ÆÇ‚q³'‘•BÁ¢ƒ[‘•¢ëáZެD@|ãôë‘ »Úw3"8ÒÖ=(ã]/-ßwοŋÙ¹FúÚ•—Ûh¦±«´ų̈#,¨Õl,ª;îc”q`·rVÛ7¢«-7|çXI"Ên­u=QXÄÙ"Ëÿ?˜Ç³tnLîàHý/üáçêﺅà@ï)ç cû[ÙÉ—Ä롇óÉö!F¿ýý×Cù‡ðÖLQûšÜÈa¹:¸Â饢íÊŒé ,E RDYÁRÓ=À"Àµ2û{:Ž{­¾x 
ü%\؇¢¢²9VøQ7åj`v$€JŒµu‹ä¸`{Ê®'ñbžDÙu?åU[#9(Ûõ‰(ÃÔšËÀÈuÛ™/å‡x™îÇE\âT˘DOj‰ÍE+¤Z´¤¢÷LÎG8àµf¼/¬ßÇ<åZx^1yðdo@qÅ%ö©†+†»x}²q¹(^|OçßšWŸó@ã>Gñ¨ÀlProkÛQš¾-ÃÌÃì±z Gµìcê0Xw ÕJ àv´6Ri¬ARpD7lGS_×õ›-Õ&=ÍŽ G)ºÚ(¼ˆÉ??̦NÓØ©:OöµÚàyi¾ïZr Ø’X1Všð~1ö‘Ó«-ÎO#›—”H˜Rë®DH“~µé%u—aÓmMåÜzÑäîâЩºÔDá²™­˜:£ÇÄ„[}uZ¶Õßv\âø …A© áÙ£t<|•×ðíÓa?*n§ Ð¾{™Ín’ñw`’ £ïÊ©÷ÜμïÖgbÙ¡''{§žt¾¼ vVfᓎ†}Ç5®©[´Fwab\3îMm‘K§çÎI»S݈²nÄãn$M7"åÛçhê¡#®I``hKZ¢d³¥ÇÀ>º¥Ã¾•Ð0SàÏÙ(Ÿãôo`nb¥´_=ó¸V÷kmÖ€Ë}=³¥õ@]|¼¯.ž æcgÌ&à ÛÕ¯ ÙÆ¼ èÀšù;huinÞÈÁèáÍ`³L¯e×~»ÀÞüv·<–áq•hÛXQËÅWQ1›ewk§Ù¢-ª<ŠjM–?kFßX 1j~³K_Ö[ñ÷¨æ>WIæE–ƒ8ÔêaîØ‹õv‚9Sm`]Tr©sȵê‡ü Žm´æ¹9$Sçïê¢Qؘat“LAê&YA‡ Ïì£ÿ”~ÿîh+[?.‰?±=ÁgV᜕WWÛîSüøÂ øømAÆóE­›®£ø®V”¿Ö긯àáÅaƒ:L‚#GýêĘFoOq†Î0¤Ù§,6x }<=³f¥=Ë&iöë Â7üPYüRóbû8P ¹û½P¦¥jÿ¢çÕ¼’~é•t£ç`»À‡w_º°"ÉdžÌnÁxXÛQýg_¾y}rˆJ»€Qï]ƽÙZoZôKïÑMìùúS°çé´ßYq;Í `œ%®Á}[ráíÂ}ûlx»æ„¾uãýÖÊÁš‘Ý.|ΕéWNÍݪøtÛuø1ƒ°Áà–!µîM9-PÂ_Ï›¼«;Løéîc2†©õÇ€Ñ+i$å—T¦—øXÿ>'BpAb! A1ô¾¯¼'lìð€üÁ©¦"%‡Ÿ '5=Y¥êñm%Iö0w5êÚüýÝNÉÖ߀@7ÿòö®c/°¨S*^ü¸®xñيⵜâÅKŋߚ/¾«WÈê®zrŠ·|ùæZ?„óÓóI2ͯ³Â~ESuRYªZ7Ü›ÊêïIþ¢„å&+^fÃôµ½­ÒǘÅÕið±JÂ…1ìN¥éx4ãÛ¥x¸æÔ[“Ë,$„™ÚxmœÍ/ð¦/ÐißöÞ.xÅï*Cñ­I–(W§(Ò›i‘†îi0­±Ž½ÝÓ¦Ö½"Éßÿþn–L¯m>.¦ÑÙ|‚cÙž•ɺ7N‡àd8c1µðCÌâ :i ç6Þš†SCâhé×:N Çj]}LfˆÀüâýùÄ-˜\av&êäÓtЃ92±yuÜ&üšiCÛ÷%[8UK£ÞD·Jš‹èÇÑd”_ƒMEy¬Å-U`Öœ :^$¦¢ÚãÐÅ gùaSWc Œ¿«º¶q·ê*”e€¿OO»2·t³ónéFºe‘2­ÝØ{C‰"½Wj½÷ZÈÍÞÏݺ * žoï–¤4æ1Õ¾nIBèÆøk¦6»Uå· ƒè›äÒ9µTUJüV9Âè?ÝèØÇt–F3pg& Êá8¢é\ä²|Ðö9æ‚AsÞp5ã†ñJOwŒ›m^'øbCütíÜ]À౺¼g¿¬.N®âÝb¸%WˆIC¶Tâ\a‡ƒcr7vdX²t0¡ç¾¸Ù ç¡Kê¯`*€sT"E³rKS¹oi‚‹¡Ä•n»âûÑdu~y ­žØVíæ¥ƒè6öI­S­€J\c4ƒn¦zWO÷ü …È„Á'tމ[Ùaá¨P!à˜¯Wà×¶•öZÀI;¢Œå¢WZSѶ=¾‚#ê/”“XPmQÄø¹c( œD‰QÍ<ö›ä±ú–“ø–“øzr Á„y%Lì`Y«Juÿ9 cXÒx[NBõ)ï)%9hHÅÛ:êà˜ s/[j×xÑ0õð!¸Zõø6¤Ò‹b/b`”©&з0JUR¼í¢í¤}Õ¹ÀõXíEj(ˆ!m ¢ 8­ÂÆH%‰5‹T”$¡ÆäÃp&¼”2NT 1‚T`ôÉýH õ#å±àXzÙ F“å,ó#zò¸ÝùspB%„÷ ¯9…Ví”Z8–%e¢-´·1 ±3Dzœ$a”J/RÁˆŒ!l7¤'TØŸF’Ðñ´…1Ž ±gNh8ÉÁi šs@ãÍù´÷üt³ç&Œ]ÚÏ.­…ÒŠhR­¹ ›w±G0dObÑr¦<‚aá Ên1ã¥4fLbe3îAŠp”±1¤˜\çÜ‹T3b”T±ô Õv¹!©Ï»Ó=ð'!\¢}L-œæAó”ût©Ý;/°PN;¥öÌ™T_Wžb§ÞsAô_*OᨆÉ/¤à΃íX`T0§)õ÷LÆä[žâ[žâkÊS8ÁÔbbá`eäÝ; ±&ìfkžÂô8.À ·›O 'µ 2*>_Ù€'Ig±f¤à ›8Ì’i?RSð˜5muÐ-áâ^–]ã®6ðYkdÛ‡Xi!ç†5ï#_ÀñÚ}Ù×éxê4¼#îÎÞ$Ó±³5FŽah®àãÑ0ɯ/³d6ö¢Ó"JG@Ú k`DɨÞF—iŠÅ€n² îÈξž¤È&ûr4Ý>ÆuæâYçD¾×D2D€Êc÷\Xþ‚Ç^Ó¸5•·Þ@UÁÑ è‚{¢ Fz§{dRÇIPÒD/RÌlÒœý6‰rp¬fžïC¢š®À£•.Z@“PFP£7wH¬Ñ.°‚.¿WÚßÇùÑb}2_pâ1x ÞŒiˆòðàè=ÿ*&Ëמ â±ØÄ^.\=ãó¸ …Z“7ˆ®¾ |ÿ ˜fÃÇ ?VÆhÅ6Skôœ6_R ²Æø1˜`”Q‚K-=L8Y[súrLxT­@{Ds zO·ÛF '8½W~Tt]Þ>ΜÀ]:”H¢”òO5ðþ~É5Xù³|68šç" ô‚Q’0ŽñûçÁcÒ.¸âoXõÐ*Ãèû5Š ñåRt÷â˜j|tƒj¨eï…îYv“…óü«"üt“põϪ-c=ÌÓ`å—ÖPËÁÑÀÓ Ì‹”3.¤‰ÛrÛj.<+ Œƒ™„Ç’´›W„ã¦ÑÕ~UIÈÙ|õXâûùeºÜ#~¼úõkvΰtêÑlŽõ \¼80“ÑXƒëà¡ájÛ´ïc^Ž ‘+,_8y g‚c¢ ¢{Ö¾|^‘0‘^‘1‹Á­õÌh ÇÃÖ™„ô"5š2Mø¦ùÃ*R€£kEv_>·bž»*‹œéÑÍè}GrYÛñªÖ_,¹·Q{ºAmýLo‹=YW&pÙ•KªHkNÚÁ±°ãVB{‘ƌŊ+Á>ÍÓ-{î™é ,ž*oÝ¢ãàûÓ¬0—qŽ9ñ$‰- [o‘w‰“Ó±ŠŠÖHÐÂç"vö\ð¨VŒrõ¼ªú‰Ç ÏÜz·qàt‚ˆ UÉ?Ÿí›HyØ&)¼H…Æ{›9'¤xòSJ/Rè¤D*¥õURå¯RµhõÇå;ù£µb¦Ç-ÅLÉÃË2íaIiÅ·lC^£Û AR;ÏæPâ'i£ÿtƒ~Mƒ„M{…Íh{­d>¤xŽ>̺{œ'Î –L*Ó¾3ÓÁѰmØÒx‘jqL¥ñ ¸À]|Šø‘•R‚vð!U± ;#¯¨)žé$ºµzÝNÄaH}¶Ã@¸Á k§ÔÂÕ«| »øvë½’ì/µ‹ÏQµ%Ú·o/¸?Ø.>‹‘ƒÒŒIÀ¸ÅêÛ}ßvñ}U»øv`³/¸‹OH<)È·Ÿ6äÞÅ1^ìÜš%upJï´âTy^ãE¡ÜE‘ùãl|S½~ŸLô4Åcž[N®Q pš7-=ŸÀgïÎÝ©ÿÊcMxLÔ7ÞxÓwíøã*åçóË|0M‹51l!|š ÞÛ{â\š‡'ü0,÷nâD8…‡ë÷¯¼ÓÀˆ£5FèÖŒqhCÉcÇ*$m_‡Zð*hË®ò¥dOk´Ñºý쫃S<(*P¾´‚²‰¢ o]°pÜHuou–6/ÔÀžJ(¸´å6Ž€Qu´&L{ØåàtPŽNI/ƒñ|’RFQR»ÄêûDªyPÞFi/Rp1!0}èA N²1A‹ÊJëa\iÌö·"µp§_•ñ#Õœp¥ñ!ÕL‡EµšÜ'R–uÔÔ‹”2BcÉÛ×! 
+¯™©fäÈ´wqp<ìd¥öi_c3±i?ïà˜ZÐÒí+Hc®…kÕjkK¸µýì»kßÕ´=–¸ÄS®úv<@ú~2coMá•p:èЦ–^&J0ÖÐ|ûbº°[¥MXºG+/Rð afxâ GŒ Ò°Z‡ eƒêEJYØ™XíѰwQ‚aRJ´î¶,áš· ;é«^Ž&öªàÎÐ]p|1;8¸Ö)^KU—ûGmI°žÀ5ÎÚ×¹\‹Ï cŒ‚&ƒñ.¸]Þ«÷1Ù„ ÎY|µo7h,véùBÝ`²I3Û«qÝu_ytUï{£kœÙ¾^à]ª j¨íÜ­…¿íZÑ÷ó €F'Ù|<´ÏÜuAåòjË×Ã,ÍËúûåïNf{²»®vT]NŽWË–?Î7±ßCíÂ!^t‚»l§òiæfsPmŒày×ôC{±ÛF¢{ +÷Œ_v6ÇM"ʘL Y[R\K\Sªl.ãæ&™þYðµnG¾7ÖX7cÕ´Ê—;'gY“»òW—L½ÌЧÍÌóMîƒï9=ûϹÍ÷«ï(`AÆ PÊÚÚ5Ðôçöíò·ß_f¯²a¿_¾ Òü÷Wh:@=¢Ú"ÎÐOÚúãÚ¨L²¨ÂÝÛÖŸi6¬]¥®T2¹µ‚ÓÛmÄj=]‘ŠÍ>ï/ Tî" •Eî—/Ê;,ײ¿¾¦/FyníΣšùJ®ðÞ¤óìÒ”ÜÝ8„Øa8–ÉÚp”Sñ:èïTOòâÕÂj¯œSåI„§ów0Ûº‚¨éFRu#Áà/x †¯Xøö®®Ç‘[¹þa;Øó³HN` ŽïM`Ä~¹v °z4½3Êj$YÒìÞ½ ÿ÷»[šÞ‘º‹TS\ÙnÀ€g¥žéÃÃC²ŠdU}ƒo@DÐX’Ø߯͆`-‚¯Œï_~úñÇm¹ºyÕ|æ¿®N~hÜüÝÕŸ{žG¾þ¹!íçWõ‚ŒœŸ5|YÑR ¾³’ËÏ~²zò¬òÊ$¬XzÝ ô›ßŤwØ¥•”›æÈOë ƒOÃyüOßóù cÁßh§¹¼åº¼õÚ7’¡‰¯˜UÚ1/“)Îû‘âÿ˜Ÿ$úÿªd_þ†0¾-VÅ-Žîí¬Ü´¤´ÿøÃ^IÕwHnc=4\þþÇ¿V6þ]ût÷o~øøêßžfs¯ÌWß6–þøì#[]ßòŸUÝõ·]qmÿÁ¢šEü?Ô—½ÐŽòÿÚí^|¿3©h®‡ãwΞÛÕܧ|Ý^ã7…Ÿø6¯—ÿ\ yZ~üîÇE±Ú<,·Õ?ç˧»gG®£þ§ïróÍÿÕrÿ°=B…?¬ÿéÉ_Ó£ˆi¶ˆý·èk<–ÛMØo~(ùÓüÙvþáYkõôÖi ÷Í­¡×Ùö…ì1"qŒH¼€ˆDå®™¬j÷)Ó{ßµzZ™Þk?T+ æwLÞú©hRÙ…“WÞٸƑ¾(üªæ,¿Ž÷iµíϹÑ<§Ïy-=úhaƒ C—UqC#Õü R¤QúБ{ZUlµÙr(,hÕ›k½º"²rAšªY¯ˆµÿˆù#Gz||íäfRÜ/ÑëÍçÍîÁU½Oð„KÄ|ò¥AçCçìQs¤ï#)|ðµâx$l$Hqaþ{zý'óßãØqý÷(d¦4úï£ÿ>úï£ÿ>úï£ÿ>úï¤ÿµÎÚV`ãè¿þû¥øïÁvüœþ³pâšY›Âr‡žˆpèG=_/ª¿79œh׊{ÝhÞï‘Ztø¾9ö¾\{‹Z_B^—8bÊwåÝëú¶#Nn›ÉÔD>úH ¤èh—sœÞ|“g‹iùÒ³ÖÞ³nüp¡5/:{ãØŠ¨Ý³oÈä$ú'_¿V꨿mn˜O}…†Œ¤ÍsBéKò·cÑ;©ÿDþv$;Rê\þv,2ÅøèoþöèoþöèoþöèoúÛ±ë¬ke»ýíÑßþìþöNÀ˜ë+ µ{N(}F›_;…Þ<ó£8»æLJ篧ô­Ÿºob‹_ªËÎÔE»?¢œ–ÌÒ/“Ý¡u …/éP¾›-Ñn®Ê÷Ôÿf2ÚÈ£<ÚÈ£<ÚÈ£ü‡¶‘ëER3£”£Se:wɬ•¿ÔùQo&õ ÷¥÷ûs"}ä½__ËöJ7)¶øß¶|\m3d6¯5Àq­§ £EPÒÞlÀÑï «¿(‚Þ©¸êwUšçDP^ÞvbÖõN Œ6c¡³ë îÿËz¹Zµ¶!×åj¹ÞÖ»Ée±žÏ| ⢶²æËåªå\Îh<:cjÑ }&¡z¡OÞá³£~Ro_%x_zï¹õÕ[»A£×É}Îôãx9v,›ÿK¼¸èÄËW¿TâÞD¿ÀÝÉvxñ¾O‡K£é§Q¢ïŠ7a›úýóãZgèþ<ÝóÏñîßÔù—èD}Zá“}_V¼Z‚W©]€ Ûað©Îvøã8Ÿi¢ÎÎøœWî Ùøs¨5H ™Í1ÃGàq‘=>E––;ÖîJ¿…P׆iøÚ\ï(ü$Å–X·¬ŸF¥­?B%m$¥ ÄøËÀm»çׄÝŽÇ1¸û§ó™Ïƒà}ðÍuóÙ!¡¼ŸP-ïÁÞ=Ç$D !딥-%9ñ9i3(C+ÁÑå74ÎM"ÓïØ ê(Ú”Ö¦¯vÑ®ÿ…c©'‚4 VšcŸjšh•ÅæGi&8N_G~U69&ëI”X= 0eÑ»£¡—ÜH$Z`F(N]øœÎaÇ8 À&ñ4ðëÓlúvã¯*õðIì±Y#• XÇ,(p§MÉäkqÍE€œr±M6džâÑÎàq.‹ ðœSü½’¤×q ý@ èó¦žÒhYKj¦ûC ‘EÚhk] ©÷‹ëC÷n.‰Ã6'@‚aŠÂî„Ö,q0HÜáíi§~é9¡–ô uÄ;yŽmJ|/K®ðˆHKe×5O·(ÁúÌÇG:¶œ³¾6(½qê”É ’ÁC$¼f"‡KG¦6E>s’Ïþ^Á¸ôÕd)SŸ2öœê4}Æ@²A÷Z9kļ³ÛªH'+ÇàŠÆ#Yä*vBvðŒšk®DO>ûÝsìE\²Zv‚ æg,÷ÕA‡e MºËŒáÉ·y’Lá –c Dàq©¦Ö‡rþ8}@‹ØcÞxSi†”úOùì?0>㸕’òšñ9eÒü}–h™C1x’ bµ^þ_9ÝÆÐêZ ©É{›Bø´“Étñ9Ûá˜Ì!p<\ÄÉãµmÊ}U}¿©ªY0‚8ǸFÂ:‘HŽ]iÐ$RÇL–.vþ†$¹ùãŸã±ÎYšä8$¯84‚eI-¤ˆ³‚Bª¥œ½²ì‘VWé¢ÎŶÿ< .óàÄE%A©Q¿04zmÌŸ* JÝjÂËlýœÊ—¥~£³€}B#³m uL‚2xŽžc€çà9xŽžT€§_?Q¦hÐÖü$JmL‚2&AùüIPja¢[&¹¥,Õ4L–Åiv­…9žLRxGU(§Ýá튶÷X?'ZwÙÿ{è¾™Z:àÆIŠ-Uë¦ñïÙ™àc—@iiz¯3ÕÏ©Vaà3n‰…-mFn‘Öç[Â$(Ü ]èsn1Îï5vŒýêåF‚s2¤4µ‡‰^ÿ¹ÉƱcºc”’ï¡ToÔœi!È8Œ{(ãʸ‡2{(ãʸ‡¼‡·Îºv¬Ú¸‡2î¡|þ=”(kÖŠT;CáSî rð?*(ð|…[¢Á¹ßYá–}Lÿ‘ç-Ü¢pt‰HéïIðÞhÒú9Á‚òêþ{Åqï”ìÜWè"ñðÈÚ[oA´ïyCÏõBÿi9JG0 ‹4bãÎÆ2©CÄ´8"Þ©mqDàÈÛU‹â±Ü¬Šé‹½3´ ¯åÖÿöÕ´²ýÿ[Ói€þnóI̵ì¿#\?'!öŽíçǬ8ÏÐíxDä½ÉéÞ=Øì ¸|~æX®*‚ï«ûéªâÜœ;°V´±;‡šä…·³šôÏи›ÉÝlSy'“iÛ¹›¼pÆ“pöt‘x,¿4áï7UßXJxMŸþ„•Ís}€2 ï qC…‡ 8BgY„ŽÇ]œðþñ´.«Þq¤ô”dý·%›ç¤¼é6o°øŒ¯D@ƒYÄ g!x.N|Õ“ÛbúÖ÷a”C[ âr澎&a02ÏÒk¸ŒàQ|pþœÙûÞxôûÓj1œ‘E«S*ÚH°’éá¹R2AÕY ûp<À‡eH= í Ùœ2¢Ÿ=üj"ä¹~Ž·Ââz0‡pŠ >®‚c,NSJ$„DÙO"k” ½&gÓdÍŠÚdYg#ððdeDx|ùAÅ$±ê‚”,$r){" c ‡x÷¨…ÚÝŽ¨sx:øgŒ ÁÉVÒ}ú´* 2ørÙ‘‹Îú,±´@­²é2Á¥Qh0t­‚’¾9ú$-â&‡ùÇB;¾>ßÔ7 Úý%±,áPJr4Âh‘æ-XÛºNÑWóŽÑê—¢yGûÑ\ŽÕ+Oô]_¢)¿C¶š‹ò±¾ ·)Ѿn~~9ÁQrFI°Àè6 ƒ“.[ÖàyYç\ôg­ŸSy¤ @0ÙgÐà2Ùêö„ÆúB­´ÒǨ¤à$·’ž#qíæé2ßfÇ9nnDà1)+b uݵÆ$q4jýt©9–¬H\!å<ÑV„•Ó RøK¥ì^l{SeÕxž#lð#ð|¡Ú¯Òô©ÈþùÁ-P.ÊÀ“®¤Ìž£‡¥ß©ª~”µìü0M¤;i‡o¬…!Í Ç.K .§å;\™6O·›éz¶ª—§O™v©4¡$®‰Àç—†Ò⢤ÑÕ(»b[ÔáõŸÒkY*=h1|Ž Qž_Zñ‹Aü ŽNzßÌÖï‘§Íôí–—ódJ0r°¡fƒ½¬9A»dE©ŠõçØº‹TZƒµ‚óüB¡.J “íËx‹l7à|H zB‹—&š•É$¡ÌpID!Î Ž³W®Ägz¹>~ô%Ñ*™4œK* ïù…aøey†›tYÍËÕzùnæCçðWŽÚm:•2, ·)¢Ÿ_Ö^–¡im¬¡YmÞ=«öÉL³æ«?}µ;¥ÙSö\ ª&¿¢Hhë,§àxðÂ’ù±„¶M˜ûïðàªÉ,S@]Wœ9É„0L¼Á˜9a©­=éD¼Te¸ƒG‹Á;Z»”ïu ÉÂ)Eõ(Æ€ÑÀut1«LC&´ÀU%€FÛ'Ð瘂êo÷¬v k% 
^•²Î‘.¼Ž©á{_)%­4WÆŠä9í=_· >e·ûøÖB ŸšàÓXÃ¥²$~c$;Ë‘JÒá ,ÂÆcÓ%e©x½›Ý—›íãÌ—ñ®²ˆu̽ÄH‚3f4Ù!˜–ÉvÍJ;¢6‡"@Ôàq:áZ'¥AOULÁ4¬\sÒ2¬J¹ÁvIM3"‡-GBj[ôØ ìßÀ”’;‡æiiH¦¦TM9†·@g±A#ðˆ´np'ƒÄD,¥LH £ë«“[›§ˆ6pŽ” øÀ“,ãg22ÑHìXJ´lœÐä‚'­žvìŸ,Õ`È=ESö|8‘ÖŠÜ÷ôÔI,žè»û€Z·Îq~.òáFàÖ9TÜù,Ç^«ªâÕ¼J¥$Ò"W8áé3Ú‰¢¸É!p< ³ ×˧m6Ì4£X4Rp²â->×›Já”õ!Éì ¤Í"‚`<*Y†àÙâͺØl×OSÜÔI%ap)ôR‚´i¸h%œSÀ¡° Ë8æßcŒ¦7G=î4vb ™Äq£L1Ôý"©¹µ"±ðYÐg‰!ŒÁ{fÕ½ÒvòI?RqLœi G›&Û¤NWêbemPF_<’:ax©Ë±«ާ§hdD^ަ‚\ÝMw_­ËyYlª{9³7M–êJ±5~› ¹â‚*újÇYq+齟€íkU&ëÓКi@lÇãs<áatû¤SÒÒ³)z.‡½ïOÅ‘ Mãq&Ý9ôâ?î¹ÉHxÕ~‡œ+E^£p†÷Ü ®ùÕçóëU“±¢®<9ƒ>48CC”9’wû÷X0füf¿JX²ïw’0†eÚ§© ±º„Wk‹2µE{*G[f—6N–·¿©¨ÙÉaÿnˆbœ Æ$5áœå8¤,Öx‚L¬@~%Ár‘#C®¯ˆåð€H•»®‹À¯^~pµ)*R9Aªö5ßÉ3)|NEÇÒ]b+´T9¤Ž êYàQràYo(•‚à´fœ17èèx«óMbBKA“ À²Ì Fi2U¡N' ©],ï:WRb3LqtP•!Bñ9#øÐSÞ”råÚeÀÑÀ]Ž@;Åù;UäðÁçX¢ÓÞ(:%A'`'²ú«‡ÏÓ]s¤ÜpÄ9j7ÅàqÉ‚\–¾m'ƒÄÊ)}ùa ƒÂüsÜ%:èM©Ùüa?,Qƒoyµ·”Ž²×¿¤¬‘\!^ ­ËÓy~ƒ Ù“cÈGà±"…£ÿI¯{ÂúI-%óæe£:AÄÀ@IFÀ5<ÃqL ‘ìIbÿe(m¥>%!ÚÊžÀ(¿>V•!ƒÑƒ'öŠwaž¨þ$Ú 80©-ð¹ë\²›XèQAgëõÏ帇…ïÁÙÂ1OtúøvæÓÍüùñfÏåÍóÑâÍQûoª€_Å!²÷­áJ§Èº G&t†í×(<'7Û§µõäô/bF «¬µ”Q€ÏA¼'uF½Å7¦åfž¾[zØ Í/“·ÅÚŸ–íРž4‰(ŽùUÍW_5ÿõz²Y•ÓÉô¡XÜ—›×M‡¼ž‹»Éªø0_w]è…Ђ<;õ­l™¶;ôþ¸ï/åf¶Fô;´Uïïúf›É;~ý_«»b[~üfônËê°÷æ‚þïúwn^©k× ?ùÎßÿ»yõëSñ}…¯¦kß;_-§««æ8ý_7…Ѐ̕o¦Su[jÍœ²þ~%6£,™›Þ–wo„.§%ÜÙ’ËR2q‹.»VEVê;Uàð]ÿ¾D§åæM1ß”¿u²ã¤bšfGt_"TYª¨á='%6p>Ù.уZ¯}ÂS”À}¹ý—ÉbÙt"¦_Ÿ<±]Èü9 ™vÌ#kÛV•,ª‰Çj}X/³´1þÍ×ofåüîú¯þæ÷³Íö‹ÅlþeóÀ×ÿä{ßÿêÿTMü±úô㔋²N—r#‘Úb6÷üÅ?7êk$PýÅ/¿`Ÿ2&î8€d_¾®æ?ìPþzòÓr[Ìop°½ž|»|\ÍK7ìõäo>u:›#Λíú©D¡xOóÿ¹»º¹näü*]  ñ¿ŠøÂÑn`'vÖ°¼ÈEì‹£ž¶¦£žt÷HÖ~¬¼À>ÙÏé™nÍôa‘çðPXÄQs¾*~¬*‹EÛNß#“¾i6W/Üýõ£m^¿ÿÅ|Eóü9íšëKgè§ß5›íëÛö€òb»¼^œÿ‰þsÐö˳öÏ_ß¿£ÕöòL’@Ò¿<³îå™Qô?0aã©Æ}w;oV$Æ×ôBD|ÿ±ãêï_–ßýñ»‹WÛíÝæâÕ«f>'AωPWÍ–öåׯh¢›móêÇoÞ|=#© ñÈHúg¯‰7‹Õæâ¿Ùlפñvöÿhu¼$b\>è<ÌëÝôv$¸ÌøîéO³]Ü]¼Øý,ü5í^ž}OpZ]ìþ¢ýu{;òÕÏ;¥ýüâ¬- ¦L¯5úXsÀúfK—Ÿƒ±úš4xG|üq¹y¿iÍ×˃‡Y·Zz¹[èÿ'ŒÞó)m©¼[aaü´nn6mJç'bz·LÂýþ±Y­.h+ù+dR¿•vñvNÿjñÛöB cÈl¢±^šÌÉ<®”ðË‚áºYˆùƒ`¼>èã{@¥ÇzdRûw¤Ü×#Üéòù¿ÿýÏ7m‹àÿ!B÷ÿËO¿¿ø×ûå*0óÅë®1TøÏ?=ö¤~ÝEÍô³vº~\¼#øþÔþ +sÿù}A|ý÷áOÙ¹Áï–¿.æŸæ«Å÷»Òס†¬çönEÁyøãÞÇoš`ø6/H/ÿqsûñf˜o¾}sÓÜm®n·íW·÷—¯ÝßY ÉžŽÿ[bË»«íUüçíåâ§ûžqŠyÓõá ÿ¹ï¤°?ÂRº ­ ¶«O{0ZëÌ[o`Cߘ„èÎV ìý¬¿œ]/ȹS$Lþ~ñ©-xÛ«eè7öélþàæ¾j—àÙeN}õÿÔ‹œ­Èª|õOæñ›N íÊ&‰>ý†]X¼7<þÙìüî¿ßßÁQä•%yeä¡ÉK‹'^y÷+?3añßdé7Iñü7}ÙyxjªŸº£^;Šõ¤á |Øüùq{B¡ë¿-o–›«qû“kçNÛ¿ÿïæØ>Êœ @!¥aê)Â8ðÙ§T“ï;üa`Yü(`ò GíwÂ1•q<)K¼¦Ø{ä÷üyBí N˜P Çë…õ0¨t;9Ó“…ÆM_‚‡§è{ÝÿCAletHV©: ãP!)Æ/IÇdðÚÚHÇãÆ?½œw–ï9UZ­$ ÝÐÅN£G“7´•ªÊü'ãr^y䜨¿ÑS´Q:aðÎÈP|ÆaõdПNFVÂŽ!&á±WÃ¥{×^P”<+T‰÷¢²´í•PXÏõméÆe7³+O[ «4?õ¡‘\©o´à—8é‘…ObšéIƒÇÙ‘Ïe_Ðj¿k¥åÞiÇi3p W  ‘i “¹ZiÎégNyÇ.Ýðo#í?’oÖ]Þû ¢ªÈA5}õFçFToìM)ê°Ù í¨8yŠhb¹Vñcmpl/u•‰NÆ£TÑ2kx<þœVÖe—ˆo•V4‰^±h‰ÖŽ©üÎÕtˆ^V_²ðø¥wÏôOñ($tÂ).ùHãtþõØ:¬LÁLW6Oî©pä³^|§µTÊyæ¶V;N*_¢Â.“~9}…@3®ì#r½oa¸x’†|Š¢ýxsx &×=ç. 
T®rð ”,pï߀»øÞËhÚ`9.[`B?¡Òo å`hW#a”ƒ ä‹ôÏ `<ÝbÐ*AžŸ‹hœ´¢hüp†¢mû )³Â ‹<à1BjÍãѹ­§%ZŸ¨t·^Î[½Å7áV;Þ;g§dƒÞÁ¬$J #Tú ç>Oˆ©€ÇCw3~¨±WÏÔßyZD°Ä6V}ˆÎ—Hýf$¢ÇpvÏõªÂ¯E ½׋©'K>æú™úžü9¨1žRr´ßEÔŠKO†‹:Ëñ1ü$><#x¼ÞVðÖŽ6ÉU?ò%³¸Þâ;MGÍ%bq’³_²QôH~’ãóÙ¨0*˜súޱÎhäñ-&¸øò˜kèUrPj<ÓZ °|äÆI5òTo k!¼£'Àh¨Ô0ýì‡ïã“&k-×`|W áé4·‹ ã²wØõK«‰è*/…Š9Âw¬— ð˜r-ÆžÓ¨BãÉ+òø ¼eV¸åm¡X{±1Lm7 jTô… É+iǃmA¯êöQ2@|Ãëµ1DTÏ­§½Ï‡ 5[øÆÕ8û ï ƒ‚ǃ®HÕNti€d´‡4yÐ:(Ø,°œ=òáÁ"^±ЬØóVdícÀƒeª¶r”ÏhzD)c÷H4NU¤‚g{1d÷isÂÕ5ïô¥SæžÆI=²b'­ðƒ>EÛ2û|yüù$s`ÅC L;•JýrÖÞw<[ï5ww«O‰.Îv·°ö7ÄiL¸}s»=k>4ËU¸µÕ‡Ü¥ä“Â8‘ÖŒÙêúe¡ë´@þ£ k0Ê+0ÒñÖÍ+çt±ös«³¹[#ªQúIÄÒ.ö\P KÆ‚ñ‡Ý8 eŠKÄýŸgÚÿuã¦?fï¾CþMÀ‘戶§-ÚØt~|EàxN(!˜Ænݸƒsî˜EñEö~Ôj®©y7Ψ ráÖ SUÚ]0íøìÀ £ ¼5Ž6n¬ûeNYHùèô÷—ºï€ÒL¢û÷èû+á.û²Ek8ì¹Ô  {q¥'Ñ”Llf¯ ‚ yÁCU^O?¿QÒIV®ü9åꨙ Zº0€iœQnüÕ”aŒ (½QÞð(íô·ßqZx´<'m©ª¢ÝE‹5‹Þƒ’þò]Kžƒ'71½_ëûø1_«¸ø¶ŒvdV(£8ovn*×[)¤æ %‹%»çÈùh¤rÕ/goÞ/;^Ýí.#…™~¶ÛìÚø„*°Ïÿ\.7m_ž³ùa[£³]ÓŸÑè¡‚)ÌÁƒ¶àÃ_óÍrv¹^†HÛ¶ÂsLsá=™ÓGÎkÅá¬Ã7tÚSñ[a«L{2™[¯ñØ€kú>.6O.µÞÞ_Îöcö6§ù¸™-Þn4èn¬¥b•NdÈä¢pdzyö¤§ÙhÙµ<-vi¨Ä®¿Ý¯3ÒöûÞY‘Å«ë.C° iêslú:ƒ<lî®ëþ<›¼˜$¢Æj;\ÙÊñ/UÆTáŸ7Ä?™rÚˆÅÚU +}p‚HVq«aœ¨pz¾²e,aœ•o··êÛýÿVo’Ó›²ZH‘@:+ª™Ý $:Ð꺞].ÂCÝòã ¡y+ÁO€2¶žiI’¨ŒAÉQ“UÖ„BÉs­Ã]ªª—?$#¾Ëö–µZv­€‚_ÓÎ9(Ù {Q'›V«Øú['<:‹È4¸¹buË%§×ûpYÝñxaj,Oð”[.»‚–˜ã'U€¡žX²u#4Nc9ÔÄm\…#¦<¹áÜ)ë³fL¥fïýiÇ zš œVàIM£•¦XçƒÝ¡ÙNÁýdzþ¢Õ¼)ŀȹúSL%ÂääQ…<ÎH4>ß<§Oµ³uxX¦ì3oúò8 C‰.ˆ?)"8a‹=†”¼öfûΖ¹Ž˜#ŠÄ<32 SÓÇ)}ZôQ¾È»!iž¬X†—;_7w³æær¶ UÕÛ]Ûïý¢Ž9 C )ŒÌ·.åà%M²xVU!M:œ„4nW÷׋f»mæWáºû3c1Â@a¤CŸœ,€§EÔ’e³{ú™¾}1ªx;UxàS„<)¢€€bDY/6Ë¿‘V—7¿®›Ív}ß¾TÿT× J‘T{’ zr‚¨:y°Àµ ÜáVÅ ²‹ ç«f³y¦iÉÒ#yú=€R‡’#¼ÞÀ&u[‘\r X«\WÎÍ\7Ë›YçËuË&S±¢+`-8ƒ à@ŠPÇ}¤ãÁ²8ôÙêe³¦^e î„ÈœCi.Gp3t“õ x|vu<£Þ³û¨Z6mêµp*jzqÕ`Œƒ§?U†Jù o…ó˜‚Ç—þ‡8}{¨^6-š ×”²qœƒiàzŸ"G¬h2kK>u(lBÔ£!z$H‘œ¥˜þ`Ò$‹WÅu¨Ps•„ÍÈ^‚GÛCôÂ’Z+ÁÂ"ø™Ñâí*²ðBiU¡ð%OnÚþÖ(½ jÿOZ]#;÷:<“€]¥‰=” :ôqM ’ÙRdÁµ€Éø¨Œd—²ZÀû˜¼›ßÞlÉÛ®ÚZ°Æ^¿ùöÍî–HVÙ§ãÉ~Ù¸¿Êk³|Pñ^Ãýµ>ÚrÆÖxÐ|u. öýÆ6ýµr’’þ2¬*#¹<´%bzˇŠV ©1Ý6ôav¼YµR—Ùoï”uƒßf£°R†>} JÚˆr¯–fe†¶Æ•§<Ù­€Ž©î°L3w–rÀ™ä®>¨[ôäVŸÖ4»Üi~86OÛ…÷¤û î×µóà‹±“wÜÓ S…TþÄl‡‡ –¸YØýhæFI†HNŠ) —ÉRƒR‘B­/B©ìCÊP™ÙhgAc”VÔf‰R…PæÄ•›>Œdgi:¼2ÓÌ-…ÌPàŒ9lRÀ‰‘mÉ×ä{ÕÝ%ïµ+¶#j¯‡ ®Aæ¤èÂ|Õ!×^¯ÚÚÅj1ûÓvL)Î`ò%ÙIŨA$´pZDrºÌ»&Â㸠>M-MZ¡:-Z!Œ Õûû·‹Y÷v1¿1iî·Wá¢ú¼Í¢ïjÔÚpŸ”ô 5+•Ú Œy|ú1¦­ã¤’ñdgtz³¡—7›õb~»¾ÜœïtØ› 5‚ñKR{iù1Þf?jñhœ!N"º<(Fl|øØ~t«G(£Gò à‹¥ù¿± d®ŸYx\)K±ÓbÊ™‰LÄ¡•×ʲÊá‘óXsÞC”ær:pÄL ¢ ß$…0Þ¤à)qÓÝÁ<\Æ2¾sH–TÀ†4Ndïm§ô»é¸¥Ã 3'w¦ûß–£¹>®<UžÒšŒƒ6œ}¡q";R-ÃKú´¶ÖIä!êƒ> 1óaYó¡Úvñ¹ÍŸ /‚ˆb/E Ì¥B6i¡¢-žË-Õö(5L©~7ÎÀ¸Š»4§˜…ÞxÍ9 çúÓ›¦\7ó«åÍbO¼gj!•4Žè±Â üb%dŒâ%#¦g=}Ç[çðxüÀŽÒúYi Ž«Ô{íÈ6kNª_¥é‡ò{i6ÍõÝjѾ¤]1ŒÓ?™…G‹bM¯ºÙÝœïôÖÛÈF÷p®´QÚ É‘V…BA]Í d±6G†ŒÈÀ#¥¬e.›ÅuÛ¨ÐÄÕ‰´Û¦ÜqðQûü6™Ó3:¾ª‚[ÈÁ£]‘]Ï®ì¼Säg¯/2¤&œ†¦Õ0XiœUÍ$S7¿Õá<¹EyÑ•´mH}ëõýÍvy½xXY;­v¼_·mGŸ®.@×0†» Bs¶¢F…²ÌÞx8“Á:+L:dàQ¢ø«’Oøêéf ùÛmƒ1;NT‘ß<­½ñ%]ÄdÄ&‘Œ$—¦Y‘Œözz®Ðw€bl£x<lIÓÑÅ]yº‡ì¡n Œg^wmÇEòT=t©KütIh^*°Mø”Ò xœ(þŠÑSÅÆlµQÍZk(fVÀåiœÊ>è¬Møa ‚1ÉÁƒ²”1 ÝÐW‹mžbãÁ½ÅÐ/X+M㜕ÅjÑ&!|º( +Ī9x´(Ð5¢Gu³åuónÇ’}½¶Uñ£§5‚4lDB㬷ÅÌÇÏĹ û×T ›I†\B'#6UêÇ2ðd ÷>Q•Ú‰%à¹ÝÛ­ailjìÿšdþuWÓÛX®cÿJЫÙ$%‰”DÕö½7ÀlÌl{árÜ]A'qÆq= æ×u¤œ“—¶$ç- H%´ïá¹EQiRŇî¦aþUÂûˆÃk%ȼÖkvˆÊ"i’³¯-†Øø|(Ž0 žì:DgIäÌ×.}è¢S0²œýêHëµ(Ò³Ô„G¨thXaNAÚå×—³ )åeB½ºV4jù‡Ø#j˜cÀóAÆþ·ýlx µ¬‰tˆÄ÷•fbÈ"§¡6Z¢’4ÇUËÁQ“ud{ã5èàÓ`Á“O,†vÈ»þ:—pùíB-”I¤à´R%œ¦ŠrXš–MjhÍó•ÈýïÚðÄØ¶Ô¹)Ú"‘V€”Éiao=˜‰§†‹Í,Ú€:ºÜß,x|c0p)ÇÛµô­Äi• ]ãRç--y¾%° žûmNLÔ‘:^Ü,¼üU âT`¶ 4³_ ø!‚õXÃÇ7=Y†Éá8²Mb,5(䈷 æï|¬Tä xŠkÒñâ ×Ü ÁW×7ÛéœÉŒSÜÊ”+àYΙÔŽ²^ƒ> ð<;î@ÔCe—›o‹å.?Q)GÝÕŽ‘²še9ἩifhiÍóÑCÿ†'6<ÖþŠê—.×›ÕúñòáÇííåc½8»Ý[rœ¢Oàëy¥Xû$—cî¹Û`7ÜTK3J¯bÎ H(ñs¼¹à <8ròqÚ±ù±¹H•CìDrÖÚ¢W¹èе)À{’SŽ)æU¼ÅÐ# œ8€Fš‡šv“˜áSŸ â=ÇrÆ>dï´Ú;“ÚÏ,Œ²lƒ5QÕÝR,x ŸXèÄØÁ >µ^úITtȱð©õNŽ£O®èðpïb´Tö$+MFLèFXÆŒgá!ë•Лõæfûsy»x|Üšµ7öå’ÿt³\Ôa³ò¦jû!Œ¤š9¯• 5íÌÒÑ#Ö^ ;Ñ?øÇ_^VJ'"å-ƒBŠ/A3àÚìÞ¿´àùj„4`KÜ‚çÈ*wëû›7ÞŽcV2råÀeÈAÑd’‹©‡o8Ò´'D’Pö:rÚ;@'%E Jê—yR‚¨>”)é>Aíž“’ói¤Óò¬“?¼{I)Éï‹ÃŒ©¶•ó‘eòF šùªÄþÓ“ 
OŒ'æPì'ëƒ_"t€d@#,>Žx¥óñc›ÂÝÍ…-/óï^ïFÝLDe‘(_·1)å{&9éÄÌø‰.§b­…ƒŽÕÓoÍÏ P;sêx‚õÎæÊ ò¶sáÉ3¾¯¼ø7ˆsÉ{ºu’^s¿_ÔŒï ðâááö§É ¿^0âÉÃWŽ/–?6Ž9ù[~ÜׯîÛõÅ÷Åýuý‰7«ÿýÁ_{ño®¶ûÏx¼šþóê’ûŽ›àx‰ÉS«òªXr£¶â'¾+d„.Ì‚±U©¾×$NÿÛc²Jï¸$™ËZŒƒCÒ°$—›•[ CžXðäÒjµô—{!¥"rX=5e­Ïò$²k­ { ê„%Q=v¢,‰,ÅÒßÚ,x¬Á°X]•§§Ç›©Æ®—ß舳jïô–õ_{ L#^©On—~û²¸¾»¹¿\,ÿª^7ˆ”a ’x)^ˆ,ÚåQza,æžä }VuGÝŸ‹í4«‚Ì›{zÕ«°œ#×!OÖ ¬ï¿CoÛÕÎ|“x¼Ú1ú¶ÊŠTF¨M0¨h6ÊrBe0{MÕÁØ úþf`Á]c3xšû1)'ü"a¤ì¢–H`9 Í­`ô2 bÀÍN_¸oñDâ~8-'øä" –Te9Ì-ë\öDû7{¶á±Þ÷Ÿ¨û"P)çP2ðŒXrÒ"Í ®¤Ôt¿t vÆ—û›OHÍNcÍàQNŸdò9AÍC±\Š¡Ý¹™‘À³Ÿu´ž:W–ÿ–‡ÂçcÁƒÆùåùMü¹Yÿx}P{—Éaá/ïßë¯?Ö7d_^ aÖ3úÚ†.suöO…ÜF1â¡ökÌ_ÇžéÜ®·Ó¡K ²;'ò¥me9¢ÒaÅ9zÁ;<±Œ8¾õžQÙµžK}ÔæR–s0æ˜NOh@xbÁSbOÏð=^>/ÿŸy•Óý…xžs)iÛÉeªÜÕCŒPD%B÷VbF<1Ž÷¯¹ "·¾Ö–Ï(_ãÞÉ!Å3øNÊ$çûŠ=ÈúzG'Èt²@HÁivîë)é<Úãï~.Ј§ø3øgZQ¤5𼘉!+j„©µÍ9\Fk=j1ÜþæaÁƒã‡õtOC:,_QQHÅ'93ÉùÒ%?¸0 ưà3ÄϬ&‘U¨Ë§Œr Ir8GtÑ\ ¡bz;ã°à!ç)êwß®;J³L)D, …Fõ=ŽŒ-úè€Ý›¦ñø3øŒWÔ’H-†RœóQ³p–K]²âçÑEتjg&<Ç{·k¾"³;•5(ÞiÚÔ¾Þù ޤ›:‚R¤äSÎ!ÍÀƒå|>å Ë(§£ÏÉ— zJ–‹‡æüùß/þs½½ùãçîîRUl÷ç¯Ûå=_6c/îWÛú·‹å÷Å=kvñ÷ëvÍ?L/ä·ÿÙü`äÛõÅoÿ¾¸}\ýö1n^:Du‰Àr9ÒÝ¢ñuÌW‹òiÁC]Wb/,^~ûù¬¢œ-‚Çà5÷Îr¾Mýºs¡Ï4À xˆFø¿I•ó¢ \-7&7ÝÉù<&šê«E€^‚§o¾¦&¼^6ê7›e-ý51*'HkG L˜´É½6%±¯«è¯B)#Œb6t=.»|0Ò2+§GëyÍâØxMêÙ8&ÍÛ_/t™lg$<Þu÷¯˜”Ó¤»Ã°I.m°“‹n€Çè=8^`ÁÓ£ÐàáõŠQ9-Z·ÁkmÅrÒ8ÑGˆ¦ žÔuǘc²_gþPN„Õ½>’¯ìä¼ë{B­ìi} ²?üšM9ÍYÀSZ8Ìr˜Æä&:Á~ÀÂÓ‚'”®žà¥4ÉD£œ¿,uâJŸÑdìë :áF°—cÁƒiˆ/xEg”2z~*DXäža“œêšµu½ðÓ¬RØõèštÇÊôÐÚs£· ÚðX¯½-75Äáw¸å?ÜÖÛHÓ{ÜHu²¤¦Y9d*_OÝÉ…“ž>tïþhÄ“ÓxcÈ"›¹øÚÁŠ‚>G Î` Íà—0À3Xð@› ‡ÇJ ©]IèT%b0·ÿŒZ@ÿÃñ6<ÚHCª †+8¨ö$÷ÙÚÉyJç3fZXZ™† Oò­ ª¼jÌtqNa:‡”zU³”JiVFðs©–]÷Þ£F

—y™éšY”ËïäàØ‚Ÿ_5ìŸ\±á ¥K0ûÞS=¹Uˇ°ô‰a»¢pˆ§oèú— s‰ÎåZÑDÅ^J«jÕãÁ×ñ‰öŽF9Ù‰WTx…T‚‹NÕƒýôž6)’G„«<ä†MQ!1å@y†U§d>Òó)Pw¯^hÓ=I$™Ëè€bÔ'¸èÅá“D3ð€#¢ž˜Î9Id…WÌõLxTõÀÂY'‰vŠ”8Â@fãÉ.µê~ÑfÍF2ÓÉ%“3jš%ͼþeTK¡ 0¢ä²÷.¦x€>—))È”}&u"MIh1p&#RUË®Ú~ºFÚ"e‹ØöÐ݈ܙ/Mbã™<*ËË À+~¯ÎÔµ q£¸x8ð4Äæãi×÷#*?øÝħ²ØÌ‰B=5¥âOÙS³X¸©$§+P'±93ê3YI™üŒ‡Y™ðDjY<ÜìšJ;²YÊÄpj¤£f¹ÐäLR#Ô¥8LQG]ˆ5y^ýs,¬âa¦Ò°W¯ ZJsÑ#%Š0òÕ7C-”lùêçã)Ô.17K%·EŇœº"öÚØ01× |Ý6Ó‰/>X2S žCt§ã .Œ6%‡UÏìóÈ {h¸!´?Ä#ð”¶Úè<ðû‰W9'äY›Ô‚óÅœ¶ÿœŠìåXð$hš\3ò*ç›ØÝa­¨Iš¾—˜K¥M‘û¼<„gô $ŸÔ Pï\CÒþáñêåÇ«¿èqgvuëHbÔC¨uç0(°«Úl¸œIß¿¶‘ õ±Êh¾:@£—i¤èy½ç@ƒM˜´¶„1¸û‡ú&N|’Ègä5K$ˆÚªŒå—|Ê€pÖ€'»ã"ŽvyÃÞ׫ïÀþã-Xïàuruzí!ü~Q£…ÍÅæ ëâááöçÉ`¿^üm÷é‹—‹Ÿ¾¸y¼¸_o/ÿ\ÜÜ.¾Ý®lÚþÇ{m÷º¥K©ä¬¦’--4Â>½G†„:ï°Uîù5~ÿñíñj·T}» d2"ˆõF†›B¥Y® pLLfÑC˜·iFªÕ%Ì%fLÚCYNŠufo`Ü-ø3Û‡ÛÅ´˜QNMÐQÁþc€§XËu|˜ß{C(tÕƒÚH‹1XΛ÷"Æâ #6,x¬ ‡S-{œ}Ùûy?¸Í »ŠZÛa£æ›38 Ø,I4yÁ x ëfÃ^Žä•ã{Í—œ„ÍÓ1pÚlÆrÞûã“8#7`LçZ Ì3ðä/±Ïr”y>×¢>ª‰ÇS ¹ÆK¡ÿë¶à)pÂøÝ‹ïžOLq°s½Ú‘&g-‰¢À3’P(:9gA]’·àÉîô”üÁ!g/ {6ç´laa´7Œ?Np8 ËjÁÛ¡˜åÿä´b!^\úHšQ Õ¡ " XHYðXo }¸9~ˆÃí¦°ëËåbbQJ!â•C1“r2e’ ¡ÉžþpØ¥{ÌmÂîôT¸6|.å3™ÉZ± ƒÒáb’ Åž?tèÓdž'†›lsÜ(:‘D_'%䨀f9gä>ja ÙîÕ[ð„pâ·%á8==:thpl¤>[ÿ*b6<%·ª0k<‘H^.È´ù‡å¼o×jr0ò\ú›€OqÍcø»?'"ÈtÖiºhuð&9ÎãƒùøÄæ`ÀƒÐ.÷¦óˆ"5·9`+ îzæ¡an ðà\0á)­ @r¬‹ÛÕfûR/!c”¹¤àcÕ—Õ#æ.Ÿ¼÷nÖÞqQöŽm  ÀÓÜÏË ìöæÕòçòvõÒôðŸöFv9ØCð%yPcd–£D'º¤þÊÔù ”&ßÉõ7”úvR¹ÌÀ“Û4;b92ä±[€RÒlŸå²oÊô×)Ô‚¥!©:…ëÝs˜då†ñN.7[ðH,ßÞ½j¸r¸!ää|Ò³ڹ©´[÷4U `ñjxQå`€AÔç0¥{ÏNÎêGŽâs¹Ø.n×¾æT!#;d«Ð|3Ë‘#Œâ(%‚KµƒµªDp8 ‰5ÂpAõÆUÎzNìp(© ´×e9¢LuÛ-guÎNu ´+—×Z=§ê~¯ ¨×L«×/ã™Í…¥ÙÖT[Ä®UÇ|h—¦[~¿¹Ÿ*Ø}ÙûùÕ öòŽE¢Z|T;É3ÉA †iºÈ‡ tä± pG‰Ø$ Õx·Ê¥î&ð«_ù4†ƒ¼yÁN=ºBêüÄrP\w+èã€W žÚågäu¹W*ùŵyI™yèxL ­ót a¶æ¬ªà^Goi–òê,eyèáz…Âïÿý×ÍÎ2ž.Ôû¬ðûí®Ý0cÑz‰'Å_—xÖÿä‡ß\_¯îO›ì][ðku5fŠ}Xv²««_d¾½f‚Ê&R=2ɸ’¶¬¦=Q³ 2g€`ÀäjÁƒ±©1¬¯Ÿ(”†X’™ä¡ã1jÈkDZvþa0ôh€ð´+Ëk”o+‰Ä,“Hˆ%ÅLhBhèÆ¡Æþ'àmx̵ª%Ÿ–§"•$RYþ…”ˆw’ æ~jŸ{0 XðäÐÞ Þ­ÆÞ°YD6CÆcYA2Ä\:XÂ0ø ƒ6+¹ñ: *°Ì&ñ|_[ÌjèÙ£QnWÿv<|¡qXCc0à±V!;Ìæöûâ~= +‘J/R 1•˜]Ñæ6–Ë  £±—ú›´¨†ü:KƒÌV*ÙEŸ4#e¹X¨IÅ‘Nð Ó<ִЇlq¬´¹¬Ÿ¼]/®ßr2wäyÙ¡]Њ»ná¹Å«¶ô¿»nÃsrŠ¢¾œq:º`áuà¹N-¤ÓZ”¹MùcÞi<’ï1iûè½åpxÞL¯¯Æ~®ÛfåFpƒNEQbµ{ÍÙÖÂäå—dÓxŒ.g\¢Hà 2¥¿Î 7¦ÕŒ›ÀãTnã^Ì}8l—þ U„rذ cI*•ÖÙ \U—'šÆÃyY#ë¨rœ9þAн—|çs¡:f*´Á)<–•0ò¥_¶:¹« M\?Á×V2ŠWp¡LS–5Ju«ap×Ïú?ŠÅk¹1¬ŠÁ³[[Ãà)<å î…sQáBPƒCÔ•‘¼œ¥ó’:^ÞÄ <’±’&v,.œåRø J(PË”5qFRc*˜8Çš&ž¶+ÿE'ŸyˆÔUO Р$ÙuÄtRË"v.„«Dc§ðÈ Ææqõ­’ŒSÓ8ûo_¬Ž[~¿jU¿E¥S‚¡SRÛðB:ÁYIKçe-.=‰G²ÒfŽÏ6)ëÏE[Ò#1/[ ó²ÚfNàqÛÌÓÙt1›­Öõ`|ºI þÜŒ$H1+0°JDåLc£H£Bþ¶ç‘8ˆ64Oä¾å\Yaî+…GñrÆÒÅæ“Ì©ŸÑÂòJÜ‘âÓÅö g°t?VÀê’ØX×¥+ß×îÞ£$öûzðlß›–ÉÌ2.åNƒ!îò é¬+nfŠÕïJSÖ’¬‚•?©Ùñh+‰kAºt2ñܞU¢ŠŠX^±ÀZÊà˜Î—átn&jŽ•‘$®ðêÒ•ßí™Æ#]5Ó븈þœ–   ýQ]ÏôÙ¨]ùÅÊ4žÔÅÊ'ÃÝß÷¾y˜‰JÇýõAÜj •cÀtŽÐKuXU…žÂ£ua3Û¸t¤ó¢(Tü-9LÞó±VؘƓ:¥²l‡W‹ñêÚßõÓþ¾Â°ækÆÓÕrpt»»˜‹ &„ +‰ãš!]únƒ\„BHg MˆýÝò&õïÁQ–¶=x”*aRˆÍ7ÙS†ƒÅœŠWBºHx™½LJ¥±!§KW¾Íõïñ·þrÑC1 .ŸIïít€¸hÜ߉- Pþ(‰ÉhÖR”¦ü 'iØÃ âÒ ¦€1«)TÁ¤P™ œÎª±>4ŽfUNV03òXÆh-Ea3˨tÒ_nî#ª¨>]ÖÎÔn¬}4Í*Ë’ï±Îqæh#!¯™ïíwÅ^QT8ÃÁ⊪¬ g2ùä3‘Æödä3q /kbNX­%ûƒÆ±eMœÔUèn%ð(¦‹˜8¨f¢ªa=# v÷¨ºÏ§Ûˆ=ü<ótZ™÷ƒw«fáãIÜXv=?9øÜ.–!ÊÿÝlÆMÄ—õGgëŽËy; /šé§vy¼ÖýxÐLGƒysíËm£Wa+$M¯6jîzãÛv9^ ý m0òm(ŒñrðN™šUûõÕbx1^µÃÕÕ¢=?BèïþæüHž‚>eøä‡Ëæ~ö÷«æút<;úkWg³áüdÑNÚfÙþÓò¢AfT®ý8Ê­ò÷ †å ®EÛ27üÐŽ>rÕ[=²-ˆó÷{mJ6¶BdÛh‡ïúól1lÏ?6“eûÇ6u´6hzZ½ýØ›4¨R§@£â5i1ƒ“Áj6@7\ø?èŸÚÕ?¦³µ±ÌüýÊ »Ìj‰Cód›‘êƒ[„ Å*t<½X̦ãÿÚ(«ø=Ë—ÇídtúÝb1[¼/W/¦ãÉ7ë/ÿä­ïÿôo!‹ïÂÓ¯ÿÒN½É½5JÛŒ'ÞÈ/þaí}kßøÍ öû1>¨°oŽC½†…ãÁϳU39Gůg—óI‹åâœÞ¶èÃñ9ÏW‹«Å_iƒùn=éûfyq~¤ÿöËÕ¼þ0ü«|‰v¾ïvÍåHK|ú¦Y®~ZÌ>-pÄr¾_¶§ß"à ¯öñ üþêê–¶ã`†À”>HŽÿi|¨Ž›tofÃf‚Ùx…o@"ô÷·¯~}^ÿþåí›ó£‹Õj¾]¬žâßf£öç+ß £„y‡Ïñgÿã‡fáƒvX´`ø¢4ŸŒ‡ãÕäúÎÕºêm[ÀŸù²GÀmΦݵ³Œ½\¶Ø¸c‡Ûûöw”Í·¶øïlq=Þ4s/CŒºîÔËÿ§­È`‚µÊË?ÝVßw2„’9ºmôñÖÝ⻊ç¦}–ëv÷/WS¬p8¶Ê€­²°ØBc+Í´Ê민W…Å¿Iá7{üMÏk‡‡UõÃæh«ƒS}:ŠX«ÇÃìºþy<//öŸøsÞì”1ý?ÿ½|jåN[ + è¿Ûõé„ÞØIì†rÆùûAh‡‹õ`¯™Ï'×a´úè…ms¹{6œŒÌ¿®¸ºhÝJ±ï¥¯;çø-W¡2ñvtë‘ÿé¢ë¡ã×^`Ï}óËÓð˽ëšQ´ñÓP§ˆ!˜OÇ”Se&Èqn¦ÖÅgìÓx ”›°qÕ´ß2¨%¦Ë>‹›„)üVИ‚‰ 
ÆÅ÷øÛhU•q9æÞ¡HÍùË/PF@:¦!ùÜÈsPJåM›Â¢”i9‹‹æã-IÉ-ɪ˜ióQJ¡+˜6G±ì¦ ŠEv(v µs ¢‹j!“‚å·k^DÇK·³I< ´(cTWÌJä2¾NÐ¥3Éò!:˜FtÅCa„÷6œ*Þ¾®Óåܤtÿ¬#qѬ1ûnŽ‚´F —Ñ®©”Öb ‘4¥Ý~¦0£i­Á:XEó`©.fZͯ˜á`ÐR˜”*gÚl”¼øÖÂD™±WüèWQÝ Ãžbñ%Ñ.]zXöÌ Re Z~ßJ÷%°V=x´-iàÈN§ …õ»Žâœ!ɺ§!TKS_“ dñœÄÃsni¸v‘›¸h¾è")i]Å„.cݼ”’ë ¦MઘimT4έPÎǵJ—3-M©±h3’ÒªâS‰<"5ÈE¸£ïæ&™Qûq< Ë­Ý2·ç––§7;jîÍ*1]TLᬳ â÷A„tÒoY8 zÎeyWHá¼¶+SbÿÑQ—ºtBðê®^–ŸIãIíß“ìöÖè DERNö¾ã7q…t"ýûTš»ò¦KáI z‰GEÒÌæg½ (L§Ëgº|TªB:…Gë½ï„ÝrzøÌß´9†“NBÄä8’Ä©Ý.´lÿË`«+Y¡û•£ ŠÉ%! Q`%ã$°ÙeËÿ3k^ÃäýyR÷Ÿ¼Hw›„[žßݱ Bŕլ#Wã"ÇÀ‘•ò‹i¡†9J3A:º\?«ƒäˈ`5$ƒmŽnäì«)šú `ŒÌƒIïs`&ÔÖ&ð¤Fp˧i|BÍí0_¨KÇ“Ç*˜ Qc_@ Ò;uFoOäÝ—-¨%(µ %zÐ¥oÄ«Šg˜®bL#4·}x €1%¡–?}Ž‘t–3UÀ˜ùð´­aÌþ<©WMÍã™ß01i?·“n†åª ¡pïH]wj÷d>i¦ím%fue|‰.Ô %¦ÉQþ>²U?£#$ðÈĶûãdöe9¼h/›Mí.gÓ1Š…)O6fò¥&dó153$¦–É6N5Zà®r˜7Ö{1„vÎp%™&YJÙ?'¬ªbèþ<ÚÔ®ÐãSeÆJnY æ,1–¶z…ž Þð £ô! VèñÙ/ã(ÉÉéHc­‘¼d…žÓª½ðžÔrž\G*Fh'üÈŽSû‘Œóç¶JWèa¥¾¨á??-šùEˆ‰fáNW?¾›ºŒ `ï—ªÂXytwÝ™ª¹Z]øà<]褻é9·øÑÕòä7단eÄØ ?Á"!ɱŒS2ycÿ!P+&j˜¾?@NÓ?¥]|e“ˆë»t‘ûv²øN°†ƒ#—ê­Î\¡Œ§ðØ= =óâÝÕÇO‰OagÇ%ÜP;¦0pzOKפ•5: )<»Tç· è;Õñ’p«©~,¦c;•ñC·Îw¤ð¤î)ë¥åSZB°A–ÜçÏ0%Ç*9fWÅì½y8Ëcöuh|Àe™ÑÔÇj°&“ÝkB»‡x’xÜN{9Ö£é>Õç–ç^Wˆʬã‚kMžÈÆt`ôn{9-#ÜÕpÇ¥Än ¡yDúÅ0ýt=ëŠÝíÇAF dĪ ÇC@bc— ùCYn%jŒð=ÎhÁ{ðÈlæXÍný$(ßêà˜ñW7;JQÇ”2<›' Ì´*;ã¾±b~ÒŒF›‘NÖ_lÙŲBóï©9©áMö°ÚîÛ‘ÏÍd"!¢SRi $´‹\v˜vö°*µb²†éûó€ÎµÈûHLJ‚’<®¤H¤ˆàù!„|Ç k£—_©MãI½êfû2g?%¡¤1ÂOÙ“äF+–íøP&t«±ããht#úÅ0ÒD7#í¥åƒ^¤ñ¤^¥ö°Ýá¦|ªZs©ÃÍ·ÔÊ—CAKSÜž½ç †ò›Óxv1ýÍw‚–ñV»’#ŽùtB€ËÓ÷xx[¡HáqÙö™m“sËó +'tÅ~¿0Ìù°Rˆl’ç͈s5¤7‚lÉvÑ5ÞâKâ¾Zª³‚éÌ¡S2#ÖAyIàq©¡sî­‰Þ¯k7Ä»9ÐÔÕ¿2®ž³Ü ¦4Eël$n#}cV}\^>\N"ÍÙe|ª©UQíI'X•¿o²ö‹Â %ÊÚóHEféx2Ä£%Ô„„NøËˆ‰ì¸c%ú…E™#K79ÍÞŸG±ý7bô¬$M\D°B;¦ ±+CnAÌE-•bESKY¡jÇ÷(·ºìÁcm™‘ÀƒnQPÑÆUÄ~§SŒQs ›-é õû‹bË^ulþÉóO)/™Öܬ 7W«ÙrØtêøìvpö!Æ£Q;ݺüõi<©aå·ŸÏ_Ëx§âòôægü†‡'ö… :’;n¢Ö”d,yöþ3ak4 <É·‘oÕô²^Œ§moM‰îˆuþx‘¢û ó™+î3¡e®tʱ¥ùNÇÍy!âý RJÐä:´!…Ûgd\S± 3¨)<©BŽ2×ÅO<£TŒw@´à8àäšêíùÒÛspü<تÂìh Oê1Û-CÎ1!¦¶¸¥áµ™ÌÏBo¡Š+ôçQ<߆-êÉø‚¤Ö~½°ÔèÓE$ÆÌÕpyc'ðìè‰Ca …G¡§ü_ê®f»­G¿J^À ’ mŸÞÏ9½î…,«\š²ceWuÍÓx¥8rl„//¥YU³|?|IÄSÀ€*þÈ.vký~b¡íxQˆ‰FPh,-‘½úü+ã*…Ù:ëAgÜFL¿wµñ'¶êO†˜KV=:ÒÐeÇfco,¼8 ,Ó‚'¥%-ú‰¼¨¡Ü,œ 6z‹òÑÆç¸uÛn³‚&Ò’BZv‘´Z„e2ÇElµ®(³ó~ÄÔðäW³‰CT8d¹i¡×wFÀ7²Î ‡ÜÉ x°ÿÄÿò8¨™¹Š³‹BðŽYÝŸ(xýâTÂ@¡(<Éõ;ç¤Ö=bB†Œ  !ã’OÏÿ“IL@ žf#°Ü‹Y!2BRSŠdf÷¹ ÝÉ‘çF¨@;ž˜:Ô?¦r (x¼Z­/d™ýçomu¥º÷,{O‚QMÔ‘qÙ›ŸõÎW–4ÀÌ4áéÖ@аì’WøLrÌQÅŸ ¸n4§`DF…SÚ.Aå218]™Sµ]xC%‰OV£…,?FšñÐÌZ¬Õ‡³\Z&ülˆÙa`˜˜ÆL,• ¥O†e‚:¾ýøÙãfu}`4*“Mä½£ ¢¦ KÅx|6‹¡­“MC&¿úΦa…Ù}7–ꎺ ¾ôBÊÚÛ¥Œ«­ù¬58~ñlÁ¹W‚\+¥åÿÛM¬¢Âj`vŽÕÛŽŒè–wJ1Žüùµ¸jRãª-Å!ÙŽ'w Q.×<ùÙãêf#GùNþín{3ÍÞîòÇ<¾ åtŠm<ù¨žú@¸[<ê©Eá¶ÂV¬kfN˜²L²öÑœjÝ«I4á¿þ›ë/?({ÇÌ»ÿþ$çÿÅÃíJõŽ۔ÎßÊÒmÇŠn„énÀcM|nŒŒøˆÍãÛQ wªÒJž!š±)ã2æe"÷㼟ûô黸€"Av:ÐüÙì nHK©ì:Râ1S,×BAÇÃÑ÷í;qàq¢ñÀ¢xyøã¯+€\g]¢”YUVt¬ÀùŠÂ©Å÷V‚Tª¾7ÓG+Å;j¦Ì©Óö¸¹Ùîžÿ¾8„}ì{W²Üª’gb ^þÄD|T:åÐŒÿ2.¸Œˆ::ÊDê-ãbœýˆ5eåQâÌÈ:Ê”}îpñ\怚Š&<暊o:*Ü­n6?x{§¨~µ”‘``={ÌVAšÃÜöÀ&Ç:ØÊ¾ãD³l¶Þéx’ë“« SÈ …!¥èªCÅ+clý0s1ííxB·ªË¿°øú¯…Ū߽ É®„«„ì\Á÷kú06¹“ߎ‡aÈä¿j 8ä“+‰H½T—qŸ-p.ø14]¼z°|4¸ XðPèÁ¶ÿáQϸûÇÍýîâêþþišÒ²›#±2ƒè0øTð‰Éõ `ë„Þ |uù)sSéÇ­ê_ûG‰Fèzηàá™­G~Ä•ByÞ)Ò[Ÿ¯àK.zÒì[Ì~b¥¥<ÑêÿÀTTôuk!•-Ðßrd\¥·Í¼ˆ9ºÓnÀ㻕þ•Å_þ~l#øº‰€²31kL©þ–úu7;‰Ñ¸æZðôŠ‘¶ðYD@Š ¨úÊ ã<ö‹Žî*@t˜#ê4õ;+¹;ŠÑ‘“+QYcMÆÉÄÏ>}‰¿Ýþ¶Yÿ½¾-IßÅ,+eå Q$1~4°DXñ§v{±^=­nïo&Ðuwe;¢S-a* Z¢ Vë?Š,»ÛoÖ –&W:tŠ,Ò’ðä927౦mVJbï§úÐ5à%¸|w¹¿YÈîÊ•‚ëç?Çj^Dç¹_ÙæáàaÄÕØ‚‡¨s÷ƒqpØà+œÖÏ.å¹9&ƒŒKfŸü #.3<̽cû]0ßN/á&ë§?“Ü®‚Gíôg \7u }Äî`ÁCưÿï«»ÍNÎÑW6`Õ¨‘´Â^¼t!&Îë‡ì4.’ÕwqR¸•g‘^“mÃsÇÉžüˆëdx 8gî¾Ü8JJºÛ^ †S)x¢’¯¿R¯Íûþönï2®ì©Ê£‹j¸evܺÇ_¾,‹ Õã\áñ@žy=ýS…O¬óI¥‹zVBB¦qØÑ¤;‰9Øä;rsf¥ßÙ¹^¿ÊçÍãýóC…Ë\å²4àྲྀ»Blïbz6ࣃ§¼Oǣᅼi¤:Á’vþƒè9cw€»4D0ý×÷··›õS©¦øÛöv³;D%Ÿ«suVÉË6‡NÛéB‰uö='N,†²ƒXðàÌ`¡“¾Êd ¥F°¦Ö2®TŸÛp2èqù.6ædBTBVû)†7W_?¶›f…Ä‹õæñib±îÒKäå ¤íp2.9{ø÷‚°e¨ö}ÏÓ€É<Ç<ýž‘vÏW»õãö¡êÂÉ®îÓØ’,sÖ¨”qûù Fci€½`Áƒ0àp¸¿½ûùã׿a¢·îÝCÇÉgÍ.½8]qNŒ“gÄÂÓŒìÝÝßòç»o‡ìÛÍã·zX•«;û²˜Å19Õ¹&ãâ¼´ÞÏŽ Êóó}_-ŸÃ¿^4¾ó–¢;Un ÄæÍœ5Yœ2Ôç&Lôv 
0î/Ï‚/±>Au_prbíT꣒Õn\‡´ü”YðXsùß)¿ÔzhÞmž·ëé‚åë9*§Ǭ=®R9-ÃüR§” € <>/©‡(Ï÷I­û踜E O¬2.¤´¨ZŒ‘"ºï•<>õ(.¦?î\^wŽŽM_wÖ±lz>§ù@dœ7—!=O1Òå0à±¾EØVÜû”¾Zw5¯]ºt"I AÉQžÆ9p‹îCeÉ‹«‰ u+@1áXg¸¸ï|Rü)Ó¸Ê}Íœ³w&"¥”Æ€ÇZÛnýxÿý¿ï¯f¼øT%ÓƒG%mWiËöÑóŒÐÄåUÁ‚'/mƒ›ü—Xç²ô ZcÏi\4{±Ï|Z¾¤© µˆ™Ë_ü‰Ï\åÄ âIÃ¥ÞH^Z– RοŸBXðø9ñV77››ÕÓæ¢,¡ÍõvrëWë¿ÉWåhâDCGµ¨‘wôÒðÒò…lx¬…Í+lý¹Ýü5±ÅU¶BðÙ9ŒšªÉ¸d®Ì2Þò5wlxÌÎÆà¿ãSo²ª5ÜR ˲†½(l\8rqAð˜ÒE0à1¿%焚O_å³”œ‡RuTÁKùMñÌX¾ © /v¯rFÞÍ&üzs{µºý…A¨3HÅ~ˆÊsø4Λ#™OYî&Ý€ÇÚ¸^e°iÕ]jÅ}…¥ÓŽ‚¿4ú6ïç&@p•³à±v5>áúLÚÈDgÝ[&×Kì“æ’qÎÞèð¬ð{?À\´àyY _×÷ÛÍõÅz÷çd\øª»ÄP´Uì`_©)67wai€wi¿ µàé“¡ð–¶ý¾Xwq¡,PF­2Tœ°K"ÂÒ@ƒó#&Ø‚‡g¶šQlmB䪣Ô?ßó³û ƒaÈ$7ãIn±In.R7!IÒ©d1.7Ù Aæ1+»Ïr+ÛXäo“câ·ž†¾ÁH¯ÊNNsðï/Å'ûööêááö¿}ùÇþ½Øü_ZÑ—íîË÷û§/«?WÛÛÕÕíæCÐ9ÕÛ2qå›JoF¥ôfº,¥,#D¯) ÅÌ¡Cã÷?ž¯¦êè‡Ðë£ýÕcÝJ" X k‘2.ù¸,Rì…ý¯Mùc­ÄÕ©> îUÚäs’ÛKÐ1ËZÈ=ŠV MÕl/¼tåå9dª»ø¦q‘:ÄçŸzŠ‹¿©Úð¤0V °Î$‘c§u,.ã2# V‚^ÐÉPžÐ#ÂBåRð³èRÉN¨qíCi •þ·eœcߥvÔ™ ‡yyE²àÉg§HTçºD‘³GÖdûL‘ÿOÂE H<©CÖø! ÿ…û—üoM×1®’++zçÀ+Âȸˆ=ÒÆÏEš¨ŠOÄS« º:¹ä":U*ý—O®*ý¤A稊|9hÁC½j×ÌÞÆ'¦}•é0å&‡Šd¡Ô*ÝÊØœ™hŒay%²à±¦-Í4Ô™&O)ˆ¡¨IVâ}éÌ”¨›hÙP"ð=Òú°ª,—8[Tký”qsŸ¤¤ó+V®Çý”g {Ùç<‘ÎGybåÒ"yÒL~W ó:òt Gì<<Ö××nÏëåódësꚣèçß Á)kvÄô¶ã °àô¢F[HìX×ÂÙ/9½:ÎÉ5à GÕ.kA0I ‚ÁËT^ÉÑû¨|TÆÕܱ–î¨?÷¾ÇQòuŽ,p9.¿ x¼Ãaõ"¹º7•9 ¨­X=÷ˆ0‡7 °†,x¬Y/œU~“«;WKhsbP¢ §qhÎ÷qÄ6àÉ.vxH¨YM£~úÚ“«;P‘@,sVhœz<$œ§púžÚðÍ~ßn{N¾î7Í¥s‰ÊS@Ë8nþÓöpÔÉÃòSoÁ'ÝC¶w%6{¢¸î ÍTÚ<&Õ´‘qžà¤[Ç2A¡6<ÖˆÛ L³ÅæëÎÏ’™Jx "‘Œ‡‹>*B$pÌXðdîõ(0û\Ÿ˜®;?‰°§¬YbD2t{83Ñ" ˆž±à‰®‡_·Ëu(‹Ê'ÊAóVs)›ºøuÏM,àø3àg|Qúíöþ¯Ýú÷ÍݪÁYT÷—2Ë$»àµÁúP}"œ<ÀËaÀÎϤœãä¸e+«fOîÇUƒûtöd#Ö÷3&?üŸ[²$+²¾D×ä FÅAœ/])]éXñJOã ’0oË=, ëúùvï6Ï5¡ `^<Ùˆ‡{ež½0öú½ëÔQt>'T ”c·„³¥±2¤ÓlÀB¯u…¿ú~öy»|aør{?­W%ä¥ÑˆJÀ£ëÖ¤®úH9g¥gè„>·¼*ˆ19k­*§qî3yj/”½Ê˾NR(Ò{å߃ð‰ü´Ï¢JÉe@U aÀÔ•r^.¢Óñ`à>)Io·¿ãtž ¢ÒÎ{Pgp®S&Ò0Ä#¦¼O†^y#ïP¨Ž8J©È¡Î1 uÌJ«Ši\dè–/r6B ȱáIiIgíat͚ʱÊm`¦ ÀÚ>,ã0úEÝ´c…ᦡO%ùúSQÛŸY‘éÊptÁ“‹N;&eœ37£<{‘|ÈË+Oän¶ŸfëìO1j&zœz,õ °]Nœà|@¯‹4BYO ­Íü~œ5¤¾G&£:°k` #w ¸4ÂËAÌ  Ã;·¨¹Ñ²êF+¿ }løèò‘lÓwˆÑ)qýûq9Ϊ\{±¾½¾¾X?n®eÉnJ‚}Ì›§ß7Ï»‹?¨\&È9mîX.†Ä:l•´!ÿþò¯?¶{ûàýöåz»+¾Ûë/ëÕÃêj{»}ÚnvÅ,¿î/¿íC¹Xnë®A®àó£LÌÖݬSª‡ª4IЭ{ôB¢ˆîºº(•¶ŽJB¥ïHâØ€Çª$5cáfÿðãïåܯ½ºO…U,U_1d ³5øû,å‡m «Œå£4àÔ„‡{¸lV××òƒÝfw)¾»ŠœK<{iàÕóöözw¹Uø5>M‰TÌR.j‡y.IâÐMëA>®]Ó4¯kšá£8`¯Ë¥*NÄœðX½O]}e2W7›÷œÆÅNI^›¶fÄÄŸ~ 1BýÐwô¯ýo˜- ãY):èzƒ×ˆU'2çà@sß>Œˆ?­­¡£ D-x¬û„~ ~ÃõG?˜˜š®--’°ëðØ1K„Ï*O»ˆÜt²AËÉ&&eL¤4ƒ¢±íxz”Jÿqßÿðh¢ï7÷»‹«ûû§—œ}¤º,~€Èj®B)Í“»Ä¶ž=„6ý ªþ±˜2ªáA%ϯ²C5çlmžÖ×e­Ö­ùš` œtT9ÆåW…|‡%r:ê’¨Uh:ÞÛª¹téÄìVLã’9†®Àr0; ââOnÓw2ä¬äGîñX;€jŽ]ªIò‰Pk]5Kæ~®ËaÁL“Oгsn^)õQÞJ5û€.½ .—ç9¥wÍEAO3%\~r-x°‡©ñš·Wû™jRM (`²ÙhQ]üœ=LŒ±¨³[¾]Ÿ ,4õëÕÅÕó÷ëÛM¡/ÕéËÎ%ÊNÕTäʶ8sÒ—Á›]0Ý<~¡é>죯iÄ:bñq@ 6‰1 Mû²¸i„±%ß)µ(04àÉæb‡rçxÒ–ŽàüþTˆËUâ"Å2_q¥¬²³8<Òå]6…à„#ú<;‡°SA¤<üì±S1m(4f]0ú.eøÞèyÝó”²KœÈ«ëRÆëR€ÏްèYæ„m© ¥EOÕ=O—“)*-ãz”Tû¹-QGÊ=³èŠób^¨è¼£ ùø¨¥"îq³Õ }IdÙ=nþçy³{jK¦yûV$Ü^¬þ*;Òø "è³ï¡V«åÕ£ë‚ÌHÚ œC¨ Q¨(ç/5(x°7çáŸ7ÓL°¦R ¢lºDÉåÓ¨Ô[Qæ*:±Z Ad£TH1džU餻Ôô¯O«õe6‚Ó‹ ¹Ô°Tr:b} Î\åbOZÔó~œƒÊÎg¹Š7à±V©ë;7ë‡i¼¢V%oB4K•ª™D ªÕAf*”˜0%hLØC¢PÁ•¦G:0·+黼ÿÜ>>M3šJ…ŒZÕ~\âÓìToE™«T1øÿ£îj¶ÛÊqô«äÆ ðì}/f1çÌ ÌB±U±¦Û-Ë©T?ýW²-Ùè’”&½¨>ûüH ðv xñ<}ÇCäj>mÇi;ž·‰_O·Ë­9‚ÈÑ*˜ƒÈDç¡Õ§ÂÌ%Vd¶U-A£)±¢ÏÞ¦à¸tÖãoõýçô+Ód8ŽYÉGB³H\£f„43©…&ÿÏ#+5;dÏ*%’yç¤ÖãÃ_Ëõ¯É/G.€ƒ6F‘—É{f}*Ì\b/wÒ¼Ð0æ0Ĭ]>ªàIáœÄºÞ,îW¿§¹±\^•V°x‘ÎC¬O…™K,ç“h‡p. 
!–φ»!² ²–ßE12‰e‚‘_‡ÑÔêÉ“–ûá C<19žè>ÍÈêúö.޲M?|}ÚȲ½)m$x§63¡åÃŒ&ØóöŒìÕYgÌxãŠOîx<°9 ®6ëriT'í1ñæàmžTÞŠËpÕÙí]àB¶¶SàáÂèKðyoØ*y4+ª°SáÓ×ÝÿyÿD &g]ô|’\¥0€úYû@ØÖÒ€ÉWàØzò—›Š"ëAÛXº¤;®uË4lûùƒÜÑ hð¤Ô˜·ËÅÝæöúvyýgE¡õ`ëô8>€å6°Rá"Ak* –Àû©¶<Á4£Är³¸Ãõòçòfµ(?zúººÿc½xÊÇëõæy½ü´j«‡L©”²Î¢pá*•Îî}dAkˆÀ°² ñ,‡ü¬Ø(ÀCÐ&›åÏÇ»Åf)Sq=JD Š5ΉD úÑåL2¹4 €®ÀãOlH£rÓ^~ø×òûíÃÃtsïê1ÍÒÙØù˜8âçqÆ5xÙGÆ„À‹n)RižõBÊ8oû“b_“õ8d"{јÇ„dè}Dr¥OjÖªŠSè¤ÈZ/}Í›YÌvF¬ßQNã ¯º.y›B‡G[âQ¯È·ÞØSÏQ]—T®)²^ÆA‚ÐjðòyYðÙA„üâ^$ž+U+t™M®Ÿ«ûÉ4»º]”7²ÛÅ•ª:µY„¼W%N§¶,.èNˆîBD;€<ÚÕ‡jƒSô·¶ª¼©+/…ü9°åD›ÝÁ` ÚØ?ÕF‡'`‹©Îkç~ñcySU¢­*Še]ª2 K/¨ «*+¶|¶O9¯äÛ žþ˜F²ý§U…'õYËû.‘wuõfC&xÏÁ%Ô;ôgÆhÀt+ðDßhèËWõåJ ´Ä½§™Æ™`[-á>\ëð@h<¡WËß›«½Èø¤¿P×ßä_;Ï™¥m…ÖܰÃç°‹óKíˆÕXeyÌË!:àBeœ: w!¸ÝÛ[ƒÇcÛéoéÔƒV¥šC9m8Ë"«”.9qÆ»A¥0À(Óà‰¶IK©ÓRG•.P>Ó“cg(Öd£†€Ã‘(´£Ä“Út•`CT¡¢ ey„ä9Úæq•vÊvAWª<6œxt '^¨ õ@UôÙ¹°d8c43ε$ÀXðÙDéO °} ½Imõ8U$GS„œƒI®ÚEf–a×gpÆ+ðDcºžñZ¬‡¯²c )ij Ù÷€ï̯]ƒÇÆæ“Ø*ùê¨Rë±-"ˆ¥Š·á„È®¼‹í¹0\ ‡¶} gÚd¥Ú¬‡ÇR1ì‘;~a jð 6Žîýã¥ÙÖz¹¸Ù¦#„z¬,•v‰Øèr‡Æ·îõÅ8}™­3N­’è+í§Ò[çGj<Úb kXÌxGt¼eà„ϹÒâÊÕªìÆùÊã’#s~¡‚¤½ÛÈZ—_ëÒ2ýcÞz›Õr5^ÆU’Vå]ZÏ›ÛR‡ðúÅÃA¬{8 €•Fgí–¶¨ø§•ÿ5¶p 0ªC£ÉóÆAE(µ¨R8k„»£6+Us¨¿§¯/ |_ø¡îz”Ȱ[Iôí_À/=¡Lí}Þë¸8 ªáK53“¢çñXí•÷¡ÎöÞ#Ô½Á€!¸À Êã¼:n1 Vô&ÇÊ8 2‘¿“¢u^ ¦äæ;©G¶À½^÷uW/˜äÐE0ÙD Ô&ˆÁZÏn7¡ỖSžñP¬¶Mz§~ï÷ÙµÎ1ùù.ñoÒlÝ÷‹Ó%º%޼yœ×—£¼TQBçd5m²ŠJ³ŸµI/Q«ÊÌCv¸="+nGG^tFH¨ À£~—¤Qæ[Áß}eÖÝFòàóÃÏ㜉=©Ð}Á Á]Ïnð=e¨'ƒ“Ï×ÀmhDž’ïÚ¾'öxù<1´ª¤<{'½Ö3FJEÁÈ^Æ'̶d»JAçL`=¨ð¤÷§è´–9bKB[f/…z(kç|hrrV!|ÿ8çfÞŽ(£©å«h²×í>xFÿü€ŽöêF•ó4×ÿó¥œë/ë¸ÅããÝßzt¥-Í4üËëøw!‡ÕÓ—û‡Í—ůÅê®´µÑÉóQÛù«¢˜y`b溆!”“ãѦÞö½X»¿ú×óCÖê×é?ïã}6ÔII˜|ærø©Ýo­<‹ÉŒ „u­Q²XW˧¯[Ÿáƒ cU…ÙÑ‹–ÌœÆ9GÍ80 ³·¦ÿ´«ðP«ißs ò_—á6ø_¶ó‡õêßÓnþA³T×,9— °‡h—enƆN¢ørÍbxQ\ÿì žÔŒ$ÇŸA|}ù«z­Û}€>;àÕ‚¾/ã°EÎ*ˆèO(½@BàIͬ‰õâþÇrqw÷ðrwøªÍ¬Ûåú~q÷¡ùIÝÒ…r'‘“ 63ηÛB:È."^ܻʭY¸‘µpó?æ²»„†ÿ¨36-(Ë"àÑp?>—¯õ–ª¹-Õ°>~µlzød,óBj‡¶]#s‹BFÄLb™©ùhåø¢ý欻ĥ¢©,ð?vמõUs»‚SN—IÂþ«¯¿ý_Û_ž+³ hцõ˜ª÷#B6ÇKoB€FiÏñh1D,œÃ! ‘ãI2©:5#F­‚öéÄ89šý/NtxÔ±¨Ï.#Ù5øíèˆ×>º–L3âÈ[‚f4ÃF8°<ɵºš“L̶‚Cmí“eyŒ@4_«AqÀ³q2µb›TÖa[Fã%xæp„£OÒ¯)ìFœ‘K™.Š3•®]Âö"mÖ0¶¢Q°R‹º·$ƒ™À\³´=Õtý)Ý5£žLŸSàŽæˆ»0Ž¸Ð²ú®H÷¾U¼WP¥êÑŒ —eㄚ¶Ã)?4£LŒ³v—aæ áeq†¨G©gÑÄVÔ©Ô±Õì6'¢Ì hÂE1(Zè™i]6ÖY#@ï`5CߊAbai ƒ¤xÛìAmœ”ÔŒTgnKjÄ3²6z/‘Œ?/Æã}Wž‰f#±Ñl±8Áõ¡—RŽÑ¬Šcü4 ’ ÒIÐ=Žý­:lغ¼ƒ!‰$µ‡è3ÃÖBZqI*qcKQ* „<½w¨Ê,°áêRöÁJÄH¾ÛÎ$Á߈ERq“H@†’Oè·®«Ÿ S‹ñÛØ8L-ÞŠ7R9GdŒ–ï”"ûF¼H§l3z ­x!•˹1¼–¼‘à9©?³XÏlÔ9Eë¨OªC°¤…1y„‰,$ žhgÜ¢WUÌÆ‘S².H0Ê“|ôàZÍ¿X–!‘gL1bxRƒÎ¯UMs1aglv|y¨E¤=Æ6l‹dÍ6#Â:Û ÿX<®Š¯™òîúk+›ˆ¥ zò §Vmož=¡¤¯œG+‘Ý^yç¶¢ÓÉáˆû‡›åÁÒN,—‚sA"ƒ‡Þ±”ãà[ñG,ë S*FP€'¸ÙVÊ;å‚1,3²mï$èÄ÷Ü*X­æ<‘K)Ò˜‡³ÑZÇG…5¦•eòAÇ\€UÒÑl›ä8ºF  ÆX#ÙG¦’ÏÍ;5„Þ`†ÉñP~vVìªÌÇÚˆrÑÆœ b<ÉÏ8¦3y³¸þóªªllE °'å{kaæD¥ãðY8àçŸB;–‚!hL3Î-ÚV ÅH7&cÀADŒ<žfלžñjl¿ñ,ƒ¨Ü¼ „ŠJפ«4­&~ ÃБ‘ÄP€âw–Ÿô35\ÈU,¢éû®²Š¾ƒÄÂŽy¹í0X°<Þ´¨`Ù$× ¿uùç'ƒt—#O#–‰Å“å¤À.ƒe‡Ç5£Ñ™évD°Ñ¼Kþ¢xç Ì,Ã*+3§U^7§‚ËÇ—G‰<0æ”Ë€R”àAíÍózyS®YwOë忞—OŸ-××1oËômŒÌW²l`8+Ü'ÑI³æÆËưïõ[ÿ¡¥c( <ºñ0†Ž|”àÑ6]œ?e‹?¯—õébÕ®ÔÓ“ˆq5ru£a²ÎbÞŽÆXeÞ”»R ž8š†«ï?ë“ÅFǽI1 „óÆ#¡\ª^ô6Y/ÒʘÀº‡ÒZV€G]Xeöd=>üµ\ÿzªO‹÷Π‘ˆãh¨“¬] Q¤™1±4À£ žïÝp*þ¸~¼º¾þ9Í â÷¥t’DOÃ÷©Ýˆ%UÀ˜bxΓžˆ£‰õëéñvÉÙEì•€OIpÏáiãt’õ¢b0™Œͤ1T Ö£äú/˜áT¼Þ,îW¿ëÆÞ%ÈÆªD@ñãŸÑ’u£"å™3Í ˆê4'AfùGmÅb<'6tûùp¿š¢mo³ÿsq}»º_¾ÿxk­ûùÏ30Î 9ô¥p'9‹'6%HFèø+{ª]?5$ˆO°ƒ ò¶oLš­;nJË6 º´Nñ8š"½D¡8àV!ã)}®Áñx’wƒIr³(¿ë‹íîÇ£ ÒC QŽFƒµGLVT>à-×O«‡ûŸ«|.o;ì”·ÕÝóö§{kl§Óݯ_í~ÿêõ˜Ô\·ñ=úgotò8m—Èÿr9UèKßg‚àÑ6¡Ö©ùà‡wŒžë†¹§©¬gâ2§ò8ëLWþœM0tî–4x<´H ;,i7=Ø]=Oê«'ê„MCËz¤¡ÜÐ7ia?o°_hðhût=f$ËÍíòùiý|Xþâ4+¯žCòÊ7!gå…r] õi.U¬ò´G0nsòé³ *Àc•w;¯e"÷Oz`VV2Îà'%[yM+ŽN½¬¯ÖË«§ÍºÔbÀ”šLŒG0¾Uü`kk?}Ô÷¢½£9Fªê´p+ÃCŽ„eM¨íÍSX¨dG¸<Úw#‡%\”Tß?KAçà“å<.ß,Ô˜z !hÀ~¬Á£­÷iõãë C =k-– ÆZq¦Ïë©dT â jð ©=O»2ª¯ÚñuíPÞ8­‹žC“½˜H-‚ózZ‰!æß6&PƒgüÃåõ^-îVßߋыdš©Ù|²,Gúde¾Ü4„^Ùr ¦'Û:#¢ùt”ªÓdXŽ\b¡b8¹>J3—Zb©É^µèÜ;×é š+Åsëƒ,£he ^­¬¡3ÓjõýçõÃv6°³,˜s1ë3q†‘ ürÌ›¸nsfr½Ö›š&ıü’ 
æà\ü:"ÑlЉ%§Ë¢˜Ç‹Ø¿^J¨L³Âzx6Fo$Ò:ï>ö¹X³É&?º1dK%©G€‡Î¼ŸMÓ8vA¦†‘ˆ“ÎuJÊ1—NryÃ:ås8Hð€ºÆèÑHë¤ÞÇõóýr-·R¬É#”Æ1ÌËÕi„žb)ãb 8˜D ž˜€€Çã´6öÍýÓfê+ÞSp¢c5C\hÜïA9¹$haÀ+ðÀ¼¥ÛÍm½—4‡R'º¦º€>8Áåp‡¨ õëø¨€âhÀJUàñV¿ûc™qåùZl÷²:Ï˼hBâL ò¶Ñ›Ó^4!–+ÛÐàA˜×þPkß$ʬߜÄ|V–r\úh,·çÚŒÜÓ¸¨€da@~”¶`Æ4qº©s?’Çà­å2cÉÊsÊéêO69zH-´^– 4šê7r„béUÀH0=ïÒòAKN?"±Q'Xe:Äêþõ"ÏÖóuÙá”T_DÞxB6‡È…/N.B‘ö¢Á“æ4êž³5¦úígBt1V†<’ÖZ>™¬ T8ÂpVàqÆjWôìT¸X_‰ =>ò8§ÎaF<…a@´Jƒ'úVcNPkíÒ¿š’¼OÖÖïȦqÞ¨f%U§Ï$Š>&Nêÿp¹|'ûf12/^¶ã´… î—›2ä@;¶®‰EÏÔoÜŽ3ÔìAS{Ò@DCŽ—|0ËhR¶­x’kð`õFó‰—5z³}Ñ «Ê³e%ZÏ”ûŸÆ9õ›"=' œlׄÄÃñ#VlþN@kx<Á(¬‡’¹{ ›:Ïm²]b.ì¦qƧψg+Ù¼ ¯¹<κ3™¿ÈðxÛ4^µ­e·Uš«* Ê‹¡è·¡¼tÒÚVZ¾å@iCD<› ÿ <åê)ñx‚º\Úÿ.¯7Ú©óÈÆ¬hÏ¢¡<ƶI)%Gú×íPâI=œÖ}ÝùªîŠÏ]ž<kmжSN‡,õŸJ ¯¯^øûïÝÔiŽÅäò.p~–\–>>©‚Wr°ü€‰TàÑ6»ûê Uõ9,  Ç;W*m8ý²TO`ÀþªÂ£­‹t}»¼y¾;|Šu®; -÷6¯Œó)b/SA. Ö`€Àâ &˜ÏŒÇîÍ2Üì:WïôVóÉcU>Oz )póžÇ‘úÅã)DTJ{–u­r2• t ,hžÔ 2q¬HïõÃzùðtõýáa3q¬äµb(• iòÎqîHgTQk·²-KàÑðÀCà çïÄÒx2ÈŠt‘~¾=5–À¯Íöùg… .p¦™sÁÇ6·úq·æ‰ÃÔ <ñÆÓ4Ê”×\k…JC‘#Òx ®÷ç˜?©çn¦q6-† Öã”™N‚ë¿ÇÙ0?7;DÄO/I¶ÜÙñ¾PEX›ÿÆöýïÛ9Œ½Ç5Ñ ì¯ÿ¯ÖI<)D,\4æÒµb{ ÁI¨­Ùàß–aVxÀ¯eˆ¹ñ4>ûuxÀ#_V¼9=¯:W…ÄšúÀ䤨_û~÷m€fÞ&¥˜ž¦ûá–u­xk}’IPòhuGÀfkø¶C6 ¼ñÐŽ£+Ð É:Wÿ¤@ܹElʵÿ¾Û÷½Ð’ø4A chʱÏLT“ ´Ù)µZKçúŸf !¼šÊn:Àhk&h.±¦ù@LS/<²/-pµ¼{¿IóÏW©Ùsˬ° „ZJï]¢àŠÁ º=C4ÆÈ«Ur†øâQ¬g]Q%s×"EsLkÑ—úpboË߯Q„YG4ñ9\yg Ü,„Wq•¯ÊNNˆ<ÉÆW«³+×+›'æÜ†•9uò…2®|š@d'£ öö|a”QîÑ8ʆá '„ûàa8 7ÒBôÌI½îÓïS ½5¡={„V^ìxöH%‘ðÁ£‚EÁ+D´•vuÞY!¹“3 ª÷Áï´!,ðöLÑJzµl˜#eÀv `k-`ÃØXUä‡7.òX–Ïë ©œ=-M"R\ÂwgÓPûžUÈq¶h¦g•æÚ ë™®iCf“^}Töt¨t­bÚ̒¬&|Ð:”­9@`õâÀ0{Xær6'ȇ4ÝÃ:2`+Û}°€«»Íb2Oì!pä$ãÙ-Ö¾Ö^˜íY@%àðh‘ð€ ¤|ðP’¥\Á%ñb' 8S{ e" 0Û³@P÷iÛ^: $¡Â@Ö²U¬r.FeBay ”·³êàµïuðXñÁ?Ð ¾òÁ£xÛ^µ;Dʹ¶H‘d^âÓ¾I9ºálÍcX{œm2K/ƒðÀž›¤x0í~“x+\^ëÙ8ZÎõAøŸV> oæÛùCkßÛŒHæƒcûSŽ¥Ï=ˆ¼cvûÓ×ü*q 3¾=p1Ýì¼Z_÷ýóal9P*”ûàí"çüèJç‚UTx‘Ñ;Çew¬­ùÀÀ¶gë¶å»Vu—w¹óò® ó˵r^Rr¨q˜îl7hà _MðpÞUE¥]ÒÚ%-N;Õðí:&è#šÙ®&0VÌ^0Í pñj5hŽÎd²Å£]ùÇÅ£Y-–YŠgs3œšµçXÚåäuÃF”† :])LÉÔ¤ÛqUÊ«‰†£ ‹_WÝAYl7ŽŽåXÄ»Jǵ­&ÈÔ^B<¡·Ù Q•ˆäÜ!3Cvì<ϤlzÄî½¹¯ffº±ËÄÑ`(åj®EôìNXå¾eŽ˜hÊÄ>Á;ù§Àƒi'*€b˶žÁÓ —”!¡äZ“±"xªÄ“ è#±YC ~R$ØéĆq_/7™ÉXWH¹ò …äõ;šb –q¸Ñv;1„æh¥àÀÔh€‚&xšf‰Ù‰j'©ò>+s„àÓŠ©3D,”CˆK£˜•Š2^/q7‚ªÖþ N9¡n<ŒâP#}+½ÕÇ™HëýG­©B('c5‘%c~üii  .arÈâµ±ù¶Œ(HCÓ-àÝ‘û»ÅO7Å¿—QºJÆÑx/>$éeÑ)—Q¼˜D«øa¾Œ']Ñ«R„Ú-zc´¾JÒÌ/;´–{§,>áëŸV°j¿¼X§³,±‰oFú¿ògFì‹kß|gœ¸ÑÅoâÿÍx=^.²›åxŽÝ<‰Óä?ÒiL¸É%ïÇcv—pŽ´Ù–Šš$Hï’É{“q"&*Á4¡ˆÜ!,9‹c¡(Ÿ°$êúÈ1zSqòkWéèê­)&AJ¹¬_hdS<²%°t½6K@Iöoà²óÀ¶;²’£fia‘â@<:]/³–2¼'}þ~–Ì'×ßOý{p§Ÿ-fóoŠÏgzß<ú7ÛÄ[ûí—?& Óå¦7)ˆœnÓÉÏ~_°¯ €}ã7ÏÐ/cP’˜æ)úæÒêCèP|½]fñ|ƒí2z¹¼_Í#t½I€c°jFÙz“QÌh†²¶ûvLúSœNGâo?}æñË»ñ_ÙsèçÇ´‹ï'‚Á·ßÇiöz½´úu”Íî“ëWð™‘öedÿ~±ù£í2ÂÐ ¬/#..#FàÿæžYì,Ê}¿ÇshÆ ¨ßßä\ýòuùýÓ›ïGÓ,[¥£››x<††^¡¦qv=^Þß@GÇY|óæO·/® Õ xÄ0<öø·HæéèïïÒÌ„U±½ÿ«•ñ ˆ1ÙÊÜôëë¼{sŒ 3¾?üöËm–¬FÅwæçdÕüp¬,Šìëözäù? 
¡ýã"_Ù/ú—T—ØuZºüÃ(« ÁðñÍ,ý˜Zõµe¹]ѲRº,úèÿ…Ò;îRKåb„™òv/R;m¾¦çÃÄ|úò9žÏG0Œ ~Ï5Çôóän O%¿d#ŠCC1ð7 MÆ v#żÌhcø²}ó+ÀxYòYJTÚ}ý°c’ý „[¬þ²<~þË· ë•BW?ùðåâ?7³¹aæÅË<ý¸ùøj· ýÒÕ0ßÙîz³]–4_ä) ÌÇò€œ/^gþúk1 ~?{ŸŒÆóä‡"üvƒöÌVs0ãÍŸû9>âK/@.Y,?/Ú5áö»ÛE¼J§ËÌþiÝ@`Õz9Ÿ'댱có·{Ç¢øq9IÞnŒ‰æÌ-|ŸÍÇ»xÜ'YË-°_ÍPZÍgãY6ØÀ!µ\½uµ(Æ'çY„ÞE÷ Lî` Ã|Ÿüb3³íþ]®¢ñvš{n‡`4ÉÍ©ç¿ÑY$šƒVyþ»züS.;²¡E»IÞP˜Å{ųŸY1ïþy³…C`VÆ0+S34ÌÒè`V.^ùH…Õ¿‰Ã›0:~Ó×í‡CU}8u'pÉãÛ¹'`ºþ§tÚÍ?ÿZˆk&ØÿþOzÊb6¹œ”ʑև٤x÷»¸þM Õ], å57ÃCU€s“ÇKK7u?×ܬde½dMލ]-QqlQ%à@£6`ª`G àizáâô¡½#5zRЏVŠæ&×®Tä¦ÖèñÞ¸Û¤%šöÏ<¡`—¬š\öˆSý83YÕŽ““¦È•9„ØÞ°MÚÛèРúM´Õic7,.d»ñß3.QJ`Tw‚T:^7«´&D£¬I‡g“lÖtàR³æ04ÈðÇ£:„X?šc*€¶È.°1­=ñÙ¬B¶‰Áý`Ô‘òiQ§iܽê-+øÒÑÂÊÅ,Š˜ƒ4f “9×L9¤:6jƒ»#]Ì%bä¸Ê_ÈA Ao<˜òNxŽ*³òæ.žxã¨C€÷€‡"ÈÑRâѦ¢zA[ ‹P”¨‰éQ7ß48 æOŠ„Ð>‚-VŠÞJ]#F×0Ðí`F>Y¤æ {à©NÁÒ*’«ÇUNª(­¼°K&„kЉbõjœÄT51ñÁÓ$“19lñ†í< ÞÁè¡ñÓ¢‡q²^àûÃ!VæÎEVaÎ{Þ‚à.‘$ÛîLïÆ ¤G|ñÐ`äÛÚ‘ÎUW˜‡˜ö0é…wªŠÑ™HRR¯ÆŠa¬¡”+ k›„86ÓVþÎuZ¡•ö1êMT².¡,;7 3|:È9² SâG"Üñ`Eýn}%>‚‘¼$&mƒ özŒ€JŠ4õiÀ@+0’‘Â|ð §Úý`n--íO¹áè\ª•ö2“Gãé9:N›Vu¦šoëZ©QÜäýôÁóÕ©6‹ï¡Slw8ׂ•äÒyÓÉ”#_d§ÚÓ•^š͵G»K[u‡Ê”Ç¡2ïJ5ÆàÒÐý v”æãæ.Ù/¸ë-ÕS°¨ŸTeŠ3„\î”3™^B]\ º¶AC$€MððîÃOt¿ÈÂQý4SŒƒjP.í åjÓC5¼æšÊ Z1„=ÝAä«é‡z3Š›CJ9ò¹ÛrŒ¶ÔÁxÜì h€‡£ .y½Z¿½À•¹j%±ËÄr5‰wû× õ4Ó“KÆ…k,B9¦H»á»ÂØÏTi犓)×X¡„ãËòt(ËùÛCˆFÐão;!žÂ}âŸHÜ1ãS·ðÍv\•É+Õ»èe¼'ÆîÞZ»ÑçélžDŸãY¶¥”ù© — +áCË&0PO0x)aïcœÈwQ¤-ÞQ<ªªÄïS]iøTÇ?I7óÌÄ´mýýår3ŸØþßä!ð–ñ&›ŽAÁ›á<.B ­ti<›&QAžÉ2Éé³}õa1ûg¿ÎÒ]€ìËhÎà•©UÁg‹Itñ×ù¥…|aìÎ}h#Ó–GÖÐÍ'|Q99K‚ÀNsw‡DMºc¯Œº£`ð+£ Tu;™ç±ölPÏÃG‹À¦`[ll 6ÓÓØÄM°·+oÏL•¦×öGCþ¦»ôkÎK_J²“~!ðmøº—Ù‹jurrØÉÎþskC£v!µŸÝ½ýùLÔ¯QôêÇÛŸ»·Ù¿£ 0ìc˜u/Ì›ý¯×Õ‚ëûªæøÁ ­AÒ´í847g !9ÆÎóÞ¶?šÈÙ$…iEȆ¶€'Uo="ñ_à‰"ÊcSøÏyÀI[Qu©{þÿ­dMïƒUvïš-«¢m‡´7®<û¦IGÝTÀ½ÍW‘Š>€ÝnÆã$™$“Š"%;ªºþ›Çõ/ï쇉™|WÓ8Mlp]ã=Á´#2N—‹‘å¸\@ÿ´˜&ñ<›>\F÷EaðdgŽ\X³EšÅ`cNªÙ ¢>làMØÀwçš³¡}ìÏ€Ú—õúÏ?._/'iiŒ•>‚ñ´‚K0 »ãÅü0©™o”qA낦µ´Ë“ðíñkŽ„ –ê­%ø^7åòŠŸ ]6‰ŸÌü•åìMOF#k”æ¿1 ¾]LVËÙ"{aƒ°Î š}Dߟ·?–­ƒòŠº¿ãTbç+]—ƽ‘MÉ‹ª!M3Δ¤VBqJ~ÍÞÛDÖ¥ŠÿÂ.8nÞè½u¯VéÕx=¾Îó÷&×Yÿ¼0{Ã&üÿx52‹Û7«Èõ¬F×Ëkvшӑъ“h3Ym@צR |õï§ÊŽÌ!ÑE2¶l]'ï7iõrSJ|ª’ÓC#Æè ±DEvÈO5¡»Ö´Žn9Ù»–#<µÎñkŽ´ÎË7‰ ƒ—ÆÈÈà~ùâUlr“þ¸Üý°-ùHÍ¿&åU÷‡>?%%ÄëâÙfPrX§ý,õÌꆫq¸n°ùõz³xÄÎü‹èåz¹øóòjJ£»$YDE2¾ÎS5±v ì­í‡ê嫿fãwàÚM-_uR©¤‚6« ½+£Qåþ¾`ú]É!MwÉY 9z¶Ý:àæXƒœk•‚_üøþ³Õ¼àˆd˜hNŸŒmb;INšaÿýÁÒ‹çÒ¬R\(¤Ëzðò³.k­ÔK/Æ´Ük«©²Éý®Nô§ûš^gdC.úV€åX"êáÏqT»–+öîX³áàŠÔL“èUl6Éjìy…‹·ë ´Æì…ÕŠ]ñáê \“w0ƒH.·»‰Ç5becú â¸8m˥ɓJM–£7ŠëØ­,]ÜùWHM–·&3޵[:´úØHðÔdyœSÅ<ú–ãY>¥Ôd‰øSKM†·ÙJâÅ&63-¶9JvÿÙzûòœŠìœŠìœŠìœŠìœŠì7“Š,Ÿ/%X†¹çUQÞ½:§";§"ûú©È,1™±¬êÏçÖŒô˜Š _cÉ`^¸?•ŠŒš\hšPÁëWöòr¸t¹ï)øM9*†¯÷3órTükùMy«¹¢{ômMùà~S^£âBIêF&°8ûMg¿éì7ý¦³ßtö›Î~S¥ßDÙ5ÌJH ­ký¦¢\9>óÙo:ûM_ßoʉI¸9·ê&0)Ý-éÅo‚Y““~³ûHD*G´Ž¼"Ok¿©zrýoÙoÊ[ÍSLº¥ÃÌoÊkàñ2äFÆË7èÏ~ÓÙo:ûMg¿éì7ý¦³ßtj^…q¤4ò˜W%Ü=ûMg¿éëûM91ç’S')Òº_¿‰›*OøMz„(xUQ\kÁå0«¾(ÊM6Ws²2ÔGñLQÌYÑÕá…Ê^¯“O³%ØÉ6ÊM>ðGÑÙ&>ÛÄg›ølŸmâ³Mü²‰NžìÑýãÉS)ú.»6ŠòAoî.}ØÍµ¡YXžÙ¢8ƒÿ²ä~•Eä4aƒQ]Ôà¦uR©|êÄ!ë”Ü«¤T'êX'G¨ßÌÑMñ`¤;§ù4`hqP“lšlÒ«ÊDU¼2sIÓã¼: ‹*Çñ^­{AÓFœx¾p¯-L’Gw†¾]ùŽ-¸çt$ñH*úíNFÓeš¥×ö#ÍC„©Q@ÖäÉõN-2!Ÿˆ¤ÁÈp?¶—LÓÍ]:^ÏVæÛCIëPœ(ŸUl͉&€û§•ôIQƒª`Q²ÁÕ‰ó»ªÅ«P(>0M:óÁ²p„Ÿ 8¡Hð~¶¾ÿ rJÇÓä>>”1Å„š|ƒÞLð„Ú?Ä£ƒ¨ŽWДÓx=126£î@À$`ºíÌœýAÒA쥥 fî‡K±b,²í€o#ºŽ9ê „'p*j²ÕúS¢âÖäðo§Oˆ€§iöo?Q§If6 ÍBQC(”.¼ýzÿIC6]z¨4¸û«õòÓ̬ÖÁ#'í6Š:€uÙpïÔèiQC"Œ¤e¾ÏW•M,¡›íìNd¥ mVøVä"G$eºKnæÈû' ­Vè_…,@—Ðd¹;H€M” Å ÆT0fÔÀ쟌‹'E&H¸éd¯šµ² E.BL#@û§‚ÀúIQAÑ9©Ÿ§ .ž´ס¨¡1o– =ðþ©¢«3œ|ªh†{¦Êçänº\~, \£`LѺ/¦Ôâî( ¡'¥Sª“µD9‘€Õo„Zã@DQ5a O%îþ‰‚{RDÁÁ}—½ŒmøÔõ]<¾‚™ÿ—+jŠ"Ø-âÈ!ÅÓ"GÓm4sÜân–§¯©öãWŽ‘jíTQék¿ö׆þ šóIˆjÔí¼N{©‡ZZý?ö®e¹±¹þŠVODQ ,Gïl‡^øfÁ–ØUŒÖË$Uýø/ÿ€¿Ì‰KJ¤$‰äÅEÑ1Ñ£Aó$2@>’AÁœÆà§§Œ§Ë2HÁ4¤Œ æV—¬)‚kÀ‘Z´Ó“"bC“ 6Å<þ¼&Å[i r®QkQÆTˆN{ï’œïÜe¯Ÿ€³¾Ë²C0Îù <šû‚´£D†jì!¶s8jAŸO‘ÚiA—¨ žp¶f'ô$1… ñ9¼b §_(N«FØÏ'Lõì(u!L-4Ê›‘ƒîAç ›ZCíEk;ÌÓOŸÌ¾AbÚL¿ô<ÚðŽÝÆý#ïñ¬ª r|ÿ‡A’Q$9ƒ>Jž4 Z«]ÿ‰ô”óI,˜͋⛮ÇSŸ £s"ßõ©+l]ÂòÅjŽ´1!… ÏYi¼‘ùȱ‡ÒQàñÐÊÞhäY¶ßèÀ²ûç¥ 
t|Ð׺ÓjMtèqe¦Á£½2Ûµ·Ü'Ù6·tø‹Üžîž¿,N´ÁÜ–J¾|‚Àè­ä,ó88ËóœœËõ@×A7hð@K‡ä¤H«F ’.»~ˆüÇœgFZ‡¥ ×ÑÊ•¤  ª:{±¢ƒâ›è:\Çið@j²ýãŽo·?íŠ åõ[þºkñ;, KC°¼³DØÁ•îlõ«Ã¶ªžØ. ©Çãbkÿö˜<‚àˆMkŒ^O…¯Sç¨ )­ÀªÎRAÖF1°L^4Nц`›ÎXܘâÐZAœ0uÈeÅ|·è €Œ'´»iªcÙòñiÅäF¾×¼ñ±e™ËÏÿ×® ÎâöµöÇKíýsFN,Xd5˜fm ž%4x´YËóûÅúiþ6ˆúåÙç˜ì |-ã]î‡AÄê($íÉþ;‚M±CÌœ6èòtô)þôþ³õ|ª°{ØŽø\8NšðþöSãÎPL ÜÔ!ðÉCn¼ÁFRƵE+4nfÑ»äZ~ºöÁ»dÐJ5=û”'ô—;N¤PЫ!AxŽóŽWÏ›EÝ6ó‚Îå3_ÊŠND}†ÜOÉ)fÑÃÔࡱ/µ¢äXNåð)‘Ec“„;Å‚[{ ø>Àcô@ lô '4g9~¶Š •Y=lçè’XPh„)èTÊ5;ø„-çc¯µÇ–Và÷=Ž <Á¾=;¼‡:*=¤—RŠìñŠhù(†ãïƒ:ÂÅ€÷­nƒD!–Ì‚É>žó’L,†5jv|;}¥@ï{dÊhðxlu`<)OyÈ ã²›,Q2¼á¥9Y‚Ôî2a,¯ëQ#ô`†ºìáa’ÒúOþçûϯ²ü¼_ôÏGåX~ €ù¢+Io‡˜vÁ!SZ1©TÕ@ ŠkŠohy ÛÅÌjð¨kDÉÛQ‹³ü†•¼ñÌO’.˜¦Pª(ñàðÇ!¿ýüjE®îX«üüO¯êñ?¶bv6ÏèÕÈó/ìÜà½âìqþÆ‹I~~`…ãÛd÷é Ò§+O{›<¨²¿½ú oTXù—<ÿ’5éû®Ã{UýÞ$0—<’LàÃÂõ¯ÇvUÿ}ù°\wásD–šýßÿY;7±Ds·<¶¼Ó†qÁc‹ dÕÁN0NŸh£Ããšu¸a¡- ùÿÁ 2äÛ}/böà¼Ç¡iîjTØ1Ùë_G[¬õx•)•4±…å¯s¸@Ê2Ûqêˆó.ð Èðaú¼Ìíw(²Ã]G{+6>='×ã·ãôYvRp*\"^x“N7üf~£üMt®‰¬7 \¬Àl+‹òºAçOˬÂÙc8i_\ŒåUŒÖÄL4qà£kVgŒQTÓ—'Qâ¡–5:?xd¥@ƒpíbrÆD¯y\4¥RßÊÕnGXÕìAÚbAí vtßœþ¹V‡Ç̉9dØ‘êeÁµZ-hŠÑck@¼¬5.ĪB]ß,õñ•·%Dr€„\ˆ‰b£âz}ð’Ë­Áƒ¾á–þ ·T”»ùäø¿‚€!…8¶pæ8Ý£€šBš~‰5xbj¿£ŠäÓZ>ƒ$Ççßr­•¤TØa¥5xB³æde RQ‚1Y>Ç¡ˆ‹`ÀM ¾ÕädÄ·‰52Œ³ÓÇT ß!Ï›;ÈxÔ—‹§ƒ¸KkžŠ×³á:AÌYõ ­yÊZm³¸ó‘°ýr°8ý¢çïä: XÇ+}¿#VÏeU9Η—¯¾=¯£4bTûá‘z> géí¸ƒj¯…ó5â/ÿ±kj+>:}‰þá;6X° ã±.NjÌo×Ç*S—må•Bvh¼=¼ƒ65ÔÏB9ô–D™ÇFÙniù;ÎÆ$T3Þâ ¡eéôc6¶tg¯1aŽ‹ríxœu¬6šVL?,™ˆÞ {8OʃŸÜ!ð'TRÆ¥D ¥¿Túäf‹ró¼ÀÉA×dœ¼‘S³R%ç{«*ÈÿºÝR+𤀣ãé߈Ž7Éjy³M:)oòóëBeð˜Ç?%O“|‡ÅÕà Mú)·F´m¤ 9h]i\¡v4@cÕ󡵂Ÿ§û(~¯K‡ †žviaù¥ƒ'üáiuW@™Eâ£ÜœnÊ7½56\ÐÎ`<ÚRìÅ"ˆ7ëåìvµÌïHY¬H˜†… ÌïÚJ‡oâeG¼(…Èx¢rÙo^Ãj×9{c±~òø|;ÛÙkÉùïëÙâ—õ䕦÷¦C]½‹T=÷pQJ}ÄNìúëyµ˜±´;¹*¶ÁÒäȘ;Ç‚µűàlOŽýº¼[œ\׊cbŽÕͦ?Çü…q̧>=Í(–`bŠ)&ÓŸaDŰBû©öåæiöt{r÷c+¢%纭vNÝù–\ßRt}ø¶üå~öíéföËÝãÍiwÆ7#ù‰É¦œPw¦Ñ…ùgd;ÙÎûùÃòn~rEB+ŠÚ‰)V;“þÜÂ˲šä{YÍoËÕæä‚ÄfÔšÞ'«›Hf‘»$fycB?ûøôøûbõm-˜”ÔˆeÞ8è`#•“êÍ8o.‹qýOß³o맯‹Õé{jF¼Ðï$ [þ%sYüÓ†84X£Ýâ<­Yð÷³ÛE®)³]™ÐêAÀKò¶f]ÕŒºsÍÚxQ\³Î·JµÜ]-ñþÞ],Ì[óÂ{w È6AOŠBMÒ·ÇÄóÎ’3^€› *@š~ù5x´MãÎ*âʟ˜’Å(®({ŸÔ.u¶%k'@.’ëð°¬Á£n•|Rž»øŒ’KÛ‚SÈ}Š&•k lÇ…º'TiO×àÀEð(}43n|Eæå®ÖÖÕ?£¥âB‚Ë•*“#Æé-òwØ´©DÅ7ù&U™wÂ:´l¶ÊRCŒÆ[¡°Ç0. 
´©È{LÖÈ0ÉÄ‹‹¬5rF`ž#â=ó¸º]_ïdxRí¡Á²@ƒÇ® ²3xP‡“všA`Ýg£“gà§?N xØ@8D<¡ «¥ÄNŠû–%ú²@£CVÛˆÒ";i¶%F[pÆ!EÑæqÓ—•¾C.x!·q‹'(#ìvÂzõ#ñª3‚a/EV1w®< —qœ(9àäÍŒ¸xœé±Ÿ#ðnRÙ¶xÈ·nt½ðþxêÿ$Xªl¶Ã• ˜j ç‘'% ÈS ¡‡ ùJSēӞÔ:%ÚÙjÏûùämSQgñ!Þbí•Ù8 çOëèB¬˜YêA°Þ/U§ V,€¬Þ{³}j/ÖØ ’EAžJ ¨S#ÌáLúÁ”dúÅ.zÝPŸCăÎ@“Cd~ Ée—¹mŽË{i¯3›?ÜÎÖù̾™ñ¹ì7m˜“s’·ò,Œ^»´ƒ&i0²ô=ÉÓóÐÁsñÎ2‹½“<>gõƒz/õowÏ÷‹ùf3¿ùš/¶?H< ã ¬E Š3p!`[ÂÔC?‹, 9z¶rIžZ‡¦ìù;À.HH20ÞLH–õ®|ÿy—¯ýB²1÷–t »}ÅËçSE~Qó=®¼4xÔi§å½Z¬—±T—¿®æëÍêyh,ò^ÖŬdöú¾v ''Èôu³tx<4'ÈÎ+¼¹›¯×$m[Ñ#µ£Gä©ÉÑ!(X‰'6#Çý|ù0ÛÚòCÙºVt ú\¬óANNÄË"Àˆ‘£²=´Ù¯â…fðÔˆeœ“Ó`úaždÛÒàEí¾Š›Q€ Ncœ|ù‰.iùÁßvù_üôÍ¡x}# €±¾Ê8'¦g/‹д—‹êQ"†fìÀZvLrÒxwY¤ñqdáѺ?£aEåÙ²}"Þtaº@Jsº¶éÉ µÿWY§Vko)T®}CÐSÄùº® ÔöÒ}ôto]m¯lq_/ïö4,B¨ØcGGÏ(]Ôý8‚f!øë勈kb”À—•-™6¹$=mä6 .6‹Qje¥˜õsÑà±Ð; è üׄfrÅØ-éçèNë1µG7Q/‹@Ú ›Íä?J «ÐLÁ{§¢Ñ±it#¥K"“7¿™¾Ü< kQÉ›ºSéÃ$:É×áY›,çÙë®À¶AuÓ£I å§kBBK¢Ìræ´h=-Hð=4„O„–Ý Ï–ßÉç¨j) ”Çyòí’ ;Ýõ³ˆ¶‡Ëê)W3õ±¶rñQц¹ƒ´¾\½ª@uöUƒ¼Gº=ïKT`Ý%ëñ‹‚}êöç†u VìHõmÓO¦©RŒEª”`Âxliö1ÍL3NQš".û¬¹ô ÅËÒSê'ís²l„ÕVÁ6bc&LRM¥¡°Púë{ ¯ÎêGòc’Î5 ¶†„©ÁìÄLÕ‚íBŠé«…)ñ4íRRÜÛØòO5À½‘Zvâ.ôð¦3‚ÙsP«¯ï†ÐÖÅÝâ&ŸO‡Àfœ‰µuã&F"]–ƒŒ&¹6,„oƧj¹ÇlzÐÊš‹r’ÑZ3‚V¿=ÿ²˜­ÿä%¸—&óçÍ×|©}3 ìBð‡Sp&Bo—æä­5/m:!uˆôÒà¡1AÂò]È~ô ÇÒE)]Ë w^&9åüÆó‹)µ¸œV¡˜ÜòèðxÛÄò¼—ÚOGÅhËb Ëq5yœC4c*hµfk=p®Ãú+ðœ³þmî²^…›Ê€¹ou”&ys¹6Í—›°¹7ÚÔ <`[5Üþuùå¸ð\QxÁX뀰]À0¶§úÔVÌ&¢ž <£2Õ·îÖçWQža*|™o¯r]Ï6³ùíýrë¯S3A†pÎÝØÄgž0T³JÓ÷³Õáq¾=¾-¿±Ä¦;RHS²ã#à.äˆé²È‘¨9~™¯—7³çõ.Üܶ¢™I)r v¢õEr®Q^Ìû«êŽÍ|?%W È»Ð/ËèПd_>Oy,f¹Óµ3ÑbFò®AMƒ ‹ï‡ßBí¡Áƒ8¶/¡oÞÉ‹Éî:äaÌ+ÝYÏÖªiÅË"ˆ6zµÕ{ćsb1Iž®Á±)4„ÒœÀ² ¤6WçŸÑ«Ñò ÛáÖKƒÇÅq·^uDe•‹ƒ*MÎIx“³>5Zí©)\;)4#MO ¯MƒÑ¨ˆÁ*ÖIÐͦºúÚ¤Ó8ÛÞh¦êñ²ˆìÈš UùÔ£aFn …)ó½UsýJió(¤Íë>êÝEñ-xjvµu·±d¿òÃ…“Šõò¸à!¦‘wôM n=`ßÃ…Õà‰#–’›W>ÏxÇK”ƒN”Þ†qìSL˜‰:¼±(ðiË^·MÊ.\ö·&%ÉöPÝÏïb ¾’@'ŒïYðA„ût>”—%—\ Þ&‘®1Yµ2o•\p¢ÚÌãL5ž¿Ròh·ãÔ7—ÇÒ>Èn³ÊþÙíìf>ˆ¯ì!FŒ{«JœäqƪûÔM‚7Dç„îÛq¦_nþNÌí•*ä´ÝŒŽÅê4c9!ȦÔÉ£.÷ÓÆŸT@´€Un¿Ýþ„¬n„^syœÑwï›p9ëaw¹fÑࡱm‰k;™ª`EïÎ{•™×A¥ÃÕƒÐp–,Š>€øQWè€SÝpvÏøõüþén1TÔƒ²mgWÉĉ£ ”=ÇZb³-ã±Æ4+õ6¨ÎõõNn§û9 ¦“<uE÷³œé¦/šñàS‡+kB‹+™—;Ø­ ß”¤-k®MðÆ•[yã0`jbﻀ¾,ж,JEMÒwüéýfó»Åj3¨Ó’Éc¤œ•d¤‰Dë¨AñûùÍ×åÃb_ø"R3€°ÃÊ+ð¶ Zz+º×µþüŠ×¤XŠ?`h¼’”‘ƒ0›‹zÅ1Y—1‡É/ótxPëU¦'$9ˆÎ—EG>¢‹å¶y\ ^íò~?¬aò%­Þ?*ºáŸ6ó›ß¶ý­‘…¢Èr$P€èrª™o±¼“bÄÉ/è”xl ¿í·‡å°bQXÀt&„ €›RÒ.hWg£~&dzxq*<Ô¼šÐ{Á–ücgŠ’Íå<(DðÒLø|¯¾¤Í`º;lw žÔâåüñÛrµ9Wù°ƒùîØ;/i#4)‚m¢Á'ÂGÆt0ÊG…Éø¡Çy¿“+—Èû‰?©a„ÍfÎâ[­ø¾¼_¼„¬ì¤ºýŸÏ«¡úãû°•˜Ê¯Z)A6Äâ2ƒ¦¤ì-‰Ðxå–«Îg'›ì9»)Û¹|¸øÃˆ[¬P¤–C%Ѻò߯3’(‰*»•ª•©çéFÓèÆ–qm4+]÷°{Ž Ž[9%‹˜Ê®Äº0±þŽt6¦ˆdÀu3ɧ‰3Âñ¸&ÎèLDr|”„¨É¢œàEL¤crû ãX˜Sg€+0&¦ƒ£Œì2Ö‘C Œ÷d¸–`§lñ‹#m5ð8™ÜqjÓÛÄ— í‰bã™åAf¹2B(ãTD”ãÒå¸<\Rtç‚™¸*ÀzðôÕv;à‘&Wºâ#^C9äU(+¤c‘ä”.µÊ–;SDrè<6 (–—7Âc¥‹.w$§D†sï­O:®OvJøw¦ÔqE‰µ ¢X®²þÙ6ì^í@Eƒ'’+ÿŽÕ?Çh©‹ãÁ@©À©çL9á -bÙ@ÿÇ”óðü†› o–†G²=›nó¬·— N~±íŠ+ƒ$Rv••‘ è­\°wÜãÏ;w°Ö\÷;¦à1Y›1l#ñáu>%L˜S#9H+b^SRD¦²^I<Œ¶ן‚'5Ã.V=6iE Ÿó(Å.M*†+Š sÜÃè!xæ‘‚G̯ðÔ¹<Ô®îf[ÍP—oꔈG±,þ ËpX­p§¥M,œÑèÊ\Éä묱 …á´Œb!{ð9Rk¥ãÜA kbʸou“k¡v=™®|½!SjAG‰QðR9•Çûg´Z4Y)ñÙÉõ±Äç.8S;àq:GeºÇº„Ão£qͤˆqj4(a²DFsN@¯¥,o)xd©0<^,ëEsrq9›4T2fµbÐG/~¹äQ¸Öhž¥_ ÞY ºØ,ùòÁ7à^žVã6ëÈSÞrYª˜Ê‹A·–=¢ ô“ÁÒ•7ƒ<ªÏwPcs¹œyRC›,A­…Å7ü†•ä¤wˆ~‚ZÈâ­×ñ(µgñÎÄaIèÌc[Á÷Ï®Á •ò4᪵‰-^þ5U©ó¦‹åtu5žUM³>5©wæÉ5W4í” Æ×ÁÈ _’“ O#ƒÇGC)xS= r ír6ß!@ïüðCŽE˜ct ¬.Ê)-“«/å²ÜÝQj³[Cõ1)u=¬R xLê+¯KgRçÕWöÉ‹z5žœLªæìtQ-'8j6Ô‘+­R6XË’ëÂB¬±B 1Êñ†œžãdÜãœáù‡çÎ'üñ–âԩ‚DÒ‘¯ÕBdžÞ ÛC¿r˶jäf-{'äj\ÔÕàCq£xô¡VïqÞáQÒ)ˆãØóZ‰Ÿ¤ÛÏ ­Öáñ’8aµ¶q¨Tà kËÐr«2Pùe¦YÌÕ£ïcéÁçH†›8H½f´áÊïÖד·=rpY 2ˆKSÚÉØ46hæÉuL2›ðîXU uv¾ÑNÁcó¼H‹2hò0ht 0}šè2Ø'4èˆGª\%ßï’èÿµÆ$I·\Ú,\Z!YrÛè§^>!C <©÷|:_¾˜OïuHÙÀåÚ+:8 qè讎ÅLEIY®ð a·M-UBŒDš)u=œ©$à‘2¶ùµ^6S_0$tC#q¦U¸ÛY+' ÏÙG¤BY¼?\"ïœâE59ŸÎOªñ/äuE„2ë ÊQˆÜæhö¸w@›ÙqÕÇ(ïŽGäí+Ÿ’´¡9ÄxÚ1ªhÔÂË Ç²¼p?° ‹Gž|1Äã˜UAf9ú2#\俢—ù²v÷fI¡ü©eáòŸZ¶Ô}®V>þ†0wE»tØÊq‘-r,bª¸µ•†;·rJ±Ì€3#rÇ£Ùó­zr¯Eÿ`¡A)µS”c{¿XB`\9!â]ùr!·Üì0¤ÀSÓéÚ{Ë«e]ßÍMœz¢L(a×F=ì'úõ=`š¯÷ü¢½”C!>(Cz9Xv¸ª‹‹ÙU°Ñ7þ~ΕÍ`|¹\âÚßr9¯Ngõ`µœUó ý„‚Ëú_—øµƒgŸëÕú3š¡ÿÇ+ÿhMÓÍŠãÖ‰qîÂ-½ 
4|ßãEKгM«]*aÙJšÜó Ͱeô~‡pħ‘‹@׸ª|…³2aG¯k Í;¹òGŸþ9T9ÚÚ8Í]f3èvùÛ˜ /ùtVÇ$ØXôb¥â–å¶‚¾ KÖCК‚GÈ|×Å:×½pL€Gw&Ð¥e2^*ˆTˆògIx€g=²è¨{ 2ߨ¡íèfKº—ÓŠg=§È‚ݹƒÑÊ•On ç*‹¹%ÔÊ¥^ œýíÀ£ òˆÑ-h€H´‚¸ÑÎÎwhÙ'pÍøN/:xäEGÚC”·º×Ë@è!“ø)C”Aä$Çgvœ{AçÌ*)£Ð³ÅW¢±–[­œ}¤w>dÔåŽk©´0 8z-{ÉóLWA`VCLG}OÊ>áX©ãx„%=Ã/¶9¹Þþ5×¼zF]Q…À¸â›fŠs§yQQ^(Ù•Ï(RðHÙ‡§ØÊlð†8"ÄåÕà–žÅ41ܺ~2ÃË«"dùN2ix’¯1î1ín²\<§¡dC …dÆEÞÇ‘3Ìöè; )á\ñ.)x8c¶ïq—[äV²¶~Šè"}ÑëøBÊziå3”<¸èу,&-¤S|´r‘» $ÇÑšîIáÊ–7‡<ÚÀo\Ó*ƒ´:FQš…˜û³tL©á2òêá-Ë›G Óc¼q±ðý‚é2v(”¦|(öu$'µë3ÒÈß•ÏúMãUÿ¾âšÕÐ1¡2&@º˜ÎéõTºñ¯ B¯•ù8x¿ª–«zrcÝ~bЛ‡Nد3¾ÍE=ŒÏªùçº9îÆåxPÍ'ƒ‹êj¶¨&[Ð[&á£9’SL?Do‘û7u3]"úk´Þn’˜¦ÍàW>üñbR­ê/¯–ã³éª¯.—õèAÿoû7£#ʳ2üä;Êdý벺N/ÆÔnkõb1¾À™ÕUSÿWsV¡å sõ§ñXžÖŠ.뛎óèS5kê?öeGmwWÒ K-þqR£‚3JïéR~hŸëÕ_óE7ˆ>Û‡ˆÝŒLZBkC& è5WäÍÂ{!«ÐðèÙr1Ÿþ{m.ã÷4/?MëÙdø5å7½6«góéìy'ðò+}úÓŸ¼Šïý§_þVÏë¶ ÉÚj:£A~ö—Îú:ðßøüû}ÌUŸA³z~ì!(?| #¸N¶ãÁëå¼á¼±ãÁ;ês2žÎçhµ¼¬ÑP(µeýðÝXÒ·8—GGú§SÕëÓñ?åKç»fWO´ÄOßVÍê‡åâ3NøfD= ‡oà3áÿñ÷j~Y-¯Žø[ûÏÿòǯö+üFD€öý®µÍ/‡µçß½­VÍèÅ‹jŒ>¬¢U«áxqþ¶ZU/Þ}ûþÕ Ú”D»‘ÿì5ÚÛ¼ž5£ÿûجèåˆí?<§S4„É5Ç4Ž?´ÃÙúˆ,áíýO¿¼_Õ££î3úu=ÁÇ|p<Ý/ü×Ýú—?w¤ý|Ôæ¶áåYÇUkÖÏôæñ39§WÈàÚß»ióKãÝÕµUû¼?ÏÒq7±G '÷pHß.ÆhÁ팢 ñaYÍŸ³óí¼ôӗߪÙl„ÓVðOÊ)§\Õ§cü«ú÷Õ¨üdV*ÇÈLÆ8ïýŸÒ×Ó—ÑŒn¸¬Ùó?Æëê¢:ÅÙ¼šÖÍš)Ý||ucIþwHîëváë¸|ø÷_¾ö““õß AoÿË«/Gÿ}9‘e½^̛Ŭ¦ßÜd6¼nËYág~¸ÞÕŸÑ.¯üsï5èÇïÛâW¯~øŽþõÏnÙ{;ýT¯Æ³ú{k,éwçzËÕÅŒöùÇ›’{‘—ÿ™/~›?N…÷ß½ŸWÍÙbåÿ9[\N^ßô¬\ƒÑþݪˆþsOõ¿Ckù|¶Ú@Å?p‡òá’¢²1ïñsü™~<­–õy½jaÅ ûƒ¦ÒÅl:ž®fW·a­uo[V|E/R ÒŒÖU'×ûn×UÆ>Îk\Ì1Æõ½þi£Õõ ÿ±¼ÂÈ´[Ö^ú)8˜´áÓËÿÐUd0C¯òò«÷ømKƒŸÙ¨ÑÍ"ßÐ…Á·ŽÇ¯ÇôŒë%ùrŽGh\“Åñìñ@¹Û5Ù»²ç7qÃþ&…ßÄÙÃo:ì8ÜwÕ÷—£ÍÌ­Û¡¤¹—[ u³ÁPõ›é|Úœí·<ㆸà6þ¼Ù¼sB ÎÆæÊáÒr€C–Ý6€‚®„€ˆäe´r¬ø;ö9F°H®t+'xg,ôÝòyJMRãŒQÚª˜e#¹è󠥄Ni¥Ê›E Ãú?m¹Cm ÿB0:8Ò¾iH•VŽóC8Ž2ºúüd2“D<Éí3̼ûoË\]ë”TV„¯œ{9PLÀ‘SGkSÞXRð}8ŸreHÒ|(¥âFŠp\âå@Y8 {)¦–†Òw+ñÈ¢ù¬7,žœ^]}’É´Š ix¸æ#ÉI+ ;–Âè /o xë%6ÙHª’êÀIkÍ#JЖH¹^ÜFY-´îÁ4Rð(ê%“úf¿»l–cªIí ¤h1Ô¸}•܆1½œr¶löû*p_35ªÜÅ—Žö9&V\´“Óýø‹­Ì4ÐFé3afINëg+Ó‡*ųâÓðHΊ{Ž;Lª0“N$Eî„3®¼ÇHƒ.¸4\oåÊ—!îðXd âx„Ô½yŠ;Œê £Ü#(ŠŒh€rZA"AÁ¸Õ Ãmw[9£xy£ Ûh¸ä…ÛQvrªè}ŒÉnoIdQ‚Å0(Î"*ædÙ;2¥`ïˆÇõrƒî.›6ȦÁ`Ø*ÜGÐ e–†½áÓ[8i¢ð)b.o „Ç€”.ŽGSÔÜ”ö4º Ž£u‚f1v\€-ì á– Ê ÙË>â.*t¢'‡±K§„ •0órÂZþŸR 2À¸V<ò’儱ºŸ˜(iàvÇoùŽu¼E¸¼EÚCy„RK©Ýo·á{‘ŽoÒ¡ºq\†š1„ZÖ 5äÔÄšEª1´rfïn!%àkÚ…¯Yùö9ΚpAµNNCÿÆ`‚l‚dL®ÔBràœÝj Ðþ±XM?]µN•h=tÉl·‰õ^‹f|VO.Q‰.i|2xS£ƒ ´ZÏÿÑ7”;tDNøèÃò²>ÚŸÚ×*#å@ÚØr˜ýøRéò¶œ‚'õ^Ë–Î2!5©kœhWÏX¤NrZi»Ý¤õÇÁ‡»¶<¨fÔÕê ÂfP P¯Íœ Ú¦ šÝ„˜×hÈh»Ó9ê~>¨N—«GOoÿ~&´“b£ÚBá>Ô¸ˆÚm¦¶@ùoW-€r²‹/õixROá2’ꂤRöC€±¡qÚ0{8ÓȦ…*Þ;(Ò¹êþGy ÓŽ…‚|3d”IÅ#½L[9%òÕ‰ïK5k¸`qÕ´)¾÷Ï1 T8k¿“ã𴌈™TÚÉ “Žh&¬2ɽ*ÿ,ª9Îdy#JÁÃM‘-ÈCw(%£:Ô,6%8®E™Ð½(jQ¼Òf"à%ã“M\BKË®&êû,3N‰¢aIIð¸Y²å !4…’mNÖóÊ۸ÐYtg,’5…rRBòÕšžÑÌ1ÃbŠ(\‹¿:ÆçP«wájnÓß"¡‚$rôCÉh_ÌKõ·HäC­Yù¡çF3Ü„0³ ×÷"¡ƒ\z5ª™/"ØAK–Ü ðÒ %žä´+oøF8Ç£øA äUrISLÄ ”c ºHdS„3QÞ@Rð¤fnï-gÏfƒLSk Žñ‘Žh¦ƒäJÞÕt‘F óČȅ™‰aŸ‰ÜöðrF=5#Š«æh«æÖ F…^ìCäÅ>}™¢6ÍVDª˜èa}Äçpn¤•;àÑ{ÆÈÝ2s2®gâTÍ.8j†:‚ˆM5”Sjßø8b«¤2‘ÖS\+>GY'•ˆãQ©‰T[ÂËy o/-WN‹H/y/GM_óÄÅ}çÒ”7€<©7r¶‡’›¨Üð™ç3´Ùt¸¯gZZné@r¨¦tÙbáƒ(`Ûiå‘‘•'í¡åSÊÒðè§3ÕÅ´M} ½‘528r¸1gŒ3`ÐÂi,ÇéL.Ô8¨¢¨©Qù¡ÇçøÛä*އ ÑÛЇ'­Äõ[rÌ©þ†>j^>ß* –ùævãR¹TVZ ‘Žç^Npž3©¤oðÊB Ëû6äRƒkˆ­L ‰±˜!d¯¬*o)xœÊzB»Î-Ÿ{^gRÀ†è>9h~/ÝÊvÒ;¡Í­ˆʆÛBtr¼ô~µ{ަüÖð¤Öž‰œ@%òꂼ]eÔ&œ¥ë夞÷(íPŠ%ËH ÃèAl SøQÓPT$¸$z9k¸;¤‰+‚[;NèäŠwcížã¬¾sHbyWá« ˆµAæ$£"Å[Ç'áÉÔÂ[» ñ›Aî4½jB!Áª…J|@°œ©ò‚‡—Ø~n¢‚š6GEÇlÓp!+±Ùxf ‚ÕÚÅ®£%”ö<†ÚcÜs—žEdÑ­[ÊÆP;a¹µ…⢰…êaðSðhQfõ¿¿VâRI·äªù¸>YÖãÅ’fà }G;½‡z †Üâbe,¾Qlå¤BqÀÁTÒ®xȘ„Ç0ÖÑlbUYL‚ËdD ¦`û2”¢j(éÊG U<\4aî4å·1P1¬·žP<\Ì6p‘9ã@'àyLù£B/¤P2NH-"ÁÁU_áb>ÌÖ”ö<®ÇpÑYTtJŠDP£œ–}†‹QØNRSÎ8l'DùÁ§ç8&9ßyá¢cA†5®Šb欨+{áb\%e gvrå÷þ9ÍXˆ8#ÔÁÂEÇì‚qœî&Ç´†;y¸p1Ÿ¸5ìÁ8ðˆ^ªXnœ†žYdÖZÎÕþŒhb hè§–[ª/~œ†Ç0Õ“‘<à4tìCÆQ©ul_ä˜VZ÷eå”0²øIDÅë=dY§t™$¨ É1#ܽG6Uô`$IxÜ¡¼‡ sê0ª Û10’][&oºaš×]I÷i†y×8©QÁUŠëŠÙÐ>׫¿RIºv}["v_d°¹‰°Uhxôl‰›ã¯Ídüžæå§i=› ¿¦Ê½o§ÍÊ7Úí^~E£Oú“W±kŽý·zNCN£ ·mÌŸý¥³¾Îü7R«î1YžÀ˜€=?ön”>P•ïN¶ÿgïêwÛ8’ü«úcm$¦?ª¿¸Ðgs 
.{1b#rlK‘:’²ã5ö±îîÉ®ªgH‘’ØÍ&»ÇJVÓd‘óëªêêªêîªu“Ýz4`§½ŸÑÆÉ1Aœm+ñ0¤#mÛº¶Õ¤ïq&Nô/o?©êåÅð'8§.ë[j绯Ÿ4½È_Íg¾ÏôF _¾êß[MoªùçÓ÷]{×ü‡oß¼|Aír—(ÅÁ´­òýöÀ$ÇW87:ªÿx÷Ý/¯—õõà¤}>ö½—ÿŠpMÂë^O«ëÅåléÿ9™ÝŒ^®/oÀh>A³€CDûyäð@mùp¹|€ÿ5ÕonÈ'‹1æ5¾¯éåE5¯¯êe+ΰÒTºžŒ‡ãåäó­D¸Ö˜·£W|½y¾ÿv]eì×ÞU‹9º¿¸¾×¿!Ûhu½lºÒ£_Ú.kç~ öFûtþ]Ez´*çZ›Çï6ø™#Z/òø ­|kxüzLÏX-É7S48Bãš,N{Òžö”»]“½){±ö¶LXø—þg÷éëÊᮩ¾»íR`κô"®ÀNßw躪ߧãÅåqñHï9U:sàäÕâŃ@1ð£äs¸äÑqmø×˺„‚¿¤APúwŸ+±*|%6í¡®üM‡D<©Q<ª×ìjÕ—xT¿O½wÒ;™ÖKúò¢¿ 5·{”ÐaÓÈqO'’¯C=Ü ÷ëì¡£:§-š°ÊEª-8ؽ@ù‚Wóz8¯+Jx¬ø³²0ë¬ÁISo½q VÆåìjüauT„ 2i¼J¿6ÞâQ.RθÅÙæHËÀ– ûðÅ¢¿~Ùÿ»]4jG§pCµÜàê)¿àé4O®÷øè† ËßJãmf¥ àõ6ò0i*f冀wr9Ø¢¸ˆ_¸òtº¿2’I06ŽG¤ÓZ·å¾·c½Ë¨n^„!œ4äÙ±põ¹†NÚÔ=šӕmâеë`a <Úˆ¸ @:ȱ˾'¿i¾ëvGÐEæ*VÓ¼¡K/0ÑÕ„VZ±øD'vÁ)ÐÖjÇ©*шð~v3]ÀÕ„‰A®;Ω尌ªDú°sc˜ P˜¦’º¼rÑs¬±Lí'õÖqëÅo·]|Æ·®k V¼$°8„ãZÄТó¨&uº}½1qn˜‰Š ÇÞÅ2Ex4“VÇñðÔzbÚCn>Å-û=Çt˜c–ºûh–¥—üÌ‘3ëˆØÒé7øî§ïÇ%Æ£õFBY뺙¡úŒnj‚ФN8îRC•ßÝmyK‘„Ç1w÷r»¯êå|<\x~Ú ?¹PŽ)T?Ò ÇóKËdyw6 ’y¹ò†°¹v3<Ü«ßõ1`w¶¤Üñm:c4-{ÕÇj<¡}ÑcE³yÁ<”J6‘TrÚCËßœIÄ“-5»ãåÍŢ߄ªw7‚™ Õ ú~‘r5DG5K³¥áºnÌ^Zg£Z'`ʱÈuzE5dÀ×Ù{cóðBï¹dI:Åeù9‘‚'õbЃù½;ì’av)F zÉœ8`9ò¥ð)ÆYyq¦à:WFiƒgßl¼ÞtnÑì©-cJI3Ú8u¨3öÕ‘;¡Ê«@ iÎûl¾m~É ¿Œ¢FŒ†ÅVaã/ÔžÄé  Ì¡|MÁ#õqÕöšdŸãB3,:TS~lênñvP 0=bþnøw«SèìŒê†i*Ì4Ãq`bnÒ°3üPjÎ;mÁOÉì¥îsÉ(!9ëéTrI–¯ƒS—ϤáIÍì¨l²—ý3A. «œn·ã逛LUã»…-dÂOÁ9îGïâárNlt6¬<m‹T)Á2‰ˆG—eO¿cØšïX”ˆGé£Sá±ésv›C6Ò…9I5&†\ôã“à_ºÝ$àI­ãøðÔ>fX˜‰Nk& ª¹NsYöͺEh²—Qô xR»Æ•pL‚f“ûvˆÍ°G%Fjg“©®Ú^óAi%f]¤FÑ ay¶ÒjÝ"—¦7.uÙ}øßÝ\AÙ‰Ö†q ANqߙπߢ "Žß”錯šÁŒÖ6ŽÇÒ‹ôp>†’Z¦/8F¼ÜDN•¨›Z¡7·±óÆ}h ½‘Ox)x$'¼µÿ:YÝ{^÷¢ßýц@]p£Éôq'È|GÆ‚tŒÛØñ1 ÆÊòŠ’‚Çåéºp<‹E˜ÅtUƒvˆ I°ãË*ç´\ÈHɬ7Üv ‚&”‰4=jð¤ºö»KNîò¬ªI=_® MPA^*a©­„4ì ;‘Ç(­Ø cR–•×$<6WÌâòäj«×sp·qYEy|[$‘Ž%7õ*®à à7ŠdÜÉ"Œ’Êy«=Çjz<‚å‹Z} *¥ðÍÆë-Õã.(=íÂÆM«Æ˜FÈŒQkäè/¨H÷†Nw¦hª9(cq<’éÒ*0\W#ò=xWÛöQ|Ôj;²qâé„eÙ´ —M€„âaO%:X‘†Õ²šÌ>ló‚<åt Œf‘1pEEºPвƒ0å›Ë¥ááÙÚLF&Úö‚ï‚':,u¾Lré"C”ßÊ×0è«a£CÈiá‘ÜLÚCw—wVüÚ{ý÷q£‹×íñh:-޾¿ÐL0* ×{ZÝ^q˜}ćG£zz,X]ÞÃJÓzªv÷ù{¯UHëõªËÌ»‡ð!˜bGôZs-Œ‹¡ \¸ânVšw^(Ñ*$àÉ™»¬g-Öf(X$ÅöµVÒÍld$H'D¾2?¹´ZS/ gyÌË!:Þ×…Ï‘B…=ð(“ÕF´KQsÜf‹§ÃÙ¼žÑå+ÏѰkèÐ8pÚ¡¹[PCõûõÍo®qÁÕt-mŸZ‚GÂr›Ç±7í+¨2wû–/JÇ“F ßWã ¢~¾ü|]Ÿ?{{ûÍí¢?5«V½x†*R-fÓóg;^×Ëg½«¦ òù³oÇ ü1âÇìÓ)/Ç×½ãê^µÅÙêë´²~¬§ËÅj‹~:‰¾B² Ò‹5¤¦Á଺!¡?;V¼Žkµ‹\à„YÝP\•4_vnªgûôÍêœR\U¦Ý_S÷¥Ÿ_?Ïn–5U;ÿÛwþ—¾­©^éÆg‹Áà[” U³^ìû• 5¸w¯sØ<½·¸œÝLFþçE½ò_/?Õõ´w5žÎÖJ²ô½1ÞAW‚^Ó7‚®‡¶bš|îÝøÊ¹Tþ²šŽèÕeS~¾^,{ÏG}oî÷ý_[Vyø¬±G‰j¶‹u]­ž½‚~üÑ÷kª0ÔTM´½ç¼/´B?\O5$•ŽT«Eú Cð/ч"‰;|·»›¿E²MÇ÷©ÅSЧ>O}(žúP<õ¡xêCqÔººUò©ÅSŠGЇ"I +Ù‡B+Ýçàþ¸Ü }?nZ]Â(tmqx–‹BJÇ£›¶FQ!–ŒÇpò'–†bÔ~û["r(Çö-STìYóÈC-›%¿×lå×ä„ñÉt­þø°Œ«ÕxzÚ»¨‡Õ ÚÓ+Ê¿-Ñõä•4}ƒ(„tB.z—7\HeÆÓaÝ»]a¨ö߇SÃul<œátÞ¥B¢Ò¾¹l´aÅÀjBè?{tUUÁá›Î–8ùç £ùÅ…Óã’2žR§”^u1»Y>PŸ¬^Gm 9Âà¼M_ù¨üä;2%'ô#'oÐç{Xù­5Ì ©×NtÚlôå]ÏLû€ 9hjöžs‰vW«ì®ôKXÇäcK¿¤ Wî_-ýb7¸‰ZÚj°]¦_IùÔô)ýò”~yJ¿<¥_žÒ/Oé—`ú%a]«ŸÒ/Oé—G–~IPàÍ™ùÓ/²¯­QF=7¹¾ Ò\Fly:ÜÆMk,X]›ãÚÏ?¹¾4(9Î{ Ä¸.PÏþ€h‘ƒÒ:hAëâçÉ“ðÎŽ¨Tx¯ Ëb8TÓÏ7ãµn ‹2eb퇈NiŽ)UØLguyá¦àq¶œpÃSBk&ÓJEP¢†n6˜Ò˜òÂMÁã ·p/gˆÝûÄ9äœ)¥Á¥#‚Ô m”Ì.à4¨Ž6llý îTΖ2⃈âqÌÚ²Bò†Îpe,/&p|CáëAu¥¯Ÿ§âq%„|E5ÕÏn-¡‰ðÏp%#5R:¦ËˆºàÀ활7Ü¢ƒ¼]Üù'$Uì×a+äép=â<`á¼ÀSðˆr3œçÂŒ£…ŠÇ€:P¦àÔΊÔêŒ8>Ça(¾TÖⱪ¤ˆ 2Nj¦0VD€"reEGj´6âH t`¶¥æŒÇãLglZ/é‡Î> _à"È=Å”âL³Z@ÏC–1Ú¥àvá”%àQŒ•¶sÏ Iw$c.¤²Üš.„…‹sÖ…km7tªx¢¤yŽáRDg6áv¼ °}é ë´’‚:çÊTtÄ$ˆ¼W~s\RMè¼³<J\û ñ‚¼´¸H[£y,”°¸pqUrâÇU7«-ÞK2‚³>"fk9¥b“z×[[ZÌÙ°*c;s«s‹y:›Îg³e»,ª 眖L¢üb®¹CG´€k^*W²¼Sð¤6nHå\(ûÈûÔñËmë§3Ür *FL„³á °Å0ÿ Ó‡ÇŠ2±ÍÙÆ9·Z‹ðÎ Ñ1ØIô™qLqÉÃæè,¶HÆûÐülxÁŠ'ÃÒð‘5×=¯'ÕE=y`÷à^ÙPä)ç`h##cà(ó;›IÊ›€5íÉ'ÿ<’ß¾|Ï52,p`è~„;Êy:¦•ËÐ̼kÔVç=püÏÅB3ÂãdzZª£¹Ÿ0PúQ1 ËeO¯g£–…¡•I‡9i8Zxà°zºÎ,A4ÚH:¼.xí锵ùv¿sð;ºU^sSðä+v}=Ÿ]Ô!&š0µÕŒ›ÈQiOÇén%йP #:}žÔ³ŠÓêª^\WÛã·¯Émö_A7#È:+•²tA-•è’Ï7äÇê€EZL´t¢1Ós,¢å{à1¼°˜C{’ P2Ò½³¡“F”s«<Ò-°¥cÅÅìŸc97lÞŽíê5¼™—ŸéÊtýÛ’Š‡/çÕ˜ÊÚŸ¬ãsæ‚ ãÄ-‡¾p Ò)Pùíc–Æ(É·ÖZ:éÊ‹Ÿã˜–‚ÇñXË2®×mí¸ +Ã3Fн-Â݉Ž[éíá2¨hBÁxya§àá¬Äœæ,È0‰â‡¡’wP (/ÒãRЅŰ #†qZÞï|k~]u 
>¦ñ-º;×õ°uT§­hN{ÕtÔ»nß‹ÞlLÀzê2ûmÓIzÖëÁÚ±/z¿‹õÇrÇî>ܹÔpÀû¯Ä“8!ÇÝ·9Í=TõòÏä²6BÄiõ?7ÄØ£‘=Ü0Ü*T<z9Ç©ð錿³8?®'£þ_æsê¾Xú¦Ú-ÁùŸHúôÕ_ü_ûw¿üG=%‘“4%²eòókµ¯Uÿ‹/ž³ß†Œ1Œ¯™b/N½õCòÓÞ›Ù²š p²­j×£;íý\£ ÇÄ9X¢«ŠBÖiÛ6Õ­&}_-.'ú—·ŸTõòbøœ£œ·Õ®ºiÀw¬ËWó™ï)¿Ñ®›¯zuWÓ›jþù´Ç}‡îõÿáÛ7/_Pkì%Jqðm¡~¿8ÉñU#ÎFèÒ„ï¾ûåõ²¾œ´ïÑǾÏú_ŽçEûÿ¹[»qþ®eÚ»“Þ˜˜†o”ç>¬ÚÐ|¦Wwdœîv>¿Õjª°2÷\:m'öàwaäî‹Ô÷œogMˆ7ójºow±§W_>U“ɧ­àï†fò‚«úbˆß¨s €ÄX ”c¤&Cœ÷ë–öôc4ãÑæP³ÿD/«ëêgór\/6Tiýöÿswu»‘å¸ùUŒ¹`Ý#Š¢$60Én‚ ’M™Ü.‚»¦ÛÿÁöÌ®±ØÇÊ äÉBž*Û埥::²“‹µ«ÔÖ§OERuÿ(IÓwBîÖcßrùúßÿå/Õ1?ÝýFzÿ¿¼ÿË7ÿðËÙ¹Jæ7¿·Ylýñwëëó«û QÉ¿*}égÓtýÇú‹(›ûéƒËIkè¿_Él_®ÿþßëoÿ¾Ýöþõì§õÉýÉùú÷›ê‹úÝÅJ´åÝõ¹ØtúëÓž~»REwûðò/—Wºe¨`3µ¾+󌲣o¶S>‘E’ 9Ð3g€Ò7Á›Kq uò«G’Ô­Y~º›ðÌy¼ Dåcˆêtb 1ê#qé½Ì1‡AOBÕãÞÒ@—V±#cìÒ.çÖÓìƒW§wÁ%MO2Qi–Äò2-ýLåÓržÖi’|‘¤à•$-P²Ÿ÷V·•ÖÓ]Õ‚'ûetÕD[,Ó3G¢lÂŒÉ#-4Ó}qâòI'mx 9OnÊ'D똴ïö©H§ÖGLl½Ð¾iç Ÿnöª9›ºYÛ-ÿ6åÔOò‘1WàáÃ&¹¢²é·kMh˜êR! Œ˜ÊNa¹A¦ö›Ã‹‹eþø-xšKm¿Q´Úb/ÙKúZ.°gmé€ôÁäµñ€úÚmx2 YðÜ'±sä˹Úsû‚ï,¡mp ×o:Íw#ž€íÇñ´¤+¹Ì%'´~´… ¸L½„°Ö#öDf¿OlÕîÓ9ÂÅ꺆Â=ŸoþÈÄ,™õIKBù1“©]‚Hÿ*KØ5e8˜tö ü&ÒÔ.úf+véõÛ¾àAô[ -x|ê¢ ÷%vYÌFWd6xt¡|0µÓpbë¦8ta7 ey©OsT£3³±Èl.êÅc$Q35sݱ¼7 Å//$-xдªöƒD(óËzËŒÊUÂ7í /½Ò'Ñ ;±yƒ=”Ìn€¨ P¾è·m‡´À[f~.<+ƒ©ÈiNŽ#‰ƒeŒAÚEG#­å½a@É ’<íR^9: zÙyä@h”ò™ÚZâͳÒÝ0°¼0´ài¾!ÕÓR>‰˜R²»&íb©ús½ëÞI¢›€ÇÅOañ´j†77æ&ïÖçš´wsÿ´û†ˆ}"»Á[p9ïßM!ds 8{°x)æM?ˆ²³f‚ïu×h¡M¼r‘W'–dK–É1çÔíìf‘heDk ÅW¾û ˆöÚ©]'Á; HpE^#xtËÏBlÚAæ÷{ 1—OY¶í–÷W7ý$ôÎUài­|ú˜Y>—S(ršÅ=G*X‹NÚÉX…cà "‰*ö ¢°µH? \06xò» FÙdc'ÿ¡O§ì23õ^̳‘Xß÷€6^ˆËË€öÃ²ç¦ <)ök=XTÒTyφN¾wþG?ü|¶á²æÏGW"Ë7g§§ë˽ø£¾ Q„¥)ýDHì¡Oιž§dß<¿æɚ騶£·‘Å€•3] ©f"\6¼Ý€ÒÝêöçÿúr³ºþ:Ý'ÎðÄ›æ•\>QÂÞNI+"=YŸ¯Xúû¬«ÿÏk’=m#OE³ÎzdŸ¡B2ë¹é`söÔú¶Ö"úQ‹<­5Ý_-Úç´Mla/¶¸ù`l,¼ä>Ôdr«}PÃV¯õ‹ŽyÉì\úH“‰ÐKº¾9»Òšeçë_×çsø—Måˆg7 7——¯ÏW—ëG%7eW…’ à?9HÑA.¿Zµi‡.ͼvÖiƒiC½xÆo#žÖX@™Ä·¸3f<ˆŸ$“™-¬ˆ[ÓŸ×à}L©ü<â¶Ž˜øàQ7´P‡'þ§ó«?Ýž|]_¬Þö™Žwr(C,Ò.k©*²`Šº¤æ#„wÂÉ~ùémÀö˜Þ’½š:q—]¦®š¼] 5€…åÛð@˜§À¯”¼§9~‹q¿ O £Íô\äÒcd1ŠÑ²{¤¾O¶X7©m8@ZðZ@¼E¡1ÿI”#OælÚÉà†í&xòÄ 6x0ÿI8'_'¤v.ÓÆ8„`Ù$Ò9,±æÛåT.8Z˜¥]L–:‚ÖAs©‹;\8]Iåä±ÝñŦôÛŒªú¼ãŒÉœLž ³±,ýˆ¨¯i'ËL“¹EÀ%]Ö~8óó¶û…]ÖÒÃtŠ!»ræ]!K¸MÃÍÌÐy@ª–ç¾]5ÆážÏ•W°„!GÀÌÎG.]ïåÛP7°»ñ‚ÒÙtØ9›v³;-̻ܵÇõ×7ëÙÇ&cOüêpæòêt}|·)Mzô·b<ߟžÝNEDNvk°=Õ2=ˆý^XÏåS'ùïi

–嫚´[ö½ w Æí"pÎ’@ËkÄCCv•)ä9±Ý- K–Ý>ÞÀqmFY«W۽ώêAyÿ±¤Ãçõ f…»‰ ò¬Q†1BŽBøXrÔšÉ_Ÿû9ZMj‰|ÔjØ;‡L¥L^ªÉä­î4ÁÇ«VSè幬|y³¾½-Ý'Leƒ8àåZiçVÒtHÊ»ôúvÙñ#¢%Ú9*?\¸m‡´ÌU‚o74¿žh,Û½!9/6ƒY$DNanåƒâ›9Ÿ³u—SÚ…¹iŠ'‘³ñï“'PCbyÎ §êl‰ª´Ãæ²ÆK¢¦à£9õÚ΃h?‰!» <{-øW<¾ü`bÒLÆ2‡d"— 4,´ «ôzv4ËœoÚ‘!Ég²‹"h;ê¦ò_^ÂÛûÍĨ! Ù—}°´?¥ºIB!n€žì-xrì•ÃQÇ$vb’±£¼3zúHÂÀˆÐK1² ÑÐ2Ï>›% HÝRj.ñ!w.v•<ÕhÔCnë”Òšðäyžêt=Xo Ÿþr¾[Sø•ÙdžŸ‘œ|ŒVÉÅè³o~faYÅÚ}]x$™á‘–N zš.:?Äe}^ÝžèxÂV–+oÄtšðt»–¸¥ñ‰ÅÛO?Ë_xyÓ ‘,é .EÓ?‰ú´››éÙ÷\î!bô`£#l|é‡Xuþ7xrŸzW \–ý%ñòRäÄ&viç\?¿#xY‚É*Þ?ç‘ÅCB“¹œ¤]¿øÞ>:÷|>ñZ6•PòNGûnà!¼–­­‘³õx›´KbÿÄn×»$§Y-xØÍÈ–{®kwÈ{0:6ú·|$™X·¾dÖ=“v¾ù``” ”ôñèìA„qé‡|ó´EñÄn¥ .V'_Ï.×Õœ–“î3EÀäÌÄ,v\€n‚Ñ}úX YÊZÛ¨¤¤Ù‡}p6ô©C&í»*±œmŸ9 ‚–ñ íbóÉAÖ70Zð´Þ-{doÙßeÕÉQŸásÞÂ*ípÎËÉsE²'˜ã<…#•º9ÞêD •k±˜ ½ ìLÔÞÍ~Êf¾hzp„¦O®`Ù˜râf;²ñÂ1‚·(,ï,¦¸³æaº(,nöÄœÅu5eTÚÅö¼ôÀÞÆÎÏ¿Q¹%–Ô%êËD(¯í|Ø_¿{LQçmÈý±~óïÖ_nV§òCuùf‡X«]ÖxÕ×™í¥›`g¿xô« Oìm!“ËdF"}¡,ðjëóá¦Lç•Z: <Ñ-} šX,/$1²F1œZ44A\(ܹ,ì½ÒÑa6Ϋ›:Ë?,Õ†§õmÈý+ùîfõeýëúæV>»8û²y¯ãÓƒÒynpPžNŸ9`2Îeí°¾¯ÔÇÜê§?kÑk;°Wã5W¡ßÔ=ìg°Ç(>!ÛhÅÒ~çg4N®.oÕ¦™cqAÔ„û, È“‹vÛ™Ó“¢Ïâ™Â-ípÀ6+ý '1·lš2; ž‹¹Ç6Ó‚>‘€z<­ï1>ßõêØ+{ºÁe½)Å–…!í¢›• Õ .Gm¸2 6médzN¤§p³«A“בXömBöÈCÿM»ý׆=м)w~öÓúäþä|ýð$òñõêägùasÁvÎÉÓH£.°)Ö.ªßë:ÍcB°2q7í X ÒOÔ§}*ðôßûê8-{â>{Ê-ÃVÛ5Gþ5s9Ûƒ@àÆj?]â <1Í®´yŸeÿœøŒÁRÚÎÞXzz‹’Ð@áMõ€Í¸†¶£.å ´ìx  slÉ´´k?#8†²{ ”è é'’(Š`ã‰gV7œsX 8{Hhe:0´Õ iÐ̶) ˆ).ŠëCxBê[&wËãDã–E!ñÓöÇ—'«Þ'ƒY1cRˆöHJÏþî‘áCAÄ@ÉÊîÛ“…CbMX0‰å [æà†(Ÿ,J.T¬”0ÏðÛoÖ_Înïnî·9w›R¦Œ…ìsöÑT9>Çæ#–>ø2ëµO_N¬H¾Äcû<ß3ßÇš±òĈ:’)tÒ.äŽçÁ0!§hÄ…ö#ª(§ iÙ¦%g^Ï>8„aß 4Ј‰oÀ8•®'¶¶d=ô&²°Y˜xQ}ô¬¬Á¤•¸töjž‘û½;»ÑâÛÒôåRo?m΃åÿ.ô ˜Ë1†DɬÍàcj5ÏR-i"ZÀD§—½—Ÿi­4ħõ6žä©ûŠ.ÇßYìo½=•1Qó‹Ëa5øœü€XA žà;¿%½ÕmÝç%C(|r!úÊ~ÛÔÎ5_Ø)®S¯žT«Øè<.~ájÓOu“+ðtH¡¶ìo/Öw7g'S¾gDä“õbô¦ïP¬£/tHÒq°¡òâ’[é¼<­'Gs™Œe&£f!'#…zjGéK\íöŒ Â'p™BÌ Æø¥]!Áh¡EPž:ˆè‚‘³içhùE ý°·jƒmÚµjÂ7OL.­ۜȉëdp-Cp™ƒ96-³ùî•àÈû×ÊËêoÓ ™ÖÊfÙ¼Í €FÅÆ©o~_íݧX+©$Îh3XN‚'f²ñ„æ?‹sË\ç@@ì,#NÚ…öGÚ>Àನ ¶WØR; RÑk>Ot<¿ÀÇí½|tñù‘ûϹaUÁ:.’‹.zï"X––´“eÑ¡ÈÒ£ ²9Ø£Áå¯I?Àz'%›ì‚8ºôÞ¢]™\â˜ù`)ti—ö_ù¦mÕnzÕXã'â»/s6 €‘=p\>í‡bP¹×³3³·±‰érü' †Ddº„ÒÎS¿—hÆ -èÓŸÄæÐ‚sl¼ÿeîêÖ¹qì«ø›ëµCHÎíîì ÌÜæB¶•nmÜ-lw’yúKn·Ü–ˆ‚‹¤5?_›é:A€ùäòŽSÇ.ž‰ ®é„¥¡gV%K²ðÝ™‘¨hÈ€ÇÚIáøóž6Zƪ–£å=Ÿ*Uô06){fbQ@ ž·<õK‚Òn’KõIM*ÃçDU,æœ|ÐÅb‚ä)g¢™æ¨Ù\ǾYÆ\ù<…L¬Â¤àóÒ"Ä‹p²#R£ÁÒ4¥Ó+–!i½«öã|ê8½¬¨-E™7çT˜‰Cì9½*ÎÒ‘,é«W€ÎK]&% 7\%.}W´%šJÊDh’¯sòðYtD¾®£$^Dr „Û£¢k„Wß $ ¯«¯„-jÚI¬œær4Í?(BäL¬­“2.Žà@)ø..éxÌ•OëtóU0ßM…Ë+š¬G*8 4ä¹d6ÉÃ]F_^¹ÿÌ[ð°qõ¿è¬š SÞ°œÖ]•+דҕlÇÞf‡ÈÝ'Õ†ÇÜ øØÕS-(˜5êÇÕ¹\Õ5¤rýŒ aeœÁL‡ž¶j‚޳Rªb?Îcš”êû$®XÐñk¥—£¦³ú|‰3VµÇÎqI-B-匑[\¨dµA¸©?5,x¬7óGR†æe QõÕ=]eç}Α´õ%qT4Ÿ§ê4bó0àÉL¹y¼Ôt£êë{¾*µ #¥"]Wüí5]dêOžäzÞ«ÏvÕ«/Šù*`ö©W$*ã|îz¹ÜP$‰ôYñ@¦qbÒÈwB©rM:ž`=~;}ƒ¶x_Ÿ4uM äè³áOã\‹¸tse‚ú7õ±á±Ö@­jo*šCUY¥é´-OcÌå—‡¢ëÿ2̈‡[-mï¾ì7æJ´FŠCH•\ï2(·Ëaœ ¢q ™‡šÜжÑrm!Æ+ Yi]ÆQå*Ó|R±”ù&à±Q ùˆ¤Ñc§Ý‹ÃïÊeÞN?ªè“«ú,'ë]Ðô¢‹›\Õà÷|±<:îO 0žzþv·ýãáæóúËjÆý_}u•B•âò%Ͳ­ý.w T: ·›w Ï­ÍçÝöé¾¢ËX×%rJH!iØ‘É\+¿%içãäþO’mx¬ñAÛ ÿ‚!¹#¼ ºÿñk¯“"¦Éøõ¢œì.vÏPW÷÷w-Åj}V·y(Ùÿ«ï¯êl²¾™—x˜Â^Ka%$^ÅŒ2GÍ”¢FÜÞ¢.³ð„¬©OH­­iE©ªÆ\þ¡¶ù—d!ת¸]1®·Oûbð>ÆFi@ÁžL­jo½hìuNhír,]9 9;1U¨2.”4ÍJnõÆJý³»lx[-碿õýXÓW/¾Úl§5ãÚ(“›öw#îìψ12ºßÞÞnvO÷Ew×O·ŸÖóN¥yUÜìòþözÒl}yùä@Rêé˸ƒ¹úÞùŠ2‚$<äZ4Z¨è󦸞7¥½Ûo›»õÃDŠèê&$º§Ò>M‘ʵS;_°‘Á3 gOý9aÁóžªn/*{U+úª’0FQ#*ÆZ,j¹ÍïÂTƒ±«'ÈËz¾:iR!Dšzš(®lWŠ!¾£šÛûh:UiÀüðPlSÀë­ }Xü)BU¦†›Q,ãœ9Ó¿/# Ð= XÛ<œ›´ý±*TáBr>&GšJ-qlÓèÃ$ð! 
„q«Ô«NQÑiihRÔd(íM¸ÙýÇ Ñ¿@Ž;gn?|³›T%^Þ¬w“CU‹ä8pª÷BÂxïÍTè›bv²àAƒœçN‚(Ç'¥ä‹ŒC‡ÕŠ\ä%”éœZ^hˆE2ë¦ÓñÞÃÓõÃÍns_½HŽêÐs(åö“¶+Ë8àg…<蛕ã;2ûꈃJu±n]%äTzm)2•qŽš•áë.”ir^*Fן8ò’AÇ“²ï™ûý½Tå?Ö-²ãR07iö$ú’Þ5ë{¤0Á¸°à!lZ ê=+rÒp݆GÀr‘& 3´-ÕO$‰µÜëý87 R)߉¨ÕÃܳF*•B7ïÖ.×µ‹I|¹¤IƒÈ˜ÛÕíùxqÊ¡I²ðøÐ¢ Ëü$€òÕèXB¬¢+õê›Ôqé/TR©eïD%{ÇôÑ1æÆ€ÇÇeçà7wÛ§ÛK .oeÉnVw?Öæ›ŽÓI¹N°À®ÝŠ8øõ⟿oözlƒ÷ï·›‡’2v{q³º_]oî$ŠZ?”4ùãþûåO[,ׯ€‡ZuP7hÝ7c KšÁF–HçE–w6M9Ú¶ü”âOþfšhDX}DýŠ:…D¤Ò£çœˆ$xr»~íæ9¸,=¯÷k›ñ©ÖiãŸúË2ŒV5+5t°¢´ƒ¢KÁñ 4'ÁÍÝjó¥¦Íú •ìS.¨ Ùƒ jF„&iìDÌã€LºíÝ—¿~ý'Lê­Ýèä+h;GOuqʸҸ½XÄl | ƒ£²C‘Sv(ÛGûSÒ†‡›=â½~ÚÜÝ–U\ÒX~~úR}7)H}.åh5Ä\мŒHå\¸ŠæË“û7º°àAçàÝ):/é,? ü½Z˜.Vë¥ žHžKFª†;†ìr3k´ˆÊ³!“ÃY5ÈëÈðÑþÏËxR›C>™ÌÐêØ=S mÈ·š6ïÞj„zò¸ùŸû?a±(ýŸÚð@ÛC?MÑÐŒµçÆó®Kˆ»Sù¼¨|ëCÀ7º>õ‹IóØŒ+µr³ïG‰Ð<4«½Sa³¾³A Vz_=ƒ C;O„Yòßïö¿<˜è)h¾¼Þn_ª†sªGGèbJÊÝ[äšt¥ÿ(ôé5þ¡Ê?,­•Ž„eœpË+ ­onËZ­{+TH±ÿª°àanPN¨¨éжEE_(Æ¡Ü$høÐ‡Ø¤fs?€#&Ô€·ôE5‡OécÝ ÃÒu œÏ*”Ò~Ð8wý°TÚÏ5œ&ž¼üUÿ+R¼Ò‹uß)‹q %Iß 3K˜žÄM<֓壛õk½½ú§ƒÂ&¡®Äì\pä@2ÆÔÂÅŒ:Äçu<®ÏÔ߬.¯Ÿ¾ÞÞ­‹ú¨ª¾Ò‹:æ¬8Ë8„ÜiÒ;áaÆ-x2õ™îg;úZõSî©à¢C¥ó4ÎùØiÚ;ãÎ#¦>ïÒÒš Ç–ŽàÜßkÔãƒl4!³ÓÜ2Ó‚žHÍï À+ÏzθµÚÖ±WƯn³žz9³yPtIÓmŽÔc!×ÞXDß%³4¨Hƒ7‚óñ ö|8ÿ&Z­O99A]z„(ÀÉQ¥zá•|Oþ„`,xи¼t‡yóºú¦x˜²7±zîF^¬•õr¾3¢ìb0e<)-Ãç¦-D¾Õ.¤,DF@³D2Ž8/лH½Gu 88@+xˆ(%OÄГ7«ÇÕÝöÓq¥ÖO%‰E£‰³vAì<»®´#„ç:<ÖRœÇK@赫/Ÿ‹Wl›¾~J1„€Ú»²išõ†Š!ÓábÐÅ AÁCå囎‡ßù,f‘J_­;TT+þmˆèUQ2w­¥dõoc¦R7LŸ„FœQŠ|ö*žrlÖ(·r°ZT¨¸h³!SN}·ŽåDž-‹ìƒtNt(;Z«¬™e®Ÿ¿S⪑0eÈàš=¡;‘Äø 8l°à‰F¯ãf·ýúÛëUr}ýÀž%prâàkÎ{Šöfìg„>ô§‚Bgô0Þ›tY?»/í–ÅÈf-Ô.Ø÷v;ÍàÅ ;„¬ƒÏ#‚Ô‚§”×p*v!w&ÂÏÞ¤Ïú.gô™ôc ÎßQö¹‹c$˜¡ØV \ö…΄|G ~ Iǃ޵à@]}©~ÎKѲŒ¸ÑÅdîË>‚³òˆÇb<Œ ž­>}Ú­?­×—Åj®o7Óòž/dö>ªè¢ Knß ]’_° Ýiå5œL@ÔSÐ2Îåf“ùm³þcÒVÝôE Äòi-Í¡ŒsÐdí.³5˜‘u¼F,UœrÅ8Ov «›ÎKDŽX‰dÝz EZ\Òt6&( ©˜’‡!Ó&Áºð>ÎÀ}‹ÎôoxšÂ(D7 clhdçšàDa´®?ÆáS,ÕÉ9&ÕŒÉ8ïû¶J< @¦P“F:ù¬c'"צ㼕Œ„g½òê+(û’ØpÝdPiqÞ¹© }Ngƒ0 1à1§µìîç×3ïÕ§²$—忤E•Qv4ó#f±—^Ý”EPòaÀc 1¾ëåÓnûtt=üòén{}P×s¯Áº§F\ý•_Ì¥âüòW_?*tmJ‘%}Ù€.¤¢O¶¾‰|)©ú°[ÿûiýð8¯¬ëÛ'ˆ¢ÛËÕeöKW#ý’Ç™oy; ² 0­Mà³~,¡þó´[O3‘›Q*Ä¡Ô[Q†‘ŠÒy‘Ê|ÐÕt&¦Ÿ>®n~/³®±bøbg¹FdhXð|ìøéæ~šߊVì>†VoE(öî¬Åž>ÔZ}ÛLÍN›ùÔ ô1–ê­(ÃH…þ¼H…ü‘¤úöpÿy½wG›ÑŠ>ÆY?*Ì0bñ™Y+þPkµ¹þ2ý+Ód„fÌŠã³—fµµÌ—^M'ã~ûÇz÷mŠË±ÙNtîC˜uT˜QÄ*IûçD¬èÓGëëÓãêëæÏi.¸±ðcˆuT˜aÄÂó:hˆ!4(†õÓé1F…#åy ©ÙŸÉq&hQ «¾<$´Ÿ‹Gl56¬ø'êúå§s”ý ÑKżòþ·ªJCÚ]jòÀµ¬÷×;øÒþEÃÁ(i\:R±†¨ã Íi°½|Ü•K£RçfÒžrÞL¥ÅšSS­dœý³Ó›@NÕÄØ2.ŽX÷”8 AÇÃÖJŸªgÝ×ã쀑œš]jµ€oM±°°¯[ðX[C¶F±½PÚ?PšÔYg³ìæXRãø9‰:¬Éò焟¦•ïDê3î ·_Tãþ—›íý¦˜þ‡oSÒܳ¾Ž™PÀÈÿ®œ£â¹à }½çRûÍÞŽu#Ät2•úÕ¸Ž%m¿Ã²Žƒ×ñ6k¢ò¬Â‡«ç¿ù¹ù„8tu-‚,_È5Ô ;íÂ>¥ïcå|€!à€i6à!hÑœà­Úö›!×õ†Þ%¯.t9·ëݶ˜³a 40Ý<ÍWõú±¢H_Wdð!9wª*Ô«q>4i>ûN¢_j'èÛŽŒ£ qÇ©ðÒÀuà‡O—kO3èài†;þÑäË멨.”ä0µw|–lÙÉ‹c¼j”d0’òU,!´Ž=6ö}>¯WwŸo>¯o~¯(…ŠÇ1ê`¬t‹š•¿zzü\îÄDŠ(ѵ˜Rè?ãÞúaÖ(ã}ëµÓbÊó§îÒ^s[–àØ`´OÄnŒ~\¹¿{±Q4iCPü§TÚ0D¯FI6Ì…e»Ps¶Á0ó<>ö_쇚¬_6@i M¨^ç@HöTg]ðõ'µàéRŠšB'EÖï"З~)Îi!¹¸ú‘r³"C‘'?b/°à Л?zM颡~9!1r’ÿiØ%¸!¿Ü,ßÈ ˆ+ÍÔξ9ò;–.tJ…'~þ£™'BÝ™&9«a‹Œ£Ø¤/aOQ"1é¢D?!ß™ò]Ó <:’äXÛGTr(%LY³oÄÉß}§°8xïÜ&XðäÞ›…__6_÷ù¼*|ØoÀõ MÆÈ.ë¹*\J$SOCñ:— Ño©5Kõ“øH½ü_§^yËRg •p=cÌYCËž`ÀŨOh2ÕÏi,·U%ÖÏÛSŒ¨4EG9÷´ÿï1YôqD1xÇMÐPÀ]Ô‚Çó²…4Ó3¥ú‘¶|‹CÈjÞªð0ûгñÛ„j«¿òžåÇ ¤ú’° ][ÔõÄŽÜýÙ†'ÆV1òžôšêz)£¸A‘CÆEXê´XJŽ\J*d‘Ì  ByÇUÞYèx*9%fgº’“1DUä¡¿bA<,‡ÚqЄ'/èïQã"VU&Þ&ç¬-ªT:ä_/þõy?óß„Ww»õêö¯‹Ï«‡‹Õ…LûdS¿n7¿=Ïí]=–b"“„/—/¨s¿Ù§A_ÜH¸%ÅÅÿ¬?íV·ò7Ó ýí_»'ñÉäüÛÿ®îÖ[(;{{›ºÑóÃ>Äþ”¶à±ÖU˜kmïl(ÔÕÇbõYÛådY#Ž·°yUËU8B¸<£"G_)P³p‡3è|>^ðûSÖ‚'a#+üJ_TÕWé¸")IÓ2NœœìZ™¡>i„ ²àa×xB/×>^¤•MúãFúK•Òe¦FZí­EˆÀçD‚iéCQãÓ’‚.e ü6[ÿ?¡gõ [¿¼½šæà׋ÂB1ÐÏàV÷÷wÙÑq£~ʆÛ<Ÿìbõmµ¹+5ÝŽËC'òš¶C>ŒkoÉXyKfúhð4€róñØûœLdýn‘žõï§­hõjúËÏ Ë¾n‡”>Nó’lZo4ï0œ ™|ô!k€±Öš¨ä;¥އRƒ£³Ùj¬=eŸFÍ’q•G †c³v¸™ÑAÖqXÿœó9ó <ÙÜ5@QãÏQcý $‹gÍu¤Åo÷ ·©Êå9UŒHªÁõO·œ¾ƒ$êx<5{gs¿Ûþ¹Y?\íoOÞ¨PY=˜Å aR9€Y¢£Æ«ÞH×àˆ³:Û¥•ã;/߉åñuÖñ˜ëS/Þ2ûd9+Š Þ YŽÙ\OªOC+4êʸ!K=„ò^K=D.ã\³–´_—¤ÔýÛ´â¬ow›ÿL{æÍ*ËŠ™\`ÒÎêd\@nR„¨Y­¬ß”q~ÀA¾|§ôàÎ^Ç51zÊ ;E‡)䜎9nçt!²Ò T"‹AÎîNå;%ãŽÇž:wR³ë›§Ýæñ¯Bq¦å/¢ÞÕæëãÃÕ÷_½Ñkm™Á•s™€“’:«¼¶ˆ…¼žÀÄR{8ê ™ºïÓwRN 
g(1EhhædÓ°¯ë9¥@NÅŽÔl‡øðìc@žìúœLjƒºÚ¢óÌéÌ Ù¾ÐÁzÍ$%0ÿ<¹Ùñà®Ü¯îîþŸ»«mnãHÎ…ÅWVŠ/óÚ3Í”ª¢Èw±9ç’ä܇Ø–ÀŠD<´ÍSù¿§gØžf‡Lê\uÔb€}ºû™žîyé™>T3Xi“t[Ï&ÕøIÆ¿Z"dÅ vJô4[Ãc‹R†«;xœÞ™æ·ád:W԰ź̘lohQwk1L¯R~΢¦„Øê|QbvÖ:Nà3û0›v~Íùv-P8f"ü˜¢· ä•§¤,áœR¤ç• ”Í–º® õ8|/Kî.­ât·9¥’!8•ÈI‚h]Ï©kBoŠ…-¥ë¨ó"÷Ù}Ò-­×õÝ<”V9Ý©TÃ)U ã¬e…kòU ê‰Ö–WÇŠ*ŽÆ-£zÖKÑ‘Þ9þ¥:úõKÆn—ƒ¦Hà'KÒËkÆVš[+VLþ2ò±Õ·h¿¼S:¯Ñc„tNérÄ#§ãÞ¼„:}»T½ÞŤó/ eh ‘Ñå@›‡!AÕN*V8ê¶CeV`<'ÜÚCU#C e ¸6+£TÄØ™‰7|0¾ˆÇPÖ€dî´hÛ)›á .ÛÏw¶XÝ.qcˆCùÓ&B(KœÒdbP<ÅçiÛ)S‚aZÈP—"d«c˜&lèìû^2<³Ö!F8¨®ó _ñ¬œLyØZ9Jly€.Â6)½å#2°^°=w÷é¼NåwO+…À[RüÆFËÚi}A}NØY8Ó²+?¤ ´f.¤¡¡MPúÆâ1ʻϣçéÃÌL€±Z£öì NíºÎmШoI²0‹bÃÖ¥jÚImòÏÜdšs j+²›¨;!û¿<¥yƒp @”Àc ZlxGí¬Ëw ,Öº¶ÛºÞxo)™åµiÐd)tÕNz+ТfG&j'ä¾.e¸Yü†· (ˆ‘¼tánÊþîÉA…S<Y·üDéž#8E»Ê)äÁÛ¤Ñ'êLŒSk^H0¢cÊòxœÔYkêD)8Ê î|" æ]ö„‡3@i‡@¾Wƒ€㺇PáÔñZél›‡¢làꀥßAkÈámöDŸ‰A<Á EÆ)EYZƒ,BŸå|:mÀ¬?xl×gX!Ph—žtgCŸ‡A¨ÑiÃÎë•XðæÆøXžÚ©<‹Ä™WäHŠÿù=ƒ;4Ê,P&ž¡’ÊEØÕC ž…Zå4Jðù ¢‡^ye dV8PP~HÐ9q¨~è•(GVa¸ƒÝ Ë‹­¼(Â*îDOÇí¹Ö6Î; !9B¹p¸Øñ’€ÇÞ–2"EÈÄ%GQ’ð¼ÄÎ`.a¸]Oóxü>óBi=»Ã ÝKJP„ŽÎ³bçÏY0(Ï윢xË‚Ó\¦và ¬g©`n x<èr׺xðð¡–Z» ºyeŒ”ÚX.!P ƒ½K1ìÍáC¢ÅUZÊQN múYÉêV¿ÎC¥¥s™®"—æM‰ž<Í7¾»¬OÛ<¸[Ý&O”Ø+üMZ˜JªÅ %ý>þ$ZÏ6!TÚŠw:ÂÒL(±¾„çj²*†l$è*5´{÷L¸Òö/q˜#ÏqèÊÿvjÚe£AWÂÎçÆa,Í|aÞ ÷xábUùëÊ“«´…’Í\”é¼(ö°x"M¼Òð²òíd¡ù´ÉtXotmÌÆ%¯úžIÛ ¾4üË¥4ªƒ£”¯”«„ÈÅ #ôþñÉnX…mn¾(››WÊìбÌfzåŽIv£+Íý²z½9ô.âÈl`fãƒÝwHÇZš _;š1yQ >Ÿv*;Û ¨‰Þ7pÌÒœ(²ª’€'Gµ¤Hg›ô´B0†¤¢-Ì+àE1ÄÊý2‘LçH×7E¶YR«LÚ¨Ó«4¥¦^VÜjµ)ròº f—Õà4Õ½olmÊÕÛïIëNô¥T¢øc žÔ“ÕÛovɲÓ0\š‹S.6M.#Oi–ù—5×k½},ÛF²ÍýZ´ÏL·‚戗µ'öœ×I-u0NeÓÂòç¨i•"~YþtêÊó¬†e–j<ŸÕ¿«çÛºëªÍc7}l—+ÉlÃ`£'†‹Ëưoõ®“¾èh_VØ©—zn²êw³ºÛ\{MTÿ¸M¼øù§%æ]o¤èTè'ǯÊh¡/Òn—i‹}ÒFãÁâ¤]Þt‹KÎIQÌ{ÆKõÜt^’¥)x;ý­žý:ï6˜ÎFCm‹Ñ0M²g§¢‘/‹Š¦xDy5¸= n»˜lŒ‹.œÚ“ÏN,xaÄ‚âÄúu~{]sq‘ÍÅ8]‘µ´dÏKE#м¤ˆ/„û¾4'w‹j2ú½Û`‡Š$ +çüÒ${f*Êp¤6¦Î;²uÞ^ª„}QüWöZ¿½™NFÍÜÜÓó¨ËB1+~ìxNd@Å$7@î [g*´ëš]Üzœ¶° Ö£À Ò±ª“‘ €@a‘t<¶0AýF£Y&ñph¥°ü‘en Í}³úg„ /‚c¯* í\‰¡Ò¡Ó ¥åñ¸t&ìºxlS›¤À›Ñ¤Ýì]M†ã¥&»WJ5ÓìPbS;….Ù9ô Ý*çQóÐ;ö×ä#AxO¸jÀDà±"ÃnŒõÒ[v.@÷¢%y.ã´` ¡Pk\ña!É}%ˆbK܈‚eáaX…/7ZížðRI57ÜR;ɃB~BûP‘’tÁãíÐz><áÚQÉãm3ÕÝX»‚ôô¦^TcM£êM=µÞµººšÕWÍßF»×¥½ò䫸«j©›+,È-‚Shò"¸ƒ„W(A)ö2ŸÐN›"¤ZtÝ«ËÞjë5²…޼Úbé±"ޣŊaÈ“” C•1ÌIQ”ßÿJ6'ߌ(-oï…[*¯ÕÝ]ût!K.¿~ºüþéê5wOñsjÃF)´3eÜFÉ`›u-Sðx“÷>æue^VA¤ÐF‡Ýkÿ¨Q:-ÙDµ5z×ásR:^.v4%áÁ>=ÇÆÃÛétÌè¹;±WÞkãÙjˆÓ/÷î‹ôÚ ÁæÐ¡-°í1¸ƒžÄÜ;ë§Am4ò®ŽÚ Û«ÛÈÎg£œfkb†v%ÂOzq¿Ý3àAŸãÄÈf½óæTñ²6êëšâÑg Þ2B7í˜ýâάTn  ±Â*2öE„÷„ÛÀЩÉöæ Ö|k£DÝ­ÄPA¹ËTšvÎØþ잌ÃØ¥yÔ¨eÓcP£çMOíR-o?ÕL§%5Ž,Ï)1´“©7§÷ˆÚ‹Ò[µFõozz” ­‹À“:ùä`ÁšÎUYNUÚ“>#T•>—p86æ²€Çf¤-bFãÂåñxlâñ¼¥f®ëj¼¸\׃Ï1}bÇRƒN½*…(gÒ•ÐN9¦’‹;ðZáúçA žÔ"šE¯gwÛKc§-ÕønÍÿ§5/‰öºë¸­L(JéAÖöçwl‘±¶EFn}©Ñ ••ŽÑêƒM‡Ë!AÕ ¨4<Òîbk3ÜÈhÕ;-PiV ©¿')~œÕƒY]-ê!)t>½›QÈüdx]ÆÒ§ƒŠÀ9k8pœÇƒ£¡„É=…lÊú<èú¼zêQëAgØ9Þ¥O+¤ä\µ³Æd˜,ëA'¬aså (H˜è=h„ŽQ¨S‰„˜T7õü¶ÚÌ;w©­;:±šÜ=ç"hj§’ÇÐr¾*E _`HÀ£E†õ–aý©º/N—ÅÖNYMjÑ­S$jZÞÿS;Á%ô#„2\íÊ áR7žä쇄Óy«‹Ó (-Ò{-G™T“é§7Y}4µ³èKÝ ªùÙƒîÎVš=MEvÇ‚N¡áÖg8 X_%³Ý\ŸùPfžü#g=OágòRe¯êŠG.…ýó. 
<ż¥³žÓ5 æòºGŠ@Ñhîîô¦Uúð©åÑzê”Ar]ÚSCL€'µ^ÔÖyœmêÛòl9Ê4Iˆà4ªçY h$…Ã3¤ÑMX;œÕW£ùb¶1jÄLÃX_ÄäJÐx:Žå¤Ž*­=çgú´·Ócj×í1‘:¼A`çQ)*’Fæ˜ÌÎÛ|§Ÿ„g¿Mƒ*-z˜³uV2 R”Ï{Å%o¡]W}ì.ÇÞ"çK˜L™ˆ<{šlËí‘=¡Ñg÷Òz£Ñò‹Z¡Ý¾ÙúÀSRÁŽž!¯p%ú°¨µ4šÇƒ©wÏn_jMÑe÷b;¢4ÜR]h·–ì ~Ì`Ý/GÕ, Þ4Xöì£åFŸÍs»í µåGçËÿ?>9šßÖƒ£Áu5¹ªç'KkœU“áÑmu?žVÃ]è) &yô~mÎú}|[ÏG3Bÿ€¶1ý*îÍ~•g?Ý)<ùòfF©ø¢,îfõÅ1þ¯ö;ÇæL™ '߇qöâøïwÕ= ©çƒ°irq>ÜRè2®«yý/óëJY ÍÕŸsY[+Ðx-’N]×—õð“²õ †¡¯¥®)½¤¬Àšª¯íÐÔ ½ë/Sì/>UãyýÇí¡"lk:ö¬GZj5Ðd‚A'5 8>ZL)ò˜Í7‰WõâŸ&Ó¥婒ʯ]ȤŽXõ È֓Ɔ÷ñ–ˆG@¯gÓÉèk=˜~gþúÓ¨Ïþ<›Mgï(äùf2¿Z6xý§`ýðÕ¿5"~hž~ù·zR·ÛÑ.4©¶ƒ‘¿ù§%û–h~ñÕ7â÷¹©œ°âÕIãÉ òäèãtQ/¨³½ÞÜŽkêâäè}Ø7 çÅbvWQB„Fmó­˜ô]5¿¾8†¿ýô›­Þ^þj^“7iWÝ ÁÐÓwÕ|ñãlzEQßüb1º©Ï¾%€ßˆæÿ^MîªÙýÉ=kÿ5þôñ-Á~C¿Hˆßï[n~y^>ÿôþÝÅñõbq;¿8?§„;#]W‹³Áôæœ [-ªó÷ß}xsJœ2D#éko‰o“z<¿øï_(¸% 7Öþ£Ñ鈈0|Ðq°ã­9[£_&¼ûúé—‹úöâxù,|\é5?œFËšŸ{ô¯^*íçã£&ئýk^V­±ƒÞÙÐãçàœÞo‰ïGóÏóÆ]=°:Ì=Î-,;öÅÿ '÷Ô¤ï¦bpÛ£B‡ø8«&ó&õùH±þ z÷7ï¿ÿëÝh˜yüv:™OÇuøóÛU¸ð¶ÍˆèYc®÷™sx0i¼Føó‡vvùÍ߇ýu9ì½}ª÷ƒq½,n>»©È[.nÇÕ yÑZ,RG7?&½üÇdúÛd?>|ÿaRÝί§‹æŸ[kl<~²*Çq øß[®®[TñŸÓaýñ.Äbœb>´ûšÃŸ»Ý£öGèJ·ãÑ`´ß?€ÑZëÞvŽøSãªYŸWzW…øå覦ÁœÂ^ßëßImat½…ýÛ÷”¥/‡µ×M¤ø´ Ÿ^ÿ?EŽÆäU^ÿiå¿kÕÐôl’h5ÈÓ/,ÃàGÇÓŒÇáCòÝ„Ž“ÕÉ‘ö'GÇ䯕½ZÅ .¬û—,ý’Oéyíðµ«þz8ÚI`§QADȺ^+q•ŽP¨ú—Ñd4¿>,9úFž BÉ›ù¶¼Éœ ©Ñ“ØÝ‹km;á²ì¯Í•ôPVí ÞÓûŠ[ó)¬÷>wn³MØkÓ¼Q‡­-À#ÓÖºË6šQÊŠ#·Ú§˜síò*ÂRƸ"̱ S¬ Å“z%ÂêØõÉJÅô,­LØ:¦8ÛƒK F¦ÒXÛÎB ã…º€‚wøÔ.µŽI†*úê=­Æ£Ëê2ªÚ FÍ^„= ¬“£v®à­òÄV…Ý%·ÔÒ[ž^^*Q‚^”°ç¬Ô.Õ7ä7G¸8¤1†äÈ¥¤´"„ê*¼Ø3¹žJs(µ”‘/µêÿˆtûg0ÂsyÏí¹®š-ƒ”ÖsÄÒ$b„H蟋XOd9”Vš²@-x™µ)ÍxÊ€ŒÀãà™i5º¼L[khŽYF ´|dä|6—µMœCÉeˆ." 0EÒYç-…î>b8´B>3¹VÀ51¿¬£$.bH츷o~íèPŠÐ #†E‹ªÅ@{3,ÂîMòEý×Ã-EU¸Ü× i#F x¾¸¾K¬CÉæ”Ó6bpr²Ì`éB•šˆa%œŒyV²5f`æBå¢P啇ÚiõLìÚ”ã0:–P^ÏÒ‰Ú)]€NÄn©?5Ix’×ävN¯6ê½ÝMêYô«wD¢ „çgy¨Um¢¸IȾ†Íú‚ľÀZNÀcÂ:8Ç©CŠ0?,Ó? !V¼µ÷Œê´ØÌ†ÚuÞQ´}¹>ib4@±aóEYŠšÜ¬Q*K\BýTWaáUØ;¼®!†çè„òJ³=A¸Ô#(½C’%2ã<6ÑhM/ÛÐMwßR‡Ôšsô]}Èaù~ «)áBðØÔåðVg»B—óev¯Ÿ(°((HcÁƒ&ÕðÉ,ážÉ‚! ìL ïñJz'õˆïhòiV‘µîš]ÚJ궘¦P NIZj0©‡õ‹ Â¡oò©¦k6¯kG‹LÚ12yÅ»w‘€^ õ’lk””ù«ÓDh>(ú"ëX¨Õ‡°K‰ %(v%2´+B z(îFĶ]ê­ª<¤£a÷B(iJƬ ¤QTÉ®=ÙkiaBþây8JÀ¶ïq”ßc›˜«NêEs0r];Œ±ÂJxv²‡’}·‡O/Â8êKƒ1¬NŠûÂ{¬’ÚFàÑë\þjUœZ‘>ÔÈ‹á}jGÞ‹ª±pÐC +{¡”7"&®©LÃ^Ý Ý0¦¤PÂI¶@()–­’B~Ê‘d=vÅ0´ƒ6vBHìDgh—¼4»­ÌÌW¥†4:lÏa„K;•çhÔõÈNÅ„vÉ—,$3ÒK©(#gƒC/–°dØÚ¦¸Ë/[ÜN$—þŸz°ØÐN·©Œ(Á²›‡©(—^ñ³G8X`M“^ý÷û Ýtob Èч¬ 0R€ÞÃT½)1k˜‚S+®ëáÝxó‘¶™ôc\råž­_!Óu6N³gãR^Zb+Be¼HûëÊÆƒé¬žÎO/§ÓE3І]À¬uŠ5܃ŠA*[ÄöÒ¡d`…v`˜Öÿ|wYŸ¶·zœïV/Ž¡œƒ°ÿËܵtÇ‘ëæ¿¢3ëÈæ 1»,’uw›E[ÒŒûZGjÝÉäרnK-«› «X´Îœ3 V}Añ°ÑW[°æl)X¥ã1@" RÅÁJd©½céÄMomzâ„¡âȤìÚO½‰jöCH˜m;DQÙcк犳S(yhzÀ»{¸ß 7å§§Î’¸«»»C0åw9j—PLÁ¼¤„®4¬óüÑ_Œ{¦œ`gÍrt¥‹#ü"MHxš ÊjîÚ‰©Ñ†ÀÉ›Áx¥ÃÜÓx nöŽÇXÑŸAéF¤±ëwX|\®ÀƒÔSû_þ¹Ù}=£díxJìJèJMƒ.“àfï8/ì³ÑÈúÊgZScëìé#Þ‚µñbË%¬8êè «Í ãìý—¿Qfº„CöŸ¼£\¡0çõöÿòñfs}`1Zb@ æ]ÅõOWƒ“PgK'Îf!¥ C¤A_µ8WH'ç…‘Ð}Xñóé{Õˆ&¢6:‹Ú„Y|~ÂlÇ7wßu:F®X@bé%¹ô í3•¼ïãÙ›­í÷ì|0¥ƒ…@ÌÙR\#²b<"¾›CÌ£íû³ß7wºá!$ÐÉP¦ÎºÀC¸ì²ýz”Õh]#ö˜w† ¥]ZtÄœ5ÑÒšƒèŸ9;l 9ûdÚzBçÜL _‹pæénX¸ÛsŒ)&û NqC9-2˜œ%µxc“,:[êWò±$!.~ï>ËÙˇ/é«ênb¶ï&º+ôú2$ò'ß$¬Âƒ+ëŠs<¦ Ô.¡º¯OìóåEœOÇ5‹¤QÄ{5x°‹ßx¹»_Ÿ`ùÍîêòû·í¥¶ÙÝÞè`€‰çÑ”— j–@q‘O¹û|y©^\#/ \žœ:ú›3Xoy#1¹”}ÍJ8õðE—,a¶ôÔ®1¹1·Sòä¨ O·áÀ?ØýƒÛçG©‹‡eÊLÀ”jðûê䕾ÀgK ´»N€!’R'ºnEn?Ì7NZ2Q4Õz;³!®¿û ?Öî·æ9T°öõш¨ÛÞ£ï·÷§®¿ó”>Öηö¤x«N -И`Ñ‚‹kwºÐú;Ëü¡v¶ð~>Ï3ý.HoäÈÚ)–ý^Åf3I!Q®AéãÒ¨u Þü]­&ÞX¨¤ê»ëZËXáAS«_wzœhù…(›±E‚±nª5û–áœ/Õ r$—<ÖÜI:åuއw’¹r¼·Wû£eÅëblóíê¡ÍÝíìü¡v;·6³©,ä\Š‹]c1ÚJ•¥ ˆ=~¨å³Îñ¡’£AWb·MÜv°`][Òq ¡Ry6˜åÙâŸO]Æ-·Rè| «´u«ÏœR°bõšõ†J‡, ˜’9ðAéZã Õ½´ŽÙW>-@:Ü™!@*/íýµ ^ï”e¶à  ·|+õJ+‡¶!mªc¦¸‚œ¬ó©Oœ÷:/ˆfûñÊõþ’¢ñHyvã+Œ7ur‹æL5-”?–!öJ}¹™úY?><ït<ÓËg³HÊʇ¤•/Æ„Îw1õË1¤S žæNýgyúƒ{ß¿mkXš-–Fšä+–À­ÖÅ [Q×ÀÉÅ`¯!’")² ÉÆ“‚_2{7íÙâ–¸+brýŒî?Þ¡;îªa‡i«ƒ\Z´+îþÜæû÷Û¿ÛÑée0‘_¼Ð¿%¿Ø>]Ü?ì.6ÿÚloõ29»LÙœ¨tP×åÂ(îôG=qÔºxë£^'‹vS§}¾¬9ÚÚKèò€¼AôÙŌ櫢҅Ð+ºò³û¢NZ¾ÆÑ ¬½¼Ì‰B×ÞøÃ-Ü€¤õ<¾C;œëŒîhõ.Íæhgí¤TèãWvy€Í÷­JÇ”|.öBÉ#¦žèwÁÕàiõÊOz´/Ì:ÞÛdD¢1ʯ˜“‹1æÐ<•¬»Vc~ÄnÀÓÚÁô´…ùN!žäb9w!9¨f§H„ÀÍuù«ÂŽ:!LØcæVéw¦ž0x.´…ëÑò9LÉ 
¤taæyž‰K›A˜96BGGO%c–Ì7Aùc¬t*˜ÁÀ}Z6ÿYš¦üP—ð‘¶Ò b3ØG ³©ß³+`X2ƒzß7õ÷…òûÙðÝï?®——®JÑ2Ä m;LèÏi× ýRÑñèÙ,ÙTºCDGÇüzW'Ç^¦‰ñGáóÓà]²„&$ðìmø!úÅ5wsp/—½‡P±¾ù òXn6OtÔÃY:sB/È&~ƒ%'‘2† t—›x©€$Ç™*TQÈ£xÄr0[Í)]k ïOcÐKŒž8Œ–H€ 9VhâD³î›ˆ …@þs*˜#úÍjêØÙ¬Õ9äkô8Ëú‰ëƠœÉU ‡¾µÿ•°— ¹ÌÉW,oD©‡âI",ÑÆÓœôo4ê¨8 F€´J%V8yHH}º‹´€^((œˆŠ}µÏC®¥ÑÖrT°õÎ]-o »·3žÙ’*üQбú©Á³t{cOÊVÒÝÓ’¢vœwxRìsiœM>)*bï,Ð>¶ä+ [T-^ÁRšRÇmõGiD²¥|B`[ù^%íœQÐÞŠ°’¨ÔŠ·Ð¼ÝºUºà_*>ĬX'y ”ïvͯÀ°8»°÷;áõíÍãáÖ¼<üû‰ûV€7;/˰õzvnAiÚrø …'kó*pËŒCb°Y>ŒÑWài±ÑÆý«‡Ë×jÁè­¨kλh«ðì‰W‘–sx—ŠGH¢Bí« ‡8äjÊ!«õZ{Ls/3üí`WoY³hi¶^̱z([GÐK%¢ó¾B]Æ4ÄËÑÊB¬0ƒslíqt>Ág®iE]sߨÂÎÉÃâX‹±T’*5‹Mc.$pì¬EÁÃ]š)Ìå¿§ÍúFUcÉ…ÁU]/`©‰í™+^•3à§ÁŒ¾b†¥âa^˜XQ~­?‹/ed;”1ÂÜúüUÓ2ŠÌºŠ{iHØ>S“VqRsÖ™U@çÑ?¯¾‹C:ýjo8Z¡ÚL:÷²â^¥êÆá#WµTÔ8ʽS¡[xHÆž|G›ÉTèFüÕ¢¶ÝÜɦLÛaÅ‚Ùv`Ÿh±Ü/²SëY(^ìXóØ*ÖMuIe¹"©Œ}¨2¸Øû!Y¬½6ì,T¡KÔ+•FçY¿\ñ–ó‰)Þ˜ÁFŽe#Ñt‡„®XZ_ÿ;+[º&ÆY-xRceÉ «^8uü”Œ–,N4çX‘º ¡[Qo±lXňyê-xÐÿ²³^6‰Ä¼ÇIi‹:È­»Ë¤V>I‰ƒrRº8@·g¹Ûg¬`§á»÷Zñsé—‡$‰³å“”‰!ˆSæJbüëôAî¶ ÎU›GÃGÕE\_(ð„´ì¹ó„<¾ëG*7`aÓà“‹ÉÅ Þ2ØØst­•à#OQÓRÒú½³ÛðÄ•ôÓ©öÔ©XX/¸³ÎIFãùi¢+L ªœMÚE†…É­FÉ{:?bã3ŸÁ(çÙãaê’5SÇÆòùñŽ' ¶Ò…ܧI`'ܬ­ºØÆÍ1¬¿ýÞe—äó&M èÕa³Q­N|-ë¯Y;"Æh­C'ØÎ¬kµ$Ïí… ÀÉ|°²êöx:Ù§ÎÙ&:,:Ñd`À:ÏÜ똇“CHÙÆ9`âÞôÍ×ë=}Õÿ î%Wæ^ÐÑm-m% »v%pŽ™Òœœê<±GÏŠw•“/…’fÏ8OàsF‚h-|¡sÞL;`†«±%?7ùÁ­ßjúN’Óo„¡&º¹É"¥[óóé!)r}&d¢FŸ0vhu±žøÖ¯b  xR·öÑíV¸ò9[Él~5Ñssp¦` F²28ötëOJÔï KL<èBZœ“úʾ«¯7Wß.o?lïw{ÍʬӞÉJæØÓ5?S ]åw¶^'º4ÂTïÈQ 6LÐO^úšÜ ®|~´ P{ZÖ€öU¡¨—lVÎz žCL£¸ê öÆ2{QüQñÍå`€:ŠÂ,¹mË4@êñ ëÒkŽ5Îk$%™ÖÄ1¯iÿûâ_÷Ûÿãugs«m]þ¾øºyºØ\ÌÉJ¼ØmÿØ^M …þíâéA~pqss}±{ŽËÒï.6_žw'šŠžXêÅÕ×ÍýŸò¯ÿýGkÑýþöŸ›Û'q9åþöÇç›ßN.š$Bi€‡¡x@U0[¯™.ý'õÓà´Kb6!Ê_â¥×fcôŽÁŒÛ)FÆÛ â6fçl<>6þ¸}øëItÆÝæôQyÚì9–ŠCŸ¦l¾¡…¢ØúÞ§W÷OûrW•ãNÈhÄ“` žÐ-7éÀ¯ó)ËŽÓTåK!š°È·¶Ãì$ƒZ2«)6Äý¬'<W2ÃJ×686q«|.d—œÏœ¬«‹\†æQ2+ bn v¹OLÏóÕíöúá¯ûÛ‡ÍõÓ§ÃÏÞ3´ì†¬qs+jN“ÛÒ÷d×È)j*BÎ`Ãã8 ¨¯ßÉ¢ ±¹¾wñË´œ “vNwhíªÐ2Oæïr)ÈÄ&ÓAPº4Àm—ï Î’¯àhsÛ4“¡7ÿ+æÏýæööáÏÛíý·SËKœ*Ý­øg†ä›'ôßzœáÈj+¹¤ÞêÑöÑWK žØ^¶0íÅ×ç/Ǚߡ|&E¡ˆECl9F<ÝÞZ¥×!t(ps²W8@³Èw(00Øx¨5 h2ÕàdÙQž&7$ &'] í5 sD´ÒñL®’æ¦æhùhL$ªOk·¶iFÓÓN\ô»#5ÿùúq«:~÷ðpûm»SǼ8Ì ?9—ô¢4Þ÷tÎ÷Ö% Ä~‚D![ñ¡=Ýú3T¦ïdôÙxQØãÞ>Èqd¾ÀÑTæ(2Z*pO×\÷ÒMb‘´ÏŽ‘×5Ñ庲ІjÑ?–] 6gr!K £¬‰M"§Ö±Ç·ú?_ž··×Ǻ¾Xg„Ÿ|J)'q¾ ,B!õV!]¿a i„*iÁz«’ÿyÞ^}ô»?¡ÌO ï’‰_mëVEÒ,Ÿõ`Žgm•F2FËG×ÏzkÄÃ|í;+ü˜tèÏüZݨ‰ƒˆÅ} !iRŽ1ûi¢Kí¥pNAËÖŸ Ü†©÷£Çæî»FœÏòÒŒA$Y·~@ïöè¹ÐA€0ã€[¥¥Ñ#Ø ~kן‰çÔ眆go^Ü‚V3ML?”àÿáoó×Ó´7¹—à…@FðÞ-n˜à¥ð±/ñ‡¼ÿ{~œ*¯»‰Æ#zï—7LøÖ/ŒžYâ0!äeóEç—6¼žûš žÈ…(u¿“1é@è&ÏN…Yæž´€çWžÔܦÔâåß›»[“Ÿe3>bvLâ×[øÁ»å¥ Kä¶*±nÁãiY7ƒwl{W-œ(¹—|ŒžXh“º¤Ýߟ:Hª, !f3°¦ti@G¾­L­=îäùzs{wõuó¸SÌOZ3!7æ'ýé{~–ý(ˆÄÁYüÔR¼¸´ËÅrùm;Bñ7à—º4»¨abù%W§ZÇÐz„:ïûõÛë(´ ÀRЀ‡º½ɾÿóæj×ÂV6ØšµB&Fs¹PßÖÖù¢‡4gÑ«³§[d×þ;šÚ„xÀ÷ªz}ÇÇŸ0q²ü,‹Îpr– ]Êý\‚îr¬µ` F™³×ëÏáÚ'#£¯ÀCñžk‹…þúrúýT4ÌåÀê´°Lf®‡Ðæn5ž]d7k=§ÏцNGqéÕÌWgýcˆ!¤ŠÂ_D¾“™¬2Ã=…n¯Î§gÐxsÃô}<8+ÇÚ—£ …¡dFÀŠƒÃ#A~‚‰'8‡ÝâïBˆâæ½ü®©2BIJJh°à«G®´€…ÄRåŒ …º^4ÕEçC`²t&âКÈÜõbjAJ¼~RÍE­í¢Ä¾5èó°yÞ}½ZÇÿZò \Ž‹É­Í ˜LH,ÆQêRŒ¬ñý#]j¤g4P“܈§[ÂKaÐþ¬PŠÍžé“¨²$Rž¬LTæä]¿·‘°qõü¼6<ýr– @ÐqÇó's,×j”Âã8 Ö‚§Ù>?Å´7&n9¢è( Ö#§ßç.;º¸yÀF6àižà~‚W‹Ê¾(z1GxKá/i”fÕÜO¥4, y\[ð\CEïûÂÕsAŠTG‘è6í©iy3B|XE0z-$E/^¯3’‡õDñ Ïl<1QW¹Þj¡»­h‰G±yÏž¸rTƒ¦Î\âéZ ð.3wPùj,‹›œ#Ëf?@Ógyš9iâ ´HÓ¿¦^L©—ÿÚÞüu0rØàXBñ.0u>ß}䰞Ǒ>ÖžGŠ}ó³,;³9dHr”­€Kía¶>bZa€önÁƒKlñC"õåOœ›r¯.åÒt\9@¥mL³ µô¢Ðj+kv¸^t1óo':Ȭ9Å# дæxj­¼¬ ðpÛ}>áè +Ç)XmNíÂoÁŒ^î¿……_ËpF§Y#6Î!¶˜âIÞ¥dã‰o¯æØÝmîõ<ͽ²«Ã˜§Vá–áÅ8cjå/…+ÿ Øì<™VÙìïÏ_Ä™™xWrZò'&gå˜(X½i­^ìú|mxhSý2,ü5êˆÅá/‚F'ëbrÙBÉå•¶|ìÉ#«÷tÉ Ø|ùùMž>›Z7B™k ¤ÃˆL” íãh×pš §8b£ðôËZ™8¸Ûüi;¯XäfHL!Xè…!v9ísäT>ÏÑùœm˜ì¨výNvÎÈ—ÞÓa{üíñç“?½Ü\ßmï'~–·=zñoz tŒ)ö1Ô;/ 3k‹Sk ¸K„yË ò2R!ŠFâñ´Ü;ºÝA{èHZBQlüBî÷ÈpÿÇãæi÷ø|¥Ïêg¡—9ŸX´I²ÞÀ&º‚µõ%XC‚'ˆ!5Ê{º9ÍBŽñʬ…æ´ÏE¤:ÁOK›- Mèç…:g1T±w8ƒ ÷Qï/…ê'ñroNbû†l©s¡+t…nTç S"°SknþQ8ðå ž‚è]"Ë"Rð–Éòr¿†6]”[vê×:NÈäÇ…$Æò¡#ä3e‚rvDÿÏÞµ7·m$ù¯‚Òk;%Áó~ðÊWç³óÚµ—å\ªnãòB$d²Ì× Å•µ_à>Ùu@ ‰‡m‚ÍV™"˜ßôôôkfº÷Â=éÀÃùvÛñ!E©+ ”iÀеUU› | ºµª¾erZÌ~if,5 ˆš0[JÃ϶ͻƒeÚ´/zÏðáÙ 
ãÈrÑù¸6xÕ–1¶Ã½÷ÖùžÝ^¡†élP»ð$ñ—PkÚàQ!`-…=‚øÝƒ1v‡¬¨h•2O(š’˜hÊÁÁöF,Ú±/çî;ýa€gM3b!Z»¼î®†×"¦~ĬÙp‰Ø!Vžmá »Ëô@SöpÆÜv¬ÁÁ^˜Œ1Ë@ª˜ÐÎÈ6,®½qrBð†K3Nk[ô¶$éʹÁ0Ù´ìÁ¶¦½2j‡¬"¼ê+dCص#*ÔU¨»ñRÙòA€Ì ˜ƒ'®š˜Û'luö·Ô4#åšµ“Üê‚SåE‹i„8Õ~CÛq˃ÖVf>¡Ð¼5ÖD`~ñn8‘†3Õ$Þ¡ØÚí‘®UdÁåj 72BøÅ¶‘„S©XÓÊ5Bë`»½ýWµ»¥à ¥QO¬‡Ò˜IÎŒ1þDÙŽÐvÙâUI”ºkÇ÷Ù ß»é,5^М€%¢¹ÿìŽkÇ<ûfŒ«wÑÛaAêU:½dŒ.üU4L²(‰³C_-ze3ø"š¦é ÊgÑh cDÉÅl™cF@·,מ£ã^ÕK)Æ‹y˜LÑÓOÉhŒ ‹ Гo’q.¼ýäíb™žl§‚k‰?Eˆkþ™ Ï<"f‚ÐhIZN”]kuI/fåü«CÛ1Êuk‹ö@î@Í;æ’½>F­²aÊXëÏ)R¶c¢íôÓûr‡a+{ˆfІ±VŽÕ§J—Ê‹Uƒ[`0oKVM$°ÏQ9yg¦À¬‰m‘·.:F¼`-µJh*šô;fžÔ¦Mï‰ÌvIM3ZÍÛ [½Œú3&,o¨²T´ã–µsÜqïÅ $f/¶Í`%k-O]Ö¦ƒå¸>O§ôYz,Æ à”øOÈ•íhK"â¦@$œà6‰lD R„ÀÅ—éYv†Îd;NæÇ‰E!„iPØNÛÞÎÈÁ ±;lkì—— ÊöÖèz寷âúûëÜ Æ;ÊA*릅èÚÉÃXú@ ªÅ1M¶mgºorLmÁ3œ&ñ®…qʨ±; ÁžÃÝž› :½PÀëT ¾…¦š N‘q4,šÝ&[Öë^EZ‚lˆ¥G5t÷XJè)•íj4OxáqŽéP²<`;&4mÕÈÝŸ!0›ÐÖ–ò–·á^W·T´%œÒ“*lO]ÕêÜË𳀫•½¼·+&½ØTXDB7-+Ã5è½"a÷<§¼ ¾‰¼ÂÁLµ…42‡X«ˆb ± ×NšÀPÛ gW“ôðÒ £„D‰F)Ã1?Šù·ÓìŽÚ’Pï噿3Ìz‘ fñº¡jBí ‹[ÐKºŸbÝ·”öÊA¡ Fh3:»ïùï;‚W‰^ùŠj1ÃÀN­m¥öi^Ì€&ɇkƒ¿Ù& ]KX“ ÚZÖª*8|Yï Ýà¡–R5RÁ‰§”ŒI"ü¥p‹vÔÈVÎýíÏ|)Ò –“ÝŒi\0ø2+”b;t*Û,á‚ç›æãdšN’þp4MÝöòóíÝ)å75DSݨ±¥m/¤vÌPÍi3P&MK¦ÓöÅÿbG˜³ŠÅÍ©?þ“û6áWJZÖfu•6¹ ;LÒ$»°]pðNf ḵ¬› ”Ú«¬a<Ì>OdzdPÞYýy6M?º°÷K3X’’Z›‚ÐN³_±»v`*ÐcþÊFåpB+2Ã(ƒ/Ì b%]õC¡h×NiËÃÇ-™n¹^Ø|¼ü0*ê_âŸRƒûÜ^ŽY¶áê•k'9i»²Ôáà4Ò_ޤäž·àox k}(`iWiÃøøú6,É+Ö:Z<Ž˜Œ½‹0ª·ˆ%ªd>_ÀÚr {UHs”áYî(Y·®˜˜š7s…®Þx¹e³ió.z–Lû)–_™kÑçá`|NFùŠMð§:š Ûp'¬€[9Q½ s ê¨a*›)7aÀÊýäzÏ"\Ù0Ôtg(3MçÈ]»júÈ[ÝILfËqŽLcëEßàŠÁ4ÛàˆèÙl9¸ï–óA’ϸ4yù"M&Ùùô”» ¤¯Ù.ÇË®–6¾ÕVäeäXl}>+bÔ(ŠýÚÒµXY“чâ×M|A7 š¿YJx‡=Îé‡åj‡êß»ZЖ5ÄgŠvõ9µ)UÀqçËÉ$YŒ~u~S!U²ºNÑÿ"ÍLn˜ç3Õà¨ýX°M!Ã` —ü±–J?”ty5ËŸÖm+o>üªCìþ9Ïñdeo%7ž]¿ý‰»OÒ‹ž¿:ÿqº~›û;:Å”À:9Á'–׿Æj e÷§ä.ÛyâS›„º^H½ò‡×ÉZ}%ežãj:öÚ\Ó÷Üe%üÿ¸ýh¹XÀzÔÂIÉgÑ0™ð4\¤ÿ\¢èá‡4¯ÂËb÷Ç %37zTKgÎuÓ‰¢ ¡³XG2PLíȪoÝ`â¿ÁçÅÿS´Y6ÿܹ®£úVO¯ùÿ§Šþ{¾6k'D’¦äàE;áÉ‹´…ñ×"'$Lîï8M/ݘŽ÷¯f¯gƒ¬BÕÊÇé,šÃ a’é•;¨X/aÀ›rù/9 4_r ݤ!w$íæk6ˆùÃShsîDÍ›Ù2O¿žæ³Ñ4Úï§Y6‚–ÏÖ³ò~õã ‘ò†ÊD} Âêd˜çó¬÷ø±Kdpv-¨ÀþÎÎú‹~œƒ,óüÃ4çÃ_OzÑ`”Œ£¼?ïa’ÖËyÔô,2戀 1%=É{ŸŒ–ƒùêSÛ“àÂÓ³ÿÜÖ¶‡Å4í;2.ÒËe–jyIáQ½/©EËížR´ÉxÚ‘—6_³ÁK¯fo@¤·Õ¬M¼ùZ‰ fie˜|J+€åKÂèû Ö÷³§Ï<s»O÷%(ÔÜI³~R\¶=°Çï‘"¯Áe¼Á×ÅѳÅlú×Ù…ëè"M§Qø(÷q†í`ãxŽmášk5kê,è]Ù¥òü¦Q¸æ÷ßO³åå%ÖW›æ+æéõœùÎ-ù÷ßÀÃé¤Eõ¹ ½Ézö* +Á½ Ç…‘¾¢ÒÝ¥ƒƒØ·_¿Ø&z¸c•XÖÍp׆PztKîþdEFÔ²˜*SŒö„àkA$±´ât3=ëFô ùÔñžö]Å<Å_‡°F¿–sôpåúÓ‰0±ÑKÁ¨ ñ|Ñ|\²,Lã"æ‹ôj.h?_Ì\bÁ^ú¯6x:»¥OW,rŠîËé- wê7Oýv Ža—°O=øãúup,Å]ôf)dêõJ#ÞÕÙò«ûcë4ÏS ç>윅 €éR¿4ôõQkiÙ;P%’ÓUdjK&¶–q¥™?ubÑŽUî­õ?ÍÜÒÈ“ˆÿu¸v%0K¾Ú¶÷[þô¸ü÷ä4Êæi¿ üZä±?ëvÍ Ïµ½²”ËЫÊvá ½…ý<ÍF @¿BëfëÚ4È¢O4.ôΗ§‹þp”§.‹mï@—ÎPï,*øÆ)ÎÞÉ?—ɸ¤A¸Áz<ëãµ–qšdéeÄI”K/û}q‘JI¬·oëó4%¶‘.™Lû©˜”ò”vA¨–"I”ár ÒDYèë›Ù¢Ÿö.‘¯~«£´4¬™:ž’ÐBÛµæÅo‘&à–£œž­ü÷X\ôÿ@¥˜Dçº#akˆäÂêU/y:¶pö€‘ÀxkI\©}ïÉž\ŽÒñ v‡ òNGãGeƒ'ÁÙÇG‹jœøòm:Å)ÇÙä@Zm8É¿*¹¯d÷ÆGÉ/},O*™G§n³¢§ =ÞÎòdÜYu Rn‚ÖEœFo°îj4Fƒ"‡ÕŒ‚ÛÐÖMßš“¾K²aïDýôãg™<»èÿ žÀ<ßd»d2P¾}‘dùë•þÈG“4~"µO#÷÷ÓåXm§…1ÿ§ú4Á̘¤lõbÖOÆ0ˆ§ð~ÀÜþ¦àÔ/¿/wÿøæEomC$ÎኆI÷g“Ç0ÍIž<~óÝùÓ3³†{Ü7MÇYïïï2gÀ¹¹ÿÍQxl1XQgµ Š,ÐC¾xqûÛ/çy:ï”ßáÏéºy p-ÊÜ뮥ȓŸK¢ý|Rh7øâøTƒÎ’ ¯@ŸŽY~FQ….ë¸ñÍ(û˜9áµâqgš8*–˼÷o!ò6§Ô±r¹¾py¼]$ÓÌí¾N/ ~úò9{°ˆ½”VR~Aezч§Ò_ò'`l b„´Ù¤R`½Rðe¸þ9ø#"%~Ï’yrk;¥Y…•Ö__­9ÉýÄ-Mˆ’–›ÏùÚY›ƒê/ÀÐõO^}9ùïåhŒœyò¬ÜŃ×^ųâ -|ç¦ëMi º/ŠäÖøñeq\äéëïñ¯•7õbt™ö¯úãôeaâo“dg>'}×ѵ†Ï{Ù ÐåoS°ÏöÂù÷çÓdž g¹ûs<[®#-Å/ ÐôMÆ<¶Ñ‡a¾…è¿]âþiaÊ0)~¼Ky’æ¬f‚ý†K ݽQ>¾ºf€ªâ­Nÿs!°Xj£–åÜlÕ²„¼‹&)¨öh4mŸþ2*âNÃ:Wàe”Jî‰[‚à:cêÉT‹Dc*Oþ²ßdp+F´Vùð†Ò(¾<+í,J½û×åS ”*åÓHZ§£ohåò•7D˜ÿMÞDÉæ›~ßy¸-ªo«£m lc®ˆRŠ¿™í$p:¿mÞk!7“åÜyeΙßÁ19–×gE*];ïÙÕåWn-;öû9íĂĊXmˆ" üû3päFãqéíž¹fÑäÞ8º’ö¸ù߃‘Wîw¯ ÞÞ7åyƒ})=¤ÜÆÂˆÿûWöè`jÃ%ÕLmÎúuø¬9dw1÷²’ån  ì‘ æÓ `ŸÓ:OýbWt—HÓOé'?F )²(éñ ×±ÆsÉ\xð{Æ +pÈ£i?­r†ÿ-rDÉ?âCW‹øàZ½I}³…Eö"?0Ф1È’zFAןòMÿö§[P xš×Œò9Y &ÏŠòyÚûˆ–JäœÆè#1Ó^Ôç Ãú¿M*s£hð:vb·jàIô[sŠ1ŽI(©6¡*ÆïW˜' }åJàŸ"ÌD-éÝ…y@ÊK#&¬I– 2Luaž.ÌÓ…yº0OæéÂ<]˜g×0jOÍ9üÀµ¬PFuaž.ÌsÏÂ<»3°ÔDÑÒX¾Ên¶ºQLÄFi©mC1$×îõµîÏç£Ë«âDÃuÉ„¶j¸#î°AqîàÐÑx(â^ ¶õÕ¡w>#ÁdL…Ácþƒ®õ¹}Ãfý֙Ľ•„Á—´á•ÈP4 
Ìðá2wRÁåû¸…+›E£<êƒCv‘®ÁŽ WTêÚVà‚Ù 4F÷N)>ÙJA ^–Hæ³EA¨¤ÓU¦¾ïÞ¾}}ŽyÄHìþëYbí,2Ö– [Ò^,Ð΀`Ú>û”ÌG%÷?<6Kä¾Gž~°Œ|"FàM`%Œ‘Â{•ĵ“šÜ¯°^z`Ñ?UX/Œ:–³;‹Ö"«lutÑš.ZÓEkºhM­é¢5]´Æ­ Ò²ŠT ÿuÑš.ZóûGk¸ZË¢ýhÐ ³¼nÛ¥=‚Gr,e‚ûªí•í 8÷÷È‹Z¡—H­Ñ[nÿL÷ `Ô"&„(mÁðRǵ“Bß‘UöH9 ÚWLd=ÕåÔ켨΋꼨΋꼨΋ÚÍ‹*µ'cà )Ù¨e1´ó¢:/êþxQ+Æ]û*P®Ú‰-›$-zQ1±”=ÙæEa¥"fR P×Ü=sÔ[÷·wïšî³‡â÷TA ºvtœüÐ| ´Ì7ÌÀC&^Óµ£VÙ{å¡g¤RUáÏàRÇÊ;óÃQÒÝ»ïüãÎ?îüãÎ?îüãÎ?ÞÕ?Ó²ŒwÉ;ÿø^ùÇ lÍ1Ï„ K³}“‘ñØX+”`‚øpºv¦šý0÷¸[uiÞšµ†â6žC­ÛÝâ;ƧÕÁîp™èKhB¼[Še"·{¶]\ ?WRÚŒžnqæÿÈîð*½ &6l¦ÓâÎÜá2oˆ¤’ì0oœ²ÎîÜáÎîÜáÎîÜáÎÞÑ.´§a¯e³–•ºÛ.îÜá{åŒi¦çof`Íš›Üap3¶ûÃîÔ/ÐÄJ¯ß^´STÝ+/ª@*8iF¿Ìd/ª¤Žf\Ófê€;ó¢\œHa¤Ú™î®.v^TçEu^TçEu^TçEíêEZ–QjwÐÿœè®}çEÝ+/*Œ©9f2o©u,­ÞîEIô°@‹j¼—È\;!6ó:#™w(,UÙ¬¸•Ì÷än%óN8Å]‚¨æWóôɃ¯Ÿ¼)°~œ‹ˆ‡à$XHjœ§ù9μzòàù(ƒ—!=@—@Ëáh}%³8[=Íé'øÙjY½vÜ O Y¥éEz9[¤8jKè$~pðô*Q›2~½ùºrœVc±*í]W¶ ۜŕÆÛœ¾õ/¯fyå¹Ê¾\Žó±õœ­œë,¿\Ú÷‹txŸóÏi:&£él=ÅY/Œ8_mîsO ~Vd ì=z=›ØG'.úa œ¯D*ß[f&‹æÐ0K¼L ÍO£yÁ‚YšFÿ˜õ#ð•£Iþ—Kìr¹È‡€s¶Âhœ<Ì~a…ޝ4œžÁÛf“7³ež¢“üþÇÏSTl•ß²^ï9,It{²©é£Â7½â8E‘l”­–Éø*Z:‹c ›øiXÒ,\gÑÂõ»âµÐí¶"ÊYßu¸ÌëFzÀ«¾WÐke”&ŒiÁ¥„Ò”Þ«@^zó';QRG2PÑÔ*ÞÝ퀢GJ…dbd\v¼.×òº@^Èëy] oÇ@žÓžÖ(ÅoÖ²´»=ßòîW /Œ9;æíyEỷ/·;P©î®ˆWÑ) úXÕNOñ¯"^¡äBµˆ‹-Þ.à“:F €úÿì]{ã6’ÿ*BþÈ̱‡ïG‡Å&»÷Â]Ü"@€Aà±ÕÓºi?`Ù=›|¯û÷É®HI¶ºmñÑ–ÔN†ØÙÝÁ4[,dUýX,j2Ðsƒ|ª)e¬º>J Bz̾8J B;\ŽJ DH&Ò ‰D $J Q‰H”@¢b(p+Û~â(Q‰¸J@L1‚oušºÿSÀ¶Aǿϳ­YªÂ¼_}2[Qf½Âì+jLa¥¯fƪ™ Ëï|ÿXBôÊ…;Ò7ý›ì¸A úI¦4Bœè³q\%ªäòúD…v ”]ÝswD¹K>TbQÚ™#ôI^ŠY‰jäóO 1¨‡ Í(!ªÌf×5“`²¶ùĶ슇왌¶¿Ÿ/œ:P쎕ž*%:˜¨8!¹®à=Nzþ…çGiGS9*Ë#œB£=x-KɉYHÌBb³˜…Ä,3 QV–œOéKÌBb^“Y0¥zÀÈ^i5åÂâq!‘„@Ð'(¢•lð{9Ïà™œô~ÏóÕT®59sXNªwÙ5uܲiÚarM‡å±Òþ%½>׌ša¥ök‡¶g† i›¹ÂÈõ¬ÈA2žªë§6…´)¤M!m iSHÒ6ÖSp.H€wÄU*„‘BÚ+ i`jsXà&*‡,'Hø” hI]ÜŒq©7í0ÖWCUR/NöKωú¢b¨jÔþ£¹_;B³Ñb¨êé ¤„Â’©Ö÷)†J1TŠ¡R •b¨C¥ÊCUV–iLÜLeÝŽ¦*ÅPWCUÀ”ˆPW¦tÀʼOÆ!:{EÈ”iN1ÕnA«vXêî§¾Èûì§Y±Ëîj©»:‚+FýQÙY#\PèìûmþX¬Ák¶¯¡U ÿ&Krò“‡œ<ää!'ùì!W&Ò$¹òŽ&—w2dJÑ÷Ùl³yøõ&«}ŽÅǃ½Ì¸ÏÖºæm;—Ívðg—/7»Œvˆ§¡[­…W< Íâífå§_>ng›{Ëß)ü>ûa¿2s“YÉ*É3|qŸ­ƒGŸ8¤OMµÌß§jÕbvôIZ}¢³}r„°¢à\úûuM½6.ÛŸŠÍÆôºÙæóm^Íùú®q5dòWe }[Ú´?èO7ÙÂVLÛ8ooY½Pº¤'ˆ îs8Í(»kIHÀýÜ´Ôm¾YowžÏ¶…©Žºª<ćõzÓ)J°ytg¹K€Áq m‡¦Øîj¶ÌË l ç|õyYLÛÂxF­° jµv‰ .1"Ò/&ÑOÞBXÜžõ—É÷Òi•ŸR6Ê´SK2DFâ¦}~0ZeSz÷‰~Á®MŽm&Í*{7û\NòeK󇟙àȇfþ«ý#jàq#dð¡øÀ.ØS?àâ±: º‚åé~ «gtý~þ´ý©sVpoëNlì `£éc°è áëq0¦0Æ(@ÎÇÄØ]ajŸwÌ ñbLS¡B%Õ( M ¾ÇJBùð°žw»3¼7°I60Ø"4>ÒF²Áòh4Ò–³Uñ0ëœÑÄ–C,t$£cKŒd5ÃåQ#YÍÇb»ëœ郖$Jò6F°¡¡8Þ:r‰Ä(ÈRZàIÌÆ³›õç|ûXzLŠò¡,tlJa##Õâ4ÀhA#4 âÂåQcGËϳm>y,7÷ù¶›ð’÷Z rÆ ©-ˆ[ø U†ç(X)FÇ_=9›í¿œ,r“±QÍŒðÄà$h=^ü6¢¾°†&2ÀO‚aâ1°†‰‚›†È}Ú·y¹Þoç “‰gÏs¿ª©%Xß5±4m¦åÉëv\Gtª Ñ”3÷ÝÆªWŽ ³÷6³îÛx…xpw+NÂûD}šïR£‹¦gSÂ9ÆL÷žiÚ1Š»qln±}_gÀBoD=M?(êDžì-ú/ɇT}Mp”<þ«c‚+5m×OËÀÖÊjq†¨Sk , Q” ¾ PoëúbFÈ-~¶cä‘‘ç¾µ²»¡UOoÓ*tèQ‡[Ž$‡DÉ9B'ŽFˆøŠy>›Ï×ûÕî\âM­à£ïÔõ«yÖ$å¡jCƒGayUàQÑ&ý¸ã(V 󳯺T;1­¬~y_QZ‡† — :44º. 
hL_´‹\´ö&KS£on™…è #:˜Ÿn ƒÃ‡_—Ò÷â9ÂòÇÂä¡›\›Z¿œm&³ÕbRG}7g9.ZÙr$Žß]úpÐtç+½h4ë‡ýºÝl~oîgœh\õHÏ€ }`°ÀÐô5Å<: XÊúBЉ¾uoPal¨ø(’\P¤ê (Û¼,~­«»í¬Üm÷öªâs]KÔHtûI¬ÐƒD_×N‚‘è µW8˜•剦q_ðÀ„÷0‘‡¦øºÀAûÇrV¬&•-oë–ôÙ|B%¯ šõ €¶Í>¨·/ÖTð<·‹ä„°áaÀ17’{cM;„êͶ{P­›6•Ѻe|ÑôÇŒ©«š~Â{Þ?}×V/ï  ÷·œƒÃ` c#Ž<]ŠB\|(!E_è (4ôDü¡AC1º*ÐPü²3O­ƒ‹ÅŠ ú¯½'ï‘A”<‘¼8TË|wŸïËíþ!ìáø+Vת·¹¡;@BV€£œ õ”P‰ê“u‰+gb î»›ªiÀk•0ºxD‚_Õ2c±Y)ޤ½²hT|Ôpwªu^áS!$Ò”y*íT툊MQê)“Êvož*vϯۉÁOm?Ô<*üòîCñع_¬JXëí¢œÖ:ìœw†˜S¡S"C>…jŒ5çý¥œö‹^Õsä¥Ã'½C?ÄT„úåa$vK8§ºvºuÞÓN)’ùö«ª ¾¡!Õ w~#¥j¦¼’<…ކ">6ŒÎ ãR0³QÀ®!*@©^ Lç;Ê%Ž ÑØ?†ÉèP:Ä¥@ ¬–£I åsÈ+¹QOùœÍÑN€()$×\ûœ%9Õ±aÏ+éxZ¢¿‰‘G°>‰ÛsÊè'å çÝ¢RÏ~v„àLÐkšu!™}^¼„ ·aÐ ‚nq&è»§ƒÑŒÓVÂɬ´p `ìkžÙ¶‹}¶oðÁÜdËÁÜkݰ&vÓùv>ÝåöIíÁ5?(ßm?ÌæS.¯·Åo6Zž~R¥y9ô¿;lNå»Q$e ÀƈͲÝ|“aM¦X¨)¦hбíÆ$w¬òùîðÞmó»}yî™bŠ§Ê¤yzNÔž«ªŽ«K¶›þæâTŠòCŠ~L§ZLqGôU;$ÈéëØò}óJù%c“•›|žÍïg«yùM½W}“ÍV‹¬~ºKz{Žûéͺã'Ò›¢‡ßV¯Í¤µã!=¹(³ßÅ+½]Úa„!wÂiÝ®û*8“ ¥J6{Üè\_Âu6ßo·æ’ @àc¾û§lµ®'13î±Ql—dÁº’µó–,,ìÂV€‚Þo׫ⷖ}ƒï”·wEþ°˜ÚgÄÿ£(wöõÖºÁmz˜<=Lž&O“§‡ÉÓÃäà‡É+ë ñ!Öö_¶O™ŽV¡÷Ù2Ó!"Xûü…õm³ûÂD ¿‚“Y¹[»³EåLÝþA­Hö»Êíׇíñ_*5Ø• #:˜|øBí7žÆ:³ÚîþÛ~`”‰1Êßd\[ýÄ*ן|²…¹¿ÄáK~éuçáùVýÜu˜"Œ) pïu«`Ô!8Çõ¯Åª(ï/‹N²·Zè)Eòÿþ·Í-mÄ™·O.º3Aµ´Z#jÂÔ¦®Yîx+qYYüc ©[5H¾_—Á|Oõ…'—ZLOÓc9Àt…«N ]}2R}ÉŽÕ¤›ƒÂožÎåaÜ4;üá.’t4¶¢A BÝücÕNHýbþ±§! ÂJ–ï^*[¿øVSB%ŠŸ^Äz2¶æÇÉxzÉÕL ¥Çi1õaòG{©¶ÙÌ«ò™×-ˆRö|½Þ.ŠUßôg}±òàô«U°v‚ Á#N­švX·^~ý"­"…?SPÂþw ©¿#JÈ”i}.K`S†áóthwÁeºÄ‰ðSC’½åTün¸ðXíéŸì9ô#¨¢!ò0ÚÛ["Vûå´žÈÎBLw^q¯¥’ˆ Æ™k3²í„¦‘a¯ÒGíCð?àç ·²½uHÔ Ù'Q¿X–°Å›×¸«„dÝsMCvé:†‹ì“@«¡×í$âÊ=Ê-cŸ¼*Àí~µ ·÷?˜‘íd?*Àá±ÏDiÇ21à gŒhç2±íd«òå2_ÎÍC¶FB"ø{ÀÅþa±z³Ë>˜>Ì1MAø}Ü®÷›ì\żKãКûG«[LiЦð’)¹Éîf…©ÈvÎ(î`áv¿nòç?ÜÝ·LàßýøßÙrV¡Ôtÿ7£àÚ½ý÷bµx[Âô,ö†l;îŽÂn¿¯Eý‹õO¯7!6[›ç•Ý^Šm‡øuï•T~I_z‚ÕE¼W£¦ I×ë-‡vœŽF¼W= 1•L&â=ï‰xOÄ{"Þñžˆ÷Pâ×wùâö_ð”ËŸˆ÷«"Þ+`jˆr¤òX = ñú”*vžx'dª%Õ&ÑP¸ÂÚªlÍL\×å1,1dÔæ-/âÔ¾m'øï›B‰­’"Q(ÃNH]ëv<דHM;¢ÙUQ(M !Á)óK/8û¢(”ª@ Q»‰Šªî~¼³w ¥ê‘3©¨ðKFJJ¢P…’(”D¡$ %Q(еžŒ¬©ô[YÙ~æ"Q(‰By} ¥0OOûÝ{FùÉ–fyŽA¡|ªÍÃæ´Ï)§mG ÉWQîl¹¸>©bç«ÄÉÃPì›ÙÍu¾3·*›+ùn¾˜,Àþ°žm†ö¬;wÅ¢Œ¬¤rQ¶n¿µF{](ð°W"ŸõÜ+¥ÀÐTcŒ¹"ÊÜUí•!«A»³·"ûTtàÕ)f—=Ic§óP¼træ»û|_N>)“ɪº_­E0¹Y°YºÖAÕŽ tYuØpQ®ýúT*÷ú™8ý. 
³H‰)“Ô©gÛN ‘ö›@ÅVùQœséôªví畯Œ“ë/+ ¬5%Œ"áבh4 ³®Ä‹8gØ/E) ,Q˜‰ÂLf¢0…™(ÌP 3ÎÊ2)…™(Ìk¢0›' ìS~ <$…É™¹~ÍÏgqï Vš*…\áiÕŽ šh€E«&Z™· <÷bm;…PWliq‘őСé±Hy$í£tîÓuaJðu“b\O‰ ˆ)«Á¶#Š§Ê‹!ØmiL0¡Rû´ªQ›ë¸ òb÷´÷By=û|úb¶&5(™ëNÕLWA{vQé±JªÝ+¢åá‘{E³õWɱ-à,׫þŸ™îã±"ôz%×ZšpÌ'>ÄjìÄŒŸ²“û Lî"·%.lÿ¥baz$M[][­ëÕÿdyíE•â]'Ͼ5©´·oþ~üͧÀœÛÜî7€Y¹^ݾéhðc¾{¾¼¥-nß|[”ð1£ˆÑ å}±É‹Ù‰w´n~ÝÔy„HªlFQN³ïíŽ/¡Y«é‡à‘›ÁÀN°7)¼o.ž^,:õ›âû¬õlˆ˜ÊÛ½´wÚ*Ãßç,6‘d{úªøõ/6Óþ‡õ~—®ò—¿Ú/}››ø¢õ³òææ[˜AÃ>•¡¿Ò‚A£µß=¯zÏ ôÞ?,²Õz³Ù yß}ÎóU¶,VëHÊ›ì¼Ð­ñÞØüï:£¾(›ió³?T›˜kaþv_‘Ç`G²· +=øÀFð©ý¿'7 'ÕÍ„H˜u©n ÁMßè—¯.Ôé(ÏÄ@‘{ )裧‹sñO´Š_Ñ)R¬ôÉ/è)Z;œtŠ-™H§Hé)"¥S¤tŠ”N‘Ò)RØ)R¬••©–@:Eº¦S$fjÞ""QÄuŠTX qúì·$â48™=‚Ù³áÓÞNaùB*)J8Ùªcv î¿b ߃‹x ÉOeœ}œ«o ˆŸÏö°Ì–†¤ÙF3º¤r*™„jBKhÁ~€]à ¹XÍó¬%:Ö7\ßPõóÙñmüB¥Ü„#´S ´}Tvakxw»c`ÁBiG!ôz—ýåÏÙf½~È>»û þ­Üøp+JD4¡w3Z‚µêVøm­¡¦*ë—bugy~Ëûþ³qn­Óóõ¼Z·°›&'õ_‹Û™b‹Ù‡;:Éó» SDM>ˆžÜé‰;DÀüí£ °y'Rj"Τ·µeÛ Œµsö±!;0:ØçÇ“ªE–?ÎöÖï8ε†a§2±˜‰-–`äÀý¼ýûê“1º7Ùw‡ßª¸$Cô4´PýEøô?6ër¿µLHÝÛÖcöŸëÕÏëÕìáÏ¿ÁkCøc¾û63±oï*Ïv{X,Kø¡ÅâºÌþõû›ÿgïÚz#¹•ó_ô´>°Úd±ŠçÉ Ø9A€dZIö Þ]mt±áÎoO±{f4Úfu©›->LŽ]~üXͺ°Xì.×ù»þåÏ‹ûö´û§›‡—_é=‘±\Ü^ꃷ“^f›T¼Ëyö7Ïþ7ÿ9+ÚÏ9ëźøþ}Ÿá;9¿?¹ÛüÊãïÞ ½“/ó_oöÛ”Y·0øl¡vÊ~}w÷põÄB½½½ùíî꛾ûñ»ïø9㯔ttÒ 5ùû¿ýë÷ƒvýýê-Öcô<£ pòêQs³šÞs¤~Ÿsª쫽ß~!ð_jmÆ}¢ Yv{îóñ&§¸×¾ßyòœ\>Ü;VÙÎñò¡Ÿñùɇá÷·šÝ-Œwšüí»Îl00w3"g¾´“ñ@¦ñY†òäUè,Qô-ݬ¬Gœ•ÀBL4zÐfM¶49ó²C¼Ä)¸a2{Wê*ñX¿Ø³oÎ/ys?;¿ø%W§@‘²Ü–=(A›Ïû”‹Zû¸R‰Ÿbý%×àñ~ùk…»k+;:ïsj³g2™ _C¤/ïûüçSä!Ÿ[?jÂoç·åÎ~<™“>“{ršºÝ›†9Èæùðø‚` æXÎ_±òŒýò`(rl #;¡*€kêìE‰Þ}yräããÑïí€áòkÖ†ëûÝ–~õ‹ôGسq ©ùx|ꌳ‡OÚ,°ñò­Ú"Î,\€5ƒÙ<(YËèÀIàȤ½kÕ»^lÖÙžüqšOËO¿=ý‘÷ºÿþôëÓ\Tø?ïO¿ýc÷O§÷7Ÿ~ye¿~ýñdÓ›ííï'¯xWïßdïëÍno®oÞÜp€wùÕÉ«_Þ^ ÿ¹—¼ûƒÿõ__Ÿ¾>ýºø—,rûîêýëÓ?¿ÊãÝÜæÿkNþ²ÄÈù'ùOÿü3{·ÌôàÎß¿ËGÍêË >œß_¼cº²ïÉ:=‰©WO9ÊÁ€6¿ßÛó»ûÛ‡>7ÿ†ÿê×k^³GfúÊ…×§}ØQb@ù»ó~ŒG¯ïûÄÃÇ«|Ôþ6—lƒ‡ËƒÚæ­wÉð‚¶e¹½àyš·PÓ¼õ0¥“ +߸ގC˜Pp9¨à(ü|{þÿãY‰Ñ(0Ñ8Bg_P¦»·<6äâ#‘„‘åVXuÈO"ž`C8fŸ¤ìSÑö³OsÀ/“}* ÐI/˜}Z]MLKÍÁ;;-ÕÎáimäfUÓRìÞvùÉ&‚w£+ Æ9À&ÓV06 GDÑÏràkcÃï‡ddc9aY3ú# Æò;Èá¹—q:ƒË»MƒïñsVàüFhÁùp²O “)be¹àÐÇ Þò\O5rÕý'õ+ªøã—xw¶½u·åµg4-Ãhî„\C)ªhp´`mCËÏx{¤ó³·P»o¦2•X¤’bÏÆLÈ"õríªY¤<(ï¼µ"86µ0'‹Ô§/^]ÜýúæîáââŠy¾|Ì|ô§@ý¿×§>œ?~\[âÿ»ûg÷—!‘‘ó!»lHÿsìûÝ?Iv3<ê uÁDvFÀ•ý‘,ç}Úí©æ'¦Âê+´ ïV»Àó0“$0™ï >ŠÈÙk¤cšBŠ? Œ¶Ÿ¦˜~™4ENºñ4E‘éÓsðÎNSôƒ#$¼–AÎÔMS@gBˆh ®ž@„ĸ&€õ3* [ ¡P©GO¦Zš¢ÇŽø¿H8l´©ê#e<€MÆ–›DÂÀÛñ.‹œ”=/RUM` wJǹ5NÌF™u¦À¬gÕq)ÆP®@ßÈAZ5̃CÁX/‚c!ª ÞÝ^ßüzöñê>¯á1\-ô«hnS+d[²œõ{Ù–c0âÞm? ˜~™( €@'ÝxPdºÁ(`ÞÙQ€j—JÆU¬]ŠÇ+u`]c‡•:ôèjE:T³rl—üXå¨' ¡UŸë²j¦âLí`@‰'­w¸»ÇÒsjËœ&LÑy!˜ÍsHÞáÒ¹ö â %+” rH+¨C"cÉOÀµ©ö¯Ümõí5G üßN?Ûva Wº¯ƒ´Ê9«,g×-&W£yÅäq ,_óúþzÃkð$¾auÈM##¥ÿ?ïx€Û~~"@ݯƒÃg‡±EƒåÍ«—³úRðå>9ÒXýV ORV~_î:§”6•A 2"ù€^„q/%xÌŒ„zFÛÏÌ¿LF €@'ÝxF Ètƒ9xggúÁƒébJ/çSªšp;ÇÛ=…Ãû÷HQ´•Ȩrè(”¸ôè†ZGv%¢¼äÉĪL‰a$%Рüüo’Òªç‚Óל'°ÆÈ“ðŽÖð¨R°>˜€'Áú'ƒO¹…2·@É… +PðærFºÉø`|Ê^ή (Á¤ “ëš³Flh{:]‘Îü ¥h¤Ý6%H.¶f+4è÷Þ^ÜVLÇጡš5$¦ãu1‰•ÐĔȗq¦¼«%ÂUSE p¹ßa¸ä&àÑÞO]‚MAB0Æ!)‘å|‚øʰü`WP ž¨W†Ý¹y¤Æ"©ùY‹ÆIËx6ªj\˜%øÐ’Ž VŠÑöãÂ9à—‰ tÒÇ…E¦Œ çàöƒ#»B˜0ÈYªÛª&gÀcÂw£ëÊÛ}(·+ßÈùÔV\Ø£"Ë«l«Ø™…Zqáðûè¼³p8[1.ô®‹c¯j€ëÀ™`QŠ£{9oÖµíyP üÍ”ïËoäÌñÅvqÓ.0Ú¾mŸ~Û^@ “nܶ™nжÏÁ;Û¶÷ƒ;t $R9¨\xæL‡d »½)£˜5„¶l{*a°ÑÉèµÐ[ȶ¿ƒw8G¨iÛ)agÒÈ•sÀÜG†Ãv(ßÖäšg$sVH? 
³ùùyè«'øúqØÅn‘nð¤—Ëé¤2©1 þi/‡&¾œj,7‹„+¨Æt<ûÍò5Šn>^3W*^%f:#0M†#y+A4~Í{K“:ú9XH¾Üf#ga m!À–Ÿ(ÞÈÑ ´µBm”¨M‘ &q*ÎÒªÍ ó ’ó‚.²öŸ^ャ퇑sÀ/Fè¤#‹L7FÎÁ;;ŒTíRñË÷€—½±žºÈaÖá'n”PC[/)ÑÇZ/)q¤ša$˜.Ä‘ ’z˜)ÆrƒµA.øu¯&çA‹ÙõÁ9€c†Xܳ Œ¶oÚç€_Æ´è¤7íE¦4ísðÎ6íª]Š BUÓî­åaJ»½C“¯¥M€3í=*b+d’Œ]µÓ_ M{`÷ÁÃèj{ç8äõ䜞÷»¹˜:9¡šK¬ŸÝÑá·~eðçý<“À.Å„g@‡Y<ó^⋤6uS«ßÒH‰'¶•D.tÏÛFx¶w³iÝ,¡ { %$±Àhû¡ÄðË„:éÆC‰"Ó †sðÎ%†ÁƒØå`ÃX7KècGc‘„©÷mEª˜@ðÞ6³¬õà¡GªÛ×r4l <|€„Ç z9ÏÞ½G–¨üvÑ çâѲ‹[vÑö-ûðËXötã–½Ètƒ–}ÞÙ–]µK!Ö}»CêÀai·Ÿ Õ7VFªC¨–iWâ¨jÚó;ºÚĪ™ pX ÛXê§”PMÍÙê uxªÜÿ²3HL>:ês{9áJ5™NÝtbZC'&ãAc^®Jð3–±”tù Ê{ðÂÅø,â—϶U ò ùm;¢.¦½C°cÀ0â m?`˜~™€¡€@'ÝxÀPdºÁ€aÞÙC?¸5> ]…ÞT¾SÎIrt¸`P5Q[Àަè' ·¦ZÀ ÃaMÝŽç00äWP‘ °[ÄÙË9^°ª@㲨¦Uèâ½”c¨Ä“jìX<{ûû¶ZÌ@ìÛ ]²z9c|>SÚPG‡q ˜Ž‡ æUÄC\º"—}gï“°³œ?Ô*³bÀƒ&cØ\ å98^]»£íGsÀ/褊L7ÌÁ;;è·¥ ­u‹CgÁú„*xã'@µ¶±j€•#©ÅÇf–µ°Vâp¦n5€ ã@LH)ײ 8³œƒªÝôŽ ƒò‰ƒ ƒ÷q'Ç Ž¢‹2žB‘öBõ c§.=¯Xä5»J‰RæÁr¶r g, ô+¼…¥Ã£} ëy1öAR ñ”3yŸB^ír¯ÁAÖ}è~Ô'gË÷39rxŒç®Ähó1À,ð‹Ä%:é¶c€2ÓíųðÎt»”7•Ûxè`Ì+TB…ØT  Dïjµ Pâ@[7€‘wOåá#{ÈÅPe3¸êÿ0(F“Êu69{4íâž]`´}Ó>ü2¦½€@'ݸi/2Ý iŸƒw¶iï'Þꃗw)2•Ó{>t6¸Òn?*´Õ4^‰Þ…Z¦]‡¡®i÷at±Y`Ê].³&ëý*çûê܃jµŸ Râq¦êq>o—;帽»½8c·¥gÔ Œ’s!Éåã- Ð_s©Bñçf*•¡°†jLÇãiµ²p*’˜Bô yC@§`Ÿwðâ¨Ó K¯ÀãM\»ÂÃ/Å¥ÇT7áÿÌ-M3ïÛR‡°NÖ”ÙÒ ìàu)…rê?˹DëæzpÎ&Á!„ãa1ð+0Ú~~`øeò:éÆóE¦ÌÌÁ;;? Ú¥Ö}T­í¢Ký"Qô úÆRÿ:ônê.”~?ùà¦àˆ5Û ÚÔ%_möæú›q’búüР[½üÇxpÕóJ<é%ËÂR¼ú°N¯Éçú®š©T¯SâIõSGO˜¤2“„¹¢±ÜÞ¨—s”¨~ˆXúþ9l=%ÐàI«…†Oõ£€Êå̃V«äÃR)$mUÉx¬òYªGûzû0)'§làQhúë{:}›ÝX.†Èr<ûUŸ¯À!ûîÑŠà<î•æ³#áaÑö³sÀ/“E( ÐI7žE(2Ý`aÞÙY„~ð€É¦$ïRd}å6&¤`íÁ6J¨©­KD:ô*5—Ê"èpX[·€i)Êçž¼dÚYŽ ´¶Øôdê-¶ GÍ&s:„‘—(vÙf%ã ¬”ƒœµ«:ryÐhØÁE³çe¹ ]`´}Gnøe¹tãŽ\‘é¹9xg;rýàÖ;+œ r&UeŠ]ÚÂn?jk7Atè]­~PJµo‚ÐØbS猱¹f¥œ¡ 3®û€¤ \‚½’Û£iÙ³ Œ¶oÚç€_Æ´è¤7íE¦4ísðÎ6íª]j¿ÅrÓŽ<ÌØsÁJ¨ØØMzªVé¡Ãá©nަdÙ©?—E%ËášGú“Ï{hÁĈIž‚wTû¨.ãyéƒp ÕãI5ò?Ý\žÝßÜŸ¿ïY ‹Ñ u$Ôã¿ä £njÉ®¡ “ñxcÛ:Ë-4Þw¾3ó}qáT/çT­™®û+Q’a“õµß Ðá€kÔx#â´Î„õÝ}ÁEÞâ.”…E»] säß÷¼F‰WoÈŠÇy¾–ö®gø‚ Ü)¬¡>zC2Òéîiüª0–NÙ×(3 `Ù^Y9ËÙ´l7£eUœA䉀]!#Çñ\œ€‡Ì¢°J^ˉ7ÇÎT ž¤“ –sꒂʺ1¼‘ƒ]á\E'Æ%ªÊ¦1ùÍð·=¡å³ ‡Æ’#†àhô>ĪJ=}"Ö¹4ƒÇ bQy–Óž$,º G!ÕE@–ÿG )X )Ž_\Å3Hk˜‘éx@û²ÂF3Î/.8ø¿«ŠÖ1 ñ4Ê‘Kˆ/é…ˆÀô‰¤5LÎd<¼e¿¤‡ŸÍ¦Þ‹Ÿ7&áKZy"Á#É5Î1ó8ŒFŽdóÅ‹¸d­Ç,@,ûsh Øì/ X1OÉ/Zµð,°ÉZˆQÇÊZn¡3žÜ±-ÉxB…ÊC–m1s)’Ô®—Cõ u;ÆO1ÍÂ炚¢Àã´aʾ|¢0w¿ó¿úðín¾Ýr÷­‚ârå²o”Œä_³úP#ùù ퟎ™`…’P ­ZLuÑ>3¼=‹ÂÇ|J²2³.;õåÿÒæzO(Î)æÄz}͘Ž'i£×LÚ¡bŒ¬%ú{ÆÊåB ¯  ^Dè}•â‹g(oì/¢&)§™åp…ŽÆ—D/<ã‰X'šøÜ÷æÏ†ÿüÃùÇ‹«³Û«‹›Ûüžåß6Ùò§D¹ç‡bâžØûUßVh~J+„ <έ£4‡X-ï ôÿì]Ýr7®~×ÞK&@?y½ÞÚÒJ³UdK%Ë9Çûô öØÒÈÖÍi’š‹ä&‰ «? 
€øAR ˆÞ=’±X£“eè2pÆCj žG»ŸõGè\ÞÀsp±JjNšïsÏ5@Ì3œÎ&<Ó^nvw÷ß>íF;oŽRû‘¼|£ãÍ%÷›5x=X‰ì‰ž Úý)äÍŸzÉQ„$Zì]÷²J’£tS„Žšœƒy-I]}0­ a‚>4àI0ád¸¾zºº»ÿÏkž¦><Íõ(Û¬Ô øÇOzoá_^‹ÏGGHmÛpM28©£ÑGœaK6à‰Ø+µõØm{u·{|zÎ,S㪼äÒL|;xÉßqCœr¢qé8YýE ÏØ¯Š„9øxb«Õp\!œ[÷µ^hôô"g ¶Ð~]mz)õjðÆä5¡Î2Ÿ«Z ØòѦ81-x´Ÿ‡»ÌJ*ñéÿýj7ƒ:Ò#Bー<uS½ÎÛ‡ÌÕI~¬­ G «4œ`ûòË„”Ñ?ËËÅx¼(ÿñáïÜî5ðá{ub)ÓµÿÚ«€‘–ÒpzæÊoîÿ´ßÞÜì>«D²âŒ$ÍS¶ ƒ?„gOG¡WÅö¢UF»èÕå 3®~NŽß ÑÔ rðüT£Ãœ;zü›w;$ÈD€ÁE®*4^ eQטƒÒ¤‹G›×÷ŸŸÌ”6-YN¬ÇÊÁ`›)³‡]!¶7НÄëá'˜àصàAìz$|¿yöÏÀ¯xz}ÿ¸»/M>-­»Êˆ¥á¯°§ÌežÐðc¡M› ”V¼È«¾à)„ÐÇ£Qú¹yõ#öâ iju‚õÇ伄Y½° 塎Æýˆ¥”æ”\%1ºƒôéš¡…®©C†Ä™=ë.†DåÂ3Æ/àìŠ]p:¢ý5äöÈôÒ GÏÈíð}†ÜV´QŸùÛ*§ÏpÈí¼›‡Üî?ÎæˆJáÁ ŽCn)ÅË(zdìé‚–Ñ%+ 2œ×Û6ôEp‡Ü6âȇ܂\2¶¹£’ýô£‹•þä«;ýÒõ“>%âÈht•Q›¦nÇ—·áQØP‡üËØ†/×׿]}þöõö¦¤ç:bU@óÁK9IB1à–Bäi0i†pðp'Üú–ÈPfµEò Å«(Ü~0i‚ÃÝ€Gbwáþ~oدÍ\Ú¬ÔÃý™’²²)2:NýÜ5‡R-.ÔhB;%ÙN}<x¬©Ê9“qŠŠê5dÇQŽƒ…Ü jŠÒ¦šðè!*-µ.^NB®óOÌðUHÞ£±ý†ˆº°íéä¾ZÝ”XiÁ#²úx4ç—*ÿ8a™?½0"G§¼`;7Ç ¼ ϸ¾ÔÁ9ŒãœÊ¨6¨ÝÔ}ßÈ:½ê4;P³¿ 8CöfJ ´OsËU­ñ²nóòÀ)Š÷¨ÃDÔüpÞWq9BŒ ®Ô‰•&<”¯Ç#!ÂȯõwD1›Âì]w»f*âHíux ݌Љ}'™å¦êãI(#Düy÷T~ÑÅŸ¸¤’T¹§\ú0{F›Š96:DÎmp±LÀìÂEÅ 7¸šMfC²‹'† 3{Ws¯–+"—ÄŠuÕÜÓmë¾Ù n.mч›Æ—?îñdÇ!ÛÓÅ4PØK.eu\:Š3ˆ•s¦¡’^U)Å„>VF f¡$ü]bt‰‹9UYÍYDs=4:9.?#§¢|”PcÊ>8:|ü+§âÈcy…£çŸS±|ŸœŠ ‚6ê3Ï©¨rú s*¶àÝœS±|Ü®•ä$ïé Í©à —éXJÅ‚€rD)9¯”Š=*v†Õï鲌J©X~éÔ›ó  S*$_bÄãÒ.©†œÆï ]”<Ú‚sÌõXÍQoÁµàéÿÀm—é£Uß]œ\å\*“ Áë]è¼™ ‚ZiîÔOÈ-xÇ ™ªœËjzRô޾¬€ ƒ…ìB……Ø…&¸Üå;"¤ºÅAB.lã*Û(æ˜%8ÁüBgl&á¾8YÆ‹·È@ñŠÃ63¼]±¤oQòüuêcÍ| „>xÌSd^¤5­À£} Äwf•ïîÞxJü¥è®~RÎ)š<]…È¥fhä~—n85OýzÃâÏ-x7.ÏPÐ?¥rġşæÇ\&ÎGê÷4yñöt¬çUý¹ "$Ñì£'È£ª?÷¿?ó:idCm¢K<*mÛ¥ƒën¦¨¡£ëÞËmÏìù<Ò-wirϨÊJpXYJ!¼®š…3Èhc>uêi†ØWã¡Ð9J÷ªÑ@®3Ž$krõ“rÅÿ=QÈ£æ4cg7àÉ2TÄTe•äheðîo£‹ÆŠ¸Òãx·àI8DÄ ×¸Î5sÄ%ø¦d©´#ßÎ0eÂCJ åq­¿Fp`ˆ¢îK /é ×…#E&† ô‚G“Æäãéú¦ýªã@=ö̘¢]ÿâEª‘±§lG¡Tœ`Zµàií…¸žiXwC¸”¿2ù ítI4L´>J»Ý‚›¶Pèò“ЉBPÖ¸OÏ䢗ªvÀºOÄ"¦V’]¹Š`¹v†¨îÙ‚‡1øx0È¡Ö_0IŒä©ÑEʃ„Ú bB/Ô<­³~j{]¨Šõ>! š“û4-$Ü<¦JAdvQJ™»3A´ëñÀ0ÑÖã=fÊ ž}§¨ÙçW3ì1qr±›z¼Àí;¶G}ƒªàaé¹ý¸…[õ“O1Ä,½“ÆètÜ&N½PÆ8ÁÿiÁ“::·¿«`=¨§f"'ÊîÛ’!qTü6å[/â ±6à‰=ji¾~Ù=^”¿YÞ×~æ]å^Ká2 Pï°Ðuušt° (¯•kăC÷.ÕùÆû$ öpr¨ôxjÙ»[´q=X˜ äÐñ]t¸aAœ'hDI›5¢á¬­ß»j6ù›ä=,”¡!Ì9 R7Äy‚ä[ðP묿eê§«‡5,<òó‹ý/Y8[OgQBNfN{–¡Ñåo*ñzà4ã=§OóȨ·T–O»»]ÙLß^ä^Úkg\˜ÄéÔ½§ƒØÞ™¨Â Ę82ùK_×ôý;* ¼Ok![oÎR³ 5'õVBP™Т$•ºïð‘°xZ¿ßìîî¿}Úú>B©+S­à<à?>üýÛ=/ûaþíý©ôãíÍÍîófüˆsTS¬Ï’øç´÷œúð·×*”]I¯E–`¥¤WCZ#H*=§“/t‘zºúòÇ?ÿóxõð{A—^øVê¢?ÿH G?•êµÈ?ÀÍÑžÕx"l«Ú¹úúô»íÐÛëÅ-{¹M^:Ÿ_ü!Åh—P?BÌö5ÏÁ31Jú’†w61¨×RN¸XZðD0Öî «L˜Š\ç)A(ùè­‚rÜXâÓO©W£.=Õ'hBž¾'Ã[¼«ß4¦¯æ {X±8Í0b®aµmXÄ„ªÝ6<ÈïuH§TÚ¡Qð<¸Ò³´9 Õ]‹Àæ Ѐ‡ò¶à¾0ïEÆo1¯~‘FÀ#e¬é´F}·#@¼E€”òº‹Ð<^ÖãÁJt´5«´ÑØZøª_cࢿ$˜Z#¶|téȹœ³Õˆ)þ©pôükĶ€ïS#VAÐF}æ5bUNŸaؼ›kÄ–s¦DþýŠÇÖˆ%ÑËÓ‘²¡‚&É>TÑp^EbMèG÷hÄñ«À;‰%¾ÌK»Ä‰EѵQbóìëþfj &ÆìzÕeUª,¼2YÚýèãÉ­ý·Ždr4DLêss©È‹/ºÐ¯R¤¿yšJnE7[èâÇϾC`&=¯À£áÍþê|¥ ˜1zw–Ñq蔳ÜO»)˜£š¢úàuFd¸|‡9iX‡`ÀIñ ëî´]^v›³{Ø]&~ÏÂQdȉœæÿßéâ„Øp.­¾0!¯ÀCÜãùùžB§¦§²x7ŸÑ1ÁˆÃá…^YdB|¨ö±VX`õË–Ìêö­£kd0Ky©çÜØü²Ø ÏÈ-xú䣜ÂS¬ó”‚-ÑsxŒ."ö9¶ksh† ŠÐ‚GO2¿'|¬1ºŽü¼ðê7.!O¡KËv‰ïvJ`§E˜È&8›-xâÖT¥×l[¸=n•›Á銳§Kñ´¤Ið§s=í/Ìäq+g÷ŧÐ Ì~ðÆ7‰hÓ ¹‡ÇÛûÒhén÷çînŸõõñêéçøßýç'»$/î®>ïž¹¥„#ÕŸÍÍÃÐcð :£“H§yysîœõ Ñ0c‡7às—Ü[Hϼ°±nKÉG )yβÑeiÍ-­Ì à‰&è@ n4ìÿ}wÿ_®ß}ºz;iûâ Œ+ÕÃêæÐ©JÏì4:I8è 8EW×ãÖZ žØÍ|ÿÙ :ú' Gë6¯–¶…à)¬ÑU†ÅÑ„ŽÛ€ó„PN žÖÞÛl«ù5õиÄrüy¦§ÑH·ÝÞUMÿÇÞÕ,Ç‘ãèWñ HM‚$@øº±×‰Ø½Î¡¬ªv+Z²’Ü3ýöKfé§ôSQ‰¤r"쓬¢*?|D’ý"@à·kð Ó€OAÚgl® Õò’ ^}#,ÿÚj<ÃäM^ÿSõX–üâa}ð£½€ÜæÉQŠÂY_—bbµÕ7PƒûIÕ¨Z×’¢p-I÷ÐFóž|xSëön7%¿Õ).¿sšk"ÔÙÃÏ7·øà Þöò¾¦m¿\ln7ß.¯..w÷_¿üOù‹ÿ›þ`¶Ë7DQâ±hCø”CsÀì«[Ž”Ì4/¹Îûz3ŽÐ…åséux´§çÇËZJË€ëÍ‹]YÂ~¿¼ÚÝŸüîmyËHh¦ {/t.€~„Öd·.­Éh¦5Od—7s·}O3™)IËëU’^°t—Ê)ñd+˜Ø=J²™i†³5¢ê}ëZ#0 AQÖæýÛÃ]ýp{v±™Øf3ňԫf˜GhHZÙŠnv5±&×Ç,¾ìÌ4…zKIØc¡1´.k3Ì»ùÙ¤ü]>LqlÍ…{÷3È#ôƒóªôƒœMÞŸ–s°R§PKä´…–?TÒá hu¨tÚRn›£ºmÈ^„úƒë²_ÔÙæÇ#»§‘­ô'·RÌÜë8ëB" Пœ×µþd6ëÕ$_ €f³H-û [‡–e€>1¬+®ßHÚP4F>J­Y¸–»W›Ž˜û´²¹OÙ®›ÖQ†Í‚±L WpŽÐ„¼2MÈyÈ®2…<'¶­²è,»}|€yy 
%ºýE²|G­¥‹GCÐá5|Ú(ÍUJËã`ŒÌÞ²q©Û{<@fc¹Æ:§øaèöK^jNkyfiÀ#¸þyŠ5®³Æj¦ïè°qèñ’)!0Ûöòæ`ã(NXy ±ð xάâ6âmu°ôëhŠ1­}ò_‹ÔŽË˜2MYâÒÈ=¿Ë¸Øéþãæíý››=üçÝÏ¿><üöÂ×ò‰Ç…ÆO,‰×ŸþñeT@]”¢I¢kÙ8Fà>8OO±ÍÐ#¼ŒËy{°ïÄ ˆ¹®|ó[òb®žy{Q„}áNêÜIÊfÕ:‘yË8ÀÞ·þÉÒ*‰‘ÌÂð§Bš.щ]þ˜h6UíT˜'Èž`(i7àâÁJLï¡¿ügyûðx÷ðtU’-ÂH¢Õ,)ŒêZÂe\”©e– ¥L⃳µÞË$xùïF/¿LÂðcÊ$Tô¾ð2 U¦/°L¼«Ë$”ǘRrµ”ÙiÓ2 è:Q X>šdÀülÇSËêmz’zË´w3³>^ž&µdÒ^¤Š¬ÁA I+¬%Ì ·ïV@ð_ÿ±XëÑy’…Œ\uú˜À§…;ðR–¶cE7 ®Ä‚Çt03iT B}¸™(ŤAÎ'2÷X¥ØžÄ’YðX‹ÉhÅý-©æveX‚’ Dmgb6ç ®Å\æ0 àÄ‚'ú^xkÅ7~÷]ç@ Ç1„4 çvy…³X8lÀŸ÷ä¦#&©aë!å«oD58+ߟSânFë8ê­()}Bg x’5Ík¾rŒÅžXµg,’„ùóÚ…2z Ë•{ž‹Ú\ÿrMæ{ÛÎ7ùÛ¿E%X+ 櫵6ebÉ<Ýêôt´v†IžÕîÒšKÚ‚r­,UÁ9é>ªÜο柜Ÿ=ž]ß~›†ºî«’äE{qÍíÜ|ˆa;è)ýèüÙƒ¿GJõ`ñRÍç¤i¾JWâ]»˜÷îìüâàø%1õ‚.ä×_L-xÌB³Ò¢O¤ÜýàEMôátç8*nøâ1Jõã]ÊD¤l%m!M"¡¢ÙÓ…°¡ÒD½þ¨ðãAôûÙÍåCž—ïR ß›îuÎaí"§0§â ®bžÚùÝTp{PÕ>ß×&€ ‡nÓþ)öswh}ŽÛœµ©BˆRLAI{šÚA°ú%Vcq;h q x¬Rˆ‡Û«6q˜›I#/ÄxÜÀw=¬?ð9c¶ÖGž3VµôæŒ-Á»8gÌ´Jí»²×È£àó9c&ĕ”üq¥ŒÙÐGþk¥Œ™¬ã1ŽK³!Û{ÞÝRƶ”±-elKÛRƶ”±-elÙ¾÷U·”±-eìRÆ21½x¥ÚGiB\Á\w‰ÐEðN`Áãq±’²ö¶öûž(9sÝ’1 g¡š*6µƒ=‘¯N©bå÷ÆÈàI{a)íöÄÛû§Š1Ÿ–º(7st<ÈA¢†T€ÍE&Vp·Ã ¦ˆ¦Ž®÷)‘.Õ×Oà¯óT‹m~î8Û¡ƒ[?]Öˆ'] CÝ’’Ïb™~ÚRäÅsvÞrж£0Ÿ-xú…CÝ^ßìg+O’õù˜ʣ>küd,Rƃ§zè]†õ)`Á3c`äTm©5xʺžlë¶Ž #qTûV’ û…ÓtàwLÈ»¤'°”êŸD‰ëßá1ëÁÏÛñÉxÙ˜Ó_UìYŸtù0Bæ9*øs;N]²<ƒQïœÐ€M%Uîü÷©µºèú¶–º­Å繋êÄ,í0ö[QúÍñ’×;ã€KwþNIÅà ã‘äz¯,ßîoÜUlYŸxcÈÇ_• ¹S<ºUEÔÎ%GõÎ…8 º/§Èb*±»v€ËK’<ü+ÿÕÍ—gÛùeÂ/MAÑ©nܘÊ,ôZôy,jÒ}YH{ø8 §Ã‚G ÷R1c}‚å žÄ ¨:_Ña‡â«2ÚÒ°\Xð¤ðÙËE¨gQI©æµãwn t_.§¸wp1á1FßÝ^\\=Üÿ˜"ó¾þ¸øvùØæþýU*íÉÝÅ×ɲõé–Š.9‰ëžJJúôÅCá·¡7Þ ð‰Xðäþ<;?¿ýñº¶ØâÃÝdéznN¹Hê]1 ˆY*oð0t% 8†ð$ô®½{ž‚œ?žÜÝßþýêúòáåØ2“GI´}3•Œpl¬ôìü_-‚Õb@##X-Èöýù[ëÁºE°n¬[ëÁºE°~¼¯ÆLfõ"“Ûù¸E°n¬GÁšNó_ùY\5²qjG¾{dcù½£ËW¶êÚµX1²‘ü©d3ÀŒ~A à˜€U¤âÐó)N [×Öвá‘#saÝÒ¥¹GˆZÏJ,ïñ¢¹‚ÃÔ ?ÿVØ‘0<˪ :Y²>õ€ò½B òÜ$Ùª‚½ºöêl´I,x¬ò¾ÏçàþV¦º•crä´È«Ò.& k&tæ~3tq. ˆº.%¬­Ï8¤<Ä’¢W:Û‘9óó(néV°XðÄ#ZH|ÝÊâ#GÒz•Ïf±S†ÎôoﬧiÃ#¡×1ÖjÓúÜ#ôEÓ×k'«R[„a¨ºVù(gd¢”½Øµs›º–*›T±èñ«k-ßG]«‚ÀÖúÈÕµª–>Bu­%x«kMQ°a• ëªky ¥VÐŒ¼– *ì-¨GœPPE!jØ üÅäµLÖÁy5€þÁ 6dÞmÁ [pœ°'lÁ [pœ0œ`ÙW‹ªûœ°'YpåOAr¢:ŸÈIˆGäÏ®{Z ÝJë—j³á1—jëUœÏ(„ÃRrFãwLà $aÅá ½ÌÆæZ,Ÿ„s/ç®V÷“•ºŸù—q(J™•0-bŠ{ONÝ´f¿ÿ¶ÓÌûép+„i¹Sô‘æè2‚Rw,ϧ!ÍŠý´:¿º´w XB x’¹Äüù}{[1âÉùåýãdÅú>ëKyañ^{Ëí©K5×_üýhaaÝàÆAU<Ö´ÿ…Þê§*v±j>F‰@¢ÀÍí‚“ÂtãðX¸-x¢q¸ŸmVÕ³e'U‹…¢ͪ­BkÙî³KûZËv¤°þ@[ðøØKÎãáÇ×ç—¿Y½ƒèêGë|ñL1¸¨E‹ëæÞ‰¤ˆBF»O>ÿwÈš®]ž›Z½„!³KU[G‘8³Yé[dÔO~° • Ø ,ÿ<ÖàǃŽÍ·×7/?~ý&óÖS1JŠù–¦ÅèåvA G ýHÞ·w.Ò€ë÷‹ÕŒÛtbê9 RüѶ æv†\—1ÞÒŸ8`ã±à1Kš¾è<+ç¼°âKU>*çr)épÉymÃÌíˆy¹ôq'6·£®°¹ãèð`øÌcÇô>¿3q=]!Q¹e§=§åväÃá’PÝmîÝ€í‚h¹tÜ«uõéoO HD帟$¸<7Yãy¤?õœÑNôö>ùž ž¸ª2i³ï*™ ìNæqU_Ë®ÜA]nµ)`ë ¯^CňÎêgåù}—@¬ס$IG$·ª–`7º´º—È'a¯Ûëâ+ßdiªZ’sP锢;Ö¥â :·#PKÇŠGß>[/p/g÷X;»`‚ù8D¢‘Äyân7Ñ!ìmï»\1à×#u³•}ÕÊèbP|wS;à–_YûOC"ÅõybÁc}#±ôüìñìúöÛÇF…ªQ‰bÌvR:‘Û9‘‘¹›å£Þ¥TÏÝܵCÙr7µ¤¼šE>wsø.¹›5¶ÖÇ»Y·ôñån.»4ws÷qvôUʃ¬š»IÓGBø0ðÍ•ù¨r7ŸÐ'pê†[ÚÁ_+wÓhùâî¹›»/FÎw/ׂ,m¹›[îæ–»¹ånn¹›[îæ–»9›»9í—àR}_•Ä[îæ–»yL¹›…˜Áq(eÍ5çtÉÝájÍpƒd[¢Þ­Q1Îæï×ù‚|þË›³†ÜVÌ%Pj`C$ «z¨—8TÛ{á1øíx¬%[?žbz‰±“§c{ñÊ[²§¼0 #(½ðENÓúzёÜÞxó¶àa\˜×»$·­`à|LAÿ>¹èßß`eöæIšño¿M—ßžÝÝ]ÿk)Ö/¿üÜ/~¢Ú?.“â]}öàÙúúïú÷\YµáPOž~û”ˆúQ¤.Ûí*k¡#âJ,xŒ+YdÒW›Wý)ÞGp!p-…|jçÂ^é>)äÓïÍs;}¹Šù¼b 9?Ea/sžtÎw/,õ¤œï^óÑyíyÓAÙ¤.~\ï$b¬e™—T€#^ËówŠ*´ê.í¦Â+“í‡9C¬;{b œ/רX“?™âÕ C¤›à®î'¶á¡Ðmöÿ¼ºüs²Vª[+¦RÙQÙÀ¦v°d0Ç03&v¥¢”Þ™ŠúNÇ¡Ïx$Dn0.cêZÒûåu²0W- „^ÊLéQi¡ßÜod3gBžgÄõ ¿ˆœOpØáxØxךœ)èê¶t!Ÿ[Uì¹s}‹t¯Å^A.oøz—ÀØòwØynÀÓçþ¸Ìº¡j]$F'Zo¼³ë»¬Ìö Þ;DçTðLë?$íðø¤Èÿ<µ#cÎþÓ•êéeþP{BÝž‰b&v];¿´“T4£»U1Y‹Û†î@žè{”·hÕ€SʉPÔu– ô©j±¼šbÕ¢«fú¨‡ ²àAãÞ³;Áí‡%Ýþ¸89N{y™›wóåã?.<œü!e]çz§Šà½Ã¿ýöŸ\íìØï—ß.®¦,˜¼Âî%ýö&égq¿|8.zðôxi;ÌêÐ%µªIïXÒ ö0²D<.²Dé5oøÙŸLc€Ý¨“ ‘:+vb‘R<*"±3Ó½ÌÆû­ËýìONÎòQãq7§©Ÿ¸¦¸ýŠOë÷e­Ûª…IÏãq°˜`ékç:½Ù%Ÿœ•Z-?/?Ú‚PYþòýó¡F»ÄSÑ{6ûƒ\<-H4€íx¼ÕøË.ßîo|èyøýÛõí×½¹¾³ BOŽ<¦ !öEVj¨‚ÚŽ‹)©àÈï½Wl j3ÒX‹¿‚Úð}Ô*l­\A­jé#TP[‚w±‚ÚôñôUŠ“[UA-`:uIþ÷fÇ5°Ääu¨Á™‚Ú„*zN ïU‘þZ jS¯KÅètëȼS©¿‚ÚôÅ”‚ㆠ"ûi›‚Ú¦ ¶)¨m j›‚Ú¦ ¶)¨}´¯úäSP÷ÕƵ)¨m jG¥ 
–‰)ˆŒê‘µ„rYý+ºMKAO:lÂ^Çü<$Ñ7àáÔéù¶Ý˜Ê“Iyl*ïÊUÑ–Ò./cÐ[´N½Èg§fJûµËû‹¶pžµ‘#Έ¶d¤EáŽI½¼—væpÜu¼Ë æ7E2$·þL)x¼‹ :œ6:Ð?ßôÞQŸ,^0¯;I½ãû²°Ê æªSÝ‚~Ä´9îÉ0cϹL–­çB0¹XãunRê½4t 6:N!²Út4 "7§KqlÀ“¬ÔøñõÙGn“FÙ)£LæTø3«iH¹ë(î² µcnA¨NÒÒÓfäïˆ5S¢´ ¾_ÔÊŒeŸþâןOÊ?Ü­Åõìˆ äHõ¥‡|b´T[Ÿá?`¾ª$¾Ãú´(ß yÈ]^òûùí]¾jœ?üœR¥žì¥Œ·¸¼mÍí¢9_c$oÛû‘ö„øjÑFI6Ê[«ƒ¢.ÝðÑ0àÂgÁ©—ðØÕÝÙÅEñ¥]>œæÿ¾9}ÜÓ–óßN— ¡1AŒN´©´s²ìÆÐ<É&BÔ Í.߉¤¿•vzÜíß›mw€«çÌäƒDJ _£s;IÜM.¬ 'ól ™UèàxÀ%5òÞ,¡p÷Ù}~}vuS³f=#_ˆòq½†^(36u¹¢ÆØv äÜ€üq àåbç3'±Ér±—åPÒÒŒ¯QPÉÁQ 2¹¸Ú Ÿ<çNì5ŸK¦ïzƒ½d8®™íÑ­7èïïXI±£gŸØ‹Ž›Wú5‡¦úäö. °ø£F°®UØþñµ¼ñ?œî2øßª¶eØÄ“DOÚå ·CýÏKO>âƒpRoe¥ÝÞ3\u ^Mó/˿Ϋî±Ü.âÇiþNòù&@:é“ó°‹Éø(³x”aã̘”XGœšsCMÉz @™ë xˆ>©]ÉëµA†\r x¬ÂÊk•fhÔ8>#G8’[šRlD|(50¡îG(íbB *Õ®ôYÇÒûíí­ç~0Yž4®P =‰®Cñ¢.JOàEߘٷÕ{"lØÙJ…’3ac™AÃbÆ@ä½~ùÌw?ÜèÛûËÛ‡“’Ì2Å—å!Hýl’òMG6ÝR‰ŒÅnG«%§AdØSÛ­‘ŽÔC|Š0rÔH—ÛUDü,²V]G9£OeêèÞ$bÍd^™§xZŠ›:tÊ^™Û‘P%¿¹–ÔåãùEYÓj§:ªõ @ØðX @|Xf§˜éµŽrÝ^’‰BÔð‰$qËGñåAðª\­ªZ~tÉ­¯ðkÃÃæÊ.¿ÂÝJVÕåÃc›äÃû%%Ûöäìϲp`Õcfê8j<ìܑ¢¶!T¾·)R«O¸Ý§ªdXO#‘TJµöˆøs(õ¾+‹IÕÚeÇE*Ÿ>“TÓß>žÿQFƒœJ¬äòA§¡Wñsˆ5ÓÅäjí¶ŒY±ÐQT®C¿p&¹¾ßMã €Š^°oè~Òø®#K ÕÜa³Z¡GT²Ùžð|îjõóêþq T)Å^°¥Gþ“Vª÷]YLª$¶t™‘*JŠØ€'|ê*õóáî—»ã‘J«R=¢¥OQ>…Vvf)±(ß›Rd±šñ¤O½^}½™þÉ4^cé"—>…Y÷f1µXZz ƒ¨åK™<Ö$µ¾ƒqwûçåýÏé^Nª‡˜#¶ô‰>‡Yvf1±Z;½~Šé;¥ôhÓôfÿ™ÄúþãñìûÕ?§±±ZûäÝ眱>ìÌRb5wzý uôœÛo¼Ç«_âü|Òæ!”Ê}©G{£óÝ0 ‚ÇZŬ˜f¿p¬ß¡@òŠ´×uÜU×cgä–_0-xbèøJÍõûØ.œóùõ²d¤×L‰Þ %Í'_¤sÍÉíFÚ°¸õ‡Õ‚ǺÂ~P»÷ÕúµW±7Öo¦˜/þY)qUÚßóõ¼ûÚ±{0‘-xè|#« å|šçÓy8}ú·™žè†š¹¼w¨¨»!³¤Éÿ±wµ;näÊõUóãÂfd~(0Ç»›Ýï]óÎÉúGÔöÖHºjÉö¬±•È“¥È–4­‰lªÙt'w¶%ªû°xXU$«ŠÝlËò‹«õl<5A”žì8пL›s_\EéñºÚ-©Ú ^–"H8OppCñmö*÷Åè}â¦h¾RÊÇRh‡‚oÏèn²ÀÆ)‚]Bð„»ø¥X¬‚tG q-À@ùt=´#¨«i op‹Ó?Ä¡±£e±ò©NÀ93YÒk$ÌeÍR#ßv­ ”3}· jrt÷C‚'´úŒW×E>]]®‹ÑG‡@ÝñIÀWNÁ÷v@JFxp¼ELÒ M‘¸‚GŸVs¯ÐSy Ý w²nÅ8|$êP’c¡•7²ORDŠ=ûc6 Œ$ª ££i³ÿGÍ>àxb7ËÁdö~™—«åÚnª¬=ÈÜMÜWLµö±Z*DN­ÒÚ­ œ€!xôI‘–ùh4_ÏV ×þF„îM&[Q”#îÛdR‰ð¢Ei‰Ò—u{BðHÞ™Ž€¹¶˜ÚÀ´&"vï)©…¹uÑÛ%iJk«â˜_/‘Íñ0X n¼;£j.>å^sÃ@ ­%óûh /ëNCĤq@ŸXŠú!xi¢ï|ß~ø¹¸ºžÏm)æÞ—ÓʬvòmyC;|GP¬nŽ—’¶"Oð¶ñieð q%Û–pI‹^ ›@÷ýtþ¹„UÓMîJT®u "%1ǬN€¶׬}rEtÕRuAP¡´¿ w¾dßcvýQð´ôð²p·?%MÙw°g>Ûí(j;îé! Ùý ‡à‘êäüó]âöA‘Qȸ)æL©"Ãw©õOÑMèÜk†‡Ðý]Ẻ0ݱéŠRÄ8ƒŠÍééyåmè‚Q'ð <ªµ^OænÇÈ^“.´çÊ^h§4!¤ËÙ~AЫ6=Æ]–y˜@Üêš4b¾ 'Í¡´#ã@ܼ*AÞLž•eöU垼܆QKE¥ô˜v× ç¦4ø 6UíP‚inÞcöñt§ÍÔOiï ¶àLÝaêehW›P™ºGR0í¦nðq2uÂZ÷6dH2IùÃK‡Û‡lÚa;[Û<VÌŒŸÈ ªÅ(ÆÏÖh '”[¼3ŠµÔ‚ù†ZØÙ&ssä0 ‡à ­²{DþôIá>1d&X ƒóêÃ,ñî—l7·¥ &ý˜¹Np@gðHÁ¼[$¦v›lðµÉ›¬Ê˜sÖùrò‡]t<¬ûü‹SA(æÜ§µÌ½¸Áõ»¢0€Ì0ùü )JphgÞÃMÄ`< ETMÒåvËR1 ά7ÚÚÑàÐÁÄ\6QÁLáïŠîþÊ™ ÏÁú1ðpD£Ý?[£õ|†Å—üâÍaYV¶_=«Ûê "©¹:Ù7Ý„)YBcêˆüPŒ*…¨|’h2óÉ¥7¸Ñ´ãº›`B+6⛢°`ÄgqM;ïÚÈ«LdÃÚ߯(³ÚÕKÚOpêÐqukWòét¾-•´“&ȶXÎòéƒk~Ý&Xðl8'¾ˆ hÇ5ï(¸´áý’*¢¥§ìþ2ûS5ãx¸î4¾gOŠÔ-EE8c0|¨æ÷Wâ³¶y?D-Æ](½Ñ…/•' `žhnÉn îÌweÜ—Çë¶IOð+Ìk¢¾ž(PV¤ãŒ€Ù[&ˆpÁ£Ht´°ÅêºX—æZØ‹£Ben¡*‚ÀIõÚU¥°’ñ¶5:¡uó®hÞLK)¯– y©ëZ5ò.»ü8©H¹ØÜ¯f$~O>2§r5~ãîÛáFú.²|<)í)W6ªf‡ûZ÷N%ðCðháN¶c¢®O1,U$^ä*‡ê‰6-C¨O <¢C†XQëhÄ`¨ bìƒLÍ&ûÅ£’˜w>;ÚÂF2•vÃÎké¤þÍGÛyÑ8)z‰a;/I†5Ç#cU;h20ÕI§kî+g® Ò÷x–®O©Ù¦i¿Ø¦ÛTH;^ŸÄ)|‹P7µxQa'æ Æ¢WœÁ„´¬´gÓh4¢¸!ºîIjfQÕ/f1¥ {#¡³hôqUöpÓ縩9Â{ÆAcû4’=E*hUb¡NÌ*ûe¯¨ÂQ+Ã7¾ˆF[i—a§æŒîg"]Ä5‹: ãÚæDô‰Äzæ%³P/9¬¤s T4Q¾ÀІ>5ƒzæ 3ÉÓ‰³HÑÑHÅYKµ¹C©y&úµãÈ„ì”gFCGÛÍfRtC¯À~¤f•ê«”êzû™s ¢m[³ã{¤­·­v!1—8b½âǨk å…hÛÕœ Î4Sü©YDúå‘sŠ»Ù·v‹?Ú65g±·©OÍÖ/?œsÞBû|\_ÕJÈ-îhûÑ\œä…MÍ‹4'õÜ\Dì-fñœt¥Xc9ûvB)Žp„q˜Ð¼C õŠ Bµ¹¨À)b‹ò´³òfà¿Ä²Wã/ o¿×ç”´ŒF*Zìè5Øš Œö‹ Lwì/ óÅĬ5á“{Åà—E>Þ ‡ŠFºò'Â:’šW¢_ZFÅ»ɳ 0E^÷¦¶ŽÅ%%;ßK9>1”Òýâ­½”{Â%Åb†n¼º ‚•vÌ)a¬OcN G±<“2Æ‘†ž[û$ÇÑ¥f@¿¬Õµ³ Wƒ3"äD#Ž5-;¢¸Oìº0EoíÄÚre³nó*è3ãž1(ôö¶Ã—0F‰5#HFãmÊ©4ýIÍ2Ú3–±ž°lߌ¨htãßšnG:–šw¼g¼¬åu¨Í Ä´Æ)i˜[þm*Øô§gÔAI’Ñ0¬gã]ãš=¸~¶#â|Ù!9&ò(›âIïQ~-.F£;Šþ¦ýÐ,?v`ø­‰¥ûE,Š’ëS¹¸.|^Å8Ši2Æ…õì[S‘âžQ‘$§âl½Êg“/îѨHÓ)¿°ž}s*Ö–®ªÎºAU瀗Ê~ñŸÓ“Îoo泉ݛ{Xm|S*dÇ#Ÿ4q/…4x L`o?4‘Žê¼§WžS3=¤"ÁÒ%díò$O‘¦Û k©G ù®ÑRjz%JRr7îˆJrËb–ZiÜÙ+Y×Ò•%T0&ÝksÓãà{Tºfx|Â:ßô ÃÓ"ÆøðåCÃí¡à. 
„áŸT‚2⹊ʶã®hãnD‘¥’DsJü]‘˜&`¼žæ±U»ä~Å87?¶ReN©b“&Æ Už^@;êÚÙò]ª‹Ð€ƒ+"ñãå" à=‚0Ι@ÁyJÇn¦Û—&ðf2«òîóÙxº‘¤tKRaÜ$؇\aNijÕМÀÐ)™ðØ”l ¡¡ŒSíÇCBË·” (Xìmîݾ™À­ºh#¼JvëêÓšvÝÈtóóíýÝ»X1»í0¡ˆ"í×Ð3Ì–.‰1W–ø¡³†Ã¼GHâ5d¦]° q(`²î•Ê->iKLx=h‡ƒ¯åü–ŒèW÷‹axBo'óÞ‡‹ù|ꑳÛbSbü1ßuß¶,ÔÈÏ÷¼¼ûz axˆˆ´â¨]x|«¸|JÁE»)Æ“Š!ù‡ËâÖDh·D™¦qŽ|=`×N§FŸæV’D¼Ë.WùrUŒwcŸÊ'S³]š­ã|U”û»³•S¹aë³ÍßgÇÀq¦(~p¬´·°y¿å(ã c³[¼.î\T-²âS>][‘gWÅ(_— s6ÀbÀðY6)á—£ùÍM1ãçoggóϳaöýîW?ÀóŠñ0{9_OÇÙl¾Ú>ýe1/×Ë"[Í7òÈ–“òcö·ùì?ç³|úÂÄÌü\ ïe±ºåS#Ä'ïá‘æg㢲¬ð0«ó2ûéõpëÂd«/ _ˉ±¸æÝïçëÙøi˜D3 qQŽ–“…éí0³m²—ÕX–fžÌ€y˜ŸÃDú`.gÍ>O¦ÓÌô"ËWY¹yÊÝsËÌÞ¼_o'܆ å 2ø·o^ ³ëÕjQŸ=›”åH±,Æ×ùjãúìj9ÿ\Ï~yùúå«_Þ~wA´äaÂZÐäÍß¿ªØõ¦¸šÏW?zèÑå&'=M<€öhbÚn$L²sÐÏ¥}@6ÿÒ¨ ÛdÛùñÚŠ"Ïnªçoû2ˆŒ·3~~ùË–ˆ´ÕR¼–ººS¡ê]öÃd6)¯ÛêÐì åbÀÕÿüwy|X…‰–i€´V@nƒTpùPÙo°lgn˜çY¹(FÙèÚ\ç\žo ðy‹†l‘ßNçùøzÁ4RºúZÙˆ-zFö»¢œÀèîÐZk¿ ß=ÿ ÞZ}±%²*F+˜]Ã3ýÕo†•]@ðÉOæ&¸áÙß×ùí`262žïêÙ|´{<-ò²ø—ò:'\€äŠ÷£»*ÀÎj  ŠZH®Šñ{‹Q!ƪÀ´ ˆ\!,9Ëss5ü˜¹Ðð®æËQ1|ŸOËâÏ£ÒÀÆò˜+Â$H©’€¸F&`4s;_a!¸4¹@ÅêŸa¾n13ǰF°‡‘q„|ÑY=#ÖÒÂÚn¶û€^/ç³É5§ žS>?)¦ãÁ÷à)-_MÊÕ“ÙdútÓàù_Ì蛟þf»xi?ýúoŬ¨|ä!ÑV*tøäŸ6ìÛPÀ>ñéôe„eWœ1ôôܺ¹C®ñyöë|¯!Ì”sPè7‹)hàñ[_&‡1<ÃÕr]QÌõÀÐÖߎI?æåõðLüöö3Ï_^~aÏaœ÷i—ߌƒO_ååêõrŽ]YW“›bð| ìþšÏÖùòö<ƒPíýòí¯/ö x" ~¿©¸ùõÛòÙèݳ­ÞÍG`¤öô. l¾ÊŸ½ùñòÅP‡oÀAßfÅ´þ×»rĕí?­L'@„ñVÆf_WÃY úÐ0áÕýO¿^®ŠÅðló™ùºÃk~8V›/ìãîôÆóß7BûœC#4ø {©ÁËò;à–¿åô$¸þ½Ÿ¡´êjËj³‚XZ)o&öðÿ„’{8¤¯æàÍmf”™¿.óYiïÝþx^M 󯯟óétÓ–à÷\sL¯0/®Fð«âËjHc”!ŸF†&#˜÷ö§æñæafÆS,1+ÐÓ?ÆËZÈGJ»owL²ßp7~ñF–ÿõû™&©„>þËÛ¯gÿºžL 3Ï^‚{ KBóÏïvék/íRÛ|f‡ë ¸à0=ní3«5Ì?7~ñ‹×?™ÿý²1{¯&ï‹ÑíhZl¢UÌw79hËÕbšì‹îlz™EWž\þÝ,‚NëÂåO—³|Q^ÏWö¿ƒfî¾ÙÅ×´ìþOÀ–׫¢ø8ž¿®ÍòÛ'˜Ëj³Åüó*7«îUË/°?ÍTZL'£Éjz{GÔ*õvÄâs‚Á€z[ÀþâúñÆ]Eè]vS€1‡5Ø÷â ˆÍX×ë‰ÙTºÍF[³öÜNAãÜ+ÿüÿ©ɦ Užÿe§¬Ä`g6ôhgäá 7øNñX{lÞ±5Éë("À&“óŒªóŒë;›lUÙÓß°§ÂÜOâð$Œ>éÛŽÃ}U}ß!°TšRòСÿíÍ1ÞÝ®êç|i¦jYmå½ËÖÕ~LfýÀìÌ,.0Óg¹±jfÂÂô {ÿý $%¯ÝǶ[é ·ÖCÙ¢ˆ“sslá&5åaìEª‰dîóGÞ€nÞ1JRlèà¡QÊ}ì_WfKÂmˆbÅç>>3’Rí;ùƒvˆ¡XQ7 Cº y÷ ÀƒK²¥o¤(‘[ŠR ¤¸ô¢–’¹ ’5?ÈiIÜæx9J:„'°Tò]°Ýr}8$/,Ë}DƉ`,¸Ï¨A;Št•ЀÀ!°5îžx¢íkÏæÝ*ÃÈ»e¨”bJ{·w¡`*p賸yW$N`Bð„&ê´–l-Ëmk%S.¹§Ö𮢲:´êþ¶•0<¡·8¨ÒW˜8§-‰[‚š™”]ዃàc3’Z/4goónP”ÀD„àÁJZ‹÷ùzººØT¾ðJÚÈôÙ»–å8rû+7fu7’I€$€þŠ»›µ,W·5¶,‡1s?l~`¾lÈ,=RVAd2éŠèê•ÃfW‡$âõc—Ðû Þw]JnY ig6  *µà±^ œm¯“ëûD%  ‚ÆÝ<ƒïplBÞöI„ÙU­’B™ïÈõÊxÉyÉcÖ¼®SÛš°ÝбéÛ€z@Srk£¢ÃUýšvžz8{¶%@R£òó8ªôß+ÿºß½„?Ýïîžî³Ò?T“{fÃÅõUG±8N,f Š[¶ã\ÙêžIâäI4‹?#g½LmFQ ê4À¯bÂ#+ËCÎd6‰**¢*­FyZÉYvŽÀ6ÂbÂcTã³d¾î®¾?~½þº»þÖ²&ޤ›²¨I8xÍÍ—ÇR³§û®Ô>Žvyá|È©O2¶íøqu»{øyõÞ:&¶š»9]:tù( PÏÆU:[Ñû8Û&2ë P3yýÌäu‡?JYI¸ŸÆUªÔ‘^O-ç³&äÊé8l¾8ößaÑòa÷㬋ã:ï5w·/¶c¾]Ýü¸Ù¿Ú>ËîúêáòEv—¯’½¼¹›ÉUAzÄÑaP€çq!ÂzCwŠ¡»¸ ÛùÇ?±Ú½oúpòe‹ÖÆíKŠíñä‹SdOÅónÕôþÆüp9‰ïEzG•ޤ(=/ðè>vÉ\Ž`[º¶#'Ø`Àë=Ä6ËO·»Çû›ë‡Ixõ¥XL6& l©›Úûr7ƒC-‹m?Žìù;!ŸÀ :žJZ³Á,ú¸ƒ’¢v!f¯ùþ&@ëÝX+Ij@Ki€~ x̵IE[ß¿{v¢Nî"W•(b>‹PRTf€¥ôPêaÓ™)j€a€9gÁ‡×”/Þ‹«ï7Ÿ¯>Ç!r7&Tc {—ñn˜OkaÆÕófZôbúÝô*y¶“2|7r ÿ6r}œÍ(j¡;­ËþRÝ[MA¶6{ Áÿ.b}˜Ë0Zm,=}'8¡<~3­n>ß^ßíµ³šgU«¦´-³Mg-¹š§ýi‘+þîãðµÓ¤Ð_é·‰Gf4ŒbÇ-ÍßC1v'±½´™´»ñLÜïÝÇOkÙŽ[ž}ÉN€u<áwÛ`“’Ê.tºÛpš6þ&v½ŸÇj:…¼7ÍWÆÐ©9Qí¨×vïÏû§»ûf×-×íÊKSôâ´·¦<.Ì^4G +E€’r¢‚CŸÎÈÔÊR‰ž~²5àû « °>ñdUIŸ`²5xW +S õ,ˆ‘"o[€ì²Ô?–È>A$Ð¡ŠƒÓª@VP%_^“Š>9Н dÓ¬J@˜.ð<®ÙôÅl¬zŠ:²l/+++++++¯@6—\²#ì¡’¾t®@v®@vJÈ21Ùªó)ìQ6§ »ä|Ãå]ä±ìȉÓo7ó¢à×ë«üÇ[©!ŠJŠ`L”ªUæÊ8˜§¨tª2W~7DÕ‡Éwø¸a•9äËà%ìr‚OkHó¸èC·Pà œÊ–™¤ñ¢<´¦¨Ø‹ö?¾æ¾$Ñ ×C©J¨Ý•{¡…!l ¸Q9³(Ó€—5|®’°àà˜äYÊ»Qéz¨>Ýåq(qM…°õl4` aÀ+®5ø×†­ï¥ö©E˜õ€êHòç´,‰<H–-ím™Û>Ķ޵ñáYÖÃÜR^Ý)²hØSž 9zs*[Ðó€Þ‚GbÿZP •ú="±‡Œ5hwù<Ιù°9™ à‡ÐÁ€GÂʲ†21Ó\Ô‰šg imÕ‹­ ¡÷M¹í ”sÚÿFï›>*C˜2'ÔœñIXÆäêçMQH†1³¾¡¾²¨äÓ'P3­ó8{žØ˜Í0À*¶à±&‹¿åšóWêw2*u˧Íͦä"ž¶ 8à’hÁc5 ¯ž¿–÷ˆëç:ý3!ÕwXÎÌ‹>±vs-N}v l¾ÔËÐRÌ&iÐ'‘·×4O=`½O xXz¹ˆµ~·bb—o㬄LÑêXÎØvTèÜø,xÀÞâaî6)Ò©‡øK)?[¬ÍT] º]è»sÏ0‘ŠkÁ#=ºÀÿ"ÑÇû"Ñ/{³¥~òI)O—Íœvôð¶rÒgˆ. 
x¬ºüòãádj)té0bÞ”"zÓ¸|N÷xl\N+ZÜÞ ²á }=¯ûÞ&{¡…ºÐØAyQNHOVÙÖ ПÕsþçnz…ý«ÄŒÎ%DU •ŠyÉ'§ ò@ÎÜ)¥¥,(Ó=ðøè¶¸½Îeë²c_Ú!kX9Cµú4’Îiû½F<ÆÛˤ·w²áªl PË'¥Îd—­YØäzÚÎ-Xr¸½"-x¼ßè:_ýx‚|P94¸ì<[7_3ñÚÁò]ðc탛Þ_ee=M!×ï„$U!!Hr¬ ¡VÍtíe³a¼vZ^]ò¹Õ.çõ cļù{ÍpÃÊbd‚® \šedtd^V$zú ºkÀ÷IЭ °>ñݪ¤O0Aw ÞÕ º¦]ŠfÝw6IÐEÊ;ÖÿýïÃêÍžO+?ׄž=þ½òsmÒ9ž)Ð??׆LÂ9?÷œŸ{ÎÏ=ççžósÏù¹çüÜãù¹4Õ³`¥ËtþÂÜåpÎÏ=ççž@~n&&±“äHõ™QÞ­kË=¡í¨`ûòÞ6<>YÝÅÓî8—Nµ+oþJéèêù¶e\çzçÛV¾ÿ«4˜qæ<ß ß/“ÏÃÊ÷pàˆêSj(Éz°ts €èÄnÃc}û:œº² ®þUJ1xÐŽê<<™w:ë"¶Àa acC¥ÝãäȘKÇW¥I|òÑkïsyœ=Wj[82`9ðxg ¹+¡ÄïdUÙ¤)d4ÿ–‚%‹±Oçî{ƒaÉÉöúµàñ¼2»ÍØrúªÄŒÐëèØY0žŒˆ5(ßIÁû<Öx®ŸÉîñëîéáþé»jŠä?í®n.ÊØR >¸ú•ØS*¥O5äì*¡TGmÞgÚÁ$? \Ö‚ЬÖÿÚ]?¾“N=˜˜@1±v"äqh-Ë Ðƒ` YðÀ¦ëõí,{“eÝP¡©bm¯Éã½VVàð€{¼Ø×ìÿüûlêaÇùâFY´ý£¤²z·éеóÌ~DÌ´Å1köµÒÜÕ÷Ýýã^ªõ#ާ¢ F%æÍ©2‹ã”°ñ³ ÅÛ²O²…ºþºû’•û.ëáÉIb>ÛPÁ“ÇE”Aë×Î9Ë4f9Sµš8« áÖ~4¹‘£&<Ô!ñvÿöõ\„ò­8ËõÝýîîá¢Iím©¬ÁÄu_¡dˆÉ!¨àÙC4ç¨.Yí€pÖð¼F© ”±}´’½ÆónI?ïwS€dqHç ¼š¾¼f%éMµ^?á˜ïü©`*YðX *î^=fd`­ØBÍ•û®5–ÔæÀc4] <Ø¡¢Ö‹,g‡ÔÓã×I²¢ª;¡ó-@kÜûNh .Ö} U‡^™‚ŒÑ}+žiEŽá·§Ï»‹‡gQß~:.Þä4Ä|%C×€6´6ô\s9 ˜‰ZæA4„Éq€<WÍý ÒIš^Sz38ûªWQ-Öq+ìäÂç3}ß'›øöîÇM–fþÛCk©\LnŸ£¹Š¼Aeˆî±™Æù%K5îåNcXÀœm«<äVV{{ÌɪÞ?Ê7“ ®ÿV„Ëuß:£{AÂKŸ .`ÒüàQòºPñ‚Õ{@+Ð¥L/¥ö>?æØŽ‡`«]àâîóŸOûÔÏ‹‡òæö8 Û«¬hÅÎÒ}cPA/fxï#7ÌJÒ†´âïÇd®ù!ÏÁqËù {ÐóŒ¡ÇMeͳ§}Ž<†=9¶àñÖ6PÇû©>‹èEÚGÛ¨–0M3¥ ~ ~hdé |9SBˆ 3Ã0†)É5áé×DäÅxw‰H*'"¶ ­Ož‹!.×~óÆX%HS ž”ºkÿíIHÕ=KHØ€“¤Ÿî\®yáH-37Dó![Ç)¶à1Wú}¿ÎcdËš¢éP2ˆp…Øjd6Z¬ÙfÄiÌÍ4䓼B°ïÍô->ÿâOÑ¡$ªº…¼okrV,E¹˜Ñ¥Ô41/š§ø&Ot¡G¡Ù¾µz¹øüôãË÷Ý®ú1#”À´>¶Z{k`.gAë<ÀaAJÐð¶æhÖºxgÍtgâUý–ÍhƒïÂær4ÏCN‹1vòeÜbYuFF’Ð⯋äÖz­kð–k™›¤LƒN€É7Ô€ÇÃ|¤úHõ…ˆUßbr”B \i}ÊZ‡s1’Ïæ`ÃD’c &H)5ø“ÇõÍ^„›—×ýÍõ~i©þÁ„Yf-!ØîvíЖk»;Žñ§è7ᑵr¦ùÅ• ×àÏKÍ·ºó!‘Ђ8 Ò,;ŠMx–ùéžãü {¥ê°KMNèÄh[Ø+°.åº÷Õ0™y'…ZònT’w¹´”ÌŸ„PwMã|ô=wöKÀ.¨ò¥X)>1ó³Í±&²¤¤Ðçc!QÞͦq;RJG†=§9g¢¼<|Üsž‹æeqP5eÇ„,Áæ– öÈd~Õ|YRŒU™yL™DA‰eÙ;^ï³]›o`¯~Þ”­frF‡ÊÀ“ÓQrÛk6'•Ö¡AÇ“¬ejöUXsÝQtKÅí ¤ê–Øûd­eÙ‹€I¨Ö€‡¬9^«Ù?”¦)»‡Ç÷å†ïž¾\¼y;>x’kê&WiN`ØfG †_z¬(l_±Øˆ'õŠ ØMe€ïïžJO¶Ëg)}d§X7,]@ ¨íoyœ]vŽEœ&ãöª·à±†½ŠêURó7¶ õÝ!ø˜¯ C@ f;w+Û'Xé<z­ôéýüvÓ"R®‹KuuˆšaU*t›Ë]¯¤,–âøN¼m@ç^žÐ£T×Ç-ñSíŸo”“d•uå¿V·©Dåauçf1Ó綯GúyV$zúm_×€ïÓöµ‚À6úÄÛ¾V%}‚m_×à]ÝöÕ´Ky‡›¶}M‚—Ž´}Ý#±éìñç$Ú¾N¨ Táô”þ^m_ˬÙâºt„Ò¸¶¯²¼>$ê&»yrйíë¹íë¹íë¹íë¹íë¹íë¹íëÁsµœI‰ôs5D9·}=·}=©¶¯™˜%&¢F`ö®›#º¯ÓÔ0‡í3™mx¬ÏÇS¸~}ë}í†wðX‰–Êàëe§qn¶­uj,[~WR¾"õšÉgÉ7h, —Á£Ü¹ÎÈ€J)Eiç{,m憶L…„ëXðÈFþCY3AêÛQ(âÄÚ];öËz o¼þ Sˆ8`Gµà ~}½Êšèª%êù2ïG1y4Öæq`£ a m3 KoVx ^sËw$²êÒ-ãÌ?t¾/úÿÇ«\ÿx‘æjTD}1ÅLÞHÁioºyšßt{³—R€èU¬àiÀ`ÀS©ØdIêù°”¢¯J1Çä¢R*kÌíR‡r×0“èDôXð€ïeZM®)€¶¾ÎòŽUbŒQÛló8}zew!6eS+_nTñçqЦÆNj¸L~eöO{à{þ\(!F b Þ¯í‹mÃ%è¹A\aVö½–Ÿ@jJG*6ëQYeHŸ~l¿F¾–›œ§.á­¦€W3TÊ8â!K ¡dƒSÇ“üšæ Θ£áż ¯%ÓQÉ'(àXòe¾a„Kº;tB¿–:Tž>HŸ%yBƆ,Ž‚Çš'Üõ7 ~¸~Ôí—E¥‘Fœó±acaâÕ%Ó–à^KaL©aeK²ÓˆËKÁë‹4«…{\¬ЋٰIÞQá‰8ÎÀ©÷’Ö1k¯$HÞ÷Qœnfˆw0„ >ä4àYÑ9Jô$á¤QÂç­O\ÒæÚ‹!®%äÿHß³Õâ‡bÒ þíÇY]um5PŠ~’:iÄqä°=q×Ò­°×’½`j˜Þ€ª­ûïÄÌÞЀ¹kè†ÊUS‚†ã°ÒÙVÚz-QBi ›ôÉ7ä¢/ÙnFn8Ã5uûÃî]s'òÕ–cà  çH žµê¾<›é¸ã Ë!†Ør‘ˆ£É±ÕØk~=)nüg‡ÄD+ϵ3XK¡ä ‹f*cŽ’@¨áhK°ö(±Kþuƒöš‡UQhpçHéJ¹ðTé‚-}È%Š §g’4„>¤† ®®VÑÀkæó©ñüÿOÒ×¼"ŒÐbßQ‚å•ÅÖÃ_Gv.ß0Ôp€2ÎØ{òwÈAˆ x"nIžë»‹·boè¯+;ïuÎgØ6aË1¼kéá1îðÉ㎯å®ôð“¥Ø€'ö¨M\ø[pÈ$ó qÊ1/ Ø%­©Q» ôZ¢@Œ¤›yÈ¢ïƒîÑ.Íκ5'XjGF8ù òj.\çauƒÕ“XK$Œ16L6„!DBfô $Z+âŽ.X*ÿ¤‘(”®f kp²ªHîê ¬%P ÖóHʸà†(zr-GV´6û3¾ÖÅ1qñƒ[Z^uË0‚ ¬E4=‘`ˆÂ“OZ½Úý8çÇ”Y;~ý«dÐÜMÿ´7YcKŠù\m8‹ÒƵזÌj-ÕŠõ ‡SJcnKäÅņɬÅû+åæê6+eR‡h$£”€†³ªÒM}s’šÏZzqŸŽ ØTÆzP;qÎ…†ýŸy §…MX*l£O¼ KUÒ'Ø„e ÞÕMXòÇ“ IIÆ1»M›°ÀeÊF^ˆ‡Ë¶î¡–LÖ¡ ŸX–‚Êaôº =þ{ua™f1ÆÝzD×…eúbÑ7 ›ª>wa9wa9wa9wa9wa9wa9wa9t®B^j¬ºQó¸0>wa9wa9.,™˜ ”^9dË¿ó3êg¨.xF³àßýõôÓáÇÈÒ¸½*E‰¥0åÇĉÿüõTq¹w“•ò»äc§JMòµeÓ&+ÿÏÞ•%Gr#Ù«Èæo>ÈÂâðEטﱱ,’Rqª¸4it°¹Àœl‘\²ÈL8€ÒºSÖ&«fAŒàÏ=û”øÐn¢gÔùh]è¸BM¹æ¼ÓÅgÊŠGÔ9`´q¨à™Ÿ²Þ“‰'üÔ´ËÑüÅëKåiÞ?<ž¿üì³AËSr5‚„`|ÀT¢ÿEôÌ™YNh^óçq#Òô9à!øPGb?=Ê—‹oWßÏ®n/ïï4ÎÚ.5¡hº”SO…Pu\ Ø{ê÷à¬"ƒÀVÕçí8ŸÖÿøù9¨lä <ÐÛ è¾óêáVw¥w¿ÿ¸¾ý^0ªA R¿ª®ÀšQiÔt”¢Ì¡0é›]Š6Vˆ2€ÄÉÆTGÂhi§[ÙóÆ¢y1æàVñ¥i\aQ›ë!zºå-8®O’<­K„iTÃ’Ø“#o]:. 
‰ñ»þ}p§¶L)©Ér¥¶‡â€d¡&<Õ_¿Í·ç¯»µVCyY§À¼nL D:.EîívÍèëwñiÃþ³¯¹½{ºþíz{à]°hY4ÛEïÐ ¨‰D¨• 3ùY )×$¬sÁt-¬<ÜêL×]OW››uàËåÃuvøOww?¾_?åü½T–"0ä³Xô–kãȺÇ/]èßð œ/-x:;”<__|̧â{–S:YòÝŠE[ÎZah¤D7ÞÖƒ¤•aÉÁDÓÁ4<”Àþ<­ué¿æ[Â]eqƒ! ÚÅb½Žó®»#é@û–ÀÇd-xÈõÎdÞ^—¶eY !$.E+¦’Üdªõ¤´™™$^¢Á£†¬‹EÀtúËBLöÉaÄ"ù9ìc—<®µÛÚíææêñ~³¯ìòk—Ï—N"ïu4r÷WÅUuíx ·.â¯ÍÍÓžT¶'¡€”úݼSš7’a-×cæàp OTú°ÚÞ¹>ådón<áÕë .x¹™EÛ°ný±vm.rg÷Ü=Î" {5oJáhxà“°Ú<†„õz\îà^žlójì=%3|‘D~ÚÆ…‚<é}œ§ÞÛ½[–Rt.Ú/Ö­Ýôþ–(P‡ºC»úqsñmóð”1?æ{j;çù§ŸíYލ<;H"hÚ3特åY6K¸›B€OhCMë&Ù¼?GtÞSZØŽð“Ù>%t…¢õB!LæÂ­ãÈ÷K°íÇÔ÷Á±õùC1]¥ßçÏx0²ñøØ-ÉN¿û_]<µ˜µ×fbm©Bn\µ´¶M×Ã%Á‚j<!t)qScÄX4bŒàP—QËsÅœlßï©;wÞ#º›‘<¾±¹Ûg«ýµKŠü×gÓßO•Ó¥¼—ˆ¨Û€À`…1Énoµ+ἂóY #–£Ï¾v¦Ô©Ξ'†E»ÎRðË+áÚFq%ÓÒGV g)ÞE•pš½Ônš5*á§sqðÿûøï‹¡î¤Èýí•pšÑÃNßúJ8íÖ9œþØ·N;²]Åî©ΩΩΩΩΩΩ΢uõ§|äS%œS%œ¿¹Î 1•¦€­|‘˜f`ŸJ8ÎL«Q‹ž[ðÄnïOvüøƒÉ’P´$8u`J§x°(Îë8ÇØµ(ÎëïMžå@ÁÅ}¥UûÅpκårs`ãAP¼/´@~ µ2F×å¤2 #£Oáfà€˜xóüôíBqä^‚ïEüusX6{ôâÁº£C}£h½ËŒ­‡N;I%qG*ˆ;Ú:"Ó¬Ok—®Bnüžì«ÜµÛc‰•¹b @.Êöjx‰pTDŽ*O™zç“ñßÿâS—ÜÔ‹$‚µ$Y醈êz¡é¾èö ÅL Ðqíù63— }–`ð^lLfRe3¹ <ÜšB³µÊ®„1I9·$‰.„â‚u+®ãS—@9YvÇcZ§ÜzKwkH6À!ž0IÒ­uŒ\‡»%?¾•Ü™’HÕ‚ÝF,[Q7MèI‚c ¡9®v-xh€Š±tQ±î8°ò&9çö‘·—ǹÔ-qi1ãrMKôfÖµŽó|aÅ$ØxB nóV׬¼JïT"ÙgÊrþ7å„;l¹@WèÒÜ$Q­fg ¶”ÖÿÌ-xZk¤”LõS?G)ïTˆÑ%Ñ:ŒÉ ±_­>\¬Çp€ô¸…Žòs5Ýá ʱœÖÍY7,ÐÉwœÐ-,m¹r/év<­YÇE›=?æ.ëWOù¾h{.XÞc2³’ÍÚqè8ê[Ö{'ëQóGÞ‚§{±«ƒÎ±œ›-ÑiA¦&NG{ÎëV޶ Å7;-x¨µVâvW²Õe¼,¸½2pÏtÏzù²i)‡´2-6¥>Ìoã¹½þyœ¬=¤^L À^óúòöñ ¡tÍãÏ]t,.AyÝÑq¨½sn6¡Ô…tíÏ܆'r‡ ýSdCe[åöôൎ#Á„ÝæñÖ#¦õ/âÛðÄn‹óoWÓ=÷ï“Òå€}ÑŽ^w‡Ä'·WV6ÞKiÙ®p²Øï#·à\>…'qÙDœµfÞ9 R.ĺÍÜÄ«î£ðm[ðp¯ <%çܬ«Û§û»œ xØž¡hÏIÙ­aøC`I®ÃD® g=&qn€“nÁãÂùûž63¥Õœýq}õçKÐ"e‹©#Ã|cl!$Iد NGÖ¿†.»OLkLëíß¾YõÐaÄ¢]£N[‚äƒñ:®½(cî6@d/ëú<Í5w-ö¢h?û`¹)™îìêòúåVו#ÚˆQaXnÓ8ع¡ÆÎÕͧn@ɇÁŸÔئ̶`ÑãWc/ßG]@Ð6úÈÕØEK¡{ ÞÅjì&/¥žvU5v8çýþ¤î6¨Á—» ý-ù?µ»É:pø,¨¿» ÙnÒêI}RcŸÔØ'5öI}RcŸÔØ‹ÖU á¤Æ>©±J­Ä‡«“yø”›M­sîÜéœTœn¼¼yE§ãè·xXg=Ùxc×óçËëÜþïæ:GWOô¡hPpSdEö4ÎíÜ+uÒiçß .¸Rw¯×qN¬©ÓŽxNÁ“ÄC[zú¡‚uƒ¨ã’,ºësà¬8rw£iÂv„õ§ äÛõ|{aãaðË*i¿Üa~Ù“”€®|q ¬‹50Z®XÃÛÐ×UvšÒ o iÀ‡¯Ç“\_yЂ©hÁ:q\$ëjIÇAH Ëe/àj NÁõ¿tžä`ñϺʛiÇp¹ßzå»ì\YØ;«¸Ê4.Fßw¦Ï$¦BI^ÿ—lÈC¦vƃ>1ÚxR¿òø“Ÿ6¿Û޲œBŠ,Þ#¼é8×\!}¾ææÃºm®n||–´³ñïV™í÷Ï_u?1Ù.Z¶£¨ dcÅÔy®÷aj-|ònȧ¯ÇÓoÞßþö°y|zxžNQš²œtªÓ=ê†ÇY“^ÇdºË&}5mÀîV±Zí»·àI´Ê”ßþøfsÿ.@Wˆ‰an9άb9ܾ©}òw¡k=vòB¼<¡_0¿=[>lÄr¢*‡mê•EÛ¬¸Œ+Íù¼å,o÷vÎr~½ùñx2SØò¸Ð%¼ß&•cdf—3þÄ’jäq®ã~!EPÀìÊ<©[¾üKÙºƒ6,g¨J>"ÌÝ ÌÔ`—i>‡  0 ˜Ö-xv9˜ûY¾ûeïOÏ6—7×·“=ËÁ±°n‹Ø›—:œë6árµ4X×[ðP¿ùžs0•Ž·Ã¹ uûiœ*æqœûœÑu¤mÓ `Xý|¾ Oô]šÜÍ3+•ÍJì%—O·^CÇ…~mï–0™8aq@@â˜ÑÆ]· 3SµÃƒôE úè€õÍuœîðú_ƒ»¹a gE“ýˆi}&èsHݧq»½ç^Ù½T5Ö1ûc*.[ŽƒÎE“:é;¶¼^DÚzÈJÛ»OëÎþP)—r6X(L·æ%å'u\^¸â/£cÔBHÚïÛ6á‘>!þ[-ò½ö“²ýt gI1Yx5jm>±ëCFÎY+$d" .á€/¬³×‰7ªGmñHè6{³¡bÑP1‹¿i(a+Ó âô%ÜkLá<ÍõwßÓŽÞBœ}&óå6¢¾nˆlBD”½Ømë.j ãÀGqg¦ž´­D‹‹¿¶u ø>ÚÖ‚¶ÑG®m-Zúµ­Kð.Ö¶N„Hh{)Ø9^¥Ó°xuZ‡: O’ƒTUüqi[·è‰ˆ>í\aÿKh[§·žÖè`[§P)®¿¶uz"AâT±Œãn·›“¶õ¤m=i[OÚÖ“¶õ¤m=i[÷­«ì1J°Oö»utOÚÖ“¶õ´­JLŽÀ̳âÈš³ø—}ŠË²ñdn”XDü€“ÄZ<¢ §æNlj“Ÿ:bAÑ8º×U‚°yfÑÅùrÊ%çš-Gò·àÁŽGÁûê³£/_o%(Eaqçvëï¥0š›¶"Á¯(,ö.ž3øøæÀ f¹UbH±½W]ãĬG }ß-3íEƒ}ðn?m˜bb$ý æ §s›3SSd¦,Ñ2Á"D^ÿƒ·à¡.¼ÎrRÇÙ¥†7¿¿­åËà”õÀl"fêW^c!GëA§@>{žÖRÎ4o×ÄjIÝ„}»z~<ûÎS;¡rî&æ¦"^½ºUÇQ{‹ÊU8ªP’°!³ ˆe2a}”GˆºÍòÉRå°õ›‰FZVäÇ…N²àùL$ò„`›QÇ­ß×jzNðºSé¶Z?^h˜öüãpïÖTN¼$ÝÆDf¢£Ž‹í®±Q™‚÷æ}U礣äçèŽÌ¹ <€Ëçí¾zõ“ÙÊQ,SˆÞþ¢Ä=[-&"±þ8ذxi}޳Ù<dA‘ýÇ¿ôÏ7¿¾Yë×× Õùöó÷FaåTÌÜS§³¼¹ŽKÒc©žIØœ… ý¾|é…ï lÊqì”0¯±£5Ó™HpQ‹«õ8Zý ìü€Yß‚§Uè;åçìžF”³1%(M(tŒ´›yØ‚”|Ä<âzê¹öfÝúr+ìt‚Æ`¹ɪíÖõº•s XFÄÔ-x¨›2¯¼-ÁÒÉyÌ].õy`Ä Ó8lvʽÙ×· têõÅñHW-f1uÞSÙŒ$1%2êlLãBè÷Õ—1µ3Ë€O_g·ÉIÕ§I-™Om÷p}1ÙÍíæc¾LdˆN”ȱ¯ær G[pKXÿ{WãÉ ÃcOç^*Eç¹lÄ,YMÐľùãwei=ÐBXÑñk7ài½Ìz±Û‹r±d¾ÿ?›1ÍOV©Ìi\ØÕ±/`iêiýß‚'ÁšSý½—² Ù9Œè½Yã•Èó&ú*„mÀ-qÀ§¯Å£ã\X~šòé¨âílâ ‘³QcѨ1‹GUÿkçx]/PMàÌ> Xî[ð uô{h(ÇÆYs1—Kq« Cuù¡y»ËæRªàd§bÑI×y@°W°èñë:—€ï£ë, h}äºÎ¢¥P×¹ïb]çöáƒé¥dXU×9œ'çè:Û F9.]ç„*"ª@Ïî_K×Ùdø·¿®szbH|‚—7à“®ó¤ë<é:OºÎ“®ó¤ë<é:ë:§õÙ{w†Ý+Ä“®ó¤ë<]§3_‹[æÍZƙֿEqÑ‚'ÅYP:Óîžu#P8Ï/ŠçDw‘‹ºÂiGè­+Ì¿7äJšd0s°fÃR8×m“?Ô°TDï'¶®› (#\Ïãú9gÍ 
heÄDiÁÓé²Î¨9ÊWÝÀ!ÄÌ™ ¹ÎïÌ;ù•§vÃЀTœ<ÜxWsùžÎ6f*3E§œ u®¢ãœë4û—¸qpgÛ‚—¤]~þzu¶>öÛ­|×uKˆÑÌÓq…®«¾ôú<ÍJÁÙèÝ€¯N¹‘€¤PGæé›ÞŽ”ÿmµÐ°–úC 6ó0S¨¶ž¸„Ñ„'n'8{Ú<~ÿ¯ß6÷ߦSföï¶Ëz©Û×k"è¡> ™·Ôy\À ŸÓr©º¥¸_bÄÇíñÃî÷Š®ì;Ôm¸àÁü`¨3bYçB'×݀ؠOðkærík,Ê›RÌòAS2£À]J&- h=XÞɘ)ù6}Šþ2IÌN*Ê4€n xšW¥ƒjÖ‹í±ëýÍíÕÍöLtêaüòçÚV,g^Sîg§~ÑZéuœgX5}lƜɉ­ú.Øè£æçÃ+ð¤ÎÑéd®ò^Nƒ(J„æ1‰Ž‹ÒOÝ™±B!™2œ<êÂ1]Nt ̃û<îp_ áðŸ¿üÇ÷ë-Ñ„ªl#}áOÞûmEöÇo5¿þryý8ÝEür±{•óËÛøƒo ŽLÇÑ€œK}Ž@®—ó|³Ñ·»ÇÌÀüǸ¥] 6x—¼†1ÉU.52u»XŠt>òA¨óö»]É0‘íÇu\êG†›‹«?týøüõ-ÅâDtž)Vàf^ΉÀó©£Çd»×ƒBˆÀ.UàéGËÍÓfJù`^v ¯íö› -÷6Êù$H¼T8g¡ÔÈx"!TøÞÔ¯Áýo×7ªrý˜›ÍG{‹ ‰R„ ›JZëJ&TBO]øbªpÏI`0r€ ·‹­¥0ÛøÛæá2Û8Ϻ(FñÁ#vàB ÎùD Ì­ö›P¢!D`Îíl<…ÊLÍDЈìuÂå|Ý Ý~ Ñ8Z”]ÓBƒõw,§DâÙä@—ë¢Ù‘ºõòsRÅ´Sçkãµ×ðC[›tô>B  Ø"]©aáO ú/;*BŸ†,ô‹WD7è¹cñâ«û‡»?®s¦ˆþ'{ã¶d1#€ÚŽ)Pww‹™Ñx>5r€ªWÂ1ÔˆN<‡ <sªñßlîºßžm¼žÇ¿™ì½²ìÖø“ÉÑâˆnƒ›ý1ÆZŽtF>Ÿ,QHªx7Â!dL 4ïJL“½;{zÈf¿<»ØLöµÎM„|ânÌ(ÀœOƒsûERgäè2º <ä:ÖÂsͽ²u‰º>¨il¼Åê}ÕËHÐùTP.¸w­SpP÷#SD ‹ëDUºà—ÿr²¸už‰¹@oMO¾6Âè |>Uto±bñ ATa]ÎjÜ1ƒ[™*^}ývw÷}×àbt¢@t7ÈBk1¥ˆ{>Q„Õ;Vø¡!Ë 9/V«ùí8—z‰êfèdpë ”\JÀPBQ:ážMò)—´ßÌÇ!—häu®®ÀC©s8únã)5ááëæâLWþÿùk2µu>J!ä{¸NË^‘i%âùäùާæ`È Šƒº <Ô§¢á®±.zfÌLëì”bÖØKihXI—õÞa>4FwËErK‘!Ô,C‘ܲN@ó­n­xŽP±K Ìé ~>e@·î!.Ar;C .V$Mô¤Œafë5·H®XESà©E;ŸICÅB”Æšƺ…=.h^bØÕ:F%$¬ð¾ˆÕ!Éxó?;9ÒlŠȘÏN¹ðAÅòA»Ç†µ­“Sâ\q‘HTM†Ž çS„‘±â2Ÿ8¡ˆäÀ®ÂSqóµÜžÒ³Œn­RsEh*ín£öÙ„aQwæÛýT6oE°.l±"û’ÆRàc[­ì5Ä v¨ÆÞ×Fý0Ï'HÐðºbÓ›Y !Hv(ÎŽ58º¥²Ñéú¥Üµ¤4ú‰õ ^ƒŠŠÙ¶+)è.rSwÁ€1’½ccÝú aTòºSà <â÷Ãý¹±Æ§.*œÊ{ŽÞ%[¬ù¨ìÍq;è‘Zðဥõ¸Ð7{§¼ˆâ&[ßš(&h1SÇyîÐ w)5ëáp¼Õ€'¶ö¨mœ³Çˆå]ª„ÜàM­†Žcho®MØÜ))[&¹Ç΀듌'Ÿ,{OóõÉþî9?[r§}N0>{γï,®ê8è¨Ù]JÖ¼5böl£fàìuSà’Äh~oCÇ®?f·,NÿÏÞµ4·‘#é¿¢˜‹g#–l ñJ8bN½—>íltìe#ö@K´Åµ\Rr÷×o¢Š”Ê ‰¬BÕpÆœ9t· ©¾|ÈLä#ˆÑBÓŒCMìtg>“žàåè­ ñgë .àÎÌ=Á%ȼ»ô¿ô¿ô¿ô¿ô¿ô¿ôÏõ§ûÒ{@6û)­ëö„¼ô¿ô?ƒžàvI4:oµ ÙæÏÍ:èÄh+5N¿•…ÀÄdÛu*V™{;Ìso h‹ŠÙëí:˜¼a^ûDËøÀíº ëÇ<ùÛ çBžsFiãrêûê§ìðí—Z§Üéß„1øTÏ!EãjNgXAs¨g9Ó¿³{Ô³5‚½ß|i=ɦ[Y;ÍôîùËæá¥9âaáâeeÃOÌòSC@£ç¶;­ ÊOp2•ï/TÔ3ˆ^‚Ǹ)žbžèzM'û÷nH›‘wJLAϧ:Ñ&–÷ [N@7Ó)÷Æ«™7^ÑGÑâ ª'ÀãB­zÎõÓõÍ~yÔ¾·=ƒÊÊ €À`™ÆŽi]ˆòÆŽSmBíLˆL¾r»núVcÍw¼7PÀEôÒt‘ì¬öâöóœL³½-WdÕ¬“_4Sé,’ãJ‡wä1ÇÎàòÜYìY“>ØÄ›vŸCåPÓ ™´Øvò£óI±™CëÝ]æ5/µNÍI¶‰€³Çió¶æ`“Ñ›…  ŠÝæ‰Ä9¬Ü„ÇEË´`n×Y[ùØéfë˜ü†5>bʤdùæ#@œuð¼œëÔ#_Ï÷LÏpôüÏ_gð|lõ™žÏrú ÏÁ;zð¼è”ò'(ýs%™È¸3g’‰ Ùeðü%Éä’drI2¹$™\’L.I&¹$º/ƒ2Æ(dc A©nÊø%Éä’drI&†ÔÔ0l¼5x_ÞT1RšR>¼â‘Ûé‹Üšï¸4¤Lóxœ6µ*$ütY~ZåŒM°™t#Z§;‰«¥¥ª£¸/ͺî÷ 2Qp ŠlÞLkÓô&m¹ô Kÿs¾vŒ^Y¦ÏGV³,µã ¯2 OL-ëy<Ñc¥ã§ß5èŸÇ‡üN2‡Ñy~¦T7Íá¨Åϳìñ€€–VñÀô=8›ïð€žÇcT¨X+ÖËÒ¢U §ó‰Î¦©ÏÚr·£­¾ÖœòŠº.! 
„éUE‚½8q)1êúnsCWWløÓíúî~qüÏÅÝæák“ˆj Ï5$ËLq÷Ò:©™¤8½F“¦3£:+*„tXo¸ž:{ó™€tmåä#‡—T Ès‰*i°fôfz1Kð;©ÍÐÍ…0&ÏET©p–Bu­tu·j€v ÷WéW»NåMbc>õ.eGpl ­ËÌêk!N‹{ÃP‚õt†aööløšÏ‰Jæ¶²Îq „¹Ìÿ:f àøÀ6~uà‘ÖÓdç«'Ƶùßûåá?ßU7ù8Fk•wŠóqÈ·ÖOhŽÖåbB¬Ñ3X<Ç€»Çç§uÙIëò1™Hf ±‡-À uMµ”â:º,ïŸ^$xìØF„ݸ[>Ì’Z+¥d‹XQ.ãp¥ÖP3ÄExpOÿ¿Ï›ë¯ûôRtõ§ÕÍÍâv½º{º½¾]_¼¡¼Ó›š&Ũ<êM­|GÛñlWÇz–€ÏEžÜR0t2)iò& s”G©… bÆ-_ùµ{fºß0×?ýšE‡ÛéÇ[bBž˜¦õ^`†=7ëÀTŠjWÔ–¤w7½¶–§Ð‹Ÿ_ÒÐYšt*¶N… M"hi©óìú%¡fÀü¤·Û™Ì›§ÕÝÝâz÷ý1]N BÌ#Dc›(!‡Áz&úÄJD®ü§¶ë”-ª3LXûË‚÷J|ÔÅ:åÅô‡¶ÜürH«H¯º›Ï›ë×ú\›å@¤][ŽW´T…+g€ŽJ †ñ¶ÒúoÛ»ÇÝz±Ýl×w›‡ÃFyˆ©òÐFf€¬k*ÅÆÒ”Ò'Ü1j·&sµh§Xv§ÓÌ1ÜGÓ:¨ rCJßÚhçxŒ~ÀÔî·oi/,6ßÖ)å+á³*/(M>°øÒG˜µr®ù¨‰ZàÁ™NÔKå\OIT†£ç_97|ʹ Ùê3¯œËrú +çÆà]9×|Ü›týñ§”³nÒÊ9§üÒÛS9'ƒÚ‰ªEå\ƒ*ò@ ®ïÕÏU9'âNèÖ¯œK_ÔÚzú?‹LÓ=~Éð¿døŸU†?)&zPJEÖHF¯úGbÅtýu·>Ú=Ç7¦÷ϯs±¶»ÇoM¨ÂÚ| O1ŸqV÷ëývõ¦¡Û1°u«ÉûV;“î•M“Oë¬é´k¯”&O¿´&W¸x§ÕQÇ÷>Oò$ÞÞ‚Ï[òboÖÍ=ÔÛÀêŽë|º!¤óûðGMñUòoÚ7ÌÏ!tõç§ïÛõ_>üçëOþX¸õï„s·¡Cô)éjÿøð—= ÈÆþ@'sSfú—ÿÖ ïJü ¹ÐÊÛÍöêÛfõÎFx<þøÕv·N¾êþHÅ~yõ×fÏÓ‡ïiYgé§5éâ:CGÿ3}dùá_Fó±sݼá£óßGçåâ;ζgר¯ÛNìE¹Â kP¨]W¦Ï/mô½v $ø™½~8ß9.^¯;×Ëѯ8øZ^/Ùê³÷ú3œ>K¯8Þ ^¿à” &Nêõ KòÛ2§=©6â^”ˆ¤ˆq²@§Sé#({?)tû¥ŸI|B‚þD_¢òø„€;ZÅ9ãdÞ]â—øÄ¹Å'¬M‘3 \.•µN;7þ!TüÊh‰Êb`/KÔèWP2äw‹Í=›ä»Ÿ°=A±•Œ–‹J¸Ð]X/*჋Î.6ŸÖuŠñêßÒÖ-„þ[Ú‡Ú ðHý Óg²ø‰VÄ>¿ßÇwñ“tš]5‰„Øâ’”µ»ß_­¾<þkkÜ&Ëz·^4‹HžÈÔ}èÓÿ5wPq¢ˆj" úMHÐÓÎÎ0ó¨œõ¶€ÃÑøŸÎ0p§?wÃŒ¨E²ÈRS‰‹av1ÌÎÍ0óèRy†ãØ;wœÕ§I) ù}A]²«Jh}ý‡ˆ³_-ÎÚ‡@¶úüã¬ýœ>Ï8ë`¼5⬂SÊúi㬗*ôÛÈΠ²¬Œ`¢4¹ú|9X§cµÉ=€yû‹ýªmŸ%‚îVº6¸DZGw~}—Ýi­½ š³OÉê8ñPÓe‡%©î›ü(B}87ÿ­=ÙûFÿlþ›„;ý]‘¦ðßÈ~è9qñß.þÛYøoÎÐïÓ†õßœ+Ö® ÂjÅ¡©7»§Ô·`ñÐCY`úˆÁ¼/À{{_g݉Î*å‰Wåv‹‚ëß×€K°ÞÇØ{_#éx*iàbJú®×·²}†²šIÔº±ýK¡7¸ózêJ é5—°îS–X¯–uÀ–@ º^eøýóÝÓæÈòõÃbpÖäÁ¦—35·ÿhóuR…(N¶LVz"øy¾“LÈøâü¦ÙVÔçaZ"A+í rìãÍçÅðÓãsÓÛÍæ«SGSð.A뜂J- kꃈ£û¾vëËO¢ÍsSë"«‘» m…2ï !€˜´¿Ö)œ.òõî. †I]cÁ¤iÌÆ4òÕï§:Vê›–'‡£­Õ½”çq„€ˆÜÑEë”õõZ‘"@jG+ëÿ|zÜïëÕ6,þØ<Ý6FNƒ2ß*’ Ø[ Œªµ«Å¨£Ò0¨›^»Õößéßï?¾`ÿøÚàæãIÜ9n“;Ng§&‘ä°fU0^q)… ¦ŸGÖ*Bn‘Lòvw…Z/vŸVשáÛß¾7Øó¼Oá9ô^;õ $VI?5ŒwÎÒ£u{CìaÓÀ‹yxd Ú - Œáþ BÔ?«¢üËÁ£ö&;Ü£ÌóݬuÌT¤f¯a?HõÂP´gÀðøÄc³N¹×ûw.d¶ß¬OÍ/@{­‡0ЇYSÅZpÞ™ˆ,8òÂ%UŒËÊpôüSÅÆ€¯“*–A [}æ©bYNŸaªØ¼£SÅD§8=iª˜ÅTÌÔ׈«…¢‡XÕãyåã4¨Œ÷¨5Þœ(Óý§ÎÇi¨¶: 7Ü 3ÖS4_ôÑF6|’Öu#—|œK>Îäã$ÅŒ>gxŽ^ã°´ˆÉ\¤1hÔŽÇŽ¦Îä^ z›wO¬2È;2ÙÔ´L¬žj›~¯ÇTË]˜–ök„)G[Ã2Òénzªc A*u'¼ø·œÄN’Óíún{1]î¿¶>å$Ÿò÷>ŸòÅv\^ýF,yÃDÌíjµº#‹þæ;yPë‡C#­›«t­>5ý°&rÚ¿<üÂô×d°Òs |)£í·w´Å…æüh Q€h âxOŸËÝ×çý¯ÙyôiT¼gÚZ7ëhÓÕ™hWó((‡oŽœ%n…(”;:jb€~#¹fùŽi–ï—Ž¶DÔ ¸ëǥѹr€†l ÆÌÛÙ°½{ˆ#f'‚З1 ÝƒŠ»†’|ÃøN 7M`ÕPižåOޝ ëNà Þd’§«´óšsÍ=IÖ¸ÊÚW&Z ĨG+ßîöæöx·5¡ËÅ·Íêu£0ì ÁVàNZgƒóÈ<\âåÑÒ¾Ãg³ø‚²Æcäb´ÎÙ Ye£ä-ÛDTW7»ÃoÃÜŒgŸƒh?U‘~9@P®Ö4\¹Þ/0ßN Þe1#ùÒ`¸Œ¸fv“i¬@ 0妓™yÀtöŽÖÚö;bw¯ƒ‹¼c˜J6¶'‹—SZâ¢ÝÉ4!î§mŸª“Ùfùœ¢&ÕÐ{¦If³N» ùƒT€>n F$­¯¦Ûf à– ÃÅ”¸K† §£´Nc¬“3\Þt™“×¢4ÔI§Ñ üåÜ\²PXª”`Ðäá6ë¬7ÔS(zBgÊæªùŽ«¨F¡Öåø"ÒÕvÓæ3÷Á†”!…Œ‡Ö¡ÁJÉã±o¦0ª¬ #³85Ýçƒã4ƒÖ9ÐÕnËšª!!¡ÈÎí¨Àî(ÁG½®)ßw³€ÑCž=!º¨#>mÖ³#«DF©b9Pª50É#ŽYÄM)¬ LvZç£Ø~¯«åPƒšâ?Øä§S/ݬóâøÒDêP9HkJO¦×uy CJ‡ã¢£6…|•|ÈQ›FWÜŸjHXúT >½ÚðbôcSkcŠªS• ´#“å(û¨w£;[¥¾*—ü9“º¨Ð_3 i ´Š¡êÎ 5×€œ #ûX"ùh?ãˆ@ŽÒ×Ív›>»=$[6Ó§?¿Wéd»‘”fT¾¾}¼ºiZÓår½Ú¶Ó 6ëýÇ«ßÛÕ£ÑKÛ¨d›À_ï7‹›Ý&Ýi‰ [M»cªßé êry¾©Ùlµô&¯€î‹ ™§É§ZïߤÒ?>ß,^×¼î©ÕûÅúÓ¾Ci÷ZqNU“`¡D&'…¹-é#¿¾|c4íÒ÷ïÁ,HaÌQ÷µ— ºš@sO'u* f~™Š›âÂçÍݺ— PM¦8L˨™_¦Ò¦ŒC¹ÀÞ4ÎÔ©Svb‘ ˆ™]¢N›¹%úåz»ØÞôj·­&X0³ ¶”¦ùåk`ùn>Ý/¾m¯Ÿî¯û¯£jž‰³SïZ!AóKÖÍ´sïW›»U/|5‘Nn/•R2¿,ƒžéþ¶Ù=õ2 T%ê©Þ2Bæ—$ÆùÎÛíãëÝ·=sDa5©Æ8Ù+$jv {涘¾Ýÿ±"§àÛ~{»ÞõûÕ‚C`6ËIJÛüò6zvy˜±Ý=¡÷‹”ƒt}à„¯pòVÍ'å"Šæ—­ÅZ•.׎ô÷àØõ38Ï„˜BÄÔ 8rø›^sãg: |ï ”ØõwòïÁÊl ÚÂô§Üû=±33vB\áRO%Š 0 ­¥Ò‡÷“ì|`Í£Âæ]•ƒÊ‹ß׫+I9ØàB­ó#ÜÃ?œy]HYf@{«‘¡uæ>M§ ÜR#úîeË5pL-8¾ø­,cЛ˜gý™Î‚^¯ô¾¿h(µǃUƇJL©i2 C…UL¢xPÁ‰šî>R»ö,qL*€op¥Öâ8 “3Mµ.ûźµxM6tÞW“I´²m2 S‹+(¨ÒÛ*9–©™Û&%¦'˜ã܇Åêáf±O™$O‹ÔaqÝQÊPKRAƒ|÷Ôƒ?¹@O"¤owÏ÷ëÕÓÓêú6eû½£« Èèº*‡>T8ä S¤Ù)…³Xm÷·ï鋜hŠñ;;…hxà“ f„ ð–>2Ù7ÿ—æŒ?|Þ­öO»ç¦¼æ-mLž{Ân-׉ù@c/)èÁqÖeUunÑë»Õ~ÿŽ2ÖÓ+G^Qe‡ ƒö3èo& j ã~µyX´gq—Öa-ÅU…ë95ã…º 
ïž¹/ä˜jä¨~}‚3L:‹„¤¦xFDË"ŒÆOŸ'ÁPßP­kI0º)ó%D¤Ì"@_ïEëe;úùö‚`rš$˜Ñ䔂.$,βIp áXsE/y­¿c˜t®bài‘É2ñ(q´PB¨‘O¤ûÿ®yZß­¯39 “&¡¦ø9~R2F Î)Ïv K묪“]pÇÊϧú—¢œD~SS3JŒÁ–QìLjñëó§õbÿH¾ç §ÕóÓmªh‡á^hŠóîiÌk@nÿÑ:#Oƒ¯æn `Òy_qŒ×n}ý¸»Ù/˜{=m«òç\LéV±¥´Î˜0&í}µ)'Çê1 ¼Ï÷ººÁ xàR¥hvz´Ìâ¦uµ×5§€ÕQ$`ª ´; .‰9‘é›' D]d í:åÆœ8µu'D-DÇ·!Ìlx½Mž6Ù4H\j vdˆ¡u¾bùSE=Ò•Á[–Ts—kvâÝÊsth­<ó×Òç«Îñ¢¦Þ‘>Á¢¦mð¶.j%¥gÝ5%v¨;RÔ4ª8±¢¦Qè5¡WQÓ¸Ñ9~8“¾¨)¾Q€¹c "¤êz},júXÔôŠš"c!ˆ0*ÈÀ†›(ëeçæ•6’Z<”Âv„v{Õ»F<ºVAjlZ¤5µ3«tJFc[HPŒ‹à)<¶#m¯׺Œy¦²`™×€)iÓëà]^uØ4D *Áó¾;·,pç¶x©Ɔ7r£¹NQšÐ]¼ª²)õÿ b¬ó[yëc;-YòzÄÐ/…§†ß¥L«.ëc†iL@yyDáð~c™ YôÝ‘fçHq,[ –*8páa°~ç‘0’MƒA•ÐŽD&Z1õ!RÁk $’i¦…¥! í¨°É’u7œÅ´Œ¶¬ \o‰ø§Ti¨’ÁHX‰ ‡hÿ²8ÕÚ^“ÕÖ¦(%MéÞ…ºÔÑZnÕuÀ[£˜åRÒê S´ehQ ¦Pè*e4cí„a­ []jaÛ‰ZÔ Á¾2†· âî5êÓ}¹ô Ì?‚ÜAõ[Z½nLÛdƒvD›öå?rEL[ƒðø%åz«.0ÀÎ*ZuZk-&R3Cm¬ Ÿ€oÝ_«lü¾HÊê úmiÃ\h2@°ÍR"uÄ ‰c­†è| UÂämAƒ¬ ã\­Fž@)0&ÆP‚@”!†WÙ4!* í¥ú@UlºØ‡êGüh/<þ5üÈ´Þ/õã.ÐŽÑä±iÐ/•À\,x^€g®•„Ħ),”b?¶•cìÃr„!¤ VOW!‘Ý °\žl†ï©7uQÆýÀw*:ÎÜ„ÖÂ(I‚C­5&Mií«ê`YÁ 2K¬S ¬|(¹°ð« ÷’ a¸x$zëlܘc"Àj!’§ºøÅýƒ Æ|å·ç§Ò@,Xˆq¤¶’˜cý8Ή A“‡bþI†…¥|á…¯@—1ZÒ|hGUl¶‰e¦J'ÉKBÜ'Ä·¾™ÿü\ÂÁÖ D»ºvÌ—*ñ²¨ÏSøMÊ3ëUp ˜"° ÖºDIEññfáNTïëÚH/EX¶JqÐtA;Am²¨§.˜+‚¬Zß>=˨ƒ)f%km";%ó› Ú¥ˆ ûÐNª¤ziw¼UŸ$ETÊÕRìqq´ø½UÌ©TpåC;Œ–N»()wÅÀ7m¢âcaûõoK—þ€m ·ÑÊëC³P}b‘:ÕrÀÔú³|Gˆßµ g0ÍLÈ®ƒvĘVÙÐÓrS}Ü´E ùðíñÛ*n¿"n Á 2¤ÿY'o“­‚.8§>%Z$ÓŸv__]Íêã1V)qÕ ˆ’þbíE;Êe—[B ?€´d†‘0pÍD›Á`álˆ0„’àHƒz®Ó.íŠoêS#:Zu ±^B(Ã"XÒØ!áh'û@$#E „%*‡eâH?~¬ãDàÿCøµUÖv³ñN}Jt:eèòãgñÚ/ ü»339›>!;E`XVèü1øÃZ©B²’vÃòí9ˆ°ZÛà@C»èJ‡nGDºB }zé(¡^J8Ó06¨;@;#y2¿Ïp[§´‚AÂ’¦Ï?zÿ¦¯dÚOƒ¡ÍÚßc;ÆSܨéŽg0…Ó†Ia±‡Çx- ѹ£L)áóë^Êð3'apV?f súFôäc~[OóëC×ú´c~ý#}z1¿­ð¶ù-_n•¡ÁÚ Ûi̯åð¢g- PÀ*ë@µ§•5¶@Å,èI&ŒžVò¢~ 1¿ÕÊrE óëÞÈ !¼Î¼óóûó{R1¿À˜‚*:Üz³$íhÅé›&Õõk¨À® Hà jÛe$ªZ¥…¦—G6Ö¾ÒD†4vŒXÕ&i>ªtfjšÛ´%X¢NÂügò ÊØl,´ÙC;JI²0˜ô6j !:AA¨âmPÒ6¨•+ÊúÝS’seji€hïVx@žŠ¡CÙîÜͪÿ4^"”¦6´Š¡çI*suÀ6õi„´óí[·Û°ÐÁ{SùR stƒ@;JE‡>äLƒ5V´^–GW_…aòÉtå² øOàÌ„ÄÜ !ðXyµm¹ÞtìQµ–*ÉîÝï\Ò\JËŒ?Ö£h'¬I"ÆòKz)Ó¤4ƒR{µ\ßÌfƒ%&ÝXUÔ0î?b×XåÕР٨1y¢ŒÛé8¥>x«CÎ(ÔDïò»œÐ7LðÞíi²¶a˜¼Š§Lu^c–ÿO“_÷5 "PÀâ#hÇHšÔT0RJtè*Æð¿Áâ,A6ºß´ÌÛ±àní8×I3¯wÇTD ¢[ß‚:jˆlón‚½àGl‰’=ÐŽVT”&nü¹)O5ë+7j_ËDsÌÚ/1™žŸzÿüÇ?ÿ§p›ïâÀãïŸ|w{•¯²ž=¯?Dv4€~žü8iT’Þï?ëá?÷zØÛ•óòÍëõI_C0;=üFcûpcû¡É¾9ûß|¼ÚLÉöc“Ù}2{/^ôXo:9;g³âüoÙ¤³¯×wS׳TùW#§u“Ñxò6Ÿ9¿?ðÊø™u¼ñýëÏuVôx–é|rv&ô,caÏÔÀšL&F@§6·ç†4â'ï[ÑñmcµaÓj®£õvÛlÿšÏó"ÞkHÒ_¡ï¾ô,W ×+ý¢ƒÇwÎSÐ[öÑE0 jÀT²·#Æ{Ð)ôùýw¯úš Q×7ZûO>Ë·nŽû=ϧ³V}þu+÷M¾˜^MÞbÞd9jÜgR±òÍïÀßæçù"Ÿ±šÖÝÝŽô-äÈN€à ”ȽbÃè— ?lü»(§Pjo¸Å×Ï4Ÿe™2ᡬØ3kc’©<Ÿi˜„Æ@Û‡f³÷ÅtžÍ¦ÂkGM§Ì±÷WÙ<{—O¾p#:êýô 0ÕγÏç«Åm“MúI£•€Žªþ=ÿLcù^öýÍõV6ô ‡bÞZnÙe/…Ü(þU¤‡îQ>âbDUEz4{C1‡¨kÎâôU_?PD¼üõ®>‚Ìú#?ð7h¸Yù>eóùÕª»ÅÃu«é¦t’ù¯Àûp|xþ/bÆDºÆ¿ /–=ô¨É{;Ù ó"üUÎÏùt–o³ËòÞ‡¿>³5Úèï ↻e£ÍäõååÍ C°FwþÏ2´!Šwýý€ñfuqµ˜þYðüÏóÞ&!ñËØÆg7+ØŽñq¯—]O7 ý-žaã>n—MÐý6ËÅm±‡¢%èpb¤µÜ{/ÛŠLTû„­â +•´‚nŒs– ?Æ2®©?Ï~ÑŽXÝex}ì9[öx››Kw%àf1sD/”k¨Ò*@Åžä°¶ >Š Ãòfw.¯æÓ{YÖ›Q"ý”hc…%¢]Hö!/œD±U}* k[&:®8¥{«e”2í(m]î¡SxÑî«ú ýõZ#_{;èz1au;že˰ðX[ƒ1|…§¸8´d`SÖÊ È…vœðfûô­óRŒi¹B¢ C/ÖÄJâímI×c7àâVÝj‘g—»‘ S̯€ÊEq7¢!`îöutλT‹©>JZÉòè3*(fb^ªd¥©Ü½. 
«ìE¾O“lyqv•-&0J†ú5 έ  Dv0ç†GWúkÅeÈ¡©Ø-VÓól\¤Lä~5¯’J!t"†Õ%©G×zºë#f”¤çOXXçðçóIpÀe8“Ó:³j¤ÔCÜŠu/S•ñð¨2«Øã­ØÐuGψžþ­Ø6àÓÜŠõ ˆk}â·b½#}‚·bÛàm}+_Ž)I™A)Å)SÞŠ•’SGnÅ:ŒR믒]B­”t8‰[±'FsFÏ û¸nÅ:ªÇŒ÷áÑǃˆÓߊÅ7‚"LTäíä…¼ûx+önÅÒ!ÚKÒ2î¿ëÚ±J ÏD·b±_£Œ¢$´´•Zò.oŲ¡”R©Ï‚„ä”H:èi£ÉÌÞ(ë2£McOç ÌÊ ’ûÏW˜ˆÖ*’–_ÚñÛØ–Œ@Îh½h<Šy©hyqÐ ¿)c”ÿàÓG3II*´cÑÇÐ 8/ŸaiÆî ÄÔ]9‚õŠiËë:m'½>V#X’«:AÄþBÅ5UD¢ÐŽ“˜M£ø ilAï#@Á+¡Q?4L¤HÐÉ€5IKs›¦í¼G@¶6Uù¦]ÐîS9¶.°û÷tÍ¥ÕŒÐÐI¡æ‚‘†gë²B6Jq)¦»vИX\ë… Ít²*K‰æ¼>xÅ’oÀ^qßåßbÑö–`M„Ί á‰dk]fˆÀFy½ø Á» æÝé'»f@åJ°€2X÷,¿åŒ×-I=¥Ø•bìÌrKE—r–°ÎäÖTôËDË–L´\¨è˦I˜-aìnxàÆÈnâ¥ð»­ÆÂ›¡5®4$eiÈÚ“\¡4™SôE6¹œÎÙø=J=ŸÌcCÂ9^éè<®•2 ÆLsBFe>¼Èg³«íÝkRúAjL-¤iÛ°¶„¥;³ˆšêŒ†¦wÑPß¡3OÒ¼X)Âr³o4þ¸‹Úq³>0q×%LýKíÙÝø‡½×«^>EٻȖ½l†÷ìo{gy>ï-`kü-ŸômVîëy‚ ørm¾MpgZ]L—ëãÅay¯÷È"ÑAHM4£A\ëfôŽô F3¶ÁÛ:šÑ½/[–Rš›N£)誒u8Ø$ªT§͈¨—\ÖЄ­ÜÃø(¢ãFG뇋fŒA&YÕæ~Œf|Œf§—£Áº5Šö~Žâ"×¥;ûÿƒ_ÚÒg=!T¡Ytsy™-¦â²ŠùÅòðK±Þl¢´ÆK=7µ©¦=ݽ­'ézË­Çç›rx¾¾Z½<>vqCÿô“‚¸¡ûÏ[§•Ž6ÚDíN7x>ý5±Qï¾¾zs5Ynžz•?ÁÐX\Ý©ÛÎzëÎ*"ÿ:¨@FÈæ·îXaø¬õdH1|Sn#r2Œ\wø÷»ÙðóÙìï¨Âȯ¾E·uåYeì¿»éÏ…òšÏ'×W`…­¡Ýæx#w¶Ò Œ4°{/ß¼vÛ@{ýf9 Øß‚à™ô0æ®[·Y÷ýåÔQs}µXzhæ|^~±¼ÛdD¯&Dw©fÑD¯$v]þy¯’®{7[·KBÛÇ÷/æ9ºbŸ÷Ê<ÈyñU™¨¶H¨ÝŸäç0´§Ö¯ûeæÚ>ˆ‘Û œ3Kâ<ËVŒõ€)–Qk3:˜¸ö~ææ>‡æzû^|¶Ò~öâšVß7µöóÿ˜ÿ½ýâ@Òá§ŸÀ0+ñÌšÏ{_fgh~^f×?hi‡Þõ×sµœá©Üv$Ñ„[¾Ÿ^.§È¡ýQµ¸Éq2^n3œîwX¾ï~¶a—]w÷aÙ²’Î÷§Ý>vóBËÏ{øé:}©c°fÁ™0MÛ죛ܣ;ìærïDzœ“—7ï`‰ÂtÁ<1˜(˜H³?c•„Ÿ•|Ÿ½MîΧŸìõ§gOÉcBXn3;&ðë·7gk²ÿ‡îÑ2_•£°^EÅÃò(áéø"¿ïíÒ‹‡È+ŒZ)„ë‹%,`íÌnŸE ÒÏ÷©9j(a©Èüöbq5‡Y.\þ½§k5‰_r=Ä#_Æ-ãËg£=Iº¼'Ÿû·¤l‘ïJÚdrwB…e/ü'Y®­\zo¾TQ®-%ApŠV*wsÓõ-èä°‚a„žnÍJaØ\òa W”:r™¯#¥TÎÎ?b» þäþü¿ûc9Ó´ ó¯vT‹ÕL˜¡,HÉ2C7G¶äÛj«^›ˆ_×-ŠmKµÆš©ñ~ùÍïþð«+T¦ñÇßþíNÿ\6ˆÿòFøuâ:.åã{E8ÓþÔ'ùI‡~ùȼUîÜw£D/üÑ=­½õ¹v,×±‹³è§Eïõ ê/üm‘ô­Ñ{½Q–ÐÆÊ)¢÷ˆÞ#zè=¢÷ˆÞ#zŸ‰ÞËwì˜ô…óQv¢7ÿOT&&FFE]Ž./üÄzèÑÿ ¿Äÿõâ¯ú%þPÁœõò¿Äw<½ä/ñ½Þ ~‰ŸX¥ëdÝñKYòhlÜj'. ­ ¹b—è\Èqc‡A…øf'O:z^ÈoˆœR~Èц%†ã²gõ[ð5»lïí“=%Ž:w¡?@¯®Oèψ¿†ÐwÌY/N軞^ПÑ{šÐ·ÁUÇ«Ôc;;=Tº~m7'Uë“=¥^3~¶›ó޼1ÛŽ*JIÙò Ê4°]`»ÀvíÛ¶ l7í&öY/ŸzTpl·¶£„MÒ“ë4_þ;ÂËïÊÕç*pΤƒ Tí€nÄvª³ÒslW³3²óà7úfÇêoÅvuP€ZhL†â K`»!éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØ® NIIòx•B’[±l˜ñÛU ȵa ¥²ØZØnJ½bú,l7å3z¶›RöØÈ'°]`»ÀvíÛ¶ l7Äv3û,[`»ÀvKa;®··1;u±]³ƒ‡*8a»ú\pt¢ +vFwb;N[9 ãQã5©SXRf—ߊí¦ÄÑCIÛÀv<¦ãÑõ±Ýñ×`»Ž‚9ëű]×Ó b»3zOc»¹UÊèÞl»Äp>ÀvSR9-vIvNý“+¾ÿ¯±Ýœwì}œç”=Ü}lØ.°]`»ÀvíÛ ±]Ý?kÁ%Eî³”ST¦l·¶“€1)kÛ5;H—gÛÕçzΈjM¦|oµ#Όٞc;­S¸V¦4{jv ïÅvuPFñ' ~Ç ØnÄc:]Û ¶ë(˜³^Ûu=½ ¶;£÷4¶kƒ‚%¯Rø,aùÊl;Êè¶›’J°XKŠªJÉÁQÇê>¬¡ä”wŒü}ØnN™q`»ÀvíÛ¶ lØîul7³Ï B)l·¶Ó–ÅVŽ€?ö.ûú󏨉ÐÕØN[CJG¼©Ø¡ÞYÛŽl+ŽWÄçØÎê6L’úe,›ë{kÛÕA5³¨ÚPœæ‡¶’íxLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvmpbM/¬Rj·b;aÛ,û¶›“꺶kªPнRw;àÏÂví­91Ñ ›%å7¶¤˜S†Ø.°]`»ÀvíÛ¶{ÛÕýÓ“¦”q¸ÏZÙËÛ¶[ ÛÙVïž–³{ê×¶kv(—·¤(ÏÅœ0y…GTÎT¤wfÛÁ¦ÄÙìÛ¹c.ÿ”z­;“V äŠz0@NCõX|ði\yëò\”(Ýí½3+#%uzAYŽ"åÈE \rÈE 7È•ýÓÌ}¼ÏÚC5Èä"[!ó­¼£AQп6ÕìHøê@®>WIˆ­`T;6²{9gÅ|Ð$Þ+‹!V(EWq{kþEÔ‰Á‡âL"ÿbôÃzÇ£ëç_œMþEGÁœõâù]O/˜qFïéü‹©UÊž¥°]˜‘7@—ãÕþu©«å_L©wû°Þ‚3ÞñôpÇøvl7§Ì!°]`»ÀvíÛ¶ l÷:¶›Úg)çÀvíÃv%táLè:ÀvµÚÒõØ®ì¢F8š@„XæÐØŽ¶zÒýi i£L’û™"»Ý“ºLwb»9q„Ñ[pÄcz]Û ¶ë)˜³^Ûõ=½¶;¥÷,¶›\¥ôæ"å"›ÓAoÁI©æKa»9õŒŸ…í&½£ï+R¾¨) â Ê,Š”¶ lØ.°]`»Àv¯c»¶zb Óá>Ëå?¶ l·¶«&¨KÊ=l÷ÍîF]ƒíÚs¹y¢y4ˆEñNl—·¬†ò¼Úä2…ÕLúS½Ú±Xz+¶›'ÙÛ yLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvS«Þ[í(ã–²`»9©š×ÂvSêô³°ÝœwsÚîÆvSÊ\â’l`»ÀvíÛ¶ l7íföYÞ‚íÖÂvåÃ$K)»v‹”7;‘|uµ£}ü27¨_…g·C¾ù’l²dÛA%ô57ºmD›ù³~ö7b»&ŽrÙUÓPœ#FoÁ!éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØnÜSÙÀÆ«TÙoîͶSÚ$Ó¶›“j°¶kª$%íWæÛí?«Hù7ïHò~¹‹¿xñ}½¿¨å;EYöÀvíÛ¶ lØ.°ÝëØ®ìŸPØÆ'H9I`»ÀvKa»òa–C"€æ~¶]³+GÝ«±]}®BÙÜFH5óEÊY7‘ÈdÛa=*{uU· ân§,oÅvX×IÏJ~/‹¶ñ˜ŽG×ÇvgÄ_ƒí: æ¬Çv]O/ˆíÎè=íÚàFf”Ç«”e¾¹¶]Jõ÷•ÃÕ’3ë+ ªùb—d«ªœÔu°«îvôa—dÛ[gcÂñg˜³¼ÛµÁÒø«Ëã’l`»ÀvíÛ¶ l7íÚþ)T¾ï³l‘mØn-lW>L6Ä—ÿúÝÌúø»üEØ®ã¨E8,ŽýHˆöxÞJíê jœ¸ßât‰µᘎG×§vgÄ_Cí: 欧v]O/HíÎè=Mí¦V©ÇžwP;Ýü¨²ÝœRÀµ Ýœzù0h7åÊö>h7§ s@»€víÚ´ hÐîuh7µÏŠD®]@»µ ]ù0Õ“ ríªI¾º!E}®åæ6daZÎTéÎ\;Û¤`)?§v\¦pùC1xª7;â÷^‘mƒj9rf‹Ó¤AíF8¦ãÑõ©Ýñ×P»Ž‚9ëÅ©]×Ó R»3zOS»6x9ó§AÓÝî͵C˧|€íªÌeg‚T£Åríšzð²]÷*„䟅íÚ[cRUzÁ;žß‡íve^^ZÇʱv`»ÀvíÛ¶ 
lØnˆíÚþYoÈú '€ƒÆOíÛýbØ®|˜ l‰»Ø®ÙÑÃU ‹°]}.qΜ}4˜üÞ†¬’˜Ÿc;)S¸V×Sì+mvéÇPèVl7%s\‘ò˜ŽG×ÇvgÄ_ƒí: æ¬Çv]O/ˆíÎè=íæV)¹9ÙNÓÆGý(æ”þXÆô—¥vSêéÓ Ûµ·–”Qaì¦7ö£Ø•ÕŸ÷ñee~ƒÚµ jÔ.¨]P» vCjW÷O6K¬/œÊ¦Ô.¨ÝRÔNj²›`êS;©ÉnÌ—·‘íŒÿý²LYî-l—@àˆÚiê^þ4Ò?ì7;Jï-l7%Î9úQ qLÇ£ëS»3⯡vsÖ‹S»®§¤vgôž¦vupIb>¨'²ÛÁ½ý(„hS:ÂvsRy±;²M%/ï8VŸ1}¶›òÎÏÊÇÝíꈚÊ䜆veý(Û¶ lØ.°]`» l7µÏjÜ‘ l·¶+&»±ÓÛ5»Çü‡‹°Ö‚u”“.w6;¸·,oÙMÒAY«S˜Y,õ;µ6;Ò÷V¶kƒZ.û¬ŽÅiŠÊvCÓñèúØîŒøk°]GÁœõâØ®ëé±Ý½§±µÓ§²­ŽW){VôÊd;Ï›ãÑÙ9©&ka»)õîôYØ®y3—/`腜އíÚˆdå­åeý(Û¶ lØ.°]`» l×öO%ŸTº`¶ l·¶³ÖgÂŒÀ»Ø®Ù]~G¶>×)e“ášM1Ý{GÖ(©´‘õvØwíwÎhv`ï½#Û•$O*þ(ŽÛxLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvmpD{a µ[±¥Të `»&ÁÀÊb9–ªy±ÒvM•›x‚±zû´ŽSÞqá÷a»:b½ºcƒ<À]™¶ lØ.°]`»Àví&°]ÛgKŒî:— ˆÛ¶[ Ûy-^þ‡t±]³ËWc»ú\fÉŽ8š@ÅîÄv µ,§ç}d1µ)\¶$ér»]r|'¶Û%.:x,-¶ð˜žG—Çv§Ä_‚íz æ¬×Æv}O¯‡íNé=‹íöÁ9»ºŽW)’{³íJü±•=ö9¶›”êkeÛíª(ÕóÃvð ØnkÍ9Œ½#ô>l7©Œ£#E`»ÀvíÛ¶ l÷:¶kû§'.Ñü û¬?ÄòíÛ-€íê‡)R^“zخٕµš/Æví¹ ^ží£ Tìò­—dmÓ\bçÔ.×lÙFq\³“‡V­ï v¹-C™­Ÿ?°‹ó‡ã~P»ÓñèúÔîŒøk¨]GÁœõâÔ®ëé©Ý½§©]“äá*å%î¿—Ú9oht@íšÎ9Óxµwd^‹Ú5UF˜²ŒÕ³ØgQ»)¯´Ý7eúÚW§9’í‚Úµ jÔ.¨]P» jWöOLŒJ<<”O"Ù.¨ÝZÔ.Wǵå\îR»fG·2.¢võ¹JH<š@Å.ɽԮD”ðÛAÂÀNýŽ·»]æ·–¶Ûe-q­Å=æ,¶;à1®íΈ¿ÛuÌY/Žíºž^ÛÑ{ÛµÁEú5žÙÁ½}dÙò¢Ø®I¨‰Õ^*º¶kªŒUÇÒý-> ÛÕ·Î){Ê/|†åDð>l7¥Ì)’íÛ¶ lØ.°]`» l×öYäò!÷Ù\N«íÛ-…í ––Ëõ’¬v±]³+ƒ^í vš(§THà $ÉñVl—6@#8¸#‹e *ã UÛÑÛó[±]§²ŒÅÁ㵟Àv<¦ãÑõ±Ýñ×`»Ž‚9ëű]×Ó b»3zOc»6xëNãUªDû·b»¼‰™uV{CAà±TƒÅîÈ6Uî¬à/¨Wÿ,l7åWz¶«#b2€”™¶ lØ.°]`»Àví^ÇvmŸÅr•4ÜgËFÙvíÖÂvåÔò<òÛ£~ýî0¹º#E{.¡çdÃ#´ÔãÞŽäÅxȹC‘p¤´Ø¡,ÈU ™5Õ“¥O äÊ[ 22½Ã’ßÈ•-¹¼òÕ±G \rÈE \rSœ{ùLk[åá>‹é¡YNrÈ­ÈÑ– ò~ï·Üíäúü‹òÜ2/jùñþýþfW¢¶{¯M[Öƒ_䨲˜r¬#°ÒúÛÓ[ó/fÄ¡æÈ¿þ°ÞñèúùgÄ_“ÑQ0g½xþE×Ó æ_œÑ{:ÿbj•²¬·æ_ûfÎצæ¤þ˜*òËb»ªŠ8;:ÕSú´jGSÞÉøÆåsÊ«§¶ lØ.°]`»ÀvíFØnjŸ¥íÛ-†íH QÂlG’oÀv$åjà0š@‚(·¶ô3«\›â:…Á€S06»Dï­vÔ%._ ŒÅ!``»éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØnn•²{±²m.ÔnJéÓ"|¿$µÛU) á êI?‹Úµ·gÜšÚ½ão¼55¥ŒŸGAí‚Úµ jÔ.¨]P»çÔ®íŸõžÛxŸ5‰Î‚AíÖ¢våÃÔ$å1Þ¯QÞìÓÕÔ®>² ê_4»"àÎd;Ý„@å µ ´©^óûS½Ù¼·µ`´–þ¥AßÃÝî!Ê jw€c:]ŸÚ µë(˜³^œÚu=½ µ;£÷4µkƒ£§4^¥0ý5ʉ6Ô#l7'•ÃvM‰ø Âún—óga»öÖeÄÁ™c÷޽1Ù®( 9¿°“[`»ÀvíÛ¶ lØîulW÷OIà ãXžz¶ l÷‹a;©-ûLüIåí¯ß}À¬x5¶ëŒÿý%ä;±lI!\‘Õ:ÓÑŒ Ç5;p{+µ«ƒjR‹ÓÇDÀ v8¦ãÑõ©Ýñ×P»Ž‚9ëÅ©]×Ó R»3zOS»6xözïp¼Jeñ{¯ÈZY´èèŠl“PNl£s]³µ¨]SE :há¸Û¥ë,ØÞšë, ±wøá÷·Û©]±œ…ÒàÇ·]µ jÔ.¨]P» vAí&¨]Ý?Ëù_ —MZ¸µ j·µ+¦—£–öK”7;$»šÚiMâC75M  ú­qKìØÎêQÙœqpï¨Ù=)ò}+¶«ƒ€¢ÀP\ùW l7â1®íΈ¿ÛuÌY/Žíºž^ÛÑ{ÛµÁ‘ƒŒW)à{“íÔdÇl7%ÖÂvMÕøÉ^P¯òYØ®½5kcïü,¥ínl×F,_¹’Ž•qT¶ lØ.°]`»Àvíf°]Ý?=«%LÃ}ÖSÜ‘ l·¶³ŠãŒ°Ÿl×ìÄ.¯lg펮‹a˜&V¿7Ù®–­s>Âvîh‚%ä(-vp\þùË?üáO¿ÿ×ÿúò§ûmûßßâõ_ÿ…îüotô›?›²îî»â·“ÿ¿|ù»ÿ U[0õÓß׿øO5\øéËÖüÓYýø3§Ÿ­è_F§UìPýüMʯÊûûº¬>Ñëâ,õëêØìèÍ­FÊ ”r"¤ý4»„9€ìˆ´u<º>=#þ ÛQ0g½8ízzA {Fïi Û‡”ÀÒx•ÊpoÍÂr(Ú˜ýÈÎI5\ È6U˜™ŒÆêËqæ³€l{kJæ"cï ¿±Cð® ×veÏŲdÈ @6€ìs ÛöO'†ñ>k@6€ìZ@¶|˜ZÓ-ìÇ«@_¿û€53ðÕ@¶>MItx„V¦{ó(ëMlz^´R›êb–ºØn·3ÌïÄvmÐ Ù±ÿÃÐn—9ŠŽxLÏ£Ëc»Sâ/Áv=sÖkc»¾§×Ãv§ôžÅvß÷ØÃx•º;²ì±› =Çv»¬]éñ©?vEùE±Ý7õ–Ëé`¬á³ò(÷·®¶~¶â7ïøûŠî#2>VF‘GØ.°]`»ÀvíÛM`»}ÿtW÷NwæÑ!8°ÝRØ®|˜œÈ8Kâ¶kv€|uÑÂú\ *aB¡²ÝšGie uÃçÔ.— I ¥ å¶"ð[[ìâP=õsw;x¨žÔîÇt<º>µ;#þj×Q0g½8µëzzAjwFïij7µJ¡Ø½ÔÊ¢•à€ÚíÜR?]í›ñZÔ®©’ÚÎ>Õ}VÑÂ9ïðC_®Û©Ýœ²,Aí‚Úµ jÔ.¨]P»×©]Ý?Y^Øg9¨]P»¥¨]®wYT’w©]³£|u‡àöÜJÄe4Êñníœ7Åœì ÙêTOõ xÛA›êüÖ^#»8jt(=°ÝˆÇt<º>¶;#þl×Q0g½8¶ëzzAlwFïil7·J9Ý‹íH·–`»)©”ÃvM#”Ýìõ‚Ÿ…íÚ[KJe€±w˜Þˆíve^NZø‚2ËíÛ¶ lØ.°]`»×±]Ý?)[úoöÎ¥W–ܶã_e0k» >Ez—M–Y @A2qŒÄpàLùö‘ÔgìνÝRiêq•9„wcÞÒ¿Ø¥‡"ÅvÉ¢×H`»µ°]ù0M8Ë;S?|ñ›øÙØ®>ב"&åäWvfÚ´8•ÞdÛQ”Ê1¾Oè›°ÜŠíê R¢GFŠ„ȶò˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàb(nãUêM:ÛÁFXâ{¿ÚKM8^íE}1lWUi*âwìUâ™?¶kÞÁrê‘<ôŽÂsƒÀ«±]‘UÁÇÛ¸bòÀvíÛ¶ lØ.°Ý~lW÷ÏŒe ÜœkvÉ5°]`»¥°]ù0k‡Cל»Ø®ÙA:»Ep{®”—¡MðÚl;ßkÉ×·P¨LöÁ‘š[ —níH1%NS´ó˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàà$ý:¦v —b;AÞÊ·û&ÛnN*ùZØ®ªÊ I3Õ#äÏ…íæ¼“o,m7§Ìâ’l`»ÀvíÛ¶ l7í¦öYÕȶ l·¶+¦—ðô³íªóS$u¶kã“”Y4œ@®(W–¶“´dT}í¤NarIÜ/§.¿åã­Ø® *Ê\¶Ù4—U#ÛnÈc:]Û¶ë(˜³^Ûu=½ ¶;¢÷0¶›Z¥Œõâl»G>ÚÛÕ~BªéZØnF½=ÿçS`»)ïÞxIvNÇ%ÙÀvíÛ¶ 
lØnÛMí³Š)°]`»¥°ÔŽr·‘ìÃNÎÆvõ¹J&&#Vížjë]€ílc$tyÈé#ÄðþT¯vÙsºÛ5qLåÃÁ¡8ã¸$;æ1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁExt7áa÷ª÷‰ØN7`{“m÷à‚¤;¤®…íš*5NìcõªŸ¬%E{ë\oÃØ;™ø>l×FtUмC™æÀvíÛ¶ lØ.°Ý~lW÷O§dîã}Ö1S`»ÀvKa»òaZuìb»f—äôl»ú\¤$£NÌÕØõÊK²¸yÙ‘ˆ_c»\§°B‰„ú\nGï|o¶ÝŒ8Gˆ–CÓñèúØîˆøs°]GÁœõâØ®ëé±Ý½‡±ÝÜ*er)¶#ÔÄß`»)©ôâšé7ÅvSêùëÛÈ¿nl7ç ÷a»9e–Û¶ lØ.°]`»Àvû±]Ù?%¥2“²÷Y3lØn)l—+6c¯ T»Ø®Ùœ^Û®>—‘3¹&PÍ»²%ñfDÉßt’µ:Õ•sä_4;pºÛÕA¡÷Ë~:WÌ(°ÝˆÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal×G@T¯R ùRlç™6Éï²í¦¤–¾¶{¨r'’ê>¶koM¢yPO÷a‡ù>l×FäòÍÙŽmœ$°]`»ÀvíÛ¶ l7íÚþY¾bï³êQÛ.°ÝZØÎ*++1k?Û®ÙáS£…“°]}®¸yS±Kt-¶SÏòî’¬m% JÄ‚£Ã¾;fÖÕ¹ªÞU“Õ—ïK?[ WÞÀTÒïìèÚÞ‚œuË™^_›š”*kU;z¨¢²Ž գЧÂvÞÉ8¨•ÿa‡v¶{ŒX¯ ‚íP&Ø.°]`»ÀvíÛ¶ÛíûgF‚´ãPv…ÀvíVÂvåì=ëm éa»‡] ¥NÆví¹ eƒ†GÅ.½ª*q^oAØŒÜì ¶ƒ:…ËdwëÞ|ØÑÍØ® šµì˜y,NsT;ò˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàæà¸c 5ök±ÛF(o°ÝœÔ¼Ö%Ù¦J¹£ŒÕ{þ\—d?¼£¹üoèêÅû°]Õm‡²§*UíÛ¶ lØ.°]`»!¶kû'£1î8 ¶ l·¶ƒÚ³D»Ùv»ôÄOÂvõ¹˜U9¥Ñ*þH—fÛå-£½®v$X§°g”Ô-[û°Ë|koÁ6¨‚þLÿ°K‘m7ä1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁ „Ç;V)çk±ÒFü.Ûî!l‡T„Ű]SEÊ€´C½}®"åsÞù?¥À¯ÆvmDó~#”eOý‹Û¶ lØ.°]`»ÀvCl×öÏLfý²;åÀvíÖÂvذ9ö/É>ìÂÙØŽ#Ò~ŸŸ‡\‹ítCÓ²3½ÆvT¦p½XÚäš]Êz+¶kƒ–û­°vÏwÛ½á1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁk’Sæñ*•.Åvª°øl7'Ua-lWU•#©ïØüÅß_5¶kÞDãßÖR¾ÛµËQŒÒøŒQÞ z ¶ lØ.°]`»ÀvØ®íŸÂ2¨!û°ãÀvíÃvåÃ4“ò°¯?à¾ø€ÍÈήmWŸë‰¬LŽáAÕú¥½q«•€^c;.SØ•\³#—[±Ý”¸L)°ÝˆÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal×wa†<^¥ŒíRl'`[z{IvNêj—dêËÞ¨6VïdŸ Û•·Ö¤bˆÃßV+„ºÛÍ)£¨mØ.°]`»ÀvíÛM`»©}–s lØn)lÇ›$¨°QÛ5;9ÛÕç‚”WL b|%¶cÚ°†ókj'ug&·þm^yœ¨om$ÛL€j×Ä9Gi»!Žéxt}jwDü9Ô®£`Îzqj×õô‚ÔîˆÞÃÔnf•‚„zq#ÙT7{¿Úï—Ê‹Q»)õþ¹¨Ýœwžo¢^MíÚˆD)XñC™Ei» vAí‚Úµ jÔn‚Úµý3KY}¼Ïj² vAí–¢våÃôzyÛ1u©]³ÎgS»ú\cG"M b‡|e²]Þ(“:¾ä´La„Zr©ï¨Ù%¦[±]TRx‡8& l7â1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛÕÁ)¥lƒBÀMdÖk±]&ÞÄàM²Ý”T—Ű]SåÈ9øÓã-ù“%Ûµ·FJ&ãÍ’Àò}Øî¡¬¼ï ðÃîéy`»ÀvíÛ¶ lØnˆíÚþ)ecÚË?ŸÛ¶[Û•Ó©ÄHÐÅvÍŽèô;²õ¹h†åᣠä%’“+KÛùfÉ<½éH‘Ë椚°Ÿ˜ÛaïͶkâ2¨ ¢Ìf'O•˜Û½á1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛM­R*WgÛ “+wVûÝR³­…íêÙÊÆ>VŸá“5’mom.’aìñ]Ñ9åL;”=]}lØ.°]`»ÀvíÛ ±]Ý?Ëjã@™Ø.°ÝRØ®|˜åãõ\žÓÅvÕÎÜðll—7)Ç`¯×7Φvõ¹9»¤¯©á—¨Ø ^Ií`kÍÙô5µó2…ÍË™ÔúS½Úe{ïÈN‰ÍAíF8¦ãÑõ©ÝñçP»Ž‚9ëÅ©]×Ó R»#zS»©UJ.nH!eÑz×bNªâZØnJ}†ô¹°ÝœwôÆd»)efØ.°]`»ÀvíÛ¶ÛíföYC‹d»Àvka;ßëÍŒ~ÙjGüTLå$lWŸ+”([L ¢ÓàÊd;Î$GGíj$)Î>ZìÈùÝY‰ÿñ»¿ûÓOø×ÿùî§û±ýßáúï~†; Žžtÿeiþí+áGð/ßýÍ_âÖY}ÿ·õçÿ¾Æßÿ}Ù§¿÷6bÎ ã·‘LïÞF¬¼Í½ªÕk"ÎXµ>ìþBµcý >¤ü¦|˨+î—#BaKTž”SîPåŸíD¿å¯ÞœÖÜ÷ðä»·1u¿šÝð«ïT’U»žò«sÆ:doQzØ©‰ð}„~Rœ§h>3@¯}.NèŠ?Ð÷ÌY¯LèGž^ÐÔ{ŒÐÏ®Rx-¡ÏiÃ׉µÓR¿.~ò ý´zýL×ág½ƒ7úie˜‚СB„>}ú ô; ýì>ËO±|ú ôߜзSÈs&|Oè¶<·Šå㹌تC&P±¿²et¦-IF~UŲ(€:…Áz}¦~¶+›Û­Ø® Z"åJäl$®Ø=ÄíÞð˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàîîãUj­lÕYõöõeþ_7 «o ‰ÑRzâ},¬ÊYi‡²×%«‚… ,,XX°°`a¯YXÛ?,Œ÷Y¢`aÁÂÖbaõÃD*ñ,í8à‹ÑÅEöõ-Œ’r$$a.¶ƒZíÒŸj3ž„íêø ÕóÀUÅNäJlGi33x‡í°üXÙ3›ôã’j§ÂéVl7#®œ £ùÌÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal7·Jéµ÷á-ÉÆ‰ÞdÛÍIÍ‹eÛM©'ùdÙvSÞál÷Æ)eÅ=Aƒ0a „1cÆý„±îŸYs¯µàÏvÉ%cÆ¥#Ö{æœÔ…»Ø®ÙŸŽíêsÀy0Š]‚+{F“m™ókjGu»•uÅ6;Ð|+µ›÷Üy;¨ÝÓñèúÔîˆøs¨]GÁœõâÔ®ëé©Ý½‡©ÝÔ*eùâÞ3X†!xCí¦¤:ÓZÔ®ªrp,»úP½'öÏEíÚ[ª½SƾڵYP w({ºzÔ.¨]P» vAí‚ÚµR»²Z—²0Ž÷ÙŒQÅ2¨ÝZÔŽ6¡b Ú¥vÍ.=å?œDíêsAˆXFF±c¼²ùŒà&åË4y‡íÜÊÉNFGêb—HV äŠ**Á°ÛXý«ä‘_y WÞZÝvü¶äpg WF4LÊ0VÆ\rÈE \rÈMrîDd>H¿hvø:Í1¹ä¾Y Ç[¯åo¸_ì¨Ú™=Uë:)+Ï-¯™xT!½Ú‰8\™~Q‚ÅlÅí¯9ÞÊ\­ÅJˈéÞü‹)qY"ÿbø‡õŽG×Ï¿8"þœü‹Ž‚9ëÅó/ºž^0ÿâˆÞÃùS«”§koçd[‰KÞä_ÌIÍi-l7¡¾˜|.l7ç¹ñÖÔœ2ºLíÛ¶ lØ.°Ý¶›ÚgŸÿ|Ø.°ÝØ.—cÈÛ;zêây¶Ë‰Ë‘¨D‚üÔØð|lþHÀÈðšÛIë”Ä©ŸÂPí’’ÝÊíê µ‹"Å•­ÁíF@¦ãÑõ¹Ýñçp»Ž‚9ëŹ]×Ó r»#zs»:xYA-óx Eÿºåݩܷ8å÷«ý^©”`±jGMU->¸õõPÿ‚:þª¹ÝÃ;b–aì ¹]±æÑíP¦Ü.¸]p»àvÁí‚Û·ÛÏíÚþi"ª;NÊ)¸]p»¥¸]ù0YMÍRîr»fW¦³¹]}®——ÍcÆfzi‘rÙŒ¡8öu §íH¶;"þl×Q0g½8¶ëzzAlwDïal×Ï’Ò ÎL³Óœ.ÅvbV+è½I·›’š“®…íš*—ÚÍz¬Þà“Ý’óŽÝXXÿtˆ&;”E¹£ÀvíÛ¶ lØnÛÍ쳌åŽÛ­…í´â0”rVéc»j—)Ÿ^î¨ïfÃðˆÍ˜¯Åv@,j¯±]nS8‰¥>¶kvék¥—b»6hY`x^ÑìÈr`»éxt}lwDü9Ø®£`Îzql×õô‚ØîˆÞÃØ® .èè0^¥XìRlǘ7d|ƒíæ¤:®…íê]h¬^?¶ko­lÐåa—Ò}Øî1¢¦r Ù¡ìévs`»ÀvíÛ¶ lØnˆíêþYwPÖq¸,)Å-ÙÀvka»òa2¨gÄÔÅvÍNäôl»ú\ªõÖl;ò ±Z#ûkl×r3J¸›÷P›]ºÛÕA3’èP\NíÆ<¦ãÑõ±Ýñç`»Ž‚9ëű]×Ó b»#zc»©UŠ/ζƒ(¿ÁvSRÃvM•Za»f'úÉ.ÉNyGïÃvmD+§ŒA/‘‡2‘ÀvíÛ¶ 
lØ.°Ý~lW÷OÏŽã}ÖRŽžíÖÂvU©x=v±ÝÃîé/Ï'a»úÜòå¼4„alhW6¤´9•€åÍ%YoGå$2PÚì8ß[Û® j@ÒX\~*רî éxt}lwDü9Ø®£`Îzql×õô‚ØîˆÞÃØÎÛ)È´l7ãUÊY®Í¶ÜTé ¶›“j°¶›Qo‰øsa»9ïäkÛµÁ˜IóoÛ¶ lØ.°]`»Àvû±]Û?Ë.‹iÇ  ˜¶ l·¶+& y‰³¼‹ívOÙn'a»úÜÌÈDy48æ+³í¼)}Ý’R™ÂΞ»Xš¹ßŠí¦Ä99¶ð˜žG—Çv‡ÄŸ‚íz æ¬×Æv}O¯‡íé=Šíƒ ¨@¯Rl×¶’e÷¿ÆvsRe±ÚvUš!ìPoŸ«•ì‡wLqì}nØz1¶«#bJI3íøÝôu@`»ÀvíÛ¶ lØî%¶kû,$M Ã}¶üãµíÛ-…íê‡)$Y•¸‡ívŒg·’mϲ4äM"È·¤È”€ù5¶ƒ:…É0çîÕš‡~ ]ŠíÚ ªžÆâÄ#ÛnÈc:]Û¶ë(˜³^Ûu=½ ¶;¢÷0¶kƒçòÅÀŽ%TíêÚv°‰o°]•¡¸Ž¥fZ+ÛnJ}9¾êçÂvSÞ¼/ÛîC–?ïPFíÛ¶ lØ.°]`» l×öÏŒœÓ8\% lØn)l5‹M)iêc»f'‰ÎÆvõ¹žÊ!x4؜ӵµíÀÊÁÍÞa;…šY .¥ÅNù똳°/¹ÿþÏßÿ¹œiZÐùKÝ„¬/ßLµü´Bi¸<q®M|òKû•[|ôñŸÚ’ÜV‰ýøÏÿ^tÃuGáCÃ}÷O¿ÿÓoÔ£"—?ÿøÛfW>ÑŸþðß}¡—äK÷üõÏ_|˜“¿weɶñ[´P$bª­Ž†8Xí÷Ö£ÿXí/«}«`ÎzyVÛñô’¬ö—ë=Õ–Á)yB¯RHzmACÆM’ôTr¯ùø2õÿÛ²Ú¢ŠÙÓõ_ïX¿vV[½£©lécï0묶Œ(”Id‡²(h¬6Xm°Ú`µÁjƒÕαڲfsê÷ý°Ã¬6Xíb¬V™1s°Ze½€Õ*‹*¦1jähé•)–´%ÿ_öέՖã¸ã_å =E£®kwL0<æ!Ž,É:Á¶KvÈ·OwÏ–¼|´¦{š¹¤ñ*û`ðQiú?µ§/õÛÕUõyÀšŽRÚ¶ûÅ.¿Ý E«8èE™Õ Ï;¶Ûà1 Î펈?Û5ŒYOŽíšžžÛÑ{Û­ƒGFê/¡@ —b;2^¶R,«VL(}©Œsõ!YUåm5pê«—ðb7£qMœÂï˜Ü‡íVe– N¬vÁ±c;ÇvŽíÛ9¶sl7€íÊþ‰!ŸeÇ>kÑS,ÛÍ…íò‡Y˜7ÛW;0;»I}. Hêó&%~R1ýÞ‹ºÛ)cÇvŽíÛ9¶slçØÎ±Ý¶+û' ÔþA”¼‰c»¹°]þ0%ŸTâlöÅGpIŠKgc»òÜŒD¥7$šÙµ 8„›Ñ\§:åÙÞ¾‡Zíòöv+¶«ƒÆÒzúâ”ѱ]Ç4<:?¶;"þl×P0f=9¶kzzBlwDïal7´JÅpm´0Ù¶[%IÜ#u¶l»ªÊDÓõ^ Û­ÞÉÿÚû°]‘ƒ$ä_)9¶slçØÎ±c;ÇvŽíöc»ºÏR>Fëѱc»¹°¦F„íK²ÕNÒéÙvù¹X)¿Moi ½´‰.J7ÚH™Áf”ËM¡ÕŽÂ½ÝƒË ‚j’bWœ@rj×Å1 ÎO펈?‡Ú5ŒYONíšžžÚÑ{˜ÚÕÁ‰…C LñRjgKŠiƒÚ IÍ{Â\Ô®ªârKv¨×ôZÔnõN‚ Ô÷NÞÊï£vuDa…Îï0WeÂNíœÚ9µsjçÔΩS»ýÔ®îŸIòÎý}6¢wvj7µ“Ò•MÚÔ®Ú9½{p~®„|P¥~€!éJj‡²@Þ‘6¨æ¬ÁRê܆/vb ·R»*Ž“¥®8ÅhNíz8¦áÑù©ÝñçP»†‚1ëÉ©]ÓÓR»#zS»up“ÐI`[íøÚ+²Ì¼Ðµ“'»"[Uå3%ê«gx±†õ­%ïðΕíêˆ(‰ö•IP§vNíœÚ9µsjçÔΩÝ~jW÷O‹¬ûûlzÞøÉ©S»ÿ7j—?L!‹¸}E¶Ú¥¤gS»ò\AÓ^²êj÷$-àÔæÁ)ÿ6®ÈÆ<…#Hhc»jn®lWeîD™«]`Çv=ÓðèüØîˆøs°]CÁ˜õ䨮éé ±Ý½‡±]\©SÙnµ{–¯|&¶Ky½—­ÊvcRgë#[UiÞ¬Tw¨W}-lWß:樴ïüýÛ• „N«ŒªÌRrlçØÎ±c;ÇvŽíÛíÇvuŸ¥D$©»Ï¦Ç,ÇvŽífÀv±T–ˇÓÐI¶«v@§c»ò\SˆÔ›@Rr®¼" ‹åX‡âsl—–‚ÇNËj'tïÙ<(…@AH{â²ÝCÍ(Çv<¦áÑù±Ýñç`»†‚1ëɱ]ÓÓb»#zc»:xéºC• —b;1Z"Å lW% ä­wH“U¶«ª“ôÕçÝÿµ°ÝêSµØ÷N>Ü߇íꈂAiÇaplçØÎ±c;ÇvŽíÛ `»º&F ;öÙ¨ÞGÖ±Ý\Ø.Õþ°yÌ_æ}ñѬÀátlWžK¡Ç½«⥠)h %ù ma;3L=2f–l¶@.«Çüeuý\Þ’äÕ¹âUë{‘ï äòˆ:5üWeä=ó@Î9ä<ó@n(Ëû§˜jØqxüõ±rÈÍÈÙ‚DåvþE± )ž^ì¨<Ëo9:$¤ÚAºòÚ…%G:1èó@ÎF•dÔ¹6UìXS­þö°Äÿñî_¾ûñÃ7ÿûîÇo¿®ÿøí¯¿úé×ôŽ„ÿ¼6ö‡u)|‹¾z÷›ŸYC«Oþ¹üü?)ÁÃ'ÿ–7êO޾MÄÍ·‘”ßfRÕ¼Y§B ËÏàMÊ?äùCYrŸŽ(TÂò¸3b¶ po?É!qøp;³n6Ò)?ëæˆøs²n Ƭ'Ϻizz¬›#zgÝ ­Rôl =1ë&?QÞê'9&'ƒµcêÓ‹ÁÚ!ïðõ÷Ëaí 2ϺqXë°Öa­ÃZ‡µk`íÐ>«É³nÖÎk”#<©†ñ¬ÍvÄé|X«ÍBìä;T; |!¬Mº”NkJOa-†<… /¦6V^í ÜŠíÖA™,iì‹#IŽí:<¦åÑé±Ý!ñ§`»–‚1ë¹±]ÛÓóa»Czb»·Á%IܱJ1][šžXÚè'ù¦ ﮘv(™ŠÚ­ª„ xÇVõX{ý¨Ý›w¬äXö½#Íx®¦vëˆ1ä÷æÊ,8µsjçÔΩS;§vNívS»º2 1JwŸåðP Á©S» ¨]ù0ó#4 R‹Ú­v,p2µ«ÏÐhd½ ”$è•wå-YröœÚAžÂ’Õäª][+Ó‰ŠMjW~œÿúuž’_ÿ幯÷¿{ÿáÙýá»?­äýâ9DdH?¼ûö}Ž~Ÿ××òÊþ˜#ÿ¿•⯘þ}ã}Tƒ0Äîû”Þ`uö‡âØüÕ}¨k+p¬~XAÇŸ öø§ß¼ûþ»ï~ÿî>üøí»ðóßýðç/ÿ+À~XÉÿùë~üùmk¸úóòöý‡åçŸP]Þòß|þçï÷§|4ýá??üñ›ï>ÿ |žÿï÷ßþc9(þº?ýíºØý:¯û…€–õáÓ_ýú}â¯Þù }öõW¿ýæ3N˜>ûRßÃgߨWô^¿ ˜JŸþtÞ]•§ÞŠù¤”7¡:ÞÊvhžjÙ…q ÎÏlˆ?‡Ù6ŒYOÎl›žžÙÑ{˜ÙÖÁóžÚùño"ãµ} òF¹0©%ÝÚíÄJsaÛªJËõ¯…mWï ‰iß;úXHìjl[GŒ!|ÒeèÉ–ŽmÛ:¶ulëØÖ±í¶-û'BPj犭û¬EulëØv*l›?LKœéÔ¼_íòùll[žkÌ9$£Þ2£¨W6KŠ!na[,S]4¦vg‚jfv+¶-ƒ”´o늣<Ù²Kdܸk(³žÜ5==!¸;¢÷0¸«ƒ£jŒý%”ôRpKÉ3Ý^í‰ ™H_jÞçÂvU™¥ê_ìŽô›wJõÐ÷Î#?¾ÛÕ50õ• ªc;ÇvŽíÛ9¶slçØn?¶«û§%¶°ã š,9¶sl7¶Ã%…8˜4;T;5=ÛåçB D½#t¶ O®sXÐR—˜’DyÈQžÂi'«vAã­Ø® * bꊓǺ‹Ží6xLãóc»#âÏÁv cÖ“c»¦§'ÄvGôÆvC«^ŒíBÌcn/öY'Ú¥ç¢vU&v7Ô7» ¯EíÆ¼£÷õ]G$Ù¡ìáw¨NíœÚ9µsjçÔΩS».µ«û§)î8¨§vNí¦¢v´¤Ò&Ô,q“Ú;²„gS»ü\ ¢Œ"LÕ.¤vÌ‹ 2l$ÛqžÂœum@_íòöv+µ«ƒ2’)öÅ=fW8µÛÀ1 ÎO펈?‡Ú5ŒYONíšžžÚÑ{˜ÚÕÁ…(©ôW)ŽáRj—Wò™7JIÍÑÍ\Ø®ªRÌâõ‚¯…톼£ ÷a»:b ”bØ¡ ½{´c;ÇvŽíÛ9¶sl7€íòþ™B r¡¿Ï³c;ÇvSa;^&k'ÛU;À³»G×çJKÜ;¨&dã+K–õyŠ"<ÇvR¦:åŸN'S¤Ú!ß{G¶ª¦¤;Ä)y²]—Ç4<:?¶;"þl×P0f=9¶kzzBlwDïalW‡|èϧþþ*éÚ>ÂliÑ$ØnHªE˜ Û¨/^^ ÛÕ·ÆÀfØ÷ðIVeQBâÊ’w$qlçØÎ±c;ÇvŽí°]Ý?%‘"õ÷YÇvŽíæÂv²ä¨%E‰MlWí@Îî#\ž›ã4!íL l'¯,mg‹h)üöÛ•´bVR›J«]‚{ïÈ–AK5(¤Ø!EÇv=ÓðèüØîˆøs°]CÁ˜õ䨮éé ±Ý½‡±]¼ì5ÈýUêI ÛÉwdSŽÒ·Wû¨Ê`;T%ž ÛÕjA; ªz 
ñµ°]õ…h«¨Õê‚\ŽíÖ*¤FØWF×wÛ9¶slçØÎ±c;Çv=lWöOé$ ­vÁ;R8¶› ÛiÍ¢‹!YÛU»ÀállWž«Â)FèL r€¶tei»´$IFü<‹Kpòa™;‰µÕŽžõA¼Û•A¡Ô(ìüš~µC¿$Ûå1 Î펈?Û5ŒYOŽíšžžÛÑ{ÛÕÁKûÖ/©v®½$ÅÜȶ“*“5’So/ÖHvÈ;(7fÛ)KÞHÖ±c;ÇvŽíÛ9¶ÀvuÿÔüuô÷Y‰Þ‘±Ý\Ø.. L£‚µÉ®vpz¶],8M{ÕlG×b»Pj%‹Ï±]ÊS˜¡dh´ó«]ˆr+¶«ƒjÌkIè‹“‡…Ží6xLãóc»#âÏÁv cÖ“c»¦§'ÄvGôÆvupËÀú«TŸ/Åv qÚªm7$Õx²ÚvEU>FäƒvÕK ôZØ®¾5äɸíêˆX^:îPFŽíÛ9¶slçØÎ±c»lW÷ÏÈý}VÉÛ9¶› Û¥%QŒ"’ÚØ®Ú©ÙÙØ.?—jÈQZge;øe$wnm;ä¨aÛY™Â’ÕØÆvÕŽ‰nÅvuPËŽ„⢘c»ixt~lwDü9Ø®¡`Ìzrl×ôô„ØîˆÞÃØ® ®©×5|™äRl'Êy½× l7"UC°¹°Ýª*":ÇÒÕN_,Û®¾5R^¹ú›ež'p¶«#R>E̼ۡ%…c;ÇvŽíÛ9¶sl7€íêþ©SŸ•è—dÛÍ…í¬\>„˜¨‰íŠØÃAñ$lWž›âØ™@ óâ+³í`)©¤O±…<…#¨k–±\í‚ÜšmWMÂ!‡Ë]q‰Õ³íz<¦åÑé±Ý!ñ§`»–‚1ë¹±]ÛÓóa»Czb»u𼆆öïhÞD\\Û.ï´ª¼½Ú§$!!ô¥jšë’lUe!m_¶\Õ¼VKŠ7ï@ävÞüO^´Û°Ý:"€Úe`ŽíÛ9¶slçØÎ±c»ÝØnÝ?E’†þéÎX=ÛαÝTØ.˜‰R¤ü?Íl»ÕNªàœƒíÊs9PˆÑBg•¬<WfÛå@ŽcàçEÊ êT¤Ò&cÕ.k½ÛÁ’òq9/%Iúâòç娮ÇcÛ¶k(³žÛ5==!¶;¢÷0¶«ƒAàÔY¥ŠH´K±R\$¤çÙvƒR-Î…íª*„@õÕ?«Ì÷wíê[S>úÞA¼¯¶Ý›²Hd‡2ñK²ŽíÛ9¶slçØÎ±Ý¶«ûg‚„aÇé.>´ìrlçØnl—?Ì›D|ÒÉõ‹>à|ÐM§c»ò\‚|N݃*QÐKkÛábjÄòÛá:ÕÚÕŽV;Å{³íÊ @BY_W 8¶ëò˜†GçÇvGÄŸƒí Ƭ'ÇvMOOˆíŽè=ŒíêàŒ Iû«ý2µúÜ–).7ZR¼Iåí*c?½ÒdØ®ªÒÀÔ¾$»ÚIx1l·zÑ:©éo^¼1Û®ŽE“ì˜ ŠžmçØÎ±c;ÇvŽíÛ `»²"‘’öO˜Ï«ŽíÛM…íò‡É(ù€ÿËòÌ_|ô3žÝI¶>—”õ&P¢Ò¥Ùv¸¨`Þ©žc;*Sòßwnx;°¤·b»*.KGJ]qøe:¶Ûà1 Î펈?Û5ŒYOŽíšžžÛÑ{ÛÕÁµ´ Çþ*%zqm;±%ÒFm»1© 2¶«ª"1Gî«áÅ.ÉÖ·¶¼Õ3ô½“CÔû°]1¿C¢ÊNCŽíÛ9¶slçØÎ±c».¶«û,#©õÏGD ŽíÛM…í¨dÑ¡*@ÛU;ˆgw’­Ï…HÀíòЫÝÓðè4l‡i1Kù¹…íÌP˜ô°]¶ãÉz ®ª±‘ôÕ«¥W äŠw$Gò;¼“ï äÌ(HÊ$z çœrÈy çœr#\Þg 8v÷Y"ï-èÜd/5ÿHÚÕŽªñé\y.çØH¹=ª]$¹öÚ…r7ëy ÇK lA´ÈU»|j½5ÿbH%ðü‹Þ/Ö?ÿâˆøsò/ Ƭ'Ï¿hzzÂü‹#zç_ ­RŒ×)'•%lä_ŒI¥4¶SŸàµ°Ýwù>l7¦ŒÐ±c;ÇvŽíÛ9¶sl·Û í³Q“c;Çv“a»R$œ-u°]¶»Û¥ŸÆH½ ÄøÒjGqÑ 9{Ží$OaálÑIi®vD|+¶«ƒšhÀØ—„ÛõxLãóc»#âÏÁv cÖ“c»¦§'ÄvGôÆvep 1I§vöj®½6¥IܪQ>¦T&K¶«ª0$ì$ÛU;|-j7æÓû¨]‘ÈR§vêÛx±#§vNíœÚ9µsjçÔn€ÚÕý3ÇãùŒÙßgó†ìÔΩÝTÔ.˜‘0Åö­©jR<›ÚImmXª½ DII¯M¶“h7zÄkžÂ1%ÅNðj)ÝJíÊ å:ÅŽªz²]Ç4<:?µ;"þj×P0f=9µkzzBjwDïajW·$¦Ò_¥øbj Rˆ¦Û«ý~©lsa»!õúZØnÌ;zc²Ý˜²èÉvŽíÛ9¶slçØÎ±Ý¶Úg½Ø‘c»¹°’T°‰íª]@<Û•çš0ôйT;V¸ÛÁBR³ç\ÌSØ(t®ÃW» ÷&Û­â,敨/Ž¢×(ïò˜†GçÇvGÄŸƒí Ƭ'ÇvMOOˆíŽè=Œíêàš£þ*Å1\ŠílAÜj-8$U`2lWUeí»ö*‰/†íVïhâNfú›o¬Q^G´ÂÔw|uª^Úαc;ÇvŽíÛ9¶Àvyÿ„@ÜÉ·¯vy®9¶sl7¶‹‡¤_†q_|ô“a:½µ`,wo#„¤Ý#4E¾ÛAéh…ϱ]*S8åÍÚ€±Úµ[±Ý8~¸«ïØnƒÇ4<:?¶;"þl×P0f=9¶kzzBlwDïal7´J‰\›mǦ ÙÀvcRãd¥í†Ôç-ôµ°Ýwâ’½Û•sœÐt‡2ôK²ŽíÛ9¶slçØÎ±Ý¶«û,bÞ¤°»Ï<$ 9¶sl7¶KÇ¥ØÉ¶+vÉ꼜„íRÁqBŠB½ ÄáiþóZ Ú’ÅçØ.ÿS@˧ýN—õbXîÀv#ârL Žíz<¦áÑù±Ýñç`»†‚1ëɱ]ÓÓb»#zc»28aþ£¡»JåáêK²¥’éöj¿_j„¹°ÝzøeZãß7¶«o}Ã3ÇêŇrº—c»:"—bÙºC™y¶c;ÇvŽíÛ9¶sl7€íêþiçÙŽóQ¤àØÎ±ÝTØÎJ§¡ˆMlWíÈällWžK)rŒÖ›@L9š¸2Û., œ}ÿ4ãPÊH¤Íßѯv!ÝŠíÖAY-&ë‹#CÇvÓòèôØîøS°]KÁ˜õÜØ®íéù°Ý!½G±Ý:¸ c ýUŠŸ•=ù’,ÑÆ%Ù7©cÜ±Ú †©°ÝªJ!AûŠï›z}­K²ë[çÐ2¶¯¿yñÉ^íêˆ(Æd‡2KŽíÛ9¶slçØÎ±c»ÝØîÿØ;»dçM ï(ƒþ¥•tÿ;)àóuÒžì±2]ô¦UÍkƒà‰¶8K¨/úpþгŒ”dÛ-…íÚ‡Ù~ÝU‹a#ÙÍŽüjlן‹h\ßf6‹ÝÙ’ý!5N½i$ËЦ0+<ìúÐíþµÙÿ¶ÛÄ™«”©8.O—õÛ½á1®íΈ¿Û ³^Û =½ ¶;£÷4¶Û€ù*~om;‘x êl×%`uÔ©ø»{Æÿ‹íº*–b6Uuûú]dy‡Ÿ¼s;¶ë#ŠVm;¾:~Ýá.±]b»Äv‰íÛ%¶Kl÷ÛµøiÚ?ó8šÙv‰íÖÂvõì;Xtû}æ¯ÿ|À̬W×¶ëÏÕ:}h\ÛîÇŽèÞÚvD ¯±Ö),5"!ŽÚº]ù0¶ëƒ2 ‹âlvÄÙIvÊc]Û ¶(8f½8¶zzAlwFïil· n„`óUŠ1nÅvdu˜o°Ý1©‚ka»®J<Üa®^о Ûõ·ÖB´Ë;ñ¹N²Ûˆ†…MæÊ´Hb»Äv‰íÛ%¶Kl—Øn?¶kñS}ÏîN‹qb»ÄvKa;lYl¦„eŒíºR\í°ã@¬¯4@5bÜÚ’ýáZ7ïkÛ1µ­² âø>üf§ ÅvÔר ž‹{þù ±Ý3ðèúØîŒøk°Ý@Á1ëűÝÐÓ b»3zOc»>8Ζ7‘ ÷^’¥xT1o°Ý1©ºVm»cêëô»°ÝöÖZ îñÎkÛm#¢91îP&Ù’"±]b»Äv‰íÛ%¶;€ízüä¶o³yœå’dÛ­…íê‡iâŽ@ÃÚv›]ýÒ¯Æví¹Pu6Ì •±–G(¼¹$Ë-±Z¢Êx³ßì4ü£d7qä1éhýc‡Ù’bÊc]Û ¶(8f½8¶zzAlwFïil×Wœ¯RL7×¶Sz ¼ÃvǤÚb—d©’ïÂvǼô9lwH™Hl—Ø.±]b»Äv‰íÛíÇv-~z=Êñ4Îz=ð'¶Kl·¶«¦Ô/Ëoõ×>`)n—c»ö\$uŸÃ°j‡rç%YnqÃâ5¶“m«ŒLã«5ÝŽã³—dÛ ^4‚y.ΟN"‰íÞð˜G×ÇvgÄ_ƒí ŽY/Ží†ž^ÛÑ{ÛZ¥žÛ–ßRÛü¡&o°Ý©^ÃçZØ®«‚RC•ìPï_vIvóŽ×?­Ï½OPóvl×G¬oëTv(ÓÄv‰íÛ%¶Kl—Ø.±Ýl×ã§bd;âìóY>±]b»°]ý094”bœm×íDàjl'‚y‰Ùª~Ñ;/É>¢ T~í´NáÀê¬IDíS½ÐG±Ý!q¦YÛnÊc]Û ¶(8f½8¶zzAlwFïilwd•Šrsm;ezùlwLªÒZØ®«ª[œ«GŠïÂvý­UæÞ!ý`'Ù>¢0¸ø\gm»Äv‰íÛ%¶Kl—Øî¶«ñ PÝûLwXãY$¶Kl·¶«&2‡ãÛu;¸ÛiÃqXwv³ $ïmIQêIñ5¶³¶U6 ÆqþE·{¾Zó lgm}Atј‰«ëç%Ù)xt}lwFü5Øn à˜õâØnèé±Ý½§±]œÚ" óU /ö×b;/u½7Øî˜T÷µ°]WÅE„x®ž¾ 
ÛmÞq@йw˜?˜m×GT`ã²CÙëw‰íÛ%¶Kl—Ø.±]b»×Ø®ÅOhŸ±Ñ<ÎFdm»Ävka;Û° !¶ëvð´…½ÛµçFq#šn¡«à½-)ꎭn‡_c;oS½žõ^ä%þK©÷©þáN²mж‘·â ÛÍxÌÀ£ëc»3â¯ÁvǬÇvCO/ˆíÎè=í¶Á[Ú4MW),RnÅvíG†ïW{3Ÿ\3ݤþ¾Ïûÿb»® EÌl®¾ ÛòNÝ}ÛõëV(&—w~Þ€Û%¶Kl—Ø.±]b»Ävû±]ŸÊ¡“,—n'’Ø.±ÝZØÎ{Í: tb»n§~y¶]{.B ¨Î&PµCº3Û®< {|w‹*5&Xº]AXí ×Ô»Õ£Æ\=~ÛA®½uX”²Ã;Oµ >p«#rÚ¡ì äA.ry˃\äò —¹¹?Uëg¼#Îr”<ÈåAn©ƒ\< D]†‹Ž¯Mu;ƒËó/êsÛÎØfY ÝNÍîÌ¿ˆ‡ªÔoóõA.Ž5dÕÝL©#:êGó/º8-anSq$d™1ûa}àÑõó/Έ¿&ÿb à˜õâùCO/˜qFïéü‹mððºhÏW)Eº5ÿÂÕ“Ê‹U;:¦Þé»°]kS˜µÙìž"ùíØ®èB>¹š½)£¬v”Ø.±]b»Äv‰íÛÀv-~rÿ‚8³\”Û%¶[ Û¹±Í°k±b×c;¯ç3¨¡@gHÿ•Â|Cþ…?j r°—ØNJ›Â(¦1TºÙ=×…ú¶ÛeQ÷ßì("±Ý„ÇŒ<º<¶;%þl7RpÌzml7öôzØî”Þ³Øn\X¬È|•âß-ï.Åv¤ñ ¯±Ý1©ò»žúÿŠí6UJl;ÔÙµ©ûÜ;Êþ1l·hĦe‡2ñÄv‰íÛ%¶Kl—Ø.±Ýnl×ã§@!˜ÆÙVŒ9±]b»•°]û0H„~oÿúϬ€—W;êÏE…z>šÂ0Ea¿ÛU¯²¼¦vÐvÊã²L›½*°{#µƒ¾ Õi*®ZIR»Žxt}jwFü5Ôn à˜õâÔnèé©Ý½§©Ý6¸úŽU Äo¥võüñ¨Ëùj·Ipª[•Rm­bG›ªº©à±ª]Fþ.j×ߺþe m‡wìsÅŽ~”)YÄe‘ÅŽ’Ú%µKj—Ô.©]R»Ô®ÇOE$ر?ó¤vIí–¢võÃ4oCÇwd7; ¼šÚµç†ÕªL7ªÕ®ÜyGVøÑ6éH¯±ö)2N¿èvÏ ‰>íú ÀÎsq.ÙZpÊc]Û ¶(8f½8¶zzAlwFïil· n“ÝÒf'q+¶”Л;²?\Ý÷HµµZ vU õˆå0U¯Eä»°Ýæ ßù±{®~7¶ë#"‰+íP¦™l—Ø.±]b»Äv‰íÛÀv=~²9{™ÇYÆ,m—Øn-lW?LQSã!¶ëvÌz5¶kÏ5'›N ±â·&ÛÁ£ÎEÖ×5Ê…ú–Úl–m×ížÚ>íú lDsqÄ™m7å1®íΈ¿Û ³^Û =½ ¶;£÷4¶;´J1ÆÍwdíaï¨Ý1¥¿£ù©]SU•ǤðÄf‡ô]•íŽy‡Ê“íúˆõ`\ß|‡2ÈÊvIí’Ú%µKj—Ô.©Ýj×ãgÝLÂ¸ÝØfWÏòIí’Ú-Eíê‡éHà/îLýõŸ¸u>¿<Ù®=—Q‹Òt9CàÉvõGáþæŽ,·šBo©»Dù(µëƒ†±ŠÎÅEɆS3ðèúÔîŒøk¨Ý@Á1ëÅ©ÝÐÓ R»3zOS»6¸` mÓUÊJ{ïȶd»À7Øî˜TY ÛSö]Øîw€?˜lÇFUPÞ¡L0±]b»Äv‰íÛ%¶Kl·Ûõø)P^\1üg™Û%¶[ ÛqObk÷&}ˆíšÆS¶èEØ®=×µM²º¼7ÙŽõMY‘:…ël•ñTovBÅv‡ÄÉSAb»7Žk|w;ÖÏ^‘탶R=Pæâ 1¡ÝŒÆ <º>´;#þh7PpÌzqh7ôô‚ÐîŒÞÓÐîÐ*å|o®…<‚áM®Ý1©«Q»¦ªý[štÿyË/£vǼóÌÆî¦v›2Þ¡,(©]R»¤vIí’Ú%µKj·ŸÚõ8K ³ŽTÝ‘’Ú%µ[ŠÚY»úÊ¡2¤vÝŽ”¯¦ví¹fR÷©S&¦…o¤v(V¦Zߤ_x›Â L“"–Ý®Àg±]”ÄÜx.ŽJ¶‘ò˜G×ÇvgÄ_ƒí ŽY/Ží†ž^ÛÑ{ÛõÁ¹ÕQÕ«”ó½ØÎ[nõ;lwH*ƒ­…íº*!шê¾ ÛmÞ©gRƒ¹w„ýsØ®¨%ö(“Hl—Ø.±]b»Äv‰íÛíÇv=~F5œÇY7Ll—Øn)lç ‡±‰Ã¸ìf‡—c»ö\wså)oªvrçYÔ£½ÁvQ§0i1˜ä_t;þ]äûVl×åbÌ“û»›]¶‘ó˜G×ÇvgÄ_ƒí ŽY/Ží†ž^ÛÑ{ÛõÁ±8‚ÍW)°{ÛÈjèì µÛ”jË{š+E^ŒÚuUTcºÃ\=á—¶ëo]·<2÷˯ÈöÅÙ&ݹ6»§¶RIí’Ú%µKj—Ô.©]R»)µkñSê9i~Z–"Ô.©ÝRÔ®~˜R·*õìmCj×íÄ.¿"[Ÿ«€E%¦ xQƒçRjçZ‡|í° k;eWFœ8ÿØ©Àç ÝÏ BŠ—©8€„vC3öèâÐî¤ø  ÝXÁ1ë•¡ÝÌÓ«A»“zÏA»?ƒ3Õuœç«ѽÐÎ ?^6£8*”KYˆÙýQÕüWt‡zÅ/bvÿxG1|ÇEž;ÈÝÊìþŒh¥ÔÃòeš=d“Ù%³Kf—Ì.™]2»½Ìî'~*0yÌwwZ83í’Ù-Äì¶S¹( 2í~ìHʵÍ(~ž+ZOsÚ¤u£ w–µ+•×ÐÚFª1+†J»ú(´ëƒ¶sØ\kB»)xt}hwFü5Ðn à˜õâÐnèé¡Ý½§¡]\YË|•»÷‚¬(=ª’7ØîT] Ûmª Ñ`‡zù2l×ߺ†q4{§A?‡í6e^JÙÆM²®]b»Äv‰íÛ%¶KlwÛµøiX÷G£Þì@-±]b»¥°]ý0ë ™œ†Ø®ÛÁSÙEØ®=¥0Ɔr‰{{Èz}yA¶*À6…[q¿oö»]Ý{ÛµA] ó©8—Ävs3ðèúØîŒøk°Ý@Á1ëűÝÐÓ b»3zOc»mðŒ2_¥êB|+¶ ¥¸½ÁvǤ¯…íºªvtT·å;ïÂvý­£¨à<’»Ë³íÚˆQ}ÏW¢‰íÛ%¶Kl—Ø.±]b»ýØ®ÇYb¥Ó8(žØ.±ÝRØ®~˜ªõì­Cl·Ù=•f¾Ûa¿•ê(4ÝB«¿³‰,Æ£E¤Â¯±õ-ukÉ1Î` ¾¥Vþ(¶ëâD˜G½xÿ±Ã’ØnÆc]Û ¶(8f½8¶zzAlwFïil××Ö±<æ«ÔË*W^‘Õò0xwI¶K0ˆaÓ슮…íºª(¨²#¸Ñwa»úÖ\ŠÃ4’W;¤Ïa»>"(‡Ð\”lG‘Ø.±]b»Äv‰íÛÀv=~rÝ"ÉŽ?µ–Ll—ØnlG¢8ÄvÝŽŸ6øa»ú\ÕR£Í6ªÑŠßÝyIPP`æï¸] M.¢v;U^í$Am72W†ßv’kÞñÿÇÎô“'¹:b}aÀ2WåŽò$—'¹<ÉåI.Ory’;t’«ñ“Kýœç»;"“<ÉåIn©“? 
{÷èâÚWö^gÑÄ¢‰ h"?—æÁ)¢MÅήo]W¯è¦Ðµíÿ5â¾'oB‹“Ï4!ygð1é|­ªvDÏîM§^>Gy_œÓ¢‰þ2±ÑùibDü54ÑPpÎzršhFzBšÑ;LÅy$3NÞŸ¥Üt®%úõ‘ø·ºKïwBÐIÕ¨v@~ð¯I€XúOʽ [^ùñ#£‹a?ˆ,«ÛÄ¢‰©h"?—A¬Ê$Mš¨v”àjš(× ÕÞø ḳ6ëf‚üy¤çÕ²$L4í )­#m]·‹Ë3&rꉓ„‹&zËÄFD秉ñ×ÐDCÁ9ëÉi¢é ibDï0Mìαsc·»9o"”6B;Ø›¨$:Õ§þwKsÏ®>khWâ}ÙáoˆÝïZ”ô‹è¼ý¶·ƒOõXˆÆ¤¯LcÑÄ¢‰©h"?—¦Š‚Mš¨vò6ï\DZ÷F„”­7~Ì4ìÞ“N€Ÿ»M°Õ.ÁÔ~½T;{&ŠS@‹^-ˆÝnµ®ë¯&FÄ_ ç¬'‡‰f¤'„‰½Ã0Q“aˆõg)¿÷ “ÈFéè Ó9©ÊsÁDUÅLÉ¢¯žA~ &ê] º|{®wÝîÑÑ:Õòw;£ÕmbÁÄT0ae‘n%i85a¢Ú©ÐÕ0aõSF‰þø±R›õ^˜0P¥ƒnžGpF®9¨Ú=Ýmbg :¨³ÛÉJÂî.Ÿ&FÄ_C ç¬'§‰f¤'¤‰½Ã4Q—?Êœ»é½%lÎGIØ» ¥o¤Êd4QUi „è«—?ýwÓÄ©èèÛqŒÛi¢x¤¤¥ |_Yðꄽhb*šð}k"¶:U;Š«{×Õë:äùX¼7~ÌSÒ;iB6aOG4Q`°÷~‰Ú„t¶÷Ë÷ê)ýÚûåLt<ž|¿d\zKr_¾gYï—õ~™àý[*háâívFÅ.Ï1—¤Í×EdÐö‘Ÿj—>Xºîý[E úü~‰²’´”_sÞQZøÅŸ-òqJ\f´õµª÷¢Ñù¿Vˆ¿ækUCÁ9ëÉ¿V5#=áת½Ã_«ÎÌR”nî*TzaËÁתsR?¬ÇÿSš8§Þ¬9ê©èús4qJ½1좉ESÐD^ø c¯yµcN×ÓG°®LÝîÖ¯UäùG³˜´•š¸e?¤9ЫEø“0Q¸ôÅâ:HÛ[%¶":=L ‰¿&Z ÎYÏ íHÏCzGaâåÜ ÏýYêý€Î-õÇ·R™í ö.¸]máesmMìªDKk×/ÔûomM¼¢c–ì‹èˆ>W1°zôü·ù7é+óX0±`b&˜(Ïe º©4·&v;q¾&êu9%wÑÞø ò[k|Pͤ€Ïï(#„¹}üf·KühÅÀÝ©$AúB'Z4Ñ[&6":?MŒˆ¿†& ÎYONÍHOH#z‡i¢:Wjõ~‰¿yk"»±ƒ­‰sRa.š(ª"å¿Wî«÷ø­´¼WtXõ‹èDz«ý{;MTì¬a}eäë Ó¢‰©hJ%¾(Gˆ¬IÕÎäêƒNõºù~{ã'èÖŠi€ðƒ¯UXFp¦ ôöj—Ü¥‰ê”…©/Žitê.Ÿ&FÄ_C ç¬'§‰f¤'¤‰½Ã4Q‹·›ì¼ìøÞ½‰¼dÛr(h¢JPè5›~IÉö&võ¥-Nê«Wý­ƒNû]»uöv»K¥šR#ÐS–ípÑÄ¢‰©hëÞD*g˜š4Q턯NË«×Õr6±³7QíPìÞ½ :  ª#ØUÚE¤Š]~»Ä³4QÅ‘F–×§ù^{Ýeb#¢óÓĈøkh¢¡àœõä4ÑŒô„41¢w˜&NÍRDv/MHžïh¢JàdÉì ©“• |©÷àvÉÀ—Ý• <·¾··ÓDõ˜Ð;œSílu3Z41MÐ&‡9šE>ªR\]2°\WØ­4|îŒz__O›'<Èšà<~I¨Ý¡ÚУω“•5Ñ_$6":?KŒˆ¿†% ÎYOÎÍHOÈ#z‡YâÜ,å7 Tßv&NIÍ+ιXâœzþ­‚'£Ï <¥ !VÁÀÅS±—l„”Wò¬M–(vh—§`×ëæ÷FÊ÷Û?!w²å-!tFÉ#˜!ã¦Òb‡ÏîL§,y`ur°«¯òãýeb#¢óÓĈøkh¢¡àœõä4ÑŒô„41¢w˜&ªs+®¸?K©ÂÍïÝRc¶gwK4´ý–t®‚NU•@ŠÔî7¾Û¥ÅÓÿjš¨w]¾ŒbêG'¯íŸ£‰êÑ’fžè+˿ܢ‰E3ÑD)¹êÊ©ƒ]Ksë[ÖÂE4‘¯+è.ὑãb7Ò„Ë–#áé`oBËÂö ]ì2ùУ4Qœ*zò/Ä)¾ X4q°LlDt~š M4œ³žœ&š‘ž&FôÓDuÎП¥8éÍ4QV܂dz½–ž÷4´]ªë\4QU)鯃›@7MÔ»6PÄèGGãÁ¬‰âÑÀ:çœv»·ç‹&ML@ºIpjç`W;|kÆuM”ëq~ýHgü»D÷fM8pØAE'«sP¶Òj÷¡ÆÇ­4Qœæ7¸©!.ÞÊ.š8X&6":?MŒˆ¿†& ÎYONÍHOH#z‡iâÌ,•gÚ›s°6t<8é´K0æÎç™ÝŽ&;éTUsòøB½Ùÿ±wvIÒ£¸Þ‘ý‚–0+èUœýß«'²»ÒØC|IOÏÅL(Ì›*#ô¾‹&ʯ¦˜Â+yDãçh¢Œ¨)J¤¶²W Ü4±ibšˆ¹>+ ×÷&ŠqœMù¹"FâÖüÉ…3äÎÖué0$2}OéünÀ­ÛâÅ.س{ePåfmqºëöÓÄŠG×§‰ñsh¢¢ Ïzqš¨zzAšÑ;Leð¨1%jG©È÷¶®S¥CÕ.h¢HH¸Qqã´ÓÅ*:eU)P„¶z³/£‰â ÀMïøÿ|po¢Œ(‚~ ŒuïMlšXŠ&R©¨Ä1±Ui¢Øáü{©Ô‡kÆ=3ÐpçÞDþVe¬'Ìg°Hl”#Év)Ò³4QÄå–72»ÅŽuÓD3M¬xt}š?‡&* ú¬§‰ª§¤‰½Ã4QWÇh‡ÐÜWîVšHª‡^ÞÂî’ªÖ¢‰¢Ê¡Q(~ ^í»h¢üêDêÏo{ÇŽ&|Äè?È"6•¹ìú°›&–¢ ;„üÏ¡êõa‹ñôŠNù¹`è!óÇíÒ­4¡x ©Â»ú°Ñ•ù ÉDUY_~ì냷°{Å%ÝÝ&êibÝ£‹ÓÄ ø 4QWÐg½2M´<½M ꣉Þ(eF7÷®‹G}GR!€-DÝê#|MôzТ‰ŸEL<×j*#…} {ÓÄ:4QÞKqHS«ÔtúÛŽ^výfÐÄÏs5qž@ù#ž¢ã­{tä³Tdïi"÷ME$K†U¥Å.¼kãz#M”AÉjÇ|ÿkÇ»¦S3M¬xt}š?‡&* ú¬§‰ª§¤‰½Ã4QgMo6ÂG©»oa“ÒáÙMôIE^‹&Š*!T ¨ü]4qz‡S@h{Çóçh"H¾ü†ZÇÄ¿í^vÓĦ‰hò7ÿ\ÑÉR•&ŠkšMù¹h‰¬ÖIþo;OäïÜ›¹¬êÅÞ–™™b=aÏvÆŸ¥‰".²$¢¦8ŠÀ›&ZibÅ£ëÓĈø94QQÐg½8MT=½ MŒè¦‰<8‰C;J™Ü]!–BŒV‰ö ¢›R×¢‰¢Š A[=}U'ìŸ_­`Ô^É9=GeÄ”B@l+‹»B즉µh ýýV¥‰lçgÓD~®a>ìÔšÙ®Óøæ~É#Œðûõ…|û"Ɉ\UšíÈ¥‰.q¯×·6M\¤‰®O#âçÐDEAŸõâ4Qõô‚41¢w˜&º¢”¼wo"zÄ W{}RW;éÔ§>~MtyG_:GÝN=Ê4¼”fÜ4±ibš CÔ<…N)Vi¢ØÉKÝÓI4áÏ€ ÌÒ˜?’ %ÝÚ [|u1¹¢ sìQyCC©Ûù¾ÚúÒ¡ƒ|ÛúÒá òäúâ#úïáÆWÆb÷ÚÕv¯/{}Y`}áã6ǹ¾¾d;ôUhöú’Ÿ››©ÖO,e;a½s}œRß/0œSÉäNjHõTÞ±ÖŸ«ºÄ‘Øþ\ÕúQñèúŸ«FÄÏù\UQÐg½øçªª§ü\5¢wøsUW”b¼ùsáa«ú„rX &úÔ§/;HÛåÁ?Vu)³ÝÎhÃÄj0!!%H)¦Lä­ïLŸ’{)¥ØøØ›íØÍ×ò"K„‹­oÉ3üŸÆAZ)3£,‘5Ђ5ÅåkÄ›%ZIbÅ£ë³Äˆø9,QQÐg½8KT=½ KŒèf <1ÄÆqÿlGo*:Í-@~Àtí?—ji±­ïõìÿÇwÑDùÕ”{­Ç¶wðåòÎí4‘G”@!ˆ¶•í­ïMKÑ„x–ý ¤ßÍ}ÿúçû+ 6¹dàÏsÍØ'f+îååzg;#’ÃïOç÷ë‹æª%iÿ¶Syö mT\[ kŠx骶iâ"M¬xt}š?‡&* ú¬§‰ª§¤‰½Ã4Q'‘V°?í ÜÞÎÈsÅ‹½‰"ƒ§*üÔ¤kÑÄ©>kä˧Ýï+ä6M”_-l–>X,…Ü›È#jð_ ØVf¸ ošXŠ&Ô³t1(õ’ÅÎyx6MäçFÿ¹Ø(ntÚ…[÷&ðpª2ºØ›ˆy¦ 4v!‹#âe•"ABûÔ´Xò¢Ê™Ñ0´ÕK ß…ùW[ʵŹécx°f`î< ÛÊÌdãÄÆ‰•pÂÀ|Î_¤Ž§ÝËæÀ$œðçRˆ>}¸¾ÀäñéÖî¨Hô'¬Lujî£XÑnù(Nôˆ3ÑÝШ™'V<º>NŒˆŸƒ}Ö‹ãDÕÓ âĈÞaœèŠRÊé^œP80áNôI‹uêRoöe8‘½ãipã¶fñ">y»K™ð>ë´qb1œÌ­‡!¥N¸ÎÇ AÅ[qÏí<Õ¿ó¬“çÃy’Â[š€gz 
q—×”f»dñÙä=â|™Ù›Í4±æÑåibHüš¨)è³^›&êž^&†ôŽÒD_”›7'$Ê!QßÓD§Ô´ÖY§>õ„ßus¢Ó;ñ9šèS&{sbÓÄR4áï¥P¨ø»þÃ_ÿ|…"¿÷ŸCå¹–4ùBј?ygÒìFš`;Pãû£Ny¦Kd•êi¢ÓŽ8>JyPDÿ7iSâËÓ¦‰‹4±âÑõibDüš¨(è³^œ&ªž^&FôÓDœ04Ž:ýˆL÷ÞÃöEö0Ž4Ñ%•Þ´løŸÒDQÅùànø@ýïš!6MœÞI€ÚÞaæçh¢Œ%ÅúŽ»°kÄnšXŠ&à!Ê¥«íQO;ˆ³N”çæNxÁZ3[²P¸ó¨ßwßÌX8ŠÕt±{SæãV˜(ƒ&ÿ?q—ˆmf‰®#âçÀDEAŸõâ0Qõô‚01¢w&º¢TÒxï5l‹‡gŠ0Q$˜9KÐR»†]TQ€Ûê-ÅâD15½C¯ o‡‰2";G×»;žvøòÖm˜Ø0±Là!1ù˜~ßHúëŸï¯x:½;jmüÿü{|¦o„‰äÀ¹Gß{š ŸÁŒùÊzýcU¶#Cz”&ò b-rSsÜ[Í4±âÑõibDüš¨(è³^œ&ªž^&FôÓDW”»·¨S":’^ÑD—ÔD¸Mt©ÓéϦ‰ïäöÏÑD—2”ÝpbÓÄR4A‡EŽ’ê4Qìfu*ÏÍ5È×ÅN;T¾ó S8˜QÓû±Ày†(õMÈbÞÝ ¼‘&Ê îD h‹c‹›&ZibÅ£ëÓĈø94QQÐg½8MT=½ MŒè¦‰2x.ºY/¹qÚi¼¹áÉA1]ÐD‘à«0jhK¼VÉ¢Jr{9Pø²KØç¯&„Fùß»ð\‰ØsDMãÊüÞ4±ib%šàC4$‹Q«—°O;Ù5Ês ZqO”â­{¼0\]–3q¬ƒ>íðáKØeÐ\AÛ´-NvÉvšXñèú41"~MTôY/NUO/H#z‡iâ<·iþ „*Ü{Ò‰,Àb;¥Êb'Šª”Àê¥N»×þC_AåW›ãDãäi¼6‘GŒH*Üþ»Exi&¼ibÓÄ4á*-i´`õkÅNŒgÓ„þúcîZŸ?ÅøÖ ±J¿ë‹ ±9]¶|[=Ö‹AŸv áQœèÇ'ÚybÅ£ëãĈø98QQÐg½8NT=½ NŒèƉ¾(e÷Öt"އ^´¯ëS*°ØI§S½¥¤ÜV¯ðe%ú¼£îMä–ÆŽ¢Ìˆ7MlšX‰&b~S„¨uN/v>µM‘ã!Ž1$Æ8·s«"·rŸŠG×O‘GÄÏI‘+ ú¬O‘«ž^0EÑ;œ"—ÁÑ<>J;JáÍçw òö…ãý‘ä@(Ôó¨lÉžöEQ²ÆiÍb‡/õ@v´¿˜Æ®íGÄωö}Ö‹Gûª§Œö#z‡£ý9¸¦øI”r‘÷V’9’\}É,¤€Õ‚lçΤÙ;þ\ÖmW¹ÝÙüd ñÊUyYŒþgmK\l+º¨R² ÚVÿz]å+>å_þ~ðF}ðš\Qæ¿[6•¡„ýñhï(Ëcߎº”Q€}a;ZéÛ‘¿—‰|Ý àÖ“ˆ¢Âüærüÿü{|¾óR40ŠŠéý®=‚gæ¢êà•íØäÑŠ­§8²êÍ)~ìh·¦nfÔ®^#âç€WEAŸõâàUõô‚à5¢w¼Êàž„)`;J1ý­©[$8Ô } 5ÑZ8QT)‰¶Õ }Wÿ‡ïÄhõú&?vüÜVtÑ'‡€@S™Þ5–6N,…p¨ÏÈåP«8Qìä¥rÐ$œÈÏA¨µ³Ý­g ø$å”Þ÷@ôS2jÄ l§¯ÇlŸ ‰.q›&šibÅ£ëÓĈø94QQÐg½8MT=½ MŒè¦‰¾(%z+M Á‘ð¢bk§Ô¸Vkê>õªßU±µÏ;¯uo§‰e>Å6MlšXŠ&¨¼—Q§‰²Ç|4G.â0W:NMqþÝL~*]?G?'G®(è³^È3_KÊ¡úí(Û áìn?å¹–ëQ6\•í’…;Ï:9JRHñÒUŸKµÅ6'zÔ[/É>ïØsE–ú”ÑËÇÝ'À Îkœxì3j¼¿nÇø,N”Aa|ek‹‹‚'ZybÅ£ëãĈø98QQÐg½8NT=½ NŒèƉ©¾ ¯E}ê¿í&v—w¬ë”GôŸ!YS™üãRǦ‰Mÿ{šð÷’-izS¸ï¯½¿lüRã`MÄL )_»K­ù“CÀ4AéLW{¹IƒCMUi±s8{”&Ê  )¶ÅÑKy¹MibÅ£ëÓĈø94QQÐg½8MT=½ MŒè¦‰sðB#Y:íHî¥ GÔ+š8%$ø‰Tµµh¢¨’¼ÑŽmõò{èϦ‰ò«L{§wìAšÈ#æúí#·,Ý4±ib%šH9Kþ Ö;b»À2ýj^~.²Ø¸ïvÚ…x/Mq¸h9aù³AÀ°.4Û é³0Ñ%NiÃD3K¬xt}˜?&* ú¬‡‰ª§„‰½Ã0Ñ¥ìæƒN‡èKt)°XØ>õßVÔ©Ë;Id‰¢Œ!7^n*S€} {³ÄR,aG¥˜ƒJ•%ŠÒôkù¹ÑX4`cþ¸]â;;N¸[ Ʀoi‚‚Oá$†XÇžb§I½…Ý%.ÂË×–MïÓÄšG—§‰!ñSh¢¦ Ïzmš¨{z=šÒ;J}Q on`Çï5^”Óí“J–‰Nõ¿›­þÑ8ÑçרwãDŸ2•}ÐiãÄJ8áï¥C.³J 'N»À³¯M”ç¢æˆù£åvÒ8ar€GVxm‚àÈͰ)T·GO;y”&|P Î…I›â0Ý ìšibÅ£ëÓĈø94QQÐg½8MT=½ MŒè¦‰®(xïÞ'>D/j:ò \Õ¤òZ›}ê_Kü}M”_íH1¶½CÖt:GTH?ø»É>è´ib-šÈ Õ‚RÐ*Md;Ii:Mä‹ÏN3ù¿­ùã6¿?WMÝœÈ)¼o‡Mè3<ºÔ¿;wè£4Q•€Xï£yÚ1ìKØÍ4±âÑõibDüš¨(è³^œ&ªž^&FôÓD_”Jt/Mø0éêöTd`û@ª­umâT¥¹±µÕËï}ô?›&º¼£¯Šî¦‰<¢'Ä š(Ê,ð¦‰M+Ñ[®×T¥‰b<û¨Synt–iâ;íäÖkî…ÃŒbzqÂ%øT'õÆRsH@~'Ê ¢ôfé·8NûâD3O¬xt}œ?'* ú¬ljª§ĉ½Ã8qž¢¡¶£”èÝ›rÐåæD‘ ÒGR¯…§zSød­RJß…åWçºIõ›?vAžÃ‰<"…$,(³—›E'6N,€YhÀª7'N;:'òsc*=9«ó§ØiÐqBYŽ,ˆ½Ç‰>©‹5HýQïÿQn«Oô]e>ʯN(ùÐÝÞI~¯òýF¾x4•ň/]÷³˜%FùàRj,0ŒD)Í_`˜„!µ¢¶Ûa¸ójjô,0Æx¹ÀxB ISCªçœ’ž=LÛ%.ÙÞþn~ˆ¨xtýïU#âç|¯ª(è³^ü{UÕÓ ~¯Ñ;ü½ª+Jã½…>"pñ½ªGª/ ¼Nô¨ç¯Ã‰ï˜<‰Ê0mœØ8±N8 ‹z68ávürálNDëCcþDâ7-ófâ„Ü­ÕÉê=NHžê¹;u½(I±cz'ºÄiÜ•>šybÅ£ëãĈø98QQÐg½8NT=½ NŒèƉ®(ïÅ ÎÃ\4Hí”J‹UúèS¿«p`Ÿw Ä ‘ÐXÉbS™€î©'– ɷM¢§êUœÈvôz°dNäçFÈ-R©5T-ɧiñÀ|ý‚&4ÏàÜ ¤ñá@K b}”&NqISãâàiÇ{s¢™&V<º>MŒˆŸC}Ö‹ÓDÕÓ ÒĈÞašÈƒ+!ÆÄí(%`·Ò„$ÍŸ².h¢Oª,¶9ѧ޾ls¢Ë;Êé9šèRf¶ËošXŠ&ôPÆ Q7‚ûëŸï¯æî 6›&òs‰-µÓ`·£tg¥€x$ ù=N”“PA›‡i³ÚÃwóò ÿÏÞÙôÜrÛvü«xç¬")¾-¼ë&«(ЬºH·½@œ¶ ß¾’æ¹é‰ïé(š«>‚wïÑøÌPüé…´ )Â@Sœ¢¯ÂÍ<±âÑùqbDü98QQÐg=9NT==!NŒèƉ®(EvíY§¨°¹•úè’Ãd…ûÔ¿[áÀ.ï0à}8ѣ̭͉…SáDz/=åÐb1Vq"Û©ñéwót“BÊ¿E[ß;^zÖ)Í.òÝ»ç4aù Žä õÛÓÅŽèÞ£NyP'FDn‹ãUé£&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ¥$^[8·hGGú¤ÊdGºÔë“Z¿jšèóßX†¼G™€E‹&f¢ ÛrïaÊm…ª4‘í‚àé4Qÿ·? ^@´¥øãÁ=lÏ_pîêÜSìHî=êÔ%NÂТ‰ƒ4±âÑùibDü94QQÐg=9MT==!MŒè¦‰¾(ep1MÈGe»”¦‡ &úÔÇ7ÛšèòŽÑUÒˆ1¶”QÊÆÂ‚‰3ÁDz/•Œ´~ »ØI8½§Qú] L ùý¤DŸàÒ*ä›"Øs–ˆ!Àù´U½\ún—|u'K”A‚c½Çntsj%‰5NÏCâOa‰š‚>ë¹Y¢îéùXbHï(K샣Ö{LØ=ÛÜ=‘%4ÀŽ9í Q"·•ÒdÇœvU1¥õ"ÛvÞŠ%ö§f®Xþ°c½%ʈ˜^:fh*K‰Êb‰Å3±D„ü^š™Ö;;ðnM‘{Ä!R\)r+÷©xtþyDü9)rEAŸõä)rÕÓ¦È#z‡Sä¾(¥py"<:¼“$ˆ„”ª°Qmå(Ûqâ :yå(ÿ®‚#a½ŽEÑÉÄWÖ‡ (|䪗¥š=©±ú‹âD—zïuΩË;0/lj.e¨«B쉩pÓ—oC@½Áo±‹o=½Ó'ŽpáD3O¬xt~œNTôYOŽUOOˆ#z‡q¢/J™\ÜE7óƒ. 
‘¶ÜÜ•¼ž­d»è|ïâÑ.NHxS\®¹¢}ë3®xtþh?"þœh_QÐg=y´¯zzÂh?¢w8Ú—ÁSpÔí(EpñÍ/)bG{‰`fÕ; »]xˆö'-¥ßU ÎÜp•(:_Y®· ÁIáÈU/KMøås-õ©·7Û‹.O ÒO·½yùâQ1FˆöÂßpíE¯Å£©bšãsûæúû[ì——s/ʼn2(%7Ôëq|<„.œhæ‰Î#âÏÁ‰Š‚>ëÉq¢êé qbDï0NtE)ººg¦<™Žö¢9åМæoàD¶‹õROÂ‰Êø¿ýùø,Ž—îE§t,1SDíuÕ—R&;ÚZT9hã"؇zy¯‚àÞ!WyÁ;~'NäU¥©L€×^ô‰©p"½—)hzJÆ©5Á$;Ô &ÍS¡õýD° 'æMX¿;˜^˜Ò¬‘k—7„¦¤Ánå®".½7ìmqWÉvB]ñèüÜ5"þîª(賞œ»ªžž»FôsW_”ò‹» n廲TQoKeÐÙ`"©RȽ-^PÏön0ÑáQ¸&˜ÒŸ„ÚÊ lð[01L° ‹{h­V%»Ç…µÓ`‚]‚µ¾W£+ËKØŸÓ„l!`¯—Ìvd¬t+Mtˆ‹¹㢉VšXñèü41"þš¨(賞œ&ªžž&FôÓDW”zZõDšÈÍl<ÚÇã\åÀ;Õ¿Y¯Ò>ï0ßH]ÊœhÑÄ¢‰™h"½—‰Òó‰ÞúÖD²79ývߢ8IëûQ@¾²8…ÍÒ·ø¼SiÔü+­—êÜíýV˜(ƒ¦äíý!nÓ/˜8È+&FÄŸ}Ö“ÃDÕÓÂĈÞa˜Ø7@´v”2¹¶‚GÛù`kb—&%}EªÅ¹`"«ËÝø¤­ÞñÍ`¢Ç;/'\ ]ÊPL,˜˜ &Ò{™¢{.ô U˜Èvá1™? &ÊøÑbh}?)—¿²x‚‰[£<§ Ë_°»p¬'ìÅèÞûê]â"®­‰fšXñèü41"þš¨(賞œ&ªžž&FôÓD_”²‹/˜HžS`¢K)OÖ[¨S=½Ù9§>ïÞ]Êþ®ëÑ‚‰¿ýT¦ÁÂß_ýæ§ÿùþÛo¾þ·ÿû—+H_í?§GùáSŠ®_§€ÿûÿü§o¾>0ø×oú:EìLßö7_ÿÓ§Óe·¥÷?Yþ÷§ï¿úë§ßáÄ?þç_}ÿ÷M3ÍŸŸâÇí«)Á ü]2{0ýoÿ3}:ùaÒ”ð—4ÈöõðˉŽüHÿþyaíoâÇ|³µ£Ñs;¥FѼ•ô^ÉK§wî-£Ô'Nñ˨¦OâogXûê7 ºÁá>t—H 7Öd(#J¾;~8n»6 V+“`Dü9›}Ö“oT==á&ÁˆÞáM‚2xâb"lG)ù2Gë¹q¢îéùpbHï(NtE©âµÅ[ëÉÁ«êé ÁkDï0xíƒ;‡`í(ñÚB7¦º1Ôàß%8@hK˜«?pŸz}rGëWM]Þ1¾‘&òˆÂ€ÚT&d«?𢉩h"½—¦ ¬z7u·{¼³xM¤ßõ\5[ÊÅî±ÞãËU²A¾öÿ¼£cú‚ÀëJqAr+Mqœ›â4>ìÞ.š8H+Ÿ&FÄŸC}Ö“ÓDÕÓÒĈÞaš(ƒ+!oG)Ñpu þ¼)•ho&)ikK}vX襉¢Ê1Ķzs~/šÈOœÒÖô޽¯AÑ2J7•yäU6sÑÄT4›„|}Ó¼Z6s· §÷Ÿ/¿›¿ ¬·Œü°ƒ+Ëf²n’>c<  ÚÒ_&Zc~)v!ÞzǤOéÚ›h¦‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0M”ÁY0¥Úí(ñZš ÇÜrþ`o¢O*ÍUà¥Sý›Õ5ëó£ÝGyDQ$@[™ó:é´hb*š \3©i•&²]>!z6M¤ßM$áˆõÂ0»Ò¥u3qÃrö9MÄôcŒ©ƒ²¸ß{Ò)JeY£-ŽP⢉VšXñèü41"þš¨(賞œ&ªžž&FôÓDW”JFïM9Òq°]©è\0Ñ¥>â{áïôŽÝW%oÑLB½¢Ãn'b &LÌ1 Bôe{ÞßýìýõœGŸ ùw)ä³NÍï'Ùá¥×&|f ó §/8šÕûïvo­uZMï’@[œ?ˆ[0q%V<:?LŒˆ?&* ú¬'‡‰ª§'„‰½Ã0Ñ¥8ðµ½a ~tm¢HÖÐ8r²ÛÁ\-½vU)N3æ êýÍj:uy籋éå4QF þÀ»Ýc+¶E‹&&  Þ$Xú5«ÒD± túµ‰ü»ÑS<­=Ù™ø•[¾©YÀƒƒ´’¾` ¦Ú(‘í¡Ý{m¢ˆ‹1¨µÅ >ôƒY4q&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ¥ìÚKØd°E<:èÔ%õ±ËË4ѧ>Â{ÑDŸw^ÃËi"¨1eø‚2A]4±hb&šÐô^æ*h½ÅNéËâ—æÈ=⌔WŽÜJ~*?GNŽ\QÐg=yŽ\õô„9òˆÞá¹ ΢®±¥áâã;àb|¾"¢›æâ}èV߉Îvñ±]îIkGùws L–V@Ov¢W^’+]Ä(/ðâ¡«^–ª8Ù-ì.õÃ{áD~ê|º,¾à¿'òˆŽݬ©Ìá¡#ˉ…Sà„Ï5•ÌL²“‡šJ§M0€`[ßO²ºts‚:ÒAC#Û8_X¢úEçb—þ<·‚W—8zØ™_àuQW<:?xˆ?¼* ú¬'¯ª§'¯½ÃàÕ¥.>êDœˆÂö&ú”êd'Šª”G€j[}zÊ÷‚‰òÔù|5ZÛ;‚7ÂD¢DŒí·,˜X01LX¾ÜœÞÌôjVa"Û„ÓW«*ãÿüûqŒŽ×¶›H³ÊÁ%l/_0«Hýº¸—XuóI§"ŽÒÜÇÒ( &šYbÅ£óÃĈøs`¢¢ Ïzr˜¨zzB˜Ñ; eðèÛQŠ¢_Û6<¦ˆåt@]Rc˜ìvQÅ9ƒˆmõüDý¯š&ö§6ò—¼£7Þ›(#:±ê Ê”×I§EóЄ¥?Ez/Ñ‘+[¢Ÿí0Ø}9r¯8¦u žüÔ=:yŽ<(þ„¹® Ïzæ¹åéÙräA½c9ro”’pím€ˆa3Æg9ò‡¯5úlÇ'7úø]fII:7]Åpi«RØB EâCW½,q¦^ØÝêõ®a÷z‡:^‹½ÊÖ5셓ᔉC8ÄX}‹]”{q"3Ä 5ÅEàÕE¡™'V<:?NŒˆ?'* ú¬'ljª§'ĉ½Ã8Q'&mG)B»¼Ã3à–ì1¤pïU©Ù.ò8?‹Ëåökõ1?ÛE^¬ÍϸâÑù£ýˆøs¢}EAŸõäѾêé £ýˆÞáh¿îL1´£ãµU®ÝhµãhonŒOfÅßýLª¹‚½x„ù@‰ÆZᢿÙ]z‹·€ HGžrPñ€ÚV:×VôgUiZ‡¶z3}¯µ£üÔÈÒרô†÷­•Ñk…#ÿfÇkíh­Mµv„›kD lÌ/É.œ?¿äY+ÏpÔø~²]ä+¶ZØ€º>»±ž$PÊ$_Ö u¶Ùín޵σjÌçr½)N‰Ã¯VF]ñèüà5"þðª(賞¼ªžž¼FôƒWœ•b;JñÅí…\e39¯"A€¹±)¸Ûœ 'Š*&´Ô¿N”§6Pj{Ç„îÉá/xÇâ4ÁªäQÚÊÐM,š˜Œ&b$o-V%;²p>MX®ÿÚø~²_yo"† £ðs–ÐüýRò:ÕïG;p+K”AE94Î;޶X¢•$V<:?KŒˆ?‡%* ú¬'g‰ª§'d‰½Ã,Ñ¥„¯mêÊB›¸°Ä.Á-Ò+Ru²KE•%dk¬p; þ^,Ñå»sg"¨Œ ¥bGºXb±ÄT,¡›äÒŠTg‰Ý.œ¾ñ7FhÖø~²Ó•,¡›jD8¸‚mù NÔ…V¯’QìðæÓ¿eP‰Q5ÄŠÝcßEibÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;LepCsÃv”‡‹w&`Káþ€&º¤*Ñ\4±«çÜ¢©­ÞðÍh¢<µƒpãd÷îÕûh"hœ/n¶ß:\W°MLE¶ Xа^Ñ)Û©;ŸMéw1D’hÞø~²]¸”&pdN¾±EvÓF±…b‡t/M”AÅc4i‹c]{Í4±âÑùibDü94QQÐg=9MT==!MŒè¦‰®(%*×ÒËært—°HHãë RÝ碉¬ŠÓLˆmõí½h¢Ë;öpãrš(ʈÔ½Yö'X½…MÌEž2è”û }¹Võ»¿“èé{ùw%Åã ­/;ÛÁ•çœ · À@OiBú‚ˆ#Wi¢Ø‰B¸“&ºÄiúüM4ÒÄšG§§‰!ñ§ÐDMAŸõÜ4Q÷ô|41¤w”&ú¢€]¾7Ax°7ñ!Á]_Jsu›èToòV4±?5šÆz'‘;º¯<ì>"+i|áï=,šX41M$¡1·/ËR?ÒD±#“³ËÖßB•PŸ`²û¥·&Xm3ˆžoN$ lNHbC*›¹ù­8Ñ#ΑN4óÄŠGçljñçàDEAŸõä8Qõô„81¢w'º¢TJõ/Æ Ü¢áD‘×ÏD^*<NU1Do«ø^ÅÀ;½ã÷ßGôô}À Ê7•N,œ˜'#!¡a'’]2;'siÇ`-œÈvÏ ñ×¼N6Š‚=§‰Ü~2EÓú—^ìPñVš(ƒ²çÓ²mqëö ibÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;LepOá^ˆR*_ø Å𨥦pï/H5ŸëvQÀfõÃ<»]0~/𨽓°V¨í¸ñâÄ>¢æ–VÖVÆa]œX41M`¾^-èRï]·Û1Ƴi ¥€šKãûIvpm+lÚPüù%l =îR’ZÕIeÂ[û`ïâÈÓÜéK´’ÄŠGçg‰ñç°DEAŸõä,Qõô„,1¢w˜%Êà1_E°v”Š.oÁ„G×&:¥bœ‹%úÔË{]ÂÞŸ:7÷Ðöã}%ʈ(Äúµ‰]YúŽK,–˜‰%(åèÒÿ¦úÎD¶Ó³›M”ß5¥HŽï'Ù±]y 
›tΕrŸÓDÌ_zž‡¥ZÒéÃŽî½6QU†ÄiX;Í4±âÑùibDü94QQÐg=9Mü/{WÛ#I¤?ï¿hñíÞ19¶#Â/-N{·‡œN UMOõLÝö›º{XFˆÿ²¿eÙ…E“3]éÈÄ™‰J@¨»*:ãq¤_ÂYKWÈ&J𳉤Ü[«„õÓVçÝçDÆ6^Q›ˆ@iŽÁA†¨²•‰„^3®|i ½œ:²•‰ÔjÃÍ&`°àÊDÒˆÁ{aÛw+×Ù]·²‰•MTÀ&°±àWùcI:§w'bñ¹LÇu^?,§ÑÌYÒÉ6š ‹é95AikmUi’‹²‰¤Ô)'Kiålç\ÙÊ&zÒÄŒEëg%à§aã¤+gYKWÈ&J𳉤Ü[@£e/å4Í»6¡1¡ïÔÄ8¨TÙ!ì='0}÷0äQ°‰ÔêÀÏv$['˜aG÷ŠÈÐàÊ&V6Q› tj†ü>§$güäl">—<‘É_ÔÊ9ëç\›0RNAOI'Ë#˜”%ãòó=Q-Z ¶‡Ö õÅ÷r $+›èI3­ŸM”€Ÿ†MdŒ“®œMd-]!›(Á[Ì&’rbG¨¬ì¥ç-édµn˜Võ°‰ÁGùkör øýºl"¡rHÖi½ÓGv;¶Ú*$œÁNÖ Ýœ}n6‘*NtDdÖtWMV6±²‰_ŸM¸4²ˆ{p¾ÿF¹XE`Ñ9¯9ýÁÅÃkŽ,%?‹ÖŸ#—€Ÿ&GÎ ']yŽœµt…9r Þây”—‚C•£'­Säê­Sä›8Ï¡¼sù•è$§ýäûZ[ þ4òs¯IN«9+nè&(Dç}©Æ@µº.:1 ½}\tbœuÜ‚t"id7ƒù;÷-pëâÄJ'ª¢Ü/ƒ ƒbÿ N«e %¥V© • Ž´Yé„”'f,Z?(? È ']9ÈZºB:Q‚·˜N´ÊÙ‡.Ôμ‡˜NX×w DÞhi 3ÉY·ìvͨÔÃæ@œ×´.°ŠÃ8cÑú½} øi¼}Á8éʽ}ÖÒzû¼ÅÞ¾UîÈ‘•½”¡yX)؆Èõz{´„GIŽIÙÔ“Gñ¹Æ“åX"˜Šåœq³NéÀ,Ki´½¦ •_k]“Gz«¤é‘}8²µèQÖ —›<ƒ ŒY¯Z'jš<2ªåƒf^¹“䜷S¯NÄçrÿ·@ùÉ«¤?X3˯ÌA ƒ[[…ç‰óC½•;xÕÞ|Ì‹•òÔdt¾Â` .tìʼ§Ô9‹VϼŠÀO¼rÆI×ͼò–®yá-e^#¼Ë)˜·ì³‰ƒ=̼FB¥ºèÄ8ôZ×m¥m«öšÂëØåÖ¢[äÐ AFj¥+¨ŒN„àÑû¸¨Ó;t"ÄM§>(gtÃÎXy‹ù­áĨç<(Bbß<Ì&4`& Z8Ò·—3zQ6‘”k5ÀagÏÚÊ&zÒÄŒEëg%à§aã¤+gYKWÈ&J𳉤œ# gB²—¢0ó&`¯ßw½Ðjœ\r2T ¡.6‘P9 k‚Œþ@}“ß6›H­ö–éä€`éI-Ç&¢FÃñÙäË¥ìå:›ïV6±²‰ Ø„iŒ7ÊYÊHrÚ{;5›ˆÏ%BP.ïµ£„Y‹ø±Á›¸²iz' aâqå´]tp«”Èkmepë™B1OÌX´~:Q~:‘A0Nºr:‘µt…t¢o1HÊ-/{)k`^:á°qªNDVs¨VÍT*£ ½±.x9Xƒê¸èDj5¤[ eë@ç®ÀÙéDÒh)e•‘QXéÄJ'ª¢аsG :Çs+gú«í“7ßž|½ÙqàÝ{û>e6^£²2èŸà!þíÉç·ÛïvׯïÒ—ûátzò~ô‡_ò'_'é/ÓÇ?üi{`|5üBÿ³Í*NYóõmêtœüñúj{êÐ}pòÕõýæâ4=t ¶¨þ %ÉbÆuÇ!¿Û«ÝýŽe÷æÞ¿üÿÚܽ:}Ï~ýß§ÍGÏÏ>ÃgüjÞv·›ËùÓO¸§~{ý’©ÖÝÛÝŽöëÃ×/9Êpgî|ÜÛ4ÃGþÒz‡Ö¨ØßꀲFôu »xŒ¦Ÿ·Aµ}Õ§ñýòî§ì<¶7§ïí?‹_'ÿñ)ÃI¶Ø‘÷sü|öÍÞhß0KŠFãæ·+Ûtz ëLÝ囤ßчûrë7Oÿ_ûǯ4uåý‹ä«ÛÍÕÝîmïúá‹Sõý™ÑçHÃsMÛçgüWÛïïO㱘RP±›œþþ_FJ|Ø~¯¾í4nÕ~dmn6Ïytßï¶w®ôðñ›‡ž”¾cãî âÞ–ÿþ‡ÿ¸Š9î‹î7Ü¡ûÿòÍïýûëÝEì™æ®î®/¶ñÇ?no.®ß0=¿çÏw/ãgéu}Á\”‡Ç›ôÁUò"ñÇ=AüðóãoŸíÓ¿OvçÛ³7g̯øooãw—ö‘÷7›³¤èçÜönßÝ{l—?ÇÙ€_Ö„/?þòjss÷êú>ýzqýú·àþöúâb{ÛÑ~Ãn›Èþ´°ùsoyùêþ€)þ ì«×qL2LœþâŸãÏ7·ÛËí} K6Øq(Ý\ìÎv÷o~î‚ÕZ÷öciÈ$Ý»-Ç{øödsssñæô¤ô'L^>ÄÅŒ31Šn»‘.Înlîï·—7÷'ªž7:pÖ-Âóºslè~s÷·¿¾¼ÝܼJy¹×ßž|ñú*¾›“„lÏ«u©N£`ˆN=D''ë­‡è4‡mËD5j/µ3Z¾„Ä´”UNgMgú:}Ù3/•±hýÓ—%à§™¾Ì ']ùôeÖÒN_–à-ž¾LÊ™¶*åd/…Êͼ±Ç꛾D†à”%È_.·—SËzû¤9ø…à ¬WN‰Ã8cÑú½} øi¼}Á8éʽ}ÖÒzû¼ÅÞ¾U X’½*3éb•o˜|ôx{b!F!}Žrž>7ÉJµBÇËKàXN¯Þ^Æ‹ÖïíKÀOãí3ÆIWîí³–®ÐÛ—à-ööI¹ÓÎæï‰nå¬wk‚i4Åù¡ÃÑlÞZgíã«·öìE9Ru¯‰öìeôü®~ vÎ={Ð8Ï]†£†kªP}eGLG¡G8®»GZÇ-¸)È^ g.Ãbt=0cŒ@qú›…åÌ#8³zû1àÀ¨õ˜¼8Œ3­ßÛ—€ŸÆÛgŒ“®ÜÛg-]¡·/Á[ìíÇy) ³z{0K¯ô{{ ÐÀŒ";y”ä M^’Ÿ‹üaÜ%.™Šå”ŸµÜ1㈜õ™ çò··r@•J¨,± Œžü‘J­väµ"Ù:—+wœ4·J @Æ´ð®¬d uöh=šnöÈsCí ëæQ¬¡EùÄ(p¡Slå=‰bÆ¢õó‰ðÓð‰ ‚qҕ󉬥+ä%x‹ùÄ/…ì·çÝz„ªáèÑÃ'‚W†|9Ê$g;k”ñ‰ø\Nã­öä&9T8+Ÿ`¶À‰ˆQ¡×Tá’¯‹O$TÖx'Ìh¶­´GÆ'¸ÕÁ*ÐF¶ŽEXŽOøXäBSð INiXùÄÊ'*ã.Vuû¸îÖ;†åŒ™!Â8òŽ”´Ï$É‘³à±£†Ð{ð—=Æ+(þ+ e9hȱBŽ2ú&X Xt²œÃAG6q€ÎX[Þ{#ë zБMŽO&Èÿ ÛZ¹ÎBTF§ 3Xt¤eÛvLÔ ¶ vFkkT~·v”Ó¡sÕuF§Ú9F§Q‡è ¢Nˆ9+*,çô ~«•¨Ôê˜*«üµˆ­œ‚e7ÖŒn½ßH伋Ö?5R~š©‘ ‚qÒ•Od-]áÔH Þâ©‘Q^ ÉÎ;5ØX×752ª«ŒïBO渮Kig–ãûQ£åÿŒÑ"2«Ð­|åûUñ}î˜ìN8ý}<•öõ;˜bÝž©ù~|ntÇ^‰ˆ4ÎzÁQãÙ´ÔÃ÷ù[£Mœv$)Ëé0ˆ“j¡v¨H¼uB‰ÇVÎÙaÄI(^Ä31NPå`˜R•¢2Æ h©¥,§Í0ó¢¬ÔÅb•èE¥wƒ”’¨”cò³×­»$/m•z/˜‘ÁÙÎqÇ•—&9‹VÏK‹ÀOÂKsÆI×ÍKó–®—á-å¥ã¼”›y 0„æ0/Õ]/mQs wî¨xij5wP$²u,ÇK“FOšÉ9†3 V^ºòÒšxiì˜Öh&[&»äT÷œÜ4¼4=½1N£4€,v¯Ôžž—¢kÀ{B¸ì 0Á1Pùõ¼$Çonк¥¶"‡¡ÔéA —Ú JurVÊÍ;µ$Ç|Qâ””zkÈ+3ë•8IqÆ¢õ§ðÓ§ ‚qÒ•§¬¥+$N%x‹‰SRB0e/HÏJœc :Ýpv:‘4‚wÜ£dd¦Ã·W:±Ò‰脉…oX¥Ÿ¥I:\'¢ñ¹ñV¢  Œ+ 3Ò ã¢ÀÎõ0€Æ…*.Pd‘B ØnÑJ,ãÀ9kW:!剋ÖO'JÀOC'2ÆIWN'²–®N”à-¦£¼”Ÿù¸‡œ&`ß¶®qP©‹NŒBôq•e :[Ñg§£V:±Ò‰ªèwL¯ƒ¶öñe-_¿Ó9 í¤óщŒþwóúñ¥²Ò 1Ì&tÁ¥‚“²ÜÍN¿v€‰èɇ`eôæúßx€áVƒ6žÜët¨â†5:EŽ‚Œ¬;¡»˜5ÀÔ`(ÄûÇÀæL+שV4Q€‰Ïå|?ÖÊ (ç¼›sù[#5<Ž9ÆŽ0“Ià +l§Å˜L.|ñÔpüÇpë„UÏLDÆ¢õOX•€ŸfÂ*ƒ`œtåVYKW8aU‚·xÂ*)·hùÿ²—B7ï9Dε¯}Ï„Õ(¨TÛ9ÄqèÝ‘m§M­öqÏ•­c;U"fçQ£æð †DdZÙµîÊ'jãLÚ»d>å:÷kLÆ' uè„©‚VÎÌ9aÅã¸áÁ¼ïã¼#ÍŽH„êm‡úäÎ ÕLãÃÐêxõ£¨­¢aec‚¬Ô¢GO!HJ­1~P£(w„)«D¥d†È1ZT¹,0û“ZêcáÖAç<•r ÎjCЂR–³0¨#©þ¥Ì´É—wjåP-[•')uÈ\^Ëà¬_·oˆ4'cÑúÙp øiØpÁ8éÊÙpÖÒ²á¼Ål8)œº.ÔYšwû†ÓMß-ÌãúÊ.‡Ù£N,¬Œ>¨##Ô¨¤CéZk°àÙÒ„ Èa›z’3(¾’á• 
×@†¹cR,¿ _”'ÉM~¶”R±Z¯ˆ•3 sž-Õ F¾ïsa~Ÿ#ùË_cÓÊuÏ}ä(ŒTÍ”ÆúÛ‡$¥±Р›= ‰Jãí06Ö”²Œ{|ÞwVÞ•šx°FœÑÕà•7õ$Ä‹ÖÏ›JÀOÛ2ÆIWΛ²–®7•à-æMI9»rj€— 3_@t,i×CœZ¨ä¼0µoReÄ)¡bÞ  ÷ÇuËFÛjÔ¤••­«f'NIc¼XSØ ›ä,áJœVâTqâŽi ƒ,qJrÊãÔÄ)>—Y[ ï¤d­Ç9OÑbÜNHa'o:ô^òWý´r: [Z“ª™ºy°0‹±^PŠ Bgb&§T¸û‘f G#ƒ‚R–ï%NIiPÉà<®ÕLÅŒ8cÑú‰S øiˆSÁ8éʉSÖÒ§¼ÅÄ)*GÅ‘-­ì^Nã¬ÄÉ5 CqJŒ¬ U#ÖEœF¡7xdÄ)µÐi¡`+§<Ε4Æ9sðÞˆÖj¦+qª‹8¹&~¢Ío¿LrÊMNœâs¹©ü¹aXŽüœÛ/91jÑSOý!ߘxŸŸÜJrì‘/߀ ZƒP0É[ç’¶_òȹ‹’Rî¬aЦDP”:PÞ KIIN©A›AËæõäW ½S–S~ОO0bKc˜ò …ÂëIÎв+zI)«„™Œ¶ý¾‡ˆit _lÙ¹o¿;Œqór³ëÐÒËëÛ6Ç?Kò.žGNŠîN^m8T_lnîb“wWœÜ¾êÿô´'p!aOH’#ß™™ÙEòÿÚ¥(­‘S‡»6—¿™ýGžÜ\__pÖÿêD=|v÷úùÿr^u×ræ$w÷­M ÙC ¼Ù5o(JþäéëîTáÿº»:¿~ú~šúØ¿m8_{¶¹|añý³6l>ã "’üiÞß½x¶ñøbóüžl_œ?AîÓOžÛ~r^ÀÆž+ ßß¿÷gmx:h-7TJ°V\‘vëz®È73­Z¢ü4Óã¤+Ÿ–ÈZºÂi‰¼ÅÓI9$w-{)Τg–0¬FÅ"ô¥„wÒšh+G•2K¨HiÒvúpd÷S޲Ò‚…Ì’FgQ(#³nÝ »NLÔ51á› @ƒ Ÿ˜ˆrýäu‘ãsQÇ¢$ –SzÎ:3¨Óvßþø…(OZb8Á£aÄY:Ͷºâè/¬èF9kœrJQTê-€Ò ¬]'9cÜ¢Ä9)u ƒö28ëÕJ¤œ8cÑú©S øi¨SÁ8éÊ©SÖÒR§¼ÅÔ))÷ÁJ“Ø­œõó®è4¦· N‚Сp`¼•¨‹8ET¨A¥Dô¨àȈSjuÜÀ r7DÓ)0;qJ1T¢•#·§•8ÕDœP%Ç‚rÙ78H›S±œvaÎÝšˆgˆUè3Õp¨ëºÞeÞ9„ £5l_"å7CŽTzdSVm«Q9ðj€u<½Ýjtq«è€÷F~=„°NYÕ5eešXU)xC&Ö’Áäa-Þm¬-:—_ºMrlœ1¬Y„ÆÝæàb?CðNÅ‹ µ zvÇËî.X·jùŒEëç{%à§á{ã¤+ç{YKWÈ÷Jðó½Q^Êͼ{ØyhAßµ²%ðqè9Ñ8.>1Î:n¹2ºc9¥Íz§ÌÊ'jã!` V |‚åÜô|"(4ì|Aâqñ¬— 6`=ú^:Á\ÃÆí.NBªtoÓä¶§~ï}"Ä®¯Îw/?ÝÜœüŒí²Íížœ¥ïžü亟þíõóí“Û盳'7·×ß¿ù)x±;??=ùç?þù8ZøÃ8ðñï÷Õ››í§ÛûÍiŒ?ý2òAOø9¿ûóBûÇ•qOø×GOxhÔØG%0~þñO#ò‚yë ßé±ÏH¶ýñ—üÑgéhìÃ+ùù×_òv×4ÍÉÓ§'æd÷‚CÜŽU;Nï~ÉÃþ²¹ÜÞÝl8`ÿ{çÖcÇmäñg‹^5ȺUÚ§]{ØEñÆzÈðÈÁBt1$9Fbø»o‘}$iÎauOwžžÁ—Xÿ®f“üñVW·~¸•ùN!üìË›Ïþëù«¿Ÿp9ÿÍÖºñ×?}qª°±Ä'×ùæé“'ô0>¹†‡¤OÒC•ëü𩆤7úLÂêSÓëžã/7o_ÿ`£§Õô¸DÄœðNB›ÅÞíþñæU©Ÿ–VÂ]ýóÂØ‡^ÛÆS/¿?*·4cöëŸj~õöAéʆôІÎ¡>²±˜jeþõ?ðó]B4Åý¾ýϾ¸ùe8òiÉ6FZTæߨ—û?6RxýôËû6Ÿ¾}tç2WmVþü£U¿Ü<»±Q¸ !]ýôÓG­ïØŽÝð¼ú·aÏØa<8LØ|õÃJ;UZí¿¨߃ëŒß>±‘ÓC Oñ!%¥‡úDåá·á:ÝÜ<%`3¶gûùnoÏF#×/žÿËÜ>ºë+«Õû¿¯_ÙØñéjD]ýík«Týî?_½³qñJþìNÿè èM}ªŸŒ£îܾÊþó÷¿´ ÆÿKZØeJÛñ§¶‡ê(¬9j=îæa|‡e¬e:Ç¿|•õU,Šß~óÓƒg,€×YûcÿýÒ† ‡¿Ûß º^¿«¡~[~1¼·zþÊ^éÓ#·o¬îßIÇÏ¿ÿ•*ãJc½.nYí¿ïçÙó7Ã?¯_¾(uï矿¹\e»SGÿIC|ÇÞòNÉŸ^¾ü¡^^óèÎ=ÿ×…!ÆŸŸÜ~Ö#þðî»×ožÿk¬óÿ÷êêêÍaô÷ïïÞ½yþä‡wÖ—__]]ÿüC…þGWŒ”îò.êþÃ:Ë7ÿ5ÞíOÍjà#¦mV/v]•ÖÚâÜÜ\ ËQEStæ˜yÚ9ßì;Õ¹Ì {NE¦h×i$„ö§ãÔ,ÂQî‚–Su–t¾¢œ<§SN“nã¦àìóÁAb%&ïš]ž–þ•¢û¤’#G…ö¹ªjò…7*Ï!î ×îŠd#¢ý/\/¿ÎÂuCÁ<ëή›‘îpáz‰ÞŠ׳Z)<Õ„®yv€pæìöL©úZ¸ž¥ž"ܯ…ëyÑ9ʵ²ùÂuñöЉ£¯,ãžÆh_¸îkáÅhLx"ñôã+°bHÊk/\—r˶uiÏì·Lc”êÓ ×T>u.Ù}ÚˆXíˆé¢8Q 2ùâ2¦'¼qb#¢ýãÄñëàDCÁ<ëÎq¢éqb‰ÞÅ8Qœ×ëq1º­”ö;Û\ZU•¤0š}õ˜ïÙaµúÔÌDí{Üvx¹û•FJÁIp;ÚÉÑÒ}Œ¾Ñ{£—Š™3±±­_ãÆ9Ñp°Qp éÜ ¹Ü¡ËYoY}ü±T³ «ß¡[˵&¨®sBevÈ›&÷‘!rÉq:R<@ …”±Ý›T» “ÖÖÈÉc…qÄÊEöŽSŽÒ¤'BשdfuŽ1W».{€°8-w’EWÞ/ŒqGĈöNKįN ó¬;§f¤;§%zƒÓèœPdB+ã¶ë0„q8w€pžT”¾¯ª28N)NP⺙ß4ãÑQû‡èGŽ:ËͯzÌR&_+Ûog¼5r«ší{¶ªHZœJ¹ ‚š<20;£Ì-Ê¢}¦xœì}2'€³bTí@¦í%#g[²zd¥™.rœš]œ¶’ØujQP°·-ŽS³Kħ┱fUWCÞWœÜq#¢ýƒÓñë€SCÁ<ëÎÁ©éÁi‰ÞÅàTÛ@=ð[)TÞöæ•„ƒ0œ§Q*«úR‰R_àTU1Jžv[=C¼_àTŸ:iÉ íG']rq¬xL!QF_™}DûÍ+;8õNiвŒ¢í›Wª]¤Õ7°Y¹\r¬Ev> bwªÙ^ œ2 ªœñ̶l¯”Áë`Š@‚I “†É”†O ÔqZìÂ4pʮӲHΑ§¹p›äÔ;Í•K ™ÁJLŽS³KtÙe®âÔb¢)úâl$µ§Št‡áˆöOkKįCk ó¬;§µf¤;¤µ%zÓZu3qT¿•Š·!bå,8<ˆœË‚3Ojêl+cU’2‚¯"ß/Z›Ì—£µê‘¼ƒP£]Üí´Ö­åA£QP9ðÓ¤µj‡Gx%Z+åBöŽU»S9rW½'³œDMgvRÊ`íO”c›ÖŠðW¶Æ»B†,AR ÄŽS³‹: 9¸N% þqœšdšä4:ˆ(¥…LV­Éuj-)Æ‹ÒZq*‘ÑÛð[íBÞs–ºÃðFDû§µ%âסµ†‚yÖÓZ3ÒÒÚ½‹i­:oûh—`Ûµ5¢!¥x†Öª$Egr´ÃÎ.‡¨ªˆ”|õá~ÑZ}jFJ0¡³äxÁµµêQTHÙW&G™“vZÛi­Z£ Dº½—ûñÇØì¢®~š«”K9¨¨×Ø]ʲ%­¥‰c”s´¦ Rr+x8¡e±2öÖÁ¨–ÛþÄeM-W…è}ë`Jt”ìKð£%\²ƒ)é59‡]e)‘ìÌÞÁtÕÁè@ j»ƒ)v9lWê`¬\Œ˜HS› ŠM1lÛÁ¨½ :3¨6FÔ L9;J‹Ý©rÎWU§Ìɾ,_ïóUÞDD#¢ýÏW-¿Î|UCÁ<ëÎ竚‘îp¾j‰ÞÅóUÕ¹XCè,žV»|jƒÖšóUÈCŠçæ«L‚„IùR%uvˆ¶ª/fðzÕb—îYÎúÔÆÙÙ†Yíà’8Q=Ú£hž ,Ñ~ˆvljÞpB—L²œ0»¼þî‚Rn&J™½AºÙáí›kÞ>”‡r+qæ“8ÁÁ>a mOMW»—½Ìt—@±½¥þ`wtMàŽ§Ç‰­ˆv‹Ä¯‚-ó¬ûƉv¤ûÉEz—âÄè\cŠíuÇÑNtÛÍÊq`:ßÚcÀÛÙ¡v±¯Õ‰QU¢€ú*ŒïNŒO-¨}§îhw|àÖ8Q=–µCMà*£ãÍ„;Nì8ÑNXÅT(wÆÛsî?®À Žn]['j¹T¶¨ºÍ¶BÙÿ²!NPÈxBÎàD °‘†æŒÏhGyRÎ>v.3µÂJ»ÚWŒv!_tIdtÊ)ÇöÝ=;Ú\ºƒÓFDûg˜%â×a˜†‚yÖ3L3Ò2̽‹¦:/YT#ø­Tâm&Å8(åÓK"UÇ¡,u”ª}1LUÂÆ"®zŽr¿–DƧ6¼s68ì.¸…·zLQ"8_•)ì¹#v†é‹a¬I-Õ7(4ó»vÖ¾§” 
VìY£óiT½»bÍV•œ¼!q#¢ý“ÓñëSCÁ<ëÎÉ©éÉi‰ÞÅäT§Xþð[)ÒmÉ ,h!g»˜éR9w¶/qTOQüÒìàž]¬WŸ:‡Ìí´´‡èä ’Sñ˜£2bt•åp´þ»“ÓNN=S.W%$LA‚1fWÏ5TÊÅ€Bêœ`ípËuD,»ÞÎ-#Ê E…œKf«]žÈ˜ÞusRÛ—l˜)žS³ xQ†©Nlx2A‡ýªwpÚˆhÿ ³Dü: ÓP0Ϻs†iFºC†Y¢w1ÃTç)'Dð[©CëU†ÑZ,à3«?UBFëéÈ—z|Ê¥ †©ª"¡úêEîÙêOyê’–Ônt$çÞœaªG{"ô•!î«?;ÃôÅ0R&`âöÙªjGiõG¥\- ¨n³]ì–ùR†(|î¶9³pGhµÃ'!ŒwÛÜ<§yÚÚOvZ[•)eÇ©Ù/^‚›ªS ) ûâd¿âÎ7"Ú?7-¿75̳š‘î›–è]ÌM£sŒäÜw¹ñ•*ƒ9=ÃMó¤ÆÎÎUCŽNn‘Qýí} ¿mn£cC+ç†ÜÑ.^›ªÇr¤Þ¹e¸Ú!ìk?;7õÅMZòÔ¤±ÉMÕîø.ƒ•¸©”«‘9fï² ¤˜·¼¦ÛèC)Ò9nRf JÑC³‹iÚÒ´FÍÉ…h¼Ú˜–9Ø¡L<⤮S€’!7´žô`ÒQ_ÔpšÃŠN#ò¤z9ºN1”‚ç´äÉâIgå2¸Nm T6J¶ÖßÛÁ%±ô½SÎ[g>ØåXÛ±ôo´#Ú9–.¿–¶̳îK½H÷†¥ õ.ÃÒ÷ÎmèýVjëå<TNæz/!§Ä¾Ô|bS߯‡¥ïU©ÑMžÐ(ܧìQ‡§†ÀŒâWCËõÞ#†„­-‰ïíŽsqîXºc鯎¥cÅ´ñvYjN^.wWMÃl]³0YÍe€ó,|°Kéh•j >”[:¬Œn{"iÓëÞó)ž¾NÅĪuþØn_ªõ~“^9à‡zÉ~ˆ"Æqt/ NÅ)¬+ƒ+Îþõ¾žçŽˆíœ–ˆ_œ æYwNÍHwNKô.§Ñ9(ú­T ß ‰uøyœF ¼©ÏÃ#å¾À©ª.›O&¨¸_à4F§£?:äràT=&ÛLyoŒ°ƒÓN]S, “­gl2LµcYaJ¹Ö½Äƒ÷ÆùÆWeëf%Ãi†±qIìÌÞŒv!\'ªÓ”„œiÃjÇy_‡qljˆöKįƒ ó¬;ljf¤;ĉ%zãDuž²3Ží·=VÅ0X$ÎàD• ÖÓfð¥Šh_8QTa(ÿ²«C ÷ 'êS—Ü“ª~t"éåp¢z¤ üĘ|lj'ú «˜hM¦ÄÐÆ‰jÇ¢kãD)—‚ZÃç6Ûf[âÆ!¢ÕM=8”[±\2ߌT±#=ºüÓH={ñúÇR Þ¼~1\ÿ¼ôýÖ<ý]Þ– òøäæÝ5^ýÁŒ¾üö»›—×¥‘zzóý¹”öi€þî÷ò_zÒñ׿û·«2¾w|îIBr—«§iksì,þàÀI’Äæy©÷v'Ý»ž“ë4%²æÕ{R³Ã¿~õ­½þ§&È®~üÎ8Ñ‚ú¼~úϬ½.ÿk± ™%#Î’¹¬49 ±ÅòÙÙQ{’¯¯Wïo­êŒ¯q©;9ïŽC™Š{ûËwW¯ŸÕg{tõ·¯O:4$@ ̞ì’.:=Y&Ä âXö˜î¼S#¢ýOO.¿ÎôdCÁ<ëΧ'›‘îpzr‰ÞÅÓ“£sÍ¢ê·R)o{•©õî‘ÎLOΓª±7x¬ÓWÎ)¾j—Qï<ÚS—s|üèË%áQm('Š’\eãÂ+ß½ùagÇWeÇ’YVv4» y}vTdÂÉý~MÁ–솄ðÌä$ÛœUœME\,âe7;Ì—ißìàíŸ&–ˆ_‡& æYwNÍHwHKô.¦‰Y­”Ü>ߺ.M Ö‡ž¡‰YR5H_41O½¦ûEs¢“_p)j–²ãÜÎ;Mì4ÑMpÙÐO†ÃÒÞÎ^íð¨Õ\‰&J¹˜’5íî÷ƒÈ)m{ޏ$ >sŒ8•X2(·—̪]¼¯kS˜(Ns g¶e´ ºÃ„7JlD´˜X"~˜h(˜gÝ9L4#Ý!L,Ñ»&ªsœ)Oh¥tÛ[‰âD1Äó­ýd©§öÿª01O}ºg0QŸš¢ª?:¼ÎµzL ™'¼7>Ê=°ÃÄÀ„ÕKLQO¥ß}üIý-Yèim˜(åJ¤€ Þ÷ƒ6Ý8a(ñÌÊD¶X˜ &ÚMP±Ë’.»Ï©ŠÓrÄ\qr<‘¶ÃÄ™Qb#¢ýÃÄñëÀDCÁ<ëÎa¢éab‰ÞÅ0Qœ+`ðväŽv7† PkËóùÖ^±\Õç7¨Š±³;‰ª*œ€}õé~ÁD}jÑ»±iŒŽ\ðN¢ê1#Ú#ùÊr &v˜è &¬^ZÛÎDÒ†‰b—8¤µaÂÊ•2Åí`ÄÇ-÷9e@(Ÿ£ ©ý):Ù{ª^”&ªÓdÒX|q)î7œºÃÄFDû§‰%âס‰†‚yÖÓD3ÒÒĽ‹i¢:Ïê$ëí`ãS‡ñÌ>§*ARHʾTAì‹&L•*ƒ«¯^o_%ûÛ¦‰1:Ê’Üjhvù‚ûœªGRˆN*ÆÑŽ÷¥‰&º¢ «—hÍIÂÛ3î?©¿héÕOMX¹TÆ·Üï‡ÊdÑÆœrV9 jpL(Éi‚ªÞNÔ¾)Lü?{çš,»É«á¹º$£8óÿ{¸¬¯ª³WšÂö¦Ò¤ò+QYokÐc@ÊN1$é\Á.vÄûœS7KlDt}˜˜ L4ŒY/ÍH/3z§a¢8WIS=ög)±{Û%Ä#7ð?Ÿí ìƒ •Ò²°LU¦ÆJ}õl_¶51×b1·ÃDö¨”à´³w—í˜Ã¾4±ab)˜H沈hšQÚ0Qìˆ.?生«¹ã2wU¤;»%9Èûò]!·°NïCŽxKiµ‰OÒDuj‰tÚa¬vºi¢›&¶"ºöÕ¿ÙGÿoÓÄPtžÛ›¨U(ðÊ„öA§MKÑDz/9¤ä7ÆfA§j|õA§ò\ÌMìc7Gç´ ÞIÑ iŒ¾ßû1`‚ÜD¨­4Û¡ó³4QÄQZ¬/Ž0ìKØÝ4±ÑõibFü54ÑP0f½8M4#½ MÌ覉¡YŠÀî½6!r„³kƒRãZ—°TyZ°ýõ¿‹&ʯfƒÀ¡ŽøMdéW³¨ô•)æ‰M+ÑDz/“CI¯fóv±£ôïÕ4‘Ÿ+Ž{ã'1½óÚD<óÕˆ÷4y§Á.Øž¡‹˜?JÙ)Gn¬va7›è§‰ˆ®O3⯡‰†‚1ëÅi¢éibFï4MTçª*ý)”c¼wo‚Y4'ŸÐĘTÖµh¢¨¢`àØWð]}Ï££ðMâêúÊøµØÔ¦‰MŸ&0óuëÐDµ‹—ŸtÊÏ%tîŽ&T¼wo"„ ò¾ÛD 4‚Eƒb»¬\µ ðhI§â4­àÊí€Uœ¿\,Û4q’&6"º>M̈¿†& Ƭ§‰f¤¤‰½Ó4Qs L¡ xoë:G?Xõ„&ªI ñ'Ry±{EUP³NVZíà»naÿüjG–O¢£Ï5®)&†¾2‚½7±ib)šHï%«¹!Y“&²A¸ü¤SÃÿŸã‡™ìÎ[ØÑJ ŸÐ§l™6:·°¹Œt|´vGŠÀ}q–ÐmÓD/MlDt}š˜ M4ŒY/NÍH/H3z§i¢8—̽?K1ÝÝn"DVlÍöKõ°MTõîB_½Ð—ÑDùÕ¹ö a?: ¶ž£‰ìÑÁCïvµÝ4±ib%šHï%§Ú´÷&ŠúÕ°ós%DŠÄÝyOB¸÷¤ ÓII'ÉXc4h¬*vö®ËNØ:çÅŠl˜èe‰ˆ®3⯉†‚1ëÅa¢éabFï4L ÍRÁî½6‘V³CN·&†¤Æ°VØAõòe—°Ë¯& Ú¾î¾&ŠGKÓnÑ]íØwØ KÁ„dHˆlšb«]Ë·&ÒsSæMØ?ü¦g祰Sj$vRÒIÓŽ!8l6ÈvÁ¥‰"%tÅň»¤S7MlDt}š˜ M4ŒY/NÍH/H3z§ibh– ·ÒJ<$žÀĘÒÅZ×UU¹Ëj»¡òzÿ²[CÑ!ypg¢x5ŒÞW&´ëÃn˜X &´$éÄüûôÎ?¼¿’þl—ÃD~nÈLJ0ôÆ„´˜Ü{kN*:åæ}h&ØÙ;ÎvñMsÑ[abDFÝ[Ý,±ÑõabFü50ÑP0f½8L4#½ LÌ膉¡Y q±ÃCcêõË E‡øÁ2Ic·à÷"¬Ù‘Ò†’jqÓĦ‰¥hÂJ6o¢íÞuÅ^ëà]D ÿŽ ‚|ï­‰èîf§4‘QjèÜš(v¨ë-…Ÿ«g_·D‡íÑ¥ðseJûí^_–Z_ü€àÀC»›Q¶3yù|Ñú’žÓûíü¬èt¾µÆ‡Ç“õÅS†H Ù:J“]ˆôèתâ”U%~ Ža­ê~†hDtý¯U3â¯ùZÕP0f½ø×ªf¤üZ5£wúkUq.LП¥î­ñA:ùn0&5òZ4QTiäÐé×SÕ[oÔéT ¬vø`7£ì‘#8Dé*ã×:b›&6M,A.1±„îÐD²ƒ—³M—Ñ„ ªti"Ù‰‡{÷¾! ¡·0! 
àü]-¶——»øèÖwuʉ¶ öűîòã½,±ÑåabJü%0ÑR0f½6L´#½LLé…‰ê\ó·péÏRâzï­<ãñäV^•ÖÖ@Þ—ª‹•¯ªJ_‘–ÿ½Vý§a¢üj”þk¨ÀþLTÉ'õ•!î‚&V‚‰ü^ 9Eöf‰j—2é‹a¢îk7Ó¢ÑÉI'Ê#C€Øž¡)ÏA¦ÏîMqb¦}qÉŽ÷ÞD7MlDt}š˜ M4ŒY/NÍH/H3z§i¢8·ôo烱SÆ[iÂ<"|BUª{hWµø±³µ U!-© ,Îö]4Q£ã–Öúnt¼´c¿&ŠG²`‘ûÊè¥¥í¦‰M ÐDz/Óƒ”™µIÕ®.@^ž«Ðtç=U|óåÂnF)65Á÷4Áy{â h¯/ÅM¥‰ì4bìnÞ»øò¹eÓÄIšØˆèú41#þšh(³^œ&š‘^&fôNÓDqÎ"Ô¹þZíàÞÞ¨á˜ãùl%-Âð©v¿eý]š¨ªrŸÀ–Ñø]4Q~µ²÷.V»×2¬wÓDöˆ¹¼&Å®2LÑÙ4±ib%šHï¥:'þ]EàŸ?Þ_M²ìjšHϵ0vgí|¿ãÎähGÊ´ÉOîMH™[ÀTÛ÷ŋ೷°³SL °0wÅ忍›&zib#¢ëÓÄŒøkh¢¡`ÌzqšhFzAš˜Ñ;MÅy¾÷kÞŸ¥î=é¤=k,S%P°NÏû;X«@lUÅQQ±¯þÛÚMÔ_-¤WR¼&ŠG'úÁ2îa·3Ú4±M¤÷2eÈ1ï>7i¢Ø‰^~Ò)?YcôØ?šëzßIp)é MhÁJ.ÚÞ›(vbÏžtÊNcgó¶Úlšè¥‰ˆ®O3⯡‰†‚1ëÅi¢éibFï4MçDΚ5U¤ËÍ{éq!úùlÿ±TúýíèïÒDQŦiê«gü2š(¿Z$b?:Þ›È%%cÚÁíbdÓĦ‰¥h´ÜF¥½7‘íB¼¼y]y.9²ufí¢Ó‚ßH‰Dr²ÀXž\ŒÌ:Ró$¤ü(NX™‡¢#xWœ„—ÝÛ'yb#¢ëãÄŒøkp¢¡`ÌzqœhFzAœ˜Ñ;C³T |oQ'ö#†³¢NUB ÿD*.vÔ©¨Â(ÂêõËNÔèX ú¿Õ޼8Q<‚v:²;Qß8±qb-œHÙ@ŒQ;8‘ìÀõzœpÍ÷«#õp"Û¹ws"ª›ž\Ãö2‚É5´7¿‹?Û »8U=ž"Ι6MôÒÄFD×§‰ñ×ÐDCÁ˜õâ4ÑŒô‚41£wš&ªs—Ðù*\ín¾†Í”æ{8£‰1©Ö¢‰¢*8sç0Oµ‹_Öq¢üê”pXçxuµ{¹¿s;MRö`}eü°›&6M,@ž9ê›Ïÿüñþ*¾¶Q¸ˆ&òs%¦LÛ»#[ߺ²}]`a{ ;±FÁÈní«mÕ.š=IÕ©sZ@¤/Î^îÝmšxŸ&¶"ºX«4|W‰Øú«-=º]HòÇŸ»8Q»'òs#¤y¯ÝLæGç­e‚Ãá‚èú'ƤÒZ…>ªza®ú¿? þ·˜±èøƒ Ì2Ø Ì^`ÖZ`œ#†vßÏb—fÍgˆq$¨!vÅEÜGDût݈èúafÄ_ó¦¡`Ìzñ0ÍH/øfFïôG˜âÜYcçZlysmm;Tôä# 6«ÄV»py•ØüÜ)W¢vëµjÇvg;;rí8¡ÿ; ÕçRÛþ.ª,¢!ôÕ§Qó]8‘~uqèþm#?Y7pL™¾Ô=Û8±qbœà#-Ãéí5‚ΓíЮ_`b¾³*€Ô?ÉôÖJ¨Gî*r¶À8 0©ö¤zúò³¥>†Äáî‘ÚO©]Ÿ¼fÄ_C^ cÖ‹“W3Ò ’׌Þiò›¥ìÞËyHx¨Ÿ‚×€R °MŒ¨gý6šˆ=IÊlW!ß4±MÈ}sJ b“&Šx¼š&ÒsY0‚c›Æ³ÿ4ïá4apšãÉç*ÉCÓ2I±#5 ua}”&FÄ%tƒM½4±ÑõibFü54ÑP0f½8M4#½ MÌ覉±YÊéÞRä‡ÐÙaÚ!©!ÐZ81¦^¾ì0íPtbxps¢xLyw ;Ò]8pãÄR8¡é½TÀH»qú3Í‘‹8Qì‹S´]¿¢›ü4"º~Ž<#þš¹¡`Ìzñ¹ésä½Ó9rqΡs‹«ŠôpkŽŽ€9yÿEÄdJ:ƒ5[Ø;ô—âÚ}<ÊÏUðÀíVAÅŽQìΫæVz®æ§¡úX*ÅÅpbLý›º‡ÿiœŠGx'Š2SvÅ.Ú¾:±qb)œ°ƒ=Äø»7ó {DòëvrK;vÆçfÄpëY'?"›œ"·” ©ÉíÙRÅ©‚Qç³F±{Ý:Ùäu’R7"º>y͈¿†¼ Ƭ'¯f¤$¯½ÓäU³Š[–zSûâ³N~ÈÙæÄ˜RŽ«ÑDRe&b_½¡|M¤_혒êGÇÁŸ¤ Oi‰|ðwó`{sbÓÄb4á†im Dšp‹æ7|®ò”¡§Õ-ôh<ÛE¹“&8ûP>)Cîy§H‰´óu/s?{e½ˆc ÞÙcªv°/Nt³ÄFDׇ‰ñ×ÀDCÁ˜õâ0ÑŒô‚01£w&Šó”Ç}0Kið[aB9ÍXpV†o~ð¯váFšäCô}Ù@‚2Qš†bGiž¡õыթ3À¾8Ù×°»ib+¢ËÓÄ”øKh¢¥`ÌzmšhGz=š˜Ò;K?ÎÉÝ©?Ki„[i‚5¢òž&ªC£ H]¬©QQ ¸ÆÖ*³ïªêT£˜Ì¥œ™M̈¿†& Ƭ§‰f¤¤‰½Ó4‘Ç4»Z–ò›÷&˜ü0áš“êkÝ›øQŸFf»ÿÞÙwÑDùÕ9›ñØNˆúMŒhÚÏ1⿚·nšØ4±M¸å©„Ú¡ Ïç‘üzšp3N?Ƥ7~ÌÈîìiåà)¾¿6A1`Kósû†Gµ~ôÚĘ8Æ}m¢›&6"º>M̈¿†& Ƭ§‰f¤¤‰½Ó414K Ü{aÝÓ¤e'41&5âZ41¦^¿Œ&†¢£òÜI§â‘Rž¥]eøÚfÓĦ‰h`ÒfljjG¤WÓD~n„`„í¯(Ù.˜ßYæ#_ÂVø¾ JRç è ÐQšgh{ö¤–é%½Gì]q/.7Mœ¤‰ˆ®O3⯡‰†‚1ëÅi¢éibFï4MT繊ÿ³Tø}áÚ±GГ[Ø?ÄŒá©ìkÑDQ•Ffo«ºÚÁ—ÑÄXtì¹±Õcn9a±¯Œ^:²lšØ4±M¸S$ÑM$;`¿ž&<_3HùwwÉÅ3îì8ñP°hüž&(` ÁÚß ŠÝkO‡'h¢8¥Ä0ú8‚Ýp¢›&6"º>M̈¿†& Ƭ§‰f¤¤‰½Ó4‘súÏ­?K±Ý{Ò)AÀÁ|¶71$Up±“Ncêõ»j:EGIž£‰e q×tÚ4±M¤÷ÒcÐÈ¿ßßþx=ÂåbËs‘ó]0ïŸdõÖþu‡Yú‘ï›a§,0@û»×ÝÊGk:Uqˆ„î]qy·æè¦‰ˆ®O3⯡‰†‚1ëÅi¢éibFï4MTç¹^ôg)ä{÷&8ú¡|vobL꛳B•&Šª”ìbľzŠ_¶7Q~5c0¦¢ãžt*-Dµ~Ž!úÒ4qÓĦ‰h‚óžC`ÐØ¾…]쀮®[ž+  ÚMƒsɾ;k:¥ÕE‚ÐMHÁJÄÔ.oXíâÿ³w.»’ÜÈ~íÚÞWF,´óÞÞZy!cd»ÏŒ Ö à·7É<:SÒ©dTN^š£"ZÚ4¢3þŒJ^>^"Œ/¥‰æ4 XeÍ® “&¢ib'¢ãÓÄñÇÐDGÁ6ëÁi¢éibÞÝ4Ñœ›dïW!~©zîI'Ï7F^¡‰MRÛ›hªêõp”'ÔûkU¯kom‰­ü¼at,¥ i¢y¤LÎ+C{“&†¢ ­³t©§õ¨KÍŽ¿…]ŸkPïN„íÇ­¼ô™'ê9ÚäyåÞD.-ØjÓUZíÌåÚ{[ÄÕ IÑ4±Ññibøch¢£`›õà4Ñô€4±GïnšØÖK)Ÿ[o¢fÝ •z¥Ú`b7©Ïœ^‹&6EÇôœNÅcÐòF)Ë îwM&MLšøú4Q¾Ë2MWÐ.M4»|—Kî š°z»%©÷WQšŸIpc^)7aµ›ÕêÐb‡_›Ø$Nx¦t g‰ˆŽ{ÄÛ¬‡‰n¤„‰=zwÃĦ^JñäKØÉnY×`b›T«öFõ&¯›¢“™®ƒ‰êœËw+ƒä4abÂÄH0áe’®R—äûš  õ¹†9( ·ØÑ©—°óMJäÓDS€T˜Šb¥ˆcG]T[@‹]ʯ5¾´·æZrОˆŽ]¸XÕ¾M,oí”SÛé-:wǹϦ‰æ‘ ÎÆR¨¬LEfmÔICÑ”Y:³z7c`³C;|ë»=W &$î0Íž›œÙ¤- <3"çþYþfG,—¤Ý&.ÓÌñN;Ÿ&öˆ?†&: ¶YNÝHH{ôM½” Ÿ¼7¡7£•½‰Ró`4±I½Óke ÜNxÝAÚÅ#™q¬ utš41M`«š“I7ÇÇ›]:|o¢>9iN}šhv”ùÜü㬘3?¦ ¬-ØD4PZìTíRš¨NŠþPœÍ½‰pšØ‰èø4±Gü14ÑQ°ÍzpšèFz@šØ£w7M,ÎÉŸè¥ðä“N¢vÈ+4±Hæ~]™_^i0šhªÈÊ4cõD¯•ãcyk–Ìýüãovp]mÔÅ£qys‰•iž'M ET÷&Êk¢B—&¨íMØÑ9>Ús%&è÷ÚÕŽÍùäüãLå%ÓÝjmfK(-v‰¯¥‰æTˆ!q,Nî¶x&M¬L;Ÿ&öˆ?†&: ¶YNÝHH{ô漖E‹{)=›&Ln–×h¢IÈ9¥±Ôü¨ÊÞפ‰¦ÊÁ‰$VoþZI>Ú[çÄ9§«;L×ÑDóHèÒ¯¿¾ØáÌ8ib,šàºçà) ‰fgztþñö\¤òE½v³Ã|æI'”›0çÇ9>ŠÏÎîAõÑfg˜/-ZZJe`΋-vLì Ltl³&º‘&öèÝ Í9‚&ñ¸—‚|îµ ¸9(ˆ¬ööEjá.†X*Ê`0ÑT 
—ÙËf‚ׂ‰öÖÊ`úÄo{Ÿ¸òt˜hÝ™ceŽóÚÄ„‰¡`Bn H¡L’»;ÑÃ:Õç ¡‚öü›>(âv൉âƒ+²<_¤´àò·Îš¥nð@é©4ÑÄ1§Ü¯ä·ØÝŸ›4±2MìDt|šØ#þšè(Øf=8Mt#= MìÑ»›&šsõZû1ðdš 2¡ÌÖéíͳÄRïKÞAM•ƒ¸?¡Þòk•FmoM)Aêç‡}‹âÝ¥’Ói¢)#%xBažùÇ'M EE%$B)3õ.M4;¤Ãi¢>]sÿ2B³;ùÚ„å”×V«´µ`ÆÄ(-vpñ%ìæÔ@À 7/a?1MìDt|šØ#þšè(Øf=8Mt#= MìÑ»›&ªsQöRœ>›8”&à¤+6I-‹&š*²€&š¾X‚Øå­Ë Éâ蔹ýu4Ñ<º³Z<Ç`“I“&†¢‰\×üYT?æìûMäeoàèjí¹µi—éw·ý4;ÂSK£–Ít%¥S®-X²b ´öUri¹‰Å©íL±¸ûd)“&V¦‰ˆŽO{ÄCÛ¬§‰n¤¤‰=zwÓDu.IjJÕ¸—rÉç^›@-ÖZ‚Ø-R%Ñ`{Mdð`h±{±ÚuË[cy0Àѹ°vÝâQ Ò3ÊXç%ìICÑ„•YzV¬÷Ⱥ4Qí4mšp Î5;zT$è8š [‰,òÊI'+-XK;O¢ÒÒÒ¯¥‰&®D‘ƒ”N‹ä‰hšØ‰èø4±Gü14ÑQ°ÍzpšèFz@šØ£w7M4ç"êÙã^ªÌãN¥ MrËj+4±Mêh4ÑT)7õöb)–è`å S:U¥é@P wQf“&&MŒD~+ïŠeÝ¥‰jGbt4MÔç–N;LÖÖìØäLš(m”•œÓ„·LT¥Å.#]JÕiFV$ Åe™ 6œ&v":>Mì Mtl³œ&º‘&öèÝM‹óœ«ýØK¡ÀÉÅëèF¾vÒi›Ô cÑDSE9H»ØÁ‹ÑÄòÖÎbÏD'_xo¢y,“!Í)V&6oaOš†&°~¯uÏA«­ïM¼Û J¿<×Êî¬üÍñÜRØV{˜{‹‚ÚÒ¤”;¾Œ&ÞštŠÌ¾Ûå»üâ“&LƒˆŽM{Å柳@Á6ëi"Œô`4±Wï.šxwîLOt¡²[¼ÎÓMœÐÄf©NåtúE•¥"Ë1ToéÁ­ß-M¼¿5(•A:ŽàE'Þ= I¯øÖ»ù,^7ib(š€F –“i—&šß•ü<ˆ&Ês1‘ÕU nûivø¨×>²6ºÑcš€Ú‚Y™¥Å/,^÷îÔ ƒ>!î~ãdÒÄÊ4±Ññibøch¢£`›õà4Ñô€4±GïnšhÎ]ÄIã^ÊìÜ{ÊrsòšØ$ÕIÆ¢‰ªÊSi™)…ê=½R)ì÷·†úlŠ£l×ÑDó(™SâXÙýII“&  ,³t`N¢ýï·Ù=(EëÌ󷽿_~üÿí‡ÿûfÇÍXùã±,¾«©wçºÊs¬íýí¯>ÿïçŸkkúÓ>ÿܺÇ6p~óO?ÿß?|ûéßÿö/ߦ~o¿æ¿?}.­îSé¾ÿòç?}ûiÅ ÌÉ>•–üåKùÍ¿ýô/Ÿ¿”‡Õx”ùt±üŸÏ?~ó×ÏßSþüË?ÿæÇŸ~øké¾üò_nßü[ûHŠã?³;Óÿüá¿  Õ—)]Å_Š“Û§ÞG¹£ŒßÄþã—ñ{Gùöß.ûšw­Ä0ö®òàãÊf¿®Z•0ÝleÞМgpŠEfH—RbuŠ ÂG“Î:"áô¿Ññ)qøc(±£`›õà”Øô€”¸GïnJì8ÿîC/EÇ/I.þ9QŠ»pääçRªçKg°aï•1y·#Éç&ÄgpH+‹§‹R·'”2ÓX8ÝT‰—Y´Äêå•jÁ¿¿µ’g×8:š.Üœ«%œÎ+sž‡&N†Ó¢PÆ¡¬ÑH(¥a¥FBÉ–™ã–]ìNNC€&º:¾¸9å”1jéÅ/Æ.÷T Y˜bqY&v=1Ÿ^è?výýâ®UÛ¬‡Ç®N¤‡Ä®¿_ïØUœßÑžQ³Ë¢'\ʼÚÛ{òº‚Œ±TO0M¸×|9˜4TéÁúß9M”·MHG®JCðî±f£Ó'”Ý•š41ibš ¶^Æd‚+ïv÷×-¢‰ú\rDãþæ`³3ÓsúÕÒfyåâ--œ,PZìÈèRšhN9ãt³»Ï†2ibešØ‰èø4±Gü14ÑQ°ÍzpšèFz@šØ£w7MTç˜ à Æ½ÔÙ)’µ¸aÌ+»([¤bJ:M4UeÒ€&õöb4±DÇË44Çѽ&šGIÌÏün÷;b“&&M @¼\ït¹þW4Ñì’Nõ¹®–¢ƒVÍNüLšÀ|«‰ÓÄ¢Ô=§'”jì(ySU^Î-Çê¤dû}/›¢cW^Lååb]ùM¯®Ljh%ŸãËhã çz3U1_Š]¢Æö2t‰CÔ~ØÜìÌñÅoh™iu|q'*–#¥e&™.]­jâ²*pÕìÔfš›p¢ÑñW«öˆ?fµª£`›õà«UÝH¸ZµGïîժ꜒²„½úhSô¢ŠÐ!Ès¶¨'?yç>™’ÀúXõ´Ôböj4QÞšëYÔG‡T®¤‰âÑPÒÃ8gÈ“&&MŒDrKDI³¨ti¢Ú¡ß]ˆ?ˆ&êsÙ²kê¯4;Õ3SðsºÕ$ÿ¼BR[°’R ´öA.—ÒDu* @ê,vw¥r&M¬L;Ÿ&öˆ?†&: ¶YNÝHH{ô漌^D÷R˜Ï=Ik^:-Ö•½ïE*K–K%±À§©â"?y¬¾pÛkÑD{k2Ç'¢ã~M4Æœé‰ayuŸ„”;½})cŽÅR…;IÛT)f .V,véÅRð··Î‰58O´DÇñ:š¨-)Dé<;œ'&M E¹R¸ªq—&š]òÃO:Õ纕á-õ§ÁÍN™Ï½—¥‹‘üx|É­o1° HÊb‡×fù¨N ÊGáгt—ûtÒÄÊ4±Ññibøch¢£`›õà4Ñô€4±Gïnšhα€L°t´Øœ~//ÑÚ½¼&R‹ÓRm0šXÔ×ÿ㱪V¼-šhoÍõÒ¢ÄÑá|á½¼æÑij=¡¬´I“&F¢ «'êä÷c©_ÑD³Ó»å¢ƒh¢>7× ‹ÖŸ7;˧–ö›f[9‹j­gÉ¢Òb§xí½‰êÔkˆÅ9ðÌN;Ÿ&öˆ?†&: ¶YNÝHH{ôÅ9×jôq/…xrªt¤[éíWhb‘PwRì ©2ØI§¦ªŒ—d)VOøb{í­™ (`¾Û¹9&šÇò«apšºÙ©Mš˜41MørÒH5(ìuo «MÞîmEeÑ«šá¹ÈëåD]¡ o-]X‚]”f§.áGn)%©‰ºÃÇbÇó¤S8MìEtxšØ%þšè)Øf=6Mô#=MìÒ»—&ç™øÁ`ó±—R>û¤SB4||Òi‘`ý™5ËXÈ›*¨‘–'möZõŒÞ¢ã–ûw5ßììº{‹G–zH-VF:sNš‰&êw‰F$‰º·°;¼Ë„w M´çz{¨Ÿ|±Ó|f=#,ÄžðñI'€Ú‚¥æŒë÷ÐÍ®üD—ÒDuŠåï=‡so"œ&v":>Mì Mtl³œ&º‘&öèÝMÍyÞÈ%î¥î×<Ρ DT‘õÞ³ÛRÆÊ»¨22Ê«W“×¢‰úÖDÖßXì_—Óiñ˜“ÒÃ8É]†ËI“& ‰ò]RéùúÈ»û úÑD}®%,صʎzrõm•Ò}<_°ö¼9eó~Kov"—Ö3jNËAR,ÎïNMšX™&v":>Mì Mtl³œ&º‘&öèÝM[z)JzîI'q»AZ¹7±Qê£Ìå_“&õîÚ¯–ñf÷qý÷Mí­Ñ)8³üf'×Ý›X<*¹ÁÊdîMLš‹&ÊwÉí-vëM,vbG×3jÏ-VˆÛ3Ø©b ±ÔS½+4Aµ[ÑÝ“N‹â¥b›SOª±8NwÝФ‰•ib'¢ãÓÄñÇÐDGÁ6ëÁi¢éibÞÝ4±©—ásiÂ꘲FÛ¤æ±na/ªH•‚}ôf‡üb{›¢Sò:šhµ€4r¬LxÞ›˜41MÔ¼ ÎÀÝ{ovx8M´¼™)Eí§Ø{oB ×›^Z0y?‡÷bGW«N¥‰æÔ@žg2O:…ÓÄNDǧ‰=â¡‰Ž‚mÖƒÓD7ÒÒĽ»ibqî ã^ÊñdšðtcY£‰mRy°{U•€çï›]’üZ4ÑÞ”ÅÑl×ÑDóXf9™â¯NÄg†ØICÑתp5ýxêætjv¤tt½‰ö\áZ:4l?Tæú§žtJ7R[¡ )-8'ËÚ¿…Ýì4óµ·°›8.ÿi Ååû$“&V¦‰ˆŽO{ÄCÛ¬§‰n¤¤‰=zwÓDs.5 7Žs:•&2ÂMVh¢IPÜO?õöJ¥~]šhª2rRÕëÇS¹¿ošhoíBØ/ÉþE¹ðvõhDÆu†2O:MšŠ&ÊwÉ©L/åãùï~óýrÂÃka·çb ¢p̈çV¯Ëž¯œtÒÚ‚™)A¿‡nv ¹žHÍiQ§±¸|—{qÒÄÊ4±Ññibøch¢£`›õà4Ñô€4±GïnšhÎݵÌÒã^ÊìÜ[ØZú{¦• ±Û¤:vo¢ªò$–ûeßìàÅN:µ·&":z!M4å]ŸPF6kaOšŠ&ŠJ+=·ÛÇ*=ßýæûÍî|øÞDõ_«<…Ó`cËgîMd¹9c^ٚȷõ¶8`{š]Òt)L4§’Qä ql&ÂYb'¢ãÃÄñÇÀDGÁ6ëÁa¢éabÞÝ0Ñœ«` ö¡‘vnñ:ºè Ll’ª8ØÖDSÕJÂæ'Ôg{-˜X¢#P†ú8:FtLT@T¾»ø«¤ymbÂÄP0Q¾KÖzé§_ »Ù‰ 
õ¹Æ\wØ~JPôÔrù–KC†•äVZ0‚“ô«M7;¸Ô4ÑÄ•_‘TCqH4Ka‡ÓÄNDǧ‰=â¡‰Ž‚mÖƒÓD7ÒÒĽ»ib[/åv*M°áŒWhb“Ô¹l¿.MlS¯/VnbSt.¤‰ê‘’Öʱ²|—ç}ÒĤ‰hÂê"E#ìÓD³+3í£iÂZJ§œ9H”ÔìÀíLš€[®ecVi“˜›JÝáÁ ¯=¾õeäH¡úò–ðjãK-½ ” }‹"_:¾ÔâÁY£ƒ€‹]š«Us|j|ñ6s«¦ßk6;¹¸H·É[‘–s,Îy& áºÑñ×`öˆ?f ¦£`›õàk0ÝH¸³Gïî5˜æ4ÈU½ˆT:wG7ËMa-uE-q]H‚ûš]™$Mõ¹œ<ØQnvšó™4·¬õªùW#ÅVÓ²I¬4Ë`i›ªzñ,¸ô±ØÑ‹ÑD}kIª1þ åÒrFÍ#%q‹7¹¿¬3ibÒÄ4!œÑÃñEJÿN'Œ/¢ǃ ‹⹫U¬å-¯Vaª“©Ù»ëj‹§K“|,N´Œ~±¸Ì6¹+˜P÷":ÚΨ:•`1ǶJv·‰ˆÎO#⯡‰†‚sÖ“ÓD3ÒÒĈÞaš(Îs·Æþ,¥ÁnÎËKkŠîÑD•ÓŽHÕ¹ WUiKJb}õñËn:Õ·ö´åèGÇá¹æ¨Å£$Têÿê„Þ¾X.šX41M@Éw³Èš(v¨W·3ÊÏM,=¶/T» |'MÈFfÈð™&0à|òÙùnPì@½V&Ò1ö¾8~«¼¸hbg›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰êÜÈáÀ,%ê÷æån kvh¢HÐ|× Hµ¹n:UU%ÉœûêÓFã»h¢¼µ“†#‹eÔçn: iOÓ.{ÿ²ƒU2pÑÄT4%/Ñ¥ÙΨÚI¸ºd`y."lßa,vô1Ûí2š@Ûˆ‚Ég˜ 4€#{g Êv*…‰"Î’¾vŸj§q• ìî&FÄ_ ç¬'‡‰f¤'„‰½Ã0Q3‚r–2¼·›lùHÞ[³½9±É©2ÙÑDVe! ·“!«zGÿ.˜(ѼJC7:ödÚDõ¨ìÀ±¯ì¶L,˜˜&(C‚E¥Ÿ·ÿüóûeŒpuÚDyn *Ý=:ç¶Gw„nd¹ußçõ…óN{'mwT«v ö(M§’¶ F}qWÚDw›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰S³”ð½È9ÐFq§7êI©*sÑDQ¥šVlí«×ð]½Që[GDB;{ð¢Söè!ýá<ö•¹®’‹&¦¢‰ô»´4§Fþ¹GþóÏï7§î…«i"=×C@JroüäVurg7#ØÈ)~®ñ²Ä”bûžS¶K¡âGaâŒ8Ј &z»ÄFD燉ñ×ÀDCÁ9ëÉa¢é abDï0LdçÄÊieëÎR„7ÃnlˆögûÃR)Lv4qN½|Yö©è0>x4Q<:tÄŠÚ:šX01LH®Ô$1²B&Š]WÃD~®JâÿvŸ²—Ý­GD’ÅøyyѼ £FµöWƒb÷^¢ú ˜(N9­a}qïýLìì&FÄ_ ç¬'‡‰f¤'„‰½Ã0Q3±[–R¸9iÂ|KRvN&ª¡Hý9Ûÿ.LU‘ÐÚM ^êýËî9•·6Î×–ûщo¨u;Lh;€Þ=§b—ö* &LÌI¥@®ê í{NÙ.¼ß¯¼&òsÑE¹7~’ʽ0F{´±`GçØ¾kYì={Ï©8epVï‹#])ØÝmb#¢óÓĈøkh¢¡àœõä4ÑŒô„41¢w˜&ªsOKÛ)”ùfš°™ÓM ’K¢ã©¦sÑDQ¥Ò^¸¯>½ãwÑDyk <ð·U|®u]ñÈ!½o°¾2Y4±hb&šH¿K11Wm¶F­vìr5M¤ç¦Yœœ{ãGÃÍÍ&t‹”t|† Ë”EÚQ‹Ú³÷œŠÓHi&‰}q1¬ê°Ý]b#¢óÃĈøk`¢¡àœõä0ÑŒô„01¢w&¬6JSn7 ®vïMÁÖà›ƒíÀ„Õ®e ÔŸí%üÌÿ]˜(ªr;Eµ¾zñ»`¢¼5–û~¢cñ9˜(¾„€}e uL,˜˜ &,oÒM)zûh¢Ø1_ݵÅ<€›µ[;ñŸimwÂDÇœ&îŠSÆ•‚ÝÛ%¶":=L ‰¿&Z ÎYÏ íHÏCzGa¢:‰J±?K ѽë6Öø&^Ü-ú©“Ýsªªb N‡Àj§¾ &^щ@GËø¾e¿&ŠÇ€dý=F|oŽ»`bÁÄïÃDþ]*†4iB³ÕD±KÄquÖDy.‰#ºöÆO²Ó›ï9y„´Ò}¦ H#ØÑ©‰=ÕôÑÆuÕ©©`ûhâeG+k¢»MlDt~š M4œ³žœ&š‘ž&FôÓDržïpxh-}Ù…{i„7õì—„(Ðn¨ú²û´.ý&MU ¹Ïj_=Ðw5®«o˜/¨õ£ƒïíáâQ”H ¯LÞÎîM,š˜€&`p Ö¦‰b§~u¯‰ü\2êÍÚÙNîÌÁvØÐŒ‚¦ Ì#XQ°}ô]íØýQš(NE4ôÅ™¯´‰î6±ÑùibDü54ÑPpÎzršhFzBšÑ;LÙ9@èU¿|‰üÔ®çÊìÈ9;c‡&ÎH…çªèôR/Þc¡jß•ƒ]ßÉ ý@tü¹ìêQÓOŸúËx®›³hbÑÄL4ùl‚Å56ËÃV»4¯¦‰ü\2Ë)Q½ñ“ìHï<›À-jnÅð™&¨ÎAÛ}°«г4Qœ&„±v†øËŽ×M§î6±ÑùibDü54ÑPpÎzršhFzBšÑ;LŹPú¿±?K¥eðæ´ Úh/mâ¤T´¹h¢¨R4µp@}ü2š(oÓ¶»s6Q£HÏ¥MT.nGöæ¼hbÑÄL4‘~—Ê`š4Aå¦_~Ó)?WØœÚ Q/»`÷6›ˆ8ÈM¸£"ëÍAÉŽâtëKR¥@,ÒW/¾m}ÉÑ1•îÞ!Ûqxr}q§Ä°ÄÔU–·5k}YëËLë §}Ožv‚´÷GÙŽžÍ6;#Žñ­ÓÒú³׈Îÿ fDü5ß` ÎYOþ ¦é ¿ÁŒèþsj–"Ô{Ot‘6QÜùù%¥×Àf}Øb—{f^M ÿÿ†JÍÜ™„­°aP;7Î)ÕÉòòN©Wø®"'£#ž}ŸRfëì{ÑÄl4®Á£…ÐY_ÀÿúZtÙúIÕÚM=«ÆxãúL[,b?/0’v‚ªÚC½Ø=ܵ:ÕBD}q¬«f`wG݈èüà5"þðj(8g=9x5#=!xè¯S³”(ß ^„›í‚×9©ªîý*NUÑ;Ëj}ËÇÿ×8QÞÚÐ"„~t"=×µxT0ï+Óð~,¿pbáÄïã„äCåÜÌÛ8Qì \ž˜—Ÿ1mpÙzãG5ꭇ߰ALS+|¦ Í#ÙÛ#½Ø>ÚϨ:M?ï‹ã·\ƒE;ÛÄFD秉ñ×ÐDCÁ9ëÉi¢é ibDï0Mçjª›U¤Ñ͉y°ÅÝ2§¤j˜ìpâœzù²«´å­ ò1@?:ø9šÈ# ]eIغJ»hb*šÐ\ 0_Î÷ؤ‰b—7G-ÏÍyçˆÖ?Š®por’<š?ÓDL#8E›¥S—/ÛE‰ò(Mœgé·³h¢·MlDt~š M4œ³žœ&š‘ž&FôÓĹYÊî-ó!h›ŠìÐÄ)©&Kœ8§žý»hâ\tÞö<·ÓDñ˜Pß:·&‹è*¸hb*šH¿Ku‘œÓÓ¤‰bÇÂWÓDznÌE½“0Vín-H´!¹ïܤµ4€!ýmÚŠ`x&ŠSv°vÛÁ—¯ äÝ]b#¢óÃĈøk`¢¡àœõä0ÑŒô„01¢w&ŠóÜ^ÚL¡Â÷6GM{ÉÍíÀD•`¹òÞ©1ÌEUT°ÎUžb§ôeyå­Í#ÀŸaÔ«|$Ð=Z÷WÇÞ3:L,˜ø}˜H¿Ë˜·éiŠoÂD±º&òs1½¨wªV»ï­¨˜›Ÿ~¦ ¯#°³V»ðlÍÀâT¢[ľ8¡U¼»MlDt~š M4œ³žœ&š‘ž&FôÓDqAÒrÓŸ¥ônš`ÛÜöhâœÔ8Y?£¢Ê©sý¾¾å·ÑDykO‹¥9>GÙc®KÌ]eðÞ3qÑÄ¢‰ hÂó'‡h±}ѩإ]òÕ4‘ŸËܵ;ïEr€»&ðsÅ@iü" ¡ùÕ Øã£µ¯ª8éÞ­v¤²X¢³IlEtz– K´œ³ž›%Ú‘ž%†ô޲ĹYŠf_~Í ö®9”jsL¼Ô[pò¾z¡ïJÁ®o­¸Ý—ðY¢zôô—“7[,±Xb*–H¿KÈå¦ÅÁ[,QíÈ®®?^ž«®iÚñ¹@ô,a¼‘3~¦ H#˜È>õ(ÿKi±C~ôd¢:Ueî,Åî½}Å¢‰mb#¢óÓĈøkh¢¡àœõä4ÑŒô„41¢w˜&NÍR*áÞ“ à e§ ÓK‚qè¬K¯Wš+i¢ª2оÉSíâ‡óþÿ5M”·v ôwëGÇ"W¶zL íò–ÕŽÞî×-šX41M¤ß¥sÓMš(vñú³‰ü\ ’ëµõơحÍ&tÓ Øgšà4‚иbûl"Û©>[¶Š“ä¶Ý³ãeÇël¢»MlDt~š M4œ³žœ&š‘ž&FôÓDqž–,ï|ð¯vzïM'ØB@Û;‰." 
Ô—g£‰¢Ê¢w:Â¾ìø»r°ë[§u\Ûm£^vlÏÑDöh˜¶Ø_Æs¬³‰ESÑDú]Æ4ïè‡:EþùýÆ4kÆ«i"?7íG2)ôÆO²ÃpoE§ÂN¶ä,E¼MòïôMq-BW\š†xÑDo›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰âœ8·6:0K¹ÞJ¤´…½£‰SJ çê\WU1Pÿ³áøÿ&jtRx:ÇNÕãs0QÄælbÚ3t±‹Ï6›(NƒH§I±#]º»ÄFD燉ñ×ÀDCÁ9ëÉa¢é abDï0L碦P7Û& 0íÎö0—ô}©î>Mõä.´‰j§_VÒ©¼µBúïÐÎ{#‘Ûi"{„ôuÒ&Š]X%MÌEI¥±’˜¶&ŠDºš&òs5ýĸ7~rÈ4A²¥ù7„£ïXÖ JžöbùÙ‹N±,~c_\^$Wëºî6±ÑùibDü54ÑPpÎzršhFzBšÑ;LÅ9¥uÍ¥?KнiæPÜ9›8'Ue.š(ªX-v>.U;ü²³‰òÖ‚Á:çNÕ.l&ÜÀ޲&šcÁX*½ë±÷'a¢©bÊ|¢Û퀾 &Ú[—u\ƒú°{tüÁŠNÍ£;0@ʶmÁÄ‚‰™`"oPÛx&¡þÑD³Ë/u©/‚‰ú\õT&¾hü;ËwÞsRܤ–:9¨ke •Ù% ‰f÷Úãï š¨N5¹iPÎp·ã•5n;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4ќ׎pžãY S¾·>¬Ë–Øhb—PFµ¤fŸ‹&š*N¯UJY¿‹&öèXm%G‡Už£‰ê1'ÂL+sY}°MLEV³Ì‘»4±ÛÁå4QŸ›3¨%›Þ{Ñ 6N@|pÑÉëHGJ9¸ÒØìÞÔ¿•&šSv Ú‰ïv´Î&âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šs¡ÄÈñ,Å~/M°ÂæÄ4qJªü~Œògi¢©Ê`(«Wø²ìsѱ+:U (Be–hÕ‡]41MxÝ¥ƒ`ú=à¯ý~k% Ë+:ÕçzM ‹Æ{2¿—&D-¾¡ (Êö‘Ž;ß{~Ù½^&º&~9•Zƒbq¯©ô‹&Þmûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøå¼ü'ëµæúe§zïM'%Ü8½í]÷·T!c‹¥fš©¢Ó/UìöÁr`öMg?oí‰kß“0:^~­ÑÄ/$e|J¬Œ^vÑÄ¢‰?Míw ¸l r§¢Ó/;Âk»Mü<—’BÊŒHø®.ö¥½ë‹{OPG0¦ÜíŸùË ?JÍi™ r,Ž|õ® ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ…j~u›¨NÌØcqðzijhâ`›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰æ\³¥x–ò›;a£˜€Ïöå-\>j:MTUe» ª/ÿwþ.šhoXâèÀKNÌí4Ñ<—ßþ7¥u6±hb*šÀ­æÄ!¿9|þ럿ßb§¬WÓDy.Ö»VѬ]ìà÷l„+Ï&|÷2L߯/TG0%GìŸ}7;`x”&ªS¢¬bq„ ‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsAò^ÝÒ_v,|3M%Îv<Û“0&§Xª ÎEMUæzé V¯”¾‹&öèhù :ùeÏs;MTÌœHãeœ M,š˜Š&¨P !¦þÙD³£—jiÑD}n–œ %?ÅŽ5ÝI´9ºÁÑòâåÏ’=C´¼x­d˳-/'Ôë›kÀÿñååLtÌž\^>WÆ)­´¼µ¼Lµ¼ð–X4Kÿ"m³c¼|y©Ïµòè(Ÿ¬Úe×;?V of–ù_Š‚\ï̳{¤4—í&=ú±ê”8̸>VE_!:ÿcÕˆøk>Vuœ³žücU7Ò~¬Ñ;ü±êÔ,E ·~¬B×ÍÍ.Òž“:[ZÞ9õoºzü§iâTt˜è9š8¥L_ÒŠM,š˜‚&€Ì…³A@@/îÚ÷ÿ¿û7Iw¦åiý£™ ¿§ i#¸Þçç®Òf'éÙ‹´Õ©¤Z‚„bq¶h"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,å7}³ÒfÎ4qFª¤”梉]•%ù@½|Y‘öÖŠäqtØà9š¨UÀ4Vfyµ3Z41MÈVïî§‚ÃÚ¥‰f‡/í¸.¢‰úÜòžÂ)Ú£;‘;Ï&8o"%¶iyZGpAÖ—f§éÙ"Õ©‚ú›꿉Óä+-/Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9GË9¸ß¹ÛÝ[2qËzT2°I .+1| Õ`.šhª (ÄkUY|é»hâTt¤‰æÑ̈r¬,¯vF‹&æ¢ Ý ¸ª©¾©KÍ®üȯ¦‰ú\JboJþïßþÑýÎvF¢[™Eñ ë;—\?ëç ÎP³#ðGa¢9Í|mivb++/Ü%v":?LŒˆ¿&: ÎYOÝHO#z‡a¢9w”è“G³3O·ÂD™7¥|U‚Xt4Ñì’M–•×TaÌq  Ó—¥M´·&¯gqtʦã9˜¨=`¢ÙÑK.肉ÀDÞ pª‹xù_ Õ?‚jQŒ(frgVÓ&Šæòž&¬Í¼"jýC”f—\¥‰æT¸ü;ÆâxõF·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ5ÓS¨ÝKõºÈáE§]*”…ø©å•梉]•dw;Æï¢‰öÖY34±Û½\î¾&ªGçäM4;X½QMÌE¶A²\@ý÷ –ýó÷[ìÄ/ïfTž ©ö2ÂhÖ.vIùÞÞ¨,e’9¸èäun)“Ÿÿ}ŸƒðQš¨NK‘CqeZgá6±ÑùibDü54ÑQpÎzršèFzBšÑ;L§f)¼—&0o9ËMœ“J“¥MøþíÚ<óêó—u3Ú£S/h×ûÛiÂÎ&„$V&ºÒ&MLE¾Õß.hæ~ýñf~y7£ú\UFõhü;¡;“°ËêR"+ü¾d ¤:‚•Púiw;V{’&ªS+Ã*aùØÅÙK Eï·‰½ˆNOCâ/¡‰ž‚sÖsÓD?ÒóÑÄÞQš83K»›K:‰Â†ªïiâG‚zêW²ýû•¦¢‰]º¦ÔûwMœ‹¦çêïµüIŒbe²Ò&MLEåw a7mb·ÃË Ä¶çZíÄŒŸÚÑOï¤ ²Íá}3#€6€•$u ìvœ½è´;5çœ?—_¢¸`â`—؉èü01"þ˜è(8g=9Lt#=!LŒè†‰êR-¥á,oÊ$]œ6‘7„ƒ´‰ ŒÂøÔ7ÍEÿ(L4U`e"×Ôçïªèt.: ÂDó(‚eû+ãÕuÁÄ\0õÈÁغõaw;õ«s°Ûs3Õ´@ ÆO±Ã7µ'.<šÐ ó{–À2~1+iê/„ÍôY–¨N‰ÈËß'G¨+i"Ü$v":?KŒˆ¿†%: ÎYOÎÝHOÈ#z‡Y¢9çâ»wèGd†›£bÙ¶‰ÏöKe˜ì`âœzû²ƒ‰öÖ’ÁãèÈ 'ÞÎÍ£Yú@™½4__,±Xb–À €Úyn7i¢Ù¡ñÕIí¹,T9%?ÅŽsº7i"INô¾sPÁEiùóôi¢Ù1<Úkbwš™,W˜lÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓDsîPKÑij”é½',¶¤`7 R[†÷«þÿØ¡ÏEM`N/_VÐéç­’§¢cé9šh9òwãE‹&æ¢ ª”5—¹µKÍNíê‚Ní¹Âåú…3v;ò[i"m…Ph‚÷™WHûŸÿy__탽;­'GÁ=ÐÝŽW v¸MìDt~š Mtœ³žœ&º‘ž&FôÓDs^›Ü¯ZúcGùVšÈÈ[V8 ‰*AI4–ªÉm.šhªÊBE™bõD_v6ÑÞZ,i¿åÉÝ“÷œªÇ à9I¨,'^I‹&¦¢ ®gµä)÷ï95;–Ëï9Õçæ¢€(š÷Š]ºµs]¡ q88û–6Ò‰€ûÜSíÔŸm6ÑœbÒ,¡8K¾Î&Âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsA5‰§P£œo¾éÄêr”#×$d¦,)–ªïîßþIš¨ª¼V¡x9ðo냽¿5—?C²i¢,N\þ$þêj Y[4±hb&ššÚ,Tüõi¢Ù1]]Щ>˸HîÑ‹ªß{Ó)gd>8›Ð¶¾”(`ÿ²Ù‰=[ÐIÛ²P(&¸0¶Û½ô¼Y4q°MìDt~š Mtœ³žœ&º‘ž&FôÓDsÉ Ò³”ß\Ð ­ìÉÊÃþHU…à^ën2M4UÈÀÁ7º]½}ÙM§öÖAÙ—=ŠùÁ›NÍ£–Ÿ]¿Xþn'´Î&MLEÚò ½™wþúçï·ØÑK¢‹h¢>WsPv·•{Ëæ §÷4‘ë÷4aéÏÐÍ¥‰æTÑØ?'«¢S¼MìDt~š Mtœ³žœ&º‘ž&FôÓÄî\ë>.ž¥”î͆Íà¨<ì9©2Wëº]UV,{áX}Nö]4ÑÞºv!ˆ£c ÏÑDõµl-a¨ ð…aM,š˜€&Zƒk(S·w›M4;Óë³°[ƒë2—¡ŒŸjpwÞf88›¨_²€Sýët•6;Ôgi¢9ÍPÏ?cq «¦S¸MìDt~š Mtœ³žœ&º‘ž&FôÓĹYÊíæ,ì2ßÐÄ.•3|2Ûg˜, »ªÂz”cõ˜€¾‹&ÎEǤ‰æ‘ËN!¨oÙìhµ®[41MØT–Àï'ßýó÷[í®?›¨ÏE•ò¾ÑSì$ßyÓ‰Óf¢i‘IÀ¥Å.å4ÛúRÕ»SPè}·ËümëKykf“8:ÿ˜Åï__ŠÇ,î̱2}ÉkZëËZ_&X_|+lÞúU>ª•yóêõ¥>W 
X0kïvpëMZ®íW‹Â÷ë‹·½k‰TðżÚÕ¾K~­jâjG(ÃXì`}‡Ÿ!:ÿkÕˆøk¾Vuœ³žükU7Ò~­Ñ;üµª9'©iËñ,E‰ïýZE°™áÁתsRy²*M—µ1H€ÜíÞÜþOÓD{k©½Fäƒè¨é¤ms1éßÉjv*ÏÞtÒ6 aùM`(à¥I󢉃mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR÷Öt’ŒñAwÔ ììë~ì&ËÂnªHŒ‚sô]½}MìÑÉàA½®ÝîÁÇ\^X,V¶Î&MLFåwY6éfœ¸KÍ.½|Ÿˆ&êsY­¬á¹,wö3¢Êæg¹Œ`$(3t}ivÉž½éÔœZΙ<W€hÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓDs^›Ä§ÏRæpsÞ„mFGgU¥š)øÁlÿæ$úÏÒDSTöÂñr@ðm5Ú[$Sˆ£ƒüàM§æ1§ÿêHÓªé´hb*š(¿KÎÅ%©ti¢ÚIÁŽ«i¢ãÿßã‡UÝ雷²È–—|OÖÖɹßǵÙᛪ{·ÒDWëÒš†âèµ´Ü¢‰ƒmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&vç"Q!‹Ýåæ ±e…†£ ±ç¤NÖ½nW¥µÞÆê_w¥_Aí­ JD9ê¶ïí¼éT=ry!' •q2[4±hb&š°zƒˆ(1õ³°m¿‘DWÓD}®dÊl9?5¿Âî¼é„[2AÎïiÂë&!–þ‡µf‡ÓDsªlI-'+o"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,¥è÷VˆÛ2ÕtÚ%dàäHåÉò&š*«‡Pøzÿ²›Ní­½faæ8:öºg¿›&ªG!eü`€ÒÊ›X41M”ߥp™3IúYØÍ.Ñågõ¹’¥LÛá¬-¢zgÞ„À–’%yÓ‰RÁ€DýÂfÇ®ð$Mìâj–<[(NÊokÑD°MìEtzš Môœ³ž›&ú‘ž&†ôŽÒÄî¼Ùî7çú±Kp3MЖè€&v ‚Éû57~¤²NEçÔË—Mìo­e‘fù :¯Ù 7ÓDó¨©N©9VæiUˆ]41MÔß%i™³x&v;ò«k:Õçrb6ÏáÃõ‚ê½bDßÓÔ‘N–ƒ>{»Ò£y»ÓÌ.ø¸×òè‹&¶‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎÝûu¬w;ËvïM'ÅÍÐhâ”T§¹n:5UÊZÖï[°Û%ÎßEí­‘P ÅÑ{. {÷X@¿à~¬¬`Ç¢‰E3ÑDù]jm$áh]š¨vètu/ìöÜzèúu½w;þ=·ùʳ Û„@ð}M'Â6óêÁþHǶ¾€>JM£QŽÅeâÕo"Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡ibwn…4ž¥˜o®éİ©Ôt:)Uçºé´«Ê b«LßE§¢£éÁ³‰ê±^P³~Ú]™ùÊ›X41M`͇ȉ½_Ói·¸º¦S{®b¶$R ¿¹¶²‚¿§ j#XSöþúÒìrz´ßDsj„Y‚06»ÿ³wfIŽãºÞ‘‚ÄŒ•ôþwrI*«Ã§Ò"¬ ¤f\ó¥2PÂ/X>À¢‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ý9—Á&ޢܻ7A¾ÜÂþ‘à¢ýz?v2ÙI§¦ŠD 0VOÙ¾‹&ö·6 ¢£ÏÝ›Ø={î×òÝí”ÖÞÄ¢‰©h¢|—TpÂ%÷i¢ÚÁk†ã‹h¢>·Ð„²†-›$ù­µ°uÓÌ%îG4áÎR~‹&ìÅ™g_¼üÆ «7Éß6¾|´ùÉñ¥xt,C>ÇÊþ'ÿÈ_Öøòß/´%pb&êæ Üí/_­*ÏÅ\Ó‡·Ýšæ[ÇÞJçð>ËGQëã¹sz·CÇGW«šS+p?¡án§ë$m¼ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­jÎ$Ë»H‡{÷¾3lx”üGj•‚“N?¯DsÑDQU«`«&û@½ðwÑÄÉžÃèÔâ±üM4˜Ê/±2 µ÷½hb2šÈ('$i@ÅN…®§‰Œf)‡í§Øå›÷¾k± :ØûæÒ‚Kß›3öÏürëÄ¥‰&NDË/ŠËðRjÑÄÁ4±ÑùibDü54ÑQpÎzršèFzBšÑ;L§z)4¼•&òVKþä^oÿ©TJ“ÝË;§žå»hâTt8?¸÷]=BR”à^^Sf/{w‹&ML@\÷´IR¦þÞw³C¸:g`{.‘3öËoÿØe¹so·”²¾_¤õA„ìMÈÞ=›3PZ7äªÁ½¼&ÎlÕ3 §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8ÓKA’{OÒ²é–Qö&ÎIµÉh¢©‚2/íWcÚí²|Ù½¼=:ÌàGðAš¨ÑØ‚ꎻ2΋&MLEå»,_¥eÆ~ÎÀjW:ÍËsÖç‚‚PˆÚÃÛ*t×UGM1e?8é¤{ vMý½‰f÷š]ñ šhNMjŽ«Xœf]4M;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4±;wàìvJ÷f ÷´¡tjJ”Ä>‘ê“eù¨ª°žÀJ«wú®zF§¢ƒéÉ{Í#‚§XÐÊ@¾hb*šÐšÙ;y™Ð÷ïMh;i”.?éTýçì¤I¢öC9éÍ'²¿§ «-XJK—þŠF³#²Gi¢9õZObqʸh"š&v":?MŒˆ¿†&: ÎYONÝHOH#z‡ibwN²Æ½”ÜL¸ÉὉ]gû@*å¹h¢ª*Ãe™/C¨ž~ÙI§=:‚É(ŽNÁîçh¢y¤Ò¥¹eš¾|u‹&ML@VO©Hò~òfÇ~9MXÝóÀ$I†¤;÷&êþG™h+½§ ßê&/Áï¯hT;ry6g`G„Š)ÇÈ´h"š&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâT/E ÷Ò„Á–ì¨:ê.Ák—ÿTÖ¹h¢©ªÅ÷‚Ú{?oéßE§¢Ãòà-ìê±L…( Û”­ ä‹&&£‰ò]–·Ì~­Ò©Ú¡¼ìú]Dõ¹šËä›+Å”><ÛwÑÄ™èHBŽ&šG¢zÂ.V¾®M,š˜Š&ÊwÉ¢hÎÝJØ»¼ôšÑD}®b™§Gí‡K@èÞ;ØÜ¶@ßÓ„Õ\ë¾{ßÍŽ¥‰æÔ”>WÌMDÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4çŽBˆq/å‰o¯„ vP û¤T ¹h¢ªÒZ¤4ÈlÒìRÎßE§¢“ŸÌÛKg÷>M4»×Ò‹ÑDy®–ïÊàµqG¿ù6;É»{P”µ¾“agù·Cw€çhâ8.Ð+„ýÇŽltêOûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøã܈‰5î¥Äï-7¡ÆêÛ{$”q˜{…°ÿµƒ™hâGU­Îf 9û7ÑÄŸ·. 
ȆqtPŸ*7ñÇ£–/?Q¬Ló:é´hbšØ¿K-·çßçhÿùëû-v/+îWÐÄÏs)›‚‡½v±ƒ[ aã–]Eì=Mä½›[î*mv9ó£4qJ,šˆ¦‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8×K½ËŠwå½ È¼?étVªd˜‹&Ω·/£‰öÖž³ÄÑQLÏÑDõ å •AÖµ7±hb*š(ߥZãëÒD³Ò«i¢<×Ê»¤µŸbG·Þ¶ò{š¨e*¡îŽJÝ Ù!>»7Q"0ˆÇâòº7N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ×Ì¥Iã^ o.…]F”D ḷGNæ’c©$4M4UR~æÞ­ßòÛö&Ú[k¡ øà3&ªÇÂÑÌ’Be”Ó*…½hb*š(ߥ‡’º4QíÈÓå4QŸ[0M1j?¦f|g)l*óáú–ïǬ-8)bÐU;t¤Gi¢:evÁ Úípeˆ §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhνÀLþ —2½·ÜDQ\ื*ÏJ•$>M4UÚª*ÇêÅòwÑD}k-(©GG“Às4Q=ZÎ"±2ñuÒiÑÄT4•& ¯Uýó×÷kL)_Mõ¹Vº^ µ3¢;s:lX\¤|4¾ÔÛ’rŽh¢ØeÖÙÆ—¢ ËËõÊýû–.ß6¾”·.s {ñEÕ'Ç—â±Ì¢rJ±2Uu/S/´¥Òí¸öW«v»ëÇ—ú\2©‰—ºí§Ù©ð½9Dß/TgöX†bÏÒbþìjíxž!§i­V…ˈοZ5"þšÕªŽ‚sÖ“¯Vu#=ájÕˆÞáÕªæÜ©ô?÷RžåÖÕ*AÝÊÐsp’–ÚŠ†Ö"š¡Ôò²“¤mªÊtÓ]cõ¾lµª½5*±¦8:(î}7R~ˆJ^4±hb2š(]&g( ÐD±£—z\—ÑD®å ’j8G/vor¥^˜3P ñûÁÎ7×Ui-½KW'·vŽÏÞÊkâj5<Å™¤•ã#œ$v":?KŒˆ¿†%: ÎYOÎÝHOÈ#z‡Y¢9·2²1Ľ”ýNètñ9ZÈJÎÞÞ™(Èè´Û½É¹÷Ÿ²DUå9gv&š]ü.–Ø£ch)ÇÑ)Ÿòs,Q<²ˆ GÊXÒb‰Å3±Dù.Ý” ¥+¯Ù‰ÛÕ,Á¤ ¦N)h?Õ.§Y‚aƒZ4û`gBZÏËuŬ«TÚø’Ò£4ÑÄQé…²…âWþñxšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æœ ÍK»]º7ÿ¸ä´íLìÜ,8 óóJ“e lªÊ¨.U.ɾ‹&NEGéÁsNÅ#§2ÑrðHY±[4±hb.š(ߥºPBïŸsjvœ/§‰ò\«¹ÃÆ2›Ý\IÞÓ„Ö Ê\|kv™äQšhNKx€<W¾®EÑ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍy™ÅGý.RáÞjFè›9ÐÄ9©>ÙÞÄ®^Êcõ‚_vk¢½u=f¬ îÑy,o§‰ê1£x¶øwË ëœÓ¢‰©hBë,Ý„š&J‹”‹—‹­ &MLš‰&ô»LÂBø<ÁÇëϾß$Ä»'tÒç x'fÿpÉ[ÌÈiONñ*M€»ä*ÂÍ­ïjGˆgÒDi4p.¿MqÕÞ&M\Ÿ&¶<:JB±Õ#ÝÙ½¼5%¶‡ÇDpâ=·˜33E¹AYJsÁNчš¢ëwœ‹)¶ì<ÙœºJñ6^#{—0EŽÍ”NÕÎ-·îD9MŸäØp•Z¥C÷&/97ÐuOùɧªž[µ6u’€Ñx¦t2§‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš(çô—ì(¥Ìsð%ìtñ诟tÚ&5!>E•ètñõé¾îM”·eZJ7 –Âç%ˆ­ÊsÆ~S™x?K£Nð |ô»L!$¾RDáõg߯Ú-n$íDù¹ cŒÞì?j‡pì½ ¿– 0£zÐŽÐÅðÔ±¥Ñ\Ÿ™Ú×îªûYºÎœ&6<:>Môˆß‡& ¶YNMOH=z»i"7îc@hجvDt(Mø‹s a=Úß,•i¬ÒuÛÔ_ b·y'.îÄN¹E„6•©ü4ibÒÄH4¡ß¥ø|½ZÚ{Ùtê¾7Mäç¢'ôÁœ£‹2{8ò¤“»tÉ_/gù'õèÚ9X«¤SÄÖFu—älqae“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš('&gç¯"½…ÜE‡´•½‰*5&"¶¥&ìÞDV….H°Õ ÝW!ì'ï”*ЦwÐÅónaוô…¢­ŒÝL;ib(š L 1×Íl–›¨v÷¾…]ž‹NDМ£«Ý¡9”& æôVk4<ç‘Ð'CiYWs£/ÔãóÑ1fÂø\ý¯ïÿöóÛ·e$ÿ£´A–Âܽ {¼³œ}0ìmQ¶(j¼ød²2ñYØÓ_=~ÿøKí~|óøKùËÄöÅ_~ùß÷o_½ü¿ÿŸOhöôÛý«þî??ê¨øR?|øéÇW/W ”™^êHûáƒþ¯^þËã}XþŽ5n©åwï_üöøð¬Oþôññþç·¿©«>||‹—ÿV> mø5[˜~ûö†¼ü2:”ÿª\^v•´øþ?ó#xü¯Äúû/úô¯ê§ÝÙ:?ßn¥x%ønŒ /þ¢ïuѹ:¯ß ržþ˜3¬áfX[æ "'¯Bª8Ñoè\ffù–—V=úÿaò‹ßkrUÁ6ëáW!žròëÝarC”Šž¾9äÑEYÝs Êœ®dûŒ§Õü<b^.f2]¥và¼9—˜Ž ÈÙÙBÇÃéBÌõblõáù)›/ž[£:'DÛ;ÏåÖ!pb4•EG&&L r|ý4á%Ûì?¼D(ÙÑÚîÈvtäðBþâˆ[IšÉú“: ¡ñ­Ø-뀟]¥Ñà‘½ÊbÇ‹04±ke>ÝðèøØÕ#~ìj(Øf=8v5== võèíÆ®Ü8ƒ#ïÅŒRìü± ÄÉ…píðÇ6©i°£äE•pÆaéú–|g4Q½Sk{Çûi¢´¨¿ˆ´KU;^Å41ibš`A{L:'hÓDµƒ½ z•çbD°zv¶sxtÒLÑ©öuš¥§+÷øvO/v¼(rMäFƒS„¶8™4Óœ&6<:>Môˆß‡& ¶YNMOH=z»i¢6ÉÊâQíðXšÐX~qa&¶IåÁh¢¨*Ùõã êåÎÎÔ•·ö¹8Û;è<š(-²äŸ¶2’¹71ib(š%~¸–¦éõg߯Ú9¿7MäçFç8×=³]vÇ%O:ÑN+inbîÁ) Æö&}±‹x.MäF£zÒ7ZŠÝr‹éMäŸóßßj—|ûÛu{x\°ÄùÐlžƒ½À0^"i FñøáÅw_¿xÿ!¿òã:#ùT:¤ÀôŸ×ß'‚/G ¬÷‰ ïßý˜«_Ýc ­@ï?Ô ØÏy:öÏÿôâýO?}¯Sµ_¾{á~ÿ»¿~û?:~¨7H~øå÷·-£èïáíýãå÷_¨„7ý›o>-þïÇßýôÍoðþëûïþñAÙW?¼ ôÕ_k°{¥a?“YŽ_=¾yõèÍ÷ïðë·oþúîkJ>}ýmx€¯ßÉ|ïà}e Ž”ÄÈɨcQì–é¯'K®@Bãã³dø}X²¡`›õà,Ùôô€,Ù£·›%kã¢ÓegG)æcz„KÂ\zhm´ß"6Ʊh²¨R Ñ(Tí ÞM–·ŽÀH­X½³H†r8MæÅEcoª(“E!¾I““& Éx!”䟟Ô|ýé÷«v$»Ó¤>—”Û…ß‹IÇžtCAmôúø’JO—°ÝÓSéé|n †"=¨BSœxšiŽÌ‰bããóDø}x¢¡`›õà<Ñôô€<Ñ£·›'6E)tǦ9"p9mêÊÞT•ÀDF*œ¯4MUäuT¡ªwV‚¡zG\2R“V»3ïÍ”õ+£ö\± 8“¦NšŠ&Ò…‚K¼´¯e»eVìh"?W笩FÿÉ·æÝ› )ï¿E”5šÐ>. 
ÉŠAj½m|Ña0Rð¦zt÷6¾lð,vhO_´E•%le(qŽ/s|i|‘‹óúÊ•t÷ŸŒ/Å.î=¾äçzÊg ÚÛ¶ÕäØ?Ñ…éúø"Êg4œŽA†RµsáÜ{™¥QaHlqâgÁPs¢áÑñW«zÄï³ZÕP°ÍzðÕª¦§\­êÑÛ½Z•½˜Q Ü•\žû¦ÃÑ0­#íz´HQ_Ä– <ØjUQå“Ñ«À?«¾lš(oùÀg´½ƒg®V•€Ð ¿'˜41ib,š "íYú4¡veš 9‘¾±ÇPìŽ,ñC|Áàh%Ë‹wÚƒ½sÖ„½Ø)á™4QÅa ¶88ÏÒZÓÄ–G‡§‰.ñ»ÐDKÁ6ë±i¢íéñh¢Ko/MÔÆ)qrbG)í|/.¼v/³JÈ›8í<)Ov†¢‰ª*0cûœÖ“Þ×½ÌúÖDü ƒetç¤--¢cA°•éËûI“&¢‰ü]b Š©¹7Qíh5÷¡‰ü\rŠ(`õµstl–Çùbÿuš€[tp!C)ÔH~ê½ÌÒ(ê™LqŠnMôˆß‡& ¶YNMOH=z»ibS”‚ƒsFRrð¸BU‚N9 ðyz%‹&Š*OÀ7¨O÷u’¶¾5æ’›h{GÛ>&J‹Á‰qÒ©ÚQœC'M Eú]¢wQ§Í,/Åw§‰ü\Œ¬5 b€ÍòB1¯ \§ ¯=˜P[i±û$ñÈ 4‘Õ‡<³)Ž)ν sšØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Üx'1Ù!4¸DŸtª×£}ÐX?o[*ðXõ¬ª*ÔiE[½r_4Q½“’´sF>ÙQ:&J‹“—d+cž{“&†¢ ý.‘¼†éçõ¤^öý"î¾<—C®×hFmdf:öÞ‰[¡ ,‘7¯÷´'ìÅÎI8•&J£…¨l¨Ú-S×NšX™&6<:>Môˆß‡& ¶YNMOH=z»i¢4ò¸B(§c³|°sò´²7Q¥‚÷á©Áu »ªŠ:­h'‘}RÏw¶7Q½CˆtƒwâbßépšÈ-F`!¦²è¼71ib(šÀrÒI’÷í“NÕŽÜÞ4ù“c0ûÚùxäÞ„¶áóªÝuš Üƒ1ijsO±Ó ù©4QM¬(älqˤª“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnšÈ'DÀvIÀ*Râ±÷&’À%E^¡‰-R““±naoSñ¾r:móŽ_œY>œ&J‹)éTËÆS³žÕ¤‰¡hB¿Ëœ‹3^I³ñú³ï7/ÛǽiBŸËམ=[ýøØ ä)„šàyõŸv&ˆjçžç <”&J£ Ñ[ÛÅŽå'M¬LŸ&zÄïC Û¬§‰¦§¤‰½Ý4Q×a8ÙQ*R8”&"ò…Vh"Ký[GdKŒ&ŠzP¢íhqáÎnaoòÎ2_×á4QZ$h×d®v˜fòICÑ—:EJ Ò¬gTípqoa'šàRÏ(§;sVÿ!5;²žú `¢t=¹¹³Ž0©=a/vú”Si"Oåsº’%Níf}T{šØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Ò8†ŒËÁÕ½7‘"äŠ+4Q$„ÈÞ9[*EEU 9°­>ò}eˆ­o-¹ Ê ƒ¥,~ÛÃi"·rÂ)S Ìꨓ&†¢‰)!ç¸{>›ýÙ÷‹9µßÞ4ò &Ÿ]°ú9 t콉P²ý]§‰˜{0iœ5na;Lp*MäF½k)λy Ûž&6<:>Môˆß‡& ¶YNMOH=z»i¢4®ÿH ;JÁóRØûÒ„Æ{åšš(Åë×õ•Â`4QTQÄÐλþô–B÷Eå­™‘ÚÞáåœýhš(- åW²•ÅyÒiÒÄX4óI#í3ÞÈéËI#¿{N§ü\Ïù2B´úÚ=/¿½çI'wñ9CìÊ-ìT"´$0²·¥:›Ó©4= q† Ø…0iœ&6<:>Môˆß‡& ¶YNMOH=z»i"7Ž9Cµv”JÍ›#Vô+4±Iª¸Áhb›zö÷EÕ;L.ÚÞÑ&N¼…]Z̉èñeË\\“&&M @©œ` Dܾ7Qì<î~o"å=AFb—èÈêuJ,ù,¢»^µd|Bq±MÅŽR:•&J:*ŸóÞ‰)NåÁ¤ kšØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Òx.7áÑŽR>ÄCiB4Þ¯U¯Û$a°êuE“Dºa8 +Õ2¾hš(oPg¡öHNœN¬^—[ä’‡Le@h¦t*v.-R/ïDù¹ž¼6û{ý±{)X£ /LÎY4¡véZ™½?w|AGž0˜êÑ9¸·ñEß0qL7xgY4èøñE[¤R°ÖV†a&ù˜ãËPã ]œG%Î`¬V;Xô¬Æ—ü\v!ãÌO¶#!8vµÊ%k÷òHgˆ@18C©ÚùH§®V•FÕ‰F½·jâÜû6—!µªGü>«U Û¬_­jzzÀÕª½Ý«U›¢”ŽoÇ®V ^¢¬$ ¯RdÇ`KM~0šÈª¼s¢‘üõrg÷òªw”© Öªvg®V•sþq޶2t“&&M FÄœ³ö=ŸÍFj‡ûŸ¤ÍÏÕ¡'x»gsk'–ö£ ¾$uþušà܃!ˆ¸ö B®±ÊJ¥QFtälq´§'M¬LŸ&zÄïC Û¬§‰¦§¤‰½Ý4Q„.¢¥8â¡4Á!]"ÄšØ$5€ŒEEU®R7 ñŠú/š&6y'˜2°´ˆú展¡B줉I#Ñç=eH,®}/¯Ú¹½—çFrhö&ŸÞûœPV²H…܃9¦Ð.ƒ\íˆO-ŽZÕ? 
íêÑOv0³|˜ÓĆGǧ‰ñûÐDCÁ6ëÁi¢ééi¢Go7M”Æ1oG)$98ˇ QÇÙõhObtv´Ï7ÖÇ¢‰¬ŠÑ%7 wGÅ;¤XíÂyÅQk‹)±sl+‹4÷&&M Eú]jèÐ>ó<ÓëϾß"ì¾7‘ Ô…ˆÞŒ{1„CO:±¿X)ŽŠ1÷à\I.´wQŠãssæFƒýdÄ'‹¤ª“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš(C B`F©Ïkí3P£=âz´¿]*vÒ©¨ÂèÐÈQìüG­oMIÈ9Û;„'æ Ì-æ„g¶²è³¡I“&  ý.ƒ8 ÍrFÅ.9Þ&ô¹Q'è.y½‹Æþ#3§‹¯×FÅ”;0å+kí]ì<ñ©0‘MÎ'Ïd‹‹3¹=Klxt|˜è¿L4l³&šž&zôvÃĦ(%pìA'Í.:Ò®tªRƒCc‘¸J½6,ý™0QÕGÉVŸû‚‰òÖÀ ˜ÕîL˜(-rƱ•Ñâw›01ab˜ÐïR<$‡Ø>è”í\XÌ‘w‚‰¤µ¡ä­þÃλtäÖDÔ ½[£ É=XÇ3o$ù(vàÎ=è”Õ¯FYˆLq‚nG5§‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnšØ¥„Þš@Ñþf©Dƒ]›(ª¦Èb«gO÷EÕ;œÞààO¼„­-¢CJ‰Leúî3eउ±hB¿Ë˜"ø”Ú—°‹/ríDúÜä"±Øý'¹€|äA§\ÎHÖ¶¾ÉiεÒ(5Ïü;‡rê%ìmâ–Ki“&®O[ž&ºÄïB-۬Ǧ‰¶§Ç£‰.½½4±)Jà±4£ŽÐ¼’€|£Ôà†¢‰mêýó„!_4MlóúóÄnSƳ8꤉¡h"—‘}ÎÏŠ-š(viošÈÏM}4ÊU;#÷&H..ïÒÈuš€ÜƒóBŽoîMT»O½6Qõ$îÿØ;·å¸$ _ë-:x±1³k•uÎŽ˜ =ãUŒ=VX>\¬ë&Ù’{ÍSð`[£Ð»Ì³Ì“mf¡Iµ¨F%ÐÚå!x¡»“•?uúP…,“ënm7ít’§‰™ˆ–O}ÄCݬ §‰l¤ ¤‰>z{ÓDrŽ …Ûµ]Œqì—°µ ˜éíŠ4;Q*Ǿ,šHª4È¿B^ÛÁ#;uô@ŽŽÞØ1:M$>𙊲2o§ÃQ'š(Š&¨^íÀQ³ÉÒD²³ÃÓ—kçó¤öCvzTš€J£Âíg£ZM ØI–]ú®í”1{…‰ä”ixÅù'˜f‰™ˆ–}Äݬ ‡‰l¤ „‰>z{ÃDrŽÞ(å^ ͸¯M ØÊ£oXš` œI#h¹Cuʶ4‘TsÖÈc•óÈ–&ÒUkBàåèh³¿NµÇàU‹ZG¨8ÁÄ%ÁÕKšÇ£qÖfa‚í\PC¿6‘ÊEšß‚[vä\#gtÒ6Ú&˜püJ8­¡d§7FÂ]‹úûÙ·‹ÍPÖª›œ¹uŒ²3ë_ó†œ=»Zþ¼º¸½N_®ûùì?8€Ïé“o“õóôñëO—çSmùŽ'3{¾íU¹_æ¿‹IÑû·4UåuãöñÕÕâüzõî(Æÿ{ýËâôt®~=ÖðÂ!ÍÀ-Žé¯–¿Þ̲üð#Z‡Š«ÉñüÿyßR¸°?þAýj €]ª?¾!/.GÔ¸oVËëªtÿñ«û𔾣à®AzË÷ÿþõ_ΙN6¿¡ Ýü—¯^üùvuÊ5“¦ç×§Kþï'ËËÓ‹Wg4…£_¬^ògév}IÌNÍãUúà±ÜÚZµO”}2êyÖ>k;½×w¿k§A;g•,.¨énñYw&¢å/‰ô?Ì’HFA7ë—D²‘.pI¤ÞÞK"É9B$ÿr/qä¸C¬š†%–à”´Æœì¶åbúM—D’*m¬VVïŸMøï½$’®šÊu&ÈÑÑaû«’GïÀ‚—•¹¾‡\¤Mk"ÓšÈpk"T1=Ÿ jm6/mmg7Ö$Zár-‘AD±ÛöV™8âšˆŽ•â' ;¬LåƒR'Dv›Ùšr”È0葸ɉa†hÅÁ ڈʀ°>Ÿì‚kw¥Z¸R›zRDï¼ä”ì î•ÖØi¢(.(;eê§á™ˆ–Ok}ÄCkݬ §µl¤ ¤µ>z{ÓZí<†?YúNä¸ïÖÓd—Ƥ¦ lݤ†Âh-©ÒA)aT®íô#£µúª‰Ã­i`öGkÉ£×ZÇ÷Í›hm¢µ¢hÍò)"F£vyZc;½šÖ2þ6 raÜÎPÜ·ŸIH œV¼K;‚ ”ìL»Eh0"Ãx‘>&ðù'íV„ÁŠN]WAIÝ-Ùiô­œJk¥®Ò|¸!=Û¹¸±!çÔé¡]xƒè”Ær©­œ’]Œ~¯\ÊN©¾¡S²¸8ei#Ñò¹´øa¸4£ ›uá\št\ÚGoo.íÔKi?îá–¡ ¸´›ÔhËâÒNêv‹K»E'øýqi'e~ãô׉K'.-K©búM à²\šì|œK¹\ŒˆAY©‘…‘¹Ô)hÊÓ๩; JÌ/s±]tï'Ò'È©¥-ø ‰³ 6rM8Ñ0OÌD´|œè#~œÈ(èf]8Nd#] NôÑÛ'’sëœÑZÁ‘“¾EÞøh{{’(NF–j·K¿%N$UÁÓìdõÞ>®i꫎‘ßÿ‘£³ùÅè8ÁÁhï¬<Œƒ†8áÄ„Eá„gLà)&ä5$;³ñ  œàr©a{4Jj@¼Ê¦ÇÝ”¨éLJíL¨›º¶ÂŒ=ÙÛ﮹äÔi>Mg§]sò<1Ñòq¢øap"£ ›uá8‘t8ÑGooœ¨[¯ƒ‘{)7rÚ7«T¥±éDš$Áó«V¾…T§Ë‰Nê½ 'ê«ö}›è8½?œ`šAZµ˜c Ø '&œ( '¨bo©Í˜ì—ÉÎm&ê'¸Üïv–P>ø1W'|ôãV'bů¯8íðIv›g=çvXI¹!"÷/Ôá;áåšd§ö|ªfrJå[ˆ£:61Œ49ÍD´|†é#~†É(èf]8Ãd#] ÃôÑÛ›a’s`„ $ÉÎ|Žs¡ ¶éÍŸZªóZXª¯íla “T zY}xloþ¤«Få£),ÙÅöÇ0ì1­ˆxY™Ñ:L 31LQ yç?=Ñbö¨Z¾“òÌÜc:² ³àTÛ©Á“CP¹œ,;:#^4ÙiwƤž¡ œaˆ (¼•ìœiNZ‰·‡Ò4•ìÔ{låD§HãšâS™§ø]VNµì5'3¯”ì¬õ¥MH•‡èäÑU:÷Ø& ¢ãAïsÒ€Heœ°{')C5%wš& eMøüi˜1¼Ÿ7íñ;ÙEüÀ‹tþµ‹ b¾%;;êYܚèN¼ AKQTJí†RéuadØðÚJ‰•“Ýf:‚}<øLNÑ)/< Nv1âôàSz¢•‰hù>ûˆæÁgFA7ëÂ|f#]àƒÏ>z{?ødçi3¸7b/eiX9呯¨OnxðYKpªTWØ^ð¤J+…=ÅÉà‘mÞ¨£Cá1QŽŽV{Ly”?¯¥Ú²rò¬ÕëˆÐB=]壧úªi ù÷×ÖÑñû;Ù£öˆš7ƒÈÊ¢›vŒLàT8qÅ ¢1!»ë=ÙYoôÀà”óÿ°‡fÔœ<ä#ZcÀ *ãq:ûd®¶‹-÷QÉL©0KQw|~†à”Ÿß˜½žNX; ª8¿qtâÄ0 “ÓLDËg˜>â‡a˜Œ‚nÖ…3L6Ò2L½½¦vn8Õ¹ÜK­Æee*g]ÃÔX‹-¤ZUÃ$U¨ƒ^¯¶‹æ‘1 _5ol1XbóqìÃ3LRfͯœ¨,è)¯èÄ0…1 0hEbvñ§¶ƒ×mb.×óIéN"²s£æÕXñîr«›&jA)Ù©ÐêPw-œ$O…ñ~e¯Há!;ëÚ9‚S] à ÆüPšì”i·â„²S^[ã¼®’Ó[wa„ýýTuËh|É)Ùmyo|TDd§Q§NH‡~:À^œûg"Z>"ö? "ft³.³‘.ûè툵s~ÝØ‹½TTN{ôD •· GO¬%áÛH eí¬Ui‚q%U”\ˆØ-:~/F×-BŒQVöΡ"NˆX"jNšä´ ù#k;ëGD.7¢Vô¯Ô€5²8î2u­vû+Ê$ÀeQEaEm§Ú! 
ˆã5Áh”`íT«óë`ÍTšÈÏòäØN¡i…¥ÆˆN fµ6Ô$»à[í¾4Vtʽ7è¼à”{ïý¦ÿ­"W%”Å…‰å©&¢åbñÃbFA7ë 1é ±ÞÞ„ÈΑ;na¤¶3ãNVДþ·–QGq\c;]ØFÈ¤Š†*/ìJvÚ¨ÇEˆut8õ¦–£c6¦›“Gï¼5-îÛfj›‰'B, “ ‘1Ù)?ôd©\äíëùk;e͸‹ˆHÇwÛÑò›aÑÌgÁ¨í|h‡ˆÒË\¶² ‘ˆCÁ©u Ì^ÓÿvgÃt"¢89ÍD´|†é#~†É(èf]8Ãd#] ÃôÑÛ›a:õRÎŒ»Êe!V^71L7©Þ•Å0Ô{ÀÇÅ0Ý¢³ÇÖ;)e¦L~ÔÅ0T1cDPò!“]~#$•‹Ê‡ùüÙµs£fÁ †‰‘B¿a7áèÐ;õ’v{Mª×MœÛX¯™p¢až˜‰hù8ÑGü08‘QÐͺpœÈFº@œè£·7NtꥼwIÄUYÝ´i®›Ô-çqü¦8Áª´¢ §°|ŸÔo!þ(p¢ŽŽ÷ ¼ÞWÛm<'’Gc,ù¾im§'œ( '¨bFe›=Ø£¶3vèS¹4øÝ“;ò’Hô¨BNDM‡rJ JÉÎm*¥kO ÝÄuïÃ?¾8±zùùâröVÛY=·;û×?ÿõOn-÷¾î&žÿþÉW¯.—Ÿ/os#î~éXÐ!•óäo«sB…ú‡Æ•n%ü×{%Ü_T×¢’˜ž=½k‘;Šy§„Ÿ¡k)¶ovù£/ŽþFçû[òö×]îªf~8Ó³Õ q+ê¬êvz½Ka_œ-¯/4`ÏÞû9+óN!|ò|yúâ³ÕùO[\v¿³©n|ýô“m…Õ%-ÂòäèÈÂÑBZ<ò‡áð$òÃ\⋨vªOY¯;]Ç—Ëë‹[š=ÝWÓÍ5Áì$4[ìn÷ðÓå9W‡¥¥µË¥ÌŒ½µi>uv¹Q.wcôñë4¢Ï®x(?TþP{ÀççDT(•ùõW¼Ù%DmÜïÔöŸ|²|;yX2Í‘z•ùéµÜg4S¸8y¾¤¶yr=ß¹ÌA»•/~¡êñåòÅ’fá4…œÏ^¿~§÷­û‘꾃áé÷ȳzÀ8X?°ùæžÃ¸Ÿâ^ûþƒÔøÁÑÌéЪsÈÇ•âÆÃcµðËå‰uQ;2¦k{³ÛÝ£ÙÈâtõr;ßõ–¥êýùâœæŽ'MÏþç{ªTï|ö—óšïPò“þh-è*]ÕÁƒyÔÎýûºì/.ßö õÄÇ’zÙëRê~£þI½ÇºžåAœ[½Ñ{ìæ¡¾‡<×"õ/ß@¯²¾V|ýÃëƒs àâ`NýýÿŒ¦ ëßé7‚®‹›êkþ º³ZÓ-=Y¹ý@u'o>ø*ã@sßi]4¿ºHÿ[ߟ«ÓeõjqvÊuïÍ›öWÙvètÄ;Ž–; &OÏÎnoøQé|ç‘ÿ“3Dýóúàý;@#âíÍW«Ôuþ»óÙìj=ûûèææjut{CÃ1<›-.W÷úg¨?cã.wQ÷g,¯^Õw»ÀmO5|¥ÁÍçŸ*°]Ä–¯`yá,_VÖ¸è•à”ìŒnu¦ ¢Sèco§dçB»+¢S~š‰Æƒä”íŒÙëÊjrŠ^!SOmg¦•UqÉ,ÑòWVûˆfe5£ ›uá+«ÙH¸²ÚGoï•UvnZ’Š×vaܚљʡmXYMÀð ã²Tк¬•Õ¤J«èj¡>>®ãÊÖÑá´ ZŽŽö{ÌX™<òAëdenã]™ieuZY-aeÕsš>JÈX™ì „¡WV©ÜHc›Qœ“„qWV©gµqû‘ËŽÏÊ´  ']´c)c¨´Ç ƒ”0#Ùyß*‰•ò8A¸4Ùù°ßãÊØ©Õ&_Qœ´8I3âLD˧>⇧Œ‚nÖ…ƒS6Ò‚S½½Á)9'ߘ?çymgÇ=®Ì_ÃpJÔ KµË§¤Ê+m„CDë« ì ·::4c±ZŽŽß8)htpbNEºl+*sJMà4SYàøˆ.òå•Ë‚S²S8x–.—Ú,D¹iGš¥ûqó8¢qÎ4€Sà\‰ÖhƒQPÊ9µoÅ0 €SLý=ÒÐwšìlØïâO¬;¿à½Å¹Í /Ã4LN3-Ÿaúˆ†a2 ºYÎ0ÙHÈ0}ôöf˜ä\§ Kr/ÆÍ4¨u¥cÓkuµTGd©dTÃ$UV.‚¬Þ÷¸¦ŽN4ÚY9:vŸ “<«‚jqß‚š2 N SÃPÅŒÔ÷{oó Ãv>†Á³tDξ¡lP(΃#3j–[©+¥é˨…õy¶S¸‘¥#Ç0R^xäÔë1êApÊvÐΩZç zù§l§÷»ø“œò»MÂ!^É.†iñGœg"Z>8õ? 8et³.œ²‘.œúèí Nì<(OL„b/ÔÈé ƒó•vMé “ú¬‘¥‚+쯤ÊÂîˆÚNÁã§tÕV[ݦZµÇíÉct¨„ÔúÉ.l4 œ&p*œÄ;gߟ$û GtˆCƒ•‹QçÁdc¦h7¦Bo 6qS NÖó´ ”ìbhõ¶‘µÂþ5¬"ï^Ó^ÀÊd[ÂZ6-¼!'ÜWyÐÙíwvÎÚýqÓÚi$2Uèeq¸‘Åxâ¦mâ|D 禞â়‚nÖ%s“éÒ¸©§Þ~Ütç\G…`Ä^ЬÆå&çuLØÆMwŒ ro!–”þN•µˆ:Êê-<¦·î®ÚQM´ GÇ™}-8Ýy䧦…²¸y,óÄM7ýÖÜTWLôÖ¼¿×øÛÙÙ Ü´.—š,µ ±ÛÆh·=¬ªtðà¬a€‰Fa‡V £§Kj·i΋ ƒŒQÆK˜@vZ·:OËÁ)Pg…ÊØ ó÷„í▤£‚Sç `Qº|Í85̈3-œúˆœ2 ºYNÙHN}ôö§Ú¹ƒ˜ÛO|o·mÍ~È4 !T$¦œj bl#Õ™²À)©ŠŽföQVì#'ºj¯q“Gr²3và”<òûÀÆÉÊŒvêMàT8A:ÏJ„<8%;âÐàÄå ‹bBC]äÈ N†ZéYÃøâ´â·Y4a'; ºÂd3ÍQaºRšj8á¹L²sªUj‹¢SÞ¼³^pj´ÕíòQ8%:¥Òt¹}wvNïw•‹¾F´(‹#Z›`Mš…g"Z>¬õ? 
ö?ÿW»7u2n¥‰ê4gá~¡Ýf—xÖtr§‰ˆŽOgÄ_CǬ§‰n¤¤‰3zOÓDu®1fL~–Êšî¥ N‹ Ÿ4Ѥ‚ 9;¤jLcÑDQ%Áþ¡°C=V?£WtQ£ ðàÚDõˆše2 ³¦Ó¤‰¡hBÊ,½|\ùoöÎ4×q\£; Ä™\Iï'Oí‡tW"Åí*jÔ¿&ÂϼÖp,‘ä~vµ#9ýl¢ü®8’¦á¬Í¢nמM˜JÂ74ay—Ì-Ã~†T±Óp¹•&ª8㡸,dßtn;]Ÿ&fÄŸCǬ§‰n¤¤‰½Ó4qh–"¼6o‚¤Ôt‚74Q%°RžÉ?*‹ÑÄ1õñe4q(:ÌzMTùq“} ,?À¦‰M+ÑD~/Ù\еŸ7Qíì)·ø$šÈ¿›7ðÀÔïþØìòº¶ß„’ókšðü]¥ßA³ÙÑÍý&ªÓRT‘h,îù*覉7ÛÄND×§‰ñçÐDGÁ1ëÅi¢éibFï4M4çF¬L¡Î×ÖtbŇػ³‰&!¯®ƒ;'?´XvQ@:*^ÕGŠï¢‰cÑyÎN¸š&ª²R±p"VíðùÖ¦‰Mž&¼ž H…þM§jgvzM§ò»†–çí4?ÙN®ÌÂF‰€¼¦‰x8&€Rs¡«´Ú¥W)„ÒDuÊÁ£¤Áf»Bìp›Ø‰èú41#þšè(8f½8Mt#½ MÌ覉ê\@ã“YJÒÅý&¢LZü†&š ôAú±[,o¢ªRƬªMý‹¬¿š&Zt Ò ò˯(ÞGÅcÆ‚ôÁß-€7MlšX‰&ò{ÉZ¾UI¿Blµ“8½ßDù]ËãB‡ßÑÎPèÚ ±„ñ:o³²:‚0t>ÿÿ²s¼ñ¦ÓÓüG PŠCöØ4ÑÝ&ö#º8MLŠ?&ú ŽY¯L£H¯F“zçhâè,¥$×trS~?Û.u©›NGÕ[J_D‡£#wu¯;¨ŒRÚ4±ibšhï%YõNM§_vxrÞÄÏﺚH Ç»¼êtMÈ# <ãëõ%¯>ÈD™i¼«´Ø&¸•&ª8UÅ¡8&£M£mb'¢ëÓÄŒøsh¢£à˜õâ4Ñô‚41£wš&ªs1#‘ñ,%pu÷:"ÉËGg¶ÿXꫯ\’&ª*ÃĽ.N¿ìù»h¢>u¶ù`±4’ûh¢x.E{Ç7!Ø56M,Eù½”¼´$sëÒDµ“§û;'ÑDù]—RÅ€FãG w\™7AÀxKXG:ê‹ZºÿRZíÐñVš¨NK y¯ˆÇ/;y*_»iâÍ6±ÑõibFü94ÑQpÌzqšèFzAš˜Ñ;M‡f)¥k+Ä’–ªã¯.:ý_Aä­| Td-˜¨ªJ–£Òêÿ &EÇäF˜(t€°Õ.ñ.é´ab)˜À²IGO*Þ…‰jOÝIO‚‰ü»š(, >¢T;”+&ˆ’iâ5LPÁ$l½;¿¿ì0Ý Õ©A"ú@Üs׎ ov‰ˆ®3âÏ‰Ž‚cÖ‹ÃD7Ò ÂÄŒÞi˜¨Î#o—˜Æ³”ûµiyÇü@ˆ74qHj€¯EE•¥PïµTþ¿z×0àÁ@³KqMT™d¬L‘6MlšX‰&j‹kâ¼ÅL]š¨vøt=ô$šh­°Â‡ãGÉ®m7QŠÐ–[ïh""o2ùŒ”f;g^m}9 >¿m}‰ À¼ÏŠ¢óô-ï†õ%+#-Ú†Êi}ïõe©õ… =oÚSô¿V; ‰³×—ò»¡è®ý‘]íˆìÚõERJñf}á¼CÌqÑÒúõOoýZUæi8ÆâvZÞð3D'¢ë­šÎ×ªŽ‚cÖ‹­êFzÁ¯U3z§¿V5çy÷jéƒYÊ®½HKló­ª àd$Ÿý½<ìŸe‰ªJÂÌ>X©8¾Œ%EGžZž\ÎÅcFmF‚±²Œû›%6K¬Å.¤ù·Ì,‘í„ð|–ðüÀàÂ2?ÂÏ%r.HÊË,™©ð5KHéy#è_£­v)Ò­,QŠ!G‹“´“ò†›ÄND×g‰ñç°DGÁ1ëÅY¢éYbFï4KTçÆL(ÌRqm‰‘¼B“¾¡‰*!R²Á7®öH kÑDS Žcõy·ñ]4‘Ÿš$t¾†”ÞHÕc~1•|¬ŒX6MlšX‰&¤Ü-}·»4Qí,~2!å~,ˆ gmM‰âÊ“‰ô@Ë«Àk˜Ð2€3Qù ˆFµSç[a¢8p£AIµK&F»ÄNDׇ‰ñçÀDGÁ1ëÅa¢éabFï4Lš¥@®íŒÊ^:£¾KÊ«R~ Õ»æÔÔ óX=þ^˜ñúÔ$–\ÇÑy>Å¿&ªGE"ûàï&¶ën˜X &´@y`P&šŸ~V+L( ÇO¶£‹{‹Æ›£ Ë#‰9AÿëµC»·úxuªYúXœÂîŒ:Ü&v"º>M̈?‡&: ŽY/NÝH/H3z§i¢:72ÐD³ƒk+|°¤GBzCU‚ Ð ½£IU]‹&ªª0Óø`9ø²¤¼òÔ囚 H±Ú%ºñh¢zT„Q2eµû׬M›&þM̈?‡&: ŽY/NÝH/H3z§i¢:gL<¸ÕÚDúÕ½Œ¬pÕûÉ^80 Œ•²-vÑ©ª#q«ù²ìúÔÊ ƒ¬‰f‡7^t*kYdÿ`ßY&–‚‰¨Õ¿=ù •Q³ÃÓs°£Õ!$t·hv×MèÃò2¯ë9A*#5ºsP³#¼&šSË´ÕÿÚÒìžIgÃÄë]b/¢ËÃÄ”øS`¢§à˜õÚ0Ñôz01¥w&šs¯ÍZÆ³Ô‹Äæs&ØyÃýúhâ˜Tçµ*:5UQZ£Æê_Ô£ú›i¢>µ%@îß·û‰¢ÝGM†Ê7{îp²ibÓÄŸ§‰ò^@]éæ`7»çìÝsh¢þ.f?Ñ/EÔì äÊ£‰’çMÂðš& Œ`*ýœºšÒ­IØÍié¨1§´/: ·‰ˆ®O3âÏ¡‰Ž‚cÖ‹ÓD7Ò ÒÄŒÞiš84K_{ÑIÀò†&šÔòŒ¥:¯Õ˨ªò„ù¯ cõÁé»h¢F§$ù‹£ãÿê?z5MT’€m<@Ê3lšØ4±Mä÷Òͥѩٜ]Ñ©þ® ä:?–si¯ {Döùš&°ŽàPƒnêV³C¹—&ªS Ädcq’tÓÄh›Ø‰èú41#þšè(8f½8Mt#½ MÌ覉ê\Ëÿôf©Ðk{M?Þ±DJy‘±P…µR°›*ã$öÁJ¥bßŇ¢cOKåå,QŽŽÞy͉+$ံ›죉 KÁ—£0ïV‡mvd~6L”ßUuÔsú±KqíÑä…áu…úÕ)úe «] Ã[a¢ŠcB$ V;z*à±aâÍ.±ÑõabFü90ÑQpÌzq˜èFzA˜˜Ñ; ͹$Žf)Æ‹[M(=Äße`“Êk‡mªòjͨ÷/»çÔ¢ ãèˆÜxÏ©z ¢ :ceù߆‰ +ÁDÉ:N”´Õ…Ά )'Š:¨ÔÖìDüJ˜ˆ(½¡ }”ïy,}¥Å.ÏS÷Þs:"Ÿûžnšx³MìDt}š˜Mt³^œ&º‘^&fôNÓÄ¡YŠ.n5a”.ïî9“ª¸MRÏßE‡¢#(÷ÑDñHTÎLx¨,›íÆu›&–¢‰ü^šiWÑ¿çTíÎo5Q7ÐP’ÆO¶£+&¨øàÐ×0am+ ’&Š:Þ›€]ÅA(„ Åð†‰á.±ÑõabFü90ÑQpÌzq˜èFzA˜˜Ñ; Õ9ZÞ{Çx–B”‹&ÒÃøÝÑÄ1©«•sªª(¯Tƒ”¦þÛŠÃÖ§æ”`PI½Eñéæå0Q=*Y’4V&¼ï9m˜X &¬l -S.R&ªøéGåw•M‡{ôlòªÛèy0ADéMÛ:ð<€Y9úB‹…¦[a¢8HC$«v vöp—؉èú01#þ˜è(8f½8Lt#½ LÌ膉êó`8ùnÑq¾s}‰ÒJÕb¨Œù)t¯/{}Y`}‰GÊ“aiŒÚí‹Zí Î?ù.¿KJyÚëÏÚÕŽâÒ ð|Ów;+È;Ä(Õ;| 4ÛéÍYyÅ©&Í/ÅiڌƟ!:]ÿkÕŒøs¾Vu³^ükU7Ò ~­šÑ;ýµª:á¼ÏR@÷E5~¾;únò®n°ãüy¤ÅîÑVUy%ÓÁM¨f—¾¬“Q{j1óO¢óTiærš¨µ–+cÙGß›&£ 7ÐD¶£§šÚ§Ñ„[~ÖÑø15¹”&ÒCBñu[TLyZ Š7»·Þ£mNËÕ ~¼fG¾ïÑŽv‰½ˆ.SâO‰ž‚cÖkÃD?ÒëÁÄ”ÞY˜hÎóZãYŠùââãò×,qP©¯uòý£^’÷ÏU~ìð»N¾ž:’2|§¯‚W³DóPš¼Œ•ùîd´Yb)–(ï¥a‚`è^£­v'³Dý]*rd8²P.-(3„7,y{¹W݃‰j—‰ãÖƒ‰&Ž¡_˰Ù!îÚãÃMb'¢ë³ÄŒøsX¢£à˜õâ,Ñô‚,1£wš%ªs._„`-5;ü®rí©óBýKÆÍNŸ.]Åc@æ4V¶¯Ñn˜X &ò{éÉâ÷žÿüçýuU:;'¯þ®SÊÓÞpìWLÄÃHÈð5M`餔£ÐUZíž7ìwÐDujùÅðĉnšn;]Ÿ&fÄŸCǬ§‰n¤¤‰½Ó4Ñœ‹ñx–2¸¶“G<å M“Êk]sjªòºêý®{?v/J§ÿÕ4QŸ:À±Ÿšðc—îKÊ+%•bký¶¨ÍžÄ6MlšX€&ò{éÊhݤ‰j—ÂξæT—HÂûÀªÚ«þsg¶E VMTJÕ’ a¤W;p¸•&ªSW”~¨f§¶K| ·‰ˆ®O3âÏ¡‰Ž‚cÖ‹ÓD7Ò ÒÄŒÞiš(Î ™ÁÍî¹íã4ál´w4qH*0¬EUUf E«Gþ®¤‰úÔ”BS|Ð/:œgÔàñ[GötAmÓĦ‰h"¿—!ÆÁÒ->Þìä©ÄÆI4Q~×ÈPFˆ»•Ѧ‰¥h‚Ë™Cv š¨vìx6M”ßå *D:?NFtmA§D˜'×4!uçØc?»ÚÜ›ƒ] Œ*ª6;RÚ41Ú&v"º>M̈?‡&: 
ŽY/NÝH/H3z§iâÐ,Å×ÒGž±ÞÑÄ1©–Ö¢‰¦žuPçýÇ.ùwÑD{je£Vr½&ªÇ D@ce¾{mšX‹&¤4-=Ô»4Qí@O?›rÓJœ¼_ÈîÇîUQïó’°=ÓD©§ðš&4`@ŒôbÇzóÙDgâ*Cqù!lÓÄh›Ø‰èú41#þšè(8f½8Mt#½ MÌ覉êGÑMäïu’N¥ ³ô Ç74qHj^Ö¢‰ªŠJ¿ž4VO‰¿‹&ŽEGà>š¨‘郷Nžê'ošØ4±MhÍ®Ö<ý§.MT;9½™Qý]Oœˆ‡#Û-#ǵyJ¥¯Ókš°<‚•Q%õOQªƽyÕ©©ËXœÒ¦‰á6±ÑõibFü94ÑQpÌzqšèFzAš˜Ñ;M‡f)K|m6úÓ›f¥âbYØU• )ûêõ˲°ëSG¹…ÆÑñ§½—ÓDñhH)†Ê Ð7MlšX‰&¬Ü4¢0ç>MT;ä³[×ÕßUÈû׎—+k:a< Üê}ݺ.¯=eRNƒêSÞæ »•&ŠSœ4»71Þ&v"º>M̈?‡&: ŽY/NÝH/H3z§i¢:Wc”jvŸMăàMT ¡ˆƒoGÕÎu±¼‰¢ªùàìzµK„ßEõ©1Yøx±t¸³ÝDõ(Ì®(cÝ76M,E^¾ù'¥~» ¯•dåô³ ¯bóì58u¬vèW·®CGzCQGºf¦êsOÔõ…ï=›¨âyT¦¢ÚY‚M£mb'¢ëÓÄŒøsh¢£à˜õâ4Ñô‚41£wš&šsÖAæ_"¯Í›€G2Èótg¶wÉ+ñ'R9Ö¢‰ª*¯Ö‰ÒêãËúM´èZú :¡|M%³Ñ‡Ê|gaošXŠ&¢PB"‹ß{²ýóŸ÷·¼èélš(¿kb‰GãÇ_ÝO=3oBS)Åþr}ɬQ äÝšNÍ.™ÜIÍ)Kx‚±8²}61Ú&ö"ºÂYb'¢óÃĈøk`¢£àœõä0Ñô„01¢w&šs"RËq/u°|)Ld°­töo`â”TJy.˜hªØ(w wõY¾ &Ú[ –éÅÑá¬ÏÁDõ¨˜4V¦Ö­¼SÁ„lž² ùŸsä_¿¿ÅNír˜(Ï…ÔÊE½v±ã|çÎ7ˆeÄ:¦ ­-8å2Âô4ª¸={޶‰L ~ÍŽò¢‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰S½ë½¥Q%å-Ó»ç¤æÉ6UZ¦ ±zaý.šho]†cSŽ££/P{;MT†ÉÕã¯Î|ÑÄ¢‰™h¢|—NHYRŸ&š\Ÿã£>W½tÈýšž»2Ü»5%ð®Ç4aµ i½§Ùqzvo¢9­äqF‹&Âib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&vçõ6ŽT½9ý8l$ïhâœT“¹h¢©ªÇ‡Qbõ‡rÿmš¨o!a”­«Ù¥—9Çí4Ñ·^·KÆAûi{(zçÞDÚ²ÕK½Ç4‘K ®—&¨_ׯÙ僌´·ÒDqZÏ*¨Ó^Âd¥§‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8ÑKY*n.ʈð¶·?!•&£‰¦ 12ÇêÁ¾,ÿx{kÒ2F{âû9šh½H Jœ4;{ÉŸ¼hbÑÄ4‘7‡Ú¡˜ö‹U»Òü.?éTŸ›™Š{ÚO±C¹³˜ç ‹}³7á¥'î/Í¥‰ê´f©È‰Cq5SÉ¢‰hšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æ\‰ÁSÜK êÝ4‘Ëlß÷öKU˜,ÿxSå˜sʱúÕúÖDPó…Ñ¡×Ä•·ÓDõÈuŽ‘ãß!¯üã‹&¦¢ ßJ§S(×­ »Ù%ô«i¢<—Ð8iÐ~ŠªÝH’7 2¼¯Vµ .Aà~}‹fG(ùIš8'Î^2Ë-š8ž&ö":=M ‰¿„&z ÎYÏMýHÏGCzGiâ\/å.ø_J¥Ü ²Ÿt:)Uçº7qJ='ø®[Øç¢SÃÇhâœ2[4±hb"š(ߥ'4ÑîI§ÝŽ_N]Cí¹Ân©¿´Û¡Ù½µQKdÕŽ«1´,œû7KQû©™d“Þ[o‚¸.èÓ„Ô–®^«Þw•6;ÉV¯«Nsq îM4»´ª×ÅÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0MìÎk{)¼·zûæÙÞÐÄ9©6ÙI§¦Š¸¦ŠÕ#MœŠ=¹7Ñ<:bIl·“¼2Ä.š˜Š&jiDwñþI§fv9MÔç2(F-ÛÜøNš ­àBÑrLZZ0¤z »ßÒµõ"ÒDÇFŒŠ+°°h"œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9/=}v{©×ÉÒ-·°ëœLßÝÂÞ¥ÖÅ#‹¥êQ¤¿IM•³ªI¬ÞáËN:Õ·®S;¦FÕ£@Âèò|³K/‰ÄL,˜˜&r;@„ZF˜.L4»ü²¦}Lä– VKÛNAû© jíæk¦‰”ßÑDžz{Q¢ »;8ãlãËçê1ýÉBÿúøRÞº&@fŠ£DOŽ/Å# ~ Œ2®ñe/3/¾%ÈeèÐÔ/gÔìòåÅQës11Õ…°nûivÈw& Ǽ‘«¹/Þfˆå—I(­sÜ?“HݺZÕÄ1”ß('hë m¸ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­:ÕKÑQzåAZ¨ Èó›ÕªsRg+ŽÚT•~Çêí»hâ\tòƒ4Q=jùì¢BKM™ñZ­Z41M8 ©j@õúžåëi¢®‚ai9h?ÅÎ’ßL$šŽW«$µ–ŽRà«§t·ãg‹£6§–JÙ?¼‹s[ÅQ£ib/¢ÓÓÄøKh¢§àœõÜ4Ñô|41¤w”&vç(ýó»(ÝJfºÉ»rFç¤"ÉT4±«âDÖ¿W±ÛùWÑÄþÖ’ÈÙâè0?—2°y̦¬}šØíuÑÄ¢‰‰h¢|—uOÙÅÿ,'ôë÷ï·Ø%¸zo¢=WËÈ‘ûåÀv;E¹“&|#%74­o©7p»«U»²>JÕi&-Äg±8QY4M;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4qª—Ò|oÊ@fÛ ¿£‰SR-ñ\4qJ}Æü]4q.:YŸ£‰¦LÔrÿ ÖnÇiG]41MÀæ$ΞR÷ZÞng—'ùhϵBã QË.vâwG­û‡µ|ÁšÀÚ‚Aj†¥®ÒfW¾ðGi¢9UB…‹“l‹&¢ib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šózD#}Ð…šÂ½{D5aÞš¨Ê åMb©ž&Û›hêÁ(¸ìþó–ü]÷òö·FBî'æû±{³ßMÍ£mþ2IëÞÄ¢‰©h7‡¬©&ÑïÒD³ã—$ÑÖT€\†Žp€©vpë½<Ø +”¨Óµ¾E÷oHívô(M4§b б8¶µ7N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ+ö3¢ýˆÔ›÷&”6Jïö&ÎIÍs% ÿQïΤ±z=8§õOÓD{ëb˜âèP¯ßUoâ':.-Žñƒ·°›Ç¬ŽÁ¬f'ºö&MLEZ÷ æ²w3Äîvàr5MÔçªfËõ{Õ.¥{i‚ ²Ðš°ÖÒ3[°‹Òì,={ »:•$`”5qþ²y»hâÍ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍ9(YpIné÷Òle$}C§¤ÍUobW…–25¨?ÈmòOÓÄLe$Œ£ƒšŸ£‰æQò'¿›ðº…½hb*š°¶ç9co¢Ù%IWÓ„µ=V rg4;r¿7C,ŠdxC¹µ`ÏÜËkvœž=éÔœºðâ^7oM¼™&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâT/•ÿœä^{Ò)•9™òš8%Õa²“NU•B„Ô?P¯_F-:¥ßJÿ¶ ž£‰æ1cÆÿnå·Í‹&MÌD¹ÍÒÓ§ÿúýû­'Yýêêuí¹YŒ0HFTíL(ßœÓI2ë›{^[0©r°Mßìž­^×œš–†…±8Ãu ;œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9wÎ9îÎëÑ» ±U‚•X$îP-ñd4ÑTaÊ¢«/MìÑ1ò°îvú\-ìÝ£d=¨%ü§2‘U {ÑÄT4áí´”þ° »Ù_¾7QŸ›½´k‹Zv±Ë"÷ÒD>ª7AEÙæ))±]”fWZºÒƒ'~Äg÷ÞRÚÿì^Ša,š8š&ö#:9M Š¿€&ú ÎYÏLQ¤g£‰A½c4ñŸs㺜÷Rê7gˆÝ’!®ýH@ì¦ÜþOj¯'¢‰“êȾˆ&ÎFÓSbÿ§Ì³dŠ•½æë_4±hâoÓÄþ]²±eëÝ›øÏŽøÚ½‰Ÿç:i­_µί–o¨^g[‚„ Çã ì=o?߸أ4ÑÄ1—ÿÈ¡8$[·°Ãib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šs!¦ ³ßEj¾÷¤ÒFYŽö&ÎJà¹h¢©R¨—ˆ?P¯ü]4ÑÞÚ@RoàQT}Ž&ªGJ^@ÇBe”Ò¢‰ESÑD­1í*”)wi¢ÙÉÅ9öçr­c9œs:\ºno"oLE‡ÓÖÌ¥=t³CÅGi¢9-"8g¸2ĆÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4ç®)KŽ{©œï=é¤$›ˆ¿¡‰SReý]š¨ª8%Ïâ¨Ïø]4Ñ¢IR°7±Gñõ®óÝ4Ñ<§øwcTY4±hb&š(ßeñ$Ч‰fzùÞD}.e­ÃOÔ~ïÌéÄyÃ$dzLTZ°äº Ñ_š¨—KçÕͯðŸ¨Ü[½NRA&;¾8•ŸÅ8ìƒ3òtãËçê‹å×/'¢ãÏŽ/+#D_ãË_f_x«g–ˆèÏբ߯—fÇÆW/õ¹j`9X‡nvšn­gD[ ¹!/Üæ®JÉ×ÀsØ~Ø3ßz-Ï6B¡cšÈ­oŽŠ 4;Éü(MäÖ 
Õ²TŠ£ô2ø-šx3MìDt~š Mtœ³žœ&º‘ž&FôÓDsN,øïv–îÝ›pÛ ÛšhD‚“£»â\4ÑT‘P²¢¢š:Ý{m‚Õ ý{çÖkI­Üñ¯2O<иnv9Ò(Š’—ó”(R„é(Úà°#.#†ƒ”oŸ²½!ÍìÕöjÜÝ8,Ï‘ÎPtý»Vûòó¥JoÓDL-X‚G¨¯÷;ç/¥‰ä” …,BMqGœ4Ñš&V":>Môˆ?†&* öYNÕHH=z»i";gvÒÈ‹]ì\<•&"†ÅÉÖ%ì,A¡‘ʶH ~,šÈª|JRÚê=„õï>Ôi"ۭχDnWSº‡ÝìµÐã™÷&d4Ü.g˜Zp ëû£Ån}wë šØ%.¬ªvÜ¢‰ôsþû—Ö$¿üé¶Æ§¿>=¯XâÛï(³7ô-…Å>&‘Þ¿ùúÉú×ožÞ½O¯üüÍH~-ã?ÿçí÷AgJµží½Ø9Y]RyNµ¯î9w­ÀÖß¿/°ÒtìŸÿéÍ»ï¿ÿƦj?~ýÆýòwïÿöùÛ`ø¾LÜl"ùþÇ_Þ6¢¿toïž—_~¡Ü½Ùß|ú·wýáÉúåÿzþî«ï?ý >µÿûîë|²AöíÓ·_xþè/¥³{kÝ~"³Ô?|ôüÅÛ'å/ž>ÿŠ>ùò‹¿|õ‰µ0ýäsÿŸ|¿ 'ÿ•C£ì^~÷·¥S¹-J×I)%„¬/¶e;ô0Y² •ˆŽÏ’=âaÉŠ‚}Öƒ³d5Ò²dÞn–ÌÎ9 ×5ÿ,òÜb#ȸ„t$yc´ß%–ÝXÅ ‹*oÀQ?•ý¢Þ?VN¯}ÑñணÉìÑ>;â;”Í Á“&£IJ7Y¼ý£J“ÙŽèèœ^ù¹ªè‚k¶ÕgfƸS¡ÛÅ !ý¤ÎQªZUš3ÅÄ‹Oºeq¤.º¦8³“y ¿9Q¬Dt|žè OT쳜'ª‘'zôvóDvÎÑ: n÷RìåÜ“n ‹}»{SY‚K[ª ‹&²ªQáŽáÀ¿.õǦ‰üÖ*ùŽÏPW…âO§‰ä D¼£¸yÒmÒÄP4Ái–îíó}½'þÙ߯٭z̓hž›¶r”ÁµÚO:¢Ï­7’<úÛ9½@R Ö”¿¥¾í(\KÉ)‚³8ú¶8ã¡I­ib%¢ãÓDøch¢¢`Ÿõà4Qô€4Ñ£·›&Šsr\O€Zì\ÔSi"·PÄ šØ%^W—ú}i¢¨ ¬w¨—Ǫ^XÞY"¹vtÐ]HÙ£·!º±#–íÄÍz#“&†¢ û.£^¤Zo¤Ø1} ??×F7 ÙïÅTåðܽ‰Ño\›ñö‹¤ôõ³dÙν>H}*Ld§âƒn‹[çyž0±1K¬Dt|˜è LTì³&ª‘&zôvÃDvœ²Óv/åùG<ëbIJY‚Æèêu_^‰ÂX0‘Tq*\Ñ8>žÕGz¬r#»¢Ãà:˜È‰Ä·•!Å &F‚ û.S®,Š\ßšÈvNŽNœŸ+bÃ4§ÁQøÔÁK±Îõ6M„Ü·Pº†_UÊ8tíÖD—àz>Ë;7/á7§‰•ˆŽO=â¡‰Š‚}ÖƒÓD5ÒÒDÞnšÈÎ9i÷R7î"{Ð ÝB~ë Ó>©8V‚à¢J„¡‘´ª¨×;è”ßÚ¸vtÄÇëh"y”t˜úža<®Ȥ‰IÐDH³tóå]5Ap¶ãuñ̓h"=צfû‰ÞþœI¸ /ø-šð‚ ±¡Ôì¼ºÑÆ—=êýëºa‰0>Tÿr÷<ä¿•‚ÌjÊ Ð–µÎüþ ÃÞŽèèê¦Îê‡IQŠ˜F½—¿zþæùÇ4¦|÷Åó9Zyúøæï~üŸw_¾ýø?þï¿| —ý«E÷‡g{>¶áðéý÷ß½ýxÃÀÈäcÏÞ¿·8¾ýø_žßÛÃÒ×b½ƒY~ýüîÍOÏO¯¾üïþÏß¼ûáËŸl~ÿó[¼_Þü[¼9þÖÌV¦ŸùUJ‘~¡çïþfN–ÿ¾;ŽáÒéƒy´¹CŒ÷([Püà¤?ÿL¬¿(i&oHOUgݺjÛû:à/B¸Ñùîì¬ãÍ6§õ÷kTGrñ"Ž9EvÛ?Ÿ‚ÌLsö7Øì/}¿â¡‘B¬ØÑ<}ÇêÒfDÿ?,BþvñG-Bn*Øg=ü"d%ÒC.Bþv½,Bšs2Fd¹£—Š'goñé^¼nÏVì8ÿúÞÎ8ÌŽÏigÏ Î¹Tš³*#Á[·=Äé4”9ÜÄéJc§í[!h-[˜ÐÃ-×&ÔJ¿½oGW{®à­à‚Ø k[™ç¹\;'ì£MØcôNäu=µÇ—9ðñã‹uÚÑ[›mµl³S=õð‡[TéíêРö“z´¦‘“;Û9v—®d§‚B>¶ÅÑjÕ}rׯ„ºÑñ¹«Gü1ÜUQ°ÏzpîªFz@îêÑÛÍ]ûz)=·:4I\¶’ïÊn0–اž«6ôÎèÄ ’gj“ãF5·lçuÞJ,1KØw•ƒ²Ö~d;‡gÌLÏÄBB­ö#jVµòÖbs;wDG¯«YëÁi¢éi¢Go7Mdç’NZB»—â(§ÒDDX˜62Aï“*~0šHªH»c8ðä‹&vE'°^GYً׳;\sΤ‰I¿?MPÊž6‹Õr‘ÙN½1О‹Î»@­ö“’õ œ»7ÁöŠ·"ç.(D_/ï^윻vk";M°WÏ5Tì'L4g‰•ˆŽ=≊‚}ÖƒÃD5ÒÂDÞn˜ØÕK ž¼5A¸Ä­j‘;¥òX)vªvЩDGÑ5ŽC;áë`"y´Ñ™l–ÑTÆç%ì CÁçKТ±^Þ"Ûq £/aççzdmÍÑÍÎA87e çÚ·iB¬‹Cp±¾X•ìâµ×&v‰ÓÈ“&ZÓÄJDǧ‰ñÇÐDEÁ>ëÁi¢éi¢Go7Mdç@,õ²O?‹ëÁi¢éi¢Go7Mdçcl÷R륫³.a»Íü°û¤b‹&²ªÈ‘EÚêõѪMØ[«ÇÚØ¹Év.¼„= 05ª„d;¦y {ÒÄP4$Ìáu¢¢Ï~ýýš_Õw>ˆ&ÒsÙxFC«e#‘Æ3iBÒf<nÒ¹Ô‚5¨£j"ˆbàÒ{Ù)؇äšâ'M4¦‰µˆO]⡉š‚}ÖcÓD=ÒãÑD—Þ^š(Ιn÷R¤çÒ„¯ Ám˜Ø§”q¬­‰¢J©~mâE}„‡‚‰òÖÞ†qÛÑ‘pÝìâ1%t‚ölÆ4abÂÄ@0aßeºŽ@ž¥ ÅÎ^º.?WRÎÕúìb§þÔƒN¼0Y;¾Í» 3ðu¡ðÒ‘_ÊÉ):GNï&K´&‰•ˆŽÏ=âa‰Š‚}Öƒ³D5Ò²DÞn–ÈÎm¶³¸"’ϯ\§ncgâE*x wH‡cÁDV…lóºC½êcÁD~kréA;:xá­‰âÑ«Q`{G޳rÝ„‰¡`ÒmOÞúÕ*Ld»uiÒƒ`"=—£øFMÊ;çÏM«­Ÿ¹Mh-˜H%Ô/;ÄK‹M§!uDÐçqÞšhN+Ÿ&zÄCû¬§‰j¤¤‰½Ý4Qœ³¼£ '§‡µ1gq~#£Ó‹ wHe‹&’*¶nœê§ï‹Ý£ÝÁ~‰Žì¸X•y=&²G¡¨õ´¾ÅŽ×Ê&MLšøýiÓÝf˜]5=l±<:=l~n:Üëu°‹»5!‹ênßÁ&J-Øâ dGÖS]J{Äñºl뤉ib%¢ãÓDøch¢¢`Ÿõà4Qô€4Ñ£·›&²sEÞb»—òzrF'ñ‹êÆì}Ruk"«òÎ!Fjªvu{_tdÅZ§ÓÄ.e&MLšŠ&hAD²y:Æ*Md;Ô£K×åçR zQÊbçƒ;“&t‰ä4Ä-šˆ-L¾q1!Û±òhãËõþñÆ—ôÖªÐX4-vWî}'ŒÈÒTf/Ïs|™ãËHã /ƒ×T«:¾d;v‡/é¹Ö#6V鳓SÇ¿H0BÛã çb®HÖPjvzëJɉ«UÉ©i·£)Î#ÌüãÍeˆJDÇ_­êÌjUEÁ>ëÁW«ª‘pµªGo÷jUvn³%½£—R>wïhqqë$í.©ä[­ÊªƒM´îPï«4jykÄz–ï—(òu‹Ç`¿Ü=¿›ý™41ib,š@›úzÖ׫­Ð Ójµè0šØôÿ§ý‹gV3òq!uakï[¬+¤-õ>(Ùºxï;‹ã¼ªÑ§D3ÇGsšX‰èø4Ñ#þš¨(Øg=8MT#= Môèí¦‰ì\"†v/Å«­½3hŒ%8Tzû»¥ò`I>ŠzD÷¨¶7Q¢ã£ó±‘ ÷&’ÇèÀ ÚÊ"̽‰ICÑ„,HŽE™ê4‘í æ¦‰ô\4PñÔš›à™'i™"£Ü_|jéœj/×W«²úk³|d§éŠ…º¶¸0k£¶§‰•ˆŽO=â¡‰Š‚}ÖƒÓD5ÒÒDÞnš0çÑ9°yœ´{©HþäŒ~ñ>nìM©,T¯úb‡ƒeùȪ­«‡¶zàK˜ßšÀLíèàŠO§‰ìQ XÛÊ|˜{“&†¢ Ÿî»Å .ú*Md;•Ã÷&|>¡K©u7Ú™ɹY>À¥,!÷A6C}ží@¯Ý›ÈNM^l$ÊvtÒDkšX‰èø4Ñ#þš¨(Øg=8MT#= Môèí¦‰ä:W/ô"2žKjý= oÐD‘š– )DZh"«‚Àž¹­äÁh"¿µ¹j‘b±ã gbxïP&B“&&MŒD!Ýw‹ÁÝHþÙ¯¿_³S=º6j~®°ÅÖcvtj5#â…Ø†ºjFj- Å5nxd;äx)M$§6;ß¼LšhM+Ÿ&zÄCû¬§‰j¤¤‰½Ý4‘§BJäš½!ú³O:¥Å¡¸ÝÛ1ãR£ŒEY“·9s[=E÷X4‘ßZ"»ã3¾®6jöÈ€@ÔþÝØñ¬:ib(šÐ4KOGé©~ ;ÛwtmÔüÜ@.%Ài´³Ã[·ÝŽ£ Z¢‹ºuÒ)æl±’ú.d²£¯Í@žÅ1)KlŠc†™¼9M¬Dt|šè MT쳜&ª‘&zôvÓDv.LBØî¥ݹ·°Õ/Daco¢HHÙIè©a°“NYU mõ^lo"¿µãŒÐŽŽ®–ÞN§‰äQÈS[™@ 
“&&MŒDqA!\¯g”íÖů¢‰ô\*åDíÇì :ÎÍÈÆþöÞ»Ô‚í‡ñ¾º{\ìÈ_zÒ©8ÕTËÍ·Åé¼7Ñœ&Ö":ˆPOó‘íB<÷¶[7r2æ>(Ýð¨÷Иû ‘Ki"‹ÆÆüÿ²w®Ér«:‘ ¡÷HÎügrïœê›´Q»°}¨4¿£X_k›Ç2HÚíè%©cÑÄÁ6±ÑùibDü54ÑQpÎzršèFzBšÑ;L͹e´x–b¿·;ªpÞ4Mœ’*Ÿ¥rJo§Ðk³°A ‡gû\«üÁ\4qN=ÙM§SÑÉð\M§sÊWö¢‰©h¢¼—êµ*Eîfaïv—ŸM”ç´/Q«å¿o¤ K–À¿o^GT°r¿LÆnGòìÑDu eT•5,WæªÑ.±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; Í9´»5á,or®=šH¸ ÚÁÑÄ)©“¥M4UY‚«<»zý®$ìýW#¦(©d¢és0Ñ<–=FîöÝíXVÚÄ‚‰©`¢¼—ЩÕ.ÉKI¥‹`¢>—iŽÆ¡ç;a‚òf)£\tâ2‚³€‰÷7ìÍ.á³4Ñœºe‹Å½öÐ\4q°MìDt~š Mtœ³žœ&º‘ž&FôÓDu^6qIƒ$ìÝ.Ý{4á^&->¢‰sR³ÌEMX½Ô«/pù]4Ñ~5Å+9¾¶¡½&šGÁ\v±2~¹(²hbÑÄ4±·¸6+@Ñ¥‰Ý.]Ý »=·ÅHA^Þn—î¼è”m³ÃÞu$m ÊHÐÿÜÓìÞT²½&šSQ"ú@Ûê6î;&FÄ_ç¬'‡‰n¤'„‰½Ã0QSÎâäá,E@7Ã„ÚÆG°OJuš &šª²ÛD—Êßv4Ñ~5•íDP7i¢>x4Ñ<šå²Ï‰•©Ð‚‰3ÁDy/™ÜŒE»0ÑìL¯®Ûž+ˆ6mvï„ Î[y1‘ßw› msQ}ìiv˜í6±;1 ÎMšûÊÁ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašØ›bؼÛé½Ý&P}sƒšØ%¸}"Õl.šhªÔ胵JéËŽ&Ú¯và GÇôÁú°Õ#g+ À¡2]Ý&MLEZ/:iJúgÇÏ~{)e¿ü¢S}.’ª9Fã‡àµM 9غ—©ÕÞÓ„Õ,àåïÓUÚì8åGi¢:•ÚØr,Îiå`‡ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœ™¥jWŠ{ëÃfÜ(ñMœ“š'£‰¦ªö;®ßïê¿­>lûÕ3©ÅÑzðl¢ydñÌ(+¸hbÑÄL4QÞKb¦„ÚO›hv×÷®kÏUPãhü&Iw¦MÔeö#špϦY%¢‰b—hºõå„úüçÉÊß¾¾œ‰Îë,~ÿúrB™dXëËZ_fZ_|+K ªŠö»5»üR¸ú¢õ¥>דyÔ.¯ÚÙëÝ‘ÖÙ´låà&­·bÙJB ”;´g{£6§eýÕ ÈG³YiyágˆNDçÿZ5"þš¯Uç¬'ÿZÕô„_«Fô­jÎkyZÌñ,¥ùÞÞ¨È^f¬ƒUç”Rž &võq+¦ÝŽ¿ìcUûÕ^{jaGËWå'Ï*ÓüR£`ÁÄ‚‰)`"sm9lÁD±“L×ÃDæú(ܳ+Ý|‘ÖMá -SÁµÿwÿÊün÷pkÔsâäeZ0ñ~—Ø‹èô01$þ˜è)8g=7Lô#=L é…‰“³”ß[1Po†ïaâœR…¹²òΩÿ6˜8§çN&ªGHL(3Êëä{ÁÄL0QßK4ÎY ›•·Ûá˼s LÔçd`KÈwž|#nè% ïYÚV“~Ò÷n—åY–¨NëÉ †âVcÔx“؉èü,1"þ–è(8g=9Kt#=!KŒèf‰Ý¹æ²tųætw+#CW8ží?—ÊsUßU1åL)VOô]Q¢£‚ qtø¥£Öí0Q—Á‘¢ñSskÖ¥MÄÌÞ—ø`Úê}, ¾4»ò›¥‰æÔIƒj†?v‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&ªsDÎÎR’þlØy)MËfFgç¤N–ƒý£ÞrÕCþ®fFû¯®÷»Ù?ˆŽ?W0p÷X6˜â=†ÐKÁ³E‹&& ‰ò^’±È›÷÷ŸßÞ_2¢«;£¶çº»š„#›ÜÞÝO½2mÂ’–ྦྷ ®ß PÕ´?CW;)«õ£4qFœ¾^X[4q°MìDt~š Mtœ³žœ&º‘ž&FôÓĹYJï=› “°ÏIuž‹&N©Ïßv6q.:†ÏÑDó(b©ß´u·ãU~|ÑÄ\4Á•êE£Ü¿éÔìô¥5éE4QžËµ×3s8²9¡Üœ„ ’ü( [ʶ„|º5÷šzöGi¢‰Ct ®a5»¼h"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,…éæò°f§ƒú°?ØÑà©oŠ"ý§4ÑT¡dü@ýú«i¢ýj®iêG‡äÁ›NÍ£Ifý@™r^4±hb&š(ï%Õîåº4Ñìñjš¨Ï5(@®²wfagß„-ñÁÙ„n9§” ƒºØK~´5ê.NÄ=@fÇ’MDÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4çFe'§ñ,¥ï¦ÐKk:”ý`g¶¯È1–ê6YÞDSŸ©û‡êK¾ì¦SûÕˆIRÜ£ø`ëºÝ£‹’¦X™¾d‘.šX41M”÷’ K”å¥_Ó©Ùe¿ü¦S}®3P Ç{ö;[סmXšjZÁêÒf—…¥‰æTjßű­¼‰p›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ýy™AùƒYJä^š À íèlb—àå>‘j“Uˆmª,©"Äê5ëwÑÄW Ê©ïvô`ÞDõ˜³‚ô{×ív`‹&MLEÖÎÙúyöûÙÀE4a-¢Ì½ñ¬M®vq¸òl¶¤fGb½Ž`ª¥ú#½Ù=KÍ©3›§Xœâê]n;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Q#YYî8œ¥åî³ B°ãÙþc©ïª"ý§4ÑTÕDcôX=Ó—Ýtj¿ZËïîw9ÿ‰¢æçh¢z,”kìñý¥„ò¢‰EÐDy/µ^4²?»(üóÛû«®ï7QŸ[8¡ Úpüh™î¬K¶9±è;š¨/m›yA€:ß{~ÙÛs4ñËiízËsXÝëúÛÄ~D'§‰AñÐD_Á9ë™i"Šôl41¨wŒ&~œS’l¤á,E ýö~žÞö›ø%ÐÍ<– y¦,ì_ªê:çX}NßtÓéׯFFÈGᩳ‰_…JøãeœpÑÄ¢‰ihb/…M,e?¦‰_vš®½éôó\Íœ’Z4~DÁüNšÀ-I!+{OPF0›% 6ìÍ(?JÕ© &MŠÌ+ ;Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9'JÖkÑü¯ðÝgIËTx<Û.q.šhªk¦ãê•¿‹&Ú¯ 'Ž£ó\/ì_] LH¬ÌÜM,š˜‰&jiJ\ïæti¢õ¢~ý¨}MÔç2hð¹ªÙÓ½YØ ÊÜJï×—\F°y èoØ«z†GiâŒ8˼Î&Âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR˜î=› „›s>8›8'5Ë\4qN½èwÑÄ©èP¦çhâ”2}9[4±hbš(ï¥1h™Vû4ÑìRÆ«i¢<×… Is4~ °¿«ë}MˆlȤpðµ ËvM¨Áùh³Ëö,M4§¦’=ÅâŒÒ¢‰h›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰âSJµîGÍ£I¶ì±2áÕ¼nÁÄT0QÞK!#ÄÄ]˜hv"t5LÔç &³hüˆd¸3meËI•à&ÜH nJ‹Ý+vM²¾ÔvãÀ%бúüç:þ·¯/5: É4Ž¢>¹¾æI‚ ÐÍîµ8ËZ_Öú2ÁúB[m&„ìYºëK³ã‹Óò~žkà Ò¿òÓì’ä׿­À H~¿¾Pã’2»¸J+i >ú±ª‰#e±‰+v¼J†_!:ÿcÕˆøk>Vuœ³žücU7Ò~¬Ñ;ü±ª9åŒÏR‚toZžÕ“«š­™+ÕdsÁDSeÌÝêéÿÚ½9·ÿ«a¢ýꂊ.G§€ïs0Q=B¶28ce×É÷‚‰Ù`"—ý¸%6 `¢Ú1\Y”¤ÌÈáøEÅ{?VI­ƒ~p–ëæÔ__šÝ›<[a¢:­§,)Wû%,˜ˆv‰ˆÎ#â¯‰Ž‚sÖ“ÃD7ÒÂĈÞa˜h΃oÂÍŽ’Þ îeÆ:¬ñqNªOvôÝT‰gˆÕ³}ÙÑwûÕFÂIâèüßmÕ»i¢v /»cO†¡2·—e|ÑÄ¢‰ h¢¼—â&’4ui¢Ù•5âjšàZ‰P%—õ-? 
"·v3òòG+•÷4!ûT/u•6;‘gïÑV§ˆPߊP\YýhÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓÄîÜÁƒ¤£ÝŽøæ{´y#?€‰¦€’gÿD©é\0ÑT1 Äê‰ü»`bÕŽ^qtós0Ñ<:eno7;Óu4±`b*˜zä-¡ô“òš¨] õ¹eæ-‘]PÂïlf„¸%e´ƒ{´ZF°Ô«ðÁTíØI…‰&NÀU,'¼î9Å»ÄND燉ñ×ÀDGÁ9ëÉa¢é abDï0LTçšËôÜ0ivð’t4Ž©BgˆÕ`û®-zûÕ$JˆqtèÉ-zõhZ68ÁµfG/µ¹Ö}mÑ'Ø¢×÷ÒµV›÷všäÞÛ¡y³d–÷ȵtGâÜ?š¨väx9MÔç*&2 ˜Z!çΣ N[r?ˆ“ÕUØ«Ì>K4;Óg‹[[¥k©–X\YÍK„›ÄNDçg‰ñ×°DGÁ9ëÉY¢é YbDï0K4ç˜Ø‚¹]¤§{Ë"lrX.p— ªþT„ɰÇö »qPÄzW/_v2±G‡kÓÓ8:ôdvóèˆ\Okv†«1êž©°Çj2‚¡÷+|4;3¹š%¬]_JäàÑøQ ¹µÂGù£•8Ñ„×h…úy ÍÞ-…7ÒDsʬ Y³{=Ø^4q°MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©YŠs¾•&tC9€‰sJi2˜ØU9GY»~L´_-)8©Øí^JuÝ^s& \ƒH¥„+gbÁÄT0QÞËBßÄàýkNÍ.çËa¢>—UÝF›è­9ºAöœß÷E…TO›Ê ¹ûA£Ù™3= »8Ê..¡8GZGÑ.±ÑéabHü%0ÑSpÎzn˜èGz>˜Ò; ?ÎÉrЧÐ"òÞœ ØHù=MüH`+¯÷R‰¦¢‰]×Ë÷þzË_Eû¯¤òßãè°?w#k÷è*d(3_7²MÌDõ½-[èŒÝ£‰Ý.§«ë9Õçj2)ØqÎ~'M¤-i‰¿§ hßXô•BéæÒD'à†‰£Ä MDÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0MìÎÉ50K Ü{ÑI‘·ÄÕaw Ц”>j“ÑÄ®^2õ/ÌüØÑwåwì¿ÚËž£r³ÛÙË5°Ûi¢z„š|*ÐE‹&¦¢ ¨©Í šº4QíØÐ¯¦‰ú\圇³¶ü_æÖõ4‘eS"qO¹Œà]fýL˜f¨ü(Mœç¶ê9…ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœ™¥2¤{/:iò­,}4qN*Ë\4qJ}†ïê5±ÿjL˜ƒ=Ç—û·ÓDóXvYF+c^ëMLEå½Ô\vé¦ÝÆu»ÂÕ)Øí¹¨bâE‘[Ï&`c:¨ XG0)R¿9ón‡ôh¯‰Ý©!²| NaM„ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4ç^Sä(ž¥Lï=›`J›èAÞÄ9©y.š¨ª°^ÀWù@ý—%aïÑ„ÖoAûÅ—“›Ûi¢y$Ïœˆív/Ëø¢‰EÐDy/ՌܹOÍNôò›Nå¹V^‡²ý·ÁV¿&Ü›7Q„0Üt¢6qBìÏÐôûôM4§Ä$ îvøÒM|ÑÄÁ6±ÑùibDü54ÑQpÎzršèFzBšÑ;L§f)B»·¤S=›°ƒ’N?L20ÛÏÕlbWŵ2~ Þ¿«sÝOt${æçú`ïÅÓoÊj6±hb*š ºKÏ*’»}°w;`¾š&ês¡¸‡xn%$éNš°-ƒ daוTæë×`Ýí(É£4QrâòÖH,ÎiõÁ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ!a²»~Dʽ7XÊšrD»Ô²¨ÅR!MFMUÏÁ9ú®^ñ»h¢ýj”äÌqtò“YØÍ£*R¬LÒ:›X41MÔþÒšÝÕú4Ñì^ç‹h¢>×Ë/æx¬žïm]WþhF(øž&¤Ž`JÔ§‰f—õÙ³‰æÔ˜¥_k·[7>Ø&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,e¨·Ò„¤´9êMìÔË?| •m.šhªœ‚ŠT»zÿ²¼‰ú« £Ä+yÙ²?xÓ©yDC×+C\g‹&¦¢ ©ù†ß4rÿç·÷W3¿$Æ]Dõ¹Òo7ñc—î<›Èõlù 6hÁš€=¸I[í¤€×£4QZ™ùÊìŠSL«Bl¸MìDt~š Mtœ³žœ&º‘ž&FôÓĹYÊà^š0ßRòš8%•ÒdyçÔó—Ýt:ÖçúMT ‰%¢‰ªÌQVö¢‰©h¢6˜NJBM4;¹¼v{n-c›£ñc9‘ß{6!ž²°­Ž`pȹÛµÙ™ù³YØMœ×¶Šs}©½¸hâ`›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰S³”'¼•&`+úÌ'{N…5$82ß•â\ÍëvõPLL‘úò+ß$#þÕ0Ñ~uáø-,v/0o‡‰æ±ðKy/ceB &LLÿcïLsÇu5¼#CœÅ•ôþwr5äé®XŒá¡„:ÿš0_ó‹†ÇÉ\ÓÜ™ò8 »Ù)\ž6QŸ›•¤¼l4~ræ…±/M›(S+È.L”uPµ~Ó”;am})ªL„’ÆêUÓ¯­/å­s.6£cÂO®/e¯‚e„8„ÊR^ëËZ_fZ_|KeÎ4ŒŠ|T;¡ë×—ò\cŒÒZ›]â;›£fØØ5í]¤õúY€Ì(PZv’ÙŸmŽÚÄ•’9,ëè;ü 1ˆèü«Îˆ¿æcÕ@Á1ëÉ?V #=áǪ3zO¬êÎYY%ž¥„î-@®’6żsôÝ%H!…o¤Êd_«šªò‚”òêó}··6 ¿¢ø`‘êËF ü eþÖ2qÑÄ¢‰)h¢i—‘ÅÐD-íÇx=M`&$r‰h¢Ø¥„÷ gVJü‘&0Õ‘µòp†îvï—ù ‰îTZöXÛ¢‰h›8Šèô4qJü%41RpÌznšGz>š8¥÷,MtçZ& ñÿ—¦›ÓòdÝ¡‰.Á(I’XªÁ\4ÑT‘rÑ ¡z’+ØßÚÊ¿q)—>Wä£yä¤f+ã„‹&MÌDõwYvó9'ùhve!Ћi¢=W\™0Eã'ÿ«:Å 4›»ÃÎÝ*„>·˜Œ/ów;¥ô(MT§\~yÜļÛ!¬äá6qÑùiâŒøkhb à˜õä41Œô„4qFïišhÎ)c<…2ɽ4Q`³Ç”rš«ÆGW¥\¿~¡Þ«›Ñ+:’•¿X+ ’=Õ£8Xüw”UãcÁÄT0õ‚,›ÚŸÁÿùÏï7˜] õ¹uʦqe¦ngpçEZôJèéóE'Ä6Ò=}À®)mvÈÏÂDuZ;oæPœÂ[ó;»ÄAD燉3⯉‚cÖ“ÃÄ0ÒÂĽ§a¢9'g‹½DÚíYyb£Éþk¥„s•øèª~¡þÇZ£ö·,» £Ãü\V^÷èêåceYWÖÄ‚‰©`¢ü.s™ù’%ÂD³ã|ukÔöÜ ¹žîEã'ç¤vï=§šÀ¡øy}¡:‚Ia|ÙìøÏúV˜¨N-µÓÛXœëjfî&Έ¿& ŽYOÃHOgôž†‰æR8KÙ‡ÉþÚ{N–6Ú»ætHi™/炉¦ “ê¸ßøÿ¿åo¥`÷·&Ê|¿ìQ|ûðv;L4ŠL±2ÕËhÁÄT0A &rò?GÖ?ÿùý;¼:i¢=7CÍÀgízëV˜H›c‰üÎ5'n#½`×8q«Û±?Úµ9Í©f–}!Îy%M„»ÄAD燉3⯉‚cÖ“ÃÄ0ÒÂĽ§a¢;¯)zÎREd¾&„u+bvhâ˜ÔOulÿ&M4UPV—X=ÐÑD{kdÍÁ=§n—žëeÔ=š(Ã3Vö¾[4±hbš(¿Ëlä„CšhvH—'MÔçf7‹n§v;Î÷ÒDñ€)¦ ©#˜³”ÝøPi³#zöh¢9­nÇÕr»¾µT[4±³MDt~š8#þš(8f=9M #=!MœÑ{š&šs »‹ä{&ÊL¿ ÒM’še2š8¤Þ?6ùŸ¦‰þÖ5è}Ù=™5Q=:%Í_üÝM,š˜‰&Êï²^ÒËŒÃ^F/»tùE§ú\Bb—ïïv äÞ^Fµÿƒ|ZGp™„ P •v»Oµ o¤‰âTYÑ–#q­%Ü¢‰h›8ˆèü4qFü541PpÌzršFzBš8£÷4M4çDŒñ,ÅùÞ³ ³ú…w&ºT1SŽ¥Ê§²å“&š*%K¤±zý‹ø?Mí­ëõ4ûb±T° SõX/×YpS£Ù%Z9Ø‹&¦¢ ­»tu³4l6Ñí$óÕ4QŸ+eíÇOýÖÂ7wFMµúgš°:‚É x¿4;ÄgÏ&šÓ\xr,.¿ÕÏ^4±³MDt~š8#þš(8f=9M #=!MœÑ{š&ªs¬{o‰§PüðéèRšÈI6ð=šhP¾˜Pp²ŠNMUY«X0V_þÿ[4q(:}Ÿ£‰æQ-[pa¹Ù½·8Y4±hbš(¿KO"„<¾éÔìèzš¨ÏeöìAFT·KùÞò°De(ïäMä6Ò8ï4æ>ÒíQš¨N)Jð1ª‰ËoÆE;ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§i¢;2ˆ§PJÉï½é¤²ñM“Š“Ýtjª –I±zHü[4ÑÞs½LG§Ðïs4Ñ,›lä{4qLªMV ¶«§Zp#V/ðc­ëÚ[Š2³/¢£žMT’ [α2§•7±hb*šðJ eMÒ‹Ðá¬n÷áNÖ­4Ñœ–ÅÔéviÝt ·‰ƒˆÎOgÄ_CǬ'§‰a¤'¤‰3zOÓDsîä4Nmîv9ë½gÎâM’ê4×ÙDSeIJ¼¾XÜ~ël¢G4¥ñAþËžk^×=2d•Ñ[¡÷E‹&&  Ø( Ï&º]WÓD}.zÁÆO±Ë÷faËf¾{Ó‰°Ž`©·‡ç£ÝŽÓ£7ºÓLŽã ±ÝÎÞ.Œ-šØÙ&":?Mœ M 
³žœ&†‘ž&Îè=M͹×=g©l7Wˆ5ÜìÜt:&Õa®,ì¦*'%¾Ñuõú[5E'—-às4Ñ<–½ Ûì(-šX41M`Ù¥— ²ëŸy?ÿüû÷[S+äêîuí¹"uí|Ù¥[+ÄÖ³ ´ ±Du{…ñÔì²>š…Ýœ:Š A(®ÞX[4mŸ&Έ¿†& ŽYONÃHOHgôž¦‰îd§y]àî9“B«ÉlË‹;"– P}±J¿¶¼x½Æ™5Ž=Xä£{MI¿ø»É[i™µ¼¬åe‚å…·ò$ ïvàW·3jÏÍJñK³“.\^´ÂKFþ¼¾pÛ –¡9PZìäÏöÕ·~¬*N­ «Ýçïv°.Ò†_!ÿcÕñ×|¬(8f=ùǪa¤'üXuFïéUÍ9zª+¼ìôÞ£o“¼9ï}¬jˆ5ÇR‰`.šhªØ$ŒÕ3ËoÑD{kaÌÁs·£/Ò6™å«¿›)-šX41M —ÿH hÛ—šëi¢]äÅäѬ]ìÒGßi˦ewô™&¤Œ`‚2Ÿ¥«]Y1ŸMËkâÈÄBq„¶J†ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§iâÐ,E|o;#Rßa‡&ŽIÕÉÒòšªú…pÜ7êe÷cÈû[ Ô›t_DǤ‰æÑ”Å1V¦o{ŒE‹&&  iiqP3î†4Ñì˜/¿H[Ÿ+Ù)*QíÈ5ßKš .|† -˜!k’qQ¾jGùÏååV˜hâˆ! ‡âÓºGî&Έ¿& ŽYOÃHOgôž†‰c³”Þ\dÓÝ£‰cRódºú²%÷|Ù‘þLÔ·°²–ç8:ÿêt7LRf¸²òLLZ7óž¥qV^³3¾º›Q{®!`´G/và·ÖøàÍ“àÞE'k#]É‚ÏÍ.ѳ5>šÓš¹‹#L‹&¢mâ ¢óÓÄñ×ÐÄ@Á1ëÉibé iâŒÞÓ4Ñœ3”}núb–Ê~óÑ„mD{G‡¤r²¹h¢©’BBúÅZÅücYy=:HyÜ—ðeÖøh#ÇÊÞÛˆ,šX41M”=Q¶!M4;ðËi¢>Wrm®ìbWV“i‚Sõáy§b`.#XÙ¨üY†J›©>JÍ©Y²àSZ³S[4nŸ&Έ¿†& ŽYONÃHOHgôž¦‰C³”ñ½gä°éncR•碉¦Ê³åX}æ«?Þ£Ój³ÄÑy¯y;MTFR~<*3Е„½hb*šÈõÌ8ƒ/:5»D~5MÔçªÖ:|Ѭ]ë€Ø­Iؾ儈;i^G°§¬÷4;Ågi¢8-ÂÌ|Jkvö–˾hbg›8ˆèü4qFü541PpÌzršFzBš8£÷4MTçhµ­#„³~œB/¤ Ù 6Í»³ý©îsÑDUE€eY§X½«ÿMôè° ÆÑ! {£VŒ(¤f6ee/šX41MøF\‹˜Ž{£;rx;[»ˆ&Ês¥lH ÌDÛàbG|çÙ„ú¦äBŸó&8µ¹E Ëð{O·{ç®h¢;-k€Œ¡ìe÷ÖâvÑÄçmâ(¢ÓÓÄ)ñ—ÐÄHÁ1ë¹ibéùhâ”Þ³4Ñg'É)ž¥ŒîM¶̛}>›8&5Û\õÇ›*U‹Õ»ÿÖM§‚Q0>xÙÑsõÇ›GIZVhŠ•½÷8Y4±hâïÓD»ÝC`H4Ì›xÙÁÕ7ÚsY-œµ‹g¾9 ’~N›`¨Ø˜ÔÆB›ò£ÍŒšÓD73êv iÁD´KDt~˜8#þ˜(8f=9L #=!LœÑ{&ºsÎbÏRïkÍ-IØ7Ê;­QJ¥¹Z£vUXLç/ÔçßjúŠŽ+‹ÆÑAyîh¢{TJd+“·Š8 &LL°Õâ¬Z¼ ›5;pËWÃD}.±«C4²«Ý‡#Ù /:åP2íЖ¬É’Œ¤š8>š„ÝÅ ÄPœâÛ‚E;ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§iâÐ,E”¶ât‡&ºö ßH¬ÛDWUë2¡Þè·h¢GÇ)%‹£óÞ‹ãvšh“é7]4±hb.šÀ­–fUt^têvï•—/¢‰ú\ÎHA·‰ng@wMš03¥Ï4AmK™„ÆsP³–&ªS+‹4fÅYÒÕ».Ü&":?Mœ M ³žœ&†‘ž&Îè=MÍ9Õ¹ÏR$ùÞ‹NÆ›¦$ì.A€)Ÿ.Õ'»ètH=Ûoõ®{E§¶Øâ8:Âψm NP7ŠïÊòJÂ^41MÐÖR<ù°¤S·KrùÙD}n™»œÆ=î»)Ý›„MF”vÎ&¸Ž`7ÎÁ•¬f—íÑvÍiFÉ<®SÝíÀW»‰p›8ˆèü4qFü541PpÌzršFzBš8£÷4M4çL€ãÜànG÷&a{ÚPm‡&ŽIu‹&š*I˜‚3ý-%ýMô蔿[ÁÓ&šÇ²ÇPöXY6]4±hb&šàz»§üxiHÍŽóåiõ¹Z“Ý$?ÅΘo>›`±æu,e;–åe\ä£Û%‘Gi¢9ŒJ:u;&[4mŸ&Έ¿†& ŽYONÃHOHgôž¦‰îœ9éS¨|º,ziÞÕóîšèªÖo¤ò\b»*UÎAÞD·ƒßj^×ߺ,ÇhúEtì¹±Õ£'(ãs¤¬µ[4±hb&šzæ@.éÏl€þýû-vøvEæ"š¨Ï5Jt]ëvˆrçÙ„mjR–Ï4¡unÉY£ÏÿÍîc­ôiBëôB)„²f‡«y]¼MDt~š8#þš(8f=9M #=!MœÑ{š&ºsDŽw_"ïm^'9mÆ{gM'ÌÁ]¡.u¶¼‰®^sÙiÅêËæõ·h¢½µ”ð¸~'i¢y4-C±2µuÓiÑÄT4¡a¡\$ÓD±÷Ë Ä6ÿPóŽ ?ÅŽìÖ³ Ø™ûÎÙ„µ¬eŸw»ôh»‰æ påP$X4nŸ&Έ¿†& ŽYONÃHOHgôž¦‰æY8c·Âæ\F0¡œ6;áô(MT§œ’±X,Î3.šˆ¶‰ƒˆÎOgÄ_CǬ'§‰a¤'¤‰3zOÓDsìÔ±îvxo» M¾íMt ÎÈø…Ôqm^ɦ¾×µI(¤`_HÕÉú5U*Dù‹µJ~-/ïPt¬@Þ<–¿ 1Ín5G]41M–y•íÏ;7ÿ¡‰b‡®×Ó•¡“)ß ºÜLœ¡Dý#MHa W ’4L*ivâúhÍÀ.N4³P(Nù-dÑÄçmâ(¢ÓÓÄ)ñ—ÐÄHÁ1ë¹ibéùhâ”Þ³4ql–*»½›óòhó´sö}PªÌU士2r÷õèvšñ§h¢¿µ;~ñ·5}Ž&ŠGJV (;ÊŠ¥u“vÑÄL4Q—TËç »£v;…«+·çJ­ò¡Ÿ²W»·f ¤LdŸi¶ºqªÍ·‡_«º§ü(MT§9‰C€:ÝŽV?£p›8ˆèü4qFü541PpÌzršFzBš8£÷4M4çeU“qÚÄËŽôVšÈä[âš]•(k:½¤NV¼«ªoƼ_oùc7iû[—¦ñJ^ v=GÕ££y"¡2 ‹&ML@PÏ i¢Ù‘ÁÕ4QŸ+FåáÑS+¥ªÍteÍ@ª wÎ&p£ÄÙÅÇõ|š$z–&‰ó´ú…ÛÄAD秉3⯡‰‚cÖ“ÓÄ0ÒÒĽ§iâØ,e÷ö3rM›ûN•ƒR'ëŽzH=ýVÞÄÁèøs5){o$²hbÑÄ4Q~—”Œë— !M4;e¿š&ês‘£¯(Í®(½÷¦~®(ÔG0–` •6;ôG«|t§j%Ћӷ‹&v¶‰ƒˆÎOgÄ_CǬ'§‰a¤'¤‰3zOÓDuN`jã ä]¤û½y´e݉#J«Ö¹`âzøP:ë&Ú[“€±V–=ǃ0Ñ<* 1ÅÊäí2õ‚‰ÀDù]2#Y¶4„‰foÙ»ÁD}®˜±hü°°¦{K2f¢Ï,Áu{v”1K4»÷hoe‰ê´„‘‚öÝŽÝKD›ÄADçg‰3â¯a‰‚cÖ“³Ä0ҲĽ§Y¢9/ã˜âYJåÞ{N°pʸ?Û-Õ炉¦ªvÁÈ_,Ù~ &ê[ dΊqtü=Óùn˜hÊÆ•»»,˜X01Lp½Ûðì[7rÇ¿ÊÀÁn`°Èª"k€}pvs1²›5Öôb#Ëcy±$H£¬Ãß=Eö‘|$fu§/b|h¿ŒÎÔtý»Nóòk’UÁÅT…‰lçNw®XòŠ­‹]à·Üç$çK£åì¢7²‘;”·9qA¢4ÅQ€Øaš%V"Ú>L,¿LT̳n&ª‘n&–è] Å9ê¸åÙî¥Ðo|Ü!¬LÌ“zî0ßÇ„‰¢ŠRv¶z¹,˜(wÍì|=íËÑv<4‘=r.¡Eö÷Æ Ða¢ÃDK0Ay2ϹºE¨ÂD±Ã“ÃZ+ÁD¾n.,Œl¶ìÀÚж- }¡ .-˜Å(½=Ø9¿kzØÁ)%BB[A?‚mN+mŸ&–ˆ_‡&* æY7NÕH7HKô.¦‰â\¼<¡—J~ÛCQû{ô~„&²„˜_ÿÖËgo £‰¢>W#0^q;ˆñ²h¢ÜµÎ& žú÷hwµ›ÓDñ˜ê•me§Éò;Mtšh€&8ï*/‚êûœ²FZ&òu£vêf¯1ЖG°Ñå}NQF6:EmÁI§N:xT•;<—dCšÈNS½æÍÑ{± sšX‰hû4±Dü:4QQ0Ϻqš¨FºAšX¢w1MçyóP½&å`Gç2l¯»Ñ‰RŠ•Þ^8FLvo/ÌÁ.ªá„á žQÿ›¦‰r×’Ï?²ñ;žšPàJ®³€ó}m¢ÓD[4‘“% ⥾ѩØ!­žÐI¯‹ú8JÞj?˜ÛÙ¶G°õjaìv*}KʅߪJÓГï{;åî…È膻Ðw:™ÓÄJDÛ§‰%âס‰Š‚yÖÓD5Ò ÒĽ‹i¢8§hä\ì n»6x „‘µ‰"uün‚Tl«öQ½x£ òÑÎ_XB§r×1Ãâ”蜜1Üœ&²Ç|˜™LeÚŒ{B§NMÑ„>—˜«?£‘Ð)Û9¿þ±‰â_<#™½6B·ñN'ÎoíÎÓ„äRãmU± 
»ÒDq*‘“‘ew°s}mœ&V"Ú>M,¿MT̳nœ&ª‘n&–è]LÙ¹×9:Õ‘‚Û›ª›¤Æ|FÙ”êÝ™œH•&Š*Ï‚FšÄbwºÿþ"h¢ÜµÎ:ÀMønÃ~4‘=†\à$’­L ïtê4ÑMèsIˆÁŸYõ{ôÞóKÂ.×¥ˆ`Nƒ‰â¦‡°18œ-„Mª,·t‰K­ÈÞ[;¿ãN§7N "Öº¡7v(==l}šXhã4±Pü 4QW0Ϻeš°"ÝM,Ô»Œ&Þ8gòàÄî¥Øo›Ò‰ë˜r¶ö[ ’ –ä­]j©öU‰8¸ ÃAäK*„ý段ÛžðÊn4qôˆ:>‡ ÊP‰»ÓD§‰fhbx.IBÌÅ‘Çi✤$[ƒ&†ë2$fD³ßc` [ÒD<”À§ó4¹£†µ±oìüžkoœŠ`¬e|kwÒ uš™&V"Ú>M,¿MT̳nœ&ª‘n&–è]LÙ99Þxç1عmÏMĈd?Bƒ„àSš"ÕS[4QTÑ[½xY41DóŽ ;:§»«7§‰â1:S”Qè b;M4Eú\"Œä©JÙŽamšÈ×MeY›­öS²o¹ÓÉHH>MøLÚ®‚Uš(v‰xmšÈ×M!WâNVûQÁmiB‡ðgä4`"¿T¿>LÊ9r¼KVûÉ5pÛ—UŽyd#-iKÏԷV;û.}g§>¸1Þù»Ó·ù&Ff‰•ˆ¶Kį󬇉j¤„‰%zÃÄà<×¥c»— Ž6.gäb‚ñÞÞëaŠTtmÑDQ…Qœ­á–&Ê]“ƒ)ÑI;.} ráS[Yt}é»ÓDS4A9mR" ¡NÅ.ºÕ7ÒæëŠGã…ÿ`ç6M@Î „vçÇÖêoÿ‹ÝÞKÅ©8!cœ.vL®Ó„5M¬D´}šX"~š¨(˜gÝ8MT#Ý M,Ñ»˜&fõR‘7N˜èÀc§òf)M±1˜Ô£gˆ¶z [šÈw.0çë‡è$¿Le„ ³S"ù&Z‚‰œƒ•|PJ¯ÃD±׆‰’‚'ãôC±;[5b=˜ˆù+?1·`NŒSy±ôA¼ïÒDG\S¢ëÍYb%¢íÃÄñëÀDEÁ<ëÆa¢éab‰ÞÅ018OQÀOè¥dÛÚ¨:â’ðMÌ’JmÑDQ]Pe¶z† ËXî: 0a°ŒaÇ¥‰ì‘ò‰T¦2r©ÓD§‰¦h"æ DQ=~˜éÑ{Ï/9Y›&òu}H^Ç7«ýP^“Ý&|—LŒ3ØÅŽNVtW¢‰|Ý ÿY•ŠÝæ4¡´<²‘Vr &ï8Õ×&dèÜ®4Qœ¦¨3A\¤Næ4±Ñöib‰øuh¢¢`žuã4Qtƒ4±DïbšÈΣÃäŒW¯E¤ Û–&Æj£Î“šËXÔCÙT¯wé.‹&Ê]{1w: Q<)$¾9Mü`è(vC§‰N-Ñ„äL¬-Ë861ØùÕMäë&çcD³×¦()m{[t\ðçi\nÁ‰†êÚ÷`ìIÅiÊçë9<; ¾ÓÉš&Ö"Ú¦T§‰b‡' ŽW¢‰|]Ñù-ï Š]ÂÍ«–‘µ Ÿ[p"õM¿ƒ&»ÒDv Žc4^¥qr’_¼ÓÄÈ4±Ñöib‰øuh¢¢`žuã4Qtƒ4±Dïbš(Î=æä¯f/.âÆ4äÆhb–T8Sýí£ÒÄ >:¬¡}s——• v¸ë"S˜Ù¯vÝà‘Õgœð½‘ïå&:M4Eú\²~˜ß¯Ti¢ØAX»v]¹. €g³×V;€i‚ÄŸ?„ !7`ÌOD½¡;ŸöÝèTœF²ŠB v|2Lw˜™%V"Ú>L,¿LT̳n&ª‘n&–è] ÅyŠëEÃŽv~ã”N%g?ÀÄ<©­ÁDQ%‰¨~Ìøh¶4‘ïÚ;d©u>FGö;„=( ¨ó,û{ó>õN&š‚‰P´Ó‡“ùGï=¿Ä"«otÊ×M¥Þ…9G§„iÓCØ|`m‹Dçiµ‚DRoéÙÎËÎÇ&Џ$:K°Å…È}£“9M¬D´}šX"~š¨(˜gÝ8MT#Ý M,Ñ»˜&ŠsIÌõü}G;Ç[×®ËiE ÒÛO–꣉yê?,ëñÛ¦‰|×èHa›ÑÁw¶mMÅ£~!@”yî):M4Eú\r"‡‰êŠ]µa—ëJÄ0Y퇅iÓCØtп ç+aåŒè}=Iõ`çíJÅ) ’ñ*m°ÃžÒÉœ&V"Ú>M,¿MT̳nœ&ª‘n&–è]LŹöØ\ϱ}´óÛÂÖçÀcg°ç)¥ÐLU1$WOðwT/v»Üµ¶€ ceŠûU›()x™Ê´÷}N&š‚ }.DPLOU˜(v$«ïsÊ×ÕÁMo¬ö“7=ƒrHAuŒœÁæÒÒYÃ^mPì¼ÛµÚÄàT9޶8L½¶9K¬D´}˜X"~˜¨(˜gÝ8LT#Ý L,Ñ»&fõR´qµ‰éŒÐÄ<©©­JØóÔ³¿°Nó¢w<5‘=²6ðd*cçûF§NMÑç%‡$ Zmb°cY»v¾®ŽÆNÖ4±Ñöib‰øuh¢¢`žuã4Qtƒ4±Dïbš(Îг{)Èm{ÒÁE?B³¤¢o,£SQ• ¨F?A=§Ë¢‰!:Q¼‘7i° ;ÒDñ(!X‡uŠ]¢^m¢ÓDS4¡Ï%ko¨¾êŠÝé4t%šÈ×MÚ[¢Š]~Kš€C’Ä0²Ñ)i ލQçúÚw±;“Á{Sš(N#úHh‹ã^»Îž&V"Ú>M,¿MT̳nœ&ª‘n&–è]Lƒó3v/qãŒNà0JE‚8F‘ R¹­ÚuGõQ•­^<\M w-@~JtÒŽ;²Ç¤Jýö`«ç‡Í×%ð vËæ|ºcÛJØ>¦(8F"Ùkà ¥j§ÝPk㋪ÊYæMîòÒÆ—Ÿ_WÙÑ!à=Çõ˜@LxêbèùÇûøÒÔø"I™©:¾»tòŽ¥ñE¯ë]ŠÉG»3UÜV­fDÞÇ‘—U¢D!Ñ!TíÂΧòŠÓ䎶¸è{Æ@ó-D%¢í¿¬Z"~—Uó¬YUtƒ/«–è]ü²ª8í±Ú½ÔÙ4Ikf p€ÑŒ*K.ÅŒ;hìX^Q¥Ã¥ÎV. &†èøöHž£HûÁDñÈ”«ÚÚʨ—Fí0ÑLä]H>E&Ôît޼L„ˆsÏgµµ nÛ´^y‰Ïg ô®´àR0¯¦t° ~׌Å)è—HõýȃB/fdMkmž&‰_…&j æY·MõH·G‹ô.¥‰£ó<ÀMè¥Üƽ„TzûéR±­¥‰AUίÍb«çpYKÃ]§ ÔÁŽNÄý–¾‹Gï 1ØOìK&Z¢‰ü\j¯Œnd}ôÞóË¢kešÈ×\b«ýD [.MÈÑ9ïÏ/ûá¤OoUi±Kâw¥‰ìÔ#CâdŠóJo&¬ib%¢íÓÄñëÐDEÁ<ëÆi¢éib‰ÞÅ4Qœ3³ Ý…êà6¥ ? 
G’| $ï ó¶Ô©-šÈªPà ÃA®ìzY4QîÚ³';:÷[›Úq_›0§‰•ˆ¶OKįC󬧉j¤¤‰%zÓDqžr~´{©ˆ²u5#‡d¼·Ï›Nb½Ð UZ[›(êó¦]ãw±¹¬”ÇèHòbäïTÙœ&ŠG&ïýe|º«ÓD§‰Oú\ÆèˆÑÕ×&²+r¬Mùº¢ÄïL7?~ǨŸþYû„/^<{¢OÑËw(›àg¯žè«Í2h3ÕvàÓ+ÔæšP¿2Iï5ÔÏôúªç7Ú…~ØäYÇÃäcø¢¯ó·ÿç÷?ÕNööùõ'ÇÏò¯K?û•SbqüE¹Ü¯óŒ?|} Ú×J“9húÁöQSg7'ÏŠú,Ë×y2ó~ÏwþIÆ—ëÿ“¢¿Òò(ÛWn_½¸yúòîÝQ,ÿôóßoîï¯ÝO=|OB¾ºýö±þÕíO×Á)[ Ë%å\~L_ÿîß¶”|±ßÿÎý Þºßÿ¢2þxóüæ[mÛw·/O¥·¿~û$•ßip }Œå‡ÿó??Í,ðÝéoôÿË×?òO¯îîó“©Ó§/ŸÝßæÿtûüþÙëu §~÷$V¾®¿)³kóx]>xZúüã¤?ûâóü¯¿§É¾ûþöñëÇ÷ÊÙOõo_äßýx£=äÃóû›ÇÅѯƒÉË›Üí½üDãòïyôû¿Ý—ŸùôæùËž=”Þ?{õÞÁËg÷÷·/Nd ¿ÑnAoQ{Ó…·ÿ¹>-O~x8ŠÿPRýêUÝ­Àäq]Î?~{óâöÇÛ‡A–°_rSz~÷øîáþõ¯€µ¡{ûelÀŒÈQœ=`žòÓ{fJ᛫›çÏï___ þJçOÞŽŠW˜ßèä1ôötœËonn|þpåFäIpÀ›òÄŸìx¸yùßÿõäÅÍó ¿$øæêo¯žæïæª(;¾€QŸIÄX ìX¦øôS|’C'ö¼E'Ý'œø<ÛZ>Õît³~Åg°}&ïr¡ëqK $ú)>q‚Oqy Óg?éû¤)>uŒp ¦OñÓbˆO<8–œÕŒë÷™íRˆ<Åg4ž[ÌxžGhH†ÏŒñw]*ÈNHØySÃÉ)¸¾T0ò¸Ñö— –ˆ_g© ¢`žuãKÕH7¸T°Dï⥂âÜ‹°°ÝKŸØfã8L5sóÐ9÷iRS[9x{Už]ÞVï!Þ-@œ¼Ýõ³½Ç$Mêet¤zÄ[D©˜š@Z35T±Ø9¿xÚ,-7ÐmSÖ”£‡U7“åÎé­nþf ƒ‰@R!äÕJ#ÄÌaà NÁ ÅNu9Z@o:MzÔŽ!Z]©ØZ)…`:Í5Ïz²Â›u>aÜ“Zk¥ÔI•òäÈ8¥v1[ãZÒi¦q)ÙN¥’1Z2]ç߆Õi–˜Dc™¼Ø¹}µÔ¦œJDÛ‡á9â—ኂiÖÃp5Ò Âð½³axR+uäž”eVvi†{ º GH%j †‹*Œ2, ¶ztánÁpÿÔ”ò¨èÄín·ì=Ê#yo+“z·ÃðÃMÁ0é‰%ŽR5«×'÷v9º¥aXËM]2²»°êÉ*d]Šã4p²*É'ÌN§ê}¡Úe†msHq^H½~¿eo‡aÇ sœX‰hû81Gü28QQ0ͺqœ¨FºAœ˜£w6NLj¥<¬{¿e€Ð¥„8ÑKˆ!7Bjk›/‹ª€2*ÅêéŽe}ë£ÃR°£#5`;œ(3"9¤‹]:H²¾ãÄŽ-à„TL¡t§Ùª8Qìà`Ùz!œÐråÃÖât±£ë&jðÕ¡+i²~ÂÚ 3Å.ûm/¸§Ñavlœå*vÀqÇ kœX‰hû81Gü28QQ0ͺqœ¨FºAœ˜£w6Nçô7v+å]Xwu"ú.¦¡Õ‰^B¦ˆq„ÔcéHIœ(ªbŽÁبPìÁ݉IщiÃ$ÒêQx@jÔˆ$sÚqblj¦pB*f†@1c=‹t±Ãœ—Æ -7f0‹»¸nÞ·ÐIOçC8Ž\>ubg¬Ô«;rKÁª8QÄÅ\&Sø¸ã„9N¬D´}œ˜#~œ¨(˜fÝ8NT#Ý NÌÑ;'&µRG–¢—ʼnP’ø àÄ4©±1œ˜¤>¹Ÿó£Æ‰iÑ!¿NLzrÂV–SØqblj¦p‚5å²Rx'z;·8Nh¹ƒ€Š·> ]Ƶ Nø.ùÁ½N,£åœ(g¶¾tµsiÔy +5w¥=c4vð»Ç‡aÓ)an FRµó™F¥ñC7Æ)eñI¦SÍ•7Ê©‘€Ò»Î9½R¬úN‹]äƒ5šS4䘙êëq½]p£ÎU¡7ê+uõsѽ]Ͷ‚U t”lõ9Ü­;6úè` ö»¤í&[{ÈQFnŸlÝGGMŽúŠ™“´<#‚M/¢ëÅÉxEØÝ‡ðáZŸÎ¾/îåSéa^y§ËBÔu=6eI´ß¶mMYÕ"ÚüÌæ,ñ‹ÌlÖL³n{f³éöf6gé;³Ù;÷,Ô„v+u˜{þ ÔŠQÇL·¿*WȈäñÃrG‰˜|)¿zñÓÓ«Ïîóó_¾{MÂWòý<»–8ß—Ø^<òø³ûòmÜ—ÑP¹Ôå³û_\?—Â4ÊÒ®‰å_®Ÿžýx}ñÁ¸üÉ›??{úìêGÅ=óÏ»³¯K?+ŽoÄìÀôòê‘0»>Œ ·^Š“îþ`¥ Ä®~ñÖqåäÁ9ÇãÛ 'Jõyè•ú?¿iDߎ‹ßܨÓm†¼{ãÎô[»p¤·NG5±»>ûv>côy0D½^xµÙ ÞÓÒ‰BK¹ÂŸŽÈ;(ûgƒ‚ï'Çà àS&´…´ÇÓšîsжú##Çž§sðÙœR»ƒS[ðt1KÁ#Þ[œ{gåÎÓ;O/ÎÓâ)9 ÌFÿ"vñ þbý‹½+Fz@ã;ȼîæ%Ýt†ú]W„”ÍÑBYWä'|Ô»¯ßóskw°*±þ º FôÿáŸ.~)ÂT0ͺy¯DºIÂ?]ï„/΃n‚v+`åDMŽ:æp|ïR/!FÏõ‹ðn¥f×Mˆªä„l½­ž€ïMht Î[Ò„¤æù@˜ö“Õ;M4GBÂ,c²h"é±âhBù1EkBQìrpk&jÊòŒN˜ÿ8N€ntñ,¿¯Ø‹Ýa.¤Úž#—¯s0Vÿ‹¨å4šN¥Qc=~ §bG¼iJªâTg¥ 7Åw° g§q%¢íƒÓñË€SEÁ4ëÆÁ©éÁiŽÞÙàTœ#ˆ{´[)ð+§¤ŠÒ=D?NÓ¤~xûó/ N½z”>ÒÛêÑÝ1pꟚõâÀщnk,cr0æ½…ÃÓí;8íàÔ8žÍÎ(õ·š’ª·K°ô¡R.¡èÍH?ÿUSRAÇ‘8°Îzû†“—RϧRì§QWƒ q5ˆc4î‰îí2mš«8Þ ×9S\Ä´oï4§•ˆ¶Ï0sÄ/Ã0Ó¬g˜j¤d˜9zg3Lqtï~°[©ënKŒì;ty€aŠ„!!5a[ STÇÙVã;šÕGÇ ÊÛ=y$Üîàzï‘9f¡Œa_üÙ¦-†ÑÌÒž¤#§Ÿ¾}¯kºÅ·*k¹)z0,÷vÁóÊ Czšá8ÃøÎKWŒõd|½]ÀQ7 âáMƒpÔ©ÔƒàÀËÞj—ˆÆ­Î¦S¡°’«'ÓïíÈ;¹n—×Â"i#c¶œjW@›ÒZq*ˆˆÞÙâ’;­YÃðJDÛ§µ9â—¡µŠ‚iÖÓZ5Ò ÒÚ½³i­8×ÍÐÙÛ­TÎ+g-ιs˜hm’TöЭ©ª¤éˆ}4Õ'çîVÖâþ©ÑQÑY&w0ïº:­‘È®u)fÜim§µ¦hÍË 01Æüábî·ïV`±ª[šÖ¤\tzZ,2Ù¥itÅ #À­EùÊ™ØüÔÕGåóÎ'‚ˆ9ùd1ŒÚ!Œr &8%AάæVÑÚèÑ|RfD–ªoõ€bZë¿'¨ÇÏ4|ìý÷”èìÚ ÿfmü(šµØÞÓ„îýw[ýw$rÒÛ`®'†(v‡g꿵\MÕ˜\½ƒ*v1øuïDÖû\:Þ‚g½óÅJuHÏaÓé@uªçŸÉHüPì í'wÍyžJDÛŸœ#~™éÀŠ‚iÖOV#Ýàtཱི§'µRxìâ–%§ ;œì%dÈõÌôo)´…E•§”s¶Õ{wÇp¢j½D‰GDç öê8¡•ïì1;·çÚq¢5œï…¤=´›M’&rÔB½÷ÆÔ–õfwŠl0ŒÚLž,Æ0ž]Î1D󫻸&Ãxì¤ÉÂ8È0A¢®÷ÌYp ÀãöQø`¼ž¨Ç€eW¿X®·#µÕÞ[Ç…cW²j3o8; m §Áçd‹ aß¼aÃ+mŸÖæˆ_†Ö* ¦Y7NkÕH7HksôΦµÞypž²ÝJEäu §Ô¥0”g©H `Nn„ÔÐØâO¯Þ ‡POîŽmµ/O9ô#¢“7Üj/É2›nŠ2ÎûVûÖÚ¢µ¨[Ø)çêy–z;X<Ï’–Ë.F)Üú€ræ +/þ 63ÇÁ‰ôS÷œÐX¦*vHÛ¦m-N9Çhä{ëíÈí8a+m'æˆ_'* ¦Y7ŽÕH7ˆsôÎÆ u€AÆaf+.ÅUq"K{ŸÒNôRXéÜz»ØØ%E²ólõHw 'ÊSëæDãˆÞ.ox DñH9owã@qß ¾ãD[8Ae}Å;ÓWq¢Ø¡[|/8éúŠ<2’ù±  «à„w  ¦ä]„l5Ñb÷/Ï^>tŸo[•øù“Ç®ÿûË‹§g?k»éÇvçß—ÿwþ¦éþô¯//¯ÎŸ]^|þôÙ“ÿýéÍøááõ£GÎþñ÷ü]¿–·¾š&^ÿþ“?þôôêË«´xó‰K9ŸüûõcA…þGú•i%üÓ%¼}¨©E1¿þú·o¾ÈżSÂ0µŒÛ×§üÑW—ÿ#½óÛWòó?Oy»Ÿt]wöé§gxvýPº¸ki¬úïôù)…ýÇÅÍÕó§ÒaŸ}ðsϬÌ'…ð“?\ýðèw×ÿzÄåô7[êÆ7¿ýâXa}‰—éêáåe8‡Ë <|Içœ/Òùì+¬|Ų;©>U½žô¿¿zþ䥌žÞVÓÃÈŸ$´Zìiïð7Wµ 
¼_Zùq§<úçÊØ·½¶Œ§nž”«Í˜üúUéÑÏžßÓ®üÜÑ9’tà8ûÏ?K¥zçwÿúø…Œ‹O(ù““þèVгòT÷ÞGܾߖýÕÓŸÛ†{ýÀÿÄ’fvÙ·¥ôíFÿSZÛÊ(Þm=NóпCk‰Îþ‚Yeý Tñóï^Ý{ô@xqï´?òß72l¸ý·üK ëÉ‹êçú‹îÕõcy¥¯„ܾ“º’Ž×ÿü UÆ…Æ{]\³.ÊݾŸG×?\u?]Üü uïõëï¶«l'uôï5Ä'ö–'u&¿½¹yùB§JœÜóq¡ Ñÿ¼º÷áñå‹¿ØNÅÎùQé‰=Ns'µ¥ bÝ©ÚÅ£²lgœp“ÂÈKCJ,§ä=ÅMiM=iŠ“wµ_&cÃ+mŸÖæˆ_†Ö* ¦Y7NkÕH7HksôΦµI­Ô;ÛpÖØ’|'×­M’ŠèÚ¢µiêìÌú¨imRt|Ø0½¡zŒôˆ¼­ŒqOo¸ÓZ[´&“u”œrýBÌb‡¸ø 7-7{ï³cëâ|d>pÉÔ ¡s@.æ›&i~t$N†R±s4j™+€N¬i`!§:Ì;—·=VWœrˆœÑ—hÏ©hN+mŸaæˆ_†a* ¦Y7Î0ÕH7È0sôÎfuž@šnÌf+•œ§U†cî( åT,|`rÁ– Ìm1L¯žäG¨÷þŽ«+O8Œ¨†áà>²ÕF=fïB0æI‹º=KÇÎ0m1ŒTLÖ]kþßoß«À¬É¢–fîôºGÒÛ€HìŽä¾X–aä[ÌÇ“t§WNjrÃúlEo‡.×ƸwD Óf"p0œJÃÇœ·D˜^\ÔëN¼)NS%îcŒMkmaf‰_aj ¦Y·0õH·‡0³ôÎE˜[çÒþÔóÝÚù•ÓÂ3tìf¢Ô„M!L¯*yLžmõw a&F'»Í¦xdðÞ؈;xo;ÂìÓÂhÅdÍlCª!LoG9/Œ0Rnp.&2òmôvVÍÐ]Î0 1LÒÒœ¢h)¥`Ô-‘Á› “!z ÑìÕÄÇ¥…¡¾M c ’“áTìr·iÎH @[HŒàê×…övnÛMs½SJ!%°ÅÅq;­ Ã+mŸÖæˆ_†Ö* ¦Y7NkÕH7HksôΦµÞ9'ä­;%º ­‘Næe µ"!¹”GHå¶Ž8©ª$ý”t9d©OºGënÑZyjDÝgGaCZ+õ"p[Yô¼ÓÚNkMÑ P‚:­AY:há¢5-ׇÐyã;ȫҚï$îŽýÍ@#üÂ#DC©ØåqGœ ƒƒHÒîAõ´Qoa­%ÓiiÔä’áTµ—¹Ô)è³"Ûâ˜q'kD\‰hûà4Gü2àTQ0ͺqpªFºApš£w68õÎ3¥äÌV `íÛu§ž§pš&5µuÚ¨W…9Ë@ÈVìˆôG Nå©õèp=Íü­‹ÛSñ˜„Ã=Æ {nˆœÚ' ‘ÁçjüÞÎÌ7-NZnräœ1ó v:áµ&8Å.1ÆÌ7Œ^¼ÕD‹ð¨üvÁÈ ¡çk)…H©~ï{ow˜3¨æ”M§¥MÁ[Nµ3:¸ëw p*N…ë²QaŠÝáÝŠ;8 Œˆ+mœæˆ_œ* ¦Y7NÕH7NsôΧâ<Ç@Æ,[oçÝÊûsãÐþ@• ‹i„Tn œŠzg»¯Bx·À© ±ó~e„qŒÒÈÜ t0º¥‹#›}‹]H£NÃDo2Œ&Å9¶ÓÃëŸkNƒá4I嬕6­Þ¨»ÃÄž5§Ñt*•ÀE&é´¤ì¶]p*â¤Zƒ‹bwxãÜNkÃðJDÛ§µ9â—¡µŠ‚iÖÓZ5Ò ÒÚ½³i­w®½$Û­Ôá2«Ðšç.ó­M“C[´VTE´ÕGôw‹ÖÊS“ãÈc¢³%­©Ç좓•­Œ÷„ ;­5FkIrFõLsÅ.ÌÇ-DkR.8ByëÓ»¼2­Åð8¬åΡȤú€½ØºÚ(’0¹4 ¬iªÞÎm›º@J=Í.Øâ2î9ßì±i%¢í#ÌñË LEÁ4ëÆ¦éfŽÞÙSœ‡€Án¥ä•s¾¥.§¡œoEBBö#¤&h aŠ* >҈œàý¨¦<µ"²q\µ·s"Œzd§}ÙTƇ×ï³#L “u!Ç!çXÏù¦v™ÓÒ·³j¹ ß#Y"L^uÁ ;pžÓ Ãè1X]œ³”ŠrsŒ¨ ˜][½Ïù®u0Œ.¢6í`ÄcöÓe÷9²½ƒi«ƒáN/ÑÂbµƒ)vqùFÊÕS“s}!»Ø¹´fƒ¹“Áûÿ±w.;—ÜÆí4XW’É2@^À‹YY(¶‘"C’í×O‘}fr>ÏiV7ØÝ&t(í¾)tý»Nóòã¥J1½`r™#Z×";JÍÓ½5 ªS‰6g_œàÜrw"½ªGü9ëU Ǭ_¯jFzÀõª½ÝëUÕ¹ŠÍxÐï¥4\[£€T–´±ZuL(¶á^UE.Õv¨Ïov<º¾u¢œ:kõF˜0)Šýç)KöeÎÕª £ÁC BìÀ„Ù!^ hÃk@f(^»ZE)Fz}L5¤—¯78?}ü€Ùú÷O§Õ´n¼dvútÁ¤E¹‚¿>,fpÇ †˜Ó¥eâbÃQˆ¯óG ؈§6½ÔVZíì›Ø Nx`sjØÑ®'·ÚY[Þå|§öûcûÜójgV»œ:©‡ìad¶à8NÍñÖ“à«S-ëå;ĉÎÔCîÜ¿Ññ±Gü9ˆØPpÌzpDlFz@DìÑÛˆÅ9çT¶öÜ^ŠžK]ˆ¸æ¸5˜ª4ÖÞÚ1õ‘Þ«ú÷±è$É÷!âe6-{kÇBDX˜S²9wjc¨v¥òõÙˆhÏU %iœ‡#Ì9jºÖTà([Œ` 1Côfì‚ø\§¼Å0ä€SÙîã2h«Ü[O®:(W“¢+Î wÖ“s'§ˆŽÏ0=âÏa˜†‚cÖƒ3L3Ò2LÞn†Yk–vá‡À¥ £¬‹M7¶¹ªTµ7Ù!u´m®ªŠA±]˜lµ#|¯óèêžè0Ê} S• Mà++wö&ÃL†Ša°ìø`Fç6kµ3†‘³¦<7Ä`ðÙQºô| /Ê•ù× C €ÄHÛcaµ“¸«&¶²Ã0Tû¬è8¥’ä@oe˜Câô)"“a6&§ˆŽÏ0=âÏa˜†‚cÖƒ3L3Ò2LÞn†9ÔKÅ«“Š&]Œ46æ˜ÔÁ’ŠSŸ4¼ÃŠNF¼aŽ(‹ø”|2Ìd˜† ›°´êU; §3 U6aí:m«^[ hcÃk†áÂ&åt_h/’ñÊ:»®©8 õ ‰):Nˉޚ¥auš­5§ä‹‹œ'Ãx“ÓFDÇg˜ñç0LCÁ1ëÁ¦é¦Go7Ãç)@çÄë*òr† ÑÓe,†©êA0;Wc×·Œü^ ³F'ÚÔ@ýè€è} S=2²wݨÚÑÓ0>f2Ì à  A³0Bµ ñü³då¹™3xݶÙ!_É0KIÞ¨û0²XÌ‘¨}ê­Ø)?%n1Œ: sÌé¾r ]§ Jc$Ç©Ù=¯m9õ’CÈB×{3ŽS³Ã¸/¼ÙwšRÎÀí‚};¸÷¨^qš…$rpÅ•Œ¢½¹#¢ã#bøs±¡à˜õàˆØŒô€ˆØ£·õR6ϸ6#…Ä…o â1©£Õ;¤þù0Ú[ â±è$¾«Ç$†íÙWöœ‚q"âDÄQJ¦ ƒ¿ à|Àfy׿M .ƒpŠº+û@ß){Æö¡Äj·áò\d0Ò辰^»¡g}’lÁ0«áEö:5³SÜ•#¢Kk%9¼ï›0;·Ë©w›KÊØ9~YíðÞÚ„«S ‰sôÅ Î]DwîßˆèøˆØ#þDl(8f=8"6#= "öèíFÄÕ9 ìéBðZDDÙˆ6ñ˜T c!bU‚„ê#¾"Ö·NÙÉl¶F‘n¼ÍU<"HDWΤ…ÇBD›ŠØô—B3ÛûjÂéàTžk]/8 Èì8_™?°”Q08üqc€)Iõ,VNâõj‡²œØ§Xú FElŸŸ¨v„÷&^¯NµdÚa_œ>¥ü› ³19mDt|†éÃ4³œaš‘azôv3Lu)y©V;䋦UÙJ¼^%ØÐŸ1í:XŪUU®µI|õIÞŒaE'Ó'!‹Gd›Ü©?Ç@œ'!'à Æ0q±n5#qhgÕ«v‰ÒÙ SžKF˜½y°Ùá‹k¬'nþÀ"B9ÐÓ‚%­ˆ¤f¤ªÒvŠúïøéïå+øù§–oÿòç2ö[÷ô?é—òü þû»_¿¥oþÃŒ~ÿ‡?}÷ã·¥“úãwùÙfî ¥5 Ì¿û ùÿ¤ëŸ ÷oß”ù½ããéño 8WȪýp»hMZK¥‡±ß˜§fGùÞ{kÕif¬¾¸(³º°; oDt|Zë­5³œÖš‘ÖzôvÓZqnsË“ßKå‹%ŠÚðuƒÖŽI̓í8­êS¹Þ@ô^´Vߣs§¡Ú=gѺœÖªGÔŒ¾2kÄ“Ö&­ Ek©ää„Á9ªWíì>›ÖR9H3{ ¨)¼4÷ÆrK¶h-j2ˆÑì1ŒÙIÚU<*ªË0%§•õø ŽS³ ²+iaŒ{œ 'åä;µ¯u—SïÞZ¶’C=wÑtZìJ݉[i­ŠC±<»ââ¤5wÞˆèø´Ö#þZk(8f=8­5#= ­õèí¦µÕy œvôR¨ñâ½5sƒ[çW YB ;¤&‹Öª*A¤¤¾zz·+dÇ¢ó4¥¼œÖªÇ’þü9˼B6im0ZË…‚2圵IkÕ.áé{kå¹H!‹ˆÓ€ÊŪ@×ÒZ†’òÇÆxÈ~©¼¦nvA7i-Úøïßþïìçÿã—ëMßüýO6Š[Pÿ\›þ÷Ö_—ꕨGdÀU2žöd?Ê´Q÷Sõþ‹}:ëÏØíŽ·gZÛ* ôË_øõ›Ÿ¾¯ïö¯ßüçý£C5¥ÃÎ¥~Lã»üb‡x<>œŠ|l‡_ìæå2‡ Ú;ÅŸmǬG†G/Ò£Ác§Þ>xü윣Íd²ßKñ«õ·3·úQyŸ%H–ÜÚúbp xü¬JcNÑ«,úFðøù­³QV 
Ñ>Û¥”n‚LJGÅRÇÌW¦Ýiöýù¯“';žÅŽõ»d¬Õ ¿¾1õéã÷kvÏeÖÏ`ÇÇs… p«\øg;ŒùJvÌKŒ`ãÇ/Ç(-X"‰¶'ìÕî9Ò4Qœ¸Â}q™çV”;MlDt|šèM4³œ&š‘&zôvÓDuÄâLÑW‘zñÁÁŒKÌqƒ&V©cÚ!ˆÇ¢‰ª -ÎYw¨Ïð^4QßšÊw˜üè` ÷ÑDõhsþ0qÒĤ‰‘h&k495ibµC9›&Ês#Ûÿ‚Nûá3|!MpXÐ^1¦×4Ö‚¢ý:í–^í‚ÜKÕiÉ8’ƒ/Nž2%NšØ˜&6":>Môˆ?‡& ŽYNÍHH=z»i¢:±ßKi¾ö`›–Ó )oÐÄ*U£8«û« FEUɾ }õã{ÑD*©žê¹\NÕ#ÛŸ|eä‰I#ÑÚ,]s`þ:Kù§ß¯ÑDJt6M”禢¨7G7;ÆKsFäEE_¦½3T0‘„ö&Jµ ù^˜¨N•Y}qL8a›%6":>Lôˆ?& ŽYÍH=z»aâP/%píA'¦¸Øx¶Ǥr &Ž©oõ­í eH~tô¶,ÚÕ#‡*ÁW– &LL˜ &¨”â5O@m˜¨vúÃ'ÁDynD4œðæèfrem]D ñL”æ[NÒzBsMA4ÚðbêQ!¸ê?$…y“á¥rbôÎH¯Q|º |ÃðR– D(‰¯LòMôˆ?‡& ŽYNÍHH=z»iâH/%¯ÝùÆ…“âvo_ºñýÞ^d,š¨ª(§Ä°C}z³­‰úÖöÚ¤ìG‡ŸÒ·\NRËÚ&'ÅGµ#I“&&MŒDö]æ9ÄÐÞš¨v!ÂÙ4Qž ¬h·×Î@/Nüœ¸ó­ö£Yl_/µ~µJC{±ªÚ…¯·è/…‰µ¸6go˜®vi„7KlDt|˜èL4³&š‘&zôvÃÄên5}ùÙm PyÒĤ‰‘h¾˔Ô0©IÕN€Ï¦‰òÜ’#‘Û~RN|iаØ0'¸QË(—l@ÅNÂÀjGpïÞDuj_Dr*£V;e™4áMŸ&zÄŸC Ǭ§‰f¤¤‰½Ý4q¨—ŠÀ—ïMÜÚ›8&•»6QU¥r¢-ûêÓ×Wü~Û4Qß:3±s©dÎSŽäËi¢xŒ ê+ЙÑiÒÄP4Qê9D‰¾Îƒ÷é¾ß9Ÿž0°Ö“QJ쵟ô*#í™{i‰Q¼>é¡´` =µ/a¯v€·¦_ eóë‹ã§Û[“&^O[ž&ºÄŸB-ǬǦ‰v¤Ç£‰.½½4±:·éRhgÅ~ˆ”kS:±¦y#ýøA©9E«ªRò´M·äôV4ñˆNI”üè<'g¼š&ªG´•¢ÿÕ!41ib š°ïRjª1þ:¿ñ§ß¯ÙÑSfÑsh¢>ׯHÑë÷ÌN‘®­Œ ¿† ( ¸HuæëÅÎÞ†n…‰*M[WœõBskÂ%6":>Lôˆ?& ŽYÍH=z»abužrJ¸£—zuVôL˜^ÌÇL¬rN¼Gj«–ÑªŠ•)G_=½LŠÃ}ùaW‘%ŠøÊô‰¶'LL˜&ì»Ì4ç6a¢ØY§yöµ‰ú\Ãu{íÌ.=è„‹áB­ ´L %MnS)ÖáÞj«¸T¶˜ÐG:óÃúÓÄFDǧ‰ñçÐDCÁ1ëÁi¢éi¢Go7Mçå GjŸæ_í‚^{m‚*Cn÷öŠPІûR!uÐiUE ’¯Ó{]›xD'r þHn£å[Õc å,²¯ì9%Τ‰IÐ.Éfóòõ¦ß§ß¯D¡³i¢<×(ARôZ¶Ù]›Ò BZç´qoÈš0‡L)·›új‡r+NT§ŒÂ@¾8ž›þ<±Ññq¢Gü98ÑPpÌzpœhFz@œèÑÛÕ¹@É“³£—J×â„Í&ó²±9±JÕìÜ’{ؽ*ÙýÏĉª*‰&_}zqNë7å­%PHÜèH¸'ªG&ûxÐWF³öĉ±p‚ÊæDbÍÔ̻کœ¾9Qž+6öd¿ýdQMWnNÀ¡\ÏzM\Z0(xËUÕ.„{7'ªSñ×ÒªóÜœp§‰ˆŽO=âÏ¡‰†‚cÖƒÓD3ÒÒDÞnšXgôÖ$.*Lšð¦‰ˆŽO=âÏ¡‰†‚cÖƒÓD3ÒÒDÞnš8ÔKåpñÅ ÂÃÖ-ìcRq°½‰#êã‹•¯ß6M‹ŽÜHÕ£äœÍ»jÇO¹¸&MLš€&J‰ie4Noß®vô´7pM”çÆH%e·×~̵bKõ:ÜØ›ÐÚ·”øvK×:¾ð½4QÅQLÞ­Žj‡‚“&¼ib#¢ãÓDøsh¢¡à˜õà4ÑŒô€4Ñ£·›&õRDçt ²HÚº†}Lª vq¢ªbæLÑWÏáÍ®a×·–ÀŽè<Íy.§‰ê1Eɾ2“&&M EZ®WÛ,=FnÒDµÓ§ièI4Qžk Éï÷2CÆ+3Äêba|]½bmÁœÚJãÚWå[i¢8M%3&_\ÎóÞ„;MlDt|šèM4³œ&š‘&zôvÓDuŽ`Ý#»½T‚L—Ò„Zo¸°AU“Whoµ#ˆcÑDUC¹bí«·Ÿã½h¢¾u)Të$ìZíèÆ¤NÅcÆÄÄU–çI§ICÑD,'˜‚ÄäÜ›°ï7笧Ÿt*þ9ÊNû1»ç ìMÐÔ×0‘j.Ç(ÛB«Ý‹,—ÂDuªö1øâ”Ä o–؈èø0Ñ#þ˜h(8f=8L4#= Lôè톉ê<‰‚³³Úáµ—°•eAÎ0Q%d(Õ…vH-§“©Š%QbÎ;†‹ô{ÁDN©ål;­QL7ÂDõhs!Fô•‘Îr&†‚‰T:•”dЄ‰jGO½æI0QžË"sÙ­Úñ\¹5—L9(mÑDÎXR‹O©ÙY0Úøbª%9Ř>¿å»/¢C!ß9¾˜Ç·“¯LŸ2ÇÏñeŽ/Œ/y±I[9âýu¯ùa|©v§—3*ÏMåι3s¬v¯_H¬ªy=¾ä:ÇójäÞzFÕ©î'0ÒºËˆŽ¿ZÕ#þœÕª†‚cÖƒ¯V5#=àjUÞîÕªê<¢ oÉï¥bÐkW«bZÂÖbUUˆ½¬r«Rl±ªªÊ%)–øê³Æ÷‚‰òÖ¥âIÄàFBäû`¢zTCXçhHµ“4O˜ &JÍÓ’„480Q²àE:&$€9Î}Ûj®½•WÒÍÒëâ¨}E¼¥tµ թÄ$}q‚i„3KlEtx˜è L´³&Ú‘&ºôöÂÄê¼Ü‡j][íT/†‰Óòš&ªDdH;zûŒ<M¬ê€Úµ²o)á­h¢¾52ŒJ~tòÓ9±«ibUÆ2fWQ˜çh'MŒDö]Jà$$¹¹5±Ú=WÕ<‡&êsµ”¸È^¯mv‚t%Mð¢@¤¯·&Jß"ÖAÇvK‡:¾ˆÞJP;H@iCÙúù)ᤉib#¢ãÓDøsh¢¡à˜õà4ÑŒô€4Ñ£·›&ªs°Á¦}û!2§Kirlq»·'È)e¿·'HcíM¬ª0ÙϾzÔ7£‰úÖÄ‘w –D7ÞÊ[=JFŠÁW&2÷&&M Eö]ædÓGlç¯vÑtMå¹Ùš¥sí{µ³™üµ9>løÀôú -¢µ`œ{ÓÕŽ2ßšãcG ÑÇøÔ MšØ˜&6":>Môˆ?‡& ŽYNÍHÿ{ç²sÉmÜñWñÊ;·Éº±*€yƒldŒ-+V`[‚eûùÃËgáÈsšuìî!t¸˜Å jºþ]§yùñR5!MŒè¦‰âœ)sÐéC¤\›ãƒ@7‚üãǤғz@_”&Ž©Wy/š8¦ûòSf‘M,š˜‰&`+Iý ¥ŸüÃîáÚÂI4QžkhQœÍïfÇW^› Ú S„çI>s – ÈŽR¬=B¼5ÉG—¿˜üñ¸âDVmTšØ‰èü41"þšè(8f=9Mt#=!MŒè¦‰êÜbîb“ßK©Æ‹÷&òl"î÷ö%Oó¥šÆ¹h¢ª‡rÏQ\õ)~¾ ô˦‰úÖ(€/ –éÎKØÍ£Hž*ø_]\ùÇMLE¸qHe?û'ªœ^ͨ>×Àƒ:í§ØÉ¥ÕŒ`ƒR˜,=_¨´àTòùôÏdU;½•&ŠSL¦Á§7ÄMìL;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4q¨—ŠéÚjF̶Y”½‰CR!LFU2ƒs–§©ç7Û›8„ïMTó G_óJ¸hb*š zob·šQ³Kñô½‰òÜd¢*ä´Ÿl§éÊ[Ø`›HJagµŠËZVä Úß›¨vnMéÔœ2¢×AV;¤¸h›&v":?MŒˆ?‡&: ŽYONÝHOH#z‡iâP/E‘.®J[žˆìÐÄ1©ˆsÑDU•‘@M fGïUµ½uþ_ æG'=Œä—ÓDö¨!XÄä*Ëvq%ˆ]41MpÍ—ÛÅÝjFÍNéìäå¹1˜f^¿Wì®­fkÅ$ع…-µKHÐç)}¥p+MTqƒ&öÄi@^·°Ýib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&šs ’ßKÑç5Î¥ ÍcŠÒMT 0ÅW¤Îv »ª’ÐÙYio‰ô^4Ñ¢‘å…èH¸qo¢z4a/(SY÷&MLEÒN0IÚ¥‰jÇtv†Øú\‹%g`tÚO¶Ë#Ì•{º1³î$ˆM¹—‰@}¡Õîq¾~L§H‰Õ‡xÁ„7KìDt~˜Lt³ž&º‘ž&FôÃDuΪ~ŠÂ¥0[Ì,‘`¿·]jš &ªªD™}õòn bË[S`õŽ,7;€û`¢z,eNúµ›ÀÚšX01L¤óä1©&èÂDµ#“³a¢?ÉúõÏ¿ßlG‹Ú'ÑDy.¶ˆ^û)vŸ'J:·Ü„ÚNñ: µop†ÂfùÖrÍiþrb|AÃ:éäM{ž&†ÄŸB=Ǭ禉~¤ç£‰!½£4Ñœ—R¥¤~/%tíI'2Ü v®M4 )õ‹×}Hµ¹.a7U øêóçðV4ÑÞÚ„ôÂ`©%®®¦‰êQ£at•id]4±hb"šÈß%GIÙtÄ6;‰p2MÔçj(t¼~¯ØÅKKa‡ 
#QŒÏi"¶­¿ ÙìÞš ¶9eÎß„ùâø!Oõ¢‰ib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&ªsÉþûÛ»v¯ªIÇwh¢JHÔ/ªúa7Yñº¦Ê°,ÀùêíIzÛ_4M”·¶€Äà†98tMT( /4CX bMLEqc`Óì²[¼®Ù%ŽgÓDyn‚h‚^¿—íâ“U”iB7BH¼³7YIÊ–úɧŠ]éÈoMéÔÄA¹Ož¸Ü?ÊJéäN;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Ñœ›²³ƒÚìÒµ b Ó&švh¢JÀ`)¼"Õæº7ÑTQL”ÔWÿ˜/ü-h¢EÇXû§«ÿE¾&ªÇ-ùÊ$®{‹&¦¢ Ø% êïM;6;}o¢<·,¢`òzmÆr-îÊRغE*wÒžÓ–l¹êçtú°‹ñVš(Ns 3(€+.âCMìL;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Q³¨8 2ÍîóÎþܽ‰díÁDU 1c—½ Ô&ÛšhêUœhvò^—°Û['uVÞšÝ×&ªGhôÂï!¬”N &¦‚ Ü8š0÷·&ªëÙµëÊs!æwu*aØ!_[»ŽK5 ݃‰ÌRòoEŽR³hÏ2Ù~Ùñ%«$JÉU!…w_Jt$©øXôúñ%{äòJ/ünÖµ¼5¾L5¾Ð¢âôù…ן/Å®ÔZ;{|ÉÏͶ;K@Ííâüãd'yVgˆl¥œ‘£´Ø…[“|4§–4úâ4­Å*w¢Ñù«FÄŸ³XÕQpÌzòŪn¤'\¬Ñ;¼XUœçÁ‹c ¿—²¯MòÃÂNÊÀƒRm®$M}¦1î×búxK”÷¢‰úÖûI>>¢(÷ÕFm)•ÜȾ2´u-oÑÄd4Á˗餭vŠv>MpDgµªØ‘2^[U™å9MpnÁòÈáÜ|+vhŒ·ÒD—½†à‹#[4áM;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Q !²ø½”<˺zêÖ7n¹;ß¡‰*A9/ø‚Ôg5»¿$M4õ)9éÓÿù–o¶7QßÚJ!Ž>C»±œQõÈ•]e !-šX41Mpž¥1{’Bê럿\öôlš(ÏÍͽ›\i&]I6Ãç4!¥Ç<¾ôË„7»Àr+MT§B1ZôÅIÐEÞ4±ÑùibDü94ÑQpÌzršèFzBšÑ;LÕ¹i)sï÷R&צ ´Üc‰ì$ ¯8(š+U³2{_’&ª*JN‹f‡o–2°¾5(¼ðÛrºñ$mñ˜(•;C®²„itZ41MH¹–g˜ýi—&ªÝcÉÏ“h¢œÇ¸œ&ªÇ„ÅŸc¨¬,‹&æ¢ -kþ’ÿ|¾óýõϿ߲ý–N¿7‘ŸK‘D¼ Õbºro·€°SÌÈrû5€¸"«Úº÷ÖDuÊJj/ˆ# ‹%¼Ib'¢ó³ÄˆøsX¢£à˜õä,Ñô„,1¢w˜%ªs Îæî‡H½ö6§ tïœS“ Èú‚ÔÇ"S°DU¥æ¤ä¨v)¾Ù9§úÖy.NšüèX¸ñÖDÉ2PKût”e;xHR°Xb±Ä,‘¿K„ $ÎÎD±‹ÂÙ,Qž‹5º½6—«{—ÞÁÆÍ¨Jr Îól2íîŽV» |kF§CâbþM8ÓÄ^D§§‰!ñ§ÐDOÁ1ë¹i¢éùhbHï(Më¥ éÅ¥Qe‹qggâ˜T a*š8¨žÞ‹&ŽE‡o:_LÍ£2aŒ¾2yX\4±hâËÓDù.s·m!õi¢Ú‰>Ô=8‡&êsòÜk?Ù.¾ƒ]ưðüÖÅÚ‚-jŸ{š‘ÞJÅi†Â’|qÆ‹&Üib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&ªó<Ø%·—‚˜.Î脲å‘o‡&I˜ko¢©B%¬ ø^4QߺnÞ˜z¨$r9MT!úÊ’¬½‰ESÑD,÷Ð,Räî9§b—gÒrö9§òÜÜé)sÿˆNµÓ+iBÃV²)ðÎÞ”¬ÙŸôû j'rkiÔê9åo"¸âtÝšp§‰ˆÎO#âÏ¡‰Ž‚cÖ“ÓD7ÒÒĈÞaš¨Î%S]úC¤ØµÊ­ Ù©zPêdµQ›ª ûµ2>Þòó•¯_6MÔ·V*Ieüè$º¯v]õH™££ùÊ(¨,šX41M@ÉÔ“*vk×5»€x6M”ç¦Ùí÷0Ã8]yÒ‰¶LwîMæÌ,ûg«ɽÕ&ªS âl‘Ö—0XµëÜib'¢óÓĈøsh¢£à˜õä4Ñô„41¢w˜&šs àöR?l® ‰¸Qàô}MA$/mù‡Ò4WB§¦ CHÎÂW{K}¯bÇ¢Ê÷ÁDõ(™î,øÊXxÁÄ‚‰™`"—hœ€?ßôûú_¾_4³ÓÖçR ‰XÝñ…òL?\ ʉ«çã åœ$«ˆý>ˆjK÷nM—òK.˜ðf‰ˆÎ#âÏ‰Ž‚cÖ“ÃD7ÒÂĈÞa˜8ÖKéµ0Á&ƽ­‰CR#Ñ\4qH=À{Â> 7nMjæM™òJ»hb*š ²äŸ¿K“þÖÕ- NgÓDyn~ÙÜ~Ðk?˜¢×ÂŽY`JÏi‚KK)Iè¯þ7»›i¢:– f¾8Z4áM;Ÿ&FÄŸCǬ'§‰n¤'¤‰½Ã4Q«ö+ }Øñµ—°5…-h;4Q%”:{I|©F“]Â.ª,”ôYÁUoAÒ{ÑD}kàÌ«þg˜?ÖûÒÃ6’gäã¥6ᢉE3Ñ×Ù< vi¢ÚA<;¥S}®qJê¬T;¾’&ÊEoŠ;0!¥G%'ç^³ v/LT§ÌôqüP¶uÁÄÎ,±ÑùabDü90ÑQpÌzr˜èFzB˜Ñ; Õyî~Bb¿—2¸¶r•rÛ;0‘%Ä<ÄjpnÃV»'KG_&ªªŒlÅWÿXÈô-`âPt0ޘѩz£øŠ2¶•ÑiÁÄT0QêK§(¥ЩÚ:=£Sy®E ¬êµ´@WnM€mA 3ÓsšH¹GÈ‘²þ„½ÚÅ›óÃV§œÕøâ˜`Ñ„7MìDt~šMt³žœ&º‘ž&FôÓDu.,!’ßKÉÅ·&Èpc¡š¨4;ÑÕ.ádª*Ó<œ½hã7Ë[ÞcÍÝè@¸sk¢z¤¨@/(ÇÊ_‹&ML@ù»dˆ)ôóÃV»O?èTžKH19—ݪ\š–`#MÈüœ&´¶ôýðcyåïþ’g$ÿ"ÿ-âï¼)J²è¾¥øp$â»ØüÕ}W»ÖH¹¿ÿ±MÀþZ¦cÿù¿úáûïÿ”§jûã¯ÂOÿöãß÷y0ü±MÜòDòÇ¿ýô¶uý©{ûá»í§_¨voù_~û÷þ÷¯Ÿr¿ü?ßýåÛïûøÛü×þøïŸò ûÕ§?#ôëß·Îî«Üí2+ýï¿ûæ«OJß|úÝ·ø›?|óûoC ú›ßɧø›oíü$ß–2èôëßý«Ö©gaÿ‘PaÓ7R?U.ÿÇh⪚@œs¬^ó/õB?LOea¿5㸑&ªSOH¤‹sXÈÃmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&ÎÌRŸ Ä^KBÂMì -ÛRq²*M$SùFý¯Uùho¤Gç½^×í4Ñr«ˆœƒ/þÍ.3ßIº¡$ôƒšVGzÆÚT©«ÔÚHw~”&Έƒ‚n‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR$÷Ö ¤,[ÙHÐÄ9©Y碉Sê?õõøWÓÄ©èÈÛžãvšhËK[öX™™.šX41MX­ÅÇT»Lti¢Ù^ž7QŸ«Ì–ƒÚÍŽôΛNÄ›"úgšð2‚4qй¬Ù%|öl¢9%‡|nÙíÒ:›·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎê‚óÅ,ex;MP:¢‰SR9á\4ÑTiNÜ¿ovÂ?v6q*:*úMT¦šã=FyùuÓiÑÄT4áõ‘P²°›]~û\tMxËîÎ%æ5;ÎùÞ›Ne‚Q4©Ž`ÔÔUºÛAzôlbwÊ&,‹ã·ô“EŸ·‰½ˆNOCâ/¡‰ž‚sÖsÓD?ÒóÑÄÞQšØKbóof)O÷ÞtJ¸±äM¼¤BvýBª¤¹ò&^ª<‘}±VýÚM§ý­[wÆ/¢cþM4eu./.¡2†·3¥E‹&þyš(¿KKHP›¤õhb·Ã·b@×ÐD{n±5;?ÅN)Ýy6‘6Éð9 jïTÀdÝÏÿ»É£ÝQw§fŒýÃÛÝ.Ã:›·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎÊãYÊÓ½4!B›“ÐÄ.ÅD¾ŠsÝtjª„êÍúX½$ù­~F碞£‰æQÔŠˆXëºé´hb*š€²K7ìÞtÚí(_}Ó©=7SY&úýÀv;üðåÒ ±„ ôs…XÀ6ÒÍSðµªÙ±ÙM§]½×{:±z¡Ë›ho]~KŠGGéÁ³‰êÑ%ȵX™¯,ìEsфּ "@ëöÂÞíñjš¨ÏE1fíb—Ò­5hc·2|¦‰Üæ DÉû§Í®ÌÒDuêDÁÙÄ9É¢‰h›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰3³”'ð{i¢Öðã›Nç¤êd5š*TËÁ!Ðë-ì¦Ó¹è؃gÍ£$ÌýžV»Ý{¡÷E‹&& ‰\)¥¶réÒD³C»œ&êsÁÚ-Æ`ü»,ùÞšNjùওmÙp{Wi³K`ÒDsÊš™¾Ço'‹&¶‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎÅÔ=dz”(Þ\Ó)oªG5š-+-Y,Ua2šhªrY0¾Poþ[4ÑÞÚ¬^cú`¿‰ê ¼xPè½Ù%_y‹&¦¢ «ßüU‰ÿÞEá¯ÿýý;¦ËûMÔçr2Á`üÔô$–{{a'Ä2Lh½ü×B ÑHwŸ-/¯©GeªÇ÷3¨Y_Ê[“bÆøo‹ôc¬/Å£e—pWSìò:û^ëË\ë‹o ʼT ovéú›´õ¹ž úëK³S·{oÒfcñƒõÅëQ©ìr-PZ÷¸÷3ªN±ü&Œ=þVÈ}}­:ø 
щèü_«FÄ_󵪣àœõä_«º‘žðkÕˆÞá¯UÍ9“€@8K!å{«|À&Ì|<Û-•梉]•Kù¾P¯?V3°½µE=Hw»7R¼&šGGÍü…2£ÕuÑÄd4aÀuŸÎM»Ät=M”Ý9¥¬Ÿb‡ŸÎ®¬@®žËñq}ÁTF01;÷;/ívïÜõMœG¼jFÛÄ^D§§‰!ñ—ÐDOÁ9ë¹i¢éùhbHï(Mœ›¥8é½U>Ä6у*»IDýýøKªúT4ñRoJý{Z/»õÓÿÍ4±¿µ¦¨ùߟ(âc4±{,ýÝÐËî-[gÑÄ¢‰ž&ÊïÒÊO×Ñ­ÛÏèe÷6Á_Cí¹œ¬ŒY ÆOµCº“&ÒVb¯ùóÙB=¹ ìÞÏû~Ù>JÍi.;;ÿB\~û3.š8Ø&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢97Pˆg)½ù&­mŒ7iw eY zÞ¼ì&ëgÔTIÙ¼d/Ô»þM´è@rD £S4<×u÷ÈÄ™ãe¼ö¾]4±hb&š€J euá¿Åøë¿íÆ­\MÐÎFÊöÛ5?ÕŽýΛN¾a*“Ç皈m“Y¿È÷nþ,M4§J5ç&'²ª|„ÛÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mìγhúb U¹¹ÊGö-3ÐD“Å)ø<³K5œ‹&šª²[Nýü”?où[7^ѱ¨‚ËËŽŸËËk•ܹŸ1¸ÛáÛ½ïE‹&&  Ü QjçQïÒD³+Èq5MÔç’e³Ÿz“Vï¬@N¶¥,EÍgš ­–-uõ~ç²f§bæMœg¾ò&Âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍRþé²è•7L6>¼étNª¥¹hâŒúò§ú±³‰sÑy+„;M4¤Ùƒ;X»®*‹&¦¢ ª]G…x·ùnÇoî/¢‰úÜì@œ%?Å.sº÷¦S)Xó™&jd¯ù‹Á1}µ36 {G*б8§·0.š8Ø&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9çœÜ8ž¥˜o¾éi3;:›hH‚“èÝ.MF»*ó²}¡^«ŸÑþÖex¿ÛÓËîÁšN»GÏŒýž¼/;Z5MLE\)ÁŒ˜ú4Ñìòõ4Á-»šX¦ØßZÓÉ6­•ä3MH›[êbØß°ïvôìM§âÔSÊ©LD‘¸bǸh"Ú&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9ÇÚ [ãY òÝyºåMìR™,¸é´ÛÑ\5vU’Ý¿Pïü[4±G'Œ‚8:¤ÏÕtÚ=*$S‹•‰­,ìESÑ„l­®æL]šhvÉ®®ÛžkƦ)Ú»,|/M8àAÞ„–Œ‚šs?/¯Ú=J§Äɺéo;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4qj–RM÷æM(lEÈMœ“j4MœRo˜~‹&ÎEÇ<›8£ 1åE‹&f¢ ­»t&ì×tjv…篦‰ú\wNQúp³Ë"÷ÒDRS8¨›Ë®gǘúç£ÕNŸ=›8#®LU«;j¸MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©YÊìÞ¼ Ü”ì€&NIõ4YM§sêåÇÎ&ÎDÇÓ„•ìÉ25¼«Y²Gi¢‰+k8˜†â<åU!6Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,/»zq¿ ÝT`âœRå¹`¢©ª_¸Ácõ?v4ÑÞšPRJ&aÛf)‰£i”U»´J:-˜˜ &ÊïÊc(wa¢ÙáÛ‹`¢<—R–òÒáø¡¤zkó:ØØÕJ:y[˜00ÑìПMÂnNAÉcq &Â]b'¢óÃĈøk`¢£àœõä0Ñô„01¢w&vç\œÏRŠt/Lm`~@»Ñ”¾‘Ê“]tjªŒë™Xýψmoí˜S²8:fòMx¥æ ÉI³Ã÷+X‹&Müó4Q~—…’ppѩٽ÷)¸ˆ&ês M£=z±KŒ÷^t"'´Ï(Õ\~¹ÖO𨅡Ê\EMìâÈ$âê4”×E§h›Ø‹èô41$þšè)8g=7Mô#=M 饉ݹHËñ,EœïM›`ÚÒ&NJ5Š&Ωgú­³‰“ÑÉöMì3Q2Œ•©Ò¢‰EÑDý]b&ödÝv»ÈÕÚs-+„ã§Ø1Üœ6!`ieƒ&•Ôý޳۱ɣ4QbJ€`±8ç•„n;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ“:x6Ñž«þª9…MÝíÒ¥üG#ïvd‹&Ì 'ã`m¢ÚyŠe£ …µŠE-%Tþ’ï6îˆN¶[‡B÷H>¾`Ž•=…s|™ãËã .)[-Œôe’ùÆ—j§–O_ü¹þ¢Ñlïj—/®X0‘m¬}cÍ3²R”zÒk7o¤mâÀ3ráP•‡…ù9[µ1 Ñ‰èø³UGÄŸ3[ÕQ°ÏzðÙªn¤œ­:¢÷ðlUs^ïe‹{)ÀkïF¡¥l\º* ’‹Æ=á³+»K˜XÕKZ‹ÕÐ{ÁD{k.XúÕº>¢£÷•_=ZÎ _øÝœ·'LL˜ &´ˆÕàÀ„Û)¦óaB‹e±¤aËv»k'«`aKbKßÔú A vÌ7;Êr+L4§ê?c²Xœ< ~&6²ÄNDLJ‰#âÏ‰Ž‚}ÖƒÃD7ÒÂĽ‡a¢97(ÌǬ"5] ¼lÉÛ%Ô¾ìë[–¨ªØÈ/ŒTFú^,±FÇŽ£Ãþ×÷±Dóˆþ»)ÅÊà¡jüd‰É°µí©êÿ£_á£ÙéÃõ“X¢>—K'Ûp¼rmZ !åòœ%¸µà”$_š]æ{&šSB–àì|³Ãy•Qœ$v":>KKt쳜%º‘%Žè=ÌÍ9—z,0õâzyÝ¢‰]R9u—ѪJ|2xA½”÷¢‰öÖZ-qt„n\™¨ý#§'ÙØÊäñÔ꤉IÐ× Þf´t/FmvÙèô õ¹À Â4€Ê•‡ò /VôòsšÚ‚±niìÏh4;§[i¢9UÕl‹ã$“&¢4±Ññiâˆøsh¢£`Ÿõà4Ñô€4qDïašØÕKù#®]™P]¬lÚ'µä±hbŸz~³ê㻢£|c½ÀêQ áX™ÿ7÷9MšŠ&¤Ò;(¤î]F«åÓK|Ôç¢?—‚µ½f‡×Ò„$*g&´5`“œúÓ=ÕN<ú&š8@HÁUFÍ®ð,>f‰ˆŽGÄŸû¬‡‰n¤„‰#zÃDsމ0¸ nIt-L,–óLì“*ƒmtZÕ›•„±z¤7ƒ‰öÖTÀ˜âèPJ÷ÁDó¨X”^øÝØæÒÄ„‰¡`¿KPâ/š}ÿ«ï¼w=&êsM…58”×ì„èÚz XŸÓD½ÚXÅ»è ÖB³#»wi¢:µÄ˜bq¦óbÔ0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&šóœé•.ÔE^{—Rpƒ&öIÕÁJ6U%s hb}Kz³zNkt|,®2ZíðÆ¥‰æ‘@¡XÚ¬ç4ib(šðïA€…úÇ&š—Óë9Õç’÷Úbµ¬™ü•4A‹ù—¹QÏ ëÕÆfjÜßl¹Ú©Þz»:Íþ±²õwa­vùað›4ñ¦l¬Mì”:Ø]F«*„ h±zxrñk¦‰õ­‰$÷Ë]}D‘î»Ëhõ(äŸÆÊøaElÒĤ‰ßž&êw (»V;~¨ÎzM´çrb†¶ ¾úbToÌù9MdoÁ|€é¯®v%ߺÓiuJBÒ/Ÿ½Ú9¾MšˆÒÄNDǧ‰#âÏ¡‰Ž‚}ÖƒÓD7ÒÒĽ‡ibW/EtuA§²ˆmÀDSÀ9k¿NççñX0±ª÷¡¼0Tñ“zT_5L¬om¹ô+©ØÝX¶yôÌ!©Z¨¬$œK&†‚ ÿ.1%b°îUFÕÌèìŠN͆Zz"Gí½ùë•0a 'y~KmÁõNjìÏþ7»x+L4§$¢)ÅâhžÁ޳ÄNDLJ‰#âÏ‰Ž‚}ÖƒÃD7ÒÂĽ‡a¢9ç„Ò¿ÀáC¤^[Ñ X°­¥‰Ujfì_ƒýùJƒÑDSÕî,xa8x·ŠNû¢#x_}Øæ±`e •A’I“&†¢‰ºØK–Aâ^Ëã~ûϾfž¿î5ÿüÓ¿ÿü»ßÿÐzø¿9;~Y䇲Ë®«<+µ½üÓÿùãŸjkúÃïüSëÛÀùÍßýé¿úá»oÿùÿò#õûø5ÿÑuþü£·ºo½#øÝ/üÃwßnxNö­·ä_~ñßü»oÿáÇ_üa5žO»åüøÓ7ùñw_Œ)üüóo~úù‡¿xôËç[ü²|óOí#qÇÿåf¦ÿúÿ9 Õ—ñ®âÏîdùöïÇñGç¨ù̈ÿÚQ~¼ÀwëÇ~Ø;êÝöº¸’Ÿ|ùò$µÙùé×ëxËâùÕvRóºÈÇ5ùI‰ÛéÿVDÿ_Pâß,þ4JÜR°Ïz|JÜŽô˜”ø7ë=ƒ_ï¥ç².9“lñNu»CeUa¦nËÕ¸œ={êÏ5PòÔ.œ“´ ΞŠ.R¹Å6gOw(¬ Á>õž|¼O­<ýº2y˜Å™<=yzž6ÿ~År0¾¸˜?¾×SûI9h?FJùÊ2,K]¹Ú(j†PgÄÀDûG;W»¢åV8lNð ¼ Že êNDÇç®#âÏᮎ‚}ÖƒsW7Òr×½‡¹«:ÇÄDq/e_–>•»(¹¥ îÚ#S†±hbU¥Æ/ þ–o¶:×Þº¤d"qtr¹¯¨ÙêH^øÝ€'MLšŠ& ïÏJ¢¥KÕ®¦ÒgÓ„?— äZË#j?TJºr¶ tÉŽüyãà¶ž×{ þýºÍÎ#uo‚&$‚PB±IQšØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M¬ÎK=Ò÷R˜óÅ·7êâCÏM¬Àà%©EÇ¢‰¦ŠÀ»zyA½¼W‰äè Iy!:vM4–j¢+“2ï[™41M`=9dˆœµKÍŽn=‰&°-ÿg€\¢öƒ€WŸª5Í@žÓy 
&ÿÿ"ý„½Ù»÷äPsÊì=dŠÅ‘Í“CašØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M4ç’D(ǽ ^|á .²yrhŸTlm¢©ÒÆm±z÷*‘üfê¬vw5«¹€Z@Í.Á¤‰ICÑÕóýµ,ôwÒ6»”OßI[Ÿ Iê5IQû©'—Ê•4!KQH°Q‡€k æÊTýŠ#Õ®–U¸•&š8SŽ0®vy®M„ib'¢ãÓÄñçÐDGÁ>ëÁi¢éiâˆÞÃ4QK&Ä`ê¨Ù¥rí “¼äZʾl÷ö¯KíÜDSU<- VVÖ·”7£‰öÖ­ôï‚ÿˆâwÁ¯Ù½0Œ =0줉IЄ—îLP§S³{lY'Ñ„?—RÊZš¨þÍžTsM4;t6MÔçŠ÷za¯R¯Ý鄉 ˜ÐEKrèî7tm]P¹÷ØÄq …'LDYb'¢ãÃÄñçÀDGÁ>ëÁa¢éaâˆÞÃ0±¯—â‹‹_, [Ç&öIÕÁ`b—zz/˜Ø;K:UePЦHY)´=abÂÄ0áß%)¥Ü/é´Ú=¤¡'ÁD}®¤\8¸Ð«Úñ“ÍçÂ&óv\¶h ²„{‰š]Â4򿉻 ˜æ«ÏïV‚WtÊÃ&¶Æ«UlôÂW÷x¬iŽ/s|`|±ÅÓ6÷'Ü_šÂé%ësÅ®”ïYíÊ•iÛdnŒ/Ö2D" 6'¹]6å{—¾wˆ+‰öCÏÙªiˆNDÇŸ­:"þœÙªŽ‚}ÖƒÏVu#=àlÕ½‡g«öõR|í±<Í´(ãÆlÕ>©:MìR/åÍhbWt«ï£‰=Êò¼ÐkÒÄp4¡H9׃¥M¸]ât>M(J&bШý $KW.}{>œê½Oi‚Rméɧ‰\[p½¡EÄj÷î¤]*$îoZíä¡ÖФ‰4±Ññiâˆøsh¢£`Ÿõà4Ñô€4qDïašXK¶,q/¥|ñuFH oÐÄ*AY’¾ UÆ*@¾ª2FéWý°+ïu9j{kLÀÀå…èØ};Veà äP–2‹|LšŠ&rËÒ©v ¯v,gïtªÏåŒÞkcز9æ+iB–zòp&Jë‚<èýÛŒš]íÈo…‰&ý›ÈŠCx˜Ó˜0±‘%v":>LLtì³&º‘&Žè= Í9圇ú©z1L”EòL4 >ÜiʱTìnÔU•¤¬±zµ÷:–×ÞÚ¿BDˆGr×pß±¼Õ#¥XÒÜè4ab(˜(uë¨d#â.L4»óåµç’¡w_a¯Í¤—ÖǼ1x~j‚ 6`© ô…6;€r+Lì7/3z!KìDt|˜8"þ˜è(Øg=8Lt#= LÑ{&võRL_f²@Ù`‰}Je°mNUgÆÄªç¤ò^,±FG=>GÇÝßÇÕ£§ ’úwP­Ê`VŸ,1K@«ê­*_7øþWß/zÚpú„?—²£¸wiQû¡ éÒC´pËö&°õ-*ê·ôjG–o=‚½ŠƒRŠj(Ž ÍmNa–؉èø0qDü90ÑQ°Ïzp˜èFz@˜8¢÷0L4ç˜(•÷RP.®>^l)¸uhbŸT´±h¢©òÜ ‘Åê‘Þ«úø¾èMMt쳜&º‘&Žè=LÍ9¥zgÜK¡^KeÉÙh«àÆ*Í‚í·«ŒU~|UÅžpR Pœ(ÝGÍ#PÝn+ƒy3꤉Ñh A134QíÎ?5QŸËÎfM¸]yrZì<šÈL ËMHmÁl¢ÁìÕ.Ù­4Q–$už,WÁ¤‰(MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&šó`q/•õÚ«QÉò(4Ñ$Õºå±Ô2ØÕ¨«*TI ±z|R&ñ«¦‰öÖĆù…Ïп€ûh¢yT±”[íæN§IcÑ„.)‘0ë—‡mv̧Ÿ›¨ÏÈ J¿ßkvEìJštdI ô'´u.U…Rë@tïÝuÍ)ä\Á,WW'&NDyb'¢ããÄñçàDGÁ>ëÁq¢éqâˆÞÃ8ÑœSM)\{Û„•^I§&궬KÀ±p¢©¢T«¦Äê±¼ÙâÄC ¨vµ»s«Sóh ‚/4Ë󶉉ƒá„rà¬à„Ûåó·:i;8‘ Çš^ZÒ Òây4ØÆ1lk-X‰±¿ø½Úݼ8Q"æÍ¥5;ž4¥‰ˆŽOGÄŸCû¬§‰n¤¤‰#zÓDsNÙûXŽ{)4¸xq‚M[‹«TR2‰¥ÒX4ÑTI X¨Ùñ»ÑD{kÅZÇ2ŽŽ–o›¨)gÒãNº³¨Ó¤‰¡h¿Ëz•{.Ú?8±Úw]}.ªç‡W;¸òî:JKQiŸÑ„º²Ú‚¤ÓCÚ=™­ºŽ&>j²Ô;Õñi'yu꧉ýˆNÅŸ@}û¬G¦‰(Ò£ÑÄA½ÇhâÃ9'1æ÷R¦×œ`æ…Š=£‰R9ÑHw×}ª*ZË Æê³ÙÑÄç[û{wk–üÕ.—›hâÓ£Šçñ0Î'‹&MLšø­ibý.ˆr纉O;x(—vM|<—±h·üħØ•w×aZ²ËsšÈÞ‚%©‹ê/¹µt²[i¢‰Sï`ÔivŒó¾‰0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&ªsM‚Y4î¥Ì®-ê”—Âf$Û½½C—f±Pª i,šhªˆ¦x8¨×ܽM슘ÞGÕ£¡@Âø«3H:ibÒÄH4Qo˜¶’“•>M4»œÊÙ4áÏ•ÄíȨý¸]ºÓ©,$ËÆÚD©}‹CJŸ{š]b¾•&Jë†êyz Å褉0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&šsŠ–wW»LÓ2—íÞþu©CuúTÅžofŠÕ3¼M´·6Ì9CÉ|M¸GLÝg¨ÌQ&MLšŠ&j™¼\rAÃ.M4»ü@Ã'ÑD}nÌ–Â4Øíò•××,™R6~>¾Ô üR"¢ÞñÕÎû*¦[i¢‰¨uH"qÞ ÍsqšØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M¬ÎÍÇŽ{)ørûй%b.bec§S“€I å©–Ç¢‰U½Ôë‘bõð^4ÑÞZ@_ø ùÎNÕcö/?½2ŒФ‰I#Ñ„—¬E”º4Ñì²òÙ4QŸkäÃiÔ~¸†åÊ˰mIõVpÛ¢ ³’ELs Ô,›Êhã‹«/ÈJªw¸|»ñ¥FGü×Ç8:å¡´Ù ã‹{ô!F$þê<(s'í_†_pñ‚DûkßÍ.Ñé;iý¹=­ÖÜo?ÍÓ¥×£Ú‚.D6öVaÍ$9aÑH©Ûá×£~8-ÉBqþ׳y8 Ñ‰èø³UGÄŸ3[ÕQ°ÏzðÙªn¤œ­:¢÷ðlUsÎRo¦‹{©' Ê'¯}'õ¡6o÷ö/Kå4ع¼U•šà+ê¿,ÎøuÓD{keD•8:é>š¨±­r¨¬NGNš˜41M¨Ô,!4ávðp-ói4¡…4DíG;ôښĂic|!oÁIMû-½Ù%¼wí»9åzÁvŠÅÎ*ašØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M¬Îë=;9î¥øâëQÉpI¦kßû¤"EM•‚%xa¬’"ïEktü÷§¢£pã¹¼ê3Õó¢¡2L4/4š41MP]Ó.Zð˵µïõýºÝùçò¨­}×+•$j?n—ðʵo÷AEhc'-×,Þƒ³UÍŽìÞµ‰ê”³P,ÎÒ¤‰0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&öõRzm•̶HÞÚI»G*¥šh) (ÆÊ0Ï´“&†¢ ^Ú•Ì`¹OÍ.³œMõ¹™sÁ$Qû‘|í}FÅóaö`lÔ ”ÖÒ±0÷ç š¨ÞJÍi­‹cžçòÂ4±Ññiâˆøsh¢£`Ÿõà4Ñô€4qDïašØÕK ]»6áÛ¢9oÐÄ*UüE^èíu´ äUû°n’bõö¤âáWM-:9p~¼JüršhýÓÇò‚2x¨Î2ibÒÄ4!-›/Y³ti¢ÙQ‚³i¢>ôØ;ÓÆu\î( 8ŠKx+¸ûßÉÓ ¤º1†l·ÐQ¡~XægÆŽ1qŠÚ‘Á¥4aGÈ ïi"·–žêõÄ¡ÒÜzh¢[i¢‰Óv,WB¶i"œ&"º>M̈?‡& ŽY/NÃH/H3z§i¢;¯gE=î¥Þç?•&Lä¡òio¢IÈDdokvF²MtõBš¿P_ðî·h¢½µcZý":ùÆ“NÕ£a”8Tf‰Ò¦‰M+ÑDn9Ë|>oa7»„§ŸtªÏEu)<µŸb—üâ“N>œtòÚ‚).šÝkZ¾;h¢9U(>$'Ì›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&ºs­y½ã^JñÚœ¤VúûO{]BFÃo¤*­EMUFÆàÖG³3J¿E=:d¼êv|#MT9y¸7Ñí^Î}ošØ4±M”ïÒSfÎ<¾…Ýìálš¨Ï%ölÁ¢z·»²:ª<Œ”)¿¥‰µ×bi㬮ÝN™ï¤‰æ´¦^L˜BqN/[Û›&ÞOG]ž&¦ÄŸB#Ǭצ‰q¤×£‰)½³4ÑkŒ{)‘«ëAb2ÿÜÛ/Õq)šèª²RˆÇ*Wý­zFõ­Œâèøëí„‹i¢+“òºãÙP·#ß·°7M¬Dõ»th¥¶†4ÑíÔϦ‰ö\"r·°×vB¼ro‚页´ÒšH­3øBs·Ã¿×{.¥‰æT!{0|4;aÜ4M]Ÿ&fÄŸCǬ§‰a¤¤‰½Ó4Ñ—6AÜK]Ó‰Áð©žÑS‚PÖô…T^«ÞDW•“¦qn §zÏ¿Eõ­PöôÅHîvßÞDWFÄã’PhÓĦ‰•h¢|—¹<χÕQ›–ùüÙ4QŸË5•R²¨ýd6¸´:*?€‘ì}õ턵³kÞt;Êv+M4§9'§¯}Úí ±ñ4qÑõibFü941PpÌzqšFzAš˜Ñ;MÕ9‚RvP›Hw¸”&4ñCó‡ ±]jJ);…R|­[Ø]RM^«O¿Fí­ Œ9ÇÑAçûh¢y4¨ŸT¬ìõ|ݦ‰M 
Ð>êu$e‚á-ìf'.élšøÿ¿ÿôoHWîM˜<Ù§{‰J fa·`|ivl·ætjNë*SÁ˜Pœì{á4qÑõibFü941PpÌzqšFzAš˜Ñ;MÍ9Š$丗B¼ú¶>Ê0û&šJ¥+”/¤úZ÷&º*V r:=í~ìÞDëòëâøvÂÓîµâôÕ4Ñ<º×䯱2çÓiÓÄR4Aõæ±aJ˜‡4ÑìôôZØí¹HŠèaû)v—žtB{ñûJ؉KûÕT“éŽûgnýóÍ;Mç¬Á9§fG°Ï9…“ÄAD×g‰ñç°Ä@Á1ëÅYbéYbFï4Kꥮ­6áT{{ûÀǤ¾™ÿWYâ˜ú«„ÝßZÔ²¥8:ò’÷år–h³gÊ_(3Ø•°7K,ÅÜX"¥7™+þùï7¦ÓÏ9Õç¢B’qE¸n'ì×V›H‰\ßW›Hµ²)ñxg¢Ù±ß{k¢9õD)¥XœÙ¦‰pš8ˆèú41#þš(8f½8M #½ MÌ覉î¼*Ÿ—z½üzÉ­ ö‡} ‰cRËÛT•áÚYù õÆ¿EG¢Sìî«6Ñ=’3áÊ0ï;Ø›&–¢‰ò]º‘’¤ñÎD³C9ýv}nVTÎaûñ,˜¯¼ƒ ÉYÞ›(?g—ºs<>Õì0ßZ»:Uwg©îâ÷¥‰p–8ˆèú01#þ˜(8f½8L #½ LÌ膉桦؎z©*Ò.…‰ÜøLt©D>N[þï+-¶5ÑT±#³…?ÕÿÚ¥‰CÑaºqk¢z,˜_X‚beN{kbÃÄR0¡}’NüwQÈþãûõÍr6LÔç–«Dí§&}ºrk‚ü!IÒ{š°Ú‚-%—ÏìvJr+MXï†D&kvwéºpš8ˆèú41#þš(8f½8M #½ MÌ覉æ͒丗B¹öÒ„Rz`út»I ÄDK%ðµh¢©b oƪDùÇ:õè8|1X²Ü˜Ð©y40 Òˆ5;})hµibÓÄ4QË÷¸ƒå ¡S³» =¬Õ"$",Qûñ„DWtòZÛáÃA§\Z0–1.·šè½šS«—Ê<÷zÞrÓćiâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&šóš™–8ÁÅ{øÀüio¢Ip6ÉKuZì vUEÀ„žCõe6¿Eí­Bvˆ£“€î£‰æ‘‰K#‰•‘ï+Ø›&–¢‰\géœ1ºž¾7QŸKe ï0ÅŽ¯,]GðHà¥þD^sJÕ:•Òb§„«/õ€“(ÄêMíׯ—2É‚G'ë­ã‹;!'I*#x™€íñe/ Œ/þ¬¥Ýd\õiG§¯V•çR»$fêvéʃ´”\¸ãS±<¯3Dõ„lÒbWºñ[W«šSÏÀa«Ý^­Š—!]µjFü9«UǬ_­FzÁÕª½Ó«UÕ9dLúE/•/.fÄð ýT̨K­»±ÔòJ‹]ËëªL1ÈÛí˜~‹&Ú[×üÁön7¦o™ô‹ßdŸ¤Ý4±MÔ½g"Wh¢î=OÙ JP‚p\׊øÊ½ïü°DÞ¯V!”\ë–©×¥›»Þš~¼‹#Ã4üžv›&Âiâ(¢ËÓÄ”øShb¤à˜õÚ41Žôz41¥w–&ºsæ ãIîÓÒÅ4!þTõ)Á]“!•y)šèªjé½q‰Íßò·’|<£ã9ÚÉ}4Ñ=zrç/”e–M›&¢‰ú]JÈIˆF4ñ´ƒ³S¶ç¢–'× º¼ÛQ>5ýx]x/SiÁ¹ AA:Ÿf§–#âL}ÓD8MDt}š˜M ³^œ&†‘^&fôNÓÄ¡^*˵Y>Ò#  Ó|ooÙÖ*ÚT9—)ÄøÄLWïù·NÒöèTªU £SÜw/¯z,Ó„2>c¨Ì@eßËÛ4±M¤‡"ƒÙð$m·K§ŸtjÏ%&£ ý;ú»öö™'ä!5ÆlãKï¢t»×åÿ;h¢9ÕTÄ{,Nvòxš8ˆèú41#þš(8f½8M #½ MÌ覉îœDÇÇùÿym–B}ȇjF•âZI>ºªÌŠY¿PŸ«2ê±èß—2°yt(/D+óäcÃÄZ05_ùxA‡ÕŒºÑÙÊsÀÊ—£–íõ0á¥)ñRbËïa‚jKG{WCö¥Í2ß Í©x‰ãâxÃDG'û}—°«G+s‘qjônGy_›Ø4±MÐÃSÍŸ£0¼6ÑíÒ˦ßI4QŸ[³§ ý»¤W¦ d°dQ}O\[0jÀ=Íð^šhNÙˇı8N´i"š&"º>M̈?‡& ŽY/NÃH/H3z§i‚ûd)±Ë½ÔÅåŒíÁø‰&I•´Ö%ì®JAx\7ê©^~ìÚD{k/ÿ/1’«ß¸7Q=¦òÞåÅCe$Žß4±ibšà2KGdïM4»ô’Vû$š¨Ï%ÕZT!h?5O¹ë•4A0FÄ÷4!µ›˜×òšÝë‘°;h¢:E®U”r(®Ѧ‰hš8ˆèú41#þš(8f½8M #½ MÌ覉æ\‰h\¹ە©üÅ×&*îí¿—ªºM4Uê ¹X½ÚÑDŽg ®:w»Ë5e6TÞ)þêþH¹ibÓÄ4!õ:Ãßå„þùóû-vtþÞD}®°‰K Ú×-@¸ö¤“A–×&´¶`4M8¾àÑìÞì¢\JÍ©²§`o¢Û½ä‘ß4ñaš8ˆèú41#þš(8f½8M #½ MÌ覉æÜ¨°ŒÆ½”]|mB2?ì㽉&!'K±ÔŒ‹ÑDSåá›áÀåÇN:Õ·.#Ž?Cº1¥SóH˜ù‹ßÑ÷ÞĦ‰¥hB%”iúßÅ}ÿùóû­—µÕϦ‰ú\Æ”08)Øì’éµåŒˆ%eyOÖZºiNc¥Ö[ú­ÅQ»Ó̃“fg¼S:…ÓÄAD×§‰ñçÐÄ@Á1ëÅibéibFï4M4ç®- w;òKiB•‚þ&ªÊ _ôö¾ZJ§¦>9¸ÆÃ¤+ŽÚß³–ÇÑÁ—Ò~—ÓDó¨žS¬LdGÝ4±MXM•¤š² ‹×u»×“F'ÑD}®hs ÚO±”kiBó‡ü°¹6`*2ƒá¥Ù%º&šS)C¸@,Nvµ‰x–8ˆèú01#þ˜(8f½8L #½ LÌ膉æ¼^~Ë_ôRŠ×t*¨ð@’0qLª¥µ`âz#ù-˜8×4Å—ÃDõ¨Èü2M¸·&6L,¹BB²Ìç7þçÏï×3~Ð)÷­ Ô&šd½¶v'Bx_ ½¶`F"oB6»×Mœ;h¢:µòÏܺkv%›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&šsfFˆ»P#Í_›À¬Àö¹·7!æ "A¥¼ØA§¦*§T‚«·ô[•°û[;x’/ˬ7¦tªsN(‰BeYm_›Ø4±Mx]ò‡\w'†4ÑíÒé bësU9'‹zíb'œ¯¤ €ÔíÏ·ã AmÁ‰Çueº]Ñz'Mt§†–Æ;¤ÝN9m𦉣ˆ.OSâO¡‰‘‚cÖkÓÄ8ÒëÑÄ”ÞYšhÎë-=E{)'º”&ÊL±ôXö&ºÔ2÷qâÿç+åµ*awU”\Æ©MºæüS4ñŒŽg`ˆ£Cù¾kÝc ~Pýi'»ö¦‰•h¢|—Ò±V ÒD·ƒ|öµ‰úÜ^OZE-»Øe¹2A,¥‡åš_ê=MÔê“îJÀÃngéÖÚuÕi®§`|½±Û•ïkÓD4MDt}š˜M ³^œ&†‘^&fôNÓÄ¡^ \|Ò)?ŒùM“J°M4UÄ–!¡Þð·hâPtˆîKéÔ=j™…øÊ$ïk›&–¢‰ZaºÞ4RÀ!M4;V<›&R¥„æ”9h?…fØ/Þ›@Eâ÷)K .RYÓ˜&ª]Á®[O:uqhå£áP\BØå&Âiâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&ºóº$#_ôRY®­„íùÁ€h¢I ÄAq¡§¬Un¢«bFfÿB=ÛoÑÄ¡èðK)‘Ëi¢yÌ\ºÔ+SÜ'6M,EXËHHù“‡÷&ºŸž ¶=—É”‚öSììʽ z€ka¦÷4A­¥—Qn\µÛ©ÜZn¢9EÎ(¡¸?2smšø0MDt}š˜M ³^œ&†‘^&fôNÓDsÎå‹ àt‘×:•&Lá‘ìMt©-Ø3ïvºV)ì®JLHã± …õ·h¢½µQy¶ÅÑy½Ur9MT”³Æ „^OSošØ4±MÐÃ1•‰:ñ°x]·ËrúI§ú\bõ¨×.väW–›àzšÊA?Á„{Á®"•"¡Ž,yµá¥¨R%_OyÚüÚðRÞÚQü‹è˜Ý9¼¸S‚Z TFö𲇗¥†~@MQSÐ`¼õÝìÈäìá¥>W¡–gOϪ¼II{îb;ˆ¿Ï?^” ¢h ´N8=ߺXUrv)ŠcÓ½õ®B "ºþbÕŒøs« ŽY/¾X5Œô‚‹U3z§«õRÙìâ$êîŸ{{®œ€é ©¾V’§úŒ‚«wþ­üãí­¨ˆÃèÔ£Ô÷ÑDóH @ñï&å×Ý4±ib-špÈ14Qì,åóiÂH˜j'éʃ´òÐ2€Ø‡ñEj v«KÓC¥ÍîõøÍ4Q*•QcqŠº“|„ÓÄAD×§‰ñçÐÄ@Á1ëÅibéibFï4MtçŠÈ÷RD×Ò„;=àS5£ƒRÅÖ¢‰¦Š¹¼€|¡Þlo¢½u™Œ›[¾1y÷h¬f+Ó—œš›&6M,@RÈ–ÑÅ`œä£Ù¡œ2°=—JwLæAû)vÂWV3B{¤\bûá ­>²'/ZÐÒ‹]=¬zo’Câ$írFá4qÑõibFü941PpÌzqšFzAš˜Ñ;MÇz)Çk¯åYéï=} ‰CR5áZ4qL½üXÊÀCÑ1¸‘&(«å˜öµ¼MKфֲR¾Þ¿‚ÿóç÷[ìHNß›¨ÏÅ 9í§ØÙ»^û¼“N\¯å~Hòa¥oqK¢4æžftïI§â´øÄ 
;z³ß4N]Ÿ&fÄŸCǬ§‰a¤¤‰½Ó4q¨—Bå«Ë1hÒ½}‘Š&AJ§ç+ÑZ4ÑT1ª$øBý¯]ËëÑ¡Òq}~©Ñ{9M4†FA2Ãf§û¤Ó¦‰µhÂÊ,]ÊÜ×Ò˜&ª»Nõ¹*€AqánÇvir-?š | ‰\×0¡ã˜{š¤{i¢:E`• ;zG ›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&õRzqr|ðÇäǤâb'Ž©·ß*Žz,:$滚&º2cË9V–ó>é´ib)šÈuÏAASö!M4»×â¤'ÑD}nyŸ”ƒu‚jW&.~m9#fÉö¾ø6ymé¢*ÁÕ„f‡)ÝJ‡Äí“N_L]Ÿ&fÄŸCǬ§‰a¤¤‰½Ó4Ñœ+‘ 2O‘|ñI'x}€‰cJób0ÑÕs=‚«Wú1˜homÌl;õèä·&ªGF‰•ùKÞø &€ ÆedÕqÆÀfGÌgÃD}®yÔï;¾öÚ®¢÷—°jKRow;„[/a7§Œeô Åqz©:¸aâý,qÑåabJü)01RpÌzm˜Gz=˜˜Ò; Ýyé0¥¸—rÈt²ÄƒÞ^ʬ!îíÿHû³MSŸòoÕF=|I#y5Mtõq|‰´Û½¦%Þ4±iâ¿O5çvùtÉK³ÑD·+cÐÉ4ÑžëR^X£–í˜=_\ÍÈÝüýlNµœˆ„6»lz+LT§õæKPaöiǸa"š%"º>L̈?& ŽY/ÃH/3z§a¢91r{)¡«aBMùCþ¾§„òóR3¯M•AO±zµß*ÚߺÕ9ÿâ3Ìz_~Øæ1'+xãE¿o˜Ø0±L¤‡·Ý\áaiÔn÷Zî$˜¨Ï•âêAûqb·+‹‘=¨5å÷ã –ìÈÀ0Ü„ìvÉøVšhNˆÆ¥0ºäMá4qÑõibFü941PpÌzqšFzAš˜Ñ;MÝy® Áã^JõÚŒN‚5縼?èÔ%‚‚ÇR Öª]×Uå27`ŽÕgáߢ‰öÖuÓ¾,ýå†úå4 ¨)ÜeÅ®kÓĦ‰•h¢|—ÅŸ Ž:U;v~ù~O¢‰ê?•—aŽ˜ªSôBššãƒ2¾¿5ÁTZp5×jv€ùÖ;Ø]\ׂŨfg"›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&ªóò¯IÇEºH—tùÞDöϽý÷R=­E]}ÍGd¡zJ¿Eí­9o:?£ã7tjÙ€ð eô²b¹ibÓÄ4Q¾K®>®„ÝíèüƒNõ¹9“3…½6«ò•˜*eû@ÜZ°—_e¼nÐì –ÜJÍiù"ðÿÙ;³dÉm\ ïHAÄ´„»ï'—¤ŽÝiWŠH%£’õЧaáR>@ú@Üëªß¢‰ƒib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsG€q/er/MÙj{§¤zš«vSEI¼L>Po_¶7Ñ¢šƒ,¬?Q|’&šG¥z?3V&yU›X41M”ï’J’,wóÃîvö’ßø"šÈí¤“'Ö°×&~-“pÃI'Ü  a~pÒ‰k 6CÆþ½‰f÷Z¥ò š¨Nsý}Bq^î•-š8˜&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9ÏÍ8î¥Ê°t/M(o¥·? ‰sR}²{M•C¿Ž÷Ï[¾9?üGÓÄ©èÈ“'ªGΞ58éÔìÖI§E“ÑDù.IQ·ön—üò½‰òÜÒg—GgŒÚO™Ü[m­hy_Í(Kí[¸"ME£ÙeNÒDsj¢Ä‹Ó´R:…ÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœê¥ì×׿‡uØ@*aŸ”š'»7ÑTÕ$¨î¨÷/£‰úÖ\ÕãèøË­’Ûi¢)#È¥ý„ʉM,š˜‰&ÊwIIëe³þI§fÇruµ‰ö\$@…°ß£=po‚X*ÄÑ-lmë¤A ïÝŽÈ¥‰æTÁ¼_mb·+#Ý¢‰hšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æÜ4yþD2ÝJYl+õMœ’j4ÙI§¦Ê…‰ýõþe÷&ê[KR¢0:R¦€ÏÑDóHªÑùºÝVN§ESÑDù. 'ÕÔ?éTíÐírš¨Ï­{œlvB|ç½ Û´^Î8¸7a¥+C ªvâðl¹‰]œ‘“„â4ûª„N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ—Iýë½bÅi;<è´+EN¤±R™íÚDSeXFöx¨*S×/ƒ‰úÖ&‚Èü¶þ²€z;LœQf9­k &¦‚ «)•È4×&š_ õ¹„š1‡ý^»³væµæ€=„ ¯£ © ›]ÆùÆG3£ ½m³Ó7÷úøRwè ¯¦0:%‚ôèøâ™ B¨,g^‹Uk|™j|ñ­üIË—)ý­ïf'¢W/õ¹Z«®½v³KFw¤Í›”Çì|{ –¡8§o¼‚£kUMœ‹J¿Bóݪf/Bt":ÿZÕˆøkÖª: ÎYO¾VÕô„kU#z‡×ªªsKä5ÝínΈùøÖD“€9Q€=»â\0ÑTQ­]cõ”¿l±ª½u™ðPpÊx·£«šGSOÏ1Ü^È‚‰SÀD­R„t!€‰bG€×ÃDéÍÊ«”&j?Åî]ÕˆëÎÑòVâ®ÈoÇNµ ¤éNØw;ÖGw¾«ÓZ ˜0WìM„ÓÄ^D§§‰!ñ—ÐDOÁ9ë¹i¢éùhbHï(MìÎíƒ^êÍñ΋i‚(ÚaoÆÖ¯²·Û%æ©hbWE¬Ú/•½Ûaú®[yç¢Cé¹s´»Ç2ÁPü@ûÚú^41MÔï’S"èÒÄn÷ºŠq M´çb],˵Ÿ:ݹ5‘aS$0z?¾@7¤_o®Ù%§Gs|ìâÈÙû{?v°r|„ÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœë¥äÞŒ¹¸‘tp+o—Q¹Ÿ»íÇn²ƒ´»ªv`Æ>P/_F{tê´Çãèðƒ·òv&åßÊttZ41M”ï2“±÷i¢Ùi¾zo¢=W"Eí'ókþþ[:aÎ`ïoå1–ŒY”‚ñ¥Ù‘*sÝÂÞU‰£q×ëµ`{í\K“Þ¹5‘6Pc|Ö† CJý=Hk#á¯o…‰&®üŽ&Šs´Um"œ%v":?LŒˆ¿&: ÎYOÝHO#z‡aâT/•Yï=ç$´©ÈL4 œ˜Ì?ª6Lìê]‚"?vôe[í­@‚›Î?Q|&šÇZæ„S¬Ì^òk.˜X01LX;¿TSôóÃ6»‚WÃD}n\¬<;j?ÅîÞÚu¾¬:Øúö­.Uµä]¥Í.Ù³šS&÷~ ÀÝ.¿íX4q0MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©^ªLåo®6!ØQµ‰sRq²ü°M•¨Xu}Wïü]4q*:ÂðMTD ØšhÊœW~ØESÑDù.™’ÖÇ]š¨vè×ÓD}.—Q9lÙÌì·æ‡å- 9Ú𢬶`’^Å›¿íDý9šøqZ8È wÐé»—+ƒ‹&ÞMûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøÛ9¦Ò;ZÜKÜ\m"ÓÆü–&NKÍ4Mü£^IsŠÕ#~ÓÞÄ?oí"½TÿØÙS{{ä‚9Ï1 ¿”Š_4±hâwÓÄþ]rb@ÑÎì¿írºöÚÄÏs±g0‰Ú#²Ý[m W‰¼§ (-S¶ò¿]¥ÕHøQšhâ“% Å!¬“Nñ4±ÑùibDü54ÑQpÎzršèFzBšÑ;L»s5f{)”{¯M8à–²ÐÄ.Á“À'R 碉¦*'âƪ2úòwÑÄ2Š£“SzŽ&šGÓ”\be‚‹&MLEå»äŒÉI¼KÍ.]MÐöj·Nƒ¹ ½“&h"p~OXZ0e×~ÝìR~voâ”8â•6œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâ\/å÷ÒDFÜXð€&v©†¹wæä¯4ÙÞÄ®ªž þD½ÁwÑD{keMŸ|† òMT¹&Ù‚øwˉ×%ìESÑDù.%I=!š»4Ñì8ÛÕ4QŸ‹Ä扣öSÓbߺ7a›i-ã÷ž&¨(`ʹ¨ê*­vD¤ÒÄqYlÑD8MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©^JÉoN[¸Æ®èí³²ÎE§ÔÛ·ÑÄ©è8ás4qFƒ®±‹&¦¢‰ò]zCþõnñ_ÿù~='º|o¢>·@J•µg¸7¥“¤BVx8¾|®ôõLØ,ãË õoFÇ?}|ñòðz1ïƒè¸=:¾8k}‹•‰¯jFk|™j|É[bw7ø5ËÆ¿Æ—j§œÒÕãKyn™Ÿ‘£õW{«]‰ Þ8¾ˆW~¡#~Éu†ˆõ·Ñ@i±K"®VU§‚¨®Š«%›ÖjU´ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­jÎÉ(3ĽݼZ…Y(Éqoÿ±TâÉö¾›*NªÅ.ÿZ,ãϦ‰öÖåµ9¸s¹G1ççh¢zÔz1s¨L“­œ‹&¦¢ ÞRMžoœ¤mvâp5MÔçfJ¹4înûiv¨·ÖFÕz£ÎÞ/\[pNdÒÚ#س÷òªSƒ†ò8×Eá4±ÑùibDü54ÑQpÎzršèFzBšÑ;L»sw5 {)´[i˜7Á|p’¶I 2Ò¢ÆR߯ø4ÑT©Ô]ÔX}Nò]4q*:ÌüMTN¨‰$TVó,šX41MÔÎÝD,¢‰b§ty–¥àš OãÍîmºëh7Ö|°Z%µ'5 rV»Bô(M4qHh”CqµÜÒ¢‰hšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ýy]»â¸—BÆ[i‚·÷µQÿQP+˜Öý¼Ñd[M§2O€X=ýš›ñφ‰sÑÑ·&šÇ" ùƒQœ_“.˜X0ñûaBË$½€°@&š^ŸäCÛÖH"R3U;`ö{a‹ÇƒsNºâጒ0êõû³çœNˆ«ùˆl±D4IìDt~– 
Ktœ³žœ%º‘ž%Fô³Ä©^ ò½·ò2ùFr”üœÔ7õ€~+LœRôeçœÎEÇô9˜h%Qû¼ä_0±`b ˜¨·ÝJ"ÀD±K×Ä{‚ºµŸzáÎ ”ÊL›í=MXiÁPB ÖÇžfG†ÒDsjœ N|U3 §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašØGEv~DÞK ¸ñM4 ž n|"Õ'Ë?^U!€:H¬ÞíËòïÑ¡ò»iZªê9šheDš]ÆukbÑÄT4á[ªÇX‰šØí¯¦‰ò\,MÑ­£j"ùFšÈ¾©°¥üž&¼¶àš#Ã"¥µ‡ÎÏÒD‡å÷! Å!ÊÚ›§‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ3×Jòq/u{þqÁMíèÖÄ)©m.šhª8ƒ  Þ¾lob޳÷ëw;~&šG·üÁïöš‘fÑÄ¢‰)h¢¦s¢êÐD±K˜®§ wÏjoöFþÛ~ÜÉò½{ÎȦoiRiÁ”Ô»4ÑìÊëÑ“NÍi½HŸÜBqõ·^4L{ž&†Ä_B=ç¬ç¦‰~¤ç£‰!½£4Ñœ3b …w; {ï`ã†zq£I͹ˆ¥¾.ÏÌ@çÔ3ØWÑÄÉè¼@íÝ4qN™Ë¢‰E3Ñ@½¶þîZÜ+Mìvé¥ÀÄ54ÑžK¥OS›õÿþëŸ^ó+ß@¶er}Щ( =QÆþ.}³cOÏÂDÇ€Ö¯½ÛQ¢Ñ,±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; çz)»ùÚ„Û†zpÐéœÔ 0LœSÏßuÐé':\÷âèpzîÚÄîÑŒs¬ì_E[L,˜øý0e’nYðÍñÁD³#¾º4j}.$T Ô_­jv忾ó6m’ÌÑßÓÖ,e”맨þ±ƒü(MT§š¼üBŠÓ¤‹&Âib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsÂ,¦q/…~/M(ë–h¢IÈ^þa,•|®ô°»*)ãúc•²|Wéºý­­Ð*~ðÛ*Âs4Q=–7NÜÏ#¶ÛA^4±hb*š 2K'ID]šhvürBæ"š¨ÏeMf©OãÍŽáVš°­„Bø=LPë ‘RÿDÖn—éÙ­ j½ ‘ô3Þív L„³ÄND燉ñ×ÀDGÁ9ëÉa¢é abDï0L4ç˜Øˆâ^ rº&@78€‰&¡ Þ¯HüóJ:L4U…RÆXý›ë)6L´·®ÅLHãè0=W»ytr-*sÌëœÓ‚‰©`"×-L¥ã´.L4»„—ÃD}nm?ô{Í]ï„ Ø–©öû;ØEAiÁà–-RZìÊ'þ(M4§ÙÜ3Äâè¥à¢‰ƒib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NõR9ãÍw°Ó–ù ?ìI©2ÙA§¦JÊt³_Ôc·cø2š8—,–·ÓDóè¢,­6;“•vÑÄT4Á[‚òP±KÍ.ÁÕ•ëÚsk%í§"ÚíDýÞƒNHéà6o5Ñ‚«¸J+qHz”&ªS&ªWÆBq¼2:}0MìDt~š Mtœ³žœ&º‘ž&FôÓDsž“²AÜKýùa“l2:”:Y~ØõÆ–)VÿŽ…þhšhoÍ™U8Ž¿^N¸›&j}`Nõè¹FÊü¥Žï¢‰EÐDQ‰õ„%cŸ&v»ËK×µçÖLMÖ/]÷cÇtg~XÚœÍáà¶”¾E$SZzµ+•=JgÄ ¤U;œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâ\/uw~XÒÍühoâ”T„Éhâœú_‹þÙ4q.:Oft:¥ì_×ÃM,šøý4¡uͳyê_Ânv¯®6ÑžKB”ƒ]ÇfÇzçI'ÔØñè¶–¬’”!RZìRæGi¢9uCg‰Å9¯½‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰êÜ ôG˜Â^Ê’Þ{o¶DrÜÙ˜Zp{·ËsÂÞU¡Z 2ívü]Å&ö·¦\þ“x¬4¢çŠMìË<çƒß}]›X01LX›¤3‘÷ï`7»l—gt²v·šSÒþ†r³Ë~7Ld:€ ÛÜSB÷{Š]‚øÑÒu»¸Z§;‡â<‰¬KØá,±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; Õ9$ñ,q/¯•î Uw‚ãÞJêRß-rýNšhªrÒ š~=”ûgÓD{kV à8Än÷’ÿvš¨‘ˆ T†å‹&MÌDµÀuM{êÖ?èÔìøåöîE4Ñ l r2Ù5;F½¹Ø‹,Vª“Õ2jª¬0ŽçX½¦/»•÷Så%ŽÎë˜Û‡—Zÿ=!z ¦Í^êñ®áe /s /¹ÌÛ¼8Œ†—ü¯Å¢ë†—¬Ëÿ‘¢ö“•î]¬²TúV8_ʱvÐÁ­¼f—üÙÍ)'I ±¸2ˆ®Åªh¢Ñù«FÄ_³XÕQpÎzòŪn¤'\¬Ñ;¼Xµ;ׂ ÷R|seÔ̶Q>:G»KpBþ ·gͳÑDQ%Â,zìvðu4q&:ìOÒD]>õ¤üÁ0î’M,š˜ˆ&0µÅªlh]šØíøò­ïö\GÊÒ?‡¾ÛË4!+ÃÁѪ¢ÀëõÁrÙí¥‰æ4cÁ½>MüØQZ4L{ž&†Ä_B=ç¬ç¦‰~¤ç£‰!½£4±;ϨA•„ÝŽÞ]E¸&¤¯#{O?RµŒKKÍ2Wþñ]×_9«2~Mìo-Y5ŽŽ¤çr|ü?{gÛ#ÇqÜñ¯BèE`Æ¢»ºªð Å΃;,z#!8Q'ŠÈñä1!è»§ºg%­È®Ï̦ÃŽ{uSÿ©~øMwWU !9%m»HûÒ÷N]ÑD¬¹;8n.}vé$9ÄJ4Q®›a Íw@ÕN³n¹6tà¨QÏW32¥¥—bFì(µ>(çxQš¨âX ÀÇ”`§ ošØˆhÿ4±Dü:4ÑP0ϺsšhFºCšX¢w1MTç)”GÆï¥Lä¶4ÒuŒ&©FÎ[®ã-å¾h¢ªR1Âõz]ÇòŽÑÉ÷‚ÕN_Ž&ŠG{r„‚¯,áIíà&všè€&ÀféP“ñI“&Š]YœX›&ÊuÉ:ur^WU;Ò¼íFZ¤çij ®õY¥fñ²4Q&´/h‚8†=LJ;MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&fõR)¦w:åít:J`‰“¤b_ç&UŠ |õÂW¶6Qï:CIFàGGOŽ\nNÅ£P æÔU&1îwšèŠ&°ÌÒ5!çfmÔÁN0­Måº rÖÜî÷»@[ÒD>0 ÊM`iÁÖÎs{ÓoµKY/š|‡)BH®8²'ùp§‰ˆöOKįC ó¬;§‰f¤;¤‰%zÓĬ^ aÛüãHùd&æ)¥¾Òª(¦Ì2A½^W1£ct’A`ö£c¼y9˜¨%‘Ý”¯,íK;Lôd“t jœÞ^š(viu˜(×%ÌœíCÕ¶=6QŽþÉȱ ²¬1Hj'ù¨vrZ¶ì0QÅ!“vÅ)œìÂÚabd–؈hÿ0±Dü:0ÑP0Ϻs˜hFºC˜X¢w1LTçÄHºPżñÒDJ‡GJ£Î“Jˆ}ÑDUÅÖÏK˜ ^¯+cà1:ìÆýè0].¥Óà±”)¢ ÊvšØi¢+šàC({{XÞO–ö+š(vNêK¬D ÿŸ¼ëŸBØò6ò­å‘c\ûë‚Ú);¢‹æœ–”о¸ŒûÒ„;MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&Šó2ØDÞí¥rH[ÂN‡˜ÇŽMT SŽêKÐMTU²·}x°£+[›¨wØ9œ0Øá7:U)°ä ßÛiþæ&všè€&L%Ú:Bn§tªvœW§‰r] jÿµÛO±KÖ‚¶¤ 9hRá‘NröàPBGi.{óEi¢Š³nˆ©=øU;>93¸ÓÄÈ4±Ñþib‰øuh¢¡`žuç4ÑŒt‡4±Dïbš˜×KIÞ”&Tð8ÐÄ<©¹³cU•„Hí¨Ç»Ät]41D'jð£#ñ‚kÅcŒ9x;5ª]8É̸ÓÄNЄ )": b«]¢Õi¢\7›·ý˜]¤ma§GÖ&´´`¶¨§öžÆjGç²nHÕ©dbu;Üa»ÓÄFDû§‰%âס‰†‚yÖÓD3ÒÒĽ‹i¢8‡2}m¥Dæ°ñN'ãØN§yRSì‹&õ9 GW=œ)BûaÓD½ëR‘=ûƒ%ÄKÒDõhS!š¢ u/7±ÓDW4¡5Ak¶ÏC“&´&hMk×F-×ŲˆS'¡Úñ¦;ÌG‰½âyšÈ¥לOíõÑj—Þ¯½½)M§e•‰]qOjaì412MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&ªsŠ˜ý^ hÛµ ›K$ŽÂž%#÷Eƒ*Ê0LP/×U¼n¸k6P$G’ &ˆ­Ë~j§FµSÝS:í4ÑMä2KÇrÚ¬} »Ú­ž ¶\7J²nÛm?ÙxmÂFYkÌgiá<·“mƒ]-^78%Ü®Ü:Øá~nÂ&¶"Ú=M,¿ M´̳î›&Ú‘î&é]Józ) Û–›°¾ü çiâ(5à©ÐM ªˆ1NPŸäªhb^t8^îÜÄàQíó)s ýÜÄN=ÑDy.Á WE°Eƒ]BZ™&Êu1DŽÌnûÁ2Þ’&ø"åó4K Îs;Iõ`—䢧°«Óå¥ctÅ¥H²Ó„7MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&ç6Š(ø½Ô™ÃÁ«ÒD<+€ðxoŸ@RÊS¤¦¾2ĪPKªD_=¾Æïæ‰zׄxJtô‚4Q=ÚýÚûÊRØ×&všèŠ&b9]Ë)èܤ‰j§'™©W¢‰r]%!ð§Á(ªºå)ìx(-ÒùSØÖ‚0ÇØæžj$]”&ªSFç­_µ#Ùw:¹ÓÄFDû§‰%âס‰†‚yÖÓD3ÒÒĽ‹i¢:OˆØ®Ð|©Û–›`scP5²61Kj:sòàÿ”&U9h 
êåºJaw]*(¶ËÐíNó°nMÅ£`ÎílSƒ²Óœ4;Mì4ÑMØs‰È’ì!lÒDµ#\&Êu)—÷Uî4IyKš±ÑGßVaiéš$æf–Á.…‹žÂ®N3é´Uí`ßéäNíŸ&–ˆ_‡& æYwNÍHwHKô.¦‰ê¼LÐÛ üŽv6¥ ‰z Ghb@ÉÉÕùÓ-I_4QUQ²Ïy‚ú|]¥°ÑQûBtè¤rÔæ4Q=ªÍÆT|e‚;Mì4ÑMØsi½ Mš¨vÖ>7Q®[ÖÉm?T÷¶]›À¨pŒ&r‹•:ËôÕ$ö6¾”ÄâÉ@‚|õ˜ãµ/3¢C|ÑñňËh¬Ã®8P‚&¼ib#¢ýÓÄñëÐDCÁ<ëÎi¢éib‰ÞÅ418g†)½TÆmååšóM $9…1~º¥Î–&Š*ŒÈ9WQí\Ù±¼!:™)€›µ_Ž&ªGöðD_EÜib§‰žhÂTr0N÷6Ò¦2›²vqÔêJ©2t[6Û}Ë¥ Ì5u8˓҂S mš¨v„ù¢4Qœ–ÆÞë–*.ë¾%Ì&6"Ú?M,¿M4̳îœ&š‘î&–è]LÕ9XÛKQ„åQ:0Œ­MÌ“JÚMTU(‘$ùêáÚÖ&fEOÒ$oNÕ£FJÎÑÿjg_ÒN;MôDö\R9 ¬ÜÞèTíˆóÚ4Q®›¬%gƒgµ $ÛËÓÌ9å1š (6žIfG©Ù¥4úFƒ¾zôÅíS›¢e8SBàLu4«QBsöé«»ÿyúâÍëúËcÇsóèJ?³O¾¨ÖŸÕø×»çe¸(±=úÿ2Ì¿nÌó‹WµyþîÑ_<¿»’ß=úüÅÃíýMfþ¹ Ù³g-¤à„}enzSz,kÏŸ><5Ûãƒyl&ÿvûú»›Òÿù7¾ýÃ×ÿB¿·‡ø×Óí³oÙ§²>áÓW/žØSôú× ”Mðã7O싵f‰ÖL­]"=IˆœQÞmª›Sôv¢ïweÞñé0ý¾ê›òýÿéÝO­›½{yóÑñ³òëÚÓþÙäÔXQ/÷ËLã÷_ƒö¥ñd š}°}ÔÌÙíÉÓb>ëãòe™Î¼Û÷–‡ææÿÅ´èý¯´>ÊÇVÈç¯nŸ¿~úëq¬üôÃßnïïoÂ÷!~Ë™#~ùîëÇöWwß?Ü uìHA­så1y|ó›ü¹¥”‹ýö7á{Œ6ùº ¿ýÑdüáöåí×ÖºžÞ½>y”~þøíÏORý÷ˆÒÇX¾ÿ÷?üóóBßœþÆèñ¿|ûÃGÿôæé}y2mBðüõ‹û»òãï^Þ¿xûÌ&qöá·OŸ”Ïê×õW£vkoëÏk/R~<¢ôÇŸ~Rþõ—ãDùOO¿½{üöñ½‘ösûÛWåwÏn­|xyû¸:úe8y}[:¾×Y\þ½Œß-|öÉgÏo_¾þîÅCýçý‹7ߨ<¼zq÷êDÆðëì­?]xûŸØÓò仇3¡øcÕÏß”ñÝ LÙíçòã×·¯îžÝ= ²ü€ýXšÒËû§Ÿ>Ü¿ýåp¢6to?Ž ™l*©;dªMÅdžLUüêÑíË—÷oo þ‘Í žü<.>¢òN§Œ¢w§#]ytûðp÷ìåãp^Äš|ß{ßWìâ/›@n_ÿ÷=yuûò»J0¿zô×7ÏËwó¨*;¾ˆc>ˆU}ŸãŸqŠO͈ȾO aŠO8ñ9Ûœ1²F7¶Ùfš8Å'º÷Ébw  ÞËe³ƒi>É÷©$ÁɈWí8Oú>yŠÏ,QBò}ê´ûL®ÏSðbkv!Lj+âÜù°+>´³¤ÌRy¿´óï<À†gyõ³)åºHɼ” é–IÞ©|iÆ’¼gƒ0I$!{8av|’à«Å0à€S> (äDÎúá`G“Öò"ºNm(GÁä,W»À“ó"¹N¹„í\µwœš™Mrê,•R(ÝrPNØtZí úõ’ˆ8ˆ#fl×6ìðäØØŽˆççþ­ˆvˆ‹Ä¯‚ˆ-ó¬ûFÄv¤ûCÄEz—"âàœ%h{X<ÚÅ‹ ƒTGŠ Ï”Ê¹+DTIŒÎRÔ`—ÞùA#â1: N žŸ¢¨CÄâBùÖÄÆmòÉaGÄ{BÄò`r)l[ˆ8ØÁêU…ëuËI"ˆn·mØ6ÍÜO‡LFç3÷¢¥2¹8!’O:¡Ã$Ÿaf8•iˆ(ŽÓX:+Ì1¦f*£Ýek÷ NsöÅɾ¶æÏˆíœ–ˆ_œ æYwNÍHwNKô.§â<êöR6Œ›'j pªÊ[Ívé£]À¾À©ªª¯3'áºê[wM¥†]˜É—§â¢!=F_gÞÁi§®À)–¼kRr*6ó¾ v§ÅLV§r]"UtV7ª m Nxˆjq'°&ŒE±=À;ºhéyâwœp版öKįƒ ó¬;ljf¤;ĉ%zãļ^J6®I“¬Ç‘¼o3¥æ¾²HÏS¦øÏ‡å®9”’=äF‡ètCÜÖ81K™MÙvœØq¢+œ°“5SMœ(vét³ÒJ8Q®›í† V¼Ä9¤Mq†—D)Ÿ/JCXºèh´ÁíwÅ2\43Ä ŽDsòÅái!ì'F扈öKįƒ ó¬;ljf¤;ĉ%zãDuž08É’;¦°)N$Œ‡9¿séÎ¥=p©Ö͆Öj Í¥Õ.…Õ3›”ëJ)ªÜÆÆÀ¸)—}Ø£ #ÛTL$‰%¹‰§ÔðuZB $—a”RvjvÈÓhDÔƒG)ŸH›˜ö7”æŒÁ D_}æxmCéôè`ˆù’C©yÄI&(C }(݇Ү†Ò\ ŸC©ü“›Ciµ Êk¥y(f+êT'ªvœ`ÛW¼"ÃÈ+^S€!$"gÇÈ`.{Z¸:ET…싃´ïq_¹4"Úÿ›¹%â×y3×P0Ϻó7sÍHwøfn‰ÞÅoæfõR¼m “˜Ær8kš"µ/œ¨ªMÿ„± ® 'ê]3äœÈÀÖæ8Q=¦\2RúÊNS¦î8±ãD'8ÁÉ`˜þ—½ó[®ã6òðµÞ‚¥í–Ã1€þ@[¾ÈÚ›”+ëµ+Ž­‹ÝT™6\‹‹¤œuTz—œ ƒóÁ[E©Øqg¢ÿ¹\{~Ñ!\÷>Iâ—¯_½8ýïoŽÎÞk;ks»Ãçù¿ÞtÝŸÿüæxsxq|ôüðüâõÿþz“?œœ¾xñôàÿÇßÓÛrÛàÛqâÓß?úÓ¯ç›o6WGOÓqó‘ J;þpúJP¡ý‘qe\ ŸÝkáö¢Æ6•Åüö»¯oÞÈÅ|ÐÂ/vl9¶ïvù£oÿGFçÛ[òþŸ»ÜÝGMÓ|þù;8=‘!îT:«ö=½Ü¥±ÿ8:Û\žÉ€}pïç±ú0ïÂGßo^¾ø÷ÓW?oq9þÎæg㇯¿ÚÖXÛâñ‘ßœã¡=>r‡ù0†#x0Ž›ø"˜ž§¢×®ã›Ë×o${º}L»-ÑNB‹Íîv¿y•»­å³Ë¥™ûzÔ–|êì¼ÓnêÆä×oóˆ~pù8 准ËžZwp J›?üéËÇïv Ñ÷;½û¾Ú¼OGî¶,9Ò¤6!oîw’)¼>ù~#ïæÉåÓÛœµ[ùö¯òxüqób#Y¸¤OÞ¾ý ÷mû‘涃IéTê‘Úãñõ›o9,õS©×¾ýE~ùyx~,™Ó!š8L‹èãq ‡ÏÍo6'(i޼2rmïv»{’½<ý›¸}ºë-Ë÷7G¯$w<ù]ŽèÓƒÿü³Ð¿ØöwÉøq.wQ÷¯2X^üÚjÜí·ÕH•½"V;7¬à(¥½¤±è=9°ê·@±ƒÈƒœzũ͟FQ~ÊN³¹½ÖDÎN±ËuÇZqÑÙu’S›½*D´þIÎ)âç™ä,(g]ù$g1ÒNrNÑ;y’³uŽ©…ÚK9c`áÝÌ2ÜQÏnæ‘R+«‰Üª²iõ þ­™l¯Àꃥsa“œÙc*Õ]®‰ÜÚ1­»™×Iκ&9mžÕT6㤻óûö³ÌDk©]´"íòhì’KõÀ7âÐö¬Ô£‘8’W6þd;º?5æSŽ~w|ys.ls²Écá®1FV€Ú†=Qå´•έqmØÎGÊÎc•¢]õ®uúòô*)¯NN¯r´rúxðOW¿žo¾xòÃû¿ü°0Ú·Ý‹SzžÈpxtùúÕOz „LžÈxvy)qüâÉW§—ÒXz¨^§Bmòürztï{}óçç›_d¾¼¹ŠËæà»xq|&fÓãÍ éWÒÅÈ€ùFœ4Oþyr©7ŽÖÁŸo¸ð6]¸¾€/Ú¨Ï{ oÉðîÍ>“ñÙ;e:»µ3vÏHFˆ qKë· #;Ü4(`ã|/s ²ó,wMÕÈάÙßšýÕ–ýy€!®_!õÏK½ýÿðrwñs}…ìU0κú¯…HWùrw½3|…çN9qz©¸ì†aˆØHºÑŸ¤¢ƒ„÷¿ÝÁi±ãN5žÙpƒÐ¼ÑV}d»@K³Ü8I‘{yz¸ÒhBuàœIòÝõŽ\GÇ™Î^¿} Op¬® :Sîkƾfìudì$ 8¿ÜÛFd¿ÀCÖˆ4b;o—Ü!àb㣷û÷CªrŠR±c?¨&²ºæ$¸`Xüj£šXD7l[‚Wœrš\J÷Ä–¯4Û1ó^¿ã$§ÞxGÑ©â|w=㊘=ìPˆhýˆ9Eü<ˆYP0κrÄ,FºBÄœ¢w2b¶Î#ƒ½” ŸXê%ýì;±4K°⩱2pʪÀm¾·½Ê-ê?ipèTü\œ²GNGF]ùµ&Õ Nu“¨ iÛPùË\¶ÃÎ7‰™À)µËÆ‹{Ò^ @iÉm ¡ h˜ÂYÏ“ÊÛ’ð‰†bçaÐ)(¨óå´Ÿ;` QëùÄŽÂ0pÒŠùú´»ËÆÊ)²ÙnK°EÁ);%¹º8ê”ÙXÁ©'#.D´~pš"~p*(g]98#]!8MÑ;œ²ót\¬ z/Åà–ÝÏMÔ°åpʼ‘Æ +[*™UIf`ìkí¶œáòIƒS¾êT/ưèöNÉctd½×‡ñèì N+8ÕNò`zëÙnYaþìÎì­u³Ï8¥vÑa:­P{<\ò „0¨gÆÉ§}Ú1§¿‘íB¤! 
CFeé èâºZ­ ³%9U"Z7ÃL?aã¬+f5Ò•1ÌT½“æÖ94dõ^Š]Xx-™oÐl[K6^ê}Üúx s«*+†¿µóô€†¿¹jJ­=:$OÀ~æÖcoYÏ1ˆ;…PV†Yæc3ÌõƒéÀ½‹ý skdæd˜›vÑ2ˆ/í;GËV/¾•qËZ²VAÚ€#U`˜[»®ÒÀÊ0 ˆ˜=ƒê48ƒœ¢ê4‚nI»R±snÐ$DŠS›{HŠ¥íŒ·väq¯´–œ²5` õ®níL'¹Xi­' /D´~Z›"~Z+(g]9­#]!­MÑ;™Ö²s¨w¡,ƒþ²§ÑG“B¬e’¨{ku¥PSô[U$©‚']=YzX°–¯:–ÈAÎÞ£¿ñè%5óR al\am…µª`Ͷ9èî—{vçlg™ Ö¤ÝèL €ê«-vÞ,9áD &òYÏÓV}sÖ+JƒóØÁÚ°‚0.õhMpeXËvÞÙ½"Lrê-I@œ*ÎwïÝŠ0=¹i!¢õ#Ìñó LAÁ8ëʦé fŠÞÉ“;&P>=µvvY„!kc©‡aÆI%ª‹a²*€X*‹ú^}|` ÓF'+$ÝÚìa²GŽMЕQgÍÏÊ0+ÃÔÀ0.oÈ.ðýÃàŸÝy€åÍb˜›aR»dLª‰¨½@/yƒó [cû伇òG²l·Ï-ý7N£s‚NN­]÷à¨yb!¢õãÄñóàDAÁ8ëÊq¢é qbŠÞÉ8Ñ:OÇ‚ÞK‰ÈEqÂ6Ö:×ßÛGGhÌ©XÙ”HVé% ~Ë„Î'£¢#lj쑅N•5òÙîƒb+N¬8QN@Ú[C’a²/âD¶ÃÎ13á„´Š+œ×yã?FKN‰˜ÆÛ´Û¥o€IÜ$Ô^u± 8¬,™W¦D -3dµy±s1œ‡ ºScãiN£ó–jJ“zŸ>òèê…‡6”ÊUGAÛ`õèD ûJcˆä-¨Ê ›k®Cé:”Ö0”bcœw(Ïj,¥ÉŽ»û±gJS»L$.¥ÙÎY·lYQI‹z†RQ@œ8$M©`¯ÝïÎÒì”<±²V¹µ[wÓ?¹"Zÿ—¹)âçù2WP0κò/sÅHWøenŠÞÉ_æ’sËlL€½TðËÛô¾!´=ý£¤rmýI•K‡²:«ªwÝÉtÕN<'}°”(îq±rV(¿×ï8Xqbʼnêp"I¢² ÛQö½)Êàß4æ-€Ó&xG¸Ã„X|V/:xc®ð‰€Î÷1 û˜>‰aÄÎÓ ¥lÔÛãƒE µ¡TìÀúÈVuœ˜³ ]i²3ƒ¶³²V|ˆRÒÆ;_¾Òlç,ï³Sö^n„.ŽBXQËý ­§ˆŸ ÆYWŽˆÅHWˆˆSôNFÄì\ÆHƽóÂ'€35ÎA"Ž“b]ˆ˜UEI‡¬ÓÕ{⇅ˆ£¢y3NÉ£sÁ¡±ª2çÌŠˆ+"Ö…ˆò`Êc*ôçL‘Ö²<èsÓZj×¥ù&e-x¶ ÛÎÖlZ—NܳœëÓa­F`ÄŽ`ÐaÜ 8qê7(DãÊ»`²Ò~&;•Kݲðç¾8Á¿•a´ä´ÑúfŠøy¦ `œuå SŒt… 3Eïd†ÉΣ$ÈÁè½Tt°,ÃÛ0cÃŒ“ºeÝÙGeÎ9À¬Lä´vöPÍWÆ8 :ݓۖf˜ì%c1¨+\X¦.†•>Á”kòd;îF6Ãpš> ¥Ó¦oí¶í¼™u?+$á¬g€á`“.ZËØÙGëM2* ãS¿áze&ÛQØïR½ä [‹N;T·2LOrZˆhý 3Eü< SP0κr†)FºB†™¢w2ô΃ôÉz*"ia†‰ ø>†'5غ&«r z]ý¶ZèŸ4ÃŒŠN·*ùâ “=’ô¨JYÄl'™ÈÊ0+ÃTÅ0ò`JHÄåUsÙíìó0Òn´dµj,`@‹ž‹íK‚=ë`Ø'ÔŠA{ÕÅý C ˜† ¹t~^Ñi¶söÊ0É©t|òÀ¢*.¹+ÃhÉi!¢õ3Ìñó0LAÁ8ëʦé fŠÞÉ “³5N) 툖.$9¬§ž!fœÔXÃdU½¶Øº½JÀ‡Å0mtbºqztÂ>×’%ìpÛgì{ÊÒz™•aV†©ŠaäÁŒ\4¶\(ÛÑü “ÚõÖ‚5ŽÞøEã¶t`¸o€!"Hç«uBb' †a•aÈG‹ÈhU§ÑtæyKN½ê4:(}šQœŠ¸ÇI«>$ù´ÝH¿Ò”˜aKõ´}o1 r©Ìe§ÙŽß+"ÆÞè™ù`§Ñð êÞêN%X£Ã" g;„Ùa8µ‹) šñ§Y2\rQbhûÙˆuÞB 4ŠR±ë®/Ý­8„4&ˆHÁ­';‚a¢²›+e•j±ŒÅÉNØëé„cıéÖ~\q{î_Šhõˆ8Iü,ˆXR0κnD,Gº>Dœ¤w*"Žì¥â²E 1o^æíåÆIgªBÄ‘êýêH1.:û[ Ùzôì±\þìú :É+"V€ˆéÁŒÎ ³6”À)ÛÙˆsŸN˜ÛM3sQµ£Xñ’+!AèÃqྜG·ríŒÖN¤b,ƒ“µÒoØtênymLk°×i®Öi:t¢<ÛØÚ\+R¨Éi!¢õ3Ìñó0LAÁ8ëʦé fŠÞÉ “œ;ÄŒz/ò»¹8ÚÆÇí³\­R‹†Ë§æ]_Q¨« E«ÊE2¸-`?m„ÉW „®¼óùÚÎÐþ&{do±|êrk·°¾"LmcÓCpȾXT¯µ³4÷‰ˆ¹]a(†@Ú ãÖn{>„á&ÆhCÏ72›—ý¹ËgG¶v¶{/^¾þkz .^¿lŽÎOÓÐ/ÝÓÏá2= ¿ØãÍÕüNŒ¾þ—ÍÙQê¤N6ç’¸_ç“òj¸øÙo$äïÒö×à>û—ƒ”Þ+>:Í÷]©Oev´îVì8 Z}éI5—{H„X^ÉÐÚ!íÖ²Ó€Û– ßç× '= /D´~X›"~X+(g]9¬#]!¬MÑ;ÖZçžèBºeOÉŠ®Ap=´6N*Õµ&±UÓ†4£«[Îmù¤i-]5˜4aãDgå[e#cT•…•ÖVZ«‹Ö\:ˆŠÈm©±ýìÎ|w.w&ZsùÀ* ëX{Bà°èJ=jBú2Ig= º„0l´WÓ„Ó }UžU†!é6L:PLq*vÎÚAæý§ääù‹ºS„AEF¼²mÍ‚ô(DN¡ÜÇ';ˆH{¥µ,$opF‡®Sds¥µž4¼ÑúimŠøyh­ `œuå´VŒt…´6EïdZËÎÑs`Ð{)ðËA4‹}´ÖJ2°¡.¹2Z˪(–÷T\Û=°B‰íU§»V®Qvmgö¸<ðÿØ;»Émä ßÊùÌ2Y¿äFìäÄ@‚ Û;ëYø{sû)R£íi±Z ¤%¦ ì‰g ªWõ5‹|D²ªzÌžø»¥»3C“Ö&­@k¸ÄH-É7 %V;¼¯ôy­•ç²ÄìT]í˜ÏÜ[ãhD(¤øÃÆcƒ\BÂvAŠÕŽò&èM‰}ûã7öçÿö÷ÛMoþ÷;›Å-¨ïëÐgùºü_½2ì·´GFP6vú¸Ž`HÑ¡Üb'–È/¥‰*ŽÊ%½èŠS‚Iî2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ¹pi·èg)‰çnE,’6`¢*ÐZù ¥y°BUUJ2øêÐü´a¢¾u¹dÒnp³c¹&ŠG{cPö•%Pœ01ab$˜`[¤GQhv^íè®ÀÞA0QžËd ‚7~bíÛ}nÇ.A [0!e‡²ÇÔž_¤æ º¶ì]‡I‚øâÊn΄ o•؈èø0Ñ#þ˜h(Øg=8L4#= Lôè톉êœrJÎvòjwò¹6&\r š¨lbRg¯wµƒÁjFTU@í^Š7»W»…TßÚ&úôÄLž$_×<«zÌ!Ú{±¯,ëÜš˜41MH)A¥šh›&ªÒÑåÂës3SfçMôˆ?†& öYNÍHH=z»i¢:å•zYÊìÂÉ[€KÝ ‰UEÔô„T„±h¢ªÂ˜Ø¹»Zí :^‹&vEïÚ4žNÕ£ÄÒ/ÓWÆ1Nš˜41MhÙsPÌ í’ÕNôðtå¹Hˆ1xYÛì0ó¹5´í‘9?¦‰TG0€wÑ¿ÚáÇÕ¾O¥‰ê4QŽü„8å¹7á.Ÿ&zÄC û¬§‰f¤¤‰½Ý4QœÇ hIÖÏRùä iÌiIykobŸÔÄcÑDUʹÝÒïö–9½MìŠÈ…õ¬«GQV§v[µã”'MLš‰&Ò‰Cúø¤ÑWüýš!ÇÑ4QžË˜žÙÅ.Æ3¯ÜÓ‚¼ÑÊÔ”û%¼‹mÕN96¿äŒ`ÜÑU‘ùÕæ{kÄ€üèÀ•{ßÕ£„hë+cIs~™óËHóK^BÌ9Ú›óKµ3Î9z~±çZÖ+ÇßÛ#»Ú‰Ê¹­²3`ôx~)Ê!%VO©Bàk÷¾«SŽAžGi6_s?C4":þתñÇ|­j(Øg=ø×ªf¤üZÕ£·ûkUu.€’ƒŸ¥8Ÿ[䃳.6õl|­Z¥Š-«Á—*ŒcÑDU¥„IÈWo³ÚkÑD}ëÌÌòÄÏиã:š(±üò1»ÊŒ…aÒĤ‰±h"Ë:JAš0;Èt±'5i¢ÚE£i¢<·VæO茳“pê½¼¸h¹þ¸f @ý–•XC{~©v”Ò¥4Qœ ƒ“ «]äIî2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ¹%Prhbµ<{oÂfZØNöµ•ª¯Ôðh,˜¨ª20>gå×êfTßZCʨÁN©¹qLT6:sòÿnÊw]¨&LL˜&ÀéP6Ö¨ ÕN# P:e â-Ñ‹]8õÚD^PmÔÇó .õx@J¡ƒªÂ¥×&V§)Åè0YµÓ&¼Ub#¢ãÃDøc`¢¡`Ÿõà0ÑŒô€0Ñ£·&ŠóÌ¥O¸Y*S>÷ “Í(9âv¶ZꃱÿZš¨ª4br>|U;×jgTÞ:…òûj·þ[£“/,X•ª-ÈÜ_ýW˜%'M E¸”3´ö 
ؤ‰j'áðƒNå¹¹¥MÞ=†dq9—&(ÐMå–Ä©Tao*­v”¯½6QœfËDÙù”¶Úñ,@î.Ÿ&zÄC û¬§‰f¤¤‰½Ý4Q#ŠŸBmAuîA'%\LæÆA§*cÀ'¤æÁ:UU¤ ùê _ìÚD}kLÎ¥’ÕŽè:š •_Ê‘n_Y’I“&†¢  ¥=ÇöÞDµ»/ MØs1Ù£U³3~"²Ê™ØÖÛ4Áu+çØé«]¼–&¸|nI!€'®ÔÅ›{î2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ9¥RÂÏRôñœcÛI\"ç š¨ƒ7/­vQÇ¢‰ªJêþÄêÓkˆ½EGí†~täÊ“NÅ£,(ú7[7ñ¤‰I#Ñ/1[§§v;£Õî¾ÊÆA4QžKÿmçÚÑjwnsÔX÷Idƒ&¤Œà2Ô©=V»ðqã¥Si¢:eIÒ®yw³»»ý2ibc™Øˆèø4Ñ#þšh(Øg=8M4#= Môèí¦‰êÜ|S@?K‰œ{m‚SZ7 Ä® ûR5ÊX4QUeÈÞ—¯j—ò‹íM”·†Êö„œ¯kg´*CŸí樫ܕ˟41ibš[¥3“&lÓDµÃtø%ìòÜÒs47² t žI+Y…{yZs‹Rv>ÿW»$×ÒDq Š‚íúâ«c˜4á-Ÿ&zÄC û¬§‰f¤¤‰½Ý4±+KÉ£«gGžt] otÚ'•ÆjŽZU!¢Q[tÕc¤»…½+:pwìtš¨Y îûÊ'MLšŠ&t)ò(°CÕ."Mö\ûG[}ƒ—µÍ.œKº.Ù4ð˜&RÍ-¼RÐÕ.†ki¢:¥²qB¾8š·°ýeb#¢ãÓDøch¢¡`Ÿõà4ÑŒô€4Ñ£·›&ªs¡Ã)”O¾7AåëÑ&Mì“*a,šØ¥^Ô6ù¤i¢¾µýB³Ckt®,[ë±Q²éñP²Ko/JîËR‘Îݘ‚zt‰òÖl¿G¬ÐP4¹ªÂr6úêáÁªêS¦ÉÑI×ÑäêQ 1øÊæÖԤɑh²ü.)bÆøq±¼¯þôû¥Rüë`š¬ÏÅcÎîø!ȉÎ-\êdäù%–LX*6•Æ5WÁ¥4Yj”Ðþ¹ÚÉ,éå/Ÿ'zÄà û¬ç‰f¤ä‰½Ý<±:GLøD–Ò˜Ï=èFy±Dýø Û*!A™ŸŠc5/¼©Ï)eõÕ'~1š¨om9Ѿœr³ƒ i¢x´%(EWÁÝí×I“& ‰XiÊÆO“&ªÝ}øƒh"®” Šâ³>—&ÊÍ y| ¡Œà@ŒÚž ¡æ ”Ki¢Š³q»NùjÌ“&¼eb#¢ãÓDøch¢¡`Ÿõà4ÑŒô€4Ñ£·›&V碟¥Â¹4\æ” šX%è“R9EUi$_=}\ÌþÓ¦‰}ѹŽ&ªG¶õ˜¯Ldžt›41MØï’l™N‚Ík3«]=š&ÊsY -Þø!~P3òØk3¶42dyLh#˜¡œæn+­v1à¥4Q’‚ÓYñffóBw™Øˆèø4Ñ#þšh(Øg=8M4#= Môèí¦‰ÕyŽÌøD–R:™&`¡Mš¨8DNÏHÍcÝ›¹©Ï6­>1WñƒÒ6Ÿ4MÔ·–²< ×µY=æ¤ì+K”'MLš‰&°|Ÿ‰R6þš4Qí] ¸>…YÐ?„¬ù̽‰R(# æ-šÈÙ°‹TÉQšËVem~Ù¡>‡ðjóKÎhi)£<”®œ_L &ð•!ÄY2rÎ/CÍ/´ó”“!Ds~©v.YŸ›˜!¤ög–jGtf‘àE²¦Ï/´”û¬âœù­vxmsÜÕ©1(·‹¬vrw=|~­Úø шèø_«zÄ󵪡`Ÿõà_«š‘ðkUÞî¯UÕyéΟH¡ zò×*]‚l¤Ý'•ÆjŽ[Uiàl*|õ^«ý®èhÀ ÷¾«G*ÿó§qÅ8¿VMšŒ&°žW‡&ŠÄãi¢Ü´N¥*ˆ7~ˆl±&Mè’‰ä1MpÁI‚WH¥Ú^»÷½KœMÔ“&¼eb#¢ãÓDøch¢¡`Ÿõà4ÑŒô€4Ñ£·›&ve)ç6Ç%ÆÂM쓊c _Ue oç~}Kz­æ¸û¢“ïN±NÅcùÑyW«Ý}±¶I“& ‰r7AB,wîš4Qìø~dDå¹¶öΖ¿¼ñc‰]äÜ{yT@ =¦ )#8D`nïÒKÍA¤—ÒDW*dqÅ%Ô8iÂ[&6":>Môˆ?†& öYNÍHH=z»i¢:gÚî²´Ú™ÂSi‚SXXtƒ&ªAÍ}©œÛ›¨ªâÓ䣉úÖ‰)Ñ?C{îu4Q<æˆäô=¸Ù˜41ib$šzß-ríšÕŽùð*å¹ZzÉfwdu¤pîIZI1ÂãvV¥unÌõØoû,jµ‹/¥‰ê”sT_á¬Aî.Ÿ&zÄC û¬§‰f¤¤‰½Ý4Q³bp’ýjwrs\L²(Ç š¨Œ&¢sÑy•*ƒÑĪ^" úê^Œ&ê[+e"~":é“Nælu\hßW–%Nš˜41MØï’rùiÆvÍÀjwÍi¢ {.—vêgm*ñÜ*1‡÷òRÍA‘’´Gzµ“‹÷&RICFBØgé*Í*î2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ9’*ƒŸ¥ÎmŽ+¹tµØ€‰ª€±tm÷•ÒhÈ«*IDù‰8sæ×‚‰]ÑÕë`¢xŒ¥jñ³x q¶3š01LØï’…€RntªvòÑ0Qž›")¢;²93aBK1@×&²`K.€±}6¯!\ U—^L슂 î*±Ñña¢Gü10ÑP°Ïzp˜hFz@˜èÑÛ »²Ÿ|m".˜s#Ù³­³IžPJ2LTUÂQSzBý«sªo­±4÷£#raE§âC´ :ûÊò]¥Ç &€ û]R2‡QÛ0Qì4çÃÏ9•çfN Á_(S>µb ,øLÏ9QX3´åÞæH_í8_Úµ:5Ô“Ønƒp³“YÑÉ[%¶":Môˆ?†& öYNÍHH=z»i¢:§@ÕÏRx2MÀ¢ ±‘ì…8…öGâ› UUé»Gè«ç/ø}Ú0QßZ4'µV» [£VÈþ"â*ÓfA§ CÁ„ý.ÍaŠQÚ0±ÚÁÑÍ&ês%Cùå–têl !y\Љà6‚S»=sµ+ot)LTq†[}q¥ûë„ o•؈èø0Ñ#þ˜h(Øg=8L4#= Lôèí†‰êœ `»ìÜM¤ž\Vu±ù}ckb—TŠq,š¨ª˜0·KÝÔLmŸ6M¬ÑÉ‘%úÑa¦ëh¢zÌœíÅ|e‰çÖĤ‰¡hÊ*KáSiÒDµ#<ú v}n*×òBöÆkÆt.MpÙ>| X°`'AW;kw&ŠÓ„ˆ¹ÝÁ|µž—&ÜUb#¢ãÃDøc`¢¡`Ÿõà0ÑŒô€0Ñ£·&VçÔù³Úѹ0¡¥Û6nÔsZ%!·{%ߤ¦Á`¢ªâ\¾úê)¿Vìõ­E¢Ê“¥À…0Q<æRjß*Zíî?íN˜˜01LØïRb`ð`¢Ø…û ÁDyn™Þ¤ÝùqµÓxj='²1Êáñ¥<¢:€-P{&,v)ÇK‹Ã®âP9*ºâ2ÜÝ}™0±±JlDt|˜è L4ì³&š‘&zôvÃDunÙž!úYŠò¹0akí%êF«‰}Ry´sNûÔç×jƒ½¾µŠ0²Ñ aÂ<Ú;¯ÉS†!Î6Ø&Æ‚ û]¦l1ÿqƒž¯þôûMJñè+Øö\2'ôÖèäÌ •E#Q‚Ç4Áe[êçÒDµ ùZšØ%NÂl5á.Ÿ&zÄC û¬§‰f¤¤‰½Ý4±/KL¹”ÜØ ‰]R5ŽÕjbŸú$/F»¢“ï*ÿžNÅ#ÌÑÙ+vhžsš41MØï’³’ª4[M¬v’ÒÑ4ÁekËÇ2òÆD=ùœSÙ¸”'e‡¤Á¹Ö&5]\Ï©Š+wìÕŽdžsrW‰ˆŽ=≆‚}ÖƒÃD3ÒÂDÞn˜¨Î%fpVq«Ètr§ ËX€[0±JŨø„T‰cU‡]U)’¶ ™ßÔË‹ÕsªoÔ+~t”/¬çTÓ‹mM쉅×ÑDõhk'”a˜·&&M EZVó$ÿÇÞ¹æJ®ê`tF‘ñ¤ç?“ ¤ÎUõÙ "ÉF§Zý£ÛŠ¿¸Âc6¸¾5QìÞÓâ.¢‰üÜÔbµÚ’“ßK‰—4œä`ÇÜ‚Àê÷Ýïv ¥‰ìT‚š5îšØí`tjN+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞaš(Îs%šFŠénGroÚòÆ$'4±KpŒü‰T›,m¢¨"uFk«§ÀßEå­9äáüƒèÄi¢xtµöCÞË[.šX41M¤ïRÕ“³Ÿ;ßþõýªÆpyv~nŒ©]cs¬¹ Ã4AEˆNg4᎑-4”&;8¸ è—Ç—¬ÞšqÎv¾m|Io–¯3jGçý¸ÝãKò(–¾Kn+c_«Uk|™j|ñ 0@"Nª×øØíàòƒ´ù¹•,Ö÷¾‹ý˜+Ç—°%28Y­ò-W" 86”&;€gW«ŠSHŸN[¿ýŒkµêd¢ÑùW«FÄ_³ZUQÐg=ùjU5Ò®Vè^­êë¥~.]¼÷-žî}÷IõÉ*v©þ²“´]ÑÑð Mn†“´Å.š,šX41M¡EÉŽP¯§‰Äì`Ò¢‰lG·®Véæ9cû¸d @jÁ!Ížª}P±K½ñ£õÇwq$Ô¨•²ÛaäEib-¢ÓÓÄøKh¢¦ Ïznš¨Gz>šÒ;J»sb³v/E~o^ž„°œì}÷Ieš«ÈÇ®JD‚Ãêý».3ÚßZb;:bÏÑDñÒ,äƒaÜeíM,š˜‰&òwCêuª'iw;&¾˜&Ês)2h»ýÄܽßH9÷à M„ÜÒ%ý;T÷&v;úyCÄ­4‘z®6$ØçámQcÑÄÉ4±ÑùibDü54QQÐg=9MT#=!MŒè¦‰â<ÍÏc}éè%ÒïÍËK,ÌÏ{û¥²Îu’vW¥"&¡­^~ÞAûߦ‰òÖ¦ 
õœ˜—v;•p5M„¼ç¡Cl¶ "·æåáù¬ì M`ié1¡Þ;’g÷&ŠÓ¨–Ô·Å™¬¼¼æ4±ÑùibDü54QQÐg=9MT#=!MŒè¦‰ì<€§:¶{)¿ù¤“o '5»¤@™‹&Š*ä *vý»h¢¼5!jc]pâƒ×íM!´•é.šX41M`>AÙ«'v;5¼š&òs…’#€VûI“`·{ïFÍ·¯b<¦ J-¸ìLx57¸Ø§gi¢ˆËÛ²Ô‡VÞDsšX‰èü41"þš¨(賞œ&ª‘ž&FôÓDqÎÌáƒ^*ê½yÊËIÞÄK*£„¤žÀýMš(ª$ý©gó¾ÔÇïªøŠNN?×vtDŸ«@¾{t‹DÖVæÀ‹&MÌDé»T‹Êü3àÏ¿¾_5WçM”ç:4jn¾ìÂ'P71~œ…-œZ08Ô³°w;‘gO:e§iLËÿÑÇ!® äÍib%¢óÓĈøkh¢¢ Ïzrš¨FzBšÑ;L»slÚî¥îÝ›°DÈxB}Rq® ä»*É׉c[=ÃwÕ ìŒÎÛ½·ÓDö(Îi’ñ2Ó•7±hb*šHßeL]¿ Tk»¼?~5MäçFul¥}ÙÔJ½0o"ïM˜T É-]€ê»(ÅŽÎÂ.N#§ïæqiP^4Ñš&V":?MŒˆ¿†&* ú¬'§‰j¤'¤‰½Ã4!ûd)ýµ{©ƒËë®Ý›pÚଦSŸT'˜‹&²*…¡‘C¾«ßUÓ©+: ¦ÏÑDñÈc[‡•7±hb*šRÓ)W¹«gaË^ÓéršÈÏECâM;å›÷&,¤w<† - 8õ½Zá³0Qœš ÖëìîvúVÏpÁÄÉ,±ÑùabDü50QQÐg=9LT#=!LŒè†‰âÜ‘C»—Š7—tRŒ†³´‰]ª'梶T“•tʪÒO†!þ@½âwÁD‰N@K7£cð^†õn˜(Ùäe ë:£SÁ„æRIòþD&²zð«a"?7MÑ%H³×ŽLïL›°ÓHK'E>,µàÌ(Ô[z±KΣ4QœJ ƒ¦ŸâÖuL+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞašè륎ªâ]zÝmiByB»TJäóT “¥MUQÌðõvBþŸ¦‰=:öYt¢ëÉi¢é ibDï0MtõRD÷&a“‡M!œÐDŸT™Œ&úÔû—]7QÞZ‚¨H;:,ñ9š(cbÜú%Ý»ê*»hb*šHßeÞ¨%‘úuÙŽíjšÈÏU2 BIÅîæ±²9©ñÑÞDHÊR šz šÒ—8Ùs4Ñ).¼W Y4q4M¬GtršMÔôYÏL­HÏFƒzÇh¢·—bÛ“°-^…Ý-gº¼®[½}ÓÞDotŸJÂþÇcÔä”ÛÊLÖI§EóÐÄþ]&&ósšøÇŽ]JûscòãÕ{þ±;:áya¶oè!D9¦‰Z0QȽoUi¶Cd|”&²Ó…d¶Ë—m.šhM+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞašèê¥ï¾ Û=×:ïí?—z´gþ›4Ñ¥ž ~MtEGøAšèRæë*ìEsÑDÈÙÕ’¯€ŽUš(vLz5MäçæÂÏT;Ÿú²#v/MAŠûñø‚¹¥'š•ªR,-ݞݛ(â˜,ÄØ'¤¼h¢5M¬Dt~š MTôYONÕHOH#z‡i¢87’èÖî¥Do¾ `ó㼉^©J“íMô©ÿªËëþ* –FòMdJÁÛsŒôý®ë&MLEé»L]ªQ«ÒD±¹œ&òs1 TóÅ^vÁùÖ¼ ÝXÙŽi‚R 6t% U¥Åî½®Ü4‘æ=0n‹ã·:¿‹&N¦‰•ˆÎO#⯡‰Š‚>ëÉi¢é ibDï0MtõR÷îM0ëNh¢O*Ú\4ѧÞÂwÑDWtô±¼‰NeÖÞÄ¢‰¹h"}—ž¾^f©ïM;`¸š&hcИZFhÍÑR§}g…X¶ÍHü,o‚6w¶|³Œ7”z®G³/êMðÛÆ—ŽèÄž_Ü…Á­±éTìð-[g/k|™`|á  <ü\mýk|)vï³ò‹Æ—ü\5hì}g»\²ïÎÕªÇ:Œ'ã ç"£»rCi²C ®V§óñ€¶8áµZÕ\†¨DtþÕªñ׬VUôYO¾ZUô„«U#z‡W«Šó4Ïv†v/upöµ«U”ÜØÙjU‘`ð©™m¿JEUTѶzSø.š(ofìÄÞŽNŒþMdŽš&cíaÜ®¼¼E“ÑCrdР # ×ÓDBñèÈí'Ûá'i%l‰ªñ˜&$·`C„ƺZ±KƒÐ£4‘œ*dâÄ¥8.šhM+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞaš(θ"¶z©,Ro¥ ѸiÔšè“u.š(ª¢¶Õ’òÖ”Æq‹íè ðs4Q<&¾#üàwcçE‹&f¢ I³tŒžo­ÒD±»ao"?—E$6ªä;æ[÷¾3ñshiBs 6 Õ+ƒþ±“øìIÚì4€šh‹ó¸*7§‰•ˆÎO#⯡‰Š‚>ëÉi¢é ibDï0M瘰݅ø¹à-M lhg5û¤úd'iwõ1†Æ‰ÌÝîÛh¢¼5¦&Ð`­ÝŽ<éT<*Ѽ¦ŸÊ„⢉E3Ñ„æ{‚@E¼^30Û™\_å#?×ɡٲcô[im‹ˆê'{–[0QÒPoéÅåYš(NUˆ]Ûâ0-šhM+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞaš(Î=v’v/åtóí¨1nN`"+ÀÄ›Jt2˜(ªÒ˜šfmõAé»`bŽç½›vtðÉ´¼âQ¢«ð»±.˜X01L¤ïÒ™£G¥*L;zK‹»&òs5õ{hÍöã~º2m‚7LNa"¦œoÞ–Fòl‡‡#á0QÄ)‹27Å¿U Y0q2K¬Dt~˜ LTôYOÕHO#z‡a¢«—¾wkB‰6“xB}Ru²´‰.õŠ_–„]ÞÚ0…G?ˆŽÁs4‘=r0õàmeî«dࢉ©h"}—ž—Ò£Ö‹|;â˯3ÊÏd&­öã(Ü›„-Âì~L^Z0@{Šñ³[Ù©‰Z„ØÇ×ÖDsšX‰èü41"þš¨(賞œ&ª‘ž&FôÓDW/â½4ÁRæd'4Ñ%y²½‰.õD_F}Ññ:u)sY{‹&¦¢‰ô]º¡s}o¢Ø¼¼d`~nTŽÚ(”Tì$ú½{D1gMH 8aTv»ô$LìN¥ln·ÅñÛ!ßdzÄZD§‡‰!ñ—ÀDMAŸõÜ0Qô|01¤w&vçJ,´{)‰÷žs’¼@…| /©l HUš &vUÜXøz©ßUü)ÇËŽè1˜(œ8~ ÌßkM-˜X0ñë0‘¾ËÔ布Ö+:ívñmÁý˜(ÏEÕ˜Zf£ýäpõ{o32¡4ÌÓDÈ-=¦PÝDÙíàÙ­‰Ýi.f‡±-NÞÎÉ.š8™&V":?MŒˆ¿†&* ú¬'§‰j¤'¤‰½Ã4Qœ;Qã"ùÝ.ú½[цt’ƒ]$„µ£ý•œæ¢‰]}¾€šê ì»¶&ö·Ñ©žbø²³çîFÝ=²«n+c]õaMLE!/ù#2Q&мU$»ˆ&òsÓë*ÖëþìàÖ´ Ì5pØÓæì‰&ê9Ø»R|”&²Ó¨ÎÚ»è:èÔœ&V":?MŒˆ¿†&* ú¬'§‰j¤'¤‰½Ã4Qœ[ 3h÷Rê÷Þ6Hƒ§„óÞ>Z•ø©æª»«J8fõ4ã—òwÑD_tÌž£‰ìÑC ZO"ÝíÀM,š˜Š&0§#sjE§bÇþ–}Mäç*³iNƒ=ç‹Ý[ÑI„ìx|¡ÒÒÙ¡^&£Ø¥®êYš(⸬j4Å9-šhO+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞaš(ΩQ±æ%Rï¥ Ž‰'ÂÙI§>©q®$ì—ú(Ô˜/ïv¬ßEå­ó]"EçÁ´‰ìÑêkþTö×Ý‹&Mü>MP>Á”Z–Ç:M;æË÷&òs• ­Ñ~’]`½woBAëÆ|ojêò={U¥œ[ºã³'Š8Œ–>‡–¸ôºJ:5§‰•ˆÎO#⯡‰Š‚>ëÉi¢é ibDï0MçÄbÁÚ½Á½ym3<)Û)';éTîøf×OÔë—têŠÓƒ'º”éÛ æ‹&ML@\ò„U¥JÅŽõò½ .w×aL=r£ý$;b»¹@lˆx¶÷-©Iq×:Md;ðð,Md§é£QiŠ þVzqÑÄÉ4±ÑùibDü54QQÐg=9MT#=!MŒè¦‰Ý¹¨5¬ÿ#òæ“NS7ç½}¾FÄ>*“ÑDQ"Y„Ôܼ÷Ÿ¦‰®è~&ŠGI€ç¡­Œq•tZ41MH.ÕDÑöþüëûuKsí«i"?7æíDÄVûñ(DwžtJóaD:_4¯`Pkäå;8:ô{#M§jés ¶¸÷kºMœL+Ÿ&FÄ_C}Ö“ÓD5ÒÒĈÞaš(Î#E°Øî¥ŒäÞë&$l(~²7Q$¤Q üƒÞ>þ¼©ôwibWïyÑVïôeyù­T-J3: o91·ÓDñÈÎÖVÆo¿Û¢‰EÐDš‚¢*Vi¢Ø]}ÝDynr#[í'ÙÁ­4AéG³|ß÷1MXjÁ‚ÖLt.vpT+ýFš(NM”ÚâÔWM§æ4±ÑùibDü54QQÐg=9MT#=!MŒè¦‰ì\ÓPC)zù^Mÿ–šN.‰&ô„&ŠÔ@1 7¥*ü¬¹ñ»4QTatб­¾‹&Ê[‹å½§vtXŸ»¼®xL¤Û¿›IX4±hb*š°1÷(ÖØ›(vrýI§ü\áÔms«×NvtkÞûF`ˆ'{ß±ôA(èõbÇÀÒDqê«‹´ÅEÒE­ib%¢óÓĈøkh¢¢ 
Ïzrš¨FzBšÑ;LÙyúíÝ]äÍYØ*¼±žíMìRÛRÓ+M¶7QT…ˆNÔ'Ç—äQZÕYŠ­“´k|™k|ñ-=ÉDZ{ßÅNßN•\4¾äçzêÏ,Öw”³]t÷ž¤ÍÈϪ|xš!F£¨ÞRšìÄàÑÕªä4ˆGŠ8^«U­eˆJDç_­ÍjUEAŸõä«UÕHO¸Z5¢wxµª§—Jƒ‚Þ|;ª$f9»Ï¨Oêlyy]êCø²Õª¾è¼íÐÝN]ÊxÕ \41Mp`“H4‘ì”øzšàD 1Ñ 4ÚO¶ÃxçIZߢ²_g„:e‰ÕåžÝôÑä»ÓôÝ( ŽiùhÍk&†Ä_5}ÖsÃD=ÒóÁÄÞQ˜x9ÆõªÞ/;Š7 ÇMù¤y§T SÁÄ®Jµuêeè«`bëd߸Uk·³÷ä·›a¢x à›ÊÂû–Ò‚‰¿é»äíࢰ?¿ÉŽôj˜(Ï nx¦Ñ~’]<º6â:˜…\õ˜&BnÁé—IàSUZì„Ýš(N‘r}ô¶8 ¸h¢9M¬Dt~š MTôYONÕHOH#z‡i¢«—B½·¹¥þ>„“ä/©˜†ÚR æJËÛU¥¡øƒ@“úwÑDWt˜Ÿ;H»{tU¡ÐVf²ÒòMLE!Í#ûÏš°þþ~“½-j_Dù¹1ÍÒ]¤Ñ~’Ñ—£Rž;ŸsBL ¸ìŽÔ«ùìv`ø(L§¢ÄÛâÞó/˜8™%V":?LŒˆ¿&* ú¬'‡‰j¤'„‰½Ã0Qœk.ýìôR~oÅ@!Û@NÎ9õIÕÀsÁDQÓ4äõò]w£¾¢# õ2ø/;~înÔâQ‚§O¿­,EgeM,˜˜ &pcÕ X=ç´Û \]ã£<5¡µæèÉüV˜°s5Öc˜ Ü€£äõªÐbwxÇ0Ñ%ŽÞJq-˜8™%V":?LŒˆ¿&* ú¬'‡‰j¤'„‰½Ã0ÑÕK½ŸÆ¼åœSRÌð&ú¤ŠÍ]ê%|W v_t"?W0°x47Ö¦²D°kgbÁÄT0Ai’CúçŸû¹þþ~Mßj8]TJ‡ˆ¤i|£ý0ås w ” !—:¦ Î-ØÄZ{ÅìYšÈNóÝäXÏdßÅ9¯­‰æ4±ÑùibDü54QQÐg=9MT#=!MŒè¦‰ž^*=áÞ«Q]ÿÇÞ¹&IŽòjxG„nh%½ÿ.Õ'²§Ó" lD'1?¦Ja^«Ìå$å5é M4 ÙOøû•»çTUA´ü›Ô¿IœõOÓD}ëHd~êß/òs¥Q[‹œWd~Òûf÷ʰ›&6M,@”)!;‰ÐM?Þìà%ú"š(Ï%~Ù…j‡l7ßsÊP“è$›sN9v”;Ey–&ª8 ~ÞÅfu—Fí.®O3⯡ GÁ˜õâ4ázzAš˜Ñ;Mµñr—¬?J¡Ñ­4¨™x£=aH!ô¥¬•¶©b Ò‰ oêßDÿÓ4QÞÚ2ÒªŸä»yGâƒ41¤,™mšØ4±MpYÍc”74üëÏï7ÛÅpuúñú\N1öÖèÙNnM‹zD±ÓŒNÒzpv¼?BW;¥g3:•F¢önùV;]̨»Lt<º>M̈¿†&cÖ‹Ó„ëéibFï4M R‘ï-fĆG³³‰1©¶XF§!õH_v6ѼC)…¼C¨ÏÑDnÑòê8öh¢*³—ÒM›&  9ç~ebFÍNáršÈÏ•ÀDÑ/²ÓìHî,f”_ïjšà=Nhéê);Ë•ª¥«‹ê£81 .Wi§tê®®3â¯Á GÁ˜õâ8ázzAœ˜Ñ;C£ß^5ϵ'81&•É1õŸýÛ81äâS:•êmä¾²$ûpbãÄR8¡‡²RüÔýùýf»—GaççfLÈ] q§ÿh)ykµ ±ÃR)kú'RîÂ)Ï”þžFµƒw•—nĉÚhb <ÕN_`6Nœ¬®3â¯Á GÁ˜õâ8ázzAœ˜Ñ;¥q ! €þ(eéÞâuIâÃYR§&•U1v¥R µŠ×5U€yÉÌ}õ_v:Qß:’åyß;¯72nljÚ"SŒÜŸÆ‰ÃŽœØ8±N¤ƒòÒ ŠŸ!¶Øåˆ¯Æ‰òÜXëìôƽlÇrk½‰tpŒ!½‡ Ë8•ä¯äAÍ.>[n¢6ªRv5úâ4î¤NÝU¢ãÑõabFü50á(³^&\O/3z§a¢6n¢Ü‰2mv7W–²$Ké&Š ùãŽÒ•j;›¨ª‘°?„彩ÝÞ:rì•iÞ±¯:Õ…2Áö;ˆI &V‚ +‹ô \˜¨vò’Lî"˜(ÏÑNòŒjG×3º8C,Q{KJÎó h6;Ö›ÈRîU’çµÔWì`Mô–‰žG—§‰)ñ—Є§`Ìzmšð=½ML饉Ö8 †þ(Åï÷_{4yÐÒ“RØMBÉÍç—*ý‘jkM4U%P>Po¿Š&ê[CþÓ%Àëšýfšh- ļÒê+#Û)b7M¬Då»Dd éïꤿþóý"^Ô©>—%ã¸Ü\í(1ÝItDV>9›@È=8æéEý2{ÕìÙêuMå/F´+.ï0ìî2Ññèú41#þšpŒY/N®§¤‰½Ó4QgÑNŠØ;²›Ã°á@> Ãn2M ¿Ëõ#Õx-šhê-QľzÑho­JÛÕ?vüÜÙDm±ÜÁ@]ed‡aošXŠ& ìÇKuħ‰j‡árš(ÏMP*Nt{v&ŽH÷œ€ˆ¤ï£°1æLrO÷÷{ª†G£°k£ÒJšwÅ•ke›&zËDÇ£ëÓÄŒøkhÂQ0f½8M¸ž^&fôNÓDi¼„¿™‘£Ù1Ý›"¼¢æóÑ>ÿš4i_ª¤Åh¢¨J ‘ûêíMñ½š&ªwJªw]ïäŽò\-ìÒ"€ è*ƒüé†M›&V¢‰XVéVª¹»'š’\Mù¹ÌhbØ]£3»5©“ÔIÌNÎ&°ü–€Lý{µÃgSĶFË%,о¸×„S›&N–‰ŽG×§‰ñ×Є£`Ìzqšp=½ MÌ覉Úx2HŒýQ*ɽåë8Ò¡ñ$»I0"êì7; kÑDQ•BÌkù@½ÙwÑDóŽ)øAØ?vé¹bØ­E–¤ûÊvÜĦ‰¥h"—Ä( àîvdtuÜD}n)ž—°ÛH)ÝIhs¦8£ ³X“:õh"Ûi\n~)êS2?ïÅo›_,¯P°ë²qôäü’[$ ê+CØ9÷ü²ÔüBGþQþÿÞþc~©v¯e™/š_Ês5ä'“ß³‹]©6tóMZ‰p¶[Ee…lŠ¡£´¬qŸ-Úå(ÐG´³|t·!®¿[5#þšÝ*GÁ˜õâ»U®§Ü­šÑ;½[54JñÍÈ©4¢³Ýª&$¦øT¤µh¢ª ™Ü>P¯ß•åãÇ;I5Iß;BÏ¥ l-f Œáƒ¿[zYmšØ4±M”³oB¥Ô¡ "f×ÓÄiûÿí?”'¹7e R)úž&8¯–G ~!×b—GqäGi¢ˆƒ#j쉃`/áÁ›&N–‰ŽG×§‰ñ×Є£`Ìzqšp=½ MÌ覉Ú8`à ÝQªü»•&JT^D²óÑ@£~ •ÓZ4QUŤ©UØìþÞùú·i¢¾5² õ'KÀ'i¢¶(1ÿ÷ÁßmߤÝ4±Mð‘G.y¤ü¸¼jGéršÈÏ5•D ½Q[LÞfz½Œ&b¢Œrrô]ËDcGhµx´šQm4B§|E³ !l˜è­®3⯠GÁ˜õâ0ázzA˜˜Ñ; c£Ý›€œHHñähbLª,–2pH=„/KòQß:FÅ”>ðÎË9þí0Q[´À ûkŒHqW3Ú0±Läï’ÐH´ÕN_â®/‚‰ò\Ž–yzý‡øõŠü GxdZÉnOš{°Pz—ªý¥ÅŽÄž=šÇIyÓDo™èxt}š˜ M8 Ƭ§ ×Ó ÒÄŒÞiš¥,éÝG¹Í¨ó£½„ kÑĘzþ²°‰!ï@x0ÉGm±Ü­êšT;Ü4±ib%šÈßeÊ7³²KÕŽìò”å¹Jš»FêõŸ¤`w^tR({UB'4‘rÖ ß .v&ðQš¨âÐ4I_œb„M½e¢ãÑõibFü54á(³^œ&\O/H3z§i¢6N~0JQ¸9lBøˆ|VΨI@‚NöÏ+-–2°ªbJâê¿‹&šwòÊA>ðÎkæÊÛi¢´˜BPë¤F¯ÊÒËjlÓĦ‰h"b%‡@ú;‰À¯?¿ßl_Æ‹h¢<7…T2âvúXîdvëE§P²|¤³(l+]½;tH‹<›3°Šc`Ö—(ìœÝu¢ãÑõqbFü58á(³^'\O/ˆ3z§qbl”²poÎÀ”Ž gõŒšT”B_*ÃbWª*¼Hˆ¨×/‹›¨o­¥Võ½#üàáDiÑ€K©®2 ;gàÆ‰µp"—#畺Õ©ÚêÕ8Qž[ê®bçªSµ‹”î¼êÄGžæð} Å œ±ËM>Õì„Ó0Q…R™"Æ®8°o:õV‰žG—‡‰)ñ—À„§`Ìzm˜ð=½LLé…‰Ö8¤ ~•»ï=›HtœœM J•¸L4U¨¥–w_}´ïJéôã ø‰wPŸ Ân-*’¦” í±&V‚‰ò]r D&nJ§j§æ/†‰ò\‰‰‘ý¸£f‡|kqÔp[ŠkÅM4UVBT´¯>áwG󎽖ˆ»›&J‹˜×bªýiãËÞꦉM ÐDþ.Å8·x]³C»:n¢L̈¿&c֋ÄëéabFï4LÔÆMAEú£”‘݃].:ÒD‘@!ÏÃÁºR)`X‹&ª*Ðr3¹¯¾,Al{k,éûŸ!E{.¥SkQòçä§tjvÌ»ÜĦ‰¥h‚eƨ‚þÑD±£ëc°ësE…%B§ÿdÚo-7‡„ÜÊ{šà܃(ÅN€T±#‹ú(M”F¥w65êK0ïâuÝe¢ãÑõibFü54á(³^œ&\O/H3z§ibh”’w9¶¯ › 8ØÎÂ&ƤêZ•°ÇÔ+~ÙÙÄwÒƒ b‡” 
@Ü4±ib%šÈße^?§ô¦Ë¯ÿ|¿Ù.^„]žkŠü"Õ.É­ac¤“l)8øÅ.¿Ñ³GU D€®8á}4Ñ_%:]&fÄ_Ž‚1ëÅaÂõô‚01£w&jãŠA¨?„ŠÜ|ÏIÌ•3˜(RBìJUÓµÒÃ6õù€Q»êóÿ~LԷƧÁþd™¢ês0Q[”Vô2~¹^·abÃÄ0!‡–Rk¢¯‘ýùýj)ånWÃDy.›Y¦‰Nÿ©q|'L@:PÂÉÙ„–‰CË©¶ Y퀞ʼnÒh¢h úâê®6Ñ]':]'fÄ_ƒŽ‚1ëÅqÂõô‚81£w'jã"üšj?v˜n.…Íó¿óÑ>i(‰?>j‹áDU•Ð²Š¾úßUm¢½µeªî„:7»ð`J§ÒbAúÊ lãÄÆ‰¥pBK<23'ªÈå7ÊsS^h‡Øí?œ'¢;o:E;*VÄM¤:¶h -fϦt*‘RWœa’M½e¢ãÑõibFü54á(³^œ&\O/H3z§i¢6.¢IûC¨ …›o:Åðìp¢J(;X€}©h-šÈª0#ñÓAâ/K[½SžÒóN¶{‰‰¹&j‹F,~­øf—Ò¾é´ib)šH‡QÙõGñ£°‹ØËvÑE4QžkÙÍýq/;ÅâQØ vÄ`Ìv†–µ¢Jèuu³9­6Á ¨Çß6Á x‡PŸœ`” óž`ö³Òceá£=‚¨vŸÝ„©šaìì7;ÜGº]ºv<ºþ&ÌŒøk6acÖ‹o¸ž^pfFïô&LiBTŽ¡;JýQ™øž¼Úép¶ c‡„üêuÍîúÀ‰ò\I’'›Þ€žíøÖÀ :bŠ)àD•`ÆÒ)†Ýìt±ò¨E•æ2±tÕkPù.œhÞI%Ⱦï×3ºÛq¢¶(j±¯ŒeŸ~oœX '„Ä’Jo‚!!Õ&˜Ü) 5uGíÜ:Ý\UÅøíüÂ!¯b`~5Žjø$x5q„tÅE ;4¯·¢ö<º’àûÝ*†Üƒ ³¹kÍ.áÒDm´Djê‹£—«_›&N–‰ŽG×§‰ñ×Є£`Ìzqšp=½ MÌ覉Úx^Å©_)áÇNàÞä%¯ÓYò&AJ–XîK}›ÍöIUUÊoèçâmv á»h¢¾µqfÀ>C{’&J‹ V‚îºÊÄ6MlšX‰& ®Òóó÷¨ùë?ßoæxõ]Úò\Íï‹ù“èõ !êÍ4‘,Úû´\ë_`Éqèž7»(æ oæiíq¢;2¯»Lt<º>M̈¿†&cÖ‹Ó„ëéibFï4MÔÆ {¹øš]²{s ¦#ØÙÙD‘ #ûWØ›]µn:5U%ûœb_}^f|MÔ·¦¼ìþ`²Ì_Ƀ4Q[Ô¼ ëœ)5»—º¶›&6M,@±Ô4ö¯Ò6»< ]Må¹IR¹EÕë?َ½9ȹDvŸœM`_jGiµ ø,MÔFù7’›Ù®hÔ]&:]Ÿ&fÄ_CŽ‚1ëÅiÂõô‚41£wš&jã‚úÙˆšèÍ4AvÀ)M4©y-é×Ðü±#Y‹&Šª²ùƈ}õ&ß•ç£y'#-ûq?v,ÏÑDm±ìWBè+ㆽib-šÈße ßž|ÿúÏ÷«Dárš(Ïe£ýÌÞÍNìVš(¹DbF«÷4A¥#‘¿±VíÓ£4ÑÄ)Ä@}qˆ›&ºËDÇ£ëÓÄŒøkhÂQ0f½8M¸ž^&fôNÓDk<™Ñ'£”Ø­4Ñ–˜h Ì4òJÓb0QUqÔL}õy>û.˜ò‡&j‹Æd~Nœf§/56Ll˜X&¨9 pàÂDµ‹‰®†‰ò\ ä—^hv¬·†M„£lÙ½G Îý7±EêMpíç”E‰q)ÿ¥7JôÖˆŽG×G‰ñ× „£`Ìzq”p=½ JÌèF‰¡Q ¢Þ‹GJ'¹¯¥ÒZùaÕ§ô],1äøRôäv–RF²&6K,Å\JÐû!ØÕNäòƒ‰òÜ<õ$K©×TìÖŒ˜MH)½§ ©=B/(¯ÚE~´8jkT ¨/N^2Alš8Y&:]Ÿ&fÄ_CŽ‚1ëÅiÂõô‚41£wš&Zã‚b ¡zsqT²<§$:¡‰*!•úuúT kÑDQe!kঃÄ_–ÐiÈ;ÆÏ•3jʱÿÕ†ÐiÓÄR4‘¿K-m…NB§b…/§‰ò\áüÎÐ`THùNšÐC!Ó‚¾§ =ˆB€ É¿Ð¨u R~”&FÄóÁî.®O3⯡ GÁ˜õâ4ázzAš˜Ñ;MC£”Ä{¯9‰¤#‚ÐĘT^Œ&†Ô~YvõNÙšì{‡¼”‡¹&†”!ï{N›&–¢ ­ 8¸Å&šÊå÷œÊs-Šhçmµ{w&{M”K´ˆ§ RîÁ”¨?Xì2’<41$îµfǦ‰“e¢ãÑõibFü54á(³^œ&\O/H3z§ibh”JtsÐD²ñì¦Ó˜ÔÅJש·ø]•°½£Ï•®kÊJ…K‰]e_BH7MlšX€&R mΠ ~vµ£—Œ}ÑDy®F‹ØIºZíàݨ}ÝÙ„±”½÷4a¹cö'«ØÅ`ÏžM ‰£´K×u—‰ŽG×§‰ñ×Є£`Ìzqšp=½ MÌ覉¡QêM=¸ko:‰”›­'41&U;›Rÿu¥ëƼ#žM )³7MlšX‰&òw©€Q|š°Z"…«i¢<7¦Ü5´Û4˽g)…¨ñ MĬ,÷`Ž%ÄÃɶðc‡IŒ›Gò2 mšx·Lô=º8MLŠ¿€&|cÖ+ÓDÏÓ«ÑĤÞ9š¥R¼•&RäÃìmÜĨT +ÂVÿUF½“ŸûM *ã°3:mšXˆ&ÚwiÁò—ÉNF§ßv˜øRšøynn£wôcG†wÒ‡#FKïiJ† è3úmðYš¨R!ë‹£—ZE›&N–‰ŽG×§‰ñ×Є£`Ìzqšp=½ MÌ覉ڸ„`^Eµßv¬÷ætb†UNh¢I%ÕD}©kÂþ­Ê¸\®ï«ÿ®›N?o-½Ü¿ÿoñ9š¨-bÙÒü@Y´ÓiÓÄR4Q LCˆÿÇÞÙ3O’Ûfü«\vvÒE€—9·×FäÒÙÞ*[ºÒIªò·7ÈþŸ4Ú&‡bw/}à 6ØÂ6ŸÆ4Iüø@L\¥‰\Ûì†O¢‰Ò¾¨J»ÿ¸Ñ•{þ‹Žö&pË«8Þ×±§ØE[a¢4š"RËÅNL´¢ÄŠG燉ñçÀDEAŸõä0Qõô„01¢w&Jã>Ò'Óö(•.¾6A¶Àx]R'ƒ‰¬J8¾2WiÒ÷‚‰ïh0½&º”QZ×&LL˜¯C$ˆ‘  ÅNøô­‰üÜRl"6ûÛa¸ø ¡"<§‰è=؈¸°g;¾wk¢KœCᢉV˜Xñèü41"þš¨(賞œ&ªžž&FôÓDÏ(eèâr¶¹Ìšè“ú¤’ô7¥‰.õðV)º½c7nMx‹ìÑqˆmeöpûqÑÄ¢‰ h"×—ö::èVi"Û%µÓ·&òs½Ç2Y³ÿ¸]¤kK×qW¤#š0ÿ]‘µ¡ÔíÄ`¶ùÅÜÉ ¤©ž#Á»Í/f@H í5ºs~ÉÅx)’¦¦²lÍ/k~™j~¡-Ä`ÀZKòñ‹›œ=¿äçbRýh¶µ‹ÇüiÊóù…<’ŸˆQ¤¡ÔíäÖÕªÒh#‚¶¸Di­Vµ–!*µjDü9«U}Ö“¯VU==ájÕˆÞáÕªÜ8úpš£Tž{/]­òxlÔƒÕª"˜±-âd«UEƒ¸²¶zz’>ýWMå­E°Zªê/v!ÝG¹ÅóIsn*‹`ëZÞ¢‰Éh‚Œ-ñÔ ‰|}.ðù4A&‘°EnåÊ”Dú"ôœ&8÷`æ\¢µª4Ûa t+MäFÉaÏb[ùü±h¢&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ5JEäKi6‰„!ö¯K%‹&úÔë›í}wy‡ªj]N¹EÎg)ÄÚÊDM,š˜Š&J¹iP Iª4‘í¢Q<›&òs% ¤Ôì?ʯE{/˜(omQRã¸ÝîŇĕ—ÃDnQ1½cÈã†Ø‚‰À„—œèÉìòé‹ï× "ž ²q9Z«ÿ¸]TºvkB£¸˜ç4‘r¡‘g(•1(ÝK¹ÑÀ’¤).…‡ÉoÑÄA˜Xñèü41"þš¨(賞œ&ªžž&FôÓDi<ªÅÆù›Ý.Ä‹·&ˆBP<íó¬õ…•žÔú¦4QTù„£¯8úYæ¬_5M”·69‚µ½£éÆky¹E¥YÛ¿›{fe \41Møw©Êôë”°Ÿ¾ø~UÉN§‰”)Åg‰$Íþc ¨WnMІ!A<¸6¡[ ‹b¨sO¶SÀ¯“|äýËùåO?ýç~óÛË\ø÷qD‘•«±6RÍîòå¯|h:Ë3Ì#ãÇ?}þïÏÌãÎï~ûùe")!ÆwÿðÇÿýéǾÿ׿þÏ ùã»ÿg×ù‡Ï>>}ïCæo~þýï~øþÀÀ£×ï}Ìûùgï?|ÿOŸö‡eøä–ÿõù§ïþüù7_ýŒ¿ÿå¿÷Ó~ü³Õ?ÿò?oßýKéNÞðÿ¸Ùƒé¿ÿøþñå—ñAõOÞÈöý?ûQÓ‘ã¿ý™R>^à‡}X8lÝgÿWZ§'ÉÔ(=é_Wްb%*=àE‘Š<ÙÅÔ¢^‚‡©íâ' ã¿î«Ï;w&>ð9„ A­­Ì³E¯kEXß>ÂÊß/ˆ¾Q^5Ó½¤³8 œØšâØ'½US½¹¼Tñèü«#âÏY…¬(賞|²êé W!Gô¯BvRfzmÝ’-aÞR:Œ‘“JÌdª<]ì8œž†ÀŸ«,¾JÓUê‘®äiÙ0:{êu¥Ì4M¼®^bx;šxÝ;‰ñVšÐ”˜¶‡xؽ^4±hb š€ê3mÌ/nÇSáió Dg™ÍUPˆ|mš›\?,.V‘ åÐ{ChÞŒx3våý,÷a²¶8}…vÆÓ‡ýÿ€]¿ø³°ëPAŸõôØUñô”Øõ÷ë=»ˆó~—4RÎîváÚ,7¬² •‹Ü%$¢@/H•é¶&\0R3\ÎvïþÖÈB{²T t'Lx‹ì?È+¿éÚšX01L 
CjÀ„Ûº&ò¡Ž\ѽ9Á¸Ý“\ô'.VÅ-Ÿ°ŒÅçmË Ô²ØzOÏv>Ü»‰“`„€MqL«^d3L¬xt~šMTôYONUOOH#z‡i¢4žÀZ%w‹ÄkëEâÉâQ‚"AÉÁ'´¥&œì(yQe”D©­^íͲÜä·¦Ày«íK7nMeÂŒ/ünV,M,š˜€&ü»´ˆÌò5§úâûµõNO¢‰ü\bÓØž`r‰'»¶Â‹2=ß›€à=XØg¨®h;ŒwÒDŸ8cY4Ñkž&†ÄŸB5}ÖsÓDÝÓóÑÄÞQšè¥8(^›?Áp¿O*ð\4ѧŸ(û5ÓDŸw(ÜW}¾O™‘,šX41MøwÉùÞD¾ô]£‰bGôPGðšÈÏÂP£ÿ¸]ˆ—fà×MìàZ@îÁQƒjuEãÃnMs³7*"©^àÃŽM4ÃÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&Jã) Ò¥]œ4“`:¨>¿KÐdbÖ–ªÏ*Ã|Kš(ª,¯a[½É{U.o-•#5½#áá3¼œ&J‹±äÖi+‹‹&MÌDþ]ZòùÉæó§/¾_K gŸtòç–¤™ õºß¥}“¤×žtR`fžÓ挊œêJ‹Э'öF9$JÚ÷x©cÑÄA˜Xñèü41"þš¨(賞œ&ªžž&FôÓÄÞxì_¥8À¥4SØ ÀDŸR sÁDQ%(ˆ/LUÌo¶5±{‡R¬gàÿ°‹÷•óÚ[4ÿåø…ßMÍL,˜˜ &°éQ‘ê[Å.ðÙ9>ü¹>i ¤„¡ÑÜ.`¸6g&ù<û&¢÷à$‰8ÔÇ bÇéÖKØ{£¦þÙ@[œ=Ü>Y0q%V<:?LŒˆ?&* ú¬'‡‰ª§'„‰½Ã0‘×@ ÒB5<$”ºdkBb.÷~@»V©çBÿ°‹“t*ª€Ðg«Ôë{ÕóÚß%4ª}xñÆü°{‹¤ ¿.šX41MøwihJòu=­O_|¿†ª§tÊÏ¥´‘×~·c¾ò æšaŽU4á¬!Ðü¯ªÒl§&|+Mq‘Gkгˆ+¥S3L¬xt~šMTôYONUOOH#z‡i¢4N1µdv;¸6“.%Ý4ÑÄ.!ùd_*q.š(ªr aÜVÏo–ÒikqØ’ÔöŽ„û.aï-‡/(³UÏkÑÄ\4A[¾„‰ê—°‹bà³iŸ£2YcG9Û¡\J7ÈI<žÃo=„”¤\† g9å/„‰"ÎBÞ½m‰srKëÖD3J¬xt~˜LTôYOUOO#z‡a"7þWL‹]Ы‹GcKǃ=DŠ¥­Ÿœú¦,QT¥|HGÛêÅÞìÒD~kÌYRê•ëv»ððÛ^ιÅüÑ5jMìv1®K‹%¦b ÞÌT1ÖY¢Øéù—&ü¹S«ÿ¸Û•éa9l1q ñùü"e b‘zŠêÝøÞK¥Q&r!mq”xÁD+J¬xt~˜LTôYOUOO#z‡a¢4.ÈF±=JñÕW°sý/<º‚Ý'Uæª\×§þqíý-h¢¼µr !¼àº1¡Sn1Bà€©­ÌBX4±hb&š¼ãà!2|]BáÓß~¿92Ÿ6?7b¾Ž­='I’+w&Øò΄7òœ&RééùS} Úí½ J£‰YË-»]\[Í0±âÑùibDü94QQÐg=9MT==!MŒè¦‰Ò¸10c{”2¼öքư%£šÈrAŸ‹_ª“íMìêSŽ"šê)¼ÛìòÖ@wkÛ;€7ÒDi1W8‰ÜVæóø¢‰E3ÑDÚ„ÙÇC¨ÓÄnNß›ÈÏÝëµÂàlôÚbÎ4†{ê=˜=ÐæÆ.J¶û›zwÐD縇õJKvaÑD3L¬xt~šMTôYONUOOH#z‡i¢o”J×›ˆ>ÞÓQ­‰¢ ‚šÁ JU炉]= 6RívøfW°Ë[“ ¼àt#L”“$¨W®Ûí$®­‰SÁ„æDÑ„úA§b ÉN‚‰ü\´\³³ÕÜ.¾²Ö„®žŽ`ÂA# ¶Ž¢f»|ùb¶ù%«W¶dMõþ¿õÝæk`heþýð¢Ü9¿x‹ 16ã·£u)oÍ/sÍ/¶°!Q}±ªØ1¾XåÏE h–êü’íÐ]¹X[È÷î¶¾Í#DAI¬ÚPêvî=H›M”#Ƕ8 ¶«Z«οX5"þœÅªŠ‚>ëÉ«ªžžp±jDïðbUß(•®=H•768X­ê“j<Mt©ç$ïE]Þ¼1ÅG—2M+ýø¢‰Éh‚°ÆðuŒüMd;„ói‚CDfPiô·“kæ,KéyeT ¹§K P?«Zì„9ÜI¥Q !ÄúÁý%,Á¢‰F˜Xóèô41$þš¨)賞›&êžž&†ôŽÒÄÞ8h\þ©áâL„G{ò©)¶¥BœkobW…)j}w÷ÃîÍ®åíoMˆú‰»]|H’|5Mì-&$¢ö4®¢+ýø¢‰™h¿K,ð,Qé§¿ý~Ýîñ ø94Qž›X0q«gg;¼2ÉÚ½‹$‘B(=Ø Ôw?ìb¼•&¼Q 9YJ=‹{g ë m3L¬xt~šMTôYONUOOH#z‡i¢4ÎAš£”2¸˜&4 Žöê¨a±žÀûÃ.È\4QT©Æ¨ØVïSï{ÑDykSA|á·µ‡.—ÓDn8Qãô‡ÝýÕE‹&&  ð(]̈°N»]¤³i"?WIHBjôŸl¯<é„iSTIø|~AïÁ1RÀXïéÙo½–×%.YÅŒšabÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£Àµ4á@µ!œtꔊsÝËëTŸè½h¢Ë;È÷Ý›Ø[äÀZO¸Ûy̶hbÑÄL4çk?)pªÒD± ñìbFå¹"`POºÛÅ`WÞË‹$Ñ£½ïXÆ dõ½ïb‡÷Þ›Øe‹¡~ýäÃîáÖ¢‰ƒ0±âÑùibDü94QQÐg=9MT==!MŒè¦‰Ò¸pÔF·ÛAº69Ê–âAòN©q²½‰¢*¡yÌü‚z}¯[Øû[{ÀNõÒ¨^äûîM” ¸©Œ¬,‹&¦¢‰˜×ü Ð0Ti¢ØI:;y~.Ä|ù¨ÑÜ.@ºvo"¦Èppo‚rÎ…Šb&ŠÝJ¹Q– ÚǜҢ‰V˜Xñèü41"þš¨(賞œ&ªžž&FôÓDi\siRJW×FÅȇkGE‚‰’Z[ªÑ\ È‹*É)©”›êäÍh¢¼uÎ’Ü8¼Û=$¸œ&J‹N: ÔVæ4´hbÑÄL4A¥k^Å€X¥‰bÇÑüI4áÏÅZ1ºÛ½4¹ÿhè¬pp’–½û·¨¯Vqƒ€o¥‰qŠ5ÌM„‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0MtR‘®-g„[`cì5¦É¶&ºÔÓ»]›èòÇtLäÍûAh*³ ë Ó‚‰©`‚s½:¬ÂD±‘³a"?7RDkô·ÓkS:ÙMDõùü"¹;ti=yÛn÷¤ÀÅ¥0Ñ%NÂJéÔŒ+&FÄŸ}Ö“ÃDÕÓÂĈÞa˜è¥ìÚjF¶)tê’ªBsÑÄ®Þ@WÈw;y³kþÖ˜¶¼cÁ­î£‰Ü"DhlMeøpÂ`ÑÄ¢‰ hBò’Â|R¯JÅŽϦ‰ü\d¨­QÛíH®<èqKÓÑÖDÊ=XòÕ‘zO/vœî¥‰Ü(„±žg÷Ãn•›h‡‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0M”ÆQ ¶G)d»”&ÄÈG¬£KØE“¢…¶Ô(“t*ªR0hÐD±“'¥÷~Õ4QÞZ£jÛ;Šá>šÈ-bâ\r¢© å1ÙÔ¢‰Eßž&ÒÆ1Yða³Z¼®Ø‰éé4‘rÒo£|Û©ÑÜŽðJš`ÞS8(ŽŠê=˜s–ÚTïéÅ.>[X»&J£)r$h‹{\õ[4q&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Q×T_B} ½¸x]؈Žh¢KªÆÉö&Š*cÀƬº«·÷*^WÞZ<åFQ‡Ý‹w^›(Ê"˜²6• Ø¢‰ESÑ„æTMd©¾7QìôaQû$šÈÏMN È­QÛíâµ{°q. 
sp’Ö¼[Þm¬÷d»½×&ºÄE[—°›abÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£]\¼ŽlÆ|@]RùYªÁoI]êEã{ÑD—w¡öršèQ¦áá¾Ë¢‰EЄmì,‘„ª4áv˜?ô³i"·19¨ÄFÿq»gÉ'N¤ ÞÄ)@å)MÄPzpÎ+W]Ñ(vÉäVšØÅ%‘Ûâôñ Á¢‰çabÍ£ÓÓÄøSh¢¦ Ïznš¨{z>šÒ;J}£TÄki"&ÞPö&:¥NEêõ½na÷y‡J‰\M}ÊM,š˜‰&ü»tš {Åοñ³ËMìí‹YLÚè?ÙîÉ Ïi"nhh0yÙ@•ªg‰v;Ž·Ö®Ë»*-$jx1Û‰ëv3J¬xt~˜LTôYOUOO#z‡a¢k”R¹:?l$ðIöh´ï‘ªs]›èRïÑ'¾Ltyç±ÂÕå0Ñ¥L`-˜X0ñía¿˨̧W¯M; |v~ؽ}G ¬×~Üí\Y ;æJØ!ÄçÓ zÎë¨õ޾Û=«»t!LäFö˜ëÇdw;Lkg¢%V<:?LŒˆ?&* ú¬'‡‰ª§'„‰½Ã0±7®Œ`íQ*ʵéaIpƒH;Ea.Ö–J0×­‰]UÎç^/³ú¡ÞÞëœÓ‡w´u¶úÃî¡Løå0QZÌu%ñejkgbÁÄT0áß%‹ÿU-„]ì"ÑÙ¥ëÊsSð†fìéÊBضŸÓD.Þ—<·zF§bç#ù­¥ëúÄ)颉V˜Xñèü41"þš¨(賞œ&ªžž&FôÓDß(eßÁ6ÜäèvŸTƒ¹ªMt©—ovΩÏ;|ã9§.e¨+?좉©h"æ%‰9ev•&ŠÓÙw°ós)¨ÕSîvtíÖå¬kèˆ&ÌPС 5”º'˜m~Ééð\R[}’·›_ü­ÀHÚÞ1¼u~É9ïi³d»‡ÚÁk~YóËó m=Üöa§z+o·£xú9Úü\ I<úªöŸlféÚjF$)>Oè@‚Ñœ¿&ÝŠMŸ0ô‘àx ¨Žbž$ÈMì†còÜ.›«‹;ô"‘‰ž½ŸY:´Ì:|Ä m¨sǸٸmi)ï¼`U$˜çeò¨ôÍù›êMª½Ú»çbá"œÊ•så@Òo ñè +‘ŠewÜÔEàÐ-UãÌã¾­ÿTø>às*|AYëÊO…ZºÂSáûàÝûTx™—"7í…iÄMê9^4bmz§=¹07½Sb<ªÞ,Æ­dí¢w½S…Þ!ýe¾ðÏÊ$h;§\ù’&¿H}r§f˜ëº7ÂÁ+ãVvª£ÀônnXbcž…+B–Ür `¡ÀÊ(P8Ÿ ã‘¶Rœt¤JûH°(Í•E#mĬE‡#òÜÎï¸6{Òœ_ ¸à·Ôô’óëIæ X´þœß>à“ó@PÖºòœß ¥+Ìùíƒwïœ_‘— ~ÚJàS£AkOÖ¯ jLuIž"ôÛ[Æf!yʬ³]¼{jÉS„Œ`Éú-’§*É#M>m,âó—uŠ¿¾Xô» 4̱g£CTJó=I x1 fvíà¸åïÛNIãOñrÙ¶ÍX´~ѳøÃˆže­+=ƒ–®PôìƒwoÑÓvÎÀöR¦=Ñ¥|‘ièi!ˆhÀ6Táʶv·¨’d»v0³óMyÔùº¤dTìÚáËß·=‚§`æoÛ^DÏ"zª=y^Fd’½9#NZî)ò ÚåM¯{”GFiÍCt ‚>Nö8X+¢k•í÷ëPQ¾;o3Ê4¯Jëˆ~1ÛÖ!<^¹§®G}"#žo-à-$¸àOO‚í¼ ‡÷«v퀦Ìüù&%D¿»ÜSPܱ!z&$¨±¶N!0¬Dys¾3ïWŽãrí¥•Ð²hõy¿½À$ï7„ ¬uÝy¿aK×—÷Û ï¾y¿®ó”OEÛKIœx³SCìwçý6P£Ú‰m¨)øÚ$ù69,k7èÅÍMòŒ·3ï×õˆ˜/°‘/’g‘<•I œ/*°¼¶s~JÉÇÄ=§œ ‘î¨> Íã[O)Lq8ñ×¶s|Ô¢]§ íœ Íc³­_óìþ0šgAYëÊ5Ï ¥+Ô<ûàÝ[ó´#çÍV¶—Â&.ê `ä)CJuÝqÜ¡" Èd£Ÿ[U‡nÔ18KÎvÖáãíïîzLè]Œ62‰K=óEòT%yò¼"7ç¯w“ÞlåëQ<-€¨Á<àižeì0¨ÞóN•¡V<¡u[ñ¨»»‹À©³çEñX¡ì€EëW<û€?Œâ@PÖºrÅ3hé Ï>x÷Võ’ £†š£ ÔólÓ¡j ”Œz'm;¬ Ð'7;,±ÎVÒæ$=4“ð1טX¬Œ£ýf‰Û•?; ‚ãD½JPè‹–<Û@}’Ù’`Q±À––Òv P ¦ä%D½ªÝÙ‘ Z'o½Š¦u’Ûª~z T…J‚#ž[‚°àB‚•‘ »ô÷Ñâm'4åÎlªkã>Ì@Õ8±&7[”¨JÁ§h¹#mäj#AE¥"{}œßöP5cdsU<·wL”ˆ^H$šÈÐ/W¸.$X JŒÎååjsþRbœtcŒz6ÁØG‚¨"r¶ƒŒŽfZâGç(€ðpÝжS¨‹3ª¼¤éMôìü¼î1ïFíƒ÷†ìÚm%‹''Á¶Gô#ÙÈ`kMe!Á…+ ÁoÑó&¸´})ã’÷ëIè X´þ¼ß>à“÷@PÖºò¼ß ¥+Ìûíƒwï¼_‘—ò¦Íû%h©'ïWc]’§E8†èef\´£ÇddE;+ò%OÛ# 'ñÜÈ/y¿EòT%y(ßÞù`zÍàó¤’'ŸØòþ¦Ï½vÕ­œ…ä‰ 8$5¢q'·ƒôe5†‡§l§à”±÷áãSë‚.T ÎŒ};ë@L¶uýñØ7÷˜‹{“÷&2mÓ[Œ8Oú§»û{}~ݯûzƒè\Š#zÛ:˜¸pýÂõp}Ñü…íÒå_¾-1ŸVì„ï£2EË}ýQd/Áî^ïNTËh?gWí ¢5æëkô]º½È¿x÷ ?^Ý®wæSÊú~}a fþïw××ê¤3#µopN,d«?|î|ÿ¾‰­qSñ¸…œQ4{ƒ‘¬që“ÎqŒóØ~_W§y9f,³¢úÝ·Nc̯³öÈG÷þº8ÝË‘Ãd#Gëi¼ÿpv±nÇ~°1k\v„™I{þþãmkŸÖyúL|¿Z½[¿¿º½Í£úñîDcÎ0(²s {¯[Ó†¥Y]Ý]ìo2'c(~$Œ8ªïˆ»LC{½ Ñ3 ±0Ô>˜DŸC”ËõYF7qs µ¾Þ·k _ÈüéÉÝíÉæO×w‘Z~–y6=ì -ºí3›Ï†@_ú䇯ÑîÖ?\­| Ğǘ’ý°Jß^åRÕ¼¶ÏŒÈaÒú›Â_ô3(J;p~~}•ÁCÖ ×W7WyTÙÔªtâÝfW}»Zç¯=ýìFÎnÏ5غØ{듨ãÑ“ÌO¢Ž·ŽÐq%j àmñy9¾ˆÆúD£@V¶˜Q@“ÞÔÄ„ÜW³JƒÈnôZDeò·mð«/™:ÅË»‡›ÕåU^öϸ–É`Aìd°ÀëBZ/ÔÁËgÖjC3¼@ÈŠÃ}YpåóÕ9ê{_öýÇ}{[à>ï± îÈ+|šg?¬WWïo»°÷ye¿› —jœ§«_¯GÎ<nÏnÖ÷*kßlMÿvâœÜœÝª_¸ØÌ£›³ûÇ7Ÿ»~þôðó¥Ïdß¼2 X¹ƒg¼9R°eA߯ï9!Ï/uÊfÏÆå~»‰rúúŽª9Ù}Ó.ÉöB™Üœ]Ý~«ìqî;­@ ŪW¡ša“G¿µáñf­„|ÞE'¿{÷Éê¿~ÿûï~×îk!Â<©Î öâ‹'•k»@€ÓWõׯRÏÑJ]ù'd¦ÓÕ?ýãã/þE#Šía¯*>Êë“F/ÙÄGýç(ålm·¾îÞ¶[ÞÓnÒéöµ´¾º¾;ÿs!¥Ü=\ä¸ü½–Ç ¥ìö"/H´íòÝLà7åƒðf^YçÈŽ9,ôyý%[PGþ«ö1~zûfNR±Áž9)zo¾¸ÝÃ+Iô¿¥‹×?¨ŽhûÏ/ÏÛüÓ/4žýöÝÿ*Žï×—:TÆþí¿¯n/N¿úM~v½ýöùAŸöGK›f§_}þùßü‡†›ç,ú|NÎàRNP8¼ƒtqP.ÖÄ1mûoß}ó)BÝ=Ý´Ñ÷ëÇ»ªÆ>µ‰Y£]­¯/¾;{úCÉ_ž®žþz¿>]}ý[ £Î®¿Ö tö˜eý׿i­üvóŠ|½Òéþ?ëwçþ.Oðòr}‚É­Oäâ2ž¸Ëˆçï4ÄuÒòÈö é}"1¹”F<ò¶;-Œ9‚÷Á±3—#ƒw‰m"ùÂe™“Q;ç¤ÏÒîÜÓ& ­Pý¼C±ÉÕ¼ ²þá#&tÎF€;3|¶ßÍê«NOµòQ)B®‹Í½kÞÇK"X§æìñq}óîú¯*ÉõÙM³¾N'ÏŸýÃÿj&\õl±././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015073043234033165 5ustar 
zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015073043234033165 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000017701315073043234033200 0ustar zuulzuul2025-08-13T19:59:46.011585192+00:00 stderr F I0813 19:59:45.901523 1 plugin.go:44] Starting Prometheus metrics endpoint server 2025-08-13T19:59:46.064059488+00:00 stderr F I0813 19:59:46.054048 1 plugin.go:47] Starting new HostPathDriver, config: {kubevirt.io.hostpath-provisioner unix:///csi/csi.sock crc map[] latest } 2025-08-13T19:59:54.824698355+00:00 stderr F I0813 19:59:54.823425 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.836695 1 hostpath.go:88] name: local, dataDir: /csi-data-dir 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.837369 1 hostpath.go:107] Driver: kubevirt.io.hostpath-provisioner, version: latest 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.839745 1 server.go:194] Starting domain socket: unix///csi/csi.sock 2025-08-13T19:59:54.846271330+00:00 stderr F I0813 19:59:54.843974 1 server.go:89] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"} 2025-08-13T19:59:56.223891950+00:00 stderr F I0813 19:59:56.210090 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T19:59:56.498880609+00:00 stderr F I0813 19:59:56.484339 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T19:59:58.205910298+00:00 stderr F I0813 19:59:58.122081 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T19:59:58.239965509+00:00 stderr F I0813 19:59:58.206037 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:06.991021332+00:00 stderr F I0813 20:00:06.970088 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T20:00:07.168862783+00:00 stderr F I0813 20:00:07.147043 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-08-13T20:00:07.353057946+00:00 stderr F I0813 20:00:07.320948 1 server.go:104] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2025-08-13T20:00:07.589521437+00:00 stderr F I0813 20:00:07.589477 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:10.866698493+00:00 stderr F I0813 20:00:10.847387 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:00:11.139147861+00:00 stderr F I0813 20:00:11.038993 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543271 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543509 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 
TargetPath:/var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543770 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-08-13T20:00:26.845955495+00:00 stderr F I0813 20:00:26.834162 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:26.990584009+00:00 stderr F I0813 20:00:26.989183 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.045191487+00:00 stderr F I0813 20:00:27.044369 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.073488273+00:00 stderr F I0813 20:00:27.073376 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.110229281+00:00 stderr F I0813 20:00:27.109569 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-08-13T20:00:27.132535977+00:00 stderr F I0813 20:00:27.110290 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-7cbd5666ff-bbfrf csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:42b6a393-6194-4620-bf8f-7e4b6cbe5679 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:27.701925913+00:00 stderr F I0813 20:00:27.692296 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.744057214+00:00 stderr F I0813 20:00:27.729731 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.744057214+00:00 stderr F I0813 20:00:27.738204 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.813944467+00:00 stderr F I0813 20:00:27.813337 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-08-13T20:00:27.813944467+00:00 stderr F I0813 20:00:27.813370 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75779c45fd-v2j2v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry 
storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:54.807024101+00:00 stderr F I0813 20:00:54.798046 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:54.807024101+00:00 stderr F I0813 20:00:54.805210 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:00:54.824138009+00:00 stderr F I0813 20:00:54.807706 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899397 1 healthcheck.go:84] fs available: 62292721664, total capacity: 85294297088, percentage available: 73.03, number of free inodes: 41533206 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899696 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899712 1 nodeserver.go:330] Capacity: 85294297088 Used: 23001575424 Available: 62292721664 Inodes: 41680368 Free inodes: 41533206 Used inodes: 147162 2025-08-13T20:00:56.656209940+00:00 stderr F I0813 20:00:56.597000 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:01:10.384473536+00:00 stderr F I0813 20:01:10.384126 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:01:10.516433778+00:00 stderr F I0813 20:01:10.490297 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:01:10.516433778+00:00 stderr F I0813 20:01:10.490338 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:01:10.725402807+00:00 stderr F I0813 20:01:10.539162 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:01:10.742578247+00:00 stderr F I0813 20:01:10.725588 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012392 1 healthcheck.go:84] fs available: 61630750720, total capacity: 85294297088, percentage available: 72.26, number of free inodes: 41533186 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012581 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012604 1 nodeserver.go:330] Capacity: 85294297088 Used: 23663546368 Available: 61630750720 Inodes: 41680368 Free inodes: 41533186 Used inodes: 147182 2025-08-13T20:01:56.740528286+00:00 stderr F I0813 20:01:56.740322 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:02:10.485513699+00:00 stderr F I0813 20:02:10.485234 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:02:10.485513699+00:00 stderr F I0813 20:02:10.485341 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:02:31.814900262+00:00 stderr F I0813 20:02:31.814574 1 
server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:02:31.816711174+00:00 stderr F I0813 20:02:31.816638 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:02:31.816934930+00:00 stderr F I0813 20:02:31.816735 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825011 1 healthcheck.go:84] fs available: 57135521792, total capacity: 85294297088, percentage available: 66.99, number of free inodes: 41516178 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825038 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825050 1 nodeserver.go:330] Capacity: 85294297088 Used: 28158775296 Available: 57135521792 Inodes: 41680368 Free inodes: 41516178 Used inodes: 164190 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225329 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225397 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 TargetPath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225433 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-08-13T20:02:56.756340071+00:00 stderr F I0813 20:02:56.756230 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:03:10.486762909+00:00 stderr F I0813 20:03:10.486679 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:10.487170981+00:00 stderr F I0813 20:03:10.487152 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:11.493869339+00:00 stderr F I0813 20:03:11.493707 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:11.493869339+00:00 stderr F I0813 20:03:11.493746 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:13.499529645+00:00 stderr F I0813 20:03:13.499420 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:13.499529645+00:00 stderr F I0813 20:03:13.499457 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:17.502945632+00:00 stderr F I0813 20:03:17.502827 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:17.502945632+00:00 stderr F I0813 20:03:17.502899 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:25.509244321+00:00 stderr F I0813 20:03:25.508881 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:25.509244321+00:00 stderr F I0813 20:03:25.508945 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:41.517643887+00:00 stderr F I0813 20:03:41.517458 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:41.517746540+00:00 stderr F I0813 20:03:41.517732 1 
controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:56.915135236+00:00 stderr F I0813 20:03:56.915006 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:04:01.000095908+00:00 stderr F I0813 20:04:00.999928 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:04:01.005914114+00:00 stderr F I0813 20:04:01.003058 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:04:01.005914114+00:00 stderr F I0813 20:04:01.003149 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025128 1 healthcheck.go:84] fs available: 54261784576, total capacity: 85294297088, percentage available: 63.62, number of free inodes: 41488434 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025264 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025369 1 nodeserver.go:330] Capacity: 85294297088 Used: 31032512512 Available: 54261784576 Inodes: 41680368 Free inodes: 41488434 Used inodes: 191934 2025-08-13T20:04:10.486988651+00:00 stderr F I0813 20:04:10.486697 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:10.487205307+00:00 stderr F I0813 20:04:10.487185 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:04:13.523738710+00:00 stderr F I0813 20:04:13.523676 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:13.524015298+00:00 stderr F I0813 20:04:13.523997 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:04:56.964622012+00:00 stderr F I0813 20:04:56.959874 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:05:10.491516851+00:00 stderr F I0813 20:05:10.491238 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:05:10.491516851+00:00 stderr F I0813 20:05:10.491302 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:05:46.708170333+00:00 stderr F I0813 20:05:46.708026 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:05:46.714759392+00:00 stderr F I0813 20:05:46.714656 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:05:46.714759392+00:00 stderr F I0813 20:05:46.714693 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728583 1 healthcheck.go:84] fs available: 47486980096, total capacity: 85294297088, percentage available: 55.67, number of free inodes: 41469070 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728625 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728641 1 nodeserver.go:330] Capacity: 85294297088 Used: 37807316992 
Available: 47486980096 Inodes: 41680368 Free inodes: 41469070 Used inodes: 211298 2025-08-13T20:05:56.984080993+00:00 stderr F I0813 20:05:56.983475 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:06:06.802203704+00:00 stderr F I0813 20:06:06.798708 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:06.802203704+00:00 stderr F I0813 20:06:06.798750 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:06:10.487223827+00:00 stderr F I0813 20:06:10.487127 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:10.487332100+00:00 stderr F I0813 20:06:10.487318 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:06:21.529625288+00:00 stderr F I0813 20:06:21.529548 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:21.529924837+00:00 stderr F I0813 20:06:21.529905 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:06:57.096820633+00:00 stderr F I0813 20:06:57.092522 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:07:10.508947399+00:00 stderr F I0813 20:07:10.508715 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:07:10.513067847+00:00 stderr F I0813 20:07:10.510553 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:07:20.657193328+00:00 stderr F I0813 20:07:20.657106 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:07:20.695849096+00:00 stderr F I0813 20:07:20.685970 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:07:20.695849096+00:00 stderr F I0813 20:07:20.686005 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.733739 1 healthcheck.go:84] fs available: 49462222848, total capacity: 85294297088, percentage available: 57.99, number of free inodes: 41476143 2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.738849 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.738935 1 nodeserver.go:330] Capacity: 85294297088 Used: 35832098816 Available: 49462198272 Inodes: 41680368 Free inodes: 41476150 Used inodes: 204218 2025-08-13T20:07:57.145481707+00:00 stderr F I0813 20:07:57.144413 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:08:10.502928778+00:00 stderr F I0813 20:08:10.499463 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:08:10.502928778+00:00 stderr F I0813 20:08:10.499507 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:08:36.367695340+00:00 stderr F I0813 20:08:36.367492 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:08:36.369372678+00:00 stderr F I0813 20:08:36.369306 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:08:36.369396839+00:00 stderr F I0813 20:08:36.369339 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 
VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380557 1 healthcheck.go:84] fs available: 51706716160, total capacity: 85294297088, percentage available: 60.62, number of free inodes: 41485057 2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380626 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380647 1 nodeserver.go:330] Capacity: 85294297088 Used: 33587580928 Available: 51706716160 Inodes: 41680368 Free inodes: 41485057 Used inodes: 195311 2025-08-13T20:08:57.162177496+00:00 stderr F I0813 20:08:57.160999 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:09:10.507345642+00:00 stderr F I0813 20:09:10.507012 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:09:10.507345642+00:00 stderr F I0813 20:09:10.507250 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:09:39.085725629+00:00 stderr F I0813 20:09:39.085497 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:09:39.090844026+00:00 stderr F I0813 20:09:39.090722 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:09:39.090984640+00:00 stderr F I0813 20:09:39.090767 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105406 1 healthcheck.go:84] fs available: 51679031296, total capacity: 85294297088, percentage available: 60.59, number of free inodes: 41484961 2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105490 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105509 1 nodeserver.go:330] Capacity: 85294297088 Used: 33615265792 Available: 51679031296 Inodes: 41680368 Free inodes: 41484961 Used inodes: 195407 2025-08-13T20:09:57.199103275+00:00 stderr F I0813 20:09:57.198850 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:10:10.518927925+00:00 stderr F I0813 20:10:10.505588 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:10:10.518927925+00:00 stderr F I0813 20:10:10.505631 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:10:36.529453897+00:00 stderr F I0813 20:10:36.528695 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:10:36.529453897+00:00 stderr F I0813 20:10:36.528761 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:10:40.251553782+00:00 stderr F I0813 20:10:40.251303 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:10:40.253756016+00:00 stderr F I0813 20:10:40.253705 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:10:40.253836788+00:00 stderr F I0813 20:10:40.253736 1 nodeserver.go:314] Node Get Volume Stats Request: 
{VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:10:40.267351885+00:00 stderr F I0813 20:10:40.267295 1 healthcheck.go:84] fs available: 51647049728, total capacity: 85294297088, percentage available: 60.55, number of free inodes: 41484928 2025-08-13T20:10:40.267418467+00:00 stderr F I0813 20:10:40.267405 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:10:40.278090563+00:00 stderr F I0813 20:10:40.276155 1 nodeserver.go:330] Capacity: 85294297088 Used: 33647247360 Available: 51647049728 Inodes: 41680368 Free inodes: 41484928 Used inodes: 195440 2025-08-13T20:10:57.213663048+00:00 stderr F I0813 20:10:57.212333 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:11:10.503087476+00:00 stderr F I0813 20:11:10.502811 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:11:10.503087476+00:00 stderr F I0813 20:11:10.502957 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:11:57.227966364+00:00 stderr F I0813 20:11:57.227715 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:12:10.505112391+00:00 stderr F I0813 20:12:10.504953 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:12:10.505112391+00:00 stderr F I0813 20:12:10.505061 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:12:36.137386269+00:00 stderr F I0813 20:12:36.136412 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:12:36.141004743+00:00 stderr F I0813 20:12:36.140064 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:12:36.141004743+00:00 stderr F I0813 20:12:36.140092 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150177 1 healthcheck.go:84] fs available: 51640684544, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485041 2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150248 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150271 1 nodeserver.go:330] Capacity: 85294297088 Used: 33653612544 Available: 51640684544 Inodes: 41680368 Free inodes: 41485041 Used inodes: 195327 2025-08-13T20:12:57.251129275+00:00 stderr F I0813 20:12:57.251039 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:13:10.503850112+00:00 stderr F I0813 20:13:10.503729 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:13:10.503917984+00:00 stderr F I0813 20:13:10.503904 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:13:57.263200051+00:00 stderr F I0813 20:13:57.262977 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 
2025-08-13T20:14:04.803178800+00:00 stderr F I0813 20:14:04.802515 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:14:04.804445436+00:00 stderr F I0813 20:14:04.804387 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:14:04.805386423+00:00 stderr F I0813 20:14:04.804429 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811560 1 healthcheck.go:84] fs available: 51640279040, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485153 2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811598 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811614 1 nodeserver.go:330] Capacity: 85294297088 Used: 33654018048 Available: 51640279040 Inodes: 41680368 Free inodes: 41485153 Used inodes: 195215 2025-08-13T20:14:10.505047437+00:00 stderr F I0813 20:14:10.504870 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:14:10.505047437+00:00 stderr F I0813 20:14:10.504991 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:14:57.277850577+00:00 stderr F I0813 20:14:57.277638 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:15:10.509881435+00:00 stderr F I0813 20:15:10.505916 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:15:10.509881435+00:00 stderr F I0813 20:15:10.506155 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:15:51.769155450+00:00 stderr F I0813 20:15:51.769000 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:15:51.772506196+00:00 stderr F I0813 20:15:51.772372 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:15:51.772761773+00:00 stderr F I0813 20:15:51.772515 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781436 1 healthcheck.go:84] fs available: 51640188928, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485105 2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781516 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781541 1 nodeserver.go:330] Capacity: 85294297088 Used: 33654108160 Available: 51640188928 Inodes: 41680368 Free inodes: 41485105 Used inodes: 195263 2025-08-13T20:15:57.291996787+00:00 stderr F I0813 20:15:57.291842 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:16:10.506534710+00:00 stderr F I0813 20:16:10.506390 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:16:10.506534710+00:00 stderr F I0813 20:16:10.506435 1 controllerserver.go:230] Checking capacity for storage pool 
local 2025-08-13T20:16:57.304684138+00:00 stderr F I0813 20:16:57.304610 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:17:10.519147126+00:00 stderr F I0813 20:17:10.518943 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:17:10.523121470+00:00 stderr F I0813 20:17:10.519870 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:17:36.432045642+00:00 stderr F I0813 20:17:36.431604 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:17:36.438031932+00:00 stderr F I0813 20:17:36.435931 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:17:36.438031932+00:00 stderr F I0813 20:17:36.436033 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466277 1 healthcheck.go:84] fs available: 50719629312, total capacity: 85294297088, percentage available: 59.46, number of free inodes: 41481141 2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466319 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466336 1 nodeserver.go:330] Capacity: 85294297088 Used: 34574667776 Available: 50719629312 Inodes: 41680368 Free inodes: 41481141 Used inodes: 199227 2025-08-13T20:17:57.326616570+00:00 stderr F I0813 20:17:57.326474 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:18:10.526943734+00:00 stderr F I0813 20:18:10.524915 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:18:10.526943734+00:00 stderr F I0813 20:18:10.524956 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:18:57.355476292+00:00 stderr F I0813 20:18:57.355306 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:19:10.522212718+00:00 stderr F I0813 20:19:10.521996 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:19:10.522212718+00:00 stderr F I0813 20:19:10.522137 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:19:29.886707247+00:00 stderr F I0813 20:19:29.886609 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:19:29.890321280+00:00 stderr F I0813 20:19:29.890184 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:19:29.890591578+00:00 stderr F I0813 20:19:29.890222 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898142 1 healthcheck.go:84] fs available: 49281728512, total capacity: 85294297088, percentage available: 57.78, number of free inodes: 41478728 2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898180 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898196 1 nodeserver.go:330] Capacity: 85294297088 Used: 36012568576 Available: 49281728512 Inodes: 41680368 Free inodes: 41478728 Used inodes: 201640 2025-08-13T20:19:57.368016391+00:00 stderr F I0813 20:19:57.367718 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:20:10.519551602+00:00 stderr F I0813 20:20:10.519431 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:20:10.519551602+00:00 stderr F I0813 20:20:10.519479 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:20:57.389371162+00:00 stderr F I0813 20:20:57.389152 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:21:10.523616582+00:00 stderr F I0813 20:21:10.523457 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:21:10.523616582+00:00 stderr F I0813 20:21:10.523543 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:21:19.817840363+00:00 stderr F I0813 20:21:19.817595 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:21:19.821330213+00:00 stderr F I0813 20:21:19.820144 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:21:19.821330213+00:00 stderr F I0813 20:21:19.820181 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841120 1 healthcheck.go:84] fs available: 51637252096, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485141 2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841205 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841221 1 nodeserver.go:330] Capacity: 85294297088 Used: 33657044992 Available: 51637252096 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:21:57.403416385+00:00 stderr F I0813 20:21:57.403188 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:22:10.524463732+00:00 stderr F I0813 20:22:10.524337 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:22:10.524463732+00:00 stderr F I0813 20:22:10.524388 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:22:56.284454137+00:00 stderr F I0813 20:22:56.283874 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:22:56.287110313+00:00 stderr F I0813 20:22:56.287065 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:22:56.287344410+00:00 stderr F I0813 20:22:56.287175 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296459 1 healthcheck.go:84] fs available: 51642810368, total capacity: 85294297088, percentage available: 60.55, number of 
free inodes: 41485141 2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296511 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296527 1 nodeserver.go:330] Capacity: 85294297088 Used: 33651486720 Available: 51642810368 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:22:57.418258791+00:00 stderr F I0813 20:22:57.416314 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:23:10.527744748+00:00 stderr F I0813 20:23:10.527566 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:23:10.527744748+00:00 stderr F I0813 20:23:10.527615 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:23:57.431277872+00:00 stderr F I0813 20:23:57.431156 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:24:10.527718182+00:00 stderr F I0813 20:24:10.527537 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:24:10.527718182+00:00 stderr F I0813 20:24:10.527594 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:24:18.305041165+00:00 stderr F I0813 20:24:18.303307 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:24:18.306838946+00:00 stderr F I0813 20:24:18.306669 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:24:18.307206067+00:00 stderr F I0813 20:24:18.306756 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.314975 1 healthcheck.go:84] fs available: 51577217024, total capacity: 85294297088, percentage available: 60.47, number of free inodes: 41485141 2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.315035 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.315051 1 nodeserver.go:330] Capacity: 85294297088 Used: 33717080064 Available: 51577217024 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:24:57.451153962+00:00 stderr F I0813 20:24:57.450706 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:25:10.527768953+00:00 stderr F I0813 20:25:10.527627 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:25:10.527768953+00:00 stderr F I0813 20:25:10.527694 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:25:54.737841501+00:00 stderr F I0813 20:25:54.737523 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:25:54.740075855+00:00 stderr F I0813 20:25:54.739979 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:25:54.740249760+00:00 stderr F I0813 20:25:54.740049 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 
2025-08-13T20:25:54.764451082+00:00 stderr F I0813 20:25:54.764195 1 healthcheck.go:84] fs available: 51575119872, total capacity: 85294297088, percentage available: 60.47, number of free inodes: 41485141 2025-08-13T20:25:54.765378308+00:00 stderr F I0813 20:25:54.765255 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:25:54.765662736+00:00 stderr F I0813 20:25:54.765639 1 nodeserver.go:330] Capacity: 85294297088 Used: 33719177216 Available: 51575119872 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:25:57.469624523+00:00 stderr F I0813 20:25:57.469476 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:26:10.527753664+00:00 stderr F I0813 20:26:10.527598 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:26:10.527753664+00:00 stderr F I0813 20:26:10.527645 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:26:57.484049366+00:00 stderr F I0813 20:26:57.483879 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:27:10.562146652+00:00 stderr F I0813 20:27:10.561015 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:27:10.562146652+00:00 stderr F I0813 20:27:10.561118 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:27:16.955541565+00:00 stderr F I0813 20:27:16.955467 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:27:16.960923459+00:00 stderr F I0813 20:27:16.957556 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:27:16.960923459+00:00 stderr F I0813 20:27:16.957592 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986431 1 healthcheck.go:84] fs available: 50553176064, total capacity: 85294297088, percentage available: 59.27, number of free inodes: 41482383 2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986493 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986513 1 nodeserver.go:330] Capacity: 85294297088 Used: 34741121024 Available: 50553176064 Inodes: 41680368 Free inodes: 41482383 Used inodes: 197985 2025-08-13T20:27:57.502633244+00:00 stderr F I0813 20:27:57.502378 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:28:10.531412180+00:00 stderr F I0813 20:28:10.531096 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:28:10.531412180+00:00 stderr F I0813 20:28:10.531169 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:28:33.112173790+00:00 stderr F I0813 20:28:33.111824 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:28:33.115109274+00:00 stderr F I0813 20:28:33.114985 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:28:33.115441193+00:00 stderr F I0813 20:28:33.115092 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 
VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123446 1 healthcheck.go:84] fs available: 51568517120, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141 2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123954 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123971 1 nodeserver.go:330] Capacity: 85294297088 Used: 33725779968 Available: 51568517120 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:28:57.520944432+00:00 stderr F I0813 20:28:57.520680 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:29:10.533174696+00:00 stderr F I0813 20:29:10.532999 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:29:10.533311740+00:00 stderr F I0813 20:29:10.533291 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:29:44.340528574+00:00 stderr F I0813 20:29:44.340329 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:29:44.341679667+00:00 stderr F I0813 20:29:44.341619 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:29:44.341708008+00:00 stderr F I0813 20:29:44.341651 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:29:44.365910144+00:00 stderr F I0813 20:29:44.365397 1 healthcheck.go:84] fs available: 50792988672, total capacity: 85294297088, percentage available: 59.55, number of free inodes: 41482040 2025-08-13T20:29:44.366058328+00:00 stderr F I0813 20:29:44.365988 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:29:44.367612433+00:00 stderr F I0813 20:29:44.366118 1 nodeserver.go:330] Capacity: 85294297088 Used: 34501308416 Available: 50792988672 Inodes: 41680368 Free inodes: 41482040 Used inodes: 198328 2025-08-13T20:29:57.699870673+00:00 stderr F I0813 20:29:57.699698 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:30:10.532263930+00:00 stderr F I0813 20:30:10.532146 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:30:10.532459155+00:00 stderr F I0813 20:30:10.532436 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:30:57.725149969+00:00 stderr F I0813 20:30:57.725009 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:31:07.875860146+00:00 stderr F I0813 20:31:07.875731 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:31:07.878296166+00:00 stderr F I0813 20:31:07.878232 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:31:07.878572504+00:00 stderr F I0813 20:31:07.878335 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 
VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887145 1 healthcheck.go:84] fs available: 51566333952, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141 2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887179 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887194 1 nodeserver.go:330] Capacity: 85294297088 Used: 33727963136 Available: 51566333952 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:31:10.532145442+00:00 stderr F I0813 20:31:10.532100 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:31:10.532199013+00:00 stderr F I0813 20:31:10.532186 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:31:57.750300037+00:00 stderr F I0813 20:31:57.749893 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:32:10.533952959+00:00 stderr F I0813 20:32:10.533818 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:32:10.533986770+00:00 stderr F I0813 20:32:10.533975 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:32:21.647876845+00:00 stderr F I0813 20:32:21.647682 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:32:21.650811060+00:00 stderr F I0813 20:32:21.650714 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:32:21.651427357+00:00 stderr F I0813 20:32:21.650756 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660146 1 healthcheck.go:84] fs available: 51570003968, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141 2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660184 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660201 1 nodeserver.go:330] Capacity: 85294297088 Used: 33724293120 Available: 51570003968 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:32:57.765045545+00:00 stderr F I0813 20:32:57.764764 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:33:10.536131259+00:00 stderr F I0813 20:33:10.535921 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:33:10.536131259+00:00 stderr F I0813 20:33:10.535980 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:33:22.980927831+00:00 stderr F I0813 20:33:22.980505 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:33:22.984701520+00:00 stderr F I0813 20:33:22.984616 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:33:22.984751711+00:00 stderr F I0813 20:33:22.984661 1 nodeserver.go:314] Node Get Volume Stats Request: 
{VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:33:22.993114092+00:00 stderr F I0813 20:33:22.993024 1 healthcheck.go:84] fs available: 51570139136, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141 2025-08-13T20:33:22.993114092+00:00 stderr F I0813 20:33:22.993106 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:33:22.993139062+00:00 stderr F I0813 20:33:22.993127 1 nodeserver.go:330] Capacity: 85294297088 Used: 33724157952 Available: 51570139136 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:33:57.792135620+00:00 stderr F I0813 20:33:57.791627 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:34:10.537730761+00:00 stderr F I0813 20:34:10.537651 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:34:10.537912446+00:00 stderr F I0813 20:34:10.537894 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:34:57.805666005+00:00 stderr F I0813 20:34:57.805502 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:35:01.408516971+00:00 stderr F I0813 20:35:01.408392 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:35:01.410161969+00:00 stderr F I0813 20:35:01.410032 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:35:01.410270932+00:00 stderr F I0813 20:35:01.410066 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416222 1 healthcheck.go:84] fs available: 51570311168, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141 2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416257 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416277 1 nodeserver.go:330] Capacity: 85294297088 Used: 33723985920 Available: 51570311168 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227 2025-08-13T20:35:10.537402616+00:00 stderr F I0813 20:35:10.537200 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:35:10.537402616+00:00 stderr F I0813 20:35:10.537241 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:35:57.828048567+00:00 stderr F I0813 20:35:57.827894 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:36:10.540230706+00:00 stderr F I0813 20:36:10.540159 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:36:10.540230706+00:00 stderr F I0813 20:36:10.540203 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:36:45.003879095+00:00 stderr F I0813 20:36:44.997700 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:36:45.007661814+00:00 stderr F I0813 20:36:45.006888 1 
server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:36:45.007661814+00:00 stderr F I0813 20:36:45.006921 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017730 1 healthcheck.go:84] fs available: 51568951296, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485140 2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017765 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017833 1 nodeserver.go:330] Capacity: 85294297088 Used: 33725345792 Available: 51568951296 Inodes: 41680368 Free inodes: 41485140 Used inodes: 195228 2025-08-13T20:36:57.844487042+00:00 stderr F I0813 20:36:57.844282 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:37:10.538597061+00:00 stderr F I0813 20:37:10.538482 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:37:10.538597061+00:00 stderr F I0813 20:37:10.538537 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:37:57.863257984+00:00 stderr F I0813 20:37:57.862999 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:38:10.540288405+00:00 stderr F I0813 20:38:10.540127 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:38:10.540288405+00:00 stderr F I0813 20:38:10.540205 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:38:36.464710306+00:00 stderr F I0813 20:38:36.464263 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:38:36.467944479+00:00 stderr F I0813 20:38:36.466838 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:38:36.467944479+00:00 stderr F I0813 20:38:36.466865 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477635 1 healthcheck.go:84] fs available: 51565973504, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485098 2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477678 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477694 1 nodeserver.go:330] Capacity: 85294297088 Used: 33728323584 Available: 51565973504 Inodes: 41680368 Free inodes: 41485098 Used inodes: 195270 2025-08-13T20:38:57.880440552+00:00 stderr F I0813 20:38:57.880262 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:39:10.541717859+00:00 stderr F I0813 20:39:10.541462 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:39:10.541717859+00:00 stderr F I0813 20:39:10.541506 1 controllerserver.go:230] Checking capacity for storage pool local 
2025-08-13T20:39:57.907047914+00:00 stderr F I0813 20:39:57.905077 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:40:10.541471925+00:00 stderr F I0813 20:40:10.541375 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:40:10.541471925+00:00 stderr F I0813 20:40:10.541406 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:40:35.536596687+00:00 stderr F I0813 20:40:35.536482 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:40:35.538166962+00:00 stderr F I0813 20:40:35.537987 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:40:35.538166962+00:00 stderr F I0813 20:40:35.538023 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546307 1 healthcheck.go:84] fs available: 51564703744, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485140 2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546353 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546369 1 nodeserver.go:330] Capacity: 85294297088 Used: 33729593344 Available: 51564703744 Inodes: 41680368 Free inodes: 41485140 Used inodes: 195228 2025-08-13T20:40:57.920707454+00:00 stderr F I0813 20:40:57.920519 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:41:10.545660471+00:00 stderr F I0813 20:41:10.545536 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:41:10.545660471+00:00 stderr F I0813 20:41:10.545577 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:41:57.937944436+00:00 stderr F I0813 20:41:57.937728 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:42:10.552582979+00:00 stderr F I0813 20:42:10.552449 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:42:10.552739754+00:00 stderr F I0813 20:42:10.552697 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:42:16.573332707+00:00 stderr F I0813 20:42:16.572654 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:42:16.586949320+00:00 stderr F I0813 20:42:16.577589 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:42:16.586949320+00:00 stderr F I0813 20:42:16.577636 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589379 1 healthcheck.go:84] fs available: 51561603072, total capacity: 85294297088, percentage available: 60.45, number of free inodes: 41485107 2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589439 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 
2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589487 1 nodeserver.go:330] Capacity: 85294297088 Used: 33732689920 Available: 51561607168 Inodes: 41680368 Free inodes: 41485108 Used inodes: 195260 ././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000003553415073043234033201 0ustar zuulzuul2025-10-13T00:15:00.433000660+00:00 stderr F I1013 00:15:00.430228 1 plugin.go:44] Starting Prometheus metrics endpoint server 2025-10-13T00:15:00.433155535+00:00 stderr F I1013 00:15:00.432995 1 plugin.go:47] Starting new HostPathDriver, config: {kubevirt.io.hostpath-provisioner unix:///csi/csi.sock crc map[] latest } 2025-10-13T00:15:00.946425543+00:00 stderr F I1013 00:15:00.945122 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS 2025-10-13T00:15:00.946425543+00:00 stderr F I1013 00:15:00.945181 1 hostpath.go:88] name: local, dataDir: /csi-data-dir 2025-10-13T00:15:00.946425543+00:00 stderr F I1013 00:15:00.945278 1 hostpath.go:107] Driver: kubevirt.io.hostpath-provisioner, version: latest 2025-10-13T00:15:00.949386852+00:00 stderr F I1013 00:15:00.949320 1 server.go:194] Starting domain socket: unix///csi/csi.sock 2025-10-13T00:15:00.949563487+00:00 stderr F I1013 00:15:00.949518 1 server.go:89] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"} 2025-10-13T00:15:01.026420180+00:00 stderr F I1013 00:15:01.026357 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:15:02.845066860+00:00 stderr F I1013 00:15:02.841993 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-10-13T00:15:03.406442799+00:00 stderr F I1013 00:15:03.406393 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-10-13T00:15:03.610380129+00:00 stderr F I1013 00:15:03.605896 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-10-13T00:15:05.788382666+00:00 stderr F I1013 00:15:05.786454 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-10-13T00:15:05.788382666+00:00 stderr F I1013 00:15:05.786845 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-10-13T00:15:05.788382666+00:00 stderr F I1013 00:15:05.787471 1 server.go:104] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2025-10-13T00:15:05.790477519+00:00 stderr F I1013 00:15:05.789710 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-10-13T00:15:05.945921436+00:00 stderr F I1013 00:15:05.945166 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:15:05.945921436+00:00 stderr F I1013 00:15:05.945184 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:16:01.039401931+00:00 stderr F I1013 00:16:01.037872 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:16:05.943705670+00:00 stderr F I1013 00:16:05.943637 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:16:05.943705670+00:00 stderr F I1013 00:16:05.943661 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:16:58.365848739+00:00 stderr F I1013 00:16:58.365289 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 
2025-10-13T00:16:58.611708683+00:00 stderr F I1013 00:16:58.611646 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:16:58.612675783+00:00 stderr F I1013 00:16:58.612477 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:16:58.632666652+00:00 stderr F I1013 00:16:58.632505 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:16:58.634120387+00:00 stderr F I1013 00:16:58.633890 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-10-13T00:16:58.634234601+00:00 stderr F I1013 00:16:58.634146 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75779c45fd-v2j2v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:17:01.048877424+00:00 stderr F I1013 00:17:01.048836 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:17:05.945625276+00:00 stderr F I1013 00:17:05.945554 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:17:05.945625276+00:00 stderr F I1013 00:17:05.945586 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:18:01.069919212+00:00 stderr F I1013 00:18:01.069880 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:18:05.368177275+00:00 stderr F I1013 00:18:05.367190 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:18:05.368241417+00:00 stderr F I1013 00:18:05.368220 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-10-13T00:18:05.368261657+00:00 stderr F I1013 00:18:05.368227 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:18:05.372796122+00:00 stderr F I1013 00:18:05.372757 1 healthcheck.go:84] fs available: 45927038976, total capacity: 85294297088, percentage available: 53.85, number of free inodes: 41460779 2025-10-13T00:18:05.372887525+00:00 stderr F I1013 00:18:05.372865 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-10-13T00:18:05.372974608+00:00 stderr F I1013 00:18:05.372952 1 nodeserver.go:330] Capacity: 85294297088 Used: 39367258112 Available: 45927038976 Inodes: 41680320 Free inodes: 41460779 Used inodes: 219541 2025-10-13T00:18:05.946086644+00:00 stderr F I1013 00:18:05.945999 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:18:05.946086644+00:00 stderr F 
I1013 00:18:05.946026 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:19:01.082372083+00:00 stderr F I1013 00:19:01.082252 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:19:05.947425370+00:00 stderr F I1013 00:19:05.947309 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:19:05.947425370+00:00 stderr F I1013 00:19:05.947371 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:19:38.579278014+00:00 stderr F I1013 00:19:38.578997 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:19:38.582472099+00:00 stderr F I1013 00:19:38.582414 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-10-13T00:19:38.582472099+00:00 stderr F I1013 00:19:38.582430 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:19:38.594230769+00:00 stderr F I1013 00:19:38.594138 1 healthcheck.go:84] fs available: 45925879808, total capacity: 85294297088, percentage available: 53.84, number of free inodes: 41460722 2025-10-13T00:19:38.594230769+00:00 stderr F I1013 00:19:38.594181 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-10-13T00:19:38.594230769+00:00 stderr F I1013 00:19:38.594199 1 nodeserver.go:330] Capacity: 85294297088 Used: 39368417280 Available: 45925879808 Inodes: 41680320 Free inodes: 41460722 Used inodes: 219598 2025-10-13T00:19:39.050415124+00:00 stderr F I1013 00:19:39.050361 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:19:39.052234398+00:00 stderr F I1013 00:19:39.052209 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:19:39.053351921+00:00 stderr F I1013 00:19:39.053301 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:19:39.056520876+00:00 stderr F I1013 00:19:39.054675 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-10-13T00:19:39.056520876+00:00 stderr F I1013 00:19:39.054709 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/fe9b4942-29e7-4ef1-85c7-1a2153128dc7/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75b7bb6564-2mwg6 csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:fe9b4942-29e7-4ef1-85c7-1a2153128dc7 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:20:01.098607687+00:00 stderr F I1013 00:20:01.098510 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:20:05.948180729+00:00 stderr F I1013 00:20:05.948133 1 
server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:20:05.948180729+00:00 stderr F I1013 00:20:05.948148 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:20:25.446428376+00:00 stderr F I1013 00:20:25.446216 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-10-13T00:20:25.446428376+00:00 stderr F I1013 00:20:25.446234 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:20:25.446428376+00:00 stderr F I1013 00:20:25.446255 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-10-13T00:20:28.160624676+00:00 stderr F I1013 00:20:28.160434 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:20:28.161258634+00:00 stderr F I1013 00:20:28.161163 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-10-13T00:20:28.161258634+00:00 stderr F I1013 00:20:28.161185 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/fe9b4942-29e7-4ef1-85c7-1a2153128dc7/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:20:28.166038308+00:00 stderr F I1013 00:20:28.165940 1 healthcheck.go:84] fs available: 45898960896, total capacity: 85294297088, percentage available: 53.81, number of free inodes: 41460761 2025-10-13T00:20:28.166038308+00:00 stderr F I1013 00:20:28.165971 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-10-13T00:20:28.166038308+00:00 stderr F I1013 00:20:28.165982 1 nodeserver.go:330] Capacity: 85294297088 Used: 39395336192 Available: 45898960896 Inodes: 41680320 Free inodes: 41460761 Used inodes: 219559 2025-10-13T00:21:01.107749873+00:00 stderr F I1013 00:21:01.107162 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:21:05.949108230+00:00 stderr F I1013 00:21:05.949028 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:21:05.949108230+00:00 stderr F I1013 00:21:05.949052 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:22:01.120986610+00:00 stderr F I1013 00:22:01.120870 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:22:02.243891626+00:00 stderr F I1013 00:22:02.243811 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-10-13T00:22:02.244996316+00:00 stderr F I1013 00:22:02.244952 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-10-13T00:22:02.244996316+00:00 stderr F I1013 00:22:02.244973 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/fe9b4942-29e7-4ef1-85c7-1a2153128dc7/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-10-13T00:22:02.254171223+00:00 stderr F I1013 00:22:02.253368 1 healthcheck.go:84] fs available: 45899112448, total capacity: 85294297088, percentage 
available: 53.81, number of free inodes: 41460759 2025-10-13T00:22:02.254171223+00:00 stderr F I1013 00:22:02.253395 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-10-13T00:22:02.254171223+00:00 stderr F I1013 00:22:02.253411 1 nodeserver.go:330] Capacity: 85294297088 Used: 39395184640 Available: 45899112448 Inodes: 41680320 Free inodes: 41460759 Used inodes: 219561 2025-10-13T00:22:05.950007330+00:00 stderr F I1013 00:22:05.949931 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:22:05.950007330+00:00 stderr F I1013 00:22:05.949955 1 controllerserver.go:230] Checking capacity for storage pool local 2025-10-13T00:23:01.134578906+00:00 stderr F I1013 00:23:01.134125 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-10-13T00:23:05.950955356+00:00 stderr F I1013 00:23:05.950158 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:23:05.950955356+00:00 stderr F I1013 00:23:05.950182 1 controllerserver.go:230] Checking capacity for storage pool local ././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015073043234033165 5ustar zuulzuul././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000277515073043234033202 0ustar zuulzuul2025-10-13T00:15:02.778509356+00:00 stderr F I1013 00:15:02.772609 1 main.go:135] Version: v4.15.0-202406180807.p0.g9005584.assembly.stream.el8-0-g69b18c7-dirty 2025-10-13T00:15:02.778509356+00:00 stderr F I1013 00:15:02.773396 1 main.go:136] Running node-driver-registrar in mode= 2025-10-13T00:15:02.778509356+00:00 stderr F I1013 00:15:02.773403 1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock" 2025-10-13T00:15:02.834224705+00:00 stderr F I1013 00:15:02.834163 1 main.go:164] Calling CSI driver to discover driver name 2025-10-13T00:15:02.851341408+00:00 stderr F I1013 00:15:02.851253 1 main.go:173] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-10-13T00:15:02.851341408+00:00 stderr F I1013 00:15:02.851311 1 node_register.go:55] Starting Registration Server at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2025-10-13T00:15:02.853075820+00:00 stderr F I1013 00:15:02.853038 1 node_register.go:64] Registration Server started at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2025-10-13T00:15:02.853884704+00:00 stderr F I1013 00:15:02.853862 1 node_register.go:88] Skipping HTTP server because endpoint is set to: "" 2025-10-13T00:15:03.404351866+00:00 stderr F I1013 00:15:03.404252 1 main.go:90] Received GetInfo call: &InfoRequest{} 2025-10-13T00:15:03.429223322+00:00 stderr F I1013 00:15:03.429092 1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} ././@LongLink0000644000000000000000000000027500000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000277515073043234033202 0ustar zuulzuul2025-08-13T19:59:54.983940954+00:00 stderr F I0813 19:59:54.882907 1 main.go:135] Version: v4.15.0-202406180807.p0.g9005584.assembly.stream.el8-0-g69b18c7-dirty 2025-08-13T19:59:55.140220789+00:00 stderr F I0813 19:59:55.102275 1 main.go:136] Running node-driver-registrar in mode= 2025-08-13T19:59:55.336952618+00:00 stderr F I0813 19:59:55.271313 1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock" 2025-08-13T19:59:56.036679094+00:00 stderr F I0813 19:59:55.934459 1 main.go:164] Calling CSI driver to discover driver name 2025-08-13T19:59:56.994891017+00:00 stderr F I0813 19:59:56.988897 1 main.go:173] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-08-13T19:59:56.994891017+00:00 stderr F I0813 19:59:56.991755 1 node_register.go:55] Starting Registration Server at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2025-08-13T19:59:57.003131672+00:00 stderr F I0813 19:59:56.996236 1 node_register.go:64] Registration Server started at: /registration/kubevirt.io.hostpath-provisioner-reg.sock 2025-08-13T19:59:57.043204285+00:00 stderr F I0813 19:59:57.030985 1 node_register.go:88] Skipping HTTP server because endpoint is set to: "" 2025-08-13T19:59:58.136694005+00:00 stderr F I0813 19:59:58.083078 1 main.go:90] Received GetInfo call: &InfoRequest{} 2025-08-13T19:59:58.490232413+00:00 stderr F I0813 19:59:58.490135 1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015073043234033165 5ustar zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000013456715073043234033207 0ustar zuulzuul2025-10-13T00:15:05.742110870+00:00 stderr F W1013 00:15:05.741811 1 feature_gate.go:241] Setting GA feature gate Topology=true. It will be removed in a future release. 2025-10-13T00:15:05.742110870+00:00 stderr F I1013 00:15:05.742056 1 feature_gate.go:249] feature gates: &{map[Topology:true]} 2025-10-13T00:15:05.743177722+00:00 stderr F I1013 00:15:05.743136 1 csi-provisioner.go:154] Version: v4.15.0-202406180807.p0.gce5a1a3.assembly.stream.el8-0-g9363464-dirty 2025-10-13T00:15:05.743177722+00:00 stderr F I1013 00:15:05.743157 1 csi-provisioner.go:177] Building kube configs for running in cluster... 
2025-10-13T00:15:05.767751938+00:00 stderr F I1013 00:15:05.767682 1 connection.go:215] Connecting to unix:///csi/csi.sock 2025-10-13T00:15:05.784179850+00:00 stderr F I1013 00:15:05.782763 1 common.go:138] Probing CSI driver for readiness 2025-10-13T00:15:05.784179850+00:00 stderr F I1013 00:15:05.782815 1 connection.go:244] GRPC call: /csi.v1.Identity/Probe 2025-10-13T00:15:05.786357976+00:00 stderr F I1013 00:15:05.782820 1 connection.go:245] GRPC request: {} 2025-10-13T00:15:05.786357976+00:00 stderr F I1013 00:15:05.786222 1 connection.go:251] GRPC response: {} 2025-10-13T00:15:05.786357976+00:00 stderr F I1013 00:15:05.786239 1 connection.go:252] GRPC error: 2025-10-13T00:15:05.786357976+00:00 stderr F I1013 00:15:05.786254 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-10-13T00:15:05.786357976+00:00 stderr F I1013 00:15:05.786259 1 connection.go:245] GRPC request: {} 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.786617 1 connection.go:251] GRPC response: {"name":"kubevirt.io.hostpath-provisioner","vendor_version":"latest"} 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.786628 1 connection.go:252] GRPC error: 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.786638 1 csi-provisioner.go:230] Detected CSI driver kubevirt.io.hostpath-provisioner 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.786645 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.786649 1 connection.go:245] GRPC request: {} 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.787290 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}}]} 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.787297 1 connection.go:252] GRPC error: 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.787303 1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.787306 1 connection.go:245] GRPC request: {} 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.787767 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":11}}}]} 2025-10-13T00:15:05.788035146+00:00 stderr F I1013 00:15:05.787773 1 connection.go:252] GRPC error: 2025-10-13T00:15:05.790434608+00:00 stderr F I1013 00:15:05.789511 1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments 2025-10-13T00:15:05.790434608+00:00 stderr F I1013 00:15:05.789529 1 connection.go:244] GRPC call: /csi.v1.Node/NodeGetInfo 2025-10-13T00:15:05.790434608+00:00 stderr F I1013 00:15:05.789533 1 connection.go:245] GRPC request: {} 2025-10-13T00:15:05.790823879+00:00 stderr F I1013 00:15:05.790571 1 connection.go:251] GRPC response: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"node_id":"crc"} 2025-10-13T00:15:05.790823879+00:00 stderr F I1013 00:15:05.790591 1 connection.go:252] GRPC error: 2025-10-13T00:15:05.795447988+00:00 stderr F I1013 00:15:05.790622 1 csi-provisioner.go:351] using local topology with Node = &Node{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[topology.hostpath.csi/node:crc] map[] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} and CSINode = &CSINode{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:CSINodeSpec{Drivers:[]CSINodeDriver{CSINodeDriver{Name:kubevirt.io.hostpath-provisioner,NodeID:crc,TopologyKeys:[topology.hostpath.csi/node],Allocatable:nil,},},},} 2025-10-13T00:15:05.835122547+00:00 stderr F I1013 00:15:05.835071 1 csi-provisioner.go:464] using apps/v1/DaemonSet csi-hostpathplugin as owner of CSIStorageCapacity objects 2025-10-13T00:15:05.835204739+00:00 stderr F I1013 00:15:05.835184 1 csi-provisioner.go:483] producing CSIStorageCapacity objects with fixed topology segment [topology.hostpath.csi/node: crc] 2025-10-13T00:15:05.841503088+00:00 stderr F I1013 00:15:05.839872 1 csi-provisioner.go:529] using the CSIStorageCapacity v1 API 2025-10-13T00:15:05.841503088+00:00 stderr F I1013 00:15:05.840711 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0002ba540 = topology.hostpath.csi/node: crc], removed [] 2025-10-13T00:15:05.841503088+00:00 stderr F I1013 00:15:05.841061 1 controller.go:732] Using saving PVs to API server in background 2025-10-13T00:15:05.849672273+00:00 stderr F I1013 00:15:05.849043 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2025-10-13T00:15:05.849672273+00:00 stderr F I1013 00:15:05.849061 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-10-13T00:15:05.849672273+00:00 stderr F I1013 00:15:05.849524 1 reflector.go:289] Starting reflector *v1.CSIStorageCapacity (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2025-10-13T00:15:05.849672273+00:00 stderr F I1013 00:15:05.849549 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-10-13T00:15:05.849762625+00:00 stderr F I1013 00:15:05.849748 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150 2025-10-13T00:15:05.849787996+00:00 stderr F I1013 00:15:05.849778 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2025-10-13T00:15:05.860279741+00:00 stderr F I1013 00:15:05.860077 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2025-10-13T00:15:05.860279741+00:00 stderr F I1013 00:15:05.860102 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942797 1 shared_informer.go:341] caches populated 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942827 1 shared_informer.go:341] caches populated 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942845 1 controller.go:811] Starting provisioner controller 
kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_0cfc7cba-bce5-4681-98e7-76c72696faec! 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942881 1 capacity.go:243] Starting Capacity Controller 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942920 1 shared_informer.go:341] caches populated 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942926 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0002ba540 = topology.hostpath.csi/node: crc], removed [] 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942965 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942985 1 capacity.go:279] Initial number of topology segments 1, storage classes 1, potential CSIStorageCapacity objects 1 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.942991 1 capacity.go:290] Checking for existing CSIStorageCapacity objects 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.943046 1 capacity.go:725] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 matches {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.943058 1 capacity.go:255] Started Capacity Controller 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.943070 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.943083 1 volume_store.go:97] Starting save volume queue 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.943224 1 reflector.go:289] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-10-13T00:15:05.943471143+00:00 stderr F I1013 00:15:05.943230 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-10-13T00:15:05.943768552+00:00 stderr F I1013 00:15:05.943577 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:15:05.943768552+00:00 stderr F I1013 00:15:05.943638 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:15:05.943768552+00:00 stderr F I1013 00:15:05.943690 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:15:05.943919817+00:00 stderr F I1013 00:15:05.943694 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:15:05.943919817+00:00 stderr F I1013 00:15:05.943911 1 reflector.go:289] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-10-13T00:15:05.943930497+00:00 stderr F I1013 00:15:05.943921 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-10-13T00:15:05.945752401+00:00 stderr F I1013 00:15:05.945682 1 connection.go:251] GRPC response: 
{"available_capacity":52613812224,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:15:05.945752401+00:00 stderr F I1013 00:15:05.945700 1 connection.go:252] GRPC error: 2025-10-13T00:15:05.945752401+00:00 stderr F I1013 00:15:05.945721 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51380676Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:15:05.948628628+00:00 stderr F I1013 00:15:05.948584 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 40431 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:15:05.948789292+00:00 stderr F I1013 00:15:05.948757 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 40431 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 51380676Ki 2025-10-13T00:15:06.044539861+00:00 stderr F I1013 00:15:06.044454 1 shared_informer.go:341] caches populated 2025-10-13T00:15:06.044692526+00:00 stderr F I1013 00:15:06.044668 1 controller.go:860] Started provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_0cfc7cba-bce5-4681-98e7-76c72696faec! 2025-10-13T00:15:06.045776538+00:00 stderr F I1013 00:15:06.044570 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: 
openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-10-13T00:15:06.045776538+00:00 stderr F I1013 00:15:06.045760 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-10-13T00:15:06.045776538+00:00 stderr F I1013 00:15:06.045768 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-10-13T00:16:05.943268136+00:00 stderr F I1013 00:16:05.943189 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:16:05.943268136+00:00 stderr F I1013 00:16:05.943239 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:16:05.943268136+00:00 stderr F I1013 00:16:05.943263 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:16:05.943363409+00:00 stderr F I1013 00:16:05.943267 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:16:05.944067871+00:00 stderr F I1013 00:16:05.944035 1 connection.go:251] GRPC response: {"available_capacity":46137790464,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:16:05.944067871+00:00 stderr F I1013 00:16:05.944046 1 connection.go:252] GRPC error: 2025-10-13T00:16:05.944085012+00:00 stderr F I1013 00:16:05.944071 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 45056436Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:16:05.951414627+00:00 stderr F I1013 00:16:05.951252 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41088 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:16:05.951510860+00:00 stderr F I1013 00:16:05.951471 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41088 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 45056436Ki 2025-10-13T00:17:05.944947305+00:00 stderr F I1013 00:17:05.944098 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:17:05.944947305+00:00 stderr F I1013 
00:17:05.944875 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:17:05.944947305+00:00 stderr F I1013 00:17:05.944903 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:17:05.945051028+00:00 stderr F I1013 00:17:05.944908 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:17:05.945939545+00:00 stderr F I1013 00:17:05.945886 1 connection.go:251] GRPC response: {"available_capacity":44926410752,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:17:05.945939545+00:00 stderr F I1013 00:17:05.945899 1 connection.go:252] GRPC error: 2025-10-13T00:17:05.945939545+00:00 stderr F I1013 00:17:05.945915 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43873448Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:17:05.956513953+00:00 stderr F I1013 00:17:05.956433 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41425 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:17:05.956513953+00:00 stderr F I1013 00:17:05.956445 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41425 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 43873448Ki 2025-10-13T00:18:05.946052113+00:00 stderr F I1013 00:18:05.945406 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:18:05.946052113+00:00 stderr F I1013 00:18:05.945462 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:18:05.946052113+00:00 stderr F I1013 00:18:05.945492 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:18:05.946052113+00:00 stderr F I1013 00:18:05.945498 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:18:05.949369011+00:00 stderr F I1013 00:18:05.946255 1 connection.go:251] GRPC response: {"available_capacity":45927038976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:18:05.949369011+00:00 stderr F I1013 00:18:05.946268 1 connection.go:252] GRPC error: 2025-10-13T00:18:05.949369011+00:00 stderr F I1013 00:18:05.946287 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44850624Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:18:05.953229586+00:00 stderr F I1013 00:18:05.952937 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41561 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 44850624Ki 2025-10-13T00:18:05.954308308+00:00 stderr F I1013 00:18:05.953345 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41561 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 
2025-10-13T00:19:05.946613266+00:00 stderr F I1013 00:19:05.946491 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:19:05.946613266+00:00 stderr F I1013 00:19:05.946567 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:19:05.946692008+00:00 stderr F I1013 00:19:05.946606 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:19:05.946850223+00:00 stderr F I1013 00:19:05.946614 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:19:05.948041848+00:00 stderr F I1013 00:19:05.948007 1 connection.go:251] GRPC response: {"available_capacity":45926359040,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:19:05.948133941+00:00 stderr F I1013 00:19:05.948117 1 connection.go:252] GRPC error: 2025-10-13T00:19:05.948222443+00:00 stderr F I1013 00:19:05.948187 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44849960Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:19:05.967719933+00:00 stderr F I1013 00:19:05.967477 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41702 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:19:05.968475776+00:00 stderr F I1013 00:19:05.968405 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41702 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 44849960Ki 2025-10-13T00:20:05.947763182+00:00 stderr F I1013 00:20:05.947236 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:20:05.947763182+00:00 stderr F I1013 00:20:05.947704 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:20:05.947763182+00:00 stderr F I1013 00:20:05.947728 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:20:05.947877393+00:00 stderr F I1013 00:20:05.947732 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:20:05.948635513+00:00 stderr F I1013 00:20:05.948540 1 connection.go:251] GRPC response: {"available_capacity":45905338368,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:20:05.948635513+00:00 stderr F I1013 00:20:05.948550 1 connection.go:252] GRPC error: 2025-10-13T00:20:05.948635513+00:00 stderr F I1013 00:20:05.948564 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44829432Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:20:05.955841333+00:00 stderr F I1013 00:20:05.955737 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41915 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 
2025-10-13T00:20:05.956148879+00:00 stderr F I1013 00:20:05.956070 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41915 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 44829432Ki 2025-10-13T00:21:05.948411690+00:00 stderr F I1013 00:21:05.948287 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:21:05.948463882+00:00 stderr F I1013 00:21:05.948405 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:21:05.948463882+00:00 stderr F I1013 00:21:05.948447 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:21:05.948613736+00:00 stderr F I1013 00:21:05.948455 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:21:05.949530312+00:00 stderr F I1013 00:21:05.949515 1 connection.go:251] GRPC response: {"available_capacity":45898809344,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:21:05.949564163+00:00 stderr F I1013 00:21:05.949555 1 connection.go:252] GRPC error: 2025-10-13T00:21:05.949604224+00:00 stderr F I1013 00:21:05.949588 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44823056Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:21:05.958470563+00:00 stderr F I1013 00:21:05.958424 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 42175 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 44823056Ki 2025-10-13T00:21:05.958567765+00:00 stderr F I1013 00:21:05.958504 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 42175 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:21:18.947354449+00:00 stderr F I1013 00:21:18.947157 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 7 items received 2025-10-13T00:22:05.949354343+00:00 stderr F I1013 00:22:05.949249 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:22:05.949446085+00:00 stderr F I1013 00:22:05.949362 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:22:05.949446085+00:00 stderr F I1013 00:22:05.949398 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:22:05.949544978+00:00 stderr F I1013 00:22:05.949404 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:22:05.950320629+00:00 stderr F I1013 00:22:05.950284 1 connection.go:251] GRPC response: {"available_capacity":45898440704,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:22:05.950320629+00:00 stderr F I1013 00:22:05.950300 1 connection.go:252] GRPC error: 2025-10-13T00:22:05.950404731+00:00 stderr F I1013 00:22:05.950363 1 
capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44822696Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:22:05.959676100+00:00 stderr F I1013 00:22:05.959595 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 42835 is already known to match {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:22:05.960045970+00:00 stderr F I1013 00:22:05.960005 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 42835 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 44822696Ki 2025-10-13T00:22:09.855359053+00:00 stderr F I1013 00:22:09.854830 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 9 items received 2025-10-13T00:22:09.857248603+00:00 stderr F I1013 00:22:09.857180 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42844&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:11.248365803+00:00 stderr F I1013 00:22:11.248259 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42844&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:11.667595806+00:00 stderr F I1013 00:22:11.667433 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 7 items received 2025-10-13T00:22:11.698402715+00:00 stderr F I1013 00:22:11.698248 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42827&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:11.698507188+00:00 stderr F I1013 00:22:11.698457 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 7 items received 2025-10-13T00:22:11.700018238+00:00 stderr F I1013 00:22:11.698850 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 0 items received 2025-10-13T00:22:11.700018238+00:00 stderr F I1013 00:22:11.699001 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 15 items received 2025-10-13T00:22:11.714084377+00:00 stderr F I1013 00:22:11.713985 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42297&timeout=7m4s&timeoutSeconds=424&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:11.714136908+00:00 stderr F I1013 00:22:11.714072 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42834&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:11.714136908+00:00 stderr F I1013 00:22:11.714087 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42835&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:12.574873825+00:00 stderr F I1013 00:22:12.574776 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42835&timeout=7m52s&timeoutSeconds=472&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:12.731141328+00:00 stderr F I1013 00:22:12.731066 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42297&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:13.148470889+00:00 stderr F I1013 00:22:13.148411 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42827&timeout=5m59s&timeoutSeconds=359&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:13.268485867+00:00 stderr F I1013 00:22:13.268421 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42834&timeout=7m20s&timeoutSeconds=440&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:14.097193352+00:00 stderr F I1013 00:22:14.097119 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42844&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:14.703551868+00:00 stderr F I1013 00:22:14.703461 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42835&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.217.4.1:443: connect: 
connection refused - backing off 2025-10-13T00:22:15.165555433+00:00 stderr F I1013 00:22:15.165474 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42297&timeout=7m11s&timeoutSeconds=431&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:15.753967126+00:00 stderr F I1013 00:22:15.753901 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42827&timeout=9m56s&timeoutSeconds=596&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:16.410409680+00:00 stderr F I1013 00:22:16.400529 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42834&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:18.494116994+00:00 stderr F I1013 00:22:18.494059 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42844&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:20.204118238+00:00 stderr F I1013 00:22:20.204070 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42835&timeout=7m44s&timeoutSeconds=464&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:20.618080221+00:00 stderr F I1013 00:22:20.618029 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42834&timeout=7m41s&timeoutSeconds=461&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:20.977981219+00:00 stderr F I1013 00:22:20.977928 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42297&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:21.576054673+00:00 stderr F I1013 00:22:21.575969 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42827&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:27.414677184+00:00 stderr F I1013 00:22:27.414538 1 
reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42297&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:28.117061572+00:00 stderr F I1013 00:22:28.116981 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42834&timeout=7m15s&timeoutSeconds=435&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:30.829014712+00:00 stderr F I1013 00:22:30.828938 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42844&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:31.499360189+00:00 stderr F I1013 00:22:31.499224 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42835&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:32.470509975+00:00 stderr F I1013 00:22:32.470443 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42827&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:46.022191929+00:00 stderr F I1013 00:22:46.022051 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42835&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:46.870462247+00:00 stderr F I1013 00:22:46.870304 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42834&timeout=7m30s&timeoutSeconds=450&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:47.964840422+00:00 stderr F I1013 00:22:47.964769 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42827&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 
2025-10-13T00:22:50.621851173+00:00 stderr F I1013 00:22:50.621772 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42297&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-10-13T00:22:55.850701273+00:00 stderr F I1013 00:22:55.850553 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 42844 (42861) 2025-10-13T00:23:05.949732272+00:00 stderr F I1013 00:23:05.949591 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-10-13T00:23:05.949732272+00:00 stderr F I1013 00:23:05.949653 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} 2025-10-13T00:23:05.949732272+00:00 stderr F I1013 00:23:05.949684 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-10-13T00:23:05.949850626+00:00 stderr F I1013 00:23:05.949691 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-10-13T00:23:05.951210663+00:00 stderr F I1013 00:23:05.951152 1 connection.go:251] GRPC response: {"available_capacity":45894459392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-10-13T00:23:05.951210663+00:00 stderr F I1013 00:23:05.951189 1 connection.go:252] GRPC error: 2025-10-13T00:23:05.951285576+00:00 stderr F I1013 00:23:05.951228 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44818808Ki, new maximumVolumeSize 83295212Ki 2025-10-13T00:23:05.962431396+00:00 stderr F I1013 00:23:05.962315 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 43024 for {segment:0xc0002ba540 storageClassName:crc-csi-hostpath-provisioner} with capacity 44818808Ki 2025-10-13T00:23:17.819986127+00:00 stderr F I1013 00:23:17.819911 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 42834 (42861) 2025-10-13T00:23:23.269912976+00:00 stderr F I1013 00:23:23.269820 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 42827 (42861) 2025-10-13T00:23:25.784390107+00:00 stderr F I1013 00:23:25.784308 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity closed with: too old resource version: 42835 (42861) 2025-10-13T00:23:30.596856929+00:00 stderr F I1013 00:23:30.596802 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 42297 (42861) 2025-10-13T00:23:41.670482646+00:00 stderr F I1013 00:23:41.670385 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log
2025-08-13T20:00:06.445084695+00:00 stderr F W0813 20:00:06.443324 1 feature_gate.go:241] Setting GA feature gate Topology=true. It will be removed in a future release. 2025-08-13T20:00:06.445484987+00:00 stderr F I0813 20:00:06.445445 1 feature_gate.go:249] feature gates: &{map[Topology:true]} 2025-08-13T20:00:06.445541259+00:00 stderr F I0813 20:00:06.445526 1 csi-provisioner.go:154] Version: v4.15.0-202406180807.p0.gce5a1a3.assembly.stream.el8-0-g9363464-dirty 2025-08-13T20:00:06.445584310+00:00 stderr F I0813 20:00:06.445567 1 csi-provisioner.go:177] Building kube configs for running in cluster... 2025-08-13T20:00:06.695957019+00:00 stderr F I0813 20:00:06.658088 1 connection.go:215] Connecting to unix:///csi/csi.sock 2025-08-13T20:00:06.730316519+00:00 stderr F I0813 20:00:06.730187 1 common.go:138] Probing CSI driver for readiness 2025-08-13T20:00:06.730426782+00:00 stderr F I0813 20:00:06.730385 1 connection.go:244] GRPC call: /csi.v1.Identity/Probe 2025-08-13T20:00:06.763211887+00:00 stderr F I0813 20:00:06.730446 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925372 1 connection.go:251] GRPC response: {} 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925421 1 connection.go:252] GRPC error: 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925443 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925449 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034174 1 connection.go:251] GRPC response: {"name":"kubevirt.io.hostpath-provisioner","vendor_version":"latest"} 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034258 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034275 1 csi-provisioner.go:230] Detected CSI driver kubevirt.io.hostpath-provisioner 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034292 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034297 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.272894790+00:00 stderr F I0813 20:00:07.242737 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}}]} 2025-08-13T20:00:07.293117656+00:00 stderr F I0813 20:00:07.273106 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.293289511+00:00 stderr F I0813 20:00:07.293266 1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2025-08-13T20:00:07.293398524+00:00 stderr F I0813 20:00:07.293311 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.558471632+00:00 stderr F I0813 20:00:07.557998 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":11}}}]} 2025-08-13T20:00:07.558471632+00:00 stderr F I0813 20:00:07.558063 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578185 1 csi-provisioner.go:302] CSI driver does not support
PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments 2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578240 1 connection.go:244] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578246 1 connection.go:245] GRPC request: {} 2025-08-13T20:00:07.691395962+00:00 stderr F I0813 20:00:07.676466 1 connection.go:251] GRPC response: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"node_id":"crc"} 2025-08-13T20:00:07.691395962+00:00 stderr F I0813 20:00:07.676565 1 connection.go:252] GRPC error: 2025-08-13T20:00:07.929995405+00:00 stderr F I0813 20:00:07.676661 1 csi-provisioner.go:351] using local topology with Node = &Node{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[topology.hostpath.csi/node:crc] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} and CSINode = &CSINode{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:CSINodeSpec{Drivers:[]CSINodeDriver{CSINodeDriver{Name:kubevirt.io.hostpath-provisioner,NodeID:crc,TopologyKeys:[topology.hostpath.csi/node],Allocatable:nil,},},},} 2025-08-13T20:00:08.534874933+00:00 stderr F I0813 20:00:08.532659 1 csi-provisioner.go:464] using apps/v1/DaemonSet csi-hostpathplugin as owner of CSIStorageCapacity objects 2025-08-13T20:00:08.534874933+00:00 stderr F I0813 20:00:08.532718 1 csi-provisioner.go:483] producing CSIStorageCapacity objects with fixed topology segment [topology.hostpath.csi/node: crc] 2025-08-13T20:00:08.555891592+00:00 stderr F I0813 20:00:08.555419 1 csi-provisioner.go:529] using the CSIStorageCapacity v1 API 2025-08-13T20:00:08.558949119+00:00 stderr F I0813 20:00:08.556337 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0001296f8 = topology.hostpath.csi/node: crc], removed [] 2025-08-13T20:00:08.558949119+00:00 stderr F I0813 20:00:08.557178 1 controller.go:732] Using saving PVs to API server in background 2025-08-13T20:00:08.584915160+00:00 stderr F I0813 20:00:08.576298 1 reflector.go:289] Starting reflector *v1.CSIStorageCapacity (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.584915160+00:00 stderr F I0813 20:00:08.576380 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.601974936+00:00 stderr F I0813 20:00:08.591735 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.601974936+00:00 stderr F I0813 20:00:08.591826 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.617532940+00:00 stderr F I0813 20:00:08.612115 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:08.617532940+00:00 stderr F I0813 20:00:08.612165 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from 
k8s.io/client-go/informers/factory.go:150 2025-08-13T20:00:10.156014488+00:00 stderr F I0813 20:00:10.115770 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2025-08-13T20:00:10.156148102+00:00 stderr F I0813 20:00:10.156111 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168114 1 shared_informer.go:341] caches populated 2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168191 1 shared_informer.go:341] caches populated 2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168371 1 controller.go:811] Starting provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_9c0cb162-9831-443d-a7c1-5af5632fc687! 2025-08-13T20:00:10.200235409+00:00 stderr F I0813 20:00:10.191948 1 capacity.go:243] Starting Capacity Controller 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380224 1 shared_informer.go:341] caches populated 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380278 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0001296f8 = topology.hostpath.csi/node: crc], removed [] 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380614 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380631 1 capacity.go:279] Initial number of topology segments 1, storage classes 1, potential CSIStorageCapacity objects 1 2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380815 1 capacity.go:290] Checking for existing CSIStorageCapacity objects 2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380903 1 capacity.go:725] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 24362 matches {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380917 1 capacity.go:255] Started Capacity Controller 2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380933 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:00:10.380968402+00:00 stderr F I0813 20:00:10.380954 1 reflector.go:289] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:00:10.380968402+00:00 stderr F I0813 20:00:10.380961 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.384352 1 volume_store.go:97] Starting save volume queue 2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.385212 1 reflector.go:289] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.385224 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.609281 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 24362 is already known to match {segment:0xc0001296f8 
storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.635463 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.636202 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.636211 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:00:11.139235764+00:00 stderr F I0813 20:00:11.119364 1 connection.go:251] GRPC response: {"available_capacity":63507808256,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.358355 1 connection.go:252] GRPC error: 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.358538 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 62019344Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.178084 1 shared_informer.go:341] caches populated 2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.359593 1 controller.go:860] Started provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_9c0cb162-9831-443d-a7c1-5af5632fc687! 2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.359641 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi 
BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.620275 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.620441 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:00:11.648860944+00:00 stderr F I0813 20:00:11.646741 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 29050 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:00:11.648860944+00:00 stderr F I0813 20:00:11.646990 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 29050 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 62019344Ki 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.440600 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.441262 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.442478 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.442489 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.566852 1 connection.go:251] GRPC response: 
{"available_capacity":61631873024,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.567008 1 connection.go:252] GRPC error: 2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.567315 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 60187376Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:01:14.381652061+00:00 stderr F I0813 20:01:14.376669 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30349 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:01:14.446122080+00:00 stderr F I0813 20:01:14.427122 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 30349 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 60187376Ki 2025-08-13T20:02:10.481966068+00:00 stderr F I0813 20:02:10.481609 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:02:10.481966068+00:00 stderr F I0813 20:02:10.481897 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:02:10.482290887+00:00 stderr F I0813 20:02:10.482127 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:02:10.482566955+00:00 stderr F I0813 20:02:10.482150 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:02:10.486612800+00:00 stderr F I0813 20:02:10.486560 1 connection.go:251] GRPC response: {"available_capacity":57153896448,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:02:10.486612800+00:00 stderr F I0813 20:02:10.486584 1 connection.go:252] GRPC error: 2025-08-13T20:02:10.486756145+00:00 stderr F I0813 20:02:10.486636 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 55814352Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:02:13.039704301+00:00 stderr F I0813 20:02:13.038706 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:02:13.052607229+00:00 stderr F I0813 20:02:13.052485 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 30628 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 55814352Ki 2025-08-13T20:02:29.596263111+00:00 stderr F I0813 20:02:29.591484 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 2 items received 2025-08-13T20:02:29.643445047+00:00 stderr F I0813 20:02:29.643044 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 2 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.676729 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 5 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.677578 1 
reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.679991 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.682654 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=9m8s&timeoutSeconds=548&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.683529 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.701086371+00:00 stderr F I0813 20:02:29.690134 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m10s&timeoutSeconds=310&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.708374689+00:00 stderr F I0813 20:02:29.704397 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m6s&timeoutSeconds=366&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:29.708374689+00:00 stderr F I0813 20:02:29.707113 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:30.540496978+00:00 stderr F I0813 20:02:30.540338 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:30.583592947+00:00 stderr F I0813 20:02:30.583343 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 
2025-08-13T20:02:31.109182150+00:00 stderr F I0813 20:02:31.109009 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m44s&timeoutSeconds=464&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:31.124899568+00:00 stderr F I0813 20:02:31.124727 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m38s&timeoutSeconds=458&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:31.156989043+00:00 stderr F I0813 20:02:31.156761 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m2s&timeoutSeconds=302&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:32.744557062+00:00 stderr F I0813 20:02:32.744349 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:32.838680177+00:00 stderr F I0813 20:02:32.838465 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:33.163339149+00:00 stderr F I0813 20:02:33.163234 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=9m8s&timeoutSeconds=548&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:33.547760206+00:00 stderr F I0813 20:02:33.547306 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m10s&timeoutSeconds=370&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:33.892723506+00:00 stderr F I0813 20:02:33.892171 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:36.877874593+00:00 stderr F I0813 20:02:36.877703 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of 
*v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m23s&timeoutSeconds=443&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:38.534622675+00:00 stderr F I0813 20:02:38.534406 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.060985840+00:00 stderr F I0813 20:02:39.060316 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.439426896+00:00 stderr F I0813 20:02:39.439295 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.521892639+00:00 stderr F I0813 20:02:39.521633 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:44.532424188+00:00 stderr F I0813 20:02:44.530300 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:47.765023075+00:00 stderr F I0813 20:02:47.764585 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:48.193273602+00:00 stderr F I0813 20:02:48.193156 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:49.033706956+00:00 stderr F I0813 20:02:49.033579 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:51.421538544+00:00 stderr F I0813 20:02:51.421281 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=6m19s&timeoutSeconds=379&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:57.470707981+00:00 stderr F I0813 20:02:57.470627 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:07.083578567+00:00 stderr F I0813 20:03:07.083080 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:09.932190890+00:00 stderr F I0813 20:03:09.932042 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:10.483036413+00:00 stderr F I0813 20:03:10.482545 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:03:10.483876137+00:00 stderr F I0813 20:03:10.483736 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:10.484152475+00:00 stderr F I0813 20:03:10.484091 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:10.484967098+00:00 stderr F I0813 20:03:10.484128 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:10.488412296+00:00 stderr F I0813 20:03:10.488337 1 connection.go:251] GRPC response: {"available_capacity":55023792128,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:10.488412296+00:00 stderr F I0813 20:03:10.488355 1 connection.go:252] GRPC error: 2025-08-13T20:03:10.488824748+00:00 stderr F I0813 20:03:10.488495 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53734172Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:10.490741163+00:00 stderr F E0813 20:03:10.490570 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 
storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:10.490741163+00:00 stderr F W0813 20:03:10.490611 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 0 failures 2025-08-13T20:03:10.693011203+00:00 stderr F I0813 20:03:10.692499 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:11.492386517+00:00 stderr F I0813 20:03:11.492204 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:11.492601553+00:00 stderr F I0813 20:03:11.492514 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:11.493017445+00:00 stderr F I0813 20:03:11.492540 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:11.494756084+00:00 stderr F I0813 20:03:11.494698 1 connection.go:251] GRPC response: {"available_capacity":54914170880,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:11.494756084+00:00 stderr F I0813 20:03:11.494734 1 connection.go:252] GRPC error: 2025-08-13T20:03:11.496875165+00:00 stderr F I0813 20:03:11.494947 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53627120Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:11.497473372+00:00 stderr F E0813 20:03:11.497368 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.497473372+00:00 stderr F W0813 20:03:11.497402 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 1 failures 2025-08-13T20:03:13.498706382+00:00 stderr F I0813 20:03:13.498530 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:13.498706382+00:00 stderr F I0813 20:03:13.498619 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:13.498881047+00:00 stderr F I0813 20:03:13.498628 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:13.499975518+00:00 stderr F I0813 20:03:13.499936 1 connection.go:251] GRPC response: {"available_capacity":54619185152,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:13.499975518+00:00 stderr F I0813 20:03:13.499953 1 connection.go:252] GRPC error: 2025-08-13T20:03:13.500033699+00:00 stderr F I0813 20:03:13.499987 1 
capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53339048Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:13.501631665+00:00 stderr F E0813 20:03:13.501561 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.501631665+00:00 stderr F W0813 20:03:13.501614 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 2 failures 2025-08-13T20:03:16.036603780+00:00 stderr F I0813 20:03:16.036401 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:17.501995015+00:00 stderr F I0813 20:03:17.501871 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:17.501995015+00:00 stderr F I0813 20:03:17.501973 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:17.502119369+00:00 stderr F I0813 20:03:17.501980 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:17.503699514+00:00 stderr F I0813 20:03:17.503642 1 connection.go:251] GRPC response: {"available_capacity":53907591168,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:17.503699514+00:00 stderr F I0813 20:03:17.503676 1 connection.go:252] GRPC error: 2025-08-13T20:03:17.503722575+00:00 stderr F I0813 20:03:17.503701 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 52644132Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:17.505298440+00:00 stderr F E0813 20:03:17.505246 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:17.505298440+00:00 stderr F W0813 20:03:17.505284 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 3 failures 2025-08-13T20:03:25.507222693+00:00 stderr F I0813 20:03:25.506678 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:25.507309735+00:00 stderr F I0813 20:03:25.507277 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:25.507887532+00:00 stderr F I0813 20:03:25.507288 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:25.509641382+00:00 stderr F I0813 20:03:25.509551 1 connection.go:251] GRPC response: {"available_capacity":52684136448,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:25.509641382+00:00 stderr F I0813 20:03:25.509589 1 connection.go:252] GRPC error: 2025-08-13T20:03:25.509764366+00:00 stderr F I0813 20:03:25.509677 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51449352Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:25.511732552+00:00 stderr F E0813 20:03:25.511669 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.511732552+00:00 stderr F W0813 20:03:25.511709 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 4 failures 2025-08-13T20:03:41.513143198+00:00 stderr F I0813 20:03:41.512502 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:41.513143198+00:00 stderr F I0813 20:03:41.512892 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:41.513257972+00:00 stderr F I0813 20:03:41.512904 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:41.518480131+00:00 stderr F I0813 20:03:41.518403 1 connection.go:251] GRPC response: {"available_capacity":51869433856,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:41.518480131+00:00 stderr F I0813 20:03:41.518435 1 connection.go:252] GRPC error: 2025-08-13T20:03:41.518547163+00:00 stderr F I0813 20:03:41.518467 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50653744Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:41.520934581+00:00 stderr F E0813 20:03:41.520881 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.520934581+00:00 stderr F W0813 20:03:41.520922 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 5 failures 2025-08-13T20:03:42.890655574+00:00 stderr F I0813 20:03:42.890433 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m10s&timeoutSeconds=490&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:47.364616962+00:00 stderr F 
I0813 20:03:47.364496 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m45s&timeoutSeconds=405&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:48.306607314+00:00 stderr F I0813 20:03:48.306279 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:50.851697078+00:00 stderr F I0813 20:03:50.851564 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:52.214568860+00:00 stderr F I0813 20:03:52.214453 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:10.484515361+00:00 stderr F I0813 20:04:10.484202 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:04:10.484515361+00:00 stderr F I0813 20:04:10.484433 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:04:10.484827289+00:00 stderr F I0813 20:04:10.484734 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:10.485284332+00:00 stderr F I0813 20:04:10.484765 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:04:10.488669139+00:00 stderr F I0813 20:04:10.488550 1 connection.go:251] GRPC response: {"available_capacity":53007642624,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:04:10.488948477+00:00 stderr F I0813 20:04:10.488876 1 connection.go:252] GRPC error: 2025-08-13T20:04:10.489021569+00:00 stderr F I0813 20:04:10.488938 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51765276Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:04:10.492210750+00:00 stderr F E0813 20:04:10.492117 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:10.492210750+00:00 stderr F W0813 20:04:10.492154 1 capacity.go:552] 
Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 6 failures 2025-08-13T20:04:13.522250948+00:00 stderr F I0813 20:04:13.521990 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:04:13.522464694+00:00 stderr F I0813 20:04:13.522448 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:13.522983819+00:00 stderr F I0813 20:04:13.522486 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:04:13.524461261+00:00 stderr F I0813 20:04:13.524434 1 connection.go:251] GRPC response: {"available_capacity":52794683392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:04:13.524668987+00:00 stderr F I0813 20:04:13.524619 1 connection.go:252] GRPC error: 2025-08-13T20:04:13.524953595+00:00 stderr F I0813 20:04:13.524744 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51557308Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:04:13.526714495+00:00 stderr F E0813 20:04:13.526603 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:13.526714495+00:00 stderr F W0813 20:04:13.526646 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 7 failures 2025-08-13T20:04:18.700132527+00:00 stderr F I0813 20:04:18.694650 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:30.683585878+00:00 stderr F I0813 20:04:30.683216 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=5m58s&timeoutSeconds=358&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:34.170544641+00:00 stderr F I0813 20:04:34.170234 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m13s&timeoutSeconds=373&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:42.112098456+00:00 stderr F I0813 20:04:42.111886 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get 
"https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=9m26s&timeoutSeconds=566&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:49.089169411+00:00 stderr F I0813 20:04:49.086947 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:05:02.568697791+00:00 stderr F I0813 20:05:02.568367 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:05:10.486039784+00:00 stderr F I0813 20:05:10.485702 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:05:10.486039784+00:00 stderr F I0813 20:05:10.486018 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:05:10.486309361+00:00 stderr F I0813 20:05:10.486220 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:05:10.486683242+00:00 stderr F I0813 20:05:10.486245 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:05:10.492159129+00:00 stderr F I0813 20:05:10.492067 1 connection.go:251] GRPC response: {"available_capacity":48768638976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:05:10.492159129+00:00 stderr F I0813 20:05:10.492101 1 connection.go:252] GRPC error: 2025-08-13T20:05:10.492188450+00:00 stderr F I0813 20:05:10.492134 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 47625624Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:05:10.495313319+00:00 stderr F E0813 20:05:10.494837 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:10.495313319+00:00 stderr F W0813 20:05:10.494884 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 8 failures 2025-08-13T20:05:27.772514028+00:00 stderr F I0813 20:05:27.772064 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity closed with: too old resource version: 30628 (30718) 2025-08-13T20:05:29.777518023+00:00 stderr F I0813 20:05:29.777231 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 30619 (30718) 2025-08-13T20:05:32.165918598+00:00 stderr F I0813 20:05:32.165240 1 reflector.go:445] 
k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 30620 (30718) 2025-08-13T20:05:34.082920372+00:00 stderr F I0813 20:05:34.081109 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 30625 (30718) 2025-08-13T20:05:51.792116716+00:00 stderr F I0813 20:05:51.791632 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 30620 (30718) 2025-08-13T20:05:59.738435116+00:00 stderr F I0813 20:05:59.736072 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:05:59.787175102+00:00 stderr F I0813 20:05:59.786630 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:06.772732740+00:00 stderr F I0813 20:06:06.772544 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:06:06.795659576+00:00 stderr F I0813 20:06:06.795480 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2025-08-13T20:06:06.795721518+00:00 stderr F I0813 20:06:06.795633 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:06.796370387+00:00 stderr F I0813 20:06:06.796282 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:06.797268303+00:00 stderr F I0813 20:06:06.796407 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:06.798042965+00:00 stderr F I0813 20:06:06.796415 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:06:06.799757244+00:00 stderr F I0813 20:06:06.799632 1 connection.go:251] GRPC response: {"available_capacity":47480958976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:06:06.799974140+00:00 stderr F I0813 20:06:06.799949 1 connection.go:252] GRPC error: 2025-08-13T20:06:06.800095603+00:00 stderr F I0813 20:06:06.800050 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46368124Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:06:06.820370124+00:00 stderr F I0813 20:06:06.819646 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 31987 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46368124Ki 2025-08-13T20:06:06.826149130+00:00 stderr F I0813 20:06:06.826035 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 31987 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486104 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486227 1 capacity.go:574] Capacity Controller: 
refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486337 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486344 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516209 1 connection.go:251] GRPC response: {"available_capacity":47480971264,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516250 1 connection.go:252] GRPC error: 2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516275 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46368136Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:06:10.580150658+00:00 stderr F I0813 20:06:10.579169 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32004 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:10.583094132+00:00 stderr F I0813 20:06:10.582981 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32004 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46368136Ki 2025-08-13T20:06:18.840921105+00:00 stderr F I0813 20:06:18.839541 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:06:21.527915969+00:00 stderr F I0813 20:06:21.527730 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:21.528032313+00:00 stderr F I0813 20:06:21.527996 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:06:21.528336031+00:00 stderr F I0813 20:06:21.528020 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:06:21.535661441+00:00 stderr F I0813 20:06:21.535546 1 connection.go:251] GRPC response: {"available_capacity":47480754176,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:06:21.535661441+00:00 stderr F I0813 20:06:21.535579 1 connection.go:252] GRPC error: 2025-08-13T20:06:21.535744454+00:00 stderr F I0813 20:06:21.535615 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46367924Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:06:21.550683701+00:00 stderr F I0813 20:06:21.550618 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32037 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46367924Ki 2025-08-13T20:06:21.551119974+00:00 stderr F I0813 20:06:21.551008 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32037 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:06:21.866016731+00:00 stderr F I0813 20:06:21.865906 1 reflector.go:325] Listing and watching 
*v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:06:28.050184360+00:00 stderr F I0813 20:06:28.044093 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:06:28.062771960+00:00 stderr F I0813 20:06:28.055746 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: 
local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:06:28.062771960+00:00 stderr F I0813 20:06:28.062736 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:06:28.062945775+00:00 stderr F I0813 20:06:28.062823 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:07:10.490577272+00:00 stderr F I0813 20:07:10.490248 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:07:10.490577272+00:00 stderr F I0813 20:07:10.490483 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:07:10.491729745+00:00 stderr F I0813 20:07:10.490719 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:07:10.491729745+00:00 stderr F I0813 20:07:10.490752 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:07:10.513115619+00:00 stderr F I0813 20:07:10.512331 1 connection.go:251] GRPC response: {"available_capacity":49674403840,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:07:10.514217710+00:00 stderr F I0813 20:07:10.514163 1 connection.go:252] GRPC error: 2025-08-13T20:07:10.514511549+00:00 stderr F I0813 20:07:10.514232 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48510160Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:07:10.571925205+00:00 stderr F I0813 20:07:10.571615 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32389 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:07:10.573383717+00:00 stderr F I0813 20:07:10.571987 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32389 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48510160Ki 2025-08-13T20:08:10.496212475+00:00 stderr F I0813 20:08:10.494950 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:08:10.496212475+00:00 stderr F I0813 20:08:10.495758 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:08:10.497057509+00:00 stderr F I0813 20:08:10.496985 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:08:10.497678547+00:00 stderr F I0813 20:08:10.497017 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505524 1 connection.go:251] GRPC response: {"available_capacity":51680923648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505565 1 connection.go:252] GRPC error: 2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505623 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50469652Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:08:10.529526140+00:00 stderr F I0813 20:08:10.529455 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32843 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50469652Ki 2025-08-13T20:08:10.529643024+00:00 stderr F I0813 20:08:10.529465 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32843 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:08:26.211449194+00:00 stderr F I0813 20:08:26.209243 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 1 items received 2025-08-13T20:08:26.232885748+00:00 stderr F I0813 20:08:26.226685 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 2 items received 2025-08-13T20:08:26.245728977+00:00 stderr F I0813 20:08:26.245648 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.245971094+00:00 stderr F I0813 20:08:26.245940 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.329062316+00:00 stderr F I0813 20:08:26.329006 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 7 items received 2025-08-13T20:08:26.338875157+00:00 stderr F I0813 20:08:26.337266 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:08:26.339112204+00:00 stderr F I0813 20:08:26.339083 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 2 items received 2025-08-13T20:08:26.339557827+00:00 stderr F I0813 20:08:26.339500 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=7m55s&timeoutSeconds=475&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.348191124+00:00 stderr F I0813 20:08:26.348106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=9m15s&timeoutSeconds=555&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:26.354041382+00:00 stderr F I0813 20:08:26.353987 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m32s&timeoutSeconds=392&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.271516707+00:00 stderr F I0813 20:08:27.271446 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.443477858+00:00 stderr F I0813 20:08:27.441995 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=5m47s&timeoutSeconds=347&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.459096985+00:00 stderr F I0813 20:08:27.458965 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m54s&timeoutSeconds=534&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.719648486+00:00 stderr F I0813 20:08:27.715522 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:27.915456340+00:00 stderr F I0813 20:08:27.915341 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:28.964473995+00:00 stderr F I0813 20:08:28.964297 1 reflector.go:421] 
k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:29.650917816+00:00 stderr F I0813 20:08:29.650675 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:30.207930386+00:00 stderr F I0813 20:08:30.198136 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=9m44s&timeoutSeconds=584&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:30.226399856+00:00 stderr F I0813 20:08:30.219066 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=6m28s&timeoutSeconds=388&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:31.042481194+00:00 stderr F I0813 20:08:31.042351 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=6m39s&timeoutSeconds=399&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:33.424378073+00:00 stderr F I0813 20:08:33.424253 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:34.154934419+00:00 stderr F I0813 20:08:34.153059 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:34.169285610+00:00 stderr F I0813 20:08:34.169180 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:34.971095929+00:00 stderr F I0813 20:08:34.970921 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get 
"https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:36.935505080+00:00 stderr F I0813 20:08:36.935337 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=5m59s&timeoutSeconds=359&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:41.031694191+00:00 stderr F I0813 20:08:41.026092 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:42.459972361+00:00 stderr F I0813 20:08:42.453283 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:43.610085645+00:00 stderr F I0813 20:08:43.608699 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:44.909074679+00:00 stderr F I0813 20:08:44.907690 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m34s&timeoutSeconds=514&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:48.575034265+00:00 stderr F I0813 20:08:48.574857 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:57.009273752+00:00 stderr F I0813 20:08:57.009001 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=9m52s&timeoutSeconds=592&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:08:58.719116385+00:00 stderr F I0813 20:08:58.718969 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch 
of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:09:00.268765025+00:00 stderr F I0813 20:09:00.265320 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:09:03.686105372+00:00 stderr F I0813 20:09:03.685709 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 32890 (32913) 2025-08-13T20:09:03.686105372+00:00 stderr F I0813 20:09:03.685724 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity closed with: too old resource version: 32843 (32913) 2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498204 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498512 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498673 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498682 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509149 1 connection.go:251] GRPC response: {"available_capacity":51679956992,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509163 1 connection.go:252] GRPC error: 2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509258 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50468708Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:09:10.536886979+00:00 stderr F I0813 20:09:10.536467 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33017 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50468708Ki 2025-08-13T20:09:31.302131848+00:00 stderr F I0813 20:09:31.300214 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 32890 (32913) 2025-08-13T20:09:32.674389312+00:00 stderr F I0813 20:09:32.674302 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 32566 (32913) 2025-08-13T20:09:34.049559569+00:00 stderr F I0813 20:09:34.049452 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:09:34.054219852+00:00 stderr F I0813 20:09:34.054183 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33017 
is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:09:34.387063665+00:00 stderr F I0813 20:09:34.386506 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-08-13T20:09:42.139423570+00:00 stderr F I0813 20:09:42.139143 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 32800 (32913) 2025-08-13T20:10:10.500175528+00:00 stderr F I0813 20:10:10.499530 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:10:10.500175528+00:00 stderr F I0813 20:10:10.499996 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:10:10.500258670+00:00 stderr F I0813 20:10:10.500203 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:10:10.505378987+00:00 stderr F I0813 20:10:10.500259 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506508 1 connection.go:251] GRPC response: {"available_capacity":51678912512,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506547 1 connection.go:252] GRPC error: 2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506612 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50467688Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:10:10.525308928+00:00 stderr F I0813 20:10:10.525178 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33141 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:10:10.525308928+00:00 stderr F I0813 20:10:10.525256 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33141 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50467688Ki 2025-08-13T20:10:11.134723951+00:00 stderr F I0813 20:10:11.134630 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.139712 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.140144 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.140154 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:10:11.329857436+00:00 stderr F I0813 20:10:11.325844 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:10:36.518379150+00:00 stderr F I0813 20:10:36.518068 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:10:36.527029198+00:00 stderr F I0813 20:10:36.526895 1 capacity.go:373] Capacity Controller: storage class 
crc-csi-hostpath-provisioner was updated or added 2025-08-13T20:10:36.527029198+00:00 stderr F I0813 20:10:36.527007 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:10:36.527070579+00:00 stderr F I0813 20:10:36.527043 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:10:36.527211473+00:00 stderr F I0813 20:10:36.527164 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:10:36.527713867+00:00 stderr F I0813 20:10:36.527203 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:10:36.529408726+00:00 stderr F I0813 20:10:36.529332 1 connection.go:251] GRPC response: {"available_capacity":51647598592,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:10:36.529408726+00:00 stderr F I0813 20:10:36.529359 1 connection.go:252] GRPC error: 2025-08-13T20:10:36.529526919+00:00 stderr F I0813 20:10:36.529439 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50437108Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:10:36.549439450+00:00 stderr F I0813 20:10:36.549272 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33214 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50437108Ki 2025-08-13T20:10:36.549439450+00:00 stderr F I0813 20:10:36.549307 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33214 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:11:10.500524073+00:00 stderr F I0813 20:11:10.500223 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:11:10.500600195+00:00 stderr F I0813 20:11:10.500577 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:11:10.500900653+00:00 stderr F I0813 20:11:10.500838 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:11:10.501486520+00:00 stderr F I0813 20:11:10.500871 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:11:10.504303841+00:00 stderr F I0813 20:11:10.504273 1 connection.go:251] GRPC response: {"available_capacity":51642920960,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:11:10.504357362+00:00 stderr F I0813 20:11:10.504341 1 connection.go:252] GRPC error: 2025-08-13T20:11:10.504544088+00:00 stderr F I0813 20:11:10.504452 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50432540Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:11:10.524081448+00:00 stderr F I0813 20:11:10.524021 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33386 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50432540Ki 
2025-08-13T20:11:10.527750263+00:00 stderr F I0813 20:11:10.525990 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33386 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:12:10.502114835+00:00 stderr F I0813 20:12:10.501553 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:12:10.502316631+00:00 stderr F I0813 20:12:10.502191 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:12:10.502569629+00:00 stderr F I0813 20:12:10.502518 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:12:10.503161966+00:00 stderr F I0813 20:12:10.502541 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:12:10.505836932+00:00 stderr F I0813 20:12:10.505727 1 connection.go:251] GRPC response: {"available_capacity":51640741888,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:12:10.505978606+00:00 stderr F I0813 20:12:10.505891 1 connection.go:252] GRPC error: 2025-08-13T20:12:10.506156291+00:00 stderr F I0813 20:12:10.506014 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50430412Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:12:10.524858878+00:00 stderr F I0813 20:12:10.524703 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33496 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50430412Ki 2025-08-13T20:12:10.525074764+00:00 stderr F I0813 20:12:10.524990 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33496 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:13:10.502418991+00:00 stderr F I0813 20:13:10.502124 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:13:10.502418991+00:00 stderr F I0813 20:13:10.502280 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:13:10.502514664+00:00 stderr F I0813 20:13:10.502454 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:13:10.502838033+00:00 stderr F I0813 20:13:10.502495 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:13:10.504610113+00:00 stderr F I0813 20:13:10.504575 1 connection.go:251] GRPC response: {"available_capacity":51640733696,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:13:10.504610113+00:00 stderr F I0813 20:13:10.504588 1 connection.go:252] GRPC error: 2025-08-13T20:13:10.504770328+00:00 stderr F I0813 20:13:10.504630 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50430404Ki, new maximumVolumeSize 83295212Ki 
2025-08-13T20:13:10.527331795+00:00 stderr F I0813 20:13:10.527240 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33585 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:13:10.527486770+00:00 stderr F I0813 20:13:10.527375 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33585 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50430404Ki 2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.502892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.503054 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.503103 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:14:10.503470731+00:00 stderr F I0813 20:14:10.503113 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:14:10.505520070+00:00 stderr F I0813 20:14:10.505426 1 connection.go:251] GRPC response: {"available_capacity":51640299520,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:14:10.505520070+00:00 stderr F I0813 20:14:10.505470 1 connection.go:252] GRPC error: 2025-08-13T20:14:10.505621053+00:00 stderr F I0813 20:14:10.505495 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50429980Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:14:10.526839471+00:00 stderr F I0813 20:14:10.526667 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33694 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50429980Ki 2025-08-13T20:14:10.527307325+00:00 stderr F I0813 20:14:10.527213 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33694 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504159 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504294 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504337 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:15:10.504606455+00:00 stderr F I0813 20:15:10.504343 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:15:10.507081435+00:00 stderr F I0813 20:15:10.507023 1 connection.go:251] GRPC response: {"available_capacity":51639975936,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:15:10.507081435+00:00 stderr F I0813 20:15:10.507056 1 connection.go:252] GRPC error: 
2025-08-13T20:15:10.507177058+00:00 stderr F I0813 20:15:10.507079 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50429664Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:15:10.556052854+00:00 stderr F I0813 20:15:10.555982 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33862 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50429664Ki 2025-08-13T20:15:10.556361263+00:00 stderr F I0813 20:15:10.556289 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33862 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:16:03.530489450+00:00 stderr F I0813 20:16:03.530193 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 6 items received 2025-08-13T20:16:10.505406177+00:00 stderr F I0813 20:16:10.505244 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:16:10.505517601+00:00 stderr F I0813 20:16:10.505459 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:16:10.505533831+00:00 stderr F I0813 20:16:10.505515 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:16:10.505725797+00:00 stderr F I0813 20:16:10.505523 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:16:10.507020333+00:00 stderr F I0813 20:16:10.506933 1 connection.go:251] GRPC response: {"available_capacity":51641487360,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:16:10.507020333+00:00 stderr F I0813 20:16:10.506989 1 connection.go:252] GRPC error: 2025-08-13T20:16:10.507103596+00:00 stderr F I0813 20:16:10.507039 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50431140Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:16:10.535623750+00:00 stderr F I0813 20:16:10.535510 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33987 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:16:10.536485935+00:00 stderr F I0813 20:16:10.536359 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33987 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50431140Ki 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.512414 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.512837 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.513068 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.513080 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:17:10.521410071+00:00 stderr F I0813 20:17:10.521268 1 connection.go:251] GRPC response: {"available_capacity":51120128000,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:17:10.521410071+00:00 stderr F I0813 20:17:10.521396 1 connection.go:252] GRPC error: 2025-08-13T20:17:10.521606207+00:00 stderr F I0813 20:17:10.521489 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49922000Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:17:10.625718320+00:00 stderr F I0813 20:17:10.625536 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34153 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49922000Ki 2025-08-13T20:17:10.627687856+00:00 stderr F I0813 20:17:10.627461 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34153 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:17:25.400881884+00:00 stderr F I0813 20:17:25.400545 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:17:36.059047411+00:00 stderr F I0813 20:17:36.055432 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 18 items received 2025-08-13T20:17:53.142821593+00:00 stderr F I0813 20:17:53.142547 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 9 items received 2025-08-13T20:18:10.516188997+00:00 stderr F I0813 20:18:10.515307 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:18:10.516438474+00:00 stderr F I0813 20:18:10.516409 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:18:10.517535065+00:00 stderr F I0813 20:18:10.516641 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:18:10.518358489+00:00 stderr F I0813 20:18:10.517614 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531300 1 connection.go:251] GRPC response: {"available_capacity":49833275392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531345 1 connection.go:252] GRPC error: 2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531403 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48665308Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:18:11.155853054+00:00 stderr F I0813 20:18:11.155701 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34319 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:18:11.156017789+00:00 stderr F I0813 20:18:11.155957 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34319 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48665308Ki 2025-08-13T20:18:17.333243462+00:00 stderr F I0813 20:18:17.333025 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 9 items received 2025-08-13T20:19:10.517415961+00:00 stderr F I0813 20:19:10.517177 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:19:10.517690579+00:00 stderr F I0813 20:19:10.517663 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:19:10.517998237+00:00 stderr F I0813 20:19:10.517936 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:19:10.518746609+00:00 stderr F I0813 20:19:10.518045 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523373 1 connection.go:251] GRPC response: {"available_capacity":49260212224,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523409 1 connection.go:252] GRPC error: 2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523467 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48105676Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:19:10.548734825+00:00 stderr F I0813 20:19:10.547481 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34447 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48105676Ki 2025-08-13T20:19:10.548734825+00:00 stderr F I0813 20:19:10.547860 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34447 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.517892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518171 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518324 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518330 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:20:10.520223671+00:00 stderr F I0813 20:20:10.520172 1 connection.go:251] GRPC response: {"available_capacity":51637207040,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:20:10.520462378+00:00 stderr F I0813 20:20:10.520441 1 connection.go:252] GRPC error: 2025-08-13T20:20:10.520596282+00:00 stderr F I0813 20:20:10.520548 1 capacity.go:667] Capacity 
Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50426960Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:20:10.562236352+00:00 stderr F I0813 20:20:10.561759 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34570 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:20:10.562236352+00:00 stderr F I0813 20:20:10.562004 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34570 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50426960Ki 2025-08-13T20:21:10.520579165+00:00 stderr F I0813 20:21:10.520199 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:21:10.520690438+00:00 stderr F I0813 20:21:10.520562 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:21:10.520998747+00:00 stderr F I0813 20:21:10.520895 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:21:10.521716037+00:00 stderr F I0813 20:21:10.520905 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:21:10.524537428+00:00 stderr F I0813 20:21:10.524468 1 connection.go:251] GRPC response: {"available_capacity":51637256192,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:21:10.524537428+00:00 stderr F I0813 20:21:10.524501 1 connection.go:252] GRPC error: 2025-08-13T20:21:10.525048933+00:00 stderr F I0813 20:21:10.524594 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50427008Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:21:10.583213255+00:00 stderr F I0813 20:21:10.583011 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34707 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:21:10.583606476+00:00 stderr F I0813 20:21:10.583515 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34707 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50427008Ki 2025-08-13T20:22:10.521492087+00:00 stderr F I0813 20:22:10.520940 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:22:10.521492087+00:00 stderr F I0813 20:22:10.521372 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:22:10.522094324+00:00 stderr F I0813 20:22:10.521708 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:22:10.522720792+00:00 stderr F I0813 20:22:10.521741 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:22:10.525556303+00:00 stderr F I0813 20:22:10.525450 1 connection.go:251] GRPC response: 
{"available_capacity":51641798656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:22:10.525556303+00:00 stderr F I0813 20:22:10.525489 1 connection.go:252] GRPC error: 2025-08-13T20:22:10.525664026+00:00 stderr F I0813 20:22:10.525569 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50431444Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:22:10.541904360+00:00 stderr F I0813 20:22:10.541672 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34825 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:22:10.544800883+00:00 stderr F I0813 20:22:10.544685 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34825 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50431444Ki 2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.523137 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.523664 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.524175 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.524187 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.528753 1 connection.go:251] GRPC response: {"available_capacity":51645571072,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.528970 1 connection.go:252] GRPC error: 2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.529061 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50435128Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:23:10.544002692+00:00 stderr F I0813 20:23:10.543885 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34939 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50435128Ki 2025-08-13T20:23:10.544418204+00:00 stderr F I0813 20:23:10.544368 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34939 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:23:46.064547252+00:00 stderr F I0813 20:23:46.063669 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received 2025-08-13T20:24:10.525005884+00:00 stderr F I0813 20:24:10.524701 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:24:10.525238161+00:00 stderr F I0813 20:24:10.525152 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:24:10.525447487+00:00 stderr F I0813 20:24:10.525375 1 connection.go:244] GRPC 
call: /csi.v1.Controller/GetCapacity 2025-08-13T20:24:10.526138997+00:00 stderr F I0813 20:24:10.525406 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:24:10.528528825+00:00 stderr F I0813 20:24:10.528481 1 connection.go:251] GRPC response: {"available_capacity":51577241600,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:24:10.528583917+00:00 stderr F I0813 20:24:10.528567 1 connection.go:252] GRPC error: 2025-08-13T20:24:10.528880635+00:00 stderr F I0813 20:24:10.528721 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50368400Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:24:10.544500702+00:00 stderr F I0813 20:24:10.544404 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35044 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50368400Ki 2025-08-13T20:24:10.544734719+00:00 stderr F I0813 20:24:10.544673 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35044 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:24:21.337206815+00:00 stderr F I0813 20:24:21.336881 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2025-08-13T20:24:27.539097402+00:00 stderr F I0813 20:24:27.537890 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:24:48.149301469+00:00 stderr F I0813 20:24:48.149070 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 8 items received 2025-08-13T20:25:10.525298092+00:00 stderr F I0813 20:25:10.525174 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:25:10.525392985+00:00 stderr F I0813 20:25:10.525296 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:25:10.525691593+00:00 stderr F I0813 20:25:10.525633 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:25:10.526530837+00:00 stderr F I0813 20:25:10.525670 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:25:10.528561555+00:00 stderr F I0813 20:25:10.528481 1 connection.go:251] GRPC response: {"available_capacity":51577327616,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:25:10.528561555+00:00 stderr F I0813 20:25:10.528519 1 connection.go:252] GRPC error: 2025-08-13T20:25:10.528626707+00:00 stderr F I0813 20:25:10.528560 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50368484Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:25:10.566050317+00:00 stderr F I0813 20:25:10.565076 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35166 for 
{segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50368484Ki 2025-08-13T20:25:10.566050317+00:00 stderr F I0813 20:25:10.565288 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35166 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:25:11.140600555+00:00 stderr F I0813 20:25:11.140487 1 reflector.go:378] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync 2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.140929 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: 
local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.141596 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.141725 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:25:11.332425400+00:00 stderr F I0813 20:25:11.330959 1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync 2025-08-13T20:26:07.408562415+00:00 stderr F I0813 20:26:07.408258 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 10 items received 2025-08-13T20:26:10.525908641+00:00 stderr F I0813 20:26:10.525712 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:26:10.526061135+00:00 stderr F I0813 20:26:10.525979 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:26:10.526190229+00:00 stderr F I0813 20:26:10.526140 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:26:10.526542699+00:00 stderr F I0813 20:26:10.526164 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:26:10.528577897+00:00 stderr F I0813 20:26:10.528486 1 connection.go:251] GRPC response: {"available_capacity":51575119872,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:26:10.528577897+00:00 stderr F I0813 20:26:10.528540 1 connection.go:252] GRPC error: 2025-08-13T20:26:10.528706281+00:00 stderr F I0813 20:26:10.528596 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50366328Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:26:10.540600391+00:00 stderr F I0813 20:26:10.540490 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35280 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:26:10.540904880+00:00 stderr F I0813 20:26:10.540752 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35280 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50366328Ki 
2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527220 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527530 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527858 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527870 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:27:10.562296477+00:00 stderr F I0813 20:27:10.562169 1 connection.go:251] GRPC response: {"available_capacity":51174940672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:27:10.562374879+00:00 stderr F I0813 20:27:10.562202 1 connection.go:252] GRPC error: 2025-08-13T20:27:10.562616176+00:00 stderr F I0813 20:27:10.562443 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49975528Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:27:10.621948322+00:00 stderr F I0813 20:27:10.621763 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35442 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:27:10.622171959+00:00 stderr F I0813 20:27:10.622064 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35442 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49975528Ki 2025-08-13T20:28:10.528586639+00:00 stderr F I0813 20:28:10.528237 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:28:10.528586639+00:00 stderr F I0813 20:28:10.528479 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:28:10.529034552+00:00 stderr F I0813 20:28:10.528970 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:28:10.529561847+00:00 stderr F I0813 20:28:10.529026 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:28:10.532628515+00:00 stderr F I0813 20:28:10.532243 1 connection.go:251] GRPC response: {"available_capacity":51567644672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:28:10.532681407+00:00 stderr F I0813 20:28:10.532664 1 connection.go:252] GRPC error: 2025-08-13T20:28:10.532953675+00:00 stderr F I0813 20:28:10.532871 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50359028Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:28:10.559875859+00:00 stderr F I0813 20:28:10.556836 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35577 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:28:10.559875859+00:00 stderr F I0813 20:28:10.557475 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35577 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50359028Ki 2025-08-13T20:29:10.529715136+00:00 stderr F I0813 20:29:10.529278 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:29:10.529715136+00:00 stderr F I0813 20:29:10.529606 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:29:10.530144499+00:00 stderr F I0813 20:29:10.530084 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:29:10.530790507+00:00 stderr F I0813 20:29:10.530110 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:29:10.535148743+00:00 stderr F I0813 20:29:10.534987 1 connection.go:251] GRPC response: {"available_capacity":51567120384,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:29:10.535148743+00:00 stderr F I0813 20:29:10.535126 1 connection.go:252] GRPC error: 2025-08-13T20:29:10.535486382+00:00 stderr F I0813 20:29:10.535285 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50358516Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:29:10.549869346+00:00 stderr F I0813 20:29:10.549666 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35709 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:29:10.549869346+00:00 stderr F I0813 20:29:10.549727 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35709 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50358516Ki 2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530088 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530249 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530382 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:30:10.530663024+00:00 stderr F I0813 20:30:10.530389 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:30:10.535724309+00:00 stderr F I0813 20:30:10.535645 1 connection.go:251] GRPC response: {"available_capacity":49228713984,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:30:10.535724309+00:00 stderr F I0813 20:30:10.535675 1 connection.go:252] GRPC error: 2025-08-13T20:30:10.535972716+00:00 stderr F I0813 20:30:10.535726 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48074916Ki, new maximumVolumeSize 83295212Ki 
2025-08-13T20:30:10.550323769+00:00 stderr F I0813 20:30:10.550147 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35890 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48074916Ki 2025-08-13T20:30:10.550751311+00:00 stderr F I0813 20:30:10.550690 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35890 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:30:27.073297549+00:00 stderr F I0813 20:30:27.073049 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 14 items received 2025-08-13T20:30:43.344585819+00:00 stderr F I0813 20:30:43.344328 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2025-08-13T20:30:44.158134804+00:00 stderr F I0813 20:30:44.158056 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:31:10.530875095+00:00 stderr F I0813 20:31:10.530527 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:31:10.530875095+00:00 stderr F I0813 20:31:10.530696 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:31:10.530951338+00:00 stderr F I0813 20:31:10.530912 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:31:10.531405681+00:00 stderr F I0813 20:31:10.530922 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:31:10.532597835+00:00 stderr F I0813 20:31:10.532491 1 connection.go:251] GRPC response: {"available_capacity":51566354432,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:31:10.532597835+00:00 stderr F I0813 20:31:10.532523 1 connection.go:252] GRPC error: 2025-08-13T20:31:10.532734639+00:00 stderr F I0813 20:31:10.532566 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50357768Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:31:10.544753214+00:00 stderr F I0813 20:31:10.544528 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36027 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:31:10.545745613+00:00 stderr F I0813 20:31:10.545608 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36027 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50357768Ki 2025-08-13T20:32:10.531836828+00:00 stderr F I0813 20:32:10.531466 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:32:10.531836828+00:00 stderr F I0813 20:32:10.531705 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:32:10.532556459+00:00 stderr F I0813 20:32:10.532005 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 
2025-08-13T20:32:10.532820336+00:00 stderr F I0813 20:32:10.532035 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:32:10.534975388+00:00 stderr F I0813 20:32:10.534850 1 connection.go:251] GRPC response: {"available_capacity":51568164864,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:32:10.534975388+00:00 stderr F I0813 20:32:10.534875 1 connection.go:252] GRPC error: 2025-08-13T20:32:10.535175014+00:00 stderr F I0813 20:32:10.534946 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50359536Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:32:10.553530152+00:00 stderr F I0813 20:32:10.553268 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36146 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:32:10.554085678+00:00 stderr F I0813 20:32:10.553945 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36146 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50359536Ki 2025-08-13T20:32:19.542178946+00:00 stderr F I0813 20:32:19.541133 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:32:39.416373564+00:00 stderr F I0813 20:32:39.416148 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 7 items received 2025-08-13T20:33:10.533275907+00:00 stderr F I0813 20:33:10.532719 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:33:10.533340859+00:00 stderr F I0813 20:33:10.533270 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:33:10.533504244+00:00 stderr F I0813 20:33:10.533448 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:33:10.534404340+00:00 stderr F I0813 20:33:10.533472 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:33:10.537336664+00:00 stderr F I0813 20:33:10.537200 1 connection.go:251] GRPC response: {"available_capacity":51570118656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:33:10.537336664+00:00 stderr F I0813 20:33:10.537232 1 connection.go:252] GRPC error: 2025-08-13T20:33:10.537362225+00:00 stderr F I0813 20:33:10.537289 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50361444Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:33:10.550672547+00:00 stderr F I0813 20:33:10.550436 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36259 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50361444Ki 2025-08-13T20:33:10.551315246+00:00 stderr F I0813 20:33:10.551053 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with 
resource version 36259 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:34:10.533967613+00:00 stderr F I0813 20:34:10.533496 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:34:10.533967613+00:00 stderr F I0813 20:34:10.533736 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:34:10.534398095+00:00 stderr F I0813 20:34:10.534321 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:34:10.536031192+00:00 stderr F I0813 20:34:10.534354 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:34:10.539150562+00:00 stderr F I0813 20:34:10.538875 1 connection.go:251] GRPC response: {"available_capacity":51570245632,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:34:10.539150562+00:00 stderr F I0813 20:34:10.539064 1 connection.go:252] GRPC error: 2025-08-13T20:34:10.539240014+00:00 stderr F I0813 20:34:10.539147 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50361568Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:34:10.554901905+00:00 stderr F I0813 20:34:10.554593 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36359 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:34:10.554901905+00:00 stderr F I0813 20:34:10.554608 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36359 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50361568Ki 2025-08-13T20:35:10.534873013+00:00 stderr F I0813 20:35:10.534360 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:35:10.534985776+00:00 stderr F I0813 20:35:10.534872 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:35:10.535289855+00:00 stderr F I0813 20:35:10.535233 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:35:10.536180410+00:00 stderr F I0813 20:35:10.535261 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:35:10.538021483+00:00 stderr F I0813 20:35:10.537966 1 connection.go:251] GRPC response: {"available_capacity":51569307648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:35:10.538021483+00:00 stderr F I0813 20:35:10.537992 1 connection.go:252] GRPC error: 2025-08-13T20:35:10.538226819+00:00 stderr F I0813 20:35:10.538058 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50360652Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:35:10.552433408+00:00 stderr F I0813 20:35:10.552356 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36494 for 
{segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50360652Ki 2025-08-13T20:35:10.552599143+00:00 stderr F I0813 20:35:10.552576 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36494 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.535607 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.535982 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.536269 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.536277 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:36:10.541271176+00:00 stderr F I0813 20:36:10.541238 1 connection.go:251] GRPC response: {"available_capacity":51568975872,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:36:10.541353548+00:00 stderr F I0813 20:36:10.541337 1 connection.go:252] GRPC error: 2025-08-13T20:36:10.541559594+00:00 stderr F I0813 20:36:10.541489 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50360328Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:36:10.562043683+00:00 stderr F I0813 20:36:10.561909 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.562558298+00:00 stderr F I0813 20:36:10.562329 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36628 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50360328Ki 2025-08-13T20:36:15.162456315+00:00 stderr F I0813 20:36:15.162055 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:37:02.083170034+00:00 stderr F I0813 20:37:02.082585 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received 2025-08-13T20:37:10.537019775+00:00 stderr F I0813 20:37:10.536721 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:37:10.537075267+00:00 stderr F I0813 20:37:10.537017 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:37:10.537088937+00:00 stderr F I0813 20:37:10.537082 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:37:10.537475688+00:00 stderr F I0813 20:37:10.537088 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:37:10.539417454+00:00 stderr F I0813 20:37:10.539325 1 
connection.go:251] GRPC response: {"available_capacity":51568967680,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:37:10.539417454+00:00 stderr F I0813 20:37:10.539367 1 connection.go:252] GRPC error: 2025-08-13T20:37:10.539441095+00:00 stderr F I0813 20:37:10.539415 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49180Mi, new maximumVolumeSize 83295212Ki 2025-08-13T20:37:10.547160107+00:00 stderr F I0813 20:37:10.547063 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36750 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49180Mi 2025-08-13T20:37:10.547378274+00:00 stderr F I0813 20:37:10.547148 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36750 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:37:31.353038325+00:00 stderr F I0813 20:37:31.352700 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2025-08-13T20:38:10.537857165+00:00 stderr F I0813 20:38:10.537355 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:38:10.537937767+00:00 stderr F I0813 20:38:10.537858 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:38:10.539092880+00:00 stderr F I0813 20:38:10.539018 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:38:10.539377699+00:00 stderr F I0813 20:38:10.539057 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:38:10.540839291+00:00 stderr F I0813 20:38:10.540716 1 connection.go:251] GRPC response: {"available_capacity":51566211072,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:38:10.540839291+00:00 stderr F I0813 20:38:10.540743 1 connection.go:252] GRPC error: 2025-08-13T20:38:10.540984755+00:00 stderr F I0813 20:38:10.540877 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50357628Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:38:10.575113129+00:00 stderr F I0813 20:38:10.574998 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36879 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:38:10.575113129+00:00 stderr F I0813 20:38:10.575017 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36879 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50357628Ki 2025-08-13T20:39:10.539029861+00:00 stderr F I0813 20:39:10.538567 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:39:10.539123284+00:00 stderr F I0813 20:39:10.539028 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:39:10.539391742+00:00 stderr F I0813 20:39:10.539323 
1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:39:10.540115593+00:00 stderr F I0813 20:39:10.539350 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542210 1 connection.go:251] GRPC response: {"available_capacity":51565502464,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542237 1 connection.go:252] GRPC error: 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542316 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50356936Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:39:10.555006812+00:00 stderr F I0813 20:39:10.554849 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37007 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50356936Ki 2025-08-13T20:39:10.555279120+00:00 stderr F I0813 20:39:10.555226 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37007 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540041 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540237 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540309 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:40:10.540570459+00:00 stderr F I0813 20:40:10.540314 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:40:10.542059762+00:00 stderr F I0813 20:40:10.541991 1 connection.go:251] GRPC response: {"available_capacity":51564032000,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:40:10.542059762+00:00 stderr F I0813 20:40:10.542021 1 connection.go:252] GRPC error: 2025-08-13T20:40:10.542082482+00:00 stderr F I0813 20:40:10.542041 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50355500Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:40:10.555260212+00:00 stderr F I0813 20:40:10.555106 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37130 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50355500Ki 2025-08-13T20:40:10.555260212+00:00 stderr F I0813 20:40:10.555237 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37130 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:11.141967927+00:00 stderr F I0813 20:40:11.141843 1 reflector.go:378] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync 2025-08-13T20:40:11.142890584+00:00 stderr F 
I0813 20:40:11.142131 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:40:11.142890584+00:00 stderr F I0813 20:40:11.142763 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:40:11.142914225+00:00 stderr F I0813 20:40:11.142894 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: 
PersistentVolumePhase is not Released 2025-08-13T20:40:11.331528343+00:00 stderr F I0813 20:40:11.331348 1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync 2025-08-13T20:41:10.541671996+00:00 stderr F I0813 20:41:10.541162 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:41:10.541847171+00:00 stderr F I0813 20:41:10.541674 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:41:10.542047867+00:00 stderr F I0813 20:41:10.541965 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:41:10.544402995+00:00 stderr F I0813 20:41:10.542001 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:41:10.546830845+00:00 stderr F I0813 20:41:10.546706 1 connection.go:251] GRPC response: {"available_capacity":51564699648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:41:10.546830845+00:00 stderr F I0813 20:41:10.546734 1 connection.go:252] GRPC error: 2025-08-13T20:41:10.546981129+00:00 stderr F I0813 20:41:10.546873 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50356152Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:41:10.566679587+00:00 stderr F I0813 20:41:10.566585 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37251 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:41:10.566918594+00:00 stderr F I0813 20:41:10.566893 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37251 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50356152Ki 2025-08-13T20:41:43.426306873+00:00 stderr F I0813 20:41:43.424528 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:41:52.548745175+00:00 stderr F I0813 20:41:52.546570 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 10 items received 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542362 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542512 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542703 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:42:10.547095161+00:00 stderr F I0813 20:42:10.542711 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555855 1 connection.go:251] GRPC response: {"available_capacity":49223258112,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 
20:42:10.555892 1 connection.go:252] GRPC error: 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555940 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48069588Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:42:10.582284396+00:00 stderr F I0813 20:42:10.582197 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37403 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48069588Ki 2025-08-13T20:42:10.584191091+00:00 stderr F I0813 20:42:10.584074 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:42:36.366462995+00:00 stderr F I0813 20:42:36.366327 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.366722473+00:00 stderr F I0813 20:42:36.366690 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 0 items received 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.372885 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.373023 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 11 items received 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.378923 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.378981 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 5 items received 2025-08-13T20:42:36.417150636+00:00 stderr F I0813 20:42:36.406967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.417150636+00:00 stderr F I0813 20:42:36.407104 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:42:36.481197803+00:00 stderr F I0813 20:42:36.479176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.492491459+00:00 stderr F I0813 20:42:36.490471 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 0 items received 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.604957 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=7m4s&timeoutSeconds=424&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605095 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=5m12s&timeoutSeconds=312&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605145 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of 
*v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605333 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.606900 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:37.849300036+00:00 stderr F I0813 20:42:37.849155 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:37.870661042+00:00 stderr F I0813 20:42:37.870079 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.091716185+00:00 stderr F I0813 20:42:38.089119 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.150465799+00:00 stderr F I0813 20:42:38.139752 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.205443034+00:00 stderr F I0813 20:42:38.205106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.059831485+00:00 stderr F I0813 
20:42:40.059687 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=6m21s&timeoutSeconds=381&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.085673111+00:00 stderr F I0813 20:42:40.085582 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=7m46s&timeoutSeconds=466&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.347258672+00:00 stderr F I0813 20:42:40.347129 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=5m7s&timeoutSeconds=307&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.721875683+00:00 stderr F I0813 20:42:40.721701 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:41.306913760+00:00 stderr F I0813 20:42:41.306567 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=5m13s&timeoutSeconds=313&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015073043234033165 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000061415073043234033170 0ustar zuulzuul2025-10-13T00:15:03.619024738+00:00 stderr F I1013 00:15:03.604704 1 main.go:149] calling CSI driver to discover driver name 2025-10-13T00:15:03.619024738+00:00 stderr F I1013 00:15:03.611081 1 main.go:155] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-10-13T00:15:03.619024738+00:00 stderr F I1013 00:15:03.611095 1 main.go:183] ServeMux listening at "0.0.0.0:9898" ././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000061415073043234033170 0ustar zuulzuul2025-08-13T19:59:57.766335818+00:00 stderr F I0813 19:59:57.747274 1 main.go:149] calling CSI driver to discover driver name 2025-08-13T19:59:58.592328403+00:00 stderr F I0813 19:59:58.579963 1 main.go:155] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-08-13T19:59:58.592328403+00:00 stderr F I0813 19:59:58.580031 1 main.go:183] ServeMux listening at "0.0.0.0:9898" ././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000004641115073043232033063 0ustar zuulzuul2025-08-13T20:01:08.774138269+00:00 stderr F I0813 20:01:08.766484 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc0005f2640 cert-dir:0xc0005f2820 cert-secrets:0xc0005f25a0 configmaps:0xc0005f2140 namespace:0xc0002d9f40 optional-cert-configmaps:0xc0005f2780 optional-configmaps:0xc0005f2280 optional-secrets:0xc0005f21e0 pod:0xc0005f2000 pod-manifest-dir:0xc0005f23c0 resource-dir:0xc0005f2320 revision:0xc0002d9ea0 secrets:0xc0005f20a0 v:0xc0005f3220] [0xc0005f3220 0xc0002d9ea0 0xc0002d9f40 0xc0005f2000 0xc0005f2320 0xc0005f23c0 0xc0005f2140 0xc0005f2280 0xc0005f20a0 0xc0005f21e0 0xc0005f2820 0xc0005f2640 0xc0005f2780 0xc0005f25a0] [] map[cert-configmaps:0xc0005f2640 cert-dir:0xc0005f2820 cert-secrets:0xc0005f25a0 configmaps:0xc0005f2140 help:0xc0005f35e0 kubeconfig:0xc0002d9e00 log-flush-frequency:0xc0005f3180 namespace:0xc0002d9f40 optional-cert-configmaps:0xc0005f2780 optional-cert-secrets:0xc0005f26e0 optional-configmaps:0xc0005f2280 optional-secrets:0xc0005f21e0 pod:0xc0005f2000 pod-manifest-dir:0xc0005f23c0 pod-manifests-lock-file:0xc0005f2500 resource-dir:0xc0005f2320 revision:0xc0002d9ea0 secrets:0xc0005f20a0 timeout-duration:0xc0005f2460 v:0xc0005f3220 vmodule:0xc0005f32c0] [0xc0002d9e00 0xc0002d9ea0 0xc0002d9f40 0xc0005f2000 0xc0005f20a0 0xc0005f2140 0xc0005f21e0 0xc0005f2280 0xc0005f2320 0xc0005f23c0 0xc0005f2460 0xc0005f2500 0xc0005f25a0 0xc0005f2640 0xc0005f26e0 0xc0005f2780 0xc0005f2820 0xc0005f3180 0xc0005f3220 0xc0005f32c0 0xc0005f35e0] [0xc0005f2640 0xc0005f2820 0xc0005f25a0 0xc0005f2140 0xc0005f35e0 
0xc0002d9e00 0xc0005f3180 0xc0002d9f40 0xc0005f2780 0xc0005f26e0 0xc0005f2280 0xc0005f21e0 0xc0005f2000 0xc0005f23c0 0xc0005f2500 0xc0005f2320 0xc0002d9ea0 0xc0005f20a0 0xc0005f2460 0xc0005f3220 0xc0005f32c0] map[104:0xc0005f35e0 118:0xc0005f3220] [] -1 0 0xc0009ffb30 true 0x73b100 []} 2025-08-13T20:01:08.774138269+00:00 stderr F I0813 20:01:08.770143 1 cmd.go:92] (*installerpod.InstallOptions)(0xc0000dc820)({ 2025-08-13T20:01:08.774138269+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:01:08.774138269+00:00 stderr F Revision: (string) (len=2) "10", 2025-08-13T20:01:08.774138269+00:00 stderr F NodeName: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager", 2025-08-13T20:01:08.774138269+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:01:08.774138269+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=27) "service-account-private-key", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=32) "cluster-policy-controller-config", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=10) "service-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=15) "recycler-config" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=12) "cloud-config" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=10) "csr-signer" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:01:08.774138269+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=9) "client-ca" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 
2025-08-13T20:01:08.774138269+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", 2025-08-13T20:01:08.774138269+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.774138269+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:01:08.774138269+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:01:08.774138269+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:01:08.774138269+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:01:08.774138269+00:00 stderr F }) 2025-08-13T20:01:08.830058183+00:00 stderr F I0813 20:01:08.828941 1 cmd.go:409] Getting controller reference for node crc 2025-08-13T20:01:09.501531129+00:00 stderr F I0813 20:01:09.497182 1 cmd.go:422] Waiting for installer revisions to settle for node crc 2025-08-13T20:01:09.701953794+00:00 stderr F I0813 20:01:09.687538 1 cmd.go:502] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:01:20.207197869+00:00 stderr F I0813 20:01:20.203486 1 cmd.go:502] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:01:31.558920221+00:00 stderr F I0813 20:01:31.543103 1 cmd.go:514] Waiting additional period after revisions have settled for node crc 2025-08-13T20:02:01.544989289+00:00 stderr F I0813 20:02:01.544661 1 cmd.go:520] Getting installer pods for node crc 2025-08-13T20:02:03.699434270+00:00 stderr F I0813 20:02:03.699248 1 cmd.go:538] Latest installer revision for node crc is: 10 2025-08-13T20:02:03.699434270+00:00 stderr F I0813 20:02:03.699332 1 cmd.go:427] Querying kubelet version for node crc 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744093 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744255 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744967 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744985 1 cmd.go:225] Getting secrets ... 2025-08-13T20:02:11.197718897+00:00 stderr F I0813 20:02:11.192487 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-10 2025-08-13T20:02:15.107310794+00:00 stderr F I0813 20:02:15.095719 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-10 2025-08-13T20:02:17.830040326+00:00 stderr F I0813 20:02:17.813908 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-10 2025-08-13T20:02:17.830040326+00:00 stderr F I0813 20:02:17.818969 1 cmd.go:238] Getting config maps ... 
2025-08-13T20:02:20.454584586+00:00 stderr F I0813 20:02:20.454385 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-10
2025-08-13T20:02:21.108877962+00:00 stderr F I0813 20:02:21.107944 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-10
2025-08-13T20:02:21.667184459+00:00 stderr F I0813 20:02:21.667055 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-10
2025-08-13T20:02:23.567610131+00:00 stderr F I0813 20:02:23.567409 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-10
2025-08-13T20:02:24.259894000+00:00 stderr F I0813 20:02:24.250233 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-10
2025-08-13T20:02:25.669894114+00:00 stderr F I0813 20:02:25.666308 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-10
2025-08-13T20:02:25.939952998+00:00 stderr F I0813 20:02:25.939182 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-10
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.164324 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-10
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174281 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-10: configmaps "cloud-config-10" not found
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174311 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174970 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/ca.crt" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.175600 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177183 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/service-ca.crt" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177519 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/token" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177704 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177760 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.pub" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177939 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.key" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178068 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178119 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.crt" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178198 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.key" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178285 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178454 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config/config.yaml" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178607 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config" ...
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178729 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config/config.yaml" ...
2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187037 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig" ...
2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187129 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig/kubeconfig" ...
2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187265 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187323 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187513 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187632 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187875 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/pod.yaml" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188023 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/version" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188169 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188321 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config/recycler-pod.yaml" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188411 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188474 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca/ca-bundle.crt" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188555 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188662 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca/ca-bundle.crt" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188758 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ...
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188770 1 cmd.go:225] Getting secrets ...
2025-08-13T20:02:26.880541580+00:00 stderr F I0813 20:02:26.880477 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer
2025-08-13T20:02:31.905193168+00:00 stderr F I0813 20:02:31.903646 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.920573976+00:00 stderr F I0813 20:02:31.917909 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.972818077+00:00 stderr F I0813 20:02:31.972687 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.242241153+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.244102906+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC 
m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.244195099+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000755000175000017500000000000015073043233033141 5ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000755000175000017500000000000015073043233033141 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000644000175000017500000002640615073043233033153 0ustar zuulzuul2025-10-13T00:15:26.903087128+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:26.903451279+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:36.907969743+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:36.908052335+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:46.900885287+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:46.900978220+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:56.901251525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:15:56.901251525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:15:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:06.902365026+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:06.902494220+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:16.901229374+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:16.901296436+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:26.901156327+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:26.901518649+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:36.899774008+00:00 stderr F 
::ffff:10.217.0.2 - - [13/Oct/2025 00:16:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:36.899850950+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:46.900245689+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:46.900315281+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:56.900551439+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:16:56.901211229+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:16:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:06.900907812+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:06.901058466+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:16.901352917+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:16.901414859+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:26.899921075+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:26.899991717+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:36.899801342+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:36.899864204+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:46.901916490+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:46.901997913+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:56.899982219+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:17:56.900053581+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:17:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:06.900378165+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:06.900378165+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:16.900678219+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:16.900788382+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:26.900728326+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:26.900859450+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:36.900642418+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:36.900742231+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:46.901292332+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:46.901292332+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:56.900932666+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:18:56.900991468+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:18:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:06.901564123+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:06.901684656+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:06] "GET / 
HTTP/1.1" 200 - 2025-10-13T00:19:16.901055433+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:16.901162876+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:26.902839057+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:26.902953390+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:36.900135270+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:36.900270434+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:46.901465039+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:46.901566492+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:56.900382239+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:19:56.900553534+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:19:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:06.898897491+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:06.898945747+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:16.900932059+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:16.900932059+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:26.900423255+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:26.900486256+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:36.900552677+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:36.900552677+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:46.899585932+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:46.899886650+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:56.900559459+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:20:56.900625081+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:20:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:06.899668843+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:06.899725325+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:16.900730381+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:16.900862324+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:26.900064611+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:26.900199574+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:36.900360546+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:36.900490260+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:46.900484337+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:46.900484337+00:00 
stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:56.899793366+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:21:56.899864538+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:21:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:06.899658939+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:06.900244774+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:16.899560783+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:16.899609734+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:26.900506228+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:26.900585950+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:36.899505737+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:36.900125434+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:46.900808333+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:46.900808333+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:56.899519218+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:22:56.899574259+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:22:56] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:06.901028330+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:06.901454812+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:06] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:16.900405523+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:16.900483075+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:16] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:26.900018633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:26.900057634+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:26] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:36.900481037+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:36.900776965+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:36] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:46.899370655+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:46] "GET / HTTP/1.1" 200 - 2025-10-13T00:23:46.899811487+00:00 stderr F ::ffff:10.217.0.2 - - [13/Oct/2025 00:23:46] "GET / HTTP/1.1" 200 - ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000644000175000017500000014462315073043233033155 0ustar zuulzuul2025-08-13T20:04:44.888728027+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:44.893972447+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:54.878027590+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:54] "GET / HTTP/1.1" 200 - 
2025-08-13T20:04:54.878081642+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:04.898170868+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:04.898170868+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:14.877953171+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:14.878994381+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:24.878938698+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:24.901763522+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:34.880028638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:34.888279825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:44.877050427+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:44.877050427+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:54.877406116+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:54.877406116+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:04.873914236+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:04.897182743+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:14.875633185+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:14.876301114+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:24.876177779+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:24.887274867+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:34.901938306+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:34.901938306+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:44.884100374+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:44.884670630+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:54.887548271+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:06:54.901070549+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:06:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:04.877456080+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:04.878290154+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:14.876924493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:14.876924493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:24.876599141+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:24.877872537+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:34.877159516+00:00 stderr F 
::ffff:10.217.0.2 - - [13/Aug/2025 20:07:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:34.878752791+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:44.876859645+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:44.876859645+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:54.884906584+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:07:54.889264949+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:07:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:04.874438713+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:04.875134483+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:14.878531079+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:14.879438245+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:24.875006197+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:24.877290643+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:34.875727365+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:34.875946571+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:44.875280310+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:44.875334141+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:54.896377233+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:08:54.896377233+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:08:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:04.874617457+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:04.875455401+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:14.876610033+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:14.876989734+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:24.875273435+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:24.875411979+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:34.876292742+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:34.876361464+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:44.877966247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:44.877966247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:54.875029411+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:09:54.875664350+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:09:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:04.891257056+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:04.891257056+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:04] "GET / 
HTTP/1.1" 200 - 2025-08-13T20:10:14.875624635+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:14.878898049+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:24.875490887+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:24.876027833+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:34.874839007+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:34.875504136+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:44.875599527+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:44.876278696+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:54.874877994+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:10:54.875300776+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:10:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:04.885986769+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:04.888866112+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:14.877410392+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:14.877525665+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:24.875386473+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:24.875896337+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:34.876020156+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:34.876020156+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:44.875280875+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:44.876194671+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:54.875339943+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:11:54.875717254+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:11:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:04.874858908+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:04.874971102+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:14.874866656+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:14.875305319+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:24.875899935+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:24.875899935+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:34.874569125+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:34.875109830+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:44.875420674+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:44.876627619+00:00 
stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:54.873643782+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:12:54.875057172+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:12:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:04.874175265+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:04.874244207+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:14.874158743+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:14.874721559+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:24.874623755+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:24.874623755+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:34.874381965+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:34.874458417+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:44.875748023+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:44.876144354+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:54.873665730+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:13:54.874572326+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:13:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:04.875581986+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:04.875676059+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:14.875640415+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:14.876189791+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:24.875375544+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:24.875468727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:34.875174045+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:34.875298628+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:44.874482803+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:44.874548545+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:54.880318548+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:14:54.881359348+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:14:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:04.875109967+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:04.875320773+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:14.876096772+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:14.876096772+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:24.876631268+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 
20:15:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:24.877218265+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:34.874602682+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:34.874857909+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:44.873905452+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:44.874024645+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:54.874181191+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:15:54.874637124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:15:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:04.875879751+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:04.876344854+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:14.874633168+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:14.875580235+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:24.874567479+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:24.874650501+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:34.874614263+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:34.874796578+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:44.875339293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:44.875495158+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:54.875721814+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:16:54.875840738+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:16:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:04.879383970+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:04.879619367+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:14.873847532+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:14.875212991+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:24.875065068+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:24.875166851+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:34.873994679+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:34.874061241+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:44.877205541+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:44.877711606+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:54.875900034+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:17:54.876359368+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:17:54] "GET / HTTP/1.1" 200 - 
2025-08-13T20:18:04.875607908+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:04.875664629+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:14.876385011+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:14.876385011+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:24.874482290+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:24.874548892+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:34.874673825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:34.874673825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:44.875253153+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:44.875954123+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:54.874702969+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:18:54.875022458+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:18:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:04.875556776+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:04.875935727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:14.874408274+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:14.874909528+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:24.875276633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:24.875577012+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:34.874678069+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:34.874678069+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:44.875894925+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:44.876254865+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:54.876675080+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:19:54.885165523+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:19:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:04.874873544+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:04.875042369+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:14.875571982+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:14.875866700+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:24.874903896+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:24.875017079+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:34.874503889+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:34.874503889+00:00 stderr F 
::ffff:10.217.0.2 - - [13/Aug/2025 20:20:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:44.875141889+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:44.875279513+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:54.874700765+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:20:54.875555170+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:20:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:04.878385424+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:04.878545968+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:14.875189197+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:14.875267969+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:24.874953647+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:24.875516293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:34.876191494+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:34.876191494+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:44.874696495+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:44.875341083+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:54.875379996+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:21:54.875706686+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:21:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:04.876192689+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:04.877191887+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:14.874716719+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:14.875201063+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:24.875537137+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:24.875597658+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:34.875666620+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:34.876148754+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:44.875079835+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:44.875693912+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:54.876059296+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:22:54.876180939+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:22:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:04.874739301+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:04.875832353+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:14.877123410+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:14] "GET / 
HTTP/1.1" 200 - 2025-08-13T20:23:14.877583663+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:24.876410912+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:24.876478694+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:34.877141128+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:34.877141128+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:44.875017840+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:44.875167945+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:54.875209594+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:23:54.875276616+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:23:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:04.876443140+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:04.876628735+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:14.875599025+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:14.875701158+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:24.875469477+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:24.875730965+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:34.874444668+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:34.874518460+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:44.876177127+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:44.876644111+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:54.875238157+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:24:54.875323130+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:24:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:04.874918895+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:04.875500371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:14.875561522+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:14.875691016+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:24.876888822+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:24.877013085+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:34.874693585+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:34.874869520+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:44.874894170+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:44.875096215+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:54.877111803+00:00 
stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:25:54.877111803+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:25:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:04.875430383+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:04.875612748+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:14.875736880+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:14.875937006+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:24.876522633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:24.877168101+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:34.873941987+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:34.874279987+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:44.875512929+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:44.875607191+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:54.882751573+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:26:54.882959629+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:26:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:04.874727566+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:04.875175359+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:14.873916413+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:14.876865648+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:24.879849482+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:24.880450389+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:34.874746425+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:34.875950389+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:44.875592499+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:44.876326890+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:54.875055852+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:27:54.875132294+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:27:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:04.876076826+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:04.876146048+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:14.874762562+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:14.874980408+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:24.880971247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:24.880971247+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 
20:28:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:34.874637832+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:34.874726785+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:44.875949525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:44.876455410+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:54.878058891+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:28:54.878058891+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:28:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:04.873956788+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:04.874317879+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:14.875684022+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:14.875974681+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:24.876114270+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:24.876246984+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:34.875232380+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:34.877091683+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:44.876884172+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:44.877306124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:54.875609259+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:29:54.876537436+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:29:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:04.874713649+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:04.877918661+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:14.876076045+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:14.876296501+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:24.877316725+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:24.879651342+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:34.878995877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:34.880163371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:44.875007381+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:44.875131234+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:54.876424911+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:30:54.876424911+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:30:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:04.876680163+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:04] "GET / HTTP/1.1" 200 - 
2025-08-13T20:31:04.877402134+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:14.874521638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:14.874675732+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:24.874946315+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:24.874946315+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:34.875186878+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:34.875271910+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:44.877499258+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:44.877911550+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:54.873995226+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:31:54.873995226+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:31:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:04.874318060+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:04.874364502+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:14.874591153+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:14.874696446+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:24.876263998+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:24.876531425+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:34.874191296+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:34.874268209+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:44.876020945+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:44.876730045+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:54.876964326+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:32:54.877179202+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:32:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:04.876386117+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:04.876494020+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:14.875852398+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:14.875927800+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:24.874691828+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:24.874925175+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:34.874654405+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:34.875039966+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:44.877827131+00:00 stderr F 
::ffff:10.217.0.2 - - [13/Aug/2025 20:33:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:44.878375227+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:54.874361527+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:33:54.875222872+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:33:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:04.875028260+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:04.875152244+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:14.873858946+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:14.873997380+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:24.877743674+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:24.878407944+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:34.875057881+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:34.875226936+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:44.876473507+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:44.876709633+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:54.874539769+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:34:54.874539769+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:34:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:04.874885285+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:04.874979877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:14.875463835+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:14.875639401+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:24.873876052+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:24.874397867+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:34.876251115+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:34.876354308+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:44.875533638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:44.875860217+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:54.873841977+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:35:54.873909189+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:35:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:04.876650874+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:04.876740907+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:14.877361679+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:14.877850293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:14] "GET / 
HTTP/1.1" 200 - 2025-08-13T20:36:24.874589473+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:24.875094478+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:34.874493960+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:34.874741507+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:44.874764953+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:44.875077672+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:54.874251540+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:54.874731414+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:04.873286331+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:04.874243149+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:14.875478292+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:14.875529104+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:24.875402662+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:24.875873855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:34.873890123+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:34.874092298+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:44.876540775+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:44.876999478+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:54.875221698+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:54.875293580+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:04.876490367+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:04.877680001+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:14.874964371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:14.875125636+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:24.874243342+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:24.875336493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:34.876535549+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:34.876732334+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:44.874252855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:44.876123839+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:54.873657386+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:54.874943103+00:00 
stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:04.875230903+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:04.875357456+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:14.874139573+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:14.875194144+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:24.876877913+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:24.878520320+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:34.876636135+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:34.881854865+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:44.876229275+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:44.876543124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:54.878921304+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:54.879038727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:04.875610138+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:04.876332729+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:14.874282691+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:14.874338962+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:24.875280032+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:24.875704484+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:34.873937883+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:34.874442447+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:44.875692694+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:44.876143877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:54.876345164+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:54.876605542+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:04.875234483+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:04.875344667+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:14.874629855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:14.874690807+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:24.875425519+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:24.876146620+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:34.879277603+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 
20:41:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:34.879277603+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:44.874017071+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:44.874453514+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:54.876080042+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:54.876712460+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:04.878643778+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:04.880964525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:14.885862486+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:14.885862486+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:24.877525607+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:24.878525616+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:34.873933626+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:34.875990815+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:39.843300253+00:00 stdout F serving from /tmp/tmp2jv0ugaq ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000644000175000017500000000011315073043233033136 0ustar zuulzuul2025-08-13T20:04:14.907116744+00:00 stdout F serving from /tmp/tmpsrrtesg5 ././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000755000175000017500000000000015073043233033022 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000755000175000017500000000000015073043233033022 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000644000175000017500000000113215073043233033021 0ustar zuulzuul2025-10-13T00:15:01.946481337+00:00 stdout F serving on 8888 2025-10-13T00:15:01.946481337+00:00 stdout F serving on 8080 2025-10-13T00:17:29.929742659+00:00 stdout F Serving canary healthcheck request 
2025-10-13T00:18:29.967575990+00:00 stdout F Serving canary healthcheck request 2025-10-13T00:19:30.012906325+00:00 stdout F Serving canary healthcheck request 2025-10-13T00:20:30.036975256+00:00 stdout F Serving canary healthcheck request 2025-10-13T00:21:30.062519703+00:00 stdout F Serving canary healthcheck request 2025-10-13T00:23:30.098107426+00:00 stdout F Serving canary healthcheck request ././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-cana0000644000175000017500000000555215073043233033033 0ustar zuulzuul2025-08-13T19:59:34.879020507+00:00 stdout F serving on 8888 2025-08-13T19:59:35.198359410+00:00 stdout F serving on 8080 2025-08-13T20:08:02.339625798+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:09:02.392271856+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:10:02.482997678+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:11:02.534550541+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:12:02.579987363+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:13:02.622390324+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:14:02.665508540+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:15:02.709870087+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:16:02.748527721+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:17:02.809741878+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:18:03.008946071+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:19:04.599125722+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:20:04.657458550+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:21:04.729419456+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:22:04.794676989+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:23:04.860257187+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:24:04.910380940+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:25:04.970661472+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:26:05.020180631+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:27:05.076309120+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:28:05.120889853+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:29:05.169859934+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:30:05.293400605+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:31:05.339396053+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:32:05.396747728+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:33:05.450110989+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:34:05.494021784+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:35:05.541527757+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:36:05.601877841+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:37:05.651693983+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:38:05.693964495+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:39:05.739405167+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:40:05.786630512+00:00 
stdout F Serving canary healthcheck request 2025-08-13T20:41:05.844252540+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:42:05.898761088+00:00 stdout F Serving canary healthcheck request ././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000755000175000017500000000000015073043233033037 5ustar zuulzuul././@LongLink0000644000000000000000000000031000000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000755000175000017500000000000015073043233033037 5ustar zuulzuul././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000644000175000017500000001665715073043233033060 0ustar zuulzuul2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.109862 1 flags.go:64] FLAG: --add-dir-header="false" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.109997 1 flags.go:64] FLAG: --allow-paths="[]" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110005 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110008 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110012 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110017 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110020 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110024 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110028 1 flags.go:64] FLAG: --client-ca-file="" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110031 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110034 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110038 1 flags.go:64] FLAG: --http2-disable="false" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110041 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110045 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110049 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110054 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110057 1 flags.go:64] FLAG: --kube-api-burst="0" 
2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110061 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110065 1 flags.go:64] FLAG: --kubeconfig="" 2025-10-13T00:12:49.110074847+00:00 stderr F I1013 00:12:49.110067 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110070 1 flags.go:64] FLAG: --log-dir="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110074 1 flags.go:64] FLAG: --log-file="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110077 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110081 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110092 1 flags.go:64] FLAG: --logtostderr="true" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110095 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110098 1 flags.go:64] FLAG: --oidc-clientID="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110101 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110104 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110107 1 flags.go:64] FLAG: --oidc-issuer="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110109 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110117 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110120 1 flags.go:64] FLAG: --one-output="false" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110123 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110126 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:9192" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110129 1 flags.go:64] FLAG: --skip-headers="false" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110132 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110135 1 flags.go:64] FLAG: --stderrthreshold="" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110138 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-10-13T00:12:49.110194370+00:00 stderr F I1013 00:12:49.110142 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110190 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110196 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110199 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110204 1 flags.go:64] FLAG: --upstream="http://127.0.0.1:9191/" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110207 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110209 1 flags.go:64] FLAG: --upstream-client-cert-file="" 
2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110212 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110215 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-10-13T00:12:49.110222071+00:00 stderr F I1013 00:12:49.110218 1 flags.go:64] FLAG: --v="3" 2025-10-13T00:12:49.110235601+00:00 stderr F I1013 00:12:49.110221 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:12:49.110235601+00:00 stderr F I1013 00:12:49.110226 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:12:49.110246082+00:00 stderr F W1013 00:12:49.110233 1 deprecated.go:66] 2025-10-13T00:12:49.110246082+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:12:49.110246082+00:00 stderr F 2025-10-13T00:12:49.110246082+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-10-13T00:12:49.110246082+00:00 stderr F 2025-10-13T00:12:49.110246082+00:00 stderr F =============================================== 2025-10-13T00:12:49.110246082+00:00 stderr F 2025-10-13T00:12:49.110258102+00:00 stderr F I1013 00:12:49.110243 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-10-13T00:12:49.113430532+00:00 stderr F I1013 00:12:49.111055 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:12:49.113430532+00:00 stderr F I1013 00:12:49.111101 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:12:49.113430532+00:00 stderr F I1013 00:12:49.111601 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9192 2025-10-13T00:12:49.113430532+00:00 stderr F I1013 00:12:49.111979 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9192 ././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000644000175000017500000001706415073043233033051 0ustar zuulzuul2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312609 1 flags.go:64] FLAG: --add-dir-header="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312952 1 flags.go:64] FLAG: --allow-paths="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312962 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312973 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312977 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312983 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312990 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312994 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312999 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313003 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313007 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313011 1 
flags.go:64] FLAG: --http2-disable="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313015 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313020 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313024 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313030 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313037 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313042 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313054 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313621 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313717 1 flags.go:64] FLAG: --log-dir="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313723 1 flags.go:64] FLAG: --log-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313728 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313735 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313753 1 flags.go:64] FLAG: --logtostderr="true" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313759 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313765 1 flags.go:64] FLAG: --oidc-clientID="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313770 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313861 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313867 1 flags.go:64] FLAG: --oidc-issuer="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313871 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313879 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313883 1 flags.go:64] FLAG: --one-output="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313887 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313892 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:9192" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313906 1 flags.go:64] FLAG: --skip-headers="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313912 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313917 1 flags.go:64] FLAG: --stderrthreshold="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313921 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313925 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313936 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 
2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313941 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313945 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313951 1 flags.go:64] FLAG: --upstream="http://127.0.0.1:9191/" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313956 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313962 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313966 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313970 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313974 1 flags.go:64] FLAG: --v="3" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313979 1 flags.go:64] FLAG: --version="false" 2025-08-13T19:50:43.318225195+00:00 stderr F I0813 19:50:43.313985 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:50:43.318225195+00:00 stderr F W0813 19:50:43.316320 1 deprecated.go:66] 2025-08-13T19:50:43.318225195+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F =============================================== 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F I0813 19:50:43.316363 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:50:43.323724493+00:00 stderr F I0813 19:50:43.321089 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:43.323724493+00:00 stderr F I0813 19:50:43.321223 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:43.324886206+00:00 stderr F I0813 19:50:43.324349 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9192 2025-08-13T19:50:43.333905294+00:00 stderr F I0813 19:50:43.331692 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9192 2025-08-13T20:42:46.941041162+00:00 stderr F I0813 20:42:46.940758 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000032400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000755000175000017500000000000015073043233033037 5ustar zuulzuul././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000644000175000017500000001302015073043233033035 0ustar zuulzuul2025-10-13T00:12:50.202987702+00:00 stderr F W1013 00:12:50.200708 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-10-13T00:12:50.203163807+00:00 stderr F W1013 00:12:50.203135 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-10-13T00:12:50.203292860+00:00 stderr F I1013 00:12:50.203272 1 main.go:150] setting up manager 2025-10-13T00:12:50.213837321+00:00 stderr F I1013 00:12:50.213762 1 main.go:168] registering components 2025-10-13T00:12:50.213837321+00:00 stderr F I1013 00:12:50.213781 1 main.go:170] setting up scheme 2025-10-13T00:12:50.215111917+00:00 stderr F I1013 00:12:50.214127 1 main.go:208] setting up controllers 2025-10-13T00:12:50.215111917+00:00 stderr F I1013 00:12:50.214157 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-10-13T00:12:50.215111917+00:00 stderr F I1013 00:12:50.214169 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-10-13T00:12:50.215111917+00:00 stderr F I1013 00:12:50.214347 1 main.go:233] starting the cmd 2025-10-13T00:12:50.215111917+00:00 stderr F I1013 00:12:50.214578 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-10-13T00:12:50.219582695+00:00 stderr F I1013 00:12:50.216350 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-10-13T00:12:50.219582695+00:00 stderr F I1013 00:12:50.217424 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 2025-10-13T00:13:20.222452804+00:00 stderr F E1013 00:13:20.222386 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:17:09.452365480+00:00 stderr F I1013 00:17:09.452287 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-10-13T00:17:09.453959410+00:00 stderr F I1013 00:17:09.453917 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-10-13T00:17:09.453959410+00:00 stderr F I1013 00:17:09.453950 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-10-13T00:17:09.453977120+00:00 stderr F I1013 00:17:09.453965 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-10-13T00:17:09.454291850+00:00 stderr F I1013 00:17:09.454260 1 recorder.go:104] "crc_54c2649e-9a04-4a51-b3da-40ae4773c7df became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41430"} reason="LeaderElection" 2025-10-13T00:17:09.454388523+00:00 stderr F I1013 00:17:09.454353 1 status.go:97] Starting cluster operator status controller 2025-10-13T00:17:09.458041066+00:00 stderr F I1013 00:17:09.457982 
1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-10-13T00:17:09.482632418+00:00 stderr F I1013 00:17:09.481545 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:09.594449641+00:00 stderr F I1013 00:17:09.594367 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:09.704867670+00:00 stderr F I1013 00:17:09.704501 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-10-13T00:22:51.626205299+00:00 stderr F E1013 00:22:51.626100 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:22:59.091947008+00:00 stderr F I1013 00:22:59.091878 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-10-13T00:22:59.427362870+00:00 stderr F I1013 00:22:59.426855 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:22:59.619888213+00:00 stderr F I1013 00:22:59.619794 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 ././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000644000175000017500000001665615073043233033057 0ustar zuulzuul2025-08-13T20:05:12.855041932+00:00 stderr F W0813 20:05:12.854614 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:05:12.856080652+00:00 stderr F W0813 20:05:12.855207 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-08-13T20:05:12.856080652+00:00 stderr F I0813 20:05:12.855407 1 main.go:150] setting up manager 2025-08-13T20:05:12.856517694+00:00 stderr F I0813 20:05:12.856386 1 main.go:168] registering components 2025-08-13T20:05:12.856517694+00:00 stderr F I0813 20:05:12.856465 1 main.go:170] setting up scheme 2025-08-13T20:05:12.857273606+00:00 stderr F I0813 20:05:12.857204 1 main.go:208] setting up controllers 2025-08-13T20:05:12.857273606+00:00 stderr F I0813 20:05:12.857261 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-08-13T20:05:12.857315647+00:00 stderr F I0813 20:05:12.857279 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-08-13T20:05:12.860378595+00:00 stderr F I0813 20:05:12.860230 1 main.go:233] starting the cmd 2025-08-13T20:05:12.860902380+00:00 stderr F I0813 20:05:12.860686 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T20:05:12.861960160+00:00 stderr F I0813 20:05:12.861896 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 2025-08-13T20:05:12.863479814+00:00 stderr F I0813 20:05:12.863313 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-08-13T20:08:02.991035955+00:00 stderr F I0813 20:08:02.989962 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-08-13T20:08:02.991690854+00:00 stderr F I0813 20:08:02.990989 1 recorder.go:104] "crc_38dd04bb-211c-4052-882f-1b12e44fa6dd became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"32804"} reason="LeaderElection" 2025-08-13T20:08:03.000627140+00:00 stderr F I0813 20:08:03.000537 1 status.go:97] Starting cluster operator status controller 2025-08-13T20:08:03.018414770+00:00 stderr F I0813 20:08:03.017937 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:08:03.021056156+00:00 stderr F I0813 20:08:03.020547 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-08-13T20:08:03.021056156+00:00 stderr F I0813 20:08:03.020594 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:08:03.078039859+00:00 stderr F I0813 20:08:03.075934 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:03.078039859+00:00 stderr F I0813 20:08:03.077053 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T20:08:03.349822692+00:00 stderr F I0813 20:08:03.349675 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 
2025-08-13T20:08:03.368563049+00:00 stderr F I0813 20:08:03.368427 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-08-13T20:08:59.007661338+00:00 stderr F E0813 20:08:59.007399 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:09:05.293722175+00:00 stderr F I0813 20:09:05.293507 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:05.536332881+00:00 stderr F I0813 20:09:05.536106 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T20:09:05.790060166+00:00 stderr F I0813 20:09:05.789989 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:42:36.491959513+00:00 stderr F I0813 20:42:36.480040 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514396170+00:00 stderr F I0813 20:42:36.479177 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514396170+00:00 stderr F I0813 20:42:36.489175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.634924855+00:00 stderr F I0813 20:42:39.633727 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:39.635499292+00:00 stderr F I0813 20:42:39.634967 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:39.639672802+00:00 stderr F I0813 20:42:39.639594 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:39.639672802+00:00 stderr F I0813 20:42:39.639655 1 controller.go:242] "All workers finished" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:39.639736334+00:00 stderr F I0813 20:42:39.639706 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:39.644435049+00:00 stderr F I0813 20:42:39.644388 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:39.644719988+00:00 stderr F I0813 20:42:39.644686 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:39.647757065+00:00 stderr F I0813 20:42:39.647042 1 server.go:231] "Shutting down metrics server with timeout of 1 minute" logger="controller-runtime.metrics" 2025-08-13T20:42:39.647757065+00:00 stderr F I0813 20:42:39.647249 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:39.651514124+00:00 stderr F E0813 20:42:39.651384 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000033100000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-mach0000644000175000017500000002505015073043233033043 0ustar zuulzuul2025-08-13T19:50:52.226135278+00:00 stderr F W0813 19:50:52.191052 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:52.238309136+00:00 stderr F W0813 19:50:52.238020 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:52.241364754+00:00 stderr F I0813 19:50:52.239659 1 main.go:150] setting up manager 2025-08-13T19:50:52.290188489+00:00 stderr F I0813 19:50:52.289306 1 main.go:168] registering components 2025-08-13T19:50:52.290188489+00:00 stderr F I0813 19:50:52.289358 1 main.go:170] setting up scheme 2025-08-13T19:50:52.291735653+00:00 stderr F I0813 19:50:52.291002 1 main.go:208] setting up controllers 2025-08-13T19:50:52.292664000+00:00 stderr F I0813 19:50:52.291879 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-08-13T19:50:52.292664000+00:00 stderr F I0813 19:50:52.292278 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-08-13T19:50:52.295243134+00:00 stderr F I0813 19:50:52.295107 1 main.go:233] starting the cmd 2025-08-13T19:50:52.308932555+00:00 stderr F I0813 19:50:52.307024 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T19:50:52.327417733+00:00 stderr F I0813 19:50:52.326253 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 
2025-08-13T19:50:52.343960816+00:00 stderr F I0813 19:50:52.343359 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-08-13T19:51:22.437513386+00:00 stderr F E0813 19:51:22.437168 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:38.535367297+00:00 stderr F E0813 19:52:38.535240 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:54.958972291+00:00 stderr F E0813 19:53:54.958581 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:58.672042812+00:00 stderr F E0813 19:54:58.671715 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:15.692884729+00:00 stderr F E0813 19:56:15.692681 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:11.955337238+00:00 stderr F E0813 19:57:11.955019 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:58:04.802508691+00:00 stderr F I0813 19:58:04.802165 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-08-13T19:58:04.803114468+00:00 stderr F I0813 19:58:04.803034 1 status.go:97] Starting cluster operator status controller 2025-08-13T19:58:04.805735363+00:00 stderr F I0813 19:58:04.804317 1 recorder.go:104] "crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"27679"} reason="LeaderElection" 2025-08-13T19:58:04.806422283+00:00 stderr F I0813 19:58:04.806346 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:58:04.809971334+00:00 stderr F I0813 19:58:04.806616 1 controller.go:178] "Starting EventSource" 
controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-08-13T19:58:04.810082087+00:00 stderr F I0813 19:58:04.810029 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T19:58:04.814620396+00:00 stderr F I0813 19:58:04.813702 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T19:58:04.819118554+00:00 stderr F I0813 19:58:04.818976 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:58:04.999867317+00:00 stderr F I0813 19:58:04.999733 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:58:05.021482623+00:00 stderr F I0813 19:58:05.021318 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-08-13T19:58:05.021741430+00:00 stderr F I0813 19:58:05.021656 1 controller.go:120] Reconciling CSR: csr-fxkbs 2025-08-13T19:58:05.068715569+00:00 stderr F I0813 19:58:05.068648 1 controller.go:213] csr-fxkbs: CSR is already approved 2025-08-13T19:59:54.738971221+00:00 stderr F I0813 19:59:54.736169 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783131 1 csr_check.go:163] system:openshift:openshift-authenticator-dk965: CSR does not appear to be client csr 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783176 1 csr_check.go:59] system:openshift:openshift-authenticator-dk965: CSR does not appear to be a node serving cert 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783237 1 controller.go:232] system:openshift:openshift-authenticator-dk965: CSR not authorized 2025-08-13T19:59:56.015861990+00:00 stderr F I0813 19:59:56.015717 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T19:59:57.081251229+00:00 stderr F I0813 19:59:57.073260 1 controller.go:213] system:openshift:openshift-authenticator-dk965: CSR is already approved 2025-08-13T20:00:01.347367856+00:00 stderr F I0813 20:00:01.346761 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T20:00:01.631136935+00:00 stderr F I0813 20:00:01.631069 1 controller.go:213] system:openshift:openshift-authenticator-dk965: CSR is already approved 2025-08-13T20:03:21.934357110+00:00 stderr F E0813 20:03:21.934093 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:17.937495002+00:00 stderr F E0813 20:04:17.937199 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:38.942610615+00:00 stderr F I0813 20:04:38.936003 
1 leaderelection.go:285] failed to renew lease openshift-cluster-machine-approver/cluster-machine-approver-leader: timed out waiting for the condition 2025-08-13T20:05:08.957482381+00:00 stderr F E0813 20:05:08.957257 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:05:08.990523638+00:00 stderr F F0813 20:05:08.990431 1 main.go:235] unable to run the manager: leader election lost 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028498 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028591 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028608 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028585 1 recorder.go:104] "crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"30699"} reason="LeaderElection" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028819 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028849 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028884 1 internal.go:537] "Wait completed, proceeding to shutdown the manager"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/registry-server/0.log:
2025-10-13T00:16:43.941168546+00:00 stderr F time="2025-10-13T00:16:43Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2025-10-13T00:16:46.720875916+00:00 stderr F time="2025-10-13T00:16:46Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2025-10-13T00:16:46.720875916+00:00 stderr F time="2025-10-13T00:16:46Z" level=info msg="stopped caching cpu profile data" address="localhost:6060"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-utilities/0.log: (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-content/0.log: (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log:
2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.668772 1 flags.go:64] FLAG: --accesstoken-inactivity-timeout="0s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669102 1 flags.go:64] FLAG: --admission-control-config-file="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669108 1 flags.go:64] FLAG: --advertise-address="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669112 1 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669120 1 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000"
2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669124 1 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669127 1 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669130 1 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669133 1 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669137 1 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669141 1 flags.go:64] FLAG: --audit-log-compress="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669144 1 flags.go:64] FLAG: --audit-log-format="json" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669147 1 flags.go:64] FLAG: --audit-log-maxage="0" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669150 1 flags.go:64] FLAG: --audit-log-maxbackup="10" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669153 1 flags.go:64] FLAG: --audit-log-maxsize="100" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669156 1 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669159 1 flags.go:64] FLAG: --audit-log-path="/var/log/oauth-apiserver/audit.log" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669162 1 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669165 1 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669169 1 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669172 1 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669175 1 flags.go:64] FLAG: --audit-policy-file="/var/run/configmaps/audit/policy.yaml" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669178 1 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669181 1 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669184 1 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669187 1 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669190 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669193 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669196 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669199 1 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669202 1 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669205 1 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669207 1 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-10-13T00:15:02.669394957+00:00 stderr F 
I1013 00:15:02.669210 1 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669213 1 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669221 1 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669224 1 flags.go:64] FLAG: --authentication-kubeconfig="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669227 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669230 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669233 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669235 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669240 1 flags.go:64] FLAG: --authorization-kubeconfig="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669242 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669245 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669248 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669252 1 flags.go:64] FLAG: --cert-dir="apiserver.local.config/certificates" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669255 1 flags.go:64] FLAG: --client-ca-file="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669259 1 flags.go:64] FLAG: --contention-profiling="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669262 1 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669267 1 flags.go:64] FLAG: --debug-socket-path="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669269 1 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669272 1 flags.go:64] FLAG: --delete-collection-workers="1" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669275 1 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669287 1 flags.go:64] FLAG: --egress-selector-config-file="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669290 1 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669294 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669297 1 flags.go:64] FLAG: --enable-priority-and-fairness="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669300 1 flags.go:64] FLAG: --encryption-provider-config="" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669302 1 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669305 1 flags.go:64] FLAG: --etcd-cafile="/var/run/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669308 1 flags.go:64] FLAG: 
--etcd-certfile="/var/run/secrets/etcd-client/tls.crt" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669312 1 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669315 1 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-10-13T00:15:02.669394957+00:00 stderr F I1013 00:15:02.669318 1 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669320 1 flags.go:64] FLAG: --etcd-healthcheck-timeout="10s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669455 1 flags.go:64] FLAG: --etcd-keyfile="/var/run/secrets/etcd-client/tls.key" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669458 1 flags.go:64] FLAG: --etcd-prefix="openshift.io" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669461 1 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669464 1 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379]" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669468 1 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669473 1 flags.go:64] FLAG: --external-hostname="" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669476 1 flags.go:64] FLAG: --feature-gates="" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669487 1 flags.go:64] FLAG: --goaway-chance="0" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669492 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669495 1 flags.go:64] FLAG: --http2-max-streams-per-connection="1000" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669498 1 flags.go:64] FLAG: --kubeconfig="" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669500 1 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669504 1 flags.go:64] FLAG: --livez-grace-period="0s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669506 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669511 1 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669513 1 flags.go:64] FLAG: --max-requests-inflight="400" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669516 1 flags.go:64] FLAG: --min-request-timeout="1800" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669519 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669522 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669524 1 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669527 1 flags.go:64] FLAG: --request-timeout="1m0s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669530 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669533 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669536 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669540 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 
2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669544 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669551 1 flags.go:64] FLAG: --secure-port="8443" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669554 1 flags.go:64] FLAG: --shutdown-delay-duration="15s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669557 1 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669560 1 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669564 1 flags.go:64] FLAG: --storage-backend="" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669566 1 flags.go:64] FLAG: --storage-media-type="application/json" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669569 1 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-10-13T00:15:02.669576532+00:00 stderr F I1013 00:15:02.669573 1 flags.go:64] FLAG: --tls-cert-file="/var/run/secrets/serving-cert/tls.crt" 2025-10-13T00:15:02.669601043+00:00 stderr F I1013 00:15:02.669576 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:15:02.669601043+00:00 stderr F I1013 00:15:02.669583 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:15:02.669601043+00:00 stderr F I1013 00:15:02.669587 1 flags.go:64] FLAG: --tls-private-key-file="/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:02.669601043+00:00 stderr F I1013 00:15:02.669590 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:15:02.669601043+00:00 stderr F I1013 00:15:02.669594 1 flags.go:64] FLAG: --tracing-config-file="" 2025-10-13T00:15:02.669601043+00:00 stderr F I1013 00:15:02.669597 1 flags.go:64] FLAG: --v="2" 2025-10-13T00:15:02.669611713+00:00 stderr F I1013 00:15:02.669600 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:15:02.669611713+00:00 stderr F I1013 00:15:02.669603 1 flags.go:64] FLAG: --watch-cache="true" 2025-10-13T00:15:02.669611713+00:00 stderr F I1013 00:15:02.669607 1 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-10-13T00:15:02.726143617+00:00 stderr F I1013 00:15:02.726055 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:03.307925078+00:00 stderr F I1013 00:15:03.307708 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:03.315160645+00:00 stderr F I1013 00:15:03.315114 1 audit.go:340] Using audit backend: ignoreErrors 2025-10-13T00:15:03.326413891+00:00 stderr F I1013 00:15:03.326362 1 plugins.go:157] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook. 2025-10-13T00:15:03.326413891+00:00 stderr F I1013 00:15:03.326384 1 plugins.go:160] Loaded 2 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy,ValidatingAdmissionWebhook. 
2025-10-13T00:15:03.330347689+00:00 stderr F I1013 00:15:03.330290 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:03.330347689+00:00 stderr F I1013 00:15:03.330314 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:03.330410311+00:00 stderr F I1013 00:15:03.330382 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:15:03.330410311+00:00 stderr F I1013 00:15:03.330392 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:15:03.357293166+00:00 stderr F I1013 00:15:03.357242 1 store.go:1579] "Monitoring resource count at path" resource="oauthclients.oauth.openshift.io" path="//oauth/clients" 2025-10-13T00:15:03.364128481+00:00 stderr F I1013 00:15:03.363927 1 cacher.go:451] cacher (oauthclients.oauth.openshift.io): initialized 2025-10-13T00:15:03.364128481+00:00 stderr F I1013 00:15:03.363952 1 reflector.go:351] Caches populated for *oauth.OAuthClient from storage/cacher.go:/oauth/clients 2025-10-13T00:15:03.366105440+00:00 stderr F I1013 00:15:03.366035 1 store.go:1579] "Monitoring resource count at path" resource="oauthauthorizetokens.oauth.openshift.io" path="//oauth/authorizetokens" 2025-10-13T00:15:03.367988207+00:00 stderr F I1013 00:15:03.367944 1 cacher.go:451] cacher (oauthauthorizetokens.oauth.openshift.io): initialized 2025-10-13T00:15:03.368024478+00:00 stderr F I1013 00:15:03.368005 1 reflector.go:351] Caches populated for *oauth.OAuthAuthorizeToken from storage/cacher.go:/oauth/authorizetokens 2025-10-13T00:15:03.375590645+00:00 stderr F I1013 00:15:03.375480 1 store.go:1579] "Monitoring resource count at path" resource="oauthaccesstokens.oauth.openshift.io" path="//oauth/accesstokens" 2025-10-13T00:15:03.378495912+00:00 stderr F I1013 00:15:03.378464 1 cacher.go:451] cacher (oauthaccesstokens.oauth.openshift.io): initialized 2025-10-13T00:15:03.378495912+00:00 stderr F I1013 00:15:03.378488 1 reflector.go:351] Caches populated for *oauth.OAuthAccessToken from storage/cacher.go:/oauth/accesstokens 2025-10-13T00:15:03.384649026+00:00 stderr F I1013 00:15:03.384604 1 store.go:1579] "Monitoring resource count at path" resource="oauthclientauthorizations.oauth.openshift.io" path="//oauth/clientauthorizations" 2025-10-13T00:15:03.389756329+00:00 stderr F I1013 00:15:03.389563 1 cacher.go:451] cacher (oauthclientauthorizations.oauth.openshift.io): initialized 2025-10-13T00:15:03.389756329+00:00 stderr F I1013 00:15:03.389603 1 reflector.go:351] Caches populated for *oauth.OAuthClientAuthorization from storage/cacher.go:/oauth/clientauthorizations 2025-10-13T00:15:03.393888573+00:00 stderr F I1013 00:15:03.393842 1 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2025-10-13T00:15:03.394505351+00:00 stderr F I1013 00:15:03.394475 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:03.394505351+00:00 stderr F I1013 00:15:03.394493 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:03.413450769+00:00 stderr F I1013 00:15:03.409548 1 store.go:1579] "Monitoring resource count at path" resource="users.user.openshift.io" path="//users" 2025-10-13T00:15:03.413450769+00:00 stderr F I1013 00:15:03.411926 1 cacher.go:451] cacher (users.user.openshift.io): initialized 2025-10-13T00:15:03.413450769+00:00 stderr F I1013 00:15:03.411939 1 reflector.go:351] Caches populated for *user.User from storage/cacher.go:/users 2025-10-13T00:15:03.419563212+00:00 stderr F I1013 00:15:03.419525 1 store.go:1579] 
"Monitoring resource count at path" resource="identities.user.openshift.io" path="//useridentities" 2025-10-13T00:15:03.423792959+00:00 stderr F I1013 00:15:03.422880 1 cacher.go:451] cacher (identities.user.openshift.io): initialized 2025-10-13T00:15:03.423792959+00:00 stderr F I1013 00:15:03.422906 1 reflector.go:351] Caches populated for *user.Identity from storage/cacher.go:/useridentities 2025-10-13T00:15:03.427127009+00:00 stderr F I1013 00:15:03.427100 1 store.go:1579] "Monitoring resource count at path" resource="groups.user.openshift.io" path="//groups" 2025-10-13T00:15:03.429820009+00:00 stderr F I1013 00:15:03.429793 1 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-10-13T00:15:03.429876321+00:00 stderr F I1013 00:15:03.429852 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:03.429876321+00:00 stderr F I1013 00:15:03.429864 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:03.432008725+00:00 stderr F I1013 00:15:03.431961 1 cacher.go:451] cacher (groups.user.openshift.io): initialized 2025-10-13T00:15:03.432008725+00:00 stderr F I1013 00:15:03.431981 1 reflector.go:351] Caches populated for *user.Group from storage/cacher.go:/groups 2025-10-13T00:15:03.560822604+00:00 stderr F I1013 00:15:03.559827 1 genericapiserver.go:560] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2025-10-13T00:15:03.560822604+00:00 stderr F I1013 00:15:03.560509 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:03.570304759+00:00 stderr F I1013 00:15:03.570267 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:03.570395111+00:00 stderr F I1013 00:15:03.570380 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:03.570461523+00:00 stderr F I1013 00:15:03.570415 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:03.570488024+00:00 stderr F I1013 00:15:03.570468 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.570609138+00:00 stderr F I1013 00:15:03.570387 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:03.570640309+00:00 stderr F I1013 00:15:03.570630 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.570991019+00:00 stderr F I1013 00:15:03.570940 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:03.570991019+00:00 stderr F I1013 00:15:03.570947 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-10-13 00:15:03.570903997 +0000 UTC))" 2025-10-13T00:15:03.571242877+00:00 stderr F 
I1013 00:15:03.571203 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314503\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314503\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:15:03.571185495 +0000 UTC))" 2025-10-13T00:15:03.571242877+00:00 stderr F I1013 00:15:03.571234 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:03.571291578+00:00 stderr F I1013 00:15:03.571262 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:03.571291578+00:00 stderr F I1013 00:15:03.571284 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:03.576190455+00:00 stderr F I1013 00:15:03.576141 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.576426282+00:00 stderr F I1013 00:15:03.576316 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.576545726+00:00 stderr F I1013 00:15:03.576508 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.576715131+00:00 stderr F I1013 00:15:03.576685 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.577231176+00:00 stderr F I1013 00:15:03.577196 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.582726791+00:00 stderr F I1013 00:15:03.582681 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.582950377+00:00 stderr F I1013 00:15:03.582908 1 reflector.go:351] Caches populated for *v1.OAuthClient from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.589257666+00:00 stderr F I1013 00:15:03.587272 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:03.671469910+00:00 stderr F I1013 00:15:03.671394 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.671629384+00:00 stderr F I1013 00:15:03.671427 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.671664785+00:00 stderr F I1013 00:15:03.671444 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:03.672005846+00:00 stderr F I1013 00:15:03.671968 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:03.671926133 +0000 UTC))" 2025-10-13T00:15:03.672501991+00:00 stderr F I1013 00:15:03.672483 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 
certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-10-13 00:15:03.672458529 +0000 UTC))" 2025-10-13T00:15:03.672904793+00:00 stderr F I1013 00:15:03.672887 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314503\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314503\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:15:03.672867792 +0000 UTC))" 2025-10-13T00:15:03.673177371+00:00 stderr F I1013 00:15:03.673134 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:03.673116289 +0000 UTC))" 2025-10-13T00:15:03.673226392+00:00 stderr F I1013 00:15:03.673214 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:03.673198711 +0000 UTC))" 2025-10-13T00:15:03.673270314+00:00 stderr F I1013 00:15:03.673258 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:03.673242113 +0000 UTC))" 2025-10-13T00:15:03.673312505+00:00 stderr F I1013 00:15:03.673301 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:03.673285734 +0000 UTC))" 2025-10-13T00:15:03.673396117+00:00 stderr F I1013 00:15:03.673382 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:03.673363016 +0000 UTC))" 2025-10-13T00:15:03.673447959+00:00 stderr F I1013 00:15:03.673437 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:03.673420508 +0000 UTC))" 2025-10-13T00:15:03.673491210+00:00 stderr F I1013 00:15:03.673480 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:03.673463689 +0000 UTC))" 2025-10-13T00:15:03.673535212+00:00 stderr F I1013 00:15:03.673524 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:03.673506751 +0000 UTC))" 2025-10-13T00:15:03.673579643+00:00 stderr F I1013 00:15:03.673568 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:03.673551052 +0000 UTC))" 2025-10-13T00:15:03.673627114+00:00 stderr F I1013 00:15:03.673616 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:03.673597923 +0000 UTC))" 2025-10-13T00:15:03.673676126+00:00 stderr F I1013 00:15:03.673664 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:03.673645045 +0000 UTC))" 2025-10-13T00:15:03.674122939+00:00 stderr F I1013 00:15:03.674105 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-10-13 00:15:03.674082448 +0000 UTC))" 2025-10-13T00:15:03.674569533+00:00 stderr F I1013 00:15:03.674552 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314503\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314503\" (2025-10-12 
23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:15:03.674535611 +0000 UTC))" 2025-10-13T00:21:11.947179611+00:00 stderr F I1013 00:21:11.947118 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.947075798 +0000 UTC))" 2025-10-13T00:21:11.947179611+00:00 stderr F I1013 00:21:11.947157 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.94714137 +0000 UTC))" 2025-10-13T00:21:11.947212761+00:00 stderr F I1013 00:21:11.947180 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.94716231 +0000 UTC))" 2025-10-13T00:21:11.947212761+00:00 stderr F I1013 00:21:11.947200 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.947185601 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947220 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947205371 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947251 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947236592 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947270 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 
00:21:11.947256873 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947289 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947275393 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947310 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.947293654 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947352 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.947315614 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947390 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.947358535 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947410 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.947395356 +0000 UTC))" 2025-10-13T00:21:11.948293360+00:00 stderr F I1013 00:21:11.947856 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-10-13 00:21:11.947838038 +0000 UTC))" 2025-10-13T00:21:11.948527067+00:00 stderr F I1013 00:21:11.948317 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314503\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314503\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:21:11.948299631 +0000 UTC))" 2025-10-13T00:22:24.536632228+00:00 stderr F E1013 
00:22:24.536530 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.536632228+00:00 stderr F E1013 00:22:24.536591 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.553698397+00:00 stderr F E1013 00:22:24.553623 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.553698397+00:00 stderr F E1013 00:22:24.553675 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.509655185+00:00 stderr F E1013 00:22:25.509549 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.509655185+00:00 stderr F E1013 00:22:25.509585 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.518897483+00:00 stderr F E1013 00:22:25.518840 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.518928654+00:00 stderr F E1013 00:22:25.518916 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.522182912+00:00 stderr F E1013 00:22:25.522129 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.522211573+00:00 stderr F E1013 00:22:25.522188 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.537986507+00:00 stderr F E1013 00:22:25.537933 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.537986507+00:00 stderr F E1013 00:22:25.537979 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:52.707765098+00:00 stderr F I1013 00:23:52.707706 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log:
2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.198944 1 flags.go:64] FLAG: --accesstoken-inactivity-timeout="0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200040 1 flags.go:64] FLAG: --admission-control-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200051 1 flags.go:64] FLAG: --advertise-address="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200058 1 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200067 1 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200074 1 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200078 1 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200089 1 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200093 1 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200099 1 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200106 1 flags.go:64] FLAG: --audit-log-compress="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200110 1 flags.go:64] FLAG: --audit-log-format="json" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200114 1 flags.go:64] FLAG: --audit-log-maxage="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200118 1 flags.go:64] FLAG: --audit-log-maxbackup="10" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200122 1 flags.go:64] FLAG: --audit-log-maxsize="100" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200126 1 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200130 1 flags.go:64] FLAG: --audit-log-path="/var/log/oauth-apiserver/audit.log" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200134 1 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200138 1 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200143 1 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200148 1 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200152 1 flags.go:64] FLAG: --audit-policy-file="/var/run/configmaps/audit/policy.yaml" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200156 1 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200160 1 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200168 1 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200174
1 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200178 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200186 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200191 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200196 1 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200199 1 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200203 1 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200207 1 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200211 1 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200215 1 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200219 1 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200223 1 flags.go:64] FLAG: --authentication-kubeconfig="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200227 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200231 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200234 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200238 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200246 1 flags.go:64] FLAG: --authorization-kubeconfig="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200250 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200254 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200257 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200263 1 flags.go:64] FLAG: --cert-dir="apiserver.local.config/certificates" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200268 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200275 1 flags.go:64] FLAG: --contention-profiling="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200281 1 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200288 1 flags.go:64] FLAG: --debug-socket-path="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200293 1 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200298 1 flags.go:64] FLAG: --delete-collection-workers="1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200303 1 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-08-13T19:59:27.201426766+00:00 
stderr F I0813 19:59:27.200310 1 flags.go:64] FLAG: --egress-selector-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200315 1 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200321 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200326 1 flags.go:64] FLAG: --enable-priority-and-fairness="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200332 1 flags.go:64] FLAG: --encryption-provider-config="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200339 1 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200345 1 flags.go:64] FLAG: --etcd-cafile="/var/run/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200350 1 flags.go:64] FLAG: --etcd-certfile="/var/run/secrets/etcd-client/tls.crt" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200355 1 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200367 1 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200464 1 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200542 1 flags.go:64] FLAG: --etcd-healthcheck-timeout="10s" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.200547 1 flags.go:64] FLAG: --etcd-keyfile="/var/run/secrets/etcd-client/tls.key" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215297 1 flags.go:64] FLAG: --etcd-prefix="openshift.io" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215322 1 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215329 1 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379]" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215350 1 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215356 1 flags.go:64] FLAG: --external-hostname="" 2025-08-13T19:59:27.219097400+00:00 stderr F I0813 19:59:27.215361 1 flags.go:64] FLAG: --feature-gates="" 2025-08-13T19:59:27.219097400+00:00 stderr F I0813 19:59:27.219074 1 flags.go:64] FLAG: --goaway-chance="0" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219097 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219104 1 flags.go:64] FLAG: --http2-max-streams-per-connection="1000" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219110 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219121 1 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219223 1 flags.go:64] FLAG: --livez-grace-period="0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219259 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219268 1 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219272 1 flags.go:64] FLAG: --max-requests-inflight="400" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219277 1 flags.go:64] FLAG: --min-request-timeout="1800" 2025-08-13T19:59:27.221894980+00:00 
stderr F I0813 19:59:27.219281 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219291 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219295 1 flags.go:64] FLAG: --profiling="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219300 1 flags.go:64] FLAG: --request-timeout="1m0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219313 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219324 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219329 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219335 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219341 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219346 1 flags.go:64] FLAG: --secure-port="8443" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219350 1 flags.go:64] FLAG: --shutdown-delay-duration="15s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219354 1 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219358 1 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219364 1 flags.go:64] FLAG: --storage-backend="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219370 1 flags.go:64] FLAG: --storage-media-type="application/json" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219375 1 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219380 1 flags.go:64] FLAG: --tls-cert-file="/var/run/secrets/serving-cert/tls.crt" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219385 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219395 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219402 1 flags.go:64] FLAG: --tls-private-key-file="/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219431 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219608 1 flags.go:64] FLAG: --tracing-config-file="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219614 1 flags.go:64] FLAG: --v="2" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219619 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219629 1 flags.go:64] FLAG: --watch-cache="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219637 1 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-08-13T19:59:28.503555713+00:00 stderr F I0813 19:59:28.503189 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:31.354631633+00:00 stderr F I0813 19:59:31.354516 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:31.413209243+00:00 stderr F I0813 19:59:31.412390 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T19:59:31.525990398+00:00 stderr F I0813 19:59:31.521558 1 plugins.go:157] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook. 2025-08-13T19:59:31.525990398+00:00 stderr F I0813 19:59:31.521671 1 plugins.go:160] Loaded 2 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy,ValidatingAdmissionWebhook. 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533000 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533154 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533585 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533605 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:31.715911232+00:00 stderr F I0813 19:59:31.714652 1 store.go:1579] "Monitoring resource count at path" resource="oauthclients.oauth.openshift.io" path="//oauth/clients" 2025-08-13T19:59:31.790381324+00:00 stderr F I0813 19:59:31.789477 1 store.go:1579] "Monitoring resource count at path" resource="oauthauthorizetokens.oauth.openshift.io" path="//oauth/authorizetokens" 2025-08-13T19:59:31.811382352+00:00 stderr F I0813 19:59:31.811291 1 store.go:1579] "Monitoring resource count at path" resource="oauthaccesstokens.oauth.openshift.io" path="//oauth/accesstokens" 2025-08-13T19:59:31.844340722+00:00 stderr F I0813 19:59:31.844266 1 cacher.go:451] cacher (oauthaccesstokens.oauth.openshift.io): initialized 2025-08-13T19:59:31.845079293+00:00 stderr F I0813 19:59:31.845048 1 reflector.go:351] Caches populated for *oauth.OAuthAccessToken from storage/cacher.go:/oauth/accesstokens 2025-08-13T19:59:31.851908797+00:00 stderr F I0813 19:59:31.851750 1 cacher.go:451] cacher (oauthclients.oauth.openshift.io): initialized 2025-08-13T19:59:31.857767484+00:00 stderr F I0813 19:59:31.853637 1 cacher.go:451] cacher (oauthauthorizetokens.oauth.openshift.io): initialized 2025-08-13T19:59:31.858423753+00:00 stderr F I0813 19:59:31.858378 1 reflector.go:351] Caches populated for *oauth.OAuthClient from storage/cacher.go:/oauth/clients 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.895619 1 reflector.go:351] Caches populated for *oauth.OAuthAuthorizeToken from storage/cacher.go:/oauth/authorizetokens 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.940494 1 store.go:1579] "Monitoring resource count at path" resource="oauthclientauthorizations.oauth.openshift.io" path="//oauth/clientauthorizations" 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.944551 1 cacher.go:451] cacher (oauthclientauthorizations.oauth.openshift.io): initialized 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.944618 1 reflector.go:351] Caches populated for *oauth.OAuthClientAuthorization from storage/cacher.go:/oauth/clientauthorizations 2025-08-13T19:59:32.129051887+00:00 stderr F I0813 19:59:32.128892 1 handler.go:275] Adding GroupVersion 
oauth.openshift.io v1 to ResourceManager 2025-08-13T19:59:32.146996099+00:00 stderr F I0813 19:59:32.141533 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:32.146996099+00:00 stderr F I0813 19:59:32.141612 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:32.268331218+00:00 stderr F I0813 19:59:32.268131 1 store.go:1579] "Monitoring resource count at path" resource="users.user.openshift.io" path="//users" 2025-08-13T19:59:32.280715111+00:00 stderr F I0813 19:59:32.280485 1 cacher.go:451] cacher (users.user.openshift.io): initialized 2025-08-13T19:59:32.280715111+00:00 stderr F I0813 19:59:32.280534 1 reflector.go:351] Caches populated for *user.User from storage/cacher.go:/users 2025-08-13T19:59:32.342159052+00:00 stderr F I0813 19:59:32.340089 1 store.go:1579] "Monitoring resource count at path" resource="identities.user.openshift.io" path="//useridentities" 2025-08-13T19:59:32.365422735+00:00 stderr F I0813 19:59:32.364535 1 store.go:1579] "Monitoring resource count at path" resource="groups.user.openshift.io" path="//groups" 2025-08-13T19:59:32.365656342+00:00 stderr F I0813 19:59:32.365546 1 cacher.go:451] cacher (identities.user.openshift.io): initialized 2025-08-13T19:59:32.366472145+00:00 stderr F I0813 19:59:32.366259 1 reflector.go:351] Caches populated for *user.Identity from storage/cacher.go:/useridentities 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.367976 1 cacher.go:451] cacher (groups.user.openshift.io): initialized 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.368038 1 reflector.go:351] Caches populated for *user.Group from storage/cacher.go:/groups 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369329 1 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369391 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369402 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:35.053460550+00:00 stderr F I0813 19:59:35.053277 1 genericapiserver.go:560] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2025-08-13T19:59:35.054209721+00:00 stderr F I0813 19:59:35.054173 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:35.064731441+00:00 stderr F I0813 19:59:35.064541 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:35.064180125 +0000 UTC))" 2025-08-13T19:59:35.071705980+00:00 stderr F I0813 19:59:35.071409 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:35.069769704 +0000 UTC))" 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.073763 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:35.074109038+00:00 
stderr F I0813 19:59:35.073924 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.074058 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:35.074169020+00:00 stderr F I0813 19:59:35.074129 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:35.074615162+00:00 stderr F I0813 19:59:35.074521 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075587 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075671 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075697 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075736 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.084971758+00:00 stderr F I0813 19:59:35.084607 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:35.090388342+00:00 stderr F I0813 19:59:35.090352 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.104283678+00:00 stderr F I0813 19:59:35.095711 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.261017356+00:00 stderr F I0813 19:59:35.258405 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.261017356+00:00 stderr F E0813 19:59:35.259591 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.261017356+00:00 stderr F E0813 19:59:35.259635 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.291381701+00:00 stderr F I0813 19:59:35.291261 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.314580743+00:00 stderr F I0813 19:59:35.310533 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.315930531+00:00 stderr F E0813 19:59:35.315769 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.316024594+00:00 stderr F E0813 19:59:35.316004 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.360429779+00:00 stderr F I0813 19:59:35.326507 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.374394 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375589 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:35.375543711 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375633 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:35.375602292 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375657 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:35.375639933 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375676 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:35.375662334 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375692 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375681064 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375710 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375696965 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375761 1 tlsconfig.go:178] "Loaded 
client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375715045 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375922 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375902321 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.376298 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:35.376273311 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.376628 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:35.376610761 +0000 UTC))" 2025-08-13T19:59:35.398331219+00:00 stderr F I0813 19:59:35.398292 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.401314314+00:00 stderr F E0813 19:59:35.401282 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.413428499+00:00 stderr F E0813 19:59:35.413179 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.448248762+00:00 stderr F E0813 19:59:35.423880 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.448248762+00:00 stderr F E0813 19:59:35.435259 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.464321790+00:00 stderr F E0813 19:59:35.464255 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.498595977+00:00 stderr F E0813 19:59:35.481648 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.505466903+00:00 stderr F I0813 19:59:35.505434 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:35.562928661+00:00 stderr F E0813 19:59:35.545409 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.673230705+00:00 stderr F I0813 19:59:35.673175 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.675129009+00:00 stderr F E0813 19:59:35.675000 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.728270224+00:00 stderr F E0813 19:59:35.727924 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.837193959+00:00 stderr F E0813 19:59:35.837129 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.033313399+00:00 stderr F E0813 19:59:36.033147 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.035455731+00:00 stderr F E0813 19:59:36.035426 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.036464989+00:00 stderr F E0813 19:59:36.036407 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.037394686+00:00 stderr F E0813 19:59:36.037322 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.037973162+00:00 stderr F E0813 19:59:36.033144 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.048474692+00:00 stderr F E0813 19:59:36.048441 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.052049914+00:00 stderr F I0813 19:59:36.052021 1 reflector.go:351] Caches populated for *v1.OAuthClient from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:36.158320383+00:00 stderr F E0813 19:59:36.158265 1 
configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.376588465+00:00 stderr F E0813 19:59:36.376519 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.377338296+00:00 stderr F E0813 19:59:36.377303 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.377953704+00:00 stderr F E0813 19:59:36.377920 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.378338595+00:00 stderr F E0813 19:59:36.378310 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.379342223+00:00 stderr F E0813 19:59:36.378599 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.436415990+00:00 stderr F E0813 19:59:36.436345 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.437664406+00:00 stderr F E0813 19:59:36.437540 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.472188440+00:00 stderr F E0813 19:59:36.445173 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.472631222+00:00 stderr F E0813 19:59:36.445254 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.476919835+00:00 stderr F E0813 19:59:36.445319 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.477208573+00:00 stderr F E0813 19:59:36.447532 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 
2025-08-13T19:59:36.478147860+00:00 stderr F E0813 19:59:36.448082 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.478380826+00:00 stderr F E0813 19:59:36.448223 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.478628153+00:00 stderr F E0813 19:59:36.448301 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.479070036+00:00 stderr F E0813 19:59:36.448361 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.480056024+00:00 stderr F E0813 19:59:36.448428 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.480478096+00:00 stderr F E0813 19:59:36.448483 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.483198044+00:00 stderr F E0813 19:59:36.483108 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.483739059+00:00 stderr F E0813 19:59:36.483712 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485178480+00:00 stderr F E0813 19:59:36.484054 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485286323+00:00 stderr F E0813 19:59:36.484127 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485370796+00:00 stderr F E0813 19:59:36.484190 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485545511+00:00 stderr F E0813 19:59:36.484253 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, 
AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485641593+00:00 stderr F E0813 19:59:36.484421 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485720866+00:00 stderr F E0813 19:59:36.484505 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485938202+00:00 stderr F E0813 19:59:36.484602 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.486062785+00:00 stderr F E0813 19:59:36.484761 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.486180459+00:00 stderr F E0813 19:59:36.484900 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.486455607+00:00 stderr F E0813 19:59:36.484981 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.575577707+00:00 stderr F E0813 19:59:36.575523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.577465621+00:00 stderr F E0813 19:59:36.577436 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.578437578+00:00 stderr F E0813 19:59:36.578405 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.580095616+00:00 stderr F E0813 19:59:36.580072 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.597659256+00:00 stderr F E0813 19:59:36.580571 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.598071008+00:00 stderr F E0813 19:59:36.580626 1 
authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.598313015+00:00 stderr F E0813 19:59:36.580696 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.598541692+00:00 stderr F E0813 19:59:36.580749 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.602086093+00:00 stderr F E0813 19:59:36.595085 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.602660729+00:00 stderr F E0813 19:59:36.595307 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.606749556+00:00 stderr F E0813 19:59:36.606721 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.608390602+00:00 stderr F E0813 19:59:36.608366 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.663521664+00:00 stderr F E0813 19:59:36.663462 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.665381357+00:00 stderr F E0813 19:59:36.663881 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.665555102+00:00 stderr F E0813 19:59:36.665531 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.665762728+00:00 stderr F E0813 19:59:36.663938 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.679138919+00:00 stderr F E0813 19:59:36.664129 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 
failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.680098706+00:00 stderr F E0813 19:59:36.664191 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.680434846+00:00 stderr F E0813 19:59:36.664262 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.681091575+00:00 stderr F E0813 19:59:36.664327 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.688758883+00:00 stderr F E0813 19:59:36.688652 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.689161305+00:00 stderr F E0813 19:59:36.689135 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.689456673+00:00 stderr F E0813 19:59:36.689423 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.689703040+00:00 stderr F E0813 19:59:36.689677 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.690070311+00:00 stderr F E0813 19:59:36.690047 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.690914385+00:00 stderr F E0813 19:59:36.690884 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.691368538+00:00 stderr F E0813 19:59:36.691183 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.733156989+00:00 stderr F E0813 19:59:36.733102 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.756324479+00:00 stderr F E0813 19:59:36.756160 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate 
SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.759497590+00:00 stderr F E0813 19:59:36.759459 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.760244381+00:00 stderr F E0813 19:59:36.760215 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.851534213+00:00 stderr F E0813 19:59:36.851139 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.865752729+00:00 stderr F E0813 19:59:36.852028 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.875198648+00:00 stderr F E0813 19:59:36.852119 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.875459345+00:00 stderr F E0813 19:59:36.852199 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.875731643+00:00 stderr F E0813 19:59:36.852256 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.876069383+00:00 stderr F E0813 19:59:36.852310 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.876304159+00:00 stderr F E0813 19:59:36.852393 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.876563787+00:00 stderr F E0813 19:59:36.852462 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.877315348+00:00 stderr F E0813 19:59:36.852523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.877532394+00:00 
stderr F E0813 19:59:36.852576 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.877748271+00:00 stderr F E0813 19:59:36.870143 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.936192617+00:00 stderr F E0813 19:59:36.929752 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.937054831+00:00 stderr F E0813 19:59:36.932134 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.940238372+00:00 stderr F E0813 19:59:36.932220 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.940929152+00:00 stderr F E0813 19:59:36.932304 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.941176139+00:00 stderr F E0813 19:59:36.932386 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.458372 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.458449 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459342 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459485 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459713 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by 
unknown authority" 2025-08-13T19:59:37.716233812+00:00 stderr F E0813 19:59:37.716104 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.716739256+00:00 stderr F E0813 19:59:37.716716 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.717144728+00:00 stderr F E0813 19:59:37.717093 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.717200919+00:00 stderr F E0813 19:59:37.717185 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.717411685+00:00 stderr F E0813 19:59:37.717359 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:37.971477538+00:00 stderr F E0813 19:59:37.969981 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.120652770+00:00 stderr F E0813 19:59:38.120511 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:38.121609857+00:00 stderr F E0813 19:59:38.121584 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:38.122111541+00:00 stderr F E0813 19:59:38.122088 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:38.122400310+00:00 stderr F E0813 19:59:38.122369 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:38.122666387+00:00 stderr F E0813 19:59:38.122641 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:38.159218769+00:00 stderr F E0813 19:59:38.159141 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:39.318221776+00:00 stderr F E0813 19:59:39.317938 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.318472713+00:00 stderr F E0813 19:59:39.317948 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.319165733+00:00 stderr F E0813 19:59:39.319135 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.334884941+00:00 stderr F E0813 19:59:39.334715 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.335225161+00:00 stderr F E0813 19:59:39.335185 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.650348344+00:00 stderr F E0813 19:59:39.650258 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.650348344+00:00 stderr F E0813 19:59:39.650321 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.650915920+00:00 stderr F E0813 19:59:39.650751 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.651055594+00:00 stderr F E0813 19:59:39.651002 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:39.651371383+00:00 stderr F E0813 19:59:39.651350 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:40.533147208+00:00 stderr F E0813 19:59:40.533083 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:40.720975642+00:00 stderr F E0813 19:59:40.720818 1 configmap_cafile_content.go:243] key failed with : 
missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.924524750+00:00 stderr F E0813 19:59:41.924344 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:41.974490634+00:00 stderr F E0813 19:59:41.974428 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:41.983171752+00:00 stderr F E0813 19:59:41.982641 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:41.983171752+00:00 stderr F E0813 19:59:41.982901 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:41.983934714+00:00 stderr F E0813 19:59:41.983729 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:42.260972311+00:00 stderr F E0813 19:59:42.260295 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:42.261708322+00:00 stderr F E0813 19:59:42.261350 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:42.261708322+00:00 stderr F E0813 19:59:42.261654 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:42.261923968+00:00 stderr F E0813 19:59:42.261891 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:42.262538365+00:00 stderr F E0813 19:59:42.262510 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:45.662298766+00:00 stderr F E0813 19:59:45.658946 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:45.847894196+00:00 stderr F E0813 19:59:45.845683 1 
configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.205464043+00:00 stderr F E0813 19:59:47.202257 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.242596192+00:00 stderr F E0813 19:59:47.242459 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.243157758+00:00 stderr F E0813 19:59:47.243104 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.243861318+00:00 stderr F E0813 19:59:47.243727 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.250236379+00:00 stderr F E0813 19:59:47.250172 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656272564+00:00 stderr F E0813 19:59:47.656166 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656481800+00:00 stderr F E0813 19:59:47.656440 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656747918+00:00 stderr F E0813 19:59:47.656688 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.656765598+00:00 stderr F E0813 19:59:47.656740 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.662089500+00:00 stderr F E0813 19:59:47.661973 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:47.712139347+00:00 stderr F http2: server: error reading preface from client 10.217.0.2:56190: read tcp 10.217.0.39:8443->10.217.0.2:56190: read: connection reset by peer 2025-08-13T19:59:51.870432504+00:00 stderr F I0813 19:59:51.869647 1 requestheader_controller.go:244] Loaded a new request header 
values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.881604152+00:00 stderr F I0813 19:59:51.881522 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.881392326 +0000 UTC))" 2025-08-13T19:59:51.881726736+00:00 stderr F I0813 19:59:51.881707 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.881681854 +0000 UTC))" 2025-08-13T19:59:51.881908551+00:00 stderr F I0813 19:59:51.881888 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.881749776 +0000 UTC))" 2025-08-13T19:59:51.881968363+00:00 stderr F I0813 19:59:51.881953 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.881936802 +0000 UTC))" 2025-08-13T19:59:51.882024604+00:00 stderr F I0813 19:59:51.882012 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.881997093 +0000 UTC))" 2025-08-13T19:59:51.882072705+00:00 stderr F I0813 19:59:51.882060 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882043885 +0000 UTC))" 2025-08-13T19:59:51.882116687+00:00 stderr F I0813 19:59:51.882104 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882090576 +0000 UTC))" 
2025-08-13T19:59:51.882160598+00:00 stderr F I0813 19:59:51.882148 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882134397 +0000 UTC))" 2025-08-13T19:59:51.882230560+00:00 stderr F I0813 19:59:51.882212 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882190329 +0000 UTC))" 2025-08-13T19:59:51.883008762+00:00 stderr F I0813 19:59:51.882979 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:51.88294675 +0000 UTC))" 2025-08-13T19:59:51.883424654+00:00 stderr F I0813 19:59:51.883398 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:51.883371553 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773747 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.773664261 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773882 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.773861226 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773915 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.773890167 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773933 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.773921418 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773954 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773940988 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773970 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773959979 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774004 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77398973 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774020 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77400938 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774038 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.774024961 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774059 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.774046851 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774377 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 20:00:05.77435506 +0000 UTC))" 2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774709 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:00:05.77468649 +0000 UTC))" 2025-08-13T20:00:56.698411453+00:00 stderr F I0813 20:00:56.678254 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:00:56.698411453+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:00:56.845173228+00:00 stderr F I0813 20:00:56.844884 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:56.845173228+00:00 stderr F I0813 20:00:56.845073 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:56.846576258+00:00 stderr F I0813 20:00:56.846323 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847425 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:56.847258947 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847863 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:56.847678039 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847919 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:56.847876525 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847944 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:56.847925996 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847962 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847951797 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847984 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847969098 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848007 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847993848 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848028 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.848013279 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848067 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:56.84805345 +0000 UTC))" 2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848361 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.848342018 +0000 UTC))" 2025-08-13T20:00:56.848745390+00:00 stderr F I0813 20:00:56.848729 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:00:56.848690248 +0000 UTC))" 2025-08-13T20:00:56.853486745+00:00 stderr F I0813 20:00:56.850681 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:00:56.850582102 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028636 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.028399108 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028708 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.028692296 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028740 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.028714357 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029003 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.028910823 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029024 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029012746 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029080 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029056577 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029122 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029084858 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029261 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029130349 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029285 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.029270753 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029304 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.029293694 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029331 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029319794 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029748 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:01:00.029691875 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.030145 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" 
certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:01:00.030123767 +0000 UTC))" 2025-08-13T20:01:05.265928136+00:00 stderr F I0813 20:01:05.264561 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:05.265928136+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:10.946272075+00:00 stderr F E0813 20:01:10.945945 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled 2025-08-13T20:01:10.946272075+00:00 stderr F E0813 20:01:10.946239 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout 2025-08-13T20:01:10.948325794+00:00 stderr F E0813 20:01:10.948042 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout 2025-08-13T20:01:10.948325794+00:00 stderr F E0813 20:01:10.948091 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout 2025-08-13T20:01:10.950701332+00:00 stderr F E0813 20:01:10.950364 1 timeout.go:142] post-timeout activity - time-elapsed: 3.955753ms, GET "/apis/oauth.openshift.io/v1/oauthclients/console" result: 2025-08-13T20:01:16.667006236+00:00 stderr F I0813 20:01:16.666703 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:16.667006236+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:31.303546439+00:00 stderr F I0813 20:01:31.302924 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:31.303546439+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:36.667417336+00:00 stderr F I0813 20:01:36.667273 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:36.667417336+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:44.989727596+00:00 stderr F I0813 20:01:44.989550 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:44.989727596+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:47.000469451+00:00 stderr F I0813 20:01:47.000415 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:47.000469451+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:54.660572879+00:00 stderr F I0813 20:01:54.660450 1 healthz.go:261] etcd check failed: healthz 2025-08-13T20:01:54.660572879+00:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:54.660946280+00:00 stderr F E0813 20:01:54.660916 1 timeout.go:142] post-timeout activity - time-elapsed: 5.657571ms, GET "/healthz" result: 2025-08-13T20:01:54.661201217+00:00 stderr F I0813 20:01:54.661031 1 healthz.go:261] etcd check failed: healthz 2025-08-13T20:01:54.661201217+00:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:58.103602303+00:00 stderr F I0813 20:01:58.103359 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:58.103602303+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 
2025-08-13T20:02:03.279348222+00:00 stderr F I0813 20:02:03.279140 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:02:03.279348222+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:02:15.043214406+00:00 stderr F I0813 20:02:15.042674 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:02:15.043214406+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:04:04.348973941+00:00 stderr F E0813 20:04:04.345887 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.348973941+00:00 stderr F E0813 20:04:04.348915 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.416483897+00:00 stderr F E0813 20:04:04.416321 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.416483897+00:00 stderr F E0813 20:04:04.416456 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.085312727+00:00 stderr F E0813 20:04:05.085217 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.085312727+00:00 stderr F E0813 20:04:05.085293 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.104089753+00:00 stderr F E0813 20:04:05.103978 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.104149364+00:00 stderr F E0813 20:04:05.104081 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297203 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297288 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297651 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297674 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.360691791+00:00 
stderr F E0813 20:04:38.359378 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.360691791+00:00 stderr F E0813 20:04:38.359546 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:39.335531786+00:00 stderr F E0813 20:04:39.335387 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:39.335588038+00:00 stderr F E0813 20:04:39.335522 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.029295599+00:00 stderr F E0813 20:04:41.029154 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.029350590+00:00 stderr F E0813 20:04:41.029302 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.275168065+00:00 stderr F E0813 20:04:47.274663 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.275290638+00:00 stderr F E0813 20:04:47.275243 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.036167873+00:00 stderr F E0813 20:04:59.033754 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.036167873+00:00 stderr F E0813 20:04:59.033999 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.093002537+00:00 stderr F E0813 20:05:05.092767 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.093002537+00:00 stderr F E0813 20:05:05.092979 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.308754306+00:00 stderr F E0813 20:05:05.308472 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.308754306+00:00 stderr F E0813 20:05:05.308717 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.326847394+00:00 stderr F E0813 20:05:05.326709 1 
webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.326904556+00:00 stderr F E0813 20:05:05.326887 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.355996059+00:00 stderr F E0813 20:05:05.355847 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.355996059+00:00 stderr F E0813 20:05:05.355970 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.264399003+00:00 stderr F E0813 20:05:06.264250 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.265136094+00:00 stderr F E0813 20:05:06.264572 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.280454782+00:00 stderr F E0813 20:05:16.280262 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.280454782+00:00 stderr F E0813 20:05:16.280434 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:23.498213610+00:00 stderr F I0813 20:06:23.497964 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:28.010316348+00:00 stderr F I0813 20:06:28.010101 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:28.096337502+00:00 stderr F I0813 20:06:28.096272 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:48.196574406+00:00 stderr F I0813 20:06:48.196290 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:50.297769168+00:00 stderr F I0813 20:06:50.297496 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:05.311458663+00:00 stderr F I0813 20:07:05.310420 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:08:42.713645834+00:00 stderr F E0813 20:08:42.709166 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.725764152+00:00 stderr F E0813 20:08:42.725565 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:42.766877421+00:00 stderr F E0813 20:08:42.766703 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766877421+00:00 stderr F E0813 20:08:42.766847 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.513741693+00:00 stderr F E0813 20:08:43.512988 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.513741693+00:00 stderr F E0813 20:08:43.513085 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.530236006+00:00 stderr F E0813 20:08:43.530011 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.530236006+00:00 stderr F E0813 20:08:43.530131 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.609963312+00:00 stderr F E0813 20:08:43.609275 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.609963312+00:00 stderr F E0813 20:08:43.609384 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.611957989+00:00 stderr F E0813 20:08:43.611516 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.611957989+00:00 stderr F E0813 20:08:43.611585 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570522845+00:00 stderr F E0813 20:08:48.570328 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570522845+00:00 stderr F E0813 20:08:48.570428 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.887212386+00:00 stderr F E0813 20:08:49.887128 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.887265298+00:00 stderr F E0813 20:08:49.887209 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:52.262668022+00:00 stderr F E0813 20:08:52.262575 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.263017712+00:00 stderr F E0813 20:08:52.262984 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.801090063+00:00 stderr F E0813 20:08:56.800852 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.801152225+00:00 stderr F E0813 20:08:56.801084 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:28.515678168+00:00 stderr F I0813 20:09:28.515417 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:46.544141888+00:00 stderr F I0813 20:09:46.540980 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:50.709580955+00:00 stderr F I0813 20:09:50.709457 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:54.399265802+00:00 stderr F I0813 20:09:54.397571 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:55.196098577+00:00 stderr F I0813 20:09:55.196024 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:14.687619245+00:00 stderr F I0813 20:10:14.687467 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:17:57.759289405+00:00 stderr F I0813 20:17:57.759058 1 trace.go:236] Trace[646062886]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8e440eb5-de6b-4d07-ad72-6fd790f8653a,client:10.217.0.62,api-group:oauth.openshift.io,api-version:v1,name:console,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients/console,user-agent:console/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:17:57.161) (total time: 588ms): 2025-08-13T20:17:57.759289405+00:00 stderr F Trace[646062886]: ---"About to write a response" 588ms (20:17:57.750) 2025-08-13T20:17:57.759289405+00:00 stderr F Trace[646062886]: [588.849764ms] [588.849764ms] END 2025-08-13T20:18:02.690719743+00:00 stderr F I0813 20:18:02.690551 1 trace.go:236] Trace[2103687003]: "Create etcd3" audit-id:f118d6a4-62e6-4076-a8b6-218f0a0d636e,key:/oauth/clients/openshift-browser-client,type:*oauth.OAuthClient,resource:oauthclients.oauth.openshift.io (13-Aug-2025 20:18:01.702) (total time: 988ms): 2025-08-13T20:18:02.690719743+00:00 stderr F Trace[2103687003]: ---"Txn call succeeded" 987ms (20:18:02.690) 2025-08-13T20:18:02.690719743+00:00 stderr F Trace[2103687003]: [988.013605ms] [988.013605ms] END 2025-08-13T20:18:02.951389737+00:00 stderr F I0813 20:18:02.950466 1 trace.go:236] Trace[353295370]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f118d6a4-62e6-4076-a8b6-218f0a0d636e,client:10.217.0.19,api-group:oauth.openshift.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:18:01.701) (total time: 1248ms): 2025-08-13T20:18:02.951389737+00:00 stderr F Trace[353295370]: ---"Write to database call failed" len:206,err:oauthclients.oauth.openshift.io "openshift-browser-client" already exists 1248ms (20:18:02.950) 2025-08-13T20:18:02.951389737+00:00 stderr F Trace[353295370]: [1.248658908s] [1.248658908s] END 2025-08-13T20:18:04.535705811+00:00 stderr F I0813 20:18:04.531464 1 trace.go:236] Trace[1861376822]: "Create etcd3" audit-id:523a0143-4800-4c01-b2ee-b100a17a7106,key:/oauth/clients/openshift-challenging-client,type:*oauth.OAuthClient,resource:oauthclients.oauth.openshift.io (13-Aug-2025 20:18:02.996) (total time: 1534ms): 2025-08-13T20:18:04.535705811+00:00 stderr F Trace[1861376822]: ---"Txn call succeeded" 1534ms (20:18:04.531) 2025-08-13T20:18:04.535705811+00:00 stderr F Trace[1861376822]: [1.534461529s] [1.534461529s] END 2025-08-13T20:18:06.381304026+00:00 stderr F I0813 20:18:06.380288 1 trace.go:236] Trace[1300541338]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:523a0143-4800-4c01-b2ee-b100a17a7106,client:10.217.0.19,api-group:oauth.openshift.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:18:02.985) (total time: 3395ms): 2025-08-13T20:18:06.381304026+00:00 stderr F Trace[1300541338]: ---"Write to database call failed" len:167,err:oauthclients.oauth.openshift.io "openshift-challenging-client" already exists 3383ms (20:18:06.379) 2025-08-13T20:18:06.381304026+00:00 stderr F Trace[1300541338]: [3.395104984s] [3.395104984s] END 2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.326608 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.322968 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.324161 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.329882780+00:00 stderr F I0813 20:42:36.329763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.352735829+00:00 stderr F I0813 20:42:36.323150 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.352735829+00:00 stderr F I0813 20:42:36.324215 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:42.122575675+00:00 stderr F I0813 20:42:42.121383 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:42.122575675+00:00 stderr F I0813 20:42:42.121637 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:42.122837923+00:00 stderr F I0813 20:42:42.122675 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed 
2025-08-13T20:42:42.123355858+00:00 stderr F I0813 20:42:42.122480 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:42.134867230+00:00 stderr F W0813 20:42:42.134581 1 genericapiserver.go:1060] failed to create event openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd.185b6e4e3e4981aa: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/events": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiser0000755000175000017500000000000015073043233033051 5ustar zuulzuul././@LongLink0000644000000000000000000000030400000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiser0000644000175000017500000000000015073043233033041 0ustar zuulzuul././@LongLink0000644000000000000000000000030400000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiser0000644000175000017500000000000015073043233033041 0ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000032000000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000154167715073043233033102 0ustar zuulzuul2025-10-13T00:15:00.196414392+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="log level info" 2025-10-13T00:15:00.197362300+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="TLS keys set, using https 
for metrics" 2025-10-13T00:15:00.197961538+00:00 stderr F W1013 00:15:00.197926 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-10-13T00:15:00.199545125+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="Using in-cluster kube client config" 2025-10-13T00:15:00.280468660+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="Using in-cluster kube client config" 2025-10-13T00:15:00.282896373+00:00 stderr F W1013 00:15:00.282870 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-10-13T00:15:00.328309753+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" 2025-10-13T00:15:00.328309753+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" 2025-10-13T00:15:00.328309753+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="skipping irrelevant gvr" gvr="apps/v1, Resource=deployments" 2025-10-13T00:15:00.570740597+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="detected ability to filter informers" canFilter=true 2025-10-13T00:15:00.591403716+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="registering owner reference fixer" gvr="/v1, Resource=services" 2025-10-13T00:15:00.593576551+00:00 stderr F W1013 00:15:00.593542 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-10-13T00:15:00.597384135+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-10-13T00:15:00.597384135+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="operator ready" 2025-10-13T00:15:00.597384135+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="starting informers..." 2025-10-13T00:15:00.597384135+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="informers started" 2025-10-13T00:15:00.597384135+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="waiting for caches to sync..." 2025-10-13T00:15:00.699384421+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="starting workers..." 2025-10-13T00:15:00.699764933+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.699796344+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.700051251+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.700051251+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.701380411+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="operator ready" 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="starting informers..." 
2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="informers started" 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="waiting for caches to sync..." 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=69wN1 namespace=default 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=69wN1 namespace=default 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=+Pwfe namespace=hostpath-provisioner 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=+Pwfe namespace=hostpath-provisioner 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.705524565+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.707393021+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace default" id=69wN1 namespace=default 2025-10-13T00:15:00.707393021+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=DhbZ+ namespace=kube-node-lease 2025-10-13T00:15:00.707393021+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=DhbZ+ namespace=kube-node-lease 2025-10-13T00:15:00.712457953+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=DhbZ+ namespace=kube-node-lease 2025-10-13T00:15:00.712457953+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=kXo2V namespace=kube-public 2025-10-13T00:15:00.712457953+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=kXo2V namespace=kube-public 2025-10-13T00:15:00.712457953+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=+Pwfe namespace=hostpath-provisioner 2025-10-13T00:15:00.712457953+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=Ss1M6 namespace=kube-system 2025-10-13T00:15:00.712457953+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=Ss1M6 namespace=kube-system 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checked registry server health" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FyXnN 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=TjkRR 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.715366240+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.716354040+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace kube-public" id=kXo2V namespace=kube-public 2025-10-13T00:15:00.716354040+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=bfpUK namespace=openshift 2025-10-13T00:15:00.716354040+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=bfpUK namespace=openshift 2025-10-13T00:15:00.716354040+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace kube-system" id=Ss1M6 namespace=kube-system 2025-10-13T00:15:00.716354040+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=cmSWi namespace=openshift-apiserver 2025-10-13T00:15:00.716354040+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=cmSWi namespace=openshift-apiserver 2025-10-13T00:15:00.722603517+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace openshift" id=bfpUK namespace=openshift 2025-10-13T00:15:00.722603517+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=/LQUP namespace=openshift-apiserver-operator 2025-10-13T00:15:00.722603517+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=/LQUP namespace=openshift-apiserver-operator 2025-10-13T00:15:00.807480110+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="starting workers..." 
2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=cmSWi namespace=openshift-apiserver 2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="resolving sources" id=7jkdt namespace=openshift-authentication 2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="checking if subscriptions need update" id=7jkdt namespace=openshift-authentication 2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.819913353+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TjkRR 2025-10-13T00:15:00.822504930+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-10-13T00:15:00.944892227+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2025-10-13T00:15:01.017368699+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:01.017398170+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:01.017398170+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:01.017470842+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=/LQUP namespace=openshift-apiserver-operator 2025-10-13T00:15:01.017517493+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="resolving sources" id=xvoPZ namespace=openshift-authentication-operator 2025-10-13T00:15:01.017517493+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="checking if subscriptions need update" id=xvoPZ 
namespace=openshift-authentication-operator 2025-10-13T00:15:01.019189753+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="catalog update required at 2025-10-13 00:15:01.017429721 +0000 UTC m=+1.747204290" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:01.214097283+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=7jkdt namespace=openshift-authentication 2025-10-13T00:15:01.214097283+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="resolving sources" id=ohxD+ namespace=openshift-cloud-network-config-controller 2025-10-13T00:15:01.214097283+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="checking if subscriptions need update" id=ohxD+ namespace=openshift-cloud-network-config-controller 2025-10-13T00:15:01.229021870+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FyXnN 2025-10-13T00:15:01.229681350+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-10-13T00:15:01.282340938+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-10-13T00:15:01.421755485+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.421755485+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.433121306+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.433121306+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.433121306+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.433121306+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=mQ9Be 2025-10-13T00:15:01.433121306+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.433121306+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.611413728+00:00 stderr F 
time="2025-10-13T00:15:01Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=xvoPZ namespace=openshift-authentication-operator 2025-10-13T00:15:01.611413728+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="resolving sources" id=XdUIg namespace=openshift-cloud-platform-infra 2025-10-13T00:15:01.611413728+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="checking if subscriptions need update" id=XdUIg namespace=openshift-cloud-platform-infra 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="catalog update required at 2025-10-13 00:15:01.817740059 +0000 UTC m=+2.547514628" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=ohxD+ namespace=openshift-cloud-network-config-controller 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="resolving sources" id=rCHra namespace=openshift-cluster-machine-approver 2025-10-13T00:15:01.820240844+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="checking if subscriptions need update" id=rCHra namespace=openshift-cluster-machine-approver 2025-10-13T00:15:02.017631709+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mQ9Be 2025-10-13T00:15:02.018282438+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.018282438+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.019734922+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-10-13T00:15:02.034923567+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-10-13T00:15:02.207454876+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=XdUIg namespace=openshift-cloud-platform-infra 
2025-10-13T00:15:02.207570040+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="resolving sources" id=aOAsB namespace=openshift-cluster-samples-operator 2025-10-13T00:15:02.207602851+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="checking if subscriptions need update" id=aOAsB namespace=openshift-cluster-samples-operator 2025-10-13T00:15:02.207893419+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.208067754+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.208106996+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.208144397+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LnGRS 2025-10-13T00:15:02.208175668+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.208204599+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.410979454+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=rCHra namespace=openshift-cluster-machine-approver 2025-10-13T00:15:02.410979454+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="resolving sources" id=wTie0 namespace=openshift-cluster-storage-operator 2025-10-13T00:15:02.410979454+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="checking if subscriptions need update" id=wTie0 namespace=openshift-cluster-storage-operator 2025-10-13T00:15:02.614450680+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.614450680+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.614488042+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.614504272+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="catalog update required at 2025-10-13 00:15:02.614488612 +0000 UTC m=+3.344263181" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.620648976+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:02.620648976+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:02.808341700+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=aOAsB namespace=openshift-cluster-samples-operator 2025-10-13T00:15:02.808341700+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="resolving sources" id=kT0qb namespace=openshift-cluster-version 2025-10-13T00:15:02.808341700+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="checking if subscriptions need update" id=kT0qb namespace=openshift-cluster-version 2025-10-13T00:15:02.825683019+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LnGRS 2025-10-13T00:15:02.825952197+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-10-13T00:15:02.926838350+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-10-13T00:15:03.012724503+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=wTie0 namespace=openshift-cluster-storage-operator 2025-10-13T00:15:03.012724503+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="resolving sources" id=Wbj2X namespace=openshift-config 2025-10-13T00:15:03.012724503+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="checking if subscriptions need update" id=Wbj2X namespace=openshift-config 2025-10-13T00:15:03.013804586+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.013804586+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.013804586+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.013804586+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=0hG3J 2025-10-13T00:15:03.013804586+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="registry state good" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.013804586+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.208509990+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=kT0qb namespace=openshift-cluster-version 2025-10-13T00:15:03.208509990+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="resolving sources" id=A//iP namespace=openshift-config-managed 2025-10-13T00:15:03.208543061+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="checking if subscriptions need update" id=A//iP namespace=openshift-config-managed 2025-10-13T00:15:03.410761848+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.410761848+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.410761848+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.410761848+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0hG3J 2025-10-13T00:15:03.427070947+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.427070947+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.610888625+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.610888625+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.610888625+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.610888625+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=cHxEw 
2025-10-13T00:15:03.610888625+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.610888625+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:03.618210254+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="No subscriptions were found in namespace openshift-config" id=Wbj2X namespace=openshift-config 2025-10-13T00:15:03.618210254+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="resolving sources" id=bFt/7 namespace=openshift-config-operator 2025-10-13T00:15:03.618210254+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="checking if subscriptions need update" id=bFt/7 namespace=openshift-config-operator 2025-10-13T00:15:03.814190216+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=A//iP namespace=openshift-config-managed 2025-10-13T00:15:03.814190216+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="resolving sources" id=TYSvz namespace=openshift-console 2025-10-13T00:15:03.814190216+00:00 stderr F time="2025-10-13T00:15:03Z" level=info msg="checking if subscriptions need update" id=TYSvz namespace=openshift-console 2025-10-13T00:15:04.013989622+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:04.013989622+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:04.013989622+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:04.013989622+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cHxEw 2025-10-13T00:15:04.017352403+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.017352403+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.208866121+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=bFt/7 namespace=openshift-config-operator 2025-10-13T00:15:04.208897362+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="resolving sources" id=+nwl3 namespace=openshift-console-operator 2025-10-13T00:15:04.208897362+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="checking if subscriptions need update" id=+nwl3 
namespace=openshift-console-operator 2025-10-13T00:15:04.209432048+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.209432048+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.209432048+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.209432048+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s49NF 2025-10-13T00:15:04.209432048+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.209432048+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.408954106+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="No subscriptions were found in namespace openshift-console" id=TYSvz namespace=openshift-console 2025-10-13T00:15:04.408954106+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="resolving sources" id=AfbQG namespace=openshift-console-user-settings 2025-10-13T00:15:04.408954106+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="checking if subscriptions need update" id=AfbQG namespace=openshift-console-user-settings 2025-10-13T00:15:04.609772073+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.609990440+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.610023150+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.610102923+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s49NF 2025-10-13T00:15:04.615910257+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.615910257+00:00 stderr F 
time="2025-10-13T00:15:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.805872018+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.806058294+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.806107096+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.806116906+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xv69J 2025-10-13T00:15:04.806123966+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.806130646+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:04.810904679+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=+nwl3 namespace=openshift-console-operator 2025-10-13T00:15:04.810935360+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="resolving sources" id=JhpS6 namespace=openshift-controller-manager 2025-10-13T00:15:04.810935360+00:00 stderr F time="2025-10-13T00:15:04Z" level=info msg="checking if subscriptions need update" id=JhpS6 namespace=openshift-controller-manager 2025-10-13T00:15:05.005738787+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=AfbQG namespace=openshift-console-user-settings 2025-10-13T00:15:05.005825069+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="resolving sources" id=a0uQb namespace=openshift-controller-manager-operator 2025-10-13T00:15:05.005868271+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="checking if subscriptions need update" id=a0uQb namespace=openshift-controller-manager-operator 2025-10-13T00:15:05.209746179+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:05.209903074+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:05.209903074+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:05.209999657+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xv69J 2025-10-13T00:15:05.210456421+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.210475321+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.406674660+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=JhpS6 namespace=openshift-controller-manager 2025-10-13T00:15:05.406674660+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="resolving sources" id=0xiuO namespace=openshift-dns 2025-10-13T00:15:05.406674660+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="checking if subscriptions need update" id=0xiuO namespace=openshift-dns 2025-10-13T00:15:05.406885376+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.406981689+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.406981689+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.406981689+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=3Bs5d 2025-10-13T00:15:05.407005010+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.407005010+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.614457145+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=a0uQb namespace=openshift-controller-manager-operator 2025-10-13T00:15:05.614457145+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="resolving sources" id=VC4f5 namespace=openshift-dns-operator 2025-10-13T00:15:05.614457145+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="checking if subscriptions need update" id=VC4f5 namespace=openshift-dns-operator 2025-10-13T00:15:05.816207800+00:00 stderr F time="2025-10-13T00:15:05Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.816337064+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.816337064+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.816374315+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3Bs5d 2025-10-13T00:15:05.816418586+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:05.816493709+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:05.824421006+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:05.824421006+00:00 stderr F time="2025-10-13T00:15:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=0xiuO namespace=openshift-dns 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="resolving sources" id=eKi6G namespace=openshift-etcd 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checking if subscriptions need update" id=eKi6G namespace=openshift-etcd 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=l11D+ 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.013381008+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.206638098+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=VC4f5 namespace=openshift-dns-operator 2025-10-13T00:15:06.206638098+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="resolving sources" id=5pw95 namespace=openshift-etcd-operator 2025-10-13T00:15:06.206670949+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checking if subscriptions need update" id=5pw95 namespace=openshift-etcd-operator 2025-10-13T00:15:06.206846844+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:06.206947217+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:06.206947217+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:06.206957648+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=V00zf 2025-10-13T00:15:06.206965128+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:06.206972328+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:06.405244449+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=eKi6G namespace=openshift-etcd 2025-10-13T00:15:06.405244449+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="resolving sources" id=+6NPM namespace=openshift-host-network 2025-10-13T00:15:06.405244449+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checking if subscriptions need update" id=+6NPM namespace=openshift-host-network 2025-10-13T00:15:06.630731845+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=5pw95 namespace=openshift-etcd-operator 2025-10-13T00:15:06.630731845+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="resolving sources" id=KaiFl namespace=openshift-image-registry 2025-10-13T00:15:06.630731845+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checking if subscriptions need update" id=KaiFl 
namespace=openshift-image-registry 2025-10-13T00:15:06.806003536+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=+6NPM namespace=openshift-host-network 2025-10-13T00:15:06.806003536+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="resolving sources" id=ZJbs7 namespace=openshift-infra 2025-10-13T00:15:06.806003536+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="checking if subscriptions need update" id=ZJbs7 namespace=openshift-infra 2025-10-13T00:15:06.806429779+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.806614964+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.806614964+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.806705307+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=l11D+ 2025-10-13T00:15:06.806780539+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:06.806788330+00:00 stderr F time="2025-10-13T00:15:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=KaiFl namespace=openshift-image-registry 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="resolving sources" id=8cKSZ namespace=openshift-ingress 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="checking if subscriptions need update" id=8cKSZ namespace=openshift-ingress 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:07.010071589+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V00zf 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=KTZmg 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=ZJbs7 namespace=openshift-infra 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="resolving sources" id=aMSYd namespace=openshift-ingress-canary 2025-10-13T00:15:07.207949148+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="checking if subscriptions need update" id=aMSYd namespace=openshift-ingress-canary 2025-10-13T00:15:07.406855218+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=8cKSZ namespace=openshift-ingress 2025-10-13T00:15:07.406855218+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="resolving sources" id=8SB0t namespace=openshift-ingress-operator 2025-10-13T00:15:07.406855218+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="checking if subscriptions need update" id=8SB0t namespace=openshift-ingress-operator 2025-10-13T00:15:07.605627873+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.605674555+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.605707786+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.605772618+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KTZmg 2025-10-13T00:15:07.606641874+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=aMSYd namespace=openshift-ingress-canary 2025-10-13T00:15:07.606690885+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="resolving sources" id=Hl9Fp namespace=openshift-kni-infra 2025-10-13T00:15:07.606690885+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="checking if subscriptions need update" id=Hl9Fp namespace=openshift-kni-infra 2025-10-13T00:15:07.805708088+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=8SB0t namespace=openshift-ingress-operator 2025-10-13T00:15:07.805708088+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="resolving sources" id=LHk37 namespace=openshift-kube-apiserver 2025-10-13T00:15:07.805708088+00:00 stderr F time="2025-10-13T00:15:07Z" level=info msg="checking if subscriptions need update" id=LHk37 namespace=openshift-kube-apiserver 2025-10-13T00:15:08.007132913+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=Hl9Fp namespace=openshift-kni-infra 2025-10-13T00:15:08.007132913+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="resolving sources" id=pxTJW namespace=openshift-kube-apiserver-operator 2025-10-13T00:15:08.007175275+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="checking if subscriptions need update" id=pxTJW namespace=openshift-kube-apiserver-operator 2025-10-13T00:15:08.206358642+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=LHk37 namespace=openshift-kube-apiserver 2025-10-13T00:15:08.206358642+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="resolving sources" id=X2day namespace=openshift-kube-controller-manager 2025-10-13T00:15:08.206358642+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="checking if subscriptions need update" id=X2day namespace=openshift-kube-controller-manager 2025-10-13T00:15:08.408454538+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=pxTJW namespace=openshift-kube-apiserver-operator 2025-10-13T00:15:08.408454538+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="resolving sources" id=9eJQ/ namespace=openshift-kube-controller-manager-operator 2025-10-13T00:15:08.408454538+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="checking if subscriptions need update" id=9eJQ/ namespace=openshift-kube-controller-manager-operator 2025-10-13T00:15:08.620652955+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=X2day namespace=openshift-kube-controller-manager 2025-10-13T00:15:08.620652955+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="resolving sources" id=K7OJu 
namespace=openshift-kube-scheduler 2025-10-13T00:15:08.620652955+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="checking if subscriptions need update" id=K7OJu namespace=openshift-kube-scheduler 2025-10-13T00:15:08.806148403+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=9eJQ/ namespace=openshift-kube-controller-manager-operator 2025-10-13T00:15:08.806183254+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="resolving sources" id=6R/bQ namespace=openshift-kube-scheduler-operator 2025-10-13T00:15:08.806183254+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="checking if subscriptions need update" id=6R/bQ namespace=openshift-kube-scheduler-operator 2025-10-13T00:15:09.004948470+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=K7OJu namespace=openshift-kube-scheduler 2025-10-13T00:15:09.004948470+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="resolving sources" id=oexni namespace=openshift-kube-storage-version-migrator 2025-10-13T00:15:09.004988691+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="checking if subscriptions need update" id=oexni namespace=openshift-kube-storage-version-migrator 2025-10-13T00:15:09.207237441+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=6R/bQ namespace=openshift-kube-scheduler-operator 2025-10-13T00:15:09.207237441+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="resolving sources" id=b3r9W namespace=openshift-kube-storage-version-migrator-operator 2025-10-13T00:15:09.207279952+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="checking if subscriptions need update" id=b3r9W namespace=openshift-kube-storage-version-migrator-operator 2025-10-13T00:15:09.407124080+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=oexni namespace=openshift-kube-storage-version-migrator 2025-10-13T00:15:09.407179591+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="resolving sources" id=dK/J9 namespace=openshift-machine-api 2025-10-13T00:15:09.407179591+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="checking if subscriptions need update" id=dK/J9 namespace=openshift-machine-api 2025-10-13T00:15:09.605828363+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=b3r9W namespace=openshift-kube-storage-version-migrator-operator 2025-10-13T00:15:09.605828363+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="resolving sources" id=kVm42 namespace=openshift-machine-config-operator 2025-10-13T00:15:09.605828363+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="checking if subscriptions need update" id=kVm42 namespace=openshift-machine-config-operator 2025-10-13T00:15:09.805167886+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=dK/J9 namespace=openshift-machine-api 2025-10-13T00:15:09.805167886+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="resolving sources" id=dJPar namespace=openshift-marketplace 2025-10-13T00:15:09.805202427+00:00 stderr F time="2025-10-13T00:15:09Z" level=info msg="checking if subscriptions need update" id=dJPar 
namespace=openshift-marketplace 2025-10-13T00:15:10.005819218+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=kVm42 namespace=openshift-machine-config-operator 2025-10-13T00:15:10.005882640+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="resolving sources" id=zHFS6 namespace=openshift-monitoring 2025-10-13T00:15:10.005882640+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="checking if subscriptions need update" id=zHFS6 namespace=openshift-monitoring 2025-10-13T00:15:10.206059407+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=dJPar namespace=openshift-marketplace 2025-10-13T00:15:10.206098828+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="resolving sources" id=wDgwP namespace=openshift-multus 2025-10-13T00:15:10.206098828+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="checking if subscriptions need update" id=wDgwP namespace=openshift-multus 2025-10-13T00:15:10.409060739+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=zHFS6 namespace=openshift-monitoring 2025-10-13T00:15:10.409060739+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="resolving sources" id=RY1UT namespace=openshift-network-diagnostics 2025-10-13T00:15:10.409060739+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="checking if subscriptions need update" id=RY1UT namespace=openshift-network-diagnostics 2025-10-13T00:15:10.606073341+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=wDgwP namespace=openshift-multus 2025-10-13T00:15:10.606073341+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="resolving sources" id=sGBps namespace=openshift-network-node-identity 2025-10-13T00:15:10.606073341+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="checking if subscriptions need update" id=sGBps namespace=openshift-network-node-identity 2025-10-13T00:15:10.806852847+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=RY1UT namespace=openshift-network-diagnostics 2025-10-13T00:15:10.806852847+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="resolving sources" id=PAMBJ namespace=openshift-network-operator 2025-10-13T00:15:10.806852847+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="checking if subscriptions need update" id=PAMBJ namespace=openshift-network-operator 2025-10-13T00:15:11.006634803+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=sGBps namespace=openshift-network-node-identity 2025-10-13T00:15:11.006677374+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="resolving sources" id=4/kKf namespace=openshift-node 2025-10-13T00:15:11.006677374+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="checking if subscriptions need update" id=4/kKf namespace=openshift-node 2025-10-13T00:15:11.207315446+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=PAMBJ namespace=openshift-network-operator 2025-10-13T00:15:11.207315446+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="resolving sources" id=C8TTx namespace=openshift-nutanix-infra 
2025-10-13T00:15:11.207315446+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="checking if subscriptions need update" id=C8TTx namespace=openshift-nutanix-infra 2025-10-13T00:15:11.406670049+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="No subscriptions were found in namespace openshift-node" id=4/kKf namespace=openshift-node 2025-10-13T00:15:11.406670049+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="resolving sources" id=bet/U namespace=openshift-oauth-apiserver 2025-10-13T00:15:11.406670049+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="checking if subscriptions need update" id=bet/U namespace=openshift-oauth-apiserver 2025-10-13T00:15:11.605072383+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=C8TTx namespace=openshift-nutanix-infra 2025-10-13T00:15:11.605072383+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="resolving sources" id=Fm35W namespace=openshift-openstack-infra 2025-10-13T00:15:11.605072383+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="checking if subscriptions need update" id=Fm35W namespace=openshift-openstack-infra 2025-10-13T00:15:11.805405366+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=bet/U namespace=openshift-oauth-apiserver 2025-10-13T00:15:11.805440637+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="resolving sources" id=hhitF namespace=openshift-operator-lifecycle-manager 2025-10-13T00:15:11.805440637+00:00 stderr F time="2025-10-13T00:15:11Z" level=info msg="checking if subscriptions need update" id=hhitF namespace=openshift-operator-lifecycle-manager 2025-10-13T00:15:12.005289394+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=Fm35W namespace=openshift-openstack-infra 2025-10-13T00:15:12.005289394+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="resolving sources" id=m8Yeu namespace=openshift-operators 2025-10-13T00:15:12.005289394+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="checking if subscriptions need update" id=m8Yeu namespace=openshift-operators 2025-10-13T00:15:12.205805562+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=hhitF namespace=openshift-operator-lifecycle-manager 2025-10-13T00:15:12.205805562+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="resolving sources" id=D2ZaQ namespace=openshift-ovirt-infra 2025-10-13T00:15:12.205888115+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="checking if subscriptions need update" id=D2ZaQ namespace=openshift-ovirt-infra 2025-10-13T00:15:12.405658350+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=m8Yeu namespace=openshift-operators 2025-10-13T00:15:12.405658350+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="resolving sources" id=jBMLv namespace=openshift-ovn-kubernetes 2025-10-13T00:15:12.405658350+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="checking if subscriptions need update" id=jBMLv namespace=openshift-ovn-kubernetes 2025-10-13T00:15:12.606034954+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=D2ZaQ namespace=openshift-ovirt-infra 2025-10-13T00:15:12.606034954+00:00 
stderr F time="2025-10-13T00:15:12Z" level=info msg="resolving sources" id=f6xS5 namespace=openshift-route-controller-manager 2025-10-13T00:15:12.606077385+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="checking if subscriptions need update" id=f6xS5 namespace=openshift-route-controller-manager 2025-10-13T00:15:12.805520361+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=jBMLv namespace=openshift-ovn-kubernetes 2025-10-13T00:15:12.805520361+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="resolving sources" id=sUPV+ namespace=openshift-service-ca 2025-10-13T00:15:12.805555542+00:00 stderr F time="2025-10-13T00:15:12Z" level=info msg="checking if subscriptions need update" id=sUPV+ namespace=openshift-service-ca 2025-10-13T00:15:13.006944466+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=f6xS5 namespace=openshift-route-controller-manager 2025-10-13T00:15:13.006944466+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="resolving sources" id=7Ydkz namespace=openshift-service-ca-operator 2025-10-13T00:15:13.006944466+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="checking if subscriptions need update" id=7Ydkz namespace=openshift-service-ca-operator 2025-10-13T00:15:13.206748462+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=sUPV+ namespace=openshift-service-ca 2025-10-13T00:15:13.206748462+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="resolving sources" id=Kkm7p namespace=openshift-user-workload-monitoring 2025-10-13T00:15:13.206748462+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="checking if subscriptions need update" id=Kkm7p namespace=openshift-user-workload-monitoring 2025-10-13T00:15:13.407840298+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=7Ydkz namespace=openshift-service-ca-operator 2025-10-13T00:15:13.407907040+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="resolving sources" id=NgGSu namespace=openshift-vsphere-infra 2025-10-13T00:15:13.407907040+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="checking if subscriptions need update" id=NgGSu namespace=openshift-vsphere-infra 2025-10-13T00:15:13.608679665+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=Kkm7p namespace=openshift-user-workload-monitoring 2025-10-13T00:15:13.608679665+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="resolving sources" id=5wIMM namespace=openshift-monitoring 2025-10-13T00:15:13.608679665+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="checking if subscriptions need update" id=5wIMM namespace=openshift-monitoring 2025-10-13T00:15:13.805819382+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=NgGSu namespace=openshift-vsphere-infra 2025-10-13T00:15:13.805855393+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="resolving sources" id=0XMXR namespace=openshift-operator-lifecycle-manager 2025-10-13T00:15:13.805855393+00:00 stderr F time="2025-10-13T00:15:13Z" level=info msg="checking if subscriptions need update" id=0XMXR namespace=openshift-operator-lifecycle-manager 
2025-10-13T00:15:14.007467373+00:00 stderr F time="2025-10-13T00:15:14Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=5wIMM namespace=openshift-monitoring 2025-10-13T00:15:14.007467373+00:00 stderr F time="2025-10-13T00:15:14Z" level=info msg="resolving sources" id=be4jY namespace=openshift-operators 2025-10-13T00:15:14.007467373+00:00 stderr F time="2025-10-13T00:15:14Z" level=info msg="checking if subscriptions need update" id=be4jY namespace=openshift-operators 2025-10-13T00:15:14.207923348+00:00 stderr F time="2025-10-13T00:15:14Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=0XMXR namespace=openshift-operator-lifecycle-manager 2025-10-13T00:15:14.409956142+00:00 stderr F time="2025-10-13T00:15:14Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=be4jY namespace=openshift-operators 2025-10-13T00:15:30.819943924+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.819943924+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.829063957+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.829187111+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.829187111+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.829199771+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=/L4TO 2025-10-13T00:15:30.829207241+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.829214441+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.837021755+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.837169620+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.837169620+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:30.837240832+00:00 stderr F time="2025-10-13T00:15:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/L4TO 2025-10-13T00:15:31.230006010+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.230006010+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.232830904+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.232919587+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.232919587+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.232928717+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=aY7Zu 2025-10-13T00:15:31.232928717+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.232936368+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.238350860+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.238411342+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.238411342+00:00 stderr F time="2025-10-13T00:15:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:31.238480654+00:00 stderr F time="2025-10-13T00:15:31Z" level=info 
msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aY7Zu 2025-10-13T00:15:32.018688839+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.018688839+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.021487033+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.021589716+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.021608047+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.021608047+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=lEPKp 2025-10-13T00:15:32.021617497+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.021626107+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.026710410+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.026841914+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.026852274+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.026990808+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=lEPKp 2025-10-13T00:15:32.825881594+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 
2025-10-13T00:15:32.825881594+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.828557245+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.828632797+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.828632797+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.828632797+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=NdzzS 2025-10-13T00:15:32.828647577+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.828647577+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.839663077+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.840012278+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.840028138+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:32.840136301+00:00 stderr F time="2025-10-13T00:15:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NdzzS 2025-10-13T00:15:44.547767485+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.547801406+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.551090605+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 
2025-10-13T00:15:44.551090605+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.554636391+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.554746435+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.554746435+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.554758715+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xwx1o 2025-10-13T00:15:44.554758715+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.554767585+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.555143226+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.555143226+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.555527738+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.555527738+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FevPG 2025-10-13T00:15:44.555527738+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.555527738+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.562855847+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 
2025-10-13T00:15:44.563035403+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.563035403+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.563377113+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FevPG 2025-10-13T00:15:44.563424635+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.563449495+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.563989031+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.564066014+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.564076454+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.564157917+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xwx1o 2025-10-13T00:15:44.567451485+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.567591709+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.567591709+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.567591709+00:00 stderr F 
time="2025-10-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=9u/ac 2025-10-13T00:15:44.567605180+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.567613260+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.578402083+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.578402083+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.578402083+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.578402083+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9u/ac 2025-10-13T00:15:44.580622040+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:44.580622040+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:44.586839106+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:44.586940069+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:44.586951649+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:44.586974080+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=4dCit 2025-10-13T00:15:44.586974080+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 
2025-10-13T00:15:44.586974080+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:44.600785564+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:44.600785564+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:44.950346097+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:44.950505902+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:44.950538543+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:44.950568994+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dUtSX 2025-10-13T00:15:44.950592285+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:44.950615385+00:00 stderr F time="2025-10-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:45.151790603+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:45.151975178+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:45.152006319+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:45.152080212+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4dCit 2025-10-13T00:15:45.152156314+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=hfm/R 2025-10-13T00:15:45.152194225+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:45.552881010+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:45.554509749+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:45.554509749+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:45.554528690+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=hfm/R 2025-10-13T00:15:45.554536020+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:45.554543270+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:45.751746809+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:45.751832181+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:45.751832181+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:45.751898423+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dUtSX 2025-10-13T00:15:45.751941504+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:45.751949145+00:00 stderr F time="2025-10-13T00:15:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.152151966+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.152271379+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.152271379+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.152308450+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=g6S8c 2025-10-13T00:15:46.152308450+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.152308450+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.350656540+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:46.350720672+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:46.350720672+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:46.350807525+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=hfm/R 2025-10-13T00:15:46.350829145+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:46.350844826+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:46.751573708+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:46.751664161+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hnsDj 
2025-10-13T00:15:46.751664161+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:46.751691972+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=hnsDj 2025-10-13T00:15:46.751691972+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:46.751691972+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:46.952286655+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.952605786+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.952691508+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:46.952840973+00:00 stderr F time="2025-10-13T00:15:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g6S8c 2025-10-13T00:15:47.351543340+00:00 stderr F time="2025-10-13T00:15:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:47.351643033+00:00 stderr F time="2025-10-13T00:15:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:47.351643033+00:00 stderr F time="2025-10-13T00:15:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:47.351727536+00:00 stderr F time="2025-10-13T00:15:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hnsDj 2025-10-13T00:15:58.618439497+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.618439497+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.618439497+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.618439497+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.619498661+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.619627795+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.619658286+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.619686277+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.619721378+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.619721378+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.619743619+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=t1x8o 2025-10-13T00:15:58.619743619+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.619743619+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.619769520+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=B2I3P 2025-10-13T00:15:58.619793141+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.619816511+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.625369270+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.625500934+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.625573786+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.625582466+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.625669219+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=t1x8o 2025-10-13T00:15:58.625677819+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.625751342+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.625751342+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.625856885+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.625962819+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=B2I3P 2025-10-13T00:15:58.628530021+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.628530021+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.628530021+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.628530021+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ZBk7Q 2025-10-13T00:15:58.628530021+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.628530021+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.632698825+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.632747996+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.632756086+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:58.632807618+00:00 stderr F time="2025-10-13T00:15:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ZBk7Q 2025-10-13T00:15:59.152393212+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.152393212+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.154295533+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.154476809+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.154537441+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.154575552+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=aQEqY 2025-10-13T00:15:59.154606963+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.154646904+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.160632756+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.160744860+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.160744860+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:15:59.160808612+00:00 stderr F time="2025-10-13T00:15:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aQEqY 2025-10-13T00:16:00.201612832+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.201683175+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.210554309+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.210777166+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.210838038+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.210873049+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=RYCvF 2025-10-13T00:16:00.210902100+00:00 stderr F 
time="2025-10-13T00:16:00Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.210961182+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.225518739+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.225708195+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.225739206+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.225814069+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=RYCvF 2025-10-13T00:16:00.837838406+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.837838406+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.840844273+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.840920035+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.840920035+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.840931736+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=eawNg 2025-10-13T00:16:00.840931736+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.840941086+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 
2025-10-13T00:16:00.854431609+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.854431609+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.854431609+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:00.854431609+00:00 stderr F time="2025-10-13T00:16:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eawNg 2025-10-13T00:16:01.219500157+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.219500157+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.222321007+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.222474412+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.222474412+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.222487613+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fp8Uu 2025-10-13T00:16:01.222487613+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.222498653+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.239055054+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.239055054+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.251906126+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fp8Uu 2025-10-13T00:16:01.278457868+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.278457868+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.417534498+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.417640842+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.417640842+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.417715954+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CdmoJ 2025-10-13T00:16:01.417780636+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:01.419708878+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:01.618998510+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.618998510+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.618998510+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.618998510+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=xL36w 2025-10-13T00:16:01.618998510+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.618998510+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:01.822869028+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:01.822869028+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:01.822869028+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:01.822869028+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mPhXB 2025-10-13T00:16:01.822869028+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:01.822869028+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:02.241032779+00:00 stderr F 2025/10/13 00:16:02 http: TLS handshake error from 10.217.0.28:47022: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "Red Hat, Inc.") 2025-10-13T00:16:02.416708203+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:02.416885259+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:02.416885259+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:02.417003763+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=xL36w 2025-10-13T00:16:02.417098636+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:02.417098636+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:02.615891612+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:02.616000365+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:02.616000365+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:02.616043197+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mPhXB 2025-10-13T00:16:02.616123559+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:02.616123559+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:02.817419825+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:02.817419825+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:02.817453406+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:02.817453406+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Mrjt6 2025-10-13T00:16:02.817453406+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:02.817462876+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:03.017058078+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.017100909+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.017100909+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.017115650+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=HTF11 2025-10-13T00:16:03.017115650+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.017124820+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.618422675+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:03.618545169+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:03.618545169+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:03.618608811+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Mrjt6 2025-10-13T00:16:03.618676823+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:03.618695183+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:03.817690896+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.817801999+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.817812209+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.817878242+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HTF11 2025-10-13T00:16:03.817978855+00:00 stderr F time="2025-10-13T00:16:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:03.817995495+00:00 stderr F time="2025-10-13T00:16:03Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:04.017588877+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.017677009+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.017677009+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.017686080+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=7m8E0 2025-10-13T00:16:04.017692900+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.017699620+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.217038692+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:04.218118847+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:04.218118847+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:04.218118847+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=TByj9 2025-10-13T00:16:04.218118847+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:04.218118847+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:04.817006664+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.817111598+00:00 stderr F time="2025-10-13T00:16:04Z" level=info 
msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.817111598+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.817214261+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7m8E0 2025-10-13T00:16:04.817277123+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:04.817301514+00:00 stderr F time="2025-10-13T00:16:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.016628127+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:05.016734790+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:05.016734790+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:05.016813963+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=TByj9 2025-10-13T00:16:05.215934799+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.216026272+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.216026272+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.216038512+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=0QanS 2025-10-13T00:16:05.216047552+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.216047552+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.245317031+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.245402684+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.616345381+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.616443924+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.616453044+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.616470885+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mdx34 2025-10-13T00:16:05.616470885+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.616478745+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:05.816372686+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.816566612+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.816566612+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.816566612+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="requeueing registry server for catalog update check: 
update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0QanS 2025-10-13T00:16:05.816627614+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:05.816639724+00:00 stderr F time="2025-10-13T00:16:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:06.217447939+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:06.217571123+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:06.217585973+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:06.217611174+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=xyzuM 2025-10-13T00:16:06.217611174+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:06.217632025+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:06.416088140+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:06.416211444+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:06.416211444+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:06.416264965+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mdx34 2025-10-13T00:16:06.416319077+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:06.416343118+00:00 stderr F 
time="2025-10-13T00:16:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:06.817201884+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:06.817267446+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:06.817275777+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:06.817301647+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=fh4S7 2025-10-13T00:16:06.817301647+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:06.817301647+00:00 stderr F time="2025-10-13T00:16:06Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:07.017011732+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:07.017107056+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:07.017107056+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:07.017173298+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xyzuM 2025-10-13T00:16:07.283092546+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.283320024+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.416763413+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 
2025-10-13T00:16:07.416872987+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:07.416883347+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:07.416960670+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fh4S7 2025-10-13T00:16:07.417032962+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:07.417042292+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:07.616679815+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.616768538+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.616768538+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.616793209+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=r+GoH 2025-10-13T00:16:07.616793209+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.616822069+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:07.818963711+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:07.819108966+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:07.819108966+00:00 stderr F 
time="2025-10-13T00:16:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:07.819108966+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=HSQXA 2025-10-13T00:16:07.819163458+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:07.819163458+00:00 stderr F time="2025-10-13T00:16:07Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:08.418570522+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:08.418736037+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:08.418746428+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:08.418820210+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=r+GoH 2025-10-13T00:16:08.418909913+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:08.418920873+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:08.617686958+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:08.617778781+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:08.617778781+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:08.617848113+00:00 stderr F 
time="2025-10-13T00:16:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=HSQXA 2025-10-13T00:16:08.820855114+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:08.820933097+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:08.820933097+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:08.820959907+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=gbATb 2025-10-13T00:16:08.820959907+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:08.820959907+00:00 stderr F time="2025-10-13T00:16:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:09.216822723+00:00 stderr F time="2025-10-13T00:16:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:09.216915506+00:00 stderr F time="2025-10-13T00:16:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:09.216915506+00:00 stderr F time="2025-10-13T00:16:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:09.216978988+00:00 stderr F time="2025-10-13T00:16:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gbATb 2025-10-13T00:16:10.117736727+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.117823140+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.121046153+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.123934716+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.123934716+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.123934716+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=WDm7N 2025-10-13T00:16:10.123934716+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.123934716+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.129870146+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.130082363+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.130092794+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.130204757+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=WDm7N 2025-10-13T00:16:10.130300430+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.130362022+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.133343278+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.133585026+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace 
id=sbkx4 2025-10-13T00:16:10.133596986+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.133643137+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=sbkx4 2025-10-13T00:16:10.133643137+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.133661868+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.140654642+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.140654642+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.417126509+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.417298755+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.417298755+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.417344706+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=cG4ff 2025-10-13T00:16:10.417344706+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.417344706+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:10.617265318+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.617378152+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace 
id=sbkx4 2025-10-13T00:16:10.617378152+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.617460714+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sbkx4 2025-10-13T00:16:10.617528777+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:10.617528777+00:00 stderr F time="2025-10-13T00:16:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.017116962+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.017179914+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.017179914+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.017191684+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=QaB2c 2025-10-13T00:16:11.017191684+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.017202525+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.216514477+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:11.216641911+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:11.216641911+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=cG4ff 2025-10-13T00:16:11.216743434+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cG4ff 2025-10-13T00:16:11.216789326+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.216799256+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.617878809+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.618004593+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.618004593+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.618004593+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=d6LVW 2025-10-13T00:16:11.618004593+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.618004593+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:11.816053004+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.816162558+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.816162558+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.816206029+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QaB2c 2025-10-13T00:16:11.816290432+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="syncing catalog source" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:11.816290432+00:00 stderr F time="2025-10-13T00:16:11Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:12.217567332+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:12.217651614+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:12.217651614+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:12.217663305+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+sZq1 2025-10-13T00:16:12.217672065+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:12.217672065+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:12.416537663+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:12.416537663+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:12.416537663+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:12.416587565+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=d6LVW 2025-10-13T00:16:12.416634136+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:12.416655097+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:12.816908754+00:00 stderr F time="2025-10-13T00:16:12Z" level=info 
msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:12.816986966+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:12.816986966+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:12.816986966+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=gU3j4 2025-10-13T00:16:12.816986966+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:12.816986966+00:00 stderr F time="2025-10-13T00:16:12Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:13.020718850+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:13.020718850+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:13.020718850+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:13.020718850+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+sZq1 2025-10-13T00:16:13.020718850+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.020718850+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.417068652+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.417125114+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.417125114+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.417125114+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0pKnc 2025-10-13T00:16:13.417144634+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.417144634+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:13.617404067+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:13.617661155+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:13.617661155+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:13.617853011+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gU3j4 2025-10-13T00:16:13.618248884+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:13.618248884+00:00 stderr F time="2025-10-13T00:16:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.017215260+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.017297882+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.017309693+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.017383555+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=2fWGV 2025-10-13T00:16:14.017383555+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.017383555+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.216911224+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:14.216967736+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:14.216983856+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:14.217212414+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0pKnc 2025-10-13T00:16:14.217212414+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.217212414+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.616873812+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.616957254+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.616957254+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.616973795+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=VK+OG 2025-10-13T00:16:14.616973795+00:00 stderr F time="2025-10-13T00:16:14Z" 
level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.616987035+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:14.816883036+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.816883036+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.816883036+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.816883036+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2fWGV 2025-10-13T00:16:14.816972109+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E 2025-10-13T00:16:14.816972109+00:00 stderr F time="2025-10-13T00:16:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E 2025-10-13T00:16:15.217920797+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E 2025-10-13T00:16:15.218045631+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E 2025-10-13T00:16:15.218045631+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=false id=aC65E 2025-10-13T00:16:15.218045631+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E 2025-10-13T00:16:15.417793508+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:15.417923512+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VK+OG 2025-10-13T00:16:15.417923512+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VK+OG
2025-10-13T00:16:15.418074167+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VK+OG
2025-10-13T00:16:15.816599208+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E
2025-10-13T00:16:15.816749153+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E
2025-10-13T00:16:15.816749153+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E
2025-10-13T00:16:15.816903048+00:00 stderr F time="2025-10-13T00:16:15Z" level=info msg="creating desired pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E pod.name= pod.namespace=openshift-marketplace
2025-10-13T00:16:16.023501124+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aC65E
2025-10-13T00:16:16.023501124+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm
2025-10-13T00:16:16.023501124+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm
2025-10-13T00:16:16.217987601+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm
2025-10-13T00:16:16.218087674+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=coPBm
2025-10-13T00:16:16.218096975+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=coPBm
2025-10-13T00:16:16.218106095+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=coPBm
2025-10-13T00:16:16.218113565+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm
2025-10-13T00:16:16.218120596+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="ensuring
registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm 2025-10-13T00:16:16.616152441+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm 2025-10-13T00:16:16.616280925+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=coPBm 2025-10-13T00:16:16.616280925+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=coPBm 2025-10-13T00:16:16.616392929+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=coPBm 2025-10-13T00:16:16.616447891+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:16.616456311+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:16.816197347+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:16.816292130+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:16.816292130+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:16.816304290+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=K+2Fj 2025-10-13T00:16:16.816314491+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:16.816314491+00:00 stderr F time="2025-10-13T00:16:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:17.217241519+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:17.217472357+00:00 stderr F time="2025-10-13T00:16:17Z" 
level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:17.217541579+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:17.217642622+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=K+2Fj 2025-10-13T00:16:17.345097330+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.345205663+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.417655257+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.417788221+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.417788221+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.417817492+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=hzarv 2025-10-13T00:16:17.417817492+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.417817492+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.817455649+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.817455649+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.817455649+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching 
hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.817713327+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.817713327+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hzarv 2025-10-13T00:16:17.817799940+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:17.817822791+00:00 stderr F time="2025-10-13T00:16:17Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.018279840+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.018279840+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.018279840+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.018279840+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qZz49 2025-10-13T00:16:18.018279840+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.018279840+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.354476512+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:18.354476512+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:18.416891204+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.417013728+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.417013728+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.417125862+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.417125862+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qZz49 2025-10-13T00:16:18.617468956+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:18.617721764+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:18.617759025+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:18.617793976+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=zHbne 2025-10-13T00:16:18.617819287+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:18.617843458+00:00 stderr F time="2025-10-13T00:16:18Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:19.020066048+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:19.020250204+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:19.020283075+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:19.020426780+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:19.020459721+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zHbne 2025-10-13T00:16:19.020539663+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.020574574+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.216995874+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.216995874+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.216995874+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.216995874+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Li/Nc 2025-10-13T00:16:19.216995874+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.216995874+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.438696034+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:19.438696034+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:19.618304855+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.618485880+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.618497251+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.618653586+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.618653586+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Li/Nc 2025-10-13T00:16:19.618772660+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:19.618844832+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:19.818988631+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:19.819163076+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:19.819163076+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:19.819184187+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=2gtnU 2025-10-13T00:16:19.819184187+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:19.819200788+00:00 stderr F time="2025-10-13T00:16:19Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:20.018070096+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.018162759+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.018162759+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.018176909+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="checked 
registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=MrGfy 2025-10-13T00:16:20.018176909+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.018187320+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.617627765+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:20.617856162+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:20.617856162+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:20.617938355+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:20.617938355+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=2gtnU 2025-10-13T00:16:20.618034098+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:20.618034098+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:20.817217416+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.817287878+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.817287878+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.817580238+00:00 stderr F time="2025-10-13T00:16:20Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:20.817580238+00:00 stderr F time="2025-10-13T00:16:20Z" 
level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MrGfy 2025-10-13T00:16:21.017330474+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.017478009+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.017478009+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.017495859+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=SOOZ2 2025-10-13T00:16:21.017508810+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.017508810+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.417785037+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.418022585+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.418079267+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.418225922+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.418275333+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SOOZ2 2025-10-13T00:16:21.443272885+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:21.443272885+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 
2025-10-13T00:16:21.617067329+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:21.617158612+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:21.617158612+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:21.617172072+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1M20t 2025-10-13T00:16:21.617172072+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:21.617182532+00:00 stderr F time="2025-10-13T00:16:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:22.016986615+00:00 stderr F time="2025-10-13T00:16:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:22.017053107+00:00 stderr F time="2025-10-13T00:16:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:22.017053107+00:00 stderr F time="2025-10-13T00:16:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:22.017134170+00:00 stderr F time="2025-10-13T00:16:22Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:22.017134170+00:00 stderr F time="2025-10-13T00:16:22Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1M20t 2025-10-13T00:16:23.636892727+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.636892727+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.640090470+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
id=9GhmZ 2025-10-13T00:16:23.640090470+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.640090470+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.640090470+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=9GhmZ 2025-10-13T00:16:23.640090470+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.640090470+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.650697440+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.650793343+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.650793343+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.650892916+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="creating desired pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ pod.name= pod.namespace=openshift-marketplace 2025-10-13T00:16:23.663043706+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.663043706+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9GhmZ 2025-10-13T00:16:23.674820694+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.674820694+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.677816920+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.677902162+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.677902162+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.677918213+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=MHdIG 2025-10-13T00:16:23.677918213+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.677931413+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.691476368+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.691596732+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.691609502+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.691741946+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.691741946+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MHdIG 2025-10-13T00:16:23.691791568+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:23.691803458+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:23.702879293+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:23.702979567+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:23.702979567+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:23.702994537+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iY3CN 2025-10-13T00:16:23.703004717+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:23.703004717+00:00 stderr F time="2025-10-13T00:16:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:24.017660439+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:24.017870116+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:24.017884916+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:24.018109113+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:24.018122894+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iY3CN 2025-10-13T00:16:24.018245268+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.018282269+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.218009925+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.218136079+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.218159159+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr 
current-pod.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.218197921+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=5b9tt 2025-10-13T00:16:24.218197921+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.218197921+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.457714822+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:24.457714822+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:24.617203228+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.617386753+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.617397644+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.617551809+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.617551809+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5b9tt 2025-10-13T00:16:24.617650432+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:24.617658342+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:24.817043137+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:24.817096669+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:24.817096669+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="of 1 pods matching label selector, 1 have the 
correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:24.817096669+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=mW57E 2025-10-13T00:16:24.817115829+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:24.817115829+00:00 stderr F time="2025-10-13T00:16:24Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:25.017128424+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.017209056+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.017209056+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.017218417+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=vcmG8 2025-10-13T00:16:25.017225507+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.017232457+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.616683393+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:25.616795906+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:25.616795906+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:25.616900390+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=mW57E 2025-10-13T00:16:25.616908630+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mW57E 2025-10-13T00:16:25.616984602+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:25.616992362+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:25.817916586+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.818047230+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.818047230+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.818197855+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.818197855+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=vcmG8 2025-10-13T00:16:25.818304078+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:25.818338709+00:00 stderr F time="2025-10-13T00:16:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:26.017401973+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:26.017507767+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:26.017507767+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=false id=E5+hU 2025-10-13T00:16:26.017520887+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:26.217261193+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:26.217396718+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:26.217396718+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=false id=AsQxZ 2025-10-13T00:16:26.217396718+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:26.816578915+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:26.816670588+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:26.816670588+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:26.816729169+00:00 stderr F time="2025-10-13T00:16:26Z" level=info msg="creating desired pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU pod.name= pod.namespace=openshift-marketplace 2025-10-13T00:16:27.018717438+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:27.018882603+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:27.018882603+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:27.019048938+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="creating desired pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ pod.name= pod.namespace=openshift-marketplace 2025-10-13T00:16:27.224093704+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=E5+hU 2025-10-13T00:16:27.224151686+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.224202898+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.426453534+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 
2025-10-13T00:16:27.426453534+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AsQxZ 2025-10-13T00:16:27.428675046+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:27.428754528+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:27.617603815+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.617693218+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.617693218+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.617722139+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=tHHtc 2025-10-13T00:16:27.617722139+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.617748069+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:27.817659951+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:27.817737244+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:27.817737244+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:27.817737244+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=oSOpM 2025-10-13T00:16:27.817737244+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="registry state good" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:27.817737244+00:00 stderr F time="2025-10-13T00:16:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:28.417373545+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:28.417709326+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:28.417709326+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:28.417709326+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:28.417709326+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tHHtc 2025-10-13T00:16:28.417709326+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:28.417728166+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:28.616929975+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:28.617161713+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:28.617201894+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:28.617374819+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="catalog update required at 2025-10-13 00:16:28.617301927 +0000 UTC m=+89.347076496" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:28.817070114+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:28.817157377+00:00 stderr F time="2025-10-13T00:16:28Z" level=info 
msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:28.817157377+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:28.817168727+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=o9RuA 2025-10-13T00:16:28.817168727+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:28.817177747+00:00 stderr F time="2025-10-13T00:16:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:29.021757019+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=oSOpM 2025-10-13T00:16:29.036521402+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.036521402+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.416369234+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.416462437+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.416462437+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.416462437+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=1Cuhp 2025-10-13T00:16:29.416473037+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.416473037+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:29.619238480+00:00 stderr F 
time="2025-10-13T00:16:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:29.619600441+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:29.619672744+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:29.619963313+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:29.619963313+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o9RuA 2025-10-13T00:16:29.620156889+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:29.620232132+00:00 stderr F time="2025-10-13T00:16:29Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.018888817+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.019067243+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.019067243+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.019083574+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=2a/ab 2025-10-13T00:16:30.019094114+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.019094114+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.217046413+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 
2025-10-13T00:16:30.217270730+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:30.217317431+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:30.217491297+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:30.217537818+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=1Cuhp 2025-10-13T00:16:30.217634922+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.217684303+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.616921937+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.617085373+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.617127054+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.617157815+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=sp502 2025-10-13T00:16:30.617193156+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.617218187+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:30.816463817+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.816626192+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.816656833+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.816734406+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2a/ab 2025-10-13T00:16:30.816808958+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:30.816852269+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:31.220354921+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:31.220354921+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:31.220354921+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:31.220354921+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OLQlz 2025-10-13T00:16:31.220354921+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:31.220354921+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:31.420014444+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:31.420254652+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:31.420267702+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:31.420545491+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:31.420545491+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=sp502 2025-10-13T00:16:31.420630734+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:31.420662005+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:31.816738978+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:31.816803460+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:31.816803460+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:31.816832971+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=szPtT 2025-10-13T00:16:31.816832971+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:31.816832971+00:00 stderr F time="2025-10-13T00:16:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info 
msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OLQlz 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.028588402+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.416931247+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.417061251+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.417061251+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.417072721+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1pJ75 2025-10-13T00:16:32.417079992+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.417101422+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:32.625514586+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:32.625514586+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:32.625514586+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=szPtT 2025-10-13T00:16:32.625514586+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=szPtT 
2025-10-13T00:16:32.625514586+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:32.625514586+00:00 stderr F time="2025-10-13T00:16:32Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.017969232+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.018053925+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.018062095+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.018070525+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=3TqN3 2025-10-13T00:16:33.018077326+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.018083996+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.217072718+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:33.217170081+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:33.217170081+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:33.217265954+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:33.217265954+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1pJ75 2025-10-13T00:16:33.217334426+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.217334426+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.616130816+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.616296242+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.616353723+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.616390625+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=LKm6M 2025-10-13T00:16:33.616414645+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.616440026+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:33.816580115+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.816713639+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.816713639+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.816774051+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3TqN3 2025-10-13T00:16:33.816836463+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:33.816851884+00:00 stderr F time="2025-10-13T00:16:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:34.217775322+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="searching for current 
pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:34.218109093+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:34.218140454+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:34.218185105+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=dCGu4 2025-10-13T00:16:34.218222846+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:34.218252617+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=LKm6M 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:34.418795769+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:34.815977237+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:34.816079301+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="evaluating 
current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:34.816099591+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:34.816099591+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=f8E5+ 2025-10-13T00:16:34.816107212+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:34.816124272+00:00 stderr F time="2025-10-13T00:16:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:35.017276254+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:35.017494071+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:35.017507801+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:35.017678356+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:35.017685987+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dCGu4 2025-10-13T00:16:35.416567309+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:35.416681703+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:35.416693524+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=f8E5+ 
2025-10-13T00:16:35.416807277+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:35.416807277+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=f8E5+ 2025-10-13T00:16:35.583191583+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:35.583281286+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:35.616968867+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:35.617058770+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:35.617067770+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:35.617075220+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=YQzMc 2025-10-13T00:16:35.617082730+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:35.617082730+00:00 stderr F time="2025-10-13T00:16:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:36.016588443+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:36.016822511+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:36.016862862+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:36.016959935+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YQzMc 2025-10-13T00:16:36.456862393+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.456963746+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.460045685+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.460224131+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.460271132+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.460351985+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=fgce7 2025-10-13T00:16:36.460395406+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.460429627+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.584676152+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:36.584772165+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:36.616046988+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.616231654+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.616275885+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.616440071+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="ensured registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.616485882+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fgce7 2025-10-13T00:16:36.817142277+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:36.817460278+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:36.817574591+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:36.817649794+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=MfDqd 2025-10-13T00:16:36.817701615+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:36.817751917+00:00 stderr F time="2025-10-13T00:16:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:37.217165527+00:00 stderr F time="2025-10-13T00:16:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:37.217165527+00:00 stderr F time="2025-10-13T00:16:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:37.217165527+00:00 stderr F time="2025-10-13T00:16:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:37.217165527+00:00 stderr F time="2025-10-13T00:16:37Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:37.217165527+00:00 stderr F time="2025-10-13T00:16:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=MfDqd 2025-10-13T00:16:38.609384728+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.609384728+00:00 stderr F 
time="2025-10-13T00:16:38Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.610859875+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.610999540+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.611047791+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.611078672+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=DL8qc 2025-10-13T00:16:38.611102223+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.611126704+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.619622866+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.619807472+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.619841843+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:38.619928186+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DL8qc 2025-10-13T00:16:39.607019974+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.607019974+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.612011124+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=3mIV/ 2025-10-13T00:16:39.612102887+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.612102887+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.612111677+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=3mIV/ 2025-10-13T00:16:39.612111677+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.612119037+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.619001108+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.619071220+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.619071220+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.619160023+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:39.619160023+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3mIV/ 2025-10-13T00:16:45.627440737+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.627440737+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.629371089+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.629509854+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.629541165+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.629572436+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=lxScR 2025-10-13T00:16:45.629597687+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.629620707+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.635131184+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.635272289+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.635282829+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:45.635393962+00:00 stderr F time="2025-10-13T00:16:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lxScR 2025-10-13T00:16:47.658108644+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.658108644+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.660471369+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.660584553+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.660584553+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.660604804+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=DsCee 2025-10-13T00:16:47.660604804+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.660604804+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.665260853+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.665306674+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.665306674+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.665394057+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DsCee 2025-10-13T00:16:47.908796724+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.908796724+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.910034403+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.910196098+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.910196098+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.910196098+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=fULgw 2025-10-13T00:16:47.910196098+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.910196098+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.915641293+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.915704525+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.915704525+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.915778748+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:47.915778748+00:00 stderr F time="2025-10-13T00:16:47Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fULgw 2025-10-13T00:16:48.725414674+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.725414674+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.729020520+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.729020520+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.729020520+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.729020520+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=axIWL 2025-10-13T00:16:48.729020520+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="registry state good" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.729020520+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.736285123+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.737450680+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.737523302+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.737746530+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:48.738377820+00:00 stderr F time="2025-10-13T00:16:48Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=axIWL 2025-10-13T00:16:50.926567758+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-10-13T00:16:50.926616839+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.926656411+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="resolving sources" id=3K74b namespace=openshift-marketplace 2025-10-13T00:16:50.926668341+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="checking if subscriptions need update" id=3K74b namespace=openshift-marketplace 2025-10-13T00:16:50.926861797+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.931952941+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=3K74b namespace=openshift-marketplace 2025-10-13T00:16:50.932320392+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.932481168+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.932536489+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.932567070+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=uRfxS 2025-10-13T00:16:50.932601991+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.932634862+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.936746414+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.936849758+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.936896879+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.937020163+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.937066725+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=uRfxS 2025-10-13T00:16:50.942160568+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.942239281+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.946578020+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.946665963+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.946665963+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mKbCx 
2025-10-13T00:16:50.946690273+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=mKbCx 2025-10-13T00:16:50.946690273+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.946697934+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.954034439+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.954103311+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.954103311+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-crk87 current-pod.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.954216075+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:50.954216075+00:00 stderr F time="2025-10-13T00:16:50Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=mKbCx 2025-10-13T00:16:57.703238048+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.703238048+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.712497445+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.712497445+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.712497445+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.712497445+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=EECax 2025-10-13T00:16:57.712497445+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.712497445+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.718401558+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.718494931+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.718494931+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.718581034+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EECax 2025-10-13T00:16:57.728882083+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.728882083+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.740354258+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.740354258+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.740354258+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.740354258+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=2FBje 2025-10-13T00:16:57.740354258+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.740354258+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="ensuring registry 
server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.752920327+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.752920327+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.752920327+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.752920327+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2FBje 2025-10-13T00:16:57.752920327+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.752920327+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.758376716+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.758376716+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.758376716+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.758376716+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=JcnzP 2025-10-13T00:16:57.758376716+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.758376716+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.768522320+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.768522320+00:00 stderr F 
time="2025-10-13T00:16:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.768522320+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:57.769050117+00:00 stderr F time="2025-10-13T00:16:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JcnzP 2025-10-13T00:16:58.251460776+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-10-13T00:16:58.251460776+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.251460776+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.251516728+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="resolving sources" id=Jfciq namespace=openshift-marketplace 2025-10-13T00:16:58.251516728+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="checking if subscriptions need update" id=Jfciq namespace=openshift-marketplace 2025-10-13T00:16:58.253952123+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.253952123+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.253952123+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.253952123+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=0x8GQ 2025-10-13T00:16:58.253952123+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.253952123+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.254424168+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="No subscriptions were found in namespace 
openshift-marketplace" id=Jfciq namespace=openshift-marketplace 2025-10-13T00:16:58.257402430+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.257477432+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.257477432+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.257610017+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.257610017+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0x8GQ 2025-10-13T00:16:58.264393787+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.264496710+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.306983196+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.307104950+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.307104950+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.307104950+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RAhvv 2025-10-13T00:16:58.307126510+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.307126510+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.706632363+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.706719366+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.706719366+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-cms8q current-pod.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.706803218+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:58.706803218+00:00 stderr F time="2025-10-13T00:16:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RAhvv 2025-10-13T00:16:59.708622335+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.708679887+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.711756802+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.711955129+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.712003290+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.712045161+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=v1lg/ 2025-10-13T00:16:59.712079312+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.712113024+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.728644706+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.728644706+00:00 stderr F time="2025-10-13T00:16:59Z" level=info 
msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.728644706+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.728644706+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1lg/ 2025-10-13T00:16:59.733738083+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.733738083+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.735858329+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.735955732+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.735955732+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.735974933+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=EZMtn 2025-10-13T00:16:59.735974933+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.735974933+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.907995730+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.908144685+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.908153465+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="of 1 pods matching label selector, 1 have 
the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:16:59.908257148+00:00 stderr F time="2025-10-13T00:16:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EZMtn 2025-10-13T00:17:02.626327858+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.626327858+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.629227097+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.629390982+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.629428404+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.629459935+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hVeMP 2025-10-13T00:17:02.629493776+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.629517526+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.638687190+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.638775313+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.638775313+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:02.638854006+00:00 stderr F time="2025-10-13T00:17:02Z" level=info msg="requeueing registry server for 
catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hVeMP 2025-10-13T00:17:09.474091533+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.474091533+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.476522198+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.476652552+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.476652552+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.476665953+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=99U0c 2025-10-13T00:17:09.476665953+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.476676953+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.482737151+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.482870335+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.482870335+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.482903856+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=99U0c 2025-10-13T00:17:09.835424704+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 
2025-10-13T00:17:09.835424704+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.838127687+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.838337944+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.838413556+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.838448877+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=A6ZTI 2025-10-13T00:17:09.838460478+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.838490919+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.843684899+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.843768602+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.843768602+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.848500559+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.848500559+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A6ZTI 2025-10-13T00:17:09.849189800+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.849253202+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.851543533+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.851693418+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.851709698+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.851754759+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=0DwFs 2025-10-13T00:17:09.851754759+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.851766690+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.857585940+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.857707354+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.857707354+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.859914712+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:09.859931643+00:00 stderr F time="2025-10-13T00:17:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0DwFs 2025-10-13T00:17:16.791035790+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.791035790+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.801120243+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.801224566+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.801235836+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.801263057+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uWpr8 2025-10-13T00:17:16.801270647+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.801279578+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.807891693+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.808007986+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.808017036+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.808174401+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.808174401+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uWpr8 2025-10-13T00:17:16.816794068+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.816816489+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.822014960+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.822136374+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.822136374+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.822136374+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=5ZNxU 2025-10-13T00:17:16.822136374+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.822136374+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.830106900+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.830275026+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.830283216+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.830464352+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:16.830464352+00:00 stderr F time="2025-10-13T00:17:16Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5ZNxU 2025-10-13T00:17:28.118362859+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.118455122+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.124760457+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.124875491+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=+YFaq 
2025-10-13T00:17:28.124884951+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.124914582+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+YFaq 2025-10-13T00:17:28.124914582+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.124923512+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.131275879+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.131402483+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.131402483+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.131468935+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+YFaq 2025-10-13T00:17:28.862565188+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.862565188+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.864079944+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.864164337+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.864164337+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=qTLYW 
2025-10-13T00:17:28.864164337+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qTLYW 2025-10-13T00:17:28.864177998+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.864177998+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.873911119+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.873911119+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.873911119+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.873911119+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.873911119+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qTLYW 2025-10-13T00:17:28.915084344+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.915084344+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.922378420+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.922378420+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.922378420+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.922378420+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
health=true id=cD1JY 2025-10-13T00:17:28.922378420+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.922378420+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.931707369+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.931707369+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.931707369+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.931747440+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cD1JY 2025-10-13T00:17:28.933141053+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.933196895+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.939553762+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.939654325+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.939654325+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.939676736+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=gurYt 2025-10-13T00:17:28.939676736+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.939684186+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensuring registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gurYt 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:28.948444827+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:28.950550082+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:28.950656376+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:28.950664326+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:28.950674556+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=FS/0g 2025-10-13T00:17:28.950696437+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:28.950696437+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:29.269133499+00:00 stderr F time="2025-10-13T00:17:29Z" level=info 
msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:29.269133499+00:00 stderr F time="2025-10-13T00:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:29.269133499+00:00 stderr F time="2025-10-13T00:17:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:29.269189321+00:00 stderr F time="2025-10-13T00:17:29Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:29.269189321+00:00 stderr F time="2025-10-13T00:17:29Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=FS/0g 2025-10-13T00:17:32.640070628+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.640132590+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.642710880+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.642839924+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.642839924+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.642839924+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=0lygU 2025-10-13T00:17:32.642851514+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.642858705+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.649840401+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.650016246+00:00 stderr F 
time="2025-10-13T00:17:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.650029517+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.650172711+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:32.650185312+00:00 stderr F time="2025-10-13T00:17:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=0lygU 2025-10-13T00:17:34.130392363+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.130440534+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.134126669+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.134294654+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.134360926+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.134407377+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4UmxZ 2025-10-13T00:17:34.134415897+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.134422598+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.141605720+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.141668422+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.141668422+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.141750925+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:34.141750925+00:00 stderr F time="2025-10-13T00:17:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4UmxZ 2025-10-13T00:17:35.005288028+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.005427873+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.008556610+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.008556610+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.008556610+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.008556610+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Naqy3 2025-10-13T00:17:35.008556610+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.008556610+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.015231296+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.015367681+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.015367681+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.015476014+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:35.015476014+00:00 stderr F time="2025-10-13T00:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Naqy3 2025-10-13T00:17:59.621804743+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2025-10-13T00:17:59.621861935+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.621861935+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.621953868+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="resolving sources" id=B1pt5 namespace=openshift-marketplace 2025-10-13T00:17:59.621953868+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="checking if subscriptions need update" id=B1pt5 namespace=openshift-marketplace 2025-10-13T00:17:59.629339858+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=B1pt5 namespace=openshift-marketplace 2025-10-13T00:17:59.629606826+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.629836052+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.629849773+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.629861863+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=iJLN/ 2025-10-13T00:17:59.629885354+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.629885354+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.635815150+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.635983355+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.635995646+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.636153570+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.636153570+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=iJLN/ 2025-10-13T00:17:59.643563241+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.643563241+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.645573131+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.645729285+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.645744976+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.645774637+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=eJiUz 2025-10-13T00:17:59.645774637+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.645785297+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.651478897+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.651715594+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.651725234+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-gjctm current-pod.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.651903059+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:17:59.651918600+00:00 stderr F time="2025-10-13T00:17:59Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=eJiUz 2025-10-13T00:18:09.966907189+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-10-13T00:18:09.966907189+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.966907189+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.966907189+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="resolving sources" id=SKDiP namespace=openshift-marketplace 2025-10-13T00:18:09.966907189+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="checking if subscriptions need update" id=SKDiP namespace=openshift-marketplace 2025-10-13T00:18:09.977707721+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=SKDiP namespace=openshift-marketplace 2025-10-13T00:18:09.977707721+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.977707721+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.977707721+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.977707721+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uQG1M 2025-10-13T00:18:09.977707721+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.977707721+00:00 stderr F 
time="2025-10-13T00:18:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.982208205+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.982403781+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.982445342+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.982552585+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.982580046+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uQG1M 2025-10-13T00:18:09.988062639+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.988062639+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.991199152+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.991347307+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.991385448+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.991424549+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GgeJi 2025-10-13T00:18:09.991448470+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.991472550+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.996087308+00:00 stderr F time="2025-10-13T00:18:09Z" 
level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.996223552+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.996253993+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-hkptr current-pod.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.996372416+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:18:09.996404317+00:00 stderr F time="2025-10-13T00:18:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GgeJi 2025-10-13T00:21:13.808483755+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="resolving sources" id=/Heqc namespace=openstack 2025-10-13T00:21:13.808483755+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="checking if subscriptions need update" id=/Heqc namespace=openstack 2025-10-13T00:21:13.825388319+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="No subscriptions were found in namespace openstack" id=/Heqc namespace=openstack 2025-10-13T00:21:13.832073029+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="resolving sources" id=qhmGH namespace=openstack 2025-10-13T00:21:13.832073029+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="checking if subscriptions need update" id=qhmGH namespace=openstack 2025-10-13T00:21:13.835415749+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="No subscriptions were found in namespace openstack" id=qhmGH namespace=openstack 2025-10-13T00:21:13.885257940+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="resolving sources" id=uF2vz namespace=openstack 2025-10-13T00:21:13.888902918+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="checking if subscriptions need update" id=uF2vz namespace=openstack 2025-10-13T00:21:13.890448389+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="No subscriptions were found in namespace openstack" id=uF2vz namespace=openstack 2025-10-13T00:21:14.512742824+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="resolving sources" id=qw7t8 namespace=openstack-operators 2025-10-13T00:21:14.512742824+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="checking if subscriptions need update" id=qw7t8 namespace=openstack-operators 2025-10-13T00:21:14.519847605+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=qw7t8 namespace=openstack-operators 2025-10-13T00:21:14.524956912+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="resolving sources" id=MgsSp namespace=openstack-operators 2025-10-13T00:21:14.524956912+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="checking if subscriptions need update" id=MgsSp namespace=openstack-operators 2025-10-13T00:21:14.526637978+00:00 stderr F 
time="2025-10-13T00:21:14Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=MgsSp namespace=openstack-operators 2025-10-13T00:21:14.529493224+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="resolving sources" id=vKKUs namespace=openstack-operators 2025-10-13T00:21:14.529493224+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="checking if subscriptions need update" id=vKKUs namespace=openstack-operators 2025-10-13T00:21:14.532715551+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=vKKUs namespace=openstack-operators ././@LongLink0000644000000000000000000000032000000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000560135215073043233033071 0ustar zuulzuul2025-08-13T19:59:14.867388586+00:00 stderr F time="2025-08-13T19:59:14Z" level=info msg="log level info" 2025-08-13T19:59:14.944315019+00:00 stderr F time="2025-08-13T19:59:14Z" level=info msg="TLS keys set, using https for metrics" 2025-08-13T19:59:15.055645443+00:00 stderr F W0813 19:59:15.055574 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:15.250577380+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="Using in-cluster kube client config" 2025-08-13T19:59:16.004044899+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="Using in-cluster kube client config" 2025-08-13T19:59:16.313683285+00:00 stderr F W0813 19:59:16.310301 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:19.740591563+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="apps/v1, Resource=deployments" 2025-08-13T19:59:19.790674340+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" 2025-08-13T19:59:19.801225861+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" 2025-08-13T19:59:22.002547450+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="detected ability to filter informers" canFilter=true 2025-08-13T19:59:22.214694658+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="registering owner reference fixer" gvr="/v1, Resource=services" 2025-08-13T19:59:22.327504744+00:00 stderr F W0813 19:59:22.326666 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:22.425442905+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:22.425657291+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="operator ready" 2025-08-13T19:59:22.425657291+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting informers..." 2025-08-13T19:59:22.513393653+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="informers started" 2025-08-13T19:59:22.524531920+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="waiting for caches to sync..." 
2025-08-13T19:59:23.026938681+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting workers..." 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=1tszK namespace=default 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=1tszK namespace=default 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=bkTke namespace=hostpath-provisioner 2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=bkTke namespace=hostpath-provisioner 2025-08-13T19:59:23.096940857+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="operator ready" 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting informers..." 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="informers started" 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=bkTke namespace=hostpath-provisioner 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=SfIzQ namespace=kube-node-lease 2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=SfIzQ namespace=kube-node-lease 2025-08-13T19:59:23.284758571+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting workers..." 
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=SfIzQ namespace=kube-node-lease 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace default" id=1tszK namespace=default 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=Nm0y1 namespace=kube-public 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=Nm0y1 namespace=kube-public 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=Fd4f+ namespace=kube-system 2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=Fd4f+ namespace=kube-system 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-public" id=Nm0y1 namespace=kube-public 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=IcxZR namespace=openshift 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=IcxZR namespace=openshift 2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-system" id=Fd4f+ namespace=kube-system 2025-08-13T19:59:23.516924149+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.523150956+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=NduNP namespace=openshift-apiserver 2025-08-13T19:59:23.523150956+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=NduNP namespace=openshift-apiserver 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:23.530038362+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.530123125+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=2zAJE 2025-08-13T19:59:23.530157036+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.530187557+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=NduNP namespace=openshift-apiserver 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=vgUXD namespace=openshift-apiserver-operator 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=vgUXD namespace=openshift-apiserver-operator 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace openshift" id=IcxZR namespace=openshift 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=A0rZ7 namespace=openshift-authentication 2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=A0rZ7 namespace=openshift-authentication 2025-08-13T19:59:24.726412565+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=A0rZ7 namespace=openshift-authentication 2025-08-13T19:59:24.726522058+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="resolving sources" id=N9wn5 namespace=openshift-authentication-operator 2025-08-13T19:59:24.726556649+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="checking if subscriptions need update" id=N9wn5 namespace=openshift-authentication-operator 2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=vgUXD namespace=openshift-apiserver-operator 2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="resolving sources" id=NVR/s namespace=openshift-cloud-network-config-controller 2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="checking if subscriptions need update" id=NVR/s namespace=openshift-cloud-network-config-controller 2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=N9wn5 namespace=openshift-authentication-operator 2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=gBgIJ 
namespace=openshift-cloud-platform-infra 2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=gBgIJ namespace=openshift-cloud-platform-infra 2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=NVR/s namespace=openshift-cloud-network-config-controller 2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=rCCoh namespace=openshift-cluster-machine-approver 2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=rCCoh namespace=openshift-cluster-machine-approver 2025-08-13T19:59:25.805455853+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.805642469+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.805682480+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.805870025+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE 2025-08-13T19:59:25.807135631+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807258165+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807296346+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807404539+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA 2025-08-13T19:59:25.807543523+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=gBgIJ namespace=openshift-cloud-platform-infra 2025-08-13T19:59:25.807607975+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=wU+NU namespace=openshift-cluster-samples-operator 
2025-08-13T19:59:25.807646726+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=wU+NU namespace=openshift-cluster-samples-operator 2025-08-13T19:59:25.818163626+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-08-13T19:59:25.822749276+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-08-13T19:59:26.200523475+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=rCCoh namespace=openshift-cluster-machine-approver 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=P7R3s namespace=openshift-cluster-storage-operator 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=P7R3s namespace=openshift-cluster-storage-operator 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=wU+NU namespace=openshift-cluster-samples-operator 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=iF9KB namespace=openshift-cluster-version 2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=iF9KB namespace=openshift-cluster-version 2025-08-13T19:59:26.315502542+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:26.323310225+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.323437499+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.630627235+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.631473109+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.631561792+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.631884371+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=yW7NQ 2025-08-13T19:59:26.631966313+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.632391166+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:26.633056135+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=iF9KB namespace=openshift-cluster-version 2025-08-13T19:59:26.633345513+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=/0z19 namespace=openshift-config 2025-08-13T19:59:26.673319412+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=/0z19 namespace=openshift-config 2025-08-13T19:59:26.707423114+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=P7R3s namespace=openshift-cluster-storage-operator 2025-08-13T19:59:26.707529307+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=8nG21 namespace=openshift-config-managed 2025-08-13T19:59:26.707577859+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=8nG21 namespace=openshift-config-managed 2025-08-13T19:59:26.740619411+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.744728288+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.744964494+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.745052047+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=U8uxi 2025-08-13T19:59:26.745615003+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:26.745987864+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.068613230+00:00 stderr F 
time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=8nG21 namespace=openshift-config-managed 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=Wm9lv namespace=openshift-config-operator 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=Wm9lv namespace=openshift-config-operator 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config" id=/0z19 namespace=openshift-config 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=9Gh0e namespace=openshift-console 2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=9Gh0e namespace=openshift-console 2025-08-13T19:59:27.126553402+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.126753548+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.127020695+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.127736336+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ 2025-08-13T19:59:27.167088657+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T19:59:27.264241327+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console" id=9Gh0e namespace=openshift-console 2025-08-13T19:59:27.264353730+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=T/C3j namespace=openshift-console-operator 2025-08-13T19:59:27.264394321+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=T/C3j namespace=openshift-console-operator 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=Wm9lv namespace=openshift-config-operator 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=4m9Zr namespace=openshift-console-user-settings 2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=4m9Zr namespace=openshift-console-user-settings 2025-08-13T19:59:27.341884690+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=T/C3j namespace=openshift-console-operator 2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.496996842+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.497136946+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.737028164+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=4m9Zr namespace=openshift-console-user-settings 2025-08-13T19:59:27.737266901+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:27.737314662+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:27.737550989+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:27.737615391+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:27.843101857+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info 
msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.042618355+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:28.042629975+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.042934364+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.340480004+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.340734442+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.340734442+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.341944786+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.348934225+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.352167718+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:28.443163431+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.623900403+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.623900403+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:28.766033965+00:00 stderr F 
time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.329208979+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:29.329330142+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.329376943+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.329596840+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330245548+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330298460+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330675310+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.398870494+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.399252315+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="synchronizing registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.428178660+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:29.428306033+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.428348624+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.551404102+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.551521896+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:29.551563397+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:29.618814094+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="registry state 
good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="resolving sources" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checking if subscriptions need update" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="resolving sources" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checking if subscriptions need update" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:31.003420522+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:31.004217255+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.004274696+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.141924400+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:31.142061084+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.142096805+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if 
subscriptions need update" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:31.390212098+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.422030935+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.422030935+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.391256522+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:32.584702026+00:00 stderr F 
time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:32.585387936+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.585524430+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.593423435+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=n+zaX 2025-08-13T19:59:32.669492833+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669492833+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669542154+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.669727270+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T19:59:32.669768551+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T19:59:32.714419844+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T19:59:32.715084313+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=Tvm/D namespace=openshift-machine-api 2025-08-13T19:59:32.715348650+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=Tvm/D namespace=openshift-machine-api 2025-08-13T19:59:32.792942022+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:32.952874511+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=Tvm/D namespace=openshift-machine-api 2025-08-13T19:59:32.953173450+00:00 stderr 
F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=xPiUN namespace=openshift-machine-config-operator 2025-08-13T19:59:32.953307313+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=xPiUN namespace=openshift-machine-config-operator 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=xPiUN namespace=openshift-machine-config-operator 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=DH/KU namespace=openshift-marketplace 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=DH/KU namespace=openshift-marketplace 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=7gj+h namespace=openshift-monitoring 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=7gj+h namespace=openshift-monitoring 2025-08-13T19:59:33.147542840+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=7gj+h namespace=openshift-monitoring 2025-08-13T19:59:33.147659554+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=BL42U namespace=openshift-multus 2025-08-13T19:59:33.147693865+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=BL42U namespace=openshift-multus 2025-08-13T19:59:33.147978023+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=DH/KU namespace=openshift-marketplace 2025-08-13T19:59:33.148037714+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=cyGEC namespace=openshift-network-diagnostics 2025-08-13T19:59:33.148078215+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=cyGEC namespace=openshift-network-diagnostics 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310597278+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310660420+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310660420+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310676000+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=EG+R5 2025-08-13T19:59:33.310688081+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310688081+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.385050291+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.385050291+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.385362430+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=BL42U namespace=openshift-multus 2025-08-13T19:59:33.385441562+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=4wzOg 
namespace=openshift-network-node-identity 2025-08-13T19:59:33.385441562+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=4wzOg namespace=openshift-network-node-identity 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.674899343+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=cyGEC namespace=openshift-network-diagnostics 2025-08-13T19:59:33.674899343+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=IzSRQ namespace=openshift-network-operator 2025-08-13T19:59:33.676986902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.706623267+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.706623267+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.706731020+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.740440261+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=4wzOg namespace=openshift-network-node-identity 
2025-08-13T19:59:33.740507163+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=c3ECV namespace=openshift-node 2025-08-13T19:59:33.740507163+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=c3ECV namespace=openshift-node 2025-08-13T19:59:33.741243924+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:33.840507144+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=IzSRQ namespace=openshift-network-operator 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-node" id=c3ECV namespace=openshift-node 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=X8taY namespace=openshift-nutanix-infra 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=X8taY namespace=openshift-nutanix-infra 2025-08-13T19:59:33.905607960+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.905647991+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.905750974+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939003481+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939536447+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939582898+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939737092+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=xySjj 2025-08-13T19:59:33.939947768+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.940077102+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.985690332+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161310969+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=IzSRQ namespace=openshift-network-operator 2025-08-13T19:59:34.161679489+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=MMlDc namespace=openshift-oauth-apiserver 2025-08-13T19:59:34.162027529+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=MMlDc namespace=openshift-oauth-apiserver 2025-08-13T19:59:34.194938967+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.197689775+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=X8taY namespace=openshift-nutanix-infra 2025-08-13T19:59:34.198110157+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=CD0QG namespace=openshift-openstack-infra 2025-08-13T19:59:34.198179299+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=CD0QG namespace=openshift-openstack-infra 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" 
level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.411095919+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.411177581+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.473884238+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474142106+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474217108+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474275190+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1LWtC 2025-08-13T19:59:34.474325011+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474374332+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=MMlDc namespace=openshift-oauth-apiserver 2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" 
id=Cbtq2 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=Cbtq2 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:34.641284630+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=CD0QG namespace=openshift-openstack-infra 2025-08-13T19:59:34.655651460+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=8Hevs namespace=openshift-operators 2025-08-13T19:59:34.655940278+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=8Hevs namespace=openshift-operators 2025-08-13T19:59:34.703154954+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.703583876+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.703643278+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.703752451+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.826184401+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=Cbtq2 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:34.826184401+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=9c8wv namespace=openshift-ovirt-infra 2025-08-13T19:59:34.826243543+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=9c8wv namespace=openshift-ovirt-infra 2025-08-13T19:59:34.956635909+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:34.956762983+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028125617+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028125617+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.028139348+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=8Hevs namespace=openshift-operators 2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=YWQxZ namespace=openshift-ovn-kubernetes 2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=YWQxZ namespace=openshift-ovn-kubernetes 2025-08-13T19:59:35.104361450+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=9c8wv namespace=openshift-ovirt-infra 2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=e+NiR namespace=openshift-route-controller-manager 2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=e+NiR 
namespace=openshift-route-controller-manager 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=npCd0 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=YWQxZ namespace=openshift-ovn-kubernetes 2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=g+Jzv namespace=openshift-service-ca 2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=g+Jzv namespace=openshift-service-ca 2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=e+NiR namespace=openshift-route-controller-manager 2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=/7VRD namespace=openshift-service-ca-operator 2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=/7VRD namespace=openshift-service-ca-operator 2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=g+Jzv namespace=openshift-service-ca 2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=V0Mj5 namespace=openshift-user-workload-monitoring 2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=V0Mj5 namespace=openshift-user-workload-monitoring 2025-08-13T19:59:35.939629879+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=/7VRD namespace=openshift-service-ca-operator 2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="resolving sources" id=uVRCy namespace=openshift-vsphere-infra 2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="checking if subscriptions need update" id=uVRCy namespace=openshift-vsphere-infra 2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:36.283543052+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=V0Mj5 namespace=openshift-user-workload-monitoring 2025-08-13T19:59:36.467016732+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=uVRCy namespace=openshift-vsphere-infra 2025-08-13T19:59:39.198040701+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.198040701+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386097141+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386338798+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386388440+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.386757920+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE 2025-08-13T19:59:39.433271946+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.433271946+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451519856+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451879316+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451936398+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.451982749+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=06pP6 2025-08-13T19:59:39.452017720+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.452048641+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.583471257+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.583720424+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.583761046+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.584012163+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6 2025-08-13T19:59:39.857200820+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.857328944+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.178530660+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e 2025-08-13T19:59:42.951135533+00:00 stderr F time="2025-08-13T19:59:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:42.983729312+00:00 stderr F time="2025-08-13T19:59:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117258258+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="registry 
state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:43.343336543+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg 2025-08-13T19:59:46.735288911+00:00 stderr F time="2025-08-13T19:59:46Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:46.735288911+00:00 stderr F time="2025-08-13T19:59:46Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169190439+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0nGjK 2025-08-13T19:59:47.169573950+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.169573950+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" 
level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK 2025-08-13T19:59:49.859424817+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:49.859482799+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.197821493+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.197821493+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.348487368+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.609499119+00:00 
stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:50.944928531+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.207667451+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.207943779+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.208099823+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.208140744+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=wt1Gj 2025-08-13T19:59:51.208179315+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.208221417+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.344408219+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.344730288+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.344875572+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.345033086+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj 2025-08-13T19:59:52.079561575+00:00 stderr F 
time="2025-08-13T19:59:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.079561575+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.145560836+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.145821764+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.145901376+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.146001069+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=gPEn3 2025-08-13T19:59:52.146040170+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.146079781+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.312279509+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.336229291+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.336381076+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.336600242+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3 2025-08-13T19:59:52.568228405+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.568329207+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.613310480+00:00 stderr F 
time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.648750480+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.648970566+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.650705826+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wkEPj 2025-08-13T19:59:52.650861700+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.650908281+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.843163742+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.843427369+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.843467740+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:52.843584284+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj 2025-08-13T19:59:55.809157948+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:55.809157948+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:55.814073058+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:55.814073058+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 
2025-08-13T19:59:56.189320515+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=iwckz 2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=nCdV/ 2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.570590918+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.571053191+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.571149394+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691340420+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.146626098+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.147017139+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.147089101+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.147257586+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 
2025-08-13T19:59:58.301985327+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.302308336+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.302525802+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.302713597+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.328573935+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.328699418+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.445077216+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536339537+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536660586+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536703237+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536883623+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.690032828+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.690915883+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:59.388897670+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.389099906+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.402952380+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.404585347+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.405404920+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.405570945+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=rcxaT 2025-08-13T19:59:59.406018848+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.407268553+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.451741681+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.452075531+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.452189124+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.452303107+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet 
ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T20:00:00.491755946+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.523464760+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.695830604+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.858367957+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.858768298+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.859165879+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=b3BRZ 2025-08-13T20:00:00.859254962+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.895947778+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.397743552+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.397743552+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="synchronizing registry 
server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:10.257916954+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.257916954+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.618114084+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.618688671+00:00 stderr F time="2025-08-13T20:00:10Z" 
level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.618764433+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.633569755+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=215t+ 2025-08-13T20:00:10.633639977+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.633680268+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.625957411+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.626000152+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.626000152+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.626870777+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.986223504+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:11.986223504+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="of 1 pods matching label selector, 1 have 
the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.858314771+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:13.518068723+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.518068723+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.668883343+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.671459407+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.678907139+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.679191017+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=9FaCP 2025-08-13T20:00:13.679475516+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.679511957+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.735700629+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:13.737032737+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477549732+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.490480350+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.490755508+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491100338+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491267593+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=iRqKV 2025-08-13T20:00:14.491357865+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491395416+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="ensuring 
registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:16.614995068+00:00 stderr F time="2025-08-13T20:00:16Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:16.615298246+00:00 stderr F time="2025-08-13T20:00:16Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.102506279+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F 
time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:22.519198049+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.519198049+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:25.342577698+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.342733923+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.382118736+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="requeueing registry 
server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:27.574588102+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.575607891+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.575607891+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.578337239+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F 
time="2025-08-13T20:00:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.665193065+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:28.159993484+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.159993484+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289965351+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.304474084+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.304474084+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.377364413+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378421583+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378490995+00:00 
stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378537576+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Gc25i 2025-08-13T20:00:28.378580947+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378627869+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:57.666292432+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:00:57.666721735+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:00:57.704953045+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:00:57.704953045+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.329597053+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348178953+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 
2025-08-13T20:01:03.348262555+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348433920+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=kPFEg 2025-08-13T20:01:03.348525503+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.348570214+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg 
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:01:07.388328095+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.388328095+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520350389+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520576086+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520616927+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520654998+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=2qdyS 2025-08-13T20:01:07.520690729+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.520727110+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:07.528978965+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529331725+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529379006+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529418868+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=IRiQU 2025-08-13T20:01:07.529449949+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:07.529480639+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.064295639+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.064629838+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.064711361+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.065045300+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS 2025-08-13T20:01:08.096452916+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.097157546+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace 
id=IRiQU 2025-08-13T20:01:08.097301700+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:08.097634070+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU 2025-08-13T20:01:37.338197592+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:37.338363566+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:37.338668485+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:37.338668485+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.216011662+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:01:51.799299583+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799687804+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799687804+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799711044+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=fpzh6 2025-08-13T20:01:51.799892539+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:01:51.799892539+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.318031038+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.318031038+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="syncing 
catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:01.338527872+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:01.338527872+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.455292138+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd 2025-08-13T20:02:29.459578992+00:00 stderr F time="2025-08-13T20:02:29Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:31.320216660+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKVh 2025-08-13T20:02:31.320359594+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OFmS9 2025-08-13T20:02:31.320380295+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=OFmS9 2025-08-13T20:02:31.320392025+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKVh 2025-08-13T20:02:31.324995286+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=buKVh 2025-08-13T20:02:31.325200072+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=OFmS9 2025-08-13T20:02:31.325309705+00:00 stderr F E0813 20:02:31.325252 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.325442679+00:00 stderr F E0813 20:02:31.325238 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.331356318+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YuG3U 2025-08-13T20:02:31.331434770+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YuG3U 2025-08-13T20:02:31.332093909+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8sZdC 2025-08-13T20:02:31.332093909+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8sZdC 2025-08-13T20:02:31.335483205+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=YuG3U 2025-08-13T20:02:31.335630260+00:00 stderr F E0813 20:02:31.335536 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.336220676+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting 
CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=8sZdC 2025-08-13T20:02:31.336242677+00:00 stderr F E0813 20:02:31.336211 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.347115237+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=tUqdu 2025-08-13T20:02:31.347115237+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=tUqdu 2025-08-13T20:02:31.347680633+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fxfKq 2025-08-13T20:02:31.347680633+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fxfKq 2025-08-13T20:02:31.350528075+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=tUqdu 2025-08-13T20:02:31.350528075+00:00 stderr F E0813 20:02:31.350496 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.359026477+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fxfKq 2025-08-13T20:02:31.359140730+00:00 stderr F E0813 20:02:31.359109 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.371356229+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VKUpv 2025-08-13T20:02:31.371457772+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VKUpv 2025-08-13T20:02:31.377650838+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus 
- error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=VKUpv 2025-08-13T20:02:31.377997318+00:00 stderr F E0813 20:02:31.377921 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.379327976+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JDVSq 2025-08-13T20:02:31.379422999+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JDVSq 2025-08-13T20:02:31.382383563+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JDVSq 2025-08-13T20:02:31.382383563+00:00 stderr F E0813 20:02:31.382323 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.421214611+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=x39iE 2025-08-13T20:02:31.421214611+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=x39iE 2025-08-13T20:02:31.422724244+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3OfOW 2025-08-13T20:02:31.422936740+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3OfOW 2025-08-13T20:02:31.423440045+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=x39iE 2025-08-13T20:02:31.423440045+00:00 stderr F E0813 20:02:31.423385 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.425895305+00:00 stderr F time="2025-08-13T20:02:31Z" 
level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=3OfOW 2025-08-13T20:02:31.425982657+00:00 stderr F E0813 20:02:31.425952 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.504993781+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=p3gN7 2025-08-13T20:02:31.504993781+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=p3gN7 2025-08-13T20:02:31.510025254+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1Lp29 2025-08-13T20:02:31.510164538+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1Lp29 2025-08-13T20:02:31.524701023+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=p3gN7 2025-08-13T20:02:31.524701023+00:00 stderr F E0813 20:02:31.524677 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.685335536+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wq+F6 2025-08-13T20:02:31.685376547+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wq+F6 2025-08-13T20:02:31.723889646+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=1Lp29 2025-08-13T20:02:31.723946007+00:00 stderr F E0813 20:02:31.723879 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.884613581+00:00 stderr F 
time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cysEU 2025-08-13T20:02:31.884613581+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cysEU 2025-08-13T20:02:31.924894470+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=wq+F6 2025-08-13T20:02:31.924894470+00:00 stderr F E0813 20:02:31.924867 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.124681629+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=cysEU 2025-08-13T20:02:32.124681629+00:00 stderr F E0813 20:02:32.124632 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.245592088+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y7GJB 2025-08-13T20:02:32.245592088+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y7GJB 2025-08-13T20:02:32.324189910+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=y7GJB 2025-08-13T20:02:32.324312864+00:00 stderr F E0813 20:02:32.324289 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.446008686+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uuH+l 2025-08-13T20:02:32.446008686+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uuH+l 
2025-08-13T20:02:32.524368101+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=uuH+l 2025-08-13T20:02:32.524368101+00:00 stderr F E0813 20:02:32.524259 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.964896128+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=59J53 2025-08-13T20:02:32.964928159+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=59J53 2025-08-13T20:02:32.968003557+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=59J53 2025-08-13T20:02:33.165883152+00:00 stderr F time="2025-08-13T20:02:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ezGB7 2025-08-13T20:02:33.165883152+00:00 stderr F time="2025-08-13T20:02:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ezGB7 2025-08-13T20:02:33.169296029+00:00 stderr F time="2025-08-13T20:02:33Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ezGB7 2025-08-13T20:02:34.461571413+00:00 stderr F time="2025-08-13T20:02:34Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:41.180077903+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EdhsD 2025-08-13T20:02:41.180258628+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EdhsD 2025-08-13T20:02:41.184228181+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=EdhsD 2025-08-13T20:02:41.184272712+00:00 stderr F E0813 20:02:41.184250 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.189750679+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Oj8ti 2025-08-13T20:02:41.189907113+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Oj8ti 2025-08-13T20:02:41.191991262+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Oj8ti 2025-08-13T20:02:41.192127796+00:00 stderr F E0813 20:02:41.192004 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.194489064+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cne/A 2025-08-13T20:02:41.194563256+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cne/A 2025-08-13T20:02:41.196163562+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Cne/A 2025-08-13T20:02:41.196213943+00:00 stderr F E0813 20:02:41.196155 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.201539315+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=upscx 2025-08-13T20:02:41.201573576+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=upscx 2025-08-13T20:02:41.202747019+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fcrac 2025-08-13T20:02:41.202912874+00:00 stderr F 
time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fcrac 2025-08-13T20:02:41.203769559+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=upscx 2025-08-13T20:02:41.203863561+00:00 stderr F E0813 20:02:41.203812 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.206941009+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fcrac 2025-08-13T20:02:41.206962590+00:00 stderr F E0813 20:02:41.206940 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.214092483+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5gZ8D 2025-08-13T20:02:41.214092483+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5gZ8D 2025-08-13T20:02:41.216442510+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=5gZ8D 2025-08-13T20:02:41.216522152+00:00 stderr F E0813 20:02:41.216451 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.228136414+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eTECs 2025-08-13T20:02:41.228136414+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eTECs 2025-08-13T20:02:41.230565593+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eTECs 2025-08-13T20:02:41.230565593+00:00 stderr F E0813 20:02:41.230267 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.238051627+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=kH+Io 2025-08-13T20:02:41.238081248+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=kH+Io 2025-08-13T20:02:41.240472596+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=kH+Io 2025-08-13T20:02:41.240557138+00:00 stderr F E0813 20:02:41.240493 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.271165252+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ogtb3 2025-08-13T20:02:41.271206903+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ogtb3 2025-08-13T20:02:41.274379723+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ogtb3 2025-08-13T20:02:41.274413014+00:00 stderr F E0813 20:02:41.274362 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.281761504+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RsNAH 2025-08-13T20:02:41.281835696+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RsNAH 2025-08-13T20:02:41.284457551+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=RsNAH 2025-08-13T20:02:41.284457551+00:00 stderr F E0813 20:02:41.284415 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.355263271+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eeoYO 2025-08-13T20:02:41.355263271+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eeoYO 2025-08-13T20:02:41.365502733+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=R8X3r 2025-08-13T20:02:41.365502733+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=R8X3r 2025-08-13T20:02:41.385820093+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eeoYO 2025-08-13T20:02:41.385903655+00:00 stderr F E0813 20:02:41.385835 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.546526037+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=vpLbh 2025-08-13T20:02:41.546526037+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=vpLbh 2025-08-13T20:02:41.584814309+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=R8X3r 2025-08-13T20:02:41.584935812+00:00 stderr F E0813 20:02:41.584887 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.745618686+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z8Tk9 2025-08-13T20:02:41.745735399+00:00 stderr F 
time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z8Tk9 2025-08-13T20:02:41.784504675+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=vpLbh 2025-08-13T20:02:41.784504675+00:00 stderr F E0813 20:02:41.784476 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.985068617+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=z8Tk9 2025-08-13T20:02:41.985068617+00:00 stderr F E0813 20:02:41.984942 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.105127703+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fKVKG 2025-08-13T20:02:42.105127703+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fKVKG 2025-08-13T20:02:42.184517948+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fKVKG 2025-08-13T20:02:42.184564359+00:00 stderr F E0813 20:02:42.184495 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.305245692+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z/+5s 2025-08-13T20:02:42.305245692+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z/+5s 2025-08-13T20:02:42.384638557+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=z/+5s 2025-08-13T20:02:42.384638557+00:00 stderr F E0813 20:02:42.384559 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.825919346+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=e2NJz 2025-08-13T20:02:42.825919346+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=e2NJz 2025-08-13T20:02:42.828727216+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=e2NJz 2025-08-13T20:02:42.830009803+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:43.025300714+00:00 stderr F time="2025-08-13T20:02:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=pBUc+ 2025-08-13T20:02:43.025300714+00:00 stderr F time="2025-08-13T20:02:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=pBUc+ 2025-08-13T20:02:43.028725332+00:00 stderr F time="2025-08-13T20:02:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=pBUc+ 2025-08-13T20:02:47.833958362+00:00 stderr F time="2025-08-13T20:02:47Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:06:11.493510853+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2025-08-13T20:06:11.513128685+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.513128685+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.527002293+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="resolving sources" id=k2b6s 
namespace=openshift-marketplace 2025-08-13T20:06:11.527002293+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="checking if subscriptions need update" id=k2b6s namespace=openshift-marketplace 2025-08-13T20:06:11.530683628+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=k2b6s namespace=openshift-marketplace 2025-08-13T20:06:11.564933489+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.613763127+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.613963973+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.614943581+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=6ycoF 2025-08-13T20:06:11.615105376+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.615142827+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.717925600+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.718236119+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.718306171+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:11.719213827+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF 2025-08-13T20:06:13.735607719+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-08-13T20:06:13.735857166+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="resolving sources" id=emBXK namespace=openshift-marketplace 2025-08-13T20:06:13.735949859+00:00 stderr F time="2025-08-13T20:06:13Z" 
level=info msg="checking if subscriptions need update" id=emBXK namespace=openshift-marketplace 2025-08-13T20:06:13.736274268+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.738287036+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.740709755+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=emBXK namespace=openshift-marketplace 2025-08-13T20:06:13.741268591+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766092952+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766359430+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766359430+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:13.766474793+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX 2025-08-13T20:06:26.110215478+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.110277409+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.113314296+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.113314296+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.114995094+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Yd8EB 2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state 
good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.115174030+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt 2025-08-13T20:06:26.135973235+00:00 stderr F time="2025-08-13T20:06:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Yd8EB 2025-08-13T20:06:26.139168817+00:00 stderr F E0813 20:06:26.139011 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:26.139256829+00:00 stderr F time="2025-08-13T20:06:26Z" 
level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.139256829+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.146991231+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.153847167+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.153847167+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283168110+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=O7Sr0 2025-08-13T20:06:26.283317255+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry 
state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.283317255+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:26.485976288+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.486146673+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.486161263+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.486262386+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt 2025-08-13T20:06:26.500887035+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.500887035+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.083998242+00:00 stderr F time="2025-08-13T20:06:27Z" 
level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0 2025-08-13T20:06:27.096395597+00:00 stderr F time="2025-08-13T20:06:27Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=O7Sr0 2025-08-13T20:06:27.096551121+00:00 stderr F E0813 20:06:27.096522 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:27.102279406+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.102279406+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481279009+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481479894+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481529046+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481574137+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+4AoL 2025-08-13T20:06:27.481606088+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="registry state good" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.481635659+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:27.685135696+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.685418774+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.685461056+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.685586239+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj 2025-08-13T20:06:27.699274701+00:00 stderr F time="2025-08-13T20:06:27Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Zh+dj 2025-08-13T20:06:27.699274701+00:00 stderr F E0813 20:06:27.698399 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:27.709208696+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:27.709451833+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.085910023+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086225762+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086282274+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086363386+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Oq41s 2025-08-13T20:06:28.086406027+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.086482439+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.282898944+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.283227293+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.283351627+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.283547252+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL 2025-08-13T20:06:28.302014561+00:00 stderr F time="2025-08-13T20:06:28Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=+4AoL 2025-08-13T20:06:28.302014561+00:00 stderr F E0813 20:06:28.301962 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:28.302055262+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.302055262+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.685616206+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.685669818+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.685683108+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Tff8j 2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:28.883888374+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.884000247+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.884000247+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.884082709+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s 2025-08-13T20:06:28.902330863+00:00 stderr F time="2025-08-13T20:06:28Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Oq41s 2025-08-13T20:06:28.902330863+00:00 stderr F E0813 20:06:28.902313 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:28.904563686+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:28.904659029+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:29.288172151+00:00 
stderr F time="2025-08-13T20:06:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=q7Re+ 2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:29.484531994+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:29.484769601+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:29.484769601+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:29.484890994+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j 2025-08-13T20:06:29.485084470+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:29.485084470+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.081941562+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.082042295+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.082042295+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.082058995+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Zknej 2025-08-13T20:06:30.082109716+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.082109716+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.693063231+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.693214205+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.693433561+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:30.693498553+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:31.087252319+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.410178215+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:31.410178215+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:31.815722402+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.815722402+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.889601161+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.889944851+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.889994392+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.890068284+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=3/b+M 2025-08-13T20:06:31.890100255+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.890156607+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.086024412+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.086065673+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.086075844+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.610152570+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.610152570+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.679106567+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.688346182+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info 
msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:32.881690145+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.882261601+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" 
level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.682592218+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="catalog update required at 2025-08-13 20:06:33.682825164 +0000 UTC m=+449.334213281" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.203693937+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:34.230721402+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.231202476+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.294228903+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.294601363+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.294688216+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.294835090+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GYogS 2025-08-13T20:06:34.294933823+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.295012845+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.531050753+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531341911+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531425763+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531661660+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EgTk3 2025-08-13T20:06:34.531742733+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531845646+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.087074635+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.087519547+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.087647251+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.087946150+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.088233678+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.088348361+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.319184129+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.319619572+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.319726125+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.320045724+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="catalog update required at 2025-08-13 20:06:35.319971052 +0000 UTC m=+450.971359429" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.492294943+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492363785+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492363785+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=daLHK 2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.716204932+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.739636144+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:35.739636144+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="catalog update required at 2025-08-13 20:06:36.290062885 +0000 UTC m=+451.941451072" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.701448760+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.733992323+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:36.733992323+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.283263672+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283379175+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283379175+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="catalog update required at 2025-08-13 20:06:37.884757656 +0000 UTC m=+453.536145863" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.092241975+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-08-13T20:06:38.092323467+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="resolving sources" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.092323467+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checking if subscriptions need update" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.104610420+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.315227018+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:38.355523894+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.355523894+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.484491041+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484713547+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484754199+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484928834+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4TwJq 2025-08-13T20:06:38.484971205+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.485004926+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.286016762+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286108774+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286108774+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 
2025-08-13T20:06:39.286166416+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.411929782+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.411929782+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.558727521+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.558727521+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.682668194+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683063256+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683152478+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683388495+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wIhww 2025-08-13T20:06:39.683499848+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="registry state good" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683588961+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.895125296+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.895463175+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.895557448+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.896552426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=k0tK8 2025-08-13T20:06:39.896742382+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.896977069+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.518465107+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.518599201+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.683681214+00:00 stderr F time="2025-08-13T20:06:40Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683840249+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683840249+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683945572+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.702434612+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:40.702434612+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.683858140+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684409336+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684473457+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684592981+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684758366+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:41.684909280+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:41.884354688+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.884581335+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.885465550+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.885964304+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.886066947+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:41.886122629+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F 
time="2025-08-13T20:06:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890383862+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:42.892015309+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.867567909+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.867911889+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.867963541+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.868006242+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GW6YT 2025-08-13T20:06:43.868038283+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.868068834+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.975695169+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:43.975974367+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="resolving sources" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="checking if subscriptions need update" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="resolving sources" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="checking if subscriptions need update" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info 
msg="No subscriptions were found in namespace openshift-monitoring" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="resolving sources" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="checking if subscriptions need update" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:48.529694646+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.529741967+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.529741967+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:48.530299093+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:49.170617092+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:49.183381657+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace health=true id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.186333622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918537555+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918658748+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918658748+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="resolving sources" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checking if subscriptions need update" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:50.513729180+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:50.655559256+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.714640670+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.714640670+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=TdbPT 2025-08-13T20:06:50.797510826+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.797510826+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.805644769+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.805963649+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.806042481+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.806094592+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=TdbPT 2025-08-13T20:06:50.806133023+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.810715425+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 
2025-08-13T20:06:50.901265181+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.901662052+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.902011852+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.902231359+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.133741756+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.133741756+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.146006978+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149216880+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149323193+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149441806+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+VsiI 2025-08-13T20:06:51.149549129+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149635552+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.509337165+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.525533099+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.525657003+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.526219469+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:56.134097751+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.134097751+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.376976224+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.378197979+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.378197979+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.380846025+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=dA2JE 2025-08-13T20:06:56.381006220+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.381068022+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.490907001+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.490907001+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=Jgp93 namespace=default 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=Jgp93 namespace=default 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=8627U namespace=hostpath-provisioner 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=8627U namespace=hostpath-provisioner 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace default" id=Jgp93 namespace=default 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=GAzm9 namespace=kube-node-lease 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=GAzm9 namespace=kube-node-lease 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=8627U namespace=hostpath-provisioner 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=UgcrS namespace=kube-public 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=UgcrS namespace=kube-public 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=GAzm9 namespace=kube-node-lease 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-public" id=UgcrS namespace=kube-public 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=dUTGp namespace=kube-system 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=dUTGp namespace=kube-system 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=YBTPP namespace=openshift 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=YBTPP namespace=openshift 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift" id=YBTPP 
namespace=openshift 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=ScKe9 namespace=openshift-apiserver 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=ScKe9 namespace=openshift-apiserver 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-system" id=dUTGp namespace=kube-system 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=1r5b5 namespace=openshift-apiserver-operator 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=1r5b5 namespace=openshift-apiserver-operator 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=1r5b5 namespace=openshift-apiserver-operator 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=vxc7b namespace=openshift-authentication 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=vxc7b namespace=openshift-authentication 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=ScKe9 namespace=openshift-apiserver 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=D5zUy namespace=openshift-authentication-operator 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=D5zUy namespace=openshift-authentication-operator 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=vxc7b namespace=openshift-authentication 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=GH+4T namespace=openshift-cloud-network-config-controller 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=GH+4T namespace=openshift-cloud-network-config-controller 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=D5zUy namespace=openshift-authentication-operator 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=EFPAp namespace=openshift-cloud-platform-infra 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=EFPAp namespace=openshift-cloud-platform-infra 2025-08-13T20:06:57.103426132+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=GH+4T namespace=openshift-cloud-network-config-controller 2025-08-13T20:06:57.103552366+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=jSDho namespace=openshift-cluster-machine-approver 2025-08-13T20:06:57.103587677+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=jSDho 
namespace=openshift-cluster-machine-approver 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=EFPAp namespace=openshift-cloud-platform-infra 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=ivWJs namespace=openshift-cluster-samples-operator 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=ivWJs namespace=openshift-cluster-samples-operator 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=jSDho namespace=openshift-cluster-machine-approver 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=fKmPg namespace=openshift-cluster-storage-operator 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=fKmPg namespace=openshift-cluster-storage-operator 2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=ivWJs namespace=openshift-cluster-samples-operator 2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=RF8fd namespace=openshift-cluster-version 2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=RF8fd namespace=openshift-cluster-version 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=RF8fd namespace=openshift-cluster-version 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="resolving sources" id=YcZnZ namespace=openshift-config 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="checking if subscriptions need update" id=YcZnZ namespace=openshift-config 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=fKmPg namespace=openshift-cluster-storage-operator 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="resolving sources" id=fnLg3 namespace=openshift-config-managed 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="checking if subscriptions need update" id=fnLg3 namespace=openshift-config-managed 2025-08-13T20:06:59.170645801+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config" id=YcZnZ namespace=openshift-config 2025-08-13T20:06:59.170830256+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=fJd8l namespace=openshift-config-operator 2025-08-13T20:06:59.170918399+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=fJd8l namespace=openshift-config-operator 2025-08-13T20:06:59.175838440+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.175838440+00:00 stderr F time="2025-08-13T20:06:59Z" 
level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.181687667+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=fnLg3 namespace=openshift-config-managed 2025-08-13T20:06:59.181903334+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=R3oAq namespace=openshift-console 2025-08-13T20:06:59.181993026+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=R3oAq namespace=openshift-console 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console" id=R3oAq namespace=openshift-console 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=88gS+ namespace=openshift-console-operator 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=88gS+ namespace=openshift-console-operator 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=fJd8l namespace=openshift-config-operator 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=jVVWB namespace=openshift-console-user-settings 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=jVVWB namespace=openshift-console-user-settings 2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=88gS+ namespace=openshift-console-operator 2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info 
msg="resolving sources" id=7ABnv namespace=openshift-controller-manager 2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=7ABnv namespace=openshift-controller-manager 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=jVVWB namespace=openshift-console-user-settings 2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=bozjR namespace=openshift-controller-manager-operator 2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=bozjR namespace=openshift-controller-manager-operator 2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=7ABnv namespace=openshift-controller-manager 2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=Bio85 namespace=openshift-dns 2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=Bio85 namespace=openshift-dns 2025-08-13T20:06:59.703075856+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=bozjR namespace=openshift-controller-manager-operator 2025-08-13T20:06:59.703192020+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=Af1UN namespace=openshift-dns-operator 2025-08-13T20:06:59.703232731+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=Af1UN namespace=openshift-dns-operator 2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=Bio85 namespace=openshift-dns 2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=fddqw namespace=openshift-etcd 2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=fddqw namespace=openshift-etcd 
2025-08-13T20:07:00.096657181+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.096657181+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.098826673+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099016628+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099061860+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099155562+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Nrb/Z 2025-08-13T20:07:00.099232584+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099268175+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=Af1UN namespace=openshift-dns-operator 2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=uuetg namespace=openshift-etcd-operator 2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=uuetg namespace=openshift-etcd-operator 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="requeueing registry server for catalog update 
check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=fddqw namespace=openshift-etcd 2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=jF/Mz namespace=openshift-host-network 2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=jF/Mz namespace=openshift-host-network 2025-08-13T20:07:00.503281089+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=uuetg namespace=openshift-etcd-operator 2025-08-13T20:07:00.503375092+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=kLxN2 namespace=openshift-image-registry 2025-08-13T20:07:00.503409213+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=kLxN2 namespace=openshift-image-registry 2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=jF/Mz namespace=openshift-host-network 2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=+/qN1 namespace=openshift-infra 2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=+/qN1 namespace=openshift-infra 2025-08-13T20:07:00.935907223+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=kLxN2 namespace=openshift-image-registry 2025-08-13T20:07:00.936043507+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=Ty/lT namespace=openshift-ingress 2025-08-13T20:07:00.936077668+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=Ty/lT namespace=openshift-ingress 2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=+/qN1 namespace=openshift-infra 2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=cRYWK namespace=openshift-ingress-canary 2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=cRYWK namespace=openshift-ingress-canary 2025-08-13T20:07:01.301361841+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=Ty/lT namespace=openshift-ingress 2025-08-13T20:07:01.301466174+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=tkPdT namespace=openshift-ingress-operator 2025-08-13T20:07:01.301499775+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=tkPdT namespace=openshift-ingress-operator 2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=cRYWK namespace=openshift-ingress-canary 2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=E3P2t namespace=openshift-kni-infra 
2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=E3P2t namespace=openshift-kni-infra 2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=tkPdT namespace=openshift-ingress-operator 2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=X/Qr9 namespace=openshift-kube-apiserver 2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=X/Qr9 namespace=openshift-kube-apiserver 2025-08-13T20:07:01.909935219+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=E3P2t namespace=openshift-kni-infra 2025-08-13T20:07:01.910360421+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=EvcdC namespace=openshift-kube-apiserver-operator 2025-08-13T20:07:01.910960379+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=EvcdC namespace=openshift-kube-apiserver-operator 2025-08-13T20:07:02.428272220+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=EvcdC namespace=openshift-kube-apiserver-operator 2025-08-13T20:07:02.428531668+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="resolving sources" id=MfPwX namespace=openshift-kube-controller-manager 2025-08-13T20:07:02.437184656+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="checking if subscriptions need update" id=MfPwX namespace=openshift-kube-controller-manager 2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=X/Qr9 namespace=openshift-kube-apiserver 2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="resolving sources" id=/r0yC namespace=openshift-kube-controller-manager-operator 2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="checking if subscriptions need update" id=/r0yC namespace=openshift-kube-controller-manager-operator 2025-08-13T20:07:03.884353897+00:00 stderr F time="2025-08-13T20:07:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:03.884477280+00:00 stderr F time="2025-08-13T20:07:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=MfPwX namespace=openshift-kube-controller-manager 2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=M48GQ namespace=openshift-kube-scheduler 2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=M48GQ namespace=openshift-kube-scheduler 2025-08-13T20:07:04.452815165+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=/r0yC namespace=openshift-kube-controller-manager-operator 
2025-08-13T20:07:04.452930688+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=MhMX5 namespace=openshift-kube-scheduler-operator 2025-08-13T20:07:04.452972260+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=MhMX5 namespace=openshift-kube-scheduler-operator 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=M48GQ namespace=openshift-kube-scheduler 2025-08-13T20:07:04.519269010+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=/ZiT9 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:07:04.519269010+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=/ZiT9 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=MhMX5 namespace=openshift-kube-scheduler-operator 2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=1p/uA namespace=openshift-machine-api 2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" 
id=1p/uA namespace=openshift-machine-api 2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=/ZiT9 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=7tI/m namespace=openshift-machine-config-operator 2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=7tI/m namespace=openshift-machine-config-operator 2025-08-13T20:07:04.574843154+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.574843154+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=7tI/m namespace=openshift-machine-config-operator 2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=iPKzd namespace=openshift-marketplace 2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=iPKzd namespace=openshift-marketplace 2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=1p/uA namespace=openshift-machine-api 2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=KEqPT namespace=openshift-monitoring 2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=KEqPT namespace=openshift-monitoring 2025-08-13T20:07:04.612855994+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info 
msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=iPKzd namespace=openshift-marketplace 2025-08-13T20:07:04.642263837+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=XK4U+ namespace=openshift-multus 2025-08-13T20:07:04.642263837+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=XK4U+ namespace=openshift-multus 2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=KEqPT namespace=openshift-monitoring 2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=sfNTf namespace=openshift-network-diagnostics 2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=sfNTf namespace=openshift-network-diagnostics 2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=XK4U+ namespace=openshift-multus 2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=z0TgY namespace=openshift-network-node-identity 2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=z0TgY namespace=openshift-network-node-identity 2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.675230562+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK 2025-08-13T20:07:04.720861450+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=sfNTf namespace=openshift-network-diagnostics 2025-08-13T20:07:04.723694311+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=UXPNK namespace=openshift-network-operator 2025-08-13T20:07:04.723694311+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=UXPNK namespace=openshift-network-operator 2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=z0TgY namespace=openshift-network-node-identity 2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=+Tnz6 namespace=openshift-node 2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=+Tnz6 namespace=openshift-node 2025-08-13T20:07:04.998404328+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:04.998404328+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037186670+00:00 
stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037311213+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037311213+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.037565210+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df 2025-08-13T20:07:05.098929900+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.098929900+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=UXPNK namespace=openshift-network-operator 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=EF1d4 namespace=openshift-nutanix-infra 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=EF1d4 namespace=openshift-nutanix-infra 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring 
registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.173327193+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.194309614+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.194309614+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.194476909+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ 2025-08-13T20:07:05.200168582+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.200168582+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206701680+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206727580+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206740501+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206753011+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EM2Dn 2025-08-13T20:07:05.206763421+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.206814553+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-node" id=+Tnz6 namespace=openshift-node 2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info 
msg="resolving sources" id=90LR/ namespace=openshift-oauth-apiserver 2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=90LR/ namespace=openshift-oauth-apiserver 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn 2025-08-13T20:07:05.495970383+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.495970383+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=EF1d4 namespace=openshift-nutanix-infra 2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=bzNXl namespace=openshift-openstack-infra 2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=bzNXl namespace=openshift-openstack-infra 2025-08-13T20:07:05.527593480+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527648152+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527648152+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=cmZbo 
2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:05.720449489+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.720449489+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=90LR/ namespace=openshift-oauth-apiserver 2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=7fSuO namespace=openshift-operator-lifecycle-manager 2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=7fSuO namespace=openshift-operator-lifecycle-manager 2025-08-13T20:07:05.927625519+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=bzNXl namespace=openshift-openstack-infra 2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=7AwXH namespace=openshift-operators 2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=7AwXH namespace=openshift-operators 2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=7fSuO namespace=openshift-operator-lifecycle-manager 2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=Fif+h namespace=openshift-ovirt-infra 2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=Fif+h namespace=openshift-ovirt-infra 2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=7AwXH namespace=openshift-operators 2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=8/ggq namespace=openshift-ovn-kubernetes 2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=8/ggq namespace=openshift-ovn-kubernetes 2025-08-13T20:07:06.552257327+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.552257327+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo 2025-08-13T20:07:06.556577681+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.556577681+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=Fif+h namespace=openshift-ovirt-infra 2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=KRghI namespace=openshift-route-controller-manager 2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=KRghI namespace=openshift-route-controller-manager 2025-08-13T20:07:06.735860501+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.736024806+00:00 
stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.736024806+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.736105458+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv 2025-08-13T20:07:06.903362294+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=8/ggq namespace=openshift-ovn-kubernetes 2025-08-13T20:07:06.903362294+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=HgYn+ namespace=openshift-service-ca 2025-08-13T20:07:06.903425696+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=HgYn+ namespace=openshift-service-ca 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=KRghI namespace=openshift-route-controller-manager 2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=5YpIl namespace=openshift-service-ca-operator 2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=5YpIl 
namespace=openshift-service-ca-operator 2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=HgYn+ namespace=openshift-service-ca 2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=PVkqD namespace=openshift-user-workload-monitoring 2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=PVkqD namespace=openshift-user-workload-monitoring 2025-08-13T20:07:07.429827448+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.433853584+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.433853584+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=5YpIl namespace=openshift-service-ca-operator 2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=t5Uw+ namespace=openshift-vsphere-infra 2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=t5Uw+ namespace=openshift-vsphere-infra 2025-08-13T20:07:07.523702840+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:07.523702840+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr 2025-08-13T20:07:08.166985893+00:00 stderr F time="2025-08-13T20:07:08Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=t5Uw+ namespace=openshift-vsphere-infra 2025-08-13T20:07:08.167210020+00:00 stderr F time="2025-08-13T20:07:08Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=PVkqD namespace=openshift-user-workload-monitoring 2025-08-13T20:07:09.012897627+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.012897627+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026530958+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 
2025-08-13T20:07:09.026739694+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026849607+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.026943929+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=oO6F4 2025-08-13T20:07:09.026994031+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.027029852+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4 2025-08-13T20:07:09.295143979+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.295143979+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F 
time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352439852+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352667768+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352715280+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352948736+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.544717705+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.544928851+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F 
time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=6G8MG 
2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927146228+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:17.402023609+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.402023609+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.471673926+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.471673926+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.540994884+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.540994884+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.569925323+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.569925323+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:19.897743354+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.901121591+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.962722627+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=XXh2y 2025-08-13T20:07:19.963908401+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963908401+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109529166+00:00 stderr F time="2025-08-13T20:07:20Z" level=info 
msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109769743+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109941668+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.110055541+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.363563220+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.363563220+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383363787+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383672586+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383724398+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383768529+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=xkuDP 2025-08-13T20:07:20.383921013+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383967415+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488287136+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.494112093+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.494289488+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521108847+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521445106+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521614701+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521707434+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=SeWd4 2025-08-13T20:07:20.522132546+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.522691662+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.539406671+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.539712020+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.565002835+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.565767577+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.566136148+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.566481028+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=mvFP1 2025-08-13T20:07:20.566576080+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.566707794+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.650509956+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.650744512+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.650846445+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.650978819+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.822077504+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.822822676+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.822964710+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.823405262+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:21.118466272+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.118579555+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143076477+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143546361+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143663764+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143761227+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=TpWB0 2025-08-13T20:07:21.143910861+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.144029865+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.222951427+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.223850313+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.223993457+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.224400239+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.224472851+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.224765519+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="syncing catalog source" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.224996136+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.258483186+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.258935579+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.259019062+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.259229778+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=kdqU8 2025-08-13T20:07:21.259333681+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.259418713+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513392535+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513501248+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513501248+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513692293+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513692293+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:26.432005016+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.432005016+00:00 stderr F time="2025-08-13T20:07:26Z" level=info 
msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463750046+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463760816+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.537822570+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.537822570+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545020466+00:00 stderr F 
time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545276254+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545319895+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545360856+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=+5tYT 2025-08-13T20:07:26.545391007+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545420538+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:34.621979639+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.621979639+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr 
F time="2025-08-13T20:07:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:35.201738891+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.202012579+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="of 1 pods matching label selector, 1 
have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227093568+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:39.803162468+00:00 stderr F time="2025-08-13T20:07:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:39.803162468+00:00 stderr F time="2025-08-13T20:07:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284657233+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284698754+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284708834+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284718375+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server 
health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kwBL/ 2025-08-13T20:07:40.284728345+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284728345+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.534016432+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.534016432+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.534107155+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.534107155+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F 
time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.572495475+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.590760549+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.591497060+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.591582493+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info 
msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.608217870+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.611012370+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.890683498+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 
2025-08-13T20:07:40.891013548+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.891013548+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:41.092128714+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:41.092128714+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:42.210912870+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.210912870+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.235076863+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.238390857+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.238522041+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.238632754+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GWpeZ 2025-08-13T20:07:42.238742578+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.238829830+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ 2025-08-13T20:07:42.328618204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.328618204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.331729294+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kRsnw 2025-08-13T20:07:42.332483155+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.332483155+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.361420195+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.361420195+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.363117934+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.363152725+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.363162755+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.363394412+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw 2025-08-13T20:07:42.507196204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507257866+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507267827+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507277557+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uUuJK 2025-08-13T20:07:42.507287257+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.507287257+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:42.615342925+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.615342925+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892730938+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.090820408+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.091066445+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.091110296+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.091269920+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK 2025-08-13T20:07:43.492354760+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492641138+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492689280+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492920576+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.492966527+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3 2025-08-13T20:07:43.493119482+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="syncing catalog source" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.493202214+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.690738548+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691115039+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691176620+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691215671+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=zL7jK 2025-08-13T20:07:43.691245762+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:43.691275603+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.090946862+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091207440+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091248581+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091381265+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:44.091425996+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK 2025-08-13T20:07:57.427919925+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.427919925+00:00 stderr F time="2025-08-13T20:07:57Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef 2025-08-13T20:07:57.616078620+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.616078620+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.621872686+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622178515+00:00 stderr F time="2025-08-13T20:07:57Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622240697+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622283578+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/W5/o 2025-08-13T20:07:57.622338369+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.622556936+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.635754994+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.636161676+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.636214817+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.661724129+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.661833862+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o 2025-08-13T20:07:57.661948955+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.661995626+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.669732978+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.669973195+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace 
id=ZaL6T 2025-08-13T20:07:57.670018046+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.670056607+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ZaL6T 2025-08-13T20:07:57.670089528+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.670119569+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.686739826+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.687120957+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.687190539+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.691604545+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:57.691604545+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T 2025-08-13T20:07:59.719443285+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.719556459+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.730188293+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.730188293+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.745876913+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.746115750+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.746156171+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:07:59.746274225+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR 2025-08-13T20:08:00.174554403+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.174687257+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179552996+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=7VIOl 
2025-08-13T20:08:00.179719591+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.179719591+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.237632841+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.237918690+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.238012702+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:00.238112515+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl 2025-08-13T20:08:01.069850143+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.069850143+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 
2025-08-13T20:08:01.120543376+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.120871825+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.120973828+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.121166044+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.121215975+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo 2025-08-13T20:08:01.121343809+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.121392790+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136554085+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136739570+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136851224+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.136933486+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=yqVMl 2025-08-13T20:08:01.136987037+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.137161492+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.167616776+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 
2025-08-13T20:08:01.167848692+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.167848692+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.176215862+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:01.176215862+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl 2025-08-13T20:08:04.740190434+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.740190434+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.744980792+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.792837464+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793194344+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793234185+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793402310+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793441461+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:05.227830806+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.227830806+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258361051+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258426143+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258436623+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258636439+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258636439+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:34.569230327+00:00 stderr F time="2025-08-13T20:08:34Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:09:45.312959779+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.312959779+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.313585607+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.313674139+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.325440706+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329209134+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current 
pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329468842+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329510783+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329549094+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=m5f4a 2025-08-13T20:09:45.329580195+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329610826+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info 
msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.387238498+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395537146+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395660890+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395710771+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=auOVA 2025-08-13T20:09:45.395743392+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395877206+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398470770+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398541992+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398585564+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Id+B8 2025-08-13T20:09:45.398619555+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398650715+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.399083668+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 
2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:53.174562738+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:53.175244858+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:53.175335740+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.175335740+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.179189551+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.179189551+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.179215391+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.197599259+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.199541824+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.751689806+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752277003+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752326864+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752379006+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=HVKfg 2025-08-13T20:09:55.752410847+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752451938+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752714516+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.753997462+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754057224+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754096405+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=OemHP 2025-08-13T20:09:55.754128546+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754158867+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.770968679+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.776185209+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776207459+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776207459+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776339553+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776339553+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.777490676+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring 
registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.145267191+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="synchronizing 
registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.546288508+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.748351692+00:00 stderr F time="2025-08-13T20:09:56Z" level=info 
msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748483165+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748483165+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748497786+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=IakPh 2025-08-13T20:09:56.748497786+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748508116+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.156892975+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=aUKHs namespace=default 2025-08-13T20:09:57.156892975+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=aUKHs namespace=default 2025-08-13T20:09:57.157035829+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=NGiwD namespace=hostpath-provisioner 2025-08-13T20:09:57.157035829+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=NGiwD namespace=hostpath-provisioner 2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=NGiwD namespace=hostpath-provisioner 2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=VX3sc namespace=kube-node-lease 2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=VX3sc namespace=kube-node-lease 2025-08-13T20:09:57.165284335+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace default" id=aUKHs namespace=default 2025-08-13T20:09:57.165284335+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=aaOn8 namespace=kube-public 2025-08-13T20:09:57.165311576+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=aaOn8 namespace=kube-public 2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=VX3sc namespace=kube-node-lease 2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=d1BGQ namespace=kube-system 2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=d1BGQ namespace=kube-system 
2025-08-13T20:09:57.168197969+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-public" id=aaOn8 namespace=kube-public 2025-08-13T20:09:57.168197969+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=cmzNJ namespace=openshift 2025-08-13T20:09:57.168212199+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=cmzNJ namespace=openshift 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift" id=cmzNJ namespace=openshift 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=4umjd namespace=openshift-apiserver 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=4umjd namespace=openshift-apiserver 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-system" id=d1BGQ namespace=kube-system 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=WMEZ7 namespace=openshift-apiserver-operator 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=WMEZ7 namespace=openshift-apiserver-operator 2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=4umjd namespace=openshift-apiserver 2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=Z2Ddy namespace=openshift-authentication 2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=Z2Ddy namespace=openshift-authentication 2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=WMEZ7 namespace=openshift-apiserver-operator 2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=jYGI7 namespace=openshift-authentication-operator 2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=jYGI7 namespace=openshift-authentication-operator 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=Z2Ddy namespace=openshift-authentication 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=F/ZKL namespace=openshift-cloud-network-config-controller 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=F/ZKL namespace=openshift-cloud-network-config-controller 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=jYGI7 namespace=openshift-authentication-operator 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=XQ/f/ namespace=openshift-cloud-platform-infra 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking 
if subscriptions need update" id=XQ/f/ namespace=openshift-cloud-platform-infra 2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=F/ZKL namespace=openshift-cloud-network-config-controller 2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=EJvXe namespace=openshift-cluster-machine-approver 2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=EJvXe namespace=openshift-cluster-machine-approver 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.549209973+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.549209973+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=XQ/f/ namespace=openshift-cloud-platform-infra 2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=YCKSt namespace=openshift-cluster-samples-operator 2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=YCKSt namespace=openshift-cluster-samples-operator 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=EJvXe namespace=openshift-cluster-machine-approver 2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=a5iDW namespace=openshift-cluster-storage-operator 2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=a5iDW namespace=openshift-cluster-storage-operator 2025-08-13T20:09:57.946444862+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" 
id=YCKSt namespace=openshift-cluster-samples-operator 2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=N9XnC namespace=openshift-cluster-version 2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=N9XnC namespace=openshift-cluster-version 2025-08-13T20:09:58.148277418+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148434292+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148434292+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=FPqH4 2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=a5iDW namespace=openshift-cluster-storage-operator 2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=shDZ2 namespace=openshift-config 2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=shDZ2 namespace=openshift-config 2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=N9XnC namespace=openshift-cluster-version 2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=u9Mnp namespace=openshift-config-managed 2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=u9Mnp namespace=openshift-config-managed 2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config" id=shDZ2 namespace=openshift-config 2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=rL09V namespace=openshift-config-operator 2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=rL09V namespace=openshift-config-operator 2025-08-13T20:09:58.745648065+00:00 stderr F 
time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.745931463+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.745931463+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.746050217+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.746050217+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=u9Mnp namespace=openshift-config-managed 2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=xns6Z namespace=openshift-console 2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=xns6Z namespace=openshift-console 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.962240075+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=rL09V namespace=openshift-config-operator 2025-08-13T20:09:58.962337268+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=hQUnk 
namespace=openshift-console-operator 2025-08-13T20:09:58.962337268+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=hQUnk namespace=openshift-console-operator 2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console" id=xns6Z namespace=openshift-console 2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=2uTnu namespace=openshift-console-user-settings 2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=2uTnu namespace=openshift-console-user-settings 2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=hQUnk namespace=openshift-console-operator 2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=cDtnC namespace=openshift-controller-manager 2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=cDtnC namespace=openshift-controller-manager 2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=2uTnu namespace=openshift-console-user-settings 2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=L69/p namespace=openshift-controller-manager-operator 2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=L69/p namespace=openshift-controller-manager-operator 2025-08-13T20:09:59.762659064+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=cDtnC namespace=openshift-controller-manager 2025-08-13T20:09:59.762659064+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=CTpzL namespace=openshift-dns 2025-08-13T20:09:59.762715376+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=CTpzL namespace=openshift-dns 2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=L69/p namespace=openshift-controller-manager-operator 2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=WzHQ7 namespace=openshift-dns-operator 2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=WzHQ7 namespace=openshift-dns-operator 2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=CTpzL namespace=openshift-dns 2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=sAyKO namespace=openshift-etcd 2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=sAyKO namespace=openshift-etcd 2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=WzHQ7 
namespace=openshift-dns-operator 2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=RUWXH namespace=openshift-etcd-operator 2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=RUWXH namespace=openshift-etcd-operator 2025-08-13T20:10:00.562843906+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=sAyKO namespace=openshift-etcd 2025-08-13T20:10:00.562987640+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=b1K1t namespace=openshift-host-network 2025-08-13T20:10:00.563004551+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=b1K1t namespace=openshift-host-network 2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=RUWXH namespace=openshift-etcd-operator 2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=T2uVn namespace=openshift-image-registry 2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=T2uVn namespace=openshift-image-registry 2025-08-13T20:10:00.962427103+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=b1K1t namespace=openshift-host-network 2025-08-13T20:10:00.962524555+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=vUTpE namespace=openshift-infra 2025-08-13T20:10:00.962524555+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=vUTpE namespace=openshift-infra 2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=T2uVn namespace=openshift-image-registry 2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=KYZtS namespace=openshift-ingress 2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=KYZtS namespace=openshift-ingress 2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=vUTpE namespace=openshift-infra 2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=oFtfD namespace=openshift-ingress-canary 2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=oFtfD namespace=openshift-ingress-canary 2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=KYZtS namespace=openshift-ingress 2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=iqP7R namespace=openshift-ingress-operator 2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=iqP7R namespace=openshift-ingress-operator 2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace 
openshift-ingress-canary" id=oFtfD namespace=openshift-ingress-canary 2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=7k1Dn namespace=openshift-kni-infra 2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=7k1Dn namespace=openshift-kni-infra 2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=iqP7R namespace=openshift-ingress-operator 2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=vvE+v namespace=openshift-kube-apiserver 2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=vvE+v namespace=openshift-kube-apiserver 2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=7k1Dn namespace=openshift-kni-infra 2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=1Jzn4 namespace=openshift-kube-apiserver-operator 2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=1Jzn4 namespace=openshift-kube-apiserver-operator 2025-08-13T20:10:02.372387967+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=vvE+v namespace=openshift-kube-apiserver 2025-08-13T20:10:02.372522941+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=I4eoQ namespace=openshift-kube-controller-manager 2025-08-13T20:10:02.372563762+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=I4eoQ namespace=openshift-kube-controller-manager 2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=1Jzn4 namespace=openshift-kube-apiserver-operator 2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=RBqCK namespace=openshift-kube-controller-manager-operator 2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=RBqCK namespace=openshift-kube-controller-manager-operator 2025-08-13T20:10:02.760844894+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=I4eoQ namespace=openshift-kube-controller-manager 2025-08-13T20:10:02.760844894+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=QGL7g namespace=openshift-kube-scheduler 2025-08-13T20:10:02.760895106+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=QGL7g namespace=openshift-kube-scheduler 2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=RBqCK namespace=openshift-kube-controller-manager-operator 2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=EN/zw namespace=openshift-kube-scheduler-operator 2025-08-13T20:10:02.964298548+00:00 stderr F 
time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=EN/zw namespace=openshift-kube-scheduler-operator 2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=QGL7g namespace=openshift-kube-scheduler 2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=MpSL1 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=MpSL1 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=EN/zw namespace=openshift-kube-scheduler-operator 2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=MpSL1 namespace=openshift-kube-storage-version-migrator 2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=dyEQA namespace=openshift-machine-api 2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=dyEQA namespace=openshift-machine-api 2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=/z821 namespace=openshift-machine-config-operator 2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=/z821 namespace=openshift-machine-config-operator 2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=dyEQA namespace=openshift-machine-api 2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=wmntX namespace=openshift-marketplace 2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=wmntX namespace=openshift-marketplace 2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=/z821 namespace=openshift-machine-config-operator 2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=/GelK namespace=openshift-monitoring 2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=/GelK namespace=openshift-monitoring 2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info 
msg="No subscriptions were found in namespace openshift-marketplace" id=wmntX namespace=openshift-marketplace 2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=K7dM5 namespace=openshift-multus 2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=K7dM5 namespace=openshift-multus 2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=/GelK namespace=openshift-monitoring 2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=IjjbX namespace=openshift-network-diagnostics 2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=IjjbX namespace=openshift-network-diagnostics 2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=K7dM5 namespace=openshift-multus 2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=9ERws namespace=openshift-network-node-identity 2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=9ERws namespace=openshift-network-node-identity 2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=IjjbX namespace=openshift-network-diagnostics 2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=ejPIl namespace=openshift-network-operator 2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=ejPIl namespace=openshift-network-operator 2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=9ERws namespace=openshift-network-node-identity 2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=3JdwK namespace=openshift-node 2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=3JdwK namespace=openshift-node 2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ejPIl namespace=openshift-network-operator 2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=cxk9z namespace=openshift-nutanix-infra 2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=cxk9z namespace=openshift-nutanix-infra 2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-node" id=3JdwK namespace=openshift-node 2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=twwxk namespace=openshift-oauth-apiserver 2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=twwxk namespace=openshift-oauth-apiserver 
2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=cxk9z namespace=openshift-nutanix-infra 2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=LOHtz namespace=openshift-openstack-infra 2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=LOHtz namespace=openshift-openstack-infra 2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=twwxk namespace=openshift-oauth-apiserver 2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=PrCpi namespace=openshift-operator-lifecycle-manager 2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=PrCpi namespace=openshift-operator-lifecycle-manager 2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=LOHtz namespace=openshift-openstack-infra 2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=077is namespace=openshift-operators 2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=077is namespace=openshift-operators 2025-08-13T20:10:06.361410876+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=PrCpi namespace=openshift-operator-lifecycle-manager 2025-08-13T20:10:06.361549950+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=gIJXC namespace=openshift-ovirt-infra 2025-08-13T20:10:06.361585521+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=gIJXC namespace=openshift-ovirt-infra 2025-08-13T20:10:06.560976667+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=077is namespace=openshift-operators 2025-08-13T20:10:06.561110021+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=ikEL/ namespace=openshift-ovn-kubernetes 2025-08-13T20:10:06.561188093+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=ikEL/ namespace=openshift-ovn-kubernetes 2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=gIJXC namespace=openshift-ovirt-infra 2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=oFghW namespace=openshift-route-controller-manager 2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=oFghW namespace=openshift-route-controller-manager 2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=ikEL/ namespace=openshift-ovn-kubernetes 2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=jtVwc namespace=openshift-service-ca 
2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=jtVwc namespace=openshift-service-ca 2025-08-13T20:10:07.162838483+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=oFghW namespace=openshift-route-controller-manager 2025-08-13T20:10:07.162959517+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=4EA5b namespace=openshift-service-ca-operator 2025-08-13T20:10:07.162959517+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=4EA5b namespace=openshift-service-ca-operator 2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=jtVwc namespace=openshift-service-ca 2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=+3L9L namespace=openshift-user-workload-monitoring 2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=+3L9L namespace=openshift-user-workload-monitoring 2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=4EA5b namespace=openshift-service-ca-operator 2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=ANtt4 namespace=openshift-vsphere-infra 2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=ANtt4 namespace=openshift-vsphere-infra 2025-08-13T20:10:07.763725242+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=+3L9L namespace=openshift-user-workload-monitoring 2025-08-13T20:10:07.962111509+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=ANtt4 namespace=openshift-vsphere-infra 2025-08-13T20:10:36.567661543+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.567730145+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.568002362+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.568002362+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.571734019+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.571849393+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572242304+00:00 stderr F time="2025-08-13T20:10:36Z" 
level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572242304+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572262824+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=52tz4 2025-08-13T20:10:36.572318606+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572318606+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Upb4M 2025-08-13T20:10:36.572574683+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.572574683+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.592592257+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.592866545+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.592866545+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.593097232+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.593097232+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4 2025-08-13T20:10:36.593356229+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593453532+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.593453532+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.593564375+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593564375+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593581236+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593592416+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M 2025-08-13T20:10:36.593727950+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.593727950+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596436368+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596566301+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596566301+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596580592+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=h3sA4 2025-08-13T20:10:36.596590752+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596590752+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.596769277+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597005344+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597005344+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=zC15b 2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:36.967022983+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967294600+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967346602+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967502926+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:36.967541448+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4 2025-08-13T20:10:37.169114087+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169361304+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169429406+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169642292+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:10:37.169769006+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b 2025-08-13T20:16:57.983401621+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.983401621+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994107757+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=NJHA9 2025-08-13T20:16:57.994495598+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:57.994495598+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052560476+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching 
for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052759281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052759281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.052999578+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="catalog update required at 2025-08-13 20:16:58.052873305 +0000 UTC m=+1073.704261512" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.104206561+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9 2025-08-13T20:16:58.160920630+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.160920630+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.182861157+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301356281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=uYxTw 2025-08-13T20:16:58.301490114+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301490114+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301576117+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw 2025-08-13T20:16:58.301687300+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.301687300+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349110124+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349228398+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349244878+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349256929+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Q4L1p 2025-08-13T20:16:58.349267839+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349267839+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 
stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.366071269+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.366071269+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371099302+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 
2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.869911227+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.869911227+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187540528+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187934079+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187934079+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.188080593+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.982015846+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="syncing catalog source" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.982015846+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.985854136+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.010869410+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="catalog update required at 2025-08-13 20:17:00.011047765 +0000 UTC m=+1075.662435882" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.118078552+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.206741004+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 
2025-08-13T20:17:00.206741004+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.222015870+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.222015870+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.247353173+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247353173+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.388159314+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 
2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.065919068+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.065989060+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.065989060+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.388421828+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 
2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:02.214701065+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.214916361+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.257914299+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 
2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.328990569+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.329082781+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 
2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.226194831+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.226194831+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231187223+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231403959+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231451051+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231496242+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=NcZkZ 2025-08-13T20:17:03.231594725+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231634786+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:13.567481128+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.567481128+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575424395+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575732133+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575732133+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575751934+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cWRIS 2025-08-13T20:17:13.575766694+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575833406+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace 
id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="catalog update required at 2025-08-13 20:17:14.574302031 +0000 UTC m=+1090.225690378" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.761215717+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:16.057152613+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:16.057152613+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.090869583+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.092864910+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.092945742+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.093063286+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=DQw9G 2025-08-13T20:17:17.093101357+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.093149408+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images 
and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.167229383+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.167229383+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468679401+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.623563934+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.623563934+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.643182265+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643390751+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643430382+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643529405+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163396920+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:25.192955276+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.192955276+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.588933943+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.588933943+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.613010941+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614580255+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614639487+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614680998+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LEDYg 2025-08-13T20:17:25.614712339+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614743140+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:26.014261069+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.446116762+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.446166283+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.450755694+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=VNRX8 2025-08-13T20:17:26.451029212+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451029212+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.500597568+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.500840705+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.500840705+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.501008230+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:28.104889022+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.104942904+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=H9D55 2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info 
msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55 2025-08-13T20:17:29.083862719+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.083862719+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.780267686+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=fI9Bs 2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.988379249+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.988635976+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.988635976+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="of 1 pods matching label selector, 1 have 
the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:29.988765340+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="catalog update required at 2025-08-13 20:17:29.988696568 +0000 UTC m=+1105.640085225" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:30.091590506+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs 2025-08-13T20:17:30.118922297+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.118922297+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.258727749+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.258895684+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.258914255+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.258914255+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=tlYOF 2025-08-13T20:17:30.258924595+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.258934245+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8 2025-08-13T20:17:30.300262586+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=tlYOF 2025-08-13T20:17:30.359155597+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.359155597+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.447140790+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.447489830+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.447532691+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.447571042+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=qpwQE 2025-08-13T20:17:30.447625614+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.447664765+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.505590159+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE 2025-08-13T20:17:30.507065671+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.507065671+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry 
server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=5DLXM 2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.786895173+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:30.786895173+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:30.790183456+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.790392212+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.790466695+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.790550767+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM 2025-08-13T20:17:30.790622629+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:30.790622629+00:00 stderr F 
time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=qF6GE 2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:31.237228273+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:31.237274404+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:31.237287695+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=le9Xh 2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:31.786854869+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:31.790262216+00:00 stderr F 
time="2025-08-13T20:17:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:31.790262216+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh 2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ESgBx 2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE 2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OC4YC 2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:34.355004368+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx 2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=WkWXW 2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:35.301474856+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:35.301877788+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:35.302487085+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:35.330228528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:35.330280479+00:00 stderr F 
time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:35.330280479+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW 2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC 2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.338001729+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.338001729+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=3ug3Z 2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.351170726+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=LtZef 2025-08-13T20:17:35.351245528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.351245528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef 2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=3ug3Z 2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.593362432+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.593362432+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.849649031+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.849649031+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195428065+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195492227+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195492227+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.196427574+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.624680583+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.624680583+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847434594+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847479315+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847479315+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847670481+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.853169388+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.853169388+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.600328055+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="syncing catalog 
source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.600328055+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.603665960+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013003720+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013050411+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013050411+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.014143572+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.014143572+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.847365117+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.847365117+00:00 stderr F time="2025-08-13T20:17:38Z" level=info 
msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852446052+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852603126+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852616807+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852652978+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=t60me 2025-08-13T20:17:38.852664618+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852664618+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032347619+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032394491+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032394491+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172130361+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172130361+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172176812+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.172192393+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F 
time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588865252+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588865252+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588918974+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.588937064+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787139864+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 
2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.661614527+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.662073730+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.662073730+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.101329484+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.101422527+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.763039721+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.763039721+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.975679963+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976215999+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976273680+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976273680+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ja6zk 2025-08-13T20:17:44.976289581+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976289581+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.993212864+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:44.993404710+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:45.068746621+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.251885971+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.251934492+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.251944773+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.252406326+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf 2025-08-13T20:17:45.252493518+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.252493518+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.304897995+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305139592+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305181903+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305221494+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=cxXAg 2025-08-13T20:17:45.305253065+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.305284176+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.449427192+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.449815113+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.449882175+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.450066551+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.450107972+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg 2025-08-13T20:17:45.450257196+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.450310008+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460243081+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XeZZz 2025-08-13T20:17:45.460418126+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.460418126+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.495863248+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496298881+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496298881+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496430045+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:45.496430045+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz 2025-08-13T20:17:58.140876162+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.140938434+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info 
msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.187421121+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy 2025-08-13T20:18:00.092684011+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.092684011+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.292591780+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.292647911+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.552924954+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.553143560+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F 
time="2025-08-13T20:18:00Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967893429+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.967893429+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2 2025-08-13T20:18:02.999135901+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 
2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS 2025-08-13T20:18:15.062646720+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.062646720+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070203696+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070523866+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070523866+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dO5Zc 2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101226102+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101420618+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101420618+00:00 stderr F time="2025-08-13T20:18:15Z" level=info 
msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:15.101629364+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc 2025-08-13T20:18:24.802453013+00:00 stderr F time="2025-08-13T20:18:24Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:24.802453013+00:00 stderr F time="2025-08-13T20:18:24Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371243652+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info 
msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx 2025-08-13T20:18:33.000200576+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=Rc4s8 2025-08-13T20:18:33.000200576+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.005656402+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8 2025-08-13T20:18:45.102857093+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.102857093+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.106837916+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127030363+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127136296+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127136296+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:45.127194348+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m 2025-08-13T20:18:50.911552532+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.911657815+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945205163+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945329567+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945342337+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:50.945415929+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC 2025-08-13T20:18:51.034061451+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.034061451+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038585820+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087387074+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087558829+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087558829+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.203088938+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.203088938+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.205561498+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.205561498+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209178852+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209348007+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209386928+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209424589+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=x6WRy 2025-08-13T20:18:51.209455260+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209484670+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.224655474+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.225014904+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.225014904+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.229637206+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.229702278+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:56.030993378+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.030993378+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.048494758+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534042774+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534369633+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534431465+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.536889185+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.537126812+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.537176154+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.573850301+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574244392+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574735616+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574735616+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=sZqtq 2025-08-13T20:18:56.574752107+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574752107+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669666167+00:00 stderr F time="2025-08-13T20:18:56Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.670265584+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.670265584+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740431948+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.844820709+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845126518+00:00 stderr F 
time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845241561+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845402136+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845440837+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:59.231328542+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.231328542+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.265665972+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.265926380+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266036373+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266100075+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Cle0k 2025-08-13T20:18:59.266134555+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266166386+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.285993252+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286239890+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286239890+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286408614+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237103764+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237207807+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237207807+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237387262+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:03.108079571+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.108079571+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111257432+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111529539+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111672513+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111731485+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bi/DE 2025-08-13T20:19:03.111766506+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111874869+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.356746543+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="requeuing registry server sync 
based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:15.128864790+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.128864790+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.137729953+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.202372679+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:31.079602079+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.079602079+00:00 stderr F time="2025-08-13T20:19:31Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112271453+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=a07yu 2025-08-13T20:19:31.112512779+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112512779+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130389610+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130656218+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130656218+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130855944+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.205528878+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.205528878+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212642071+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212748464+00:00 stderr F time="2025-08-13T20:19:31Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212764055+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212867548+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LvVPs 2025-08-13T20:19:31.212867548+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.213025682+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.120932469+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.120932469+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.131326806+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.131326806+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace 
id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.828544909+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.828666523+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.843087175+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:35.843087175+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+GcWt 
2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 
2025-08-13T20:19:38.699018575+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702572287+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718816861+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 
2025-08-13T20:19:38.718913884+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718913884+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.719110969+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.719110969+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:45.203583479+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.203583479+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.212295488+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231634581+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231717194+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231717194+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.232405603+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.232405603+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:22:23.582698448+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.582698448+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.584282953+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.584467578+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.591555321+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.591886760+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 
2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=nw+2Z 2025-08-13T20:22:23.593865707+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593865707+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608149195+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608293229+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608306630+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608468524+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608482785+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608669270+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608669270+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608716671+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608923097+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.608941598+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.609014160+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.609014160+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.609103472+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.609138513+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612150039+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612229572+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612336435+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612375166+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612413887+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Yo163 2025-08-13T20:22:23.612444628+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612503430+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F 
time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.789613431+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.789740455+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.789757655+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.790150497+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.790150497+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.987111926+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987264800+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987282611+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987452915+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987452915+00:00 stderr F time="2025-08-13T20:22:23Z" level=info 
msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:23:25.274452558+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=WKAf+ namespace=openshift-network-diagnostics 2025-08-13T20:23:25.274452558+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=WKAf+ namespace=openshift-network-diagnostics 2025-08-13T20:23:25.274585281+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=26pRh namespace=openshift-infra 2025-08-13T20:23:25.274585281+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=26pRh namespace=openshift-infra 2025-08-13T20:23:25.278488203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=WKAf+ namespace=openshift-network-diagnostics 2025-08-13T20:23:25.278587946+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=v44d/ namespace=openshift-operator-lifecycle-manager 2025-08-13T20:23:25.278587946+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=v44d/ namespace=openshift-operator-lifecycle-manager 2025-08-13T20:23:25.278848883+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=26pRh namespace=openshift-infra 2025-08-13T20:23:25.278907275+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=3ImAl namespace=openshift-ovn-kubernetes 2025-08-13T20:23:25.278907275+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=3ImAl namespace=openshift-ovn-kubernetes 2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=v44d/ namespace=openshift-operator-lifecycle-manager 2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=myf9p namespace=openshift-apiserver 2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=myf9p namespace=openshift-apiserver 2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=3ImAl namespace=openshift-ovn-kubernetes 2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=aQ7Eq namespace=openshift-console-operator 2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=aQ7Eq namespace=openshift-console-operator 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=myf9p namespace=openshift-apiserver 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=UuCS+ namespace=openshift-dns 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=UuCS+ namespace=openshift-dns 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace 
openshift-console-operator" id=aQ7Eq namespace=openshift-console-operator 2025-08-13T20:23:25.285220695+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=SHiBK namespace=openshift-ingress-operator 2025-08-13T20:23:25.285326038+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=SHiBK namespace=openshift-ingress-operator 2025-08-13T20:23:25.287261794+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=UuCS+ namespace=openshift-dns 2025-08-13T20:23:25.287322135+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=1wSOa namespace=openshift 2025-08-13T20:23:25.287322135+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=1wSOa namespace=openshift 2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=SHiBK namespace=openshift-ingress-operator 2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=7OGDf namespace=openshift-cloud-platform-infra 2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=7OGDf namespace=openshift-cloud-platform-infra 2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift" id=1wSOa namespace=openshift 2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=zioiR namespace=openshift-ingress 2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=zioiR namespace=openshift-ingress 2025-08-13T20:23:25.290204408+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=7OGDf namespace=openshift-cloud-platform-infra 2025-08-13T20:23:25.290217238+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=/X4+y namespace=openshift-kube-apiserver-operator 2025-08-13T20:23:25.290226318+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=/X4+y namespace=openshift-kube-apiserver-operator 2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=zioiR namespace=openshift-ingress 2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=27eAx namespace=openshift-monitoring 2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=27eAx namespace=openshift-monitoring 2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=/X4+y namespace=openshift-kube-apiserver-operator 2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=9qgNb namespace=openshift-controller-manager-operator 2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=9qgNb namespace=openshift-controller-manager-operator 2025-08-13T20:23:25.880302963+00:00 
stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=27eAx namespace=openshift-monitoring 2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=8Gm9g namespace=openshift-machine-api 2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=8Gm9g namespace=openshift-machine-api 2025-08-13T20:23:26.079347861+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=9qgNb namespace=openshift-controller-manager-operator 2025-08-13T20:23:26.079520806+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=+A9qY namespace=openshift-service-ca-operator 2025-08-13T20:23:26.079556917+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=+A9qY namespace=openshift-service-ca-operator 2025-08-13T20:23:26.279465021+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=8Gm9g namespace=openshift-machine-api 2025-08-13T20:23:26.279616845+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=52xeD namespace=openshift-multus 2025-08-13T20:23:26.279665146+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=52xeD namespace=openshift-multus 2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=+A9qY namespace=openshift-service-ca-operator 2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=rigbO namespace=openshift-ovirt-infra 2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=rigbO namespace=openshift-ovirt-infra 2025-08-13T20:23:26.678405412+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=52xeD namespace=openshift-multus 2025-08-13T20:23:26.678405412+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=yPz0V namespace=kube-node-lease 2025-08-13T20:23:26.678462454+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=yPz0V namespace=kube-node-lease 2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=rigbO namespace=openshift-ovirt-infra 2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=qg5x0 namespace=openshift-authentication 2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=qg5x0 namespace=openshift-authentication 2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=yPz0V namespace=kube-node-lease 2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=Na6cW namespace=openshift-ingress-canary 2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=Na6cW 
namespace=openshift-ingress-canary 2025-08-13T20:23:27.278844792+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=qg5x0 namespace=openshift-authentication 2025-08-13T20:23:27.278904683+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=ofcr8 namespace=openshift-kube-scheduler 2025-08-13T20:23:27.278904683+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=ofcr8 namespace=openshift-kube-scheduler 2025-08-13T20:23:27.480060842+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Na6cW namespace=openshift-ingress-canary 2025-08-13T20:23:27.480060842+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=pbqJ6 namespace=openshift-nutanix-infra 2025-08-13T20:23:27.480105244+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=pbqJ6 namespace=openshift-nutanix-infra 2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=ofcr8 namespace=openshift-kube-scheduler 2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=Gox56 namespace=openshift-service-ca 2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=Gox56 namespace=openshift-service-ca 2025-08-13T20:23:27.879351374+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=pbqJ6 namespace=openshift-nutanix-infra 2025-08-13T20:23:27.879351374+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=75adD namespace=openshift-openstack-infra 2025-08-13T20:23:27.879391395+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=75adD namespace=openshift-openstack-infra 2025-08-13T20:23:28.079429683+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=Gox56 namespace=openshift-service-ca 2025-08-13T20:23:28.079655029+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=lEKVF namespace=openshift-route-controller-manager 2025-08-13T20:23:28.079693940+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=lEKVF namespace=openshift-route-controller-manager 2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=75adD namespace=openshift-openstack-infra 2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=95zqT namespace=openshift-apiserver-operator 2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=95zqT namespace=openshift-apiserver-operator 2025-08-13T20:23:28.478929790+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=lEKVF namespace=openshift-route-controller-manager 2025-08-13T20:23:28.478967151+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=e20x5 namespace=openshift-config 
2025-08-13T20:23:28.478967151+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=e20x5 namespace=openshift-config 2025-08-13T20:23:28.679541924+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=95zqT namespace=openshift-apiserver-operator 2025-08-13T20:23:28.679586375+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=w/k7b namespace=openshift-dns-operator 2025-08-13T20:23:28.679586375+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=w/k7b namespace=openshift-dns-operator 2025-08-13T20:23:28.879235711+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-config" id=e20x5 namespace=openshift-config 2025-08-13T20:23:28.879235711+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=4F67A namespace=openshift-console 2025-08-13T20:23:28.879292752+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=4F67A namespace=openshift-console 2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=w/k7b namespace=openshift-dns-operator 2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=VxLwh namespace=openshift-console-user-settings 2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=VxLwh namespace=openshift-console-user-settings 2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-console" id=4F67A namespace=openshift-console 2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=ncGTk namespace=openshift-host-network 2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=ncGTk namespace=openshift-host-network 2025-08-13T20:23:29.479273590+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=VxLwh namespace=openshift-console-user-settings 2025-08-13T20:23:29.480161175+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=kUVi4 namespace=openshift-kni-infra 2025-08-13T20:23:29.480161175+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=kUVi4 namespace=openshift-kni-infra 2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=ncGTk namespace=openshift-host-network 2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=AQ+aa namespace=openshift-kube-controller-manager 2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=AQ+aa namespace=openshift-kube-controller-manager 2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=kUVi4 namespace=openshift-kni-infra 2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" 
level=info msg="resolving sources" id=lZ2JR namespace=kube-public 2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=lZ2JR namespace=kube-public 2025-08-13T20:23:30.078891927+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=AQ+aa namespace=openshift-kube-controller-manager 2025-08-13T20:23:30.078938818+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=0sdrF namespace=openshift-cluster-machine-approver 2025-08-13T20:23:30.078938818+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=0sdrF namespace=openshift-cluster-machine-approver 2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace kube-public" id=lZ2JR namespace=kube-public 2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=jWYDM namespace=openshift-cluster-version 2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=jWYDM namespace=openshift-cluster-version 2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=0sdrF namespace=openshift-cluster-machine-approver 2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=ODqCf namespace=openshift-vsphere-infra 2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=ODqCf namespace=openshift-vsphere-infra 2025-08-13T20:23:30.685425780+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=jWYDM namespace=openshift-cluster-version 2025-08-13T20:23:30.685467922+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=YADVZ namespace=default 2025-08-13T20:23:30.685467922+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=YADVZ namespace=default 2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=ODqCf namespace=openshift-vsphere-infra 2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=0dv1p namespace=openshift-etcd 2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=0dv1p namespace=openshift-etcd 2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace default" id=YADVZ namespace=default 2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=B89A7 namespace=openshift-operators 2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=B89A7 namespace=openshift-operators 2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=0dv1p namespace=openshift-etcd 2025-08-13T20:23:31.279559751+00:00 stderr F 
time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=h8mP3 namespace=openshift-kube-controller-manager-operator 2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=h8mP3 namespace=openshift-kube-controller-manager-operator 2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=B89A7 namespace=openshift-operators 2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=LBeOL namespace=openshift-machine-config-operator 2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=LBeOL namespace=openshift-machine-config-operator 2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=h8mP3 namespace=openshift-kube-controller-manager-operator 2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=h1KWz namespace=openshift-network-node-identity 2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=h1KWz namespace=openshift-network-node-identity 2025-08-13T20:23:31.879869898+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=LBeOL namespace=openshift-machine-config-operator 2025-08-13T20:23:31.879929879+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=34qy7 namespace=openshift-user-workload-monitoring 2025-08-13T20:23:31.879929879+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=34qy7 namespace=openshift-user-workload-monitoring 2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=h1KWz namespace=openshift-network-node-identity 2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=34qy7 namespace=openshift-user-workload-monitoring 2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=ZqsjM namespace=openshift-network-operator 2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=ZqsjM namespace=openshift-network-operator 2025-08-13T20:23:32.477210620+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:23:32.477394285+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=489c1 namespace=openshift-oauth-apiserver 2025-08-13T20:23:32.477449906+00:00 
stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=489c1 namespace=openshift-oauth-apiserver 2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ZqsjM namespace=openshift-network-operator 2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=lrgTb namespace=openshift-node 2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=lrgTb namespace=openshift-node 2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=489c1 namespace=openshift-oauth-apiserver 2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=vq/Dx namespace=openshift-controller-manager 2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=vq/Dx namespace=openshift-controller-manager 2025-08-13T20:23:33.079031800+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-node" id=lrgTb namespace=openshift-node 2025-08-13T20:23:33.079093591+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=4aVqI namespace=openshift-kube-storage-version-migrator 2025-08-13T20:23:33.079093591+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=4aVqI namespace=openshift-kube-storage-version-migrator 2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=vq/Dx namespace=openshift-controller-manager 2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=X2XhN namespace=openshift-marketplace 2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=X2XhN namespace=openshift-marketplace 2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=4aVqI namespace=openshift-kube-storage-version-migrator 2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=L94Te namespace=openshift-config-operator 2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=L94Te namespace=openshift-config-operator 2025-08-13T20:23:33.680558511+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=X2XhN namespace=openshift-marketplace 2025-08-13T20:23:33.680625313+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=rcLeM namespace=openshift-kube-scheduler-operator 2025-08-13T20:23:33.680625313+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=rcLeM namespace=openshift-kube-scheduler-operator 2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=L94Te namespace=openshift-config-operator 
2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=ByakK namespace=openshift-image-registry 2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=ByakK namespace=openshift-image-registry 2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=rcLeM namespace=openshift-kube-scheduler-operator 2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=tqBUa namespace=hostpath-provisioner 2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=tqBUa namespace=hostpath-provisioner 2025-08-13T20:23:34.279279401+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=ByakK namespace=openshift-image-registry 2025-08-13T20:23:34.279338033+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=ttsJq namespace=openshift-authentication-operator 2025-08-13T20:23:34.279338033+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=ttsJq namespace=openshift-authentication-operator 2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=tqBUa namespace=hostpath-provisioner 2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=A3r/Z namespace=openshift-etcd-operator 2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=A3r/Z namespace=openshift-etcd-operator 2025-08-13T20:23:34.679611322+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=ttsJq namespace=openshift-authentication-operator 2025-08-13T20:23:34.679729226+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=8fxIB namespace=openshift-cluster-storage-operator 2025-08-13T20:23:34.679864840+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=8fxIB namespace=openshift-cluster-storage-operator 2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=A3r/Z namespace=openshift-etcd-operator 2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=w6Ebf namespace=openshift-config-managed 2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=w6Ebf namespace=openshift-config-managed 2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=8fxIB namespace=openshift-cluster-storage-operator 2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=27YHB namespace=openshift-kube-apiserver 2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=27YHB namespace=openshift-kube-apiserver 
2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=w6Ebf namespace=openshift-config-managed 2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.479221045+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=27YHB namespace=openshift-kube-apiserver 2025-08-13T20:23:35.479338269+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:35.479374890+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:35.679251252+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace kube-system" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.679371545+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:23:35.679412346+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:23:35.879907187+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:36.080542461+00:00 stderr F time="2025-08-13T20:23:36Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:26:10.557713661+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.557975158+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.558019659+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.558225385+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.568387246+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.568607772+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569072225+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569218189+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569314292+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=jjdyi 2025-08-13T20:26:10.569368614+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569411965+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569503438+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569503438+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569521388+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=82cCN 2025-08-13T20:26:10.569521388+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569549739+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.586239956+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586475883+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586475883+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586656668+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586656668+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586729980+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.586942976+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.587275366+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587413600+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587457221+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587657317+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587725259+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587913264+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.588059168+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info 
msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.591514757+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591539028+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591551748+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591563829+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=N73fY 2025-08-13T20:26:10.591573769+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591573769+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.764340349+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764581136+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764621647+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764746011+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764856804+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.963193385+00:00 stderr F time="2025-08-13T20:26:10Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963378190+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963378190+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963556586+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963570866+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:51.378101812+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.378101812+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.378541324+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.378604646+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" 
level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.384194766+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384194766+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384215967+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384225547+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fbx2n 2025-08-13T20:26:51.384234957+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384234957+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407581495+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407823032+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407823032+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408155261+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408155261+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408313286+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.408313286+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.412890567+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XVosD 2025-08-13T20:26:51.413051251+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413051251+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.415528632+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F 
time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=mMHet 2025-08-13T20:26:51.415630715+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415640495+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.582409004+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582568818+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582568818+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582686762+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582739173+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.787308723+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787364754+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787364754+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787450447+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787450447+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:27:02.910862330+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=kfcha namespace=openshift-monitoring 2025-08-13T20:27:02.910862330+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=kfcha namespace=openshift-monitoring 2025-08-13T20:27:02.910954953+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=L/JIx namespace=openshift-operator-lifecycle-manager 2025-08-13T20:27:02.910954953+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=L/JIx namespace=openshift-operator-lifecycle-manager 2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=L/JIx namespace=openshift-operator-lifecycle-manager 2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=OATTi namespace=openshift-operators 2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=OATTi namespace=openshift-operators 2025-08-13T20:27:02.919337803+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=kfcha namespace=openshift-monitoring 2025-08-13T20:27:02.921322119+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=OATTi namespace=openshift-operators 2025-08-13T20:27:05.587701902+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.587701902+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.588240057+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.588240057+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592149169+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592277173+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.592456148+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592527050+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592573421+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Dg+Zu 2025-08-13T20:27:05.592618812+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592650153+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592747646+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.592747646+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.592764706+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=u13ew 2025-08-13T20:27:05.592825458+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.592825458+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.604485182+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.604519133+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.604519133+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="catalog 
update required at 2025-08-13 20:27:05.604565974 +0000 UTC m=+1681.255954091" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.608475986+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.609234727+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.609306930+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.609376271+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=7QFOv 2025-08-13T20:27:05.609411883+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.609442743+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 
2025-08-13T20:27:05.630010321+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.630208127+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.630247778+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.630540807+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="catalog update required at 2025-08-13 20:27:05.63031641 +0000 UTC m=+1681.281704527" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.631330109+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.648020906+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:05.648020906+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:05.817190904+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv 2025-08-13T20:27:05.845462672+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:05.845653888+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:05.992745104+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:05.992979530+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:05.993040852+00:00 stderr F 
time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=uIXIK 2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:05.993065643+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:06.197191580+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.197351684+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.197351684+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=X7mdk 2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.793177852+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:06.793548082+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:06.793593863+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:06.793714697+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 2025-08-13T20:27:06.793750328+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK 
2025-08-13T20:27:06.793931393+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:06.794043356+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk 2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:07.192680665+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.192848760+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e6I4b 2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.393473886+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wawcU 2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:07.992046822+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.992230498+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.992230498+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.992306210+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b 2025-08-13T20:27:07.993027990+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:07.993027990+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU 2025-08-13T20:27:08.199428362+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:08.199428362+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Uui59 2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:08.593088429+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:08.593317615+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:08.593372507+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:08.593428788+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=QgpZY 2025-08-13T20:27:08.593471799+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:08.593515401+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59 2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.392242049+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:09.393138455+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:09.393260138+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:09.393695830+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY 2025-08-13T20:27:09.394145433+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:09.394391530+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5iLo6 2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:09.798720253+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:09.799315680+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:09.799440413+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:09.799544036+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1ECed 2025-08-13T20:27:09.799696700+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:09.799976328+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:10.394728315+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:10.395145047+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:10.395195749+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:10.395315932+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6 2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed 2025-08-13T20:27:15.986079095+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.986217479+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.994691361+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995077322+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995133143+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="of 1 pods matching label selector, 1 have 
the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995188995+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=usioD 2025-08-13T20:27:15.995220696+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:15.995250277+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.013114138+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:18.640321989+00:00 stderr F time="2025-08-13T20:27:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:18.640387931+00:00 stderr F time="2025-08-13T20:27:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.021962132+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022034294+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022034294+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.108886688+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.109065703+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" 
level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:20.031982022+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.031982022+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038391595+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038626582+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038665963+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038708324+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=o+5uM 2025-08-13T20:27:20.038754476+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038842488+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 
2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:26.208424501+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.208424501+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218349525+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218567871+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236246947+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236426812+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236426812+00:00 stderr F 
time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236544435+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.341705882+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.341705882+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348538678+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348749164+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.362693263+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363116775+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363181356+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363332101+00:00 
stderr F time="2025-08-13T20:27:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:27.200910640+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.200910640+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209475655+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ppEqc 2025-08-13T20:27:27.209613099+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209613099+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243302082+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.243302082+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=YUGao 2025-08-13T20:27:27.254441610+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254681467+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254681467+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254701548+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=YUGao 2025-08-13T20:27:27.254713588+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254725039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.261695998+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.261695998+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.263760517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.263760517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274579836+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274873295+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274891105+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274907706+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=lMTM+ 2025-08-13T20:27:27.274907706+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274960297+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274960297+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.275287447+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.275304057+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435446946+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435533299+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435661092+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.435724374+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.612335504+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.612424517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.612424517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.813162157+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Pgq+4 2025-08-13T20:27:27.813255879+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813255879+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.018274392+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:28.018274392+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:28.414124611+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.414321676+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.414321676+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.614579873+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.614579873+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:29.587962126+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.587962126+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594576925+00:00 stderr F 
time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594716109+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594729579+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=VPS8m 2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.616736668+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.616736668+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.630929544+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631161171+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631161171+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631176021+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=xYlza 2025-08-13T20:27:29.631176021+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631186272+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813262168+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813532566+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813624288+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813756632+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:30.145874999+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.145874999+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.190230007+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.190230007+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.413088979+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414129448+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.612113699+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612253553+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612253553+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612269604+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612279504+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612460769+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:30.612460769+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014216317+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014375462+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014390642+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=GOJCo 2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.212750144+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.212984231+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.212984231+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.213207357+00:00 stderr F time="2025-08-13T20:27:31Z" 
level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.213207357+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.213262039+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.213262039+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612159995+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612246318+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612258898+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612268088+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=b/75+ 2025-08-13T20:27:31.612268088+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612277558+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.812117543+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812200225+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812200225+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812390290+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812390290+00:00 stderr F 
time="2025-08-13T20:27:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:32.212036668+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212237254+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212237254+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212348927+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212348927+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:35.632209844+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.632209844+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.635748505+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636015453+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636015453+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636046523+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Wr/Yi 2025-08-13T20:27:35.636060424+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636060424+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=Wr/Yi 2025-08-13T20:27:35.646249685+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646408050+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646408050+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646563154+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646563154+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.818397507+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.818397507+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822343920+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.833224791+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.833372116+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.833372116+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.834251871+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.834251871+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:28:43.254469814+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.254538446+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.259174940+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260038304+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260137017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=lmNDI 2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279125173+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279191525+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279204205+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279445772+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="catalog update required at 2025-08-13 20:28:43.279262877 +0000 UTC m=+1778.930651114" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.310497095+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.333983480+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.333983480+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.376674797+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377066058+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377131780+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377261594+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377602304+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.377688916+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385157771+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385536162+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385599744+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385647325+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=orR+p 2025-08-13T20:28:43.385686206+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385724237+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407342689+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407498503+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407516784+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407669618+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407986017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.407986017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.460921429+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.460984801+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461026952+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461044412+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=VzCkx 2025-08-13T20:28:43.461056563+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461056563+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.863512192+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864122329+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864190921+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864355826+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:44.065517389+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.065517389+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069128302+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069128302+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460311807+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460738529+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460925065+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.461180082+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.686685004+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.686893840+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="synchronizing registry 
server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692123461+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692302356+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692302356+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692316516+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=j9Ilt 2025-08-13T20:28:44.692326567+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692336137+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.059853951+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060085458+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060085458+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060159620+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.700760064+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.701313660+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.706951412+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707230380+00:00 stderr F 
time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707278661+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707317832+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1eV/T 2025-08-13T20:28:45.707350223+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707380704+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.720619575+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721039397+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721089478+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721202291+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:58.823046762+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.823046762+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828352615+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828554991+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828554991+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="of 
1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853182689+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853304382+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853304382+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853536609+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:59.821934486+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.821934486+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835347291+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835511536+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835511536+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835528157+00:00 stderr F time="2025-08-13T20:28:59Z" level=info 
msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Cs38y 2025-08-13T20:28:59.835528157+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835539857+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:29:03.819867187+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.819867187+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828040382+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=XHcJM 2025-08-13T20:29:03.828116945+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 
2025-08-13T20:29:03.828116945+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846457512+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846674518+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846674518+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846767951+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.847152722+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.847152722+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850550909+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869118033+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869326689+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869343280+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869458703+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.952824390+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.954735224+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958091961+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958263086+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958312147+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958349778+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=31e1z 2025-08-13T20:29:03.958381039+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958411290+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.990631976+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.990631976+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.991181802+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:03.991248434+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024139789+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024275293+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024275293+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024318385+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=MbB2T 2025-08-13T20:29:04.024340705+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024340705+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.422918813+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.423215961+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.423215961+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.625743933+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.625743933+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:06.308482025+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.308965369+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.313576681+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.313885620+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.313946132+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.314136067+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=QlN5u 2025-08-13T20:29:06.314181639+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.314245120+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332115274+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332262018+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332262018+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.332358481+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u 2025-08-13T20:29:06.895144589+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.895237722+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899833244+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899922396+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899922396+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899935767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=AKAco 2025-08-13T20:29:06.899945767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.899945767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928347703+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928548919+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928548919+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:06.928665273+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco 2025-08-13T20:29:07.468727926+00:00 stderr F 
time="2025-08-13T20:29:07Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.468904331+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.489868624+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490161342+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490161342+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490364788+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490364788+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ 2025-08-13T20:29:07.490537513+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 
2025-08-13T20:29:07.490537513+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498314657+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=/DcUI 2025-08-13T20:29:07.498476511+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.498476511+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524169690+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524287303+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524287303+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524534580+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:07.524534580+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI 2025-08-13T20:29:13.310707337+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.310707337+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315120664+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Qy4PJ 2025-08-13T20:29:13.315419963+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.315419963+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.331589987+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332102452+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332164764+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332345859+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:13.332396120+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ 2025-08-13T20:29:25.860089594+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=IDLE" 2025-08-13T20:29:25.861642709+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=IDLE" 2025-08-13T20:29:25.861642709+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators 
state.State=CONNECTING" 2025-08-13T20:29:25.861876145+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.861904356+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.862066681+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.862136663+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.862667748+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-08-13T20:29:25.871682017+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872280065+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872413568+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872543412+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ODM5w 2025-08-13T20:29:25.872703947+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.872856731+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.873487219+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873558501+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873571482+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 
2025-08-13T20:29:25.873583572+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bHhfJ 2025-08-13T20:29:25.873661704+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.873661704+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.886810192+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2025-08-13T20:29:25.889217131+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-08-13T20:29:25.889388116+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="resolving sources" id=lbixZ namespace=openshift-marketplace 2025-08-13T20:29:25.889428317+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checking if subscriptions need update" id=lbixZ namespace=openshift-marketplace 2025-08-13T20:29:25.892328661+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.892516766+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.892576638+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.892613499+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.892702362+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=lbixZ namespace=openshift-marketplace 2025-08-13T20:29:25.892873136+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.892942178+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.893078802+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensured registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.893124664+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ 2025-08-13T20:29:25.893542786+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.893594217+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w 2025-08-13T20:29:25.908448774+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.908448774+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.909181525+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:25.920732907+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929641733+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.067388593+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067605369+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067605369+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067731293+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.067731293+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9 2025-08-13T20:29:26.074127517+00:00 stderr F time="2025-08-13T20:29:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"certified-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Ag3x9 2025-08-13T20:29:26.080554121+00:00 stderr F E0813 20:29:26.080435 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "certified-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:29:26.080590313+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.080590313+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG 2025-08-13T20:29:26.291902467+00:00 stderr F time="2025-08-13T20:29:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=9p/HG 2025-08-13T20:29:26.292289898+00:00 stderr F E0813 20:29:26.292263 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:29:26.292381251+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.292439932+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.471669454+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472093807+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472144748+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472197850+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IVAIN 2025-08-13T20:29:26.472254471+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.472286312+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:26.669128741+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669350997+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669390618+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669429659+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=qvytb 2025-08-13T20:29:26.669463340+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:26.669494351+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.130067450+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=IDLE" 2025-08-13T20:29:27.130067450+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="resolving sources" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checking if subscriptions need update" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.141432467+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.267853131+00:00 stderr F 
time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268166730+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268208791+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268346585+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268390547+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.286310242+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.289581716+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.343343171+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=IDLE" 2025-08-13T20:29:27.343343171+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T20:29:27.351528807+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-08-13T20:29:27.351769783+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="resolving sources" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.351913928+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checking if subscriptions need update" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.355880932+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.467052507+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467199872+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467199872+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467314815+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467314815+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.485198729+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.485312222+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.667667774+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.667957622+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668053655+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668101757+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jYXYT 2025-08-13T20:29:27.668139228+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668233650+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.490677703+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.490677703+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensured registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.690657910+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:28.690657910+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.067641717+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info 
msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.673468052+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="catalog update required at 2025-08-13 20:29:29.673964586 +0000 UTC m=+1825.325352703" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.902366472+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:29.902366472+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.086921327+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:30.118863966+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.118863966+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075965238+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075965238+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.076230606+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.076230606+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.277869532+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278112969+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278154690+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278281154+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278316915+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278427688+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.278528761+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="synchronizing registry 
server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.670110057+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.268088386+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268333793+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268375374+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268557919+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.268596570+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668717822+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace 
id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074224299+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074672562+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074721983+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074868407+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ 2025-08-13T20:29:33.074958670+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.075066323+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274758184+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
id=eCngn 2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.673948288+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.674263387+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.674334649+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn 2025-08-13T20:29:33.674479074+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn 2025-08-13T20:30:00.087849848+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.087849848+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093647334+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093815869+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093815869+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093848830+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qnVZ4 2025-08-13T20:30:00.093848830+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:00.093859011+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.153067387+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 
2025-08-13T20:30:01.154602241+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.154602241+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:01.154681723+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4 2025-08-13T20:30:07.395009878+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.395009878+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.403667227+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404198012+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404251244+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404293905+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dPB/P 2025-08-13T20:30:07.404431929+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.404467380+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429060657+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429265763+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429359416+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:07.429459739+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P 2025-08-13T20:30:08.381187566+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.381409602+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.391396369+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.391868703+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.391966646+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.392115790+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=z9jxi 2025-08-13T20:30:08.392218983+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.392318836+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.411661382+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.412101244+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.412224308+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:08.412458885+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="requeueing registry server for catalog update check: 
update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi 2025-08-13T20:30:30.678726219+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.680324975+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696571992+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696663364+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696663364+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696677685+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=0ogZg 2025-08-13T20:30:30.696728886+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.696728886+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg 2025-08-13T20:30:30.824839489+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.825205609+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing 
registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.833672703+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.847428678+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.847579832+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.847579832+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.862139941+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.862139941+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73 2025-08-13T20:30:30.862649906+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.863433168+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.880538440+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.880734075+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.880734075+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.896538670+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:30.896538670+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog 2025-08-13T20:30:31.156699008+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.156767480+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.684967394+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:31.684967394+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773 2025-08-13T20:30:32.978336031+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:32.978336031+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:32.993658712+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:32.993713763+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW 
2025-08-13T20:30:32.993724044+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:32.993736414+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Q/ZaW 2025-08-13T20:30:32.993746194+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:32.993746194+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:33.008257061+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:33.008360794+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:33.008374535+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:33.008452197+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW 2025-08-13T20:30:33.566198690+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.566198690+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.570876814+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.570933446+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.570945806+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.570955057+00:00 stderr F 
time="2025-08-13T20:30:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Lr1gT 2025-08-13T20:30:33.570964237+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.570964237+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT 2025-08-13T20:30:34.179811709+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.179811709+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Ol4C3 2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.190097114+00:00 stderr F 
time="2025-08-13T20:30:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201019318+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201146142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201146142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201285946+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201285946+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201343868+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.201343868+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204170099+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204192700+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204202330+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204225051+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=IDV2+ 2025-08-13T20:30:34.204225051+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204241781+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217487252+00:00 stderr F time="2025-08-13T20:30:34Z" 
level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217738399+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217738399+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217754369+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217754369+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:31:03.010201680+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.010201680+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019081985+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=g3ts9 2025-08-13T20:31:03.019250450+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019250450+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034408246+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034641553+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="evaluating 
current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034641553+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034722565+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034722565+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:35:01.852202125+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.853252566+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.853374639+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.853462172+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.862532792+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.863551292+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.863916342+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.863982874+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.864066337+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=rC0bJ 2025-08-13T20:35:01.864156769+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.864251112+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.867295509+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.867501785+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.867715691+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FHAUm 2025-08-13T20:35:01.867848785+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.867940438+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.883066263+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883214097+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883214097+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883411403+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883411403+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883555537+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.883908367+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.884227916+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.884511274+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.884619467+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.884982768+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.885125652+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.885442081+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.885664577+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.887966714+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888236191+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888306573+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888385266+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=HxuCG 2025-08-13T20:35:01.888419297+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888462108+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.889839087+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890179037+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890303701+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890431634+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s7WmR 2025-08-13T20:35:01.890742873+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890963630+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058641240+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058641240+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.257482765+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.257756573+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.258041451+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.258291279+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.258360451+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:36:53.388034701+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=VFOrE namespace=openshift-config 2025-08-13T20:36:53.388034701+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=VFOrE namespace=openshift-config 2025-08-13T20:36:53.388325389+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=JJ+8B namespace=openshift-apiserver-operator 2025-08-13T20:36:53.388325389+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=JJ+8B namespace=openshift-apiserver-operator 2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-config" id=VFOrE namespace=openshift-config 2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=rBOYg namespace=openshift-dns-operator 2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=rBOYg namespace=openshift-dns-operator 2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=JJ+8B namespace=openshift-apiserver-operator 2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=stn8i namespace=openshift-openstack-infra 2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=stn8i namespace=openshift-openstack-infra 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=stn8i namespace=openshift-openstack-infra 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=T2eSg namespace=openshift-route-controller-manager 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=T2eSg namespace=openshift-route-controller-manager 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=rBOYg namespace=openshift-dns-operator 2025-08-13T20:36:53.399407719+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=oxUpj namespace=openshift-kube-controller-manager 2025-08-13T20:36:53.399407719+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=oxUpj namespace=openshift-kube-controller-manager 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=oxUpj namespace=openshift-kube-controller-manager 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=SBluW namespace=kube-public 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" 
id=SBluW namespace=kube-public 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=T2eSg namespace=openshift-route-controller-manager 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=lGYyO namespace=openshift-cluster-machine-approver 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=lGYyO namespace=openshift-cluster-machine-approver 2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace kube-public" id=SBluW namespace=kube-public 2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=rEkak namespace=openshift-cluster-version 2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=rEkak namespace=openshift-cluster-version 2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=lGYyO namespace=openshift-cluster-machine-approver 2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=IE4fJ namespace=openshift-console 2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=IE4fJ namespace=openshift-console 2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-console" id=IE4fJ namespace=openshift-console 2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=wMzTv namespace=openshift-console-user-settings 2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=wMzTv namespace=openshift-console-user-settings 2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=rEkak namespace=openshift-cluster-version 2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=nVLri namespace=openshift-host-network 2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=nVLri namespace=openshift-host-network 2025-08-13T20:36:53.595268516+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=wMzTv namespace=openshift-console-user-settings 2025-08-13T20:36:53.595308527+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=RdSIx namespace=openshift-kni-infra 2025-08-13T20:36:53.595308527+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=RdSIx namespace=openshift-kni-infra 2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=nVLri namespace=openshift-host-network 2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=VjCvs namespace=openshift-vsphere-infra 
2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=VjCvs namespace=openshift-vsphere-infra 2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=RdSIx namespace=openshift-kni-infra 2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=Db5z+ namespace=default 2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=Db5z+ namespace=default 2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=VjCvs namespace=openshift-vsphere-infra 2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace default" id=Db5z+ namespace=default 2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:54.992814608+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.992814608+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:54.992941382+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=ljE/t namespace=openshift-network-node-identity 
2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:55.991897921+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.991949463+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:55.991949463+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.393842649+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:56.393907951+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.393907951+00:00 stderr F time="2025-08-13T20:36:56Z" 
level=info msg="checking if subscriptions need update" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:56.992554081+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-node" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.992604432+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:56.992604432+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:57.191831596+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:57.191893087+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.191893087+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.392847441+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:57.392905693+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.392905693+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" 
level=info msg="resolving sources" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=yhel9 namespace=kube-system 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=yhel9 namespace=kube-system 2025-08-13T20:36:58.191854536+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:58.191919628+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.191919628+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace kube-system" id=yhel9 namespace=kube-system 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.593005012+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.593005012+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.593066134+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:59.191148527+00:00 stderr F 
time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:59.191266030+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.191300851+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.391858323+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:59.392011648+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.392056369+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.593216928+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.593342931+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=sTFxb namespace=openshift-dns 2025-08-13T20:36:59.593383303+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=sTFxb namespace=openshift-dns 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=5vN/7 namespace=openshift-infra 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=5vN/7 namespace=openshift-infra 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=sTFxb namespace=openshift-dns 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=5vN/7 namespace=openshift-infra 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=yhBsE 
namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=rtok6 namespace=openshift 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=rtok6 namespace=openshift 2025-08-13T20:37:00.793269346+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.793269346+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:00.793302827+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:00.993206561+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift" id=rtok6 namespace=openshift 2025-08-13T20:37:00.993246192+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:00.993246192+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:01.193548277+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:01.193548277+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.193596558+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.392715689+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:01.392715689+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.392851213+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.593042834+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.593042834+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.593103245+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=2SdCa namespace=openshift-controller-manager-operator 
2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=2SdCa namespace=openshift-controller-manager-operator 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=2SdCa namespace=openshift-controller-manager-operator 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.393097139+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:02.393097139+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.393186041+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:03.192958918+00:00 stderr F 
time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:03.993434825+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:04.192877725+00:00 stderr F time="2025-08-13T20:37:04Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:48.140845145+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.140845145+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.149819614+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150244806+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150244806+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 
2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169262244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169486381+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169486381+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169585623+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="catalog update required at 2025-08-13 20:37:48.169524322 +0000 UTC m=+2323.820912439" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.202895864+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.235902465+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.235902465+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250294840+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250387523+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250387523+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250403694+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=V0BeR 2025-08-13T20:37:48.250416114+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250416114+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info 
msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310183017+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn 2025-08-13T20:37:48.310202588+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.310218458+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346458733+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346649158+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346649158+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346667969+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=exBoo 2025-08-13T20:37:48.346667969+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.346683899+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F 
time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo 2025-08-13T20:37:48.901461744+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.901461744+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching 
label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E 2025-08-13T20:37:49.652853206+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.659175078+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.666604712+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667440096+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667749245+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667766916+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OvvLL 2025-08-13T20:37:49.667766916+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.667854928+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="requeueing registry server 
for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL 2025-08-13T20:37:50.659916190+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.659916190+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW 2025-08-13T20:37:54.702537060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.702537060+00:00 stderr 
F time="2025-08-13T20:37:54Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.719991153+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.720081666+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.720094756+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:54.720294862+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB 2025-08-13T20:37:56.510581626+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.510581626+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176652949+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176819174+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176819174+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:57.176939727+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa 2025-08-13T20:37:58.709366357+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.709366357+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716500383+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716531534+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716541624+00:00 stderr F 
time="2025-08-13T20:37:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716561715+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=v6ChI 2025-08-13T20:37:58.716561715+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.716571635+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730188238+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730385453+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730385453+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:37:58.730493686+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI 2025-08-13T20:38:08.712229842+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.712864230+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.723938010+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724248169+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724316031+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724378502+00:00 stderr F 
time="2025-08-13T20:38:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=inZ7f 2025-08-13T20:38:08.724420964+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.724460165+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.734251827+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.734417332+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.734417332+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.753557394+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.753748839+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f 2025-08-13T20:38:08.757069885+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.757069885+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=URCqG 
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.779865862+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.780030897+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.780257533+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.785683050+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:08.785683050+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG 2025-08-13T20:38:09.211962620+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.211962620+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=yfLya 2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=yfLya 2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.244304872+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.244395695+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.244395695+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.244537729+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya 2025-08-13T20:38:09.827574717+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.827574717+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.833049285+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0abZw 2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.866775578+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.866913492+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw 2025-08-13T20:38:09.867066806+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.867117897+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120128792+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120208544+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120208544+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120326007+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120326007+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:18.205309937+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.205309937+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.209619681+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.209919810+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.210004862+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.210114695+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=/UUWu 2025-08-13T20:38:18.210206248+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.210281980+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229032341+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229131134+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229131134+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229330149+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229330149+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:27.984278184+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.984278184+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989278628+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989432803+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002103098+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002278123+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002278123+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="of 1 pods matching label selector, 1 have the 
correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002394966+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002394966+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:36.025954327+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.025954327+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033188556+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033395082+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033395082+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033409762+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mCjzB 2025-08-13T20:38:36.033419982+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033419982+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.046728436+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.046952883+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.046952883+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.047027775+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="catalog update required at 2025-08-13 20:38:36.046963033 +0000 UTC m=+2371.698351150" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.076431152+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.095997827+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.096064658+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101432033+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101737922+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101888866+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101944148+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=w3m+T 2025-08-13T20:38:36.101993429+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.102024590+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119409181+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119694810+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119854244+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 
2025-08-13T20:38:36.119982568+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.144541655+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.144623367+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154501002+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=8BKfc 2025-08-13T20:38:36.154645996+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154645996+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.174859609+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175226120+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175274451+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175394795+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175509788+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.175567700+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.635668854+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.635745037+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.635745037+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.636006694+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.809771714+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.815577961+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835503266+00:00 stderr F 
time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.231557874+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.231700468+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.231749590+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.232066649+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:38.040583679+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.040583679+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:39.052584605+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.052584605+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056694574+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056851708+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056869919+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056869919+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=yPMA5 2025-08-13T20:38:39.056909940+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056909940+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.071094319+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:44.095858713+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.095858713+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101091804+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=mFy1J 2025-08-13T20:38:44.101312300+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101312300+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.111966228+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112211775+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112326378+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112477592+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:45.083822447+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.083822447+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090675814+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090946492+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090995984+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.091035015+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e92MU 2025-08-13T20:38:45.091067476+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.091098456+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="ensuring registry 
server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102374282+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102573637+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102573637+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102639139+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:46.581209017+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.581315311+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590748013+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1XQaS 2025-08-13T20:38:46.590888397+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590888397+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.602829551+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603099199+00:00 stderr F 
time="2025-08-13T20:38:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603099199+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603118799+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:56.600761379+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.600761379+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.606826364+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624625037+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624850884+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624850884+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 
1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.635995575+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.635995575+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.640464834+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.640464834+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649345390+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649572396+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649627118+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649676889+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=y9XcG 2025-08-13T20:38:56.649722221+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649763432+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663227400+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663358624+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663358624+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.668764420+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.668926914+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:57.573863334+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.573863334+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592442830+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592547973+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592598544+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592670976+00:00 stderr F 
time="2025-08-13T20:38:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:58.200207671+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.200207671+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=am3VE 2025-08-13T20:38:58.204892796+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204892796+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222947846+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.225684215+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.225684215+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.229899257+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246645790+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246645790+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246723762+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.246752153+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249256685+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249283486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249283486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249299486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RxOF8 2025-08-13T20:38:58.249311097+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249311097+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.605917108+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606131374+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606131374+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606323190+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606323190+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:39:06.078032810+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.078032810+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086442792+00:00 stderr F time="2025-08-13T20:39:06Z" level=info 
msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=jXQ9t 2025-08-13T20:39:06.086792913+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.087428181+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098113519+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098339335+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098339335+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098408087+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098408087+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:41:21.359712161+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.360188655+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.375548007+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376392892+00:00 stderr F 
time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376455903+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376567807+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JA2JG 2025-08-13T20:41:21.376613018+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376644079+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398404336+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398714105+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398935692+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.399160668+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="catalog update required at 2025-08-13 20:41:21.399119637 +0000 UTC m=+2537.050507884" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.436382651+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.456508562+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.456702157+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.491948363+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492104778+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492104778+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492119768+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492539550+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.492539550+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.524944875+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state 
good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966515775+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966740402+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:21.966754632+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568117450+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568162071+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568176081+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568338156+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568922683+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.568922683+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.764863052+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 
2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.199135652+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.199298947+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365362345+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365735835+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365899840+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365962612+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=aQO+y 2025-08-13T20:41:23.366007463+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.366048504+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765080429+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765328116+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765372407+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765521581+00:00 stderr F 
time="2025-08-13T20:41:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:24.189760582+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.189921307+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195527539+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195663143+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195702974+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195784976+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JTrAU 2025-08-13T20:41:24.195873519+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195923920+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:49.595426319+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 
2025-08-13T20:41:49.597122208+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:50.478897890+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.479014664+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.484344197+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 
2025-08-13T20:41:50.485094339+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485154241+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485255504+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=aMQtg 2025-08-13T20:41:50.485363477+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.485441989+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.508896725+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.509281986+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.509281986+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.509305417+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:51.436771096+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.436771096+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.444349654+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3 2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3 2025-08-13T20:42:12.028532201+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.028532201+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049278609+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049386252+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049386252+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.049439004+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1 2025-08-13T20:42:12.132087226+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.132206940+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.141983392+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.172837501+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.173045127+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r 2025-08-13T20:42:12.178349390+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.178349390+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190517391+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SyITD 2025-08-13T20:42:12.190695276+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.190695276+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.234955482+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:12.234955482+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD 2025-08-13T20:42:14.280544576+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.280575917+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=umXeC 2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.301883032+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.301883032+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.301922413+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.302129409+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC 2025-08-13T20:42:14.700878763+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.700878763+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742541894+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742769351+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742873784+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:14.742977057+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78 2025-08-13T20:42:15.645178848+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.645371383+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656382491+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656490364+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656490364+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656508114+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=HqDrp 2025-08-13T20:42:15.656508114+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.656521225+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.699921786+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700378689+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700378689+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700669237+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.700669237+00:00 stderr F 
time="2025-08-13T20:42:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp 2025-08-13T20:42:15.701007197+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.701007197+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715175146+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fz3M4 2025-08-13T20:42:15.715401792+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.715401792+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738736995+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:15.738940741+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4 2025-08-13T20:42:21.467882987+00:00 
stderr F time="2025-08-13T20:42:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.467882987+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.471869692+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472009786+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472009786+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=nrngo 2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486397541+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486512274+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486512274+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486632568+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:21.486632568+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo 2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.371066825+00:00 stderr F 
time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qC+nW 2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.391838604+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.392165434+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.392253346+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.392324568+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=R4BYM 2025-08-13T20:42:25.392357319+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.392404501+00:00 stderr F time="2025-08-13T20:42:25Z" 
level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.424544857+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.424712972+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.424917948+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.424965669+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.425070212+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.425070212+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.425168135+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.425207176+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM 2025-08-13T20:42:25.425410092+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.426424491+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.426525494+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.426562195+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW 2025-08-13T20:42:25.426633837+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.426673749+00:00 stderr F 
time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.434557386+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.434833034+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.434885505+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.434924907+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=zTeMR 2025-08-13T20:42:25.434955887+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.434991648+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.439813977+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.574757068+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575143659+00:00 
stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575186140+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575378466+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575417057+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.775083633+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.775301020+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.775301020+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.775429033+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="catalog update required at 2025-08-13 20:42:25.775357471 +0000 UTC m=+2601.426745588" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:26.001076909+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:26.026720348+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.026720348+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174703455+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 
2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174897940+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177567398+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177697992+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177697992+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177759503+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.892832629+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.892832629+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.896906027+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.896963068+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="registry state good" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:31.791832838+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.791832838+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.798975694+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799129189+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799129189+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799142949+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jifKp 2025-08-13T20:42:31.799142949+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799152399+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819035763+00:00 stderr F time="2025-08-13T20:42:31Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:39.302358298+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=IDLE" 2025-08-13T20:42:39.303379778+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T20:42:39.303487781+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KaU8M 2025-08-13T20:42:39.303530122+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KaU8M 2025-08-13T20:42:39.305189010+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=KaU8M 2025-08-13T20:42:39.307013572+00:00 stderr F E0813 20:42:39.306924 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.313402777+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OZNsZ 2025-08-13T20:42:39.313424687+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OZNsZ 2025-08-13T20:42:39.314723615+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=OZNsZ 2025-08-13T20:42:39.314861459+00:00 stderr 
F E0813 20:42:39.314735 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.325173216+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JdCT7 2025-08-13T20:42:39.325173216+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JdCT7 2025-08-13T20:42:39.328115041+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JdCT7 2025-08-13T20:42:39.328311126+00:00 stderr F E0813 20:42:39.328173 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.333503286+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:39.333526417+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=60fxG 2025-08-13T20:42:39.333536657+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=60fxG 2025-08-13T20:42:39.334476064+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=60fxG 2025-08-13T20:42:39.334476064+00:00 stderr F E0813 20:42:39.334461 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.349912389+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3IXh3 2025-08-13T20:42:39.349912389+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3IXh3 2025-08-13T20:42:39.351027821+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=3IXh3 2025-08-13T20:42:39.351027821+00:00 stderr F E0813 20:42:39.350986 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.431494521+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XvpGc 2025-08-13T20:42:39.431494521+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XvpGc 2025-08-13T20:42:39.432640164+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=XvpGc 2025-08-13T20:42:39.432693016+00:00 stderr F E0813 20:42:39.432643 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.594484399+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U1hZz 2025-08-13T20:42:39.594539681+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U1hZz 2025-08-13T20:42:39.598815574+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=U1hZz 2025-08-13T20:42:39.599062621+00:00 stderr F E0813 20:42:39.598878 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.725401654+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=IDLE" 2025-08-13T20:42:39.725401654+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=KqOOE 2025-08-13T20:42:39.725451675+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=KqOOE 2025-08-13T20:42:39.725716933+00:00 stderr F time="2025-08-13T20:42:39Z" 
level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-08-13T20:42:39.732380185+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=KqOOE 2025-08-13T20:42:39.732402876+00:00 stderr F E0813 20:42:39.732374 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.732446807+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZKca 2025-08-13T20:42:39.732543970+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZKca 2025-08-13T20:42:39.733369753+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=yZKca 2025-08-13T20:42:39.733389254+00:00 stderr F E0813 20:42:39.733357 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.734430794+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:39.734430794+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZS0Pg 2025-08-13T20:42:39.734451695+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZS0Pg 2025-08-13T20:42:39.735339760+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ZS0Pg 2025-08-13T20:42:39.735339760+00:00 stderr F E0813 20:42:39.735305 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.737593425+00:00 stderr F time="2025-08-13T20:42:39Z" level=info 
msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=22fgy 2025-08-13T20:42:39.737593425+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=22fgy 2025-08-13T20:42:39.738130551+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=22fgy 2025-08-13T20:42:39.738130551+00:00 stderr F E0813 20:42:39.738105 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.778287578+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Nmv/v 2025-08-13T20:42:39.778287578+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Nmv/v 2025-08-13T20:42:39.778901296+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Nmv/v 2025-08-13T20:42:39.778926287+00:00 stderr F E0813 20:42:39.778894 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.859767828+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JD95p 2025-08-13T20:42:39.859767828+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JD95p 2025-08-13T20:42:39.904756515+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JD95p 2025-08-13T20:42:39.904756515+00:00 stderr F E0813 20:42:39.904736 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.920288802+00:00 stderr F 
time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=8rqVT 2025-08-13T20:42:39.920288802+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=8rqVT 2025-08-13T20:42:39.944602543+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=IDLE" 2025-08-13T20:42:39.944602543+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CHY0C 2025-08-13T20:42:39.944656885+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CHY0C 2025-08-13T20:42:39.945185070+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-08-13T20:42:39.953284364+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:40.040316833+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=IDLE" 2025-08-13T20:42:40.040436596+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T20:42:40.044638258+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:40.105641986+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=8rqVT 2025-08-13T20:42:40.105641986+00:00 stderr F E0813 20:42:40.105594 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.105711408+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ONbEu 2025-08-13T20:42:40.105711408+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ONbEu 2025-08-13T20:42:40.305063236+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=CHY0C 
2025-08-13T20:42:40.305100627+00:00 stderr F E0813 20:42:40.305053 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.305173279+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=asLjU 2025-08-13T20:42:40.305173279+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=asLjU 2025-08-13T20:42:40.506030160+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ONbEu 2025-08-13T20:42:40.506030160+00:00 stderr F E0813 20:42:40.505984 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506125882+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YSP3F 2025-08-13T20:42:40.506141043+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YSP3F 2025-08-13T20:42:40.705592583+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=asLjU 2025-08-13T20:42:40.705592583+00:00 stderr F E0813 20:42:40.705573 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.705642695+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=suOAk 2025-08-13T20:42:40.705642695+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=suOAk 2025-08-13T20:42:40.904975241+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection 
refused" id=YSP3F 2025-08-13T20:42:40.904975241+00:00 stderr F E0813 20:42:40.904949 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.905013752+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=j66ZP 2025-08-13T20:42:40.905058694+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=j66ZP 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=suOAk 2025-08-13T20:42:41.106209573+00:00 stderr F E0813 20:42:41.105955 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=No0Sy 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=No0Sy 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=j66ZP 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=UnIEX 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=UnIEX 2025-08-13T20:42:41.505318040+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=No0Sy 2025-08-13T20:42:41.505351801+00:00 stderr F E0813 20:42:41.505293 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:42:41.505501865+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m4nH4 2025-08-13T20:42:41.505501865+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m4nH4 2025-08-13T20:42:41.705478940+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=UnIEX 2025-08-13T20:42:41.705517131+00:00 stderr F E0813 20:42:41.705468 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.705572143+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PpzAk 2025-08-13T20:42:41.705572143+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PpzAk 2025-08-13T20:42:41.905277341+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=m4nH4 2025-08-13T20:42:41.905324902+00:00 stderr F E0813 20:42:41.905211 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.925762921+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dwSwo 2025-08-13T20:42:41.925897595+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dwSwo 2025-08-13T20:42:42.105688269+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=PpzAk 2025-08-13T20:42:42.105688269+00:00 stderr F E0813 20:42:42.105656 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.147917036+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Wl+0f 2025-08-13T20:42:42.147917036+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Wl+0f 2025-08-13T20:42:42.304827760+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=dwSwo 2025-08-13T20:42:42.304827760+00:00 stderr F E0813 20:42:42.304758 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.345775970+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RVYto 2025-08-13T20:42:42.345775970+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RVYto 2025-08-13T20:42:42.505527846+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Wl+0f 2025-08-13T20:42:42.505527846+00:00 stderr F E0813 20:42:42.505496 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.505635749+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x8BXC 2025-08-13T20:42:42.505635749+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x8BXC 2025-08-13T20:42:42.704977556+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=RVYto 2025-08-13T20:42:42.704977556+00:00 stderr F E0813 20:42:42.704955 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.705079099+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cAhw1 2025-08-13T20:42:42.705092210+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cAhw1 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=x8BXC 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eJByL 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eJByL 2025-08-13T20:42:43.105845533+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=cAhw1 2025-08-13T20:42:43.106050719+00:00 stderr F E0813 20:42:43.105946 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.266869965+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w1tnL 2025-08-13T20:42:43.266869965+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w1tnL 2025-08-13T20:42:43.305633583+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eJByL 2025-08-13T20:42:43.305681664+00:00 stderr F E0813 20:42:43.305620 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.466166321+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=ffEpJ 2025-08-13T20:42:43.466166321+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ffEpJ 2025-08-13T20:42:43.504700592+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=w1tnL 2025-08-13T20:42:43.504926168+00:00 stderr F E0813 20:42:43.504895 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.705847031+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ffEpJ 2025-08-13T20:42:43.706270283+00:00 stderr F E0813 20:42:43.706201 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.826166130+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=pD+NU 2025-08-13T20:42:43.826166130+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=pD+NU 2025-08-13T20:42:43.905995921+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=pD+NU 2025-08-13T20:42:43.906124515+00:00 stderr F E0813 20:42:43.906098 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.027184745+00:00 stderr F time="2025-08-13T20:42:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1gecN 2025-08-13T20:42:44.027350100+00:00 stderr F time="2025-08-13T20:42:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1gecN 2025-08-13T20:42:44.106599205+00:00 stderr F time="2025-08-13T20:42:44Z" level=error msg="UpdateStatus - error while setting 
CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=1gecN 2025-08-13T20:42:44.106599205+00:00 stderr F E0813 20:42:44.106445 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000032600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000227415073043233033006 0ustar zuulzuul2025-10-13T00:14:59.777533191+00:00 stderr F W1013 00:14:59.776293 1 deprecated.go:66] 2025-10-13T00:14:59.777533191+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:14:59.777533191+00:00 stderr F 2025-10-13T00:14:59.777533191+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:14:59.777533191+00:00 stderr F 2025-10-13T00:14:59.777533191+00:00 stderr F =============================================== 2025-10-13T00:14:59.777533191+00:00 stderr F 2025-10-13T00:14:59.777533191+00:00 stderr F I1013 00:14:59.776972 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-10-13T00:14:59.777722277+00:00 stderr F I1013 00:14:59.777628 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:14:59.777722277+00:00 stderr F I1013 00:14:59.777696 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:14:59.778178530+00:00 stderr F I1013 00:14:59.778134 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-10-13T00:14:59.779586733+00:00 stderr F I1013 00:14:59.779104 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 ././@LongLink0000644000000000000000000000032600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000250115073043233032777 0ustar zuulzuul2025-08-13T19:59:13.861232435+00:00 stderr F W0813 19:59:13.860364 1 deprecated.go:66] 2025-08-13T19:59:13.861232435+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:13.861232435+00:00 stderr F 2025-08-13T19:59:13.861232435+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:13.861232435+00:00 stderr F 2025-08-13T19:59:13.861232435+00:00 stderr F =============================================== 2025-08-13T19:59:13.861232435+00:00 stderr F 2025-08-13T19:59:13.862362848+00:00 stderr F I0813 19:59:13.862339 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:14.022979775+00:00 stderr F I0813 19:59:14.022157 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:14.022979775+00:00 stderr F I0813 19:59:14.022382 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:14.025228869+00:00 stderr F I0813 19:59:14.025138 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:59:14.133310880+00:00 stderr F I0813 19:59:14.131998 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:43.317911807+00:00 stderr F I0813 20:42:43.317675 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000033300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000034000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000014116615073043233033012 0ustar zuulzuul2025-10-13T00:14:59.190595186+00:00 stderr F 
I1013 00:14:59.189851 1 start.go:62] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:14:59.190595186+00:00 stderr F I1013 00:14:59.190117 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:59.286992725+00:00 stderr F I1013 00:14:59.286921 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller... 2025-10-13T00:20:28.026271836+00:00 stderr F I1013 00:20:28.025732 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config-controller 2025-10-13T00:20:28.047911903+00:00 stderr F I1013 00:20:28.047857 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:20:28.047981715+00:00 stderr F I1013 00:20:28.047942 1 metrics.go:100] Registering Prometheus metrics 2025-10-13T00:20:28.048010906+00:00 stderr F I1013 00:20:28.047993 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-10-13T00:20:28.066693510+00:00 stderr F I1013 00:20:28.066643 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-10-13T00:20:28.070496627+00:00 stderr F I1013 00:20:28.070462 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.070834527+00:00 stderr F W1013 00:20:28.070789 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:28.070902959+00:00 stderr F E1013 00:20:28.070880 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:28.070960110+00:00 stderr F I1013 00:20:28.070938 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.071021142+00:00 stderr F I1013 00:20:28.071001 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.071230798+00:00 stderr F I1013 00:20:28.071211 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.071510336+00:00 stderr F I1013 00:20:28.071489 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-10-13T00:20:28.071663170+00:00 stderr F I1013 00:20:28.071639 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.072095762+00:00 stderr F I1013 00:20:28.072071 1 reflector.go:351] Caches populated for *v1.Node from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.072095762+00:00 stderr F I1013 00:20:28.072085 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:28.072455802+00:00 stderr F I1013 00:20:28.072429 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-10-13T00:20:28.072561115+00:00 stderr F I1013 00:20:28.072524 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.072754361+00:00 stderr F I1013 00:20:28.072713 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-10-13T00:20:28.072908445+00:00 stderr F I1013 00:20:28.072838 1 machine_set_boot_image_controller.go:151] "FeatureGates changed" enabled=["AdminNetworkPolicy","AlibabaPlatform","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CloudDualStackNodeIPs","ClusterAPIInstallAWS","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallVSphere","DisableKubeletCloudCredentialProviders","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","HardwareSpeed","KMSv1","MetricsServer","NetworkDiagnosticsConfig","NetworkLiveMigration","PrivateHostedZoneAWS","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereStaticIPs"] disabled=["AutomatedEtcdBackup","CSIDriverSharedResource","ChunkSizeMiB","ClusterAPIInstall","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallPowerVS","DNSNameResolver","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MixedCPUsAllocation","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereMultiVCenters","ValidatingAdmissionPolicy","VolumeGroupSnapshot"] 2025-10-13T00:20:28.072982577+00:00 stderr F I1013 00:20:28.072927 1 start.go:107] FeatureGates initialized: enabled=[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs] disabled=[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP 
ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:20:28.073076570+00:00 stderr F I1013 00:20:28.073056 1 drain_controller.go:168] Starting MachineConfigController-DrainController 2025-10-13T00:20:28.073092220+00:00 stderr F I1013 00:20:28.072841 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-10-13T00:20:28.073464350+00:00 stderr F I1013 00:20:28.073194 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.073464350+00:00 stderr F I1013 00:20:28.073392 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", 
"SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:20:28.073479991+00:00 stderr F I1013 00:20:28.073460 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-10-13T00:20:28.078627565+00:00 stderr F I1013 00:20:28.078547 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:28.079773277+00:00 stderr F I1013 00:20:28.079692 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:28.080089356+00:00 stderr F I1013 00:20:28.080064 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-10-13T00:20:28.097876795+00:00 stderr F I1013 00:20:28.097827 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:28.098085361+00:00 stderr F I1013 00:20:28.098050 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-10-13T00:20:28.104866812+00:00 stderr F I1013 00:20:28.104840 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-10-13T00:20:28.125097139+00:00 stderr F I1013 00:20:28.125050 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:28.173473377+00:00 stderr F I1013 00:20:28.173425 1 machine_set_boot_image_controller.go:181] Starting MachineConfigController-MachineSetBootImageController 2025-10-13T00:20:28.173473377+00:00 stderr F I1013 00:20:28.173460 1 node_controller.go:247] Starting MachineConfigController-NodeController 2025-10-13T00:20:28.173519858+00:00 stderr F I1013 00:20:28.173496 1 template_controller.go:227] Starting MachineConfigController-TemplateController 2025-10-13T00:20:28.173546129+00:00 stderr F I1013 00:20:28.173519 1 kubelet_config_controller.go:193] Starting MachineConfigController-KubeletConfigController 2025-10-13T00:20:28.173734794+00:00 stderr F I1013 00:20:28.173712 1 render_controller.go:127] Starting MachineConfigController-RenderController 2025-10-13T00:20:28.174038133+00:00 stderr F I1013 00:20:28.174013 1 container_runtime_config_controller.go:242] Starting MachineConfigController-ContainerRuntimeConfigController 2025-10-13T00:20:28.356088101+00:00 stderr F I1013 00:20:28.356032 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2025-10-13T00:20:28.388379517+00:00 stderr F I1013 00:20:28.388294 1 kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master 2025-10-13T00:20:28.581867446+00:00 stderr F I1013 00:20:28.581806 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2025-10-13T00:20:28.781221880+00:00 stderr F I1013 00:20:28.781176 1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker 2025-10-13T00:20:29.485759169+00:00 stderr F W1013 00:20:29.485656 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the 
server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:29.485759169+00:00 stderr F E1013 00:20:29.485696 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:29.580513078+00:00 stderr F I1013 00:20:29.580431 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2025-10-13T00:20:30.180554135+00:00 stderr F I1013 00:20:30.180458 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2025-10-13T00:20:31.653089864+00:00 stderr F W1013 00:20:31.653040 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:31.653112395+00:00 stderr F E1013 00:20:31.653091 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:33.074147538+00:00 stderr F E1013 00:20:33.073908 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.074310353+00:00 stderr F E1013 00:20:33.073933 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.075422614+00:00 stderr F I1013 00:20:33.074918 1 node_controller.go:988] Pool master is paused and will not update. 
2025-10-13T00:20:33.077213204+00:00 stderr F I1013 00:20:33.077158 1 status.go:266] Degraded Machine: crc and Degraded Reason: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:20:33.077213204+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.082696128+00:00 stderr F I1013 00:20:33.082568 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.083998875+00:00 stderr F I1013 00:20:33.083927 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.088305325+00:00 stderr F E1013 00:20:33.088264 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.090143047+00:00 stderr F E1013 00:20:33.089134 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.096872036+00:00 stderr F I1013 00:20:33.096803 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.097409781+00:00 stderr F I1013 00:20:33.097322 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.107208596+00:00 stderr F E1013 00:20:33.107189 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.107480494+00:00 stderr F E1013 00:20:33.107457 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 
2025-10-13T00:20:33.115076757+00:00 stderr F I1013 00:20:33.115009 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.115884899+00:00 stderr F I1013 00:20:33.115844 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.136176559+00:00 stderr F E1013 00:20:33.136139 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.136196909+00:00 stderr F E1013 00:20:33.136166 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.143397771+00:00 stderr F I1013 00:20:33.143322 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.147739353+00:00 stderr F I1013 00:20:33.147709 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.183735063+00:00 stderr F E1013 00:20:33.183690 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.189037182+00:00 stderr F E1013 00:20:33.188999 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.194203037+00:00 stderr F I1013 00:20:33.194138 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io 
"rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.197253263+00:00 stderr F I1013 00:20:33.197202 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.274825349+00:00 stderr F E1013 00:20:33.274764 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.278014259+00:00 stderr F E1013 00:20:33.277994 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.284588973+00:00 stderr F I1013 00:20:33.284562 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.446000002+00:00 stderr F E1013 00:20:33.445670 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.482711833+00:00 stderr F I1013 00:20:33.482628 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.643899456+00:00 stderr F E1013 00:20:33.643837 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:33.688321662+00:00 stderr F I1013 00:20:33.688232 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:33.882954523+00:00 stderr F I1013 00:20:33.882665 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: 
machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:34.009429892+00:00 stderr F E1013 00:20:34.009235 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:34.082396860+00:00 stderr F I1013 00:20:34.082283 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:34.203342733+00:00 stderr F E1013 00:20:34.203224 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:34.283914114+00:00 stderr F I1013 00:20:34.283815 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:34.722741548+00:00 stderr F E1013 00:20:34.722643 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:34.732612405+00:00 stderr F I1013 00:20:34.732524 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:34.924117669+00:00 stderr F E1013 00:20:34.924046 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:34.934969514+00:00 stderr F I1013 00:20:34.934879 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:36.013315942+00:00 stderr F E1013 00:20:36.013250 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for 
MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:36.022182470+00:00 stderr F I1013 00:20:36.022065 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:36.215959487+00:00 stderr F E1013 00:20:36.215793 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:36.228568241+00:00 stderr F I1013 00:20:36.228472 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:36.335221143+00:00 stderr F W1013 00:20:36.335080 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:36.335221143+00:00 stderr F E1013 00:20:36.335145 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:38.090420424+00:00 stderr F I1013 00:20:38.090316 1 status.go:266] Degraded Machine: crc and Degraded Reason: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:20:38.090420424+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:38.583462089+00:00 stderr F E1013 00:20:38.583387 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:38.591989828+00:00 stderr F I1013 00:20:38.591928 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:38.789569772+00:00 stderr F E1013 00:20:38.789486 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io 
"rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:38.796558969+00:00 stderr F I1013 00:20:38.796486 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:43.716311096+00:00 stderr F E1013 00:20:43.716240 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:43.723922240+00:00 stderr F I1013 00:20:43.723877 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:43.917593494+00:00 stderr F E1013 00:20:43.917530 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:43.924011584+00:00 stderr F I1013 00:20:43.923961 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:48.388484269+00:00 stderr F W1013 00:20:48.388384 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:48.388484269+00:00 stderr F E1013 00:20:48.388418 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:20:53.966318494+00:00 stderr F E1013 00:20:53.966248 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:53.976759697+00:00 stderr F I1013 00:20:53.976705 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 
2025-10-13T00:20:54.164489854+00:00 stderr F E1013 00:20:54.164400 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:20:54.172724265+00:00 stderr F I1013 00:20:54.172638 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:21:05.077869723+00:00 stderr F W1013 00:21:05.077800 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:21:05.077869723+00:00 stderr F E1013 00:21:05.077836 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:21:14.457067847+00:00 stderr F E1013 00:21:14.457002 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:21:14.466758897+00:00 stderr F I1013 00:21:14.466699 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:21:14.653252393+00:00 stderr F E1013 00:21:14.653192 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:21:14.661374521+00:00 stderr F I1013 00:21:14.661308 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:21:37.724593181+00:00 stderr F W1013 00:21:37.724510 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:21:37.724593181+00:00 stderr F E1013 00:21:37.724544 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:21:55.427494933+00:00 stderr F E1013 00:21:55.427086 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:21:55.440198835+00:00 stderr F I1013 00:21:55.440137 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:21:55.621666335+00:00 stderr F E1013 00:21:55.621589 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:21:55.634655124+00:00 stderr F I1013 00:21:55.634578 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:22:15.089117577+00:00 stderr F W1013 00:22:15.088559 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:15.089117577+00:00 stderr F E1013 00:22:15.089086 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.048115008+00:00 stderr F E1013 00:22:28.047639 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:09.148550135+00:00 stderr F W1013 00:23:09.148421 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:23:09.148550135+00:00 stderr F E1013 00:23:09.148482 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:23:17.361566137+00:00 stderr F E1013 00:23:17.361448 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:23:17.374774865+00:00 stderr F E1013 00:23:17.374678 1 render_controller.go:385] could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:23:17.374774865+00:00 stderr F I1013 00:23:17.374709 1 render_controller.go:386] Dropping machineconfigpool "master" out of the queue: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:23:17.555375516+00:00 stderr F E1013 00:23:17.555230 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:23:17.563792861+00:00 stderr F E1013 00:23:17.563674 1 render_controller.go:385] could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:23:17.563792861+00:00 stderr F I1013 00:23:17.563709 1 render_controller.go:386] Dropping machineconfigpool "worker" out of the queue: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-10-13T00:23:26.350724942+00:00 stderr F I1013 00:23:26.350664 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:36.654448994+00:00 stderr F I1013 00:23:36.654117 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:47.010507200+00:00 stderr F I1013 00:23:47.010449 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-10-13T00:23:48.720848552+00:00 stderr F I1013 00:23:48.720795 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:51.101452394+00:00 stderr F W1013 00:23:51.101401 1 
reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-10-13T00:23:51.101452394+00:00 stderr F E1013 00:23:51.101437 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log0000644000175000017500000041210415073043233033003 0ustar zuulzuul2025-08-13T19:59:09.332601413+00:00 stderr F I0813 19:59:09.332217 1 start.go:62] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:09.334155298+00:00 stderr F I0813 19:59:09.333384 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:09.904644890+00:00 stderr F I0813 19:59:09.904226 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller... 2025-08-13T19:59:09.976994272+00:00 stderr F I0813 19:59:09.954145 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config-controller 2025-08-13T19:59:10.501277606+00:00 stderr F I0813 19:59:10.501175 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:10.506563697+00:00 stderr F I0813 19:59:10.502810 1 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:59:10.506684680+00:00 stderr F I0813 19:59:10.506662 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.495076 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.542454 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.563449 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.565377 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.565897 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F E0813 19:59:11.578688 1 node_controller.go:505] getting scheduler config failed: cluster scheduler couldn't be found 
2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.579413 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.607228 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.625149 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.626272 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.626618 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.205253159+00:00 stderr F I0813 19:59:12.202935 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.206409072+00:00 stderr F I0813 19:59:12.205694 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.359926668+00:00 stderr F W0813 19:59:12.353915 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:12.359926668+00:00 stderr F E0813 19:59:12.354497 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:12.360216916+00:00 stderr F I0813 19:59:12.360107 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", 
"ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:12.361595616+00:00 stderr F I0813 19:59:12.361563 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.421095472+00:00 stderr F I0813 19:59:12.305966 1 machine_set_boot_image_controller.go:151] "FeatureGates changed" enabled=["AdminNetworkPolicy","AlibabaPlatform","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CloudDualStackNodeIPs","ClusterAPIInstallAWS","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallVSphere","DisableKubeletCloudCredentialProviders","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","HardwareSpeed","KMSv1","MetricsServer","NetworkDiagnosticsConfig","NetworkLiveMigration","PrivateHostedZoneAWS","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereStaticIPs"] disabled=["AutomatedEtcdBackup","CSIDriverSharedResource","ChunkSizeMiB","ClusterAPIInstall","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallPowerVS","DNSNameResolver","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MixedCPUsAllocation","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereMultiVCenters","ValidatingAdmissionPolicy","VolumeGroupSnapshot"] 2025-08-13T19:59:12.767226169+00:00 stderr F I0813 19:59:12.765698 1 start.go:107] FeatureGates initialized: enabled=[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet 
VSphereDriverConfiguration VSphereStaticIPs] disabled=[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:12.767226169+00:00 stderr F I0813 19:59:12.765975 1 drain_controller.go:168] Starting MachineConfigController-DrainController 2025-08-13T19:59:12.768964998+00:00 stderr F I0813 19:59:12.767517 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:12.776213485+00:00 stderr F I0813 19:59:12.776110 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.779706024+00:00 stderr F I0813 19:59:12.779669 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.782413791+00:00 stderr F I0813 19:59:12.781746 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:12.903315078+00:00 stderr F I0813 19:59:12.897736 1 kubelet_config_controller.go:193] Starting MachineConfigController-KubeletConfigController 2025-08-13T19:59:12.903315078+00:00 stderr F I0813 19:59:12.900105 1 container_runtime_config_controller.go:242] Starting MachineConfigController-ContainerRuntimeConfigController 2025-08-13T19:59:12.965361687+00:00 stderr F I0813 19:59:12.965178 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:12.973180490+00:00 stderr F I0813 19:59:12.973073 1 machine_set_boot_image_controller.go:181] Starting MachineConfigController-MachineSetBootImageController 2025-08-13T19:59:13.861069870+00:00 stderr F W0813 19:59:13.858186 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:13.861069870+00:00 stderr F E0813 19:59:13.858311 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:14.076055558+00:00 stderr F I0813 19:59:14.074825 1 
reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:14.076055558+00:00 stderr F I0813 19:59:14.075481 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T19:59:14.105357153+00:00 stderr F I0813 19:59:14.083405 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:14.138707644+00:00 stderr F I0813 19:59:14.137033 1 template_controller.go:227] Starting MachineConfigController-TemplateController 2025-08-13T19:59:14.160541337+00:00 stderr F I0813 19:59:14.160424 1 node_controller.go:247] Starting MachineConfigController-NodeController 2025-08-13T19:59:14.160688981+00:00 stderr F I0813 19:59:14.160594 1 render_controller.go:127] Starting MachineConfigController-RenderController 2025-08-13T19:59:16.807137172+00:00 stderr F W0813 19:59:16.784255 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:16.807137172+00:00 stderr F E0813 19:59:16.784949 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:18.197085633+00:00 stderr F I0813 19:59:18.195574 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:20.562090150+00:00 stderr F I0813 19:59:20.538752 1 trace.go:236] Trace[1112042174]: "DeltaFIFO Pop Process" ID:openshift-ingress-canary/ingress-canary-2vhcn,Depth:47,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:20.437) (total time: 100ms): 2025-08-13T19:59:20.562090150+00:00 stderr F Trace[1112042174]: [100.102354ms] [100.102354ms] END 2025-08-13T19:59:21.122969158+00:00 stderr F W0813 19:59:21.122736 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:21.123692448+00:00 stderr F E0813 19:59:21.123566 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:26.419192658+00:00 stderr F I0813 19:59:26.399961 1 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] } 2025-08-13T19:59:26.424919171+00:00 stderr F I0813 19:59:26.422230 1 reconcile.go:151] SSH Keys reconcilable 2025-08-13T19:59:27.283901637+00:00 stderr F I0813 19:59:27.271959 1 render_controller.go:530] Generated machineconfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a from 9 configs: 
[{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 } {MachineConfig 99-node-sizing-for-crc machineconfiguration.openshift.io/v1 } {MachineConfig 99-openshift-machineconfig-master-dummy-networks machineconfiguration.openshift.io/v1 }] 2025-08-13T19:59:27.601928073+00:00 stderr F I0813 19:59:27.599531 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23974", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-ef556ead28ddfad01c34ac56c7adfb5a successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:27.752237497+00:00 stderr F E0813 19:59:27.750714 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:28.048144682+00:00 stderr F E0813 19:59:28.047461 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:28.055639386+00:00 stderr F I0813 19:59:28.050728 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:28.466367593+00:00 stderr F I0813 19:59:28.465479 1 kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master 2025-08-13T19:59:28.846212220+00:00 stderr F I0813 19:59:28.843613 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2025-08-13T19:59:31.536257221+00:00 stderr F W0813 19:59:31.497234 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:31.559062731+00:00 stderr F E0813 19:59:31.558862 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:33.466238635+00:00 stderr F I0813 19:59:33.448482 1 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 
AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] } 2025-08-13T19:59:33.466238635+00:00 stderr F I0813 19:59:33.449191 1 reconcile.go:151] SSH Keys reconcilable 2025-08-13T19:59:33.714022388+00:00 stderr F I0813 19:59:33.549553 1 render_controller.go:556] Pool master: now targeting: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T19:59:33.720678308+00:00 stderr F I0813 19:59:33.720638 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:33.727381299+00:00 stderr F I0813 19:59:33.727341 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:33.889329125+00:00 stderr F I0813 19:59:33.888263 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:33.910104118+00:00 stderr F I0813 19:59:33.908677 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:33.952057924+00:00 stderr F I0813 19:59:33.950705 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:34.036304985+00:00 stderr F I0813 19:59:34.032993 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:34.193674731+00:00 stderr F I0813 19:59:34.193315 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:34.553558660+00:00 stderr F I0813 19:59:34.542021 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:35.184505775+00:00 stderr F I0813 19:59:35.184412 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7 2025-08-13T19:59:36.486676493+00:00 stderr F I0813 19:59:36.474371 1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker 2025-08-13T19:59:36.732022357+00:00 stderr F I0813 19:59:36.731222 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:37.038352279+00:00 stderr F I0813 19:59:37.038159 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2025-08-13T19:59:39.230871006+00:00 stderr F I0813 19:59:39.230169 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.292203 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.296699 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1 2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.296717 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1 2025-08-13T19:59:39.522953082+00:00 stderr F I0813 19:59:39.521362 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T19:59:39.774453491+00:00 stderr F I0813 19:59:39.774353 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28168", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc0007ca988) 2025-08-13T19:59:39.829324796+00:00 stderr F I0813 19:59:39.827563 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:39.929983155+00:00 stderr F I0813 19:59:39.929679 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:40.030141060+00:00 stderr F I0813 19:59:40.025318 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:40.059067385+00:00 stderr F I0813 19:59:40.057122 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:40.108063251+00:00 stderr F I0813 19:59:40.105291 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:40.289229585+00:00 stderr F I0813 19:59:40.288704 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:40.317947454+00:00 stderr F I0813 19:59:40.317581 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T19:59:40.320859347+00:00 stderr F I0813 19:59:40.318019 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28255", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T19:59:40.469715670+00:00 stderr F I0813 19:59:40.467188 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:40.837558036+00:00 stderr F I0813 19:59:40.835718 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T19:59:43.090480505+00:00 stderr F I0813 19:59:43.087270 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2025-08-13T19:59:43.295528870+00:00 stderr F I0813 19:59:43.294255 1 
render_controller.go:530] Generated machineconfig rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff from 7 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }] 2025-08-13T19:59:43.296176938+00:00 stderr F I0813 19:59:43.296135 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23055", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:43.792422874+00:00 stderr F I0813 19:59:43.733757 1 render_controller.go:556] Pool worker: now targeting: rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff 2025-08-13T19:59:46.371377217+00:00 stderr F I0813 19:59:46.370946 1 node_controller.go:1210] No nodes available for updates 2025-08-13T19:59:46.804124423+00:00 stderr F I0813 19:59:46.801706 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T19:59:47.150328001+00:00 stderr F I0813 19:59:47.150101 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working 2025-08-13T19:59:47.151249728+00:00 stderr F I0813 19:59:47.150912 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28384", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working 2025-08-13T19:59:47.225183345+00:00 stderr F I0813 19:59:47.224970 1 render_controller.go:530] Generated machineconfig rendered-master-11405dc064e9fc83a779a06d1cd665b3 from 9 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 } {MachineConfig 99-node-sizing-for-crc machineconfiguration.openshift.io/v1 } {MachineConfig 99-openshift-machineconfig-master-dummy-networks machineconfiguration.openshift.io/v1 }] 2025-08-13T19:59:47.252541415+00:00 stderr F I0813 19:59:47.252418 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28255", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' 
rendered-master-11405dc064e9fc83a779a06d1cd665b3 successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:47.534896974+00:00 stderr F E0813 19:59:47.531644 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:47.930438080+00:00 stderr F E0813 19:59:47.923294 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:47.930438080+00:00 stderr F I0813 19:59:47.923353 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:49.297673365+00:00 stderr F I0813 19:59:49.248434 1 status.go:249] Pool worker: All nodes are updated with MachineConfig rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff 2025-08-13T19:59:50.209966579+00:00 stderr F E0813 19:59:50.197069 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:50.613584565+00:00 stderr F E0813 19:59:50.613296 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:50.613675228+00:00 stderr F I0813 19:59:50.613659 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:51.892659457+00:00 stderr F I0813 19:59:51.890142 1 node_controller.go:1210] No nodes available for updates 2025-08-13T19:59:52.768347259+00:00 stderr F W0813 19:59:52.767605 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:52.768347259+00:00 stderr F E0813 19:59:52.768293 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:53.254425664+00:00 stderr F I0813 19:59:53.253448 1 render_controller.go:556] Pool master: now targeting: rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T19:59:58.604698936+00:00 stderr F I0813 19:59:58.547385 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T19:59:58.641913876+00:00 stderr F I0813 19:59:58.565737 1 node_controller.go:1210] No nodes available for updates 2025-08-13T19:59:59.211441021+00:00 
stderr F E0813 19:59:59.210702 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:59.249989550+00:00 stderr F E0813 19:59:59.249943 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:59.250047982+00:00 stderr F I0813 19:59:59.250034 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:03.749065482+00:00 stderr F I0813 20:00:03.745946 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:00:15.506909632+00:00 stderr F I0813 20:00:15.505103 1 drain_controller.go:182] node crc: uncordoning 2025-08-13T20:00:15.506909632+00:00 stderr F I0813 20:00:15.505950 1 drain_controller.go:182] node crc: initiating uncordon (currently schedulable: true) 2025-08-13T20:00:16.676519712+00:00 stderr F I0813 20:00:16.673529 1 drain_controller.go:182] node crc: uncordon succeeded (currently schedulable: true) 2025-08-13T20:00:16.676519712+00:00 stderr F I0813 20:00:16.674017 1 drain_controller.go:182] node crc: operation successful; applying completion annotation 2025-08-13T20:00:22.034286821+00:00 stderr F I0813 20:00:22.033417 1 node_controller.go:576] Pool master: node crc: Completed update to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:00:23.939924262+00:00 stderr F W0813 20:00:23.936455 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:00:23.939924262+00:00 stderr F E0813 20:00:23.937099 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:00:27.063247291+00:00 stderr F I0813 20:00:27.050625 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1 2025-08-13T20:00:27.063247291+00:00 stderr F I0813 20:00:27.061253 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1 2025-08-13T20:00:27.236879842+00:00 stderr F I0813 20:00:27.236013 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc000ee6f08) 2025-08-13T20:00:27.281201456+00:00 stderr F I0813 20:00:27.280951 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:00:27.281765232+00:00 stderr F I0813 20:00:27.281728 1 
event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:00:29.407765243+00:00 stderr F I0813 20:00:29.407432 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working 2025-08-13T20:00:29.423561903+00:00 stderr F I0813 20:00:29.423510 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working 2025-08-13T20:00:33.181499996+00:00 stderr F I0813 20:00:33.165414 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:00:33.516675434+00:00 stderr F I0813 20:00:33.511690 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T20:00:38.620300518+00:00 stderr F I0813 20:00:38.619649 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:00:54.455970291+00:00 stderr F W0813 20:00:54.454636 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:00:54.462198839+00:00 stderr F E0813 20:00:54.455773 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:01:11.763012024+00:00 stderr F I0813 20:01:11.761647 1 drain_controller.go:182] node crc: uncordoning 2025-08-13T20:01:11.763012024+00:00 stderr F I0813 20:01:11.762824 1 drain_controller.go:182] node crc: initiating uncordon (currently schedulable: true) 2025-08-13T20:01:14.404614056+00:00 stderr F I0813 20:01:14.404272 1 drain_controller.go:182] node crc: uncordon succeeded (currently schedulable: true) 2025-08-13T20:01:14.404702519+00:00 stderr F I0813 20:01:14.404687 1 drain_controller.go:182] node crc: operation successful; applying completion annotation 2025-08-13T20:01:38.032189301+00:00 stderr F W0813 20:01:38.031293 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:01:38.032189301+00:00 stderr F E0813 20:01:38.032036 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:01:51.239914643+00:00 stderr F I0813 20:01:51.236916 1 node_controller.go:576] Pool master: node crc: Completed update to 
rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:01:56.264214234+00:00 stderr F I0813 20:01:56.261042 1 status.go:249] Pool master: All nodes are updated with MachineConfig rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:02:19.147200121+00:00 stderr F W0813 20:02:19.146314 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:02:19.147482209+00:00 stderr F E0813 20:02:19.147454 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:03:13.621956248+00:00 stderr F W0813 20:03:13.621082 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.621956248+00:00 stderr F E0813 20:03:13.621837 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.110874138+00:00 stderr F E0813 20:03:21.110202 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.755041202+00:00 stderr F W0813 20:04:02.748980 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.755041202+00:00 stderr F E0813 20:04:02.749582 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:21.134592986+00:00 stderr F E0813 20:04:21.133978 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:45.331437244+00:00 stderr F W0813 20:04:45.323019 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list 
*v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:45.331437244+00:00 stderr F E0813 20:04:45.330911 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:21.114757787+00:00 stderr F E0813 20:05:21.113708 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:39.906343176+00:00 stderr F W0813 20:05:39.905388 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:05:39.906343176+00:00 stderr F E0813 20:05:39.906129 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:06:06.591223742+00:00 stderr F I0813 20:06:06.589573 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:14.707209002+00:00 stderr F I0813 20:06:14.706548 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:14.866933636+00:00 stderr F I0813 20:06:14.866097 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:15.643188785+00:00 stderr F W0813 20:06:15.642535 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:06:15.643350520+00:00 stderr F E0813 20:06:15.643214 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:06:18.021051777+00:00 stderr F I0813 20:06:18.019988 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.164475020+00:00 stderr F I0813 20:06:19.164291 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.611652556+00:00 stderr F I0813 20:06:19.611552 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:23.554076800+00:00 stderr F I0813 20:06:23.550996 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:25.825147654+00:00 stderr F I0813 20:06:25.824990 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:27.136463934+00:00 stderr F I0813 20:06:27.136396 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:29.797014992+00:00 stderr F I0813 20:06:29.796187 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:30.531452663+00:00 stderr F I0813 20:06:30.531307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:30.535016035+00:00 stderr F I0813 20:06:30.534117 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:06:35.440386944+00:00 stderr F I0813 20:06:35.439627 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:36.198413388+00:00 stderr F I0813 20:06:36.197860 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:36.616589747+00:00 stderr F I0813 20:06:36.615422 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:39.422978139+00:00 stderr F I0813 20:06:39.422199 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.778065952+00:00 stderr F I0813 20:06:42.776328 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:44.194632006+00:00 stderr F I0813 20:06:44.194259 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:06:44.650823986+00:00 stderr F I0813 20:06:44.650712 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:44.911028646+00:00 stderr F I0813 20:06:44.910966 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:50.084892825+00:00 stderr F I0813 20:06:50.084114 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:54.079645208+00:00 stderr F I0813 20:06:54.078126 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:10.877700932+00:00 stderr F W0813 20:07:10.876997 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:07:10.877700932+00:00 stderr F E0813 20:07:10.877546 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:07:42.503931311+00:00 stderr F W0813 20:07:42.500600 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:07:42.503931311+00:00 stderr F E0813 20:07:42.501278 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:08:33.357863726+00:00 stderr F W0813 20:08:33.356766 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.358155465+00:00 stderr F E0813 20:08:33.358117 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.553640806+00:00 stderr F W0813 20:09:29.552854 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:09:29.553640806+00:00 stderr F E0813 20:09:29.553366 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:09:33.937122765+00:00 stderr F I0813 20:09:33.935964 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:34.571283297+00:00 stderr F I0813 20:09:34.568451 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:35.089614588+00:00 stderr F I0813 20:09:35.089312 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:35.660374652+00:00 stderr F I0813 20:09:35.660267 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.614850148+00:00 stderr F I0813 20:09:36.612327 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:37.349693196+00:00 stderr F I0813 20:09:37.349540 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:37.349889271+00:00 stderr F I0813 20:09:37.349752 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:09:38.306224350+00:00 stderr F I0813 20:09:38.306099 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:40.359632383+00:00 stderr F E0813 20:09:40.356692 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.368343192+00:00 stderr F E0813 20:09:40.367830 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.372648726+00:00 stderr F E0813 20:09:40.371734 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.372648726+00:00 stderr F I0813 20:09:40.372136 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.380648125+00:00 stderr F E0813 20:09:40.380522 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.380648125+00:00 stderr F I0813 20:09:40.380577 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:45.551874839+00:00 stderr F I0813 20:09:45.549011 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:46.279887832+00:00 stderr F I0813 20:09:46.277766 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:46.575994511+00:00 stderr F I0813 20:09:46.575528 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:47.411594738+00:00 stderr F I0813 20:09:47.410700 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:47.414546083+00:00 stderr F I0813 20:09:47.412227 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:50.334154551+00:00 stderr F I0813 20:09:50.333387 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:52.122596887+00:00 stderr F I0813 20:09:52.122510 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:03.968061577+00:00 stderr F W0813 20:10:03.967024 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:10:03.968061577+00:00 stderr F E0813 20:10:03.967728 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:10:07.599688058+00:00 stderr F I0813 20:10:07.599277 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:10:08.776015395+00:00 stderr F I0813 20:10:08.775892 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:09.470276829+00:00 stderr F I0813 20:10:09.468847 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:10.215148165+00:00 stderr F I0813 20:10:10.214593 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:10:19.692741613+00:00 stderr F I0813 20:10:19.692077 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:23.228125756+00:00 stderr F I0813 20:10:23.227339 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.929125358+00:00 stderr F I0813 20:10:27.928636 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:11:02.478100283+00:00 stderr F W0813 20:11:02.477126 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:02.478100283+00:00 stderr F E0813 20:11:02.477947 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:33.421838594+00:00 stderr F W0813 20:11:33.420285 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:33.421838594+00:00 stderr F E0813 20:11:33.421743 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:27.036588905+00:00 stderr F W0813 20:12:27.036047 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:27.036588905+00:00 stderr F E0813 20:12:27.036497 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:58.297409024+00:00 stderr F W0813 20:12:58.296634 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:58.297409024+00:00 stderr F E0813 20:12:58.297348 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:13:28.588911106+00:00 stderr F W0813 20:13:28.588395 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:13:28.589057570+00:00 stderr F E0813 20:13:28.589040 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:14:20.404079860+00:00 stderr F W0813 20:14:20.402852 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:14:20.404079860+00:00 stderr F E0813 20:14:20.403448 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 
2025-08-13T20:15:15.591592395+00:00 stderr F W0813 20:15:15.590886 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:15:15.591592395+00:00 stderr F E0813 20:15:15.591532 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:16:10.081905945+00:00 stderr F W0813 20:16:10.081372 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:16:10.081905945+00:00 stderr F E0813 20:16:10.081875 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:04.217098347+00:00 stderr F W0813 20:17:04.216341 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:04.217098347+00:00 stderr F E0813 20:17:04.217026 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:56.486311514+00:00 stderr F W0813 20:17:56.485625 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:56.486311514+00:00 stderr F E0813 20:17:56.486247 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:18:54.917419329+00:00 stderr F W0813 20:18:54.916815 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:18:54.917419329+00:00 stderr F E0813 20:18:54.917299 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:19:48.919583120+00:00 stderr F W0813 
20:19:48.918738 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:19:48.919583120+00:00 stderr F E0813 20:19:48.919471 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:20:36.034141950+00:00 stderr F W0813 20:20:36.033271 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:20:36.034141950+00:00 stderr F E0813 20:20:36.034025 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:14.926291207+00:00 stderr F W0813 20:21:14.925592 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:14.926291207+00:00 stderr F E0813 20:21:14.926203 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:47.846378734+00:00 stderr F W0813 20:21:47.845509 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:47.846378734+00:00 stderr F E0813 20:21:47.846321 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:22:35.519597564+00:00 stderr F W0813 20:22:35.518565 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:22:35.519597564+00:00 stderr F E0813 20:22:35.519565 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:23:17.253591317+00:00 stderr F W0813 20:23:17.252247 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:23:17.253839864+00:00 stderr F E0813 20:23:17.253555 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:12.130840552+00:00 stderr F W0813 20:24:12.130174 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:12.130947525+00:00 stderr F E0813 20:24:12.130717 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:46.986507470+00:00 stderr F W0813 20:24:46.985612 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:46.986507470+00:00 stderr F E0813 20:24:46.986375 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:25:38.363311548+00:00 stderr F W0813 20:25:38.362318 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:25:38.363311548+00:00 stderr F E0813 20:25:38.363059 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:26:32.842179333+00:00 stderr F W0813 20:26:32.841247 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:26:32.842179333+00:00 stderr F E0813 20:26:32.842122 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:27:26.684468834+00:00 stderr F W0813 20:27:26.683694 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:27:26.684468834+00:00 stderr F E0813 20:27:26.684369 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:28:04.854152355+00:00 stderr F W0813 20:28:04.853294 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:28:04.854152355+00:00 stderr F E0813 20:28:04.854021 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:00.352371193+00:00 stderr F W0813 20:29:00.351581 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:00.352371193+00:00 stderr F E0813 20:29:00.352355 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:39.887393446+00:00 stderr F W0813 20:29:39.886355 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:39.887393446+00:00 stderr F E0813 20:29:39.887065 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:30:30.924838473+00:00 stderr F W0813 20:30:30.924209 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:30:30.924838473+00:00 stderr F E0813 20:30:30.924735 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:07.703564453+00:00 stderr F W0813 20:31:07.702500 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:07.703702457+00:00 stderr F E0813 20:31:07.703684 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:41.915769202+00:00 stderr F W0813 20:31:41.915124 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:41.915769202+00:00 stderr F E0813 20:31:41.915654 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:32:17.032996847+00:00 stderr F W0813 20:32:17.032355 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:32:17.032996847+00:00 stderr F E0813 20:32:17.032943 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:01.622681426+00:00 stderr F W0813 20:33:01.622034 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:01.622930893+00:00 stderr F E0813 20:33:01.622891 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:47.072377624+00:00 stderr F W0813 20:33:47.071605 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:47.072377624+00:00 stderr F E0813 20:33:47.072320 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:23.300561987+00:00 stderr F W0813 20:34:23.299956 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:23.300664670+00:00 stderr F E0813 20:34:23.300551 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:59.364951638+00:00 stderr F W0813 20:34:59.364285 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:59.364951638+00:00 stderr F E0813 20:34:59.364742 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:35:31.472926985+00:00 stderr F W0813 20:35:31.472191 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:35:31.472926985+00:00 stderr F E0813 20:35:31.472873 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:36:30.556270120+00:00 stderr F W0813 20:36:30.555625 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:36:30.556270120+00:00 stderr F E0813 20:36:30.556227 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:08.151012647+00:00 stderr F W0813 20:37:08.150338 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:08.151169502+00:00 stderr F E0813 20:37:08.151108 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:48.345633999+00:00 stderr F W0813 20:37:48.344753 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:48.345633999+00:00 stderr F E0813 20:37:48.345434 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:38:23.450721151+00:00 stderr F W0813 20:38:23.449862 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:38:23.450721151+00:00 stderr F E0813 20:38:23.450450 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:00.580320141+00:00 stderr F W0813 20:39:00.579684 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:00.580320141+00:00 stderr F E0813 20:39:00.580237 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:40.650243669+00:00 stderr F W0813 20:39:40.649332 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:40.650328611+00:00 stderr F E0813 20:39:40.650253 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:40:40.538996386+00:00 stderr F W0813 20:40:40.538258 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:40:40.538996386+00:00 stderr F E0813 20:40:40.538909 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:41:14.181708488+00:00 stderr F W0813 20:41:14.180081 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:41:14.181708488+00:00 stderr F E0813 20:41:14.181138 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:42:10.626970184+00:00 stderr F I0813 20:42:10.622903 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:42:12.654457226+00:00 stderr F W0813 20:42:12.654097 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:42:12.654563459+00:00 stderr F E0813 20:42:12.654543 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:42:16.008754910+00:00 stderr F I0813 20:42:16.007899 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.014437714+00:00 stderr F I0813 20:42:16.014402 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.025124922+00:00 stderr F I0813 20:42:16.025012 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.045668374+00:00 stderr F I0813 20:42:16.045548 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.060137731+00:00 stderr F I0813 20:42:16.060088 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.066087593+00:00 stderr F I0813 20:42:16.065977 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.076672748+00:00 stderr F I0813 20:42:16.076547 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.086157632+00:00 stderr F I0813 20:42:16.086122 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.098042974+00:00 stderr F I0813 20:42:16.097929 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.138759688+00:00 stderr F I0813 20:42:16.138645 1 render_controller.go:380] Error syncing machineconfigpool worker: 
ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.168434424+00:00 stderr F I0813 20:42:16.167938 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.219057253+00:00 stderr F I0813 20:42:16.218971 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.328636572+00:00 stderr F I0813 20:42:16.328586 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.379500419+00:00 stderr F I0813 20:42:16.379362 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.822158331+00:00 stderr F I0813 20:42:16.821429 1 render_controller.go:556] Pool master: now targeting: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:16.827337360+00:00 stderr F I0813 20:42:16.827259 1 render_controller.go:556] Pool worker: now targeting: rendered-worker-83accf81260e29bcce65a184dd980479 2025-08-13T20:42:21.836593786+00:00 stderr F I0813 20:42:21.835700 1 status.go:249] Pool worker: All nodes are updated with MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 2025-08-13T20:42:21.863504552+00:00 stderr F I0813 20:42:21.863159 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1 2025-08-13T20:42:21.863504552+00:00 stderr F I0813 20:42:21.863202 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1 2025-08-13T20:42:21.894567958+00:00 stderr F I0813 20:42:21.894471 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T20:42:21.955022110+00:00 stderr F I0813 20:42:21.950945 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37435", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc000f1f1c8) 2025-08-13T20:42:21.990988767+00:00 stderr F I0813 20:42:21.990888 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:21.993939322+00:00 stderr F I0813 20:42:21.991069 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37453", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:22.105384245+00:00 stderr F E0813 20:42:22.105190 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.106994052+00:00 stderr F E0813 20:42:22.106943 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on 
machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112701226+00:00 stderr F E0813 20:42:22.112589 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112701226+00:00 stderr F I0813 20:42:22.112625 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112978044+00:00 stderr F E0813 20:42:22.112887 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112978044+00:00 stderr F I0813 20:42:22.112922 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:24.125361272+00:00 stderr F I0813 20:42:24.124848 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working 2025-08-13T20:42:24.127582136+00:00 stderr F I0813 20:42:24.125590 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37453", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working 2025-08-13T20:42:26.958697318+00:00 stderr F I0813 20:42:26.957735 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:27.009661437+00:00 stderr F I0813 20:42:27.009377 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T20:42:31.999703171+00:00 stderr F I0813 20:42:31.999076 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:31.999847395+00:00 stderr F E0813 20:42:31.999080 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.014363134+00:00 stderr F I0813 20:42:32.014258 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.020685466+00:00 stderr F E0813 20:42:32.020594 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: 
machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.032041254+00:00 stderr F I0813 20:42:32.030072 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.040847727+00:00 stderr F E0813 20:42:32.040665 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.050491705+00:00 stderr F I0813 20:42:32.050420 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.071996205+00:00 stderr F E0813 20:42:32.071826 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.080745538+00:00 stderr F I0813 20:42:32.080664 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.121400500+00:00 stderr F E0813 20:42:32.121311 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.131085009+00:00 stderr F I0813 20:42:32.130997 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.211753795+00:00 stderr F E0813 20:42:32.211476 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.232496443+00:00 stderr F I0813 20:42:32.232403 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for 
MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.394884324+00:00 stderr F E0813 20:42:32.393039 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.404468440+00:00 stderr F I0813 20:42:32.404355 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.725402322+00:00 stderr F E0813 20:42:32.725263 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.734254078+00:00 stderr F I0813 20:42:32.734154 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:33.201087597+00:00 stderr F E0813 20:42:33.200596 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.216832791+00:00 stderr F I0813 20:42:33.216170 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.222384611+00:00 stderr F E0813 20:42:33.222356 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.233886922+00:00 stderr F I0813 20:42:33.233775 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.244505438+00:00 stderr F E0813 20:42:33.244416 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig 
rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.256767982+00:00 stderr F I0813 20:42:33.256685 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.279054565+00:00 stderr F E0813 20:42:33.277531 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.293007427+00:00 stderr F I0813 20:42:33.292724 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.335658456+00:00 stderr F E0813 20:42:33.335315 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.351373300+00:00 stderr F I0813 20:42:33.351312 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.376716570+00:00 stderr F E0813 20:42:33.376532 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:33.402590646+00:00 stderr F I0813 20:42:33.401612 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:33.432120678+00:00 stderr F E0813 20:42:33.432061 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.444271948+00:00 stderr F I0813 20:42:33.444176 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not 
get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.606433643+00:00 stderr F E0813 20:42:33.605536 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.615295959+00:00 stderr F I0813 20:42:33.615145 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.936213291+00:00 stderr F E0813 20:42:33.936081 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:33.947201958+00:00 stderr F I0813 20:42:33.946321 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:34.588471676+00:00 stderr F E0813 20:42:34.588375 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:34.605863277+00:00 stderr F I0813 20:42:34.597025 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:34.682711883+00:00 stderr F E0813 20:42:34.682433 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:34.705688565+00:00 stderr F I0813 20:42:34.704968 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:35.886755436+00:00 stderr F E0813 20:42:35.886347 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered 
MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:35.896417474+00:00 stderr F I0813 20:42:35.896291 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:36.323923739+00:00 stderr F I0813 20:42:36.323531 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.324596938+00:00 stderr F I0813 20:42:36.324510 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.324995299+00:00 stderr F I0813 20:42:36.324940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.323763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.345954 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346192 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346305 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346364 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346501 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346557 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.323714 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324345 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324439 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324456 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346847 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346907 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346965 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.347027 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.347181 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382925350+00:00 stderr F I0813 20:42:36.382895 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.016337511+00:00 stderr F I0813 20:42:37.015940 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.017895636+00:00 stderr F I0813 20:42:37.017753 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.023579580+00:00 stderr F I0813 20:42:37.023551 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.024699342+00:00 stderr F I0813 20:42:37.024673 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.035967567+00:00 stderr F I0813 20:42:37.035916 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.037004967+00:00 stderr F I0813 20:42:37.036970 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.058903918+00:00 stderr F I0813 20:42:37.058743 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.059901537+00:00 stderr F I0813 20:42:37.059873 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.101688622+00:00 stderr F I0813 20:42:37.101598 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.104434691+00:00 stderr F I0813 20:42:37.104407 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.217067408+00:00 stderr F I0813 20:42:37.217009 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.267748759+00:00 stderr F E0813 20:42:37.267687 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:37.268837191+00:00 stderr F E0813 20:42:37.268747 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.268893922+00:00 
stderr F I0813 20:42:37.268879 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:37.421141582+00:00 stderr F I0813 20:42:37.421048 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.620343355+00:00 stderr F I0813 20:42:37.618982 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:37.834513760+00:00 stderr F I0813 20:42:37.827334 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.149293695+00:00 stderr F I0813 20:42:38.149172 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:38.222857066+00:00 stderr F I0813 20:42:38.217434 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.459902060+00:00 stderr F E0813 20:42:38.457113 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:38.459902060+00:00 stderr F E0813 20:42:38.457837 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.459902060+00:00 stderr F I0813 20:42:38.457859 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:38.620535611+00:00 stderr F I0813 20:42:38.620441 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.018395911+00:00 stderr F I0813 20:42:39.018295 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.217161502+00:00 stderr F I0813 20:42:39.216939 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:39.617432801+00:00 stderr F 
I0813 20:42:39.617015 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.816411778+00:00 stderr F I0813 20:42:39.816312 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.216590945+00:00 stderr F I0813 20:42:40.216482 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.616889276+00:00 stderr F I0813 20:42:40.616739 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.018724301+00:00 stderr F I0813 20:42:41.018146 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.217121801+00:00 stderr F I0813 20:42:41.216984 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:41.616830045+00:00 stderr F I0813 20:42:41.616691 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.818520409+00:00 stderr F I0813 20:42:41.818366 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.217537563+00:00 stderr F I0813 20:42:42.217150 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.389906863+00:00 stderr F E0813 20:42:42.389755 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:42.390755547+00:00 stderr F E0813 20:42:42.390682 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.390837650+00:00 stderr F I0813 20:42:42.390765 1 render_controller.go:380] Error 
syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:42.861080747+00:00 stderr F I0813 20:42:42.860980 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.579242811+00:00 stderr F E0813 20:42:43.579116 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:43.579945061+00:00 stderr F E0813 20:42:43.579869 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.579945061+00:00 stderr F I0813 20:42:43.579916 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-08-13T20:42:44.146971519+00:00 stderr F I0813 20:42:44.146875 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.178582740+00:00 stderr F I0813 20:42:44.178448 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:44.180025342+00:00 stderr F I0813 20:42:44.179930 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.527609583+00:00 stderr F I0813 20:42:44.527249 1 helpers.go:93] Shutting down due to: terminated 2025-08-13T20:42:44.527643104+00:00 stderr F I0813 20:42:44.527618 1 helpers.go:96] Context cancelled 2025-08-13T20:42:44.527938452+00:00 stderr F I0813 20:42:44.527875 1 machine_set_boot_image_controller.go:189] Shutting down MachineConfigController-MachineSetBootImageController 2025-08-13T20:42:44.529565509+00:00 stderr F I0813 20:42:44.529496 1 render_controller.go:135] Shutting down MachineConfigController-RenderController 2025-08-13T20:42:44.529694943+00:00 stderr F I0813 20:42:44.529633 1 node_controller.go:255] Shutting down MachineConfigController-NodeController 2025-08-13T20:42:44.529712833+00:00 stderr F I0813 20:42:44.529701 1 template_controller.go:235] Shutting down MachineConfigController-TemplateController 2025-08-13T20:42:44.529954020+00:00 stderr F E0813 20:42:44.529858 1 leaderelection.go:308] Failed to release lock: Put 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.529954020+00:00 stderr F I0813 20:42:44.529904 1 start.go:146] Stopped leading. Terminating. ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000755000175000017500000000000015073043234033066 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000755000175000017500000000000015073043234033066 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000644000175000017500000002602515073043234033075 0ustar zuulzuul2025-10-13T00:15:01.900472828+00:00 stderr F I1013 00:15:01.894378 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:15:01.969341322+00:00 stderr F I1013 00:15:01.969209 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", 
"PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:15:01.969341322+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="FeatureGates initializedknownFeatures[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]" 2025-10-13T00:15:01.969999791+00:00 stderr F 2025-10-13T00:15:01Z INFO controller-runtime.metrics Starting metrics server 2025-10-13T00:15:01.970291550+00:00 stderr F 2025-10-13T00:15:01Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-10-13T00:15:01.970431784+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"} 2025-10-13T00:15:01.970431784+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2025-10-13T00:15:01.970431784+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-10-13T00:15:01.970431784+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting Controller {"controller": "status_controller"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource 
{"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2025-10-13T00:15:01.971320421+00:00 stderr F 2025-10-13T00:15:01Z INFO Starting Controller {"controller": "dns_controller"} 2025-10-13T00:15:02.274487194+00:00 stderr F 2025-10-13T00:15:02Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-10-13T00:15:02.382374717+00:00 stderr F 2025-10-13T00:15:02Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} 2025-10-13T00:15:02.382374717+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="reconciling request: /default" 2025-10-13T00:15:02.537699831+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 2, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 2, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 2, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-10-13T00:15:02.546833185+00:00 stderr F time="2025-10-13T00:15:02Z" level=info msg="reconciling request: /default" 2025-10-13T00:15:08.567185953+00:00 stderr F time="2025-10-13T00:15:08Z" level=info msg="reconciling request: /default" 2025-10-13T00:15:17.577264850+00:00 stderr F time="2025-10-13T00:15:17Z" level=info msg="reconciling request: /default" 2025-10-13T00:15:17.619982840+00:00 stderr F time="2025-10-13T00:15:17Z" level=info 
msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 2, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 2, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 2, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 17, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 17, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.October, 13, 0, 15, 17, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-10-13T00:15:17.621948979+00:00 stderr F time="2025-10-13T00:15:17Z" level=info msg="reconciling request: /default" ././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000644000175000017500000003325215073043234033075 0ustar zuulzuul2025-08-13T19:59:13.692541717+00:00 stderr F I0813 19:59:13.684769 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:15.984962045+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="FeatureGates initializedknownFeatures[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate 
GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]" 2025-08-13T19:59:15.992337545+00:00 stderr F 2025-08-13T19:59:15Z INFO controller-runtime.metrics Starting metrics server 2025-08-13T19:59:16.542397145+00:00 stderr F 2025-08-13T19:59:16Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-08-13T19:59:16.562715444+00:00 stderr F I0813 19:59:16.552380 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: 
*v1.DNS"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting Controller {"controller": "status_controller"} 2025-08-13T19:59:16.776197580+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T19:59:16.789697915+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2025-08-13T19:59:16.789933952+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2025-08-13T19:59:16.789988983+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T19:59:16.790040805+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T19:59:16.790110607+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2025-08-13T19:59:16.790152098+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting Controller {"controller": "dns_controller"} 2025-08-13T19:59:26.606910839+00:00 stderr F 2025-08-13T19:59:26Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T19:59:26.606910839+00:00 stderr F 2025-08-13T19:59:26Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T19:59:26.711345946+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:31.665157205+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsDesired\", Message:\"No DNS pods are desired; this could mean all nodes are tainted or unschedulable.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available node-resolver pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 
17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-08-13T19:59:31.941140121+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:32.963913386+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:41.020328726+00:00 stderr F time="2025-08-13T19:59:41Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:43.077599078+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-08-13T19:59:43.159758230+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="reconciling request: /default" 2025-08-13T20:00:14.997050813+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="reconciling request: /default" 2025-08-13T20:00:19.742230217+00:00 stderr F time="2025-08-13T20:00:19Z" level=info msg="reconciling request: /default" 2025-08-13T20:00:47.670975966+00:00 stderr F time="2025-08-13T20:00:47Z" level=info msg="reconciling request: /default" 
2025-08-13T20:02:29.340606718+00:00 stderr F time="2025-08-13T20:02:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:29.346119497+00:00 stderr F time="2025-08-13T20:03:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:29.348626550+00:00 stderr F time="2025-08-13T20:04:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:06:11.240719884+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:26.959054664+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:27.922351049+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:28.002841484+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:54.997534705+00:00 stderr F time="2025-08-13T20:06:54Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:55.533701098+00:00 stderr F time="2025-08-13T20:06:55Z" level=info msg="reconciling request: /default" 2025-08-13T20:08:29.702753933+00:00 stderr F time="2025-08-13T20:08:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:09:29.337380577+00:00 stderr F time="2025-08-13T20:09:29Z" level=info msg="reconciling request: /default" 2025-08-13T20:09:38.974292084+00:00 stderr F time="2025-08-13T20:09:38Z" level=info msg="reconciling request: /default" 2025-08-13T20:09:41.903293700+00:00 stderr F time="2025-08-13T20:09:41Z" level=info msg="reconciling request: /default" 2025-08-13T20:09:41.984477328+00:00 stderr F time="2025-08-13T20:09:41Z" level=info msg="reconciling request: /default" 2025-08-13T20:09:44.310670152+00:00 stderr F time="2025-08-13T20:09:44Z" level=info msg="reconciling request: /default" 2025-08-13T20:09:52.517276043+00:00 stderr F time="2025-08-13T20:09:52Z" level=info msg="reconciling request: /default" ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000755000175000017500000000000015073043234033066 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000644000175000017500000000202015073043234033062 0ustar zuulzuul2025-10-13T00:15:01.494227196+00:00 stderr F W1013 00:15:01.493007 1 deprecated.go:66] 2025-10-13T00:15:01.494227196+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:15:01.494227196+00:00 stderr F 
2025-10-13T00:15:01.494227196+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-10-13T00:15:01.494227196+00:00 stderr F 2025-10-13T00:15:01.494227196+00:00 stderr F =============================================== 2025-10-13T00:15:01.494227196+00:00 stderr F 2025-10-13T00:15:01.494227196+00:00 stderr F I1013 00:15:01.493546 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:15:01.494227196+00:00 stderr F I1013 00:15:01.493580 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:15:01.496081462+00:00 stderr F I1013 00:15:01.495449 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393 2025-10-13T00:15:01.496789853+00:00 stderr F I1013 00:15:01.496199 1 kube-rbac-proxy.go:402] Listening securely on :9393 ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator0000644000175000017500000000222515073043234033071 0ustar zuulzuul2025-08-13T19:59:32.028488101+00:00 stderr F W0813 19:59:32.022135 1 deprecated.go:66] 2025-08-13T19:59:32.028488101+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:32.028488101+00:00 stderr F 2025-08-13T19:59:32.028488101+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:32.028488101+00:00 stderr F 2025-08-13T19:59:32.028488101+00:00 stderr F =============================================== 2025-08-13T19:59:32.028488101+00:00 stderr F 2025-08-13T19:59:32.085928918+00:00 stderr F I0813 19:59:32.083706 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:32.085928918+00:00 stderr F I0813 19:59:32.083991 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:32.147723120+00:00 stderr F I0813 19:59:32.147658 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393 2025-08-13T19:59:32.152233128+00:00 stderr F I0813 19:59:32.152208 1 kube-rbac-proxy.go:402] Listening securely on :9393 2025-08-13T20:42:42.635290547+00:00 stderr F I0813 20:42:42.634461 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000755000175000017500000000000015073043233033132 5ustar zuulzuul././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000755000175000017500000000000015073043233033132 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000167740715073043233033162 0ustar zuulzuul2025-08-13T20:00:57.601024272+00:00 stdout F Copying system trust bundle 2025-08-13T20:00:59.150240997+00:00 stderr F W0813 20:00:59.149367 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory 2025-08-13T20:00:59.151697528+00:00 stderr F I0813 20:00:59.151463 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:59.152227024+00:00 stderr F I0813 20:00:59.151494 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:59.152227024+00:00 stderr F I0813 20:00:59.152190 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:59.152914523+00:00 stderr F I0813 20:00:59.152864 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:06.958417626+00:00 stderr F I0813 20:01:06.957144 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2025-08-13T20:01:07.010158681+00:00 stderr F I0813 20:01:07.005014 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:18.831567916+00:00 stderr F I0813 20:01:18.825876 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:01:20.315622631+00:00 stderr F I0813 20:01:20.315483 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:01:20.315622631+00:00 stderr F I0813 20:01:20.315570 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:01:20.315747505+00:00 stderr F I0813 20:01:20.315695 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:01:20.315767645+00:00 stderr F I0813 20:01:20.315756 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:01:20.445957847+00:00 stderr F I0813 20:01:20.445641 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.445957847+00:00 stderr F W0813 20:01:20.445701 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.445957847+00:00 stderr F W0813 20:01:20.445725 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:01:20.446168933+00:00 stderr F I0813 20:01:20.446115 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.946501 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:20.946457579 +0000 UTC))" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966538 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:20.96648868 +0000 UTC))" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966572 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966627 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.950537 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952678 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.967330 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952729 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.967928 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952749 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.968432 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.970361 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:21.000124869+00:00 stderr F I0813 20:01:20.991088 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:01:21.000124869+00:00 stderr F I0813 20:01:20.993434 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.058740 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:20.983738 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.064586 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock... 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072051 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072104 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072211 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072385 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.072358079 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072675 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:21.072659357 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077533 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:21.077465674 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077748 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:21.077726392 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077922 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:21.077760273 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077956 1 tlsconfig.go:178] "Loaded client 
CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.077934858 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077978 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.077963238 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078004 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.077988539 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078026 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.07801058 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078047 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.07803163 +0000 UTC))" 2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078074 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.078052561 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114507 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:21.078080662 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114660 1 tlsconfig.go:178] "Loaded client 
CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:21.114568952 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114731 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.114714266 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.115201 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:21.11517397 +0000 UTC))" 2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.115454 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:21.115438927 +0000 UTC))" 2025-08-13T20:01:21.634012614+00:00 stderr F I0813 20:01:21.633686 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock 2025-08-13T20:01:21.663876685+00:00 stderr F I0813 20:01:21.636694 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_dfde8735-ce87-4a11-8bd2-f4c4c0ff21d4 became leader 2025-08-13T20:01:23.664939632+00:00 stderr F I0813 20:01:23.659023 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:23.684375367+00:00 stderr F W0813 20:01:23.679522 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:23.684375367+00:00 stderr F E0813 20:01:23.679651 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680257 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680766 1 reflector.go:351] Caches populated for *v1.OAuthClient from 
github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680951 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681079 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681094 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681104 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681115 1 base_controller.go:67] Waiting for caches to sync for MetadataController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681127 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681137 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681147 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681157 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681414 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681433 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681486 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681500 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681515 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681529 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681542 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681555 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681566 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681576 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681586 1 base_controller.go:67] Waiting for caches to sync for IngressStateController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681597 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681608 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681621 1 base_controller.go:67] Waiting for caches to sync for 
WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681657 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681673 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681686 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681700 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681713 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681732 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682142 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682173 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682410 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682436 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682449 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682467 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682483 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682506 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682680 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:01:23.690124541+00:00 stderr F I0813 20:01:23.687756 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.692861649+00:00 stderr F I0813 20:01:23.691127 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.692861649+00:00 stderr F I0813 20:01:23.691418 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.694466604+00:00 stderr F I0813 20:01:23.693678 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.719885519+00:00 stderr F I0813 20:01:23.718499 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.728182826+00:00 stderr F I0813 20:01:23.727762 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.736134953+00:00 stderr F I0813 20:01:23.729167 1 reflector.go:351] Caches populated for *v1.RoleBinding from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.736630577+00:00 stderr F I0813 20:01:23.736571 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.736904285+00:00 stderr F I0813 20:01:23.732416 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:01:23.759032195+00:00 stderr F I0813 20:01:23.755954 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.759032195+00:00 stderr F I0813 20:01:23.757448 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:01:23.761861286+00:00 stderr F I0813 20:01:23.759595 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:01:23.761861286+00:00 stderr F I0813 20:01:23.761255 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.779152519+00:00 stderr F I0813 20:01:23.779041 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:23.779292603+00:00 stderr F I0813 20:01:23.779245 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:23.781823315+00:00 stderr F I0813 20:01:23.781386 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.781877527+00:00 stderr F I0813 20:01:23.781832 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.782127914+00:00 stderr F I0813 20:01:23.782105 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T20:01:23.782259668+00:00 stderr F I0813 20:01:23.782154 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-08-13T20:01:23.782313839+00:00 stderr F I0813 20:01:23.782182 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:01:23.782469254+00:00 stderr F I0813 20:01:23.782380 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:23.782514255+00:00 stderr F I0813 20:01:23.782192 1 base_controller.go:73] Caches are synced for IngressStateController 2025-08-13T20:01:23.782621858+00:00 stderr F I0813 20:01:23.782587 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ... 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782206 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782859 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782254 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782883 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 
2025-08-13T20:01:23.783595756+00:00 stderr F I0813 20:01:23.783568 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication 2025-08-13T20:01:23.783641957+00:00 stderr F I0813 20:01:23.783627 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ... 2025-08-13T20:01:23.783698149+00:00 stderr F I0813 20:01:23.783683 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:23.783728940+00:00 stderr F I0813 20:01:23.783717 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:23.786337854+00:00 stderr F I0813 20:01:23.786310 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.799062797+00:00 stderr F I0813 20:01:23.795492 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.807875158+00:00 stderr F I0813 20:01:23.807598 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.809104123+00:00 stderr F I0813 20:01:23.807656 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.812135370+00:00 stderr F I0813 20:01:23.811597 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.813392686+00:00 stderr F I0813 20:01:23.813334 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.828136846+00:00 stderr F 
I0813 20:01:23.827440 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.838033328+00:00 stderr F I0813 20:01:23.831671 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.840524729+00:00 stderr F I0813 20:01:23.840457 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.845056688+00:00 stderr F I0813 20:01:23.840856 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.845056688+00:00 stderr F I0813 20:01:23.841557 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.871267616+00:00 stderr F I0813 20:01:23.871128 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:01:23.897636398+00:00 stderr F I0813 20:01:23.894465 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:01:23.921866379+00:00 stderr F I0813 20:01:23.916084 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.921866379+00:00 stderr F I0813 20:01:23.917222 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:23.971640038+00:00 stderr F I0813 20:01:23.965615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994463 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994560 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ... 
2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994599 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994605 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T20:01:23.997192686+00:00 stderr F E0813 20:01:23.996014 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.006004578+00:00 stderr F E0813 20:01:24.002904 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.017182026+00:00 stderr F E0813 20:01:24.015431 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.040476041+00:00 stderr F E0813 20:01:24.036712 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.041134619+00:00 stderr F I0813 20:01:24.041104 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.084437394+00:00 stderr F E0813 20:01:24.084350 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084581 1 base_controller.go:73] Caches are synced for ServiceCAController 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084621 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084653 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084661 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 
2025-08-13T20:01:24.097243379+00:00 stderr F I0813 20:01:24.097197 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.176132239+00:00 stderr F E0813 20:01:24.170080 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.278356773+00:00 stderr F I0813 20:01:24.277114 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.337191441+00:00 stderr F E0813 20:01:24.336831 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.466957101+00:00 stderr F I0813 20:01:24.466645 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.658996427+00:00 stderr F E0813 20:01:24.657273 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:24.667258213+00:00 stderr F I0813 20:01:24.667094 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.863943911+00:00 stderr F I0813 20:01:24.863327 1 request.go:697] Waited for 1.190001802s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/configmaps?limit=500&resourceVersion=0 2025-08-13T20:01:24.867625346+00:00 stderr F I0813 20:01:24.866515 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:24.963424247+00:00 stderr F W0813 20:01:24.961338 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:24.963424247+00:00 stderr F E0813 20:01:24.961657 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:25.067905877+00:00 stderr F I0813 20:01:25.067464 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.266332485+00:00 stderr F I0813 20:01:25.266070 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.298593314+00:00 stderr F E0813 20:01:25.298511 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:25.707237687+00:00 stderr F I0813 20:01:25.706312 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.784144260+00:00 stderr F I0813 20:01:25.783887 1 base_controller.go:73] Caches are synced for TrustDistributionController 2025-08-13T20:01:25.786075505+00:00 stderr F I0813 20:01:25.785996 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 
2025-08-13T20:01:25.865314244+00:00 stderr F I0813 20:01:25.864253 1 request.go:697] Waited for 2.190413918s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0 2025-08-13T20:01:25.893330563+00:00 stderr F I0813 20:01:25.893261 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.950398310+00:00 stderr F I0813 20:01:25.949985 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983342 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983382 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983443 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983452 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 2025-08-13T20:01:26.069915768+00:00 stderr F I0813 20:01:26.069377 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.270666692+00:00 stderr F I0813 20:01:26.270583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.284813315+00:00 stderr F I0813 20:01:26.284594 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:01:26.285052591+00:00 stderr F I0813 20:01:26.284942 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T20:01:26.287715797+00:00 stderr F I0813 20:01:26.285505 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:01:26.470182960+00:00 stderr F I0813 20:01:26.469441 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.579690483+00:00 stderr F E0813 20:01:26.579588 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:26.659031375+00:00 stderr F W0813 20:01:26.658260 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:26.659031375+00:00 stderr F E0813 20:01:26.658424 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:26.669964707+00:00 stderr F I0813 20:01:26.668072 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.868958311+00:00 stderr F I0813 20:01:26.867414 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:26.882898748+00:00 stderr F I0813 20:01:26.882653 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T20:01:26.883909807+00:00 stderr F I0813 20:01:26.882904 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:01:26.885030489+00:00 stderr F I0813 20:01:26.883574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:01:27.066296428+00:00 stderr F I0813 20:01:27.064957 1 request.go:697] Waited for 3.390505706s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets?limit=500&resourceVersion=0 2025-08-13T20:01:27.075250893+00:00 stderr F I0813 20:01:27.073760 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082391 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082547 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ... 
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082610 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082617 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.083898 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.083918 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.084145 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.084156 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:27.128292386+00:00 stderr F E0813 20:01:27.125655 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:27.128292386+00:00 stderr F I0813 20:01:27.126925 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:27.269064480+00:00 stderr F I0813 20:01:27.268271 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282300 1 
base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282363 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282563 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282573 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282861 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282871 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T20:01:27.484936365+00:00 stderr F I0813 20:01:27.481954 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.668396366+00:00 stderr F I0813 20:01:27.667422 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.686198444+00:00 stderr F I0813 20:01:27.685345 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:27.686198444+00:00 stderr F I0813 20:01:27.685392 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:01:27.868074850+00:00 stderr F I0813 20:01:27.868018 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:27.882122220+00:00 stderr F I0813 20:01:27.882055 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:01:27.882207453+00:00 stderr F I0813 20:01:27.882190 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T20:01:27.882269575+00:00 stderr F I0813 20:01:27.882162 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController 2025-08-13T20:01:27.882308516+00:00 stderr F I0813 20:01:27.882296 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ... 2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882680 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882709 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882727 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882732 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882748 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882752 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:01:27.882991775+00:00 stderr F I0813 20:01:27.882962 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:01:27.883093208+00:00 stderr F I0813 20:01:27.883071 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 
2025-08-13T20:01:28.265288876+00:00 stderr F I0813 20:01:28.264249 1 request.go:697] Waited for 4.479476827s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver 2025-08-13T20:01:29.140591084+00:00 stderr F E0813 20:01:29.140530 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:29.276565671+00:00 stderr F I0813 20:01:29.264871 1 request.go:697] Waited for 2.978228822s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:01:29.382012428+00:00 stderr F I0813 20:01:29.380383 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" 2025-08-13T20:01:32.212365833+00:00 stderr F W0813 20:01:32.211005 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:32.212365833+00:00 stderr F E0813 20:01:32.211528 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:35.315058375+00:00 stderr F E0813 20:01:35.309483 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.317909616+00:00 stderr F E0813 20:01:35.317744 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:35.318626706+00:00 stderr F I0813 20:01:35.318523 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get 
\"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:35.330132065+00:00 stderr F E0813 20:01:35.320317 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.330132065+00:00 stderr F E0813 20:01:35.328652 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.330814334+00:00 stderr F E0813 20:01:35.330725 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.351273227+00:00 stderr F E0813 20:01:35.351143 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.434510491+00:00 stderr F E0813 20:01:35.434401 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.597888489+00:00 stderr F E0813 20:01:35.597740 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:35.921267900+00:00 stderr F E0813 20:01:35.921136 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:36.565020306+00:00 stderr F E0813 
20:01:36.564724 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:37.852888508+00:00 stderr F E0813 20:01:37.850910 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:39.382332008+00:00 stderr F E0813 20:01:39.382228 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:40.230678778+00:00 stderr F I0813 20:01:40.230622 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:01:40.415179749+00:00 stderr F E0813 20:01:40.415094 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:42.978231701+00:00 stderr F I0813 20:01:42.977340 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authentication-integrated-oauth -n openshift-config because it changed 2025-08-13T20:01:44.713770138+00:00 stderr F W0813 20:01:44.713667 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:44.713770138+00:00 stderr F E0813 20:01:44.713725 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: 
failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:45.538965267+00:00 stderr F E0813 20:01:45.538896 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:53.763712966+00:00 stderr F E0813 20:01:53.763200 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:01:53.775870473+00:00 stderr F E0813 20:01:53.775696 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:01:55.783664842+00:00 stderr F E0813 20:01:55.783488 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:04.290913305+00:00 stderr F W0813 20:02:04.290126 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:04.290913305+00:00 stderr F E0813 20:02:04.290747 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:20.380357169+00:00 stderr F E0813 20:02:20.343460 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:23.764121197+00:00 stderr F E0813 20:02:23.762438 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:23.769153661+00:00 stderr F E0813 20:02:23.768893 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:26.933185062+00:00 stderr F E0813 20:02:26.932990 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:26.948695884+00:00 stderr F E0813 20:02:26.948445 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:26.989922850+00:00 stderr F E0813 20:02:26.989511 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.004154426+00:00 stderr F E0813 20:02:27.003469 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:02:27.022583992+00:00 stderr F E0813 20:02:27.022435 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.043768966+00:00 stderr F E0813 20:02:27.043682 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.329108505+00:00 stderr F E0813 20:02:27.328969 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.529163642+00:00 stderr F E0813 20:02:27.528948 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.928094343+00:00 stderr F E0813 20:02:27.928008 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.128399067+00:00 stderr F E0813 20:02:28.128107 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.327478386+00:00 stderr F E0813 20:02:28.327428 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.731456270+00:00 stderr F E0813 20:02:28.729698 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.930338964+00:00 stderr F E0813 20:02:28.930232 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.127728155+00:00 stderr F E0813 20:02:29.127603 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.642929832+00:00 stderr F E0813 20:02:29.641740 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.759355994+00:00 stderr F E0813 20:02:29.755106 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.273063908+00:00 stderr F E0813 20:02:30.272704 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.931073889+00:00 stderr F E0813 20:02:30.930430 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.031080811+00:00 stderr F E0813 20:02:31.031014 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.896913252+00:00 stderr F E0813 20:02:31.896737 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.901731269+00:00 stderr F E0813 20:02:31.901144 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903485989+00:00 stderr F W0813 20:02:31.903367 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903485989+00:00 stderr F E0813 20:02:31.903426 1 base_controller.go:268] 
RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903617233+00:00 stderr F E0813 20:02:31.903577 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917903270+00:00 stderr F W0813 20:02:31.917771 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.918132147+00:00 stderr F E0813 20:02:31.918076 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.922458890+00:00 stderr F E0813 20:02:31.922432 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.934531055+00:00 stderr F E0813 20:02:31.934350 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.936878452+00:00 stderr F E0813 20:02:31.936742 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.127017396+00:00 stderr F E0813 20:02:32.126971 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.255019067+00:00 stderr F E0813 20:02:32.254935 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.528008705+00:00 stderr F E0813 20:02:32.527930 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.653532296+00:00 stderr F E0813 20:02:32.653441 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.727059043+00:00 stderr F W0813 20:02:32.726958 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.727059043+00:00 stderr F E0813 20:02:32.727027 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.932082382+00:00 stderr F E0813 20:02:32.931836 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.127423844+00:00 stderr F E0813 20:02:33.127363 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.253611194+00:00 stderr F E0813 20:02:33.253178 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.328117050+00:00 stderr F E0813 20:02:33.327994 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.932450330+00:00 stderr F W0813 20:02:33.930969 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.932450330+00:00 stderr F E0813 20:02:33.931612 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.052659389+00:00 stderr F E0813 20:02:34.052584 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.136916972+00:00 stderr F E0813 20:02:34.132246 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.328227679+00:00 stderr F E0813 20:02:34.328114 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.534071921+00:00 stderr F E0813 20:02:34.533970 1 base_controller.go:268] APIServerStaticResources 
reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.687623582+00:00 stderr F E0813 20:02:34.687573 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.736053963+00:00 stderr F E0813 20:02:34.735985 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:35.127705846+00:00 stderr F E0813 20:02:35.127553 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.328358600+00:00 stderr F W0813 20:02:35.328198 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.328358600+00:00 stderr F E0813 20:02:35.328266 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.453077748+00:00 stderr F E0813 20:02:35.453020 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.527173802+00:00 stderr F W0813 20:02:35.527113 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.527251724+00:00 stderr F E0813 20:02:35.527236 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.727884567+00:00 stderr F E0813 20:02:35.727750 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: 
connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:35.927503792+00:00 stderr F W0813 20:02:35.927397 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.927503792+00:00 stderr F E0813 20:02:35.927461 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.327550184+00:00 stderr F W0813 20:02:36.327428 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.327550184+00:00 stderr F E0813 20:02:36.327499 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.528281810+00:00 stderr F W0813 20:02:36.528120 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.528281810+00:00 stderr F E0813 20:02:36.528206 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.727435112+00:00 stderr F E0813 20:02:36.727322 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: 
connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:36.747421802+00:00 stderr F E0813 20:02:36.747322 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:36.927103428+00:00 stderr F W0813 20:02:36.926945 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.927103428+00:00 stderr F E0813 20:02:36.927013 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.052703641+00:00 stderr F E0813 20:02:37.052619 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.130507020+00:00 stderr F E0813 20:02:37.130384 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.328582881+00:00 stderr F E0813 20:02:37.328481 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.727091989+00:00 stderr F W0813 20:02:37.726976 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.727091989+00:00 stderr F E0813 20:02:37.727046 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.927703161+00:00 stderr F E0813 20:02:37.927482 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.127932173+00:00 stderr F E0813 20:02:38.127747 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:38.328096443+00:00 stderr F W0813 20:02:38.328010 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.328096443+00:00 stderr F E0813 20:02:38.328066 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.527090450+00:00 stderr F W0813 20:02:38.526951 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.527090450+00:00 stderr F E0813 20:02:38.527022 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.728004041+00:00 stderr F E0813 20:02:38.727762 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.928185062+00:00 stderr F E0813 20:02:38.928079 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:39.128406164+00:00 stderr F W0813 20:02:39.128242 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.128406164+00:00 stderr F E0813 20:02:39.128350 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.175635611+00:00 stderr F E0813 20:02:39.175271 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:39.527495399+00:00 stderr F E0813 20:02:39.527376 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:39.727723831+00:00 stderr F E0813 20:02:39.727575 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:39.855182367+00:00 stderr F E0813 20:02:39.855037 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.928222110+00:00 stderr F W0813 20:02:39.928035 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.928222110+00:00 stderr F E0813 20:02:39.928115 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.330568398+00:00 stderr F W0813 20:02:40.330149 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.330568398+00:00 stderr F E0813 20:02:40.330514 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.530181012+00:00 stderr F E0813 20:02:40.530063 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.728542011+00:00 stderr F E0813 20:02:40.728404 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused] 2025-08-13T20:02:40.928131315+00:00 stderr F E0813 20:02:40.927956 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.127443081+00:00 stderr F E0813 20:02:41.127355 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.528347618+00:00 stderr F W0813 20:02:41.528216 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.528347618+00:00 stderr F E0813 20:02:41.528295 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.728027864+00:00 stderr F E0813 20:02:41.727880 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:41.927577757+00:00 stderr F E0813 20:02:41.927448 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.128612642+00:00 stderr F W0813 20:02:42.128336 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:02:42.128612642+00:00 stderr F E0813 20:02:42.128407 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.327284090+00:00 stderr F E0813 20:02:42.327148 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.528119720+00:00 stderr F E0813 20:02:42.527951 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.728413734+00:00 stderr F E0813 20:02:42.728286 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.927731520+00:00 stderr F W0813 20:02:42.927610 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.927731520+00:00 stderr F E0813 20:02:42.927699 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.129365182+00:00 stderr F E0813 20:02:43.129188 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.527767438+00:00 stderr F W0813 20:02:43.527659 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.527767438+00:00 stderr F E0813 20:02:43.527742 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.727519107+00:00 stderr F E0813 20:02:43.727459 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.159466789+00:00 stderr F E0813 20:02:44.159362 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:44.372410054+00:00 stderr F E0813 20:02:44.372313 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.984436533+00:00 stderr F E0813 20:02:44.984282 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.300437367+00:00 stderr F E0813 20:02:45.300330 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.491012544+00:00 stderr F W0813 20:02:45.490924 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.491012544+00:00 stderr F E0813 20:02:45.490980 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:02:45.660187220+00:00 stderr F E0813 20:02:45.660074 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.093562493+00:00 stderr F W0813 20:02:46.093390 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.093562493+00:00 stderr F E0813 20:02:46.093459 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.296879373+00:00 stderr F E0813 20:02:46.296743 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.896875699+00:00 stderr F E0813 20:02:46.896731 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.229642569+00:00 stderr F E0813 20:02:48.229165 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.434714093+00:00 stderr F E0813 20:02:50.434581 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused] 2025-08-13T20:02:50.614304156+00:00 stderr F W0813 20:02:50.614129 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.614304156+00:00 stderr F E0813 20:02:50.614192 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.774613249+00:00 stderr F E0813 20:02:50.774502 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.172401237+00:00 stderr F E0813 20:02:51.172180 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229745813+00:00 stderr F W0813 20:02:51.229630 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229745813+00:00 stderr F E0813 20:02:51.229699 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.156068209+00:00 stderr F W0813 20:02:52.155874 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.156068209+00:00 stderr F E0813 20:02:52.156054 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.765568843+00:00 stderr F E0813 20:02:53.765410 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:53.767631182+00:00 stderr F E0813 20:02:53.767560 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.769331031+00:00 stderr F E0813 20:02:53.769283 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770287828+00:00 stderr F W0813 20:02:53.770226 1 base_controller.go:232] Updating status of 
"RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770287828+00:00 stderr F E0813 20:02:53.770276 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770340499+00:00 stderr F E0813 20:02:53.770318 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770487224+00:00 stderr F E0813 20:02:53.770415 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:53.773539391+00:00 stderr F E0813 20:02:53.773464 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.778548844+00:00 stderr F E0813 20:02:53.778440 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:53.785736249+00:00 stderr F E0813 20:02:53.785659 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.811054111+00:00 stderr F E0813 20:02:53.810964 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.967387341+00:00 stderr F E0813 20:02:53.967258 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:54.167566501+00:00 stderr F E0813 20:02:54.167444 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.367052942+00:00 stderr F E0813 20:02:54.366960 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.567292725+00:00 stderr F E0813 20:02:54.567165 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:54.766946050+00:00 stderr F E0813 20:02:54.766873 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.089055399+00:00 stderr F E0813 20:02:55.088918 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.227706785+00:00 stderr F E0813 20:02:55.227612 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.731500816+00:00 stderr F E0813 20:02:55.731367 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.297721488+00:00 stderr F E0813 20:02:56.297603 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:56.897877649+00:00 stderr F E0813 20:02:56.897404 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.014861806+00:00 stderr F E0813 20:02:57.014663 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:58.477131781+00:00 stderr F E0813 20:02:58.474671 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.579520009+00:00 stderr F E0813 20:02:59.579401 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.687168456+00:00 stderr F E0813 20:03:00.687070 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:00.857913247+00:00 stderr F W0813 20:03:00.857746 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.857913247+00:00 stderr F E0813 20:03:00.857868 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.476541395+00:00 stderr F W0813 20:03:01.476415 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.476594157+00:00 stderr F E0813 20:03:01.476530 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.493944529+00:00 stderr F E0813 20:03:02.493680 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.506707073+00:00 stderr F E0813 20:03:02.506415 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.522068951+00:00 stderr F E0813 20:03:02.521926 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.552519840+00:00 stderr F E0813 20:03:02.552368 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.597944556+00:00 
stderr F E0813 20:03:02.597678 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.685393231+00:00 stderr F E0813 20:03:02.685317 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.850047908+00:00 stderr F E0813 20:03:02.849866 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.174533113+00:00 stderr F E0813 20:03:03.174424 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.818949566+00:00 stderr F E0813 20:03:03.818761 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.701402750+00:00 stderr F E0813 20:03:04.701197 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.102472402+00:00 stderr F E0813 20:03:05.102371 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:06.297269146+00:00 stderr F E0813 20:03:06.297219 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:06.899501755+00:00 stderr F E0813 20:03:06.899331 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:07.674637578+00:00 stderr F E0813 20:03:07.674518 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.657603490+00:00 stderr F E0813 20:03:11.656881 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.014409079+00:00 stderr F E0813 20:03:12.014274 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.021263144+00:00 stderr F E0813 20:03:12.021168 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.033699899+00:00 stderr F E0813 20:03:12.033630 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.055958634+00:00 stderr F E0813 20:03:12.055824 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.102658386+00:00 stderr F E0813 20:03:12.102579 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.166654942+00:00 stderr F E0813 20:03:12.166523 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:12.186324323+00:00 stderr F E0813 20:03:12.186242 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.348351775+00:00 stderr F E0813 20:03:12.348236 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.670494515+00:00 stderr F E0813 20:03:12.670386 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.806152205+00:00 stderr F E0813 20:03:12.806023 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.313283152+00:00 stderr F E0813 20:03:13.313092 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:14.596306612+00:00 stderr F E0813 20:03:14.596171 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:14.944364571+00:00 stderr F E0813 20:03:14.944263 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:15.711166666+00:00 stderr F E0813 20:03:15.711008 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:16.299980244+00:00 stderr F E0813 20:03:16.299884 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:16.899382965+00:00 stderr F E0813 20:03:16.899257 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:17.159255479+00:00 stderr F E0813 20:03:17.159063 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.488493338+00:00 stderr F W0813 20:03:18.488412 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.488601241+00:00 stderr F E0813 20:03:18.488582 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.618289821+00:00 stderr F E0813 20:03:18.618122 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.625185267+00:00 stderr F E0813 20:03:18.625110 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.637349034+00:00 stderr F E0813 20:03:18.637274 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.658940010+00:00 stderr F E0813 20:03:18.658767 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.701117413+00:00 stderr F E0813 20:03:18.701022 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.784162483+00:00 stderr F E0813 20:03:18.784045 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.947229385+00:00 stderr F E0813 20:03:18.947181 1 base_controller.go:268] IngressStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.269028345+00:00 stderr F E0813 20:03:19.268920 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.911894545+00:00 stderr F E0813 20:03:19.911560 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.194807962+00:00 stderr F E0813 20:03:21.194399 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.342962569+00:00 stderr F W0813 20:03:21.342902 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.343047731+00:00 stderr F E0813 20:03:21.343032 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.282460580+00:00 stderr F E0813 20:03:22.282076 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.052871018+00:00 stderr F E0813 20:03:23.052646 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.760202456+00:00 stderr F E0813 20:03:23.759999 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.769813371+00:00 stderr F E0813 20:03:23.769669 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:23.779965870+00:00 stderr F E0813 20:03:23.779894 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:03:23.781059171+00:00 stderr F E0813 20:03:23.781003 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.784121259+00:00 stderr F E0813 20:03:23.784096 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.785240851+00:00 stderr F E0813 
20:03:23.785206 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.788247946+00:00 stderr F W0813 20:03:23.788197 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.788319458+00:00 stderr F E0813 20:03:23.788302 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.797432929+00:00 stderr F E0813 20:03:23.797349 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.806918239+00:00 stderr F E0813 20:03:23.806727 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: 
connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:24.163956525+00:00 stderr F E0813 20:03:24.163896 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:24.494039410+00:00 stderr F E0813 20:03:24.493690 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.764908027+00:00 stderr F E0813 20:03:24.764638 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:26.299993099+00:00 stderr F E0813 20:03:26.299749 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.899464241+00:00 stderr F E0813 20:03:26.899334 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:27.286445800+00:00 stderr F E0813 20:03:27.286342 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:28.890913401+00:00 stderr F E0813 20:03:28.890567 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:31.740079510+00:00 stderr F E0813 20:03:31.739954 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:32.525712363+00:00 stderr F E0813 20:03:32.525538 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.426605317+00:00 stderr F E0813 20:03:35.426247 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.645656476+00:00 stderr F E0813 20:03:35.645505 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:36.299152909+00:00 stderr F E0813 20:03:36.299090 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.901969136+00:00 stderr F E0813 20:03:36.901541 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.133112993+00:00 stderr F E0813 20:03:39.133000 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.659148194+00:00 stderr F E0813 20:03:41.658446 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:42.442147039+00:00 stderr F W0813 20:03:42.442027 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.442147039+00:00 stderr F E0813 20:03:42.442093 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.483134259+00:00 stderr F W0813 20:03:42.482557 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.483134259+00:00 stderr F E0813 20:03:42.482661 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.536431116+00:00 stderr F E0813 20:03:43.536332 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.300290470+00:00 stderr F E0813 20:03:46.299877 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.908462350+00:00 stderr F E0813 20:03:46.905407 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.394999873+00:00 stderr F W0813 20:03:49.393645 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.394999873+00:00 stderr F E0813 20:03:49.393889 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.025811794+00:00 stderr F E0813 20:03:53.023946 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.776505229+00:00 stderr F E0813 20:03:53.772389 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.776505229+00:00 stderr F E0813 20:03:53.773188 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.779145 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.786216 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.786304 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.795836431+00:00 stderr F W0813 20:03:53.795634 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.795836431+00:00 stderr F E0813 20:03:53.795691 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.828464271+00:00 stderr F E0813 20:03:53.816611 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:54.151538628+00:00 stderr F E0813 20:03:54.150645 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:54.345151752+00:00 stderr F E0813 20:03:54.344983 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:03:56.302523861+00:00 stderr F E0813 20:03:56.300941 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.673452972+00:00 stderr F E0813 20:03:56.673295 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.903368320+00:00 stderr F E0813 20:03:56.903319 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.620550084+00:00 stderr F E0813 20:03:59.618633 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.312954040+00:00 stderr F W0813 20:04:02.309330 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.312954040+00:00 stderr F E0813 20:04:02.309618 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.302755927+00:00 stderr F E0813 20:04:06.301888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.903333330+00:00 stderr F E0813 20:04:06.903206 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:16.305217907+00:00 stderr F E0813 20:04:16.304262 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:16.913825279+00:00 stderr F E0813 20:04:16.913395 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.850512924+00:00 stderr F W0813 20:04:19.850001 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.850605137+00:00 stderr F E0813 20:04:19.850588 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.413072252+00:00 stderr F E0813 20:04:20.412450 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.769465263+00:00 stderr F E0813 20:04:23.769303 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.770306377+00:00 stderr F E0813 20:04:23.770234 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:04:23.771539322+00:00 stderr F E0813 20:04:23.771375 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:04:23.773421256+00:00 stderr F E0813 20:04:23.773370 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.776546526+00:00 stderr F E0813 20:04:23.776490 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.783513345+00:00 stderr F W0813 20:04:23.783400 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.783513345+00:00 stderr F E0813 20:04:23.783473 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.798514465+00:00 stderr F E0813 20:04:23.798408 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.822333627+00:00 stderr F E0813 20:04:23.822215 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.972993131+00:00 stderr F E0813 20:04:23.972892 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:24.508599069+00:00 stderr F E0813 20:04:24.496475 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.579112658+00:00 stderr F E0813 20:04:24.579001 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:24.771043864+00:00 stderr F E0813 20:04:24.770921 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:26.307646646+00:00 stderr F E0813 20:04:26.307192 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:26.906563348+00:00 stderr F E0813 20:04:26.906472 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:27.287944838+00:00 stderr F E0813 20:04:27.287721 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:33.582276875+00:00 stderr F E0813 20:04:33.581698 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:33.987219352+00:00 stderr F E0813 20:04:33.987048 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.312691544+00:00 stderr F E0813 20:04:36.312200 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.917266547+00:00 stderr F E0813 20:04:36.916570 1 base_controller.go:268] APIServiceController_openshift-apiserver 
reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:40.583078430+00:00 stderr F E0813 20:04:40.582122 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.022061001+00:00 stderr F E0813 20:04:41.021936 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:46.326883580+00:00 stderr F E0813 20:04:46.323295 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:46.909530754+00:00 stderr F E0813 20:04:46.909405 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.181715267+00:00 stderr F W0813 20:04:50.181102 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.181883722+00:00 stderr F E0813 20:04:50.181729 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.767715196+00:00 stderr F E0813 20:04:53.767349 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:04:53.770173056+00:00 stderr F E0813 20:04:53.770124 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:04:53.770377622+00:00 stderr F E0813 20:04:53.770329 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.771906916+00:00 stderr F E0813 20:04:53.771834 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.772924025+00:00 stderr F E0813 20:04:53.772878 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.774461939+00:00 stderr F W0813 20:04:53.774407 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:53.774461939+00:00 stderr F E0813 20:04:53.774452 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.785646849+00:00 stderr F E0813 20:04:53.785579 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:53.866119903+00:00 stderr F E0813 20:04:53.866006 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 
10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:56.306290470+00:00 stderr F E0813 20:04:56.306165 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.910579185+00:00 stderr F E0813 20:04:56.910461 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:57.350171663+00:00 stderr F E0813 20:04:57.349594 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:58.237879773+00:00 stderr F E0813 20:04:58.237672 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:00.340124593+00:00 stderr F E0813 20:05:00.338045 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:04.189001221+00:00 stderr F E0813 20:05:04.184949 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:06.315115545+00:00 stderr F E0813 20:05:06.313999 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.915299503+00:00 stderr F E0813 20:05:06.915171 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.568591731+00:00 stderr F E0813 20:05:09.568101 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:05:10.970612210+00:00 stderr F W0813 20:05:10.970159 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:10.970612210+00:00 stderr F E0813 20:05:10.970545 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:13.451698458+00:00 stderr F E0813 20:05:13.451599 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.913312373+00:00 stderr F W0813 
20:05:14.908382 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.913312373+00:00 stderr F E0813 20:05:14.913234 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.310306737+00:00 stderr F E0813 20:05:16.310246 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912380388+00:00 stderr F E0813 20:05:16.911955 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:20.615698997+00:00 stderr F E0813 20:05:20.615061 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:05:23.777205299+00:00 stderr F E0813 20:05:23.776533 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:23.982701903+00:00 stderr F I0813 20:05:23.982325 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31176 2025-08-13T20:05:53.780925638+00:00 stderr F E0813 20:05:53.773995 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:53.846642450+00:00 stderr F I0813 20:05:53.846554 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:53.884143743+00:00 stderr F I0813 20:05:53.884055 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:57.150880939+00:00 stderr F I0813 20:05:57.150016 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:05:57.184283916+00:00 stderr F I0813 20:05:57.184150 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2025-08-13T20:05:57.184283916+00:00 stderr F I0813 20:05:57.184200 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185208 1 base_controller.go:73] Caches are synced for MetadataController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185241 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185271 1 base_controller.go:73] Caches are synced for PayloadConfig 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185277 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185295 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185301 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185320 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185326 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185342 1 base_controller.go:73] Caches are synced for ProxyConfigController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185348 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185365 1 base_controller.go:73] Caches are synced for CustomRouteController 2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185371 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2025-08-13T20:05:57.191000838+00:00 stderr F I0813 20:05:57.190088 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:05:57.191000838+00:00 stderr F I0813 20:05:57.190136 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-08-13T20:05:57.259119429+00:00 stderr F I0813 20:05:57.258912 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:57.291520217+00:00 stderr F I0813 20:05:57.291376 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177 2025-08-13T20:05:57.467956799+00:00 stderr F I0813 20:05:57.458971 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31898 2025-08-13T20:05:57.602885093+00:00 stderr F I0813 20:05:57.602057 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31898 2025-08-13T20:05:58.111040365+00:00 stderr F I0813 20:05:58.110278 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.115503 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.116373 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.116996 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:05:58.593472469+00:00 stderr F I0813 20:05:58.593120 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31941 2025-08-13T20:05:59.392556071+00:00 stderr F I0813 20:05:59.392139 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed 
resourceVersion=31941 2025-08-13T20:05:59.994268782+00:00 stderr F I0813 20:05:59.994170 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31955 2025-08-13T20:06:00.393736241+00:00 stderr F I0813 20:06:00.393620 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31955 2025-08-13T20:06:00.951252697+00:00 stderr F I0813 20:06:00.951148 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:01.398687769+00:00 stderr F I0813 20:06:01.398630 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964 2025-08-13T20:06:01.592664314+00:00 stderr F I0813 20:06:01.592543 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964 2025-08-13T20:06:01.994168851+00:00 stderr F I0813 20:06:01.994108 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964 2025-08-13T20:06:02.218265058+00:00 stderr F E0813 20:06:02.218209 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:02.379615469+00:00 stderr F I0813 20:06:02.379483 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:02.798964217+00:00 stderr F E0813 20:06:02.798852 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:03.280820726+00:00 stderr F I0813 20:06:03.280603 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:03.395710406+00:00 stderr F I0813 20:06:03.395587 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:03.590517874+00:00 stderr F I0813 20:06:03.590118 1 request.go:697] Waited for 1.06685142s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:06:03.793120646+00:00 stderr F I0813 20:06:03.792978 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:04.195684024+00:00 stderr F I0813 20:06:04.195558 1 helpers.go:184] lister was stale at resourceVersion=30552, live get 
showed resourceVersion=31969 2025-08-13T20:06:04.585346633+00:00 stderr F I0813 20:06:04.585242 1 request.go:697] Waited for 1.188418642s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:06:04.808918075+00:00 stderr F I0813 20:06:04.807539 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:05.201243302+00:00 stderr F I0813 20:06:05.201114 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:05.594123029+00:00 stderr F I0813 20:06:05.593970 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969 2025-08-13T20:06:05.796711801+00:00 stderr F I0813 20:06:05.796568 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.395088896+00:00 stderr F I0813 20:06:06.394551 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983 2025-08-13T20:06:06.395088896+00:00 stderr F E0813 20:06:06.394938 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:06.601353492+00:00 stderr F I0813 20:06:06.601259 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983 2025-08-13T20:06:06.793587567+00:00 stderr F I0813 20:06:06.793457 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983 2025-08-13T20:06:07.595002947+00:00 stderr F I0813 20:06:07.594288 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.200963119+00:00 stderr F I0813 20:06:08.199025 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.394736918+00:00 stderr F I0813 20:06:08.394611 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.395164010+00:00 stderr F E0813 20:06:08.395101 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:08.797241104+00:00 stderr F I0813 20:06:08.797136 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:08.802477464+00:00 stderr F I0813 20:06:08.802395 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:09.161467163+00:00 stderr F I0813 20:06:09.160116 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:09.214572484+00:00 stderr F I0813 20:06:09.213712 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:09.214572484+00:00 stderr F E0813 20:06:09.213961 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:09.684253444+00:00 stderr F I0813 20:06:09.684155 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:09.995109275+00:00 stderr F I0813 20:06:09.994982 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:09.995424974+00:00 stderr F E0813 20:06:09.995296 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:10.173123513+00:00 stderr F I0813 20:06:10.172991 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:10.914646817+00:00 stderr F I0813 20:06:10.912960 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:10.922015297+00:00 stderr F I0813 20:06:10.921943 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:11.096769872+00:00 stderr F I0813 20:06:11.096666 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:11.811653774+00:00 stderr F I0813 20:06:11.811553 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:11.812051776+00:00 stderr F E0813 20:06:11.811976 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:11.992449141+00:00 stderr F I0813 20:06:11.992313 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:12.193235221+00:00 stderr F I0813 20:06:12.193177 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:12.794165150+00:00 stderr F I0813 20:06:12.794056 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:13.251686431+00:00 stderr F I0813 20:06:13.251568 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:13.444129302+00:00 stderr F I0813 20:06:13.442353 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:13.444129302+00:00 stderr F E0813 20:06:13.442536 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: 
&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:13.994283917+00:00 stderr F I0813 20:06:13.994167 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:14.122405486+00:00 stderr F I0813 20:06:14.122004 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.187278824+00:00 stderr F I0813 20:06:14.187176 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.195468358+00:00 stderr F I0813 20:06:14.195350 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:14.533009894+00:00 stderr F I0813 20:06:14.532219 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.808445681+00:00 stderr F I0813 20:06:14.807930 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:15.003177808+00:00 stderr F I0813 20:06:15.002420 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:15.003177808+00:00 stderr F E0813 20:06:15.002886 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:15.794070796+00:00 stderr F I0813 20:06:15.793986 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:15.979400003+00:00 stderr F I0813 20:06:15.979279 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:06:15.997270335+00:00 stderr F I0813 20:06:15.997103 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:16.399074080+00:00 stderr F I0813 20:06:16.398917 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:16.399115161+00:00 stderr F E0813 20:06:16.399084 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:16.464236376+00:00 stderr F I0813 20:06:16.464109 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:06:16.794003089+00:00 stderr F I0813 20:06:16.793881 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.197062101+00:00 stderr F I0813 20:06:17.196924 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.197212995+00:00 stderr F E0813 20:06:17.197139 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:17.722846068+00:00 stderr F I0813 20:06:17.722676 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.993669173+00:00 stderr F I0813 20:06:17.993511 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.994296241+00:00 stderr F E0813 20:06:17.994159 1 
base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:18.438699257+00:00 stderr F I0813 20:06:18.438641 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:19.248248059+00:00 stderr F I0813 20:06:19.248041 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.343039114+00:00 stderr F I0813 20:06:19.342098 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:19.584885919+00:00 stderr F I0813 20:06:19.584401 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:19.598600272+00:00 stderr F E0813 20:06:19.598516 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.614885068+00:00 stderr F E0813 20:06:19.614706 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.635157919+00:00 stderr F E0813 20:06:19.635096 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.674930438+00:00 stderr F E0813 20:06:19.664489 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.748246817+00:00 stderr F I0813 20:06:19.746043 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.749856013+00:00 stderr F I0813 20:06:19.749722 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.803189150+00:00 stderr F E0813 20:06:19.794665 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.828946597+00:00 stderr F I0813 20:06:19.828433 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 
2025-08-13T20:06:19.880045051+00:00 stderr F E0813 20:06:19.879148 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:20.043557963+00:00 stderr F E0813 20:06:20.042559 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:20.366935763+00:00 stderr F E0813 20:06:20.366018 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:21.009144944+00:00 stderr F E0813 20:06:21.009081 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:21.609717192+00:00 stderr F I0813 20:06:21.609270 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.234066591+00:00 stderr F I0813 20:06:22.233754 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.287365577+00:00 stderr F I0813 20:06:22.287227 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:22.294139981+00:00 stderr F E0813 20:06:22.294044 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:23.078702238+00:00 stderr F I0813 20:06:23.078452 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:23.152022367+00:00 stderr F I0813 20:06:23.151737 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:23.191397335+00:00 stderr F I0813 20:06:23.191229 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:23.191462137+00:00 stderr F E0813 20:06:23.191446 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:24.745077965+00:00 stderr F I0813 20:06:24.744379 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.783302970+00:00 stderr F I0813 
20:06:24.783153 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.834289230+00:00 stderr F I0813 20:06:24.833331 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:24.847144218+00:00 stderr F E0813 20:06:24.847017 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.851570105+00:00 stderr F E0813 20:06:24.850085 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.857206446+00:00 stderr F E0813 20:06:24.857177 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.913851938+00:00 stderr F I0813 20:06:24.913718 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.951681371+00:00 stderr F I0813 20:06:24.951569 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:25.040631169+00:00 stderr F I0813 20:06:25.040492 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:25.314961334+00:00 stderr F I0813 20:06:25.312493 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:27.048497405+00:00 stderr F I0813 20:06:27.048413 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.215903255+00:00 stderr F I0813 20:06:28.215234 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.256019734+00:00 stderr F I0813 20:06:28.255448 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:28.292744106+00:00 stderr F I0813 20:06:28.292625 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:28.293143647+00:00 stderr F E0813 20:06:28.292841 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:28.414768040+00:00 stderr F I0813 20:06:28.414654 1 reflector.go:351] Caches 
populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.646161106+00:00 stderr F I0813 20:06:28.646046 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:29.686390835+00:00 stderr F I0813 20:06:29.685560 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:31.822085495+00:00 stderr F I0813 20:06:31.821356 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.153242831+00:00 stderr F I0813 20:06:33.153123 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.520317245+00:00 stderr F I0813 20:06:33.519759 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:33.575179558+00:00 stderr F I0813 20:06:33.573144 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:33.575179558+00:00 stderr F E0813 20:06:33.573340 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:33.576475315+00:00 stderr F I0813 20:06:33.576423 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:36.250033578+00:00 stderr F I0813 20:06:36.248480 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:36.270607538+00:00 stderr F I0813 20:06:36.268604 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:36.377326287+00:00 stderr F I0813 20:06:36.373692 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:36.458074342+00:00 stderr F I0813 20:06:36.457999 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:36.461352066+00:00 stderr F I0813 20:06:36.461272 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:36.967812537+00:00 stderr F I0813 20:06:36.967687 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:37.718851319+00:00 stderr F I0813 
20:06:37.715596 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:39.611042031+00:00 stderr F I0813 20:06:39.610212 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:40.082855298+00:00 stderr F I0813 20:06:40.082102 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:40.161935945+00:00 stderr F I0813 20:06:40.160274 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=32119 2025-08-13T20:06:40.192331217+00:00 stderr F I0813 20:06:40.191960 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=32119 2025-08-13T20:06:40.192331217+00:00 stderr F E0813 20:06:40.192162 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:42.073213563+00:00 stderr F I0813 20:06:42.073075 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.353942972+00:00 stderr F I0813 20:06:42.353658 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:42.369517168+00:00 stderr F I0813 20:06:42.369422 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", 
FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:42.424848485+00:00 stderr F I0813 20:06:42.422844 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from True to False ("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available 
on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" 2025-08-13T20:06:43.369853199+00:00 stderr F I0813 20:06:43.363073 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:43.560472214+00:00 stderr F I0813 20:06:43.559423 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.436295555+00:00 stderr F I0813 20:06:44.431154 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:44.744640726+00:00 stderr F I0813 20:06:44.743828 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:45.613246239+00:00 stderr F I0813 20:06:45.612552 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:45.923279647+00:00 stderr F I0813 20:06:45.922384 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") 2025-08-13T20:06:45.926984984+00:00 stderr F I0813 20:06:45.926915 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:47.027851337+00:00 stderr F I0813 20:06:47.027547 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:48.495591258+00:00 stderr F I0813 20:06:48.493729 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" 2025-08-13T20:06:48.541989388+00:00 stderr F I0813 
20:06:48.521692 1 helpers.go:184] lister was stale at resourceVersion=32195, live get showed resourceVersion=32202 2025-08-13T20:06:49.220022478+00:00 stderr F E0813 20:06:49.218626 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:49.263620898+00:00 stderr F I0813 20:06:49.259652 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:49.878502717+00:00 stderr F I0813 20:06:49.872820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" 2025-08-13T20:06:50.549607578+00:00 stderr F I0813 20:06:50.549511 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:50.963101364+00:00 stderr F I0813 20:06:50.962722 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.650859433+00:00 stderr F I0813 20:06:52.649631 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:53.395922365+00:00 stderr F I0813 20:06:53.395406 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:55.018081174+00:00 stderr F I0813 20:06:55.008754 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:00.290906550+00:00 stderr F I0813 20:07:00.269440 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:04.240339803+00:00 stderr F I0813 20:07:04.239152 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.713414188+00:00 stderr F I0813 20:07:05.712626 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.720681146+00:00 stderr F E0813 20:07:05.720598 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 
2025-08-13T20:07:05.732849615+00:00 stderr F E0813 20:07:05.732208 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:06.622944324+00:00 stderr F I0813 20:07:06.620393 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:06.814317541+00:00 stderr F E0813 20:07:06.814180 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:09.126943667+00:00 stderr F I0813 20:07:09.122061 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:14.841465356+00:00 stderr F E0813 20:07:14.835520 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:30.191761342+00:00 stderr F E0813 20:07:30.188131 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:58.905403376+00:00 stderr F I0813 20:07:58.896695 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:07:59.008467061+00:00 stderr F I0813 20:07:59.008386 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:59.042475026+00:00 stderr F I0813 20:07:59.042305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" to "All is well" 2025-08-13T20:08:24.045096453+00:00 stderr F E0813 20:08:24.042992 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.050072765+00:00 stderr F W0813 20:08:24.047200 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.051353532+00:00 stderr F E0813 
20:08:24.047264 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.061981107+00:00 stderr F E0813 20:08:24.060696 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.065391385+00:00 stderr F W0813 20:08:24.065222 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.065391385+00:00 stderr F E0813 20:08:24.065301 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.081278270+00:00 stderr F E0813 20:08:24.081184 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.090175325+00:00 stderr F W0813 20:08:24.090064 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.090175325+00:00 stderr F E0813 20:08:24.090129 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.115317196+00:00 stderr F E0813 20:08:24.115227 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.117149219+00:00 stderr F W0813 20:08:24.117072 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.117149219+00:00 stderr F E0813 20:08:24.117140 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.164695612+00:00 stderr F E0813 20:08:24.162356 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.168014307+00:00 stderr F W0813 20:08:24.165195 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.168014307+00:00 stderr F E0813 20:08:24.166593 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.256644158+00:00 stderr F E0813 20:08:24.256294 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371038608+00:00 stderr F W0813 20:08:24.370757 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371038608+00:00 stderr F E0813 20:08:24.370980 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.574927424+00:00 stderr F E0813 20:08:24.572225 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: 
connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:24.749282603+00:00 stderr F E0813 20:08:24.749180 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.771673885+00:00 stderr F E0813 20:08:24.771274 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.970065703+00:00 stderr F W0813 20:08:24.969294 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.970065703+00:00 stderr F E0813 20:08:24.969386 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:25.168003797+00:00 stderr F E0813 20:08:25.166709 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.367454255+00:00 stderr F E0813 20:08:25.367064 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.568685445+00:00 stderr F E0813 20:08:25.566403 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.769318827+00:00 stderr F W0813 20:08:25.767739 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.769318827+00:00 stderr F E0813 20:08:25.767873 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:25.970122625+00:00 stderr F E0813 20:08:25.968364 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.353991311+00:00 stderr F E0813 20:08:26.353083 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.378685579+00:00 stderr F E0813 20:08:26.378502 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.569700015+00:00 stderr F E0813 20:08:26.569648 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.769699879+00:00 stderr F E0813 20:08:26.767454 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.967452419+00:00 stderr F E0813 20:08:26.966467 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.165713454+00:00 stderr F E0813 20:08:27.165504 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.290641866+00:00 stderr F E0813 20:08:27.290541 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.303245497+00:00 stderr F E0813 20:08:27.303084 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.369988360+00:00 stderr F W0813 20:08:27.369166 1 
base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.369988360+00:00 stderr F E0813 20:08:27.369257 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:27.412205051+00:00 stderr F E0813 20:08:27.411140 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.582706679+00:00 stderr F E0813 20:08:27.582457 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.768069004+00:00 stderr F E0813 20:08:27.768009 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.812603321+00:00 stderr F E0813 20:08:27.812334 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.968471070+00:00 stderr F E0813 20:08:27.968174 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.169348299+00:00 stderr F E0813 20:08:28.168588 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.372013970+00:00 stderr F E0813 20:08:28.371098 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.413937132+00:00 stderr F E0813 20:08:28.412224 1 base_controller.go:268] 
NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.567555536+00:00 stderr F E0813 20:08:28.566500 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.768137066+00:00 stderr F E0813 20:08:28.767597 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.812500518+00:00 stderr F E0813 20:08:28.812421 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.966491243+00:00 stderr F E0813 20:08:28.966373 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:29.166768005+00:00 stderr F E0813 20:08:29.166653 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.211285252+00:00 stderr F E0813 20:08:29.211232 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.368611422+00:00 stderr F E0813 20:08:29.367387 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:08:29.568887134+00:00 stderr F E0813 20:08:29.568833 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.766863351+00:00 stderr F W0813 20:08:29.766470 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.766863351+00:00 stderr F E0813 20:08:29.766526 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:29.812843879+00:00 stderr F E0813 20:08:29.812659 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.968834921+00:00 stderr F E0813 20:08:29.968403 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.169648219+00:00 stderr F E0813 20:08:30.169492 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection 
refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:30.366019758+00:00 stderr F E0813 20:08:30.365931 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.455198815+00:00 stderr F E0813 20:08:30.455144 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.568202395+00:00 stderr F E0813 20:08:30.566868 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.688641188+00:00 stderr F E0813 20:08:30.688524 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.813959052+00:00 stderr F E0813 20:08:30.812976 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.012709651+00:00 stderr F E0813 20:08:31.011994 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.213289311+00:00 stderr F E0813 20:08:31.213022 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.413338887+00:00 stderr F E0813 20:08:31.413275 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.812176271+00:00 stderr F E0813 20:08:31.812067 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.015182451+00:00 stderr F E0813 20:08:32.015085 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.217369507+00:00 stderr F E0813 20:08:32.216942 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:32.330155181+00:00 stderr F E0813 20:08:32.330030 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.331628133+00:00 stderr F W0813 20:08:32.331548 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.331628133+00:00 stderr F E0813 20:08:32.331595 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:32.412567704+00:00 stderr F E0813 20:08:32.412458 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.501086511+00:00 stderr F E0813 20:08:32.500995 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.612261799+00:00 stderr F E0813 20:08:32.612160 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.813102527+00:00 stderr F E0813 20:08:32.813006 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.011336201+00:00 stderr F E0813 20:08:33.011169 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.300630165+00:00 stderr F E0813 20:08:33.300571 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.411134734+00:00 stderr F E0813 20:08:33.411019 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.016948782+00:00 stderr F E0813 20:08:34.016646 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:34.211524341+00:00 stderr F E0813 20:08:34.211467 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.979539251+00:00 stderr F E0813 20:08:34.978848 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.068993066+00:00 stderr F E0813 20:08:35.068533 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.494494035+00:00 stderr F E0813 20:08:35.494334 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.867394596+00:00 stderr F E0813 20:08:35.867290 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.331954435+00:00 stderr F E0813 20:08:36.330279 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.663480730+00:00 stderr F E0813 20:08:36.663423 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.937287141+00:00 stderr F E0813 20:08:36.937181 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:08:37.461235673+00:00 stderr F E0813 20:08:37.461092 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465001621+00:00 stderr F W0813 20:08:37.464585 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465001621+00:00 stderr F E0813 20:08:37.464647 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:38.057275622+00:00 stderr F E0813 20:08:38.056945 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.121127314+00:00 stderr F E0813 20:08:40.120651 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.202952520+00:00 stderr F E0813 20:08:40.202463 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.997972094+00:00 stderr F E0813 20:08:40.997415 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.873606559+00:00 stderr F E0813 20:08:41.873176 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.367147970+00:00 stderr F E0813 20:08:42.366964 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.388495 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F W0813 20:08:42.398454 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.399290 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.398772 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.413402036+00:00 stderr F E0813 20:08:42.413346 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.421702924+00:00 stderr F E0813 20:08:42.421659 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.582036081+00:00 stderr F E0813 20:08:42.581967 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.767459767+00:00 stderr F E0813 20:08:42.767286 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.169258266+00:00 stderr F E0813 20:08:43.168730 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.568299387+00:00 stderr F W0813 20:08:43.568007 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.568299387+00:00 stderr F E0813 20:08:43.568082 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.590563165+00:00 stderr F E0813 20:08:43.590408 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.770945797+00:00 stderr F E0813 20:08:43.770870 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.967106021+00:00 stderr F E0813 20:08:43.966929 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.169006020+00:00 stderr F E0813 20:08:44.167976 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.574914518+00:00 stderr F E0813 20:08:44.574749 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.969279875+00:00 stderr F E0813 20:08:44.969224 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:45.367978046+00:00 stderr F E0813 20:08:45.367418 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.578657656+00:00 stderr F E0813 20:08:45.570487 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.768164160+00:00 stderr F W0813 20:08:45.767463 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.768164160+00:00 stderr F E0813 20:08:45.767521 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.968257076+00:00 stderr F E0813 20:08:45.968187 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.174493719+00:00 stderr F E0813 20:08:46.173555 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.567953009+00:00 stderr F E0813 20:08:46.567318 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.967325470+00:00 stderr F E0813 20:08:46.967193 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.367953876+00:00 stderr F I0813 20:08:47.367503 1 request.go:697] Waited for 1.037123464s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:47.376156851+00:00 stderr F E0813 20:08:47.374092 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.569507115+00:00 stderr F E0813 20:08:47.569113 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.780139534+00:00 stderr F E0813 20:08:47.779160 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:47.969995228+00:00 stderr F W0813 20:08:47.969926 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.970092600+00:00 stderr F E0813 20:08:47.970078 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.173554544+00:00 stderr F E0813 20:08:48.173499 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.369102530+00:00 stderr F I0813 20:08:48.369006 1 request.go:697] Waited for 1.198369338s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:48.371757767+00:00 stderr F E0813 20:08:48.370471 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.573633485+00:00 stderr F E0813 20:08:48.573276 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.969884065+00:00 stderr F E0813 20:08:48.969666 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.168740957+00:00 stderr F E0813 20:08:49.168625 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.566419959+00:00 stderr F I0813 20:08:49.566331 1 request.go:697] Waited for 1.114601407s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-08-13T20:08:49.768094771+00:00 stderr F E0813 20:08:49.767501 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.968180258+00:00 stderr F W0813 20:08:49.968050 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.968180258+00:00 stderr F E0813 20:08:49.968111 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:50.172106924+00:00 stderr F E0813 20:08:50.171987 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.364823499+00:00 stderr F E0813 20:08:50.364571 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:50.367040633+00:00 stderr F E0813 20:08:50.366962 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.566413309+00:00 stderr F I0813 20:08:50.566355 1 request.go:697] Waited for 1.197345059s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:50.569850187+00:00 stderr F W0813 20:08:50.569463 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.569850187+00:00 stderr F E0813 20:08:50.569519 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.769344097+00:00 stderr F E0813 20:08:50.769278 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.369874175+00:00 stderr F E0813 20:08:51.367442 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.769272376+00:00 stderr F E0813 20:08:51.769161 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.968970392+00:00 stderr F W0813 20:08:51.968851 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.968970392+00:00 stderr F E0813 20:08:51.968945 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.170285694+00:00 stderr F E0813 20:08:52.169633 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.369885726+00:00 stderr F E0813 20:08:52.369128 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.769490233+00:00 stderr F E0813 20:08:52.767737 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.168273877+00:00 stderr F E0813 20:08:53.168091 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.370563587+00:00 stderr F E0813 20:08:53.370429 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.768486325+00:00 stderr F W0813 20:08:53.768341 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.768540916+00:00 stderr F E0813 20:08:53.768487 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.834438036+00:00 stderr F E0813 20:08:53.833611 1 base_controller.go:268] 
TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.168090622+00:00 stderr F E0813 20:08:54.167988 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.368919430+00:00 stderr F E0813 20:08:54.368570 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.574085232+00:00 stderr F E0813 20:08:54.573232 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:55.169695249+00:00 stderr F E0813 20:08:55.169436 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.370972760+00:00 stderr F W0813 20:08:55.370366 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.370972760+00:00 stderr F E0813 20:08:55.370465 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.768937660+00:00 stderr F E0813 20:08:55.768823 1 base_controller.go:268] 
PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.967508383+00:00 stderr F E0813 20:08:55.967410 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.368312545+00:00 stderr F W0813 20:08:56.368198 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.368312545+00:00 stderr F E0813 20:08:56.368250 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.568381761+00:00 stderr F E0813 20:08:56.568273 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.167729875+00:00 stderr F W0813 20:08:57.167605 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.167729875+00:00 stderr F E0813 20:08:57.167705 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.368017587+00:00 stderr F E0813 20:08:57.367766 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.570202373+00:00 stderr F E0813 20:08:57.570141 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.767974264+00:00 stderr F E0813 20:08:57.767746 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:58.367627167+00:00 stderr F E0813 20:08:58.367062 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.571766060+00:00 stderr F W0813 20:08:58.569885 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.571766060+00:00 stderr F E0813 20:08:58.570083 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.967660021+00:00 stderr F E0813 20:08:58.967524 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.293345769+00:00 stderr F E0813 20:09:00.292919 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.687833059+00:00 stderr F E0813 20:09:00.687656 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.931614558+00:00 stderr F E0813 20:09:00.931484 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:10.590964230+00:00 stderr F I0813 20:09:10.589417 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33015 2025-08-13T20:09:10.638054530+00:00 stderr F E0813 20:09:10.637972 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:28.814591308+00:00 stderr F I0813 20:09:28.813369 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:29.604256818+00:00 stderr F I0813 20:09:29.603837 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:30.178241024+00:00 stderr F I0813 20:09:30.178185 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:31.361714086+00:00 stderr F I0813 20:09:31.361450 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:32.362193281+00:00 stderr F I0813 20:09:32.361343 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:33.781685288+00:00 stderr F I0813 20:09:33.781545 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:34.400346116+00:00 stderr F I0813 20:09:34.399530 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33023 2025-08-13T20:09:34.437505131+00:00 stderr F I0813 20:09:34.437206 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33023 2025-08-13T20:09:34.437505131+00:00 stderr F E0813 20:09:34.437374 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:35.160181941+00:00 stderr F I0813 20:09:35.159682 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:35.160444489+00:00 stderr F I0813 20:09:35.160387 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:35Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:35.182949394+00:00 stderr F I0813 20:09:35.178589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", 
GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") 2025-08-13T20:09:36.183181381+00:00 stderr F I0813 20:09:36.182351 1 request.go:697] Waited for 1.014135186s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:09:37.383550266+00:00 stderr F I0813 20:09:37.383452 1 request.go:697] Waited for 1.267698556s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca 2025-08-13T20:09:37.849265269+00:00 stderr F I0813 20:09:37.849146 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:38.593693882+00:00 stderr F I0813 20:09:38.593476 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.600523548+00:00 stderr F E0813 20:09:38.600397 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.606486579+00:00 stderr F I0813 20:09:38.606344 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", 
Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.613476549+00:00 stderr F E0813 20:09:38.613406 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.625349050+00:00 stderr F I0813 20:09:38.625280 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.633824643+00:00 stderr F E0813 20:09:38.633739 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.655101473+00:00 stderr F I0813 20:09:38.655010 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:38.656756420+00:00 stderr F I0813 20:09:38.656525 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.664366438+00:00 stderr F E0813 20:09:38.664093 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.706020173+00:00 stderr F I0813 20:09:38.705672 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.727943461+00:00 stderr F E0813 20:09:38.727275 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.728329742+00:00 stderr F I0813 20:09:38.728299 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:09:38.808550282+00:00 stderr F I0813 20:09:38.808400 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.817439457+00:00 stderr F E0813 20:09:38.817411 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.979365520+00:00 stderr F I0813 20:09:38.979309 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube 
api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.993256238+00:00 stderr F E0813 20:09:38.993201 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.995624496+00:00 stderr F I0813 20:09:38.995515 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.074880778+00:00 stderr F I0813 20:09:39.072858 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.315513077+00:00 stderr F I0813 20:09:39.315405 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:39.321931551+00:00 stderr F E0813 20:09:39.321866 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:39.859586766+00:00 stderr F I0813 20:09:39.859233 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.965092921+00:00 stderr F I0813 20:09:39.964876 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:39.971887716+00:00 stderr F E0813 20:09:39.971853 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.590280895+00:00 stderr F I0813 20:09:40.588307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:40.721889528+00:00 stderr F I0813 20:09:40.719649 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:41.256895097+00:00 stderr F I0813 20:09:41.254858 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.272325010+00:00 stderr F E0813 20:09:41.270282 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.003101422+00:00 stderr F I0813 20:09:42.001555 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:42.762881406+00:00 stderr F I0813 20:09:42.760248 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:42.762881406+00:00 stderr F E0813 20:09:42.761447 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 
2025-08-13T20:09:42.768262710+00:00 stderr F I0813 20:09:42.767079 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:43.836652932+00:00 stderr F I0813 20:09:43.836540 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:43Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:43.858078695+00:00 stderr F E0813 20:09:43.853310 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:44.286400066+00:00 stderr F I0813 20:09:44.286234 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:44.439010921+00:00 stderr F I0813 20:09:44.438763 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:09:44.663102456+00:00 stderr F I0813 20:09:44.662930 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:45.580610793+00:00 stderr F I0813 20:09:45.579381 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:46.812063900+00:00 stderr F I0813 20:09:46.811882 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:46.854568888+00:00 stderr F I0813 20:09:46.854333 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:46Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:46.863504424+00:00 stderr F E0813 20:09:46.861719 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:47.264677707+00:00 stderr F I0813 20:09:47.264553 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:47.288824629+00:00 stderr F I0813 20:09:47.288681 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:47.736836453+00:00 stderr F I0813 20:09:47.736530 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.148161416+00:00 stderr F I0813 20:09:48.145409 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.975455756+00:00 stderr F I0813 20:09:48.974995 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:48Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:48.984041552+00:00 stderr F E0813 20:09:48.983644 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:49.615860927+00:00 stderr F I0813 20:09:49.615132 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:49Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", 
Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:49.623829645+00:00 stderr F E0813 20:09:49.621235 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:49.813302098+00:00 stderr F I0813 20:09:49.813240 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:51.188335851+00:00 stderr F I0813 20:09:51.187817 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:51.601003312+00:00 stderr F E0813 20:09:51.600623 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:52.643865962+00:00 stderr F I0813 20:09:52.641709 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.686666690+00:00 stderr F I0813 20:09:52.685997 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", 
APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.692047854+00:00 stderr F E0813 20:09:52.691981 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.054038612+00:00 stderr F I0813 20:09:53.053871 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:53.563500969+00:00 stderr F I0813 20:09:53.563367 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.364638189+00:00 stderr F I0813 20:09:54.361472 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:56.583068233+00:00 stderr F I0813 20:09:56.582319 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.815065265+00:00 stderr F I0813 20:09:57.814990 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:57.861014273+00:00 stderr F I0813 20:09:57.860957 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", 
APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:57.884623490+00:00 stderr F E0813 20:09:57.884536 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:59.212995395+00:00 stderr F I0813 20:09:59.212118 1 request.go:697] Waited for 1.156067834s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift 2025-08-13T20:10:00.017574593+00:00 stderr F I0813 20:10:00.016494 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:00.412400763+00:00 stderr F I0813 20:10:00.412021 1 request.go:697] Waited for 1.193054236s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift 2025-08-13T20:10:01.039028689+00:00 stderr F I0813 20:10:01.038959 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:01.414870635+00:00 stderr F I0813 20:10:01.414421 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:01.612929483+00:00 stderr F I0813 20:10:01.611468 1 request.go:697] Waited for 1.187769875s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication 2025-08-13T20:10:05.994944689+00:00 stderr F I0813 20:10:05.994001 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:08.314978607+00:00 stderr F I0813 20:10:08.314538 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:14.120859465+00:00 stderr F 
I0813 20:10:14.120610 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:20.593659503+00:00 stderr F I0813 20:10:20.593198 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:23.869260428+00:00 stderr F I0813 20:10:23.868864 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:25.514277572+00:00 stderr F I0813 20:10:25.514185 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:27.472026502+00:00 stderr F I0813 20:10:27.470139 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:10:30.920531234+00:00 stderr F I0813 20:10:30.917262 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:32.111634214+00:00 stderr F I0813 20:10:32.110131 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:32.326100383+00:00 stderr F I0813 20:10:32.324274 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:33.337597044+00:00 stderr F I0813 20:10:33.337494 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:34.581236090+00:00 stderr F I0813 20:10:34.580885 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:38.555504355+00:00 stderr F I0813 20:10:38.554902 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:38.978677758+00:00 stderr F I0813 20:10:38.978615 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:38.979649666+00:00 stderr F I0813 20:10:38.979578 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:38Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:38.996976132+00:00 stderr F I0813 20:10:38.995651 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:38Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:38.996976132+00:00 stderr F I0813 20:10:38.996248 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well",Available changed from False to True ("All is well") 2025-08-13T20:10:39.002536212+00:00 stderr F E0813 20:10:39.002389 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:39.865847024+00:00 stderr F I0813 20:10:39.865735 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:48.375629945+00:00 stderr F I0813 20:10:48.373525 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:29:36.584846903+00:00 stderr F I0813 20:29:36.583938 1 request.go:697] Waited for 1.136256551s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift 2025-08-13T20:42:36.642834533+00:00 stderr F E0813 20:42:36.641909 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.655063095+00:00 stderr F E0813 20:42:36.654734 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.836902268+00:00 stderr F E0813 20:42:36.836436 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.036862743+00:00 stderr F E0813 20:42:37.036597 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.160458276+00:00 stderr F E0813 20:42:37.159852 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.172532134+00:00 stderr F E0813 20:42:37.172300 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.190632706+00:00 stderr F E0813 20:42:37.190565 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.216130321+00:00 stderr F E0813 20:42:37.216044 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.235806149+00:00 stderr F E0813 20:42:37.235648 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.258745950+00:00 stderr F E0813 20:42:37.258690 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.447022458+00:00 stderr F E0813 20:42:37.444536 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.558196643+00:00 stderr F E0813 20:42:37.558098 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.644990536+00:00 stderr F E0813 20:42:37.643264 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.842356836+00:00 stderr F E0813 20:42:37.842114 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.851507320+00:00 stderr F E0813 20:42:37.850143 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.180712041+00:00 stderr F E0813 20:42:38.179735 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.380054948+00:00 stderr F E0813 20:42:38.379961 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.580030223+00:00 stderr F E0813 20:42:38.579936 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.980053556+00:00 stderr F E0813 20:42:38.979878 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.180444863+00:00 stderr F E0813 20:42:39.179891 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.579247880+00:00 stderr F E0813 20:42:39.578761 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.780079240+00:00 stderr F E0813 20:42:39.780027 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.979863310+00:00 stderr F E0813 20:42:39.979715 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.379862572+00:00 stderr F E0813 20:42:40.379683 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.979263443+00:00 stderr F E0813 20:42:40.979080 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.017829305+00:00 stderr F I0813 20:42:41.017697 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.019967157+00:00 stderr F I0813 20:42:41.019734 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.020199643+00:00 stderr F I0813 20:42:41.020132 1 base_controller.go:172] Shutting down ManagementStateController ... 
2025-08-13T20:42:41.020199643+00:00 stderr F I0813 20:42:41.020172 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.020349598+00:00 stderr F I0813 20:42:41.020265 1 base_controller.go:172] Shutting down OAuthClientsController ... 2025-08-13T20:42:41.020397429+00:00 stderr F I0813 20:42:41.020364 1 base_controller.go:172] Shutting down CustomRouteController ... 2025-08-13T20:42:41.020412520+00:00 stderr F I0813 20:42:41.020402 1 base_controller.go:172] Shutting down ProxyConfigController ... 2025-08-13T20:42:41.020630246+00:00 stderr F I0813 20:42:41.020566 1 base_controller.go:172] Shutting down OAuthServerRouteEndpointAccessibleController ... 2025-08-13T20:42:41.020630246+00:00 stderr F I0813 20:42:41.020614 1 base_controller.go:172] Shutting down WellKnownReadyController ... 2025-08-13T20:42:41.020989066+00:00 stderr F I0813 20:42:41.020682 1 base_controller.go:172] Shutting down PayloadConfig ... 2025-08-13T20:42:41.021255244+00:00 stderr F I0813 20:42:41.020046 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.021317106+00:00 stderr F I0813 20:42:41.020354 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.021331676+00:00 stderr F I0813 20:42:41.021315 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:41.021392778+00:00 stderr F I0813 20:42:41.021341 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:41.021580743+00:00 stderr F E0813 20:42:41.021519 1 base_controller.go:268] PayloadConfig reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.021734888+00:00 stderr F I0813 20:42:41.021576 1 base_controller.go:114] Shutting down worker of PayloadConfig controller ... 2025-08-13T20:42:41.021734888+00:00 stderr F I0813 20:42:41.021726 1 base_controller.go:172] Shutting down MetadataController ... 2025-08-13T20:42:41.021749498+00:00 stderr F I0813 20:42:41.021742 1 base_controller.go:172] Shutting down OAuthServerWorkloadController ... 2025-08-13T20:42:41.021897042+00:00 stderr F I0813 20:42:41.021758 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:41.021897042+00:00 stderr F I0813 20:42:41.021869 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:42:41.021916773+00:00 stderr F I0813 20:42:41.021886 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:41.021916773+00:00 stderr F I0813 20:42:41.021910 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:41.021938713+00:00 stderr F I0813 20:42:41.021924 1 base_controller.go:172] Shutting down IngressNodesAvailableController ... 2025-08-13T20:42:41.021951094+00:00 stderr F I0813 20:42:41.021937 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:41.021962624+00:00 stderr F I0813 20:42:41.021951 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:41.021974175+00:00 stderr F I0813 20:42:41.021964 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-oauth-apiserver ... 2025-08-13T20:42:41.021985925+00:00 stderr F I0813 20:42:41.021978 1 base_controller.go:172] Shutting down OAuthAPIServerControllerWorkloadController ... 
2025-08-13T20:42:41.022060777+00:00 stderr F E0813 20:42:41.022021 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022116439+00:00 stderr F I0813 20:42:41.022083 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:42:41.022128659+00:00 stderr F I0813 20:42:41.022119 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.022144009+00:00 stderr F I0813 20:42:41.022135 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:41.022156230+00:00 stderr F I0813 20:42:41.022149 1 base_controller.go:172] Shutting down OpenShiftAuthenticatorCertRequester ... 2025-08-13T20:42:41.022168380+00:00 stderr F I0813 20:42:41.022161 1 base_controller.go:172] Shutting down WebhookAuthenticatorController ... 2025-08-13T20:42:41.022181600+00:00 stderr F I0813 20:42:41.022175 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:42:41.022246472+00:00 stderr F I0813 20:42:41.022188 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:41.022351045+00:00 stderr F E0813 20:42:41.022301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.022351045+00:00 stderr F I0813 20:42:41.022344 1 base_controller.go:172] Shutting down RouterCertsDomainValidationController ... 2025-08-13T20:42:41.022732486+00:00 stderr F W0813 20:42:41.022662 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022732486+00:00 stderr F E0813 20:42:41.022709 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022735 1 base_controller.go:114] Shutting down worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022743 1 base_controller.go:104] All OAuthAPIServerControllerWorkloadController workers have been terminated 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022756 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022769 1 base_controller.go:172] Shutting down TrustDistributionController ... 2025-08-13T20:42:41.022853040+00:00 stderr F I0813 20:42:41.022838 1 base_controller.go:172] Shutting down OpenshiftAuthenticationStaticResources ... 2025-08-13T20:42:41.022863260+00:00 stderr F I0813 20:42:41.022856 1 base_controller.go:172] Shutting down ServiceCAController ... 2025-08-13T20:42:41.022886701+00:00 stderr F I0813 20:42:41.022871 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointsEndpointAccessibleController ... 2025-08-13T20:42:41.022902241+00:00 stderr F I0813 20:42:41.022888 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointAccessibleController ... 2025-08-13T20:42:41.022912292+00:00 stderr F I0813 20:42:41.022905 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 
2025-08-13T20:42:41.022953973+00:00 stderr F I0813 20:42:41.022920 1 base_controller.go:172] Shutting down StatusSyncer_authentication ... 2025-08-13T20:42:41.022953973+00:00 stderr F I0813 20:42:41.022946 1 base_controller.go:150] All StatusSyncer_authentication post start hooks have been terminated 2025-08-13T20:42:41.022970083+00:00 stderr F I0813 20:42:41.022962 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:42:41.022983014+00:00 stderr F I0813 20:42:41.022976 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_OpenShiftAuthenticator ... 2025-08-13T20:42:41.022995354+00:00 stderr F I0813 20:42:41.022988 1 base_controller.go:172] Shutting down IngressStateController ... 2025-08-13T20:42:41.023007654+00:00 stderr F I0813 20:42:41.023000 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:41.023017265+00:00 stderr F I0813 20:42:41.023008 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.023026905+00:00 stderr F I0813 20:42:41.023018 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:42:41.023036665+00:00 stderr F I0813 20:42:41.023026 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:42:41.023046405+00:00 stderr F I0813 20:42:41.023035 1 base_controller.go:114] Shutting down worker of CustomRouteController controller ... 2025-08-13T20:42:41.023046405+00:00 stderr F I0813 20:42:41.023041 1 base_controller.go:104] All CustomRouteController workers have been terminated 2025-08-13T20:42:41.023059016+00:00 stderr F I0813 20:42:41.023051 1 base_controller.go:114] Shutting down worker of ProxyConfigController controller ... 2025-08-13T20:42:41.023101647+00:00 stderr F I0813 20:42:41.023071 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:42:41.023113937+00:00 stderr F I0813 20:42:41.023100 1 base_controller.go:114] Shutting down worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:42:41.023113937+00:00 stderr F I0813 20:42:41.023108 1 base_controller.go:104] All OAuthServerRouteEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023123998+00:00 stderr F I0813 20:42:41.023117 1 base_controller.go:114] Shutting down worker of WellKnownReadyController controller ... 2025-08-13T20:42:41.023134698+00:00 stderr F I0813 20:42:41.023124 1 base_controller.go:104] All WellKnownReadyController workers have been terminated 2025-08-13T20:42:41.023146558+00:00 stderr F I0813 20:42:41.023135 1 base_controller.go:114] Shutting down worker of MetadataController controller ... 2025-08-13T20:42:41.023146558+00:00 stderr F I0813 20:42:41.023141 1 base_controller.go:104] All MetadataController workers have been terminated 2025-08-13T20:42:41.023159119+00:00 stderr F I0813 20:42:41.023150 1 base_controller.go:114] Shutting down worker of OAuthServerWorkloadController controller ... 2025-08-13T20:42:41.023171099+00:00 stderr F I0813 20:42:41.023157 1 base_controller.go:104] All OAuthServerWorkloadController workers have been terminated 2025-08-13T20:42:41.023171099+00:00 stderr F I0813 20:42:41.023165 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 
2025-08-13T20:42:41.023183889+00:00 stderr F I0813 20:42:41.023172 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:42:41.023183889+00:00 stderr F I0813 20:42:41.023180 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023185 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023193 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023197 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:42:41.023213090+00:00 stderr F I0813 20:42:41.023203 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:42:41.023213090+00:00 stderr F I0813 20:42:41.023209 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:42:41.023255831+00:00 stderr F I0813 20:42:41.023216 1 base_controller.go:114] Shutting down worker of IngressNodesAvailableController controller ... 2025-08-13T20:42:41.023268572+00:00 stderr F I0813 20:42:41.023223 1 base_controller.go:104] All IngressNodesAvailableController workers have been terminated 2025-08-13T20:42:41.023280872+00:00 stderr F I0813 20:42:41.023272 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:42:41.023290522+00:00 stderr F I0813 20:42:41.023281 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:42:41.023302743+00:00 stderr F I0813 20:42:41.023294 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:41.023312523+00:00 stderr F I0813 20:42:41.023302 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:41.023322243+00:00 stderr F I0813 20:42:41.023311 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T20:42:41.023322243+00:00 stderr F I0813 20:42:41.023318 1 base_controller.go:104] All NamespaceFinalizerController_openshift-oauth-apiserver workers have been terminated 2025-08-13T20:42:41.023368995+00:00 stderr F I0813 20:42:41.023330 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:42:41.023393075+00:00 stderr F I0813 20:42:41.023378 1 base_controller.go:114] Shutting down worker of RouterCertsDomainValidationController controller ... 2025-08-13T20:42:41.023403416+00:00 stderr F I0813 20:42:41.023390 1 base_controller.go:104] All RouterCertsDomainValidationController workers have been terminated 2025-08-13T20:42:41.023413076+00:00 stderr F I0813 20:42:41.023400 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:41.023413076+00:00 stderr F I0813 20:42:41.023409 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.023423086+00:00 stderr F I0813 20:42:41.023417 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 
2025-08-13T20:42:41.023432877+00:00 stderr F I0813 20:42:41.023424 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:42:41.023442607+00:00 stderr F I0813 20:42:41.023431 1 base_controller.go:114] Shutting down worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:42:41.023442607+00:00 stderr F I0813 20:42:41.023438 1 base_controller.go:104] All OpenShiftAuthenticatorCertRequester workers have been terminated 2025-08-13T20:42:41.023452487+00:00 stderr F I0813 20:42:41.023445 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorController controller ... 2025-08-13T20:42:41.023462267+00:00 stderr F I0813 20:42:41.023451 1 base_controller.go:104] All WebhookAuthenticatorController workers have been terminated 2025-08-13T20:42:41.023462267+00:00 stderr F I0813 20:42:41.023458 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:42:41.023477468+00:00 stderr F I0813 20:42:41.023468 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:41.023477468+00:00 stderr F I0813 20:42:41.023474 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.023487358+00:00 stderr F I0813 20:42:41.023481 1 base_controller.go:114] Shutting down worker of TrustDistributionController controller ... 2025-08-13T20:42:41.023497168+00:00 stderr F I0813 20:42:41.023486 1 base_controller.go:104] All TrustDistributionController workers have been terminated 2025-08-13T20:42:41.023497168+00:00 stderr F I0813 20:42:41.023492 1 base_controller.go:114] Shutting down worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T20:42:41.023507299+00:00 stderr F I0813 20:42:41.023498 1 base_controller.go:104] All OpenshiftAuthenticationStaticResources workers have been terminated 2025-08-13T20:42:41.023516869+00:00 stderr F I0813 20:42:41.023505 1 base_controller.go:114] Shutting down worker of ServiceCAController controller ... 2025-08-13T20:42:41.023516869+00:00 stderr F I0813 20:42:41.023510 1 base_controller.go:104] All ServiceCAController workers have been terminated 2025-08-13T20:42:41.023526879+00:00 stderr F I0813 20:42:41.023516 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T20:42:41.023526879+00:00 stderr F I0813 20:42:41.023521 1 base_controller.go:104] All OAuthServerServiceEndpointsEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023536920+00:00 stderr F I0813 20:42:41.023528 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T20:42:41.023536920+00:00 stderr F I0813 20:42:41.023533 1 base_controller.go:104] All OAuthServerServiceEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023546910+00:00 stderr F I0813 20:42:41.023539 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:42:41.023556760+00:00 stderr F I0813 20:42:41.023544 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:41.023556760+00:00 stderr F I0813 20:42:41.023551 1 base_controller.go:114] Shutting down worker of StatusSyncer_authentication controller ... 
2025-08-13T20:42:41.023566880+00:00 stderr F I0813 20:42:41.023557 1 base_controller.go:104] All StatusSyncer_authentication workers have been terminated 2025-08-13T20:42:41.023576481+00:00 stderr F I0813 20:42:41.023564 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:42:41.023576481+00:00 stderr F I0813 20:42:41.023570 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:42:41.023586671+00:00 stderr F I0813 20:42:41.023576 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 2025-08-13T20:42:41.023586671+00:00 stderr F I0813 20:42:41.023582 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_OpenShiftAuthenticator workers have been terminated 2025-08-13T20:42:41.023596761+00:00 stderr F I0813 20:42:41.023588 1 base_controller.go:114] Shutting down worker of IngressStateController controller ... 2025-08-13T20:42:41.023596761+00:00 stderr F I0813 20:42:41.023593 1 base_controller.go:104] All IngressStateController workers have been terminated 2025-08-13T20:42:41.023606772+00:00 stderr F I0813 20:42:41.023598 1 base_controller.go:104] All ProxyConfigController workers have been terminated 2025-08-13T20:42:41.023606772+00:00 stderr F I0813 20:42:41.023603 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:42:41.028196864+00:00 stderr F I0813 20:42:41.028132 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.028196864+00:00 stderr F I0813 20:42:41.028167 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:42:41.028868273+00:00 stderr F I0813 20:42:41.028687 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029112 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029153 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029161 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029690 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029707 1 base_controller.go:104] All PayloadConfig workers have been terminated 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029748 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029898 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.030945 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.030972 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:41.031627873+00:00 stderr F I0813 20:42:41.031516 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:41.032096826+00:00 stderr F I0813 20:42:41.032020 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.032726745+00:00 stderr F I0813 20:42:41.032663 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.033029423+00:00 stderr F I0813 20:42:41.032918 1 builder.go:329] server exited 2025-08-13T20:42:41.033148777+00:00 stderr F I0813 20:42:41.033087 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.033428035+00:00 stderr F I0813 20:42:41.033141 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:41.033592650+00:00 stderr F I0813 20:42:41.033525 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.033738504+00:00 stderr F I0813 20:42:41.033702 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:41.034633180+00:00 stderr F E0813 20:42:41.034423 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.035723531+00:00 stderr F W0813 20:42:41.035649 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000065061415073043233033150 0ustar zuulzuul2025-08-13T19:59:06.260034229+00:00 stdout F Copying system trust bundle 2025-08-13T19:59:22.590102599+00:00 stderr F W0813 19:59:22.588695 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file 
or directory 2025-08-13T19:59:22.640260319+00:00 stderr F I0813 19:59:22.638079 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:22.834208448+00:00 stderr F I0813 19:59:22.833239 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T19:59:22.848578007+00:00 stderr F I0813 19:59:22.834682 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:22.881456264+00:00 stderr F I0813 19:59:22.881027 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:25.607941583+00:00 stderr F I0813 19:59:25.589342 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2025-08-13T19:59:25.893253336+00:00 stderr F I0813 19:59:25.891501 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:33.373549473+00:00 stderr F I0813 19:59:33.372947 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:33.668078119+00:00 stderr F I0813 19:59:33.631417 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:33.672148474+00:00 stderr F I0813 19:59:33.672098 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:33.691128516+00:00 stderr F I0813 19:59:33.690001 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:33.691409884+00:00 stderr F I0813 19:59:33.691263 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:34.000225697+00:00 stderr F I0813 19:59:33.999685 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:34.000225697+00:00 stderr F I0813 19:59:33.999969 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:34.000225697+00:00 stderr F W0813 19:59:33.999985 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:34.000225697+00:00 stderr F W0813 19:59:33.999991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:34.141756031+00:00 stderr F I0813 19:59:34.141160 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:34.141756031+00:00 stderr F I0813 19:59:34.141651 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:34.172037764+00:00 stderr F I0813 19:59:34.171578 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:34.172037764+00:00 stderr F I0813 19:59:34.171693 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:34.175457882+00:00 stderr F I0813 19:59:34.173635 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.254901 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:34.254658649 +0000 UTC))" 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255474 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:34.255444882 +0000 UTC))" 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255505 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255662 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255684 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:34.362964547+00:00 stderr F I0813 19:59:34.362912 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.411506 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.412464 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock... 
2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.230255 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:34.593040415+00:00 stderr F I0813 19:59:34.468477 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:34.631076069+00:00 stderr F I0813 19:59:34.631009 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:34.630966396 +0000 UTC))" 2025-08-13T19:59:34.631178092+00:00 stderr F I0813 19:59:34.631155 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:34.631138831 +0000 UTC))" 2025-08-13T19:59:34.631231254+00:00 stderr F I0813 19:59:34.631216 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:34.631199473 +0000 UTC))" 2025-08-13T19:59:34.682004001+00:00 stderr F I0813 19:59:34.681926 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:34.631250714 +0000 UTC))" 2025-08-13T19:59:34.685440619+00:00 stderr F I0813 19:59:34.685412 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685362857 +0000 UTC))" 2025-08-13T19:59:34.685525151+00:00 stderr F I0813 19:59:34.685506 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.68548534 +0000 UTC))" 2025-08-13T19:59:34.685711317+00:00 stderr F I0813 19:59:34.685608 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685587803 +0000 UTC))" 2025-08-13T19:59:34.685770398+00:00 stderr F I0813 19:59:34.685751 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685735487 +0000 UTC))" 2025-08-13T19:59:34.693335204+00:00 stderr F I0813 19:59:34.693308 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:34.693280732 +0000 UTC))" 2025-08-13T19:59:34.709294179+00:00 stderr F I0813 19:59:34.709228 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:34.709179716 +0000 UTC))" 2025-08-13T19:59:34.975225909+00:00 stderr F I0813 19:59:34.975151 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T19:59:35.045131102+00:00 stderr F I0813 19:59:35.045053 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.045484022+00:00 stderr F E0813 19:59:35.045451 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.045567465+00:00 stderr F E0813 19:59:35.045546 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.046629675+00:00 stderr F I0813 19:59:35.046602 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock 2025-08-13T19:59:35.076158477+00:00 stderr F E0813 19:59:35.069426 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.076494866+00:00 stderr F E0813 19:59:35.076466 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:35.076594539+00:00 stderr F I0813 19:59:35.069664 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28178", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_6afa03f3-b0bc-43af-aa46-6f8b0362eaa6 became leader 2025-08-13T19:59:35.125263296+00:00 stderr F E0813 19:59:35.110643 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.130622869+00:00 stderr F E0813 19:59:35.130220 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.172244315+00:00 stderr F E0813 19:59:35.172183 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.172356189+00:00 stderr F E0813 19:59:35.172339 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.214627814+00:00 stderr F E0813 19:59:35.214570 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.214826349+00:00 stderr F E0813 19:59:35.214763 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.302565860+00:00 stderr F E0813 19:59:35.302503 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.302658613+00:00 stderr F E0813 19:59:35.302645 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.475498699+00:00 stderr F E0813 19:59:35.475207 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.475601372+00:00 stderr F E0813 19:59:35.475584 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.153495055+00:00 stderr F I0813 19:59:36.144494 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:36.153495055+00:00 stderr F E0813 19:59:36.147098 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:36.153495055+00:00 stderr F E0813 19:59:36.147158 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.156144171+00:00 stderr F I0813 19:59:36.155212 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T19:59:36.270210032+00:00 stderr F I0813 19:59:36.270060 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:36.796701340+00:00 stderr F E0813 19:59:36.795870 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.796701340+00:00 stderr F E0813 19:59:36.796317 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.147242088+00:00 stderr F E0813 19:59:38.147176 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:38.156567364+00:00 stderr F E0813 19:59:38.156543 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:40.736104134+00:00 stderr F E0813 19:59:40.720145 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:40.736104134+00:00 stderr F E0813 19:59:40.725657 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.370973406+00:00 stderr F I0813 19:59:42.328014 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:42.370973406+00:00 stderr F I0813 19:59:42.357328 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:42.624090121+00:00 stderr F I0813 19:59:42.581157 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:42.624090121+00:00 stderr F W0813 19:59:42.581689 1 reflector.go:539] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:42.624090121+00:00 stderr F E0813 19:59:42.581720 1 reflector.go:147] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:42.636923546+00:00 stderr F W0813 19:59:42.636451 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 
2025-08-13T19:59:42.636923546+00:00 stderr F E0813 19:59:42.636577 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:42.636923546+00:00 stderr F I0813 19:59:42.636629 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController 2025-08-13T19:59:43.927570097+00:00 stderr F I0813 19:59:43.924584 1 request.go:697] Waited for 1.626886194s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/nodes?limit=500&resourceVersion=0 2025-08-13T19:59:43.983752688+00:00 stderr F I0813 19:59:43.977926 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.029193893+00:00 stderr F I0813 19:59:42.347523 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261391 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261449 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261463 1 base_controller.go:67] Waiting for caches to sync for MetadataController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261476 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261487 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261499 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController 2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261508 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController 2025-08-13T19:59:44.285028176+00:00 stderr F I0813 19:59:44.284967 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.313641902+00:00 stderr F I0813 19:59:44.308901 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController 2025-08-13T19:59:44.344395498+00:00 stderr F I0813 19:59:44.330525 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.357344547+00:00 stderr F I0813 19:59:44.357293 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.379923801+00:00 stderr F I0813 19:59:44.364104 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.380647502+00:00 stderr F I0813 19:59:44.380612 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.381414253+00:00 stderr F I0813 19:59:44.381391 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.382467663+00:00 stderr F I0813 19:59:44.382438 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.419523020+00:00 stderr F I0813 19:59:44.419457 1 reflector.go:351] Caches populated for *v1.RoleBinding from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.420504798+00:00 stderr F I0813 19:59:44.420473 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.510409980+00:00 stderr F I0813 19:59:44.510343 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:44.540482718+00:00 stderr F I0813 19:59:44.538576 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.561860267+00:00 stderr F I0813 19:59:44.548935 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.561860267+00:00 stderr F I0813 19:59:44.550243 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.695884888+00:00 stderr F I0813 19:59:44.670212 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T19:59:44.695884888+00:00 stderr F I0813 19:59:44.670327 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-08-13T19:59:44.704042670+00:00 stderr F I0813 19:59:44.697017 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.721011894+00:00 stderr F I0813 19:59:44.720828 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.725898 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.755682 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.770326 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.770412 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:44.885173573+00:00 stderr F I0813 19:59:44.881585 1 trace.go:236] Trace[193441872]: "DeltaFIFO Pop Process" ID:default,Depth:63,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.293) (total time: 510ms): 2025-08-13T19:59:44.885173573+00:00 stderr F Trace[193441872]: [510.729819ms] [510.729819ms] END 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.548496 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.549220 1 request.go:697] Waited for 3.178596966s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.549388 1 trace.go:236] Trace[1372478155]: "DeltaFIFO Pop Process" ID:csr-5phj9,Depth:13,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.756) (total time: 793ms): 2025-08-13T19:59:45.549647464+00:00 stderr F Trace[1372478155]: [793.1724ms] [793.1724ms] END 2025-08-13T19:59:45.554639617+00:00 stderr F I0813 19:59:45.554418 1 trace.go:236] Trace[1425268458]: "DeltaFIFO Pop Process" ID:kube-node-lease,Depth:61,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.935) (total time: 614ms): 2025-08-13T19:59:45.554639617+00:00 stderr F Trace[1425268458]: [614.132736ms] [614.132736ms] END 2025-08-13T19:59:45.798581691+00:00 stderr F I0813 19:59:45.793346 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:45.807958558+00:00 stderr F I0813 19:59:45.803328 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:45.814158995+00:00 stderr F I0813 19:59:45.814111 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T19:59:45.832557879+00:00 stderr F I0813 19:59:45.832188 1 trace.go:236] Trace[3213281]: "DeltaFIFO Pop Process" ID:openshift-apiserver,Depth:57,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.554) (total time: 277ms): 2025-08-13T19:59:45.832557879+00:00 stderr F Trace[3213281]: [277.296235ms] [277.296235ms] END 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834510 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834641 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834710 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T19:59:45.834761962+00:00 stderr F I0813 19:59:45.834730 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2025-08-13T19:59:45.834761962+00:00 stderr F I0813 19:59:45.834753 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2025-08-13T19:59:45.846495966+00:00 stderr F I0813 19:59:45.834770 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2025-08-13T19:59:45.846495966+00:00 stderr F I0813 19:59:45.834881 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:45.945129288+00:00 stderr F E0813 19:59:45.935665 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.100202093+00:00 stderr F E0813 19:59:47.090732 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.202653813+00:00 stderr F I0813 19:59:47.200413 1 trace.go:236] Trace[1441772173]: "DeltaFIFO Pop Process" ID:openshift-etcd,Depth:37,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.835) (total time: 1364ms): 2025-08-13T19:59:47.202653813+00:00 stderr F Trace[1441772173]: [1.364812394s] [1.364812394s] END 2025-08-13T19:59:47.270733784+00:00 stderr F I0813 19:59:47.270612 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:47.317994751+00:00 stderr F I0813 19:59:47.307542 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:47.899932630+00:00 stderr F I0813 19:59:47.627693 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683396 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683427 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103115 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103128 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683454 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103149 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103165 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683522 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683546 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.105971 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.105982 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.719088 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.719577 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.738197 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.707766 1 request.go:697] Waited for 5.044216967s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces?limit=500&resourceVersion=0 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762473 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762492 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762535 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107562 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107570 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.809319 1 base_controller.go:67] Waiting for caches to sync for IngressStateController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107630 1 base_controller.go:73] Caches are synced for IngressStateController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107637 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ... 
2025-08-13T19:59:48.110215605+00:00 stderr F I0813 19:59:48.109487 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController 2025-08-13T19:59:48.110238695+00:00 stderr F I0813 19:59:47.809389 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.110291597+00:00 stderr F I0813 19:59:48.110276 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T19:59:48.110433441+00:00 stderr F I0813 19:59:48.110414 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.110459 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.110468 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.109504 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:48.110533554+00:00 stderr F I0813 19:59:48.099900 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110532 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.101422 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110559 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.101513 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110578 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ... 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.102613 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110597 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110602 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ... 
2025-08-13T19:59:48.110675808+00:00 stderr F I0813 19:59:48.110655 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T19:59:48.110725059+00:00 stderr F I0813 19:59:48.110712 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T19:59:48.110767590+00:00 stderr F I0813 19:59:48.110755 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T19:59:48.111398048+00:00 stderr F I0813 19:59:48.110480 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T19:59:48.111567353+00:00 stderr F E0813 19:59:48.111464 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:48.112250462+00:00 stderr F I0813 19:59:48.112225 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.112636423+00:00 stderr F I0813 19:59:48.112552 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::IngressStateEndpoints_NonReadyEndpoints::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection 
refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:48.113512278+00:00 stderr F I0813 19:59:48.113444 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T19:59:48.113564680+00:00 stderr F I0813 19:59:48.113544 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T19:59:48.113602291+00:00 stderr F I0813 19:59:48.113589 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T19:59:48.221219729+00:00 stderr F E0813 19:59:48.219539 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:48.945052413+00:00 stderr F E0813 19:59:48.944337 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:48.969533311+00:00 stderr F E0813 19:59:48.969473 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:49.011041914+00:00 stderr F E0813 19:59:49.010539 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:49.016328685+00:00 stderr F I0813 19:59:49.016267 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T19:59:49.052701932+00:00 stderr F I0813 19:59:49.052634 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:51.339436357+00:00 stderr F I0813 19:59:51.338201 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.913025 1 trace.go:236] Trace[1463818210]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10542ms): 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1463818210]: ---"Objects listed" error: 10542ms (19:59:52.912) 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1463818210]: [10.542314785s] [10.542314785s] END 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.913746 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.914537 1 trace.go:236] Trace[1770587185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10721ms): 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1770587185]: ---"Objects listed" error: 10721ms (19:59:52.914) 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1770587185]: [10.721248335s] [10.721248335s] END 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.914546 1 reflector.go:351] Caches populated for *v1.Secret 
from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.915093852+00:00 stderr F W0813 19:59:52.914686 1 reflector.go:539] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:52.915093852+00:00 stderr F E0813 19:59:52.914706 1 reflector.go:147] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:52.990222764+00:00 stderr F I0813 19:59:52.990163 1 trace.go:236] Trace[770957654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10796ms): 2025-08-13T19:59:52.990222764+00:00 stderr F Trace[770957654]: ---"Objects listed" error: 10796ms (19:59:52.990) 2025-08-13T19:59:52.990222764+00:00 stderr F Trace[770957654]: [10.796764708s] [10.796764708s] END 2025-08-13T19:59:52.990287066+00:00 stderr F I0813 19:59:52.990272 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.990656876+00:00 stderr F I0813 19:59:52.990634 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.991002756+00:00 stderr F I0813 19:59:52.990974 1 trace.go:236] Trace[300001418]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10797ms): 2025-08-13T19:59:52.991002756+00:00 stderr F Trace[300001418]: ---"Objects listed" error: 10797ms (19:59:52.990) 2025-08-13T19:59:52.991002756+00:00 stderr F Trace[300001418]: [10.79755446s] [10.79755446s] END 2025-08-13T19:59:52.991061928+00:00 stderr F I0813 19:59:52.991044 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.003764270+00:00 stderr F I0813 19:59:52.993586 1 trace.go:236] Trace[979236584]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.663) (total time: 10329ms): 2025-08-13T19:59:53.003764270+00:00 stderr F Trace[979236584]: ---"Objects listed" error: 10329ms (19:59:52.993) 2025-08-13T19:59:53.003764270+00:00 stderr F Trace[979236584]: [10.329986423s] [10.329986423s] END 2025-08-13T19:59:53.003764270+00:00 stderr F I0813 19:59:52.993626 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.004158351+00:00 stderr F I0813 19:59:53.004124 1 trace.go:236] Trace[86867999]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10633ms): 2025-08-13T19:59:53.004158351+00:00 stderr F Trace[86867999]: ---"Objects listed" error: 10633ms (19:59:53.004) 2025-08-13T19:59:53.004158351+00:00 stderr F Trace[86867999]: [10.633566795s] [10.633566795s] END 2025-08-13T19:59:53.004207972+00:00 stderr F I0813 19:59:53.004192 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.004401748+00:00 stderr F I0813 19:59:53.004378 1 trace.go:236] Trace[328557283]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10811ms): 2025-08-13T19:59:53.004401748+00:00 stderr F Trace[328557283]: ---"Objects listed" error: 
10811ms (19:59:53.004) 2025-08-13T19:59:53.004401748+00:00 stderr F Trace[328557283]: [10.811036734s] [10.811036734s] END 2025-08-13T19:59:53.004457219+00:00 stderr F I0813 19:59:53.004443 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.005326104+00:00 stderr F W0813 19:59:53.005292 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.005426517+00:00 stderr F E0813 19:59:53.005405 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.006176029+00:00 stderr F I0813 19:59:53.006146 1 trace.go:236] Trace[970424958]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10812ms): 2025-08-13T19:59:53.006176029+00:00 stderr F Trace[970424958]: ---"Objects listed" error: 10812ms (19:59:53.006) 2025-08-13T19:59:53.006176029+00:00 stderr F Trace[970424958]: [10.812689222s] [10.812689222s] END 2025-08-13T19:59:53.006235890+00:00 stderr F I0813 19:59:53.006217 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.007204038+00:00 stderr F I0813 19:59:53.007172 1 trace.go:236] Trace[1873105387]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10636ms): 2025-08-13T19:59:53.007204038+00:00 stderr F Trace[1873105387]: ---"Objects listed" error: 10636ms (19:59:53.007) 2025-08-13T19:59:53.007204038+00:00 stderr F Trace[1873105387]: [10.636448808s] [10.636448808s] END 2025-08-13T19:59:53.007264810+00:00 stderr F I0813 19:59:53.007246 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.007769564+00:00 stderr F I0813 19:59:53.007746 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T19:59:53.028289059+00:00 stderr F I0813 19:59:53.028255 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T19:59:53.109228056+00:00 stderr F I0813 19:59:53.109117 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T19:59:53.109352030+00:00 stderr F I0813 19:59:53.109330 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
2025-08-13T19:59:53.111984805+00:00 stderr F I0813 19:59:53.111953 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:53.111911542 +0000 UTC))" 2025-08-13T19:59:53.308283359+00:00 stderr F I0813 19:59:53.269501 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:53.269156764 +0000 UTC))" 2025-08-13T19:59:53.362028301+00:00 stderr F I0813 19:59:53.154295 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:53.362443493+00:00 stderr F I0813 19:59:53.154593 1 trace.go:236] Trace[1108983851]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.297) (total time: 10856ms): 2025-08-13T19:59:53.362443493+00:00 stderr F Trace[1108983851]: ---"Objects listed" error: 10856ms (19:59:53.154) 2025-08-13T19:59:53.362443493+00:00 stderr F Trace[1108983851]: [10.856865691s] [10.856865691s] END 2025-08-13T19:59:53.362521245+00:00 stderr F I0813 19:59:53.156190 1 trace.go:236] Trace[133450871]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.663) (total time: 10492ms): 2025-08-13T19:59:53.362521245+00:00 stderr F Trace[133450871]: ---"Objects listed" error: 10492ms (19:59:53.156) 2025-08-13T19:59:53.362521245+00:00 stderr F Trace[133450871]: [10.492752323s] [10.492752323s] END 2025-08-13T19:59:53.439980033+00:00 stderr F I0813 19:59:53.258277 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:53.444960535+00:00 stderr F I0813 19:59:53.270449 1 trace.go:236] Trace[1116695782]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 11076ms): 2025-08-13T19:59:53.444960535+00:00 stderr F Trace[1116695782]: ---"Objects listed" error: 11076ms (19:59:53.270) 2025-08-13T19:59:53.444960535+00:00 stderr F Trace[1116695782]: [11.076712457s] [11.076712457s] END 2025-08-13T19:59:53.445165581+00:00 stderr F I0813 19:59:53.310605 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get 
routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") 2025-08-13T19:59:53.445221433+00:00 stderr F I0813 19:59:53.385293 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.385214752 +0000 UTC))" 2025-08-13T19:59:53.445250794+00:00 stderr F I0813 19:59:53.385330 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.445344716+00:00 stderr F I0813 19:59:53.439913 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::IngressStateEndpoints_NonReadyEndpoints::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.445501421+00:00 stderr F I0813 19:59:53.441443 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.445530422+00:00 stderr F I0813 19:59:53.443214 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:53.445640545+00:00 stderr F I0813 19:59:53.443260 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443279 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443295 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443330 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.479112 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.479188 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.47914662 +0000 UTC))" 2025-08-13T19:59:53.504193694+00:00 stderr F I0813 19:59:53.504129 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:53.505556923+00:00 stderr F I0813 19:59:53.504744 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ... 2025-08-13T19:59:53.505706487+00:00 stderr F I0813 19:59:53.504752 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T19:59:53.506406407+00:00 stderr F I0813 19:59:53.504758 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T19:59:53.548425305+00:00 stderr F I0813 19:59:53.504769 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ... 
2025-08-13T19:59:53.548966790+00:00 stderr F I0813 19:59:53.528316 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.493298023 +0000 UTC))" 2025-08-13T19:59:53.549686591+00:00 stderr F I0813 19:59:53.549596 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.549489755 +0000 UTC))" 2025-08-13T19:59:53.549759463+00:00 stderr F I0813 19:59:53.528426 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T19:59:53.550024050+00:00 stderr F I0813 19:59:53.528443 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T19:59:53.550170024+00:00 stderr F I0813 19:59:53.528457 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T19:59:53.550360770+00:00 stderr F I0813 19:59:53.528468 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T19:59:53.550653758+00:00 stderr F I0813 19:59:53.528481 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T19:59:53.550965407+00:00 stderr F I0813 19:59:53.549918 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.549892966 +0000 UTC))" 2025-08-13T19:59:53.551064420+00:00 stderr F I0813 19:59:53.550109 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T19:59:53.551256955+00:00 stderr F I0813 19:59:53.551240 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T19:59:53.588022383+00:00 stderr F I0813 19:59:53.550308 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T19:59:53.588918889+00:00 stderr F I0813 19:59:53.550351 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T19:59:53.589305100+00:00 stderr F I0813 19:59:53.550935 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 
2025-08-13T19:59:53.589770713+00:00 stderr F I0813 19:59:53.551449 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.55140693 +0000 UTC))" 2025-08-13T19:59:53.611528333+00:00 stderr F I0813 19:59:53.611498 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.61140994 +0000 UTC))" 2025-08-13T19:59:53.617472703+00:00 stderr F I0813 19:59:53.617445 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:53.617419221 +0000 UTC))" 2025-08-13T19:59:53.631119492+00:00 stderr F I0813 19:59:53.631088 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:53.63106044 +0000 UTC))" 2025-08-13T19:59:54.191673681+00:00 stderr F I0813 19:59:53.836206 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:54.191763743+00:00 stderr F I0813 19:59:53.836250 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", 
Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NoValidCertificateFound' No valid client certificate for OpenShiftAuthenticatorCertRequester is found: part of the certificate is expired: sub: CN=system:serviceaccount:openshift-oauth-apiserver:openshift-authenticator, notAfter: 2025-06-27 13:10:04 +0000 UTC 2025-08-13T19:59:54.191935458+00:00 stderr F I0813 19:59:53.836465 1 trace.go:236] Trace[1025006076]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 11466ms): 2025-08-13T19:59:54.191935458+00:00 stderr F Trace[1025006076]: ---"Objects listed" error: 11466ms (19:59:53.836) 2025-08-13T19:59:54.191935458+00:00 stderr F Trace[1025006076]: [11.466047985s] [11.466047985s] END 2025-08-13T19:59:54.192130534+00:00 stderr F I0813 19:59:54.192101 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.371965880+00:00 stderr F I0813 19:59:54.371650 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.504917340+00:00 stderr F I0813 19:59:54.448435 1 trace.go:236] Trace[570889891]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 
(13-Aug-2025 19:59:42.370) (total time: 12077ms): 2025-08-13T19:59:54.504917340+00:00 stderr F Trace[570889891]: ---"Objects listed" error: 12077ms (19:59:54.447) 2025-08-13T19:59:54.504917340+00:00 stderr F Trace[570889891]: [12.077815774s] [12.077815774s] END 2025-08-13T19:59:54.504917340+00:00 stderr F I0813 19:59:54.448476 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.640223627+00:00 stderr F I0813 19:59:54.638295 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:54.640223627+00:00 stderr F I0813 19:59:54.638380 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.806600 1 trace.go:236] Trace[478681942]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.475) (total time: 12331ms): 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[478681942]: ---"Objects listed" error: 12331ms (19:59:54.806) 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[478681942]: [12.331477334s] [12.331477334s] END 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807076 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807439 1 trace.go:236] Trace[1033546446]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.191) (total time: 12615ms): 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[1033546446]: ---"Objects listed" error: 12615ms (19:59:54.807) 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[1033546446]: [12.61581104s] [12.61581104s] END 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807446 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.821143 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup 
oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.822735 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CSRApproval' The CSR "system:openshift:openshift-authenticator-dk965" has been approved 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.824230 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T19:59:54.824485799+00:00 stderr F I0813 19:59:54.824461 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:54.824525890+00:00 stderr F I0813 19:59:54.824512 1 base_controller.go:110] Starting #1 worker of 
ResourceSyncController controller ... 2025-08-13T19:59:54.824587032+00:00 stderr F I0813 19:59:54.824573 1 base_controller.go:73] Caches are synced for TrustDistributionController 2025-08-13T19:59:54.824618303+00:00 stderr F I0813 19:59:54.824606 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 2025-08-13T19:59:54.826882207+00:00 stderr F I0813 19:59:54.826753 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CSRCreated' A csr "system:openshift:openshift-authenticator-dk965" is created for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:54.835257986+00:00 stderr F I0813 19:59:54.835113 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:54.835257986+00:00 stderr F I0813 19:59:54.835149 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:54.849867433+00:00 stderr F I0813 19:59:54.849738 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2025-08-13T19:59:54.849977596+00:00 stderr F I0813 19:59:54.849960 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T19:59:54.862338968+00:00 stderr F I0813 19:59:54.862251 1 base_controller.go:73] Caches are synced for ServiceCAController 2025-08-13T19:59:54.862479722+00:00 stderr F I0813 19:59:54.862465 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2025-08-13T19:59:54.862529714+00:00 stderr F I0813 19:59:54.862439 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2025-08-13T19:59:54.862562194+00:00 stderr F I0813 19:59:54.862550 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 
2025-08-13T19:59:55.068942147+00:00 stderr F E0813 19:59:55.009727 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.068942147+00:00 stderr F I0813 19:59:55.064916 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.598418321+00:00 stderr F E0813 19:59:55.594431 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.618398260+00:00 stderr F I0813 19:59:55.609513 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: 
Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.635942630+00:00 stderr F I0813 19:59:55.635868 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2025-08-13T19:59:56.577146750+00:00 stderr F W0813 19:59:56.576170 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:56.577146750+00:00 stderr F E0813 19:59:56.576603 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:56.803007408+00:00 stderr F I0813 19:59:56.784530 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ClientCertificateCreated' A new client certificate for OpenShiftAuthenticatorCertRequester is available 2025-08-13T19:59:57.869365875+00:00 stderr F I0813 19:59:57.817721 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 
10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.869365875+00:00 stderr F I0813 19:59:57.836769 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:58.532150488+00:00 stderr F I0813 19:59:58.491747 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io 
oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:58.555246356+00:00 stderr F I0813 19:59:58.554051 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:59.026001885+00:00 stderr F E0813 19:59:59.023617 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:59.026001885+00:00 stderr F I0813 19:59:59.025256 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post 
oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:59.026001885+00:00 stderr F I0813 19:59:59.025744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authentication-integrated-oauth -n openshift-config because it changed 2025-08-13T19:59:59.191602576+00:00 stderr F I0813 19:59:59.191358 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused",Available message changed from "APIServerDeploymentAvailable: 
no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:00.988583578+00:00 stderr F W0813 20:00:00.985560 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:00.988583578+00:00 stderr F E0813 20:00:00.988021 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:05.766076904+00:00 stderr F I0813 20:00:05.756472 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.756441289 +0000 UTC))" 2025-08-13T20:00:05.766246049+00:00 stderr F I0813 20:00:05.766224 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.766140316 +0000 UTC))" 2025-08-13T20:00:05.766725483+00:00 stderr F I0813 20:00:05.766707 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.766684052 +0000 UTC))" 2025-08-13T20:00:05.766829396+00:00 stderr F I0813 20:00:05.766764 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.766748793 +0000 UTC))" 2025-08-13T20:00:05.766933289+00:00 stderr F I0813 20:00:05.766915 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 
2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.766896128 +0000 UTC))" 2025-08-13T20:00:05.766987800+00:00 stderr F I0813 20:00:05.766971 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.766954729 +0000 UTC))" 2025-08-13T20:00:05.767042702+00:00 stderr F I0813 20:00:05.767029 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767011351 +0000 UTC))" 2025-08-13T20:00:05.767096963+00:00 stderr F I0813 20:00:05.767081 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767063202 +0000 UTC))" 2025-08-13T20:00:05.767142385+00:00 stderr F I0813 20:00:05.767130 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.767116244 +0000 UTC))" 2025-08-13T20:00:05.767211757+00:00 stderr F I0813 20:00:05.767185 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767163085 +0000 UTC))" 2025-08-13T20:00:05.767562677+00:00 stderr F I0813 20:00:05.767544 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.767525566 +0000 UTC))" 2025-08-13T20:00:05.768005659+00:00 stderr F I0813 20:00:05.767984 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC 
(now=2025-08-13 20:00:05.767966438 +0000 UTC))" 2025-08-13T20:00:12.004702051+00:00 stderr F I0813 20:00:11.995725 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:12.036581620+00:00 stderr F I0813 20:00:12.012869 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2025-08-13T20:00:12.036581620+00:00 stderr F I0813 20:00:12.012914 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.044553 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.044598 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045238 1 base_controller.go:73] Caches are synced for CustomRouteController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045269 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045301 1 base_controller.go:73] Caches are synced for ProxyConfigController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045307 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067044 1 base_controller.go:73] Caches are synced for PayloadConfig 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067095 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067134 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2025-08-13T20:00:12.067210793+00:00 stderr F I0813 20:00:12.067139 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067449 1 base_controller.go:73] Caches are synced for MetadataController 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067480 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067502 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067506 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 
2025-08-13T20:00:12.429934526+00:00 stderr F I0813 20:00:12.428685 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: {"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"XyQxByO8dMbdAXGOFdhjWrxg0HELpVdxvAUV_2DuovP3EK6S_sdSNGYO1dWowvrD71Ii-BaK1iul4_iTDM-yaQ","operator.openshift.io/spec-hash":"ad10eacae3023cd0d2ee52348ecb0eeffb28a18838276096bbd0a036b94e4744"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"XyQxByO8dMbdAXGOFdhjWrxg0HELpVdxvAUV_2DuovP3EK6S_sdSNGYO1dWowvrD71Ii-BaK1iul4_iTDM-yaQ"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log \\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-sys
tem-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"secretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:12.975490032+00:00 stderr F I0813 20:00:12.974998 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:13.015944106+00:00 stderr F I0813 20:00:13.015377 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.406758249+00:00 stderr F I0813 20:00:13.405089 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.415712555+00:00 stderr F I0813 20:00:13.414686 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:13.606415462+00:00 stderr F I0813 20:00:13.606353 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.609902272+00:00 stderr F E0813 20:00:13.606943 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.718737195+00:00 stderr F I0813 20:00:13.608178 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:13.718737195+00:00 stderr F I0813 20:00:13.718658 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired 
generation is 4.") 2025-08-13T20:00:13.843928115+00:00 stderr F E0813 20:00:13.843736 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.863371922+00:00 stderr F E0813 20:00:14.862982 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.908161409+00:00 stderr F E0813 20:00:14.904196 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.004283370+00:00 stderr F E0813 20:00:15.004205 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.173046952+00:00 stderr F E0813 20:00:15.156544 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.182964455+00:00 stderr F I0813 20:00:15.182923 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:15.253002892+00:00 stderr F E0813 20:00:15.252132 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 
2025-08-13T20:00:15.388949248+00:00 stderr F E0813 20:00:15.386277 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.476292329+00:00 stderr F E0813 20:00:15.461625 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.572213484+00:00 stderr F I0813 20:00:15.568489 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: {"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"UeLoNQgNyi4ek--YG05tkJpgr5YeX9hH-xOiyjIBYuZg66HSrnnna0O-xw2d6c90LgOJuApblDmeGo40yQBZ1g","operator.openshift.io/spec-hash":"f6f3b5299b2d9845581bd943317b8c67a8bf91da11360bafa61bc66ec3070d31"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"UeLoNQgNyi4ek--YG05tkJpgr5YeX9hH-xOiyjIBYuZg66HSrnnna0O-xw2d6c90LgOJuApblDmeGo40yQBZ1g"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log \\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-confi
g-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"secretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:16.069873294+00:00 stderr F E0813 20:00:16.069421 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.719182359+00:00 stderr F I0813 20:00:16.718490 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently 
unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:16.724515991+00:00 stderr F I0813 20:00:16.721913 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:17.149870289+00:00 stderr F E0813 20:00:17.149315 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.165937637+00:00 stderr F I0813 20:00:17.165762 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.202066678+00:00 stderr F E0813 20:00:17.201641 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.497314967+00:00 stderr F I0813 20:00:17.497116 1 helpers.go:184] lister was stale at resourceVersion=29206, live get showed resourceVersion=29240 2025-08-13T20:00:17.500207359+00:00 stderr F I0813 20:00:17.500067 1 helpers.go:184] lister was stale at resourceVersion=29206, live get showed resourceVersion=29240 2025-08-13T20:00:17.534337442+00:00 stderr F 
I0813 20:00:17.533562 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.536315839+00:00 stderr F I0813 20:00:17.535193 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:17.668454376+00:00 stderr F E0813 20:00:17.667141 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.738651368+00:00 stderr F E0813 20:00:17.738582 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not 
ready 2025-08-13T20:00:17.765923386+00:00 stderr F E0813 20:00:17.765691 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.772682838+00:00 stderr F I0813 20:00:17.772240 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.984901610+00:00 stderr F E0813 20:00:17.957458 1 base_controller.go:268] CustomRouteController reconciliation failed: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.998317872+00:00 stderr F I0813 20:00:17.996657 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 
4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.027750481+00:00 stderr F I0813 20:00:18.001198 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.005592 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.005978 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.016061 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.073065473+00:00 stderr F E0813 20:00:18.067757 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.225214142+00:00 stderr F E0813 20:00:18.223388 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:18.267422925+00:00 stderr F E0813 20:00:18.254323 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.267422925+00:00 stderr F I0813 20:00:18.254421 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.267422925+00:00 stderr F E0813 20:00:18.254661 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.314470386+00:00 stderr F E0813 20:00:18.314079 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.510731402+00:00 stderr F E0813 20:00:18.439370 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.580644456+00:00 stderr F I0813 20:00:18.519024 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." 
to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:18.622977703+00:00 stderr F E0813 20:00:18.616420 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.055428173+00:00 stderr F E0813 20:00:19.029603 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.235422316+00:00 stderr F E0813 20:00:19.234686 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.242731974+00:00 stderr F I0813 20:00:19.242602 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation","reason":"OAuthServerDeployment_PodsUpdating","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:19.300572264+00:00 stderr F E0813 20:00:19.300451 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.532105785+00:00 stderr F I0813 20:00:19.529928 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation" 2025-08-13T20:00:19.538820187+00:00 stderr F E0813 20:00:19.537609 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.610996055+00:00 stderr F I0813 20:00:19.610297 1 request.go:697] Waited for 1.007136378s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:00:20.135667296+00:00 stderr F E0813 20:00:20.135228 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:20.144572859+00:00 stderr F I0813 20:00:20.142549 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation","reason":"OAuthServerDeployment_PodsUpdating","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:20.233140455+00:00 stderr F E0813 20:00:20.233035 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:20.295036100+00:00 stderr F I0813 20:00:20.286171 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:21.188543427+00:00 stderr F I0813 20:00:21.184308 1 request.go:697] Waited for 1.122772645s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets/router-certs 2025-08-13T20:00:21.862728060+00:00 stderr F E0813 20:00:21.862242 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:22.463267694+00:00 stderr F E0813 20:00:22.463207 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:22.535823483+00:00 stderr F I0813 20:00:22.530093 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:22.617541683+00:00 stderr F E0813 20:00:22.617487 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:22.687107217+00:00 stderr F I0813 20:00:22.685318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") 2025-08-13T20:00:24.386693571+00:00 stderr F I0813 20:00:24.386152 1 request.go:697] Waited for 1.085380808s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:00:32.112004482+00:00 stderr F E0813 20:00:32.111258 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:35.710360804+00:00 stderr F I0813 20:00:35.669757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/v4-0-config-user-idp-0-file-data -n openshift-authentication because it changed 2025-08-13T20:00:36.365259357+00:00 stderr F I0813 20:00:36.292967 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: {"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"uomocd2xQvK4ihb6gzUgAMabCuvz4ifs3T98UV1yGZo0R1LHJARY4B-40ZukHyVSzZ-3pIoV4sdQo49M3ieZtA","operator.openshift.io/spec-hash":"797989bfafe87f49a19e3bfa11bf6d778cd3f9343ed99e2bad962d75542e95e1"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"uomocd2xQvK4ihb6gzUgAMabCuvz4ifs3T98UV1yGZo0R1LHJARY4B-40ZukHyVSzZ-3pIoV4sdQo49M3ieZtA"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f 
/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log \\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name
":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"secretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:36.400968906+00:00 stderr F E0813 20:00:36.333924 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:36.400968906+00:00 stderr F I0813 20:00:36.339919 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.580487184+00:00 stderr F I0813 20:00:36.555590 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth 
service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.580487184+00:00 stderr F I0813 20:00:36.556938 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF" 2025-08-13T20:00:36.652128637+00:00 stderr F I0813 20:00:36.651830 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:36.909954379+00:00 stderr F E0813 20:00:36.908986 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:36.912915663+00:00 stderr F E0813 20:00:36.910144 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:36.973100329+00:00 stderr F E0813 20:00:36.972602 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:36.986419359+00:00 stderr F I0813 20:00:36.983106 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of 
oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.998673178+00:00 stderr F E0813 20:00:36.998162 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:37.078261178+00:00 stderr F I0813 20:00:37.077288 1 helpers.go:184] lister was stale at resourceVersion=29830, live get showed resourceVersion=29845 2025-08-13T20:00:37.097267870+00:00 stderr F I0813 20:00:37.094501 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:37.469467033+00:00 stderr F E0813 20:00:37.465356 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:37.502009441+00:00 stderr F E0813 20:00:37.501224 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController 
reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:37.509666649+00:00 stderr F I0813 20:00:37.508157 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.562600368+00:00 stderr F E0813 20:00:37.555952 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:38.067596878+00:00 stderr F I0813 20:00:38.067325 1 request.go:697] Waited for 1.078996896s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/etcd-client 2025-08-13T20:00:38.324205215+00:00 stderr F I0813 20:00:38.324036 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.324214 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.324779 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325447 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 
12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:38.325396489 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325475 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:38.325461081 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325495 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:38.325482021 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325511 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:38.325500432 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325540 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325528573 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325556 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325545743 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325573 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325560843 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325589 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 
19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325577764 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325607 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:38.325597164 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325639 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325615085 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325998 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:00:38.325975315 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.326321 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 20:00:38.326261023 +0000 UTC))" 2025-08-13T20:00:38.560458541+00:00 stderr F I0813 20:00:38.537772 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.") 2025-08-13T20:00:40.670992890+00:00 stderr F I0813 20:00:40.669437 1 request.go:697] Waited for 1.047166697s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/trusted-ca-bundle 2025-08-13T20:00:42.162988253+00:00 stderr F E0813 20:00:42.160729 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:42.637888014+00:00 stderr F E0813 20:00:42.637580 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:42.882033666+00:00 stderr F I0813 20:00:42.881970 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified 
(old="9480932818b7cb1ebdca51bb28c5cce888164a71652918cc5344387b939314ae", new="fd200c56eea686a995edc75b6728718041d30aab916df9707d4d155e1d8cd60c") 2025-08-13T20:00:42.883879189+00:00 stderr F W0813 20:00:42.883854 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:42.884015782+00:00 stderr F I0813 20:00:42.883993 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="ae319b0e78a8985818f6dc7d0863a2c62f9b44ec34c1b60c870a0648a26e2f87", new="c29c68c5a2ab71f55f3d17abcfc7421d482cea9bbfd2fc3fccb363858ff20314") 2025-08-13T20:00:42.897758034+00:00 stderr F I0813 20:00:42.884538 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:42.897927709+00:00 stderr F I0813 20:00:42.888281 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:42.898027102+00:00 stderr F I0813 20:00:42.898004 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:00:42.898078293+00:00 stderr F I0813 20:00:42.898066 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:00:42.898159456+00:00 stderr F I0813 20:00:42.888544 1 base_controller.go:172] Shutting down OAuthClientsController ... 2025-08-13T20:00:42.898188427+00:00 stderr F I0813 20:00:42.888571 1 base_controller.go:172] Shutting down MetadataController ... 2025-08-13T20:00:42.898216907+00:00 stderr F I0813 20:00:42.889263 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:00:42.898262839+00:00 stderr F I0813 20:00:42.898249 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:00:42.898314980+00:00 stderr F I0813 20:00:42.898301 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:42.898352641+00:00 stderr F I0813 20:00:42.898341 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:00:42.898391672+00:00 stderr F I0813 20:00:42.898379 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:00:42.898450134+00:00 stderr F I0813 20:00:42.898437 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:42.898633219+00:00 stderr F I0813 20:00:42.898588 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:42.898738682+00:00 stderr F I0813 20:00:42.898666 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:42.898823935+00:00 stderr F I0813 20:00:42.898768 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:42.899380681+00:00 stderr F I0813 20:00:42.899277 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:42.899380681+00:00 stderr F I0813 20:00:42.899361 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:42.899434772+00:00 stderr F I0813 20:00:42.899395 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T20:00:42.899447402+00:00 stderr F I0813 20:00:42.899431 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:42.899701310+00:00 stderr F I0813 20:00:42.889374 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:00:42.899701310+00:00 stderr F E0813 20:00:42.890466 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": context canceled, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:00:42.899701310+00:00 stderr F I0813 20:00:42.899692 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.899701 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.890485 1 base_controller.go:172] Shutting down StatusSyncer_authentication ... 2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.899713 1 base_controller.go:150] All StatusSyncer_authentication post start hooks have been terminated 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890499 1 base_controller.go:172] Shutting down IngressNodesAvailableController ... 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890510 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointsEndpointAccessibleController ... 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890521 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointAccessibleController ... 2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890545 1 base_controller.go:172] Shutting down IngressStateController ... 2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890556 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890581 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_OpenShiftAuthenticator ... 2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890590 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890607 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890637 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890660 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890689 1 base_controller.go:114] Shutting down worker of StatusSyncer_authentication controller ... 
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.899762 1 base_controller.go:104] All StatusSyncer_authentication workers have been terminated 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890695 1 base_controller.go:114] Shutting down worker of IngressNodesAvailableController controller ... 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.899776 1 base_controller.go:104] All IngressNodesAvailableController workers have been terminated 2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890700 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904143 1 base_controller.go:104] All OAuthServerServiceEndpointsEndpointAccessibleController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890705 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904179 1 base_controller.go:104] All OAuthServerServiceEndpointAccessibleController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890711 1 base_controller.go:114] Shutting down worker of IngressStateController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904193 1 base_controller.go:104] All IngressStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890716 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904205 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890722 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904215 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_OpenShiftAuthenticator workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890728 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904227 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890734 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904236 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890740 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904247 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890745 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904370 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890763 1 base_controller.go:172] Shutting down PayloadConfig ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.893541 1 base_controller.go:172] Shutting down ServiceCAController ... 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.894204 1 base_controller.go:268] ServiceCAController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904445 1 base_controller.go:114] Shutting down worker of ServiceCAController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904454 1 base_controller.go:104] All ServiceCAController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894222 1 base_controller.go:172] Shutting down ProxyConfigController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894237 1 base_controller.go:172] Shutting down CustomRouteController ... 2025-08-13T20:00:42.908890762+00:00 stderr F W0813 20:00:42.894272 1 base_controller.go:232] Updating status of "CustomRouteController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.904492 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894289 1 base_controller.go:172] Shutting down OAuthServerRouteEndpointAccessibleController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894307 1 base_controller.go:172] Shutting down WellKnownReadyController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894326 1 base_controller.go:172] Shutting down RouterCertsDomainValidationController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894549 1 base_controller.go:172] Shutting down OAuthServerWorkloadController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894655 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894672 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894684 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894697 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894709 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894722 1 base_controller.go:172] Shutting down OAuthAPIServerControllerWorkloadController ... 2025-08-13T20:00:42.908890762+00:00 stderr F W0813 20:00:42.895173 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.905507 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905536 1 base_controller.go:114] Shutting down worker of RouterCertsDomainValidationController controller ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905645 1 base_controller.go:114] Shutting down worker of CustomRouteController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905661 1 base_controller.go:104] All CustomRouteController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895222 1 base_controller.go:114] Shutting down worker of ProxyConfigController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905743 1 base_controller.go:104] All ProxyConfigController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905750 1 base_controller.go:104] All RouterCertsDomainValidationController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895239 1 base_controller.go:114] Shutting down worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905759 1 base_controller.go:104] All OAuthServerRouteEndpointAccessibleController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895248 1 base_controller.go:114] Shutting down worker of WellKnownReadyController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905932 1 base_controller.go:104] All WellKnownReadyController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895268 1 base_controller.go:172] Shutting down OpenshiftAuthenticationStaticResources ... 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.895405 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895421 1 base_controller.go:172] Shutting down OpenShiftAuthenticatorCertRequester ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895432 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895444 1 base_controller.go:172] Shutting down WebhookAuthenticatorController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895457 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895474 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.895518 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: the operator is shutting down, skipping updating conditions, err = failed to reconcile enabled APIs: Get "https://10.217.4.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.user.openshift.io": context canceled 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895534 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-oauth-apiserver ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895570 1 base_controller.go:114] Shutting down worker of MetadataController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906156 1 base_controller.go:104] All MetadataController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895579 1 base_controller.go:114] Shutting down worker of OAuthServerWorkloadController controller ... 
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906174 1 base_controller.go:104] All OAuthServerWorkloadController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895588 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906189 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895593 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906412 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895600 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr P I0813 20:00:42.906432 2025-08-13T20:00:42.908997645+00:00 stderr F 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895606 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906446 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895611 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906566 1 base_controller.go:114] Shutting down worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906573 1 base_controller.go:104] All OAuthAPIServerControllerWorkloadController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895623 1 base_controller.go:114] Shutting down worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906585 1 base_controller.go:104] All OpenShiftAuthenticatorCertRequester workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906662 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895630 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906683 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895636 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906694 1 base_controller.go:104] All WebhookAuthenticatorController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895642 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906752 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895651 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906762 1 base_controller.go:104] All NamespaceFinalizerController_openshift-oauth-apiserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895674 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895681 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906868 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895696 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907061 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907071 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895715 1 base_controller.go:172] Shutting down TrustDistributionController ... 2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.895746 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": context canceled 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907095 1 base_controller.go:114] Shutting down worker of TrustDistributionController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907100 1 base_controller.go:104] All TrustDistributionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.896184 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": context canceled, "oauth-openshift/oauth-service.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-openshift/trust_distribution_role.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.896215 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.896228 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907345 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.897470 1 base_controller.go:268] PayloadConfig reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.900227 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907647 1 builder.go:329] server exited 2025-08-13T20:00:42.909276393+00:00 stderr F I0813 20:00:42.909246 1 base_controller.go:114] Shutting down worker of PayloadConfig controller ... 2025-08-13T20:00:42.909319824+00:00 stderr F I0813 20:00:42.909307 1 base_controller.go:104] All PayloadConfig workers have been terminated 2025-08-13T20:00:42.909401276+00:00 stderr F I0813 20:00:42.909339 1 base_controller.go:114] Shutting down worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T20:00:42.909828228+00:00 stderr F I0813 20:00:42.909456 1 base_controller.go:104] All OpenshiftAuthenticationStaticResources workers have been terminated 2025-08-13T20:00:42.948115590+00:00 stderr F I0813 20:00:42.947976 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:00:42.948115590+00:00 stderr F I0813 20:00:42.948043 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:00:45.606345136+00:00 stderr F W0813 20:00:45.602018 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log
2025-10-13T00:14:57.256598380+00:00 stdout F Copying system trust bundle 2025-10-13T00:14:58.412430681+00:00 stderr F W1013 00:14:58.411529 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory 2025-10-13T00:14:58.412781462+00:00 stderr F I1013 00:14:58.412720 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:58.415986408+00:00 stderr F I1013 00:14:58.415966 1 cmd.go:240] Using service-serving-cert provided certificates 2025-10-13T00:14:58.416066120+00:00 stderr F I1013 00:14:58.416038 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-10-13T00:14:58.417037569+00:00 stderr F I1013 00:14:58.416942 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:58.492125709+00:00 stderr F I1013 00:14:58.491756 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2025-10-13T00:14:58.493094628+00:00 stderr F I1013 00:14:58.493059 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:14:58.708555894+00:00 stderr F I1013 00:14:58.706858 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:14:58.730860842+00:00 stderr F I1013 00:14:58.730066 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:14:58.730860842+00:00 stderr F I1013 00:14:58.730095 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:14:58.730860842+00:00 stderr F I1013 00:14:58.730121 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:14:58.730860842+00:00 stderr F I1013 00:14:58.730126 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:14:58.735679946+00:00 stderr F I1013 00:14:58.735630 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:14:58.736169601+00:00 stderr F I1013 00:14:58.736129 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:14:58.736445799+00:00 stderr F W1013 00:14:58.736405 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:58.736445799+00:00 stderr F W1013 00:14:58.736440 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:14:58.743289385+00:00 stderr F I1013 00:14:58.743207 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:14:58.743565843+00:00 stderr F I1013 00:14:58.743527 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock... 
2025-10-13T00:14:58.743761259+00:00 stderr F I1013 00:14:58.743736 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:14:58.743761259+00:00 stderr F I1013 00:14:58.743753 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:14:58.743941944+00:00 stderr F I1013 00:14:58.743909 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:14:58.743846001 +0000 UTC))" 2025-10-13T00:14:58.744282934+00:00 stderr F I1013 00:14:58.744258 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314498\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314498\" (2025-10-12 23:14:58 +0000 UTC to 2026-10-12 23:14:58 +0000 UTC (now=2025-10-13 00:14:58.744237743 +0000 UTC))" 2025-10-13T00:14:58.744282934+00:00 stderr F I1013 00:14:58.744279 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:14:58.744388457+00:00 stderr F I1013 00:14:58.744311 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:14:58.744388457+00:00 stderr F I1013 00:14:58.744348 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.744469 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.744559 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.744570 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.744603 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.744635 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.747338 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.747523 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-10-13T00:14:58.748230902+00:00 stderr F I1013 00:14:58.747564 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-10-13T00:14:58.847591960+00:00 stderr F I1013 00:14:58.846386 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2025-10-13T00:14:58.847591960+00:00 stderr F I1013 00:14:58.846460 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:14:58.847591960+00:00 stderr F I1013 00:14:58.846485 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:14:58.847591960+00:00 stderr F I1013 00:14:58.847005 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:14:58.846966861 +0000 UTC))" 2025-10-13T00:14:58.847591960+00:00 stderr F I1013 00:14:58.847444 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:14:58.847421604 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.847757 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314498\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314498\" (2025-10-12 23:14:58 +0000 UTC to 2026-10-12 23:14:58 +0000 UTC (now=2025-10-13 00:14:58.847736174 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848159 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:14:58.848138986 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848182 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:14:58.848168307 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848202 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:14:58.848187137 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848430 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:14:58.848207178 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848451 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:14:58.848436905 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848470 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:14:58.848456095 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848489 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:14:58.848474776 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848508 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:14:58.848493537 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848529 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:14:58.848512647 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848561 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:14:58.848537138 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848584 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:14:58.848568779 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.848925 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:14:58.848907519 +0000 UTC))" 2025-10-13T00:14:58.850267790+00:00 stderr F I1013 00:14:58.849252 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314498\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314498\" (2025-10-12 23:14:58 +0000 UTC to 2026-10-12 23:14:58 +0000 UTC (now=2025-10-13 00:14:58.849234359 +0000 UTC))" 2025-10-13T00:20:01.407475213+00:00 stderr F I1013 00:20:01.406849 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock 2025-10-13T00:20:01.407475213+00:00 stderr F I1013 00:20:01.406924 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41895", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_0286406b-dcce-4c59-9cc4-3897ebc3507a became leader 2025-10-13T00:20:01.495836740+00:00 stderr F I1013 00:20:01.495681 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:20:01.496197401+00:00 stderr F I1013 00:20:01.496146 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.496712367+00:00 stderr F I1013 00:20:01.496656 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:20:01.496912242+00:00 stderr F I1013 00:20:01.496869 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2025-10-13T00:20:01.497011015+00:00 stderr F I1013 00:20:01.496945 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-10-13T00:20:01.497011015+00:00 stderr F I1013 00:20:01.496978 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:20:01.497136279+00:00 stderr F I1013 00:20:01.497073 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2025-10-13T00:20:01.497136279+00:00 stderr F I1013 00:20:01.497101 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController 2025-10-13T00:20:01.497136279+00:00 stderr F I1013 00:20:01.497097 1 base_controller.go:67] Waiting for caches to sync for MetadataController 2025-10-13T00:20:01.497164750+00:00 stderr F I1013 00:20:01.497139 1 base_controller.go:67] Waiting for caches to sync for IngressStateController 
2025-10-13T00:20:01.497185421+00:00 stderr F I1013 00:20:01.497129 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-10-13T00:20:01.497507140+00:00 stderr F I1013 00:20:01.497454 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.497507140+00:00 stderr F I1013 00:20:01.497477 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.497691456+00:00 stderr F I1013 00:20:01.497650 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:20:01.497691456+00:00 stderr F I1013 00:20:01.497663 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController 2025-10-13T00:20:01.500156509+00:00 stderr F I1013 00:20:01.500087 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2025-10-13T00:20:01.500204160+00:00 stderr F I1013 00:20:01.500184 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-10-13T00:20:01.500225871+00:00 stderr F I1013 00:20:01.500202 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2025-10-13T00:20:01.500225871+00:00 stderr F I1013 00:20:01.500214 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-10-13T00:20:01.500452028+00:00 stderr F I1013 00:20:01.500379 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController 2025-10-13T00:20:01.500452028+00:00 stderr F I1013 00:20:01.500384 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2025-10-13T00:20:01.500489399+00:00 stderr F I1013 00:20:01.500462 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-10-13T00:20:01.500510939+00:00 stderr F I1013 00:20:01.500493 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:20:01.500510939+00:00 stderr F I1013 00:20:01.500504 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2025-10-13T00:20:01.500601612+00:00 stderr F I1013 00:20:01.500560 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:20:01.500601612+00:00 stderr F I1013 00:20:01.500571 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2025-10-13T00:20:01.500853560+00:00 stderr F I1013 00:20:01.500766 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2025-10-13T00:20:01.500853560+00:00 stderr F I1013 00:20:01.500804 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController 2025-10-13T00:20:01.500853560+00:00 stderr F I1013 00:20:01.500833 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController 2025-10-13T00:20:01.500853560+00:00 stderr F I1013 00:20:01.500835 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-10-13T00:20:01.500853560+00:00 stderr F I1013 00:20:01.500842 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig 2025-10-13T00:20:01.500898291+00:00 stderr F I1013 00:20:01.500874 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.500920 1 base_controller.go:67] Waiting for caches to sync 
for EncryptionConditionController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.500945 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.500956 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.500979 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.500982 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.500996 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.501000 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.501015 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.501030 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.501181 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2025-10-13T00:20:01.504434056+00:00 stderr F I1013 00:20:01.502060 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.504851469+00:00 stderr F I1013 00:20:01.504810 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:20:01.506855478+00:00 stderr F I1013 00:20:01.506818 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-10-13T00:20:01.509157767+00:00 stderr F I1013 00:20:01.508199 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.509157767+00:00 stderr F I1013 00:20:01.508295 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.509157767+00:00 stderr F I1013 00:20:01.508794 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.509157767+00:00 stderr F I1013 00:20:01.508965 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.509429155+00:00 stderr F I1013 00:20:01.509401 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2025-10-13T00:20:01.509484346+00:00 stderr F I1013 00:20:01.509451 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.509484346+00:00 stderr F I1013 00:20:01.509452 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.509562719+00:00 stderr F I1013 00:20:01.509540 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.509749504+00:00 stderr F I1013 00:20:01.509729 1 reflector.go:351] Caches 
populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.510056763+00:00 stderr F I1013 00:20:01.510019 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.510076824+00:00 stderr F I1013 00:20:01.510055 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.510443525+00:00 stderr F I1013 00:20:01.510382 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.510781805+00:00 stderr F I1013 00:20:01.510736 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.511297930+00:00 stderr F I1013 00:20:01.511254 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.511792075+00:00 stderr F I1013 00:20:01.511717 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.511968160+00:00 stderr F I1013 00:20:01.511911 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-10-13T00:20:01.512890108+00:00 stderr F I1013 00:20:01.512828 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-10-13T00:20:01.513495476+00:00 stderr F I1013 00:20:01.513456 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-10-13T00:20:01.515023011+00:00 stderr F I1013 00:20:01.514982 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.515733612+00:00 stderr F I1013 00:20:01.515691 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.515969279+00:00 stderr F I1013 00:20:01.515920 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.517876306+00:00 stderr F I1013 00:20:01.517834 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.519946117+00:00 stderr F I1013 00:20:01.519873 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.596496684+00:00 stderr F I1013 00:20:01.596419 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:20:01.596496684+00:00 stderr F I1013 00:20:01.596453 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:20:01.597526485+00:00 stderr F I1013 00:20:01.597484 1 base_controller.go:73] Caches are synced for IngressStateController 2025-10-13T00:20:01.597526485+00:00 stderr F I1013 00:20:01.597495 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ... 
2025-10-13T00:20:01.598658158+00:00 stderr F I1013 00:20:01.598607 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-10-13T00:20:01.598658158+00:00 stderr F I1013 00:20:01.598632 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 2025-10-13T00:20:01.600882644+00:00 stderr F I1013 00:20:01.600867 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:20:01.600917856+00:00 stderr F I1013 00:20:01.600907 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:20:01.600951877+00:00 stderr F I1013 00:20:01.600931 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-10-13T00:20:01.600951877+00:00 stderr F I1013 00:20:01.600943 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-10-13T00:20:01.600975377+00:00 stderr F I1013 00:20:01.600958 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication 2025-10-13T00:20:01.600975377+00:00 stderr F I1013 00:20:01.600966 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ... 2025-10-13T00:20:01.600998298+00:00 stderr F I1013 00:20:01.600886 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-10-13T00:20:01.601022899+00:00 stderr F I1013 00:20:01.601013 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-10-13T00:20:01.601048469+00:00 stderr F I1013 00:20:01.600919 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:20:01.601071890+00:00 stderr F I1013 00:20:01.601063 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:20:01.615528390+00:00 stderr F I1013 00:20:01.615481 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.675860994+00:00 stderr F I1013 00:20:01.675762 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:01.701228549+00:00 stderr F I1013 00:20:01.701113 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-10-13T00:20:01.701228549+00:00 stderr F I1013 00:20:01.701139 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-10-13T00:20:01.701228549+00:00 stderr F I1013 00:20:01.701137 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-10-13T00:20:01.701228549+00:00 stderr F I1013 00:20:01.701157 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 
2025-10-13T00:20:01.701651621+00:00 stderr F I1013 00:20:01.701560 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-10-13T00:20:01.816214598+00:00 stderr F I1013 00:20:01.816144 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:01.906009399+00:00 stderr F I1013 00:20:01.905924 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:02.085745944+00:00 stderr F I1013 00:20:02.085029 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:02.101294636+00:00 stderr F I1013 00:20:02.101180 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController 2025-10-13T00:20:02.101294636+00:00 stderr F I1013 00:20:02.101211 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-10-13T00:20:02.282004570+00:00 stderr F I1013 00:20:02.281915 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:02.493536481+00:00 stderr F I1013 00:20:02.493482 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:02.498074516+00:00 stderr F I1013 00:20:02.497988 1 base_controller.go:73] Caches are synced for MetadataController 2025-10-13T00:20:02.498074516+00:00 stderr F I1013 00:20:02.498041 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 2025-10-13T00:20:02.498564040+00:00 stderr F I1013 00:20:02.498509 1 base_controller.go:73] Caches are synced for TrustDistributionController 2025-10-13T00:20:02.498564040+00:00 stderr F I1013 00:20:02.498539 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 2025-10-13T00:20:02.500979972+00:00 stderr F I1013 00:20:02.500930 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2025-10-13T00:20:02.500979972+00:00 stderr F I1013 00:20:02.500951 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 2025-10-13T00:20:02.501127967+00:00 stderr F I1013 00:20:02.501009 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2025-10-13T00:20:02.501127967+00:00 stderr F I1013 00:20:02.501082 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 
2025-10-13T00:20:02.674114551+00:00 stderr F I1013 00:20:02.674029 1 request.go:697] Waited for 1.17858012s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces?limit=500&resourceVersion=0 2025-10-13T00:20:02.676471241+00:00 stderr F I1013 00:20:02.676408 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:02.877087137+00:00 stderr F I1013 00:20:02.877003 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:03.094463832+00:00 stderr F I1013 00:20:03.094281 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:03.101115109+00:00 stderr F I1013 00:20:03.101040 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver 2025-10-13T00:20:03.101115109+00:00 stderr F I1013 00:20:03.101065 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-10-13T00:20:03.276028611+00:00 stderr F I1013 00:20:03.275971 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:03.477634786+00:00 stderr F I1013 00:20:03.477580 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:03.674133781+00:00 stderr F I1013 00:20:03.674060 1 request.go:697] Waited for 2.177836936s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets?limit=500&resourceVersion=0 2025-10-13T00:20:03.676162741+00:00 stderr F I1013 00:20:03.676124 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:03.697777264+00:00 stderr F I1013 00:20:03.697717 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:20:03.697777264+00:00 stderr F I1013 00:20:03.697740 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-10-13T00:20:03.697831685+00:00 stderr F I1013 00:20:03.697813 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 2025-10-13T00:20:03.697842716+00:00 stderr F I1013 00:20:03.697833 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ... 2025-10-13T00:20:03.701110173+00:00 stderr F I1013 00:20:03.701068 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-10-13T00:20:03.701110173+00:00 stderr F I1013 00:20:03.701085 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-10-13T00:20:03.701129624+00:00 stderr F I1013 00:20:03.701110 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 2025-10-13T00:20:03.701129624+00:00 stderr F I1013 00:20:03.701121 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ... 2025-10-13T00:20:03.701253517+00:00 stderr F I1013 00:20:03.701201 1 base_controller.go:73] Caches are synced for RevisionController 2025-10-13T00:20:03.701253517+00:00 stderr F I1013 00:20:03.701231 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 
2025-10-13T00:20:03.876477448+00:00 stderr F I1013 00:20:03.876382 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:03.900881644+00:00 stderr F I1013 00:20:03.900801 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController 2025-10-13T00:20:03.900881644+00:00 stderr F I1013 00:20:03.900841 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-10-13T00:20:03.900881644+00:00 stderr F I1013 00:20:03.900842 1 base_controller.go:73] Caches are synced for ServiceCAController 2025-10-13T00:20:03.900881644+00:00 stderr F I1013 00:20:03.900865 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2025-10-13T00:20:03.901011998+00:00 stderr F I1013 00:20:03.900982 1 base_controller.go:73] Caches are synced for PayloadConfig 2025-10-13T00:20:03.901055289+00:00 stderr F I1013 00:20:03.901039 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2025-10-13T00:20:03.901654877+00:00 stderr F I1013 00:20:03.901598 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2025-10-13T00:20:03.901654877+00:00 stderr F I1013 00:20:03.901625 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 2025-10-13T00:20:04.075200197+00:00 stderr F I1013 00:20:04.075160 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:04.286248023+00:00 stderr F I1013 00:20:04.286182 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:04.300712593+00:00 stderr F I1013 00:20:04.300658 1 base_controller.go:73] Caches are synced for ProxyConfigController 2025-10-13T00:20:04.300803686+00:00 stderr F I1013 00:20:04.300782 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 2025-10-13T00:20:04.476029047+00:00 stderr F I1013 00:20:04.475495 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:04.500242907+00:00 stderr F I1013 00:20:04.500225 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 2025-10-13T00:20:04.500283108+00:00 stderr F I1013 00:20:04.500272 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ... 2025-10-13T00:20:04.674150198+00:00 stderr F I1013 00:20:04.674070 1 request.go:697] Waited for 3.173709541s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0 2025-10-13T00:20:04.676548000+00:00 stderr F I1013 00:20:04.676505 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:04.877655280+00:00 stderr F I1013 00:20:04.877601 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:05.076374090+00:00 stderr F I1013 00:20:05.076256 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:05.102168097+00:00 stderr F I1013 00:20:05.102084 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2025-10-13T00:20:05.102168097+00:00 stderr F I1013 00:20:05.102137 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 
2025-10-13T00:20:05.276746809+00:00 stderr F I1013 00:20:05.276691 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:05.477477800+00:00 stderr F I1013 00:20:05.476999 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:05.500931796+00:00 stderr F I1013 00:20:05.500829 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2025-10-13T00:20:05.500931796+00:00 stderr F I1013 00:20:05.500862 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2025-10-13T00:20:05.500997641+00:00 stderr F I1013 00:20:05.500921 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController 2025-10-13T00:20:05.500997641+00:00 stderr F I1013 00:20:05.500942 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ... 2025-10-13T00:20:05.501024079+00:00 stderr F I1013 00:20:05.501010 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-10-13T00:20:05.501043038+00:00 stderr F I1013 00:20:05.501021 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-10-13T00:20:05.501062086+00:00 stderr F I1013 00:20:05.501030 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-10-13T00:20:05.501080815+00:00 stderr F I1013 00:20:05.501056 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-10-13T00:20:05.501080815+00:00 stderr F I1013 00:20:05.501058 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-10-13T00:20:05.501100533+00:00 stderr F I1013 00:20:05.501088 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-10-13T00:20:05.501100533+00:00 stderr F I1013 00:20:05.501088 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-10-13T00:20:05.501122901+00:00 stderr F I1013 00:20:05.501101 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-10-13T00:20:05.501492242+00:00 stderr F I1013 00:20:05.501133 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-10-13T00:20:05.501573356+00:00 stderr F I1013 00:20:05.501505 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-10-13T00:20:05.675817858+00:00 stderr F I1013 00:20:05.675701 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:05.701166836+00:00 stderr F I1013 00:20:05.701062 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-10-13T00:20:05.701166836+00:00 stderr F I1013 00:20:05.701102 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
2025-10-13T00:20:05.701379529+00:00 stderr F I1013 00:20:05.701282 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-10-13T00:20:05.874026238+00:00 stderr F I1013 00:20:05.873939 1 request.go:697] Waited for 4.372973308s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets?limit=500&resourceVersion=0 2025-10-13T00:20:05.883695744+00:00 stderr F I1013 00:20:05.883624 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:05.901679653+00:00 stderr F I1013 00:20:05.901597 1 base_controller.go:73] Caches are synced for CustomRouteController 2025-10-13T00:20:05.901679653+00:00 stderr F I1013 00:20:05.901631 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2025-10-13T00:20:05.901726929+00:00 stderr F I1013 00:20:05.901680 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:20:05.901726929+00:00 stderr F I1013 00:20:05.901710 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:20:05.901749947+00:00 stderr F I1013 00:20:05.901724 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-10-13T00:20:05.901846040+00:00 stderr F I1013 00:20:05.901710 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:20:06.874895687+00:00 stderr F I1013 00:20:06.874641 1 request.go:697] Waited for 4.373528226s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets/router-certs 2025-10-13T00:20:08.074062168+00:00 stderr F I1013 00:20:08.073971 1 request.go:697] Waited for 4.172816007s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/secrets/v4-0-config-system-session 2025-10-13T00:20:09.273913105+00:00 stderr F I1013 00:20:09.273846 1 request.go:697] Waited for 1.797620786s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/openshift-authenticator-certs 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948109 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.948058644 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948687 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.9486662 +0000 UTC))" 
2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948719 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.948695461 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948740 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.948725302 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948759 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948745943 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948786 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948764583 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948810 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948795164 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948832 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948817045 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948857 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.948838645 +0000 
UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948880 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.948865846 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948911 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.948894747 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.948938 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.948916527 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.949357 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:21:11.949313368 +0000 UTC))" 2025-10-13T00:21:11.950778717+00:00 stderr F I1013 00:21:11.949721 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314498\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314498\" (2025-10-12 23:14:58 +0000 UTC to 2026-10-12 23:14:58 +0000 UTC (now=2025-10-13 00:21:11.949701538 +0000 UTC))" 2025-10-13T00:22:09.643749122+00:00 stderr F I1013 00:22:09.642766 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.644413200+00:00 stderr F W1013 00:22:09.644307 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.645780037+00:00 stderr F E1013 00:22:09.645753 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:09.645853548+00:00 stderr F E1013 00:22:09.644570 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.645907780+00:00 stderr F E1013 00:22:09.644713 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.645951421+00:00 stderr F E1013 00:22:09.645373 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.645993522+00:00 stderr F E1013 00:22:09.645888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.650564965+00:00 stderr F E1013 00:22:09.650527 1 
base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.651116770+00:00 stderr F W1013 00:22:09.651014 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.651116770+00:00 stderr F E1013 00:22:09.651045 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.654519131+00:00 stderr F I1013 00:22:09.654471 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.654835070+00:00 stderr F E1013 00:22:09.654804 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.666114813+00:00 stderr F I1013 00:22:09.665736 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: 
The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.666622707+00:00 stderr F E1013 00:22:09.666589 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.687625022+00:00 stderr F I1013 00:22:09.687571 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.692938035+00:00 stderr F E1013 00:22:09.692886 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.733981678+00:00 stderr F I1013 00:22:09.733904 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.735163340+00:00 stderr F E1013 00:22:09.735107 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.759188956+00:00 stderr F W1013 00:22:09.759121 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.759188956+00:00 stderr F E1013 00:22:09.759163 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:09.816248581+00:00 stderr F I1013 00:22:09.816196 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.817546456+00:00 stderr F E1013 00:22:09.817504 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.959384960+00:00 stderr F E1013 00:22:09.959345 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:09.978886024+00:00 stderr F I1013 00:22:09.978849 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:09Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:09.981062143+00:00 stderr F E1013 00:22:09.981014 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.302459126+00:00 stderr F I1013 00:22:10.302380 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:10Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:10.303689909+00:00 stderr F E1013 00:22:10.303664 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.359022197+00:00 stderr F E1013 00:22:10.358939 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:10.559298053+00:00 stderr F E1013 00:22:10.559214 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.758106509+00:00 stderr F I1013 00:22:10.757502 1 request.go:697] Waited for 1.099480016s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:10.945608671+00:00 stderr F I1013 00:22:10.945492 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:10Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API 
server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:10.947278336+00:00 stderr F E1013 00:22:10.947212 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.959688010+00:00 stderr F W1013 00:22:10.959615 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.959688010+00:00 stderr F E1013 00:22:10.959669 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:11.159148713+00:00 stderr F E1013 00:22:11.159077 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.359376038+00:00 stderr F E1013 00:22:11.359268 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.758905582+00:00 stderr F I1013 00:22:11.758425 1 request.go:697] Waited for 1.31492514s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:11.759870968+00:00 stderr F E1013 00:22:11.759842 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:11.958320805+00:00 stderr F W1013 00:22:11.958245 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.958320805+00:00 stderr F E1013 00:22:11.958293 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.159131225+00:00 stderr F E1013 00:22:12.159056 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.228725526+00:00 stderr F I1013 00:22:12.228662 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:12Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api 
server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:12.230018611+00:00 stderr F E1013 00:22:12.229967 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.360313545+00:00 stderr F W1013 00:22:12.360212 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.360313545+00:00 stderr F E1013 00:22:12.360283 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:12.558307640+00:00 stderr F E1013 00:22:12.558242 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.758546094+00:00 stderr F E1013 00:22:12.758475 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial 
tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:12.958089790+00:00 stderr F I1013 00:22:12.958003 1 request.go:697] Waited for 1.588272541s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-10-13T00:22:13.158714685+00:00 stderr F E1013 00:22:13.158374 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:13.358437326+00:00 stderr F E1013 00:22:13.358382 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: 
connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:13.759379698+00:00 stderr F E1013 00:22:13.758952 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:13.959395057+00:00 stderr F W1013 00:22:13.959266 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:13.959395057+00:00 stderr F E1013 00:22:13.959314 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:14.157848653+00:00 stderr F I1013 00:22:14.157787 1 request.go:697] Waited for 1.559027744s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-10-13T00:22:14.159149128+00:00 stderr F E1013 00:22:14.159086 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:14.359309521+00:00 stderr F E1013 00:22:14.359236 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: 
connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:14.558941720+00:00 stderr F E1013 00:22:14.558851 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:14.791533254+00:00 stderr F I1013 00:22:14.791479 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:14Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:14.792547662+00:00 stderr F E1013 00:22:14.792488 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:14.959232944+00:00 stderr F W1013 00:22:14.959117 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:14.959432670+00:00 stderr F E1013 00:22:14.959383 1 
base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:15.158406141+00:00 stderr F I1013 00:22:15.158308 1 request.go:697] Waited for 1.314800417s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:15.159419458+00:00 stderr F E1013 00:22:15.159374 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:15.358610194+00:00 stderr F E1013 00:22:15.358549 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:15.559055415+00:00 stderr F W1013 00:22:15.558966 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:15.559055415+00:00 stderr F E1013 00:22:15.559015 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:15.759082384+00:00 stderr F E1013 00:22:15.759017 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.158955117+00:00 stderr F E1013 00:22:16.158851 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:16.357570969+00:00 stderr F I1013 00:22:16.357499 1 request.go:697] Waited for 1.598184279s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:16.361758721+00:00 stderr F E1013 00:22:16.361695 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.759157897+00:00 stderr F E1013 00:22:16.759081 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.958619431+00:00 stderr F E1013 00:22:16.958541 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:17.159262277+00:00 
stderr F W1013 00:22:17.159204 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:17.159430491+00:00 stderr F E1013 00:22:17.159408 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:17.357624911+00:00 stderr F I1013 00:22:17.357517 1 request.go:697] Waited for 1.627218768s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:17.358521965+00:00 stderr F E1013 00:22:17.358444 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:17.558813011+00:00 stderr F E1013 00:22:17.558715 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:17.758625375+00:00 stderr F E1013 00:22:17.758135 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:17.959191008+00:00 stderr F E1013 00:22:17.959126 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:18.159355181+00:00 stderr F E1013 00:22:18.159274 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:18.358589379+00:00 stderr F I1013 00:22:18.358486 1 request.go:697] Waited for 1.798762803s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:18.359580706+00:00 stderr F W1013 00:22:18.359513 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:18.359580706+00:00 stderr F E1013 00:22:18.359556 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:18.758778501+00:00 stderr F E1013 00:22:18.758481 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-10-13T00:22:18.958622875+00:00 stderr F E1013 00:22:18.958522 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.159014564+00:00 stderr F E1013 00:22:19.158582 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.359020033+00:00 stderr F W1013 00:22:19.358915 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.359020033+00:00 stderr F E1013 00:22:19.358998 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:19.558066056+00:00 stderr F I1013 00:22:19.557982 1 request.go:697] Waited for 1.712793111s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:19.560792789+00:00 stderr F E1013 00:22:19.560745 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:19.914183992+00:00 stderr F I1013 00:22:19.914135 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:19Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:19.915114137+00:00 stderr F E1013 00:22:19.915067 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:19.958983217+00:00 stderr F E1013 00:22:19.958896 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.158912203+00:00 stderr F E1013 00:22:20.158845 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:20.359043875+00:00 stderr F E1013 00:22:20.358981 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.757769987+00:00 stderr F I1013 00:22:20.757703 1 request.go:697] Waited for 1.915802639s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:20.758506617+00:00 stderr F E1013 00:22:20.758462 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.959044530+00:00 stderr F E1013 00:22:20.958909 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.159342256+00:00 stderr F E1013 00:22:21.159285 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.359447627+00:00 stderr F E1013 00:22:21.359308 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:21.558587463+00:00 stderr F E1013 00:22:21.558491 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.759133666+00:00 stderr F W1013 00:22:21.759028 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.759133666+00:00 stderr F E1013 00:22:21.759101 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:21.958280181+00:00 stderr F I1013 00:22:21.958210 1 request.go:697] Waited for 1.632606964s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:21.959616877+00:00 stderr F E1013 00:22:21.959586 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:22.159157194+00:00 stderr F W1013 00:22:22.159040 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.159157194+00:00 stderr F E1013 00:22:22.159111 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.358801342+00:00 stderr F E1013 00:22:22.358744 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.758875381+00:00 stderr F E1013 00:22:22.758787 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:22.958645364+00:00 stderr F I1013 00:22:22.958338 1 request.go:697] Waited for 1.916802007s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:22.959874457+00:00 stderr F E1013 00:22:22.959836 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:23.158917509+00:00 stderr F E1013 00:22:23.158619 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:23.561953698+00:00 stderr F E1013 00:22:23.561912 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": 
dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:23.959019275+00:00 stderr F E1013 00:22:23.958955 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:24.157816671+00:00 stderr F I1013 00:22:24.157703 1 request.go:697] Waited for 1.598214188s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:24.158915720+00:00 stderr F E1013 00:22:24.158865 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.359115284+00:00 stderr F E1013 00:22:24.359046 1 
base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.558558008+00:00 stderr F E1013 00:22:24.558230 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.763380646+00:00 stderr F W1013 00:22:24.762498 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.763380646+00:00 stderr F E1013 00:22:24.762533 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:24.959838389+00:00 stderr F E1013 00:22:24.959444 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.157971347+00:00 stderr F I1013 00:22:25.157885 1 request.go:697] Waited for 1.797958189s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:25.159044546+00:00 stderr F E1013 00:22:25.158917 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.359391964+00:00 stderr F E1013 00:22:25.359266 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.559106395+00:00 stderr F W1013 00:22:25.559026 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.559106395+00:00 stderr F E1013 00:22:25.559075 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.759263297+00:00 stderr F E1013 00:22:25.759190 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:26.159147191+00:00 stderr F E1013 00:22:26.159077 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.357662459+00:00 stderr F I1013 00:22:26.357565 1 request.go:697] Waited for 1.797520529s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:26.358754029+00:00 stderr F E1013 00:22:26.358687 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.567408490+00:00 stderr F E1013 00:22:26.564821 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:26.759100155+00:00 stderr F E1013 00:22:26.759020 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.358085982+00:00 stderr F I1013 00:22:27.358013 1 request.go:697] Waited for 1.399151775s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:27.359149091+00:00 stderr F E1013 00:22:27.359087 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.559506239+00:00 stderr F E1013 00:22:27.559421 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.759613180+00:00 stderr F E1013 00:22:27.759501 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:27.958470738+00:00 stderr F E1013 00:22:27.958401 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.158805005+00:00 stderr F E1013 00:22:28.158717 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.359906503+00:00 stderr F E1013 00:22:28.359571 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.557770504+00:00 stderr F I1013 00:22:28.557714 1 request.go:697] Waited for 1.398067707s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:28.559449169+00:00 stderr F W1013 00:22:28.559383 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.559449169+00:00 stderr F E1013 00:22:28.559438 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.758903213+00:00 stderr F W1013 00:22:28.758497 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:28.758903213+00:00 stderr F E1013 00:22:28.758875 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:29.158685354+00:00 stderr F E1013 00:22:29.158569 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:29.359405142+00:00 stderr F E1013 00:22:29.359279 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.561382283+00:00 stderr F I1013 00:22:29.558659 1 request.go:697] Waited for 1.638941145s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-10-13T00:22:29.561382283+00:00 stderr F E1013 00:22:29.559801 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:29.761397262+00:00 stderr F E1013 00:22:29.759436 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.156819046+00:00 stderr F I1013 00:22:30.156746 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:30Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", 
UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:30.157580236+00:00 stderr F E1013 00:22:30.157534 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.359077765+00:00 stderr F E1013 00:22:30.358990 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.558383085+00:00 stderr F E1013 00:22:30.558284 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:30.758482355+00:00 stderr F E1013 00:22:30.758406 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.959595343+00:00 stderr F E1013 00:22:30.959535 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.158306847+00:00 stderr F I1013 00:22:31.158242 1 request.go:697] Waited for 1.155665477s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:31.159516149+00:00 stderr F E1013 00:22:31.159483 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.359236420+00:00 stderr F W1013 00:22:31.359161 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.359278761+00:00 stderr F E1013 00:22:31.359235 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.517719902+00:00 stderr F I1013 00:22:31.517639 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:31Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:31.519562592+00:00 
stderr F E1013 00:22:31.519523 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.758371334+00:00 stderr F E1013 00:22:31.758279 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.959466742+00:00 stderr F E1013 00:22:31.959364 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.360293461+00:00 stderr F E1013 00:22:32.359865 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:32.558136482+00:00 stderr F I1013 00:22:32.558013 1 request.go:697] Waited for 1.037969763s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:32.758778287+00:00 stderr F E1013 00:22:32.758712 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:32.959364701+00:00 stderr F E1013 00:22:32.959296 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.160554181+00:00 stderr F E1013 00:22:33.160498 1 base_controller.go:268] 
OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:33.359429991+00:00 stderr F E1013 00:22:33.359060 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.559008709+00:00 stderr F E1013 00:22:33.558928 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.757859508+00:00 stderr F I1013 00:22:33.757789 1 request.go:697] Waited for 1.395641826s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:33.758905028+00:00 stderr F E1013 00:22:33.758867 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.959977548+00:00 stderr F E1013 00:22:33.959583 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.158669173+00:00 stderr F W1013 00:22:34.158597 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.158713924+00:00 stderr F E1013 00:22:34.158662 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-10-13T00:22:34.358738465+00:00 stderr F E1013 00:22:34.358683 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:34.758249733+00:00 stderr F I1013 00:22:34.758195 1 request.go:697] Waited for 1.532944818s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-10-13T00:22:34.759678353+00:00 stderr F E1013 00:22:34.759545 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial 
tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:35.158779190+00:00 stderr F W1013 00:22:35.158695 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.158779190+00:00 stderr F E1013 00:22:35.158749 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:35.559079570+00:00 stderr F E1013 00:22:35.559013 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.758389951+00:00 stderr F I1013 00:22:35.758311 1 request.go:697] Waited for 1.077837503s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-10-13T00:22:35.759642896+00:00 stderr F E1013 00:22:35.759564 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:35.959464962+00:00 stderr F E1013 00:22:35.959369 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.159064922+00:00 stderr F W1013 00:22:36.158859 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.159064922+00:00 stderr F E1013 00:22:36.158907 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:36.559091554+00:00 stderr F E1013 00:22:36.559007 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.161868344+00:00 stderr F E1013 00:22:37.161149 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.359980253+00:00 stderr F E1013 00:22:37.359859 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.559156911+00:00 stderr F E1013 00:22:37.559080 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.759117401+00:00 stderr F W1013 00:22:37.759015 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:37.759117401+00:00 stderr F E1013 00:22:37.759089 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.158798723+00:00 stderr F E1013 00:22:38.158715 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:38.358511866+00:00 stderr F E1013 00:22:38.358452 1 base_controller.go:268] PayloadConfig reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:38.559728961+00:00 stderr F E1013 00:22:38.559652 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:38.758872359+00:00 stderr F E1013 00:22:38.758809 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.159050095+00:00 stderr F E1013 00:22:39.158792 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.559370086+00:00 stderr F E1013 00:22:39.559228 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.759320076+00:00 stderr F W1013 00:22:39.759192 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:39.759320076+00:00 stderr F E1013 00:22:39.759262 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.159294197+00:00 stderr F E1013 00:22:40.158924 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:41.715219227+00:00 stderr F E1013 00:22:41.715140 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.484638570+00:00 stderr F E1013 00:22:42.484559 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.401730275+00:00 stderr F W1013 00:22:45.401605 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.401730275+00:00 stderr F E1013 00:22:45.401689 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:22:45.735891554+00:00 stderr F E1013 00:22:45.735286 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.653300823+00:00 stderr F E1013 00:22:49.652260 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.659364452+00:00 stderr F E1013 00:22:49.659274 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.670286697+00:00 stderr F E1013 00:22:49.670222 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.691787856+00:00 stderr F E1013 00:22:49.691708 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.734642949+00:00 stderr F E1013 00:22:49.734585 1 
base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.801786850+00:00 stderr F E1013 00:22:49.801734 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.815894243+00:00 stderr F E1013 00:22:49.815842 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:49.977058112+00:00 stderr F E1013 00:22:49.976991 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.002839390+00:00 stderr F W1013 00:22:50.002745 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.002839390+00:00 stderr F E1013 00:22:50.002775 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.299248956+00:00 stderr F E1013 00:22:50.299186 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.403141880+00:00 stderr F E1013 00:22:50.403079 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.639191056+00:00 stderr F I1013 00:22:50.639119 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:22:50Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:22:50.640647006+00:00 stderr F E1013 00:22:50.640603 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.941260990+00:00 stderr F E1013 00:22:50.941196 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.716850804+00:00 stderr F E1013 00:22:51.716783 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.226627533+00:00 stderr F E1013 00:22:52.226539 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.727920187+00:00 stderr F E1013 00:22:52.727834 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:54.789363270+00:00 stderr F E1013 00:22:54.788828 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:01.519434846+00:00 stderr F I1013 00:23:01.518689 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.534641890+00:00 stderr F I1013 00:23:01.534558 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") 2025-10-13T00:23:01.603464867+00:00 stderr F I1013 00:23:01.603321 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.613147976+00:00 stderr F E1013 00:23:01.613071 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:01.619774791+00:00 stderr F I1013 00:23:01.619637 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.627602199+00:00 stderr F E1013 00:23:01.627523 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:01.638690218+00:00 stderr F I1013 00:23:01.638637 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.643960035+00:00 stderr F E1013 00:23:01.643914 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:01.665055072+00:00 stderr F I1013 00:23:01.664909 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, 
time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.669051054+00:00 stderr F E1013 00:23:01.668981 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:01.711604029+00:00 stderr F I1013 00:23:01.710384 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.715589890+00:00 stderr F E1013 00:23:01.715533 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:01.796892905+00:00 stderr F I1013 00:23:01.796821 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.802442609+00:00 stderr F E1013 00:23:01.802398 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:01.963297920+00:00 stderr F I1013 00:23:01.963229 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:01Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:01.968656619+00:00 stderr F E1013 00:23:01.968590 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:02.289938608+00:00 stderr F I1013 00:23:02.289858 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:02Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:02.296512261+00:00 stderr F E1013 00:23:02.296407 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:02.725529652+00:00 stderr F I1013 00:23:02.725459 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:02Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister 
port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:02.731769986+00:00 stderr F E1013 00:23:02.731708 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:02.937747503+00:00 stderr F I1013 00:23:02.937674 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:02Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-10-13T00:23:02.944726277+00:00 stderr F E1013 00:23:02.944698 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:05.505766535+00:00 stderr F I1013 00:23:05.505675 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:05Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:05.514288753+00:00 stderr F E1013 00:23:05.514228 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:05.904737699+00:00 stderr F E1013 00:23:05.904671 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:23:10.314698988+00:00 stderr F I1013 00:23:10.313925 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:10Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:10.320912651+00:00 stderr F E1013 00:23:10.320842 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:10.635514964+00:00 stderr F I1013 00:23:10.635415 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:10Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:10.643982740+00:00 stderr F E1013 00:23:10.643907 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:31.125506695+00:00 stderr F I1013 00:23:31.124989 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:31Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:31.130633977+00:00 stderr F E1013 00:23:31.130596 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:31.518902753+00:00 stderr F I1013 00:23:31.518832 1 status_controller.go:218] 
clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:31Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:31.528565042+00:00 stderr F E1013 00:23:31.528477 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:32.727068266+00:00 stderr F I1013 00:23:32.726683 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:32Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, 
time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:32.734319008+00:00 stderr F E1013 00:23:32.734266 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:33.905366058+00:00 stderr F I1013 00:23:33.905258 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:34.644005443+00:00 stderr F I1013 00:23:34.643937 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:23:34.673200576+00:00 stderr F I1013 00:23:34.672422 1 helpers.go:184] lister was stale at resourceVersion=42859, live get showed resourceVersion=43023 2025-10-13T00:23:34.673200576+00:00 stderr F E1013 00:23:34.672599 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:23:46.443472665+00:00 stderr F I1013 00:23:46.442466 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:46.472433502+00:00 stderr F I1013 00:23:46.472379 1 helpers.go:184] lister was stale at resourceVersion=42859, live get showed resourceVersion=43023 2025-10-13T00:23:46.472598397+00:00 stderr F E1013 00:23:46.472552 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:23:46.815438757+00:00 stderr F I1013 00:23:46.815382 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:23:46.845645218+00:00 stderr F I1013 00:23:46.845601 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-10-13T00:23:46Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42855\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:23:46.849281509+00:00 stderr F E1013 00:23:46.849260 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:23:46.911100141+00:00 stderr F I1013 00:23:46.911033 1 helpers.go:184] lister was stale at resourceVersion=42859, live get showed resourceVersion=43023 2025-10-13T00:23:46.911482542+00:00 stderr F E1013 00:23:46.911450 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 
12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-10-13T00:23:49.973322299+00:00 stderr F I1013 00:23:49.973258 1 helpers.go:184] lister was stale at resourceVersion=42859, live get showed resourceVersion=43023 2025-10-13T00:23:49.973433072+00:00 stderr F E1013 00:23:49.973412 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035454d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) ././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015073043233032762 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015073043233032762 5ustar zuulzuul././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000001665715073043233033003 0ustar zuulzuul2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469580 1 flags.go:64] FLAG: --add-dir-header="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469717 1 flags.go:64] FLAG: --allow-paths="[]" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469726 1 flags.go:64] FLAG: --alsologtostderr="false" 
2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469730 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469733 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469738 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469741 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469745 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469749 1 flags.go:64] FLAG: --client-ca-file="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469752 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469756 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469759 1 flags.go:64] FLAG: --http2-disable="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469762 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469767 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469770 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469775 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469778 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469783 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469787 1 flags.go:64] FLAG: --kubeconfig="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469790 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469793 1 flags.go:64] FLAG: --log-dir="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469796 1 flags.go:64] FLAG: --log-file="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469799 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469803 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469812 1 flags.go:64] FLAG: --logtostderr="true" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469815 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469818 1 flags.go:64] FLAG: --oidc-clientID="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469821 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469824 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469827 1 flags.go:64] FLAG: --oidc-issuer="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469830 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469838 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469841 1 flags.go:64] FLAG: --one-output="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469844 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 
2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469847 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469850 1 flags.go:64] FLAG: --skip-headers="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469853 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469856 1 flags.go:64] FLAG: --stderrthreshold="" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469858 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469861 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469870 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469874 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469878 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-10-13T00:15:00.469888775+00:00 stderr F I1013 00:15:00.469882 1 flags.go:64] FLAG: --upstream="http://localhost:8080/" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469886 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469889 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469891 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469894 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469897 1 flags.go:64] FLAG: --v="3" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469900 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469905 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:15:00.477523714+00:00 stderr F W1013 00:15:00.469911 1 deprecated.go:66] 2025-10-13T00:15:00.477523714+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:15:00.477523714+00:00 stderr F 2025-10-13T00:15:00.477523714+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:15:00.477523714+00:00 stderr F 2025-10-13T00:15:00.477523714+00:00 stderr F =============================================== 2025-10-13T00:15:00.477523714+00:00 stderr F 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.469921 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.471083 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.471129 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.471646 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2025-10-13T00:15:00.477523714+00:00 stderr F I1013 00:15:00.472189 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443 ././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000001706415073043233032774 0ustar zuulzuul2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.288377 1 flags.go:64] FLAG: --add-dir-header="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289195 1 flags.go:64] FLAG: --allow-paths="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289218 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289224 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289229 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289236 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289242 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289247 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289260 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289265 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289270 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289275 1 flags.go:64] FLAG: --http2-disable="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289281 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289288 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289293 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289306 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289311 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289318 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289416 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 
19:59:04.289423 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289429 1 flags.go:64] FLAG: --log-dir="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289434 1 flags.go:64] FLAG: --log-file="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289439 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289445 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289468 1 flags.go:64] FLAG: --logtostderr="true" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289474 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289479 1 flags.go:64] FLAG: --oidc-clientID="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289484 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289489 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289493 1 flags.go:64] FLAG: --oidc-issuer="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289498 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289519 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289523 1 flags.go:64] FLAG: --one-output="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289528 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289593 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289600 1 flags.go:64] FLAG: --skip-headers="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289605 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289610 1 flags.go:64] FLAG: --stderrthreshold="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289614 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289619 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289642 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289648 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289659 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289668 1 flags.go:64] FLAG: --upstream="http://localhost:8080/" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289673 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289677 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289682 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-08-13T19:59:04.289699934+00:00 stderr F I0813 19:59:04.289686 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-08-13T19:59:04.289699934+00:00 stderr F I0813 
19:59:04.289691 1 flags.go:64] FLAG: --v="3" 2025-08-13T19:59:04.289709695+00:00 stderr F I0813 19:59:04.289696 1 flags.go:64] FLAG: --version="false" 2025-08-13T19:59:04.289709695+00:00 stderr F I0813 19:59:04.289703 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:59:04.289906890+00:00 stderr F W0813 19:59:04.289737 1 deprecated.go:66] 2025-08-13T19:59:04.289906890+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F =============================================== 2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F I0813 19:59:04.289857 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:04.291096624+00:00 stderr F I0813 19:59:04.291044 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:04.291190227+00:00 stderr F I0813 19:59:04.291158 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:04.292125893+00:00 stderr F I0813 19:59:04.292006 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2025-08-13T19:59:04.292856924+00:00 stderr F I0813 19:59:04.292710 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443 2025-08-13T20:42:44.436262779+00:00 stderr F I0813 20:42:44.435618 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015073043233032762 5ustar zuulzuul././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000003130715073043233032770 0ustar zuulzuul2025-08-13T20:05:37.430602579+00:00 stderr F I0813 20:05:37.428176 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9 2025-08-13T20:05:37.431396461+00:00 stderr F I0813 20:05:37.431086 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:37.531196519+00:00 stderr F I0813 20:05:37.531136 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator... 
2025-08-13T20:08:24.743848747+00:00 stderr F E0813 20:08:24.742720 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:12:03.283463562+00:00 stderr F I0813 20:12:03.282470 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-08-13T20:12:03.408514977+00:00 stderr F I0813 20:12:03.408278 1 operator.go:214] Starting Machine API Operator 2025-08-13T20:12:03.414883729+00:00 stderr F I0813 20:12:03.414653 1 reflector.go:289] Starting reflector *v1.DaemonSet (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.414883729+00:00 stderr F I0813 20:12:03.414725 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415272170+00:00 stderr F I0813 20:12:03.415179 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415292001+00:00 stderr F I0813 20:12:03.415267 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415292001+00:00 stderr F I0813 20:12:03.415270 1 reflector.go:289] Starting reflector *v1.Proxy (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415304351+00:00 stderr F I0813 20:12:03.415294 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415413054+00:00 stderr F I0813 20:12:03.415295 1 reflector.go:289] Starting reflector *v1.FeatureGate (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415431205+00:00 stderr F I0813 20:12:03.415408 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415558749+00:00 stderr F I0813 20:12:03.415230 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415558749+00:00 stderr F I0813 20:12:03.415521 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415901538+00:00 stderr F I0813 20:12:03.415833 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (17m0.773680079s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415901538+00:00 stderr F I0813 20:12:03.415866 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416042702+00:00 stderr F I0813 20:12:03.416014 1 reflector.go:289] Starting reflector *v1.Deployment (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.416146305+00:00 stderr F I0813 20:12:03.416087 1 reflector.go:289] Starting reflector *v1.ClusterVersion (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416183666+00:00 stderr F I0813 20:12:03.416143 1 
reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416218927+00:00 stderr F I0813 20:12:03.416098 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.416357301+00:00 stderr F I0813 20:12:03.416207 1 reflector.go:289] Starting reflector *v1beta1.Machine (17m0.773680079s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416357301+00:00 stderr F I0813 20:12:03.416297 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416733772+00:00 stderr F I0813 20:12:03.416616 1 reflector.go:289] Starting reflector *v1.ClusterOperator (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416733772+00:00 stderr F I0813 20:12:03.416652 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.432283748+00:00 stderr F I0813 20:12:03.432183 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.432682600+00:00 stderr F I0813 20:12:03.432620 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.434528012+00:00 stderr F I0813 20:12:03.434460 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435013376+00:00 stderr F I0813 20:12:03.434965 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.435326135+00:00 stderr F I0813 20:12:03.434887 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435607613+00:00 stderr F I0813 20:12:03.435557 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435745857+00:00 stderr F I0813 20:12:03.435723 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.436151169+00:00 stderr F I0813 20:12:03.436098 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.442764949+00:00 stderr F I0813 20:12:03.442694 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.443384886+00:00 stderr F I0813 20:12:03.443356 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.508870364+00:00 stderr F I0813 20:12:03.508710 1 operator.go:226] Synced up caches 2025-08-13T20:12:03.509038179+00:00 stderr F I0813 20:12:03.509010 1 operator.go:231] Started feature gate accessor 2025-08-13T20:12:03.510557512+00:00 stderr F I0813 20:12:03.510481 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 
2025-08-13T20:12:03.510951624+00:00 stderr F I0813 20:12:03.510718 1 start.go:121] Synced up machine api informer caches 2025-08-13T20:12:03.511367516+00:00 stderr F I0813 20:12:03.511243 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:12:03.553660058+00:00 stderr F I0813 20:12:03.553520 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:12:03.564646243+00:00 stderr F I0813 20:12:03.564506 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:12:03.571000755+00:00 stderr F I0813 20:12:03.570848 1 status.go:99] Syncing status: available 2025-08-13T20:27:01.768104333+00:00 stderr F I0813 20:27:01.766359 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:27:01.777550173+00:00 stderr F I0813 20:27:01.777474 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:27:01.781471015+00:00 stderr F I0813 20:27:01.780930 1 status.go:99] Syncing status: available 2025-08-13T20:41:59.995734233+00:00 stderr F I0813 20:41:59.994921 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:42:00.004860526+00:00 stderr F I0813 20:42:00.003019 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:42:00.011051705+00:00 stderr F I0813 20:42:00.009869 1 status.go:99] Syncing status: available 2025-08-13T20:42:36.421768310+00:00 stderr F I0813 20:42:36.415200 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.424103457+00:00 stderr F I0813 20:42:36.424040 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.415396 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.419956 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.443491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.415610 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421869 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421888 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421911 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.457922882+00:00 stderr F I0813 20:42:36.421929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF ././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000003046315073043233032772 0ustar zuulzuul2025-08-13T19:59:30.422692088+00:00 stderr F I0813 19:59:30.324610 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9 2025-08-13T19:59:30.730222715+00:00 stderr F I0813 19:59:30.729311 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:32.264897550+00:00 stderr F I0813 19:59:32.241064 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator... 
2025-08-13T19:59:32.786068206+00:00 stderr F I0813 19:59:32.785728 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-08-13T19:59:33.460151711+00:00 stderr F I0813 19:59:33.456613 1 reflector.go:289] Starting reflector *v1.DaemonSet (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.460264245+00:00 stderr F I0813 19:59:33.460239 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.467459410+00:00 stderr F I0813 19:59:33.466231 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.470166087+00:00 stderr F I0813 19:59:33.470142 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.470908838+00:00 stderr F I0813 19:59:33.470700 1 reflector.go:289] Starting reflector *v1.Deployment (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.471114444+00:00 stderr F I0813 19:59:33.471098 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.528297624+00:00 stderr F I0813 19:59:33.526236 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.528948693+00:00 stderr F I0813 19:59:33.528641 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.536238090+00:00 stderr F I0813 19:59:33.535890 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.546972206+00:00 stderr F I0813 19:59:33.536681 1 reflector.go:289] Starting reflector *v1beta1.Machine (11m3.181866381s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.662896021+00:00 stderr F I0813 19:59:33.662732 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.677561929+00:00 stderr F I0813 19:59:33.552532 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (11m3.181866381s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.710037935+00:00 stderr F I0813 19:59:33.700993 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.570180 1 operator.go:214] Starting Machine API Operator 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571397 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571579 1 reflector.go:289] Starting reflector *v1.Proxy (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571608 1 reflector.go:289] Starting reflector *v1.ClusterOperator (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571644 1 reflector.go:289] 
Starting reflector *v1.ClusterVersion (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.732953 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.734011 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.762266853+00:00 stderr F I0813 19:59:33.762202 1 reflector.go:289] Starting reflector *v1.FeatureGate (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.777463697+00:00 stderr F I0813 19:59:33.777395 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.813468923+00:00 stderr F I0813 19:59:33.813001 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.818254169+00:00 stderr F I0813 19:59:33.814688 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:34.934623872+00:00 stderr F I0813 19:59:34.934463 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:34.982955680+00:00 stderr F I0813 19:59:34.978433 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.019614 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.033448 1 operator.go:226] Synced up caches 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.033478 1 operator.go:231] Started feature gate accessor 2025-08-13T19:59:35.037744732+00:00 stderr F I0813 19:59:35.034090 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:35.054888230+00:00 stderr F I0813 19:59:35.049334 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.280303986+00:00 stderr F I0813 19:59:35.279731 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.305181 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", 
"ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.345068 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.345498 1 start.go:121] Synced up machine api informer caches 2025-08-13T19:59:35.863532660+00:00 stderr F I0813 19:59:35.854514 1 status.go:69] Syncing status: re-syncing 2025-08-13T19:59:35.998346093+00:00 stderr F I0813 19:59:35.927690 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T19:59:36.046953968+00:00 stderr F I0813 19:59:36.046744 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:36.141443632+00:00 stderr F I0813 19:59:36.141079 1 status.go:99] Syncing status: available 2025-08-13T19:59:36.367874066+00:00 stderr F I0813 19:59:36.366889 1 status.go:69] Syncing status: re-syncing 2025-08-13T19:59:36.407210128+00:00 stderr F I0813 19:59:36.405968 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T19:59:36.451908862+00:00 stderr F I0813 19:59:36.451686 1 status.go:99] Syncing status: available 2025-08-13T20:01:53.429105265+00:00 stderr F E0813 20:01:53.428030 1 leaderelection.go:369] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "machine-api-operator": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:02:53.434100147+00:00 stderr F E0813 20:02:53.432992 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.444965011+00:00 stderr F E0813 20:03:53.443054 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:53.434373710+00:00 stderr F E0813 20:04:53.434088 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:34.054634372+00:00 stderr F I0813 20:05:34.050754 1 leaderelection.go:285] failed to renew lease openshift-machine-api/machine-api-operator: timed out waiting for the condition 2025-08-13T20:05:34.152035191+00:00 stderr F E0813 20:05:34.147127 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "machine-api-operator": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:05:34.165941200+00:00 stderr F F0813 20:05:34.165368 1 start.go:104] Leader election lost ././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000002462115073043233032771 0ustar zuulzuul2025-10-13T00:15:02.345419430+00:00 stderr F I1013 00:15:02.344424 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9 2025-10-13T00:15:02.345419430+00:00 stderr F I1013 00:15:02.345089 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:15:02.459773946+00:00 stderr F I1013 00:15:02.459705 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator... 
2025-10-13T00:20:11.756907768+00:00 stderr F I1013 00:20:11.755839 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-10-13T00:20:11.776182465+00:00 stderr F I1013 00:20:11.776101 1 operator.go:214] Starting Machine API Operator 2025-10-13T00:20:11.777723533+00:00 stderr F I1013 00:20:11.777669 1 reflector.go:289] Starting reflector *v1.DaemonSet (18m2.448374324s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.777723533+00:00 stderr F I1013 00:20:11.777698 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.777748222+00:00 stderr F I1013 00:20:11.777636 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (18m2.448374324s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.777758621+00:00 stderr F I1013 00:20:11.777721 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (18m2.448374324s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.777768440+00:00 stderr F I1013 00:20:11.777749 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.777780419+00:00 stderr F I1013 00:20:11.777763 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.777882451+00:00 stderr F I1013 00:20:11.777830 1 reflector.go:289] Starting reflector *v1.ClusterVersion (17m0.952827617s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.777882451+00:00 stderr F I1013 00:20:11.777862 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.777882451+00:00 stderr F I1013 00:20:11.777755 1 reflector.go:289] Starting reflector *v1.ClusterOperator (17m0.952827617s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.777882451+00:00 stderr F I1013 00:20:11.777873 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778068626+00:00 stderr F I1013 00:20:11.777843 1 reflector.go:289] Starting reflector *v1.Deployment (18m2.448374324s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.778068626+00:00 stderr F I1013 00:20:11.778051 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.778068626+00:00 stderr F I1013 00:20:11.778052 1 reflector.go:289] Starting reflector *v1.Proxy (17m0.952827617s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778068626+00:00 stderr F I1013 00:20:11.778061 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778232873+00:00 stderr F I1013 00:20:11.778195 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (19m32.324176432s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778232873+00:00 stderr F I1013 00:20:11.778211 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 
2025-10-13T00:20:11.778267310+00:00 stderr F I1013 00:20:11.778217 1 reflector.go:289] Starting reflector *v1.FeatureGate (17m0.952827617s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778267310+00:00 stderr F I1013 00:20:11.778226 1 reflector.go:289] Starting reflector *v1beta1.Machine (19m32.324176432s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778267310+00:00 stderr F I1013 00:20:11.778246 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.778267310+00:00 stderr F I1013 00:20:11.778261 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-10-13T00:20:11.784522876+00:00 stderr F I1013 00:20:11.784479 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-10-13T00:20:11.785316594+00:00 stderr F I1013 00:20:11.784972 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.785316594+00:00 stderr F I1013 00:20:11.785229 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.785316594+00:00 stderr F I1013 00:20:11.785234 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.786008219+00:00 stderr F I1013 00:20:11.785887 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.786203673+00:00 stderr F I1013 00:20:11.786172 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.786431705+00:00 stderr F I1013 00:20:11.786312 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-10-13T00:20:11.788909800+00:00 stderr F I1013 00:20:11.788841 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:20:11.792714509+00:00 stderr F I1013 00:20:11.792615 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.794607199+00:00 stderr F I1013 00:20:11.794545 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:20:11.876491840+00:00 stderr F I1013 00:20:11.876420 1 operator.go:226] Synced up caches 2025-10-13T00:20:11.876491840+00:00 stderr F I1013 00:20:11.876463 1 operator.go:231] Started feature gate accessor 2025-10-13T00:20:11.876543235+00:00 stderr F I1013 00:20:11.876500 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:20:11.877088632+00:00 stderr F I1013 00:20:11.876987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' 
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:20:11.878205994+00:00 stderr F I1013 00:20:11.878150 1 start.go:121] Synced up machine api informer caches 2025-10-13T00:20:11.893378065+00:00 stderr F I1013 00:20:11.892545 1 status.go:69] Syncing status: re-syncing 2025-10-13T00:20:11.898523979+00:00 stderr F I1013 00:20:11.898480 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-10-13T00:20:11.902272593+00:00 stderr F I1013 00:20:11.902221 1 status.go:99] Syncing status: available 2025-10-13T00:22:11.776415093+00:00 stderr F E1013 00:22:11.775925 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar 
zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000016500515073043232033055 0ustar zuulzuul2025-08-13T20:00:30.228096014+00:00 stderr F I0813 20:00:30.192242 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a0b9a0 cert-dir:0xc000a0bb80 cert-secrets:0xc000a0b900 configmaps:0xc000a0b4a0 namespace:0xc000a0b2c0 optional-cert-configmaps:0xc000a0bae0 optional-cert-secrets:0xc000a0ba40 optional-configmaps:0xc000a0b5e0 optional-secrets:0xc000a0b540 pod:0xc000a0b360 pod-manifest-dir:0xc000a0b720 resource-dir:0xc000a0b680 revision:0xc000a0b220 secrets:0xc000a0b400 v:0xc000a20fa0] [0xc000a20fa0 0xc000a0b220 0xc000a0b2c0 0xc000a0b360 0xc000a0b680 0xc000a0b720 0xc000a0b4a0 0xc000a0b5e0 0xc000a0b400 0xc000a0b540 0xc000a0bb80 0xc000a0b9a0 0xc000a0bae0 0xc000a0b900 0xc000a0ba40] [] map[cert-configmaps:0xc000a0b9a0 cert-dir:0xc000a0bb80 cert-secrets:0xc000a0b900 configmaps:0xc000a0b4a0 help:0xc000a21360 kubeconfig:0xc000a0b180 log-flush-frequency:0xc000a20f00 namespace:0xc000a0b2c0 optional-cert-configmaps:0xc000a0bae0 optional-cert-secrets:0xc000a0ba40 optional-configmaps:0xc000a0b5e0 optional-secrets:0xc000a0b540 pod:0xc000a0b360 pod-manifest-dir:0xc000a0b720 pod-manifests-lock-file:0xc000a0b860 resource-dir:0xc000a0b680 revision:0xc000a0b220 secrets:0xc000a0b400 timeout-duration:0xc000a0b7c0 v:0xc000a20fa0 vmodule:0xc000a21040] [0xc000a0b180 0xc000a0b220 0xc000a0b2c0 0xc000a0b360 0xc000a0b400 0xc000a0b4a0 0xc000a0b540 0xc000a0b5e0 0xc000a0b680 0xc000a0b720 0xc000a0b7c0 0xc000a0b860 0xc000a0b900 0xc000a0b9a0 0xc000a0ba40 0xc000a0bae0 0xc000a0bb80 0xc000a20f00 0xc000a20fa0 0xc000a21040 0xc000a21360] [0xc000a0b9a0 0xc000a0bb80 0xc000a0b900 0xc000a0b4a0 0xc000a21360 0xc000a0b180 0xc000a20f00 0xc000a0b2c0 0xc000a0bae0 0xc000a0ba40 0xc000a0b5e0 0xc000a0b540 0xc000a0b360 0xc000a0b720 0xc000a0b860 0xc000a0b680 0xc000a0b220 0xc000a0b400 0xc000a0b7c0 0xc000a20fa0 0xc000a21040] map[104:0xc000a21360 118:0xc000a20fa0] [] -1 0 0xc000a023c0 true 0xa51380 []} 2025-08-13T20:00:30.241164866+00:00 stderr F I0813 20:00:30.240993 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000a14340)({ 2025-08-13T20:00:30.241164866+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:00:30.241164866+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:00:30.241164866+00:00 stderr F Revision: (string) (len=1) "9", 2025-08-13T20:00:30.241164866+00:00 stderr F NodeName: (string) "", 2025-08-13T20:00:30.241164866+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-08-13T20:00:30.241164866+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:00:30.241164866+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=11) "etcd-client", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "encryption-config", 
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=12) "cloud-config", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "aggregator-client", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=14) "kubelet-client", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-001", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-009" 
2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=9) "client-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2025-08-13T20:00:30.241164866+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:00:30.241164866+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:00:30.241164866+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:00:30.241164866+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:00:30.241164866+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:00:30.241164866+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:00:30.241164866+00:00 stderr F }) 2025-08-13T20:00:30.264011288+00:00 stderr F I0813 20:00:30.263917 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:00:30.302991629+00:00 stderr F I0813 20:00:30.302663 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:00:30.336711361+00:00 stderr F I0813 20:00:30.311214 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:01:00.316616777+00:00 stderr F I0813 20:01:00.315386 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:01:05.767036875+00:00 stderr F I0813 20:01:05.763385 1 cmd.go:539] Latest installer revision for node crc is: 9 2025-08-13T20:01:05.767036875+00:00 stderr F I0813 20:01:05.763700 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:01:06.853768102+00:00 stderr F I0813 20:01:06.853476 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:01:06.869727917+00:00 stderr F I0813 20:01:06.869633 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9" ... 2025-08-13T20:01:06.878019154+00:00 stderr F I0813 20:01:06.871148 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9" ... 2025-08-13T20:01:06.878019154+00:00 stderr F I0813 20:01:06.871194 1 cmd.go:226] Getting secrets ... 
2025-08-13T20:01:07.408893731+00:00 stderr F I0813 20:01:07.408596 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-9 2025-08-13T20:01:08.339296910+00:00 stderr F I0813 20:01:08.339174 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-9 2025-08-13T20:01:08.436071690+00:00 stderr F I0813 20:01:08.429282 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-9 2025-08-13T20:01:08.510517961+00:00 stderr F I0813 20:01:08.510440 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-9: secrets "encryption-config-9" not found 2025-08-13T20:01:08.565021486+00:00 stderr F I0813 20:01:08.564954 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-9 2025-08-13T20:01:08.565151519+00:00 stderr F I0813 20:01:08.565130 1 cmd.go:239] Getting config maps ... 2025-08-13T20:01:08.623391170+00:00 stderr F I0813 20:01:08.623334 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-9 2025-08-13T20:01:08.683964707+00:00 stderr F I0813 20:01:08.683901 1 copy.go:60] Got configMap openshift-kube-apiserver/config-9 2025-08-13T20:01:08.808296833+00:00 stderr F I0813 20:01:08.808012 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-9 2025-08-13T20:01:09.196587924+00:00 stderr F I0813 20:01:09.196529 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-9 2025-08-13T20:01:09.493872661+00:00 stderr F I0813 20:01:09.486134 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-9 2025-08-13T20:01:09.605195335+00:00 stderr F I0813 20:01:09.594112 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-9 2025-08-13T20:01:09.927254178+00:00 stderr F I0813 20:01:09.919756 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-9 2025-08-13T20:01:10.003346238+00:00 stderr F I0813 20:01:10.003247 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-9 2025-08-13T20:01:10.221272442+00:00 stderr F I0813 20:01:10.218353 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-9: configmaps "cloud-config-9" not found 2025-08-13T20:01:10.326975286+00:00 stderr F I0813 20:01:10.319482 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-9 2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.407728 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-9 2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.407859 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client" ... 2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.408445 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client/tls.crt" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409156 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client/tls.key" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409353 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409407 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/ca.crt" ... 
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409896 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.410201 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410373 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey" ... 2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410522 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey/tls.crt" ... 2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410745 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey/tls.key" ... 2025-08-13T20:01:14.420639313+00:00 stderr F I0813 20:01:14.420544 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/webhook-authenticator" ... 2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.420869 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/webhook-authenticator/kubeConfig" ... 2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.421250 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/bound-sa-token-signing-certs" ... 2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.421475 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.421654 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/config" ... 2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.421763 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/config/config.yaml" ... 2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.422214 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/etcd-serving-ca" ... 2025-08-13T20:01:14.426586313+00:00 stderr F I0813 20:01:14.422292 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/etcd-serving-ca/ca-bundle.crt" ... 2025-08-13T20:01:14.426586313+00:00 stderr F I0813 20:01:14.422534 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-audit-policies" ... 2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.446867 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-audit-policies/policy.yaml" ... 2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.447272 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-cert-syncer-kubeconfig" ... 
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.447334 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:01:14.447697575+00:00 stderr F I0813 20:01:14.447640 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod" ... 2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.447761 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448156 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/pod.yaml" ... 2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448585 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/version" ... 2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448743 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/forceRedeploymentReason" ... 2025-08-13T20:01:14.481921090+00:00 stderr F I0813 20:01:14.481337 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kubelet-serving-ca" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.515599 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kubelet-serving-ca/ca-bundle.crt" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.516600 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.516819 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.518506 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-002.pub" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.519250 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-003.pub" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.519910 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-server-ca" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.520220 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.520725 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/oauth-metadata" ... 2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.521051 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/oauth-metadata/oauthMetadata" ... 2025-08-13T20:01:14.536874107+00:00 stderr F I0813 20:01:14.535308 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ... 
2025-08-13T20:01:14.536874107+00:00 stderr F I0813 20:01:14.535433 1 cmd.go:226] Getting secrets ... 2025-08-13T20:01:18.851315639+00:00 stderr F I0813 20:01:18.840609 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client 2025-08-13T20:01:20.264538334+00:00 stderr F I0813 20:01:20.257589 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key 2025-08-13T20:01:20.938573184+00:00 stderr F I0813 20:01:20.927406 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key 2025-08-13T20:01:21.314140603+00:00 stderr F I0813 20:01:21.313762 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key 2025-08-13T20:01:21.682394503+00:00 stderr F I0813 20:01:21.667018 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey 2025-08-13T20:01:21.964411525+00:00 stderr F I0813 20:01:21.964309 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey 2025-08-13T20:01:22.200873837+00:00 stderr F I0813 20:01:22.198399 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client 2025-08-13T20:01:22.848864233+00:00 stderr F I0813 20:01:22.842456 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey 2025-08-13T20:01:23.458539357+00:00 stderr F I0813 20:01:23.458246 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs 2025-08-13T20:01:23.877878804+00:00 stderr F I0813 20:01:23.866135 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey 2025-08-13T20:01:23.987633334+00:00 stderr F I0813 20:01:23.987242 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found 2025-08-13T20:01:27.081947854+00:00 stderr F I0813 20:01:27.081695 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found 2025-08-13T20:01:31.496984765+00:00 stderr F I0813 20:01:31.490588 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found 2025-08-13T20:01:31.589915885+00:00 stderr F I0813 20:01:31.563334 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found 2025-08-13T20:01:36.824979329+00:00 stderr F I0813 20:01:36.823316 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found 2025-08-13T20:01:50.825746244+00:00 stderr F I0813 20:01:50.825271 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-004?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) 2025-08-13T20:01:56.153885648+00:00 stderr F I0813 20:01:56.150536 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found 2025-08-13T20:01:59.443927570+00:00 stderr F I0813 20:01:59.433966 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found 2025-08-13T20:02:01.285415308+00:00 stderr F I0813 20:02:01.285274 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found 2025-08-13T20:02:03.459094407+00:00 stderr F I0813 20:02:03.457472 1 copy.go:24] Failed to get secret 
openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found 2025-08-13T20:02:05.722268108+00:00 stderr F I0813 20:02:05.721390 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found 2025-08-13T20:02:09.365492892+00:00 stderr F I0813 20:02:09.362561 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found 2025-08-13T20:02:09.365492892+00:00 stderr F I0813 20:02:09.362663 1 cmd.go:239] Getting config maps ... 2025-08-13T20:02:13.044816027+00:00 stderr F I0813 20:02:13.044401 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca 2025-08-13T20:02:15.097919247+00:00 stderr F I0813 20:02:15.089376 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig 2025-08-13T20:02:16.180974263+00:00 stderr F I0813 20:02:16.179374 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca 2025-08-13T20:02:17.837210851+00:00 stderr F I0813 20:02:17.830667 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig 2025-08-13T20:02:20.534021793+00:00 stderr F I0813 20:02:20.531035 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T20:02:20.535981769+00:00 stderr F I0813 20:02:20.535950 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ... 2025-08-13T20:02:20.536941226+00:00 stderr F I0813 20:02:20.536680 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ... 2025-08-13T20:02:20.537939034+00:00 stderr F I0813 20:02:20.537889 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ... 2025-08-13T20:02:20.538769028+00:00 stderr F I0813 20:02:20.538690 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ... 2025-08-13T20:02:20.538769028+00:00 stderr F I0813 20:02:20.538732 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ... 2025-08-13T20:02:20.540256270+00:00 stderr F I0813 20:02:20.540170 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ... 2025-08-13T20:02:20.540469467+00:00 stderr F I0813 20:02:20.540399 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ... 2025-08-13T20:02:20.540469467+00:00 stderr F I0813 20:02:20.540461 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ... 2025-08-13T20:02:20.540669492+00:00 stderr F I0813 20:02:20.540589 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ... 2025-08-13T20:02:20.540817227+00:00 stderr F I0813 20:02:20.540735 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ... 
2025-08-13T20:02:20.540834697+00:00 stderr F I0813 20:02:20.540814 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ... 2025-08-13T20:02:20.541009062+00:00 stderr F I0813 20:02:20.540964 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ... 2025-08-13T20:02:20.541183147+00:00 stderr F I0813 20:02:20.541137 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ... 2025-08-13T20:02:20.541195297+00:00 stderr F I0813 20:02:20.541179 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ... 2025-08-13T20:02:20.541486776+00:00 stderr F I0813 20:02:20.541436 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ... 2025-08-13T20:02:20.541641640+00:00 stderr F I0813 20:02:20.541607 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ... 2025-08-13T20:02:20.541641640+00:00 stderr F I0813 20:02:20.541625 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ... 2025-08-13T20:02:20.630024441+00:00 stderr F I0813 20:02:20.629923 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ... 2025-08-13T20:02:20.630249658+00:00 stderr F I0813 20:02:20.630212 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ... 2025-08-13T20:02:20.630264268+00:00 stderr F I0813 20:02:20.630246 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ... 2025-08-13T20:02:20.630597198+00:00 stderr F I0813 20:02:20.630538 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ... 2025-08-13T20:02:20.630886386+00:00 stderr F I0813 20:02:20.630822 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ... 2025-08-13T20:02:20.630886386+00:00 stderr F I0813 20:02:20.630870 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ... 2025-08-13T20:02:20.631072681+00:00 stderr F I0813 20:02:20.631039 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ... 2025-08-13T20:02:20.632426470+00:00 stderr F I0813 20:02:20.632312 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ... 2025-08-13T20:02:20.632426470+00:00 stderr F I0813 20:02:20.632404 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ... 
2025-08-13T20:02:20.632697428+00:00 stderr F I0813 20:02:20.632617 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ... 2025-08-13T20:02:20.633043997+00:00 stderr F I0813 20:02:20.632913 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ... 2025-08-13T20:02:20.633158671+00:00 stderr F I0813 20:02:20.633103 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ... 2025-08-13T20:02:20.633451849+00:00 stderr F I0813 20:02:20.633357 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ... 2025-08-13T20:02:20.633451849+00:00 stderr F I0813 20:02:20.633396 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ... 2025-08-13T20:02:20.635386384+00:00 stderr F I0813 20:02:20.635321 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ... 2025-08-13T20:02:20.635597430+00:00 stderr F I0813 20:02:20.635523 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ... 2025-08-13T20:02:20.636249549+00:00 stderr F I0813 20:02:20.636167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:02:20.636457635+00:00 stderr F I0813 20:02:20.636391 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ... 2025-08-13T20:02:20.636457635+00:00 stderr F I0813 20:02:20.636410 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ... 2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.636551 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ... 2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.636573 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.637108 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ... 2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.637231 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ... 2025-08-13T20:02:20.637574647+00:00 stderr F I0813 20:02:20.637490 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:02:20.637733881+00:00 stderr F I0813 20:02:20.637640 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 
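Note how the copy phase above handles the user-serving-cert and user-serving-cert-000 through -009 secrets: a NotFound is logged by copy.go and skipped rather than treated as fatal, and the one transient API timeout (user-serving-cert-004) is simply retried until a definitive answer comes back. A rough bash equivalent of that "copy if present, skip if optional" behaviour, using oc instead of the operator's Go client; the function name and the use of oc are assumptions, and this sketch does not reproduce the timeout retry seen in the log:

  # Sketch: copy a secret into the certs dir, tolerating NotFound for optional ones.
  copy_optional_secret() {
    local ns="$1" name="$2" dest="$3"
    if oc get secret "${name}" -n "${ns}" -o yaml > "${dest}/${name}.yaml" 2>/dev/null; then
      echo "copied ${ns}/${name}"
    else
      rm -f "${dest}/${name}.yaml"
      echo "secret ${ns}/${name} not found, skipping (optional)"
    fi
  }
  copy_optional_secret openshift-kube-apiserver user-serving-cert /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets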
2025-08-13T20:02:20.638261256+00:00 stderr F I0813 20:02:20.638139 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-9 -n openshift-kube-apiserver 2025-08-13T20:02:21.120887644+00:00 stderr F I0813 20:02:21.117049 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:02:21.120887644+00:00 stderr F I0813 20:02:21.117113 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2025-08-13T20:02:21.120887644+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"9"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. 
There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"9"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resourc
es":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url= 2025-08-13T20:02:21.120947706+00:00 stderr F https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.122047 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/kube-apiserver-pod.yaml" ... 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.124085 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.124100 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
2025-08-13T20:02:21.130930681+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"9"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. 
There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"9"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resourc
es":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-a 2025-08-13T20:02:21.131093105+00:00 stderr F piserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.124736 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-08-13T20:02:21.131093105+00:00 stderr F 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"9"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=9","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.125040 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.125152 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 
2025-08-13T20:02:21.131093105+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"9"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=9","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000024000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015073043233033067 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015073043233033067 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000001104015073043233033065 0ustar zuulzuul2025-10-13T00:21:36.945970083+00:00 stderr F ++ K8S_NODE= 2025-10-13T00:21:36.945970083+00:00 stderr F ++ [[ -n '' ]] 
2025-10-13T00:21:36.945970083+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:36.945970083+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:36.945970083+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.945970083+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:36.945970083+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:36.945970083+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:36.946181188+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:36.946181188+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:36.946181188+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:36.946181188+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:36.946889418+00:00 stderr F + start-rbac-proxy-node ovn-metrics 9105 29105 /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.946889418+00:00 stderr F + local detail=ovn-metrics 2025-10-13T00:21:36.946902598+00:00 stderr F + local listen_port=9105 2025-10-13T00:21:36.946902598+00:00 stderr F + local upstream_port=29105 2025-10-13T00:21:36.946902598+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-10-13T00:21:36.946911098+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.946918098+00:00 stderr F + [[ 5 -ne 5 ]] 2025-10-13T00:21:36.947379431+00:00 stderr F ++ date -Iseconds 2025-10-13T00:21:36.949523668+00:00 stderr F + echo '2025-10-13T00:21:36+00:00 INFO: waiting for ovn-metrics certs to be mounted' 2025-10-13T00:21:36.949540629+00:00 stdout F 2025-10-13T00:21:36+00:00 INFO: waiting for ovn-metrics certs to be mounted 2025-10-13T00:21:36.949547979+00:00 stderr F + wait-for-certs ovn-metrics /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.949580030+00:00 stderr F + local detail=ovn-metrics 2025-10-13T00:21:36.949580030+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-10-13T00:21:36.949587650+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.949594680+00:00 stderr F + [[ 3 -ne 3 ]] 2025-10-13T00:21:36.949601620+00:00 stderr F + retries=0 2025-10-13T00:21:36.950117354+00:00 stderr F ++ date +%s 2025-10-13T00:21:36.952948290+00:00 stderr F + TS=1760314896 2025-10-13T00:21:36.952948290+00:00 stderr F + WARN_TS=1760316096 2025-10-13T00:21:36.952948290+00:00 stderr F + HAS_LOGGED_INFO=0 2025-10-13T00:21:36.952948290+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.key ]] 2025-10-13T00:21:36.952975621+00:00 stderr F + [[ ! 
-f /etc/pki/tls/metrics-cert/tls.crt ]] 2025-10-13T00:21:36.953767272+00:00 stderr F ++ date -Iseconds 2025-10-13T00:21:36.957270837+00:00 stdout F 2025-10-13T00:21:36+00:00 INFO: ovn-metrics certs mounted, starting kube-rbac-proxy 2025-10-13T00:21:36.957292437+00:00 stderr F + echo '2025-10-13T00:21:36+00:00 INFO: ovn-metrics certs mounted, starting kube-rbac-proxy' 2025-10-13T00:21:36.957292437+00:00 stderr F + exec /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=:9105 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --upstream=http://127.0.0.1:29105/ --tls-private-key-file=/etc/pki/tls/metrics-cert/tls.key --tls-cert-file=/etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.989608416+00:00 stderr F W1013 00:21:36.989500 27852 deprecated.go:66] 2025-10-13T00:21:36.989608416+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:21:36.989608416+00:00 stderr F 2025-10-13T00:21:36.989608416+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-10-13T00:21:36.989608416+00:00 stderr F 2025-10-13T00:21:36.989608416+00:00 stderr F =============================================== 2025-10-13T00:21:36.989608416+00:00 stderr F 2025-10-13T00:21:36.990016127+00:00 stderr F I1013 00:21:36.989986 27852 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:21:36.990050298+00:00 stderr F I1013 00:21:36.990031 27852 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:21:36.990554562+00:00 stderr F I1013 00:21:36.990515 27852 kube-rbac-proxy.go:395] Starting TCP socket on :9105 2025-10-13T00:21:36.990890731+00:00 stderr F I1013 00:21:36.990863 27852 kube-rbac-proxy.go:402] Listening securely on :9105 ././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015073043233033067 5ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000001111015073043233033063 0ustar zuulzuul2025-10-13T00:21:36.806940964+00:00 stderr F ++ K8S_NODE= 2025-10-13T00:21:36.806940964+00:00 stderr F ++ [[ -n '' ]] 2025-10-13T00:21:36.806940964+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:36.806940964+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:36.806940964+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.806940964+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:36.806940964+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:36.807099918+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:36.807099918+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:36.807099918+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:36.807099918+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 
2025-10-13T00:21:36.807099918+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:36.808681181+00:00 stderr F + start-rbac-proxy-node ovn-node-metrics 9103 29103 /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.808681181+00:00 stderr F + local detail=ovn-node-metrics 2025-10-13T00:21:36.808681181+00:00 stderr F + local listen_port=9103 2025-10-13T00:21:36.808705521+00:00 stderr F + local upstream_port=29103 2025-10-13T00:21:36.808784684+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-10-13T00:21:36.808784684+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.808784684+00:00 stderr F + [[ 5 -ne 5 ]] 2025-10-13T00:21:36.809373279+00:00 stderr F ++ date -Iseconds 2025-10-13T00:21:36.812364380+00:00 stderr F + echo '2025-10-13T00:21:36+00:00 INFO: waiting for ovn-node-metrics certs to be mounted' 2025-10-13T00:21:36.812386600+00:00 stdout F 2025-10-13T00:21:36+00:00 INFO: waiting for ovn-node-metrics certs to be mounted 2025-10-13T00:21:36.812396851+00:00 stderr F + wait-for-certs ovn-node-metrics /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.812442222+00:00 stderr F + local detail=ovn-node-metrics 2025-10-13T00:21:36.812442222+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-10-13T00:21:36.812453552+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.812453552+00:00 stderr F + [[ 3 -ne 3 ]] 2025-10-13T00:21:36.812463013+00:00 stderr F + retries=0 2025-10-13T00:21:36.813104040+00:00 stderr F ++ date +%s 2025-10-13T00:21:36.816248384+00:00 stderr F + TS=1760314896 2025-10-13T00:21:36.816248384+00:00 stderr F + WARN_TS=1760316096 2025-10-13T00:21:36.816248384+00:00 stderr F + HAS_LOGGED_INFO=0 2025-10-13T00:21:36.816272195+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.key ]] 2025-10-13T00:21:36.817256221+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.crt ]] 2025-10-13T00:21:36.817256221+00:00 stderr F ++ date -Iseconds 2025-10-13T00:21:36.819303926+00:00 stdout F 2025-10-13T00:21:36+00:00 INFO: ovn-node-metrics certs mounted, starting kube-rbac-proxy 2025-10-13T00:21:36.819321407+00:00 stderr F + echo '2025-10-13T00:21:36+00:00 INFO: ovn-node-metrics certs mounted, starting kube-rbac-proxy' 2025-10-13T00:21:36.819369718+00:00 stderr F + exec /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=:9103 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --upstream=http://127.0.0.1:29103/ --tls-private-key-file=/etc/pki/tls/metrics-cert/tls.key --tls-cert-file=/etc/pki/tls/metrics-cert/tls.crt 2025-10-13T00:21:36.860475394+00:00 stderr F W1013 00:21:36.860268 27798 deprecated.go:66] 2025-10-13T00:21:36.860475394+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:21:36.860475394+00:00 stderr F 2025-10-13T00:21:36.860475394+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
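Both kube-rbac-proxy containers traced above follow the same start-rbac-proxy-node pattern: block until the metrics serving certificate is mounted, then exec kube-rbac-proxy to terminate TLS on the node port (9105 or 9103) and forward to the loopback-only metrics listener (29105 or 29103). Stripped of logging, the sequence for the node-metrics proxy amounts to the sketch below; the flags are copied from the trace (minus the removed --logtostderr flag the proxy warns about), the 1-second polling interval is an assumption, and the real wait-for-certs helper also sets a warning timestamp 20 minutes out (TS vs WARN_TS above):

  privkey=/etc/pki/tls/metrics-cert/tls.key
  clientcert=/etc/pki/tls/metrics-cert/tls.crt
  # Wait for the projected cert files to appear in the pod filesystem.
  until [ -f "${privkey}" ] && [ -f "${clientcert}" ]; do sleep 1; done
  exec /usr/bin/kube-rbac-proxy \
    --secure-listen-address=:9103 \
    --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
    --upstream=http://127.0.0.1:29103/ \
    --tls-private-key-file="${privkey}" \
    --tls-cert-file="${clientcert}"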
2025-10-13T00:21:36.860475394+00:00 stderr F 2025-10-13T00:21:36.860475394+00:00 stderr F =============================================== 2025-10-13T00:21:36.860475394+00:00 stderr F 2025-10-13T00:21:36.861033729+00:00 stderr F I1013 00:21:36.860991 27798 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:21:36.861085210+00:00 stderr F I1013 00:21:36.861051 27798 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:21:36.861769288+00:00 stderr F I1013 00:21:36.861717 27798 kube-rbac-proxy.go:395] Starting TCP socket on :9103 2025-10-13T00:21:36.862273002+00:00 stderr F I1013 00:21:36.862232 27798 kube-rbac-proxy.go:402] Listening securely on :9103 ././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015073043233033067 5ustar zuulzuul././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000001350315073043233033073 0ustar zuulzuul2025-10-13T00:21:36.643900939+00:00 stderr F ++ K8S_NODE= 2025-10-13T00:21:36.644111195+00:00 stderr F ++ [[ -n '' ]] 2025-10-13T00:21:36.644146156+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:36.644174997+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:36.644204538+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.644232818+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:36.644261009+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:36.644290000+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:36.644317001+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:36.644364122+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:36.644397093+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:36.644424054+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:36.645604305+00:00 stderr F + start-audit-log-rotation 2025-10-13T00:21:36.645660977+00:00 stderr F + MAXFILESIZE=50000000 2025-10-13T00:21:36.645688938+00:00 stderr F + MAXLOGFILES=5 2025-10-13T00:21:36.646547821+00:00 stderr F ++ dirname /var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.651084233+00:00 stderr F + LOGDIR=/var/log/ovn 2025-10-13T00:21:36.651139744+00:00 stderr F + local retries=0 2025-10-13T00:21:36.651192926+00:00 stderr F + [[ 30 -gt 0 ]] 2025-10-13T00:21:36.651225696+00:00 stderr F + (( retries += 1 )) 2025-10-13T00:21:36.651840703+00:00 stderr F ++ cat /var/run/ovn/ovn-controller.pid 2025-10-13T00:21:36.653863567+00:00 stderr F + CONTROLLERPID=27632 2025-10-13T00:21:36.653904789+00:00 stderr F + [[ -n 27632 ]] 2025-10-13T00:21:36.653928179+00:00 stderr F + break 2025-10-13T00:21:36.653953200+00:00 stderr F + [[ -z 27632 ]] 2025-10-13T00:21:36.654152025+00:00 stderr F + true 2025-10-13T00:21:36.654221017+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 
2025-10-13T00:21:36.654258828+00:00 stderr F + tail -F /var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.654885105+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.655131561+00:00 stderr F ++ tr -s '\t' ' ' 2025-10-13T00:21:36.655180873+00:00 stderr F ++ cut '-d ' -f1 2025-10-13T00:21:36.657030393+00:00 stderr F + file_size=0 2025-10-13T00:21:36.657085674+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-10-13T00:21:36.658224235+00:00 stderr F ++ wc -l 2025-10-13T00:21:36.658282016+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.660619879+00:00 stderr F + num_files=1 2025-10-13T00:21:36.660680231+00:00 stderr F + '[' 1 -gt 5 ']' 2025-10-13T00:21:36.660713382+00:00 stderr F + sleep 30 2025-10-13T00:22:06.663350694+00:00 stderr F + true 2025-10-13T00:22:06.663431796+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-10-13T00:22:06.664621588+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-10-13T00:22:06.664789962+00:00 stderr F ++ tr -s '\t' ' ' 2025-10-13T00:22:06.664789962+00:00 stderr F ++ cut '-d ' -f1 2025-10-13T00:22:06.667700551+00:00 stderr F + file_size=0 2025-10-13T00:22:06.667751542+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-10-13T00:22:06.668815281+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-10-13T00:22:06.668815281+00:00 stderr F ++ wc -l 2025-10-13T00:22:06.671409100+00:00 stderr F + num_files=1 2025-10-13T00:22:06.671455802+00:00 stderr F + '[' 1 -gt 5 ']' 2025-10-13T00:22:06.671480762+00:00 stderr F + sleep 30 2025-10-13T00:22:36.673697447+00:00 stderr F + true 2025-10-13T00:22:36.673809070+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-10-13T00:22:36.675606600+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-10-13T00:22:36.675820536+00:00 stderr F ++ tr -s '\t' ' ' 2025-10-13T00:22:36.676158565+00:00 stderr F ++ cut '-d ' -f1 2025-10-13T00:22:36.680568898+00:00 stderr F + file_size=0 2025-10-13T00:22:36.680635660+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-10-13T00:22:36.682090421+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-10-13T00:22:36.682167513+00:00 stderr F ++ wc -l 2025-10-13T00:22:36.686015200+00:00 stderr F + num_files=1 2025-10-13T00:22:36.686078232+00:00 stderr F + '[' 1 -gt 5 ']' 2025-10-13T00:22:36.686116223+00:00 stderr F + sleep 30 2025-10-13T00:23:06.688929372+00:00 stderr F + true 2025-10-13T00:23:06.689111417+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-10-13T00:23:06.691180065+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-10-13T00:23:06.691359000+00:00 stderr F ++ tr -s '\t' ' ' 2025-10-13T00:23:06.691736020+00:00 stderr F ++ cut '-d ' -f1 2025-10-13T00:23:06.695874825+00:00 stderr F + file_size=0 2025-10-13T00:23:06.695933097+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-10-13T00:23:06.697297815+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-10-13T00:23:06.697578183+00:00 stderr F ++ wc -l 2025-10-13T00:23:06.701705558+00:00 stderr F + num_files=1 2025-10-13T00:23:06.701758819+00:00 stderr F + '[' 1 -gt 5 ']' 2025-10-13T00:23:06.701785590+00:00 stderr F + sleep 30 2025-10-13T00:23:36.704038835+00:00 stderr F + true 2025-10-13T00:23:36.704135368+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-10-13T00:23:36.705692801+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-10-13T00:23:36.705953128+00:00 stderr F ++ tr -s '\t' ' ' 2025-10-13T00:23:36.706144734+00:00 stderr F ++ cut '-d ' -f1 2025-10-13T00:23:36.710082273+00:00 stderr F + file_size=0 
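The ovn-acl-logging trace above is a rotation watchdog: tail -F streams /var/log/ovn/acl-audit-log.log to the container's stdout while a 30-second loop measures the file against MAXFILESIZE=50000000 bytes and counts files against MAXLOGFILES=5. In this capture the log stays at 0 bytes, so the rotation branch never fires; the sketch below fills that branch in with an assumed move-and-prune step that is not visible in the trace:

  MAXFILESIZE=50000000
  MAXLOGFILES=5
  LOG=/var/log/ovn/acl-audit-log.log
  while true; do
    if [ -f "${LOG}" ]; then
      file_size=$(du -b "${LOG}" | tr -s '\t' ' ' | cut '-d ' -f1)
      if [ "${file_size}" -gt "${MAXFILESIZE}" ]; then
        mv "${LOG}" "${LOG}.$(date +%s)"   # assumed rotation step
        # assumed prune step: keep only the newest MAXLOGFILES rotated files
        ls -1t "${LOG}".* 2>/dev/null | tail -n +$((MAXLOGFILES + 1)) | xargs -r rm -f
      fi
    fi
    sleep 30
  done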
2025-10-13T00:23:36.710150425+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-10-13T00:23:36.711085931+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-10-13T00:23:36.711301517+00:00 stderr F ++ wc -l 2025-10-13T00:23:36.714757293+00:00 stderr F + num_files=1 2025-10-13T00:23:36.714854776+00:00 stderr F + '[' 1 -gt 5 ']' 2025-10-13T00:23:36.714904418+00:00 stderr F + sleep 30 ././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015073043233033067 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000735573415073043233033121 0ustar zuulzuul2025-10-13T00:21:41.893377058+00:00 stderr F + . /ovnkube-lib/ovnkube-lib.sh 2025-10-13T00:21:41.893676906+00:00 stderr F ++ set -x 2025-10-13T00:21:41.893676906+00:00 stderr F ++ K8S_NODE=crc 2025-10-13T00:21:41.893676906+00:00 stderr F ++ [[ -n crc ]] 2025-10-13T00:21:41.893676906+00:00 stderr F ++ [[ -f /env/crc ]] 2025-10-13T00:21:41.893676906+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:41.893676906+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:41.893676906+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:41.893676906+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:41.893676906+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:41.893676906+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:41.893676906+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:41.893676906+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:41.893676906+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:41.893676906+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:41.895012412+00:00 stderr F + start-ovnkube-node 4 29103 29105 2025-10-13T00:21:41.895075403+00:00 stderr F + local log_level=4 2025-10-13T00:21:41.895075403+00:00 stderr F + local metrics_port=29103 2025-10-13T00:21:41.895075403+00:00 stderr F + local ovn_metrics_port=29105 2025-10-13T00:21:41.895119934+00:00 stderr F + [[ 3 -ne 3 ]] 2025-10-13T00:21:41.895119934+00:00 stderr F + cni-bin-copy 2025-10-13T00:21:41.895157005+00:00 stderr F + . 
/host/etc/os-release 2025-10-13T00:21:41.895509295+00:00 stderr F ++ NAME='Red Hat Enterprise Linux CoreOS' 2025-10-13T00:21:41.895509295+00:00 stderr F ++ ID=rhcos 2025-10-13T00:21:41.895509295+00:00 stderr F ++ ID_LIKE='rhel fedora' 2025-10-13T00:21:41.895509295+00:00 stderr F ++ VERSION=416.94.202406172220-0 2025-10-13T00:21:41.895526615+00:00 stderr F ++ VERSION_ID=4.16 2025-10-13T00:21:41.895526615+00:00 stderr F ++ VARIANT=CoreOS 2025-10-13T00:21:41.895526615+00:00 stderr F ++ VARIANT_ID=coreos 2025-10-13T00:21:41.895536556+00:00 stderr F ++ PLATFORM_ID=platform:el9 2025-10-13T00:21:41.895546446+00:00 stderr F ++ PRETTY_NAME='Red Hat Enterprise Linux CoreOS 416.94.202406172220-0' 2025-10-13T00:21:41.895546446+00:00 stderr F ++ ANSI_COLOR='0;31' 2025-10-13T00:21:41.895557606+00:00 stderr F ++ CPE_NAME=cpe:/o:redhat:enterprise_linux:9::baseos::coreos 2025-10-13T00:21:41.895566246+00:00 stderr F ++ HOME_URL=https://www.redhat.com/ 2025-10-13T00:21:41.895576097+00:00 stderr F ++ DOCUMENTATION_URL=https://docs.okd.io/latest/welcome/index.html 2025-10-13T00:21:41.895576097+00:00 stderr F ++ BUG_REPORT_URL=https://access.redhat.com/labs/rhir/ 2025-10-13T00:21:41.895586977+00:00 stderr F ++ REDHAT_BUGZILLA_PRODUCT='OpenShift Container Platform' 2025-10-13T00:21:41.895595467+00:00 stderr F ++ REDHAT_BUGZILLA_PRODUCT_VERSION=4.16 2025-10-13T00:21:41.895604307+00:00 stderr F ++ REDHAT_SUPPORT_PRODUCT='OpenShift Container Platform' 2025-10-13T00:21:41.895613018+00:00 stderr F ++ REDHAT_SUPPORT_PRODUCT_VERSION=4.16 2025-10-13T00:21:41.895613018+00:00 stderr F ++ OPENSHIFT_VERSION=4.16 2025-10-13T00:21:41.895622288+00:00 stderr F ++ RHEL_VERSION=9.4 2025-10-13T00:21:41.895630938+00:00 stderr F ++ OSTREE_VERSION=416.94.202406172220-0 2025-10-13T00:21:41.895639838+00:00 stderr F + rhelmajor= 2025-10-13T00:21:41.895639838+00:00 stderr F + case "${ID}" in 2025-10-13T00:21:41.896558243+00:00 stderr F ++ echo cpe:/o:redhat:enterprise_linux:9::baseos::coreos 2025-10-13T00:21:41.896715477+00:00 stderr F ++ cut -f 5 -d : 2025-10-13T00:21:41.899510942+00:00 stderr F + RHEL_VERSION=9 2025-10-13T00:21:41.900313844+00:00 stderr F ++ sed -E 's/([0-9]+)\.{1}[0-9]+(\.[0-9]+)?/\1/' 2025-10-13T00:21:41.900542370+00:00 stderr F ++ echo 9 2025-10-13T00:21:41.902269927+00:00 stderr F + rhelmajor=9 2025-10-13T00:21:41.902269927+00:00 stderr F + sourcedir=/usr/libexec/cni/ 2025-10-13T00:21:41.902269927+00:00 stderr F + case "${rhelmajor}" in 2025-10-13T00:21:41.902301878+00:00 stderr F + sourcedir=/usr/libexec/cni/rhel9 2025-10-13T00:21:41.902301878+00:00 stderr F + cp -f /usr/libexec/cni/rhel9/ovn-k8s-cni-overlay /cni-bin-dir/ 2025-10-13T00:21:41.982340460+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-10-13T00:21:41.984165529+00:00 stderr F + echo 'I1013 00:21:41.983809029 - disable conntrack on geneve port' 2025-10-13T00:21:41.984179869+00:00 stdout F I1013 00:21:41.983809029 - disable conntrack on geneve port 2025-10-13T00:21:41.984204100+00:00 stderr F + iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK 2025-10-13T00:21:41.988226638+00:00 stderr F + iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK 2025-10-13T00:21:41.991215799+00:00 stderr F + ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK 2025-10-13T00:21:41.995088733+00:00 stderr F + ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK 2025-10-13T00:21:41.998040462+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-10-13T00:21:41.999743388+00:00 stdout F I1013 00:21:41.999422909 - starting ovnkube-node 
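The trace above shows the ovnkube-node wrapper copying the RHEL 9 build of the ovn-k8s-cni-overlay CNI plugin into /cni-bin-dir/ and then bypassing connection tracking for Geneve encapsulation traffic on UDP port 6081 before the controller is launched. A minimal sketch of that NOTRACK step, reconstructed from the set -x output rather than taken from the actual ovnkube-lib.sh source:

  # Reconstructed sketch (not the real ovnkube-lib.sh): exempt Geneve tunnel
  # traffic (UDP 6081) from conntrack in both address families, matching the
  # iptables/ip6tables calls seen in the trace above.
  for ipt in iptables ip6tables; do
      "$ipt" -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
      "$ipt" -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
  done

Skipping conntrack on the tunnel port avoids tracking every encapsulated packet a second time for the outer Geneve flow, which is the apparent intent of the logged rules.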
2025-10-13T00:21:41.999757288+00:00 stderr F + echo 'I1013 00:21:41.999422909 - starting ovnkube-node' 2025-10-13T00:21:41.999757288+00:00 stderr F + '[' local == shared ']' 2025-10-13T00:21:41.999757288+00:00 stderr F + '[' local == local ']' 2025-10-13T00:21:41.999757288+00:00 stderr F + gateway_mode_flags='--gateway-mode local --gateway-interface br-ex' 2025-10-13T00:21:41.999757288+00:00 stderr F + export_network_flows_flags= 2025-10-13T00:21:41.999769639+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999776939+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999784119+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999784119+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999791719+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999791719+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999798939+00:00 stderr F + gw_interface_flag= 2025-10-13T00:21:41.999805900+00:00 stderr F + '[' -d /sys/class/net/br-ex1 ']' 2025-10-13T00:21:41.999849731+00:00 stderr F + node_mgmt_port_netdev_flags= 2025-10-13T00:21:41.999849731+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999849731+00:00 stderr F + [[ -n '' ]] 2025-10-13T00:21:41.999849731+00:00 stderr F + multi_network_enabled_flag= 2025-10-13T00:21:41.999859751+00:00 stderr F + [[ true == \t\r\u\e ]] 2025-10-13T00:21:41.999867021+00:00 stderr F + multi_network_enabled_flag=--enable-multi-network 2025-10-13T00:21:41.999867021+00:00 stderr F + multi_network_policy_enabled_flag= 2025-10-13T00:21:41.999876432+00:00 stderr F + [[ false == \t\r\u\e ]] 2025-10-13T00:21:41.999876432+00:00 stderr F + admin_network_policy_enabled_flag= 2025-10-13T00:21:41.999885372+00:00 stderr F + [[ true == \t\r\u\e ]] 2025-10-13T00:21:41.999892572+00:00 stderr F + admin_network_policy_enabled_flag=--enable-admin-network-policy 2025-10-13T00:21:41.999892572+00:00 stderr F + dns_name_resolver_enabled_flag= 2025-10-13T00:21:41.999900032+00:00 stderr F + [[ false == \t\r\u\e ]] 2025-10-13T00:21:41.999900032+00:00 stderr F + ip_forwarding_flag= 2025-10-13T00:21:41.999908762+00:00 stderr F + '[' Global == Global ']' 2025-10-13T00:21:41.999915783+00:00 stderr F + sysctl -w net.ipv4.ip_forward=1 2025-10-13T00:21:42.001193757+00:00 stdout F net.ipv4.ip_forward = 1 2025-10-13T00:21:42.001346461+00:00 stderr F + sysctl -w net.ipv6.conf.all.forwarding=1 2025-10-13T00:21:42.003640183+00:00 stdout F net.ipv6.conf.all.forwarding = 1 2025-10-13T00:21:42.003830868+00:00 stderr F + NETWORK_NODE_IDENTITY_ENABLE= 2025-10-13T00:21:42.003830868+00:00 stderr F + [[ true == \t\r\u\e ]] 2025-10-13T00:21:42.003830868+00:00 stderr F + NETWORK_NODE_IDENTITY_ENABLE=' 2025-10-13T00:21:42.003830868+00:00 stderr F --bootstrap-kubeconfig=/var/lib/kubelet/kubeconfig 2025-10-13T00:21:42.003830868+00:00 stderr F --cert-dir=/etc/ovn/ovnkube-node-certs 2025-10-13T00:21:42.003830868+00:00 stderr F --cert-duration=24h 2025-10-13T00:21:42.003830868+00:00 stderr F ' 2025-10-13T00:21:42.003845468+00:00 stderr F + ovn_v4_join_subnet_opt= 2025-10-13T00:21:42.003845468+00:00 stderr F + [[ '' != '' ]] 2025-10-13T00:21:42.003866679+00:00 stderr F + ovn_v6_join_subnet_opt= 2025-10-13T00:21:42.003866679+00:00 stderr F + [[ '' != '' ]] 2025-10-13T00:21:42.003966412+00:00 stderr F + exec /usr/bin/ovnkube --init-ovnkube-controller crc --init-node crc --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 --inactivity-probe=180000 --gateway-mode local --gateway-interface br-ex --metrics-bind-address 127.0.0.1:29103 --ovn-metrics-bind-address 127.0.0.1:29105 --metrics-enable-pprof 
--metrics-enable-config-duration --export-ovs-metrics --disable-snat-multiple-gws --enable-multi-network --enable-admin-network-policy --enable-multicast --zone crc --enable-interconnect --acl-logging-rate-limit 20 --bootstrap-kubeconfig=/var/lib/kubelet/kubeconfig --cert-dir=/etc/ovn/ovnkube-node-certs --cert-duration=24h 2025-10-13T00:21:42.051453499+00:00 stderr F I1013 00:21:42.051336 28251 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf 2025-10-13T00:21:42.051493800+00:00 stderr F I1013 00:21:42.051415 28251 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:local Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 
ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-10-13T00:21:42.053497334+00:00 stderr F I1013 00:21:42.053450 28251 certificate_store.go:130] Loading cert/key pair from "/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem". 2025-10-13T00:21:42.053700079+00:00 stderr F I1013 00:21:42.053662 28251 certificate_store.go:130] Loading cert/key pair from "/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem". 2025-10-13T00:21:42.053821682+00:00 stderr F I1013 00:21:42.053787 28251 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled 2025-10-13T00:21:42.053821682+00:00 stderr F I1013 00:21:42.053813 28251 kube.go:358] Waiting for certificate 2025-10-13T00:21:42.053859183+00:00 stderr F I1013 00:21:42.053836 28251 kube.go:365] Certificate found 2025-10-13T00:21:42.054132241+00:00 stderr F I1013 00:21:42.053960 28251 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Certificate expiration is 2025-10-14 00:09:13 +0000 UTC, rotation deadline is 2025-10-13 20:03:25.336172243 +0000 UTC 2025-10-13T00:21:42.054132241+00:00 stderr F I1013 00:21:42.054113 28251 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Waiting 19h41m43.282062353s for next certificate rotation 2025-10-13T00:21:42.054592783+00:00 stderr F I1013 00:21:42.054479 28251 cert_rotation.go:137] Starting client certificate rotation controller 2025-10-13T00:21:42.055431436+00:00 stderr F I1013 00:21:42.055392 28251 metrics.go:532] Starting metrics server at address "127.0.0.1:29103" 2025-10-13T00:21:42.056129304+00:00 stderr F I1013 00:21:42.056030 28251 libovsdb.go:62] Client for OVN_Northbound using log verbosity 4 with lumberjack &lumberjack.Logger{Filename:"/var/log/ovnkube/libovsdb.log", MaxSize:100, MaxAge:0, MaxBackups:5, LocalTime:false, Compress:true, size:0, file:(*os.File)(nil), mu:sync.Mutex{state:0, sema:0x0}, millCh:(chan bool)(nil), startMill:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}} 2025-10-13T00:21:42.057108611+00:00 stderr F I1013 00:21:42.056779 28251 node_network_controller_manager.go:98] Starting the node network controller manager, Mode: full 2025-10-13T00:21:42.057171672+00:00 stderr F I1013 00:21:42.057148 28251 factory.go:405] Starting watch factory 2025-10-13T00:21:42.057307526+00:00 stderr F I1013 00:21:42.057282 28251 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057307526+00:00 stderr F I1013 00:21:42.057296 28251 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057437769+00:00 stderr F I1013 00:21:42.057409 28251 reflector.go:289] Starting reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057437769+00:00 stderr F I1013 00:21:42.057426 28251 reflector.go:325] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057543662+00:00 stderr F I1013 00:21:42.057498 28251 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057569933+00:00 stderr F I1013 00:21:42.057560 28251 
reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057659875+00:00 stderr F I1013 00:21:42.057612 28251 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057659875+00:00 stderr F I1013 00:21:42.057636 28251 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057863281+00:00 stderr F I1013 00:21:42.057827 28251 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.057863281+00:00 stderr F I1013 00:21:42.057838 28251 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.058867198+00:00 stderr F I1013 00:21:42.058848 28251 reflector.go:289] Starting reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.058900959+00:00 stderr F I1013 00:21:42.058891 28251 reflector.go:325] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.069394401+00:00 stderr F I1013 00:21:42.066801 28251 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.073469021+00:00 stderr F I1013 00:21:42.069717 28251 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.073469021+00:00 stderr F I1013 00:21:42.070074 28251 metrics.go:532] Starting metrics server at address "127.0.0.1:29105" 2025-10-13T00:21:42.073469021+00:00 stderr F I1013 00:21:42.071069 28251 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.073469021+00:00 stderr F I1013 00:21:42.071301 28251 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.073469021+00:00 stderr F I1013 00:21:42.072133 28251 ovn_db.go:374] Found OVN DB Pod running on this node. 
Registering OVN DB Metrics 2025-10-13T00:21:42.075395472+00:00 stderr F I1013 00:21:42.075283 28251 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.078856565+00:00 stderr F I1013 00:21:42.078780 28251 ovn_db.go:329] /var/run/openvswitch/ovnnb_db.sock getting info failed: stat /var/run/openvswitch/ovnnb_db.sock: no such file or directory 2025-10-13T00:21:42.078856565+00:00 stderr F I1013 00:21:42.078837 28251 ovn_db.go:326] ovnnb_db.sock found at /var/run/ovn/ 2025-10-13T00:21:42.087075447+00:00 stderr F I1013 00:21:42.087017 28251 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:21:42.096251933+00:00 stderr F I1013 00:21:42.096192 28251 ovn_db.go:421] Found db is standalone, don't register db_cluster metrics 2025-10-13T00:21:42.099599983+00:00 stderr F I1013 00:21:42.099544 28251 libovsdb.go:62] Client for OVN_Southbound using log verbosity 4 with lumberjack &lumberjack.Logger{Filename:"/var/log/ovnkube/libovsdb.log", MaxSize:100, MaxAge:0, MaxBackups:5, LocalTime:false, Compress:true, size:0, file:(*os.File)(nil), mu:sync.Mutex{state:0, sema:0x0}, millCh:(chan bool)(nil), startMill:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}} 2025-10-13T00:21:42.110448835+00:00 stderr F I1013 00:21:42.110394 28251 network_controller_manager.go:300] Starting the ovnkube controller 2025-10-13T00:21:42.110448835+00:00 stderr F I1013 00:21:42.110409 28251 network_controller_manager.go:305] Waiting up to 5m0s for NBDB zone to match: crc 2025-10-13T00:21:42.110472036+00:00 stderr F I1013 00:21:42.110449 28251 network_controller_manager.go:325] NBDB zone sync took: 33.051µs 2025-10-13T00:21:42.110472036+00:00 stderr F I1013 00:21:42.110456 28251 factory.go:405] Starting watch factory 2025-10-13T00:21:42.110575518+00:00 stderr F I1013 00:21:42.110528 28251 reflector.go:289] Starting reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:21:42.110575518+00:00 stderr F I1013 00:21:42.110533 28251 reflector.go:289] Starting reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:21:42.110575518+00:00 stderr F I1013 00:21:42.110547 28251 reflector.go:325] Listing and watching *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:21:42.110575518+00:00 stderr F I1013 00:21:42.110539 28251 reflector.go:325] Listing and watching *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:21:42.112654324+00:00 stderr F I1013 00:21:42.112611 28251 reflector.go:351] Caches populated for *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:21:42.112735357+00:00 stderr F I1013 00:21:42.112608 28251 reflector.go:351] Caches populated for *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:21:42.158159618+00:00 stderr F I1013 00:21:42.158080 28251 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.158159618+00:00 stderr F I1013 
00:21:42.158110 28251 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.159421102+00:00 stderr F I1013 00:21:42.159368 28251 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.211731689+00:00 stderr F I1013 00:21:42.211676 28251 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.211731689+00:00 stderr F I1013 00:21:42.211698 28251 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.212912140+00:00 stderr F I1013 00:21:42.212885 28251 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.258434015+00:00 stderr F I1013 00:21:42.258389 28251 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.258434015+00:00 stderr F I1013 00:21:42.258404 28251 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.260685905+00:00 stderr F I1013 00:21:42.260661 28251 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.315608632+00:00 stderr F I1013 00:21:42.315559 28251 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.315608632+00:00 stderr F I1013 00:21:42.315577 28251 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.317401660+00:00 stderr F I1013 00:21:42.316820 28251 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.359180794+00:00 stderr F I1013 00:21:42.359136 28251 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.359180794+00:00 stderr F I1013 00:21:42.359154 28251 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.360819298+00:00 stderr F I1013 00:21:42.360801 28251 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:21:42.416041513+00:00 stderr F I1013 00:21:42.415964 28251 network_controller_manager.go:336] Waiting up to 5m0s for a node to have "crc" zone 2025-10-13T00:21:42.416041513+00:00 stderr F I1013 00:21:42.416007 28251 network_controller_manager.go:359] Waiting for node in zone sync took: 16.141µs 2025-10-13T00:21:42.422973019+00:00 stderr F I1013 00:21:42.422934 28251 network_controller_manager.go:220] SCTP support detected in OVN 2025-10-13T00:21:42.425972770+00:00 stderr F I1013 00:21:42.425926 28251 services_controller.go:60] Creating event broadcaster 2025-10-13T00:21:42.426129464+00:00 stderr F I1013 00:21:42.426094 28251 default_network_controller.go:328] Starting the default network controller 2025-10-13T00:21:42.426233727+00:00 stderr F I1013 00:21:42.426206 28251 address_set_sync.go:394] SyncAddressSets found 0 stale address sets, 0 of them were ignored 2025-10-13T00:21:42.429209887+00:00 stderr F I1013 00:21:42.429157 28251 acl_sync.go:167] Updating Tier of existing ACLs... 2025-10-13T00:21:42.429232228+00:00 stderr F I1013 00:21:42.429216 28251 acl_sync.go:192] Updating tier's of all ACLs in cluster took 15.83µs 2025-10-13T00:21:42.429767202+00:00 stderr F I1013 00:21:42.429727 28251 port_group_sync.go:309] SyncPortGroups found 0 stale port groups 2025-10-13T00:21:42.440994784+00:00 stderr F I1013 00:21:42.440956 28251 default_network_controller.go:419] Cleaning External Gateway ECMP routes 2025-10-13T00:21:42.441182259+00:00 stderr F I1013 00:21:42.441160 28251 repair.go:33] Syncing exgw routes took 184.055µs 2025-10-13T00:21:42.441201950+00:00 stderr F I1013 00:21:42.441188 28251 default_network_controller.go:430] Starting all the Watchers... 
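The namespace add events that follow show the default network controller populating one OVN Address_Set row per namespace (keyed by external_ids such as k8s.ovn.org/name and k8s.ovn.org/owner-type:Namespace) with that namespace's pod IPs. A small sketch of how those rows could be inspected by hand, assuming ovn-nbctl is reachable from inside the ovnkube-node pod; the grep pattern is just one namespace name taken from the log entries below:

  # Hypothetical inspection command: list namespace address sets with their
  # member IPs, then narrow the output to a single namespace from the log.
  ovn-nbctl --columns=name,external_ids,addresses list Address_Set \
      | grep -B1 -A1 'openshift-network-diagnostics'

Note that each logged transaction targets the row by _uuid in its Where clause, so the controller updates existing address sets in place rather than recreating them.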
2025-10-13T00:21:42.442021642+00:00 stderr F I1013 00:21:42.442001 28251 namespace.go:100] [openshift-cluster-machine-approver] adding namespace 2025-10-13T00:21:42.442021642+00:00 stderr F I1013 00:21:42.441999 28251 namespace.go:100] [openshift-controller-manager] adding namespace 2025-10-13T00:21:42.442033322+00:00 stderr F I1013 00:21:42.442021 28251 namespace.go:100] [openshift-apiserver-operator] adding namespace 2025-10-13T00:21:42.442040672+00:00 stderr F I1013 00:21:42.442025 28251 namespace.go:100] [hostpath-provisioner] adding namespace 2025-10-13T00:21:42.442049422+00:00 stderr F I1013 00:21:42.442042 28251 namespace.go:100] [openshift-node] adding namespace 2025-10-13T00:21:42.442058223+00:00 stderr F I1013 00:21:42.442046 28251 namespace.go:100] [openshift-kube-storage-version-migrator] adding namespace 2025-10-13T00:21:42.442064833+00:00 stderr F I1013 00:21:42.442051 28251 namespace.go:100] [openshift-kube-storage-version-migrator-operator] adding namespace 2025-10-13T00:21:42.442071783+00:00 stderr F I1013 00:21:42.442059 28251 namespace.go:100] [openshift-operator-lifecycle-manager] adding namespace 2025-10-13T00:21:42.442071783+00:00 stderr F I1013 00:21:42.441986 28251 namespace.go:100] [openstack] adding namespace 2025-10-13T00:21:42.442079003+00:00 stderr F I1013 00:21:42.442067 28251 namespace.go:100] [openshift-vsphere-infra] adding namespace 2025-10-13T00:21:42.442118634+00:00 stderr F I1013 00:21:42.442083 28251 namespace.go:100] [openshift-controller-manager-operator] adding namespace 2025-10-13T00:21:42.442118634+00:00 stderr F I1013 00:21:42.442087 28251 namespace.go:100] [default] adding namespace 2025-10-13T00:21:42.442118634+00:00 stderr F I1013 00:21:42.442092 28251 namespace.go:100] [openshift-kni-infra] adding namespace 2025-10-13T00:21:42.442137005+00:00 stderr F I1013 00:21:42.442085 28251 namespace.go:100] [openshift-service-ca] adding namespace 2025-10-13T00:21:42.442207827+00:00 stderr F I1013 00:21:42.442001 28251 namespace.go:100] [openshift-cloud-platform-infra] adding namespace 2025-10-13T00:21:42.442428453+00:00 stderr F I1013 00:21:42.442402 28251 namespace.go:104] [openshift-cluster-machine-approver] adding namespace took 388.061µs 2025-10-13T00:21:42.442428453+00:00 stderr F I1013 00:21:42.442422 28251 namespace.go:100] [openshift-ingress] adding namespace 2025-10-13T00:21:42.443011238+00:00 stderr F I1013 00:21:42.442944 28251 namespace.go:104] [openshift-ingress] adding namespace took 510.524µs 2025-10-13T00:21:42.443011238+00:00 stderr F I1013 00:21:42.442980 28251 namespace.go:100] [openshift-console] adding namespace 2025-10-13T00:21:42.443484681+00:00 stderr F I1013 00:21:42.443455 28251 namespace.go:104] [openshift-console] adding namespace took 462.882µs 2025-10-13T00:21:42.443496171+00:00 stderr F I1013 00:21:42.443485 28251 namespace.go:100] [openshift-apiserver] adding namespace 2025-10-13T00:21:42.443859051+00:00 stderr F I1013 00:21:42.443832 28251 namespace.go:104] [openshift-controller-manager] adding namespace took 1.806028ms 2025-10-13T00:21:42.443859051+00:00 stderr F I1013 00:21:42.443851 28251 namespace.go:100] [openshift-machine-api] adding namespace 2025-10-13T00:21:42.444337294+00:00 stderr F I1013 00:21:42.444279 28251 namespace.go:104] [openshift-apiserver-operator] adding namespace took 2.24274ms 2025-10-13T00:21:42.444373515+00:00 stderr F I1013 00:21:42.444349 28251 namespace.go:100] [openshift-machine-config-operator] adding namespace 2025-10-13T00:21:42.444730714+00:00 stderr F I1013 00:21:42.444703 28251 
namespace.go:104] [hostpath-provisioner] adding namespace took 2.658602ms 2025-10-13T00:21:42.444730714+00:00 stderr F I1013 00:21:42.444725 28251 namespace.go:100] [openshift-authentication-operator] adding namespace 2025-10-13T00:21:42.445107595+00:00 stderr F I1013 00:21:42.445080 28251 namespace.go:104] [openshift-node] adding namespace took 3.029442ms 2025-10-13T00:21:42.445107595+00:00 stderr F I1013 00:21:42.445101 28251 namespace.go:100] [openshift-authentication] adding namespace 2025-10-13T00:21:42.445507665+00:00 stderr F I1013 00:21:42.445474 28251 namespace.go:104] [openshift-kube-storage-version-migrator] adding namespace took 3.411571ms 2025-10-13T00:21:42.445507665+00:00 stderr F I1013 00:21:42.445494 28251 namespace.go:100] [openshift-cluster-samples-operator] adding namespace 2025-10-13T00:21:42.445834034+00:00 stderr F I1013 00:21:42.445806 28251 namespace.go:104] [openshift-kube-storage-version-migrator-operator] adding namespace took 3.741ms 2025-10-13T00:21:42.445834034+00:00 stderr F I1013 00:21:42.445823 28251 namespace.go:100] [openshift-config-operator] adding namespace 2025-10-13T00:21:42.446111942+00:00 stderr F I1013 00:21:42.446071 28251 namespace.go:104] [openshift-operator-lifecycle-manager] adding namespace took 4.001187ms 2025-10-13T00:21:42.446111942+00:00 stderr F I1013 00:21:42.446089 28251 namespace.go:100] [openshift] adding namespace 2025-10-13T00:21:42.446528823+00:00 stderr F I1013 00:21:42.446497 28251 namespace.go:104] [openstack] adding namespace took 4.421469ms 2025-10-13T00:21:42.446528823+00:00 stderr F I1013 00:21:42.446518 28251 namespace.go:100] [openshift-ovirt-infra] adding namespace 2025-10-13T00:21:42.446796240+00:00 stderr F I1013 00:21:42.446760 28251 namespace.go:104] [openshift-vsphere-infra] adding namespace took 4.684426ms 2025-10-13T00:21:42.446796240+00:00 stderr F I1013 00:21:42.446781 28251 namespace.go:100] [openshift-nutanix-infra] adding namespace 2025-10-13T00:21:42.447229492+00:00 stderr F I1013 00:21:42.447186 28251 namespace.go:104] [default] adding namespace took 5.082616ms 2025-10-13T00:21:42.447229492+00:00 stderr F I1013 00:21:42.447204 28251 namespace.go:100] [openshift-operators] adding namespace 2025-10-13T00:21:42.448104725+00:00 stderr F I1013 00:21:42.448065 28251 namespace.go:104] [openshift-kni-infra] adding namespace took 5.95897ms 2025-10-13T00:21:42.448104725+00:00 stderr F I1013 00:21:42.448088 28251 namespace.go:100] [openshift-kube-apiserver] adding namespace 2025-10-13T00:21:42.450030527+00:00 stderr F I1013 00:21:42.449950 28251 namespace.go:104] [openshift-service-ca] adding namespace took 7.825421ms 2025-10-13T00:21:42.450030527+00:00 stderr F I1013 00:21:42.449971 28251 namespace.go:100] [openshift-dns-operator] adding namespace 2025-10-13T00:21:42.450356336+00:00 stderr F I1013 00:21:42.450316 28251 namespace.go:104] [openshift-controller-manager-operator] adding namespace took 8.215091ms 2025-10-13T00:21:42.450356336+00:00 stderr F I1013 00:21:42.450345 28251 namespace.go:100] [openshift-marketplace] adding namespace 2025-10-13T00:21:42.450713745+00:00 stderr F I1013 00:21:42.450684 28251 namespace.go:104] [openshift-cloud-platform-infra] adding namespace took 8.506638ms 2025-10-13T00:21:42.450713745+00:00 stderr F I1013 00:21:42.450702 28251 namespace.go:100] [openshift-infra] adding namespace 2025-10-13T00:21:42.451178418+00:00 stderr F I1013 00:21:42.451156 28251 namespace.go:104] [openshift-apiserver] adding namespace took 7.662906ms 2025-10-13T00:21:42.451444575+00:00 stderr F I1013 
00:21:42.451407 28251 namespace.go:104] [openshift-machine-api] adding namespace took 7.548733ms 2025-10-13T00:21:42.451444575+00:00 stderr F I1013 00:21:42.451425 28251 namespace.go:100] [openshift-config-managed] adding namespace 2025-10-13T00:21:42.451764434+00:00 stderr F I1013 00:21:42.451742 28251 namespace.go:104] [openshift-machine-config-operator] adding namespace took 7.385599ms 2025-10-13T00:21:42.451764434+00:00 stderr F I1013 00:21:42.451759 28251 namespace.go:100] [openshift-host-network] adding namespace 2025-10-13T00:21:42.451989430+00:00 stderr F I1013 00:21:42.451963 28251 namespace.go:104] [openshift-authentication-operator] adding namespace took 7.231275ms 2025-10-13T00:21:42.451989430+00:00 stderr F I1013 00:21:42.451978 28251 namespace.go:100] [openshift-ingress-canary] adding namespace 2025-10-13T00:21:42.452256187+00:00 stderr F I1013 00:21:42.452236 28251 namespace.go:104] [openshift-authentication] adding namespace took 7.127801ms 2025-10-13T00:21:42.452256187+00:00 stderr F I1013 00:21:42.452253 28251 namespace.go:100] [kube-public] adding namespace 2025-10-13T00:21:42.452528694+00:00 stderr F I1013 00:21:42.452484 28251 namespace.go:104] [openshift-cluster-samples-operator] adding namespace took 6.982278ms 2025-10-13T00:21:42.452528694+00:00 stderr F I1013 00:21:42.452499 28251 namespace.go:100] [openshift-console-operator] adding namespace 2025-10-13T00:21:42.452746190+00:00 stderr F I1013 00:21:42.452728 28251 namespace.go:104] [openshift-config-operator] adding namespace took 6.898125ms 2025-10-13T00:21:42.452746190+00:00 stderr F I1013 00:21:42.452742 28251 namespace.go:100] [openshift-openstack-infra] adding namespace 2025-10-13T00:21:42.452974396+00:00 stderr F I1013 00:21:42.452951 28251 namespace.go:104] [openshift] adding namespace took 6.857124ms 2025-10-13T00:21:42.452974396+00:00 stderr F I1013 00:21:42.452967 28251 namespace.go:100] [openshift-ingress-operator] adding namespace 2025-10-13T00:21:42.453237843+00:00 stderr F I1013 00:21:42.453218 28251 namespace.go:104] [openshift-ovirt-infra] adding namespace took 6.69278ms 2025-10-13T00:21:42.453237843+00:00 stderr F I1013 00:21:42.453234 28251 namespace.go:100] [openshift-kube-scheduler] adding namespace 2025-10-13T00:21:42.453460349+00:00 stderr F I1013 00:21:42.453442 28251 namespace.go:104] [openshift-nutanix-infra] adding namespace took 6.655299ms 2025-10-13T00:21:42.453460349+00:00 stderr F I1013 00:21:42.453455 28251 namespace.go:100] [openshift-cluster-version] adding namespace 2025-10-13T00:21:42.453716366+00:00 stderr F I1013 00:21:42.453694 28251 namespace.go:104] [openshift-operators] adding namespace took 6.483304ms 2025-10-13T00:21:42.453716366+00:00 stderr F I1013 00:21:42.453710 28251 namespace.go:100] [openshift-network-node-identity] adding namespace 2025-10-13T00:21:42.453953882+00:00 stderr F I1013 00:21:42.453923 28251 namespace.go:104] [openshift-kube-apiserver] adding namespace took 5.826667ms 2025-10-13T00:21:42.453953882+00:00 stderr F I1013 00:21:42.453937 28251 namespace.go:100] [openshift-cloud-network-config-controller] adding namespace 2025-10-13T00:21:42.454214790+00:00 stderr F I1013 00:21:42.454191 28251 namespace.go:104] [openshift-dns-operator] adding namespace took 4.213684ms 2025-10-13T00:21:42.454214790+00:00 stderr F I1013 00:21:42.454207 28251 namespace.go:100] [openshift-kube-controller-manager-operator] adding namespace 2025-10-13T00:21:42.454549049+00:00 stderr F I1013 00:21:42.454515 28251 namespace.go:104] [openshift-marketplace] adding namespace took 
4.162151ms 2025-10-13T00:21:42.454549049+00:00 stderr F I1013 00:21:42.454535 28251 namespace.go:100] [openshift-monitoring] adding namespace 2025-10-13T00:21:42.454859907+00:00 stderr F I1013 00:21:42.454757 28251 namespace.go:104] [openshift-infra] adding namespace took 4.049679ms 2025-10-13T00:21:42.454859907+00:00 stderr F I1013 00:21:42.454773 28251 namespace.go:100] [openshift-route-controller-manager] adding namespace 2025-10-13T00:21:42.454997581+00:00 stderr F I1013 00:21:42.454976 28251 namespace.go:104] [openshift-config-managed] adding namespace took 3.543285ms 2025-10-13T00:21:42.454997581+00:00 stderr F I1013 00:21:42.454994 28251 namespace.go:100] [openshift-service-ca-operator] adding namespace 2025-10-13T00:21:42.455263958+00:00 stderr F I1013 00:21:42.455234 28251 namespace.go:104] [openshift-host-network] adding namespace took 3.468663ms 2025-10-13T00:21:42.455263958+00:00 stderr F I1013 00:21:42.455251 28251 namespace.go:100] [kube-system] adding namespace 2025-10-13T00:21:42.455522725+00:00 stderr F I1013 00:21:42.455490 28251 namespace.go:104] [openshift-ingress-canary] adding namespace took 3.506475ms 2025-10-13T00:21:42.455522725+00:00 stderr F I1013 00:21:42.455506 28251 namespace.go:100] [openshift-kube-scheduler-operator] adding namespace 2025-10-13T00:21:42.455769311+00:00 stderr F I1013 00:21:42.455741 28251 namespace.go:104] [kube-public] adding namespace took 3.483004ms 2025-10-13T00:21:42.455769311+00:00 stderr F I1013 00:21:42.455757 28251 namespace.go:100] [openshift-multus] adding namespace 2025-10-13T00:21:42.456018748+00:00 stderr F I1013 00:21:42.455991 28251 namespace.go:104] [openshift-console-operator] adding namespace took 3.485434ms 2025-10-13T00:21:42.456018748+00:00 stderr F I1013 00:21:42.456009 28251 namespace.go:100] [openshift-etcd] adding namespace 2025-10-13T00:21:42.456184172+00:00 stderr F I1013 00:21:42.456155 28251 namespace.go:104] [openshift-openstack-infra] adding namespace took 3.407642ms 2025-10-13T00:21:42.456184172+00:00 stderr F I1013 00:21:42.456168 28251 namespace.go:100] [openshift-network-diagnostics] adding namespace 2025-10-13T00:21:42.456477640+00:00 stderr F I1013 00:21:42.456450 28251 namespace.go:104] [openshift-ingress-operator] adding namespace took 3.478184ms 2025-10-13T00:21:42.456477640+00:00 stderr F I1013 00:21:42.456467 28251 namespace.go:100] [openshift-kube-apiserver-operator] adding namespace 2025-10-13T00:21:42.456760408+00:00 stderr F I1013 00:21:42.456734 28251 namespace.go:104] [openshift-kube-scheduler] adding namespace took 3.492594ms 2025-10-13T00:21:42.456760408+00:00 stderr F I1013 00:21:42.456752 28251 namespace.go:100] [openshift-cluster-storage-operator] adding namespace 2025-10-13T00:21:42.456996844+00:00 stderr F I1013 00:21:42.456967 28251 namespace.go:104] [openshift-cluster-version] adding namespace took 3.504724ms 2025-10-13T00:21:42.456996844+00:00 stderr F I1013 00:21:42.456981 28251 namespace.go:100] [openshift-oauth-apiserver] adding namespace 2025-10-13T00:21:42.457197690+00:00 stderr F I1013 00:21:42.457176 28251 namespace.go:104] [openshift-network-node-identity] adding namespace took 3.461353ms 2025-10-13T00:21:42.457197690+00:00 stderr F I1013 00:21:42.457187 28251 namespace.go:100] [openshift-console-user-settings] adding namespace 2025-10-13T00:21:42.457424486+00:00 stderr F I1013 00:21:42.457401 28251 namespace.go:104] [openshift-cloud-network-config-controller] adding namespace took 3.457343ms 2025-10-13T00:21:42.457424486+00:00 stderr F I1013 00:21:42.457418 28251 
namespace.go:100] [openshift-user-workload-monitoring] adding namespace 2025-10-13T00:21:42.457707943+00:00 stderr F I1013 00:21:42.457679 28251 namespace.go:104] [openshift-kube-controller-manager-operator] adding namespace took 3.466584ms 2025-10-13T00:21:42.457707943+00:00 stderr F I1013 00:21:42.457694 28251 namespace.go:100] [openshift-config] adding namespace 2025-10-13T00:21:42.457942560+00:00 stderr F I1013 00:21:42.457921 28251 namespace.go:104] [openshift-monitoring] adding namespace took 3.381551ms 2025-10-13T00:21:42.457942560+00:00 stderr F I1013 00:21:42.457933 28251 namespace.go:100] [openshift-kube-controller-manager] adding namespace 2025-10-13T00:21:42.458219547+00:00 stderr F I1013 00:21:42.458184 28251 namespace.go:104] [openshift-route-controller-manager] adding namespace took 3.405021ms 2025-10-13T00:21:42.458219547+00:00 stderr F I1013 00:21:42.458200 28251 namespace.go:100] [openshift-image-registry] adding namespace 2025-10-13T00:21:42.458393982+00:00 stderr F I1013 00:21:42.458370 28251 namespace.go:104] [openshift-service-ca-operator] adding namespace took 3.36995ms 2025-10-13T00:21:42.458393982+00:00 stderr F I1013 00:21:42.458386 28251 namespace.go:100] [kube-node-lease] adding namespace 2025-10-13T00:21:42.458715441+00:00 stderr F I1013 00:21:42.458693 28251 namespace.go:104] [kube-system] adding namespace took 3.436652ms 2025-10-13T00:21:42.459016409+00:00 stderr F I1013 00:21:42.458959 28251 namespace.go:104] [openshift-kube-scheduler-operator] adding namespace took 3.447483ms 2025-10-13T00:21:42.459168943+00:00 stderr F I1013 00:21:42.459140 28251 namespace.go:104] [openshift-multus] adding namespace took 3.377421ms 2025-10-13T00:21:42.459179943+00:00 stderr F I1013 00:21:42.459166 28251 default_node_network_controller.go:133] Enable node proxy healthz server on 0.0.0.0:10256 2025-10-13T00:21:42.459284786+00:00 stderr F I1013 00:21:42.459262 28251 default_node_network_controller.go:684] Starting the default node network controller 2025-10-13T00:21:42.459295006+00:00 stderr F I1013 00:21:42.459286 28251 ovs.go:159] Exec(11): /usr/bin/ovs-vsctl --timeout=15 --no-heading --data=bare --format=csv --columns name list interface 2025-10-13T00:21:42.459475011+00:00 stderr F I1013 00:21:42.459440 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.4 10.217.0.64]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-diagnostics:v4 k8s.ovn.org/name:openshift-network-diagnostics k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8218c39d-9f44-4ee1-a2d7-cc0399885a25}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.459491801+00:00 stderr F I1013 00:21:42.459473 28251 address_set.go:304] New(8218c39d-9f44-4ee1-a2d7-cc0399885a25/default-network-controller:Namespace:openshift-network-diagnostics:v4/a1966919964212966539) with [10.217.0.4 10.217.0.64] 2025-10-13T00:21:42.459491801+00:00 stderr F I1013 00:21:42.459482 28251 namespace.go:104] [openshift-etcd] adding namespace took 3.466193ms 2025-10-13T00:21:42.459500562+00:00 stderr F I1013 00:21:42.459480 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.4 10.217.0.64]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-diagnostics:v4 
k8s.ovn.org/name:openshift-network-diagnostics k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8218c39d-9f44-4ee1-a2d7-cc0399885a25}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.459513492+00:00 stderr F I1013 00:21:42.459500 28251 obj_retry.go:541] Creating *v1.Namespace openshift-etcd took: 3.482483ms 2025-10-13T00:21:42.459767389+00:00 stderr F I1013 00:21:42.459739 28251 namespace.go:104] [openshift-network-diagnostics] adding namespace took 3.565456ms 2025-10-13T00:21:42.459767389+00:00 stderr F I1013 00:21:42.459753 28251 obj_retry.go:541] Creating *v1.Namespace openshift-network-diagnostics took: 3.581716ms 2025-10-13T00:21:42.459767389+00:00 stderr F I1013 00:21:42.459763 28251 obj_retry.go:502] Add event received for *v1.Namespace openstack-operators 2025-10-13T00:21:42.459780199+00:00 stderr F I1013 00:21:42.459768 28251 namespace.go:100] [openstack-operators] adding namespace 2025-10-13T00:21:42.459780199+00:00 stderr F I1013 00:21:42.459757 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.7]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver-operator:v4 k8s.ovn.org/name:openshift-kube-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c61fceab-5551-4b36-b095-0df4dd86c2d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.459787699+00:00 stderr F I1013 00:21:42.459780 28251 address_set.go:304] New(c61fceab-5551-4b36-b095-0df4dd86c2d2/default-network-controller:Namespace:openshift-kube-apiserver-operator:v4/a11465645704438275080) with [10.217.0.7] 2025-10-13T00:21:42.459816420+00:00 stderr F I1013 00:21:42.459788 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.7]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver-operator:v4 k8s.ovn.org/name:openshift-kube-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c61fceab-5551-4b36-b095-0df4dd86c2d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460079047+00:00 stderr F I1013 00:21:42.460051 28251 namespace.go:104] [openshift-kube-apiserver-operator] adding namespace took 3.577786ms 2025-10-13T00:21:42.460079047+00:00 stderr F I1013 00:21:42.460064 28251 obj_retry.go:541] Creating *v1.Namespace openshift-kube-apiserver-operator took: 3.596317ms 2025-10-13T00:21:42.460079047+00:00 stderr F I1013 00:21:42.460054 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-storage-operator:v4 k8s.ovn.org/name:openshift-cluster-storage-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e456889c-a1a5-4ca0-b08b-79801583741a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460090368+00:00 stderr F I1013 00:21:42.460077 28251 address_set.go:304] 
New(e456889c-a1a5-4ca0-b08b-79801583741a/default-network-controller:Namespace:openshift-cluster-storage-operator:v4/a13337366700695359377) with [] 2025-10-13T00:21:42.460097208+00:00 stderr F I1013 00:21:42.460083 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-storage-operator:v4 k8s.ovn.org/name:openshift-cluster-storage-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e456889c-a1a5-4ca0-b08b-79801583741a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460393306+00:00 stderr F I1013 00:21:42.460341 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.39]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-oauth-apiserver:v4 k8s.ovn.org/name:openshift-oauth-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4dac2b20-e105-4916-96b0-5690b4c79e49}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460393306+00:00 stderr F I1013 00:21:42.460367 28251 address_set.go:304] New(4dac2b20-e105-4916-96b0-5690b4c79e49/default-network-controller:Namespace:openshift-oauth-apiserver:v4/a18232515746603522929) with [10.217.0.39] 2025-10-13T00:21:42.460393306+00:00 stderr F I1013 00:21:42.460375 28251 namespace.go:104] [openshift-cluster-storage-operator] adding namespace took 3.617117ms 2025-10-13T00:21:42.460393306+00:00 stderr F I1013 00:21:42.460383 28251 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-storage-operator took: 3.630767ms 2025-10-13T00:21:42.460393306+00:00 stderr F I1013 00:21:42.460373 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.39]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-oauth-apiserver:v4 k8s.ovn.org/name:openshift-oauth-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4dac2b20-e105-4916-96b0-5690b4c79e49}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460413656+00:00 stderr F I1013 00:21:42.460391 28251 obj_retry.go:502] Add event received for *v1.Namespace openshift-network-operator 2025-10-13T00:21:42.460413656+00:00 stderr F I1013 00:21:42.460396 28251 namespace.go:100] [openshift-network-operator] adding namespace 2025-10-13T00:21:42.460720724+00:00 stderr F I1013 00:21:42.460684 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-user-settings:v4 k8s.ovn.org/name:openshift-console-user-settings k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1993aac-30de-48d4-bdd3-a3299b9538ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460720724+00:00 stderr F I1013 00:21:42.460708 28251 address_set.go:304] 
New(f1993aac-30de-48d4-bdd3-a3299b9538ff/default-network-controller:Namespace:openshift-console-user-settings:v4/a17174782576849527835) with [] 2025-10-13T00:21:42.460720724+00:00 stderr F I1013 00:21:42.460713 28251 namespace.go:104] [openshift-oauth-apiserver] adding namespace took 3.72531ms 2025-10-13T00:21:42.460732945+00:00 stderr F I1013 00:21:42.460721 28251 obj_retry.go:541] Creating *v1.Namespace openshift-oauth-apiserver took: 3.73851ms 2025-10-13T00:21:42.460732945+00:00 stderr F I1013 00:21:42.460713 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-user-settings:v4 k8s.ovn.org/name:openshift-console-user-settings k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1993aac-30de-48d4-bdd3-a3299b9538ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.460732945+00:00 stderr F I1013 00:21:42.460729 28251 obj_retry.go:502] Add event received for *v1.Namespace openshift-ovn-kubernetes 2025-10-13T00:21:42.460746595+00:00 stderr F I1013 00:21:42.460736 28251 namespace.go:100] [openshift-ovn-kubernetes] adding namespace 2025-10-13T00:21:42.460979311+00:00 stderr F I1013 00:21:42.460957 28251 namespace.go:104] [openshift-console-user-settings] adding namespace took 3.763741ms 2025-10-13T00:21:42.460979311+00:00 stderr F I1013 00:21:42.460968 28251 obj_retry.go:541] Creating *v1.Namespace openshift-console-user-settings took: 3.779172ms 2025-10-13T00:21:42.460979311+00:00 stderr F I1013 00:21:42.460976 28251 obj_retry.go:502] Add event received for *v1.Namespace openshift-dns 2025-10-13T00:21:42.460989632+00:00 stderr F I1013 00:21:42.460982 28251 namespace.go:100] [openshift-dns] adding namespace 2025-10-13T00:21:42.460996412+00:00 stderr F I1013 00:21:42.460969 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-user-workload-monitoring:v4 k8s.ovn.org/name:openshift-user-workload-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c7b7eb85-7c38-4a2f-b21b-66f16447149e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.461003122+00:00 stderr F I1013 00:21:42.460995 28251 address_set.go:304] New(c7b7eb85-7c38-4a2f-b21b-66f16447149e/default-network-controller:Namespace:openshift-user-workload-monitoring:v4/a17884403498503024866) with [] 2025-10-13T00:21:42.461027403+00:00 stderr F I1013 00:21:42.461001 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-user-workload-monitoring:v4 k8s.ovn.org/name:openshift-user-workload-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c7b7eb85-7c38-4a2f-b21b-66f16447149e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.461296520+00:00 stderr F I1013 00:21:42.461262 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} 
external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config:v4 k8s.ovn.org/name:openshift-config k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {44ff8a33-26ed-440b-98f7-fcd58e35aa55}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.461296520+00:00 stderr F I1013 00:21:42.461286 28251 address_set.go:304] New(44ff8a33-26ed-440b-98f7-fcd58e35aa55/default-network-controller:Namespace:openshift-config:v4/a14322580666718461836) with [] 2025-10-13T00:21:42.461307690+00:00 stderr F I1013 00:21:42.461295 28251 namespace.go:104] [openshift-user-workload-monitoring] adding namespace took 3.871334ms 2025-10-13T00:21:42.461307690+00:00 stderr F I1013 00:21:42.461292 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config:v4 k8s.ovn.org/name:openshift-config k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {44ff8a33-26ed-440b-98f7-fcd58e35aa55}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.461307690+00:00 stderr F I1013 00:21:42.461304 28251 obj_retry.go:541] Creating *v1.Namespace openshift-user-workload-monitoring took: 3.884704ms 2025-10-13T00:21:42.461712211+00:00 stderr F I1013 00:21:42.461677 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager:v4 k8s.ovn.org/name:openshift-kube-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.461712211+00:00 stderr F I1013 00:21:42.461704 28251 address_set.go:304] New(7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) with [] 2025-10-13T00:21:42.461726812+00:00 stderr F I1013 00:21:42.461713 28251 namespace.go:104] [openshift-config] adding namespace took 4.011448ms 2025-10-13T00:21:42.461726812+00:00 stderr F I1013 00:21:42.461709 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager:v4 k8s.ovn.org/name:openshift-kube-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.461726812+00:00 stderr F I1013 00:21:42.461722 28251 obj_retry.go:541] Creating *v1.Namespace openshift-config took: 4.026558ms 2025-10-13T00:21:42.461978508+00:00 stderr F I1013 00:21:42.461957 28251 namespace.go:104] [openshift-kube-controller-manager] adding namespace took 4.016658ms 2025-10-13T00:21:42.461978508+00:00 stderr F I1013 00:21:42.461971 28251 obj_retry.go:541] Creating *v1.Namespace openshift-kube-controller-manager took: 
4.035519ms 2025-10-13T00:21:42.462002889+00:00 stderr F I1013 00:21:42.461970 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.22 10.217.0.38]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-image-registry:v4 k8s.ovn.org/name:openshift-image-registry k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.462002889+00:00 stderr F I1013 00:21:42.461999 28251 address_set.go:304] New(3fbe9c32-c447-4e1c-9b34-6fac7dd25149/default-network-controller:Namespace:openshift-image-registry:v4/a65811733811199347) with [10.217.0.22 10.217.0.38] 2025-10-13T00:21:42.462030580+00:00 stderr F I1013 00:21:42.462005 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.22 10.217.0.38]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-image-registry:v4 k8s.ovn.org/name:openshift-image-registry k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.462326348+00:00 stderr F I1013 00:21:42.462290 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-node-lease:v4 k8s.ovn.org/name:kube-node-lease k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {16ef921a-9758-4ef9-9c0a-3ab866f42178}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.462326348+00:00 stderr F I1013 00:21:42.462314 28251 address_set.go:304] New(16ef921a-9758-4ef9-9c0a-3ab866f42178/default-network-controller:Namespace:kube-node-lease:v4/a8945957557890443212) with [] 2025-10-13T00:21:42.462360879+00:00 stderr F I1013 00:21:42.462320 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-node-lease:v4 k8s.ovn.org/name:kube-node-lease k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {16ef921a-9758-4ef9-9c0a-3ab866f42178}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.462372959+00:00 stderr F I1013 00:21:42.462363 28251 namespace.go:104] [openshift-image-registry] adding namespace took 4.153981ms 2025-10-13T00:21:42.462380839+00:00 stderr F I1013 00:21:42.462375 28251 obj_retry.go:541] Creating *v1.Namespace openshift-image-registry took: 4.172442ms 2025-10-13T00:21:42.462390469+00:00 stderr F I1013 00:21:42.462385 28251 obj_retry.go:502] Add event received for *v1.Namespace openshift-etcd-operator 2025-10-13T00:21:42.462398200+00:00 stderr F I1013 00:21:42.462391 28251 namespace.go:100] [openshift-etcd-operator] adding namespace 2025-10-13T00:21:42.462690577+00:00 stderr F I1013 00:21:42.462654 28251 model_client.go:381] Update operations generated as: 
[{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack-operators:v4 k8s.ovn.org/name:openstack-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1841b3a8-3470-46b5-88b1-23e03ef6311a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.462690577+00:00 stderr F I1013 00:21:42.462684 28251 address_set.go:304] New(1841b3a8-3470-46b5-88b1-23e03ef6311a/default-network-controller:Namespace:openstack-operators:v4/a14495394860088409165) with [] 2025-10-13T00:21:42.462720778+00:00 stderr F I1013 00:21:42.462690 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack-operators:v4 k8s.ovn.org/name:openstack-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1841b3a8-3470-46b5-88b1-23e03ef6311a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.462749479+00:00 stderr F I1013 00:21:42.462724 28251 namespace.go:104] [kube-node-lease] adding namespace took 4.328716ms 2025-10-13T00:21:42.462749479+00:00 stderr F I1013 00:21:42.462742 28251 obj_retry.go:541] Creating *v1.Namespace kube-node-lease took: 4.354637ms 2025-10-13T00:21:42.463011736+00:00 stderr F I1013 00:21:42.462994 28251 namespace.go:104] [openstack-operators] adding namespace took 3.220457ms 2025-10-13T00:21:42.463020026+00:00 stderr F I1013 00:21:42.463004 28251 obj_retry.go:541] Creating *v1.Namespace openstack-operators took: 3.234497ms 2025-10-13T00:21:42.463028717+00:00 stderr F I1013 00:21:42.462982 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-operator:v4 k8s.ovn.org/name:openshift-network-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04111b86-db80-412d-9e2f-97094cf524e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463045917+00:00 stderr F I1013 00:21:42.463030 28251 address_set.go:304] New(04111b86-db80-412d-9e2f-97094cf524e1/default-network-controller:Namespace:openshift-network-operator:v4/a17843891307737330665) with [] 2025-10-13T00:21:42.463067998+00:00 stderr F I1013 00:21:42.463037 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-operator:v4 k8s.ovn.org/name:openshift-network-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04111b86-db80-412d-9e2f-97094cf524e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463371746+00:00 stderr F I1013 00:21:42.463317 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovn-kubernetes:v4 
k8s.ovn.org/name:openshift-ovn-kubernetes k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {26eb6b8a-5781-42f2-86d8-29346cd69f7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463371746+00:00 stderr F I1013 00:21:42.463357 28251 address_set.go:304] New(26eb6b8a-5781-42f2-86d8-29346cd69f7c/default-network-controller:Namespace:openshift-ovn-kubernetes:v4/a1398255725986493602) with [] 2025-10-13T00:21:42.463386786+00:00 stderr F I1013 00:21:42.463362 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovn-kubernetes:v4 k8s.ovn.org/name:openshift-ovn-kubernetes k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {26eb6b8a-5781-42f2-86d8-29346cd69f7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463386786+00:00 stderr F I1013 00:21:42.463379 28251 namespace.go:104] [openshift-network-operator] adding namespace took 2.97658ms 2025-10-13T00:21:42.463395966+00:00 stderr F I1013 00:21:42.463390 28251 obj_retry.go:541] Creating *v1.Namespace openshift-network-operator took: 2.99066ms 2025-10-13T00:21:42.463667314+00:00 stderr F I1013 00:21:42.463634 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.31]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns:v4 k8s.ovn.org/name:openshift-dns k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9adb0821-8866-4e7b-91bc-3de62bd5d8d5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463667314+00:00 stderr F I1013 00:21:42.463658 28251 address_set.go:304] New(9adb0821-8866-4e7b-91bc-3de62bd5d8d5/default-network-controller:Namespace:openshift-dns:v4/a11732331429224425771) with [10.217.0.31] 2025-10-13T00:21:42.463676694+00:00 stderr F I1013 00:21:42.463663 28251 namespace.go:104] [openshift-ovn-kubernetes] adding namespace took 2.922458ms 2025-10-13T00:21:42.463676694+00:00 stderr F I1013 00:21:42.463671 28251 obj_retry.go:541] Creating *v1.Namespace openshift-ovn-kubernetes took: 2.934239ms 2025-10-13T00:21:42.463676694+00:00 stderr F I1013 00:21:42.463664 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.31]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns:v4 k8s.ovn.org/name:openshift-dns k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9adb0821-8866-4e7b-91bc-3de62bd5d8d5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463990132+00:00 stderr F I1013 00:21:42.463949 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.8]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd-operator:v4 k8s.ovn.org/name:openshift-etcd-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {f1ee15a1-bd6d-495f-8ae4-fb5816533020}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.463990132+00:00 stderr F I1013 00:21:42.463976 28251 namespace.go:104] [openshift-dns] adding namespace took 2.98921ms 2025-10-13T00:21:42.463990132+00:00 stderr F I1013 00:21:42.463981 28251 address_set.go:304] New(f1ee15a1-bd6d-495f-8ae4-fb5816533020/default-network-controller:Namespace:openshift-etcd-operator:v4/a14710618839743769589) with [10.217.0.8] 2025-10-13T00:21:42.463990132+00:00 stderr F I1013 00:21:42.463985 28251 obj_retry.go:541] Creating *v1.Namespace openshift-dns took: 3.002481ms 2025-10-13T00:21:42.464018163+00:00 stderr F I1013 00:21:42.463989 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.8]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd-operator:v4 k8s.ovn.org/name:openshift-etcd-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1ee15a1-bd6d-495f-8ae4-fb5816533020}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.464241199+00:00 stderr F I1013 00:21:42.464217 28251 namespace.go:104] [openshift-etcd-operator] adding namespace took 1.821619ms 2025-10-13T00:21:42.464241199+00:00 stderr F I1013 00:21:42.464227 28251 obj_retry.go:541] Creating *v1.Namespace openshift-etcd-operator took: 1.83651ms 2025-10-13T00:21:42.464252079+00:00 stderr F I1013 00:21:42.464240 28251 factory.go:988] Added *v1.Namespace event handler 1 2025-10-13T00:21:42.464422004+00:00 stderr F I1013 00:21:42.464403 28251 obj_retry.go:502] Add event received for *v1.Node crc 2025-10-13T00:21:42.464436554+00:00 stderr F I1013 00:21:42.464427 28251 master.go:627] Adding or Updating Node "crc" 2025-10-13T00:21:42.464576088+00:00 stderr F I1013 00:21:42.464538 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:dc1aa63b-0376-4ccb-99e2-10794dfff422}]} other_config:{GoMap:map[exclude_ips:10.217.0.2 mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.464576088+00:00 stderr F I1013 00:21:42.464561 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:dc1aa63b-0376-4ccb-99e2-10794dfff422}]} other_config:{GoMap:map[exclude_ips:10.217.0.2 mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465155464+00:00 stderr F I1013 00:21:42.465108 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[arp_proxy:0a:58:a9:fe:01:01 169.254.1.1 fe80::1 10.217.0.0/22 router-port:rtos-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid 
== {c4867092-ff2b-4983-87ca-ac317e39f546}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465173054+00:00 stderr F I1013 00:21:42.465153 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465209935+00:00 stderr F I1013 00:21:42.465168 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[arp_proxy:0a:58:a9:fe:01:01 169.254.1.1 fe80::1 10.217.0.0/22 router-port:rtos-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4867092-ff2b-4983-87ca-ac317e39f546}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465516623+00:00 stderr F I1013 00:21:42.465480 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465516623+00:00 stderr F I1013 00:21:42.465501 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465838812+00:00 stderr F I1013 00:21:42.465803 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Gateway_Chassis Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa name:rtos-crc-017e52b0-97d3-4d7d-aae4-9b216aa025aa priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {964f1415-aaef-4b48-af02-711e20a49a83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465898534+00:00 stderr F I1013 00:21:42.465869 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:964f1415-aaef-4b48-af02-711e20a49a83}]} mac:0a:58:0a:d9:00:01 networks:{GoSet:[10.217.0.1/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {21ca7e68-1853-4731-afc6-84ae6ce74fe0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465927464+00:00 stderr F I1013 00:21:42.465901 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:21ca7e68-1853-4731-afc6-84ae6ce74fe0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.465939305+00:00 stderr F I1013 00:21:42.465914 28251 transact.go:42] Configuring OVN: [{Op:update 
Table:Gateway_Chassis Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa name:rtos-crc-017e52b0-97d3-4d7d-aae4-9b216aa025aa priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {964f1415-aaef-4b48-af02-711e20a49a83}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:964f1415-aaef-4b48-af02-711e20a49a83}]} mac:0a:58:0a:d9:00:01 networks:{GoSet:[10.217.0.1/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {21ca7e68-1853-4731-afc6-84ae6ce74fe0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:21ca7e68-1853-4731-afc6-84ae6ce74fe0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.466390597+00:00 stderr F I1013 00:21:42.466343 28251 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.217.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:crc:10.217.0.2 k8s.ovn.org/name:crc k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.217.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cb395e67-c62e-4261-95eb-0e35bc8ae4a2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.466409827+00:00 stderr F I1013 00:21:42.466398 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:cb395e67-c62e-4261-95eb-0e35bc8ae4a2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.466448758+00:00 stderr F I1013 00:21:42.466411 28251 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.217.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:crc:10.217.0.2 k8s.ovn.org/name:crc k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.217.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cb395e67-c62e-4261-95eb-0e35bc8ae4a2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:cb395e67-c62e-4261-95eb-0e35bc8ae4a2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.466844879+00:00 stderr F I1013 00:21:42.466796 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.217.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c008357e-464b-4db7-a13c-8eaa97f94217}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.466867110+00:00 stderr F I1013 00:21:42.466837 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate 
Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:c008357e-464b-4db7-a13c-8eaa97f94217}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.466867110+00:00 stderr F I1013 00:21:42.466850 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.217.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c008357e-464b-4db7-a13c-8eaa97f94217}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:c008357e-464b-4db7-a13c-8eaa97f94217}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.467361213+00:00 stderr F I1013 00:21:42.467308 28251 ovs.go:162] Exec(10): stdout: "" 2025-10-13T00:21:42.467361213+00:00 stderr F I1013 00:21:42.467323 28251 ovs.go:163] Exec(10): stderr: "" 2025-10-13T00:21:42.467361213+00:00 stderr F I1013 00:21:42.467331 28251 node_network_controller_manager.go:251] CheckForStaleOVSInternalPorts took 8.110318ms 2025-10-13T00:21:42.467380614+00:00 stderr F I1013 00:21:42.467357 28251 ovs.go:159] Exec(12): /usr/bin/ovs-vsctl --timeout=15 --columns=name,external_ids --data=bare --no-headings --format=csv find Interface external_ids:sandbox!="" external_ids:vf-netdev-name!="" 2025-10-13T00:21:42.467380614+00:00 stderr F I1013 00:21:42.467343 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[b6:dc:d9:26:03:d4 10.217.0.2]} options:{GoMap:map[]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df0c3d3a-8474-4c1f-ac97-c1820ebf2712}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.467434105+00:00 stderr F I1013 00:21:42.467399 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.467445375+00:00 stderr F I1013 00:21:42.467425 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[b6:dc:d9:26:03:d4 10.217.0.2]} options:{GoMap:map[]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df0c3d3a-8474-4c1f-ac97-c1820ebf2712}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.467732413+00:00 stderr F I1013 00:21:42.467698 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.467732413+00:00 stderr F I1013 00:21:42.467721 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.467969429+00:00 stderr F I1013 00:21:42.467945 28251 switch.go:50] Hybridoverlay port does not exist for node crc 2025-10-13T00:21:42.467969429+00:00 stderr F I1013 00:21:42.467960 28251 switch.go:59] haveMP true haveHO false ManagementPortAddress 10.217.0.2/23 HybridOverlayAddressOA 10.217.0.3/23 2025-10-13T00:21:42.468041781+00:00 stderr F I1013 00:21:42.468010 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.468041781+00:00 stderr F I1013 00:21:42.468030 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.468380450+00:00 stderr F I1013 00:21:42.468316 28251 hybrid.go:140] Removing node crc hybrid overlay port 2025-10-13T00:21:42.468516124+00:00 stderr F I1013 00:21:42.468481 28251 ovs.go:162] Exec(11): stdout: "6c25dda9fb1b793\nea2e69806126bd9\novn-k8s-mp0\nbr-ex\ncbf486ca4cacf84\ndbcfdcf39d73e75\ne1d18b29c373f72\ndb11ff99fdc6391\na3103b56b63df6a\n41a9e9ef4b0fde1\na6a02864fcc1d29\n0f2717b16e95367\nf4cda25e678299f\n285fda3f179fd95\nc760791ee025653\nf1bcb80057948c8\na627c77ce1d0d5b\npatch-br-int-to-br-ex_crc\nbad13b08b2ffc11\nbr-int\n6cfe111afb06d35\ne7ac480dfe62cae\n0a945e981815604\ne1bfb510a845e2a\nb40bcae31bccd90\n5439deca53ee54f\nc644cf180f0b929\n3501d58989a4568\n9eac3f19b241539\nee196b19a43e4eb\nfb7d53dbe8608f5\n66953f4cfd7fd14\n11b9dba771caca2\nf7684764a172c67\n79f9f836193e2f1\ndc7cb350ffa4d90\n851b2fa5f88dcb6\nccdd6d0d771e91e\n69f02b6bc9278d1\n9c9a5e38e90cf6c\ncf8e48b8c1d90f3\n6ba83a9d480b2ef\n7490d971711bdc6\n545c8bbc6de04c8\nens3\nb6d9e0bc146cb28\n17fc394c1df425d\n42c0564c08abdab\n747d5b06bcf5b29\npatch-br-ex_crc-to-br-int\n7cf0ee4d5236aa6\ncfa8e7f575221b0\nf55b6990b4cae27\n" 2025-10-13T00:21:42.468516124+00:00 stderr F I1013 00:21:42.468508 28251 ovs.go:163] Exec(11): stderr: "" 2025-10-13T00:21:42.468577676+00:00 stderr F I1013 00:21:42.468537 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.180 physical_ips:38.102.83.180]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.468607337+00:00 stderr F I1013 00:21:42.468575 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.180 physical_ips:38.102.83.180]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.468667788+00:00 stderr F I1013 00:21:42.468557 28251 ovs.go:159] Exec(13): /usr/bin/ovs-ofctl dump-flows br-ex 2025-10-13T00:21:42.469053779+00:00 stderr F I1013 00:21:42.469008 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469053779+00:00 stderr F I1013 00:21:42.469042 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469075139+00:00 stderr F I1013 00:21:42.469054 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469433279+00:00 stderr F I1013 00:21:42.469388 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469451179+00:00 stderr F I1013 00:21:42.469426 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469460330+00:00 stderr F I1013 
00:21:42.469437 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469772538+00:00 stderr F I1013 00:21:42.469730 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469784878+00:00 stderr F I1013 00:21:42.469764 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.469793998+00:00 stderr F I1013 00:21:42.469775 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470153928+00:00 stderr F I1013 00:21:42.470106 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:c3:15:08 networks:{GoSet:[38.102.83.180/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470193749+00:00 stderr F I1013 00:21:42.470156 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470205550+00:00 stderr F I1013 00:21:42.470177 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:c3:15:08 networks:{GoSet:[38.102.83.180/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470682002+00:00 stderr F I1013 00:21:42.470601 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470786825+00:00 stderr F I1013 00:21:42.470745 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:c3:15:08]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470798395+00:00 stderr F I1013 00:21:42.470783 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.470844677+00:00 stderr F I1013 00:21:42.470795 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:c3:15:08]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471255988+00:00 stderr F I1013 00:21:42.471210 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471255988+00:00 stderr F I1013 00:21:42.471236 28251 transact.go:42] Configuring OVN: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471555316+00:00 stderr F I1013 00:21:42.471520 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471571116+00:00 stderr F I1013 00:21:42.471552 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471597957+00:00 stderr F I1013 00:21:42.471565 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471936636+00:00 stderr F I1013 00:21:42.471868 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471980157+00:00 stderr F I1013 00:21:42.471941 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.471993928+00:00 stderr F I1013 00:21:42.471970 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.472296426+00:00 stderr F I1013 00:21:42.472259 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.472306826+00:00 stderr F I1013 00:21:42.472292 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert 
Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.472334517+00:00 stderr F I1013 00:21:42.472304 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.473383775+00:00 stderr F I1013 00:21:42.473314 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.473402415+00:00 stderr F I1013 00:21:42.473387 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.473445567+00:00 stderr F I1013 00:21:42.473403 28251 transact.go:42] Configuring OVN: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.474311310+00:00 stderr F I1013 00:21:42.474269 28251 ovs.go:162] Exec(12): stdout: "" 2025-10-13T00:21:42.474311310+00:00 stderr F I1013 00:21:42.474287 28251 ovs.go:163] Exec(12): stderr: "" 2025-10-13T00:21:42.475114922+00:00 stderr F I1013 00:21:42.474983 28251 ovs.go:162] Exec(13): stdout: "NXST_FLOW reply (xid=0x4):\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=500,ip,in_port=2,nw_src=38.102.83.180,nw_dst=169.254.169.2 actions=ct(commit,table=4,zone=64001,nat(dst=38.102.83.180))\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=500,ip,in_port=2,nw_src=38.102.83.180,nw_dst=172.17.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=500,ip,in_port=2,nw_src=38.102.83.180,nw_dst=172.18.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=500,ip,in_port=2,nw_src=38.102.83.180,nw_dst=172.19.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, 
priority=500,ip,in_port=2,nw_src=38.102.83.180,nw_dst=192.168.122.10 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=1811, n_bytes=183616, idle_age=7, priority=500,ip,in_port=2,nw_src=38.102.83.180,nw_dst=192.168.126.11 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=1446, n_bytes=4063519, idle_age=7, priority=500,ip,in_port=LOCAL,nw_dst=169.254.169.1 actions=ct(table=5,zone=64002,nat)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=1883, n_bytes=195238, idle_age=2, priority=500,ip,in_port=LOCAL,nw_dst=10.217.4.0/23 actions=ct(commit,table=2,zone=64001,nat(src=169.254.169.2))\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=105,ip,in_port=2,nw_dst=10.217.4.0/23 actions=drop\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=1495, n_bytes=4084406, idle_age=2, priority=500,ip,in_port=2,nw_src=10.217.4.0/23,nw_dst=169.254.169.2 actions=ct(table=3,zone=64001,nat)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=205,udp,in_port=1,dl_dst=fa:16:3e:c3:15:08,tp_dst=6081 actions=LOCAL\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=200,udp,in_port=1,tp_dst=6081 actions=NORMAL\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=200,udp,in_port=LOCAL,tp_dst=6081 actions=output:1\n cookie=0xdeff105, duration=13.024s, table=0, n_packets=0, n_bytes=0, idle_age=13, priority=110,ip,in_port=1,nw_frag=yes actions=ct(table=0,zone=64004)\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=109,ip,in_port=2,dl_src=fa:16:3e:c3:15:08,nw_src=10.217.0.0/23 actions=ct(commit,zone=64000,exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=105,pkt_mark=0x3f0,ip,in_port=2,dl_src=fa:16:3e:c3:15:08 actions=ct(commit,zone=64000,nat(src=38.102.83.180),exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=0, n_bytes=0, idle_age=446, priority=104,ip,in_port=2,nw_src=10.217.0.0/22 actions=drop\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=7435, n_bytes=1119386, idle_age=14, priority=100,ip,in_port=2,dl_src=fa:16:3e:c3:15:08 actions=ct(commit,zone=64000,exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=53341, n_bytes=4351227, idle_age=1, priority=100,ip,in_port=LOCAL actions=ct(commit,zone=64000,exec(load:0x2->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=216304, n_bytes=2450017343, idle_age=1, priority=50,ip,in_port=1 actions=ct(table=1,zone=64000,nat)\n cookie=0xdeff105, duration=447.604s, table=0, n_packets=31, n_bytes=1652, idle_age=21, priority=10,in_port=2,dl_src=fa:16:3e:c3:15:08 actions=NORMAL\n cookie=0xdeff105, duration=446.944s, table=0, n_packets=33, n_bytes=1834, idle_age=21, priority=10,in_port=1,dl_dst=fa:16:3e:c3:15:08 actions=output:2,LOCAL\n cookie=0xdeff105, duration=447.604s, table=0, n_packets=0, n_bytes=0, idle_age=447, priority=9,in_port=2 actions=drop\n cookie=0x0, duration=558.054s, table=0, n_packets=53409, n_bytes=6378105, idle_age=0, priority=0 actions=NORMAL\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=6478, n_bytes=5016272, idle_age=14, priority=100,ct_state=+est+trk,ct_mark=0x1,ip actions=output:2\n cookie=0xdeff105, 
duration=446.944s, table=1, n_packets=209585, n_bytes=2444967145, idle_age=1, priority=100,ct_state=+est+trk,ct_mark=0x2,ip actions=LOCAL\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=0, n_bytes=0, idle_age=446, priority=100,ct_state=+rel+trk,ct_mark=0x1,ip actions=output:2\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=0, n_bytes=0, idle_age=446, priority=100,ct_state=+rel+trk,ct_mark=0x2,ip actions=LOCAL\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=0, n_bytes=0, idle_age=446, priority=15,ip,nw_dst=10.217.0.0/22 actions=output:2\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=0, n_bytes=0, idle_age=446, priority=13,udp,in_port=1,tp_dst=3784 actions=output:2,LOCAL\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=241, n_bytes=33926, idle_age=6, priority=10,dl_dst=fa:16:3e:c3:15:08 actions=LOCAL\n cookie=0xdeff105, duration=446.944s, table=1, n_packets=0, n_bytes=0, idle_age=446, priority=0 actions=NORMAL\n cookie=0xdeff105, duration=446.944s, table=2, n_packets=3329, n_bytes=4258757, idle_age=2, actions=mod_dl_dst:fa:16:3e:c3:15:08,output:2\n cookie=0xdeff105, duration=446.944s, table=3, n_packets=3306, n_bytes=4268022, idle_age=2, actions=move:NXM_OF_ETH_DST[]->NXM_OF_ETH_SRC[],mod_dl_dst:fa:16:3e:c3:15:08,LOCAL\n cookie=0xdeff105, duration=446.944s, table=4, n_packets=1811, n_bytes=183616, idle_age=7, ip actions=ct(commit,table=3,zone=64002,nat(src=169.254.169.1))\n cookie=0xdeff105, duration=446.944s, table=5, n_packets=1446, n_bytes=4063519, idle_age=7, ip actions=ct(commit,table=2,zone=64001,nat)\n" 2025-10-13T00:21:42.475114922+00:00 stderr F I1013 00:21:42.475089 28251 ovs.go:163] Exec(13): stderr: "" 2025-10-13T00:21:42.478259826+00:00 stderr F I1013 00:21:42.478206 28251 ovs.go:159] Exec(14): /usr/bin/ovn-sbctl --timeout=15 --no-leader-only get SB_Global . options:name 2025-10-13T00:21:42.487793953+00:00 stderr F I1013 00:21:42.487709 28251 ovs.go:162] Exec(14): stdout: "crc\n" 2025-10-13T00:21:42.487793953+00:00 stderr F I1013 00:21:42.487748 28251 ovs.go:163] Exec(14): stderr: "" 2025-10-13T00:21:42.487856854+00:00 stderr F I1013 00:21:42.487833 28251 config.go:1590] Exec: /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-remote="unix:/var/run/ovn/ovnsb_db.sock" 2025-10-13T00:21:42.494289047+00:00 stderr F I1013 00:21:42.494250 28251 ovs.go:159] Exec(15): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . 
external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=192.168.126.11 external_ids:ovn-remote-probe-interval=180000 external_ids:ovn-openflow-probe-interval=180 other_config:bundle-idle-timeout=180 external_ids:hostname="crc" external_ids:ovn-is-interconn=true external_ids:ovn-monitor-all=true external_ids:ovn-ofctrl-wait-before-clear=0 external_ids:ovn-enable-lflow-cache=true external_ids:ovn-set-local-ip="true" external_ids:ovn-memlimit-lflow-cache-kb=1048576 2025-10-13T00:21:42.498749177+00:00 stderr F I1013 00:21:42.498687 28251 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c282f1ef-a278-42e0-aa68-8a0a37eb01c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.498814869+00:00 stderr F I1013 00:21:42.498772 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:c282f1ef-a278-42e0-aa68-8a0a37eb01c5}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.498845330+00:00 stderr F I1013 00:21:42.498807 28251 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c282f1ef-a278-42e0-aa68-8a0a37eb01c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:c282f1ef-a278-42e0-aa68-8a0a37eb01c5}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.499753054+00:00 stderr F I1013 00:21:42.499694 28251 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0739e536-1258-4533-951f-bde9a4f368fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.499788395+00:00 stderr F I1013 00:21:42.499757 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:0739e536-1258-4533-951f-bde9a4f368fa}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.499800255+00:00 stderr F I1013 00:21:42.499780 28251 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0739e536-1258-4533-951f-bde9a4f368fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:0739e536-1258-4533-951f-bde9a4f368fa}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.500268598+00:00 stderr F I1013 00:21:42.500226 28251 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b53f7a6-c603-4206-a5b6-1526766749dd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.500282618+00:00 stderr F I1013 00:21:42.500260 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:7b53f7a6-c603-4206-a5b6-1526766749dd}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.500308809+00:00 stderr F I1013 00:21:42.500281 28251 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b53f7a6-c603-4206-a5b6-1526766749dd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:7b53f7a6-c603-4206-a5b6-1526766749dd}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.500511124+00:00 stderr F I1013 00:21:42.500469 28251 ovs.go:162] Exec(15): stdout: "" 2025-10-13T00:21:42.500511124+00:00 stderr F I1013 00:21:42.500501 28251 ovs.go:163] Exec(15): stderr: "" 2025-10-13T00:21:42.500541455+00:00 stderr F I1013 00:21:42.500524 28251 ovs.go:159] Exec(16): /usr/bin/ovs-vsctl --timeout=15 -- clear bridge br-int netflow -- clear bridge br-int sflow -- clear bridge br-int ipfix 2025-10-13T00:21:42.500697310+00:00 stderr F I1013 00:21:42.500667 28251 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d46a82c9-2a2a-4689-b815-2cfbb33df9f4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.500732810+00:00 stderr F I1013 00:21:42.500699 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:d46a82c9-2a2a-4689-b815-2cfbb33df9f4}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.500743031+00:00 stderr F I1013 00:21:42.500721 28251 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d46a82c9-2a2a-4689-b815-2cfbb33df9f4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:d46a82c9-2a2a-4689-b815-2cfbb33df9f4}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.501241834+00:00 stderr F I1013 00:21:42.501193 28251 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7af90a92-555c-456c-830a-f7a82ed1882d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.501293836+00:00 stderr F I1013 00:21:42.501255 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:7af90a92-555c-456c-830a-f7a82ed1882d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.501334377+00:00 stderr F I1013 00:21:42.501286 28251 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7af90a92-555c-456c-830a-f7a82ed1882d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:7af90a92-555c-456c-830a-f7a82ed1882d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.508413377+00:00 stderr F I1013 00:21:42.508353 28251 ovs.go:162] Exec(16): stdout: "" 2025-10-13T00:21:42.508413377+00:00 stderr F I1013 00:21:42.508374 28251 ovs.go:163] Exec(16): stderr: "" 2025-10-13T00:21:42.511393107+00:00 stderr F I1013 00:21:42.511337 28251 default_node_network_controller.go:778] Node crc ready for ovn initialization with subnet 10.217.0.0/23 2025-10-13T00:21:42.511510540+00:00 stderr F I1013 00:21:42.511486 28251 ovs.go:159] Exec(17): /usr/bin/ovs-vsctl --timeout=15 --no-headings --data bare --format csv --columns type,name find Interface name=ovn-k8s-mp0 2025-10-13T00:21:42.516931036+00:00 stderr F I1013 00:21:42.516886 28251 base_network_controller.go:458] When adding node crc for network default, found 81 pods to add to retryPods 2025-10-13T00:21:42.516931036+00:00 stderr F I1013 00:21:42.516911 28251 base_network_controller.go:464] Adding pod hostpath-provisioner/csi-hostpathplugin-hvm8g to retryPods for network default 2025-10-13T00:21:42.516931036+00:00 stderr F I1013 00:21:42.516922 28251 base_network_controller.go:464] Adding pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m to retryPods for network default 2025-10-13T00:21:42.516931036+00:00 stderr F I1013 00:21:42.516927 28251 base_network_controller.go:464] Adding pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516932 28251 base_network_controller.go:464] Adding pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516938 28251 base_network_controller.go:464] Adding pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516943 28251 base_network_controller.go:464] Adding pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516948 28251 base_network_controller.go:464] Adding pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516953 28251 base_network_controller.go:464] Adding pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516957 28251 base_network_controller.go:464] Adding pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc to retryPods for network default 2025-10-13T00:21:42.516967277+00:00 stderr F I1013 00:21:42.516964 28251 base_network_controller.go:464] Adding pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 to retryPods 
for network default 2025-10-13T00:21:42.516977087+00:00 stderr F I1013 00:21:42.516969 28251 base_network_controller.go:464] Adding pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd to retryPods for network default 2025-10-13T00:21:42.516977087+00:00 stderr F I1013 00:21:42.516974 28251 base_network_controller.go:464] Adding pod openshift-console/console-644bb77b49-5x5xk to retryPods for network default 2025-10-13T00:21:42.516984717+00:00 stderr F I1013 00:21:42.516979 28251 base_network_controller.go:464] Adding pod openshift-console/downloads-65476884b9-9wcvx to retryPods for network default 2025-10-13T00:21:42.516992088+00:00 stderr F I1013 00:21:42.516984 28251 base_network_controller.go:464] Adding pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z to retryPods for network default 2025-10-13T00:21:42.516992088+00:00 stderr F I1013 00:21:42.516989 28251 base_network_controller.go:464] Adding pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf to retryPods for network default 2025-10-13T00:21:42.516999458+00:00 stderr F I1013 00:21:42.516994 28251 base_network_controller.go:464] Adding pod openshift-dns-operator/dns-operator-75f687757b-nz2xb to retryPods for network default 2025-10-13T00:21:42.517006518+00:00 stderr F I1013 00:21:42.517000 28251 base_network_controller.go:464] Adding pod openshift-dns/dns-default-gbw49 to retryPods for network default 2025-10-13T00:21:42.517013928+00:00 stderr F I1013 00:21:42.517005 28251 base_network_controller.go:464] Adding pod openshift-dns/node-resolver-dn27q to retryPods for network default 2025-10-13T00:21:42.517021128+00:00 stderr F I1013 00:21:42.517012 28251 base_network_controller.go:464] Adding pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg to retryPods for network default 2025-10-13T00:21:42.517021128+00:00 stderr F I1013 00:21:42.517017 28251 base_network_controller.go:464] Adding pod openshift-etcd/etcd-crc to retryPods for network default 2025-10-13T00:21:42.517028309+00:00 stderr F I1013 00:21:42.517022 28251 base_network_controller.go:464] Adding pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv to retryPods for network default 2025-10-13T00:21:42.517035619+00:00 stderr F I1013 00:21:42.517027 28251 base_network_controller.go:464] Adding pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 to retryPods for network default 2025-10-13T00:21:42.517035619+00:00 stderr F I1013 00:21:42.517032 28251 base_network_controller.go:464] Adding pod openshift-image-registry/node-ca-l92hr to retryPods for network default 2025-10-13T00:21:42.517045749+00:00 stderr F I1013 00:21:42.517037 28251 base_network_controller.go:464] Adding pod openshift-ingress-canary/ingress-canary-2vhcn to retryPods for network default 2025-10-13T00:21:42.517045749+00:00 stderr F I1013 00:21:42.517042 28251 base_network_controller.go:464] Adding pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517048 28251 base_network_controller.go:464] Adding pod openshift-ingress/router-default-5c9bf7bc58-6jctv to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517053 28251 base_network_controller.go:464] Adding pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517059 28251 
base_network_controller.go:464] Adding pod openshift-kube-apiserver/installer-13-crc to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517064 28251 base_network_controller.go:464] Adding pod openshift-kube-apiserver/kube-apiserver-crc to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517069 28251 base_network_controller.go:464] Adding pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517074 28251 base_network_controller.go:464] Adding pod openshift-kube-controller-manager/kube-controller-manager-crc to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517080 28251 base_network_controller.go:464] Adding pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517084 28251 base_network_controller.go:464] Adding pod openshift-kube-scheduler/openshift-kube-scheduler-crc to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517089 28251 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517094 28251 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517098 28251 base_network_controller.go:464] Adding pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517103 28251 base_network_controller.go:464] Adding pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517107 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517112 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517116 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-daemon-zpnhg to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517121 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517126 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-server-v65wr to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517131 28251 base_network_controller.go:464] Adding pod openshift-marketplace/certified-operators-cms8q to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517136 28251 base_network_controller.go:464] Adding pod openshift-marketplace/community-operators-gjctm to retryPods 
for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517141 28251 base_network_controller.go:464] Adding pod openshift-marketplace/marketplace-operator-8b455464d-29pzg to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517145 28251 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-marketplace-crk87 to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517149 28251 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-operators-hkptr to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517154 28251 base_network_controller.go:464] Adding pod openshift-multus/multus-additional-cni-plugins-bzj2p to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517159 28251 base_network_controller.go:464] Adding pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517165 28251 base_network_controller.go:464] Adding pod openshift-multus/multus-q88th to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517173 28251 base_network_controller.go:464] Adding pod openshift-multus/network-metrics-daemon-qdfr4 to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517178 28251 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517183 28251 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-target-v54bt to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517188 28251 base_network_controller.go:464] Adding pod openshift-network-node-identity/network-node-identity-7xghp to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517193 28251 base_network_controller.go:464] Adding pod openshift-network-operator/iptables-alerter-wwpnd to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517198 28251 base_network_controller.go:464] Adding pod openshift-network-operator/network-operator-767c585db5-zd56b to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517202 28251 base_network_controller.go:464] Adding pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517207 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf to retryPods for network default 2025-10-13T00:21:42.517218534+00:00 stderr F I1013 00:21:42.517213 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh to retryPods for network default 2025-10-13T00:21:42.517240974+00:00 stderr F I1013 00:21:42.517217 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 to retryPods for network default 2025-10-13T00:21:42.517240974+00:00 stderr F I1013 00:21:42.517222 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz to retryPods for network default 
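The controller reports "found 81 pods to add to retryPods" and then logs one "Adding pod <namespace>/<name> to retryPods for network default" entry per pod. A small sketch (same hypothetical ovnkube-node.log file as above) that tallies those entries per namespace and cross-checks the total against the reported 81:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the per-pod retry entries logged by base_network_controller.go:
// "Adding pod <namespace>/<name> to retryPods for network default"
var addPod = regexp.MustCompile(`Adding pod (\S+)/(\S+) to retryPods`)

func main() {
	f, err := os.Open("ovnkube-node.log") // hypothetical local copy of this log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	perNamespace := map[string]int{}
	total := 0

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := addPod.FindStringSubmatch(sc.Text()); m != nil {
			perNamespace[m[1]]++ // m[1] is the namespace, m[2] the pod name
			total++
		}
	}

	for ns, n := range perNamespace {
		fmt.Printf("%-60s %d\n", ns, n)
	}
	fmt.Printf("total retryPods additions: %d (controller reported 81)\n", total)
}
```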
2025-10-13T00:21:42.517240974+00:00 stderr F I1013 00:21:42.517227 28251 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b to retryPods for network default 2025-10-13T00:21:42.517240974+00:00 stderr F I1013 00:21:42.517232 28251 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-node-wzh74 to retryPods for network default 2025-10-13T00:21:42.517240974+00:00 stderr F I1013 00:21:42.517236 28251 base_network_controller.go:464] Adding pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs to retryPods for network default 2025-10-13T00:21:42.517254235+00:00 stderr F I1013 00:21:42.517241 28251 base_network_controller.go:464] Adding pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz to retryPods for network default 2025-10-13T00:21:42.517254235+00:00 stderr F I1013 00:21:42.517246 28251 base_network_controller.go:464] Adding pod openshift-service-ca/service-ca-666f99b6f-kk8kg to retryPods for network default 2025-10-13T00:21:42.517261825+00:00 stderr F I1013 00:21:42.517253 28251 obj_retry.go:233] Iterate retry objects requested (resource *v1.Pod) 2025-10-13T00:21:42.517394158+00:00 stderr F I1013 00:21:42.517328 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Encap Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa ip:192.168.126.11 options:{GoMap:map[csum:true]} type:geneve] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ffeb60a-fdfe-4f88-8208-1eba752e78d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.517444490+00:00 stderr F I1013 00:21:42.517414 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Chassis Row:map[encaps:{GoSet:[{GoUUID:4ffeb60a-fdfe-4f88-8208-1eba752e78d6}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.517863691+00:00 stderr F I1013 00:21:42.517829 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Chassis Row:map[] Rows:[] Columns:[] Mutations:[{Column:other_config Mutator:delete Value:{GoSet:[is-remote]}} {Column:other_config Mutator:insert Value:{GoMap:map[is-remote:false]}}] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.517902172+00:00 stderr F I1013 00:21:42.517856 28251 transact.go:42] Configuring OVN: [{Op:update Table:Encap Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa ip:192.168.126.11 options:{GoMap:map[csum:true]} type:geneve] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ffeb60a-fdfe-4f88-8208-1eba752e78d6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Chassis Row:map[encaps:{GoSet:[{GoUUID:4ffeb60a-fdfe-4f88-8208-1eba752e78d6}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Chassis Row:map[] Rows:[] Columns:[] Mutations:[{Column:other_config Mutator:delete Value:{GoSet:[is-remote]}} {Column:other_config Mutator:insert Value:{GoMap:map[is-remote:false]}}] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.518501278+00:00 stderr F I1013 00:21:42.518464 28251 zone_ic_handler.go:220] Creating 
interconnect resources for local zone node crc for the network default 2025-10-13T00:21:42.518705474+00:00 stderr F I1013 00:21:42.518636 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:58:00:02 name:rtots-crc networks:{GoSet:[100.88.0.2/16]} options:{GoMap:map[mcast_flood:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e084d304-085a-4130-a762-c61b2dfff5af}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.518812837+00:00 stderr F I1013 00:21:42.518760 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e084d304-085a-4130-a762-c61b2dfff5af}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.518871418+00:00 stderr F I1013 00:21:42.518804 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:58:00:02 name:rtots-crc networks:{GoSet:[100.88.0.2/16]} options:{GoMap:map[mcast_flood:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e084d304-085a-4130-a762-c61b2dfff5af}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e084d304-085a-4130-a762-c61b2dfff5af}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.519064833+00:00 stderr F I1013 00:21:42.519021 28251 ovs.go:162] Exec(17): stdout: "internal,ovn-k8s-mp0\n" 2025-10-13T00:21:42.519064833+00:00 stderr F I1013 00:21:42.519053 28251 ovs.go:163] Exec(17): stderr: "" 2025-10-13T00:21:42.519085374+00:00 stderr F I1013 00:21:42.519072 28251 ovs.go:159] Exec(18): /usr/bin/ovs-vsctl --timeout=15 --no-headings --data bare --format csv --columns type,name find Interface name=ovn-k8s-mp0_0 2025-10-13T00:21:42.519745262+00:00 stderr F I1013 00:21:42.519680 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[requested-tnl-key:2 router-port:rtots-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff13d99b-65b5-4b3b-bc27-534de830144b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.519792593+00:00 stderr F I1013 00:21:42.519753 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ff13d99b-65b5-4b3b-bc27-534de830144b}]}}] Timeout: Where:[where column _uuid == {d7c11e7c-4a6b-41fe-87a6-5c70659238bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.519821284+00:00 stderr F I1013 00:21:42.519783 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[requested-tnl-key:2 router-port:rtots-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff13d99b-65b5-4b3b-bc27-534de830144b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:ff13d99b-65b5-4b3b-bc27-534de830144b}]}}] Timeout: Where:[where column _uuid == {d7c11e7c-4a6b-41fe-87a6-5c70659238bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.520440990+00:00 stderr F I1013 00:21:42.520396 28251 obj_retry.go:541] Creating *v1.Node crc took: 55.969235ms 2025-10-13T00:21:42.520470741+00:00 stderr F I1013 00:21:42.520452 28251 factory.go:988] Added *v1.Node event handler 2 2025-10-13T00:21:42.520501072+00:00 stderr F I1013 00:21:42.520484 28251 ovn.go:449] Starting OVN Service Controller: Using Endpoint Slices 2025-10-13T00:21:42.520791330+00:00 stderr F I1013 00:21:42.520764 28251 services_controller.go:168] Starting controller ovn-lb-controller 2025-10-13T00:21:42.520803560+00:00 stderr F I1013 00:21:42.520797 28251 services_controller.go:176] Waiting for node tracker handler to sync 2025-10-13T00:21:42.520832261+00:00 stderr F I1013 00:21:42.520812 28251 shared_informer.go:311] Waiting for caches to sync for node-tracker-controller 2025-10-13T00:21:42.520853632+00:00 stderr F I1013 00:21:42.520839 28251 node_tracker.go:204] Processing possible switch / router updates for node crc 2025-10-13T00:21:42.520930274+00:00 stderr F I1013 00:21:42.520909 28251 node_tracker.go:165] Node crc switch + router changed, syncing services 2025-10-13T00:21:42.520930274+00:00 stderr F I1013 00:21:42.520922 28251 services_controller.go:519] Full service sync requested 2025-10-13T00:21:42.522013303+00:00 stderr F W1013 00:21:42.521976 28251 base_network_controller_pods.go:88] Already allocated IPs: 10.217.0.55 for pod: openshift-kube-controller-manager_revision-pruner-8-crc in phase: 0xc0009679a8 on switch: crc 2025-10-13T00:21:42.524386477+00:00 stderr F I1013 00:21:42.524351 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-9-crc 2025-10-13T00:21:42.524386477+00:00 stderr F I1013 00:21:42.524339 28251 default_network_controller.go:655] Recording add event on pod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.524386477+00:00 stderr F I1013 00:21:42.524378 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-9-crc 2025-10-13T00:21:42.524404417+00:00 stderr F I1013 00:21:42.524397 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.524438698+00:00 stderr F I1013 00:21:42.524415 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-operators-hkptr in node crc 2025-10-13T00:21:42.524438698+00:00 stderr F I1013 00:21:42.524428 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-12-crc 2025-10-13T00:21:42.524446458+00:00 stderr F I1013 00:21:42.524439 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.524467969+00:00 stderr F I1013 00:21:42.524454 28251 default_network_controller.go:655] Recording add event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.524467969+00:00 stderr F I1013 00:21:42.524457 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-12-crc 2025-10-13T00:21:42.524508230+00:00 stderr F I1013 00:21:42.524467 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.524508230+00:00 stderr F 
I1013 00:21:42.524475 28251 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-12-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.524508230+00:00 stderr F I1013 00:21:42.524481 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.524508230+00:00 stderr F I1013 00:21:42.524482 28251 default_network_controller.go:655] Recording add event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.524508230+00:00 stderr F I1013 00:21:42.524490 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/installer-13-crc in node crc 2025-10-13T00:21:42.524508230+00:00 stderr F I1013 00:21:42.524493 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.524508230+00:00 stderr F I1013 00:21:42.524499 28251 port_cache.go:122] port-cache(openshift-kube-apiserver_installer-12-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.524518840+00:00 stderr F I1013 00:21:42.524507 28251 base_network_controller_pods.go:476] [default/openshift-kube-apiserver/installer-13-crc] creating logical port openshift-kube-apiserver_installer-13-crc for pod on switch crc 2025-10-13T00:21:42.524518840+00:00 stderr F I1013 00:21:42.524509 28251 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-12-crc 2025-10-13T00:21:42.524531020+00:00 stderr F I1013 00:21:42.524518 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.524531020+00:00 stderr F I1013 00:21:42.524525 28251 default_network_controller.go:655] Recording add event on pod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.524538611+00:00 stderr F I1013 00:21:42.524528 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.524546801+00:00 stderr F I1013 00:21:42.524540 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.524586792+00:00 stderr F I1013 00:21:42.524551 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in node crc 2025-10-13T00:21:42.524586792+00:00 stderr F I1013 00:21:42.524548 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.524586792+00:00 stderr F I1013 00:21:42.524565 28251 default_network_controller.go:655] Recording add event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.524586792+00:00 stderr F I1013 00:21:42.524574 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.524586792+00:00 stderr F I1013 00:21:42.524576 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in node crc 2025-10-13T00:21:42.524597552+00:00 stderr F I1013 00:21:42.524575 28251 default_network_controller.go:655] Recording add event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.524606253+00:00 stderr F I1013 00:21:42.524595 28251 default_network_controller.go:655] Recording add event 
on pod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.524606253+00:00 stderr F I1013 00:21:42.524454 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-operators-hkptr] creating logical port openshift-marketplace_redhat-operators-hkptr for pod on switch crc 2025-10-13T00:21:42.524614553+00:00 stderr F I1013 00:21:42.524604 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.524614553+00:00 stderr F I1013 00:21:42.524611 28251 ovn.go:134] Ensuring zone local for Pod openshift-console/downloads-65476884b9-9wcvx in node crc 2025-10-13T00:21:42.524622503+00:00 stderr F I1013 00:21:42.524616 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.524630493+00:00 stderr F I1013 00:21:42.524621 28251 base_network_controller_pods.go:476] [default/openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] creating logical port openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 for pod on switch crc 2025-10-13T00:21:42.524638743+00:00 stderr F I1013 00:21:42.524585 28251 ovn.go:134] Ensuring zone local for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in node crc 2025-10-13T00:21:42.524638743+00:00 stderr F I1013 00:21:42.524633 28251 base_network_controller_pods.go:476] [default/openshift-console/downloads-65476884b9-9wcvx] creating logical port openshift-console_downloads-65476884b9-9wcvx for pod on switch crc 2025-10-13T00:21:42.524668284+00:00 stderr F I1013 00:21:42.524641 28251 obj_retry.go:541] Creating *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv took: 56.861µs 2025-10-13T00:21:42.524668284+00:00 stderr F I1013 00:21:42.524648 28251 default_network_controller.go:699] Recording success event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.524668284+00:00 stderr F I1013 00:21:42.524658 28251 default_network_controller.go:655] Recording add event on pod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.524680665+00:00 stderr F W1013 00:21:42.524657 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-apiserver/installer-12-crc. Using logical switch crc port uuid and addrs [10.217.0.86/23] 2025-10-13T00:21:42.524728076+00:00 stderr F I1013 00:21:42.524698 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress/router-default-5c9bf7bc58-6jctv. OVN-Kubernetes controller took 8.1612e-05 seconds. No OVN measurement. 
2025-10-13T00:21:42.524728076+00:00 stderr F I1013 00:21:42.524695 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:b0a4ec02-9b6b-400a-9633-c11280799f07 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92cd2fae-7041-4269-9f12-7b10f07cb6aa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524728076+00:00 stderr F I1013 00:21:42.524553 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg in node crc 2025-10-13T00:21:42.524763457+00:00 stderr F I1013 00:21:42.524735 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524763457+00:00 stderr F I1013 00:21:42.524747 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/marketplace-operator-8b455464d-29pzg] creating logical port openshift-marketplace_marketplace-operator-8b455464d-29pzg for pod on switch crc 2025-10-13T00:21:42.524763457+00:00 stderr F I1013 00:21:42.524754 28251 address_set.go:613] (6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) deleting addresses [10.217.0.86] from address set 2025-10-13T00:21:42.524853859+00:00 stderr F I1013 00:21:42.524797 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524853859+00:00 stderr F I1013 00:21:42.524819 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:d3fa047a-b670-4067-b07b-06d9a1d3dbb1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02184096-16c6-4236-bbfe-4ffdc2c35434}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524866709+00:00 stderr F I1013 00:21:42.524826 28251 default_network_controller.go:655] Recording add event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.524906291+00:00 stderr F I1013 00:21:42.524872 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524941852+00:00 stderr F I1013 00:21:42.524666 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.524941852+00:00 stderr F I1013 00:21:42.524889 28251 
model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524952722+00:00 stderr F I1013 00:21:42.524944 28251 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-l92hr in node crc 2025-10-13T00:21:42.524952722+00:00 stderr F I1013 00:21:42.524937 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524961282+00:00 stderr F I1013 00:21:42.524932 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.524968472+00:00 stderr F I1013 00:21:42.524958 28251 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/node-ca-l92hr took: 35.201µs 2025-10-13T00:21:42.524976442+00:00 stderr F I1013 00:21:42.524471 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.524976442+00:00 stderr F I1013 00:21:42.524971 28251 default_network_controller.go:699] Recording success event on pod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.524983373+00:00 stderr F I1013 00:21:42.524961 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525018214+00:00 stderr F I1013 00:21:42.524986 28251 default_network_controller.go:655] Recording add event on pod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.525018214+00:00 stderr F I1013 00:21:42.524981 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525018214+00:00 stderr F I1013 00:21:42.524820 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.86]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-10-13T00:21:42.525034104+00:00 stderr F I1013 00:21:42.524992 28251 ovn.go:134] Ensuring zone local for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in node crc 2025-10-13T00:21:42.525058625+00:00 stderr F I1013 00:21:42.525033 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525058625+00:00 stderr F I1013 00:21:42.525051 28251 base_network_controller_pods.go:476] [default/openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] creating logical port openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs for pod on switch crc 2025-10-13T00:21:42.525067305+00:00 stderr F I1013 00:21:42.524625 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.525107876+00:00 stderr F I1013 00:21:42.525073 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in node crc 2025-10-13T00:21:42.525107876+00:00 stderr F I1013 00:21:42.524508 28251 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in node crc 2025-10-13T00:21:42.525107876+00:00 stderr F I1013 00:21:42.525100 28251 base_network_controller_pods.go:476] [default/openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] creating logical port openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 for pod on switch crc 2025-10-13T00:21:42.525196618+00:00 stderr F I1013 00:21:42.525159 28251 base_network_controller_pods.go:476] [default/openshift-controller-manager/controller-manager-778975cc4f-x5vcf] creating logical port openshift-controller-manager_controller-manager-778975cc4f-x5vcf for pod on switch crc 2025-10-13T00:21:42.525196618+00:00 stderr F I1013 00:21:42.524888 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.525196618+00:00 stderr F I1013 00:21:42.525192 28251 ovn.go:134] Ensuring zone local for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in node crc 2025-10-13T00:21:42.525206889+00:00 stderr F I1013 00:21:42.525200 28251 obj_retry.go:541] Creating *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 took: 12.58µs 2025-10-13T00:21:42.525213539+00:00 stderr F I1013 00:21:42.525207 28251 default_network_controller.go:699] Recording success event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.525222219+00:00 stderr F I1013 00:21:42.525217 28251 default_network_controller.go:655] Recording add event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.525222219+00:00 stderr F I1013 00:21:42.524945 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:c3d30d24-1dab-4362-a72b-dd6762f1f84c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {209ddb87-f880-45d7-bb2d-0426a70d7f75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525230769+00:00 stderr F I1013 00:21:42.525226 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.525242110+00:00 stderr F I1013 00:21:42.525236 28251 ovn.go:134] Ensuring zone local for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in node crc 2025-10-13T00:21:42.525278531+00:00 stderr F I1013 00:21:42.525231 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525278531+00:00 stderr F I1013 00:21:42.525263 28251 base_network_controller_pods.go:476] [default/openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] creating logical port openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 for pod on switch crc 2025-10-13T00:21:42.525308451+00:00 stderr F I1013 00:21:42.525282 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525386323+00:00 stderr F I1013 00:21:42.525292 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525386323+00:00 stderr F I1013 00:21:42.524595 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] creating logical port openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh for pod on switch crc 2025-10-13T00:21:42.525386323+00:00 stderr F I1013 00:21:42.525365 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525453625+00:00 stderr F I1013 00:21:42.525401 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d8ffb80-afa8-491b-8ba3-524ddf157155}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525453625+00:00 stderr F I1013 00:21:42.525341 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 
10.217.0.41]} options:{GoMap:map[iface-id-ver:b0a4ec02-9b6b-400a-9633-c11280799f07 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92cd2fae-7041-4269-9f12-7b10f07cb6aa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525453625+00:00 stderr F I1013 00:21:42.525427 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525487516+00:00 stderr F I1013 00:21:42.525455 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2d8ffb80-afa8-491b-8ba3-524ddf157155}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525533607+00:00 stderr F I1013 00:21:42.525483 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525533607+00:00 stderr F I1013 00:21:42.525507 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
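Each "Configuring OVN:" entry is a list of OVSDB operations (update, mutate, delete) against a named table, constrained by a Where clause on _uuid. A rough, illustrative sketch of that shape, transcribing one transaction from the log above (the Logical_Switch_Port update for 10.217.0.41 plus the ports mutation on the node switch); these structs only mirror the printed fields and are not the real libovsdb or ovn-kubernetes types:

```go
package main

import "fmt"

// Illustrative only: these types mirror the fields printed in the
// "Configuring OVN:" entries (Op, Table, Row, Mutations, Where).
// They are NOT the actual libovsdb/ovn-kubernetes model types.
type Mutation struct {
	Column  string
	Mutator string // "insert", "delete", "+=", ...
	Value   interface{}
}

type Operation struct {
	Op        string // "update", "mutate", "delete"
	Table     string // e.g. "Logical_Switch_Port", "Logical_Switch", "NAT"
	Row       map[string]interface{}
	Mutations []Mutation
	Where     string // e.g. "where column _uuid == {92cd2fae-...}"
}

func main() {
	// Transcribed from the log: set the pod's addresses/port_security on its
	// Logical_Switch_Port, then insert that port into the node switch "crc"
	// by mutating the Logical_Switch ports column.
	txn := []Operation{
		{
			Op:    "update",
			Table: "Logical_Switch_Port",
			Row: map[string]interface{}{
				"addresses":     []string{"0a:58:0a:d9:00:29 10.217.0.41"},
				"port_security": []string{"0a:58:0a:d9:00:29 10.217.0.41"},
			},
			Where: "where column _uuid == {92cd2fae-7041-4269-9f12-7b10f07cb6aa}",
		},
		{
			Op:    "mutate",
			Table: "Logical_Switch",
			Mutations: []Mutation{
				{Column: "ports", Mutator: "insert", Value: "92cd2fae-7041-4269-9f12-7b10f07cb6aa"},
			},
			Where: "where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}",
		},
	}
	for _, op := range txn {
		fmt.Printf("%+v\n", op)
	}
}
```

The NAT updates that follow in the same transactions add per-pod SNAT rows (logical_ip set to the pod IP, external_ip 38.102.83.180) and attach them to the gateway Logical_Router in the same way, via a "nat" column mutation.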
2025-10-13T00:21:42.525562748+00:00 stderr F I1013 00:21:42.525536 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525571118+00:00 stderr F I1013 00:21:42.525550 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525616890+00:00 stderr F I1013 00:21:42.525514 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:d3fa047a-b670-4067-b07b-06d9a1d3dbb1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02184096-16c6-4236-bbfe-4ffdc2c35434}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d8ffb80-afa8-491b-8ba3-524ddf157155}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2d8ffb80-afa8-491b-8ba3-524ddf157155}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525616890+00:00 stderr F I1013 00:21:42.525597 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525638250+00:00 stderr F I1013 00:21:42.525576 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] 
Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525647830+00:00 stderr F I1013 00:21:42.525597 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525699852+00:00 stderr F I1013 00:21:42.525252 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525707602+00:00 stderr F I1013 00:21:42.525648 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525714912+00:00 stderr F I1013 00:21:42.525688 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525785904+00:00 stderr F I1013 00:21:42.525748 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] 
Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525793594+00:00 stderr F I1013 00:21:42.525763 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525870136+00:00 stderr F I1013 00:21:42.525562 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525870136+00:00 stderr F I1013 00:21:42.525746 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525910008+00:00 stderr F I1013 00:21:42.525878 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525948079+00:00 stderr F I1013 00:21:42.525905 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525948079+00:00 stderr F I1013 00:21:42.525931 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525973579+00:00 stderr F I1013 00:21:42.525945 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.525982089+00:00 stderr F I1013 00:21:42.525966 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526032441+00:00 stderr F I1013 00:21:42.525954 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526032441+00:00 stderr F I1013 00:21:42.526007 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526062232+00:00 stderr F I1013 00:21:42.525031 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526073352+00:00 stderr F I1013 00:21:42.524434 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.526081722+00:00 stderr F I1013 00:21:42.526027 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: 
Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526110273+00:00 stderr F I1013 00:21:42.526080 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.526119073+00:00 stderr F I1013 00:21:42.526096 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in node crc 2025-10-13T00:21:42.526119073+00:00 stderr F I1013 00:21:42.526093 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9c97ad1-ae77-4984-a203-1dc405839a5c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526128033+00:00 stderr F I1013 00:21:42.526120 28251 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] creating logical port openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv for pod on switch crc 2025-10-13T00:21:42.526160444+00:00 stderr F I1013 00:21:42.526138 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9c97ad1-ae77-4984-a203-1dc405839a5c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526172585+00:00 stderr F I1013 00:21:42.525947 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526226116+00:00 stderr F I1013 00:21:42.526163 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:c3d30d24-1dab-4362-a72b-dd6762f1f84c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {209ddb87-f880-45d7-bb2d-0426a70d7f75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid 
== {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9c97ad1-ae77-4984-a203-1dc405839a5c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9c97ad1-ae77-4984-a203-1dc405839a5c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526283088+00:00 stderr F I1013 00:21:42.525073 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.86]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526283088+00:00 stderr F I1013 00:21:42.524599 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.526283088+00:00 stderr F I1013 00:21:42.526276 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-767c585db5-zd56b in node crc 2025-10-13T00:21:42.526309378+00:00 stderr F I1013 00:21:42.526289 28251 obj_retry.go:541] Creating *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b took: 17.751µs 2025-10-13T00:21:42.526309378+00:00 stderr F I1013 00:21:42.524393 28251 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-9-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.526371670+00:00 stderr F I1013 00:21:42.526349 28251 port_cache.go:122] port-cache(openshift-kube-apiserver_installer-9-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.526371670+00:00 stderr F I1013 00:21:42.526360 28251 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-9-crc 2025-10-13T00:21:42.526371670+00:00 stderr F I1013 00:21:42.526335 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526408511+00:00 stderr F I1013 00:21:42.526385 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526441242+00:00 stderr F W1013 00:21:42.526424 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-apiserver/installer-9-crc. Using logical switch crc port uuid and addrs [10.217.0.55/23] 2025-10-13T00:21:42.526441242+00:00 stderr F I1013 00:21:42.526428 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526449052+00:00 stderr F I1013 00:21:42.526319 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526455702+00:00 stderr F I1013 00:21:42.526306 28251 default_network_controller.go:699] Recording success event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.526479153+00:00 stderr F I1013 00:21:42.526466 28251 default_network_controller.go:655] Recording add event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.526479153+00:00 stderr F I1013 00:21:42.526455 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526486713+00:00 stderr F I1013 00:21:42.525002 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.526516614+00:00 stderr F I1013 00:21:42.526497 28251 ovn.go:134] Ensuring zone local for Pod 
openshift-multus/network-metrics-daemon-qdfr4 in node crc 2025-10-13T00:21:42.526516614+00:00 stderr F I1013 00:21:42.526482 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526554305+00:00 stderr F I1013 00:21:42.526541 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.526561365+00:00 stderr F I1013 00:21:42.526552 28251 base_network_controller_pods.go:476] [default/openshift-multus/network-metrics-daemon-qdfr4] creating logical port openshift-multus_network-metrics-daemon-qdfr4 for pod on switch crc 2025-10-13T00:21:42.526561365+00:00 stderr F I1013 00:21:42.526558 28251 ovn.go:134] Ensuring zone local for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in node crc 2025-10-13T00:21:42.526574815+00:00 stderr F I1013 00:21:42.526553 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526595386+00:00 stderr F I1013 00:21:42.526584 28251 base_network_controller_pods.go:476] [default/openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] creating logical port openshift-etcd-operator_etcd-operator-768d5b5d86-722mg for pod on switch crc 2025-10-13T00:21:42.526620217+00:00 stderr F I1013 00:21:42.526595 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526656798+00:00 stderr F I1013 00:21:42.526500 28251 address_set.go:613] (6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) deleting addresses [10.217.0.55] from address set 2025-10-13T00:21:42.526694969+00:00 stderr F I1013 00:21:42.526670 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.55]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526694969+00:00 stderr F I1013 00:21:42.526674 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526702809+00:00 stderr F I1013 00:21:42.526563 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526715629+00:00 stderr F I1013 00:21:42.526648 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.526722919+00:00 stderr F I1013 00:21:42.526705 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526775901+00:00 stderr F I1013 00:21:42.526723 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526787121+00:00 stderr F I1013 00:21:42.526706 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.55]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526793801+00:00 stderr F I1013 00:21:42.526767 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526825432+00:00 stderr F I1013 00:21:42.526494 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526825432+00:00 stderr F I1013 00:21:42.526811 28251 model_client.go:397] Mutate 
operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526879984+00:00 stderr F I1013 00:21:42.526858 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.526879984+00:00 stderr F I1013 00:21:42.526823 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.530038129+00:00 stderr F I1013 00:21:42.529962 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.530114031+00:00 stderr F I1013 00:21:42.530074 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.530233994+00:00 stderr F I1013 00:21:42.530187 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.530961693+00:00 stderr F I1013 00:21:42.530901 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.531398095+00:00 stderr F I1013 00:21:42.531352 28251 port_cache.go:96] port-cache(openshift-kube-apiserver_installer-13-crc): added port &{name:openshift-kube-apiserver_installer-13-crc uuid:92cd2fae-7041-4269-9f12-7b10f07cb6aa logicalSwitch:crc ips:[0xc000b88b70] mac:[10 88 10 217 0 41] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.41/23] and MAC: 0a:58:0a:d9:00:29 2025-10-13T00:21:42.531398095+00:00 stderr F I1013 00:21:42.531391 28251 pods.go:220] [openshift-kube-apiserver/installer-13-crc] addLogicalPort took 6.884345ms, libovsdb time 5.966591ms 2025-10-13T00:21:42.531412416+00:00 stderr F I1013 00:21:42.531401 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver/installer-13-crc took: 6.911216ms 2025-10-13T00:21:42.531412416+00:00 stderr F I1013 00:21:42.531409 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.531436576+00:00 stderr F I1013 00:21:42.531425 28251 default_network_controller.go:655] Recording add event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.531443456+00:00 stderr F I1013 00:21:42.531436 28251 obj_retry.go:502] Add event received for *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.531451717+00:00 stderr F I1013 00:21:42.531447 28251 ovn.go:134] Ensuring zone local for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in node crc 2025-10-13T00:21:42.531476117+00:00 stderr F I1013 00:21:42.531461 28251 base_network_controller_pods.go:476] [default/hostpath-provisioner/csi-hostpathplugin-hvm8g] creating logical port hostpath-provisioner_csi-hostpathplugin-hvm8g for pod on switch crc 2025-10-13T00:21:42.531610141+00:00 stderr F I1013 00:21:42.531554 28251 port_cache.go:96] port-cache(openshift-marketplace_redhat-operators-hkptr): added port &{name:openshift-marketplace_redhat-operators-hkptr uuid:02184096-16c6-4236-bbfe-4ffdc2c35434 logicalSwitch:crc ips:[0xc000e86030] mac:[10 88 10 217 0 34] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.34/23] and MAC: 0a:58:0a:d9:00:22 2025-10-13T00:21:42.531622421+00:00 stderr F I1013 00:21:42.531603 28251 pods.go:220] [openshift-marketplace/redhat-operators-hkptr] addLogicalPort took 7.161923ms, libovsdb time 6.029582ms 2025-10-13T00:21:42.531630751+00:00 stderr F I1013 00:21:42.531625 28251 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/redhat-operators-hkptr took: 7.216634ms 2025-10-13T00:21:42.531637412+00:00 stderr F I1013 00:21:42.531632 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.531656642+00:00 stderr F I1013 00:21:42.531642 28251 default_network_controller.go:655] Recording add event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.531663472+00:00 stderr F 
I1013 00:21:42.531654 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.531670092+00:00 stderr F I1013 00:21:42.531665 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in node crc 2025-10-13T00:21:42.531700843+00:00 stderr F I1013 00:21:42.531687 28251 base_network_controller_pods.go:476] [default/openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] creating logical port openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb for pod on switch crc 2025-10-13T00:21:42.531943730+00:00 stderr F I1013 00:21:42.531890 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.532108934+00:00 stderr F I1013 00:21:42.531964 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.532143125+00:00 stderr F I1013 00:21:42.532084 28251 port_cache.go:96] port-cache(openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7): added port &{name:openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 uuid:6e77fb5d-c04f-467c-9883-8cb59d819d86 logicalSwitch:crc ips:[0xc000e6a450] mac:[10 88 10 217 0 12] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.12/23] and MAC: 0a:58:0a:d9:00:0c 2025-10-13T00:21:42.532186536+00:00 stderr F I1013 00:21:42.532094 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.532198227+00:00 stderr F I1013 00:21:42.532141 28251 pods.go:220] [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] addLogicalPort took 7.540303ms, libovsdb time 6.440994ms 2025-10-13T00:21:42.532204957+00:00 stderr F I1013 00:21:42.532197 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 took: 7.622454ms 2025-10-13T00:21:42.532211897+00:00 stderr F I1013 00:21:42.532206 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.532244278+00:00 stderr F I1013 00:21:42.532209 28251 port_cache.go:96] port-cache(openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7): added port &{name:openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 uuid:f5ecfd58-e886-4b2c-9939-022e7f14b7a7 logicalSwitch:crc ips:[0xc000e78fc0] mac:[10 88 10 217 0 7] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.7/23] and MAC: 0a:58:0a:d9:00:07 2025-10-13T00:21:42.532244278+00:00 stderr F I1013 00:21:42.532240 28251 default_network_controller.go:655] Recording add event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.532254998+00:00 stderr F I1013 00:21:42.532242 28251 pods.go:220] [openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] addLogicalPort took 7.148102ms, libovsdb time 6.229587ms 2025-10-13T00:21:42.532263478+00:00 stderr F I1013 00:21:42.532252 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.532271879+00:00 stderr F I1013 00:21:42.532264 28251 ovn.go:134] Ensuring zone local for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in node crc 2025-10-13T00:21:42.532301609+00:00 stderr F I1013 00:21:42.532286 28251 base_network_controller_pods.go:476] [default/openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] creating logical port openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd for pod on switch crc 2025-10-13T00:21:42.532310730+00:00 stderr F I1013 00:21:42.532253 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.532338460+00:00 stderr F I1013 00:21:42.532252 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 took: 7.181313ms 2025-10-13T00:21:42.532377171+00:00 stderr F I1013 00:21:42.532351 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.532377171+00:00 stderr F I1013 00:21:42.532361 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-10-crc 2025-10-13T00:21:42.532389002+00:00 stderr F I1013 00:21:42.532378 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-10-crc 
2025-10-13T00:21:42.532395662+00:00 stderr F I1013 00:21:42.532386 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.532402282+00:00 stderr F I1013 00:21:42.532315 28251 port_cache.go:96] port-cache(openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8): added port &{name:openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 uuid:99ef3a4b-7858-4c9b-90db-217867afe36a logicalSwitch:crc ips:[0xc000ec8900] mac:[10 88 10 217 0 19] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.19/23] and MAC: 0a:58:0a:d9:00:13 2025-10-13T00:21:42.532408922+00:00 stderr F I1013 00:21:42.532403 28251 pods.go:220] [openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] addLogicalPort took 7.147332ms, libovsdb time 6.235457ms 2025-10-13T00:21:42.532417082+00:00 stderr F I1013 00:21:42.532411 28251 obj_retry.go:541] Creating *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 took: 7.175233ms 2025-10-13T00:21:42.532423663+00:00 stderr F I1013 00:21:42.532416 28251 default_network_controller.go:699] Recording success event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.532443403+00:00 stderr F I1013 00:21:42.532424 28251 default_network_controller.go:655] Recording add event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.532450183+00:00 stderr F I1013 00:21:42.532442 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.532456764+00:00 stderr F I1013 00:21:42.532451 28251 ovn.go:134] Ensuring zone local for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in node crc 2025-10-13T00:21:42.532506125+00:00 stderr F I1013 00:21:42.532485 28251 base_network_controller_pods.go:476] [default/openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] creating logical port openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc for pod on switch crc 2025-10-13T00:21:42.532557496+00:00 stderr F I1013 00:21:42.532083 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.532608018+00:00 stderr F I1013 00:21:42.532584 28251 port_cache.go:96] port-cache(openshift-marketplace_marketplace-operator-8b455464d-29pzg): added port &{name:openshift-marketplace_marketplace-operator-8b455464d-29pzg uuid:209ddb87-f880-45d7-bb2d-0426a70d7f75 logicalSwitch:crc ips:[0xc000e784b0] mac:[10 88 10 217 0 30] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.30/23] and MAC: 0a:58:0a:d9:00:1e 2025-10-13T00:21:42.532627618+00:00 stderr F I1013 00:21:42.532610 28251 pods.go:220] [openshift-marketplace/marketplace-operator-8b455464d-29pzg] addLogicalPort took 7.871162ms, libovsdb time 6.191307ms 2025-10-13T00:21:42.532634308+00:00 stderr F I1013 00:21:42.532628 28251 obj_retry.go:541] Creating *v1.Pod 
openshift-marketplace/marketplace-operator-8b455464d-29pzg took: 8.076767ms 2025-10-13T00:21:42.532657439+00:00 stderr F I1013 00:21:42.532636 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.532665639+00:00 stderr F I1013 00:21:42.532660 28251 default_network_controller.go:655] Recording add event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.532678060+00:00 stderr F I1013 00:21:42.532673 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.532700560+00:00 stderr F I1013 00:21:42.532683 28251 ovn.go:134] Ensuring zone local for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in node crc 2025-10-13T00:21:42.532720391+00:00 stderr F I1013 00:21:42.532709 28251 base_network_controller_pods.go:476] [default/openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] creating logical port openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t for pod on switch crc 2025-10-13T00:21:42.532771302+00:00 stderr F I1013 00:21:42.532743 28251 port_cache.go:96] port-cache(openshift-console_downloads-65476884b9-9wcvx): added port &{name:openshift-console_downloads-65476884b9-9wcvx uuid:745a40f7-2acc-4e2b-a087-861e0ea97ffe logicalSwitch:crc ips:[0xc000640e40] mac:[10 88 10 217 0 66] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.66/23] and MAC: 0a:58:0a:d9:00:42 2025-10-13T00:21:42.532779972+00:00 stderr F I1013 00:21:42.532772 28251 pods.go:220] [openshift-console/downloads-65476884b9-9wcvx] addLogicalPort took 8.145589ms, libovsdb time 5.702353ms 2025-10-13T00:21:42.532787412+00:00 stderr F I1013 00:21:42.532781 28251 obj_retry.go:541] Creating *v1.Pod openshift-console/downloads-65476884b9-9wcvx took: 8.169259ms 2025-10-13T00:21:42.532795573+00:00 stderr F I1013 00:21:42.532788 28251 default_network_controller.go:699] Recording success event on pod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.532818053+00:00 stderr F I1013 00:21:42.532803 28251 default_network_controller.go:655] Recording add event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.532828794+00:00 stderr F I1013 00:21:42.532816 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.532838764+00:00 stderr F I1013 00:21:42.532832 28251 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in node crc 2025-10-13T00:21:42.532860294+00:00 stderr F I1013 00:21:42.532848 28251 base_network_controller_pods.go:476] [default/openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] creating logical port openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 for pod on switch crc 2025-10-13T00:21:42.532867805+00:00 stderr F I1013 00:21:42.532850 28251 port_cache.go:96] port-cache(openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs): added port &{name:openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs uuid:c2174bce-e1da-468b-aa60-b9409f80c104 logicalSwitch:crc ips:[0xc000c8b410] mac:[10 88 10 217 0 88] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.88/23] and MAC: 0a:58:0a:d9:00:58 2025-10-13T00:21:42.532900516+00:00 stderr F I1013 00:21:42.532884 28251 pods.go:220] 
[openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] addLogicalPort took 7.849111ms, libovsdb time 6.281699ms 2025-10-13T00:21:42.532907886+00:00 stderr F I1013 00:21:42.532880 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.532947497+00:00 stderr F I1013 00:21:42.532897 28251 obj_retry.go:541] Creating *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs took: 7.914513ms 2025-10-13T00:21:42.532956667+00:00 stderr F I1013 00:21:42.532945 28251 default_network_controller.go:699] Recording success event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.532969517+00:00 stderr F I1013 00:21:42.532958 28251 default_network_controller.go:655] Recording add event on pod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.532980458+00:00 stderr F I1013 00:21:42.532973 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.533008728+00:00 stderr F I1013 00:21:42.532987 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-7xghp in node crc 2025-10-13T00:21:42.533018579+00:00 stderr F I1013 00:21:42.531030 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533018579+00:00 stderr F I1013 00:21:42.532919 28251 port_cache.go:96] port-cache(openshift-controller-manager_controller-manager-778975cc4f-x5vcf): added port &{name:openshift-controller-manager_controller-manager-778975cc4f-x5vcf uuid:eda38bc9-7da5-4a6b-818c-4e1e8f85426d logicalSwitch:crc ips:[0xc000eba600] mac:[10 88 10 217 0 87] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.87/23] and MAC: 0a:58:0a:d9:00:57 2025-10-13T00:21:42.533048620+00:00 stderr F I1013 00:21:42.533026 28251 pods.go:220] [openshift-controller-manager/controller-manager-778975cc4f-x5vcf] addLogicalPort took 7.897203ms, libovsdb time 5.900329ms 2025-10-13T00:21:42.533048620+00:00 stderr F I1013 00:21:42.533041 28251 obj_retry.go:541] Creating *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf took: 8.537939ms 2025-10-13T00:21:42.533059000+00:00 stderr F I1013 00:21:42.533047 28251 default_network_controller.go:699] Recording success event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.533059000+00:00 stderr F I1013 00:21:42.533055 28251 default_network_controller.go:655] Recording add event on pod openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.533086641+00:00 stderr F I1013 00:21:42.533063 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.533086641+00:00 stderr F I1013 00:21:42.533083 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/community-operators-gjctm in node crc 2025-10-13T00:21:42.533107281+00:00 stderr 
F I1013 00:21:42.533065 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533150002+00:00 stderr F I1013 00:21:42.533114 28251 port_cache.go:96] port-cache(openshift-multus_network-metrics-daemon-qdfr4): added port &{name:openshift-multus_network-metrics-daemon-qdfr4 uuid:3564ddfd-a311-4df3-b5d0-1e76294b4ab0 logicalSwitch:crc ips:[0xc001258120] mac:[10 88 10 217 0 3] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.3/23] and MAC: 0a:58:0a:d9:00:03 2025-10-13T00:21:42.533150002+00:00 stderr F I1013 00:21:42.533144 28251 pods.go:220] [openshift-multus/network-metrics-daemon-qdfr4] addLogicalPort took 6.606708ms, libovsdb time 967.046µs 2025-10-13T00:21:42.533159252+00:00 stderr F I1013 00:21:42.533152 28251 obj_retry.go:541] Creating *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 took: 6.661339ms 2025-10-13T00:21:42.533166603+00:00 stderr F I1013 00:21:42.533158 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.533175513+00:00 stderr F I1013 00:21:42.533164 28251 default_network_controller.go:655] Recording add event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.533175513+00:00 stderr F I1013 00:21:42.533171 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.533207194+00:00 stderr F I1013 00:21:42.533178 28251 ovn.go:134] Ensuring zone local for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in node crc 2025-10-13T00:21:42.533207194+00:00 stderr F I1013 00:21:42.533200 28251 base_network_controller_pods.go:476] [default/openshift-service-ca/service-ca-666f99b6f-kk8kg] creating logical port openshift-service-ca_service-ca-666f99b6f-kk8kg for pod on switch crc 2025-10-13T00:21:42.533246775+00:00 stderr F I1013 00:21:42.533117 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533328487+00:00 stderr F I1013 00:21:42.533276 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533428410+00:00 stderr F I1013 00:21:42.533397 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533467981+00:00 stderr F I1013 00:21:42.533427 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533522592+00:00 stderr F I1013 00:21:42.533494 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533522592+00:00 stderr F I1013 00:21:42.533001 28251 obj_retry.go:541] Creating *v1.Pod openshift-network-node-identity/network-node-identity-7xghp took: 19.68µs 2025-10-13T00:21:42.533530042+00:00 stderr F I1013 00:21:42.533524 28251 default_network_controller.go:699] Recording success event on pod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.533549383+00:00 stderr F I1013 00:21:42.533537 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2025-10-13T00:21:42.533556143+00:00 stderr F I1013 00:21:42.533549 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2025-10-13T00:21:42.533564353+00:00 stderr F I1013 00:21:42.533557 28251 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.533584834+00:00 stderr F I1013 00:21:42.533100 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/community-operators-gjctm] creating logical port openshift-marketplace_community-operators-gjctm for pod on switch crc 2025-10-13T00:21:42.533584834+00:00 stderr F I1013 00:21:42.533493 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533584834+00:00 stderr F I1013 00:21:42.533581 28251 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j): port not found in cache or already marked for removal 2025-10-13T00:21:42.533593044+00:00 stderr F I1013 00:21:42.533588 28251 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2025-10-13T00:21:42.533636945+00:00 stderr F I1013 00:21:42.533598 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.533769499+00:00 stderr F I1013 00:21:42.533738 28251 port_cache.go:96] port-cache(openshift-etcd-operator_etcd-operator-768d5b5d86-722mg): added port &{name:openshift-etcd-operator_etcd-operator-768d5b5d86-722mg uuid:e834ded8-9d5b-46e7-b962-1ee96928bab4 logicalSwitch:crc ips:[0xc0013bb080] mac:[10 88 10 217 0 8] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.8/23] and MAC: 0a:58:0a:d9:00:08 2025-10-13T00:21:42.533769499+00:00 stderr F I1013 00:21:42.533761 28251 pods.go:220] [openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] addLogicalPort took 7.183994ms, libovsdb time 613.567µs 2025-10-13T00:21:42.533778749+00:00 stderr F I1013 00:21:42.533773 28251 obj_retry.go:541] Creating *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg took: 7.217634ms 2025-10-13T00:21:42.533785399+00:00 stderr F I1013 00:21:42.533781 28251 default_network_controller.go:699] Recording success event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.533809270+00:00 stderr F I1013 00:21:42.533796 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw 2025-10-13T00:21:42.533816120+00:00 stderr F I1013 00:21:42.533808 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw 2025-10-13T00:21:42.533822730+00:00 stderr F I1013 00:21:42.533815 28251 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.533840971+00:00 stderr F I1013 00:21:42.533828 28251 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw): port not found in cache or already marked for removal 2025-10-13T00:21:42.533840971+00:00 stderr F I1013 00:21:42.533835 28251 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw 2025-10-13T00:21:42.533907153+00:00 stderr F W1013 00:21:42.533891 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw. Using logical switch crc port uuid and addrs [10.217.0.28/23] 2025-10-13T00:21:42.534012425+00:00 stderr F I1013 00:21:42.533992 28251 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.28] from address set 2025-10-13T00:21:42.534072717+00:00 stderr F I1013 00:21:42.534036 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.28]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534123258+00:00 stderr F I1013 00:21:42.534092 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.28]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534218431+00:00 stderr F W1013 00:21:42.534199 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j. 
Using logical switch crc port uuid and addrs [10.217.0.89/23] 2025-10-13T00:21:42.534276403+00:00 stderr F I1013 00:21:42.534260 28251 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.89] from address set 2025-10-13T00:21:42.534301423+00:00 stderr F I1013 00:21:42.534262 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534374215+00:00 stderr F I1013 00:21:42.533751 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534434487+00:00 stderr F I1013 00:21:42.534401 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.89]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534451187+00:00 stderr F I1013 00:21:42.534427 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534521789+00:00 stderr F I1013 00:21:42.534459 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.89]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534629022+00:00 stderr F I1013 00:21:42.534608 28251 ovs.go:162] Exec(18): stdout: "" 2025-10-13T00:21:42.534629022+00:00 stderr F I1013 00:21:42.534623 28251 ovs.go:163] Exec(18): stderr: "" 2025-10-13T00:21:42.534782636+00:00 stderr F I1013 00:21:42.534750 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-9-crc, ips: 10.217.0.55 2025-10-13T00:21:42.534782636+00:00 stderr F I1013 00:21:42.534778 28251 default_network_controller.go:655] Recording add event on pod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.534814837+00:00 stderr F I1013 00:21:42.534788 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.534814837+00:00 stderr F I1013 00:21:42.534428 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534861278+00:00 stderr F I1013 00:21:42.534829 28251 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv): added port &{name:openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv uuid:2a5717ea-0a50-4ebb-b087-90e637274a33 logicalSwitch:crc ips:[0xc000e34300] mac:[10 88 10 217 0 25] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.25/23] and MAC: 0a:58:0a:d9:00:19 2025-10-13T00:21:42.534904449+00:00 stderr F I1013 00:21:42.534866 28251 pods.go:220] [openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] addLogicalPort took 8.753275ms, libovsdb time 5.827447ms 2025-10-13T00:21:42.534960821+00:00 stderr F I1013 00:21:42.534924 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv took: 8.799517ms 2025-10-13T00:21:42.534960821+00:00 stderr F I1013 00:21:42.534912 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.534960821+00:00 stderr F I1013 00:21:42.534955 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.535009882+00:00 stderr F I1013 00:21:42.534980 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.535009882+00:00 stderr F I1013 00:21:42.535002 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.535024023+00:00 stderr F I1013 00:21:42.535012 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc 2025-10-13T00:21:42.535024023+00:00 stderr F I1013 00:21:42.535017 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver/kube-apiserver-crc took: 7.97µs 2025-10-13T00:21:42.535024023+00:00 stderr F I1013 00:21:42.535021 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.535034003+00:00 stderr F I1013 00:21:42.535026 28251 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.535096225+00:00 stderr F I1013 00:21:42.535043 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535219288+00:00 stderr F I1013 00:21:42.535174 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535219288+00:00 stderr F I1013 00:21:42.535088 28251 transact.go:42] Configuring OVN: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535266859+00:00 stderr F I1013 00:21:42.535234 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535319101+00:00 stderr F I1013 00:21:42.535296 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-10-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.535319101+00:00 stderr F I1013 00:21:42.535276 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535390012+00:00 stderr F I1013 00:21:42.535370 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-10-crc 2025-10-13T00:21:42.535404773+00:00 stderr F I1013 00:21:42.535286 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate 
Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535404773+00:00 stderr F I1013 00:21:42.535368 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535501585+00:00 stderr F I1013 00:21:42.535409 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.535501585+00:00 stderr F W1013 00:21:42.535477 28251 
base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-10-crc. Using logical switch crc port uuid and addrs [10.217.0.69/23] 2025-10-13T00:21:42.535938077+00:00 stderr F I1013 00:21:42.535886 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.536426970+00:00 stderr F I1013 00:21:42.534804 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-marketplace-crk87 in node crc 2025-10-13T00:21:42.536575314+00:00 stderr F I1013 00:21:42.533461 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.536699128+00:00 stderr F I1013 00:21:42.536416 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-marketplace-crk87] creating logical port openshift-marketplace_redhat-marketplace-crk87 for pod on switch crc 2025-10-13T00:21:42.536699128+00:00 stderr F I1013 00:21:42.536597 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.536712528+00:00 stderr F I1013 00:21:42.536694 28251 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.69] from address set 2025-10-13T00:21:42.536805541+00:00 stderr F I1013 00:21:42.536746 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.69]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.536869942+00:00 stderr F I1013 00:21:42.535079 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.536920054+00:00 stderr F I1013 00:21:42.536891 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-server-v65wr in node crc 2025-10-13T00:21:42.536920054+00:00 stderr F I1013 00:21:42.536871 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.536937924+00:00 stderr F I1013 00:21:42.534986 28251 
model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:49cd5dc0-c0e0-4199-93cd-8637bea2739a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.536999096+00:00 stderr F I1013 00:21:42.536967 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537119209+00:00 stderr F I1013 00:21:42.537007 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537224792+00:00 stderr F I1013 00:21:42.537171 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:a783d910-85f5-4f52-8831-6bae329a70fa requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {15739438-58a4-46c0-bcfe-2d9fdf5c37a4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537240932+00:00 stderr F I1013 00:21:42.537229 28251 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw, ips: 10.217.0.28 2025-10-13T00:21:42.537273863+00:00 stderr F I1013 00:21:42.537253 28251 default_network_controller.go:655] Recording add event on pod 
openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.537273863+00:00 stderr F I1013 00:21:42.537242 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537283983+00:00 stderr F I1013 00:21:42.536927 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr took: 44.181µs 2025-10-13T00:21:42.537294244+00:00 stderr F I1013 00:21:42.537288 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.537302484+00:00 stderr F I1013 00:21:42.536866 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.69]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537302484+00:00 stderr F I1013 00:21:42.537299 28251 default_network_controller.go:655] Recording add event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.537352855+00:00 stderr F I1013 00:21:42.537308 28251 ovs.go:159] Exec(19): /usr/bin/ovs-vsctl --timeout=15 -- --if-exists del-port br-int k8s-crc -- --may-exist add-port br-int ovn-k8s-mp0 -- set interface ovn-k8s-mp0 type=internal mtu_request=1400 external-ids:iface-id=k8s-crc 2025-10-13T00:21:42.537352855+00:00 stderr F I1013 00:21:42.536974 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537402647+00:00 stderr F I1013 00:21:42.537336 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537402647+00:00 stderr F I1013 00:21:42.536798 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537454228+00:00 stderr F I1013 00:21:42.537390 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537481809+00:00 stderr F I1013 00:21:42.537439 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537498159+00:00 stderr F I1013 00:21:42.537476 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537594582+00:00 stderr F I1013 00:21:42.537274 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.537606452+00:00 stderr F I1013 00:21:42.537598 28251 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in node crc 2025-10-13T00:21:42.537641523+00:00 stderr F I1013 00:21:42.537623 28251 base_network_controller_pods.go:476] [default/openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] creating logical port openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv for pod on switch crc 2025-10-13T00:21:42.537668184+00:00 stderr F I1013 00:21:42.537316 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.537677674+00:00 stderr F I1013 00:21:42.537666 28251 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in node crc 2025-10-13T00:21:42.537687714+00:00 stderr F I1013 00:21:42.537680 28251 base_network_controller_pods.go:476] [default/openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] creating logical port openshift-console-operator_console-operator-5dbbc74dc9-cp5cd for pod on switch crc 2025-10-13T00:21:42.537799657+00:00 stderr F I1013 00:21:42.537725 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.537865879+00:00 stderr F I1013 00:21:42.537822 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.538146597+00:00 stderr F I1013 00:21:42.538058 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] 
Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.538772613+00:00 stderr F I1013 00:21:42.538730 28251 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j, ips: 10.217.0.89 2025-10-13T00:21:42.538772613+00:00 stderr F I1013 00:21:42.538714 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.538772613+00:00 stderr F I1013 00:21:42.538753 28251 default_network_controller.go:655] Recording add event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.538797844+00:00 stderr F I1013 00:21:42.538769 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.538797844+00:00 stderr F I1013 00:21:42.538770 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.538797844+00:00 stderr F I1013 00:21:42.538779 28251 ovn.go:134] Ensuring zone local for Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in node crc 2025-10-13T00:21:42.538853986+00:00 stderr F I1013 00:21:42.538812 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.539119443+00:00 stderr F I1013 00:21:42.539078 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.539141073+00:00 stderr F I1013 00:21:42.539117 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.539220676+00:00 stderr F I1013 00:21:42.539153 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.539553214+00:00 stderr F I1013 00:21:42.539511 28251 base_network_controller_pods.go:476] [default/openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] creating logical port openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m for pod on switch crc 2025-10-13T00:21:42.539888583+00:00 stderr F I1013 00:21:42.539819 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.539896494+00:00 stderr F I1013 00:21:42.539885 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-12-crc, ips: 10.217.0.86 2025-10-13T00:21:42.539927355+00:00 stderr F I1013 00:21:42.539901 28251 default_network_controller.go:655] Recording add event on pod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.539927355+00:00 stderr F I1013 00:21:42.539914 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.539927355+00:00 stderr F I1013 00:21:42.539921 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-v54bt in node crc 2025-10-13T00:21:42.539936865+00:00 stderr F I1013 00:21:42.539906 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.539936865+00:00 stderr F I1013 00:21:42.539934 28251 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-target-v54bt] creating logical port openshift-network-diagnostics_network-check-target-v54bt for pod on switch crc 2025-10-13T00:21:42.540084389+00:00 stderr F I1013 00:21:42.540016 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540108419+00:00 stderr F I1013 00:21:42.540079 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540138090+00:00 stderr F I1013 00:21:42.540113 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540180081+00:00 stderr F I1013 00:21:42.540155 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540515360+00:00 stderr F I1013 00:21:42.540476 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540527201+00:00 stderr F I1013 00:21:42.540509 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540580782+00:00 stderr F I1013 00:21:42.540521 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540745186+00:00 stderr F I1013 00:21:42.540692 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.540745186+00:00 stderr F I1013 00:21:42.540734 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh): added port &{name:openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh uuid:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749 logicalSwitch:crc ips:[0xc000e4ad50] mac:[10 88 10 217 0 14] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.14/23] and MAC: 0a:58:0a:d9:00:0e 2025-10-13T00:21:42.540757237+00:00 stderr F I1013 00:21:42.540748 28251 pods.go:220] [openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] addLogicalPort took 16.169825ms, libovsdb time 5.643342ms 2025-10-13T00:21:42.540764477+00:00 stderr F I1013 00:21:42.540755 28251 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh took: 16.209866ms 2025-10-13T00:21:42.540764477+00:00 stderr F I1013 00:21:42.540760 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.540771877+00:00 stderr F I1013 00:21:42.540767 28251 default_network_controller.go:655] Recording add event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.540779107+00:00 stderr F I1013 00:21:42.540773 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.540786018+00:00 stderr F I1013 00:21:42.540780 28251 ovn.go:134] Ensuring zone local for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in node crc 2025-10-13T00:21:42.540795278+00:00 stderr F I1013 00:21:42.540789 28251 base_network_controller_pods.go:476] [default/openshift-dns-operator/dns-operator-75f687757b-nz2xb] creating logical port openshift-dns-operator_dns-operator-75f687757b-nz2xb for pod on switch crc 2025-10-13T00:21:42.540853909+00:00 stderr F I1013 00:21:42.537512 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} 
options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541090716+00:00 stderr F I1013 00:21:42.541055 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541090716+00:00 stderr F I1013 00:21:42.541065 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541101876+00:00 stderr F I1013 00:21:42.541089 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541166408+00:00 stderr F I1013 00:21:42.541102 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541245600+00:00 stderr F I1013 00:21:42.541120 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541406794+00:00 stderr F I1013 00:21:42.541271 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541494657+00:00 stderr F I1013 00:21:42.541457 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.541596129+00:00 stderr F I1013 00:21:42.541565 28251 port_cache.go:96] port-cache(openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc): added port &{name:openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc uuid:d2f291e9-b4fe-47a7-a644-298254d226c5 logicalSwitch:crc ips:[0xc0015faed0] mac:[10 88 10 217 0 23] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.23/23] and MAC: 0a:58:0a:d9:00:17 2025-10-13T00:21:42.541606970+00:00 stderr F I1013 00:21:42.541591 28251 pods.go:220] [openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] addLogicalPort took 9.133425ms, libovsdb time 6.265448ms 2025-10-13T00:21:42.541606970+00:00 stderr F I1013 00:21:42.541602 28251 obj_retry.go:541] Creating *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc took: 9.151036ms 2025-10-13T00:21:42.541615940+00:00 stderr F I1013 00:21:42.541609 28251 default_network_controller.go:699] Recording success event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.541647081+00:00 stderr F I1013 00:21:42.541633 28251 default_network_controller.go:655] Recording add event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.541656681+00:00 stderr F I1013 00:21:42.541647 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.541664921+00:00 stderr F I1013 00:21:42.541658 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in node crc 2025-10-13T00:21:42.541684792+00:00 stderr F I1013 00:21:42.541671 28251 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] creating logical port openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 for pod on switch crc 2025-10-13T00:21:42.541715183+00:00 stderr F I1013 00:21:42.541368 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1c699f2-1900-4069-955e-c9c52ca0d0fd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541781124+00:00 stderr F I1013 00:21:42.541748 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f1c699f2-1900-4069-955e-c9c52ca0d0fd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541814585+00:00 stderr F I1013 00:21:42.541784 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541846386+00:00 stderr F I1013 00:21:42.541825 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] 
Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541881477+00:00 stderr F I1013 00:21:42.541782 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:a783d910-85f5-4f52-8831-6bae329a70fa requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {15739438-58a4-46c0-bcfe-2d9fdf5c37a4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1c699f2-1900-4069-955e-c9c52ca0d0fd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f1c699f2-1900-4069-955e-c9c52ca0d0fd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541881477+00:00 stderr F I1013 00:21:42.541842 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541895657+00:00 stderr F I1013 00:21:42.541859 28251 model_client.go:381] Update operations generated as: 
[{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541937409+00:00 stderr F I1013 00:21:42.541913 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.541996260+00:00 stderr F I1013 00:21:42.541973 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542257107+00:00 stderr F I1013 00:21:42.542212 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542269897+00:00 stderr F I1013 00:21:42.542248 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542269897+00:00 stderr F I1013 00:21:42.542242 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542313289+00:00 stderr F I1013 00:21:42.542285 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542313289+00:00 stderr F I1013 00:21:42.542295 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542480053+00:00 stderr F I1013 00:21:42.542433 
28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542480053+00:00 stderr F I1013 00:21:42.542335 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:49cd5dc0-c0e0-4199-93cd-8637bea2739a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542530934+00:00 stderr F I1013 00:21:42.542497 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542652948+00:00 stderr F I1013 00:21:42.542561 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542751360+00:00 stderr F I1013 00:21:42.542721 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542780691+00:00 stderr F I1013 00:21:42.542756 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542818622+00:00 stderr F I1013 00:21:42.542791 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542847123+00:00 stderr F I1013 00:21:42.542818 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.542910835+00:00 stderr F I1013 00:21:42.542850 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: 
Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543082999+00:00 stderr F I1013 00:21:42.543039 28251 port_cache.go:96] port-cache(openshift-service-ca_service-ca-666f99b6f-kk8kg): added port &{name:openshift-service-ca_service-ca-666f99b6f-kk8kg uuid:9409cb25-8c46-46db-98ab-5eafe9669ef8 logicalSwitch:crc ips:[0xc0012cc540] mac:[10 88 10 217 0 40] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.40/23] and MAC: 0a:58:0a:d9:00:28 2025-10-13T00:21:42.543082999+00:00 stderr F I1013 00:21:42.543056 28251 pods.go:220] [openshift-service-ca/service-ca-666f99b6f-kk8kg] addLogicalPort took 9.860126ms, libovsdb time 7.626255ms 2025-10-13T00:21:42.543082999+00:00 stderr F I1013 00:21:42.543064 28251 obj_retry.go:541] Creating *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg took: 9.885866ms 2025-10-13T00:21:42.543082999+00:00 stderr F I1013 00:21:42.543070 28251 default_network_controller.go:699] Recording success event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.543082999+00:00 stderr F I1013 00:21:42.543076 28251 default_network_controller.go:655] Recording add event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.543097720+00:00 stderr F I1013 00:21:42.543082 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.543097720+00:00 stderr F I1013 00:21:42.543089 28251 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in node crc 2025-10-13T00:21:42.543107170+00:00 stderr F I1013 00:21:42.543098 28251 base_network_controller_pods.go:476] [default/openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] creating logical port openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z for pod on switch crc 2025-10-13T00:21:42.543203623+00:00 stderr F I1013 00:21:42.543162 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543312916+00:00 stderr F I1013 00:21:42.543280 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543346246+00:00 stderr F I1013 00:21:42.543316 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543396348+00:00 stderr F I1013 00:21:42.543370 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543674795+00:00 stderr F I1013 00:21:42.543616 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543719627+00:00 stderr F I1013 00:21:42.543680 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543780128+00:00 stderr F I1013 00:21:42.543735 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-10-crc, ips: 10.217.0.69 2025-10-13T00:21:42.543787458+00:00 stderr F I1013 00:21:42.543710 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543797709+00:00 stderr F I1013 00:21:42.543790 28251 default_network_controller.go:655] Recording add event on pod openshift-image-registry/image-pruner-29338560-zvlxb 2025-10-13T00:21:42.543836020+00:00 stderr F I1013 00:21:42.543812 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/image-pruner-29338560-zvlxb 2025-10-13T00:21:42.543836020+00:00 stderr F I1013 00:21:42.543822 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543843620+00:00 stderr F I1013 00:21:42.543832 28251 obj_retry.go:459] Detected object openshift-image-registry/image-pruner-29338560-zvlxb of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.543876901+00:00 stderr F I1013 00:21:42.543855 28251 port_cache.go:122] port-cache(openshift-image-registry_image-pruner-29338560-zvlxb): port not found in cache or already marked for removal 2025-10-13T00:21:42.543876901+00:00 stderr F I1013 00:21:42.543852 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543876901+00:00 stderr F I1013 00:21:42.543869 28251 pods.go:151] Deleting pod: openshift-image-registry/image-pruner-29338560-zvlxb 2025-10-13T00:21:42.543923472+00:00 stderr F I1013 00:21:42.543871 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.543946023+00:00 stderr F W1013 00:21:42.543926 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-image-registry/image-pruner-29338560-zvlxb. Using logical switch crc port uuid and addrs [10.217.0.27/23] 2025-10-13T00:21:42.544076216+00:00 stderr F I1013 00:21:42.544041 28251 address_set.go:613] (3fbe9c32-c447-4e1c-9b34-6fac7dd25149/default-network-controller:Namespace:openshift-image-registry:v4/a65811733811199347) deleting addresses [10.217.0.27] from address set 2025-10-13T00:21:42.544135528+00:00 stderr F I1013 00:21:42.544096 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.27]}}] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544169309+00:00 stderr F I1013 00:21:42.542772 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544203330+00:00 stderr F I1013 00:21:42.544163 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.27]}}] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544303902+00:00 stderr F I1013 00:21:42.544173 28251 port_cache.go:96] port-cache(openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd): added port &{name:openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd uuid:8b4158c3-d859-42e6-8259-b16ce1cbd284 logicalSwitch:crc ips:[0xc00116e180] mac:[10 88 10 217 0 39] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.39/23] 
and MAC: 0a:58:0a:d9:00:27 2025-10-13T00:21:42.544303902+00:00 stderr F I1013 00:21:42.544289 28251 pods.go:220] [openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] addLogicalPort took 12.009183ms, libovsdb time 6.65689ms 2025-10-13T00:21:42.544303902+00:00 stderr F I1013 00:21:42.544298 28251 obj_retry.go:541] Creating *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd took: 12.034614ms 2025-10-13T00:21:42.544313052+00:00 stderr F I1013 00:21:42.544304 28251 default_network_controller.go:699] Recording success event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.544320163+00:00 stderr F I1013 00:21:42.544310 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.544368724+00:00 stderr F I1013 00:21:42.544320 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.544368724+00:00 stderr F I1013 00:21:42.544339 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in node crc 2025-10-13T00:21:42.544368724+00:00 stderr F I1013 00:21:42.544349 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] creating logical port openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf for pod on switch crc 2025-10-13T00:21:42.544368724+00:00 stderr F I1013 00:21:42.544322 28251 port_cache.go:96] port-cache(openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb): added port &{name:openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb uuid:805e2f41-6cb8-4ccf-9939-37cfb4fa5509 logicalSwitch:crc ips:[0xc00180c7e0] mac:[10 88 10 217 0 5] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.5/23] and MAC: 0a:58:0a:d9:00:05 2025-10-13T00:21:42.544368724+00:00 stderr F I1013 00:21:42.544363 28251 pods.go:220] [openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] addLogicalPort took 12.690102ms, libovsdb time 3.044201ms 2025-10-13T00:21:42.544400645+00:00 stderr F I1013 00:21:42.544381 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb took: 12.709652ms 2025-10-13T00:21:42.544400645+00:00 stderr F I1013 00:21:42.544394 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.544412415+00:00 stderr F I1013 00:21:42.544404 28251 default_network_controller.go:655] Recording add event on pod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.544419135+00:00 stderr F I1013 00:21:42.544413 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.544427296+00:00 stderr F I1013 00:21:42.544422 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/certified-operators-cms8q in node crc 2025-10-13T00:21:42.544450076+00:00 stderr F I1013 00:21:42.544436 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/certified-operators-cms8q] creating logical port openshift-marketplace_certified-operators-cms8q for pod on switch crc 2025-10-13T00:21:42.544513588+00:00 stderr F I1013 00:21:42.544479 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} 
options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544557729+00:00 stderr F I1013 00:21:42.544532 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544565079+00:00 stderr F I1013 00:21:42.544545 28251 port_cache.go:96] port-cache(openshift-console-operator_console-conversion-webhook-595f9969b-l6z49): added port &{name:openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 uuid:6056bee0-572a-4de7-bb24-40ca6a66be30 logicalSwitch:crc ips:[0xc00180d560] mac:[10 88 10 217 0 61] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.61/23] and MAC: 0a:58:0a:d9:00:3d 2025-10-13T00:21:42.544587600+00:00 stderr F I1013 00:21:42.544568 28251 pods.go:220] [openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] addLogicalPort took 11.727235ms, libovsdb time 2.691182ms 2025-10-13T00:21:42.544594830+00:00 stderr F I1013 00:21:42.544584 28251 obj_retry.go:541] Creating *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 took: 11.751046ms 2025-10-13T00:21:42.544601830+00:00 stderr F I1013 00:21:42.544594 28251 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.544625091+00:00 stderr F I1013 00:21:42.544570 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544625091+00:00 stderr F I1013 00:21:42.544613 28251 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.544632641+00:00 stderr F I1013 00:21:42.544626 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.544662392+00:00 stderr F I1013 00:21:42.544641 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc 2025-10-13T00:21:42.544689083+00:00 stderr F I1013 00:21:42.544665 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc took: 27.161µs 2025-10-13T00:21:42.544689083+00:00 stderr F I1013 00:21:42.544681 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.544698093+00:00 stderr F I1013 00:21:42.544692 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.544729904+00:00 stderr F I1013 00:21:42.544709 28251 obj_retry.go:502] Add event received for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.544738574+00:00 stderr F I1013 00:21:42.544732 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc 2025-10-13T00:21:42.544745254+00:00 stderr F I1013 00:21:42.544633 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:c8f142c0-dc2a-4213-882f-919da8583b03 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {628a7cc8-ec79-470f-b57a-8f42ec584f4b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544752214+00:00 stderr F I1013 00:21:42.544742 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc took: 11.53µs 2025-10-13T00:21:42.544780265+00:00 stderr F I1013 00:21:42.544751 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.544787525+00:00 stderr F I1013 00:21:42.544778 28251 default_network_controller.go:655] Recording add event on pod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.544795965+00:00 stderr F I1013 00:21:42.544790 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.544802646+00:00 stderr F I1013 00:21:42.544785 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.544830756+00:00 stderr F I1013 00:21:42.544801 28251 ovn.go:134] Ensuring zone local for Pod openshift-console/console-644bb77b49-5x5xk in node crc 2025-10-13T00:21:42.544858947+00:00 stderr F I1013 00:21:42.544838 28251 base_network_controller_pods.go:476] [default/openshift-console/console-644bb77b49-5x5xk] creating logical port openshift-console_console-644bb77b49-5x5xk for pod on switch crc 2025-10-13T00:21:42.544890488+00:00 stderr F I1013 00:21:42.544850 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545046762+00:00 stderr F I1013 00:21:42.545006 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545071173+00:00 stderr F I1013 00:21:42.545052 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545121114+00:00 stderr F I1013 00:21:42.545068 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545236947+00:00 stderr F I1013 00:21:42.545180 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545293669+00:00 stderr F I1013 00:21:42.545256 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545302309+00:00 stderr F I1013 00:21:42.545273 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6802ed31-354a-46db-8a22-561c776398db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545356320+00:00 stderr F I1013 00:21:42.544725 28251 port_cache.go:96] port-cache(openshift-marketplace_community-operators-gjctm): added port &{name:openshift-marketplace_community-operators-gjctm uuid:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95 logicalSwitch:crc ips:[0xc0015fbe90] mac:[10 88 10 217 0 35] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.35/23] and MAC: 
0a:58:0a:d9:00:23 2025-10-13T00:21:42.545356320+00:00 stderr F I1013 00:21:42.545323 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6802ed31-354a-46db-8a22-561c776398db}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545432623+00:00 stderr F I1013 00:21:42.545387 28251 port_cache.go:96] port-cache(hostpath-provisioner_csi-hostpathplugin-hvm8g): added port &{name:hostpath-provisioner_csi-hostpathplugin-hvm8g uuid:52259988-af2b-4ee5-bbfe-801c4ebeb0ae logicalSwitch:crc ips:[0xc00177cea0] mac:[10 88 10 217 0 49] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.49/23] and MAC: 0a:58:0a:d9:00:31 2025-10-13T00:21:42.545432623+00:00 stderr F I1013 00:21:42.545371 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:c8f142c0-dc2a-4213-882f-919da8583b03 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {628a7cc8-ec79-470f-b57a-8f42ec584f4b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6802ed31-354a-46db-8a22-561c776398db}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6802ed31-354a-46db-8a22-561c776398db}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545445063+00:00 stderr F I1013 00:21:42.545421 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.545445063+00:00 stderr F I1013 00:21:42.545426 28251 port_cache.go:96] port-cache(openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t): added port &{name:openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t uuid:710ea152-1844-44ad-b1a6-805ec9a3700e logicalSwitch:crc ips:[0xc00180d980] mac:[10 88 10 217 0 45] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.45/23] and MAC: 0a:58:0a:d9:00:2d 2025-10-13T00:21:42.545482764+00:00 stderr F I1013 00:21:42.545450 28251 port_cache.go:96] port-cache(openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv): added 
port &{name:openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv uuid:82630d91-1647-4c0c-aa84-8f820bcf919e logicalSwitch:crc ips:[0xc0018977a0] mac:[10 88 10 217 0 22] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.22/23] and MAC: 0a:58:0a:d9:00:16 2025-10-13T00:21:42.545506435+00:00 stderr F I1013 00:21:42.545480 28251 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-v54bt): added port &{name:openshift-network-diagnostics_network-check-target-v54bt uuid:c0f95133-023f-4bbd-8719-e29d2cfbb32d logicalSwitch:crc ips:[0xc0017f9800] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04 2025-10-13T00:21:42.545519405+00:00 stderr F I1013 00:21:42.545502 28251 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7): added port &{name:openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 uuid:3644fddd-ceae-4a64-8b00-dadf73515945 logicalSwitch:crc ips:[0xc0007ee3c0] mac:[10 88 10 217 0 64] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.64/23] and MAC: 0a:58:0a:d9:00:40 2025-10-13T00:21:42.545551976+00:00 stderr F I1013 00:21:42.545524 28251 pods.go:220] [openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] addLogicalPort took 3.858654ms, libovsdb time 2.846377ms 2025-10-13T00:21:42.545574706+00:00 stderr F I1013 00:21:42.545551 28251 obj_retry.go:541] Creating *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 took: 3.894365ms 2025-10-13T00:21:42.545574706+00:00 stderr F I1013 00:21:42.545568 28251 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.545607937+00:00 stderr F I1013 00:21:42.545587 28251 default_network_controller.go:655] Recording add event on pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.545629088+00:00 stderr F I1013 00:21:42.545611 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.545663939+00:00 stderr F I1013 00:21:42.545636 28251 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 in node crc 2025-10-13T00:21:42.545672599+00:00 stderr F I1013 00:21:42.545665 28251 obj_retry.go:541] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 took: 35.971µs 2025-10-13T00:21:42.545679729+00:00 stderr F I1013 00:21:42.545674 28251 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.545740511+00:00 stderr F I1013 00:21:42.545713 28251 pods.go:220] [openshift-marketplace/community-operators-gjctm] addLogicalPort took 12.622719ms, libovsdb time 2.379354ms 2025-10-13T00:21:42.545740511+00:00 stderr F I1013 00:21:42.545723 28251 port_cache.go:96] port-cache(openshift-marketplace_redhat-marketplace-crk87): added port &{name:openshift-marketplace_redhat-marketplace-crk87 uuid:15739438-58a4-46c0-bcfe-2d9fdf5c37a4 logicalSwitch:crc ips:[0xc0014a3aa0] mac:[10 88 10 217 0 33] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.33/23] and MAC: 0a:58:0a:d9:00:21 2025-10-13T00:21:42.545749841+00:00 stderr F I1013 00:21:42.545737 28251 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/community-operators-gjctm took: 12.651931ms 2025-10-13T00:21:42.545759761+00:00 stderr F I1013 00:21:42.545753 28251 default_network_controller.go:699] Recording success event on pod 
openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.545793972+00:00 stderr F I1013 00:21:42.545770 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.545803043+00:00 stderr F I1013 00:21:42.545788 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.545813023+00:00 stderr F I1013 00:21:42.545806 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in node crc 2025-10-13T00:21:42.545850504+00:00 stderr F I1013 00:21:42.545830 28251 pods.go:220] [openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] addLogicalPort took 8.217371ms, libovsdb time 5.037355ms 2025-10-13T00:21:42.545858504+00:00 stderr F I1013 00:21:42.545843 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] creating logical port openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz for pod on switch crc 2025-10-13T00:21:42.545868734+00:00 stderr F I1013 00:21:42.545858 28251 pods.go:220] [hostpath-provisioner/csi-hostpathplugin-hvm8g] addLogicalPort took 14.398147ms, libovsdb time 7.178703ms 2025-10-13T00:21:42.545896525+00:00 stderr F I1013 00:21:42.545868 28251 obj_retry.go:541] Creating *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g took: 14.420218ms 2025-10-13T00:21:42.545896525+00:00 stderr F I1013 00:21:42.545884 28251 default_network_controller.go:699] Recording success event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.545896525+00:00 stderr F I1013 00:21:42.545893 28251 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-q88th 2025-10-13T00:21:42.545923706+00:00 stderr F I1013 00:21:42.545907 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-q88th 2025-10-13T00:21:42.545923706+00:00 stderr F I1013 00:21:42.545910 28251 pods.go:185] Attempting to release IPs for pod: openshift-image-registry/image-pruner-29338560-zvlxb, ips: 10.217.0.27 2025-10-13T00:21:42.545923706+00:00 stderr F I1013 00:21:42.545920 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-q88th in node crc 2025-10-13T00:21:42.545931686+00:00 stderr F I1013 00:21:42.545923 28251 default_network_controller.go:655] Recording add event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.545931686+00:00 stderr F I1013 00:21:42.545927 28251 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-q88th took: 8.72µs 2025-10-13T00:21:42.545938816+00:00 stderr F I1013 00:21:42.545931 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.545938816+00:00 stderr F I1013 00:21:42.545932 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-q88th 2025-10-13T00:21:42.545945966+00:00 stderr F I1013 00:21:42.545938 28251 ovn.go:134] Ensuring zone local for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in node crc 2025-10-13T00:21:42.545945966+00:00 stderr F I1013 00:21:42.545940 28251 default_network_controller.go:655] Recording add event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.545953457+00:00 stderr F I1013 00:21:42.545947 28251 
base_network_controller_pods.go:476] [default/openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] creating logical port openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b for pod on switch crc 2025-10-13T00:21:42.545953457+00:00 stderr F I1013 00:21:42.545950 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.545962537+00:00 stderr F I1013 00:21:42.545958 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in node crc 2025-10-13T00:21:42.545983987+00:00 stderr F I1013 00:21:42.545970 28251 base_network_controller_pods.go:476] [default/openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] creating logical port openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw for pod on switch crc 2025-10-13T00:21:42.546109881+00:00 stderr F I1013 00:21:42.546074 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546122701+00:00 stderr F I1013 00:21:42.546109 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546156742+00:00 stderr F I1013 00:21:42.546121 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546177613+00:00 stderr F I1013 00:21:42.546155 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546177613+00:00 stderr F I1013 00:21:42.546162 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546231404+00:00 stderr F I1013 00:21:42.546205 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546231404+00:00 stderr F I1013 00:21:42.546216 28251 ovs.go:162] Exec(19): stdout: "" 2025-10-13T00:21:42.546254055+00:00 stderr F I1013 00:21:42.546237 28251 ovs.go:163] Exec(19): stderr: "" 2025-10-13T00:21:42.546288356+00:00 stderr F I1013 00:21:42.546267 28251 ovs.go:159] Exec(20): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 mac_in_use 2025-10-13T00:21:42.546494921+00:00 stderr F I1013 00:21:42.545751 28251 pods.go:220] [openshift-marketplace/redhat-marketplace-crk87] addLogicalPort took 9.339621ms, libovsdb time 3.923916ms 2025-10-13T00:21:42.546511572+00:00 stderr F I1013 00:21:42.546491 28251 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/redhat-marketplace-crk87 took: 11.678774ms 2025-10-13T00:21:42.546511572+00:00 stderr F I1013 00:21:42.546502 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.546531522+00:00 stderr F I1013 00:21:42.546517 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.546549883+00:00 stderr F I1013 00:21:42.546539 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.546568683+00:00 stderr F I1013 00:21:42.546552 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in node crc 2025-10-13T00:21:42.546605494+00:00 stderr F I1013 00:21:42.546585 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] creating logical port openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 for pod on switch crc 2025-10-13T00:21:42.546840630+00:00 stderr F I1013 00:21:42.546794 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546853091+00:00 stderr F I1013 00:21:42.546832 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.546895522+00:00 stderr F I1013 00:21:42.546846 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.547050936+00:00 stderr F I1013 00:21:42.547029 28251 pods.go:220] [openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] addLogicalPort took 14.328175ms, libovsdb time 3.085843ms 2025-10-13T00:21:42.547050936+00:00 stderr F I1013 00:21:42.547045 28251 obj_retry.go:541] Creating *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t took: 14.359596ms 2025-10-13T00:21:42.547060166+00:00 stderr F I1013 00:21:42.547051 28251 default_network_controller.go:699] Recording success event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.547067917+00:00 stderr F I1013 00:21:42.547060 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-8-crc 2025-10-13T00:21:42.547075487+00:00 stderr F I1013 00:21:42.547068 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-8-crc 2025-10-13T00:21:42.547083367+00:00 stderr F I1013 00:21:42.547075 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.547099807+00:00 stderr F I1013 00:21:42.547090 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-8-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.547099807+00:00 stderr F I1013 00:21:42.547096 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-8-crc 2025-10-13T00:21:42.547162349+00:00 stderr F W1013 00:21:42.547141 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-8-crc. 
Using logical switch crc port uuid and addrs [10.217.0.55/23] 2025-10-13T00:21:42.547162349+00:00 stderr F I1013 00:21:42.547156 28251 base_network_controller_pods.go:999] Completed pod openshift-kube-controller-manager/revision-pruner-8-crc was already released for nad default before startup 2025-10-13T00:21:42.547199770+00:00 stderr F I1013 00:21:42.547183 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-9-crc 2025-10-13T00:21:42.547199770+00:00 stderr F I1013 00:21:42.547193 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-9-crc 2025-10-13T00:21:42.547207990+00:00 stderr F I1013 00:21:42.547199 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.547215580+00:00 stderr F I1013 00:21:42.547209 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-9-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.547215580+00:00 stderr F I1013 00:21:42.547213 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-9-crc 2025-10-13T00:21:42.547265282+00:00 stderr F W1013 00:21:42.547245 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-9-crc. Using logical switch crc port uuid and addrs [10.217.0.52/23] 2025-10-13T00:21:42.547375105+00:00 stderr F I1013 00:21:42.547339 28251 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.52] from address set 2025-10-13T00:21:42.547473197+00:00 stderr F I1013 00:21:42.547442 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.52]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.547497998+00:00 stderr F I1013 00:21:42.547478 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.52]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.547628402+00:00 stderr F I1013 00:21:42.547591 28251 pods.go:220] [openshift-network-diagnostics/network-check-target-v54bt] addLogicalPort took 7.651716ms, libovsdb time 3.673149ms 2025-10-13T00:21:42.547628402+00:00 stderr F I1013 00:21:42.547622 28251 obj_retry.go:541] Creating *v1.Pod openshift-network-diagnostics/network-check-target-v54bt took: 7.697477ms 2025-10-13T00:21:42.547639562+00:00 stderr F I1013 00:21:42.547632 28251 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.547672313+00:00 stderr F I1013 00:21:42.547649 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.547672313+00:00 stderr F I1013 00:21:42.547665 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 
2025-10-13T00:21:42.547688223+00:00 stderr F I1013 00:21:42.547681 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc 2025-10-13T00:21:42.547697013+00:00 stderr F I1013 00:21:42.547688 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc took: 14.06µs 2025-10-13T00:21:42.547697013+00:00 stderr F I1013 00:21:42.547692 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.547705754+00:00 stderr F I1013 00:21:42.547697 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.547715084+00:00 stderr F I1013 00:21:42.547705 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.547723314+00:00 stderr F I1013 00:21:42.547712 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in node crc 2025-10-13T00:21:42.547764975+00:00 stderr F I1013 00:21:42.547739 28251 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] creating logical port openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr for pod on switch crc 2025-10-13T00:21:42.548017632+00:00 stderr F I1013 00:21:42.547954 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548017632+00:00 stderr F I1013 00:21:42.547971 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548028082+00:00 stderr F I1013 00:21:42.548012 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548028082+00:00 stderr F I1013 00:21:42.548011 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548087344+00:00 stderr F I1013 00:21:42.548025 28251 transact.go:42] Configuring OVN: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548087344+00:00 stderr F I1013 00:21:42.548068 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548366491+00:00 stderr F I1013 00:21:42.545845 28251 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv took: 8.256143ms 2025-10-13T00:21:42.548366491+00:00 stderr F I1013 00:21:42.548348 28251 default_network_controller.go:699] Recording success event on pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.548366491+00:00 stderr F I1013 00:21:42.548362 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-10-crc 2025-10-13T00:21:42.548381792+00:00 stderr F I1013 00:21:42.548371 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-10-crc 2025-10-13T00:21:42.548389902+00:00 stderr F I1013 00:21:42.548382 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.548415113+00:00 stderr F I1013 00:21:42.548397 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-10-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.548415113+00:00 stderr F I1013 00:21:42.548406 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-10-crc 2025-10-13T00:21:42.548467604+00:00 stderr F I1013 00:21:42.548430 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548467604+00:00 stderr F W1013 00:21:42.548448 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-10-crc. Using logical switch crc port uuid and addrs [10.217.0.68/23] 2025-10-13T00:21:42.548492725+00:00 stderr F I1013 00:21:42.548468 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548505375+00:00 stderr F I1013 00:21:42.548490 28251 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.68] from address set 2025-10-13T00:21:42.548537936+00:00 stderr F I1013 00:21:42.548488 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548537936+00:00 stderr F I1013 00:21:42.548519 
28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.68]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.548567537+00:00 stderr F I1013 00:21:42.548548 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.68]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549172833+00:00 stderr F I1013 00:21:42.549147 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-10-crc, ips: 10.217.0.68 2025-10-13T00:21:42.549203804+00:00 stderr F I1013 00:21:42.549178 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf): added port &{name:openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf uuid:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c logicalSwitch:crc ips:[0xc000b89290] mac:[10 88 10 217 0 11] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.11/23] and MAC: 0a:58:0a:d9:00:0b 2025-10-13T00:21:42.549203804+00:00 stderr F I1013 00:21:42.549196 28251 pods.go:220] [openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] addLogicalPort took 4.852501ms, libovsdb time 4.10342ms 2025-10-13T00:21:42.549212824+00:00 stderr F I1013 00:21:42.549204 28251 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf took: 4.865921ms 2025-10-13T00:21:42.549212824+00:00 stderr F I1013 00:21:42.549208 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.549226005+00:00 stderr F I1013 00:21:42.549215 28251 default_network_controller.go:655] Recording add event on pod openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.549226005+00:00 stderr F I1013 00:21:42.549221 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.549234385+00:00 stderr F I1013 00:21:42.549227 28251 ovn.go:134] Ensuring zone local for Pod openshift-dns/dns-default-gbw49 in node crc 2025-10-13T00:21:42.549242345+00:00 stderr F I1013 00:21:42.549237 28251 base_network_controller_pods.go:476] [default/openshift-dns/dns-default-gbw49] creating logical port openshift-dns_dns-default-gbw49 for pod on switch crc 2025-10-13T00:21:42.549426970+00:00 stderr F I1013 00:21:42.549386 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549477541+00:00 stderr F I1013 00:21:42.549424 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549477541+00:00 stderr F I1013 00:21:42.549461 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549730528+00:00 stderr F I1013 00:21:42.549691 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549763599+00:00 stderr F I1013 00:21:42.549731 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549807600+00:00 stderr F I1013 00:21:42.549750 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.549982895+00:00 stderr F I1013 00:21:42.549948 28251 port_cache.go:96] port-cache(openshift-dns-operator_dns-operator-75f687757b-nz2xb): added port &{name:openshift-dns-operator_dns-operator-75f687757b-nz2xb uuid:b212e2c2-3d4e-4898-aede-c926b74813f0 logicalSwitch:crc ips:[0xc00097c630] mac:[10 88 10 217 0 18] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.18/23] and MAC: 0a:58:0a:d9:00:12 2025-10-13T00:21:42.549982895+00:00 stderr F I1013 00:21:42.549966 28251 pods.go:220] 
[openshift-dns-operator/dns-operator-75f687757b-nz2xb] addLogicalPort took 9.180786ms, libovsdb time 7.170353ms 2025-10-13T00:21:42.549982895+00:00 stderr F I1013 00:21:42.549975 28251 obj_retry.go:541] Creating *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb took: 9.194868ms 2025-10-13T00:21:42.549993695+00:00 stderr F I1013 00:21:42.549981 28251 default_network_controller.go:699] Recording success event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.549993695+00:00 stderr F I1013 00:21:42.549989 28251 default_network_controller.go:655] Recording add event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.550002165+00:00 stderr F I1013 00:21:42.549996 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.550009956+00:00 stderr F I1013 00:21:42.550004 28251 ovn.go:134] Ensuring zone local for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in node crc 2025-10-13T00:21:42.550030626+00:00 stderr F I1013 00:21:42.550016 28251 base_network_controller_pods.go:476] [default/openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] creating logical port openshift-apiserver_apiserver-7fc54b8dd7-d2bhp for pod on switch crc 2025-10-13T00:21:42.550229012+00:00 stderr F I1013 00:21:42.550183 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550239042+00:00 stderr F I1013 00:21:42.550227 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550319234+00:00 stderr F I1013 00:21:42.550288 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550567081+00:00 stderr F I1013 00:21:42.550523 28251 port_cache.go:96] port-cache(openshift-dns_dns-default-gbw49): added port &{name:openshift-dns_dns-default-gbw49 uuid:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213 logicalSwitch:crc ips:[0xc0013ba810] mac:[10 88 10 217 0 31] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.31/23] and MAC: 0a:58:0a:d9:00:1f 2025-10-13T00:21:42.550567081+00:00 stderr F I1013 00:21:42.550548 28251 pods.go:220] [openshift-dns/dns-default-gbw49] addLogicalPort took 1.315115ms, libovsdb time 760.71µs 2025-10-13T00:21:42.550567081+00:00 stderr F I1013 00:21:42.550557 28251 obj_retry.go:541] Creating *v1.Pod openshift-dns/dns-default-gbw49 took: 1.329836ms 2025-10-13T00:21:42.550567081+00:00 stderr F I1013 00:21:42.550564 28251 default_network_controller.go:699] Recording success event on pod openshift-dns/dns-default-gbw49 
2025-10-13T00:21:42.550584341+00:00 stderr F I1013 00:21:42.550572 28251 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.550592871+00:00 stderr F I1013 00:21:42.550586 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.550602472+00:00 stderr F I1013 00:21:42.550597 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in node crc 2025-10-13T00:21:42.550627022+00:00 stderr F I1013 00:21:42.550610 28251 base_network_controller_pods.go:476] [default/openshift-multus/multus-admission-controller-6c7c885997-4hbbc] creating logical port openshift-multus_multus-admission-controller-6c7c885997-4hbbc for pod on switch crc 2025-10-13T00:21:42.550662793+00:00 stderr F I1013 00:21:42.550624 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550685194+00:00 stderr F I1013 00:21:42.550661 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550726525+00:00 stderr F I1013 00:21:42.550645 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550726525+00:00 stderr F I1013 00:21:42.550683 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550808647+00:00 stderr F I1013 00:21:42.550762 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550808647+00:00 stderr F I1013 00:21:42.550778 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550852998+00:00 stderr F I1013 00:21:42.550823 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550892669+00:00 stderr F I1013 00:21:42.550853 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550901930+00:00 stderr F I1013 00:21:42.550887 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550966811+00:00 stderr F I1013 00:21:42.550929 28251 port_cache.go:96] port-cache(openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m): added port &{name:openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m uuid:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26 logicalSwitch:crc ips:[0xc0012ebd10] mac:[10 88 10 217 0 6] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.6/23] and MAC: 0a:58:0a:d9:00:06 2025-10-13T00:21:42.550966811+00:00 stderr F I1013 00:21:42.550917 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.550966811+00:00 stderr F I1013 00:21:42.550959 28251 
pods.go:220] [openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] addLogicalPort took 12.172247ms, libovsdb time 8.064777ms 2025-10-13T00:21:42.550981622+00:00 stderr F I1013 00:21:42.550969 28251 obj_retry.go:541] Creating *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m took: 12.190107ms 2025-10-13T00:21:42.550981622+00:00 stderr F I1013 00:21:42.550975 28251 default_network_controller.go:699] Recording success event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.550991452+00:00 stderr F I1013 00:21:42.550971 28251 port_cache.go:96] port-cache(openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z): added port &{name:openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z uuid:f5604df7-c1b9-4360-a570-e22fbf62c520 logicalSwitch:crc ips:[0xc000d83200] mac:[10 88 10 217 0 9] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.9/23] and MAC: 0a:58:0a:d9:00:09 2025-10-13T00:21:42.550991452+00:00 stderr F I1013 00:21:42.550980 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551001852+00:00 stderr F I1013 00:21:42.550994 28251 pods.go:220] [openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] addLogicalPort took 7.899592ms, libovsdb time 7.08514ms 2025-10-13T00:21:42.551010083+00:00 stderr F I1013 00:21:42.550995 28251 port_cache.go:96] port-cache(openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw): added port &{name:openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw uuid:f8e99409-b28a-4d27-a8e5-267ea6a801cf logicalSwitch:crc ips:[0xc00133c090] mac:[10 88 10 217 0 20] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.20/23] and MAC: 0a:58:0a:d9:00:14 2025-10-13T00:21:42.551010083+00:00 stderr F I1013 00:21:42.551002 28251 obj_retry.go:541] Creating *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z took: 7.913843ms 2025-10-13T00:21:42.551019393+00:00 stderr F I1013 00:21:42.551006 28251 pods.go:220] [openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] addLogicalPort took 5.042805ms, libovsdb time 4.080999ms 2025-10-13T00:21:42.551019393+00:00 stderr F I1013 00:21:42.550984 28251 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.551028193+00:00 stderr F I1013 00:21:42.551016 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw took: 5.058266ms 2025-10-13T00:21:42.551028193+00:00 stderr F I1013 00:21:42.551023 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.551059134+00:00 stderr F I1013 00:21:42.551035 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in node crc 2025-10-13T00:21:42.551059134+00:00 stderr F I1013 00:21:42.551030 28251 port_cache.go:96] port-cache(openshift-marketplace_certified-operators-cms8q): added port 
&{name:openshift-marketplace_certified-operators-cms8q uuid:628a7cc8-ec79-470f-b57a-8f42ec584f4b logicalSwitch:crc ips:[0xc000641440] mac:[10 88 10 217 0 36] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.36/23] and MAC: 0a:58:0a:d9:00:24 2025-10-13T00:21:42.551059134+00:00 stderr F I1013 00:21:42.550999 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551059134+00:00 stderr F I1013 00:21:42.551047 28251 pods.go:220] [openshift-marketplace/certified-operators-cms8q] addLogicalPort took 6.618708ms, libovsdb time 5.56944ms 2025-10-13T00:21:42.551059134+00:00 stderr F I1013 00:21:42.551055 28251 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/certified-operators-cms8q took: 6.634119ms 2025-10-13T00:21:42.551075114+00:00 stderr F I1013 00:21:42.551059 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.551075114+00:00 stderr F I1013 00:21:42.551057 28251 port_cache.go:96] port-cache(openshift-console-operator_console-operator-5dbbc74dc9-cp5cd): added port &{name:openshift-console-operator_console-operator-5dbbc74dc9-cp5cd uuid:6af06372-81fc-4451-8678-6253ce70f317 logicalSwitch:crc ips:[0xc000e4b980] mac:[10 88 10 217 0 62] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.62/23] and MAC: 0a:58:0a:d9:00:3e 2025-10-13T00:21:42.551075114+00:00 stderr F I1013 00:21:42.551070 28251 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.551084875+00:00 stderr F I1013 00:21:42.551079 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.551093665+00:00 stderr F I1013 00:21:42.551077 28251 pods.go:220] [openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] addLogicalPort took 13.4024ms, libovsdb time 7.342848ms 
2025-10-13T00:21:42.551093665+00:00 stderr F I1013 00:21:42.551088 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in node crc 2025-10-13T00:21:42.551093665+00:00 stderr F I1013 00:21:42.551089 28251 obj_retry.go:541] Creating *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd took: 13.427232ms 2025-10-13T00:21:42.551103135+00:00 stderr F I1013 00:21:42.551095 28251 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.551103135+00:00 stderr F I1013 00:21:42.551099 28251 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] creating logical port openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm for pod on switch crc 2025-10-13T00:21:42.551114525+00:00 stderr F I1013 00:21:42.551103 28251 default_network_controller.go:655] Recording add event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.551114525+00:00 stderr F I1013 00:21:42.551109 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.551124686+00:00 stderr F I1013 00:21:42.551119 28251 ovn.go:134] Ensuring zone local for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in node crc 2025-10-13T00:21:42.551135366+00:00 stderr F I1013 00:21:42.551129 28251 base_network_controller_pods.go:476] [default/openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] creating logical port openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz for pod on switch crc 2025-10-13T00:21:42.551178687+00:00 stderr F I1013 00:21:42.551009 28251 default_network_controller.go:699] Recording success event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.551188667+00:00 stderr F I1013 00:21:42.551179 28251 default_network_controller.go:655] Recording add event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.551215588+00:00 stderr F I1013 00:21:42.551196 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.551215588+00:00 stderr F I1013 00:21:42.551212 28251 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in node crc 2025-10-13T00:21:42.551227338+00:00 stderr F I1013 00:21:42.551207 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-9-crc, ips: 10.217.0.52 2025-10-13T00:21:42.551227338+00:00 stderr F I1013 00:21:42.551218 28251 obj_retry.go:541] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b took: 10.06µs 2025-10-13T00:21:42.551227338+00:00 stderr F I1013 00:21:42.551224 28251 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.551237969+00:00 stderr F I1013 00:21:42.551024 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.551237969+00:00 stderr F I1013 00:21:42.551231 28251 default_network_controller.go:655] Recording add event on pod 
openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.551249399+00:00 stderr F I1013 00:21:42.551240 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-10-retry-1-crc 2025-10-13T00:21:42.551249399+00:00 stderr F I1013 00:21:42.551243 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.551249399+00:00 stderr F I1013 00:21:42.551245 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-10-retry-1-crc 2025-10-13T00:21:42.551258989+00:00 stderr F I1013 00:21:42.551252 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in node crc 2025-10-13T00:21:42.551258989+00:00 stderr F I1013 00:21:42.551251 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.551287350+00:00 stderr F I1013 00:21:42.551265 28251 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] creating logical port openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh for pod on switch crc 2025-10-13T00:21:42.551287350+00:00 stderr F I1013 00:21:42.551265 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-10-retry-1-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.551287350+00:00 stderr F I1013 00:21:42.551258 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551287350+00:00 stderr F I1013 00:21:42.551276 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-10-retry-1-crc 2025-10-13T00:21:42.551301860+00:00 stderr F I1013 00:21:42.551269 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551310101+00:00 stderr F I1013 00:21:42.551292 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551320491+00:00 stderr F I1013 00:21:42.551307 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551351052+00:00 stderr F I1013 00:21:42.551309 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551351052+00:00 stderr F W1013 00:21:42.551322 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-10-retry-1-crc. Using logical switch crc port uuid and addrs [10.217.0.76/23] 2025-10-13T00:21:42.551382673+00:00 stderr F I1013 00:21:42.551340 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551382673+00:00 stderr F I1013 00:21:42.551366 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551395573+00:00 stderr F I1013 00:21:42.551370 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551428654+00:00 stderr F I1013 00:21:42.551409 28251 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.76] from address set 2025-10-13T00:21:42.551453294+00:00 stderr F I1013 00:21:42.551392 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT 
Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551478175+00:00 stderr F I1013 00:21:42.551440 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551522446+00:00 stderr F I1013 00:21:42.551484 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551522446+00:00 stderr F I1013 00:21:42.551434 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.76]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551565467+00:00 stderr F I1013 00:21:42.551539 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.76]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551585328+00:00 stderr F I1013 00:21:42.551537 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551629499+00:00 stderr F I1013 00:21:42.551601 28251 port_cache.go:96] port-cache(openshift-console_console-644bb77b49-5x5xk): added port &{name:openshift-console_console-644bb77b49-5x5xk uuid:9a79516e-7a72-4d42-b0ab-87a99aa064f3 logicalSwitch:crc ips:[0xc000e86b40] mac:[10 88 10 217 0 73] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.73/23] and MAC: 0a:58:0a:d9:00:49 2025-10-13T00:21:42.551662420+00:00 stderr F I1013 00:21:42.551629 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551675270+00:00 stderr F I1013 
00:21:42.551656 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551720912+00:00 stderr F I1013 00:21:42.551691 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551720912+00:00 stderr F I1013 00:21:42.551045 28251 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p took: 12.74µs 2025-10-13T00:21:42.551732572+00:00 stderr F I1013 00:21:42.551722 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.551740822+00:00 stderr F I1013 00:21:42.551706 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551751442+00:00 stderr F I1013 00:21:42.551712 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551765433+00:00 stderr F I1013 00:21:42.551674 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551801034+00:00 stderr F I1013 00:21:42.551768 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551801034+00:00 stderr F I1013 00:21:42.551766 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551815284+00:00 stderr F I1013 00:21:42.551693 28251 pods.go:220] [openshift-console/console-644bb77b49-5x5xk] addLogicalPort took 6.863975ms, libovsdb time 3.387851ms 2025-10-13T00:21:42.551824284+00:00 stderr F I1013 00:21:42.551763 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551824284+00:00 stderr F I1013 00:21:42.551818 28251 obj_retry.go:541] Creating *v1.Pod openshift-console/console-644bb77b49-5x5xk took: 7.018359ms 2025-10-13T00:21:42.551838675+00:00 stderr F I1013 00:21:42.551824 28251 default_network_controller.go:699] Recording success event on pod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.551838675+00:00 stderr F I1013 00:21:42.551814 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.63 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551878196+00:00 stderr F I1013 00:21:42.551836 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.551904847+00:00 stderr F I1013 00:21:42.551784 28251 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr): added port &{name:openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr uuid:2d98188d-6d49-48e7-8956-57a5c46efe26 logicalSwitch:crc ips:[0xc000ebaa20] mac:[10 88 10 217 0 16] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.16/23] and MAC: 0a:58:0a:d9:00:10 2025-10-13T00:21:42.551940588+00:00 stderr F I1013 00:21:42.551925 28251 pods.go:220] [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] addLogicalPort took 4.202273ms, libovsdb time 3.065902ms 2025-10-13T00:21:42.551982749+00:00 stderr F I1013 00:21:42.551846 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.552009879+00:00 stderr F I1013 00:21:42.551964 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr took: 4.249054ms 2025-10-13T00:21:42.552041860+00:00 stderr F I1013 00:21:42.552031 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.552075341+00:00 stderr F I1013 00:21:42.552064 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-11-crc 2025-10-13T00:21:42.552109632+00:00 stderr F I1013 00:21:42.552098 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-11-crc 2025-10-13T00:21:42.552143713+00:00 stderr F I1013 00:21:42.552130 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-11-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.552216085+00:00 stderr F I1013 00:21:42.552202 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-11-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.552248466+00:00 stderr F I1013 00:21:42.552237 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-11-crc 2025-10-13T00:21:42.552326088+00:00 stderr F W1013 00:21:42.552310 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-11-crc. 
Using logical switch crc port uuid and addrs [10.217.0.85/23] 2025-10-13T00:21:42.552385690+00:00 stderr F I1013 00:21:42.551877 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.552478762+00:00 stderr F I1013 00:21:42.552463 28251 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.85] from address set 2025-10-13T00:21:42.552561334+00:00 stderr F I1013 00:21:42.551975 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.552596995+00:00 stderr F I1013 00:21:42.552525 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.85]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.552675867+00:00 stderr F I1013 00:21:42.552649 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.85]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.552729439+00:00 stderr F I1013 00:21:42.551839 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553193051+00:00 stderr F I1013 00:21:42.553162 28251 port_cache.go:96] port-cache(openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b): added port &{name:openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b uuid:3e86699a-fa52-4a81-9386-60d37f3fa10c logicalSwitch:crc ips:[0xc001400ae0] mac:[10 88 10 217 0 72] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.72/23] and MAC: 0a:58:0a:d9:00:48 2025-10-13T00:21:42.553193051+00:00 stderr F I1013 00:21:42.553181 28251 pods.go:220] [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] addLogicalPort took 7.237515ms, libovsdb time 2.154908ms 2025-10-13T00:21:42.553206382+00:00 stderr F I1013 00:21:42.553191 28251 obj_retry.go:541] Creating *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b took: 7.251125ms 2025-10-13T00:21:42.553206382+00:00 stderr F I1013 00:21:42.553201 28251 default_network_controller.go:699] Recording success event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.553215002+00:00 stderr F I1013 00:21:42.553208 28251 default_network_controller.go:655] Recording add event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.553223162+00:00 stderr F I1013 00:21:42.553216 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.553230962+00:00 stderr F I1013 00:21:42.553224 28251 ovn.go:134] Ensuring zone local for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in node crc 2025-10-13T00:21:42.553255313+00:00 stderr F I1013 00:21:42.553238 28251 base_network_controller_pods.go:476] [default/openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] creating logical port openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg for pod on switch crc 2025-10-13T00:21:42.553296254+00:00 stderr F I1013 00:21:42.553253 28251 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm): added port 
&{name:openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm uuid:ad3d5728-34ed-421c-a749-1d7a957800a8 logicalSwitch:crc ips:[0xc0011496b0] mac:[10 88 10 217 0 21] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.21/23] and MAC: 0a:58:0a:d9:00:15 2025-10-13T00:21:42.553305744+00:00 stderr F I1013 00:21:42.553292 28251 pods.go:220] [openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] addLogicalPort took 2.198179ms, libovsdb time 1.4679ms 2025-10-13T00:21:42.553314035+00:00 stderr F I1013 00:21:42.553303 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm took: 2.215389ms 2025-10-13T00:21:42.553314035+00:00 stderr F I1013 00:21:42.553309 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.553333915+00:00 stderr F I1013 00:21:42.553327 28251 default_network_controller.go:655] Recording add event on pod openshift-etcd/etcd-crc 2025-10-13T00:21:42.553368696+00:00 stderr F I1013 00:21:42.553336 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-etcd/etcd-crc 2025-10-13T00:21:42.553368696+00:00 stderr F I1013 00:21:42.553362 28251 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc 2025-10-13T00:21:42.553383416+00:00 stderr F I1013 00:21:42.553372 28251 obj_retry.go:541] Creating *v1.Pod openshift-etcd/etcd-crc took: 28.681µs 2025-10-13T00:21:42.553383416+00:00 stderr F I1013 00:21:42.553377 28251 default_network_controller.go:699] Recording success event on pod openshift-etcd/etcd-crc 2025-10-13T00:21:42.553392207+00:00 stderr F I1013 00:21:42.553384 28251 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-10-13T00:21:42.553415737+00:00 stderr F I1013 00:21:42.553392 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-10-13T00:21:42.553415737+00:00 stderr F I1013 00:21:42.553398 28251 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.553425538+00:00 stderr F I1013 00:21:42.553415 28251 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd): port not found in cache or already marked for removal 2025-10-13T00:21:42.553425538+00:00 stderr F I1013 00:21:42.553420 28251 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-10-13T00:21:42.553498919+00:00 stderr F I1013 00:21:42.553463 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553509240+00:00 stderr F W1013 00:21:42.553501 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd. 
Using logical switch crc port uuid and addrs [10.217.0.98/23] 2025-10-13T00:21:42.553538061+00:00 stderr F I1013 00:21:42.553510 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553564561+00:00 stderr F I1013 00:21:42.553544 28251 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.98] from address set 2025-10-13T00:21:42.553585852+00:00 stderr F I1013 00:21:42.553561 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553622253+00:00 stderr F I1013 00:21:42.553594 28251 port_cache.go:96] port-cache(openshift-multus_multus-admission-controller-6c7c885997-4hbbc): added port &{name:openshift-multus_multus-admission-controller-6c7c885997-4hbbc uuid:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6 logicalSwitch:crc ips:[0xc00113e3c0] mac:[10 88 10 217 0 32] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.32/23] and MAC: 0a:58:0a:d9:00:20 2025-10-13T00:21:42.553631383+00:00 stderr F I1013 00:21:42.553622 28251 pods.go:220] [openshift-multus/multus-admission-controller-6c7c885997-4hbbc] addLogicalPort took 3.019471ms, libovsdb time 1.799288ms 2025-10-13T00:21:42.553639813+00:00 stderr F I1013 00:21:42.553582 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.98]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553639813+00:00 stderr F I1013 00:21:42.553633 28251 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc took: 3.035242ms 2025-10-13T00:21:42.553648683+00:00 stderr F I1013 00:21:42.553638 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.553680674+00:00 stderr F I1013 00:21:42.553648 28251 port_cache.go:96] port-cache(openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz): added port &{name:openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz uuid:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56 logicalSwitch:crc ips:[0xc00120c450] mac:[10 88 10 217 0 10] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.10/23] and MAC: 0a:58:0a:d9:00:0a 2025-10-13T00:21:42.553680674+00:00 stderr F I1013 00:21:42.553674 28251 pods.go:220] [openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] addLogicalPort took 2.550188ms, libovsdb time 1.442509ms 2025-10-13T00:21:42.553690235+00:00 stderr F I1013 00:21:42.553680 28251 obj_retry.go:541] Creating *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz took: 2.564529ms 2025-10-13T00:21:42.553690235+00:00 stderr F I1013 00:21:42.553685 28251 default_network_controller.go:699] Recording 
success event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.553698795+00:00 stderr F I1013 00:21:42.553691 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/installer-7-crc 2025-10-13T00:21:42.553707135+00:00 stderr F I1013 00:21:42.553702 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/installer-7-crc 2025-10-13T00:21:42.553717615+00:00 stderr F I1013 00:21:42.553711 28251 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-7-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.553725776+00:00 stderr F I1013 00:21:42.553721 28251 port_cache.go:122] port-cache(openshift-kube-scheduler_installer-7-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.553733606+00:00 stderr F I1013 00:21:42.553725 28251 pods.go:151] Deleting pod: openshift-kube-scheduler/installer-7-crc 2025-10-13T00:21:42.553760676+00:00 stderr F I1013 00:21:42.553735 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-10-retry-1-crc, ips: 10.217.0.76 2025-10-13T00:21:42.553760676+00:00 stderr F I1013 00:21:42.553697 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553775247+00:00 stderr F W1013 00:21:42.553760 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-scheduler/installer-7-crc. 
Using logical switch crc port uuid and addrs [10.217.0.67/23] 2025-10-13T00:21:42.553802628+00:00 stderr F I1013 00:21:42.553788 28251 address_set.go:613] (87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) deleting addresses [10.217.0.67] from address set 2025-10-13T00:21:42.553832188+00:00 stderr F I1013 00:21:42.553810 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.67]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553832188+00:00 stderr F I1013 00:21:42.553709 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.98]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553858609+00:00 stderr F I1013 00:21:42.553838 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.67]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553897300+00:00 stderr F I1013 00:21:42.553802 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553930141+00:00 stderr F I1013 00:21:42.553894 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553970712+00:00 stderr F I1013 00:21:42.553909 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == 
{05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.553970712+00:00 stderr F I1013 00:21:42.553944 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.554035014+00:00 stderr F I1013 00:21:42.553967 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.554090335+00:00 stderr F I1013 00:21:42.554067 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz): added port &{name:openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz uuid:69155615-9d93-4b72-bddd-739a6e731251 logicalSwitch:crc ips:[0xc000c8ae10] mac:[10 88 10 217 0 43] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.43/23] and MAC: 0a:58:0a:d9:00:2b 2025-10-13T00:21:42.554090335+00:00 stderr F I1013 00:21:42.554080 28251 pods.go:220] 
[openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] addLogicalPort took 8.258962ms, libovsdb time 2.186059ms 2025-10-13T00:21:42.554101756+00:00 stderr F I1013 00:21:42.554088 28251 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz took: 8.284243ms 2025-10-13T00:21:42.554101756+00:00 stderr F I1013 00:21:42.554093 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.554101756+00:00 stderr F I1013 00:21:42.554099 28251 default_network_controller.go:655] Recording add event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.554133747+00:00 stderr F I1013 00:21:42.554115 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.554133747+00:00 stderr F I1013 00:21:42.554123 28251 ovn.go:134] Ensuring zone local for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in node crc 2025-10-13T00:21:42.554133747+00:00 stderr F I1013 00:21:42.554128 28251 obj_retry.go:541] Creating *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 took: 5.43µs 2025-10-13T00:21:42.554133747+00:00 stderr F I1013 00:21:42.554131 28251 default_network_controller.go:699] Recording success event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.554145097+00:00 stderr F I1013 00:21:42.554136 28251 default_network_controller.go:655] Recording add event on pod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.554153717+00:00 stderr F I1013 00:21:42.554146 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.554153717+00:00 stderr F I1013 00:21:42.554151 28251 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dn27q in node crc 2025-10-13T00:21:42.554162467+00:00 stderr F I1013 00:21:42.554154 28251 obj_retry.go:541] Creating *v1.Pod openshift-dns/node-resolver-dn27q took: 4.23µs 2025-10-13T00:21:42.554162467+00:00 stderr F I1013 00:21:42.554158 28251 default_network_controller.go:699] Recording success event on pod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.554200648+00:00 stderr F I1013 00:21:42.554181 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-dns/node-resolver-dn27q. OVN-Kubernetes controller took 2.124e-05 seconds. No OVN measurement. 
2025-10-13T00:21:42.554212859+00:00 stderr F I1013 00:21:42.554150 28251 port_cache.go:96] port-cache(openshift-apiserver_apiserver-7fc54b8dd7-d2bhp): added port &{name:openshift-apiserver_apiserver-7fc54b8dd7-d2bhp uuid:005abe2f-f66d-42f8-945c-fbc80f820ed4 logicalSwitch:crc ips:[0xc00103cbd0] mac:[10 88 10 217 0 82] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.82/23] and MAC: 0a:58:0a:d9:00:52 2025-10-13T00:21:42.554247450+00:00 stderr F I1013 00:21:42.554226 28251 pods.go:220] [openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] addLogicalPort took 4.214353ms, libovsdb time 3.459993ms 2025-10-13T00:21:42.554247450+00:00 stderr F I1013 00:21:42.554239 28251 obj_retry.go:541] Creating *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp took: 4.234744ms 2025-10-13T00:21:42.554304461+00:00 stderr F I1013 00:21:42.554276 28251 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh): added port &{name:openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh uuid:9aafbb57-c78d-409c-9ff4-1561d4387b2d logicalSwitch:crc ips:[0xc00128bfb0] mac:[10 88 10 217 0 63] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.63/23] and MAC: 0a:58:0a:d9:00:3f 2025-10-13T00:21:42.554304461+00:00 stderr F I1013 00:21:42.554297 28251 pods.go:220] [openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] addLogicalPort took 3.039272ms, libovsdb time 2.296862ms 2025-10-13T00:21:42.554316121+00:00 stderr F I1013 00:21:42.554306 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh took: 3.053702ms 2025-10-13T00:21:42.554316121+00:00 stderr F I1013 00:21:42.554312 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.554340752+00:00 stderr F I1013 00:21:42.554320 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.554363653+00:00 stderr F I1013 00:21:42.554347 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.554363653+00:00 stderr F I1013 00:21:42.554355 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in node crc 2025-10-13T00:21:42.554372493+00:00 stderr F I1013 00:21:42.554366 28251 base_network_controller_pods.go:476] [default/openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] creating logical port openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb for pod on switch crc 2025-10-13T00:21:42.554591669+00:00 stderr F I1013 00:21:42.554547 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.554603669+00:00 stderr F I1013 00:21:42.554590 28251 pods.go:185] Attempting to release IPs for 
pod: openshift-kube-scheduler/installer-7-crc, ips: 10.217.0.67 2025-10-13T00:21:42.554630600+00:00 stderr F I1013 00:21:42.554603 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.554662991+00:00 stderr F I1013 00:21:42.554645 28251 default_network_controller.go:699] Recording success event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.554696342+00:00 stderr F I1013 00:21:42.554678 28251 default_network_controller.go:655] Recording add event on pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.554696342+00:00 stderr F I1013 00:21:42.554690 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.554706782+00:00 stderr F I1013 00:21:42.554700 28251 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 in node crc 2025-10-13T00:21:42.554775804+00:00 stderr F I1013 00:21:42.554756 28251 base_network_controller_pods.go:476] [default/openshift-image-registry/image-registry-75b7bb6564-2mwg6] creating logical port openshift-image-registry_image-registry-75b7bb6564-2mwg6 for pod on switch crc 2025-10-13T00:21:42.554893567+00:00 stderr F I1013 00:21:42.554835 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.555045541+00:00 stderr F I1013 00:21:42.555003 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2): added port &{name:openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 uuid:f24db1f4-18a4-418a-9c99-1d94ebfba0da logicalSwitch:crc ips:[0xc001486390] mac:[10 88 10 217 0 24] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.24/23] and MAC: 0a:58:0a:d9:00:18 2025-10-13T00:21:42.555102783+00:00 stderr F I1013 00:21:42.555073 28251 pods.go:220] [openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] addLogicalPort took 8.505579ms, libovsdb time 1.0846ms 2025-10-13T00:21:42.555132463+00:00 stderr F I1013 00:21:42.555121 28251 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 took: 8.57117ms 2025-10-13T00:21:42.555168944+00:00 stderr F I1013 00:21:42.555125 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:fe9b4942-29e7-4ef1-85c7-1a2153128dc7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0840ccdb-85bb-4744-9f7a-239a647cc044}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.555196625+00:00 stderr F I1013 00:21:42.555181 28251 port_cache.go:96] port-cache(openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg): added port 
&{name:openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg uuid:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4 logicalSwitch:crc ips:[0xc00119eba0] mac:[10 88 10 217 0 46] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.46/23] and MAC: 0a:58:0a:d9:00:2e 2025-10-13T00:21:42.555246996+00:00 stderr F I1013 00:21:42.555216 28251 pods.go:220] [openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] addLogicalPort took 1.985044ms, libovsdb time 1.062159ms 2025-10-13T00:21:42.555282437+00:00 stderr F I1013 00:21:42.555269 28251 obj_retry.go:541] Creating *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg took: 2.044675ms 2025-10-13T00:21:42.555312848+00:00 stderr F I1013 00:21:42.555301 28251 default_network_controller.go:699] Recording success event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.555366640+00:00 stderr F I1013 00:21:42.555148 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.555366640+00:00 stderr F I1013 00:21:42.555317 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.555366640+00:00 stderr F I1013 00:21:42.555358 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/installer-8-crc 2025-10-13T00:21:42.555381140+00:00 stderr F I1013 00:21:42.555369 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/installer-8-crc 2025-10-13T00:21:42.555381140+00:00 stderr F I1013 00:21:42.555244 28251 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd, ips: 10.217.0.98 2025-10-13T00:21:42.555388170+00:00 stderr F I1013 00:21:42.555373 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.555475363+00:00 stderr F I1013 00:21:42.555182 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.556034118+00:00 stderr F I1013 00:21:42.555919 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.556118250+00:00 stderr F I1013 00:21:42.556080 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.556128110+00:00 stderr F I1013 00:21:42.555099 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-11-crc, ips: 10.217.0.85 2025-10-13T00:21:42.556128110+00:00 stderr F I1013 00:21:42.555376 28251 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-8-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.556136420+00:00 stderr F I1013 00:21:42.556130 28251 port_cache.go:122] port-cache(openshift-kube-scheduler_installer-8-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.556136420+00:00 stderr F I1013 00:21:42.556134 28251 pods.go:151] Deleting pod: openshift-kube-scheduler/installer-8-crc 2025-10-13T00:21:42.556192972+00:00 stderr F W1013 00:21:42.556176 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-scheduler/installer-8-crc. 
Using logical switch crc port uuid and addrs [10.217.0.84/23] 2025-10-13T00:21:42.556239153+00:00 stderr F I1013 00:21:42.556222 28251 address_set.go:613] (87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) deleting addresses [10.217.0.84] from address set 2025-10-13T00:21:42.556313535+00:00 stderr F I1013 00:21:42.556278 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.84]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.556407388+00:00 stderr F I1013 00:21:42.556375 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.84]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.556901641+00:00 stderr F I1013 00:21:42.556858 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {978136d0-4574-48ef-909d-b0a98736fbac}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.556996504+00:00 stderr F I1013 00:21:42.556966 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:978136d0-4574-48ef-909d-b0a98736fbac}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.557109587+00:00 stderr F I1013 00:21:42.557020 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:fe9b4942-29e7-4ef1-85c7-1a2153128dc7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0840ccdb-85bb-4744-9f7a-239a647cc044}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {978136d0-4574-48ef-909d-b0a98736fbac}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:978136d0-4574-48ef-909d-b0a98736fbac}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] 
Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.558740930+00:00 stderr F I1013 00:21:42.558687 28251 port_cache.go:96] port-cache(openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb): added port &{name:openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb uuid:be2fa59f-4cec-4742-a4bd-dcd0913d1422 logicalSwitch:crc ips:[0xc0011ca390] mac:[10 88 10 217 0 15] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.15/23] and MAC: 0a:58:0a:d9:00:0f 2025-10-13T00:21:42.558740930+00:00 stderr F I1013 00:21:42.558714 28251 pods.go:220] [openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] addLogicalPort took 4.352577ms, libovsdb time 2.753485ms 2025-10-13T00:21:42.558740930+00:00 stderr F I1013 00:21:42.558726 28251 obj_retry.go:541] Creating *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb took: 4.369358ms 2025-10-13T00:21:42.558740930+00:00 stderr F I1013 00:21:42.558734 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.558822753+00:00 stderr F I1013 00:21:42.558792 28251 port_cache.go:96] port-cache(openshift-image-registry_image-registry-75b7bb6564-2mwg6): added port &{name:openshift-image-registry_image-registry-75b7bb6564-2mwg6 uuid:0840ccdb-85bb-4744-9f7a-239a647cc044 logicalSwitch:crc ips:[0xc00169e390] mac:[10 88 10 217 0 38] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.38/23] and MAC: 0a:58:0a:d9:00:26 2025-10-13T00:21:42.558822753+00:00 stderr F I1013 00:21:42.558809 28251 pods.go:220] [openshift-image-registry/image-registry-75b7bb6564-2mwg6] addLogicalPort took 4.10275ms, libovsdb time 1.752917ms 2025-10-13T00:21:42.558822753+00:00 stderr F I1013 00:21:42.558817 28251 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 took: 4.11905ms 2025-10-13T00:21:42.558835773+00:00 stderr F I1013 00:21:42.558822 28251 default_network_controller.go:699] Recording success event on pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.558882464+00:00 stderr F I1013 00:21:42.558862 28251 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-11-crc 2025-10-13T00:21:42.558882464+00:00 stderr F I1013 00:21:42.558878 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-11-crc 2025-10-13T00:21:42.558926265+00:00 stderr F I1013 00:21:42.558906 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.558937166+00:00 stderr F I1013 00:21:42.558931 28251 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-11-crc): port not found in cache or already marked for removal 2025-10-13T00:21:42.558945306+00:00 stderr F I1013 00:21:42.558936 28251 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-11-crc 2025-10-13T00:21:42.559003357+00:00 stderr F W1013 00:21:42.558981 28251 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-11-crc. 
Using logical switch crc port uuid and addrs [10.217.0.83/23] 2025-10-13T00:21:42.559048599+00:00 stderr F I1013 00:21:42.559030 28251 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.83] from address set 2025-10-13T00:21:42.559101580+00:00 stderr F I1013 00:21:42.559069 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.83]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.559144421+00:00 stderr F I1013 00:21:42.559114 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-scheduler/installer-8-crc, ips: 10.217.0.84 2025-10-13T00:21:42.559144421+00:00 stderr F I1013 00:21:42.559115 28251 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.83]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.559155642+00:00 stderr F I1013 00:21:42.559144 28251 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.559190742+00:00 stderr F I1013 00:21:42.559169 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.559190742+00:00 stderr F I1013 00:21:42.559185 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in node crc 2025-10-13T00:21:42.559208363+00:00 stderr F I1013 00:21:42.559192 28251 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg took: 10.05µs 2025-10-13T00:21:42.559208363+00:00 stderr F I1013 00:21:42.559199 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.559208363+00:00 stderr F I1013 00:21:42.559204 28251 default_network_controller.go:655] Recording add event on pod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.559239394+00:00 stderr F I1013 00:21:42.559217 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.559239394+00:00 stderr F I1013 00:21:42.559226 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-wwpnd in node crc 2025-10-13T00:21:42.559239394+00:00 stderr F I1013 00:21:42.559231 28251 obj_retry.go:541] Creating *v1.Pod openshift-network-operator/iptables-alerter-wwpnd took: 5.641µs 2025-10-13T00:21:42.559239394+00:00 stderr F I1013 00:21:42.559235 28251 default_network_controller.go:699] Recording success event on pod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.559886871+00:00 stderr F I1013 00:21:42.559843 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-11-crc, ips: 10.217.0.83 2025-10-13T00:21:42.559886871+00:00 stderr F I1013 00:21:42.559863 28251 default_network_controller.go:655] Recording add event on pod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.559886871+00:00 stderr F I1013 
00:21:42.559881 28251 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.559906932+00:00 stderr F I1013 00:21:42.559891 28251 ovn.go:134] Ensuring zone local for Pod openshift-ingress-canary/ingress-canary-2vhcn in node crc 2025-10-13T00:21:42.559918142+00:00 stderr F I1013 00:21:42.559911 28251 base_network_controller_pods.go:476] [default/openshift-ingress-canary/ingress-canary-2vhcn] creating logical port openshift-ingress-canary_ingress-canary-2vhcn for pod on switch crc 2025-10-13T00:21:42.560108307+00:00 stderr F I1013 00:21:42.560069 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.560142578+00:00 stderr F I1013 00:21:42.560116 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.560192139+00:00 stderr F I1013 00:21:42.560167 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.560618571+00:00 stderr F I1013 00:21:42.560550 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.560618571+00:00 stderr F I1013 00:21:42.560592 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.560695953+00:00 stderr F I1013 00:21:42.560608 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: 
Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.561163506+00:00 stderr F I1013 00:21:42.561103 28251 port_cache.go:96] port-cache(openshift-ingress-canary_ingress-canary-2vhcn): added port &{name:openshift-ingress-canary_ingress-canary-2vhcn uuid:7a350d82-7987-4ce6-ae41-dd930411ca29 logicalSwitch:crc ips:[0xc001334150] mac:[10 88 10 217 0 71] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.71/23] and MAC: 0a:58:0a:d9:00:47 2025-10-13T00:21:42.561163506+00:00 stderr F I1013 00:21:42.561134 28251 pods.go:220] [openshift-ingress-canary/ingress-canary-2vhcn] addLogicalPort took 1.231394ms, libovsdb time 477.432µs 2025-10-13T00:21:42.561163506+00:00 stderr F I1013 00:21:42.561152 28251 obj_retry.go:541] Creating *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn took: 1.259134ms 2025-10-13T00:21:42.561179346+00:00 stderr F I1013 00:21:42.561159 28251 default_network_controller.go:699] Recording success event on pod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.561204337+00:00 stderr F I1013 00:21:42.561185 28251 factory.go:988] Added *v1.Pod event handler 3 2025-10-13T00:21:42.561269808+00:00 stderr F I1013 00:21:42.561254 28251 admin_network_policy_controller.go:124] Setting up event handlers for Admin Network Policy 2025-10-13T00:21:42.561422963+00:00 stderr F I1013 00:21:42.561381 28251 admin_network_policy_controller.go:142] Setting up event handlers for Baseline Admin Network Policy 2025-10-13T00:21:42.561441213+00:00 stderr F I1013 00:21:42.561430 28251 obj_retry.go:427] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod 2025-10-13T00:21:42.561465414+00:00 stderr F I1013 00:21:42.561450 28251 admin_network_policy_controller.go:159] Setting up event handlers for Namespaces in Admin Network Policy controller 2025-10-13T00:21:42.561605187+00:00 stderr F I1013 00:21:42.561585 28251 admin_network_policy_controller.go:175] Setting up event handlers for Pods in Admin Network Policy controller 2025-10-13T00:21:42.561755542+00:00 stderr F I1013 00:21:42.561731 28251 admin_network_policy_controller.go:191] Setting up event handlers for Nodes in Admin Network Policy controller 2025-10-13T00:21:42.561800153+00:00 stderr F I1013 00:21:42.561773 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openstack-operators 2025-10-13T00:21:42.561809923+00:00 stderr F I1013 00:21:42.561799 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-dns 2025-10-13T00:21:42.561809923+00:00 stderr F I1013 00:21:42.561804 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin 
Network Policy controller openshift-ovn-kubernetes 2025-10-13T00:21:42.561818883+00:00 stderr F I1013 00:21:42.561809 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-storage-version-migrator-operator 2025-10-13T00:21:42.561818883+00:00 stderr F I1013 00:21:42.561813 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-operator-lifecycle-manager 2025-10-13T00:21:42.561827833+00:00 stderr F I1013 00:21:42.561818 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-service-ca 2025-10-13T00:21:42.561827833+00:00 stderr F I1013 00:21:42.561823 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller default 2025-10-13T00:21:42.561836604+00:00 stderr F I1013 00:21:42.561827 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-storage-version-migrator 2025-10-13T00:21:42.561836604+00:00 stderr F I1013 00:21:42.561832 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cloud-platform-infra 2025-10-13T00:21:42.561845714+00:00 stderr F I1013 00:21:42.561839 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-controller-manager-operator 2025-10-13T00:21:42.561854834+00:00 stderr F I1013 00:21:42.561843 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kni-infra 2025-10-13T00:21:42.561854834+00:00 stderr F I1013 00:21:42.561849 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openstack 2025-10-13T00:21:42.561863934+00:00 stderr F I1013 00:21:42.561854 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller hostpath-provisioner 2025-10-13T00:21:42.561873135+00:00 stderr F I1013 00:21:42.561866 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-apiserver-operator 2025-10-13T00:21:42.561882465+00:00 stderr F I1013 00:21:42.561871 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-vsphere-infra 2025-10-13T00:21:42.561882465+00:00 stderr F I1013 00:21:42.561876 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-machine-approver 2025-10-13T00:21:42.561891885+00:00 stderr F I1013 00:21:42.561880 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-controller-manager 2025-10-13T00:21:42.561891885+00:00 stderr F I1013 00:21:42.561887 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-marketplace 2025-10-13T00:21:42.561901225+00:00 stderr F I1013 00:21:42.561891 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-node 2025-10-13T00:21:42.561901225+00:00 stderr F I1013 00:21:42.561895 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-samples-operator 2025-10-13T00:21:42.561916276+00:00 stderr F I1013 00:21:42.561899 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-apiserver 2025-10-13T00:21:42.561916276+00:00 stderr F 
I1013 00:21:42.561904 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-infra 2025-10-13T00:21:42.561916276+00:00 stderr F I1013 00:21:42.561908 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-machine-config-operator 2025-10-13T00:21:42.561926986+00:00 stderr F I1013 00:21:42.561913 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-nutanix-infra 2025-10-13T00:21:42.561926986+00:00 stderr F I1013 00:21:42.561919 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ovirt-infra 2025-10-13T00:21:42.561936906+00:00 stderr F I1013 00:21:42.561927 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift 2025-10-13T00:21:42.561946307+00:00 stderr F I1013 00:21:42.561934 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-authentication-operator 2025-10-13T00:21:42.561946307+00:00 stderr F I1013 00:21:42.561940 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress 2025-10-13T00:21:42.561956037+00:00 stderr F I1013 00:21:42.561945 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-machine-api 2025-10-13T00:21:42.561956037+00:00 stderr F I1013 00:21:42.561950 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-operators 2025-10-13T00:21:42.561965087+00:00 stderr F I1013 00:21:42.561954 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-authentication 2025-10-13T00:21:42.561965087+00:00 stderr F I1013 00:21:42.561959 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config-operator 2025-10-13T00:21:42.561974227+00:00 stderr F I1013 00:21:42.561963 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-monitoring 2025-10-13T00:21:42.561974227+00:00 stderr F I1013 00:21:42.561969 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-dns-operator 2025-10-13T00:21:42.561983428+00:00 stderr F I1013 00:21:42.561973 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-scheduler 2025-10-13T00:21:42.561983428+00:00 stderr F I1013 00:21:42.561978 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-storage-operator 2025-10-13T00:21:42.561992768+00:00 stderr F I1013 00:21:42.561982 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress-operator 2025-10-13T00:21:42.561992768+00:00 stderr F I1013 00:21:42.561987 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-node-identity 2025-10-13T00:21:42.562002058+00:00 stderr F I1013 00:21:42.561992 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-public 2025-10-13T00:21:42.562002058+00:00 stderr F I1013 00:21:42.561998 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller 
openshift-cloud-network-config-controller 2025-10-13T00:21:42.562018429+00:00 stderr F I1013 00:21:42.562002 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-host-network 2025-10-13T00:21:42.562018429+00:00 stderr F I1013 00:21:42.562006 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress-canary 2025-10-13T00:21:42.562018429+00:00 stderr F I1013 00:21:42.562010 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-controller-manager-operator 2025-10-13T00:21:42.562018429+00:00 stderr F I1013 00:21:42.562015 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config-managed 2025-10-13T00:21:42.562029359+00:00 stderr F I1013 00:21:42.562020 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console-operator 2025-10-13T00:21:42.562029359+00:00 stderr F I1013 00:21:42.562025 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-openstack-infra 2025-10-13T00:21:42.562038809+00:00 stderr F I1013 00:21:42.562029 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-route-controller-manager 2025-10-13T00:21:42.562038809+00:00 stderr F I1013 00:21:42.562034 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-service-ca-operator 2025-10-13T00:21:42.562048109+00:00 stderr F I1013 00:21:42.562038 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-version 2025-10-13T00:21:42.562057340+00:00 stderr F I1013 00:21:42.562048 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console 2025-10-13T00:21:42.562057340+00:00 stderr F I1013 00:21:42.562053 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-image-registry 2025-10-13T00:21:42.562066880+00:00 stderr F I1013 00:21:42.562058 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-apiserver-operator 2025-10-13T00:21:42.562066880+00:00 stderr F I1013 00:21:42.562064 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-multus 2025-10-13T00:21:42.562076640+00:00 stderr F I1013 00:21:42.562069 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-operator 2025-10-13T00:21:42.562076640+00:00 stderr F I1013 00:21:42.562073 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-node-lease 2025-10-13T00:21:42.562086030+00:00 stderr F I1013 00:21:42.562078 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-system 2025-10-13T00:21:42.562086030+00:00 stderr F I1013 00:21:42.562082 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-scheduler-operator 2025-10-13T00:21:42.562095391+00:00 stderr F I1013 00:21:42.562087 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-diagnostics 2025-10-13T00:21:42.562095391+00:00 stderr F I1013 00:21:42.562091 
28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console-user-settings 2025-10-13T00:21:42.562105321+00:00 stderr F I1013 00:21:42.562096 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-etcd 2025-10-13T00:21:42.562105321+00:00 stderr F I1013 00:21:42.562102 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-etcd-operator 2025-10-13T00:21:42.562114161+00:00 stderr F I1013 00:21:42.562107 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-apiserver 2025-10-13T00:21:42.562114161+00:00 stderr F I1013 00:21:42.562107 28251 admin_network_policy_controller.go:218] Starting controller default-network-controller 2025-10-13T00:21:42.562126901+00:00 stderr F I1013 00:21:42.562117 28251 admin_network_policy_controller.go:221] Waiting for informer caches to sync 2025-10-13T00:21:42.562135382+00:00 stderr F I1013 00:21:42.562127 28251 shared_informer.go:311] Waiting for caches to sync for default-network-controller 2025-10-13T00:21:42.562135382+00:00 stderr F I1013 00:21:42.562129 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.562144692+00:00 stderr F I1013 00:21:42.562136 28251 shared_informer.go:318] Caches are synced for default-network-controller 2025-10-13T00:21:42.562144692+00:00 stderr F I1013 00:21:42.562139 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.562144692+00:00 stderr F I1013 00:21:42.562141 28251 admin_network_policy_controller.go:228] Repairing Admin Network Policies 2025-10-13T00:21:42.562154892+00:00 stderr F I1013 00:21:42.562146 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.562154892+00:00 stderr F I1013 00:21:42.562151 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.562164372+00:00 stderr F I1013 00:21:42.562156 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.562164372+00:00 stderr F I1013 00:21:42.562161 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-10-crc 2025-10-13T00:21:42.562174123+00:00 stderr F I1013 00:21:42.562166 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.562174123+00:00 stderr F I1013 00:21:42.562170 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.562183813+00:00 stderr F I1013 00:21:42.562174 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.562193153+00:00 stderr F I1013 00:21:42.562183 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/node-ca-l92hr 
2025-10-13T00:21:42.562193153+00:00 stderr F I1013 00:21:42.562187 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.562202714+00:00 stderr F I1013 00:21:42.562192 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.562202714+00:00 stderr F I1013 00:21:42.562196 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.562211984+00:00 stderr F I1013 00:21:42.562202 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/image-pruner-29338560-zvlxb 2025-10-13T00:21:42.562211984+00:00 stderr F I1013 00:21:42.562207 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.562225374+00:00 stderr F I1013 00:21:42.562212 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2025-10-13T00:21:42.562225374+00:00 stderr F I1013 00:21:42.562217 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.562234504+00:00 stderr F I1013 00:21:42.562223 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.562234504+00:00 stderr F I1013 00:21:42.562228 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-8-crc 2025-10-13T00:21:42.562243645+00:00 stderr F I1013 00:21:42.562233 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.562243645+00:00 stderr F I1013 00:21:42.562238 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.562252825+00:00 stderr F I1013 00:21:42.562245 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.562261845+00:00 stderr F I1013 00:21:42.562251 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-q88th 2025-10-13T00:21:42.562261845+00:00 stderr F I1013 00:21:42.562257 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw 2025-10-13T00:21:42.562271205+00:00 stderr F I1013 00:21:42.562262 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.562282286+00:00 stderr F I1013 00:21:42.562275 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.562291216+00:00 stderr F I1013 00:21:42.562280 28251 
admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.562291216+00:00 stderr F I1013 00:21:42.562281 28251 repair.go:29] Repairing admin network policies took 133.594µs 2025-10-13T00:21:42.562291216+00:00 stderr F I1013 00:21:42.562287 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.562301256+00:00 stderr F I1013 00:21:42.562292 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.562301256+00:00 stderr F I1013 00:21:42.562297 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.562311056+00:00 stderr F I1013 00:21:42.562301 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.562311056+00:00 stderr F I1013 00:21:42.562306 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-9-crc 2025-10-13T00:21:42.562320797+00:00 stderr F I1013 00:21:42.562311 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.562320797+00:00 stderr F I1013 00:21:42.562316 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562320 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562327 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562331 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562336 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562354 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/installer-8-crc 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562360 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.562369648+00:00 stderr F I1013 00:21:42.562366 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.562388739+00:00 stderr F I1013 00:21:42.562373 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller 
openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.562388739+00:00 stderr F I1013 00:21:42.562379 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.562388739+00:00 stderr F I1013 00:21:42.562385 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.562399309+00:00 stderr F I1013 00:21:42.562390 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.562399309+00:00 stderr F I1013 00:21:42.562395 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.562409139+00:00 stderr F I1013 00:21:42.562400 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-etcd/etcd-crc 2025-10-13T00:21:42.562409139+00:00 stderr F I1013 00:21:42.562405 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-10-retry-1-crc 2025-10-13T00:21:42.562417649+00:00 stderr F I1013 00:21:42.562409 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.562417649+00:00 stderr F I1013 00:21:42.562412 28251 repair.go:104] Repairing baseline admin network policies took 122.113µs 2025-10-13T00:21:42.562426410+00:00 stderr F I1013 00:21:42.562420 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.562426410+00:00 stderr F I1013 00:21:42.562421 28251 admin_network_policy_controller.go:241] Starting Admin Network Policy workers 2025-10-13T00:21:42.562435470+00:00 stderr F I1013 00:21:42.562426 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.562435470+00:00 stderr F I1013 00:21:42.562432 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.562450610+00:00 stderr F I1013 00:21:42.562438 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-10-crc 2025-10-13T00:21:42.562450610+00:00 stderr F I1013 00:21:42.562444 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.562459480+00:00 stderr F I1013 00:21:42.562448 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.562459480+00:00 stderr F I1013 00:21:42.562454 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.562468331+00:00 stderr F I1013 00:21:42.562456 28251 admin_network_policy_controller.go:549] Adding Node in Admin 
Network Policy controller crc 2025-10-13T00:21:42.562468331+00:00 stderr F I1013 00:21:42.562069 28251 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b099dc38-13d2-468d-a119-69434d19ed27}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.562476861+00:00 stderr F I1013 00:21:42.562432 28251 admin_network_policy_controller.go:252] Starting Baseline Admin Network Policy workers 2025-10-13T00:21:42.562485011+00:00 stderr F I1013 00:21:42.562477 28251 admin_network_policy_controller.go:263] Starting Namespace Admin Network Policy workers 2025-10-13T00:21:42.562493171+00:00 stderr F I1013 00:21:42.562484 28251 admin_network_policy_controller.go:274] Starting Pod Admin Network Policy workers 2025-10-13T00:21:42.562500892+00:00 stderr F I1013 00:21:42.562492 28251 admin_network_policy_controller.go:285] Starting Node Admin Network Policy workers 2025-10-13T00:21:42.562563883+00:00 stderr F I1013 00:21:42.562458 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.562563883+00:00 stderr F I1013 00:21:42.562542 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-10-13T00:21:42.562563883+00:00 stderr F I1013 00:21:42.562552 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-11-crc 2025-10-13T00:21:42.562563883+00:00 stderr F I1013 00:21:42.562558 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.562576144+00:00 stderr F I1013 00:21:42.562564 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-11-crc 2025-10-13T00:21:42.562576144+00:00 stderr F I1013 00:21:42.562571 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/installer-7-crc 2025-10-13T00:21:42.562591024+00:00 stderr F I1013 00:21:42.562578 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.562591024+00:00 stderr F I1013 00:21:42.562583 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.562599544+00:00 stderr F I1013 00:21:42.562589 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.562599544+00:00 stderr F I1013 00:21:42.562596 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller 
openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.562608454+00:00 stderr F I1013 00:21:42.562568 28251 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab13cfbf-8c22-4847-bf8f-29854d97f49b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.562608454+00:00 stderr F I1013 00:21:42.562604 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.562617165+00:00 stderr F I1013 00:21:42.562610 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.562638325+00:00 stderr F I1013 00:21:42.562618 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.562638325+00:00 stderr F I1013 00:21:42.562623 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.562638325+00:00 stderr F I1013 00:21:42.562629 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-12-crc 2025-10-13T00:21:42.562638325+00:00 stderr F I1013 00:21:42.562634 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.562648366+00:00 stderr F I1013 00:21:42.562640 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.562656866+00:00 stderr F I1013 00:21:42.562646 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.562656866+00:00 stderr F I1013 00:21:42.562112 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config 2025-10-13T00:21:42.562665626+00:00 stderr F I1013 00:21:42.562655 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-user-workload-monitoring 2025-10-13T00:21:42.562665626+00:00 stderr F I1013 00:21:42.562636 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b099dc38-13d2-468d-a119-69434d19ed27} {GoUUID:ab13cfbf-8c22-4847-bf8f-29854d97f49b}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.562679086+00:00 stderr F I1013 00:21:42.562672 28251 
admin_network_policy_namespace.go:53] Processing sync for Namespace openstack-operators in Admin Network Policy controller 2025-10-13T00:21:42.562687307+00:00 stderr F I1013 00:21:42.562679 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openstack-operators Admin Network Policy controller: took 8.13µs 2025-10-13T00:21:42.562695667+00:00 stderr F I1013 00:21:42.562687 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-dns in Admin Network Policy controller 2025-10-13T00:21:42.562695667+00:00 stderr F I1013 00:21:42.562692 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-dns Admin Network Policy controller: took 4.36µs 2025-10-13T00:21:42.562704227+00:00 stderr F I1013 00:21:42.562698 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ovn-kubernetes in Admin Network Policy controller 2025-10-13T00:21:42.562712997+00:00 stderr F I1013 00:21:42.562703 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ovn-kubernetes Admin Network Policy controller: took 4.18µs 2025-10-13T00:21:42.562712997+00:00 stderr F I1013 00:21:42.562666 28251 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b099dc38-13d2-468d-a119-69434d19ed27}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab13cfbf-8c22-4847-bf8f-29854d97f49b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b099dc38-13d2-468d-a119-69434d19ed27} {GoUUID:ab13cfbf-8c22-4847-bf8f-29854d97f49b}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.562712997+00:00 stderr F I1013 00:21:42.562709 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-storage-version-migrator-operator in Admin Network Policy controller 2025-10-13T00:21:42.562722528+00:00 stderr F I1013 00:21:42.562714 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-storage-version-migrator-operator Admin Network Policy controller: took 4.94µs 2025-10-13T00:21:42.562722528+00:00 stderr F I1013 00:21:42.562720 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller 2025-10-13T00:21:42.562764189+00:00 stderr 
F I1013 00:21:42.562747 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 3.921µs 2025-10-13T00:21:42.562764189+00:00 stderr F I1013 00:21:42.562757 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-service-ca in Admin Network Policy controller 2025-10-13T00:21:42.562783429+00:00 stderr F I1013 00:21:42.562762 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-service-ca Admin Network Policy controller: took 5.131µs 2025-10-13T00:21:42.562783429+00:00 stderr F I1013 00:21:42.562769 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace default in Admin Network Policy controller 2025-10-13T00:21:42.562783429+00:00 stderr F I1013 00:21:42.562773 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace default Admin Network Policy controller: took 4.14µs 2025-10-13T00:21:42.562783429+00:00 stderr F I1013 00:21:42.562778 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-storage-version-migrator in Admin Network Policy controller 2025-10-13T00:21:42.562793429+00:00 stderr F I1013 00:21:42.562783 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-storage-version-migrator Admin Network Policy controller: took 4.07µs 2025-10-13T00:21:42.562793429+00:00 stderr F I1013 00:21:42.562788 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cloud-platform-infra in Admin Network Policy controller 2025-10-13T00:21:42.562802250+00:00 stderr F I1013 00:21:42.562793 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cloud-platform-infra Admin Network Policy controller: took 4.36µs 2025-10-13T00:21:42.562802250+00:00 stderr F I1013 00:21:42.562799 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-controller-manager-operator in Admin Network Policy controller 2025-10-13T00:21:42.562810960+00:00 stderr F I1013 00:21:42.562803 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-controller-manager-operator Admin Network Policy controller: took 3.91µs 2025-10-13T00:21:42.562810960+00:00 stderr F I1013 00:21:42.562809 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kni-infra in Admin Network Policy controller 2025-10-13T00:21:42.562819630+00:00 stderr F I1013 00:21:42.562813 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kni-infra Admin Network Policy controller: took 4.06µs 2025-10-13T00:21:42.562827890+00:00 stderr F I1013 00:21:42.562819 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openstack in Admin Network Policy controller 2025-10-13T00:21:42.562827890+00:00 stderr F I1013 00:21:42.562823 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openstack Admin Network Policy controller: took 4.41µs 2025-10-13T00:21:42.562836491+00:00 stderr F I1013 00:21:42.562829 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace hostpath-provisioner in Admin Network Policy controller 2025-10-13T00:21:42.562836491+00:00 stderr F I1013 00:21:42.562834 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace hostpath-provisioner Admin Network Policy controller: took 4.53µs 2025-10-13T00:21:42.562845251+00:00 stderr F I1013 00:21:42.562839 28251 admin_network_policy_namespace.go:53] Processing 
sync for Namespace openshift-apiserver-operator in Admin Network Policy controller 2025-10-13T00:21:42.562853481+00:00 stderr F I1013 00:21:42.562844 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-apiserver-operator Admin Network Policy controller: took 3.86µs 2025-10-13T00:21:42.562853481+00:00 stderr F I1013 00:21:42.562850 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller 2025-10-13T00:21:42.562862221+00:00 stderr F I1013 00:21:42.562854 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 4.38µs 2025-10-13T00:21:42.562862221+00:00 stderr F I1013 00:21:42.562860 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-machine-approver in Admin Network Policy controller 2025-10-13T00:21:42.562874902+00:00 stderr F I1013 00:21:42.562864 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-machine-approver Admin Network Policy controller: took 3.86µs 2025-10-13T00:21:42.562874902+00:00 stderr F I1013 00:21:42.562869 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-controller-manager in Admin Network Policy controller 2025-10-13T00:21:42.562883612+00:00 stderr F I1013 00:21:42.562873 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-controller-manager Admin Network Policy controller: took 3.841µs 2025-10-13T00:21:42.562883612+00:00 stderr F I1013 00:21:42.562879 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-marketplace in Admin Network Policy controller 2025-10-13T00:21:42.562892342+00:00 stderr F I1013 00:21:42.562883 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-marketplace Admin Network Policy controller: took 4.02µs 2025-10-13T00:21:42.562892342+00:00 stderr F I1013 00:21:42.562889 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-node in Admin Network Policy controller 2025-10-13T00:21:42.562900902+00:00 stderr F I1013 00:21:42.562893 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-node Admin Network Policy controller: took 4.12µs 2025-10-13T00:21:42.562909273+00:00 stderr F I1013 00:21:42.562900 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-samples-operator in Admin Network Policy controller 2025-10-13T00:21:42.562909273+00:00 stderr F I1013 00:21:42.562904 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-samples-operator Admin Network Policy controller: took 3.93µs 2025-10-13T00:21:42.562917833+00:00 stderr F I1013 00:21:42.562909 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-apiserver in Admin Network Policy controller 2025-10-13T00:21:42.562917833+00:00 stderr F I1013 00:21:42.562914 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-apiserver Admin Network Policy controller: took 4.371µs 2025-10-13T00:21:42.562926513+00:00 stderr F I1013 00:21:42.562920 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-infra in Admin Network Policy controller 2025-10-13T00:21:42.562926513+00:00 stderr F I1013 00:21:42.562924 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-infra Admin Network Policy 
controller: took 4.16µs 2025-10-13T00:21:42.562937103+00:00 stderr F I1013 00:21:42.562932 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-machine-config-operator in Admin Network Policy controller 2025-10-13T00:21:42.562962094+00:00 stderr F I1013 00:21:42.562936 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-machine-config-operator Admin Network Policy controller: took 4.11µs 2025-10-13T00:21:42.562962094+00:00 stderr F I1013 00:21:42.562944 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-node-identity/network-node-identity-7xghp in Admin Network Policy controller 2025-10-13T00:21:42.562962094+00:00 stderr F I1013 00:21:42.562953 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.562973064+00:00 stderr F I1013 00:21:42.562958 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-node-identity/network-node-identity-7xghp Admin Network Policy controller: took 14.861µs 2025-10-13T00:21:42.562973064+00:00 stderr F I1013 00:21:42.562969 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/redhat-marketplace-crk87 in Admin Network Policy controller 2025-10-13T00:21:42.562986345+00:00 stderr F I1013 00:21:42.562971 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-controller-manager 2025-10-13T00:21:42.562986345+00:00 stderr F I1013 00:21:42.562975 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/redhat-marketplace-crk87 Admin Network Policy controller: took 5.23µs 2025-10-13T00:21:42.562986345+00:00 stderr F I1013 00:21:42.562981 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in Admin Network Policy controller 2025-10-13T00:21:42.562986345+00:00 stderr F I1013 00:21:42.562981 28251 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-oauth-apiserver 2025-10-13T00:21:42.562996605+00:00 stderr F I1013 00:21:42.562985 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg Admin Network Policy controller: took 4.711µs 2025-10-13T00:21:42.562996605+00:00 stderr F I1013 00:21:42.562961 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.562996605+00:00 stderr F I1013 00:21:42.562991 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in Admin Network Policy controller 2025-10-13T00:21:42.563006385+00:00 stderr F I1013 00:21:42.562996 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t Admin Network Policy controller: took 4.68µs 2025-10-13T00:21:42.563006385+00:00 stderr F I1013 00:21:42.563003 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/kube-apiserver-crc in Admin Network Policy controller 2025-10-13T00:21:42.563015605+00:00 stderr F I1013 00:21:42.563007 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/kube-apiserver-crc Admin Network Policy controller: took 4.34µs 2025-10-13T00:21:42.563015605+00:00 stderr F I1013 
00:21:42.562996 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.563024906+00:00 stderr F I1013 00:21:42.563019 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-9-crc 2025-10-13T00:21:42.563033736+00:00 stderr F I1013 00:21:42.563018 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-nutanix-infra in Admin Network Policy controller 2025-10-13T00:21:42.563033736+00:00 stderr F I1013 00:21:42.563028 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.563042556+00:00 stderr F I1013 00:21:42.563032 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-nutanix-infra Admin Network Policy controller: took 17.031µs 2025-10-13T00:21:42.563042556+00:00 stderr F I1013 00:21:42.563034 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.563051036+00:00 stderr F I1013 00:21:42.563043 28251 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.563051036+00:00 stderr F I1013 00:21:42.563045 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ovirt-infra in Admin Network Policy controller 2025-10-13T00:21:42.563063407+00:00 stderr F I1013 00:21:42.563050 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ovirt-infra Admin Network Policy controller: took 6.34µs 2025-10-13T00:21:42.563063407+00:00 stderr F I1013 00:21:42.563057 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift in Admin Network Policy controller 2025-10-13T00:21:42.563071967+00:00 stderr F I1013 00:21:42.563061 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift Admin Network Policy controller: took 4.161µs 2025-10-13T00:21:42.563071967+00:00 stderr F I1013 00:21:42.563067 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-authentication-operator in Admin Network Policy controller 2025-10-13T00:21:42.563080387+00:00 stderr F I1013 00:21:42.563072 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.23µs 2025-10-13T00:21:42.563080387+00:00 stderr F I1013 00:21:42.563078 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress in Admin Network Policy controller 2025-10-13T00:21:42.563088937+00:00 stderr F I1013 00:21:42.563083 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress Admin Network Policy controller: took 4.3µs 2025-10-13T00:21:42.563097308+00:00 stderr F I1013 00:21:42.563089 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-machine-api in Admin Network Policy controller 2025-10-13T00:21:42.563097308+00:00 stderr F I1013 00:21:42.563093 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-machine-api Admin Network Policy controller: took 4.34µs 2025-10-13T00:21:42.563105628+00:00 stderr F I1013 00:21:42.563099 28251 
admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operators in Admin Network Policy controller 2025-10-13T00:21:42.563113748+00:00 stderr F I1013 00:21:42.563104 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operators Admin Network Policy controller: took 3.91µs 2025-10-13T00:21:42.563113748+00:00 stderr F I1013 00:21:42.563110 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-authentication in Admin Network Policy controller 2025-10-13T00:21:42.563122168+00:00 stderr F I1013 00:21:42.563114 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication Admin Network Policy controller: took 4.12µs 2025-10-13T00:21:42.563130418+00:00 stderr F I1013 00:21:42.563120 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller 2025-10-13T00:21:42.563130418+00:00 stderr F I1013 00:21:42.563125 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 4.32µs 2025-10-13T00:21:42.563138849+00:00 stderr F I1013 00:21:42.563130 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-monitoring in Admin Network Policy controller 2025-10-13T00:21:42.563138849+00:00 stderr F I1013 00:21:42.563135 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-monitoring Admin Network Policy controller: took 4.131µs 2025-10-13T00:21:42.563147839+00:00 stderr F I1013 00:21:42.563135 28251 factory.go:988] Added *v1.NetworkPolicy event handler 4 2025-10-13T00:21:42.563147839+00:00 stderr F I1013 00:21:42.563143 28251 admin_network_policy_node.go:55] Processing sync for Node crc in Admin Network Policy controller 2025-10-13T00:21:42.563158989+00:00 stderr F I1013 00:21:42.563151 28251 admin_network_policy_node.go:58] Finished syncing Node crc Admin Network Policy controller: took 10.25µs 2025-10-13T00:21:42.563172010+00:00 stderr F I1013 00:21:42.563160 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-10-crc in Admin Network Policy controller 2025-10-13T00:21:42.563172010+00:00 stderr F I1013 00:21:42.563165 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-10-crc Admin Network Policy controller: took 5.4µs 2025-10-13T00:21:42.563180950+00:00 stderr F I1013 00:21:42.563171 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-diagnostics/network-check-target-v54bt in Admin Network Policy controller 2025-10-13T00:21:42.563180950+00:00 stderr F I1013 00:21:42.563176 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-diagnostics/network-check-target-v54bt Admin Network Policy controller: took 4.46µs 2025-10-13T00:21:42.563189780+00:00 stderr F I1013 00:21:42.563180 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in Admin Network Policy controller 2025-10-13T00:21:42.563189780+00:00 stderr F I1013 00:21:42.563184 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd Admin Network Policy controller: took 4.04µs 2025-10-13T00:21:42.563198900+00:00 stderr F I1013 00:21:42.563189 28251 admin_network_policy_pod.go:56] Processing sync for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in Admin Network Policy 
controller 2025-10-13T00:21:42.563207881+00:00 stderr F I1013 00:21:42.563193 28251 admin_network_policy_pod.go:59] Finished syncing Pod hostpath-provisioner/csi-hostpathplugin-hvm8g Admin Network Policy controller: took 4.12µs 2025-10-13T00:21:42.563215911+00:00 stderr F I1013 00:21:42.563205 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/node-ca-l92hr in Admin Network Policy controller 2025-10-13T00:21:42.563215911+00:00 stderr F I1013 00:21:42.563209 28251 egressip.go:1052] syncStaleEgressReroutePolicy will remove stale nexthops: [] 2025-10-13T00:21:42.563298033+00:00 stderr F I1013 00:21:42.563279 28251 address_set.go:304] New(b27c19cc-d9d0-4d57-a5a8-06fcff438e8a/default-network-controller:EgressIP:egressip-served-pods:v4/a4548040316634674295) with [] 2025-10-13T00:21:42.563309193+00:00 stderr F I1013 00:21:42.563210 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-l92hr Admin Network Policy controller: took 5.381µs 2025-10-13T00:21:42.563317674+00:00 stderr F I1013 00:21:42.563311 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in Admin Network Policy controller 2025-10-13T00:21:42.563343414+00:00 stderr F I1013 00:21:42.563320 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb Admin Network Policy controller: took 9.631µs 2025-10-13T00:21:42.563343414+00:00 stderr F I1013 00:21:42.563339 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/community-operators-gjctm in Admin Network Policy controller 2025-10-13T00:21:42.563357385+00:00 stderr F I1013 00:21:42.563310 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b27c19cc-d9d0-4d57-a5a8-06fcff438e8a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.563357385+00:00 stderr F I1013 00:21:42.563344 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/community-operators-gjctm Admin Network Policy controller: took 5.32µs 2025-10-13T00:21:42.563357385+00:00 stderr F I1013 00:21:42.563351 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in Admin Network Policy controller 2025-10-13T00:21:42.563367105+00:00 stderr F I1013 00:21:42.563356 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb Admin Network Policy controller: took 5.051µs 2025-10-13T00:21:42.563367105+00:00 stderr F I1013 00:21:42.563348 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b27c19cc-d9d0-4d57-a5a8-06fcff438e8a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.563367105+00:00 stderr F I1013 00:21:42.563362 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/image-pruner-29338560-zvlxb in Admin Network Policy controller 2025-10-13T00:21:42.563384695+00:00 stderr F I1013 00:21:42.563367 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/image-pruner-29338560-zvlxb Admin Network Policy controller: took 5.3µs 2025-10-13T00:21:42.563384695+00:00 stderr F I1013 00:21:42.563373 28251 
admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/network-metrics-daemon-qdfr4 in Admin Network Policy controller 2025-10-13T00:21:42.563384695+00:00 stderr F I1013 00:21:42.563378 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/network-metrics-daemon-qdfr4 Admin Network Policy controller: took 4.52µs 2025-10-13T00:21:42.563393786+00:00 stderr F I1013 00:21:42.563384 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j in Admin Network Policy controller 2025-10-13T00:21:42.563393786+00:00 stderr F I1013 00:21:42.563389 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j Admin Network Policy controller: took 5.17µs 2025-10-13T00:21:42.563402586+00:00 stderr F I1013 00:21:42.563395 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in Admin Network Policy controller 2025-10-13T00:21:42.563411036+00:00 stderr F I1013 00:21:42.563400 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz Admin Network Policy controller: took 4.93µs 2025-10-13T00:21:42.563411036+00:00 stderr F I1013 00:21:42.563407 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in Admin Network Policy controller 2025-10-13T00:21:42.563421496+00:00 stderr F I1013 00:21:42.563415 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp Admin Network Policy controller: took 4.8µs 2025-10-13T00:21:42.563429837+00:00 stderr F I1013 00:21:42.563420 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-8-crc in Admin Network Policy controller 2025-10-13T00:21:42.563429837+00:00 stderr F I1013 00:21:42.563424 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-8-crc Admin Network Policy controller: took 4.23µs 2025-10-13T00:21:42.563437977+00:00 stderr F I1013 00:21:42.563430 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in Admin Network Policy controller 2025-10-13T00:21:42.563437977+00:00 stderr F I1013 00:21:42.563435 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc Admin Network Policy controller: took 5.34µs 2025-10-13T00:21:42.563446407+00:00 stderr F I1013 00:21:42.563441 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-server-v65wr in Admin Network Policy controller 2025-10-13T00:21:42.563454537+00:00 stderr F I1013 00:21:42.563446 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-server-v65wr Admin Network Policy controller: took 4.84µs 2025-10-13T00:21:42.563454537+00:00 stderr F I1013 00:21:42.563452 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/certified-operators-cms8q in Admin Network Policy controller 2025-10-13T00:21:42.563467668+00:00 stderr F I1013 00:21:42.563457 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/certified-operators-cms8q Admin Network Policy controller: took 5.13µs 2025-10-13T00:21:42.563467668+00:00 
stderr F I1013 00:21:42.563463 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-q88th in Admin Network Policy controller 2025-10-13T00:21:42.563476068+00:00 stderr F I1013 00:21:42.563468 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-q88th Admin Network Policy controller: took 4.861µs 2025-10-13T00:21:42.563485408+00:00 stderr F I1013 00:21:42.563475 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw in Admin Network Policy controller 2025-10-13T00:21:42.563485408+00:00 stderr F I1013 00:21:42.563467 28251 ovs.go:162] Exec(20): stdout: "\"b6:dc:d9:26:03:d4\"\n" 2025-10-13T00:21:42.563524129+00:00 stderr F I1013 00:21:42.563493 28251 ovs.go:163] Exec(20): stderr: "" 2025-10-13T00:21:42.563534269+00:00 stderr F I1013 00:21:42.563524 28251 ovs.go:159] Exec(21): /usr/bin/ovs-vsctl --timeout=15 set interface ovn-k8s-mp0 mac=b6\:dc\:d9\:26\:03\:d4 2025-10-13T00:21:42.563592881+00:00 stderr F I1013 00:21:42.563480 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw Admin Network Policy controller: took 5.54µs 2025-10-13T00:21:42.563602531+00:00 stderr F I1013 00:21:42.563595 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in Admin Network Policy controller 2025-10-13T00:21:42.563610901+00:00 stderr F I1013 00:21:42.563604 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 Admin Network Policy controller: took 11.33µs 2025-10-13T00:21:42.563619242+00:00 stderr F I1013 00:21:42.563612 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in Admin Network Policy controller 2025-10-13T00:21:42.563627572+00:00 stderr F I1013 00:21:42.563617 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc Admin Network Policy controller: took 5.521µs 2025-10-13T00:21:42.563627572+00:00 stderr F I1013 00:21:42.563624 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in Admin Network Policy controller 2025-10-13T00:21:42.563636422+00:00 stderr F I1013 00:21:42.563629 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 Admin Network Policy controller: took 5.31µs 2025-10-13T00:21:42.563660613+00:00 stderr F I1013 00:21:42.563635 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in Admin Network Policy controller 2025-10-13T00:21:42.563660613+00:00 stderr F I1013 00:21:42.563639 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/openshift-kube-scheduler-crc Admin Network Policy controller: took 4.79µs 2025-10-13T00:21:42.563660613+00:00 stderr F I1013 00:21:42.563645 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in Admin Network Policy controller 2025-10-13T00:21:42.563660613+00:00 stderr F I1013 00:21:42.563650 28251 admin_network_policy_pod.go:59] Finished syncing Pod 
openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf Admin Network Policy controller: took 4.9µs 2025-10-13T00:21:42.563660613+00:00 stderr F I1013 00:21:42.563657 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in Admin Network Policy controller 2025-10-13T00:21:42.563676533+00:00 stderr F I1013 00:21:42.563662 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b Admin Network Policy controller: took 5.44µs 2025-10-13T00:21:42.563676533+00:00 stderr F I1013 00:21:42.563668 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 in Admin Network Policy controller 2025-10-13T00:21:42.563676533+00:00 stderr F I1013 00:21:42.563673 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 Admin Network Policy controller: took 4.59µs 2025-10-13T00:21:42.563686153+00:00 stderr F I1013 00:21:42.563678 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-9-crc in Admin Network Policy controller 2025-10-13T00:21:42.563686153+00:00 stderr F I1013 00:21:42.563683 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-9-crc Admin Network Policy controller: took 5.03µs 2025-10-13T00:21:42.563694984+00:00 stderr F I1013 00:21:42.563689 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in Admin Network Policy controller 2025-10-13T00:21:42.563703294+00:00 stderr F I1013 00:21:42.563694 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm Admin Network Policy controller: took 5.011µs 2025-10-13T00:21:42.563703294+00:00 stderr F I1013 00:21:42.563700 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in Admin Network Policy controller 2025-10-13T00:21:42.563712084+00:00 stderr F I1013 00:21:42.563705 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-service-ca/service-ca-666f99b6f-kk8kg Admin Network Policy controller: took 4.88µs 2025-10-13T00:21:42.563720394+00:00 stderr F I1013 00:21:42.563711 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in Admin Network Policy controller 2025-10-13T00:21:42.563720394+00:00 stderr F I1013 00:21:42.563715 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv Admin Network Policy controller: took 4.54µs 2025-10-13T00:21:42.563729065+00:00 stderr F I1013 00:21:42.563722 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in Admin Network Policy controller 2025-10-13T00:21:42.563737425+00:00 stderr F I1013 00:21:42.563729 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd Admin Network Policy controller: took 5µs 2025-10-13T00:21:42.563737425+00:00 stderr F I1013 00:21:42.563735 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns/dns-default-gbw49 in Admin Network Policy controller 2025-10-13T00:21:42.563746055+00:00 stderr 
F I1013 00:21:42.563740 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns/dns-default-gbw49 Admin Network Policy controller: took 5.02µs 2025-10-13T00:21:42.563754595+00:00 stderr F I1013 00:21:42.563738 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-machine-approver 2025-10-13T00:21:42.563793866+00:00 stderr F I1013 00:21:42.563772 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openstack 2025-10-13T00:21:42.563793866+00:00 stderr F I1013 00:21:42.563783 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-dns 2025-10-13T00:21:42.563810587+00:00 stderr F I1013 00:21:42.563800 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-dns took: 7.28µs 2025-10-13T00:21:42.563810587+00:00 stderr F I1013 00:21:42.563807 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress 2025-10-13T00:21:42.563838828+00:00 stderr F I1013 00:21:42.563817 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress took: 4.19µs 2025-10-13T00:21:42.563838828+00:00 stderr F I1013 00:21:42.563828 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress-canary 2025-10-13T00:21:42.563838828+00:00 stderr F I1013 00:21:42.563831 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ovn-kubernetes 2025-10-13T00:21:42.563849468+00:00 stderr F I1013 00:21:42.563841 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-operator-lifecycle-manager 2025-10-13T00:21:42.563860238+00:00 stderr F I1013 00:21:42.563853 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ovn-kubernetes took: 11.75µs 2025-10-13T00:21:42.563868568+00:00 stderr F I1013 00:21:42.563856 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-operator-lifecycle-manager took: 8.87µs 2025-10-13T00:21:42.563868568+00:00 stderr F I1013 00:21:42.563863 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config-operator 2025-10-13T00:21:42.563877369+00:00 stderr F I1013 00:21:42.563870 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace hostpath-provisioner 2025-10-13T00:21:42.563885479+00:00 stderr F I1013 00:21:42.563874 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config-operator took: 2.89µs 2025-10-13T00:21:42.563885479+00:00 stderr F I1013 00:21:42.563882 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-machine-api 2025-10-13T00:21:42.563894459+00:00 stderr F I1013 00:21:42.563883 28251 obj_retry.go:541] Creating *factory.egressIPNamespace hostpath-provisioner took: 6.031µs 2025-10-13T00:21:42.563894459+00:00 stderr F I1013 00:21:42.563889 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ovirt-infra 2025-10-13T00:21:42.563903729+00:00 stderr F I1013 00:21:42.563892 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-machine-api took: 3.76µs 2025-10-13T00:21:42.563903729+00:00 stderr F I1013 00:21:42.563897 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ovirt-infra took: 3.03µs 2025-10-13T00:21:42.563903729+00:00 stderr F I1013 00:21:42.563899 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-route-controller-manager 2025-10-13T00:21:42.563913510+00:00 stderr F I1013 00:21:42.563903 28251 obj_retry.go:502] Add event 
received for *factory.egressIPNamespace openshift-operators 2025-10-13T00:21:42.563913510+00:00 stderr F I1013 00:21:42.563910 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-route-controller-manager took: 2.39µs 2025-10-13T00:21:42.563939070+00:00 stderr F I1013 00:21:42.563912 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-operators took: 2.07µs 2025-10-13T00:21:42.563939070+00:00 stderr F I1013 00:21:42.563918 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cloud-platform-infra 2025-10-13T00:21:42.563939070+00:00 stderr F I1013 00:21:42.563747 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/kube-controller-manager-crc in Admin Network Policy controller 2025-10-13T00:21:42.563939070+00:00 stderr F I1013 00:21:42.563929 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-operator 2025-10-13T00:21:42.563950001+00:00 stderr F I1013 00:21:42.563937 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-controller-manager-operator 2025-10-13T00:21:42.563950001+00:00 stderr F I1013 00:21:42.563941 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-operator took: 4.4µs 2025-10-13T00:21:42.563950001+00:00 stderr F I1013 00:21:42.563931 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/kube-controller-manager-crc Admin Network Policy controller: took 184.035µs 2025-10-13T00:21:42.563964901+00:00 stderr F I1013 00:21:42.563952 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-controller-manager-operator took: 7.981µs 2025-10-13T00:21:42.563964901+00:00 stderr F I1013 00:21:42.563955 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/installer-8-crc in Admin Network Policy controller 2025-10-13T00:21:42.563964901+00:00 stderr F I1013 00:21:42.563959 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-machine-config-operator 2025-10-13T00:21:42.563974751+00:00 stderr F I1013 00:21:42.563963 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/installer-8-crc Admin Network Policy controller: took 7.51µs 2025-10-13T00:21:42.563974751+00:00 stderr F I1013 00:21:42.563968 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-machine-config-operator took: 4.11µs 2025-10-13T00:21:42.563983641+00:00 stderr F I1013 00:21:42.563975 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-monitoring 2025-10-13T00:21:42.563983641+00:00 stderr F I1013 00:21:42.563978 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openstack-operators 2025-10-13T00:21:42.563993102+00:00 stderr F I1013 00:21:42.563983 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-monitoring took: 2.84µs 2025-10-13T00:21:42.563993102+00:00 stderr F I1013 00:21:42.563988 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-openstack-infra 2025-10-13T00:21:42.564002012+00:00 stderr F I1013 00:21:42.563994 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openstack-operators took: 8.481µs 2025-10-13T00:21:42.564002012+00:00 stderr F I1013 00:21:42.563969 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in Admin Network 
Policy controller 2025-10-13T00:21:42.564011122+00:00 stderr F I1013 00:21:42.564002 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-vsphere-infra 2025-10-13T00:21:42.564011122+00:00 stderr F I1013 00:21:42.564004 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr Admin Network Policy controller: took 34.481µs 2025-10-13T00:21:42.564020142+00:00 stderr F I1013 00:21:42.564013 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-vsphere-infra took: 4.74µs 2025-10-13T00:21:42.564020142+00:00 stderr F I1013 00:21:42.564015 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in Admin Network Policy controller 2025-10-13T00:21:42.564028543+00:00 stderr F I1013 00:21:42.564019 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-public 2025-10-13T00:21:42.564036693+00:00 stderr F I1013 00:21:42.564028 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kni-infra 2025-10-13T00:21:42.564036693+00:00 stderr F I1013 00:21:42.564021 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw Admin Network Policy controller: took 6.24µs 2025-10-13T00:21:42.564077034+00:00 stderr F I1013 00:21:42.564042 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kni-infra took: 6.88µs 2025-10-13T00:21:42.564077034+00:00 stderr F I1013 00:21:42.564058 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-nutanix-infra 2025-10-13T00:21:42.564077034+00:00 stderr F I1013 00:21:42.564062 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace default 2025-10-13T00:21:42.564077034+00:00 stderr F I1013 00:21:42.564065 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-nutanix-infra took: 2.121µs 2025-10-13T00:21:42.564093594+00:00 stderr F I1013 00:21:42.564074 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-node-identity 2025-10-13T00:21:42.564093594+00:00 stderr F I1013 00:21:42.564081 28251 obj_retry.go:541] Creating *factory.egressIPNamespace default took: 8.27µs 2025-10-13T00:21:42.564093594+00:00 stderr F I1013 00:21:42.564083 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-node-identity took: 3.21µs 2025-10-13T00:21:42.564093594+00:00 stderr F I1013 00:21:42.564089 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-authentication-operator 2025-10-13T00:21:42.564103465+00:00 stderr F I1013 00:21:42.564095 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-apiserver-operator 2025-10-13T00:21:42.564111555+00:00 stderr F I1013 00:21:42.564101 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-authentication-operator took: 3.52µs 2025-10-13T00:21:42.564120265+00:00 stderr F I1013 00:21:42.564109 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress-operator 2025-10-13T00:21:42.564120265+00:00 stderr F I1013 00:21:42.564029 28251 obj_retry.go:541] Creating *factory.egressIPNamespace kube-public took: 2.581µs 2025-10-13T00:21:42.564128635+00:00 stderr F I1013 00:21:42.564120 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace 
openshift-service-ca-operator 2025-10-13T00:21:42.564128635+00:00 stderr F I1013 00:21:42.564122 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress-operator took: 2.37µs 2025-10-13T00:21:42.564137646+00:00 stderr F I1013 00:21:42.564128 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-service-ca-operator took: 2.5µs 2025-10-13T00:21:42.564137646+00:00 stderr F I1013 00:21:42.564131 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-multus 2025-10-13T00:21:42.564137646+00:00 stderr F I1013 00:21:42.564109 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-apiserver-operator took: 6.76µs 2025-10-13T00:21:42.564146826+00:00 stderr F I1013 00:21:42.564140 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-node 2025-10-13T00:21:42.564146826+00:00 stderr F I1013 00:21:42.564142 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-multus took: 3.791µs 2025-10-13T00:21:42.564155656+00:00 stderr F I1013 00:21:42.564147 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-node took: 1.24µs 2025-10-13T00:21:42.564155656+00:00 stderr F I1013 00:21:42.564151 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-oauth-apiserver 2025-10-13T00:21:42.564164416+00:00 stderr F I1013 00:21:42.564153 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cloud-network-config-controller 2025-10-13T00:21:42.564164416+00:00 stderr F I1013 00:21:42.564161 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-oauth-apiserver took: 2.76µs 2025-10-13T00:21:42.564172666+00:00 stderr F I1013 00:21:42.563864 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift 2025-10-13T00:21:42.564172666+00:00 stderr F I1013 00:21:42.564090 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-image-registry 2025-10-13T00:21:42.564184867+00:00 stderr F I1013 00:21:42.564173 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift took: 1.38µs 2025-10-13T00:21:42.564184867+00:00 stderr F I1013 00:21:42.564179 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-dns-operator 2025-10-13T00:21:42.564184867+00:00 stderr F I1013 00:21:42.564180 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-image-registry took: 3.47µs 2025-10-13T00:21:42.564194847+00:00 stderr F I1013 00:21:42.564185 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-dns-operator took: 1.54µs 2025-10-13T00:21:42.564194847+00:00 stderr F I1013 00:21:42.564188 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-controller-manager 2025-10-13T00:21:42.564194847+00:00 stderr F I1013 00:21:42.564191 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-apiserver 2025-10-13T00:21:42.564205797+00:00 stderr F I1013 00:21:42.564199 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-controller-manager took: 3.74µs 2025-10-13T00:21:42.564214338+00:00 stderr F I1013 00:21:42.564162 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cloud-network-config-controller took: 2.5µs 2025-10-13T00:21:42.564214338+00:00 stderr F I1013 00:21:42.563834 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress-canary took: 1.4µs 2025-10-13T00:21:42.564223138+00:00 stderr F I1013 
00:21:42.564214 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-node-lease 2025-10-13T00:21:42.564223138+00:00 stderr F I1013 00:21:42.564051 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-storage-version-migrator-operator 2025-10-13T00:21:42.564233398+00:00 stderr F I1013 00:21:42.564227 28251 obj_retry.go:541] Creating *factory.egressIPNamespace kube-node-lease took: 2.88µs 2025-10-13T00:21:42.564241508+00:00 stderr F I1013 00:21:42.564232 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-storage-version-migrator-operator took: 6.57µs 2025-10-13T00:21:42.564241508+00:00 stderr F I1013 00:21:42.563806 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openstack took: 18.141µs 2025-10-13T00:21:42.564250049+00:00 stderr F I1013 00:21:42.564240 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-marketplace 2025-10-13T00:21:42.564250049+00:00 stderr F I1013 00:21:42.564245 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-apiserver 2025-10-13T00:21:42.564258979+00:00 stderr F I1013 00:21:42.564252 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-marketplace took: 3.511µs 2025-10-13T00:21:42.564267649+00:00 stderr F I1013 00:21:42.564256 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-apiserver took: 3.79µs 2025-10-13T00:21:42.564267649+00:00 stderr F I1013 00:21:42.564259 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-storage-operator 2025-10-13T00:21:42.564292180+00:00 stderr F I1013 00:21:42.564264 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-host-network 2025-10-13T00:21:42.564292180+00:00 stderr F I1013 00:21:42.564276 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-host-network took: 2.87µs 2025-10-13T00:21:42.564292180+00:00 stderr F I1013 00:21:42.564280 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cloud-platform-infra took: 8.15µs 2025-10-13T00:21:42.564292180+00:00 stderr F I1013 00:21:42.564283 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console 2025-10-13T00:21:42.564292180+00:00 stderr F I1013 00:21:42.564287 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-infra 2025-10-13T00:21:42.564302860+00:00 stderr F I1013 00:21:42.564293 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console took: 2.09µs 2025-10-13T00:21:42.564302860+00:00 stderr F I1013 00:21:42.563995 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-openstack-infra took: 1.94µs 2025-10-13T00:21:42.564302860+00:00 stderr F I1013 00:21:42.564295 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-infra took: 2.22µs 2025-10-13T00:21:42.564315680+00:00 stderr F I1013 00:21:42.564302 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-etcd-operator 2025-10-13T00:21:42.564315680+00:00 stderr F I1013 00:21:42.564310 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-scheduler 2025-10-13T00:21:42.564343101+00:00 stderr F I1013 00:21:42.563821 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-storage-version-migrator 2025-10-13T00:21:42.564343101+00:00 stderr F I1013 00:21:42.564327 28251 obj_retry.go:541] Creating 
*factory.egressIPNamespace openshift-kube-scheduler took: 4.13µs 2025-10-13T00:21:42.564343101+00:00 stderr F I1013 00:21:42.564333 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-storage-version-migrator took: 12.711µs 2025-10-13T00:21:42.564385182+00:00 stderr F I1013 00:21:42.564363 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config 2025-10-13T00:21:42.564385182+00:00 stderr F I1013 00:21:42.564378 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config took: 3.29µs 2025-10-13T00:21:42.564429583+00:00 stderr F I1013 00:21:42.564199 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-apiserver took: 2.11µs 2025-10-13T00:21:42.564429583+00:00 stderr F I1013 00:21:42.564386 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-samples-operator 2025-10-13T00:21:42.564429583+00:00 stderr F I1013 00:21:42.564392 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-etcd 2025-10-13T00:21:42.564455984+00:00 stderr F I1013 00:21:42.564438 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-etcd took: 4.601µs 2025-10-13T00:21:42.564455984+00:00 stderr F I1013 00:21:42.564215 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-diagnostics 2025-10-13T00:21:42.564465304+00:00 stderr F I1013 00:21:42.564456 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-diagnostics took: 3.14µs 2025-10-13T00:21:42.564465304+00:00 stderr F I1013 00:21:42.564234 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-user-workload-monitoring 2025-10-13T00:21:42.564475795+00:00 stderr F I1013 00:21:42.564470 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-user-workload-monitoring took: 2.92µs 2025-10-13T00:21:42.564483885+00:00 stderr F I1013 00:21:42.563775 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-machine-approver took: 19.121µs 2025-10-13T00:21:42.564492745+00:00 stderr F I1013 00:21:42.564482 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-authentication 2025-10-13T00:21:42.564492745+00:00 stderr F I1013 00:21:42.564311 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-etcd-operator took: 2.45µs 2025-10-13T00:21:42.564501975+00:00 stderr F I1013 00:21:42.564266 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-storage-operator took: 1.57µs 2025-10-13T00:21:42.564501975+00:00 stderr F I1013 00:21:42.564493 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-authentication took: 4.78µs 2025-10-13T00:21:42.564501975+00:00 stderr F I1013 00:21:42.564497 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-version 2025-10-13T00:21:42.564511626+00:00 stderr F I1013 00:21:42.564500 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console-operator 2025-10-13T00:21:42.564511626+00:00 stderr F I1013 00:21:42.564506 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-version took: 3.43µs 2025-10-13T00:21:42.564525096+00:00 stderr F I1013 00:21:42.564509 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console-operator took: 2.07µs 2025-10-13T00:21:42.564525096+00:00 stderr F I1013 00:21:42.564394 28251 obj_retry.go:541] Creating *factory.egressIPNamespace 
openshift-cluster-samples-operator took: 1.76µs 2025-10-13T00:21:42.564525096+00:00 stderr F I1013 00:21:42.564516 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-apiserver-operator 2025-10-13T00:21:42.564536576+00:00 stderr F I1013 00:21:42.564529 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-apiserver-operator took: 3.92µs 2025-10-13T00:21:42.564536576+00:00 stderr F I1013 00:21:42.564518 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-controller-manager-operator 2025-10-13T00:21:42.564562207+00:00 stderr F I1013 00:21:42.564543 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-controller-manager-operator took: 3.16µs 2025-10-13T00:21:42.564562207+00:00 stderr F I1013 00:21:42.564553 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-system 2025-10-13T00:21:42.564571507+00:00 stderr F I1013 00:21:42.564563 28251 obj_retry.go:541] Creating *factory.egressIPNamespace kube-system took: 4.25µs 2025-10-13T00:21:42.564571507+00:00 stderr F I1013 00:21:42.564207 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-service-ca 2025-10-13T00:21:42.564596468+00:00 stderr F I1013 00:21:42.564582 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-service-ca took: 9.26µs 2025-10-13T00:21:42.564596468+00:00 stderr F I1013 00:21:42.564592 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-controller-manager 2025-10-13T00:21:42.564606058+00:00 stderr F I1013 00:21:42.564043 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in Admin Network Policy controller 2025-10-13T00:21:42.564606058+00:00 stderr F I1013 00:21:42.564599 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-controller-manager took: 2.43µs 2025-10-13T00:21:42.564614858+00:00 stderr F I1013 00:21:42.564604 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 Admin Network Policy controller: took 560.855µs 2025-10-13T00:21:42.564614858+00:00 stderr F I1013 00:21:42.564606 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config-managed 2025-10-13T00:21:42.564623339+00:00 stderr F I1013 00:21:42.564616 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config-managed took: 2.12µs 2025-10-13T00:21:42.564631469+00:00 stderr F I1013 00:21:42.564622 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-scheduler-operator 2025-10-13T00:21:42.564639779+00:00 stderr F I1013 00:21:42.564632 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-scheduler-operator took: 3.35µs 2025-10-13T00:21:42.564648099+00:00 stderr F I1013 00:21:42.564638 28251 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console-user-settings 2025-10-13T00:21:42.564648099+00:00 stderr F I1013 00:21:42.564644 28251 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console-user-settings took: 1.33µs 2025-10-13T00:21:42.564672530+00:00 stderr F I1013 00:21:42.564654 28251 factory.go:988] Added *v1.Namespace event handler 5 2025-10-13T00:21:42.564712271+00:00 stderr F I1013 00:21:42.564618 28251 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in Admin Network Policy controller 2025-10-13T00:21:42.564712271+00:00 stderr F I1013 00:21:42.564704 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m Admin Network Policy controller: took 86.703µs 2025-10-13T00:21:42.564727251+00:00 stderr F I1013 00:21:42.564711 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-dns-operator in Admin Network Policy controller 2025-10-13T00:21:42.564727251+00:00 stderr F I1013 00:21:42.564715 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-dns-operator Admin Network Policy controller: took 5.43µs 2025-10-13T00:21:42.564727251+00:00 stderr F I1013 00:21:42.564724 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-scheduler in Admin Network Policy controller 2025-10-13T00:21:42.564753692+00:00 stderr F I1013 00:21:42.564727 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-scheduler Admin Network Policy controller: took 3.37µs 2025-10-13T00:21:42.564753692+00:00 stderr F I1013 00:21:42.564732 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-storage-operator in Admin Network Policy controller 2025-10-13T00:21:42.564753692+00:00 stderr F I1013 00:21:42.564736 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 3.621µs 2025-10-13T00:21:42.564753692+00:00 stderr F I1013 00:21:42.564741 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress-operator in Admin Network Policy controller 2025-10-13T00:21:42.564753692+00:00 stderr F I1013 00:21:42.564744 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress-operator Admin Network Policy controller: took 3.07µs 2025-10-13T00:21:42.564753692+00:00 stderr F I1013 00:21:42.564749 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-node-identity in Admin Network Policy controller 2025-10-13T00:21:42.564796303+00:00 stderr F I1013 00:21:42.564776 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-node-identity Admin Network Policy controller: took 3.03µs 2025-10-13T00:21:42.564796303+00:00 stderr F I1013 00:21:42.564784 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-public in Admin Network Policy controller 2025-10-13T00:21:42.564796303+00:00 stderr F I1013 00:21:42.564788 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-public Admin Network Policy controller: took 3.31µs 2025-10-13T00:21:42.564796303+00:00 stderr F I1013 00:21:42.564793 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cloud-network-config-controller in Admin Network Policy controller 2025-10-13T00:21:42.564808724+00:00 stderr F I1013 00:21:42.564796 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cloud-network-config-controller Admin Network Policy controller: took 3.19µs 2025-10-13T00:21:42.564808724+00:00 stderr F I1013 00:21:42.564801 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-host-network in Admin Network Policy controller 2025-10-13T00:21:42.564808724+00:00 stderr F I1013 00:21:42.564804 28251 
admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-host-network Admin Network Policy controller: took 3.09µs 2025-10-13T00:21:42.564818624+00:00 stderr F I1013 00:21:42.564808 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress-canary in Admin Network Policy controller 2025-10-13T00:21:42.564818624+00:00 stderr F I1013 00:21:42.564812 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress-canary Admin Network Policy controller: took 3.33µs 2025-10-13T00:21:42.564832494+00:00 stderr F I1013 00:21:42.564819 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-controller-manager-operator in Admin Network Policy controller 2025-10-13T00:21:42.564832494+00:00 stderr F I1013 00:21:42.564823 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-controller-manager-operator Admin Network Policy controller: took 6.13µs 2025-10-13T00:21:42.564832494+00:00 stderr F I1013 00:21:42.564827 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller 2025-10-13T00:21:42.564842224+00:00 stderr F I1013 00:21:42.564831 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 3.46µs 2025-10-13T00:21:42.564842224+00:00 stderr F I1013 00:21:42.564835 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console-operator in Admin Network Policy controller 2025-10-13T00:21:42.564842224+00:00 stderr F I1013 00:21:42.564839 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console-operator Admin Network Policy controller: took 3.09µs 2025-10-13T00:21:42.564852265+00:00 stderr F I1013 00:21:42.564843 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-openstack-infra in Admin Network Policy controller 2025-10-13T00:21:42.564852265+00:00 stderr F I1013 00:21:42.564847 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-openstack-infra Admin Network Policy controller: took 3.33µs 2025-10-13T00:21:42.564861975+00:00 stderr F I1013 00:21:42.564852 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-route-controller-manager in Admin Network Policy controller 2025-10-13T00:21:42.564861975+00:00 stderr F I1013 00:21:42.564855 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-route-controller-manager Admin Network Policy controller: took 3.35µs 2025-10-13T00:21:42.564871465+00:00 stderr F I1013 00:21:42.564859 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-service-ca-operator in Admin Network Policy controller 2025-10-13T00:21:42.564871465+00:00 stderr F I1013 00:21:42.564864 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-service-ca-operator Admin Network Policy controller: took 4.13µs 2025-10-13T00:21:42.564871465+00:00 stderr F I1013 00:21:42.564868 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-version in Admin Network Policy controller 2025-10-13T00:21:42.564881986+00:00 stderr F I1013 00:21:42.564871 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-version Admin Network Policy controller: took 3.08µs 2025-10-13T00:21:42.564881986+00:00 stderr F I1013 00:21:42.564876 28251 
admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console in Admin Network Policy controller 2025-10-13T00:21:42.564891586+00:00 stderr F I1013 00:21:42.564879 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console Admin Network Policy controller: took 3.07µs 2025-10-13T00:21:42.564891586+00:00 stderr F I1013 00:21:42.564884 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-image-registry in Admin Network Policy controller 2025-10-13T00:21:42.564900916+00:00 stderr F I1013 00:21:42.564889 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-image-registry Admin Network Policy controller: took 2.96µs 2025-10-13T00:21:42.564900916+00:00 stderr F I1013 00:21:42.564894 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-apiserver-operator in Admin Network Policy controller 2025-10-13T00:21:42.564900916+00:00 stderr F I1013 00:21:42.564897 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-apiserver-operator Admin Network Policy controller: took 3.49µs 2025-10-13T00:21:42.564914766+00:00 stderr F I1013 00:21:42.564902 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-multus in Admin Network Policy controller 2025-10-13T00:21:42.564914766+00:00 stderr F I1013 00:21:42.564906 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-multus Admin Network Policy controller: took 3.38µs 2025-10-13T00:21:42.564914766+00:00 stderr F I1013 00:21:42.564910 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-operator in Admin Network Policy controller 2025-10-13T00:21:42.564924737+00:00 stderr F I1013 00:21:42.564913 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-operator Admin Network Policy controller: took 3.26µs 2025-10-13T00:21:42.564924737+00:00 stderr F I1013 00:21:42.564917 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-node-lease in Admin Network Policy controller 2025-10-13T00:21:42.564924737+00:00 stderr F I1013 00:21:42.564920 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-node-lease Admin Network Policy controller: took 2.961µs 2025-10-13T00:21:42.564934897+00:00 stderr F I1013 00:21:42.564925 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-system in Admin Network Policy controller 2025-10-13T00:21:42.564934897+00:00 stderr F I1013 00:21:42.564928 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-system Admin Network Policy controller: took 3.56µs 2025-10-13T00:21:42.564944017+00:00 stderr F I1013 00:21:42.564932 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-scheduler-operator in Admin Network Policy controller 2025-10-13T00:21:42.564944017+00:00 stderr F I1013 00:21:42.564937 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-scheduler-operator Admin Network Policy controller: took 4.05µs 2025-10-13T00:21:42.564944017+00:00 stderr F I1013 00:21:42.564938 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.564980248+00:00 stderr F I1013 00:21:42.564962 28251 obj_retry.go:541] Creating *factory.egressIPPod 
openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf took: 12.231µs 2025-10-13T00:21:42.564980248+00:00 stderr F I1013 00:21:42.564973 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.564980248+00:00 stderr F I1013 00:21:42.564961 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.565009309+00:00 stderr F I1013 00:21:42.564989 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-service-ca/service-ca-666f99b6f-kk8kg took: 8.55µs 2025-10-13T00:21:42.565009309+00:00 stderr F I1013 00:21:42.564994 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc took: 9.601µs 2025-10-13T00:21:42.565009309+00:00 stderr F I1013 00:21:42.564999 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.565009309+00:00 stderr F I1013 00:21:42.565003 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/installer-8-crc 2025-10-13T00:21:42.565020359+00:00 stderr F I1013 00:21:42.565007 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg took: 1.49µs 2025-10-13T00:21:42.565020359+00:00 stderr F I1013 00:21:42.565011 28251 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565020359+00:00 stderr F I1013 00:21:42.565015 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.565046120+00:00 stderr F I1013 00:21:42.565023 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-additional-cni-plugins-bzj2p took: 2.57µs 2025-10-13T00:21:42.565046120+00:00 stderr F I1013 00:21:42.565030 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.565046120+00:00 stderr F I1013 00:21:42.565033 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.565046120+00:00 stderr F I1013 00:21:42.565037 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/marketplace-operator-8b455464d-29pzg took: 2.17µs 2025-10-13T00:21:42.565046120+00:00 stderr F I1013 00:21:42.565043 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.565056230+00:00 stderr F I1013 00:21:42.565049 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 took: 8.61µs 2025-10-13T00:21:42.565064330+00:00 stderr F I1013 00:21:42.565058 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.565072811+00:00 stderr F I1013 00:21:42.565066 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.565072811+00:00 stderr F I1013 00:21:42.565068 28251 obj_retry.go:541] 
Creating *factory.egressIPPod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 took: 2.251µs 2025-10-13T00:21:42.565081381+00:00 stderr F I1013 00:21:42.565075 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-11-crc 2025-10-13T00:21:42.565090371+00:00 stderr F I1013 00:21:42.565079 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/image-registry-75b7bb6564-2mwg6 took: 7.1µs 2025-10-13T00:21:42.565090371+00:00 stderr F I1013 00:21:42.565082 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565090371+00:00 stderr F I1013 00:21:42.565085 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.565100541+00:00 stderr F I1013 00:21:42.565094 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.565100541+00:00 stderr F I1013 00:21:42.565097 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm took: 3.36µs 2025-10-13T00:21:42.565109712+00:00 stderr F I1013 00:21:42.565103 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.565118082+00:00 stderr F I1013 00:21:42.565110 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-admission-controller-6c7c885997-4hbbc took: 1.94µs 2025-10-13T00:21:42.565118082+00:00 stderr F I1013 00:21:42.565113 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.565127182+00:00 stderr F I1013 00:21:42.565050 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver/installer-13-crc took: 2.7µs 2025-10-13T00:21:42.565127182+00:00 stderr F I1013 00:21:42.565103 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-operator/network-operator-767c585db5-zd56b took: 2.22µs 2025-10-13T00:21:42.565140583+00:00 stderr F I1013 00:21:42.565128 28251 obj_retry.go:502] Add event received for *factory.egressIPPod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.565140583+00:00 stderr F I1013 00:21:42.565131 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/kube-rbac-proxy-crio-crc took: 9.39µs 2025-10-13T00:21:42.565140583+00:00 stderr F I1013 00:21:42.565136 28251 obj_retry.go:541] Creating *factory.egressIPPod hostpath-provisioner/csi-hostpathplugin-hvm8g took: 2.56µs 2025-10-13T00:21:42.565150393+00:00 stderr F I1013 00:21:42.565138 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.565150393+00:00 stderr F I1013 00:21:42.565142 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2025-10-13T00:21:42.565159093+00:00 stderr F I1013 00:21:42.565147 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m took: 2.46µs 2025-10-13T00:21:42.565159093+00:00 stderr F I1013 
00:21:42.565149 28251 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565159093+00:00 stderr F I1013 00:21:42.565154 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.565170213+00:00 stderr F I1013 00:21:42.565164 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh took: 2.88µs 2025-10-13T00:21:42.565178244+00:00 stderr F I1013 00:21:42.565171 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.565188644+00:00 stderr F I1013 00:21:42.565182 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw 2025-10-13T00:21:42.565188644+00:00 stderr F I1013 00:21:42.565184 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs took: 6.461µs 2025-10-13T00:21:42.565197354+00:00 stderr F I1013 00:21:42.565188 28251 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565197354+00:00 stderr F I1013 00:21:42.565192 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.565207734+00:00 stderr F I1013 00:21:42.565201 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-node-identity/network-node-identity-7xghp took: 2.32µs 2025-10-13T00:21:42.565217025+00:00 stderr F I1013 00:21:42.564975 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.565217025+00:00 stderr F I1013 00:21:42.565213 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.565225555+00:00 stderr F I1013 00:21:42.565217 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-wzh74 took: 3.531µs 2025-10-13T00:21:42.565225555+00:00 stderr F I1013 00:21:42.565218 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-8-crc 2025-10-13T00:21:42.565238635+00:00 stderr F I1013 00:21:42.565224 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-11-crc 2025-10-13T00:21:42.565238635+00:00 stderr F I1013 00:21:42.565225 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 took: 5.76µs 2025-10-13T00:21:42.565238635+00:00 stderr F I1013 00:21:42.565229 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-11-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.565248065+00:00 stderr F I1013 00:21:42.565238 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-q88th 2025-10-13T00:21:42.565248065+00:00 stderr F I1013 00:21:42.565241 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.565257516+00:00 stderr F I1013 00:21:42.565245 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.565257516+00:00 stderr F I1013 00:21:42.565251 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv took: 1.91µs 2025-10-13T00:21:42.565257516+00:00 stderr F I1013 00:21:42.565251 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-q88th took: 6.63µs 2025-10-13T00:21:42.565267146+00:00 stderr F I1013 00:21:42.565257 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.565267146+00:00 stderr F I1013 00:21:42.565260 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.565276066+00:00 stderr F I1013 00:21:42.565264 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-server-v65wr took: 10.901µs 2025-10-13T00:21:42.565276066+00:00 stderr F I1013 00:21:42.565231 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-10-retry-1-crc 2025-10-13T00:21:42.565276066+00:00 stderr F I1013 00:21:42.565269 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw took: 2.97µs 2025-10-13T00:21:42.565285436+00:00 stderr F I1013 00:21:42.565274 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.565285436+00:00 stderr F I1013 00:21:42.565276 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565285436+00:00 stderr F I1013 00:21:42.565278 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-10-13T00:21:42.565294627+00:00 stderr F I1013 00:21:42.565284 28251 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.565294627+00:00 stderr F I1013 00:21:42.565286 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.565303687+00:00 stderr F I1013 00:21:42.565295 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.565303687+00:00 stderr F I1013 00:21:42.565297 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z took: 3.21µs 2025-10-13T00:21:42.565303687+00:00 stderr F I1013 00:21:42.564941 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-diagnostics in Admin Network Policy controller 2025-10-13T00:21:42.565320637+00:00 stderr F I1013 00:21:42.565305 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-diagnostics Admin Network Policy controller: took 362.96µs 2025-10-13T00:21:42.565320637+00:00 stderr F I1013 00:21:42.565165 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.565320637+00:00 stderr F I1013 00:21:42.565312 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console-user-settings in Admin Network Policy controller 2025-10-13T00:21:42.565320637+00:00 stderr F I1013 00:21:42.565313 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-scheduler/openshift-kube-scheduler-crc took: 9.32µs 2025-10-13T00:21:42.565320637+00:00 stderr F I1013 00:21:42.565316 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console-user-settings Admin Network Policy controller: took 4.69µs 2025-10-13T00:21:42.565350878+00:00 stderr F I1013 00:21:42.565324 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.565363769+00:00 stderr F I1013 00:21:42.565348 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns/dns-default-gbw49 took: 3.35µs 2025-10-13T00:21:42.565363769+00:00 stderr F I1013 00:21:42.565355 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.565371929+00:00 stderr F I1013 00:21:42.565363 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz took: 2.58µs 2025-10-13T00:21:42.565371929+00:00 stderr F I1013 00:21:42.565369 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565383 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-console/downloads-65476884b9-9wcvx took: 1.49µs 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565285 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-daemon-zpnhg took: 2.84µs 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565392 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565401 28251 obj_retry.go:541] Creating *factory.egressIPPod 
openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t took: 2.58µs 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565266 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd took: 2.37µs 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565202 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-9-crc 2025-10-13T00:21:42.565423020+00:00 stderr F I1013 00:21:42.565417 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565436400+00:00 stderr F I1013 00:21:42.565430 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.565444851+00:00 stderr F I1013 00:21:42.565438 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-ingress-canary/ingress-canary-2vhcn took: 2.43µs 2025-10-13T00:21:42.565458511+00:00 stderr F I1013 00:21:42.565444 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.565458511+00:00 stderr F I1013 00:21:42.565452 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 took: 2.26µs 2025-10-13T00:21:42.565482712+00:00 stderr F I1013 00:21:42.565457 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.565482712+00:00 stderr F I1013 00:21:42.565464 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-diagnostics/network-check-target-v54bt took: 1.43µs 2025-10-13T00:21:42.565482712+00:00 stderr F I1013 00:21:42.565469 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.565482712+00:00 stderr F I1013 00:21:42.565478 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz took: 2.041µs 2025-10-13T00:21:42.565492902+00:00 stderr F I1013 00:21:42.565400 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-10-crc 2025-10-13T00:21:42.565492902+00:00 stderr F I1013 00:21:42.565488 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.565503642+00:00 stderr F I1013 00:21:42.565498 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.565512043+00:00 stderr F I1013 00:21:42.565505 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-controller-manager/controller-manager-778975cc4f-x5vcf took: 1.4µs 2025-10-13T00:21:42.565520943+00:00 stderr F I1013 00:21:42.565511 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.565520943+00:00 stderr F I1013 00:21:42.565172 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.565529293+00:00 stderr F I1013 00:21:42.565518 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver/kube-apiserver-crc took: 2.13µs 2025-10-13T00:21:42.565538083+00:00 stderr F I1013 00:21:42.565531 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/certified-operators-cms8q took: 9.58µs 2025-10-13T00:21:42.565546843+00:00 stderr F I1013 00:21:42.565539 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.565555624+00:00 stderr F I1013 00:21:42.565547 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-controller-manager/kube-controller-manager-crc took: 2.39µs 2025-10-13T00:21:42.565555624+00:00 stderr F I1013 00:21:42.565552 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.565564624+00:00 stderr F I1013 00:21:42.565559 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b took: 2.27µs 2025-10-13T00:21:42.565573504+00:00 stderr F I1013 00:21:42.565564 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.565582084+00:00 stderr F I1013 00:21:42.565570 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh took: 1.83µs 2025-10-13T00:21:42.565582084+00:00 stderr F I1013 00:21:42.565576 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.565595225+00:00 stderr F I1013 00:21:42.565583 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/redhat-marketplace-crk87 took: 1.81µs 2025-10-13T00:21:42.565595225+00:00 stderr F I1013 00:21:42.565587 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.565604185+00:00 stderr F I1013 00:21:42.565305 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.565604185+00:00 stderr F I1013 00:21:42.565595 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/network-metrics-daemon-qdfr4 took: 1.86µs 2025-10-13T00:21:42.565637786+00:00 stderr F I1013 00:21:42.565610 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 took: 6.82µs 2025-10-13T00:21:42.565637786+00:00 stderr F I1013 00:21:42.565626 
28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.565637786+00:00 stderr F I1013 00:21:42.565635 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg took: 2.91µs 2025-10-13T00:21:42.565648606+00:00 stderr F I1013 00:21:42.565116 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.565656886+00:00 stderr F I1013 00:21:42.565651 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/redhat-operators-hkptr took: 4.75µs 2025-10-13T00:21:42.565665127+00:00 stderr F I1013 00:21:42.565657 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.565695927+00:00 stderr F I1013 00:21:42.565681 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/node-ca-l92hr took: 17.48µs 2025-10-13T00:21:42.565695927+00:00 stderr F I1013 00:21:42.565296 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/installer-7-crc 2025-10-13T00:21:42.565705418+00:00 stderr F I1013 00:21:42.565696 28251 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-7-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565713868+00:00 stderr F I1013 00:21:42.565707 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.565721818+00:00 stderr F I1013 00:21:42.565715 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 took: 1.79µs 2025-10-13T00:21:42.565721818+00:00 stderr F I1013 00:21:42.565057 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.565749359+00:00 stderr F I1013 00:21:42.565735 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv took: 9.42µs 2025-10-13T00:21:42.565749359+00:00 stderr F I1013 00:21:42.565745 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.565758939+00:00 stderr F I1013 00:21:42.565754 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr took: 2.01µs 2025-10-13T00:21:42.565767409+00:00 stderr F I1013 00:21:42.565759 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.565780640+00:00 stderr F I1013 00:21:42.565767 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns/node-resolver-dn27q took: 1.84µs 2025-10-13T00:21:42.565780640+00:00 stderr F I1013 00:21:42.565772 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-9-crc 2025-10-13T00:21:42.565780640+00:00 stderr F I1013 00:21:42.565777 28251 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.565791470+00:00 stderr F I1013 00:21:42.565785 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.565791470+00:00 stderr F I1013 00:21:42.565023 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.565800070+00:00 stderr F I1013 00:21:42.565793 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns-operator/dns-operator-75f687757b-nz2xb took: 1.79µs 2025-10-13T00:21:42.565808161+00:00 stderr F I1013 00:21:42.565799 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/image-pruner-29338560-zvlxb 2025-10-13T00:21:42.565816371+00:00 stderr F I1013 00:21:42.565808 28251 obj_retry.go:459] Detected object openshift-image-registry/image-pruner-29338560-zvlxb of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565824391+00:00 stderr F I1013 00:21:42.565326 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-etcd in Admin Network Policy controller 2025-10-13T00:21:42.565832221+00:00 stderr F I1013 00:21:42.565823 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-etcd Admin Network Policy controller: took 496.344µs 2025-10-13T00:21:42.565840511+00:00 stderr F I1013 00:21:42.565835 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-etcd-operator in Admin Network Policy controller 2025-10-13T00:21:42.565848732+00:00 stderr F I1013 00:21:42.565839 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-etcd-operator Admin Network Policy controller: took 4.65µs 2025-10-13T00:21:42.565848732+00:00 stderr F I1013 00:21:42.565846 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-apiserver in Admin Network Policy controller 2025-10-13T00:21:42.565857432+00:00 stderr F I1013 00:21:42.565851 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-apiserver Admin Network Policy controller: took 4.161µs 2025-10-13T00:21:42.565865722+00:00 stderr F I1013 00:21:42.565857 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller 2025-10-13T00:21:42.565865722+00:00 stderr F I1013 00:21:42.565861 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 4.17µs 2025-10-13T00:21:42.565874292+00:00 stderr F I1013 00:21:42.565799 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb took: 3.39µs 2025-10-13T00:21:42.565874292+00:00 stderr F I1013 00:21:42.565868 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-user-workload-monitoring in Admin Network Policy controller 2025-10-13T00:21:42.565882933+00:00 stderr F I1013 00:21:42.565874 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.565882933+00:00 stderr F I1013 00:21:42.565876 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-user-workload-monitoring Admin Network Policy controller: took 5.8µs 
2025-10-13T00:21:42.565897003+00:00 stderr F I1013 00:21:42.565881 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 took: 2.49µs 2025-10-13T00:21:42.565897003+00:00 stderr F I1013 00:21:42.565883 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-controller-manager in Admin Network Policy controller 2025-10-13T00:21:42.565897003+00:00 stderr F I1013 00:21:42.565887 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-10-crc 2025-10-13T00:21:42.565897003+00:00 stderr F I1013 00:21:42.565890 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-controller-manager Admin Network Policy controller: took 5.771µs 2025-10-13T00:21:42.565897003+00:00 stderr F I1013 00:21:42.565893 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565907343+00:00 stderr F I1013 00:21:42.565897 28251 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-oauth-apiserver in Admin Network Policy controller 2025-10-13T00:21:42.565907343+00:00 stderr F I1013 00:21:42.565901 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.565915663+00:00 stderr F I1013 00:21:42.565909 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp took: 2.03µs 2025-10-13T00:21:42.565915663+00:00 stderr F I1013 00:21:42.565230 28251 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-10-13T00:21:42.565945874+00:00 stderr F I1013 00:21:42.565930 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-etcd/etcd-crc 2025-10-13T00:21:42.565945874+00:00 stderr F I1013 00:21:42.565942 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-etcd/etcd-crc took: 2.7µs 2025-10-13T00:21:42.565955774+00:00 stderr F I1013 00:21:42.565948 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.565964215+00:00 stderr F I1013 00:21:42.565956 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-console/console-644bb77b49-5x5xk took: 1.63µs 2025-10-13T00:21:42.565972685+00:00 stderr F I1013 00:21:42.565962 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-12-crc 2025-10-13T00:21:42.565972685+00:00 stderr F I1013 00:21:42.565968 28251 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-12-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-10-13T00:21:42.565982175+00:00 stderr F I1013 00:21:42.565975 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.565982175+00:00 stderr F I1013 00:21:42.565901 28251 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-oauth-apiserver Admin Network Policy controller: took 4.76µs 2025-10-13T00:21:42.565990745+00:00 stderr F I1013 00:21:42.565984 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-ingress/router-default-5c9bf7bc58-6jctv took: 3.29µs 2025-10-13T00:21:42.565999256+00:00 stderr F I1013 00:21:42.565327 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b took: 8.71µs 2025-10-13T00:21:42.565999256+00:00 stderr F I1013 00:21:42.565991 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console/console-644bb77b49-5x5xk in Admin Network Policy controller 2025-10-13T00:21:42.565999256+00:00 stderr F I1013 00:21:42.565996 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.566013796+00:00 stderr F I1013 00:21:42.566003 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console/console-644bb77b49-5x5xk Admin Network Policy controller: took 9.801µs 2025-10-13T00:21:42.566013796+00:00 stderr F I1013 00:21:42.566004 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd took: 2.17µs 2025-10-13T00:21:42.566022766+00:00 stderr F I1013 00:21:42.566011 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in Admin Network Policy controller 2025-10-13T00:21:42.566022766+00:00 stderr F I1013 00:21:42.566012 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.566022766+00:00 stderr F I1013 00:21:42.566018 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z Admin Network Policy controller: took 6.46µs 2025-10-13T00:21:42.566032447+00:00 stderr F I1013 00:21:42.566023 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-operator/iptables-alerter-wwpnd took: 2.06µs 2025-10-13T00:21:42.566032447+00:00 stderr F I1013 00:21:42.566025 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns/node-resolver-dn27q in Admin Network Policy controller 2025-10-13T00:21:42.566041787+00:00 stderr F I1013 00:21:42.566030 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.566041787+00:00 stderr F I1013 00:21:42.566031 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns/node-resolver-dn27q Admin Network Policy controller: took 6.07µs 2025-10-13T00:21:42.566041787+00:00 stderr F I1013 00:21:42.566038 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-etcd/etcd-crc in Admin Network Policy controller 2025-10-13T00:21:42.566051157+00:00 stderr F I1013 00:21:42.566038 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 took: 2.14µs 
2025-10-13T00:21:42.566051157+00:00 stderr F I1013 00:21:42.566044 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-etcd/etcd-crc Admin Network Policy controller: took 5.17µs 2025-10-13T00:21:42.566051157+00:00 stderr F I1013 00:21:42.566046 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.566061587+00:00 stderr F I1013 00:21:42.566050 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-10-retry-1-crc in Admin Network Policy controller 2025-10-13T00:21:42.566061587+00:00 stderr F I1013 00:21:42.566055 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb took: 2.34µs 2025-10-13T00:21:42.566061587+00:00 stderr F I1013 00:21:42.566055 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-10-retry-1-crc Admin Network Policy controller: took 5.35µs 2025-10-13T00:21:42.566071418+00:00 stderr F I1013 00:21:42.566061 28251 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.566071418+00:00 stderr F I1013 00:21:42.566065 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in Admin Network Policy controller 2025-10-13T00:21:42.566080258+00:00 stderr F I1013 00:21:42.566070 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-daemon-zpnhg Admin Network Policy controller: took 5.241µs 2025-10-13T00:21:42.566080258+00:00 stderr F I1013 00:21:42.566070 28251 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/community-operators-gjctm took: 2.421µs 2025-10-13T00:21:42.566080258+00:00 stderr F I1013 00:21:42.566075 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 in Admin Network Policy controller 2025-10-13T00:21:42.566094678+00:00 stderr F I1013 00:21:42.566080 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 Admin Network Policy controller: took 4.85µs 2025-10-13T00:21:42.566094678+00:00 stderr F I1013 00:21:42.566083 28251 factory.go:988] Added *v1.Pod event handler 6 2025-10-13T00:21:42.566094678+00:00 stderr F I1013 00:21:42.566085 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in Admin Network Policy controller 2025-10-13T00:21:42.566104799+00:00 stderr F I1013 00:21:42.566092 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg Admin Network Policy controller: took 6.38µs 2025-10-13T00:21:42.566104799+00:00 stderr F I1013 00:21:42.566098 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in Admin Network Policy controller 2025-10-13T00:21:42.566113529+00:00 stderr F I1013 00:21:42.566102 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz Admin Network Policy controller: took 4.24µs 2025-10-13T00:21:42.566113529+00:00 stderr F I1013 00:21:42.566108 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-10-crc in 
Admin Network Policy controller 2025-10-13T00:21:42.566122819+00:00 stderr F I1013 00:21:42.566112 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-10-crc Admin Network Policy controller: took 4.31µs 2025-10-13T00:21:42.566122819+00:00 stderr F I1013 00:21:42.566118 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in Admin Network Policy controller 2025-10-13T00:21:42.566132139+00:00 stderr F I1013 00:21:42.566122 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh Admin Network Policy controller: took 4.21µs 2025-10-13T00:21:42.566132139+00:00 stderr F I1013 00:21:42.566128 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in Admin Network Policy controller 2025-10-13T00:21:42.566141429+00:00 stderr F I1013 00:21:42.566133 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-additional-cni-plugins-bzj2p Admin Network Policy controller: took 4.4µs 2025-10-13T00:21:42.566141429+00:00 stderr F I1013 00:21:42.566138 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in Admin Network Policy controller 2025-10-13T00:21:42.566151080+00:00 stderr F I1013 00:21:42.566142 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc Admin Network Policy controller: took 4.44µs 2025-10-13T00:21:42.566151080+00:00 stderr F I1013 00:21:42.566147 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-operator/iptables-alerter-wwpnd in Admin Network Policy controller 2025-10-13T00:21:42.566160640+00:00 stderr F I1013 00:21:42.566152 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-operator/iptables-alerter-wwpnd Admin Network Policy controller: took 5.04µs 2025-10-13T00:21:42.566160640+00:00 stderr F I1013 00:21:42.566157 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd in Admin Network Policy controller 2025-10-13T00:21:42.566173960+00:00 stderr F I1013 00:21:42.566146 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 10.217.0.0/22 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566173960+00:00 stderr F I1013 00:21:42.566162 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd Admin Network Policy controller: took 4.15µs 2025-10-13T00:21:42.566173960+00:00 stderr F I1013 00:21:42.566167 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-11-crc in Admin Network Policy controller 2025-10-13T00:21:42.566184231+00:00 stderr F I1013 00:21:42.566171 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-11-crc Admin Network Policy controller: took 4.2µs 2025-10-13T00:21:42.566193331+00:00 stderr F I1013 00:21:42.566182 28251 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in Admin Network Policy controller 2025-10-13T00:21:42.566193331+00:00 stderr F I1013 00:21:42.566187 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb Admin Network Policy controller: took 5.42µs 2025-10-13T00:21:42.566202671+00:00 stderr F I1013 00:21:42.566192 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-11-crc in Admin Network Policy controller 2025-10-13T00:21:42.566202671+00:00 stderr F I1013 00:21:42.566197 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-11-crc Admin Network Policy controller: took 4.53µs 2025-10-13T00:21:42.566211861+00:00 stderr F I1013 00:21:42.566202 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/installer-7-crc in Admin Network Policy controller 2025-10-13T00:21:42.566220912+00:00 stderr F I1013 00:21:42.566198 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566220912+00:00 stderr F I1013 00:21:42.566209 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/installer-7-crc Admin Network Policy controller: took 4.45µs 2025-10-13T00:21:42.566220912+00:00 stderr F I1013 00:21:42.566218 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in Admin Network Policy controller 2025-10-13T00:21:42.566231042+00:00 stderr F I1013 00:21:42.566223 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b Admin Network Policy controller: took 4.85µs 2025-10-13T00:21:42.566231042+00:00 stderr F I1013 00:21:42.566228 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress-canary/ingress-canary-2vhcn in Admin Network Policy controller 2025-10-13T00:21:42.566240502+00:00 stderr F I1013 00:21:42.566232 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress-canary/ingress-canary-2vhcn Admin Network Policy controller: took 4.21µs 2025-10-13T00:21:42.566240502+00:00 stderr F I1013 00:21:42.566236 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console/downloads-65476884b9-9wcvx in Admin Network Policy controller 2025-10-13T00:21:42.566253543+00:00 stderr F I1013 00:21:42.566242 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console/downloads-65476884b9-9wcvx Admin Network Policy controller: took 5.03µs 2025-10-13T00:21:42.566253543+00:00 stderr F I1013 00:21:42.566217 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 10.217.0.0/22 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert 
Value:{GoSet:[{GoUUID:0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566253543+00:00 stderr F I1013 00:21:42.566248 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in Admin Network Policy controller 2025-10-13T00:21:42.566263433+00:00 stderr F I1013 00:21:42.566254 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 Admin Network Policy controller: took 6.881µs 2025-10-13T00:21:42.566263433+00:00 stderr F I1013 00:21:42.566259 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/redhat-operators-hkptr in Admin Network Policy controller 2025-10-13T00:21:42.566272873+00:00 stderr F I1013 00:21:42.566263 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/redhat-operators-hkptr Admin Network Policy controller: took 3.73µs 2025-10-13T00:21:42.566272873+00:00 stderr F I1013 00:21:42.566268 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-operator/network-operator-767c585db5-zd56b in Admin Network Policy controller 2025-10-13T00:21:42.566282193+00:00 stderr F I1013 00:21:42.566272 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-operator/network-operator-767c585db5-zd56b Admin Network Policy controller: took 4.23µs 2025-10-13T00:21:42.566282193+00:00 stderr F I1013 00:21:42.566277 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in Admin Network Policy controller 2025-10-13T00:21:42.566291234+00:00 stderr F I1013 00:21:42.566281 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs Admin Network Policy controller: took 4.67µs 2025-10-13T00:21:42.566291234+00:00 stderr F I1013 00:21:42.566286 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in Admin Network Policy controller 2025-10-13T00:21:42.566300044+00:00 stderr F I1013 00:21:42.566290 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 Admin Network Policy controller: took 4.07µs 2025-10-13T00:21:42.566300044+00:00 stderr F I1013 00:21:42.566295 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-12-crc in Admin Network Policy controller 2025-10-13T00:21:42.566309344+00:00 stderr F I1013 00:21:42.566299 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-12-crc Admin Network Policy controller: took 4.12µs 2025-10-13T00:21:42.566309344+00:00 stderr F I1013 00:21:42.566304 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in Admin Network Policy controller 2025-10-13T00:21:42.566327134+00:00 stderr F I1013 00:21:42.566308 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv Admin Network Policy controller: took 4.34µs 2025-10-13T00:21:42.566327134+00:00 stderr F I1013 00:21:42.566313 28251 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-marketplace/marketplace-operator-8b455464d-29pzg in Admin Network Policy controller 2025-10-13T00:21:42.566327134+00:00 stderr F I1013 00:21:42.566317 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg Admin Network Policy controller: took 3.89µs 2025-10-13T00:21:42.566365026+00:00 stderr F I1013 00:21:42.566325 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in Admin Network Policy controller 2025-10-13T00:21:42.566365026+00:00 stderr F I1013 00:21:42.566347 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf Admin Network Policy controller: took 22.541µs 2025-10-13T00:21:42.566380826+00:00 stderr F I1013 00:21:42.566352 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in Admin Network Policy controller 2025-10-13T00:21:42.566380826+00:00 stderr F I1013 00:21:42.566358 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 Admin Network Policy controller: took 5.57µs 2025-10-13T00:21:42.566380826+00:00 stderr F I1013 00:21:42.566362 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in Admin Network Policy controller 2025-10-13T00:21:42.566380826+00:00 stderr F I1013 00:21:42.566366 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress/router-default-5c9bf7bc58-6jctv Admin Network Policy controller: took 3.881µs 2025-10-13T00:21:42.566380826+00:00 stderr F I1013 00:21:42.566371 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-13-crc in Admin Network Policy controller 2025-10-13T00:21:42.566380826+00:00 stderr F I1013 00:21:42.566375 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-13-crc Admin Network Policy controller: took 4.57µs 2025-10-13T00:21:42.566392326+00:00 stderr F I1013 00:21:42.566380 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-9-crc in Admin Network Policy controller 2025-10-13T00:21:42.566392326+00:00 stderr F I1013 00:21:42.566384 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-9-crc Admin Network Policy controller: took 4.4µs 2025-10-13T00:21:42.566392326+00:00 stderr F I1013 00:21:42.566389 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in Admin Network Policy controller 2025-10-13T00:21:42.566402207+00:00 stderr F I1013 00:21:42.566394 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 Admin Network Policy controller: took 4.66µs 2025-10-13T00:21:42.566402207+00:00 stderr F I1013 00:21:42.566399 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in Admin Network Policy controller 2025-10-13T00:21:42.566411187+00:00 stderr F I1013 00:21:42.566404 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh Admin Network Policy controller: took 4.051µs 2025-10-13T00:21:42.566424067+00:00 stderr 
F I1013 00:21:42.566410 28251 admin_network_policy_pod.go:56] Processing sync for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in Admin Network Policy controller 2025-10-13T00:21:42.566424067+00:00 stderr F I1013 00:21:42.566414 28251 admin_network_policy_pod.go:59] Finished syncing Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 Admin Network Policy controller: took 4.51µs 2025-10-13T00:21:42.566586881+00:00 stderr F I1013 00:21:42.566553 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {56cc3414-2f3a-494a-bb5b-fac89ab63a76}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566627853+00:00 stderr F I1013 00:21:42.566603 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:56cc3414-2f3a-494a-bb5b-fac89ab63a76}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566652693+00:00 stderr F I1013 00:21:42.566623 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {56cc3414-2f3a-494a-bb5b-fac89ab63a76}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:56cc3414-2f3a-494a-bb5b-fac89ab63a76}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566983742+00:00 stderr F I1013 00:21:42.566916 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[ip-family:ip k8s.ovn.org/id:default-network-controller:EgressIP:102:EIP-No-Reroute-reply-traffic:ip k8s.ovn.org/name:EIP-No-Reroute-reply-traffic k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressIP priority:102]} match:pkt.mark == 42 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {725b51e4-bb6e-431f-9362-bdc8357c6b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.566983742+00:00 stderr F I1013 00:21:42.566967 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:725b51e4-bb6e-431f-9362-bdc8357c6b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.567020073+00:00 stderr F I1013 00:21:42.566981 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[ip-family:ip k8s.ovn.org/id:default-network-controller:EgressIP:102:EIP-No-Reroute-reply-traffic:ip k8s.ovn.org/name:EIP-No-Reroute-reply-traffic k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressIP priority:102]} match:pkt.mark == 42 priority:102] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {725b51e4-bb6e-431f-9362-bdc8357c6b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:725b51e4-bb6e-431f-9362-bdc8357c6b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.567295410+00:00 stderr F I1013 00:21:42.567266 28251 address_set.go:304] New(89261db8-e0fa-461c-a902-652c89cbed51/default-network-controller:EgressIP:node-ips:v4/a14918748166599097711) with [] 2025-10-13T00:21:42.567295410+00:00 stderr F I1013 00:21:42.567290 28251 address_set.go:304] New(b27c19cc-d9d0-4d57-a5a8-06fcff438e8a/default-network-controller:EgressIP:egressip-served-pods:v4/a4548040316634674295) with [] 2025-10-13T00:21:42.567384063+00:00 stderr F I1013 00:21:42.567309 28251 address_set.go:304] New(6ad154d5-3275-4190-8d21-ff202885643c/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with [] 2025-10-13T00:21:42.567419534+00:00 stderr F I1013 00:21:42.567398 28251 obj_retry.go:502] Add event received for *factory.egressNode crc 2025-10-13T00:21:42.567599149+00:00 stderr F I1013 00:21:42.567559 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[172.17.0.5 172.18.0.5 172.19.0.5 192.168.122.10 192.168.126.11 38.102.83.180]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {89261db8-e0fa-461c-a902-652c89cbed51}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.567599149+00:00 stderr F I1013 00:21:42.567586 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[172.17.0.5 172.18.0.5 172.19.0.5 192.168.122.10 192.168.126.11 38.102.83.180]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {89261db8-e0fa-461c-a902-652c89cbed51}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568055901+00:00 stderr F I1013 00:21:42.568011 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:(ip4.src == $a4548040316634674295 || ip4.src == $a13607449821398607916) && ip4.dst == $a14918748166599097711 options:{GoMap:map[pkt_mark:1008]} priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6eea91c7-d03c-41d8-906d-a701fa21697a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568067561+00:00 stderr F I1013 00:21:42.568051 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:6eea91c7-d03c-41d8-906d-a701fa21697a}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568102762+00:00 stderr F I1013 00:21:42.568066 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:(ip4.src == $a4548040316634674295 || ip4.src == $a13607449821398607916) && ip4.dst == $a14918748166599097711 options:{GoMap:map[pkt_mark:1008]} priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6eea91c7-d03c-41d8-906d-a701fa21697a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:6eea91c7-d03c-41d8-906d-a701fa21697a}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568406120+00:00 stderr F I1013 00:21:42.568314 28251 egressip.go:1280] Egress node: crc about to be initialized 2025-10-13T00:21:42.568459072+00:00 stderr F I1013 00:21:42.568415 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f05c0207-10b6-48ad-8eef-da8e7afca1a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568520033+00:00 stderr F I1013 00:21:42.568454 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:f05c0207-10b6-48ad-8eef-da8e7afca1a7}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568548084+00:00 stderr F I1013 00:21:42.568513 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f05c0207-10b6-48ad-8eef-da8e7afca1a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:f05c0207-10b6-48ad-8eef-da8e7afca1a7}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.568816881+00:00 stderr F I1013 00:21:42.568780 28251 obj_retry.go:541] Creating *factory.egressNode crc took: 1.366256ms 2025-10-13T00:21:42.568816881+00:00 stderr F I1013 00:21:42.568804 28251 factory.go:988] Added *v1.Node event handler 7 2025-10-13T00:21:42.568847582+00:00 stderr F I1013 00:21:42.568828 28251 factory.go:988] Added *v1.EgressIP event handler 8 2025-10-13T00:21:42.569040867+00:00 stderr F I1013 00:21:42.569009 28251 factory.go:988] Added *v1.EgressFirewall event handler 9 2025-10-13T00:21:42.569076118+00:00 stderr F I1013 00:21:42.569057 28251 obj_retry.go:502] Add event received for *factory.egressFwNode crc 2025-10-13T00:21:42.569110869+00:00 stderr F I1013 00:21:42.569092 28251 obj_retry.go:541] Creating *factory.egressFwNode crc took: 23.371µs 2025-10-13T00:21:42.569110869+00:00 stderr F I1013 00:21:42.569107 28251 factory.go:988] Added *v1.Node event handler 10 2025-10-13T00:21:42.569138140+00:00 stderr F I1013 00:21:42.569118 28251 egressqos.go:193] Setting up event handlers for EgressQoS 2025-10-13T00:21:42.569285224+00:00 stderr F I1013 00:21:42.569255 28251 egressqos.go:245] Starting EgressQoS Controller 2025-10-13T00:21:42.569285224+00:00 stderr F I1013 00:21:42.569269 28251 shared_informer.go:311] Waiting for caches to sync for egressqosnodes 2025-10-13T00:21:42.569285224+00:00 stderr F I1013 00:21:42.569275 28251 shared_informer.go:318] Caches are synced for egressqosnodes 2025-10-13T00:21:42.569285224+00:00 stderr F I1013 00:21:42.569281 28251 shared_informer.go:311] Waiting for caches to sync for egressqospods 2025-10-13T00:21:42.569298434+00:00 stderr F I1013 00:21:42.569286 28251 
shared_informer.go:318] Caches are synced for egressqospods 2025-10-13T00:21:42.569298434+00:00 stderr F I1013 00:21:42.569291 28251 shared_informer.go:311] Waiting for caches to sync for egressqos 2025-10-13T00:21:42.569298434+00:00 stderr F I1013 00:21:42.569294 28251 shared_informer.go:318] Caches are synced for egressqos 2025-10-13T00:21:42.569307975+00:00 stderr F I1013 00:21:42.569297 28251 egressqos.go:259] Repairing EgressQoSes 2025-10-13T00:21:42.569307975+00:00 stderr F I1013 00:21:42.569301 28251 egressqos.go:400] Starting repairing loop for egressqos 2025-10-13T00:21:42.569395137+00:00 stderr F I1013 00:21:42.569374 28251 egressqos.go:402] Finished repairing loop for egressqos: 72.572µs 2025-10-13T00:21:42.569395137+00:00 stderr F I1013 00:21:42.569391 28251 egressservice_zone.go:129] Setting up event handlers for Egress Services 2025-10-13T00:21:42.569452978+00:00 stderr F I1013 00:21:42.569436 28251 egressqos.go:1008] Processing sync for EgressQoS node crc 2025-10-13T00:21:42.569577492+00:00 stderr F I1013 00:21:42.569546 28251 egressservice_zone.go:205] Starting Egress Services Controller 2025-10-13T00:21:42.569577492+00:00 stderr F I1013 00:21:42.569561 28251 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-10-13T00:21:42.569577492+00:00 stderr F I1013 00:21:42.569567 28251 shared_informer.go:318] Caches are synced for egressservices 2025-10-13T00:21:42.569577492+00:00 stderr F I1013 00:21:42.569572 28251 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-10-13T00:21:42.569594762+00:00 stderr F I1013 00:21:42.569577 28251 shared_informer.go:318] Caches are synced for egressservices_services 2025-10-13T00:21:42.569594762+00:00 stderr F I1013 00:21:42.569582 28251 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-10-13T00:21:42.569594762+00:00 stderr F I1013 00:21:42.569583 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-controller-manager/kube-controller-manager for endpointslice openshift-kube-controller-manager/kube-controller-manager-fcp2k as it is not a known egress service 2025-10-13T00:21:42.569604263+00:00 stderr F I1013 00:21:42.569587 28251 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-10-13T00:21:42.569604263+00:00 stderr F I1013 00:21:42.569597 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-scheduler-operator/metrics for endpointslice openshift-kube-scheduler-operator/metrics-zk8d6 as it is not a known egress service 2025-10-13T00:21:42.569613523+00:00 stderr F I1013 00:21:42.569602 28251 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2025-10-13T00:21:42.569613523+00:00 stderr F I1013 00:21:42.569607 28251 shared_informer.go:318] Caches are synced for egressservices_nodes 2025-10-13T00:21:42.569613523+00:00 stderr F I1013 00:21:42.569608 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-controller for endpointslice openshift-machine-config-operator/machine-config-controller-7t8hc as it is not a known egress service 2025-10-13T00:21:42.569622743+00:00 stderr F I1013 00:21:42.569612 28251 egressservice_zone.go:223] Repairing Egress Services 2025-10-13T00:21:42.569622743+00:00 stderr F I1013 00:21:42.569617 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/package-server-manager-metrics for endpointslice 
openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p as it is not a known egress service 2025-10-13T00:21:42.569800228+00:00 stderr F I1013 00:21:42.569447 28251 egressqos.go:1023] EgressQoS crc node retrieved from lister: &Node{ObjectMeta:{crc c83c88d3-f34d-4083-a59d-1c50f90f89b8 42669 0 2024-06-26 12:44:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:crc kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node-role.kubernetes.io/worker: node.openshift.io/os_id:rhcos topology.hostpath.csi/node:crc] map[csi.volume.kubernetes.io/nodeid:{"kubevirt.io.hostpath-provisioner":"crc"} k8s.ovn.org/host-cidrs:["172.17.0.5/24","172.18.0.5/24","172.19.0.5/24","192.168.122.10/24","192.168.126.11/24","38.102.83.180/24"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"br-ex_crc","mac-address":"fa:16:3e:c3:15:08","ip-addresses":["38.102.83.180/24"],"ip-address":"38.102.83.180/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/network-ids:{"default":"0"} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-mgmt-port-mac-address:b6:dc:d9:26:03:d4 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.180/24"} k8s.ovn.org/node-subnets:{"default":["10.217.0.0/23"]} k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"} k8s.ovn.org/remote-zone-migrated:crc k8s.ovn.org/zone-name:crc machineconfiguration.openshift.io/controlPlaneTopology:SingleReplica machineconfiguration.openshift.io/currentConfig:rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/desiredConfig:rendered-master-ef556ead28ddfad01c34ac56c7adfb5a machineconfiguration.openshift.io/desiredDrain:uncordon-rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/lastAppliedDrain:uncordon-rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/lastObservedServerCAAnnotation:false machineconfiguration.openshift.io/lastSyncedControllerConfigResourceVersion:42002 machineconfiguration.openshift.io/post-config-action: machineconfiguration.openshift.io/reason:missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-10-13T00:21:42.569800228+00:00 stderr P machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found machineconfiguration.openshift.io/state:Degraded volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{12 0} {} 12 DecimalSI},ephemeral-storage: {{85294297088 0} {} 83295212Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{33654132736 0} {} 32865364Ki BinarySI},pods: {{250 0} {} 250 DecimalSI},},Allocatable:ResourceList{cpu: {{11800 -3} {} 11800m DecimalSI},ephemeral-storage: {{76397865653 0} {} 76397865653 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{33182273536 0} {} 32404564Ki BinarySI},pods: {{250 0} {} 250 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-10-13 00:21:35 +0000 
UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-10-13 00:21:35 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-10-13 00:21:35 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-10-13 00:21:35 +0000 UTC,LastTransitionTime:2025-10-13 00:14:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.126.11,},NodeAddress{Type:Hostname,Address:crc,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1bd596843fb445da20eca66471ddf66,SystemUUID:c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2,BootID:52d31473-3707-4197-b2aa-2769ec9105af,KernelVersion:5.14.0-427.22.1.el9_4.x86_64,OSImage:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0,ContainerRuntimeVersion:cri-o://1.29.5-5.rhaos4.16.git7032128.el9,KubeletVersion:v1.29.5+29c95f3,KubeProxyVersion:v1.29.5+29c95f3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:5b3e1a0801f0eb82eee25d9a30a9ac670ea391e6b4d29683fd4e3c51632c1e6b registry.redhat.io/redhat/redhat-operator-index@sha256:90182b61065bf1b1f1f131333fa52f7a8ad92c143119e4cad30a368fbc65a155 registry.redhat.io/redhat/redhat-operator-index:v4.16],SizeBytes:3741216144,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:98affcb112cdd069bfed8c7b5f300a1252e5b67d78a89515108331d589de6390],SizeBytes:3571551444,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f],SizeBytes:2572133253,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6],SizeBytes:2121001615,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:04de2cde6108388f00749945d558ebfcb462851573249de7e25845b34373f7e9 registry.redhat.io/redhat/community-operator-index@sha256:6bf1fb87767fed6815a141f6d793f80550d5d18439f0a8015f10ed87e7884e4c registry.redhat.io/redhat/community-operator-index:v4.16],SizeBytes:1888673182,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:6baf614a7bf44076cbfcf4d04b52541e8eaaf096cfaeb88b1ccccfb8d8b9d50a],SizeBytes:1858356929,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:7e1e0c2592c23302d2e2f6f72cc4752f1db35d33c11c77a11701dd0dd37f1647 registry.redhat.io/redhat/certified-operator-index@sha256:ee0abb5a575e6c9fdd1d89148da163322288878935a396f74f761ce95c5b19f6 registry.redhat.io/redhat/certified-operator-index:v4.16],SizeBytes:1631222185,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:c465a1b8318dfd33120b2692dd1f6aae7db4b6e69f50720cb1391d55fadec562],SizeBytes:1487797447,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:68d14b34dbf1db524862ddf16903158cee2d1b84017c3137648f0f36c033bdaf 
registry.redhat.io/redhat/redhat-marketplace-index@sha256:99e005d53d321b364601355bed1690870e17f9edd8f029d33a1cde74bc1d585b registry.redhat.io/redhat/redhat-marketplace-index:v4.16],SizeBytes:1486936464,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:55a23334bbbe439e5a7e305ed720325ac4192d078b06fe8be277fe1eff62c533],SizeBytes:1458020021,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b],SizeBytes:1374511543,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009],SizeBytes:1346691049,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca],SizeBytes:1222078702,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f],SizeBytes:1116811194,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842],SizeBytes:1067242914,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9b 2025-10-13T00:21:42.569825849+00:00 stderr F b0ccf6512d],SizeBytes:993487271,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a],SizeBytes:874809222,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2],SizeBytes:829474731,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251],SizeBytes:826261505,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8],SizeBytes:823328808,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2],SizeBytes:775169417,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648],SizeBytes:685289316,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73],SizeBytes:677900529,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae],SizeBytes:654603911,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc],SizeBytes:596693555,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78],SizeBytes:568208801,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce],SizeBytes:562097717,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403],SizeBytes:541135334,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d],SizeBytes:539461335,},Con
tainerImage{Names:[registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8 registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9 registry.redhat.io/openshift4/ose-csi-external-provisioner:latest],SizeBytes:520763795,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3],SizeBytes:507363664,},ContainerImage{Names:[quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69],SizeBytes:503433479,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3],SizeBytes:503286020,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce],SizeBytes:502054492,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b],SizeBytes:501535327,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75],SizeBytes:501474997,},ContainerImage{Names:[quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f],SizeBytes:499981426,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72],SizeBytes:498615097,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791],SizeBytes:498403671,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611],SizeBytes:497554071,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f],SizeBytes:497168817,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa],SizeBytes:497128745,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc],SizeBytes:496236158,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d],SizeBytes:495929820,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15],SizeBytes:494198000,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730],SizeBytes:493495521,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208],SizeBytes:492229908,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8],SizeBytes:488729683,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99],SizeBytes:487322445,},ContainerImage{Names:[quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0],SizeBytes:484252300,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} 2025-10-13T00:21:42.569825849+00:00 stderr F I1013 00:21:42.569811 28251 egressqos.go:1011] Finished syncing EgressQoS node crc : 377.12µs 2025-10-13T00:21:42.569825849+00:00 stderr F I1013 00:21:42.569807 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6ad154d5-3275-4190-8d21-ff202885643c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.569842459+00:00 stderr F I1013 00:21:42.569825 28251 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6ad154d5-3275-4190-8d21-ff202885643c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.570091206+00:00 stderr F I1013 00:21:42.570046 28251 master_controller.go:86] Starting Admin Policy Based Route Controller 2025-10-13T00:21:42.570091206+00:00 stderr F I1013 00:21:42.570056 28251 external_controller.go:276] Starting Admin Policy Based Route Controller 2025-10-13T00:21:42.570198639+00:00 stderr F I1013 00:21:42.570180 28251 default_network_controller.go:572] Completing all the Watchers took 128.982279ms 2025-10-13T00:21:42.570198639+00:00 stderr F I1013 00:21:42.570195 28251 default_network_controller.go:576] Starting unidling controllers 2025-10-13T00:21:42.570241530+00:00 stderr F I1013 00:21:42.570225 28251 unidle.go:45] Registering OVN SB ControllerEvent handler 2025-10-13T00:21:42.570241530+00:00 stderr F I1013 00:21:42.570237 28251 unidle.go:62] Populating Initial ContollerEvent events 2025-10-13T00:21:42.570267990+00:00 stderr F I1013 00:21:42.570250 28251 unidle.go:78] Setting up event handlers for services 2025-10-13T00:21:42.570372593+00:00 stderr F I1013 00:21:42.570324 28251 network_attach_def_controller.go:134] Starting network-controller-manager NAD controller 2025-10-13T00:21:42.570390694+00:00 stderr F I1013 00:21:42.570384 28251 shared_informer.go:311] Waiting for caches to sync for network-controller-manager 2025-10-13T00:21:42.570423515+00:00 stderr F I1013 00:21:42.570408 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-service-ca-operator/metrics for endpointslice openshift-service-ca-operator/metrics-wrkrj as it is not a known egress service 2025-10-13T00:21:42.570423515+00:00 stderr F I1013 00:21:42.570419 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-controllers for endpointslice openshift-machine-api/machine-api-controllers-j9jjt as it is not a known egress service 2025-10-13T00:21:42.570433475+00:00 stderr F I1013 00:21:42.570427 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-cluster-version/cluster-version-operator for endpointslice openshift-cluster-version/cluster-version-operator-qt7zf as it is not a known egress service 2025-10-13T00:21:42.570442215+00:00 stderr F I1013 00:21:42.570433 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-cxk8j as it is not a known egress service 2025-10-13T00:21:42.570442215+00:00 stderr F I1013 00:21:42.570439 28251 egressservice_zone_endpointslice.go:80] Ignoring updating 
openshift-kube-apiserver-operator/metrics for endpointslice openshift-kube-apiserver-operator/metrics-kbv55 as it is not a known egress service 2025-10-13T00:21:42.570450725+00:00 stderr F I1013 00:21:42.570445 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-controller-manager/controller-manager for endpointslice openshift-controller-manager/controller-manager-kxmft as it is not a known egress service 2025-10-13T00:21:42.570464426+00:00 stderr F I1013 00:21:42.570452 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress/router-internal-default for endpointslice openshift-ingress/router-internal-default-29hv8 as it is not a known egress service 2025-10-13T00:21:42.570464426+00:00 stderr F I1013 00:21:42.570458 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-storage-version-migrator-operator/metrics for endpointslice openshift-kube-storage-version-migrator-operator/metrics-zbxs7 as it is not a known egress service 2025-10-13T00:21:42.570473256+00:00 stderr F I1013 00:21:42.570465 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/redhat-operators for endpointslice openshift-marketplace/redhat-operators-47g6l as it is not a known egress service 2025-10-13T00:21:42.570497597+00:00 stderr F I1013 00:21:42.570472 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver/api for endpointslice openshift-apiserver/api-7hq6z as it is not a known egress service 2025-10-13T00:21:42.570497597+00:00 stderr F I1013 00:21:42.570479 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-authentication/oauth-openshift for endpointslice openshift-authentication/oauth-openshift-6gdxk as it is not a known egress service 2025-10-13T00:21:42.570497597+00:00 stderr F I1013 00:21:42.570485 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-config-operator/metrics for endpointslice openshift-config-operator/metrics-tw775 as it is not a known egress service 2025-10-13T00:21:42.570497597+00:00 stderr F I1013 00:21:42.570492 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-oauth-apiserver/api for endpointslice openshift-oauth-apiserver/api-2pj4d as it is not a known egress service 2025-10-13T00:21:42.570507937+00:00 stderr F I1013 00:21:42.570497 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-wg4kr as it is not a known egress service 2025-10-13T00:21:42.570507937+00:00 stderr F I1013 00:21:42.570504 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-controller-manager-operator/metrics for endpointslice openshift-kube-controller-manager-operator/metrics-cz5rv as it is not a known egress service 2025-10-13T00:21:42.570516717+00:00 stderr F I1013 00:21:42.570511 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/packageserver-service for endpointslice openshift-operator-lifecycle-manager/packageserver-service-tlm8t as it is not a known egress service 2025-10-13T00:21:42.570525087+00:00 stderr F I1013 00:21:42.570517 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-rdlxc as it is not a known egress service 2025-10-13T00:21:42.570533398+00:00 stderr F I1013 00:21:42.570524 28251 egressservice_zone_endpointslice.go:80] Ignoring updating 
openshift-image-registry/image-registry for endpointslice openshift-image-registry/image-registry-cfsrx as it is not a known egress service 2025-10-13T00:21:42.570533398+00:00 stderr F I1013 00:21:42.570530 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-scheduler/scheduler for endpointslice openshift-kube-scheduler/scheduler-4wbzh as it is not a known egress service 2025-10-13T00:21:42.570541968+00:00 stderr F I1013 00:21:42.570536 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/community-operators for endpointslice openshift-marketplace/community-operators-h826k as it is not a known egress service 2025-10-13T00:21:42.570550418+00:00 stderr F I1013 00:21:42.570543 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/certified-operators for endpointslice openshift-marketplace/certified-operators-bw9bv as it is not a known egress service 2025-10-13T00:21:42.570563168+00:00 stderr F I1013 00:21:42.570548 28251 egressservice_zone_endpointslice.go:80] Ignoring updating default/kubernetes for endpointslice default/kubernetes as it is not a known egress service 2025-10-13T00:21:42.570563168+00:00 stderr F I1013 00:21:42.570555 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-authentication-operator/metrics for endpointslice openshift-authentication-operator/metrics-dp499 as it is not a known egress service 2025-10-13T00:21:42.570563168+00:00 stderr F I1013 00:21:42.570561 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd-operator/metrics for endpointslice openshift-etcd-operator/metrics-z62zm as it is not a known egress service 2025-10-13T00:21:42.570572629+00:00 stderr F I1013 00:21:42.570567 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/control-plane-machine-set-operator for endpointslice openshift-machine-api/control-plane-machine-set-operator-nmjkn as it is not a known egress service 2025-10-13T00:21:42.570580969+00:00 stderr F I1013 00:21:42.570573 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator-machine-webhook for endpointslice openshift-machine-api/machine-api-operator-machine-webhook-xj4tp as it is not a known egress service 2025-10-13T00:21:42.570615890+00:00 stderr F I1013 00:21:42.570581 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator-webhook for endpointslice openshift-machine-api/machine-api-operator-webhook-x4gjx as it is not a known egress service 2025-10-13T00:21:42.570615890+00:00 stderr F I1013 00:21:42.570611 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-daemon for endpointslice openshift-machine-config-operator/machine-config-daemon-2nvnz as it is not a known egress service 2025-10-13T00:21:42.570626680+00:00 stderr F I1013 00:21:42.570618 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/catalog-operator-metrics for endpointslice openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 as it is not a known egress service 2025-10-13T00:21:42.570635120+00:00 stderr F I1013 00:21:42.570625 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-8wmzv as it is not a known egress service 2025-10-13T00:21:42.570635120+00:00 stderr F I1013 00:21:42.570633 28251 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress-canary/ingress-canary for endpointslice openshift-ingress-canary/ingress-canary-rhnd4 as it is not a known egress service 2025-10-13T00:21:42.570643711+00:00 stderr F I1013 00:21:42.570638 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/cluster-autoscaler-operator for endpointslice openshift-machine-api/cluster-autoscaler-operator-r4g5l as it is not a known egress service 2025-10-13T00:21:42.570651951+00:00 stderr F I1013 00:21:42.570645 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/redhat-marketplace for endpointslice openshift-marketplace/redhat-marketplace-8k279 as it is not a known egress service 2025-10-13T00:21:42.570660241+00:00 stderr F I1013 00:21:42.570652 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver-operator/metrics for endpointslice openshift-apiserver-operator/metrics-sgtfh as it is not a known egress service 2025-10-13T00:21:42.570668431+00:00 stderr F I1013 00:21:42.570658 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/webhook for endpointslice openshift-console-operator/webhook-b7j7h as it is not a known egress service 2025-10-13T00:21:42.570668431+00:00 stderr F I1013 00:21:42.570664 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-controller-manager-operator/metrics for endpointslice openshift-controller-manager-operator/metrics-psf8p as it is not a known egress service 2025-10-13T00:21:42.570681132+00:00 stderr F I1013 00:21:42.570671 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns/dns-default for endpointslice openshift-dns/dns-default-lctlx as it is not a known egress service 2025-10-13T00:21:42.570681132+00:00 stderr F I1013 00:21:42.570677 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-multus/multus-admission-controller for endpointslice openshift-multus/multus-admission-controller-s6h4d as it is not a known egress service 2025-10-13T00:21:42.570689852+00:00 stderr F I1013 00:21:42.570684 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/downloads for endpointslice openshift-console/downloads-zsr67 as it is not a known egress service 2025-10-13T00:21:42.570698122+00:00 stderr F I1013 00:21:42.570690 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-apiserver/apiserver for endpointslice openshift-kube-apiserver/apiserver-5mvtf as it is not a known egress service 2025-10-13T00:21:42.570706372+00:00 stderr F I1013 00:21:42.570696 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator for endpointslice openshift-machine-api/machine-api-operator-2js9r as it is not a known egress service 2025-10-13T00:21:42.570706372+00:00 stderr F I1013 00:21:42.570702 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress-operator/metrics for endpointslice openshift-ingress-operator/metrics-cd48g as it is not a known egress service 2025-10-13T00:21:42.570714922+00:00 stderr F I1013 00:21:42.570708 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/olm-operator-metrics for endpointslice openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 as it is not a known egress service 2025-10-13T00:21:42.570723273+00:00 stderr F I1013 00:21:42.570714 28251 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-p8xmw as it is not a known egress service 2025-10-13T00:21:42.570731503+00:00 stderr F I1013 00:21:42.570722 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/marketplace-operator-metrics for endpointslice openshift-marketplace/marketplace-operator-metrics-fcwkk as it is not a known egress service 2025-10-13T00:21:42.570731503+00:00 stderr F I1013 00:21:42.570728 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-route-controller-manager/route-controller-manager for endpointslice openshift-route-controller-manager/route-controller-manager-64jvm as it is not a known egress service 2025-10-13T00:21:42.570742243+00:00 stderr F I1013 00:21:42.570735 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver/check-endpoints for endpointslice openshift-apiserver/check-endpoints-sbfp5 as it is not a known egress service 2025-10-13T00:21:42.570750313+00:00 stderr F I1013 00:21:42.570741 28251 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-network-diagnostics/network-check-target for endpointslice openshift-network-diagnostics/network-check-target-kqkjk as it is not a known egress service 2025-10-13T00:21:42.570776374+00:00 stderr F I1013 00:21:42.570761 28251 egressservice_zone_node.go:110] Processing sync for Egress Service node crc 2025-10-13T00:21:42.570776374+00:00 stderr F I1013 00:21:42.570771 28251 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 11.09µs 2025-10-13T00:21:42.571028731+00:00 stderr F I1013 00:21:42.570997 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.571028731+00:00 stderr F I1013 00:21:42.571014 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.571028731+00:00 stderr F I1013 00:21:42.571021 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-10-13T00:21:42.571046911+00:00 stderr F I1013 00:21:42.571027 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-11-crc 2025-10-13T00:21:42.571046911+00:00 stderr F I1013 00:21:42.571032 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.571046911+00:00 stderr F I1013 00:21:42.571038 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.571046911+00:00 stderr F I1013 00:21:42.571044 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/installer-7-crc 2025-10-13T00:21:42.571057142+00:00 stderr F I1013 00:21:42.571050 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.571065682+00:00 stderr F I1013 00:21:42.571055 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.571065682+00:00 stderr F I1013 00:21:42.571061 28251 
external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.571074562+00:00 stderr F I1013 00:21:42.571067 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-11-crc 2025-10-13T00:21:42.571083072+00:00 stderr F I1013 00:21:42.571073 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.571083072+00:00 stderr F I1013 00:21:42.571079 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.571092023+00:00 stderr F I1013 00:21:42.571086 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.571100493+00:00 stderr F I1013 00:21:42.571091 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.571100493+00:00 stderr F I1013 00:21:42.571097 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.571109373+00:00 stderr F I1013 00:21:42.571103 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.571117833+00:00 stderr F I1013 00:21:42.571108 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.571117833+00:00 stderr F I1013 00:21:42.571114 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.571126674+00:00 stderr F I1013 00:21:42.571120 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-12-crc 2025-10-13T00:21:42.571135164+00:00 stderr F I1013 00:21:42.571125 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.571135164+00:00 stderr F I1013 00:21:42.571131 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.571147744+00:00 stderr F I1013 00:21:42.571136 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-9-crc 2025-10-13T00:21:42.571147744+00:00 stderr F I1013 00:21:42.571142 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.571156714+00:00 stderr F I1013 00:21:42.571148 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.571156714+00:00 stderr F I1013 00:21:42.571153 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.571165635+00:00 stderr F I1013 00:21:42.571159 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: 
openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.571174105+00:00 stderr F I1013 00:21:42.571165 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.571174105+00:00 stderr F I1013 00:21:42.571171 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.571182915+00:00 stderr F I1013 00:21:42.571176 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.571191365+00:00 stderr F I1013 00:21:42.571182 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.571191365+00:00 stderr F I1013 00:21:42.571187 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-10-crc 2025-10-13T00:21:42.571200126+00:00 stderr F I1013 00:21:42.571193 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.571208576+00:00 stderr F I1013 00:21:42.571198 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.571208576+00:00 stderr F I1013 00:21:42.571204 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.571222496+00:00 stderr F I1013 00:21:42.571210 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.571222496+00:00 stderr F I1013 00:21:42.571216 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.571246267+00:00 stderr F I1013 00:21:42.571221 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.571246267+00:00 stderr F I1013 00:21:42.571227 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.571246267+00:00 stderr F I1013 00:21:42.571232 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.571246267+00:00 stderr F I1013 00:21:42.571238 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.571256927+00:00 stderr F I1013 00:21:42.571244 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j 2025-10-13T00:21:42.571256927+00:00 stderr F I1013 00:21:42.571250 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.571270087+00:00 stderr F I1013 00:21:42.571255 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.571270087+00:00 stderr F I1013 00:21:42.571261 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: 
openshift-image-registry/image-pruner-29338560-zvlxb 2025-10-13T00:21:42.571270087+00:00 stderr F I1013 00:21:42.571266 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.571279838+00:00 stderr F I1013 00:21:42.571272 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.571279838+00:00 stderr F I1013 00:21:42.571277 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.571288778+00:00 stderr F I1013 00:21:42.571283 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-q88th 2025-10-13T00:21:42.571297318+00:00 stderr F I1013 00:21:42.571289 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw 2025-10-13T00:21:42.571305788+00:00 stderr F I1013 00:21:42.571295 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.571305788+00:00 stderr F I1013 00:21:42.571301 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.571314589+00:00 stderr F I1013 00:21:42.571307 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-8-crc 2025-10-13T00:21:42.571325239+00:00 stderr F I1013 00:21:42.571312 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.571325239+00:00 stderr F I1013 00:21:42.571318 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.571355420+00:00 stderr F I1013 00:21:42.571327 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.571355420+00:00 stderr F I1013 00:21:42.571333 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.571364990+00:00 stderr F I1013 00:21:42.571358 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.571377410+00:00 stderr F I1013 00:21:42.571366 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.571377410+00:00 stderr F I1013 00:21:42.571372 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.571386261+00:00 stderr F I1013 00:21:42.571377 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.571386261+00:00 stderr F I1013 00:21:42.571383 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.571395111+00:00 stderr F I1013 00:21:42.571388 28251 
external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-9-crc 2025-10-13T00:21:42.571408671+00:00 stderr F I1013 00:21:42.571394 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.571408671+00:00 stderr F I1013 00:21:42.571400 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/installer-8-crc 2025-10-13T00:21:42.571408671+00:00 stderr F I1013 00:21:42.571405 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.571418401+00:00 stderr F I1013 00:21:42.571411 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.571426922+00:00 stderr F I1013 00:21:42.571416 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.571426922+00:00 stderr F I1013 00:21:42.571423 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.571435752+00:00 stderr F I1013 00:21:42.571429 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.571444222+00:00 stderr F I1013 00:21:42.571434 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.571444222+00:00 stderr F I1013 00:21:42.571440 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.571453082+00:00 stderr F I1013 00:21:42.571446 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-etcd/etcd-crc 2025-10-13T00:21:42.571461653+00:00 stderr F I1013 00:21:42.571452 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-10-retry-1-crc 2025-10-13T00:21:42.571461653+00:00 stderr F I1013 00:21:42.571457 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.571470563+00:00 stderr F I1013 00:21:42.571463 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.571479093+00:00 stderr F I1013 00:21:42.571468 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.571479093+00:00 stderr F I1013 00:21:42.571474 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.571487893+00:00 stderr F I1013 00:21:42.571479 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.571487893+00:00 stderr F I1013 00:21:42.571485 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-10-crc 
2025-10-13T00:21:42.571496664+00:00 stderr F I1013 00:21:42.571490 28251 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.571525794+00:00 stderr F I1013 00:21:42.571507 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-controller-manager 2025-10-13T00:21:42.571525794+00:00 stderr F I1013 00:21:42.571518 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-vsphere-infra 2025-10-13T00:21:42.571541015+00:00 stderr F I1013 00:21:42.571523 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-machine-approver 2025-10-13T00:21:42.571541015+00:00 stderr F I1013 00:21:42.571529 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-apiserver 2025-10-13T00:21:42.571541015+00:00 stderr F I1013 00:21:42.571535 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-marketplace 2025-10-13T00:21:42.571550775+00:00 stderr F I1013 00:21:42.571541 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-node 2025-10-13T00:21:42.571550775+00:00 stderr F I1013 00:21:42.571546 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-samples-operator 2025-10-13T00:21:42.571559685+00:00 stderr F I1013 00:21:42.571552 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-authentication-operator 2025-10-13T00:21:42.571559685+00:00 stderr F I1013 00:21:42.571557 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-infra 2025-10-13T00:21:42.571568555+00:00 stderr F I1013 00:21:42.571562 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-machine-config-operator 2025-10-13T00:21:42.571577076+00:00 stderr F I1013 00:21:42.571568 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-nutanix-infra 2025-10-13T00:21:42.571577076+00:00 stderr F I1013 00:21:42.571573 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ovirt-infra 2025-10-13T00:21:42.571585896+00:00 stderr F I1013 00:21:42.571579 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift 2025-10-13T00:21:42.571594446+00:00 stderr F I1013 00:21:42.571584 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config-operator 2025-10-13T00:21:42.571594446+00:00 stderr F I1013 00:21:42.571589 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress 2025-10-13T00:21:42.571603136+00:00 stderr F I1013 00:21:42.571594 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-machine-api 2025-10-13T00:21:42.571603136+00:00 stderr F I1013 00:21:42.571600 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-operators 2025-10-13T00:21:42.571611997+00:00 stderr F I1013 00:21:42.571605 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-authentication 2025-10-13T00:21:42.571620517+00:00 stderr F I1013 00:21:42.571610 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: 
openshift-kube-scheduler 2025-10-13T00:21:42.571620517+00:00 stderr F I1013 00:21:42.571616 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-monitoring 2025-10-13T00:21:42.571629227+00:00 stderr F I1013 00:21:42.571621 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-dns-operator 2025-10-13T00:21:42.571629227+00:00 stderr F I1013 00:21:42.571626 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cloud-network-config-controller 2025-10-13T00:21:42.571638047+00:00 stderr F I1013 00:21:42.571632 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-storage-operator 2025-10-13T00:21:42.571646548+00:00 stderr F I1013 00:21:42.571637 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress-operator 2025-10-13T00:21:42.571646548+00:00 stderr F I1013 00:21:42.571643 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-node-identity 2025-10-13T00:21:42.571659148+00:00 stderr F I1013 00:21:42.571648 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-public 2025-10-13T00:21:42.571659148+00:00 stderr F I1013 00:21:42.571653 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console-operator 2025-10-13T00:21:42.571668158+00:00 stderr F I1013 00:21:42.571659 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-host-network 2025-10-13T00:21:42.571668158+00:00 stderr F I1013 00:21:42.571664 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress-canary 2025-10-13T00:21:42.571677078+00:00 stderr F I1013 00:21:42.571670 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-controller-manager-operator 2025-10-13T00:21:42.571685549+00:00 stderr F I1013 00:21:42.571675 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config-managed 2025-10-13T00:21:42.571685549+00:00 stderr F I1013 00:21:42.571680 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-route-controller-manager 2025-10-13T00:21:42.571694379+00:00 stderr F I1013 00:21:42.571686 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-openstack-infra 2025-10-13T00:21:42.571694379+00:00 stderr F I1013 00:21:42.571691 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console 2025-10-13T00:21:42.571703239+00:00 stderr F I1013 00:21:42.571696 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-service-ca-operator 2025-10-13T00:21:42.571711739+00:00 stderr F I1013 00:21:42.571702 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-version 2025-10-13T00:21:42.571711739+00:00 stderr F I1013 00:21:42.571707 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-system 2025-10-13T00:21:42.571720530+00:00 stderr F I1013 00:21:42.571712 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-image-registry 2025-10-13T00:21:42.571720530+00:00 stderr F I1013 00:21:42.571717 28251 external_controller_namespace.go:16] APB 
queuing policies: map[] for namespace: openshift-kube-apiserver-operator 2025-10-13T00:21:42.571729180+00:00 stderr F I1013 00:21:42.571724 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-multus 2025-10-13T00:21:42.571737570+00:00 stderr F I1013 00:21:42.571730 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-operator 2025-10-13T00:21:42.571746080+00:00 stderr F I1013 00:21:42.571736 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-node-lease 2025-10-13T00:21:42.571746080+00:00 stderr F I1013 00:21:42.571741 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-diagnostics 2025-10-13T00:21:42.571754870+00:00 stderr F I1013 00:21:42.571747 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-scheduler-operator 2025-10-13T00:21:42.571754870+00:00 stderr F I1013 00:21:42.571752 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config 2025-10-13T00:21:42.571763701+00:00 stderr F I1013 00:21:42.571757 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console-user-settings 2025-10-13T00:21:42.571772221+00:00 stderr F I1013 00:21:42.571762 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd 2025-10-13T00:21:42.571772221+00:00 stderr F I1013 00:21:42.571768 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd-operator 2025-10-13T00:21:42.571784681+00:00 stderr F I1013 00:21:42.571773 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver 2025-10-13T00:21:42.571784681+00:00 stderr F I1013 00:21:42.571779 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-oauth-apiserver 2025-10-13T00:21:42.571793601+00:00 stderr F I1013 00:21:42.571784 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-user-workload-monitoring 2025-10-13T00:21:42.571793601+00:00 stderr F I1013 00:21:42.571789 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-controller-manager 2025-10-13T00:21:42.571802472+00:00 stderr F I1013 00:21:42.571795 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ovn-kubernetes 2025-10-13T00:21:42.571802472+00:00 stderr F I1013 00:21:42.571800 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openstack-operators 2025-10-13T00:21:42.571811382+00:00 stderr F I1013 00:21:42.571805 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-dns 2025-10-13T00:21:42.571819802+00:00 stderr F I1013 00:21:42.571811 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-storage-version-migrator 2025-10-13T00:21:42.571819802+00:00 stderr F I1013 00:21:42.571816 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-storage-version-migrator-operator 2025-10-13T00:21:42.571828672+00:00 stderr F I1013 00:21:42.571821 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-operator-lifecycle-manager 2025-10-13T00:21:42.571837163+00:00 stderr F I1013 
00:21:42.571827 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-service-ca 2025-10-13T00:21:42.571837163+00:00 stderr F I1013 00:21:42.571832 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: default 2025-10-13T00:21:42.571845953+00:00 stderr F I1013 00:21:42.571838 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver-operator 2025-10-13T00:21:42.571845953+00:00 stderr F I1013 00:21:42.571843 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cloud-platform-infra 2025-10-13T00:21:42.571854773+00:00 stderr F I1013 00:21:42.571848 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-controller-manager-operator 2025-10-13T00:21:42.571863323+00:00 stderr F I1013 00:21:42.571854 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra 2025-10-13T00:21:42.571863323+00:00 stderr F I1013 00:21:42.571859 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openstack 2025-10-13T00:21:42.571871874+00:00 stderr F I1013 00:21:42.571864 28251 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: hostpath-provisioner 2025-10-13T00:21:42.572240393+00:00 stderr F I1013 00:21:42.572210 28251 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:21:42.572240393+00:00 stderr F I1013 00:21:42.572223 28251 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:21:42.573430045+00:00 stderr F I1013 00:21:42.573393 28251 ovs.go:162] Exec(21): stdout: "" 2025-10-13T00:21:42.573430045+00:00 stderr F I1013 00:21:42.573413 28251 ovs.go:163] Exec(21): stderr: "" 2025-10-13T00:21:42.573796085+00:00 stderr F I1013 00:21:42.573778 28251 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:21:42.578053100+00:00 stderr F I1013 00:21:42.578031 28251 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.0.0/23 Src: 10.217.0.2 Gw: Flags: [] Table: 254 Realm: 0}" 2025-10-13T00:21:42.578097731+00:00 stderr F I1013 00:21:42.578086 28251 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.1.255/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-10-13T00:21:42.578124882+00:00 stderr F I1013 00:21:42.578115 28251 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.0.2/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-10-13T00:21:42.586772814+00:00 stderr F I1013 00:21:42.586744 28251 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.2/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-10-13T00:21:42.586824516+00:00 stderr F I1013 00:21:42.586813 28251 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.1.255/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-10-13T00:21:42.586868247+00:00 stderr F I1013 
00:21:42.586858 28251 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 0 Realm: 0} 2025-10-13T00:21:42.587249637+00:00 stderr F I1013 00:21:42.587222 28251 ovs.go:159] Exec(22): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.forwarding=1 2025-10-13T00:21:42.587312139+00:00 stderr F I1013 00:21:42.587300 28251 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0} 2025-10-13T00:21:42.587372820+00:00 stderr F I1013 00:21:42.587357 28251 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.0/23 Src: 10.217.0.2 Gw: Flags: [] Table: 254 Realm: 0}" 2025-10-13T00:21:42.587413432+00:00 stderr F I1013 00:21:42.587400 28251 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0}" 2025-10-13T00:21:42.587445902+00:00 stderr F I1013 00:21:42.587435 28251 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 0 Realm: 0} 2025-10-13T00:21:42.587787632+00:00 stderr F I1013 00:21:42.587774 28251 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0} 2025-10-13T00:21:42.587821562+00:00 stderr F I1013 00:21:42.587811 28251 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0}" 2025-10-13T00:21:42.588546202+00:00 stderr F I1013 00:21:42.588530 28251 ovs.go:162] Exec(22): stdout: "net.ipv4.conf.ovn-k8s-mp0.forwarding = 1\n" 2025-10-13T00:21:42.588578503+00:00 stderr F I1013 00:21:42.588569 28251 ovs.go:163] Exec(22): stderr: "" 2025-10-13T00:21:42.599293041+00:00 stderr F I1013 00:21:42.599267 28251 gateway_init.go:324] Initializing Gateway Functionality 2025-10-13T00:21:42.600552845+00:00 stderr F I1013 00:21:42.600460 28251 gateway_localnet.go:181] Node local addresses initialized to: map[10.217.0.2:{10.217.0.0 fffffe00} 127.0.0.1:{127.0.0.0 ff000000} 169.254.169.2:{169.254.169.0 fffffff8} 172.17.0.5:{172.17.0.0 ffffff00} 172.18.0.5:{172.18.0.0 ffffff00} 172.19.0.5:{172.19.0.0 ffffff00} 192.168.122.10:{192.168.122.0 ffffff00} 192.168.126.11:{192.168.126.0 ffffff00} 38.102.83.180:{38.102.83.0 ffffff00} ::1:{::1 ffffffffffffffffffffffffffffffff} fe80::1424:e5ff:fe54:961:{fe80:: ffffffffffffffff0000000000000000} fe80::14e6:11ff:fec3:98f2:{fe80:: ffffffffffffffff0000000000000000} fe80::14f4:56ff:fee4:d831:{fe80:: ffffffffffffffff0000000000000000} fe80::2878:32ff:fe77:cf94:{fe80:: ffffffffffffffff0000000000000000} fe80::2cd3:feff:fea5:c063:{fe80:: ffffffffffffffff0000000000000000} fe80::2cef:79ff:fe35:cc24:{fe80:: ffffffffffffffff0000000000000000} fe80::30a3:7dff:fe85:b8af:{fe80:: ffffffffffffffff0000000000000000} fe80::3413:daff:feee:85b4:{fe80:: ffffffffffffffff0000000000000000} fe80::34ee:89ff:fe75:8a45:{fe80:: ffffffffffffffff0000000000000000} fe80::38de:feff:fe7f:8c2:{fe80:: ffffffffffffffff0000000000000000} fe80::40bd:24ff:feb0:b157:{fe80:: ffffffffffffffff0000000000000000} fe80::4405:4fff:fe9c:8195:{fe80:: ffffffffffffffff0000000000000000} fe80::48a2:7cff:fe44:db9a:{fe80:: ffffffffffffffff0000000000000000} fe80::4c43:46ff:fe3d:dd76:{fe80:: ffffffffffffffff0000000000000000} fe80::4ce4:64ff:fe32:27e5:{fe80:: ffffffffffffffff0000000000000000} 
fe80::506f:e8ff:fee8:abca:{fe80:: ffffffffffffffff0000000000000000} fe80::580e:a6ff:fe0f:b2cf:{fe80:: ffffffffffffffff0000000000000000} fe80::5c67:d0ff:fece:e194:{fe80:: ffffffffffffffff0000000000000000} fe80::5c8e:c9ff:fe55:3282:{fe80:: ffffffffffffffff0000000000000000} fe80::6097:55ff:feb5:ed02:{fe80:: ffffffffffffffff0000000000000000} fe80::640e:c2ff:fe35:2efd:{fe80:: ffffffffffffffff0000000000000000} fe80::702e:d6ff:fe5f:d701:{fe80:: ffffffffffffffff0000000000000000} fe80::709d:bfff:fe9b:d542:{fe80:: ffffffffffffffff0000000000000000} fe80::7839:bdff:fe68:6a72:{fe80:: ffffffffffffffff0000000000000000} fe80::7a:dcff:fe5c:3d7f:{fe80:: ffffffffffffffff0000000000000000} fe80::7c8e:caff:fe83:f562:{fe80:: ffffffffffffffff0000000000000000} fe80::7de3:ad7c:b56b:e3dc:{fe80:: ffffffffffffffff0000000000000000} fe80::8034:d4ff:fe44:990c:{fe80:: ffffffffffffffff0000000000000000} fe80::8487:8fff:fea1:8764:{fe80:: ffffffffffffffff0000000000000000} fe80::8c85:fff:fe2b:e168:{fe80:: ffffffffffffffff0000000000000000} fe80::944d:cff:fed9:aa6b:{fe80:: ffffffffffffffff0000000000000000} fe80::9459:b6ff:feb2:a908:{fe80:: ffffffffffffffff0000000000000000} fe80::945e:c6ff:fea4:783d:{fe80:: ffffffffffffffff0000000000000000} fe80::9cb5:70ff:feb2:5d92:{fe80:: ffffffffffffffff0000000000000000} fe80::a00a:4bff:fe06:8c93:{fe80:: ffffffffffffffff0000000000000000} fe80::a0df:caff:fedb:ddbb:{fe80:: ffffffffffffffff0000000000000000} fe80::a823:78ff:fe7a:afce:{fe80:: ffffffffffffffff0000000000000000} fe80::ac9d:41ff:fe6c:3f92:{fe80:: ffffffffffffffff0000000000000000} fe80::b4dc:d9ff:fe26:3d4:{fe80:: ffffffffffffffff0000000000000000} fe80::b861:d1ff:fe27:8ee9:{fe80:: ffffffffffffffff0000000000000000} fe80::bc51:e4ff:fee6:9afc:{fe80:: ffffffffffffffff0000000000000000} fe80::c848:f1ff:fe77:d04f:{fe80:: ffffffffffffffff0000000000000000} fe80::d063:86ff:fe41:c19f:{fe80:: ffffffffffffffff0000000000000000} fe80::d4b2:3eff:fea0:5c89:{fe80:: ffffffffffffffff0000000000000000} fe80::d8b9:1ff:fe20:811e:{fe80:: ffffffffffffffff0000000000000000} fe80::d:9fff:fe6c:6246:{fe80:: ffffffffffffffff0000000000000000} fe80::dc73:99ff:fe85:a4e4:{fe80:: ffffffffffffffff0000000000000000} fe80::f4b5:1eff:fed6:4627:{fe80:: ffffffffffffffff0000000000000000} fe80::fc01:b0ff:fe09:3d3:{fe80:: ffffffffffffffff0000000000000000}] 2025-10-13T00:21:42.600604496+00:00 stderr F I1013 00:21:42.600591 28251 ovs.go:159] Exec(23): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex 2025-10-13T00:21:42.608091978+00:00 stderr F I1013 00:21:42.608025 28251 ovs.go:162] Exec(23): stdout: "" 2025-10-13T00:21:42.608091978+00:00 stderr F I1013 00:21:42.608048 28251 ovs.go:163] Exec(23): stderr: "ovs-vsctl: no port named br-ex\n" 2025-10-13T00:21:42.608091978+00:00 stderr F I1013 00:21:42.608055 28251 ovs.go:165] Exec(23): err: exit status 1 2025-10-13T00:21:42.608198230+00:00 stderr F I1013 00:21:42.608173 28251 helper_linux.go:92] Provided gateway interface "br-ex", found as index: 11 2025-10-13T00:21:42.608338124+00:00 stderr F I1013 00:21:42.608311 28251 helper_linux.go:117] Found default gateway interface br-ex 38.102.83.1 2025-10-13T00:21:42.608572251+00:00 stderr F I1013 00:21:42.608546 28251 gateway_init.go:370] Preparing Local Gateway 2025-10-13T00:21:42.608572251+00:00 stderr F I1013 00:21:42.608558 28251 gateway_localnet.go:26] Creating new local gateway 2025-10-13T00:21:42.608586111+00:00 stderr F I1013 00:21:42.608579 28251 iptables.go:108] Creating table: filter chain: FORWARD 2025-10-13T00:21:42.611320334+00:00 stderr F I1013 00:21:42.611276 28251 
iptables.go:110] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:42.613974966+00:00 stderr F I1013 00:21:42.613941 28251 iptables.go:121] Adding rule in table: filter, chain: FORWARD with args: "-o ovn-k8s-mp0 -j ACCEPT" for protocol: 0 2025-10-13T00:21:42.619403352+00:00 stderr F I1013 00:21:42.619367 28251 iptables.go:121] Adding rule in table: filter, chain: FORWARD with args: "-i ovn-k8s-mp0 -j ACCEPT" for protocol: 0 2025-10-13T00:21:42.621666913+00:00 stderr F I1013 00:21:42.621640 28251 shared_informer.go:318] Caches are synced for node-tracker-controller 2025-10-13T00:21:42.621666913+00:00 stderr F I1013 00:21:42.621661 28251 services_controller.go:184] Setting up event handlers for services 2025-10-13T00:21:42.621758985+00:00 stderr F I1013 00:21:42.621733 28251 services_controller.go:194] Setting up event handlers for endpoint slices 2025-10-13T00:21:42.621812757+00:00 stderr F I1013 00:21:42.621768 28251 iptables.go:108] Creating table: filter chain: INPUT 2025-10-13T00:21:42.621812757+00:00 stderr F I1013 00:21:42.621798 28251 services_controller.go:204] Waiting for service and endpoint handlers to sync 2025-10-13T00:21:42.621823737+00:00 stderr F I1013 00:21:42.621815 28251 shared_informer.go:311] Waiting for caches to sync for ovn-lb-controller 2025-10-13T00:21:42.621917569+00:00 stderr F I1013 00:21:42.621768 28251 services_controller.go:551] Adding service openshift-kube-controller-manager/kube-controller-manager 2025-10-13T00:21:42.621917569+00:00 stderr F I1013 00:21:42.621897 28251 services_controller.go:551] Adding service openshift-kube-storage-version-migrator-operator/metrics 2025-10-13T00:21:42.621917569+00:00 stderr F I1013 00:21:42.621906 28251 services_controller.go:551] Adding service openshift-monitoring/cluster-monitoring-operator 2025-10-13T00:21:42.621917569+00:00 stderr F I1013 00:21:42.621912 28251 services_controller.go:551] Adding service openshift-service-ca-operator/metrics 2025-10-13T00:21:42.621932480+00:00 stderr F I1013 00:21:42.621927 28251 services_controller.go:551] Adding service openshift-authentication/oauth-openshift 2025-10-13T00:21:42.621977361+00:00 stderr F I1013 00:21:42.621963 28251 services_controller.go:551] Adding service openshift-kube-apiserver-operator/metrics 2025-10-13T00:21:42.622029662+00:00 stderr F I1013 00:21:42.622014 28251 services_controller.go:551] Adding service openshift-etcd-operator/metrics 2025-10-13T00:21:42.622091554+00:00 stderr F I1013 00:21:42.622074 28251 services_controller.go:551] Adding service openshift-ingress-canary/ingress-canary 2025-10-13T00:21:42.622115645+00:00 stderr F I1013 00:21:42.622097 28251 services_controller.go:551] Adding service openshift-ingress-operator/metrics 2025-10-13T00:21:42.622115645+00:00 stderr F I1013 00:21:42.622107 28251 services_controller.go:551] Adding service openshift-machine-api/control-plane-machine-set-operator 2025-10-13T00:21:42.622123225+00:00 stderr F I1013 00:21:42.622114 28251 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-controller 2025-10-13T00:21:42.622123225+00:00 stderr F I1013 00:21:42.622120 28251 services_controller.go:551] Adding service openshift-marketplace/community-operators 2025-10-13T00:21:42.622131735+00:00 stderr F I1013 00:21:42.622126 28251 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-target 
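[editor's note] The gateway_localnet/iptables.go entries above show an idempotent pattern: ensure the filter/FORWARD chain exists (tolerating "Chain already exists") and then append ACCEPT rules for the ovn-k8s-mp0 management port. A minimal Go sketch of that pattern, using github.com/coreos/go-iptables as an assumed stand-in for the logged helper — illustrative only, not the actual ovn-kubernetes code:

// Illustrative sketch only: ensure a chain exists and append ACCEPT rules
// for the OVN management port, mirroring the iptables.go log entries above.
// Assumes github.com/coreos/go-iptables; not the real ovn-kubernetes helper.
package main

import (
	"log"

	"github.com/coreos/go-iptables/iptables"
)

func main() {
	ipt, err := iptables.New() // IPv4 backend, shells out to iptables
	if err != nil {
		log.Fatalf("iptables init: %v", err)
	}

	// "Creating table: filter chain: FORWARD" — skip if it already exists.
	if ok, _ := ipt.ChainExists("filter", "FORWARD"); !ok {
		if err := ipt.NewChain("filter", "FORWARD"); err != nil {
			log.Fatalf("create chain: %v", err)
		}
	}

	// "Adding rule ... -o ovn-k8s-mp0 -j ACCEPT" / "-i ovn-k8s-mp0 -j ACCEPT".
	rules := [][]string{
		{"-o", "ovn-k8s-mp0", "-j", "ACCEPT"},
		{"-i", "ovn-k8s-mp0", "-j", "ACCEPT"},
	}
	for _, r := range rules {
		if err := ipt.AppendUnique("filter", "FORWARD", r...); err != nil {
			log.Fatalf("append rule %v: %v", r, err)
		}
	}
}

Running this requires root and an iptables binary; AppendUnique keeps the operation idempotent across restarts, which matches the "already exists, skipping creation" behaviour in the log.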
2025-10-13T00:21:42.622145956+00:00 stderr F I1013 00:21:42.622132 28251 services_controller.go:551] Adding service openshift-marketplace/certified-operators 2025-10-13T00:21:42.622145956+00:00 stderr F I1013 00:21:42.622139 28251 services_controller.go:551] Adding service openshift-apiserver/check-endpoints 2025-10-13T00:21:42.622158406+00:00 stderr F I1013 00:21:42.622144 28251 services_controller.go:551] Adding service openshift-controller-manager/controller-manager 2025-10-13T00:21:42.622158406+00:00 stderr F I1013 00:21:42.622150 28251 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-daemon 2025-10-13T00:21:42.622158406+00:00 stderr F I1013 00:21:42.622156 28251 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-operator 2025-10-13T00:21:42.622167756+00:00 stderr F I1013 00:21:42.622162 28251 services_controller.go:551] Adding service openshift-network-operator/metrics 2025-10-13T00:21:42.622189077+00:00 stderr F I1013 00:21:42.622174 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/olm-operator-metrics 2025-10-13T00:21:42.622189077+00:00 stderr F I1013 00:21:42.622183 28251 services_controller.go:551] Adding service openshift-etcd/etcd 2025-10-13T00:21:42.622201817+00:00 stderr F I1013 00:21:42.622190 28251 services_controller.go:551] Adding service openshift-kube-scheduler-operator/metrics 2025-10-13T00:21:42.622201817+00:00 stderr F I1013 00:21:42.622196 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-machine-webhook 2025-10-13T00:21:42.622208767+00:00 stderr F I1013 00:21:42.622205 28251 services_controller.go:551] Adding service openshift-marketplace/redhat-operators 2025-10-13T00:21:42.622215377+00:00 stderr F I1013 00:21:42.622210 28251 services_controller.go:551] Adding service openshift-multus/multus-admission-controller 2025-10-13T00:21:42.622221948+00:00 stderr F I1013 00:21:42.622217 28251 services_controller.go:551] Adding service openshift-authentication-operator/metrics 2025-10-13T00:21:42.622228478+00:00 stderr F I1013 00:21:42.622222 28251 services_controller.go:551] Adding service openshift-console/console 2025-10-13T00:21:42.622234988+00:00 stderr F I1013 00:21:42.622228 28251 services_controller.go:551] Adding service openshift-console/downloads 2025-10-13T00:21:42.622241548+00:00 stderr F I1013 00:21:42.622233 28251 services_controller.go:551] Adding service openshift-controller-manager-operator/metrics 2025-10-13T00:21:42.622241548+00:00 stderr F I1013 00:21:42.622239 28251 services_controller.go:551] Adding service openshift-kube-apiserver/apiserver 2025-10-13T00:21:42.622248368+00:00 stderr F I1013 00:21:42.622244 28251 services_controller.go:551] Adding service openshift-oauth-apiserver/api 2025-10-13T00:21:42.622254908+00:00 stderr F I1013 00:21:42.622250 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/catalog-operator-metrics 2025-10-13T00:21:42.622261419+00:00 stderr F I1013 00:21:42.622255 28251 services_controller.go:551] Adding service default/kubernetes 2025-10-13T00:21:42.622268069+00:00 stderr F I1013 00:21:42.622261 28251 services_controller.go:551] Adding service openshift-console-operator/metrics 2025-10-13T00:21:42.622268069+00:00 stderr F I1013 00:21:42.622266 28251 services_controller.go:551] Adding service openshift-console-operator/webhook 2025-10-13T00:21:42.622364911+00:00 stderr F I1013 00:21:42.622302 28251 services_controller.go:551] 
Adding service openshift-kube-controller-manager-operator/metrics 2025-10-13T00:21:42.622382312+00:00 stderr F I1013 00:21:42.622366 28251 services_controller.go:551] Adding service openshift-machine-api/cluster-autoscaler-operator 2025-10-13T00:21:42.622389492+00:00 stderr F I1013 00:21:42.622379 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-controllers 2025-10-13T00:21:42.622426953+00:00 stderr F I1013 00:21:42.622397 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-webhook 2025-10-13T00:21:42.622426953+00:00 stderr F I1013 00:21:42.622417 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-10-13T00:21:42.622440913+00:00 stderr F I1013 00:21:42.622435 28251 services_controller.go:551] Adding service default/openshift 2025-10-13T00:21:42.622472764+00:00 stderr F I1013 00:21:42.622449 28251 services_controller.go:551] Adding service openshift-marketplace/marketplace-operator-metrics 2025-10-13T00:21:42.622472764+00:00 stderr F I1013 00:21:42.622466 28251 services_controller.go:551] Adding service openshift-multus/network-metrics-service 2025-10-13T00:21:42.622513625+00:00 stderr F I1013 00:21:42.622479 28251 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:42.622513625+00:00 stderr F I1013 00:21:42.622498 28251 services_controller.go:551] Adding service openshift-cluster-machine-approver/machine-approver 2025-10-13T00:21:42.622513625+00:00 stderr F I1013 00:21:42.622509 28251 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-source 2025-10-13T00:21:42.622540276+00:00 stderr F I1013 00:21:42.622520 28251 services_controller.go:551] Adding service openshift-route-controller-manager/route-controller-manager 2025-10-13T00:21:42.622540276+00:00 stderr F I1013 00:21:42.622535 28251 services_controller.go:551] Adding service openshift-image-registry/image-registry-operator 2025-10-13T00:21:42.622570477+00:00 stderr F I1013 00:21:42.622545 28251 services_controller.go:551] Adding service openshift-kube-scheduler/scheduler 2025-10-13T00:21:42.622570477+00:00 stderr F I1013 00:21:42.622561 28251 services_controller.go:551] Adding service openshift-marketplace/redhat-marketplace 2025-10-13T00:21:42.622578137+00:00 stderr F I1013 00:21:42.622570 28251 services_controller.go:551] Adding service openshift-apiserver/api 2025-10-13T00:21:42.622586807+00:00 stderr F I1013 00:21:42.622580 28251 services_controller.go:551] Adding service openshift-cluster-samples-operator/metrics 2025-10-13T00:21:42.622595468+00:00 stderr F I1013 00:21:42.622589 28251 services_controller.go:551] Adding service openshift-config-operator/metrics 2025-10-13T00:21:42.622618438+00:00 stderr F I1013 00:21:42.622600 28251 services_controller.go:551] Adding service openshift-image-registry/image-registry 2025-10-13T00:21:42.622633199+00:00 stderr F I1013 00:21:42.622620 28251 services_controller.go:551] Adding service openshift-cluster-version/cluster-version-operator 2025-10-13T00:21:42.622662039+00:00 stderr F I1013 00:21:42.622638 28251 services_controller.go:551] Adding service openshift-dns-operator/metrics 2025-10-13T00:21:42.622662039+00:00 stderr F I1013 00:21:42.622652 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/packageserver-service 2025-10-13T00:21:42.622677140+00:00 stderr F I1013 00:21:42.622664 28251 services_controller.go:551] 
Adding service openshift-apiserver-operator/metrics 2025-10-13T00:21:42.622677140+00:00 stderr F I1013 00:21:42.622672 28251 services_controller.go:551] Adding service openshift-dns/dns-default 2025-10-13T00:21:42.622704191+00:00 stderr F I1013 00:21:42.622683 28251 services_controller.go:551] Adding service openshift-ingress/router-internal-default 2025-10-13T00:21:42.622704191+00:00 stderr F I1013 00:21:42.622697 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator 2025-10-13T00:21:42.622728491+00:00 stderr F I1013 00:21:42.622711 28251 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:42.623715428+00:00 stderr F I1013 00:21:42.623670 28251 iptables.go:110] Chain: "INPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N INPUT --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:42.626953455+00:00 stderr F I1013 00:21:42.626920 28251 iptables.go:121] Adding rule in table: filter, chain: INPUT with args: "-i ovn-k8s-mp0 -m comment --comment from OVN to localhost -j ACCEPT" for protocol: 0 2025-10-13T00:21:42.630050438+00:00 stderr F I1013 00:21:42.630022 28251 iptables.go:108] Creating table: nat chain: POSTROUTING 2025-10-13T00:21:42.631803765+00:00 stderr F I1013 00:21:42.631769 28251 iptables.go:110] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:42.643541811+00:00 stderr F I1013 00:21:42.643498 28251 iptables.go:121] Adding rule in table: nat, chain: POSTROUTING with args: "-s 169.254.169.1 -j MASQUERADE" for protocol: 0 2025-10-13T00:21:42.650569130+00:00 stderr F I1013 00:21:42.650517 28251 iptables.go:121] Adding rule in table: nat, chain: POSTROUTING with args: "-s 10.217.0.0/23 -j MASQUERADE" for protocol: 0 2025-10-13T00:21:42.652777409+00:00 stderr F I1013 00:21:42.652728 28251 ovs.go:159] Exec(24): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex 2025-10-13T00:21:42.660177768+00:00 stderr F I1013 00:21:42.660137 28251 ovs.go:162] Exec(24): stdout: "" 2025-10-13T00:21:42.660177768+00:00 stderr F I1013 00:21:42.660161 28251 ovs.go:163] Exec(24): stderr: "ovs-vsctl: no port named br-ex\n" 2025-10-13T00:21:42.660177768+00:00 stderr F I1013 00:21:42.660168 28251 ovs.go:165] Exec(24): err: exit status 1 2025-10-13T00:21:42.660201149+00:00 stderr F I1013 00:21:42.660183 28251 ovs.go:159] Exec(25): /usr/bin/ovs-vsctl --timeout=15 br-exists br-ex 2025-10-13T00:21:42.667631099+00:00 stderr F I1013 00:21:42.667572 28251 ovs.go:162] Exec(25): stdout: "" 2025-10-13T00:21:42.667631099+00:00 stderr F I1013 00:21:42.667597 28251 ovs.go:163] Exec(25): stderr: "" 2025-10-13T00:21:42.667631099+00:00 stderr F I1013 00:21:42.667611 28251 ovs.go:159] Exec(26): /usr/bin/ovs-vsctl --timeout=15 list-ports br-ex 2025-10-13T00:21:42.670901337+00:00 stderr F I1013 00:21:42.670865 28251 shared_informer.go:318] Caches are synced for network-controller-manager 2025-10-13T00:21:42.670901337+00:00 stderr F I1013 00:21:42.670881 28251 network_attach_def_controller.go:182] Starting repairing loop for network-controller-manager 2025-10-13T00:21:42.670979939+00:00 stderr F I1013 00:21:42.670955 28251 network_attach_def_controller.go:184] Finished repairing loop for network-controller-manager: 76.942µs err: 2025-10-13T00:21:42.670979939+00:00 stderr F I1013 00:21:42.670967 28251 
network_attach_def_controller.go:153] Starting workers for network-controller-manager NAD controller 2025-10-13T00:21:42.677035652+00:00 stderr F I1013 00:21:42.677005 28251 ovs.go:162] Exec(26): stdout: "ens3\npatch-br-ex_crc-to-br-int\n" 2025-10-13T00:21:42.677035652+00:00 stderr F I1013 00:21:42.677020 28251 ovs.go:163] Exec(26): stderr: "" 2025-10-13T00:21:42.677035652+00:00 stderr F I1013 00:21:42.677031 28251 ovs.go:159] Exec(27): /usr/bin/ovs-vsctl --timeout=15 get Port ens3 Interfaces 2025-10-13T00:21:42.683663180+00:00 stderr F I1013 00:21:42.683623 28251 ovs.go:162] Exec(27): stdout: "[e16f421c-0477-4745-9966-07c4e793298c]\n" 2025-10-13T00:21:42.683663180+00:00 stderr F I1013 00:21:42.683641 28251 ovs.go:163] Exec(27): stderr: "" 2025-10-13T00:21:42.683663180+00:00 stderr F I1013 00:21:42.683653 28251 ovs.go:159] Exec(28): /usr/bin/ovs-vsctl --timeout=15 get Port patch-br-ex_crc-to-br-int Interfaces 2025-10-13T00:21:42.689794315+00:00 stderr F I1013 00:21:42.689757 28251 ovs.go:162] Exec(28): stdout: "[6007aa1e-871e-4844-b798-8da594911a95]\n" 2025-10-13T00:21:42.689794315+00:00 stderr F I1013 00:21:42.689774 28251 ovs.go:163] Exec(28): stderr: "" 2025-10-13T00:21:42.689794315+00:00 stderr F I1013 00:21:42.689785 28251 ovs.go:159] Exec(29): /usr/bin/ovs-vsctl --timeout=15 get Interface e16f421c-0477-4745-9966-07c4e793298c Type 2025-10-13T00:21:42.695527249+00:00 stderr F I1013 00:21:42.695487 28251 ovs.go:162] Exec(29): stdout: "system\n" 2025-10-13T00:21:42.695527249+00:00 stderr F I1013 00:21:42.695503 28251 ovs.go:163] Exec(29): stderr: "" 2025-10-13T00:21:42.695527249+00:00 stderr F I1013 00:21:42.695511 28251 ovs.go:159] Exec(30): /usr/bin/ovs-vsctl --timeout=15 get Interface 6007aa1e-871e-4844-b798-8da594911a95 Type 2025-10-13T00:21:42.701089058+00:00 stderr F I1013 00:21:42.701040 28251 ovs.go:162] Exec(30): stdout: "patch\n" 2025-10-13T00:21:42.701089058+00:00 stderr F I1013 00:21:42.701056 28251 ovs.go:163] Exec(30): stderr: "" 2025-10-13T00:21:42.701089058+00:00 stderr F I1013 00:21:42.701065 28251 ovs.go:159] Exec(31): /usr/bin/ovs-vsctl --timeout=15 get interface ens3 ofport 2025-10-13T00:21:42.706790302+00:00 stderr F I1013 00:21:42.706759 28251 ovs.go:162] Exec(31): stdout: "1\n" 2025-10-13T00:21:42.706790302+00:00 stderr F I1013 00:21:42.706774 28251 ovs.go:163] Exec(31): stderr: "" 2025-10-13T00:21:42.706790302+00:00 stderr F I1013 00:21:42.706784 28251 ovs.go:159] Exec(32): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface br-ex mac_in_use 2025-10-13T00:21:42.718305211+00:00 stderr F I1013 00:21:42.718268 28251 ovs.go:162] Exec(32): stdout: "\"fa:16:3e:c3:15:08\"\n" 2025-10-13T00:21:42.718305211+00:00 stderr F I1013 00:21:42.718283 28251 ovs.go:163] Exec(32): stderr: "" 2025-10-13T00:21:42.718305211+00:00 stderr F I1013 00:21:42.718295 28251 ovs.go:159] Exec(33): /usr/sbin/sysctl -w net.ipv4.conf.br-ex.forwarding=1 2025-10-13T00:21:42.719263897+00:00 stderr F I1013 00:21:42.719235 28251 ovs.go:162] Exec(33): stdout: "net.ipv4.conf.br-ex.forwarding = 1\n" 2025-10-13T00:21:42.719263897+00:00 stderr F I1013 00:21:42.719250 28251 ovs.go:163] Exec(33): stderr: "" 2025-10-13T00:21:42.719263897+00:00 stderr F I1013 00:21:42.719258 28251 ovs.go:159] Exec(34): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . 
external_ids:ovn-bridge-mappings 2025-10-13T00:21:42.722702040+00:00 stderr F I1013 00:21:42.722662 28251 shared_informer.go:318] Caches are synced for ovn-lb-controller 2025-10-13T00:21:42.722702040+00:00 stderr F I1013 00:21:42.722677 28251 repair.go:57] Starting repairing loop for services 2025-10-13T00:21:42.723078030+00:00 stderr F I1013 00:21:42.723043 28251 repair.go:128] Deleted 0 stale service LBs 2025-10-13T00:21:42.723078030+00:00 stderr F I1013 00:21:42.723069 28251 repair.go:134] Deleted 0 stale Chassis Template Vars 2025-10-13T00:21:42.723091710+00:00 stderr F I1013 00:21:42.723084 28251 repair.go:59] Finished repairing loop for services: 409.951µs 2025-10-13T00:21:42.723256185+00:00 stderr F I1013 00:21:42.723229 28251 services_controller.go:314] Controller cache of 53 load balancers initialized for 52 services 2025-10-13T00:21:42.723256185+00:00 stderr F I1013 00:21:42.723240 28251 services_controller.go:225] Starting workers 2025-10-13T00:21:42.723288715+00:00 stderr F I1013 00:21:42.723275 28251 services_controller.go:332] Processing sync for service openshift-kube-controller-manager-operator/metrics 2025-10-13T00:21:42.723363157+00:00 stderr F I1013 00:21:42.723314 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service 2025-10-13T00:21:42.723363157+00:00 stderr F I1013 00:21:42.723337 28251 services_controller.go:332] Processing sync for service openshift-console/console 2025-10-13T00:21:42.723448590+00:00 stderr F I1013 00:21:42.723434 28251 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry 2025-10-13T00:21:42.723494051+00:00 stderr F I1013 00:21:42.723290 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-controller-manager-operator 136038f9-f376-4b0b-8c75-a42240d176cc 4549 0 2024-06-26 12:39:14 +0000 UTC map[app:kube-controller-manager-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00029715f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-controller-manager-operator,},ClusterIP:10.217.4.79,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.79],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.723494051+00:00 stderr F I1013 00:21:42.723481 28251 services_controller.go:332] Processing sync for service openshift-kube-scheduler/scheduler 2025-10-13T00:21:42.723509471+00:00 stderr F I1013 00:21:42.723340 28251 
services_controller.go:397] Service packageserver-service retrieved from lister: &Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager 8099635d-a821-489e-8b18-cae3e83f00b2 6451 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver 0beab272-7637-4d44-b3aa-502dcafbc929 0xc0007338ed 0xc0007338ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5443,Protocol:TCP,Port:5443,TargetPort:{0 5443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: packageserver,},ClusterIP:10.217.4.230,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.230],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.723561463+00:00 stderr F I1013 00:21:42.723522 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.15] []}] 2025-10-13T00:21:42.723561463+00:00 stderr F I1013 00:21:42.723497 28251 services_controller.go:397] Service scheduler retrieved from lister: &Service{ObjectMeta:{scheduler openshift-kube-scheduler a839a554-406d-4df8-b3ae-b533cb3e24bc 4695 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:kube-scheduler] map[operator.openshift.io/spec-hash:f185087b7610499b49263c17685abe7f251a50c890808284a072687bf6d73275 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10259 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{scheduler: true,},ClusterIP:10.217.5.218,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.218],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.723574103+00:00 stderr F I1013 00:21:42.723529 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/packageserver-service are: map[TCP/5443:{5443 [10.217.0.43] []}] 2025-10-13T00:21:42.723574103+00:00 stderr F I1013 00:21:42.723363 28251 services_controller.go:397] Service console retrieved from lister: &Service{ObjectMeta:{console openshift-console 5b0bdd1d-b81c-479c-9a03-f3ff2b5db014 9795 0 2024-06-26 12:53:44 +0000 UTC map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: ui,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.723582683+00:00 stderr F I1013 00:21:42.723561 28251 services_controller.go:413] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.79"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.15"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.723589644+00:00 stderr F I1013 00:21:42.723572 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.230"}, protocol:"TCP", inport:5443, clusterEndpoints:services.lbEndpoints{Port:5443, V4IPs:[]string{"10.217.0.43"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.723589644+00:00 stderr F I1013 00:21:42.723584 28251 services_controller.go:414] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.723596904+00:00 stderr F I1013 00:21:42.723588 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/packageserver-service LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.723596904+00:00 stderr F I1013 00:21:42.723592 28251 services_controller.go:415] Built service openshift-kube-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.723603984+00:00 stderr F I1013 00:21:42.723595 28251 services_controller.go:415] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.723603984+00:00 stderr F I1013 00:21:42.723578 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler/scheduler are: map[TCP/https:{10259 [192.168.126.11] []}] 2025-10-13T00:21:42.723624074+00:00 stderr F I1013 00:21:42.723609 28251 services_controller.go:413] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.723631265+00:00 stderr F I1013 00:21:42.723620 28251 services_controller.go:414] Built service openshift-kube-scheduler/scheduler LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.218"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10259, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.723640835+00:00 stderr F I1013 
00:21:42.723630 28251 services_controller.go:415] Built service openshift-kube-scheduler/scheduler LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.723640835+00:00 stderr F I1013 00:21:42.723610 28251 lb_config.go:1016] Cluster endpoints for openshift-console/console are: map[TCP/https:{8443 [10.217.0.73] []}] 2025-10-13T00:21:42.723648345+00:00 stderr F I1013 00:21:42.723615 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/packageserver-service cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.723655725+00:00 stderr F I1013 00:21:42.723618 28251 services_controller.go:421] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.723655725+00:00 stderr F I1013 00:21:42.723644 28251 services_controller.go:413] Built service openshift-console/console LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.140"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.73"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.723655725+00:00 stderr F I1013 00:21:42.723651 28251 services_controller.go:421] Built service openshift-kube-scheduler/scheduler cluster-wide LB []services.LB{} 2025-10-13T00:21:42.723664786+00:00 stderr F I1013 00:21:42.723649 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB []services.LB{} 2025-10-13T00:21:42.723664786+00:00 stderr F I1013 00:21:42.723654 28251 services_controller.go:422] Built service openshift-kube-controller-manager-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.723664786+00:00 stderr F I1013 00:21:42.723655 28251 services_controller.go:414] Built service openshift-console/console LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.723673206+00:00 stderr F I1013 00:21:42.723660 28251 services_controller.go:423] Built 
service openshift-operator-lifecycle-manager/packageserver-service template LB []services.LB{} 2025-10-13T00:21:42.723673206+00:00 stderr F I1013 00:21:42.723665 28251 services_controller.go:423] Built service openshift-kube-controller-manager-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.723673206+00:00 stderr F I1013 00:21:42.723664 28251 services_controller.go:415] Built service openshift-console/console LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.723683026+00:00 stderr F I1013 00:21:42.723670 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/packageserver-service has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.723683026+00:00 stderr F I1013 00:21:42.723658 28251 services_controller.go:422] Built service openshift-kube-scheduler/scheduler per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.723683026+00:00 stderr F I1013 00:21:42.723674 28251 services_controller.go:424] Service openshift-kube-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.723690486+00:00 stderr F I1013 00:21:42.723679 28251 services_controller.go:423] Built service openshift-kube-scheduler/scheduler template LB []services.LB{} 2025-10-13T00:21:42.723697076+00:00 stderr F I1013 00:21:42.723687 28251 services_controller.go:424] Service openshift-kube-scheduler/scheduler has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.723727537+00:00 stderr F I1013 00:21:42.723685 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"68eacba8-7e00-494f-a0e9-a0d7f4ab5c77", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.723727537+00:00 stderr F I1013 00:21:42.723701 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"915e622c-d89a-4906-831f-8daeda55c910", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.723727537+00:00 stderr F I1013 00:21:42.723691 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"a30efd8e-3079-4ca5-a3fd-b660bc1089ea", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.723759478+00:00 stderr F I1013 00:21:42.723467 28251 services_controller.go:397] Service image-registry retrieved from lister: &Service{ObjectMeta:{image-registry openshift-image-registry 7b12735e-9db4-4c6e-99f6-b2626c4e9f08 17962 0 2024-06-27 13:18:52 +0000 UTC map[docker-registry:default] 
map[imageregistry.operator.openshift.io/checksum:sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d service.alpha.openshift.io/serving-cert-secret-name:image-registry-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.723813480+00:00 stderr F I1013 00:21:42.723787 28251 lb_config.go:1016] Cluster endpoints for openshift-image-registry/image-registry are: map[TCP/5000-tcp:{5000 [10.217.0.38] []}] 2025-10-13T00:21:42.723852571+00:00 stderr F I1013 00:21:42.723803 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.218:443:192.168.126.11:10259]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {915e622c-d89a-4906-831f-8daeda55c910}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.723852571+00:00 stderr F I1013 00:21:42.723812 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.230:5443:10.217.0.43:5443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {68eacba8-7e00-494f-a0e9-a0d7f4ab5c77}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.723852571+00:00 stderr F I1013 00:21:42.723811 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.79:443:10.217.0.15:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a30efd8e-3079-4ca5-a3fd-b660bc1089ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
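[editor's note] The lb_config/model_client entries above translate each ClusterIP service into an OVN Load_Balancer row whose vips column pairs "ClusterIP:port" with "endpointIP:targetPort" (for example 10.217.4.79:443 -> 10.217.0.15:8443). A standalone Go sketch of that mapping under simplified, assumed types — not the services controller's real lbConfig code:

// Illustrative sketch: build an OVN-style load-balancer vips map from a
// ClusterIP service definition, as seen in the Load_Balancer update rows
// above (e.g. "10.217.4.79:443" -> "10.217.0.15:8443").
// The types here are simplified assumptions, not ovn-kubernetes structs.
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

type clusterService struct {
	ClusterIP  string   // service VIP
	Port       int32    // service port
	TargetPort int32    // backend port
	Endpoints  []string // ready endpoint IPs
}

// buildVIPs returns the "vip" -> "backend1,backend2,..." map that would be
// written into the Load_Balancer table's vips column.
func buildVIPs(svc clusterService) map[string]string {
	vip := net.JoinHostPort(svc.ClusterIP, strconv.Itoa(int(svc.Port)))
	backends := make([]string, 0, len(svc.Endpoints))
	for _, ep := range svc.Endpoints {
		backends = append(backends, net.JoinHostPort(ep, strconv.Itoa(int(svc.TargetPort))))
	}
	return map[string]string{vip: strings.Join(backends, ",")}
}

func main() {
	metrics := clusterService{
		ClusterIP:  "10.217.4.79",
		Port:       443,
		TargetPort: 8443,
		Endpoints:  []string{"10.217.0.15"},
	}
	fmt.Println(buildVIPs(metrics)) // map[10.217.4.79:443:10.217.0.15:8443]
}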
2025-10-13T00:21:42.723883501+00:00 stderr F I1013 00:21:42.723852 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.218:443:192.168.126.11:10259]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {915e622c-d89a-4906-831f-8daeda55c910}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.723883501+00:00 stderr F I1013 00:21:42.723855 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.230:5443:10.217.0.43:5443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {68eacba8-7e00-494f-a0e9-a0d7f4ab5c77}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.724058626+00:00 stderr F I1013 00:21:42.723857 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.79:443:10.217.0.15:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a30efd8e-3079-4ca5-a3fd-b660bc1089ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.724563500+00:00 stderr F I1013 00:21:42.723697 28251 services_controller.go:421] Built service openshift-console/console cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.724576760+00:00 stderr F I1013 00:21:42.724564 28251 services_controller.go:422] Built service openshift-console/console per-node LB []services.LB{} 2025-10-13T00:21:42.724576760+00:00 stderr F I1013 00:21:42.724573 28251 services_controller.go:423] Built service openshift-console/console template LB []services.LB{} 2025-10-13T00:21:42.724585390+00:00 stderr F I1013 00:21:42.724579 28251 services_controller.go:424] Service openshift-console/console has 1 cluster-wide, 0 per-node configs, 0 
template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.724641862+00:00 stderr F I1013 00:21:42.724592 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"1b2be37e-8144-48c9-927b-da6e21dae8a9", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.724724904+00:00 stderr F I1013 00:21:42.723838 28251 services_controller.go:413] Built service openshift-image-registry/image-registry LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.41"}, protocol:"TCP", inport:5000, clusterEndpoints:services.lbEndpoints{Port:5000, V4IPs:[]string{"10.217.0.38"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.724733154+00:00 stderr F I1013 00:21:42.724720 28251 services_controller.go:414] Built service openshift-image-registry/image-registry LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.724733154+00:00 stderr F I1013 00:21:42.724729 28251 services_controller.go:415] Built service openshift-image-registry/image-registry LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.724757905+00:00 stderr F I1013 00:21:42.724710 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:10.217.0.73:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b2be37e-8144-48c9-927b-da6e21dae8a9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.724794976+00:00 stderr F I1013 00:21:42.724758 28251 services_controller.go:421] Built service openshift-image-registry/image-registry cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.38", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.724794976+00:00 stderr F I1013 00:21:42.724756 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:10.217.0.73:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b2be37e-8144-48c9-927b-da6e21dae8a9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.724794976+00:00 stderr F I1013 00:21:42.724789 28251 services_controller.go:422] Built service openshift-image-registry/image-registry per-node LB []services.LB{} 2025-10-13T00:21:42.724803916+00:00 stderr F I1013 00:21:42.724796 28251 services_controller.go:423] Built service openshift-image-registry/image-registry template LB []services.LB{} 2025-10-13T00:21:42.724811036+00:00 stderr F I1013 00:21:42.724803 28251 services_controller.go:424] Service openshift-image-registry/image-registry has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.724890549+00:00 stderr F I1013 00:21:42.724816 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"1636f317-82cf-4afb-adb2-1388c1aee17c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.38", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.725041253+00:00 stderr F I1013 00:21:42.725005 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"} 2025-10-13T00:21:42.725049523+00:00 stderr F I1013 00:21:42.725036 28251 services_controller.go:336] Finished syncing service scheduler on namespace openshift-kube-scheduler : 1.561022ms 
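[editor's note] The sequence above — "Built service ... LB", "Services do not match, existing lbs ... built lbs ...", a single Update operation keyed on the row's _uuid, then "Finished syncing service ..." with a duration — is a build/compare/update reconcile loop. A hedged Go sketch of that control flow with made-up types (the real controller compares full services.LB structures, including Opts and Rules):

// Illustrative reconcile sketch: build the desired load balancer for a
// service, compare it with what is already in the database, and only emit
// an update when they differ — the pattern behind the
// "Services do not match, existing lbs ... built lbs ..." entries above.
// Type and function names are assumptions for illustration.
package main

import (
	"fmt"
	"reflect"
	"time"
)

type loadBalancer struct {
	Name   string
	Reject bool              // reject traffic when no backends exist
	VIPs   map[string]string // "vip:port" -> "backend:port"
}

// reconcile reports whether an update would be issued for this service.
func reconcile(existing, desired loadBalancer) bool {
	// Compare configuration only; the database _uuid stays the same.
	if reflect.DeepEqual(existing, desired) {
		return false
	}
	// In the real controller this becomes an ovsdb "update" op on _uuid.
	fmt.Printf("updating %s: %+v -> %+v\n", desired.Name, existing, desired)
	return true
}

func main() {
	start := time.Now()
	existing := loadBalancer{
		Name:   "Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc",
		Reject: false,
		VIPs:   map[string]string{},
	}
	desired := existing
	desired.Reject = true
	desired.VIPs = map[string]string{"10.217.5.218:443": "192.168.126.11:10259"}

	changed := reconcile(existing, desired)
	fmt.Printf("Finished syncing (changed=%v) in %s\n", changed, time.Since(start))
}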
2025-10-13T00:21:42.725074583+00:00 stderr F I1013 00:21:42.725057 28251 services_controller.go:332] Processing sync for service openshift-marketplace/community-operators 2025-10-13T00:21:42.725165006+00:00 stderr F I1013 00:21:42.725068 28251 services_controller.go:397] Service community-operators retrieved from lister: &Service{ObjectMeta:{community-operators openshift-marketplace daa5c70d-2f05-4c99-b062-49370cf4b7bd 6377 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:9y90X0LnOAvWXlE7PZKqH0sBNEP83PNwaUfqVB] map[] [{operators.coreos.com/v1alpha1 CatalogSource community-operators e583c58d-4569-4cab-9192-62c813516208 0xc0007323bd 0xc0007323be}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: community-operators,olm.managed: true,},ClusterIP:10.217.4.229,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.229],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.725207577+00:00 stderr F I1013 00:21:42.725183 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/community-operators are: map[TCP/grpc:{50051 [10.217.0.35] []}] 2025-10-13T00:21:42.725237428+00:00 stderr F I1013 00:21:42.725211 28251 services_controller.go:413] Built service openshift-marketplace/community-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.229"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.35"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.725237428+00:00 stderr F I1013 00:21:42.725228 28251 services_controller.go:414] Built service openshift-marketplace/community-operators LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.725247348+00:00 stderr F I1013 00:21:42.725236 28251 services_controller.go:415] Built service openshift-marketplace/community-operators LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.725289709+00:00 stderr F I1013 00:21:42.725254 28251 services_controller.go:421] Built service openshift-marketplace/community-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.725304750+00:00 stderr F I1013 00:21:42.725285 28251 services_controller.go:422] Built service 
openshift-marketplace/community-operators per-node LB []services.LB{} 2025-10-13T00:21:42.725304750+00:00 stderr F I1013 00:21:42.725299 28251 services_controller.go:423] Built service openshift-marketplace/community-operators template LB []services.LB{} 2025-10-13T00:21:42.725313700+00:00 stderr F I1013 00:21:42.725307 28251 services_controller.go:424] Service openshift-marketplace/community-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.725385812+00:00 stderr F I1013 00:21:42.725352 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"} 2025-10-13T00:21:42.725385812+00:00 stderr F I1013 00:21:42.725380 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator : 2.104077ms 2025-10-13T00:21:42.725398332+00:00 stderr F I1013 00:21:42.725328 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"acd71d69-4870-4695-b8eb-935626516f5d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.725421213+00:00 stderr F I1013 00:21:42.725404 28251 services_controller.go:332] Processing sync for service openshift-console-operator/metrics 2025-10-13T00:21:42.725488715+00:00 stderr F I1013 00:21:42.725370 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:5000:10.217.0.38:5000]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1636f317-82cf-4afb-adb2-1388c1aee17c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.725531226+00:00 stderr F I1013 00:21:42.725418 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-console-operator 793d323e-de30-470a-af76-520af7b2dad8 9604 0 2024-06-26 12:53:34 +0000 UTC 
map[name:console-operator] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efce7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-operator,},ClusterIP:10.217.5.211,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.211],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.725576127+00:00 stderr F I1013 00:21:42.725547 28251 lb_config.go:1016] Cluster endpoints for openshift-console-operator/metrics are: map[TCP/https:{8443 [10.217.0.62] []}] 2025-10-13T00:21:42.725584627+00:00 stderr F I1013 00:21:42.725567 28251 services_controller.go:413] Built service openshift-console-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.211"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.62"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.725593307+00:00 stderr F I1013 00:21:42.725587 28251 services_controller.go:414] Built service openshift-console-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.725600408+00:00 stderr F I1013 00:21:42.725595 28251 services_controller.go:415] Built service openshift-console-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.725628278+00:00 stderr F I1013 00:21:42.725498 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.229:50051:10.217.0.35:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {acd71d69-4870-4695-b8eb-935626516f5d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.725655759+00:00 stderr F I1013 00:21:42.725613 28251 services_controller.go:421] Built service openshift-console-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.725663509+00:00 stderr F I1013 00:21:42.725652 28251 services_controller.go:422] Built service openshift-console-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.725670430+00:00 stderr F I1013 00:21:42.725663 28251 services_controller.go:423] Built service openshift-console-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.725697870+00:00 stderr F I1013 00:21:42.725677 28251 services_controller.go:424] Service openshift-console-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.725733711+00:00 stderr F I1013 00:21:42.725512 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:5000:10.217.0.38:5000]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1636f317-82cf-4afb-adb2-1388c1aee17c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.725764852+00:00 stderr F I1013 00:21:42.725698 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.725791013+00:00 stderr F I1013 00:21:42.725762 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"} 2025-10-13T00:21:42.725798593+00:00 stderr F I1013 00:21:42.725787 28251 
services_controller.go:336] Finished syncing service console on namespace openshift-console : 2.452406ms 2025-10-13T00:21:42.725807173+00:00 stderr F I1013 00:21:42.725802 28251 services_controller.go:332] Processing sync for service openshift-authentication-operator/metrics 2025-10-13T00:21:42.725904836+00:00 stderr F I1013 00:21:42.725809 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-authentication-operator 20ebd9ba-71d4-4753-8707-d87939791a19 4335 0 2024-06-26 12:39:09 +0000 UTC map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002ef837 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.51,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.51],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.725904836+00:00 stderr F I1013 00:21:42.725853 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.211:443:10.217.0.62:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.725943777+00:00 stderr F I1013 00:21:42.725919 28251 lb_config.go:1016] Cluster endpoints for openshift-authentication-operator/metrics are: map[TCP/https:{8443 [10.217.0.19] []}] 2025-10-13T00:21:42.725951997+00:00 stderr F I1013 00:21:42.725910 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.211:443:10.217.0.62:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.725951997+00:00 stderr F I1013 00:21:42.725942 28251 services_controller.go:413] Built service 
openshift-authentication-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.51"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.19"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.725979728+00:00 stderr F I1013 00:21:42.725955 28251 services_controller.go:414] Built service openshift-authentication-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.725979728+00:00 stderr F I1013 00:21:42.725973 28251 services_controller.go:415] Built service openshift-authentication-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.726034139+00:00 stderr F I1013 00:21:42.725991 28251 services_controller.go:421] Built service openshift-authentication-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726034139+00:00 stderr F I1013 00:21:42.726028 28251 services_controller.go:422] Built service openshift-authentication-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.726046230+00:00 stderr F I1013 00:21:42.726038 28251 services_controller.go:423] Built service openshift-authentication-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.726071910+00:00 stderr F I1013 00:21:42.726046 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"} 2025-10-13T00:21:42.726071910+00:00 stderr F I1013 00:21:42.726049 28251 services_controller.go:424] Service openshift-authentication-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.726071910+00:00 stderr F I1013 00:21:42.726066 28251 services_controller.go:336] Finished syncing service packageserver-service on namespace openshift-operator-lifecycle-manager : 2.761544ms 2025-10-13T00:21:42.726116812+00:00 stderr F I1013 00:21:42.726099 28251 services_controller.go:332] Processing sync for service openshift-etcd-operator/metrics 2025-10-13T00:21:42.726125262+00:00 stderr F I1013 00:21:42.726077 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"43d1a806-6d56-4f19-9c53-1ce78b0d24a1", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726146652+00:00 stderr F I1013 00:21:42.725628 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.229:50051:10.217.0.35:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {acd71d69-4870-4695-b8eb-935626516f5d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.726191804+00:00 stderr F I1013 00:21:42.726112 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-etcd-operator 470dd1a6-5645-4282-97e4-ebd3fef4caae 4531 0 2024-06-26 12:39:06 +0000 UTC map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296337 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: etcd-operator,},ClusterIP:10.217.4.182,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.182],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.726227845+00:00 stderr F I1013 00:21:42.726198 28251 lb_config.go:1016] Cluster endpoints for openshift-etcd-operator/metrics are: map[TCP/https:{8443 [10.217.0.8] []}] 2025-10-13T00:21:42.726235825+00:00 stderr F I1013 00:21:42.726218 28251 services_controller.go:413] Built service openshift-etcd-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.182"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, 
V4IPs:[]string{"10.217.0.8"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.726242825+00:00 stderr F I1013 00:21:42.726236 28251 services_controller.go:414] Built service openshift-etcd-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.726249865+00:00 stderr F I1013 00:21:42.726243 28251 services_controller.go:415] Built service openshift-etcd-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.726303167+00:00 stderr F I1013 00:21:42.726259 28251 services_controller.go:421] Built service openshift-etcd-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726311407+00:00 stderr F I1013 00:21:42.726305 28251 services_controller.go:422] Built service openshift-etcd-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.726319977+00:00 stderr F I1013 00:21:42.726314 28251 services_controller.go:423] Built service openshift-etcd-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.726330707+00:00 stderr F I1013 00:21:42.726322 28251 services_controller.go:424] Service openshift-etcd-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.726354898+00:00 stderr F I1013 00:21:42.726262 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.51:443:10.217.0.19:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {43d1a806-6d56-4f19-9c53-1ce78b0d24a1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.726421670+00:00 stderr F I1013 00:21:42.726340 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"85c70b85-2b80-443a-b268-ffc8695f018e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726421670+00:00 stderr F I1013 00:21:42.726362 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.51:443:10.217.0.19:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {43d1a806-6d56-4f19-9c53-1ce78b0d24a1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.726494632+00:00 stderr F I1013 00:21:42.726464 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"} 2025-10-13T00:21:42.726494632+00:00 stderr F I1013 00:21:42.726489 28251 services_controller.go:336] Finished syncing service community-operators on namespace openshift-marketplace : 1.431518ms 2025-10-13T00:21:42.726530663+00:00 stderr F I1013 00:21:42.726514 28251 services_controller.go:332] Processing sync for service openshift-machine-api/control-plane-machine-set-operator 2025-10-13T00:21:42.726530663+00:00 stderr F I1013 00:21:42.726511 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"} 2025-10-13T00:21:42.726543613+00:00 stderr F I1013 00:21:42.726537 28251 services_controller.go:336] Finished syncing service image-registry on namespace openshift-image-registry : 3.103164ms 2025-10-13T00:21:42.726570194+00:00 stderr F I1013 00:21:42.726552 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-machine-webhook 2025-10-13T00:21:42.726570194+00:00 stderr F I1013 00:21:42.726513 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:443:10.217.0.8:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {85c70b85-2b80-443a-b268-ffc8695f018e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.726643296+00:00 stderr F I1013 00:21:42.726577 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:443:10.217.0.8:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {85c70b85-2b80-443a-b268-ffc8695f018e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.726643296+00:00 stderr F I1013 00:21:42.726526 28251 services_controller.go:397] Service control-plane-machine-set-operator retrieved from lister: &Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api 7c42fd7c-0955-49c7-819c-4685e0681272 4749 0 2024-06-26 12:39:09 +0000 UTC map[k8s-app:control-plane-machine-set-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002977b7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.5.136,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.136],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.726682577+00:00 stderr F I1013 00:21:42.726577 28251 services_controller.go:397] Service machine-api-operator-machine-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-machine-webhook openshift-machine-api 7dd2300f-f67e-4eb3-a3fa-1f22c230305a 4821 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-api-operator-machine-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-machine-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000297d2b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 machine-webhook},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: 
controller,},ClusterIP:10.217.4.242,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.242],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.726682577+00:00 stderr F I1013 00:21:42.726665 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/control-plane-machine-set-operator are: map[TCP/https:{9443 [10.217.0.20] []}] 2025-10-13T00:21:42.726698267+00:00 stderr F I1013 00:21:42.726684 28251 services_controller.go:413] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.136"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.20"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.726706507+00:00 stderr F I1013 00:21:42.726689 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-machine-webhook are: map[] 2025-10-13T00:21:42.726706507+00:00 stderr F I1013 00:21:42.726700 28251 services_controller.go:414] Built service openshift-machine-api/control-plane-machine-set-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.726732568+00:00 stderr F I1013 00:21:42.726707 28251 services_controller.go:415] Built service openshift-machine-api/control-plane-machine-set-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.726768459+00:00 stderr F I1013 00:21:42.726730 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.242"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.726777649+00:00 stderr F I1013 00:21:42.726735 28251 services_controller.go:421] Built service openshift-machine-api/control-plane-machine-set-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726785750+00:00 stderr F I1013 00:21:42.726773 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs []services.lbConfig(nil) 
2025-10-13T00:21:42.726785750+00:00 stderr F I1013 00:21:42.726775 28251 services_controller.go:422] Built service openshift-machine-api/control-plane-machine-set-operator per-node LB []services.LB{} 2025-10-13T00:21:42.726785750+00:00 stderr F I1013 00:21:42.726681 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"} 2025-10-13T00:21:42.726794870+00:00 stderr F I1013 00:21:42.726784 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.726794870+00:00 stderr F I1013 00:21:42.726788 28251 services_controller.go:423] Built service openshift-machine-api/control-plane-machine-set-operator template LB []services.LB{} 2025-10-13T00:21:42.726803010+00:00 stderr F I1013 00:21:42.726794 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-console-operator : 1.387118ms 2025-10-13T00:21:42.726810790+00:00 stderr F I1013 00:21:42.726798 28251 services_controller.go:424] Service openshift-machine-api/control-plane-machine-set-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.726821780+00:00 stderr F I1013 00:21:42.726810 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-webhook 2025-10-13T00:21:42.726966254+00:00 stderr F I1013 00:21:42.726890 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-machine-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726966254+00:00 stderr F I1013 00:21:42.726824 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"0549e0a0-7df6-4da0-b4ff-6834505eba14", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.726966254+00:00 stderr F I1013 00:21:42.726904 28251 ovs.go:162] Exec(34): stdout: "\"physnet:br-ex\"\n" 2025-10-13T00:21:42.726966254+00:00 stderr F I1013 00:21:42.726925 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-machine-webhook per-node LB []services.LB{} 2025-10-13T00:21:42.726966254+00:00 stderr F I1013 00:21:42.726934 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-machine-webhook template LB []services.LB{} 2025-10-13T00:21:42.726966254+00:00 stderr F I1013 00:21:42.726935 28251 ovs.go:163] Exec(34): stderr: "" 2025-10-13T00:21:42.726981965+00:00 stderr F I1013 00:21:42.726973 28251 ovs.go:159] Exec(35): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-bridge-mappings=physnet:br-ex 2025-10-13T00:21:42.727008465+00:00 stderr F I1013 00:21:42.726818 28251 services_controller.go:397] Service machine-api-operator-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-webhook openshift-machine-api 128263d4-d278-44f6-9ae4-9e9ecc572513 4862 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-api-operator-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000297f3b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.727008465+00:00 stderr F I1013 00:21:42.726920 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"} 2025-10-13T00:21:42.727019806+00:00 stderr F I1013 00:21:42.727008 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-webhook are: map[] 2025-10-13T00:21:42.727026796+00:00 stderr F I1013 00:21:42.727017 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-authentication-operator : 1.210563ms 2025-10-13T00:21:42.727048847+00:00 stderr F I1013 00:21:42.727021 28251 services_controller.go:413] 
Built service openshift-machine-api/machine-api-operator-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.44"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.727048847+00:00 stderr F I1013 00:21:42.726940 28251 services_controller.go:424] Service openshift-machine-api/machine-api-operator-machine-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.727056547+00:00 stderr F I1013 00:21:42.727046 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.727063177+00:00 stderr F I1013 00:21:42.727058 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-webhook LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.727114848+00:00 stderr F I1013 00:21:42.727038 28251 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-daemon 2025-10-13T00:21:42.727114848+00:00 stderr F I1013 00:21:42.727042 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.136:9443:10.217.0.20:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0549e0a0-7df6-4da0-b4ff-6834505eba14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.727137669+00:00 stderr F I1013 00:21:42.727100 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.727150589+00:00 stderr F I1013 00:21:42.727142 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-webhook per-node LB []services.LB{} 2025-10-13T00:21:42.727158830+00:00 stderr F I1013 00:21:42.727153 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-webhook template LB []services.LB{} 2025-10-13T00:21:42.727165910+00:00 stderr F I1013 00:21:42.727062 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", 
UUID:"733bd827-e0e7-4901-9b0d-4f2fcb21be04", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.727173070+00:00 stderr F I1013 00:21:42.727163 28251 services_controller.go:424] Service openshift-machine-api/machine-api-operator-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.727173070+00:00 stderr F I1013 00:21:42.727154 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"} 2025-10-13T00:21:42.727195060+00:00 stderr F I1013 00:21:42.727176 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-etcd-operator : 1.077019ms 2025-10-13T00:21:42.727201971+00:00 stderr F I1013 00:21:42.727112 28251 services_controller.go:397] Service machine-config-daemon retrieved from lister: &Service{ObjectMeta:{machine-config-daemon openshift-machine-config-operator bddcb8c2-0f2d-4efa-a0ec-3e0648c24386 4880 0 2024-06-26 12:39:15 +0000 UTC map[k8s-app:machine-config-daemon] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000732167 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-daemon,},ClusterIP:10.217.5.82,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.82],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 
2025-10-13T00:21:42.727256302+00:00 stderr F I1013 00:21:42.727181 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"61827598-f37d-413c-8229-0ed852809fb6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.727256302+00:00 stderr F I1013 00:21:42.727226 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-daemon are: map[TCP/health:{8798 [192.168.126.11] []} TCP/metrics:{9001 [192.168.126.11] []}] 2025-10-13T00:21:42.727256302+00:00 stderr F I1013 00:21:42.727246 28251 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-daemon LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.727290893+00:00 stderr F I1013 00:21:42.727261 28251 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-daemon LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:8798, clusterEndpoints:services.lbEndpoints{Port:8798, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.727290893+00:00 stderr F I1013 00:21:42.727196 28251 services_controller.go:332] Processing sync for service openshift-marketplace/certified-operators 2025-10-13T00:21:42.727290893+00:00 stderr F I1013 00:21:42.727283 28251 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-daemon LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.727347915+00:00 stderr F I1013 00:21:42.727319 28251 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-daemon cluster-wide LB []services.LB{} 2025-10-13T00:21:42.727347915+00:00 stderr F I1013 00:21:42.727123 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} 
name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.136:9443:10.217.0.20:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0549e0a0-7df6-4da0-b4ff-6834505eba14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.727367635+00:00 stderr F I1013 00:21:42.727278 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.242:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {733bd827-e0e7-4901-9b0d-4f2fcb21be04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.727513369+00:00 stderr F I1013 00:21:42.727292 28251 services_controller.go:397] Service certified-operators retrieved from lister: &Service{ObjectMeta:{certified-operators openshift-marketplace 97052848-7332-4254-8854-60d45bb91123 6358 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators 16d5fe82-aef0-4700-8b13-e78e71d2a10d 0xc0007322dd 0xc0007322de}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.249,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.249],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.727513369+00:00 stderr F I1013 00:21:42.727332 28251 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-daemon per-node LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.727526989+00:00 stderr F I1013 00:21:42.727375 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.44:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61827598-f37d-413c-8229-0ed852809fb6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.727526989+00:00 stderr F I1013 00:21:42.727505 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/certified-operators are: map[TCP/grpc:{50051 [10.217.0.36] []}] 2025-10-13T00:21:42.727541070+00:00 stderr F I1013 00:21:42.727521 28251 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-daemon template LB []services.LB{} 2025-10-13T00:21:42.727541070+00:00 stderr F I1013 00:21:42.727529 28251 services_controller.go:413] Built service openshift-marketplace/certified-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.249"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.36"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.727550120+00:00 stderr F I1013 00:21:42.727536 28251 services_controller.go:424] Service openshift-machine-config-operator/machine-config-daemon has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.727550120+00:00 stderr F I1013 00:21:42.727545 28251 services_controller.go:414] Built service openshift-marketplace/certified-operators LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.727558780+00:00 stderr F I1013 00:21:42.727552 28251 services_controller.go:415] Built service openshift-marketplace/certified-operators LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.727568551+00:00 stderr F I1013 00:21:42.727450 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.242:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {733bd827-e0e7-4901-9b0d-4f2fcb21be04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.727650753+00:00 stderr F I1013 00:21:42.727575 28251 services_controller.go:421] Built service openshift-marketplace/certified-operators cluster-wide LB 
[]services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.36", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.728187977+00:00 stderr F I1013 00:21:42.728161 28251 services_controller.go:422] Built service openshift-marketplace/certified-operators per-node LB []services.LB{} 2025-10-13T00:21:42.728187977+00:00 stderr F I1013 00:21:42.728179 28251 services_controller.go:423] Built service openshift-marketplace/certified-operators template LB []services.LB{} 2025-10-13T00:21:42.728202108+00:00 stderr F I1013 00:21:42.728187 28251 services_controller.go:424] Service openshift-marketplace/certified-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.728232428+00:00 stderr F I1013 00:21:42.728182 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"} 2025-10-13T00:21:42.728232428+00:00 stderr F I1013 00:21:42.727582 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.44:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61827598-f37d-413c-8229-0ed852809fb6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728242579+00:00 stderr F I1013 00:21:42.728202 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"46ded4bb-21d9-4d3a-a886-ac7e004b5ce4", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.36", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.728311701+00:00 stderr F I1013 00:21:42.727560 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"03279af5-9ea3-4256-ad64-2f0188f56e36", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.728528646+00:00 stderr F I1013 00:21:42.728219 28251 services_controller.go:336] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api : 1.705876ms 2025-10-13T00:21:42.728543117+00:00 stderr F I1013 00:21:42.728535 28251 services_controller.go:332] Processing sync for service default/kubernetes 2025-10-13T00:21:42.728570037+00:00 stderr F I1013 00:21:42.728519 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/certified-operators]} name:Service_openshift-marketplace/certified-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.249:50051:10.217.0.36:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46ded4bb-21d9-4d3a-a886-ac7e004b5ce4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728599558+00:00 stderr F I1013 00:21:42.728561 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} 
name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.82:8798:192.168.126.11:8798 10.217.5.82:9001:192.168.126.11:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03279af5-9ea3-4256-ad64-2f0188f56e36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728599558+00:00 stderr F I1013 00:21:42.728568 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/certified-operators]} name:Service_openshift-marketplace/certified-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.249:50051:10.217.0.36:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46ded4bb-21d9-4d3a-a886-ac7e004b5ce4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728609109+00:00 stderr F I1013 00:21:42.728544 28251 services_controller.go:397] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 057366b7-69c1-49f0-88ca-660c8863cae8 249 0 2024-06-26 12:38:03 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.728622219+00:00 stderr F I1013 00:21:42.728592 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.82:8798:192.168.126.11:8798 10.217.5.82:9001:192.168.126.11:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03279af5-9ea3-4256-ad64-2f0188f56e36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728630819+00:00 stderr F I1013 00:21:42.728617 28251 lb_config.go:1016] Cluster endpoints for default/kubernetes are: map[TCP/https:{6443 [192.168.126.11] []}] 2025-10-13T00:21:42.728640449+00:00 stderr F I1013 00:21:42.728634 28251 services_controller.go:413] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.728651100+00:00 stderr F I1013 00:21:42.728641 28251 services_controller.go:414] Built service default/kubernetes LB per-node configs 
[]services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.1"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.728659110+00:00 stderr F I1013 00:21:42.728653 28251 services_controller.go:415] Built service default/kubernetes LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.728690251+00:00 stderr F I1013 00:21:42.728673 28251 services_controller.go:421] Built service default/kubernetes cluster-wide LB []services.LB{} 2025-10-13T00:21:42.728720862+00:00 stderr F I1013 00:21:42.728686 28251 services_controller.go:422] Built service default/kubernetes per-node LB []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.728720862+00:00 stderr F I1013 00:21:42.728710 28251 services_controller.go:423] Built service default/kubernetes template LB []services.LB{} 2025-10-13T00:21:42.728729982+00:00 stderr F I1013 00:21:42.728719 28251 services_controller.go:424] Service default/kubernetes has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.728773813+00:00 stderr F I1013 00:21:42.728733 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"64f8efea-8a64-40da-b2c5-6e8d4a7a1c68", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.728885576+00:00 stderr F I1013 00:21:42.728841 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} 
name:Service_default/kubernetes_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64f8efea-8a64-40da-b2c5-6e8d4a7a1c68}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728885576+00:00 stderr F I1013 00:21:42.728869 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"} 2025-10-13T00:21:42.728896406+00:00 stderr F I1013 00:21:42.728886 28251 services_controller.go:336] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator : 1.84852ms 2025-10-13T00:21:42.728904866+00:00 stderr F I1013 00:21:42.728899 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/catalog-operator-metrics 2025-10-13T00:21:42.728904866+00:00 stderr F I1013 00:21:42.728879 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64f8efea-8a64-40da-b2c5-6e8d4a7a1c68}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.728983849+00:00 stderr F I1013 00:21:42.728907 28251 services_controller.go:397] Service catalog-operator-metrics retrieved from lister: &Service{ObjectMeta:{catalog-operator-metrics openshift-operator-lifecycle-manager 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72 5067 0 2024-06-26 12:39:23 +0000 UTC map[app:catalog-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0007334d7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.17,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.17],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.728983849+00:00 stderr F I1013 00:21:42.728959 28251 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"} 2025-10-13T00:21:42.728983849+00:00 stderr F I1013 00:21:42.728957 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"} 2025-10-13T00:21:42.728983849+00:00 stderr F I1013 00:21:42.728975 28251 services_controller.go:336] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api : 2.424025ms 2025-10-13T00:21:42.728999719+00:00 stderr F I1013 00:21:42.728980 28251 services_controller.go:336] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api : 2.169648ms 2025-10-13T00:21:42.728999719+00:00 stderr F I1013 00:21:42.728987 28251 services_controller.go:332] Processing sync for service openshift-ingress-canary/ingress-canary 2025-10-13T00:21:42.728999719+00:00 stderr F I1013 00:21:42.728994 28251 services_controller.go:332] Processing sync for service openshift-machine-api/cluster-autoscaler-operator 2025-10-13T00:21:42.728999719+00:00 stderr F I1013 00:21:42.728990 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.11] []}] 2025-10-13T00:21:42.729030710+00:00 stderr F I1013 00:21:42.729005 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.17"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729030710+00:00 stderr F I1013 00:21:42.729022 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.729040370+00:00 stderr F I1013 00:21:42.729030 28251 services_controller.go:415] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729040370+00:00 stderr F I1013 00:21:42.728993 28251 services_controller.go:397] Service ingress-canary retrieved from lister: &Service{ObjectMeta:{ingress-canary openshift-ingress-canary cd641ce4-6a02-4a0c-9222-6ab30b234450 10172 0 2024-06-26 12:54:01 +0000 UTC map[ingress.openshift.io/canary:canary_controller] map[] [{apps/v1 daemonset ingress-canary b5512a08-cd29-46f9-9661-4c860338b2ca 0xc0002969a7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:8080-tcp,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},ServicePort{Name:8888-tcp,Protocol:TCP,Port:8888,TargetPort:{0 8888 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscanary.operator.openshift.io/daemonset-ingresscanary: 
canary_controller,},ClusterIP:10.217.4.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729072351+00:00 stderr F I1013 00:21:42.729050 28251 lb_config.go:1016] Cluster endpoints for openshift-ingress-canary/ingress-canary are: map[TCP/8080-tcp:{8080 [10.217.0.71] []} TCP/8888-tcp:{8888 [10.217.0.71] []}] 2025-10-13T00:21:42.729072351+00:00 stderr F I1013 00:21:42.729047 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729082711+00:00 stderr F I1013 00:21:42.729071 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.729082711+00:00 stderr F I1013 00:21:42.729066 28251 services_controller.go:413] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8080, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8888, clusterEndpoints:services.lbEndpoints{Port:8888, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729092171+00:00 stderr F I1013 00:21:42.729080 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB []services.LB{} 2025-10-13T00:21:42.729092171+00:00 stderr F I1013 00:21:42.729080 28251 services_controller.go:414] Built service openshift-ingress-canary/ingress-canary LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.729092171+00:00 stderr F I1013 00:21:42.729001 28251 services_controller.go:397] Service cluster-autoscaler-operator retrieved from lister: &Service{ObjectMeta:{cluster-autoscaler-operator openshift-machine-api c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062 4713 0 2024-06-26 12:39:18 +0000 UTC map[k8s-app:cluster-autoscaler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true 
include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-autoscaler-operator-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00029769b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9192,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-autoscaler-operator,},ClusterIP:10.217.5.83,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.83],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729092171+00:00 stderr F I1013 00:21:42.729088 28251 services_controller.go:415] Built service openshift-ingress-canary/ingress-canary LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729105102+00:00 stderr F I1013 00:21:42.729089 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/catalog-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.729131573+00:00 stderr F I1013 00:21:42.729105 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/cluster-autoscaler-operator are: map[] 2025-10-13T00:21:42.729143183+00:00 stderr F I1013 00:21:42.729127 28251 services_controller.go:413] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729143183+00:00 stderr F I1013 00:21:42.729101 28251 services_controller.go:421] Built service openshift-ingress-canary/ingress-canary cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, 
services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729152633+00:00 stderr F I1013 00:21:42.729142 28251 services_controller.go:414] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.729152633+00:00 stderr F I1013 00:21:42.729145 28251 services_controller.go:422] Built service openshift-ingress-canary/ingress-canary per-node LB []services.LB{} 2025-10-13T00:21:42.729152633+00:00 stderr F I1013 00:21:42.729113 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"1a7d5584-355c-4545-8ea4-6c97fee9c8d6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729165423+00:00 stderr F I1013 00:21:42.729150 28251 services_controller.go:415] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729165423+00:00 stderr F I1013 00:21:42.729151 28251 services_controller.go:423] Built service openshift-ingress-canary/ingress-canary template LB []services.LB{} 2025-10-13T00:21:42.729165423+00:00 stderr F I1013 00:21:42.729160 28251 services_controller.go:424] Service openshift-ingress-canary/ingress-canary has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.729208115+00:00 stderr F I1013 00:21:42.729168 28251 services_controller.go:421] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729208115+00:00 stderr F I1013 00:21:42.729171 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"6fe909bf-0efe-41d7-93bd-ab2cc0acd4db", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729208115+00:00 stderr F I1013 00:21:42.729201 28251 services_controller.go:422] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB []services.LB{} 2025-10-13T00:21:42.729222325+00:00 stderr F I1013 00:21:42.729209 28251 services_controller.go:423] Built service openshift-machine-api/cluster-autoscaler-operator template LB []services.LB{} 2025-10-13T00:21:42.729222325+00:00 stderr F I1013 00:21:42.729216 28251 services_controller.go:424] Service openshift-machine-api/cluster-autoscaler-operator has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.729285717+00:00 stderr F I1013 00:21:42.729178 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"} 2025-10-13T00:21:42.729285717+00:00 stderr F I1013 00:21:42.729263 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.204:8080:10.217.0.71:8080 10.217.4.204:8888:10.217.0.71:8888]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fe909bf-0efe-41d7-93bd-ab2cc0acd4db}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729313857+00:00 stderr F I1013 00:21:42.729279 28251 services_controller.go:336] Finished syncing service certified-operators on namespace openshift-marketplace : 2.081976ms 2025-10-13T00:21:42.729313857+00:00 stderr F I1013 00:21:42.729290 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.204:8080:10.217.0.71:8080 10.217.4.204:8888:10.217.0.71:8888]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fe909bf-0efe-41d7-93bd-ab2cc0acd4db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729323368+00:00 stderr F I1013 00:21:42.729270 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.17:8443:10.217.0.11:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a7d5584-355c-4545-8ea4-6c97fee9c8d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729348128+00:00 stderr F I1013 00:21:42.729319 28251 services_controller.go:332] Processing sync for service openshift-etcd/etcd 2025-10-13T00:21:42.729400930+00:00 stderr F I1013 00:21:42.729340 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.17:8443:10.217.0.11:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a7d5584-355c-4545-8ea4-6c97fee9c8d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729417310+00:00 stderr F I1013 00:21:42.729232 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729471042+00:00 stderr F I1013 00:21:42.729442 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"} 2025-10-13T00:21:42.729471042+00:00 stderr F I1013 00:21:42.729463 28251 services_controller.go:336] Finished syncing service kubernetes on namespace default : 930.485µs 2025-10-13T00:21:42.729471042+00:00 stderr F I1013 00:21:42.729338 28251 services_controller.go:397] Service etcd retrieved from lister: &Service{ObjectMeta:{etcd openshift-etcd 09198b54-ff7d-4bc0-9111-00e2f443a981 4485 0 2024-06-26 12:38:46 +0000 UTC map[k8s-app:etcd] map[operator.openshift.io/spec-hash:0685cfaa0976bfb7ba58513629369c20bf05f4fba36949e982bdb43af328f0e1 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.137,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.137],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729482642+00:00 stderr F I1013 00:21:42.729476 28251 services_controller.go:332] Processing sync for service openshift-console-operator/webhook 2025-10-13T00:21:42.729507743+00:00 stderr F I1013 00:21:42.729487 28251 lb_config.go:1016] Cluster endpoints for openshift-etcd/etcd are: map[TCP/etcd:{2379 [192.168.126.11] []} TCP/etcd-metrics:{9979 [192.168.126.11] []}] 2025-10-13T00:21:42.729507743+00:00 stderr F I1013 00:21:42.729504 28251 services_controller.go:413] Built service openshift-etcd/etcd LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.729521203+00:00 stderr F I1013 00:21:42.729503 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"} 2025-10-13T00:21:42.729521203+00:00 stderr F I1013 00:21:42.729510 28251 services_controller.go:414] Built service openshift-etcd/etcd LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:2379, clusterEndpoints:services.lbEndpoints{Port:2379, 
V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:9979, clusterEndpoints:services.lbEndpoints{Port:9979, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729530153+00:00 stderr F I1013 00:21:42.729520 28251 services_controller.go:336] Finished syncing service ingress-canary on namespace openshift-ingress-canary : 532.714µs 2025-10-13T00:21:42.729530153+00:00 stderr F I1013 00:21:42.729523 28251 services_controller.go:415] Built service openshift-etcd/etcd LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729539704+00:00 stderr F I1013 00:21:42.729532 28251 services_controller.go:332] Processing sync for service openshift-controller-manager-operator/metrics 2025-10-13T00:21:42.729539704+00:00 stderr F I1013 00:21:42.729483 28251 services_controller.go:397] Service webhook retrieved from lister: &Service{ObjectMeta:{webhook openshift-console-operator 0bec6a60-3529-4fdb-81de-718ea6c4dae4 9610 0 2024-06-26 12:53:34 +0000 UTC map[name:console-conversion-webhook] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:webhook-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efdb7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:9443,TargetPort:{0 9443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-conversion-webhook,},ClusterIP:10.217.5.84,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.84],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729548554+00:00 stderr F I1013 00:21:42.729507 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.83:443: 10.217.5.83:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729548554+00:00 stderr F I1013 00:21:42.729543 28251 services_controller.go:421] Built service 
openshift-etcd/etcd cluster-wide LB []services.LB{} 2025-10-13T00:21:42.729564624+00:00 stderr F I1013 00:21:42.729553 28251 lb_config.go:1016] Cluster endpoints for openshift-console-operator/webhook are: map[TCP/webhook:{9443 [10.217.0.61] []}] 2025-10-13T00:21:42.729572854+00:00 stderr F I1013 00:21:42.729549 28251 services_controller.go:422] Built service openshift-etcd/etcd per-node LB []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.729581875+00:00 stderr F I1013 00:21:42.729571 28251 services_controller.go:423] Built service openshift-etcd/etcd template LB []services.LB{} 2025-10-13T00:21:42.729581875+00:00 stderr F I1013 00:21:42.729549 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.83:443: 10.217.5.83:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729581875+00:00 stderr F I1013 00:21:42.729569 28251 services_controller.go:413] Built service openshift-console-operator/webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.84"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.61"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729591645+00:00 stderr F I1013 00:21:42.729578 28251 services_controller.go:424] Service openshift-etcd/etcd has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.729591645+00:00 stderr F I1013 00:21:42.729583 28251 services_controller.go:414] Built service openshift-console-operator/webhook LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.729600755+00:00 stderr F I1013 00:21:42.729592 28251 services_controller.go:415] Built service openshift-console-operator/webhook LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729631806+00:00 stderr F I1013 00:21:42.729540 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-controller-manager-operator 
2f6bb711-85a4-408c-913a-54f006dcf2e9 4322 0 2024-06-26 12:39:07 +0000 UTC map[app:openshift-controller-manager-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:openshift-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002effab }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-controller-manager-operator,},ClusterIP:10.217.5.152,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.152],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729631806+00:00 stderr F I1013 00:21:42.729607 28251 services_controller.go:421] Built service openshift-console-operator/webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729645946+00:00 stderr F I1013 00:21:42.729630 28251 services_controller.go:422] Built service openshift-console-operator/webhook per-node LB []services.LB{} 2025-10-13T00:21:42.729645946+00:00 stderr F I1013 00:21:42.729639 28251 services_controller.go:423] Built service openshift-console-operator/webhook template LB []services.LB{} 2025-10-13T00:21:42.729645946+00:00 stderr F I1013 00:21:42.729638 28251 lb_config.go:1016] Cluster endpoints for openshift-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.9] []}] 2025-10-13T00:21:42.729656047+00:00 stderr F I1013 00:21:42.729648 28251 services_controller.go:424] Service openshift-console-operator/webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.729664707+00:00 stderr F I1013 00:21:42.729651 28251 services_controller.go:413] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.152"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.9"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729673317+00:00 stderr F I1013 00:21:42.729664 28251 services_controller.go:414] Built service openshift-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.729682037+00:00 stderr F I1013 00:21:42.729672 28251 services_controller.go:415] Built service openshift-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729690788+00:00 stderr F I1013 00:21:42.729591 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"7b275cee-1db5-46de-8569-dce67abda430", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.729703218+00:00 stderr F I1013 00:21:42.729664 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"a25f22fe-ae1f-47da-afc5-e1fde93bc930", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729712378+00:00 stderr F I1013 00:21:42.729690 28251 services_controller.go:421] 
Built service openshift-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729721468+00:00 stderr F I1013 00:21:42.729713 28251 services_controller.go:422] Built service openshift-controller-manager-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.729730179+00:00 stderr F I1013 00:21:42.729721 28251 services_controller.go:423] Built service openshift-controller-manager-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.729741909+00:00 stderr F I1013 00:21:42.729731 28251 services_controller.go:424] Service openshift-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.729788840+00:00 stderr F I1013 00:21:42.729750 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.137:2379:192.168.126.11:2379 10.217.5.137:9979:192.168.126.11:9979]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b275cee-1db5-46de-8569-dce67abda430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729788840+00:00 stderr F I1013 00:21:42.729748 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"cd325bf7-5a1f-48df-b966-4cb50de55e08", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, Template:(*services.Template)(nil)}}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.729788840+00:00 stderr F I1013 00:21:42.729622 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"} 2025-10-13T00:21:42.729799951+00:00 stderr F I1013 00:21:42.729793 28251 services_controller.go:336] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager : 890.734µs 2025-10-13T00:21:42.729808961+00:00 stderr F I1013 00:21:42.729782 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.137:2379:192.168.126.11:2379 10.217.5.137:9979:192.168.126.11:9979]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b275cee-1db5-46de-8569-dce67abda430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729808961+00:00 stderr F I1013 00:21:42.729779 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/webhook]} name:Service_openshift-console-operator/webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.84:9443:10.217.0.61:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a25f22fe-ae1f-47da-afc5-e1fde93bc930}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729820671+00:00 stderr F I1013 00:21:42.729809 28251 services_controller.go:332] Processing sync for service openshift-kube-controller-manager/kube-controller-manager 2025-10-13T00:21:42.729858292+00:00 stderr F I1013 00:21:42.729815 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/webhook]} name:Service_openshift-console-operator/webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.84:9443:10.217.0.61:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a25f22fe-ae1f-47da-afc5-e1fde93bc930}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729894133+00:00 stderr F I1013 00:21:42.729863 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"} 2025-10-13T00:21:42.729894133+00:00 stderr F I1013 00:21:42.729817 28251 services_controller.go:397] Service kube-controller-manager retrieved from lister: &Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 419fdf14-5d8d-4271-b9e7-729de80d8cd2 5235 0 2024-06-26 12:47:11 +0000 UTC map[prometheus:kube-controller-manager] 
map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10257 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{kube-controller-manager: true,},ClusterIP:10.217.4.112,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.112],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729894133+00:00 stderr F I1013 00:21:42.729888 28251 services_controller.go:336] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api : 893.444µs 2025-10-13T00:21:42.729920474+00:00 stderr F I1013 00:21:42.729902 28251 services_controller.go:332] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics 2025-10-13T00:21:42.729920474+00:00 stderr F I1013 00:21:42.729906 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager/kube-controller-manager are: map[TCP/https:{10257 [192.168.126.11] []}] 2025-10-13T00:21:42.729929424+00:00 stderr F I1013 00:21:42.729923 28251 services_controller.go:413] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.729952815+00:00 stderr F I1013 00:21:42.729931 28251 services_controller.go:414] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.112"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10257, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.729952815+00:00 stderr F I1013 00:21:42.729916 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.152:443:10.217.0.9:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cd325bf7-5a1f-48df-b966-4cb50de55e08}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.729965785+00:00 stderr F I1013 00:21:42.729949 28251 services_controller.go:415] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.729965785+00:00 stderr F I1013 00:21:42.729911 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics 
openshift-kube-storage-version-migrator-operator 3e887cd0-b481-460c-b943-d944dc64df2f 4706 0 2024-06-26 12:39:17 +0000 UTC map[app:kube-storage-version-migrator-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002975d7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-storage-version-migrator-operator,},ClusterIP:10.217.4.151,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.151],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.729993036+00:00 stderr F I1013 00:21:42.729971 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-storage-version-migrator-operator/metrics are: map[TCP/https:{8443 [10.217.0.16] []}] 2025-10-13T00:21:42.729993036+00:00 stderr F I1013 00:21:42.729961 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.152:443:10.217.0.9:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cd325bf7-5a1f-48df-b966-4cb50de55e08}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.730003236+00:00 stderr F I1013 00:21:42.729989 28251 services_controller.go:413] Built service openshift-kube-storage-version-migrator-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.151"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.16"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.730015296+00:00 stderr F I1013 00:21:42.730002 28251 services_controller.go:414] Built service openshift-kube-storage-version-migrator-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.730015296+00:00 stderr F I1013 00:21:42.730010 28251 services_controller.go:415] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.730064998+00:00 stderr F I1013 00:21:42.730027 28251 services_controller.go:421] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB 
[]services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730064998+00:00 stderr F I1013 00:21:42.729974 28251 services_controller.go:421] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB []services.LB{} 2025-10-13T00:21:42.730064998+00:00 stderr F I1013 00:21:42.730057 28251 services_controller.go:422] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.730075808+00:00 stderr F I1013 00:21:42.730066 28251 services_controller.go:423] Built service openshift-kube-storage-version-migrator-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.730084568+00:00 stderr F I1013 00:21:42.730075 28251 services_controller.go:424] Service openshift-kube-storage-version-migrator-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.730092838+00:00 stderr F I1013 00:21:42.730062 28251 services_controller.go:422] Built service openshift-kube-controller-manager/kube-controller-manager per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.730103069+00:00 stderr F I1013 00:21:42.730094 28251 services_controller.go:423] Built service openshift-kube-controller-manager/kube-controller-manager template LB []services.LB{} 2025-10-13T00:21:42.730111219+00:00 stderr F I1013 00:21:42.730105 28251 services_controller.go:424] Service openshift-kube-controller-manager/kube-controller-manager has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.730140110+00:00 stderr F I1013 00:21:42.730090 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730140110+00:00 stderr F I1013 00:21:42.730119 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"} 2025-10-13T00:21:42.730140110+00:00 stderr F I1013 00:21:42.730135 28251 services_controller.go:336] Finished syncing service etcd on namespace openshift-etcd : 816.912µs 2025-10-13T00:21:42.730154170+00:00 stderr F I1013 00:21:42.730147 28251 services_controller.go:332] Processing sync for service openshift-monitoring/cluster-monitoring-operator 2025-10-13T00:21:42.730154170+00:00 stderr F I1013 00:21:42.730121 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"b8952058-0880-4949-ae72-093072a0d7c5", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.730163230+00:00 stderr F I1013 00:21:42.730155 28251 services_controller.go:336] Finished syncing service cluster-monitoring-operator on namespace openshift-monitoring : 7.45µs 2025-10-13T00:21:42.730171781+00:00 stderr F I1013 00:21:42.730163 28251 services_controller.go:332] Processing sync for service openshift-service-ca-operator/metrics 2025-10-13T00:21:42.730247743+00:00 stderr F I1013 00:21:42.730169 28251 
services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-service-ca-operator 030283b3-acfe-40ed-811c-d9f7f79607f6 5225 0 2024-06-26 12:39:07 +0000 UTC map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000733cdf }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: service-ca-operator,},ClusterIP:10.217.5.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.165],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.730261923+00:00 stderr F I1013 00:21:42.730219 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.151:443:10.217.0.16:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.730261923+00:00 stderr F I1013 00:21:42.730251 28251 lb_config.go:1016] Cluster endpoints for openshift-service-ca-operator/metrics are: map[TCP/https:{8443 [10.217.0.10] []}] 2025-10-13T00:21:42.730270793+00:00 stderr F I1013 00:21:42.730237 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.112:443:192.168.126.11:10257]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8952058-0880-4949-ae72-093072a0d7c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.730279673+00:00 stderr F I1013 00:21:42.730267 28251 services_controller.go:413] Built service openshift-service-ca-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.165"}, protocol:"TCP", inport:443, 
clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.10"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.730279673+00:00 stderr F I1013 00:21:42.730268 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"} 2025-10-13T00:21:42.730288474+00:00 stderr F I1013 00:21:42.730279 28251 services_controller.go:414] Built service openshift-service-ca-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.730288474+00:00 stderr F I1013 00:21:42.730256 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.151:443:10.217.0.16:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.730300134+00:00 stderr F I1013 00:21:42.730287 28251 services_controller.go:415] Built service openshift-service-ca-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.730300134+00:00 stderr F I1013 00:21:42.730271 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.112:443:192.168.126.11:10257]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8952058-0880-4949-ae72-093072a0d7c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.730373196+00:00 stderr F I1013 00:21:42.730308 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"} 2025-10-13T00:21:42.730373196+00:00 stderr F I1013 00:21:42.730320 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-controller-manager-operator : 787.981µs 2025-10-13T00:21:42.730373196+00:00 stderr F I1013 00:21:42.730303 28251 services_controller.go:421] Built service openshift-service-ca-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730389326+00:00 stderr F I1013 00:21:42.730380 28251 services_controller.go:422] Built service openshift-service-ca-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.730398197+00:00 stderr F I1013 00:21:42.730391 28251 services_controller.go:423] Built service openshift-service-ca-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.730407027+00:00 stderr F I1013 00:21:42.730400 28251 services_controller.go:424] Service openshift-service-ca-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.730464808+00:00 stderr F I1013 00:21:42.730416 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"aa7bf248-62bd-463a-bce9-eaabc90e6138", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730464808+00:00 stderr F I1013 00:21:42.730282 28251 services_controller.go:336] Finished syncing service webhook on namespace openshift-console-operator : 805.491µs 2025-10-13T00:21:42.730480039+00:00 stderr F I1013 00:21:42.730474 28251 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-marketplace 2025-10-13T00:21:42.730541531+00:00 stderr F I1013 00:21:42.730361 28251 services_controller.go:332] Processing sync for service openshift-dns/dns-default 2025-10-13T00:21:42.730541531+00:00 stderr F I1013 00:21:42.730482 28251 services_controller.go:397] Service redhat-marketplace retrieved from lister: &Service{ObjectMeta:{redhat-marketplace openshift-marketplace 73712edb-385d-4bf0-9c03-b6c570b1a22f 6434 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true olm.service-spec-hash:aUeLNNcZzVZO2rcaZ5Kc8V3jffO0Ss4T6qX6V5] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-marketplace 6f259421-4edb-49d8-a6ce-aa41dfc64264 0xc00073255d 0xc00073255e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-marketplace,olm.managed: 
true,},ClusterIP:10.217.4.65,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.65],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.730581352+00:00 stderr F I1013 00:21:42.730531 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.165:443:10.217.0.10:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aa7bf248-62bd-463a-bce9-eaabc90e6138}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.730590932+00:00 stderr F I1013 00:21:42.730532 28251 services_controller.go:397] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns 9c0247d8-5697-41ea-812e-582bb93c9b4d 5259 0 2024-06-26 12:47:19 +0000 UTC map[dns.operator.openshift.io/owning-dns:default] map[service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 DNS default 8e7b8280-016f-4ceb-a792-fc5be2494468 0xc000296237 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.730602782+00:00 stderr F I1013 00:21:42.730576 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.165:443:10.217.0.10:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aa7bf248-62bd-463a-bce9-eaabc90e6138}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
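
The entries above trace one pass of the ovn-kubernetes services controller for several Services: it reads the Service and its cluster endpoints, builds the desired cluster-wide or per-node load balancer, compares it with the Load_Balancer state already in OVN ("Services do not match, existing lbs ... built lbs ..."), and generates an OVSDB update op against the existing row's UUID. A minimal, self-contained Go sketch of that build/compare/update pattern follows; the LB type and the buildClusterLB and syncService helpers are simplified stand-ins invented for illustration and are not the actual ovn-kubernetes types or APIs.

package main

import (
	"fmt"
	"reflect"
)

// LB is a simplified stand-in for the services.LB structs dumped in the log.
type LB struct {
	Name     string
	Protocol string
	Reject   bool
	VIPs     map[string]string // "clusterIP:port" -> "endpointIP:port"
}

// buildClusterLB builds the desired cluster-wide load balancer for a ClusterIP
// Service, mirroring the "Built service ... cluster-wide LB" entries.
func buildClusterLB(ns, name, vip string, port int, backend string, targetPort int) LB {
	return LB{
		Name:     fmt.Sprintf("Service_%s/%s_TCP_cluster", ns, name),
		Protocol: "TCP",
		Reject:   true, // reject traffic while there are no endpoints
		VIPs: map[string]string{
			fmt.Sprintf("%s:%d", vip, port): fmt.Sprintf("%s:%d", backend, targetPort),
		},
	}
}

// syncService compares the load balancer read back from OVN with the freshly
// built one and returns an update op when they differ, analogous to the
// "Services do not match ... Update operations generated" entries.
func syncService(existing, built LB) (string, bool) {
	if reflect.DeepEqual(existing, built) {
		return "", false // desired state already present, nothing to do
	}
	return fmt.Sprintf("update Load_Balancer %s vips=%v reject=%v",
		built.Name, built.VIPs, built.Reject), true
}

func main() {
	// Values taken from the openshift-service-ca-operator/metrics entries above.
	existing := LB{Name: "Service_openshift-service-ca-operator/metrics_TCP_cluster", Protocol: "tcp"}
	built := buildClusterLB("openshift-service-ca-operator", "metrics",
		"10.217.5.165", 443, "10.217.0.10", 8443)
	if op, changed := syncService(existing, built); changed {
		fmt.Println(op) // in the real controller this is handed to the OVSDB transaction layer
	}
}

Run with the openshift-service-ca-operator/metrics values from the log, the sketch prints one update op mapping VIP 10.217.5.165:443 to backend 10.217.0.10:8443, roughly the shape of the "Update operations generated as: [{Op:update Table:Load_Balancer ...}]" entry above.
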
2025-10-13T00:21:42.730611292+00:00 stderr F I1013 00:21:42.730556 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-marketplace are: map[TCP/grpc:{50051 [10.217.0.33] []}] 2025-10-13T00:21:42.730611292+00:00 stderr F I1013 00:21:42.730601 28251 lb_config.go:1016] Cluster endpoints for openshift-dns/dns-default are: map[TCP/dns-tcp:{5353 [10.217.0.31] []} TCP/metrics:{9154 [10.217.0.31] []} UDP/dns:{5353 [10.217.0.31] []}] 2025-10-13T00:21:42.730623813+00:00 stderr F I1013 00:21:42.730610 28251 services_controller.go:413] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.65"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.33"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.730623813+00:00 stderr F I1013 00:21:42.730617 28251 services_controller.go:413] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.730632473+00:00 stderr F I1013 00:21:42.730621 28251 services_controller.go:414] Built service openshift-marketplace/redhat-marketplace LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.730632473+00:00 stderr F I1013 00:21:42.730627 28251 services_controller.go:415] Built service openshift-marketplace/redhat-marketplace LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.730641683+00:00 stderr F I1013 00:21:42.730625 28251 services_controller.go:414] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"UDP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:9154, clusterEndpoints:services.lbEndpoints{Port:9154, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.730652804+00:00 stderr F I1013 00:21:42.730640 28251 services_controller.go:415] Built service openshift-dns/dns-default LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.730652804+00:00 stderr F I1013 00:21:42.730640 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"} 2025-10-13T00:21:42.730661274+00:00 stderr F I1013 00:21:42.730638 28251 services_controller.go:421] Built service openshift-marketplace/redhat-marketplace cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730661274+00:00 stderr F I1013 00:21:42.730655 28251 services_controller.go:336] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager : 846.033µs 2025-10-13T00:21:42.730661274+00:00 stderr F I1013 00:21:42.730657 28251 services_controller.go:422] Built service openshift-marketplace/redhat-marketplace per-node LB []services.LB{} 2025-10-13T00:21:42.730671144+00:00 stderr F I1013 00:21:42.730663 28251 services_controller.go:423] Built service openshift-marketplace/redhat-marketplace template LB []services.LB{} 2025-10-13T00:21:42.730671144+00:00 stderr F I1013 00:21:42.730666 28251 services_controller.go:421] Built service openshift-dns/dns-default cluster-wide LB []services.LB{} 2025-10-13T00:21:42.730679454+00:00 stderr F I1013 00:21:42.730670 28251 services_controller.go:424] Service openshift-marketplace/redhat-marketplace has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.730714965+00:00 stderr F I1013 00:21:42.730673 28251 services_controller.go:422] Built service openshift-dns/dns-default per-node LB []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.730714965+00:00 stderr F I1013 00:21:42.730591 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"} 2025-10-13T00:21:42.730727326+00:00 stderr F I1013 00:21:42.730715 28251 services_controller.go:336] Finished 
syncing service metrics on namespace openshift-kube-storage-version-migrator-operator : 810.882µs 2025-10-13T00:21:42.730737696+00:00 stderr F I1013 00:21:42.730731 28251 services_controller.go:332] Processing sync for service openshift-apiserver-operator/metrics 2025-10-13T00:21:42.730737696+00:00 stderr F I1013 00:21:42.730663 28251 services_controller.go:332] Processing sync for service openshift-authentication/oauth-openshift 2025-10-13T00:21:42.730799237+00:00 stderr F I1013 00:21:42.730741 28251 services_controller.go:397] Service oauth-openshift retrieved from lister: &Service{ObjectMeta:{oauth-openshift openshift-authentication 64190ecd-229c-482a-966a-b5649b5042ed 5248 0 2024-06-26 12:47:15 +0000 UTC map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.40,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.40],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.730799237+00:00 stderr F I1013 00:21:42.730739 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-apiserver-operator 4c2fba48-c67e-4420-9529-0bb456da4341 4348 0 2024-06-26 12:39:11 +0000 UTC map[app:openshift-apiserver-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:openshift-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002ef387 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-apiserver-operator,},ClusterIP:10.217.5.125,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.125],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.730809838+00:00 stderr F I1013 00:21:42.730798 28251 lb_config.go:1016] Cluster 
endpoints for openshift-authentication/oauth-openshift are: map[TCP/https:{6443 [10.217.0.72] []}] 2025-10-13T00:21:42.730822478+00:00 stderr F I1013 00:21:42.730809 28251 services_controller.go:413] Built service openshift-authentication/oauth-openshift LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.40"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.72"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.730822478+00:00 stderr F I1013 00:21:42.730813 28251 lb_config.go:1016] Cluster endpoints for openshift-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.6] []}] 2025-10-13T00:21:42.730831668+00:00 stderr F I1013 00:21:42.730817 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"} 2025-10-13T00:21:42.730841209+00:00 stderr F I1013 00:21:42.730832 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-service-ca-operator : 669.148µs 2025-10-13T00:21:42.730841209+00:00 stderr F I1013 00:21:42.730681 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"026ccb94-9135-4ac8-9f1b-4f7198943ac3", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730850619+00:00 stderr F I1013 00:21:42.730845 28251 services_controller.go:332] Processing sync for service openshift-multus/multus-admission-controller 2025-10-13T00:21:42.730928951+00:00 stderr F I1013 00:21:42.730818 28251 services_controller.go:414] Built service openshift-authentication/oauth-openshift LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.730928951+00:00 stderr F I1013 00:21:42.730918 28251 services_controller.go:415] Built service openshift-authentication/oauth-openshift LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.730958952+00:00 stderr F I1013 00:21:42.730930 28251 services_controller.go:421] Built service openshift-authentication/oauth-openshift cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.730958952+00:00 stderr F I1013 00:21:42.730923 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.65:50051:10.217.0.33:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {026ccb94-9135-4ac8-9f1b-4f7198943ac3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731000583+00:00 stderr F I1013 00:21:42.730959 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.65:50051:10.217.0.33:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {026ccb94-9135-4ac8-9f1b-4f7198943ac3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731025324+00:00 stderr F I1013 00:21:42.730829 28251 services_controller.go:413] Built service openshift-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.125"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.6"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.731072295+00:00 stderr F I1013 00:21:42.731055 28251 services_controller.go:414] Built service openshift-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.731102886+00:00 stderr F I1013 00:21:42.731091 28251 services_controller.go:415] Built service openshift-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.731135707+00:00 stderr F I1013 00:21:42.730949 28251 services_controller.go:422] Built service openshift-authentication/oauth-openshift per-node LB []services.LB{} 2025-10-13T00:21:42.731135707+00:00 stderr F I1013 00:21:42.731127 28251 services_controller.go:423] Built service openshift-authentication/oauth-openshift template LB []services.LB{} 2025-10-13T00:21:42.731144907+00:00 stderr F I1013 00:21:42.731134 28251 services_controller.go:424] Service openshift-authentication/oauth-openshift has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.731181838+00:00 stderr 
F I1013 00:21:42.731145 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"01ea2a7a-b4a0-46ba-a9ec-611e94b854e1", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.731243459+00:00 stderr F I1013 00:21:42.730706 28251 services_controller.go:423] Built service openshift-dns/dns-default template LB []services.LB{} 2025-10-13T00:21:42.731253850+00:00 stderr F I1013 00:21:42.731237 28251 services_controller.go:424] Service openshift-dns/dns-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.731253850+00:00 stderr F I1013 00:21:42.731231 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:10.217.0.72:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01ea2a7a-b4a0-46ba-a9ec-611e94b854e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731285501+00:00 stderr F I1013 00:21:42.731256 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:10.217.0.72:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01ea2a7a-b4a0-46ba-a9ec-611e94b854e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731295731+00:00 stderr F I1013 00:21:42.731276 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"} 2025-10-13T00:21:42.731295731+00:00 stderr F I1013 00:21:42.731291 28251 
services_controller.go:336] Finished syncing service redhat-marketplace on namespace openshift-marketplace : 819.122µs 2025-10-13T00:21:42.731323232+00:00 stderr F I1013 00:21:42.731303 28251 services_controller.go:332] Processing sync for service openshift-kube-apiserver/apiserver 2025-10-13T00:21:42.731323232+00:00 stderr F I1013 00:21:42.730852 28251 services_controller.go:397] Service multus-admission-controller retrieved from lister: &Service{ObjectMeta:{multus-admission-controller openshift-multus 35568373-18ec-4ba2-8d18-12de10aa5a3f 5005 0 2024-06-26 12:45:47 +0000 UTC map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000732d07 0xc000732d08}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: multus-admission-controller,},ClusterIP:10.217.4.247,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.247],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.731341172+00:00 stderr F I1013 00:21:42.731258 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"19e6099f-93ac-4c53-b159-d6e39047458d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"0638bd05-8826-488d-bed7-e92d2c002903", Protocol:"udp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", 
Port:5353, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.731363533+00:00 stderr F I1013 00:21:42.731350 28251 lb_config.go:1016] Cluster endpoints for openshift-multus/multus-admission-controller are: map[TCP/metrics:{8443 [10.217.0.32] []} TCP/webhook:{6443 [10.217.0.32] []}] 2025-10-13T00:21:42.731375673+00:00 stderr F I1013 00:21:42.731364 28251 services_controller.go:413] Built service openshift-multus/multus-admission-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.731387873+00:00 stderr F I1013 00:21:42.731375 28251 services_controller.go:414] Built service openshift-multus/multus-admission-controller LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.731387873+00:00 stderr F I1013 00:21:42.731380 28251 services_controller.go:415] Built service openshift-multus/multus-admission-controller LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.731387873+00:00 stderr F I1013 00:21:42.731315 28251 services_controller.go:397] Service apiserver retrieved from lister: &Service{ObjectMeta:{apiserver openshift-kube-apiserver 44a33f79-7e24-4f1b-bc46-f52dfcec13b8 3793 0 2024-06-26 12:47:04 +0000 UTC map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: 
true,},ClusterIP:10.217.5.86,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.86],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.731424324+00:00 stderr F I1013 00:21:42.731391 28251 services_controller.go:421] Built service openshift-multus/multus-admission-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.731424324+00:00 stderr F I1013 00:21:42.731402 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver/apiserver are: map[TCP/https:{6443 [192.168.126.11] []}] 2025-10-13T00:21:42.731424324+00:00 stderr F I1013 00:21:42.731413 28251 services_controller.go:422] Built service openshift-multus/multus-admission-controller per-node LB []services.LB{} 2025-10-13T00:21:42.731424324+00:00 stderr F I1013 00:21:42.731420 28251 services_controller.go:423] Built service openshift-multus/multus-admission-controller template LB []services.LB{} 2025-10-13T00:21:42.731435865+00:00 stderr F I1013 00:21:42.731426 28251 services_controller.go:424] Service openshift-multus/multus-admission-controller has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.731435865+00:00 stderr F I1013 00:21:42.731428 28251 services_controller.go:413] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.731448125+00:00 stderr F I1013 00:21:42.731435 28251 services_controller.go:414] Built service openshift-kube-apiserver/apiserver LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.86"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.731460385+00:00 stderr F I1013 00:21:42.731449 28251 services_controller.go:415] Built service openshift-kube-apiserver/apiserver LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.731460385+00:00 stderr F I1013 00:21:42.731437 28251 services_controller.go:443] Services do not match, existing lbs: 
[]services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"8f8bf377-ba57-45b8-b726-f3b9236cc1ab", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.731471946+00:00 stderr F I1013 00:21:42.731437 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353 10.217.4.10:9154:10.217.0.31:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {19e6099f-93ac-4c53-b159-d6e39047458d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731480646+00:00 stderr F I1013 00:21:42.731469 28251 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{} 2025-10-13T00:21:42.731509687+00:00 stderr F I1013 00:21:42.731209 28251 services_controller.go:421] Built service openshift-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.731509687+00:00 stderr F I1013 00:21:42.731477 28251 services_controller.go:422] Built service openshift-kube-apiserver/apiserver per-node LB 
[]services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.731523267+00:00 stderr F I1013 00:21:42.731504 28251 services_controller.go:422] Built service openshift-apiserver-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.731523267+00:00 stderr F I1013 00:21:42.731507 28251 services_controller.go:423] Built service openshift-kube-apiserver/apiserver template LB []services.LB{} 2025-10-13T00:21:42.731523267+00:00 stderr F I1013 00:21:42.731514 28251 services_controller.go:423] Built service openshift-apiserver-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.731551148+00:00 stderr F I1013 00:21:42.731519 28251 services_controller.go:424] Service openshift-kube-apiserver/apiserver has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.731551148+00:00 stderr F I1013 00:21:42.731521 28251 services_controller.go:424] Service openshift-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.731592809+00:00 stderr F I1013 00:21:42.731521 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.247:443:10.217.0.32:6443 10.217.4.247:8443:10.217.0.32:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f8bf377-ba57-45b8-b726-f3b9236cc1ab}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731592809+00:00 stderr F I1013 00:21:42.731556 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"274b346b-9aa9-401d-b752-d4168e631034", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.731603179+00:00 stderr F I1013 00:21:42.731568 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"6c6bde70-7352-441c-a61d-299eaf2273c0", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.731618950+00:00 stderr F I1013 00:21:42.731584 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0638bd05-8826-488d-bed7-e92d2c002903}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731650720+00:00 stderr F I1013 00:21:42.731608 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.247:443:10.217.0.32:6443 10.217.4.247:8443:10.217.0.32:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f8bf377-ba57-45b8-b726-f3b9236cc1ab}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.731685671+00:00 stderr F I1013 00:21:42.731624 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353 10.217.4.10:9154:10.217.0.31:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {19e6099f-93ac-4c53-b159-d6e39047458d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0638bd05-8826-488d-bed7-e92d2c002903}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731712072+00:00 stderr F I1013 00:21:42.731666 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.125:443:10.217.0.6:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6c6bde70-7352-441c-a61d-299eaf2273c0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731712072+00:00 stderr F I1013 00:21:42.731670 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.86:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {274b346b-9aa9-401d-b752-d4168e631034}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731720922+00:00 stderr F I1013 00:21:42.731697 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.125:443:10.217.0.6:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6c6bde70-7352-441c-a61d-299eaf2273c0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731751943+00:00 stderr F I1013 00:21:42.731706 28251 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.86:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {274b346b-9aa9-401d-b752-d4168e631034}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.731817335+00:00 stderr F I1013 00:21:42.731794 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"} 2025-10-13T00:21:42.731850886+00:00 stderr F I1013 00:21:42.731838 28251 services_controller.go:336] Finished syncing service oauth-openshift on namespace openshift-authentication : 1.172281ms 2025-10-13T00:21:42.731890137+00:00 stderr F I1013 00:21:42.731877 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator 2025-10-13T00:21:42.732004410+00:00 stderr F I1013 00:21:42.731976 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"} 2025-10-13T00:21:42.732004410+00:00 stderr F I1013 00:21:42.731995 28251 services_controller.go:336] Finished syncing service multus-admission-controller on namespace openshift-multus : 1.149041ms 2025-10-13T00:21:42.732022760+00:00 stderr F I1013 00:21:42.732005 28251 services_controller.go:332] Processing sync for service openshift-console/downloads 2025-10-13T00:21:42.732057771+00:00 stderr F I1013 00:21:42.731911 28251 services_controller.go:397] Service machine-api-operator retrieved from lister: &Service{ObjectMeta:{machine-api-operator openshift-machine-api ef047d6e-c72f-4309-a95e-08fb0ed08662 4792 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:machine-api-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000297a9b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.127,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.127],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.732107733+00:00 stderr F I1013 00:21:42.732010 28251 services_controller.go:397] Service 
downloads retrieved from lister: &Service{ObjectMeta:{downloads openshift-console d6818508-d113-4821-84c8-94f59cfa13cb 9742 0 2024-06-26 12:53:44 +0000 UTC map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.196,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.196],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.732107733+00:00 stderr F I1013 00:21:42.732095 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"} 2025-10-13T00:21:42.732119763+00:00 stderr F I1013 00:21:42.732110 28251 services_controller.go:336] Finished syncing service dns-default on namespace openshift-dns : 1.749057ms 2025-10-13T00:21:42.732128373+00:00 stderr F I1013 00:21:42.732112 28251 lb_config.go:1016] Cluster endpoints for openshift-console/downloads are: map[TCP/http:{8080 [10.217.0.66] []}] 2025-10-13T00:21:42.732128373+00:00 stderr F I1013 00:21:42.732124 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics 2025-10-13T00:21:42.732138973+00:00 stderr F I1013 00:21:42.732127 28251 services_controller.go:413] Built service openshift-console/downloads LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.196"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.66"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.732150464+00:00 stderr F I1013 00:21:42.732140 28251 services_controller.go:414] Built service openshift-console/downloads LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.732150464+00:00 stderr F I1013 00:21:42.732148 28251 services_controller.go:415] Built service openshift-console/downloads LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.732182935+00:00 stderr F I1013 00:21:42.732167 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator are: map[TCP/https:{8443 [10.217.0.5] []}] 2025-10-13T00:21:42.732216476+00:00 stderr F I1013 00:21:42.732202 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.127"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.5"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.732240566+00:00 stderr F I1013 00:21:42.732231 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-operator LB per-node configs []services.lbConfig(nil) 
2025-10-13T00:21:42.732266577+00:00 stderr F I1013 00:21:42.732256 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.732345019+00:00 stderr F I1013 00:21:42.732296 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.732390190+00:00 stderr F I1013 00:21:42.732374 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-operator per-node LB []services.LB{} 2025-10-13T00:21:42.732422491+00:00 stderr F I1013 00:21:42.732412 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-operator template LB []services.LB{} 2025-10-13T00:21:42.732449192+00:00 stderr F I1013 00:21:42.732438 28251 services_controller.go:424] Service openshift-machine-api/machine-api-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.732514724+00:00 stderr F I1013 00:21:42.732233 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"} 2025-10-13T00:21:42.732514724+00:00 stderr F I1013 00:21:42.732506 28251 services_controller.go:336] Finished syncing service apiserver on namespace openshift-kube-apiserver : 1.201262ms 2025-10-13T00:21:42.732526594+00:00 stderr F I1013 00:21:42.732518 28251 services_controller.go:332] Processing sync for service openshift-ingress-operator/metrics 2025-10-13T00:21:42.732556595+00:00 stderr F I1013 00:21:42.732474 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"5e50823a-046a-4005-92b9-c7d2d651ffe9", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.732620516+00:00 stderr F I1013 00:21:42.732254 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"} 2025-10-13T00:21:42.732653477+00:00 stderr F I1013 00:21:42.732641 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-apiserver-operator : 1.907861ms 2025-10-13T00:21:42.732689178+00:00 stderr F I1013 00:21:42.732678 28251 services_controller.go:332] Processing sync for service openshift-marketplace/marketplace-operator-metrics 2025-10-13T00:21:42.732762580+00:00 stderr F I1013 00:21:42.732524 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-ingress-operator 1e390522-c38e-4189-86b8-ad75c61e3844 6514 0 2024-06-26 12:47:50 +0000 UTC map[name:ingress-operator] map[capability.openshift.io/name:Ingress include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296b57 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.4.255,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.255],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.732790651+00:00 stderr F I1013 00:21:42.732770 28251 lb_config.go:1016] Cluster endpoints for openshift-ingress-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.45] []}] 2025-10-13T00:21:42.732799951+00:00 stderr F I1013 00:21:42.732786 28251 services_controller.go:413] Built service openshift-ingress-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.255"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.45"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.732799951+00:00 stderr F I1013 00:21:42.732794 28251 services_controller.go:414] Built service openshift-ingress-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.732812942+00:00 stderr F I1013 00:21:42.732800 28251 services_controller.go:415] Built service openshift-ingress-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.732823242+00:00 stderr F I1013 00:21:42.732660 28251 
model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.127:8443:10.217.0.5:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e50823a-046a-4005-92b9-c7d2d651ffe9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.732831662+00:00 stderr F I1013 00:21:42.732810 28251 services_controller.go:421] Built service openshift-ingress-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.732831662+00:00 stderr F I1013 00:21:42.732827 28251 services_controller.go:422] Built service openshift-ingress-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.732840262+00:00 stderr F I1013 00:21:42.732832 28251 services_controller.go:423] Built service openshift-ingress-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.732848673+00:00 stderr F I1013 00:21:42.732838 28251 services_controller.go:424] Service openshift-ingress-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.732874213+00:00 stderr F I1013 00:21:42.732831 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.127:8443:10.217.0.5:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e50823a-046a-4005-92b9-c7d2d651ffe9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.732874213+00:00 stderr F I1013 00:21:42.732849 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"3ce3d754-52d1-4b39-a0be-cc15ffb5a10c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.732923495+00:00 stderr F I1013 00:21:42.732710 28251 services_controller.go:397] Service marketplace-operator-metrics retrieved from lister: &Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace 1bfd7637-f88e-403e-8d75-c71b380fc127 4909 0 2024-06-26 12:39:13 +0000 UTC map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000732487 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.19,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.19],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.732959886+00:00 stderr F I1013 00:21:42.732928 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.255:9393:10.217.0.45:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ce3d754-52d1-4b39-a0be-cc15ffb5a10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.732983186+00:00 stderr F I1013 00:21:42.732955 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.255:9393:10.217.0.45:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ce3d754-52d1-4b39-a0be-cc15ffb5a10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.733036198+00:00 stderr F I1013 00:21:42.732132 28251 services_controller.go:397] Service olm-operator-metrics retrieved from lister: &Service{ObjectMeta:{olm-operator-metrics openshift-operator-lifecycle-manager f54a9b6f-c334-4276-9ca3-b290325fd276 5100 0 2024-06-26 12:39:23 +0000 UTC map[app:olm-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:olm-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073365f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: olm-operator,},ClusterIP:10.217.5.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.733050488+00:00 stderr F I1013 00:21:42.733040 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.14] []}] 2025-10-13T00:21:42.733061098+00:00 stderr F I1013 00:21:42.733052 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.220"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.14"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.733069508+00:00 stderr F I1013 00:21:42.733062 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.733069508+00:00 stderr F I1013 00:21:42.733067 28251 services_controller.go:415] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.733101679+00:00 stderr F I1013 00:21:42.733080 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"} 2025-10-13T00:21:42.733101679+00:00 stderr F I1013 00:21:42.732162 28251 services_controller.go:421] Built service openshift-console/downloads cluster-wide LB 
[]services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.733101679+00:00 stderr F I1013 00:21:42.733079 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.733116750+00:00 stderr F I1013 00:21:42.733101 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.733116750+00:00 stderr F I1013 00:21:42.733102 28251 services_controller.go:422] Built service openshift-console/downloads per-node LB []services.LB{} 2025-10-13T00:21:42.733116750+00:00 stderr F I1013 00:21:42.733108 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB []services.LB{} 2025-10-13T00:21:42.733116750+00:00 stderr F I1013 00:21:42.733113 28251 services_controller.go:423] Built service openshift-console/downloads template LB []services.LB{} 2025-10-13T00:21:42.733126870+00:00 stderr F I1013 00:21:42.733115 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/olm-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.733126870+00:00 stderr F I1013 00:21:42.733119 28251 services_controller.go:424] Service openshift-console/downloads has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.733151321+00:00 stderr F I1013 00:21:42.733019 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/marketplace-operator-metrics are: map[TCP/https-metrics:{8081 [10.217.0.30] []} TCP/metrics:{8383 [10.217.0.30] []}] 2025-10-13T00:21:42.733201442+00:00 stderr F I1013 00:21:42.733176 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"} 2025-10-13T00:21:42.733201442+00:00 stderr F I1013 00:21:42.733190 28251 services_controller.go:336] Finished syncing service metrics on namespace 
openshift-ingress-operator : 671.688µs 2025-10-13T00:21:42.733212542+00:00 stderr F I1013 00:21:42.733201 28251 services_controller.go:332] Processing sync for service openshift-route-controller-manager/route-controller-manager 2025-10-13T00:21:42.733240303+00:00 stderr F I1013 00:21:42.733177 28251 services_controller.go:413] Built service openshift-marketplace/marketplace-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8383, clusterEndpoints:services.lbEndpoints{Port:8383, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8081, clusterEndpoints:services.lbEndpoints{Port:8081, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.733282794+00:00 stderr F I1013 00:21:42.733267 28251 services_controller.go:414] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.733313655+00:00 stderr F I1013 00:21:42.733302 28251 services_controller.go:415] Built service openshift-marketplace/marketplace-operator-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.733395327+00:00 stderr F I1013 00:21:42.733357 28251 services_controller.go:421] Built service openshift-marketplace/marketplace-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.733456529+00:00 stderr F I1013 00:21:42.733442 28251 services_controller.go:422] Built service openshift-marketplace/marketplace-operator-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.733483980+00:00 stderr F I1013 00:21:42.733475 28251 services_controller.go:423] Built service openshift-marketplace/marketplace-operator-metrics template LB []services.LB{} 2025-10-13T00:21:42.733510630+00:00 stderr F I1013 00:21:42.733499 28251 services_controller.go:424] Service openshift-marketplace/marketplace-operator-metrics has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.733574502+00:00 stderr F I1013 00:21:42.733531 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", 
UUID:"fd4c9213-5ce0-448d-a75e-3598556830dc", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.733696395+00:00 stderr F I1013 00:21:42.733126 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"ed8cbb68-e50d-4e49-8495-eff895089054", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.733776477+00:00 stderr F I1013 00:21:42.733092 28251 services_controller.go:336] Finished syncing service machine-api-operator on namespace openshift-machine-api : 1.216123ms 2025-10-13T00:21:42.733776477+00:00 stderr F I1013 00:21:42.733748 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.220:8443:10.217.0.14:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed8cbb68-e50d-4e49-8495-eff895089054}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.733808818+00:00 stderr F I1013 00:21:42.733775 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.220:8443:10.217.0.14:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed8cbb68-e50d-4e49-8495-eff895089054}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.733836319+00:00 stderr F I1013 00:21:42.733734 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.19:8081:10.217.0.30:8081 10.217.5.19:8383:10.217.0.30:8383]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd4c9213-5ce0-448d-a75e-3598556830dc}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.733894731+00:00 stderr F I1013 00:21:42.733859 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.19:8081:10.217.0.30:8081 10.217.5.19:8383:10.217.0.30:8383]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd4c9213-5ce0-448d-a75e-3598556830dc}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.733961512+00:00 stderr F I1013 00:21:42.733132 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"66ce05b8-462b-4fd9-b81f-bcc75a439997", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734028124+00:00 stderr F I1013 00:21:42.733894 28251 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-operator 2025-10-13T00:21:42.734069095+00:00 stderr F I1013 00:21:42.733207 28251 services_controller.go:397] Service route-controller-manager retrieved from lister: &Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 105b901a-a54d-4dae-b7b6-99e83d48166f 5156 0 2024-06-26 12:47:06 +0000 UTC map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{route-controller-manager: true,},ClusterIP:10.217.5.173,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.173],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734069095+00:00 stderr F I1013 00:21:42.734019 28251 services_controller.go:397] Service machine-config-operator retrieved from lister: &Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 355a1056-7d77-4a52-a1f5-8eb39c13574e 4891 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073221b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: 
machine-config-operator,},ClusterIP:10.217.5.4,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.4],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734069095+00:00 stderr F I1013 00:21:42.734036 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.196:80:10.217.0.66:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {66ce05b8-462b-4fd9-b81f-bcc75a439997}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734084256+00:00 stderr F I1013 00:21:42.734074 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-operator are: map[TCP/metrics:{9001 [10.217.0.21] []}] 2025-10-13T00:21:42.734094766+00:00 stderr F I1013 00:21:42.734080 28251 lb_config.go:1016] Cluster endpoints for openshift-route-controller-manager/route-controller-manager are: map[TCP/https:{8443 [10.217.0.88] []}] 2025-10-13T00:21:42.734094766+00:00 stderr F I1013 00:21:42.734085 28251 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.4"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.21"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734103586+00:00 stderr F I1013 00:21:42.734071 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.196:80:10.217.0.66:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {66ce05b8-462b-4fd9-b81f-bcc75a439997}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734103586+00:00 stderr F I1013 00:21:42.734096 28251 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734103586+00:00 stderr F I1013 00:21:42.734095 28251 services_controller.go:413] Built service openshift-route-controller-manager/route-controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.173"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.88"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734112717+00:00 stderr F I1013 00:21:42.734102 28251 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734112717+00:00 stderr F I1013 00:21:42.734105 28251 services_controller.go:414] Built service openshift-route-controller-manager/route-controller-manager LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734124347+00:00 stderr F I1013 00:21:42.734113 28251 services_controller.go:415] Built service openshift-route-controller-manager/route-controller-manager LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734135037+00:00 stderr F I1013 00:21:42.734116 28251 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734143347+00:00 stderr F I1013 00:21:42.734136 28251 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-operator per-node LB []services.LB{} 2025-10-13T00:21:42.734143347+00:00 stderr F I1013 00:21:42.734126 28251 services_controller.go:421] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734152078+00:00 stderr F I1013 00:21:42.734142 28251 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-operator template LB []services.LB{} 2025-10-13T00:21:42.734152078+00:00 stderr F I1013 00:21:42.734146 28251 services_controller.go:422] Built service openshift-route-controller-manager/route-controller-manager per-node LB []services.LB{} 2025-10-13T00:21:42.734160638+00:00 stderr F I1013 00:21:42.734149 28251 services_controller.go:424] Service openshift-machine-config-operator/machine-config-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734160638+00:00 
stderr F I1013 00:21:42.734152 28251 services_controller.go:423] Built service openshift-route-controller-manager/route-controller-manager template LB []services.LB{} 2025-10-13T00:21:42.734168988+00:00 stderr F I1013 00:21:42.734159 28251 services_controller.go:424] Service openshift-route-controller-manager/route-controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734200719+00:00 stderr F I1013 00:21:42.734162 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"f2bd9885-097c-4b3e-916f-b4598e49252e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734200719+00:00 stderr F I1013 00:21:42.734121 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"} 2025-10-13T00:21:42.734200719+00:00 stderr F I1013 00:21:42.734171 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"5536fef9-19d0-4375-b45b-e06dafe12061", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734214599+00:00 stderr F I1013 00:21:42.734199 28251 services_controller.go:336] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager : 2.071866ms 2025-10-13T00:21:42.734222919+00:00 stderr F I1013 00:21:42.734213 28251 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-target 2025-10-13T00:21:42.734288211+00:00 stderr F I1013 00:21:42.734219 28251 services_controller.go:397] Service network-check-target retrieved from lister: &Service{ObjectMeta:{network-check-target openshift-network-diagnostics 151fdab6-cca2-4880-a96c-48e605cc8d3d 2803 0 2024-06-26 12:45:59 +0000 UTC map[] map[] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000733147 0xc000733148}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: network-check-target,},ClusterIP:10.217.5.248,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.248],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734288211+00:00 stderr F I1013 00:21:42.734246 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.4:9001:10.217.0.21:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f2bd9885-097c-4b3e-916f-b4598e49252e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734288211+00:00 stderr F I1013 00:21:42.734262 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.173:443:10.217.0.88:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5536fef9-19d0-4375-b45b-e06dafe12061}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734288211+00:00 stderr F I1013 00:21:42.734278 28251 lb_config.go:1016] Cluster endpoints for openshift-network-diagnostics/network-check-target are: map[TCP/:{8080 [10.217.0.4] []}] 2025-10-13T00:21:42.734303882+00:00 stderr F I1013 00:21:42.734273 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.4:9001:10.217.0.21:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f2bd9885-097c-4b3e-916f-b4598e49252e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734303882+00:00 stderr F I1013 00:21:42.734292 28251 services_controller.go:413] Built service openshift-network-diagnostics/network-check-target LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.248"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.4"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734312412+00:00 stderr F I1013 00:21:42.734302 28251 services_controller.go:414] Built service openshift-network-diagnostics/network-check-target LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734312412+00:00 stderr F I1013 00:21:42.734288 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.173:443:10.217.0.88:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5536fef9-19d0-4375-b45b-e06dafe12061}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734312412+00:00 stderr F I1013 00:21:42.734307 28251 services_controller.go:415] Built service openshift-network-diagnostics/network-check-target LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734372893+00:00 stderr F I1013 00:21:42.734320 28251 services_controller.go:421] Built service openshift-network-diagnostics/network-check-target cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734372893+00:00 stderr F I1013 00:21:42.734367 28251 services_controller.go:422] Built service openshift-network-diagnostics/network-check-target per-node LB []services.LB{} 2025-10-13T00:21:42.734388074+00:00 stderr F I1013 00:21:42.734375 28251 services_controller.go:423] Built service 
openshift-network-diagnostics/network-check-target template LB []services.LB{} 2025-10-13T00:21:42.734388074+00:00 stderr F I1013 00:21:42.734382 28251 services_controller.go:424] Service openshift-network-diagnostics/network-check-target has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734438145+00:00 stderr F I1013 00:21:42.734395 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"64185ba6-b0f4-4c6d-b401-fb03d791f35d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734438145+00:00 stderr F I1013 00:21:42.734422 28251 ovs.go:162] Exec(35): stdout: "" 2025-10-13T00:21:42.734446925+00:00 stderr F I1013 00:21:42.734440 28251 ovs.go:163] Exec(35): stderr: "" 2025-10-13T00:21:42.734454136+00:00 stderr F I1013 00:21:42.734437 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"} 2025-10-13T00:21:42.734475666+00:00 stderr F I1013 00:21:42.734459 28251 ovs.go:159] Exec(36): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . 
external_ids:system-id 2025-10-13T00:21:42.734475666+00:00 stderr F I1013 00:21:42.734455 28251 services_controller.go:336] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace : 1.774638ms 2025-10-13T00:21:42.734489357+00:00 stderr F I1013 00:21:42.734475 28251 services_controller.go:332] Processing sync for service openshift-apiserver/check-endpoints 2025-10-13T00:21:42.734539038+00:00 stderr F I1013 00:21:42.734481 28251 services_controller.go:397] Service check-endpoints retrieved from lister: &Service{ObjectMeta:{check-endpoints openshift-apiserver 435aa879-8965-459a-9b2a-dfd8f8924b3a 5567 0 2024-06-26 12:47:30 +0000 UTC map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002ef777 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:check-endpoints,Protocol:TCP,Port:17698,TargetPort:{0 17698 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.23,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.23],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734539038+00:00 stderr F I1013 00:21:42.734507 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.248:80:10.217.0.4:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64185ba6-b0f4-4c6d-b401-fb03d791f35d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734567199+00:00 stderr F I1013 00:21:42.734541 28251 lb_config.go:1016] Cluster endpoints for openshift-apiserver/check-endpoints are: map[TCP/check-endpoints:{17698 [10.217.0.82] []}] 2025-10-13T00:21:42.734567199+00:00 stderr F I1013 00:21:42.734555 28251 services_controller.go:413] Built service openshift-apiserver/check-endpoints LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.23"}, protocol:"TCP", inport:17698, clusterEndpoints:services.lbEndpoints{Port:17698, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734575459+00:00 stderr F I1013 00:21:42.734564 28251 services_controller.go:414] Built service openshift-apiserver/check-endpoints LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734575459+00:00 stderr F I1013 00:21:42.734540 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.248:80:10.217.0.4:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64185ba6-b0f4-4c6d-b401-fb03d791f35d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734575459+00:00 stderr F I1013 00:21:42.734570 28251 services_controller.go:415] Built service openshift-apiserver/check-endpoints LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734586389+00:00 stderr F I1013 00:21:42.734567 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"} 2025-10-13T00:21:42.734586389+00:00 stderr F I1013 00:21:42.734580 28251 services_controller.go:336] Finished syncing service downloads on namespace openshift-console : 2.574429ms 2025-10-13T00:21:42.734614520+00:00 stderr F I1013 00:21:42.734581 28251 services_controller.go:421] Built service openshift-apiserver/check-endpoints cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734614520+00:00 stderr F I1013 00:21:42.734601 28251 services_controller.go:422] Built service openshift-apiserver/check-endpoints per-node LB []services.LB{} 2025-10-13T00:21:42.734614520+00:00 stderr F I1013 00:21:42.734607 28251 services_controller.go:423] Built service openshift-apiserver/check-endpoints template LB []services.LB{} 2025-10-13T00:21:42.734614520+00:00 stderr F I1013 00:21:42.734510 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"} 2025-10-13T00:21:42.734622980+00:00 stderr F I1013 00:21:42.734613 28251 services_controller.go:424] Service openshift-apiserver/check-endpoints has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734631220+00:00 stderr F I1013 00:21:42.734623 28251 services_controller.go:336] Finished syncing service machine-config-operator on namespace openshift-machine-config-operator : 848.213µs 2025-10-13T00:21:42.734652941+00:00 stderr F I1013 00:21:42.734641 28251 services_controller.go:332] Processing sync for service openshift-kube-scheduler-operator/metrics 2025-10-13T00:21:42.734652941+00:00 stderr F I1013 00:21:42.734627 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", 
UUID:"9644365b-5102-49b0-be5c-9e41d7162eca", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734694982+00:00 stderr F I1013 00:21:42.734592 28251 services_controller.go:332] Processing sync for service openshift-kube-apiserver-operator/metrics 2025-10-13T00:21:42.734723293+00:00 stderr F I1013 00:21:42.734649 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 080e1aaf-7269-495b-ab74-593efe4192ec 4661 0 2024-06-26 12:39:09 +0000 UTC map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002973ab }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-kube-scheduler-operator,},ClusterIP:10.217.4.108,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.108],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734730673+00:00 stderr F I1013 00:21:42.734706 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.23:17698:10.217.0.82:17698]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where 
column _uuid == {9644365b-5102-49b0-be5c-9e41d7162eca}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734745054+00:00 stderr F I1013 00:21:42.734731 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler-operator/metrics are: map[TCP/https:{8443 [10.217.0.12] []}] 2025-10-13T00:21:42.734752554+00:00 stderr F I1013 00:21:42.734693 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-apiserver-operator ed79a864-3d59-456e-8a6c-724ec68e6d1b 4515 0 2024-06-26 12:39:27 +0000 UTC map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296f2b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.31,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.31],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734752554+00:00 stderr F I1013 00:21:42.734730 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.23:17698:10.217.0.82:17698]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9644365b-5102-49b0-be5c-9e41d7162eca}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.734762384+00:00 stderr F I1013 00:21:42.734608 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"} 2025-10-13T00:21:42.734762384+00:00 stderr F I1013 00:21:42.734747 28251 services_controller.go:413] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.108"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.12"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734769474+00:00 stderr F I1013 00:21:42.734759 28251 services_controller.go:336] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager : 1.556152ms 
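The entries up to this point show the services controller turning each ClusterIP Service (route-controller-manager, the machine-config-operator metrics Service, network-check-target, and so on) into a single cluster-wide load-balancer config: the VIP is "ClusterIP:servicePort", the target is "endpointIP:targetPort", and the result is written back as an OVSDB update on the Load_Balancer table row identified by its _uuid. The following is a minimal, hypothetical Go sketch of that VIP mapping only; the lbConfig type and buildVIPs function are simplified stand-ins for illustration and are not the actual ovn-kubernetes services-controller code.

// Illustrative sketch only: simplified stand-in types, not the real
// ovn-kubernetes implementation. It mirrors what the log above reports:
// a ClusterIP Service plus its ready endpoints become one cluster-wide
// load-balancer whose vips map pairs "ClusterIP:port" with
// "endpointIP:targetPort".
package main

import "fmt"

// lbConfig loosely mirrors the services.lbConfig fields printed in the log.
type lbConfig struct {
	vips     []string // Service ClusterIPs
	protocol string   // e.g. "TCP"
	inport   int32    // Service port
	eps      []string // ready endpoint IPs
	epPort   int32    // endpoint (target) port
}

// buildVIPs produces the vips map seen in the Load_Balancer update ops,
// e.g. "10.217.5.173:443" -> "10.217.0.88:8443".
func buildVIPs(c lbConfig) map[string]string {
	vips := map[string]string{}
	for _, vip := range c.vips {
		targets := ""
		for i, ep := range c.eps {
			if i > 0 {
				targets += ","
			}
			targets += fmt.Sprintf("%s:%d", ep, c.epPort)
		}
		vips[fmt.Sprintf("%s:%d", vip, c.inport)] = targets
	}
	return vips
}

func main() {
	// Values taken from the route-controller-manager entries above.
	cfg := lbConfig{
		vips:     []string{"10.217.5.173"},
		protocol: "TCP",
		inport:   443,
		eps:      []string{"10.217.0.88"},
		epPort:   8443,
	}
	fmt.Println(buildVIPs(cfg)) // map[10.217.5.173:443:10.217.0.88:8443]
}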
2025-10-13T00:21:42.734769474+00:00 stderr F I1013 00:21:42.734761 28251 services_controller.go:414] Built service openshift-kube-scheduler-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734776784+00:00 stderr F I1013 00:21:42.734761 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.7] []}] 2025-10-13T00:21:42.734776784+00:00 stderr F I1013 00:21:42.734770 28251 services_controller.go:415] Built service openshift-kube-scheduler-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734776784+00:00 stderr F I1013 00:21:42.734773 28251 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-controller 2025-10-13T00:21:42.734789145+00:00 stderr F I1013 00:21:42.734777 28251 services_controller.go:413] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.31"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.7"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734795805+00:00 stderr F I1013 00:21:42.734790 28251 services_controller.go:414] Built service openshift-kube-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734802415+00:00 stderr F I1013 00:21:42.734797 28251 services_controller.go:415] Built service openshift-kube-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734809015+00:00 stderr F I1013 00:21:42.734788 28251 services_controller.go:421] Built service openshift-kube-scheduler-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734818195+00:00 stderr F I1013 00:21:42.734812 28251 services_controller.go:422] Built service openshift-kube-scheduler-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.734826386+00:00 stderr F I1013 00:21:42.734821 28251 services_controller.go:423] Built service openshift-kube-scheduler-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.734833046+00:00 stderr F I1013 00:21:42.734810 28251 services_controller.go:421] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734833046+00:00 stderr F I1013 00:21:42.734829 28251 services_controller.go:424] Service openshift-kube-scheduler-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734840656+00:00 stderr F I1013 00:21:42.734797 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"} 2025-10-13T00:21:42.734840656+00:00 stderr F I1013 00:21:42.734781 28251 services_controller.go:397] Service machine-config-controller retrieved from lister: &Service{ObjectMeta:{machine-config-controller openshift-machine-config-operator 3ff83f1a-4058-4b9e-a4fd-83f51836c82e 4847 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073207b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734847936+00:00 stderr F I1013 00:21:42.734832 28251 services_controller.go:422] Built service openshift-kube-apiserver-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.734847936+00:00 stderr F I1013 00:21:42.734838 28251 services_controller.go:336] Finished syncing service network-check-target on namespace openshift-network-diagnostics : 624.597µs 2025-10-13T00:21:42.734857737+00:00 stderr F I1013 00:21:42.734846 28251 services_controller.go:423] Built service openshift-kube-apiserver-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.734857737+00:00 stderr F I1013 00:21:42.734850 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-10-13T00:21:42.734865357+00:00 stderr F I1013 00:21:42.734855 28251 services_controller.go:424] Service openshift-kube-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734865357+00:00 stderr F 
I1013 00:21:42.734853 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-controller are: map[TCP/metrics:{9001 [10.217.0.63] []}] 2025-10-13T00:21:42.734872257+00:00 stderr F I1013 00:21:42.734843 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"9037868a-bf59-4e20-8fc8-16e697f234f6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734897118+00:00 stderr F I1013 00:21:42.734872 28251 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.214"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.63"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734897118+00:00 stderr F I1013 00:21:42.734856 28251 services_controller.go:397] Service package-server-manager-metrics retrieved from lister: &Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager ae547e8e-2a0a-43b3-8358-80f1e40dfde9 5119 0 2024-06-26 12:39:24 +0000 UTC map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073379f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
package-server-manager,},ClusterIP:10.217.4.147,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.147],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.734897118+00:00 stderr F I1013 00:21:42.734889 28251 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734908468+00:00 stderr F I1013 00:21:42.734870 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"11ea2791-06de-4f67-9dea-91c73a312b37", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734908468+00:00 stderr F I1013 00:21:42.734899 28251 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-controller LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734915928+00:00 stderr F I1013 00:21:42.734905 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/package-server-manager-metrics are: map[TCP/metrics:{8443 [10.217.0.24] []}] 2025-10-13T00:21:42.734923098+00:00 stderr F I1013 00:21:42.734915 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.147"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.24"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.734930169+00:00 stderr F I1013 00:21:42.734923 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.734937409+00:00 stderr F I1013 00:21:42.734929 28251 services_controller.go:415] Built service 
openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.734937409+00:00 stderr F I1013 00:21:42.734916 28251 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734950259+00:00 stderr F I1013 00:21:42.734943 28251 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-controller per-node LB []services.LB{} 2025-10-13T00:21:42.734957769+00:00 stderr F I1013 00:21:42.734952 28251 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-controller template LB []services.LB{} 2025-10-13T00:21:42.734965239+00:00 stderr F I1013 00:21:42.734940 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.734965239+00:00 stderr F I1013 00:21:42.734960 28251 services_controller.go:424] Service openshift-machine-config-operator/machine-config-controller has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.734972320+00:00 stderr F I1013 00:21:42.734962 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.734972320+00:00 stderr F I1013 00:21:42.734959 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"} 2025-10-13T00:21:42.734979480+00:00 stderr F I1013 00:21:42.734971 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics template LB []services.LB{} 2025-10-13T00:21:42.734986390+00:00 stderr F I1013 00:21:42.734976 28251 services_controller.go:336] Finished syncing service check-endpoints on namespace openshift-apiserver 
: 498.974µs 2025-10-13T00:21:42.734986390+00:00 stderr F I1013 00:21:42.734977 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/package-server-manager-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.735008671+00:00 stderr F I1013 00:21:42.734991 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-controllers 2025-10-13T00:21:42.735008671+00:00 stderr F I1013 00:21:42.734956 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.108:443:10.217.0.12:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735008671+00:00 stderr F I1013 00:21:42.734978 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.31:443:10.217.0.7:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11ea2791-06de-4f67-9dea-91c73a312b37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735020351+00:00 stderr F I1013 00:21:42.734992 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.735041902+00:00 stderr F I1013 00:21:42.734976 28251 
services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"14fa73a9-9675-4207-8711-28031fe0d8db", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.735041902+00:00 stderr F I1013 00:21:42.735012 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.108:443:10.217.0.12:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735041902+00:00 stderr F I1013 00:21:42.735014 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.31:443:10.217.0.7:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11ea2791-06de-4f67-9dea-91c73a312b37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735082283+00:00 stderr F I1013 00:21:42.735010 28251 services_controller.go:397] Service machine-api-controllers retrieved from lister: &Service{ObjectMeta:{machine-api-controllers openshift-machine-api 6a75af62-23dd-4080-8ef6-00c8bb47e103 4782 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00029793b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.5.185,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.185],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.735091713+00:00 stderr F I1013 00:21:42.735069 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.147:8443:10.217.0.24:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735091713+00:00 stderr F I1013 00:21:42.735086 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-controllers are: map[] 2025-10-13T00:21:42.735132694+00:00 stderr F I1013 00:21:42.735102 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8444, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.735132694+00:00 stderr F I1013 00:21:42.735122 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-controllers LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.735132694+00:00 stderr F I1013 00:21:42.735129 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-controllers LB template 
configs []services.lbConfig(nil) 2025-10-13T00:21:42.735141294+00:00 stderr F I1013 00:21:42.735112 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.147:8443:10.217.0.24:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735141294+00:00 stderr F I1013 00:21:42.735113 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.214:9001:10.217.0.63:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {14fa73a9-9675-4207-8711-28031fe0d8db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735181485+00:00 stderr F I1013 00:21:42.735145 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-controllers cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.735181485+00:00 stderr F I1013 00:21:42.735144 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.214:9001:10.217.0.63:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{14fa73a9-9675-4207-8711-28031fe0d8db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735181485+00:00 stderr F I1013 00:21:42.735175 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-controllers per-node LB []services.LB{} 2025-10-13T00:21:42.735193076+00:00 stderr F I1013 00:21:42.735184 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-controllers template LB []services.LB{} 2025-10-13T00:21:42.735205336+00:00 stderr F I1013 00:21:42.735193 28251 services_controller.go:424] Service openshift-machine-api/machine-api-controllers has 3 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.735248617+00:00 stderr F I1013 00:21:42.735207 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"9345b326-0288-485f-8374-532f033762a6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.735310019+00:00 stderr F I1013 00:21:42.735292 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"} 2025-10-13T00:21:42.735358620+00:00 stderr F I1013 00:21:42.735344 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-apiserver-operator : 734.07µs 2025-10-13T00:21:42.735400331+00:00 stderr F I1013 00:21:42.735370 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"} 2025-10-13T00:21:42.735400331+00:00 stderr F I1013 00:21:42.735390 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-scheduler-operator : 749.17µs 2025-10-13T00:21:42.735415172+00:00 stderr F I1013 00:21:42.735401 28251 services_controller.go:332] Processing sync for service openshift-cluster-version/cluster-version-operator 2025-10-13T00:21:42.735448492+00:00 stderr F I1013 00:21:42.735435 28251 services_controller.go:332] Processing sync for service openshift-dns-operator/metrics 
2025-10-13T00:21:42.735498734+00:00 stderr F I1013 00:21:42.735295 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.185:8441: 10.217.5.185:8442: 10.217.5.185:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9345b326-0288-485f-8374-532f033762a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735542025+00:00 stderr F I1013 00:21:42.735521 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"} 2025-10-13T00:21:42.735542025+00:00 stderr F I1013 00:21:42.735536 28251 services_controller.go:336] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator : 763.041µs 2025-10-13T00:21:42.735550675+00:00 stderr F I1013 00:21:42.735545 28251 services_controller.go:332] Processing sync for service openshift-controller-manager/controller-manager 2025-10-13T00:21:42.735591366+00:00 stderr F I1013 00:21:42.735554 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"} 2025-10-13T00:21:42.735598837+00:00 stderr F I1013 00:21:42.735590 28251 services_controller.go:336] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager : 737.82µs 2025-10-13T00:21:42.735647038+00:00 stderr F I1013 00:21:42.735609 28251 services_controller.go:332] Processing sync for service openshift-ingress/router-internal-default 2025-10-13T00:21:42.735670938+00:00 stderr F I1013 00:21:42.735498 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.185:8441: 10.217.5.185:8442: 10.217.5.185:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9345b326-0288-485f-8374-532f033762a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.735711270+00:00 stderr F I1013 00:21:42.735642 28251 services_controller.go:397] Service router-internal-default retrieved from lister: &Service{ObjectMeta:{router-internal-default openshift-ingress 3ded9605-ced3-4583-97b6-f93264b463a7 7398 0 2024-06-26 12:48:38 +0000 UTC map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{apps/v1 Deployment router-default 9ae4d312-7fc4-4344-ab7a-669da95f56bf 0xc000296cee }] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.735744100+00:00 stderr F I1013 00:21:42.735722 28251 lb_config.go:1016] Cluster endpoints for openshift-ingress/router-internal-default are: map[TCP/http:{80 [192.168.126.11] []} TCP/https:{443 [192.168.126.11] []} TCP/metrics:{1936 [192.168.126.11] []}] 2025-10-13T00:21:42.735751001+00:00 stderr F I1013 00:21:42.735745 28251 services_controller.go:413] Built service openshift-ingress/router-internal-default LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.735778591+00:00 stderr F I1013 00:21:42.735751 28251 services_controller.go:414] Built service openshift-ingress/router-internal-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:80, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:1936, clusterEndpoints:services.lbEndpoints{Port:1936, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.735778591+00:00 stderr F I1013 00:21:42.735773 28251 services_controller.go:415] Built service openshift-ingress/router-internal-default LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.735786342+00:00 stderr F I1013 00:21:42.735466 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-dns-operator c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2 4375 0 2024-06-26 12:39:11 +0000 UTC map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 
0xc000296167 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: dns-operator,},ClusterIP:10.217.4.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.735813722+00:00 stderr F I1013 00:21:42.735798 28251 services_controller.go:421] Built service openshift-ingress/router-internal-default cluster-wide LB []services.LB{} 2025-10-13T00:21:42.735848373+00:00 stderr F I1013 00:21:42.735811 28251 services_controller.go:422] Built service openshift-ingress/router-internal-default per-node LB []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.735848373+00:00 stderr F I1013 00:21:42.735844 28251 services_controller.go:423] Built service openshift-ingress/router-internal-default template LB []services.LB{} 2025-10-13T00:21:42.735865264+00:00 stderr F I1013 00:21:42.735853 28251 services_controller.go:424] Service openshift-ingress/router-internal-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.735897085+00:00 stderr F I1013 00:21:42.735797 28251 lb_config.go:1016] Cluster endpoints for openshift-dns-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.18] []}] 2025-10-13T00:21:42.735921025+00:00 stderr F I1013 00:21:42.735871 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"f5075e35-1294-4014-afcd-dd765816bbc3", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, 
Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.735921025+00:00 stderr F I1013 00:21:42.735903 28251 services_controller.go:413] Built service openshift-dns-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.52"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.18"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.735931335+00:00 stderr F I1013 00:21:42.735918 28251 services_controller.go:414] Built service openshift-dns-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.735931335+00:00 stderr F I1013 00:21:42.735926 28251 services_controller.go:415] Built service openshift-dns-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.735974657+00:00 stderr F I1013 00:21:42.735942 28251 services_controller.go:421] Built service openshift-dns-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.735974657+00:00 stderr F I1013 00:21:42.735551 28251 services_controller.go:397] Service controller-manager retrieved from lister: &Service{ObjectMeta:{controller-manager openshift-controller-manager 2222c363-21dc-4d99-b2be-80dc3cdf8209 4361 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:openshift-controller-manager] map[operator.openshift.io/spec-hash:b3b96749ab82e4de02ef6aa9f0e168108d09315e18d73931c12251d267378e74 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{controller-manager: true,},ClusterIP:10.217.4.104,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.104],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.735982027+00:00 stderr F I1013 00:21:42.735971 28251 services_controller.go:422] Built service openshift-dns-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.735988677+00:00 stderr F I1013 00:21:42.735982 28251 services_controller.go:423] Built service openshift-dns-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.735995547+00:00 stderr F I1013 00:21:42.735984 28251 lb_config.go:1016] Cluster endpoints for openshift-controller-manager/controller-manager are: map[TCP/https:{8443 [10.217.0.87] []}] 2025-10-13T00:21:42.735995547+00:00 stderr F I1013 00:21:42.735991 28251 services_controller.go:424] Service openshift-dns-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.736007358+00:00 stderr F I1013 00:21:42.735996 28251 services_controller.go:413] Built service openshift-controller-manager/controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.104"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.87"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.736014508+00:00 stderr F I1013 00:21:42.736008 28251 services_controller.go:414] Built service openshift-controller-manager/controller-manager LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.736021408+00:00 stderr F I1013 00:21:42.736016 28251 services_controller.go:415] Built service openshift-controller-manager/controller-manager LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.736028018+00:00 stderr F I1013 00:21:42.735994 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.220:1936:192.168.126.11:1936 10.217.4.220:443:192.168.126.11:443 10.217.4.220:80:192.168.126.11:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5075e35-1294-4014-afcd-dd765816bbc3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736048309+00:00 stderr F I1013 00:21:42.736007 28251 services_controller.go:443] Services do not match, existing lbs: 
[]services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"404c8c52-bd98-412a-93df-61640fc7575c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736055579+00:00 stderr F I1013 00:21:42.735407 28251 services_controller.go:397] Service cluster-version-operator retrieved from lister: &Service{ObjectMeta:{cluster-version-operator openshift-cluster-version b85c5397-4189-4029-b181-4e339da207b7 6237 0 2024-06-26 12:38:51 +0000 UTC map[k8s-app:cluster-version-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true kubernetes.io/description:Expose cluster-version operator metrics to other in-cluster consumers. Access requires a prometheus-k8s RoleBinding in this namespace. 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-version-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efad7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9099,TargetPort:{0 9099 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-version-operator,},ClusterIP:10.217.5.47,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.47],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.736055579+00:00 stderr F I1013 00:21:42.736031 28251 services_controller.go:421] Built service openshift-controller-manager/controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736070279+00:00 stderr F I1013 00:21:42.736058 28251 services_controller.go:422] Built service openshift-controller-manager/controller-manager per-node LB []services.LB{} 2025-10-13T00:21:42.736070279+00:00 stderr F I1013 00:21:42.736033 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.220:1936:192.168.126.11:1936 10.217.4.220:443:192.168.126.11:443 10.217.4.220:80:192.168.126.11:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5075e35-1294-4014-afcd-dd765816bbc3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736070279+00:00 stderr F I1013 00:21:42.736065 28251 services_controller.go:423] Built service openshift-controller-manager/controller-manager template LB []services.LB{} 2025-10-13T00:21:42.736078389+00:00 stderr F I1013 00:21:42.736071 28251 services_controller.go:424] Service openshift-controller-manager/controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.736085560+00:00 stderr F I1013 00:21:42.736074 28251 
loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"} 2025-10-13T00:21:42.736092630+00:00 stderr F I1013 00:21:42.736073 28251 lb_config.go:1016] Cluster endpoints for openshift-cluster-version/cluster-version-operator are: map[TCP/metrics:{9099 [192.168.126.11] []}] 2025-10-13T00:21:42.736092630+00:00 stderr F I1013 00:21:42.736087 28251 services_controller.go:336] Finished syncing service machine-api-controllers on namespace openshift-machine-api : 1.09559ms 2025-10-13T00:21:42.736113080+00:00 stderr F I1013 00:21:42.736099 28251 services_controller.go:413] Built service openshift-cluster-version/cluster-version-operator LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.736123051+00:00 stderr F I1013 00:21:42.736111 28251 services_controller.go:414] Built service openshift-cluster-version/cluster-version-operator LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.47"}, protocol:"TCP", inport:9099, clusterEndpoints:services.lbEndpoints{Port:9099, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.736130961+00:00 stderr F I1013 00:21:42.736092 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"b3787e47-e57b-411b-a351-f5dcf926f4a7", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736130961+00:00 stderr F I1013 00:21:42.736124 28251 services_controller.go:415] Built service openshift-cluster-version/cluster-version-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.736165632+00:00 stderr F I1013 00:21:42.736148 28251 services_controller.go:421] Built service openshift-cluster-version/cluster-version-operator cluster-wide LB []services.LB{} 2025-10-13T00:21:42.736184942+00:00 stderr F I1013 00:21:42.736103 28251 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-operators 2025-10-13T00:21:42.736184942+00:00 stderr F I1013 00:21:42.736159 28251 services_controller.go:422] Built service openshift-cluster-version/cluster-version-operator per-node LB 
[]services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.736192213+00:00 stderr F I1013 00:21:42.736186 28251 services_controller.go:423] Built service openshift-cluster-version/cluster-version-operator template LB []services.LB{} 2025-10-13T00:21:42.736211973+00:00 stderr F I1013 00:21:42.736197 28251 services_controller.go:424] Service openshift-cluster-version/cluster-version-operator has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.736220273+00:00 stderr F I1013 00:21:42.736169 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.52:9393:10.217.0.18:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {404c8c52-bd98-412a-93df-61640fc7575c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736232224+00:00 stderr F I1013 00:21:42.736199 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.104:443:10.217.0.87:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3787e47-e57b-411b-a351-f5dcf926f4a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736239584+00:00 stderr F I1013 00:21:42.736181 28251 services_controller.go:397] Service redhat-operators retrieved from lister: &Service{ObjectMeta:{redhat-operators openshift-marketplace adccbaa4-8d5b-4985-9a89-66271ea4bf4e 6530 0 2024-06-26 12:47:51 +0000 UTC map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7 0xc00073276d 0xc00073276e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-operators,olm.managed: 
true,},ClusterIP:10.217.5.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.736246264+00:00 stderr F I1013 00:21:42.736212 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"516e9abb-2a1b-4eba-859b-c827d44fb86e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.736255744+00:00 stderr F I1013 00:21:42.736226 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.52:9393:10.217.0.18:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {404c8c52-bd98-412a-93df-61640fc7575c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736255744+00:00 stderr F I1013 00:21:42.736245 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-operators are: map[TCP/grpc:{50051 [10.217.0.34] []}] 2025-10-13T00:21:42.736263234+00:00 stderr F I1013 00:21:42.736235 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.104:443:10.217.0.87:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{b3787e47-e57b-411b-a351-f5dcf926f4a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736270415+00:00 stderr F I1013 00:21:42.736258 28251 services_controller.go:413] Built service openshift-marketplace/redhat-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.52"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.34"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.736270415+00:00 stderr F I1013 00:21:42.736266 28251 services_controller.go:414] Built service openshift-marketplace/redhat-operators LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.736277735+00:00 stderr F I1013 00:21:42.736272 28251 services_controller.go:415] Built service openshift-marketplace/redhat-operators LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.736312636+00:00 stderr F I1013 00:21:42.736283 28251 services_controller.go:421] Built service openshift-marketplace/redhat-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.34", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736312636+00:00 stderr F I1013 00:21:42.736303 28251 services_controller.go:422] Built service openshift-marketplace/redhat-operators per-node LB []services.LB{} 2025-10-13T00:21:42.736312636+00:00 stderr F I1013 00:21:42.736309 28251 services_controller.go:423] Built service openshift-marketplace/redhat-operators template LB []services.LB{} 2025-10-13T00:21:42.736321516+00:00 stderr F I1013 00:21:42.736316 28251 services_controller.go:424] Service openshift-marketplace/redhat-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.736387558+00:00 stderr F I1013 00:21:42.736343 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"dbd78453-346b-4a67-b084-f31f95f15b67", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.34", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736387558+00:00 stderr F I1013 00:21:42.736352 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.47:9099:192.168.126.11:9099]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516e9abb-2a1b-4eba-859b-c827d44fb86e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736387558+00:00 stderr F I1013 00:21:42.736373 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"} 2025-10-13T00:21:42.736403738+00:00 stderr F I1013 00:21:42.736389 28251 services_controller.go:336] Finished syncing service router-internal-default on namespace openshift-ingress : 782.711µs 2025-10-13T00:21:42.736411758+00:00 stderr F I1013 00:21:42.736403 28251 services_controller.go:332] Processing sync for service openshift-config-operator/metrics 2025-10-13T00:21:42.736411758+00:00 stderr F I1013 00:21:42.736387 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.47:9099:192.168.126.11:9099]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516e9abb-2a1b-4eba-859b-c827d44fb86e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736481320+00:00 stderr F I1013 00:21:42.736410 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-config-operator f04ada1b-55ad-45a3-9231-6d1ff7242fa0 4291 0 2024-06-26 12:39:24 +0000 UTC map[app:openshift-config-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:config-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efb9f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
openshift-config-operator,},ClusterIP:10.217.5.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.120],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.736481320+00:00 stderr F I1013 00:21:42.736451 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.52:50051:10.217.0.34:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dbd78453-346b-4a67-b084-f31f95f15b67}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736501551+00:00 stderr F I1013 00:21:42.736491 28251 lb_config.go:1016] Cluster endpoints for openshift-config-operator/metrics are: map[TCP/https:{8443 [10.217.0.23] []}] 2025-10-13T00:21:42.736508821+00:00 stderr F I1013 00:21:42.736481 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.52:50051:10.217.0.34:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dbd78453-346b-4a67-b084-f31f95f15b67}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736517081+00:00 stderr F I1013 00:21:42.736506 28251 services_controller.go:413] Built service openshift-config-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.120"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.23"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.736523721+00:00 stderr F I1013 00:21:42.736518 28251 services_controller.go:414] Built service openshift-config-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.736530312+00:00 stderr F I1013 00:21:42.736525 28251 services_controller.go:415] Built service openshift-config-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.736581623+00:00 stderr F I1013 00:21:42.736541 28251 services_controller.go:421] Built service openshift-config-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736581623+00:00 stderr F I1013 00:21:42.736559 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"} 2025-10-13T00:21:42.736593573+00:00 stderr F I1013 00:21:42.736586 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-dns-operator : 1.152891ms 2025-10-13T00:21:42.736619564+00:00 stderr F I1013 00:21:42.736599 28251 services_controller.go:332] Processing sync for service openshift-oauth-apiserver/api 2025-10-13T00:21:42.736677166+00:00 stderr F I1013 00:21:42.736569 28251 services_controller.go:422] Built service openshift-config-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.736677166+00:00 stderr F I1013 00:21:42.736669 28251 services_controller.go:423] Built service openshift-config-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.736686206+00:00 stderr F I1013 00:21:42.736677 28251 services_controller.go:424] Service openshift-config-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.736728817+00:00 stderr F I1013 00:21:42.736689 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"8687c189-4f68-4a92-accd-596f2b18fbfd", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736739047+00:00 stderr F I1013 00:21:42.736727 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"} 2025-10-13T00:21:42.736746737+00:00 stderr F I1013 00:21:42.736610 28251 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-oauth-apiserver 8ccd218c-b483-42f1-81ef-8a1e9a05f574 5246 0 2024-06-26 12:47:12 +0000 UTC map[app:openshift-oauth-apiserver] 
map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.114,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.114],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.736785098+00:00 stderr F I1013 00:21:42.736763 28251 lb_config.go:1016] Cluster endpoints for openshift-oauth-apiserver/api are: map[TCP/https:{8443 [10.217.0.39] []}] 2025-10-13T00:21:42.736785098+00:00 stderr F I1013 00:21:42.736774 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"} 2025-10-13T00:21:42.736793599+00:00 stderr F I1013 00:21:42.736781 28251 services_controller.go:413] Built service openshift-oauth-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.114"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.39"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.736801579+00:00 stderr F I1013 00:21:42.736776 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.120:443:10.217.0.23:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8687c189-4f68-4a92-accd-596f2b18fbfd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736801579+00:00 stderr F I1013 00:21:42.736786 28251 services_controller.go:336] Finished syncing service redhat-operators on namespace openshift-marketplace : 682.458µs 2025-10-13T00:21:42.736809179+00:00 stderr F I1013 00:21:42.736703 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"} 2025-10-13T00:21:42.736809179+00:00 stderr F I1013 00:21:42.736805 28251 services_controller.go:336] Finished syncing service cluster-version-operator on namespace openshift-cluster-version : 1.403138ms 2025-10-13T00:21:42.736818189+00:00 stderr F I1013 00:21:42.736799 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.120:443:10.217.0.23:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8687c189-4f68-4a92-accd-596f2b18fbfd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.736825260+00:00 stderr F I1013 00:21:42.736817 28251 services_controller.go:332] Processing sync for service openshift-network-operator/metrics 2025-10-13T00:21:42.736833180+00:00 stderr F I1013 00:21:42.736824 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-network-operator : 7.72µs 2025-10-13T00:21:42.736840860+00:00 stderr F I1013 00:21:42.736832 28251 services_controller.go:332] Processing sync for service default/openshift 2025-10-13T00:21:42.736851850+00:00 stderr F I1013 00:21:42.736839 28251 services_controller.go:336] Finished syncing service openshift on namespace default : 7.53µs 2025-10-13T00:21:42.736851850+00:00 stderr F I1013 00:21:42.736849 28251 services_controller.go:332] Processing sync for service openshift-multus/network-metrics-service 2025-10-13T00:21:42.736859800+00:00 stderr F I1013 00:21:42.736855 28251 services_controller.go:336] Finished syncing service network-metrics-service on namespace openshift-multus : 7.21µs 2025-10-13T00:21:42.736867531+00:00 stderr F I1013 00:21:42.736862 28251 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:42.736875041+00:00 stderr F I1013 00:21:42.736868 28251 services_controller.go:336] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes : 5.501µs 2025-10-13T00:21:42.736882831+00:00 stderr F I1013 00:21:42.736805 28251 services_controller.go:332] Processing sync for service openshift-apiserver/api 2025-10-13T00:21:42.736882831+00:00 stderr F I1013 00:21:42.736876 28251 services_controller.go:332] Processing sync for service openshift-cluster-machine-approver/machine-approver 2025-10-13T00:21:42.736891291+00:00 stderr F I1013 00:21:42.736881 28251 services_controller.go:336] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver : 5.49µs 2025-10-13T00:21:42.736891291+00:00 stderr F I1013 00:21:42.736792 28251 services_controller.go:414] Built service openshift-oauth-apiserver/api LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.736899701+00:00 stderr F I1013 00:21:42.736889 28251 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-source 2025-10-13T00:21:42.736899701+00:00 stderr F I1013 00:21:42.736893 28251 services_controller.go:415] Built service openshift-oauth-apiserver/api LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.736899701+00:00 stderr F I1013 00:21:42.736896 28251 services_controller.go:336] Finished syncing service network-check-source on namespace openshift-network-diagnostics : 6.34µs 2025-10-13T00:21:42.736910312+00:00 stderr F I1013 00:21:42.736905 28251 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry-operator 2025-10-13T00:21:42.736917722+00:00 stderr F I1013 00:21:42.736911 28251 services_controller.go:336] Finished syncing 
service image-registry-operator on namespace openshift-image-registry : 5.45µs 2025-10-13T00:21:42.736925442+00:00 stderr F I1013 00:21:42.736907 28251 services_controller.go:421] Built service openshift-oauth-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736925442+00:00 stderr F I1013 00:21:42.736919 28251 services_controller.go:332] Processing sync for service openshift-cluster-samples-operator/metrics 2025-10-13T00:21:42.736933602+00:00 stderr F I1013 00:21:42.736923 28251 services_controller.go:422] Built service openshift-oauth-apiserver/api per-node LB []services.LB{} 2025-10-13T00:21:42.736933602+00:00 stderr F I1013 00:21:42.736926 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-cluster-samples-operator : 6.47µs 2025-10-13T00:21:42.736933602+00:00 stderr F I1013 00:21:42.736930 28251 services_controller.go:423] Built service openshift-oauth-apiserver/api template LB []services.LB{} 2025-10-13T00:21:42.736944123+00:00 stderr F I1013 00:21:42.736741 28251 services_controller.go:336] Finished syncing service controller-manager on namespace openshift-controller-manager : 1.193072ms 2025-10-13T00:21:42.736944123+00:00 stderr F I1013 00:21:42.736935 28251 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:42.736944123+00:00 stderr F I1013 00:21:42.736937 28251 services_controller.go:424] Service openshift-oauth-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.736951523+00:00 stderr F I1013 00:21:42.736941 28251 services_controller.go:336] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes : 6.921µs 2025-10-13T00:21:42.736987704+00:00 stderr F I1013 00:21:42.736949 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"7595c030-5437-4d83-9952-897bbf081592", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.736987704+00:00 stderr F I1013 00:21:42.736965 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.114379846 seconds. No OVN measurement. 2025-10-13T00:21:42.736987704+00:00 stderr F I1013 00:21:42.736983 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-cluster-machine-approver/machine-approver. OVN-Kubernetes controller took 0.114381126 seconds. No OVN measurement. 2025-10-13T00:21:42.736995754+00:00 stderr F I1013 00:21:42.736990 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.114375096 seconds. No OVN measurement. 2025-10-13T00:21:42.737032745+00:00 stderr F I1013 00:21:42.737011 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"} 2025-10-13T00:21:42.737032745+00:00 stderr F I1013 00:21:42.736881 28251 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-apiserver fb5bd66d-5e82-4bcc-8126-39324a92dccc 5229 0 2024-06-26 12:47:09 +0000 UTC map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.69,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.69],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.737032745+00:00 stderr F I1013 00:21:42.737028 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-config-operator : 625.087µs 2025-10-13T00:21:42.737065636+00:00 stderr F I1013 00:21:42.737047 28251 lb_config.go:1016] Cluster endpoints for openshift-apiserver/api are: map[TCP/https:{8443 [10.217.0.82] []}] 2025-10-13T00:21:42.737091167+00:00 stderr F I1013 00:21:42.737064 28251 services_controller.go:413] Built service openshift-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.69"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.737091167+00:00 stderr F I1013 00:21:42.737057 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.114:443:10.217.0.39:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7595c030-5437-4d83-9952-897bbf081592}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.737091167+00:00 stderr F I1013 00:21:42.737081 28251 services_controller.go:414] Built service openshift-apiserver/api LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.737098947+00:00 stderr F I1013 00:21:42.737090 28251 services_controller.go:415] Built service openshift-apiserver/api LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.737105877+00:00 stderr F I1013 00:21:42.737084 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.114:443:10.217.0.39:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7595c030-5437-4d83-9952-897bbf081592}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.737133238+00:00 stderr F I1013 00:21:42.737105 28251 services_controller.go:421] Built service openshift-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.737133238+00:00 stderr F I1013 00:21:42.737130 28251 services_controller.go:422] Built service openshift-apiserver/api per-node LB []services.LB{} 2025-10-13T00:21:42.737142778+00:00 stderr F I1013 00:21:42.737138 28251 services_controller.go:423] Built service openshift-apiserver/api template LB []services.LB{} 2025-10-13T00:21:42.737160338+00:00 stderr F I1013 00:21:42.737146 28251 services_controller.go:424] Service openshift-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.737207610+00:00 stderr F I1013 00:21:42.737164 28251 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14", Protocol:"tcp", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.737314183+00:00 stderr F I1013 00:21:42.737276 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.69:443:10.217.0.82:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.737362224+00:00 stderr F I1013 00:21:42.737295 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"} 2025-10-13T00:21:42.737362224+00:00 stderr F I1013 00:21:42.737312 28251 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.69:443:10.217.0.82:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.737362224+00:00 stderr F I1013 00:21:42.737358 28251 services_controller.go:336] Finished syncing service api on namespace openshift-oauth-apiserver : 758.051µs 2025-10-13T00:21:42.737638541+00:00 stderr F I1013 00:21:42.737611 28251 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"} 2025-10-13T00:21:42.737638541+00:00 stderr F I1013 00:21:42.737623 28251 services_controller.go:336] Finished syncing service api on namespace openshift-apiserver : 818.502µs 2025-10-13T00:21:42.740391865+00:00 stderr F I1013 00:21:42.740351 28251 ovs.go:162] Exec(36): stdout: "\"017e52b0-97d3-4d7d-aae4-9b216aa025aa\"\n" 2025-10-13T00:21:42.740391865+00:00 stderr F I1013 00:21:42.740376 28251 ovs.go:163] Exec(36): stderr: "" 2025-10-13T00:21:42.740408566+00:00 stderr F I1013 00:21:42.740391 28251 
ovs.go:159] Exec(37): /usr/bin/ovs-appctl --timeout=15 dpif/show-dp-features br-ex 2025-10-13T00:21:42.744280260+00:00 stderr F I1013 00:21:42.744208 28251 ovs.go:162] Exec(37): stdout: "Masked set action: Yes\nTunnel push pop: No\nUfid: Yes\nTruncate action: Yes\nClone action: Yes\nSample nesting: 10\nConntrack eventmask: Yes\nConntrack clear: Yes\nMax dp_hash algorithm: 1\nCheck pkt length action: Yes\nConntrack timeout policy: Yes\nExplicit Drop action: No\nOptimized Balance TCP mode: No\nConntrack all-zero IP SNAT: Yes\nMPLS Label add: Yes\nMax VLAN headers: 2\nMax MPLS depth: 3\nRecirc: Yes\nCT state: Yes\nCT zone: Yes\nCT mark: Yes\nCT label: Yes\nCT state NAT: Yes\nCT orig tuple: Yes\nCT orig tuple for IPv6: Yes\nIPv6 ND Extension: No\n" 2025-10-13T00:21:42.744280260+00:00 stderr F I1013 00:21:42.744250 28251 ovs.go:163] Exec(37): stderr: "" 2025-10-13T00:21:42.744340262+00:00 stderr F I1013 00:21:42.744307 28251 ovs.go:159] Exec(38): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . other_config:hw-offload 2025-10-13T00:21:42.749179952+00:00 stderr F I1013 00:21:42.749123 28251 ovs.go:162] Exec(38): stdout: "\n" 2025-10-13T00:21:42.749179952+00:00 stderr F I1013 00:21:42.749150 28251 ovs.go:163] Exec(38): stderr: "" 2025-10-13T00:21:42.750241620+00:00 stderr F I1013 00:21:42.750195 28251 iptables.go:146] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0 2025-10-13T00:21:42.752807269+00:00 stderr F I1013 00:21:42.752758 28251 iptables.go:146] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0 2025-10-13T00:21:42.755807260+00:00 stderr F I1013 00:21:42.755766 28251 iptables.go:146] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0 2025-10-13T00:21:42.758978505+00:00 stderr F I1013 00:21:42.758938 28251 iptables.go:146] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0 2025-10-13T00:21:42.761660207+00:00 stderr F I1013 00:21:42.761619 28251 iptables.go:108] Creating table: filter chain: FORWARD 2025-10-13T00:21:42.763509487+00:00 stderr F I1013 00:21:42.763462 28251 iptables.go:110] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:42.768590324+00:00 stderr F I1013 00:21:42.768550 28251 iptables.go:108] Creating table: filter chain: OUTPUT 2025-10-13T00:21:42.770556577+00:00 stderr F I1013 00:21:42.770497 28251 iptables.go:110] Chain: "OUTPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:42.775355516+00:00 stderr F I1013 00:21:42.775279 28251 gateway_localnet.go:152] Local Gateway Creation Complete 2025-10-13T00:21:42.775578272+00:00 stderr F I1013 00:21:42.775532 28251 default_node_network_controller.go:1301] MTU (1500) of network interface eth10 is big enough to deal with Geneve header overhead (sum 1458). 
2025-10-13T00:21:42.775578272+00:00 stderr F I1013 00:21:42.775554 28251 kube.go:128] Setting annotations map[k8s.ovn.org/gateway-mtu-support: k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_crc","mac-address":"fa:16:3e:c3:15:08","ip-addresses":["38.102.83.180/24"],"ip-address":"38.102.83.180/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-mgmt-port-mac-address:b6:dc:d9:26:03:d4 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.180/24"} k8s.ovn.org/zone-name:crc] on node crc 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786244 28251 obj_retry.go:555] Update event received for resource *v1.Node, old object is equal to new: false 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786285 28251 obj_retry.go:607] Update event received for *v1.Node crc 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786532 28251 node_tracker.go:204] Processing possible switch / router updates for node crc 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786641 28251 node_tracker.go:165] Node crc switch + router changed, syncing services 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786656 28251 services_controller.go:519] Full service sync requested 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786673 28251 services_controller.go:551] Adding service openshift-console/downloads 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786684 28251 services_controller.go:551] Adding service openshift-controller-manager-operator/metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786689 28251 services_controller.go:551] Adding service openshift-kube-apiserver/apiserver 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786696 28251 services_controller.go:551] Adding service openshift-oauth-apiserver/api 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786703 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/catalog-operator-metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786708 28251 services_controller.go:551] Adding service openshift-authentication-operator/metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786712 28251 services_controller.go:551] Adding service openshift-console/console 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786716 28251 services_controller.go:551] Adding service openshift-console-operator/webhook 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786720 28251 services_controller.go:551] Adding service openshift-kube-controller-manager-operator/metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786725 28251 services_controller.go:551] Adding service openshift-machine-api/cluster-autoscaler-operator 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786743 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-controllers 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786747 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-webhook 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786752 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786758 28251 services_controller.go:551] 
Adding service default/kubernetes 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786763 28251 services_controller.go:551] Adding service openshift-console-operator/metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786767 28251 services_controller.go:551] Adding service openshift-multus/network-metrics-service 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786771 28251 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786776 28251 services_controller.go:551] Adding service default/openshift 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786780 28251 services_controller.go:551] Adding service openshift-marketplace/marketplace-operator-metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786792 28251 master.go:627] Adding or Updating Node "crc" 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786786 28251 services_controller.go:551] Adding service openshift-route-controller-manager/route-controller-manager 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786829 28251 services_controller.go:551] Adding service openshift-cluster-machine-approver/machine-approver 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786834 28251 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-source 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786838 28251 hybrid.go:140] Removing node crc hybrid overlay port 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786840 28251 services_controller.go:551] Adding service openshift-marketplace/redhat-marketplace 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786845 28251 services_controller.go:551] Adding service openshift-image-registry/image-registry-operator 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786850 28251 services_controller.go:551] Adding service openshift-kube-scheduler/scheduler 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786854 28251 services_controller.go:551] Adding service openshift-config-operator/metrics 2025-10-13T00:21:42.786890606+00:00 stderr F I1013 00:21:42.786858 28251 services_controller.go:551] Adding service openshift-image-registry/image-registry 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786895 28251 services_controller.go:551] Adding service openshift-apiserver/api 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786901 28251 services_controller.go:551] Adding service openshift-cluster-samples-operator/metrics 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786905 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/packageserver-service 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786909 28251 services_controller.go:551] Adding service openshift-cluster-version/cluster-version-operator 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786913 28251 services_controller.go:551] Adding service openshift-dns-operator/metrics 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786917 28251 services_controller.go:551] Adding service openshift-apiserver-operator/metrics 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786922 28251 services_controller.go:551] Adding service openshift-dns/dns-default 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786926 28251 services_controller.go:551] Adding service 
openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786930 28251 services_controller.go:551] Adding service openshift-ingress/router-internal-default 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786934 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786937 28251 services_controller.go:551] Adding service openshift-monitoring/cluster-monitoring-operator 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786942 28251 services_controller.go:551] Adding service openshift-service-ca-operator/metrics 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786946 28251 services_controller.go:551] Adding service openshift-kube-controller-manager/kube-controller-manager 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786950 28251 services_controller.go:551] Adding service openshift-kube-storage-version-migrator-operator/metrics 2025-10-13T00:21:42.786957188+00:00 stderr F I1013 00:21:42.786954 28251 services_controller.go:551] Adding service openshift-authentication/oauth-openshift 2025-10-13T00:21:42.786979048+00:00 stderr F I1013 00:21:42.786958 28251 services_controller.go:551] Adding service openshift-kube-apiserver-operator/metrics 2025-10-13T00:21:42.786979048+00:00 stderr F I1013 00:21:42.786962 28251 services_controller.go:551] Adding service openshift-ingress-operator/metrics 2025-10-13T00:21:42.786979048+00:00 stderr F I1013 00:21:42.786966 28251 services_controller.go:551] Adding service openshift-machine-api/control-plane-machine-set-operator 2025-10-13T00:21:42.786979048+00:00 stderr F I1013 00:21:42.786970 28251 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-controller 2025-10-13T00:21:42.786979048+00:00 stderr F I1013 00:21:42.786974 28251 services_controller.go:551] Adding service openshift-marketplace/community-operators 2025-10-13T00:21:42.786992889+00:00 stderr F I1013 00:21:42.786978 28251 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-target 2025-10-13T00:21:42.786992889+00:00 stderr F I1013 00:21:42.786982 28251 services_controller.go:551] Adding service openshift-etcd-operator/metrics 2025-10-13T00:21:42.786992889+00:00 stderr F I1013 00:21:42.786986 28251 services_controller.go:551] Adding service openshift-ingress-canary/ingress-canary 2025-10-13T00:21:42.786992889+00:00 stderr F I1013 00:21:42.786990 28251 services_controller.go:551] Adding service openshift-marketplace/certified-operators 2025-10-13T00:21:42.787002589+00:00 stderr F I1013 00:21:42.786994 28251 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-daemon 2025-10-13T00:21:42.787002589+00:00 stderr F I1013 00:21:42.786999 28251 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-operator 2025-10-13T00:21:42.787010279+00:00 stderr F I1013 00:21:42.787005 28251 services_controller.go:551] Adding service openshift-network-operator/metrics 2025-10-13T00:21:42.787017649+00:00 stderr F I1013 00:21:42.787010 28251 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/olm-operator-metrics 2025-10-13T00:21:42.787024989+00:00 stderr F I1013 00:21:42.787015 28251 services_controller.go:551] Adding service openshift-apiserver/check-endpoints 2025-10-13T00:21:42.787024989+00:00 stderr F I1013 00:21:42.787022 28251 services_controller.go:551] 
Adding service openshift-controller-manager/controller-manager 2025-10-13T00:21:42.787032550+00:00 stderr F I1013 00:21:42.787028 28251 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-machine-webhook 2025-10-13T00:21:42.787041220+00:00 stderr F I1013 00:21:42.787035 28251 services_controller.go:551] Adding service openshift-marketplace/redhat-operators 2025-10-13T00:21:42.787048150+00:00 stderr F I1013 00:21:42.787040 28251 services_controller.go:551] Adding service openshift-multus/multus-admission-controller 2025-10-13T00:21:42.787055070+00:00 stderr F I1013 00:21:42.787046 28251 services_controller.go:551] Adding service openshift-etcd/etcd 2025-10-13T00:21:42.787062470+00:00 stderr F I1013 00:21:42.787054 28251 services_controller.go:551] Adding service openshift-kube-scheduler-operator/metrics 2025-10-13T00:21:42.787062470+00:00 stderr F I1013 00:21:42.787018 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.180 physical_ips:38.102.83.180]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.787140303+00:00 stderr F I1013 00:21:42.787070 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.180 physical_ips:38.102.83.180]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.787273126+00:00 stderr F I1013 00:21:42.787234 28251 default_node_network_controller.go:974] Waiting for gateway and management port readiness... 
2025-10-13T00:21:42.787447081+00:00 stderr F I1013 00:21:42.787429 28251 ovs.go:159] Exec(39): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.27632.ctl connection-status 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788112 28251 services_controller.go:332] Processing sync for service openshift-console/downloads 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788126 28251 services_controller.go:397] Service downloads retrieved from lister: &Service{ObjectMeta:{downloads openshift-console d6818508-d113-4821-84c8-94f59cfa13cb 9742 0 2024-06-26 12:53:44 +0000 UTC map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.196,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.196],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788222 28251 lb_config.go:1016] Cluster endpoints for openshift-console/downloads are: map[TCP/http:{8080 [10.217.0.66] []}] 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788234 28251 services_controller.go:413] Built service openshift-console/downloads LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.196"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.66"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788245 28251 services_controller.go:414] Built service openshift-console/downloads LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788251 28251 services_controller.go:415] Built service openshift-console/downloads LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788268 28251 services_controller.go:421] Built service openshift-console/downloads cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788294 28251 services_controller.go:422] Built service openshift-console/downloads per-node LB []services.LB{} 2025-10-13T00:21:42.788941101+00:00 stderr F 
I1013 00:21:42.788300 28251 services_controller.go:423] Built service openshift-console/downloads template LB []services.LB{} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788306 28251 services_controller.go:424] Service openshift-console/downloads has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788326 28251 services_controller.go:441] Skipping no-op change for service openshift-console/downloads 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788344 28251 services_controller.go:336] Finished syncing service downloads on namespace openshift-console : 220.845µs 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788355 28251 services_controller.go:332] Processing sync for service openshift-controller-manager-operator/metrics 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788360 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-controller-manager-operator 2f6bb711-85a4-408c-913a-54f006dcf2e9 4322 0 2024-06-26 12:39:07 +0000 UTC map[app:openshift-controller-manager-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:openshift-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002effab }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-controller-manager-operator,},ClusterIP:10.217.5.152,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.152],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788407 28251 lb_config.go:1016] Cluster endpoints for openshift-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.9] []}] 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788417 28251 services_controller.go:413] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.152"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.9"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788436 28251 services_controller.go:414] Built service openshift-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788441 28251 services_controller.go:415] 
Built service openshift-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788451 28251 services_controller.go:421] Built service openshift-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788470 28251 services_controller.go:422] Built service openshift-controller-manager-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788475 28251 services_controller.go:423] Built service openshift-controller-manager-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788482 28251 services_controller.go:424] Service openshift-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788496 28251 services_controller.go:441] Skipping no-op change for service openshift-controller-manager-operator/metrics 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788499 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-controller-manager-operator : 144.124µs 2025-10-13T00:21:42.788941101+00:00 stderr F I1013 00:21:42.788508 28251 services_controller.go:332] Processing sync for service openshift-kube-apiserver/apiserver 2025-10-13T00:21:42.788941101+00:00 stderr P I1013 00:21:42.788513 28251 services_controller.go:397] Service apiserver retrieved from lister: &Service{Obj 2025-10-13T00:21:42.789002793+00:00 stderr F ectMeta:{apiserver openshift-kube-apiserver 44a33f79-7e24-4f1b-bc46-f52dfcec13b8 3793 0 2024-06-26 12:47:04 +0000 UTC map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.86,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.86],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789164 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver/apiserver are: map[TCP/https:{6443 [192.168.126.11] []}] 
2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789196 28251 services_controller.go:413] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789202 28251 services_controller.go:414] Built service openshift-kube-apiserver/apiserver LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.86"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789210 28251 services_controller.go:415] Built service openshift-kube-apiserver/apiserver LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789228 28251 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{} 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789234 28251 services_controller.go:422] Built service openshift-kube-apiserver/apiserver per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789252 28251 services_controller.go:423] Built service openshift-kube-apiserver/apiserver template LB []services.LB{} 2025-10-13T00:21:42.789273130+00:00 stderr F I1013 00:21:42.789259 28251 services_controller.go:424] Service openshift-kube-apiserver/apiserver has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.789292340+00:00 stderr F I1013 00:21:42.789273 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-apiserver/apiserver 2025-10-13T00:21:42.789292340+00:00 stderr F I1013 00:21:42.789277 28251 services_controller.go:336] Finished syncing service apiserver on namespace openshift-kube-apiserver : 769.161µs 2025-10-13T00:21:42.789292340+00:00 stderr F I1013 00:21:42.789285 28251 services_controller.go:332] Processing sync for service openshift-oauth-apiserver/api 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789290 28251 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-oauth-apiserver 8ccd218c-b483-42f1-81ef-8a1e9a05f574 5246 0 2024-06-26 12:47:12 +0000 UTC map[app:openshift-oauth-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.114,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.114],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789356 28251 lb_config.go:1016] Cluster endpoints for openshift-oauth-apiserver/api are: map[TCP/https:{8443 [10.217.0.39] []}] 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789364 28251 services_controller.go:413] Built service openshift-oauth-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.114"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.39"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789372 28251 services_controller.go:414] Built service openshift-oauth-apiserver/api LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789377 28251 services_controller.go:415] Built service openshift-oauth-apiserver/api LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789390 28251 services_controller.go:421] Built service openshift-oauth-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789405 28251 services_controller.go:422] Built service openshift-oauth-apiserver/api per-node LB []services.LB{} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789410 28251 services_controller.go:423] Built service openshift-oauth-apiserver/api template LB []services.LB{} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789416 28251 services_controller.go:424] Service openshift-oauth-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789430 28251 services_controller.go:441] Skipping no-op change for service openshift-oauth-apiserver/api 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789433 
28251 services_controller.go:336] Finished syncing service api on namespace openshift-oauth-apiserver : 147.934µs 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789440 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/catalog-operator-metrics 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789445 28251 services_controller.go:397] Service catalog-operator-metrics retrieved from lister: &Service{ObjectMeta:{catalog-operator-metrics openshift-operator-lifecycle-manager 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72 5067 0 2024-06-26 12:39:23 +0000 UTC map[app:catalog-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0007334d7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.17,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.17],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789492 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.11] []}] 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789501 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.17"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789508 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789513 28251 services_controller.go:415] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789521 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789537 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789543 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB []services.LB{} 2025-10-13T00:21:42.790709599+00:00 stderr F I1013 00:21:42.789550 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/catalog-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790709599+00:00 stderr P I1013 00:21:42.789565 28251 services_controller.go:441] Skipping no-op change for service opensh 2025-10-13T00:21:42.790755800+00:00 stderr F ift-operator-lifecycle-manager/catalog-operator-metrics 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789568 28251 services_controller.go:336] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager : 127.874µs 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789575 28251 services_controller.go:332] Processing sync for service openshift-authentication-operator/metrics 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789581 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-authentication-operator 20ebd9ba-71d4-4753-8707-d87939791a19 4335 0 2024-06-26 12:39:09 +0000 UTC map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002ef837 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.51,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.51],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789620 28251 lb_config.go:1016] Cluster endpoints for 
openshift-authentication-operator/metrics are: map[TCP/https:{8443 [10.217.0.19] []}] 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789628 28251 services_controller.go:413] Built service openshift-authentication-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.51"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.19"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789635 28251 services_controller.go:414] Built service openshift-authentication-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789640 28251 services_controller.go:415] Built service openshift-authentication-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789649 28251 services_controller.go:421] Built service openshift-authentication-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789665 28251 services_controller.go:422] Built service openshift-authentication-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789673 28251 services_controller.go:423] Built service openshift-authentication-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789679 28251 services_controller.go:424] Service openshift-authentication-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789692 28251 services_controller.go:441] Skipping no-op change for service openshift-authentication-operator/metrics 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789695 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-authentication-operator : 119.973µs 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789701 28251 services_controller.go:332] Processing sync for service openshift-console/console 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789705 28251 services_controller.go:397] Service console retrieved from lister: &Service{ObjectMeta:{console openshift-console 5b0bdd1d-b81c-479c-9a03-f3ff2b5db014 9795 0 2024-06-26 12:53:44 +0000 UTC map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 
service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: ui,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789741 28251 lb_config.go:1016] Cluster endpoints for openshift-console/console are: map[TCP/https:{8443 [10.217.0.73] []}] 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789748 28251 services_controller.go:413] Built service openshift-console/console LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.140"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.73"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789755 28251 services_controller.go:414] Built service openshift-console/console LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789759 28251 services_controller.go:415] Built service openshift-console/console LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789768 28251 services_controller.go:421] Built service openshift-console/console cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789784 28251 services_controller.go:422] Built service openshift-console/console per-node LB []services.LB{} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789790 28251 services_controller.go:423] Built service openshift-console/console template LB []services.LB{} 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789799 28251 services_controller.go:424] Service openshift-console/console has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790755800+00:00 stderr F I1013 00:21:42.789816 28251 services_controller.go:441] Skipping no-op change for service openshift-console/console 
2025-10-13T00:21:42.790755800+00:00 stderr P I1013 00:21:42.789821 28251 se 2025-10-13T00:21:42.790789461+00:00 stderr F rvices_controller.go:336] Finished syncing service console on namespace openshift-console : 118.774µs 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789829 28251 services_controller.go:332] Processing sync for service openshift-console-operator/webhook 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789834 28251 services_controller.go:397] Service webhook retrieved from lister: &Service{ObjectMeta:{webhook openshift-console-operator 0bec6a60-3529-4fdb-81de-718ea6c4dae4 9610 0 2024-06-26 12:53:34 +0000 UTC map[name:console-conversion-webhook] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:webhook-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efdb7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:9443,TargetPort:{0 9443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-conversion-webhook,},ClusterIP:10.217.5.84,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.84],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789885 28251 lb_config.go:1016] Cluster endpoints for openshift-console-operator/webhook are: map[TCP/webhook:{9443 [10.217.0.61] []}] 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789893 28251 services_controller.go:413] Built service openshift-console-operator/webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.84"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.61"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789901 28251 services_controller.go:414] Built service openshift-console-operator/webhook LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789906 28251 services_controller.go:415] Built service openshift-console-operator/webhook LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789915 28251 services_controller.go:421] Built service openshift-console-operator/webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789931 28251 services_controller.go:422] Built service openshift-console-operator/webhook per-node LB []services.LB{} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789938 28251 services_controller.go:423] Built service openshift-console-operator/webhook template LB []services.LB{} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789944 28251 services_controller.go:424] Service openshift-console-operator/webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789958 28251 services_controller.go:441] Skipping no-op change for service openshift-console-operator/webhook 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789961 28251 services_controller.go:336] Finished syncing service webhook on namespace openshift-console-operator : 132.993µs 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789967 28251 services_controller.go:332] Processing sync for service openshift-kube-controller-manager-operator/metrics 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.789973 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-controller-manager-operator 136038f9-f376-4b0b-8c75-a42240d176cc 4549 0 2024-06-26 12:39:14 +0000 UTC map[app:kube-controller-manager-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00029715f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-controller-manager-operator,},ClusterIP:10.217.4.79,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.79],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.790009 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.15] []}] 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.790017 28251 
services_controller.go:413] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.79"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.15"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.790024 28251 services_controller.go:414] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.790028 28251 services_controller.go:415] Built service openshift-kube-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.790039 28251 services_controller.go:421] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790789461+00:00 stderr F I1013 00:21:42.790057 28251 services_controller.go:422] Built service openshift-kube-controller-manager-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.790789461+00:00 stderr P I1013 00:21:42.790062 28251 services_controller.go:423] Built service openshi 2025-10-13T00:21:42.790809661+00:00 stderr F ft-kube-controller-manager-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790068 28251 services_controller.go:424] Service openshift-kube-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790082 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-controller-manager-operator/metrics 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790085 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator : 117.923µs 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790091 28251 services_controller.go:332] Processing sync for service openshift-machine-api/cluster-autoscaler-operator 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790097 28251 services_controller.go:397] Service cluster-autoscaler-operator retrieved from lister: &Service{ObjectMeta:{cluster-autoscaler-operator openshift-machine-api c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062 4713 0 2024-06-26 12:39:18 +0000 UTC map[k8s-app:cluster-autoscaler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-autoscaler-operator-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00029769b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9192,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-autoscaler-operator,},ClusterIP:10.217.5.83,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.83],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790141 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/cluster-autoscaler-operator are: map[] 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790149 28251 services_controller.go:413] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790159 28251 services_controller.go:414] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790164 28251 services_controller.go:415] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790174 28251 services_controller.go:421] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790189 28251 services_controller.go:422] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB []services.LB{} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790195 28251 services_controller.go:423] Built service openshift-machine-api/cluster-autoscaler-operator template LB []services.LB{} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790201 28251 services_controller.go:424] Service openshift-machine-api/cluster-autoscaler-operator has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790214 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-api/cluster-autoscaler-operator 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790218 28251 services_controller.go:336] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api : 126.453µs 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790224 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-controllers 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790229 28251 services_controller.go:397] Service machine-api-controllers retrieved from lister: &Service{ObjectMeta:{machine-api-controllers openshift-machine-api 6a75af62-23dd-4080-8ef6-00c8bb47e103 4782 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00029793b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.5.185,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.185],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790809661+00:00 stderr F I1013 00:21:42.790272 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-controllers are: map[] 2025-10-13T00:21:42.790809661+00:00 stderr P I1013 00:21:42.790280 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs 
[]services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, e 2025-10-13T00:21:42.790834532+00:00 stderr F xternalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8444, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790291 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-controllers LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790295 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-controllers LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790306 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-controllers cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790328 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-controllers per-node LB []services.LB{} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790334 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-controllers template LB []services.LB{} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790340 28251 services_controller.go:424] Service openshift-machine-api/machine-api-controllers has 3 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790389 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-api/machine-api-controllers 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790393 28251 services_controller.go:336] Finished syncing service machine-api-controllers on namespace openshift-machine-api : 169.065µs 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790400 28251 
services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-webhook 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790405 28251 services_controller.go:397] Service machine-api-operator-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-webhook openshift-machine-api 128263d4-d278-44f6-9ae4-9e9ecc572513 4862 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-api-operator-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000297f3b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790442 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-webhook are: map[] 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790448 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.44"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790456 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790461 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-webhook LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790470 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790485 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-webhook per-node LB []services.LB{} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790491 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-webhook template LB []services.LB{} 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790497 28251 services_controller.go:424] Service openshift-machine-api/machine-api-operator-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790509 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-api/machine-api-operator-webhook 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790513 28251 services_controller.go:336] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api : 112.853µs 2025-10-13T00:21:42.790834532+00:00 stderr F I1013 00:21:42.790519 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-10-13T00:21:42.790834532+00:00 stderr P I1013 00:21:42.790524 28251 services_controller.go:397] Service package-server-manager-metrics retrieved from lister: &Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager ae547e8e-2a0a-43b3-8358-80f1e40dfde9 5119 0 2024-06-26 12:39:24 +0000 UTC map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073379f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: package-server-manager,},ClusterI 2025-10-13T00:21:42.790855052+00:00 stderr F P:10.217.4.147,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.147],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790570 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/package-server-manager-metrics are: map[TCP/metrics:{8443 [10.217.0.24] []}] 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790586 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB cluster-wide 
configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.147"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.24"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790596 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790603 28251 services_controller.go:415] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790617 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790636 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790643 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics template LB []services.LB{} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790649 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/package-server-manager-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790665 28251 services_controller.go:441] Skipping no-op change for service openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790670 28251 services_controller.go:336] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager : 150.044µs 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790678 28251 services_controller.go:332] Processing sync for service default/kubernetes 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790683 28251 services_controller.go:397] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 057366b7-69c1-49f0-88ca-660c8863cae8 249 0 2024-06-26 12:38:03 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790728 28251 lb_config.go:1016] Cluster endpoints for default/kubernetes are: map[TCP/https:{6443 [192.168.126.11] []}] 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790736 28251 services_controller.go:413] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790741 28251 services_controller.go:414] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.1"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790748 28251 services_controller.go:415] Built service default/kubernetes LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790761 28251 services_controller.go:421] Built service default/kubernetes cluster-wide LB []services.LB{} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790767 28251 services_controller.go:422] Built service default/kubernetes per-node LB []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790784 28251 services_controller.go:423] Built service default/kubernetes template LB []services.LB{} 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790790 28251 services_controller.go:424] Service default/kubernetes has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790809 28251 services_controller.go:441] Skipping no-op change for service default/kubernetes 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790816 28251 services_controller.go:336] Finished syncing service kubernetes on namespace default : 137.373µs 2025-10-13T00:21:42.790855052+00:00 stderr F I1013 00:21:42.790828 28251 services_controller.go:332] Processing sync for service openshift-console-operator/metrics 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 
00:21:42.790835 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-console-operator 793d323e-de30-470a-af76-520af7b2dad8 9604 0 2024-06-26 12:53:34 +0000 UTC map[name:console-operator] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efce7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-operator,},ClusterIP:10.217.5.211,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.211],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790896 28251 lb_config.go:1016] Cluster endpoints for openshift-console-operator/metrics are: map[TCP/https:{8443 [10.217.0.62] []}] 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790908 28251 services_controller.go:413] Built service openshift-console-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.211"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.62"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790918 28251 services_controller.go:414] Built service openshift-console-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790924 28251 services_controller.go:415] Built service openshift-console-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790938 28251 services_controller.go:421] Built service openshift-console-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.791580612+00:00 
stderr F I1013 00:21:42.790959 28251 services_controller.go:422] Built service openshift-console-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790965 28251 services_controller.go:423] Built service openshift-console-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790971 28251 services_controller.go:424] Service openshift-console-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790989 28251 services_controller.go:441] Skipping no-op change for service openshift-console-operator/metrics 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.790993 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-console-operator : 166.624µs 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791002 28251 services_controller.go:332] Processing sync for service openshift-multus/network-metrics-service 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791010 28251 services_controller.go:336] Finished syncing service network-metrics-service on namespace openshift-multus : 7.761µs 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791016 28251 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791023 28251 services_controller.go:336] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes : 6.38µs 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791031 28251 services_controller.go:332] Processing sync for service default/openshift 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791037 28251 services_controller.go:336] Finished syncing service openshift on namespace default : 5.17µs 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791043 28251 services_controller.go:332] Processing sync for service openshift-marketplace/marketplace-operator-metrics 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791049 28251 services_controller.go:397] Service marketplace-operator-metrics retrieved from lister: &Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace 1bfd7637-f88e-403e-8d75-c71b380fc127 4909 0 2024-06-26 12:39:13 +0000 UTC map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000732487 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
marketplace-operator,},ClusterIP:10.217.5.19,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.19],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791108 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/marketplace-operator-metrics are: map[TCP/https-metrics:{8081 [10.217.0.30] []} TCP/metrics:{8383 [10.217.0.30] []}] 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791118 28251 services_controller.go:413] Built service openshift-marketplace/marketplace-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8383, clusterEndpoints:services.lbEndpoints{Port:8383, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8081, clusterEndpoints:services.lbEndpoints{Port:8081, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791128 28251 services_controller.go:414] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.791580612+00:00 stderr F I1013 00:21:42.791133 28251 services_controller.go:415] Built service openshift-marketplace/marketplace-operator-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.791580612+00:00 stderr P I1013 00:21:42.791143 28251 services_controller.go:421] Built service opensh 2025-10-13T00:21:42.791617413+00:00 stderr F ift-marketplace/marketplace-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791163 28251 services_controller.go:422] Built service openshift-marketplace/marketplace-operator-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791170 28251 services_controller.go:423] Built service 
openshift-marketplace/marketplace-operator-metrics template LB []services.LB{} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791176 28251 services_controller.go:424] Service openshift-marketplace/marketplace-operator-metrics has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791191 28251 services_controller.go:441] Skipping no-op change for service openshift-marketplace/marketplace-operator-metrics 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791195 28251 services_controller.go:336] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace : 151.585µs 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791201 28251 services_controller.go:332] Processing sync for service openshift-route-controller-manager/route-controller-manager 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791206 28251 services_controller.go:397] Service route-controller-manager retrieved from lister: &Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 105b901a-a54d-4dae-b7b6-99e83d48166f 5156 0 2024-06-26 12:47:06 +0000 UTC map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{route-controller-manager: true,},ClusterIP:10.217.5.173,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.173],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791245 28251 lb_config.go:1016] Cluster endpoints for openshift-route-controller-manager/route-controller-manager are: map[TCP/https:{8443 [10.217.0.88] []}] 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791253 28251 services_controller.go:413] Built service openshift-route-controller-manager/route-controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.173"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.88"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791263 28251 services_controller.go:414] Built service openshift-route-controller-manager/route-controller-manager LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791269 28251 services_controller.go:415] Built service openshift-route-controller-manager/route-controller-manager LB template configs []services.lbConfig(nil) 
2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791279 28251 services_controller.go:421] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791298 28251 services_controller.go:422] Built service openshift-route-controller-manager/route-controller-manager per-node LB []services.LB{} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791303 28251 services_controller.go:423] Built service openshift-route-controller-manager/route-controller-manager template LB []services.LB{} 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791309 28251 services_controller.go:424] Service openshift-route-controller-manager/route-controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791327 28251 services_controller.go:441] Skipping no-op change for service openshift-route-controller-manager/route-controller-manager 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791331 28251 services_controller.go:336] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager : 129.133µs 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791338 28251 services_controller.go:332] Processing sync for service openshift-cluster-machine-approver/machine-approver 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791359 28251 services_controller.go:336] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver : 4.741µs 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791367 28251 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-source 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791371 28251 services_controller.go:336] Finished syncing service network-check-source on namespace openshift-network-diagnostics : 4.53µs 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791377 28251 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-marketplace 2025-10-13T00:21:42.791617413+00:00 stderr F I1013 00:21:42.791382 28251 services_controller.go:397] Service redhat-marketplace retrieved from lister: &Service{ObjectMeta:{redhat-marketplace openshift-marketplace 73712edb-385d-4bf0-9c03-b6c570b1a22f 6434 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true olm.service-spec-hash:aUeLNNcZzVZO2rcaZ5Kc8V3jffO0Ss4T6qX6V5] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-marketplace 6f259421-4edb-49d8-a6ce-aa41dfc64264 0xc00073255d 0xc00073255e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-marketplace,olm.managed: true,},ClusterIP:10.217.4.65,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.65],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.791617413+00:00 stderr P I1013 00:21:42.791424 28251 lb_config.go:1016] Cluster endpoin 2025-10-13T00:21:42.791661904+00:00 stderr F ts for openshift-marketplace/redhat-marketplace are: map[TCP/grpc:{50051 [10.217.0.33] []}] 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791432 28251 services_controller.go:413] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.65"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.33"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791441 28251 services_controller.go:414] Built service openshift-marketplace/redhat-marketplace LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791447 28251 services_controller.go:415] Built service openshift-marketplace/redhat-marketplace LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791460 28251 services_controller.go:421] Built service openshift-marketplace/redhat-marketplace cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791481 28251 services_controller.go:422] Built service openshift-marketplace/redhat-marketplace per-node LB []services.LB{} 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791489 28251 services_controller.go:423] Built service openshift-marketplace/redhat-marketplace template LB []services.LB{} 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791496 28251 services_controller.go:424] Service openshift-marketplace/redhat-marketplace has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791513 28251 services_controller.go:441] Skipping no-op change for 
service openshift-marketplace/redhat-marketplace 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791518 28251 services_controller.go:336] Finished syncing service redhat-marketplace on namespace openshift-marketplace : 140.004µs 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791526 28251 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry-operator 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791532 28251 services_controller.go:336] Finished syncing service image-registry-operator on namespace openshift-image-registry : 6.011µs 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791540 28251 services_controller.go:332] Processing sync for service openshift-kube-scheduler/scheduler 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791547 28251 services_controller.go:397] Service scheduler retrieved from lister: &Service{ObjectMeta:{scheduler openshift-kube-scheduler a839a554-406d-4df8-b3ae-b533cb3e24bc 4695 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:kube-scheduler] map[operator.openshift.io/spec-hash:f185087b7610499b49263c17685abe7f251a50c890808284a072687bf6d73275 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10259 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{scheduler: true,},ClusterIP:10.217.5.218,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.218],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791600 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler/scheduler are: map[TCP/https:{10259 [192.168.126.11] []}] 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791611 28251 services_controller.go:413] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791620 28251 services_controller.go:414] Built service openshift-kube-scheduler/scheduler LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.218"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10259, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.791661904+00:00 stderr F I1013 00:21:42.791635 28251 services_controller.go:415] Built service openshift-kube-scheduler/scheduler LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.791771 28251 ovs.go:159] Exec(40): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 ofport 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.791934 28251 model_client.go:381] Update operations generated as: [{Op:update 
Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792058 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792084 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792559 28251 services_controller.go:332] Processing sync for service openshift-config-operator/metrics 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792573 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-config-operator f04ada1b-55ad-45a3-9231-6d1ff7242fa0 4291 0 2024-06-26 12:39:24 +0000 UTC map[app:openshift-config-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:config-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efb9f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-config-operator,},ClusterIP:10.217.5.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.120],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792812 28251 lb_config.go:1016] Cluster endpoints for openshift-config-operator/metrics are: map[TCP/https:{8443 [10.217.0.23] []}] 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792831 28251 services_controller.go:413] Built 
service openshift-config-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.120"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.23"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792846 28251 services_controller.go:414] Built service openshift-config-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792855 28251 services_controller.go:415] Built service openshift-config-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792899 28251 services_controller.go:421] Built service openshift-config-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792931 28251 services_controller.go:422] Built service openshift-config-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792941 28251 services_controller.go:423] Built service openshift-config-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792971 28251 services_controller.go:424] Service openshift-config-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.792996 28251 services_controller.go:441] Skipping no-op change for service openshift-config-operator/metrics 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.793002 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-config-operator : 449.552µs 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.793019 28251 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.793052 28251 services_controller.go:332] Processing sync for service openshift-apiserver/api 2025-10-13T00:21:42.793298018+00:00 stderr F I1013 00:21:42.793048 28251 services_controller.go:397] Service image-registry retrieved from lister: &Service{ObjectMeta:{image-registry openshift-image-registry 7b12735e-9db4-4c6e-99f6-b2626c4e9f08 17962 0 2024-06-27 13:18:52 +0000 UTC map[docker-registry:default] map[imageregistry.operator.openshift.io/checksum:sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d service.alpha.openshift.io/serving-cert-secret-name:image-registry-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793298018+00:00 stderr P I1013 00:21:42.793060 28251 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-apiserver fb5bd66d-5e82-4bcc-8126-39324a92dccc 5229 0 2024-06-26 12:47:09 +0000 UTC map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Pr 2025-10-13T00:21:42.793376450+00:00 stderr F otocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.69,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.69],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793113 28251 services_controller.go:332] Processing sync for service openshift-cluster-samples-operator/metrics 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793124 28251 lb_config.go:1016] Cluster endpoints for openshift-apiserver/api are: map[TCP/https:{8443 [10.217.0.82] []}] 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793134 28251 lb_config.go:1016] Cluster endpoints for openshift-image-registry/image-registry are: map[TCP/5000-tcp:{5000 [10.217.0.38] []}] 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793135 28251 services_controller.go:413] Built service openshift-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.69"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793140 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-cluster-samples-operator : 41.251µs 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793147 28251 services_controller.go:414] Built service 
openshift-apiserver/api LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793153 28251 services_controller.go:415] Built service openshift-apiserver/api LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793149 28251 services_controller.go:413] Built service openshift-image-registry/image-registry LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.41"}, protocol:"TCP", inport:5000, clusterEndpoints:services.lbEndpoints{Port:5000, V4IPs:[]string{"10.217.0.38"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793161 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793162 28251 services_controller.go:414] Built service openshift-image-registry/image-registry LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793171 28251 services_controller.go:415] Built service openshift-image-registry/image-registry LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793165 28251 services_controller.go:421] Built service openshift-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793184 28251 services_controller.go:422] Built service openshift-apiserver/api per-node LB []services.LB{} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793189 28251 services_controller.go:423] Built service openshift-apiserver/api template LB []services.LB{} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793195 28251 services_controller.go:424] Service openshift-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793212 28251 services_controller.go:441] Skipping no-op change for service openshift-apiserver/api 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793216 28251 services_controller.go:336] Finished syncing service api on namespace openshift-apiserver : 167.585µs 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793226 28251 services_controller.go:332] Processing sync for service openshift-cluster-version/cluster-version-operator 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793187 28251 services_controller.go:421] Built service openshift-image-registry/image-registry cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", 
UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.38", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793237 28251 services_controller.go:422] Built service openshift-image-registry/image-registry per-node LB []services.LB{} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793246 28251 services_controller.go:423] Built service openshift-image-registry/image-registry template LB []services.LB{} 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793255 28251 services_controller.go:424] Service openshift-image-registry/image-registry has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793376450+00:00 stderr F I1013 00:21:42.793232 28251 services_controller.go:397] Service cluster-version-operator retrieved from lister: &Service{ObjectMeta:{cluster-version-operator openshift-cluster-version b85c5397-4189-4029-b181-4e339da207b7 6237 0 2024-06-26 12:38:51 +0000 UTC map[k8s-app:cluster-version-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true kubernetes.io/description:Expose cluster-version operator metrics to other in-cluster consumers. Access requires a prometheus-k8s RoleBinding in this namespace. 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-version-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002efad7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9099,TargetPort:{0 9099 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-version-operator,},ClusterIP:10.217.5.47,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.47],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793376450+00:00 stderr P I1013 00:21:42.793171 28251 services_controller.go:397] Service packageserver-service retrieved from lister: &Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager 8099635d-a821-489e-8b18-cae3e83f00b2 6451 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver 0beab272-7637-4d44-b3aa-502dcafbc929 0xc0007338ed 0xc0007338ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5443,Protocol:TCP,Port:5443,TargetPort:{0 5443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: packageserver,},ClusterIP:10.217. 2025-10-13T00:21:42.793414761+00:00 stderr F 4.230,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.230],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793278 28251 lb_config.go:1016] Cluster endpoints for openshift-cluster-version/cluster-version-operator are: map[TCP/metrics:{9099 [192.168.126.11] []}] 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793288 28251 services_controller.go:413] Built service openshift-cluster-version/cluster-version-operator LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793292 28251 services_controller.go:414] Built service openshift-cluster-version/cluster-version-operator LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.47"}, protocol:"TCP", inport:9099, clusterEndpoints:services.lbEndpoints{Port:9099, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793301 28251 services_controller.go:415] Built service openshift-cluster-version/cluster-version-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793303 28251 
services_controller.go:441] Skipping no-op change for service openshift-image-registry/image-registry 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793312 28251 services_controller.go:336] Finished syncing service image-registry on namespace openshift-image-registry : 289.957µs 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793319 28251 services_controller.go:421] Built service openshift-cluster-version/cluster-version-operator cluster-wide LB []services.LB{} 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793324 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/packageserver-service are: map[TCP/5443:{5443 [10.217.0.43] []}] 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793325 28251 services_controller.go:422] Built service openshift-cluster-version/cluster-version-operator per-node LB []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793357 28251 services_controller.go:332] Processing sync for service openshift-dns-operator/metrics 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793361 28251 services_controller.go:423] Built service openshift-cluster-version/cluster-version-operator template LB []services.LB{} 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793368 28251 services_controller.go:424] Service openshift-cluster-version/cluster-version-operator has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793364 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.230"}, protocol:"TCP", inport:5443, clusterEndpoints:services.lbEndpoints{Port:5443, V4IPs:[]string{"10.217.0.43"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793383 28251 services_controller.go:441] Skipping no-op change for service openshift-cluster-version/cluster-version-operator 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793385 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/packageserver-service LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793387 28251 services_controller.go:336] Finished syncing service cluster-version-operator on namespace openshift-cluster-version : 161.064µs 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793394 28251 services_controller.go:415] Built service 
openshift-operator-lifecycle-manager/packageserver-service LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793414761+00:00 stderr F I1013 00:21:42.793397 28251 services_controller.go:332] Processing sync for service openshift-apiserver-operator/metrics 2025-10-13T00:21:42.793455822+00:00 stderr F I1013 00:21:42.793365 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-dns-operator c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2 4375 0 2024-06-26 12:39:11 +0000 UTC map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296167 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: dns-operator,},ClusterIP:10.217.4.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793455822+00:00 stderr F I1013 00:21:42.793444 28251 services_controller.go:332] Processing sync for service openshift-dns/dns-default 2025-10-13T00:21:42.793455822+00:00 stderr F I1013 00:21:42.793422 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/packageserver-service cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793471543+00:00 stderr F I1013 00:21:42.793458 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB []services.LB{} 2025-10-13T00:21:42.793471543+00:00 stderr F I1013 00:21:42.793459 28251 lb_config.go:1016] Cluster endpoints for openshift-dns-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.18] []}] 2025-10-13T00:21:42.793480873+00:00 stderr F I1013 00:21:42.793469 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/packageserver-service template LB []services.LB{} 2025-10-13T00:21:42.793480873+00:00 stderr F I1013 00:21:42.793473 
28251 services_controller.go:413] Built service openshift-dns-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.52"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.18"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793489943+00:00 stderr F I1013 00:21:42.793480 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/packageserver-service has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793489943+00:00 stderr F I1013 00:21:42.793450 28251 services_controller.go:397] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns 9c0247d8-5697-41ea-812e-582bb93c9b4d 5259 0 2024-06-26 12:47:19 +0000 UTC map[dns.operator.openshift.io/owning-dns:default] map[service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 DNS default 8e7b8280-016f-4ceb-a792-fc5be2494468 0xc000296237 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793530664+00:00 stderr F I1013 00:21:42.793501 28251 lb_config.go:1016] Cluster endpoints for openshift-dns/dns-default are: map[TCP/dns-tcp:{5353 [10.217.0.31] []} TCP/metrics:{9154 [10.217.0.31] []} UDP/dns:{5353 [10.217.0.31] []}] 2025-10-13T00:21:42.793530664+00:00 stderr F I1013 00:21:42.793507 28251 services_controller.go:441] Skipping no-op change for service openshift-operator-lifecycle-manager/packageserver-service 2025-10-13T00:21:42.793530664+00:00 stderr F I1013 00:21:42.793519 28251 services_controller.go:413] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.793530664+00:00 stderr F I1013 00:21:42.793518 28251 services_controller.go:336] Finished syncing service packageserver-service on namespace openshift-operator-lifecycle-manager : 356.88µs 2025-10-13T00:21:42.793546205+00:00 stderr F I1013 00:21:42.793525 28251 services_controller.go:414] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"UDP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:9154, clusterEndpoints:services.lbEndpoints{Port:9154, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793546205+00:00 stderr F I1013 00:21:42.793535 28251 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:42.793546205+00:00 stderr F I1013 00:21:42.793540 28251 services_controller.go:415] Built service openshift-dns/dns-default LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793555625+00:00 stderr F I1013 00:21:42.793544 28251 services_controller.go:336] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes : 9.111µs 2025-10-13T00:21:42.793555625+00:00 stderr F I1013 00:21:42.793543 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.004253914 seconds. No OVN measurement. 2025-10-13T00:21:42.793577566+00:00 stderr F I1013 00:21:42.793554 28251 services_controller.go:332] Processing sync for service openshift-ingress/router-internal-default 2025-10-13T00:21:42.793577566+00:00 stderr F I1013 00:21:42.793559 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-cluster-machine-approver/machine-approver. OVN-Kubernetes controller took 0.004531902 seconds. No OVN measurement. 2025-10-13T00:21:42.793577566+00:00 stderr F I1013 00:21:42.793567 28251 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.004688186 seconds. No OVN measurement. 
2025-10-13T00:21:42.793577566+00:00 stderr F I1013 00:21:42.793569 28251 services_controller.go:421] Built service openshift-dns/dns-default cluster-wide LB []services.LB{} 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793576 28251 services_controller.go:422] Built service openshift-dns/dns-default per-node LB []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793607 28251 services_controller.go:423] Built service openshift-dns/dns-default template LB []services.LB{} 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793614 28251 services_controller.go:424] Service openshift-dns/dns-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793562 28251 services_controller.go:397] Service router-internal-default retrieved from lister: &Service{ObjectMeta:{router-internal-default openshift-ingress 3ded9605-ced3-4583-97b6-f93264b463a7 7398 0 2024-06-26 12:48:38 +0000 UTC map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{apps/v1 Deployment router-default 9ae4d312-7fc4-4344-ab7a-669da95f56bf 0xc000296cee }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: 
default,},ClusterIP:10.217.4.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793484 28251 services_controller.go:414] Built service openshift-dns-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793636 28251 services_controller.go:441] Skipping no-op change for service openshift-dns/dns-default 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793638 28251 services_controller.go:415] Built service openshift-dns-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793640 28251 services_controller.go:336] Finished syncing service dns-default on namespace openshift-dns : 196.585µs 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793647 28251 lb_config.go:1016] Cluster endpoints for openshift-ingress/router-internal-default are: map[TCP/http:{80 [192.168.126.11] []} TCP/https:{443 [192.168.126.11] []} TCP/metrics:{1936 [192.168.126.11] []}] 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793653 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793668 28251 services_controller.go:413] Built service openshift-ingress/router-internal-default LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793658 28251 services_controller.go:421] Built service openshift-dns-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793720839+00:00 stderr F I1013 00:21:42.793676 28251 services_controller.go:414] Built service openshift-ingress/router-internal-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:80, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:1936, clusterEndpoints:services.lbEndpoints{Port:1936, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793720839+00:00 stderr P I1013 00:21:42.793663 28251 services_controller.go:397] Service machine-api-operator retrieved from lister: &Service{ObjectMeta:{machine-api-operator openshift-machine-api ef047d6e-c72f-4309-a95e-08fb0ed08662 4792 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:machine-api-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000297a9b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.127,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.127],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{Load 2025-10-13T00:21:42.793754150+00:00 stderr F Balancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793698 28251 services_controller.go:415] Built service openshift-ingress/router-internal-default LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793711 28251 services_controller.go:422] Built service openshift-dns-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793715 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator are: map[TCP/https:{8443 [10.217.0.5] []}] 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793726 28251 services_controller.go:423] Built service openshift-dns-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793728 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.127"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.5"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793731 28251 services_controller.go:421] Built service openshift-ingress/router-internal-default cluster-wide LB []services.LB{} 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793735 28251 services_controller.go:424] 
Service openshift-dns-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793736 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793754150+00:00 stderr F I1013 00:21:42.793743 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793782611+00:00 stderr F I1013 00:21:42.793755 28251 services_controller.go:441] Skipping no-op change for service openshift-dns-operator/metrics 2025-10-13T00:21:42.793782611+00:00 stderr F I1013 00:21:42.793760 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-dns-operator : 403.901µs 2025-10-13T00:21:42.793782611+00:00 stderr F I1013 00:21:42.793753 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793782611+00:00 stderr F I1013 00:21:42.793742 28251 services_controller.go:422] Built service openshift-ingress/router-internal-default per-node LB []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.793782611+00:00 stderr F I1013 00:21:42.793769 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-operator per-node LB []services.LB{} 2025-10-13T00:21:42.793782611+00:00 stderr F I1013 00:21:42.793775 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-operator template LB []services.LB{} 2025-10-13T00:21:42.793782611+00:00 
stderr F I1013 00:21:42.793775 28251 services_controller.go:423] Built service openshift-ingress/router-internal-default template LB []services.LB{} 2025-10-13T00:21:42.793798232+00:00 stderr F I1013 00:21:42.793782 28251 services_controller.go:424] Service openshift-machine-api/machine-api-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793798232+00:00 stderr F I1013 00:21:42.793788 28251 services_controller.go:424] Service openshift-ingress/router-internal-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.793798232+00:00 stderr F I1013 00:21:42.793795 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-api/machine-api-operator 2025-10-13T00:21:42.793807852+00:00 stderr F I1013 00:21:42.793402 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-apiserver-operator 4c2fba48-c67e-4420-9529-0bb456da4341 4348 0 2024-06-26 12:39:11 +0000 UTC map[app:openshift-apiserver-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:openshift-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002ef387 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-apiserver-operator,},ClusterIP:10.217.5.125,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.125],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793807852+00:00 stderr F I1013 00:21:42.793795 28251 services_controller.go:332] Processing sync for service openshift-monitoring/cluster-monitoring-operator 2025-10-13T00:21:42.793817762+00:00 stderr F I1013 00:21:42.793806 28251 services_controller.go:336] Finished syncing service cluster-monitoring-operator on namespace openshift-monitoring : 12.091µs 2025-10-13T00:21:42.793817762+00:00 stderr F I1013 00:21:42.793812 28251 services_controller.go:332] Processing sync for service openshift-service-ca-operator/metrics 2025-10-13T00:21:42.793817762+00:00 stderr F I1013 00:21:42.793813 28251 services_controller.go:441] Skipping no-op change for service openshift-ingress/router-internal-default 2025-10-13T00:21:42.793830962+00:00 stderr F I1013 00:21:42.793816 28251 lb_config.go:1016] Cluster endpoints for openshift-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.6] []}] 2025-10-13T00:21:42.793830962+00:00 stderr F I1013 00:21:42.793820 28251 services_controller.go:336] Finished syncing service 
router-internal-default on namespace openshift-ingress : 265.637µs 2025-10-13T00:21:42.793840533+00:00 stderr F I1013 00:21:42.793830 28251 services_controller.go:413] Built service openshift-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.125"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.6"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793840533+00:00 stderr F I1013 00:21:42.793835 28251 services_controller.go:332] Processing sync for service openshift-kube-controller-manager/kube-controller-manager 2025-10-13T00:21:42.793849093+00:00 stderr F I1013 00:21:42.793841 28251 services_controller.go:414] Built service openshift-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793849093+00:00 stderr F I1013 00:21:42.793817 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-service-ca-operator 030283b3-acfe-40ed-811c-d9f7f79607f6 5225 0 2024-06-26 12:39:07 +0000 UTC map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000733cdf }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: service-ca-operator,},ClusterIP:10.217.5.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.165],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793881124+00:00 stderr F I1013 00:21:42.793859 28251 lb_config.go:1016] Cluster endpoints for openshift-service-ca-operator/metrics are: map[TCP/https:{8443 [10.217.0.10] []}] 2025-10-13T00:21:42.793881124+00:00 stderr F I1013 00:21:42.793870 28251 services_controller.go:413] Built service openshift-service-ca-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.165"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.10"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793881124+00:00 stderr F I1013 00:21:42.793800 28251 services_controller.go:336] Finished syncing service machine-api-operator on namespace openshift-machine-api : 145.964µs 2025-10-13T00:21:42.793896954+00:00 stderr F I1013 00:21:42.793886 28251 services_controller.go:332] Processing sync for service 
openshift-kube-storage-version-migrator-operator/metrics 2025-10-13T00:21:42.793896954+00:00 stderr F I1013 00:21:42.793845 28251 services_controller.go:397] Service kube-controller-manager retrieved from lister: &Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 419fdf14-5d8d-4271-b9e7-729de80d8cd2 5235 0 2024-06-26 12:47:11 +0000 UTC map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10257 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{kube-controller-manager: true,},ClusterIP:10.217.4.112,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.112],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793936365+00:00 stderr F I1013 00:21:42.793906 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager/kube-controller-manager are: map[TCP/https:{10257 [192.168.126.11] []}] 2025-10-13T00:21:42.793936365+00:00 stderr F I1013 00:21:42.793925 28251 services_controller.go:413] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.793947356+00:00 stderr F I1013 00:21:42.793932 28251 services_controller.go:414] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.112"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10257, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.793956066+00:00 stderr F I1013 00:21:42.793945 28251 services_controller.go:415] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793956066+00:00 stderr F I1013 00:21:42.793946 28251 services_controller.go:414] Built service openshift-service-ca-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.793977296+00:00 stderr F I1013 00:21:42.793955 28251 services_controller.go:415] Built service openshift-service-ca-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.793977296+00:00 stderr F I1013 00:21:42.793893 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-storage-version-migrator-operator 3e887cd0-b481-460c-b943-d944dc64df2f 4706 0 2024-06-26 12:39:17 +0000 UTC map[app:kube-storage-version-migrator-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true 
include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002975d7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-storage-version-migrator-operator,},ClusterIP:10.217.4.151,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.151],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.793977296+00:00 stderr F I1013 00:21:42.793968 28251 services_controller.go:421] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB []services.LB{} 2025-10-13T00:21:42.793992457+00:00 stderr F I1013 00:21:42.793966 28251 services_controller.go:421] Built service openshift-service-ca-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.793992457+00:00 stderr F I1013 00:21:42.793983 28251 services_controller.go:422] Built service openshift-service-ca-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.793992457+00:00 stderr F I1013 00:21:42.793989 28251 services_controller.go:423] Built service openshift-service-ca-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.794002817+00:00 stderr F I1013 00:21:42.793986 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-storage-version-migrator-operator/metrics are: map[TCP/https:{8443 [10.217.0.16] []}] 2025-10-13T00:21:42.794002817+00:00 stderr F I1013 00:21:42.793995 28251 services_controller.go:424] Service openshift-service-ca-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.794002817+00:00 stderr F I1013 00:21:42.793980 28251 services_controller.go:422] Built service openshift-kube-controller-manager/kube-controller-manager per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.794012177+00:00 stderr F I1013 00:21:42.793999 28251 services_controller.go:413] Built service openshift-kube-storage-version-migrator-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.151"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.16"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.794012177+00:00 stderr F I1013 00:21:42.794009 28251 services_controller.go:441] Skipping no-op change for service openshift-service-ca-operator/metrics 2025-10-13T00:21:42.794024218+00:00 stderr F I1013 00:21:42.794009 28251 services_controller.go:423] Built service openshift-kube-controller-manager/kube-controller-manager template LB []services.LB{} 2025-10-13T00:21:42.794024218+00:00 stderr F I1013 00:21:42.794014 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-service-ca-operator : 200.535µs 2025-10-13T00:21:42.794033228+00:00 stderr F I1013 00:21:42.794021 28251 services_controller.go:424] Service openshift-kube-controller-manager/kube-controller-manager has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.794041548+00:00 stderr F I1013 00:21:42.794033 28251 services_controller.go:414] Built service openshift-kube-storage-version-migrator-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.794050448+00:00 stderr F I1013 00:21:42.794041 28251 services_controller.go:415] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.794050448+00:00 stderr F I1013 00:21:42.794044 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-controller-manager/kube-controller-manager 2025-10-13T00:21:42.794059509+00:00 stderr F I1013 00:21:42.794050 28251 services_controller.go:336] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager : 214.575µs 2025-10-13T00:21:42.794068439+00:00 stderr F I1013 00:21:42.794063 28251 services_controller.go:332] Processing sync for service openshift-kube-apiserver-operator/metrics 2025-10-13T00:21:42.794078969+00:00 stderr F I1013 00:21:42.794056 28251 services_controller.go:421] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.794087309+00:00 stderr F I1013 00:21:42.794080 28251 services_controller.go:422] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.794095530+00:00 stderr F I1013 00:21:42.794088 28251 services_controller.go:423] Built service openshift-kube-storage-version-migrator-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794071 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-apiserver-operator ed79a864-3d59-456e-8a6c-724ec68e6d1b 4515 0 2024-06-26 12:39:27 +0000 UTC map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296f2b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.31,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.31],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794118 28251 services_controller.go:424] Service openshift-kube-storage-version-migrator-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794138 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.7] []}] 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794144 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-storage-version-migrator-operator/metrics 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794151 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator : 264.427µs 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794152 28251 services_controller.go:413] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.31"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, 
V4IPs:[]string{"10.217.0.7"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794163 28251 services_controller.go:332] Processing sync for service openshift-ingress-operator/metrics 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794166 28251 services_controller.go:414] Built service openshift-kube-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794174 28251 services_controller.go:415] Built service openshift-kube-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794190 28251 services_controller.go:421] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794215 28251 services_controller.go:422] Built service openshift-kube-apiserver-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794226 28251 services_controller.go:423] Built service openshift-kube-apiserver-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794171 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-ingress-operator 1e390522-c38e-4189-86b8-ad75c61e3844 6514 0 2024-06-26 12:47:50 +0000 UTC map[name:ingress-operator] map[capability.openshift.io/name:Ingress include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296b57 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
ingress-operator,},ClusterIP:10.217.4.255,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.255],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794244 28251 services_controller.go:421] Built service openshift-kube-scheduler/scheduler cluster-wide LB []services.LB{} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794254 28251 services_controller.go:422] Built service openshift-kube-scheduler/scheduler per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794274 28251 services_controller.go:423] Built service openshift-kube-scheduler/scheduler template LB []services.LB{} 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794283 28251 services_controller.go:424] Service openshift-kube-scheduler/scheduler has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794283 28251 lb_config.go:1016] Cluster endpoints for openshift-ingress-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.45] []}] 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794297 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-scheduler/scheduler 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794302 28251 services_controller.go:336] Finished syncing service scheduler on namespace openshift-kube-scheduler : 2.762424ms 2025-10-13T00:21:42.794538262+00:00 stderr F I1013 00:21:42.794300 28251 services_controller.go:413] Built service openshift-ingress-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.255"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.45"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.794538262+00:00 stderr P I1013 00:21:42.794313 28251 services_controller.go:414] Built service openshift-ingress-operator/metrics LB per- 2025-10-13T00:21:42.794600423+00:00 stderr F node configs []services.lbConfig(nil) 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794316 28251 services_controller.go:332] Processing sync for service openshift-machine-api/control-plane-machine-set-operator 
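
Note on reading the entries above: each record follows the CRI container-log framing "<timestamp> <stream> <P|F> <message>", where F marks a full line and P marks a partial line that is completed by the next record (as in "... LB per-" followed by "node configs ..."). A minimal sketch of a parser for that framing, with assumed helper names that are not part of ovn-kubernetes or any CLI shown here:

// Minimal sketch (assumed helpers): split a CRI-format log record
// "<timestamp> <stream> <P|F> <message>" into fields and reassemble
// P (partial) records with the F (full) record that completes them.
package main

import (
	"fmt"
	"strings"
)

type criEntry struct {
	Timestamp string // RFC3339Nano, e.g. 2025-10-13T00:21:42.794600423+00:00
	Stream    string // "stdout" or "stderr"
	Partial   bool   // true when the tag is "P"
	Message   string
}

func parseCRILine(line string) (criEntry, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return criEntry{}, fmt.Errorf("not a CRI log line: %q", line)
	}
	return criEntry{
		Timestamp: parts[0],
		Stream:    parts[1],
		Partial:   parts[2] == "P",
		Message:   parts[3],
	}, nil
}

// joinPartials concatenates consecutive P records with the terminating F record,
// turning a split message back into one logical log line.
func joinPartials(entries []criEntry) []string {
	var out []string
	var buf strings.Builder
	for _, e := range entries {
		buf.WriteString(e.Message)
		if !e.Partial {
			out = append(out, buf.String())
			buf.Reset()
		}
	}
	return out
}

func main() {
	lines := []string{
		"2025-10-13T00:21:42.794538262+00:00 stderr P partial message fragment",
		"2025-10-13T00:21:42.794600423+00:00 stderr F  completed here",
	}
	var entries []criEntry
	for _, l := range lines {
		if e, err := parseCRILine(l); err == nil {
			entries = append(entries, e)
		}
	}
	fmt.Println(joinPartials(entries))
}
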
2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794359 28251 services_controller.go:415] Built service openshift-ingress-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794377 28251 services_controller.go:421] Built service openshift-ingress-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794401 28251 services_controller.go:422] Built service openshift-ingress-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794409 28251 services_controller.go:423] Built service openshift-ingress-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794330 28251 services_controller.go:397] Service control-plane-machine-set-operator retrieved from lister: &Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api 7c42fd7c-0955-49c7-819c-4685e0681272 4749 0 2024-06-26 12:39:09 +0000 UTC map[k8s-app:control-plane-machine-set-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002977b7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.5.136,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.136],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794442 28251 services_controller.go:424] Service openshift-ingress-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794454 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/control-plane-machine-set-operator are: map[TCP/https:{9443 [10.217.0.20] []}] 
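
The entries around this point trace the per-service pipeline: the Service is retrieved from the lister, cluster endpoints are computed per port (for example map[TCP/https:{9443 [10.217.0.20] []}]), those are folded into lbConfig entries, and each config is expanded into a load balancer whose rules map the ClusterIP VIP and service port to endpoint IP and target port. The sketch below uses simplified stand-in types that only mirror the shapes printed in the log; it is an illustration, not the real ovn-kubernetes services package:

// Illustrative sketch only: simplified stand-ins for the lbConfig/LB shapes in the
// log, showing how a ClusterIP VIP plus cluster endpoints becomes one cluster-wide
// load balancer rule (VIP:port -> endpointIP:targetPort).
package main

import "fmt"

type lbEndpoints struct {
	Port  int
	V4IPs []string
}

type lbConfig struct {
	VIPs             []string
	Protocol         string
	InPort           int
	ClusterEndpoints lbEndpoints
}

type addr struct {
	IP   string
	Port int
}

type lbRule struct {
	Source  addr
	Targets []addr
}

type lb struct {
	Name  string
	Rules []lbRule
}

// buildClusterWideLB mirrors the "Built service ... cluster-wide LB" step: every
// VIP in the config gets a rule targeting each endpoint IP on the target port.
func buildClusterWideLB(svcKey string, cfg lbConfig) lb {
	out := lb{Name: fmt.Sprintf("Service_%s_%s_cluster", svcKey, cfg.Protocol)}
	for _, vip := range cfg.VIPs {
		rule := lbRule{Source: addr{IP: vip, Port: cfg.InPort}}
		for _, ip := range cfg.ClusterEndpoints.V4IPs {
			rule.Targets = append(rule.Targets, addr{IP: ip, Port: cfg.ClusterEndpoints.Port})
		}
		out.Rules = append(out.Rules, rule)
	}
	return out
}

func main() {
	cfg := lbConfig{
		VIPs:             []string{"10.217.5.136"},
		Protocol:         "TCP",
		InPort:           9443,
		ClusterEndpoints: lbEndpoints{Port: 9443, V4IPs: []string{"10.217.0.20"}},
	}
	fmt.Printf("%+v\n", buildClusterWideLB("openshift-machine-api/control-plane-machine-set-operator", cfg))
}
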
2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794466 28251 services_controller.go:441] Skipping no-op change for service openshift-ingress-operator/metrics 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794465 28251 services_controller.go:413] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.136"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.20"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794474 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-ingress-operator : 310.319µs 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794477 28251 services_controller.go:414] Built service openshift-machine-api/control-plane-machine-set-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794483 28251 services_controller.go:415] Built service openshift-machine-api/control-plane-machine-set-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794489 28251 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-controller 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794495 28251 services_controller.go:421] Built service openshift-machine-api/control-plane-machine-set-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794514 28251 services_controller.go:422] Built service openshift-machine-api/control-plane-machine-set-operator per-node LB []services.LB{} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794520 28251 services_controller.go:423] Built service openshift-machine-api/control-plane-machine-set-operator template LB []services.LB{} 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794526 28251 services_controller.go:424] Service openshift-machine-api/control-plane-machine-set-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794545 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-api/control-plane-machine-set-operator 2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794549 28251 services_controller.go:336] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api : 235.857µs 
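
A pattern visible throughout these entries: services whose endpoints are pod IPs (10.217.0.x in this cluster) produce cluster-wide configs attached to clusterLBGroup, while services backed by host-networked endpoints on the node IP 192.168.126.11 (kube-controller-manager, router-internal-default, machine-config-daemon) produce per-node configs whose LBs are attached to the node switch "crc" and gateway router "GR_crc". The rough sketch below assumes the only discriminator is whether an endpoint IP belongs to a node; the real controller weighs more inputs (traffic policy, node ports), so treat this purely as an illustration of the split seen in the log:

// Rough illustration (not the actual ovn-kubernetes logic): decide whether a
// service's endpoints call for a cluster-wide or a per-node load balancer config,
// assuming the only signal is whether an endpoint IP belongs to a node.
package main

import "fmt"

// nodeIPs would come from a node informer in a real controller; hard-coded here
// to match the single-node CRC cluster seen in the log.
var nodeIPs = map[string]bool{"192.168.126.11": true}

func hasHostNetworkedEndpoint(endpointIPs []string) bool {
	for _, ip := range endpointIPs {
		if nodeIPs[ip] {
			return true
		}
	}
	return false
}

func classify(svc string, endpointIPs []string) string {
	if hasHostNetworkedEndpoint(endpointIPs) {
		// Per-node LBs are the ones attached to the node's switch and gateway
		// router ("crc"/"GR_crc" in the entries above).
		return svc + ": per-node config"
	}
	return svc + ": cluster-wide config (clusterLBGroup)"
}

func main() {
	fmt.Println(classify("openshift-machine-api/control-plane-machine-set-operator", []string{"10.217.0.20"}))
	fmt.Println(classify("openshift-kube-controller-manager/kube-controller-manager", []string{"192.168.126.11"}))
}
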
2025-10-13T00:21:42.794600423+00:00 stderr F I1013 00:21:42.794569 28251 services_controller.go:332] Processing sync for service openshift-marketplace/community-operators 2025-10-13T00:21:42.794622204+00:00 stderr F I1013 00:21:42.794530 28251 services_controller.go:397] Service machine-config-controller retrieved from lister: &Service{ObjectMeta:{machine-config-controller openshift-machine-config-operator 3ff83f1a-4058-4b9e-a4fd-83f51836c82e 4847 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073207b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.794622204+00:00 stderr F I1013 00:21:42.794577 28251 services_controller.go:397] Service community-operators retrieved from lister: &Service{ObjectMeta:{community-operators openshift-marketplace daa5c70d-2f05-4c99-b062-49370cf4b7bd 6377 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:9y90X0LnOAvWXlE7PZKqH0sBNEP83PNwaUfqVB] map[] [{operators.coreos.com/v1alpha1 CatalogSource community-operators e583c58d-4569-4cab-9192-62c813516208 0xc0007323bd 0xc0007323be}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: community-operators,olm.managed: true,},ClusterIP:10.217.4.229,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.229],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.794635084+00:00 stderr F I1013 00:21:42.794627 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/community-operators are: map[TCP/grpc:{50051 [10.217.0.35] []}] 2025-10-13T00:21:42.794644184+00:00 stderr F I1013 00:21:42.794630 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-controller are: map[TCP/metrics:{9001 [10.217.0.63] []}] 2025-10-13T00:21:42.794644184+00:00 stderr F I1013 00:21:42.794636 28251 services_controller.go:413] Built 
service openshift-marketplace/community-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.229"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.35"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.794653255+00:00 stderr F I1013 00:21:42.794645 28251 services_controller.go:414] Built service openshift-marketplace/community-operators LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.794653255+00:00 stderr F I1013 00:21:42.794644 28251 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.214"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.63"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.794665145+00:00 stderr F I1013 00:21:42.794651 28251 services_controller.go:415] Built service openshift-marketplace/community-operators LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.794665145+00:00 stderr F I1013 00:21:42.794655 28251 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.794717976+00:00 stderr F I1013 00:21:42.794663 28251 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-controller LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.794729317+00:00 stderr F I1013 00:21:42.794705 28251 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.794737977+00:00 stderr F I1013 00:21:42.794731 28251 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-controller per-node LB []services.LB{} 2025-10-13T00:21:42.794746267+00:00 stderr F I1013 00:21:42.794739 28251 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-controller template LB []services.LB{} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794770 28251 services_controller.go:424] Service openshift-machine-config-operator/machine-config-controller has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794793 28251 services_controller.go:441] Skipping no-op change for service 
openshift-machine-config-operator/machine-config-controller 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794799 28251 services_controller.go:336] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator : 310.018µs 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794811 28251 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-target 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794818 28251 services_controller.go:397] Service network-check-target retrieved from lister: &Service{ObjectMeta:{network-check-target openshift-network-diagnostics 151fdab6-cca2-4880-a96c-48e605cc8d3d 2803 0 2024-06-26 12:45:59 +0000 UTC map[] map[] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000733147 0xc000733148}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: network-check-target,},ClusterIP:10.217.5.248,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.248],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794896 28251 lb_config.go:1016] Cluster endpoints for openshift-network-diagnostics/network-check-target are: map[TCP/:{8080 [10.217.0.4] []}] 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794907 28251 services_controller.go:413] Built service openshift-network-diagnostics/network-check-target LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.248"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.4"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794941 28251 services_controller.go:414] Built service openshift-network-diagnostics/network-check-target LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794947 28251 services_controller.go:415] Built service openshift-network-diagnostics/network-check-target LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794663 28251 services_controller.go:421] Built service openshift-marketplace/community-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794962 28251 services_controller.go:421] Built service openshift-network-diagnostics/network-check-target cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794980 28251 services_controller.go:422] Built service openshift-marketplace/community-operators per-node LB []services.LB{} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.794988 28251 services_controller.go:422] Built service openshift-network-diagnostics/network-check-target per-node LB []services.LB{} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.795025 28251 services_controller.go:423] Built service openshift-network-diagnostics/network-check-target template LB []services.LB{} 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.795034 28251 services_controller.go:424] Service openshift-network-diagnostics/network-check-target has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.795053 28251 services_controller.go:441] Skipping no-op change for service openshift-network-diagnostics/network-check-target 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.795057 28251 services_controller.go:336] Finished syncing service network-check-target on namespace openshift-network-diagnostics : 245.566µs 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.795068 28251 services_controller.go:332] Processing sync for service openshift-etcd-operator/metrics 2025-10-13T00:21:42.795184749+00:00 stderr F I1013 00:21:42.795074 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-etcd-operator 470dd1a6-5645-4282-97e4-ebd3fef4caae 4531 0 2024-06-26 12:39:06 +0000 UTC map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000296337 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
etcd-operator,},ClusterIP:10.217.4.182,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.182],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.795214710+00:00 stderr F I1013 00:21:42.795155 28251 lb_config.go:1016] Cluster endpoints for openshift-etcd-operator/metrics are: map[TCP/https:{8443 [10.217.0.8] []}] 2025-10-13T00:21:42.795214710+00:00 stderr F I1013 00:21:42.795193 28251 services_controller.go:413] Built service openshift-etcd-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.182"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.8"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.795214710+00:00 stderr F I1013 00:21:42.795204 28251 services_controller.go:414] Built service openshift-etcd-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.795214710+00:00 stderr F I1013 00:21:42.795210 28251 services_controller.go:415] Built service openshift-etcd-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.795266551+00:00 stderr F I1013 00:21:42.795224 28251 services_controller.go:421] Built service openshift-etcd-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.795277871+00:00 stderr F I1013 00:21:42.795268 28251 services_controller.go:422] Built service openshift-etcd-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.795286152+00:00 stderr F I1013 00:21:42.795277 28251 services_controller.go:423] Built service openshift-etcd-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.795298432+00:00 stderr F I1013 00:21:42.795284 28251 services_controller.go:424] Service openshift-etcd-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.795308902+00:00 stderr F I1013 00:21:42.795303 28251 services_controller.go:441] Skipping no-op change for service openshift-etcd-operator/metrics 2025-10-13T00:21:42.795317233+00:00 stderr F I1013 00:21:42.795308 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-etcd-operator : 239.716µs 2025-10-13T00:21:42.795325493+00:00 stderr F I1013 00:21:42.795318 28251 services_controller.go:332] Processing sync for service 
openshift-ingress-canary/ingress-canary 2025-10-13T00:21:42.795423755+00:00 stderr F I1013 00:21:42.795350 28251 services_controller.go:397] Service ingress-canary retrieved from lister: &Service{ObjectMeta:{ingress-canary openshift-ingress-canary cd641ce4-6a02-4a0c-9222-6ab30b234450 10172 0 2024-06-26 12:54:01 +0000 UTC map[ingress.openshift.io/canary:canary_controller] map[] [{apps/v1 daemonset ingress-canary b5512a08-cd29-46f9-9661-4c860338b2ca 0xc0002969a7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:8080-tcp,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},ServicePort{Name:8888-tcp,Protocol:TCP,Port:8888,TargetPort:{0 8888 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscanary.operator.openshift.io/daemonset-ingresscanary: canary_controller,},ClusterIP:10.217.4.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.795484977+00:00 stderr F I1013 00:21:42.795462 28251 lb_config.go:1016] Cluster endpoints for openshift-ingress-canary/ingress-canary are: map[TCP/8080-tcp:{8080 [10.217.0.71] []} TCP/8888-tcp:{8888 [10.217.0.71] []}] 2025-10-13T00:21:42.795496107+00:00 stderr F I1013 00:21:42.795482 28251 services_controller.go:413] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8080, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8888, clusterEndpoints:services.lbEndpoints{Port:8888, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.795561399+00:00 stderr F I1013 00:21:42.795497 28251 services_controller.go:414] Built service openshift-ingress-canary/ingress-canary LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.795561399+00:00 stderr F I1013 00:21:42.795531 28251 services_controller.go:415] Built service openshift-ingress-canary/ingress-canary LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.795573349+00:00 stderr F I1013 00:21:42.795546 28251 services_controller.go:421] Built service openshift-ingress-canary/ingress-canary cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, 
services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.795586560+00:00 stderr F I1013 00:21:42.795573 28251 services_controller.go:422] Built service openshift-ingress-canary/ingress-canary per-node LB []services.LB{} 2025-10-13T00:21:42.795586560+00:00 stderr F I1013 00:21:42.795580 28251 services_controller.go:423] Built service openshift-ingress-canary/ingress-canary template LB []services.LB{} 2025-10-13T00:21:42.795659842+00:00 stderr F I1013 00:21:42.795612 28251 services_controller.go:424] Service openshift-ingress-canary/ingress-canary has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.795659842+00:00 stderr F I1013 00:21:42.795635 28251 services_controller.go:441] Skipping no-op change for service openshift-ingress-canary/ingress-canary 2025-10-13T00:21:42.795659842+00:00 stderr F I1013 00:21:42.795640 28251 services_controller.go:336] Finished syncing service ingress-canary on namespace openshift-ingress-canary : 321.649µs 2025-10-13T00:21:42.795659842+00:00 stderr F I1013 00:21:42.795651 28251 services_controller.go:332] Processing sync for service openshift-marketplace/certified-operators 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795658 28251 services_controller.go:397] Service certified-operators retrieved from lister: &Service{ObjectMeta:{certified-operators openshift-marketplace 97052848-7332-4254-8854-60d45bb91123 6358 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators 16d5fe82-aef0-4700-8b13-e78e71d2a10d 0xc0007322dd 0xc0007322de}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.249,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.249],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795745 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/certified-operators are: map[TCP/grpc:{50051 [10.217.0.36] []}] 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795781 28251 services_controller.go:413] Built service openshift-marketplace/certified-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.249"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.36"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795793 28251 
services_controller.go:414] Built service openshift-marketplace/certified-operators LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795800 28251 services_controller.go:415] Built service openshift-marketplace/certified-operators LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795813 28251 services_controller.go:421] Built service openshift-marketplace/certified-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.36", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795837 28251 services_controller.go:422] Built service openshift-marketplace/certified-operators per-node LB []services.LB{} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795867 28251 services_controller.go:423] Built service openshift-marketplace/certified-operators template LB []services.LB{} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795875 28251 services_controller.go:424] Service openshift-marketplace/certified-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795895 28251 services_controller.go:441] Skipping no-op change for service openshift-marketplace/certified-operators 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795900 28251 services_controller.go:336] Finished syncing service certified-operators on namespace openshift-marketplace : 248.577µs 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795910 28251 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-daemon 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.795917 28251 services_controller.go:397] Service machine-config-daemon retrieved from lister: &Service{ObjectMeta:{machine-config-daemon openshift-machine-config-operator bddcb8c2-0f2d-4efa-a0ec-3e0648c24386 4880 0 2024-06-26 12:39:15 +0000 UTC map[k8s-app:machine-config-daemon] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000732167 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-daemon,},ClusterIP:10.217.5.82,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.82],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.796033 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-daemon are: map[TCP/health:{8798 [192.168.126.11] []} TCP/metrics:{9001 [192.168.126.11] []}] 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.796051 28251 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-daemon LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.796057 28251 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-daemon LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:8798, clusterEndpoints:services.lbEndpoints{Port:8798, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.796070 28251 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-daemon LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.796253408+00:00 stderr F I1013 00:21:42.796117 28251 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-daemon cluster-wide LB []services.LB{} 2025-10-13T00:21:42.796253408+00:00 stderr P I1013 00:21:42.796126 28251 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-daemon per-node LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[] 2025-10-13T00:21:42.796285498+00:00 stderr F string{"GR_crc"}, Groups:[]string(nil)}} 
2025-10-13T00:21:42.796285498+00:00 stderr F I1013 00:21:42.796156 28251 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-daemon template LB []services.LB{} 2025-10-13T00:21:42.796285498+00:00 stderr F I1013 00:21:42.796165 28251 services_controller.go:424] Service openshift-machine-config-operator/machine-config-daemon has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.796285498+00:00 stderr F I1013 00:21:42.796210 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-config-operator/machine-config-daemon 2025-10-13T00:21:42.796285498+00:00 stderr F I1013 00:21:42.796216 28251 services_controller.go:336] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator : 306.099µs 2025-10-13T00:21:42.796285498+00:00 stderr F I1013 00:21:42.796227 28251 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-operator 2025-10-13T00:21:42.796315279+00:00 stderr F I1013 00:21:42.796233 28251 services_controller.go:397] Service machine-config-operator retrieved from lister: &Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 355a1056-7d77-4a52-a1f5-8eb39c13574e 4891 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073221b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},ClusterIP:10.217.5.4,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.4],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.796382771+00:00 stderr F I1013 00:21:42.796313 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-operator are: map[TCP/metrics:{9001 [10.217.0.21] []}] 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796365 28251 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.4"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.21"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796494 28251 services_controller.go:414] Built service 
openshift-machine-config-operator/machine-config-operator LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796504 28251 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-operator LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796521 28251 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.793873 28251 services_controller.go:415] Built service openshift-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796571 28251 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-operator per-node LB []services.LB{} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796579 28251 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-operator template LB []services.LB{} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796587 28251 services_controller.go:424] Service openshift-machine-config-operator/machine-config-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796585 28251 services_controller.go:421] Built service openshift-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796605 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-config-operator/machine-config-operator 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796528 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid 
== {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796611 28251 services_controller.go:336] Finished syncing service machine-config-operator on namespace openshift-machine-config-operator : 383.09µs 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796624 28251 services_controller.go:332] Processing sync for service openshift-network-operator/metrics 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.794994 28251 services_controller.go:423] Built service openshift-marketplace/community-operators template LB []services.LB{} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796654 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-network-operator : 29.89µs 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796657 28251 services_controller.go:424] Service openshift-marketplace/community-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796663 28251 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796674 28251 services_controller.go:441] Skipping no-op change for service openshift-marketplace/community-operators 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796678 28251 services_controller.go:336] Finished syncing service community-operators on namespace openshift-marketplace : 2.110437ms 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796686 28251 services_controller.go:332] Processing sync for service openshift-apiserver/check-endpoints 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796669 28251 services_controller.go:397] Service olm-operator-metrics retrieved from lister: &Service{ObjectMeta:{olm-operator-metrics openshift-operator-lifecycle-manager f54a9b6f-c334-4276-9ca3-b290325fd276 5100 0 2024-06-26 12:39:23 +0000 UTC map[app:olm-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:olm-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00073365f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: olm-operator,},ClusterIP:10.217.5.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800870392+00:00 stderr F I1013 00:21:42.796691 28251 services_controller.go:397] 
Service check-endpoints retrieved from lister: &Service{ObjectMeta:{check-endpoints openshift-apiserver 435aa879-8965-459a-9b2a-dfd8f8924b3a 5567 0 2024-06-26 12:47:30 +0000 UTC map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002ef777 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:check-endpoints,Protocol:TCP,Port:17698,TargetPort:{0 17698 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.23,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.23],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800870392+00:00 stderr P I1013 00:21:42.796746 28251 lb_config.go:1016] Cluster endpoints for o 2025-10-13T00:21:42.800933814+00:00 stderr F penshift-apiserver/check-endpoints are: map[TCP/check-endpoints:{17698 [10.217.0.82] []}] 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796788 28251 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.14] []}] 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796797 28251 services_controller.go:413] Built service openshift-apiserver/check-endpoints LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.23"}, protocol:"TCP", inport:17698, clusterEndpoints:services.lbEndpoints{Port:17698, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796638 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796808 28251 services_controller.go:414] Built service openshift-apiserver/check-endpoints LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796818 28251 services_controller.go:415] Built service openshift-apiserver/check-endpoints LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796828 28251 services_controller.go:421] Built service openshift-apiserver/check-endpoints cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796844 28251 services_controller.go:422] Built service openshift-apiserver/check-endpoints per-node LB []services.LB{} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796850 28251 services_controller.go:423] Built service openshift-apiserver/check-endpoints template LB []services.LB{} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796856 28251 services_controller.go:424] Service openshift-apiserver/check-endpoints has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796825 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796871 28251 services_controller.go:441] Skipping no-op change for service openshift-apiserver/check-endpoints 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796876 28251 services_controller.go:336] Finished syncing service check-endpoints on namespace openshift-apiserver : 189.275µs 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796884 28251 services_controller.go:332] Processing sync for service openshift-controller-manager/controller-manager 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796889 28251 services_controller.go:397] Service controller-manager retrieved from lister: &Service{ObjectMeta:{controller-manager openshift-controller-manager 2222c363-21dc-4d99-b2be-80dc3cdf8209 4361 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:openshift-controller-manager] map[operator.openshift.io/spec-hash:b3b96749ab82e4de02ef6aa9f0e168108d09315e18d73931c12251d267378e74 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{controller-manager: 
true,},ClusterIP:10.217.4.104,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.104],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796605 28251 services_controller.go:422] Built service openshift-apiserver-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796932 28251 lb_config.go:1016] Cluster endpoints for openshift-controller-manager/controller-manager are: map[TCP/https:{8443 [10.217.0.87] []}] 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796935 28251 services_controller.go:423] Built service openshift-apiserver-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796941 28251 services_controller.go:413] Built service openshift-controller-manager/controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.104"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.87"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796946 28251 services_controller.go:424] Service openshift-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796949 28251 services_controller.go:414] Built service openshift-controller-manager/controller-manager LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796954 28251 services_controller.go:415] Built service openshift-controller-manager/controller-manager LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796966 28251 services_controller.go:441] Skipping no-op change for service openshift-apiserver-operator/metrics 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796972 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-apiserver-operator : 3.573496ms 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796965 28251 services_controller.go:421] Built service openshift-controller-manager/controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 
2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796982 28251 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-machine-webhook 2025-10-13T00:21:42.800933814+00:00 stderr F I1013 00:21:42.796983 28251 services_controller.go:422] Built service openshift-controller-manager/controller-manager per-node LB []services.LB{} 2025-10-13T00:21:42.800933814+00:00 stderr P I1013 00:21:42.796989 28251 services_controller.go:423] B 2025-10-13T00:21:42.800957714+00:00 stderr F uilt service openshift-controller-manager/controller-manager template LB []services.LB{} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.796995 28251 services_controller.go:424] Service openshift-controller-manager/controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797024 28251 services_controller.go:441] Skipping no-op change for service openshift-controller-manager/controller-manager 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797027 28251 services_controller.go:336] Finished syncing service controller-manager on namespace openshift-controller-manager : 143.653µs 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797035 28251 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-operators 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.796988 28251 services_controller.go:397] Service machine-api-operator-machine-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-machine-webhook openshift-machine-api 7dd2300f-f67e-4eb3-a3fa-1f22c230305a 4821 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-api-operator-machine-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-machine-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000297d2b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 machine-webhook},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.4.242,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.242],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797055 28251 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-machine-webhook are: map[] 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797041 28251 services_controller.go:397] Service redhat-operators retrieved from lister: &Service{ObjectMeta:{redhat-operators openshift-marketplace 
adccbaa4-8d5b-4985-9a89-66271ea4bf4e 6530 0 2024-06-26 12:47:51 +0000 UTC map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7 0xc00073276d 0xc00073276e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-operators,olm.managed: true,},ClusterIP:10.217.5.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.794236 28251 services_controller.go:424] Service openshift-kube-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797085 28251 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-operators are: map[TCP/grpc:{50051 [10.217.0.34] []}] 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797094 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-apiserver-operator/metrics 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797094 28251 services_controller.go:413] Built service openshift-marketplace/redhat-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.52"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.34"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797100 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-apiserver-operator : 3.036741ms 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797101 28251 services_controller.go:414] Built service openshift-marketplace/redhat-operators LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797106 28251 services_controller.go:415] Built service openshift-marketplace/redhat-operators LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797108 28251 services_controller.go:332] Processing sync for service openshift-multus/multus-admission-controller 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797115 28251 services_controller.go:421] Built service openshift-marketplace/redhat-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.34", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797131 28251 services_controller.go:422] Built service openshift-marketplace/redhat-operators per-node LB []services.LB{} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797136 28251 services_controller.go:423] Built service openshift-marketplace/redhat-operators template LB []services.LB{} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797142 28251 services_controller.go:424] Service openshift-marketplace/redhat-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797067 28251 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.242"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797157 28251 services_controller.go:441] Skipping no-op change for service openshift-marketplace/redhat-operators 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797162 28251 services_controller.go:336] Finished syncing service redhat-operators on namespace openshift-marketplace : 127.443µs 2025-10-13T00:21:42.800957714+00:00 stderr F I1013 00:21:42.797170 28251 services_controller.go:332] Processing sync for service openshift-etcd/etcd 2025-10-13T00:21:42.800957714+00:00 stderr P I1013 00:21:42.797114 28251 services_controller.go:397] Service multus-admission-controller retrieved from lister: &Service{ObjectMeta:{multus-admission-controller openshift-multus 35568373-18ec-4ba2-8d18-12de10aa5a3f 5005 0 2024-06-26 12:45:47 +0000 UTC map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 Network cluster 5ca11 2025-10-13T00:21:42.800991095+00:00 stderr F 404-f665-4aa0-85cf-da2f3e9c86ad 0xc000732d07 0xc000732d08}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
multus-admission-controller,},ClusterIP:10.217.4.247,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.247],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797189 28251 lb_config.go:1016] Cluster endpoints for openshift-multus/multus-admission-controller are: map[TCP/metrics:{8443 [10.217.0.32] []} TCP/webhook:{6443 [10.217.0.32] []}] 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797204 28251 services_controller.go:413] Built service openshift-multus/multus-admission-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797216 28251 services_controller.go:414] Built service openshift-multus/multus-admission-controller LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797155 28251 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797222 28251 services_controller.go:415] Built service openshift-multus/multus-admission-controller LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797225 28251 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797234 28251 ovs.go:162] Exec(39): stdout: "not connected\n" 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797240 28251 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-machine-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797254 28251 ovs.go:163] Exec(39): stderr: "" 
2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797237 28251 services_controller.go:421] Built service openshift-multus/multus-admission-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.794022 28251 services_controller.go:332] Processing sync for service openshift-authentication/oauth-openshift 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797266 28251 default_node_network_controller.go:385] Node connection status = not connected 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797174 28251 services_controller.go:397] Service etcd retrieved from lister: &Service{ObjectMeta:{etcd openshift-etcd 09198b54-ff7d-4bc0-9111-00e2f443a981 4485 0 2024-06-26 12:38:46 +0000 UTC map[k8s-app:etcd] map[operator.openshift.io/spec-hash:0685cfaa0976bfb7ba58513629369c20bf05f4fba36949e982bdb43af328f0e1 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.137,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.137],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797268 28251 services_controller.go:397] Service oauth-openshift retrieved from lister: &Service{ObjectMeta:{oauth-openshift openshift-authentication 64190ecd-229c-482a-966a-b5649b5042ed 5248 0 2024-06-26 12:47:15 +0000 UTC map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.40,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.40],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.800991095+00:00 stderr F I1013 00:21:42.797305 28251 lb_config.go:1016] Cluster endpoints for openshift-authentication/oauth-openshift are: map[TCP/https:{6443 [10.217.0.72] []}] 2025-10-13T00:21:42.800991095+00:00 stderr P I1013 00:21:42.797313 28251 services_controller.go:413] Built service openshift-authentication/oauth-openshift LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.40"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.72"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNo 2025-10-13T00:21:42.801013996+00:00 stderr F dePort:false}} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797335 28251 services_controller.go:414] Built service openshift-authentication/oauth-openshift LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.796802 28251 services_controller.go:413] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.220"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.14"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797313 28251 lb_config.go:1016] Cluster endpoints for openshift-etcd/etcd are: map[TCP/etcd:{2379 [192.168.126.11] []} TCP/etcd-metrics:{9979 [192.168.126.11] []}] 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797356 28251 services_controller.go:414] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797353 28251 services_controller.go:415] Built service openshift-authentication/oauth-openshift LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797363 28251 services_controller.go:415] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797369 28251 services_controller.go:413] Built service openshift-etcd/etcd LB cluster-wide configs []services.lbConfig(nil) 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797373 28251 services_controller.go:421] Built service openshift-authentication/oauth-openshift cluster-wide LB 
[]services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797377 28251 services_controller.go:414] Built service openshift-etcd/etcd LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:2379, clusterEndpoints:services.lbEndpoints{Port:2379, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:9979, clusterEndpoints:services.lbEndpoints{Port:9979, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797391 28251 services_controller.go:422] Built service openshift-authentication/oauth-openshift per-node LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797258 28251 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-machine-webhook per-node LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797396 28251 services_controller.go:415] Built service openshift-etcd/etcd LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797400 28251 services_controller.go:423] Built service openshift-authentication/oauth-openshift template LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797402 28251 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-machine-webhook template LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797407 28251 services_controller.go:424] Service openshift-authentication/oauth-openshift has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797409 28251 services_controller.go:424] Service openshift-machine-api/machine-api-operator-machine-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797266 28251 services_controller.go:422] Built service openshift-multus/multus-admission-controller per-node LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797422 28251 services_controller.go:441] Skipping no-op change for service openshift-authentication/oauth-openshift 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797428 28251 services_controller.go:336] Finished syncing service oauth-openshift on namespace 
openshift-authentication : 3.404281ms 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797429 28251 services_controller.go:441] Skipping no-op change for service openshift-machine-api/machine-api-operator-machine-webhook 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797430 28251 services_controller.go:423] Built service openshift-multus/multus-admission-controller template LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797429 28251 services_controller.go:421] Built service openshift-etcd/etcd cluster-wide LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797435 28251 services_controller.go:336] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api : 453.072µs 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797441 28251 services_controller.go:424] Service openshift-multus/multus-admission-controller has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797379 28251 services_controller.go:421] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797457 28251 services_controller.go:422] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797462 28251 services_controller.go:423] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB []services.LB{} 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797465 28251 services_controller.go:441] Skipping no-op change for service openshift-multus/multus-admission-controller 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797468 28251 services_controller.go:424] Service openshift-operator-lifecycle-manager/olm-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.801013996+00:00 stderr F I1013 00:21:42.797444 28251 services_controller.go:422] Built service openshift-etcd/etcd per-node LB []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-10-13T00:21:42.801013996+00:00 stderr P I1013 00:21:42.797472 28251 servic 2025-10-13T00:21:42.801040826+00:00 stderr F es_controller.go:336] Finished syncing service multus-admission-controller on namespace openshift-multus : 363.189µs 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797480 28251 services_controller.go:423] Built service openshift-etcd/etcd template LB []services.LB{} 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797490 28251 services_controller.go:424] Service openshift-etcd/etcd has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797506 28251 services_controller.go:441] Skipping no-op change for service openshift-etcd/etcd 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797510 28251 services_controller.go:336] Finished syncing service etcd on namespace openshift-etcd : 340.389µs 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797436 28251 services_controller.go:332] Processing sync for service openshift-kube-scheduler-operator/metrics 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797482 28251 services_controller.go:441] Skipping no-op change for service openshift-operator-lifecycle-manager/olm-operator-metrics 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797616 28251 services_controller.go:336] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager : 950.785µs 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.797532 28251 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 080e1aaf-7269-495b-ab74-593efe4192ec 4661 0 2024-06-26 12:39:09 +0000 UTC map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0002973ab }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
openshift-kube-scheduler-operator,},ClusterIP:10.217.4.108,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.108],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798051 28251 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler-operator/metrics are: map[TCP/https:{8443 [10.217.0.12] []}] 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798063 28251 services_controller.go:413] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.108"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.12"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798072 28251 services_controller.go:414] Built service openshift-kube-scheduler-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798078 28251 services_controller.go:415] Built service openshift-kube-scheduler-operator/metrics LB template configs []services.lbConfig(nil) 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798089 28251 services_controller.go:421] Built service openshift-kube-scheduler-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798107 28251 services_controller.go:422] Built service openshift-kube-scheduler-operator/metrics per-node LB []services.LB{} 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798113 28251 services_controller.go:423] Built service openshift-kube-scheduler-operator/metrics template LB []services.LB{} 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798119 28251 services_controller.go:424] Service openshift-kube-scheduler-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798133 28251 services_controller.go:441] Skipping no-op change for service openshift-kube-scheduler-operator/metrics 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798137 28251 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-scheduler-operator : 
701.819µs 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798213 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798259 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798279 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798807 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:c3:15:08 networks:{GoSet:[38.102.83.180/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801040826+00:00 stderr F I1013 00:21:42.798907 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801040826+00:00 stderr P I1013 00:21:42.798922 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:c3:15:08 networks:{GoSet:[38.102.83.180/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable:< 2025-10-13T00:21:42.801069097+00:00 stderr F nil> Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.799341 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.799433 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:c3:15:08]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.799464 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.799477 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:c3:15:08]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.799836 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.799852 28251 transact.go:42] Configuring OVN: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800081 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800106 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800116 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800398 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800434 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800445 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800944 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.800984 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801069097+00:00 stderr F I1013 00:21:42.801002 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert 
Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.801494609+00:00 stderr F I1013 00:21:42.801179 28251 ovs.go:162] Exec(40): stdout: "1\n" 2025-10-13T00:21:42.801494609+00:00 stderr F I1013 00:21:42.801198 28251 ovs.go:163] Exec(40): stderr: "" 2025-10-13T00:21:42.801494609+00:00 stderr F I1013 00:21:42.801214 28251 ovs.go:159] Exec(41): /usr/bin/ovs-ofctl --no-stats --no-names dump-flows br-int table=65,out_port=1 2025-10-13T00:21:42.802647830+00:00 stderr F I1013 00:21:42.802330 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.802647830+00:00 stderr F I1013 00:21:42.802397 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.802647830+00:00 stderr F I1013 00:21:42.802413 28251 transact.go:42] Configuring OVN: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.809505194+00:00 stderr F I1013 00:21:42.809415 28251 ovs.go:162] Exec(41): stdout: " cookie=0xaad46e3b, table=65, priority=100,reg15=0x2,metadata=0x3 actions=output:1\n" 2025-10-13T00:21:42.809505194+00:00 stderr F I1013 00:21:42.809455 28251 ovs.go:163] Exec(41): stderr: "" 2025-10-13T00:21:42.809505194+00:00 stderr F I1013 00:21:42.809478 28251 management-port.go:161] Management port ovn-k8s-mp0 is ready 2025-10-13T00:21:42.843711274+00:00 stderr F I1013 00:21:42.843647 28251 base_network_controller.go:458] When adding node crc for network default, found 81 pods to add to retryPods 2025-10-13T00:21:42.843711274+00:00 stderr F I1013 00:21:42.843674 28251 base_network_controller.go:464] Adding pod hostpath-provisioner/csi-hostpathplugin-hvm8g to retryPods for network default 2025-10-13T00:21:42.843711274+00:00 stderr F I1013 00:21:42.843687 28251 base_network_controller.go:464] Adding pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m to retryPods for network default 2025-10-13T00:21:42.843711274+00:00 stderr F I1013 00:21:42.843693 28251 base_network_controller.go:464] Adding pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp to retryPods for network default 2025-10-13T00:21:42.843711274+00:00 stderr F I1013 00:21:42.843699 28251 base_network_controller.go:464] Adding pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 to retryPods for network default 
2025-10-13T00:21:42.843711274+00:00 stderr F I1013 00:21:42.843705 28251 base_network_controller.go:464] Adding pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843711 28251 base_network_controller.go:464] Adding pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843718 28251 base_network_controller.go:464] Adding pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843723 28251 base_network_controller.go:464] Adding pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843729 28251 base_network_controller.go:464] Adding pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843735 28251 base_network_controller.go:464] Adding pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843741 28251 base_network_controller.go:464] Adding pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843748 28251 base_network_controller.go:464] Adding pod openshift-console/console-644bb77b49-5x5xk to retryPods for network default 2025-10-13T00:21:42.843761745+00:00 stderr F I1013 00:21:42.843754 28251 base_network_controller.go:464] Adding pod openshift-console/downloads-65476884b9-9wcvx to retryPods for network default 2025-10-13T00:21:42.843791186+00:00 stderr F I1013 00:21:42.843760 28251 base_network_controller.go:464] Adding pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z to retryPods for network default 2025-10-13T00:21:42.843791186+00:00 stderr F I1013 00:21:42.843768 28251 base_network_controller.go:464] Adding pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf to retryPods for network default 2025-10-13T00:21:42.843791186+00:00 stderr F I1013 00:21:42.843774 28251 base_network_controller.go:464] Adding pod openshift-dns-operator/dns-operator-75f687757b-nz2xb to retryPods for network default 2025-10-13T00:21:42.843791186+00:00 stderr F I1013 00:21:42.843779 28251 base_network_controller.go:464] Adding pod openshift-dns/dns-default-gbw49 to retryPods for network default 2025-10-13T00:21:42.843791186+00:00 stderr F I1013 00:21:42.843785 28251 base_network_controller.go:464] Adding pod openshift-dns/node-resolver-dn27q to retryPods for network default 2025-10-13T00:21:42.843802406+00:00 stderr F I1013 00:21:42.843792 28251 base_network_controller.go:464] Adding pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg to retryPods for network default 2025-10-13T00:21:42.843802406+00:00 stderr F I1013 00:21:42.843798 28251 base_network_controller.go:464] Adding pod openshift-etcd/etcd-crc to retryPods for network default 2025-10-13T00:21:42.843811247+00:00 stderr F I1013 00:21:42.843804 28251 base_network_controller.go:464] Adding pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv to retryPods for network default 
2025-10-13T00:21:42.843819677+00:00 stderr F I1013 00:21:42.843811 28251 base_network_controller.go:464] Adding pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 to retryPods for network default 2025-10-13T00:21:42.843819677+00:00 stderr F I1013 00:21:42.843817 28251 base_network_controller.go:464] Adding pod openshift-image-registry/node-ca-l92hr to retryPods for network default 2025-10-13T00:21:42.843828607+00:00 stderr F I1013 00:21:42.843823 28251 base_network_controller.go:464] Adding pod openshift-ingress-canary/ingress-canary-2vhcn to retryPods for network default 2025-10-13T00:21:42.843837177+00:00 stderr F I1013 00:21:42.843831 28251 base_network_controller.go:464] Adding pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t to retryPods for network default 2025-10-13T00:21:42.843845708+00:00 stderr F I1013 00:21:42.843837 28251 base_network_controller.go:464] Adding pod openshift-ingress/router-default-5c9bf7bc58-6jctv to retryPods for network default 2025-10-13T00:21:42.843845708+00:00 stderr F I1013 00:21:42.843843 28251 base_network_controller.go:464] Adding pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 to retryPods for network default 2025-10-13T00:21:42.843857748+00:00 stderr F I1013 00:21:42.843850 28251 base_network_controller.go:464] Adding pod openshift-kube-apiserver/installer-13-crc to retryPods for network default 2025-10-13T00:21:42.843866278+00:00 stderr F I1013 00:21:42.843856 28251 base_network_controller.go:464] Adding pod openshift-kube-apiserver/kube-apiserver-crc to retryPods for network default 2025-10-13T00:21:42.843866278+00:00 stderr F I1013 00:21:42.843863 28251 base_network_controller.go:464] Adding pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb to retryPods for network default 2025-10-13T00:21:42.843877098+00:00 stderr F I1013 00:21:42.843871 28251 base_network_controller.go:464] Adding pod openshift-kube-controller-manager/kube-controller-manager-crc to retryPods for network default 2025-10-13T00:21:42.843885639+00:00 stderr F I1013 00:21:42.843879 28251 base_network_controller.go:464] Adding pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 to retryPods for network default 2025-10-13T00:21:42.843893919+00:00 stderr F I1013 00:21:42.843886 28251 base_network_controller.go:464] Adding pod openshift-kube-scheduler/openshift-kube-scheduler-crc to retryPods for network default 2025-10-13T00:21:42.843902429+00:00 stderr F I1013 00:21:42.843892 28251 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr to retryPods for network default 2025-10-13T00:21:42.843902429+00:00 stderr F I1013 00:21:42.843899 28251 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv to retryPods for network default 2025-10-13T00:21:42.843911269+00:00 stderr F I1013 00:21:42.843905 28251 base_network_controller.go:464] Adding pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw to retryPods for network default 2025-10-13T00:21:42.843932170+00:00 stderr F I1013 00:21:42.843911 28251 base_network_controller.go:464] Adding pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb to retryPods for network default 2025-10-13T00:21:42.843932170+00:00 stderr F I1013 00:21:42.843917 28251 base_network_controller.go:464] Adding pod 
openshift-machine-config-operator/kube-rbac-proxy-crio-crc to retryPods for network default 2025-10-13T00:21:42.843932170+00:00 stderr F I1013 00:21:42.843924 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh to retryPods for network default 2025-10-13T00:21:42.843942760+00:00 stderr F I1013 00:21:42.843930 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-daemon-zpnhg to retryPods for network default 2025-10-13T00:21:42.843942760+00:00 stderr F I1013 00:21:42.843937 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm to retryPods for network default 2025-10-13T00:21:42.843951450+00:00 stderr F I1013 00:21:42.843944 28251 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-server-v65wr to retryPods for network default 2025-10-13T00:21:42.843959811+00:00 stderr F I1013 00:21:42.843954 28251 base_network_controller.go:464] Adding pod openshift-marketplace/certified-operators-cms8q to retryPods for network default 2025-10-13T00:21:42.844012342+00:00 stderr F I1013 00:21:42.843983 28251 base_network_controller.go:464] Adding pod openshift-marketplace/community-operators-gjctm to retryPods for network default 2025-10-13T00:21:42.844012342+00:00 stderr F I1013 00:21:42.843994 28251 base_network_controller.go:464] Adding pod openshift-marketplace/marketplace-operator-8b455464d-29pzg to retryPods for network default 2025-10-13T00:21:42.844012342+00:00 stderr F I1013 00:21:42.844001 28251 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-marketplace-crk87 to retryPods for network default 2025-10-13T00:21:42.844012342+00:00 stderr F I1013 00:21:42.844007 28251 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-operators-hkptr to retryPods for network default 2025-10-13T00:21:42.844029092+00:00 stderr F I1013 00:21:42.844014 28251 base_network_controller.go:464] Adding pod openshift-multus/multus-additional-cni-plugins-bzj2p to retryPods for network default 2025-10-13T00:21:42.844029092+00:00 stderr F I1013 00:21:42.844022 28251 base_network_controller.go:464] Adding pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc to retryPods for network default 2025-10-13T00:21:42.844038143+00:00 stderr F I1013 00:21:42.844028 28251 base_network_controller.go:464] Adding pod openshift-multus/multus-q88th to retryPods for network default 2025-10-13T00:21:42.844038143+00:00 stderr F I1013 00:21:42.844035 28251 base_network_controller.go:464] Adding pod openshift-multus/network-metrics-daemon-qdfr4 to retryPods for network default 2025-10-13T00:21:42.844046883+00:00 stderr F I1013 00:21:42.844041 28251 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 to retryPods for network default 2025-10-13T00:21:42.844055253+00:00 stderr F I1013 00:21:42.844048 28251 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-target-v54bt to retryPods for network default 2025-10-13T00:21:42.844063583+00:00 stderr F I1013 00:21:42.844054 28251 base_network_controller.go:464] Adding pod openshift-network-node-identity/network-node-identity-7xghp to retryPods for network default 2025-10-13T00:21:42.844063583+00:00 stderr F I1013 00:21:42.844060 28251 base_network_controller.go:464] Adding pod openshift-network-operator/iptables-alerter-wwpnd to retryPods 
for network default 2025-10-13T00:21:42.844074404+00:00 stderr F I1013 00:21:42.844068 28251 base_network_controller.go:464] Adding pod openshift-network-operator/network-operator-767c585db5-zd56b to retryPods for network default 2025-10-13T00:21:42.844082794+00:00 stderr F I1013 00:21:42.844074 28251 base_network_controller.go:464] Adding pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd to retryPods for network default 2025-10-13T00:21:42.844082794+00:00 stderr F I1013 00:21:42.844080 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf to retryPods for network default 2025-10-13T00:21:42.844093584+00:00 stderr F I1013 00:21:42.844087 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh to retryPods for network default 2025-10-13T00:21:42.844102014+00:00 stderr F I1013 00:21:42.844093 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 to retryPods for network default 2025-10-13T00:21:42.844102014+00:00 stderr F I1013 00:21:42.844100 28251 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz to retryPods for network default 2025-10-13T00:21:42.844110775+00:00 stderr F I1013 00:21:42.844106 28251 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b to retryPods for network default 2025-10-13T00:21:42.844119115+00:00 stderr F I1013 00:21:42.844112 28251 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-node-wzh74 to retryPods for network default 2025-10-13T00:21:42.844127565+00:00 stderr F I1013 00:21:42.844118 28251 base_network_controller.go:464] Adding pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs to retryPods for network default 2025-10-13T00:21:42.844127565+00:00 stderr F I1013 00:21:42.844124 28251 base_network_controller.go:464] Adding pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz to retryPods for network default 2025-10-13T00:21:42.844139465+00:00 stderr F I1013 00:21:42.844131 28251 base_network_controller.go:464] Adding pod openshift-service-ca/service-ca-666f99b6f-kk8kg to retryPods for network default 2025-10-13T00:21:42.844165936+00:00 stderr F I1013 00:21:42.844147 28251 obj_retry.go:233] Iterate retry objects requested (resource *v1.Pod) 2025-10-13T00:21:42.844208857+00:00 stderr F I1013 00:21:42.844184 28251 obj_retry.go:555] Update event received for resource *factory.egressFwNode, old object is equal to new: true 2025-10-13T00:21:42.844208857+00:00 stderr F I1013 00:21:42.844196 28251 obj_retry.go:555] Update event received for resource *factory.egressNode, old object is equal to new: false 2025-10-13T00:21:42.844219108+00:00 stderr F I1013 00:21:42.844209 28251 obj_retry.go:607] Update event received for *factory.egressNode crc 2025-10-13T00:21:42.844301830+00:00 stderr F I1013 00:21:42.844254 28251 obj_retry.go:427] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod 2025-10-13T00:21:42.844372902+00:00 stderr F I1013 00:21:42.844294 28251 obj_retry.go:402] Going to retry *v1.Pod resource setup for 66 objects: [openshift-machine-config-operator/machine-config-server-v65wr openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz openshift-dns/dns-default-gbw49 openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 
openshift-image-registry/node-ca-l92hr openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-admission-controller-6c7c885997-4hbbc openshift-network-diagnostics/network-check-target-v54bt openshift-console/downloads-65476884b9-9wcvx openshift-image-registry/image-registry-75b7bb6564-2mwg6 openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t openshift-marketplace/marketplace-operator-8b455464d-29pzg openshift-multus/multus-q88th openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh openshift-console/console-644bb77b49-5x5xk openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz openshift-apiserver/apiserver-7fc54b8dd7-d2bhp openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 openshift-service-ca/service-ca-666f99b6f-kk8kg openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 openshift-dns/node-resolver-dn27q openshift-ingress-canary/ingress-canary-2vhcn openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 openshift-ovn-kubernetes/ovnkube-node-wzh74 hostpath-provisioner/csi-hostpathplugin-hvm8g openshift-dns-operator/dns-operator-75f687757b-nz2xb openshift-etcd/etcd-crc openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh openshift-marketplace/certified-operators-cms8q openshift-marketplace/redhat-operators-hkptr openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/machine-config-daemon-zpnhg openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv openshift-multus/multus-additional-cni-plugins-bzj2p openshift-marketplace/community-operators-gjctm openshift-network-node-identity/network-node-identity-7xghp openshift-network-operator/network-operator-767c585db5-zd56b openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv openshift-kube-apiserver/installer-13-crc openshift-multus/network-metrics-daemon-qdfr4 openshift-network-operator/iptables-alerter-wwpnd openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-marketplace/redhat-marketplace-crk87 openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 
openshift-controller-manager/controller-manager-778975cc4f-x5vcf openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 openshift-ingress/router-default-5c9bf7bc58-6jctv] 2025-10-13T00:21:42.844443634+00:00 stderr F I1013 00:21:42.844424 28251 obj_retry.go:411] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources 2025-10-13T00:21:42.844455614+00:00 stderr F I1013 00:21:42.844449 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.844482045+00:00 stderr F I1013 00:21:42.844461 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.844491135+00:00 stderr F I1013 00:21:42.844478 28251 ovn.go:134] Ensuring zone local for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in node crc 2025-10-13T00:21:42.844491135+00:00 stderr F I1013 00:21:42.844481 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.844514766+00:00 stderr F I1013 00:21:42.844497 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.844514766+00:00 stderr F I1013 00:21:42.844508 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.844524516+00:00 stderr F I1013 00:21:42.844499 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-q88th 2025-10-13T00:21:42.844533686+00:00 stderr F I1013 00:21:42.844521 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.844533686+00:00 stderr F I1013 00:21:42.844528 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.844533686+00:00 stderr F I1013 00:21:42.844528 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.844543246+00:00 stderr F I1013 00:21:42.844518 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.844543246+00:00 stderr F I1013 00:21:42.844538 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.844552337+00:00 stderr F I1013 00:21:42.844494 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.844552337+00:00 stderr F I1013 00:21:42.844546 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.844561737+00:00 stderr F I1013 00:21:42.844551 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.844561737+00:00 stderr F I1013 00:21:42.844554 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.844575117+00:00 stderr F I1013 00:21:42.844560 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc 2025-10-13T00:21:42.844575117+00:00 stderr F I1013 00:21:42.844564 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.844575117+00:00 stderr F I1013 
00:21:42.844566 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.844575117+00:00 stderr F I1013 00:21:42.844568 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s) 2025-10-13T00:21:42.844585897+00:00 stderr F I1013 00:21:42.844575 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.844585897+00:00 stderr F I1013 00:21:42.844575 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.844585897+00:00 stderr F I1013 00:21:42.844538 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-q88th 2025-10-13T00:21:42.844595308+00:00 stderr F I1013 00:21:42.844579 28251 ovn.go:134] Ensuring zone local for Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in node crc 2025-10-13T00:21:42.844595308+00:00 stderr F I1013 00:21:42.844588 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-q88th in node crc 2025-10-13T00:21:42.844604358+00:00 stderr F I1013 00:21:42.844593 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.844604358+00:00 stderr F I1013 00:21:42.844595 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-q88th after 0 failed attempt(s) 2025-10-13T00:21:42.844604358+00:00 stderr F I1013 00:21:42.844598 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.844613838+00:00 stderr F I1013 00:21:42.844600 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.844613838+00:00 stderr F I1013 00:21:42.844604 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in node crc 2025-10-13T00:21:42.844613838+00:00 stderr F I1013 00:21:42.844609 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.844622998+00:00 stderr F I1013 00:21:42.844612 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.844622998+00:00 stderr F I1013 00:21:42.844601 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-q88th 2025-10-13T00:21:42.844631439+00:00 stderr F I1013 00:21:42.844623 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.844631439+00:00 stderr F I1013 00:21:42.844569 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.844639959+00:00 stderr F I1013 00:21:42.844631 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-server-v65wr in node crc 2025-10-13T00:21:42.844639959+00:00 stderr F I1013 00:21:42.844636 28251 ovn.go:134] Ensuring zone local for Pod openshift-ingress-canary/ingress-canary-2vhcn in node crc 2025-10-13T00:21:42.844648689+00:00 stderr F I1013 00:21:42.844637 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr after 0 failed attempt(s) 2025-10-13T00:21:42.844648689+00:00 stderr F I1013 
00:21:42.844642 28251 base_network_controller_pods.go:476] [default/openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] creating logical port openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb for pod on switch crc 2025-10-13T00:21:42.844648689+00:00 stderr F I1013 00:21:42.844644 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-server-v65wr 2025-10-13T00:21:42.844661639+00:00 stderr F I1013 00:21:42.844645 28251 base_network_controller_pods.go:476] [default/openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] creating logical port openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m for pod on switch crc 2025-10-13T00:21:42.844661639+00:00 stderr F I1013 00:21:42.844616 28251 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 in node crc 2025-10-13T00:21:42.844689170+00:00 stderr F I1013 00:21:42.844671 28251 base_network_controller_pods.go:476] [default/openshift-ingress-canary/ingress-canary-2vhcn] creating logical port openshift-ingress-canary_ingress-canary-2vhcn for pod on switch crc 2025-10-13T00:21:42.844698940+00:00 stderr F I1013 00:21:42.844692 28251 base_network_controller_pods.go:476] [default/openshift-image-registry/image-registry-75b7bb6564-2mwg6] creating logical port openshift-image-registry_image-registry-75b7bb6564-2mwg6 for pod on switch crc 2025-10-13T00:21:42.844908696+00:00 stderr F I1013 00:21:42.844851 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.844920326+00:00 stderr F I1013 00:21:42.844902 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.844947127+00:00 stderr F I1013 00:21:42.844894 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.844956937+00:00 stderr F I1013 00:21:42.844921 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:fe9b4942-29e7-4ef1-85c7-1a2153128dc7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0840ccdb-85bb-4744-9f7a-239a647cc044}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.844983288+00:00 stderr F I1013 00:21:42.844957 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.844992648+00:00 stderr F I1013 00:21:42.844968 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.844992648+00:00 stderr F I1013 00:21:42.844973 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845005859+00:00 stderr F I1013 00:21:42.844543 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.845014289+00:00 stderr F I1013 00:21:42.845004 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.845159643+00:00 stderr F I1013 00:21:42.845131 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.845159643+00:00 stderr F I1013 00:21:42.845140 28251 ovn.go:134] Ensuring zone local for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in node crc 2025-10-13T00:21:42.845159643+00:00 stderr F I1013 00:21:42.845146 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.845159643+00:00 stderr F I1013 00:21:42.845151 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 after 0 failed attempt(s) 2025-10-13T00:21:42.845173623+00:00 stderr F I1013 00:21:42.845156 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in node crc 2025-10-13T00:21:42.845173623+00:00 stderr F I1013 00:21:42.845161 28251 default_network_controller.go:699] Recording success event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-10-13T00:21:42.845182773+00:00 stderr F I1013 00:21:42.845160 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.845191324+00:00 stderr F I1013 00:21:42.845179 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] creating logical port openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 for pod on switch crc 2025-10-13T00:21:42.845191324+00:00 stderr F I1013 00:21:42.845186 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.845220764+00:00 stderr F I1013 00:21:42.845193 28251 
obj_retry.go:296] Retry object setup: *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.845220764+00:00 stderr F I1013 00:21:42.845211 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.845231545+00:00 stderr F I1013 00:21:42.845217 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.845231545+00:00 stderr F I1013 00:21:42.845224 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.845240945+00:00 stderr F I1013 00:21:42.845227 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.845240945+00:00 stderr F I1013 00:21:42.845235 28251 ovn.go:134] Ensuring zone local for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in node crc 2025-10-13T00:21:42.845254445+00:00 stderr F I1013 00:21:42.845245 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.845266126+00:00 stderr F I1013 00:21:42.845246 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845274946+00:00 stderr F I1013 00:21:42.845260 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.845284046+00:00 stderr F I1013 00:21:42.845271 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.845284046+00:00 stderr F I1013 00:21:42.845274 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.845293876+00:00 stderr F I1013 00:21:42.845280 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.845293876+00:00 stderr F I1013 00:21:42.845286 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.845303737+00:00 stderr F I1013 00:21:42.845261 28251 base_network_controller_pods.go:476] [default/openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] creating logical port openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz for pod on switch crc 2025-10-13T00:21:42.845303737+00:00 stderr F I1013 00:21:42.845294 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.845303737+00:00 stderr F I1013 00:21:42.845299 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.845314057+00:00 stderr F I1013 00:21:42.845302 28251 ovn.go:134] Ensuring zone local for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in node crc 2025-10-13T00:21:42.845314057+00:00 stderr F I1013 00:21:42.845306 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.845327127+00:00 stderr F I1013 00:21:42.845313 28251 ovn.go:134] Ensuring zone local 
for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in node crc 2025-10-13T00:21:42.845327127+00:00 stderr F I1013 00:21:42.845314 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.845336378+00:00 stderr F I1013 00:21:42.845324 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.845336378+00:00 stderr F I1013 00:21:42.845330 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in node crc 2025-10-13T00:21:42.845358698+00:00 stderr F I1013 00:21:42.844557 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.845358698+00:00 stderr F I1013 00:21:42.845336 28251 base_network_controller_pods.go:476] [default/openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] creating logical port openshift-etcd-operator_etcd-operator-768d5b5d86-722mg for pod on switch crc 2025-10-13T00:21:42.845368188+00:00 stderr F I1013 00:21:42.845356 28251 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dn27q in node crc 2025-10-13T00:21:42.845381739+00:00 stderr F I1013 00:21:42.845362 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.845381739+00:00 stderr F I1013 00:21:42.845368 28251 base_network_controller_pods.go:476] [default/openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] creating logical port openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb for pod on switch crc 2025-10-13T00:21:42.845381739+00:00 stderr F I1013 00:21:42.844486 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv after 0 failed attempt(s) 2025-10-13T00:21:42.845381739+00:00 stderr F I1013 00:21:42.845377 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.845395919+00:00 stderr F I1013 00:21:42.845382 28251 default_network_controller.go:699] Recording success event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-10-13T00:21:42.845395919+00:00 stderr F I1013 00:21:42.845386 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in node crc 2025-10-13T00:21:42.845395919+00:00 stderr F I1013 00:21:42.845364 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns/node-resolver-dn27q after 0 failed attempt(s) 2025-10-13T00:21:42.845405139+00:00 stderr F I1013 00:21:42.845398 28251 default_network_controller.go:699] Recording success event on pod openshift-dns/node-resolver-dn27q 2025-10-13T00:21:42.845414430+00:00 stderr F I1013 00:21:42.845350 28251 base_network_controller_pods.go:476] [default/openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] creating logical port openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b for pod on switch crc 2025-10-13T00:21:42.845414430+00:00 stderr F I1013 00:21:42.845408 28251 base_network_controller_pods.go:476] 
[default/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] creating logical port openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr for pod on switch crc 2025-10-13T00:21:42.845445410+00:00 stderr F I1013 00:21:42.845408 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845491842+00:00 stderr F I1013 00:21:42.845465 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845542693+00:00 stderr F I1013 00:21:42.845509 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845542693+00:00 stderr F I1013 00:21:42.845520 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845575804+00:00 stderr F I1013 00:21:42.845546 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845607275+00:00 stderr F I1013 00:21:42.845547 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845607275+00:00 stderr F I1013 00:21:42.845574 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 
10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845617495+00:00 stderr F I1013 00:21:42.845608 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.845617495+00:00 stderr F I1013 00:21:42.845594 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845626675+00:00 stderr F I1013 00:21:42.845621 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.845635566+00:00 stderr F I1013 00:21:42.845626 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.845635566+00:00 stderr F I1013 00:21:42.845632 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-marketplace-crk87 in node crc 2025-10-13T00:21:42.845645216+00:00 stderr F I1013 00:21:42.845625 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845653796+00:00 stderr F I1013 00:21:42.845619 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845653796+00:00 stderr F I1013 00:21:42.845637 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845666976+00:00 stderr F I1013 00:21:42.845652 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-marketplace-crk87] creating logical port openshift-marketplace_redhat-marketplace-crk87 for pod on switch crc 2025-10-13T00:21:42.845666976+00:00 stderr F I1013 00:21:42.845637 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845732108+00:00 
stderr F I1013 00:21:42.845696 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845732108+00:00 stderr F I1013 00:21:42.845230 28251 ovn.go:134] Ensuring zone local for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in node crc 2025-10-13T00:21:42.845732108+00:00 stderr F I1013 00:21:42.845713 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845760439+00:00 stderr F I1013 00:21:42.845746 28251 base_network_controller_pods.go:476] [default/openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] creating logical port openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc for pod on switch crc 2025-10-13T00:21:42.845795370+00:00 stderr F I1013 00:21:42.845764 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845795370+00:00 stderr F I1013 00:21:42.845591 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845849131+00:00 stderr F I1013 00:21:42.845816 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:a783d910-85f5-4f52-8831-6bae329a70fa requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {15739438-58a4-46c0-bcfe-2d9fdf5c37a4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845874832+00:00 stderr F I1013 00:21:42.845852 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845912893+00:00 stderr F I1013 00:21:42.845889 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.845912893+00:00 
stderr F I1013 00:21:42.845907 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.845921833+00:00 stderr F I1013 00:21:42.845916 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.845929834+00:00 stderr F I1013 00:21:42.845921 28251 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in node crc 2025-10-13T00:21:42.845952514+00:00 stderr F I1013 00:21:42.845938 28251 base_network_controller_pods.go:476] [default/openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] creating logical port openshift-console-operator_console-operator-5dbbc74dc9-cp5cd for pod on switch crc 2025-10-13T00:21:42.846023736+00:00 stderr F I1013 00:21:42.845990 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846068317+00:00 stderr F I1013 00:21:42.846044 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846111828+00:00 stderr F I1013 00:21:42.846085 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846142279+00:00 stderr F I1013 00:21:42.846069 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846142279+00:00 stderr F I1013 00:21:42.845281 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846156300+00:00 stderr F I1013 00:21:42.846138 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.846156300+00:00 stderr F I1013 00:21:42.846146 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.846156300+00:00 stderr F I1013 00:21:42.846152 28251 ovn.go:134] Ensuring zone local for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in node crc 2025-10-13T00:21:42.846181790+00:00 stderr F I1013 00:21:42.846166 28251 base_network_controller_pods.go:476] [default/openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] creating logical port openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg for pod on switch crc 2025-10-13T00:21:42.846181790+00:00 stderr F I1013 00:21:42.846156 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846191331+00:00 stderr F I1013 00:21:42.846173 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1c699f2-1900-4069-955e-c9c52ca0d0fd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846223291+00:00 stderr F I1013 00:21:42.846200 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f1c699f2-1900-4069-955e-c9c52ca0d0fd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846223291+00:00 stderr F I1013 00:21:42.846204 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846232072+00:00 stderr F I1013 00:21:42.846220 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.846240572+00:00 stderr F I1013 00:21:42.846230 28251 obj_retry.go:358] 
Adding new object: *v1.Pod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.846252132+00:00 stderr F I1013 00:21:42.846237 28251 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-l92hr in node crc 2025-10-13T00:21:42.846252132+00:00 stderr F I1013 00:21:42.846244 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/node-ca-l92hr after 0 failed attempt(s) 2025-10-13T00:21:42.846261232+00:00 stderr F I1013 00:21:42.846250 28251 default_network_controller.go:699] Recording success event on pod openshift-image-registry/node-ca-l92hr 2025-10-13T00:21:42.846261232+00:00 stderr F I1013 00:21:42.846218 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:a783d910-85f5-4f52-8831-6bae329a70fa requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {15739438-58a4-46c0-bcfe-2d9fdf5c37a4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:15739438-58a4-46c0-bcfe-2d9fdf5c37a4}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1c699f2-1900-4069-955e-c9c52ca0d0fd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f1c699f2-1900-4069-955e-c9c52ca0d0fd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846261232+00:00 stderr F I1013 00:21:42.844557 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.846270553+00:00 stderr F I1013 00:21:42.846150 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846308284+00:00 stderr F I1013 00:21:42.846225 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846335044+00:00 stderr F I1013 00:21:42.846303 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846335044+00:00 stderr F I1013 00:21:42.846298 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846388216+00:00 stderr F I1013 00:21:42.844546 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in node crc 2025-10-13T00:21:42.846401826+00:00 stderr F I1013 00:21:42.846380 28251 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846410746+00:00 stderr F I1013 00:21:42.846401 28251 base_network_controller_pods.go:476] [default/openshift-multus/multus-admission-controller-6c7c885997-4hbbc] creating logical port openshift-multus_multus-admission-controller-6c7c885997-4hbbc for pod on switch crc 2025-10-13T00:21:42.846410746+00:00 stderr F I1013 00:21:42.846389 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846452108+00:00 stderr F I1013 00:21:42.846427 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846452108+00:00 stderr F I1013 00:21:42.845768 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846452108+00:00 stderr F I1013 00:21:42.844515 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.846463618+00:00 stderr F I1013 00:21:42.846455 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc 2025-10-13T00:21:42.846472368+00:00 stderr F I1013 00:21:42.846460 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s) 2025-10-13T00:21:42.846472368+00:00 stderr F I1013 00:21:42.846466 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-10-13T00:21:42.846481428+00:00 stderr F I1013 00:21:42.845881 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846527110+00:00 stderr F I1013 00:21:42.846497 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846566991+00:00 stderr F I1013 00:21:42.846543 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846597001+00:00 stderr F I1013 00:21:42.846423 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846655303+00:00 stderr F I1013 00:21:42.846615 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846696184+00:00 stderr F I1013 00:21:42.846667 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846738125+00:00 stderr F I1013 00:21:42.846435 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.846738125+00:00 stderr F I1013 00:21:42.845038 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.846805047+00:00 stderr F I1013 00:21:42.846736 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.847050704+00:00 stderr F I1013 00:21:42.845271 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.847092945+00:00 stderr F I1013 00:21:42.847062 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.847092945+00:00 stderr F I1013 00:21:42.847061 28251 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in node crc 2025-10-13T00:21:42.847102435+00:00 stderr F I1013 00:21:42.847078 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.847128016+00:00 stderr F I1013 00:21:42.847106 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.847136706+00:00 stderr F I1013 00:21:42.847128 28251 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 in node crc 2025-10-13T00:21:42.847147166+00:00 stderr F I1013 00:21:42.847138 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 after 0 failed attempt(s) 2025-10-13T00:21:42.847159687+00:00 stderr F I1013 00:21:42.847151 28251 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:42.847168347+00:00 stderr F I1013 00:21:42.847145 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.847168347+00:00 stderr F I1013 00:21:42.847089 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-767c585db5-zd56b in node crc 2025-10-13T00:21:42.847176587+00:00 stderr F I1013 00:21:42.847166 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.847185097+00:00 stderr F I1013 00:21:42.847174 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b after 0 failed attempt(s) 2025-10-13T00:21:42.847193887+00:00 stderr F I1013 00:21:42.847183 28251 default_network_controller.go:699] Recording success event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-10-13T00:21:42.847193887+00:00 stderr F I1013 00:21:42.847185 28251 base_network_controller_pods.go:476] [default/openshift-controller-manager/controller-manager-778975cc4f-x5vcf] creating logical port openshift-controller-manager_controller-manager-778975cc4f-x5vcf for pod on switch crc 2025-10-13T00:21:42.847203888+00:00 stderr F I1013 00:21:42.847192 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in node crc 2025-10-13T00:21:42.847241549+00:00 stderr F I1013 00:21:42.847220 28251 obj_retry.go:296] Retry object setup: *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.847241549+00:00 stderr F I1013 00:21:42.847237 28251 base_network_controller_pods.go:476] 
[default/openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] creating logical port openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv for pod on switch crc 2025-10-13T00:21:42.847250829+00:00 stderr F I1013 00:21:42.847240 28251 obj_retry.go:358] Adding new object: *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.847261319+00:00 stderr F I1013 00:21:42.847252 28251 ovn.go:134] Ensuring zone local for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in node crc 2025-10-13T00:21:42.847357962+00:00 stderr F I1013 00:21:42.847251 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847357962+00:00 stderr F I1013 00:21:42.847307 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.847357962+00:00 stderr F I1013 00:21:42.845197 28251 ovn.go:134] Ensuring zone local for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in node crc 2025-10-13T00:21:42.847375382+00:00 stderr F I1013 00:21:42.847321 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.847384383+00:00 stderr F I1013 00:21:42.847376 28251 ovn.go:134] Ensuring zone local for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in node crc 2025-10-13T00:21:42.847407583+00:00 stderr F I1013 00:21:42.847390 28251 base_network_controller_pods.go:476] [default/openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] creating logical port openshift-apiserver_apiserver-7fc54b8dd7-d2bhp for pod on switch crc 2025-10-13T00:21:42.847427544+00:00 stderr F I1013 00:21:42.847413 28251 base_network_controller_pods.go:476] [default/openshift-dns-operator/dns-operator-75f687757b-nz2xb] creating logical port openshift-dns-operator_dns-operator-75f687757b-nz2xb for pod on switch crc 2025-10-13T00:21:42.847440154+00:00 stderr F I1013 00:21:42.847392 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847661700+00:00 stderr F I1013 00:21:42.847595 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847672520+00:00 stderr F I1013 00:21:42.847449 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847726982+00:00 stderr F I1013 00:21:42.845708 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847726982+00:00 stderr F I1013 00:21:42.847683 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847726982+00:00 stderr F I1013 00:21:42.847294 28251 base_network_controller_pods.go:476] [default/hostpath-provisioner/csi-hostpathplugin-hvm8g] creating logical port hostpath-provisioner_csi-hostpathplugin-hvm8g for pod on switch crc 2025-10-13T00:21:42.847741182+00:00 stderr F I1013 00:21:42.847720 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.847741182+00:00 stderr F I1013 00:21:42.847732 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.847749602+00:00 stderr F I1013 00:21:42.847739 28251 ovn.go:134] Ensuring zone local for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in node crc 2025-10-13T00:21:42.847779463+00:00 stderr F I1013 00:21:42.847765 28251 base_network_controller_pods.go:476] [default/openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] creating logical port openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs for pod on switch crc 2025-10-13T00:21:42.847797524+00:00 stderr F I1013 00:21:42.845087 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.847797524+00:00 stderr F I1013 00:21:42.847745 28251 model_client.go:381] 
Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847805974+00:00 stderr F I1013 00:21:42.845259 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-7xghp in node crc 2025-10-13T00:21:42.847814664+00:00 stderr F I1013 00:21:42.847801 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.847851235+00:00 stderr F I1013 00:21:42.847832 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.847851235+00:00 stderr F I1013 00:21:42.845012 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.847851235+00:00 stderr F I1013 00:21:42.847843 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.847863546+00:00 stderr F I1013 00:21:42.847857 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in node crc 2025-10-13T00:21:42.847895486+00:00 stderr F I1013 00:21:42.847881 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-7xghp after 0 failed attempt(s) 2025-10-13T00:21:42.847895486+00:00 stderr F I1013 00:21:42.847891 28251 default_network_controller.go:699] Recording success event on pod openshift-network-node-identity/network-node-identity-7xghp 2025-10-13T00:21:42.847914417+00:00 stderr F I1013 00:21:42.847884 28251 base_network_controller_pods.go:476] [default/openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] creating logical port openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 for pod on switch crc 2025-10-13T00:21:42.847914417+00:00 stderr F I1013 00:21:42.845097 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.847926657+00:00 stderr F I1013 00:21:42.847911 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.847934917+00:00 stderr F I1013 00:21:42.847925 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.847943668+00:00 stderr F I1013 00:21:42.847933 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.847943668+00:00 stderr F I1013 00:21:42.847934 28251 obj_retry.go:358] Adding new 
object: *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.847952508+00:00 stderr F I1013 00:21:42.847941 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in node crc 2025-10-13T00:21:42.847952508+00:00 stderr F I1013 00:21:42.847943 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.847952508+00:00 stderr F I1013 00:21:42.847943 28251 ovn.go:134] Ensuring zone local for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in node crc 2025-10-13T00:21:42.847961858+00:00 stderr F I1013 00:21:42.847951 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in node crc 2025-10-13T00:21:42.847961858+00:00 stderr F I1013 00:21:42.847952 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 after 0 failed attempt(s) 2025-10-13T00:21:42.847961858+00:00 stderr F I1013 00:21:42.847948 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg after 0 failed attempt(s) 2025-10-13T00:21:42.847961858+00:00 stderr F I1013 00:21:42.847958 28251 default_network_controller.go:699] Recording success event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-10-13T00:21:42.847971748+00:00 stderr F I1013 00:21:42.847962 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-10-13T00:21:42.847971748+00:00 stderr F I1013 00:21:42.844534 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in node crc 2025-10-13T00:21:42.847982469+00:00 stderr F I1013 00:21:42.845307 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.847982469+00:00 stderr F I1013 00:21:42.847973 28251 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] creating logical port openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh for pod on switch crc 2025-10-13T00:21:42.847993329+00:00 stderr F I1013 00:21:42.847986 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.848001959+00:00 stderr F I1013 00:21:42.847994 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-wwpnd in node crc 2025-10-13T00:21:42.848010279+00:00 stderr F I1013 00:21:42.847999 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-wwpnd after 0 failed attempt(s) 2025-10-13T00:21:42.848020740+00:00 stderr F I1013 00:21:42.848015 28251 default_network_controller.go:699] Recording success event on pod openshift-network-operator/iptables-alerter-wwpnd 2025-10-13T00:21:42.848030980+00:00 stderr F I1013 00:21:42.848005 28251 base_network_controller_pods.go:476] [default/openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] creating logical port openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw for pod on switch crc 2025-10-13T00:21:42.848081461+00:00 stderr F I1013 00:21:42.848035 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate 
Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848128323+00:00 stderr F I1013 00:21:42.848051 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848137393+00:00 stderr F I1013 00:21:42.848113 28251 port_cache.go:96] port-cache(openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb): added port &{name:openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb uuid:be2fa59f-4cec-4742-a4bd-dcd0913d1422 logicalSwitch:crc ips:[0xc000ec8cc0] mac:[10 88 10 217 0 15] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.15/23] and MAC: 0a:58:0a:d9:00:0f 2025-10-13T00:21:42.848168574+00:00 stderr F I1013 00:21:42.848134 28251 pods.go:220] [openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] addLogicalPort took 2.771895ms, libovsdb time 1.683305ms 2025-10-13T00:21:42.848168574+00:00 stderr F I1013 00:21:42.848158 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb after 0 failed attempt(s) 2025-10-13T00:21:42.848168574+00:00 stderr F I1013 00:21:42.848163 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-10-13T00:21:42.848200115+00:00 stderr F I1013 00:21:42.848181 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 2025-10-13T00:21:42.848200115+00:00 stderr F I1013 00:21:42.844577 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc 2025-10-13T00:21:42.848208845+00:00 stderr F I1013 00:21:42.848181 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848216675+00:00 stderr F I1013 00:21:42.848186 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848234906+00:00 stderr F I1013 00:21:42.848184 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848234906+00:00 stderr F I1013 00:21:42.845125 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.848247046+00:00 stderr F I1013 00:21:42.848186 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848247046+00:00 stderr F I1013 00:21:42.848243 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.848277737+00:00 stderr F I1013 00:21:42.848257 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in node crc 2025-10-13T00:21:42.848277737+00:00 stderr F I1013 00:21:42.848251 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848289147+00:00 stderr F I1013 00:21:42.848283 28251 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] creating logical port openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 for pod on switch crc 2025-10-13T00:21:42.848313418+00:00 stderr F I1013 00:21:42.848252 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848362149+00:00 stderr F I1013 00:21:42.848319 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848403600+00:00 stderr F I1013 00:21:42.848366 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848403600+00:00 stderr F I1013 00:21:42.845145 
28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.848403600+00:00 stderr F I1013 00:21:42.848381 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848453231+00:00 stderr F I1013 00:21:42.848412 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-10-13T00:21:42.848453231+00:00 stderr F I1013 00:21:42.848432 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in node crc 2025-10-13T00:21:42.848519443+00:00 stderr F I1013 00:21:42.848489 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] creating logical port openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz for pod on switch crc 2025-10-13T00:21:42.848519443+00:00 stderr F I1013 00:21:42.845121 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.848519443+00:00 stderr F I1013 00:21:42.848490 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848519443+00:00 stderr F I1013 00:21:42.848508 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.848530193+00:00 stderr F I1013 00:21:42.848519 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in node crc 2025-10-13T00:21:42.848530193+00:00 stderr F I1013 00:21:42.848524 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p after 0 failed attempt(s) 2025-10-13T00:21:42.848539134+00:00 stderr F I1013 00:21:42.848529 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-10-13T00:21:42.848547804+00:00 stderr F I1013 00:21:42.845213 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.848558754+00:00 stderr F I1013 00:21:42.848551 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.848567864+00:00 stderr F I1013 00:21:42.848535 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848576715+00:00 stderr F I1013 00:21:42.844582 28251 ovn.go:134] Ensuring zone local for Pod 
openshift-console/downloads-65476884b9-9wcvx in node crc 2025-10-13T00:21:42.848691148+00:00 stderr F I1013 00:21:42.848662 28251 base_network_controller_pods.go:476] [default/openshift-console/downloads-65476884b9-9wcvx] creating logical port openshift-console_downloads-65476884b9-9wcvx for pod on switch crc 2025-10-13T00:21:42.848731909+00:00 stderr F I1013 00:21:42.848693 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848796801+00:00 stderr F I1013 00:21:42.848769 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848810391+00:00 stderr F I1013 00:21:42.848775 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848891183+00:00 stderr F I1013 00:21:42.848853 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848901093+00:00 stderr F I1013 00:21:42.848875 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848923354+00:00 stderr F I1013 00:21:42.848876 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848932194+00:00 stderr F I1013 00:21:42.848868 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848961345+00:00 stderr F I1013 
00:21:42.848929 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848970685+00:00 stderr F I1013 00:21:42.848943 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.848997746+00:00 stderr F I1013 00:21:42.848888 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849009266+00:00 stderr F I1013 00:21:42.848978 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849050267+00:00 stderr F I1013 00:21:42.848995 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849050267+00:00 stderr F I1013 00:21:42.849022 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849050267+00:00 stderr F I1013 00:21:42.848959 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849084628+00:00 stderr F I1013 00:21:42.849052 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849176471+00:00 stderr F I1013 00:21:42.849127 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849176471+00:00 stderr F I1013 00:21:42.845109 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.849198351+00:00 stderr F I1013 00:21:42.849181 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.849198351+00:00 stderr F I1013 00:21:42.849090 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849206742+00:00 stderr F I1013 00:21:42.849198 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-etcd/etcd-crc 2025-10-13T00:21:42.849206742+00:00 stderr F I1013 00:21:42.849203 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-etcd/etcd-crc 2025-10-13T00:21:42.849215092+00:00 stderr F I1013 00:21:42.849208 28251 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc 2025-10-13T00:21:42.849222942+00:00 stderr F I1013 00:21:42.849213 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s) 2025-10-13T00:21:42.849230752+00:00 stderr F I1013 00:21:42.849217 28251 default_network_controller.go:699] Recording success event on pod openshift-etcd/etcd-crc 2025-10-13T00:21:42.849238813+00:00 stderr F I1013 00:21:42.849233 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.849249943+00:00 stderr F I1013 00:21:42.849238 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.849249943+00:00 stderr F I1013 00:21:42.849246 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in node crc 2025-10-13T00:21:42.849280094+00:00 stderr F I1013 00:21:42.849266 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] creating logical port openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh for pod on switch crc 2025-10-13T00:21:42.849280094+00:00 stderr F I1013 00:21:42.849246 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849361756+00:00 stderr F I1013 00:21:42.849301 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849374826+00:00 stderr F I1013 00:21:42.848328 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849383326+00:00 stderr F I1013 00:21:42.849350 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849411127+00:00 stderr F I1013 00:21:42.849379 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849497650+00:00 stderr F I1013 00:21:42.849391 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849522040+00:00 stderr F I1013 00:21:42.849486 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849551 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849604 28251 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr): added port &{name:openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr uuid:2d98188d-6d49-48e7-8956-57a5c46efe26 logicalSwitch:crc ips:[0xc001400480] mac:[10 88 10 217 0 16] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.16/23] and MAC: 0a:58:0a:d9:00:10 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849618 28251 pods.go:220] [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] addLogicalPort took 4.216944ms, libovsdb time 3.368421ms 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849626 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr after 0 failed attempt(s) 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849631 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849624 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.847614 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849645 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.845191 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849658 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849665 28251 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qdfr4 in node crc 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849687 28251 base_network_controller_pods.go:476] 
[default/openshift-multus/network-metrics-daemon-qdfr4] creating logical port openshift-multus_network-metrics-daemon-qdfr4 for pod on switch crc 2025-10-13T00:21:42.849809898+00:00 stderr F I1013 00:21:42.849700 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849833349+00:00 stderr F I1013 00:21:42.849794 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849878140+00:00 stderr F I1013 00:21:42.849824 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849878140+00:00 stderr F I1013 00:21:42.849840 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.849878140+00:00 stderr F I1013 00:21:42.849854 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850006623+00:00 stderr F I1013 00:21:42.849954 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850016943+00:00 stderr F I1013 00:21:42.849892 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850156117+00:00 stderr F I1013 00:21:42.850115 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850222989+00:00 stderr F I1013 00:21:42.850186 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850271450+00:00 stderr F I1013 00:21:42.850235 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850389073+00:00 stderr F I1013 00:21:42.849918 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850389073+00:00 stderr F I1013 00:21:42.850320 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.850389073+00:00 stderr F I1013 00:21:42.850328 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.850389073+00:00 stderr F I1013 00:21:42.850354 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/certified-operators-cms8q in node crc 2025-10-13T00:21:42.850389073+00:00 stderr F I1013 00:21:42.850374 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/certified-operators-cms8q] creating logical port openshift-marketplace_certified-operators-cms8q for pod on switch crc 2025-10-13T00:21:42.850389073+00:00 stderr F I1013 00:21:42.850342 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850475926+00:00 stderr F I1013 00:21:42.850294 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850520037+00:00 stderr F I1013 00:21:42.848471 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850582359+00:00 stderr F I1013 00:21:42.849409 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850582359+00:00 stderr F I1013 00:21:42.845244 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.850601129+00:00 stderr F I1013 00:21:42.848561 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/community-operators-gjctm in node crc 2025-10-13T00:21:42.850609759+00:00 stderr F I1013 00:21:42.847693 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.850689632+00:00 stderr F I1013 00:21:42.846115 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850701082+00:00 stderr F I1013 00:21:42.850463 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850753723+00:00 stderr F I1013 00:21:42.846122 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850753723+00:00 stderr F I1013 00:21:42.844583 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.850753723+00:00 stderr F I1013 00:21:42.849188 28251 ovn.go:134] Ensuring zone local for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in node crc 2025-10-13T00:21:42.850765384+00:00 stderr F I1013 00:21:42.845178 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.850765384+00:00 stderr F I1013 00:21:42.846719 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850765384+00:00 stderr F I1013 00:21:42.844515 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg in node crc 2025-10-13T00:21:42.850765384+00:00 stderr F I1013 00:21:42.845072 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.850775524+00:00 stderr F I1013 00:21:42.847845 28251 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in node crc 2025-10-13T00:21:42.850775524+00:00 stderr F I1013 00:21:42.847894 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.850784614+00:00 stderr F I1013 00:21:42.844458 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.850784614+00:00 stderr F I1013 00:21:42.845110 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.850784614+00:00 stderr F I1013 00:21:42.850243 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850793954+00:00 stderr F I1013 00:21:42.850779 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" 2025-10-13T00:21:42.850793954+00:00 stderr F I1013 00:21:42.850411 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850830105+00:00 stderr F I1013 00:21:42.850802 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "4f8aa612-9da0-4a2b-911e-6a1764a4e74e" 2025-10-13T00:21:42.850830105+00:00 stderr F I1013 00:21:42.850811 28251 base_network_controller_pods.go:476] [default/openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] creating logical port openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t for pod on switch crc 2025-10-13T00:21:42.850840316+00:00 stderr F I1013 00:21:42.850830 28251 base_network_controller_pods.go:476] [default/openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] creating logical port openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv for pod on switch crc 2025-10-13T00:21:42.850848986+00:00 stderr F I1013 00:21:42.850833 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.850858456+00:00 stderr F I1013 00:21:42.850846 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc 2025-10-13T00:21:42.850858456+00:00 stderr F I1013 00:21:42.850831 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850858456+00:00 stderr F I1013 00:21:42.850853 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s) 2025-10-13T00:21:42.850868306+00:00 stderr F I1013 00:21:42.850837 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850868306+00:00 stderr F I1013 00:21:42.850861 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-10-13T00:21:42.850877497+00:00 stderr F I1013 00:21:42.845088 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850877497+00:00 stderr F I1013 00:21:42.845614 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.850886717+00:00 stderr F I1013 00:21:42.850408 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850937898+00:00 stderr F I1013 00:21:42.845289 28251 ovn.go:134] Ensuring zone local for Pod openshift-dns/dns-default-gbw49 in node crc 2025-10-13T00:21:42.850976309+00:00 stderr F I1013 00:21:42.850466 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.850988340+00:00 stderr F I1013 00:21:42.850872 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.851044791+00:00 stderr F I1013 00:21:42.845287 28251 ovn.go:134] Ensuring zone local for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in node crc 2025-10-13T00:21:42.851054911+00:00 stderr F I1013 00:21:42.850701 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.851065612+00:00 stderr F I1013 00:21:42.851058 28251 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in node crc 2025-10-13T00:21:42.851120783+00:00 stderr F I1013 00:21:42.846444 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.851120783+00:00 stderr F I1013 00:21:42.850744 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.851120783+00:00 stderr F I1013 00:21:42.851105 28251 base_network_controller_pods.go:476] [default/openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] creating logical port openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 for pod on switch crc 2025-10-13T00:21:42.851131943+00:00 stderr F I1013 00:21:42.851121 28251 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-v54bt in node crc 2025-10-13T00:21:42.851182385+00:00 stderr F 
I1013 00:21:42.850764 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/community-operators-gjctm] creating logical port openshift-marketplace_community-operators-gjctm for pod on switch crc 2025-10-13T00:21:42.851310778+00:00 stderr F I1013 00:21:42.850744 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.851310778+00:00 stderr F I1013 00:21:42.850763 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2): added port &{name:openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 uuid:f24db1f4-18a4-418a-9c99-1d94ebfba0da logicalSwitch:crc ips:[0xc000d809c0] mac:[10 88 10 217 0 24] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.24/23] and MAC: 0a:58:0a:d9:00:18 2025-10-13T00:21:42.851319368+00:00 stderr F I1013 00:21:42.850837 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.851407741+00:00 stderr F I1013 00:21:42.851385 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "01feb2e0-a0f4-4573-8335-34e364e0ef40" 2025-10-13T00:21:42.851407741+00:00 stderr F I1013 00:21:42.851314 28251 pods.go:220] [openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] addLogicalPort took 6.137635ms, libovsdb time 4.663366ms 2025-10-13T00:21:42.851407741+00:00 stderr F I1013 00:21:42.851381 28251 port_cache.go:96] port-cache(openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb): added port &{name:openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb uuid:805e2f41-6cb8-4ccf-9939-37cfb4fa5509 logicalSwitch:crc ips:[0xc0013350b0] mac:[10 88 10 217 0 5] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.5/23] and MAC: 0a:58:0a:d9:00:05 2025-10-13T00:21:42.851504843+00:00 stderr F I1013 00:21:42.851463 28251 port_cache.go:96] port-cache(openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b): added port &{name:openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b uuid:3e86699a-fa52-4a81-9386-60d37f3fa10c logicalSwitch:crc ips:[0xc00099b560] mac:[10 88 10 217 0 72] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.72/23] and MAC: 0a:58:0a:d9:00:48 2025-10-13T00:21:42.851928085+00:00 stderr F I1013 00:21:42.851898 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 after 0 failed attempt(s) 2025-10-13T00:21:42.851928085+00:00 stderr F I1013 00:21:42.851887 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.851928085+00:00 stderr F I1013 00:21:42.851880 28251 pods.go:220] 
[openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] addLogicalPort took 6.781993ms, libovsdb time 4.635935ms 2025-10-13T00:21:42.851928085+00:00 stderr F I1013 00:21:42.851913 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-10-13T00:21:42.851928085+00:00 stderr F I1013 00:21:42.851921 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb after 0 failed attempt(s) 2025-10-13T00:21:42.851940155+00:00 stderr F I1013 00:21:42.851928 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-10-13T00:21:42.851981386+00:00 stderr F I1013 00:21:42.851954 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in node crc 2025-10-13T00:21:42.851981386+00:00 stderr F I1013 00:21:42.851975 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc 2025-10-13T00:21:42.851989446+00:00 stderr F I1013 00:21:42.851980 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s) 2025-10-13T00:21:42.851989446+00:00 stderr F I1013 00:21:42.851984 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-10-13T00:21:42.851998727+00:00 stderr F I1013 00:21:42.851988 28251 base_network_controller_pods.go:476] [default/openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] creating logical port openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 for pod on switch crc 2025-10-13T00:21:42.852005827+00:00 stderr F I1013 00:21:42.851994 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.852005827+00:00 stderr F I1013 00:21:42.852002 28251 ovn.go:134] Ensuring zone local for Pod openshift-console/console-644bb77b49-5x5xk in node crc 2025-10-13T00:21:42.852013057+00:00 stderr F I1013 00:21:42.851985 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:c8f142c0-dc2a-4213-882f-919da8583b03 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {628a7cc8-ec79-470f-b57a-8f42ec584f4b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852039888+00:00 stderr F I1013 00:21:42.852024 28251 base_network_controller_pods.go:476] [default/openshift-console/console-644bb77b49-5x5xk] creating logical port openshift-console_console-644bb77b49-5x5xk for pod on switch crc 2025-10-13T00:21:42.852061598+00:00 stderr F I1013 00:21:42.852036 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852092389+00:00 stderr F I1013 00:21:42.852046 28251 model_client.go:381] Update operations generated as: 
[{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852092389+00:00 stderr F I1013 00:21:42.852082 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852121870+00:00 stderr F I1013 00:21:42.852072 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852144301+00:00 stderr F I1013 00:21:42.852116 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852253964+00:00 stderr F I1013 00:21:42.852182 28251 port_cache.go:96] port-cache(openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc): added port &{name:openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc uuid:d2f291e9-b4fe-47a7-a644-298254d226c5 logicalSwitch:crc ips:[0xc001a54420] mac:[10 88 10 217 0 23] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.23/23] and MAC: 0a:58:0a:d9:00:17 2025-10-13T00:21:42.852253964+00:00 stderr F I1013 00:21:42.852144 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: 
Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852253964+00:00 stderr F I1013 00:21:42.852198 28251 pods.go:220] [openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] addLogicalPort took 6.458594ms, libovsdb time 4.725047ms 2025-10-13T00:21:42.852253964+00:00 stderr F I1013 00:21:42.852208 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc after 0 failed attempt(s) 2025-10-13T00:21:42.852253964+00:00 stderr F I1013 00:21:42.852213 28251 default_network_controller.go:699] Recording success event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-10-13T00:21:42.852253964+00:00 stderr F I1013 00:21:42.852225 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "530553aa-0a1d-423e-8a22-f5eb4bdbb883" 2025-10-13T00:21:42.852267984+00:00 stderr F I1013 00:21:42.852249 28251 port_cache.go:96] port-cache(openshift-console_downloads-65476884b9-9wcvx): added port &{name:openshift-console_downloads-65476884b9-9wcvx uuid:745a40f7-2acc-4e2b-a087-861e0ea97ffe logicalSwitch:crc ips:[0xc0007ee450] mac:[10 88 10 217 0 66] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.66/23] and MAC: 0a:58:0a:d9:00:42 2025-10-13T00:21:42.852267984+00:00 stderr F I1013 00:21:42.852233 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852267984+00:00 stderr F I1013 00:21:42.852263 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "6268b7fe-8910-4505-b404-6f1df638105c" 2025-10-13T00:21:42.852298795+00:00 stderr F I1013 00:21:42.852260 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852306395+00:00 stderr F I1013 00:21:42.852288 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852313585+00:00 stderr F I1013 00:21:42.852280 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852356946+00:00 stderr F I1013 
00:21:42.852314 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852387807+00:00 stderr F I1013 00:21:42.851907 28251 pods.go:220] [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] addLogicalPort took 6.584917ms, libovsdb time 2.490397ms 2025-10-13T00:21:42.852387807+00:00 stderr F I1013 00:21:42.852364 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852387807+00:00 stderr F I1013 00:21:42.852382 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b after 0 failed attempt(s) 2025-10-13T00:21:42.852395877+00:00 stderr F I1013 00:21:42.852389 28251 default_network_controller.go:699] Recording success event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-10-13T00:21:42.852403208+00:00 stderr F I1013 00:21:42.852382 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852444329+00:00 stderr F I1013 00:21:42.852368 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852444329+00:00 stderr F I1013 
00:21:42.852422 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852491540+00:00 stderr F I1013 00:21:42.852462 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6802ed31-354a-46db-8a22-561c776398db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852521291+00:00 stderr F I1013 00:21:42.852479 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852521291+00:00 stderr F I1013 00:21:42.852497 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6802ed31-354a-46db-8a22-561c776398db}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852563802+00:00 stderr F I1013 00:21:42.852531 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852563802+00:00 stderr F I1013 00:21:42.852512 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:c8f142c0-dc2a-4213-882f-919da8583b03 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {628a7cc8-ec79-470f-b57a-8f42ec584f4b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:628a7cc8-ec79-470f-b57a-8f42ec584f4b}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6802ed31-354a-46db-8a22-561c776398db}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:6802ed31-354a-46db-8a22-561c776398db}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852645414+00:00 stderr F I1013 00:21:42.852585 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852645414+00:00 stderr F I1013 00:21:42.852559 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852675485+00:00 stderr F I1013 00:21:42.852647 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852767467+00:00 stderr F I1013 00:21:42.852671 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate 
Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852819919+00:00 stderr F I1013 00:21:42.852258 28251 pods.go:220] [openshift-console/downloads-65476884b9-9wcvx] addLogicalPort took 3.607917ms, libovsdb time 2.357363ms 2025-10-13T00:21:42.852819919+00:00 stderr F I1013 00:21:42.852811 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-console/downloads-65476884b9-9wcvx after 0 failed attempt(s) 2025-10-13T00:21:42.852819919+00:00 stderr F I1013 00:21:42.852280 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "297ab9b6-2186-4d5b-a952-2bfd59af63c4" 2025-10-13T00:21:42.852828419+00:00 stderr F I1013 00:21:42.852798 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852835689+00:00 stderr F I1013 00:21:42.852827 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "43ae1c37-047b-4ee2-9fee-41e337dd4ac8" 2025-10-13T00:21:42.852842999+00:00 stderr F I1013 00:21:42.852816 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852842999+00:00 stderr F I1013 00:21:42.852835 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "ed024e5d-8fc2-4c22-803d-73f3c9795f19" 2025-10-13T00:21:42.852850240+00:00 stderr F I1013 00:21:42.852843 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d0f40333-c860-4c04-8058-a0bf572dcf12" 2025-10-13T00:21:42.852858140+00:00 stderr F I1013 00:21:42.852849 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c085412c-b875-46c9-ae3e-e6b0d8067091" 2025-10-13T00:21:42.852858140+00:00 stderr F I1013 00:21:42.852854 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "f728c15e-d8de-4a9a-a3ea-fdcead95cb91" 2025-10-13T00:21:42.852866260+00:00 stderr F I1013 00:21:42.852849 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column 
_uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852873260+00:00 stderr F I1013 00:21:42.852859 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "21d29937-debd-4407-b2b1-d1053cb0f342" 2025-10-13T00:21:42.852880440+00:00 stderr F I1013 00:21:42.852861 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852932442+00:00 stderr F I1013 00:21:42.852906 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c8f142c0-dc2a-4213-882f-919da8583b03" 2025-10-13T00:21:42.852932442+00:00 stderr F I1013 00:21:42.852871 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852945882+00:00 stderr F I1013 00:21:42.852884 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852945882+00:00 stderr F I1013 00:21:42.852273 28251 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh): added port &{name:openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh uuid:9aafbb57-c78d-409c-9ff4-1561d4387b2d logicalSwitch:crc ips:[0xc00097d650] mac:[10 88 10 217 0 63] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.63/23] and MAC: 0a:58:0a:d9:00:3f 2025-10-13T00:21:42.852978363+00:00 stderr F I1013 00:21:42.852953 28251 pods.go:220] [openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] addLogicalPort took 4.983854ms, libovsdb time 3.311989ms 2025-10-13T00:21:42.852978363+00:00 stderr F I1013 00:21:42.852802 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.852989943+00:00 stderr F I1013 00:21:42.852976 28251 port_cache.go:96] port-cache(openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m): added port &{name:openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m uuid:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26 logicalSwitch:crc ips:[0xc000e864b0] mac:[10 88 10 217 0 6] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.6/23] and MAC: 0a:58:0a:d9:00:06 2025-10-13T00:21:42.852999714+00:00 stderr F I1013 00:21:42.852991 28251 pods.go:220] [openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] addLogicalPort took 8.377195ms, libovsdb time 3.198106ms 2025-10-13T00:21:42.853006954+00:00 stderr F I1013 00:21:42.853001 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m after 0 failed attempt(s) 2025-10-13T00:21:42.853014034+00:00 stderr F I1013 00:21:42.853006 28251 default_network_controller.go:699] Recording success event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-10-13T00:21:42.853037575+00:00 stderr F I1013 00:21:42.853017 28251 port_cache.go:96] port-cache(openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7): added port &{name:openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 uuid:f5ecfd58-e886-4b2c-9939-022e7f14b7a7 logicalSwitch:crc ips:[0xc000e63e00] mac:[10 88 10 217 0 7] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.7/23] and MAC: 0a:58:0a:d9:00:07 2025-10-13T00:21:42.853037575+00:00 stderr F I1013 00:21:42.853015 28251 model_client.go:397] Mutate operations generated as: 
[{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853037575+00:00 stderr F I1013 00:21:42.853029 28251 pods.go:220] [openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] addLogicalPort took 5.154498ms, libovsdb time 2.900908ms 2025-10-13T00:21:42.853045765+00:00 stderr F I1013 00:21:42.853039 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 after 0 failed attempt(s) 2025-10-13T00:21:42.853052655+00:00 stderr F I1013 00:21:42.853042 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.853061905+00:00 stderr F I1013 00:21:42.853054 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.853061905+00:00 stderr F I1013 00:21:42.853051 28251 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7): added port &{name:openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 uuid:3644fddd-ceae-4a64-8b00-dadf73515945 logicalSwitch:crc ips:[0xc0010acdb0] mac:[10 88 10 217 0 64] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.64/23] and MAC: 0a:58:0a:d9:00:40 2025-10-13T00:21:42.853069236+00:00 stderr F I1013 00:21:42.853060 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.853076286+00:00 stderr F I1013 00:21:42.853049 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853076286+00:00 stderr F I1013 00:21:42.853072 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.853086766+00:00 stderr F I1013 00:21:42.853073 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh): added port &{name:openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh uuid:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749 logicalSwitch:crc ips:[0xc00060e9f0] mac:[10 88 10 217 0 14] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.14/23] and MAC: 0a:58:0a:d9:00:0e 2025-10-13T00:21:42.853086766+00:00 stderr F I1013 00:21:42.853039 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853096116+00:00 stderr F I1013 00:21:42.853075 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853096116+00:00 stderr F I1013 00:21:42.853082 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "41e8708a-e40d-4d28-846b-c52eda4d1755" 2025-10-13T00:21:42.853105937+00:00 stderr F I1013 00:21:42.853088 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853105937+00:00 stderr F I1013 00:21:42.853091 28251 port_cache.go:96] port-cache(openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg): added port &{name:openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg uuid:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4 logicalSwitch:crc ips:[0xc001a555f0] mac:[10 88 10 217 0 46] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.46/23] and MAC: 0a:58:0a:d9:00:2e 2025-10-13T00:21:42.853105937+00:00 stderr F I1013 00:21:42.853099 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.853118627+00:00 stderr F I1013 00:21:42.853103 28251 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-operators-hkptr in node crc 2025-10-13T00:21:42.853118627+00:00 stderr F I1013 00:21:42.853107 28251 pods.go:220] [openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] addLogicalPort took 6.942796ms, libovsdb time 568.275µs 2025-10-13T00:21:42.853118627+00:00 stderr F I1013 00:21:42.853110 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.853118627+00:00 stderr F I1013 00:21:42.853107 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853128847+00:00 stderr F I1013 00:21:42.853117 28251 
obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg after 0 failed attempt(s) 2025-10-13T00:21:42.853128847+00:00 stderr F I1013 00:21:42.853119 28251 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in node crc 2025-10-13T00:21:42.853128847+00:00 stderr F I1013 00:21:42.853124 28251 default_network_controller.go:699] Recording success event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-10-13T00:21:42.853136867+00:00 stderr F I1013 00:21:42.853127 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b after 0 failed attempt(s) 2025-10-13T00:21:42.853136867+00:00 stderr F I1013 00:21:42.853132 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-operators-hkptr] creating logical port openshift-marketplace_redhat-operators-hkptr for pod on switch crc 2025-10-13T00:21:42.853144548+00:00 stderr F I1013 00:21:42.853085 28251 pods.go:220] [openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] addLogicalPort took 3.823943ms, libovsdb time 1.871531ms 2025-10-13T00:21:42.853144548+00:00 stderr F I1013 00:21:42.853134 28251 port_cache.go:96] port-cache(openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs): added port &{name:openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs uuid:c2174bce-e1da-468b-aa60-b9409f80c104 logicalSwitch:crc ips:[0xc0017f9ef0] mac:[10 88 10 217 0 88] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.88/23] and MAC: 0a:58:0a:d9:00:58 2025-10-13T00:21:42.853151918+00:00 stderr F I1013 00:21:42.853104 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853151918+00:00 stderr F I1013 00:21:42.853144 28251 obj_retry.go:379] Retry successful for *v1.Pod 
openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh after 0 failed attempt(s) 2025-10-13T00:21:42.853162638+00:00 stderr F I1013 00:21:42.853151 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-10-13T00:21:42.853162638+00:00 stderr F I1013 00:21:42.853152 28251 port_cache.go:96] port-cache(openshift-marketplace_certified-operators-cms8q): added port &{name:openshift-marketplace_certified-operators-cms8q uuid:628a7cc8-ec79-470f-b57a-8f42ec584f4b logicalSwitch:crc ips:[0xc001258ab0] mac:[10 88 10 217 0 36] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.36/23] and MAC: 0a:58:0a:d9:00:24 2025-10-13T00:21:42.853171768+00:00 stderr F I1013 00:21:42.853066 28251 pods.go:220] [openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] addLogicalPort took 4.789778ms, libovsdb time 2.921859ms 2025-10-13T00:21:42.853171768+00:00 stderr F I1013 00:21:42.853121 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853171768+00:00 stderr F I1013 00:21:42.853027 28251 obj_retry.go:296] Retry object setup: *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.853181919+00:00 stderr F I1013 00:21:42.853171 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 after 0 failed attempt(s) 2025-10-13T00:21:42.853190899+00:00 stderr F I1013 00:21:42.853171 28251 port_cache.go:96] port-cache(openshift-apiserver_apiserver-7fc54b8dd7-d2bhp): added port &{name:openshift-apiserver_apiserver-7fc54b8dd7-d2bhp uuid:005abe2f-f66d-42f8-945c-fbc80f820ed4 logicalSwitch:crc ips:[0xc00097c7b0] mac:[10 88 10 217 0 82] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.82/23] and MAC: 0a:58:0a:d9:00:52 2025-10-13T00:21:42.853190899+00:00 stderr F I1013 00:21:42.853181 28251 default_network_controller.go:699] Recording success event on pod 
openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-10-13T00:21:42.853190899+00:00 stderr F I1013 00:21:42.853186 28251 pods.go:220] [openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] addLogicalPort took 5.816247ms, libovsdb time 513.824µs 2025-10-13T00:21:42.853203509+00:00 stderr F I1013 00:21:42.853081 28251 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/installer-13-crc in node crc 2025-10-13T00:21:42.853203509+00:00 stderr F I1013 00:21:42.853193 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp after 0 failed attempt(s) 2025-10-13T00:21:42.853203509+00:00 stderr F I1013 00:21:42.853198 28251 default_network_controller.go:699] Recording success event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-10-13T00:21:42.853231130+00:00 stderr F I1013 00:21:42.853205 28251 port_cache.go:96] port-cache(openshift-console-operator_console-operator-5dbbc74dc9-cp5cd): added port &{name:openshift-console-operator_console-operator-5dbbc74dc9-cp5cd uuid:6af06372-81fc-4451-8678-6253ce70f317 logicalSwitch:crc ips:[0xc001a54bd0] mac:[10 88 10 217 0 62] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.62/23] and MAC: 0a:58:0a:d9:00:3e 2025-10-13T00:21:42.853231130+00:00 stderr F I1013 00:21:42.853175 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.853231130+00:00 stderr F I1013 00:21:42.853224 28251 pods.go:220] [openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] addLogicalPort took 7.291336ms, libovsdb time 527.404µs 2025-10-13T00:21:42.853241160+00:00 stderr F I1013 00:21:42.853230 28251 ovn.go:134] Ensuring zone local for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in node crc 2025-10-13T00:21:42.853241160+00:00 stderr F I1013 00:21:42.853234 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd after 0 failed attempt(s) 2025-10-13T00:21:42.853250300+00:00 stderr F I1013 00:21:42.853241 28251 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-10-13T00:21:42.853250300+00:00 stderr F I1013 00:21:42.853101 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "a783d910-85f5-4f52-8831-6bae329a70fa" 2025-10-13T00:21:42.853258911+00:00 stderr F I1013 00:21:42.853250 28251 base_network_controller_pods.go:476] [default/openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] creating logical port openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd for pod on switch crc 2025-10-13T00:21:42.853258911+00:00 stderr F I1013 00:21:42.853251 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 2025-10-13T00:21:42.853267681+00:00 stderr F I1013 00:21:42.853134 28251 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-10-13T00:21:42.853267681+00:00 stderr F I1013 00:21:42.853252 28251 port_cache.go:96] port-cache(openshift-marketplace_redhat-marketplace-crk87): added port &{name:openshift-marketplace_redhat-marketplace-crk87 uuid:15739438-58a4-46c0-bcfe-2d9fdf5c37a4 logicalSwitch:crc ips:[0xc000004a80] mac:[10 88 10 217 0 33] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.33/23] and MAC: 0a:58:0a:d9:00:21 2025-10-13T00:21:42.853276571+00:00 stderr F I1013 00:21:42.853269 28251 pods.go:220] 
[openshift-marketplace/redhat-marketplace-crk87] addLogicalPort took 7.620765ms, libovsdb time 4.564213ms 2025-10-13T00:21:42.853287501+00:00 stderr F I1013 00:21:42.853279 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/redhat-marketplace-crk87 after 0 failed attempt(s) 2025-10-13T00:21:42.853287501+00:00 stderr F I1013 00:21:42.853145 28251 pods.go:220] [openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] addLogicalPort took 5.394066ms, libovsdb time 374.19µs 2025-10-13T00:21:42.853299902+00:00 stderr F I1013 00:21:42.853291 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-marketplace-crk87 2025-10-13T00:21:42.853308262+00:00 stderr F I1013 00:21:42.853045 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-10-13T00:21:42.853362303+00:00 stderr F I1013 00:21:42.853214 28251 base_network_controller_pods.go:476] [default/openshift-kube-apiserver/installer-13-crc] creating logical port openshift-kube-apiserver_installer-13-crc for pod on switch crc 2025-10-13T00:21:42.853362303+00:00 stderr F I1013 00:21:42.853343 28251 port_cache.go:96] port-cache(openshift-console_console-644bb77b49-5x5xk): added port &{name:openshift-console_console-644bb77b49-5x5xk uuid:9a79516e-7a72-4d42-b0ab-87a99aa064f3 logicalSwitch:crc ips:[0xc000d2b620] mac:[10 88 10 217 0 73] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.73/23] and MAC: 0a:58:0a:d9:00:49 2025-10-13T00:21:42.853362303+00:00 stderr F I1013 00:21:42.853355 28251 pods.go:220] [openshift-console/console-644bb77b49-5x5xk] addLogicalPort took 1.336826ms, libovsdb time 466.643µs 2025-10-13T00:21:42.853377894+00:00 stderr F I1013 00:21:42.853364 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-console/console-644bb77b49-5x5xk after 0 failed attempt(s) 2025-10-13T00:21:42.853377894+00:00 stderr F I1013 00:21:42.853369 28251 default_network_controller.go:699] Recording success event on pod openshift-console/console-644bb77b49-5x5xk 2025-10-13T00:21:42.853387074+00:00 stderr F I1013 00:21:42.853379 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" 2025-10-13T00:21:42.853453596+00:00 stderr F I1013 00:21:42.853419 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853453596+00:00 stderr F I1013 00:21:42.853290 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs after 0 failed attempt(s) 2025-10-13T00:21:42.853453596+00:00 stderr F I1013 00:21:42.850781 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853481897+00:00 stderr 
F I1013 00:21:42.853458 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853481897+00:00 stderr F I1013 00:21:42.853476 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.853491627+00:00 stderr F I1013 00:21:42.853485 28251 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in node crc 2025-10-13T00:21:42.853504297+00:00 stderr F I1013 00:21:42.853485 28251 port_cache.go:96] port-cache(openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7): added port &{name:openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 uuid:6e77fb5d-c04f-467c-9883-8cb59d819d86 logicalSwitch:crc ips:[0xc00133d5f0] mac:[10 88 10 217 0 12] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.12/23] and MAC: 0a:58:0a:d9:00:0c 2025-10-13T00:21:42.853504297+00:00 stderr F I1013 00:21:42.853497 28251 pods.go:220] [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] addLogicalPort took 1.522541ms, libovsdb time 596.667µs 2025-10-13T00:21:42.853513057+00:00 stderr F I1013 00:21:42.853486 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:b0a4ec02-9b6b-400a-9633-c11280799f07 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92cd2fae-7041-4269-9f12-7b10f07cb6aa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853513057+00:00 stderr F I1013 00:21:42.853504 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 after 0 failed attempt(s) 2025-10-13T00:21:42.853513057+00:00 stderr F I1013 00:21:42.853510 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-10-13T00:21:42.853522918+00:00 stderr F I1013 00:21:42.853468 28251 base_network_controller_pods.go:476] [default/openshift-marketplace/marketplace-operator-8b455464d-29pzg] creating logical port openshift-marketplace_marketplace-operator-8b455464d-29pzg for pod on switch crc 2025-10-13T00:21:42.853522918+00:00 stderr F I1013 00:21:42.853464 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: 
Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853522918+00:00 stderr F I1013 00:21:42.853512 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.853532978+00:00 stderr F I1013 00:21:42.853526 28251 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in node crc 2025-10-13T00:21:42.853544648+00:00 stderr F I1013 00:21:42.853505 28251 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] creating logical port openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm for pod on switch crc 2025-10-13T00:21:42.853544648+00:00 stderr F I1013 00:21:42.853520 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853553769+00:00 stderr F I1013 00:21:42.853544 28251 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] creating logical port openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf for pod on switch crc 2025-10-13T00:21:42.853593030+00:00 stderr F I1013 00:21:42.853563 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853625360+00:00 stderr F I1013 00:21:42.853503 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853634471+00:00 stderr F I1013 00:21:42.852817 28251 default_network_controller.go:699] Recording success event on pod openshift-console/downloads-65476884b9-9wcvx 2025-10-13T00:21:42.853642321+00:00 stderr F I1013 00:21:42.853417 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.10 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853691372+00:00 stderr F I1013 00:21:42.853656 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:c3d30d24-1dab-4362-a72b-dd6762f1f84c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {209ddb87-f880-45d7-bb2d-0426a70d7f75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853691372+00:00 stderr F I1013 00:21:42.853659 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853704283+00:00 stderr F I1013 00:21:42.853691 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853718523+00:00 stderr F I1013 00:21:42.853695 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853727443+00:00 stderr F I1013 00:21:42.853710 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853736273+00:00 stderr F I1013 00:21:42.852969 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh after 0 failed attempt(s) 2025-10-13T00:21:42.853745304+00:00 stderr F I1013 00:21:42.853720 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853745304+00:00 stderr F I1013 00:21:42.853730 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid == 
{841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853754554+00:00 stderr F I1013 00:21:42.853527 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "71af81a9-7d43-49b2-9287-c375900aa905" 2025-10-13T00:21:42.853785355+00:00 stderr F I1013 00:21:42.853743 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853785355+00:00 stderr F I1013 00:21:42.853762 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853797245+00:00 stderr F I1013 00:21:42.853733 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853797245+00:00 stderr F I1013 00:21:42.853726 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853810285+00:00 stderr F I1013 00:21:42.853793 28251 port_cache.go:96] port-cache(openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw): added port &{name:openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw uuid:f8e99409-b28a-4d27-a8e5-267ea6a801cf logicalSwitch:crc ips:[0xc0012cdb60] mac:[10 88 10 217 0 20] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.20/23] and MAC: 0a:58:0a:d9:00:14 2025-10-13T00:21:42.853822126+00:00 stderr F I1013 00:21:42.853801 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853822126+00:00 stderr F I1013 00:21:42.853808 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853863617+00:00 stderr F I1013 00:21:42.853741 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-10-13T00:21:42.853863617+00:00 stderr F I1013 00:21:42.853824 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "45a8038e-e7f2-4d93-a6f5-7753aa54e63f" 2025-10-13T00:21:42.853907138+00:00 stderr F I1013 00:21:42.853874 28251 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv): added port &{name:openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv uuid:2a5717ea-0a50-4ebb-b087-90e637274a33 logicalSwitch:crc ips:[0xc000dc62a0] mac:[10 88 10 217 0 25] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.25/23] and MAC: 0a:58:0a:d9:00:19 2025-10-13T00:21:42.853907138+00:00 stderr F I1013 00:21:42.853892 28251 pods.go:220] [openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] addLogicalPort took 6.6732ms, libovsdb time 827.612µs 2025-10-13T00:21:42.853907138+00:00 stderr F I1013 00:21:42.853888 28251 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853907138+00:00 stderr F I1013 00:21:42.853446 28251 default_network_controller.go:699] Recording success event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-10-13T00:21:42.853918728+00:00 stderr F I1013 00:21:42.853895 28251 port_cache.go:96] port-cache(openshift-multus_network-metrics-daemon-qdfr4): added port &{name:openshift-multus_network-metrics-daemon-qdfr4 uuid:3564ddfd-a311-4df3-b5d0-1e76294b4ab0 logicalSwitch:crc ips:[0xc000e33d10] mac:[10 88 10 217 0 3] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.3/23] and MAC: 0a:58:0a:d9:00:03 2025-10-13T00:21:42.853918728+00:00 stderr F I1013 00:21:42.853868 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853927959+00:00 stderr F I1013 00:21:42.853913 28251 pods.go:220] [openshift-multus/network-metrics-daemon-qdfr4] addLogicalPort took 4.232774ms, libovsdb time 767.911µs 2025-10-13T00:21:42.853927959+00:00 stderr F I1013 00:21:42.853922 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 after 0 failed attempt(s) 2025-10-13T00:21:42.853936739+00:00 stderr F I1013 00:21:42.853927 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/network-metrics-daemon-qdfr4 2025-10-13T00:21:42.853946609+00:00 stderr F I1013 00:21:42.853929 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853954999+00:00 stderr F I1013 00:21:42.853942 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "cf1a8966-f594-490a-9fbb-eec5bafd13d3" 2025-10-13T00:21:42.853954999+00:00 stderr F I1013 00:21:42.853951 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "a702c6d2-4dde-4077-ab8c-0f8df804bf7a" 2025-10-13T00:21:42.853964680+00:00 stderr F I1013 00:21:42.853810 28251 pods.go:220] [openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] addLogicalPort took 5.819636ms, libovsdb time 681.899µs 2025-10-13T00:21:42.853964680+00:00 stderr F I1013 00:21:42.853945 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853977970+00:00 stderr F I1013 00:21:42.853957 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 
2025-10-13T00:21:42.853977970+00:00 stderr F I1013 00:21:42.853962 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw after 0 failed attempt(s) 2025-10-13T00:21:42.853977970+00:00 stderr F I1013 00:21:42.853970 28251 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in node crc 2025-10-13T00:21:42.853977970+00:00 stderr F I1013 00:21:42.853971 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-10-13T00:21:42.853987970+00:00 stderr F I1013 00:21:42.853963 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.853996510+00:00 stderr F I1013 00:21:42.853989 28251 base_network_controller_pods.go:476] [default/openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] creating logical port openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z for pod on switch crc 2025-10-13T00:21:42.854029841+00:00 stderr F I1013 00:21:42.853995 28251 port_cache.go:96] port-cache(openshift-etcd-operator_etcd-operator-768d5b5d86-722mg): added port &{name:openshift-etcd-operator_etcd-operator-768d5b5d86-722mg uuid:e834ded8-9d5b-46e7-b962-1ee96928bab4 logicalSwitch:crc ips:[0xc0009c1950] mac:[10 88 10 217 0 8] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.8/23] and MAC: 0a:58:0a:d9:00:08 2025-10-13T00:21:42.854029841+00:00 stderr F I1013 00:21:42.853969 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:b0a4ec02-9b6b-400a-9633-c11280799f07 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92cd2fae-7041-4269-9f12-7b10f07cb6aa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:92cd2fae-7041-4269-9f12-7b10f07cb6aa}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9ebde46-a3ff-41f3-9509-e8a83ccf5dc7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.854029841+00:00 stderr F I1013 00:21:42.854015 28251 pods.go:220] [openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] addLogicalPort took 8.687684ms, libovsdb time 521.954µs 2025-10-13T00:21:42.854044662+00:00 stderr F I1013 00:21:42.854029 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg after 0 failed attempt(s) 2025-10-13T00:21:42.854044662+00:00 stderr F I1013 00:21:42.854022 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9c97ad1-ae77-4984-a203-1dc405839a5c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854044662+00:00 stderr F I1013 00:21:42.854035 28251 default_network_controller.go:699] Recording success event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-10-13T00:21:42.854054072+00:00 stderr F I1013 00:21:42.853901 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv after 0 failed attempt(s) 2025-10-13T00:21:42.854054072+00:00 stderr F I1013 00:21:42.854048 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-10-13T00:21:42.854054072+00:00 stderr F I1013 00:21:42.854047 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0b5c38ff-1fa8-4219-994d-15776acd4a4d" 2025-10-13T00:21:42.854065182+00:00 stderr F I1013 00:21:42.854053 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9c97ad1-ae77-4984-a203-1dc405839a5c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854073913+00:00 stderr F I1013 00:21:42.853996 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854122504+00:00 stderr F I1013 00:21:42.854069 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:c3d30d24-1dab-4362-a72b-dd6762f1f84c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {209ddb87-f880-45d7-bb2d-0426a70d7f75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:209ddb87-f880-45d7-bb2d-0426a70d7f75}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9c97ad1-ae77-4984-a203-1dc405839a5c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9c97ad1-ae77-4984-a203-1dc405839a5c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854134314+00:00 stderr F I1013 00:21:42.854115 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854176935+00:00 stderr F I1013 00:21:42.854146 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854176935+00:00 stderr F I1013 00:21:42.853163 28251 pods.go:220] [openshift-marketplace/certified-operators-cms8q] addLogicalPort took 2.794585ms, libovsdb time 381.041µs 2025-10-13T00:21:42.854176935+00:00 stderr F I1013 00:21:42.854128 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854193956+00:00 stderr F I1013 00:21:42.854171 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/certified-operators-cms8q after 0 failed attempt(s) 2025-10-13T00:21:42.854193956+00:00 stderr F I1013 00:21:42.854180 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/certified-operators-cms8q 2025-10-13T00:21:42.854203036+00:00 stderr F I1013 00:21:42.854187 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854305749+00:00 stderr F I1013 00:21:42.854254 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854345230+00:00 stderr F I1013 00:21:42.854309 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854345230+00:00 stderr F I1013 00:21:42.854310 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854345230+00:00 stderr F I1013 00:21:42.854311 28251 port_cache.go:96] port-cache(openshift-multus_multus-admission-controller-6c7c885997-4hbbc): added port &{name:openshift-multus_multus-admission-controller-6c7c885997-4hbbc uuid:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6 logicalSwitch:crc ips:[0xc0018471a0] mac:[10 
88 10 217 0 32] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.32/23] and MAC: 0a:58:0a:d9:00:20 2025-10-13T00:21:42.854360720+00:00 stderr F I1013 00:21:42.854349 28251 base_network_controller_pods.go:476] [default/openshift-dns/dns-default-gbw49] creating logical port openshift-dns_dns-default-gbw49 for pod on switch crc 2025-10-13T00:21:42.854360720+00:00 stderr F I1013 00:21:42.854349 28251 pods.go:220] [openshift-multus/multus-admission-controller-6c7c885997-4hbbc] addLogicalPort took 7.953694ms, libovsdb time 570.796µs 2025-10-13T00:21:42.854369631+00:00 stderr F I1013 00:21:42.854315 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854369631+00:00 stderr F I1013 00:21:42.854364 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc after 0 failed attempt(s) 2025-10-13T00:21:42.854391391+00:00 stderr F I1013 00:21:42.854370 28251 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-10-13T00:21:42.854391391+00:00 stderr F I1013 00:21:42.854326 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854400391+00:00 stderr F I1013 00:21:42.854386 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854411302+00:00 stderr F I1013 00:21:42.854381 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854411302+00:00 stderr F I1013 00:21:42.854380 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" 2025-10-13T00:21:42.854420062+00:00 stderr F I1013 00:21:42.854412 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "6d67253e-2acd-4bc1-8185-793587da4f17" 2025-10-13T00:21:42.854428292+00:00 stderr F I1013 00:21:42.854415 28251 port_cache.go:96] port-cache(openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz): added port &{name:openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz uuid:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56 logicalSwitch:crc ips:[0xc000e86f60] mac:[10 88 10 217 0 10] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.10/23] and MAC: 0a:58:0a:d9:00:0a 2025-10-13T00:21:42.854440022+00:00 stderr F I1013 00:21:42.854429 28251 pods.go:220] [openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] addLogicalPort took 9.173937ms, libovsdb time 684.059µs 2025-10-13T00:21:42.854440022+00:00 stderr F I1013 00:21:42.854427 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854448743+00:00 stderr F I1013 00:21:42.854437 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz after 0 failed attempt(s) 2025-10-13T00:21:42.854448743+00:00 stderr F I1013 00:21:42.854443 28251 default_network_controller.go:699] Recording success event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-10-13T00:21:42.854486174+00:00 stderr F I1013 00:21:42.854415 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 
logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854486174+00:00 stderr F I1013 00:21:42.854472 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854524295+00:00 stderr F I1013 00:21:42.854501 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854524295+00:00 stderr F I1013 00:21:42.854511 28251 port_cache.go:96] port-cache(openshift-kube-apiserver_installer-13-crc): added port &{name:openshift-kube-apiserver_installer-13-crc uuid:92cd2fae-7041-4269-9f12-7b10f07cb6aa logicalSwitch:crc ips:[0xc001628180] mac:[10 88 10 217 0 41] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.41/23] and MAC: 0a:58:0a:d9:00:29 2025-10-13T00:21:42.854538365+00:00 stderr F I1013 00:21:42.854525 28251 pods.go:220] [openshift-kube-apiserver/installer-13-crc] addLogicalPort took 1.317206ms, libovsdb time 537.394µs 2025-10-13T00:21:42.854538365+00:00 stderr F I1013 00:21:42.854533 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver/installer-13-crc after 0 failed attempt(s) 2025-10-13T00:21:42.854547515+00:00 stderr F I1013 00:21:42.854538 28251 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/installer-13-crc 2025-10-13T00:21:42.854556456+00:00 stderr F I1013 00:21:42.854548 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "b0a4ec02-9b6b-400a-9633-c11280799f07" 2025-10-13T00:21:42.854565076+00:00 stderr F I1013 00:21:42.854518 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] 
Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854603347+00:00 stderr F I1013 00:21:42.854577 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854635838+00:00 stderr F I1013 00:21:42.854596 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854635838+00:00 stderr F I1013 00:21:42.854613 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854679189+00:00 stderr F I1013 00:21:42.854654 28251 port_cache.go:96] port-cache(openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd): added port &{name:openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd uuid:8b4158c3-d859-42e6-8259-b16ce1cbd284 logicalSwitch:crc ips:[0xc000d81650] mac:[10 88 10 217 0 39] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.39/23] and MAC: 0a:58:0a:d9:00:27 2025-10-13T00:21:42.854679189+00:00 stderr F I1013 00:21:42.854656 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854679189+00:00 stderr F I1013 00:21:42.854672 28251 pods.go:220] [openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] addLogicalPort took 1.427109ms, libovsdb time 653.868µs 2025-10-13T00:21:42.854690459+00:00 stderr F I1013 00:21:42.854677 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "5bacb25d-97b6-4491-8fb4-99feae1d802a" 2025-10-13T00:21:42.854690459+00:00 stderr F I1013 00:21:42.854681 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 
after 0 failed attempt(s) 2025-10-13T00:21:42.854690459+00:00 stderr F I1013 00:21:42.854687 28251 default_network_controller.go:699] Recording success event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-10-13T00:21:42.854700279+00:00 stderr F I1013 00:21:42.854656 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854708690+00:00 stderr F I1013 00:21:42.854689 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854729860+00:00 stderr F I1013 00:21:42.854713 28251 port_cache.go:96] port-cache(openshift-marketplace_marketplace-operator-8b455464d-29pzg): added port &{name:openshift-marketplace_marketplace-operator-8b455464d-29pzg uuid:209ddb87-f880-45d7-bb2d-0426a70d7f75 logicalSwitch:crc ips:[0xc001a54570] mac:[10 88 10 217 0 30] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.30/23] and MAC: 0a:58:0a:d9:00:1e 2025-10-13T00:21:42.854729860+00:00 stderr F I1013 00:21:42.854711 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854729860+00:00 stderr F I1013 00:21:42.854724 28251 pods.go:220] [openshift-marketplace/marketplace-operator-8b455464d-29pzg] addLogicalPort took 1.263154ms, libovsdb time 639.788µs 2025-10-13T00:21:42.854741091+00:00 stderr F I1013 00:21:42.854702 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854769951+00:00 stderr F I1013 00:21:42.854756 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-29pzg after 0 failed attempt(s) 2025-10-13T00:21:42.854769951+00:00 stderr F I1013 00:21:42.854763 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/marketplace-operator-8b455464d-29pzg 2025-10-13T00:21:42.854779452+00:00 stderr F I1013 00:21:42.854770 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c3d30d24-1dab-4362-a72b-dd6762f1f84c" 2025-10-13T00:21:42.854812812+00:00 stderr F I1013 00:21:42.854778 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: 
Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854812812+00:00 stderr F I1013 00:21:42.854721 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854854724+00:00 stderr F I1013 00:21:42.854812 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:d3fa047a-b670-4067-b07b-06d9a1d3dbb1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02184096-16c6-4236-bbfe-4ffdc2c35434}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854863914+00:00 stderr F I1013 00:21:42.854817 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854863914+00:00 stderr F I1013 00:21:42.854859 28251 base_network_controller_pods.go:476] [default/openshift-service-ca/service-ca-666f99b6f-kk8kg] creating logical port openshift-service-ca_service-ca-666f99b6f-kk8kg for pod on switch crc 2025-10-13T00:21:42.854877034+00:00 stderr F I1013 00:21:42.854851 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854877034+00:00 stderr F I1013 00:21:42.854860 28251 port_cache.go:96] 
port-cache(openshift-dns-operator_dns-operator-75f687757b-nz2xb): added port &{name:openshift-dns-operator_dns-operator-75f687757b-nz2xb uuid:b212e2c2-3d4e-4898-aede-c926b74813f0 logicalSwitch:crc ips:[0xc0010b2bd0] mac:[10 88 10 217 0 18] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.18/23] and MAC: 0a:58:0a:d9:00:12 2025-10-13T00:21:42.854907875+00:00 stderr F I1013 00:21:42.854879 28251 pods.go:220] [openshift-dns-operator/dns-operator-75f687757b-nz2xb] addLogicalPort took 7.478561ms, libovsdb time 716.679µs 2025-10-13T00:21:42.854907875+00:00 stderr F I1013 00:21:42.854876 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.854907875+00:00 stderr F I1013 00:21:42.854902 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb after 0 failed attempt(s) 2025-10-13T00:21:42.854918605+00:00 stderr F I1013 00:21:42.854909 28251 default_network_controller.go:699] Recording success event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-10-13T00:21:42.854929376+00:00 stderr F I1013 00:21:42.854921 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "10603adc-d495-423c-9459-4caa405960bb" 2025-10-13T00:21:42.854967437+00:00 stderr F I1013 00:21:42.854938 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855001077+00:00 stderr F I1013 00:21:42.854976 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf): added port &{name:openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf uuid:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c logicalSwitch:crc ips:[0xc0015de690] mac:[10 88 10 217 0 11] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.11/23] and MAC: 0a:58:0a:d9:00:0b 2025-10-13T00:21:42.855010658+00:00 stderr F I1013 00:21:42.854994 28251 pods.go:220] [openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] addLogicalPort took 1.453939ms, libovsdb time 642.558µs 2025-10-13T00:21:42.855010658+00:00 stderr F I1013 00:21:42.854989 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855010658+00:00 stderr F I1013 00:21:42.855006 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf after 0 failed attempt(s) 2025-10-13T00:21:42.855024018+00:00 stderr F I1013 00:21:42.855012 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-10-13T00:21:42.855032748+00:00 stderr F I1013 00:21:42.855021 28251 ovnkube_controller.go:808] Unexpected last 
event type (1) in cache for pod with UID "8a5ae51d-d173-4531-8975-f164c975ce1f" 2025-10-13T00:21:42.855042029+00:00 stderr F I1013 00:21:42.855029 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855051399+00:00 stderr F I1013 00:21:42.855037 28251 obj_retry.go:358] Adding new object: *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.855059879+00:00 stderr F I1013 00:21:42.855048 28251 ovn.go:134] Ensuring zone local for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in node crc 2025-10-13T00:21:42.855059879+00:00 stderr F I1013 00:21:42.855036 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {978136d0-4574-48ef-909d-b0a98736fbac}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855092240+00:00 stderr F I1013 00:21:42.855064 28251 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm): added port &{name:openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm uuid:ad3d5728-34ed-421c-a749-1d7a957800a8 logicalSwitch:crc ips:[0xc001836780] mac:[10 88 10 217 0 21] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.21/23] and MAC: 0a:58:0a:d9:00:15 2025-10-13T00:21:42.855092240+00:00 stderr F I1013 00:21:42.855043 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855106680+00:00 stderr F I1013 00:21:42.855052 28251 model_client.go:381] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855106680+00:00 stderr F I1013 00:21:42.855099 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "120b38dc-8236-4fa6-a452-642b8ad738ee" 2025-10-13T00:21:42.855115261+00:00 stderr F I1013 00:21:42.855093 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:978136d0-4574-48ef-909d-b0a98736fbac}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855151282+00:00 stderr F I1013 00:21:42.855114 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855178412+00:00 stderr F I1013 00:21:42.855157 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855178412+00:00 stderr F I1013 00:21:42.855072 28251 base_network_controller_pods.go:476] [default/openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] creating logical port openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 for pod on switch crc 2025-10-13T00:21:42.855188042+00:00 stderr F I1013 00:21:42.855121 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} options:{GoMap:map[iface-id-ver:fe9b4942-29e7-4ef1-85c7-1a2153128dc7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:26 10.217.0.38]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0840ccdb-85bb-4744-9f7a-239a647cc044}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0840ccdb-85bb-4744-9f7a-239a647cc044}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.38 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{978136d0-4574-48ef-909d-b0a98736fbac}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:978136d0-4574-48ef-909d-b0a98736fbac}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855188042+00:00 stderr F I1013 00:21:42.855170 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855201463+00:00 stderr F I1013 00:21:42.855090 28251 pods.go:220] [openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] addLogicalPort took 1.590093ms, libovsdb time 642.527µs 2025-10-13T00:21:42.855209923+00:00 stderr F I1013 00:21:42.855192 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855218933+00:00 stderr F I1013 00:21:42.855206 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm after 0 failed attempt(s) 2025-10-13T00:21:42.855218933+00:00 stderr F I1013 00:21:42.855215 28251 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-10-13T00:21:42.855265215+00:00 stderr F I1013 00:21:42.855191 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-10-13T00:21:42.855284485+00:00 stderr F I1013 00:21:42.855260 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855317996+00:00 stderr F I1013 00:21:42.855292 28251 port_cache.go:96] port-cache(hostpath-provisioner_csi-hostpathplugin-hvm8g): added port &{name:hostpath-provisioner_csi-hostpathplugin-hvm8g uuid:52259988-af2b-4ee5-bbfe-801c4ebeb0ae logicalSwitch:crc ips:[0xc001258990] mac:[10 88 10 217 0 49] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.49/23] and MAC: 0a:58:0a:d9:00:31 2025-10-13T00:21:42.855336306+00:00 stderr F I1013 00:21:42.855311 28251 pods.go:220] [hostpath-provisioner/csi-hostpathplugin-hvm8g] addLogicalPort took 8.034916ms, libovsdb time 461.373µs 2025-10-13T00:21:42.855336306+00:00 stderr F I1013 00:21:42.855314 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855366277+00:00 stderr F I1013 00:21:42.855175 28251 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-target-v54bt] creating logical port openshift-network-diagnostics_network-check-target-v54bt for pod on switch crc 2025-10-13T00:21:42.855366277+00:00 stderr F I1013 00:21:42.855326 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855375998+00:00 stderr F I1013 00:21:42.855365 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "12e733dd-0939-4f1b-9cbb-13897e093787" 2025-10-13T00:21:42.855388438+00:00 stderr F I1013 00:21:42.855375 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855418229+00:00 stderr F I1013 00:21:42.855394 28251 port_cache.go:96] port-cache(openshift-ingress-canary_ingress-canary-2vhcn): added port &{name:openshift-ingress-canary_ingress-canary-2vhcn uuid:7a350d82-7987-4ce6-ae41-dd930411ca29 logicalSwitch:crc ips:[0xc000b89500] mac:[10 88 10 217 0 71] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.71/23] and MAC: 0a:58:0a:d9:00:47 2025-10-13T00:21:42.855418229+00:00 stderr F I1013 00:21:42.855408 28251 pods.go:220] [openshift-ingress-canary/ingress-canary-2vhcn] 
addLogicalPort took 10.748539ms, libovsdb time 4.977894ms 2025-10-13T00:21:42.855428309+00:00 stderr F I1013 00:21:42.855417 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn after 0 failed attempt(s) 2025-10-13T00:21:42.855428309+00:00 stderr F I1013 00:21:42.855411 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855439509+00:00 stderr F I1013 00:21:42.855417 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855452320+00:00 stderr F I1013 00:21:42.855422 28251 default_network_controller.go:699] Recording success event on pod openshift-ingress-canary/ingress-canary-2vhcn 2025-10-13T00:21:42.855490271+00:00 stderr F I1013 00:21:42.855458 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0b5d722a-1123-4935-9740-52a08d018bc9" 2025-10-13T00:21:42.855490271+00:00 stderr F I1013 00:21:42.855444 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d8ffb80-afa8-491b-8ba3-524ddf157155}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855490271+00:00 stderr F I1013 00:21:42.855476 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0f394926-bdb9-425c-b36e-264d7fd34550" 2025-10-13T00:21:42.855566553+00:00 stderr F I1013 00:21:42.855523 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2d8ffb80-afa8-491b-8ba3-524ddf157155}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855566553+00:00 stderr F I1013 00:21:42.855352 28251 obj_retry.go:379] Retry successful for *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g after 0 failed attempt(s) 2025-10-13T00:21:42.855605244+00:00 stderr F I1013 00:21:42.855575 28251 default_network_controller.go:699] Recording success event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-10-13T00:21:42.855605244+00:00 stderr F I1013 00:21:42.855465 28251 port_cache.go:96] port-cache(openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z): added port &{name:openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z uuid:f5604df7-c1b9-4360-a570-e22fbf62c520 logicalSwitch:crc ips:[0xc001846a20] mac:[10 88 10 217 0 9] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.9/23] and MAC: 0a:58:0a:d9:00:09 2025-10-13T00:21:42.855616394+00:00 stderr F I1013 00:21:42.855565 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 
logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855625914+00:00 stderr F I1013 00:21:42.855558 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} options:{GoMap:map[iface-id-ver:d3fa047a-b670-4067-b07b-06d9a1d3dbb1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:22 10.217.0.34]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02184096-16c6-4236-bbfe-4ffdc2c35434}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:02184096-16c6-4236-bbfe-4ffdc2c35434}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.34 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d8ffb80-afa8-491b-8ba3-524ddf157155}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2d8ffb80-afa8-491b-8ba3-524ddf157155}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855625914+00:00 stderr F I1013 00:21:42.855518 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855639065+00:00 stderr F I1013 00:21:42.855613 28251 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz): added port &{name:openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz uuid:69155615-9d93-4b72-bddd-739a6e731251 logicalSwitch:crc ips:[0xc0010b3a10] mac:[10 88 10 217 0 43] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.43/23] and MAC: 0a:58:0a:d9:00:2b 2025-10-13T00:21:42.855647415+00:00 stderr F I1013 00:21:42.855636 28251 pods.go:220] [openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] addLogicalPort took 7.175643ms, libovsdb time 4.693037ms 2025-10-13T00:21:42.855655645+00:00 stderr F I1013 00:21:42.855645 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz after 0 failed attempt(s) 2025-10-13T00:21:42.855655645+00:00 stderr F I1013 00:21:42.855650 28251 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 
2025-10-13T00:21:42.855683036+00:00 stderr F I1013 00:21:42.855648 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855725877+00:00 stderr F I1013 00:21:42.855701 28251 port_cache.go:96] port-cache(openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t): added port &{name:openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t uuid:710ea152-1844-44ad-b1a6-805ec9a3700e logicalSwitch:crc ips:[0xc000ebbbc0] mac:[10 88 10 217 0 45] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.45/23] and MAC: 0a:58:0a:d9:00:2d 2025-10-13T00:21:42.855725877+00:00 stderr F I1013 00:21:42.855718 28251 pods.go:220] [openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] addLogicalPort took 4.914622ms, libovsdb time 649.297µs 2025-10-13T00:21:42.855736727+00:00 stderr F I1013 00:21:42.855713 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855736727+00:00 stderr F I1013 00:21:42.855730 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t after 0 failed attempt(s) 2025-10-13T00:21:42.855745497+00:00 stderr F I1013 00:21:42.855736 28251 default_network_controller.go:699] Recording success event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-10-13T00:21:42.855745497+00:00 stderr F I1013 00:21:42.855578 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "bd556935-a077-45df-ba3f-d42c39326ccd" 2025-10-13T00:21:42.855760928+00:00 stderr F I1013 00:21:42.855754 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "7d51f445-054a-4e4f-a67b-a828f5a32511" 2025-10-13T00:21:42.855769608+00:00 stderr F I1013 00:21:42.855659 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855769608+00:00 stderr F I1013 00:21:42.855744 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855843010+00:00 stderr F I1013 00:21:42.855604 28251 pods.go:220] [openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] addLogicalPort took 1.619654ms, libovsdb time 934.675µs 2025-10-13T00:21:42.855843010+00:00 stderr F I1013 00:21:42.855824 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat 
Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855843010+00:00 stderr F I1013 00:21:42.855831 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "fe9b4942-29e7-4ef1-85c7-1a2153128dc7" 2025-10-13T00:21:42.855853450+00:00 stderr F I1013 00:21:42.855770 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855890581+00:00 stderr F I1013 00:21:42.855869 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "13045510-8717-4a71-ade4-be95a76440a7" 2025-10-13T00:21:42.855890581+00:00 stderr F I1013 00:21:42.855845 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855904442+00:00 stderr F I1013 00:21:42.855876 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855935763+00:00 stderr F I1013 00:21:42.855911 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855935763+00:00 stderr F I1013 00:21:42.855836 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z after 0 failed attempt(s) 2025-10-13T00:21:42.855964783+00:00 stderr F I1013 00:21:42.855943 28251 default_network_controller.go:699] Recording success event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-10-13T00:21:42.855974894+00:00 stderr F I1013 00:21:42.855779 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.855974894+00:00 stderr F I1013 00:21:42.855951 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856053896+00:00 stderr F I1013 00:21:42.855929 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: 
Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856053896+00:00 stderr F I1013 00:21:42.856030 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856073236+00:00 stderr F I1013 00:21:42.856030 28251 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:49cd5dc0-c0e0-4199-93cd-8637bea2739a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856073236+00:00 stderr F I1013 00:21:42.855805 28251 port_cache.go:96] port-cache(openshift-image-registry_image-registry-75b7bb6564-2mwg6): added port &{name:openshift-image-registry_image-registry-75b7bb6564-2mwg6 uuid:0840ccdb-85bb-4744-9f7a-239a647cc044 logicalSwitch:crc ips:[0xc0009c0930] mac:[10 88 10 217 0 38] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.38/23] and MAC: 0a:58:0a:d9:00:26 2025-10-13T00:21:42.856082777+00:00 stderr F I1013 00:21:42.856068 28251 pods.go:220] [openshift-image-registry/image-registry-75b7bb6564-2mwg6] addLogicalPort took 11.387346ms, libovsdb time 666.917µs 2025-10-13T00:21:42.856082777+00:00 stderr F I1013 00:21:42.856078 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 after 0 failed attempt(s) 2025-10-13T00:21:42.856091747+00:00 stderr F I1013 00:21:42.856084 28251 default_network_controller.go:699] Recording success event on pod openshift-image-registry/image-registry-75b7bb6564-2mwg6 2025-10-13T00:21:42.856121968+00:00 stderr F I1013 00:21:42.856085 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856121968+00:00 stderr F I1013 00:21:42.856099 28251 port_cache.go:96] port-cache(openshift-dns_dns-default-gbw49): added port &{name:openshift-dns_dns-default-gbw49 uuid:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213 logicalSwitch:crc ips:[0xc0011483c0] mac:[10 88 10 217 0 31] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.31/23] and MAC: 0a:58:0a:d9:00:1f 2025-10-13T00:21:42.856121968+00:00 stderr F I1013 00:21:42.856116 28251 
pods.go:220] [openshift-dns/dns-default-gbw49] addLogicalPort took 1.777787ms, libovsdb time 670.678µs 2025-10-13T00:21:42.856136538+00:00 stderr F I1013 00:21:42.856122 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns/dns-default-gbw49 after 0 failed attempt(s) 2025-10-13T00:21:42.856136538+00:00 stderr F I1013 00:21:42.856132 28251 default_network_controller.go:699] Recording success event on pod openshift-dns/dns-default-gbw49 2025-10-13T00:21:42.856145618+00:00 stderr F I1013 00:21:42.856119 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d3fa047a-b670-4067-b07b-06d9a1d3dbb1" 2025-10-13T00:21:42.856154139+00:00 stderr F I1013 00:21:42.856137 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856163089+00:00 stderr F I1013 00:21:42.856144 28251 port_cache.go:96] port-cache(openshift-marketplace_redhat-operators-hkptr): added port &{name:openshift-marketplace_redhat-operators-hkptr uuid:02184096-16c6-4236-bbfe-4ffdc2c35434 logicalSwitch:crc ips:[0xc001896120] mac:[10 88 10 217 0 34] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.34/23] and MAC: 0a:58:0a:d9:00:22 2025-10-13T00:21:42.856163089+00:00 stderr F I1013 00:21:42.856123 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856172499+00:00 stderr F I1013 00:21:42.856061 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: 
UUID: UUIDName:}] 2025-10-13T00:21:42.856172499+00:00 stderr F I1013 00:21:42.856162 28251 pods.go:220] [openshift-marketplace/redhat-operators-hkptr] addLogicalPort took 3.038342ms, libovsdb time 550.825µs 2025-10-13T00:21:42.856184769+00:00 stderr F I1013 00:21:42.856173 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/redhat-operators-hkptr after 0 failed attempt(s) 2025-10-13T00:21:42.856184769+00:00 stderr F I1013 00:21:42.856180 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-operators-hkptr 2025-10-13T00:21:42.856193720+00:00 stderr F I1013 00:21:42.856172 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856202700+00:00 stderr F I1013 00:21:42.856156 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856271482+00:00 stderr F I1013 00:21:42.856232 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856271482+00:00 stderr F I1013 00:21:42.856172 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856403305+00:00 stderr F I1013 00:21:42.856370 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "b54e8941-2fc4-432a-9e51-39684df9089e" 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856388 28251 port_cache.go:96] port-cache(openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv): added port &{name:openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv uuid:82630d91-1647-4c0c-aa84-8f820bcf919e logicalSwitch:crc ips:[0xc0008f5800] mac:[10 88 10 217 0 22] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.22/23] and MAC: 0a:58:0a:d9:00:16 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856411 28251 pods.go:220] [openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] addLogicalPort took 5.59005ms, libovsdb time 513.044µs 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856427 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv after 0 failed attempt(s) 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856434 28251 default_network_controller.go:699] Recording success event on pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856443 28251 port_cache.go:96] port-cache(openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8): added port &{name:openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 uuid:99ef3a4b-7858-4c9b-90db-217867afe36a logicalSwitch:crc ips:[0xc0013f9170] mac:[10 88 10 217 0 19] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.19/23] and MAC: 0a:58:0a:d9:00:13 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856460 28251 pods.go:220] [openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] addLogicalPort took 1.395118ms, libovsdb time 591.926µs 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856472 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 after 0 failed attempt(s) 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856476 28251 default_network_controller.go:699] 
Recording success event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856483 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856514 28251 port_cache.go:96] port-cache(openshift-service-ca_service-ca-666f99b6f-kk8kg): added port &{name:openshift-service-ca_service-ca-666f99b6f-kk8kg uuid:9409cb25-8c46-46db-98ab-5eafe9669ef8 logicalSwitch:crc ips:[0xc000861e90] mac:[10 88 10 217 0 40] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.40/23] and MAC: 0a:58:0a:d9:00:28 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856528 28251 pods.go:220] [openshift-service-ca/service-ca-666f99b6f-kk8kg] addLogicalPort took 1.676286ms, libovsdb time 577.936µs 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856534 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg after 0 failed attempt(s) 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856540 28251 default_network_controller.go:699] Recording success event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856551 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e4a7de23-6134-4044-902a-0900dc04a501" 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856619 28251 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-v54bt): added port &{name:openshift-network-diagnostics_network-check-target-v54bt uuid:c0f95133-023f-4bbd-8719-e29d2cfbb32d logicalSwitch:crc ips:[0xc00140e600] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856630 28251 pods.go:220] [openshift-network-diagnostics/network-check-target-v54bt] addLogicalPort took 5.493827ms, libovsdb time 457.852µs 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856641 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-target-v54bt after 0 failed attempt(s) 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856646 28251 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-target-v54bt 2025-10-13T00:21:42.856689843+00:00 stderr F I1013 00:21:42.856655 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "34a48baf-1bee-4921-8bb2-9b7320e76f79" 2025-10-13T00:21:42.856822096+00:00 stderr F I1013 00:21:42.856778 28251 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856833577+00:00 stderr F I1013 00:21:42.856808 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "1a3e81c3-c292-4130-9436-f94062c91efd" 2025-10-13T00:21:42.856856187+00:00 stderr F I1013 00:21:42.856829 28251 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856899769+00:00 stderr F I1013 00:21:42.856872 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "59748b9b-c309-4712-aa85-bb38d71c4915" 2025-10-13T00:21:42.856909969+00:00 stderr F I1013 00:21:42.856851 28251 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:49cd5dc0-c0e0-4199-93cd-8637bea2739a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.180 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:04b06371-7c5d-4b4a-bba2-ec3c8949a0a5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-10-13T00:21:42.856987131+00:00 stderr F I1013 00:21:42.856788 28251 port_cache.go:96] port-cache(openshift-controller-manager_controller-manager-778975cc4f-x5vcf): added port &{name:openshift-controller-manager_controller-manager-778975cc4f-x5vcf uuid:eda38bc9-7da5-4a6b-818c-4e1e8f85426d logicalSwitch:crc ips:[0xc0012cc270] mac:[10 88 10 217 0 87] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.87/23] and MAC: 0a:58:0a:d9:00:57 2025-10-13T00:21:42.856987131+00:00 stderr F I1013 00:21:42.856976 28251 pods.go:220] [openshift-controller-manager/controller-manager-778975cc4f-x5vcf] addLogicalPort took 9.840274ms, libovsdb time 712.659µs 2025-10-13T00:21:42.856998411+00:00 stderr F I1013 00:21:42.856988 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf after 0 failed attempt(s) 2025-10-13T00:21:42.857007141+00:00 stderr F I1013 00:21:42.856995 28251 default_network_controller.go:699] Recording success event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-10-13T00:21:42.857040732+00:00 stderr F I1013 00:21:42.857008 28251 port_cache.go:96] port-cache(openshift-console-operator_console-conversion-webhook-595f9969b-l6z49): added port &{name:openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 uuid:6056bee0-572a-4de7-bb24-40ca6a66be30 logicalSwitch:crc ips:[0xc000dabd10] mac:[10 88 10 217 0 61] expires:{wall:0 ext:0 loc:}} with IP: 
[10.217.0.61/23] and MAC: 0a:58:0a:d9:00:3d 2025-10-13T00:21:42.857040732+00:00 stderr F I1013 00:21:42.857032 28251 pods.go:220] [openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] addLogicalPort took 5.95406ms, libovsdb time 694.529µs 2025-10-13T00:21:42.857049463+00:00 stderr F I1013 00:21:42.857040 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 after 0 failed attempt(s) 2025-10-13T00:21:42.857049463+00:00 stderr F I1013 00:21:42.857046 28251 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-10-13T00:21:42.857347571+00:00 stderr F I1013 00:21:42.857301 28251 port_cache.go:96] port-cache(openshift-marketplace_community-operators-gjctm): added port &{name:openshift-marketplace_community-operators-gjctm uuid:0a1cf6f3-dd19-4bcc-b9c0-7589d4b1da95 logicalSwitch:crc ips:[0xc000e4a0f0] mac:[10 88 10 217 0 35] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.35/23] and MAC: 0a:58:0a:d9:00:23 2025-10-13T00:21:42.857365551+00:00 stderr F I1013 00:21:42.857355 28251 pods.go:220] [openshift-marketplace/community-operators-gjctm] addLogicalPort took 6.659149ms, libovsdb time 442.642µs 2025-10-13T00:21:42.857374311+00:00 stderr F I1013 00:21:42.857365 28251 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/community-operators-gjctm after 0 failed attempt(s) 2025-10-13T00:21:42.857383071+00:00 stderr F I1013 00:21:42.857372 28251 default_network_controller.go:699] Recording success event on pod openshift-marketplace/community-operators-gjctm 2025-10-13T00:21:42.857395532+00:00 stderr F I1013 00:21:42.857386 28251 obj_retry.go:413] Function iterateRetryResources for *v1.Pod ended (in 13.092782ms) 2025-10-13T00:21:42.857404112+00:00 stderr F I1013 00:21:42.857396 28251 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "49cd5dc0-c0e0-4199-93cd-8637bea2739a" 2025-10-13T00:21:43.287797766+00:00 stderr F I1013 00:21:43.287728 28251 ovs.go:159] Exec(42): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.27632.ctl connection-status 2025-10-13T00:21:43.296069729+00:00 stderr F I1013 00:21:43.295996 28251 ovs.go:162] Exec(42): stdout: "not connected\n" 2025-10-13T00:21:43.296069729+00:00 stderr F I1013 00:21:43.296022 28251 ovs.go:163] Exec(42): stderr: "" 2025-10-13T00:21:43.296069729+00:00 stderr F I1013 00:21:43.296032 28251 default_node_network_controller.go:385] Node connection status = not connected 2025-10-13T00:21:43.360607454+00:00 stderr F I1013 00:21:43.360558 28251 obj_retry.go:555] Update event received for resource *v1.Pod, old object is equal to new: false 2025-10-13T00:21:43.360701397+00:00 stderr F I1013 00:21:43.360671 28251 default_network_controller.go:670] Recording update event on pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:43.360737478+00:00 stderr F I1013 00:21:43.360727 28251 obj_retry.go:607] Update event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:43.360826720+00:00 stderr F I1013 00:21:43.360814 28251 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-wzh74 in node crc 2025-10-13T00:21:43.360888912+00:00 stderr F I1013 00:21:43.360879 28251 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:43.360917083+00:00 stderr F I1013 00:21:43.360907 28251 obj_retry.go:555] Update event 
received for resource *factory.egressIPPod, old object is equal to new: false 2025-10-13T00:21:43.360943783+00:00 stderr F I1013 00:21:43.360934 28251 obj_retry.go:607] Update event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-wzh74 2025-10-13T00:21:43.788475100+00:00 stderr F I1013 00:21:43.788023 28251 ovs.go:159] Exec(43): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.27632.ctl connection-status 2025-10-13T00:21:43.793784693+00:00 stderr F I1013 00:21:43.793749 28251 ovs.go:162] Exec(43): stdout: "connected\n" 2025-10-13T00:21:43.793784693+00:00 stderr F I1013 00:21:43.793766 28251 ovs.go:163] Exec(43): stderr: "" 2025-10-13T00:21:43.793784693+00:00 stderr F I1013 00:21:43.793775 28251 default_node_network_controller.go:385] Node connection status = connected 2025-10-13T00:21:43.793836565+00:00 stderr F I1013 00:21:43.793783 28251 ovs.go:159] Exec(44): /usr/bin/ovs-vsctl --timeout=15 -- br-exists br-int 2025-10-13T00:21:43.800451552+00:00 stderr F I1013 00:21:43.800425 28251 ovs.go:162] Exec(44): stdout: "" 2025-10-13T00:21:43.800491123+00:00 stderr F I1013 00:21:43.800481 28251 ovs.go:163] Exec(44): stderr: "" 2025-10-13T00:21:43.800532765+00:00 stderr F I1013 00:21:43.800523 28251 ovs.go:159] Exec(45): /usr/bin/ovs-ofctl dump-aggregate br-int 2025-10-13T00:21:43.810049941+00:00 stderr F I1013 00:21:43.810028 28251 ovs.go:162] Exec(45): stdout: "NXST_AGGREGATE reply (xid=0x4): packet_count=8215 byte_count=7769827 flow_count=2939\n" 2025-10-13T00:21:43.810088972+00:00 stderr F I1013 00:21:43.810078 28251 ovs.go:163] Exec(45): stderr: "" 2025-10-13T00:21:43.810133963+00:00 stderr F I1013 00:21:43.810115 28251 ovs.go:159] Exec(46): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_crc-to-br-int ofport 2025-10-13T00:21:43.816642238+00:00 stderr F I1013 00:21:43.816565 28251 ovs.go:162] Exec(46): stdout: "2\n" 2025-10-13T00:21:43.816642238+00:00 stderr F I1013 00:21:43.816625 28251 ovs.go:163] Exec(46): stderr: "" 2025-10-13T00:21:43.816672219+00:00 stderr F I1013 00:21:43.816639 28251 gateway.go:365] Gateway is ready 2025-10-13T00:21:43.816672219+00:00 stderr F I1013 00:21:43.816656 28251 gateway_localnet.go:78] Creating Local Gateway Openflow Manager 2025-10-13T00:21:43.816713620+00:00 stderr F I1013 00:21:43.816697 28251 ovs.go:159] Exec(47): /usr/bin/ovs-vsctl --timeout=15 get Interface patch-br-ex_crc-to-br-int ofport 2025-10-13T00:21:43.826097912+00:00 stderr F I1013 00:21:43.826026 28251 ovs.go:162] Exec(47): stdout: "2\n" 2025-10-13T00:21:43.826097912+00:00 stderr F I1013 00:21:43.826048 28251 ovs.go:163] Exec(47): stderr: "" 2025-10-13T00:21:43.826097912+00:00 stderr F I1013 00:21:43.826060 28251 ovs.go:159] Exec(48): /usr/bin/ovs-vsctl --timeout=15 get interface ens3 ofport 2025-10-13T00:21:43.835748982+00:00 stderr F I1013 00:21:43.835677 28251 ovs.go:162] Exec(48): stdout: "1\n" 2025-10-13T00:21:43.835803083+00:00 stderr F I1013 00:21:43.835778 28251 ovs.go:163] Exec(48): stderr: "" 2025-10-13T00:21:43.846449889+00:00 stderr F I1013 00:21:43.846404 28251 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 127.0.0.1/8 lo 2025-10-13T00:21:43.846449889+00:00 stderr F I1013 00:21:43.846443 28251 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 10.217.0.2/23 ovn-k8s-mp0 2025-10-13T00:21:43.846476950+00:00 stderr F I1013 00:21:43.846462 28251 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 169.254.169.2/29 br-ex 2025-10-13T00:21:43.846484720+00:00 stderr 
F I1013 00:21:43.846475 28251 node_ip_handler_linux.go:228] Node primary address changed to 192.168.126.11. Updating OVN encap IP. 2025-10-13T00:21:43.846508431+00:00 stderr F I1013 00:21:43.846491 28251 ovs.go:159] Exec(49): /usr/bin/ovs-vsctl --timeout=15 get Open_vSwitch . external_ids:ovn-encap-ip 2025-10-13T00:21:43.856726856+00:00 stderr F I1013 00:21:43.856670 28251 ovs.go:162] Exec(49): stdout: "\"192.168.126.11\"\n" 2025-10-13T00:21:43.856726856+00:00 stderr F I1013 00:21:43.856708 28251 ovs.go:163] Exec(49): stderr: "" 2025-10-13T00:21:43.856760457+00:00 stderr F I1013 00:21:43.856725 28251 node_ip_handler_linux.go:482] Will not update encap IP 192.168.126.11 - it is already configured 2025-10-13T00:21:43.856830669+00:00 stderr F I1013 00:21:43.856740 28251 node_ip_handler_linux.go:441] Node address changed to map[172.17.0.5/24:{} 172.18.0.5/24:{} 172.19.0.5/24:{} 192.168.122.10/24:{} 192.168.126.11/24:{} 38.102.83.180/24:{}]. Updating annotations. 2025-10-13T00:21:43.857454615+00:00 stderr F I1013 00:21:43.857420 28251 kube.go:128] Setting annotations map[k8s.ovn.org/host-cidrs:["172.17.0.5/24","172.18.0.5/24","172.19.0.5/24","192.168.122.10/24","192.168.126.11/24","38.102.83.180/24"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_crc","mac-address":"fa:16:3e:c3:15:08","ip-addresses":["38.102.83.180/24"],"ip-address":"38.102.83.180/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.180/24"}] on node crc 2025-10-13T00:21:43.869434437+00:00 stderr F I1013 00:21:43.869374 28251 gateway_shared_intf.go:2029] Setting OVN Masquerade route with source: 192.168.126.11 2025-10-13T00:21:43.869570911+00:00 stderr F I1013 00:21:43.869537 28251 ovs.go:159] Exec(50): /usr/sbin/ip route replace table 7 10.217.4.0/23 via 10.217.0.1 dev ovn-k8s-mp0 2025-10-13T00:21:43.869570911+00:00 stderr F I1013 00:21:43.869548 28251 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 11 Dst: 169.254.169.1/32 Src: 192.168.126.11 Gw: Flags: [] Table: 0 Realm: 0} 2025-10-13T00:21:43.869933831+00:00 stderr F I1013 00:21:43.869913 28251 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 11 Dst: 169.254.169.1/32 Src: 192.168.126.11 Gw: Flags: [] Table: 254 Realm: 0} 2025-10-13T00:21:43.871787431+00:00 stderr F I1013 00:21:43.871753 28251 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.4.0/23 Src: Gw: 10.217.0.1 Flags: [] Table: 7 Realm: 0}" 2025-10-13T00:21:43.872596413+00:00 stderr F I1013 00:21:43.872560 28251 ovs.go:162] Exec(50): stdout: "" 2025-10-13T00:21:43.872596413+00:00 stderr F I1013 00:21:43.872586 28251 ovs.go:163] Exec(50): stderr: "" 2025-10-13T00:21:43.872611963+00:00 stderr F I1013 00:21:43.872601 28251 gateway_shared_intf.go:1674] Successfully added route into custom routing table: 7 2025-10-13T00:21:43.872664534+00:00 stderr F I1013 00:21:43.872619 28251 ovs.go:159] Exec(51): /usr/sbin/ip -4 rule 2025-10-13T00:21:43.874584746+00:00 stderr F I1013 00:21:43.874538 28251 ovs.go:162] Exec(51): stdout: "0:\tfrom all lookup local\n30:\tfrom all fwmark 0x1745ec lookup 7\n32766:\tfrom all lookup main\n32767:\tfrom all lookup default\n" 2025-10-13T00:21:43.874584746+00:00 stderr F I1013 00:21:43.874569 28251 ovs.go:163] Exec(51): stderr: "" 2025-10-13T00:21:43.874602986+00:00 stderr F I1013 00:21:43.874585 28251 ovs.go:159] 
Exec(52): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.rp_filter=2 2025-10-13T00:21:43.875856200+00:00 stderr F I1013 00:21:43.875819 28251 ovs.go:162] Exec(52): stdout: "net.ipv4.conf.ovn-k8s-mp0.rp_filter = 2\n" 2025-10-13T00:21:43.875856200+00:00 stderr F I1013 00:21:43.875844 28251 ovs.go:163] Exec(52): stderr: "" 2025-10-13T00:21:43.875872581+00:00 stderr F I1013 00:21:43.875861 28251 ovs.go:159] Exec(53): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_crc-to-br-int ofport 2025-10-13T00:21:43.883097895+00:00 stderr F I1013 00:21:43.883046 28251 ovs.go:162] Exec(53): stdout: "2\n" 2025-10-13T00:21:43.883097895+00:00 stderr F I1013 00:21:43.883074 28251 ovs.go:163] Exec(53): stderr: "" 2025-10-13T00:21:43.883128626+00:00 stderr F I1013 00:21:43.883092 28251 ovs.go:159] Exec(54): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ens3 ofport 2025-10-13T00:21:43.890369041+00:00 stderr F I1013 00:21:43.890304 28251 ovs.go:162] Exec(54): stdout: "1\n" 2025-10-13T00:21:43.890410382+00:00 stderr F I1013 00:21:43.890398 28251 ovs.go:163] Exec(54): stderr: "" 2025-10-13T00:21:43.892733604+00:00 stderr F I1013 00:21:43.892666 28251 gateway_iptables.go:487] Chain: "OVN-KUBE-ITP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.895118708+00:00 stderr F I1013 00:21:43.895069 28251 gateway_iptables.go:487] Chain: "OVN-KUBE-ITP" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.898036657+00:00 stderr F I1013 00:21:43.897982 28251 gateway_iptables.go:487] Chain: "OVN-KUBE-EGRESS-SVC" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EGRESS-SVC --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.902849796+00:00 stderr F I1013 00:21:43.902778 28251 gateway_iptables.go:487] Chain: "OVN-KUBE-NODEPORT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-NODEPORT --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.904882611+00:00 stderr F I1013 00:21:43.904840 28251 gateway_iptables.go:487] Chain: "OVN-KUBE-EXTERNALIP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EXTERNALIP --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.907669796+00:00 stderr F I1013 00:21:43.907628 28251 gateway_iptables.go:487] Chain: "OVN-KUBE-ETP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ETP --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.907702967+00:00 stderr F I1013 00:21:43.907686 28251 iptables.go:108] Creating table: mangle chain: OUTPUT 2025-10-13T00:21:43.909730071+00:00 stderr F I1013 00:21:43.909695 28251 iptables.go:110] Chain: "OUTPUT" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. 
2025-10-13T00:21:43.911839728+00:00 stderr F I1013 00:21:43.911817 28251 iptables.go:108] Creating table: nat chain: OUTPUT 2025-10-13T00:21:43.914369936+00:00 stderr F I1013 00:21:43.914327 28251 iptables.go:110] Chain: "OUTPUT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.917086969+00:00 stderr F I1013 00:21:43.917057 28251 iptables.go:108] Creating table: nat chain: POSTROUTING 2025-10-13T00:21:43.918860707+00:00 stderr F I1013 00:21:43.918823 28251 iptables.go:110] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.921563859+00:00 stderr F I1013 00:21:43.921535 28251 iptables.go:108] Creating table: nat chain: PREROUTING 2025-10-13T00:21:43.923548423+00:00 stderr F I1013 00:21:43.923516 28251 iptables.go:110] Chain: "PREROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N PREROUTING --wait]: exit status 1: iptables: Chain already exists. 2025-10-13T00:21:43.935458213+00:00 stderr F I1013 00:21:43.935421 28251 gateway_shared_intf.go:2124] Ensuring IP Neighbor entry for: 169.254.169.1 2025-10-13T00:21:43.935539215+00:00 stderr F I1013 00:21:43.935523 28251 gateway_shared_intf.go:2124] Ensuring IP Neighbor entry for: 169.254.169.4 2025-10-13T00:21:43.935634178+00:00 stderr F I1013 00:21:43.935585 28251 obj_retry_gateway.go:28] [newRetryFrameworkNodeWithParameters] g.watchFactory=&{10 0xc0000df8f0 0xc0000df960 0xc0000df9d0 0xc0000dfa40 0xc0000dfab0 0xc0000dfb20 0xc0004f5ea0 0xc0000dfb90 0xc0000dfc00 map[0x23d45a0:0xc000275260 0x23d4ae0:0xc0002750a0 0x23d4d80:0xc000275180 0x23d5020:0xc0002752d0 0x23d52c0:0xc0002753b0 0x23d5aa0:0xc000275420 0x23f31a0:0xc000274e00 0x23f4020:0xc000274e70 0x23f4760:0xc000274fc0 0x23f5980:0xc000274f50 0x23f7680:0xc000275030 0x23f8160:0xc000274ee0] 0xc0002bcde0} 2025-10-13T00:21:43.935682609+00:00 stderr F I1013 00:21:43.935666 28251 gateway.go:143] Starting gateway service sync 2025-10-13T00:21:43.936447260+00:00 stderr F I1013 00:21:43.936419 28251 openflow_manager.go:85] Gateway OpenFlow sync requested 2025-10-13T00:21:43.936447260+00:00 stderr F I1013 00:21:43.936431 28251 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-ITP 2025-10-13T00:21:43.940717865+00:00 stderr F I1013 00:21:43.940667 28251 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-EGRESS-SVC 2025-10-13T00:21:43.942682467+00:00 stderr F I1013 00:21:43.942647 28251 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-NODEPORT 2025-10-13T00:21:43.945974016+00:00 stderr F I1013 00:21:43.945921 28251 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-EXTERNALIP 2025-10-13T00:21:43.948363470+00:00 stderr F I1013 00:21:43.948310 28251 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-ETP 2025-10-13T00:21:43.952633705+00:00 stderr F I1013 00:21:43.952560 28251 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-SNAT-MGMTPORT 2025-10-13T00:21:43.955512932+00:00 stderr F I1013 00:21:43.955476 28251 gateway_iptables.go:544] Recreating iptables rules for table: mangle, chain: OVN-KUBE-ITP 2025-10-13T00:21:43.958180274+00:00 stderr F I1013 00:21:43.958147 28251 gateway.go:160] Gateway service sync done. 
Time taken: 22.467034ms 2025-10-13T00:21:43.958193064+00:00 stderr F I1013 00:21:43.958184 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver/check-endpoints 2025-10-13T00:21:43.958221325+00:00 stderr F I1013 00:21:43.958202 28251 gateway_shared_intf.go:609] Adding service check-endpoints in namespace openshift-apiserver 2025-10-13T00:21:43.958270686+00:00 stderr F I1013 00:21:43.958254 28251 gateway_shared_intf.go:635] Updating already programmed rules for check-endpoints in namespace openshift-apiserver 2025-10-13T00:21:43.958279737+00:00 stderr F I1013 00:21:43.958270 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958287017+00:00 stderr F I1013 00:21:43.958279 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver/check-endpoints took: 78.993µs 2025-10-13T00:21:43.958294917+00:00 stderr F I1013 00:21:43.958288 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-controller-manager/controller-manager 2025-10-13T00:21:43.958302757+00:00 stderr F I1013 00:21:43.958295 28251 gateway_shared_intf.go:609] Adding service controller-manager in namespace openshift-controller-manager 2025-10-13T00:21:43.958328258+00:00 stderr F I1013 00:21:43.958315 28251 gateway_shared_intf.go:635] Updating already programmed rules for controller-manager in namespace openshift-controller-manager 2025-10-13T00:21:43.958335138+00:00 stderr F I1013 00:21:43.958323 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958335138+00:00 stderr F I1013 00:21:43.958332 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-controller-manager/controller-manager took: 36.921µs 2025-10-13T00:21:43.958357859+00:00 stderr F I1013 00:21:43.958338 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-daemon 2025-10-13T00:21:43.958370159+00:00 stderr F I1013 00:21:43.958365 28251 gateway_shared_intf.go:609] Adding service machine-config-daemon in namespace openshift-machine-config-operator 2025-10-13T00:21:43.958438521+00:00 stderr F I1013 00:21:43.958421 28251 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-daemon in namespace openshift-machine-config-operator 2025-10-13T00:21:43.958438521+00:00 stderr F I1013 00:21:43.958433 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958446071+00:00 stderr F I1013 00:21:43.958440 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-daemon took: 73.932µs 2025-10-13T00:21:43.958454321+00:00 stderr F I1013 00:21:43.958449 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-operator 2025-10-13T00:21:43.958462642+00:00 stderr F I1013 00:21:43.958458 28251 gateway_shared_intf.go:609] Adding service machine-config-operator in namespace openshift-machine-config-operator 2025-10-13T00:21:43.958492202+00:00 stderr F I1013 00:21:43.958480 28251 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-operator in namespace openshift-machine-config-operator 2025-10-13T00:21:43.958499123+00:00 stderr F I1013 00:21:43.958491 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958505913+00:00 stderr F I1013 00:21:43.958498 28251 obj_retry.go:541] Creating *factory.serviceForGateway 
openshift-machine-config-operator/machine-config-operator took: 40.632µs 2025-10-13T00:21:43.958512503+00:00 stderr F I1013 00:21:43.958506 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-operator/metrics 2025-10-13T00:21:43.958520883+00:00 stderr F I1013 00:21:43.958516 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-operator/metrics took: 740ns 2025-10-13T00:21:43.958529143+00:00 stderr F I1013 00:21:43.958524 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics 2025-10-13T00:21:43.958535744+00:00 stderr F I1013 00:21:43.958530 28251 gateway_shared_intf.go:609] Adding service olm-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.958566454+00:00 stderr F I1013 00:21:43.958553 28251 gateway_shared_intf.go:635] Updating already programmed rules for olm-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.958566454+00:00 stderr F I1013 00:21:43.958562 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958575855+00:00 stderr F I1013 00:21:43.958567 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics took: 37.041µs 2025-10-13T00:21:43.958583955+00:00 stderr F I1013 00:21:43.958574 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-etcd/etcd 2025-10-13T00:21:43.958583955+00:00 stderr F I1013 00:21:43.958580 28251 gateway_shared_intf.go:609] Adding service etcd in namespace openshift-etcd 2025-10-13T00:21:43.958611786+00:00 stderr F I1013 00:21:43.958599 28251 gateway_shared_intf.go:635] Updating already programmed rules for etcd in namespace openshift-etcd 2025-10-13T00:21:43.958611786+00:00 stderr F I1013 00:21:43.958608 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958619096+00:00 stderr F I1013 00:21:43.958613 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-etcd/etcd took: 33.321µs 2025-10-13T00:21:43.958625766+00:00 stderr F I1013 00:21:43.958620 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-scheduler-operator/metrics 2025-10-13T00:21:43.958632516+00:00 stderr F I1013 00:21:43.958627 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-scheduler-operator 2025-10-13T00:21:43.958654237+00:00 stderr F I1013 00:21:43.958643 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-scheduler-operator 2025-10-13T00:21:43.958654237+00:00 stderr F I1013 00:21:43.958651 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958662837+00:00 stderr F I1013 00:21:43.958657 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-scheduler-operator/metrics took: 31.011µs 2025-10-13T00:21:43.958669577+00:00 stderr F I1013 00:21:43.958663 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator-machine-webhook 2025-10-13T00:21:43.958676177+00:00 stderr F I1013 00:21:43.958668 28251 gateway_shared_intf.go:609] Adding service machine-api-operator-machine-webhook in namespace openshift-machine-api 2025-10-13T00:21:43.958698048+00:00 stderr F I1013 00:21:43.958687 28251 gateway_shared_intf.go:635] Updating already programmed rules for 
machine-api-operator-machine-webhook in namespace openshift-machine-api 2025-10-13T00:21:43.958698048+00:00 stderr F I1013 00:21:43.958695 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958706668+00:00 stderr F I1013 00:21:43.958701 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator-machine-webhook took: 32.031µs 2025-10-13T00:21:43.958713278+00:00 stderr F I1013 00:21:43.958707 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/redhat-operators 2025-10-13T00:21:43.958719889+00:00 stderr F I1013 00:21:43.958714 28251 gateway_shared_intf.go:609] Adding service redhat-operators in namespace openshift-marketplace 2025-10-13T00:21:43.958744299+00:00 stderr F I1013 00:21:43.958733 28251 gateway_shared_intf.go:635] Updating already programmed rules for redhat-operators in namespace openshift-marketplace 2025-10-13T00:21:43.958744299+00:00 stderr F I1013 00:21:43.958742 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958752829+00:00 stderr F I1013 00:21:43.958747 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/redhat-operators took: 33.321µs 2025-10-13T00:21:43.958763520+00:00 stderr F I1013 00:21:43.958753 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-multus/multus-admission-controller 2025-10-13T00:21:43.958763520+00:00 stderr F I1013 00:21:43.958759 28251 gateway_shared_intf.go:609] Adding service multus-admission-controller in namespace openshift-multus 2025-10-13T00:21:43.958786920+00:00 stderr F I1013 00:21:43.958774 28251 gateway_shared_intf.go:635] Updating already programmed rules for multus-admission-controller in namespace openshift-multus 2025-10-13T00:21:43.958786920+00:00 stderr F I1013 00:21:43.958782 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958795831+00:00 stderr F I1013 00:21:43.958788 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-multus/multus-admission-controller took: 29.01µs 2025-10-13T00:21:43.958803901+00:00 stderr F I1013 00:21:43.958794 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-authentication-operator/metrics 2025-10-13T00:21:43.958803901+00:00 stderr F I1013 00:21:43.958800 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-authentication-operator 2025-10-13T00:21:43.958826031+00:00 stderr F I1013 00:21:43.958815 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-authentication-operator 2025-10-13T00:21:43.958826031+00:00 stderr F I1013 00:21:43.958823 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958833162+00:00 stderr F I1013 00:21:43.958828 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-authentication-operator/metrics took: 28.47µs 2025-10-13T00:21:43.958839812+00:00 stderr F I1013 00:21:43.958834 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console/console 2025-10-13T00:21:43.958846442+00:00 stderr F I1013 00:21:43.958840 28251 gateway_shared_intf.go:609] Adding service console in namespace openshift-console 2025-10-13T00:21:43.958868513+00:00 stderr F I1013 00:21:43.958857 28251 gateway_shared_intf.go:635] Updating already programmed rules for console in namespace openshift-console 2025-10-13T00:21:43.958868513+00:00 
stderr F I1013 00:21:43.958865 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958875493+00:00 stderr F I1013 00:21:43.958870 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console/console took: 30.081µs 2025-10-13T00:21:43.958882093+00:00 stderr F I1013 00:21:43.958876 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console/downloads 2025-10-13T00:21:43.958888683+00:00 stderr F I1013 00:21:43.958882 28251 gateway_shared_intf.go:609] Adding service downloads in namespace openshift-console 2025-10-13T00:21:43.958907414+00:00 stderr F I1013 00:21:43.958896 28251 gateway_shared_intf.go:635] Updating already programmed rules for downloads in namespace openshift-console 2025-10-13T00:21:43.958907414+00:00 stderr F I1013 00:21:43.958905 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958915974+00:00 stderr F I1013 00:21:43.958910 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console/downloads took: 27.801µs 2025-10-13T00:21:43.958922594+00:00 stderr F I1013 00:21:43.958916 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-controller-manager-operator/metrics 2025-10-13T00:21:43.958929194+00:00 stderr F I1013 00:21:43.958922 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-controller-manager-operator 2025-10-13T00:21:43.958949015+00:00 stderr F I1013 00:21:43.958938 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-controller-manager-operator 2025-10-13T00:21:43.958949015+00:00 stderr F I1013 00:21:43.958946 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958959375+00:00 stderr F I1013 00:21:43.958951 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-controller-manager-operator/metrics took: 29.261µs 2025-10-13T00:21:43.958966025+00:00 stderr F I1013 00:21:43.958958 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-apiserver/apiserver 2025-10-13T00:21:43.958972635+00:00 stderr F I1013 00:21:43.958964 28251 gateway_shared_intf.go:609] Adding service apiserver in namespace openshift-kube-apiserver 2025-10-13T00:21:43.958991166+00:00 stderr F I1013 00:21:43.958980 28251 gateway_shared_intf.go:635] Updating already programmed rules for apiserver in namespace openshift-kube-apiserver 2025-10-13T00:21:43.958991166+00:00 stderr F I1013 00:21:43.958988 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.958998236+00:00 stderr F I1013 00:21:43.958993 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-apiserver/apiserver took: 29.231µs 2025-10-13T00:21:43.959004816+00:00 stderr F I1013 00:21:43.958999 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-oauth-apiserver/api 2025-10-13T00:21:43.959011406+00:00 stderr F I1013 00:21:43.959004 28251 gateway_shared_intf.go:609] Adding service api in namespace openshift-oauth-apiserver 2025-10-13T00:21:43.959031277+00:00 stderr F I1013 00:21:43.959019 28251 gateway_shared_intf.go:635] Updating already programmed rules for api in namespace openshift-oauth-apiserver 2025-10-13T00:21:43.959031277+00:00 stderr F I1013 00:21:43.959027 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959038257+00:00 stderr F I1013 00:21:43.959033 28251 obj_retry.go:541] 
Creating *factory.serviceForGateway openshift-oauth-apiserver/api took: 27.701µs 2025-10-13T00:21:43.959044857+00:00 stderr F I1013 00:21:43.959038 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics 2025-10-13T00:21:43.959051458+00:00 stderr F I1013 00:21:43.959044 28251 gateway_shared_intf.go:609] Adding service catalog-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.959069968+00:00 stderr F I1013 00:21:43.959059 28251 gateway_shared_intf.go:635] Updating already programmed rules for catalog-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.959069968+00:00 stderr F I1013 00:21:43.959067 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959078488+00:00 stderr F I1013 00:21:43.959073 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics took: 27.981µs 2025-10-13T00:21:43.959085158+00:00 stderr F I1013 00:21:43.959080 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-10-13T00:21:43.959091719+00:00 stderr F I1013 00:21:43.959086 28251 gateway_shared_intf.go:609] Adding service package-server-manager-metrics in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.959114499+00:00 stderr F I1013 00:21:43.959103 28251 gateway_shared_intf.go:635] Updating already programmed rules for package-server-manager-metrics in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.959114499+00:00 stderr F I1013 00:21:43.959112 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959123249+00:00 stderr F I1013 00:21:43.959117 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics took: 30.591µs 2025-10-13T00:21:43.959131200+00:00 stderr F I1013 00:21:43.959123 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway default/kubernetes 2025-10-13T00:21:43.959131200+00:00 stderr F I1013 00:21:43.959128 28251 gateway_shared_intf.go:609] Adding service kubernetes in namespace default 2025-10-13T00:21:43.959156820+00:00 stderr F I1013 00:21:43.959141 28251 gateway_shared_intf.go:635] Updating already programmed rules for kubernetes in namespace default 2025-10-13T00:21:43.959156820+00:00 stderr F I1013 00:21:43.959149 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959164051+00:00 stderr F I1013 00:21:43.959154 28251 obj_retry.go:541] Creating *factory.serviceForGateway default/kubernetes took: 26µs 2025-10-13T00:21:43.959164051+00:00 stderr F I1013 00:21:43.959160 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console-operator/metrics 2025-10-13T00:21:43.959171031+00:00 stderr F I1013 00:21:43.959166 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-console-operator 2025-10-13T00:21:43.959194841+00:00 stderr F I1013 00:21:43.959182 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-console-operator 2025-10-13T00:21:43.959194841+00:00 stderr F I1013 00:21:43.959190 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959203482+00:00 stderr F I1013 00:21:43.959196 28251 obj_retry.go:541] Creating 
*factory.serviceForGateway openshift-console-operator/metrics took: 29.35µs 2025-10-13T00:21:43.959211632+00:00 stderr F I1013 00:21:43.959201 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console-operator/webhook 2025-10-13T00:21:43.959211632+00:00 stderr F I1013 00:21:43.959207 28251 gateway_shared_intf.go:609] Adding service webhook in namespace openshift-console-operator 2025-10-13T00:21:43.959238163+00:00 stderr F I1013 00:21:43.959226 28251 gateway_shared_intf.go:635] Updating already programmed rules for webhook in namespace openshift-console-operator 2025-10-13T00:21:43.959245143+00:00 stderr F I1013 00:21:43.959236 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959252643+00:00 stderr F I1013 00:21:43.959243 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console-operator/webhook took: 34.521µs 2025-10-13T00:21:43.959260713+00:00 stderr F I1013 00:21:43.959251 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-controller-manager-operator/metrics 2025-10-13T00:21:43.959268883+00:00 stderr F I1013 00:21:43.959259 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-controller-manager-operator 2025-10-13T00:21:43.959291014+00:00 stderr F I1013 00:21:43.959278 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-controller-manager-operator 2025-10-13T00:21:43.959291014+00:00 stderr F I1013 00:21:43.959287 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959299474+00:00 stderr F I1013 00:21:43.959293 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-controller-manager-operator/metrics took: 34.231µs 2025-10-13T00:21:43.959307084+00:00 stderr F I1013 00:21:43.959299 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/cluster-autoscaler-operator 2025-10-13T00:21:43.959314645+00:00 stderr F I1013 00:21:43.959305 28251 gateway_shared_intf.go:609] Adding service cluster-autoscaler-operator in namespace openshift-machine-api 2025-10-13T00:21:43.959327225+00:00 stderr F I1013 00:21:43.959319 28251 gateway_shared_intf.go:635] Updating already programmed rules for cluster-autoscaler-operator in namespace openshift-machine-api 2025-10-13T00:21:43.959335425+00:00 stderr F I1013 00:21:43.959328 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959357376+00:00 stderr F I1013 00:21:43.959333 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/cluster-autoscaler-operator took: 28.071µs 2025-10-13T00:21:43.959369866+00:00 stderr F I1013 00:21:43.959360 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-controllers 2025-10-13T00:21:43.959369866+00:00 stderr F I1013 00:21:43.959366 28251 gateway_shared_intf.go:609] Adding service machine-api-controllers in namespace openshift-machine-api 2025-10-13T00:21:43.959385607+00:00 stderr F I1013 00:21:43.959379 28251 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-controllers in namespace openshift-machine-api 2025-10-13T00:21:43.959394697+00:00 stderr F I1013 00:21:43.959386 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959394697+00:00 stderr F I1013 00:21:43.959391 28251 obj_retry.go:541] Creating *factory.serviceForGateway 
openshift-machine-api/machine-api-controllers took: 24.831µs 2025-10-13T00:21:43.959405497+00:00 stderr F I1013 00:21:43.959398 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator-webhook 2025-10-13T00:21:43.959413577+00:00 stderr F I1013 00:21:43.959406 28251 gateway_shared_intf.go:609] Adding service machine-api-operator-webhook in namespace openshift-machine-api 2025-10-13T00:21:43.959437418+00:00 stderr F I1013 00:21:43.959423 28251 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator-webhook in namespace openshift-machine-api 2025-10-13T00:21:43.959437418+00:00 stderr F I1013 00:21:43.959435 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959458068+00:00 stderr F I1013 00:21:43.959442 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator-webhook took: 35.171µs 2025-10-13T00:21:43.959458068+00:00 stderr F I1013 00:21:43.959453 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway default/openshift 2025-10-13T00:21:43.959477869+00:00 stderr F I1013 00:21:43.959462 28251 obj_retry.go:541] Creating *factory.serviceForGateway default/openshift took: 710ns 2025-10-13T00:21:43.959477869+00:00 stderr F I1013 00:21:43.959473 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/marketplace-operator-metrics 2025-10-13T00:21:43.959484969+00:00 stderr F I1013 00:21:43.959480 28251 gateway_shared_intf.go:609] Adding service marketplace-operator-metrics in namespace openshift-marketplace 2025-10-13T00:21:43.959508110+00:00 stderr F I1013 00:21:43.959497 28251 gateway_shared_intf.go:635] Updating already programmed rules for marketplace-operator-metrics in namespace openshift-marketplace 2025-10-13T00:21:43.959514880+00:00 stderr F I1013 00:21:43.959506 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959521550+00:00 stderr F I1013 00:21:43.959512 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/marketplace-operator-metrics took: 31.681µs 2025-10-13T00:21:43.959528150+00:00 stderr F I1013 00:21:43.959520 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-multus/network-metrics-service 2025-10-13T00:21:43.959548981+00:00 stderr F I1013 00:21:43.959536 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-multus/network-metrics-service took: 490ns 2025-10-13T00:21:43.959555681+00:00 stderr F I1013 00:21:43.959548 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-10-13T00:21:43.959562301+00:00 stderr F I1013 00:21:43.959556 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-control-plane took: 700ns 2025-10-13T00:21:43.959570431+00:00 stderr F I1013 00:21:43.959564 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-machine-approver/machine-approver 2025-10-13T00:21:43.959578712+00:00 stderr F I1013 00:21:43.959574 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-machine-approver/machine-approver took: 520ns 2025-10-13T00:21:43.959588602+00:00 stderr F I1013 00:21:43.959582 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-diagnostics/network-check-source 2025-10-13T00:21:43.959595212+00:00 
stderr F I1013 00:21:43.959590 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-diagnostics/network-check-source took: 550ns 2025-10-13T00:21:43.959601752+00:00 stderr F I1013 00:21:43.959596 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-route-controller-manager/route-controller-manager 2025-10-13T00:21:43.959608282+00:00 stderr F I1013 00:21:43.959603 28251 gateway_shared_intf.go:609] Adding service route-controller-manager in namespace openshift-route-controller-manager 2025-10-13T00:21:43.959636503+00:00 stderr F I1013 00:21:43.959623 28251 gateway_shared_intf.go:635] Updating already programmed rules for route-controller-manager in namespace openshift-route-controller-manager 2025-10-13T00:21:43.959644803+00:00 stderr F I1013 00:21:43.959634 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959652844+00:00 stderr F I1013 00:21:43.959642 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-route-controller-manager/route-controller-manager took: 37.631µs 2025-10-13T00:21:43.959659894+00:00 stderr F I1013 00:21:43.959650 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-image-registry/image-registry-operator 2025-10-13T00:21:43.959666514+00:00 stderr F I1013 00:21:43.959659 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-image-registry/image-registry-operator took: 780ns 2025-10-13T00:21:43.959673134+00:00 stderr F I1013 00:21:43.959667 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-scheduler/scheduler 2025-10-13T00:21:43.959681264+00:00 stderr F I1013 00:21:43.959675 28251 gateway_shared_intf.go:609] Adding service scheduler in namespace openshift-kube-scheduler 2025-10-13T00:21:43.959711505+00:00 stderr F I1013 00:21:43.959699 28251 gateway_shared_intf.go:635] Updating already programmed rules for scheduler in namespace openshift-kube-scheduler 2025-10-13T00:21:43.959718305+00:00 stderr F I1013 00:21:43.959710 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959724926+00:00 stderr F I1013 00:21:43.959717 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-scheduler/scheduler took: 42.251µs 2025-10-13T00:21:43.959731516+00:00 stderr F I1013 00:21:43.959723 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/redhat-marketplace 2025-10-13T00:21:43.959738106+00:00 stderr F I1013 00:21:43.959731 28251 gateway_shared_intf.go:609] Adding service redhat-marketplace in namespace openshift-marketplace 2025-10-13T00:21:43.959764347+00:00 stderr F I1013 00:21:43.959752 28251 gateway_shared_intf.go:635] Updating already programmed rules for redhat-marketplace in namespace openshift-marketplace 2025-10-13T00:21:43.959771067+00:00 stderr F I1013 00:21:43.959764 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959777667+00:00 stderr F I1013 00:21:43.959771 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/redhat-marketplace took: 39.711µs 2025-10-13T00:21:43.959785777+00:00 stderr F I1013 00:21:43.959779 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver/api 2025-10-13T00:21:43.959793997+00:00 stderr F I1013 00:21:43.959788 28251 gateway_shared_intf.go:609] Adding service api in namespace openshift-apiserver 2025-10-13T00:21:43.959819788+00:00 stderr F I1013 
00:21:43.959808 28251 gateway_shared_intf.go:635] Updating already programmed rules for api in namespace openshift-apiserver 2025-10-13T00:21:43.959827488+00:00 stderr F I1013 00:21:43.959819 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959843599+00:00 stderr F I1013 00:21:43.959826 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver/api took: 38.211µs 2025-10-13T00:21:43.959843599+00:00 stderr F I1013 00:21:43.959833 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-samples-operator/metrics 2025-10-13T00:21:43.959852129+00:00 stderr F I1013 00:21:43.959842 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-samples-operator/metrics took: 580ns 2025-10-13T00:21:43.959859449+00:00 stderr F I1013 00:21:43.959850 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-config-operator/metrics 2025-10-13T00:21:43.959866029+00:00 stderr F I1013 00:21:43.959857 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-config-operator 2025-10-13T00:21:43.959895170+00:00 stderr F I1013 00:21:43.959882 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-config-operator 2025-10-13T00:21:43.959901950+00:00 stderr F I1013 00:21:43.959894 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959908531+00:00 stderr F I1013 00:21:43.959900 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-config-operator/metrics took: 43.131µs 2025-10-13T00:21:43.959915101+00:00 stderr F I1013 00:21:43.959909 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-image-registry/image-registry 2025-10-13T00:21:43.959923201+00:00 stderr F I1013 00:21:43.959917 28251 gateway_shared_intf.go:609] Adding service image-registry in namespace openshift-image-registry 2025-10-13T00:21:43.959950842+00:00 stderr F I1013 00:21:43.959939 28251 gateway_shared_intf.go:635] Updating already programmed rules for image-registry in namespace openshift-image-registry 2025-10-13T00:21:43.959957612+00:00 stderr F I1013 00:21:43.959950 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.959964212+00:00 stderr F I1013 00:21:43.959957 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-image-registry/image-registry took: 39.801µs 2025-10-13T00:21:43.959970782+00:00 stderr F I1013 00:21:43.959965 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-version/cluster-version-operator 2025-10-13T00:21:43.959978912+00:00 stderr F I1013 00:21:43.959973 28251 gateway_shared_intf.go:609] Adding service cluster-version-operator in namespace openshift-cluster-version 2025-10-13T00:21:43.960007263+00:00 stderr F I1013 00:21:43.959996 28251 gateway_shared_intf.go:635] Updating already programmed rules for cluster-version-operator in namespace openshift-cluster-version 2025-10-13T00:21:43.960014023+00:00 stderr F I1013 00:21:43.960006 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960020604+00:00 stderr F I1013 00:21:43.960013 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-version/cluster-version-operator took: 39.851µs 2025-10-13T00:21:43.960027214+00:00 stderr F I1013 00:21:43.960021 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway 
openshift-dns-operator/metrics 2025-10-13T00:21:43.960033774+00:00 stderr F I1013 00:21:43.960029 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-dns-operator 2025-10-13T00:21:43.960068715+00:00 stderr F I1013 00:21:43.960056 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-dns-operator 2025-10-13T00:21:43.960075445+00:00 stderr F I1013 00:21:43.960068 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960082055+00:00 stderr F I1013 00:21:43.960075 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-dns-operator/metrics took: 46.061µs 2025-10-13T00:21:43.960088615+00:00 stderr F I1013 00:21:43.960083 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/packageserver-service 2025-10-13T00:21:43.960098136+00:00 stderr F I1013 00:21:43.960091 28251 gateway_shared_intf.go:609] Adding service packageserver-service in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.960125246+00:00 stderr F I1013 00:21:43.960113 28251 gateway_shared_intf.go:635] Updating already programmed rules for packageserver-service in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.960132007+00:00 stderr F I1013 00:21:43.960124 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960138587+00:00 stderr F I1013 00:21:43.960131 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/packageserver-service took: 40.081µs 2025-10-13T00:21:43.960145167+00:00 stderr F I1013 00:21:43.960139 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver-operator/metrics 2025-10-13T00:21:43.960153287+00:00 stderr F I1013 00:21:43.960147 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-apiserver-operator 2025-10-13T00:21:43.960182598+00:00 stderr F I1013 00:21:43.960171 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-apiserver-operator 2025-10-13T00:21:43.960189358+00:00 stderr F I1013 00:21:43.960182 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960195938+00:00 stderr F I1013 00:21:43.960189 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver-operator/metrics took: 41.221µs 2025-10-13T00:21:43.960202498+00:00 stderr F I1013 00:21:43.960197 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-dns/dns-default 2025-10-13T00:21:43.960210579+00:00 stderr F I1013 00:21:43.960205 28251 gateway_shared_intf.go:609] Adding service dns-default in namespace openshift-dns 2025-10-13T00:21:43.960241209+00:00 stderr F I1013 00:21:43.960229 28251 gateway_shared_intf.go:635] Updating already programmed rules for dns-default in namespace openshift-dns 2025-10-13T00:21:43.960247950+00:00 stderr F I1013 00:21:43.960240 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960254540+00:00 stderr F I1013 00:21:43.960247 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-dns/dns-default took: 42.752µs 2025-10-13T00:21:43.960261110+00:00 stderr F I1013 00:21:43.960255 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress/router-internal-default 2025-10-13T00:21:43.960269220+00:00 stderr F I1013 00:21:43.960264 28251 
gateway_shared_intf.go:609] Adding service router-internal-default in namespace openshift-ingress 2025-10-13T00:21:43.960299591+00:00 stderr F I1013 00:21:43.960288 28251 gateway_shared_intf.go:635] Updating already programmed rules for router-internal-default in namespace openshift-ingress 2025-10-13T00:21:43.960306331+00:00 stderr F I1013 00:21:43.960299 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960312941+00:00 stderr F I1013 00:21:43.960307 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress/router-internal-default took: 42.491µs 2025-10-13T00:21:43.960321062+00:00 stderr F I1013 00:21:43.960315 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator 2025-10-13T00:21:43.960331922+00:00 stderr F I1013 00:21:43.960323 28251 gateway_shared_intf.go:609] Adding service machine-api-operator in namespace openshift-machine-api 2025-10-13T00:21:43.960390023+00:00 stderr F I1013 00:21:43.960377 28251 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator in namespace openshift-machine-api 2025-10-13T00:21:43.960399764+00:00 stderr F I1013 00:21:43.960389 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960409394+00:00 stderr F I1013 00:21:43.960397 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator took: 73.272µs 2025-10-13T00:21:43.960409394+00:00 stderr F I1013 00:21:43.960405 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-node 2025-10-13T00:21:43.960428584+00:00 stderr F I1013 00:21:43.960414 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-node took: 490ns 2025-10-13T00:21:43.960428584+00:00 stderr F I1013 00:21:43.960425 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-controller-manager/kube-controller-manager 2025-10-13T00:21:43.960448755+00:00 stderr F I1013 00:21:43.960434 28251 gateway_shared_intf.go:609] Adding service kube-controller-manager in namespace openshift-kube-controller-manager 2025-10-13T00:21:43.960472116+00:00 stderr F I1013 00:21:43.960460 28251 gateway_shared_intf.go:635] Updating already programmed rules for kube-controller-manager in namespace openshift-kube-controller-manager 2025-10-13T00:21:43.960478866+00:00 stderr F I1013 00:21:43.960471 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960485546+00:00 stderr F I1013 00:21:43.960478 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-controller-manager/kube-controller-manager took: 44.931µs 2025-10-13T00:21:43.960492136+00:00 stderr F I1013 00:21:43.960487 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-storage-version-migrator-operator/metrics 2025-10-13T00:21:43.960500246+00:00 stderr F I1013 00:21:43.960495 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-storage-version-migrator-operator 2025-10-13T00:21:43.960528097+00:00 stderr F I1013 00:21:43.960516 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-storage-version-migrator-operator 2025-10-13T00:21:43.960534807+00:00 stderr F I1013 00:21:43.960527 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960541418+00:00 
stderr F I1013 00:21:43.960534 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-storage-version-migrator-operator/metrics took: 39.191µs 2025-10-13T00:21:43.960548008+00:00 stderr F I1013 00:21:43.960542 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-monitoring/cluster-monitoring-operator 2025-10-13T00:21:43.960556118+00:00 stderr F I1013 00:21:43.960551 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-monitoring/cluster-monitoring-operator took: 570ns 2025-10-13T00:21:43.960564198+00:00 stderr F I1013 00:21:43.960559 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-service-ca-operator/metrics 2025-10-13T00:21:43.960573228+00:00 stderr F I1013 00:21:43.960567 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-service-ca-operator 2025-10-13T00:21:43.960602089+00:00 stderr F I1013 00:21:43.960589 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-service-ca-operator 2025-10-13T00:21:43.960609619+00:00 stderr F I1013 00:21:43.960600 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960616210+00:00 stderr F I1013 00:21:43.960607 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-service-ca-operator/metrics took: 40.421µs 2025-10-13T00:21:43.960622790+00:00 stderr F I1013 00:21:43.960614 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-authentication/oauth-openshift 2025-10-13T00:21:43.960629410+00:00 stderr F I1013 00:21:43.960623 28251 gateway_shared_intf.go:609] Adding service oauth-openshift in namespace openshift-authentication 2025-10-13T00:21:43.960657121+00:00 stderr F I1013 00:21:43.960645 28251 gateway_shared_intf.go:635] Updating already programmed rules for oauth-openshift in namespace openshift-authentication 2025-10-13T00:21:43.960663931+00:00 stderr F I1013 00:21:43.960656 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960670541+00:00 stderr F I1013 00:21:43.960663 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-authentication/oauth-openshift took: 40.361µs 2025-10-13T00:21:43.960677141+00:00 stderr F I1013 00:21:43.960671 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-apiserver-operator/metrics 2025-10-13T00:21:43.960685271+00:00 stderr F I1013 00:21:43.960679 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-apiserver-operator 2025-10-13T00:21:43.960714092+00:00 stderr F I1013 00:21:43.960701 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-apiserver-operator 2025-10-13T00:21:43.960720952+00:00 stderr F I1013 00:21:43.960712 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960727543+00:00 stderr F I1013 00:21:43.960719 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-apiserver-operator/metrics took: 40.141µs 2025-10-13T00:21:43.960735163+00:00 stderr F I1013 00:21:43.960727 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-etcd-operator/metrics 2025-10-13T00:21:43.960743353+00:00 stderr F I1013 00:21:43.960734 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-etcd-operator 2025-10-13T00:21:43.960765634+00:00 stderr F I1013 00:21:43.960752 28251 
gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-etcd-operator 2025-10-13T00:21:43.960765634+00:00 stderr F I1013 00:21:43.960760 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960774174+00:00 stderr F I1013 00:21:43.960766 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-etcd-operator/metrics took: 32.361µs 2025-10-13T00:21:43.960781864+00:00 stderr F I1013 00:21:43.960772 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress-canary/ingress-canary 2025-10-13T00:21:43.960781864+00:00 stderr F I1013 00:21:43.960778 28251 gateway_shared_intf.go:609] Adding service ingress-canary in namespace openshift-ingress-canary 2025-10-13T00:21:43.960814415+00:00 stderr F I1013 00:21:43.960801 28251 gateway_shared_intf.go:635] Updating already programmed rules for ingress-canary in namespace openshift-ingress-canary 2025-10-13T00:21:43.960814415+00:00 stderr F I1013 00:21:43.960809 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960823575+00:00 stderr F I1013 00:21:43.960815 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress-canary/ingress-canary took: 36.671µs 2025-10-13T00:21:43.960831735+00:00 stderr F I1013 00:21:43.960823 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress-operator/metrics 2025-10-13T00:21:43.960838936+00:00 stderr F I1013 00:21:43.960830 28251 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-ingress-operator 2025-10-13T00:21:43.960861266+00:00 stderr F I1013 00:21:43.960846 28251 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-ingress-operator 2025-10-13T00:21:43.960861266+00:00 stderr F I1013 00:21:43.960854 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960873016+00:00 stderr F I1013 00:21:43.960859 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress-operator/metrics took: 30.341µs 2025-10-13T00:21:43.960873016+00:00 stderr F I1013 00:21:43.960868 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/control-plane-machine-set-operator 2025-10-13T00:21:43.960913658+00:00 stderr F I1013 00:21:43.960899 28251 gateway_shared_intf.go:609] Adding service control-plane-machine-set-operator in namespace openshift-machine-api 2025-10-13T00:21:43.960936798+00:00 stderr F I1013 00:21:43.960920 28251 gateway_shared_intf.go:635] Updating already programmed rules for control-plane-machine-set-operator in namespace openshift-machine-api 2025-10-13T00:21:43.960936798+00:00 stderr F I1013 00:21:43.960930 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960945348+00:00 stderr F I1013 00:21:43.960937 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/control-plane-machine-set-operator took: 60.592µs 2025-10-13T00:21:43.960953039+00:00 stderr F I1013 00:21:43.960944 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-controller 2025-10-13T00:21:43.960959689+00:00 stderr F I1013 00:21:43.960951 28251 gateway_shared_intf.go:609] Adding service machine-config-controller in namespace openshift-machine-config-operator 2025-10-13T00:21:43.960980539+00:00 stderr F I1013 00:21:43.960969 28251 gateway_shared_intf.go:635] Updating 
already programmed rules for machine-config-controller in namespace openshift-machine-config-operator 2025-10-13T00:21:43.960980539+00:00 stderr F I1013 00:21:43.960978 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.960989130+00:00 stderr F I1013 00:21:43.960984 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-controller took: 33.421µs 2025-10-13T00:21:43.960995760+00:00 stderr F I1013 00:21:43.960990 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/community-operators 2025-10-13T00:21:43.961002370+00:00 stderr F I1013 00:21:43.960997 28251 gateway_shared_intf.go:609] Adding service community-operators in namespace openshift-marketplace 2025-10-13T00:21:43.961026031+00:00 stderr F I1013 00:21:43.961015 28251 gateway_shared_intf.go:635] Updating already programmed rules for community-operators in namespace openshift-marketplace 2025-10-13T00:21:43.961032711+00:00 stderr F I1013 00:21:43.961024 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.961039361+00:00 stderr F I1013 00:21:43.961030 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/community-operators took: 32.961µs 2025-10-13T00:21:43.961039361+00:00 stderr F I1013 00:21:43.961036 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-diagnostics/network-check-target 2025-10-13T00:21:43.961047771+00:00 stderr F I1013 00:21:43.961043 28251 gateway_shared_intf.go:609] Adding service network-check-target in namespace openshift-network-diagnostics 2025-10-13T00:21:43.961071602+00:00 stderr F I1013 00:21:43.961061 28251 gateway_shared_intf.go:635] Updating already programmed rules for network-check-target in namespace openshift-network-diagnostics 2025-10-13T00:21:43.961078322+00:00 stderr F I1013 00:21:43.961069 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.961085242+00:00 stderr F I1013 00:21:43.961076 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-diagnostics/network-check-target took: 32.291µs 2025-10-13T00:21:43.961085242+00:00 stderr F I1013 00:21:43.961082 28251 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/certified-operators 2025-10-13T00:21:43.961104893+00:00 stderr F I1013 00:21:43.961090 28251 gateway_shared_intf.go:609] Adding service certified-operators in namespace openshift-marketplace 2025-10-13T00:21:43.961122113+00:00 stderr F I1013 00:21:43.961111 28251 gateway_shared_intf.go:635] Updating already programmed rules for certified-operators in namespace openshift-marketplace 2025-10-13T00:21:43.961122113+00:00 stderr F I1013 00:21:43.961120 28251 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-10-13T00:21:43.961132433+00:00 stderr F I1013 00:21:43.961125 28251 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/certified-operators took: 36.561µs 2025-10-13T00:21:43.961152514+00:00 stderr F I1013 00:21:43.961136 28251 factory.go:988] Added *v1.Service event handler 11 2025-10-13T00:21:43.961205125+00:00 stderr F I1013 00:21:43.961159 28251 obj_retry_gateway.go:28] [newRetryFrameworkNodeWithParameters] g.watchFactory=&{11 0xc0000df8f0 0xc0000df960 0xc0000df9d0 0xc0000dfa40 0xc0000dfab0 0xc0000dfb20 0xc0004f5ea0 0xc0000dfb90 0xc0000dfc00 map[0x23d45a0:0xc000275260 0x23d4ae0:0xc0002750a0 0x23d4d80:0xc000275180 
0x23d5020:0xc0002752d0 0x23d52c0:0xc0002753b0 0x23d5aa0:0xc000275420 0x23f31a0:0xc000274e00 0x23f4020:0xc000274e70 0x23f4760:0xc000274fc0 0x23f5980:0xc000274f50 0x23f7680:0xc000275030 0x23f8160:0xc000274ee0] 0xc0002bcde0} 2025-10-13T00:21:43.961249567+00:00 stderr F I1013 00:21:43.961235 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/community-operators-h826k 2025-10-13T00:21:43.961267177+00:00 stderr F I1013 00:21:43.961254 28251 gateway_shared_intf.go:842] Adding endpointslice community-operators-h826k in namespace openshift-marketplace 2025-10-13T00:21:43.961289668+00:00 stderr F I1013 00:21:43.961278 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/community-operators-h826k took: 29.671µs 2025-10-13T00:21:43.961296408+00:00 stderr F I1013 00:21:43.961288 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console-operator/metrics-rdlxc 2025-10-13T00:21:43.961303118+00:00 stderr F I1013 00:21:43.961296 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-rdlxc in namespace openshift-console-operator 2025-10-13T00:21:43.961323279+00:00 stderr F I1013 00:21:43.961312 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console-operator/metrics-rdlxc took: 16.82µs 2025-10-13T00:21:43.961330019+00:00 stderr F I1013 00:21:43.961321 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-image-registry/image-registry-cfsrx 2025-10-13T00:21:43.961338989+00:00 stderr F I1013 00:21:43.961328 28251 gateway_shared_intf.go:842] Adding endpointslice image-registry-cfsrx in namespace openshift-image-registry 2025-10-13T00:21:43.961376390+00:00 stderr F I1013 00:21:43.961364 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-image-registry/image-registry-cfsrx took: 36.761µs 2025-10-13T00:21:43.961386420+00:00 stderr F I1013 00:21:43.961374 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-scheduler/scheduler-4wbzh 2025-10-13T00:21:43.961386420+00:00 stderr F I1013 00:21:43.961381 28251 gateway_shared_intf.go:842] Adding endpointslice scheduler-4wbzh in namespace openshift-kube-scheduler 2025-10-13T00:21:43.961409971+00:00 stderr F I1013 00:21:43.961399 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-scheduler/scheduler-4wbzh took: 18.541µs 2025-10-13T00:21:43.961416781+00:00 stderr F I1013 00:21:43.961408 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/control-plane-machine-set-operator-nmjkn 2025-10-13T00:21:43.961424991+00:00 stderr F I1013 00:21:43.961421 28251 gateway_shared_intf.go:842] Adding endpointslice control-plane-machine-set-operator-nmjkn in namespace openshift-machine-api 2025-10-13T00:21:43.961448722+00:00 stderr F I1013 00:21:43.961437 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/control-plane-machine-set-operator-nmjkn took: 16.911µs 2025-10-13T00:21:43.961455502+00:00 stderr F I1013 00:21:43.961446 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-machine-webhook-xj4tp 2025-10-13T00:21:43.961465302+00:00 stderr F I1013 00:21:43.961453 28251 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-machine-webhook-xj4tp in namespace openshift-machine-api 2025-10-13T00:21:43.961472773+00:00 stderr F I1013 
00:21:43.961466 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-machine-webhook-xj4tp took: 13.54µs 2025-10-13T00:21:43.961480863+00:00 stderr F I1013 00:21:43.961472 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-webhook-x4gjx 2025-10-13T00:21:43.961488623+00:00 stderr F I1013 00:21:43.961479 28251 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-webhook-x4gjx in namespace openshift-machine-api 2025-10-13T00:21:43.961498553+00:00 stderr F I1013 00:21:43.961493 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-webhook-x4gjx took: 14.33µs 2025-10-13T00:21:43.961505384+00:00 stderr F I1013 00:21:43.961499 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-daemon-2nvnz 2025-10-13T00:21:43.961511964+00:00 stderr F I1013 00:21:43.961506 28251 gateway_shared_intf.go:842] Adding endpointslice machine-config-daemon-2nvnz in namespace openshift-machine-config-operator 2025-10-13T00:21:43.961533884+00:00 stderr F I1013 00:21:43.961522 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-daemon-2nvnz took: 16.111µs 2025-10-13T00:21:43.961533884+00:00 stderr F I1013 00:21:43.961530 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/certified-operators-bw9bv 2025-10-13T00:21:43.961542515+00:00 stderr F I1013 00:21:43.961537 28251 gateway_shared_intf.go:842] Adding endpointslice certified-operators-bw9bv in namespace openshift-marketplace 2025-10-13T00:21:43.961564035+00:00 stderr F I1013 00:21:43.961553 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/certified-operators-bw9bv took: 16.581µs 2025-10-13T00:21:43.961564035+00:00 stderr F I1013 00:21:43.961561 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway default/kubernetes 2025-10-13T00:21:43.961572555+00:00 stderr F I1013 00:21:43.961568 28251 gateway_shared_intf.go:842] Adding endpointslice kubernetes in namespace default 2025-10-13T00:21:43.961591546+00:00 stderr F I1013 00:21:43.961580 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway default/kubernetes took: 12.87µs 2025-10-13T00:21:43.961591546+00:00 stderr F I1013 00:21:43.961589 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-authentication-operator/metrics-dp499 2025-10-13T00:21:43.961600086+00:00 stderr F I1013 00:21:43.961596 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-dp499 in namespace openshift-authentication-operator 2025-10-13T00:21:43.961621347+00:00 stderr F I1013 00:21:43.961610 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-authentication-operator/metrics-dp499 took: 15.17µs 2025-10-13T00:21:43.961628077+00:00 stderr F I1013 00:21:43.961619 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-etcd-operator/metrics-z62zm 2025-10-13T00:21:43.961628077+00:00 stderr F I1013 00:21:43.961625 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-z62zm in namespace openshift-etcd-operator 2025-10-13T00:21:43.961652647+00:00 stderr F I1013 00:21:43.961639 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-etcd-operator/metrics-z62zm took: 14.25µs 
2025-10-13T00:21:43.961652647+00:00 stderr F I1013 00:21:43.961648 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 2025-10-13T00:21:43.961669548+00:00 stderr F I1013 00:21:43.961655 28251 gateway_shared_intf.go:842] Adding endpointslice catalog-operator-metrics-fqfm8 in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.961676368+00:00 stderr F I1013 00:21:43.961671 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 took: 15.471µs 2025-10-13T00:21:43.961683048+00:00 stderr F I1013 00:21:43.961677 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-etcd/etcd-8wmzv 2025-10-13T00:21:43.961689678+00:00 stderr F I1013 00:21:43.961684 28251 gateway_shared_intf.go:842] Adding endpointslice etcd-8wmzv in namespace openshift-etcd 2025-10-13T00:21:43.961709489+00:00 stderr F I1013 00:21:43.961697 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-etcd/etcd-8wmzv took: 14.271µs 2025-10-13T00:21:43.961709489+00:00 stderr F I1013 00:21:43.961706 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress-canary/ingress-canary-rhnd4 2025-10-13T00:21:43.961718129+00:00 stderr F I1013 00:21:43.961713 28251 gateway_shared_intf.go:842] Adding endpointslice ingress-canary-rhnd4 in namespace openshift-ingress-canary 2025-10-13T00:21:43.961739320+00:00 stderr F I1013 00:21:43.961728 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress-canary/ingress-canary-rhnd4 took: 15.58µs 2025-10-13T00:21:43.961746350+00:00 stderr F I1013 00:21:43.961737 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/cluster-autoscaler-operator-r4g5l 2025-10-13T00:21:43.961746350+00:00 stderr F I1013 00:21:43.961744 28251 gateway_shared_intf.go:842] Adding endpointslice cluster-autoscaler-operator-r4g5l in namespace openshift-machine-api 2025-10-13T00:21:43.961771851+00:00 stderr F I1013 00:21:43.961761 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/cluster-autoscaler-operator-r4g5l took: 17.17µs 2025-10-13T00:21:43.961771851+00:00 stderr F I1013 00:21:43.961769 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-controller-manager-operator/metrics-psf8p 2025-10-13T00:21:43.961780411+00:00 stderr F I1013 00:21:43.961776 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-psf8p in namespace openshift-controller-manager-operator 2025-10-13T00:21:43.961802992+00:00 stderr F I1013 00:21:43.961792 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-controller-manager-operator/metrics-psf8p took: 16.5µs 2025-10-13T00:21:43.961809692+00:00 stderr F I1013 00:21:43.961801 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-dns/dns-default-lctlx 2025-10-13T00:21:43.961816322+00:00 stderr F I1013 00:21:43.961808 28251 gateway_shared_intf.go:842] Adding endpointslice dns-default-lctlx in namespace openshift-dns 2025-10-13T00:21:43.961835632+00:00 stderr F I1013 00:21:43.961823 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-dns/dns-default-lctlx took: 15.91µs 2025-10-13T00:21:43.961842383+00:00 stderr F I1013 00:21:43.961833 28251 obj_retry.go:502] Add event received for 
*factory.endpointSliceForGateway openshift-marketplace/redhat-marketplace-8k279 2025-10-13T00:21:43.961849043+00:00 stderr F I1013 00:21:43.961841 28251 gateway_shared_intf.go:842] Adding endpointslice redhat-marketplace-8k279 in namespace openshift-marketplace 2025-10-13T00:21:43.961868343+00:00 stderr F I1013 00:21:43.961857 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/redhat-marketplace-8k279 took: 17.291µs 2025-10-13T00:21:43.961868343+00:00 stderr F I1013 00:21:43.961865 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver-operator/metrics-sgtfh 2025-10-13T00:21:43.961876864+00:00 stderr F I1013 00:21:43.961872 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-sgtfh in namespace openshift-apiserver-operator 2025-10-13T00:21:43.961951956+00:00 stderr F I1013 00:21:43.961930 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver-operator/metrics-sgtfh took: 57.672µs 2025-10-13T00:21:43.961951956+00:00 stderr F I1013 00:21:43.961941 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console-operator/webhook-b7j7h 2025-10-13T00:21:43.961951956+00:00 stderr F I1013 00:21:43.961948 28251 gateway_shared_intf.go:842] Adding endpointslice webhook-b7j7h in namespace openshift-console-operator 2025-10-13T00:21:43.961978406+00:00 stderr F I1013 00:21:43.961967 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console-operator/webhook-b7j7h took: 19.171µs 2025-10-13T00:21:43.961985156+00:00 stderr F I1013 00:21:43.961976 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-multus/multus-admission-controller-s6h4d 2025-10-13T00:21:43.961991797+00:00 stderr F I1013 00:21:43.961983 28251 gateway_shared_intf.go:842] Adding endpointslice multus-admission-controller-s6h4d in namespace openshift-multus 2025-10-13T00:21:43.962009377+00:00 stderr F I1013 00:21:43.961997 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-multus/multus-admission-controller-s6h4d took: 14.581µs 2025-10-13T00:21:43.962009377+00:00 stderr F I1013 00:21:43.962006 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-apiserver/apiserver-5mvtf 2025-10-13T00:21:43.962017977+00:00 stderr F I1013 00:21:43.962012 28251 gateway_shared_intf.go:842] Adding endpointslice apiserver-5mvtf in namespace openshift-kube-apiserver 2025-10-13T00:21:43.962038358+00:00 stderr F I1013 00:21:43.962027 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-apiserver/apiserver-5mvtf took: 14.81µs 2025-10-13T00:21:43.962038358+00:00 stderr F I1013 00:21:43.962035 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-2js9r 2025-10-13T00:21:43.962046958+00:00 stderr F I1013 00:21:43.962042 28251 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-2js9r in namespace openshift-machine-api 2025-10-13T00:21:43.962075969+00:00 stderr F I1013 00:21:43.962064 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-2js9r took: 21.71µs 2025-10-13T00:21:43.962082709+00:00 stderr F I1013 00:21:43.962073 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console/downloads-zsr67 2025-10-13T00:21:43.962089349+00:00 stderr F I1013 00:21:43.962081 28251 
gateway_shared_intf.go:842] Adding endpointslice downloads-zsr67 in namespace openshift-console 2025-10-13T00:21:43.962107110+00:00 stderr F I1013 00:21:43.962095 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console/downloads-zsr67 took: 15.26µs 2025-10-13T00:21:43.962107110+00:00 stderr F I1013 00:21:43.962104 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress-operator/metrics-cd48g 2025-10-13T00:21:43.962115690+00:00 stderr F I1013 00:21:43.962111 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-cd48g in namespace openshift-ingress-operator 2025-10-13T00:21:43.962136461+00:00 stderr F I1013 00:21:43.962125 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress-operator/metrics-cd48g took: 14.98µs 2025-10-13T00:21:43.962143241+00:00 stderr F I1013 00:21:43.962134 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 2025-10-13T00:21:43.962149891+00:00 stderr F I1013 00:21:43.962141 28251 gateway_shared_intf.go:842] Adding endpointslice olm-operator-metrics-vql58 in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.962172461+00:00 stderr F I1013 00:21:43.962157 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 took: 17.13µs 2025-10-13T00:21:43.962172461+00:00 stderr F I1013 00:21:43.962167 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-operator-p8xmw 2025-10-13T00:21:43.962179712+00:00 stderr F I1013 00:21:43.962173 28251 gateway_shared_intf.go:842] Adding endpointslice machine-config-operator-p8xmw in namespace openshift-machine-config-operator 2025-10-13T00:21:43.962198962+00:00 stderr F I1013 00:21:43.962187 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-operator-p8xmw took: 14.641µs 2025-10-13T00:21:43.962198962+00:00 stderr F I1013 00:21:43.962196 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/marketplace-operator-metrics-fcwkk 2025-10-13T00:21:43.962207502+00:00 stderr F I1013 00:21:43.962203 28251 gateway_shared_intf.go:842] Adding endpointslice marketplace-operator-metrics-fcwkk in namespace openshift-marketplace 2025-10-13T00:21:43.962230213+00:00 stderr F I1013 00:21:43.962219 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/marketplace-operator-metrics-fcwkk took: 16.451µs 2025-10-13T00:21:43.962236963+00:00 stderr F I1013 00:21:43.962228 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-route-controller-manager/route-controller-manager-64jvm 2025-10-13T00:21:43.962243653+00:00 stderr F I1013 00:21:43.962235 28251 gateway_shared_intf.go:842] Adding endpointslice route-controller-manager-64jvm in namespace openshift-route-controller-manager 2025-10-13T00:21:43.962261664+00:00 stderr F I1013 00:21:43.962250 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-route-controller-manager/route-controller-manager-64jvm took: 16.121µs 2025-10-13T00:21:43.962261664+00:00 stderr F I1013 00:21:43.962259 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver/check-endpoints-sbfp5 2025-10-13T00:21:43.962270224+00:00 stderr F I1013 
00:21:43.962265 28251 gateway_shared_intf.go:842] Adding endpointslice check-endpoints-sbfp5 in namespace openshift-apiserver 2025-10-13T00:21:43.962296005+00:00 stderr F I1013 00:21:43.962284 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver/check-endpoints-sbfp5 took: 19.77µs 2025-10-13T00:21:43.962296005+00:00 stderr F I1013 00:21:43.962293 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-network-diagnostics/network-check-target-kqkjk 2025-10-13T00:21:43.962304595+00:00 stderr F I1013 00:21:43.962299 28251 gateway_shared_intf.go:842] Adding endpointslice network-check-target-kqkjk in namespace openshift-network-diagnostics 2025-10-13T00:21:43.962327166+00:00 stderr F I1013 00:21:43.962314 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-network-diagnostics/network-check-target-kqkjk took: 14.37µs 2025-10-13T00:21:43.962333946+00:00 stderr F I1013 00:21:43.962322 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p 2025-10-13T00:21:43.962355356+00:00 stderr F I1013 00:21:43.962332 28251 gateway_shared_intf.go:842] Adding endpointslice package-server-manager-metrics-mq66p in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.962384917+00:00 stderr F I1013 00:21:43.962367 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p took: 35.461µs 2025-10-13T00:21:43.962384917+00:00 stderr F I1013 00:21:43.962374 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-service-ca-operator/metrics-wrkrj 2025-10-13T00:21:43.962384917+00:00 stderr F I1013 00:21:43.962381 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-wrkrj in namespace openshift-service-ca-operator 2025-10-13T00:21:43.962403518+00:00 stderr F I1013 00:21:43.962395 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-service-ca-operator/metrics-wrkrj took: 14.83µs 2025-10-13T00:21:43.962410238+00:00 stderr F I1013 00:21:43.962401 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-controller-manager/kube-controller-manager-fcp2k 2025-10-13T00:21:43.962416858+00:00 stderr F I1013 00:21:43.962408 28251 gateway_shared_intf.go:842] Adding endpointslice kube-controller-manager-fcp2k in namespace openshift-kube-controller-manager 2025-10-13T00:21:43.962444749+00:00 stderr F I1013 00:21:43.962422 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-controller-manager/kube-controller-manager-fcp2k took: 15.25µs 2025-10-13T00:21:43.962444749+00:00 stderr F I1013 00:21:43.962431 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-scheduler-operator/metrics-zk8d6 2025-10-13T00:21:43.962444749+00:00 stderr F I1013 00:21:43.962438 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-zk8d6 in namespace openshift-kube-scheduler-operator 2025-10-13T00:21:43.962477080+00:00 stderr F I1013 00:21:43.962453 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-scheduler-operator/metrics-zk8d6 took: 15.37µs 2025-10-13T00:21:43.962477080+00:00 stderr F I1013 00:21:43.962464 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-controller-7t8hc 
2025-10-13T00:21:43.962477080+00:00 stderr F I1013 00:21:43.962473 28251 gateway_shared_intf.go:842] Adding endpointslice machine-config-controller-7t8hc in namespace openshift-machine-config-operator 2025-10-13T00:21:43.962509801+00:00 stderr F I1013 00:21:43.962492 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-controller-7t8hc took: 20.571µs 2025-10-13T00:21:43.962509801+00:00 stderr F I1013 00:21:43.962504 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-controllers-j9jjt 2025-10-13T00:21:43.962518471+00:00 stderr F I1013 00:21:43.962512 28251 gateway_shared_intf.go:842] Adding endpointslice machine-api-controllers-j9jjt in namespace openshift-machine-api 2025-10-13T00:21:43.962543921+00:00 stderr F I1013 00:21:43.962529 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-controllers-j9jjt took: 18.01µs 2025-10-13T00:21:43.962543921+00:00 stderr F I1013 00:21:43.962538 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-dns-operator/metrics-cxk8j 2025-10-13T00:21:43.962551362+00:00 stderr F I1013 00:21:43.962544 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-cxk8j in namespace openshift-dns-operator 2025-10-13T00:21:43.962570852+00:00 stderr F I1013 00:21:43.962558 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-dns-operator/metrics-cxk8j took: 14.411µs 2025-10-13T00:21:43.962570852+00:00 stderr F I1013 00:21:43.962564 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-apiserver-operator/metrics-kbv55 2025-10-13T00:21:43.962577932+00:00 stderr F I1013 00:21:43.962572 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-kbv55 in namespace openshift-kube-apiserver-operator 2025-10-13T00:21:43.962607313+00:00 stderr F I1013 00:21:43.962592 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-apiserver-operator/metrics-kbv55 took: 20.501µs 2025-10-13T00:21:43.962607313+00:00 stderr F I1013 00:21:43.962603 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-cluster-version/cluster-version-operator-qt7zf 2025-10-13T00:21:43.962617903+00:00 stderr F I1013 00:21:43.962612 28251 gateway_shared_intf.go:842] Adding endpointslice cluster-version-operator-qt7zf in namespace openshift-cluster-version 2025-10-13T00:21:43.962650364+00:00 stderr F I1013 00:21:43.962633 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-cluster-version/cluster-version-operator-qt7zf took: 21.691µs 2025-10-13T00:21:43.962650364+00:00 stderr F I1013 00:21:43.962643 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-storage-version-migrator-operator/metrics-zbxs7 2025-10-13T00:21:43.962659175+00:00 stderr F I1013 00:21:43.962649 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-zbxs7 in namespace openshift-kube-storage-version-migrator-operator 2025-10-13T00:21:43.962679525+00:00 stderr F I1013 00:21:43.962663 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-storage-version-migrator-operator/metrics-zbxs7 took: 14.571µs 2025-10-13T00:21:43.962679525+00:00 stderr F I1013 00:21:43.962672 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/redhat-operators-47g6l 
2025-10-13T00:21:43.962686565+00:00 stderr F I1013 00:21:43.962679 28251 gateway_shared_intf.go:842] Adding endpointslice redhat-operators-47g6l in namespace openshift-marketplace 2025-10-13T00:21:43.962718436+00:00 stderr F I1013 00:21:43.962700 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/redhat-operators-47g6l took: 21.031µs 2025-10-13T00:21:43.962718436+00:00 stderr F I1013 00:21:43.962713 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-controller-manager/controller-manager-kxmft 2025-10-13T00:21:43.962727156+00:00 stderr F I1013 00:21:43.962721 28251 gateway_shared_intf.go:842] Adding endpointslice controller-manager-kxmft in namespace openshift-controller-manager 2025-10-13T00:21:43.962755707+00:00 stderr F I1013 00:21:43.962740 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-controller-manager/controller-manager-kxmft took: 19.421µs 2025-10-13T00:21:43.962755707+00:00 stderr F I1013 00:21:43.962751 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress/router-internal-default-29hv8 2025-10-13T00:21:43.962764607+00:00 stderr F I1013 00:21:43.962759 28251 gateway_shared_intf.go:842] Adding endpointslice router-internal-default-29hv8 in namespace openshift-ingress 2025-10-13T00:21:43.962787618+00:00 stderr F I1013 00:21:43.962773 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress/router-internal-default-29hv8 took: 15.391µs 2025-10-13T00:21:43.962787618+00:00 stderr F I1013 00:21:43.962782 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-oauth-apiserver/api-2pj4d 2025-10-13T00:21:43.962796878+00:00 stderr F I1013 00:21:43.962790 28251 gateway_shared_intf.go:842] Adding endpointslice api-2pj4d in namespace openshift-oauth-apiserver 2025-10-13T00:21:43.962826289+00:00 stderr F I1013 00:21:43.962808 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-oauth-apiserver/api-2pj4d took: 18.71µs 2025-10-13T00:21:43.962826289+00:00 stderr F I1013 00:21:43.962820 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver/api-7hq6z 2025-10-13T00:21:43.962835109+00:00 stderr F I1013 00:21:43.962828 28251 gateway_shared_intf.go:842] Adding endpointslice api-7hq6z in namespace openshift-apiserver 2025-10-13T00:21:43.962864280+00:00 stderr F I1013 00:21:43.962848 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver/api-7hq6z took: 20.211µs 2025-10-13T00:21:43.962864280+00:00 stderr F I1013 00:21:43.962860 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-authentication/oauth-openshift-6gdxk 2025-10-13T00:21:43.962876910+00:00 stderr F I1013 00:21:43.962868 28251 gateway_shared_intf.go:842] Adding endpointslice oauth-openshift-6gdxk in namespace openshift-authentication 2025-10-13T00:21:43.962902251+00:00 stderr F I1013 00:21:43.962886 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-authentication/oauth-openshift-6gdxk took: 18.221µs 2025-10-13T00:21:43.962902251+00:00 stderr F I1013 00:21:43.962894 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-config-operator/metrics-tw775 2025-10-13T00:21:43.962911091+00:00 stderr F I1013 00:21:43.962902 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-tw775 in namespace openshift-config-operator 
2025-10-13T00:21:43.962948482+00:00 stderr F I1013 00:21:43.962931 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-config-operator/metrics-tw775 took: 28.711µs 2025-10-13T00:21:43.962948482+00:00 stderr F I1013 00:21:43.962943 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console/console-wg4kr 2025-10-13T00:21:43.962958663+00:00 stderr F I1013 00:21:43.962952 28251 gateway_shared_intf.go:842] Adding endpointslice console-wg4kr in namespace openshift-console 2025-10-13T00:21:43.962994094+00:00 stderr F I1013 00:21:43.962976 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console/console-wg4kr took: 24.021µs 2025-10-13T00:21:43.962994094+00:00 stderr F I1013 00:21:43.962987 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-controller-manager-operator/metrics-cz5rv 2025-10-13T00:21:43.963003094+00:00 stderr F I1013 00:21:43.962996 28251 gateway_shared_intf.go:842] Adding endpointslice metrics-cz5rv in namespace openshift-kube-controller-manager-operator 2025-10-13T00:21:43.963033445+00:00 stderr F I1013 00:21:43.963017 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-controller-manager-operator/metrics-cz5rv took: 20.821µs 2025-10-13T00:21:43.963033445+00:00 stderr F I1013 00:21:43.963027 28251 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/packageserver-service-tlm8t 2025-10-13T00:21:43.963042025+00:00 stderr F I1013 00:21:43.963036 28251 gateway_shared_intf.go:842] Adding endpointslice packageserver-service-tlm8t in namespace openshift-operator-lifecycle-manager 2025-10-13T00:21:43.963074216+00:00 stderr F I1013 00:21:43.963057 28251 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/packageserver-service-tlm8t took: 21.4µs 2025-10-13T00:21:43.963082526+00:00 stderr F I1013 00:21:43.963071 28251 factory.go:988] Added *v1.EndpointSlice event handler 12 2025-10-13T00:21:43.963151708+00:00 stderr F I1013 00:21:43.963135 28251 gateway.go:244] Spawning Conntrack Rule Check Thread 2025-10-13T00:21:43.963173808+00:00 stderr F I1013 00:21:43.963159 28251 default_node_network_controller.go:980] Gateway and management port readiness took 1.175899113s 2025-10-13T00:21:43.963483857+00:00 stderr F I1013 00:21:43.963375 28251 ovs.go:159] Exec(55): /usr/bin/ovs-ofctl -O OpenFlow13 --bundle replace-flows br-ex - 2025-10-13T00:21:43.963720213+00:00 stderr F I1013 00:21:43.963683 28251 ovs.go:159] Exec(56): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-br br-ext 2025-10-13T00:21:43.963826416+00:00 stderr F I1013 00:21:43.963777 28251 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 11 Dst: 10.217.4.0/23 Src: 169.254.169.2 Gw: 169.254.169.4 Flags: [] Table: 0 Realm: 0} 2025-10-13T00:21:43.964217106+00:00 stderr F I1013 00:21:43.964185 28251 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 11 Dst: 10.217.4.0/23 Src: 169.254.169.2 Gw: 169.254.169.4 Flags: [] Table: 254 Realm: 0} 2025-10-13T00:21:43.967408672+00:00 stderr F I1013 00:21:43.967337 28251 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 127.0.0.1/8 lo 2025-10-13T00:21:43.967408672+00:00 stderr F I1013 00:21:43.967396 28251 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 10.217.0.2/23 ovn-k8s-mp0 2025-10-13T00:21:43.967433473+00:00 stderr F I1013 00:21:43.967416 28251 
node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 169.254.169.2/29 br-ex 2025-10-13T00:21:43.967460444+00:00 stderr F I1013 00:21:43.967443 28251 node_ip_handler_linux.go:141] Node IP manager is running 2025-10-13T00:21:43.973458955+00:00 stderr F I1013 00:21:43.973403 28251 ovs.go:162] Exec(56): stdout: "" 2025-10-13T00:21:43.973458955+00:00 stderr F I1013 00:21:43.973425 28251 ovs.go:163] Exec(56): stderr: "" 2025-10-13T00:21:43.973458955+00:00 stderr F I1013 00:21:43.973438 28251 ovs.go:159] Exec(57): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-port br-int int 2025-10-13T00:21:43.991511590+00:00 stderr F I1013 00:21:43.991465 28251 ovs.go:162] Exec(57): stdout: "" 2025-10-13T00:21:43.991558672+00:00 stderr F I1013 00:21:43.991547 28251 ovs.go:163] Exec(57): stderr: "" 2025-10-13T00:21:43.991787098+00:00 stderr F I1013 00:21:43.991770 28251 healthcheck_node.go:124] "Starting node proxy healthz server" address="0.0.0.0:10256" 2025-10-13T00:21:43.992001964+00:00 stderr F W1013 00:21:43.991988 28251 egressip_healthcheck.go:74] Health checking using insecure connection 2025-10-13T00:21:43.992063025+00:00 stderr F I1013 00:21:43.992052 28251 egressip_healthcheck.go:107] Starting Egress IP Health Server on 10.217.0.2:9107 2025-10-13T00:21:43.992660561+00:00 stderr F I1013 00:21:43.992641 28251 egressservice_node.go:84] Setting up event handlers for Egress Services 2025-10-13T00:21:43.992812015+00:00 stderr F I1013 00:21:43.992799 28251 egressservice_node.go:172] Starting Egress Services Controller 2025-10-13T00:21:43.992843156+00:00 stderr F I1013 00:21:43.992834 28251 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-10-13T00:21:43.992867997+00:00 stderr F I1013 00:21:43.992859 28251 shared_informer.go:318] Caches are synced for egressservices 2025-10-13T00:21:43.992891158+00:00 stderr F I1013 00:21:43.992882 28251 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-10-13T00:21:43.992913298+00:00 stderr F I1013 00:21:43.992905 28251 shared_informer.go:318] Caches are synced for egressservices_services 2025-10-13T00:21:43.992936019+00:00 stderr F I1013 00:21:43.992927 28251 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-10-13T00:21:43.992957999+00:00 stderr F I1013 00:21:43.992949 28251 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-10-13T00:21:43.992980660+00:00 stderr F I1013 00:21:43.992972 28251 egressservice_node.go:186] Repairing Egress Services 2025-10-13T00:21:44.004509970+00:00 stderr F I1013 00:21:44.004465 28251 iptables.go:108] Creating table: nat chain: OVN-KUBE-EGRESS-SVC 2025-10-13T00:21:44.010430049+00:00 stderr F W1013 00:21:44.010387 28251 management-port_linux.go:495] missing management port nat rule in chain OVN-KUBE-SNAT-MGMTPORT, adding it 2025-10-13T00:21:44.016705338+00:00 stderr F I1013 00:21:44.016668 28251 node_controller.go:43] Starting Admin Policy Based Route Node Controller 2025-10-13T00:21:44.016705338+00:00 stderr F I1013 00:21:44.016690 28251 external_controller.go:276] Starting Admin Policy Based Route Controller 2025-10-13T00:21:44.021304272+00:00 stderr F I1013 00:21:44.021257 28251 egressip.go:183] Starting Egress IP Controller 2025-10-13T00:21:44.021636501+00:00 stderr F I1013 00:21:44.021605 28251 shared_informer.go:311] Waiting for caches to sync for eippod 2025-10-13T00:21:44.021636501+00:00 stderr F I1013 00:21:44.021629 28251 shared_informer.go:318] Caches are synced for eippod 
2025-10-13T00:21:44.021651521+00:00 stderr F I1013 00:21:44.021640 28251 shared_informer.go:311] Waiting for caches to sync for eipnamespace 2025-10-13T00:21:44.021651521+00:00 stderr F I1013 00:21:44.021645 28251 shared_informer.go:318] Caches are synced for eipnamespace 2025-10-13T00:21:44.021698782+00:00 stderr F I1013 00:21:44.021669 28251 shared_informer.go:311] Waiting for caches to sync for eipeip 2025-10-13T00:21:44.021707192+00:00 stderr F I1013 00:21:44.021696 28251 shared_informer.go:318] Caches are synced for eipeip 2025-10-13T00:21:44.021850686+00:00 stderr F I1013 00:21:44.021814 28251 iptables_manager.go:96] IPTables manager: own chain: table nat, chain OVN-KUBE-EGRESS-IP-MULTI-NIC, protocol IPv4 2025-10-13T00:21:44.026269515+00:00 stderr F I1013 00:21:44.026230 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.033589632+00:00 stderr F I1013 00:21:44.033549 28251 iptables_manager.go:164] IPTables manager: ensure rule - table nat, chain POSTROUTING, protocol IPv4, rule: {[-j OVN-KUBE-EGRESS-IP-MULTI-NIC]} 2025-10-13T00:21:44.038139844+00:00 stderr F I1013 00:21:44.038109 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.049043318+00:00 stderr F I1013 00:21:44.048993 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.054470914+00:00 stderr F I1013 00:21:44.054119 28251 iptables_manager.go:164] IPTables manager: ensure rule - table mangle, chain PREROUTING, protocol IPv4, rule: {[-m mark --mark 0 -j CONNMARK --restore-mark]} 2025-10-13T00:21:44.058365878+00:00 stderr F I1013 00:21:44.058272 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.066808135+00:00 stderr F I1013 00:21:44.066261 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.073087954+00:00 stderr F I1013 00:21:44.073052 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","mangle"] 2025-10-13T00:21:44.089938007+00:00 stderr F I1013 00:21:44.089861 28251 iptables_manager.go:164] IPTables manager: ensure rule - table mangle, chain PREROUTING, protocol IPv4, rule: {[-m mark --mark 1008 -j CONNMARK --save-mark]} 2025-10-13T00:21:44.091938601+00:00 stderr F I1013 00:21:44.091901 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.097702426+00:00 stderr F I1013 00:21:44.097641 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.104418017+00:00 stderr F I1013 00:21:44.104388 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","mangle"] 2025-10-13T00:21:44.146356825+00:00 stderr F I1013 00:21:44.146229 28251 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.155448539+00:00 stderr F I1013 00:21:44.155295 28251 iptables.go:358] "Running" command="ip6tables-save" arguments=["-t","nat"] 2025-10-13T00:21:44.160787613+00:00 stderr F I1013 00:21:44.160627 28251 link_network_manager.go:116] Link manager is running 2025-10-13T00:21:44.160787613+00:00 stderr F I1013 00:21:44.160668 28251 default_node_network_controller.go:1128] Default node network controller initialized and ready. 
2025-10-13T00:21:44.160787613+00:00 stderr F I1013 00:21:44.160729 28251 ovspinning_linux.go:39] OVS CPU affinity pinning disabled 2025-10-13T00:22:11.442726739+00:00 stderr F I1013 00:22:11.442631 28251 cni.go:258] [openshift-kube-apiserver/installer-13-crc 9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553 network default NAD default] DEL starting CNI request [openshift-kube-apiserver/installer-13-crc 9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553 network default NAD default] 2025-10-13T00:22:11.570755542+00:00 stderr F I1013 00:22:11.570630 28251 cni.go:279] [openshift-kube-apiserver/installer-13-crc 9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553 network default NAD default] DEL finished CNI request [openshift-kube-apiserver/installer-13-crc 9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553 network default NAD default], result "{\"dns\":{}}", err 2025-10-13T00:22:11.669606270+00:00 stderr F I1013 00:22:11.669517 28251 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 1 items received 2025-10-13T00:22:11.670776752+00:00 stderr F I1013 00:22:11.670073 28251 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 0 items received 2025-10-13T00:22:11.670776752+00:00 stderr F I1013 00:22:11.670590 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42706&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.670776752+00:00 stderr F I1013 00:22:11.670709 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42735&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.677745089+00:00 stderr F I1013 00:22:11.677618 28251 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Namespace total 0 items received 2025-10-13T00:22:11.678109059+00:00 stderr F I1013 00:22:11.678053 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42712&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.686783032+00:00 stderr F I1013 00:22:11.686690 28251 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 7 items received 2025-10-13T00:22:11.687142792+00:00 stderr F I1013 00:22:11.687074 28251 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 1 items received 2025-10-13T00:22:11.687876222+00:00 stderr F I1013 00:22:11.687366 28251 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.NetworkPolicy total 0 items received 2025-10-13T00:22:11.687876222+00:00 stderr F I1013 00:22:11.687410 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=5m39s&timeoutSeconds=339&watch=true": dial tcp 
38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.687876222+00:00 stderr F I1013 00:22:11.687469 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.687876222+00:00 stderr F I1013 00:22:11.687811 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42697&timeout=6m32s&timeoutSeconds=392&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.725239247+00:00 stderr F I1013 00:22:11.725152 28251 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 0 items received 2025-10-13T00:22:11.725689169+00:00 stderr F I1013 00:22:11.725616 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.734795064+00:00 stderr F I1013 00:22:11.734708 28251 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 0 items received 2025-10-13T00:22:11.735223425+00:00 stderr F I1013 00:22:11.735036 28251 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 0 items received 2025-10-13T00:22:11.735223425+00:00 stderr F I1013 00:22:11.735115 28251 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42713&timeout=9m40s&timeoutSeconds=580&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.735595115+00:00 stderr F I1013 00:22:11.735279 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42713&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.742463020+00:00 stderr F I1013 00:22:11.742399 28251 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - 
*v1.EgressQoS total 0 items received 2025-10-13T00:22:11.742887861+00:00 stderr F I1013 00:22:11.742781 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=9m43s&timeoutSeconds=583&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.746876058+00:00 stderr F I1013 00:22:11.746820 28251 reflector.go:800] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: Watch close - *v1alpha1.BaselineAdminNetworkPolicy total 0 items received 2025-10-13T00:22:11.747820664+00:00 stderr F I1013 00:22:11.747304 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42713&timeout=5m16s&timeoutSeconds=316&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.749768636+00:00 stderr F I1013 00:22:11.749717 28251 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received 2025-10-13T00:22:11.750294630+00:00 stderr F I1013 00:22:11.750160 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42722&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.760011992+00:00 stderr F I1013 00:22:11.759244 28251 reflector.go:800] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: Watch close - *v1alpha1.AdminNetworkPolicy total 0 items received 2025-10-13T00:22:11.760011992+00:00 stderr F I1013 00:22:11.759625 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42710&timeout=5m6s&timeoutSeconds=306&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.765175611+00:00 stderr F I1013 00:22:11.764559 28251 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 0 items received 2025-10-13T00:22:11.765175611+00:00 stderr F I1013 00:22:11.764906 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42724&timeout=9m51s&timeoutSeconds=591&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 
2025-10-13T00:22:12.637400167+00:00 stderr F I1013 00:22:12.637290 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=8m41s&timeoutSeconds=521&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.663289473+00:00 stderr F I1013 00:22:12.663207 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.676981371+00:00 stderr F I1013 00:22:12.676905 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42712&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.765967664+00:00 stderr F I1013 00:22:12.765861 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=8m16s&timeoutSeconds=496&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.767257519+00:00 stderr F I1013 00:22:12.767182 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42713&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.861305447+00:00 stderr F I1013 00:22:12.861218 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42706&timeout=7m20s&timeoutSeconds=440&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.880751140+00:00 stderr F I1013 00:22:12.880678 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42713&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.035191823+00:00 stderr F I1013 00:22:13.034458 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.071003846+00:00 stderr F I1013 00:22:13.070923 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42735&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.124954267+00:00 stderr F I1013 00:22:13.124883 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42722&timeout=9m49s&timeoutSeconds=589&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.171388876+00:00 stderr F I1013 00:22:13.171263 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42724&timeout=7m4s&timeoutSeconds=424&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.268829856+00:00 stderr F I1013 00:22:13.268763 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42697&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.294573558+00:00 stderr F I1013 00:22:13.294481 28251 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42713&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.298479963+00:00 stderr F I1013 00:22:13.298111 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42710&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.630071062+00:00 stderr F I1013 00:22:14.630002 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get 
"https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42713&timeout=6m8s&timeoutSeconds=368&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.934439418+00:00 stderr F I1013 00:22:14.934244 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42712&timeout=8m33s&timeoutSeconds=513&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.021046357+00:00 stderr F I1013 00:22:15.020935 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42710&timeout=9m28s&timeoutSeconds=568&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.127566471+00:00 stderr F I1013 00:22:15.127466 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.180186356+00:00 stderr F I1013 00:22:15.180069 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42713&timeout=8m47s&timeoutSeconds=527&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.245651397+00:00 stderr F I1013 00:22:15.245558 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=5m32s&timeoutSeconds=332&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.267149195+00:00 stderr F I1013 00:22:15.267090 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=5m38s&timeoutSeconds=338&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.320632683+00:00 stderr F I1013 00:22:15.320544 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42722&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.436061517+00:00 stderr F I1013 00:22:15.435985 28251 
reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42706&timeout=5m32s&timeoutSeconds=332&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.610461507+00:00 stderr F I1013 00:22:15.610357 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42724&timeout=9m45s&timeoutSeconds=585&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.726305592+00:00 stderr F I1013 00:22:15.725828 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42735&timeout=9m53s&timeoutSeconds=593&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.728682166+00:00 stderr F I1013 00:22:15.728351 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42697&timeout=7m50s&timeoutSeconds=470&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.816252551+00:00 stderr F I1013 00:22:15.816173 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=6m16s&timeoutSeconds=376&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.976691956+00:00 stderr F I1013 00:22:15.976607 28251 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42713&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:18.571060393+00:00 stderr F I1013 00:22:18.570980 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42713&timeout=5m9s&timeoutSeconds=309&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.037987870+00:00 stderr F I1013 00:22:19.037844 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42724&timeout=9m0s&timeoutSeconds=540&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.213965752+00:00 stderr F I1013 00:22:19.213881 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42712&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.402018249+00:00 stderr F I1013 00:22:19.401908 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42735&timeout=9m34s&timeoutSeconds=574&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.471296982+00:00 stderr F I1013 00:22:19.471101 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.681578627+00:00 stderr F I1013 00:22:19.681467 28251 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42710&timeout=7m16s&timeoutSeconds=436&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:20.059772426+00:00 stderr F I1013 00:22:20.059647 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42713&timeout=8m29s&timeoutSeconds=509&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:20.208543957+00:00 stderr F I1013 00:22:20.208451 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42706&timeout=9m27s&timeoutSeconds=567&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:20.254032831+00:00 stderr F I1013 00:22:20.253970 28251 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42722&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:20.299641237+00:00 stderr F I1013 00:22:20.299578 28251 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:20.925997741+00:00 stderr F I1013 00:22:20.925869 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:21.119032442+00:00 stderr F I1013 00:22:21.118934 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=8m12s&timeoutSeconds=492&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:21.655198111+00:00 stderr F I1013 00:22:21.655060 28251 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42713&timeout=8m42s&timeoutSeconds=522&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:21.932440257+00:00 stderr F I1013 00:22:21.932147 28251 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42697&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:26.486508284+00:00 stderr F I1013 00:22:26.486454 28251 reflector.go:449] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy closed with: too old resource version: 42710 (42866) 2025-10-13T00:22:27.528738231+00:00 stderr F I1013 00:22:27.528657 28251 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 42735 (42861) 2025-10-13T00:22:27.873661707+00:00 stderr F I1013 00:22:27.873594 28251 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 42848 (42861) 2025-10-13T00:22:28.376199151+00:00 stderr F I1013 00:22:28.376125 28251 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 42713 (42866) 2025-10-13T00:22:28.954099082+00:00 stderr F I1013 00:22:28.954048 28251 reflector.go:449] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy closed with: too old resource version: 42713 (42867) 2025-10-13T00:22:28.960639238+00:00 stderr F I1013 00:22:28.960615 28251 
reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy closed with: too old resource version: 42697 (42861) 2025-10-13T00:22:29.402559752+00:00 stderr F I1013 00:22:29.402477 28251 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods?labelSelector=app%3Dovnkube-master&resourceVersion=0 2025-10-13T00:22:30.326187421+00:00 stderr F I1013 00:22:30.326078 28251 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 42708 (42867) 2025-10-13T00:22:30.534252676+00:00 stderr F I1013 00:22:30.534204 28251 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace closed with: too old resource version: 42712 (42861) 2025-10-13T00:22:30.930358457+00:00 stderr F I1013 00:22:30.930251 28251 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 42706 (42861) 2025-10-13T00:22:31.035243557+00:00 stderr F I1013 00:22:31.034994 28251 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 42722 (42866) 2025-10-13T00:22:31.472148567+00:00 stderr F I1013 00:22:31.472064 28251 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 42724 (42866) 2025-10-13T00:22:32.008050038+00:00 stderr F I1013 00:22:32.007892 28251 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 42724 (42881) 2025-10-13T00:22:32.159135011+00:00 stderr F I1013 00:22:32.159066 28251 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 42856 (42861) 2025-10-13T00:22:32.774670265+00:00 stderr F I1013 00:22:32.774599 28251 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 42713 (42866) 2025-10-13T00:22:41.139720398+00:00 stderr F I1013 00:22:41.139513 28251 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:41.142499435+00:00 stderr F I1013 00:22:41.142424 28251 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:41.143027930+00:00 stderr F I1013 00:22:41.142965 28251 master.go:627] Adding or Updating Node "crc" 2025-10-13T00:22:41.143312258+00:00 stderr F I1013 00:22:41.143270 28251 hybrid.go:140] Removing node crc hybrid overlay port 2025-10-13T00:22:43.772589566+00:00 stderr F I1013 00:22:43.772506 28251 reflector.go:325] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:43.773829621+00:00 stderr F I1013 00:22:43.773788 28251 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:44.328756628+00:00 stderr F I1013 00:22:44.328677 28251 
reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:44.331352621+00:00 stderr F I1013 00:22:44.331277 28251 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:46.843157497+00:00 stderr F I1013 00:22:46.843081 28251 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:46.845939264+00:00 stderr F I1013 00:22:46.845891 28251 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:48.028371481+00:00 stderr F I1013 00:22:48.028135 28251 reflector.go:325] Listing and watching *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:22:48.029784611+00:00 stderr F I1013 00:22:48.029745 28251 reflector.go:351] Caches populated for *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:22:48.770445181+00:00 stderr F I1013 00:22:48.770380 28251 reflector.go:325] Listing and watching *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:22:48.773051024+00:00 stderr F I1013 00:22:48.773017 28251 reflector.go:351] Caches populated for *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-10-13T00:22:48.961242806+00:00 stderr F I1013 00:22:48.961192 28251 reflector.go:325] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:48.964031454+00:00 stderr F I1013 00:22:48.963974 28251 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:48.964172957+00:00 stderr F I1013 00:22:48.964157 28251 namespace.go:144] [default] updating namespace 2025-10-13T00:22:48.964207758+00:00 stderr F I1013 00:22:48.964188 28251 namespace.go:144] [kube-public] updating namespace 2025-10-13T00:22:48.964243469+00:00 stderr F I1013 00:22:48.964226 28251 namespace.go:144] [hostpath-provisioner] updating namespace 2025-10-13T00:22:48.964286581+00:00 stderr F I1013 00:22:48.964163 28251 namespace.go:144] [kube-node-lease] updating namespace 2025-10-13T00:22:48.964481866+00:00 stderr F I1013 00:22:48.964444 28251 namespace.go:144] [kube-system] updating namespace 2025-10-13T00:22:48.964624550+00:00 stderr F I1013 00:22:48.964596 28251 namespace.go:144] [openshift-authentication] updating namespace 2025-10-13T00:22:48.964653081+00:00 stderr F I1013 00:22:48.964640 28251 namespace.go:144] [openshift] updating namespace 2025-10-13T00:22:48.964710242+00:00 stderr F I1013 00:22:48.964629 28251 namespace.go:144] [openshift-apiserver] updating namespace 2025-10-13T00:22:48.964750954+00:00 stderr F I1013 00:22:48.964740 28251 namespace.go:144] [openshift-authentication-operator] updating namespace 2025-10-13T00:22:48.964787415+00:00 stderr F I1013 00:22:48.964741 28251 namespace.go:144] [openshift-apiserver-operator] updating namespace 2025-10-13T00:22:48.964863457+00:00 stderr F I1013 00:22:48.964824 28251 namespace.go:144] [openshift-cloud-network-config-controller] updating namespace 
2025-10-13T00:22:48.964922688+00:00 stderr F I1013 00:22:48.964898 28251 namespace.go:144] [openshift-cluster-machine-approver] updating namespace 2025-10-13T00:22:48.964953699+00:00 stderr F I1013 00:22:48.964941 28251 namespace.go:144] [openshift-cloud-platform-infra] updating namespace 2025-10-13T00:22:48.965009941+00:00 stderr F I1013 00:22:48.964985 28251 namespace.go:144] [openshift-cluster-storage-operator] updating namespace 2025-10-13T00:22:48.965031151+00:00 stderr F I1013 00:22:48.964989 28251 namespace.go:144] [openshift-cluster-samples-operator] updating namespace 2025-10-13T00:22:48.965065642+00:00 stderr F I1013 00:22:48.965036 28251 namespace.go:144] [openshift-cluster-version] updating namespace 2025-10-13T00:22:48.965103263+00:00 stderr F I1013 00:22:48.965083 28251 namespace.go:144] [openshift-config-managed] updating namespace 2025-10-13T00:22:48.965151475+00:00 stderr F I1013 00:22:48.965125 28251 namespace.go:144] [openshift-config] updating namespace 2025-10-13T00:22:48.965151475+00:00 stderr F I1013 00:22:48.965132 28251 namespace.go:144] [openshift-config-operator] updating namespace 2025-10-13T00:22:48.965191096+00:00 stderr F I1013 00:22:48.965168 28251 namespace.go:144] [openshift-console] updating namespace 2025-10-13T00:22:48.965270608+00:00 stderr F I1013 00:22:48.965236 28251 namespace.go:144] [openshift-console-user-settings] updating namespace 2025-10-13T00:22:48.965340550+00:00 stderr F I1013 00:22:48.965290 28251 namespace.go:144] [openshift-console-operator] updating namespace 2025-10-13T00:22:48.965416622+00:00 stderr F I1013 00:22:48.965393 28251 namespace.go:144] [openshift-controller-manager] updating namespace 2025-10-13T00:22:48.965487374+00:00 stderr F I1013 00:22:48.965455 28251 namespace.go:144] [openshift-controller-manager-operator] updating namespace 2025-10-13T00:22:48.965509165+00:00 stderr F I1013 00:22:48.965440 28251 namespace.go:144] [openshift-dns] updating namespace 2025-10-13T00:22:48.965549796+00:00 stderr F I1013 00:22:48.965520 28251 namespace.go:144] [openshift-dns-operator] updating namespace 2025-10-13T00:22:48.965604037+00:00 stderr F I1013 00:22:48.965527 28251 namespace.go:144] [openshift-etcd] updating namespace 2025-10-13T00:22:48.965604037+00:00 stderr F I1013 00:22:48.965586 28251 namespace.go:144] [openshift-etcd-operator] updating namespace 2025-10-13T00:22:48.965697400+00:00 stderr F I1013 00:22:48.965667 28251 namespace.go:144] [openshift-host-network] updating namespace 2025-10-13T00:22:48.965755542+00:00 stderr F I1013 00:22:48.965740 28251 namespace.go:144] [openshift-infra] updating namespace 2025-10-13T00:22:48.965803273+00:00 stderr F I1013 00:22:48.965742 28251 namespace.go:144] [openshift-image-registry] updating namespace 2025-10-13T00:22:48.965844264+00:00 stderr F I1013 00:22:48.965806 28251 namespace.go:144] [openshift-ingress-canary] updating namespace 2025-10-13T00:22:48.965844264+00:00 stderr F I1013 00:22:48.965832 28251 namespace.go:144] [openshift-ingress-operator] updating namespace 2025-10-13T00:22:48.965857134+00:00 stderr F I1013 00:22:48.965804 28251 namespace.go:144] [openshift-ingress] updating namespace 2025-10-13T00:22:48.965882755+00:00 stderr F I1013 00:22:48.965874 28251 namespace.go:144] [openshift-kni-infra] updating namespace 2025-10-13T00:22:48.966012029+00:00 stderr F I1013 00:22:48.965989 28251 namespace.go:144] [openshift-kube-apiserver] updating namespace 2025-10-13T00:22:48.966057780+00:00 stderr F I1013 00:22:48.966018 28251 namespace.go:144] 
[openshift-kube-apiserver-operator] updating namespace 2025-10-13T00:22:48.966146973+00:00 stderr F I1013 00:22:48.966132 28251 namespace.go:144] [openshift-kube-controller-manager-operator] updating namespace 2025-10-13T00:22:48.966181593+00:00 stderr F I1013 00:22:48.966142 28251 namespace.go:144] [openshift-kube-controller-manager] updating namespace 2025-10-13T00:22:48.966228845+00:00 stderr F I1013 00:22:48.966204 28251 namespace.go:144] [openshift-kube-scheduler] updating namespace 2025-10-13T00:22:48.966291927+00:00 stderr F I1013 00:22:48.966268 28251 namespace.go:144] [openshift-machine-api] updating namespace 2025-10-13T00:22:48.966291927+00:00 stderr F I1013 00:22:48.966266 28251 namespace.go:144] [openshift-kube-storage-version-migrator-operator] updating namespace 2025-10-13T00:22:48.966346248+00:00 stderr F I1013 00:22:48.966310 28251 namespace.go:144] [openshift-machine-config-operator] updating namespace 2025-10-13T00:22:48.966383039+00:00 stderr F I1013 00:22:48.966371 28251 namespace.go:144] [openshift-network-diagnostics] updating namespace 2025-10-13T00:22:48.966429230+00:00 stderr F I1013 00:22:48.966407 28251 namespace.go:144] [openshift-network-node-identity] updating namespace 2025-10-13T00:22:48.966429230+00:00 stderr F I1013 00:22:48.966416 28251 namespace.go:144] [openshift-monitoring] updating namespace 2025-10-13T00:22:48.966467841+00:00 stderr F I1013 00:22:48.966189 28251 namespace.go:144] [openshift-kube-scheduler-operator] updating namespace 2025-10-13T00:22:48.966528653+00:00 stderr F I1013 00:22:48.966501 28251 namespace.go:144] [openshift-multus] updating namespace 2025-10-13T00:22:48.966528653+00:00 stderr F I1013 00:22:48.966514 28251 namespace.go:144] [openshift-network-operator] updating namespace 2025-10-13T00:22:48.966563554+00:00 stderr F I1013 00:22:48.966326 28251 namespace.go:144] [openshift-marketplace] updating namespace 2025-10-13T00:22:48.966666817+00:00 stderr F I1013 00:22:48.966504 28251 namespace.go:144] [openshift-node] updating namespace 2025-10-13T00:22:48.966730859+00:00 stderr F I1013 00:22:48.966270 28251 namespace.go:144] [openshift-kube-storage-version-migrator] updating namespace 2025-10-13T00:22:48.966730859+00:00 stderr F I1013 00:22:48.966597 28251 namespace.go:144] [openshift-nutanix-infra] updating namespace 2025-10-13T00:22:48.966785390+00:00 stderr F I1013 00:22:48.966765 28251 namespace.go:144] [openshift-oauth-apiserver] updating namespace 2025-10-13T00:22:48.966818811+00:00 stderr F I1013 00:22:48.966804 28251 namespace.go:144] [openshift-openstack-infra] updating namespace 2025-10-13T00:22:48.966870433+00:00 stderr F I1013 00:22:48.966845 28251 namespace.go:144] [openshift-operator-lifecycle-manager] updating namespace 2025-10-13T00:22:48.967153530+00:00 stderr F I1013 00:22:48.967095 28251 namespace.go:144] [openshift-operators] updating namespace 2025-10-13T00:22:48.967202102+00:00 stderr F I1013 00:22:48.967119 28251 namespace.go:144] [openshift-ovirt-infra] updating namespace 2025-10-13T00:22:48.967267024+00:00 stderr F I1013 00:22:48.967242 28251 namespace.go:144] [openshift-ovn-kubernetes] updating namespace 2025-10-13T00:22:48.967290334+00:00 stderr F I1013 00:22:48.967250 28251 namespace.go:144] [openshift-service-ca] updating namespace 2025-10-13T00:22:48.967325465+00:00 stderr F I1013 00:22:48.967299 28251 namespace.go:144] [openshift-route-controller-manager] updating namespace 2025-10-13T00:22:48.967417158+00:00 stderr F I1013 00:22:48.967405 28251 namespace.go:144] [openshift-user-workload-monitoring] 
updating namespace 2025-10-13T00:22:48.967454739+00:00 stderr F I1013 00:22:48.967421 28251 namespace.go:144] [openshift-service-ca-operator] updating namespace 2025-10-13T00:22:48.967547201+00:00 stderr F I1013 00:22:48.967520 28251 namespace.go:144] [openstack] updating namespace 2025-10-13T00:22:48.967599203+00:00 stderr F I1013 00:22:48.967566 28251 namespace.go:144] [openshift-vsphere-infra] updating namespace 2025-10-13T00:22:48.967637254+00:00 stderr F I1013 00:22:48.967626 28251 namespace.go:144] [openstack-operators] updating namespace 2025-10-13T00:22:48.967680155+00:00 stderr F I1013 00:22:48.967645 28251 namespace.go:100] [service-telemetry] adding namespace 2025-10-13T00:22:48.969707852+00:00 stderr F I1013 00:22:48.969686 28251 namespace.go:104] [service-telemetry] adding namespace took 1.989266ms 2025-10-13T00:22:49.184004991+00:00 stderr F I1013 00:22:49.183877 28251 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:49.185690178+00:00 stderr F I1013 00:22:49.185578 28251 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:50.174006928+00:00 stderr F I1013 00:22:50.173936 28251 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:50.175827169+00:00 stderr F I1013 00:22:50.175772 28251 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:51.289066148+00:00 stderr F I1013 00:22:51.288958 28251 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:22:51.290945201+00:00 stderr F I1013 00:22:51.290899 28251 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:22:52.751572176+00:00 stderr F I1013 00:22:52.751507 28251 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:52.754345903+00:00 stderr F I1013 00:22:52.754285 28251 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:53.157994857+00:00 stderr F I1013 00:22:53.157913 28251 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:53.178857788+00:00 stderr F I1013 00:22:53.178793 28251 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:53.180506994+00:00 stderr F I1013 00:22:53.180449 28251 obj_retry.go:453] Detected object openshift-image-registry/image-pruner-29338560-zvlxb of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-10-13T00:22:53.180525985+00:00 stderr F I1013 00:22:53.180501 28251 obj_retry.go:453] Detected object openshift-image-registry/image-pruner-29338560-zvlxb of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.180817493+00:00 stderr F I1013 00:22:53.180785 28251 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-12-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.180817493+00:00 stderr F I1013 00:22:53.180776 28251 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-13-crc of type *v1.Pod in terminal state (e.g. completed) during update event: will remove it 2025-10-13T00:22:53.180817493+00:00 stderr F I1013 00:22:53.180806 28251 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-12-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.180840363+00:00 stderr F I1013 00:22:53.180829 28251 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-13-crc 2025-10-13T00:22:53.180840363+00:00 stderr F I1013 00:22:53.180829 28251 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:53.180860514+00:00 stderr F I1013 00:22:53.180834 28251 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-9-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.180860514+00:00 stderr F I1013 00:22:53.180856 28251 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:53.180869364+00:00 stderr F I1013 00:22:53.180850 28251 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:53.180890865+00:00 stderr F I1013 00:22:53.180873 28251 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-9-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.180890865+00:00 stderr F I1013 00:22:53.180848 28251 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:53.180924016+00:00 stderr F I1013 00:22:53.180904 28251 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:53.181026459+00:00 stderr F I1013 00:22:53.181000 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181026459+00:00 stderr F I1013 00:22:53.181014 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181035469+00:00 stderr F I1013 00:22:53.181027 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181043589+00:00 stderr F I1013 00:22:53.181034 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181091140+00:00 stderr F I1013 00:22:53.181026 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-11-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181091140+00:00 stderr F I1013 00:22:53.181072 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181091140+00:00 stderr F I1013 00:22:53.181057 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181091140+00:00 stderr F I1013 00:22:53.181085 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181128271+00:00 stderr F I1013 00:22:53.181107 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181128271+00:00 stderr F I1013 00:22:53.181117 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181128271+00:00 stderr F I1013 00:22:53.181121 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181139152+00:00 stderr F I1013 00:22:53.181132 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181139152+00:00 stderr F I1013 00:22:53.181132 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181149772+00:00 stderr F I1013 00:22:53.181144 28251 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181186393+00:00 stderr F I1013 00:22:53.181169 28251 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-7-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181186393+00:00 stderr F I1013 00:22:53.181180 28251 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-7-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181245395+00:00 stderr F I1013 00:22:53.181206 28251 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-8-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181245395+00:00 stderr F I1013 00:22:53.181233 28251 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181821991+00:00 stderr F I1013 00:22:53.181784 28251 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181821991+00:00 stderr F I1013 00:22:53.181802 28251 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181838671+00:00 stderr F I1013 00:22:53.181817 28251 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181838671+00:00 stderr F I1013 00:22:53.181827 28251 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181866212+00:00 stderr F I1013 00:22:53.181846 28251 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.181866212+00:00 stderr F I1013 00:22:53.181856 28251 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-10-13T00:22:53.184550967+00:00 stderr F I1013 00:22:53.183849 28251 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-13-crc, ips: 10.217.0.41 2025-10-13T00:22:53.184550967+00:00 stderr F I1013 00:22:53.183877 28251 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-13-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during update event: will remove it 2025-10-13T00:22:53.184550967+00:00 stderr F I1013 00:22:53.183893 28251 handler.go:411] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:53.184550967+00:00 stderr F I1013 00:22:53.183916 28251 handler.go:411] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-10-13T00:22:55.340157922+00:00 stderr F I1013 00:22:55.340084 28251 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:55.343777523+00:00 stderr F I1013 00:22:55.343224 28251 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:56.440997726+00:00 stderr F I1013 00:22:56.440741 28251 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:56.442201839+00:00 stderr F I1013 00:22:56.442172 28251 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:23:36.787555131+00:00 stderr F I1013 00:23:36.787478 28251 namespace.go:100] [openshift-must-gather-xwq75] adding namespace 2025-10-13T00:23:36.789155586+00:00 stderr F I1013 00:23:36.789111 28251 namespace.go:104] [openshift-must-gather-xwq75] adding namespace took 1.604325ms 2025-10-13T00:23:36.803418443+00:00 stderr F I1013 00:23:36.803349 28251 namespace.go:144] [openshift-must-gather-xwq75] updating namespace 2025-10-13T00:23:38.430944789+00:00 stderr F I1013 00:23:38.430845 28251 namespace.go:144] [openshift-must-gather-xwq75] updating namespace 2025-10-13T00:23:42.832836743+00:00 stderr F I1013 00:23:42.832784 28251 namespace.go:144] [service-telemetry] updating namespace 2025-10-13T00:23:48.053913295+00:00 stderr F I1013 00:23:48.053846 28251 namespace.go:144] [openshift-must-gather-xwq75] updating namespace 2025-10-13T00:23:48.061980179+00:00 stderr F I1013 00:23:48.061944 28251 namespace.go:278] [openshift-must-gather-xwq75] deleting namespace
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-controller/0.log
2025-10-13T00:21:36.461313369+00:00 stderr F ++ K8S_NODE=crc 2025-10-13T00:21:36.461475054+00:00 stderr F ++ [[ -n crc ]] 2025-10-13T00:21:36.461499024+00:00 stderr F ++ [[ -f /env/crc ]] 2025-10-13T00:21:36.461533615+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:36.461557796+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:36.461580726+00:00 stderr F ++
controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.461606267+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:36.461634288+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:36.461660279+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:36.461682819+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:36.461704870+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:36.461731601+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:36.461757951+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:36.462899792+00:00 stderr F + start-ovn-controller info 2025-10-13T00:21:36.462940983+00:00 stderr F + local log_level=info 2025-10-13T00:21:36.462967554+00:00 stderr F + [[ 1 -ne 1 ]] 2025-10-13T00:21:36.463514918+00:00 stderr F ++ date -Iseconds 2025-10-13T00:21:36.466544220+00:00 stderr F + echo '2025-10-13T00:21:36+00:00 - starting ovn-controller' 2025-10-13T00:21:36.466613272+00:00 stdout F 2025-10-13T00:21:36+00:00 - starting ovn-controller 2025-10-13T00:21:36.466683994+00:00 stderr F + exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid --syslog-method=null --log-file=/var/log/ovn/acl-audit-log.log -vFACILITY:local0 -vconsole:info -vconsole:acl_log:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' -vsyslog:acl_log:info -vfile:acl_log:info 2025-10-13T00:21:36.471383640+00:00 stderr F 2025-10-13T00:21:36Z|00001|vlog|INFO|opened log file /var/log/ovn/acl-audit-log.log 2025-10-13T00:21:36.472970723+00:00 stderr F 2025-10-13T00:21:36.472Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2025-10-13T00:21:36.473024424+00:00 stderr F 2025-10-13T00:21:36.473Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2025-10-13T00:21:36.479715304+00:00 stderr F 2025-10-13T00:21:36.479Z|00004|main|INFO|OVN internal version is : [24.03.3-20.33.0-72.6] 2025-10-13T00:21:36.479775176+00:00 stderr F 2025-10-13T00:21:36.479Z|00005|main|INFO|OVS IDL reconnected, force recompute. 2025-10-13T00:21:36.479875928+00:00 stderr F 2025-10-13T00:21:36.479Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-10-13T00:21:36.479908549+00:00 stderr F 2025-10-13T00:21:36.479Z|00007|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-10-13T00:21:36.479938570+00:00 stderr F 2025-10-13T00:21:36.479Z|00008|main|INFO|OVNSB IDL reconnected, force recompute. 2025-10-13T00:21:37.481591496+00:00 stderr F 2025-10-13T00:21:37.481Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-10-13T00:21:37.481655008+00:00 stderr F 2025-10-13T00:21:37.481Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-10-13T00:21:37.481679338+00:00 stderr F 2025-10-13T00:21:37.481Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 2 seconds before reconnect 2025-10-13T00:21:39.484074017+00:00 stderr F 2025-10-13T00:21:39.484Z|00012|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 
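The set -x trace above records how this container starts ovn-controller. For readability, here is the same invocation reflowed as a multi-line shell command (a sketch assembled from the trace; the flags are copied verbatim from the log and only backslash line continuations are added):

    exec ovn-controller unix:/var/run/openvswitch/db.sock \
        -vfile:off --no-chdir \
        --pidfile=/var/run/ovn/ovn-controller.pid \
        --syslog-method=null \
        --log-file=/var/log/ovn/acl-audit-log.log \
        -vFACILITY:local0 -vconsole:info -vconsole:acl_log:off \
        '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' \
        -vsyslog:acl_log:info -vfile:acl_log:info

The first positional argument is the local Open vSwitch database socket (vswitch_dbsock in the trace); ACL audit records go to /var/log/ovn/acl-audit-log.log, while general logging stays on the console since -vfile:off disables the regular log file.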
2025-10-13T00:21:39.484183690+00:00 stderr F 2025-10-13T00:21:39.484Z|00013|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-10-13T00:21:39.484247871+00:00 stderr F 2025-10-13T00:21:39.484Z|00014|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 4 seconds before reconnect 2025-10-13T00:21:43.485193255+00:00 stderr F 2025-10-13T00:21:43.485Z|00015|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-10-13T00:21:43.485193255+00:00 stderr F 2025-10-13T00:21:43.485Z|00016|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2025-10-13T00:21:43.511106151+00:00 stderr F 2025-10-13T00:21:43.511Z|00017|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-10-13T00:21:43.511106151+00:00 stderr F 2025-10-13T00:21:43.511Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-10-13T00:21:43.511702977+00:00 stderr F 2025-10-13T00:21:43.511Z|00019|features|INFO|OVS Feature: ct_zero_snat, state: supported 2025-10-13T00:21:43.511702977+00:00 stderr F 2025-10-13T00:21:43.511Z|00020|features|INFO|OVS Feature: ct_flush, state: supported 2025-10-13T00:21:43.511702977+00:00 stderr F 2025-10-13T00:21:43.511Z|00021|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported 2025-10-13T00:21:43.511702977+00:00 stderr F 2025-10-13T00:21:43.511Z|00022|main|INFO|OVS feature set changed, force recompute. 2025-10-13T00:21:43.511702977+00:00 stderr F 2025-10-13T00:21:43.511Z|00023|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-10-13T00:21:43.511702977+00:00 stderr F 2025-10-13T00:21:43.511Z|00024|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-10-13T00:21:43.512459058+00:00 stderr F 2025-10-13T00:21:43.512Z|00025|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-10-13T00:21:43.512643343+00:00 stderr F 2025-10-13T00:21:43.512Z|00026|main|INFO|OVS feature set changed, force recompute. 2025-10-13T00:21:43.512751346+00:00 stderr F 2025-10-13T00:21:43.512Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-10-13T00:21:43.512751346+00:00 stderr F 2025-10-13T00:21:43.512Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-10-13T00:21:43.512964651+00:00 stderr F 2025-10-13T00:21:43.512Z|00027|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-10-13T00:21:43.512964651+00:00 stderr F 2025-10-13T00:21:43.512Z|00028|main|INFO|OVS OpenFlow connection reconnected,force recompute. 2025-10-13T00:21:43.515784857+00:00 stderr F 2025-10-13T00:21:43.515Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-10-13T00:21:43.534972813+00:00 stderr F 2025-10-13T00:21:43.534Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-10-13T00:21:43.534972813+00:00 stderr F 2025-10-13T00:21:43.534Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
2025-10-13T00:21:43.535901088+00:00 stderr F 2025-10-13T00:21:43.535Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-10-13T00:21:47.087122416+00:00 stderr F 2025-10-13T00:21:47.087Z|00029|memory|INFO|20872 kB peak resident set size after 10.6 seconds 2025-10-13T00:21:47.087122416+00:00 stderr F 2025-10-13T00:21:47.087Z|00030|memory|INFO|idl-cells-OVN_Southbound:12454 idl-cells-Open_vSwitch:3087 lflow-cache-entries-cache-expr:270 lflow-cache-entries-cache-matches:602 lflow-cache-size-KB:697 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:692 ofctrl_installed_flow_usage-KB:509 ofctrl_sb_flow_ref_usage-KB:273 2025-10-13T00:22:11.453763726+00:00 stderr F 2025-10-13T00:22:11.453Z|00031|binding|INFO|Releasing lport openshift-kube-apiserver_installer-13-crc from this chassis (sb_readonly=0) 2025-10-13T00:22:11.453763726+00:00 stderr F 2025-10-13T00:22:11.453Z|00032|if_status|WARN|Trying to release unknown interface openshift-kube-apiserver_installer-13-crc 2025-10-13T00:22:11.453763726+00:00 stderr F 2025-10-13T00:22:11.453Z|00033|binding|INFO|Setting lport openshift-kube-apiserver_installer-13-crc down in Southbound 2025-10-13T00:22:39.663297891+00:00 stderr F 2025-10-13T00:22:39.663Z|00034|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory 2025-10-13T00:23:25.356887858+00:00 stderr F 2025-10-13T00:23:25.356Z|00035|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/sbdb/0.log
2025-10-13T00:21:39.606422407+00:00 stderr F + [[ -f /env/_master ]] 2025-10-13T00:21:39.606422407+00:00 stderr F + .
/ovnkube-lib/ovnkube-lib.sh 2025-10-13T00:21:39.606593522+00:00 stderr F ++ set -x 2025-10-13T00:21:39.606593522+00:00 stderr F ++ K8S_NODE= 2025-10-13T00:21:39.606593522+00:00 stderr F ++ [[ -n '' ]] 2025-10-13T00:21:39.606593522+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:39.606593522+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:39.606593522+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:39.606593522+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:39.606593522+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:39.606593522+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:39.606593522+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:39.606593522+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:39.606593522+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:39.606593522+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:39.607393183+00:00 stderr F + trap quit-sbdb TERM INT 2025-10-13T00:21:39.607393183+00:00 stderr F + start-sbdb info 2025-10-13T00:21:39.607407553+00:00 stderr F + local log_level=info 2025-10-13T00:21:39.607407553+00:00 stderr F + [[ 1 -ne 1 ]] 2025-10-13T00:21:39.607812084+00:00 stderr F + wait 28167 2025-10-13T00:21:39.608052461+00:00 stderr F + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor --db-sb-sock=/var/run/ovn/ovnsb_db.sock '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb 2025-10-13T00:21:39.721499762+00:00 stderr F 2025-10-13T00:21:39.721Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-sb.log 2025-10-13T00:21:39.797548087+00:00 stderr F 2025-10-13T00:21:39.797Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.1 2025-10-13T00:21:49.803607797+00:00 stderr F 2025-10-13T00:21:49.803Z|00003|memory|INFO|16896 kB peak resident set size after 10.1 seconds 2025-10-13T00:21:49.804236084+00:00 stderr F 2025-10-13T00:21:49.804Z|00004|memory|INFO|atoms:16113 cells:14306 json-caches:2 monitors:5 n-weak-refs:244 sessions:3
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kubecfg-setup/0.log (empty)
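In the sbdb trace above, the container sources ovnkube-lib.sh, traps quit-sbdb, and then hands off to ovn-ctl. Reflowed for readability (a sketch; the arguments are copied verbatim from the trace, only line breaks are added):

    exec /usr/share/ovn/scripts/ovn-ctl --no-monitor \
        --db-sb-sock=/var/run/ovn/ovnsb_db.sock \
        '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' \
        run_sb_ovsdb

The nbdb container below follows the same pattern with --db-nb-sock=/var/run/ovn/ovnnb_db.sock and the run_nb_ovsdb action.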
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/nbdb/0.log
2025-10-13T00:21:37.292683706+00:00 stderr F + [[ -f /env/_master ]] 2025-10-13T00:21:37.292683706+00:00 stderr F + . /ovnkube-lib/ovnkube-lib.sh 2025-10-13T00:21:37.292683706+00:00 stderr F ++ set -x 2025-10-13T00:21:37.292906532+00:00 stderr F ++ K8S_NODE=crc 2025-10-13T00:21:37.292906532+00:00 stderr F ++ [[ -n crc ]] 2025-10-13T00:21:37.292906532+00:00 stderr F ++ [[ -f /env/crc ]] 2025-10-13T00:21:37.292906532+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:37.292906532+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:37.292906532+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:37.292906532+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:37.292906532+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:37.292906532+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:37.292906532+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:37.292906532+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:37.292906532+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:37.292906532+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:37.293857247+00:00 stderr F + trap quit-nbdb TERM INT 2025-10-13T00:21:37.293857247+00:00 stderr F + start-nbdb info 2025-10-13T00:21:37.293874828+00:00 stderr F + local log_level=info 2025-10-13T00:21:37.293874828+00:00 stderr F + [[ 1 -ne 1 ]] 2025-10-13T00:21:37.294186786+00:00 stderr F + wait 27971 2025-10-13T00:21:37.294397292+00:00 stderr F + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor --db-nb-sock=/var/run/ovn/ovnnb_db.sock '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb 2025-10-13T00:21:37.389388186+00:00 stderr F 2025-10-13T00:21:37.389Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log 2025-10-13T00:21:37.424563592+00:00 stderr F 2025-10-13T00:21:37.424Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.1 2025-10-13T00:21:47.427809277+00:00 stderr F 2025-10-13T00:21:47.427Z|00003|memory|INFO|13056 kB peak resident set size after 10.0 seconds 2025-10-13T00:21:47.428129506+00:00 stderr F 2025-10-13T00:21:47.428Z|00004|memory|INFO|atoms:4675 cells:3398 json-caches:2 monitors:3 n-weak-refs:114 sessions:2
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/northd/0.log
2025-10-13T00:21:37.120434023+00:00 stderr F + [[ -f /env/_master ]] 2025-10-13T00:21:37.120434023+00:00 stderr F + . /ovnkube-lib/ovnkube-lib.sh 2025-10-13T00:21:37.120434023+00:00 stderr F ++ set -x 2025-10-13T00:21:37.120434023+00:00 stderr F ++ K8S_NODE= 2025-10-13T00:21:37.120434023+00:00 stderr F ++ [[ -n '' ]] 2025-10-13T00:21:37.120434023+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-10-13T00:21:37.120434023+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-10-13T00:21:37.120434023+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-10-13T00:21:37.120434023+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-10-13T00:21:37.120434023+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-10-13T00:21:37.120434023+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-10-13T00:21:37.120434023+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-10-13T00:21:37.120434023+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-10-13T00:21:37.120434023+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-10-13T00:21:37.120434023+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-10-13T00:21:37.121123912+00:00 stderr F + trap quit-ovn-northd TERM INT 2025-10-13T00:21:37.121123912+00:00 stderr F + start-ovn-northd info 2025-10-13T00:21:37.121123912+00:00 stderr F + local log_level=info 2025-10-13T00:21:37.121140112+00:00 stderr F + [[ 1 -ne 1 ]] 2025-10-13T00:21:37.121748079+00:00 stderr F ++ date -Iseconds 2025-10-13T00:21:37.125477119+00:00 stderr F + echo '2025-10-13T00:21:37+00:00 - starting ovn-northd' 2025-10-13T00:21:37.125537631+00:00 stdout F 2025-10-13T00:21:37+00:00 - starting ovn-northd 2025-10-13T00:21:37.125688515+00:00 stderr F + wait 27932 2025-10-13T00:21:37.125844939+00:00 stderr F + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 2025-10-13T00:21:37.130198746+00:00 stderr F 2025-10-13T00:21:37.130Z|00001|ovn_northd|INFO|OVN internal version is : [24.03.3-20.33.0-72.6] 2025-10-13T00:21:37.130525405+00:00 stderr F 2025-10-13T00:21:37.130Z|00002|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2025-10-13T00:21:37.130541365+00:00 stderr F 2025-10-13T00:21:37.130Z|00003|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connection attempt failed (No such file or directory) 2025-10-13T00:21:37.130572966+00:00 stderr F 2025-10-13T00:21:37.130Z|00004|ovn_northd|INFO|OVN NB IDL reconnected, force recompute. 2025-10-13T00:21:37.130621067+00:00 stderr F 2025-10-13T00:21:37.130Z|00005|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-10-13T00:21:37.130633768+00:00 stderr F 2025-10-13T00:21:37.130Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-10-13T00:21:37.130662239+00:00 stderr F 2025-10-13T00:21:37.130Z|00007|ovn_northd|INFO|OVN SB IDL reconnected, force recompute. 2025-10-13T00:21:38.131904894+00:00 stderr F 2025-10-13T00:21:38.131Z|00008|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting...
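The northd trace above starts ovn-northd directly rather than through ovn-ctl. The invocation, reflowed from the trace (arguments verbatim, line breaks added for readability):

    exec ovn-northd --no-chdir \
        -vconsole:info -vfile:off \
        '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' \
        --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1

The reconnect/connection-attempt-failed entries that follow simply show ovn-northd waiting for the nbdb and sbdb sockets, which are still being started by the sibling containers at this point.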
2025-10-13T00:21:38.131904894+00:00 stderr F 2025-10-13T00:21:38.131Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connected 2025-10-13T00:21:38.131961806+00:00 stderr F 2025-10-13T00:21:38.131Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-10-13T00:21:38.131961806+00:00 stderr F 2025-10-13T00:21:38.131Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-10-13T00:21:38.131971476+00:00 stderr F 2025-10-13T00:21:38.131Z|00012|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 2 seconds before reconnect 2025-10-13T00:21:40.132502345+00:00 stderr F 2025-10-13T00:21:40.132Z|00013|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-10-13T00:21:40.133717107+00:00 stderr F 2025-10-13T00:21:40.133Z|00014|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2025-10-13T00:21:40.134555730+00:00 stderr F 2025-10-13T00:21:40.134Z|00015|ovn_northd|INFO|ovn-northd lock acquired. This ovn-northd instance is now active. 2025-10-13T00:21:42.465962265+00:00 stderr F 2025-10-13T00:21:42.465Z|00016|ipam|WARN|d6057acb-0f02-4ebe-8cea-b3228e61764c: Duplicate IP set: 10.217.0.2 2025-10-13T00:21:53.581492012+00:00 stderr F 2025-10-13T00:21:53.581Z|00017|memory|INFO|14336 kB peak resident set size after 16.5 seconds 2025-10-13T00:21:53.581778209+00:00 stderr F 2025-10-13T00:21:53.581Z|00018|memory|INFO|idl-cells-OVN_Northbound:2694 idl-cells-OVN_Southbound:12454
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log
2025-08-13T19:59:25.749930980+00:00 stderr F W0813 19:59:25.749386 1 deprecated.go:66] 2025-08-13T19:59:25.749930980+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.749930980+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.749930980+00:00 stderr F =============================================== 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.750743644+00:00 stderr F I0813 19:59:25.750677 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:25.792617047+00:00 stderr F I0813 19:59:25.791677 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:25.796481088+00:00 stderr F I0813 19:59:25.794679 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-08-13T19:59:25.796481088+00:00 stderr F I0813 19:59:25.795529 1 kube-rbac-proxy.go:402] Listening securely on :8443 2025-08-13T20:42:42.434768166+00:00 stderr F I0813 20:42:42.434158 1 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log
2025-10-13T00:15:00.868406836+00:00 stderr F W1013 00:15:00.864762 1 deprecated.go:66] 2025-10-13T00:15:00.868406836+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:15:00.868406836+00:00 stderr F 2025-10-13T00:15:00.868406836+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-10-13T00:15:00.868406836+00:00 stderr F 2025-10-13T00:15:00.868406836+00:00 stderr F =============================================== 2025-10-13T00:15:00.868406836+00:00 stderr F 2025-10-13T00:15:00.871501778+00:00 stderr F I1013 00:15:00.871441 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:15:00.871530469+00:00 stderr F I1013 00:15:00.871506 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:15:00.876368334+00:00 stderr F I1013 00:15:00.875477 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-10-13T00:15:00.876368334+00:00 stderr F I1013 00:15:00.876120 1 kube-rbac-proxy.go:402] Listening securely on :8443
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log
2025-10-13T00:15:00.384740384+00:00 stderr F I1013 00:15:00.383974 1 main.go:57] starting net-attach-def-admission-controller webhook server 2025-10-13T00:15:00.395081444+00:00 stderr F W1013 00:15:00.395000 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-10-13T00:15:00.483154693+00:00 stderr F W1013 00:15:00.471257 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-10-13T00:15:00.486375569+00:00 stderr F I1013 00:15:00.485982 1 localmetrics.go:51] UPdating net-attach-def metrics for any with value 0 2025-10-13T00:15:00.486375569+00:00 stderr F I1013 00:15:00.486021 1 localmetrics.go:51] UPdating net-attach-def metrics for sriov with value 0 2025-10-13T00:15:00.486375569+00:00 stderr F I1013 00:15:00.486028 1 localmetrics.go:51] UPdating net-attach-def metrics for ib-sriov with value 0 2025-10-13T00:15:00.486861624+00:00 stderr F I1013 00:15:00.486838 1 controller.go:202] Starting net-attach-def-admission-controller 2025-10-13T00:15:00.591527990+00:00 stderr F I1013 00:15:00.588430 1 controller.go:211] net-attach-def-admission-controller synced and ready
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log
2025-08-13T19:59:12.195411848+00:00 stderr F I0813 19:59:12.184348 1 main.go:57] starting net-attach-def-admission-controller webhook server 2025-08-13T19:59:13.341524580+00:00 stderr F W0813 19:59:13.330254 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:13.777938261+00:00 stderr F W0813 19:59:13.777869 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:13.864351914+00:00 stderr F I0813 19:59:13.863190 1 localmetrics.go:51] UPdating net-attach-def metrics for any with value 0 2025-08-13T19:59:14.306827216+00:00 stderr F I0813 19:59:14.228559 1 localmetrics.go:51] UPdating net-attach-def metrics for sriov with value 0 2025-08-13T19:59:14.307369132+00:00 stderr F I0813 19:59:14.307339 1 localmetrics.go:51] UPdating net-attach-def metrics for ib-sriov with value 0 2025-08-13T19:59:14.409334398+00:00 stderr F I0813 19:59:14.382647 1 controller.go:202] Starting net-attach-def-admission-controller 2025-08-13T19:59:17.037691044+00:00 stderr F I0813 19:59:17.032208 1 controller.go:211] net-attach-def-admission-controller synced and ready 2025-08-13T20:01:48.651575909+00:00 stderr F I0813 20:01:48.650318 1 tlsutil.go:43] cetificate reloaded
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log
2025-08-13T20:05:59.835111434+00:00 stderr F I0813 20:05:59.834569 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc0007986e0 cert-dir:0xc0007988c0 cert-secrets:0xc000798640 configmaps:0xc0007981e0 namespace:0xc000798000 optional-cert-configmaps:0xc000798820 optional-configmaps:0xc000798320 optional-secrets:0xc000798280 pod:0xc0007980a0 pod-manifest-dir:0xc000798460 resource-dir:0xc0007983c0 revision:0xc0001f5f40 secrets:0xc000798140 v:0xc0007992c0] [0xc0007992c0 0xc0001f5f40 0xc000798000 0xc0007980a0 0xc0007983c0 0xc000798460 0xc0007981e0 0xc000798320 0xc000798140 0xc000798280 0xc0007988c0 0xc0007986e0 0xc000798820 0xc000798640] [] map[cert-configmaps:0xc0007986e0 cert-dir:0xc0007988c0 cert-secrets:0xc000798640 configmaps:0xc0007981e0 help:0xc000799680 kubeconfig:0xc0000f9f40 log-flush-frequency:0xc000799220 namespace:0xc000798000 optional-cert-configmaps:0xc000798820 optional-cert-secrets:0xc000798780 optional-configmaps:0xc000798320 optional-secrets:0xc000798280 pod:0xc0007980a0 pod-manifest-dir:0xc000798460 pod-manifests-lock-file:0xc0007985a0 resource-dir:0xc0007983c0 revision:0xc0001f5f40 secrets:0xc000798140 timeout-duration:0xc000798500 v:0xc0007992c0 vmodule:0xc000799360] [0xc0000f9f40 0xc0001f5f40 0xc000798000 0xc0007980a0 0xc000798140 0xc0007981e0 0xc000798280 0xc000798320 0xc0007983c0 0xc000798460 0xc000798500 0xc0007985a0 0xc000798640 0xc0007986e0 0xc000798780 0xc000798820 0xc0007988c0 0xc000799220 0xc0007992c0
0xc000799360 0xc000799680] [0xc0007986e0 0xc0007988c0 0xc000798640 0xc0007981e0 0xc000799680 0xc0000f9f40 0xc000799220 0xc000798000 0xc000798820 0xc000798780 0xc000798320 0xc000798280 0xc0007980a0 0xc000798460 0xc0007985a0 0xc0007983c0 0xc0001f5f40 0xc000798140 0xc000798500 0xc0007992c0 0xc000799360] map[104:0xc000799680 118:0xc0007992c0] [] -1 0 0xc0006fdb30 true 0x73b100 []} 2025-08-13T20:05:59.835111434+00:00 stderr F I0813 20:05:59.834918 1 cmd.go:92] (*installerpod.InstallOptions)(0xc000583d40)({ 2025-08-13T20:05:59.835111434+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:05:59.835111434+00:00 stderr F Revision: (string) (len=2) "10", 2025-08-13T20:05:59.835111434+00:00 stderr F NodeName: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager", 2025-08-13T20:05:59.835111434+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:05:59.835111434+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=27) "service-account-private-key", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=32) "cluster-policy-controller-config", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=10) "service-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=15) "recycler-config" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=12) "cloud-config" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=10) "csr-signer" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:05:59.835111434+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=9) "client-ca" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) 
(len=17) "trusted-ca-bundle" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", 2025-08-13T20:05:59.835111434+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:05:59.835111434+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:05:59.835111434+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:05:59.835111434+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:05:59.835111434+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:05:59.835111434+00:00 stderr F }) 2025-08-13T20:05:59.842751153+00:00 stderr F I0813 20:05:59.838103 1 cmd.go:409] Getting controller reference for node crc 2025-08-13T20:05:59.852998977+00:00 stderr F I0813 20:05:59.852958 1 cmd.go:422] Waiting for installer revisions to settle for node crc 2025-08-13T20:05:59.857587378+00:00 stderr F I0813 20:05:59.857188 1 cmd.go:514] Waiting additional period after revisions have settled for node crc 2025-08-13T20:06:29.871751453+00:00 stderr F I0813 20:06:29.871543 1 cmd.go:520] Getting installer pods for node crc 2025-08-13T20:06:29.927650613+00:00 stderr F I0813 20:06:29.912246 1 cmd.go:538] Latest installer revision for node crc is: 10 2025-08-13T20:06:29.927820478+00:00 stderr F I0813 20:06:29.927747 1 cmd.go:427] Querying kubelet version for node crc 2025-08-13T20:06:29.937145415+00:00 stderr F I0813 20:06:29.936965 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:06:29.937145415+00:00 stderr F I0813 20:06:29.937036 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:06:29.937458994+00:00 stderr F I0813 20:06:29.937385 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:06:29.937458994+00:00 stderr F I0813 20:06:29.937440 1 cmd.go:225] Getting secrets ... 2025-08-13T20:06:29.948920812+00:00 stderr F I0813 20:06:29.948461 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-10 2025-08-13T20:06:29.953436762+00:00 stderr F I0813 20:06:29.952585 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-10 2025-08-13T20:06:29.958541818+00:00 stderr F I0813 20:06:29.958434 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-10 2025-08-13T20:06:29.958541818+00:00 stderr F I0813 20:06:29.958509 1 cmd.go:238] Getting config maps ... 
2025-08-13T20:06:30.028237794+00:00 stderr F I0813 20:06:30.028116 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-10 2025-08-13T20:06:30.033609938+00:00 stderr F I0813 20:06:30.033515 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-10 2025-08-13T20:06:30.036895452+00:00 stderr F I0813 20:06:30.036684 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-10 2025-08-13T20:06:30.041016810+00:00 stderr F I0813 20:06:30.040969 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-10 2025-08-13T20:06:30.045113907+00:00 stderr F I0813 20:06:30.045031 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-10 2025-08-13T20:06:30.078006779+00:00 stderr F I0813 20:06:30.077171 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-10 2025-08-13T20:06:30.286083348+00:00 stderr F I0813 20:06:30.284316 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-10 2025-08-13T20:06:30.480644879+00:00 stderr F I0813 20:06:30.480576 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-10 2025-08-13T20:06:30.680035208+00:00 stderr F I0813 20:06:30.679933 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-10: configmaps "cloud-config-10" not found 2025-08-13T20:06:30.680208983+00:00 stderr F I0813 20:06:30.680162 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token" ... 2025-08-13T20:06:30.680323866+00:00 stderr F I0813 20:06:30.680302 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:06:30.682192650+00:00 stderr F I0813 20:06:30.682158 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:06:30.682834408+00:00 stderr F I0813 20:06:30.682731 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:06:30.683144557+00:00 stderr F I0813 20:06:30.683114 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:06:30.684131515+00:00 stderr F I0813 20:06:30.684090 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key" ... 2025-08-13T20:06:30.684222138+00:00 stderr F I0813 20:06:30.684198 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.pub" ... 2025-08-13T20:06:30.685318589+00:00 stderr F I0813 20:06:30.685287 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.key" ... 2025-08-13T20:06:30.685529345+00:00 stderr F I0813 20:06:30.685503 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert" ... 
2025-08-13T20:06:30.685589647+00:00 stderr F I0813 20:06:30.685570 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.key" ...
2025-08-13T20:06:30.687402839+00:00 stderr F I0813 20:06:30.687362 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.crt" ...
2025-08-13T20:06:30.689492049+00:00 stderr F I0813 20:06:30.689451 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config" ...
2025-08-13T20:06:30.689641083+00:00 stderr F I0813 20:06:30.689564 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config/config.yaml" ...
2025-08-13T20:06:30.707133994+00:00 stderr F I0813 20:06:30.707021 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config" ...
2025-08-13T20:06:30.707330309+00:00 stderr F I0813 20:06:30.707300 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config/config.yaml" ...
2025-08-13T20:06:30.714536146+00:00 stderr F I0813 20:06:30.714440 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig" ...
2025-08-13T20:06:30.714651599+00:00 stderr F I0813 20:06:30.714628 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig/kubeconfig" ...
2025-08-13T20:06:30.728982389+00:00 stderr F I0813 20:06:30.718252 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig" ...
2025-08-13T20:06:30.729157004+00:00 stderr F I0813 20:06:30.729126 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:06:30.738554784+00:00 stderr F I0813 20:06:30.738445 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod" ...
2025-08-13T20:06:30.738554784+00:00 stderr F I0813 20:06:30.738503 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ...
2025-08-13T20:06:30.740369666+00:00 stderr F I0813 20:06:30.740296 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/pod.yaml" ...
2025-08-13T20:06:30.740596192+00:00 stderr F I0813 20:06:30.740528 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/version" ...
2025-08-13T20:06:30.740945882+00:00 stderr F I0813 20:06:30.740732 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config" ...
2025-08-13T20:06:30.740945882+00:00 stderr F I0813 20:06:30.740861 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config/recycler-pod.yaml" ...
2025-08-13T20:06:30.742336842+00:00 stderr F I0813 20:06:30.742225 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca" ...
2025-08-13T20:06:30.742336842+00:00 stderr F I0813 20:06:30.742294 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca/ca-bundle.crt" ...
2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.743521 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca" ...
2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.743564 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca/ca-bundle.crt" ...
2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.745655 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ...
2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.745674 1 cmd.go:225] Getting secrets ...
2025-08-13T20:06:30.879011066+00:00 stderr F I0813 20:06:30.877620 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer
2025-08-13T20:06:31.081470183+00:00 stderr F I0813 20:06:31.080423 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key
2025-08-13T20:06:31.081644418+00:00 stderr F I0813 20:06:31.081623 1 cmd.go:238] Getting config maps ...
2025-08-13T20:06:31.371338561+00:00 stderr F I0813 20:06:31.371276 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca
2025-08-13T20:06:31.897416895+00:00 stderr F I0813 20:06:31.897357 1 copy.go:60] Got configMap openshift-kube-controller-manager/client-ca
2025-08-13T20:06:31.955715866+00:00 stderr F I0813 20:06:31.955186 1 copy.go:60] Got configMap openshift-kube-controller-manager/trusted-ca-bundle
2025-08-13T20:06:31.956504899+00:00 stderr F I0813 20:06:31.956057 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer" ...
2025-08-13T20:06:31.957035874+00:00 stderr F I0813 20:06:31.956553 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.crt" ...
2025-08-13T20:06:31.957035874+00:00 stderr F I0813 20:06:31.956928 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.key" ...
2025-08-13T20:06:31.957107226+00:00 stderr F I0813 20:06:31.957059 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key" ...
2025-08-13T20:06:31.957107226+00:00 stderr F I0813 20:06:31.957075 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.key" ...
2025-08-13T20:06:31.957278241+00:00 stderr F I0813 20:06:31.957185 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.crt" ...
2025-08-13T20:06:31.957642931+00:00 stderr F I0813 20:06:31.957345 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca" ...
2025-08-13T20:06:31.962102239+00:00 stderr F I0813 20:06:31.960192 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:06:31.962102239+00:00 stderr F I0813 20:06:31.960413 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca" ... 2025-08-13T20:06:32.028939486+00:00 stderr F I0813 20:06:32.028378 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.034442 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.034579 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.035400 1 cmd.go:331] Getting pod configmaps/kube-controller-manager-pod-10 -n openshift-kube-controller-manager 2025-08-13T20:06:32.068991654+00:00 stderr F I0813 20:06:32.065289 1 cmd.go:347] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:06:32.068991654+00:00 stderr F I0813 20:06:32.065360 1 cmd.go:375] Writing a pod under "kube-controller-manager-pod.yaml" key 2025-08-13T20:06:32.068991654+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"10"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n 
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false 
--feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n 2025-08-13T20:06:32.069063786+00:00 stderr F \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:06:32.130591450+00:00 stderr F I0813 20:06:32.130436 1 cmd.go:606] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/kube-controller-manager-pod.yaml" ... 2025-08-13T20:06:32.143163911+00:00 stderr F I0813 20:06:32.143032 1 cmd.go:613] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 2025-08-13T20:06:32.143220332+00:00 stderr F I0813 20:06:32.143086 1 cmd.go:617] Writing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 
2025-08-13T20:06:32.143220332+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"10"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true 
--feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy 2025-08-13T20:06:32.143262553+00:00 stderr F -controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} -v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec 
cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000012467015073043232033067 0ustar zuulzuul2025-08-13T20:07:28.135939739+00:00 stderr F I0813 20:07:28.133266 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc000bb7c20 cert-dir:0xc000bb7e00 cert-secrets:0xc000bb7b80 configmaps:0xc000bb7720 namespace:0xc000bb7540 optional-cert-configmaps:0xc000bb7d60 optional-configmaps:0xc000bb7860 optional-secrets:0xc000bb77c0 pod:0xc000bb75e0 pod-manifest-dir:0xc000bb79a0 resource-dir:0xc000bb7900 revision:0xc000bb74a0 secrets:0xc000bb7680 v:0xc000bcc820] [0xc000bcc820 0xc000bb74a0 0xc000bb7540 0xc000bb75e0 0xc000bb7900 0xc000bb79a0 0xc000bb7720 0xc000bb7860 0xc000bb7680 0xc000bb77c0 0xc000bb7e00 0xc000bb7c20 0xc000bb7d60 0xc000bb7b80] [] map[cert-configmaps:0xc000bb7c20 cert-dir:0xc000bb7e00 cert-secrets:0xc000bb7b80 configmaps:0xc000bb7720 help:0xc000bccbe0 kubeconfig:0xc000bb7400 log-flush-frequency:0xc000bcc780 namespace:0xc000bb7540 optional-cert-configmaps:0xc000bb7d60 optional-cert-secrets:0xc000bb7cc0 optional-configmaps:0xc000bb7860 optional-secrets:0xc000bb77c0 pod:0xc000bb75e0 pod-manifest-dir:0xc000bb79a0 pod-manifests-lock-file:0xc000bb7ae0 resource-dir:0xc000bb7900 revision:0xc000bb74a0 secrets:0xc000bb7680 timeout-duration:0xc000bb7a40 v:0xc000bcc820 vmodule:0xc000bcc8c0] [0xc000bb7400 0xc000bb74a0 0xc000bb7540 0xc000bb75e0 0xc000bb7680 0xc000bb7720 0xc000bb77c0 0xc000bb7860 0xc000bb7900 0xc000bb79a0 0xc000bb7a40 0xc000bb7ae0 0xc000bb7b80 0xc000bb7c20 0xc000bb7cc0 0xc000bb7d60 0xc000bb7e00 0xc000bcc780 0xc000bcc820 0xc000bcc8c0 0xc000bccbe0] [0xc000bb7c20 0xc000bb7e00 0xc000bb7b80 0xc000bb7720 0xc000bccbe0 
0xc000bb7400 0xc000bcc780 0xc000bb7540 0xc000bb7d60 0xc000bb7cc0 0xc000bb7860 0xc000bb77c0 0xc000bb75e0 0xc000bb79a0 0xc000bb7ae0 0xc000bb7900 0xc000bb74a0 0xc000bb7680 0xc000bb7a40 0xc000bcc820 0xc000bcc8c0] map[104:0xc000bccbe0 118:0xc000bcc820] [] -1 0 0xc000b7f560 true 0x73b100 []}
2025-08-13T20:07:28.135939739+00:00 stderr F I0813 20:07:28.133934 1 cmd.go:92] (*installerpod.InstallOptions)(0xc000b28680)({
2025-08-13T20:07:28.135939739+00:00 stderr F KubeConfig: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F KubeClient: (kubernetes.Interface) ,
2025-08-13T20:07:28.135939739+00:00 stderr F Revision: (string) (len=2) "11",
2025-08-13T20:07:28.135939739+00:00 stderr F NodeName: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager",
2025-08-13T20:07:28.135939739+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod",
2025-08-13T20:07:28.135939739+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=27) "service-account-private-key",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=31) "localhost-recovery-client-token"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=12) "serving-cert"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=27) "kube-controller-manager-pod",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=6) "config",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=32) "cluster-policy-controller-config",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=17) "serviceaccount-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=10) "service-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=15) "recycler-config"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=12) "cloud-config"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=10) "csr-signer"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) ,
2025-08-13T20:07:28.135939739+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=20) "aggregator-client-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=9) "client-ca"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=17) "trusted-ca-bundle"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs",
2025-08-13T20:07:28.135939739+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:07:28.135939739+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
2025-08-13T20:07:28.135939739+00:00 stderr F Timeout: (time.Duration) 2m0s,
2025-08-13T20:07:28.135939739+00:00 stderr F StaticPodManifestsLockFile: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) ,
2025-08-13T20:07:28.135939739+00:00 stderr F KubeletVersion: (string) ""
2025-08-13T20:07:28.135939739+00:00 stderr F })
2025-08-13T20:07:28.209260091+00:00 stderr F I0813 20:07:28.204641 1 cmd.go:409] Getting controller reference for node crc
2025-08-13T20:07:28.244244624+00:00 stderr F I0813 20:07:28.244110 1 cmd.go:422] Waiting for installer revisions to settle for node crc
2025-08-13T20:07:28.250320798+00:00 stderr F I0813 20:07:28.250209 1 cmd.go:514] Waiting additional period after revisions have settled for node crc
2025-08-13T20:07:58.255243085+00:00 stderr F I0813 20:07:58.255028 1 cmd.go:520] Getting installer pods for node crc
2025-08-13T20:07:58.270363959+00:00 stderr F I0813 20:07:58.270251 1 cmd.go:538] Latest installer revision for node crc is: 11
2025-08-13T20:07:58.270363959+00:00 stderr F I0813 20:07:58.270306 1 cmd.go:427] Querying kubelet version for node crc
2025-08-13T20:07:58.280990563+00:00 stderr F I0813 20:07:58.280531 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc
2025-08-13T20:07:58.280990563+00:00 stderr F I0813 20:07:58.280586 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11" ...
2025-08-13T20:07:58.281383995+00:00 stderr F I0813 20:07:58.281285 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11" ...
2025-08-13T20:07:58.281383995+00:00 stderr F I0813 20:07:58.281325 1 cmd.go:225] Getting secrets ...
2025-08-13T20:07:58.295838999+00:00 stderr F I0813 20:07:58.295612 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.303010 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.307771 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.327978 1 cmd.go:238] Getting config maps ...
2025-08-13T20:07:58.335728553+00:00 stderr F I0813 20:07:58.335667 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-11
2025-08-13T20:07:58.345595516+00:00 stderr F I0813 20:07:58.345471 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-11
2025-08-13T20:07:58.349395815+00:00 stderr F I0813 20:07:58.349363 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-11
2025-08-13T20:07:58.352842753+00:00 stderr F I0813 20:07:58.352817 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-11
2025-08-13T20:07:58.358223618+00:00 stderr F I0813 20:07:58.358192 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-11
2025-08-13T20:07:58.468715386+00:00 stderr F I0813 20:07:58.468660 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-11
2025-08-13T20:07:58.660710620+00:00 stderr F I0813 20:07:58.660603 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-11
2025-08-13T20:07:58.859350016+00:00 stderr F I0813 20:07:58.859288 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-11
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.062954 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-11: configmaps "cloud-config-11" not found
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.062997 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064041 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/token" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064218 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/ca.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064407 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064495 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/service-ca.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064655 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064713 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key/service-account.key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064861 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key/service-account.pub" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065012 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065094 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert/tls.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065186 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert/tls.key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065278 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/cluster-policy-controller-config" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065386 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/cluster-policy-controller-config/config.yaml" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067285 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/config" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067412 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/config/config.yaml" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067532 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/controller-manager-kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067588 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/controller-manager-kubeconfig/kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067680 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-cert-syncer-kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067727 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067873 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067959 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068127 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/pod.yaml" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068266 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/version" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068367 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/recycler-config" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068466 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/recycler-config/recycler-pod.yaml" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068558 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/service-ca" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068637 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/service-ca/ca-bundle.crt" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068727 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/serviceaccount-ca" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068848 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/serviceaccount-ca/ca-bundle.crt" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.069090 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.069107 1 cmd.go:225] Getting secrets ...
2025-08-13T20:07:59.267692263+00:00 stderr F I0813 20:07:59.267338 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer
2025-08-13T20:07:59.469969843+00:00 stderr F I0813 20:07:59.469055 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key
2025-08-13T20:07:59.469969843+00:00 stderr F I0813 20:07:59.469120 1 cmd.go:238] Getting config maps ...
2025-08-13T20:07:59.669069401+00:00 stderr F I0813 20:07:59.668869 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca
2025-08-13T20:07:59.871931127+00:00 stderr F I0813 20:07:59.866753 1 copy.go:60] Got configMap openshift-kube-controller-manager/client-ca
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.071345 1 copy.go:60] Got configMap openshift-kube-controller-manager/trusted-ca-bundle
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.071666 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer" ...
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.072018 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.crt" ...
2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.072324 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.key" ...
2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.080616 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key" ...
2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.080639 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.crt" ...
2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.081228 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.key" ...
2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.083488 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca" ...
2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.083523 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.084704 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca" ... 2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.084977 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.087658 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:08:00.148623069+00:00 stderr F I0813 20:08:00.090499 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-08-13T20:08:00.157142794+00:00 stderr F I0813 20:08:00.150176 1 cmd.go:331] Getting pod configmaps/kube-controller-manager-pod-11 -n openshift-kube-controller-manager 2025-08-13T20:08:00.281932152+00:00 stderr F I0813 20:08:00.263229 1 cmd.go:347] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:08:00.281932152+00:00 stderr F I0813 20:08:00.263292 1 cmd.go:375] Writing a pod under "kube-controller-manager-pod.yaml" key 2025-08-13T20:08:00.281932152+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"11"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n 
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false 
--feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n 2025-08-13T20:08:00.282006684+00:00 stderr F \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:00.294647666+00:00 stderr F I0813 20:08:00.294562 1 cmd.go:606] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/kube-controller-manager-pod.yaml" ... 2025-08-13T20:08:00.298865277+00:00 stderr F I0813 20:08:00.295071 1 cmd.go:613] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 2025-08-13T20:08:00.298865277+00:00 stderr F I0813 20:08:00.295115 1 cmd.go:617] Writing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 
2025-08-13T20:08:00.298865277+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"11"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true 
--feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"] 2025-08-13T20:08:00.298947980+00:00 stderr F ,"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} -v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec 
cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000755000175000017500000000000015073043233033055 5ustar zuulzuul././@LongLink0000644000000000000000000000032400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000755000175000017500000000000015073043233033055 5ustar zuulzuul././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000644000175000017500000000107315073043233033060 0ustar zuulzuul2025-10-13T00:15:00.413244438+00:00 stderr F I1013 00:15:00.412964 1 cmd.go:47] Starting console conversion webhook server 2025-10-13T00:15:00.415449574+00:00 stderr F I1013 00:15:00.415428 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-10-13T00:15:00.420279849+00:00 stderr F I1013 00:15:00.420227 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-10-13T00:15:00.420381182+00:00 stderr F I1013 00:15:00.420365 1 cmd.go:93] Serving on [::]:9443 ././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-oper0000644000175000017500000000161515073043233033062 0ustar zuulzuul2025-08-13T19:59:37.777127617+00:00 stderr F I0813 19:59:37.775509 1 cmd.go:47] Starting console conversion webhook server 2025-08-13T19:59:37.955172273+00:00 stderr F I0813 19:59:37.955027 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T19:59:37.956180592+00:00 stderr F I0813 19:59:37.956106 1 cmd.go:93] Serving on [::]:9443 2025-08-13T19:59:37.957682034+00:00 stderr F I0813 19:59:37.957611 1 
certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-08-13T20:00:50.382659807+00:00 stderr F I0813 20:00:50.373360 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T20:00:50.382659807+00:00 stderr F I0813 20:00:50.374426 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043234033024 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043234033024 5ustar zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000006575415073043234033047 0ustar zuulzuul2025-08-13T20:01:08.525355365+00:00 stderr F I0813 20:01:08.494832 1 cmd.go:92] &{ true {false} installer true map[cert-dir:0xc0005d0c80 cert-secrets:0xc0005d0a00 configmaps:0xc0005d05a0 namespace:0xc0005d03c0 optional-configmaps:0xc0005d06e0 optional-secrets:0xc0005d0640 pod:0xc0005d0460 pod-manifest-dir:0xc0005d0820 resource-dir:0xc0005d0780 revision:0xc0005d0320 secrets:0xc0005d0500 v:0xc0005d1680] [0xc0005d1680 0xc0005d0320 0xc0005d03c0 0xc0005d0460 0xc0005d0780 0xc0005d0820 0xc0005d05a0 0xc0005d06e0 0xc0005d0640 0xc0005d0500 0xc0005d0c80 0xc0005d0a00] [] map[cert-configmaps:0xc0005d0aa0 cert-dir:0xc0005d0c80 cert-secrets:0xc0005d0a00 configmaps:0xc0005d05a0 help:0xc0005d1a40 kubeconfig:0xc0005d0280 log-flush-frequency:0xc0005d15e0 namespace:0xc0005d03c0 optional-cert-configmaps:0xc0005d0be0 optional-cert-secrets:0xc0005d0b40 optional-configmaps:0xc0005d06e0 optional-secrets:0xc0005d0640 pod:0xc0005d0460 pod-manifest-dir:0xc0005d0820 pod-manifests-lock-file:0xc0005d0960 resource-dir:0xc0005d0780 revision:0xc0005d0320 secrets:0xc0005d0500 timeout-duration:0xc0005d08c0 v:0xc0005d1680 vmodule:0xc0005d1720] [0xc0005d0280 0xc0005d0320 0xc0005d03c0 0xc0005d0460 0xc0005d0500 0xc0005d05a0 0xc0005d0640 0xc0005d06e0 0xc0005d0780 0xc0005d0820 0xc0005d08c0 0xc0005d0960 0xc0005d0a00 0xc0005d0aa0 0xc0005d0b40 0xc0005d0be0 0xc0005d0c80 0xc0005d15e0 0xc0005d1680 0xc0005d1720 0xc0005d1a40] [0xc0005d0aa0 0xc0005d0c80 0xc0005d0a00 0xc0005d05a0 0xc0005d1a40 0xc0005d0280 0xc0005d15e0 0xc0005d03c0 0xc0005d0be0 0xc0005d0b40 0xc0005d06e0 0xc0005d0640 0xc0005d0460 0xc0005d0820 0xc0005d0960 0xc0005d0780 0xc0005d0320 0xc0005d0500 0xc0005d08c0 0xc0005d1680 0xc0005d1720] map[104:0xc0005d1a40 118:0xc0005d1680] [] -1 0 0xc000567560 true 0x215dc20 []} 2025-08-13T20:01:08.525355365+00:00 stderr F I0813 20:01:08.518481 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000581380)({ 
2025-08-13T20:01:08.525355365+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:01:08.525355365+00:00 stderr F Revision: (string) (len=1) "7", 2025-08-13T20:01:08.525355365+00:00 stderr F NodeName: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-scheduler", 2025-08-13T20:01:08.525355365+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:01:08.525355365+00:00 stderr F SecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=20) "scheduler-kubeconfig", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=16) "policy-configmap" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F CertSecretNames: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=30) "kube-scheduler-client-cert-key" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F CertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", 2025-08-13T20:01:08.525355365+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.525355365+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:01:08.525355365+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:01:08.525355365+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:01:08.525355365+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:01:08.525355365+00:00 stderr F }) 2025-08-13T20:01:08.558026046+00:00 stderr F I0813 20:01:08.545567 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:01:09.709344395+00:00 stderr F I0813 20:01:09.708316 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:01:10.010008738+00:00 stderr F I0813 20:01:10.008949 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:01:40.010064498+00:00 stderr F I0813 20:01:40.009280 1 cmd.go:521] Getting installer pods for node crc 
2025-08-13T20:01:52.020591303+00:00 stderr F I0813 20:01:52.020372 1 cmd.go:539] Latest installer revision for node crc is: 7 2025-08-13T20:01:52.020591303+00:00 stderr F I0813 20:01:52.020447 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.130444 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.130517 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7" ... 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.131123 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7" ... 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.131137 1 cmd.go:226] Getting secrets ... 2025-08-13T20:01:59.477298721+00:00 stderr F I0813 20:01:59.477140 1 copy.go:32] Got secret openshift-kube-scheduler/localhost-recovery-client-token-7 2025-08-13T20:02:01.293670193+00:00 stderr F I0813 20:02:01.284574 1 copy.go:32] Got secret openshift-kube-scheduler/serving-cert-7 2025-08-13T20:02:01.293670193+00:00 stderr F I0813 20:02:01.284669 1 cmd.go:239] Getting config maps ... 2025-08-13T20:02:03.471152171+00:00 stderr F I0813 20:02:03.462088 1 copy.go:60] Got configMap openshift-kube-scheduler/config-7 2025-08-13T20:02:05.728719872+00:00 stderr F I0813 20:02:05.722475 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-cert-syncer-kubeconfig-7 2025-08-13T20:02:09.366901882+00:00 stderr F I0813 20:02:09.366325 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-pod-7 2025-08-13T20:02:13.052766894+00:00 stderr F I0813 20:02:13.050491 1 copy.go:60] Got configMap openshift-kube-scheduler/scheduler-kubeconfig-7 2025-08-13T20:02:15.100946713+00:00 stderr F I0813 20:02:15.096609 1 copy.go:60] Got configMap openshift-kube-scheduler/serviceaccount-ca-7 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.827442 1 copy.go:52] Failed to get config map openshift-kube-scheduler/policy-configmap-7: configmaps "policy-configmap-7" not found 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.827567 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.828621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829105 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829208 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829339 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829444 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert" ... 
2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829640 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert/tls.key" ... 2025-08-13T20:02:17.849720188+00:00 stderr F I0813 20:02:17.831747 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert/tls.crt" ... 2025-08-13T20:02:17.850252683+00:00 stderr F I0813 20:02:17.850183 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/config" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858457 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/config/config.yaml" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858690 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-cert-syncer-kubeconfig" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858969 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.859105 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859387 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/forceRedeploymentReason" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859545 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/pod.yaml" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859682 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/version" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859864 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/scheduler-kubeconfig" ... 2025-08-13T20:02:17.860032522+00:00 stderr F I0813 20:02:17.859969 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/scheduler-kubeconfig/kubeconfig" ... 2025-08-13T20:02:17.860201287+00:00 stderr F I0813 20:02:17.860141 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/serviceaccount-ca" ... 2025-08-13T20:02:17.860311810+00:00 stderr F I0813 20:02:17.860253 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:02:17.860493775+00:00 stderr F I0813 20:02:17.860418 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs" ... 2025-08-13T20:02:17.860493775+00:00 stderr F I0813 20:02:17.860480 1 cmd.go:226] Getting secrets ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474599 1 copy.go:32] Got secret openshift-kube-scheduler/kube-scheduler-client-cert-key 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474675 1 cmd.go:239] Getting config maps ... 
2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474688 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474734 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.crt" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.475101 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.475229 1 cmd.go:332] Getting pod configmaps/kube-scheduler-pod-7 -n openshift-kube-scheduler 2025-08-13T20:02:21.118620340+00:00 stderr F I0813 20:02:21.118522 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:02:21.118620340+00:00 stderr F I0813 20:02:21.118597 1 cmd.go:376] Writing a pod under "kube-scheduler-pod.yaml" key 2025-08-13T20:02:21.118620340+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"7","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"p
ath":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.152955 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/kube-scheduler-pod.yaml" ... 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.153271 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.153282 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 
2025-08-13T20:02:21.176472970+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"7","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlane
MachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000027100000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000000134015073043233033053 0ustar zuulzuul2025-08-13T20:30:04.045714829+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/olm-operator-heap-48qq2" 2025-08-13T20:30:04.473513386+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/catalog-operator-heap-88mpx" 2025-08-13T20:30:04.492237284+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/olm-operator-heap-hqmzq" 2025-08-13T20:30:04.500235874+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/catalog-operator-heap-bk2n8" ././@LongLink0000644000000000000000000000021000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000021500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043313032767 5ustar zuulzuul././@LongLink0000644000000000000000000000022200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000005637215073043234033010 0ustar zuulzuul2025-10-13T00:12:38.088521154+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:38.085667Z","logger":"etcd-client","caller":"v3@v3.5.13/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00001e000/192.168.126.11:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error 
while dialing: dial tcp 192.168.126.11:2379: connect: connection refused\""} 2025-10-13T00:12:38.088521154+00:00 stderr F Error: context deadline exceeded 2025-10-13T00:12:38.198697616+00:00 stderr F dataDir is present on crc 2025-10-13T00:12:40.200872668+00:00 stderr P failed to create etcd client, but the server is already initialized as member "crc" before, starting as etcd member: context deadline exceeded 2025-10-13T00:12:40.203707829+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to be released. 2025-10-13T00:12:40.208806414+00:00 stderr F 2025-10-13T00:12:40.208806414+00:00 stderr F real 0m0.005s 2025-10-13T00:12:40.208806414+00:00 stderr F user 0m0.001s 2025-10-13T00:12:40.208806414+00:00 stderr F sys 0m0.004s 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=8589934592 2025-10-13T00:12:40.212901741+00:00 stdout F ALL_ETCD_ENDPOINTS=https://192.168.126.11:2379 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_STATIC_POD_VERSION=3 2025-10-13T00:12:40.212901741+00:00 stdout F ETCDCTL_ENDPOINTS=https://192.168.126.11:2379 2025-10-13T00:12:40.212901741+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key 2025-10-13T00:12:40.212901741+00:00 stdout F ETCDCTL_API=3 2025-10-13T00:12:40.212901741+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_NAME=crc 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_SOCKET_REUSE_ADDRESS=true 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION=200ms 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_EXPERIMENTAL_MAX_LEARNERS=1 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000 2025-10-13T00:12:40.212901741+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_INITIAL_CLUSTER= 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s 2025-10-13T00:12:40.212901741+00:00 stdout F ETCD_ENABLE_PPROF=true 2025-10-13T00:12:40.213538769+00:00 stderr F + exec nice -n -19 ionice -c2 -n0 etcd --logger=zap --log-level=info --experimental-initial-corrupt-check=true --snapshot-count=10000 --initial-advertise-peer-urls=https://192.168.126.11:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt 
--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://192.168.126.11:2379 --listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0 --listen-peer-urls=https://0.0.0.0:2380 --metrics=extensive --listen-metrics-urls=https://0.0.0.0:9978 2025-10-13T00:12:40.258855422+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258614Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_CIPHER_SUITES","variable-value":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"} 2025-10-13T00:12:40.258855422+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.2588Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_DATA_DIR","variable-value":"/var/lib/etcd"} 2025-10-13T00:12:40.258855422+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258821Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ELECTION_TIMEOUT","variable-value":"1000"} 2025-10-13T00:12:40.258855422+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258835Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ENABLE_PPROF","variable-value":"true"} 2025-10-13T00:12:40.258924813+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258866Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_MAX_LEARNERS","variable-value":"1"} 2025-10-13T00:12:40.258924813+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258883Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION","variable-value":"200ms"} 2025-10-13T00:12:40.258924813+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258897Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL","variable-value":"5s"} 2025-10-13T00:12:40.258944224+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258914Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_HEARTBEAT_INTERVAL","variable-value":"100"} 2025-10-13T00:12:40.258944224+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258933Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_INITIAL_CLUSTER_STATE","variable-value":"existing"} 2025-10-13T00:12:40.258995675+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258961Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_NAME","variable-value":"crc"} 2025-10-13T00:12:40.259012006+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.258993Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_QUOTA_BACKEND_BYTES","variable-value":"8589934592"} 2025-10-13T00:12:40.259027396+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.259012Z","caller":"flags/flag.go:113","msg":"recognized and used environment 
variable","variable-name":"ETCD_SOCKET_REUSE_ADDRESS","variable-value":"true"} 2025-10-13T00:12:40.259072848+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.259038Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3"} 2025-10-13T00:12:40.259072848+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.259054Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_STATIC_POD_VERSION=3"} 2025-10-13T00:12:40.259093108+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.259076Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_INITIAL_CLUSTER="} 2025-10-13T00:12:40.260455817+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.259169Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. This is not recommended for production."} 2025-10-13T00:12:40.260455817+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.260413Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--logger=zap","--log-level=info","--experimental-initial-corrupt-check=true","--snapshot-count=10000","--initial-advertise-peer-urls=https://192.168.126.11:2380","--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt","--key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key","--trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt","--client-cert-auth=true","--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt","--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key","--peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt","--peer-client-cert-auth=true","--advertise-client-urls=https://192.168.126.11:2379","--listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0","--listen-peer-urls=https://0.0.0.0:2380","--metrics=extensive","--listen-metrics-urls=https://0.0.0.0:9978"]} 2025-10-13T00:12:40.260867299+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.260775Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"} 2025-10-13T00:12:40.260895370+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.260855Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2025-10-13T00:12:40.260895370+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.260877Z","caller":"embed/etcd.go:121","msg":"configuring socket options","reuse-address":true,"reuse-port":false} 2025-10-13T00:12:40.260956411+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.260893Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]} 2025-10-13T00:12:40.261030783+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.260981Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2025-10-13T00:12:40.262953128+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.262857Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"]} 2025-10-13T00:12:40.262953128+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.262913Z","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"} 2025-10-13T00:12:40.263834533+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.263755Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"GitNotFound","go-version":"go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"crc","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-backend-bytes":8589934592,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1} 2025-10-13T00:12:40.264036299+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.26398Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member/snap\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-10-13T00:12:40.294771266+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.294667Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"30.608333ms"} 2025-10-13T00:12:40.666664550+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.666541Z","caller":"etcdserver/server.go:514","msg":"recovered v2 store from snapshot","snapshot-index":40004,"snapshot-size":"8.9 kB"} 2025-10-13T00:12:40.666791574+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.666755Z","caller":"etcdserver/server.go:527","msg":"recovered v3 backend from snapshot","backend-size-bytes":74416128,"backend-size":"74 MB","backend-size-in-use-bytes":43872256,"backend-size-in-use":"44 MB"} 2025-10-13T00:12:40.770719078+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770515Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","commit-index":41660} 2025-10-13T00:12:40.770780319+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c switched to configuration voters=(15298667783517588556)"} 2025-10-13T00:12:40.770780319+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became follower at term 8"} 2025-10-13T00:12:40.770780319+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d44fc94b15474c4c [peers: [d44fc94b15474c4c], term: 8, commit: 41660, applied: 40004, lastindex: 41660, lastterm: 8]"} 2025-10-13T00:12:40.770944874+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 2025-10-13T00:12:40.770954614+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770937Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","recovered-remote-peer-id":"d44fc94b15474c4c","recovered-remote-peer-urls":["https://192.168.126.11:2380"],"recovered-remote-peer-is-learner":false} 2025-10-13T00:12:40.770963885+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.770952Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"} 2025-10-13T00:12:40.771050097+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.771018Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-10-13T00:12:40.771503390+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:40.771446Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} 2025-10-13T00:12:40.771580982+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.771538Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":36405} 2025-10-13T00:12:40.825562951+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.825439Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":37877} 2025-10-13T00:12:40.826441336+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.826398Z","caller":"etcdserver/quota.go:117","msg":"enabled backend quota","quota-name":"v3-applier","quota-size-bytes":8589934592,"quota-size":"8.6 GB"} 2025-10-13T00:12:40.827586799+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.827544Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d44fc94b15474c4c","timeout":"7s"} 2025-10-13T00:12:40.840180488+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.840083Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d44fc94b15474c4c"} 2025-10-13T00:12:40.840198819+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.84018Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d44fc94b15474c4c","local-server-version":"3.5.13","cluster-id":"37a6ceb54a88a89a","cluster-version":"3.5"} 2025-10-13T00:12:40.840719604+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.840614Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} 2025-10-13T00:12:40.840732534+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.840713Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} 2025-10-13T00:12:40.840799826+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.840733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} 2025-10-13T00:12:40.840903419+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.8406Z","caller":"etcdserver/server.go:760","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d44fc94b15474c4c","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} 2025-10-13T00:12:40.842666489+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.842615Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2025-10-13T00:12:40.842912616+00:00 stderr F 
{"level":"info","ts":"2025-10-13T00:12:40.842794Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"[::]:2380"} 2025-10-13T00:12:40.842912616+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.842889Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"[::]:2380"} 2025-10-13T00:12:40.843307557+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.843255Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d44fc94b15474c4c","initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"]} 2025-10-13T00:12:40.843307557+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:40.843292Z","caller":"embed/etcd.go:859","msg":"serving metrics","address":"https://0.0.0.0:9978"} 2025-10-13T00:12:41.672395220+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c is starting a new election at term 8"} 2025-10-13T00:12:41.672472892+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became pre-candidate at term 8"} 2025-10-13T00:12:41.672549364+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgPreVoteResp from d44fc94b15474c4c at term 8"} 2025-10-13T00:12:41.672549364+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became candidate at term 9"} 2025-10-13T00:12:41.672549364+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgVoteResp from d44fc94b15474c4c at term 9"} 2025-10-13T00:12:41.672570265+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became leader at term 9"} 2025-10-13T00:12:41.672570265+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d44fc94b15474c4c elected leader d44fc94b15474c4c at term 9"} 2025-10-13T00:12:41.672992647+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672932Z","caller":"etcdserver/server.go:2119","msg":"published local member to cluster through raft","local-member-id":"d44fc94b15474c4c","local-member-attributes":"{Name:crc ClientURLs:[https://192.168.126.11:2379]}","request-path":"/0/members/d44fc94b15474c4c/attributes","cluster-id":"37a6ceb54a88a89a","publish-timeout":"7s"} 2025-10-13T00:12:41.673141991+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.672986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2025-10-13T00:12:41.673141991+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.67303Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2025-10-13T00:12:41.673302806+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.673201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-10-13T00:12:41.673302806+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.67327Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 
2025-10-13T00:12:41.677762823+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.677657Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.126.11:0"} 2025-10-13T00:12:41.677885066+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:41.677826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"[::]:2379"} ././@LongLink0000644000000000000000000000022200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000010127015073043234032774 0ustar zuulzuul2025-10-13T00:08:34.870034549+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:34.868355Z","logger":"etcd-client","caller":"v3@v3.5.13/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000ee000/192.168.126.11:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:2379: connect: connection refused\""} 2025-10-13T00:08:34.870034549+00:00 stderr F Error: context deadline exceeded 2025-10-13T00:08:34.996396846+00:00 stderr F dataDir is present on crc 2025-10-13T00:08:36.998615436+00:00 stderr P failed to create etcd client, but the server is already initialized as member "crc" before, starting as etcd member: context deadline exceeded 2025-10-13T00:08:37.000767201+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to be released. 2025-10-13T00:08:37.009464396+00:00 stderr F 2025-10-13T00:08:37.009464396+00:00 stderr F real 0m0.009s 2025-10-13T00:08:37.009464396+00:00 stderr F user 0m0.000s 2025-10-13T00:08:37.009464396+00:00 stderr F sys 0m0.008s 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=8589934592 2025-10-13T00:08:37.013340254+00:00 stdout F ALL_ETCD_ENDPOINTS=https://192.168.126.11:2379 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_STATIC_POD_VERSION=3 2025-10-13T00:08:37.013340254+00:00 stdout F ETCDCTL_ENDPOINTS=https://192.168.126.11:2379 2025-10-13T00:08:37.013340254+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key 2025-10-13T00:08:37.013340254+00:00 stdout F ETCDCTL_API=3 2025-10-13T00:08:37.013340254+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_NAME=crc 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_SOCKET_REUSE_ADDRESS=true 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION=200ms 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_EXPERIMENTAL_MAX_LEARNERS=1 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000 2025-10-13T00:08:37.013340254+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing 
2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_INITIAL_CLUSTER= 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s 2025-10-13T00:08:37.013340254+00:00 stdout F ETCD_ENABLE_PPROF=true 2025-10-13T00:08:37.014077177+00:00 stderr F + exec nice -n -19 ionice -c2 -n0 etcd --logger=zap --log-level=info --experimental-initial-corrupt-check=true --snapshot-count=10000 --initial-advertise-peer-urls=https://192.168.126.11:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt --peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://192.168.126.11:2379 --listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0 --listen-peer-urls=https://0.0.0.0:2380 --metrics=extensive --listen-metrics-urls=https://0.0.0.0:9978 2025-10-13T00:08:37.051140065+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.050838Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_CIPHER_SUITES","variable-value":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"} 2025-10-13T00:08:37.051140065+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.05106Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_DATA_DIR","variable-value":"/var/lib/etcd"} 2025-10-13T00:08:37.051140065+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051081Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ELECTION_TIMEOUT","variable-value":"1000"} 2025-10-13T00:08:37.051140065+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051114Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ENABLE_PPROF","variable-value":"true"} 2025-10-13T00:08:37.051233898+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051145Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_MAX_LEARNERS","variable-value":"1"} 2025-10-13T00:08:37.051233898+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051162Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION","variable-value":"200ms"} 2025-10-13T00:08:37.051233898+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051176Z","caller":"flags/flag.go:113","msg":"recognized and used environment 
variable","variable-name":"ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL","variable-value":"5s"} 2025-10-13T00:08:37.051246908+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051225Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_HEARTBEAT_INTERVAL","variable-value":"100"} 2025-10-13T00:08:37.051256699+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051242Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_INITIAL_CLUSTER_STATE","variable-value":"existing"} 2025-10-13T00:08:37.051319010+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051269Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_NAME","variable-value":"crc"} 2025-10-13T00:08:37.051319010+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051307Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_QUOTA_BACKEND_BYTES","variable-value":"8589934592"} 2025-10-13T00:08:37.051364862+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051324Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_SOCKET_REUSE_ADDRESS","variable-value":"true"} 2025-10-13T00:08:37.051410033+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.051371Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3"} 2025-10-13T00:08:37.051410033+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.051392Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_STATIC_POD_VERSION=3"} 2025-10-13T00:08:37.051443694+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.051413Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_INITIAL_CLUSTER="} 2025-10-13T00:08:37.051617440+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.051558Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2025-10-13T00:08:37.051617440+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051589Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--logger=zap","--log-level=info","--experimental-initial-corrupt-check=true","--snapshot-count=10000","--initial-advertise-peer-urls=https://192.168.126.11:2380","--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt","--key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key","--trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt","--client-cert-auth=true","--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt","--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key","--peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt","--peer-client-cert-auth=true","--advertise-client-urls=https://192.168.126.11:2379","--listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0","--listen-peer-urls=https://0.0.0.0:2380","--metrics=extensive","--listen-metrics-urls=https://0.0.0.0:9978"]} 2025-10-13T00:08:37.051774904+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051717Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"} 2025-10-13T00:08:37.051786705+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.051758Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. This is not recommended for production."} 2025-10-13T00:08:37.051827346+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051785Z","caller":"embed/etcd.go:121","msg":"configuring socket options","reuse-address":true,"reuse-port":false} 2025-10-13T00:08:37.051827346+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051804Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]} 2025-10-13T00:08:37.051904778+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.051858Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2025-10-13T00:08:37.053628591+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.053508Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"]} 2025-10-13T00:08:37.053628591+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.053559Z","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"} 2025-10-13T00:08:37.055130317+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:37.055013Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"GitNotFound","go-version":"go1.21.9 (Red Hat 1.21.9-1.el9_4) 
X:strictfipsruntime","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"crc","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-backend-bytes":8589934592,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1} 2025-10-13T00:08:37.057023684+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.055126Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-10-13T00:08:37.057424846+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:37.057369Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member/snap\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-10-13T00:08:38.087243581+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:38.087089Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"1.028783663s"} 2025-10-13T00:08:40.435203817+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.435053Z","caller":"etcdserver/server.go:514","msg":"recovered v2 store from snapshot","snapshot-index":40004,"snapshot-size":"8.9 kB"} 2025-10-13T00:08:40.435254978+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.435158Z","caller":"etcdserver/server.go:527","msg":"recovered v3 backend from snapshot","backend-size-bytes":74416128,"backend-size":"74 MB","backend-size-in-use-bytes":42295296,"backend-size-in-use":"42 MB"} 2025-10-13T00:08:40.572070994+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.571901Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","commit-index":41289} 2025-10-13T00:08:40.572482966+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.572293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c switched to configuration voters=(15298667783517588556)"} 2025-10-13T00:08:40.572546438+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.572477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became follower at term 7"} 2025-10-13T00:08:40.572546438+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.572508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d44fc94b15474c4c [peers: [d44fc94b15474c4c], term: 7, commit: 41289, applied: 40004, lastindex: 41289, lastterm: 7]"} 2025-10-13T00:08:40.572810586+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.572726Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 2025-10-13T00:08:40.572810586+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.572761Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","recovered-remote-peer-id":"d44fc94b15474c4c","recovered-remote-peer-urls":["https://192.168.126.11:2380"],"recovered-remote-peer-is-learner":false} 2025-10-13T00:08:40.572810586+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.572786Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"} 2025-10-13T00:08:40.572951711+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:40.572874Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-10-13T00:08:40.579573502+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:40.579484Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} 2025-10-13T00:08:40.586861294+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.58678Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":36405} 2025-10-13T00:08:40.666273412+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.666054Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":37545} 2025-10-13T00:08:40.724805134+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.724675Z","caller":"etcdserver/quota.go:117","msg":"enabled backend quota","quota-name":"v3-applier","quota-size-bytes":8589934592,"quota-size":"8.6 GB"} 2025-10-13T00:08:40.821053765+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.820942Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d44fc94b15474c4c","timeout":"7s"} 2025-10-13T00:08:40.840819507+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.840659Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d44fc94b15474c4c"} 2025-10-13T00:08:40.840927580+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.840861Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d44fc94b15474c4c","local-server-version":"3.5.13","cluster-id":"37a6ceb54a88a89a","cluster-version":"3.5"} 2025-10-13T00:08:40.841514288+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.841335Z","caller":"etcdserver/server.go:760","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d44fc94b15474c4c","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} 2025-10-13T00:08:40.841553519+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.841391Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} 2025-10-13T00:08:40.841579130+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.841554Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} 2025-10-13T00:08:40.841598640+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.841571Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} 2025-10-13T00:08:40.849089338+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.848945Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2025-10-13T00:08:40.849128369+00:00 stderr F 
{"level":"info","ts":"2025-10-13T00:08:40.849073Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"[::]:2380"} 2025-10-13T00:08:40.849219292+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.849111Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"[::]:2380"} 2025-10-13T00:08:40.851644026+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.851466Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d44fc94b15474c4c","initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"]} 2025-10-13T00:08:40.851644026+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.851562Z","caller":"embed/etcd.go:859","msg":"serving metrics","address":"https://0.0.0.0:9978"} 2025-10-13T00:08:41.473741298+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.473549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c is starting a new election at term 7"} 2025-10-13T00:08:41.473741298+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.473627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became pre-candidate at term 7"} 2025-10-13T00:08:41.473741298+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.473708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgPreVoteResp from d44fc94b15474c4c at term 7"} 2025-10-13T00:08:41.473741298+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.473724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became candidate at term 8"} 2025-10-13T00:08:41.473741298+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.47373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgVoteResp from d44fc94b15474c4c at term 8"} 2025-10-13T00:08:41.473824680+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.473741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became leader at term 8"} 2025-10-13T00:08:41.473824680+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.47375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d44fc94b15474c4c elected leader d44fc94b15474c4c at term 8"} 2025-10-13T00:08:41.532696611+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.532521Z","caller":"etcdserver/server.go:2119","msg":"published local member to cluster through raft","local-member-id":"d44fc94b15474c4c","local-member-attributes":"{Name:crc ClientURLs:[https://192.168.126.11:2379]}","request-path":"/0/members/d44fc94b15474c4c/attributes","cluster-id":"37a6ceb54a88a89a","publish-timeout":"7s"} 2025-10-13T00:08:41.532696611+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.532538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2025-10-13T00:08:41.532696611+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.532604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2025-10-13T00:08:41.533311139+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.533162Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-10-13T00:08:41.533311139+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.533282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 
2025-10-13T00:08:41.539141377+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.539052Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"[::]:2379"} 2025-10-13T00:08:41.559123092+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:41.559015Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.126.11:0"} 2025-10-13T00:08:52.934052898+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:52.93369Z","caller":"traceutil/trace.go:171","msg":"trace[71308416] transaction","detail":"{read_only:false; response_revision:37558; number_of_response:1; }","duration":"261.219831ms","start":"2025-10-13T00:08:52.672436Z","end":"2025-10-13T00:08:52.933656Z","steps":["trace[71308416] 'process raft request' (duration: 74.986489ms)","trace[71308416] 'compare' (duration: 185.967434ms)"],"step_count":2} 2025-10-13T00:08:52.934052898+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:52.933878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.669315ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/egressips.v1.k8s.ovn.org\" ","response":"range_response_count:1 size:5207"} 2025-10-13T00:08:52.934052898+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:52.93369Z","caller":"traceutil/trace.go:171","msg":"trace[42141151] linearizableReadLoop","detail":"{readStateIndex:41307; appliedIndex:41306; }","duration":"235.4814ms","start":"2025-10-13T00:08:52.698169Z","end":"2025-10-13T00:08:52.933651Z","steps":["trace[42141151] 'read index received' (duration: 49.159186ms)","trace[42141151] 'applied index is now lower than readState.Index' (duration: 186.320744ms)"],"step_count":2} 2025-10-13T00:08:52.934052898+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:52.933941Z","caller":"traceutil/trace.go:171","msg":"trace[1914025517] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/egressips.v1.k8s.ovn.org; range_end:; response_count:1; response_revision:37558; }","duration":"235.763438ms","start":"2025-10-13T00:08:52.698163Z","end":"2025-10-13T00:08:52.933926Z","steps":["trace[1914025517] 'agreement among raft nodes before linearized reading' (duration: 235.534031ms)"],"step_count":1} 2025-10-13T00:08:53.248936654+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:53.248772Z","caller":"traceutil/trace.go:171","msg":"trace[864650028] transaction","detail":"{read_only:false; response_revision:37561; number_of_response:1; }","duration":"175.463931ms","start":"2025-10-13T00:08:53.073267Z","end":"2025-10-13T00:08:53.248743Z","steps":["trace[864650028] 'process raft request' (duration: 170.832868ms)"],"step_count":1} 2025-10-13T00:08:53.249089258+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:53.248998Z","caller":"traceutil/trace.go:171","msg":"trace[2110986355] transaction","detail":"{read_only:false; response_revision:37562; number_of_response:1; }","duration":"175.189514ms","start":"2025-10-13T00:08:53.073781Z","end":"2025-10-13T00:08:53.248971Z","steps":["trace[2110986355] 'process raft request' (duration: 174.920626ms)"],"step_count":1} 2025-10-13T00:08:53.249458289+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:53.249358Z","caller":"traceutil/trace.go:171","msg":"trace[754485902] transaction","detail":"{read_only:false; response_revision:37563; number_of_response:1; 
}","duration":"172.98817ms","start":"2025-10-13T00:08:53.076333Z","end":"2025-10-13T00:08:53.249322Z","steps":["trace[754485902] 'process raft request' (duration: 172.569608ms)"],"step_count":1} 2025-10-13T00:08:53.249458289+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:53.24936Z","caller":"traceutil/trace.go:171","msg":"trace[1922563257] transaction","detail":"{read_only:false; response_revision:37565; number_of_response:1; }","duration":"158.72086ms","start":"2025-10-13T00:08:53.090607Z","end":"2025-10-13T00:08:53.249328Z","steps":["trace[1922563257] 'process raft request' (duration: 158.659738ms)"],"step_count":1} 2025-10-13T00:08:53.249520870+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:53.249361Z","caller":"traceutil/trace.go:171","msg":"trace[496558728] transaction","detail":"{read_only:false; response_revision:37564; number_of_response:1; }","duration":"172.99365ms","start":"2025-10-13T00:08:53.076334Z","end":"2025-10-13T00:08:53.249328Z","steps":["trace[496558728] 'process raft request' (duration: 172.827845ms)"],"step_count":1} 2025-10-13T00:10:04.329470263+00:00 stderr F {"level":"info","ts":"2025-10-13T00:10:04.329323Z","caller":"traceutil/trace.go:171","msg":"trace[848082496] transaction","detail":"{read_only:false; response_revision:37685; number_of_response:1; }","duration":"121.192205ms","start":"2025-10-13T00:10:04.208107Z","end":"2025-10-13T00:10:04.329299Z","steps":["trace[848082496] 'process raft request' (duration: 117.371464ms)"],"step_count":1} 2025-10-13T00:10:04.457659001+00:00 stderr F {"level":"info","ts":"2025-10-13T00:10:04.457498Z","caller":"traceutil/trace.go:171","msg":"trace[65945612] transaction","detail":"{read_only:false; response_revision:37686; number_of_response:1; }","duration":"104.659775ms","start":"2025-10-13T00:10:04.352804Z","end":"2025-10-13T00:10:04.457464Z","steps":["trace[65945612] 'process raft request' (duration: 103.638266ms)"],"step_count":1} 2025-10-13T00:10:08.480779081+00:00 stderr F {"level":"info","ts":"2025-10-13T00:10:08.480666Z","caller":"traceutil/trace.go:171","msg":"trace[639656824] transaction","detail":"{read_only:false; response_revision:37691; number_of_response:1; }","duration":"120.978088ms","start":"2025-10-13T00:10:08.359662Z","end":"2025-10-13T00:10:08.48064Z","steps":["trace[639656824] 'process raft request' (duration: 120.778663ms)"],"step_count":1} 2025-10-13T00:10:08.657615209+00:00 stderr F {"level":"info","ts":"2025-10-13T00:10:08.657439Z","caller":"traceutil/trace.go:171","msg":"trace[30412780] transaction","detail":"{read_only:false; response_revision:37692; number_of_response:1; }","duration":"245.772227ms","start":"2025-10-13T00:10:08.41163Z","end":"2025-10-13T00:10:08.657402Z","steps":["trace[30412780] 'process raft request' (duration: 230.647008ms)","trace[30412780] 'compare' (duration: 14.986555ms)"],"step_count":2} 2025-10-13T00:11:18.680105315+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:18.679941Z","caller":"traceutil/trace.go:171","msg":"trace[520159259] transaction","detail":"{read_only:false; response_revision:37805; number_of_response:1; }","duration":"188.517229ms","start":"2025-10-13T00:11:18.491401Z","end":"2025-10-13T00:11:18.679919Z","steps":["trace[520159259] 'process raft request' (duration: 152.714511ms)","trace[520159259] 'compare' (duration: 35.674094ms)"],"step_count":2} 2025-10-13T00:11:18.708371866+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:18.708279Z","caller":"traceutil/trace.go:171","msg":"trace[2142774099] transaction","detail":"{read_only:false; 
response_revision:37806; number_of_response:1; }","duration":"186.587993ms","start":"2025-10-13T00:11:18.52167Z","end":"2025-10-13T00:11:18.708258Z","steps":["trace[2142774099] 'process raft request' (duration: 186.467209ms)"],"step_count":1} 2025-10-13T00:11:29.645714416+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:29.644751Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"} 2025-10-13T00:11:29.645714416+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:29.644819Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"crc","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.126.11:2380"],"advertise-client-urls":["https://192.168.126.11:2379"]} 2025-10-13T00:11:29.645714416+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:11:29.644906Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp [::]:2379: use of closed network connection"} 2025-10-13T00:11:29.645714416+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:11:29.644984Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp [::]:2379: use of closed network connection"} 2025-10-13T00:11:29.666211224+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:11:29.663162Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept unix 192.168.126.11:0: use of closed network connection"} 2025-10-13T00:11:29.666211224+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:11:29.663451Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept unix 192.168.126.11:0: use of closed network connection"} 2025-10-13T00:11:29.666211224+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:29.665174Z","caller":"etcdserver/server.go:1522","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d44fc94b15474c4c","current-leader-member-id":"d44fc94b15474c4c"} 2025-10-13T00:11:29.684153979+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:29.684045Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"[::]:2380"} 2025-10-13T00:11:29.684366125+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:29.684328Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"[::]:2380"} 2025-10-13T00:11:29.684366125+00:00 stderr F {"level":"info","ts":"2025-10-13T00:11:29.684344Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"crc","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.126.11:2380"],"advertise-client-urls":["https://192.168.126.11:2379"]}
©´*.»YTœuê(o0ÅÊþ²™¡Œ?¼Í¼3@ %BJÏ|Áøˆu™ÑÛ1SÖøXé ö’À–Lã*I°”¼e¤™4£ËZ€Õ[š9vý2uœ‹¤‘!ÏŠSq&stµ ‡@¡Ô%=“‹ñEœã´ËÌŠ~ä*¿S$“7]Ãoû¨†eôÖ`öõF•ŽËe‰WOú÷ý篟ÞÇîñá·³Y´&oQ€ÃwC2i•‰‰³&& %±×ø¹'²m‚–± ¯]8ÊŒ*JX%3‘›š³ÔÇðÌc.š÷_??ýx|õz÷*¦8ß»sùû›‰ŠZÊ)4/*†ÖÓIQ‰åɉ±6 ,ÀaCLV‰ $Ô×™mS5Ø·Z‚ÖðLG¡mËÇÊB¡Óì;¹æ…!Bœ·¥`sD–MÖ„{7Š À"™H –9­“ ip99)˜:´—Ç A\î¡øGÜÎ@P—=+H¾„2ñ¼,P yD2û®/Bm% vBK¼ Š`:oð|:ˆÂÁS¼ÞE7“ˆ § \‘Àä .¤˜ÀŒˆ² ;wJØ r…H°/ I½G'µ•“‡#·lf’9ÈÚŠ`õ:zËÑBÀ “5†þR€‰ÓuM?^KÈèfUådŸíŒ€T϶Œ¢ÑýR;ÛìÆ¶¹×ŒÝRB@â‹5!Ĉ³œB !à4§ü®P*ü.SQNÔ…à—~|ĺjrg¹¡ªÊ'‡k©…L²F 4´˜ss„–ÈãZåõaç:åMÔ9¼Ý´›Þ‹D„©ÅõÇ‹•w|µ* òÔ!a¹îsU{U!®âÛë¦Üºg€ŒZZOdÞï†|™=[º÷°{ÿrÿmÔ°Ýâ‹¡0ùîÖ,êMó6Û÷0W°¤”¿³‰+ö¹È µRá]#Ù"|i„À>1ìßX˜¾Z\+© Ã….mºVH$‡ñîžYNŠÄ©MÃ]Q¥˜t.3oÓI:Áä3˜¾ñ «L:Y áË'R­MïY˜B2³¾J  NÙWtšLË‹„÷~³½Ÿ P§v£?=šš?}Þݼ÷ N•Oú‡€ÝÌÅ—{ó<|û±ût¿ûÝŸ©z ÒcívÅi¤&Ë©(L:½øâ(ŽIçÄ1C(º…e±¯6ï”-²,I,Pˆëdª©†èñ¶0¯í„ìX'`çýÜ jø&à@OÅ•F§›Õ¼2ˆEwá|ÛÞèˆâ#ƒƒôZ¾žh‘ý@¶4AlZU„Á—ǯfvõö°èKésžÙ5ÅYf[ Æ7«Zé(m™c\¤£YÈþÐJ¾å»Ï²7ï¬æ[}0o±@ô€§FIuâ‰øÄ7,ßÓÕ*ƒyË3r€JÆ%0F[NÂ)·*Üþˆ ÚÒg¹oV‹„V2.§jÆôgô ªþV¾FZ¦¼fs‹Ù‹¥±ÑÕªj(ØwFæjÆaÈäÕxM­Œëh¨{IbHAš (\Åæ(6tÛðŒ‚y Ä:ô^ã“_4j±ct™Šé/‰}G{Áñ«¢m†ÐAåÆŠ %ÀrÆ@ŒH«5·v%··+O÷Òî%œ—âœD^Ü߬fB´S‚´±âNLˆŽ áÛMˆÉÈ… c„š¥Ò"ª#r€íæBL@Þ »n‘Š':wˆÿÜýv1 Ӳ†9÷0 RX<ïÑð3‹£ƒ &MÖˆ±ÛÍí'OÅ„(Sü “ú÷¾#.Õ"L ‹%h‚Ø9¤טzãWS­\ÜVJ„m0ŽÎ ÔR éÆEÀ:M*‘«JsîÁ»œJ¨±ÅX¨}¶¥úAÓÿd© À*Å£À{øroÈ·Å›$çùéóý ´Œ¦/ïö¥ü_Ž’öË©’ÿËãûÝ'cÃ]áùé…:éÁÀOÍ|¨ÐInZI"–{*‹.ݶ–v›©+°ïãÕ8ßK"ËfÕ•c9;j uu9kí»HÊ…¢˜¼„Ùª®ýo ®d6ST‡ —©ë™¬]‚¡œf:î~——…º eYEôiÝìj¨jP”)ÉRHÀEtjÓC-€)ø¢S aNsgqŒSϓ͊È#¢´¬c5A•¨Ó"µË ОeõG$ÅíÕ.¨Ó ÓMÕî«#¢¾x"ôÇÓçï÷sh(ÿýæ(š’KIX¥˜¹iÝjVV‚K·Ú\’r÷ä’Žµ“àEýôæ*eË÷çõSç“ú™C±ê1¢Íô_ù×Õk©7 x¯m^¤¥sâP¡¥H7H"£'† Ç7ÒÒ¥sޝWM-ò—€!\WKv,~X¥–DMÏ7Ú§¦Io<#úšv›ùÐh>´ï…­ÐÑ'ÃŽt,¿èÔâCMžÅjh'³ãOsÊíRaG ¶´ôpÿõiõ&S(¼2¥ ,_µŸ2Ï9‹=rÐiÎù]¹„£9ºLE›¦}kŽél¢o|ÄÊÅØÒäêg¿[˜¨×Hméë²;Fˆ·mè9 ÿmׯS2þv9q†YC•9Yã¨Ø>v¤Ùí:ä/ÊÀ*K¬†IªP¦UòBÐ2&j™9©ä%SÁ…×§°„«hü XÃUÐ}Ëë$W)”¸:ºXHD² #buUcח⛫Šû#¯Iþ‘þkÅ5Öi#ëþõÞôФÓònßÿs0އ¤q|ö³ýþ·ç}¤öb’}7àîÁhzŽÚjÑŽ&ŽìÜùÖÓwçM2XìªÜêðöG ¥¨…¢«I€¹ÉáÉ%Ê}È…´R–.Ä 3þÎQ“AI¦Šäû«RWjt—ŠÖ̯‹gÈã#Öù;õ)'ªõwýµ,JóòQ»&¸ mž“8û|}f¸yÁîbªõÊüK¯)áÅÀ?5³a^%$l)áH‰˜Jx+(×iµ<™'‰UZŽ2§å kzG*miøG[è"Bõ>Ë>-š>¬ÑTyݵq'Å@f¸ÜÙáWFØ÷YNbm¦¼é¦L 5ëšbL¡z{Ç6ã:‹Íd\ÅâÛˆ(¡×Üt˜²}ÓÑ0Î4ü˹ÒnÕ7(÷îi–^몀JbS‹šeQD”6©+½&ùD ¸ŽÞÕeá’…7ÙC´ƒçM¼åœ@³&>r±,<¢á6ÞU&²ä´ÈÆyÇÆ*`Ñå­m|vdöKŸ|ÕœòÐ-g)m»–©°ùù¶§XmÙÿZtU&R²DOVÉD ùº/틤o¾ôà¨Ä²pœ‚N˜r âºx¡ý¥ká°2‡j„$‡}0ï;gг¹h󟓌Ýß<a3GW«‚H _MõY<õÑ6“}e+ãú#BK7µ±ÛÒoáF‘\`\öE¦8³P­gЦ iÒïo.¹øDyºZ ã2w*„1.i`J«SK´KFËÅ|ܼ!hÒBФ…&ÖÈ]R_2Ï+Ž ó¼ŠPœÆݦ¢Þ¿JÒ€­òséŒUQ“t¡º ~«œ1·ÛÝþlYEÍܨÅ×ÖÙ%ª?|JíéË»áצBzIéeŒs¥ ÅYAB.ƒŸ²E—»}¶Ùrä6A-F°H€VIQƒ1‡,F}Ùb¯íl¼¶{y¾{yøí‹¯®xß®å$w(R[!'æguVN(g¦N¤ÚDL| "W·¯p'H)ÕšhõG@À¿Q[=-i«7;T…[Aû#òkÄ«êü&'szMU pá@%žš u,9Ë\…È=Z™ìÇÚ_¶h÷Æ·©è¬·¯²d s>/;ŒÎX‡ø¥ËPßZ¿• hË,Y vç°o—X™»ÔBùR’ÎÈbž— Ë<"ÌJ…h1]­æÒ>‹ü]kÍXã¦ÍØžq7@S /Άè–増ûÝ÷ç‡o?Ά¿Û?CüùÍ‘:Œ!¾_àý—ß>.›ÎÉ?ÝúôZë÷¤^ŠÛöêýW-Ô6ß1ý@ˆ&Âbúj|ƒ¥Ç‡/ýéîyWz%Òe¯Dç§^~ì:¿ýœwí=g#ÃLZ–ú[€&ýÂ’…;=š”²-†MŸïÐ9˜©Îç3ÏYw e«}¶h}Âù­¿¹ºµý±„,åKûcœJ%ÑÌ{væM»±KÚ— ™RÝŸ°Â•Y‹Þòkî3´+>öà1M;õ¤”Ö¿är-ˆ0u—æ"2 MÓ/¹û›—«É£«Õ¼ävb*¶Ÿ‹ø¹tHq¿D“kIY«#pã·¯ti†ôرé8ˆ {Lºšß¼€ßTY«ø=ý,´¿y,/I9]­†ßöY9B®/BoÆ8iñÐýúa6-_Í8YÀ8¶˜¸†q˜hžqå&úÑÕj°~3Æþg'‹}£¬b4,K°ˆ-*[Í6]À¶½ÀPÃ6¡<Ï6.ÁaŒnVɵ˜8äELËb‚ULÃÔ¢m¨œ,qHÐV±ÒBÅ*ñ’ýGÍs*Ùq–SŽâ£ÛÌW¬2w¤@ù¬Oæìˆu+Èç@L‹fÎÛ{æ÷ê«M(¿È1¤Õ%+ K8³å5 <½Øf¸9ky?Êñjµì¯íËì.)JXgwÛ&û'JY_j”XÍ7ï¾Ç þ2y+ã,ßr]jt³*¶…N—R7eÛ\®éGäx“µ&ð÷ª4~zzùöè½k©9æ?ÝR}ˆšTÀ¢c‹²n]zM»¦ÒÄ¢s%%™}Ú;WžM3©ü4"ÔF»²  ,ÐVÁdAº®¨Ú¯—€T®  Å5!µ5EŒû᯼‚·ÉCaU'i€j¼jŠž_ÌȃQ¬ ×0"É&àÒ)pR|k°ßn)$«C] ¤•Ë¡OÝÿöõ³ÿn÷ôøøý‹’coÄËfòÉ‚ñÈ<Ÿ69ÞRгò¹¸¼dD¡Mú!¨¶púÍÅ£m5GŘWÇdP“açmÄZK[(­i–«¹<€z¸Ùsw­Ëq·úURúmŽ»qén¸Rù•?ç1DŠ´ëfJQ•ó^çΓ`w¥r{·¯³’“Tªšœ™ÆhàCULfÅp‰±ÍÊ“úñ1™qM6ˆÉü¶vtS¼{³ VáæwùãÛ¦8,hŠþª›öƒ=‹U=$Ûé|k ÖC¯Y¶Ô$#QãÒàæ@8µ1[â_gJ𥠜­a«–5¢ÑÓƒ¨ÆÀÿ†Wîî8X7 1”µŒ'<,†¢Ã,Ìe¾rnou²*¤]Y¼‹ÒÄ6Ñìã5} ×1Åš²,¬9ß÷Xœ÷„»%uùÇ@ì«n†Uè™POÔ¢Âh·ïÀS–ï}#òôº…«ÛÃs¶YåKÃW‡åˆHOìuX0Û·¢Ú Û¬Ÿm±—kt펙ô¨eãhx 
ÕGb”B×ý§÷2w² X{~‘•šn§Ë¬´³fƒÛÕa*&F5âæŸÝ£¯16f»ßÕè®Ûçˆu™dMÐ\~ØGWÊG·Dq‘Ë—JDŠBAYäáõÑjRý,æ.4éot¢fˆq]#¹½må“4~}PÝÀàÒÊ}ªKËÚò˵¿­ª/4HWû“Z'~74ĸÔÓÐÁE@?Œp•ªÛ[‚æ:Á“£ ¶»Pd[ÊSVÅ4·$—X®Î4éÙ>¥yœýéxSR¢j+i~“KVÒÕp-»„a}²šL&jã É3ßùì!Ù4uŠ]t¡Å)¹]J€¢ }Œ”“ââ@CýïŽï:uóÚvŠ;.­™¢täïÙî¼þé5nøŽ”ë«1ÅLIÅ$DJå<†4ø€XÒ`rY^‘iJ‰É¶s» ×9Hó¨@< ª€‡*Û\UÕXß¹ø¯]ùMDç¦~*h_ÔO}†žŽÏäˆ<7—®Vù•Œ_U‰P]IX ¥$P¨í {‹¶"Î µ4tLò€Ò¤–ªÍú ‰†—®[VÄÚ ÇÓÊê®Rô†¡_FPÆ €{.2ö0Q~RÒ?­&ÐE·„˜bㆧ²ÒÓWj…7;q˜q©qI¿„¤†qTVH€ìèêd•|‹È±åªEßÍAÿBFü ‘lP³÷NðÀéªÛÄïÞ¾~|·¶ôï^ÿzó¤>Ac¯?mÞðÅnä6Ì49³Ž`Ï}qCšÄܽFad¤@ƒ{Äû‰8Íešð¨'IJ‚²·vñ¢†2å.KW›á1,bkB“‹~yL¡Ôƒ‘϶d, wžHu7À’la¹Ÿˆ8.^Á<žý·“ÕØ]ð6rØ”0¡d$GUoÛ…XOö_v<ηê.noÛvCr¡B-X)ñÁå‡2ŽG«šÕø+:W¿$©øoûäg¨è=ú–ÐÇšvÝ?½ÿ—·ãES3¬hÙ³ ÏT(ØY³óÕaÊ÷_bëµÐ=¿þZ?aèúËŠÍèò bà¬ìf&ý`× F¯‡?\بnQ±yÜCY}U5¤¢L`þz{u´ õYBp·gCUëgd »z ³³çÙ-¶ ƒ‡Øw%‚jjs3øË˜ìÓÏ¿ÜÝ<ð›îîn4+I7tË|#·éöðî!àýÜKì‹Á cIlä*Ô̤ï,IQhb~rçH ­¿úÑú? Z— 6ªNlr·1°Gh ×S´ öÎwÎTe|©þ“8M eNš{t}Âþ°yý_¦ìÔ/ÎÉ. _U®ž1äx—â‡êûŸ pR?Õ½ïdÿˆù^ä`A[:.[æÒ‡ûˆ%ßhõõ§,õÂÕ?h‹å®U|,+tbîé?åÈž5šûÚ_h‘k |u»œœ‚šïÄQ#(,÷X‰ …v¹=­)Ûòz$æ×`_­^3Ô£©î>MÅHqÌ"¤mÎödöþ`ÈŽLÁÖÌ6 Ï]÷ „!Ùöú£Ø¦sb">ypDLÄÖ€jÊ`ÛÆªAµû§ÏêÏmmEW‡xž ô4bñåpûÒÚ =4oóî¦Y_|ŸYMa¢S€RS¤,N¥¢WúeRŽšQcÕ¯fF€#.ê­ôCÚ ´õ¨[X?Lè;õ <Ý¿ùíõçÕÅ÷›»?þó[Ûˆ˜²÷¼VÚÒ¥« £Àf ÷•~³øâBåë?4^0\ ;Y‰dÈ’ïêfµ(I3êy y!eM#P¬€b‹`ü‹fð °§½áß(3ÁÚg¬mý¸ìž©iûA¹Àž2"@šæ&ÏèÐ<÷¨&‚8{˜wî/Qí —¢ZQf’5là›ä‚]Ò<=¨Þòm¸}skºnßÜñË•®mŸ³Ú©RúÛ ìÅŠ#Lš§<ÛµðsîÈ;CÆO<`vôWšú‰,ý¶:TmuЮ#¼«1;!AÉ›è¹}®µ:YÙ‰¶ ”É·âÞÙøŽTª” nû@Ü ©3fÖ#øx&רÀÌÎÉ¢Â_ÞßåççÿW#wÛb˜Y·â©Í¾ô¾weD4m^ËÒûÚs \vò¨"‘ú!ºö% M®FÄ;ï¿O:ò5£»yûøp÷çÝÛû¯èŽ?ߨãÝç52oïoIˆð/¢O0äØ÷t½i¤¢ZÀœ¦å( œ—·&$šÔ]™8*z’”½29RkRMHCfñ©ÉŠ–ûGÌ×Z±•ÀeÓ Íßî_¿ý\yr¤iG…\Rº,4MU“W¦‡á= R ´Â5×rÌåÎhsEýù+ÅÕÉjB16hI`j“ÿd£nˆm± '8 È­3®{¡íZ™œ³uªhlTe[çÑc /sCr«ÓÎ0uúÕ žCKÿ§M²Xç©“0¿ï"Þ´ïïÓëwÕIži‰Ùog.µÕT=c£n?eÄHC©®w®çV+‰)¹k¾Ôª¢ø…F¿:r¨¿ÙL‡‰S…!´Ò½~äWÄ)8ãRÔôÅšÉc›ö“KCM¿&iëœ[É|Úáglò¶Ø¨ÔáØMíðóU;:ü¾¯:+>&7ÔÀ¢Ì asÙ@ÂL=Æ¢±hÝR…ý’@4¯0£ù‘»BeæÅi[ý5S¡‰©­B3úþU¥F ¶—jFß©d£ñ\ò†,"°lÒ/炈ÿQQ°Ú–Ü`€~TÄ4Ø…ëHŸÃ°f¶r©ÅÆšìS©Ø²ãýaùÁ)Œï7"Miåbò%ßz>jÁ¤?ÊY¬õh󛂯pÉ'-;>ØhHö¦€5뜺ð8Ö-”GÐêC…†$;wÊO«¸SópÀHÜÆµè†Jö¿mlOF>­ŽÐYV>Íð0³!ÉïÝÜÆu¯×÷ßöÜCê=ر}RЏÛñÿÓòÅ/ß’–íß¿~ûì'E1Réj+ŠmúqëÞ&Gí³M?î|9M q!â¦G$;3…€Û¶@íËî/Jñû0î›;ý¹§~惓tžèÑ©¿sCî&pG b{H1´„·Ð©/üˆYG…èÄUy*ýÅ¢§ 9¼øI&„öÑ^¥¶¥Ãäñ"à[…IÛ;2Ø_”ž:2!æPJð&ϺùšO3O¨¿èkµçªßqSpI·þ4¨éÃìÊì1ƒÓÄîP]R*k¦Fuš©©F.38¦ Ù…Î/HŽ÷••Ÿ2Bì‹O1›UœAì§B€à¦«ôÀ=­ ÉÚ††ÓŠZØr&§ägvR!’RaTÝNŽÙûÕÑ* J¸kx†îùìYG ù(ä§²»dÅõ˜`k+®Ô§VÜøG–Åã5ïã^eR¢g†ùå|¢¡;øªw®²ˆ„àFïÝ«Þy>9˜cÀÈu”E j‰]Ÿ“Œ ™M ª ÑB—¢É2T4Y˜í˜_¦bg‚Å&êðòäúcÀ“Ö‚Š»ÆÙëú15Ä[YS"OÃ^¬vk ï$ ûrÍÓ*£¥áo;wvJøx°šq¤ÝÒ°8cšI»!oÄ·öFJEÆSod|ÐØ-¥+ GÆï1$\¶ÎëI\ÿæM”ÙƒÁ}Ÿ°fçíz•iHP#â|`[zBDÙvšçí»çù¦„é›ì_Üý¦êótÿñçGeÉcqÑ{ÛöXü^ÅÐ w‘zö’F4€miNiâÁ…Þ÷FÔ6ÁwºLÁÄ3R*—Çȼw*ºª” »ÇöQgÝêW“m· -¾l†‰ÜØ••ñ´ÿÁN ä?dïy<†¹ÍïXÕà õ]?š…ÚRHRÜ ·x >8ÿu7SlýÉ|JÝ ¢ªZáü»ÉÈ™j“€¤.•kcÁF«±dª½ó¹ÕR+šÌè¤Ð¯æÝ멦ºF zJžb[$"Æ8˜,îX·yÊzÞ"”=°ÆÉP‚”²ƒçKž«£ÕlžÒÏr‘C¢&¾é¹5O±ž~ pF=—àð‡š«xÿáýÓ‡ŸÛ7‹³ÕÍ‚ Õñ=—F^8iˆ·õTÅ‘f³ « Ç®yŒ+Ê5©{³ËR¿hF0Æ%q/WWÏ-€3C\¬Ó+ÉQÂÐUnÀ–'‰ã›ÿ¼û×iëηÐ|K†Àö½g*õ>eÊk¶Î{?‹|—*TZt‘¶qyn°·4ÁЃLgíý¨ ¸à6y¿jL vÍ"P$(Eˆ¡\AW-ÉE+‚Ívõ«5Ä¢”®m•1mb”SúŠŽrýÍ wÞ½û÷{ÕÃU5ùwùãÛ&ó›ˆøÕ¶”Ÿ¥ Ó¢¾=ò¦Éþ Þ!a½1™¿y|£š Ô7D<û»o?h"?CÚÔL’ë¸HJZ‡ó8œ•Õ™öP©Ææõ.ÚCÌÎý)3É&¤s­aTt †'‰R«\têÑ´ôÅŒ…ºœPö•¨Ü-Ùì”ê Í7úÌÈ`ô³cBOþÚ¾’pó€Yé¼oXy0{ïA\ꎚ;Ä«l˜ë;\\G†³ì–Ý–TpÔ;V·vÝ÷QRŽôu afi9ºì‚jk ©hvtx±Êw8kÈoW‡)w 1/ †4¹nZ?a¨IÈ;^¢†\»Æzw.›Ñ3$±k IÒpé·~k9,š—ìöº—„Âve"Bn6a}²ŠÊ¯!Ðx‘=ª÷O¹gd›]/ä)¸n{µÅúTŠCÜ–žÐÛ¤Ê0p¨Šk-³­µ9x/@ÅÅÖæ·³+eW'«`¶þ¦ÊVr¾vdr϶èX#ï¶`Ïý 1ˆgçVó-,Pßl|%•ø½v[­†qd÷jIRha¨=Ö|Œq!õ×JsÛå†ÃŒ£ÆA Äj'É—wª÷x´JÆù˜’ocœvÅ¡èÈ*{ಒþÇ]é^§/+â¬oD±.à‚÷mÍQ)ë}&$EöÕ!&vMæØæç‡¼(¦Ù¶z „Ÿ¼ à‚‘Å‡Aà‹ˆöRåÜî‘“ÄÀc_/†B5–0ú¦§Œ´+U¨/Š™*ÏF¢/é³)„~J)²³f—U®S1faFF#ºýº´Ÿ2Ê P]øÿûßꀙB¤Âˆ„Ð÷<#Gfµ$^SŽ`¨Ëe‰8l—»$yÜ®ã±jœ{2û MQ2;Í®xŒiÒ“¨èŠLèvJÕ<³ì¾.·(0-¥¢“¸,ßVG«äÇLM|SÝg?Ä7Ð1ئ~†P†ðõ-¦ŒáŽ‹CB¨³ÛPb8SÊÚí#I¦8à°Ø.kjˆÃPõ>j'ò]ǽ?i"ÈÌ8ž\ÕŽ¸©÷\\ŒTÚH§‡p‡ž¡ 
l¨çæV'«Qcö‹ý~H-lÓ¨Núèñ™@~WkýñVÉï„á%Û=._„2×YKÖ[éæ²5¬afh³mµ @ä›Ä‚ †Ä¨g^Õ%RyâýôýÓçÇ DW‚ñ+}1¼œI‚Á´hî„\6óàS¼¸¡ò@:ÈUÉV´™!úÕμm‚¡ŽAC¼!Á`ìù³‚9²àæÅ˜O_ßÞÿzÿfš€DYˆ’‡Šh]ÈE, çöfHH´ÛÜ”ÖKˆf ê±ö"1í¸Á -Q(º†6ÛgmD¿)‰îŸ~ùûÿüóŤ®ÆhÛ%v÷Â& ~¿ÿåïo~¡;º“ôš¾¾»½óÿxÞ:´ß ¯ÎRýz¨W+!“ôl·…7,»j|GQƒr·Â’™ýPIvæfŠJ¡‡Â6W£\¦¢¨Áj=Ù¥ä :býˆ1$ÂD‘Cý=¡ÉAbä8"è¨Å 4Z¢7Z{_=ýíX‘"c2[S Á¬T¬ŽV5¤™»éØPÚˆšøQðÝ›§ö™?N‰>,lsnòC ©X|~zÜ}IóLP ¯º¹PR}öT˜Ç$([,Ï׳ø ȱ I !W¥‰ 5F#bÌÕ.VTšÑv¹»÷´=Èk¾~Ç^|Ub?ýíáéÃ;µã7ïîß}xúó`Ñ?ë'¼è ѨÌPÙ¨Þhëi5Ôwé>á¦Ø1{Î…D§x”ÊûHøu,sóá©[6çd»á–ãs–±bဠ™“ØQæŠtZ£«öŵZ†QÉQs )Zr67Y´!»$ru²W¬ê(¨áà³k§õ3²Z‹5×%nQb²¤>â óÆJ¬Äwét= ±/¦çQ3¢O󠼆Çt¨FŒ†—„¡ð¼ñu+X'%ˆ‹Íèƒï;<È~ñ¬!ì.ÔØ#8t5ͨã°!j¢aKUÛ¦¦¢¼ €¯(À‘SÂ\,ÈïNÎÙ{–õÑjPçH?+Úžñgݤë‡dm•²egGà,Ã)i Û ÞrxÊæÆJ³¨Óé€dQ`¤«b§ªwû¾Ø©gkÓ§Æ+¸yð©—^»FP ’f"¨^zíE[5€G³eAº‚.°õõãª6e¼h¶b*›2ïjLYà˜º¾ž¬¦-Þ–[±aqÖÖ?vl} ¥!‹$ÛÎ+ÈH§ø™ÑÇ%(®»]9Er4gI×EsÖLõ{¢9ïÄ1yå·Ç!CCG-ISq¯‚Ú7¦E|Z§SÓC*Ù–°P9ß³ºj,[ž˜›È9¥b#A\$z¢g V*ÆÓq?W\f%T£CF-8œ¿ÁUcu]J°é¶ŸwÞï€ß¿HR^ëA>?Ý¿}}{ÿvŸÄ”` ž´Êêž•ä8z7¢Í cÇ]vpÎ3y¯nª ·PÿÄj é«ñUýiÛrð~×Õå˱ kÎ#P²( C®€´"ç„rpiòÑIK¬c#èKEeY²…T=—tI’ø0>æ¹!FÕ”{×ù]ŽQñr'ÔþäœEý_­:Hµ¢]ã$FŠqÄž'yq»ºIŠ^2AjZ8}Ï{Yâ£è&.@´÷\?4*;É y(dú–U(Ëq<­ú–K!®¨SÔÑÁ¸µ>˜¿È$mæqbÐ8çÿÙ»¶å»“ª )y]±'ú¡w7,ýkùús»òÊÇðKæË‹½×õíûi‹œN edŸŠ'– â 4ì¡-œìYúVЉþjõ*õaÒ(Hš"0ÅX ðºF³ÃúζÇ;éh·¯aðÏbÏŸ¢áEâ?RÇÀ&ˇÍà¿g7WêD)÷•è»é×| €ë¯ÊÏë{Uªñ}»²[¹›Yå Ç!ßMN8Od¹jË BnÕžìf«›…W×#'<¯æ) 7É ©!,4&'¹£]”PP}†ý7d ËTÔl'p°‰S›UèØe®u“dÛ5u2d²8¢ÞÖâ”SbÞÐÖÉ€RÊaÀ(pÎ¥²:BlC¨£ÜŽÞu`ò¤Z3¦¶r§¬¶2¯sÛ(¯r{IšƒÝE}³DQNºPNΨó&ë½Aô²;ª.HIbwÛö|A‡ Ј0BTIëj1â K…ÁÑPñ1§&ƒUî×Ó@3©X(YÐÒî12Ô*Û†½dîò€±ß ¢(ÈÈê;¢!(=ú=™Ç#œ·‚oÜ^9D¿úöI_ùÝ£·mJ¿^~ÿ|y÷A½„.+6>àDê–tIÓ:¼ÒáÖª­²O·²¨‚~t(¶´Ýkòg‘Pg–ÃZÈÇ—[­€ª#Æ8 “ÝÐv!M¨š›ÄÁÖ ±ÉV¥¼Ž]ûDšÃ d™¤¨‡ñÌ ,Ψ[ØVo¡ø5@,!‹ú•ØÏn=BJÆtÊ ¸”0Üñ‹{ä__qÀÐòð”“dCMëÜ6«`ZKÊ<6Ÿl߬¸ŸjgÔ¹MiÒ@¶°[¥ŠJ3ÕE:ŠJzIr¤aÝ_ËxgŒ!¼ó”C$ˆ?ZbO‘L=5Nú½U©ˆÕ©Ùež YÄIà}`[IúwôfÈGÄýgìåIÉß4Cvì…:TǬ±NÿéÃÍ—³oÖtvþYõʇ»Ë§ž´m9±@õœ˜ëÚ¯sO¾C æ\¶w’m»©>Hª 8%ààRÀ\ÚO­6üt+{¤À¬{(!û_¢Õ H†~‘Ð#bé2¥¥¨ñÂ<7¹ ˆNSanÚUe†Ò:[ò€õ¥ð¤y ˆ¶ Ì÷Ò®.Ψ†MT¦$jÀo/)ªïØM©g¨¡æ„9õAÕ¤ v|ªd¯A_¤íml28j´Šëü5Rë¡Ë‚V )r ö·kÞpÎn®öv§H˜ß¿å#J±'ÅbøÜ´y×Ý÷×ådQ¨>˨±»”5åØ|–©Úͳ¸«=¼,0çæú¨ûÕb¢”Ó’b‹ôb_¢)‚zxÃ^Á–m–dÛ~Ú|eý¸Üà+¦êNÚ%e®e–l¦Üqªþ¼þÝú7Äî+óÜÕn…j¯ôk·°­Â×@¹8T¶ØiÌ#Í1Oúq‰Wóƒ¡ZTXæáØÄLp瓪‹”BNÜýÞæ#ŽaÝÆNÙ2'½ÇÑçÆ^`X°ÕAªBË»IS(zðês;žkztA™‡múU€~(‹¬jm|%w³m>‚¥Ç+ Ô~¨’æ›{Y¨àI›|+Q¹²jþ”Wc•e®žèWE€ø£ù¦¡nOñs ¦á`B†&d*nyØ&°Ž,s <Ök¯”¹Ø–&VÙÂ6Ã\§@Cl+]¹Éê÷„®î‰ÂËd@©Ð-€Vi|Þ~a¥Žþ¿ ¦ Áhùd®ääˆ1ðZœ ‰¸$o6Àã`³ÉqD¬à×Õ¾~ÍdGôtu ¤,)ÛNå õ²Ð_B¥Ðo£Ç¶T»Õk“§ˆ¡aŸæ×[íî[ÐÒ¶O&°zþi5byÄpŸ§>ç€~û¤„%²F‰!µ$9ﯗæã$¿1ÄÝíå÷W ÝßNUÖ€IÆ÷ÝWîxx)öø†¡$H–WÝŽh罦>ß@ªï×6ЈçùBÁæNõ”ÉâJöHašÖÉA˜·¸1`‘DCQz@ñ­Ñ£ØûŽñÜ6’Õ‰ŽM(e+Øù4¥ÖV<jØ‚2Ï#Ó¤‘üÔù¯f´y®ÊþÃûWöÕ…œ¢Ø.¬²¼nÞ4óíìËi˜6£T]ŸŸ%¬oöÜxÚØöôJËÎ?¬á8ËK…ÞnÏãËHòbÏü¶Šz–\]ƒÅwÜœŒ¥¤°‡Ç›˜7ÀD·YôÿʪØ ”Iïk[/Ú|Ò¢MƒÚ¼Gù >©ºNÙûJöŠ9u©é¦ÀÝ#Êj%°US¹s Ps h"N©U%>:P”›Ö’«É…å½î±dR?[ôš>JUɽÈÃGDxSÝk÷J©ìBS’5Da¬.éFÞwI7FßrÉí«%ÿ2æüU9ÉbÓêCr’ž­ ØÃíB– õÆ7"¯#ñY\r=£ðÝnØ¡þÊcΕ¼Î8ë„1³ÑÕÏU X‘÷A=<¹ëCá¸èÛ`â&" PäùÞb.Ñ\ç*(òâòv0 öÕ6áʸÅ4H@Û1òä%„òƦÁ®ù°ûÄ4ÉÉv³®nÉä„)ïj"‚xüóèß>üg©žºTd0Îúõ|D,D ²lÈpÖÝ ¹2qÔ¿³ýÖEPÖ›™fº¡¾OdAš#m’pÊÁæ0O›™gT3¥)Ndk,Ø™)qîvÑwTz™}8 ‡Ý*b1ÛàÛ(»ý+ã 7À:ˆpÝJU&’v(©>_ýDš΀¢ê-o– 5 Y£‹Þ2Ô|„!,÷´™Mt!Åfÿ³º¦ß?¯ö ¼f§©6t ¡@hd<•Òñ¸‘áfo£>)µ ·§] Ì#ý¸·˜ Eh€·zôìøQ^}i­kJ^Ì~=»ýõòûÍ=ôo·—ŸÏžF ï>\œÿöÏÏãƒçÏ.òÅyîj"¡ X5§)ç’4C_y”u¹°›¬ j,nª«‰$M–7tw&¢%Ò“­ü†Ð/zD„.„c+ÜñðvÏD^ËÓ £IœÛLDÂÖ¹¨„C‘:Üñ#ižŽ³±‘ ð¾ÙÌ[`†¾iМ» 0RÎIUdì*G–ü²Y*][Ñü•œC›WhóM±Á+%¶ŽÚµ ÆÑn-`…žBS/Ïk ª>bpz[Êluà$Dî0ÚdýOª²†[CÉÝJjŸ )ÃÛR‘8‘4¥"RM*”¹– Á4ïaÄMIÿ;Æ6Œ}{•£Y§´$ùe“¦Æä¿ÜžÝiìwþý^·–cqüº?Éb"S¨ ¬±;»T·…êË›ïo~pÕtªƒxìáÛTrK¡!éé‹1sÙ<5=Ã6Ü—Šü¡4~=üóÒ6ŒA!ú|¶Š<ž05å ‹´«òÀµ†–Å…ì§Â2%Ø KÕ‡C‡~i ’táꨄYÒ° ;Ú‰#ä¶ `5kЈÑlñék89¤9Í’A´Ó&Æi„Ž0Ž‚t0®5Wqxh:‘©2èÇjl”ÚlCµÊÔb…*8òa.¦…ÉåBò3-‹Í‹q?ÓôˆçPö‹¢Ö±;ÀØFs%òoWŸT;ªÆxw÷íìæîóõwýwV–ûb‘¡ràÕÝÖÑ/ç…>‚j:§óç?ttP?ÌêûŸm·>û‘‡ó~FƒÒ_û™‡ÛÒŸ õ+óÚêžT˜%ö䕊Ei0WDÆý †“2Cn§ 
5—uA7Ê«]sKÒ\(àɶêmˆ2m“dQ7:0.3?c\UÔ#-xww¦‘×RÊŸ ä,´oùÍ™ÉñÍž¾yΞ[ ùÝ¿¾\2îÿôñ!ñ+ßÍ“µJw·gËÔ_Q@ý5Ê9o¶%MÂêf3…nm>"w­6 EF¥îR¦¡•· ™>eV ÙLwª¢2, sùl8I‰\ÄùFI#ªÔ õ¾ÑÃñrVW/I–­ñòùÝÕÅíÕユ§Ÿò§N¸TG®‘…[X)z¬”´ÂêÃ]…Z¨¼¸‹î¹HýRÈ!‚WC“¨F•Ïz=öáïÑÆÕ(Û’»­Érëäy¾ÈÛþ,WjÐÖë¨ìk$Ë•e@Nk!ñ‘XÊÕ—úDM;Yž`Ê çjËr‘÷∱á±½°Ù«¼w“¤®%dD*úêÈŒ*oq»X‰õþ%0¶…BciŠM¡À:JÙ‚4×Ä^™$"Âé²Å!õ-d< §Íî®%d¢™÷,yŸÛ<À/º?U½?Ñï%«Þ—ÏÎÏ J_Õ»2hzE4).µ"mµrĤxѵøx[{dPm.ÜrÛÙo!’¹´A(uv-Ž8îŸÞùU}[É,PvG¤õÙ~ "õ‰(_ýzÿíêû QþµüöÛÇEÇ/ñR?ýÅ ­ËÄ£îþ’ÅèO‚äD¢öÌí÷‰çF¾~”üTb¿g|8¢§Y¯X$ÂÇM¢·|Œ&Î@­ÖLSV½®Î°§*òÀ‚2Ogfœ0§rè²þ©rD¶S«Qö[DãT‰AšãgM`;7g/?Uš³­„ *leµ;Ûšh¿áÉáÌNþ¿ºü³ÔåkŠ‘õ#£Êá€b䨝dËêDc¿áDѸÌU·¸­Yï¤4£µšÅ‚2OZOß5k°èígä8T_Ô›ìäÚá)>?©Ê*QÅf”k‰Ýé9žH¡‘µ±íáV _aÚ‘îjçù‚0WzŽ'+ßza‰™l9§0bwRõpqîÉ CJÆyæ.ƒÇ8‘Uý,£Éª­6˜ÉÆzþü.WHõ¤BÑ[dž€Š`Ý@(‡#¸g@P5è_º2j(/3jÈ•È7N)ª§ÑHžŠJ‚µäÉTª,ˆñtŸ*ŸôáO•ÁpÒ„~0£ŠƒÆ³ØÝ„|£8q€Rôw•E;ô0ž¹WjoÝ…­MŒÏßŸŠ¯Íƒ©âà¼@MÁ¬F¸KÒÅZëÕóö&]l´Ôà€Ræ^”Óùžq£dx ©K…¿T!µöC ×IN-ëlvy =éH*Õ|ª'ZD¿)Óiÿúâ„! ‚PÔ_+n 2‹ªszëÀ0åLÑ|ží¦ÎÝf7gç¿ê?"Ι„>žŸ3üp÷ïß~ûgWŽ6KUÌ2D—”­¢Žo3׊Åuí¡.pšÄH[ÔsŠýÓ‰êŽÀ„½D[3žaû>»›\Q4 Ãèa«Uë¨ÉÖ:øý‚2£ˆ<¡¨ÝqGdÖ !Šj·îˆl>‚þ›¼+jn#ÇÑ%5/÷²­ ‚y¸÷­º‡«½Ç»­+Kr_Û±gfý-YjK´Èf³;;µ©™ÊŒâ´šø U—nÚ¢Ib¹“éX£»ÔÏRW[¤cÊ9w·%Ë063<`¿Ôät˜ÁbJ|,·_qÌ1m$"ò*X.öÏ­ŒÀÕt ‹Èå»­±S‰‰%I-ݼèV²ß\®¸MÄY‚ÈÚ„CLv—V²{嵬w¦¸?´™âW“]®ï#¹ ”[‹wʱl Ö[‹ø#S3úž'žLÕÀ†‹]qX9ç%@ÎêUï¹8ãŠuá6•ÿ VVä‰Ã LaqµYÏ57RŵQ¨ ¤C"¦¤¦$fBŠ4u±¦b¿V—lÉ?.&HKÄä"€WSO˜ÈÒàWzìÍ¥¤XÍÌÀÕÌ€cr"woý4¨ßÞ>¾—»íõÇ«_ŸºOëŸ:ʱÇ9iED¼/³¢¬9Læ DÑd’›dè.DZ j\ZŒj‰uðš oðŒ7††8[–ÞöÞÙ*ô×®—©Û>1ÛþÕ‰ãÝâ·L]¬ýÔ À® é4µ¬Ý8ÉŒ1ïý“7' 5áñС2ŠYÙš™GMფ7JÀMঙÇG‹Éý@Gi³F/«{Xo·ëfF¤LVÉ+óFääCÎÑ~Õy¢xd#¾G¯¬€&öÅ=I`\wÿò ®ŠåŸ•fƒ¡*ÈŒæ<ÈäDÓJRks¹Á©_aÜ׎&uyX¬±inýÃj ŽkåQÆÛ×l`ÃgL»OeX™`ËÎk¸¦É•‰ÌŒ™ðžˆ"Þ:⃘¨³s¢ V¡§ê”µ €äyí`iEµÊ\L`Üf¥-€IŠ#Sq†§Ã$¢¼îXG°¯zX¿^‹y =¾ß;}8¸‡GàÿðíjóYBùGîžëhžRV Øœ —9áØp!w8ˆ1U¹z”R —w&…„q»÷BBk‘h‚…(G……xK\žÊ¡ÂP|ü#!ºŽMBŸ×éåÉc‡…'i++b‡ð+ÑB viµyš)½¨7Ñl¦Ex ÖÝÁGÝ÷ŸoëörÂÏ{T.¼ˆù½L”d-«÷ýÏœ è=ˆ¦Ån–·Ö¢x¢¥í‚ªÊ"œö×(?íÄÒ'mì㬎ƒ­:ŠYÞ&®©{^oï·í̃W’¢_.žÞ[G@æ¬uP²xz( &æV±Ÿ.¼¸yÄóÐòt žyšyì*e‰à›Òi£ ¯qMÔm¿ã㦙¹„ xíÀDí…‰ƒ@9I:X { ÊÔca•U;{©£¸(ˆ£ AOÌE>ÿy÷ð¥ÛÞ\}º½{|ºÙ<¾ùlóùzó¥ÛµÐuÏè×OÍLÄÄ•‘Üàò`ž¤ø°ŸÞ{ÉD$IHO5>©ÉV xtK;²5 ätZÜd<99CzŸ8V¢µq—×¾û¹¥ëA|‘ïñ˜7N¶…ÕÄõp?ùÁ,%Î×&Nbk×ßÐM±KDj7»uGôÆç]Øl‚˜ n±û¿·´pÿãò6cÍžô¢Í¤G¦åÖ(œ•ÄÎ.o45'Ù’ü“¢ë4£ÑI«’þu’‡?Þ¿þßž?oÚe9µžŒmA–ã-˜¼ÓqIûdšÔ¬ öŽgäÍ&YÖLK·Êƒ£…tï7ŽIM"=ÿCÁæß0a÷å–·vX•Q±ä\üµ´I,È­‘Š€¥µ«òn~e)ð놵ï¡CÍÚ“ñ8AFë¢Å9§†6¶Àÿù­d ¨Ý  Àæ·jµ\Þ̽ZbA-FFïÍŸp/;s~o¡Í=¤UËÙ½¸Ëyi/Uæä^>Š­bœ”{Ï툭+Õ_iO°‘`,Õ†$Î Sç< ö•rfhÅ/è¼­¼½s.§EY7&y¾ +9¢~ÀøW-âƒ'$ [P™ù]ðeu--ík<»üE’+غg~ã]B»¨Š@P ]å.Ìk7}·8XMÁsp+ÿîøuéÒà“nœV=%‹Øß¶iBÖ[ãÈÍ@§%ö’ŠÃœÞz_íþøþQÿê¼bw?ÚÞ{¾?ý`”ƒ¶‘à··5Ä`i’&NØÚ(BÂ&=†˜SJbøx¥šxU³zpçãÄÌŽ.ˆY¢K¦IÐç«|œÄO¬óòÆBÓ´Yì#›™’ØÇ›hó¸ê1y‡3UMð£”òèqTðã#D?Ñ0BÅ ½N˜0ö >»iêÔn“î4²õ®Ð浞œ‰=CƒäU_:xgÆR²°„ݱ„åνÉ$ûìþ÷8$>XÂÏ«›'$K{§±Ð_÷qÄ â›ÿ"Ÿ?=üqÓ£ô£È¨Û£k?Bè”'ÖWà”¿Ð)ãÝUöë¼ ”à¸~'„mMó¸2bFt¬IÏÄD ”h›}ÔŽD´\`ÛÑ^.Cۭܤg™—V èké*Ê‹ŒŠ—³^y}8žÓ” ô Ç]I÷+‚PF#ñ«}©?„þ÷‡ö?5äE ­¨Æ9'uÇ‘€Ö}ë€ð“½w£y=ë¾5Éá)f¢åßÚ:!Û¦Öœõν=¿ª:$öVp^<Õ~¼qSZo¸ ­wyÐ Mý&*Wú•ÃS;5‰j§½ÊÀxE3i¶Z –_Yu1Ùv©=^ö‹½=¢­95ÁžnIþ©;5I4ƒÆD;¶U€¸ÂpÙîÌÅ­ÝZM²àf°˜ü¡ i,ï%ªz]¦?|Æ´CÙË¡oŸ(=5if \!¡†´1úé'¥Å¼*+‘MÐS*Èš…dmy³HÕVVB»nXâ£èü«©aÃg¤çˆ¶µÃ¢w½¡¢“q­Ãª^úÎê^,'P"¬rbùÔ.¬ î—Эÿ‹»±YV¤šÆ2嵕0Ÿ§'r¥Ô2ÁØËй‰ÝD„³8ÓüOƒ¥•](.¼h8Þ73à¨w%…¯§WäpyÐCh:è -€<ý†|ܵÅêiT×ð~ó°IÌna–\|øÄG'gŸ=sæ-·¯æ]¥ü¼bñTs a=lÖ•6vßž~wSá$öœ¾=˜šs¸F@Õl\WÝ!u*{ƒì1˜ƒ­ð½|廳ژd«ˆ¦E‰•¾5F#ÎÂþfHûJjJ¤ª(egƒ)ç|ÛÛå‰c0§ŽøÜ/Ø3¿`•Z&åZœêtÀs`l¤ˆÚp?#Æî˜#Ë`Óì`s^Ȭ™úc•œ‰ír´™i Œ:éXÂÍ,Úà. ø>È"$càãjÛ` ˜™X0. 
î¤Ãn)ïð)Šž˜ùdNnÈ·„@p%¡1”ãßqËΩœÒôF˜µão¹°±®ùi\é/#Ì ŠX3I×épXýÓ4%AUM&FytTMŒ.fA]ªàw ¬& ªoítBǨŠçFU³ TuZ™ÅÈ1sðà›Â+SI„8`m€Y s*:œô“¶Âç¼äõ~Q|®lv…Ñ,«š£C õSÔ’áÈÞþ©ÞÀé$=7”Çi1ü†dÑ÷@`pZ¢ßà_§Î~ b¶ÄéÈÎIʺÄ@4%ðÌìýxž€oé˜ØÄI:fsRÚ¢½3âÎ<ÎÑ©Q}WO?6O?D}/$f#O\êEžá—i3#eÕdK˜ßRC˜õÚbê!d &ƒœ ‡uÚi*>Ф ÌŠÑZëFÀl‹-hÑÍ]O©£ì9³¢¨ÄW]‡1rÓ×"¼µÞ”‡Ã#ÁàMu€?IûfýÖˆJÆÊ6ˆvÑ ÷ñîëõÑk~Ðáv½Þ¿ÝÄns›í¸ ƒ¶^8ë­*0§@A¸ÉÑîHá5„_ÒSEÈñ . C¶‹]térõƒ ÚÀ¯v aˆq ü6دf‡_d1´ü’äý²§Pta”p·9æT2™9NŠ-DCòkΓˆ«û›ëßEöºãW_xçîÄ“Ý}Ýý#ÝÊÃooô+ûò÷^äüÊ!ŽBgÐáUTâ1VÒÚµZs‹¦cïÉ1ëÛNŒípZ͇Ñåë,Gt> ÔÉvC‰5Ái‰ì£vCÁ(œ&åàŸ¶…OZ—g©Ýµ&ž•ÇF‹Ê𗾊‹m#l :Ëö¤rxn‹!o)9êepˆ“@!b E¥X#+yÍHL–¼í¨ÍßgÑlÏ{%G‡™ T\ÙÎä3ãèS•ò4!y¢²Ãr–Á"kÈoù([«=[…•cÏnVÚˆ½i ›Ý^îo^nfºû¯W·×‡?y¼~ÒšËméqU¶Ò¥Á~Œ¶fä@4œãÑ”±“„ÖªBFíÃï²{TXŠÙ&'‘aj„Ô¢—]­ZBö0ªŠ<*ݤ=jcûd`Mš¼·~Î=*¢¾ÿ&²\mDÆwòÛ·÷W²Œ§=ìïN•߈<WýsŒ<§WÉ®©áéáÇu›’‚8˜´Ñ¡Š«É ŒÇ2º–‹~¸­«å> Ô£äãyŸm‚£,$ˆ²lúÚNIFh…Iœ§Ë—úæ€àW=meœ®ô|xï…^ŸÂŒ»Aòþ·j ìU_3ä78ï=ŒÞªo ¥UTÜk—bÄl_V$ƒ˜íËŠ>y-?@ƒ ¦/-ÑUŒaé æqî¾-³Çóò)]rdùå¾¥3anzM´»nÏ^Q(΃3|N¥QûK!‰VHB˜®öóÝãÓýÕÓçNÙÂú­¬f7ÝËÜý!ØÑ}~þÆŸÆa¥DÁ4+ZÔPÒ®F‰«ŠöæÀÁQÈÂ(+9m¶º)î;ÒÏ(-¢i€£ýk;‚BºÙ–ô3èlÍèÎaTW,y#žÒÍ}ïC¡O-àÈ>§Ñè0§V™Úßô ‹+–à!ð·‚ëwù]V£bî~^¯?Ë+vñ£NZw_é>ŽB`³ðž½zìc°¤Ã×ýL÷ðy)¶æÞzÄOÛ˜fÁec³="Ñô¤â£È³¾vÐr–Å9º¹‘Yw:ïºÒ%Ç='úu§EW=hš_Ä—aÈ|J¸ í#bòaå£á"âóÉ2Ÿ~Ð…­§-®×´í>ÑÓ¸Ù8žó8AIê*d†Á:ÇmÆò” ¯*«­ ÑÑòy6˜{Fª ¨¬rL…ËA5åÞ½1^”A"t˜›-FÄlΛzEŰ¿ ø'«’²Æ›P‹ÎuÈ1§’ÚóôI&ä3,Êùq“»²‡7èäM÷üýù)Œ‚f †gEæ†d°^›—§s­›Á³Z zŒ‡gDbžÁ¥®aâjϤ1|pqitœ›¨@¥ìΫ£tÅÑZ"·HÈÌE\-¸g¬å)È1§†kO‘EVF…íœÐœ,$;™Ö™»c/zÆ´ûõy¡ÜÕ¶)ç“tÔŒ=)ù…»õ2yO¹WïmOã:›xï) ðÎ'iõ2lðúÚÀ€K#¼£¹Ë_UÌÏ^Vl=Y¾€£ø×¦¸µEø¤ÿµ@4§qø¦a˲Aåíf­¹Ø<Þôû/#ö—[âÿsÄû8âß|—%«<5ˆ}‘¶Ö{{óÌ?Èx ¬÷Vt2SÖ%¿±. ëžCº\ê ‰°Þ¿¶—D€ÇázRS£¶îì "æÄ%¤.X„‡!w éËØ¿KQ}cPo sê—LhÍ>® Dcæ…æÝ]ÁkO·ÿðû›Í—Þô²‚/{Ê|ÀžQa°“«© ј3’58ž±¡Hì—¼Pæ“€^m0†y ÉL9Þ‰9=±| Ç@¯¯Mç.ô„sßjŠœ!Q¶§KvQgexù‰¶|Šö,Nû_I3ZœV$5qlVŽ#„%gÚï:êĤ¯>]w/÷Ìßn>í.5×e?ÖÓ†6Áó¦û¾¾½}É2fuéB]ãDO3:qäý¼‚nu–VÙaþª½qœÍ 2¼ÑBñ"Õ®B·Ž<2ú…]…ÒÍí*DΉ ‹ª)¢—6‹ÅîZå¹E”ÖTëÏIsÄ~¸|[ÏãÊû Ú^Ú3$ÈJ_¤»øq+Ÿs·Hg‹æQ…l€¯“R7°µÀlyiÄ=ÐÒ˜m1ÌÙ"e8?Ÿ×%#°wË-†¢*r/:Õc0bNµã ÄAõJŠ-‡¼ßõõæãõæÍ×ëƒ`ï¯6_ä?ösIØëP’ âº{ü¿Ûïÿn#Ь(ìLM¸M†a2Í|µ Ûq©å\–ÁH2“lUŒ oòt} °&¤AúÚV–„ £²³s×ňœ™S3ŒdÉãÂTó±(’F_=ËhÌ©ëÓ¾ØFoNg£:ž“xs{}ÿõîo'ƒB÷|¥í¯’Õœ~0²ÝÇÁ¬-&]s "Æ‚plÙâDÑ5$s;éç¹íï„.³§d5Ë@NmØÜœ l¢ ³gžŸÍ “ÃådÉ$±qÉp9¶¬Ä¦°Ç§œë¸Z¤5ËÖë˜AKõûŸ­¥Ps[¦å¹"z¤É〩p°|×ÊyÃÌYb‡èr• ºrLN³,­`°¾hãðˆ –u¥|në·¤>ÂÎ{Aµ“£9Ï`½RiX¥1º|ìhšÎ©~èù'‘×MêŒGKW›«ÖSÉÛ½ÖpB¹§‰ÊÅ,•äßòHÒªã!‰ÑŒ~2"a1"…•¯åc "e ¦…++$yR H9µ’‹a~@‚E  ì[ÏOÔ4KN°ÜXòj>Ê)X3õ»‡ƒÍC lˆ½š ÊNª…þPC‰gÈGcû;ȉ° a#@\Ixb 6BŒî"ãÿná>9Óu°´ÜÀ+²d}q3T‘Þ.ãFÿDž7Dމ@F5!É3åîO[2>5¿j.) K>… ¥õUQöeC||…/ßÔ'ù-¾ëßßÍ 9+Hñ=Û_¡M„œX 9ò¶ͧ,äx£µoYÈÁ:q4lðÑkMóĉ¼·“'Ì\°Ñ‹Ñrqäû=9ò<éò¯4c¸Ò4Á!/ùÈ(jø*” ’†©†¯rĬ1Ðã“GÔô­K¸/©¤%ÓQŒKQÌ Šh£ pŠŽ3(&+·ÉQIƒ¥•À˜ÅDt0Ŭ±‚~Ž' ˜<ÂÌ7‰g²ª&2L"-C1 ¸ˆ]¦=Á¥â8\ºüôá LxãÆ9;‚.ë%Tá¨u< UÀÔ J v[û÷ÃeÈÕúëõßIþãîîþ ^þKLùú¯·Ûëß?8ÙzOxs½}ùL¬4$}?€+³Ÿx{HlLÏò9®æßômßÝè[ Œl®ož¯·'7`$HÓ[w’¿¤ž±_Ûþ17ïnï~¾ûz÷óúáÝÓç«Ûw‰¬úåŸPŒG^EÊï`NKðTQ‰ç¼|$ˆSÝ‹7…î…V`ÁÅ’AR$Ÿµ © y°²ïBlW ÄWcÒò¼Ú Ü Zœ;H1îŒ^w°«"H¯QYsiZî~A´<¶xîÄÉžž!ýúaî.~ijT<öë'ù¬“v†Žuqÿ ŸV+Ç<ÖL„u6 ó">-uQg‡Äb5²Ç%! 
›uFÁ¥Ç¾H«IW~\IR`½[ÚWñÌ3z)#ÚIJ‘ íB´>>-W.É Šôê±”ä· QYrê² :Ë&$¦;¯%C·(h—Û¥E>\ÞªºÖäáê`1ùdB‡*?§…a21|Ĥ\‚t&»6 Ú rût,jû ƒ²³%dš·åvÏšiuÞÿÔ“ÇZ© Úöe»üÿì]Ýr[9Ž~•®Üç„‚˜šš«½Ù»}Gv:®ÄvÚqj§ß~I¶%Jü9ëüJ…»¤Ìb8Ø‚hÖÆ9ƒ$€º†×9’¦ ôkîÆÉHÅÖXݳ¢É—¥"œ@¥z!͹°cûĈi¨\Ôå¸v/¨jß¾ÅýíŽ1UT•ó§âß4Žªªù ê‡e–»åUÙšÖf+FÏpœÖ$¦fò÷ΣálEã£Xf˜4ÇwéÿÕΣA~UÏŽ>Íø$#üg­<Êø[Ma!–ÄÔØÄŒ ˜‹ÂfQnb?1é±åÂîÁ­]nR2ón3ÚÛr“2j;¥šþÞx´rÅ !HÏè:1¡Kaq?1úÚ1õÅÖÈXîíK†­EÝ…l¡xv±š)/Ö"¢éÅu“Æ/¸O>NìÔpù_Á¹¾ô’ÑõÇÆë¼ß|£Íu#Òùuõ'u GñVó¨')7,q5A ª(¾ìH‘)¤²2r®·ÿ…L#²V=sŒš _Ü×v£Èû‡ƒ·é^YSŸ.´¥ Ÿãp7zÖVügóÖ¼™?î1w …tF³K3 \SÄåë)¦^ d’ ¬Úij¢¡½9¹~Tø Âà-ý–Ll-‘-¢Û¨Èr+"”Tà+jÕHåÈ’òÕ3" ˆ-íÔ)`¼´³¡´~EmÁñ;7¹Àˆ2£­ºª¿9ºPb.¶kò72®P?)r@/X?x)é|x-îDÞ|üdÃìòþîéß@mÍ”*|qeÚG ž'Íi!àå4ï¥WEúÓíïûßëÛŽšýÒ ßFyÒ(=£ÆD‰ÀÉÂ]5’?®0£ª­þ“+`\÷›ŽÎúϸƒu= z!ÍÒŒž¼§±î³B.Øõ,ñÖKÝœ—8d‰w^-›WyçurQçµÙ*ƒ<Ç¢0 TRY˜$Û80£çˆÞk=6%t>\:cïW/ô¡ d }2±j¤OnƬ Æla{78}mܰ*_aíf.ÓŸÌK¥‰2;$‰¥EàÆVrñ"›ÚFD²¦ ˆ[áµ/èDTv_0Zß·³¿VÔÿà=_c¼¦ãõûÇÇßãS[ìîBÂU>éA¬‘ Æ{í´ý™¨ºT€øAäâœ)ŠÏE¯tüé¡=G5u—vׂ«»kg(%ÇîZ¯ܾ z¹E2U=FÞ¡ã^gÝg6Öåñøú ¥4©ü{€5-òK¸óÖÑéïÿ[–¼eåW–åî²®)çSŽäÀs+ÈG-ÕϤgµ$_’š™"°py#r¬é"•ìvš˜z;58õÍ—7õ²6R•’åØÔo%ÄXÖýUí1³«½~«ãÏ·H+ŠùƒÅrCbwS"ò$kzŠý¢ Ç‡¯7oïMíæ>ùúþÕý6é1´Q¼­†|H=Ï›*jŒË­›ë©4·í'¯¯ L’C[¼›¬¨E_çä5ã/™t¥/dŸ>_ 8"|×c³ømé©Å¦géצ°D«#:ù=ÒùÛð]¯¬q%È·Õ%·“ÔÅí)V[ò6‹±.Óp«1M…å§´™ìjP¯9ÐÛ?¯3ÁeœžíkìoêiGdÉö{ê.9O®³¥‘†€y+},[×è#¹T¶®œ}Ëx%Ïãº=µCöriãºöö%ò¾Ëó-°«^˜Ñ{ ‹5U7“Œ Y`ÖäfZ!¶Å)€†aå*ÈÍ¿•â¦Úß§/iç¾Ô{=Ü)—¶8ƒ×úñûÛÝÓÁ á_²ŒïSöÁ )NiUK° "±óí5ŽUh:¬Œm’…É9¬_OÅwg yl§‡D¶;<€÷—6Ö!®mºÛÜw+Ÿq¤R¿yLCkŽj"âä[j«Ù™<ë­E…¯ßr:q×f–èh¤²‚"T‰èñ5gҔʨD OSvólcïüjC"v,t†D†ð Ôqö¼O"©bÁb®a%×<ÇI…b™Y¹FêÅR‰kÙʯ«à™ ÐVž61M#vð®ßÎê'ö}ÐCÃ(ujD“GºàóþÝÕã—›§o_•ö67O·Ÿnõ'Ÿ Ø÷÷ ‰?5…HbÒÚMû …!î¶ N$zjÆ6í¡W_øsŒm(T„¶4Sé½/šJ ìÛΜ8Â;¶áó&ߢ–Ñ1çEj¹B‹¼]†c`^u%ƒòV­ÝåÞ·‡[{;Ñ ãêÇ×§Ù_lRCy×Mê -LÐQ2"eÖø³u ©Š<#µ.¡FãR¡uNbÙÕ%Ÿsu3b R:¶Îé-›®‰ú%lØ©§xH‚.íÒ†j±Ï|ã«pÙš"íîN³Õ.î²Ætv³šFOeÕȖŦd«öÜ¢Rƒ~}OÇ»Í]GŸ".æ[íT¹·®Ø·*ñMÑÒz½yÈO2Ì®VÃ8=£MOÕ2Žm 8õèÝï˜ô¼–‚ sâ¥|‹\Í7™Üv–ã,ßxrú·Îõo/ÎÙi³›U-žBýµù®‹ù'öƒè“iŠÉj—Xؼ±:a‡ý¼ÖOPì©j jƒtí2±%«‡»LlcýÕ Jâ"ÕË„ó»Lv—ÅìfÄÙm*6#’iY€7ü}ó‰eËL¼!'‡à[!:ÇE‚ z²rÌæ¡—j=W MV‘©Ðz;•¥BÈgs–׫ÕXkŸ&µ›\·¿e‰JKg£N=u¹(5jèØ)×<åÔSþ‡»›§ÇÛM_ÄÌ™­FÎ Y ±Ìó@Xâ¹A§dKz¯T2{ç&uûÒ$‰mMé2™H±gØPBl5TêÜ*ÓõúÓFÉòð]ÿçîYPöÿ2ƒù 9NHØÞ XAÜ»‹²Œäk¯T""lØ‘ª£ó4Ab z¸;:·O`O5Ø¢ðÈ]nŸÜ±Û7ØøC>zÒüÈI*…ãê¢h~|Ž‘»«bîMnv™Šf–"è©Þxýù9}‘ m- Õ:ýQB Æ©9Dfß<»ÔøîµÇTùœbVØ4er U–RQØT"sÉ/4a4ôÐ$ @6t¸k †>jB ê, •¥+ViLLe¶RØGPçØJ^0 Dÿzµš ,ÍôHáÒŒ ®'H„ 5¨ma\†?®EqÒˆÜÇ2ߨÉùµçû‹g{çf7« îäÉsmíSÿ>¸,jX:ù¶û„ nšÑ#º¥k¤˜m°Üo7|žã›‡)Ep|ÎŒî/îsftv³\FÜ˰PÛR”³êvöh÷‰°jS㞊ñ¸OFMÙ¤ùËó÷…¶Õ³Ë­^{[}]¿ã|£{¼þdÿ|ŒßSúþùsf…=á¢öCÎ4Ç× sé^û!g:±ìÞìbdÍ—‘S¿£hkÈ:Òµäúžt¼?JGÀ§#j &6 ÉX6mìãÙõ»»†”sI³Ë”Ó‘ qVýßÛ|dö‰EùH %t¡¶½s¨Á/Æ®1ò†Ó.¸ØÝÕ¶˜ñ¶°CÑc…»‹{Äñs2ÁÙäìjþ.jik©©!LÑߎ1j¨ÒïïìÁ¯îðâþåíí2ÛrêR¼À‚ –Ÿ @|hÁ7›gˆßOnâZxÕ¿=ó]€‰–à o%+y‰ÑÁ£’|O©Ëbw+BÑb£Rû°AhÍX1”õôR†¨â dT’ÇÜ3ôìf6… 0ÏkŽ‹-6E4p‰@KlJZyiÏŽŒ‘3ƒôÊ•ÒIJh2¨Î¦P¸Dè|4,´[[°ù|sýã«-Ÿ½y|z¯¡ÆƒžþÏy(øõaóå8Lf×&/ýý™YAjˆ—þü™èWC´á%¢.ƃy©NÊ©çÞεvo8oöhÕÞÇ{½ÓÃïž)¡5´|jˆL<ÍLï$v7.îå!õ¼¬ø¸ƒ+i…h¡ÿ$™&â×NÇJ¦ùÑÄCÜ>xU¤SXJ§”Þœ}Ýy%èˆéXS&5Õ¾%²V- Lƒ~‚׬ÙSf+¡Ý8‚p]Z}³Ï¢AÙ_ÉDå%…=[zCÒowôèzÞü}±ÕQôÕP踆âÇ,=ŠæËÈXm­œ”ø¼ÒëeUsùòì6oºUă@zÓÊ5ÿÆÂ*Ê‚8HõU½™‚sU”’Í0aZaÉ^¢âE¦›‹ö("ï¿_Ý}ûzsÜhÔ†6¤fü]7ý+41uAÀ‡$`oºaTGV‰h=oèªÄÇCÔ*èW¼ý0`¤¢r§ltN 3Ôzj Õ|½KgŸœ:õ©§¬ÿ:¤t4!cTŠPz¢¡Ï\ñìS¨wê‹,ÅIæç}\d{øñƒ›(,b_Œ~í¸FïŽ2÷•y†ºšrÝ «×¯]¢kã9âëíæfYÿ?¥Ï›\s"56sŒ?Òì!Öƒ£æ—ØŽtúqVe]XM‰[¤.rÐ:?"È`páÇýˆÓJ1FnQÕ™UVß?HR6l-Ñï7÷F=’}òëMÛÎ%Öü¯Ÿ%±ˆPG·h²œ%µbXŽ$ã¨Åä:_~EI¤²æ2ÍD,ìe’hèM‹½-Lå…ÓÊ%W±†§ãˆÅ0!R6•×Xjh¥Uª"Fª¯´Ž¶"§Ø,Þ'O¸Ä.HO9¸è̵Æå~TÆóÆ¥X‘Æûx)h/ß!¿~áåf5i<ªMakƒ\2ÿHº'Ûoï}Ë3§  àűüø®)NõÓÀž. 
èÿøðãéfÜÜ]Ý_ý~óxæ¯|_ï¿ç„g ‹êgÒ5½añOŒ-Å÷ï¡^ÚΛÀ$‡¸¨ËÖõ”ŠžXB~sñŒZ#6ë±mxÞa‹+"ÀÄ‹´WîhÚÑ3nS\„f±»ãÚÀÿ½Æã$—#:DzÌ6¯»Çxï!Sbà)rb( @V®1®+1 ú‹ ”o¾iÚ¬Ü5ßwTd°‡»¶"Ã*‡š÷{GïšË «ê\¡A4¦Öø‘ÎD7þ5ƒCš(Åâ¯Ui÷ÿûðøåMŽð¨¾þûÓãm#­G–UÙâ®PÿQS”œ„U÷F=êüñãáéêôøÃ«"œï~mýœ¡–˜X¡2ô4Ç¢CޏuU3áÏtæ7S}I4kr©7†Hu¥þÆžð>ÍÎ(;¢°dêDCjˆf (}Z(W¡'óI¼CX½yÖΫFz©ý'•¥&|Qj4ÓÉ> ¾Òm„Ôè±5^t†JM…KaZ{\Z’ìÁlÊ‘¢ÿЏÐê‚»"8ŒÞ~ö+¸¶U%$®^°VMôÇ2¦ ÃÑZ¼Ã¹æà.ß„æÉÁŠ•ëbTº¦+ñÐáJÔ‡ P}X\¼æêâµ³p¢šâuÒÿ½ÏBtÍnVS¼VÑ´N@G—µå¥¯ß%jïG¹n ?$Jo†8¦Î„(pdÖp,ÈÞs±­¸1ü@sl' Íuáç9SÓ¢þ 8f-L†ûŠ«vhÞÝÞoqH7›¶zDÐó­è(Àvȵ; ÃÚeïRsÊñJ†qóbÊ¿ Q|ª‹ûµèg<ˆR$äšgW2/¦¤¿ä/îa—¯1/}fEò–Q–.œr0C'¿!UE€êäë·Ï(ñI.1F!¿ˆK|°ífÈàWŒ“ÊȪ̧Ùº‡»»÷·OÎvÐ}‘?þø¸Gºþx½Ù¸ø1AÛžãàNs#±m±Yd#SO·:kÞkí©­ ƒ8lÌK¥&ÚS-•#tÂ"«ÉvvÐkF®s^1LÁ6¤É¾ ‚œ[–P¡¹âiížrØuƧ$‘Š;?HWÙÙDõÓ^ È >« %&¿ˆÏäp|¿«‹Œä—z_;_íjý\åCŽOû—Õ(ý¬,›wò]X'H’ JsÃQ3ùÏ<ç4Óþؤ†Ê¼‰§Pi#á¶è‹]ªB;h‘£×œW©ËóÄ6ôÝR—b!üêÈ FæÌįñ‰ ["\¤M8[÷éÈ ˜&^¯àº‚™Ê‰z‚a׸nd7ý„‹Ô׿¼N•%Z0°ðå1aMüŸM°·7”ÕûÙÕ*j´€0Ù–›X=°ˆ6— ¢:YÀ8Òˆ º ±œ5€NxqÎ@cÑ1¯"hØ«´Ü?ÊIèì2àÝe“Hëå6eh,ØêŒ2V ±^¿± Ól©I54Ö I€Cx–Úey¤îÅ/Va©Ua±f|Ï€5*,¡(IžÝ¬Fƒ“L ®žÛ4ØÚw8.â[àÔ…ÐACQ¿x´ÃV—ÕñMÕYEi±M\8á¼»xLy„†—›Õð-úÉ6@¶ð-ñ2}“ØÁ7¼áë/f›¯ö˜ˆÑ—ù–Ü32õY¾IÌéÛìfUÓšÙq¿k¶–mŒêb7°Æ–ó‘Æ3!ê§ÑàW$ÖXðîêÛÁÆIýÃW×w·÷ï¯6_Ú:LÜ»nb—uÄpñ»zÈ„¬ƒÔ·#W¨«I9{ˆ“½ˆU€NmÛ’^…˜nfÔBÔ´9ŒtaÅC$YEñÅ­Žþ,W»¦©?•hw^V>Îpž®~<}VöÜn¶?ßöºjiþšÚˆûޕ֗Œf¨]¿2¶m¤~"G’T°””‘cöà…8Ct5wðÃ…u“ÜxX]zdŠ«–¯o¾}}øóî H)óÔ¿hRTR£¿¦ž’ëYq-³ýÓ¨§ƒ(8LkMl8…²SMàb*&‡´ß%x ¸3jP\=4ØâA®W\ƒ- ¶E³_qõñÕŸ j`‚ËEý¬$ºyüÇ?ÿû¿Ž¶#E—~ûñÝÀäïnT™í¸*j·J{Õ@s¯âáû×oOÿ¾ÿÇ?Ç´Øep_aszéB ½tgyÞ4çelÓÜÙþ—þÖ[“·wJ\•}š3»~ˆzö‡t>ØU«„Ì*D¼ 1@˜Ün-àY³£&%I,ÄòzQΕg7)×)‰'ˆª60/Sο°¨J ‘&[¢¹¾L©"` ç KD@sêH …½&aùÞ_¢ê2%«#ò\ Ù @I&btYLáÙÕj 'LSh¨wÙª“–è.8êQ^ïÔêǰø‰ˆª6?9ôÏû ºŒX!Ðvíäø6»YͲ9ž,UHorÌ¿‘E ±òfàm»=åS§f%T}îJá.:8äü¡õ^~=({ C°ÇƒÓÑélM^­üéæÝãÃç›_6ŸÕk<þ±»ÿxÖw8¾”\T)ÉuN­ÀÐ_á€"åŽNyHÎ\—X|Mß%Gigî‡\gTŸcò‹îH’@j8T Œ^÷ìòô%V—ÃjXF‰`X@ ›Û”+ Þ/¬²…&¡Î3 5NT•Á"bCD”&VC®Šf´¾}<Œ}vª_¿}ú´Ùþß6opÚ"éÍËi*xL±ä FÂŒÜ(•½,:8{kM­Af9gãâÚ Ááz©ÈEý.þÙfïd †gþPc0!ê L¤.Û+°—Yƒ!u{4Ó^s’ŽÞúW4ƒÀ1ê^ûyQîåžáÝxL>§¾9ž+o&M„œ,Q‡îO2®}pÉ®ë®8Ÿ|,‹{pþ¶‰,wô–î„71„ÏïçQÓ¤ *Hêy ‚ú]•ôL Ž0’—e£ÈçH­`šyxæè¨â@‘û`*ªæ‚'"ê‘n% "Œ¥Mõˆé‰9Y´E ÒêŽà”Ø¥n 8]eîx¬¢|•ÀÒ:w|.FœÓj)rûD”í#x…–:† ѧ–ºÝ¨™»]çØ{·fNoдø¼ØA£UtKÀ641ňÚ:ÇÙL`3EÕ aÍЇ1º+媢٠–6@ˆÙvñ½\ztx7°4ç@l`'‹˜<b] !0%dB¼.=ÜKÙÍkð,Ös*jvãŽ|ò¢;v.K¿¹Ww¬oM|‚Y¶¢ÙG’¸¸#ÇÕ J¢÷§³¬4¹7¾”@pfÚyàëóÂ)`»kÃM3¨¼†Y3ZѬ(´£>"º†³-´Sw¼¥ã‚O;.Â);Œjƒ³­ŠÅ½j×Á%^]+g!'‹)7]xGvæ„NŸ°¨éÂÛ,—„ÿû?ÕºlŒêÕÒ#`Dß(€o×pñ±t-·“âñ ±ŒÞˆâ·õ»—,‚1äÐ{²²ŠsiOqJ<‡©B}¸Vç¿Dm-Ýâ$iXÛ˜õ ß<=?~»}þ¦Z{éülóÕrªmä!ÄHå£F4q¦¢²“d㺉Hzœ5¢DŒcZRo j¬ž…Y…–zuÍl…¶Gû÷ŸªÍuÆÄCÍ)Êh ¡d Æ–˜ñ÷Òèb < …73žm~+#ÂÌÏîFZF¾i`¬ ’ÆŒKk£¼'ã ò’Šz%F±èL!eïŸö+«t}”¨ˆpuµa Ó—ã!ýÐf'þ+ƒ0eõ{¥F¼Ñî³Lo õØÖd„ ‰<βa5w¿È>Ø·ø|"/Þ-oIJÕÚÈ&1Ô(–JUF¶ò|kìdiU‘ h”ó—ˆtg.Rœ$ßRAÀÖr5w§îÚ×-ü4ü&Û½®¡ÙÃç—Qow÷ï>jö¹±Ú^óÐ!êuÛðÎ FC™Ê­j¤:ç¢Yˆ„l¹Á‹àzìv}ggu'3¼„ä’ R»ÑˆÑ…´ÔIЯ:—šRsÈ!„L<æ¬>H^èû/é1H¥xÌÖšîÉb*ødH4”>àC˜sÝhœütðgׇž>?>|{¾ŸL6Ü]¶>^øÈåË×s3œ9£·k â>Í®:KlûQHŽKe*­¶¢¯íĦÕÚ å ;•As"`u© ‰€&R„SyÇÚ>HÏa‰J: ¹p÷²]¸ËFêû•Uåon Ã&é#² 9Þ«®uíÕ 9d­ôú¸H×àZ&/ÛÜF´ž–…ʦê0À ÆIY¬§‚‹@eƒW…dOÚ÷K«Ñ¶Kªm# ¯Ý¤2ÎÂýæ“t{„'8<†ûüû­]zkN`:ÓßxÕÙhp·ÏcsÑ6¯/›·JPób!Ì¿)Fç#¤$íoŠœK %2ê‹],Ò±• gýÄti5ÕÖc©Ÿ>œô1}FQ4lVÉK<ûµ÷÷ÏÖŠöóÓÍW“×÷§›[éóý¾]÷æÍ—»-iÈßÿöó üŸÿeU ¿~ýöüocÿðž¤íùíóã:þÍ.FÜ|þø~[îwL×`Úüç¶E7Fë<˜@åógß3å4¨!ë&…v´qÌþ]Y–”’† ±úÑÆNŸvH†ÀÙ°q.Ÿvˆ¦I”¶÷²çŒw»Ö,ÁÒd1…ì;Ž@:8î˜#gÆs:œ¥m±á¡©¹¾l\t\—¯h+üÌÌ9€8r¦¹Ë°¹®ì»Žg4 ¾ÏD®Vãø”ÿ¥wlW {J%áy&eQÛ¯NØ"€Ü|¶ˆ¶_=G¡69 ÌÚJÁÖ`Ý6îÍ)Ö,w‘Ù¡@õÿëáñ·×C¨YUîH!µ »è1TØ©©xMý$‰ƒ¹Çvuj8Š3®Ìˆ ââùІ„ñ2ÑVZ17OÄÑeN\ Õwf½ŒSË ybt¢ÞœÇµGïÁR #ÝoI±‚ñr_ØnåYŽýÊjZìkɧ8Ko<ÞŒ-ALSý ˆ ƒÑ0züÞBš‡½Ó?nîÞ<}xûðæñnó›ÿqûáþö·ñ:ÎîV¾~zóå~ÈÆóÆîãOÍZ«Øã°‰«Ÿ’:#çÖ½ë\(é~Á—Õ|°.º З¡ æÏÅ'bí4$Lá \ €ÆRuáWpѦü"Àxð¦e°ãŸOoUç s²æ.†H-lRCërm»y®R¯hÊu®êäu7üûÒ„-àñ!ÛD"=Ú5õ­9ø$éÚp'ÏÓTÌqË¢rÄÿmgfžÎñweO TÓ®IàçÒÉUÀ­"²,о}ŒMm<ÔU…å ìœ "ò‹[S(…⥈.œóUB¯+«9gãoÖï„ú­¤?8,¸·ÐG8lè¸á舂ÌÏ™^*y 
ÝW/sKr"ïÂ`†¹T Mä}¤Ëº IrÐD?/yóòbÕœžöª¬yP}õ`2ö$BYà‹õ!ú–ÑuI‚ÄDm÷—”¹¿Ì¨/ø!D)WÝ«ú’+ ¯Û®óÓë^SQ­ ƒIäpzÝþ ‹®/Y #¤ú ­>V°Ëæ 4®¹èÃwuÍO°¤úùP¶ _:X¶•;È Ú/­f^£ÚƒþƒG¬PX1+¬}©›@‘£º ›ß&Š~2Ëä¬`Éq1߀oj C—"¦Ø¡üäÂm—½·¥™*Sšˆ‹Ä ÿ“’e#Oùv²W¡ô¨2Õ×d¤¨s|‘Ù»e(”°%/L¬ùªÿÀm¤yóPÿ¦øà½Tù7¢y$È™ÇD@]jý5r¾Þ:Xì¢ch·މcj ­Ò7´E*ÊÉh5:dÝ2eMjÄ­jº¬I[+eÇæìSŽTŒoÔýVûMž°,R¡A“D_ï6zX8×R‰Vñc”M‹•ÚIJ>›oš–MÂTRØÜ¶pÈ™ÄdeUµÉlE¨ÁVßMž‘-´Bo£œÂa|3ýÖ áÍ-¤âJûis«!Íã¿þu~ÿpûeæh|žŽâœ`ÍïÞ£ÔˆÚE·njXçè#Àµ”Ó»ˆìØ!t™ã‘Csæ¿—§v¸ [ãä°Å $ä¥Y7A¡Z ÚõO– g"­.P"†§vLdþÔŒƒGs*Su½êT|€EöG¡iöœ##ÒeZáZüÜu¸ÅÞŸÞܵXóK2'ª¼²@Ñú0;m*«.4npˆ‡Ž{ú# Ö'ƒ3æ®4Ëú’h¬Y_lbíL!¦5³ª*»Ùšð¼‰¬ll©hlsP7M[E%!>òì“ßh2µà V3§Ÿ'§ø­Õ«ã#(t¿ÆE {$Z¹«Pp±¶`‚´»Ïlî>¾yÿåáISÖ×SÍøáÍl7··›ç‡Íþ=&±À®egÓT6ÙýԬ勀²Õ2S E¡Ã(èÛ.,~PÅôÂ.3rH!±+І#—®«wJÊ6 M´Ð‚+ú–Þy_}ðÐ XxåëéQ¬~[~p=mKÆ„B»ޏÖoîYÍ¿êªoª|Œ[Ñ„ÈAÿIèã@!9Ö¯žÛ©Rï0€_Óa£– NŽ!ªã÷žt4ò Pâ–Pœ\þºt/š×·D ³@\ëhÙäõAü¥øÄuÅìûôìÕ‚8Wq܃á-XpN­ê>œpX¢ÖˆÒT:|r»)’+áªúÂû·ºK4'{úE¥´‹uuà ~¯4»sí*(¬5pÍØ(bqU˜Ëy ¶ 5D2;UÏ `Us W>G+Q/†Ç‘²ôí‘´« ƒÍ±t2Y{lA’•Õ¤Ê`5½¨¼ü™âM¡¾xêNñTíèQmÇÅú¸x œÓjRc½8ñ®¬UI°®ò Óªë.²s;ê¨$Ó£OWÃkÚÁ«U`§vE”áURlhyQï•@DÜì)¾‡Â»€°Ç’Ë€lš²j¤é=Ô€,{.¬$ΟNÓ²4ŽèqÈvØŽšåÑú(K[NÄ#”UÅØ(i*Í^§®£˜sp›NáVw` Õp»VÕòQ{[Ôŧ½øåþoyOq/ðM.5qd’±Å}‡–âª;T7—-•ÐÇ \Æ‹{;‰FÎe¾Š¬ –qPìâjÕnvWa¼*,#l ~`íÔ˜ 4×÷D8U…4ó(¡^䕌b•té#0®Ñ*,©ôi* ð…èûÇ_ÿò÷¿ýJ˜b‚hÝÿB¬[•éæ›¾à—7ŸïUòã ÝÛOU*)«eÝozó×›çÿþòë_ªYš^iü>¬~µZ¾Üá ]Svóèšþü„·Éû³y›þü_õ]ÛöUäœýjDÀ`Á ¸>Â7º¢‘×LÆÊšI]é`!p…¯‰Á‡ËÎÆVî³ë“¥UM¢þ^²›ÃöŽéC²U“𥠠ÔBT軈WúïWwJŠ\§ÃÈG JH…\œô£§óšy\žîÿoð8c=V»q¾â8¥tÔÅ8Rÿö»QËZÐ;²«àÍðY^íÞ¨;ž>ÜßÝ<Ýê?¿}Ò?í?kÛþesÄŒ¼wâVâÑX*ÍÞ?hœ¢æ˜(å[¤øöø¨f¶¹{»1ÑoÞþ¡âüé×8r£1ž|@Ÿñæ?þý§Ó¯j¬>ýå ¬–9à¹Ï¢=h5¡ë#\ÈðM›F|=¾˜Ô0n¾ÜÿëfRNV'BÎHŸ¼ƒs+ÒˆÛ{Ò¶+Š®©'Í{ð¤·Í.²Jõ““ÓDLÀ £È˾Q!ñ²o4|‹É~1å ¢Î>xƲéEbã^¯÷¡},ARѵî¸iqÌT;[XcSà"•í ‰Šv!’mˆ,­"fÒ ¬éu5ÖgäŠ+?61KVÏÈUŽpÊÍ;j"iì/?RÇàG]ÉbŸ‹$î#€LäCi^ä³èÇ'qÓü°gÑo_ zz`Ô±©Wæu úMu¤K1*ÔŽËÕPcH Î«P_ª«r ÿÕ1*Qv^êdie¯Å* teˆÒÀmUˆÅèS:…(æ4ØH3„Ò]bGˆ"¦+`ÔQÃà.„£¸ Øð;¾}K˜H6oïîÞæ’¸8Ê®ñŽij1 ³!ïï¸62F-ãªü–Qmq𖪃7´i #íS9xãÊA}òÙqU¯+«ŒÝD䮌€ýùç%¤!9¡]cÖJןÿxx[ºK¶Ô–—ÜìjaÍ}âARKm {k›KA?ÊèBAÉ( Ó*’ ÑÇñv2US`J9²&©ÅÙC¾„äuýZðì¥ÅºZèÚ» ÃêÇÄÑg2%;ÊÀÄÇm-¯¬‹(=ï.1“!ÁIô¡›m[rRu}9g¿¯ªCìO]Kè‡dIטØaÄÈ÷ö¯Ï¿<|ú¼½nÙ¡!¬_.Q*:›QÈÐ8±£ ¡¦ö‘ x†ÁÙÀÉbŽæ)!I;1_·—Fð4S•DãµÁ“üÚ9œŠyë~°ÓÔ¤qÒËpˆ³Ý$RתêT¬©ï&™kj’]ÿ¶qK´A#¾vgž‚Øã›§çÇo·Ïšð5Â(HXFwÝÁsÌZk²`‡–¼3bê…¥£úÆŠòWÀâ—çlÇÈT&ÐÔ^;`¹:š2ÅÕOÄBðù1Ô„|‰µS®ÝC’AQt´¨'ï ¬º¯[(À]B/þONÍðr?`ê#o‘CÄ2Œ5‚‡2Œp®£w¢‚œ28Òí»kç³á˜°‹#7Â< Qâwï±ß™ä<|2JÀÙ—㺩…HÃÄ;4ݤï0abͽ@ýÕ?³ŒRIöh¬—«Ê=íÙ×l`V6$ULA‰DÒrÿ´%ƒ‰xG â†À’ª„‰´€ªù0ͳš¹(î…QeDY KƒÒ}+"ÏMËA¨4©`ŒJQQ¿êd¥Pè(Tz#@\Š}Ùm'Ê,N¤ (af‘Á4Ç=+OÙT€6dØÍK^§¥ÿ•FK«K+Žf˜.’²Ï2=ù¿ÿmÔåARÀØiN4eõ"±u½6[ð•ÌE晈H5™`„X|Ù;^­e¿6ëDêk@[CäQŒëš©1CrÆÅYÓÃãi«Ö·¥?÷_?ÒuG¡{¼{wÿ|ÿ­„ÓW–,ñ^-¬ 1` ïE Ê{Ò^ßu†¬õÕ)Y’ DƒÐm‰9é~Ѱ#´§‰*"“%Ž€ÚgÚ©`Ú᜕&ÁlúYe¥±)hÅ´ûec±Æ<»MƒmaÂ?”ÎXeÜíÒh^ùS³u'K°ìv«$!kÇîÔ-¢VcÚ6wÀ4ÙmsµÎ}¤éW¥"Këðx³ÛnŒŸc[e‰'Œ(¨kØf©uÇ…SW³ ZÙ–‚ÏÊ0ו9ú{E•mfãKÊ<»Y ÛBž€1éI 6?£Þl:Êæüi‰ŠŽàuìš¡ÃdšúñïïŸÞŸ=þN1žÎÿýÝÇ/¿vmšp[p¶A'ËÙR¨‡tÀßZÕ®ŠÏ!Ç=ív$Ö·î_ –¹õª¤\O»w·Cý–ÕDŽyÛŠÙŸ7ºé…woï퇼Ÿõë‡;ïs}ü|ÿqasJÄ7[ZL= Á¤¼æÿ—¥½Ì¾dáܪì$:IPwJ»KU«‚ZÄžœ€YñÏ)ÜÚªn°ë$±c´ï¼é_cÕâÎÁ«Ë¨FY’Ž–5u™Ä â2ªÖÖµт˔Vq¯wÁf8¼^Õë¦Ñ륬GòPk×µèèuŽÉЊúAYz™ÇÀ1¬Î!¨µ®Gº›ô õÝWj;~q-vìoÖRÖ³¯Ò(œb;Ûd»¥»qòZÚr’%šÚôÕn¤P»áóîÀì‘o¨>·2p”ë¬òËr¹?ðx›zí†yò@¢Ó]=³#V•nM>‘9µç…v±L!¬íªï±2¨p£þ’ÅQdÖ¦®¿ÙŒZ¬ …B)‰›Ý¬AQ`2ïiÿÉI `vÆ¥@v¨O\Àlôæëûƒ0—Þ¶ÁpOüXXöjìÃð"*B[ƒaÛ=k aøýº–ÃÓizÌJ˦é×ýúl¬žEóâ±úu¿~i¾ÞÅ<šîD¡6 C¤®,WL 9Ëj˜nF£fqÆ%S¬?8‘˜£ÁŠQ³›ã…+/Wk´j¾b…”žWµOöù‰7¶ONG,ôº'ЂõJ?4T¨·@ûxÿõã—?<¿Š¿nÄRX¸¶`»/›¯g!Ðå¸!›}Ù*ã× C¸Z¥4å€7n7¿öúï9éõQ(åM]¦ŽpÔ~:§LCÛ%.¬¯ZØàíBœXRj»`¬z/,§ÔGê ¨”øW'$ºµs£ÉödçÓ=&óSðÐàÂËÐáž&D2à úË{ Öüä –ŠXÒ8"›`äŸFì‡÷”>lgbØÖ°²tVÞá+¢B'Dþu2¤;¦‹)oý) ! 
®W )ïÇ9_Ã3©1"߿ڋ[ÛQÎ[›Q#2¥s„üàù–>TòÕ2º¡ùÜbP‰#ä× À–ŒT ñ”iâ 9m©>L4>Þ?=ÕYNþ²÷Û¯#¥{Åc뇟ß?üñðñý·à_ï~µ¸;ôº~+ &zÇšÞi¦»ÿúo}„eˆb„C‚:cê&AcèyJ‰’ôÌòŽ$æ(}ÞÉR"Iõçgö™õjÜ !_jf”ZdŸ9 á­:ÂÖ/ÍFgŽç¸í~e±lšo2º›Shq^6·;Ú li"@bÏ€/eMJ›âݽ4Þ½ýãÎè:Ì»g™Ôþõî/}x_hYpÖF•å!î#kO‰¤™tq¹„W¡£–•ãÐä-w©Åq˜ƒÁjÁH³§Éf4á9\£)‰oí9rÖ­=gÎ…üÒ9•%‰–=æ¡ûÝ"¤Ážãû²O%)2DJ)q¯Ññ#âxq!»IsûO×vc3äŠãÉT¡J` >_~Mó÷×.‚ÌîÕ2˜dßh¿Þ¤M±1 Bì˜EÌJ‘‘uýRšQGAˆ$´üù÷RNû±¥vW~ÝAÕ8{”‘‚òj^chUP¥‰,WhÀ…²¼,ˆTyÍŠ³G/7kâ5NäØA¸DEG°B”H_ê ¡gtC(Œ¦Ba.MCÑ/³i%V9…¹Õ9»L}ró}Dð'f~ªÁÁÈyJèу6ÕÞ×;Ú¦Æ}%¸š±[ZøÝ·$í7 }ëþ<~÷;>»Š»XÐöœ'И¥M†X«2¤P²ì3J ©îfGðO!ÝØ¸"ö ÀE6êcÈßÛ¦î’D˜ƒDP®Bì"4IU‰ÀP†UŸe„HØgx›]»HpÌ1¦”¥79ó#’Œ_o ¦ì“r¹½vsR«ùO#ÑûÇŸþþÿüÉ_U2:ä&‹qí¢Þ=îÏE–ãøçÞ=˜Eþül‹ï¦< HüÛ?þöüïÏ?ý½yøÊäàëËKŸ>î@Ï×<ì}zÿüøááéîÓo"_Ï2CâeÓW~ÚlüÊ\_X<~µá§ýÃ>æ´,¶ÿÌô¦[%jVÒT"S‡³ôðNó®XÓ3Ceš±.ž ã‡‚T›?¸]7n~ÕRcÃì*õˆÉΡ ºGÅø¡pÂ:d‰ŽÂ Ed5¸(D¯‚܃Œâ{6­OwšQ2}\È7O4D®ÉC†\¬GïÕ¶˜Ò$Ñ{Óù2¶t*§5¾Ì]ĸU4kÜ~]ðùÌÔ!j~y*FÓˆBxÓ̓½9tP,íKÕl‡Ê]ëƒ;éÖs–`¨0CLwšÌrÔªr bé‘JCp¨hŠŽœé©aØÿ´¿^êêBÒdñvZ AŒÍX¶9L„Èœë|5CÜÀW¼°sàåfMæuSÒ”Nw(Ö =&!d¡vGj6s„û²øâ¤ñ¯ÃaÒ7~^ØÀfºwwHoïŸ|Êu‘iFÅ(oºyQSC\èõsÌö ±®¯’¤|]_]v‹Å9EFäþöÕÑá¹Ýî‚'4—%ð¶ý %u%(n?RÂDuIA^L5I±à¸´=bN«Û>Û"ñ:Àm%ºF „Õ´%®v٭Б 2iôL ÎX4ŸUÆb¤rïÿËÕZ|¶òäh÷a™Š iTYã}-ÔÚÂûZÒiÖoÛaRõØ$õñþéùñ÷‡çßM ºÐÌÔ›n’7èŠÄ®ò‘²¯¤Z¯*­€f18Ò¶d-Þˆ´ª*‹Ñíñf-/¥A'U‰‹4E#Y¾Žm= 6À·dDð4rA¸Çù¼<™`ÒŸ8Ô}žßÊf$âò\J!Ó2——ÑuZ%œRO0Zm¹àj5n[ÐÉi(šá£X z³¥’ÅàËÍZðä1Ñ®e–¬)’ösÍŽ®@EÕ7lÙïwïc,ïsAтμëÝ©pJ|{d¥zï— Å¹ÄÙmêõ{š,‹NûwÇJ'¬«ß§<å¬Ü^v0´ÎráþÀÇŽ7|Ä;Äó_ ñ‰àRa¡ÚuµÓÐ…,oöÄ$-~?O z«æ‚<|oÐÛÔôVÃDgk&äâ9e²2,±Áþ2­kºL\žzVJƒ ùûÃêò.…æònž,5NR÷œ–ëiÄ*_c(¶ŠÌ®Öâ:sšbYÄ7ÓðÃ*“‰q<)zqË×jÈÍsÅO÷¿T›â/ÿ‡ë&tÒ›nF5(r†‘ˆ†HAGä*{â^™Ç¹BÙæá›‚1vy‚#¥¥(U¥E*ÖxgÔ1|ãj@ÑN½µRSØÜaGçÎ×ྯPž½q0”‘à¡ÜÛW²ßʤ\ä=jÂUv‚cOI#FIwÍ®+sØÖ Ž-Æ*-J-€õëP¾8[øs¼Z“'Ž“2Æ%,öÛj¹›¤UJËteb”‰vø2Û6²¼µTÑ,`w_ÆêóóÓ²mŒrÅåàˆ@ÚÜŒ ”0r‚BC>’lH4LSÊ!-*œÑ_ ›GÃf%¥„ŠËS@3\×Ñôm¯/lG¶äµbÜÀXóë[ØÓ—ÿá벪YLiS£«³`–I›Ò§«‚£Ä¨°Ç öÌ-†“©j8•ŠÛ¬÷b7Mú,qÖ›Ç=*¼½Ý¤\¶›èi¾€öÃÐ'¤¦ÝyqA¡¤Ë%Ván?7qÄîg;BóagÎâ®Jæ…ûšåø¼Y.Òy®A2©d†Zþ˜#ŒWŸ„v—)–›n^nSo–‹¸‹¡²ž ßœ± (N)˜7ã’êÚeK¬¢ \]\hï0‹6‘GMØ áúò½ý½‹ÝÙÅšð÷pÊN„âäâHft,‡oÍléÙ{MË0&‡Ðî¸Ô(ìÖ¥?¸û¸Cƒéóæ%âî%¤&Ãê2$Zl y¡×ˆ*†}´…©9 ‰¨yʰFVLzSe0ÝK¢F¿•;eÐçôáì#ù…ò[¹û×Ûw_û¶EŸK‡ … PZ¤ã HW¤ÃÄ{´g!öÙFÔ^å$¡£,íƒs‰×W¥[§$"óäáu®³•blïX*~Ì.Öâ8|U ÛàÖ\#ìzLP/ë'L›þš ¸{þòܹ߭¤Ó@SÈ¢uæ+"J•ù„ŧ¤}Fè´¶CàßÚæ+ç.˜€,¦8FÇ¥‡Ó~ÀãÛû‡;Ó®ÿ1ÎêÓ$ÌЖld¨›NÅNð#†}rŒ‚EF_}æ Vˆ¢¦ÜSK^ÆÅਠ=y£òÆÂ”c‚‰,ö±ÊJb&®ä~Y-²rv›´4EM)ŸàÊžœ±*o4Gb~‹Ó‚¼qˆ$XþÒãþc°Ôh½û—%H>ó×ÓF"¥ZÞ¨¾Ò $³‹µ¼I#O)¦„'³wó3Ši£ñ:XãP^_/ì펈7@WƒIÍq¦üý½h¾"^1Géôdc*dö0B¼É#f[ü®¨tÁ¤köõ¹M&¨¦½fŠ&}F§!ðjÞÓëC´íî9Å€œbÄ~!ñú¡äžœŒ|rßç]Vemíú÷½–˜¥z%f·:œ¯óÕ.^¬æÍ.Ö`”Ñ<5‡IXÄ5•x…yõ#6Þº§b(¼›8ö4 %ª¬o‡LãZÍ™ä­æÃÊçèšuY'úÈo™7ªórÈæ‘Ÿr©Ý…|þ8ÅÖÌŽÀŸ'AeÀõx:”ZÍYTïo®¯3EÌdYzÅœÙÍ¡\f8^­Å EšXDp‘_7dAÚø!xOG.LFÇâðºƒfá^é6;fá‚Ð ,YwiüÜreË,ךߞ£Ëƒ¥»‹mÕš¿f›ˆPR?,žá;;lSÎ&3J‹;\šöi\Ý¿{¸·¨üË/}U³ÂÍ»RÕ«fbá[¨ÅoFÎ2žÍŒ^#ÊföÙàèNq‰A$‡"\c‰aƒ=©§¨½á- iž¦#ñÿ¿O?>ýþöéáñÃWÿ™™†VÆ£—†dÞæ‘ƒ¯#5ªN‡ŠÏÒš§¨÷pîD¸i"ø†EÔnOÒSa„۵܌«Æ€-¿E­ÊZDTQt.ƒ©;Xb ìöå5Æ€CÚ::r:ï ê«b´è(°^Íö8"Œ]ŽJƒû‹ÿL£tQ2ýVIä [Sôfdˆ›NŽ\ÜLþéþñ×÷Ï_?š÷.7Ÿ1Ì-P€t[`ìiqfs–z…&ú^ñMÄ]ã\ÌÀ⤆æ*FŠk^€QJ¥Ú‡TjM9<„âEN@¢§á«TÓ…qò7æï% œ%kã¬ðñÐaÖ@’˜´­²Â¾ØŒ cÄE‹3¬².®^Ъ?î0 T-†P©j6#òlEoÔŽÓ‹!!‰Ø‹arÚ ‡„Ià¶&ãUÇË8Æ‚»Ÿ8¶ø\ø÷wúð ?ÿ¬üŽï~É¿ý+-šsÒ¤—_¢ú2ñ5Šo® ÿÔysZ×%»–’£êA.FÉ2£ ) o_¨©²D,þÏÈ6$„ÉØGKZ«íÓØR(«T_m_âý)M`̶p›·D(n¢vƒ†RêéTJÈ'š¿#€â‚5¦c"iÐ;IAëzGE¨”1F¸Pâ)ƒÀ¢˜Û»å,´]¥vÒæ…¼Tx 8®z•jç––ñDµØ«¹à²¿ÄèœC`YUG‘œ¶(·çI)à­ÁŒÏžyvÐ/!ø²5sIø áM"RZcjsî‚'"É€[¼Š’kXÄ=p΂õ0 Õj‡åAÅÿ#e†HêêiyÔR¯¤žõn„?Ë?à/3XU­J©˜•L’Y­ÙËôDwUVx8…qH]ÊŽç UCWf$>¢Û‚ÕÄÒë@ϤD$‹›P߯AQ>1²¬_')~½»ºßùLÝ´sHÖíÃ÷ö·¿ß>ÿíæóöæ·±ÖŸÙ|¼½úõþáÉRž§ïIþf÷áÍþ¾msóx³y~ØdÏ7¶—MÍ¡ Ÿñ ˆM#²Tõ›h`O‹9¤ßµbúyP… ‚Rœõ6‘8ßÍò¢…î&mÍDB»¤jÒ¼A€ØänÔÇÕÝ !d²ÓTÅàò6ž¸k²à#ÑRƒ„%ÙÂû†ºœáSH*#_}'“á4Ô€—Ù±wU]ʧ]ÊpJMOB¶gõì­ç~±%ã­f¾K™iˆB^7)ÑÔ¤Ò`3 €³°¢%xGZ3/˾¦ÁIó*HG‰ÿ+Ѳ˜E4\ôsf11n´²‚’ôVit9ó tÓ[Mëù 
¨-£ÉÍîf¦4¢h7Ÿ¯}ÝïfÍ^|WV¢6{«4Ùó"µY€­T=zu¯yò+{ìdýæÜd© Ñ,ï?Á¹Ë÷ýØžî¶Çìðû_ØŸ¶€d¥›¿n¯?[Bµ!¥O–.ëõæŽÿ43LGÚÚ^fÔĶ-ZƒÚ^fªSh¿'Òø'lÛºöÙw‚Ú —` ÑUÐÄ–ðõ=½'!¬«¯ð^:<êêf÷SE°ö¸/ßî-}1‡§·æïªæHUýî¸[÷¡Ÿ±: lÿm²º ý‡$ƒ2¡Þ!eÇã×íóçí·§E¡; 2¤ñNMQ ÕTž€3÷"xº iÇ‹àªX;(—âÐà¢%ɱ$( ª³±,Å<ïQPÎ9’u q°(jRPUh2Ë—› !ªD/¾írfwaødyüÇoöã“ÿ’ûªs°Œ}¤{æ‰6-ˆµùÙ Ø3å.ÏGêB¹Kœ¨üî¼È: <®1MÆ lçÁïXßXÚù†€³ßuË&@à8 ó‰Ä7´Á|d_G¦Õ Æ®øLÿy™¶ìè=±|0{nKÛ~ô2›ûüÉöXf].Ríµ]ØÑK-ÙÓ3S°§eåy‹{93gêaÜÀ¶/SŠXt}j;T]íýigXYSë ¼BLNÒáªâ]5}ò5º¿^žî:EüO¦¬&W \1zW Ó‚}Ó]¤ 쨕îý`G•´æda½–]‘ÈüY¸fđػ¤öÖNÀ-r&³6W+Â8`Â9`¢C*ºÉ_“é[ŒÃðy~ ¬MÛMb0n²X¥¡ÐÛ£ó×eøzw{sõ´}îÓ•°(ŸHg­Óz±Í >¶ø¨»³tŠÈâáieÙë”`gI rŠ˜Néç¼€É5Ë15\§©•˜Øé–¹%6mçÕ3ŠI2¥™‰eЉép¦4“ +«ŒÆ’ÔÂüVy…ý°2­ñÈ®¾soÐ×wüa_ãâø!µµÀe•ŠƒJ_ß5Lj5Ð4iú»hÄ!óº„@ÉĶŸ¾Ýí6Óù8iüÑôì»/Òx~ü¶í“ÛQ@†¦šóE\3KF\zNÿJ€g2µWÒkI»’]˜UøÙfs$‘yÖáˆÙXG‰ôp¸Éœ™ÕÑ"‡;g›sínˆ½œ!ãrmÉ&ÿè'nö5öíeëvõĆI3ÇÆ{¹èkˆÑvzq¼‹¹ƒó–ß̇Üè•Ï[8Ülf‰êÂêê³x¥E÷¶ ©¿Í!€Ô؇ø|.>í{9!x]Åm}Øv‡ò”ÏÛ»/7ŸMuÎ><ÝÝÚ~÷×ö#~HÿúêÝŒ‡0 ªd-¸ÖµÈHæÝäi@Gòëa=éµvURåÖC1Û¹&÷©ÿP`1{"§B¹ôÅî¹6¶¹løß\“‰U6×?^/kž%Ÿ¦E)¦iÙæ}Õy £˜²tp]…ÛkÃ[7¨xœ-Yƒ%ºf/}Ó9Lö€æ(É>¤YõˆG“„ØXj¦VŽ]1xzékqøà,K×T÷,;jºýíŽ=SVÀ‰È‚¥ öäÖ}aßœ˜ºh:^ö?o¯îž?wºle§ŠÎµ3(ÔÌ`@KÁâ”…À¼_~¯"É@œ(Ð<¦_½Ù>>ož÷·°ûtŠQS«ico“¯(Ÿ¦žƒøŽs{g…Ö-²ÕD¹ÈI[>Ä”6R 6£\V·1k!8Laõc3ž²œì¥á-§Ö÷¸v ÝÐ7ÆËÝÝÄŠ:Å·}ð]P7‘hh*ÂYõÞ¾¬ÙÕdzûéÖ~}Ü캠'Ç!® »+®fbš5è–6^VˬWèKRèKód´ªçïí8Bîêe$Ÿ€›ì™@!^q#¯Mchb†ýÀ’×uS¦§4›gçmwÅÞ²6G\~‡ß‚kB/øþ×j{½ ýc²Þ=Ø»^_Ù›ÂR)úfûß¶î¯î–mD¿.ª«Öðz±ZR‡NÂiuôs<˜LœbÇp,8ë1,S͘¿È¾G‹mÃD£›Æ_ÔcDz^£L¡­ƒ„äMæFvö Ö=Í ˆü`$<o«:#êÙG‡ÔÛëøâŸ·÷fÇw;¾Ì>9x¼¦Cjñ¸÷sÜ} ó$cp>ÝÝ÷èÿ\®£¥m ËÔÔ šØÎ= °É¤Á|³ÇI¦£Œ§é ‡§òià/dàëá©|\{¸t’²“¬«²½½&=qUäû™D¾DWè;†¾ÍÉ“ö§,@ ¤úßxùÑžlûËr# «æF¤æÚ×¾˜nÿQºtö”n·>RÕ]mTÁm„Z8;ï> æ™u’ìÑÕb{Äc ½¸A·úuDú=íjIŠR‰®å¾Å‰e¤ëÈÜØ@ÚsÖLXЯÀP“È¡ÒôÈ÷ÃPóªu·+ë'¿§Ì$¸š*v‹Ú˜"{]•šæµ:ºÑÒ¼ÖE%ÍŽç;8šOB5Øg½j¾5ê(ï.œ4. ’f/òo˼HÀÕÌLÎ.äImäÜÀ¿àúž˜Qø¡ä4 ˆ6aâ9œT….Æ¿ÿYöüeîê™õ9_³ôD_ú ä¨×á沫u‘z¡xr³TCHeà°ÆŒ×ƒœºÝŽx§I%Ã8ÜüT‡4WJ:’I—3'³[`JqV9ØwØ…‘`u²4­:ÃpoŠ"@Eá鮾è> EúŽwB^¡=¸4Ÿª~‡Û#œ‹5ƒ8^§|æ”_¿n¿Ÿßñ³/ ¾fÏ,ûws·¡’ø¤²­Ë# u¡¤PD÷ŒÔÿ”û½32s{úðéñáˇÛûÍÛ#;Œqz¶WxóLˆF5a”Oj2ƒaÁ…&›#_csÞ+ ³ Í“š pR“n@µ·-0ôNçlÅÉž'VV0òÇû41-yxe ã‡æl¼þж;`¹ºÕœ׫{Ǧ[5)Þ¼-QЬn,U7úÓû†yuKÔ0ƒ ;†Ø¬¾GK+Ñ·½V°•œ¦ébjIlC7|zDÍd.¯Î!šÔ—á<>Ümõû¹¹6¿I¥‡ S+ð:pš«9kâ,nœ)°LRôY1Saoí(¨ KŒ$Í^³äºÉHTk:s#š/Y|ñ®˜Q2ÆnO!Æë‰s³ý’x%{x4’_'ë±8‹gÿ%2`J£ÅB5‘åî~…ÑEÁ’# ί۳ôå*…ëwÛ«'­×IȲ>„Ÿ&%ŒÊDÕ¾WÄZ¥–èÒÝyF(½vXð4x$psåŒ4öžÅ罄r>z$ŠÐ:™ 8f ‹6T`o9zÓ†Zû0v'UÏ™#Ó pŠA'ZçEz¦å¡¬ÌcqR>³¡§”Æx•&¥Eê´‰ÓÑ&ÄU6¿¦‘‹OI¬¿?Ü}û²ãø;ù| ¨L«%s$hNvU%è.°íBˆ‹É NDyn¾Ï‰˯¼N15ÙJ´÷ž¥w²-Ï·qîE'yþí£lþ´{Ëß߬Zwvmûe ´0¥Úô–]Êžqmh5áâi}Z²øÄÛ1GÁܹJ0ËÏ=×A‹I6ê<È sž½k]ª“ÆlzY ë]<£†Ä}DMHL\SðÆéLÎhþcD$;ñCœe‹sƒ› t z³$'#1U„ºв-¤ExÌÅ5åŽlúÊxŒéì4ÔMŠJ7z3­–¦t%5qE1¯ÇòkPbJÅÑ›£W×¢âèt…Ài D«öÔØÝom~Ó¿üå:±ê^óõÇ/Ÿ`k?º†uZ °…£¯9Çg&—î*–Ò›t`706«Iˆ#2Æ~? 
ýÇC0ù¶3å(¬(¶w ÂãEtHLMûdm$ö>d&\Ú‚ÅÛn… ZèZF,R‚¿²çÐ/‚ߎ(1©^ÛöÚ`8¬Pz…‰RØEGï¦RøæîáÛÇÉþùÓÃã—âãUÏzáÜó{T ïTd¹NpM'Å1hEÕ0{äiÍ¢á¬fz•gÕÒtš’,›ƒw~ÞU€·ÈyÖYÉ W‰¾ÆYØ£0úEç(‘¢3Ñ„&D~mo>ÄLÜnq¹K,mt¡–ü#Ë„œMÚN„ÖûÂÈWðD–®Y> ñ¹¢žƒ=‰ä1-ë¯ÆÉŸêæ ,[F×–Äša™$ˆ pPÓúê뫦•Õæ¶xP-s[0ë¶"å _F ©r[†áÑIXtÜ$iJmcŽs`½YÛmŬÛ"Ž.Ýæ’Ëw.í¿Þ Nš–Wô®)¿Ï´ŠW Þ,*\z¥jm|Û“à›¢]Záxa§¦ö®!ÓZ¿‰ƒ¥›ÓÊÀ€Nš±«& ;—ªì`¥v¹³ì‡ÃòW(‰ƒ-ø’8‡ÃŠ>_f~”V›i»àyQ$¬hHÚvĬW¿MU\˜…b¯.è\E#õ%æ“"êñ¡”»AƤ¢‰I°éÀ×ìþeå ²+Xø<3h'=ö¤©ÖE©¦i‚Cp^ÞùÌ SàÝ™kœ@lAü<ðR¶"e$  ÜMoiæ äà.9ŒÔT‘¢Êý©û![9!Y»ËãÚz{ÿëÓ~þùÓßLl_~ÞÿñË‹©ýrg϶Øóîáæ7ûð›3×eôÆ4¹7)ù]À–fª#$²u‰*Z=:ȯÛF5£Áœ/(‹g7*¥q³Yú€qÕlÔ]?šÄEwð”Ø.¶d2ö×à¤ØF¼ó«–ÿè½^ÙÛ‰}þ_&ïF¬ñ‹w,MïX‹[=cÓŽE®¡I4Pä-n^JØY”Ý6oº¬ .ð|MH9}˜Ý¼H!Ku\ÍîMöîî^b‘&3¡5óq@¼ýüx¦×.…«žé•¡™7ˆ¥¨æMÍâ>ÐYS#ÌÏÝ= ¿†aÎ^31?„%¦;¬[λUjhKǨQÑýç _ÎÕõÝöß ëÿüððõ„yì?,éÞþkZú/æ—@ÿùCò7ÀÿNN5G2^ZŒ<«9BÕ9o+u9~ ÑZþ”Þõ ¢ÇíÍöö÷íÇ7j¢8Xê/0æ“?á°®ÃCnŸ>Ü?üÕ¼è_·ž?_Ýx‘ư[ú®RT3ú¿ÿ-dÛ­ Ó§o2¤š8jº-Y…»öõTø:0à¬I¡0h™I‘ÌÚÔalì[¿3’MÎ){m`¡E7öë¶Ó-ƒkºÉ18vÍl„\ÈFNC[ ù ­ z‚|V­L9Œ?.¬€‹0½”G\ÜbȃSg1×ŸØØ#Dµ¢>O}pÑœP»Òb©Ò’QQAï g=[ê±_·ÏÞ5WV¢5ø4Ëc¹ÖÒDšà8„z­Ù#Pk$m]Ü8‡Už}ÆkFU~pQœÌó+ˆàùj¿VÎÒ+Œ3ïŠÁYYøµëøM¾8„!ª"ÇR Ø´0I§­‘k÷l"¬]ãcN‹ƒÏÜl8ÛEžÜÌ͆ž¿nþs"ûy÷ß1½áØø·W› øæôŽc“eÑùiJ' )hËÞÔCIÔ²½)Î$šºDš!UŠI˜ÍÁQà‚}j¦ϳòîî³å£••ò7*/—ÛýttžÁ7©¹â™Rˆ !H³ÚtÚ1¨-¸ó” û…gékF++T›¾Ò‚øÅ¾ãÙƒ´xÂt¤[3“ɼ ’ý4·ê]©ÞÀ§•y.‚±èÄ«‹3jK“Vc6™8®¬ˆóœv³äÀ/Ò#»&½Y”¶\oQ¢9#„ج6¿Dm†-¢%j›flÝÙs…ÑŠ´–(Œ-0âEZcø6i¡"] 3zĘ ¨µV<`À¢H•*ZK¦8¯6ÎN<­¬@mŠÉ;[®4onECl÷/N©åžVåy²öê×íÍÝÕÓÓ,«áë¯A€ºWˆlɹÓ#j¸£SµNÀÅ|OoDx¦cúüŠ{žÓ®<É-1ŽQå€ç‰O÷"£œ·;ʤÃÁÙΤ11"-Ú aÇ.ݲA¼¬²A-»|Wä »cÛÍ÷ÓZ[ÆãÃï½;Žó¿Ñƒna§*c °švP_3eZBª-PX—oaBEýˆ&ôÓŠ6@ Ð&Ì”aíU41Ÿú¨ƒNxã‡Ø’ØÛ~,Ón±ÀÊ cËÕ¢`°Hu¡¨ßÏi~Uº·3ãgÎWë-0“4ï”bQ(?wÞab„ì‘‘œz\礷ŽN –X‰…–¢Q‹•x‡¾Š8EÔȘª‚×ÿfFñëöys(-ýŽWUWŠ9$Ä« /9¼óS{%Ïs×B—E,’ÀEgrd†+à¡ÉAjš:Àòõ°xPÖ¨êíòºÙ‚ÀàÌÐ|Y“ò¡ïó¬)@Þi¤ÑÃÄ*QÊéívïFBÒd AcÍQ ×Öjùgé'¿äšìDy0 Rœ/C0ç& ³†4{õ"©fb/M–ÿ…°ÈL„êg¾ì±?“ñW.Vs~©ª?MjCÑòk×´i¥†ûÔcf ®¥èü²crY‰Mʰ`ÛZD>ïêEsL@#aõÈI’¥;raÉ=½QË!¥¥ÕoØK»ì[Ó„ÏÕ'Öí"…} pxòëÈõÕ'ž~~45oí=RÜ™ú–ñêææ“x`·½¾¹Ù2]y ?ª¸uáÜ»ÄX<¹©з_þ?yW²irœ_…ÐÅõ?™™Ä Oº‚.‚/†!°Ivf7ÅEÖø½ü~2GTÉŸdVåZœû285Y•±e¬_(·û^bÖÇÉ77ºÓô]…-ÆÄËžä©q㤔/=r†Âë¯F´…ˆM O¶iLá#aæÅ–XiˆõãÌœ¼8½yøôõêü4šÐ~¦ =Ó@= ‰!’—b8êœÉòMÓZ“˜„N*"r»ÞƒZK²Ïô±fh­þj¯)·h-;}£1Œ”*"Ñ1&¬#,Ö- G…ÔÜ¥!×È{<Ã6Õt{=hvÄ<0q±=;rµd ¢Ñ¡ïKÕVÐhšþ-ub,W Eß(E·J0—«Ô¯(2%uïç×/çÚü4aúç™þñ™c²þÉ'¤®Vxwv}óõòî9V;B­ðõwLª²7hé¡~ö®§¹ Ô&¡F…ü¥Â7š]*|Þ¡R¡m9âTNº&á‹æÆ»,däšSìØl+¤&{ト¤¡tš¾ÉGhM Ø”Ò ßÝÞ<þñËí÷‡›»kòò„J[[[!ž÷; ž@`(2gåWϪ!J †K3ÌÅ+·ƒWÔ}£êØ¢êê20'ª˜CQ×€¨¨ê1;¾·"ß M'¿0¥È å^ ÞEýg?àhêØåh²5+û(íX; NC| §¸XÅ÷ü¸j|?[íâ¹ êêb5ƒ`ú£BháZ„¸pàš®"B86|Š®A°àÞ‚áÛùYHaÑc™SÛ€…† ½¬Ï&ÄÖ·©Ê‹j‹¸—£`ë3†FÁW³Z“q†õ6µ1pâ_‰l“üÈËŽ{Æ«}BôBì›îõÆß¾üçå§_Ô`¼hT<(:ÿÃ!ñl„-HÍ«±(/cþ¹&ÙŒ™àED\›À°ó‰‰‡Fz¶&êÏ•HÓãï¨M=/+jȇa ±,,ÂYaY‘kа$#mÃ@“°@’¡:™Ñi“F¢­Í’·—7_¯ÎÏî.ï÷Ì#=ÿßDôYä}ž)É  J6J#³F¼,ÖÔš:“dƒ9aôM²a«–G@ìˆØS’¢8q?ö¶ë¬€È¢Î~ô¾*þ_²ÆcE¢)"‹Ó@µí¡!µ~<äÜ2u˜L©Ùrlçœÿþðýþ¬0~ÀKi8fÐg‘cÄ"dÌF”v£E‰²í+rN’$L”¦…õ«$òY’ƒž<½’"ñQV-Ý^^ürv?¸g)'" cðP€kdC©—­¯È3eVJM5LA$ ~Ù¯Š8s†épQ%y3åå?ïU¸,ZÔÏÝLŒ<ÜÝ¿~´Dzø·]d~b)cJ4%¤Õ} {S‰A£ç¡¸5QJ=YG{!¶¯°|GJÏ2à­©=•]RƒS/¦²•êœuIWd2f‡ 'Ë"¶˜ñVÝJ#¦@ü1æáÃ&#ú¸÷îXcv**¿Úý°ýã¯g×_wá…”hå)c)QÚk ¦SÂPT!à»zü(!%'7ªÈ~Àg¬¥ùPRTe}Ü`ï—’¢‹õ¥r˜uMÆv@u”14Ùà£ô¯#ßéóõAm|úñ憶X–Çzõs憌ۢÿŠ$PvŒ¤Dï27ôšCÓç†^³gÌÚ¨¤[CTXG±Xh•³mçÏ,˜bm’í§‘¦<˜X×¼Œ›Ýœædc#Kâ]2òHÖ¦?§Ö†Ò¡¾)ï·ú`¹à‡,€xîBïKÌÎ!üîR‘&”C?àÀ‡yh½'MQQ•j”¦Apn@Eõˆp Úàã¶žu½z/žcø»£'=ÿØÆH#ÊoKRWÜZ]¼ËûÿÈÙÏþ#7†^{•cŽK¢žyÁ’¨®‰äËçOôž2#¬Ê‰!5™’ˆ‘™’øj+èœá°pŸÒ;`‚¼zŸÎn®v?ÿüòöþÃíå—Ëo—Û¯]W=leyTÁ~­ÓX€Á=h©œ,/¥Në(¬s+í¦½ü**)¤šIÛí‹ê)ûð¯5eÐWm\‹ºzG‡šÎƒãx眢áÆø^ËÞô~PR|»oÂDÚÛÕh}¬G¶TÙ]Þ¸KœP G2ÞÑižÚRl 5㾬*V|6½ÏgäVT™¢‡qÑëp žþ¶€ ~d~Ó¶~ÈQôp;8øÛ dªÁ«?µ…Èö;É^£3{.½ô,üÃ×´›ÏÞ/ÉÐkâp×ü»Æ!É…4<±ÉµðU>¹Å6eWô/±Á¢ú¢Hì6s¿‰’ž¯V3±É²P h)êwK”€2Ä·Ôcª½`T!“ñ½Ûìlµõ  VØjòXf\¢ì¾žÕÕªÇâB^_ì&Æ!j1°# §+ Õóð!øa¾Õ¢MyƒuŠ1U(Ci}Ž]ÜgŸoV¥oq±*žHÛTüBNT° ¥Gß,ØV¯*Žó­—ÇÓvé TðMJèqvó¼Ï»ºZ 
ã,]$Â-«·õ»‰!ð¡D é uãÔÐx[þUÍ· {ÜnføæÄK ßv(Qù¶[NðŠo«›U±-,Êc×2.(¶.=²ø¶…ÝââfÇD©< ãj25è[P7¶bTùF¥•évóìž±õÕ*Ç‘ƒ4é[ˆ1ªÂ 1.v¥ìCÒˆWcµqÆÕ"*ùˆ “¬ 2ôÝ,3.:Éo¿}ºZ ã¢_Ôê!`ã ‘Ò㈻&£$a"h˜qÕà-A4ÎÐØ"ãb`*2޲³“«›Õð-èQÉŽ[øfÈq„oªo=M+j·ÄZc{›ûç«//[Ô¯®Ï¾\îÐ#7½«bŸ{ÕõÍ(œ~#·ˆYí*q,‰Iôy9YQrÆð›ýì(à›’12"ÉQDìÂÁU×:j ßæð²šó¢ÜúÔÞ|}~þa‚CV@xQš úŠ—;ùò#äS Ï$š" ¼~#r“€X¾Ô¹dêMtÍӔܑ»ã7R3¯)nsܘùI` âÔ8WÄÏ\1ïwZßÎÞ>ÑwÊžbýÙ¶ù±IºÈƒ‡±„–¡w ùGa“¾ÒÇ®]íêb²=;,lV/òR#l»íà‡„Mݞܠ÷ŠÜ“dƒ÷©Ée" µTä‡d :D-z€ &¢=wú¿ÙãwwöáëÕÁ\íääúìfÞËg‘)„Š@N®¢~FÀ¹©œg’M‘ýÑÁ ½MÒAcî–="Èút:޽+ÚìzAÝÅÕÝíÃÍ«Ó6¤YÙ×»`t×' †R®êO¥ža<õÔ€q©£HªYʸ‘++VT¶¢x*Æ)”í/ZÑeʾkuBÉæ{”‘=„@1)#¿‚ž¥ŒS$pÜ}׫½‡çiÖuk¬÷am(ùQéÏ#Õ+å ôÔ‹ƒO¹ä›—X¯HvrM¯!eôæþªÉ¯QÆ$%˜u#XÊÆ„+ŠLRG@òMêÀctCò€Ð!ú†ûÍŠ±ÿ;ÍsàY¤Â¦'ÕH(‹f{ºW„œÊ…Kð .5IQŒa~Úf“å£>-ǂʥ1wOàOß'*œøýœ@"ÒØØeÁU”×çÞOHçh4Í›RvN$U„6!EEÍ‹ùò÷Š SpŽ’ U¾&n¿@h°¥kø ˜Ñª#áHõqȦ8DIbM̺ä=ÜB”ÇÛ}ºZMÙ'ù…4n³™ÊnKù ÙLŽ`3eñä½ð{ ¢¿´ jÔÉ»¸ü|öðõ¾Éd‚wø‡nR—M&øž‰Bô‘cs}ufKFç*°CÕXqŠR1çí®H1ÅVÚ¦H‡M­D¬ÎqLÌC*÷º•hŠÊ¡_0j\{ÔÑ%õ•Ê”ÒòEfz›óØýË&PõÒÚö+pI†»¹$å´†)ïMìiöÉðS¨iՅ䋿³/슮3Ì)ž–‚9Û0‚?´r¥l’!…²ØòBÀw›/À@®>Z™‰ˆ;µV?ÊïMkмÞuD±wë´Z×ÞD±Fht#ãá‡Ú[Ö4{£Jž  A¬ W¨$آ̢J&ŸtZÑdŠN²9ÐT«á$ˆF¢[’®}ŽàÍ&…0Þõ ·Eé¬Bè«<Â%£¼•Yk†©‚ Q±Ø'³kžˆ8%¥aUòè›Â*Q›îÅÃ=]P ßjš5Þ½ µM²™Ge¨HT«NqE8 >kžnV K\À:î«ášY?ïlî§¶ospÏ0cDDH)Ôôw1ÙqðBö¨³8b"„ï4ö=˜[ÞÒ.[0\ç_¶Ž…úw'Ÿo¿_Ÿ\}ûp­žìí¯»YÇ{Õ÷· Ìd9²&I°GCÆ$¡§ûܦ·c"Jÿ_<}¶ÖÜ«!ŠÀE!"É®^Q¶KŠôgº›Ì‰ `ÿB¢ÍmD¨kˆÁFÙÔéë=¸ûéNÝýC]Bÿo¦ K\ÿ܈Žøv„Ë@<¦S ÚÄÀ¶¢ö{“Û#ºöRõ„ˆÅ§¾ùøvFÞææ_³ÎËN”´eÖyujb‰uê½…|¥úé6CòNVí…‡õü‹3Ʀäyq>ÄÚ1y»è— Áˆ$è…:€Á1`H†ÝB©^lìBd‘ ©PÄ’T€ËæpV7«q Ê]ØÎÆü1sFCýRﱉ٘¬ü7Äl Ü…pe»ÚÄ»ÑݶAÌlB¶óº%+tN5¿¸ÍnkŠÊ΄š+ÎCd=QxFŒ©T‚¡1Ô?+èÑyŠŽûå ½z¸ÔcŒ&2c7¤jœçc;ôÏý|Õ‹'lŒù|³ª Sf{œ[øf‹6}L#|SsÙ‘\bd½vJnðþÒÛ³Ÿ4ìÐÿê÷³keàÙùåhkjÊøNCM5©EÐÒA, @pY ˆ‰fèµÛÈÚ䃔 †ä#ˆïi†3c‡öHÕ¹£—d;#±¬×»_dkÈ®.^ݬjð9,l%lb{Cb[B¸h€¡)¸QŒ¼ <Þz,r“~þÌÝ4å&\ '+^íhÐE)ˆžò¥…'BMÄ%ĨéÕ’’>1Cb’º€{Ä‘í¬Áqõ®†Ê¿¨ÍŽDeÆŠ†–Td,S6ª_]­F¿ÍÝŠ‰¡…oÑ:G`ˆoÑ÷Œ‘0a²”É0¨aª\ÒÐYéBÕcb‘mzñ\Jnu³ªÐÍ/( ¶±Y£»1¶uíáQ*Š•áÇÝÓ›3Å¢”xLŽ6DG©ÌúlãÅŠ:SÖôê¯VYÂÔ$d(8ÝàÛ#^iôœ ¿ØvðÇì¼xòÖ×"ÖÚ"®1ß"¾!ŽQ—âê1†‡ÌYŒÍ u÷Ñd–vm8+úÏ w×À¢»9‹ó°&À”™oé†-êú4v#êEú¼A½p¡dÏÖïZ½ˆ1…‘0Q±«å>‘úëî‡S/ïËêùº,‘/©—ÒC¾1ÿ‰SÔËv1#AhR/ñêÜȘzE9Šz19Ž?ÐJÙW@>37Ƚœ]\_ÝݽZ Fç|žRáá—OŸÚvªªÏp"†4€ˆ]й6›æÝ„ĉ¯Î‹ÊâH=ŊΪ¨!';gò ØóÕª£ÆçZšêÐd@„ÜáS×:LU´…oï€É4 ö«§¥º.ºÇ3¢Í‚€ÏhW<8ä¡çüBÌgŠOòŒl56ú@ÓcŒØ0§º˜B”Ô:¦3ãÉ›éD§˜ªBT©((ˆY¬õ©f Š8l* ƒÁôƒç!A‰Ðó¢€­çŠäýXŸÇ{b¨ë’ðÿóßÕMxê;¸Àq¤ÜjGô€0DhºF¼ooÝ$â³mØŒjË)4 §°(S’8ÛµùD)3ZV°Äüæ²½ÖAÌ×  Žç?Æ ;7Às/^õ1Æ€²2 VÈ ¤’Ìp~ hE¶ISUhB“Ì[d†d&@WÏ¥  £ ïÐÓ{”`öÅÁ ÑlFàЩ‘bñeßsQàž~¡gšÏ8ûÙà‚ã&‰3 ItCG]xú¶†Cµ-¶â ëÅ•“wË^øõ¾ß^\¯=cPŽâ‚ÉVµUÈ@(ŠA¶ÙaEÈ)b¤¿š¢ÆÎõbD¢1Œí7ë#'=Ìö¼@Ÿ+ÌU®0°>AÀ•‡Í%)±Òîšß°±ºMÙ†ÀKRÿ|[ØûcîŒ!gØ–ë1œê§RTÔ!È xÇ]ý‡hÑàŒX¶v#–z™¶–]¹‘M? ‡÷jloN)Á÷tµŠ`ÁYY(!ëC²SHêg‰MåMew©~¥GøWÉÐàí͹šèûËÛóË³ÎÆpá'†?Ö®”à÷O„?¹¿º¾<@ýíX›}èäNýý#ÿ4Âå­¾½¼þ~oGüûé©ÿS5@«âÒÉý¯7öoz®¯Ý|Zþõß~úë?¿=—ºN-åó_¬ö¤ÐHquØîcnõ§íç‚_ŸµÙ¤_ºcØéÏzÕ/—÷§þËŸNŽYÞ¼þ~±Ò Ö÷äãÉÝù‰ÞéÏ»ß÷¿Ô]ír[9Ž}•®þ=¾! 
€ R[ýûSŽã¤]íÄ™ØNmÞ~I¶®eÊü”&éšîª‘í«Kð_þýíñáýÿœò5~\Þ>^ÿ{sy'ÿØT”¿'›2ìô?‰"ª®Küõןô?šˆžÞncýOû~ýñן§Ô„]Ž•¿·ºã÷‰\­ý€Ê3¨?”bQB–£c½´Eˆ6‹Æ&Wf‚Jƒˆ§Vi*Çmäq Òt'¢¨4µã‹š«p^¼L1ÌÓm]8ƒn»üvó]#‚û‡!ö?ééãg^~ùvyõÏåçë½s¿Ž _©6H’ÚTÛœ·Xi6sךØœ·Ò_ç€rSJS,ïÑ9Š6%~x¼¹ýøœûúÛžÚ;]„£;©!ì ¬Qƒ<¹›ómÁu% %±H,e[N¢cB.Z$¬EZ‰iÆý¡½¶:6Ú,©n‡¡“ùEd  ƒàÙ8û‹¡ñÈŒŸ*WœÂØ7O$èWCŒ0Âà8^ =a¸‡aÍ ƒT>aùÞ»•¦0…$ê^60ãAòCûÏ]m^)=™I`Ô——ÚÚÀ –FŽ)rž:‰T±¯!{C¿_Z…/¯¿¸€o© ´L\TåŽ#g&÷)!ªQÁ8¼o©vX4„´˜!¢r¶"“Q!änÉV «JEñ¢áZ`˜ºke{Çõœ S)n^F`ºŒ^v1Ç#05‹ó"0:C6ÚÑÿ*ú2жèkÂ+¬B/ÄöÔÑ„7x+ì lR²5ÁKGÞ9ÛD‘a•UËáÞ-+¸!ù]³Ô›*+?¸|µ²ådq‘¤Ig¥d[C>¸Ð|žuSGŽ‘0œx@”ÁûÛíå×ë#—ªßî>ê¯Ûx2ýå¯ú7?n~^ý}}õÏú0í~çâãÍåç¯w÷7W÷ïž>ÛüòÅýÝãw#ú~uñpw±ÿË]êÃH¥·!èÉtS`—gGÏfBÝã8ðnvkyFýŸ÷Ð>•ê×ݘY¡€¼Ö¤Ÿ#¤Xt}l.R¶`c¿ b‘ÍÑtB¾EÏ!XÖ3|ψ·ªì¥g¤ÛÄ!¤Ýȇcž1n”\Ñ3úßÝA8ðÜ«›7ÎøFþ•oDæ¼£ßSáP…¶õVæQta˜ÓOª9ýV`”5 |ѹ ü€ÓýÂjÂX®Ií1ŸûÐ{9Á5@«‹ü+5co*ï7çâáaSÍ:·%ûõÌiÌžã`HÏ 7V÷ÕÆ2â9Ê?_îÍìЗS]À—ó+Èè^ RM}Íä(ú’'ªÚ †[a‡4¶âsû‡*æ…<,Ž^{¶Qlíê¹Bl©EL5É– ÕîÄï¢âNˆÄtrô$}Æ/õ~ Ÿ¦Á½F¤‰¥Bç(•Èî÷ÕåÃåíÝç­?Y[üôºbBC­¶œÝÔ—YN$¢æôÝÔ—9žÈãÀV˜ê'Ôw?…œ[<ÃI«'Z[‰®®.^6œµål€úw¡äQÙ.pO…#áÈ”ü‰Û°^É®/­òzµA%Tv0D…^$e«ÞW’šáÿèkk8‡¡uÎiE/'¶`*g'™Bf£¯÷Ÿ–îœ Æ™ŽP U‰JÕŽÐÅqJ]@©gb†~oâ íû`ò#4ŒGý§\þ] %fd[w–eµ°Jbdý¶ä°éd’OCÄLÜ52ϧ¨Z L+€;ruì÷—³T¸ O‡>”éV2”pÀ;[øº˜ãYXSwã£O-@±V†Ž·Æá±« MR Þ·v\ïÔÚV«íVÿ2‰üâ7¾ÝÝÝÞ¿ÛÞ(OCˆÍ"Ñð«¬)¼ßñd½…ä1å[Ûže4e¤¾¶w›Œ|BçÜU˜>:ô? :uR}h¿½Ómÿrùí~;/ñþ§.óË»ëÿSºaâÙ_\>ª üúpsµÐ4t ß6>–G'¨a„RÝÄs”§+ͺÍ4—6}5#bqŸ4Ц„¸ÕÒ|¿þv«~ýP5ùØ.>\!ãGúç™ÓJEÍŽ”X~L°1[Œ¶’ܳ£ï‘\›R„€CþI’®š+‹U N2àêúûÃͧ›ëƒSŽÀCØ({ªàQj§Vù¥ì4ε„&ÁCÔm ,E4þ¡¡ÀRÄÓüâ|¡E¶—“)¿*–{ºÚ~ÖJ‡\ÜÞ]ýÓ˜þác!ŸÆÁ&tŒNé¹ïf±á !…Ñ©Iu›u6 è#TX|$.Z|‘”õWò™Qׯ¯íI<¶„–`“A(šô>õèn¢ AN=Aó°.uâ]ëÁ£.ZsANä+øˆÕ {IlW8;q/ö)ö@ß:’sÔ„9DÑØvÀè#ü zµâ⢪Ëÿƽ@ ]뺞ËãVÁZFFçÚôÔXÛtåÄ.H˜s)pBö¾œmPŒxŒ¡‚”¤xL1Qþ˜>KiJÏi8ÈÒBÊÊD.Úø¥Œ@Jž&°@²ÙqèJ&™#11™dˆ˜—ûh$¥¸O%˜/èß‹hÒUãb z\br0Ò§¡èj ôF˜ âbã» ãUÊL0š^I\î&K>ºQ§­•²]€«Õ”¯"-Æ5é±ÑúC„WÖ i3Ý»–Iƒô4„"ßÕ/“—þÞàg“²Í,¾VïŸÜÁ÷wFŒ§ŽÆÏ>•AY aJDU8C*âŒò3•Vš¡2ì­:Dܤ2"Gn *Ø5Ç“¦F“jšoG aŒݨ)º]ãÓÛûê²ü û•UÍøÄÅ&‚¶Ü+»CÂ02ãSß~×'Ý80Ûº°Ñ»ñm«ÍJ~qÁFÑVG ·}qÛvU±‡³÷+«Ù6Ô@Ë9=¾ Û–ŒŽJFØI®±0Nñ")…06}¡º1„®9}JW?þóãû4sÌ0õ'uã!ˆ¯QÍ;ŠÛ7° Ò¤,ÃÃJ^SÆf§%ØWŰD ª«`,ÑÅ*G‰½zgžér"LC Òžj:m:¡„”è çö¯d5å–PM”õ Ÿ'»kÒÖ™.NÝŸ<Ÿ‡ÕýîþâÛã&§°ûãy`‰‹ÓØ|XŠP—­)\‰kV„|S„˜6D‡°B®Ã™2Uï4¶ çj‚—iƒ ÕÜÆ,ï. 
w±{çå‡ÿ öËî³92 û ž]…L(¥‡íKŽI{%·)˜±l™]A6Æè*£Z¢žz7`£¥v~ÆØÝ¿Õayò`¦Á@½wt©Â̤°+ž$Yݱ’Ä”ô.Š)pv¤€Í-‹ú­ùÇ»¯7jAtý/n¶Å»Öä®|>S™¨böÂ5ÊÄCÙY‰ùY…+9NÒ&((Ô¤M$1h˜6„" n­F^À–Lûmø*nï.?^|¸Ô/¾²rºo7Vf÷ýëåí4äyY<«£\¼È eýµ ÀÛI÷²ŸRi)6~G œxª;ŠéŒ ÃiØßª½>^»½ûùå ¨Ú—XÜ|º¾úyu{½§xºýòüãia·¥Äu!wi €ÉÐåŠ/VBš{k›7pn€ø®\(%¯Í£…9Ϭ—ûð)óY¹<Ç5 `Ab” ˆ 2²³p‡Ök%¥)CeôµÁ&#ÐS¼E.Øu1ñ„›• ä«Gœ˜àcSd<ÚÈ)7m/‡Y.Œe‰èì.L×ýš‹Aظ.õ®Ž\îÖ««!€yü@\*©µÈx–єܭ¾¶ñÅV«‰´øhé©¿šgóˆÀ=IŽŒtÂ÷õÕR&zðê!`*û´8pñí‹íÂ1ïWVsA£o…ѱç¦}‹˜Æ¡}£®Ù©)‚qE6ìZfsÜ{ùÊóK0ªl)o›ê\*îZ¶Åj¿¬ª^L¿iðZ¶ ¼qh¸‘-×3ÑœU."h Bƒ'-UŸ´ 'Ͱ5G ßU¿Ý4¤l!ô~i5û¬#KˆZ¶ %&èïŒÜ<¢GAJä„lÀåЮyçªëxQŸÅW4ë-.íRvXìje•u)`“]#Ç.&Ù5ÕáÐÕæcâSV§[Ñò´zt{XuúM¯4´%]  7øx¾…rókO¢’›Ñ·†špeÆ”8ßU±f“JínþWîXËÂÕ€“WsP†‡ú²18R ¹—Ï”x+,Û² V¶ådjÐzH2ìlÀi{b4Ÿ?}zýÐ1-dÆbòJñí¨m»YÎŽg9O™s®¯¬i™°ê-?²]Ê¢1³ÃÄI\JÓ4nþ»ÎÌN² @„ €†¾\ÄCî’`%ˆ)(MËG ":ôýC•6P t(ˆLNcÇÉ“Í>^º|¼}¸X§ÿG›enñIB¨BˆEW%B–•t%¦Iò"Ø„‘H0äªôuYg¡1sˆ¿a M5–OT¬ÑKòزmQ'!Ûb¹ܤ‹`–SéïúÞ<"öônyýVµÛ{óTGJ!öLé.â§xõP(õ 3¶X>ØcÙ›IkoÉ$ü¶å”*}mG›€¤ú‡ò]º<éP?ªuÉ’£î”¾ðÍÅÊÅnŠìÏiÞðËÇ6øÃ¬yu.YÅ+°)¥"Ö²u+iO Ê-C—ÈSÒ‚*¹G¦AyψIÁPcV™Yž‰Å/>ãá]ú5õ£è“‹Óô’—%Æ”°ì©œèíæä­8³µkyM*¯QE|X ‚¥£8}P" YzWOdiÍÛ€ÊÛêlÊEq[wÃÑ‹¦ö+«Hè&±¼m;Eÿ•yÄŽ¨û€™zêQsméæ}[{ëÐ^sìòeœY¿0¼ÙP½ÙÁê$Ul¶‹›ÍÙÈzi•w.è=ù¦C7Y‘4´qÐ3Ëã8œüI½‡u…÷4ßaýÐ1Ï!¸ÅEb¬A…TVGÒJÖS¢d}m´qBMHS h„4‚´èº¦.XÆØ«l 6æ|‚ª\ÚØè0~¬–VImì=F›6’qªm\è)ò×0È\%ß!–‹Aà“4p,ƒÀ<ëb\`‹ì-ﳘ¦„ úÖÆàÎM‰âÆ •Ô1"ØäP:ÍeÝÝ—/_o~Ž^Öåu?C¨Òý.U?ú\ °’Ï$ͯ&&¤&ƒƒ@KtöÐŽÔ‹ò]ŒÆ/s£;{ÿn`,hÁ¨ cÅ~ðà°ì`ŒÙ^Ó½0¦ !,"…†µ¨_¯1 dª¢ÝõdÊÕ[Áä¥ŽÈÆÛÒ¥\cDXXl~Eq+1‘+)|[kž p¿˜ 6":›b\Ç€ë q)n—`oY+NAr]±¢ß$Ì ÇŠ¾ºÒ+Ñ’˜•ëóÕ_¤$tå>Ïô¼´G0ú¢*ƒ—U«gdSÕ}tÞó¹·¹c»mˆ:VŽO[ÿNøbËPwñùú«¹ ×7—"·×]VÂg.Æ¢,Á¡‹P¡Z$P(â)›dÞËlJ¥¼h ¨Ã\#ñ6÷èfutÀr2…|ÔÑ‚.‰?)ùh¾AyO•ùýñözêí[a)¸?OyҩǺ{³¦Ô>ØäÜ¢ž¥ 6HT{+ _ºE0±§¬Ïø,קΨÜõd¹•¸0;^ا°»È9‘2ȲV¼ä:lãÖ8ã¤Ç7¥ŽË½F|ë¢ûmÚð_þl;æôp¾ý4Óoó> ¦ŠZ;Œ¾À[µÝ¤,ÃÙj&¡PÃO1ž9>LEß z˜‰šÛ­+è<¥¯'Z>§Bé{O\ÄÁnfÉáã^S´¾ ]2vʳã 'fðlÆ œ'U0qYŸ;eã/;îäÃ!ÏÒšBXo=c–$8·>G:EŠ .!ÄpÚÙu¹»Ê]úŒµ§X¶-W`ÑüI58ÅŽ k)BEO¸Ó=&§y!›Ý¾q¬8„h#Ê!›弪•L&%ꌶ Ê»b´®F8…úˆƒò®9§P§æ,ánf ¤¾ý¦3¨¶þØ ´›:‡2Prc ¼±‡{•ÙflHLýƒ¢ Bšweñ®¢ÈF-•JuMb”Ÿ¹É”˜ªã]&X]ŸFñ½€Ø<uµ!zkë`쪮°q-‡Õ™+P\$UÔØK´Zü·ör³ØŸë¶ZM¹¼Â¦x§¤1¿(°X?c¨Â" éH¤Ú+÷YH ìH†D—ÂfêÙpEíŸ`5 ì]9)jé·ÛŒ·ëÎ2M®VS^a375†Š/ê+ÖÏÈ—Whœ¤?nÙk²ÙÁo5¨¿mw7k–GlÖm9Ù˜=ùïd32“FVµyÂ.Ä?Ë189k€õ°ÓøŽ4Î?Iéëk™õ嘽½V_OUTùÌÕø±tfÉå§n­4É-Fõp©É(Û2“ç1xôMöŽÉª²›;Õý{\<Ó̹SÍTj -ÖÉåÀÌ(E«!ŒùñáO›0åJ•ÔÔ{l`°±äS8‰}0zpàSÚ‡¶žð¯ºG÷÷ïÔ8ìa6å‘®ÎÞ¸­½yÃØŒ„nH›„Ð5wżJ»\?i¯ÿn¦5ùï¤_Ý£wÄZÙ°‘*kEESã‚<™í^“¬ mò¿ ÊÍfÒ¾ g”ÞFÎ{8AJa?Ö~0’`ÈQXÇÅoz—k‚RRFG‚ìû³|¦Ô咾ޣú©r›W#ŒÒ8º˜}BÖØÉCøå/íòh³À«©ˆÊrv®…4"úÚ^¶@Eýó ¶Ëë)!ôTJðÃ=¾¶…•™§ŒgÆŠÔr9õ@ÄÙ£¿ZZEî!xYô/bÄso÷ôoÁº%›bç…üý»{uk´ÿÌÍ!JE+†zÿ’Êš}¶i%™I:rj9Ð @à Á…*>Ž=9eJ¬ßžÒ/wÏ—2Ä»ª Äq1œ´v˜ 0Ö–|jz/”)Ô»na›[‚MHê:ø8‰D]ðz L6BâïëËÛ‡¿{†e7Û{$’”7›¸XÜìò…ûåN"ü²¢ רHyqnh³1¸®Þm°ƒvz#uxeö ¥çŠÊ}›ÏÅ?é~š&ða¡hí©à0•À>?ýs%ž)à†éèÛÀA ž ƒz.=;'úo˜Q² î¾?^=<ê6õ÷çaD¸‚©8 ^aá7’¿‚ÞËdJi­¾6’@= 8>¤m¡è‚…ú0 ÷\@£Ѿº€–,ï¹±}–72F·ã³>¾‘ºÔì|œõZ*®Ÿ,f#‹ëçÏkð7F‰ ±Aý•$'ãÁ‣ŽKdRŸYxòæ yÝñ½ã½¹î3&ÁgU‡žhµ¿eÄQ,²‰ƒ•Ð&é Š˜§„8ª84öºÎ!Ä`ê!Y×ChxƒN†ÑëOp(r@X’ Wh]÷ô› ÈÓì$1%²,tGS ŒïÆP@Òq•>šífMÙÖñ‚Ø”I‚X±­ê(Rq[³ui«…U1HÆE ømçÕ¿2ÏÈ–­¨ðÎs‹ÀÀÀ2´×j·{ÆF)6Sb‚ÎɺÛôÒO]ä—™Càóg?.É.q¸ $¾l$Ë*»ɤ ª¤¡ 5þ¤#·î„õ!HO0T.^ÔÉ>ý¡A6 \§ßBðÂé·•g]ÉÕÊ* d£ˆKrî}ëkÕ•äųÈp±a¨fþõlž–@; x(î›ä¹ÚVK«¢õT=¡‘´÷gÞ8æžÙæÁ!0¦à¾–ÝEÀî‡5©Ÿ#Ÿ÷©mÈ2;¢çPlÔ7”ãD“ràl~p/ÆYÔŽ¶žpf%×s'uéÑze'_ oçBh÷Ùˆ{¯ƒ€C®P$߀ãÿ©»Öä8r}‡ÿìŸQ6  Ð{˜ I–m­eKcÉžöÁö{²²Jª,‹J2IVMïLÄDŒmQI¼ˆç‡׎|Ù¨nAŸAÂAé첡f°¥ì>¯”ðÿMóãÝý凋«Ë;kBøn¶IÅñI¿ìòn˜à…4W‘¨¼äöïÒªàù|§Ê‚øƒÐ l_íD¨¥-Iµ+„ÐZµÜZ|6HT1¶å!ã«þû‹‡ww7×ßožº— æDE#kïv¢Xí¥*p‰èïp/ŽÊÛ‡{ çŸ~Sئ™ßLû²’’ošSKIÄãVäŠzRJ“Û(ƒUÞ]Z/¾ìä–%‹ùBœ!@âi®Vó&­„¨ovi¥g­$Á 'EûW‰˜ûLþùãþérùì3—_înÍqëàG›Ï€|TÇÆ hB¦Q2iÜ·UÁ·“~ÅëÛN÷ÿÎd3iÌ#5ŽTŒ"¼ä”_h;È>¨%Rq“ˆt>- ¿`Ë>¹?‹«§NÔ¯ñäBÙîG¾‚¯{d¸WÈ‹‡«ÕD‡&ñOÄM|›“ ±‹oØÒÀ‘¬Õ$¹Ø=-j¡fl¨Î‘ÆAe}Œž"–ßkÌBŽ,nV5/ÀSÒpª¾bþÕÉ1aW8EÔ„(â•0àC-UyØô¬“RÄÌ6‹l#’üüÎóÍj§¼@¥d[,&3]¹â¦f:« 
„ˆtÊTÌa²`Xêåpd_ª%ͰÌ ½¤Þd1kÞxeê…Î#"èd®Að²É¤'g[f\—¯Î¿Ù†1¾:L¤þ§?iµs¹ä·ùâmk$ß7Ó¼¬ØÉ¥–•›úÔx$C^FŒag=b˜Xʦæ Ž¡¤†Ée—°.ˆ2Ä!†)Š«ÇüÙ} 9èr¬’ç,8&IÄnH |$¬KÎ.;õY%DWáJÇèÂgßö%MFØeûìæ Æ œw©ïñMÛX=%±’nŸ 7¸ÚIˆ* hÑ?cU®¾·!"-®Vå´É\É”M¡mB⺠RlМÌ9Ínõ»¾þP÷êWçD¶´%jÁ}®i®W°‡¢Ï®ÌfFRÙ2d¨ä`‹¡G!»B1Ã'kxúõC]Tò.m©L|PR}:I+)ŧ {.¥bÎ1×» Ì:ÎX®.»ÇÉê]l™Ÿ€À-e÷“¨Jðßzn*å ¶©ÇÒ£&a¢WÛœfúAÞ8¨¡Wm†tƒ ÙÑQÂÐÒ‚¡qÛN5>år›ñ‚0Ž](&ÎMW7ôîi—¯v¨Ó"ö•Þ@Ÿë%Aß*^j—„@ Å™¤pô± Ý^£ûE”|O‹ƒÒþs½ÌõŒuÎé]²0í‹ËT =ø41ùpÜÞ¾<¢kê!Û3X=U;H B€†n˜àH0!ÄÞP mCéÑ7+DBã#,‰„ú•9 ÅÍ*Qz0FÇçVÞ°·ÚŽ®¤OcpÒ¾Ùc ¹uóדt¥ÄÀåòÇÓgu“n¯gµÙrȉ¢y•EXEßÚâ(MÏÔ䦑ÔËZ‹ÆØ.zDl©ÊØbƒd9i«¥‰È æ¡;·>ÓŸb†Ä(Ë›Ù~*È›1$ "¿¤ø(hêÑÞæoKÉÆ ¹GÞ4°nzDp²ñóÔ)Æœ@XPo F¨B ÖSŒ3ÅgA"$dTœ[¨eM"[WlÏDìÁ }8¯P }=UÿêâîþúKÓŽ‚œè¸¨L€PñvÅÄPÊ¢L-È7$9­_íd£èˆºD©\Á3C$×+:ÇîËAlž[»Þøû±cóœ6àTaj(JÙÔdŸžɆ¼<6ƒ*ëÅ…‚U .´‹‹á¸ÅNàØ1÷· %Ü0 mûlÅ’€b«v󼻸ZMã4¨¿gÜ?r‡dq†£XV²¯Õñª~²úµè ‹ß±i£!³CÖÿÙ2k™a«[‘‚WV§“„Pä7¹õ"õ|ó²äâjUü¶í[ŽæßM4öîbú¦*4¸(›¦-;½m~/A󙞲¤ÈE¾Å,ÞÆòfUm™n ÂÂÞcVåVÙŽ„±„[<{3)©)K˜2YBÊd M¹ÊåœPr¶1sUv×ìæèÅeÊYÂ0é“êEŽPÛômqa²ô@¤z³kR€œš÷ÆíŽ ¦&ð=Ã)%ìf¥ö™u€ú++d °y»›Ó°y/W«P߀úø‡tŒv°<"ûÈ&˜‚W·Þoá¶:U í­Øó%5$…yFò)ôúà;Øßý —_GU£Ù?nµ¢œ)±!,‚à«LIÉ;3ú\§É‚FCb´4y®Þë¦Iå9á· È|6!^¸Ä¶F3t#^Hõ…ƒ‰Ùj+˜C‚ÕªÁîâYxÖåÍ*¬Ÿ¨gÈÇæ`yFÖèWª¯—_ ynƒú(1B{»Ø|Ä~IæF€,[¦/j7³¹ºF&Õ.a½VÕP„Ö¹mÙÑáfU5¢0¹”‚“z%ï;؆Bmv.}©º¡D6tù©w„Ul[3Û];æ{ü«ìñu#Ï͵MX×hÛA»ÁMÀ¹-\ÀÒÆí=×<ù–\Ê"_¿Ü¬†m6œ¥EÞÀ6R/×ñ"lNKÝ6ƒÛ–p7ßü–žÚà¨Ø+=ó øfÝéمه›Õò ˆp ß!¥ö–ÚùˆØàȰ>Õž<´õÀ½Žn%·#À"«.–üúÆÁÝUC6’Y\¦¦†'çÁÉQºpyDò§›¼GJaƒ‡Cjs…|ì’‚}cðÖ^(а8y ní­nƒqa2e¡ LB±wz_uF®V¡¾'P/1m±º~séá›ÑÔÃæbPCÍþÉW“¦×«IíÈ»›§–Èu–“Œ™PÍ.¶ÊÍVÂKéN·<úÓfC6‰ðĤ®Ò6‰A æº$&Ʀwzî] „§ÀÝ µ® “ j§#ÔˆšƒU‹²«Ï„Ò¯`ÐÏLÕ«J£º ¢^=r³ûfGp¼¡Db¸=±wY!`uˆëaÒ0&•–Þ>wzî9zƒ¯»‹Ç,XÐâfæQ‹0ÁÒ)X‘Mg M6çœj1˜‡ñšÚÖš³ù§ŽÃI*Q³¼zìõ€‡2³9í—í®2›ò€­‹«Õ<ö2Ï¥Ò™uTÔ¡i© bdÇ(رwãñ]ÃëŸ/vüÏCúÏý”ÃÅõe“õε¿»8Ek“Æû ŒWÒAÔù@›A-CAõ’!¶Ñ8%/í’¡G4áö‡¤a MëÁ0¥WAæŠÉž&UL*­ÙÐk°q5»µ»ªãì;¼¸M9ŠKI5X?ß^Ñ7É`àúâ!Ôö1b°÷Q¶ ªGÈÖ4Þûˆo¨Q’-]` ²P€Ò…"—YÜ«ª/„ô ú®qLkB'P¯mÛP7׸:áiÖ $R…*ƒe,s-_Y^\­ŠqabªÞ7<ÿnÛOSìb\ Î ÛR&HÝÊ–6 Ù–è’elÓ‡2¹F"ùÂîóÅ*ç†@rxf¦êyKQ‹öÀu³mS |¨Ð6ý8.ñÍr–Ù²ÐájUÛ½õ÷Q}-oß0pËZ1J1±…`Ûò\?ôiW÷î_7WŸïï¿å»V—Ï­þ`õ@Îl“3`>Ä"ôÖΑ‚ÕEC¶-oA²Q çΖÕÊ‹WñÒKzßœ°UŽÛå%F«„éEy9½±~ßä"óªùœËúLNŸÅR‘Øš7QCšUŸiG¡˜æ^`HD„öd€?7û¹ÁZÌýÜ‘6[‹ß§ü?ß?>=\>}¾PsºãéÍ÷?®o/rqñØ5çE„'¥µ”€9gynäY‘<:ß‚NCDDM„ Á™eD•© *©Ÿc…ÝN)ùpóñòÇÝÓW?nï>´Íé½%õ¯b•p( ¾QYÐa _2J l® á¡ —œ5ŸÂT —Q—K¸¯;ØgÃ×Â*¹‡ã@’!ò S¢$ÕƒãÃä!¶TÙ©­¢t&¿bØf€Ä¥šgýêÒ“=‰òûqD"4 ]5 wÔx‰C˜Q£[`>BãÒß“ (…íûqæ]7;T #ÐÅÜ þl.>Ü_¹ù~ýñÓÅÇÏ?Ÿ¤Í&¤ŒLx›æfWö&…5f^•‰Õ²Þ ² ‘ ?9qÞ$1ªãºD¢iQ^JÂHªR=u—ßàÍãØ“³U(e'A‡õ¤ÐŽBÙ&¸ FMdGå'Ôó_ï/lhÉíü×#RSÁ4©åElš£Ìzlr¯Ke :AŽ èô›À‡uNÚ]ó}‹Ë”ë+SP‡þ·EÈË#ú¦À‚LÑ©-°ØÍì5ñ=†ÀdIZATþÌö&}ƒ“ê™qRg:2•¥‚drñ|ýtq³ÊM§¢~"ñ.òm‡}w„û 7vû¼pF=Þ“n2[ƒÜuMí’Œ¤†Ù‹¸º¿zT&=èŸnBgOžß¿Í o@=¤Ìlñ®HO“#|+‰×öäfìô¼%Th$BY#c~ÉÙ‚R# Ûõ£Õ”põR•ݧ%µôkH¾ û{T>JaõË<žtÉÙÃý‡•…ë ÍVÖmÒc/>¿X¾ŽEš¨eF–ÔWߨÈëD])ȬS´W¡ÙdNBšÓvÍÃÔ¼PlBŠOáÜ | …ƉXHøœ/ðï¡ú>‘»I; ऺI®¥É;‰3`ìÍj¾A¡qO)NÖôWêí6ͳ‚)5%¿0ô@!º§ÒJ„‡ê^44aW8ˆ!AˆÝAKmËÃÌWïªø*e¶RÊOr¼\¬¦Ù' ŽGÕGd[»-BuŽ]‚Mê;»ylgir!z:©}öÁ‡EæïΉmó€Ü›¦†éU Hˆ5½©ÏÀ)6BÌ®gœµ^p©JIŠI¥˜ò}÷ Z 2°"˜Ü&‹@1ô(’ N¢t6™´oø8‘Ò}»üªB¡Ìx,„&‡X‡PÜ«¡wå}3ùËzˆÐX³Sç= l-ä/H¶x,èõJ)nÓEÃÓ*Öówº¸º¶ó™ZùâÝ 5†¨"M ÑU³GCzr Õ—ïÈ/éû%XËùì4$Š©wŽM­¯ãÂÎc eO/¥d ´ÎU»7å|ÅÅjš;Å@ Bò[¸–PÿC=Û<…-˜IŸ(òÝàjÁû iu#–¹FÞ¯÷Rì.²HƇ›U¦Õƒ¾A[Ø–Ô¨°“.¶IS•3DÇ/Ôóà9 ‹¦Ñ“Æ"à ¸:`‘áù˜ä@“A¸hÝŒ½zV‰0ìý&°Ìà¬G¼[‘«a$ÄbqœJŠÌ®0F¼»8ä3Ÿ/Ve~ ¬Ú#ѹ¹ÖÝcm âðlÑÃÝý¯:ìEÉj­AeV‰p¡ÈkÈnË[dÐEô†Öxni-%:Ôú¥ÀÝJj•˜½u •]cÕw¹ÌØüˆÌâj5jÌN_ý‘³«q=g@4ê¯lÔcý®Mû />ÅŸê`zÆ+ºº  ?:¦æ'¶éD_¡æ>JYÍC~ÒxA¯!{ô³Ñ&OÏ--±IÍ ¥ý àÚ©æqƒš¤âr뽚Sù­’õÁW«Tsà¼?»š#bÓÞ›Q ŽT÷kÏ.Ÿž.¯?Û_^n‘Y4«~xúï«ašÖ…Ê®šzСì‚FÉïB|&еÖoöÁ󹥃BÛ8¹' Âß$)“¢ÉYÇr9cbC¾¾(o@‚(1B ÈRÒ!œ] šÖˆ·ÕN\·u¯^ <…ȈväPækÊzç‹«U ¨§ÉqTm:7ã¸iq!E‰>‰ð)ðÙLŸ‡©±(¿Õ—öN»>ÇPvÚ™òÀlŠŒPdýlÐü¹åAšöN°Þ\íXF…ªõ˜¦„aà+ºß¶¤ÈWÉO-®V¥Ç8§ÿÝÆ7Ÿœ£Ð÷ä¨qà,›î=`7ãªñA|šlnE«8 ìç<Ö—v¿ã 
êïûúÐ6{Jáhˆ×˜:x Û‘àRßjpñïWƒKž¿‘\¹ânÇ_¨à/ÆìèÕiÊ»Á•¿Qý|³|ý„¡ÍàÁJר{l‘=ÅàFä€ ô,›$Á¨wÏhÝ…ÎUßõ¸ßýÏÍ·§O]–:rV˜Á¨&Œ%a"ðٴЊ$33èk33qÃZY:¼Cˆ”zvB’ ÿ„0| ÄÚ[À³*dðŸuXv{˜©ÈØä³é¾ãÉ*nuBç‘Zv¬Øf¥èñí´g¬.Czlö»Ž¹A¾¥Z¾9QÛ‘bÑu6…d,³ ·Êq}²¾©I©ð¾[UXÍ7/¨ö)àß¼x×À^¡Î1„41?0f%£ž^™£È[Ãf,j§’9»ùñH‡ ëm&Kj؆CÁ©ò’ðHÏ\nÁj?¥4ã~ºýùãóqº°~¯U?ªTÜÞ|üùõÓãýEÁp ‚áxa—˸ªÄ¡,s¡³u¦ìβAÔœo Ÿ0 9eÁw•}ëGJ±U6ò«‰_ÿçØmphž}ÝV< 7ìzP³'ÕH‡Û×Þ]¥R¶øûH I«ÕôëSj0×Ô3´¹#×þn×|O=+Y ù(£×>Aõµ‹ž*«±/•gÙÉó¹°ÕÑ*î}q Ꟙ޸uëgdÝvp¶´OBƒ»&`Ë,âè¡)ô¸íõqìþ«Ñ€rR£Ö»mA,_fP*J å½·…¦ÀAXbjY ®Ò!jÈú0"ž\°UT0yšÓôžgóüýæùá¯3å"Ùþ±B.P$”äÂç.‰e¦ER¬ Ò²w\ÔóDŸdÀÅõ‚:f!ˆ T¯´9³•€û_ ϾÜÿøþp÷|óéö^iwó|{³ú¸O$0艋Ø|a*‹D ¡°’Õ¨s~åŠ.SÂPÌÄ€E™a„ìα²M^¼:ñÚ"óR_×è8ŠSÿL)315õ2^ìÅ‘ìm©Ï‰À˜•a„!¦ ÎÅVd™â7ªD¨:}u™=ÍÌèØbÕc@rRøxòáqñ^¶ß]„  ¡$è Ÿ8’hJ*m6£\[@|O¾ Y ì„e‹MëqY¯7ÌËÿ¾ÿøùééÏy7ŒY›„*ÄåòHÎ=5‰³sŽ_É5å‚Ñ—ñ¸ÚfMKâÀ1vËŠ=ˆ:ÀD-]ýã\ê BÅø>sÕE²X‘%–®…¤Ö>ÐECaÔl’zu–rÊ›gaË`ÖÁ†õ†"P–Œ誃“¤ÀGê1L²C/8èo2T—õ7Y 8—l'|I&ÔuÉÇ£UV.©sÊô&…ùæ!ù,Ý{|úe¤ÜKÌ`#+ qzÉHQ@lPW’ É–B®h4CBdŸl¯Ÿt8KB€ºj]¢zwÄ.N›<|ˆ×0Æ—»»êÿýê;œ—ޏ@Šâ°J:€JÒ ŸñXÑgŠtD½%ã®9çºÒºRš6†‡)Ä4­“a_L[U6Ùt›¨C—„"”o WÄŠ²u0+rL¹LpïÒ•…!¢tx¤ŠÈn4Ñu¶æþûºIóøŸîþì“ʸ.iñ®XaVªD, JDÊÅ+V´š®H XÇW“ ú¨.òˆ ˆ:T¨ak#ÙzðÚQãùéñþdÐìþ‡ßêo<XËAû¯»ÚÚì¬è8uq€¡ÂéµÝ3EGRÎé]SoJ¢Ô-*ÈÉsƒìè…©¾OëÆîzb(¼Æu)æ9ÙP–c;Ã¥$J>—²ÀòBŸ)²zz×Ú²ùw6qD6€zÚè¼b çÄÍ7ЪêîùÃ~ÝÕﯲñûQ8~?fdÕ¼ýùýáÇ/›Wøt·£×°CÙÞèÉ£ÔHM€X’ l/ïšr“ xƒ…q›0%„àxSWŠ{z0?øÓýÿÝþ|ü±ú‹3Ĩ‹µÜ{¡"~ø¬·»¢Å$![Ùõ‚€êª3#ò;ƒ’ºÖì H³$\7Ñ–upØ/ÑS…ƒã^fý\ôpS~ÏÎ+‰fˆ¾¶98Ü`¹*²"U°>ôÑw ET/Ù:Øqp$©u ÚP’ýÅñaß=øúq—Lø\Ñg\¬)"”eÂJ>}A&Œh.B]Qe fD3Tߺo©ZïÄu7Ù# ùžù-еɶˆŒ&R½Ÿ¹¡²¿¸»Tb‰Áê#^îñßöPwó~€ËëiÊéSˆ¼$µ \<ár|ÆXDÈûìM°©(ëˆa° j&!Žë=×꽈ÍmNzÏȱ(Ù¦­ã¹*´^ßIˆ^BX«ma&H÷HÄÝ#üüõ>ÁYNת̮»ôâÄø+Þ¸—ÉÞUgóÙÄ¥m‘%çÇ™î¨]¤äF´ÆònöIRMÂp2©šh]ÆTpïÔϤÃöFˆT¨†Ë3KÌÙRG M°¥ì¥wuŽmú½Uj égm×c쩼_áðfºap~q^=N¸Êz ªÁ•–c4ÄY¦&°øÊSãÉö›¨ëÅ6D}·k¯¯ öE '†åŠ€#”öwû8ut·«Zrã7î“‚³,Vo?8ì*¤s*á„-3>3sõ^2­qœÙÕ\´>ÆòE›ò;ýVG« ¤Å9f„&Ý” ÞWÒÍÃ<ÌÉ©S³vHså‹ô ýû’…O‡,ÂK³ŸK5ÝŸžý…ûsõ‰yêGõ’ÃpÝ˜Í éw žï Kc 艢3ZÄwåA“‹DÔ<ñ¬OLçÙ*΋PQäiÒP œ³@Þ±8’gŠ­â'¡¡"|špìYèm8&„_P¢y"— wPM²,aù Ÿku$ʉˆ õÃKvÕÍ»K•‡0ú Q»˜>ܵóèl³Íº#erø2 ¤y7:¹€7¨oÉíðÂ.QÅ{‘ÏsôÜvŸH7þÎ Ý@ñEâS ~â«?«H¹,j%œiD»ž%5i`þ˜ÏˆÜ"5„ºýQ„ÑÛϪ W÷ú³ßïÿ5F/« Þ¬ÐKY? ¼‹ú)ë:V"šB”KbбIAÀ‘^k##pëÀü(1¤EM1\zÕX«zb:سk¬Ï}²9XvÙõÓž]ãD^8 — 7ø‘€f™18A*–Ðø©dxúiü ŒÁœiªgh²;пàF–tüvG~‰à~T7²»=zçÊ•ï¸)~ùvÿõóÍ÷Ü<>\ý‘Ú–s(¦·Ý²®0;€žð(iÖ#˜ZïÀ*ñL³:;qMõ]…Õ© ‹V>„1Ãêô©)ù6´tÇn;$XáèYç§|¯t-J†Îýøùv‡[ru÷þÃÍÝÕÿ¾\¿¾Úê#Bo»Ï¨ÂZCÏÖ¯Þ =Øb¼W¬ó¬\5 “K±låz¯@1鱉®œ•„8ÅÊq±aŽˆMVnû‡¬œÜV®ñÙe‹W\w·zmñgÚ’¤+l•°gCŸÙ;ï8Lä^ÝKgžÅÉâ J…Å©Ç+¦‘*©ì†ëJSLN•”"Ŧ hüëY†L.n`q‘—¦÷íæþáöÁº­~½{ü¬¡Û·?¯¯>ÑÇo>]_ `ºÂDWò!}¸‚pý)†O7r#Ü–Zºs©%GöiìÖdìY&›ìáÖíó~™M³OÓ Ñ„Ñ¢më‰rGùVÔ¥8ßjÚÇð¨"ugXºžŽàØ'¨LÜ‚jdÐ$/V¹ M nÁm³¨ƒ§x0¸ßrcô¤^k£{›V.ç’m• CEÄji±ì‹ç@ã¹ꛇ,úúÕjèBФÀSˆ>8г×Y¥ˆ]<ìy bÊt<ãŽ1€¡|Tˆ±„ê¡ïêóðs«·)óÄ´°‘t ƒõW Ñ /ª5pLÒ‚.lÒ,ˆ¶·6h¾Xk¾DKŒ* mÉKìÍó[Ï«W«eHî ü‹/ÉÒ—¨3ò„167x[6”¡óNÒÎSÜyµÀSÈ_.W 6n3|gž—¨.µ¢…¤ÞÃKÑÑÛJeFSVB™20Áû{§éÍþÔŽhôñ7CrIÐ%ø÷P‰´@ >T](./ȳ8¯„2E%l•нK—V‰§-“ÆÚ›^å»K{C¾÷¯ƒ|ï'tƒŒüÔWéÆ~×ö¬nDŸ›Ú]‰g’nE6éF@ãŽ#º\Ï6*5ë}ä¿ý]V%dqzZu7ˆóÅX#<¹”שÂA*“TBÔûPhS ‰*…!•ÀÐQF€ A¹Áí Öê¯ïÞß~>3¢y}ý꣫u&úÁLuB‡Ú}Q[ÔUä´å °)Ú"KJàÚ. 
k5âÓ–H ­ˆ Fí ¹½™dP²m5 ÆbŒb6ÈXIb†-%lQƒ¨šÃ8c ÷]C1lÓ’³QežÈ—¯ÖS3Ãc/ A° #¤Šlˆ³¤"è²üŒ+1ÍP}jHQˆ/­"{2“dÛßÎcc¬±‹%~ÙO]¼Xý‡ýú/¿ï¶Žf2rê¶%¯upO¼ÔgÕ)»/¶’È}K4*çxi}ßS²´Ú*y š;¾Ïù|óýþözÚ/w… „š;„¸x‡ ä‡ÎW2™r‰ÈâÑq¼´‡ >¬]uÀzM«µ\€Lr›>¯&¤÷TÕB ‹j¢rÌs­5IM,Û€ti5ñ=EO}{§Ù¼¸Ô10ÿ<{»orîtáx÷NŤÊq÷õú»^fÌ9äµ%Fç¸Î©”†mMœ óÚò,¯IÚ$n ûÁ0:Æ¡E¯žµdƒfñNÆšTÛA¿$ô M/)ÏÕó`á(|x~³ª~&.†»Õ²JÎAÔËh`˜A¿ÂÇ®†¦¾tLžü Ø}=êý·Û§±˜÷o;ÃEÉž·qÇPE›Rˆ…:ƒI,dAÖW"™.úEc jš‚xdUX¿"v ‚Þ„¢Ùy ¿^_Cò ¥m?¤•€R«Ëd9°–ʬŒRovˆ-*áC°ö•0N»ay¶øèÇ;_¿Üª’¨D~¨Âê£çFG{#£"‹Ñ)Tµ¾qÑkø'?{ä5Bšrû«f xnS2?ƒC*‚Øsûƒ›(øïâ6ŒMÕ…·QL3Mh'ú¡©Lrz-Á¥u‚ #ÍLF¬‘Ð œš­‚Â⣄š{Äbì²BÏ5¼V"™â#ô©½†ëM>ÂWÂ@}…K]ŒN³vh^é¬Ä§ùqÆWßî¿èçÞaj\ƒŠx½GºŠkÄF BQC$Û] i ‰¦´”¤IEÀ!ŽÌÕéWøúLã¥æ99ÌNœûÃ]bš/1ð_6„²²¦°fŠTÒð.—¦d5COô¡5FÒ·^O"`à i`œB£mI¥)MÕ <- ía-Ñžw~q‘SEòIêΟª½x6Œ\½YE±!ì—Cÿ#ó ÙÑ˪Ÿ)dyhNŒ^FÕto¥Ð£ÖƒN=Åjó^9º¾‰k>ž¸f:>^ö†ÈCP*oáBgÒ^–³YÂêmÊ×4âðb¶výC×^Ÿ”ÕOQº´&@OÛœ×{‰B܆]F+h kt„¤¨šd‹ ÏïUS_ÔgrÁé­Ðà©%%Ö¨Õ ZðœM¢#Q ÞÚ×zꨞZkErGc‰EÊ^<ä±&¯VsnÑ-ô/a˹ zëRŽœ›øÔ1Sò ˆKzµãvÔzxuwûð}7\ð\Qî Å0£ ¬WPÅ&›9õ…µ‰2gd5#³µ\bh|H’^2 (ú‘zš5‹æ+aÜ¡–w!%P)”ÇS)9ï Ùº½¹ÏZøêÕj˜lÑ»üqýÁˆIh @¿â©9™õXKE!n ùhÀò¯@‰÷~»{TóðËù½ãŠopu€n¿wßvUÙÆ``¤~¤Zˆ‚í¥´¢xÏ,×ȶº¦‚”á;RãÄ©2ÙÀN*rxb =÷9Èo m²,Èì°ÍÐÕ=‡è‡ x[Š•œŸ@_Q¬ÈÂÖâ€Å ¹0—èÈ»x̱âŽ9V¤…aåBŽfËóÐÛ«i.è=-–^œ´ 7þð¾ FñӇ`s#ìTɶ=:’WÖ$b7æ1mr»Fn}4e˜®TGôêK©pÄ$ňK!— ¯¤4…˜.-้ Ëà4¾ c… â± ñ˜€ÇŸÉÛ»_$ú~ûûïx£€o»¥_a Ô3¿&šr©ì˜‰í“Ø,ÓÜé…7\‘ Ey*š&a–3ò Ÿ)!”ñ'`ðpqÓ|ÕoØ"„ÙãG¾ ¡lm€‘R.„BdSYêBI,ò4’Øs¾áä‰&£‹Ä¡eˆó½­wVe¥M“ßüìËóÿ$øtµ÷q Ø<ïù'H~¨l¤GØ3\îb0\¹)CB›æpM5X’@UÒšÊI+ÇìêJB“’VëWGiò¸Dá}¦WÕ©M’Ö§:þK«'Á:­y^P¢4ÕãbÇ^êyA‡<Ä©3EC½–±DT¶€¥%øË€¤˜9â@{9lÖälÑõ ¾¦Fß5À«Q"…Z§!ƒçë­|,½—è~À“¯ÑgîׯVS£×Ç ƒqSJ8ãà¸g„Ò³Dää$ 7WBms…­kâCÅV Ú¼/gGêׯVÕ\±YÜèâçÖÝH°!QÆç“jáápq†nS^‰Ô™–-ŸÞ¬æÔ ,‰!9l:6H!º8rAák† )IÄ…Ùº@—eŒ8õ Õ–h^óvSÑÇ `’9©z'„ôÓ)y^í¸µ…åž½¤äyÄY!pG_T¸Âjñ›á–a–år0µÄ "53>Š9Ø¥}íí¢™‘‚™ÿ€}U¡ÅNЋÓ¨÷;ñL¯ŒhZŸÂ܃©&xËz‘…¾<ˆf t5/¢YvÛÀjBïÇ<µlÞNöº÷23WK Ÿ™{Š33ó˜ª2sG¡¡ÚuKŸ>M=ÿ±) ™_ïmXÜ!\”žàe~å‰ÏŸèCüðéúC¸át6]Êè̹É‘}EÐ8±3 ÂQ’QQNsͪI !\ÅE$ƒ[(ºæ%¶\ n†o6ý÷D©É5ï6&Ã1#n`ÌI½nbI?ÅùÓýû‡ï÷×ßï;¹œ}Hð¶[ävŠ]ë)YŒrž¦p9gÅ4Í[S+Tu« K;*²í¿È•ÍQ³P—ÊæøÔ5:bê9¼ZÍ“^êò¦ž[E,ÄaƒPH ‰/ í0²Ö0x7ÿÔÌr‡TpXEè9ÚPÖé‰×†Ã=¤ÀÀÉèvûÅÚ„6/LRÝ ‰¡&L"Hʼn:•_®¸¸Д(I O@¯ó4T)ǪHBÔûÅ%lügM»ê<<ì ç­àt{¥(î cÁžF2ªfø*À´ñÖµˆæMšZ£¦ö•ý˜"—ïÎÊJ SFÍU_ƒíOµ¾u9 G…\3÷Ä)Hè§åƒ¿ £ýæÂ!ñ sEk۶ܱ¬Ù‰„•Pf¬ñêSß‚¯a½rGCþ8nQ6&OÇé²­o{ö Þx“Cýi¬–{Vàe$àž2[ûà ¦aù 1¶¥ÓƧkPíAï=ýGc‹%œ^«&ÍÇz_Â"hWÛîdùÞ¿þùåêðcmÉ Ÿ©’:ïxh¹ÊBœ¾Tký4CÉWˆgž›Óãf ÂÍ¥8SB!fQ%V²˜âæp!LM$‰¦ hø5Cš€¾C8@1Î«Ž¿öLó:–i!ͤ¦|¹Ü.¡­¬D2¥c™„}[ÜIzË;òÁ·ðÁÆe+D|ù–å.½ílVBHgª%aW¹žZ¶ ú p¦ô*_Égžæ…C®H÷€9Ó=ʵ¢V¢˜â„UMÉ ´“è¿CF÷tM7:M„£lšìåP}@a6ßI>E‰WXcO¶GÆËc˜}|RH]PÇ',P¡¦ÿ¤Xž®%y0Ìg¡L²Aㆄ¦]ýÇHI‡l7x!-E8]~ùØz[¢—öi”õïrÖ°,unƒµot‹sIm|K©ÁU£¸äp«ÏüÔÀº­KÁáôé a½ •¥¹#³'MMFá¿çÈsÚ²‚)•ðŽi¸CÉŠzÜÜØÕJ|3–‚,I¢MSWEÕ©0l °õTºŠyŸ›¿œJ׃òdÃk':x2ÈYݸÄê©ô =ˉg0P䱪¡Ö»”ø¿SUÎ Ø}®™~¬rEþˆÐ²³—+©LñiO°Õ’Ø”u¢ì$m¿›zKùŒãQ8qîn Ô8 §˜a#΄uÛ$;¿É$»-DJ“£ãòÚõ²ÏÝîwØžÿ¸­!‚~CªBçŠÎšWޱ8¼QNóÆ´T‚IÍв@iºU…s“"+©LZ¶=}ŸêPk€ ¸9„’· ¸ˆv°o$̰o¤ d?.ä*xŒC!V NÄìôùê]ÊÜžpñNÓT|I¾±úŽ1ò ·¨òæùý`K×u ¯ b¾Í5ìL_/šÛWKiúÉÕþÓiCB1-†õîËÍ5ôTR9qÌϺÿÄŒ!!}j¶ˆ­Éˆ±.`ع@ÏÔX` ŽR3dÛ‹UðŸB AYRpC2Ár<ÎTeJúrú²’Ù,Nð„šn !+ÊHÛŸCàLUŸ@¼‡ÜÒ;u9PK¿üß«ÓôÕ·¯w·×·ófôÚ_Ä©½—+ázç`‰Èä³K«AÍÐ}ꤿ?5i iÆ#N†´„©§+éбáÎã6¼‡.‰»$Mºë‘q…±Û¤FÎÒCÄ3E9`qzSV'ôÖ r5Õîö ú‰RÏöU`±ÑÃq¢¿TK¥Wû‚¢iE)°Ž/>;í¸qÎE£«7«!ú nñØ£ªŽí|únO½ ›ÖaöbÄ FˆDLþ9È1FTÕaþçŸ_Ž«0þuF"ÖN ÏÕ`žXßýªoúûÍ÷wÿõßÿù¦çâ„Ë:ÔhK?°‡k¸vÃG0ˆÄ¿½}óùëǃê»7¿½yx¼6õz÷ëÓ£ÿýÛã÷w¿þÄ'üóýÝãÍßw…àÍ®jýŽP ÂÖ%˜ž#y|óÛoo>©™?šp{»¥š?‘žLÆ‹“E 5]~ÖÐün{Gãçý\øCý$ÅS™ŠØËÛ /K»–™ \¹‰£óQuE )w¥8P»âJ±‘žÒ•cÈÐä2®LS$㎗¾sb¤Íï ×3wŽ^³>¸^Þ9^`±\}Çíìß¡ ~*tt¬ê# T݆†®gšz89ÄK’(=ÅÇ`ØE¯Š ƒ%˜4Ç«X/Ûk ÇbÀ‡ ³Ï/SÁü«eb¤uõqýƒÅG}gÀÿûßÊÚcZ r ønˆý {8dÍ#x†“€Z†IZˆŠ+Oª6¾}¶š¸sÌâ¡Þ¬" Мd Ó‹’ôúK²„ßÀ ª8}¾ðq‹twdçy;9Õâ¤c·J!qù´5—C„âi äj«7«âvÇ…-{¬À©:¶óׯ==z¿uÊgb„ãÛWÏÁ*•¯rŽn_œ—úq‚Ÿ‘úo’^1~ºæëÈ××Ï ¿7ü^nf§xCO²Jå"ãP*7ÉëØœVGôrò1ð°× Õw 
,6ë–*î˜ä5,yÄ,:ÓêÍjîNÆÂáý¥E¿uÌ¿cé>ò:vêfýÓÌïI¯ÃS½Ntî^çhðÐ0ÉŒÿáé¿þ,Ämó9³žcåmÈ¥>åÇCìRöã·7›z0’´n‡ÆžÃ㬖r‚jâ¤A)³wçQ÷ï+”¯^¬Âa’Å ¸ú¢Å$]‚˜IчMähqèÐû³L3WóbŠð`}„MGþ*´ù«¾ßºòNˆšÝSßorFe½æ4Ÿr4ôaºø~a¦;|žô»îK\]Q~àÑp [ÞÖWìÂh°±Äöªz¥ ×Õô>)ÕàéÔvxÝneïøòñ=íÝ[¼µ3G ضå]¸xÅñŠŽ÷EH1R¹,|\4å8òqh=Êî¾×á¿Ï_žïMÈß^n~Õe“êÝPÀ×B-â)Ú‘úpæåÔ c“ -'`Á\Ú"ÂrÌRŠÏdÒbS] ˆ(m ±2º­9‰ù<_´äÁ[HP¨hïÁ,ŽZŸ7X ymç…ÍZSj-“þyX„I¢ü)ëL>ö&Wé ˆ±1Q5p Ç=Æ1}&y¹uËÒš…†ŠJ¾ÔGKi“!åXgBê‘¥Mv휇nmŸ]ê‡Ó&1çúHÐOPcضD´uƒWìÝEr0*˜öwmÙE;¯íÌ ?Zfa7ëàËÍÝý—¯ ½]ÃÊ‘PLŽ[æ2§.H7À$æ²ëÇÉR‚‹xè»N¢ARjAObÌyÀ31u@ãôÒÀ„²1“žÓMB9ãš(ŸÔMÒÕóÕª DtC“ §`1RµÂý½aö~ò¬êðO:ƒ2: º§WЕ}Ö.² >Â7·,°É¢x©XóŠU‹+š½6›I¥Ħ×vÙoŒ±âýp¾l“s†=}H)^…jnYpÛÎ-ë¥HýÇÙ3º)š‘e¾¸Â#bo¿kŠûpøÍ²¡e!ðP8=P*,ÅSO1 Q©öÄÔ M“þ9ÑÝ×pÍÆ"eRYL"é¦öÖêÀæ`Ãøá‡Ò·Xšô•Ýu&Jò¾3¨ºPb5¦.ƒ Ú$®(â Ž"·Œyõ0ækš@‚Ë@‡óbvÉ¥yô¬©díú¾´µJvÔ|1ehû˲‰þíºù3V±@ÛbÈ|Øì¶ö͆PÚÖÁ1 ÷‘4d˜i _›uOK‹Ìö´c“(küQøÑoï>ÙÖ9„§¹ÖPÀvö­ó†P ùÇð£}ëå6PuiØ8;l7f{¹‹ÝU¿‡ÿÜÿ.Ù³£ßíù·i®Ý_ìøýK²; À«åͽDZϿ~ùãaçG¾˜yÜduóðñ§ÓÎ2uûF>û›dûOßö É§Ì !¦ÁAȯVhO°'´\¶3‹C ‘×v·FWÙݪ @˜·#^=ú«Ñì~á!Û³4[YEskz+G°›9Uç`Uéíºñî^?ƱÁ½ÝyF0-™ìèqë2‚•ß¦É °ø:ÒÊ~ýür ʨ‘—aòÊoŸ÷­\Î+¿þJ'+Ma€²_PÉjxÊ.e¥¼®î¾•øwHàË÷:9 t|·p’ÜõòleøÓ,dÝ·SÑù3òs'¼ù‹Ž´zÄq•²Ë¨†ÑF5>Æó™uI}žƒrŸ;çZî3$·=ÉÂ^ùŒ×©n!ÉòxÃ|˜ÖR’,ƒË0gG!0B;Ì™´k ±£¤ÍWûaPÍ&µ¿OZÄ9s Õổ3ç'ì—V3ýÀ^ =²`½#V¡¸dÙ#œãÁ•äè2ý6'vàÈ–CWÀ€þý Gެÿ>ÿNða¶t{Kw<–p¤é}f )¬"éƒBfÌÐ2æ‹TQÔ†¶DŸž'úRòï4‹‹ôF»D¸Xž5ËZq\M9Ó'Á¢¡àcâ–œeúæÏX•éSóÄ(Íš©÷ÄLAÛùàöÆe0¬yÏç·¡Š>QÅ™%žÀš3´“ÉMúW’¨åNí݇.þã>kÿþðòõÞnÎðí&SrsžÉ¿ÉÂÛŽ\š`ãWiÄÇþd@š.ˆÄv‚l@t{·“ç²Éï/· ›µ] bÃ4ÌIJÀb(ÑFÔ(·¦«ÏÈçÀ›®ÔÌóEÜõvh•`×D(¹ –™Œ:\|¦wæ –xƒ=ì=µõ.ALé¶Õn|í 48é´|ššï‰ÑÕšç?/­f½ˆJ}Ó|/t%î?F<º8)0HxGt=Òƒ\úƒ¥+úÿzí!j©ÍK—›wÆØ*éõBÚ½˜7éË9DN|kI³—3auÛôÚ†Ü[ƒ-iCÎ$±ü™Vc--ÀZIÜ"Pƒµ‡*Ëëj•\ä2[Y Ô&cSÃfÙjyð”ÚÝîàsZÖd§1ÍøåKlz]û‰rĬ™>ù]œ½~è9ª£•›üδìÛÆ6*a»a {]î:>G\þñßþùÓ/÷_¿}þûø|ÿøéo_>}üíááÒ1’n™ nÞzÏ|¯l`ãÔKgõPºÌÀ¿öÍ}”E+‹í»ñ{WwÙ£JÔîMlÁ¥ªåÕÓ;b5×}À ½Ób9¶í&åÙtÍ¥ç0Ϲù}iUÃÂB*~v„ûT¯tþ#QÉ䈔¡(6M,¡n0¿·¾Ý{~‡Ê»Íï@þaæwôB)Z’lJdǪ ­F©Pøq˜Ô¾‘Ë(Å$AÊ(å}6žŸ-­&òãä0 ÕS¥õB)¯£/•’éüR)i":ÝqÇ犃º~(ÑûgúCN0)`ìV8žù¶ùնר³`<ómý"i`€ô(„a}iŽ_7BhÿTÁÓ²SÄÙYuǕբ½UØmÓx´ñš‰Ô'ÇNC¸àIGŸˆoœ‚HÉá›Ý²þ#´gÎþÆ™‘U‰„³gΜCQDaPŽ™" ˜R ‘/1×ù¾üREÊ,½ržµ—LÑUZ£àÜŸ&‹æÄ#…÷§²;Êr ‰;Ôb´‹¾|(¯ ‡²zq·­Ç0ØŠëêÅ «¿Ü™„"c±¹‹qDð’£ò˜I§ÃµàÞŽ+-8÷{lÊàFŸûIÈpŽº{5¥`Œ¶e®ó5€½yërè0R±¤ý‰=l¿M($¸1þéü«´–›»Ï¦•eL¡1èPœe×TÑäÐb «é鮪[ÝE²©ÈxŸH›ÑE.â+Å<‰ÀQ*=ê.Òk{ZTåÖc#²í×&9g¨’Ò’-æ $ÛrÞ×q&…Ȱ‚—® Šn@õ›E ξ¢ó›"ëã·Ï_¿½|Øÿrsûñña7)`~€íhYa±[ƒ³5*í¸íY«¶=Ûo¿üzÿõù³aÞ‡ÙïÃ÷_¿<ܽ,TÅ‘4M*4¨gâÕ]­ÔºzÉ8„ ‚R!€—Ò©‡à²óÂg"êqê%’$u¼ñ©‡t8åBp¹æä”ͱÍ…'½TU°±S¿âÔ[C·?iKnŒÑθõÛ¿öØêµÿ÷ç3™§Šûß{ŠYtšïä=ʨKZÁ9¯€¸õþ^Ú>õJdò i@ ¡[0× xb·ÊÙ]ä¥Ôjð0$…Ë©•äÿ~[]ð ØÀj4æ?i[ÝÞFÐlK+rº>okMˆô ©SN×¼¹ôÈÁ÷PÕÑ;B5ñ Xh±q|4gº;r<yúçZ¥E‘©‹Ìãóé5›O·ðÄ9üÿKÌ‘à+ à‹1EŽÔ¾‹wCG‡7¡adä"â²`9Íûº3Nû°Ž¢éãï‹JºûÛrÑîtHbÎÔï–É,l›åÕ:Ǘªx·üGª•Ž8I£ójÑdÜô$Mï¾Ï“Ûj†;APYX¦ðó2pelÉ*r*©ßáú,#¡Žxj)ÂiÀ‹"6H>5èål#‡ü8’£èú otÉ2‡7»ªeÛ”‡S!<„§èkjM ³°E¶°jIÐW î hV`yç“k‰iÁ©‹Á¼S¡táóæA›9X@ÒÌa-ߦGá 7‹¹ÇÓ>Un¯ .¼M_o((¸}dçó]ÔMê™ÛBËܪj·È“OŒ,äã²îªùìL‡™{ؽ5h¢àÛÚ~šh0 &¢"YZ ù½>áÄžà|ˆq_ÌF ZÒid?ø;Ü~~þt ÓãÃí%6ýªéyÝPÇÄÏÔK…Ñh,Mždc.·NVc†¾›¯Xo5ì 8yUÀpR×Õ# H}`î—†ñS²oŸóüËÝ—‡ç¥qŠâOÍâ.oR†à›â³é4ª·iBVID½ö^R»ƒârãQt‡él×¶ƒy×þ»4:l=<)qÀ%€ÒÝ£mÚU[uÀÖ£0!%2‚mk}iŒ½ùñCû˜å þ§f¡Wl@ îxª[ab–Õqø9uÛ†fæ+«”g¤¨ç‹G Óžbìd΄ÒcÚ[›×M‹¸»lC–þ“wѹ ÑÆÇ÷ïÞ¹á?Q{m$޲d’e—= 4šZÆäŒrŽÌiÉÌhÝøž¾ŒR¶!£ë„Ì%”©UÄþìàȦK{¶úñY¤ìü×þŸ—9[×"¼õ‚:n‘&³G ®)•T/©n›ì'UۨP†ØÃPÍÓr–ïbé°é~‘ví!Ûâ+n0x( f¦’ÚŠ=(_½Q%á¾@ËU@›r4 KË0aœN-à0 nÐM¶)c1+¹«á†¸ðù2/Wu$ê†жð§r´Íi!vkYËJö+/tF 5.ñ$_Ác“gÈO¤< ¬K‘K*FÞ’my¢Ã‹\b܇ 'õ…ÎþÈ'ES×\¿Öåúãr^©.@2P×pˆÞº—x»@¢CoþñéÞ~èoOß¾¾Lw¿=Øw¿¤ÏžŸŸž>¿?…ŸN©Ÿ!ì‡À}ýòí¾ÇxhÐÄ[5¶\ʆ]ó•ltIÆk˜v6ÆŒìk@ÝíÃŽk '­™É­KÑ8¤F[ ²1¨Ãé<¬!•‹‘5[4žš0_º,BÝâ#w¶š v€’¨ëVÙ"|TÆ~€ÌÇü]VZ.NÚE_Üä e8rD ål âêÙÐÀžªªÊUŠ7`ÌOW>ʦSGzfX’é²)iø4Ê$g†,83Ûóà,ïAã$¸Þ¹ST·Ï}š÷ÝÒDÂɘa©Ö§ ¦_ 5L¢C_ÓB 
®\L'Ù꣰º4PÃdËÑÔ[±é¾·ýÅã¨e.—œEÓαH!ù‰Ð7Ò–JJÁúëÿ~aã4í%àt²³jŠlêê|ÕÿKwõ‰öµÝç¾î忺.‚Ö‘ŽlrĘ쨎¾œ½"À7áò魉͟Ðye÷Kœ‹¥ž>“WÈ{_Gô@a³f5\w°5 ‡á|®&fÕó|çNQ¶‚SvSâírSˆæ 1ãèy‰xî‚]ËtŽÃ‡‘Z¦Ð?ð 1NEÿ.—RûªÑÅ\’†aµ‚Bæt»Ú¦©zg1”I÷$`T.0e§¥Ó~ÓK#y¿9úR]„Îg¤3± ñRמ= <ãúâ¾>`Ý%SsuÆþ8káŠWÏþýjZc_dqkÚ¥CÁ5ú†Œ!+ÅH]kXOÄÔ RÍBª­+S™™æh\…Ô˜mÒ;ʤ¤ÚK§»‡[cjŒ£ó‰&eÚ»Ío1Õ–ŒÀE† ÞÌluSÀÉîNÅ«8¨P‚3Ö‚E¢ á}*ªÎAXæÍ’£‘€K -wðj&N{ù³ eØ ÍrÔI¬` RgÒ*¢1AÖÁ ¬Ûk³:[ã1 çÉ41ûxž`Ø)J_Ù¿´z*òºÞÕm€‘‘šPùˆÌùÕàF³S¼<}¾?¹HÛøüù›¡R©Ò¡â •m½¥2ª.B‹Ï #áå—ñEé^©¤ª회q2±nƒÊîµì´è!ï^Å×ÏÓÆ €ë¯ëÎmòKð‰F4œÚÞQÐðK>I%jçèoòAP,Ð$ÛbÈ÷-ÁâÎ%XÛÓÿ,Óú¢³A†°n8A ÇFùÉRßÿçxÌNٗ׌eã’Ah¨dÈXS€cСýÖ_ÌHîn_î°%Þþr'w,w ›®_ž¸SÖCÅÑ«-³$ŠÅ«Þw™«Va¼=‹b o ¤kîX‹µÊ¤”€5“P§¹|Xx!ÛÃ>ZîŠÀ»€ˆÞ/ϸrouµ’(5V"Ta%ù z&§^VBÞ‹ïj%hÝG̹é0Ùû{„ë~•ǾóX¥j«YP}X½ê`ª\ÐñÊÌ+XJ=häûæL\UÎĉ«'œëà5dtŒ0©øÔ•\#̧G˜Æ†éª¾™»U]7zß·9¶äý/{×ÖÛØ¤ÿŠÐû0/#™¬*^Ê› °Øvg‚Ìæa²,w;‘-$»»òß·x$[Ç%ò‘jw&Fé¶åsȪâWÖåZÈ7ÿÜÏ`Cû=´Br,0êDØT¨ã¬Út³=€ð ºhÍR‹D%ºbɲ x6¹ƒ0K ˆÐ)# 䛪sÎHÞôÝ,øD 8¡…NŽDlJwŽI’‰vxÞ¥ˆ@ÀˆQ¬‹l“RP‰Â6ÐS+4Pbe”¯:?Ïœ»„W_9l„FÓÅå7ßýéÒ;uDBÆÝ e÷㻩Às3áó¥Mr­­€èÁ·ƒÕ§ûËo&ó»‡ñbzùÃûéêòÿò§AæÄÇdߤ6¯ç“_DmÞ¼úåÝò£¼ún~ÝMÅ„òöåãd2]./¿Ùlí§‡ÇÕå7¥_ý4ž=NZ'ù¢Á·ßnät>†?¿¸9\å_ý­¼ìµ¼i¿ð®·Ì±FfÉõhy‚äYN¬|ç~9ž4²ø ÌDîÖÉ“7ãÙrzu¬ÀÑýãð§ùÍ‹Ç@*Ò&¬xŸ:‰E,‡eap‹Ö;7±•í­ýáa12·¡ ûwÇ0‹±­¼¥Æ¿øcì!›³–c­,Cl(ÔÄ(Sw:Ó†ü‘Ü‹À@tzï¶ï¥Arž7òŸî÷ÍU½g®êœ o|ÞïWãÞÕ”nKå[V‘i#VMï?Ü.Äð[®^ ¹¿ØtÞnî–Ÿô¨ñ†^ـ݆²z†ªÚ¤×X@ð²,ã;—÷%Y/óØÂ¾J ‚á@¢“šF’ò.©’l´p‹>%ʈeÕÎgÏc-v8ª4Ù™"=dËÖˆ’Jû3¶lÓJÌÈÊ:?§ú˜¨jF²éuðåiÖâ/R;¾šM±ýŸùüaÏœü«ÈÁô»ûëé§KRößÀo§×ÛïQô”b8}.ë”­7ÛìÕSü”>oæa±ƒÛ°(9ƒ“éíÓôzÇl´0B-IÚfcû›mžr»”CþQÓÇéb°ú0¾¼ÐcÔl~׺d±.ÅÛ.k^¦å@÷jX,ægÖ͈§Ý Èu' Ž¼Óž2°[9 ¤Th­Mt˜övkîDXV(¼F838kí©68 !ÒãA¶ÌˆÏÁOÎÉÎs4xôgðvÒ´[ƒTžÿ6´nrucéjÂß¾¹^î9!6×Í9è÷Ö–_ ÇØRgÇ ßkû¬E<,õ—lyD…˜ÐHü{Güå=‚¿;î„.óE#¶"ÀŸDÇ÷“élzÝŸî)UèÞ'²bÀ  é*®À­^{»Y‡Æ®6TÆyÒ ó>PdÝälWE¼l¹DÒa#š ç+Y©€2'34µˆP×9¯óeËÜ0 Z(iTÙZžX¡ä¾ÚP…-úè1¯ÊN*›V»0Ý¡n¦øá‹ÔÝYf;ÿ.Çe°“4Ú£~øð…òÏüý?øówþïËÁ?Bo˜ÿøk³’Á¿ùïåø\Ößm,ǿݟøþ¿µ,°š>.nWÓæ =./ƒ¬ÝOãyЬûr Ø1üÇà]cW?Ì儉/1™Í—ïíÀz¯OBR=±•ŸØåyšçGu@X§Q9JêðÆ:›Ôèc·­PÍ9ó–\'bzh[>üêÞÅ‘Uš¿ jtìbÑ 1JÞCeõˆïÇÖœ±‚él7ŽôÞµÙþ×:{ºWõV½ýiŸu»="ò2ABŒbÒ•…Ž·ÙÁ¹æ1e˜ºÊ±ZÇúŽÂœ‰Â\kÏ`n ‚9¾ΑÑ'6 å›Hê‘Ñ Ü—Ÿò¼Z„4¢ëád<¼z¼¿žM ¡{ò§ùò]í(Uè…·¥ ²fœ³›ÇjUmí+„S¼Sô\oÆf@¨%Û§b]/žŸ©1Å{ïÐæô¢¶: ÖÖIµ¤£%ç/;. 
Í¢ëne÷Wåß@"_aDªYRu‹µ¢Ò¿súŸh¨žg},?‹XÜ…ÄÀ]Ë{)'(Í'b÷Á Ú“b¯K€F²…†ó¡N6íg’C8ZáüLœhSÌÖîKT×4‹M‹×’â}Æñsº² dÖ‘ÄaÇâç;>Þ™˜BѨ£×eí€ p˜¿(>ÕWûúw6¡ø+Þaqµûíí Î1ÕΙ qKØ:«CK;eñP÷ɲ—*î\nNÞ%À¯¿I¡Ä‘²št¨èé+”áŠéyŸÖ rÉÊ¿ÈåÇñíJ–5»b¼¾ÛdG= PÛ"þ£|µø|ÛØÆKaópÃöáíõ»ý*ÓüÂ0Hðü±É*iìä_£DyW–ÚþDhfà«&‚œ`£ŒÁSˆÀ¸sËóq<»?aß^™—}/góƒ›ëñj¼ü|?Ù:}éAÉ9u™OF,Ó±Ðz¥) Yâë§!Ë€º Û­e‚–xyâ.ú3ƒÖî„à…Gr´Hœ`vçtȼ‘@{Guõiö´ŠXT¼ÐÁw¿êät@ß}¢Ä‡–/u¬/cJÒÅ#¥\ÒΗRaÔ„RÎ™Ž—RÍ<¥ @ï«Åð§°|#xçüÈ*R*ÜJ‰i^íVJȶ“¨ürWÿ¢ýp¤½¹²\mÇvÂW»'Ou;u5ÖÓ:b“O|—Ê‚g¯½ü.×RY’y\û7’ ¾Ï˜\K $¯>Yùs¦òLj*2É0¡Õztãfß6–ÝÚX–ê'Ú‘'j{-Õ~FôZÊYýj•ξ–*Cèk[èÐ$x¿ ;pO \â^ ¸dGåÎÑäw8;gU ûY&gÄÀï\|:0銊—ìÔ#¸$p1%Ê×ûŽ69mm,¸šEñrÈfú,¥¸FЧ9­·è= OÕ7Ngëå´VZ)Å7$ œV8âOF˶[ËbœrŠEi[S’qªc¬©:Dþ)ª;œ8»ŽŠGÈú•#Ççˆà`>#ïpv{3|žÌ¦/S%ç³»í›Øû÷ÃÉt±:QI[DK38ÍþŒ ÀP8;‰-rr =@}¶åªD²ÅÖI›®NYN"‰ÁXfBkgy•ò†ά õ™µŠw4i:9Üèr/H´X¥Ž-&nHhžm†Ë±™Õº«Z[ËJ…“e‰÷àñÌ `3}½ªïà”1QçAÖż"aO_™Ëpå¦ÄoÈex^O;éT}—!K„me¶ ‹Ù0bM‚öl ÞRE«’"¬íd81ÉëEHÚ¼v’j]»ÉiŸw¶]W×#­¦Ï;ÅË 2×M¬iTšñû‰5k¥è‰ÎÈ}÷%@yÃÏár|÷0›.·PxèÏ0(Û¿ˆ»—†å"+jëô›ˆåXÝ#[WÍ}p¥XDn õ,[L¶!o3Ly«c&akgYQhY•Óa>Ö™MB[Ý$ dŒÄÖŒ #Õ»€~#Á»ÇÙêqyq7]-n'ËáõX6{?\®x ñ²¶³tNgŸ±Og‡Žµuº×ÙI*ŒÆ£LzûäAù´×ȹØÚZFȲ@â†á6ÇéS0" S©Ý×Cèhc9ÀŒZÓT±l¾„©Òä|/'¦×Bf!¯P-|膓ÅäÅ=cwå½_—¶Kº¿¾…$†ýéFHJRÓ#*·Gœ¨éPƒh|¯'š#OlT¼qÀ°)ë;ڷܪô’l5ÖJ¨µ—Œy'aŽ(kfå[wßíGœ4ïDvÑ3)¶eèpT¾ç(DQÈլؽmgÆÆq,í4΄¸Rͳ}j;Ѿ\çÉèÔk"œ£˜='" Î'ëáқܗ£Ç |¬>±E”m ›U {žù¢ªmî4[OÝØ1÷L ¡¦D;×’æžs¤$7ÖQNÁsèÀχ³ùä—ýaD^w¬m,½ž–1HÐ=s¹ôrŽFoKœ]}®ŠèòH«ÃœÓÎøx 0UtX¢…œa‰†³»€uÖ•UùX~4ŽÀ ³ˆÎUïûúBcs6žÏÂòân<ù jfx Eì1»#¸pUíêÓJGcë°óPÚÔ*e}¬Å1|2wÄ’U&i}‚hÉÓ–6%FV…eËã›3›T»ŒS‡‚ËQÀµÞÇêÍI„ÑVŸ¬( èØ=¶3$TååN'Œ2ø¨âY®ˆ¯ëJ©»C9Ûá‰PUƒ!¨ «®‡;'2¯¡:Âj6‘Êià=‡œçtg3´>¥ñŠ÷-=Š@iX4)¶þÜPê|•〆½9ãä§óåêa¼ú0|XÌ×2Ý\w:‚}]Èã3ôCƒý…¡;ºÕV÷ž‰‹º&K“ òÙüêз&œݧ´ÎíCUç‰C¤Ž }IHpâ´g`ªLBªQ1HmѤ¤Bug†TCTRuh©­é1öI‚ˇUqp°*®[Víf„ǬhåÏ:c*/òb­îÝ~=©S««í¬:´¿ÁP–Û X;‹~9p ';X“Iç8dc¥Ñ5Z:Ø"Lt «F1í¹ÑÕÕ@W%TóÌ Ñ5tYX>Œ'Ó¨ÁÒÍI4ª2”úÚX*¼ɋÚÁRZæhÇGMTÀª(–Ú,Æfƒi/mZP­êáø3ˆ±Ì]Çn—örè)'×j6”ÌÕEvéЩY_vï6ÞR¡zʪ)\Þžœ?ñ}·sm±sßÙC«q¨äy»Ùµ’F˜±b°ÀV6ƒù¢þ)¹,ÿÔ@þáOƒý¯ÝÛÍU\tˆ‚€kC¥Ö¥Õ ó™Pn=¼ìò…€—«ù/Óû ӭ柋ù㪠W¯ä×½ঢ¹[ Øþ,É_Û|ÅÜñšL癇©X40Kš |ÎÌ LçØX˱iѬTd­X‡® T—8þ~0ÚÄ'°I @LÄÑ5ÀóÒ8ÿ²«0”Ôd¶SUbF΄Fšëho%ÀÍE$ÿù8_3;M5·˜Râ‰*êÕ&¾„•–ÝI@ï Wçc@y¦®q‹>äo#zÚï)Õ)úáŒSJC2úArÌ\R)8í¢í¶ô-þeæØÍ~w»Óp±†aÛ³=÷ôêùÓýpû± ùç&×kqû°Q»êqŒG®ÎëO$?ÕÎ{¶Ñ°ˆœq ¥ªhˆY£ï_ùÆõÅAAñÚ°æÓ”@:Ff$KÆ«Γ'« Y/è2.’­ï”tN›õ.v9Ø"N«™‰º¥®±²O4ô|õbêfZÄìzS0 (Œ‡TÊšóNg…TsÚìó,R“£ìk¨deÂi©š¿%gC¼}ÿç¢O~°àEʧA–•êai‹Ùæ•üñ%Pö8µ ¹òaäL2Æm}ÀrtvZ‹0eBÜ"ÁZåy)tE*¸†‰ì+‡U/wÊ"¹LCë&W7–®&<üùç›ëeÇÔRo컪´‡ÚA« áñ 9"lñÚ-/5›¨Ó”[«â+BH€x)¤QŸVø™#ýmX9Þ¨E~’6,jµ¾!<±rb3gÚ´)cÅz'”ëT†]䜛ꬡ£G4:-œ >ÍYjÛ2WeM}‹ÛrQþ;!Ä„DpOb§FS%ª$Ž¡ƒs怷{¨Í¯‡òÁóÅ/›R–’>Zî–î¨?2ðÐõé~Ī1…OM%îL»’TE•F_g´NfÀ1ÄÑ·E©2èKÈ$J²CÍF‘ã Õ›´ •ÇB€ÆñôÕT}}^~ëzÚa¯üÖ^xQ‘µÞûH¬”8var]=$Þ«Î?`ºnËôSx®Óï€Ï«ªIT5Jƒ9âr5/i_L‹×¹ˆòíÍ/n΃°c9ŸM?ïf›££'=Ú|÷Õïv‹‘ÑU5'ê~}ÃpE"ì¨9+ÉzÁp‘ NŠÎhN¨=èdN¤>Ç›¾Ð¯LÄ(\B§MI}š#=}zit*Ü\3u”žòg± à‘‚ôe<±/\Zpt,ÒØ&]Á±¡>MÓ¹ 1ÄÚ—9¡ç¡Ó±p—CXkŠ´AʾV”e’ùüDÙŠ†CUÖSõH§ £’b¬·a$¯7¡0ŠF@˜³šP£lŽ×1Fj2›ÀÕˆ˜p¡ |ù"ù›ñãlÕ±FªZtÔk쟢pK£ëU7D*ôö[+g9=æÜªôÜ!&¢èÄÀš”‰ye}ð„άj©úPe ûXÏ4ÞÏ[Û›•?a;dOt…‚šÜ4ºFÎÈÛ|ˆfýEœe¨ ¶F¼Âym-ÀW¼îðؘÑ¢ËNl!¯ê+LoÓ8 -šßh¦=Ôò…éñ· ìRÍñá‡-*”é_'¬ôì½=sÀ¡ßpDÁ¶0_[ÛJ‡ÜÓT0ÜàBGöépéM?£ãRCÑK÷-ÝÊDœÑ5}îhƒ©Ó)¯iûnÍ—+ê<àÏ_>->LÓálú~<ù¼¹¡iŒ•ì‹ÛTÁO!¾ÔŸ™ÌÚQtf²B/ÿ™DQªßj¹tZ¨>1R£ÆèÄHdÐÁqïÂpQ8H²´âQ.6['žßQsŸd‹C ÕÖï !ÊyÓ› –o÷ÂË£J[u‹[ˆ¯*^Óq‹ vÓJÛFµv‹v…´ã.e܇ØÚ [¬®¸ð&¸ýž\‚ ðáeh… >8Šª ÀZÁͱïü¾tRä®G²ëÅÌ‘fºXù OÌ’-gjbrynîüî;¿;~/äyá„ØÊ£ß:¶óßÃÉuÿò‡§^¥a9¹ÿsŒ‡ØãÞYŒi€4”Ai’U¯²d ^)/>ž›Ýœ2(ÕGÄwT¬AgGK²ªï,>(Í«!Éna¨# uÓIn^hÚppÁG-µ™"ަ]Z„é(rN§>8GaÑAí=öï)„ «$ØòqMª}Kæ¡C÷ôƒùT„zAú€v __û“gÊÊÑ‚ÎÇèf×04Ê«Wª%…&WH·Mô—š3Lx’MÎÄÓk'SEœ¶¶þ ¼lͺÆïÄÞwîÅ{Yµ©Ÿ«h–h÷km=ýõqN³˜æj°_¤YÄþåy°±ï¸çíÍ×§”¹Júùã%IÙœÓlwª³~ýé‚Þ8»Þµ«¨—QZˆ|¢#AaKƒX{¡÷d_œàýeôÆ‚S-£7r–ïH†à{Z¶}#ùûðÝÃÉÉ­ŸØÉ™Þs²¤-§¡Žï× æWÇAì]ÿŽF˜©öú×Sô»íQàûï1@k_¤rµ'?9žãPÎ}_§áY]ð?¶4GÛõß³}ìµ×𬳢ouVîK.É6Ñ¢Báâ¹ qÿzxñXˆ˜+K=mc!y£§¸ö± 
&fÈð‡LŠbµ{É}ž™û ר´*>´ жcưsÐ#ÏŽÛk3'ÓêKéMâð‹Ë^=e(úk§ Räæ¦Ë„wÖ$·¯'{ˆ$êËxíÁ![!ËÉyF¼žVmFÌkõ†0®SÉÐû”wÚ±‹ãÃ}¡#»o94å¢y|O´Õ¸Ý ª9¸”hã„ïWoâÛ§Þ\—D^úç?0HÇ- €ˆ{Üü½(è Ð]”ò@OfšïŠ€…÷Ô4—=¸˜|x,¹ˆ>ycX;Nt0¢'1¿OËLzâ €N>ø¾“S]UZ} Ìi ¾?ø£nØ¥)—}Ñÿ™}Ú^~ùôãdMJKOv"ßâXQ.˜Ù}¯Â‘Ia‰ÐEÊ0Êå®ë$‹l\|´Ù(j«Ž©ªjíÀ8xÇHböï'POŠJãZOòà6xã6A¦ô€ÙD½_œÌ8ã¹#u„ØØ”Ù81м_±}vbYIžq5=é¶1E{?ü064Yz‹Ò¼S]Ê}FB½žÿ’ÚÙ©§2<"Ò¾wø">"å¢ÌcytÀÇÉZ#¯Ž8x6ÖNÎá}œ9i á~ÎJªêžŽØN}ÁõGêÏŽØ)ÈaU쬟-v»ýòùñϹÃÅÍSÄœÒåsß«b>M§ NjÓ–:BLòaµÉ…¯’îÏfYÁ1TÁóž(ô"<“dG-$Õ URú,tÎ+–wGãÑ9d¨R“¢,xvïW¥ñ—ªvFva…†Ç03RÉÜ¿^Åmhjcùžþåóõ‹9Øý•ÁçÓu[0Lçƒaà ™`‰ì …4ú°sä-¸ñã«Ê§°cÇöxnÐÓ¼Br§í/ß>ˆZR/*m«oœm9¬‘$PWÇÊ‚Öòð1RÃAaB{Ô¨0ðÊI„cÊ·ÿ;Ÿ©UâP(nIȲ+†K"²—DÕoÍx7Ϊ€·™QŠiƒˆ.Wr$—.p›Ì7€ºµá= †Û$fÄ Ü&EqÕu”¨ŠFU!è¢ B F*•p©Š'r?@šö®¾<}}hKÖbж[ ?o`tÆ$kór뇽f"vG"-’*År¬KYBŽƒŒº@o²ë©"oeè%=[ÕD,èMzRVIÙÖÑqXxÒ;e{&Fê•Q o—:X‡Æ·vHmÿ0¹?Oóg•]VÇ. ÷¯½é·öåwÓ€Ú‹Üà›2·ø¼é}§Cáš¹eê-%vd‡àçó+­(é~Ÿ¬$Ä"ÀÛ Û¹bQ_ê;É!ü‘X»@¼-;#Æu!žF'`¤ý ˆˆˆP²Éd§}¹5|ÕØl%á9”K+ƒÑHK !ågÞvB߃…é1 ,?ºç4’›œ|M/Î¥gÇ–Ù6Êjñ©p7Ê¥S9·0- y£u2»èÙQ¹ð(Pé`ÊN¨9d§º7ï@Å­ü¢Èѯ{#ÍÖ½ù”ÔF.Lþt}¼!®Å´4ƒf™Ä?g™„àݤ0ñ!ü ”Ñ‘½¦jÛèNÝñ¬®ÎüþœC@ȵtQF Ñ|ßýµŠî&#òÄn7ÁèâLç¨>9U³];Gâé3Ó™Ôî%0kÒwÑ6*ü˜ÂèøÞÄ ™b‘iË!•äÅU³çò»åªñ0"‡®‰ ×Àðò»iJö¾§p,¾ÿéü|w÷ò3™+ Û¿Î€GCÑ8bK9 NˆÕT©÷(/Dß9¾c?'ÎVP%G»Âø‹“cœD-6˜˜ð|–Gé .¶yŽ‚²2G‘xµh„„}ä´Cº¡ÕŽñoCõÀ£OÅd﹤—š., Ÿ'”å®”œyIö™Îâef eOì ÆÒðĉÄÑA\±³½+ïÈF•õµšø2*s®žúML}ÆsˆûYå%= €atöŽOt¥é\©™ݶöª9*¯ÒÖ^$&é€viåǨÑDÚòèÝX-R·}PQ­`µ¾L.oÂËNŒ:H§«…­ ¯`µ¥j Þñ÷WGãY-l‘þ=«…ÐF<îçSŒæb¨rz¦nNžpa¤:ÀÊÞø”»ˆÀâÃ×»Ïs).X…_†Y M ˆ(žC€±8»X?œåÝ •}Φ·¨Îjȉ§ Ðòˆ1‘­ ´Aãp MA 5MQ´cl]ú ï¹G1R±¨ WÃ4úã÷dÌlÂX?c .´l]ˆ¡3afoLMZ©‘©Œ©v%àâÅUÉåÞvŽÄÑRmÕa*Ð]RLJ®&å ð¤'PùåË\ 9#è€1Ò!*°ß“’ BNÛãétfžÔ¹\}¸AÆ[ºW¿?úýã„>ÄÛ›_⽊¬z`n 4s£…`±ÛLÌ]A´½øßw¶&‚JxíEñùG£Ïáõ‘(»L§N«&ïte¸Ž4šþݤvf'émÇ^¼É0ÿêàð⪕àf¤òÙɬGdÚÑյ]'fô€c-CŒ!¸@D3ñ¸ïÈŒÊÐQˆEÈ„hFZ„LöÙÉCGÛí‚™˜Xü¬ÒÔ.~34MÎ2 i[¶SËû3ý ±këCgÌ,ŽË5„E¤nvCP'ê"IŒLú=š^_«–ŽD=ýžNmYmE iÎd„ÆèÓÅ"Ä^}ïDÝÒ8ðNÎKðÙl+—ÞÎ…¥ç35dIádÙ¥¾4¹ r˜Ôv€²(ƒŸÏMÎ^c¦¾4iŠXœ¬2›d­Ž¡tÖìpGÙÂ~´Jç—œsD^Â:©Å:ß£ËÆ³]4î>¦¾í›ëYYbs½ó…Á.xЖŒ¥jqŠs î¯Ó•kOîôí«J¯Ì0x‹Ó(Í/-d†i,”^Ûðn¨çi¦á ‚Q3ÀÆ‚Å9U eõ—=1@ô#<ÑÛUSÔű þ»…~KÒhïÿá>e;¥hŸ7¿Áfú¯ö¶ýàüPWL= Ý9”)ÀÁü®ýÅâëæ´f0ÅNÎÏ•B EJ>›ºÕs:Qu¹éš•SdÏ´®Ïbð0˜çÊļ×Nnº¦'EÀÂØtí[ˆXÅs•¢Ô9ù]Ðc ØñÔ2ðűjð±½?¤5²éˆÊ%D,Bp1Jñnu†›ãHV}Á–í$"¬‹h8"˜œ=æÁ8¨Ð–í»&Áë!jK»Â’È>¯f "!á‚êdN#Ϥ%ã\Dö*ÿófÁ×>oÿËLø?¿|‹öá›MlÿãávûÇϦ5o~›¢¾»ííá3÷¾•Á®Ù„ûÑk—âîôA…3]?üô&‘Í´ý“ï·Û'z \‘œi *-gXð¤v:ãÿüdŸ<<_ßLê;µ€Ýlø_®??oÏ6ýûO_ïÍþ÷ñ—·x9aü;»Ð ‹ýÑP¶ ‡û$ñE»PÎrFmíï_žo¶ÏÏ;ß»ÿ{ÅI õߨÅñwÜ<Þ¹~Ú¾ÿg$†v~–¾- ²KT;Â'“Ñ-¼æ³Ó|4Y;·h5Ý~0'³Sóù»ÁO?¿áìÏgpöçßž¿|Ú>¥j›×7¾V/>=þvW]nWH“W¨§ìŽvŠ· ˆcï!UàÌÍ£ŒiÛë%eb!Ôªã©äöè=f©â"ìÑ\fËCÄú< §‘‰ˆ´Ä€ØµkbdL¼öËÑ@‰ê0m<Ûéu´³š#8©Íþ®®6Ž e!^½Ý#mÑ‹M+Í‚¡ GGÊÞ&N¥¬¶=›ëiZì°³µÙªÀ¢$ +«Í.™-tv¼ƒÚñ.Kõ&nŽÞR“NJ ìgü\Ô›J¶rçhkUŠ“2/«+Î7][ƒíÏBô8'Ó™÷ît“ ‡hVs¼yªÐg¯°‡ÕoŒNÖŽIÀnÏMãŽ8Íþ¢ÅQ‰@µÃ%îX‡Táob'a¾Kô¶³*ƒ( Äµõ¶o"žëo&ò© oU1Gâ ®Bqù”Ñagµþ&‰ú|m½…ÐâobˆŽfgËýÍ×ú[LžäËœ…/ŒÅk„lJçhg5þqCÑô¶z`r:w¦²Ó< Z¬¶P­6´À|¬P[ÄâÜ6œmš{ÛY•ÚÂF=)­înH-9´«¦Þ‹o<#ê¼w5 ‰¾¬4Ì:ÛÛ¾j!2h Õ]Z ’‹¯nén]éjÞ”öÚH~ÙÓ´ìgäs~v´­*?sÃS]]gÔÂ<.ĉiP–$¹_ÆÑ„"Ö9Z9!ÌOiÛY¥«Ùïò¥q*Òt´DiìZê]ˆÓ¨‹…è(µèÑ®lAbYgÑùXDǨ³Õ~o«Ñ™ÓxçæD"â%¦»Â‚ŠñézÓ2Ñúbx¬ÕK9$‘Š2Ñæ¦û˜lø¶³ª„¤¦Jq°¶Ú”[¦YÆh2ô³îkyµU'$M@hZ‹T ûÓ¾wة֫Қlĵ-¶½ùI"MŽA—ç#©:=â,Zó®ÆÛ0:’ÞbÈêíxk5Šc±e1:Z[qH ùû®@lT\ª8­N${3ítj•Cÿ¨âBYq˜-‹=ÚYUéoö~m½‘¶ä‘!‚Æèh¹ÃÅGÞ"®r¸E½Q6¯u´³J³“&âwÐ[Ëà ’Z”MiüBƒþæƒçŠhÒü-7êíhgUþSYœ¬­7nêýöÌv7…ÅþÆÕád˜úÒbÅû¶í©Ô?vž/69ÚZâ(•W‘Õׂ“’¦+¥©ó‹Ý­:ìý&sù¾Í©|¸i̽Úí«ÊÙÜÆ.Q(¼²Î\ÃÕ-ØŸ¶CÀ-NGju|¢;ÐPÆHŒR<ÛØiÎ׎vV£60µÙÉ×>Û[Z¶\ð?¹ô°±x¢ÒóËãÓõÇíÕoÛ§i<ðýÝÇMM«|ÓºêÖÍÔ"n|õRƒÕ‹WÆ@Ù ö ã­¶lÌqíË Ç¦C P`LeéËã¯Û‡¤çíïWãyzüú’+8mj ÊY§ôŒˆ¯ˆÎP¨ô|•D›?åd×ÃrÒ²Ó‘Œk[Ž¶Ñ¡·³OØÍíË¢=@Ë[»Ýó«‰u3 ÔD…j.[¨E«ÐìcË‘\z…-Ú3… ]¢ÔG`_áOŒ¢ ‹"Â=Ãõwî÷Nã/M7w7ÓßžZ7—lGqð·Îj¨Ö’Ûàà@‚ªÔü]+Ë^Î<>Ûéùoðš~ùjÇL{uótsõòx•¹I\íGñ¼Nt˜äãù讨ä2¨áMSIYªÕÅùpðã*¦g.}]mw"b,¾j–PêX ½RdºÖµq& žéñDÙDK ’‹‰žÔØud=p+$yÁ9p?6â1 
ƒ‚„E´oWé]r"E|­tPÍ~|œux¨„Ð.øŠÃƒZxíÊJš:Àä¯òR;ÙBP«"GŠOµºçßy7Ñæ ™.‘cbÏ1EÏIݧ¹‘ê`ÁÄ3­À8ßÔ9ÚUÞ¯~Ý{³¬\½½{™åƒDk–uÑ`Kc©]P€P¨ËCÚ‰xºy)G&S¤Í Šº¢>ây{óõ)…,)úÃ>}x6 Zœ8=X½þtA© uUÒ–LzLƒ!Ó‰1¨T¢A¬=]œR²¤ÊÅ÷E'̬G2ìäãèÌtm'0 Ò…¸± ¾;™jæL™—-êÀ'§2~çæÓÚ7J¬›s&»HÔÝ—©æ$s_|zþäÓÃ7Óª½‹«Ç¼{¢þ¾®éC"Rd‚¿F¶ô ^žç5à/ö=ØÇNK—z%Îh¡5 ætw¦ óªÿ#ïZzÉ‘ô_1ú2—Qƒ Fk}Ú˳ØÅÌ^YRU #?àG?ûß7(ÉVÚ¦D2“™Õí>4Z–ÓÉx#â‹n¥Úv÷¨ÿ¹ùtÿpw³yú¶y~|xÞ½i—¹ûùvqzÂqMÖbÿ­*ÃÍ®¹›Xn!?hkNÖ~·kî“F]†™BŠ1´SžCÍ–øxûT€"Ñ’³žC’«Ìúüiá:¢µE©êyj#œƒÖåë_v\4Ôñ–G$¡¤bâ5_Ï ¥ó€Í•L ˜9ذïi›Ì]Pé÷ßáÆ[I«²òxê è àñ‡§ö€õæËòy÷TYHµî‚‡cyÌ]øž«†–<’xS=ÑŠÍÒ–NsbÁLTùKòHÑXKd-l~zˆPu*œ‘—ÆéûÛ=Jº¿Ý{vÇ¡§óeO'MËžÁ–T=¹¦êÙ8\=Çn ƳØÜ$}ÍHúzî÷Ö×¼·,½&ÃëÇ_¾be[³‡á )°×ƒæšØ£N~†®æ$ [65{/Áù¢…(œ½yµÇÍØšáNkÔ¦¯é±*â BÖ±¥¿LÓ›k |¯mÊdÔhóMfQ!HSsmÊú•yÚ~å³–d:f[a˜ÊXÃË„èw®¥Å*ÏÝnóÛòf÷¸¼¹W‹=ÇOGlšÆ V»êóiÎ@gˆ¦½d='û÷»å“š¾Ï¿<,Ï¿^¯¯ë\§§0)¼±Ó0!ÌÉÜ(­Ÿ5Ü<=lW‹õR­ëíâð½*šØIß[žb.ÎvÁ!ÐÌTIA»í—Íê·Õnó:|}·»9ýxßFx«aóPÉ Î6Ý 1q,.Œ à ¸b5ýÆÇ¡ÑÁã¶‹¹‹«Ô‹ÀÆd«‚“;kzÄj7ÚŽU ©¦b&¢0Ò[6âéãF{ÀÍ{7rGdL¦»¹yÜ(Ecì #âÆ¡ä,§÷wJcJ¥£ç!Ͱ^ªóÎ6Ê(Ë≖Æ`ß³Z4êಠzJÅd¡§G¥&Ö@c ‹Žë¬J0àFYäFÜÁä¼³Òaïg™uåP”;úÕà¨Ü±<~>ÇW§9º0J÷= ÚðHαf›ãgä/űít}ßÙ*Eë|\YU÷éeu=ª´Òu XÕ«×F&xþ!ijLÔìYž÷7›–)(‚OÈ튄MîðíQ®ÙE#ìߺ¡Ø¸ü x‡­#o]Dd±6å#ЃµMï‹|„CnwTwù5)³y†ìÀIHÆÀÌÇ©reI޹LÎ]æœcdD8ªš«ÞЧ¹—ŒÐ0iCÆn³|Ü”àâû‘÷w»íê·þ7vlÕ5c çFÞõ†0²(ˆc#`]¥ïmDÁf×E s´&8\FÎjABrñb^5«)–­éÜö&ŽaŒ‚ÃdzËèI™rʃ—I‹ÌM=®HÑØù¨¨È7´iV‰ €+‰ ƒ°F û6ƒ¾Êîòz·ù» ïßîîî?ì*ø‡JÃæ¯·ëͯŸUæýå*Zùíf}úÌ'6“Ù8£k †,âÚ1›¹@‰‡åä Jï4Šo{µo¥ª¸ÚlÞ¬ßê"A‡„Öí‡ÿœzÄñhǧlU×Q÷õËæáêéÛòöê• Ýþôo/¦ BdR)™R¼I †a?<6rWEñâ³WžòFŽ‘ÇE™Ho=ë«dS… :/þœzÆêîæ~ù°ùÀkÖ´,¹ßbR^p%P@süV’â[>tBÁSÞY‹ FB–×!¹ºét°¢ý?¡å@Å sž'æëâè¶‹ 9£oÑ^úP^[d_R‰§ÝÀVwJ,ƒÀNc3(yå€Y<ØH4Ÿ„ý?Q¥É.ìÀ~^‘`gh˜ÉFÔôšLµLìk–«ãMAö¶ù§ØL8P­wT b5…Ï ÎDò9I›þWú´Ž¸^• ;·t¸!¹dõmÝHéØG»‘R‹} Ü^|DžEâ"q0¹õp‘^6-'‚4Y9¤¯­²…nnqÐ æ Æ=ƒ£\$Sí$yÚÜDÿq¸m~ùŸ5—„¸h'^)™ïîÐd#/.ù§¢ÊÅZˆ‹¾uÜòêæv#qUñ\Ò·Õø«~8øCáåÃûݳþÆcfØ7÷ëû)ÙÁ89d J2 ÙMõá•zMDG_Úx2s[´CÆþ4G Öøê5мӵÚñÞm$^J8â6 5-%éª0æ…ãXðý`X^ ÔD:â6‹8\*Ôc‘52üj9²åI)ƒFWèãûºðp÷ü”ô¦ç~P·½ÌÆøRX/lƒ¯ŽCZ t}öL`¢j{}¾«/€w—7œÈ™ŽLzk™` ³XæŽL(ð  |Çý!ëïl sÍ"ÙhkÄ9–1d$/0Á¦qò_IÖD`ôµ÷3qs Ì[¬ƒ#’.04Þ«°¹ØÖ—ÿm3ÆÔìƒeöã[ù`Ù‡ÁIÝNœHפ|oTÚqþH:À‘4qçœaœþuóë“Ú)}âc÷o9¸=õzw7*)û½Ík}¸ŠZüÂÅ­0]~«LÝ2âóUþ6ºpÀ dìÕ ‹Õ3ÿóº™ÿˆBèQ `Œ@b—gÖ „$^[ª-܇¾µa‚¹ÝGÒ,Kk‘þ•2õ°¹ßmWËÇ·ÕéõJ„‹¸ÝZi²¸Ù~=t{½6’”}mAB+Z1J³Ûµh¥Õ·‡ÏbÁçeŠ’½„'ª6ò-ûá™ ÁDÎEÔ·—´·ú~[—_"ø)Á‹ndo|.l•0¼!PKc.dÅI‰1·”M”XÉ…;'j42æÌbÅÏ®xþEÅãî¾_®[R$êHÿßuvÒ…a0Kº4hÞ¤ŠZÍlb R0G¸·‰6k]ºq"M#›Äb:õ?'±þf?€ýþoÝ׫˜x‚‹‹"€ßA¾Rßüúñ!…]°tÐ|d V7Ç´Õg†Ü‚¦¨ê4MfSÙ› ò+A5Ivα)꿳Ça ‹V‚“÷ƒ'6Úس©»l Aއ´DÄUKJ=h xv9ØiåDöA_Ð!ÎzÉ‹‡Kî蓨\±¾6 Bh*n!ÀÔhő̔Úõ˜ä Z1`Ó2¾2^\bü˜À~Bgï‡%ÖΊ;‘F¦IaÇ­Õ·%1XÒ~"”q`éNêדtÜZÀÑ‘y;£Ø{F˜Ö†Î‚ˆ¸šY¦ Ìô¤I\ç¸S»‡8ç=t›UuÛ®öâÓaŒöôc&ü% .níîÛ®îÚ:ç'Õ¹A‘ZÜMdx;.P«¤\³|-J yÁ‚HLÅÅåõ—“Ø^=*µÅô­½°œ9cv°‚Ø‹– Ž6Ë¡Ø,ÛÎ1Em¤ÞæÙjS!vïd%fÙ…ÃЙ¤ˆšþîn¹^\/õ¯TÞ–÷ÛEĸ]îšIï—íåW´ªä‘f”•¼ôz”>ñ›`uC'Á£™E/ºÀur†¼úÕ?Òv”3ò"Ôi®D\F{0.m0`J^zk!.¨!¦¹ÅÅòqá ‰ª?ÑrÆ#tÂ9ᜰÇöø–»ûoKè^?;Â}´ébëØ’$#ŸŸt÷Ɖ€M¤G_ÚÎïæ4 C À °P¨†›qáÞ¹(Èv± ÝØDÊg2JÀäæ×…š„AúÚ@f7.4ð!–ÈAm«m2ŸÀÆL[¥bMg —Ø ë%/ É>ù-švqn9P91 Á; Ô®çEǰ+¹Ç²È’Æ4ìÀ+ùåR½© l"<CÚ“ˆ q Öïl!Ñy‘`BðEÖÄes›‰QZeך¯Ï.ª<¬‰‘´Op„ÞKÄÍóîéùñÓÍæéAóãÅz¹Q .ßk'±<‹ù {‰°Ù0T¬Oµ£õ©ÒH$4½–ªÂS¼/2üð~4}Y;Én_d;ÿŠÇý‡Ÿ–ë›í¡ûñðÁbµÛ*{«e]‹1¡ýá,ñ•Yc.´•`)$€S­Ã(ÖJ1£P°5ûF†œb:8w+®¢›F ꑧѺ^|•fc4Á$б0afF z¹ }½pñO»ÇJt 8¯’YªçURÉ>¤W)^hXçÜhy‰PÍ41 Aô5 —Å»ˆ4ƒ$Šò‰(Mɘ8õ¥†¡F½ó*˜lÉ>Q£>¬þ%4u¦'ͦÜ(½Cï&ðu¢‡0ãì¾î°“êÓòéi¹úÅsõ­×\U¥… Îå ´PÃŒ!uØ` `Ày5±Ú¹ÂXèÆ`|J2ä/_¼7é•K¯¤iâ %ÞÀJ]}žb¿?„Q)a ”uÛiÐN0©F¦kýÇ‹½3eþÊezâÏ+e–øy¥$ƒ”2IJ_“ÞˆËôjç(]g”Rä(Í•¯”tIGÙ£M?»"Ї™•’O9e:AvÖÍ{!z\Ÿ[>)ÍoªË×K‡ÅúnõïÍÃêË×Åwëû*ý2~RõtaHɣ¯H«H×LU£œ°ó%á aE^U]òú¦G¦& S¦ãâ+RIo±QjÖG§ˆjglœš*ñ4zxJçëJtØšá«~ÍåÖÏÄ“'¯z'+šò†Þ€ö’DÔWìБ¥ 
8@7²Ãeä(22-žçüˆñ<•–Ù‡·pž±uO^6v5F¶X„ëù?¿Þ~Dõ„÷¨ž`ðG`OóØ!…ëyäÝçõÌ_7OŸÿó¿þãj¨Â›»õ[46wõÓÕãó*Êßçoö¯ûç§Ï?Nó?/wÏ›»¬³W?ýtõEUù9ÿåïïMôToðÓÕOgÜ¡‡¸ïÚû9úÏCZô4ug4ˆúç«_^ï6Wëö·»»û&ïª&›¿Þ®7¿~vÑyüå*†7ÛÍúô™ùè´ÄtàâPMÖ¸YÕ­L¡/ž5 CÑ;ÌŸâË^mãK©i[m¶?oÖïlhøÈÑd¼±mýgv|ÌöQÞ/·ý²y¸zú¶¼½z%H·?ý{·CÆ:q… –8t"c$Á›A>/&ª2zŠ ÔçŽP³bóõ_kX)™“ o0Ý#u:ZÓs{| *æ¸ —u^q;ì¥Ó({"Œ *KŸÈû'OÉÜÉk²èl®Ã¡ì²¨8‰2EÌ¥Ò"‰Ê]µÚÅ÷k¬¸ïõ(̹ºý %íœ>ýø9"Q¾ôÑ­¾ÝÞ”&NËø`? /V#Së½ÿn ûý{#©ùñYŒ©t"-³¢Qx(ÉË  $tFêi’–ipÌaöð·Ê­)SZ<"Õ dÂôŒ¾…‰Ÿ{æ”…t»˜C?·ÃT_6Ø’¶+Qžßa!‰û1’r£Ì;?%U{,-›ªÉ•uÉüŦS'°—=ºÍ%v-ã }±’ë\W‘øÿi’¡®ÛËqÿíÓ•%/.È‚oZbn½ÏÚG^]ÞajFUœ|ŸºÒN¯­ úÎ*K]Ð$x6ÓÅ_28ï¦D³‰hKíШ¢“Y«²ÂvkÄE÷v³Dg{Ç.P¾I¨Å LDúe'6,~Úûö8'*Võ´FCG%Vkªùº5ï3MˆäÅ>NH6`ë.X˾]/JQydvÖý¢I ãŒÄÓ¨:B^=ÍâÕ„þÙ«é<Ê^ÙkbkO´°™¦ÕïŠn_œ)¿~™áz*”|Ÿ¶Î2ÿ®öù\­×µ¢!jW·;‰”FLdøvÕhHó¥×Òg|‘ †üMøŽ¶äè&|/ªF>Ø3sÀ¥}0ZÀ ;›tÂÁÙQ}Ïžöo«Rswœñ]ÕL}<3yË}[Ò e{¾¸_öÂæ%Þ¡Jz Ïú†Îf š+qÎhþô–NO¡ð@µ„y§»«%±ùì—V2=eÜ ¤NÕ7uªŠã0¡Nq ‰³^ƒã-Õ¼XÜ•ªXgÙà„¡ó6GÏWž&J-­1¾–©aÎd]µqÝ,ʼn™0¸‰ìˆdKl>3‡¢ÒÊѰmkͧPž½•¬ÚB’Qm´°’ ÊÁ úN5ŒìlÐi–FvŽÖÐú ©¯'§ñ¿e3[k\œùº„ò¸!ÑK‡œÖôÓTV3ZXQÞkD «´†bü,­Á$Ô<1ç„Âl'i}EÁ¾^eÔ¶cg:§7ÜÁ™9ÉýÒ ç"[X^q0¥ÒqÌ ·‰pOei3(Ëã¡yxlžn¯7?½~¶ýá‹ÍÃó:öÞ¯¯#¼Ã݃¾ëÕ¥~ñµž—·ÛÄûëå]U‰t{L!I3a˜e“A&A ÇhØÑ÷›…l§ŽV‡xÜÐα7Eáç}FHc…dß„}ZÍ4’¯:Å=:´lç82!tq*HÞ›®Nåm”êýÎÕwq7±ÑÍ­Ñ¢þ`Ü6÷·Ÿ_^b3|³—w_.íðöÙð[ˆ®$<9XR ¡¼k ‹S˜ª5—× ^éú µ¥“!/RdàÎæ,œ¬KYøH‚ 5è·¾©lg&1´XË>#Ï.Nšâh/ÒM±€'ú׫$ÑêGK+*N†Áû`qyÅšÂe[[g+ÎV(ŽÈƒáÅy粊s!ɺ_Y¡Þ82ÀÒ'ª ¢˧ A½‡•ïÅò±éo?E>–é0Œ»”ঌçÀ6$ùL˪Ù÷‚F¥ù¬!äÏ?°IÖݽ`šP|êKGþ°´9bäTn°Ø«ºäÊó~µÖ_.žÖq¨êæâö^?½XëvÝ<­ÿ¸¸®CßCt}Y²S¨°­qˆç­9ù53Ù¸_X|Éd¯ˆLÒdG²jÂ*¨om¬ðâ6뉺Ø,ÆQô®V{ÇïV—›Õ&ÓD6þQS7!‚}R™rê'.X`®å¬'²3¨ïä5×u]‘9db“…å‘D™#@¤ˆXØÙ×Áaˆ‘þŽ©|1ì‚“W‰›õÅæöó×ʘB_[<‚ñ)Ä'/†%2ˆÎ„+ÈË«Ý!©;‹.1K‹Y³d—´Ê‘lY¥DÆà¥óL†ˆ"ðGXrß-ÏÜáTΩ÷µAštÓm½‹ 3Ëtš™\TwìøÊ[\̳—%'È®G’haqñ¥5•t~i‹#çÛ[›Áy~åVû‘0¶^˜Ö+A¶¸«%NJ!]À` .‚²õ"µf·‡„‚XÙaÞDéåà|Z?ܸýzq¿ºXÿ¡ÿw³úýÓ¾Ááí‹<˜t󸛼Çò†Í‡wr†ß‚õx<üUO²(z­!D¸Љ|°³F–Ϻ˜S «{hÎÈ”*t d›3BÖ˜¯»ÿòênõºýÿõááñ¨;á?Uù«‰›ù#8Œ˜Ñãß®nöŸÁñ‘‹npòÖ[÷rôqóN6"ìWóOñmwf·^]¯n¿­nŽ&ÈŽÍùÝ3vkÛ=æv£îâz–ýcµþðôåòë‡7‰ Ûåx Ñ8C×ìj,;» ,›:…Éã ¯fºVŸt‰ïãÃûËõo«§Ç;UÌOëÕÍ—Ëw]„Mx¼l +VÎ×´wEMxTÚ˜$Mê{Ð"É‚©<ÈÛˆ¯–£âä w ˜"ìïv}ÖèÉú¤ÑïÅÕâŒ×ÓIœç@ï½ÂèK&œòF‹V÷-ªNyU#’ÐvtØ™™w'mÂ|c®vN©«§™2¦Â[Ö`N¥K€UCçòÉ7Y‚—ÿóNÁ¥"ÑÊJ&8Åj2âXxlóãg\?Ü?^®WG<‰†ÿ²´Q ÷†`§̪/Âì¹Ù¡ûýþõؤí‘IÛ“ŽA›IYõNwÖE^=}ü·ÿç%ÑüåóÓ}BlTuÿô±ÿcÏt…þ“ ^|uw_âÔÒýÃÍÞœhð÷á—›çë¸ý>þ¼{³_oVwú¿nO‘?÷y“_ôßOjÂÏqݯ_ÜûÿÒqÏ{×%ÐÅ„5ª]¢|¼ùI­è:5¢õ:çõÓýêi}{]¢dƒÃž'ˤ‘T§Þ?¦‚¯uÊEÕ¬(w‚}‚ªa¹sɧÌGbiQÞ‚Ùí¸MyM¸…-’ëÐuD8`Ä•ê ­¢¾Õí£²®UÆýåþuÛíþp¼ï¾}½Ø?áõ¯o?ÞuÞ\lg5/&™.;êjºSðσ±Ö¹Z<ËÞ’m×X¡ûLŒå<Ô4 rYKߥX¶>’c“Î Œ3sÞâ¶ CcE ÖwîÒß<Ü­ÞOKî>|¼{Ö-¼És æž`~œbSÀ)ìI‚ÀÞ[ úþ¬tÏ“ðfE;§ø·˜wB•k:å³V I©4_ +†BK[9ì&x¥†`biïá@Ûñ숌‚üþ÷O­8ÐÚX1O){v¨²éÇvBx­Žå¸YDÿƒùj1rÈà’=Æ#Iµ(ë[ë·Ë,c‡œ#`©p߆ǯ—÷º'.c–—íe¯ §½1}MÓOép´AŒ¦|µñt˜ÚµFqD!%›oi¤ v”íÊÌÉl¶Ó6«Hët¥ße‡ø§ïž3­¨ÃÇ ÆvÎÙZÎcäÐ(»G$9³=c úUˆwÔN7ÞÂ'ؽô&öÚ¾÷aª´Ç 7wÛ!Ì]ãÝè‡êX+BèÔ€…iý¸ÖЍ kÁUr²ien[U3˜¶cfGìsæöDî› št›!üt0qù|sûtñø 9ÃmeaÀꢺº\Â)ÄÚÖ£ âZÐYÖ¯•ÿÝî•°­šæj¹L–8ë~ R€«cA5á_ƒ&"ò¾¸¤û¥ÔšS @~éA’-?ê´9Nu.•Öx~ŽóēйÀNvô… 8e( Ézß`Èä½(›ÅFº/ܶ#&‘±’½Ù„Ýќכ$Z„FúÖЄ*Û<¡ê*Ûd–Þ¡‘Š™mºd²Öí‚ØSY(9pM»ÛÉ–ÄH6T¤¡eŽ¢£јö z*iX‹¾/yLE[G‚«hÔâÁ_¯î«ƒRž®•|x„fJ¿+F2]°ûA ²™;Ž›H,cAªê ¸lMMª&8Zú}iÏÂÞ öœŽLÆ!1b¤Ködƒ_&QMõÎ%€N„C “BGrRÑ Q´w³í¤ÃÝ5šÈõ¾ˆ©À“y!‚ÉöÛ•ûïûJ¸“ÓEš(pÒê5& 9«fä{âÀ¥¥×Ì3ÇÝL‰c†ÈÒ“uÌÄCIª…gÖ—FÖè¾ê¶!Ž_À,ƒ=È–:8f‰Ûo]1£¸6ÝÍÅPTXE:AQv'µLbÀÏ;±ø=°|dbÇï~ÿ âpqwûiuýÇõÝê­bûpw¿ÿãIÀÚB¦«>¨C‹—CÈú>Ø‹oµÉ:Aùé‚.8iRãzòšÝ[ß sñM<ÍjµQÛ$E Óî íì9GéN÷‘,špР qÝA×Âä Ãå †á­ûù»ÑGo.ïïV›=»ÅnR¤rÜýŒÏã ³L‘XË8i-¢ªš3FŸY;û ê!*°Ï˜åçí3¤êµ{5±Î0€ZVõÍS$˜šcdÈv±NCXš$:q‰¼§rµ¦š.šm8mšYéçM“Ì”ád‹Þú¾‰b¢ÈZZ&ˆpp%–É!ÛM&ø4:×›|ÚÙfà*Ûô–õf¥ˆä¤‹iŠ7„áÇèýÚž_?_\¯ÖOU¦éw™,üÓ„)×™ô› íÖü5X;ÔÁX´FJ s7%sÖ0wÀV‡Gæ^:ìR,‰óMí²$·±2Åk‹í–ZQ«U&âÍê|±ü '—P Hç||emÒ‹ïåÕ„ØÌd ,íÅûùF1C Ç ^tá”mË6M+}EMƒºb¨íËnRŒê©k–Ðó ª2ä¹k#áÝêró®zrˆ4¶—ëÉ©«õ‘î¸wSð}`£6[ËÇÓD|ÍÎó¸cb:Ypžƒ§ 'ŸœDK«Å®¯a8ëú ˜-zî@Q 
ëŽÂV¥¿þ?ò®¥7’#9ÿBŸX“¯x :íÍ0 †/¶±àœJ|‰äŒ¤ý_þþeŽèn²‹ìêά̬Ú¾¬VÍVue¼#2â‹3Mïî¾´Vê‘U_‘Fårš8Ñ‚¬}tì§ÆI-À‚•ë”|Èk±ðäôþˆf]´8ÚjÂck1F·Äf-Õb„çv£0Œ:‚==û¢ìù²úáÚÉT"\Rm±*m& èÓüÕmt릧hÝ]#”@±P^Qq:}Q©ËFÕSUûx첦%¼-k®ÙKüazúpwûËݧê‰ñ•æz ®B/‘ØpÑõÔË:uÓC“’ :X‰š²z˜$Mèáˆ*=ôPß$ἫŽU‘Zô„¨/3 àú1QïÙ·§¯£é-ý3Û"Âþî,ÍójÈ®j‹CpÝ¢Ú]:õÓB²6@)H>Q\Â,†)oz˜w†Ù¶D领0Z£*—«¡X'¦5T{µ€;ô2@Œ¸ìðéþË~/åÆÕùê×>¬ÅíåÏóúuhd–üyTª™òg6í¨[Äa’uÓN òÀ“þ–/†lw„@‚Éî¥-zh§m$§Ä³îzlÒ߀ó/Ö‚Å 8É8x^–^[fÖýõÙíå0½öòþîB¿þûÝïúå[}‰«ïWOž½<ÿu,š›ïœ^\}¹½{T}üðüÙê˧wßÎ/­YÀ–T¼þÛ‘øt#ù§5“®‘ÜÞžý<—³F@ ¦ CRTÛ b?Ú;æK¿0 Q D\’£Ë%Åb[k§Æ¶\è’§aWø1üýø7ÖÚ§ ÷xòùáîæäÓÝõÓÉŧ7–á¸èiÚié ]dRNšƒ¡Ò4LŠÎ¹ƒ«“ãt+Ðèhðµ^…–ÅÂÊR7³b\JÞcã ¹Šj·fº ãS3ê0¢ë1mGH”ç§Lø°:xœÜð6:Y Û@† Œv2‹mhÛ©‰m5—AVøàšù&¥|KÕáxtoBÉçù†“Qßèd%|3ãïœ@˜Ã74ɨŸÑ[?¢¦Jí5Î@¯ŸÇŠÕ‹/«¿Æµ°5`×ÊöˇÓë»ó_­ÉkOX•ŽnB*È¢jÀ©`„œP¨=O“ðÐ[ºõ€‡¶·„’æH éï K‹ÔPn±¼¦C»o¾ßžn¿úáõ¿>‹Ëé*¾ì')2Ø<&•˜””“ ù¦íÇ–V]$ÅúIŸ8KRL! É/P݆Aa¼xýo{L?µ >MɸÕZá!yˆ“ ÆTè$ ü,Cšö'M2€¡¦°Ç€s0·Kt:‘Üd|÷þþ/ËOŠŒ†t¬ªc–xøftMT?¹«nDµ2c¯MæÉ {lµìÞl•,\ †ýëÙ¹q@¨Ý&uÌC²l @ê@²Q ‰Lî Þ½‡ÈÙ; ÎÈ?zW>¿b›?µií5©A¢ ±[_ë˜ÛÆlab¹-¥•QŠì@ÊÚkõŽ“0 #ÊôðÃúÚ‘qVè—y7>Z ßôµ|”ÈÇf[¨â[ ä ךùÆ3øæ#XQ=‘ì¶¡ž^öþr²"¶©ÐŒ=á,¾)×Ä…&¾AÕ6\Ã(G×¼T5/UMI3PM@ "ÛÑ-Y¶%™^†ûr²®¥0è×ÍÒ6òV‹£®‘«é±Ššãhv˱™mÅ·d ƒõõŠäµ-jØŸe¹8Uæ¬ðRZ%)š ˜Á6 ¬ ¯í}Z=âí6†.½OUæ6¸°Ëõ>=Þ]_¾.8m>¼¿þ¦òã‡ÿøiêÖ<Á•õ0¹MƒÿOÕ¬*Ð0àªÅ‘¢æqæ³ä=\ÌÒ¶¸˜v €&c!¨&sAŒ*1ëLw ¿¯Gï‘i† gŰšò¢04é9¹œ E‰&ôÿ?¦ÖW„Ôø)µ¥DDUc뎄Óì¥c?nl}JmMd8äÕ6øMÃÔAµ¥é&•±ºlÁ‰CLˆiVTÅèAå¿ET¸Êæ‹Ó|˦֨ÊXX¹ÊV`Ly¶&àlPÅ@8 ý|°¢Â½ áØÑ,®‰þJýŠ«õ#j&€4 DBœ»âJ…zµnþöËp®Ò|÷¨ÿ¸y® nþx ·dóÕǦ½T]åßd¥ C^Õ™§We¿­Ë%¾uò³n{ôëÑ:ý›„F6­s…&xd”åA<Ÿñµû ©UµÇ’TL#½œ„HôÓ×}[u‘+ÄÏ+h ƨáj“„T¼*ådÕs#ôÕã÷û£öSb!8°FÜT !hÜž )±¦‡Tˆ 9¸(3ú“Sd› Ž®%²IK,¶LC DÞöòÅåç³o×O]°—W Ð Q|‹áŽLU -Z¾ 5Û@j‰Ö-†·½s®` ˆÅð°%8  JÁ鎯‰ºñi iÁœÜ;J°}èÒ ¡ÈáÓ¿~H[…㯞]øáìâæjH®?Pù»RþœžŸÍÇÇöª§ÆUÈM× ÆÀš Žè b¸ÇºÙ,Áúi&ØÜ† –d×ê²rš)aúpK.ƒ¿~ˆ« m–b&»›j)Š™f/€&EqHŽÅ„!½LìLšL4ŠÏ«ƒyÚ¯™’l V“fŠ«ÁØN¶²ÂÍî„­$Y7ådk[Œä¸¨»1å•“eúÆpK Ú©ò¬^ÆÓ·™œc=ChÒN \M%<èÁ±—@oÄï9({|Y(¸þò¼Yy&ùi/é}´ñœÍÔGx¬);²ú M«:ìyÎP«›N‚غÚäRÁ-’¼¹Þ ¥œãÉÂå i:íù‘˜f5’'[ö&Ñ5E²ôÚ¨JZ‡D<î*{ñm‚ôêßN ù>Áã)…p3O5Yã¢ýª‰û&ÕŒ5µe×TÆ}Ķm"s¨ÖMEÑ 6” šÍI˜²ÓTyD¡.ŠÏ€9^½3c“†ú"Z€ƒ„£§šÛæåÕjôÓ«›û»‡Ù)¦æ i¿N¢·ë¦&„Š8V•Äö›Ì^ƒ7Ný´PlÊBAðª2(!«†ÀS™åˆ*=ÔP%WÕ0¸YŽBò)´9ÊàX¹…<ˆÓÌ€ÞYQvßæÕ"ïWÒ,KòJê«}ÀÕ¢)–/ÓîûC¿4 )@,H@³¸.FÑ4 1"Y¸&8ID<¶ƒ[ Ø%»óZ6Ø}gðcsWòyeÈ¢¦À× 1¡•£úßW»ìoÊ ˆD…KæuÏgÏÞÉ ìˆò] Z4»v¥#Ž´„Aa³U‰-hYÑ´hsݰ£ËëËUÚ õèÅÞß]_ÿY‹¦¾o»u/#¸j7 ¬ºö*УŽ@ß~Šïkgá¢p‰Ù¤\c™^&øLÎ.ŠÏCHÂ)Wñc„%²C"&8zV¾O:ÖËsjJÚš3¢ê¡¦éZl°ÉK=²ôBºõ¬œE€çê™â¶äsvÓ$çˆH]– Ù®#WH¯~£ ŸäLåˆ è}J$±áVSc\$)‘±y©l® èôB]ÄåÃùç/§Äç¿üÞ­mÏ€tœ€äÛö¦Í§ýÒf„œî ‘jG®nOo.oîþÜÀ|<©0îâQ2ÆfDú.(Ž šä"aͲBÛ ¦á/„%cÝ×;’Œ¯Èß?w“ rƒá¼æå"9L•‹4m…F”ª‘ Ãb‰rt¹Øìfœ Œ¢d ¹=úVoŠQ뉼w¿ß^ß]<ž"$BæôINå÷óïÄÃ'ütyÂ¥»tŸå X¸y#øà#Œ¼\NÙ‹-¥ª¬…úÆ`-Äs¤B †–®5»’ˆÓä9hb4w‚d}¹¾3¬ùR·Ø|áÑ,Ê¡Ï9Ïqm²£Ú*PälÈon­ ÁTøˆ ÿ°zÉ“ç›!Búª~5S.AÖGµHM$5 <“óNØËrþfßFþç·Àò[¿ÈÄî/Q¨ÄÐ(«|VX„'6F´«²50PDfº1@ Ôp}‰A=P ’­&bÍ»Í7É#œ[·ŽHAÙAµ^rìä4 O;:ZŬ·Çc3N°bòÀÔ¢ÁåV!JÜE”8É*%8)`•ufY%09ý·=L#pýR†,7N-Çh LÎü÷•g›=D l`œf–)}`H¾Î\û޹4­ì^ÕS™å(¸É ã3%zA‘KˆP<:ʃ•]4ˆÕµ-{‡šZuÒ°G\¬³Pd4—X­í(€Û1çƒl\ŸuÊbÎ’71âΚÇf`üˆ6¬Ð(ÇXŒÚK ÐWD|dËpØQ3Ìo …n]„4®Î#g%»§ËŠÄf³ÔÛØ}{°'®¿3pH¯ÊŽãgœßÝÜŸ=\î2Ûñ]_f¾SXZ^·³|÷ç®n5²>¿¼·+fc·Ð »Ÿï”ÞO/t?yºº¹<@|·’(ûÒ‰Z};”²/©¥´|ШöÉñï?úÿüh`b<*ûŸ<ýyoý°½ó¸ÿ4üã¿}ø×?n·×'ÏVrû‰Ý üô1"Ëèa›¯¹ÑGëï%?~–aõª–}üËúîûã?ýó_O:6ìßÜ]¼YÕsòóÉã·sµÙ¼Îßî¿=}üKÇ_ý~výíòoëÆFôpòóÏ'ŸUQ¿Ù9Ÿså‚»þêÏ'?ÿ´Wœmsµ -â,ø§Ç0 šx'ïtE6ž‘¿üC…Ð$c½Ùÿ9³=.ïç+[¼‡pÕž+õ˜ÎIŒ®þ‚lÕêâËÝÈÄ„C§T_‚·âœÂéõt[ u2íµÁIb,2½s†D„±^Cív8Fçõ´ø+‡ŒƒÁ$˜r8jWv!ëpþe#žo\Ž{ër(í:¿ãp 0sÒçô·{™Šœ|jíàk”^£o©y¹]*Å—‰CÔÿñù<>¬¬vð89;9:YAÐâ¶{'ž¥Œü1§&e$Z\•ŒëØûµ2*#PÚTÞ_”ÑK<ê»UV¯)õ Y½ßÂÀ¶!±Wá` A7/lûõQXÈʸÙaaÛ¯ï½ÕÄUÕ]‹¼{‡ýgË`}‘ð<…ÒÔ†v¹YQÏêÜ>´8 ëÆ¨é^ðÎðô(öŸžØC´^á¡ ‡*èZ@、œÃQúO^5Œ(Ô!:\‰´æ¡å‡«w 7E‡ÞóÒå£sÚ-G$õÁ.ZEÎ!v ¡$LDŽÅAb›¥ØË\›:fic®ôŸ*c)Ú:´c#1¨µ{8{|zøvþ¤Þññ¹ø1wZ-Ô“¼ÀÞF¨é 
Ò|@S(ìÀ°‡Hݲoå>QÔ„/ÏCL‡{S×6ltK‘¹·½MÀT¾8h- !h¢Ð¤€éMùá¾Rv£}ý‹$ÕÃ4YëÕ„{ÚTtE©wr03ó.6{™È*Dm¿D«Î¢`8jûª„q{q§‚øx}u~Y;ÊCë)_`LQ*Œ©Ý~°U :§î£V·¨ÕäÁÂV) [7ÓvÍ*ò¾úˆ6=¢V}k\=̪iöPIò¼|ÔêH&£VÔ8ýÈA+J‰¥(ÕAk©mØÇÕàÂ!\¹®Ês–Æ0MÔHè]XÚ›«‡‡ä{µÉ%Wσ¼ÉÕO+¦0$gë=i)“;E¶ž¶weËl/fCZ•§©6Ó‘:Ù^ÆdVÅ ƒ–÷6yÛ›Ò¤íyÞ^ü6¦MäS×:™\‰ýMî>#±—«‘jâjˆKD¹qp ¬”wV¨ýôíêúb¦á=T¨Í2 ÀðJUÛJxœ_¾P»!Z?³«Â!©ŒÜŽHy«¦¯ñGêbvà ¨ÞbVÈÛEAE–6»êiªP‡èèHv÷=ÔgGbO}Ä` ¡ <õ(ÔÝè¢w¶ÄQàÝà Œ&ˆ•ÖW_nmd~& Kx€ƒw æ×Ç(ÀHA­-2°‡x½¬°I‹¦³Œ± ?#Ιaƒk˜jꑪƒFçÍÁ%ÅV8/&‹‰ï߈a=Êöº{W‰<¦£4SqQ3Ut(5½T-&cw“u²ƒ4qW.`ãH£þ(¬5Xå˜b=± ,îówfÀ›Ø%Ï«U‚=áW[\¦¬ªaïÛüyUu˜µªB“mo#jô0«~’Ï2«PÎÜdV“s¼¸Yuz¼ ³ªœb ìtÕ(wr¡Ì¾"b›}2{cýºâÕ#<ö_WŒlVQÁÝ’|³uóõ¿®oÎ\›ä|=òf5_ƒcÄÅ öš-Í“­›± †ÕF̘*D˜kùRºéåI[õ0¶&ÕÀÀ2ËØ†„,ؤ£oK} L )ãîåÙŠSlÕ˜}•„¾XÔ•€!Í-%ÔZ‰½\M QBW£øE,¯8B·èõºÛC?Þü‡›ºÍùÃÝí/wŸ‡ï~øtötþuì¨ÆPõ (0¼)Vij)Æh8w «šf=­®('–XÝD”µº)LÕoGêdtÖ3°3Œ®†ÙàÛÔ3!,ot7?²ctElx%Ó²‚ÇÇu¶ÅÖ·ÉPìc.xÏ$¾‰¹ª Ø^b2D±¥¢_¶ê½/œh$8À´ä|‹½ïk̵‹)ú+p¢Gæ~6ž$z¾ùÉÙxŸ-cÀ4Nüˆª]l<! `šcã!øàbKWšJàò6>B˜°ñÙ4÷Òe£cäµ7F€` »(ë #Æ4uQÖ?âùUGîýWõ g×§ß﮿ݼ C·€Ç¿|þíë× ŽHó0$Úß`Œ#¡.i6ŽDûìÇ’èc³b͆ÞP¶lâ©Íd+5Y. ’H$ßÙJŽ7óÂmVt“££•Ø,}-Nâ»Ú¬¾¥TÅ8âl-D£¯¡RÆ…dëÙ ’!Ë´Íæƒ¦mUÂ4}%o@ørlG“`évcþ݆ r63 ^…öô¹• Öº}ÜÝKÃN—ãTóë\LÛ[ŒÝ ¨v3mo±´«$5ðjȪi6Y\l²4UmÙ\Ð6Z"98†ÕÉ'ÓæÑÑŠÌ–W%”Žmµ€’P2ònvŶ²ó1ÌÈ»3[“;Ê&¬ñ‚VkßK¼2Z—5Zû^âÍŠN3€&›U³ŽJkgë@› –Ç8@Š1`Apü\G?h°6ˆŠ;u¾çƒ…Æ0H°2Ã{•åYÞ^Ù޳ţ,ÎO„Y¶ÄËÔt^ôqïËp•î4š²dËÅ_3ÞjlÚdÙxlÆ[5Ùº’ì„G­cL2áæ5EDa G­‚mnR@ìËåíåÃÕùé—³‡O6!ynÅŸó×Lÿý·?~ýûD 옻Öå8r[ý*ûQ/I܈üÈ3ät[Û9²å•¬d“§?àÌXÓÒp†l6Ùr¥jSVMu7AâÃ…À‡Hës`KÞÿ&æ±KlÉûGxÔ–i1”šû®7ñTkâÉ\a4ë[Á§AªT6ñ¹÷U#Ç•ÕØxr2íˆë–Øx–UåÖ»ÎUdb §ÛB©'±oÀ*hGÊgÉ‘ÁuǪQßCÖBÚçNŸ0C-ÛÇßî¯M«ͽsÒ4M8C¤wpvh;å°.ûÑÂäƒ}4­®;¿ê¤"c+TÊÉŸŸ“þº² Ó.§ÁÑ  佋NWM4&F9%æÀ)z“l’üpüQW‚}¿þv—¿{x–ÿ|Ï]b‡‹ÀŠ÷Î×çú¯â½œ¡D„·‹È58=XCº+o‘G.œŒÈÛýíüØ·™yuX̽îÆ~A1÷ €1ççÌS3#Ï>JÞÎÈ›?bÝŒ<¾Ü]-Vwè8 «Š-%—ª©˜Ìqì«%i]DX·a)ÍÔŽŽ+Ê6%Íï-"¬®ldv”N„MÇ8]CËÖ«ãÉýDY3÷@i§T‚ê63úªrCa%©T ( ÜMôQ@,Of£“áR=ÉäJן—±ùIp#¡ƒo(ß1ÄàD"u£R-ɬ›C›Î˜ÇšD˜9N%¸Åe°žI¨ ÚÒdž%,a›ê£ŸýøD™ù¦´å”Œô 1© ö`R­‡s[J1’²¬ÚR¹˜®V|Ðrk¿¾ûúeO$s0i·_îSƒÐ2¸õÒ.ý ¼×”1H–R£—~®í‰õ[œÓßß62Á–}6yp”N°Å %*,)É좙ÃÞ&)go?ÑœyχJ³Írâ«ÚפkÜD ÜXˆÈ«i ¬"~$äæ;=ïlwž›f¸/q$¾’oñg-þ–˜"ç¥d%âé—Šå “?W‘(ˆ‘K #;Qe{¥f²è’Š5l§6FSÂñ®«É9W0j;ïf}]jûù]S±U]ß!8_ §•êv5˜Y‰«¶1Ê¢éè§èXÙÌÖ*¶iï‚H»È+0Tµ\$ñ8t»ÔêB9AÒÝîC49– ë ݾ»ï"’ìûÉmô«DzðgØWc¢T]¤l`ÖÐeîNÔh"ÿ$fz§•–œæ†»mˆü뮲¼³SÕá2«žnÚv1F YSnÂrý‰÷ÈÁD^T†ïíêüre÷öíIÍ?üÿ2ŽSÇíâ.c(‡ŠÓ B0-_ˆ¡µê… iß9 –{…@%FÓ$®- 8Ê£ÇlkûjdUáEÚA÷BÙ'1g²¨iÉ‚égҨҕ펪ÈîhÁ¥ÕÝ?»ƒêƒ¬ªL·C ýiêć‰Ðb츓½Àkàó)à±`ñï{î¥×éŠËH‰ÚÅ_¦Q¹© €$òâ¬i£Èº…úé`ˆyåK*± ²ê†© xPPß>DiÙ-UdÒU1b¢ í¡ ’¡à°%sâH,FCçÚ¨"•zRÑ5(‘ßYIŽ»®P|qÈÜàFÙ†„À(~AE¾"¯±§ýĉþ×Ü•rsšŸ'…kå´pÊ%çf+«è¯H_…ˆ‹¹*¶­¤öõ1Ž›±#e[²š“<½¯'ýú+¼ÔqKÀªöŠ_šyñ]G ÙFÖ»û[ÍŦŠ_z•ÿøma‹È¯¾š‘˜Mm㙃¯àa=û\¨lŠ£D½ã\™H×Ð&b`.6…Žªäó£…ñö¨ë‡+sC®®_~<>›°ßt­ÒׇÏw§:Xߥ»äý#¨J–¼0D±s-ddAƒS^ï-ÕÒ‹“Kì„Râj„ EŒÒ˜§}=.­¤ì³¼SÂ1ŠÃà 6{1:ÌäôeŠlHå7%VBp„Vÿz¼É7ÖÿçOúK2Dªë!ªøÒ9.I\*¾t4k©½SóîJ0¢j0¢)¦zs,‚QH)v_#†|äv\YQÒ@ÚŒÐÃ`0 ‰j×gÀˆRé0 g#7ò¡c¾¶ð˜ÖÌN| E¬°ÓdÍ»gˆDLËǬy÷`ò˜*Ä­&Ÿâ–B7 Š4q›xWÅm¢›(*_„¢ÄÑ Mi­’üw\L™ÛÄÄ=©É}ºý-óˆUÜ&:1(B޶)Œ<„ á<{0Sn}8ÏÕá¼ áb…u:L§ºx"(Ûc8[X•q  Ô·³»V4NyxZ1I1S?˜öMvêKó|ƒëg¥L5¶`Ýzº7¾½~~ËHû6Ýöæ_Wv"ïè†î"çòŒ Ãú•¯ŸSù‡Å¶jåë/™+шâÖU‹µJ$†¤´¦¤Ú‰ÆÉ‰P†) ±=qJ³YÇãÂ*`Êž1EO;›z˜*îY¦ ýh˜òîÐÍð·Fƒ¢ÀX"íÐ3mÃ/¾kƒ~>^”îrØä—Æó /{ÏÒ2¬áƒaˆõ ÃyV !zuëÊgÜ|z‡Dè¼¹«eÚŒ…/¥ÓÂsÑüla5HäÜ”hpk ò4:˜4ž.DsÓÈ_0P꤆m½ÍŠéÿÞwïó{úgªq¼úy1y{ÿôc%ÙqóKg$f”«XŒûœ¸ÃðÚ®ÕÍ!5ïÂлbé}6Õt$1Ðjï¾~ù´÷_!¸îgWù–oãíÕŸ7ß¾=ý$¥þƒ ÞÞ1`δ>X„[ˆ%˜ÕÜŠ¸”3í£ßF¿&™ìR;¦Z•½ Ц'H–¢â(ç•Ö»¯&ݵiz?ê°K„IÁÇC yRì+›Ïb²¹†ÆŸï_ž¯þ/.ë} ˆC¥iÀ^ÊÏ 1.TìErjkx8ÕÂÝ ЇI–—ë_U8€˜Í¡Î¥Ò£¡Ì>;J𰹋ˆ:¼¾Úä|©¦}Šªçêô"vm'ã*BvRÝæ°Fn"{¦<'Q7mÈ}z|ùñfâõžOòéÂObl‹±Ü20ÛžÁb©µíº-ò뇽vb‚PE´˜ £Eèå3ÐQX] —í{BX2í¹“Öâðº“s®µ%mT4ß·0PM*gÕ¢p䪦^”ö¦ÞVü¹Ï"ý')¥¾y(Ã;s*áÙÈÂç§«ç/Ÿ¾Ùßr&‚ñ¡­†††p€êØ÷㣩–^7h¶ã]ˆÁ©çàKÜàI’>7²z&ªМ¹Gñ¼54ú*GB3—¡ßmTta[ň5Ð ŽµaÍ"ð¸Íì! 
@fœ¼pËþ>5|dÚÝí)—¡¯9e#Ñ÷}”R¾b{D n9Ax•€ú¬í:y9Lc» °†/ÅJý\MûL]vOP› À7Xö<¼/‡Sr#°h6ÅÀ>5ïÕ„6U¾/è>SQI ^‹¶òŸË¶2è€äÙ£ Î|u4Ÿ C¤ÄncÈ}Í}¸…x5b7/_îÞNî‹w7·ËŽ1ú³ûvF‹Ð* S/q° ãj8^*¼~Hm«ï*j•,t#(Ö*1¸l÷ÑLT] zG‡!¼ª‹ç¤B¿!Ðh¨&Á˜ƒê8EHéãUEÕ.0TÎÃÝ#7Ãð63ª‘áøÜÄÛƒƒt õù÷¯×·ŸM±®ÈqÓ è¡øÛ2™,8`uà›r‹¤Õ pY'ÔiE»' BÙ5F†üð†WÙôÜtŠ)%ô·\ÔÑx‹1Ó²nå‡wÀY¶q1G«ëå\¬r–’å.‡‘»J: ×`;8º±¹“ÏóãÃý;iîÿøýáÅÐæùw“Þ-Zžà:@õ@fn™¬“†ž 9h–CsQ¼s,n횺¦Ý‹" 5ð ûÏEøf—­¨˜É¯ |Ûg§ذ5|oR² ,Yü²È ø Ô¿½ãþ&7Bœ3!M>P¯+&ÞIš%û#¾ò`ô¬žïwoÒJ/?>Ûn}¹Ý½â³œùû•ÜÞÊÝÑÕ'ýóßqÙ˜Ê(ܾ-Et›ZÍ?]ˆîEÙÍ)·sŒà*œò`¹€ê&Ö|§ÄLn=PÝ>Û‰0/¹ì¢ÌÈÃëäLΧÜVl'68>Ch××ת4µD­ó`2r£™yjKâ³fÀ_‚ŸýÛõWÛOSÏ«çûÛ—§/?þ{e«}wt`'+¥š?Kù¤Ý¦îé ~<½Ü/ÈhÇ‘ŒN[:ÿÐÍ),.|þ)¸KiëŸR«ÏMÇLÏ­– Zî¹E¡tµ˜•Ü™ ºôܦ¡LÞ\ïƒï?¹1€©®w7ïDx¾ý|÷bëùIœr}»åÒÞ± ÕÁÐrá’h<0p·Þƒ òjs„2jiG"QVŒÃÀĽZÔJ“}®Ùà(›Z ûÞô›kå;Žú.Z‰`«9ôlXƒóýéñ¯/3ü…9¤¡‘ š?·\£Þa‚›w¢é¦ni¯5£U¨›÷ŰÃÄ”›ö*‡Ê–>Ù ÄÐUÙ*޶àÍ>„õù±’Q/€L* B˱¤-<·©³•UY¥¯JÓ akˆD‘ä§Höhÿ+MÝó:¥÷-ÄP<ßàÞEÈ5äá}t1: ´ÕÕ¹øºál:0Ñ(Wާ"Ë¥kÍ5QÎ…ÕkÓg»èqk¨¥–ƒâcpäƒ_J6s¿^÷çËãë¹Ï{prÍ»=fB.„¥Kž³*t5WsBç9–;r)uäÏҦ£<;¥ôÑf³5nÿŒhå(LR=ÈØäþ<Èz;¤ãÓý·û§/·WŸ®ŸnRªî6AÙ­¬eNsCŸ™Zª!E4z^ì77É«ÂÛ‘°x,BÂÛ¢Ëù$¦ì4™tº |˜"G’­^ZÈn<+“Hê—Ó8ö=¾ýgšÉÕël˜›œf|ª”s„QBÙúK <·ì«xz@¶Ð¤@ä6O6*ö/¤±`eB2ïh¨Ïþz¥=?hwß–My c“ÊMuŒ¿1,í§?'“nЛv6¢«ñ‡œ pQ½”}þòôU= ×§zœ4•wsõR?À%óïØméUÝ]~xüïÂH˜™—)à?)à Íe6‘#ðJ·hÌN4ô¬H4Ô[5»R0abXáWŠ´Ô6bâ>“•ü)Ï.ŽTÊš#áBmÎ n‰6“¢Íb¦Rùcµù@Îü³Åy™ã¥à¦ ÷ öÔÜ Ù½ƒîJüNTýâ; f¥œÔ(#P1›Lså3ÁtÒFA³t´¹6È6€N.™Vù5”ñP ôK)£Ï†–xïÕi@ôñ «~×ã:yŠTSµ’N –µQ³ ñ3Étº WTaØZƒI²Zœ®>ÒGeÿî¿¶ yº~¸ú÷ãÃË×ûöÆ «•Ðd"Íó³ÌuLÿ]X7ÕL§‚U°Âk%bÒ¢j†7”¯âé¡™é«àÖš‰¡‹kš8j.ˆˆnq½ûòüôò==òæåîÓ9&µÌß®¾ßÝô¡+袠ؒŸÇ1ª…ÁËûYWÉ­mÜJFSí” UÐx¥x.óE„˜½Œ ª‡ª¦Ã ¬ÛG˜Š"Ì€ðŒî¹`þú~ýí®Õt² 5l±w ÇžŠS ¤ºÚÎ÷rêf1ÓH4’\‘·õÅ‹lÎçyæR顇öÙæÿ-ÐCEg.µsÒ~&ìÎQKÖÐÂàV iešZ–=•ƒ!fWÀ×4ï!R6÷º²š4œ<²Û ü-óˆÃ@¿ww`S»8¨3®ÌÃ[©M«ijf0=á˜(Z†Í‡Óaó>¿¿â€4Öì¯`!%”ÖŠù«îãjÊÓæ}œì·°/0ü[Í{&³¾…¶>‚ÚtW£s  Á¿oÖZv™Ó„“sAQªNÄâi<“íÿ)‘wãI"úè—ÇÁ{jq L HÍÝ k­@ÚÁJ+€S°`U°f_Õù¢ ˜E‰ÙÒ*Ì€O}_A©¾mG&A"7SØÛ#RÑOÆH>ýgõ¾ùÊ}CJ¬º€P±oÝÈE}Ü/rå…³•ÕXïT>(JÁÏá}þŒ¼ùVsEñ‹XŠÛª7Ù·[1ËÑ7[ZÍÜëT‘†1ú®jz9ØMŸÍYÔ†Oç&ÅSaÛ>­åÙ Øþtz U ÁFßeöÜŸØÕÙϺ»ËŸ¢áóû Ø!bðmS°—½u6;‚ð›Ø?ß¹ó}º¾õ¿åÇk÷$j!&¥˜"o  àQvÎ%çNñü˜Êî`ü9ˆà"‘d½†ÙÊjà(h*IÆÍáHGOú3X—™a0Ä”§!²ÃÒ…À Ë(ÔÆíü…ìÔbX†Bmo¡PšÕ·…ÚÞº …*޳g?ú8§˜ÿtpe•¨1^5"{œOØZ\;©íb‹z¹_i÷·oŸ®nïŸ~¼?ʮ͘.záì‹óoOñX{‡Úbð áSÃ[°Ò§j& mÊ#ÑÜçP¶x{îà“$øqeUþ·}•m…­ Œ7x9s‡Sd¥ÒðÄʹ¶ufO£l`õLöo‹”öŽé¡è÷P©tõõ˧=kê«*×ýìŠ#ßò­ºÛ«?o¾}{š|¤¾¹ÿƒ ÞÞ1àJÜù>úm7:øï„*šF½(DbDò¸!¼µ]Ýû”-c¯q5¼…ZxK„.ægcù²À|Ýpù2h¿rÈâÛli5ø–Fw;¡úÑ^ø¦£ÇÚ}ÄŒ¤“f~ïÏN/P •^xeÍÆÝôùÿ¾>æ@¥CŽ¡îÕsX!} u¯žmà†d7©©1®¿¤€jt 8TW“m @TF§\¶a¶°*lr+lí{©GQ@Ÿ'˜"‰iý£³SLb‰„Iß“òœžô³õ&ÝFÏ5õᯇ\î×ã҂׿Á&¥.Ø´àõðI{ç|\O,@-#L‘"’rºjl(¯˜)¯8Í€‚â5”\Š®H¶RÈæ?gK©¨­ð.± oŠ+æÏXU\iÂgíx.?{4B…E±[t‘ÜêQ°X;‡ä&PõTîÝIt»ÅZ”†ìê·ù—Õ THü&šZ» ¹Ñ÷›$Fxß¼œöùÓË¡„ŽÍ‰âã“Aý9ÞÏ)‘ÆDÿ7šƒ™]UšÁ¬ÿ†8†ý?¤Qœ®.«ÆÚaW?%~¬(¶‹<Ê>WnŸ}YͰk”)ÚWhßÛ`…ÚdÉ”XŒ¨ª$ëÕVÛTj·#ƒJ(T¨ÍÕ&Ùjøì˪Ô'dsîºZmiô…Ÿ7oÌäù\ÍÕn,¬8 & ¢ÛÒŸñR¦Ä×ÃïÏ/WOw7öE?-ì»O G/Ü}kÛÂcÀ‹ yA ÖfŠX4¢â=G Õdçm²ë—SiÌ™bMN¼/AncvëÓQN]’êa’àU›l»xH*¬Õû z2å’ê49ònåÄ\5ù¯I¥›Q2¯ 8oÇ‹‘ºÝÏ&wFbûÕæò`èêö“úÄQ¼~»:þ`b-ýzg1éç«?ä¹q¿^н‹®àÑ‚í¾ÚJ1·@ZýðV' €Njj˜@RÄÛl?éL4]à6õ£Ûu–¶†[F_ÃŒ¸5=©ÃRcaD¬U%LçªQw)6ŒT«ê¤M›þ’,°7O·‹H`F¯w~YЂ`À­8`Âë‰ÃBÄêzIJwÖ$IÈǽo¢êÄ‰Ô ·b¾sL3‰§,+@Øo&þ¨™Äu €C ¡%½$k ã’–Žk´ ÀЗÉÅIx_τ˻!ðÓù¦ÝÿŸ]~½þvý¥~Éér—á@qQíÉEq5âo½zÁìNṳ̀ë 9 ×Ìä×NÎDÒgÓk§õ¤®eß] D£÷ÝñkÕ Ê’“ÉHDÙõR!Fè ®TåB5¸¶Ùÿ% Zðè…`•討1ås‰t³Øv9_jPY.à Ì$X@.EìY5²_²®çKÍE¢ÑLÏ_®yE TÜÈ‘ä“+y¿¿GšÎ£nh*‰õ°.¾4µEÀû04i)ˆ:ŸH¡®­Ì«ü²èóqTYô~ Ê*"c1ÇS~$ÆÙµR6¬]7$z熢äùHnÝ~kA]>÷‹rê œ‡]9ƒÃ"rZø•[}Jä´ƒ 1ºÍ‘3¿Á§]Ó¢YèDòJýœºfR« =`=†6ãÁ@…šQô§l ‰&€Rܶqà,þq·£ùëÝËÓýÍóÛþʆ‚•'‰¯hˆ´„L‘Í¢®ï¨X7 µCˆÌ5Š-è(f©Âé´‰ÈÓÀÖ<»_Y#eÖ6 % ê´ £icU¯û¸¦U !.j6°·+Ð*Í¢†‘¬N…6dw ’£ûÚ·Z4²i²Î/—zÒ†E%*/l^ =õˆe/Iª_(k‡€S,RÊBˆEˆ .¿Oè(•.Á¬Ú¥X}Ó¾¡†0Že!3&™¾\d º°ú¡ªº°¼ø°.ˆý .*2Âuén Ä NifhÛöÙü|Ý翾ÿó¿›ÐU‚ÒrÔ€«ð¢½ìÄ#¬c+eÖ¯eG#˜ÌbM9Êa9” ñÂz÷7u©Fé„ ° 
g;˜'eMÌ^3õ(S”ÌÇCÙªª¿—¡lPŒT.{’?@xÛj½y0û¾ÚwǺy~šÿûÆ07XD5ˆTµÀC ½b·(·I„]3 N}EŠA!ÛpÏEu>}?“W§p” [3'îIræLœ4¥Ã&°HU2vˆ›ÑゆUE-nÃUއDÆÑ“±ÃMË:ÓÂ’“©ÿ/¤-F6cX®Š"4«.¡|fT‹lw‹‘«¥×-ZFšR¿UÅ\g´¯U( ²ýd¹ ¢ê-§µ3Mä€LÖ¾N†7'˜¤rÝ[¸cCç¶–ëzd…%tˆ–›À#¯fû|;èvRW©ÙÇq³iɬ@G÷3$öýáúÛÝô&ÇSNÂï·öãÿz|úÃ~ø›½ÄýŸ÷/?wÉs¥ìæêöþúË·Çç—û›çOoöÚ¾üüøã)Ñž<Ý\½<Î2÷Wo·]Ý^¿®-7¹\Ë%ÐÇ´.nA^ÄÐÒþ§íÁø_X-ýr/âËýò“Eÿøj:¿½~¹>Û“ã&õB õœ é G‡è—ÃÕÎpøä†SÎåpâä~LqE1ô½3ìëŒÅKCd×riø‹#gþ1Úm3]Û¯ ^Ân'ÂAJÃVêÂI"—ziu2€')¬éË_yýÎ!föi5´éµ ¯\CDÊ0  Röúû]+Ãl'GÏ™vY ….ñû‹ý>=þxÉ%ƲýÖ>uNùëE«81+_5À‚*¦Í“̬ۆ„)¤št(ƒFP{·hÈN‚ξ¬3'Jcü²1f03LŒ9Ö9S„]ÞéoË ~Ø÷pÿÏ»›Ÿ7w³ý~}óGºÞ¾~Ê!9CV¿Î\ìf\ .©$ÄUàH—4±óªpcœ¾gŒKv¾§“-îÅÔ0Y`3_”Bf_S&Û½Uêm¤“›Ìüw¬"ó0÷ÿ÷¿Õ·û.¥h—„Uh±šwŠ'ظ ú×_“\g×0¹l·ãÿX,ù T\p) LküíЯΊ—„Õ+m‘‚Æ kNƒÎEkVÈ¥-f’é·Hoí5RKø@˜8¸×¤² ’˜ßï‰H_œºH›ìGãX5 g!ߊÔw Ô%¹Ðr}„·V×aຓÜõÍN5§sáw_®o~„»ÿ™×sW]k%Y®†2ÒZL¼i,”wÒ>i²Np½`7“”? ŠUÀÊŒT‚]r’î;Š©ꦗ”A—üpÃ$d÷~MZúbJs“±´ ýW¬§ïÐw=bŒTñˆÕOjµÁp€±ÐÞЮMi+ÃØºïïw×/¿× Óíç'‡z4аˆï=m¨±«õJŸÖp»¹³diH+6¤‡ý¤ö‡Þ ò<òõðgöÖ˜„›šùmý¨hYq@Î× Õ.„ šÈ«}|¨ë†„òΦÕd§ÅcªŽºXT¶] óK^ŸÛCÙéµ)€—®Ê®V¼˜M¹÷ çt¼Ù©ð….Ñw]HPW–ŽL‹c–FÏ:T§88 `¦üž›‡=NŽÔ"¿B@J}7š./(|öE5‘‚FZ¥&r#¢JÝ-+‹²ÁÂÇŠ‚èåiÜîür]Tø}Þ*«Z%,H%êVlª²_qA«Di%)ư–ENªÉÿÉHd–r¡Õ\¸“BŽ$}8ç2þ³/«a‘Ko…Ñq½i$q‡À«ÔFº` UȈëÉÿB¥ÚÊÄb•¢Ú sD°¨6âÜJ…Ù—Õ@?ÆIœùŠè·³çGX‘%¶Ë;x ýIŒ1ýÍ '^ÌôWÆfu´´J¸+­‰ÿ4¿±–suòBŸù&(¼º½Yé6yÁ9o-À?v!ìo)NW(û‹§}w¡Î·”O{H±.!îRô!-×w«Q «QÊB&o¸ìkPЏŒR³äÙ³O«‚©´vŸãÖ iðŠÑ‡´ZsùE .Ýg‘VP¬ Pxõ,俜ÙûXž¶Ð(–*'1qdWæ¿¿ 5 ¤í êxã0ÃàdtqÅäC~5sPç.r8‡¾¥™D'føõª[òÖXþH(ÝsvÅRp+  &ïž?Ù¹9¹›†î>Ý>þëÛÃãõm[#´8ŠŸû¾Æ†%rƟ²ᰠ-êvÎà)€³›²¢+wKÙ­%ó|zG‰tÔtbÑs Ió g£$]Ì`"ÄüKÍoÿ÷6¾5”±ÖG²dô§ú½ÓÞæ·ÿ{7ãÃ0EŸºeÊÆ‡,\g(æ·ÐÒÃúÒÅH[‡3f~#¼Ç ‚ß'DYß÷”á}NÇŸ?¾ÞÝ<\ß=™‘ûšœ=Ù©|~yúù)í[=ý£·•ãmö©:öv!q‘yvlG²Ñ<û ±›G³LPU_Ñ& %2Z“gŒyó=¬‡ù¦ïÀÑÖÖK¡¿õ"ð„âÈÍì 5O†ýæv H¿ŽÝoÇ.¤ß¾?ÚÝüxºùyu}ûõþ9ž«‡ëÏwWÏ?¿ÝÜ=-](!¤#-¼â¢ñ‹ëDZ·Ìh/³FL#½®ÆŠú7zÒ’]ƒŸ!:ˆ¯Çžµd =®÷ËÿÙ”f°€`ƒÎ*Ýe².}{~7ózVüh×÷Ž“ú-)!Ô5)ásåÌb úÕïÀèÚÛÒ>!ž‡·OpDÉ´O8—Ú£]à¼ÂU;¶O0Û Üõt÷ýáþæúù®S‹®m%°¯4/‹ù›ëb^éÃZÙz+q^úº#ÚUM$„ðk–j½¶Noß\Ú6\京{Eàü’U©‹áŠë·þ)/ªn!M“€Ó"}Zrµ‘Š&· ¤îGÁôiÐOš ½-¤Ymöy¨ÃCsMŸ•4ÅŽö;IG/zÝðSÜfÐÀi¹ Fâ*º1¸š"÷¿?CÚ¡Åýœé§‘!Ç¢xô²Å]Þñß‘#í¢bú¹ŒÄ¸¼†—ÊÅv`Í»Œƒºx ˜ì88Š[{ ï12„›IOˆ9Û#<1üF3Á/F³‘n¨ƒÀ®Œ!ÕhÀ±ü%ÿÏ?în}ƪÀ¿l¯ÁÏJ€ó2ˉ1’# Ü‹ó~ºƒ=ýÒéŽÎÜÓŒ„wzÍy´¼Å|⃀:N|´¼Åª¼C9t –*zr.®!àê‚4·—šPÊñˆY—+VÛAv†`öiUã„ö¼ N6Ž7Âpn“"Ey_Iö½çYôw{÷Ðu„/› Úèv Õ½ôW·ßƒj.——·¬}“qgß>í¼ú>À+qvpìì¯Á+aY°BLÒêpdô£òuû§Ûå‹Ý$Á.35X'¾4œ`R£ÖÍÄÒãîeAgPѦÉêò™(c¡èð>gs„Ü<•Ÿ Ê1è¶œ#¿0m7‹åÇ©ÔÜ·ïßj‰“‹„Õ­èÝÀïkÔ×x/’‘øªž—¬ÑJ«P=ztýö‚°ºAlt©8­X5’бØú¬>ä‡IŽ¢é²ä'•È›ƒ¬z¥ñC«H™ —iJ@|Û”Dªèô8úØú"ŒDWAUš.v?ùUlzßo¯¾\?}NWë›ÔÚrc§² \u,¸"/¡ Ö4ËbG½¬¬úa+™À¼c®ÀVÜß2?ÄV¤üŠÃ£dº`«`'Adkl nx¹Ù¥5ðl¥Iì¶á6&Íã:èCšwFb,{€±qrÑ9€ú?ßKvòq·áé2´}¡åHØe®nïsa3£ŠÇ¼h¼Ïiêsá]ÏýÅÚº“ÁW³ü˽q+}Ýœ]9bäNŒòúº8aKàÞsy n—É<$=Ij_+¥acß®fïªÂcÂ%]ÍcàæÒ&›eÊó€©J$·Î Íîµ<®´|ºÿúýñi¿Ø²¶QC¸(ur‰e !ÁNqKö^„´[Ú~$Õ ‰&Q¨H;Ï\êÙ4©å©¨fbéÅ’ö3´ ˜•EÑ%1ÇbÏ>SÑ7M Æà.­iÜ@¿Ø¹j®õ5‘QÛÓˆãÉàÀԔ¢ÂØìD7O2§sJž†ØÅÖ}“Y{Lì˜\{m‰ÀºA¬†T>(Oú¡¨!6"]v÷Òé°é(Cªû5!lšñ~ÂFŽ£ÉL̘AXMó Ĩ}z¦ªyr}U°»oЬÌS,…ˆqše;ŸÂÑOàý²„ðaeÀÕë΀«tÁh,¸EÏËE_Æ[BZPpSðàˆ8ôL _V?´¥‰ƒ½¹¯A[ÖR—‰.äöÌdÓmýäìžx46E[²+þx´ÝggÎЖÌÁ8Ü´*ŒÚ'ü& E؉;›fk¤C×_|»þjª¸>mûýñùÐnß¶”†ÁÅS rDä;äF<-ʦ'xŠÙ€Jx2Ás_>Ï™ :'°xå­Á“dìÌb§¦ÕRÒu‡ª’±žC=u•µJï=õJq“7­ÐЉÍwDÞ¯~gßì¿ïñ¸úzÿåu½À¡…¶õ?¸zx¼ù£1†Õ¡˜{^©¤¶ h14‚î¦rîàÁ…)¢'ªÀo ÊøÍŠYR£P{¸ŽóiSÊÖn²Žànß8|º»8)ŠÓ%¢Ô¡]Ã`û½Uqp¨ß´9 õ ÌCüB0KRÝ6+|¾òÓÍóÓÕóý—o) ”Ëõ6hÿZÖ^j\B]æ!í^²€uR¸,¯n0íy"GjRÂ.ÇÌLvùÝoGÙt‚é¡¥sÍNFê€YSÿN‡+Œ‡i–÷IФ(NMºŠ­o’¥¡è¶.ªSÔ[P°d5 YR¦ýžã_>Üi–®ß´pà¢ØÅ‚ßu57Ì´„€-($•CÇ$pNTý°5NŽbd¨ÁVR,b+“ÏÒ·Ó \ÿŸ½kYŽ#×±¿âÝlFiôܸ«Ù܈YLÌ|\’ÛŠVKj=º}ï×X*Iõ`V2“dºÝíh—­Ê$@pÀjþ”ƒ«ýuf®³F öMPäÀ”€ìé§YcJn-5Ä0fˆ$Þ…/Iܶ#_»h÷ãd¹º›7WƒF/iÛȵ}q4aL¬\ýêes¯Û{óýúêÅÖóùýÿ¶ðú³Ä…ºŠŸ¨},E”¨— 3°$fÛÑ®ôÏ÷ÛâÏ?ú¶‘MÍæâÇïßôë¼L…“ ]ÕÀ Ô€ƒù ®3%úvOr4íèíRàéóo—›ïv¶^,0T‰}EÏDÏΓ.t}šBr$û×n_Ìß›]Uð ®L[»va\UYjj¶Ú^Ò4ç\ë$Î/öï™w%²­™€ÇîuIœö¡½‰¢m9‰—²­u{òk1/™FPŽó|hÝ’`,7ô´C÷* 
“3žf“¦(µ˜@î‚‚ƶÄ|.”ä@œS¼ÐŒé\´PN«Ð#â‚ÔBê²ä-^Nx³Pša°­E0ÍΉ¬Qu4æ*W÷ÑR$Hæ`xÇ´l ½T$!œ^R2.Šƒ‘‰BëWÐŒb€¢ rTs 卯zÁóv:òÑsd¥>±]3«Žé¦ÔK(8Ü“h`ʪÁg+$÷Ô¬q;½*pœeØSÛ£À°ÁwOƒÏ¤ˆMO!’L•£CS—¢ÎŸ+ o3FKiâQ¬RìñÜÕ.i%~½9T-út§¯EF/y%q­.‚Kˆ,Vi”,™´!êƒ#?-°êž¨ ã r® ›ìUü4 ï敜Tý|ˆ© Sj=õ[¿e 7°{bî­iˆ»Oš²_üÈp0“ES$/J'§á°å (õW££š èªnq€—¤8-|vŒž[¦8óOíB2´0@b(°usò'm)ŸÝüI£˜ÌÙ œeëS;¢ÀÖtï&ä”ÎFed¿È:í-e„Úèd?M®ŽéÓåH:«–Eì᤼£°(>›•uhw’Ó>À’“yÚ¸Cö ßM“ƒ;Dì}æ÷äÆ˜6ntêûäè3W.©§›Ó@ȳ½‹cØ®ýèë·´”Jœk;~Û€_RèÁ£y³É#¯›†¢P8 %$¯<Ët°a(è‚Nì [x¶L}oe%ÃP€‡à5 ìŠý/Ù뱺iÀ³†Ø1F)¸jìÃBìãÄÈaþ½L羃Dò2‰}vlåÀooiàÇ!1j„Ò‹”ÔU ƪ@Œ{ߣ„äSŸ¸˜¼ “°î ;ÑÙP‡üqaÛàùz»Ã÷í÷cjÛ#Ü>þr2*ÍÙY6ÁnáãF×Åå£ë>þ,RENôÂU~×S×4 . Œ‘¯LøÞC{óízóÏÍíõûqðp¹ù5õÝîˆnw¿Î'½ûeu x·\/'H„•š!DàPYµå²›Bʸ涑"«/qÍS\7y±³°K’¯„ûOØ•Á<±¨< v'»`×AÈÀ® éÚYýÊ\ÛP„¿.Öào1LŒ«6ç\¥ûÛvq tV†Ý])ÒÅæòƒ=l1⦚qÁ‡€ŒU÷©uÉ 7(¤8—«wJZíÀ–—Jú°lÍG “`|¶ItO4MÀ†ÔFåý,°5Ägà*‹4¥;Ø gJ“¦<&¯ÃbUt÷Z*0¶F•-fª<–G”deLR dgU§±¨4åÑQtaš4µ Li„ìð‘••d±¼p—Â/µÁDïˆjl\wÊæ$ÆWrðÃ<–7´d“’ŽÔ 4ÌÀ—µi·ÉÀoÓ7ÛŠàíŸ7›“ÜTÊX.J«ç¿|?Wβ4Wžÿîsi%ˆ¶gCÕMhlÏŠcã@Ô7žŠ}Púýúòöù{=eÇV ˜ÆËWÁ)xXÚ&–À‚DPfñg]'Wî:%}1i $LB2DÉǨo mà5mw˜EÌ*sɨJÝ ÃØ±Ñq±Ó€ð‘BT{­Ä18`üÂÛv×TókŒ툎$Ø. \uªÊn^aã`Ó æO ÄÎL_W7O/é+¿¾\ýrèeîhDÞèDfšêâr¡Oߨ,juµ£˜ÄÏîu›'¨v1¦™™ƒX’æN|]“yÑü}Þ‡TZ€eŒCô$ÌË&†hpß;ÄdÙÑ*…˜n`!“àÔ¤hÒõÇ”*œ×™ˆÐQ¡êBû$6+ ¶×£ç9ÇX§Ïs;•}I!\رŽJ ×™ øu!#²8„:‚19Ÿa“+ò އoÚrbáwŽ‹ù|“ê³mËïBlãöÖHÄÌ+ã¸z߿ږ!øS§7M™¶@2Lv PÛ¹$>ãø†Ó —rt‡ ž;Ââ .¶œw±o½ð@½ºÿóîöþòj…´B_(Ç%Ŧ>ݤúÀû8Ùï¢jçfÛF @Õ7¦Ñ_o-ŽÝì=¹4Éä¸!‚ÊúðŒû»Ù€¹†.oðëÞXÞO¹þЭÏ¥„­½ë0Èk2Ý 2k…}G!Ä%Õ‰vþމ?¡2Ã¥yœT¦†ÑaA':z%3>c¢¶ò×läIAáÇÒJ9.¹öÀÉÔÅMš`4£ïé&9æ<$'ƒ©h$‘c!x»DN² ÁÊLÎÕõÃíý?s;}¤YG>?Iúˆ*ÏKú´z½üù‹nv‚¨Õ{Œç’ÚApÒa’ ^¸;½ü#ä~÷×õÛ¹Ûé—7»ÖCéΛ­€ÄÚõø¨K:ùcôŽCðË™cçIm‘Ÿñt>´í púÉS(&/eò =„öÔd´’íhçÐéÚ‡TwVxêéµÕ“`ÜñH­GKE„ƒHê—ÇÎ‡ŠžÊì0Å.ÒRw7¯:Bzº‘k'ὋÙã¡Yûb±,á~ Ž=F¬½T„Í 91’ÊÑt€`Q„C™Äf¡lGè‡ÀZ@sÚïˆö•kC³„Þ!|t°‰pˆÍIQöôcbÈÞ“ŸcYç‘÷Ë?×ÀHGdÐEtKâœT ¾:Æ…1¾…”©2¹€ÚU)Â$ãHL~Z6Ùý±²‚ŸH†Ô@W¶Põ½§$ÅWf¦CnGSCÄè×d1o"®È8r¶Yp[yÔ#xÚ³yç˨Ff?w/”XL22ûÁcwßaÒ|äÁ«ŠÓÕ‡S¾{᪉<ô*Uæ}¹»šY5bFÝó8ZT7ê9Bji1î匴Ú9„<€'_0æ%9„^'ÀlRgO0<˜úÁaíóf—ïìfÚs¶šÒö­­—ew µ_¹ºY/àÐÓØñèŠ´ÈØ9RLÃ;ë}?)õý@‹ H§Ã7ƒ÷ݽïYkÍ•Çì­«Äó³}fÛLk›"bïò"çt˜$ɳö§¾v.šÉ¯àò-#ø>¦Å 4Ïá[öÔ=o/µ%Îöö–=µ·«ÇGE$MÆÛNJqíyﯰ_Þeüå#šÿ²½¡ý¸Ý|¿Þüza0õpoÆûôêg?n‰Ôç±dôMã(/è埊 ™võ!¡…‡DrD1¸t&”éC‚1GÒ½¿´BfsÙ|p ÕÇÄÏU›,ɾ%Ž_%ßbìjåò蘴M›'ívƒKé#*q9§¦l€°'ðÓFð%JÃyB^ó`¼û¸À$g/¯D»g^e«weäs’x]Ï‘®J—Ð?*ÌÍŠK`¯Á 2ïH\;WÔ{Űš/z¾~¸K\}>ÖývŠŒ§„Ǒçtñã÷¼Sï…q¡{ºøùgýÔ{=HûŒvuêP úu9ƒŽÇË.¥ w.m=%ô'Æ‚ÀEÃÍ b=ŸñA5óKÒ`P7]6¤[f–IO#(åçš½K¥ÉÄi¿uVw4ÄuO}™˜]~.q¢èö°Q”#ž‘“ž =M[|XÂêÔ–ÇÒ°RtP¢+©ê3ˆ§Í3@–#è}e%Q¥ÈàÓt \Ýüº—–›%—0EÄ`Îàʳ.V©1ÏM{x¼~¸}+›ÎÎzø×€o™IÜ`ÒEñÃæ\o2ç¢øáU>_Fí¸ªfæGˆ£x­¿úŠ®£`Àâ ²™`±Ñ4FÅìÜ㽕a”4ÍN µ1*‘ÛwÀ(cf¤XRD*,¿.FÅŸ7g39’æ_¿?üsy´Å@žòçï#UðÔh"Oùó;ƒUt´ VbÝÖ'T‚•/+?˜ß"s#ˆ›Ä*[w6›û±°B¬Jl­ìWƪèb÷ “"Æ,V¹TéXÇíÿ—fO¸1Yþþrÿ|™·Ðo²¹ý#ƒP$-©âÇ÷6ãñ½ñÉã| {­Æ§Ò¹®€굠Ý?"LûR¶î\¼··°| 4 «“µáɇޔ°IˆxÊ™•Ô@êqV'÷“£¼§ëç¼uþ&Ês› Y7ùì`hãM>»7*¡_PÞà9Mw±–JG®"É‚ù3Ó i{¯@4‰Kè²l{K+&â=Ù›­LȽ§s$9B&È3MˆKÕ!#dÕZæí-V€¦#"§ÃfÑÏ÷é÷,ôm#› ›ÍÅß¿é×LI¸yøTýùG a6JU¿ÁY¨ é¹¢+,hÕ²í˜Zí`.ÛÒ[Eàa¹àvŸ£Ë;óc®˜:“%óƒš'55qÚ$åxª»>I“³Týâj$Ko-!Dž‡ŽAc¨AÇôØž¢oH­¯hËÑQ™ê뇷/¶UŸ&Hé ¾Á5aj79‹™uôUfm¡ñ$;¢ÙΚù7“â=cß%²­±ó´Ç±õ9;w>1`OÚyÈò¿ï‰¯o-ƒ\˜eçæÄÆ*;ОùÐ^ʾÚc碔툚ËÍVJ™’Ÿw¡¯/7·³k¢ FmÖ;;!0VÙlÔ%EÑNÐQt ³ëRfʪY›\Ú ©¢ ùÍ2Ùx“"Ùšê=É´h“³×öA=Ì2GªŽÝÝÐÄ®1‰p†a<­˜ ~¦x©„¢bØ×fºÂ•¸0ªR — –ëTªZ‘M%ÁÛ!ÔbmwçèÙ/Ôb˜ÍF¾†‹ß~½Ûl¯ü5˜¡8æx%œ$[(Q àúÝt♗ǶíÕ³Î審Ü¢Q89üMûDÜ[ÓÄ]4…IoÈûlOËž ZÀ¯½5«zž…¾[p•©úþ¹4ÜõÇÝUë ‚àyä®ÚŽÉ®ƒÓ(F+Þ[T¥Ø 0˜tH·VÅ࣠ö÷Óläó¨¿}µ8æo®Ü,78ŽÎÏNœ,P…ʲ¤äÀSP¡4‰º–keÙÎM¶­µänqúrÊ æ.§ö%×ÄMÖÁpŸçyÉ‘)P¨2gÝ“Šl†¡³sô01‡ÇµfÓ/9ΫÝVÆTÉQ­ó¢ÁÍ}n…à1ŠÉ«úŠõXÊoÝI—ó( 1ŒûИXÆ ê¢iÉE£…檘Ƈµ ö)”[3`N §¸ëؘÀe? 
ËH!æo/ÞEÔ–#ÅYÞ³=Ì…®1QdîyŽN;x“–l½„¼2!oÙ(nÔXGð3,Fõ} X§_íq‘Á~Ðo¹ðu?DºmyNŸ½ÊÞ”BrîŒä…˜«./Ð{ð‚xsõ¼kÓÄ8.¬v`뇘pCI…Êîý,ÜFÊyÁ{‚i¶öÖ‚ˆ³*ëÌCtÎU]VØžë·^Cn VA´éQh{UlÆXÛÐxF• ¦õUJíÂ[É2 š¾ü_Ìͽ|¹ºy¾x¸¿½ÙÜ\ÏÃ\NÏ(ÂÀA°r $.a-B¶‡ „þ¾î¡ðÚa° Än[Ž7…Á&]^$dI>$Õ„m‹G˜Å£lXA¨Ê`ºWR¹Œ-9` Ð*“š;ãÌð @ßÓ×=EŒ1ݲ?æî×è–]‡©$<8¦°cû‰`¼Ï}2x½ž‘º3>Ô /ï næ"/±õ¸cx—S3ÂÛ©,§Dbœts9[‡ý!‘di×zûgÖ¥Ö›¦Ê½ïMd|Z¤½U’ˆqx-#ë0ã£Ð_@`T(žëÎI ðÛS#òi‚´ÿt–΃ÔçÑôâ/ö#¦È§‹çû‹Çû—çý ÇóŠÒÂx6ŽI»]ò’[BÝ®1¥›ü좴®rm†Õi‹%rÂP@®„%`1×m¼/Åx^;µ Ì¢N`Æà@«ìœ=wgWÂè3€mKæhZœ Nm;[½lôR[>ÇÄÝÂ$u54Ôwz˜b ?§ øêîé³ý7/i\̪¹ª—‡•xÉÄ‚ÛÕïDÓnu°"BQ?‹‡Éz5ssÎñž š ­mO‡!κ}àhþ±¯*æØ3—M˜â¤(¢#cšÅ7M´qát5iP%¼gìcš³à«’i;\-¤»»”‡îêBì îûýÓó…ýµ?ïý¼ÿ›‹»ËßLSfw¯?4wÌðø}C'¾®ͼ‰<‚×2‹¼g"ié5ÃZÛ.@¡d"žîpôÖšQäªöDÕk·ÔÞ|å9XÄü¯€U+Ôݳ%>­NKN ±â`‚¦½…S†CùED#ÈS®Øö®ê ÇŒoMð8mUaé:D$ß0ýþ›Ç<½…sÝY8£€€ŒU(,Ž4ÄI` mZÓ DÖ zmg¤öÑPàæðÔô¸$>Ìu©~ȧò¦í샛ÇÔaN:™rkŒS\ŒÝ‘ØeÜܤ'ŽFrl̺þ€wŽ3ºáªÐaT§]å=‘ù± àjT‘5î ž>Û.Ùœkzy¼¾ºy´7hÔ'hñ"»*”EXPÕÆ)8òËn ÊÕªë-íƒB(I° O1úœg»'˜Føªæ§ê¬v !goQ‹ *öÇWôšÁ×” Wô~²ëØõ.%«j€[„ £*å˜þ­R)‡Ð¡t! n˜«÷juO/¹?ož/žn~¹›éÊ¢ÒøÕ¬2›®ºš•aIMD¼ú6åºgåÕ®’a˱èK檚‘ÇIz Y ÝN“jÛÈÞÓ¼Ù#Íröü-0˺{²¶0Í mÒTªy+íÆ—5HmÍî$@ŒêÕüi“Y•^c‡î´¤2! ¼WnØöòÉô?_]?ÜÞÿó€[óQ~½¢Y «ìyTj»®’PG[r‹ë-žUÏõ˜[*´–È«^ `XiB„)–á´…³u {j¼AÌ»BP§\U]½ºþÓ%“¨²¸«‰'L×Å])š>¯ŒZ»sbT¹Èö*åjô•Ó¬òðWš(F¯7/ÿÇÞÕö6’#ç¿"l>Ü—•Ì—ª"é\ X 9îäÃá°eyF»²ä•lÏÎýúTµd«%±Õì&[ž v13˜ÑÊìfUñ©ÖËâùëxzÿ¸Ø ÝÇËéÝ|9Þ~]ͤ°_n™ ðÙʨ¨¬Ü2oN®ª›*‹‚rOîD×rÎ"Ʀ%„)ª {†{㣙kT,âŽMI¶%;å{)UpY¹ÀÞ'WA‘8Þ2½çì>!8-£Ìk?Ø„àÎ8ÓÈv2@˜ï†€w?‘N*.Ã=-§«ªãi$Ï”Þßyò—W2蕉^d®«Ú·{ÑûÅôÓj½}^̶7oŸíæ7ï®XdzÍL²þïQ«žyñ,_êXmgà‹Y‡çu¦öD¶Wæ1CŠeã¶{ÕÇ·Ê–rºEÕ….E·(l×-mw]çAÝâ'$×¢“ne”™9›&:ˆnÙOš:Ñ-~"w.zè lʼnv6­¢\—Š“oïäG[X€sîZ qéÕ„u§QxÝÐâqúIn*>-¶Ï›¯|X´—ËÝ4ͧ—%[»éuëú3 U…hEЧxÅX ±Y•J¥Z1„gépZ©”ô=‚iû-ŒÎ™ª“¨ÂókC7|/r>ið9TBåØÔoá“9‡ªlàÇ©ŒÀO„’©Þ àx=Ñ…+‡ÝwÝ©ww÷7÷ó‡éËò¹JÿoºÝÙosû4ï Z³Ì2ù1÷$Š•Ã[– &m…V´õñ:ÀyŠ ­ž(I¿¦kãmÞž62K:‚·Ì§`Ñëëà­M ´«¼MdžYª•у ­Ü¾Â‡Ž 9vwÑqT ²zŸÚÛµG²e¼Ò“BNIU^Ѹ SðuŸ«{ _™lQkö@—Bð*·W® ¯ìoêáá0Dáý{ÏÜonPH(7($† M<™ D9!(m÷£m ã+L”²*èµf÷£VjkµºŸ½–³fA+ýœöƼDŸ!Ú8æ»B_Úœm Y9¼eÁpH ’­b-) ÑðÁ;} ŇƒG”¦gé€ ZK]lÎáð ñˆÅ`"7øÿÖž½€ ,…`!Ó°‚`NZ•$w ÚjÇXÝoöąɈ»oä 9UaÈkO­Ãåý‡–C AûX]Y5ÉìC|u¦¿<¢¢ÃèmïÛÑÃfý8Z¬Æ|6_ùo÷ó_GÏü'`&¼ÆXg^,/-x0 +x ¥`¾¬éÒÄõHqfÒÏ7·¿ÿáßodè“´U’6zᔢ[>¾òºã]c0>‚2¶ø wzô‡Ñó¯«Ûß'†bÒÄÉ,û`|·YÑC¼S}ƽ £å|ºŸRPCç±ÒC¼ìø-Ž-¯Ý=SO]/é éŒ6ær´lŸÂbÒYtZÿíI¦wËùŸYžÿs½~:`þð/¬pç?&ÜZƒÈ6‘°Åüþð™:CN´4qÄ ¶"§µd[¬!Ù«‰ZCµÍüN^v^›ùl¾xߟ#¥Ü0-±ßÙ~•Å–÷ Ç—ùfôüyº½ÓcRmþ<WS`³º @²æõB– ° ޣŧ•Ž’H¬øƒÕv:«xwÊþÅs¹}˜.·ó&Å ÿÈœàe Y¦;‹‰î¬amJݪØÁÑåŒÑÝÆm¬Þ¬¶³w–dŒ¶väêþl}¨?+˜áІD¶ÑÀº¡¢>àùܵŠBMEj‰Ó[§‚º‚gû®µàZÁ@¦² ò,}eZò‚?¹Ýü2]Þðo‘d–¹wIÞ.×_F÷Óç©Öµ)/`¤û?4iS†œØpbƒ´‹¦þ픫%” ßÑÄëÀ„æ]Õt\$³/×|“†í ‹ÀÁ¹ëµ„:s©eK›ÍzSR>¯¿ò¹›q9¿ïϪËÚ²ZÂô þj¯‚ôa„^žZ¨{áR6‰´çÉoXÑZ£B›¦Evᢠ]MÇ+»T)‘\QÉ»e+Ö&ºÐ¥N¯AX¥ ™cÉÂÂ(c«ìf9,;+DùUª:Žxºxtùòò7ÚŽÞ¾<úßýóøãÜŽþ:cüÛ诩ˆ8ú'ÿ·Ñ'fùíh÷Ádo6þÏjºùúç?ý[¥aYlŸ×£/›Åó¼âûËövôVC¸^*,»±¼ÏFÿ2ú®²©ŸÖ,‹íh¶\oYÜ¿‹ì€M&éÃkŒïÉKH{ôAë‹XM[>éác†¶w˜þ°óéŽÃËÂALJÊÛ–‚r!­½½ÆíøoH× éš0Í8>4r0Íö¨Æ—Fn,µâÎd"šKG4 Z±­€Ì~¼Í@3» §ÂKÀ3~)Cäå!ÑKbZ+.é˜>4.1áó! U&ç*31¢ Ê|HDóÍŸ/óÙ×Ùrþ~4Ÿ¦³Ÿ¥¦~Ÿn²ÿx|1Þ©<äÆ;³ß¨ 5!ˆ†f¿Ò%dcÝíØ*ÈA6ö³°O=¦UÖ+)XYئ•NÆ6™3)µ†¾ ÜØv2€mà”…hIåakIèÆ¯ÅnÔÝ‚A)ÉA7¶º†ND:îæ•Ÿ o™ÑY‘ÝÞu”ÀÄúÒÕ7–Jä¼®jH»€å?òØÖg"6^[²Áûlm¢ÒÙ&Q^ vS™´UD­Œ³ÑIµ¥ñM³”PPLe–V%iY|s؃oFFå¹üœŸîà(voŒÕí7QÌ«[Ùæ0VÂPÛYÛy§¡ŠØ¤³-€S9Ž•±eCÛ,ýóp¤º(xö¯™¼Î~Œ«óºªõd¼‘îãª;äøqþ¼Y̶ žMÈ6 º¾@Ý‘±¾DZG×7¸` °[¬P÷ŸòUÜ}‰Ì—¬öÙNEß2JÓ5ó–<ÄfÏ\ÃNŸŠpl ³ •ÀÔááG¶°÷%láô‡_vw´˜…ófè=Ñ–*ÄYJÆ,Zò—e-¿‚²nñ#ŠÏN‡Ûyx˜ÍfîŽÆ?¯f³}­ÔüÁ:…îݩīÌB³Ï?‚y—XXvATˆ—ΰ$­ê5C†&²• Ùö¿I&¡'"e[ Irš ´’|D£ömgiÁ$”È1ZÝÅD”±Y\CÝ#‹MfHH“Í6›Î6ËV³\„¶²-è=O.² U,XÛYÛ$_EƒëÀ6”ÂqK”¥€·ÿ´ïŒÅÑ{v<Ñ^Õú6‚Í~΂®p0°ùу›}Áü! 
±<©4t*Òª–™ê‘yÉ1¼ìWÍš°Üqþj‰\Y7>RU[ÿOWYÒ+4®*d’îÊdv®;T&aŠ1±ìʽþþáÃÃçÃWݯwm†ìåÈ Ì¹mg{˜lèlqßd  —sìÀ“´ÂAHPuÁ%²—Ù{Õø¦ÉåØ×à0ªvYÝséüƒgºø²Ý4²L)¨Ç³˜gøÝ-0¿†[@O†­¯7†v¹°2”]û\w‡’ñO÷oÿîßütÅwüŽÇ7þƒ?ýðéׇ+À‡;?jŸùFØM7TÙ /kâà»F0uÓ`Iæ-dj-Jô oÜ•”å„¡‚DNó6ÅÕ ƒ¢wµÎD5èNJ=ÐäRRˆ¦]eIŠûßI›ÑŸÝIÙÄ™š6­õ-Žåª# sÁPua v,j™Ü!w¢ÁR#òÃ=*oå˜o ±ÏÔ'ãë®Ú¤g”70WM6 ¬2”h Ž@3©AOš‚%hš×¼Ö]ºŠ“š ÷ï4bµrÀ$yŒ|aEÂPÀ]“|í´|*<£‘uÔý#¶Q?PH¢B÷Ø‘TŽÅ¦„V1ÝÊèöuÞÚó‹—×=žÞ¬&³ô§ Ƈ4«Ö˜+Ô¶fÌþ̸sf™ÅXhjϯL¦.¾K2?€•ßèHãÐÛvO¯0Ò8ÿîYFjD#~¤€{°%³}H•ÔŒù’Ê_»ÁEkÁ%æVX"©€#®‹;x-’+ž^­]ü±"fržt‰ÈpOã[ֽƽÑÅåXà•‰&È–àcÒu——{o“­¡%~6&Õo”­yˆùÐt´J„©©… yºåR™èĆ.0à q’&ÓÀ¸Sý]_B†És@´eîÄiÉŽ †Ï ð§«2¨‘¤)LŠ”¨ÈXao s)¾\kâÞs ŽÝ¿ßÏ^°o¯p?ë§LYØ  ]„6ÔÉý•º#%«”Ü’ÕC”x1Xlj¥ríìÍ*ñ0´á‹% )vዪî/Z¨©äW6NP&,K®Ñ)„ DJåqòf®»kuo/rËoo t¯Ö˜–íòP3Ð" ÔŒY»<Ô9ˆË7\Aº*M)ñ–šv÷™·r§ÑKî´Ì§öœ;%w(¦UXËÔ_Æk°–:¦¡6ƒLª~ÛeWî6Ñ ]ÿóÃíOŸÛŒ×tWëÍå²vëÍ\%)P÷"Þ òe¶ùÈxz¼Þ)Ц´pq’ÒåýLV¬öpÎ!™^Úó2…ݯû\Ê…ý¼=Q„‹ìçÕ*– :öón„=õËÏöÄŒ‰¬d *–ðx™ b>D¿¿¾½ÿÔøoŬøA `”1ÔL+.É­ —‚¯ð˜¥tÕ9“ÎpÉ2¦¼ük‘w±K2Õ}·i—îÔž£Ü׿‚ªŸzºt{ûN=UU~Ú¨_!ÛÕˆÍdõ«ÿ> ²_y°tGZ<ÇDÈ5q=Ô*6XÏ$9Èà)dª„ [¼Ñ.o¨(»Z|y5؉ í뿵Ù-’ž†+ ¸‰ù<3|@l¥l©–ÒHû3ÏN*näÝþV ,eÆô¯dªíâmŸ«žá0_°BÑZ;®©®7M[æÍcš 64ðùS1£õÕ'6HoœÉê”CäTå2ƒ­öJ ¥5s3I ±Xõ B<#¿´ÅR¤jŠaʳDªZ~~8WOÔtoîþìÇпåä®o¾|þ{žº=|}›¡&&ÝÕP‰·0\ KnFÍcÔ[— ·Im\õЇe‚Æ •°¶8 °LD2“ÐúaîƒSMrq µ½ë‡YÎX *ñW&÷»ºÐÏ+CéˆJ{^–ݶì@o‡ˆ=uziŒº:‰$Ü—±òI¤ß6Ü}KÓ4o½kÀÙØz_óç6˜•MSÚ‡u‰¬ºR¸ô)ä¾Û燦]¨ã`ØO‹¹X©æÇâ: ‡·gÃþØþÐØvùZ–~›É&ÚÿÇ t£“¢ ­­"!Ó¡¬ï¡îsÁcQ¤!$êQ§Žï‡‰ˆºc vùZ‘[ŠKïñ©_ðÓõÝý»›/ïÛîj,Å3L– ¸'ÒUÚÒð"êi…4¤ZT–Ó0TÍ'@ESE‹5£è¨æ©ìbCËI&#@ÕŸÌj¨2ôÅA~¨öïÁfd.ô`ã1o)\èÁæ¡sí 5Xên²K[a`Is÷¨Ñp‡+ðènÍS» »>Þ|p¸M„x}{óñæÓ/*Ë’N̺fM,ꆪžã„‹Ê“×F­Î8àäI3mÔ §¸'ZCÎ\a( §œD18yb,¥)5Fê _ŒRÚx%¥pæÎ”ÌʲPСME„UÀ)¡7+-~Ay0FO‡:”çv€KÈWU‘‘/ýÏ6%lJüQxYÚna;8ãóG„M# œ\–lS}uMBÃ`t )iEŽ›+° ™ú£8æpÇÜôêÒrA Ž™®(ë2½„ûgñˆO³“ßâ¦k 0/,‘'š¼?’«É;šu%ïeÛ_T ‘cvªÂØ™×´! ün˜Ï3PÕ}H%ù1hBÄÅ»hp›ÄžÄß?7@pt3LL!îs}† ¯RÈ/€›Ú€›=p„šò+F]«¸ŒK«6g2Ûùn=jŽ1êa@ˆµ ¶.€Ú±°ð;¿±°Òsú»—k Çgâ^* mÇ·vÓï@‹Â½}ÿàzôl¤-’öÜ+n—zŽ×Y´öY ,¶w*¬ j\@§”¢çx5×b)É*.«A±}è«P†s˜­­9ÁÕÉ„]ÀlqÿK±c'Û3`ŽS®å¬à²çcë¸UåWzâ-M u€°¤Ï˜(š¥}bÚ!º81é¾ Ô•¾í᣸ï@hîÙô„5lþ:¼"… ,ähe‘Pö “ŸKl\ÿNB„kú4eŸÁOnq´a&!mšäI| -ëŒÇØ%Eؽ ÌKÝ®'3át‘!R©XÆ0<ì-ACY£ê§Ö3ÚØaì™d’¥\¸²t5‘Ò‹Lcªêì¨~0pDL±¨XϦ4u•ÌëÊÝFia.ˆ/oþn2©ÚüG„²KªÍŒõ¢]·`ïà4;Ù§N¥oUË2…Hη‘ðȺP½ÄÂ×Ò4ø1žû5ä{<¼Aa›t#[BՖ€›ãÕ–ï:µ:(F Ôyrm÷.ªˆ%P"Ïý\ÆK¤Už0%¹À‰ý 7OÙ+¤UŒ†Ä]ýlË Cæ 1hïÞ––;ÜQaU‚01–õNÏäÄÕÖ Tœ‡8 i@\•/ٙݩ7uüpÌ ýÖaŒº{O¶KÙ^V†À¤œ¯â*W‰VñTý/wWÛãÆ‘£ÿÊ ÷õF®V‘4ö8ì‡n‡]îÃbq˜häx6c1coâdK¶4Rµªº^4v$q”™VY$Ÿb‘m9¿_sÉBZ­Ê‹|h9RÉ#ÔÜ[XÙÒŽ|pÍ÷XxoaAbœÒgä» Q–e2m†ºr—.›Ü/­¤I^ËËÉv á&{2òWlàMÐGXÐà”$cðÙùƃð«ÿ¼~ܬ?¯õ®oí·lX¡u³V#ËÃ` ÅjÈÖTÚ`uguø|N^uqí½a%Çr|É…¡³YÃ#ëR…$‡âéB/óTJ±À0²i¹BÑG ™%ièMòN×m3Ýl`MW¤˘°­-ÇSåæ•)[Ê4$¹ d€í¶“ã$¬K!ÎÐÙ®Ç,jMDt&Ö– Ïêå´†Ôx¥ñú4Ž“Æé<³3úÄ ©Æë³!…9£P9{0äšîø¸¦®Û!À4³¹¿|{ÜÞ6„<Ýýô¾_Ü•×[Aî6Oâˆ`½Í4q’½ø@®¾5È!‡–„]5kp¦­ÈÂD;䪞ƒž /~/“JíÔ¦„ðCµä³&(ENàª8 =îÎɪ ¦• €¾,ñK9,bƒÔ­À^..Îɳ8ÉEÖ ´ÄYYÅ ÄÙ-óÕQœ¥•—“kœË8àK€`KÛ¼p}¾7çf”‰QœWhjÔP†W3ùÉXKØN{L¥ ‡Æ®¦Ûù¾'’¨—µQTŽÉäЦýÒ YI•¬)ïÿ…U°Ö3{_m„Ó#ví’©øOÿÛ?©òØÐWåýrs÷QüÈ•ø—+¥‹ÿãŽký‹½Ê•Ï?>~¾›Bæ“îÅÝú¯ïn•±§¶NÔ Èli›ò ×j±ŸT7a Ÿ)!9þ`Ž–ª‹¦G8 ßµ( Ê‘>6AÝŸ[ð/7÷¯äo]7™ðuÝO÷¿\½¹½ùxóôùýzƒìÊ xÃçp[€·Þ£²x×/@ḇžl•c\8=Csoekl_ÿîÿ‘г¹ú$/øþæÝF<üt>ØúkqÖ7Ÿ>¾Ý{8{õû«¿¾ý»±dâG“iXrþþ9æÝey‰ÜüÆ LTO`´}„{ dâ ‘Ý´÷‡1]KæÚýéF_ôýÍûõ敎´øôt i®O8a®¹ ëSPs¼E™WF„hBuÊ}«Œ£rì²~&½Rö0hîMœÜWz©CHÏÃXMœ­²Ý®;Éã¿_X ëÁôRL…(dúf´Ú\w<¦~’bŠHiZ²ò@Y÷ Ž£“÷º<£Ò‚€!¸4£’n3ÁãJÒâ^ÀššvÉÀÞȪÝ%ý‹‹äAqî˜Ã„.ºœ“ß~°²"cXкˆƒ—Ÿ±ÛÚÇ“*påc,§4½°³r¨5-n †Ó»©ì=§¼’v` ‡ßköoÕE¤Ig¤¥NŽJÕûsz„=*ìX|dcñA¶ÛlÁ‘M¿=8¯Í—µÞtz„ Ì¿Z¶ÁŒšç ÚÒ¬‘_ÏQœdÑš¤p–,g»p›|°²’¹KúVÑ#âµ^zs¦FoàÐ\óXË ô&›±@oèåõ–¼t°²B½aˆdƒ_¢7Pb`‹ÃÞ ÇÙ"GH\mOŠ ›&QçØ1¢‰S»_éãæÃýÝúæˆW½eàQ"þÙ…ô¦C^êpþ’¨pq´òRç‚«„FgêË@væ‡ÛŠÞ¡%m…QçÈÇóm Ž¡#˯˜íŒ¦†5óì=¹7£ÉZ¿û5º®¦r—ã´.,P6m1”“ǦH±âbÝɶsìÙ´bgʱ2cùt—¾XÌâSMå +‹Ã¢Ffcí’8¬ íÁ6ù 
n´o1`2»(@Å[ßRl_æRؘ—ð(ûÁ_ÿtqýã›?®ùúÿxsûô•o7›pnz»–Ú—8ð1ÈØìb¢ªØ6ÃT3ј%£•VŒ÷Fâç˜r+´E,8 ðå¡.ëW]²ð|¿š‚ùâ+ƒAÝašëÙ3ÚŒ¯¡àå8Z–›¶‚R²oñ[ѹ=}šÉ ãxvâ§Wâ ×M(tYe]°ñ‡jä37ϧÖ-!«£,î Ī’;çR@œ£R{Ü£Ù]â9 %•™9Xš»iË[ €‡6*HÜn3`†ámŠSÀ#hÕD›aO ]‹ï¸¨iÁÏK»:¹’ymïëï·pU´ ž  ¾ùt`ËO²m¢õ²V,†(oÅÀIÊ„ýÒJ¦¯Xv-1SÍ16™)Øáç‘#&ÍT`’&±ÇH×Ñt—¾€¥÷ùX,frM£Ð|óÁ)@0,N¸Õó¹¬Z Ê-nÚv6Ú!R{$ˆ µLïéÃÍzó¼Oí'å[† ¼×˸Àí æ¬©Åâéôë‚Âsré ó¢±™8ó gÄŽ‰à¤j{/„N8/ÈÁÝOõÅ$p®ÍÌÂx˜gvŒ'0O´Ø¢ÍPêø®0BÌCWÎ+›³÷Yõ!0ئ{º(Ï]y‚_zýŽõ‡ˆ‚¥çú3+gåÕVôt,]VwØA¼úÓFñß݉_}‘²ÒIœ:/ƺ&ŸÊ¾†É@@°÷±¹¿E}b”Ì}ä(¯œõ”ì]>#Ù%io–V¥åd­ó p'Díl°¡Å”Ðv…"G{ZZ:i‚¬¥mÌš/-õÐóO^æå†ujׂΠ_OjŸ­ëÒ\.õ¨ëšÿ¾Ã»:qX½*¼æ¿ðpF瀄08 @±46õš™?­M0φN_ º(ƒÔg9·¢'ôh[|¿‚™ ߯c ÖÝ{@}R\g‘¶Yˆ´Á.Úd³áÓ™˜CéôÚIH ‹ïÛnDÐ[´Ñai²1† Ò¶4𦉼Þ=Ìê¢oh&ÜÝtÑw |/Ç™¶Oí²@uÜiQ€Í¼1„¦$VR§é…›ÈúR§Í ­g†ÃPd.ïxµ ëxCHŽ]8PÇ+§Ý =ØKoôHm%4íp\/r†dy^0¬w·,¡ ¡¯¿…’ÌF´`{1¨÷iJðÕ>ºØLµÎøo©;„¨´ŸW⼃®&#˜á£ë߯¾’UTÞ¿—Ÿ‡8óyr4¢¾ìããÃãt4•Mý«lnmU¾ßÜÖ+á|ôÙ>¢bŒ5JOixÌPÄ3R{ƒNB¢zg9æ:u¼ˆÝÍŸ ,“l ¦:–ßcìávî0®lÕ&+5¡ÉüŽûÇ{Ç•­˜MHä‹Ôóx'Bl¹{máA>Åñ¦v¦l¥ñÏj6J„‹'—êúï ½²þêË_ýï¿ÿåÏüó¾¾úÛZvÚ߯þö×I:WÿB¿úItùújûÁj—µüŸ÷7ŸÿòߘÒC²?>\ýòx÷q3)ôÓÓk]øûÍ”¸½š|Ñë+ÙÈë«»úaÊé~xuß=]­ïž4²Ì­•æ”ÛBÙ!¡!€UÖËï`TnU< Bß°ªQFDJôsrq ÜÍù8ì|6 OÏuÛ/ºS  —´$0zWÏѼ}ÄÑ£‡±‘­¼˜g†oeˆ×œC–=ØÇúrª(°>X“ Ðxª #çJßYsœv‚·²ƒò%o(çeÊš#‡äq/„Öøeÿz€ÖhåS¨ç©Û®ŽFŸöõÚû”M ¯ì·DÜõt Õüa4+Gd‰mÕ??EJžôÚ?n{ìÎTXŒ¦¿ÀA"gw¦u˜¨¨úžv¦‹FÜPÓÎ$×*)¬Ø±À=ß¼“˜0·;îd™ož4÷~%~ÞS+¶cš¬6ò˜Æºš"M+èˆÉ›ºÎ< k¦ýÀzçs°Æ¡„Öƒ5Ö¥§n¢®ù²--9et0JëFjŠ˜-ŸY±vYr„ëÇé:~‹;G‹Î®a¡ª¿§â3¡)7jGÌO˜Òž½üóåóO燗=Äô 19u•„ª˜{BcŒqLÎê0zÔ‰7}ÃA¬äH7»êù˜ƒ©˜s ™^7å8ÍKBŽ·®)±eýøû Ÿ(XšÌ‘v‰æ‹Oü¿È¥›/™ß ^¬ßoL²V+,¸ @¡30x?Gc"A‚¤3šd«ž!ßŽß ùö¤ ñnܤ k=ö¯š³+¥As|@ÁOß¿…ñ„§Ñ_˜‚ŸæwÉùÄbΑߎ5®ƒòo mgu±¸7Ç[rÊ–ža«‰+â WÅváÉ”ÿÁŠ:s¼öUŠ'.‡!r¢T.Jnòƒ)ß'¦ê,dÁà½#OeV½ç}*{Ž»™Ï¿²Ým6JÒI¦;Vûû¸0vmä{ºÿdAô-ŽÇ®¨ p¬õ»‚x›ýŽ/¦×䙕î¼ß +ñ†³ŽÇJ ›8XY‘ã‘“€^í"°d>#I¯pÄ/8K…äW[m“ÓrnÒ„íØ1/[é£MT’—×Ûñtð:̾XáoÍ»#Yš3¡ÉAM¾ÇŠ1:(ÑŽN¡^êÔ!ƒœ]ƒËú#//ç²þÐ'‘Ð~iEt  À…° ¸AÇ."6y–€£‹ˆ1Eö£z`Á~áÛ~cúߤƒÑQ{/á`P¢&¸&ƒUw–ÚƒÁöƒ”“ 8ƒ çBŸ;h9¶³þÓ—++ã@ïrìÈ.ð/^¬PÌ·Å¿øÝìžg-£OQ h^): d2çÌ·{äZßlûoåÈuð>n‹¼m>ryÁslrAD­9ÚåMØÁ…b¤‘Õ;çrþ'j-eÈ: Â]Õ~]EîGà³llXâ~¼ª£&÷ㆧzTŠ.åD $ûչ̕SOÿè/à&c<šþýæî§½ý÷õýÃúçSÀããBÀSû½¾Äór:–Ú¯=‰X"7[×’Åôì}Û.OåÈ=¹aŸæÀ±iélaü^3‚Ë+1S¤XÅînÍ1»û–ñýÄ‹:Ï`ÈÚ»ØWÖ9—«‘g1»µ_MžÝ‚Ñ¢*òáÝ!»ûá3ÙÝ=±•¨Àåù-–Õë9´e+9’®EÓé¼`SEöQiØÇ3¤søýÎm5 SËÅ–<ÂÖ 5ˆZÂâšÑ£«}·÷ò8F6wî¼% 7©Ö²ƒ•áËzm,ïVŽwXGpk1$qçÃ[ŒŽýNàU3%6bNyœE[<·›÷Ÿßrj8×OfŒ.œ±ÖïM«Éî^ŒŠú½É9œÔÅ&ȧaÜ ì= .òÃQ^Àð™àéû .:I&R“FFðŒÀÊkÂåŠyõÅ·ÃÛRãçH¬äpKtFÈÞyOMLUÆ”PË}c î3ÁÔUQ¢ÃCU¬üh1wy#A*ßá!‘)™\Ý ¡K‡‡ È Ê®± È:Ûì ¾ÝQ^Œ‰§è˜/A(RFÁí•+·¶ÔöÄÒ«õVbº5ÃS&×ç‚ ÔfôMåè;P`òü>)BÖÁ¦íq¿²2ô0( Ýs ¦mÐØ±ßRÉFщþìm*ÆžCT”Ùäà{›]{~û¸»wü’q{:ºYHm\´ë¾õTƒ¸üÅ ºî[ÏhñÆÛöup0€ŽÄÁTÝÇÓ‘<—çnŽÂ¦Ž|Ôy2óÂQ5 ÎÚ>ÂÔ$Euê4©›¬"ɨ~›tn’ËOÌ ‚ÍÆ‹ì–:”Fæ§“>Ó¢lNn/^0óÒ­¼é ºõ6¼§õÛÍí§{%õ<ß tþ—µ wþÈ||ü´Y@×N? 
U˜ç[Ðù‹ §/|°]ÔF¦¿azQÂVŒm¾¨æ‚ ‘}_z°çÏ´Ÿf6|9©>¥?”ÂPÐwÈÕ0Š?õYŸ!ɽv µ.-ªúÚ–˜`‰ÏÍí™?]b›©l!DÅj!3´%p×Ãs(byv`üBʧØ5ÔøCñ{A‡C˜ù¯ò†óÉy¶¥Æ`š¹Êœ…V¤Œa.oæ!9ü@>]Rc Såˆ.mæqt]¢ºS—ìÓþé"¦;*|×t¦¢Ô˜m²í‘Ènè&À0Ø×‹±9LQਹ²Zìe Óà2‰Ò:ä8«jö$¢hR5ш+§èE‹¼ëŠÌŸ™ÇÛÍÍýÇ·FRI² ®²«iy6„¬58.W«¯›0•Š’¢0ÐzÍ,EXŒ(%Ù¦«ºö«í%uŸãü’ É!’õMF3¾,KÅlCÂ?FMùk?}sîùu@Æ®ÎѹæñR3–;§#4z›ÚtäÝ€iQ02Å¡)‡w7ë·²ó·˜`÷ä£Ô·È÷F~äñQ”!ûcû£9QýܶÌR<£fŒÄMPÍ€¡`râ &F9Pâ…S'wºœeÉ#'1¦Zàù0w2¼µ´8Y~Å¥uÈ%…Ô1/º'‡>Ÿ‹‡è³u“²lŸ®oþ*>©xyk9?ñÊy7bámö7ø¼¸³OU(Ë’uÖ£Kq†h1V×™_¡ì¼èšNŒ³`V‡V ÓM“­B%ù¡ã­NZPv1NGU¾ÚU^¸¿‘O¾üŸ§ÍÇëû‰~j‘_T5VqÄpK"&ò.Æ—ŽcËÓÇ(¼Cq°»Ï"‰tõ+„+ò¨rà6H–¦Q›6z¯Ø8Ùt4ʺ›«3$-âlpÜNê=î)ØË¨Ë8Lyi@ ¸$2j—elÜ5LD6¤mý#€Ò«¶<Ù<Úþ_¾{ı!ã²Uª ã•óÐÊnÉUNÉ־аËþ‘׎iÑrš-l n4·‘ŠÙû‰* •ÍýÈÊš´rp‚­ˆpKÛ\uŠJó›@ö´n‚•1jlIÇb³‰F÷é³HíÝÜ›e@Ìœ ë¼5Lÿ?ÅFœ0;0Ó<8K]:ã‹ï>ŠN6Ñ”Û^3*Ÿ×opÑÚ¦°ì|Åà²Ä4•°­Àúõ‚ˆm{£ž'›Ê÷]£¹äLÁ‰ôÉä‹GòÑ.©jÑüÿSwu½åÈõ¯Éþ¬e’E²Š½“‚Èî³ ò° 4¶º[[òZv÷t~}ª®Ô–,S—ä%©ÞÁ`fÐjûê²Hžú>UwÛ;ó| RÞ¤œFòyŸØÒUïÁ¶Å,!ç’;(¾ä™ð~~ t¨Šô#×%Ì>ƒt¢tó¸Ì놪¬?\Ý,ŸŠô%²}Åï¡Cä" ü®«…ò°¹[Ý|™óŸ?oaéΡá/·÷«õþãá‡VËm2w_ð¬n «,Õ•‡!àýhø¢XBW<kÒhìgo†u¤ƒýJ'ÉLø.ÄÒÞGÒiëgÕë\IiX““1…çKÆ,;šŠ K.çhIaÉͬ)£Ø¡($QÒ{úÏñó„-6<È´I„C^[&Òꢥ®«3CÛ=u¤4P4u$CÔ4Ù8Ý|Ûùj±iœom/±f F¬M6ºî§êí;ó½1>FÇÍG˜ÍiPv¼FÜ`ÛØ•¡êÚ˜d›äìaá£t¨;,›ðíÜxÖ’‘Û¯ØïbãÞŽõ›"i²zdhpèMÏ‚©ÍzÅçEø«`í‚òô¸¼[ü¼¼Ë,‘ÊR7Þ{=¡ÊRszJê)Î*c -µñØi%²¯³ÒäLzÉ}$³˜’êL²3 SÑ<Ô‘<Ûä1ƒLä„¢ gQQ]e‚ƒÞñp6ÓbñðÀÒƒ:š†ò`ÚªrÛX•ÿãÀÒùãÁ®©­=¡GôM_{j¦ èØ~öu™ ç&Sà,(*í¶ Æ^—ÒÉ@ŠqiHu±¤ÃÑbÛRdd‘P„¨êRŒì|öv|3QÞ=Þ%†åñ £k›|0ÐY3:õni ÎwH½krÒÀÑ•ïtû¸v9ùé:Ã6Lß’ ˆd¿m ˜5RmŠƒÓ¯¥8b»žŠ°®K“$m´MÖIi™e˜Ž"ú8ÌA,M2¸|°5™¢¡•|&\0¶.脺·*b¶±.o0L& 6ÚÆ}$æ:DŠÑ ≭£çŽk§{ÔD9ÏŽó~Ú…€ù¤=xµæ³8Do·‹¢$¯wçÉ·0H×e]-+NâŒàM6j•¯Þ)5,›q|—ɧ«f,cn’ ãs€$Ò¦jÆI»?߇Ì”ªºÆ1˜Þ"fkbe3Ž__ôÄËftÓÚ8¯²jã&#mÎm"±-¡®ö)XÓEÍjÿ¥|ÏÂÅÃê‘cû´obýšÖxXíÅ»½þ¤ç!„|0)ÂTKÁœ—¿V©2[= gdþ”ò.Ùuµù´¾:üÈõë?^Ý/ŠÙ$V#¢7†tU)ñæMÑgÀЂӵÍ;9òj×´Á:§ ê”;X9íªcÇT)Š’I§I¸$úk|‰j#éÕUÜà’_îìN°˜-ÅâݼQ~hǼDE(ª¼ŠP˜^ö g·Ó($ªJ…Ѿþ£0¨ª-ÙÒ˜êTµÔêªÊ×)2:EiH䜳é›îc\gé4!4äw–þ<,éÏ#ãˆêòZdNNF{¦3‘²6¦3+ÝiÈÇ.ºS¹­ ç¢[ö"²ozýuvKÁÈtà:nŠ L2qÐ’¥¬¦ÓYZÆrà“)Zê2âÄ¡Œ{K6f²ë›pŒ€Æ‘˜Û4eŠ(¥KP€ß*ÏWÊå{we²Œ•ueºÀH Á‚JªiZÜ™Öiñß½Ñù3<[±6=¸¯¶”KPGÏ/>óþv-vßíòýâù®¬9…M‰óL×d=ß„Úä€ïP T; %–úÆ8v/zRò!ªwøÅ}éf»Ú®Û›§Cõïv.ú7¶YedRÞ†éÛ“aÃ?\&ºx 5vÆámçtKŸpcJ¥:k ÓN7Dî#Q4ѪŒ!(‘×’Ö’çÀÚ –›‘±¶¦˜¾ÓÕjוĸ‚žñ%¤’Üž°yôðXˆ(˯I9ò:X‹®$LNˆ³ëB6ÖSg›Œ/)z±Éøš Ó¤2ãF™mK^í²:J$»Ÿm•ejö³Û(}õ¦’UÕû>㔼D&Í%â,ìíænyõUª×§\Ýmn~)KGŒZ> U¥5„z{1yÐÚùÚˆxžÀZ\b*“«3.NGÊPŇäÓhà’—’ÝP°ì­]eEÓ¾€¨y/Z§BWnº]kîW§mõð´ø™?º ¨‹™GrS²­‹X5¥Í6ðY–Ú‚R†’L 5¼yȦSΧ …õžühôæQˆ7_äѨTØ['É™‚«„,¯Ž „…îµÂZc´I–½¡ÔãÕm€MNÎgå¥BÁÔ|8»“l$kUÕ_: ‹„9°“@|Êz¦ûYô›g>mÞ<-RQ¸×?Ük&]¾Ú¡ªÝ"€ž’´øJ:[³'2Iœ°.®äƒ¼M§þ‡…¥À6@4xI›Ä?ŸiÂ>Xµ–ý°º°,«nÛ¡¬T¡ñl»+¼@Xv;¿aoä÷/~xÜ$­}ZȰ§Y>µŸ„ÑŠRY¶ÈKÒÉ£¡=ƒ×ý‰üA”È/ˆ Uˆù5MfÚÏ;åZ0¦ ãâÇ÷”2ÃÐæJÚ>èÑOËŽ¡ƒ}‘s?Ü%ó¦ÌO>üûóêæ—áde zÈyJŸo!(cŠ/…Ær &¥o‘!Hz&M9g‰}|ÞL–Ì+ÇÚ"¡”ìê5ÚÙDW¯ˆ9:ÕöHŒmºz…dœÎ¯£áW³ ÑWðÒÈ9×¥.U¨Tv#£.8‚lOo¹ýšþ)²ž4¿ðùûÊ»Í*½ê¾Z5¡-I:Rƒ²`̆:OËÊGVHÒ­–®|”ùÉ;!véŽdѨòÑà°wN“v ë®{íšò!Zº&|åÊ3æ*£ÚÎÑÌêFâûªQO±ÛßmÅñT]*d¤˜Dpù¢3("ùÏÕvõa]ˆœŒš.é äôzJªÇ­ P)é4u:I2Y>çžóh7½Š&mŽ$ÑÈéôVu˜ §‡þƒ‡•§øàá,û6&²m¤#±´é±F-\YeqêÊ’«º‰AQï"Y61V#‹^€ÂKÌêóydY~WJ;1È’B‚sûhx[S!%P=†(g­tá$ß>ºÿRRz}¿xüeùôpÇw¯Œˆ-È=XWÓÕ)ð~¼ÛÞ× á7&ª†&©³N9Ìè<°è\Ur±Ê©#¹´1IÙ1·ZÙ’¶Ñ^¦ƒW]F ÝMR³Ž.àòJFrŒF?z›Ú¦ÞeW êªì_ â{ŠBA&TÐJò#øhžØOðÓ¾Y‚·5 yÕݶ~^¬žøXÎø¸Î~à#úÇõíò×ÙAêǰû{þüéñËjà- æj/¨«Õ­”¨¶5PæÄ‘$÷ ü WrXáˆNÀ8"P»éÊØ©,y»G°qý[‚óà” ÆO‚4ü`ßÔ~˜[БԾžkþœïP XI6m2ýi!ï¹^¬o–×å×yÞ~“á¦,H x §ï AJø-Hò’fe8]ìJ[ÿ:•òyqwÍÿʺI¹—uoï6ŸgïoO‹í—õÍÁÌÓs‡š-PuÞqÖ 5¢†GH'm¹¼ÑRfC?Îøƒõvq3¼ì+SŠww—Ïz¿¸Û.Ï<á³õ³ ûýióþÅŽûèMzI@LªïšæŠÌ¾t䌴_8ÄrùG+ûÝÃãæ†}Šù³?{§é$‹H*×wluîÐê¾IÞAŒš|$ýÀ!ƒBÑ5é·ùï_×oað zÌ¢LµQ¼ÙÜ?,—ï¾ã%X>½ûÓ_þ}6Ö#^m¿ð^Þš1¯¾üŸyøøþ~ùô|÷÷/öayÿñçÕãÇÛõjÅæÏýæöø\3.;ŸmŸoäü¼ûnÿJ?=0¡d¿ LÁkÂ$09ŒQB­,˜¿NÈú/ L¨Mo`b1ú(0Éz¡_¬™õ‡G,¦ 㑸ßWÃû´é)äø0rÞ<üU¬óU¨òæÙƒ¦ÅÚ­Sä _äjäЙȡçÞ£7Ê%ÂåanÿPÒ¢!M5-,8ä¥|©“2¶³g'RÔ6âÚÉ6w‘¹ 
Aãbœ\½ä&nä-âê~±^|àÿ¿´§~àkËñ+ô1ìOâ$ôéðfÇÐe'CW‡ë{àݤ#€3ÎêjÜ3¸çØó³˜Æ=¡rIâ¸ø,ÞÃÊ2Ï)l½4ðYMݑϠ§(ò9öwµƒxÐøv€§© ‹ýõÈêë+C˜ö7j‰þgCþýûSô+tõ.ó†G(,ÍT;™‘Ãð ;ãÓIf&“Ùy4¸z²ñɱ˜„|Ï& €MT´öxiY%¯ÅÖœ‡ ”ëŽO,Få"øä›…h\¢+¤©eŒû854ˆ¼\ùW:ÜïwC`Ck*üî#l!ÐUØ¢çì¹Jm¾ZÔ6<œ"¶(ÁÐsÏ(иWðUZõãrq÷ôñM]…|ßããæq8R|¸~åC&‰£»åíY9ZeÑLÅèa+¦PX+ÍšÞ§)#+]&ñ‰C`d槃æS0Ƴ?jQÖ˜£4¨}ØëeèhPgí^ú@PЩEÌ:fIòFš×éQ­æFã\Í!¼c@M©cLVJ@T=œ¹‰…[$?|­èZÃìëÏþ÷_øóÿüïfB¾gûë°ÀÙ?óûÀ›òn¶û`¾×ûÿ³^<~ùá¿þmÀK>XO›ÙçÇÕÓrØ™çí;y÷õr0}f0¼›ñ‰¼™ýË쟫èaÃû¶ÚÎnî6[I˜FWÀ¶;DÚºéÁ0fB\…SÈ «÷2mv\^‹6"2˜Zã8vðÂu”4áheYqy ÞBe³±A¦—J½©¯Ú¶ýÄRë%éŠäª÷Ígï›B‚óJ'÷ ÉŸÜ7LÔú>,-kã¤J% S´q4…é .è<sê" ¾AÆX§u XžÕm´Ò0»£Ü`CÍpÄÁ>P…äZL(̵LüÚ#C›÷­<–9ñkÏ*‡ãHha¬œ/Ь™0¡Ð{ÖyÄj$rùH$¡’HäýdFÈê-,‡ Ç` ¥v-‡tob <ŠCJKYo.×õ-Ò2ÙL¯§I_K•i—ìo~•6õi•ì/E£ Ôé×ÎTõ;;]ì\#«=ðÎ^0ñÈ€ö-NøæÓúêð#ׯÿÈ*hO(ur¾5ûTuç;ó{O·n4ÌüÚ±³ÍÛOJéM+mÆŒ~Çn¼QÔ äŽùªÖû`Ö´¬ý0¤T­Ö!JÔt´²<]ë¥5ß«@× )=ÿE &I#qoLb9ú¨®õd€,]¢ˆÊ(ÚµJàë½|õp·àO¾þÍvù$K·qHªU¹…ß MHõÐTøõ£…ÚŒNˆÊ€(o'@öÃQUãàã1Ÿ%ñÉïhaOk<¿.+œÀJöЗ€›¶ œPÛîà„ž¢“·Âóê/Q¯àŒþyÀÅóÓGþhu3ìóÁ<>óùK~ŽÀÑÂú›Ö¹Áïs„`¬±2aÈÞ‡)¨£&YFн Ô^Õ‡±CòkŒ&JCOpdSÐóµ[ýM8ô°´LôÑ2W×– &±®³ŒÓ¦ð¬¡tÖ‡±m~[€r¬/Ð$‹{A-&5†ÑÚGÙÔ^–·kÎX9%%Al>| ºT£3øøöîÛ1š˜Ê`YKŸ †y|¢y*üc؉îì=“ЉñêJýé’ï:‚y¶Jpb´:ï»Æ¬Rêºójº;­£],Œ÷|Iu¸hLH›îº3•o®SÝPwþËŽ],ã[ôÐÿ®Ñ“ëyó1TéÇ}÷X)!ŸT9Iµõ/å-2È[Z´ÿs³yx£)¥}94o¿“ß³˜IIÕjyûò™V±”<{7Þ³^LV6ÓJÑùhaåÑj~'o;[íÛÌo–«OËÛ7u•¬C5{¬{Ä~iû§¬¶³õæóìnóyù8{ú¸XÏ^2VB/E2.À£Áû·:ל= A9¢:‹ºÃäMV;.M}¹‡ã0Æ©ŒÓ©pöÒ$ÔtUz0@ÁPÑÅž°î(ê7Ñ¡ý‡ƒŸwúÁËÌÁlé;ðç…¯𨪠ɠŸ‚x^f9+ZPžœÍIÔOÖDæ¾ðmcS„œMµOã³–ö¢rQ¼<È¢ÉÜ~k^. šƒø…¡*hÎG©7%KÙÚÈØ^°d (\‚‹4sÚ‹SÇEܳ»'mZX¥­Àœði7‚P¹võÃÝhI?­Dv|õ6«±ðÙõN¨/]8~tGØbª*˘32-Q«âé£Rk‡¯r<‚Oâ«òˆÉ Q6£¡µ#5ÁW¡DVÞùP°VSuEºsÎ ³øDoo½"JŒ‡ M¹ï1o¢7…âÒ*¬8»»ì,8UX7¥®Œg%ØPXÍï&IÍç?™‹s¤Ð§o­‹ò&­,«9PûÐ]J¯‚‡:½éBïH•ˆQc¤çƒ7‚œ´Y_J«Ö h s æ„×#*ÞÐ;²¼~Úq> 0”Ó½~ÜXD‰màS\wÂ,v°ÌXHoÝ.ÎÙÉ2û…¶óͧõ|óøáz9ðC¿àïøà¦ó¿X7°xÄ€cBùªj/ðaB÷9bÜÏâ‘íȈâÁfO%ŽZr|œá×'H¸pt JÖjƒ§( ×AXM,9¹@ ©Di ×ÞÖYrØyDÜ ft1KŽ—ÌÛDåQÕŽšRχ¬jn“o¿] QÎn~ þ§Ï@‡éošœŒ´Ážtõ›Ûø`ÅÒqpìëMr³%8%()ðZSéÀ1Á´ò–w[ìÑû¤ám=ÓÙö»£îò‘šLˆã×¶Fh, @Ö*… au÷,øÞ¼^”b{â.+­ÎtÌ´% ·YôFç3ЧnúÙ}c+Š=„š}³ê¤V¥½ë›â©ç€¤—`íë" þxÿ‹Ãîl¯^l—ý¹eObŸàyØÜ­nVüœOzq÷ðq¡çÃ'_æû¿ç‹5%¶Ú” Vâ‹4²[ÁVòY †€0úƒ*ÚËH¸]€SN›»Sf1kn›N YåÏ8g»Ø £»wA—@¶QlÚT¥¬&ÛÙ.f9ƒÇˆ]ÌK&B¢ÌßvrrȚͤ@eC÷å@èüI ñqªN‚1®‘Ìšy_ÿÒu¦ÓIÓ÷I2ïzñ|»*Ë>iíìy‰L‹T䆦¹8ýd\ÀIãœr¤ÔÔ~F+Ì1Ÿ]²Ç¦£´—‰´2Ÿ²™Ï@6èª~s>R¡¿ùlðÿÉ»Òå6Ž$ýD„ëÈÓ÷ïîCP=fŒ® åÙñÛo& XèêꪂìY‡ AˆÎûü’‹á³å x“‡Âá*#sãý¦µ&à*Ñs˜>‡ŠI&ÄÒ,bfL·«5ì:O—Þbs¾ýŽå#ú‘¸{yß¹ÉÖ’R¾Î ° Ⱦ׆V?G†"v'z)9.8v1âÀ\­ð) 2r êD¶!±±J9Øs‹9&V¡®>#PšQ5‹ñ%$¾éŒã±òz÷p’¾Âk&y!×eG«ès‰<ÅšŠeMj_}¦55i2’züî€x__ì·Ï¿|{þú¡Z9_zë¬nœï eéZlIiÛÉçàI­ñì"r‹ÔíkÉ™\Õ†Wy‡[}‹æ•ùÊ1èb^]XüÄjûjϸ——ݶȓ+Få¹PyP¿4ì=ÈÅ«¥I‡wænia®Êˆ;ç¾Ö,Å)…`eR¹uááT¹yx~úöý¥­è,pÜjzÚµ.å§nx‹•NëAJaDÕá‰Vö@ö!ÀŠ’ƒbm*‚è¸bûΟÈ1¨äl¿¨!Æ%‚öl}ÚGLG¶0:ü²äàH÷9B¬ ¸&Z{XÕº‹`yR_ñ¡h®²2eˆ‘ú†Ù·›HÁ1^bæn|XL$Bâ×pÐtBW ¡§´Ìù£­›g•%Ú-:˜íIX»t0Çéó¬FG.]›0˜íÀ^›øSPf~@=”õ „ñùá„àò(øÛo¿}(Óø³Ï¦e9vƒÇcô᜞M"ŽÑ`gæÂhæDì+LÏÛÞܛŃ—¨ m¡ššâ,P$KI·€ 12ù ‰ÆP­‰R£"6Ÿi¨6ìƒUðs#ZñÊõ9UDl{ÙÍ5Eläß‚»¼ªLN›Îø>mÞsÊ g\^ú´àud VBJ,jšIVjͶà*#Õ/ôõÛeBÚ›E,â‘bvôíTŽ¥ Øë FÀ+Mwü ö,hXíl´WÙhªbìëÖå4Ù/Q±šÌöõ¹‚_C ºjS—Thëòˆ€k×̘wM-›âê„Þ ¯­R¦[ögß’ôW˜»çÇ<~y<üØs`­öíõÑ ãƒ—zÚ~ö›²ˆ)#Zî×èS»i7nšBSDAÀj·Øöš›5Ï—‹×ìO„Òîs÷šUÃtY¦kÏÐÓæ±ˆÓû}öCiÏEÞšÍ=ïk ³¬r´Ø`—‡˜‹«¬µ´˜û¦4ä’µcªC!§¤¾A2sJc—âHà·.»±mw½i§è´©ËÆbÚ°\gÁ=‡È­85 4Wõ 1$ªOHbµO§K9ʉCj>.‘–0)´˜NJ9uáÏÛÃ)O6Ž?ã¢äcl²Däxì]Šd(ø Àª’­ïÌUÔû*Ï|n ««ã€Ó3*æ‘sfÉüX0W³“H}¡¨nš@ Þ‰Vý{̣ߪ‰«¦3 ª–wTC±`~F•!ÆÓD7!K |¾}7ͰO',·ÂÎâ³wžo®‰‡KmÂkñ½}C¤WaØ/•hWš`añ„¬v9R0É|ãý‰—‡ß?þaObÞÚTu‘s3x2.4·è@3ˆž(g oÚ‹¸HŽþ°ßóï_þ±¯DÞ…°2¶¯ M[hì´‹ôŒ<ƒô¶Y°ÿóŸYs¬Š.ªk˜beHÔ¿Ý*ÜdÛvîî¿Ü?ÿÙDk^ 4õ-5j2er$ÆìcëÓ§{gHwX"¹æbÉi6{ ì=þ©&åT·ú×âœÂ;¦E7)[àÝÕ³ÐË++cšþ¨ÙÔŒùæKöý¾ÃÂí¹³mRŒ$iÁê'JÐw´@#óê{}J óÌßžŸ¾>?}ÿsÿ°ª?¿¦ (–¹†¾CÍÀÑU–Q Ï84kôøáé‹§À5ÞœÿÓ>fè3,;è‘0fM`‚ùÆ>ü鳫cáðüJÝˬú˜IÛ™ÆDºk‹ó¦0QÓ~ÔHoò¾ÄZã,À+]¤Fžá彯’!ÞÚË_ ´ÿ îÂÏ8þu¶_ÞJÉ ‘°×ᥠa'Çmh¿yÞ 
û°ñ´ÐÀãaXƒw]Q­Â;ä°xÝáDŸ1×Bö¹æ–Ŷ± æ›Bˆs–P2F9{L;=~xùå ƒ¿¾ŠÝ¯ß¿þóñ‹ ËãÿžŽ)Üí‡ñÊú*‘iÁrúâ®j±DaKÄï@…4ÓI½[úéŸ;l}xüS6ËŸ>Ž*Z" vU,áÁ‰”7šÍ€ m©üþxÿéûïÃH§‚±k’Ã^Ür»s&â軌—ô&F=<ý¢ƒi¸ˆŒG”?/:{P³Gè…¥Öcˆ*ÅíêÓÓp0û¯Eˆ>·8ʉ(w©Iš±árL0¼¦\ÚýÃɉ}ȶB(ŸPD ‹£©kn&ä8‡œÅR²¿=ì»ÑX°ïŽ,âœBc‹£¢ÛQ¯õšw±ÔMøõ-7î^Û¹máS„ëÌà±Ë¹À¶õ&NÉgÓvlóKÐÿþn¤ò1ÔS”ó6…•ñDÔ‰ãHÔ’·É«c„R*/¾Rh„C²¯MÐäÌòSŸ®ŽÚÌîÐŒ~¾¹?:v=OÞhÛPÌû™V—2t¡fYÞ;-)Zææln\‹=D¶6R1_§µÅØ—æà„»1`§û”îó_à.‚ ý½™«çgûãÓçÇã€Øî_qw­xÓ2zóimÈB©O÷[}¼¢ègLãücº÷G–,”Ÿ¾}¿ÿðéñåîÞž±U[”ÂXÍ’÷1`Bé&îRT@ߚȡ5‚½!Õ¾¼…Ò R: 'šŠ¶R²ñþxúôqK‡íðÆYÎØ å!ô%?4º0b–Ì0s¸éãýãg§ÖƒóñËË/öëîãão÷|jÞÀ KJÁ±/x%å Jû‰=|óèµ+ô$yÚÊIú¨ÍaJ…7ìLù'*}~|¶ßîÌ–¼øØ·ò}ƒòéåûóŸ­]L€tݳ:ί÷ñ"Ç ’ω“ÊLwðbRïSŸî_^ªCoÿñ4³A%ôå`2eŠÏ$%ú™ïŸXûp÷á/?=¶…öfšÈÍ(Ò×§½¼,9ˆÜ¢E½sÓ°£U9¤O‡KN?²³ýUË´|Ea{fFÞëäw„è" 4Å8Ī›¸>W¤»(y9Ž2Œ’Ñ|ÐÔùÓëIêÎm›Û¤%¯™X:‹7nç';ås¦´1š‹É§í‚e³¼ ;ù4iý¤ÒÃ_þçÑauÿÛÌÈû%ê»wøªwxÕ»÷kÔwE|Õë1+Ϙú8B7àHb½ÂG™äŽüÍ8úÔ1M8qwªR©»±ï÷þ.¯}ç¿nA%Œ¼Ð¶3™ ‚ÐÇ™ÁÖ¬’uj£º8š{Á‰ÖÙÜ…hB¢¾’MI3híýû‚tãýâ÷ã|µuUÖñ h„…Ú²Å&”új?IóŒS\ÓÞPͬ³=}»ÿ¼;Vxvÿ¾;&öêÇÞt¬&ÅÕ÷ÏZ90E ÚuàÔ8ÏN8F3š‰ÓuJ/…ª“¹gpÎg†,Mùpž~ ’Õíuõ2”Q–J“Ùg¤‚Èj ‚’,rj˜S°”¸+á·È³¯’™“¦B hŒ26i†T\ÌcOBÅ0ú&ÔÏ17We” ä.Y žëê09I àWQˆ2’Èÿw¼sŽ 1-ö¯p7@ØyÞf…âl¦Ø9 ž`8§Ç ÀÎÒ-/gæê©ƒU/ åýœ3š Anr¬# “bì§à-ª.Õ·Wgk>X~$Í7Fí/4¦Å iMÉ`8¤sŽú? @êª(‚ô­ãuË"_{_`æ÷›¯"9 ¼„’,vÓ(Õó¡Jb·ßˆ&EópF•1—P¢Ÿ4ÎØd,ú" ]æ!]ÀœŒ·NçœJ—P’úÉ6â2 pH· %v!½/B•]å#*P—n¦-‡ë ©ZpÖ“c `ب*Ñ´WëJny`¨f‚ ’ßÈ3FÉåÊ-;º ш ]JŽi6ô­Ñ9åö­q*½B,¨ŒCƒŽƒSÂíyWùê'a(u)=á¦Óf!'áÜ\úY§Í0´àYªºnáµJU×éÐÒ|·r¢Ê]ÏA´EטûRýÄóu=9mI×Ѭ2@Öå€?½áò—ÎùsHYSìÒrÕ¸å4yûßœ_£–¯€F¨ÛbOç[ õ³…ªÂUÝV‰Åë'ZŒÑm Xý>ÜzÝÞ×C¥K·s ùg -à)•ñ²ØH 5?žÃÐCÓ«t[9¬Öì•ÈŸW¹háXîkèøAŽ ú¬ûA/ȼi¿ôÖ>Û´ aE|„¥Š®î$+êõM#¦b¾À× ×™8w·È<ÿ)õ‘,Àª¸lªÖVºìÐx>¾ß_#„ ]©¸EA[¶ìGiYÛµûÖëB]¾ôÚ“‰gƒX&SEËÊ!Ô5´Ø“}¥Î¬,GáM©) ÏûÞ®4ÜüÆÜŽìžÈ™Ráx—±)Y†+ç1 ͹T‹ÇžbüQà ãöää×$C£$¼ÁpZpo·û&ú•÷ç»kÑ£DÈÕë¯F/*ÕvÏ2D@•p¤4Ô݃߰?ʾ)^˜è#‡ÉáÊñWÁé½Ú ~!­ö #ñ䯲8e °©‹Å)àì¡Mð |i»Ç⌈©’"ı¾¼ª/Ãú^ÝZT†ë|T6mí2Ü ¶ŒÚ˜‘²è„üE€þÆ ˆÎßgñ9E3ä©6½';F“ÇjÅßt®8žsFÉ!&ßÏ,ˆ£±¶˜ül‘`×a3ÅÉßÉ|p+æÀÈ’Šc»‡Ží•÷ëÿ '+®ÊÅ>vèª(„-sÀ)‹\(Œh ΣL; ,ŠP)˜ä™JjÕ@äòüÞ‰cìƒï"«¯Æ6Ø´h!C—}€‹#ã „“ùàÌ. 
„ ‘‘ù/ØŒØ`+V^F¹ÎF$BìRçË]Uêì­³(í§éG`û ¬d HÕ€ÀWËç©Z ´d¤Ô*<£Õ˜‚€e­Á5´è;bèËPxv~•€l—ùÅmIËÁast˜š ®QsÈ!A?¤Õuö +vMèå iƒR0Éln ¼2pt@Ý1Ç\Ÿ°«æ€R±"t"Ù ÁdÇ(Ó&sP‘—怘¦D.îõe‹x²âµ‚ÐØ6Áº"4äƒoÿ\e2±öõ, Üëï+ŽÌ­xôhs ƒ ¨¹¾6!{oÉ’ùjc€ÍÓ`¬ê=g(†ý'ª é:þJlêZÖbÁH_À(iÃä7ûK„}|Þà#Œ¸œBóöéõâT{¹gnR ’ˆ×HÁŠIUÂR™ãDŒQR`þ+„Ô"Éä»)¶ì!ÛssöS.Ø}M¨é ùÀ’y Ž—ª2„ÇcjK2b•‹g…Nt2Aáýš-h’ËÓ3÷I nÈ^5åü®Úš 97ÿf!œ Ö$ˆS½že¤*eŸg´³ªY-—ÌmæÂâ‰>ØÚSia.Zêݬ¼à¾4¶±î|û; ¦…fh\¯«¶Si…QiRúŒˆc$È/›%l‘ ¿_BAœ=)m®T&¥sBò˲<Á1x•×Ý#ÄõsYïrîwµ˜ËöÙö —0©ÓÔÃY%•Ù;/¤)•@K£#ÊñRðuÎfº]*«–^0ó°ŠŽ†á"£ÑòÔгFìÙŠLVa‹s2—¶A|q,A*_塳W,køK »]Õ²E¶:@\×mûˆ8]-)KRÒ_Ì>K+–ÇÎÖ­³Ì qýÒÚÆj×k-¶ðß.Ó,“ÁÄö“g!–ÐE)fñùÖtÓ}ÄÒYÐ÷¬M2Uìýà"g)Êâôô Îj˜½âSc¹ètCVòCË¥‡nƒ3¬2Êzᆵá[d.SL}kþz c§27%ès9Ę+Wÿ½«ím$7ÒEØû/k™Åâ[9{ X`7wØÜá>‹F–g…Ø–á—Ý™÷߯ØÝ¶Z»Én’šÙË-’IÖ#·šUdÕS/|Š=qÑc«’®!J˜=¼û  oD„Jk%T–Kõ]ª¨Ã$ï|8ü¡«'dÚÈîf‚Z“ö‘«fiyûÊû@@íSÎ0̓+Ã_oD2"eé=Î…¸Œž€Î ' z" Â¾ƒQ%Œp¿7£ºþ°6/ãÃ$š£KrR„²•ƒ´±ØvBé ³µ¢ºŠmmSêëó2äS 1”p_$Ö&Ú¥ôC «xåɨíñ#!Ñ^<—¾úßÖýÅ›òV·Ÿ¯ë{ÞíþáÿXÜîî?ú/â÷Y…KÒl=Û wóéa³ö³ú:B4÷ð¸¹Ù~jvÅêúÂÇx-zXì52¥¤¶cYîµÅÏô/Æ¿ì×ܨé«ku¿ÞÜn®ç <Öháo»Í ë$›*­'Ï}š&¤Ãæ›câ8¥–B+G¹ÏÑ8¡Š0¹ú½GÁvýÞZKŒŠ÷›Ñßité:Ê-U—Ñ¢“3¨ãXSÀ&Ó¥j—ÍMÈYEóñ>Qþ×Â].^?¼øïûé¯ßÿõ/W‹¿¯y{ý¼øûßI,þÅý¼øÈ ¼Z´?X><îØr?ý×ýêñóOÿñ§Å ŸFÞ„Ï»ÅoÛçM£Å—§+¿È{¶¼Á¹Zðî]/þuá÷û=‹†u¼}Z¬owO¼ÂoV`ÈӧЬ I†Î°!­ nHíœs—SÞÿÏO÷ »PÌÞ„ƒÛg@ø¿“í#—–ãBI³'ÕøY’DŠCX‚¡¬¦"T8!ø?M9ìsRIŸêþoœ\Ñ×”/Êh ÅÑ BJBvûRK]QIYÅÚú¢‡v­èÝ :Š„0;Ñ¥üž…ƒÍÉ£ÅÑ sãö­ôLhê_Og$¦Xe_s#ºìp·PëNð/Þ·SL¡ú, uC Õ‚Ì™‡-™²ª=›¸ N‘¼üj‘%~>´É;¾(1#ké´¥\ NÎ!EñŠã¥iâxE‚‰Â¤þÛ2KaAŠú¬RàDuä5]ËX…ê8^KÎ8:Ë  # ')§žçÿùÃ>fص©’dޱ‘„3×£ Œ%þ:Ì?ÍaWu–¬¡élwå™Éå9K [VFýD˜‡ª'B© áP9göZ¸3” †ƒ @â¿>k] „¨_ت4Áü÷ïYp©´4žlö¶åG(kEíØÝ‚ÐlþôÓBPÇ5­¸RRÊx[ï+ÿº÷Þâ_þ­ðéî½€ãí{¸2wqÚºrÉ5±B*!£ë^ÃK­Ñ¸†a–¢·,HiñO¦ -ÌQn« òÒàçYò’‘×Òú…]ßæñê»ïÿ|¥Yö¾‚ÿp¾( Á-ü}[ßEÅ&¶½ùÕm¶…¾‰jOð‹w‹æ†öûltž¯¾{~¾½}ñä/\/¶×Wj­ÖäVêÆàêF(xwˆ»¼R®Ì—ÿ8Æjäoå *…M FèŸßƒoŠø‰ÅöÃn÷p¹ø‡~£n¾¿¿Þ|âoå÷øãÂëq»¹ÞÿLË„³H m[ÇHíZ!Ä¥Ó[ÌüË.¶þ¥x­7Û_7ׇøGIµtžSû¾žoèVÖ=…Ìýî7ÞŸ¿±;{þeu¿x“DzYüÉ$:@a• ¹5Ï!U^1PR¡©V|²Ð_ \é– ·õðÇí²ùó/þÌý®m#gëëÄ*5 Iúø2öÁXnŠ{«?µóòÇÍ#ѶO_L#~-ľßÎ×?Žf¿õ¼Õ¯ø¾»†ÆJ!áÞ”òÛjûÌ sÁñÕÂ[Èï;óò*·¾û–þüøyÛx³'Ÿ½éx±e»vâ‹°i}ã_¸ðêcŒÎ_§Ï‚òÛR“z¶ü#xÃÉß³übƽ2CünKúú… —Ârdb¥š-„æºîÍ6–tSö?5PVàR: R é„Qˆ_ ’Ö¼’–f[ÌW†lF'(ã·Õí%ÿ׫À ý¦‚'Æ"‹›ëÕóêéóýº×>¿Dex#1ŠÍð4¸†iFâl(Ú,€,Ìâ VÆ¡³?/šÐ~ÕDý'´½È}³º}Ú qñá÷/ÞM½ßݼýˆ'<enƒr…¥ ÍÛµëR<öö‡.íÑæëºmuÔoî_J#lê8ÔFkÒ÷`ζcŸX=/Çb µ÷ú%#Y…*\Á‘fv{/’qIœ0MÌzw÷°zÜp¸zü¸y¾úñßÿ¼HëS8,éì{~~aݵŸ4äÖêâií~±]›ÂæH$$yÍQêÝîz¿ó©O/k¿½®¾ëÞüýà G«_î]ݾlÞ7Ñ/m }ìòœ¶´x÷®É¾xѾ¶M¨9Êu9¶IK=¯ÇŒq–i™(Ù2ù:~Ä $”@SÌ41̱Ý‘íÂ’,“!M†›b™”óAu–ÎÔ¥¡crà&²µfÒý ƒQO¶ך‘«Þ¨Ö‰æ÷+Ks(Œ‘'ž¢6ÞZ«,‡¢è ÷EXÚ‡ÂÇÇÏõ¶£kšgé˜Z+;–\¦ÈŸá© ä4¿‘ûý=— ¬>0ü¯_ÞÔZ+}ý»Åˆ{ñžeÞáéM„@ ÖÉsîù1[ý=ܳyôQõÇÍý¦µzýçÀ¾—6sßÏx‡þÞ×6ïÏx…ÑýoØlj›åªÍãFâõd_¯çæ#,î«Q õÕ¯ÃG]µ^FÞ¯,ÍW£ðEœ±ŒómßYvËTOP³<°~Å <•h³Q¢µâužÁ\…HŠžÖ¿l®_X¸—olܶcš<2MDjšišú}=3¤HN¶CS¿oÔæø)óïFµ{—f¤›ø'V #!?ßdÓm*iM¼ŸØ'C nt\hQiiFÇ“ºjA“ŒŽ#5:o.ÁèXS=ãdý,ñÕPB íêgœ,©/‘pê¨_™¨ßÀ1ŸÉ6U³·qüÿ¬,KšøÝ=ƒdy3妉Œ !U–MñFiFʶC:Û¤¸ &ÅH--ÆaŒ!Û¶Œ™# ”(ê­,Ñ¢hœ˜(2ÀxLRŽA1™êÑY­‚…qÏy5(Æ„1 ñ &%|²yçÞ…ÉÃÁ{Wâ4ã‘ö-=¼Q¦'Ò¾f ªéçÖ«,¨‚sáQùÑ~†h¦]QÂ#F;›„¨]yÛr UzKKŒáH5ŲÄ—`Y¤¬UXŽÆ}¿d-„B‚"7wMŒµâK€–¦ ó .üÛ+p¸°^[gliÐ2ñ»ûÉX“ZŒ¤0 µhsfÀkÃJ)Ê5/&ԷؼgSnV¢ŒVJL—w=ݾ_Zb… =p‘“€‹‘"ƒ³U\]ÌNŽ`ƒ¥Å#c=4 ‰èæ%•PþŸÜ¦ýùún{ß‘Œ?ìn·ëíÆ³Ê¯n~YÁ²ùÉçe÷÷Ûûݧ“ Žq4íE{FËO_šŒ¢Îö¢£@ŒÈqì˜e) çÐhrtÄЃ¡lK)Ó-¥qš„nG YJ1Ú `‚wÐû+K3”Æ:TÚš †Ò÷)b^žÚŠÚœ˜^Š"”§f=«LJZ¡*‰ÂH)ûŽ4+}4ä“%B¾áïëC-W¶Pð7ü…cÖÇjÆã*«AÒ*7§‘É8Aªlã“jr@&áâŽkc¶Ç*cB›÷ K²=š!6[ÔSl[QÈ Þ}¥bFï˜EáÈÚì2ƒÖ´&¥u¤eTkÄhWGÕ¦)Tfè­,QmŒ“‘â)j³¾ªóÔfæ\ŒÖ·ƒè)ž>¬Aež¨CÉæMES.žÛb\m&”Êí­,Imþ­”60Emù1‡Yž¾*5]=ïþ€£÷‘’$ôó-lÿÌ—È·ð3žv·›^…æèûÌÇÚH½Ö7¥³.³Þ ¤*{)±_­¬½_ý±×¡öy>¢Šhl&PÑ›ÆÙ¨máίä‰öÇý^H™ý^ÉßÜïòâÝ”Ýå•üÅc¨5ºÃSP+©éŽÔHJ•@­é×zŒdتUþa³kL¶ºPoWoei!³d¤µ;³#uÕév¼UÈ.ùq„l½R‹•š,ôWé?›³ùÖópsãYÄ>˜³;ÑÁ×è+Cù­΢%—• wVÔÎ…û`U› ôsÈçͨâc&N{¾DçtÚN 
äxPgúÏäo>è’Æ|ÿ™üÅÃþ“ƒdŽuÁÊù›–ÊÏÊZrˆìü°A¬N:{˜=ë²e¯B|z…%Ù¤ƒIâŽÁ~„´3â~kÑGc0‹v6MDãs²¼FÉ1ìpñkÉJÚHmÔ "8¡¿Òs²ÚÈ'ý޲׳2èg+”կܰœMðÞrCÙf`´vÚ´8¥ …´é§{X™äwOyõ×Èh|MK1àÖnîvl!­*nååR‚°²Ôâ*ßä™ïk~´·‹Í§õfs=`Ç“:nÇ›G Q,‘Âjá$ú4gÄŽ„0j eH©]›!6ÐÊ¿=ÈÑ’l»Bj^é/¡€îöp6•¼Ô‰P`+“Ëð—t³Ù§ø Áî­ ³_!±u&*;*;› |àT†UÄ/ BóUÄ0ª¼Ñ² —ÄÒ÷ {ÅÉQM!o|þtõ]rÐt·ò®¯‹#ùi`„þÕ”Àhüéýx܆Ãqâ½;9*ÿVŽ|BŒ•àF6'¨ó »õ4ÌsîXDéxÇÏb¬T"ÀXiBµ8¥A¡H ±e£¿©Õ®Õaèm1qÆJX:?ýסceÿÙŒ•‚4hÂDÆJ¿0l¸•É1>ŒŽ]q㣗‚¤@‚–¢bÚÞ­>òqâ´‡ áÏþL#Ê6Ïýüø²9‚YíÞ2ß͇¯Dã-®è›ÙâCÃÁæ|åÕ|x¦á«Nd}€”W¯Љb¨,pzz݃4QB~Ë[dürw+¢à=˾ 4ÿÚl™ &4~7~ ç\Ö¬ÝUçå¬N3®­¦¬;¾ÛpÈlE£gåNñ™Tx‚ÐP¦B´L30¨Y­8—u¶•™Óh&‘¬eà–]²KgNBÅØÆÕq+»v¤Ñ«‚T|½•%”ìäÒHÞ¸V4ÎßžÑAÊÃ_ó¸W ÇénŠ#5Š£JÈ:Ǻ2õs+}K¡…$•¿ñ*õ ç vÅÊs”úþ)ƒ€A[d¬„¬ÓJ[~@€Zú0„´@¬ó>¼lo¯#ö½ýLÌ3#â÷$xYÑ–E3%Ã(Eí4˜×‰læuòÊyîc³½DæðÕŽ9 ‹A§Ñ—A˜×nY+uM€y òÙ¡¨¬¨°öd–3ºÓKf~É œÅyZÕÆy§^BR*ÊË´ƒŠõšÕ2K±ÚÖ %*>5ÎÔœfú¸y¸Ý®Ù¢¢=©^í-´öVPóõ’`rœsÛß9!%š:´½´,‹ Ç"œŽóN²J©ZTgãvº#;áZÛ ®H8î/s)g…™b§ çŽj¼2‡¤¡@8ÎK&ÁÎ2vX-™Pà’Û)¿6h“kÚ5 ËÎÙ­ryà˜Vè rÆ)# çHÝ2Á5ƒ/7ŸX=Þ*ôäzxán’­>V®®ŽÚ…Ê8Q!ùD[[QoMKßL±óÿsw¹»½k5s Y&ý¶˜¦&ã`¾–\*ÑŒ„–5jPóG‰OÛÚ¥\gs‚Ù«‘‰Ô¡ä’ñu´ÅJµôT¤Ì¥iŠçtÞÝá„03—f+IɉÛ#vî¢ßi‡.o±ÑAÉ‘±ßÚµÆP|Ó´¢c¼Õ“[¼åm¥2Öè’»&nùÙ,Qå1¢þp¢¤Mýñ¶Ö(5~×eÑÐ8twëo±Ëu0gŽýt0¬]åçÒäiWRõëé…д?‡ä¹Ç9#tÙ$rIÚµB%+·>ÂÜleøxdí5φ͢µÐ2XÕÇÖû fOø¾z¹Þ>OÃg õ°¬%ø¸0ËÃ<.fôÙA>(óÚ |Ja1Ö8ã ÅËv1,ŒÄ¢%JA§ÚE‘d³ñzµHrŠSEÿ”u褩QìñS›5¶ ë{®7·»Ï^èCÆmüãµJ@Q$N³ØG‚!!ä´Щ ‡«A§RÌ, ‘ j£ùF'Œ06zRe˜\°'˜2u!_ÔU¾YyÊQ%Fáy‰ÏṠ1Ø!•ücÃ;÷qûôÜ>´£{e©Û®7O—¿ÂrççÁõ–I>S9’ßT•¿“5úA²ð¥®h+›ñ[ï^´FMåñ§«YJäYÊ9] @0vjåfî6.W¡áÃêÇŽ"¥d 0ŽmÐ-fO@¥2ÊI-&›ÛC‰9(W‚âWubš=9dÃnôä„evÑ‚$Ðqî7ë´²q/ŠaæÉ½TÊôвÙóì 4iKHtyE;ÞUpŽ4†Ó Ö:$-ÐB•MaJrAñ¾IN.ä8ña¢çSÏÔhmâp ØÍ'¬Ðr´]¦hÖÈn—©€ •­Ñ—ó”í°FŽÈÏR7zx{%n†î²v,17ö«yèl¤TëoŽ‘Ír¿Úª9è 5!Ù„VÆ ©ñŒJ7/¾å}…ÒÓ8Æ3QR&À5m‚=‰•ÉE9Ãïz\³àŒÈ+ðtìTÅ\?(Ú»ÅçL7õ•ÞhÏ»Õýêc¿eýôxñ´ýxÏ?[¯&ź(ÔH`SEÂÑ5–æ\6㟱W"?±ÎS2ÓOÈÞ‹Ë—œý´ÎXJ+÷%³ŸíÍÉAUk0êœ,èÒ±î<’¸ÍÆ8–Ý”W&%‚‘˜b %&ª búÓÀk¬©æd‰Û˜ØÐúÚØòx—s¼0@±o"}qP‹ALHMW¬>ÂB—¨$ÚùêeG%m¶ŠnªÀPåáìrä‘.KADÎßR@=ˆK˜ŒQôªØ’)Ú-¹4 G¢q‘À–80‘Ðûºp$ÆÞù"–}*a#(RÃÑVl aaÛ&Œš[Šƒš&«þmU†XvË„‘¯èDŽKñ³Ë¦95²&o\ô£ãñ†ÅÿÍ‹ÑLM=I¥Ñt<)¡gÉáÛ‡¡‹Kë¢xHä{º´|˜ùb% l©%¾Áš“m}ìbKÅÇôD'€ÚA ¸º]]ÈŸ³«å›ùãM!Ô´ñG¬hYƒírm‰ô²}º^[‹ûc¼D+=ÅzÜoßž-–÷gÆø<áÆ$ªª±{ܤªVðš²Xl÷¦ˆë¨ 4e&P'í#Çq¨·c&P¤—ÄAÜO#¨à²¦"ØbsL(¡8ëÈ;p-~vÌVÃÛPƒ} 7Z Ú{iÇ·‚MM@l £ÍV×6®_"c+äXhè]‘#g“V…h°«! ·—þпÎ÷‹²~"qè»®BööëqùEÙJÞ•†›¦ú¬Ëâðå@XÒrb{x'°üê,8„)dàÚBΕ¶;"Mã! 
/’Ɉ‡ìx×§œ–dvpK­â!вGIŸ˜U$uÆ€mÇÙú޳ü›wÇQMѵe €Õ1æÍ/ŒÿC tše¨R w’v£çOšs|>äzñÎÌÙ:<Ùpwž €O÷CX¹*cL‹‡õà䱭уs}ºŠ˜1Ò ºŠ2{&!îmRÀ$Ò?¢ 9¨SQ—P’ÝÀßqêjÇ@1RK¯ Ó%>îD¨;:‰xˆ‰‘[$W“RjñÜR˜ù Ìé1—B¡3Ó¬C_DÒ(ˆcíµ®À¥p^‡{뎠ït#Q$ +zÞH›TqÊ—¯¯oU¦«‹õÅsùe³]îl¶²[ˆY¿¢Ð?´cJ6(Ë!‰m<)F}à¬|¨E¬Ë‡´Ã‡µIpԵ˱¬'$Ø!\¼·bÃ:a﷌ڭö™ŽGmZ¥7±©"Ö–hÅl½¥Xb`Ag`bÝ)'ê³ù¸ög÷b6Ò6J÷µíŠY‡;8“¶'øƒÚ Îb¨*]iUµ-µ1âÚÙ5TóË Ú£$’Â;3…:„'7¥™£y j¾YÈÙ!>c>:‹ã-É.isŸdÔ0†%8Ù\…ªb—è¦Dµ—è[à˜<ŒkÝ Î.¡ÏˆnF7Fî·$Ô“(E7¢TO%é­lx¼mÆŠ_Œ&¼d^ìÖ“cƒ ¶Ž-ƹI7޶ˆ Ù1éöúñúæj Àeø™.¢g«ýƒU¶Ë9ˆbdÄù—ˆð\šôäMAKz²¢§ÚÊY¹.-ÞÖÐ7é™å(”qíX{DJ_«ãMADM™G4mŠ5gz5EúRæ'2ÉÍ-ਟht¼u46wÉ‚ê–$Ú8ŠŒäŠF×¼‘;ªRpÞôÎÀ‰”=rjnUQÖp â{plµm7b¢–ö»KµìMéìêäzÔAíj+Õ°ÇàzÉY—PˆŠÙ€0Î4mæ<P[…ÿЋüàFP‰s/GpJë @𠈥ì^»Îûž¯]Ͻ’^A1U¬DÏ£ K$ÁØhpè7)í=Vï'¹´A]’xÊkŠŒÐæ´ªù/ÝùDÌÄà·d8ÐNúŽõ¦wz¶´ qˆzPÏ‘˜*¿áÂnw†(rÏ4ÀS® iNô¹ùÉ^p–^ ì¸*1'QúövV”æà]aæv[zGÒµÛ¢«ËÑÊ–ˆQÆì/f´ø’Ó]Oâhâ{ëFGµÄüZ6uæ}1ážÉ’QBÌ£˜1zS T ·3 uÌâx®:ïlp º¿NhA ˜pàGzõ!(¯Æ<š'NóŠkš¤UÝ’A›S­FüX2Í ÆyªSÎݹˆ9@êGqQ•=üè)ïΫZ}ʧ%žêÔjG@U8%;sBýÕÉ÷jjÁ}’ºÀe,u¬5Pƒ4u@£)4°ÉÚ–<±Ô‹ºä€;‘ª®qpÖö§2o“Tf,ëµtJ*3›GeŽò“hª‡´œlt_wä£ÄÒâ@1ì[U¡%eˆDx–ÇYzÄ0vÔ‡À˜féù"¥F¤!òÞÞPÉ8hP¬We‚ ÝÝH',’x»Úíà˜NÑ膔JÝŽÔÕªU gŒ®‘ »Cp€uIt`³i¤9 ½ã³}¥FôÐEØ‚UÈèb]¤­MþýáY@žU@OˆÎj^:kvÌ­ÝrÆVõzˆOÔ]ÛA®»dQ3Dd4D'dor/›½é”ªG‹¡ÿDIˆ)" m•—ˆ;(QȦîx°Y%äêjžû4=µHÔ¥O.°‡à^d½¢uÏܨÆCäI …,^‹'YÉË/V(o(Î)VŒ³"Û4é“<U+Д´”·8”Ü}XOåì’yL…$2ìNX­°/¥Z‘•à¨Ö¯Vãz# *·‹‰›Ó²†Îè9 ³±)GLÌâˆQŒJʾ1‹Éâ;žS Gs- Ä2Ÿ`cwŒ5™¸›‚¯a5zîzšvúk›16;òq=wú•ù õLÌ嶺¿—ÿ•½±YÊ+ön¾øMþeýÉæå*( °§ŸC›d¡ŸãÅ™”ÀžêÁ˜‹¶yË!æupd|’*x£þpHù?[rj“ÝSä#žÖÿ!ÂÞ”Ð*g—†ž äpLc¶†¦$e%-z¬á-¶ì=oÌhºÀ8‚6R„þô-« Ù+‹$¦Wé­H=-íg bdsñÙÙ‰tG¤ÓÇå*7¦zšÇŒ)›${ó¶(ZS–ØÖûÓS6Ô×\äœl=V?Ç ñ@UÅ|…ô‚--«ŒœùŽÊ³¶ì­}4¦gˆQ¨å›NÙP¡«í´4a¤?F VÚ€P$¨–V”QÛ?ÚgÉlåaD/«2uáÞ ±Ž0‰NŒâ¡;¢ð)„PŒJw´å>°ùš·aìÏ\ñ<8Ø„uŸcÀUyíÌÇõ3+ñF®™$—G¸ ®¨ª |DפŒZúœ6¢³às|N?–±yÙ$°ë“@šùœÆƒ/˜5³™˜*Ÿ“)ô÷9KŽ%E‹AÁƒR ‚¡„Ü2a—5kæ#Ôõ·<ÿ‡uèõR«Â†@˜P ÖBDùí“B0–5V‹ŠŒ:À(4§¦F·äЦ­ÚˆFž§è ëxlD2p‡Y"ËrLÏ{ôf9_-÷˜µ74‘:×÷n~;+ëXµï—‹;YÔ§í¸¹[ü^†ŠJþȉ$v¯ëâþ¼fC8´Ÿvõ W v39cpì%¹™V­¤­cõ;à Í(v x"ÒDèÑâ͵Ȯ̿dkŽH:D¨8Gø ÕdÔ!¡|¬Nuî §¥c(¢É` €àâ§µ*‰f²%‰V «F,Zt%÷‘¶–Ùš¸Náb†U±iIÇ20³Á“´¹æufy¹E+’˜©Py`c°ux¿>tz”xË»M…¾Ïàd‹ aãJQZ Ê¨Â_ޱ•à¨ww_ˆ‡}ý} V"ïJHëÄ6Ž’Nyn5²æ±Þ’L¯_¶¶3\feƒsXâ0¾{=!pLÕ÷ESÁ‰ù„õûòê 9s•Ö¢WÊñ*U£ Ý!öífT{·tdƒ1ŽOˆ±YûG!ØÛÓ¶Ô/±rV™xšÔ…éå‹%ÁÒ.Ì6Aw+çÎ= Î%{Ü9€¦=Gí>…$DØ–¸šØ}"!Í=Xùô8l†1`æÎ}–f3ʽc Ý.žrrþ܈"ù5ÓF*'g“ ýHéÍÔÆúÄ}Ñ P.. 
öYwuD 1Õ Í÷£÷ÕOs}Ñ[…(¼ø‡¼Ïãê«ÉI²‹Ä5Ê`Óš‚ϽâOì·+’x1Áüë»dÎ>d/H.x'‘a¬P†q`ó¼=7:µîŵÎÎÁ?˹ÿ&FsyùÃMXÀ0{”¼¿[Šsõ¼Z¡žÔ“Óog¯fÜ^þ°¸{÷~~¿¼üAöÍÛåÃåOÿù×YÒ7±Ýî¶uïÅi{!)ÁH ¯éìûÅLJsKáu\˜Åk–ç ¯öîîêéÍŒ¼Ùêq±X®V—?l–ýëûLJËNýZæ7Ë_‡Òš8¢Õ]™S@ž½z5{#‘Ý£ŠñÕ«ç‹õH9Þ§è-óÔuýˆ¸CF‰î(;F³ð¿ÌäƒÛÕ|1l¿gaªlµuîíÍüfµ<Ö+³ÚBq¶…ßQ‚Âq å9:3³Y¸Mò­>­,Ë@iQä ©Ä@E0–|•Š&öö”dU6¦x?äŽöøv@¹<ÛYª`Nbª²¦eö™v ”£25í[·ÌR0¦Ü.MûÖcÖHIÅT3rÿ\yßzPìw¼F˜Ù3 Xú¡ÀòåsðË5´¸_|v¬ß†7ìèMk¿ð»·z•N=D%Á0ÞLߣ ³F¦4´\"¦þCLCɺ6|q ŸdoZC_êþþî~Øv²ÿ¨Yì›åÕtYy'ò4)€òâªy‹ÓÎ2ô¼‰d¯3„‡6òѱ@84ÎŒDS*ÓÑÔ—u6é 6aàg¦Í‰ ÛF9Ëí”è QÔ#¥o'í YÔѦ´O,ódªRøÂÐ…õ³Ï?<ûßûùï?þý?.gÿ\ÈÖúeöÏ R˜ý ý2{+Ê»œ­?8ßøÐÿs;¿ÿôóýe°Ñ²îfﯖƒW—ºÀÛåEÌCs9“»˜ýëì»!Àx'ú½^Í7w+­v'VàÎ!xËÁ‡©ÝÃ#DÓkZO{ëlü²ÿýËš¿-¿Ï÷÷zß s‰ß¯ôÖ’7‘ûï}ÂŽ^É?êÑœ-ÿX,—W j£Õyk!cuvH3¬W÷ãÛ[µâ®o_1î~×…Ï›óõr1ÔülÝ¢®ÛïÅmV/Ï“ˆv~s¶þסAAŽ$ËÑC$YRx§ôâYb&ôyL_¼<»ÊNdù9"WÚ‰à~ú@¦B´­q.²—X“#Qu°mf°.FS©¸u¢u¬ŽE²'ýÑzÈzá!UÇÚZYF°®o%žYt˜›M¾ZîØèB•Úpgî=Î:gÁÔ—±¢ËW›vi´~TmresU&ý–­•å©M®bÙOJÔÆšÈ¡s¡L¶“ÀºûÍþxèp~À[ IÃjf’%‚9A4ÚþîY@ê¼,(mð Ûé—|qú¥Á+ÊÅ4Ûþç7ÖV„\jû‹BÁ6šßȬ†°ùÕµÏ.ŽãÕ£ˆùâùÿîgm€ÊòÇØnílp¶¾Þ1ö}Ƕ1Éíë|&åÈž2–ìÉGôõ½?>ûò5ÊxŠrôò•`6ÂØåK&IQö´°¬»W^ ¼G´w/¹(añ!çCoã#RL´yê’IäëûW`C _ÍæL˜"Û³E±…-šð"Û6 ]#5á=ŽÙ.6QQ©ªlㄸhA ØúxòmÅÂx¼G ŽÚ.Ž)@í•å/íƒÈX`¼FÕ6n¼Øô7^¤v;×ÁóàBî±B#w¢×|æ¦q,¶'zÍ£ÒYË“AgÖØ@Å—zw¨é8hàÞ…| Šj‡&Ž™Hv guÌD²I–¦­¥åÙH±ÚѺP’cï£7T¥8§@üZA[#«û\Κ¿¾YêœÜßîîÞïiP‡l–ÃÝ¥"Dš?Ï´{½¼úò˜T ½—øyŒáRq¨»QWœ7s2{ —O«ù^ßvv½™õ[,¯?,¯ž«ÊûÿãîÚzÉuô_ öe^N»EQ$¥³ï `X`Ÿ÷Áí¸'>Nr’ô fý’UîØ‰åº¨$O2˜iLO.å)}¼ˆü(ªÜÔ›³¿åž±_Ûþ1»§«»û߯nïß>^=߬ï®^$²ê–R-nt9Ž`jÕw·4c|^d.CªßE$+=D*ï^¨?Ü_Ÿa@{ÜÌ#åÂèwB"OB…Ól¯MV)åg…ûZŇûGx߸hÕä|r–;=‘Êq䚤î@Ê@“Ødm¬âäc=æIQŸØ<2Çq™ú¤¾k”V£ºýL©‘lwø¹ï_·¿Þ[4Ôîä<ÿ{YBÁeô²•Ô$?›*G lñ29¦ÖaÂ[œþÈ2áÇráZ9}„/!~Ô°W ”grûìëó$¿›z2ÛoÆöY¦ÇW‚°Ž8` mgIDUÁöoí$ËÃùÂÆ>°l£„’$°4J%îЫ=‘9VKÔß„=6¶?Lò·PžìùHU|!}m² <˜¥ mbZ†ÒÔöêÃΙÛ3‹¾ò†lË'Ý}èÎ ¯Ãô÷ žh]ÆÏX—ìùBýüH ¸p¶îà2"gɸÃvT#ɹ‹{W5´ ™Ð6úyäÏõÜ„³5Š'”Eõé@È™MÀLÞèóõ4Êä>ʘ ^˜.³O‹LlÛ®cA»(\|ãÅo¼hÿЋ) P]FçøéÄöOº K›p㥯£Ff³L#%›Å°(Îô‘±¿9 ¹rbÈ%5íé‚=ÛHá¹™3áð„JÂ5èÔÞÖ1m„z%mú³÷Ÿ5pÉ.6óÝ/Jp‰q€¶½Ù£xß×;^æ†î¥èáPþð‰eóå+‡/›ôéŸÿüzý4ëúÔDz7Î"Y„ö\V†!šu\–^ž"®zYgÝ Æ)ÔÔ\L´™R¶„ìE2uâ,VgTßg–1±‰¡nY6LBk/Û‰nÞœ—m÷G >\4éÌ“fãêqôRštž gÔ­M ý«ô-®éɵ»´¬…¸Ù®o»¶” ù∠Ò’‹¹ˆ 8?#RÒˆ#̽—ëW?ˆ…nÖu:³Uúàøuz×Ñ8†% †G«­s®û “K3À0uýÉ/éëØ¸Q©—r†6²Ó“úÕò´‘®îG‹³MçÎìYíøè–Å= Ík°B®%¤Í(‘}º¿Ýnnw×÷¿ßÝÞ¯¯»òØý—ßVÄNw.ƒsñ¼fôTóˆ XÂýf¼¾Öœ7wšWiÖ+_ò6ŠÕÌ÷Þ*&ÓXõ’ ¶¯ïy ·ÉU[#Š´S sà6@ti‰ï©«KÐoMÊ.W½¤zŠ"\¥½a*ð‚s“&ù¸ÙÞFàrVïì]raQÌcý4ÛÔ  ´+”¨ê›±ù"à%,À]ÐC!ÖÚÃónÓ뺦¼BpvSM£,ÒX«—Ês®éñj+€e¿Íا`IêúˆÐ8ékb¦\ÒW$ij÷¡$IªFéÞ×½wPI¤b¢ñî!¾«ÑçäfŒ>×÷ƒ‹©|rx÷¦ðq…`SŒ‡\ñ¨*{„]Ù}d!h¬AhÓrË… >´¢L-B”Åm~åœóŠç•óìS9¨0RðŠý­î®d ¾·å÷ˆoœ S†8^<\ÄÛ¦) :Aý ³gG+¨àí·z<Ñ ê^Í ×¼[pŒ«¢i`·3®uŠòIO4 ]&wùËAWçn°WŸº1§ólÂ{œgCj;šËA·{ÄÛMö¡,úñ–ìóºŒr!Dc™loèvCëŽ'' VjùÙ¨ÓlŒýøû¯µ½èÍ[ûü?ÝþúsI$†ò:¢î¿~ÜÉz¦‘Šwd÷øB6ÀÍ‹Ãðט"8IÖÃ~R÷_Ò2«›ƒ:F?gˆ`§Lï=ˆ/eÑ6¨±ò¥z)d-³Š/e[08‡Sùd{+@—g”úG8iŠð½˜Åg|)›!èÁ¿¥Þû8 ÁSW‹ß£«e+°:2~Ɉñ#¶$‘ô$åBÐGùÀBÐØÇë8±K…`p ¨Mª#D jà=¤:-w'ÇŽ¾£xFÿ,ÞCÄc˜p§T’áÐÈÅa«Löƒ8\|Я²åÏGK¨•âÑÅɤÝ»yOÈ/96¿©©YîåÌÙ ‡‹AGÿÍpôâµ®ã0nvå=š]YùÐ_ÅÅò=¦ˆØ´)8ûÌ\&€ÑÃå+\|ù”•“ÍåŠ÷Ö™í£Qªë °Dø€Á7¾Ê‘òÂw–"KÃC2—L l¯D!9Xpô^Ò’ÉÆa…lc$0ÂÌÉÆöéÞÙR(7ô6l×úDIÜbV«4yŽKp¬>nðÃÅÞºoÕ÷D¦aÀV¹ÑÆÇK›4ÇE_‹ˆÝä;Œî³;)ñ’ç‘°ñùW9ú˜2ÅÜÁe)¼¡²o;%”/7±ï8(?p’üñþáæë÷íóÛý¶ßo¾ìo®ïv»L½šOE#ú >ùx"n¢Ò™||®ª­ÛžÒ ÈYM\4©È%fq‘ýRh7š4Øõì¢ÉÂÊEˆ‰G¡ (;¶èhi“ I_KCAˆ2šD@?f4IÛºÙ½9Mºäˆ"]`†¨£ #’…žŸº/|îéùß‚ºrÐ9yøñ ;âE¸ròìsБÔéô‰ÑqqôÛ=‚Œ°rõÒåÙœŠ+^& t‹»GH =Gý‡A¸©Ó =¼¥PDpÌÔ·Ç+Îan¿B'ÙÁG/K¨Bo[È3Lm†¯u÷¢ì¥B†LØ``ô+#Ì!uok¦Q6¹JŒMi83—§ð»G0žÍ^ò÷vŸø¢§ÿxImn¶›oW›¶C¿wœË¿<])êê›<Ýì`HÕ ¤ «ƒn#ô«ûǯwv¬žo¶WõÐSõÛý7»°ø©¶/ÛÍú‡©øÕº¯vOw¿è1ݯ^Ÿw{¯û©ÿnw_ê¡ëɉH £qùŧÄêÊj¤_¾ø”(µåç·’³D ×ú\™FÄ.~Øt ¹•‘M+†Ò¤Yÿ~ÉVÿ²ÚYïbÌj…ä?\ÆÒ$§˜ãipdð¨ðg‹eew73W¡ ñ™R°øQJÁz]`ˆäÜ]xïÛ*#­Pø´jƒ­ëÕw ßsÊü‡R†QP8Y¤ ð­QIŸóYTòìöÃzÛdò.€NêÞ» q‰i@ãm­¤=ÍpÎ@bfç?I Ñ3]k,ׂ>¥~ฯùg‡ C÷§íæqû|ܰÞå>ìÖê§>~²¿«ûi³}|þ¤¡ÑÒŠi’ŒðûG)ȶT§,B¤™~™¬3½–Õ2‚ã1: 
ˆ0X!²—åš`Ž–\!#°ß›!Æ0qrµ£Fž³™˜Oïû%«—ìRÌûÂuÙ›ê¦ÊÏùLmžT“{‡Õ$+@VUKqwBÿN¸ä*WúSN$‰Ì»Jï>=iäÀ©´+°„‹ôlVhÿ}Šõ—Û­ÕbþçýýÃɵ•¹»Û®JóïêwRü÷+3»íõák.C³ IGÛÕ êY†ÏAc¿TÎAãa-¿Ø»^íöÕ¤›íî·íõkà³Ê]Ç»âÞßNŸ°_×þ!ºïTïª{ÝìÏ7뻫i¬º¥¿~ºm êZs†õÇŸÝj¹Ý#ˆRÖõ’F|)á; Q|ùî¡R}_¨Þ$gQf‘t©Ü¿ëhÂAe(9¨ n:E܈G±D®E,*2a;Œ%$C?ë>‚]È!Ä‘«P±ëkoœå§þÝXiÑ1çÆ}*gL‰1CÅ®K¶T(ÆËP)¦ICçSÄêTŠ¥@sVïI=„¸ ÞBxçÈ.çGCxÿŸV›»þÙ|µÿ×oü~ÿøíÓúùy½¹±cøéZ~·Ë½]ò°ÓíM DFó5î*õìƒ3Ö®8ÓÌÔÁÀ€í™ ˜Ì<Ÿµ º=A-Œz0úQÓ8; î Ó*–ÁΔÆUSû!§m¨Q„fß[†`w¹!ºd ë%o€BU†-@Wy*Ò{C¨³»¬ÛÛ-Ú%˜‘È¢pô—f¾ÿº]?ÿxÜþªÁؘZÆ~}™©ˆçuæÁæ°,0ª³XPž¢úqŠ x_ zÀ:ŒJy™=°}F!$³V6æÇ ‚ Vr¥WÇ’«bôµCŒ>ð‹€à8⢳޺¯“sŽ¥[²XuKž˜c]‹àü¤XQ.¶›³;!™\\´°Á §¸ÒØjä=LØØ'Ç5>›•òI!ùrÉOÀî€eØ­àäÔ©j”ò9H«^"'Y)kò cଈ#}¥Ò 8ëãòàü"š:#Luw6v8“‹âÓ¢#¸58ë/«½ÌÍ0UÑJDð©Ç2¤5Ú«?ã58œÕ*wÓ³iUƒ¸@«Î"4eyº[WMèËNÈž7삞—² &A'€*Ç’ÞQÇd£àiî´‹aÉTPÕ²`ò£™pP¿Êñ(€rv¦Ð‘êàgôhœ~~Ží 'M 5•¦ŠÙçðÓŠ\:3ä"V̓‡I¨ äy2jŽŸôsz5|–!dJMR£Bh;ïþR©Ëp@‹\(¹g´ë,òït.Œã#‡˜ÆðúìMáA •2¨‹)<ã¦Pß­#³]rÐèSs€dL˜ÈÀŠ 7~ º¿.Tj\‹‰|÷ÀÔ¢™T‹äA.=AíP¨¶þq½{ž‡˜Šˆçezà.BLtXBGâ*9IùioÄS 7{…'‰)Žâ&hP;êW‚O!Ë_ò"‹: «êVÆèÍͱ0áÐaãn¡^Ìr«ÞŒRì£7Ÿž†“Ð$.ž–9õgÕÇúÝoÁÛñÛu0“K¨& ÞA¦Ó {w_wK$ï~½Ó£³ÿ~WœrüÝ•ÝEά“ä¤\A€v?!~î ǽþ§QBt¶PëÁ³í-ýpŒÃEÈÑp÷@/à˜û$XŸ»²cbJsð™c· ŸÙscŽ“sÀ >cbª+òŽf KBW=_Z1ç”"E·Ì(«/í ºuìï54BõïëÍž¥Þ4îŸüZú¯~bìfqîãZ•5Œ*uÜøäK¦ÊªOi¶Ï=[e³µ°¤ì¡Û§ÖS★°yž2êÀû” p,Ù*‚\Rûà%̰5À"5&(ìåìsœ3ª©èÀ A)‰ÔM|†SSá@¼GØj¸gеȴižžnym—Ÿ¼yÚ=ÝivsTÃ>¦¤9jUY]Á® @ÁðNh3æqnCñ,ù˜”YÂ_fNl[j˜çhÌœaçiÌœ d;nŽZÅšØ[cäpak¢+ð­‰UZÄŒ1ÑC°ŒZ¾†®.“=x®\Uýž©åñ©…í}3€}.ðOÎ8uñßÓÓ¶‹þöõìúµ’ô’õÌ5„’¶„¨ç<¤FÙ¥aÖK%Ù¦ÁÈ!;êïËh*ÉhæsÈ~Vd×·Ÿ. 숭IF瞙֭8)æRSÖ-À‹“îGƒ§X=¡4Ž -5ê´M•Arâ |ûÑ‹‡Û”ŸÝD±ñîZU³{þãóýowÈÏ/Ì‚dòMÓªuΤ²aŒs%V ƒû}Îá»Urq?îz„);ûX@5è€úýì9Â…a˜Z“pñ4Ÿ¿WU ‘éseR“ ÌäX‚-5Ê -ð20Å–Éz‹(tÇ­ÝnÖëÍîy7Þê“ù•wœtG.á 2"ò`I²¹u.9yµ愹$Ù±ß6ºkÆÚRÜ*ªK(£UƒÈ’»^=’P4¶×Vp £±´¬fR΂ö+î2FaØ)u»Éûò–š­ƒ 1¤¥æ5ñ’1Pr(.é%æýÌ ~óÓíýæÛ,Ÿ™¡-J§’ãr‰Æ6½Ôe._EÚ¶ŒÆl8:`C’ :Œ"öžiåíôƒ´*!v H|a:8h=MT‚1ë@c`çQW^…Æyr"cRe ‡žt)‚œS´DB·¬‰Ûª¿ê#tXùè25¥(~ümgUôºw6Çr¾¾{ú¬ŒeýãvfM¸¾x¹ÀÇQ˜€‹Ú ÉZH<Ìf&ž&¢j@Ûi^ÈM¶6>]VZ‚- ?¤Ðv¯bš´5Î_ãÑ{9gæHôkŽhXŸÏT@ÝÒði4qÆeàh©Crõ‰0@Ÿ¬Ž¢pKšÔ‡ûëîç§ÍÍöú‡.e$Âü]7 uýOÔEĦ°K%“‘ºÉÔ8—1cXª¹Ša‘ž ³ŸÌý~2¯ktä°vF‘™zèzƒÌG2«ÌÝk£xt—Ff ­sÈ,&9EænÍì¬Ù1Ì®j9å ý2ÈŒ4½Ç±-®´T;Shæ„1„žÍ°˜?o¿?ÜN “:þÑVew5`›¥äš“FŠnv5÷+ù £ôAx‹2Éý¾@¡‘a˘VŠâG«+ˆó\öG"©ƒÊöÞ)Æ £2Ǧ ½ ÷÷oQÙÖœ4hÉ–ÎyW5Á¾rå\Mph©à©þªG“ÈèÛ6¨?Ýßnßö_ÜÜî®ï¿»½__O ú›ô˜V7‚5p;•Ôg0JTG :ß§É}˜dtšÐâ½íèûÐá=—FÉF)åˤ‚¬ƒöúÖêþ%º4Ú§¶uÒ½˜£@íÙ‘#Åõ埪 ÷™+C\È6ú§ƒQíÁ|“«EV­ƒ`K"]¤Õ/ ¾ÿ¡eŸZb¼bB i”4ðss*{¡ @ø^bKK;Œ…Mß/^ª;F+íXQ8KlrC¥›B« vÃe!š´¾(Œ.FÎ^²žTK†POÎ_gÉi¢ýdŒ^ Mõš …®[”‰Õ—x¤'½{¸ý¡xÓ³d¬Ý¼i_V^ßtáÍ]Ã4ÐHÖ:c+¦“I’¬vËØo#;ù£i ÆÛ U¬˜Cé#¹Õq¤õ½u‡8'Fé}gyCG­ù%åi]³DEc †“D“JïR}‚“ÉpÒRÓ¾EþÚîŠõ(øýåÑEÛÎ×?žo¬Æ|³î'åln豙ɚÓ<±ÿ÷[\>ûˉ9œÐ¶(I P<ö/Vé&?#Öª• ¢*A|_ 6ˆÑ^r÷GR©U ÉQ˜Å=}fÌ;¸)¶¯OùJó©íÀÜ:µ©I!Æ…}áÐÑR“R‚ýÊyãÆ táyX{¡ý”ðÓçïëÇoÛç‡[=}³@Øÿ?{W÷ÛÈäÿaï!/k™¬b‘,_n€Ã.îà²dwq‹ ÐÈòŒ[òIö$ópÿû»e¹%QbíIp‚™ñG«YÅúÕw•3¾?Ý[ØÆˆ=j< 8ë œÊ°Ñꩲkul¨ÑMÆÅês:9ºÉbÌömÐ%®Ö×W¼Ã#ã*’/lú ™m¤%{Ç)Uw&†¬³vØV¦/87leU JòÒ¸2¶-#‚ò% ívTÛˆ;Ð,nÙõ°_mÖÏ]mZñËÒÚs-æÄºò #k±Ûն׆µê÷ÊeZ‡+=Ã}jñxгï¬õÚ\ã¬> ~ƒÃ¤AN§]cÛ F.‚Å£AÖ9u]‹Ë@ªG½”aš4;ï>/f—yÖ26(oSãŠcB2Š”¨cHV· )cÞfƒV™¬"ym4¶³I¥MHu¦í€"°‰þ.Ì<šÞ´š|¬Ú>n­ÒË2’ [·"/Ñþ‘êîê+¿¸òO^Ðå8äGØËšß\(y D¯•0ÕÐÛaUzd[MÝf¶²ÍÃrÛ>l8z—l¿É½ª( ôE(—Ag“»£ö>¦õ@=.PIèq?uÄœ^ÈewƒcNë/öÌ£ÈÃ{#±Y‘[_V‘×tFà ;ȵVOìaæ}Ücàþ8ÐSôBp™ük.[?½ïçb?®—!§ºëàlü`ÇÉ \¼]ŸÁ N¥È2õmæ¾Džœ¾·°ÜXdHûÞÊÙt…‹þlP#“ïÍ$¸~d v¦x#·¹€žxTÕàOôñå±â]ŽïH‚î}Ü)¹/É?϶„ -v†é(:®;{xtzf¢u«e ­ž1ªý]7(fìaGOZ‡ÍÕ]¡¸Å/ÒíÈ=Дwτʗ”)6xÕ)œ!=mа€Ÿ»lļ?ÙuzbwË!—cvjwæÙð‚É6¼WT4­(g¶jëãiżåZCfÃûmñªè¥pE¦))«"Ø7j^\­w»ÙšU4WÏ6³Û–5 énñùÓ:À)ÝcE9+6×}Cyo‚]4ÑU'].†ø³î2»©N/ñr*ÚNØ P=ÜgfÓ¸˜v½ÅìšÎ†c6z`•—_á8fç‘;“¹{|:”d©6PqI_%ÛTD~–rçÂÊ‚¹Îz+<¼8C»o6Káh¯“ÿºÕ@CQôծNJ¬°][Ì庎N½Œ¡¹.LbzÚaÒÐvÚF»Q¤Ê…ÄX‡5›â8¬`‡ÉX£íåeŠ&ïð;ßn/:jj¿L1 z”ä2 ‰¨0 eÆþܨqlÖ;˜öL`Š¢1ô‡X#bÃb(öÜŠ\+¢åNˆ;¬,§"Z„Óv0PtGÕ+yòä ÃUƒ©ÇÆ_(ž4:GCÂ)O@z”í(ºÕvãúÏtn %ù‰^2‚™=” 
;¼Q½_®‚”l¯·ŸåK7{"ß¼óæH-WrÙ»!¯.‹¼Fõ€^&,>‡î_;€~YMa«ÀÚ¶ðn‘ÝE(FŽ Úo+›)̲x#Cq ÷§J18$McÚHÕ ÎÈŒû3_ïa2¡þCY^py·»%V‰×a¢!}ãi#¡ëS ;Æ‹"³éÖ¤<8±[†Å}ä#0W˜€ 5“6²˜˜ì¥p¦¶£ƒ¯ÔÊÌ5”!(ıÙÚÒ62:¶‘ñÏÕ™ ḅêò~Ær_ Q’ÏǬ2i``Ñ¿`‹Î9Ö¬Ÿå6þÏóúivØ‚p·™%¨—Y}AY=‹½¶o{PÆB׬^‚¦J:”é«ï“w”´ª=)0iðŽ˜4˃ÝP­Þ#ôC±û»Ž2m ÛqÊJÇ [ŽlÄKñ´É;ìÈæJóa†½«ÝÉ*±‘ð绽U§+âKt7ã”=…%BEó†·‹Çûõç‡jtà+ÙvÉØëýDÖN®•cPýéÝÄYÛ>“ü=:r¾óÈ£–$Êh]Æ›Äzn-èj“K¯+ŽNñ%G€;l™XÔJ{€Î!}\¼#ZS ¡åÌ,W¢]+F,ò¬ÅÒ­ŒjÑyØÚªî þgY¨­•‹4„…^±/ ì}ðTË®I¹]n7Ïá‘ïŸo?,2©¿„£)²§qÔkÓcrœßQ‹ç¯º/DéD©œxÊUQ¯ÑID5 é6@¯1†¨ ÂdTF:».C޼ö¢M}ôÚ•®¨tŽU¶Ugín!¨ÕøCŽ|<íçXY¥%ü@V–V3‹ Ìð£´ÍäO³ÕìÃb“ªênñ„RÛª’¬kÎÐg·a³š².KÜ!µ/„+Úz`u²\9c‘t²0@}²*ÃCtlƒ~™ERD¥P Ï!ø¥«2j:+í VYÒÅpm5oåTžüvÈSòBU¤•[!XŸÏ½Q7Én@ö€Å3»'ä Rç€~sTìØr ›#"Ï2öž¼·ç²«ÚíLApÀœ.‰Ö:T5'±ß@´$ºAÀLu ".Ì:üwår% x!ô™ÚƒÐ£¬Hóåšh¤ÌZÀŽØ¤’tJÞ‚2CN…}èuÙùó͉ót·üpUkÐÛëz$ÚÕ\”jÇ9è (|AŸA(¶§Fè<È£rôi”èÁr¦t· 'c¢=öTÉdcX>èpdD&G…Íñ@fŒÏãGB±Õ/V€(Ì<@ºU£·A¦£=:BBIŽ“‡S¯¬pB;,?i6ß횉 ‹Ëòð,~Ìç} M7˜Õ⭗䀳Gkas% ‡•E¶ îþì–3P\R¹±ê5bP0W#te’»?³¦`5†ò‘IX—\¤.ØÝ J¦Œc&Y»±¶ÅÖY²ñ ¬6FÀ 5ØäÕn­z0}‡õÝ!á,G­vÆ`ÅâF‰m¨è࿇媞°™wÓWèÍY…EZ[«yPÙ ÷ð À¢ ¿Ù9¢ÿJ†œ 9Â@#”HârŒÉLj˜®«My=s¦@Œ SG¸Ë>Û4Ç“bDJ¯LQÎs|Dˆpš5Ÿã—Á·kI÷Ö¶ÃÊq9Á$eúäÚÐPØjM¿É›Ý­ïŒÖW¢5É ¦ÄåÙ” ëê¨#1nP(yãåÝЩ…Úƒ/³`\XÆõ‡¢ (Œ¤á¦C´F„äýŻ̵)=l)‚¤Šl׉¨ýÜý¢°ÛgÆ1ÙôX‰˜ ä±bRp¬rÍ;YH"Ôˆà %ã¢5´€V›Ñ B I{cD)h=2´’ÂÒQ1&]´NAX%æì®ŒéÕuTSÐnª¦NÝ&ä­V0|*Þ`ðTÀ¦â¥º³<0_’µ¶H€Ôb€b5Îìx¹,ó\¹`ý…S§z¸—Æ9fcúÎO(£Ñ*œ7*ÇSF«PØ&‘ÕrÌ÷l$ÑZ5ÂiÓe.Zñ;^'ßd tŽvENYãÀœ±˜òîâj5@ÅÝgÈ·ÿs,DKdÂB@ëÊ䘘5gÞ>Ç´Ø„qŸ‹Û9¦4Ò˜ »å~Sù`CÙ>q™ÈÀ)ͲF˜C­µˆ Le>„€±…I å °waYŒÍ! ÆØòn)ÆfOÊ™µÒa}db®Ù߇{GŠ’Ìµ¾L†_Œ­áMÐwöüôQ˜±›°|½ÿ¾Ú»cwÅÞÝŠŸ®á°˜µË†½—)–z+ëV¹tPV¤;¼.ºs¶AŸLЋ¢,Ú‘‘×AéÔJ 2DÇ2£tŒaÆ…Þv)gã3 o% òU™1ìà½0<¾AÕjÅõ«‡ÅÓf9¿Ú.?¬:Ž™D±–J‚-*ÛlÅ· ÉdªYP)ëú÷PãêtËÅ2!¦N “’Q,~РI®õï&ô?è‘m[TŽKw ;âÉëÀ*Gˆ&>ø&/°ZÕj%³\©zJ²ËlBÎz«Ý[„ak}´Ÿñv-ë6 ²¦+ \÷ÚÃÜmR[ýé”OÃðÞ×b¥§U˜ÄSä3Ë•÷Tɨá½ÅÎÛdÅ]‘DÙ¥žd¢‹*äÌa)pÊdÍ;_¡ÕÈtMÝmÖÖ˜P’¡dÊ +*^oT3õ˜|Q_`ݧÝJ»ó#Ò³€-Q¯Št¥”gA–Á¶k'ÒeÅ_A1e¬m¿†’ðK…ßrÁ/†Ê.ÃÑóH«a§²Á3ð‹ZðŸíö¬oeÏj×a"w¼8ÏZV¢w±ÖZ[‰1ìÙƒ7EâÊe˜ß/…¦CèûS½îZ¯ûLa E¡–³ãnƒPyQVüPðm×£IG ¬‹V¼7È’ fÅ‹&Ó¥  båÊ"Ûò0+Ü:³è­/±ÓÙêŒx{„ ç8*Ü`cý ޲*…®zÏJN·zmL¶OõC§?û—/ïýˆOzú,ÿ8˜ÒÐmœ€÷ÜŸþ-p–ûLiÔ&l”Ž8Û“dy×*JƒÕ8l1 ä‹Úµ úd\Ž¡Ãââ6 OÎ(¹Ûe;Ý-?<Ì#‘ÒÝ%üÜÇã$°%%’Ø÷ï‰à÷ÏÔ’H9eP£WèZLÛ3 “¹hbîÑ H&ÔHa:¸W­‚ÒEìÌM•N‰Í£1aó@ÞQÔÜ2²×iàR0(ÉNÐeR&`Èn•EÔ§Íúþñ~¶ªOU•R‡¨×·ò㿬7?˯ä%–Ÿ–OŸçóŸJö¦frÊUîOÌ4Š/Ô[0}‚‹NA¨ßÖ¶;ÖggßåÑ}Ùy7pÔŸI˜€œT¥¾­rA·“»Íúaò~}ÿ4¹}ø ôS@Mîáp¡¤pë¡À-íltE‡0-ÄWzçDkæ— z ·"˦}Aþ„‰³Õ¥œõC†<Ú°¸ål©Ú'ü±þ[à¬ÜÓ=g™-Ÿä‚OäâO¾—ËþÍêvñëä•M(ÿ£|ýióyYúVtµ#ØÕRDRpÝ1º°êÁ“ ‰ŸŠ¢¡P/Üõs 'Uø#L•ýPõνTPù§$ú)3‹_ﱤ÷}&¸>PîO8Ku-•ë9[Í÷‹Û tdk<÷U>õ#¬îãT{-žª0¡£ò‰™àGÊ¢â‚qœ.çý Êëó. sNYÔG4u‚_ÏAYì. 
^Û^€êÝ€D–ô IU48XÑÙ3ŸšÕ5§äÔ‰q„”7ýØÆ´V­aþ²¡]“Ø¡å.…¾VþZãäå‡'ÿýïßÿ囿üçÍäŸa ø“þ­:áä_ü“—›Iý…éãf=_l·ÿXÍ6Ÿ¿ÿîO“;¹[OëÉ/›åÓ¢bÎóöfòR¶^M*|¸™È¥œOþm®ñJŽ,¬[n'óûõ6vôÚ(«i@“_õý›Ö:ÖhíDýÐ:ò…î·L¯t¨€ìOyÄñ¦ò_f÷×ò8·W´?÷ö~ýËäîvö4Û~^ÍþáÔ†‘\†ÎÏÒñàTÌM§zÛÅÕ#tÙ¡,B%2¡rí1N™ÈÐC¬NÊP‹<Ê·³ðž«`\ÿ­Âˆ·É£T„dm4ÚA¼p6'›€ÝTžKÆ=´6ä 7Q r–›¯¿ùó©ÌMB1Êjö°]rØ’b¦¯Î°ž¼›<ýººùz¾~xœm7_˵ù°xºùö¯žDU‘íp&cµÜ6Rƒv}îWÞXvnnßϯ܇Ÿ>~?ÞÛ¹š/nïðå5Ö·¯o©ä-·Ïó Œn¾Þ‘àÇÇç§›¯ßò?ÍîŸ?Vg`r¿˜mǼðäxòî]¥:Ÿyß½;´©/dfÃÝó¢zï_¨¡zTçêÉr!ÞûäRâ³J¿ÚÐr ~ÅÍÝì~»ø×Éê9äM\ßí#U]RKbÃ…q‚ÞÃåAˆ¤¦BZ¤‹¶õÁ]t»iãh_íLœÚ˜Þݬ£ÀHx-+j¹uâ¨úlðaæÀØ9q{r«€šŽxZºWY”¨_” Zìýý×Õ)îŸÀ¾SmLe¤£ ¹[ãÖãú6Þ±ÿÛ•uó÷wÖ¼ŸóÕO?ÝUÕ&0„Jü…nPÔïSÈ¢½?@—­üÓ¬ûnòîBñ½DÜgÍhØâ¢tÞ ¢Y3šzkåÈ—{7­ò‰>…CŸØ8X ¢)„ñØÊ)nÆy›Ù‰ÂáïUvŠ8·Žì¶âv¾˜]YŸ¿&¿:ìVüC›(ø×«@/êÎ̲›8?»ib“Áœ1¤­óƒÐĨåÈbÈ;¯Ç WsØun}LÀ{vI0AŽÎ¢ÛŸ«]¬Ú³`‰÷œ(’Ïvjš¢ÒhvïøX¬šÅ‡B‡è‚`Ëœ­m«Òiò~°TÂx¸xrþq¹Z„xîuãï{ ?`ç|7@éó™ 1;«û|äùP5;µ#¦mÿû, u6åûZA!:Þî/ôì‹cª©H“ùófеªf¯¶ZåMä„‘:º[ùf(ßš,~/·g êªWãP¹®Žu‹ÃiûÃ}óa ÌÄF­5„ÀϧõÏ¡bäåz¿_ÌgÏAŽ=YnW_=Mf»ÃËóî×ò±Wõw«‚MÀÞZ¹¨fxOdcg7Skµˆù€†íêºÄL~5UòŸO2ïŃ\™ª2[ò²š®¡z´`pÚ™ô,„ÈÑ/ÖdÖ<Ö=]Λê핽˜?®~|,¼Ð8BŽþúêÈë@ÛøCõn¤,;H°l~yGgˆ”ÞWgF@Åg:¶‘oÄætY»†ÖdþuW»KŽc_ebÿì¯[×¶$Ëš}…}…ýAÓC4\àöL¿ýJY$”3NÛ\&b:¦ƒ†¬JÙ>ú°tÎÂi\\£¬ý¹Ü’‰_±%3œd`âÝíˆÓ# ¥8z«“Wvâé§kú‰!®9Ëuœ;>âC(°MNDtéMeþÿ^àâ·Ûkë­üßûû‡³ÄÄÚä®§®Ë¿ƒfæîþfŽçæúêõg”Î |°ZÀºˆMÚ€]¯!Þô®rWÕ³—ùoû²»9µ‡^^ßüy}õÐtç=»Âì¢zþˆÓ›ž¢»ð‡.½.¿î÷ç^üøÛ«=ÓËLq%Ùx=½—¦³š4mÁs¿)`ðX“–æPZÊWÏö„BŽ©ò†TÚº\ܹ"×ìÍ6$¦ö­Šfö[›¨ì£½y ð£þ‡Ç&¦G3fœÜ´Öžá}¾‰Š;6Q‘ßTèÊ–íÏGO”°ß,âÑ`Þ ØÏ}¿ÿóÇDKøòƒó6*ÝŸYéæOžç¦ÚsÓͼ”¡N»2$là!8>b—º †­‰‡v@Š5€¤x(’]%”É/PgÏ^m+"ù`PƒHÀ£kB$K”t²£OyD‚i"ïKeäèW@ÓÝÅã×Ï·º5¾Ïþýµ†ôíöþòsd šëeÛ>xL!õ(šmûÜU\"ÍÊ<6áÒ.Q(Nê(CˆÍ¨Ä¨¤§_3XD%r~½ êøÞœ÷{³m äD$×`±ÆÃ± “ˆÂhLR+¦Å$ëív"㣤èü¯€¢·žë×{¹µÿ˜E¦ ©x+2íü3 "'í@µók¬âVçšp+ÑŽ16’ÿ¾¸Òvàr‰‘± \æì‹À•B.œš½ÙFàbÅ9qRƒ\¢'¸ ¹Òpä2Îñr¹”SJYÞ&æ¹ ¹Òg$xG–µ÷Ê'Ú—ãøôýDÊšÉêÕ!TåǽKå|ý=cåÇ­N&žÐ8z¾ö°Zø  I3âÈfÄa‰šÁqñÑŽJ¤@Å*cðYÕ½ù«m‚) n>½âßÉÞ´Á˜ùüøóº™­²y“˜lr)î¹=Ð#! 
§JJµ²©WH>ËvÞL‘l®éü=’×ø6PÉ 9kö-Þ—†D1Ë:ñf»mCÇ#¢¡B•›’˜æ›Î»€ì¦ÌÌ>ç¦ôAc#ÌV õDÄžÝBÞag]Ï_ƒ7K{ÁêüèšîÖ_è£zc?Ûýp8Ö·a¿[Ý[OÎ$çõΕ?êo7?ìÈéZ??ÞLqpiqê8Ê;x[¼„]—8^·”‰ku®v,ÆŠ»Ø±þƒŠTÊþ#h’PòšªdY‹fÖíã?؆ákª'š¹¦ê Œ¿þV;GÉùvõŸß³Àteõ:û¯ŠY‹›…¢gnó0'ý˜ÞF4ŒI¦‰>ÎÜæV>¤i3¡íÓ/<]?—Ö©æQüŠõ­Ä¦œHö¼“sÁÆ»j½JÕ¬ø“*ë7zÝ—ÂP“6„c}B1JÙ:ûÌ¢}<‰~íàM2²Â“ÄDÔÖ´‘F_ý™1W‡MƒÔcä[;=@_WbgWò•Àii‡  ¶©švˆ|(„÷pt˜˜34óæ¡ú¸—×ÏgJßoD~ÆïúmÎüñû·ËëÇçoÎa9È£éEwwS{*îÂ#OV!Áz¥Ü=ÖÚ¥²•îiGD.Á¶;$àb+ºlif›¨}ÜÅIjP{¢¶n+£ÇѼfæ£Û2©\k] Û¢®$öA{?$,.§ÑpHĪ©Ç@,DLh¸Äè{ÇuswñûõÕÍïúgw767·Án~ΨØÑÊmp{†Ü<ëò'ñ¼O·t‹é×Ä¿6Û½ܽë;†;êbÞ!« 8·e'|‡„kð}šW’&@ÀHÃñÝÈ™Îð]³zÓÉFå û^×à¿ -î &JmÍšÈa€¯`Ës½z]ðâcßU|úÃÉŠO߯®nïÿ²ƒûtøÓôžªä©c·¼Ö æÓ n²9)H¡VűÁj»äª³èͦ3 ·°4Årpž²š3uoÝÓÌn;%„}5räChjB2Ú¯þJ'‚±i^øähîáñþß7×{àòô—m[ú¯ÝKU>ÊäÂŽjj ÄŽB‡€íź•!Ú‹i[‚²ã–B>¥rËT/rÐC}*E¬kEÖœþÞÌ\Žõô­mRE êXGqД¤‘á1ÙÉÌá¼iÃ^YœLœ’]c2 [„çÁ7…dcPeqù=$ò©iù½óãÉù1’ótÇÌ)«±k صä·,Û/]Û"¿ÅUUd Á·­jJ|µ•ÝcðŸÎc¶¯øQ´äW`pƒ—˜’kg2kq„¶^vJŽÐ$mʳ9BöÎpö¶]<¡}m‡©†{Gý/Mg&ŒöƒfeŸóƒúÂ=~5³ÕEãÙd_®lKAp@ BÒÀ‹>_4{'¬aLQ¸ Ööÿ‘ÊÀcè ›ÝRv5Ùi4M˜Râ@¯8æ|ºÇ?#y{Û.‰»m4ÅY_Suµk8®Dõ$Ì̹/…èŇ쥚èÏ×Õ®¨¹–1-j¢.Ô´:qD¨æÃ5;UÀ•‘u•ÜÞÿëéòŸ×w¥Œgö›£ê&ª5[P1¦=´µ(‰"c%(έ·R'™›®5{Âiç:­ ÂAà WnFˆ!B–÷Õ=8p[Ù³& ŒÀÈÒÔ`5$í çÄ7Çw§á_žùFú^WÅÌmU –ÚH?dX\`bý_ôê¦%GM¸0 ‡È{ì½~º¼½xz*–›Þÿò0üèÛð7ú=ÃéŠ7 jñ÷ƒ W øƒÛ’rÝŠ‹Õit&]ᓌÎEÏ«Qºää¶©m¾Ï×`0O:6Qö08+73åŽ>dåúÊ(¦9ž'“Ô‚cçêto„XZdö㻦Ef7âb13¦0üãâNHÏ[ÉÀo¿8 ~YŒ®©(ÀžwÐX{‡NÃ3е^3ã­`ïÌrm¸«ûYw«/án0Z‹bðË>f[µfÖè¼¶1rªa}ä¢Ã¦ØˆÆ]33† î&˜„ª­ëOv¤ ÌHÞ;Ž£jî‹«LÉ'jºýcÂþÅ>õÚ ™æÈÚÃÔ´ö¨¸òôüø×!Ï­4ýÎÃãÏ×EXù¸a0‘Ø7©pÜ#¸é$»Ú«-íV¯ÃZœ]»-aÚ¦è0È=0Ç]8ö@¯z„è$;œýfÙáxºL´¢¦Q„Ô)4Ý1»±åÉÎ>ž»„é•I%ÊšK +¯÷ÓÆÞAù­¥-“<¤Ð6¢tçp/°ælè¿@Wïýíݱ›gjíÈѰT5ù"Êò½_ ÞBËßü.5æH6Þåuù–­Ø«éwÚ=i*]QŸ\qd#ù<·ûÌd]P_¿v”$RÃé¤çW6µå§€2õI4_É ¾åÄ­'úç}ûýÐGòÒ½Ql˜,®6 r[¹-a×MKAFÄõ\ݨ­ôà̰L2ìþéñçmÙ—–0jD/Q´½&X'ØÑ¦fmÒQ4¬eÏ(Ûz3£lè¶°Þ¶[ ¡0Õ1(FþÔø±Àþf¼>øî‘¦XsÉi~ ¤©" ‡w8]ë|„wOÎX ÈGêË“áºódü"ÄYÚ âblš¿dþkî«Grbœ)‰œXo¿Øng¬·Û£qr¼lnÀ¾iâN¼ßA¥'A])×j¶Z¨c¤_µ`øÖ¯Mâ jU„ˆ)¶Á79 ßfgç3ð­¯Ì‚Þåi–®]Ö>§&ÜTÿ•೸#Rò šŠß¦rÞݤƒæˆ’غ;˜nNkùtÿÓ¶õýíÍeyܵöq£ -"1`Ûd½|¤¸ß6YY³`/]œÄÊ2T;Œ•5hq¶KƒÉKaòvË#ø"Ù üW³vðÇ“¥OŒ6ìn"¾ÑG¸±%˜£‘ñœônZ¦`„IyÒ;ì;Ÿ“½Ní¢šð•pjq§õM1dõèäPÈ&„qhu~¢W^Ü>Ü_]ü|¾Ò—(»õ¥?ä Ô«Gß" ØÓÂŽ1Ú*Õ¶°/ÚuÅ,µñÉ®Ä"â{‰¾TìQ;ºäÏ ÕóÉèz óA\©é$C”јï…8æ0_3 mYÖoUCìZõI½ËïŸ)‹;@· C–SŠ#À<ú 9;¥ñ¥öu;iPÁ‡m0Ä5Eò&÷±GåÌzÌôð‡Õ÷\>Y¬…MŠRƒœ„jW18BNý{nƒ> ¬›41‹5 ̚ʅ¶À›ÇÞjgÌ‚pôˆâ(@¸oí†2¥hêqoÅ‚¥åõšQÓòÊ€ÍkÅnêÇ MnÉn†ß¥"'¹„íÞ®âI£bjoZ#¡e.I±«6ŸDŒu3¸zÉ±Íæ_×Ûlû6Ð×=Y4¬.Á¾c"Í“šBVæÍž]`ߎ‡*ÂC6‰G/-×túzþÙÙ‡ ìë+3ƒø|Ë `ßzKÀþ*c_˜÷N©)2÷ý«ör@´¤qhÑ^×îéþö\lÇ~¨vþc)åúßóˆL)5y tV˜H˜Äc=ámÁÒë´·3·8ƒãN‹à¹Py‡h¤ÔÅ:Œ‡Ü”ëÌr\Áô¥StúÐW@4 h:ê˜ú€x8h\F?ò¸?Ü_Í[^ì›OžâÛåãeU“›Mj.gÚE+o8œ´KOXsÔ”¤¶µbÍ0½zÛŽk 6W¢RršÔ…]ÕF”¹fFèÁ¥4}m}#HUAW$ËD›NZt£¹”ÌÐá\ùã´TŒ)ËB§¾Çu-t"mj€à·gÙ¥³¾¼pIœPDÊØ¶Ä£#Âó¶Dó &—9åo'û¶%ö×tÿ5Ùâ^$ÝuM{Aµè2'aI1Ž$ x¸¸;œºBÿþöGzùéÕ•ñÞ”ùqJ?*6&š”š ÝÁÁµE› I‰}-@ÑÒkóÿE37ÆÆÖ:ÅÞccJá4º¼ê´…³bŒo¦ëëù`±á÷í.;x1’€–Ón«ƒ‚cõJ€<\ðåîâ!ï6Ÿ.ožëä—¦ñ¬ås}Ó9 nG•4D ž îm)Y¨k¼ Îa‰}i ã|*M]Oû;G=úfnárò¢áIÍÙ Æ…ï›Î^+„z²s†³ù´Rbþ¥Ð÷njSÜl“Ä\©Ð²×X1®)õ €ýUPMÀ‰,?[õȨžüä©«`ÔT—­ïø¶hgT©qË Fî¡|ufžnjK. 
”ŠtH6ŒTŒ_dÛjçæè¢ÓVÕFª@E¡·©ÉJ1tìýþd猘ôq¥<;ŸM]ûR×m¢) Ùmr£¹ƒ¿¸~ìuµçrºÉ¿ ±KŸ0A1çîøþôó·£ËÑ™{¤—_þv{óëË¿.o¯¿Ý]ü0j§ ËŸâúè’p –\|kRµ#¦ ¡É¹¥1«´aî¢u‰š˜ÿ§=ÏdbVåð{ƒ”-S.ú~[†^Ñ·Õ9¨i +n À“B}ÉtßÙGˆò¬©o‹§A#ÿ9À·´‘ÀS›è”ô0$ôHÞƒ íò­7œQW™Äỿ«»‘ĸ|!i<9ìcÛ²¤1•-c¼:I“ ÌNáØë¡˜à Îℸbñ<7uðß%7&Ñé ¤YYÖD=«[†^JœSú‹.êö+ùWðYNÁW{tr¯ÖE¥bRžA=¼†åM§ÏÇáWÁ.ž”3μ«®cà*‘,ǾwÂ9nñÌ0EnJÐq`y5ÅclŠ• =Q‡MFJ·Ð\Ý<=þœ"†ß~^ý~ý\ºÌýɨ!4.¢&Ü…=l¯š{qXßysnÏ•Ü(k̦DgÚ7Ó“R‰L"WdwUfµßç&êS"Óï­Ç$ºš ¦Fm#Åœ×Èl%b®H¦ïŒOBYg}9©¯¾C.Ói}‰‹+®€kkr0$øeT¯ÂHIN=2—·7W÷ÿúq{q5%%¹~š:rnçÒþ¥Ù€ÖÄ;àÚ>YPÈ âæ®°fÏ€šÑG P(X…ƒˆf\Å6f­r8>³]§ZO×8½¦WLv4µEaÑíÒ;Ú9Ã!r|gÓ³“Ï¥éön•ºHß§»`Þ¶^hCsñ#¢ïÄ!AŠ_ÍïÌnA÷ÕõÕÍôñfëßô‡þ úxñ¤.óòùçãõYçX%¸ÓòÅ´†)ºCÛJñ»É‘ž¾@ÿù³E+—]¨±xîàç2¶6dÃ5H.BdNeghÄ%g¨.*×·8·B'o˜œ…›5„Zè˜8µïé#h¼3´ä<ë “1zÅX˜ëï;éù•fPÃ&lŠƒEvtñ Oº&’pP¼ßuìJv¿ S©¼œßPÝPkå¦ûÞ,Ù§¸¡ßš]Hj€À§¨exâÁÅ 3³ç\q#1ØoÂÏŠ=m‹ŠºGÅmÚâ>Ý9M:ïšž„þD¨+lÜöÑT¹´oyc ¢º³.ßQœs—TäÛ£ýQ¿A§Vž¢Õ7„b÷Üù!j‚1q%ŒW[j•+ŠÎÇ`Zçë—äæ‹ÁY86n~¬8ÏìÒ”§/­ážø*L \¨©R0vôhe¤t†ÉÓ+' O°œõUž‰½™ïvaÂâ’’§Ø6"Цèý!Ù£#ý"ÒŒ‡ÛŸŠ3-Sš§'ŒÔ]nÙUHí|'á¨0¤#qÆ‹µwRg¼˜ºéúpÚsd¼w…ëCÅŸNÓ«`N˜cЛٯG€}<*!_s{ˆ1zl»KÂøáä÷°';gZ9ì£Ã$˜»=Ôc!©oµ9}âtøXäYÜ" ©mC àÜÓ¥õQ!†/!ø{{swó<=ÔÒ›ºs\÷ekt–Ð4AJw¨@F`ŸvÔ>÷¬[-dÚj2* Ôà$äŠ=특¦»™yú`µ~m²Žˆ ¨&QÚŠ¢äý`¤6ߌÎãñ“õ´K®Ïƒ|èÚkÇÛä{y€|ï94,.(9qm½å„~Ô±µf™þÓåeNžkŸHÃËêÅÓ¨0 5M¦ù=½-ÎÇ>c¯6®VŠy5pc€­û £`´!¹2f£äèéfFë„ÙÀA¢`M[‘DßVÀ¦ APXR6¾Öw¦¤’W[¥m†¸«úzúu±q`³´t/h¬âÛš5eL i•QøãDFùú|Ïz(:Þ"d“+›< ¬¬é7EYÑPözâHÐXA>{bùòâùâöþ÷£ìÑÓž¡¼÷Oåž‹KWvÏ'MšZ-fM,(Eè1xüÁصCÅ,Ýä§;YÓ†Ò@“qûâ­Fô1¯Ýüj¿ŽzúÚ6Ys©Mü˜±íØËè‘&5sŒ™áé•)hÈë#MÐYÌÍõî§ÿu´¼3R`i’‹öHDGQ³i7ª—¥ÚËvíì6 HR•„è‹Íl1äZXæì5,)¦oj¸"cn©‰ÂðiIÈ\—WŠuó¼¹½ë6š*më\9…u-Üì :-Áum‹ÎiD‰&G”ðÓ)Ã6ë8¯ÿñ°è±´bpÜÄêŸ]ô&Ü·éÆ Ü4Úþ’é²íQcÆ ÐŽÙáË™Õúúµum(TEzŠ\jJçuÛ†vðÏÝžNï F-\¨î`r_ººóÙh³´T­?¿o÷ÀH¦u£5†”cZ§~^8æI‡\úÚL뿆Þ{q/P\ç.ï…„ÎèXE/v ñ…†tO ]æsVîדuܤ¦á¢÷04ÿ?u׺ÞÖ­cŸÈq!@œ¿ówæ!Åm<ãÛ霾ý€’b)2÷…›¤šöëå«•(°±€ÀZÆ(11ÑØýÜCö*àòC伺4íŠ~ÀÐâ!Ÿb‘ÉùÌj}Æ]yß"¬8á“$6)±$ƒ‡]ÝÈ”¸4ìʈ¢xÕí[·‡$îÞ×Ãɤ¿-‚Û¬iqé(PÉ^IìÑ*ƒbC[Ç%¥LOçqÍK;Jê_ü0–2»¤t¹äš<­ÏŽRfBAÓ]$Û3¥7ÍN'|û·r*ó¯spˆÆtþÎJlð'¢;6¬§(œô4 jljäe!¦!)*"_[MãÍøï~¿¿yxüöôüz³»«KÙl†xÐÐÍÎM©L¬8îÓü‘SR_÷D¦µ;öàùm·éò)g`vìu¯w®VËæxf†~9lÈÓAAa^(âm ¢eõ¾Š%—cH§¯Ü%‡ÍO*U‘Ë’?ªMÝhd†®ÑÌŒ¡”Å–uìt^‹„ûò5âª6•Úa·kÕauÎÓþ2/z ->e“†% àA%¡ÉÓ¡kêÖJ!®¨CI`9†cñ"ñÌ, QÌœ’j‚˜ò—m*EÀ„ž%’`°¾;z?¥sOwß_?gÁðÏOÙ°¯OÜVc–˨-¼E笈XÇô ‹‘£?Û+ÖðVô§K£Öl[™öç†sN,KÁ%Ý./ÅҞݹ…ºo~ÜYLköìò•™´ïEÆÕ¿”íJKÓÙSþ.ŠÄ,qzë5¤ý‹MPFSuVÞ\A&Ý£—ÝmÅKÃ[Á)6]»dÜõ,5¸ûþéáµ®¶ñCzÚâYû£ ˆãæ!*1hÌ©h¤žY!‹cØš¬‰œó¬e©;w²H§¤‰À 1¬Â]ÑÄÐÄÙgé ý{(’Ux€y’µ$ vˆú~üô¶ªÞm”^œ€ ¦¼æ•6š QC2D¿Þ=ºîòàëɆG¶:ØÔfl¬6«s³›þqKÅI¤šïÿ*asÎ,=’½ìã0h©¸Lù1.å§' ô‚É”YÏ*fTS®mcÓšš¿Äñ0 ËÅ7yé.TÆÉ„×çYÅõäjK1>é6t` ¡Ém8@ào%曹”F™øwÌ;'‹=?}½ÏH¯ÏOùÛ¼éˆM½p“r§;ù¸»Ñßÿ÷óç[Hñ£ìÂîþÑoô‰ª6mwÕ €= îÕ®Xn@'ªML¯aÞ^H}xàÏSÅGºÍ7[„j(.œlÙ«÷Ÿ:qŒd5XMW²Ú‚Þ† ~Í ï™âóWŽùÎ¥8eèX™0u X¬oá_ u&ÝŸ/CblrÿÂx¾Íä‡äï>rƒø8Ÿé½ÿøð5‡Ôâ|ïûß±RêìØöN€a»GV@;SÜ’;Aöƒ¥Võ£dŹ}‚ ߣµUõ~óâïm²h`\N­‹­ß3uÉ­ýûÇ&«Ê­;,`ØC1rÉ»f’–«ÿÑÃnÿ‡¼é²~˜øùÂÀsÝ"ÀLܪyÝÕæÁÁEO(Ô<@Ñ«žeÝ,íºè«%qy{ߎÏHÉóÝÐ|ÒdzZe›Ã˜™-Ö@ø+ˆ«¼½zÊUŽ©Ê~H®ÈR9à%3ÇgL¨˜šŽÏeËñ)¢æOì ‘Û-fíÚºJ^XèŠæU^’^ÿÃ¥BF"3ëÐö[c±ŽÇ&‚H‹Ç_ĔϿ¬2W:OöésþùÓ½,ëžžú/JÜÌRKwTd 4¿:Lš®FM¦µ 1ҵǩºþ°«Fhú 4•Žã {ô[ÿÝÿ÷ôüGæÏÚ¢ÊëµóP¦¸…”ÛR«^eëc¿®ì8z|f 9.΂º-¹¤\v2V/DÖèÿŠWFd_‘¸•ˈœ8_•Ì!­º?gŒ©{»h@FzšE‡´A@€8^-ònwÙv{7y{ñÿ7/ukÛÒPTæ„[T¼SÌéÆmÒ‘›ÌÖ ÀÁRV 1ÅÅ9&`-&ÈgFê„Ç–©QkôlºDi 8K"ßOE:.¯½»UèÛ!ZÃZÁ™×Š#½*Àc°—sSt¬Rïîù~fÜáß|nk7‡:ä0?ö|³»~­ègܜڲÊb`’êñ·Ít]1˜)0ð [nRÇþÏ Õ ƒI•kôÕûD+óx .È«%vÔ†}?JûfÂë$Ÿ¢9Ñ-FzVyHœÙÄíHÙpÍÍÓüñ÷î¿ñpýöäOh%Þ†±p«²e¼?@`¯­öØ;-˜¨#®ú UWä¶?ZŸ³¸ªåuÿ3ƒôÁUÊ ³jU¹­gר-{ßþ¨£qÕË,ޤh¤“ÍÜ•YøGÝ•e"ÌZ–H3yè8Ü{…®sUöòáÓ×—›“žAÝÁŒ-©4Mƒ0nb@‰)% ºý:ìÒ*A/Q Iu™ (>´Nç@± zg6èz‰$FªºòbÒàÀ×bÄÃwíC& +^"G5‰iéÊ«ëÖ=óªy¼È[®¼JÁ>é=¡©^J ÿ0?S·L9ox¼ºí±ë¼M òÇo$X’²z±€4®¤-sÓŽBžë ]ähߌ\-Gûfá* é6ú+†óo)+Ø-ö\Y´XïŸY­DïÃð=,* :eÆÒ¦ÝѼµ5¢÷vò~w4{JÔ<,*T$ü'ŠÐŽÃ˜©‡ É´Tm|*6Hñ–1SB»:ÒÿÜxyùqˆÖ¬ËôœB„ˆþ·Àu&žÛBLåÇ„zÑ­' Õo‘ŸòÝÛRÂL·ê@Dº„Æ 
ÌMõf”.‹üþäæ+ãP“/G„hд·âeº ^äw3³aa‘ßåé£-Hƒ÷•\žbœR#0Ï@¤G½”mÛ( 2bÕ7¡ÂA~…ɰÛñéÑ=¶Wcýäž×'ÙÄFþ‡ûÿ¸³^ö?Ø0,–E†§ÄŒ^¦60Ök1 ˜R¥ïQmÔŽˆC[lñ¤ayѰLÀ|f¾>xösT³È£ÿ“ššº‘G󮸑KY{/1xÞ&¿Ž«@’þ"@[àeÒç")IÛÎ `AÞ€æùÉHò†oOŸfn'ƒ!§—•ÛéNt·»ùÏ¿Kë’g¤™i†Ì#š°©×uÅŠ×Jèeõªa›é:òb‘~?õ`‘+oµ-67¢Jw—n'3õ!ÆÂÌ']³p˜.Ú ® lÅV$½ôJ#Ð5t­„Ö1^J£®Õ¬ú¤)w SSDÒ/²<‹u}}ÀS|{úTIöMhÛ-½˜DYžnß²›Gƒ<Ýæ¾Znžž»Y^KZÞÍ ¦Ké[JÊ;¯'StÚÍÊÍ(®á°¼ÙÛVîÚ ‚¡‡`P).½:Ra’‹¥×á‡àªíWÀˆ½ŽÁcøOº1ϰFúåØéÖ"HÈêáóçIˆã€Ú~ÀˇǻÝg¡ãqݦWÒ(^7j‚ÒȦ¨ò[¢€´ I«¬Õ ZýqÇuÜ›‡V_´`°­G=à h=3MdÝ?ÄÑsL¡dF¹ Y//z#k6³×³ïsÒ½£¢Iœ%ØŠ*¡ë¤”¬Û2ð|¿W«±aÒ«ØáF¯Ú˜u?щ_Žúxÿúü°{ñ,ÿæã÷¯Ÿ¾Üס,Ì%¬â ›¸ e%঄1å–zß„õ­º¦¯ISJ+xeƒ‚.c¬I9}}3L¯ôU%«ÁXQJM‚6v ± 7sàröjT¯SùÇu\ýrÖ"LºÒ=™´¥öWÐ üÄ£˜’h¹ˆõåþîå'ÁÛãuóãhòûü”¿ÉÔ 7þŸj ½§W´,«iÍR¬ˆ-#* Eñ´ïú”9•öõ½Çš:ÉûǬøGI¯mëKÞ½= ìiú¾þ~ì"®O,‚Ù@Ûk&ç’ÐIÞ‘I…R”ÚÞ¦ àm7òŠì-maC ”yŠõÈÝÉžbeÁ -VX‘¿IJ‹ù[JEùæ“m:‰•…Û™Z“¿Y"nZô· ÕTªa¨"ayÓ2ÿš—ÝçûOßýûl[„ÿù-QnôqÞx†*_º´¼³{•ä{Ð`cò½âØÜìÎmqÃÀ¤—’µ[û¨\„O5ÊEì´Õ8Š`f8OKÝÎ=ß"[,­Ù¯ gè‹)V¹åqã6,Ð8¸Ù™­|¸ä¸À‚ì'Á óS“^­÷«pR•ëœ(Sõ¢eËÜß‚Ò\áý…¨ÔâJ·R¼WF˜[1$±ùù<öâØOµÅ½,»P"Î;³X$9vNb¬°5 5lkÒ ™Óäš üòÍI$öæ¨ûöð}{úò°ûëüW|yÚýQÙšÎò8Ä`M£UGPØgš2ˆh#û@Ÿïï¾¼~î3¹œµ,0bh2%ió &ÃLÓû·(±ì;Æ_Ööo¥ðŠÄ0mqm"¢Î5mýÙ“8%ÍEüßµîî˜{®lƈ"ž(­Ø‚ÅÍÀ±t©tf·^@¸êN ó MSCuÈ"±gû|fìþà9íx£ 9“>ùù¬k²Ä™#‡$/ï5y€9@Å3¤‘çÿã]>ÕYÀ|ª~þKC¥ýyÚú¼_Ín{þØ ±¼ô›~Ú®ûçׇßò³ÿòðû×½<Ûþõ=±Îù«[h»Ô¦ÕÜAÊFM#›lHFJ™FðmÑËÓ—û‹žÃ¿}ùîgêRجx‡¶–éLI<›nª…cØ2víÇ­æ¡æT²hî™Û[7¶Ø@•ÂÒ¬‹×nJG®ÞÙ˜˳.gæëÓcóH hkÙKå6ÅýåèÙ€›Ò‘Gû²ÇšTAKúD™íµ¿w&Ÿ¯&Û(Ï-Kõlážaõž»¬p\ Utœ4VÌâÅ Þ¤5“Æ©Ü2;٢Ϥ±;5CC 0,=+€AGÏ»‘ Ê‘ù “xBW¹†‹«¶ß4õÚ~{›št^26„ÆÑˆ íñH‰Í 2ŠçgFzNAX>%uyŠ•!éŠø-rŽœY¡Ó« ¡öŠøÍ",šÆ¿à¸•?ò"=¿Å¬r§ma‹úJŬZD†ˆë9U—§¡&=舸…U@cég™&ó…_‚ëúìäfÒóWÿžO¿{@ýq¨˜ ÉQÝOÓô¹4h¡éó·àMœ!îVÉÏÇ(¦ë:“v»’ô+Êò.óSjÌ·þü)/V^E*Õ3ûõ¸’„=¡DÖš ‘`›˜"ÃÐë`g)h\µg‚›ß’ÕØy¼!¬Sóâ ¿H3é|2 Ò2Ú¢z©,ÕKÃÛQÍ8Åñ·v/üqÚ­g²­íFã´ùY²zht3mKóDŽU5TKwo²XÏÄ<”-ÊšñaY\ÿB.¾'ótìÿÈ€5äÑ46uÂÝ|… TNÅ TÏú™.„e"¬Ð5_ÖuR1|X[©Ø½&=ªâpÙ¶Ê<la( ßI˜Ò¨çû§ÿÏã_\ °}|øúðx÷¥|½vžt“Y øò&¡”9#¬ZĶŸ {Â1yú qÍ6î‘bŽ‹0ïå7‹u‚cR­ic8d µm±1óÐ8R‘O-#ñ~¥$§È}sc[Å “{ƒ«Ñ¸/~L:Y!F…&€¶0&NÀyí}h6¼{¾[¸(‚OÏ7y"क़¦‡4Ø2oGSVÌ—32«YÀ’«fCh´\O8ŽAWt•MÈ݈XT<³R'4Nò*nNjæä8›`xWÙ”Šh‘(ѱ¤ž°œV-dä[‰Š$¹2&= þ 7qÑ8§!É2å“"âàI­wâeÓrgYˆç¨ìûêèáχ׿vŸïwä“r?<ÿíËÝ×}cèýL.A”ièŽ^P“4]ÛG„-3@þ©"græú^ô-ÝêS¾²2[Æz=ùÎa},!ý¹Q;] æë\O7k žÙš >â`ª±ƒ•C‘ÔÚOQápA³@ºb=àªD‹IkÚÓWƤÉGB¨INYÓåÖ]¯³(K®ÿ ’ÇÃùöO¨ÃwO¿§-ŸÌ5íQÇD¼eçDš‰ ºj<«#DG€@žë-C4ËòøfT±âVÉ›e:a´ÿåߨ £³ŒhFp3›#0ÅcB7Z/WV3Äî÷†?”/ƒ·­Gûƒ«çˆŸ¹º 𢽥ÜVµÀbZ?!Ý•ÚrÒÙBM]ÙÆœ2Y›'#ÔJ3²è¡'É8B#Z$QK‹s#­Ø09³W’ÿ£‚[Ms\¤&¥‘D£±Aí89ùò¨_MåÑÅëÃ4Ø ÛIN&=lYù¹-)´mmÖ„€¤V›¸Ffã¹n3RE4’ââj¤ÿm1ž+ï §¯Ûg5Òϣı"œÕË{lš¿·½É‘«‘ÊRnî©ä§}q52/ w=@ê¼yâÈ™t2øÝ‹^:l"ïÏÜadA· q®!«éW…íé(!-^W¤pT9› ×<­Vdí?Y¤K¸¢¦´œ¬ W‚üX7…«§Æƒ‹073&(„«ûÉòJ -$æ}ãVV%æ¢T;¿¹–‘iÒ›1xMÞdI㳡 tƒ·9çM‰UïPQ®/”H|}¥ì”Q³¾–œW5eU*[¦ñÁñ"Ë}ˆnïµÕuì®IÐ@kò3Á ."»`‘£âÌB½BP³)¥YÛ1…+±¦ò5‡dF¤«v×”WNæ'ØÒ^«§ñšrnò,[©©Ç¢f[Ò8Íʬvêð'2­ššŸ˜´Zƒ(kÆÁ~h.ÎÆº£m)ÖÏ Ò+Ö5Ï%Õ] Ø,5M$ë‘˱î%WˆKS'xý,*ä—[ã&]K„mjké2A_éþ^„è¹È°}éZÞ¶ž—l^60¬ ´À´˜$ÔRpnÁ^ŒI«èà%äÔÄHãçð-° QÝ~œøÊûÒ²Ô(ôß—ÞBgXö¾[RæújbðÀ#Ø0ÔË—È!Æ«/ò}{úö®¶·‘GÿcîÃ~¹$’HQRv®Ã.p`ß0{‡û° ÜŽÓ›$ÎÆÎL÷ýú#«œ¸âÈ–ª$y²‹z0ÝŽS%QâCŠ"^½á$P:õûmØ yÝŒDy šÂËÒOýÀ+ÉÊ¥¨JµßPÐÇ\¾¤”K¢ï²Õ8!¿§?H2QîE²ÑNÇÙÕáÇ #nÓxl •¥‚ì=Ôù1XÎ&„?/U,Äp¼Ù1(lÎQ‰ÿu'¶+ܹÔié9uWt°¶>Ý}0îœ×Ò!Þe[W¨ÿ¹ËF¸üö»ß_Z ¢çJÈA-ñŒ)Ì>Ì:[ bõÓòòÛ›«K\à"ø9^Ì—×D^Ãu‡ËšóþS@:Nfþì-è¥ÎCçê9¬Ûtú•¯r ƒ¼ïñq%Sé´â kÇü~±¼]^M—ãq×=ÂÒ ç½ò| Åtjvdê¯O û†F–À*IŽ>~¬Ýr]23ýü0vË;˜@ 3ÓíGRò‘geúÕ%[¤Ô8ƒº—²ŠälÈ”Ùþ˜ýSêÜhw®Î!0ä…P•óØsì4q@G.‘|ùBù ³ç/Ïþûß¿ÿÓwúËÙßÄÔü0ûÛ_» ÎþÅÿ0ûÄ‹r9ë?8x\-–ëõÝÏ¿~ÿ—ßÍ®YmxcmV³_o6ËnežÖ—³çšÕý¬†ËïÈÅìßf²…ïyƼn7ëÙâvµæ ùMtFÚ™ =ØôMư¦>û=[Ós'1™ÐÀh92Z× õ(£ÅbðÁÐE’t{³»s–…¦·ä<ä,ÿˆP›»Øµ JgJQÿ8—Þ‹Qºøk·Iߪì™Þ×Ù³3ÄÙÛ(ÀYô.ààb€µ`¦[>~„™Ò×h‰G)«Í³N犾6€¼5{‡òz~»^:eÙßÎîŸîØ_ýquýr¶•Ü›V5Ò\ƘTâcµW„î¸I”‰Çø¯†ûÍ£zS¸]–ýV4<(e «D¶ÉãW£ D®D‡·XíÅ>rµÎSfg ÇÏU¤²Bìÿùåþ­½Ñ r¬FˆÏ;óGÆ?žó§åæòþýìÈÛ—{Ÿ¬ø|±¾¾xÎcî¿Ì†ónu5ÜÓ¨x×O 
Ù<—ßn‡óãÃÓæòÛŠoýy~û´ü±gI%mg>tFôIæùüÎÎŬúÖ³‡á( S¤Šà(L¡c,H¸¤L>‰Ú;J‘ä1Û$ñbW‚ƒ©åá¡B¯³=òn§P©%xä”6­ñˆåØwÜÇ#"©Û4Q†'—G%’‰BÊá{A!™Á[ìáslìÙ¾kˆ8J«6ˆ³}×1œaÿQÐ$uàSÍ„¿õÊYžÌ…8ãFx=ÊSðˆi·‡=b{EuÑHÀnfy0R›Ã 5f¤‹”×EËf¬žrý¬¿ÚI;€Âuƒüu³J>9‡äº9—±l1z-¼›Y¦»*3"B“±næÁ8×Ü]%÷V¥ïŠÖ‰ÚÊÌz»<;¡•¦؉79¢[ìAeåÞÏY¡¸Pê ¯,í[5ÎzÔÁÀ®ð±½Yì‡3¼uá•Á9b øPèζޮ¬õàbûÕò·Ñ›:õ¡yûÕÆÚnTß®5+F÷ܰ0nãÖË`û¢Õ£½¢šC9ê5YòÆ2¯ &´$ÒÁƒRްØú⯉Øeò6ÃkRÆ…¤ùÕ&f~3Ë AÐÇ‹j,NJÏEÖÚŽ ¾pÝhĺIå¡ ëf}0Éuè·;œZ®»ëQâ£.O¦È Y°í£|&j†ØÄ€³Æ–”"gZ« ë3¹8yßöÚžÉ#yey\¹å™<£vGÚ›:[¦¶¹?Æx¡¬í‰¢ެ.ÎܧÿùüyÊTWh¯-®¡v0µÍä/¶¦7z¼A3åVÇ(‡§0žøL¯“FD³x|)tòtôVg7³LxiL¢ÃiÊ›`›Ÿ¼B ˆìŒ=a˜O’ƒt0Ì£•{±ýÿÛHïäJQð·¯z.†ê¹ß¾éȪ ØX5å …¢¬üÛÅAì G€D°Mpi°Âd0ÔcpÑ`ènj™hÚ Ÿmló,‘#˜(Ú °×ÙÄŒ53µA'€™‡ÕU~sÏ3‡× ö+Übqöåï×þã[ðÑÚŸâ ‰]UíGƒRñšCUShU@oô¨{›áž‡*-lqʧ¡ÊY‚$TÙ½oL-ª»²6œª¨q…C/G (Ti%…¾‡Úã?e”§§2X^:Ê}ï”(O¼ 7¥Š1Xv±É‡bÏFðlØ s`!dzÉ8GmãâûI»™e¢*Ô¨ëù*háLû8Ø?FY­ÐÚ:œ(¹Ùq`Ü;ºçzÕfïNKYjs§õê¥COF²¿š]`½zk[鈴†h@c­ÃâRŽs¶Î}[xÜ9 Go2Y(„½«ÚÍ,dT„Á[¸;)2Ô¸8S¤IÕµV1Ø8ptw‚œE§ åtiRJÖ}p±x\D*7œš¼‰?¼NfHüÙ­qƒh ??"[zàÝS #jÀ4£{]É0£5©Knž¹…h€f7µÔÑD«dÑ;Ÿ?Ș$R wËB”õìÍZ¬ „«¦$šŒô¡/^4c½È€Ò9Ö˸ô¢Å™lv˵^^yoÝi­—QúÖ‹âÆËñAI·` ï©Îh Ký¾‡¦Q0cÂØ†tJ7ŒkL[‘¥ÊQŸæe. C4c]³IaÏBœˆÕ_b0<¬¿ëó/þ¸ìüÃÍzó«Ñ;ÖYÓS`Ó’59•ßj[p ×øÓX `0—ßÈXg72&FµÅòæçåÕ^·3—ƒ¬ö’Kó¯‘Gl'¶}ÊÍzv¿úev»úeù8Û|žßÏ^ÄqÞÍ}?Â@½”ÍEMl„sN pò…{÷Ó¢ùGw•nC‡u郞=ñïçwKÖDî+£«ŒdÛïðE˜$6_î/¿”k³í‹ºë†úõÍÃçë»åæéöï_ñay÷ùãÍãç«û››…j˜žˆ3îÍCW,Ø¢,q/f|ŠñþkÿÍÁíj…^`2çjÿˆ ñ0£ ù|±k²£˜¨"ƒÆ§òùä£ÄúyCÂv3Ëp͈ðœ-‡µî† ž±Õ”}lRZFÉÆ`9‘XÑb“žÄ'…ö¦8üÉ'ÏìÕ–´ 6Õi‡9²6½Ú.Î÷2³¬ 5Ê;›ïˆw¯v– ™"“BKz{1‰0¾(äNÀ9`áŸs` >¾oÎç!¾âPEœÝ.eÏÇá“ÓSúž¦5*7É…:D!‰x»;ÔIDcÁ%‰¢×hƒ¹¤]hîICó3|D¡ Í^'IÃL;ÆLBE¾ïBãdw1¾±¾aìcH/EwJš_Ö™'½ÿ¿#žvÄ¢\Ðr÷^dÎÃ^—ƒ:'Dã»Óœ¾=äøÈY‚L›‹ ü„d4pNQ0 ‹ÝY5•Ü]&íÎâ¶Ä1㔋åîf–é΢„ IpgƒÑrv)Úÿºq›¬^ŒÖEÝY©>Uˆ§l<`Æ](ç…óÿÕ¿ž ÿ-|txLuu仇µ–hL±/Œ ¶t—†Ð¢‹¯1ìye«Ât‚×gõóýÙî+¯ÿyör ˜뺇u«ÕÙ£6@EÀnÌ„P:’UrÕhFö¶Ÿ$®×‚÷¥t$ò6=OîÙ$ä l¡sº§°8jŒŽÝ9„S§«°‘è e“xtCCÍ*\]&˜Æ‡³ŠEBx4ŸÁW9äöv‘¤°·9, »sýdt˜¼¬YžÜ„ƒ¿þ'>°ay«ï}ïuŒ³çMü‡)çNK‚¬Ïpî|Œ7c7Ù* ,ç0a”ïg­ K‘þ¢ÆÆ¾K\ìtÏ3–fàXWp‡§ï gãß¼Ú…W˜¼{5™àOéøìšòÜ\l£¤g·sþäù'ëåfשg„ä8@/t€¬uSʬ(£‡ÂRhŒÜ*zB¼C”BÒ"F®ô!Ùj-ÙÚ ©Ž'Ä£6Òr ’¦¶H†–Zߺa;ËÙxˆ¹B€l9Œ†NlÈçx@¤ < ±0qpYÙv»P¾nïèS |½ç9t ÑwÛ{#^u±ìºttVë –â5Jž¤øIݦèãu›Ç§eðdj)3@ÜMj ¢¤ »”cA|Œø‡˜] û2¿™÷¤Þ–ÉŒÔ*hJ⽳ѶbyÖÁ{V%VÒ@cð¾0øÖP,f¯`ÇÙ—ª0D5q_,vžß+@Å÷ *¥Á›’‰<¢ºù€sÞW,54ÙI¿OëÍêŽ%¼zâÅ‘Qt©¿üíåÞë «|Çž4Ááå1ŠŒ‚éFA–ÇM"P`íb–üH£ÐL¨µ¼~Ù[^Ú¹¹ –baš:nDÀÍõH°‚èTBN1?ÊcåÐP‘z€& îµ4ÝÐï@ÁïxcßÛ™åÕÍüyö¬'¬_ó5cèbóô¸<ßö+8ÿr6AÓQYõMÓ•Ú#þ­±Ræœ=>½Ú™àãWj{ê‰õ!¨Åçåâ§”•Ì}L™ï,¢³\\›I|7„÷.5®³ÄÏ–z‰÷ÞíCÍÒI7s¶µ÷IØf·'ÛIV€í^}T6 c¿¬àŠÀš7 9c¬O†¬”$SÂ}·¾ªû®³â6 ³Ý÷÷€J·ˆY]5pJ)ŸoÝ9†ÓqUQ{kŒÉéï 5ñì©$+9ˆ^‘çUƒ4„¨¨Ìí@ §èÿõöм›28°ˆð5`lÀ rù‘áÓ¸¨WßI_´úäC‹ã7ØÝr4t:·4ø{Id}îËûñfq&Œ*#/ê >(p­œRÁ¡v 0‰Ìh¡…ÚÙ2ª¾B/¥­59DT©º‘—õqú°gÔ_~x;|Ÿ¿L‰úiÚUé`bØëÅKFpÇ3[AWõòlž“Ù·¼öê ¹îæq‹·^<Þ:;É¥*b:>{*—˜1‰˜cÚ j¦Õ*Œ¨¨¢e,}ÀˆfðÁ¢-陞‹“˜…“B—”ô¼åºAh[uRYLËÔ‹Š•ûU0Ðà”³¼Éb0~lTÝ‚¯~ÅPSFH”5=•rÙØuÔn²•À.ž95Ø!@{°Óź`œÒ*Õ§«n¹‚ÊY«©š/–±Co\Ñ­¡pí¶Á6 àû’žS;;Æ—ÕË ÁÏ':áð•c†v‹À‘h‚ƒh­Aœ‚Ò8çñÕô$…¸ Z1â4Û@ŶdU [åîÊûØšÜ(JëÝ)°õ¸:ä¸S”€yÊ*‚·8=°94Z.ohãozÐhàÔAÍ)n<îXN-A”¶“¸Q­±j„4÷T󴕃gúŸg”LÖ¥BœOõE•nñA¼>-®«^óküàã·ø ^:Ù8ݪSU㘘u>'¢²0fLñ.Ï-ŸAÛ7QuT'Íê%¶½d»ê _ϤԸ;!6É“…žŸÛæŸcûh»ä)΃:"§š0Š*X!ÌKÃhFÎ;l«ëß´à~J-eOB›QÞi5ÓDµ‹F9=jÉ„5qµUëTÉd¥¡B°)P 8¸ŒHF—•1±§ØäN(H§ÒáÿN¬ñÊe‘Â4¢Ï¶YƒzŸWBAkv8mV dY'ã‘W÷¿¡X$gõ˜²NèU‘–y皣%¯†fíK&˜²)ÖkÄÖË!´’.\”xèÚÞ›Îïl\—ûÈ8±ÞlÛö… ϵ7‹åúâg}.=‚ö;jp?½‡Å/mˆ¼*P¤)Œ¡^*~<¿{|ñÔ‰ÕWÓµ¤ p5.&EŠf‚ÄS©"Š‡ã²»Gw#c(öªÈEw‚¤|c¢AR¹'ìâÂÇo ¨.aVžCÊ'‚1uQS!âÐÊZ¹ž+Ëú´ŠZ`®`<õÝIú] 9Òâe±~œT÷ä›Ê^7à)cÙ³-)E~<WˇÛÕWÁ¼- Êø ù ¦/AÚæÙíØØ,\4ÎYˆ­žá“ýA‚èÉ>–Ö“Kž*ØÏ²2ªbùxØà½RcL_%¥Ö1‘³ Û'+<ÿ—°}¡j0ÆeÝRÐõk‚ÀÅÁåã!Ê`—ñ/òóTyñiÚfUÔ|%Ô\|Òm¥ÉµI³ŸÓ8ÚoîE¹"®£„tWó§Íçy× p³úiyÿlUÇåÌÛ#áÈÀΈ/ÊR°„M‘:XûÜ]Ýo#7’ÿWŒ{Ù—•‡d}°8äå8,àw÷º[“ƶ [Nfþû+¶dY–)5Ù$åÉîÃ&{º›U¬_}W™`Ý{óá¹îøãÁŠE½âëáÈ…ëû*˜‘¡Y9¸IãYF1ÓŸ ¶ô„A€qEÊFÆ=!–#ãväj¤JõyªI‹" $×ê’HðèMéšÄr_ 
_ÜÏ;€9wË•ƒ|êcÐ;‰ø3ÔÏ,Pœ5Æ(A-ö• ’|ª0 ^ÊU›²3¥G2le@ô´L¡ï ̈ËÈÜ÷· 1oÙÇK[†×ž "V1/ë ˆ4#Õ Q^J ¦:øÊ©’d§ †åY ©_[8æˆcG+«ê`oåÑâ®ú<Û¶)†›ží©÷y61fz™¶a¤Œé.?ʇ†Ã‰.0œA¥þé`HÔæîêýÚ¦nïÿ|{øåŠn®ßaJéövu}}{;xÆëtã]ZÃúÆ=ŸÚà¦Mlèð:£Q”àͶõxÐS’ƒ!¿œÆB»©_¤ÀÈ¡·SàŒþF?H⻳·Ñ*ôø¿LŽo·ÿþO…ÁÇc^©ô*Ã~²:NV­²*}zG4²ÕãºÈðœ:ëœñOŸ…Pk¢ÄUŠ"P4¼£÷zž ~ZY!з ¦› èâ¢sôœÀäÿÙ»¶ÞÆn$ýWŒ} XÉdY¬êy` fö5¸eub¤}_Ò= ìߢ¤¶d‰ÉsÈãno‚F’ö刧Xüê«b]zœ#”@‰ƒdæ1”¾oó:BÃq=ìð5,Bí´è £_µª¼¼¼hmZ,h×*¶ >Ο€QlÔ#C@È€³`G£–¢ðãPW€BDr(ä7c¤Phûj%0aNêa2ÖÀPvãò0ä uG!JTÅ}ˆN™‚G¯†=‹ß–—O*Üó—Ý9×ÀಠÐÔ|úª7ãQlìA= U•6[GÔ$*®Uüœc·(@Uý¬“ë7™ÃŽ‰Ï¯V„*~n,ëÖU¡ PPê0 UÀv÷rUŽ)r£¯ FŒ›”Ü~³á‹[]ý¿Nðˆ÷ô/ 3ÞT/f—ÔX?޼ÅQ®n¬bj~ñ-½Eã¯;^¼|É\~™æy‡rÏ’G ¸vÙ$bkèøÞ©äEÌ(Óáü€þB¼·¨¼x)÷‰{—"Y—gb¹„R­cܸ|§íC®£o¯Kawä×àÒ%.Ú OUæ+§<%Ÿ]wóÅàA.}eõÚëtéÖw-¶$Kñ¸ü¶åµ ç¨>0t’¨“!pÕ¤½C0<5‹‡o^% !9¹„(º;¿ K`ê,AÌ‚6¡Äl†Çž4äRaØ62(¦Î´8úlû›\ÒÄŒ”‰LAQÕگѦcévŠ#U‰QÁŒ]ŒAŒÿ¦¼ßÐ'Èí\%0õœØÇèߢO<9›PÜIK°¹ÒÚ·[6²^-A µ•[2Æs£Î½¸0%@JZ6êÕðŸNAòՆؤ”Fñ²{µ:M¬Ñܸ K»î:Òkq¿< Ý63;_<ÜÏ®~½©lxŠìÜpÙçqœ†Œþpq4P0Õ5vC¤5¬~.ÐQ„°€ª;còQ²”¢ê;¢iÐä笋7¬ÝâHJ÷Ì$3ØDjÍ-91»t¦EÓTÔé™BE{¼aàpt_pSÿ @ì@¸=ÅKˆ`»†áW¶hõÃGÚ—<Â4™¹X°YØŒ< ÷…p¬Æê¸Eò=Á±‹„;*Ú¢jóîEÎB8†TºÉŽ[@x<JÛë’âZugºß+3ä„ë+£âµØ „34.æÆÕSbÎQUŽÈøQªà±Á¶s¨/ÅÓu8ºyxº»»½Ÿº¾"¿}|ÎûÓh…ä¹ (@rhÈô\o™À´+Ù°©§)¶mçœ %tÛ[—§Û>9öt+¢&l;Öiyµ1UPÝâ|v{¥Œ‰ÀxÜ'Ï$áHê kJ²CëÔcá¡ç¦º€®î Z|%ª½MJlöža¾"€8d ëÇzF²í¸öVÀøöVºcñ›H¹6–à·¡l HŽ­Þ‘`#×à. ÍÁáq‡Ý÷pã$ à$$Χ<´!0)ÅnŽ5Ç x1ˆãfâ¥ÚÃÜ ¡s×ýGu Ë>­›ÞåiøäA|ÓŽ«’dû`IŒ³Ã¦2 “Z» ¶*£+ÉEtNl¦ER¹ˆ;Bj„Ò"ž}UNƒC v¯ÐqŽ%$`æ„1\O:¿'ͽVëW;¿g8ZtÝß@@Ø+ۮ׈w·—'Dzì³ÍäñºúÉ .æ¡U´.¯vÕ ÏÆØäƒÙG0’ FïH¨Ñ}¢±1nZäµý+´Ý— Ø{ÈëçÖg²ÁhvM‘×7&Êã0¢çÞ: ]PcƒyMêûò¯&Õ¶A[?$qƒ8æÆ\÷å_Û‘[ËË¥„Û‚µÙë>YjôÕV&Qµ§XØ`­¤zÉè+ÇÆÔ覥¶„FSV ÁÑ ( £Ê~¡í‚ª±`aS[=)ªVôé¨ZrÃ÷¡h) @"êì{oš@m…ìZ¢oìÇá°~ó s(9fdGRØÙ*nq^…&`¸Ž’L.ÖŠ¦CÀŒMq—¡w9ŒÄÝJ¼8¶µlŒºÉãMà½1Ç EâÆPü"£ûöâéñ·ó‹U#‚ÇÛßõ;¡ôÌÏ÷é_°-ydfk`HÈ!Hìq”oŸ“ä‹û¹œÇÅÜœUTŸ¾g;…Ä|,Ôð,™&@Œs–ÚfplƒwfTÞ[èŸLRLX÷ÉÏûLØÌÁ†¹QoæÓjgßû} ÈHm8ÍY9Ð`˦=<˜G§âæíÿ¦gbyÿþrØwEøìI×wsq½TqÆÕίtKTpQš[<°g?ž=~¾y÷Ãð¾6©lóƒ/ͼð‡û—nÆŸ>>\o›Ê¸KñFš·sm·¬Ý^7áh¯1/{ÝüøÒˆ­~Ýòp5=m®VjÊ2(›;QÌ _ž÷âýÇåßT½~¾½½;hÄõwŰåO7—ËÏïÁûïÏ¢º_-/·_ÃD#yëŒúé2F_Á²‘ßú]ƒ$3«Ÿ_滸س«¸(=Š‹åÕË˽6ñæ6Ž$_Ūÿ#ñˆÍ›mžrõ ê“ãOËû³Çß.nΞå1_½üž}Ò.ŽŒI%ûAO=ÐÞÑ+Ù‰ý Ç÷˜õ…}Ø¢R6&—U Ëöt³ñõ‹ûTIëΛ´a³AÍ/Åëè]­Ø}Æ*÷w;¾NlNÛt·óÆI¬oþZú‰åëýóÁ±O¹…d¸eÓ6y•¦m·¤qëiÆÈú!ñ0UEõ.ÆK  ü"ÑŒÍã—X—Å/Ú¤©ô‘ܾZ €Åe!;²B¿¨ÙÎÍæ×r„4ý³X<®Û|1" š×@¤ûÛ§Çd‰ç±öÙ].>'³÷Ï»Ï;0ðaA‹‹÷­Ñ©Ïw‘JÂWTH<¨èˆãh  å@V ³É³o§Ýd “¥é;oV†S`#‚ÂÔ8åŒïS*Æ4cÒˆù{4&^†ONìøT}¥ùiT‹‘êЦúwpü@‡/Ÿ¶ >µú¼Ïú‚ާy6vÇB0¾Í6W Ž¾‚þ Î¦€ô$êxJ¥Wì¾Z1ì(N1TÁNµ/£`‡Œë;~í À˜œlH¤+k:(<…ŸÖqnô>‘«Ä§ŽKÛA2äPe—v ô‚QZfiè‰T=Çdã 8 zRzV˜¼õ!zê°Ø,èm’ýÊÚ¶¯Vz–õã\À ÐËn\èI玜k9Iž‰“ûS-üP­\aÄó m™Wîów‰˜w͉XîãOBTlãÍf D…Mð¦rª|ìža,BySƒPâÙ€²œõæÎþ¬÷í‹•â“gÃRO¹MËÃS°sªÖR .ÉÉŒ1Àëàáè¤ÖB šd¶R‹t«°â`U·†€Å«n 'AËëù¥ÿÎ`‡iõÖYòƯŽN9À©;‘‡ÛËóÍgþR(0‹[Ì>}&¸ÙÄ•+ÄÖU áª$`>RY\´Öù!^=J`µíŽÇ oP°7ÉKKâ¤$3Àe£Áa²ÄލšŒ˜×e{öA¨ÆbyÝæ‘'Ö[ém±TÎiBílð¨FsÌÀâ,à¢ê ®(,nƒGö6˜¸7fT„(0R4!LW^×|µXγ/ Ï_›)yï>(Åq³»O‹ÅÿT!2y°Ãw£’‡ÜC£SÃçšqÇCEØ—õŸP€ËRà'ˆMVgìÈ«.‹º¡—[œ]6àúã²P: G0Ф#zö›ârQÃRK7´ðx zôÜap:S˜«î“ =ëåô>ÝÞÿ>Owå_ê©{xXÝÍgûúW§‘…Ö­’AÎyâPÛ©Fú':ëÕˆ¾¸¦'aÀW*é%læœ2àbŒ7Ù›ˆXò˜JúØŠ³IQOˆ¥Ž ţÚéªÂ5ÎÄ.-üö•‰­e‘er„ye ÉTÇy¶Ò&ô|=1î”­{A’*ã¹g}åÓÝhœQÞÝ´o#´nGóu–žJ‚¦÷Ýh<Œ˜j§ÇÙ‹;2¸ÖmuýÝ(-;l¢.ôÎÃñVŸ»÷Ô)Ê sûqêW‡Ž*´â±+uê­l f»x>uß?"Ü\á¤dÇ…ŠT¥Ä!È‘Ž%ZÙ~’";k*Š'Á9¤‰¹7½ïœ£˜}*‚7Êa¦NB¹AÛ†âKBFúÆTÙ–¿7ÂôÔêíbmPëþmb{ë™ä­à¬R&`=Bu/¼iðÝÔá»XtW‘T†l› 4™È½#±Fø®V)®hb€'„þïÈ%^©A–#¼¾©Ï'®ñò)±¥§”>讇Ã2~Ë̽ø.`*î<hš‚.f_ûÜ]0NþÀl7yî\²1ÇV`­ „ÄÂXhÿkÝÉfþÐŽÒÐŽ¢ßó“@{Ê~l#$¶‚9hÍö*@ô¿uk¬RéCòU=‚nk?3p||ûsúêð ðÛGt#ý¹ÍËÛÙo«RÖþÚ"¢wPkŠä}Â> {¤€1ÇEO|ÖN þ”äì„Ê7UB¾#À6vã|_c'Zh VتðˆjÏÃ|¡R¹ÿ¹>_\<^|¼ýõáöI-S‘òœøõqºãüÜ;ŸÕIó(ì󺳞3´§;;âk¡;qÕFën¢Ç› 1Â}̆õ°ß†Õt ø®6Ù`Á{Š­Ãü7a4l¥Ñ`ð¶ÄhpÖ¹P§ŒÆ®[Y ö¨§eâ“o;XXË9YwÊ#“ÍÝ 4í-k­-ò3âгâ(ÒëÀPW\ÒØÓ4¨Gƒü7@'\Æ+¢®€NXNf¦ïȯŸ 
çýÔ|B:lœ™No‰OT'»3Šý<²Â‰£Æa–·H(BlöмPŸ'›¬Ïƒi¤ÏlÄ'â€:ô~â“Ý;E1Ûd´c×X·W“ò'Ÿ¨€¡ž¸‚0äJ½DcY]Û¶©Þy8WýXì–ú艺R±è‘9 µofº#÷W™oU±™‹¨žIž,Ø/R9‰hÓåÅ[Ñ4! ºì¯Â§& È®G ª ™žlAUbÅ.ÿùtûxq¢íÂé“ZøÓ`,n«³ìU©bTW•'¹TÈ'XB©„Ç…«£Æ9ÃÕ«X@P|ºzõYŽmªWuÙq"M|ìíÞ$Y½ªLA©‚Èé>83…Ö9éÓãOW…è0‘ ÈνgÞ”t²×zjî7Íw^ªQUë\_Ø2¥…#M1P‰Û'„ÒªcÀjwÙP€,Õ"‹e³˜ë’]¦wDÐrW«+vj¢µÉ“lO´ÔA÷±ë‹»½^*úÅMC÷Ëó_u'ôg—¿½¿½¸¿œýγ/àö0[||Š Ù“Eýµò8ŠÒ¾/%SÝi¤ŠÁ/ý:wÞ̯Ziš P@° ß¶I\2»#àVü ¬º¼±´X«N»7ð++!ͯÐ8^Y:ί¸q4¦WQ§ëf‡ö&Ș¹8#ä®ûB{l@x÷ñbÝâ9´º»½Ü$Óéßè"®þ¸zü׺KôPÒÛëc«3 ,MF= ,çÄ‚#¯d÷ÙÑÓÙÊ}¶sŒÅZ ²]–ž©GŸA¨û˜ÊBÚÙ¨kµjuœ ^Om±ÄwéEg Y‹ðjÀmÃè1ŽøóSÚÇou&등nZð‚d¬´T„ã¨Ëdrºå–ܬoO.Þ°ÝëiÌ6¥|•‰®‚¬˜&®iôy{²GŸ·Çz¤s¤08æ{'Š_çgœ45dRÎÑŽµN4&ö§Ú9ê_/©bv6Ù„Ý0q Ž´´åo9æÜÃÔõԃС3ߪ;&ëêI9y—Ÿu£"(<ÌÕé\¡»:›·×_„{©¿¹Šùp¾ñCã­ÿêKU¡j}A[CèMóqw‡$6  ‹µÑ±öÒlÙ…Ùù‚9æ1Ÿ7ƒì*X“ÎhÞŠ®´Cvg+ Ý´j_Fi}½Ð}ürÒô½¢sF1ŧ¡Ý‡¶-N¤$ïد©k¸÷•c» ÌÞ˜ÝìÐ!—Œ›[kÅ}Õ¯®<@‚*¯ ‚ ßò¼a·×̨p¥Z,cCðG³ú­®‰M©Ž¥?T››«®ŒetÉ©q;roœÂ¹Ø@uí5[ë\(³’³—ˆt¸9y “D:,%"|ØbÐÃdÁŽi±®ŸÅ*]š ?š%]oÅJ¯ÜÔùѦqËž×ìÐaÚçÁª6·ïzR™õØÖFaéÜŠ !öäkká×oß*ÍaµcdDòéã€Á…5Ì”œ–¸ó¶-ìr\5€5“Ûe"ÛÛ.«˜×S’^Úe}eˆ}'N·½Žc“›¶6lÑÛžÞcûD±ÌÈHupÔìÜÅZòÿÚQëÑÏ&»åy&ÃBxê)µÎ¸7ì§Uå„9 E~šÏš2ëQ û;[±7qÓô` FjÌÁx˜ d½ÀMK™Ý(a ÁM䦹"7;­ˆŒó¬õ>?8Ь0â©o×½0÷¨/¡G$s¥eލå½³ÿºˆë¼¹¸Y,Ïÿ®Ëyz8T¨ÙÁ æY¢Žuvè¨Í’Õq/¬¡>—tÊêPl1qxA”3雼ûá§¿¼óN‚ÄjkAåÓq9=éòn.®—zâbg‹W*'Õá‹§Çß¶oÏ~<{ü|óî‡â9×MŠî º&çê]7YÄΤë€r¶ª9¥òÆêØMV÷£~ìKµ.ÐááŠ~š0­]†Ô稼xûË™~áæáb±Òà´Iµu œ.>>,±ûýÙÍÓµÊò·ž c¼«<¸’D H@¹&K+ÏóÉîÜëwIuûfßÝÝßFõ\s‘ìÝ8̃ !"ç¿GA¡Û(Á…W¢Ûb½cßèDŠ?x‡ˆz …¬buÆQZ(4R:8¯ '‚Ïç1ŸPðs Ÿ¯þýŸŠf¯w¬ÀãÔû·'¡Ç”+lL×¶ò%„ŽuŠ÷ÈwŠàušQ?&ß…j2~ŒyÃCˆ³Pð^¹tè_–a;F ¢ï/d$g³U»„³6[÷*ÕÙeWZMµ*{0ž+Œ:Xql˜£Õ­3œF9»”?¯;eWSºÒõö®iQ&S£Öɦi²"ô8ºÃΑQS]–öÍ{0öHוê™ß[ß*¾A¥C‘Øÿ½«[rãÖѯrjoöfG&‰ÿÜî힪}…ùM\ñØ®;§rž~AIöh$ª›Ý$5v6IUâ(3­&@|@àCƒÆ5sz˜9šý ¬ú§ˆ ûòÎLO?‰¸âŽ¡Ñ,â‚ÛàDÓq·;˜BõžÝÆ`•HMöˆGƒáú0f9ã©›—ìc¹¾ëKÿÈÒ¹/b.œÕ©Ç•€m:µþ…"’ÛhÔl,Cöíon*»Ckÿä×Õ¿ûa_Øî‡æ ŠW \ÎTàDÙjÊ<ÔÄÇ™®ãγ°*µív¸ú@ø1AìܾaeJ[ö w«wʸ @1ÚÀgúrZ+“¿[ɾ {š,„çUd"Ørp°ÁŠê8ŒÆê/=7¦¥:qBL‹ôä,€E©˜\hfŠ6w„Ì€ sgAÎY8 DÖ'ãF€),IĈ£¤Fn²h#\HÝHÐÁ|I? úön·Ýª6fGC[T+aÀüœ¸‰ Tu0—áîE_Kß?ÞÿâVŠÏûDØóæ¸y¾~üüÁ»ØJ¸(SžoOèÅr¹} BK]ÃW…$.'2ì*Ê~ó¨B2äY˜ˆÓ\è{‹)ŽG{‘[CÍ(íO]‚Óy¬6y^²¬1Ðcw1-%Ì]QÑ6í±§¾ Í&U4A’KØ »ÃÊyKÞµM:ß×vŒÔ¹{@XšØ’y?#`”–¢ÇjUZ×á4SG¸«Œ¨ÒõXÿ¬Ö‘£6aMw “R Dá/øYp š6¨´*ñC³¾¾ŽÅCäE¼]ò>ºñhœ,ÉûÌî­ DA° ä} ×®YQª9.÷éÍ6÷ㆅ“‚ëhù†ííÑö`ƒ‹ðl1ž& ™Ò†0çzäFÆð³ð_ôRFžG7 °C"¦ q Æô¬Õ£ž6÷Ä—U+N?ý°OMÎtd»¾÷¯Më".ŠíŽ¡ˆS©ßI·d÷ˆ•Õ]¹Î/ß¹4—Osuù´ä‰$ã …ˆ”üdŸð ö+/wÝ,­ª~Ú_Kc R™áÛ}wrä[]%¹×ýXžâ¡T¸šÁF]Ǻòi×Gº@ùô€PòªÁ—AÕ€W:À7ZŽc^éLCîv«‹BrÔm‚¹Ä´†g"!D`i‡9ª†99"!³Y˜KŽþiæö\'¤/K«€9ˆi!%±×¬9úãÇ&©q›DÂ+àÆG—qòSÖ j„;v•Ät .‚ÿ—¾ØP8bZUd‘(—‚®£½¡SÚÆ2¹Djh²‰t¿V°bÍÍËbæYo¶øãA?½ÂŸWÏhd½AS2ð˜æ²8Å€#‚>Í]‘!Œ¼Ìk¢—òƒ+à¢ë;Ei‘ ¼âönÛ„™u)—|»ðÖ]ØqÑÜ#gž¡XcïQçìÝ%YʵŠªË]º"éüâPƒUïXP1v»dQfå3#%»ÞÞ(ÔdXUêS¬}c¤n‰x §à>Ëëà¾Iô;/Òñ.ë››[Á»[»ºýL·w‹À8ŠÆ+Æzø¶‘t1SG³äº"±#dÄ $&€y$&+µ¾ªGN¼(ÖÃX9Åá@Lp&‹ÉýÅ£^õÕˆœjÙü¥×^§¯ƒ‘@`B«š$I¨ ±`î8 þôü¸ûóËEݾó©›ÑGÉùÔ}Ýüy›·Üøu6«Í~‚–Œ¿K¤‡Ñû[gâ«ípèeôÊ(º´ŸlÜ«²Þãe^%G{ÍÎêeYÓª‚Òà`?kY…Õ¯k=5š­Â ӼѰ&™3Ù×!ž”‡¼,¶Gh~kLzq£A|ËïrÞw(¿îõ% åñ£Gå!!W­lÂì“®ÇdJÍ ¢g,w¤Ž’Œ¨êöhJ‚"ÊO0°²¹k¶†éOó 7÷ Žäâ5wnÓEÒÙ •f±-ƒ€Ãåv‰üµÙ _Û`ü=¸Ë9Jñ\ÅòdÀò=OÄŸw`GõŒàrÊS-2õ çrz}Ãíàótýüåéëí—¯.ùoŒ‡Ë›@ÓPøƒ5CQÙÌ"²¦UtMURê—'ÉêwøJ< ‘„³y(ÅL"éƒy‹ópi„d.æP"Zô%+!a©ÝóñÕ•¤ª$8_".ddªÆ€EZüß…ZT¤ ’±â ž]ƒåbùãØv×çͧ?>n>=ýúîÞ­îùùýç¹Jëâï´µpNy¨EA,‚hĪ" óÒfî²D'º=Êâlój9S¡Î ÐÜhm¶ªÉ¥tÉx ¢.ÍÛzSKË {fT;²Œ0v_ˆq” Ó1T³À.¸¦ÿªUä}.¥3“sÃf˜!'Æàšìz·:‡cq|$8ÓªN½Èh˜Ü“ì@µQg1ýüi‹ó €¹]nœË)‡õAgKÉ!-Ê…Ñ™ ‡,çTêÙö%»€ÒE/…ª.Eµ©·þ ª]£!g¯2‡°›÷ð&\HÅy,KÒ†2w9®¹ê™899vu¥8:VO´UÖH4[à VqyÅ¡8¤å@0ÐV9b’K£-Ãè2È€YtÅLÅE¨— Kà*Œu+”Љ,¡ CU)0&‡Aœh耵®ÞìBnÒ‹d$ØV4$äôhHfô“f$H‘€k2ó ¬%–¢õÊGätj  c°²r8Ì“]6Ž?r8<Ç ÚOï4°Sò3Uý¶ÿá_Ÿž~/÷›¾ûöéçßßÏ)¡þA8·ZPw!I ¼kˆiM¥~BÿÝ…ø¾@ø ¿@òm'AÊÓÖ<ú˜mxË^;Í–`j(Îe;”ffQ7#2Ô%‡Áì^š lƒƒ,çXšš˜rE¶bàÉÃ`{ÃÞ“l¨TQ-dC? 
:Ü&ц´“ ¥)mšÃã6¼Œ–T âPìO‰Öt6&Ç1°—œßåÂëX%b¹EDm¾ãžÓ~ÖS¤b_ã‹ ú”‰øJvaTO8¾Ž.ÆR•ˆ+*ÊÐE&wÅ:þQ#¼Ìà®=`ŒÔ-€ qãsU‹$ùë¹ñ 34߆h’¡H~\ÉUçÅSÌjRýù¼xXæÅä, VxñaíJ“b…Ùɉ‡La¡pa¸‹ÃøÌXtâ,Q,ó>ö퉥²é±RøÑ÷Ù¤Oô HðZJòí#X ?#‰`Ú°…húØŸ†2Á8Ê.œG IL´Œ!©ËK)¹†Ï0))/gRêòvËÈ/w[]Å·ûÚu÷ZQ*ï˜,ß8¥fV8­f… ~ji®Ÿ™<Ý2¶ç‚ëÉ’÷Ýʱ8Uá`iUä—ùµ‚úiTy|m¿;r¤Œ’†òï奮_2#Eã–[á:v7L¹½[€cˆsL^q=ÞááÔÒr ëñçˆã¶»-…&ÄJ´"‘o™$-j–ÒXR XÑ_Cl¯(ì/þ&ñ*ñê`a5$–Ky’Ú!‰Üá3Š–Ùwñp\|¬ä†«RvÊAŒ£QÎ¥¿ëE8“Ç”@™BCîw áh÷·Gw1î<RðCMHˆ«®4…s©P•šR Ðä2úeO ±þLfá—‡˜_M‡f†¿Òn,핞ÑÄ¡) ¾f]„“ŠAÖS¡oÁ6‚¶-WPˆçJÿüÉýùÃvùÝýÃõ×_~hÑ]êD¸4'ç “\S$FÉp)EæœhÖÝiÙZ3æ ãí7kM³ÖêgÑYù.ˆ>#ë|"Vó_æ7S›ÂêéeûÅ1_À)‘²OÂØÊ÷D ]çÔQU71Åô—5¶~Vs¹Ä­ÍÔ3aýÁ®¬ˆñoäuò?^µßÌOÜV)cKlÝ<¸”¸•¯à:š›"æú¶qÏcGQ=‰¹î- Îa®î‹t Þ¾ ¬è†GÛÐíaº2s]ÊXâök4˜é8 ¡ë-½u®¿í‰ gMä0Ø”ØT´þ4Æo£#ÐãE[$ò{oº¥eÎß~±w³D•’*°û¸´’`,Y¦CBlî–ø.ØE-ߥÚÜy/Å òãpûJ1Î7•ç;H«÷b6’ø¸¹I’ÿ£É ‰Ò`èv%:…î¼ä¤‹%ôElß1ÑOÎ*^( µÙlCœmÿ[FRÉßÝþðéÏÇãàŽæèÊááÝÑ¿Y”°`Ô¸^ H-kj_c¾ùsŸS–2í´I®c>Ãw Ͱ9ì|>žž“¸“a,ÎI<R/ÏÚ?©.haMØT4Þ·fJEß:ÿ~™¦‘·ˆ]ÛÄj2ìÆWÕíqVÃFÑѤa•”!šëmdz¸vÐâçOwûb2WÔG“÷¼ÿòçío÷·¿çé‹×>ÿv7{ûÏ®?ÞŸĸ¼=zŸÐ[f4æ&ð6\Ó·€$´´'ùÒ¢î‡ö!¬M°+ŸŸÂûˆ&qöª]­H†~(Ø.xï~Œ#[¬æ“Èïf[mBÓÑ—ê.gÀRûZÈc¾‘àÒa2÷-}­ÃyÆúq!oGg÷BÆ^iºË°aH¶EÑ-?a¶¥7af•¢æ‹kæe*²©¦Ä?{ÂEUkMÂ%ÑlÂ%»“EêÌïÂê”oÑ$ˆ²$ßbÑ0 ´™´Èø|K‰¹1/Ù ”ò-§ÝÉÖµÅÁÂÏ‘x©j>Þª@Q¤i¤LCL²!ʉÇÁåúôxÿå·û¯Ï÷Ûën¨ï}w»%ú¹útÿéÙÿõ¸¬z$^¯” ßÏR[HLïvP8ŽòçkäØËYßn!S™Ož{Ô»'Ý™ÄrEæú™uÀòíKçÆ7]„åyFh³b€Ñ¥&.fæÓÜy^2sž 5å!Å®E'†5.»­õwÙkñ¤¬p,L¨áð¶Ü Ö?Sƒ®/6“g0°ñì B­WEÍ)çËŠšßà•J ‰€W:¿Á+/ít«°¬¹³×-+†5!ÁHü É—°#VÙ8"b›\ã773jQgÂŒ)V, :XZMëHŒ› B¯:G^=£Ø:’±{Rª/‰îƒ¤qðµóNü§Uš[ýq"æÇ–$W]ñ\¢cäo(þëBqtÓ&BLÁVg­¶H°¦ÄSÉͤ¹»˜C%æ*ÓF!Ò<æZ™¼BØ-<kz–V¹ùµ<Œ¡o×ÅÛ令²þŠ`ûT ž.G>EO—¿Ë(ö‰;ê`ÈèG‚ÑÊHä¸mFV¾ÏÚ@¬|ŸItchB7Nk؈b&pÇ©ÞÒx#Í£è*à ¦©v Åë+«D·˜ ÷hº©°ÿ£ Ý$âxt“"º‘ûöP/À€œÞ„;!»yTÉ®ªüé…> ê§®ÄDïäNø*}ü¢ÿ>: ÔÊ­ÐÿÀÏSî…þï8 ˆyRîÞŠ©~îh2¡êº†dI…†d-@ lûdSæùBó^é ñ`)ÝÈ6Á|vR>¢©™H=`O’=©ºÈ;¯ËÃæ€ÀMð Ð?‡™õ—Ø‘?=þà]vnǚ춺H‰­Í‘·8`Ž¢›ù†ŒTÅËpîKHrÛ,…5íŠäŸšÉÒË»ÝòW5f·ðý\cã,üå2:›o]ÅœâÁr;Ü»m7…H‹â_?Çråe“Ù$<Ìr>-¡Ø*JØO…²ƒH©ëm[‰²©râÅjÏkG5´™"¯H+§ Pß qØÏ„n¦Ì¸‰àQ“Ty2»ŽÖISŽRšžv ®–œïdâ"KÆd¾[š,Œ†Çzb=Ü@ÐdÓ7èý‡õlÞ;EgMîsª4Åû¬ìBP0 ®!Óæ\L\‹Ém¡„5©fGËÙH¤Š¬¬6£ª&‹ìS}“¯¯pÙ¡À€`!×µ`Ä¡ªÇÝ÷'çÂö—´e¶Že³GcsÖÃ!dvÀ¦‹ó3uE«I.ä ˆK'd4JnU-Zñ(ŽÐ(³ÜÙý‡ÓìQêáV©qäEL]ŽÒ°Éð8-0UÎw:ÁZLÕA0ü(UR+¥qchiú(µaôHøÂDÒõÔé@㬊=ôBHMhl!ÏŒ[ñÖσF&}¼ Û¦û†oÁ¶¹¿»QññWÂrKJw7tõï;⛹äȵ¹¸{8©Èqw#ÿfŸ×:d Ni$#gŸ÷=—‡MîÅH eflc *me_fñʤâF9 Úò‹8•ÿ¼Î/úñúãíý»œgþú|jUW'fuUàÁ¸:…Í«¢YQ†DrI:¼­UF~„o¼mÐÝ`¶ ýËk1õÖ†²­úÝãýÝ/x‹·¦×øÀpýn¨TL™l¤$)1õŸã&¡ˆömüž’ò÷{ýaw¹Ô§·¯JšÓÎ÷ö°¦Ï;J æáÓ,GGß\òVMJ)Íd³Év·™‹Ô¾/ëûÏv]¡Ïÿxp‡èï?^=:¢=ý¹¿ûâ^õo~Ííx®Z7¹—mà`ªº\ùòó’‘³\L粉®íÕÍmwS¦yNCšó’ëÛ°ò#Ye¹½¡£¼»šSEP=B€€x†”×÷Lµ svƈ¥DïÁºªždž`f-.0«Y¥Í›ï)0zRYŒ»/9žÀŽy”Ê%G˜¼Õ(€£Pô]!:•¦Ó^ýñxû¯/ߨïïïàÒM¡8‘¹}@@§W; RÐ èôÎçܤ@@"a5Kßö‘ú3È;´¸íDG¼­gHãZÏŽj±Þ?ºš®žÜ™zþòôç»×ÿy%·7]2?<\ÝÜ<<=œîýd©©^mñ÷lðèik1ÚâXÖâUµ‡§ÏõÝ#Òšà(qû,3¦êss jŠ˜æNvAæézÜÝÂc™AôûʪNöÜ™àç\å¹¾ýbtÿ¤ xà((ï®»©Ä¦Ÿµ`!íF”Øô{öi¾Ài¾=Èž·n©š¯¶¼»}º-ÔÐ&]†I“?$Y 7“Ïž<ùÒ$9^j0­i à­?Ñša+a#íÆ?$µYØðW3šE ÞQtTñ¿¬¬. 
ðˆÙ‚cÁacèáCÊ¡*V†bõ:«oµhëii¿m™Ñ„‹N‰­…ƒ2¯þÑßìòåÿ»J+\¥³ðæg7I²xc”ÅrìvFàr]M>žÖäK*ZJ‰r|0h˜BšC4F*e3_•Ÿ n”éUUþá#šªòýár‹T‚°—nÄ6ØcÕþ…6[ æ ¢ ,´¹qi¿ÿøëåêmÌÎZ¥„ܶMÚ0ÂÚˆšg”Ù[O":W˜ïg?|º¾[¤ Qb&”á¶¡"e_¾¹°m A§4bÑœüÖU@QÑ¥TˆTáRê¾ìzÍJ| «PÛ·F£˜hA¬*ó„Ÿ«õ½‡»ŽšPŠ®cÎ/ƒ”ƒUƮ̹J5…O¢îäw-#®Á³ú›BA‰aÅÅ€º·okŠ ÛOµŽH’PÔ]ÃŒ/ç‘KC^„Õ²ÿhq øËIÔ& ØÓŸŒ…;Þí’ÍLæJ!»r¬ªT•Bº?¿¨²[wVÓI± òpðm¾[ ¸ÈÿB:fB ›e6ú Z²ù*®ÿÞ x÷ÏûœdúDߦŠ+‹Sc &MÑ ý[Z s‹ íøœÇ~âÜ:€­zøu7[7]Ý–°ŠƒX]S¢è™‚æ¾ÃCJÙóÓóâÂè²Xsv'€{FšdÍe;J`b^:V®ßÁÙ+^ʃ÷0i¶ 61#ÎÃFqzó‹¼:€F~gJ¡º {·U$OÅh å €{lÿ½«Û+¹Ñ¯bìÍÞDí*’Egƒ\íM€½X` %9nŒd)’Çyú%»ÛÖQ«ºëüTµ-d‘£sHÖ÷‘,þ¼naâ~ßS‡è?fëlÄñFžÑj>´Y™ŠÚ?ˆý¤díÿ Ö:–ZDrkå[x8åœg³åeç¼s-ÏÖ9HTvDb:s€£œ{£É=¨nþme)KF+JhÞ_¤`ùÌæyöy·›_¾Cúpuñé~ýøeþBI ˜ç+f¨sà9FB‚!k¯ 4£…ÙÊ{ÛÚ’$ײ݆þ€Õl··P{˜ž$××ýµC¥;Cïš)“sܲǰÛ'û p’•‘šGázàØ|ýÌ$D9¨ko_YFâÚ»Í""Ô—ë =>QP<076î qŒ®íýÇëºm‚÷’}dÄEIt œ;ÔŸò2©ÈM úÙ±¹õÖŽ·ÛF—.Ñá¿Ú+[ÕE”5Œ՞ÑBúí8°Ç4i94 Y”€ÂÔؽDÚ*„u»Š1AõB€´V¹iâ-ß[ä×€T)ÓÊHUO¸‚ˆÁ¾’xÉ„={ß[š˜sa+¶+ r—£ŽÐ659ªvÓñrtXÓ^ê£ò¢HÖ¸OÛ·<ÊJS캈àk±ÛÃ`¡Ñ·š¨/×›A%'<ò)b#SaFIºôqN¾Ò¬.“ivjiþX©¹Œ+ò%• ùoàk„ªÐORÅ~¤â>§`ÿæä k‚IØO™Â" ±ßä …Á¥®©“¼ð´÷¶ÙE‚¶÷S‘¯|@:h 9‡eÝXûUÿMh‚L»â½å¿h¢BÞ·ÁqÉý„w›S¦×UàüuRûÏ¢Âyå&æ@Ó j ‹ÊQí³¢»ìËjûÆÁWÃôq Ó«y>—e ÓÔªSMÌI‹Lÿ$Ç&Lo'$úܧñL sB]„µ?ÓÇ\˜½hŸ ¬‡Æí¶mljq\?N@ùq)~L9JŒÙ3ú‹ü?¯«ê@ð¦p³„œ~±C…4z;Ú"vÈ3jݯ ¨gýIÉÁ,9¦¨ur`³Q%‡,¥–Í¡[ƒ½v‰™¦ƒ¥hBXÚ»8ÑÄL…Îÿb2 b<Ä ð‹æpƒ~Œ‹üNq=gf Sò© ù'Í0Yô )ç3¾76Õ “¤b}Ü@ŽM Å‚n²£6 Z²‡E „önû` °%¥Õf¹O®e˜bþ•aš“aÜ,b ®ŸìDÿÄ¿í ÍF4äo6òùÜ^Ö‚ÙþŸ¼õ×ÝØª¯*:¦²Ÿ?ÞYo\ÔóÙNìgk;•/†ç¦Íå×›¹5Ý~r­¤»Z‚™X \°]hós仞Z¡ýBÖWGb>@ÀùÕìBqA"†¨´H»Ü%ŸBì0ï9Q¿yÏ/8÷Ç Ë÷ƒþõÕžA?öô¯ïXFaæìÛ^i¯ÁZ($‹†Ç€µp¬í‡¹ÖOrkÓÄkôÂÙœ®‘IÊV6ƒqÆ-šúÔsVŸ|~Šúùê)lh;’,úmcñQÛ!¢1¶S}<_ã‘Ä2Ð$ãaûÇ2>@äî!a1$2M¡D:šáÐ6Ãýš[Á›)º+ÝΔ¶^9C.nJñ*ß¡gB‘z·LŒrBjœ)(Ã"Ž É3B3MÞõ2-ÎtÉ„LW²`T)Õ©ßâ©Â7qq‡ÚàÓFmCBÉ›i “Ƚ¦¸G5užÚ°•#áÙÇòHÐ_K,5¦_KžLU}‘Á2kר¾Þ›”W™åk[t§èt·ê®:ïüíÅÕ½_â^ÜÚG}þs ~Ÿ‹šó4_#8CyÆíˆçÎ9N­Ün#¿VQ„ÛŒn®*…2ÉüT ˜«4¤©tã2V‹.\·ô2¤› ,#3%Yvn;Ï:Hy9÷@ˆ­ø@ìµã”넨ÎËüà ©?p:À¾„*ÓIø &…“¼Ì-GejÌ=`èM˜ÇŸx‰Møv÷´°A¼”âéc†»ÛËEºüÿ—Í^àùj«³™3—-èf’57aƒ¡¨'3ÁPÎ㻦J$ÞØ˜°z%Aê© Ï'²=I® ø!¤'P@‹ã¾¿5®˜˜S±$zä·{²ôLQãÑl4æît!ôÆ®FíÛRR–!íßüõ€»¸ëîúüãÕÍùÅ;ŸW›;þݿί $ :¢‰‚ ±>ꩆ 9”9wª;›(ÑV) Y%‰ª¹/¤8Þ$»“nï¿I¯ÜûK£¦0)óà™ó¼ì`‹vÎü»éåVÿ⪡Ø}ß|ƒDä_&€<l^74Z«œ#%]¤r ÔÅŸÏ”}ÚÑk÷ç'¦wê¾<EÔEøŒsÊŠ½Š’cþ¹\ùl(idÝ—Ç]>ì(¶C’beñ7Ñ5òå³1MžT}hX¤qÙQïŒî[1#}ùœ(Zzb_>½F_~\.‡ÐXlQŠßmªý’öW扄ܵ@§Ä”‡˜µ¦’ êÂ5EŽ`œUjq7e!ž|Ñ °g !÷ç?‚ÏŸí‘GøüÈ«L°8#d ÑFNN‰ã´Ü¿9Ó–%‚Ç8ýŠN¿&ÀLÞÍm‹}€_üµoÆ0-²Ù[Û„=’®@|«ãM¿}Įί?´rÔ[H’».ƒ¥b˶g(̧‚C[EQ9Ñ«Ù*ºÕˆpHËB§ýRÇqlš0'…Ì2m½óÖ[Å<~„xwuzŒè˜qÎ%:)î|k žósoÄ<‰ç4Š𢳗¥w“ƒ‹¹ÄsöÉÄ)eº9¸¸9‰jÓp`1Ù€ÒC:2ŽO ã³ó—Ò²…PÄpê8e!ÿw¾€¨*¬ŸçL-²³ì{áÌùnŒ, @Z[s—µz#ı †Å&Øâl×䚤ŸìµQC¤8Œ[ôH½ÓO.g*]I»¦Ô9ªÜ4pã+iÞ’ÏeÏòº½¥ÙÛ&çSfœ¢fe* ?•ÃäÔÇ8LëGKëÑ‚kå/±ý7È©ºB©Ü¡ïŸ¬õÄî¼>witR¶…E`Ÿ.>>>ì˜Nàw{½¾ø²²÷ëO/‰ü=oþàÝùÃÕæ^Þ¬?îþxó—×W5õÌzæL8g©fLÞ™®4¹Ø{žJŽÇ<},c·]ÝN¢=Ê(1sª6ù› Š!ø@Æmšü±&<1¥Pïk?s±(Ü%!ÇÇzÛ–jÝ×ùcãWO“IûpŽz†_ÿ(œó#&rÌZ¢}INæ?(Ý(A½É`0I=U‘"GÊ|“p+²Q/דMÂÜŸmB™mÔâ‚Ó°Mú™ÙfdB#â¢ËZ¦ÚƒirИTÓÏ2„l~ô’¨(J^Ä&Œ3 Ò3$3ôJFMc 3=„Œ´xCìLð ªvT?Ùòö.°r1‡Ò…!ø–lo“<ŠÀ ±íaµÌA%Î_Ö‰¶Kv`Bò8ZÓ|OÚáÃc9MÿÄ¿íW™)hÈßLáóùúÑLù™ø_àü×Ýò㯚ºR²Ÿ?ÞYoœª“×ÙN¸gk;{û«´•7zó½Dn4&5çìƒU‚(1gˆ8_öˆ¤ÍGNfÊöaaBÿÊ3GÑÞ>å·?ÿõ¿ RŠo>Ù ~<¿¹2«ô×=»¸^›pÍZÎ?=~xÂÊøæ/oÿõñ·?÷^ô>f÷Ýþb-J'Yô>æÕž-zçC‹Þ5œjÑû˜wþ‹½Ìs—~»3,>+Y2Âì!cÛGÌÈܰ@0 ‹¸xÃ`½a0²ÑdF‚Z© ƒ†ã •Íw —ľlÄ‚ÁdŒ!ñpüð»cº·@>ÉJÑk|Æ.xß¼±2 Á"`Tꦹð·[›÷Ó, æälóasWˆŽÛK¨,'ØJø W_®TV)BˆYçOI¶GXHMí+³ ÙWjQIÂîœÐÝèãC¿8"¼ðˆôyÍáqèy\ú¼ÿ4—dÔ:î’lì&3£ô:­¯aúù»ë+hþçööî…wòÆiW›Pç7ãâLÿõÆOæúêòÛÏ$ü,¬9pŠU?#1·ýØòœéÁ×ü§¿í›õ.(»¸Zÿóêrß¡•ß ýgÏØ}Ûî1ë‡7o?æ|¾ºóøáüã›oYm>ÿå"e{Žb©Žzš‚„¦ fÙì{»§:Ú=5³ æ Ž0 åªU”.ûŸ5nùuö@rØÄðEßÔ‰'„ñ¾i+•Ô{&¹‹žJ,W^¶ãÊM2X#·g‡qCk[»©¿øøuóqOæÕçlŠÆ  4o˰ZDa E¼õƒ¨ZÇ[–ãë}¶_ŠãÿŸ6 s³qn¡±ýBÍ'3Ê95iŸº”'Æó$ï–Õ[Ž)R]o\¼Õyú²qjVÍœZj­Ny Ð{·Iq—ôÙ§<°s±”IfcÓ h®Ùî="³Pd‘5{‘U‘ÄÉ„Ôì=zSޱïŒþuñ.ÑòbäŠã‘‹³"I=ð³8 ×±4[|ðeã‹sˆ’º8t÷Ö}É„” ‹L@š—l^ œ½Æ®Úx“øíÅFÿ›]BWŸî×_ö— 
=20'¦Yw`sOµïÔßöw­s©I–lçg1Àáh€“談“ÔðMÔÜ.­âÛ®pߣ|Ù(€“(œÀ‚²S\çü%•Îa"6?ùxI£¶¼4³ étnÚÆyøbº¼y:Óg_þ wÞß\=~ºþǺ»ºùðn}ÿáòãz]J0„Y~ÙŒß<-‹*æ:b3~qç{+5?£C}µn´ÿž´¾zWª~vq~f?´_u{ÿ¶ð³³DüŽÞÛÉ¡³»ÏÿÞæXÞ]]óìàò"6ìÔ,8äþùP(zXö ±2±ÜÀæq”Ç9Z¸în/‡f³¾±È‚Ÿ¿¯}þ¯grñî21óû÷gïÞ½¿_@!– ]^]Û/<Š K_İa 5ø…]­])õÏþg.{lG Óùõ¦TÈÿNó[uÎra5×Ô¤¹´0½ ÏšUçNs!㜣ù´8æ†/i=þßÍøÇ‡´1‡—eÁ{ònR½îß5Á¤lBùõ¦þœûŸþ”¨|ü ˜Š©Pn\´Zïëë ÍÖy`b˜µ×$ ¥”`~ÏðóÔýn¯}¿ÿt}5¶gøðÂ"„A^gNµùî²BA•Ú݈I¸¸Òj ˜áoM 7'ÅŒ0ôPY"8õVºM€ø-2|»-ȳ;©Ï3`×S‹sj³À~«ùÊqrÃÚ$IÍê<+R½¢(Ö¨^%ìóD„Ò%åP*MØÛ_ÛL7ÆSŸÄ½DB{öVVboûdIl€vÒÝr cZÏ¢¯ÿ^¶^î(tÕ(k‡ýq¸ÂH1å¾ûã< Û%ÝžÛÁ4$%ê‹£ÊspÔLJ,²›<2å°TZ¡æF½äs,+î ¯8H TGÍœ‹¨ù$ƒ&{ØÐ¼.0ÇüÔþ ž1Ë™Ura®LQ˜à¦ó¤¶3OJó `¹mÚ9Ú÷è華N9s"òqAß}NUµ^ä)¨¬ý…Ip €ÐWqœ{ KvŠR꨸ÂÒÑO’x Ú•àHæ4–fûÅ9v ÖиÛÅ~öÅ7žÕÓŠI3WY“¸ôÛ(QH(iIšc¬*ÏIqÒ8yýÔø“×ÎQ(j-ËÌ+L¢ÖÍAJUf‰´™—Ї OíC©vö¡LÌy;Ze¿¡9¡—éE^RB;Ö‡J:ƇJ£}¨i¬ÒW‡Ü=÷¿»y‘ú§²Ö6]Pnz€qŒ.}sxã€}»ž0ŸÏ™N;ÒI~ÎË ç òG]ßr¾1·¿dØèv e‰áä·Iº H(Yà5l˜kvuΨxQ1 ÃÉÓ}¬˜+:T®2³uÁšC.©ú±Éåví§¯m3FÎÞÚÞàÄÇf7á­£C(šKEgöÉ춨6È^ñ Ïfzbê’‡J>å¸#¼®á¿º»^oë÷Ÿ¯äŸ”å Iyš"&•I|°D€°¯aSÛ(¥à¥˜"S“"‹%Ý2ÉaA-‚ʈ$°VQ˜‹«š‚j”ä0DÏQ§UCU?épK¤þp,†D‰È<69ˆMc¢<ê>•$ŸåÙfz*Y¹ÏMBRŒiÛ¤ø|'Ï»ëóWƒÍ‰ÞŒUÚ®8 ×áèýlE=#œ^Õ9m[D3¤TOiSWóÎFx€ziÚ.“½ïFä× ÁýÍãi<†Ø¿˜Ý—•GÙ“n¤%ͱãËYGÝíL¸Üíˆ.=U¾¿°·œ3ú PÜÙ!žãÙ‡–õÄñˆqFä ‚29wÑX”-ᛑ# Öá›v#²ŽÁ·IµT™ó$µFàm¯ì'Fo$<zÿ?{×¶Ç­ceÖyW‡q‡ùƒùY–bM¬ØËv&sþ~Àî¶UêfW±Šd[Žý“Ùª*䯅À”{‘Ý|0•›°+z×€·Q”¶*Æ60)©ZwA#¸is×ÙþÓkbËZÖ¯;%¦íBÈ-˜.÷ávW§øke ]þª{Ñ“vPJÖÜ”~ùeSB à}è—ßµŽ÷µj#ÎÛêýF´°…}Ž}Ÿå¤e;UVRe¨ñ.Õd‹—qªó:û•û³JáÐdi5lô$»U_°ÀNR¤üL‰v¹Y±–ò³ò˜I~:·´YƒqaótEŽ ´ø}86~N»ˆUî~倭«8 nÀ*’<ì:`•Ôb•êN<Ȫ( $°UR)õ>]YTå¯b«­ ¨ÒÙ2Üp¤4n\„ñ<¯ž—k ñÓxÎ2tgí5q–Í$Ø_r” â(›ù†)(…qœd3ß0V™cX¥´! ’ãhsw[ÚÁŠV8VyÈ\Ô Ç*Ó,¢U‚" Ùdi5he´ƒÌ:¸±ønRÄ&ÄBÂá’1QÑA’˜g§/ÍU•®“{Â÷ DïI™ðÒB# zÏ›RŸ'¹D}Îö/ÂÒòî^†% [¦ÆÇ|½é›§;WÃî82bE¼ç€â,IˆÅâËÉÒª`)íPyTÃHX’Æ;R e÷{Mˆ‡´ñ©KB% ¬¦ôŠ<ªõ7_/ý,Vµ1~Öú/› R13ÌûZÿe—|2Û9Z‚9\ÊVðÛ?bËÜg7È9c–M3Ž,žÏ8R*ñ_„@ˆiìDIf/‹+…brk²–š Gþº±\|ɶwºÆ¢Í¿uÆ*)@ʰMhDº¡@Y5(Sû4†úykèN¨d)/¡]Æ"’Rš`²®*(B‡=ÎÅ•îx•Ò* …qô5Š ‹³\ Ý×+2çïW—ÆÜæžÉ~à»7ùÚLo>½yûöllh€•ƒk|ãèDa=ú5¾± +NØi5p'®Arוýô¬îs!Vr|{ºÿòîþ¯Ï¹*gUVPh”4Ä-#4ؽPdÑ•5|Û¶­TÏJ±%?e–Ýlp'×L› ¯˜ŸH§õ %$ˆÕÔúL=-´ît.Q0’|çªÛß]þŸ7Ý«ogÛ†{©" ¿»`ÀRe&Ì q~œkH]I!%Ö‘BÆz’Žíà=ŽÒ¦™ÌþòTTo8&ÛÚË‹píXJɲ å@“Tý|”^´FIAâ•Ñ”ÆßH(BB¬ç¸ ר¬.öÅ4u•4#Nax²+E$-*Ü>Æ“)?ìXÛ©Fã,m׈j°ÁGÐvIÈå/4Bw¦Ì©È:J ¸lÿû6蟷ÞÝÿ–Sù}þ~Ê€¤,‰Z”)öï&q÷&<òðv®¼×q•÷F¨+²‰Üñ›»ùýÞ½;ŽP¦oö.Ðó‰/¦Ô[m©ÆŠ‚ýýæsä Û‹`÷À-#º8F§œÜÁM$„”X2BW[ÉU¶ÒÏõZ=®Ž‰/¨ÕBH Ôf yH0ã`×ÓkàýüøÙwçíï÷w·oï¿<Þï}ãwèçÕqŒŸ†ËJIæ‘46XMm¡õHîc¦Q´¯U’ìÉøöd\*j±$²`^]ªeó:[—Æ(D¢h\o^- »& 'ÙA£ËZ²œS‰0Ð5¥f®Ë×mTehõ`ûÒ¾VƒÊE“hhËãˆÙó˜;†!þs¦IîΙï94á2Ɇ±ó®å¤a5÷à±EÄEÊ·~FËY£|ÝM‹ˆKÅ)Ïé‚·¾_%pˆ¸oÅOÀ&¼e°á· ©D¯ízUüY§þí ÆQÛÆ1Ñ~ý¤ˆB3÷G>„µeyf=áÅ£ÉîÑb®!»á'+«ë5ŒÉ?ŒeE¾Ö<2HÐtöl09òAŒXòuÜ0ZŽºìŠŒDBt½J€Šcy¼=ÏŒ$fo€û\ðo|ó´šSë½½ïO ÖUcŒ#RÌ´‹‰MŸ^ ÿ· î›7ÿ¾ùøáíº(šù²"A”–ä³+R7„Ñ brõ eÝ?•\/NƒãP„TQp¨jKåeù,”ìÄDF]2Ò”Ë5éÓ3²´A–ÍHŒI†§¤³³yÎ åjò¯'ZbþÕз;¬îþ‘mÙ~ 2.ªØ]Ü€¡ ‰Ç 1‰?üipÏô›Ç?óyÝ:5nWB§¼­½to4—ÄØ†ÞéÍ‚ë Äð‰íÈÄ7 ÅPù:R'(vó¶ ‰SÎy‡&$†Áì$.qùe=©A §¥>Ý®·„¢U}ºpUŸn`\V°GxÜTž¶@€bH¬…€¿ßÝûþ~óá¯/Ÿww>ú?wùç\Oïãó4ôüÛŽÏY{ûí?;\}Ûë§­—òq‘vnÁtB„=¾Y¬âB€baæDÊ=2rù³ƒûëÜ9ö_!kÛc¼¡%Ü]YFÊs©~Ú=ææ?Uí±Š"*ݲM¥Üm¹ÑX·Ç˜©©RÀÒ8sâô'Ožn?ýqÿåã{×ÎoŸîß¾»}ñ£›OO?ü¹nŠž©ük³è+¼ÈcmÏÊpžŒÈÖäoV?×1íAª â.»ŽGÛpÅ?‹¦“ëh¹¼>­:’ùDŶ(^p¼ë*ˬ't÷›õB]Y‚®õHU£‡¡ÚcÜ ã”é ñÚtwG Ú »”S¿'Ú<ájЉ«Á5âÁ;K›F¬?=€;¢»ÌD£Ü×⽨ÚxwûþË».\T]$™„Ü €o[‰_§|çb»oúíöe1_IZ‡/—vDShKQ©n*¹sÓŠäÿ3¨än<»Õµï»U4UÄ<ÓGÝ• s&¾ ¯‡ÿ‘$Yå,m ÿÃdtî*ËùÕg)öaÆp” µrÀV¿ÃLq¸»@¬Pt˜9dÙ¹ŠsߊˆSÿyE„£úŠˆ+ÀÿP‘ ½ Ìà*óùVõ`‡µh!ŠL5hhQžžú,­^pÁh_.0Ðà´ Âv$]=OåUŒã«ìÀî¦^Põš5w í‹ßi†ÅÛÛ Â7Ÿï¿l`Á64_¬[®ú%÷‚iŒ±ã‹¢¬zúfùÒK,V -‚-¢-—ËU'’é·’ È® ·`<Ü&+Ãm¶WS¹_†º^÷0Wñ÷BìÂß{.ªÒ8°„U’ hFt-¥äQCˆö36Ôéf~I·pE&¥˜[Ê~Ü~‚"@ï^Œi L!søLŠÅØùYt}à9“ 'L¸ž;œiKÃш‹¬’)åy ‰¯1‰äo"ØëŠ3í4[‹º9²p ¤jðs°×ie¼¶ÜuCp—ÛÌ0ü€ìeÜ–<›Èp¹–Êa›!Hq„Ô³Ð:á¶„‡¬Àí†sɹ”ƒ)I  
‘~ѯWЯÔåþŸ»&§áÄùZ;IYãJÿˆVõƒ8Ùcanò¹™pKG^2÷2‚#@s‰®(QbuÔXv‰*À•ŠÄN“•ÕÑ9Cžï™j霯 SÓIb2È iTzI›бâÁçÙôÎþÕ°ŽŸñ_Ců2<æ0ºp_'!%Ó>¦«®¼Ç„kÔ÷ ŒxRue{ÈšÔ åxuÁК+,Z<4¶FNæ¡Uršº—¶wÖJLqf†BîÈ´zÿ•*¼¥Ùa G…¢šH¤“ÿ¯!sÖ®I«û*U"5¨àøV ¥¢/¨à®l\¨”ÔÕûçPÅ_ÊR_ðÐ`J/*Ö;H6*v€oƒñȃØ'Ô§¿î¾8ïîÎþîÿnáÑoO<ï“ï˜û·û×~¹úøþöË¢ø·?8+ëýãÓã—ܱééQó&yv}#˜[Ú8°›ß$î?­óÅÌ@ƒVÚª]|/£ïe’¥R| DYŽe, ÅDÚ] EÈ#÷0®‰u·Zœ˜ò<14·×D?žì•vÎý÷¯S<ÈàMýRyˆx ?;¦ø~äÀ‹˜‚pd˜Ã ¡\Óñ,í>˜b¨î ^Tä8?`”óyÄîb Ã(y8u¨¾¹ÁçÊæ®×‡u±VÓýÞÒÐ-„qôBÿõÒųûPÉ–¶ôÝBøsl¡3yqå\`h*_8Ƚ¡(9Ðý.5»6KJ®pm m!K IÝG…@?»kC¹ÝMÒ²kÍdѵ(º6iwrm(R63k\›€,W°K¤e»fÄte»ô˵齅ú×X1ñŽ)ÓG¼ÞíÞ~íîãªÛÅd c­…miD 3î£*…=Y¯ª©ý¾P°%:ËÌL¡mÏóø®Eö‘‰|:àûþ³³ÖNïèv8SÄÁøîr&>¿|ÎKä„òt‚W©jÍWáÝË¥ èpQ£”K€›n‰úíá.8 „Û3òÄǧÛßïo>9θ•û÷o/ÿïܽyKÌüðpóæÍ燯D‰ïQÂCÀU8l!mWK Sˆ›è|TüÍk}öÞ’ìÏy1FáexÆtÌxÍÂ3”‰ƒ¾Ê­:[ÚY&ë[‡ÎÎrr–…†¡g¹«f/QR„1Éšküì×ÅOEsD7ýmŠíÑ84“eåså°c, CÅEºz6&UƒÉ€«›vx³ÆkLkJ[®ú¢a  +Mk c.™ôòTØCZ,óúºûÏnÚ¾-¶‹=t ‚"­²‡=4[4¨y¸aöÓЍ%ƒŠ:N¶¼Ó§îlä̳°{Õ‰x<Ï¢=·Za§Ëå;ÝLé›àÊwº¿.äFl£áù•}ÛHÁ!¿qº0‚b×Ý©óæY¦÷K„ Må”§²òPÌËùr§±e‰ê[–bž¾C,‹-KѸÂQà"ãîtiu=K2¹@Zu/µ¤¸Šs%qxÓŒË1• ¾£t§ó^¾*öë™± p…ž™m‰_4Å€ɺ¦˜moŽ@ñƒ±ºëeÛ[/·µXˆ¹¬qû¶Îý“·Ii‡¾oŒ0¸â »ö8ô”È9Qã|ŸöOšld¨œåS³Ã|ëÚÑ  ñv#m¹ûpCÔ— Ú<¡Øl£­ÒF{@´s›!¶0ý&0V“…»Ç¼ðQ̉ž¬¬ÂDï¿ Á‚Ôëþê!˜¶`€Ž bÄ󒯼ää_¯\_IgŸyL×7Í{î³o.T—Õž_Önî>Ýe¦Ðd”ß7E1di5Ç‹ï›3Ä@F¢m˜C´¥¿1×}B3èH-è`Ì ¥X:AMA‡Šü^Ó¥Õ †i&È]…:U(4)îÈ4»òž•!±y$Ó¬7®ÕÉŽ¢[ZÖRP[Ô›k·tú¼²šxN²Ú”_ŒP™<£<_Só<â$õ“Qü‹È7n“¶%nÐ6GKöS±µ­Õ®zКBªÐ6c’EeK(%_' «ò d_œƒë<ƒ%­UxâhÏÑ wtùÔ )Ë5¦¢ªÒ÷Šú+HúþAÒe×D-pK%C~D‚á~5‚pÁ¯Ö]>èéŠt1˜^]L5÷ÙKëhúbKÔ‰”±Å¿Âì,‘«v£+ áBƒcSi\¥Pm¿¬Ü!í¾y­U‰W÷wßÒïj¿*1‰]ŽG¦e5Í-™ê@sÝNØiËU‰9Y5ƒW3…€¡ ^]ZŠR‚W _rýASÉ´€W{mdõ>ÌÁ×à1¨ kŽcé}Ñar&ÙoòMŸLΠ·°¨ãw¼2ñ.T)˜JV©”|ûyf§ƒE#€]©`j–ýN–ü&7¶K9»»¿»8{¼¿œG c¹Tùö¬¯B]]ÚBŒvhÈwØ^_+NC¬È·/Ôø2o»¶÷ÂܬZL]Úk{µO‚98L‰Á†Vá0yô½+G&çLåˆÁ £/UŽ\ÓݵªÈb¢÷m·…WÈA=³ñ«ò¢<]ƒs޻؜Íw]î•ÖvNm+ß»ßîno.¾\>Þ§IÁyK€œèrá—áøå¸i´8ðÌ8Œ—É«þ&›ˆ–iþ*­¤M²ó9Ž®‰tZÀ¯a‘ÆY@ŠDÁѪcÉÝëö&åÀô…<ºª Ò”x1Vm¤µ´~#ír\8¤OFe¥U]~IßÚ gÓ?0¢ÉÎ[HEkÚ‚’—\„@º²¥ø–ùóX«(†Ö`­§bIwÛ^ÿ2Ôˆ§ ÖÂÅâ‘Y%]ÑìŸWMß½¤›ÄŒ”[ܸÆåæ¤kdß/¢é%’gX•Å0kÔ5¥'ß—ñëÇo·f‚÷×WæÎ>^~:¼~˜üM3—w.æFX=®‚×H Ê Š©mI<Ìå9,‰¦Žò€hº®©8*×n9†\Ì:DåÁ^&М)¬´Z£¬:lÑQ÷ rA5ƒ£<˜–¢k·Ú4z¥ª…|õµÛªcŸÓ!DC€åÔ5ö/Ú“OLco(­¹h$2\./\fÌ&À«Ù<½Òt̆ÃÛ³yz×y\U‡â¸{E\2Lª–.嵇Е‡ìiÁAqÂÇòÜ1oµùìYÐä³*:¨Ò+yuÊÕ÷ˆ­€L]ß{ÄQ†¸AËçÎÈ>ÙÒýÀpsB² (àNÏHpwûø-®ú?ÎÄ‚Í/ø‹³øù¿|Ù‡Ä ëX š¼Ó³ÑCÇk™ š¼Ó¡€F(§.ÐZ´TÂõ0çjaŽí!À¶µøXX®b èÔùl}còi5Í¢ã dÚ ÷l”júì}0Û3ù²šü `*Ž;:u~ /G,:x?£Ï«’"DO2 lªxCÂûÕÿ—Œ ûÌïÿæé˜oˆ¾Û˜ïþo^åš*L›¡Ã2(ÆÄ¼÷ÔûŠ —Ÿ M쿞5i¼<ºœ¤•‡à°€“BÙ[¾c‡žËJ.Úw¡¹x‡à,½ŒUñÈñÞËD}ÎõLDÖâ%Y|`KÄO횸3}üFÌ.CŸEìœHÌBÛ%6ؘ>¾1|ôÔ±Dé‚ÑìXÀ÷Äè,?…Y§¸ûû]%«´Î¡ö1ë¶Nu½Pu‹È†,ôCà8w¿_µàP3TK}-ÞÛ_ލïË è¶Š´Çt´“d#À'`f85à+RÀ‡Í^ó=À²•8B‹÷šn›©j<ª¿/ `ÔÏ2,hjÉ—î ¼„®‘|"|»ýóëp{÷ùÃå(Æÿ»½/îø9ðS=6ÔпæøÓP\´‡N)zžKpH¬G ÿL× ühC‰ 8ošÑƒB áèGã¨{ÔÐ&tÎ/¼ÛIµ??2 „Òpàçz ï®M+"û73é“Ñ òÒ{äjUÐuºª]:å–Qö¥"x7|}usõð;Œ `3a#øå:¬pàã’k!²“åÑÅw•À¼” ­êg)8 tvçéT’Rv¨êOíà5 OM $ѪèMa%IÛÜ@krƒ`Æñ–³ƒg°ÔÕ@¤Of òäýßKf0Ó³‚nÝ‚^*/¢éý¦©çÀIUJPðškÊú.£V~*„ÇS£;vô( ÝGüñ8§š‰cÿ¼ŠäUä‰åDy½õ¼ª›¬b]B™ÌšH?#À›¤ËÅ×éô#–‰‘U´°\`+¸,‹ãD2-âkrƒZ¨N ÀvL{‡×&fÌô¾&EI¢¡=°¦Ü?„]¬™*‘;TM`Àt»D½lþ¸úšŽEo"è ³>ð’² šÉ/ ²Y,¸f›lD% W”4Ô(B®ÙŠÆNHM*öÖ>ÍŸr»W44K(Æãòx§ñ¥¶%W¨ºãta¥Í* ÈéÕ@¤êÄ-¾¢!þàÄðŸþ¶-’¦ZûMßUû×ùÕƒ}Ì/f²¿¤Þ÷ÿÚ6Ž?©`ŠÊ¿ÙŸ?Üý}5âó½Iél+µ³+;KûóÀã¸GšHFpû8R{ŽX‚…3éþ=,&òJ€°$Ê ,NYâúÑ)_ÛN:DˆæÙŠIº*ÑÍÒ7NÙΌɧÕ,¥ÈâÀÍ”èo¹‡dG§XIZÇÚžÿ*u—lÞ¾šT:'’Iü˜.5Š}Ra¸ÔtÜ®‹6沿Ly\‰Ä‚†ˆ1È3Hš>cÕ¼’¢$p¨Ÿúle ‘â1߈ÚÀW…J_•ÌBìüXøV6 ³ )šE¤óýþi5Û²‡±åPÐ^ëÙ˜ïî!Y_%ãÌ+«4Õ÷q_5~µv÷UÛCùÌUúó‰ÏÝÿäAXÀƒ`9çO„9< @‡ÖüÑÇYÜ)‘^évf;àRɳóÏŸï.?Ÿ¯¹ªIñeG/eOX0^íÕ9oêã†eû²Ü–”B>RKgWÅA¾äð4·*d"¥¥ñ­…YÕW”ZSTßݱe=›EÎE9éªXÕ!U›óë#§`0 ªw‹÷†Œ è;R„ï÷ÇàÃfÁ©…²Zÿû<½ç×ó¯—Rðx¿¯à³½èå,3¬}¶¯â³lðrPÁ)ò]ă‘üFþÞ´€pJZÀYUƒÊ_µNñô“àŽc¬-SÌàß-'D;È‹««ã#ü"† ði›TX°reÂ*1 fÎ¥ ¡`)$²­cl¾ÛÅ,ÃÆ÷/«ÈWÓ[QÕ>ã¯NSÀnd„Îþy#FÞ/’¦OŽâÈêM‰u §…'È7·»Çs=Î+ðÁv&qdš‡&G>‰@qvxôÙ‡2¹Ñ|"‘êU¸Á!.aýGU‰E ÇöcÍ@Á{àÄUW`{1(ç ív_VUè2gÁ 
àÕÀá]:u~ù®Œñµ3p$1æâzHëƒ-7ÑV¬ÀU¬,_óï´dõñòÚ^â5êDo·Xµ9+©#V9Y’ö6ÃhÈñmrý4äø©ÒC…«Y´,ûLøJëoU¨úVEÓ’Ën»štÍ­àË1*kþ²m÷eU¾1x¯áù¥Êô!ÙK•”B:Cc®¾EõÍ.U'V;ÕçÎÁ¸`œzž»(÷"s<ÞÑ}ôg]Ýqô¿loÆÙɯ‹UTq$_æ•“ù]*ýøeWÞ’¹òŽÙˆÏB+ËD¤ñ9ââ)´$;;¿û˜ò•·—t˜6ü'¿åž±êÊ;VÁDÈ|âÚ𡇓ôŒÎKìyXon¿^YÀ{õõópq{wy{oÿq3ið»{¬Zvwö4Âø¡ô7Ì\ÛæÝ¯=G/¦©Û(ŽBDFQz[óîM榀‹ëóûû‘rØþɦg®nç­yŠÎÑGj%^©)PR^2ÖÈÎÛ?]§-z%.»Ÿã=ÐN†“¦_D‹}J‘ 0c³§@vÄñ»¼ÜÏm̃ýkNo±DZ•Æs÷i»$fدÿŠ ÎPŸOºVO*'Ø‘›¯Õ«‘ƒº6—o™û*`ˆ‹ m‰|Ü©†ÓQœ®£6ÍaBÚcàÈ—ïìSE³ˆ 1˵>•UPH¬si\p&”줶¤C]ïì#9Í`‚)Êsô$§ 6mÎlÔ¨ZrH·`&l™ç* Pá%C‰Š@æÒœ<2nF¤ÕZ(PÑîÌXhwÞ>fK0É6‚ 6œ£¨30£hWeÌ×ù:`#g—‹#LS‘XŽÇ>¶¥ÉôP·Ÿ—|5~¼J"ÙÕ*:·5o  ™qÂ4Ú Â@úo/Н³½èÕÐrØL½Ù)¬Ê˜Óàõ|§hB*ŸèL—x¼²{„èxYwÏÙù9ÎÎ¥ñŽ9”îÌ#’@Î×íÖÄÓ9`ô©§uާ+K¦aç4£ƒä¢cSS ö šg¥ðM÷&k&:öûÑ1ñ &è¾×y­[È­ºÂUC$j_6UˆÈAzôV÷\Óñ¿ý·èÕÇßÃE¸P9ŸØŸÂO”ëD=(ÉT-6Ó\¶¦ / FÛŠØi_¸’ `ÿÂ\†," Á @©7pЇxÓ·RŽÿrò15+Ùpˆú|'ÛôëV²™ 2B˜q7«œÖ¿á*3 Ežì³ÑQ§¶¯˜ÆCtd™G…MDŽ¡l17ï1ù²šlöVäÒ‘¯ï­P[û2žn¤H™‘úÄÍ]8@Réûš6{…wÐê~¶zÕ¶zµ9>†s‰ìÄ~]ˆú.C·”º› ûé¿nïþ*‹®B!öÔqÔÚ°TVìý6ƒe½z Î/¾X ³)oŸü\1ÏþŽËãÜÇõXèÔ(rHt KhÌØ²;]p?ËêÒ¹ž$ƒAK{±Ô“”h #•Â.Oh¶W“ùSË„½Í-ã•*cY¶Ú9/<ûºcî¡=Rî™}bëoHsfe¡kši.wº©ÄR«Û(vÈ/£~’k£J@Lê8ž8AzÏ>˜˜}Ì”µMOi¦ÙC÷_°íeG¦„«–‚½E××fzÏËøï½ /l&¹ !¿f•uõ¥z¬ë¯ ®yM92ík vˆ5½ÓrÓPó™¿\ž_?|i2¥ÑÆeS\2IlÙ‘¥{T^ΓùøeF97ê°@'–ühr‘Š~4KÜ8ùÖ&nÔl,H8µeì=yœ‚[ @4-˜#}îC]Z1¸ÁëïU›v àú-ÊmOH‹:»Í#QÓ@Çôy:Ѽ¥J9Ð 1«`AN»Bó¢lÅstéâbfºR/¤†é+Y´¥œ¾,$g;1§"i“¾¦¦*çôÔ©Ø=h4›Îñ&E™#ãy’4 ©ª9›€Ã ä¡(1†ŸQ¢¾ä]²¤9s ñ•¨8¿ýyqöíΔ~±Šñˆì£˜WXƒ…qÉUˆá‚:øÐ€ó ´ÚEŽQIÈm³¶£%ê˜(tJÈ!ך3‘M› µ†@ ‰gáfÁ.Êg2"õN®“”sjeB› ªÞ«³¢«bÛ Ô„mó(&T§'ש³‘EÚR‘º‘|ÏíY{-ã&³ûÛëËÛÿ<£öy".Îþúã×mÇxš—qœzgÁ­™Ý!=TZT´ nÕ-¹z¶à):‚™x»^v­À—Ùâ–XÓGP Zísýá95ÙZhæm!œÈŒ¹²‘TVeîß#·åy±CË !Ñ$a›½3ÕW>¾„%ÖR¶žJ.v@ä0„ àæ'[I'§ ¦à„Å3Â9ÿšk{¦|„­)Å|‹¶C1ܶ(‹ù;Á6Á|;@›VùB¾tsÜÙgº¢MK’ªI¹®èÔÀÅ?G^«G^9ßam|©¥]ýBË:o¤FZ0•©èEœ“¹­>1cS$Eߘy¤XémƒmÔ¾ÅÉüÚ ‰°Bz‚Û__.í‡þ¸}|0óõÊþºø”þüêÛ·ÛÛëRsaá§S/áõÕÍÕCš2á¾Õ%Dp‰\b*9ÍDÐ’ˆô:—今Q)-?r‹PÌчX¬^ˆŠ?ÀÛþ$·óíéXxÇO]¾ À.™­=XYo^™¬zG»ôíöãÙÃíÃùõ¼üTˆ¥ç©Uä *¥¹lò:°´eDÖ.Íd;œ^kÒÌHÅqEÊ¥™ñ4J3Ù~W8uiÑAú'š3‰¦éÉÀ  'Y0Úôz«N!…¦$mP¢çÕÑ’yx.]÷õjÛåÓcoýS.ý쇛Ýô:¢ƒ…"&“”{#…ìJá‰ìZÜõÚks@öpr‡:ð/kâg 㻎ʗ°vÌUqÑzïà#“Ç%0ƒ‚AªÌÃ6—9êû5ÛÒ1[“¸Ü†w*'Ë_Ro´:æ, }+‹ÕüÈÜ_Ü]}K¿9ù™óëo_ÎwÞæ~R!žG3 ]ï”ÌY½X¼YwœÉ~u´WëŸ^/ÍV®;åÚ.²G­:ÒÞŽtt.Ûݺ“\£Í^8¬>Ðÿšy ‰º4õ vH´‰B<ÚãÔ{»[¾ò¬3OKˆ`䈊nvžþºkrA8§PE‹EX<éä²Ëq&kÔ$ÕÍ;ì%s)vû<:o–×LOê,Ìò7?×,ÁÛ¨š{ ‚G{·(ücSͼEÁ-اÍ*ÒïÙ-™: }ˆð®™dp^£~4paA¨à’ÙŽo÷ ’%Lœˆ¶™Œåå3ð´A î¿G×Ä ¹1ø´zÀ1»pb.ªâ’±C¼+B™gð•3RIÍûèoñÁ° XŒ¤!ª‹ëÖ˜Ú×ÅÚ5¦Jܵ¥Æq0¦­:Ç `ün’\…g÷aÔ©§Nqʦ;y@vƒ©E¡ƒ… €¡–%7½­áÙC×(ZÀ’ÊBº‰¶ŸËu—« ìþð˜9þ“µuB³-Ù·¢ÔüfrD*[‘øíÉ8fEâ²”wS©µp(éµCçkSñÝF*[Zæ|—D*lÉœŽŒ ó¶É»=†m¼.Qˆ«t‰¡ˆêòÁÁîkÊÛà‡fôøÙJÕé#VQl'ô°H ‰g¡1ˆºUèá–Љ«(E^í(¤ÚQà ÞþÁŽ‚ =:›/?À?¸û´š…×”V.ÆyGX”Ö)ÎlÁ6[5;#‡?ì§ZrRw TD*þ`¶Â4‘ZØ·×v†8ÇfÔð'ÆU¡‚†E > QXB‹{ù ÇïÇÛ…MaåÃÅãýÃíiøöÑ’——Ÿ®¾^×OSµÛnÿû!Ûö¿ 8™7 !Á'KÑtüÈ»_2Ý6õï'wÂka:öÚÑ"¾@³L'RTZ1(/bT‰ìN£,j9{žJÝž?>|9Š4‡Ê­2gIn°|M˦‚>p1EQvÙ9§°šôš°a£Ê KñDbX2ì—LϹèƒ(z~s¦âæ™JZ”"Xe*ʦ‚š RvÒjd+ ¤€³lÅct+Ò[‰}\p+þfc“¥«J] …´ÂåLf"´F!Lpê)̲ «ÐÅŒ..è£hàJHðÞýPZ{ìb ¸p)b1Qf™c'²j-ì™p†¡¤¥¬ËŠüȸäò]‰½Ú9›±\ݤ RûîîþÎ7¿~¨¸‡™ñ˜u¶œJûcµ²dKÂø¸1™¸dÃ߉<ÿŸ¼kÛãH²¿"øyYÊK\2 ÃO ,ØÅ¾ìû€igH‘ E{4ÿµ?°_¶Õ-vuwvgVVV¯dfŒMUUÆåÄ%ãÒAšòÑïea¦ïØ\dþýöËË»_žÞ}x¼ÿüîæÃþc8 Þp¢Ã)™6ß.,‘Iýû-©Ú‚¨ ; Ì‹Ï6Ìúx]eñNýÝeÒÇÎZ:T¸Iæ%Å’ôA6nŸP®ƒôYå ~Oú¦ï¨“>3ø˜Íùž>²¬.-> ®!øG5Ùjè+f¦7²gËSQ¸Fö‚¸¢ìY 6×N´#\Ñ£A¿Y[ý³ª\AKÅzpV\iöîÓoK®ü<¹õucXAJE±ŠBùk½7Âv+KÞâÅá Zú \dëæŒôg’*$kÄŠÊRÙº‚)a;IUðÈ_ZªÐqKÑjTOS#enÛ|¶›ÿòøÛ§«Ý¯¼ßÿãÕõùGmåI9ÿÈ è“¯°c,R!’/ÝѦ‹wäH¶XãÒ’A!´Df€ãìYß¡«MìH…,¡…i[ v·½‘³‹,éWÍ“%ŽÐ-‹ò}ËÜÄ(AïDKmAtLêë&p¡"w+âqÉ^áNŽVSíxýu·—Þ{H¶&ºA-!y™•LãÄÑEØÁÔ€¶ûUAæ– ==ÞLMÊvõn2õᮘø#&¼ù€Wÿ¼AúÐÍÀD?Ø¥bª@Š ÀÙb’ º$s¼Ê©—ú—öi^аÔÀ´]¨ß&<;Nú“9¶P+Ô$s”Fòwoôìdb*èÌrd½5“q‰4y'M{Ä ‚ƒøMf’ݼL23Pö@tT’¿Û}d¹vë”JÖøœÎ’û/.2Q#4õhù  ´nÛ†Ýî=Ž;úöläåÇ`¦Fn„ŠFËcÈo†ÞQ®×„]Y×^A@P+‡cZ놿ñNƒû-þe;(E…G\zžß¯•¯zN=Ú;»Ùþ÷íµð×Îþéì‘ÑŸ~þr7N!yQ]m]«;%:‚°Dvú„„ެQ_ÿ• 
x|5æá8“$G„8(ÍRL¨•ã#÷¥gDðNÍbNh'‚>ö!ÿ~}ÿ^ÿgçNßÎýrÿøû»_n®?_¿|ùôq:ZÆE°}VÉŸ.ãO=D=#nሚ:×<)z´UlP¦b3ÓLÌ¢ÛªóÓLT'€£sØö°ùjÆÉi*J6쫈PR˜ÆaÓg,+ÙpiH$ÿû?uáÚæ`UüIÆy…ÉIÖ †6ßÀˆK ý^î^lŒÉo÷¯ãxªYsíôñ‡fT(cié$}KXi¬]ŽlMž‡‡¬Šû(¦:÷EùUg;u™W¤_mŸ XéxôRRðýwÑâx ÄË·0‡v»dòéþUEv ·ýÉÞß›·6…Ψ-¢ ¹W]:: äͤñƒ1R*ˆ”œŸÛA§^×ÍIÌ5sQý±îsOa÷i¶rt!,Bc€×(Ù`Î>­Çu Ò   PÕ¿p_fa‘ºDž‹ø|¢›Bµ=Bšu ×öho¿-)Â2(Xw›Ú†ÎÛ %{P`G&5¤|áY£‚U[BÈKwL¨7'9žT¢„Šý|”ˆ5b`”¥·Z¦²u·Zi¢h¼è±èi‘ ~,j2A.—<9YÍ¥–}•ç ç(ªÍœIË5ùþýÅæÏ …kzRÏû[Y·kbvw]»®WO/³\%cëimQ| iKj©·G‰æ83 ÕËdšëX¶™žE(jZÊÚÌUzôÁ£“PB˜e1KQ£ˆ)­í<“(rÎs$gÎdœçhV¶ÆPþ÷?>›Ih&•W5[G!;ru{¯ÿãOzÒ_o?ÿøŸÿõ¯ï²Ð°-­×0¿|Q>œZɬ¨ððx3',ßýüîåõ£Aö?m¿â/O¯ŸüiùË~»¾½ýËf;@Àw?ÿüîUàW;Ô×7Ú×ã]?¿ûù †Ek¯[$±r0¶¿Ä*.àq´g:•Qç=¼(.vÝ·I¬—<<ï¸ÞÃk0a§Jú^$Y–ú\a‰— ¶‘oÝ­›OÏwÏwŸ¿Œ‡Ý—|°`Ï´äêéþúÓí‘gzÁñ&ˆuL/ñ Ô¶”‡1XcìüAh‹×ÍS°õ¸b•qeOÁž‚ÂFö¾~B¥¾‚Š6 Aš]“÷QÎÍT-+ªŠ¯¼Jgwì+§HƒÅPÂ^»ÎkçT…½!ºjì] §LžyQT¦2Ò-S2:¶ë®S<ªjßø*ÿz{cï÷ÿ8ïÚÉÉi¢Å͸è XùÖ2¼0 ÏFÞy„êµ& ¾¸ìX! 1ÁÙAæ[ªå¡vB–ûÑLvÇŠ®PÛC(R‹9v¢Âš”Î+%¿?¼ÞÝߌ©¾ñÿ­’òöâö°l•E…¥,*œÝ9!VKIýlRA™ÀGëIíuý›Gl3ús·d3.ÐŶ ÎTб֣šç JZ¯lRË]à¤5_?;9L¹€D¿‰}¢¸?òcúˆeïE°ñ!Õ$Ѷï ÅΙ>b…„jgÓÛ\šoájúøÌ»váT–µ‚e•¤Ð8/‘Dpñ;o“ѾJãÏŽߊuŠù xoÄêaç ½«0ÆoSÛ‘dIÉQ$ëµoÔ£†ÆyY¼¥Âcåõ•Zãcô’*øj-Q%¾²˜¨óv²ª}F2¨'¬f ÁæýD–æ¤=b´õj+½R&‰_mQCfîýÃõ4ØÕ·_=Ü~~¾û83;¥ùC3½Ï«ÉHon:}0…¹Òµ$jÍ\¡­2¥»áQ¹B8«\r…hNèÑ3½­Á ûcC§ï8œ•w÷I)÷ðøüeëY}ÖOئbúó8ÖTèÖMm¬ÛNÔ<¸3æ³F½œ»#×7£…U5dS} ÙD9ÅB}èÂ"²K´&zÏò‡® ®âA§Õª7Œ² èSòü7_A}»I\ŒÐ gÇZn)˜Ÿrº#Q'è&bĽ­oÓw´A·X#|˜Ý=ôÞƒ¬Ýÿã¹9wPý»vÉßwRÜ‹½Â*°2ÿ'«¹ïô©è!` ¿ÃÚì<ô—Ôyô511bûóÅÙªuÂ~†äcÚÇþÝ;°ŸÝHxqì¢õ±Ûšwþ¶fÄI¼DëÇw[æÝ‹Ñ˜üVÀKRyåzáª9ª‰«• {H«¢=¶4F¢°MöOØP7<Ÿ`QÝÛ UàTFuæ³;\¾/7juB¨2D@·—‹™¾¢ ÔB |iP?L®®ê¼M‘‚ºò>Q ©Tâ ]=z_•”ñìg•x¶¡Î ΂¤cOKpD\Ëî2Åà¤åK¹=U^w$LÆp&(^w¨Ä§³³·'÷ù‰,»£UÜw¤HCHV´?Uó½‡dgÐù4ؘ–ghr™ßeM^»R{CýM|¸§ÈÆ¿€ôuùL«wV×[Kh.ètM¾×v fÃã¼¶ƒNŸ1iHâyvGB§Ï8Õ«ƒM•\l<}¶Œ×c[+lxmCi¸LiŽd%ÚðÌ"­˜‡³S6·GÍ.*˜¦\š#Ö^üAeÎî‹ sB0H¨Á>2@±ea§’“Ôë`ô¸¶EÕó`ý*DBc&_ u¤²ãëvG«0zÐÐÎÌ]<#kó”Ý@˜êÙ N=„H¼ÀDZuê¡iõ³D4»pó°“g¿z·o¬ÄYÑÀ<–Vç|‘B̈—Á<¡NJMýê Â3ŠFT“ø’¡Lç6 §˜Òb HÕ˜Ã@ú%®†­€±àýÚÉÑçàíh5ÍêÑà Ía\rÞ;âEŒÃÔ’ýVBº¹[¶3l7нýÃÕn¢í›J·épnO£j!•™->H,29›îx#G—·ˆkpµ»ÒŽ*<ÞqëÀäñ-…HÞŽ—êpð•:Ì ¡”PŠlôªÄçØº9yÈêðäh:LôâxoáëÞCò¬ä‚Õ‹ëø}>‚O͸îåò†ü›ZȽ–Ú ‡lõ_@ÇVäìvÓñË{•³§Ç; Ïö:ÞìRáýÍãïŸî¯o^ŽbUVgn^¬:û…“¨4F7;(ý¾Sá'Z³”J2.@+PËײ°ÈF7éÿBSü‰™øŽ ÅõÛ†q€tžÆ–ûÂàÁñ¨Ž³ƒ­vg)‡ŸAŸ$(…=pšä›gu?:ZgÜ««¾<ç¼HžçE6¼râGzi¶#Ùðʳ®$¡ÕÉ.â´î]ÞˆIŽ$LoѦÑùÙª¦]/å…j.åÙI}ÉÕ –ñË5úµ.çE¶N°e/‚UD&[Uí7fÜCWo4FT­cðE{dÛËc,Ú#u³ùÊÝÑjcpBØ._;]ïS{òâHQ«&ûYdvr;£¶ÉÙ·á<µ³þÜ<ó³ðÕSSäݾ)úyMøÛ\ñLøPÉRÇ2$^vBu؉ƒµ,3V„¶–±!¤ìÀ‘ÉѪÏ8$‹T/qå^È gsžÈêÐ ‡•C?‡5¹KŒU“s÷òüúdLýðz£¿H{fÆÊM/¢4ÀOÓK× L ¤*ªæb¼\ìÄ2$ É—‹¾„ŸŸ=±98çóMNVÃï€. 
AVÏÕQÎóQ6ˆóúöËñ0½e6îï~¹ýøåãýí%c¦³oß ›+Ã&ó×½—%À‘|l¸³UT‹I÷å® VW‹z•X(&]Èú‹®Kò.Wû79YM˜]M(šIíîkàÁ[`F››–ÆG„ô-­cMç¬cÕ¤”`RbVEOȼëãjä4ØZ`ÂB’˜+î<þÃFä¾ÿùoªLŸ!õêÈ©»Ê´p]§‹¯²>Ýižˆ†ÉíŽÈ†­-ãA)‘·yÛØTBp\B™åflÑOäâüuc»÷çï7gÍœONS1Ô¾ Ã^/åÞ3•èÓ«—ËÚ¡D_‹–襸ƒT\—½•J'…Œ±wÕs–¶µéª[7› o²¹}^wµh„þC3{Ê**¾¥¨ížtó‡i¬@Ѧڂ³˜ ˜*Jˆ >Û•4¡]u‡¦ ÎAÄNG’IÑ}‘^v¬ao½ß”¿Ù[Ûýóz”©Eî:þBE¥ªCZ0âœ+AËš,Òê£YÙ ñ[˜ó|÷t}sc]êãx©Ši~П= ÛnøÇÕ†Y³P4º>ÍÄèR\Äè?}ƒ €òõb{%¾ÜÜ>Ý?~1Ìl ™\&î ª~kÞ`B&nçN…‰ÅвO5,M"2·‰g%’v3²Á¶ +IbÙÌ¢H⢙EŸmþЯ‡ÕÏŽ˜’K³ì¬ÍÀeÒCC%Œ#–ùHn¥igÕ8ÙMlÌLe±$E±‰>—+ž®‡Ôè«ÈÇY!N².2H«'CT93uŸ¦']HTÚ–¡¯—VµªYê×e­hœNó^º…¼g\›÷às5¿V‚¥J.2—®jA¸Ø.Ý,71Ïk²ên•ÆÆAÁ®¥L2¡Í4 n[ÀCX•a/ƒS‘(­Bfk®¤óýùãY)»gr–r~- â)FÞ˯MŸ±(¿f;‡l7Gõ°£~rв[[¢[<æ#Ö–»A€õ´Å‹›ÈÄÅ ¡ÈUËNNVqq9 ú»ö²®Ógdû‚mAT+ˆ³˜-ÖÈNíoˆëf^Fâos÷ûåðÁ6…ëœH¼PǶ`ð…*××Û½„û>Êñšlõ ç]Ÿúäš—øÝø[™»´4»$åü[OÕž˜d¡w’œ,N<¢>ƒ2€j¢_5‘_%‘­kž‘¿gçΛÐ/~ôÐTè¬& lÆ^—í4Ç4j‹úò>:Ҫʉ\²˜ÏÉO Ò¥ÝOļÇP÷é·éûõûiŸ'^Eý| °z†¶%#þëóãëS‹6Ö@á´¦›4ËX#´¶ÕÄÍêÖƒ&6 ÉE_ßЖqá–Ÿ<4Ÿd!Ú!¿ˆ…@i•=—j4ýÿ[_ü{¥Û¯çý]W§eq£e½÷í,ª°‡èrí`sŸ÷ì‘?5‚³Hzd1c½ÅÅI ­Š3RE‹‰.W×2!Y§˜‘9&že0 .RiŒ²ŠÁ £§?啦1Æööâ"E¦Øri¦^œJ…Dÿ}Þ{ä=à˜—×›ù§Å4šzÉ”V´£\'8Äç)4Yÿ"…&ÄÕÝ, ™ì÷¥újÈ/¶’ë{Oƒ“ á°,(‘´l„rÑIb¼LÛ™H¥'X X—s*¦W1d7%O(× +Ùä9XQ›2VXŸáú!™d68Apƒ:iX.ï{Kšµ˜‹ÍÇÖ.e+î9¶›%$²ú¾æ.{@ÓÙ8 Þ;iº3c|gÆîØ{ÊÛƒYꈓ ÆæœoÎ Yk?9MÅlBý*çS¢ƒ[³É3'ôVW¦÷’lÙ o%ÒS/¿4«‰‚ Âä•…ՉâPl7ŠvjïNVqif_¥  ê)ȸ†¾e.@H³×±ïMľþUmݯ*ÄW÷W¿ÝÝþÞf¸3ƒ°lê¹”9¬üÙäç8ŒÙ?"ô˜ƒ­ßŒ.Q}Ì®_&l7í¡ùþs|ÄáþÏ.i¸ˆvìâª%Û!½×·ô.dgØÕíýi^;xþ¡™øEõFßšIœU|˜©~Mô꥔£H0K)öÁtRÅVr,ÒôH¤©©Ÿh³Ç¦N)ÕIð]û½¯=BÏ׿7@MÕÓ–+)eÎ}ÿñúóõýã¯ÿu*x§;·ß›Ux{XÖºÖÒͬ)©¬>Â4µarÔWÏ×!çY¥võJm"%N\*{Ø€â#ŸWk£nÞÞ¯a¦}¦0ÎÓcö!ù%z Þ­;Ào¤«ÛÜ…îÅÄ#gD ù³ˆiõþ®Ú™”“Œ'Ÿ"Ã"ƯQ¹£Q® †B„ËWî,ð¡ÐùvRWrÓlSgc´6»:Ôì¬â1ì€XŽcTÚxC†³àŠùåMZ´€«~&*ŒÏ×:–hep5ºúãÿcïj–ä¸qô«8ö¼J‘HÇœö²‡ØWhµZ¶ÖjµBÝòzûî d•ÔYU¬"“I–ZcÇÄL81 ,%L6ô˜¯Í>ºdœÏÇ׿~xxsóamüúÕcŠ1üÛùãŽõ’RËè; f$7¶A†Ýü*½9–½‹X§ú¾¨ú²ÍT uHFÙ²ÕÔJõÔ“ymÑ©’`Øt]$¤–Žû€ê#± špæ`ERù\ƒAñ\…òÞwVÓ¥«ÒÏ ÃºsFJ›PW˜ûǦ˜I>è–Ib* ú ãPïés݃|dÞ–}h4gÏÚÛÄ ÙrÖÑ?kÕ2gmãç/´)ÿKòÞßÜþ¦xmeÆ× 8=ü¹FÓDeFS½"ê>:ÅçÚo™ |Ô ÿ}©1Å…Ô˜ºýëÉšò¸Ýê'ÚhÈé’±±Þ31 d¦çÀd#[B!?#F‚.@ÛªËW·~ÝK9¢±5¹ !h<œÄùí Ûâ34WÄ3ÂêÑ1„mW@¤!°à¨Î{ØÌì‹RËì«È¬Q£ñö="¶3)^ˆ½KqÜ þ¼³f_]•)bD_íé±y›¿¾ìäe¸‘dc,81’óADÕ…äó# ±ã8 >]gÊÒõ¹¿ùüûÝÓ§z^ßÞ}~zÿNµúÙÿyõ+ý!é´ \½ÜÕ3QZ~va3…Z£´üì™þðÝu„dãÓ7ÝhðÌ”|Ò{¬:^^f¥OÀ>Ï>…󧣎­§´ÁLè' …±õPÁy›j|…<˦RÁB;'z«„Ø.;'°h‹ôî‡lóùB|².¶ì¢àŠäyÅå©PmŒC[ovrÞ;‡,<›…ö g"ºDßVäÑQ{cÍÀî¸õßosOÂQ“^Ÿ¢/7Eξ7â;wO>ªU®vCç&§›Ÿ~»ñÓýû_w«™z'‘(´KŠnyŸt™j¬”d/ôž¯Q’èR½-%(±ˆÞ!åBË…Øz¤ÌuÕΨþixwPåèý`ð61‡L:N·l9à+÷MVÍXFŠÜ½or¨œ;r›æ‰6y¢þ“àDÑÚûÞ‚\ÏŽ]»ÜËðìkyÌTÿ̰W·°¬”U7÷YoBB›ëS•Û#,°pÖ}^¦‡û¬Ëö)yâ5ìrXd‹:z`´û¬rÎ<ˆØ–Cê3:»c¨Bཻ]…À-À;S? 
‘}³UõsåëÆ r2d+²ú›Þû»Ï?ÿòŸÿqú@áé§/ºÀ7÷w*ݹªîöÃ{•¸JêæËÓoÏZïúÇOO~üù—êü[óx82TÄ­$hì¿¢EÖ=¸ÕY»þ+ú‡®áÐøíÞáøüÅ6»9‘4"4ñFD¯G›ÈÕNAF~i…+å'ç@-Í%+µÛ9d«ê[«)Á°q”Ð [|$ËIkØà-L•¤´u^F:ˆGÞÛzÍò»Yb‡Ö‹4d2úšN¼+uOpAÊCæw‚̳àˆUÃe‹® Cß?ðÂ8qð†òZ#ü›÷Ím^ä#Õíúd¯I·ÌmñwkËGâGï’ÛbÐ7Œµ%çbPÄ_û¨Q-¢¦@‹\Æ„ÅIHˆ±Â„E°dÂloŽòïY =â,]µj›pöK³¶¿Iû|ôÃ-UÜ·QZ*L“#üJópò¦ÎÒ5ÁªˆÁ¼‚YuxµJýÏžaDŸ˜6)t€&¯À$ºß¶’¦Ì`Ù„YmWê°Ù_“˵Úû0”²\ß6SÑ¥Áéw£ þ=ó…M5MäÝd³ÃCí$„@Srsst3{çü WǪ2÷¢‹™¤5¸‰1Æã¤õN•R¹œ)ù¯[èÇ›·w¯M¶_¿Ë°ç^‡áÇæ¯` ä|:e^4?ß§`”«ùèpüÿÅ„CŽËFnWœ×9ñÆ%ž.”B-þùùUßPéí«wòçÛ§“0ï0u««\Á² íçû•ƒU®`]"©ê"_¶ôö Zßbé5Ìâ0¦“‡2$m“bPýã’ÙOzÛáb±Ð¼q¿Àsbö¿í¬†¤ÍVe]]‘+ìNøs<´²¿mRÜ]#[ ;Ftàá/ÔÜa©ß};Ÿû5ú;N~qÙâá$U´xôBj)mWÅÓfGƒªT3NQY(ø`Þtk?©r?äSóP¸ýñ¶ôCxˆÀäÔÍÕ±Bž½@Ûn”Æ2â $b*7Ç„K |Ê“ˆ= ¯hÏŠ` ÉÕLB»«VAµI­aø “É™äµmËÆLxݶï|å<)׿/b%œ;ø˜ôêÂ&k0È´ŸTXóœø€ÓŸު潟y;TÂûV5;šf¹&ÛO¤Ô`#Û[~PH}?Š¢FŽýòÖzŒj6r ©UXKP­2ÍRà/…Ö%o­Ëö6÷"ÔCµQШ³ï7)± †j“3dòÖºãä 2\—ëüjè?P»O†7¥ε3² vÄQ{7*cûàµYê ¨W‹›ÀV¸¡P…F quÖbà¬?k'ÆD EV!N|%%KŸ³ØmÖVí¬mH‚z£±Öp[h´?«bÆœ?«[Ž,ŠŠÙç½]ÁzK{Vڳlj½lrDAMÄPCù|ôã«&æ7ÏÇê}¾}æýs]e„ãv—¡ BKeÅHÞR[eD…úÅûzø¬ØEÅxYýÀÒCšJ,ÛæµI€´;K‚´@‚ߦ€<Ñ03øˆˆ.I¤‹ƒÏXã®Í—Uƒúµe•@pæ,ƒÆäÉm*d jë Gš…ˆ@ÃYÃîo>–ø}#z}óðêé³Uº¿}u{³Ž™0ž‡T»X=âHµ“k ÝE‚ž˜o¢ «V/\ïÍ@XÑ>‹1pUÅæ²Eò ¹ôàmœ¢$„˜–EôË9ƒ÷æáÃÓOoßµÒò¨§èWtÒEWL’6é´—ñô-j,9SI­®†”)dý×H]q9Ru/X“úJÖ±UÀ’;Ë –È¡"E„Ö³œ?nÀ(v&Ågá¤cFúòÓlß ~¿ûù—÷o¦[º¾¡woÞáí]®ùä¼$woÉ­;‚¸…Ý«#b„ÐÖi’2&13:Ì[IT.V$uæ/òVíöš}oZì¥btŠÞ‰ÈtÐI´üĦ6»ƒl¯´T‹Ž¶1u aÃŽÝ'`Ä8HK|Êž³ü{¥žôò­#}~Tö,6P™Ä¶h þÛ LÁ£·›2‚*õPZÝü½ ¢diÄã^/WÓï$çsl© Ñt™ð“zb5]õ¼4  ËØ¢‘@0º.ÓÄìrÓãP]´€Þ_™,µªÚ'ø“´Ááì±õâd›ºSCÂ\¢ P—ms'/Ôvò¦ÙMÇeeE*_ lµÇbg5œ‘ÉnÞŽq…2F`Ù¤Œaprg£—ÓŠ;;4B/ºßÒÛR׆Iâ¯Ð…¹*ÍsÒpÉë.WýØ¢·’ ®n­\õ[çº(ÕÁ нSh¿±!@rý—¹IAÿë ÙÎ!Œ£ÇéÖ`u4)-%hºŽÛ²¼ª Z¯êöu¬£Ì©ºÜ%#j~ÒPŒi“RG­°°%h•LКN 'LHŒÅ¡,sD°`8m¯ÙñŽ‹ÍT°#LQ0-/?°OA!DpëcÖ>·@bËÐׄbäuq»3…•Î{žÕÕN5ΔBwñNì§¿ŸÐ¢ER¼~ñ6xq«ª·»"ÉPhØ’Aå…¢\±äøT§ÒäŒß4”uâžú²NcÈfU¿í¶ƒJÏ^.&^]£e¸Ã¯RÎTlØ9yÅD83Æ xú+Ž{Š ð@y² M> ^ I¾¥[™DãY½üá‡åÊa®Þ¨èxÏHrsé2]é^´9j‰¥ìº`.OÖµéÚ K8t)`Ê€®”x”tn‰¨×y‡þƒºØÇÉô½ R·ëu0÷ï¦Ñ?(–:¸æ fÛJâ* ]ž«¼—hnÜ×Bd= Û‡Iغ® ÙÁ÷ŸrN@“U‡âPGÌ{?éY,3®‡õ8ߘˆ×e¨\”¡°Càüê Š qÇý_‰¡§lÆÔ›Iý6ƒŸ'çè†yL{M}%¤(EÐŒ)çè> ¬‹›«kdô›1ó¿—ú{î/Rôét²i…¶Küe¹Îïþæòù¿uG» @’oé…p $¬õºÖ¾^a&l´™”KS@¹Œ>ÛxöU"ÂÖì¼p€UQº > ž¡º²?UyÛ2¡ÌçÙv±Ð·]¬* 'õ‹k½º‘Ç(h ”ú¿Ém/ˆ=¦Þ©ä)Œ!º‰æ6ç¡Å‰½íTã¡ùŠ']Á,/äÕéý'ÍL9ׯ`‰q¼Ûå$dÝ.u­<ñu©É<ÂÛ^G¯PÜgM÷™ê>=JFŒWf¡ÿA9غ©¶ôòÄH'Iû‘Õƒìò…ZVøþ ŠÿÖ]öêWúCÒ*#,ÃH#l{-ÌEjº’þg-ÍF‹¼ºY\½ₚɚžS*;Èr= §‡Åµ‹ŒúÅte‹Kù M§B§0l'…ÂN EÖܹ鴊ÅHBàjnˆÜ¹Æ ç«ÅÒLNeŸHÞõ¯Õ ê¦Ù/ìß –Æuƒ z¦=(‡(qe ü e- â1òúŠøAËZ×8V¥—M߬à[2|jZ·}2Uw …))œC9U,¢¼Ø1´Éÿ.¶VÓ1aRü¥tÐIvð‘|Ç" Z=@]ËP/ØCÄ3ñcÆœ‘‹MÏ„MyûºV!T9]¡WèoØ|°y )Eá ‘ÛÒ>áz*‰E-½þöf€ µiÔB@.¥ €üê´ŸHÝ8c–¼ùyg5ƒæuU N T?gÚO'ãh:ýÄñlÎ@GÎ6?°àäHM@§ôI%â /ñ**‡H§gFc®r9K„#‰Ã®r9—M,ÓÒdchA6ðàƒµD§ÍЖj¡Ñx²hV]„6!ðR‚6ö)÷J°ÜZ ¶é²H5ÚØøê±­xpelcm*G—iaš\tÁÇ«bC|AØV~8‚5½‘c`­¼’%¢%糕W2Ì(¶¤pcô!†°ÌbµŸf`kÜ4⢛f(Ç.vVé§ ^® e!ŒG5<ÏÄ£¢P†ääñhTD|AVWYvD`ã ç¬n5 $‹èh’Õ­æš±8F°͘CS·;dnâsbÈð9É €‰CµñV›Y`S!§›%ÉNÁXì¦Lè¤ËŸDð(·üÄ&J§@¤›–¹½6a×ç&ÄÔBédó:…`³]ãj»¦Â¹ÊI—w;§l[ábk•N:Øë ®°låƒ+Z6ÖöãtÊL?±“8_.©~`?Ë&X׫½Ñ²5¾ žpe®tÄva«ˆW۪ƟÝd”*n´ôoð1cBL‘F¨|Õ‘é?¨¿}üJƒ¼ª2ÅF5…óB÷ˆÞÇ-øß”}5©ÑÍyŒU…)«äÔV‘ÂY"O.Õù¥ˆHE¹—½…P:¤ØªÕ åzVŸª QÖBqƒ;ovb§u¶å¤ÿ‡ÏšR¤ëZ†RÕx£úÇõu(«q Ž¢0Ô«tíŠm<MŽžBŽm.~8uñ™2}1ibLL%¦­Ÿ†þ²JÚf£ä}¹o»)»øóØqápÐÈâ ÛH[!LàŒ&¨ÞÃ7 ï¼Û`Ví.Õt÷™˜®@R ô/nïî©ã¦èÎ+¨9{>mQP#åh!{úÛvIeØ%^Ã.V-ࢪê²IIU Ï2ì÷Us%qÝÅÁL ç/d+b<¨~«ˆ¸JÿÐ mxÛ9dž{à<ˆþzKNOióÂN³:3`¤ØÇ /LÞÆ{´”[7l^6;ŸªÝ0]P/qj“ëO sgЉwçÁü×_‹iîž~×ÿóµbïóEªú+NÿÊ,èš=}þr·™¿v'Ù¨v6©4Qƒo6KÖùXŽš*ä¹ÔËaÖ“(žê®]pC,»kް¨»{Ê«cÝ}PÕ/;Rðt¿Fuµ5Ò6ÕMc;«f1û /ê|PI›èÜ>MnBùYƒTèÚΜx3—âõðãü± ¸·ûàq§;˸«>98öÙH Ç„x:ÜXÕ¿®ñŽ¦Ã½äéŸ=êäÕ‘MGCÿQ?,aJÆ2ïç_??|ùTЦݟf~Kâ¯0¿QZ’–ެÀ{Yg÷";op÷òÚbaç³WÛ¡e|9ÊŠÌYïøY=ºælÙ˜’ó«LlŠQÍò&LƒG5ÎrN!3šDÂ\+ˆé¢‰íKWLMìF8w°ì‚Ú„¬ì `ÖÃɱÞÓø‚º”pz·®KÏËÞCRÌÛ«ì¤Î‘˜|\=·E\½’v!|ˆ!•“Z$êÖA 
nÙq.ÿ¼Mf&½ÆÞfÀ®A[¶î_ÆMJ飌h„öd‚GÌL8‘1ì¥Ù£ŒGõ(ƒ‡³çjSD(m²¢H£¨ÃL‹å”!êÿ^w¼—‹WµjÚýjþ¯o?ßfÊE ´V=ùør WH›F©ž|û|å‡^D"Èm˜còüVO!Úp­‘VÿþÆÞr÷Â;¼Àë =ˆ.HXcû¸Í¶·•¯‹gä(kÉZ/H¥— ·ãà½OEÍC&¦˜åyAáöºjÅmf\eÃç¤ä¦÷>¦¬îö*æ2ÃíuË)8æsÚbßX‰ªê:bª¶Ü?shPÿË[RŠà〉Ò‹`«|ot”z n+H)_ßß=}~û¸3ùÇSùvùŸFÛQþ©¥`‘€CH1¥¥Qì*+¢„½â$s&-utšc»î->YXEI”½CªØÉj¼“µ“*FH‡Y[²‘Éœ¡ %¦è~¾6”—$Ñû¼‚‹«¶¡¼|í$ã`u³k·¡¼|í±dD¯ ~í»%=¾w\ÓowttƒShÏC Xf?ø1}3þóß ó7ùæ`—o2Ìô›ÃÓ~“Ýäkž =¶tßSÔ¿9´³K¦˜ÏÓLp°|  ¾•u®³ýª“Õ”‹Ù½…yäÅÇ7åìÓg,kXUÇF@`[._SNÛË<#õ/Œ4êI’`Õ xÛê}ñ­Æé=òùfûyq²ºVÇ­Sƒ \dÉ5Z'sLŒ3åŽblŠÍSÌ`@Lu°›~²×‰S‰uˆí¥Yw=òŸ1(>I KL˜}\»@RÅì2ÃlÉÑj¬ tŸèÜù§<©SP?e 3˜Q·#äp°˜º#Ä;Xu¯ˆ[{luCYâvøåÍõ寑< Ãvý”áÛªZÊé¬%RJgiP)Ê´”#tP/÷¨K N¿œA—¬P˜0 ‰0Iø(Ù*lÚÜ"› ®Å&©o£Ðþ9jE‡Œ{%€`¨È*«œ¥hï»Ô×cÄ^R=èkGIŒ3¢§Š}Ra°Á¥ V} ™×Û•Šmòç}tüDÌ$tµùœþvõ<•M…ãæ#JËÌõÝœ¯º¶X]³1ÄýXÙj$×ÏVmÚžJ/UØê ÕI[ ËtLÄÔÅTÍ£Hˆ2ËTK{¤ÆTÃê9xÝc‡dNêãAðçågË5ãføÙH°?[-fÕp•ѲÀ/†x1Ù¨.0¯Ê• YÆ?›>l9*ì¯_fNFÓØ²]à°Û4Më\Ž~.îÖ ©Äšò‰‹˜Ñ“CbwóûšÖö"é‘L¶=ë|àyîP L°.ÑÁVÎŽ3×ð~dÜI}Øk1–«’Ê*Öjˆ‡kê2 ®~`¾øÏoLrƒîÞäâIe’±|ÿëDsPÈZV™`û$ôO8PòHŒëÆ/™ð«ëÏßïž'?8ë´$8~ZŽmž—œ–ª®†WÞ…d÷‚ˆ­Ti§äÓí T•£$¾"O˜$øÒAéóôaô8'õ­ò0çœ,n…²ÙyïüÊç¤I 3ç¤.™˜’£#„@}ˆrU :Nó‰ÑJfT}èu™‹ü^¾%9híQQ GYÅ—`Û|ùô¸yºýòU?»¼èeåÞóàÄŠ{ŠVN,¾˜pðúq69ø*©Óé쥭Tg99$·èlõýÏVD7„<§Ù¨9Ýï·Ý8ÞÍoæ§<|ýe÷ï™ÁhøC³¸+Œ%´Œ eõS¡--J¨—é™Þ£±¢PÑô€l0rÑöP²F_ÅÑ£‡Ó6+Ì<_{˜^Œ+Ÿ¯Ö'k_lÉÉ©#t„y4bßø3ÔÅŸnnüYeúG5¨ð¢èÓ‹ƒU¶,ü 7¡£)iˆ7lOøvò@íº¨@Vi¡xÖÃ=è‹%+]†ž’_ϸFÊ @g/e—G0›œ«S`Ã6ýrÖ%‹>†´,°áõ›$ùÀFéÃéß<Ï5À+úçŠJ¸Ñ¬Þ $¾¥EÒFÑ9‡°x"»YhUw;Q§_°"!Ä®¦l)[>YYMKlE’èßÔ-M’o‰ÕÕJ„4§Iµ-ɇEæ×6gëè8Ìk°0¨³p™UW:,Î<õèÿ+1Ë•˜GLÏã Çè’ Þ÷ç±OÖÐ .Šÿ™èöbïPíçúêv| ýù¯Ÿ/žž¿_>«&‡ƒ?üm³}ؼ ^ÚU>qÀ·4壆`IãH¿2íÞlÙv«ýÑ­6rñBM—–8ð”ϼ ²Gí¾µM‰Â0Ç-íbîá MN™§QO¨ë‘§ÎSǽ¯Ê·³lÝ×UH÷šð&¯{cßqD$íºÞæˆõgVE7xëßøyzQU¾Oö m½¨ãErÓCñ j4UIZ²A¢`×ìF=]/œ¶}Â^±P‘? ‚t§MŠ”§nÙ‹©Gþ Ê1 ƒ·ä.“/ÙîaݶO>?>Üøôp÷üáêÓ»€‡Ð¥‘g¸Tl¶²Ñ³£Õ/X#a:ÌC˜ÆÏË]×Ü:&²xöÖ,üÓp°yÉÁ-I›º¦ °Â6,NEpe*Â#ä1Ë´ÕQ‰žJ ){u3]ZM.ôsà] ÕþÙT„§!ûÈ3 ƒ‘3,P7Z‘FC…ºOØÉbmK­¶™Byò•i!–´!«ìÉÂjtM#ç(q¬÷»ûh-@Ó™¯/l£Î©­µ3­9Má0έ­QT𩨨à³%K“ÅTt6ú0Ä`,}oÞÉ3–u6êñö°9æÕsKqáFà† –ó$oÑçBó Õæk•Óz@aŮЀ/”wåvÅdeUöK‚b5αßjÓƒ eêG´n¼´üµzS„Ów}¹ý9©7ã-©-—£Àœ®¬Jo2¨ü¹µSË-èëJ1üÓÜׯŒý²úÙ Õ¡zñø1fã­‰°zÔ¨ék;.¥so•ÔrµOSPS˜{³ÿŽ9¤W9ò‘mQ¤nPÆcÊ2¦NÑih àæÁ|ò¬ñuX² Òûrä:À J‰lNö™³ìÏ×÷ßî.Ƈ­‘mÏm'XR”§€Á—v“ˆå@e"Ð.ÛIß}€Y^BÑÁ²í”šºÕŸ¢8»ûV‡ÔU`]LЧOò!æÛ_eÕ w@ŸV?9  ±+Y¬u£Œ&÷Ò&ÿiDNmŒÛ)3±Ô>;à ƒFØe~eÕS¤]…Ü1]Ž‹å,Æt5åøp|+‚a¾yÆ¢ø0 Mÿû?•Á¡- ™Gk²v”Lj=)ÞŠj`՚퉣(sýI®¦õô˶Ðáã+}|os§óÜyŸ¿©ÒAÙѹ–tjRœf·r/‘[Z§Œ…GEk÷2϶dá§cýq§”E뉺0…Z ¤CÝz´F›0ɰÄLwÓÊ9tf˜¥nk€ef¹ölÉ­ù؆Äi” :õé×k2¸t†jÍNÕØoÙ^£:óJ7;½Æ´ŽÓ7p¾vzcUºäÕ„B\` d™˜€dPhM‘Öe3¸|¼~®a'‘ëL²y¢vÙ•}j`7HH‰!Î&LlW“ß(NT¶*~(5~c8=Ži+ºm…Ú»h"›.œ@2˜q>u1J‚3ø2ÃOLQÁnóJ³S×*L©dš—0ƒõ  Ö4øÐ’4v)wq±¿h–Yç/R„…* p‚qK¥¢½îº Ò·û¥Õ8Œ¤™(œÛ Óêä#&GÌÌʲ[Z‡Žp`Ïq ÑŸÁO|G‰¨¸«ÏÚ'šß° W$Wüé“\m¿ÈóA‹~¦“ØãÞLà4"Q—XÙ?X#ÑhôžQ^îÿÁMåw6xAEýýn눫»p«ÒÖÚÌÉùU}G À±á€4öô>ñJõ µ¢ìæW¢å5Ôß+³ëD‡^JùHkÈÕNäÖï´í¯Nã™±Õ;SMÈÉz˜šÔr–FóT•„Ôã¶£ù,YSÑïçòö‰êiˆIâºÃä^ÉÊtó\¾eN¾¼QÃڨ̙üy8þ~ýéFßt^ó¸*Lk$ÕB`h¶â¤•À°Uxý~¶‹¨øƒ@1àõ²ì†{Iu‰øi à…ùÜÈ‘Vøƒ¸Ü0^À]ô±ƒÎO  ͧ7\‚yݪĬš1J;è#B ÷!8ëø˜bKɆ8:(Ùw8úQýÁ‘NÉNÓÀú„ §íÔvX–­{ºšrɆC4Òàw%“g,*Ùð‚ƒK4£hC1Æy¯~Ì¢bÓa ßì9±›‘þÉdyÜ1ô–Ì®Hƒ¤$>Vì ÑwEÊ—O–V‘þ±×Bô‘C=:«â F ÚÑÙñŽ´¿‡;> ‰Q$ž©Úæé îó O›m¹èfR/º¹ø¢‡é—†¢tñ(˜Ú¼Éäâò*%LÐVëÕZrÓ*¼6‡*g’npÈä*€š<œ&³Ü Rr]6{Iup§l‹GS™Ì2XŠQ".Ú%ÔÒéšÔ‹c‰¾Lò&¼¹¾¸{¾9©èØ«˜8(&ºŠŠ5(,b¯§,MÉd±=êaõ­â<ÎÓ´$uÒ"hæûC³zEæjp8ã4±-ãÓåÍõÕw]‰ºþªúñš*l./ç±[:>Â%¡W˜—PhºÇòÈÀâ– ;-¨~€«{@«| õŒ¡h‡’ïAŸH¥äÚ iÅïÊ+2þè`ˆz\D ]“!¾<‚Vð‘8ÄAœ_ׯ®¿Ý=ü0©Ow™þïßÝ_Ƽÿ`æ4?á?4Kÿ„E¾J?¶Ü,G´óÞñÜZäF‘5Ùf8Ì.Ù¶Nãò“-Ç£h€0ðñrÆWñI~Äî^>ì-·_Õõ»xü±‹AŸÕtßÚ"‹èîu>Ýù]A×lÒ«\ýa6iÔLÎf“ÄQ_¨›×cu6iÑ(r`ðäÒ%]¦xAŸ/>K¼üó[dÞBpjäiøáSSé¤Ú—°kë{‡—(>ä¢vBÞX”¸•BàNtŽ.6`6Tܯ¦|‰"djiK;öÇÜ#–Ý¡Ø. 
^‘¥0Ç…)&Cã¸×GxYÁ¤TƒêÑ‚Ðú¥ —»bÅžýY]úi§?=þi{Ï£@v]ã¹]üyÜ6Ùzì‹l“BƒmšOëÄEn«qx•âÔá)ˆ°:/›ñ‡Æ-b†CR6rŸ¸lä$9h"–yY{k¤2Ýn a¸ÈJÙ…•Ý“rfîVOQ¢KùzîêÍH<ôfЧÃÖ…Ùã;ÂÄQE«{" "Za*àT}§l/jT@ú~·w[W`.’Ê i¯¿!iãØ¨„`î[1±¾àª*‰¦&rì1CÄè  ‰‚Ùq²¾†Ñ^ÓG}Ït?—p‘m„µ³p*V)C¡‘Ï'á8víù÷®¦gÒqáeóRxû*n^ Å"ãÝÓÎj¦6ÚªØÃ‚joÅŽ£q;–ú'àèPÊÅHX¨3 ’̳æ¤í)Žs[<øò ­µ¹ï礬šºHYkváV0I/ kíÏÞv’°—Þcè¹ÚL|³ÎRÎ:øÕ»@ˆQ÷¶ZU9L’Ïx—†¼T&öä_‡í§‡÷w?<¼_z»C˜l1®œÙù@åeûý–7ùt&hób\-zí¡ÍÚä“)ãFê N2÷@mc>O„xfŒ›Aá«F@Ý‘Zðcò™Î“Æ–ê>œWªÅ#9‹«U§íåòúêÌð!#of’æÍ¬ÓŒ“ÅB¿'éx¸´U{ø˜÷]¶Sm/¥>íNà`?ÏÅ®¹‹}ËêQ¿Ž™ØTÿpIu,Æñ ã]ÖäöÑ›ÃÔ…Ø2fÑ.›ã÷öÝÏ­¯UCCNÕ“o~ùøþÝý/ÿ‰÷ïÀaJÀÀyÓÂ(ÄóPœU˜€âÀÅ…¸†LÇ “—Ù’Ö¾ &é·Ö9¸=ÐÓ»(Ã]ía·püç‡Ïo¾<úÌîg¦(7åo»¤ºtU©íu;‹8ÄîTÖV™ ÓNç¼#š ɶWHå‡ïoÒjZÐõL‰9ôYqe9D=E(œÎà¥Yé wÍ.Ú§w÷;3¯)f$v)fjiÑf§æÕ,q ými UIÅX§’º©’)–üê…hi¤ýQN{4ÒOÁØ¥‘éÙÂ#^{sP(åOU¤È²ž?Íáð†íB=`αŸ€~n«.›Ä»Ž5?óƒ9Vб|¬1!©ë¾S;ošêª;wœf#vûHU‘s‚Shá\e‹ÕÍoàÆ8èĵ:7…}#†ÜHE’3Ýj¨ãi;™ŠÝä ‘Œ€n[6ŠîK¦ ¸]Á¾¤6ÆCžØ}ßÊë€ÌC!ºò -IIó1$©äðëæŸMer¬Ë?óf…€s¯—Tf±ÛA hÆ,\ÝþÀ“z˜E€Í:㟤Ì2 '8ÏKÆFúÝ[߇»ŸL|µoîß¿3Ñ›öÜ}ùüöé~Àwýîó}øË?U?ñÖüÕÃ.Ù9ms×-žs#îÍÝõ[µÏ¸ì L%Zu ×g¾†2šæ1—MÛjޏPs¯ù½VxQ’&Ì«áÓ¼×ÈEÖó§Íl×E„É4âÙ8 å'ºjŽ\ƒ-f”"e:yúùf6EQîž­TYsä—ÂÎ Ô\ XOBŸ6JÓ ;«m·Â 3.oÅòg,|þ¯e3>Æv…í9Æ:°êÓ…_húðãcP,×U[HFëªc~‰ú¤?Œ×§•¤Q7öùܺ!ôH3–D9–t|ñRyoéÇGûŸŸ¿æiθÜãLB:Ô›xžT¯äLòQä¼·£ð`Ѷe¨l³|U926KcyDÄ“G)Ù²)†œ_<4J7O(7ÌÓ\(P2OlÖ|(‘@ÝRswªS†/77Ž>…¬±ƒZ{þ„¶¼ÙÆì5÷±Û µnhÒ ’Ê&ݤ©&FØPig@ËEší§­Uø¡)É”lIv¨lrZ^ À:TÖ~SâÑ*›)žêE.UÖNÂŒxqLMTeËž…K¯¨eow‘Ô…Š^ÓwL“Þî…-Ûò8ÄÃÚòv/lÅÁµH'‘ÿ=À15U«rdî>®¾SR&Tø29Ä-àƒXž/ð´³*܃IÅ‹_váÞÖ±Uà\ :‹±èªØA ¿ÝÈ t(KÊüŠàîë#·iì–Ù‘¦c°ìòW@% á0 ºüÕ5Â@sèB¡¬Ô¿ùT3' í†!©„!sˆ& MÒV)*O&^uÚx95¼ØY ùªÈÓ€{"¦„ä=]ç†À ÖC™#:…ÍÃÜݦ<· “í+Üæ ¦یf%J%ÄO;«±&»ø´ëØÌ«Wîòš1¦ƒ‡IS0XÍŠ£ß0ÔÞ'zåvP²W[Ž“ w ¥çªøoÍ’þ=~œÇl]˜Œ°Ï\ìü©…Œza'V°ÜƒÈ]—ëØª«²ØõU"Ä-^§œz(Ì/˜ëïíȸnO΢MV²|%`m}%°5ß&:>S챑†j ìv4‰z{w>þLJ7OÿÈ÷—ÿׄä²iK8K¡|-NÞ¨â¦{$œExËÌÆ(¥ m!œ!ÕkÑ')SÀ=fxófl#e|>÷r §{š$SJðçt×|¨V¦ÔÒQ—ÌÀ˜™_-§»8ÝÓäùí ¥„(hS)ËÝ Ù ¡tOÇ<·œ¾¬Rf„ƒð§2)Ð…§)£—I¿.ºpüUèÂeò¹"æMJóËðü‰s?óîib€>\–š Í0 Íòuéj‚) ¿.lj¦æ¯@niæ¼YQ-Ïû¶›íJ3dÀÛU.‡¦.¿ÑUj†4™­ªf·ò¥ˆÊ]7!Cs¼¿kA’~2N­Ì1¡Ó" +®b–Ík‘ryÇÓÖ*²(0E ˆ)›Ùy{9oqòÜíóÎ*Iš-çOØÆ7˜ÊZÌ$ç‰ð¿Rt_û3ÓŸšÅ¾­f)· od8¸õyHdzô'Œ!¬‚í¸©ŸæŠÊ‘'©Œè7°‹‹ÈÛ$êü)[™¿¤‘Á~Ç}°[`@1p ±vS5ìÊDYnžj2'9ÃÆ©ÚƱ».wVƒº¶ªr¢Ç–ÄczÚÉÇñ…Ó„‘Ókà [‡¿Áúáî~ò–’îíͤ¸§±»ð5éñóÁ`TJÒq0v¶’0k2 ›]K¯häÔ9—=bÈÄÜ~*›(gŸhé 2©Æt­Wõ o3éCMÁ£ oƒh´‹œ6@Ôn;—ÊÓ‚amÕA|$Å.qkZ& ‚‰ŽíB‡ƒ‡Y­ëß°+cÐcñG¨±»±âÊ ƒà…Ô†øS29%qíÉ“»ÖÄšýëù[zà#ÛjÁ¦òThœ£ëƹˆaâ™ø`ã(ó„fm`5ÓxÚ«[qŸ6SÓ8ç³!ÒeãÜâ}Ù °ëKòÿ[ÜŽºØò¨ä³-zÑ~¯:×NñC 7+f²{†º>Qô´óâ“Ðrk5s¶,T¯™Ù¥¾Q4PsáÞéì3ï¿¶;®*¢ùÐyësÂÏw¿T¼º>õß?~zóøîïìïÝßírÚ(IJ']u:›Æä诰÷ h€ôšLn†"N'rÎÞ*œÛ:Yl ZHjĤ2[´d ¬»4–ÍàSîÒØçÒ£„f)Ÿ9ü/'•Ù–3ç \~2ù ”b̓7ñÖW7OWÁ<‘Øwº€GŸ®W·^“¹ =w¶>÷UÎL ä×73á§üüP˜™ðЙ _v939=3áëÏÞ¨7‚0Ùßòù415šµùñ<©c/™ÃÊØ!H!B¸®Ã%  2eI›–GÁþÊ+–ç¼Y(zƒ‹ÝT¼xæ8¥¤)Ä‹aù®Òd¾ äXùVwV±í$Æá®¦Ÿ ™¬ŽùðøpÿéásUuß7[±Ï·L1ª„Òâ[‚}Pw“î—W›7‰EfTÄ\§Ó›*-TÌß,d3ÀôUc6o²6¦”z¸Ãá´Æ\˜|g[–h^9n ¾Íòò…FdÞܶÑ6|8PãÍoâ¦TˆlŽ~wB¦vxOÑœø´¤³Ðm}^ËyãT|½~ÚYÍ3g–‰À ±\˜àÅ7е%fY!Ó.Ã%æ{tøÜ~p¤ 󑤠Âvx)*½lÓyŠòšæÄ>¾›¡Þ_´MئçOµ—M™>Fý )°k‹X6“kCoÃEÜ 3`  ï‚h»äœÌ;ÏŒ„ì÷X`<‘$R<œHò™9z¼ûðã“Î÷—ÿ·ÀÇ<¢ gå/8¹pP·ÍÊïí"•¬»“[6Øî¤7Œ[Wðá ©1òÕBä[¨Â÷Tòñ"ëf×öA(*qÝìºþ•'¬?mf;ðMa’€óÛÕŸ ÿ~WÐëVFNµ$ƒÃ.ÁóÊ¥JGÌü y|o§'FÕclÑ)$DÞœÎIÕ‘±H峨YÍËç)3Z vé‰->RtÅlS@ÌCÏ»ÂåL‡T¶J*ðüûŠ é vœ¾(»ämiÑnÚ. 
)BH­mÓó'0À4é Œœ‘èÈlíñÃE0þµåöç»û·fQߨZ?}ô<ÿo4i¼Oªüæ§·ñßó‰œ@ð΢¿}=ƒ)·ŸÐºa96Ôøiôn¼û½ø@¡¶eþ¨èÓÉj‹zß}³kì$a)²j<‰pDg¡#³˜gR[í7LÃ9Ê~|s8U¿]vÚAE'{É7’ÇŽí¬›/ ¾µð`Œ¹qì`ΰ™þÔqìð•4u,°'š²„Àò‚]ÞÍ"ß…â>¹¬ý<6aÜ.§¶ÝŠ9¸ 9h_ëw³ ‡vŠS°›_†²l@¶I“ÊÓRžÄ5³'â °³‡(oNGC¶‰ù4¨ì²í˜ÐNpƒ^”RJC©SÕc莉Ë#Q¤tÔ8öá [kNŸ :ö¬a’ot»—Y}“ f(6þÇd;™ÿ¯w¾Ðwî¾÷œÌ—ÇëÃ~sK¾)„’o®ûM1–¼}ÙôžC×ad?¦Hsž‚¢†CÒËé¸ôò±ÎÏå“ 'Þ9âØÕ-Âüõ»9l&ý˜“ϳؙ8vÙûòß~å(Úw[]›ù!·t¡öD\¦îÌg¬Î|ÚOš aÈë^ˆ‡ML¸ÖósÞ¸–ÇÅÎjÞ ÙÃY;ƒ‹N‘å7ЉOÍ:9ã0Cmâs^1ú#­ö`¤Ïý;Ø`™ôQK‰OJQnT’èÀÄgÖ—H|þµ¿E¬½ª@ I4§vT5TŽ- ÅBЍÐöª˜ ¯ŠZR€™Oœ¶€%l©ï‹­O›Ù~U"o[ pQ˳üD×â݅Éá™Áðæ=ÀYªíx >`¼Oê'ˆÚZ ø (,R|üôñ‹éæ\Ub_øÏŸ~ЂßšOg[KCË —¾†Äuàïh[އ‹°€œ(·Î…k°ÀP|Y~ÞN_´y‰©2Çs:ölø]zÇŠ?K‹ƒâÁÂÑdÀÜ3K¡>/_5)Þ£êäÎqÈrè‰ëSý’Ù%>¾ wƒÜÄ÷$MŸuØŽñLz“"¬ ;‰Øu¡;na=°ˆ8"ª65ënHy‰Ö-"¾BxŒõï·-£ü¼ñÎS"›žŸ©¹žD8b¶Ÿ-Ûǰcí¤ŠÓÚ¢PéRyJGgw]ÎX–,\6ìÍ]3º_1^¡¼©,ż/;7o„9tÁÇ'1™N9Èq}†Ì¬\ )Ëf†ŒìÚm{p ERœ§UdÈ|U’%„=Úëcb´O{%oeÄœ'&Ó=Ò`ïšf\NqÈ·Ŭqn/¢=}šú“‚&ÉmS *„4*DòÃ@á¼i?ËÊWõKËïàß$2‚uÊVm´Ùc>E<…ºÐ~òè\3ùÞRŒ”“\6Ì9ž&!h|yš„žñgýN‘ ‹:¡g)Ëì±Ý·^:…ž¥¬e„7µa%P ËqJœR°`':¾}¸{ÿùí*ò…É¡½Õ2mq0˜Jf¶]èôI(R0,v;";äõ0…¤]ÈgqFŠØ•ýWlêäO@ rTbñœ>™S)ÇæU§9UÜ1¿pÓP*R1Î\lÄuQžØö©öºiæ6™/Ü\ä;"¤ceq²e’^J¶? Õ nTU¼Å¾–*¢Y’vR{kÐü‰s=ùX²½™×¾ÊylQ<¾Iµ.yÞΓrS;OÝ]´òHk+OÝoí+Ì©ºˆëFdþ ·Q¬Ëq=ušJµïħÃND1w¡Ž`<ظðO7êÂøñEä«ñª¿£†Ä?ÐëNB¡@ÇÕM"Ïiì‡ôªDû´&Æ|„Á”=s–õ¿ýÝŽÃNþóç÷ž-—ðæñÁ„ýãwï~üK¼÷9ÝÅ¿ Ýý-á%Sñöd5׬ë”3ç‰PÏ2->ßËÂ6þª¾qI?ÿ÷?»¸-?2IfбÝ|[¤¬ÒªrbçAì¶ßZýl rÛ~'€ ûí;§R=ØrkUïæ+˜ÚFˆÕÑXÅÁm+SáxeJPä°“H”@s1mi4Î[èL/QûáÇ_>šìß¿»hKÕáÿ§”¶ÜÉø:p)ƒÒ!í¯m·”ë®)(¥S Ð¶L ²…ÒÝ(–ö ˜:Q›l¢Ù@1õá•[ì¬ÄD4!Õc˜ý3R†ÙÁ:ÃØ;èn`˜¹”š6ze`8¡é%Èæ|]³¾~SßÍ-!ÙN¹a Yà˜dÚ cÃÖ±b ™!JˆqОMÖìp´mþMÄB5~.â–¤£V8_¸:Î÷¼Õ⨓Å^¶‹ñ9„‰ÍÝ—‹i¾ËOôQ[Oˆæoš›ÃÕIÛX$F ®k@ÒÐÅo!»„¹Ó–Aõ¶Ì‚T2Ï7m^ ÁŒ´y)ˆKôÈ‹UÚ2 ˜!í0fìÍ`Ä=ÆŒ#óÑÆLðVt‹vë4ö”ZWfÄ@ø¦î®½~6Ab:†Du÷–©6sü#Vݽ°U£gOÚ Nš—¨…ØRÄW·ãžw8î䉰 v†Uˆ›`w¦Lº"¹üº±J¬ó^U[ج˞c”.¬;³x긃Jë2ªýµê¸g‰zl?÷¨÷éÁü•û»Ç‡Þ†Ò’ó÷Ýȵ,°%Ánl¹–58 äƒBfH #:)Õn<ûû×ËQbÍ/5Oç×+àWÿÝÊŽ:4öQ·È/qOÊ[}ˆoK°y•¤!@KÖ3±®tÚ¬Ëôªð‰Vüù‚1Ó”rJPέ³žDZ®|ZÈlH¥-)Ás'¨>ºK©n£<É¥hî’ÉO0¾(WV.Ô Ó•¹3UÝÑhs,¾”? 
lþ·FÐB=ãcx½Ÿö6 œ§Ü¿¿{ôÇó€¸ü'Cmå,„‚{ë¹”™µf£:ÆÔ!eXG—ÝšoüU*#!8Ëå☸y%„-ÆvDðO`æ½®Ùpù÷Æ|éYo«hr"éQQ‹Bºß"æCÄ—Eíõ‹Ã6í¹g'·^•\œ©ô6¾×sn«&óç˜÷(oöÒ÷Ð¥¼)ýìäb޹`Î=ì' ¨=?c/{ôÕ0^úùP🻎8‚ÏÁ§œØä÷Aƒ0_‰S’ìó§/5xά·„!h˜Ó9šØép™áð—Œ{œ3¿wÉœmÄ-xOñ+ÙÁ¼„;:ÃzÃI!Ã|'Ÿ”§]øN(G—F¹œ¹Ì¨Y3G]bcÃ5»6U̯‘á †nÞ VávÖë“mI´ÛÇ8žv\69x.c¸»?¿äµOåO½20è+`,ÜìƒÝǪoêpŒæ²1öèp¤Ô¢Ã%:Óo©—¸ì£©“Ã3W¤ÔI7}4wœJêû$°1uŽ$É"ê=ê…@cú^Ï’Q§bBÝ)µQr2f¸:¡jBq6÷{8?aŽÜ<ë ±ËåŠÊz€ËE^ pžýâWÁGXûÃÝþVÆö©Àê Ø- (¢º›ãc¤Çy[þ®G*¼-±¤M¸N¹R/d6ÄÛ þ´"’÷x[ì÷%w=w0A߆Îß)°¦·Å G/…£d#žÝÉ ¬ia{Eα¼Ë5’ý@üZÐÔÔЕÛy7|°ü†úÉÂ(åËU¢.qÈBsÜ’9½9þ·ò<­ò<¬&‘YD>ÖÔžjµR„`°¡£ÃI@²“Hçáf†ÈlÎ 6¼ž¬uW=Æâ¸ßüÑAæš-ÜΈsBFܨÎ7873‚Ñ:>eßãáçô=Þ½·üù¿îáëõÝÕóï·ÿüÓ?\Ý}ýróøõòûÍM¢ 2›æ&È…Ÿ¼leÀØ£#ráÃ¥`Tްå†cx@>¥%@íÅäN÷ø´«<ºy‰Òœ=ÜžËW^¾ótõ|ö’u<¹Œ‘¾ø³çg|þ±êAŠà0Wæ91Ü¢-ªò¢[Ðbò¶×¨F2V½P©zJ8oZð—m˜¹qÞòŠ(XŸjU°#Z—´JÙ´ò2­ÐSâ¢Ó$ÅÛõ”P’}yäÈä9œ¤°‘‹Þ§{jß§: ÈA>£¸¾Ï#ò±˜‰´’zhK2uþ™WE˜«+*YbDycó @Ý›Š6Ÿ¼×†”°6U¾ˆàGÊ‹¨]\Þ˜€øùÞ plóCƒtë86ù{KÇMïw>pB“"ïºôð*t ú7ùz;Ša䤻#yã;ULê¶‘CŽU ##"QQú"(À†µCT#Žaqé ¢5"ÀY»Û‰ø… ŽéÁS•/»ƒ OèH`ìMåËbd( üZ”(­sê~dÀ¹ÑÖ¸?™i'ìSÛ‹ø„•/¢’8ž¾ôåo³ï4fßa\$/¦¾kÁE[—>½öê6ëžIs­i »&:#,oÓ‰½9›Ž´9_ʦ[œ&ÿLb&fß`á›%ÚŠCLFT1®Mg-Q hZд™QXhµ.kdìñè”Ü·Cu3Ù«+VjKb ¿T³2/Ôr«†(ZЖ‰ìW.×ÿH.ëÚ—§´†=X™ï$¡õ> +àDÒ³°"78ùúº g‡Ð¦¸T‚;À®q%¡p´Î% Îø8Ø•:‹AµgM)§ÀhRL2¶‰Æt-\°®È‡t\Úü8Àtð~81â«Û…o—0#¦êªÚjGŽsúzu~ûüµìÑ)Ü9R·ó5á6 u@äÚdæÍñ¾­ÀÕ Ë"„H9\Õ4cËY\³!…«‹ãö™„$Û¶b\ƒ«àƒxßmrx0® ·)‰ï'!YÖ0æéÊ^¦²øú=qÕAgX݉íAö`ˆ–¡‰=~@²ÕÖ½Œ:-éçL©Ó‚ ùœ†éP/+ (âÚ²Nþh‚QÄŠuux†‰±ßˆºWj×M¨{%u,;+F›…eØÇ,*£OŽ.ÚQ¯ («œxÑ~ (}j ÊíóƒAY©œšN§Íe½7”~Æç¾¦®¡’wü>#écÎÁÛÀäml» lÜàu(›·ã'•Þ?ì5öM´Úyñ*>_<=ž=Ýüã»6÷]7Œõ¬(ÀtöU1 cã<© Ô›ÈW—‰•pÏÆb41 àì²ÎWìHÕÁ=‘öKsk¼‡Ì†Ñá !sJA¸öf yü1¦kN–/ÂrÑ;+Ѽ=±DRcõœð͆ûC³Â)» îÿüTYÅÇó'QrÏ¿ 3õÍ<¡#WÎ uæ0Wt¨5YÞàLM­ Š»æÄ% ƒRfË(Ù °õ9Èο[ ;¶Å Ç,%s‘dë€ØºmcdÁ5ñepÚ³ÉÑÜ`ÀV2'âËzb¹±/£GçΖ5Ɖä¡{îl9”d´'±é!Dµw‡lðA•«Å¡-"u#QdFèöxuÿ$Ü}~x¼¿»zþzõ»Öwnò ŒzOÌ2®Õ=VÐfsªä*T/ ö‘xJ¥[Â)zßœµ–ò¯‡ ÑAܽçtÃêWêu÷YL|àV¡; 8†6;Í Fw¡³åý\¬™S‘#‡ãÅxìœé[ ãÝ? 
}^‹h0P“—&Tÿ>i ¨íýÆž½x¼zΑ|ûSm¨Îõ,(@õºÔ/ë FbX=ivC¶#ÐýB³|Ö;às(0¾=o&QÅç1ù ¹£Cã[¶í½p•Vásô,Ú½Iã»> #¬oωWÈ™ShM é~I¡o[Jtû¨Œ.'qŸ܎iÆ’8uì¼oP¼¤/Úö_ð?[·Axˆ^yûãüæYîæ'¹³Ÿ4™ó¿·‰/; …6æýǼÿ@~bõcrf¥@ë2Q^9yLš•Ë£dþ˜ O#ñ2ÛõÍ"ÉÔ“ACyk»sw^–0CçÎmˆ½ßDOøu±5mèÙøAV;AÚÿãÕÃíÍÅùÓÕs—Ôö½ò§H–Öåý÷ßѲÊ„õCéúïèPâ¿\(×кX©Õ%*¬?¹áÁ˜Ê¼ÿ˜Èûßï¹&&— v`!‡ã&Üæ¬.Õgq˜|ÚÐF¸²³M^þ¯‰%Úš#T ˜ˆ–Af§{lUÊ!9ùèH¡½2Ž 5$yM[3K4$ZG¹[¡“øvG+©óf‚(† –îEŒ;®ëæ%ÜØ1ó[:šýÀỂÀ¢s]ޏã´Õ r°²“óp/D|8¿jÀòïw¿}ûžRuqªë½Ÿe…›·ZÑõÞO“š+rý«šÄiž {í)âLÙøÅ»ûº”ò~¤‰ˆ«ú'é¾¼˜Ò•¦5ôªzòõ&a²ÄÉjêŠ/2YBÖdÑ‘µÉvI;âtˆ:©¥¥÷ØÅS+/ü满sâÍW9åXŒž˜Q^‘»>þ†¢vÝ@!®?­ˆC|bÇô-"mt5ñgäfchŽ·¨h–Y“r ÀŠ6¦¥ãiÑ›“LÆ[vG+±&çùõ6ÆUÖd–qy|y©+1„¤@z÷zf®LÂ(œ'NåêSœ–ñΊU3¦_Lñ†fdðûÆo舴X‹!4Ù2U#´½eMp¾ÙKæb\ã‰MS;ÌÈ"ɉl‹ƒ¡šÍ0!Ð TËs-‹jA[&Gµ€à>²ŸHƒC˜~ÜŠ®çäc´'5Ý×ì󽺀ÕÞ_Ê/æuˆÖw7K8CXf}7s ÊÐP´õÝ‘6K (jqÌ“ØÌd?Dæô–°ñEÀÖNÛ¯ÕçN{Ï®ž/Y£¦ÂvÖṳ̀ 4(uº””už´M \ÑIrŽ,fDÆŒ²úTòô‚n=æ­È®÷˜ìuPÃQm—ÆÕ¤f"è§s»ÃeKÛ!Nѳ [5²\ÝöóÛ˚ܬÀ0±(ZÄGó]ÙV€ÁÎÃ1Ù8‚1A‡«GM ¹¸—Cý¹üÛû‹oë½ü2R†<ÕÌü´,(åÁÔMl¤_/¸GKª1o³p‹AçÌdÓÇäÌ‘µ: ´éäbëyÀk‚SÝa‡ í2=áx¶Úâ'GtxèÅœµ¸¹î›y7åQ†­‰p’)eýöƒ k§%9½©î^Ý–£ðä/»þ­–¼°Žµ«×‡ˆHˆ/ò›|þ׫ۻ‹¯r…Ú÷OsMÎÕίÖï6Ä'Œ7C1™kº*©ô" ·qP€¢Ž²½ìg±hµh:<üG@öY$ß–ƒï5çØQ±Gµ‰ÈFŒ‘M<5’‹DFr!³ÁýþJÊ(¶¡ÏXŠâb@kK 8oBÿ¢ïzÌIÝœ‚EíMŒÕõFº„ЀŸœÎYÚfZÎø=«F7?¤ôöæîæù?QU€áúc—KʶžÇq|æ[_ƒãÁ2ˆÛ³Ç·T;bVoI¶ÃB“4;“¨Ë–+à1“kº: ÂøÜÉØ$ïèð››#—åéÓõãýݧ/÷·ÏŸ.¿¼+½7ifÏâî%jÌc_?gRï'Y)3ÐïͤÛϱŠ]ÍçÔ3¨\ ܇[Ö·ábÀ­üwÜ܉€®ºÖi^ÂÆþÔ s†éîÔ½éÄHÑ&Ö»nRâv<ég¯lD眎­§üq—p\/f ¿z.èZjÕ˜¶bõì#*kwðÙ—¸090ŒG£ºQQ„)DT6“Ϋ°ÅÍ,6œû¶IæÆ¦­nH¹__8ó‚ð¡Ziß7QÔ©H% |[O¹ŒÔ§yI½Ïõ¼”%Þ·ú©õÑÎZ^S'±O´øßVçaËd‰û[ã!ºÉ¶ÀÑs¬©¦ÓŽkºÕóïBã¯îÖî’øúd ¿„ {É××Wö*^íOBwkóœFìi‘íĆ?Í/{ûί¥>b³ëæžÉÅq¯rá Uäâ08ä"6fU8ç ³*0²XÓÀÙY6abóÒB÷ŽÞ<•,³8XAV…nÊhÏ€bõÜ ÍÄŠ­ž•Šn_?Ï| x¼f§0ïN1õoO¨e§ûÛ«ÏÛ?Ï‚÷_¾ÄøÅóþ|ÛJ±ÇKÑâÈ^¾‡AÓ5XfxbÞ Åp5n‘!ç›A¦8uk‘‹ŽCÖ 1Y 2`Ó•×»£•ÀŒƒ dðí$¾å"鱤ÑNÑŠ_Sܤ_<^ì‡4­[Ò<ºø²i2ÆÕÈ£k %Î÷Èq“¡A‚«²TÅÇCsk#‚`!‚D’Ê#gDÄßX—ƒàL²Â`q´‰d5}$^…!^ Àp†@ƒ1lõË QNÌ(ëk©Fo?ÂË.^rˆDì/Î~üÜ÷W+ñZ´ _9ý²[²7­È\³•ªƒ™½oÒsÑ0 N‹n â瑜>>Æ{ûëŠÓ¢‹x˜æ:ð„L  G[eb{ÁÇPt.¢S],1 ¹‚»Æû\,YǰrÖÁмɦMºÄ’eÛ(êã Më,Ò&)tÁŽ%+#'‚rdŠŽÇ’½qÞ.NÔš mie,¹òS¼ÍmüÄ#ãì}Ä‘¨zw®ôÚÚ»ooÁº—» »‹(\€¡ÁÖLoV•ÉV´ôÚ9Ÿ‡©Ò3…½b³ÙY«ƒ3O\æ't(¤=ËúáÇAF‘VÓä*ÆøÅjp‘âóÃýåÛþôß®žnE??^]~=ß5£:»ö¿}{XÅüì]]oÉŽý+~»/k¥Xü¨bî`žX,p ìîëÅ@±cl+ëÌÌ¿_RR,ÅêVuwUkb' à$RÔê&‹§HyIúŸ½HKª°Xd ™Æ)›ç-«v°kK!{N¢è*³†¥Ù‚¹®ÖÎ`šÀnòúµuL²7+:úW™cÖ43î²vŽ~öG΀# ‡sÛd¯ Á]›Ácƒ€w0ô)UÁ©K¹F©fÉøª©"QVzsçhMd.í÷µ¸ÎÌŸSÔ7(s€ª-M!NŠ.¢-É·r¹QŽ©˜„J[*ãc[™Â&SA|‘H‹‘~ÓäµÉc’ð1¥ºê…C«Z¡žÁiŽð-°$ó§Œw ö‚iK^Gñ€:±ÛÚ-› v%^÷$Ò¨tÀü:’œNl‚1¦ùK¶MÀ¥¶` Ð= sØHõ7P1 æÊ$ˆujÌ8‹/ì˜ø ")1#Q•Ì)ΑI!¢fÄù­Ë‘þ4Iß¾ë1î€1h¿¼Y8§:¨"%FÒ`‘c`(zu+¬€¡N·y–TIsÓÛs/J2·BSJìj ¯¡Ì{³¼o!—)C21V:{w2i“.qú‚ < azs|•Šèì”!™âa¬¶ÖTDތϡZ-¦ ³¸±Iâ©ÝØ•sïXÓÞ-Ÿ.¯G:®ÀtDâëñ UHšâ„šÄÐÔ -üÖN!µôT œç¬ì©†Àå”WÚðW¼$½“H#GU%h0 F ËaˆÒüŽê–ôÀOÕŸ‚Úžu{ )Õ¹§½Ð£C“Mfæuñ΢AÊ2ÿl¥üÖÌîÏt%_ýkÛê§=éÃe’<®>%¥~€åq¬Ú%§L.õþ¸4ºÞ»JlíÖVHtJ‘¢Çj^‘ùR¤…mûÁ4Óg5Ú,A ¶ã 59zK•‘ÎÝac2î®S1-t(Sʃ7„0vÓt°èU¯Ü®R/ÍR¨]Eâðæ’Ä¥Sæ2OÀÜ쵺ø*No-Ô æ¥6d*ã+S—'»'6žl/HB°æ±çªR@»ÄÜŽ,y™wW£ç˜MQÌßwB‚FÛfê¶É­ŸÐG9+x‰ÂŒ8jX=ÙZû¿§Õã²;ƒ±aꪯ2=“Jþ¨[eD€ UGŒÀsôXyßaN,oRK2/C*L ÃPʘb~-V¦ƒ›¢j!¿}T‚ÍÆÿ’[m'’6VA‰PÒ(Lmaƒyî+3vvXyÇŸ„NVøï©¢Šl‰e–*çTÒ<ñ"!;b|cÍùãòbùáÃÇ叿ý*`Ÿ^UCEô"@8¤Èœqt U•Üú¯ö+‚WÑ¥”cl©ó¬_JmÚ¾÷"urê©Ë÷i”y¢S2 á7GPk{´w§ÕLîµKÀ„‚0yš’ã+¡Kõáï!–@Ô"ÂR纉¬“/u_&­âS8KÙÑ«R~¼¥«œKmûæ.,öù´øãTú‹2 £h§¤4‰âÇÖ34ðá´ ©ý“s¨I*fô¤ÌLA»~žEÒ†Û?ðƒc""1ÉT™·ÍwYNaa±EÜLóž—Rrlëå6ãî¯/M×¾[}¾[/¹//Œã—Œý|EáØnuÒ\e‹$‹fœfc%ÖÊ49a:D€Û㦙É6ÞTìcÍ[ïž|˜æz1;ÞŽ2M$‚ª‚Ly錵Þ{7rÆÃ ukM¥‘n;“Û²ÄAséx8‰A6ô*B¨£/pòy˜Ý™"îàZóí ëÎÔVŸÃsÕ˜*°pk–óSˆë±=|j^µ›vñëÕå“{fϹ½âÿ8ß\n,›ô+…±jÞ¶]eÊÞš4‡ÌÉ, ´éŒ°§âr+Kð~”ÒÎ+ØI×¶/½FD xê½&ŒZõ:RÎ~(ëgÇÝ@Ir¤“ Cˆ©Û´Žº­Æôj^4Īø8¤)‡ˆ¢f^>Sîu +É ˆÌC¦•°ä ={2i5®$i2ä)puG†’ðãJ §ŒXÌ׎ò½Ÿ&$ûQ7Ã0Á,ãJB2'b:µ¯f/>¬n®vúxucöxÿç$g #ôK=¥ª1Ãlh¹û˜È[øbEµt¶Äì9ç2ÀºSòµJ—¯µ'œFç æge¥1MI˜2•á5Éì=[ÆË?ËħAñĸ yWHTç] B…^mfˆUMo$ ³DÄ^A¯˜ÿ¢9›?Î/—¶Lî¦á«ôó0$e•œ«ðUcžDŒkökRn8(§KVM¡•|¢2 
=s'—)zæ…š:Oyluì•YD»s³Ž*“÷w’9M§é®¶ÈÞyL’dc’VÐô‡IÒÒ€¯ž¤½}’û=u`È› LRO W°dÍ›¡LZ¡,¤I|¦oKšúÅ*”‰ëãKk² “@sYòổ“*l¤˜ö®qr9Æ!úNDŒˆTÁ$ÕQ_#oÔÄþfìîúï??,môÓÿóýI~‰ï#¿OôÁs,^BTXOƒ^aWáݽxq "î›=ñóP™·‰ìÚ{õ´^0Tã…œû¢î©u§—x b°NU‚ë!õX—\jv»\K=è&/(y]}öˆmÝnÑ|ó שHª!ç-À1ŒÏQ¤dØ“€Dÿ,u{mâ®[!uO{Fõq·Jò•íº¥¾ëHæJæªÕâãUÏ%³õ2ŽbÆ'qŒ¡ìÊ(ª¬á CL/íZë5Ë™²ž¶{ó%“;Õºè?{ÙÞ¼¶HŽ1nÚJÞal#k˜QÄü´Ó|$=«ðÏ4¡i“{œ õÄõbHY÷é¥ù(«QÈ:k‚*e¨{[lL²ð;þÈÉ}ó“\FDîí­5 §´XGX£¼îàŒ™˜ 9‰aÊæ‚"–î•™ñëO¦Ç5m&zÀ`ÏÔÈ©)ød‡â1+'Tý&­ç¢<Î!<Þf›;Ý“ðÊâ5Mº+´jU5»ðNqmýL¼F¡ë¬™kΦ‡…³¥r\µÄZ’ÒkÂÞàÕ±Uy÷à{ÖXHÈS©‘Vf ³y ڛ÷ÂÃÙ͘pËÆ&JãÇÅ‹ÀLþd}ŠÛ¤^GÙÄØ3¯fÌówا¸°®æKƆ$Úh~}õ² q1.°Êˆµ·ÃlæU;À™w¯À°S îlÉ*”Ó¦ÖôÖ/Ãl;†žJ9ÍI"ËnÈwן¯ÿùóÇQÆ Èæ›†õP}&ÅjNÃ!šÿ²<îQ(#ú ú#ÇW6Úñ˜S¦ À± ëíUýõ;œ¥ZÊMŠ¶Ñ¦Ò›$P U˜ÐÊ`ÿ(–:᯦aôÚ:;.üÍÞ|„#l,¦áñ?þvÿå×ë»ë«×ÿõó{ïyûõÕ'àÝ|¨xZaVpN °B¥ª±Ø7õI<ƒÐ‚5çæildÇŽP’î¾ó#0ŒÑD§˜SxC>n§î“6­ûÝÐÆÝé8È MC~èޡ˶;?âåÔ»¶Þn™SÒ,i(w_šè*‘žØ3±gH§7~¬eB©þ0ÙïWQåR"êBdÕñ®óœ•EÖY°9,7Ypâñ.5BžŒŠØõVèX^³‡ÔÑ[5ze³÷,þ㟿v¤NÁYL®j«éGU;r±y!±õæÉÇèµ7§™ßÄÔ1÷·ß;jØøRL žÚF°ï¸¯ósu Ãaž»uæË¯‹p+'É~a%aIÎØ—gj|‘‚`6‘¸Er£`ÛµÄþ!²Öa¢ávÕ0iš')¬ObêaÆöš!ÁŠ’­&¥h0MÜwxÔA®G÷ì9S³%S"ÆWé1¤¹mˆð¶|·UÐPÞWµœÝ£í7vFAzêW(Ì™G!t%•çB+—\>‘qt{í†ÃÒŽ+ÎgÕ T¬¾{ZMCZ¹½•!Ey^|·xƦ¼rpç—øÿþ·1­|V6\Á-Š sì™HrÚÜ÷rc!p˜RëHчuU´ÂVÅù‹‹¥µÓz Ôkð±™í,À ¼eã|°|×¥U ÑÞ¶ÿÖêi(óíýÕ/_o>~èõ¢rÁôÍ‹2ÿPˆöa 04©ëTV£8ï¬ Þᬎº1ÑåÄ­ƒÔ¸”²Ëе—Ö˜´½¡Ç0cÔ½{]Ì\ÇØ’³éÀë”D¶5HR¡Q™½e88·—˜Éèo±oL{蔹Ȭ°¬Í‚í{EQô^ µ2(œBLHUãÇ$\¤\O2TeÌxU –‚×n¦-Öï¹0ãË(&óÄw©ñݛޛˆoýŸ?4»ýpó0_D\¦AOÙæC¥Îo¡Êżu£7-È=™Ú¢jgGèl&S“úëoù×ËU0S↸W6¶êyQÙóZHq Ìv£1ˆ®Bd[mB|QM?$›iÊ „éOJÈÉÎôb“íbOÙºÿžæœèÂÈm™Sc7ukL1ã¡´à¢9"! ÁŒè^êL1§uöH€d›=ƽ“LΠ§möæJ :èŠôÏBΉ1n£Z4cdJ”ᕆ—•2I¯}ÜW!+ _5±1áM¸ÊÐÁ‰¼½˜.c3mòv¥ç{.ÿÏÞ³ìFr$÷+Ä\tk23"22èÅŒ½†/†á‹a,8œÖ’Öœe“#éàwDUuw±;»*+«z,^`‰ìNf¼À“Ь˜– 'CÉ)Ó6ÖxÒÓ[s"‡³T)JË{”™äÒ/zŠew:Ú V¿Ñ!åÓí}ÿùÏÀ\³`lBì3ô#lb5æµ^Ý*Rä#º³çúìíÁûÚ6ýq³~ºK׬Ü|ó»<¿(··\Þü’ì”*Aǘ­¬Œ1Lt*žØ)Ž z‡KÜãýY'}nŸž¿¼^}þtZ<ˆÐ—6PØ_ÙcÂÌ@çÔä·ØBäHÁ/-÷C.î|‘†ÑÉÔÜÓ–tÒ§˜FHGB’]á3­¨óE ——•U¤TW”,À¬¢[JÍê¥àAííÝž)Å'{eÔ)±F“ç¾sš…’çŽä\~b—zÚq”Z,3—ü,ì1óÃÅ÷˜*a3Ç^qž Y]Ã12[WewG-¦^Å ¶my.šiP#°ùMíe[ÞÜ«úwj"Ÿ•žßÔø¬‡?¥çfÙÇö½öÆ;ý_‘Ÿíx7©ÉƒºÃªi:*ÞÝ1UxÛ)xfuƒh¾žC‡™ŸI„S[î8¯ Äg6ƒu9&W6¼È jÝc=æ¦ ÐZ¦ ìbΗæ©ç©‚.8‹µÇ¤Ä;uÍÁZØ‚ãñi׿¼¯é¢«–ÎøèŽ3ã…Tfcy¦ã¤šò “œM| õzFý–ã•…Ý‚:Y±j(ÜÄ+æ©R`)× >ZCIJ“:!"GžÐ æÙqvií…þICÛ‚¨2¥àÛ¶™Êx¯;"Iß4:“ÆÁZDyöÔÎw#}GH=ü\æ}Úû4n¢HðS4W­¬~=Ò¼ÃXvˆë%¥Q7j7S9É1‰ ÖÒ\OrLIjjô1‚m:›½·íeóõËÆï¯ã?|ì”à fÐ[ã7µBB4IoµV4NoEWJ¹Pp€RÇÆ+µc9¹£çvØc¬5ûv­?yUÅ€9$ ‚ðû Ž· ‡V8îl÷…× ú(ªI1%yFŠªîI›ØkÙúñQ#‚7@Þ‰Üé}gž1Z+dš2®jçOža‹9k\ø(”<½YL¨ß ßYô.íq*»‡®wwGŒD$÷º3¢(3#J™èÄáne ¡fPTµ*ÈÓ9i<ÕUõtÖ#â¦ó©íQパá;´!ŽcµÿTv ©Ÿ–kýÞÕSaJ»ÚU»È³úm,•­‡#ªu‡¶S½Ò½j3 ÔÞLêUÕ ãzÕðä³zõ€ˆRµŠÈ¶Ä—«U ‰”F×}ˆ›D¹´Zõ©Ÿ_r¤VÅEŒ)NéÕu3ëe”‰Nôj+e³[ëÄ?CbŠÞKMgLµ)ÎîrU²Ì>YÞ‚¾äê%ý JÒäÛ«¯vçç·í•º¬JÁƒ6»º}ú|ÕÖ‘ýô—®è¿ÿÕ¼òÕeÿÐ*׋*Cêß{5Födõ_ïxJíOp ·ú”òÔ¤5eÂ%ÚÑR»[æ²EÒg¦Ÿlïî7Ÿß¾¬=7) чK w šJ;§Á0H•æ&#¯¢ä²Ó ë.0„"듪"`v\ÒO«ôi¤O©V.®ŒžId—`¬Ù°l1¸Ù{XO¹äÙÖASÇ`§þBqúpgM…×w›—ªü,“ø„6¦:1O3‰ºÜiÚ  å ÊOkôëµõ2ªÕ¿3—Ô ùQ±qUrà‘vQo¿^Íâzlaõþú8íg"“lqfÚÏ1«,g¥&…Ã~VäŠPSË9@[ï¼Ö<æ};ò¾ùúËÃÏ›»ßï¾lvË®¿ÞÞý¢ÿÒ±Ðj¡‰¦cMäÃ$Ç—+  m•Ñùzkk‚ïÍ15Û4HQ+Å8{”ãh—úß^žßÞ-‰9Ï:Õð3þ†[Ê…)QH%LØ'ƒG™bvbÆ€+qaÆ å\¨6;"ê¹PpuÛW$DnB¬K~H&ù‘ÙYÐè¥ p’–d½x2NK–³m'h v8¤†é}öãÝË68(X]D€òôÇ:|à±æE(’mfÔ0wFþ#“æp3ä;‚m(ô~Ú]%ÂH“<áû˜æçîÎÅÙË,Â9ÔT '†$îi´¡2mm!i+Ó~’~8»_g‡¥U´34>Ú2™ríœôïÛ4°‰iÛ®ƒ|úpôË·­ÒÅß¾Y«¦rJL{F±röí½B³Ë8|¾:|ÖŽëþãúè]éb²?Úí}F¤ûÛíý‡”¤‚n£vïÞ^^,YñùÓµåo®?ý®LöáF"J.~@d¼úçüpúÕ‡§ë·íþrÂ.<÷9³æí KâÜcæ±ÎpÐÊ#ÊŒ Ê+WO›_¯ZePu‚âS܇˜ÏDj³±ÎÓLØQ¨1ñm÷à‰‰Ï­Û"›TÚÚ» uÀêñË„»¦Àf“ÙC` Þ7޾õ &~xÂBï°ê^*7ñëðVÓó!$T݇ßÏÄClÔÉ’¦ðSq¤N<ßÄ'Ûpé0Ñœ§šk^SíKÑ϶ñÞۼEWìLÿþ׫w׎ÀdöÓ„³À¤4§|Îhˆ 5Ì»õ"ÛXÔ8Ǽ‹sÁû¸ˆ;¤ÊéVÆÔË ð’ö oô\s}wû±ÛT³ÿÝÍaÚýÍácëù‚±QOžb™îŸd ùBt­â Rƒê˜º¢|‘ — £¼£:¿dw„:dfœ­ ËZI>P#øâ’½ÔÕ›cÝ`ò8!¶òk·úø¹ÅmÀŠ ö’¥@°”jØD¢°ÔR­=BUDM K"˜"-&͘ñ¡\Š>NÐM3‡á|ôòüòd…#>¬ä6åó}¸ñ®ñ8 TÍfÙácE¡¥‚f£¬<.%ÛŒBKlœíu·(çÎä8€}áÀUTf M²Äi*$šõÏK ñf-Ѻ#W%3Ú ÞÂòX 
›zTF5g]”Æ©fÒâq„n=ä”m:€VB8½–sì°¨¬ÙóIH!Vö¨"\„Õ Gˆ5ÂŽÁY§Ú2ºqñ ðDMâqikj7úžÍ>‡\·ó²²é­‹¼Ë, OX–X`¯J…ж?ï™ÀS@ÏK˜U]T ìvÁ_$«@)óšDÜDËöûiŽPrŸ__tÜC~ˆö²ÙÕ[¡¨»B³d7˜_)‹Èæ#Ö”Ä)`ê½ñÜb§_ï7JÏOÏo¯ÛæîéAÿ÷sûfümóòEÙÞæ“Ù·¾¾´e ·“ãŽêN,Þg!•_" # D©Ðù‚º0k¶X^#dµk«?ëÂ,³÷“ºŒãþˆ˜*œh‘ ŽeÍ¢˜–”çÂKCÉÅщ7=C &ž´2ˆY?n€’UФ4¨ ’/ålý‹¨Œ”jù¡=¢Bã0Ð0܇:oO½…)” Á†‡D‘ :‚ vôQÆèØB .W52fÚ[h/Ô5‘¡»0ÐY¥ï”iÚ_Sâb¢š7‘hl6·È IuX²’KEQ™à†IÁù¡FÖ(q×[«º!G340{J8êõM {8zZG±ÑP ýEwÎ0ï³$I s2ÈÁyõÄÂd/¾f\p´€ÆŠ0/H2Û‰5mm]œB›í'„}’ÜKÎ%«È ¶ã ËføëÕÑ^°–Ú#¨¯?7òr1*޽[˜íÒ £8KéïÑO8E TÔ˜ˆG¨ÚÁ)[I;¬ `NJ5•ÈÞUg ÏèÛ»ãl[™yêñ  ÐãÔKŽ¡.%½;"Ô ü×àõ›qqJ:ZÐRHlÛléG-i UD#÷ò¾Zçä9öZQJÚ7¤:¸ÌP¢é´‚ÆJuSwGx Åj'™Y¡«ŠVˆâI´BDRÅ&(J©»w¸3¤ê YÉ@3®ÙfÔˆðN0‡G,Ënº†4@Ë>M„ 2‚ÊoÅTARÄdz§»ýòðY!~úÛ¯›O÷ê^uæ¼ÇÂöcïHýrû´éw˜l7¯Mÿ¯Ç=ѳ­;;Ìr›w?Z±»ç6¦¸-„ll<@Ù*5=ª§È¦‹~gÅ¡¼^£8l€¢ªL\ú–ÕR°LáûhÓšAâ4]U~¦é Y…?­ð R46rï²ïÉxg†‘S¡~°ÚF"¦ÊÝÀ¸ÒlÍRþ¦mÙTÂXS›¸TSŒh[ …ÕŠU‰mŸ¿lö`P¥ºr+‡”)$rL~œÅ=1žß[s@a¶Älˆ£“á~O×›Çç—ß{»gÔ?ÎÇ:[(ÄI u•õEy«G®uN»#¸¦þ0±­!®)?¡†xhC‚ ^¢G²«ëCBEµGt#}H˜íCZédaÒW'H>Ó–ˆ”õ“Fó6ж¢î®jyoçÎPXêK(v©qÓ¸NP¨’†,aÌoè!÷ÙW…hE®qjlì8—É?)2Ë_k]ª"ÜZX¯áºõi˜9œŸ áxÒ#nŽÜiîÙ¹†˜FIÕÂáœwþ|EÞVÊÖy ¡)xºÕ[i,ìä}–rxƲ)yA-IJçÛ0pꢎ-¬y=êþ¨ƒl•×#ç úÈÃ…×ݽ¨?²}}î6Ý¡Ó8eûñ›oÚ¾¨ã‘”3ž’’„gñïm{Ã"9ï«^sÕN³_skqVã·'¦Œlk€-Ö†W$Û#yÎ=þò= ­ñ°äl]Ù*ÊýÍ‚hè–0G®šq­Y§MÐ.²®-Ë&X"Äi‚¬±?IÕpfB˲ã0X§ –ÕD±’©QÇ4Ùtè:7¯?BIªœ"öl•ËËȆʧ¥NQ`õ>¢¤ÑçÀ*’@çî!‚y§hZ‰S¤×R:K gaE²*ªºõ}û#jÆÀY[¹:–’-?â7j¶£¤I²¥dVcdkáÆüîædTiœÑ=ëý˜9"ŸãUy&‚²G î”LL“, 5Ùˆúš‰r¥ÕÇC…ÌŽNjõg™&Ao¨až›&° N'?A`6»ÀdMA«ƒ—F4ºé–sÿ˜;cÙs zèe&#ˆKè1B`¨ÒÖ¤FJ‹}qËÖj‚ ¤Þ6ÆL2Eˆ1¯¬÷•(k†& ›£¬É¶2reäîˆT“éFO8®’ãì}æþ—ÉΣJ³žÓd”FR­ÁeJ#ð$$Èó×xÓÕK·ƒ¯©”{°}¶±j­="Æš >E›|êƒV©ÿäOÕr™Ç¤7t„üôۣϲƒò#7Àh½æ÷Å ïÎX–ÿð©Ñ›–å?zÀÔ'MœpÔ¼í³ˆDXZË¥º¼øi±QÉSK[ÀˆÂ“L!>»”÷YQ/?4"ŽB˜!¾–0ój¡-ºªEŒ|"r+Tà­1ØUäVdJ:68!¸³æ¡I¿SôVŒegn Q²Ê(±E")B)CXFÀ;pÕúÜŽp5ùl´\ž0,ãâ)¢áŽ£¶K`”¬Vn a܉kᦔó안±ÞÊÚ!PæPÚà´„jX•ߊŽÅö˜Ì-_é%´+ZП^ß¾¾ÞÞÝ_Þü|­HúÏÍÝkôfÊTHå@íS ÅúO§èŒ1_i=ÀÅâKIC¼„3-ZˆKAQP“zÑàÃÐOXçŽAƃ,-E¥qÒ±îh9žné€Í.lBSà)©@|JGŹƒ3FãÄâ…J£qŒÉ*öÓFàã'ò2¿­žqnéÊÏŠ «dx¼ Þ+ÑrZA8ÇNrRò02"f¬¬M`c•Vs´R«Å>«#¯^¡'¨Ú¾?‚/²j ÕkÈé×^\hÓ å°¸P)¾Ý–¯,Ô.µ²pò*Ãe…Ê1\V8y•ò5…Å<>®îÚ#qÕŒv5!.B¬³{Xd÷T7Áž¦ìžm¯CŠcÚªƒ³¯ ` ùÚü,+È<ª%œ±pY¡T’ò…»á 'ˆ³I¾äi;‘õûè„ÐÖ3ýO·ñŽïË.8Ávi·dè¢1»7%e®+ã8+Ä+Îfn­_CC‘mþnèæÿ½šò¬Õ…JTñ_ÕŸã™lOtvìg²„×ËLΪmr]eÚ¦ µmÜMз‚—Ùjªèü /sæaª²ÀIáŽþ¸Xyî0µ‚À§¹#„±:؃ðå[ÄZƒ=¢ý-† óØÃÞ¥a™n—÷ñè7xùjÍ`*®w›¯¯Ï/-ƒÈ¡t{§×µ¯{_½>þá']½ óà´^?ê>~x”âàéèœç²Å©‹¸Á;in!$8å›tØkõ~Öä†v@ñTÌK\³$}äEú?PM%<Ûòéaù >˜FŸÄs‰]ïKG5wÀü à´’²/ È„õãü. ’-ü²@Ë)ífZ §´Œà’jŠj™| $uã³F[$¿|¼5í6øÌv-»¯ºÏ:y?É<6um¼Ò³Ãâ™É/{4­òá´Jžeõ(¡,ÒóáhI×tÓ†ú¹f èî7·_^ï˳>6ŽçÕ«²8·HÚ UäóS’1ÊܾúQ™™IÁ£+ð•r~Rh€swhW»u´äùÝþxûnõöãÛ—×·íÇ»§õ¿þ~­è~SFÛÎÊÔ"žÅºÆ¾Ìq™ï¤"زA.Žꦇ—"j-O$jÛ½ß^ƒì‹ØG½†l…×)k8 ƹž¢ÌyäËG á"9¹´šT,s÷:ù^M*4œ–㌊*þÆÇdsºïoЖ¯©/cQj0òÌ\Û£¨‚¬áµMy¤ZжGOhÖoð×Þ|+Q•±÷DýõöÁ&ä])³^Ù+öOýðçC}û£þüõå÷‡Vónm’G¦ë¢“¢Œßú…k#ÿó[ë0¶Z8„ Þ¬¨:ìÑ#ÒNhç®­²ùà!†ÿfïZ—ã¸uô»œß«6/äǾ‹#kc•/òJNÎfŸ~ž‘¦GòIö$µW¥â’G=M€¸¤;«T;JÌSbv©¬­ÈiÐ5«t8y~kÕédI%{«!puqßJ—×fÝÞé~#ã¡YùLÙ‘Y¢;–ê/¼6ñ<.†Qãqƒ¦7÷uº€CÚº x…EÈ ›#–o°©ŒºóÌã]aâ4*A”¿Ê>üïîÓG¯»;|r“3ì#Å]mDˆ¼W\ÚŽÕ·•Vmþ0g,ŒÞ×,5c‚¢Y±0 º pˆí­ƒ:3ps $ïæÒ÷°@ëí\>¢o;€­¬ž0#ÐbO»4vŠáñÿ—IÃÄÎê-]¢í•ø;-HÊZTLó­An9fK¶ÿ9„תêY$ˆIµø ¸ÒmMª„÷îÚQ:J™Â”r*Ù¦íBŽ)D7Ò¶¢ÔØVŒ±Ú´nS{23ŒW§à&½—!í­|}üöøsþp©õmñɾ¾wZeEˆ ‰=ŠÖ'j¨í{ërä‘{[iô_þýôüåδÐÝã'½½?ÿüðôÇ÷ùf¾þ`”Öõž'`Õ‡¥nëÚàšÖõÇ›ü^ë.è3d3(OoѹîFp©¡ÒcËqœ°“í±íÛ•éuåÀy‰øÇÓ§ã}ÑWñ~üC/Çýç‡û/¥výžGohåÏ\93§â•s¢qq)v¶yô\ì¼ ûˆgoíÝD({5`PŸzLƒ^Úñ][¢%⮫Hõ ©IÕKö³h–ÝË8™QVJ·†àWH"oä>£ßi>#^hÚµà CÑçàXJ’ ùý z L ™T™PØ(™$R—drØÛW*§Ã&‡3h¬”Âq%Ò•*/Ín…Á]1#5ħÔS€Ø#öèZv«é¯9ë ÜÜ]½ð¨¯HýÒîz ¢C_–ù¤t¤PŒ»eWߨ1"ì¶ØÆ:e‹Ð¯BYèm׿Þa·¾c&“m™¾ä|Äbk‡Œmípƒ…~\Ęã³LÞÙÞ#{[ùãö×üÒiÁ*U9i©¥.@–µrý@½Rm›y"±íj¶Ù{HeÛ M½¿¬¨—Ó5z¥³™å3²³öäÅ6GoòÄTK;éa¶·]¶ ÿq^*eh*:Ë UIOuŠ/±ûMW`ïË>]J°äÓ©F“ìÌð‰H#ÆLÑ:ˆi“Ë¢ú)¡K_‡ÓÄŠù‚ûVljðmŽÙ–fZW40H¤N’Ç]?»Ø>3—­™Ðö¿½ËÊ)»lßßÄá—(”†æá¹?[3½Îyõ@$öq~§ß ¿§ß,`ò‰Æ´AÕyÿ þvÞÿ? 
¿õ›«‘^½>G¢ á1¹¨_¬ëƲR9šoj*ʸ – pXÄØ=$C…»M .I1À_°×Ž äe“¿Å ºTXL{[/¥3¤ËVN;2sÂqhM }M+grÕU¥ºdÕ*•:¸Oœ¡‚+Ú·Þ®õø—QRmht)JYªmyÏõ¯žv.x|£Èˆ:±Ì~#n’éÒm¨JûŽÇù'ñS2t¹]Ѷ¾=~¯²<~jë 4Þìb¤½]C½èœ©Ôë¥#§~Ýæ4W ÏÀZ¢ŠÁP ÓE ¸ÊEòìSèR«Ø2øB>&Q)Þºíõî_[zy¼ø= Rû<ÍË× ºl«\Q•bvÖeA…Ó£¦}‚Ì(Ëtié T22î,ÈJå,~ÌËÖ)/ȉaì€K WܯÉó\…Á¶øu¶>ÂKhšhsj7]ã’Ît6\¢ël°]E¥ÎÛ‡†âñº|ÚYC¾†ýv˜ŠÎ‰Î˘´‹Gtu6pÔ˜ õ=ëÓé8t8JJqã‹[Æ@±M_»N’ž­ÞûöðóùñþmÛ~¸­:îø_Íd®6Á–dƒ N]áØ³•0CšQAIH<DfªTÇEAÈžè0À’ÚõL(€µÀd^ÃñļâÖ¦Ã#¨Á ‘E­#`wQ-,¸OSL¡Ð”¬‡rjF®.><å`Á'«)Tº01è=Jõl €1õ°-x@Û,óÀ¹n4w¬ÅÝò^=ÄDBe¶ùxYøxì”]{:W ÓÒÄ¢otÖ¸|D¶¸¬¶6Ht•ÖÐä*ƒ¨1n´†ó#àʬÛãÌ ¾6àÌé ÙÔ¿T~ýºÛ# ²¡çÕ±*‚ä·´:fnµ»"ïÛ)'V‹‘ í^7vîjw88gµÔâd5ŽúVl3v­–ªb[éîÚ#ü®iì#3PÙ,4º¾ÆV¢¹w®uÌETâŽ[D>Þ! — jÏ_~þøªäÃóçÏÞ½ò^`̹m¹æ/]ç0çþsEa`TËÌ©]a`DliŽŽ!ÄDÔI$™HR.—ã ø)±¬!ŽÛ'Ö5„ÔåÖMžŽRŽ#™ýäÙ‹ÏÚ­NOèkwêc Xo8Ç\jÁÓßòây€g[Û‚—lÿ¨­±/^ˆD.^™8^ˆìÊ’ÅÁ*LFJ0Y+_}ÿ—ä¿ýï?áÇ÷Ͽ>>þôýñ1ÓÈ¥»‘£ò›†GtuT~ñJ‹Ç°û=~ûmä¤wšôÍvìÛž–}ø8sþçÓ—‡ï…^°ËÏïPÅ< e‹zdbØ3áh™³_Õ,kô¶ôÊžþø~wúàê?Ìs /os wótíçmYJ‚=9B¨¡ØHCu¡­M»³-³)Ï1¨£`»ª\ÇD%OðÐ’ñ¾Fx"ހ̦‰AÔP¬zÌn˜¦%½ƒO%sføË>Å”G˜eçyìRÓª*aâúM »©•]ù-8¾¿†`²¶$Ø·¿f†C?ú,çç6(oX×Åɉ͈£G&…†ú,£†»Ê›ÍëD¥Qgî†]*jTÖöêVˆ×kœ‹½$Ñu¡omë\lѨEþWHX‚°³F52‡Ëtž98F/+»¾dèZ`€*Ìî «k ¾'ӘLJìÃDVýL7Àe^ì”8Ôï¾}¼ÿ¬Br§ïúüdYû‡»¯3>Ê6ØfÂ]•hËj°à¢çº×ò6Rp”Æ/N)µ¹Yš&ÀutÑ#9]·]l€Æµ·qñÖW‚ì S2ŒÖyElùë$]µWª:V5Nߊäú˜¶MgÒ8ëƒÃ™N}‹ë¥ì[žŠ]ÂPXÆ dñ`ˆ=EÙ‡€Ù)'ÚŒ¶ôšÆy7~¿]»ÕtFºt·"[#;J •Ö(¿€ mZO™(6nëu½EÆrOm0> ɉ¦Žã®yÈÄ.Mï\»Ø&еMêŸ"—&ÆbДɱªÛë›ǨޓɠXÈ;†V8ÔÃ#âÛ.˜&0þí:'·_èü6^=Ý«d>Ùÿ¾}¸ÿøóã×§ß^ž~¾h ACƒmÞÄЗY¸Aé¸Ù·ú2ÛBl»¥»æÐêðß’Ýs¤j4‘•³[blÌÄØ—– YÃ"¥âü!~¾>¹~8lWét–Š"‹S˜G—h‹gtÅØ¦$ˆ˜%Ô†\v0FQ«{4^ðãýPeZ­äÂM»¶_ýø‚è›}8ÿù¶íWÅÕ A{Ä“kéÉduFB {»´ë(Öær^nm$oX)ÂÊ‚ôO* 61Å.Jo'úŒp9í}œŠfu Fß +8±C6õ;»œ3}ºìº!Ï£gÆ€ô´ffº³…Ú»³ë•Ä*g£3ÄÖ.¹?ölÍpÖnܙ؄ÚÚÚ’g˜¤$’ESÌÎe÷¬žNV3Å6M`ú÷<ß}zF6­i6Ö!æA„Ã*·A-Yì“ãcûø®¡£?—ýÕÆ? `aÉ‚  ‚·@ù'>¹U|²® I6o™>h„šPXït ÝèIæ¦ÔªBG B^Çe>žÛgÛ†N«©ðèK% ì(mqMÄalÞ1qxïŸ 0«ÑfØò+¸è05ÉE¼"ûñô©Vï^þý_2€I´qÕmËwžA%Qܬ–Z¾óšö '©'üÕXdüŠl&™Èûxë…‰/÷Ÿ>ý®')u‘_ÿå=f\f†EDÏYáðˆ½E“`ùÝËŽD¾2ÐR põôJ&Žæ'b!.†ÑêãKêWL‘ÒÔe;D1¼¢RÈ(aK“ y­ÒXw£•ÎH!3¼s–ί̮Œ“<ºrk ³v¬±F}¯>=ÿÎa’æ7yô7ÈrÖ­ÜØt²NoÛ®z¼zcYC}!øˆÑÅàR[b³‚HÃr™“Š7•p>z²×”pÌK,è1"‘©÷Õé‹#oÑÀѦNz4pô!쬕È.£ P¥ß'p ”ªôe [Ó—•Ò¿ÊBäû*ãÝ Ôñd« âMËD+üpÿðl+ î ’ãÏå¾>ÝÙ¦\9´ó¢B¹"6¥Lô7UôVš8®ˆd]¾Ø¶dµ ‰‹z³ ØKjP½nãIq›ê%G¨Kõ"óî5$‰>ÓRkŒ"ý…ÛèÞèdI÷’ÃöÒQ³âX密6œúÛˆãõ²W·Ö9”äÃ-³þË¿Ÿž¿œjrïР‰cZ¥¾Úáà»4qr-ÙsUô*â:‚Ò¼v+\bÁRœù5ສz@ªåû7òŒ œï²÷žýÍ[¼š7í]ôS*2ï2ä6¦GA©ûw¬Ýoثޡ öãkd²ƒÆ )øÀâhGû¶™à<-£?>þâqëÎoÏO¿ÿx™þð“5—ž}x“ö¾î©~5”mh£ðÖ³ÇȰ´}ùÆéb=‚Ç„Âê •t±’2›^Ðjˆ2¶¶,¶Ý1·UÆ7(X’’*[°4|‚:æ8Ö®‚Ž1Õ÷R R»êfÙÅ&&P"ܸœgü6ÿp?ÏËëüÄ6¨vq×\à”ô¿Œt0xp@=n…JÝÞ„6'Æ©èö’0Ëm}n#÷‚$cÜ^bÛ© ¼IÓJr±§¢®à°·ßK’b®5DE¡®jZà0Ôñ¥*Ha/ÑuUÞ®(ƒv¦ä|vâ(QC®‘½åÞİ:Û³¨²=KÙ>AàèÊyÁBô%‡Hž…÷Yœ¬ Å ¦€œRÚöS¶± !tH¡>"ìöS2¾ôwìÈäÉñÔÖý䇶œÊ-ZN¹½yl|Ê#1þxútŒJõÃßÕSxüãñçŸ÷Ÿî¿äBÙOûþô¢býòÍξ;4jÞÝ?ßßý|º;ýæi½œ^ßOÊRûÌEKXô”¶µ„ý­wÖë·ïéû[îJ“›úg6yÜS/ÒG¸†÷”|t pÑôg/3äÛÆ†©×ßÛÐ.Eä·¨š‡*,ƒi±8àìœeXÐk´zçY –uËÁQ£uè²ï1X†À‡ Ljl^·qìLqe®|®â<ÿúñþNí÷ÿü¹)˜¢õ|V™ö‚JÐPN@ [€·ÇR-äj‹ª.…ÒnDˆ© Å‹p,Ë$ÅÜ\Ñ‚4dr¾ÅÐÑÍe2ù½‹¸cº,âÎlC»YÙ0Ç®f¨ ¥hCΪY'¬²Sœ\µÇìd ÃU,MäŽåöf©Ž$>⎿|8þe“bôˆWh…>Hš| èb¤äF¸@y*Ò§3ûMÝÅ¢>Ea#5*ç@PO OíÑI Û|œÒe¨@á@ \^µ=î¸ûóÃǯ??׉Öqƒ®ˆUTo/uQRìl™ôºz¹Q·Ë“º#¬Îzžâ_íS}–¸+ÝwE«®ñ“óœ¤OMÆ–ÖÂ`3…® ÷¿[k·uöy×Nû¢"ÔG¼«m-±,‡Ý=Ö5ôå;Æ0ƒ[®¨ †‚æäà…òÍ$oÔ±/QU§x‘M=Õ#Tg{Ãò™3-ÕÆ( Ó#ÁMZªë¢qŒ±½£¯V-¬qE t¬É“-6ìj»@nªeíµ­ôðñÃé¯}£,ëÏ2ñËjbúGt Ä^5[E°QzÖðÏ|$Œå™A >âr]Ó³}~éàu†Œ®È"žp‹š!˜1¥½=T%3äàÏÈ60¥ßù*…›:àϪ5Dޱê«ñ ßl?í¶Ÿl|‰)e§uí³‹Á†š"BÝ{„~åÇ„þâ7Ÿaó§qØüÅ/Þ–;ß× 1bó*ÃÃ#\ÃFKÕØ¸ +×û Vî¥I±áéiuݤ ××Ç!ç³úlq{q˜2X.L„É–þGæ}ûhbTë ‚kqü惩<»à»îAô¡%&D ¨N¦t7Ã¥j¬2çƒ*1 TŠ·‚B)ÞŠ œ èNG«è†ÓEÃÁwûh–É¶¢™yÃ!Z/j¡Òßß7Gv ¶—U9H!ª4Ä©\LƒøW.¦ùÇàe Þªi#µK.B—Jû?ö®5IŽG_ŵˆúóc#ö­‡w´ã¶zŒÃ±w_0«¬Î®b™L²º-Ï„ç‡ZêÌ$@~xø ØA@mŽ,¦4`–mA´ô؃qÑ(‚VMKå½+k¢_ ”6†FMjk&‹<˜è˜~>&·+ñ˜¦ Êân‡Gm+§¼‹b]Ôö²5ê ô¢NÛ»&MPó‹¯C“ålç.h®.rYòyÜ MÚ¡É] 
ÀbR‡¦(«Ð„EþÕÊÚ)˜›Á69AnÐDv¥#™ŽLQJÈ”ç«'wƒn1-”ŸzÉŽÎðÉvâSï‡< ÙÀUïw\E/ƒ¨ýL›‡ƒ”:nÐò„a¥ÝØ[°Ë3\êØ¥•I"‡u[É­z\X#tiJ SÜ]5¥5@—Dš] ‹É#ÖbøfÈ])¥xär¡gÒ‰-¼òw`ò&¾}ÿþÝOô^ñÍ)nmÌ+ ú„bE’:½‹‚ˆé –]´kŸê Óä>DÝ€N¬øüþí§÷O”rÈ©½þíý›ø‡¼zÚOÿ*Ÿ½ÿöðù˜ÿj[•D½¬‡üû¢gãŽ4º (¼µ}a¯èºê%´ŠK° I¿#ïÄU›aTªè]Éió†5æVRÝ`T4`¾ÙuXο­ðƒËÅH]S\ô‡•‡Ý[[™D´vÂhqQµ.ŒºGµz¼•ŽÃìÿÉ!¿ò½sÎ5©¢ÅŠÜ“ÈDóÿH6W¬=çÜ8æ j°œ€(58ÖcÚYƒÅ£¬Fáq÷†·8ù#-jœÇÀ©ˆÇ `îþ½|?þºTsºtVX“]Ú•ç@òR_7Ó5î™Ãµ#!\àªà8†ž’ád–'_ÀV–„Í ·’ùõ¨nCm¥Ì¶¼Ì ´eJÝ¥ h;âÔ¢”‘Öí¡«ò½›E¼};‡Ä6æzQ¡¤O»# ”ëg»gˆ,€“ºæßî~íÿÏZôÍòM‹ŸþøÛ?½»ÿrÿù÷_Þ>Ô\eïÓE¯Õ‘¶¼ÎCÔ2EÿüçÖ¨è°Wé&2ÿ‹`Š¥~L1 õýø÷ù3¹ÿÅõžKÞ¿~>ß™¯Î®V^¶æ«ó½ùªxµrY1»8»T!0ãÆ ³XK.–ßÛcpÃÞžÖö«³JgÜy‹Þúâõ­y®mÜ{kÞúÞm=ËN3bÖnR¨å]ÕÏdi±dÝ}{Ž·çxGÉ bˆ© úך5.ù'‡•‡T —Ö6~>‡>îT¬û9ÖÏ(ösøq¶„êPƒýù“=LMŒ²GߦБ»eˆ!PÜ­nÚ nbƒ” AÝ©¦n·Å®®Ç•5h{Ù„þa­Þf~qò˜Ù öXÿøé×Z.Äp^€º¨!fEÈ P9ásÔ{521œ&M; Sã{×v â~»ÔøÚKÅ[ËÞÆà~Ä.@JÐ3Í#%Ýb~ °s‰ZKK#0üäFÊùñè0i¸~ósXw(õ>.¬ÉøP‚”=8ØH5¥Õ)ÑähãPæ[ˆ]¾exOü[Ä¡³è7ô3f5£Pw8¼§Z«û¥O|cm¨ÂZv åùƒi׆C¶ù´lQ0É éYºOÿ’]Éúwuƒíë½Ü:E@ÕÝí`Ê[\qŤf ®¸D¬š>LXrÅWÖꊑ n±}îŠsÜåŒ'"žEB©ìŒ«FKeÓÇ<‰¢„[T.÷ܶöŸFÙ†B=ï\×%3ñfêyçUðaßù÷ô nf?tb§‹œsº@é2Šs•2cÕÛVÁ¨UÈ¡òˆûÕjê¤.ˆp—ÙLŸ’w¬ž±‹Õ%çxQ2ó†¤P’˜ëeöm…žÚjˆÄnÃnS$ͦ “yü¢5S„y^= #*î‹ÕÒšlQ§–ü—6Ø¢0†”vÝú$ߎ“m‘ËcÁ!¹&Ü*_C*á8[d¥[ÈÛ6Ñä·?}ȵÎßpÚÞ¼ùŸÿýÖÄòS¢÷ïß¿}7³fÓW¬L–F½ÔJ#ÐÄ¥@ѽ­=Aœ?bB½ Þ¹QqÖéC?î}’@Ëwc#Í^ß}÷á˶ò¾"qƒ€{Œ¬´Ú6]aYb×Ì!õ­HÁdå'B¨†&¤\Mʹ¼¸Øó(E+ù£y©üÞd-ØW`ßù³ÙI”,åâ5‚ëÉÝ9‰×o­Ê–ÔT½ªçT´ÁEe:»Ó¶K™Ç®üÁ`J‰Sò“·R÷Æe? WæR壵k`ïÈËzÈeæwA¬HK>"cžÁkû.0`ã;ŒÕ/0ìHØs+EJª«•µÞ`„`¨´ £ièï<$øÏ²×‚¢áP`~>G“‡ÎÑ4lñý4¦ž9šýnË%Õ¦w]Êgš¸íxl(!û„s;¤?9Ê|þrxèÝ?mÝùÁ-ÁëÁÝâf÷·Eç‹§Ëò§èѶî` žÑÅr“,§ímÑ]2¼‡‘ãZÕ¡\€«¼9½Tr(WÒ2âP-ºÿ·yÁÝÜ—ýÊ âÙg°xñO¯HTÃWD+¬¤3&s¦$Û„®d`ºâôª·Å†•J \Myf T¼Wðk}v­p’°óV^ˆ'.‡ÙPÀ½0ºŽŸáÈÌsY’""ìOÁ—TcH†–n68¾…œCÇRVÝÈÁ° …ÇÛÉÓ‹ôok‚„$& [®ŽP’DÂ]H(:=…j¤D(t5eZ8¸66^ßwl|–¯kDwÅãîç¥ ñ8I.lóí37 ÿüñç÷'®øá‡~"þùùµËîñÔoÿýà¿¿èñp¹üåÓ×÷C´ª¶:€R߀Ö¨·‡ôI¯áu³˜Ûñ8–.ë]”îÎcõ²2p IËõÏ+Ù ýý³#mrMGœv“Ù™U—rq<¯7)¦Rk˜‹a,CàÝPü°f"€0¦17 …ýuì±y¥D­¡±šÚó•xdW+k-ƱˆîdÜö3Eœ_Œã0W,ÆÉ‘1¦t=¸t¯w ¿A[Lõà û´<•y[yΘ¯Xì䣳¹`gÌW\+áq8Ìp[s\š’ÑLBÝ¡±øQÒø4²g^m&в•Ð¥Œz^ó°è)izB\Ô|9×Tó „ÍNúv8˜©Oñ]IõŽÜ'‡0„ã×ïÞ}øüéë¯ùÕo¾¾skq~ýøó‡·Û:L…[¡žÉå~4DÜLÃq]€£pxÙ3Q0UaX\UN —%–ØŠWÂÃù£Ý3°[Ãð±Wc&;,Ä~X_r$µ‡›ÌáÐ&øäás8êÀ1UÁ6¡,¤»”’r|x–ª‰Å²=vÕ}øÅ7áÂÎóù~Û8_øT¯·«ïÂÏ„‹mu{÷\ü•͘y«¹£˜+pcÝ›¥bïÄãb‡¤™sDKùîo$Œ6hºkìU È#†aÅ1WÎÅ(óŠî¨ãsžš4­‡9’Š;c%W±bNÈßÚÀÆÉƒ¸PJƒ1nÔc•èG¤¡Å‡Hƒï ®ß§;rRn–ìUÏòˆ#AðPóè²½³ÙÒÃxJö„ó(Ù÷qŒ¼ o5𛘞•édý«Ü°µ2làZoÚ…×ÍÇò¢Ž¡ˆbûø¶RoËÎàÞ@İ ÷˜eä—àþ°T(õò<®¥Î¶EdwÆ~‚ž°m­ž°‹k‹=.à`%K37ŸÌ1mó"áóe}DwÞQj㥡¸„Râ¿ð*ëòã×åòrÀ‚0,{Ô|›öö|,fògB˜‘û…3/ï›ù§Ü’éÖ™‡¸ê5+‡GõŒˆÅt(aÍ´½àžBr0Nub|•£ç¸rˆÅPâqiM´½þï•cJËGÅY ÝÙÂ?t?ÝB)ò0ºTÌfáîs ³¨•Gœ±0Ú9Ä¢öÂõð ÄýÃ+jï»P¤xØy¹]°{‚Üñ¡§ËŒýàì´t2+UArut,•JV+kÄœLÝIºrr˜[¼=aòÅAŠ`EÌÉ,½nïnÈwhÀò'™_¡|S¾CóÀþ9ùóvT€±›ûþðˆIlâÂäTàg§;+ oa;³Ô/þªpñkGÙƒ/Ôt ÛٙȺîíhLÜ’DÔëÆ$¦Š)qáÅÒmøZ:cˆÆ…ÝÙdk·5cNæ BEP"J6ñ`žÀÜI¡ÅÓ?¾zûéí6îu 9xºf%nOßiy oÐb• ¥æÀFb9L,W„ä™ Ë=Ó¹s¢LâÖ ß­»¸.gÛ Êàkw)ë@]¹8Ù{%–!EhþÑBQoŒ®gŸò,fN…R#ÍG<òa{ïºæV i"àQ¡öñ =I—,¦îíÒ¥2N™ˆAýzã!Ÿ÷=üøM¾?þüxI䯾/në¬(sK¶©¥v­§¢W‚9¥Tê{$9ÐçÍ ÕŒ˜UŸ7†ë¸lXÂåµà×É÷#Qe0Ù5*]õÁ˜qofÌB;û@¦gÍSh®+6_gF TWlÄrqï·¥µÑ0–$4RquöŸÚ ¦œ4úžØ{G ÒSK￘ñ(ò cï-£§ëŸŒªèI ªà™ ‹“*E2< Õ²c{ë3¨³«ä\Ì!cW¢‡õÐõütÜ)ü"i|‡y¶‰æ’ä-¶)r‹uËýF$óG“燼ìbGK?‘ÈOô¼ÅŽ«¯XOÓŽ²§Øq˜=ŠQzx8ròn× š]·@š¸6ð2Ýå‘ZuÝ0é'V+kòܺ×í†NG™,F)€Œë3ëÖv‰6°‘oWP±D—Ë£å¯ÉÓ»Ë|F;Š'Ê_JˆöJ”Ÿ}í2rÀF$ÐÓ8“ÿZÀ¿1Æ(4§Žjkñp åOD‹QŽ:b®ñ¬½E®†©dVÒuD±`nûok?ÓôB ˜¾Ï‰Mmү㫻Ôsë„nã=|{¹›Ê«ÙkM m$‘­šüP ÅŽôGáŒÁØ<¬^#oI©ØRÇÎ ¡<Ë.XßRo"焌µÚü—„Xõ2h©!hµ˜:á„T&g@wèVÌëGìcœ; ¤düÃà#Lrâ±(éÔÖúqµDáß“»¦h–┤Ó2¦Võô´Û\ƒHSm.Yê¸-PÍiòdûÊ2ë²in1D´Ú„ïŒBÕ¶Í,·Ò]ã\™[È­¦0ÔÜ6Üxž–ÿLÀÙ"h¹Ã 97E_ÇY¡d«©-™´åþzBQáLfÀé¦UP©hZˆ„á&¥¸±EÓ{+qÛ,ÀD]&œ0w8WLçÉÚŒñÙºV®pºú»¡ÍêÒÁê:º»¬"†Íï1»®åŽT"ç†*qðÚ qe4Üu‘ždÚÖ(‘X…95tJX½"7 O,òJdƒ:%Тɖâ€äo÷¯Ûu¢ç£³KY‹œ¬®§\8iHËs³A. 
ˆ£3˜Î,aPÇÄ^p)«ßý#Ó’ s(ÌhCN™?áåÏ5n“d w³$SîBŠˆÌàf“ ‚R.*¯LE€T‰a²0´4Ùx½Ú!˜Ô€.³4ÿVzÉa3ùþùüÃOŸ>>üðá—W4Ÿ~?¦§¾ø7<}¨Bþ~1hÏ@9‰Ä0»JËuWìXË×›±<1™Òy‚'ÿi'&Ô‘ GKb»CÊÏ~ø^вE’ gÌžÁrÀù*Árôʼn/Åfõtð9º²¸±£ç´XË¡14®Œ^ðeº ª‹¬È·¹Z툼[æWV;¡ë~ò’‹…wÃñ™Ã,VËîžV> ì†(FTCxùàs—2θA¬zKABªî=E¢¾•¤Æ „A³LuBë½·~IßÞ[8¿ní-ñÜ¢Än‚3o)#/áþØoÞ’áîHÇøoo‰)Nöhý|q,Žœ!çÞ¶pt4ÙïgcªŽãx?.‘‡$y$ÀKuéÀýëGß³Y¼Ûê÷í"‘ÑÑˉû6Æ…f¡<}Q_B7E¾âZ¶õ§7÷oïòDŸ¹ÒfÅ=´M#mª;-Øsã½t1ÄIþÐz÷]zŸ{@ù„bP¬ŽÂIwfyvYÕ’R¾h%šd÷’kç{ýŽ®lQR–[ûÞÒC À¹NÁÌr½ëçuÔ\pŠ2¥l½6N ëáŸ2;§w<â“¿I3áj®ßÑü1áÍpQšìܹPPλŒ=Ž(r ýgË-õ®ê7l¿V¯ë1U6;ûìç"|u?Éùã-Ô™ ­'Ápu¶y33•ì±þøaESå&ó—à­ß¿{øð‹ÿÖo?ýsé™û°ˆüþç_ÿqÿGÝÝñïŸ{¼HŠS½H¥žÙw¦ÄM2í}‚eî—M&шªþf‚cGÓUs¯ ÅÌß„8d¸’MæC–'ÉÞÕ;ºÌ½Ëõææ^'Ü|%‘;£nÝzR¡ÑÓPuüçF‘Ël“œÏJ8FO[fr’øpÓô™Çf-ÆÙ¿lPãõ%å1äKšÓó2ÄÌüß¶wn2ÖÃÈ™ù¶=&1ÑD\+À9÷ºÉ ÔvX$¥ÉÜ鋿ÏùʲîÙá&–ƒ¾ÑÉ’›£BØ×’|ÑMÕ"Ϩ½`&°®»Pªûví“G¹Ñ*·yhæÇšæa¸çC!½ GS•,3Šì„IDoÌÙü1§Ì>WªéÿŸ½+ëäFÒEð³•MÆÅ`Ãè——v°û°ï†¬ÖØ‚uô¶Ô>þý«ª¥,«Èd’åžÁð¥Ve%ãøâ`Ç?˜Jé7lßNš|þüåfAÞ`C[ÖD¹£ñ°Ã çqOt유ìÊb*&bNcÍ Æ€8ˆ/^%‚“\íœZ}ª©4%E ÇfÆ`ïK6¤¸øzüEA£ù–ç¶Þó`‹` älµoDf|{Lãá»ÞT”^lÆcÕPÞ“`)¤=ûupæ]¾y>a€7SNÓ϶ä^’sC¡ßKÃü™”z3Ž&ZH«~¿‰‚¸4”@>(ïîâO‚¼çì}ý ]:@|d´·A¿_¬=ûŠ&o?,/—í¡ßaHÈï"[Ì?tòîÃÕ½ ÙÕõÍq±]8Ê‘ÃX(~“žÚŽ5X%%‹v†X ¬þ‚¹†¤€ F 5 úP˜Ž —²B^,Äé ™ï˜‹1%Fód‹…TdâUÎÅ€Ï6ÒÌHÑ'ƒi1œòÜûžG6‹¢è¹±\íwçryžòyÒu/)×åaXëSë•6b(Lcã€ù,<°3ôöþêgûSûÜýa¼rô7¿åd ¨´$SRef·°|{Ù“ã´[“.ÙÈG'åÆG07Ák¯!äÓ%¯ôèØf^&%û½tÉìKÚÓ%‘ÉIßtIM™IS÷c”èÕ¹~óíŽnêæ#„ÔÛ¦vš¢ÌI*¦×òÅ;ix|!Nø-L¼ÀÞÅûÞw49 äÜz'á¿—NÀT’¹<¡óøè£t0é ƒd{rž*e{˜?ÿüîÆDàééÓšә´#ŸZg¥N´Þç²ÈJ‘oJù$ ºuW.ÓÂÚeZ›‘аlp$mr,ççÉqÖ༭f›–s“3]Þjâ÷¹‡ì–9½ùœ‚G§ Kgzð»e< QP—Ž ?¦F'îwŽéкËÐÀ†}Pì”"_–ìX¡•ú\íxJÛ]öåç_Òê«xóUL—h™ð‰Sõ«¢¢ÑW;Æ¿@¹îIsê0ÛÝaÁe×;~êÜ€=Þc;jtÑÃ:ÌѦýÁ‚ΆßðÚ<„àÉ‚pNÙú’2 Q®5Îî=˜‘§ؤ×ö¤.g|³pT`º0x‚† xX]ž"ØÎìô©½»ø2—¾{Ež·¾£Ò&#@ZÒÖÑ쟋ØWïÌþ7màyþüxwiñpÓÜD£VD­|,;®²]–òfÔêöÚDñÌ€€Ð?ƒ¼L$DC|æ+M·sÐß}Ý딤ÊNcèÁ”pÙE!»8R5¨E5MÔҤѥµ¿õ4ê•4J¬OùÇEõsQD‹S²0ëü¿R¤ƒö Á$Žq¿[cþm3"ÀË2¯¿‹B ¶ð‰stXÑ•x/ögòÝÑÿÝŒPíö/ƒ”½ÑFIŒ>±%6À–ÅeÄÑ¥‹¡m=U<\Oápñ ãä(ÆXN )1ðÉšžíY³‰¡Ùa*ÖS%É3£{÷ óG¬ZOåQÒçªuys°@¨ZuyóæØ‹Ç ½Ñ†ïǯ|zgèp}¢¢£­œG¾IwÁПîÁM©lÊɿ瀳n´§´‹·U1 ¥}nEϱ`h?ú¹¼»½¿µZĉpd“W#*Œ´ › ´)ž ¡-ˆ¬Ç‹6•3-Uʘ^jÙ a,Ú3ÉnòšÑ¥ÇDa{ifðj£Ån2Z6V£s`iba½>uÃôC¨Döe)‘˜[aýJ©R’PPŸ_L C¼c1ºè7;1#fÔуDz\€îÖ–ä"ç ÏHÓe¹Åp›ë™%r $WÉEt¾e!3©wiᵬ¾þ¦ÊëoœâfÂcQß-b×_£sù4ÆëÑj®¿Ñž•fÖ¿ÉSÌ’¿þFž¢àFL똘®’œ®qæ¢w#i AývÒ_ïH¿É~/u§Í¦ÈwÇ9@àEW©œ‡– t&ûj>Ë ,>¤Z?DN GXáÏ¥­|ESmÌŽ˜‘¨ &ƒ©¨‹°ÈVG Áy^% -û<¤U_æÕ®Æd®Åd¦É _ÃXãJ1ñ!? 
nv² HNù0UõÙÍQí|«É ½Qì>qr˜ÒÎqÿ [‰zi6Ýɤm“È‚î|q8먤2ŽŠw§g$n‰œÝáözÚ.‘I0·Y„¨^k̆TEEí¼¶Gpˆ-I •(>­;­;ÅÉ΋»õ`'ùš¦/Ð.\ ›–x9Y ØÙ[™÷i1å¶¡E æI®azŒ-3“BP%“ÚÕ|“Z¾¥!Šš†{WðÍ ùfÁeÌ9z=Z 㢟z ~ãÈð€qãÞV°WF|."¯×·P­o!yáPðq ‹|Ã|³Ãëɪô'ÇÁq_¶•œ‹ôö¡ÿÌhÐ8ENCïï¿­aqŸ?.œ‡îØH"?û{•Ö´( ˜m4¶G† qF±¶xM3ˆé,šg„ÄL‹¯‹šG.ÛÚ6#OÆ„Y S©¦¤¡uq•j2cÕŒ4EBÇp†;ᅥ ç3{ø®™úšÉ¡©´2a‚šüÃÅÕ9Õ4© “+²ÛæGn§þTM–¬Qœ‘§‡j¦×vÉ­­WMbçœýÁ /•(6y©]ò„šÊvËv0³9ÕÉäÕE.Å›X!b!¡Îš 'fg©¨ÚAž ©d/™=º¢`á¬S‹1ësÞRPh£ hÚ4£6!£9vg*`xz·­€{ÿ9ï_ËàÞ_ÿrsýëåË…Ú"ˆæïF*!‰6µXz{,ŸªØH²&ŒŽ>³9mjw¥éèI¯]ÜÍH<©×Ä1ß§ùBŸÞRú=t…èåŒcû_¶t¦˜Ù ›*üw.ÖAOœÃ®ã™ÖTÇJÐú‡+Àa(Gµ$‚¤v ¦¡ë¶¤<Þ.3®tLâX fnÙ1oÞ!¤ÃKwbv¥c7`6âÔ?Îedt¥ŒR"j>3?£Zd¶×FÏ>Äs#3kÌ‘ÙN,è½ÓÂÞˆ]'bEªAèàëº;žŒd÷ÛJÂ>^2LàTîÏ<¥|g_Bðw³ø|2C  ÍÁµTp2/^©ÃÌñS¤êç› ƒ`…[¼ét{%æ°wF–.NqÙu¡zÃØ­;*{Ê8Å`ÖÆyæ¼SœëÚÎ5›¬éª1à%08ÊJlAÑ*õŽMËeØ#¦ =ë«ybå…¢NB¦£eµ¿Šú(Û¤=;WMy¥È£†mö÷¹‡™.”¢Y%ÁE©¦·‹Š›z‰Fë­ùE’ f{‘ðk‹wëúÞÿùãáPký[­EW3à…²Z»ãØûì¨?ß<¿ÿÇýíbUÁçû­ÍBßô{¿kº}ÿøñMÅñŇ‹§/×IòÞÿ°{П¾<¿ÿaÜKüvu÷åæÇíÚZþâÇ‹šITøú ;;ò%>\|töhh1ˆÎKp1õW¬Ã¹èjqÎÞV9Dç+Â@E.%èìà>w‡2;YÒ!¸‰m®¸=¯D,ž~3*bÈDy~Š.:‡g@,S:Äo²²5êûPì¨Ê~ù ¢|ºv†QÙo?‰MäYœ[…MÈ-—Äö9ˆ¬~=8ùàDÎÛ'jÀ)’/‚ñÃ^V‰N&a;6¸ÈŽCn:ÑØM§;:Bˆ¡Ïêå:”Š‘ü7Ro:Ò÷áÉäJÆÀÓ›¯SjƆKo¾ö" ©†€+ Êìöù–KQ3¥jnþê¹³jçΦzvGäʉˆB§SlN.ì³÷™¯G«Š 䘶Ö/@$aÃ<ûçÆ14µoRC]é7oüm‘ÑåÓïwÜx6n¡7j ß <•øÎàtm½P¦ÇضôÚÀqI]5Iöý K%pÀ­§K[†$jø÷­¬ ~…Vî^/ìoÞ«|»E€1W9'*2é̱¨˜†£¹÷WâôH¥'I‹ùÜz©èTç¦\©`Ô¿^/3k'ª&âwCp´ ÏiSë¡o2ø˜/ð1`íz}|ÕB%‹E¬_kEæ¡XÛp_).ûù¨Íè@/´Mª¼½}m%-ã.£mv-òŒ>] ®u"òàÌh+8`N:˜Ä›Su›ÒõÝ­±åòújÂz©‘±ab@EÂÝI/„êçôàäÌɃŠëIAŠPÒCsØsYûQºx=é®A4ž[ % O›•9“73>9ŽGî!Cà®#g¥n‰¡W’… € ÏHƒ+e4·k¶=Âù¦I1 Æ 2¢ñðùsª¥øhg¿üéËÃǻơ̠xSñfB¡¨âj!°áhd ìä„WêôÈ8¸Ô¡Q(yÙÀ–µ"êR—‹ÆÓãÝÍ›ºší?Ý}±OœÜjTññú GYáI-oA¹Bv<”d07ŠfF¼.’“&‘ÒÙ%§iéYôšîáxé Qûß?ÿ:}uÍ÷eàëO?ýz{¢Röéöç‡Æ £9añÞ|zªðéÓB­–˜“–ÁzH‹½´è’:pq U8X#-è›2Þ©¹Ñüµ„²+¯*ªWä9šÔ0VðUÑùŠ>; }~´Š« ÑÉ Ç°i¯†mþŒl Òä;­.aƒ)u X$šëx6àÐ4~,ª:– MM³ä›fÉnÝu›)‘´ÈasáINsÙ–²Ã\f§)·ÍbœA Þgðì«úfÉ1z¿L"vá*IPnQ|Ãu)á¿@ñ3ú}ÔøÎÜb“ù_¦Qe±@%Æ¢X(d«&fG«P|“¿ÉG÷7ßÎ’Õ| SZYÂ~ƒ³–›KÁ¶p£«Îùñ°ã'1”#c¡Ü³k.·8]û’ÂêZ‹æ î^µ…[ViÑü¥³Z â°Wj‘¯†Øˆ n¦n®’¾{¼úòüËåF²~¾L÷RËš ø¸Î¦žÜöÈûˆ–"}OäRã´G>±H±¶0?£˜i¥$)²¯PLÊÑÂ×¼ÉÁÁWòôÐÌ$Ë©fXi&o.Vi&õW̱ÙIÄÁè¹ / IæÛâNMÅx3AoÕ€ãZ+Éá*ƈãœAã Zd9¶†åúóÍó‰ë„»G;Õ/OÏ—\?ÚOþ¼Ü]®¬£]/ÝHо"뢅|é–Œù‰Ö¯tê£é­ w—Áh h~Ä:m=6#QYð0Šv49¯ xäÖ;i]½“Æ#zÀÅ1Ö¢±xpeœ*€˜&p¶ArÇçŽlë2=áÈxdMså+76ÍWr@Á/6[ÈÕnÉüQqR‘ͤc1› 1?±|Fœ.xKi‘,ÁÛJ‰ÎÑpÀ ;¾\žœ Ó–ØwѲT-ZŽ -·ÂÃq¶¦²'^çkùÐÒ³ lvN"4Œ,ëBô„8ùÔ6«n½Nw¬Ï㉃¦ÐWÚu8[ƒE€à=ê*@ðˆÃwHo!N͉•##%ûâ€w®j¦$ñ¢‰e"é<·EÑsZ ÕÎmIÛÊnFø Üõ~ãÓ†ÿ~uûlG¼0A¾HWÔß]ï~eÌÜSûÞ~þüùÏÛÏö”\Í-/oMŘbˆœý#Aøm¥¡}à2‰Æã—­\$.Gœ¼bô¦ÍDØ<þ+AD<'ÜNI û÷—¿_ݽ³¿Ó¹Õñ˹Ÿî¿øçǫ竧?®_ñT†DžHyâþø‚!£æ:îÍ# e5`š(“rwë mjwê$B‘FÃvÚ ÀÓ½¡ÛƒçZ‚g«éN/å‚w*ÍÍæ›9mY%vdìµù†Š ‡þg:²ù ,ð§Æø M)¸ýtõñ£}õÓõÝÕí}êÞÿÉäÞOöóûiç±N\þªO¹P)ß3fŒÁÒ÷šß½C7#jé{„@›¢$5ÈlÖžÍ#öÏmSrÑDtIÇÐ^å—M„óþ‡¿ÿ-cÛô"×z¸º¿1.¤×Ý¥ ŒXé.ëUºŒ‰Ï<¼ÿ¡ZèlÅ‚Ù.ž~ ¦çÊ?Éõ͵€¸¹r+ËGº½ÄL¤ƒðÅÝÍ•™“7ôô×ìš|ØO:í2ûí2xÚînÑ´qÐ|ÐÖ’GÌ”#f$…M«4¶Îñ¹o¼“2ø¾½Þ|Ù.Uwi!ÞåìOìu6£"eøùÈÝj/Íkˆ”à}Œ½²}‹È×–à yŒGŸG×`¼Ç¢2c¶~yF­u*é­-Ì ¬ ¸¶¯Û‡Âmñ¦.}DÀv•z{׿Èî1ñ9ò{jÒ{t}zo1lå/1™]ZÇ߈ƒù›l"Q†¿“Zü•¿AGèG«èÂèÿ(~8¿š·Êí£úwª#|ƒ§¨a a¸kÊdX0]à?>Ù¿îß™—wc*ÿåéó—“ÛK—é·9Á|£v–TxÒÔ°hÖ‘L†dé4°ŽdìèP # ¢C¡¤A‹…äƒÆÍúxL&÷̲ģfxW)±0ö(ŒÎœõ(Rk¨„˜÷(võ(bÕ¼ŒJÕ.Eg9Êå`¸®ëüF”îPí''ä-Ä :ªßŒM/^h,Ê|Â-BéT´¢íì¨éزÄ iå Õ Ot åÈ·‘·bÂ$žFdœ 0£/#²fׂ¾’§ÇÌôÖŽC¤E€ _§ªÁ]¤²!ó¶ßáÍ•šKKpÌõ‚üÌ@ê»´jå¼Ô/@ê‡G-B"YÅ`î?-WÕO›zF<Ãçîâìúéöãç[;E[ßçÁSÖµÊPZ¬Õ¿Ú<%C04’…éêîé“Áåž_²éC{¹E]ÖxÂGªŸ«h]a*µ©ñÄ‹¥4"wiµ=ñÅ·‡ò]ßx{hS“¾‹Š›|p"V£FѦ*çfsÎéØÁ¨n`ŠÁiu]d7)Ò–¸8:‚–Ή¬Ñ²^ï7¢…-\õiÕUYB¾Gé•=Jd7õª‘êLvïÁ[×7*—[T—¤?QÓBÍ|J"w­•õÒ>îãuF ÈÛµ•ý$)"fš*RzžÕ”¬ÐTáú cU;/誨õ3Æòû{iBfx @ÏÛ:¼°áðåv"ಌFDw‚ÖDªëìë®GhiÇJd‰bb½º=ø>ÝìkâxP.NØBû5öTN_Äì2˜-z˜×ôÖÞÛ–™We°Nëh´y52»Ì΃tdeïb <ûæ1êÀÓ¢~\Ñ œWÿ#l$çÒXÁ5Y({„—! 
ã(€²­!8cÂx[ªúùº_¸Lä"jÚ#¸¡âN˜ÒòÐÅ…?§èÒ ,·‘r®×¡–jóŒD”ÝñJƒN¹^ÓÏ´¤§,íÕ4¬Ö²ÀÃs½)›ë¢±0uº.½'>OÒw®ïÇùµ Úñ/7v¦%1v2pà7ÆÎMàÃä&}O¤|úÛhëÝ?nRåíÞ¬pñò ~ç23ÿâò—Y&gˆOÉ&\ŬʼnœàÙ³û¥iKó²ÒNé ÛåYZF ²¯ŽbŒÚ>yºU¿9-P(ÍH'ƒH'X4až³3 _)Ñerš ©Å¥~If•¤1é¼JãÀéx >ìëÃ) ¬X¥þMa`Ž XÇé§4å”$œ{Ìä«A7K¿l¤$ ĤH"«@š@0µö¦I=FJîQ§ŸŸ¸mÄQ-¹ñN8¸2Bÿ½kY‹cIίâOkSʈÌȈ8 o½ñ¯ÀÄo@h@ÌøøéÙ4tÙ—Êlif¼9º«â~ýã@ˆ!Q¼=t:.ͱÉ #Åm 7»âaDŽÞg¢x{cûb©â‡V<ªÐÏLù·]žú ùG™,qº‰‰û½‘±Ë¸‘$…8µ§ÿœ `Gñ±{‡vïB•ߤœì>Í4´4¡ŸdO–$ÐÌlØÝÜ]þùzï»þXò Ú»ßùþðô­¼áÝúqÛ0_d¦!FÇ=]N<ùTPi4ÄÍŒ8HÒÌ…-¥züÂÞ2ÐX4ðªâ±ØàÂ=°ÿ‡ùùeø¤^N@ܹ |šìšlà3óó;N‰qQ p Ckõàë¿ê7À~E«5Õ¿Ä0aÎGM§6x*®Á.ý¹¼Úäy7SsóÍd9½ÓÅãeÛzšZï®1ÿ­(´ƒTiÜ@]Ú°q¢TœJP6›[,t!†üIW’ ™¨3‘ ì,Î?³ÍÆéýUe§Ù‘ºÄ(Ö罌…®±¸0±îÆ&· 4€©F”yJ®ä=O÷¯=w¶GRJJöÝéÜÙòÜ·éŸKàUǹBÆ“m;è{¢lÐh¡ GçÛ‡ÁRr”5ö!a y¬ ¡9BÙ{ÎB˜¯È6(„Vó!iOºÁ—„¦Â??„fbÊ„Ða‘t{SòG# mäjÕ„3[JÕƒmÉDëàw 8ZÈÄ‘C£qØ—{×§bvÿ}¸þ|qwù-¥ ðà–tÀHʋ窱<êžó„wj¿¢Çˆ ,=sº;.MZ}”P1„,HL€œÀóJåÔá©,ìá‹‚}ÞÿGÛ¤6¢*ѼÂ=Ç®n9Ÿ°Vš3¡:Ë„ÔbCôÅŽ?¡§Pœ4MûüÙaªM†¤B&·©ïMJH 6¹Þèg÷ü™]n÷ÙöýÙÝ" 1œ?J§J[g¨ª­@Ž‹´;­ö²Ý¹Ðî#À_0ò—TPˆ<á1…_õ1›3¼¾ÄøSφ­bu6L&"Þ w¡{ h÷¡§½ž°ïqÈ!ö‘Ö `”*ef§O6¬Ÿ_4æ±0ö¯R¾@D–ètw3öß?|À¦óÃÞûÅcŒQkÒ> €j Ú$Ä=Úª2²n¾?,•÷‡ÅX-.Í&”%"µÉ©(”íp­_­âqP0 äa-o>${Ø{·¸tm+61<í"Þäu"…©±Ã3ý3ày;ŠÄPÄÙÆ8îè¥Bøå^þsx¯£~ʦ†ë&©•ñ+;’†!½ˆ›Z6¶XEɲÁ÷Ž»ì½ñåŒ ‰’ñ–˜˜˜Ö§+´Ï´ Ÿw¯o;âüJº¹¡T¸{´|Ø¢3f†ýlKO.sÒ,½p´œ,½ñéuı3Åg^GÜÑX-ΈMÂ0ŸM’g“šùc_b“–³úÿJ>âóîŸÿiÊþ“°=P¯[XÜä™í³ùfàíø§Šåæ`å(º¦’ŠÅŠ_]sGNŒH7qd?“<6]8ú0õ¨È¾¶ôp{ý»¥†æ??÷5~{ Ô~‚=LÇYà©zØ8„®[#–¥œ#bkz ÙzÊÒè29§ ˆX‡Xp„@Å*D9ȯ¢½RiDÀaM¨%â>²ý`“–ú€Ó]Yðq{)½²‘c~l^_áª]4òõG­7›‰£lµ´A7i¾ðvÕWR‰Îص¹D¤•%¢Ød)«+™VW¨kΨëêÍ**D1Ä„Îõ'÷v_Mbö6i£e0“µ1‘ÑÔÆÄ ÞûBdY{碲ÒãžwË&—zú–FßÔuL¨½´Õvú¾u]ÈQõoê5/ß¹s•C¿õ?þíx-(¤ KÙæe"O(¥; ½ûy“ØÏè†CØèŽ[ƒn*ÆŽQZƒÇt“h£í7’WÛ~Z\ºá ¶_#Ç¢ígÌ…j«7«²ýihÈAà&Û_b[…Ž0ÓGKôD…„i·òÛ[ ºxen̜§™ÌÉ“’Ï2%ãˆM+TB(9bô±‘ë ~¸~P0ã~¥Ñý6}ÙÊë¤f§Ûô]'}­:‹!·™¡ŽvADö=LJúO«þ°ìßÌ]/çþQ› aPì=øáV#R{fÂç«Y Gwˆm6N§W±Ìø±dš^YȲ,–Ô_¸.õçá©Ùáç/¥ÓcÛâÀ ?Ë”Æ.ãíãäqe>·xoÁ?T¨zðZVuö”¯¼PcH•/õÈŦؒ…”i[þ¥3šñÀKp,è~lùñ^zý,¸¸‰¤'7Ḻ€Osf²î~Þ1S§ï÷æ-ÚÎy˜Äô“»lôÈ|GWô#é×àCc¯göŒïÌJRaö R1e¦ýÍÛÎ =†˜=¿`´t‡†š½*aèXƒï@£Ó)±ð[S5N2ÌK`¤Š¾S¨‘ŒlßkEœ!¢ÁÉG€3{DsÂÓû^¦2}/¿Œ}ßåØKUð¼uêå[˜ËÁé¦)Ž2ä…ˆÀ—¦b=ÝjZ•Ç œÇ| ®f²ØO U‰,ໟcƨyŸ60v¿ÓlFÑO ¤ÐQžb@R^æ\›9J»‘~4B ¨U~T°èGA³; +B ò£i¹èÜ~ýŒ0IÖGó¢ûîÑq#x¨¨CÁ q4 îI»1ÕÄŽ:’0š…"ÒÍýD¨ì'Z¼µ}gˆEÕµì4*Àþ½sÕßÕ‹U´}L…*¬_ú¥—fwÃÇânbC $õÁ 2®oµ‡T:ßê5ºøãñû×?Ý]ÿxºýëáûõÝ×ßo¾~ùvs“ib‘nnbU~óª£E:¤£UùŧÚ[C{šîn‘ ¹ý™ñឦÇF/^/aù±Ÿ_ÜÞ_ý¥mââTá¥í"¦ma#«öÁÍŒ!㨸q'A!è>'<å|Ôl3jÑû쀿¯Ìˆ6âFÚs¢ ÎížÂ„#òF\g®yðvàà‰¡Ë#t¯§Þ ,.(»òˆæÝ{Vs*³cºÊ%ËývQeB¶˜½zÙ*“žÚôÅãùUfú¸‘9wŽkÇ(öº"û¸áäõ7bÕ±‡ qófà½É#ŠÛ?ÿ~ÿðûë»ßl¡vr"2Í¿o“qf஋Li8×l“rä¤ZLw>j&ÕÈùrc>Q¼"Û Iµ´)¦±Éˆ—„¦Bñgtÿ\´Ý¾Ne¢b‚yý<ýùýþK¸|šKðéàRft5ÓÀqéþ JiÑAqè4°Æª.:¸zБ V|ª]îAdN`¿ŽiXÐü^-Æ…Í1•ÄU`E_.ÅqÈ…Í+Š ›ã¢g·¸q~Ø,fÂf^œ£ñÌasÕ‘6 ›sÆ&7EÇ£® ¹½¿ÎÚّ쵚øó—ºþ›Ÿ·7EHhª™Uˆ=°mM#IekW¤zÃz!I\bpRÑ AŠ¡ˆûIš yW¤Ñ ºDШzfœjÛr]YÒ„¢Ð¿Úa\™p;.,¨Euì]‹ÿÐÛ¸¾º·ÿÑfJÃÔJBtÜSIðéD…¸cGùB‚it@¦X5™ˆÅù¦è8;à Ú ÁDB ÀC­jÌHOcTƒ¦Ò5ÈÔ=è#:7JT’™ñ‘*€5—·„âþ¤ï‡³Xb •haCÊFB8»V™?Ä47Äj:­þ%°>ž‰—ÿ¿5ãàütx°¯¿…÷Hª)@Ï»;"uU¯¨“vàODÇø,Î{¿ ƒœUi~qÓç•SÙ?ú—õxj4tl‚ª4ÛžyÔ%ÒÖFÊÎÇ(o›úœpx'¦‹lélú¶¨ÏyÃ¬ŠæÅ*¡L÷03JûöìÜfÙ8)ÐÂi³'ysߥ:…Q òX±’Œˆ“Ll-¢vMC›”Pìgo\&:{Ê‚bIº¼Bk½û׸f6‚ŸÞݤMaÉc˜Œ rzLåñêáæ{#0…騄™öÔ¾ ç&dš] BÆN«íI4™"í3J¨™U.íMµböÐÇŠCºn¼¤cMˆ‹?É¿ ELǨ H!'F ©­´?ÏAObiÎÞnù·gAÚ¾™ˆ†lÞ‹ô•{‘Àap%Õ܉6»B9h·›­®^­b3R¿Å'x¤·gØV’=Ãf’m‰µÆ†+lÆoJ7ÿÂ~ ö¬8yŠBì:¼H.sxQ2ü…Eðµ;{š¿ÑÙ+ ÙÅ£×W)^$¿râšµëOØty@ œ]|çHÈÆÎíÐÊT«ò¤‹SvÅ[œ”¤?'-'Eó½‚Õ›Õhc[ ·Xë"ÛJó—öΚJzG.F3\¤ÿX®z‹sýù¹zõúãÆa耟&j »_I&)Q©ù¶L'Áúâ(ÉZcJ~µÊ;*i»ìüåŠ<#à0Ün&A¨¡I- ŽPû5Ó"A‘ š‰º† øó@ÝÜY²¿áð“è1L]£?XÒí7伉…Ú3¿ÒA6/6{4­ôhư…Á•kŸÁ¹=œþq½Jo.YüðÕ«U¹4]TÓ”&ÅAîÔ˜cYqMQœ úrª{’âäôöJ³ÿáql£ômÞ «z'[NÓG¸R{,“Ö!À§-Äëótš9‚¡ EÇÕ$ótÑ—54{•û@©!G0LÄ-5qm kùãÉfD…ÂÖ/Ñ‚pO¿’¾C¼ÜÿÆîÓ®ÏÃýíÅ÷ÛËo×w×?n®7÷"Ò§™|"¦Vñð‚RùO‚ê_Aï óiÑrO³×Ä… 'ÛÏVYg]M³92Çš ^bÙºRÈ®«¯9ļú%®¡o3¯êTi›õ´z ICˆà~©ãy_+¤„¾Êת/Ke;ÿ+‚ 
r¶,°ÍÙ²ÃÀ²ÉˆG”ñFxá˜öÿ'§•×ÿóÃdj˜´O*¯žÜß½Œ’|±73´k¾‡\ZòÐL3…>¶Äž e=Û´%0³wí­µ ¦Ú&U¦"®bè<ƒU;r®|¿¢ÞÍNº`ÉSK“ÞMÁÞa[x&8!<ónq sÓÞãÓ©ï'ßýÿp©,ÕɧnT(¯ôàs¢‰¥™N'ÃfzËT¦¢& ê«¼o©‡šH9ÛMy¥ÑMBµ\#ŽщGÜ¢¢ž¦¨¨ ¯ã¹¥©Ë4³G×6¥;ÈÏÉ_u­í–OÝä/ë'BÏ™4FpÄ®xæ ÅN Ⱦ!×VeL„RUvR_ …Í•åV¤Œ" &Ó›‡|­ãðjUFÂ)ŒÒÂ8ûˆ[— 9{g¯ŽMüyW³Ç‘£_…¡Ë\†µ™‰T8tšË&ö²·Ý MÑ6Ã’¨IÏèÁööɨ.²‹ÝÙYYY”Öž¹Ø–X¬ÂÏ—øà7ñëãy—&¿ö9‚88Ö? 5êwΗÔ/Ón‡#õïÔ¥k}ÜCÉ/20^F·Ê<ðÕ› Ô`>ilú¡ß©eðO}Í©Å>Í%@¾áöYb-áEE\³Ž~],*‡Ë»ëºé hJ¼šYÏôW×F­Þ” h^\â¹cðÕÃì˪"Q78„°$ÃÔ_È#¦u)D ª~¥èìN±a ‚3¼˜!ˉªv©FU6b‚å¬!æÃÆý×”Ç <¨ªÄfè_̹̟±nÂ:„ƒ‹¡~Â6ªKÀJnŠÄãâe‡/–Žœ§=ÞF2ÞìêÎ×7_.¾<Þ?ô|?°‘SCM€)‰+ !Ë´Z§øRÏEСN¥ « \Ë‚. VK‰:r†­`z9qð³J(Ö$*þrœèS~SÛ^N~FðKút:µ NZ‚’œaß3i؉j†7Ú%©ªfTD€yrÞ½€º˜‡¾6¢ òPA°¾¿¢ËZ$›–?j oÌŠ²zÁCmh(¤ª«eâTêÚ°ç,úï?¬&2Ô—Š¶†Š© #E\¥6ð-„é ö‘9-m‚óùÃÝW³ð#’¯|a@³¾§hól86+äSH#–Q£u_´õœ|…i/¤ž­¯m#˜K&æû˜´Ô)#bpÃRÜÿtõQõxu}Sm ýì"iΘ|YXÿTÑ,€rÈ0L«HVíòqÞ£µY:Ye2 0Þi ¹ }okf}ᣠcÒ Ñ‚âìÃÝo7Ÿ¦ô¢Ÿuš|Jˆ5ç—èÃM€˜%ÜK¨‡yH0.Wß½Àƒõ›{b‡­æaðÒT–¦d´G²øN;®Ül迎@ò¢ý¿Í>24,mÌ1[Íx0B5Jçìc'Ál\1“Ð_vÍQ¿Ý|½¿øùËÝÇ‹[µú›w_¾N5‰5ŸC´}óÕqFTs&+·wùÛ#p"(YÈ<ERt[Vó/8ë(ÐØ­4½‡€1ÒÙjÔîÃs—³«‰(DHê¯(Ó$pš{vÇGø“3Þ¿ãSoµj.9yÖÜ?¯n‡.Ÿ.¬Rø÷©Êö´¨jÞ%øWýï_¾ÞŽý‚÷ê½—Ó÷_ÞªD¥˜0ªO)ì²€"¦µêé\Úª¬»GÓL{ó2PGIÍZ-w|„´¬%Ñw¶,+6•Nc†@&ÒñbLKeÈ«sŒ5æ“ΞI»oÅlȲÿ˜råÔ^ʦŸØÏ+§óG¬*œ»A¢üïÿTÖMõ³4Š·Å¸Â Œ‡¤aVXÀ¡÷NV/§Úì–El¥þ¿Â$ô±©`–å’›Ù—U‹íïKΧz3þ Ûÿ–ÚÌA›®ïÛIÑeˆ«À;žX>W¯Œÿ}:Þâç·ø¹t¼ÇϯBõ¹%~q×Ûô›¹yxûÿÛŪ%òW*ô‡)ßým‘üOjû~ÈÞÿ×›‹wïg–¯9tºxwqÿ8öo˜žÿãçLJ·?¼ú»ý~õáñæÇ½¤xñîÝÅÏêÆ&³§7c²oðnï.Þ½9éHÑÛ8Á:G:ØŠÚe³·X²$Ò·ß™W’UŠU´_÷‹ˆ–Íè qFšóZuŶ%ß4ø›ß3"kʦ2ǘ¦È9QUdáxŒqvV`.¡;¿õµƒ7†—EçœjN¸Ê=ãÁbŸ-:Š»f©ƒƒ. ‰ÙM>|ú ó¡ëÎïHU k]àê}µ«€"¯\qÁ¥ÀÚ•+ŽBwì%ñƒc€mIð²ãö;=^þëåÕãûÛ‡ËÏwn¯oob0œ ìcãEr0ÁD70ì7·VDõ:ûzš³àC8#sÍ*ÛÏ=“97]P±]ñ,®#®¶×¶ð˜ÖÜ3$‘"oDPÀó É‘ò—TÏ‚êpþi†?è³.8ÿôÕX?T`•gúƒM4=<3 b­Ûrÿì¯6ž{¥žö ˜ð/oßùåÃ×E~©ùç›fyWx%dž® YÑ ,gܯQ/ï $„bóGÀ*$Pô>Î23ÏÒÁûÌ^9ÅúòñÍ"¦H¸ÊùâÁ¦°þÁ§JŽcOS±¼ =ÿóí[ÿßo‹ˆzFœAj"Îè}uÀ¹ÈõO*0Ù—®;cAЛЋ¤šG|µQ㌧zR›g[Ž^ôTMRøüÐÙN1Ƕ5ûØ.ÃdÒ°ÌS;(:4•;Œ iíw¶)%cMÇ  h —Ëö@’òìkÏ2é1\¬¯ ¡[dšu;ç×NLicì{üx[„+‰hòÜÓ…GÜÆª € p¼G–{RÉ`#&²®€à6àu¬7l¢óxUÓÏwŠd/¥Ÿ7˜Ñ v—p_ojÂl7…Ùíj+£5û¶¹¯8Ï]Fóæ’>GSóšCDOoRyT€ºç€±êÖ/—'x–]PÛâ&).Âô„h]™J6æõ¹«Q8¶«ÎMÝ}¢#:Þ°÷"~îäjJœöÚèœíL\嵩%ÆBOÁB¥©q£Àz…[v±ïY_½\¦òD`Ñ1%Ë$>“N§[°×‘%ŽY4²cjªÏ[¶Tz×8xpKë,PO[€}oiª‚­@©:ÖZyÍ&`ŽQ…ÞîôÉ6Ù54‰ÔʳM†mAÑ“M—ý€ž‘CÑ_£uÿÖ><篳«io´ži~ÉAÙGkS¨°0Â2îtÑ$Ûzûb¦·/ÓO ~H(Εc ðCQSÁeiÄg_S1íxˆšÍÅ—cÑóg¬‹öÆ¥X½%оÌÒpG°Æ¢Ç†vu ¶pù¡ýtqü²2òþ“Û´xº×m¾ml¸ÆŒ<­(úìâñ™$º µØR«˜´;§ô¯hòÔGkâBAgMÂ^¯J™5°F¾–’— EKÄB/° ³ì[ó¯í²–M¸`IÇLŠ ‰h•®Sjеµ iquC/Õ$°¼x…ز^}ˆ‹zM˜ÓëìËjŽì ¦Y¤4b'kr+{´Ly#Vr@´fê¬ã…cÎ{ÁÎÕÀÞë‘‹8­’rÙ ÇL]¼×¶ÅtµÞ‹0$aÖ9Çw6“Ú=B2y?xµpšAmXâÈþ*Zµ•ûi4M¿©Tج}̺úf¶‰ N‰g½y÷„jpì9Ç]¢®ŒàÏÔ9«¥^}”S†GÕì¼Qdžôƒhà%çŽòÉ[Rž£`/É¿Œoyñôfõ…ö®>bzeÐ_¹e-eîî¬|QK?Ysc`̶hþÒ³„â}UßÁ‚ëªï¶´ŒÀq‹³Á“'#JyívÙ§Àã™ñöóƒÕî/Ç9[˜qýEÿÛ²¾M"Î(A9X…ím—W8Î|!ø}›µ‚k âò¸­6’ÀE)â¶fSѯöšj~Šš®xç\(¦+¶ÜÊàB¾,´—`k5t=.IZ„5 Â¸ÎáeëèÖ„{á£rŒäÄat®o-ßc8Кbþ·Ä›‘EOiU?Ž$ö8$ò ’´ëgã Þûù(ÄÓTY¿ŸÖÕ×Ý®ž²Ñ9PVaù0Û†Fx#]\ãínIê—»%‘¯9 ÌG/X< ¼. 
CnûÈLŒ­‡Áè-vM‘]|ÓÆg‰öøbWbyžNêÓ,Ô÷Tànx¿miÑ÷ŸfŒ) ä‘|£k†ãM&—Ÿî‹²¨ì’œ„M!=bËF[@YÎ\¿[†SâêUqíA1¦D­k`’ºXÄé˜åQ˦¡æb¯ ‰Â¢¼>NH›ƒ´mK<e5#Oë=‹.}I?¸ê 7y'=.ÎÀvÊLš›öï˜aNÛ‚ÛMÙ“ê–^ë ß¿Èo–õÊHâ Q5MW” ãd[ÈEÓ&[ÂÖ WÍ*bÀÀåj‚C*õ6ªífsÎÄÓ«ö–„⪩”ºy¢'ÞVU¬2„ÕúÉBàBiB”"¿þ„(%_{»¶Tíáþ¼.a+»AD¦)Áí@¶Š·Ú–uÞþ|{}¥?wû˧±tüs£¯æ:éd GG²) C §oú‹ãâŽÅÍdÚ-6ÓŠb¹RÎ/—˜ä›åÀ› °%Ç£È%ym¤FG#µŠušçzë'Ògµ@u™ÂU•)Œm~ To‡,'µ˜Èã*íSÚÌcÄ×§dÞóTÄTËz=N0×ɺ—CCÙs´µŒ¡K#ùtz"l–”j6AaƒÏÞ îeш°a€_Vbèàcy{„•ÈY„å`[Ïò5†ºkÀªÆe]›xÆÉO©Ï»@‰â*·•&RQ›NUlÚ›¥ ã·†¾Üq1Ù¶]ÀÒ’7ëu%öŠŽ'Y>±ÙÇ”©54ÝÔœÓè¿æÔóG¬cÖ` !fIऀõƒ¬ûHÞ÷çÌ瑦µ¥ACÝYxŽp’d´ qGÙ¼fËô;R aiõ}?ùŒÏ©ÊBª)ü˜Ïai8Jí*[ø™nNm…Š5À%§Ÿ‡ä%®:ý<Ä­Ûø‰]fXÊ¥©„@a¡ $êÚX³¹ êË@{÷=©& 1ú°NM©'82B¬qkïh%È$óë Õó=7í¬§$ÄIÖr·ºUq·Œ³kÅcŤ’W`žãü¦ùçO« o!/://#™V†~ˆv¼,„è p¤&…kä¦až÷ßmyæt=ž‘±l)šñY"ÍId˜·”½Lzì{Ñ×6ž(^Ô"äÇ‚µ¬‚]Laó)7‚ü”€¤"m9p¿ÅžâáözªøMìß/á'¾¾¹ùI­&áO|¸ÅÓ-ÛàÙï-fû:£“‹‘:ôp«´çà_,ò|÷fK»%¿uÛ„áC¦¹Í\5ª£úït&îÛ7᪮õÂò¢FuÀrR¬ÎÉë2±jš$…'²®j$¶½¿þõæý£þö“pyýåº×ùä£-¶äâ:2•Žz”ÃbÅD%˜ºž‹¨?™¾¶ÑFã²ìM¢°¬ºKô±e]¸~¢Æ`ºÈ™–ç³?xÜèŒK,†Õb4ÔôcäƒE‹‰ù|ÿYf],†…Ê…MÏE‹©8¤»¬è ë”Èé{蘚/®¯Æ;»,{Ô²<Ó9÷fK–Ð@©a›?’åîõTH±è›ù "Qу£º_…Kvpm&±K7ô¥5'\ÔºžQ·5l©UwÆ£-J›V‚>[Ðto¡×ïw?Þ”ŽþþVSHEµ”½\-+ÏÀ»à,¥Œ=å¹Ó÷HŽk*죭0ÇÒê3‹&n·ýœ¿‚£,mäL6Ö^[Á—¸]6n]b79ãñý2b˜&Ñ{ØÝ7h±\0^´ ^œT5ú ö±NÕDW4$)uæxqþ?Ü|üüÁúšŽDœÿk›!qI H -e{o|&Š7ËØõ÷‚{À'¤¶wÁn6#§Š8)éÈTNÙ"ÿ\]â$„â¢ÔˆYf• o9#F‰¤/¼vAÖA‘ä™3qÿ7¦šÒ©¿yùáîú·~…•8Äè¥"M&ñ\´ž©/õÐzfòë”&'ÄÕÖCƒÓ °â¾u|„—MVÛ%›‹õ@þ¸UÝ|yûÃßÿvX]Œ|ñ¨/øéê㢶½®Ú×­Ê^úêñá×½ø‹wÿúôö‡×ª×¿÷já&}Ûzýì-æõú@µõúw/Ï·]Ojþ$«2Âó6>[ÈARHâShkm£Lkæ˜Â"ŽKÃy<1Íx¶L»ûÒì‰Ù§Ô4¶ñ€FÊýòºyöˆUm¶ÚŽ0y¨¾•îfÒr±óØ ¯nC€Ú2jÁ‡Ê6ÁS[Õy«ÙYæÙ§U´! *Ži·øç¯¹Gd› ¬õC¡EêV¤‹»Glš,îd/˜c™Ó”>$Î7##t¼gòýß3ÿAέÓ'TPl‚•ÆŠ[ð“Fr€a·–äÉOº“¤´êlh*::5±ïûæ'…ì C6}â "ê ó¥“ d/ûfê‘ÅŒÁ’„@qIÃΖi®sϸýi ó§‰íþ $ÝÁýÙÈKÉÖDÛc íÊÕGù-öÏ8Òß§yü†àûþJíAÕñ°h»Ä"äEæÐ.ýòÚ# é¾Çv‡RôK—еI¬ w9·qF­"±$.â®b³§ó¸kÒËOÎÄÓwõµ±©vûxfikØõN¡4·Æ†¤E]û^ýøª«–ú.¼vŒ8¥Ù ²Ã°Êë§…¶Ë¼Þös ‚E+Sq¬NÅõˆ ¶9ËRò E‡[À>û²šm®úV×C\àE­Uø£¤ÍýQ¥è³î¢O"1¿Â/tìÝ>…WHªû¶a½H¬ÄÇeÉuß·™%ØÁ¥ø"~z—ñD}•·ywñnK,S{i¸'‚jÒ1úÕXFõeÅ$i>V|$H¡„eÁK®¬8û²*,s‰5Tà׳pÈï´Ep‘`·æë0§K!i䲆±Í’‹éЬ$ñ%j ,C­¶ß:C'£R\ŒNm¿ukBn¡z`oc.ë‡,C= ³³ÅIE™hKφMØZ‰ÕqFü pˆ9AûàøøÚ˜˜j'G'9›MD]ÀU^YHŠßέ„;B¤Ð ‘ÎýÚ$‰ô„¤s¿ö,&EÍ|­Â$hê s6îq5&ñL¢l×P“p"a8 I)ßöüe•„¦ãH²ËXJq$EÀ­!IÅx’(¥ ¦w’;Bãk ÒFmÇ å–‚Övo6Oay¸Ý‹C¾ÄnƒÕÊcÙG(„sü9Éc«T>šìN°…Ä›Tk”þÿ&Í_Lhˆì*j8‘^Ÿ;9»£y&¿N×ÙŽí×=Ùm_U)CÊB!1x’5…Ð?…l/¥ûMV'MRAãìo¿sáñá×;ëqÿNû”o<­ð1¢¬î©ßqi#9ÑÀv›õ 'e× ÕR€Uz5ÅÛä‹ÅÛ§ÞÅ£ªÉ^P}:v¬mØÖ?/h D´êê˜Ã&+’Àw€Né]¸ºýhà¸pY À›-6åƒùMy>^Y"1¥ /Ðe½‹Dí®ÄiéÊBÚcò K×ÑœJ/¨3åFky+1‘à`+”¡w y ö"è2Ÿ§&I8’ƒ/AºH„¸Nÿ-ÜFº( 6ÊdN‚SÏŒE¿< Uˆš%@ÙJ²{9uÊXô$'¿ÀH¢'Ö¡°øÍ3ˆ^²ËؾäU·_HÝž"¡Ð=s9{0o©d@ÙþF[òÚz ©¢S6+…]UëëVP÷^Au6EÚR³Ì©?)»2CŒ0Òâÿ#ïj²ãÈqôUüj3+…I€¿~µšÍìçj[]å×¶å±äš®ƒÍædD¦­’™d0iwOm\¥’#‚ø|h"yƒ[eíÚœêñJ?„í©ÙOb=Ì"t€ªårF¹ É€wûlä”A>^Ù…æÀ;[W£s”S(P?²÷ZÔö\cø€ :ŠŸšx‡%ª2¥`‘1f¾Ü}|v+æ~ÉhîÕP{Çë{ÿï›ïÿ{ÝFê÷$z¸Ã.¥I3¤}=™#B<|¾}{á>r]Y€9íéÀ¼(ß‘fãÍ+¬t` Ô帲ò‚…wuÇ…Ž|V¢ÉEüž%¸­YFCPë¶šÄ{&-¼­­A ¶¦Q1Ý)Uå!%!l‡PžÑøF“òàߌ„è6W crнÃoé.,‚·0Åœ#Ò™-mdŠ“eb”øãOxIµþ~JYúyÐàAµk]©) \«~>²tî’ƒ*Õ(&ª·»fÈe|¡'"¡\°“`¢U.s€šbÜÝe è3§]¯»::ç¶ «ÆÝ‹çÕºùþòxcH».‡6Uzoògº2½5º°?>¾žg¸?Þ~ºýÍt.¸Ô›Ÿ³ ¯–wµäØÿÏ"AemCg;Ý/À‰·½îöÔÚςȔ«[«q²Ðbµµ3£”z;Ÿ(9ÀÖÏX%_ÛÖ§§3©w4Dâ«/бZæð;«0I¸«Ööt¯DFÎyĺø"Æ[Æ{Kó#׃-Ê9`Uýg‹$léÄ™!_=ØJ´÷8¤Ñ™ Á–qJ,’’Ú¦À4´Éš´%ØæmÛäÏZ‚=™Iqïr…ù,Ë3N˜éÑ>+Ì4Ú1aðZäŸ#Ä;+$Œ˜D· ÉÍ ^lL* »ºÜóv/´íý'd?ÞÍÃí*ïk6½Ÿö Þ—cO—CRo[Ô4 ÷ò¹F^QBÂÜvmTÅÈÊýOÔrqäøÉp#fIÃ&µd¼Â=AÄâ=Aö÷Ùm;<Ôbç¦.VtY± en‚æH–—l³í¡g?E!˜B~,@àÇ-ßËclE1M*Mç´RŽeÕb„¼8M?µØB™ò3¼óå36ÁàÇ8YÀ©Êí°èÆE€|q£qƒ DîI¸|p“×–»­=_î?ÜýÕÈdÖíáõßö£o¾kÅ›o³ÙoÎÎyW”.g@¡ sy²$€Që2gª/Z“9SÑâ 'ºh÷¯FÐ5½oöi^xa“ÐôÄ æEØ\îJ©)¹½>bV>|ªq²À“„ŒìOÀQÐâmÈ‚Cd=A_ÕPbßF& =IêÙA($û'ì‚•~Š»5N:,Y 1±´X ®ŠŠ”V´,4ÈT¸Ú¦´J<؇5·™ ÒŽ~#bóªd¦r+äs° òÒ¤I›<€ßÓW£;.ðuq²ÈûªˆÆZÅ6±4h[|ˆÜÓ-ˆC‹Ûwãh3ÛxžtØb¬ER•mÅÖ¿ïÇjáYL“ý¾qy).ŸQ\Œc`L"E_~6”#&ܨ =kYêÖ6†> ýaÝÿ†æ­l3@ýòˆæÑ3æÐû˜š¿”:Õ~I¯Qæ\Í ­° 
è^Kã–‘=»Úc0dµTEe¸…÷o»Û³P¯(;ò~ƒya9‚»ž§_.Í/ 4B<|•¤%׸âºÑÄ3–lÒÁÒ‘MR" lööÍ+ÕÈ܆ÄLu¶úÚÎÊ%œŸ›K-‹ƒµ8ÿ(„Ⱥ†k‚‘ͬláš±kçYœÁöÄÍl‹­lã0;IL Þ>k-M÷“£÷e<­‰q:áœÖ¯aœ×¸$mcÜ­5ÎÑS ÄS‰3Óƒ£?lJà eÀÈS ¡Îk³ÝPåµ1È–Ôazí³-§q,¤ Ä9óY8–ÎW¦áfoRÂ<¢qâðÛÃõá»×ÓuûøÕþíGÃd‚oUDª'kÍþæšlh„’l,É3$j³Ï¶—‘\[60wyRH‘òO½Ï®$ä[ÚT[Ü;A–ªx ÝÄ‚@#ÄÿƒÀÕÅ#uõ>()un;|xmÿ|ÿÞäÝÝßn¿~x\üâ8I)ø¢âÔéÕ†ÁfZ/“žˆ1Dì«Í§¼¶$0ôHB"3”9îÐ~÷ܘܼ½& “EA¨%˜Äç]Ûeƒ5V¿ö7°rß–âTöNj¨"ü××ûÇÛ @—æo~é&M5ü‘¡çž…!šÜ¥òrJ³>íÄSÔ7— µü¦â¥hyTŇ;ýR.Þ³<èße2ôðêo_î?¾zÿéæãÝÇû/ïôMy_À¼‰ûl%×ÖFL;à(ªFsýø¯¼"zŒ>b>JÅÕx W© ú¨Gô€º:€Ú­¸“¯Ø ± O6ºüúD9_[5ÂÁΑ&NŽ4ÿ¡Úxz¡¾RwUFêZØ1Ť?wBÉ9šd8’®48ÇP+_»øj¹uõ‰>]¾‘&’¤B×ÖFÖ¼ƒo <µˆ]sh«ZW[¨"쪊9uÜ'Ï…”À«/¯Y‡,9Åà)¤Å– ^‘S¨{Å\æZP§Ë+†)BÀpu=TÝÃ+ò$ˆ áj`ß•AšÅ¯†AHàC41‡žÎ9îŽñ< D|õë«Ç|zó—þmk>´ñå~ž¶8λ= ãæ¥aO¿ópºV-&Ù¸Vmý',ö§¥¤Û÷§­ÿ‚_í¥Ï-ÜÁ”É®–ìXt_=—fD‹_ûÆU´0®"§V(ˤÜ`…䘧\4C©¸F{yšú¸ Å Õw·<›VY>bÓ´Š›ÂHaE·â£–ÃøµKó÷Æf¹}û»Iá!8=>ùðíwMká]ÿÄmx-”¼…µqÖêÅ¢EÜ·†òöËݥɾ÷vªßïoL]ïíD=ÛÍãýßïÖAGïí>Ïs¥u“µÍÐ3ÑákU@W7 ô¨ÅÅèr½N4ÃéP*˜³Fô†V”!Õ £ù¸5ãå=ÅyGl§ñ¯¶ ×¥#d«§?”™Ð§&V—é¶©e_™ $#:APjhBðb]Fðû'³%ßé4DFt È{µËˆCåb€¸)ÚSèºèT ]mîEM½¨àˆ"Ù²€¦ruá;"/(¿¬¡•p">ÓeØ·xBqò|àQ}“K{0×ÀèºÇW…ÊCù"ï wúlŠå|ëÊË‘–‡Û›ï}œå©p¹ÊùÓYø1ª§©£‹T8x»ô–Y Tf«M~RÔs“­Ž¹ªÖŠ¥tnA³¶Ú¥> ®™ÌÁ3žHü¹Éœdíš$𖪟l-NAbÔ‰I×åÁg=¨bæbcyÆðI†t¡‹E¥’¬‘‡Ìî|7€7™HÅ;džØtr€®È¸ëBÏ÷ïVôUÜ<þíÒuȉé—nÚ×uÑl–ôN–”hʼöê­‡^ì´K„ßÅ6”þ†jZéPúÅàë‰8#Ì´÷õJ\Ó,6F+1îpÀy ä}Ï{jå±d½œžÿý´r½.ŽŠaWUDÈëU- HÁdq¥&¶Óh˜þ9ïYcK;Ø|yWÕ?,öN/(2Býü«cÎL×Ö¿´OeÓžÌúÅ-ýx÷øûÝׇ/¢÷!M¾xÆ>…é1Êœ¨§£Å²H° ¡â«aÄ_’{[ÍÓD@rCûÄ RÕþTšž^PpP9‹£±¹äIŽ!QJ½âã íR‰èù\Ô„ÂÝ¥v~’ïiÆ\ï_ÊD/2òpÖRµ8Kýæ2Ež9†gð)ËGlÃÙ#_†PlJ„ób`fh‹pÄžYmÊ"Þš¢+ ›…úå ív¡@’MB‘/ÌÍ'åîÒÅÑJ›.,Òs±X<£ «cüF_î×ð;S¦îâæ|j Ø0õ@~€Â~X²¨+'¨a]GjÚûŸÿøÔ€›º纹‰gËäȳsgË{—-;ŒÏZv~ýeWùKy‡áf"à¿î@Bõ¬}ŠÜÇ`JK”þÔ eŸAæ1·U—‘Bá‰>Cv‡Û»B$^ŽÐÍÄWp „E×À¾wö:PÚMëcH:K»nÊ µà c ýênèуdAeT–Í—ÖÔxiMš&p¤‚–Ø0äËŠêÏÅn•§“µÜZ+NÊ(ÚõÐ^-6èáÌyÙ]´ ˆÆ´[R%F3&tÇhñ¥B&-QZŠ›¢´U%ÛMÕ–‹v5U·½l‹÷¶O·½ë×W¿žµ8&¹!qÞbqÌÀ÷¬sW¶#íñ ‹Ù2Ü`qðòº«ùÜQK‘Áâ`ÇtAUÖœ*×êÇ"‡ý“BÈ0ƒ“2æ3®?¥qvFƒà MçíK““hÉé|íÂøX&«­Oçk÷¶C}ØQ!ûõG1ïyØÒý}£ìëÛ¯ïÞ?ö¥6\€€5o¬‰«Û‘iŠYA¤jÁŽX/Ëa ’ €u„à(9^ÛÄóðºL!&Ý·ÿ»ò,/N¾ýð·/÷_?×®ºž°ÇE×(]>n°Z ' l’›¡s­Ç%r_¸çj¡uû-שâÏ2G¸ÞàëøHUŧ2tÜ‚~)Nê»l®®øüâ‚c|lctNzZðvNEŒpìù}Ûˆ…DCw:9 nð$¸Áöeè?ÎðìiMrÏô9E¡œ³âæY ŒæïßΊñút¾!C’·Kæú[EK‘‚ÏFg­A-R0³r±ø@Á¢½XhP 4ÐP{Ñ Úåm‚doèK€veÞ><ãÜÍJz¬(~[+5ý§ââ©aâ¢4‰%Uð€Êž»™ RL—!/öÙæîtMÅy‘=×)I¤¹q`Àαç6e#æyÙ‚¨¯ØÐ ka¿’ŠÅÙï$d?$Æ |myˆÒ³Ã2B ˆ˜× „£R<|¾}{w!ó™s0*·äœœª9§Ä²+Ycˆ(È”„×¹r#ˆ›D!ÅÜ# ˆ”ƒfÝk¼@œ²xkFθûòø0R&ÌÂç€M2u늽Z ¢ ‰œ¢EèCE¢žÈ.Ý:™mVú±`󨿙x­jzðhç#~оŒr“R×Vaó @Gw/ 5J IÂ’@®W2JJ©®†å¨mA•!­&ºquU€2+mºbÜûÆå«ðüÂCõ.KÓΘõu›¾0¶g—Ñ?Ü® Â8üÒÍŠºzZ6ؕǜgŒÙa§íäšÅ)%¦–Ð,^^Í} %–µö‰XƒB³˜Óµ•–3ï™Y[ÍâDˆByÌ€BkŒ&Ô£zW6¶£®³e6'MÉb·-þÚ¯Ì{ʦ)ˆ²IßøŸŽŸ—/g_Y%U7‹œ+¡—Ÿ5” ”‡©ÏŸE¶¸*%y¡´|¶ñ³‡©]mG#Ü3` %‹øÂNÓg¹zYLÅF¡ºH$äªHPq¥ûâ`- ^Ý–™u +î<†p-A‚®î*ïË&ýùbm)Èø$ÚbX°Æñe¯ýD”!«<=Cˆ³œ^U"0wõýš³£o4ÖMÍ&fÓ2hà*„ªs±yæé`Mjæ­ß,«˜Æ>J™6©1wÍþB ,9ÔÞ ›p½½Ø(ùùý056ÚI¨Í—§ªsuI”Azìuë¨kD‚:p‹HÆ®Zy@ÍþþŸ{XA>"NQMûê%¯“Ū™' T¬›?QhúF¼¼ÊÌ™tôcŽÑ“Æçœ¼qu¹õÇ8~³’„[,FÕèßN^ õDRc “=QÚï¸-—• ™b¿Á˜BWï£hIà><€qƒñ•íu¾ªðeE÷ƒ³–/È'kðýþUöw¤oìÃÁÂ’úùÆÑ¾¾k5]Ž9[~²И±9dËSHÄT»Ùä)™ÞR¼Ì8v@áò"‹§£5mÎK ¡qÙ^N‰yƒÂÙ#Ž·Š+aŸ‚_¶îÃíA<-œ`au5äÉB’\ecJÌ—Yåg-vŠ-ÓP8‰0‰C­À²r²|ĶÊIšÈòpmÆíñs™Kè‡D>JROoFÓŘǡjžNµß|zø{–.‡ÌX”ªd~¦:~”ªªPiqãò’4C"xûê”3­1ê%ýí‡ó#Pzü±š>+ìå±X^aÇ%na*dÎ5®æ"W¿¬Å¦ÏÊÌÞÊÖÆ³<Ír—Š'—¯<ü– ŽßaŸNÙB‘x¥Ù§9ý[“‚ûž»Í®CÁ1þ¥›Þ—ud¦·vu€' A!Œª{¼ QŸÁ¤ÝrÎk š*º•§˜¼êI·ô:Sóx"HÇBÿL_ãËéÊê¶ï8˜Õrƒtr½8³EI¥z½ˆiäõ"å&¸›VÃÝ4iýYFb 71òåâ»ýÑ[qöCu÷þáË×ÏþÈ¿~}÷ÛIãöLÇw÷ÿýéÃýí»uÛ8$æódO¾‰'l1Ÿ1¥®¶v‹Îˆ:àÜWj˜51`± OêF1§ª ±¼ÆõH–j‰Ì9è*JÙû 7i…¼ƒæyï 7ùï¼ÊÜ%ædùšÿðó‡¯&‚µ¡Ö†'4.\>NÏ_vF° yVÉ$æ§ä‡Æ–Ö1l wƒu$î .S°X2àèàrËh¼OŽÜ`!Ǫa$*uÇ,)Òe}‡¨½°: Ÿ<ÛÈÿ®ù? 
9ƒ@Jë‘gªæìZE‹-kF«( ‹Z`‹õ6ŠYXhƒ´Ëî úuJ‹ß¨K*--ÖYhïtõykVcoU‚sÈ[<4 Ñ–,D``òÌ=œç¡÷¬nä¡òþ<<´¾`¡/ð´ÀåŒ’ÂØ†Õ¸Uûeçå@bÚ˜Žæ]"-oÓ]ËxïŒF÷zÁ{\œ•.Zì ßy“£eÖ,kHY³¬ël$Ѹ(Ë'mrœ„õ0‹)±«ŸèÑé8%'^å83[ʼ­ˆ—ÃçïÔêZ¼ÊŽSÌ’ uœÜ4 +ЪWhû9Bð¬Ûªwûsùà›_pбA$œá ¥€cgp›æ;$…föTÏr4ÆmŽo‡Eb)Д-¤Gù ê°=·X—ê d¹é¦:ƒó­ç‹2q`Ô}ê°Ã¯²Ø×Ü2bÃUSÍ @±=sA–®«,šØH¤5^b ´Eóx÷ÔÃ{ ¹Gð[wI/Á O‘âeèmãè­¡…Ã?+Y8×r„À"ÝR¯³ï­‰FG*ÀŒÚ‘3IØ´=åŸmVq™ÊþÛ´Š¯}¶L+6,Óš%ƒý2oº ˜Lƒ¢7ªîwý~wûáñ÷1·d¾‹×—&l´»Ñ”¥ÍÙAJûšj ÓÈX0´ÆhÞ6媡Ôc'YÅЖÆ¾¥ÞRkáþ$öøÙ0òâ ›:jtbÌÿû? µ À ¬›$SÏ0œßybÔXOàbœ¼Æ»ÒŠI1· Ô„„‹0|‹ãŽwÙVL,«¼5Sž¸Ép¦´··6:›W>õÖvd¶¼ö¯{â­ylóààJý“á>Ëžì—y{øEn;d3¹g%ÙÂ<ݹ½äÿØ»šå8räü* _|1[ò˜˜˜“/{p„ßÀAQÔ,wDQ&©ÙGø±ü~2gV—ÈbÝ@¡€–ä˜ù•(²º‰üÏüòÃͯ/Ë7·—¿^? ª­k…•Û)]¡õÑVêâ 1]ã½oZ“rŠ8½Æì౜<·Å'Eu¨Ñ}.8y¦CUàvA]òõ{íÕØfðNåïÊ—@ƒ«†–&Œj’tìXøãòöãÃåíçÂ.ùÊGlê\° ÍÉzËwHeЖîíÏA<“±Ç؉½uñ+: ;VÃ߸¡§Oàü=U¸NT)shs¡%ãBgpóÈ"dœªX’+xGvØ|-mqšŠÁ4}+}%‰/Ó^]=g¸¼nTÁùÀÇ2\þøÌUAj­^ï-«°\"ëÝç8Æ÷— P‰9 ÖU]Ò\z5ÝI„×ùà.e}ÿç£Õh.{­ÔoLu&g曆\â·\hŒÿ mw–2vþs_Ô±¡\ÇÖËÃ@Ô%·ë L-ÕW²½õ&,›=žj,ª ¾:§×ñ¤Þ˜ì:zI§õ†KEzÚ,³‹Âž–MÏOØÖä´SUš"×F[Ó%kL [.m)^ @ ”zqq{6 +M RÚuKˆóz( mùdGÇlc}.¸^œ¬ÂÔ •—PÕŽûìiŽú÷FSºw˜ØEŸ¼“xþʼnf"îïì4sÆûþíÕÃýÅÃͯŸVöØ>¿ãz“H˜6‰ J ¢AÔdý§ÏúÄ“äjê0QÌèáÔ™ªÑÃ@%¡#Ìc >Ó¦GeÀÞÚ±zì«ü?R#ìܦ‹AÔ‚5(*túÂ!vè™ÛFžü‘· ¥Û­ °sðÔ`}Jëí9 @;“ ³pj ºt©ù:[œVijÖh<úMš öÏ¿^­i&SÔõ ©4’®_t AТƒÉ}~Ø×Dí\L_x{uuèðëXçôŸ|ø2'@üÂ[ÿúäIÌZžýË›”à›þæÛÓ§Á™MuI£íqNÓBÛ]©°ßÙÿnŸ¾¨÷çýÍüþ4(BåS aº¸{ÿõñþËõоü8”…3ôkoÚ äà0¹t6Wúöûæ_¬r÷Ð0pÚi^aÖcÓdi"ËXEègÕÉÔÏÍSþ“wjéJi<nöR´è1äÁ‡ŸHÒÅÍsÖæ½È‹ÞA#Ñà4Ÿ’9HȤùŒQÂÃi‹ÎÐà†]Í©^!‰k nÖ¨ƒ£üLè¶¹íQÊt1X'›_ àYkÉN´ûÖš±ê¦°¼N k†'E‹Å åRû 2öÑ êŒZ2X£T/%ï6é„£U‚R³*ô;Þ—nO©„®0,ÞWá°€¯VßÞÅ;v?Ø; 6ÝŠñ 嘫(3F«6Ÿ¾ ûTÜAdYp×–9<Ê×àx“/À¸ãAÀʺã¶6ìΪ7öÈ—fX‰yýîæ“©We×ãýÍUä­à¶8‹ÛùZö ØÇ–¯0G›«Xë40ã„ÐÀ‰m®‚Ý׈1BÉUëi,v°Ì&Ÿ©ÛÅW°×¯Í _CˆÁoÓÁvŒÎ€ÌéÈ”RÁ¨CÔÕY€Øyìö{Õ^G¯ 2«cºI%7„)Ö±ǵ£©sžrÙøpõW•([Kûv¶¸Ÿ?^êW¾þÉÃõ£e3ßw¬? B•öTêK4 fëO "õP,úÖê/F·ªÔP¼!5EკÃ>ÞYfAä ùÝ—’÷ô›§]ȵîñ9£r¼l»ÃaSmŸÄlô˜À”H ¸ºYjU²WÃô~#l,áò!„Pßgé:Ž;®ç0Å[“‹‡ ›!yì[Fuî,eÔ‘Ná±[`™;º6ÝÄÀ‡À;ˆ>œa[cÓÜé †ìŒß3AºÀ”êK[~sU¹'X»äé‹a€qDÜ“Ä3ˆÑ ñÓ q÷E=¹ÿür÷xYòI^~ó¨¼T‘!eñŒ!6ÍňB kK´D<‘‡: à–´“ÝDŠ2‚0O1d!–Té ¤öÚÞ¶Y‡s éàªÁžÌûÂËÁR7eT‚¯ûÂ2]ãUîÝÚ[Eå1¦nS !ν6}qãö)žÁÊ;Ú‹_?¹Úï®~[çh'~…ÒEiÀuïláÜê™Û6zõò¦;AD.•Aжo•/rvùÉ3uzè^}kUòL²J÷Ú:ñ€Ûä2¥Ñ£·H.³PSOl­þÈR‹ä»&‘Ä×è^B®V¾íjá(;9¶÷&vÎCÎ]Õ,Q´ÑŠ(þ{(âì—?<êÜNy×é÷ 8‚ÜΑ ÝËܲ#(¨÷GpÊDì¥íòpp± ÁEçÊÅ›8;̯Öq>Q¬ƒ>ž®<Îx°ÕêXœ`HÛä7Ê`uìÄ€Û_éã‰O üŒp~¼ƒ&ö’JUŠ„S÷"N&9Æíj×H¶MÆøͨ^X"FèÛWó‚–½vñÕQ²¬eSSKbð]ªXI”9{Óf|‹¡ $ˆÅ&¤¡¨SÈ‘.Û§ÃP/' kZ ;ˆŒ†B£«žJæ9ßrØb¨öɜكC· ^vnÃOˆ»¦ BØ<rDnòˆzÞ„——cý:Lç$@mdSïY¹-¾¢ ­Ü¿MÖ5ÀðN\(N¹бiÊB6.(ÙEÔm¸!x÷5¢Nœ6…¦ ãhQW*ÉÍ€Ÿ å b!F ]¨‡o;*rôn(] 7ùɉ¥?¶cܱ4ï§?W:qFôXL}¾¿³ƒûƒy‚l]†‘ãÿ¤ä (w“Êר¬Å3\5CÛšbl¥a·Åi /¡åE&ÅùÞ¯[™_yqÏëÑ£¨¯íYVĹè M¶¹q)Öí˜ÒkÝnGNÖ¿Â}ÿªãÜ*O1¶' ·(’¯aç”"‰)6ózzÊØsP­mÇë³S/Ñ årÌdP—EÿÛ¥½è§ËOW×o šúËÃkf_¼Bx¼Èd5.^³û"kÒ0ôˆ aœkgX‚iÐük`?£TΆò¯Ó=üéç¿üëëµ:1¼ùeoJøî·ëŸ~¾yÿ^áUŠ—øá¸ëw¿¼4…“Í i(%ú» àã.©ïÜ·íò)êÓ×»¹ü¸‡£ïƒ%REÌÓþÃDÌ-M”jFô_qØ„´O.ƒ´ŸrÏ 8 ±”ÁfD/'ž÷‡ œ…þ{>MÍ*ذÓ/XïÂbeÙòÛVÁšL£m¨ªÛït’ó-©ŸïŸ÷r®¢ýðæÃýÝ­ò÷âVMÛý3§Õ-|ÉÌI­°Í–Ôúy”dr<Ók¦kfeâŒþ‰ËÖ—AšÎC [u§”ôHyá[&u¼€D/Ím˜Z¢(¿m‚.$rãTÒciž[|mãžNÛgÞ3`©Æ²ì%5AF÷,÷€‡X…‰<ô'Ëâ~Ü2„¸}‰=µl–ŒB Ƀۼµ‡ªÄiH&¤(z}AôôÜ>fÇÔŸNVµ Ž£ Î{õ’UæZQ²Ð ŽÞS1`N°ÔÁG°¥­n溆.€(ð.„àØ¾¢øaËaÛ‚ˆâg[ªaU u¹ý¾²õ H8kLèVT"ê*øæ‹¾à§ËÛk%­½îŕƉŸ•X6×ü| ¼ïñŸ~úyí•ë’“<¸‘ž]Óìò./6šøÖ ÛåUô>ç’]>»åDÀ;j·¥&(-SuÞ gUÙñfkÊÕÖ4 ¢3@­¢5eN{hâãÖÔNžÅŠX­Îœj,”ÔC[&k–ÏÈî—7=àÔÝ“t[…©àw…VÃÁ`íùc̵Ù(ÉVmÈx;L(ç³Ã*Å¡JñŸFŠ…I’mx/­¨šüømõ½ Q ø8!Fõ¬~°¾ú˜3tz{4j$,‡O/ÙS³†nA².}mRO´>aÓI‚E†Ç•jÌsË%ôÄVlbù³±¾ÔXßK; kºA¸F†¶r"¾.'Z‰ñ°däÂ’íù*–Œ¬”¨ì¡Æ,¦ìâ4åÄàvàÿgéŸ.Ÿ°©˜hgœ8øsû±ý w0<:‡ö æ–©Ûó›ëR׿‰Ÿ ÑKf¥‚mñrt±Ç^à¯Ä=Ñ&~вõå—ׯÜnº£”u‚¡º„¢NìÜ‚\=P]LHè¶§1¯¸* le=¥ÿæd„¬IF°~¤Oåd„øÓ»fɱuq²ª\„¾•­ëªF˜í¥ŒS^4S2zÎ'$úèðœkž9|‹ìš-$¹ŒlÌ(¬üüeAâö,ÂÊß”9(«ªàZã;›e¦íyÓX¯ªˆ +¾¨ª4î E dûÅr»Ï'«SU¤±“Še:¯ª 
Þ/C*]®¾oŒˆìÈÿÝH¿2ãPGnÖQuü¢PÙE9Õ}îà|fÜ'#RØ%ç|ŠûíÖçô²!6X„ࣧ(|·Ûs^G5Ó…¶Š¢Õø˜€Š™ŽÇÁ[Ц“¾uDÆz¤Ø^B9z´}Oæž1*9Š”ÇbŠßbíí7[šÓ‹Fì½U»"ÓØšÑ½*“‡ÇýCw¿Åeú÷F­îÛßýN_ø¡½@„1U³ˆ-@£à¼ºeλõ¢’õ«Ù½ CÌ>­jÃ|¤b¢! dÇOäéR Ò·N7Wƒþ}¥l{6„Ü«†0–ÁŒÆ”l³¯…>!Ì6C˜Íë ’j3&¿ªdÔªJþ{Öñ?†õ|žœòjÆö¥µW€Ì÷lKõ§±ä'²OÎ¥ŽF.asH/Kó®K¥™ÒX÷^Å´ ƒJ-5Iϲ6p)úD™¦Hå-µšiûþSÉÒ*50”ƒ‚lùvA¸Nkå\ŒÒª¤Y'ª–l­ ':svnE3Ì›icúü‹u®/y¢±¢Ú2ˆæ%EGC‡Zê2ul~Rþ“Áº³ÕÉ@Yc¶ùiA“>ÍOä’®ów{á`¸•IÙÁÞ©z½+L¿_²»Âl¾œûŽánŸTûöìM²8]£BÔÿŒìd¾ïáë,¼Ïw÷+wJ]v<ºl”ë¯>Ÿ¿À¬%šÀ…pfSɃM¥Ñ9Û¤¢Gö¼P~Mè*ÄX%Ä~?!S·Žæ¸‚ɰhˆ¶C°)ƒ³Y´›Ï+mYŒCÕ`lÙki#Ôõ³7iA%EGý'ŽTR13ŽÊñzÄìøçƒwÂ×vX_èÌê/ì™ÈR.R0Fq„óêÏwÍuÖ¨¿à}£ú›¥y(«d„+N)Éž ƒ_Gèž>º.¹–Ü&‚ üÚ--[°{² Ie§bF2@B,+¸”…§[œ¶OBÒÒ7?³†Ý…<‘RV\L>ùÓƒ`!a_ðèœ)¡÷ta8Q¢ ̉‚ƒáiãÛËÏËŽ”§Ñ¹·—_Þß<®sô§‹ÁI‹ö ÑÖy°`S¾ø4}úyÆpA(·E»T̃ãl±fA‹.ºÑÞÚ;Z×öÜCèÒhÝhtÆìC`õxcØãv¼öþ¨ëp,AUð»>UûUì[×ÖFèLäh0G4RiÑÃçË«ë,õŒnGɆ)³¢¯^¹’ƒ<É>|•rõv¬²ç“sN0­T®§éØO­Ú`‡å"¸€ŠzÕcÖç\¡ÏiÓ“Ìίҫ…KP#˜Ã£j¥3s®ƒLLÖ•Ïy½Ê]õjO'£W)ÔïY-k†‘ŒOC:•¼º ‘¾©Bý~T$@ –‹†Ùn ù^U¤2SØÏqV‘Të.,g‘ÁdèÔ'ä]·Òõì i䇫Èàó- >* Âé‰;ÇáO]y‚ƒó¢ôþº2‘ îôšC+°5¾~ç¸^Íí[fÌ¢êáTHÖôOÔ; žñDºÍ ˜I”PTÑI®˜ðÌâ%?S£—^M ÷ϬWÉ׫³HŽ(*›2cÊ׫¾¯^e×9ßÙO; å3Å!…@Áù02ö¸¾º·Nºg“uÍùíÕÃýÅÃͯŸV:°`Œ#Õ-IjZ?ËÚ·-ôê ;d¡v e·vî½®ú‰8`Á¢Ý•íD›¯FhY‹©aƒ…ó¸n#È<ôÂçæ‚¶Ù^›q¡2ûÅqÚ¯r:=h˜0›øy"Aîë5@õsç}Øáà~P%s”,â°:/óV‰#kB¸ïü Æ×–7 ¾ µ‹B6Ž» d-â h/{ÉFfø;€¶eÊš›«é³'$ÏL—î:KL)Œ´ÄH-¸Ÿ>%"ï¢ B®#dO‰ÄŠ…2”J eŒ¨.Ûñ» ['­g\Î!!ž8I8²‰¦lÛ>S×á˜DUˆÀì©;"p½*ÈhÆ4&D’ ^††HSSáåÕLÏj¨“u‰~CýL[&+‚%ÕqR#Ñzjb‰"âcÕj¯bï±úkÙ`iA¡^íê{>s €'o-ÿÊ¢îE1œöX@ÝóÔµ7Oªº½£úÀ&U1¹‚n8¤";çbž¹ Föýõ©ŽáüðVú>²‚§m²;qÚ(DVòjÕåÓ#¶l„U+ލ bŸ2û¯·™b ª c_ž³vÐÓ-ÑûÆ,Òâ4eˆ} ¸“_,ì^>bƾPÚý{Wºɤ_¥áß[)’q0¢1˜_ ,Ø}†¬ÖŒë0$µ=ý`ûûdÌ*I)³Èd2Õ‡gàñ¡.±È¸#üÂÖüköÇsÙ9è*9ð-XÜ^=Æèj„[­D¸E0œ†òÜ&u{OtR(€ÝGò/G«€¸E !8©Í=~·}Þ ¶Úã=ïãÆhƒækußOúm01"²†¶f=•À¶1ê; Ûvσ^#ÞzÁ…¨ÜÝ74AÂ%nÀé9ˆÜQ¶)áè*õ@íßÍSê׉øcBäVѾ§´å‰h„õè¾YŒ\͆*ª¤‡Y2¥PÊ¡Jv}J›¹¶k‰cXä´Rg¯sZL[;-V”ãSH,ÝòÃàþDP¹82D¤àÛ¹jK„7Oò&¶öwøûÁogÕÉ3gÿ8¿z4Áü`û!¥;„ÞO”ŸZàÿ°Ÿ?Þ¹mñƒgw ÖîêÓOÇ’u̶ívIî>'ZÒh—óDP ÷À~SÚ‰`KøG®â127@’Ä, Ídf›\9 ûŸó´ÑÛó[£”¢|~8–îÝQt¶Ę̈ÛË÷.œÍ2ÃÂ÷è¶{ [Âò²&  ª–äµe·1“ÝÒ‘Ëè—šrJ‰Ì>IqrÚeèSÕíÐåù0åäVˆo.Aý«ìvºÆªìÖ›Ãá`zvSÞVBY+üP¥NM‘MáqS¨ÒžtAñÚQ-»˜ižê¥jàÿ…ó\ÂÛ`3›‹qEg©ÒÓIn–ŽS;kðÌ¥˜N¢… ZTPÈBŽLiÐ¥ìoRiñ™ŠÕA]'Þ8ª3:Ÿký7Ny~Û¡úTfíÛSWì_P.¨ø,ÏbÀUå_[‚}“Öš¼Ä@¿¶ì'®z°U˯œP,&X1:Ee<ôt)ãËѪ&[…švaâ/_-r(>½ù=,XuAùÁ¿+t”u ?R#|w [{²&tãuj6j+²…ôUP¶².Ï8™U(™?Ýq´'‹Ç<&öó¹ûÀ,h&P–8¼×+”)n A“È̪9‡—Ð}"(w™E^WƒOû†jðUÓÉ_×ßÍ ñ6õ÷ªÍLjïì”7«½Wmf¾în‚-&8!¬Ó ‚ÍuÃç@t“ Ç …w Ž$ºDð„ºb.x‹Î5 æZêbé¥v+cA_ Fr¶ÙPôR–Ÿ+Ý”f‘Ò&«¡z0ÃHqêÀ­‹èÂ&[Êb¹Ñ{#lÍuļî‡Ú]œ/Šÿb?¯:>ɉ[¥:¾eLYpYÔ;í½UM¹žÑb’yõw^bIã÷è(%{¡R§p‘JX ‹4UÓ¸õ5š5Ü-]¢É1æ:çÌ`ZæxiÕ:².LTÏï1„¸ñþëÍâò€°ñk'¡Ÿ:u‹C¿Æ¯=äEpÖUü¢Üþ—%+Ñ¦Ž f®öÑw0I¬ºÈ‘pàÜ"/8ËaDZ™âF *»GöU çʲӛ¯þÖâþòÓ/çÍ-|Ø”ú¸;õÁÉ@ ä:Ï÷~•“vC†®¢dE|†ÐßG–HXð±É‹•·¢×+,Kêì0žŠË‘S¿@1.Ãìs• u:„e£QXЋd[# lÖU*,•¤M¨(Ñb!üNG)íI#*V)qCe\1"±wø‡ðÌ%< Lûbå!åæÕMŽß)ቒ/Jx-˜Zn=®nOf‡ùðPÅa£œ—jÏžsâ@è› ûyTJO$¶³‰‡[ô—{÷ûíîå#g¯ÿs·¿z_d1É»yÚKzø®«,&kK¨Â`¦Ú”aØÌŠû8†–éñ(Flb@eûˆÙŽ™Éa;ØÇq×ÈqßóVmmÿÑ­+IÐ[EÌ‘ŽïHÀéà,:Þäae@?R´ »ë‹õ3=grŒ9‰3·x•6Fh€"c­9Ÿ°ôb¿ÉxõŠn’}6Ûcd+j/9ïµ7úÜ”… qz¼aH^Å¥gÍK´W<2ªäÜ„K6Ö^rN3ÑÙªÁGÑg_Þ™!ëÔĪ ÆrÁê ¦ÙGϲ3p°¿VVç·¹G£èùP¦ÜêÞ¬ÉÃã~ÑáW™¶ \%áßý^äÝ¥Nöñ3ÍÐO¨Nð!ZÚ²*aL—-‘‘Á-¶¸ki×3µda‹)¨¢õ b±h#³½WBõê½b“ ^b|âªÔR‚Ð;´—@6³dER/§3KŒ]/ÄWÍjVä% O«ÍÆ,‹‘X%®¼\‚m ²©™‹["ó^ßß]_þ|u›´Æ2ø/ö£›Ïü¸vœù8O~òhš¾Ê£6¾Á{Û—%KAžZIÖµ´—ö°Æþ–‹æ‚’+šO ÔËþŠ_=~q/–íºUmGB·7ÀAfj{Söµ‡8욵ƪc„®þÂw…u˜á¨`¿&6ɇ-æ>XD%ºMà ²Ä•m÷IëÍ•íÿeY¬Ks#«h^4²iÚ`S°B hk«¢RG«šã8tR²ªÑñá¶û„U•8f£Ú'Šô±©dß”J¹Ó7°/_±K“ćÿ¸¿»ùðóÝõã‡O?¿Y…û K^ºvQcÔ­¯U !ÛefY´+¼I°äcë1™À˜–Ï€¬¶'9nÒ€hiš`{6œ–pZÉØteÉû¼LëõléŸ5Ý»!†HR*ýÓÁ2ƒ“£÷'Ïéùä`=Ùšêdé¶¶¶4(ƒ \ §®ÏNsm\¢i yH±K €kéµúQ%ÄÁœ³E³¦ÅAÔ\øÉ wpŸŸñr´šVzÛ9OZ]ÒÕA„Ÿ*öŸ¶ãÃ7`ÂŒ LÈ !A"¶ÞxŒKÄØ4ì"'hQrM˜1cfÄcÕÔ$ØAÉÊè`$ÔœeêO‘Þ§'_ª eªí=ùK­Dž­‘)P;[Ó{Fß?¦ 
ÑìFvTßWÆÖÜŽåã_þöŸ™›;ýðÙ6x{~si¤NÛÝ]\_ùR©£öEïý‡¿~xü×íÇ¿T#ß,*i¾Å»Y†w³èË&(7–ã.ú®¿Úò¯=Óèµ]KÈ–¡iH“€¡íj3W£™‹d&WÈSGS›&“Ÿöé¬.äç¢=Ÿ¦ân4ÅQޝnF§+¬ºb;3F§õ7£} `ÿ`0qwî;Ir #9Y¥Q[fD—L™8y¿,7¯r Ó±NåŠùëÌÉa{<˜´]Sš¿ Ë­`u…Òà¦hÉ{*Ï¿™T˜Jx·ÓI®ºï9ÉM­,.«X¥ÔÖû#BÂêÎ5¨êSK„²ê‘cWÖ=‹à²@/G«™êSçšy·eº•^”x\¥[¼A„ š¦õ¤×z?Úír"˜©‹ÇUÊÂÜàºÐ¡€|sÍû÷–دcಎ¥ÔuŒ)WÄ}¡H0ÛtdSX¿LÕråu!!“ß«O“ãòßMH=¹•Á5«l(˜Póñ%=(öi'zóï7 fˆ ‚ëBüØ+ïRwYtð5CüQÕ—ã $¥‹¬Q¸³1þä´ $Í Zd;°š¡…Õœ£…ž¼´5à÷»ëÏ7—çç¿$z=œMù^ú°[#^pÀôR$ƒ$}°,!;†~B›oÛlÛ@èu•Œ £76ꦀŒ3XW¦ n.û“ðÑ¢æï<ûSr@+³?¯M¯QA|çVg¡6û³„Þ¾ÒŪZ§eWåìO$ÿ¬ôùh5ï–Ø”]âë‡ ÓE²ÃàÒMDpÈ€‹Ê˜Þñ0Ž,1×ÚgVœ°Ò/ŸZgž1˜=À@e I/w¨( ê(Û´õL‹ÆÙ6má¼§E¹‹Z¸j›XcœÕz‡Ò\f2ŒÏ°B­×¦u“),Œ{‡Áÿ¾ž»ŸŸ<ÑKb…·í˜Ú× )#°^KÈǾÊÍ–œ¾ŸÀ6´$¿•Þ€MÒÛðÍSQß*Ê _<'×8xbKà¼k¾`NKqKåÅ¥ö¾Ô¾ÓrÃ,™æxœ hQ–ê`Á™ú“ø ‡ÃfÑ}§§)ß0§]…Àôz`ýt‰uoEÒµúÿýoedÖIRcJú¸„"èuu$Ε‘8Xè”*e"ZŒ±(ªYج—“UâiWí7ªëUl;í–ölÛ¸t¹'cÈd¹vdñNÜ»ø% _Ã/Í E:”5w¿Ý]_]|™~âЛöÖOy]é§šw2õ[$ëýVóFVù± EÒÎ8@´¥pÓZ¿¥¢øCÖ}ÅðâúüêæáÌhö’´Vþ’³_µcoqï?_.¸d‘ "1§´ÊÁ€k©õ„CT—>!ê‰BÀ E닜 `L~D0JUã¡ä¬ŒˆÙfB¥Åü´m sÜBoV‘ %{3£³cìÍìÈñ èÇó*}_lÝPsýõƒ›·6!³<'ÆaÏ úw6ñc‹oÚH²‡‰?¸È×W¶Ë&b!ÎS˜=iû+–=…µé:”5˜NW x©¦J[£È±]MìEt+ÃH¾œ¤á0ÙKÒôx"kÛQ|½]%$ò–œi»Ž%ì`ÕCÅ»ës†èý’ç £°ýýŸ&\P?>^ô´{¸4›i9ø§x*çø†Ÿ#xÎuñ£ÿižþl êWÑ7.⦆Lƒ‰˜+7U9znù¦ÁÄb‡ò‹ËÿNd>ÿþ_‰ÖÇžnw”¹í2ðݱ¯Ûe}]–'<ˆ8"l¿±_âĨӯ½/Þ‹ æ< 4wŽKxhAVHÀ.‘9®¿€Õʲ¨1&;V°øl~ßÂH=eñ÷'÷ÙûµÉÑ*ê>i[D‘B5®Ìžq’^lµJï~ Ú¶»aOGŒÔ•ެ–¬“æqeŒGýê>ÀïPø9T+~»>¿½žÞ¾îdûíîÓ¡do¾5OwõûÕã—‹_./~Í•õ?]ÿóöîáñêâáìégã‡wwŸïÓƒÆû‹ÝãÝîõŸí÷ùòr_S9ª'¥YËêIßü'e* ·ü~å›?à\ù+ÑÒK æÚ ¹-}ppÞL—G¦¦kÉ<”ÁdŽì|iÒE²9b‘n8m»í°œ/ÙONS¾ÆƒK#_=|µÆª{ŒÎ"?•k{lÒÑ$€å+ÚîRv±Åti³Ä1EÇ›?Ìv&“ÚR¸ÿ2“âìüó§«Çeéµ'ü©™àeÝ“·sƒ*Aÿ„¯wam®};qѧáêåJ×b§ôÐN}Â49YM[Âg÷QEdz¡:]$ÛÄ–½˜½íª_ìMó9Í­aôÒÞÃVP‰–Ú ¸9!ñì@j¬µPQJB¶f=¡H—æc401Tâ¶5@±l•µ ÛŽ%:™bfúEb”@aú…C~À\ïc\øp¤ÖìoÊËm¬ +jž—–Ƨ¹½ë—]EôüCgW×w‚ü|n_|qyŸ„i7òòöü:—SÉ÷’S<Ö$“Šñ;ʤNëTþ”ú|h•êo2ŠlìPÔ·¼ƒº»½2™C.ÌQÞ=Ø?nÎ,êúù²tÿwêW×5ð<«ØÒ@u«â/¤–ë,tHhu¸í„Û¬ÌL•#)TŽ·Ñç\_ŽVoÛž$áƒ, •JŒ«Ð±·­¦[„J1dÝk‚J{;Cw“šeïþd^õò_ó^5~¿^ur¬©Wõ ßµWë¤WM(!§&*טj-UI Ä¥ÉòI_x¢óë¤#¬íÿšÉ¥1å¿Kéi@‹ƒïóÙÊç Åú$Ó˜fOZì²ÄC”ä¥ÂClì ŒÌ’A%O¬ˆ*šuQ»‚i*®~¬ûõ¢ÀYþ+¹€ë"}ôß# 'Gƒ…ac¡p»8¼¦I¢ƒ:âÐÞôµ_¢ X‹x"€¥“½ÚA0rv3±L²–pîâ`™_(šMuùËþÉi;˜Í´k ûwÿõfS=ÄUÉkºŽßÔnŽdöüštd5¿f0ôÅq‡Îvó4ÊÁH[öäW±'pÿá1JpôŸ(ÿ^HîÏÙˆæS¸ѽnÊ pý=LœJ MyaÔ½7Óûð¸_tøUž~|è6x8ûÝcÝüUb± ÃN4œ ?1¨_å–,%l€°c äÂB¯ÔJ²^wiI0|¯±èÇœ÷^¥èÈ äÆÇNèÓÁâŒ)YäÇÐ+®SN’ýX"sfXÚÈ(QgšÚ,ê ¦Uðkê©Ú“­±³,å V5£(½ai—ˆ^xË–{Gô¯Ê1#©ÎöMç©~ópw}¹ðYGvÚVØRn™ú®£:qnÊ]–½ìdb¨‚°/šÉÔrzÊö^ê4ïOŽÞ#ÜORˆ¸àÍ÷ži(B«²deð›É±÷æxæ‰YPÇèÂ<®µmÏwŪ„XÕq€äk±ÍfU{–iBˆÈë˜aƒFÏQyÓœ /y$˜:¿ª)V㉖Ÿ½c çý·…˜œ/0ûTÚ²À%” Ì$.¸â£ÚÞ˜'4éR`¶m;ô1,¸‚$çÈ6±&ý³%ÂÖf£óáQÍ› ³Y"2Å?ñ8Þ‘Þܳ¬+x õïqVÄ"øòî¶•ÊñðõƒðÌçÎÜ쨋kÒvr¡å"O€Àþ‡Ú6¯‚F½ìhâ½F¥Šçeâ(Ä‚M!¹N A:XÑ´iÓ`Xðö, CpÃ*# o VúÑDäpÜÇ¡HCB{w%¸ôÐ.½®éÕx»tø]¥˜cfjEà€k4ÛâGnz8dk²È ÊÝ´ž` K ‹¨¾<˜mˆËå¹,hë”Z=ô>•þ‘-¹g*ËJYñ£÷[÷º':ûŒâsÂóˆn%jk5> ×è;chvÙz‘’c¯YºB{T•–H©ä·ô"~ùVõ"^ìЊ™3´(»‰OK‡ß3ÓtmL“K@¤×’ðÇùõ™ý?Û\óó¹®ïþøðOçç_n/¦u 7Bœ›‘š ¥ý‰¤â´5ž=-Ñ4ÉÞ#hzê±í9lÕTb&gw$”j‘§Žš4’ž@ >+p¶+øå0å×°ã¦Ú¶üôÞtuƒ31 Z‡jú,$±±ô¼¶ôŠƒ}/'ä¡Õ½âXÝ+îÒí“:¢T03HQ*²o3'G«ê·mEç‚pM¨ñôÝA¯Ÿ–ˆ¼-bÎHGt¹^q%!ôûÐ%Ǩkg¯pzœn䑟š<õNVžvÛÙ¤s›žCm·±lïõ“ø+‘éè* Ò€\88SMâ†5²|üÍùÅ/æ”÷UÈÃ÷•š[¬Ô:õÀ2“î½®cååNÛ˜1“8†-ˇæƒó‹‘ZS=º9¿ÿõòñ·kóJgfÀn>ß^=~yV’‡e7íà,%·&Œ06¼ù— ©£<.íZj¦Y[]§™hü?y×ÒÇq¤ÿ C_ÌaUfU>¸ Ÿöâˆ=íþˆ„-XÁ)Kú÷›Ù3˜š®ê®ê!EGØá0%ötçûù¥ÏÅÊl5t¢‘X Fr(#Gqf:éNbã º‡W‹hMzæ –EÉŒ§cKÎ(H¾Íw¶–ÆŽ-¥¦N|ÈÔ\é²gyªâÊ}à‡G`â-¼'øµå5÷žÿç)29&Ÿß<¡T-h8ùEÒÕ h0¹¨ködBšš.×/²¹«‰6ÊæN²¤ZÈ‘SÕ䢔±yž4¤‘oo4sZ’ŽÐÏ´uzèd.…ïù”!ë|z˜ZU‡Û^GÓn¶½]†bSÞnbzýÍ_r]âá*Æa,bÕ˜T¼©¡Mº¢µsòËÂaô6Ïh ]5"%Æ,-åÖza-')–[2ưf ³.mX-šÝºð&Yö7©_ZÖì³£9ë%z|™Ûzúa}oFû7åàxó)B;‚â†÷c6hF“Ò5“‚ÌA/uE¾dõ&~Yv·5£g¹H‹ÑËRF\}úØFozë@Ü8Ñ4Pe(¤ž_˜(„“þÉö½Bx~¢>³jø“Þ È#áñwb”ÒŽbb¢Ûïg€~  ]…*Ü['ýæèO å$(\µ”YâaªsÖRJ1ï>¦Éˆ0{í)kº´¥Ô˜·N¼%FÅÂXÚ±©jmô“ÆŽ~~‹ƒô^8â$ìa'Å8¾I9îb€Ä_¿±týþç«g´°¯´žüuûJ1óªÜÛ2U"›4–Ni6ÊØN‚‘2TûJ 9Ìl+=Ò¯8fL g»ìµaÒEa)cô¹ß° 
éR¢O"Ãx¹í>ƒ¦¶¤çÞ“ãL³j‰ Ǫë!4¸î¸ ¯ùHÏ1•ä#\¡ ~³Ušl=Ðæ•UM¥Ý·cÐ\(ì Xô=Òc[”6øºæ·3rV<!å¾P ‰nÒñL~¤ `Ëüj;Ýß}¸þüf¿Móö‘Ào熲–¥[”×3 !È×l+LjD¨KÛÖmdË3%ð æ–ž'R5Èû¡Ñ“åå' êy&p$XbÀh(…¸}Óópé¤ééM>³¼ÆÂ¼qnʹdAÆÕe!J<;˜0\ÒJÄËÃ#„aüh_¶œ8PF• .³fz<¸jú}÷ËõÛ_ÞX…+)ÝX]OÉyó9="Ä5Á2`b?E³dÔ¿0Ñ¿ n%5uS%ªfAÙ!Ëg§ë&²¨«xþéÓFýIÃNS`Ÿm3kƒT@§Ç猎R0kÎ /ÉwÏ'üOŒZX_sožï{6Á–Mï÷üîÑ|¾% ÏæóÿvÎ^äˆÉGG»ì…Ðð,BÎeÝ‚˜Ää5ÐÞ%ǺH¾)_3*ÅñÛ£iX ²£à7Ï%?¢k?,+ú57Ö¶kn­bPµ>9Ä ê8˜Ö †nùu2:®Ñ§èCv@‚ékvË]~´IáX³VΈÁÅ¡£¯Ó¦0/‚.ðØCt"lí±Îxê±ý“2i— ¼µ<ÿÄíòQúˆ«Ž’zKbJ©{ó57n¾’%+M1¢a 3NÅhøèË£a¿ÊœäÒºµé‘¤G:–IœIaûö‚‹¯–½|•Å׳£y'›¬Ú»Ézö§ŽC_ «©géÜ®)ì²ãÓCÆÕ»¦Ó#˜âçóü¬Ô ªúê×Ï~éöÚë¯ûzjÄrtÐ'ˆF¼/¿|ûãà,¬ØšÞ> +þì³,>ÏÁJ•:+L9’eÕ«ëàÓ#¯ˆ$½ŒÄ«23…ÓÌLCéž“ùdæÕ”uiœ«ÜkÎýé[ꉺÿT˜5þZx@WZ6é ©wlLʦo2=K¨=ö$C ì‰x—qÃ^Ô¿î~ª5ý_ÙbÛ¼‰ø ú·îDC8¦¥3)½f¦ &bµ'w¹ ³"&NæJ¸`N=šXÕY„Ò,ô†4™\X½†ÐÚdÇý{—) |#¥ªè§lµÔ7fËô„êiÃÃÑ—Y®• ûŽ>¬)kðQUÉÇE²ãg‚ˆ‚•EQ‘%v6Åbè²³¸åHȃN…ýUõEÓJॆ²†®°§²ö̆ôØûóœËÞcgÓ–½â½=å¬+&’*§ìcßcwåÏן—>3ª1ä.é˰E>(”‚7Ìá¢+¼'#Ï80%ÜOÿÎ2ÜšœÓ¶lÈ[Üœ@&GHÞveÆr¼˜‘ßÿáç«ÛOªwMž0üÈ}+ë°LkÊ·l!‘÷`÷Vö2©_7V Ö)E0éÂZXg±C‚zX—sñòç¡ÆDëfšÌÄb\­SÈ*Ô§á²uµØè|À{À§Í\‘96mk3wlk/7ôg9ki˜¦¾<Œ¢¬03X`¦iùê]ÕÞÎdé-ƶ/‰79 Ej…·¨Â¹^x; Û¾´ GôsHoǸ¹Ä.HLˆÐe8n I¶§s(%vöɪ,‹Ãþ:v=/|øëEgåÁ„R¹/” †û=ÓcŠ|¡áþŸn>ºŽm9ãŸõ|DN!ˆÜÅzqmcLD®ŽÒmñPü®–Ø÷4–OJ!¬Y² $,¨KßÚ{É;NÀáJ”ªÅP¿d˜«ÞQ‹÷3i2Ä;ª.'AYàÉñt¥/¶Ò´æ:€¹»ìKaý¦Í: 6JN&ƒ-)2bˬb]N°œ]=QjÄþ­¿¶åWYÉID{½®"*å Ìâp,Q%DíŸ/¢" Ýh:|yÍæÍz÷}–Ç)õµš7ñȬí~EÎEn¶öÓ‚Å.®¥5X"è—ÙQú™ÆLÃ,ŽcH îö…ê\C)õu޾¬kþVÓ™n¼]Â5uKí\güÆíܽìÇ"D“Åáì i=íܶŸèr;?Sì™ÐëéÞ¼»wºåšVmù”~lª3¯Ýë)?ûü&‰…¿÷IàÆ%¯½"§ÓÞ•Ž$ŠöxnÜ:Óxa ¬fe/$¬—ÈêÙƒœ©Š¥òlÌ5ǬÌYø­Ä´(ËÉìºìrJ[¯Ì9Ë+s”½§+'åÃØ!¶â¡œ®Ý¹oÇ>q¤.£“qÕU.÷]†-C:¾¿ûõ˳5e£ˆ1èóëwW¯îÿx³ÿŸQ%¼t—-Ìm˜Ç$•\5æ‹7µž¨1ÂhØk'¿¢E%›š,4 #ÖðX‚ˆª0ÄÛ¯v4éËÝ/×]X®{=n¹2¦msM#8D/&䜆ÞOj¥ß(ÅÅŒ»‘´^|÷¥-¬—Z"ÅÀ'r PÜIÔ}sQ—)®¤NÅeÝzÎÅè 0çbœb¨×Zq0‚·-ð £JK,ÈYF›ô]Æow!ñÎOAçôý¸D}ßÀ?“‘âNCHpÑÃM•Ň»/gÖ–hDÄ./IyÕr€ƒô‹vÞ®7iØ“ûr‚ëY)ÄzöK¹sD—Íèä•<±ßZä9˜ÀcŸ™”Í{ëâÏ{Æ)NAÑn7^ds+¢ƒQO«Ó°´1Ðdœå¸ ›ê²¼ ƒ Prýoúöx!@6§8`J £˜ë›Deìƒ#íµƒÇÛ‹V†ˆ­Ø ‰QÉÒ¨KÝÄͧ«ÛÝ! 
ÜýþúyøÓ÷ï½ÚñîÃÕÍí¬óhz@_95Ä]òž3Õk#É2,­J—S¬#©ØkŠ)/“ œ¥Ë¥¼tÛ Ã2Eͧ»,®3YŒŠÒuÖ±ùZmÛKHºÚ‘,‰ýÏs”’„>޾<±:"š7c¹s´Ó€[FónîþýqwwÿÏ7×Sõq`ÞKŸÿ‹›õÅÔBÊÔW ^S¢ÎIEïâÎXìÊö˜i²:™òG®ši u0¬IqŠŽþˆZÌô¤1òÂÈßkLÔW ÓHÛi£3êiäOF­þ™dë¡”®®×¥lË9!à`ÿÉ]é‡8~Ÿ1î€3ÈíwKèôòKb©ËF‡ñµej‘½P÷†¶4ohÓÜÊä† måPŸê/ØÒ£ÏjZÏλ¼µ¸ÄTr ™cî‹`‹­'`&©ä¯£&SZoÖV‘Óùމµo-þThÌföK%¸m—uØ™æ=)U¸o,Œ/È+ðnBvÞ´!rØþ ¸¯?ÜüãúÝï>\?·ºz÷‹wNvÜtúp÷î—e=k™mðƒèÒÉñ&…Œ1XH°©ç½útsoÙÀç/û‡î%—îÅþü;îì…??3;˼°èŒ:"…ÎõÈ[Ì‚Zú!Y.½ÇºŸ 2Êlû233ÄΦæÚ7xK”vl’NßÏ~¸¶{€t4GoÞß}±¿9Êæƒ˜ÿë«àŠ!YŒLÑW|7©ófć°ù(¥$!r±+#Ǫ{’yÒ¸`º}}À{–¡Ä\UŸ²ÃšZ?z+““ˆuã—aç8_ŽÊ~o¢Žé˜“ãÿÖu;§xލ1f‡E³’¼K”ÛãcÒNYXaø!… d‘Äl‘JT<+Kº<˜v€–œCƒÕÇÐ`ôc©0yDšFßrâh2};R.ŒþËk’ì6%•\Ú<¶¯e¿Œžçw›TÆB¬7E{’ûvK ÷yFªÎ^goa¤àæÞC!Zw…ËÆ&å¡ {³¡Óy:þ¬Ûªþ,3ê>ö…b¸I7aÂ7­j¸}®Ÿîïþe¿äÿàÝÏ&a÷ןî>ß)oª]ÐeÛjê¡×sg$&«¶‘í—#„¸4[ȃ™AŒ… è[IÏy0Ç\ìGjÀc.b>ÒtH8g:•-8îÛ O öA¶^Hv§’Ó6.A¤å[2v Gd|kæéœŒL¾¸ ç_‚ž6f¦ž&Ÿ¿µã†…ÁmôÒ²^ß¡˜«RF²2¬àMF ncØYvdÑf£4µï*èJxÑá"‰OCDDA¸2ÏEc¯)}³·xŒ1¥Ì]3'¼ùt^%-á±ó.!‹ÊíãX§ð‚§zn¿¿< ×èy…c÷¤kAÜ{Þ`Ú],h ØPGY±¢ pÐ!•͹©qEN°`RÀZHL$XÇéá"HêUÆ„ÄÞ¸“˜— ô@ÎØ›GÝ:(v*ÇT ŠYOP¹(2¶’› YÌq„s;; xž¥j½¯h aM€‘9…åy³OäªCs0Ös¡j€†èT‹íÑçŽØ3öáÇ`¾{QpêcM}˜²õ:'³¨ž.…™íÛ8rÈ|8cò<4:žÜ>L”žåS2;¯Ð©“k ŒcDr+Öéííå“Õ›ÕjΪ¾ë,H u§|0r/½òŨõ4ˆk,ÚÏSBW)[ÒæiŠß:¦ÓZ•‚î8%ó·½áÛmK1.½á;f¾ù,¯M¤)R_FÇ[`¸Co®»µ¹`<<ñ ­“Æ>€SÉÀ«¶ðù½[]†5ØFŸQ6ÖEYD­\¿A¹ŠXf´*oÓ?Òb€…ä4Elß=Âo’ÅõÍD’ñ@Vaç“sL_O`ãîZ(£&5‘^§G$\4lÉ CðÊ&#•§$¥šqgZcJWUMÀ< –²'_,ÎTè/{y2úüê÷w·¯n>¾¾5o~ÿÇáÝÓÝçÊèò –ñ2]t¬]éÑÅœDÆ+£*’̶ä-•ÑÁú~2zša«”þNý,>yˆ0ìoC%Î?¬fVƒææ¸*•IHR^ŠstSt:f Ô èBó°¡jÓœ'j®Ñs{KN2~yEßÃÆíg%Ô-°™âJ¹Â'ß/fcœ€ÁcÓeÿ¿%o3Êæ¼µàtÞÅÙþQÔð… ÆÐA‹¨M9«¶÷I/çNÎJÂT» .Ià ÚɲڋEüv?ýñåçVŒÙ¹&Â7¸æÚâÊ?IDÜt^}O¬Q}‚}tgö ·8Zɹêhy®â´OðHœµž–XÓ2OëoŒ}ÖX··Æ‚T¶Æ–&¾ìíï<áüd Îq•@‰¢ö(8¦UÇnƒWw£Üh#ÊMØa²¿Ð+ KM= Ê·k?­ é&Ùk¡¢>»Õ~üÃÌË‹ö<™ FÖìÓ;£„ G 7¿hèôG.¨1LãJQËû Ú†¦ß8¥Ô¦¼£§”:/Æ•"ÆÎq¥Žw9ž[bìŸ[êx•sLignAòzÔé$ã/̉™cB“õöÈòY$ùóy¿ýñïÿý6'eÅ=š™# ˜Ñxõ«½àÇ«Ûk£¼¿îSWÄ»OBäôå÷oœ‘ç¦î»Ÿ:ÿéYÀão=“×P“Õõ¿33C÷·¿=“gâaç5‡€9¯îº$nÅú¦¦I YÄ÷н¸úéÃõÿ—ÿçîîÓ‰Ûü?³´×÷ð-zý_¯\ìn®ß?ý†ÔrD‰ã¼Ä]ˆpÖ?î?5•ŽÉ}Ë_ü]êýõ»ë›_¿9bw%Ë3ïxüˆÃ‡žróùÕÇ»ßL›~»¾õå端ɱ›¾ýåŒ[da•ØèDíÃ,e·ü©G DÒªÃðÞ¤ ‚¦BlfB©S™°€É² †zÐdÙ"Ô„B°|ßýñÓ‚&Rï Ðá¦Ð_K)M“]… À·#ù]÷òRíÇ,FfJÀÇ ƒPýôë͇÷kîñîÿbß‚Ÿü°¥bê¨ûY1Vd0| íÂÃî¶×/JºnâÄ,ð¨9Ì1î·”gu]CiLùˆXCÆ”í­3;tKc9c”J+èæyÄ}gëŘ²}³fy±pvyv¨o“R\¿³qRÉèXßûncÇ6®¼mWj¯ÃûMºÂeúdž<ÆšCúóÝ‘ßÂom*/ÂÄ!H®ÑX¬ÿÏÞÕí¸•éW1r³7#™U¬*²Á^-°‹Åîõ"èé–ÇÊØn§Õ¶Ç ìcåòd[<’ݧ%Jä9‡”=“2IOG:}X?‹¬ª¯2ü¾8»ê`ÆM¢šã¸ï­Ùöt›OúGN…¶oKۼ؊¨xÎSçriŠ‘HZ”=Û[»4³)¾½î6¯ÎKg$NÕyzÚ”–lç_ˆœgS£¶£Â¿ÓÛVj‚á‘€š °½4‡”é¾2 Èé;¢psÌÝý&=±åzÑ·seBl܇pºê´}7™°_”‡z^?ìI¡Ççç¤I˜ Ô÷¢ÁëbV˜ Àhj°{A*­@sP¯¥X¼SˆAW ¦f‰ñG2hA€n¯Aµ~êÅY øÁÎ}ÃKª?¶™j‡$è ²I+pÚö5èÕ‹gªXE›¢,ÕÍÄ‘ú«‡"üï4…MBQ’Ð%reÇ=G¹¥jËÝû›Ûß4Hõç‹»S êÚjÓK2UM„ÔJ µŒIÙDŠ ”.Ï9H+×û$F1)§«ŠSÀµç±“þ1©ÏÞ Ø’a í-ôî´%ÆUW¶à¹~ºÄ èªJìP°²ÆØ»¨àäÒûáþÃãæ2©éñGvÓˆ \_„ežÁ9B)yairlŽðZ¡o²ä‹ÁmPnì-GðûUT Àw°q禕 ´ð؈Á7ÕOÊi BZ2¹u\Õbn¬Â\AžŸ&›‹=Õ+îcÙBaünzÓÓGþz÷Ë `õðzó&4èRo…Ã2«‰ÕEU ºv©‹m^¿ú™˜ ÖDÀèÊ…`‚y^Û'1µŠ}4ë½v ,Lýc`310EÛ0郹·m[ÏÁD^­¼p„æû¾=‚Afñà‚fGóZò4Ó’wJ)A‰ý,åÔJÔÓip]Ô›mÉ/¦Ü“—^*]=à¸÷êÙ#–õä‘-ÆÑ?þ^Ý¡eË Àæ ó½;Y’ô ²9>ͺ$yŠƒ¨mµ– >ÿœž—Ãÿ»m§î¾:iu_Á©¿¯Nã®U¶öþ¬k©ƒKtb§þöŒ˜”h·HÔé÷ÃÚóןîw»Õææýê×÷´: áßׯ6 •L v’ðà—À%™oΕœç`û¯Æ®¡RQŽ­b§dD˜Æ0ù:`Æ0Å,™ÈHn b§ôÚÎuõtšÃ»‰¹òüG{« ±?ºÇ ºz;ÂRpr]î•oDU!JNἦdŒ`Ö=WáÃ#PÏ6<ôÙÿ”tngƒ¯:ÿt³}4›}a¶ü"R:D!_T2ôì÷Ÿ·´ïLn«ƒWÛ»?œ’^ìÛËì «dvÂO|2Ìç… !M° ˆó…`¿]!ÈÚ³³/ê|Vk{E¢ç}²ŸnÞ¼´Òº£ã¯ëÞY€øâÕÝÍãÍîó»ÛñM†Œà0œw g ÄOÂìÂð?ãŽV³(Ûx)¿¸J~+XÛIˆ,²)oHõbÂ~Õ.wT­«‚§?b:thÆÏÈs[¹5'Îòjj«*MMUˆ:Ÿð÷™¾é.ñ˰ÿýâ«èT8¦²"·ÊªâoŽ[½…–SVUüÉsÔTÉ9&ÊfY\1Ψ‰Š„˜=-F.¨F®®Ñí¯·.b—Ø9Ÿ‹ØC®}j´²*ì Y½W¬Œ•«ÔVB‡J‚ž(”njrC¡M„Þï7‘Î{‘¾úÌL ÒRr½™ï1F)Åå(5ó5. W È¬ —8e¡4 ½›s%ëÏ\Éæ²&äÀJ±„UéNÑ•°Ê–š+e-¥|!›"ä è8À(Ê?bÑ…,Ú-6Ñ ÑX`ð©kg†£cC›¬hp’(ÈõAõ4½oá]39*vdDLÿi,É£;ýõF¢ ßÒË—QlZ­Ð|5”1“A'ÖúØQ‰ ÄÔ{Ó…’›Uè>w*5#QÁX>—Ú)¤„—©ü1Wèó$§&9f{k;+i}ì×ÂWÙõý,ÐÎò>%X1Ûûã|S²‘U•>±¾*½TtU,†. 
›´Zëu”fœl9ˆ«xÃ@¡ôömT¤ü“™FgnMj¼ø&ã¬Râeq6·Î6,pjS `¿ì¼u#­s’*ëÌõýÕ«ÆY'T™p«8œÆ›bMxƒ njŠóì—odO}ìmkü4z»(ó<°Êô¦ØÓ<RD‹Ó[Ñ{š(5ÌÒªòE%†ÔC±:§ñP6 ^Pe„¥ÕccpMφz6 yÄUz(›ÇÁJgwŠÈc76ß÷ùoyN2tn0B¡U]”ú8¸Ù&œch‚(.®‹r9à&QnpBqÓ©Àµ­m£‚š?öó¡llÓ¾ƒ±qR03ÉŸ¢k#èj:¤f °"Ð ˆÅB¸µ~çætç¢D'MÖ7¡ó.®²¾ 6szŸ‰s•QÑc¤ìü`J44¼e_5?Häªîkq–µ18Z÷r•|߆֠0~o£Â]÷–B Pn¡ˆ*„AŠºîùÌ‚“gšŒh¢Ðc'[‹š”]"F\å™S¤´Ð©ÞÆ»yqÇæ(GUÛ‹ÖÕ^¤º†õƒ»à€Kä®P@ñOǤ?xŸ:1ŽŠ¥|Æw±¹).*_°¾€XT>–Ü4Å‚CGg[@$mº'¶u•îm^OR*3gæô½59=í²¾PN‹;ªÐQÖ»Ûg¤„h˶yE Ã6yô´¹& î€×O;U/«óNaa.ë'S…öB»7dµóù^UïN…£4x>±íé>úÃùaßžÝ8œo+¬Åv5Ó¸ši8y$‰PÁ4*D4bÓ:¹Ö®ÅͪØSr.½6Û¤k±x°‘Þd˜xMxÞgšð2>]šØÛÄ~™Sª]¥é»«Ï¿vnSÑ…çTé5¬‰p´\ïðUmxQ/£QMªoÍ"êµ;QBj…¾¾ùüáî›Å’»;ûç«çÝV‘ËÈOÎQ¨˜œW¹*Gí\Îp/‰3"§1uˆžZ‰öôÞˆ°gh}‡¦T›v=n"Ï5÷&ÙÍ!k0þ—¼kiŽãFÒEáË\†%<òäqo{ØËþ Š¢-†Ô’’<Þ_¿™Õ¯b7º€B¡ÛZ{&–©nHä™ùåäh•&Ùd:][·Ç&¼æË*£­˜ n#Eäw³uæ/N#¢MŽgð&G«õ¥ :©nÅÀ$Òèìµ^ܸÆ–‹ó™\›UާoCä ì.ÊËAJÉ7”…·9þswµ9¬ÏcŸœ¦l•mWDB,¯¬òtuÅñ@W¼–4äZÅ-˜T¹ ¸V‚©¶á/¦0 WѬà +"W˾NŽV!Á¶­`#éx‘—.nþ-b\âxNY´sÒÐ]ºnç绯¯ŠJFWìÐ 86[0ú_~»¹¿[V}æ$_}¶¡=ià´Nh4TŸ‘5ž›Ýhz”_F®&O– £‹i Á~.N\”ºsƒ¦'´éNƒÓÈ8,‘IÐ(X%“â/ ä¹ÑmtZpfjH½pþšÝ'ê=òÿçîåyô?E ¹ä£î¹—s#©{IFòý«¦ÕyÆ6‚tm”éÇÏJÖ-'}úôx_#¬]æ£xª.±lö`û¹d Ú‹‚¸Ø!¥ûK¤êfI•<9.‡Ë @#î’N$5ËÙq£{ºô0¤cZæE€VyZVàÔC ÄK‹kÈÖé±n,¢:G7ªìl‰@HeqÙ¤ç„L=˜qHÌ'XÄ$hÕîëÜñÔ þl\\f|¢9Æ2¦Œg5ó‘—^¬âlL Är~<%·-ÿšu±(Ù½§UK7MÀ׿LÕÕÈs\§LÚQ!º1 ÅÄ'*óŠÀ"¯H¶»uJ°.Éݵs¨K–Àm´Ïk‡Áµ5‹m—ˆ¯mÏç÷öf÷þá߯)ÁÞSÆw«ûoc¡×æIyt«6·ñ‹^§3;µyD ¥ñ=b·áAŒ;ýøÃ5ŸŒ›³ÁŒºmJû]ÿúøåñåƒÒýå^ÿùý“þéðY[nó7G™tê¢!ñ¦[úÃÝËý³+—Ôï~Ö›¿yÿîÆÈ›w¨dürË«.r'Ð%Þü×ürúÕÇ/7ß_ö+€OQ4ûœuX[è’4OGo@#Í£‰‡§\»åê7_~3)G«çˆïNN„·¬ÓAÕ3Ä6(ÂíølùÇv.¨Låj¢!Sf§{ùr÷õåÃÓ7ý»OOº„U@ª6¼y4ýñÔUxçîáþøCÛlÌÍFtnÁéfŽ>²[÷™¨Ÿ‰ç>³Óúw†bADm„ ’΀8PL½àƒMy¹ûaw Öñ¹Æ³ŸÙs±,Üš=+Ù+ö¬ÑñvÓ}ÿ·iïOO¿½L´Ãî>0·o¾EgÖ8Ë]7…åÛ%Â61ŸÛ÷ø¯ÃÖÅ4Üïwߌ9Õм±´çns†»<Ç4Xÿ§þüÛócØþbJem,Œ ¬æÎYNÑæ—p·ô 7–oQÿNŽ!|žJP]"4N[Ý-±Mñ, ‡D$ªInH« r”¶Ÿ¥ïB­å@‰ÅÍF¶z+ø1¥wÖ½ØÖg‘]'§)g‚mW î€Àk´ÐÉ+ÑBÃÀP‰Ú‘ ¥‘]Vp†]´®ŽƒÖÂýr²¼Tð{ ¹È휞¬"¬fw`ruyÝï¦$+”˜-q„¾Ö5¼§c—Ð.Bä«âJ }\ÂÏwϾ}ý¤œñöùáý‡»W?ºùòñ?}Åʺqk’œªv\ó£K ÷ïwÕxupÆ¥áÏãœXVjç¹@´‚þ%IÔ%ˆ¹ÅÏQ[ïØ¥~pœó4ëåôgøä•lEéDï€i^:~yh’ šà8e`f®-ŽG寽œ Y7ÕR¯Ñ8 2&…@©ô6Šîú°œºgé€ËYÖ ç®6Râa¤‡ØRÆÕ–ÿüñŒq°ù¡â‹¢Æ.ô’h‡˜…¡:P¤ñm¬j ¾ôZwÂÆþÂ")µÞþ¸Äq÷[eÛ"9U—”.‚6“¿Fëïõ‹×(I⼆¶“s’|Ûâáh5Ã)u[AHØ×_œÞ™J-¶_œ˜‡ÔR0'Á?(º¦tsô§é昹+°çATzÆe‹m%Âü]Ùac6µ89MÅÈhÐPÚéŽ_§›§k¬K7‹<¤êtó†Tç4"Èí— †˜)Šzì$ül Üg¹)óÈ÷Ü4¯À7ÜäOHÒ¥SI“Ç6ëŠÁо½_Å‘¨†À‘Êâêò_;mØæŠÆy±|­öêZ¾ÖSd²}nl]€Ê6ôt aˆV«Ñvm»%"5ès{>ÕxÂS‹:GH'ê3£w4lVçgŸ.Æsx5ªé|qòþ¬ÙÞ“ÃÔhsTs¼5<]b2q¡²v¨#` ˜:Hpë¼±3:Y2x“‘gñ˜e ?XË ˆÄþà>§“''«Á0=àÕ?_(¼"êŽ5…Çû%Ž„·k|¼'#žÆÇãEˆ–ùøX}ÔЯvÈKMB–i¶rh$oKUáþ¥?±*œûpëKSG3_=ð`¹zçôûÙRœÍƒJv`ŸVH78Ç-¥’H;ö˜ê·‡ùmqÏrªÀÃï¹lT̘ š@‰„Ù†¸ :¹gªZhªP âÆ‡ÕÝZR1D°¾Ÿ“ÔAP§š9U9œŠ<à³M‘*tÍçÆ‘¯ˆ×fàjÄìšjú¥Y®æ‰rù«vãU'W÷`syÊW\.59m«Ö]³º»©úªe¯¶S·À­W=.ášWÉ%ò©­ýuDÙ`Œ¼í7Ä•)Ãb Ä)„;èÇœg˜e‡ Áò«Šta¬ÂÒ"v`«ÞvkØÁ¥¦Ø#{US¾)ÈcwäQN°ÓB™ên‹7é¶ãOÀö‡©òêo3‹i7]bU²?ú\ä\`³|­R )6¼¹'5;¬?†e“>6piö'“ˆoOõ/gd?_]ù×Îlp™±ôVUÄ6Gs<…ô@ŸNCõU¬œü-c[2§ ·ÁÀn—@qЩ³™TÒó;55çS£³]"±Ãët6ÛAuUª¨Ì–oa…7íºÖÔn»Á]´µY™Xoxykóžèêp®"ºäÚɵ6OiuBä3}åÇ'k ½á©5åŸwKŽ-}“æM0CH FQ…û$‘eýÈù7j,E;ÂDtvJëá°ù“ÓTXE±æö›aEÿÌ­±2‘‡ä¤êí³##Äm?ÇBF° i ë2Dj°) XB±Ì*ñž‹\±}9æŠÉÑj°u[ê=ÇPõúÙóâÔ)jðjÉÆ†KZúªµÃjy™ö_<}ú¼ÉLΖr¾ZëålX%£*";}•ªpPÁ9U1![7GlŽFXÀ3Ñ¢]J±gt hyñΜ<¥òJY®¶IÚF1&OÁ•d iç¯U%[Êp8YM“4 ,A£µ×à°Äö‘ý¨z’à²5Fg{Š$&nÂÎÜ/¡›’ªÕ@#)ÿ×iSª¦yI¾¬±[fÉ%f¶qŠ?UUÈF\+võþSDÕ’EÁÔEæÓ(s`’´ìê. 
þ½^¿ö‘9Ëp>u“ªy]±«G³Û¾¤¾ªó€·2œb€?Iuîý°·¯¡Ò©PKù¢”[æñØQ/°Ÿ=G¬^ŠÔ¸SŽŠŠÁ^芊”²£=”i™É*n°ª!ákëQæþ­Qd@!¹hÒ×çǧçÇoŒgݰÌvýR«àÌ7{÷îÉl¹xWÉkJ-ÕêØÌˆ¥ KsÔ‰@çH[}æ„Ø8Šòì,Ö­ÙŽ¸…k›â”›«uÔ¶m\"\æ 9fX%Ê—÷ˆ”´›)¯k´DÆÑFž =LQBWØŒpêɉk}µ_t=•rŽ 0YÈ'kØ‘úéQyȳ\}HÏñôö»ïï— NU›í/kf%xKÖ;xý¿`ºÄˆû‘H½<(»|²QieÝ›HÎã|èå²…ì‚4xP¶I°É>´Díö7º¸ÚUªd¦bëµXC_,¨]ÔSí¢TµŽzæ…i­àç.“†@è6;Yã-¯J>z”d¶-W3¹ºpú d×éćhÃæ¤psTÈ>ÔNSNÕé*|^÷œM×XWÁÂqHì«Suv°èCJÀkØ :iy]Œú«™qýë}m †¬LŠ<$bÛÍÁ³#¬''«x½·]i¼]r±FÞLÍ¡ì¸D8ª>ï­‰G"fü_5;ƒÊ%o‘pŽ_Y|êך€Ž®€j:­´´„ÛÍøƒ·÷Ï÷'ै——Î.>iq\Q:»v®ýaÏ=hó\eâÒ@º²KÉK¼ÊÒYû»ÌåF'çI­¡®°[¥ õ²ÚºÃ·ø5¤†>mÞvF§ë=HEC1é-ßÑ¢C¢}äSµ»äi}B–°NëS¼´ÖW2Ç r‹ÙtVBnqÔõÕë[0†j÷»VüÏ^£mÞùU×(¡?–ÍÜæ’௕C7‚Áˆ V)Q‘lõÕ‚ ‰âŸ-‰î3³àýœº®TÔ¥àƒOe]*Ù‰‡SštP¦¶mLzÿKt)8²PÚઠëR#sæ)ÃîI(tùª²wƒäúx1‰z}kä[-ËËäƒ5†áÒ¯þá÷§ç_Ÿ>=Þ?0'_tAj(SÁ†0X 7sQÆQyŒ}IÆw5 Ç2>¡K—îLÝ69Z$䬼8ÇBΡÿ¨çÜ@麋ZÚ]©Àká²ÏìÝ”R¢¦f‰K$GÊ'BH«¦ ©µÊzÕÀÅäcc-ò,Á+ ’g©½FøïØ&»Uø 2…ŸYòs%÷ìaàM\Ôc°HøM{‰[%ü|i ¯t§hÞvä”Ôcõù²9J]G*x_eÚ$‡ÿLåsŽ#ÆÙv"«4Š´ òxѨ/¾L!Øܼ—@ôl ”^BŒ,EE‘$ãq L—´~çaõcz²†¡VE1.—€ŒGRø—¬Ë¬¢ü¼@n–pM8ÀG þç­ÌÌô™);X§ —³WœçÙ‚ Ó¦¬ñžÐ¦¥6S·é êeÐf†öd´-Ñ4h \Æ'ƒjÉEC&³˜ÃbÉíJCð)Rš¿9;ªÏå'‡©hapÎQzÕ54]bU*Úë‰É‡ÚÞ¢^<€©¥"ÁE¯jŠØ¯NE‡ÚF²È6äŠdÆŠ,ù±:“£Õà]Ú$/{Ÿ×Þ-„ÈÒÇUb g¨~Ü}z|g‡x÷AÍm}Íná«§ñYœá•Œ¢ˆIs¥¢(2Åbçäýõ@¶^—î:¨TŸÌ™‰W1Mð-3–ƒÞÚ.aûÇTúÊ,gž1B´x°†1’øglGÅBœ¦cx4ЫJƈ8a@lfŒÍÞ}Õô!S›/àåÔð§³·UPHP¸G=@˜C Øœs›ƒ9ö)û¶¡àY ü_{×ÒÜFޤÿŠ¢ÏÍ™ À3ѧ½ìa~BÇMÉ-†%Q+Jžîùõ›YU‹H ”¬ñìɲ È72¿<ò&KÔ¡G°«ésKÒÚq@(ŠÓQì+SëhÌF'k3B‰»d 3`н ´'Ë"ôý3$—£¥EAÃhèpd÷+ã»WãKQkÐí¦ÿ Ëo°RcT8"ò­6·Û’¨ZÛhÁPÞ5dˆ–¼u› µ ®Ð°–“®ÈÊŠ„FtsPìÌfig:û66’ÑÅœò Z eã]Sš²ê­á  ˜*Yõ~QµL=Ðæ§ªAÉ#y†\[Šä„ÛÂl°Ø¥kP¢2h´ÐfÈ (m’2L4ó<¡I!´»Îm…0Í^9:š{§.Ø£ñQ~Ñ ;Å~J*dfb¡u 1Å/^ùh£Ç„bMÇU‚à–¾³ÒöÆQ{­­tg‚GX´ža¿¾{¸å/ãï‚߿ο)f­±@EC3q7^)ilpmÐÕÆoþñ¸{>nA°ºÝ~¹Þüµ¹½þ0=†mþg¨³£ƒÔ¾Ê0δÙñI%¢­E½ÜC¸6-\õŒŽ5%u59¬w¤æÆy‚†Åò¤¾Æ ‰´$ï^iþqÒ dU|ÄÝ„-¼ÑÛó3·Í¬,P¿ÎQoçŒ"¢¶VèˆY†ù yQÁhhQƒP‚¶ÉþªFð3ÑˇGÄeFî TòYTdfÌGœˆvèNŽÚBbxÏÄ|o•;‚IÎüPƒ^â©ã´)^=¬7_ù‹ŠG¨6¬cY5RŽÖ.Í1ñÐð…hMŒ´‘¡&ßÓ÷²5¢`Lq*µ_B«ö:–e¬Њ°¨§?¶¥¼¾a†ÉˆÿxyüßY)Tqÿ<‹èçÅ´_ÂP|9x‚¼\6â%“PE‚hÞŽp&ðdÙ‘M¢ïŒ4žÍÑ Tƒè£Å„,%h›¼M,zþ½%ÏZZ²lx¤k¬Ë² èqÀsº3È4î Êê²ÄõÃóUÀ© ÄÖ¨æB’Y Õ:Úþ\‰nÙ1Zi…*ô¨7ªZÆYé©ñê£å¹#:Ô°¤¢5:UÑÁÔö‘•OèP¦Xþ@‘’ZN^9°6n†õâp…5f”e¶$ÍØ£Ø2d!™rbœ=i¬KÞ_p ÔͱvñQë79MF9§ñ¬Mœ÷úZhºF]=§ Úm0³¢sàhÜUq‡E*x°‚~_]Ñnj«e(–R"ÅPV'jìY|[ úB”F3S)Èô¸9ÊÁ–cªl©‚'/±(^tÓ\–x)æ;¶üíÑøöGÞ_nï˜)F¯f¿{OolØí¾éõíÃÍZÇÍøÈ°óGh’ùev‹!iR0Ö0%d 6â]³Çj³›û±‘ c™ä̹{†eÈjU]*.˜W!ƪ7ˆ·oÓž©pv¾ÐxðXQðä`9b2…?U N׈ÎÑÔ!{‡Šò­HP^VÜ5ëVýjæHÞ]k'5¾Ðˆ4‡ØXsˆLoq@i©uNÛ”ÔÊY£Úr˜´7ÐqHÌNát…*_«¥F¿7XWÒ+†JÆÏîÝdg!Ç‹¸ªÙ>½üâ'}ùp«ëÛë^·¬nw›¯«1Yõ6å¯É:EÖ#æÖZìú„6‹SjF¨MQ/ä@Î&%æ ãEÔ ¤ +¹’Å‘qL<¨N×ß_^Ÿ†®oöȸUà3¥µ EVMN–3°Š=ÞM,œ.5¬6‡Þæjб«ÂÜF婈~ ]R`å­L3»ÁtsQŸY¹2Ãïßjˆƒôú®6ûÇ+åÇÝ7ùO{í ˜8úK…(¡SuûÙŒoOTþµhÄlù Ù%°R¹o-2—>°×^•v!K@ID¢ûÈÜ:Sæ_øˆñ6*°$“lr„dŽYÏ‚ÞgÕQÀŸéiÒ_3©¡SÿרU.*“ÛvÒ Ñ3ëA  ”4¤1M“ÞV[›iX)r¬ uÊÖ3O¸ ‚³)žÀø´´ÉÉ2¬*îï æH/J³{1Û°„³ `<ÉüUÖ.:"79qÍžÖ~¿}\D÷Õï»Ý·ûn÷8¯ëŠb9äÝC†ôø’&`Ã1IÈByS¤+3­6¢ }ÇηK½Œ Z%4úh!Ì„N-0™Èu>É®=™„…\c•°µ(‚í@f²‘Á=|Q½ѳN?®zÛœÉë¬ÇUcUöëj µqêŽI£ñZÕÜ1)¾c±kƒÍ:ºc±Œl Œkó~žRA¿3HýQ\žHI¼­·&”ƒÖ'?l bo¨ Ä>ùY'@í¶<¢FÖ—¤˜Œò$3ƒ¨ÅsUú@,E©T‡†õnb¤S@‹õIZ(².ýMÈÐ"7ÄÛ©æ™e¢ú©ÙÅóP†%ˆŠ0©¤èùÊ(n äð[äÂè«÷„ Mx;…ÖŸÅ>ø`M•›2Î jShŽåìûãÆnöÛýýúa³{:(Ö2ÇøRu]WtR«³Ç,~j@;U2²Î°§èüüvøYôŸ '{‚øÙ…ü±¦Ÿñ .P:‚a¿ ]R7„í›?´E#¤µ™£œvFû*ïV^O–Ž`˜Ìúm£}§ŒµÛ¯0e@½-¦¬É½dB%¦ìQL'ùƒ²,TE?| T›šœöc‘ÑB¦cû°¾ëÆ'‰îÏ•DŒÃw¯®$¼N]Kòï3Ìý|þ*#áJ¼>’a1è0ÓH¤)}Æ2¤É\c Á2“®¢";Ï™öÇc/Ò5Ap`A•o @š×4iòÅŽb¿„‚esÐYÇ#EòY*E'ñ±Ó”1þµlô~}¿¹¾”ǘçý[˰z“ðXEò«·)­U4áñËÉË AÔĪËÐ~— Ÿ5j;Ãk?Rµ7}Aȧ¿ÿ÷}B`Ón tÒºë/žyƒ²:•í .SŠ¥@(„U_üvñôçý§¿ggŒîÖ¢ ÆôÊq ,’"Òv^Šèüꓜ±ÿ­Èáav²èü§þÆë›“a>Ÿ?ÍtNS\`0ò­+é¦×R¯áIÚ ^!2F"ct;êѧ\{^ÏA’އ•NO“QìÎK)ï­:ž£;]£nŽ®Ò¶ÆeãGc¿P+Õ¤jƒV:^•¯~AÎ-Jƒ³ |¡Ñë$_x…ª-¿8ÂQÁ?bŒé"ÑÒ"QÑJYeœ —z ^¸Æàè±äjAë/ô'ˆ´ †Nb> ø^ ¦Ý 
Gf,XùÂñi¯NZ&m_ÇöUHÓ¦…\J¬ò´(Xïr»¹^oÆ—Ì,”¢ÝíÝáÇÇ+4™ç;•%ÏR¨1L(¸ŽLwÖ³sáw“²ìEÂE}“àƒ‡ç›$m“UGǹ¿Ð­Åè_Þµg;•_åÔo´aÿ¦J˜Ñ/oj”‰˜’q#jì`ycj|Û‘¿Á5ù»„69yÕ­rºêªm¯·MSvp.hÖ<šgèd_Ða¤‰Uɤ¼Ö±Q·rÿì `RÝNORÝz…ߙФE^/8¶ÚecœI7®ÒÄÎO)CôK”´œcy”ÙK%A=E‚úð–CŽvŒ3)Ã)¯ÍÎWGU>Zþûr˜Œ9¤úw|Žû5²D݃C—].ÇBEHÁ•jâž2Á-€gÏ÷‡hûþš8‹æiÁ“–ŸÉ# VYLƒŠÊ¯ô2‚Ë‘_{:o¤˜iâ IZ(bÞ5’±4G£¾ÎP¥ˆC™i¶Žø/C¨MªAn³ž˜*Ç–ô½ÅÎëå…¡-§×›·Å^YœsqN; ¾JyJ7û"á.0ùÌ»OÙɃ̡2ïrý|µ™XпS<-*ÒâWÐ'*°=’®­ú¹Q"5Óœ|ùÞ*“ÂR Öø¤çCZÇ*Y'iò4­;Ëë¹ì·ißyc´è§Rì—P¦½÷‚R–ë8’ï!™W>£J«_Š ~^þ†%J¦Ùb@k ‚+CÌ Q™ü™Hã¡êØqµ6ÑGÌİߧû¿A£#l')è"–MÚÝ{‹›¶Ë–‚ TÕi{QG¼àìûâFbÈôsq#3åþäeDªê2 Úö¥$Ä0¹e ÀvWS‹<™/ùx}u³>d=÷«¯æéy*4( § o•òJWéÐÑlÏôa¼&t¨çâ5̦U+U*¬€†l€¤*#-I]j\,œ¦@—Ê.¥›ÒÌÓ¥V`{¡Jü¬µ ëR&ë8ÉḬŽLÌãäN¼v4…Þ¥TÈí(’ÿ“w‰‡š»dÄÜPè8¤5h>BþÝzsÃQ€ôÙ~fµ£»ñG6KÍJú™Kñ‚s]¥f_c"gB´J%»f¡ný,B¶sg¡ (¹”´;+Á9$u0b3kBµ"‡Övžp®CË Åc^Æã¼N·€CËtÕ;åŒuú}[õä(c¼oÞªŸ­Aâ÷ ±wÏLR®«„"EÐy_=urp˜¯;`»„iÁV)ˆÊÉmôlr´ŒL«lËÚ`ò5ãâR‚ÊÛ÷¯æý.à-1#Mrd§Ð&$UjWèɼC="“ÿ¸X$íIuÚãgºýù*ðÖ½.\TóŠmaRÝèHÇËñQyãoKª²ªäy™×´v¡ìy=Dž×Ã[åâ´Œ°uÓÊáüx”á¬& –79Lúy½ß”rðèy}²D]Å|G6ûm½?jO k”¹öÕ©ŽõŽàƒêŸëm=‹æb7¾(ÎEÃT!x‰|?ÚãºH¯œ ¥u–ôž„Hæ£eNš´(+ùæÁÏr£¤*)dý±´ë€`BÄuà›2Z€„ë@‡©'ŸEÑùl/¦BˆÝgèP¦_Z¯‹±.d   M ¿Xž° Ó…w‚#keÖURN½à¿œµ²ÃÁ£Õß““å`X:œÜáþ×È'pñ-K/¹¨ôš“·Í†Û–§Ö†%ôÒONB}|‹¦'÷'Õ¤„ñ4i&ÅOƒ°fv„µžÝŒ¼3˜:ý„¡$ïhd˜ nPÍ…Ù JI™V=•RPBÌšjæ7ÉÁÃÑr°×y[VÙìCq–]âò™áîp×@' þòøÃ¥i멺 Kgh½†»ˆþWsAŸðÞØš¶3C»w°%/h¯ {¼ó BÉgNì‚g^šmJ>ó¬}HqtÚ>K9hú;»9ò¼K½mð­(µœÊ¥MñŠBÊôÈHǘé™P­A 뀽`bÞÏ2MüG®Î§pE° Ú[¥øEÙDÔYÙDì˜ÄÒqŽdE­Íˆs¢À›“Ód¤½ê@:Z˜¬Q7n’8ŽâèÎÓ¬€‚`?TDи´Ad‘³o»d]Ôô}zßÿ?Y&ž,ûË"ö‰•©ºoôíËôX#²0‚Ç1ûok¿»åom7_{æìÑÜÇï–»¦`NÔB7ã”s¶Êzs±9pKù̧f+kÞsG½ÎH“„i˜´æ1%]k.ÛæíØYÖœ_£êb·ð„ˆžÎciË‘ –õ4ó=ÆsZžšªî¬êi03ʧ—Ñ)'.; ôûêP¡%‚5¶ Eƒâ8ª“A”™ "Ä®Ÿ|˜öÈ™‰ÔyµþÜ::4üp°ŒT!;YçHhúÒ’,,Œz5PQÛ7*÷`牳¦e²Áð*“+ç?¯¯Â¬2™lašP®ªÊ¤•v!_² *\}A[.Z–gžOè—Ð)tÎ&Õ QˆOèx9YÎkïÊ;Âú½Œ³ËÖ³ dT‘Br5°yÏ BäÌyßzñVýË<Ü|¹»~z¾ýß¿àáúîæóöñæê~»rÕ]™Ÿ<Ñ)ìA·xùÊüà3iΚI€T ºœÈH͉ -:µ{¢µÿ»êû¶‡qÕÛë}Y,ô„Hƒ–FE6íI=hÀ¦2›‚nmà>ªÅÞ5²þîz¬j?J€­0ŸFpZ~®¹V’i¡d486(Þ‡V!÷Vûûgã—ŒeØHí“2h­‰âû¿¤ Ê®Œ1öÝe÷EŒÓ>ǾÙi«@ÇÓ@ð³×ÅŽé:c¥Ð¢´K(÷êa`·ïeº¿ÇÇ燧ÿy¾ßþ)WÙSõûM'|ö?î×·»ØßX[sp¦Yü¢[Ÿ›PêÍM…<é™o±Æh&÷þuÛÿ¶ÃÂfèaÈê|áó³ëq!jòöúâÛ®WìwBÈÇ‹ÑHñ"·;þ¬ÕðíÕV¼ +€/›Ÿ5‚ƒ ˆ*eÂ>ŠÂ>ãüoŸ A,Y u¤£ÎØô:& ßä_QôB˜/_¶›Þô³¤wã÷AÅŸÚ99%óãkvNèéìÎ)ÂÎ 6n¥e»jãäÃY“ŽL2ïûßÁ"ÿè§jž-…././@LongLink0000644000000000000000000000023400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000021600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000022300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000011215073043234032765 0ustar zuulzuul2025-10-13T00:08:25.762943892+00:00 stdout P Fixing etcd log permissions. ././@LongLink0000644000000000000000000000022300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000011215073043234032765 0ustar zuulzuul2025-08-13T19:43:57.535975521+00:00 stdout P Fixing etcd log permissions. ././@LongLink0000644000000000000000000000022300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000011215073043234032765 0ustar zuulzuul2025-10-13T00:12:29.592162704+00:00 stdout P Fixing etcd log permissions. 
././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000004305415073043234033001 0ustar zuulzuul2025-10-13T00:12:33.418537031+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.417364Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-10-13T00:12:33.421938729+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.421884Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"} 2025-10-13T00:12:33.422863122+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.422798Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-10-13T00:12:33.425427197+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.422944Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-10-13T00:12:33.425492998+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.425461Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc0001f6000/192.168.126.11:9978\""} 2025-10-13T00:12:33.425568979+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.425533Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc0001f6000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-10-13T00:12:33.425568979+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.425553Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-10-13T00:12:33.427701939+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.427629Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n 
\"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-10-13T00:12:33.427824781+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.427701Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-10-13T00:12:33.428536681+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.428475Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc00007f840 } }"} 2025-10-13T00:12:33.428536681+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.428527Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 2025-10-13T00:12:33.428581931+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.428556Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-10-13T00:12:33.428609242+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.428581Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-10-13T00:12:33.428882185+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.428797Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:12:33.428882185+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.428859Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:12:33.429142719+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.429021Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, CONNECTING"} 2025-10-13T00:12:33.432599337+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.432405Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:33.433296297+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:33.433201Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:33.433519590+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.433436Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:33.433606301+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.433553Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, TRANSIENT_FAILURE"} 2025-10-13T00:12:33.433651352+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.433608Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-10-13T00:12:33.434591735+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.434483Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 2025-10-13T00:12:33.434716977+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.434662Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-10-13T00:12:33.435632890+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.435528Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-10-13T00:12:33.435632890+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.435587Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-10-13T00:12:33.435632890+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.435605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-10-13T00:12:33.435632890+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.435618Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-10-13T00:12:33.437303293+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:33.437221Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-10-13T00:12:34.433971911+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.433833Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:34.433971911+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.433927Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, IDLE"} 2025-10-13T00:12:34.434047843+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.433996Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:12:34.434164044+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.434065Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:12:34.434263256+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.434156Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, CONNECTING"} 2025-10-13T00:12:34.434669981+00:00 stderr F 
{"level":"info","ts":"2025-10-13T00:12:34.434598Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:34.434669981+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:34.434644Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:34.434699842+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.434682Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:34.434747102+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:34.434709Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, TRANSIENT_FAILURE"} 2025-10-13T00:12:36.133849937+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.133681Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:36.133849937+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.133771Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, IDLE"} 2025-10-13T00:12:36.133965800+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.133841Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:12:36.133965800+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.133912Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:12:36.134302780+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.134138Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, CONNECTING"} 2025-10-13T00:12:36.134370252+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.134254Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:36.134441834+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:36.13438Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:36.134485425+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.134441Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:36.134512096+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:36.134492Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, TRANSIENT_FAILURE"} 2025-10-13T00:12:38.307979692+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.307815Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:38.307979692+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.307901Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, IDLE"} 2025-10-13T00:12:38.307979692+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.307938Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:12:38.308057075+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.307971Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:12:38.308057075+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.307996Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, CONNECTING"} 2025-10-13T00:12:38.308539888+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.308428Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:38.308539888+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:12:38.308499Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:38.308539888+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.308526Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:38.308568619+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:38.308554Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, TRANSIENT_FAILURE"} 2025-10-13T00:12:42.263948538+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.263631Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:12:42.263948538+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.263707Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, IDLE"} 2025-10-13T00:12:42.263948538+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.263746Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:12:42.263948538+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.263816Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:12:42.264225336+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.263926Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, CONNECTING"} 2025-10-13T00:12:42.275376464+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.275153Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-10-13T00:12:42.275376464+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.275221Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001ca0f0, READY"} 2025-10-13T00:12:42.275376464+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.275263Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-10-13T00:12:42.275376464+00:00 stderr F {"level":"info","ts":"2025-10-13T00:12:42.275279Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000005015515073043234033001 0ustar zuulzuul2025-10-13T00:08:30.198428765+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.198075Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = 
/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-10-13T00:08:30.201456287+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.201398Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"} 2025-10-13T00:08:30.202310994+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.202254Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-10-13T00:08:30.204037726+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.202389Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-10-13T00:08:30.204136409+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.204089Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc00055e000/192.168.126.11:9978\""} 2025-10-13T00:08:30.204136409+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.204125Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc00055e000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-10-13T00:08:30.204150669+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.204137Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-10-13T00:08:30.209416930+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.209093Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-10-13T00:08:30.209491642+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.209405Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-10-13T00:08:30.210017278+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.20996Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc00007fda0 } }"} 2025-10-13T00:08:30.210017278+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.209996Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 
2025-10-13T00:08:30.212005469+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.21128Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-10-13T00:08:30.212005469+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.21131Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-10-13T00:08:30.212417701+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.212299Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:08:30.212613427+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.212455Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:08:30.212943867+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.212861Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, CONNECTING"} 2025-10-13T00:08:30.213224406+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.213117Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 2025-10-13T00:08:30.213379210+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.213336Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-10-13T00:08:30.213914737+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.213855Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-10-13T00:08:30.213925797+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.213892Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-10-13T00:08:30.213933297+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.213918Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-10-13T00:08:30.213940618+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.213929Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-10-13T00:08:30.214423162+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.214388Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:30.214474244+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:30.214453Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:30.214523815+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.214505Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:30.214574117+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.214559Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, TRANSIENT_FAILURE"} 2025-10-13T00:08:30.214619798+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.214606Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-10-13T00:08:30.216572658+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:30.216515Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-10-13T00:08:31.216579144+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.216424Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:31.216579144+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.216511Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, IDLE"} 2025-10-13T00:08:31.216694698+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.216548Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:08:31.216694698+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.216589Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:08:31.216992347+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.21686Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, CONNECTING"} 2025-10-13T00:08:31.216992347+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.216956Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:31.217014708+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:31.216994Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:31.217030478+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.217016Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:31.217081810+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:31.217046Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, TRANSIENT_FAILURE"} 2025-10-13T00:08:33.036429142+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.03574Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:33.036429142+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036365Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, IDLE"} 2025-10-13T00:08:33.036519965+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036404Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:08:33.036519965+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036442Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:08:33.036743572+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036636Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, CONNECTING"} 2025-10-13T00:08:33.036907337+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036843Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:33.036907337+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:33.036871Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:33.036907337+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036893Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:33.036947858+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:33.036918Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, TRANSIENT_FAILURE"} 2025-10-13T00:08:36.059406250+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.059298Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:36.059406250+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.059374Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, IDLE"} 2025-10-13T00:08:36.059469302+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.059431Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:08:36.059501473+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.059471Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:08:36.059727450+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.059634Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, CONNECTING"} 2025-10-13T00:08:36.060431961+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.060379Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:36.060509204+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:36.060487Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:36.060553035+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.060537Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:36.060614297+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:36.060593Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, TRANSIENT_FAILURE"} 2025-10-13T00:08:40.121089803+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.120958Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:40.121089803+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.121056Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, IDLE"} 2025-10-13T00:08:40.121172646+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.12111Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:08:40.121252728+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.121163Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:08:40.121459184+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.121364Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, CONNECTING"} 2025-10-13T00:08:40.121632270+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.121586Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:40.121649900+00:00 stderr F {"level":"warn","ts":"2025-10-13T00:08:40.121624Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:40.121692172+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.121658Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:40.121708592+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:40.121688Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, TRANSIENT_FAILURE"} 2025-10-13T00:08:45.756755919+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.756559Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-10-13T00:08:45.756755919+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.756674Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, IDLE"} 2025-10-13T00:08:45.756852071+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.756731Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-10-13T00:08:45.756852071+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.756779Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-10-13T00:08:45.756869842+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.756826Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, CONNECTING"} 2025-10-13T00:08:45.768431455+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.768278Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-10-13T00:08:45.768431455+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.768383Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0001cb860, READY"} 2025-10-13T00:08:45.768478036+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.76844Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-10-13T00:08:45.768494447+00:00 stderr F {"level":"info","ts":"2025-10-13T00:08:45.768471Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000005016115073043234032776 0ustar zuulzuul2025-08-13T19:44:07.409131412+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.407635Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = 
/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-08-13T19:44:07.414073583+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.413544Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"} 2025-08-13T19:44:07.415381827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.415228Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-08-13T19:44:07.417425202+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.415867Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-08-13T19:44:07.417943455+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417861Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc0000f4000/192.168.126.11:9978\""} 2025-08-13T19:44:07.417943455+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417922Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc0000f4000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-08-13T19:44:07.417967766+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417938Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-08-13T19:44:07.419975659+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.419896Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-08-13T19:44:07.420416501+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.420235Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-08-13T19:44:07.421162071+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421084Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc000409120 } }"} 2025-08-13T19:44:07.421162071+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421136Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 
2025-08-13T19:44:07.421768917+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421518Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-08-13T19:44:07.421768917+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421754Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-08-13T19:44:07.421856239+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421629Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:07.421936281+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421864Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:07.422733853+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.422625Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:07.423965905+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.423923Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.423989806+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:07.423958Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.424074838+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424006Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.424092389+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424065Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:07.424092389+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424083Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-08-13T19:44:07.427257203+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.426529Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 2025-08-13T19:44:07.427644073+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.427503Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-08-13T19:44:07.428726452+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.428478Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-08-13T19:44:07.428914807+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.428851Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-08-13T19:44:07.429300827+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:07.429224Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-08-13T19:44:07.429300827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-08-13T19:44:07.429875752+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429724Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-08-13T19:44:08.425312715+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.424962Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.425312715+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425265Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:08.425740887+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425532Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:08.425740887+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425715Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:08.426240300+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426164Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:08.426464226+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426387Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:08.426516Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426558Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426594Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:10.295301275+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.295494500+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295467Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:10.295725296+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295638Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:10.295870770+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295831Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:10.296169798+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.296034Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:10.297898874+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.297841Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298034777+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:10.298004Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298270424+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.298237Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298428078+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.298396Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:12.743230101+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.743230101+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743181Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:12.743297363+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743221Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:12.743297363+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743248Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:12.743509149+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743441Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:12.743921330+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.74386Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.743997642+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:12.74397Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.744110835+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.74408Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.744228798+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.744199Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632882Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632959Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632994Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633019Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633195Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633257Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:16.633273Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633295Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633314Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054181Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054257Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054296Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054322Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.055101Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079158Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079423Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, READY"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079478Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079495Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000024200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000022400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000023100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000036015073043234032772 0ustar zuulzuul2025-10-13T00:12:34.247770969+00:00 stderr F I1013 00:12:34.247454 1 readyz.go:155] Listening on 0.0.0.0:9980 2025-10-13T00:12:50.063834124+00:00 stderr F I1013 00:12:50.063770 1 etcdcli_pool.go:70] creating a new cached client ././@LongLink0000644000000000000000000000023100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000132515073043234032774 0ustar zuulzuul2025-10-13T00:08:31.292526327+00:00 stderr F I1013 00:08:31.292117 1 readyz.go:155] Listening on 0.0.0.0:9980 2025-10-13T00:08:41.613870008+00:00 stderr F I1013 00:08:41.613777 1 etcdcli_pool.go:70] creating a new cached client 2025-10-13T00:11:29.651472131+00:00 stderr F 2025/10/13 00:11:29 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "localhost:2379", ServerName: "localhost:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp [::1]:2379: connect: connection refused" 2025-10-13T00:11:29.660159610+00:00 stderr F I1013 00:11:29.660007 1 readyz.go:179] Received SIGTERM or SIGINT signal, shutting down readyz server. 
././@LongLink0000644000000000000000000000023100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000100615073043234032770 0ustar zuulzuul2025-08-13T19:44:08.536660030+00:00 stderr F I0813 19:44:08.536156 1 readyz.go:155] Listening on 0.0.0.0:9980 2025-08-13T19:44:23.569024333+00:00 stderr F I0813 19:44:23.567456 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:39.363673157+00:00 stderr F I0813 20:01:39.363557 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:42:47.039150261+00:00 stderr F I0813 20:42:47.039006 1 readyz.go:179] Received SIGTERM or SIGINT signal, shutting down readyz server. ././@LongLink0000644000000000000000000000022000000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015073043234032771 5ustar zuulzuul././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015073043234032761 0ustar zuulzuul././@LongLink0000644000000000000000000000024600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043232033022 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043232033022 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001237215073043232033031 0ustar zuulzuul2025-10-13T00:08:28.879966523+00:00 stderr F I1013 00:08:28.879473 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-10-13T00:08:28.879966523+00:00 stderr F I1013 00:08:28.879525 1 observer_polling.go:159] Starting file observer 2025-10-13T00:08:28.901690224+00:00 stderr F W1013 00:08:28.901545 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.904790268+00:00 stderr F W1013 00:08:28.901544 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.905021885+00:00 stderr F E1013 00:08:28.904998 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:28.905455879+00:00 stderr F E1013 00:08:28.904992 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.807469002+00:00 stderr F W1013 00:08:29.807389 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.807568645+00:00 stderr F E1013 00:08:29.807550 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.965467722+00:00 stderr F W1013 00:08:29.965363 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.965467722+00:00 stderr F E1013 00:08:29.965433 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:41.499617929+00:00 stderr F W1013 00:08:41.499287 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list 
*v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:08:41.499617929+00:00 stderr F I1013 00:08:41.499371 1 trace.go:236] Trace[1930891018]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Oct-2025 00:08:31.496) (total time: 10003ms): 2025-10-13T00:08:41.499617929+00:00 stderr F Trace[1930891018]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (00:08:41.499) 2025-10-13T00:08:41.499617929+00:00 stderr F Trace[1930891018]: [10.003303058s] [10.003303058s] END 2025-10-13T00:08:41.499617929+00:00 stderr F E1013 00:08:41.499392 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:08:48.141245467+00:00 stderr F I1013 00:08:48.140895 1 trace.go:236] Trace[723539459]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Oct-2025 00:08:33.140) (total time: 15000ms): 2025-10-13T00:08:48.141245467+00:00 stderr F Trace[723539459]: ---"Objects listed" error: 15000ms (00:08:48.140) 2025-10-13T00:08:48.141245467+00:00 stderr F Trace[723539459]: [15.00020547s] [15.00020547s] END 2025-10-13T00:08:48.180566109+00:00 stderr F I1013 00:08:48.180454 1 base_controller.go:73] Caches are synced for CertSyncController 2025-10-13T00:08:48.180566109+00:00 stderr F I1013 00:08:48.180503 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2025-10-13T00:08:48.180630981+00:00 stderr F I1013 00:08:48.180585 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:08:48.180630981+00:00 stderr F I1013 00:08:48.180598 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000002331215073043232033025 0ustar zuulzuul2025-10-13T00:12:31.671392267+00:00 stderr F I1013 00:12:31.671118 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-10-13T00:12:31.671392267+00:00 stderr F I1013 00:12:31.671172 1 observer_polling.go:159] Starting file observer 2025-10-13T00:12:31.686766524+00:00 stderr F W1013 00:12:31.686301 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.686880619+00:00 stderr F W1013 00:12:31.686308 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.686909181+00:00 stderr F E1013 00:12:31.686882 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.686909181+00:00 stderr F E1013 00:12:31.686889 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:42.491657291+00:00 stderr F W1013 00:12:42.491592 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.491683162+00:00 stderr F I1013 00:12:42.491665 1 trace.go:236] Trace[280192335]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.490) (total time: 10001ms): 2025-10-13T00:12:42.491683162+00:00 stderr F Trace[280192335]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.491) 2025-10-13T00:12:42.491683162+00:00 stderr F Trace[280192335]: [10.001591221s] [10.001591221s] END 2025-10-13T00:12:42.491694672+00:00 stderr F E1013 00:12:42.491686 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": 
net/http: TLS handshake timeout 2025-10-13T00:12:42.734570958+00:00 stderr F W1013 00:12:42.734469 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.734570958+00:00 stderr F I1013 00:12:42.734561 1 trace.go:236] Trace[1597408848]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.733) (total time: 10001ms): 2025-10-13T00:12:42.734570958+00:00 stderr F Trace[1597408848]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.734) 2025-10-13T00:12:42.734570958+00:00 stderr F Trace[1597408848]: [10.001060464s] [10.001060464s] END 2025-10-13T00:12:42.734614149+00:00 stderr F E1013 00:12:42.734576 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:49.271603603+00:00 stderr F I1013 00:12:49.271493 1 base_controller.go:73] Caches are synced for CertSyncController 2025-10-13T00:12:49.271603603+00:00 stderr F I1013 00:12:49.271524 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-10-13T00:12:49.271603603+00:00 stderr F I1013 00:12:49.271569 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:12:49.271668855+00:00 stderr F I1013 00:12:49.271578 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:05.481909574+00:00 stderr F I1013 00:15:05.481760 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:05.481909574+00:00 stderr F I1013 00:15:05.481797 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:05.513241433+00:00 stderr F I1013 00:15:05.513182 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:05.513241433+00:00 stderr F I1013 00:15:05.513210 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:05.537763737+00:00 stderr F I1013 00:15:05.537648 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:05.537763737+00:00 stderr F I1013 00:15:05.537673 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:05.571852779+00:00 stderr F I1013 00:15:05.571209 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:05.571852779+00:00 stderr F I1013 00:15:05.571239 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:05.620989801+00:00 stderr F I1013 00:15:05.620249 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:05.620989801+00:00 stderr F I1013 00:15:05.620278 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:10.337927898+00:00 stderr F I1013 00:15:10.336992 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:10.337927898+00:00 stderr F I1013 00:15:10.337014 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:10.343964979+00:00 stderr F I1013 00:15:10.343928 
1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:10.343964979+00:00 stderr F I1013 00:15:10.343948 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:10.351538116+00:00 stderr F I1013 00:15:10.349372 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:10.351538116+00:00 stderr F I1013 00:15:10.349391 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:10.383357909+00:00 stderr F I1013 00:15:10.382703 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:10.383357909+00:00 stderr F I1013 00:15:10.382724 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:10.390507054+00:00 stderr F I1013 00:15:10.389851 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:10.390507054+00:00 stderr F I1013 00:15:10.389864 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:15:10.396926036+00:00 stderr F I1013 00:15:10.396210 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:15:10.396926036+00:00 stderr F I1013 00:15:10.396237 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:48.640234184+00:00 stderr F I1013 00:22:48.640189 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:48.640311746+00:00 stderr F I1013 00:22:48.640291 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:49.222004329+00:00 stderr F I1013 00:22:49.221876 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:49.222004329+00:00 stderr F I1013 00:22:49.221947 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:49.222214685+00:00 stderr F I1013 00:22:49.222138 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:49.222214685+00:00 stderr F I1013 00:22:49.222173 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:49.222582936+00:00 stderr F I1013 00:22:49.222495 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:49.222582936+00:00 stderr F I1013 00:22:49.222546 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:49.222877444+00:00 stderr F I1013 00:22:49.222774 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:49.222877444+00:00 stderr F I1013 00:22:49.222831 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:49.223046718+00:00 stderr F I1013 00:22:49.222986 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:49.223046718+00:00 stderr F I1013 00:22:49.223001 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-10-13T00:22:49.223091270+00:00 stderr F I1013 00:22:49.223077 1 certsync_controller.go:66] Syncing configmaps: [] 2025-10-13T00:22:49.223108550+00:00 stderr F I1013 00:22:49.223090 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000030600000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001202615073043232033025 0ustar zuulzuul2025-08-13T20:08:10.249757059+00:00 stderr F I0813 20:08:10.244741 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:10.252956431+00:00 stderr F I0813 20:08:10.252847 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-08-13T20:08:10.353970297+00:00 stderr F I0813 20:08:10.353841 1 base_controller.go:73] Caches are synced for CertSyncController 2025-08-13T20:08:10.353970297+00:00 stderr F I0813 20:08:10.353922 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 2025-08-13T20:08:10.354230874+00:00 stderr F I0813 20:08:10.354151 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:08:10.354230874+00:00 stderr F I0813 20:08:10.354188 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.319571 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.319662 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.320047 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.320067 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612236448+00:00 stderr F I0813 20:09:06.612141 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612236448+00:00 stderr F I0813 20:09:06.612204 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612404873+00:00 stderr F I0813 20:09:06.612359 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612404873+00:00 stderr F I0813 20:09:06.612391 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612481265+00:00 stderr F I0813 20:09:06.612449 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612481265+00:00 stderr F I0813 20:09:06.612458 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.324387120+00:00 stderr F I0813 20:19:06.324231 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.324387120+00:00 stderr F I0813 20:19:06.324298 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.612445676+00:00 stderr F I0813 20:19:06.612323 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.612445676+00:00 stderr F I0813 20:19:06.612387 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.612665402+00:00 stderr F I0813 20:19:06.612603 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.612665402+00:00 stderr F I0813 20:19:06.612632 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324132 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 
20:29:06.324208 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324741 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324753 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.613248366+00:00 stderr F I0813 20:29:06.613183 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.613663758+00:00 stderr F I0813 20:29:06.613587 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.614190093+00:00 stderr F I0813 20:29:06.614167 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.614294426+00:00 stderr F I0813 20:29:06.614275 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.325265148+00:00 stderr F I0813 20:39:06.324939 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.326232376+00:00 stderr F I0813 20:39:06.325074 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.326451882+00:00 stderr F I0813 20:39:06.326356 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.326451882+00:00 stderr F I0813 20:39:06.326386 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.614742214+00:00 stderr F I0813 20:39:06.614606 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.614742214+00:00 stderr F I0813 20:39:06.614657 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043232033022 5ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000037765515073043232033053 0ustar zuulzuul2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067383 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067504 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067513 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067519 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067525 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067530 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067536 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067542 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067548 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067553 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067559 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067564 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067570 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067575 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067581 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067586 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067592 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067598 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067603 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067609 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067614 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067632 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067638 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067643 1 feature_gate.go:227] unrecognized feature gate: Example 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067649 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067655 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067661 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067667 1 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallGCP 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067673 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067678 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067684 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067689 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067694 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067700 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067706 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067711 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067717 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-10-13T00:08:28.067734054+00:00 stderr F W1013 00:08:28.067722 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067728 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067734 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067740 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067745 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067751 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067756 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067761 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067767 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067772 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067778 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067783 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067789 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067799 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067805 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067810 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 
00:08:28.067815 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067820 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067826 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067831 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067836 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067842 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-10-13T00:08:28.067930580+00:00 stderr F W1013 00:08:28.067848 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-10-13T00:08:28.071493038+00:00 stderr F I1013 00:08:28.071428 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-10-13T00:08:28.071493038+00:00 stderr F I1013 00:08:28.071461 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:08:28.071493038+00:00 stderr F I1013 00:08:28.071470 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-10-13T00:08:28.071493038+00:00 stderr F I1013 00:08:28.071476 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-10-13T00:08:28.071493038+00:00 stderr F I1013 00:08:28.071482 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-10-13T00:08:28.071531980+00:00 stderr F I1013 00:08:28.071490 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-10-13T00:08:28.071531980+00:00 stderr F I1013 00:08:28.071495 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-10-13T00:08:28.071531980+00:00 stderr F I1013 00:08:28.071506 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-10-13T00:08:28.071531980+00:00 stderr F I1013 00:08:28.071517 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-10-13T00:08:28.071531980+00:00 stderr F I1013 00:08:28.071521 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-10-13T00:08:28.071543420+00:00 stderr F I1013 00:08:28.071526 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:08:28.071543420+00:00 stderr F I1013 00:08:28.071534 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:08:28.071543420+00:00 stderr F I1013 00:08:28.071539 1 flags.go:64] FLAG: --client-ca-file="" 2025-10-13T00:08:28.071553950+00:00 stderr F I1013 00:08:28.071543 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:08:28.071553950+00:00 stderr F I1013 00:08:28.071549 1 flags.go:64] FLAG: --contention-profiling="true" 2025-10-13T00:08:28.071562691+00:00 stderr F I1013 00:08:28.071554 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:08:28.071649873+00:00 stderr F I1013 00:08:28.071560 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-10-13T00:08:28.071649873+00:00 stderr F I1013 00:08:28.071637 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:08:28.071661844+00:00 stderr F I1013 00:08:28.071643 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-10-13T00:08:28.071661844+00:00 stderr F I1013 00:08:28.071653 1 flags.go:64] FLAG: --kube-api-burst="100" 2025-10-13T00:08:28.071671254+00:00 stderr F I1013 00:08:28.071660 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:08:28.071680354+00:00 stderr F I1013 00:08:28.071666 1 flags.go:64] FLAG: --kube-api-qps="50" 2025-10-13T00:08:28.071680354+00:00 stderr F I1013 00:08:28.071674 1 flags.go:64] FLAG: --kubeconfig="" 2025-10-13T00:08:28.071690644+00:00 stderr F I1013 00:08:28.071678 1 flags.go:64] FLAG: --leader-elect="true" 2025-10-13T00:08:28.071690644+00:00 stderr F I1013 00:08:28.071683 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-10-13T00:08:28.071712695+00:00 stderr F I1013 00:08:28.071687 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2025-10-13T00:08:28.071712695+00:00 stderr F I1013 00:08:28.071694 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-10-13T00:08:28.071712695+00:00 stderr F I1013 00:08:28.071699 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2025-10-13T00:08:28.071712695+00:00 stderr F I1013 00:08:28.071703 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-10-13T00:08:28.071712695+00:00 stderr F I1013 00:08:28.071708 1 flags.go:64] FLAG: --leader-elect-retry-period="2s" 2025-10-13T00:08:28.071723695+00:00 stderr F I1013 00:08:28.071712 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:08:28.071732786+00:00 stderr F I1013 00:08:28.071717 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-10-13T00:08:28.071732786+00:00 stderr F I1013 00:08:28.071726 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:08:28.071756906+00:00 stderr F I1013 00:08:28.071730 1 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:08:28.071756906+00:00 stderr F I1013 00:08:28.071735 1 flags.go:64] FLAG: --master="" 2025-10-13T00:08:28.071756906+00:00 stderr F I1013 00:08:28.071739 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-10-13T00:08:28.071756906+00:00 stderr F I1013 00:08:28.071744 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:08:28.071756906+00:00 stderr F I1013 00:08:28.071748 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2025-10-13T00:08:28.071756906+00:00 stderr F I1013 00:08:28.071752 1 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:08:28.071768897+00:00 stderr F I1013 00:08:28.071757 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:08:28.071777737+00:00 stderr F I1013 00:08:28.071767 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-10-13T00:08:28.071786557+00:00 stderr F I1013 00:08:28.071772 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-10-13T00:08:28.071795558+00:00 stderr F I1013 00:08:28.071781 1 flags.go:64] FLAG: 
--requestheader-group-headers="[x-remote-group]" 2025-10-13T00:08:28.071804978+00:00 stderr F I1013 00:08:28.071791 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-10-13T00:08:28.071804978+00:00 stderr F I1013 00:08:28.071800 1 flags.go:64] FLAG: --secure-port="10259" 2025-10-13T00:08:28.071816038+00:00 stderr F I1013 00:08:28.071805 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:08:28.071816038+00:00 stderr F I1013 00:08:28.071809 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-10-13T00:08:28.071860070+00:00 stderr F I1013 00:08:28.071816 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:08:28.071860070+00:00 stderr F I1013 00:08:28.071837 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:08:28.071860070+00:00 stderr F I1013 00:08:28.071843 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:08:28.071860070+00:00 stderr F I1013 00:08:28.071849 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:08:28.071872220+00:00 stderr F I1013 00:08:28.071856 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-10-13T00:08:28.071872220+00:00 stderr F I1013 00:08:28.071862 1 flags.go:64] FLAG: --v="2" 2025-10-13T00:08:28.071887690+00:00 stderr F I1013 00:08:28.071869 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:08:28.071887690+00:00 stderr F I1013 00:08:28.071876 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:08:28.071887690+00:00 stderr F I1013 00:08:28.071882 1 flags.go:64] FLAG: --write-config-to="" 2025-10-13T00:08:28.092863199+00:00 stderr F I1013 00:08:28.092622 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:08:28.413279414+00:00 stderr F W1013 00:08:28.413112 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.413279414+00:00 stderr F W1013 00:08:28.413174 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. 
2025-10-13T00:08:28.413279414+00:00 stderr F W1013 00:08:28.413212 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false 2025-10-13T00:08:28.463707549+00:00 stderr F I1013 00:08:28.463607 1 configfile.go:94] "Using component config" config=< 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F clientConnection: 2025-10-13T00:08:28.463707549+00:00 stderr F acceptContentTypes: "" 2025-10-13T00:08:28.463707549+00:00 stderr F burst: 100 2025-10-13T00:08:28.463707549+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2025-10-13T00:08:28.463707549+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2025-10-13T00:08:28.463707549+00:00 stderr F qps: 50 2025-10-13T00:08:28.463707549+00:00 stderr F enableContentionProfiling: false 2025-10-13T00:08:28.463707549+00:00 stderr F enableProfiling: false 2025-10-13T00:08:28.463707549+00:00 stderr F kind: KubeSchedulerConfiguration 2025-10-13T00:08:28.463707549+00:00 stderr F leaderElection: 2025-10-13T00:08:28.463707549+00:00 stderr F leaderElect: true 2025-10-13T00:08:28.463707549+00:00 stderr F leaseDuration: 2m17s 2025-10-13T00:08:28.463707549+00:00 stderr F renewDeadline: 1m47s 2025-10-13T00:08:28.463707549+00:00 stderr F resourceLock: leases 2025-10-13T00:08:28.463707549+00:00 stderr F resourceName: kube-scheduler 2025-10-13T00:08:28.463707549+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2025-10-13T00:08:28.463707549+00:00 stderr F retryPeriod: 26s 2025-10-13T00:08:28.463707549+00:00 stderr F parallelism: 16 2025-10-13T00:08:28.463707549+00:00 stderr F percentageOfNodesToScore: 0 2025-10-13T00:08:28.463707549+00:00 stderr F podInitialBackoffSeconds: 1 2025-10-13T00:08:28.463707549+00:00 stderr F podMaxBackoffSeconds: 10 2025-10-13T00:08:28.463707549+00:00 stderr F profiles: 2025-10-13T00:08:28.463707549+00:00 stderr F - pluginConfig: 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F kind: DefaultPreemptionArgs 2025-10-13T00:08:28.463707549+00:00 stderr F minCandidateNodesAbsolute: 100 2025-10-13T00:08:28.463707549+00:00 stderr F minCandidateNodesPercentage: 10 2025-10-13T00:08:28.463707549+00:00 stderr F name: DefaultPreemption 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F hardPodAffinityWeight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F ignorePreferredTermsOfExistingPods: false 2025-10-13T00:08:28.463707549+00:00 stderr F kind: InterPodAffinityArgs 2025-10-13T00:08:28.463707549+00:00 stderr F name: InterPodAffinity 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F kind: NodeAffinityArgs 2025-10-13T00:08:28.463707549+00:00 stderr F name: NodeAffinity 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs 2025-10-13T00:08:28.463707549+00:00 stderr F resources: 2025-10-13T00:08:28.463707549+00:00 stderr F - name: cpu 
2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F - name: memory 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F name: NodeResourcesBalancedAllocation 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F kind: NodeResourcesFitArgs 2025-10-13T00:08:28.463707549+00:00 stderr F scoringStrategy: 2025-10-13T00:08:28.463707549+00:00 stderr F resources: 2025-10-13T00:08:28.463707549+00:00 stderr F - name: cpu 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F - name: memory 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F type: LeastAllocated 2025-10-13T00:08:28.463707549+00:00 stderr F name: NodeResourcesFit 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F defaultingType: System 2025-10-13T00:08:28.463707549+00:00 stderr F kind: PodTopologySpreadArgs 2025-10-13T00:08:28.463707549+00:00 stderr F name: PodTopologySpread 2025-10-13T00:08:28.463707549+00:00 stderr F - args: 2025-10-13T00:08:28.463707549+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:08:28.463707549+00:00 stderr F bindTimeoutSeconds: 600 2025-10-13T00:08:28.463707549+00:00 stderr F kind: VolumeBindingArgs 2025-10-13T00:08:28.463707549+00:00 stderr F name: VolumeBinding 2025-10-13T00:08:28.463707549+00:00 stderr F plugins: 2025-10-13T00:08:28.463707549+00:00 stderr F bind: {} 2025-10-13T00:08:28.463707549+00:00 stderr F filter: {} 2025-10-13T00:08:28.463707549+00:00 stderr F multiPoint: 2025-10-13T00:08:28.463707549+00:00 stderr F enabled: 2025-10-13T00:08:28.463707549+00:00 stderr F - name: PrioritySort 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodeUnschedulable 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodeName 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: TaintToleration 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 3 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodeAffinity 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 2 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodePorts 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodeResourcesFit 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F - name: VolumeRestrictions 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: EBSLimits 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: GCEPDLimits 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodeVolumeLimits 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: AzureDiskLimits 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: VolumeBinding 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - 
name: VolumeZone 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: PodTopologySpread 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 2 2025-10-13T00:08:28.463707549+00:00 stderr F - name: InterPodAffinity 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 2 2025-10-13T00:08:28.463707549+00:00 stderr F - name: DefaultPreemption 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: NodeResourcesBalancedAllocation 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F - name: ImageLocality 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 1 2025-10-13T00:08:28.463707549+00:00 stderr F - name: DefaultBinder 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F - name: SchedulingGates 2025-10-13T00:08:28.463707549+00:00 stderr F weight: 0 2025-10-13T00:08:28.463707549+00:00 stderr F permit: {} 2025-10-13T00:08:28.463707549+00:00 stderr F postBind: {} 2025-10-13T00:08:28.463707549+00:00 stderr F postFilter: {} 2025-10-13T00:08:28.463707549+00:00 stderr F preBind: {} 2025-10-13T00:08:28.463707549+00:00 stderr F preEnqueue: {} 2025-10-13T00:08:28.463707549+00:00 stderr F preFilter: {} 2025-10-13T00:08:28.463707549+00:00 stderr F preScore: {} 2025-10-13T00:08:28.463707549+00:00 stderr F queueSort: {} 2025-10-13T00:08:28.463707549+00:00 stderr F reserve: {} 2025-10-13T00:08:28.463707549+00:00 stderr F score: {} 2025-10-13T00:08:28.463707549+00:00 stderr F schedulerName: default-scheduler 2025-10-13T00:08:28.463707549+00:00 stderr F > 2025-10-13T00:08:28.464252446+00:00 stderr F I1013 00:08:28.464235 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3" 2025-10-13T00:08:28.464293067+00:00 stderr F I1013 00:08:28.464280 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-10-13T00:08:28.466350589+00:00 stderr F I1013 00:08:28.466296 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:08:28.466788053+00:00 stderr F I1013 00:08:28.466764 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:08:28.466992479+00:00 stderr F I1013 00:08:28.466964 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-10-13 00:08:28.466931407 +0000 UTC))" 2025-10-13T00:08:28.467326409+00:00 stderr F I1013 00:08:28.467294 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314108\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314108\" (2025-10-12 23:08:28 +0000 UTC to 2026-10-12 23:08:28 +0000 UTC (now=2025-10-13 00:08:28.467258917 +0000 UTC))" 2025-10-13T00:08:28.467358360+00:00 stderr F I1013 00:08:28.467338 1 secure_serving.go:213] 
Serving securely on [::]:10259 2025-10-13T00:08:28.467506165+00:00 stderr F I1013 00:08:28.467460 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:08:28.473227389+00:00 stderr F I1013 00:08:28.473129 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:08:28.475702574+00:00 stderr F W1013 00:08:28.475355 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F E1013 00:08:28.475470 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F W1013 00:08:28.475472 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F E1013 00:08:28.475577 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F W1013 00:08:28.475518 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F W1013 00:08:28.475575 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F E1013 00:08:28.475601 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F E1013 00:08:28.475633 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F W1013 00:08:28.475655 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475702574+00:00 stderr F E1013 00:08:28.475690 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475757926+00:00 stderr F W1013 00:08:28.475646 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475790977+00:00 stderr F W1013 00:08:28.475627 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475977533+00:00 stderr F E1013 00:08:28.475908 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.475977533+00:00 stderr F E1013 00:08:28.475762 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476017974+00:00 stderr F W1013 00:08:28.475727 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476120847+00:00 stderr F E1013 00:08:28.476038 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476509779+00:00 stderr F W1013 00:08:28.476442 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476509779+00:00 stderr F E1013 00:08:28.476481 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get 
"https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476509779+00:00 stderr F W1013 00:08:28.476460 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476532799+00:00 stderr F W1013 00:08:28.476478 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476549400+00:00 stderr F E1013 00:08:28.476530 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.476549400+00:00 stderr F E1013 00:08:28.476539 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478563341+00:00 stderr F W1013 00:08:28.478396 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478563341+00:00 stderr F E1013 00:08:28.478491 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478563341+00:00 stderr F W1013 00:08:28.478421 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478563341+00:00 stderr F W1013 00:08:28.478507 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478646704+00:00 stderr F E1013 00:08:28.478566 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478646704+00:00 stderr F E1013 00:08:28.478566 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list 
*v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478720266+00:00 stderr F W1013 00:08:28.478651 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:28.478720266+00:00 stderr F E1013 00:08:28.478703 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.296115303+00:00 stderr F W1013 00:08:29.295437 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.296115303+00:00 stderr F E1013 00:08:29.296077 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.340626118+00:00 stderr F W1013 00:08:29.340476 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.340626118+00:00 stderr F E1013 00:08:29.340577 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.456061063+00:00 stderr F W1013 00:08:29.455961 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.456061063+00:00 stderr F E1013 00:08:29.456032 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.682693263+00:00 stderr F W1013 00:08:29.682543 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.682693263+00:00 stderr F E1013 00:08:29.682615 1 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.695982698+00:00 stderr F W1013 00:08:29.695910 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.695982698+00:00 stderr F E1013 00:08:29.695963 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.700284059+00:00 stderr F W1013 00:08:29.700243 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.700301539+00:00 stderr F E1013 00:08:29.700292 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.738081429+00:00 stderr F W1013 00:08:29.737952 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.738081429+00:00 stderr F E1013 00:08:29.738011 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.797366584+00:00 stderr F W1013 00:08:29.797259 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.797366584+00:00 stderr F E1013 00:08:29.797337 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.833927297+00:00 stderr F W1013 00:08:29.833828 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.833927297+00:00 stderr F E1013 00:08:29.833908 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.950636031+00:00 stderr F W1013 00:08:29.950507 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.950636031+00:00 stderr F E1013 00:08:29.950593 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.965430381+00:00 stderr F W1013 00:08:29.965218 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:29.965430381+00:00 stderr F E1013 00:08:29.965275 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.015679251+00:00 stderr F W1013 00:08:30.015591 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.015679251+00:00 stderr F E1013 00:08:30.015654 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.019685843+00:00 stderr F W1013 00:08:30.019634 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.019685843+00:00 stderr F E1013 00:08:30.019668 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.049457200+00:00 stderr F W1013 00:08:30.049359 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.049457200+00:00 stderr F E1013 00:08:30.049425 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.078091971+00:00 stderr F W1013 00:08:30.077985 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:30.078091971+00:00 stderr F E1013 00:08:30.078032 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.677674083+00:00 stderr F W1013 00:08:31.677553 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.677674083+00:00 stderr F E1013 00:08:31.677625 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.724499499+00:00 stderr F W1013 00:08:31.724385 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.724499499+00:00 stderr F E1013 00:08:31.724474 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.782625259+00:00 stderr F W1013 00:08:31.782528 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.782625259+00:00 stderr F E1013 00:08:31.782599 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.799424860+00:00 stderr F W1013 
00:08:31.799349 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.799424860+00:00 stderr F E1013 00:08:31.799405 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.855481427+00:00 stderr F W1013 00:08:31.855350 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.855481427+00:00 stderr F E1013 00:08:31.855444 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.880085156+00:00 stderr F W1013 00:08:31.879955 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:31.880085156+00:00 stderr F E1013 00:08:31.880021 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.048335698+00:00 stderr F W1013 00:08:32.048222 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.048335698+00:00 stderr F E1013 00:08:32.048300 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.206974858+00:00 stderr F W1013 00:08:32.206835 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.206974858+00:00 stderr F E1013 00:08:32.206900 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.423164270+00:00 stderr F W1013 00:08:32.423020 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.423164270+00:00 stderr F E1013 00:08:32.423099 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.452444562+00:00 stderr F W1013 00:08:32.452335 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.452444562+00:00 stderr F E1013 00:08:32.452429 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.624875962+00:00 stderr F W1013 00:08:32.624729 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.624875962+00:00 stderr F E1013 00:08:32.624803 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.643052415+00:00 stderr F W1013 00:08:32.642273 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.643052415+00:00 stderr F E1013 00:08:32.643011 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.647157430+00:00 stderr F W1013 00:08:32.647054 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.647157430+00:00 stderr F E1013 00:08:32.647103 1 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.853571185+00:00 stderr F W1013 00:08:32.853435 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.853571185+00:00 stderr F E1013 00:08:32.853518 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.909712534+00:00 stderr F W1013 00:08:32.909562 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:32.909712534+00:00 stderr F E1013 00:08:32.909631 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:35.742473950+00:00 stderr F W1013 00:08:35.742333 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:35.742473950+00:00 stderr F E1013 00:08:35.742408 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:35.844766685+00:00 stderr F W1013 00:08:35.844620 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:35.844766685+00:00 stderr F E1013 00:08:35.844699 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.020440664+00:00 stderr F W1013 00:08:36.020323 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.020440664+00:00 stderr F E1013 00:08:36.020389 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.077123199+00:00 stderr F W1013 00:08:36.076988 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.077123199+00:00 stderr F E1013 00:08:36.077051 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.114929811+00:00 stderr F W1013 00:08:36.114798 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.114929811+00:00 stderr F E1013 00:08:36.114843 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.179821136+00:00 stderr F W1013 00:08:36.179688 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.179821136+00:00 stderr F E1013 00:08:36.179749 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.445577778+00:00 stderr F W1013 00:08:36.445415 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.445577778+00:00 stderr F E1013 00:08:36.445510 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2025-10-13T00:08:36.646757513+00:00 stderr F W1013 00:08:36.646595 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.646757513+00:00 stderr F E1013 00:08:36.646670 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.717705673+00:00 stderr F W1013 00:08:36.717604 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.717705673+00:00 stderr F E1013 00:08:36.717658 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.786990963+00:00 stderr F W1013 00:08:36.786886 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:36.786990963+00:00 stderr F E1013 00:08:36.786945 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.127955384+00:00 stderr F W1013 00:08:37.127768 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.127955384+00:00 stderr F E1013 00:08:37.127850 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.321840147+00:00 stderr F W1013 00:08:37.321692 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.321840147+00:00 stderr F E1013 00:08:37.321752 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host 2025-10-13T00:08:37.446105450+00:00 stderr F W1013 00:08:37.445977 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.446105450+00:00 stderr F E1013 00:08:37.446042 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.608530566+00:00 stderr F W1013 00:08:37.608377 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:37.608530566+00:00 stderr F E1013 00:08:37.608444 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:38.606177200+00:00 stderr F W1013 00:08:38.606073 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:38.606177200+00:00 stderr F E1013 00:08:38.606135 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:42.929973957+00:00 stderr F W1013 00:08:42.929889 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:42.929973957+00:00 stderr F E1013 00:08:42.929949 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:44.326501822+00:00 stderr F W1013 00:08:44.326378 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:44.326501822+00:00 stderr F E1013 00:08:44.326464 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:44.663490204+00:00 stderr F W1013 00:08:44.663396 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:44.663490204+00:00 stderr F E1013 00:08:44.663474 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:44.844028512+00:00 stderr F W1013 00:08:44.843887 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:44.844028512+00:00 stderr F E1013 00:08:44.843961 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.221377396+00:00 stderr F W1013 00:08:45.221221 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.221377396+00:00 stderr F E1013 00:08:45.221300 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.521312661+00:00 stderr F W1013 00:08:45.521106 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.521312661+00:00 stderr F E1013 00:08:45.521235 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.725085827+00:00 stderr F W1013 00:08:45.724958 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:45.725085827+00:00 stderr F E1013 00:08:45.725032 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to 
list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:46.788891903+00:00 stderr F W1013 00:08:46.788775 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:46.788891903+00:00 stderr F E1013 00:08:46.788861 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:47.225338828+00:00 stderr F W1013 00:08:47.224486 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:47.225338828+00:00 stderr F E1013 00:08:47.224574 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:47.330079804+00:00 stderr F W1013 00:08:47.329921 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:47.330079804+00:00 stderr F E1013 00:08:47.330001 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:47.586440394+00:00 stderr F W1013 00:08:47.586265 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:47.586440394+00:00 stderr F E1013 00:08:47.586324 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:48.583503139+00:00 stderr F W1013 00:08:48.583373 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:48.583503139+00:00 stderr F E1013 00:08:48.583425 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:48.686923936+00:00 stderr F W1013 00:08:48.686834 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:48.687073230+00:00 stderr F E1013 00:08:48.687046 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:48.766728774+00:00 stderr F W1013 00:08:48.766646 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:48.766844177+00:00 stderr F E1013 00:08:48.766825 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:49.786709599+00:00 stderr F W1013 00:08:49.786628 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:49.786709599+00:00 stderr F E1013 00:08:49.786679 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:57.913921725+00:00 stderr F W1013 00:08:57.913784 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:57.913921725+00:00 stderr F E1013 00:08:57.913886 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:58.071905853+00:00 stderr F W1013 00:08:58.071831 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: 
Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:58.072066678+00:00 stderr F E1013 00:08:58.072048 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:59.617069978+00:00 stderr F W1013 00:08:59.616925 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:08:59.617069978+00:00 stderr F E1013 00:08:59.617003 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:00.044634357+00:00 stderr F W1013 00:09:00.044495 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:00.044634357+00:00 stderr F E1013 00:09:00.044568 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:00.960743200+00:00 stderr F W1013 00:09:00.960490 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:00.960743200+00:00 stderr F E1013 00:09:00.960688 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:04.064714683+00:00 stderr F W1013 00:09:04.064620 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:04.064714683+00:00 stderr F E1013 00:09:04.064692 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:04.372036259+00:00 stderr F W1013 00:09:04.371876 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:04.372036259+00:00 stderr F E1013 00:09:04.371935 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:04.898815505+00:00 stderr F W1013 00:09:04.898686 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:04.898815505+00:00 stderr F E1013 00:09:04.898742 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:08.445220003+00:00 stderr F W1013 00:09:08.445085 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:08.445220003+00:00 stderr F E1013 00:09:08.445167 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:08.752091078+00:00 stderr F W1013 00:09:08.751972 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:08.752091078+00:00 stderr F E1013 00:09:08.752028 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:09.182622613+00:00 stderr F W1013 00:09:09.182517 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:09.182622613+00:00 stderr F E1013 00:09:09.182581 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:10.167843047+00:00 stderr F W1013 00:09:10.167731 1 reflector.go:539] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:10.167843047+00:00 stderr F E1013 00:09:10.167788 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:11.797069281+00:00 stderr F W1013 00:09:11.796924 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:11.797130412+00:00 stderr F E1013 00:09:11.797015 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:12.438032874+00:00 stderr F W1013 00:09:12.437882 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:12.438032874+00:00 stderr F E1013 00:09:12.437964 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:12.626236362+00:00 stderr F W1013 00:09:12.626105 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:12.626306874+00:00 stderr F E1013 00:09:12.626246 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:25.884682264+00:00 stderr F W1013 00:09:25.884542 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:25.884682264+00:00 stderr F E1013 00:09:25.884603 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:33.210410130+00:00 stderr F W1013 00:09:33.210300 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:33.210410130+00:00 stderr F E1013 00:09:33.210368 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:38.182913426+00:00 stderr F W1013 00:09:38.182772 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:38.182913426+00:00 stderr F E1013 00:09:38.182835 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:40.215309710+00:00 stderr F W1013 00:09:40.215158 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:40.215309710+00:00 stderr F E1013 00:09:40.215285 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:41.923416430+00:00 stderr F W1013 00:09:41.923333 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:41.923416430+00:00 stderr F E1013 00:09:41.923385 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:42.912010822+00:00 stderr F W1013 00:09:42.911839 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:42.912010822+00:00 stderr F E1013 00:09:42.911960 1 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:45.029176114+00:00 stderr F W1013 00:09:45.029014 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:45.029176114+00:00 stderr F E1013 00:09:45.029097 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:45.597633551+00:00 stderr F W1013 00:09:45.597472 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:45.597633551+00:00 stderr F E1013 00:09:45.597555 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:49.384807087+00:00 stderr F W1013 00:09:49.384607 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:49.384807087+00:00 stderr F E1013 00:09:49.384715 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:50.504790251+00:00 stderr F W1013 00:09:50.504597 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:50.504790251+00:00 stderr F E1013 00:09:50.504698 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:52.869856212+00:00 stderr F W1013 00:09:52.869663 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:52.869856212+00:00 
stderr F E1013 00:09:52.869773 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:53.839219777+00:00 stderr F W1013 00:09:53.839030 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:53.839219777+00:00 stderr F E1013 00:09:53.839131 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:56.379730879+00:00 stderr F W1013 00:09:56.378891 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:56.379730879+00:00 stderr F E1013 00:09:56.379707 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:57.235460698+00:00 stderr F W1013 00:09:57.235268 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:09:57.235460698+00:00 stderr F E1013 00:09:57.235349 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:01.825027448+00:00 stderr F W1013 00:10:01.824794 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:01.825027448+00:00 stderr F E1013 00:10:01.824929 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:02.805112573+00:00 stderr F W1013 00:10:02.804885 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list 
*v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:02.805112573+00:00 stderr F E1013 00:10:02.804985 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:11.172932730+00:00 stderr F W1013 00:10:11.172850 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:11.173027723+00:00 stderr F E1013 00:10:11.173012 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:13.442214765+00:00 stderr F W1013 00:10:13.442111 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:13.442327168+00:00 stderr F E1013 00:10:13.442173 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:16.687021952+00:00 stderr F W1013 00:10:16.686874 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:16.687021952+00:00 stderr F E1013 00:10:16.686969 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:22.841128856+00:00 stderr F W1013 00:10:22.841042 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:22.841128856+00:00 stderr F E1013 00:10:22.841096 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:24.941091898+00:00 stderr F W1013 
00:10:24.940952 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:24.941091898+00:00 stderr F E1013 00:10:24.941029 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:25.784876201+00:00 stderr F W1013 00:10:25.784715 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:25.784876201+00:00 stderr F E1013 00:10:25.784785 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:25.807250249+00:00 stderr F W1013 00:10:25.807079 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:25.807250249+00:00 stderr F E1013 00:10:25.807122 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:27.584509244+00:00 stderr F W1013 00:10:27.584378 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:27.584509244+00:00 stderr F E1013 00:10:27.584477 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:27.974693701+00:00 stderr F W1013 00:10:27.974596 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:27.974693701+00:00 stderr F E1013 00:10:27.974682 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:28.634303081+00:00 stderr F W1013 00:10:28.634178 1 
reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:28.634303081+00:00 stderr F E1013 00:10:28.634257 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:37.070505636+00:00 stderr F W1013 00:10:37.070379 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:37.070505636+00:00 stderr F E1013 00:10:37.070460 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:40.400697360+00:00 stderr F W1013 00:10:40.400523 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:40.400697360+00:00 stderr F E1013 00:10:40.400619 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:44.600492926+00:00 stderr F W1013 00:10:44.600383 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:44.600492926+00:00 stderr F E1013 00:10:44.600434 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:47.376889080+00:00 stderr F W1013 00:10:47.376737 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:47.376889080+00:00 stderr F E1013 00:10:47.376807 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: 
failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:47.444812143+00:00 stderr F W1013 00:10:47.444674 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:47.444812143+00:00 stderr F E1013 00:10:47.444747 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:55.704406208+00:00 stderr F W1013 00:10:55.704257 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:55.704406208+00:00 stderr F E1013 00:10:55.704327 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:56.744465439+00:00 stderr F W1013 00:10:56.743489 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:10:56.744465439+00:00 stderr F E1013 00:10:56.744395 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:00.649061991+00:00 stderr F W1013 00:11:00.648898 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:00.649061991+00:00 stderr F E1013 00:11:00.648980 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:02.933273438+00:00 stderr F W1013 00:11:02.933066 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: 
no such host 2025-10-13T00:11:02.933273438+00:00 stderr F E1013 00:11:02.933134 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:06.612944059+00:00 stderr F W1013 00:11:06.612173 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:06.612944059+00:00 stderr F E1013 00:11:06.612295 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:09.471570307+00:00 stderr F W1013 00:11:09.471518 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:09.471648528+00:00 stderr F E1013 00:11:09.471636 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:11.237386439+00:00 stderr F W1013 00:11:11.237319 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:11.237554023+00:00 stderr F E1013 00:11:11.237533 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:15.850667934+00:00 stderr F W1013 00:11:15.850593 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:15.850667934+00:00 stderr F E1013 00:11:15.850646 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:17.817781716+00:00 stderr F W1013 00:11:17.817697 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: 
no such host 2025-10-13T00:11:17.817781716+00:00 stderr F E1013 00:11:17.817763 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:19.150045247+00:00 stderr F W1013 00:11:19.149954 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:19.150045247+00:00 stderr F E1013 00:11:19.150009 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:22.588360965+00:00 stderr F W1013 00:11:22.588255 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:22.588360965+00:00 stderr F E1013 00:11:22.588305 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:24.068168597+00:00 stderr F W1013 00:11:24.068035 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:24.068168597+00:00 stderr F E1013 00:11:24.068117 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:26.894895561+00:00 stderr F W1013 00:11:26.894785 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:26.894895561+00:00 stderr F E1013 00:11:26.894846 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-10-13T00:11:29.650654238+00:00 stderr F E1013 00:11:29.650577 1 server.go:224] "waiting for handlers to sync" err="context canceled" 
2025-10-13T00:11:29.650654238+00:00 stderr F I1013 00:11:29.650636 1 secure_serving.go:258] Stopped listening on [::]:10259 2025-10-13T00:11:29.650760851+00:00 stderr F I1013 00:11:29.650703 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:11:29.650760851+00:00 stderr F E1013 00:11:29.650735 1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:11:29.650760851+00:00 stderr F I1013 00:11:29.650748 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:11:29.651039939+00:00 stderr F I1013 00:11:29.650577 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-10-13T00:11:29.654554180+00:00 stderr F I1013 00:11:29.654405 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 2025-10-13T00:11:29.654554180+00:00 stderr F I1013 00:11:29.654455 1 server.go:248] "Requested to terminate, exiting" home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log0000644000175000017500000022303215073043232033026 0ustar zuulzuul2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.347806 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348123 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348130 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348136 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348142 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348148 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348154 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348160 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348165 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348170 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348175 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348180 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-10-13T00:12:31.348194843+00:00 stderr F W1013 00:12:31.348185 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348191 1 feature_gate.go:227] unrecognized feature gate: 
MachineConfigNodes 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348197 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348203 1 feature_gate.go:227] unrecognized feature gate: Example 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348208 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348214 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348219 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348225 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348231 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348236 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348242 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348247 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348253 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348258 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348264 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348270 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348274 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348279 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348284 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348289 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348303 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348308 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348313 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348317 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348322 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348343 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348348 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348354 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348359 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348364 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348369 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348374 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348379 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348384 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348389 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348393 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348398 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348404 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348409 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348413 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348418 1 feature_gate.go:227] unrecognized 
feature gate: ClusterAPIInstallOpenStack 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348425 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348431 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348436 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348441 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348445 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348450 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-10-13T00:12:31.348606963+00:00 stderr F W1013 00:12:31.348455 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-10-13T00:12:31.352559465+00:00 stderr F I1013 00:12:31.352453 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-10-13T00:12:31.352559465+00:00 stderr F I1013 00:12:31.352519 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-10-13T00:12:31.352559465+00:00 stderr F I1013 00:12:31.352526 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-10-13T00:12:31.352559465+00:00 stderr F I1013 00:12:31.352532 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-10-13T00:12:31.352559465+00:00 stderr F I1013 00:12:31.352538 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-10-13T00:12:31.352559465+00:00 stderr F I1013 00:12:31.352545 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352548 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352559 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352565 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352568 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352572 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352578 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352582 1 flags.go:64] FLAG: --client-ca-file="" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352586 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352590 1 flags.go:64] FLAG: --contention-profiling="true" 2025-10-13T00:12:31.352621918+00:00 stderr F I1013 00:12:31.352594 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-10-13T00:12:31.352648910+00:00 stderr F I1013 00:12:31.352599 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-10-13T00:12:31.352648910+00:00 stderr F I1013 00:12:31.352637 1 flags.go:64] FLAG: --help="false" 2025-10-13T00:12:31.352668571+00:00 stderr F I1013 00:12:31.352645 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-10-13T00:12:31.352668571+00:00 stderr F I1013 00:12:31.352651 1 flags.go:64] FLAG: --kube-api-burst="100" 2025-10-13T00:12:31.352668571+00:00 stderr F I1013 00:12:31.352657 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-10-13T00:12:31.352668571+00:00 stderr F I1013 00:12:31.352662 1 flags.go:64] FLAG: --kube-api-qps="50" 2025-10-13T00:12:31.352690282+00:00 stderr F I1013 00:12:31.352668 1 flags.go:64] FLAG: --kubeconfig="" 2025-10-13T00:12:31.352690282+00:00 stderr F I1013 00:12:31.352672 1 flags.go:64] FLAG: --leader-elect="true" 2025-10-13T00:12:31.352690282+00:00 stderr F I1013 00:12:31.352675 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-10-13T00:12:31.352690282+00:00 stderr F I1013 00:12:31.352679 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2025-10-13T00:12:31.352690282+00:00 stderr F I1013 00:12:31.352682 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-10-13T00:12:31.352690282+00:00 stderr F I1013 00:12:31.352685 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352689 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352693 1 flags.go:64] FLAG: --leader-elect-retry-period="2s" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352696 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352699 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352709 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352712 1 flags.go:64] FLAG: --logging-format="text" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352715 1 flags.go:64] FLAG: --master="" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352719 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352722 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-10-13T00:12:31.352731534+00:00 stderr F I1013 00:12:31.352725 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2025-10-13T00:12:31.352757625+00:00 stderr F I1013 00:12:31.352728 1 flags.go:64] FLAG: --profiling="true" 2025-10-13T00:12:31.352757625+00:00 stderr F I1013 00:12:31.352733 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-10-13T00:12:31.352757625+00:00 stderr F I1013 00:12:31.352738 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-10-13T00:12:31.352757625+00:00 stderr F I1013 00:12:31.352741 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-10-13T00:12:31.352757625+00:00 stderr F I1013 00:12:31.352747 1 flags.go:64] FLAG: 
--requestheader-group-headers="[x-remote-group]" 2025-10-13T00:12:31.352757625+00:00 stderr F I1013 00:12:31.352751 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-10-13T00:12:31.352780686+00:00 stderr F I1013 00:12:31.352757 1 flags.go:64] FLAG: --secure-port="10259" 2025-10-13T00:12:31.352780686+00:00 stderr F I1013 00:12:31.352761 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-10-13T00:12:31.352780686+00:00 stderr F I1013 00:12:31.352764 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-10-13T00:12:31.352780686+00:00 stderr F I1013 00:12:31.352768 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-10-13T00:12:31.352780686+00:00 stderr F I1013 00:12:31.352776 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-10-13T00:12:31.352802117+00:00 stderr F I1013 00:12:31.352780 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:12:31.352802117+00:00 stderr F I1013 00:12:31.352785 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-10-13T00:12:31.352802117+00:00 stderr F I1013 00:12:31.352794 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-10-13T00:12:31.352820148+00:00 stderr F I1013 00:12:31.352797 1 flags.go:64] FLAG: --v="2" 2025-10-13T00:12:31.352820148+00:00 stderr F I1013 00:12:31.352804 1 flags.go:64] FLAG: --version="false" 2025-10-13T00:12:31.352820148+00:00 stderr F I1013 00:12:31.352810 1 flags.go:64] FLAG: --vmodule="" 2025-10-13T00:12:31.352820148+00:00 stderr F I1013 00:12:31.352815 1 flags.go:64] FLAG: --write-config-to="" 2025-10-13T00:12:31.361526101+00:00 stderr F I1013 00:12:31.361437 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:12:31.881184244+00:00 stderr F W1013 00:12:31.881091 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.881184244+00:00 stderr F W1013 00:12:31.881132 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. 
2025-10-13T00:12:31.881184244+00:00 stderr F W1013 00:12:31.881142 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false 2025-10-13T00:12:31.907947554+00:00 stderr F I1013 00:12:31.907853 1 configfile.go:94] "Using component config" config=< 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F clientConnection: 2025-10-13T00:12:31.907947554+00:00 stderr F acceptContentTypes: "" 2025-10-13T00:12:31.907947554+00:00 stderr F burst: 100 2025-10-13T00:12:31.907947554+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2025-10-13T00:12:31.907947554+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2025-10-13T00:12:31.907947554+00:00 stderr F qps: 50 2025-10-13T00:12:31.907947554+00:00 stderr F enableContentionProfiling: false 2025-10-13T00:12:31.907947554+00:00 stderr F enableProfiling: false 2025-10-13T00:12:31.907947554+00:00 stderr F kind: KubeSchedulerConfiguration 2025-10-13T00:12:31.907947554+00:00 stderr F leaderElection: 2025-10-13T00:12:31.907947554+00:00 stderr F leaderElect: true 2025-10-13T00:12:31.907947554+00:00 stderr F leaseDuration: 2m17s 2025-10-13T00:12:31.907947554+00:00 stderr F renewDeadline: 1m47s 2025-10-13T00:12:31.907947554+00:00 stderr F resourceLock: leases 2025-10-13T00:12:31.907947554+00:00 stderr F resourceName: kube-scheduler 2025-10-13T00:12:31.907947554+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2025-10-13T00:12:31.907947554+00:00 stderr F retryPeriod: 26s 2025-10-13T00:12:31.907947554+00:00 stderr F parallelism: 16 2025-10-13T00:12:31.907947554+00:00 stderr F percentageOfNodesToScore: 0 2025-10-13T00:12:31.907947554+00:00 stderr F podInitialBackoffSeconds: 1 2025-10-13T00:12:31.907947554+00:00 stderr F podMaxBackoffSeconds: 10 2025-10-13T00:12:31.907947554+00:00 stderr F profiles: 2025-10-13T00:12:31.907947554+00:00 stderr F - pluginConfig: 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F kind: DefaultPreemptionArgs 2025-10-13T00:12:31.907947554+00:00 stderr F minCandidateNodesAbsolute: 100 2025-10-13T00:12:31.907947554+00:00 stderr F minCandidateNodesPercentage: 10 2025-10-13T00:12:31.907947554+00:00 stderr F name: DefaultPreemption 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F hardPodAffinityWeight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F ignorePreferredTermsOfExistingPods: false 2025-10-13T00:12:31.907947554+00:00 stderr F kind: InterPodAffinityArgs 2025-10-13T00:12:31.907947554+00:00 stderr F name: InterPodAffinity 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F kind: NodeAffinityArgs 2025-10-13T00:12:31.907947554+00:00 stderr F name: NodeAffinity 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs 2025-10-13T00:12:31.907947554+00:00 stderr F resources: 2025-10-13T00:12:31.907947554+00:00 stderr F - name: cpu 
2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F - name: memory 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F name: NodeResourcesBalancedAllocation 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F kind: NodeResourcesFitArgs 2025-10-13T00:12:31.907947554+00:00 stderr F scoringStrategy: 2025-10-13T00:12:31.907947554+00:00 stderr F resources: 2025-10-13T00:12:31.907947554+00:00 stderr F - name: cpu 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F - name: memory 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F type: LeastAllocated 2025-10-13T00:12:31.907947554+00:00 stderr F name: NodeResourcesFit 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F defaultingType: System 2025-10-13T00:12:31.907947554+00:00 stderr F kind: PodTopologySpreadArgs 2025-10-13T00:12:31.907947554+00:00 stderr F name: PodTopologySpread 2025-10-13T00:12:31.907947554+00:00 stderr F - args: 2025-10-13T00:12:31.907947554+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-10-13T00:12:31.907947554+00:00 stderr F bindTimeoutSeconds: 600 2025-10-13T00:12:31.907947554+00:00 stderr F kind: VolumeBindingArgs 2025-10-13T00:12:31.907947554+00:00 stderr F name: VolumeBinding 2025-10-13T00:12:31.907947554+00:00 stderr F plugins: 2025-10-13T00:12:31.907947554+00:00 stderr F bind: {} 2025-10-13T00:12:31.907947554+00:00 stderr F filter: {} 2025-10-13T00:12:31.907947554+00:00 stderr F multiPoint: 2025-10-13T00:12:31.907947554+00:00 stderr F enabled: 2025-10-13T00:12:31.907947554+00:00 stderr F - name: PrioritySort 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodeUnschedulable 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodeName 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: TaintToleration 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 3 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodeAffinity 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 2 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodePorts 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodeResourcesFit 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F - name: VolumeRestrictions 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: EBSLimits 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: GCEPDLimits 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodeVolumeLimits 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: AzureDiskLimits 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: VolumeBinding 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - 
name: VolumeZone 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: PodTopologySpread 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 2 2025-10-13T00:12:31.907947554+00:00 stderr F - name: InterPodAffinity 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 2 2025-10-13T00:12:31.907947554+00:00 stderr F - name: DefaultPreemption 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: NodeResourcesBalancedAllocation 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F - name: ImageLocality 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 1 2025-10-13T00:12:31.907947554+00:00 stderr F - name: DefaultBinder 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F - name: SchedulingGates 2025-10-13T00:12:31.907947554+00:00 stderr F weight: 0 2025-10-13T00:12:31.907947554+00:00 stderr F permit: {} 2025-10-13T00:12:31.907947554+00:00 stderr F postBind: {} 2025-10-13T00:12:31.907947554+00:00 stderr F postFilter: {} 2025-10-13T00:12:31.907947554+00:00 stderr F preBind: {} 2025-10-13T00:12:31.907947554+00:00 stderr F preEnqueue: {} 2025-10-13T00:12:31.907947554+00:00 stderr F preFilter: {} 2025-10-13T00:12:31.907947554+00:00 stderr F preScore: {} 2025-10-13T00:12:31.907947554+00:00 stderr F queueSort: {} 2025-10-13T00:12:31.907947554+00:00 stderr F reserve: {} 2025-10-13T00:12:31.907947554+00:00 stderr F score: {} 2025-10-13T00:12:31.907947554+00:00 stderr F schedulerName: default-scheduler 2025-10-13T00:12:31.907947554+00:00 stderr F > 2025-10-13T00:12:31.908185735+00:00 stderr F I1013 00:12:31.908155 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3" 2025-10-13T00:12:31.908185735+00:00 stderr F I1013 00:12:31.908170 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-10-13T00:12:31.910640805+00:00 stderr F I1013 00:12:31.910563 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:12:31.910991392+00:00 stderr F I1013 00:12:31.910937 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-10-13T00:12:31.910991392+00:00 stderr F I1013 00:12:31.910966 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-10-13 00:12:31.910935489 +0000 UTC))" 2025-10-13T00:12:31.911340669+00:00 stderr F I1013 00:12:31.911290 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314351\" (2025-10-12 23:12:31 +0000 UTC to 2026-10-12 23:12:31 +0000 UTC (now=2025-10-13 00:12:31.911252854 +0000 UTC))" 2025-10-13T00:12:31.911362390+00:00 stderr F I1013 00:12:31.911347 1 secure_serving.go:213] 
Serving securely on [::]:10259 2025-10-13T00:12:31.911590611+00:00 stderr F I1013 00:12:31.911391 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:12:31.913006970+00:00 stderr F I1013 00:12:31.912981 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:12:31.913321425+00:00 stderr F W1013 00:12:31.913244 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913321425+00:00 stderr F E1013 00:12:31.913309 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913791338+00:00 stderr F W1013 00:12:31.913713 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913791338+00:00 stderr F E1013 00:12:31.913772 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913878222+00:00 stderr F W1013 00:12:31.913839 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913890092+00:00 stderr F E1013 00:12:31.913878 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913996198+00:00 stderr F W1013 00:12:31.913951 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.913996198+00:00 stderr F E1013 00:12:31.913981 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914270701+00:00 stderr F W1013 00:12:31.914203 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 
38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914304773+00:00 stderr F E1013 00:12:31.914277 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914394487+00:00 stderr F W1013 00:12:31.914351 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914505982+00:00 stderr F E1013 00:12:31.914484 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914594477+00:00 stderr F W1013 00:12:31.914540 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914630478+00:00 stderr F E1013 00:12:31.914603 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914734993+00:00 stderr F W1013 00:12:31.914703 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.914773645+00:00 stderr F E1013 00:12:31.914763 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915007507+00:00 stderr F W1013 00:12:31.914949 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915021607+00:00 stderr F E1013 00:12:31.915005 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915092691+00:00 stderr F W1013 00:12:31.915047 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 
2025-10-13T00:12:31.915092691+00:00 stderr F E1013 00:12:31.915085 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915191916+00:00 stderr F W1013 00:12:31.915151 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915250778+00:00 stderr F E1013 00:12:31.915236 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915283210+00:00 stderr F W1013 00:12:31.915237 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915309171+00:00 stderr F E1013 00:12:31.915290 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915583495+00:00 stderr F W1013 00:12:31.915531 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915583495+00:00 stderr F E1013 00:12:31.915568 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915746333+00:00 stderr F W1013 00:12:31.915655 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.915783974+00:00 stderr F E1013 00:12:31.915761 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.916110180+00:00 stderr F W1013 00:12:31.915988 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:31.916148692+00:00 stderr F E1013 00:12:31.916115 1 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:12:42.721613618+00:00 stderr F W1013 00:12:42.721471 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.721814224+00:00 stderr F I1013 00:12:42.721787 1 trace.go:236] Trace[376417147]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.719) (total time: 10002ms): 2025-10-13T00:12:42.721814224+00:00 stderr F Trace[376417147]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.721) 2025-10-13T00:12:42.721814224+00:00 stderr F Trace[376417147]: [10.002166313s] [10.002166313s] END 2025-10-13T00:12:42.721952128+00:00 stderr F E1013 00:12:42.721905 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.732673574+00:00 stderr F W1013 00:12:42.732606 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.732714135+00:00 stderr F I1013 00:12:42.732669 1 trace.go:236] Trace[1819363671]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.731) (total time: 10001ms): 2025-10-13T00:12:42.732714135+00:00 stderr F Trace[1819363671]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.732) 2025-10-13T00:12:42.732714135+00:00 stderr F Trace[1819363671]: [10.00134263s] [10.00134263s] END 2025-10-13T00:12:42.732714135+00:00 stderr F E1013 00:12:42.732685 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.737111140+00:00 stderr F W1013 00:12:42.737040 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.737141121+00:00 stderr F I1013 00:12:42.737124 1 trace.go:236] Trace[1591323514]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.735) (total time: 10001ms): 2025-10-13T00:12:42.737141121+00:00 stderr F Trace[1591323514]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.737) 2025-10-13T00:12:42.737141121+00:00 stderr F Trace[1591323514]: [10.001144353s] [10.001144353s] END 2025-10-13T00:12:42.737159082+00:00 stderr F E1013 00:12:42.737145 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.778502690+00:00 stderr F W1013 00:12:42.778382 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.778563572+00:00 stderr F I1013 00:12:42.778498 1 trace.go:236] Trace[1535186050]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.776) (total time: 10001ms): 2025-10-13T00:12:42.778563572+00:00 stderr F Trace[1535186050]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.778) 2025-10-13T00:12:42.778563572+00:00 stderr F Trace[1535186050]: [10.001949967s] [10.001949967s] END 2025-10-13T00:12:42.778563572+00:00 stderr F E1013 00:12:42.778531 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.805755358+00:00 stderr F W1013 00:12:42.805651 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.805755358+00:00 stderr F I1013 00:12:42.805743 1 trace.go:236] Trace[2142188258]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.804) (total time: 10001ms): 2025-10-13T00:12:42.805755358+00:00 stderr F Trace[2142188258]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.805) 2025-10-13T00:12:42.805755358+00:00 stderr F Trace[2142188258]: [10.001178184s] [10.001178184s] END 2025-10-13T00:12:42.805819409+00:00 stderr F E1013 00:12:42.805757 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.811862612+00:00 stderr F W1013 00:12:42.811809 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 
2025-10-13T00:12:42.811892973+00:00 stderr F I1013 00:12:42.811861 1 trace.go:236] Trace[2132488289]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.810) (total time: 10001ms): 2025-10-13T00:12:42.811892973+00:00 stderr F Trace[2132488289]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.811) 2025-10-13T00:12:42.811892973+00:00 stderr F Trace[2132488289]: [10.001519368s] [10.001519368s] END 2025-10-13T00:12:42.811892973+00:00 stderr F E1013 00:12:42.811872 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.815384902+00:00 stderr F W1013 00:12:42.815317 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.815454664+00:00 stderr F I1013 00:12:42.815442 1 trace.go:236] Trace[442875554]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.813) (total time: 10001ms): 2025-10-13T00:12:42.815454664+00:00 stderr F Trace[442875554]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.815) 2025-10-13T00:12:42.815454664+00:00 stderr F Trace[442875554]: [10.001447259s] [10.001447259s] END 2025-10-13T00:12:42.815491675+00:00 stderr F E1013 00:12:42.815481 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.820395525+00:00 stderr F W1013 00:12:42.820336 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.820413196+00:00 stderr F I1013 00:12:42.820393 1 trace.go:236] Trace[920247850]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.819) (total time: 10001ms): 2025-10-13T00:12:42.820413196+00:00 stderr F Trace[920247850]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.820) 2025-10-13T00:12:42.820413196+00:00 stderr F Trace[920247850]: [10.00128484s] [10.00128484s] END 2025-10-13T00:12:42.820413196+00:00 stderr F E1013 00:12:42.820402 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.827790806+00:00 stderr F W1013 00:12:42.827732 1 reflector.go:539] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:42.827865918+00:00 stderr F I1013 00:12:42.827855 1 trace.go:236] Trace[1671135344]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:32.826) (total time: 10001ms): 2025-10-13T00:12:42.827865918+00:00 stderr F Trace[1671135344]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:42.827) 2025-10-13T00:12:42.827865918+00:00 stderr F Trace[1671135344]: [10.001289678s] [10.001289678s] END 2025-10-13T00:12:42.827901269+00:00 stderr F E1013 00:12:42.827890 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-10-13T00:12:44.455189891+00:00 stderr F I1013 00:12:44.455112 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:44.638636082+00:00 stderr F I1013 00:12:44.638561 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:44.926712697+00:00 stderr F I1013 00:12:44.926636 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:45.064207877+00:00 stderr F I1013 00:12:45.064062 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:45.167731119+00:00 stderr F I1013 00:12:45.167620 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:45.433126927+00:00 stderr F I1013 00:12:45.432978 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:45.633697016+00:00 stderr F I1013 00:12:45.633596 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:45.636100695+00:00 stderr F I1013 00:12:45.636033 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:45.714034177+00:00 stderr F I1013 00:12:45.713922 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:12:45.715309384+00:00 stderr F I1013 00:12:45.715259 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:12:45.71518848 +0000 UTC))" 2025-10-13T00:12:45.715477298+00:00 stderr F I1013 00:12:45.715451 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:12:45.715405746 +0000 UTC))" 2025-10-13T00:12:45.715582781+00:00 stderr F I1013 00:12:45.715552 1 
tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:12:45.715515449 +0000 UTC))" 2025-10-13T00:12:45.715686444+00:00 stderr F I1013 00:12:45.715658 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:12:45.715621072 +0000 UTC))" 2025-10-13T00:12:45.715771017+00:00 stderr F I1013 00:12:45.715749 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:45.715721125 +0000 UTC))" 2025-10-13T00:12:45.715847819+00:00 stderr F I1013 00:12:45.715826 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:45.715800878 +0000 UTC))" 2025-10-13T00:12:45.715936851+00:00 stderr F I1013 00:12:45.715909 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:45.71587623 +0000 UTC))" 2025-10-13T00:12:45.716020624+00:00 stderr F I1013 00:12:45.715999 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:12:45.715971242 +0000 UTC))" 2025-10-13T00:12:45.716096716+00:00 stderr F I1013 00:12:45.716076 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:12:45.716050075 +0000 UTC))" 2025-10-13T00:12:45.716172068+00:00 stderr F I1013 00:12:45.716151 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:12:45.716127877 +0000 UTC))" 2025-10-13T00:12:45.716962381+00:00 stderr F I1013 00:12:45.716911 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 
certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-10-13 00:12:45.716861328 +0000 UTC))" 2025-10-13T00:12:45.717688732+00:00 stderr F I1013 00:12:45.717651 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314351\" (2025-10-12 23:12:31 +0000 UTC to 2026-10-12 23:12:31 +0000 UTC (now=2025-10-13 00:12:45.717614109 +0000 UTC))" 2025-10-13T00:12:45.884156908+00:00 stderr F I1013 00:12:45.884054 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.007118416+00:00 stderr F I1013 00:12:48.006858 1 trace.go:236] Trace[1819763051]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.424) (total time: 14581ms): 2025-10-13T00:12:48.007118416+00:00 stderr F Trace[1819763051]: ---"Objects listed" error: 14581ms (00:12:48.006) 2025-10-13T00:12:48.007118416+00:00 stderr F Trace[1819763051]: [14.581868736s] [14.581868736s] END 2025-10-13T00:12:48.007118416+00:00 stderr F I1013 00:12:48.006880 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.007118416+00:00 stderr F I1013 00:12:48.006897 1 trace.go:236] Trace[1783878402]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.026) (total time: 14980ms): 2025-10-13T00:12:48.007118416+00:00 stderr F Trace[1783878402]: ---"Objects listed" error: 14980ms (00:12:48.006) 2025-10-13T00:12:48.007118416+00:00 stderr F Trace[1783878402]: [14.980729771s] [14.980729771s] END 2025-10-13T00:12:48.007118416+00:00 stderr F I1013 00:12:48.006917 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.008535246+00:00 stderr F I1013 00:12:48.008257 1 trace.go:236] Trace[1902149800]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.318) (total time: 14689ms): 2025-10-13T00:12:48.008535246+00:00 stderr F Trace[1902149800]: ---"Objects listed" error: 14689ms (00:12:48.008) 2025-10-13T00:12:48.008535246+00:00 stderr F Trace[1902149800]: [14.689458705s] [14.689458705s] END 2025-10-13T00:12:48.008535246+00:00 stderr F I1013 00:12:48.008275 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.008572247+00:00 stderr F I1013 00:12:48.008258 1 trace.go:236] Trace[345751865]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.472) (total time: 14535ms): 2025-10-13T00:12:48.008572247+00:00 stderr F Trace[345751865]: ---"Objects listed" error: 14535ms (00:12:48.008) 2025-10-13T00:12:48.008572247+00:00 stderr F Trace[345751865]: [14.535487411s] [14.535487411s] END 2025-10-13T00:12:48.008572247+00:00 stderr F I1013 00:12:48.008560 1 trace.go:236] Trace[2114630660]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.072) (total time: 14935ms): 
2025-10-13T00:12:48.008572247+00:00 stderr F Trace[2114630660]: ---"Objects listed" error: 14935ms (00:12:48.008) 2025-10-13T00:12:48.008572247+00:00 stderr F Trace[2114630660]: [14.935984182s] [14.935984182s] END 2025-10-13T00:12:48.008596208+00:00 stderr F I1013 00:12:48.008571 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.008596208+00:00 stderr F I1013 00:12:48.008579 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.008607428+00:00 stderr F I1013 00:12:48.008583 1 node_tree.go:65] "Added node in listed group to NodeTree" node="crc" zone="" 2025-10-13T00:12:48.008649309+00:00 stderr F I1013 00:12:48.008540 1 trace.go:236] Trace[1324915085]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Oct-2025 00:12:33.029) (total time: 14978ms): 2025-10-13T00:12:48.008649309+00:00 stderr F Trace[1324915085]: ---"Objects listed" error: 14978ms (00:12:48.008) 2025-10-13T00:12:48.008649309+00:00 stderr F Trace[1324915085]: [14.978734127s] [14.978734127s] END 2025-10-13T00:12:48.008728811+00:00 stderr F I1013 00:12:48.008705 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:12:48.012035956+00:00 stderr F I1013 00:12:48.011800 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 2025-10-13T00:15:44.474916173+00:00 stderr F I1013 00:15:44.474859 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/kube-scheduler 2025-10-13T00:15:44.662914455+00:00 stderr F I1013 00:15:44.662811 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-zqnwb" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:15:44.662914455+00:00 stderr F I1013 00:15:44.662842 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-jfjbq" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:15:44.662914455+00:00 stderr F I1013 00:15:44.662863 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-t4sr9" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:15:44.662914455+00:00 stderr F I1013 00:15:44.662845 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-image-registry/image-pruner-29338560-zvlxb" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:15:44.662958077+00:00 stderr F I1013 00:15:44.662928 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29338575-4qbqw" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:16:10.171578934+00:00 stderr F I1013 00:16:10.170085 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/marketplace-operator-8b455464d-29pzg" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:16:16.026073666+00:00 stderr F I1013 00:16:16.026004 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-crk87" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:16:23.666464966+00:00 stderr F I1013 00:16:23.666415 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-hkptr" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:16:27.229411675+00:00 stderr F I1013 00:16:27.228399 1 
schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-gjctm" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:16:27.428120778+00:00 stderr F I1013 00:16:27.427842 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-cms8q" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:16:29.026033616+00:00 stderr F I1013 00:16:29.025951 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-wswq5" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:19:02.366460617+00:00 stderr F I1013 00:19:02.366410 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-multus/cni-sysctl-allowlist-ds-pklng" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:19:38.903541787+00:00 stderr F I1013 00:19:38.901746 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-image-registry/image-registry-75b7bb6564-2mwg6" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:21:11.950399447+00:00 stderr F I1013 00:21:11.950249 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.950192302 +0000 UTC))" 2025-10-13T00:21:11.950399447+00:00 stderr F I1013 00:21:11.950307 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.950283354 +0000 UTC))" 2025-10-13T00:21:11.950399447+00:00 stderr F I1013 00:21:11.950378 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.950315315 +0000 UTC))" 2025-10-13T00:21:11.950483799+00:00 stderr F I1013 00:21:11.950426 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.950386927 +0000 UTC))" 2025-10-13T00:21:11.950483799+00:00 stderr F I1013 00:21:11.950455 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950433438 +0000 UTC))" 2025-10-13T00:21:11.950494170+00:00 stderr F I1013 00:21:11.950482 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950461899 +0000 UTC))" 2025-10-13T00:21:11.950543631+00:00 stderr F I1013 00:21:11.950507 1 
tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.95048953 +0000 UTC))" 2025-10-13T00:21:11.950553951+00:00 stderr F I1013 00:21:11.950541 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.950523121 +0000 UTC))" 2025-10-13T00:21:11.950589072+00:00 stderr F I1013 00:21:11.950569 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.950547861 +0000 UTC))" 2025-10-13T00:21:11.950635494+00:00 stderr F I1013 00:21:11.950615 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.950582892 +0000 UTC))" 2025-10-13T00:21:11.950758817+00:00 stderr F I1013 00:21:11.950669 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.950629123 +0000 UTC))" 2025-10-13T00:21:11.951643141+00:00 stderr F I1013 00:21:11.951315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-10-13 00:21:11.951281161 +0000 UTC))" 2025-10-13T00:21:11.951986100+00:00 stderr F I1013 00:21:11.951946 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314351\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314351\" (2025-10-12 23:12:31 +0000 UTC to 2026-10-12 23:12:31 +0000 UTC (now=2025-10-13 00:21:11.951918688 +0000 UTC))" 2025-10-13T00:21:35.297764189+00:00 stderr F I1013 00:21:35.297234 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-ovn-kubernetes/ovnkube-node-wzh74" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-10-13T00:22:14.610894727+00:00 stderr F E1013 00:22:14.610741 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 
38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:42.130900146+00:00 stderr F I1013 00:22:42.130806 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:43.819620366+00:00 stderr F I1013 00:22:43.819516 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:44.285455882+00:00 stderr F I1013 00:22:44.285256 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:44.638493206+00:00 stderr F I1013 00:22:44.638388 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:46.213595390+00:00 stderr F I1013 00:22:46.213475 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.905926531+00:00 stderr F I1013 00:22:47.905860 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:47.909786748+00:00 stderr F I1013 00:22:47.909747 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:48.860965913+00:00 stderr F I1013 00:22:48.860838 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:51.354709097+00:00 stderr F I1013 00:22:51.354655 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:51.476369826+00:00 stderr F I1013 00:22:51.476276 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:51.602682044+00:00 stderr F I1013 00:22:51.602615 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:51.685065509+00:00 stderr F I1013 00:22:51.684971 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:56.625247088+00:00 stderr F I1013 00:22:56.625186 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:56.912590942+00:00 stderr F I1013 00:22:56.912533 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:22:59.630847579+00:00 stderr F I1013 00:22:59.630770 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000016354415073043232033041 0ustar zuulzuul2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230190 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230330 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-08-13T20:08:10.230920209+00:00 stderr F 
W0813 20:08:10.230337 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230342 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230347 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230352 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230356 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230403 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230413 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230418 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230422 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230427 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230432 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230436 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230441 1 feature_gate.go:227] unrecognized feature gate: Example 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230446 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230450 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230455 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230460 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230464 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230469 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230474 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230479 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230484 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230488 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230493 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230497 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230502 1 feature_gate.go:227] unrecognized feature gate: 
ExternalCloudProviderAzure 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230507 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230511 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230523 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230528 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230533 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230537 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230542 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230546 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230551 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230555 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230560 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230565 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230569 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230574 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230580 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230585 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230590 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230594 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230599 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230604 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230608 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230614 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230621 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230628 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230634 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230639 1 feature_gate.go:227] unrecognized feature 
gate: InsightsConfig 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230644 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230649 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230661 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230667 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230672 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230677 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231000 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231022 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231062 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231071 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231077 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231083 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231087 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231094 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231099 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T20:08:10.231116925+00:00 stderr F I0813 20:08:10.231104 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T20:08:10.231116925+00:00 stderr F I0813 20:08:10.231108 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231115 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231119 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231123 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-08-13T20:08:10.231139305+00:00 stderr F I0813 20:08:10.231128 1 flags.go:64] FLAG: --contention-profiling="true" 2025-08-13T20:08:10.231139305+00:00 stderr F I0813 20:08:10.231132 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231138 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231184 1 flags.go:64] FLAG: --help="false" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231191 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231197 1 flags.go:64] FLAG: --kube-api-burst="100" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231203 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231208 1 flags.go:64] FLAG: --kube-api-qps="50" 2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231214 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231218 1 flags.go:64] FLAG: --leader-elect="true" 2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231222 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231226 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231231 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231235 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231239 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231244 1 flags.go:64] FLAG: --leader-elect-retry-period="2s" 2025-08-13T20:08:10.231261759+00:00 stderr F I0813 20:08:10.231248 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T20:08:10.231261759+00:00 stderr F I0813 20:08:10.231253 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231260 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231264 1 flags.go:64] FLAG: --logging-format="text" 2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231268 1 flags.go:64] FLAG: --master="" 2025-08-13T20:08:10.231283149+00:00 stderr F I0813 20:08:10.231272 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-08-13T20:08:10.231283149+00:00 stderr F I0813 20:08:10.231277 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-08-13T20:08:10.231293160+00:00 stderr F I0813 20:08:10.231281 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2025-08-13T20:08:10.231293160+00:00 stderr F I0813 20:08:10.231285 1 flags.go:64] FLAG: --profiling="true" 2025-08-13T20:08:10.231302840+00:00 stderr F I0813 20:08:10.231289 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-08-13T20:08:10.231302840+00:00 stderr F I0813 20:08:10.231294 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-08-13T20:08:10.231312790+00:00 stderr F I0813 20:08:10.231298 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-08-13T20:08:10.231312790+00:00 stderr F I0813 20:08:10.231304 1 flags.go:64] FLAG: 
--requestheader-group-headers="[x-remote-group]" 2025-08-13T20:08:10.231322541+00:00 stderr F I0813 20:08:10.231310 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-08-13T20:08:10.231322541+00:00 stderr F I0813 20:08:10.231316 1 flags.go:64] FLAG: --secure-port="10259" 2025-08-13T20:08:10.231332641+00:00 stderr F I0813 20:08:10.231320 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-08-13T20:08:10.231332641+00:00 stderr F I0813 20:08:10.231324 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-08-13T20:08:10.231342871+00:00 stderr F I0813 20:08:10.231329 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-08-13T20:08:10.231342871+00:00 stderr F I0813 20:08:10.231337 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T20:08:10.231353441+00:00 stderr F I0813 20:08:10.231342 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:10.231353441+00:00 stderr F I0813 20:08:10.231347 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-08-13T20:08:10.231363342+00:00 stderr F I0813 20:08:10.231353 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-08-13T20:08:10.231373152+00:00 stderr F I0813 20:08:10.231357 1 flags.go:64] FLAG: --v="2" 2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231369 1 flags.go:64] FLAG: --version="false" 2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231375 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231380 1 flags.go:64] FLAG: --write-config-to="" 2025-08-13T20:08:10.245082115+00:00 stderr F I0813 20:08:10.244995 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:11.146145129+00:00 stderr F I0813 20:08:11.146059 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:08:11.188037240+00:00 stderr F I0813 20:08:11.187945 1 configfile.go:94] "Using component config" config=< 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F clientConnection: 2025-08-13T20:08:11.188037240+00:00 stderr F acceptContentTypes: "" 2025-08-13T20:08:11.188037240+00:00 stderr F burst: 100 2025-08-13T20:08:11.188037240+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2025-08-13T20:08:11.188037240+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2025-08-13T20:08:11.188037240+00:00 stderr F qps: 50 2025-08-13T20:08:11.188037240+00:00 stderr F enableContentionProfiling: false 2025-08-13T20:08:11.188037240+00:00 stderr F enableProfiling: false 2025-08-13T20:08:11.188037240+00:00 stderr F kind: KubeSchedulerConfiguration 2025-08-13T20:08:11.188037240+00:00 stderr F leaderElection: 2025-08-13T20:08:11.188037240+00:00 stderr F leaderElect: true 2025-08-13T20:08:11.188037240+00:00 stderr F leaseDuration: 2m17s 2025-08-13T20:08:11.188037240+00:00 stderr F renewDeadline: 1m47s 
2025-08-13T20:08:11.188037240+00:00 stderr F resourceLock: leases 2025-08-13T20:08:11.188037240+00:00 stderr F resourceName: kube-scheduler 2025-08-13T20:08:11.188037240+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2025-08-13T20:08:11.188037240+00:00 stderr F retryPeriod: 26s 2025-08-13T20:08:11.188037240+00:00 stderr F parallelism: 16 2025-08-13T20:08:11.188037240+00:00 stderr F percentageOfNodesToScore: 0 2025-08-13T20:08:11.188037240+00:00 stderr F podInitialBackoffSeconds: 1 2025-08-13T20:08:11.188037240+00:00 stderr F podMaxBackoffSeconds: 10 2025-08-13T20:08:11.188037240+00:00 stderr F profiles: 2025-08-13T20:08:11.188037240+00:00 stderr F - pluginConfig: 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F kind: DefaultPreemptionArgs 2025-08-13T20:08:11.188037240+00:00 stderr F minCandidateNodesAbsolute: 100 2025-08-13T20:08:11.188037240+00:00 stderr F minCandidateNodesPercentage: 10 2025-08-13T20:08:11.188037240+00:00 stderr F name: DefaultPreemption 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F hardPodAffinityWeight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F ignorePreferredTermsOfExistingPods: false 2025-08-13T20:08:11.188037240+00:00 stderr F kind: InterPodAffinityArgs 2025-08-13T20:08:11.188037240+00:00 stderr F name: InterPodAffinity 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F kind: NodeAffinityArgs 2025-08-13T20:08:11.188037240+00:00 stderr F name: NodeAffinity 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs 2025-08-13T20:08:11.188037240+00:00 stderr F resources: 2025-08-13T20:08:11.188037240+00:00 stderr F - name: cpu 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F - name: memory 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F name: NodeResourcesBalancedAllocation 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F kind: NodeResourcesFitArgs 2025-08-13T20:08:11.188037240+00:00 stderr F scoringStrategy: 2025-08-13T20:08:11.188037240+00:00 stderr F resources: 2025-08-13T20:08:11.188037240+00:00 stderr F - name: cpu 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F - name: memory 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F type: LeastAllocated 2025-08-13T20:08:11.188037240+00:00 stderr F name: NodeResourcesFit 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F defaultingType: System 2025-08-13T20:08:11.188037240+00:00 stderr F kind: PodTopologySpreadArgs 2025-08-13T20:08:11.188037240+00:00 stderr F name: PodTopologySpread 2025-08-13T20:08:11.188037240+00:00 stderr F - args: 
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-08-13T20:08:11.188037240+00:00 stderr F bindTimeoutSeconds: 600 2025-08-13T20:08:11.188037240+00:00 stderr F kind: VolumeBindingArgs 2025-08-13T20:08:11.188037240+00:00 stderr F name: VolumeBinding 2025-08-13T20:08:11.188037240+00:00 stderr F plugins: 2025-08-13T20:08:11.188037240+00:00 stderr F bind: {} 2025-08-13T20:08:11.188037240+00:00 stderr F filter: {} 2025-08-13T20:08:11.188037240+00:00 stderr F multiPoint: 2025-08-13T20:08:11.188037240+00:00 stderr F enabled: 2025-08-13T20:08:11.188037240+00:00 stderr F - name: PrioritySort 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeUnschedulable 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeName 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: TaintToleration 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 3 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeAffinity 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 2 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodePorts 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeResourcesFit 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F - name: VolumeRestrictions 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: EBSLimits 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: GCEPDLimits 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeVolumeLimits 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: AzureDiskLimits 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: VolumeBinding 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: VolumeZone 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: PodTopologySpread 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 2 2025-08-13T20:08:11.188037240+00:00 stderr F - name: InterPodAffinity 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 2 2025-08-13T20:08:11.188037240+00:00 stderr F - name: DefaultPreemption 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeResourcesBalancedAllocation 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F - name: ImageLocality 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1 2025-08-13T20:08:11.188037240+00:00 stderr F - name: DefaultBinder 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F - name: SchedulingGates 2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0 2025-08-13T20:08:11.188037240+00:00 stderr F permit: {} 2025-08-13T20:08:11.188037240+00:00 stderr F postBind: {} 2025-08-13T20:08:11.188037240+00:00 stderr F postFilter: {} 2025-08-13T20:08:11.188037240+00:00 stderr F preBind: {} 2025-08-13T20:08:11.188037240+00:00 stderr F preEnqueue: {} 2025-08-13T20:08:11.188037240+00:00 stderr F preFilter: {} 2025-08-13T20:08:11.188037240+00:00 stderr F 
preScore: {} 2025-08-13T20:08:11.188037240+00:00 stderr F queueSort: {} 2025-08-13T20:08:11.188037240+00:00 stderr F reserve: {} 2025-08-13T20:08:11.188037240+00:00 stderr F score: {} 2025-08-13T20:08:11.188037240+00:00 stderr F schedulerName: default-scheduler 2025-08-13T20:08:11.188037240+00:00 stderr F > 2025-08-13T20:08:11.188878124+00:00 stderr F I0813 20:08:11.188854 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3" 2025-08-13T20:08:11.188957786+00:00 stderr F I0813 20:08:11.188943 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-08-13T20:08:11.203107062+00:00 stderr F I0813 20:08:11.203033 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.202985438 +0000 UTC))" 2025-08-13T20:08:11.203489373+00:00 stderr F I0813 20:08:11.203445 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.203324348 +0000 UTC))" 2025-08-13T20:08:11.203548794+00:00 stderr F I0813 20:08:11.203507 1 secure_serving.go:213] Serving securely on [::]:10259 2025-08-13T20:08:11.203942036+00:00 stderr F I0813 20:08:11.203879 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:08:11.204870982+00:00 stderr F I0813 20:08:11.204750 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:08:11.205753528+00:00 stderr F I0813 20:08:11.205396 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:08:11.205753528+00:00 stderr F I0813 20:08:11.205692 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:08:11.206858439+00:00 stderr F I0813 20:08:11.206722 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:08:11.208265020+00:00 stderr F I0813 20:08:11.208201 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:08:11.208802005+00:00 stderr F I0813 20:08:11.208759 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:08:11.211168003+00:00 stderr F I0813 20:08:11.209991 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:08:11.217854585+00:00 stderr F I0813 20:08:11.217143 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.217854585+00:00 stderr F I0813 20:08:11.217420 1 
reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.218323598+00:00 stderr F I0813 20:08:11.218294 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.218656498+00:00 stderr F I0813 20:08:11.218594 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.225140743+00:00 stderr F I0813 20:08:11.225093 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.225409441+00:00 stderr F I0813 20:08:11.225093 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.225541035+00:00 stderr F I0813 20:08:11.225393 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.228423748+00:00 stderr F I0813 20:08:11.228394 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.228852450+00:00 stderr F I0813 20:08:11.228737 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.229345944+00:00 stderr F I0813 20:08:11.228441 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.229720875+00:00 stderr F I0813 20:08:11.229585 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.229720875+00:00 stderr F I0813 20:08:11.229561 1 node_tree.go:65] "Added node in listed group to NodeTree" node="crc" zone="" 2025-08-13T20:08:11.230234189+00:00 stderr F I0813 20:08:11.230210 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.233049880+00:00 stderr F I0813 20:08:11.232089 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.262424672+00:00 stderr F I0813 20:08:11.260649 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.275833337+00:00 stderr F I0813 20:08:11.274170 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.275833337+00:00 stderr F I0813 20:08:11.274459 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.302108150+00:00 stderr F I0813 20:08:11.301417 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:11.304706425+00:00 stderr F I0813 20:08:11.304648 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 
2025-08-13T20:08:11.310589033+00:00 stderr F I0813 20:08:11.310070 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:08:11.310714567+00:00 stderr F I0813 20:08:11.310689 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:08:11.310912263+00:00 stderr F I0813 20:08:11.310699 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:11.310624834 +0000 UTC))" 2025-08-13T20:08:11.311012305+00:00 stderr F I0813 20:08:11.310992 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:11.310959374 +0000 UTC))" 2025-08-13T20:08:11.311070797+00:00 stderr F I0813 20:08:11.311056 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.311035286 +0000 UTC))" 2025-08-13T20:08:11.311119569+00:00 stderr F I0813 20:08:11.311106 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.311089788 +0000 UTC))" 2025-08-13T20:08:11.311165560+00:00 stderr F I0813 20:08:11.311152 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311138159 +0000 UTC))" 2025-08-13T20:08:11.311209871+00:00 stderr F I0813 20:08:11.311197 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.31118407 +0000 UTC))" 2025-08-13T20:08:11.311252592+00:00 stderr F I0813 20:08:11.311240 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311227522 +0000 UTC))" 2025-08-13T20:08:11.311322654+00:00 stderr F I0813 20:08:11.311299 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311282933 +0000 UTC))" 2025-08-13T20:08:11.311389356+00:00 stderr F I0813 20:08:11.311374 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:11.311342025 +0000 UTC))" 2025-08-13T20:08:11.311438288+00:00 stderr F I0813 20:08:11.311424 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:11.311411097 +0000 UTC))" 2025-08-13T20:08:11.312300042+00:00 stderr F I0813 20:08:11.312264 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.312244321 +0000 UTC))" 2025-08-13T20:08:11.312595841+00:00 stderr F I0813 20:08:11.312580 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.3125637 +0000 UTC))" 2025-08-13T20:08:11.317314156+00:00 stderr F I0813 20:08:11.317284 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/kube-scheduler 2025-08-13T20:08:11.318466309+00:00 stderr F I0813 20:08:11.318358 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318649 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:11.318623444 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318698 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:11.318681925 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318718 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.318705526 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318742 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.318723397 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318761 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318747857 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318844 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318767468 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318870 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.31885504 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318932 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318882411 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318956 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:11.318941623 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318976 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:11.318966564 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318995 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318983994 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.319304 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.319284853 +0000 UTC))" 2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.319577 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.319561951 +0000 UTC))" 2025-08-13T20:08:37.333024227+00:00 stderr F E0813 20:08:37.331678 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:09:01.242213943+00:00 stderr F I0813 20:09:01.242038 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.336110067+00:00 stderr F I0813 20:09:03.335887 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.904718640+00:00 stderr F I0813 20:09:03.904657 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:04.237111850+00:00 stderr F I0813 20:09:04.236960 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:04.238841120+00:00 stderr F I0813 20:09:04.238728 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.534492978+00:00 stderr F I0813 20:09:05.534406 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:05.723631721+00:00 stderr F I0813 20:09:05.723574 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:06.019105803+00:00 stderr F I0813 20:09:06.018397 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.053737367+00:00 stderr F I0813 20:09:07.053594 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.761629982+00:00 stderr F I0813 20:09:07.761557 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.771045132+00:00 stderr F I0813 20:09:07.770994 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.834012487+00:00 stderr F I0813 20:09:07.833944 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.952277558+00:00 stderr F I0813 20:09:07.952198 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.981075823+00:00 stderr F I0813 20:09:07.980921 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.114889060+00:00 stderr F I0813 20:09:10.114739 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.765327109+00:00 stderr F I0813 20:09:10.764682 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:12.779279180+00:00 stderr F I0813 20:09:12.779210 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.272706390+00:00 stderr F I0813 20:10:15.272394 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:10:59.918217989+00:00 stderr F I0813 20:10:59.912574 1 schedule_one.go:992] "Unable to schedule pod; no fit; waiting" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" err="0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules." 2025-08-13T20:11:00.012890153+00:00 stderr F I0813 20:11:00.012639 1 schedule_one.go:992] "Unable to schedule pod; no fit; waiting" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" err="0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. 
preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules." 2025-08-13T20:11:01.525253034+00:00 stderr F I0813 20:11:01.523937 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:11:01.531536584+00:00 stderr F I0813 20:11:01.528683 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:15:00.383192080+00:00 stderr F I0813 20:15:00.378262 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:16:58.186838320+00:00 stderr F I0813 20:16:58.185353 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-8bbjz" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:17:00.178201349+00:00 stderr F I0813 20:17:00.177988 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nsk78" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:17:16.061619591+00:00 stderr F I0813 20:17:16.061426 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-swl5s" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:17:30.380858087+00:00 stderr F I0813 20:17:30.380641 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-tfv59" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:27:05.675630296+00:00 stderr F I0813 20:27:05.672255 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-jbzn9" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:27:05.827866179+00:00 stderr F I0813 20:27:05.827567 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-xldzg" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:28:43.342763392+00:00 stderr F I0813 20:28:43.342273 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-hvwvm" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:29:30.103067461+00:00 stderr F I0813 20:29:30.100957 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-zdwjn" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:30:02.007482088+00:00 stderr F I0813 20:30:02.005463 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:37:48.222392256+00:00 stderr F I0813 20:37:48.219625 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nkzlk" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:38:36.098357314+00:00 stderr F I0813 20:38:36.087705 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-4kmbv" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:41:21.475952852+00:00 stderr F I0813 20:41:21.453662 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-k2tgr" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:42:26.033155404+00:00 stderr F I0813 20:42:26.032886 1 
schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-sdddl" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-08-13T20:42:36.442535128+00:00 stderr F I0813 20:42:36.441417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464219033+00:00 stderr F I0813 20:42:36.439661 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464219033+00:00 stderr F I0813 20:42:36.459284 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464431250+00:00 stderr F I0813 20:42:36.464388 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464653396+00:00 stderr F I0813 20:42:36.464635 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464760989+00:00 stderr F I0813 20:42:36.464743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469013992+00:00 stderr F I0813 20:42:36.468989 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484011014+00:00 stderr F I0813 20:42:36.469107 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484375045+00:00 stderr F I0813 20:42:36.484316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484535129+00:00 stderr F I0813 20:42:36.484516 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484705284+00:00 stderr F I0813 20:42:36.484647 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.484908530+00:00 stderr F I0813 20:42:36.484885 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485015883+00:00 stderr F I0813 20:42:36.484998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485104616+00:00 stderr F I0813 20:42:36.485089 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485197898+00:00 stderr F I0813 20:42:36.485182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.485337742+00:00 stderr F I0813 20:42:36.485316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508276894+00:00 stderr F I0813 20:42:36.430578 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.603936832+00:00 stderr F I0813 20:42:37.603863 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:37.604083756+00:00 stderr F I0813 20:42:37.604062 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:37.604331843+00:00 stderr F I0813 20:42:37.604306 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.604409386+00:00 stderr F I0813 20:42:37.604392 1 dynamic_serving_content.go:146] "Shutting down controller" 
name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.604490028+00:00 stderr F I0813 20:42:37.604472 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:37.609753310+00:00 stderr F I0813 20:42:37.607518 1 scheduling_queue.go:870] "Scheduling queue is closed" 2025-08-13T20:42:37.609753310+00:00 stderr F E0813 20:42:37.608056 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:37.609753310+00:00 stderr F I0813 20:42:37.608306 1 server.go:248] "Requested to terminate, exiting" ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043232033022 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000000012515073043232033022 0ustar zuulzuul2025-10-13T00:08:25.768655238+00:00 stdout P Waiting for port :10259 to be released. ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000000012515073043232033022 0ustar zuulzuul2025-08-13T20:08:08.757827394+00:00 stdout P Waiting for port :10259 to be released. ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000000012515073043232033022 0ustar zuulzuul2025-10-13T00:12:29.592120992+00:00 stdout P Waiting for port :10259 to be released. 
././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043232033022 5ustar zuulzuul././@LongLink0000644000000000000000000000031600000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001721715073043232033034 0ustar zuulzuul2025-10-13T00:12:31.236931601+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done' 2025-10-13T00:12:31.242442528+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')' 2025-10-13T00:12:31.248276511+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:12:31.249210067+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2 2025-10-13T00:12:31.637766714+00:00 stderr F W1013 00:12:31.637291 1 cmd.go:245] Using insecure, self-signed certificates 2025-10-13T00:12:31.638887169+00:00 stderr F I1013 00:12:31.638853 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1760314351 cert, and key in /tmp/serving-cert-4145979271/serving-signer.crt, /tmp/serving-cert-4145979271/serving-signer.key 2025-10-13T00:12:31.854071388+00:00 stderr F I1013 00:12:31.853978 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-10-13T00:12:31.855130629+00:00 stderr F I1013 00:12:31.855059 1 observer_polling.go:159] Starting file observer 2025-10-13T00:12:31.856915896+00:00 stderr F W1013 00:12:31.856866 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/pods": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.857137067+00:00 stderr F I1013 00:12:31.857122 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$ 2025-10-13T00:12:31.859041039+00:00 stderr F W1013 00:12:31.859008 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.859769854+00:00 stderr F I1013 00:12:31.859717 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-scheduler", Name:"openshift-kube-scheduler", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.860238257+00:00 stderr F I1013 00:12:31.860196 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock... 2025-10-13T00:12:31.861092519+00:00 stderr F E1013 00:12:31.861048 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:12:31.867216026+00:00 stderr F E1013 00:12:31.867184 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-scheduler.186de49393937eb2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2025-10-13 00:12:31.858974386 +0000 UTC m=+0.606055810,LastTimestamp:2025-10-13 00:12:31.858974386 +0000 UTC m=+0.606055810,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2025-10-13T00:18:19.106723495+00:00 stderr F I1013 00:18:19.106646 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/cert-recovery-controller-lock 2025-10-13T00:18:19.107483087+00:00 stderr F I1013 00:18:19.107424 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler", Name:"cert-recovery-controller-lock", UID:"e24b93c2-79d9-43db-937a-d4e24725daea", 
APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_1aaaedbf-fed3-48a1-bb0b-1d466b2ddb3c became leader 2025-10-13T00:18:19.107956791+00:00 stderr F I1013 00:18:19.107897 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:18:19.110103455+00:00 stderr F I1013 00:18:19.110063 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:18:19.110778865+00:00 stderr F I1013 00:18:19.110745 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:18:19.121479144+00:00 stderr F I1013 00:18:19.120895 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:18:19.121479144+00:00 stderr F I1013 00:18:19.121297 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:18:19.134121440+00:00 stderr F I1013 00:18:19.133918 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:18:19.208194275+00:00 stderr F I1013 00:18:19.208095 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:18:19.208194275+00:00 stderr F I1013 00:18:19.208143 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:22:13.182079943+00:00 stderr F E1013 00:22:13.182005 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:22:42.282517830+00:00 stderr F I1013 00:22:42.282402 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:46.862855536+00:00 stderr F I1013 00:22:46.862783 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:50.070164455+00:00 stderr F I1013 00:22:50.070102 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:52.894739784+00:00 stderr F I1013 00:22:52.894657 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-10-13T00:22:53.860834745+00:00 stderr F I1013 00:22:53.860771 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000031600000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001431715073043232033032 0ustar zuulzuul2025-08-13T20:08:10.423486490+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done' 2025-08-13T20:08:10.429459371+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')' 
2025-08-13T20:08:10.435645509+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:10.436465452+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2 2025-08-13T20:08:10.591871248+00:00 stderr F W0813 20:08:10.591324 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T20:08:10.591871248+00:00 stderr F I0813 20:08:10.591729 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1755115690 cert, and key in /tmp/serving-cert-2660222687/serving-signer.crt, /tmp/serving-cert-2660222687/serving-signer.key 2025-08-13T20:08:10.967190418+00:00 stderr F I0813 20:08:10.967073 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:08:10.968185366+00:00 stderr F I0813 20:08:10.968134 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:10.988939231+00:00 stderr F I0813 20:08:10.988860 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$ 2025-08-13T20:08:10.998709851+00:00 stderr F I0813 20:08:10.998553 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:08:11.000208574+00:00 stderr F I0813 20:08:10.998984 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock... 2025-08-13T20:08:11.009863831+00:00 stderr F I0813 20:08:11.009712 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/cert-recovery-controller-lock 2025-08-13T20:08:11.010233502+00:00 stderr F I0813 20:08:11.009910 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler", Name:"cert-recovery-controller-lock", UID:"e24b93c2-79d9-43db-937a-d4e24725daea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_d5b9b15f-ff83-4db6-8ab7-bb13bf3420f4 became leader 2025-08-13T20:08:11.012003963+00:00 stderr F I0813 20:08:11.011877 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:08:11.027060494+00:00 stderr F I0813 20:08:11.025374 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.041092377+00:00 stderr F I0813 20:08:11.040728 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.041394735+00:00 stderr F I0813 20:08:11.040988 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.047141560+00:00 stderr F I0813 20:08:11.046865 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.065527837+00:00 stderr F I0813 20:08:11.065403 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:11.113013099+00:00 stderr F I0813 20:08:11.112833 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:08:11.113013099+00:00 stderr F I0813 
20:08:11.112876 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:08:59.290396854+00:00 stderr F I0813 20:08:59.290258 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:06.506841487+00:00 stderr F I0813 20:09:06.506684 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.035347489+00:00 stderr F I0813 20:09:07.035041 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:07.586281885+00:00 stderr F I0813 20:09:07.586001 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:11.968019000+00:00 stderr F I0813 20:09:11.967733 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:36.324418343+00:00 stderr F I0813 20:42:36.324115 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.336306026+00:00 stderr F I0813 20:42:36.334141 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.428947056+00:00 stderr F I0813 20:42:36.349394 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.429622136+00:00 stderr F I0813 20:42:36.429561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.431371316+00:00 stderr F I0813 20:42:36.431343 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.290643040+00:00 stderr F I0813 20:42:37.286428 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:37.293847222+00:00 stderr F E0813 20:42:37.292964 1 leaderelection.go:308] Failed to release lock: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:42:37.293847222+00:00 stderr F W0813 20:42:37.293055 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000031600000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001345615073043232033035 0ustar zuulzuul2025-10-13T00:08:27.945762560+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done' 2025-10-13T00:08:27.951520336+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')' 2025-10-13T00:08:27.956387924+00:00 stderr F + '[' -n '' ']' 2025-10-13T00:08:27.957434376+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2 2025-10-13T00:08:28.786654872+00:00 stderr F W1013 00:08:28.786435 1 cmd.go:245] Using insecure, self-signed certificates 2025-10-13T00:08:28.788845058+00:00 stderr F I1013 00:08:28.788807 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1760314108 cert, and key in /tmp/serving-cert-665887516/serving-signer.crt, /tmp/serving-cert-665887516/serving-signer.key 2025-10-13T00:08:29.062263293+00:00 stderr F I1013 00:08:29.062171 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:08:29.063125509+00:00 stderr F I1013 00:08:29.063044 1 observer_polling.go:159] Starting file observer 2025-10-13T00:08:29.065014347+00:00 stderr F W1013 00:08:29.064897 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/pods": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.065373738+00:00 stderr F I1013 00:08:29.065339 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$ 2025-10-13T00:08:29.068692689+00:00 stderr F W1013 00:08:29.068645 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.069089271+00:00 stderr F I1013 00:08:29.069049 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock... 
2025-10-13T00:08:29.069942467+00:00 stderr F I1013 00:08:29.069888 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-scheduler", Name:"openshift-kube-scheduler", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.070268217+00:00 stderr F E1013 00:08:29.070229 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/cert-recovery-controller-lock: Get "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=1m47s": dial tcp [::1]:6443: connect: connection refused 2025-10-13T00:08:29.081172359+00:00 stderr F E1013 00:08:29.081090 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-scheduler.186de45b0c249908 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2025-10-13 00:08:29.068613896 +0000 UTC m=+1.105536869,LastTimestamp:2025-10-13 00:08:29.068613896 +0000 UTC m=+1.105536869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2025-10-13T00:08:29.739655077+00:00 stderr F E1013 00:08:29.739544 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": dial tcp [::1]:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-kube-scheduler.186de45b0c249908 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ControlPlaneTopology,Message:unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused,Source:EventSource{Component:cert-recovery-controller,Host:,},FirstTimestamp:2025-10-13 00:08:29.068613896 +0000 UTC m=+1.105536869,LastTimestamp:2025-10-13 00:08:29.068613896 +0000 UTC m=+1.105536869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:cert-recovery-controller,ReportingInstance:,}" 2025-10-13T00:11:29.665382240+00:00 stderr F I1013 00:11:29.664068 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-10-13T00:11:29.665382240+00:00 stderr F W1013 00:11:29.664146 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015073043233032762 5ustar zuulzuul././@LongLink0000644000000000000000000000034100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015073043233032762 5ustar zuulzuul././@LongLink0000644000000000000000000000034600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000002266615073043233033000 0ustar zuulzuul2025-08-13T20:04:17.711019301+00:00 stderr F I0813 20:04:17.710517 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:04:17.731650389+00:00 stderr F I0813 20:04:17.731502 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:04:17.741661665+00:00 stderr F W0813 20:04:17.741251 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.742158909+00:00 stderr F W0813 20:04:17.741915 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.750083895+00:00 stderr F E0813 20:04:17.749981 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.750083895+00:00 stderr F E0813 20:04:17.749982 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.584086456+00:00 stderr F W0813 20:04:18.583537 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.584086456+00:00 stderr F E0813 20:04:18.583991 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.647018632+00:00 stderr F W0813 20:04:18.646930 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.647018632+00:00 stderr F E0813 20:04:18.646976 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.350624461+00:00 stderr F W0813 20:04:20.346192 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.350624461+00:00 stderr F E0813 20:04:20.346495 1 reflector.go:147] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.358060563+00:00 stderr F W0813 20:04:20.357964 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.358214727+00:00 stderr F E0813 20:04:20.358149 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.939148562+00:00 stderr F W0813 20:04:23.938654 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.939262655+00:00 stderr F E0813 20:04:23.939234 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.261915080+00:00 stderr F W0813 20:04:25.260621 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.261915080+00:00 stderr F E0813 20:04:25.260694 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.435909533+00:00 stderr F W0813 20:04:36.434632 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.435909533+00:00 stderr F E0813 20:04:36.435360 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.668995607+00:00 stderr F W0813 20:04:36.660927 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.668995607+00:00 stderr F E0813 20:04:36.668906 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.885207502+00:00 stderr F W0813 20:04:50.884304 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.918401032+00:00 stderr F E0813 20:04:50.918193 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.839595078+00:00 stderr F W0813 20:04:52.839119 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.839595078+00:00 stderr F E0813 20:04:52.839544 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:17.762563304+00:00 stderr F F0813 20:05:17.755149 1 main.go:175] timed out waiting for FeatureGate detection ././@LongLink0000644000000000000000000000034600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000003332015073043233032765 0ustar zuulzuul2025-10-13T00:15:01.780827664+00:00 stderr F I1013 00:15:01.779262 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-10-13T00:15:01.816679648+00:00 stderr F I1013 00:15:01.815934 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:15:01.876841470+00:00 stderr F I1013 00:15:01.875508 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:15:01.892882061+00:00 stderr F I1013 00:15:01.892763 1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.186de4b6814c8807 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}},Source:EventSource{Component:,Host:,},FirstTimestamp:2025-10-13 00:15:01.876189191 +0000 UTC m=+1.278226249,LastTimestamp:2025-10-13 00:15:01.876189191 +0000 UTC m=+1.278226249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} 2025-10-13T00:15:01.893032165+00:00 stderr F I1013 00:15:01.892987 1 main.go:173] FeatureGates initialized: [AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure 
ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:15:01.894005255+00:00 stderr F I1013 00:15:01.885679 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-10-13T00:15:01.894591762+00:00 stderr F I1013 00:15:01.894555 1 webhook.go:173] "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" 2025-10-13T00:15:01.896815519+00:00 stderr F I1013 00:15:01.896781 1 webhook.go:189] "msg"="Registering a validating webhook" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-10-13T00:15:01.896954293+00:00 stderr F I1013 00:15:01.896912 1 server.go:183] "msg"="Registering webhook" "logger"="controller-runtime.webhook" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-10-13T00:15:01.897049626+00:00 stderr F I1013 00:15:01.897031 1 main.go:228] "msg"="starting manager" "logger"="setup" 2025-10-13T00:15:01.898528800+00:00 stderr F I1013 00:15:01.898108 1 server.go:185] "msg"="Starting metrics server" "logger"="controller-runtime.metrics" 2025-10-13T00:15:01.898528800+00:00 stderr F I1013 00:15:01.898264 1 server.go:224] "msg"="Serving metrics server" "bindAddress"=":8080" "logger"="controller-runtime.metrics" "secure"=false 2025-10-13T00:15:01.898643743+00:00 stderr F I1013 00:15:01.898610 1 server.go:50] "msg"="starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-10-13T00:15:01.898654194+00:00 stderr F I1013 00:15:01.898650 1 server.go:191] "msg"="Starting webhook server" "logger"="controller-runtime.webhook" 2025-10-13T00:15:01.900984444+00:00 stderr F I1013 00:15:01.899035 1 certwatcher.go:161] "msg"="Updated current TLS certificate" "logger"="controller-runtime.certwatcher" 2025-10-13T00:15:01.900984444+00:00 stderr F I1013 00:15:01.899121 1 server.go:242] "msg"="Serving webhook server" "host"="" "logger"="controller-runtime.webhook" "port"=9443 2025-10-13T00:15:01.900984444+00:00 stderr F I1013 00:15:01.899274 1 certwatcher.go:115] "msg"="Starting certificate watcher" "logger"="controller-runtime.certwatcher" 2025-10-13T00:15:01.900984444+00:00 stderr F I1013 00:15:01.899812 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-machine-api/control-plane-machine-set-leader... 2025-10-13T00:17:36.172961972+00:00 stderr F I1013 00:17:36.172239 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/control-plane-machine-set-leader 2025-10-13T00:17:36.173282902+00:00 stderr F I1013 00:17:36.173238 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-10-13T00:17:36.173296432+00:00 stderr F I1013 00:17:36.173278 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1beta1.Machine" 2025-10-13T00:17:36.173319893+00:00 stderr F I1013 00:17:36.173304 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Node" 2025-10-13T00:17:36.173385855+00:00 stderr F I1013 00:17:36.173366 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ClusterOperator" 2025-10-13T00:17:36.173451487+00:00 stderr F I1013 00:17:36.173413 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Infrastructure" 2025-10-13T00:17:36.173451487+00:00 stderr F I1013 00:17:36.173445 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachineset" 2025-10-13T00:17:36.174142158+00:00 stderr F I1013 00:17:36.174083 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-10-13T00:17:36.174142158+00:00 stderr F I1013 00:17:36.174117 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1beta1.Machine" 2025-10-13T00:17:36.174161919+00:00 stderr F I1013 00:17:36.174138 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachinesetgenerator" 2025-10-13T00:17:36.175929654+00:00 stderr F I1013 00:17:36.175862 1 recorder.go:104] "msg"="control-plane-machine-set-operator-649bd778b4-tt5tw_5c50ad75-1dfb-4de6-8be2-2b72853a9312 became leader" "logger"="events" "object"={"kind":"Lease","namespace":"openshift-machine-api","name":"control-plane-machine-set-leader","uid":"04d6c6f9-cb98-4c35-8cde-538add77d9ad","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41495"} "reason"="LeaderElection" "type"="Normal" 2025-10-13T00:17:36.202234918+00:00 stderr F I1013 00:17:36.202134 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:36.209594716+00:00 stderr F I1013 00:17:36.209472 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:36.211032781+00:00 stderr F I1013 00:17:36.210951 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:36.217401678+00:00 stderr F I1013 00:17:36.217266 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:36.228668557+00:00 stderr F I1013 00:17:36.228592 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-10-13T00:17:36.306019903+00:00 stderr F I1013 00:17:36.305928 1 watch_filters.go:179] reconcile triggered by infrastructure change 
2025-10-13T00:17:36.316360313+00:00 stderr F I1013 00:17:36.316280 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachinesetgenerator" "worker count"=1 2025-10-13T00:17:36.319496020+00:00 stderr F I1013 00:17:36.319457 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachineset" "worker count"=1 2025-10-13T00:17:36.319514571+00:00 stderr F I1013 00:17:36.319507 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="174eaefa-4dcb-471b-923d-a7599e823586" 2025-10-13T00:17:36.320537502+00:00 stderr F I1013 00:17:36.320491 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="174eaefa-4dcb-471b-923d-a7599e823586" 2025-10-13T00:17:36.320555113+00:00 stderr F I1013 00:17:36.320543 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="174eaefa-4dcb-471b-923d-a7599e823586" 2025-10-13T00:17:36.320584184+00:00 stderr F I1013 00:17:36.320576 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="4a430d89-ad44-45ab-ba73-78617a709c21" 2025-10-13T00:17:36.320616185+00:00 stderr F I1013 00:17:36.320591 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="4a430d89-ad44-45ab-ba73-78617a709c21" 2025-10-13T00:17:36.320626255+00:00 stderr F I1013 00:17:36.320614 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="4a430d89-ad44-45ab-ba73-78617a709c21" 2025-10-13T00:22:22.324658884+00:00 stderr F E1013 00:22:22.324123 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:48.327525864+00:00 stderr F E1013 00:22:48.327004 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000034600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000005306115073043233032771 0ustar zuulzuul2025-08-13T20:05:35.452608365+00:00 stderr F I0813 20:05:35.448700 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:35.502237766+00:00 stderr F I0813 20:05:35.502054 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:35.689005154+00:00 stderr F I0813 20:05:35.688074 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.735570748+00:00 stderr F I0813 20:05:35.734903 1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.185b6c47dd59a765 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}},Source:EventSource{Component:,Host:,},FirstTimestamp:2025-08-13 20:05:35.703058277 +0000 UTC m=+1.630097491,LastTimestamp:2025-08-13 20:05:35.703058277 +0000 UTC m=+1.630097491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} 2025-08-13T20:05:35.735570748+00:00 stderr F I0813 20:05:35.735070 1 main.go:173] FeatureGates initialized: [AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example 
ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743278 1 webhook.go:173] "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743496 1 webhook.go:189] "msg"="Registering a validating webhook" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743655 1 server.go:183] "msg"="Registering webhook" "logger"="controller-runtime.webhook" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743716 1 main.go:228] "msg"="starting manager" "logger"="setup" 2025-08-13T20:05:35.808901448+00:00 stderr F I0813 20:05:35.807974 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.809405 1 server.go:185] "msg"="Starting metrics server" "logger"="controller-runtime.metrics" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.809583 1 server.go:224] "msg"="Serving metrics server" "bindAddress"=":8080" "logger"="controller-runtime.metrics" "secure"=false 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.812095 1 server.go:50] "msg"="starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.812187 1 server.go:191] "msg"="Starting webhook server" "logger"="controller-runtime.webhook" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.813033 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/control-plane-machine-set-leader... 
2025-08-13T20:05:35.820718106+00:00 stderr F I0813 20:05:35.820343 1 certwatcher.go:161] "msg"="Updated current TLS certificate" "logger"="controller-runtime.certwatcher" 2025-08-13T20:05:35.820718106+00:00 stderr F I0813 20:05:35.820531 1 server.go:242] "msg"="Serving webhook server" "host"="" "logger"="controller-runtime.webhook" "port"=9443 2025-08-13T20:05:35.820741327+00:00 stderr F I0813 20:05:35.820719 1 certwatcher.go:115] "msg"="Starting certificate watcher" "logger"="controller-runtime.certwatcher" 2025-08-13T20:08:08.092392625+00:00 stderr F I0813 20:08:08.090438 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/control-plane-machine-set-leader 2025-08-13T20:08:08.094648189+00:00 stderr F I0813 20:08:08.094304 1 recorder.go:104] "msg"="control-plane-machine-set-operator-649bd778b4-tt5tw_d3b7e6b8-9166-459d-a6c8-d99794b50433 became leader" "logger"="events" "object"={"kind":"Lease","namespace":"openshift-machine-api","name":"control-plane-machine-set-leader","uid":"04d6c6f9-cb98-4c35-8cde-538add77d9ad","apiVersion":"coordination.k8s.io/v1","resourceVersion":"32821"} "reason"="LeaderElection" "type"="Normal" 2025-08-13T20:08:08.102699080+00:00 stderr F I0813 20:08:08.102529 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-08-13T20:08:08.102699080+00:00 stderr F I0813 20:08:08.102605 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103345 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1beta1.Machine" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103378 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103747 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1beta1.Machine" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103844 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Node" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103922 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ClusterOperator" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103944 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Infrastructure" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103964 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachineset" 2025-08-13T20:08:08.201578905+00:00 stderr F I0813 20:08:08.201332 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.201701399+00:00 stderr F I0813 20:08:08.201676 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.209287466+00:00 stderr F I0813 20:08:08.208649 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.222844255+00:00 stderr F I0813 20:08:08.222401 1 
reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.231268937+00:00 stderr F I0813 20:08:08.231171 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.282215947+00:00 stderr F I0813 20:08:08.280096 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.328518 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachineset" "worker count"=1 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.330982 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333281 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333431 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333528 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachinesetgenerator" "worker count"=1 2025-08-13T20:08:34.116134166+00:00 stderr F E0813 20:08:34.115389 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.119765093+00:00 stderr F E0813 20:09:00.118762 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:43.292954294+00:00 stderr F I0813 20:09:43.291277 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:50.954091206+00:00 stderr F I0813 20:09:50.953177 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:50.954306271+00:00 stderr F I0813 20:09:50.953764 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:09:50.954429834+00:00 stderr F I0813 20:09:50.954375 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:50.954588299+00:00 stderr F I0813 20:09:50.954555 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 
2025-08-13T20:09:50.954711542+00:00 stderr F I0813 20:09:50.954681 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:52.483540696+00:00 stderr F I0813 20:09:52.483360 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484079 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484208 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484248 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:58.740448876+00:00 stderr F I0813 20:09:58.739092 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:10:25.144246163+00:00 stderr F I0813 20:10:25.143425 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:39.931663151+00:00 stderr F I0813 20:10:39.929697 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:10:45.189277050+00:00 stderr F I0813 20:10:45.188639 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:37:36.184186726+00:00 stderr F I0813 20:37:36.183258 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:37:36.191103225+00:00 stderr F I0813 20:37:36.190959 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:37:36.195214644+00:00 stderr F I0813 20:37:36.195112 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:41:15.217404928+00:00 stderr F I0813 20:41:15.216565 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:41:15.217873911+00:00 stderr F I0813 20:41:15.217737 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:41:15.218243532+00:00 stderr F I0813 20:41:15.218176 1 controller.go:177] "msg"="No 
control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:41:15.218480519+00:00 stderr F I0813 20:41:15.218389 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:42:36.410736622+00:00 stderr F I0813 20:42:36.392587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410736622+00:00 stderr F I0813 20:42:36.387065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.423466448+00:00 stderr F I0813 20:42:36.393743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.423466448+00:00 stderr F I0813 20:42:36.387287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.435931618+00:00 stderr F I0813 20:42:36.432591 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.435931618+00:00 stderr F I0813 20:42:36.386855 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437936926+00:00 stderr F I0813 20:42:36.393763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.121052261+00:00 stderr F I0813 20:42:39.120327 1 internal.go:516] "msg"="Stopping and waiting for non leader election runnables" 2025-08-13T20:42:39.121052261+00:00 stderr F I0813 20:42:39.121041 1 internal.go:520] "msg"="Stopping and waiting for leader election runnables" 2025-08-13T20:42:39.123198083+00:00 stderr F I0813 20:42:39.123122 1 controller.go:240] "msg"="Shutdown signal received, waiting for all workers to finish" "controller"="controlplanemachineset" 2025-08-13T20:42:39.123198083+00:00 stderr F I0813 20:42:39.123186 1 controller.go:242] "msg"="All workers finished" "controller"="controlplanemachineset" 2025-08-13T20:42:39.124597463+00:00 stderr F I0813 20:42:39.124511 1 controller.go:240] "msg"="Shutdown signal received, waiting for all workers to finish" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:42:39.124651265+00:00 stderr F I0813 20:42:39.124614 1 controller.go:242] "msg"="All workers finished" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:42:39.124716827+00:00 stderr F I0813 20:42:39.124673 1 internal.go:526] "msg"="Stopping and waiting for caches" 2025-08-13T20:42:39.129012041+00:00 stderr F I0813 20:42:39.128957 1 internal.go:530] "msg"="Stopping and waiting for webhooks" 2025-08-13T20:42:39.129362521+00:00 stderr F I0813 20:42:39.129333 1 server.go:249] "msg"="Shutting down webhook server with timeout of 1 minute" "logger"="controller-runtime.webhook" 2025-08-13T20:42:39.129548196+00:00 stderr F I0813 20:42:39.129529 1 internal.go:533] "msg"="Stopping and waiting for HTTP servers" 2025-08-13T20:42:39.129749002+00:00 stderr F I0813 20:42:39.129657 1 server.go:231] "msg"="Shutting down metrics server with timeout of 1 minute" "logger"="controller-runtime.metrics" 2025-08-13T20:42:39.130267357+00:00 stderr F I0813 20:42:39.130159 1 server.go:43] "msg"="shutting down server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 
2025-08-13T20:42:39.130490133+00:00 stderr F I0813 20:42:39.130410 1 internal.go:537] "msg"="Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:39.136122996+00:00 stderr F E0813 20:42:39.135999 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043233033023 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015073043233033023 5ustar zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000006575415073043233033046 0ustar zuulzuul2025-08-13T20:07:26.749909171+00:00 stderr F I0813 20:07:26.747954 1 cmd.go:92] &{ true {false} installer true map[cert-dir:0xc0001be280 cert-secrets:0xc000871e00 configmaps:0xc0008719a0 namespace:0xc0008717c0 optional-configmaps:0xc000871ae0 optional-secrets:0xc000871a40 pod:0xc000871860 pod-manifest-dir:0xc000871c20 resource-dir:0xc000871b80 revision:0xc000871720 secrets:0xc000871900 v:0xc0001bf9a0] [0xc0001bf9a0 0xc000871720 0xc0008717c0 0xc000871860 0xc000871b80 0xc000871c20 0xc0008719a0 0xc000871ae0 0xc000871a40 0xc000871900 0xc0001be280 0xc000871e00] [] map[cert-configmaps:0xc000871ea0 cert-dir:0xc0001be280 cert-secrets:0xc000871e00 configmaps:0xc0008719a0 help:0xc0001bfd60 kubeconfig:0xc000871680 log-flush-frequency:0xc0001bf900 namespace:0xc0008717c0 optional-cert-configmaps:0xc0001be000 optional-cert-secrets:0xc000871f40 optional-configmaps:0xc000871ae0 optional-secrets:0xc000871a40 pod:0xc000871860 pod-manifest-dir:0xc000871c20 pod-manifests-lock-file:0xc000871d60 resource-dir:0xc000871b80 revision:0xc000871720 secrets:0xc000871900 timeout-duration:0xc000871cc0 v:0xc0001bf9a0 vmodule:0xc0001bfa40] [0xc000871680 0xc000871720 0xc0008717c0 0xc000871860 0xc000871900 0xc0008719a0 0xc000871a40 0xc000871ae0 0xc000871b80 0xc000871c20 0xc000871cc0 0xc000871d60 0xc000871e00 0xc000871ea0 0xc000871f40 0xc0001be000 0xc0001be280 0xc0001bf900 0xc0001bf9a0 0xc0001bfa40 0xc0001bfd60] [0xc000871ea0 0xc0001be280 0xc000871e00 0xc0008719a0 0xc0001bfd60 0xc000871680 0xc0001bf900 0xc0008717c0 0xc0001be000 0xc000871f40 0xc000871ae0 0xc000871a40 0xc000871860 0xc000871c20 0xc000871d60 0xc000871b80 0xc000871720 0xc000871900 0xc000871cc0 0xc0001bf9a0 0xc0001bfa40] map[104:0xc0001bfd60 118:0xc0001bf9a0] [] -1 0 0xc0003b8f60 true 0x215dc20 []} 2025-08-13T20:07:26.749909171+00:00 stderr F I0813 20:07:26.748951 1 cmd.go:93] (*installerpod.InstallOptions)(0xc00053cd00)({ 
2025-08-13T20:07:26.749909171+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:07:26.749909171+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:07:26.749909171+00:00 stderr F Revision: (string) (len=1) "8", 2025-08-13T20:07:26.749909171+00:00 stderr F NodeName: (string) "", 2025-08-13T20:07:26.749909171+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-scheduler", 2025-08-13T20:07:26.749909171+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:07:26.749909171+00:00 stderr F SecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=20) "scheduler-kubeconfig", 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=16) "policy-configmap" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F CertSecretNames: ([]string) (len=1 cap=1) { 2025-08-13T20:07:26.749909171+00:00 stderr F (string) (len=30) "kube-scheduler-client-cert-key" 2025-08-13T20:07:26.749909171+00:00 stderr F }, 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:07:26.749909171+00:00 stderr F CertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:07:26.749909171+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:07:26.749909171+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", 2025-08-13T20:07:26.749909171+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:07:26.749909171+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:07:26.749909171+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:07:26.749909171+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:07:26.749909171+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:07:26.749909171+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:07:26.749909171+00:00 stderr F }) 2025-08-13T20:07:26.759987490+00:00 stderr F I0813 20:07:26.756597 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:07:26.890296186+00:00 stderr F I0813 20:07:26.890185 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:07:26.906916322+00:00 stderr F I0813 20:07:26.900969 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:07:56.901688677+00:00 stderr F I0813 20:07:56.901382 1 cmd.go:521] Getting installer pods for node crc 
2025-08-13T20:07:56.915142843+00:00 stderr F I0813 20:07:56.913745 1 cmd.go:539] Latest installer revision for node crc is: 8 2025-08-13T20:07:56.915142843+00:00 stderr F I0813 20:07:56.913829 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:07:56.918199111+00:00 stderr F I0813 20:07:56.918136 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:07:56.918225582+00:00 stderr F I0813 20:07:56.918198 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8" ... 2025-08-13T20:07:56.919940471+00:00 stderr F I0813 20:07:56.918696 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8" ... 2025-08-13T20:07:56.919940471+00:00 stderr F I0813 20:07:56.918734 1 cmd.go:226] Getting secrets ... 2025-08-13T20:07:56.926031345+00:00 stderr F I0813 20:07:56.925940 1 copy.go:32] Got secret openshift-kube-scheduler/localhost-recovery-client-token-8 2025-08-13T20:07:56.931971486+00:00 stderr F I0813 20:07:56.930266 1 copy.go:32] Got secret openshift-kube-scheduler/serving-cert-8 2025-08-13T20:07:56.931971486+00:00 stderr F I0813 20:07:56.930330 1 cmd.go:239] Getting config maps ... 2025-08-13T20:07:56.940625444+00:00 stderr F I0813 20:07:56.940492 1 copy.go:60] Got configMap openshift-kube-scheduler/config-8 2025-08-13T20:07:56.951428843+00:00 stderr F I0813 20:07:56.946266 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-cert-syncer-kubeconfig-8 2025-08-13T20:07:56.951934968+00:00 stderr F I0813 20:07:56.951697 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-pod-8 2025-08-13T20:07:56.960256417+00:00 stderr F I0813 20:07:56.959273 1 copy.go:60] Got configMap openshift-kube-scheduler/scheduler-kubeconfig-8 2025-08-13T20:07:56.971857669+00:00 stderr F I0813 20:07:56.971645 1 copy.go:60] Got configMap openshift-kube-scheduler/serviceaccount-ca-8 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.979995 1 copy.go:52] Failed to get config map openshift-kube-scheduler/policy-configmap-8: configmaps "policy-configmap-8" not found 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980054 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980286 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980565 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980868 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981019 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981161 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert" ... 
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981239 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert/tls.crt" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981328 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert/tls.key" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981428 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/config" ... 2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981513 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/config/config.yaml" ... 2025-08-13T20:07:56.981986470+00:00 stderr F I0813 20:07:56.981908 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-cert-syncer-kubeconfig" ... 2025-08-13T20:07:56.981986470+00:00 stderr F I0813 20:07:56.981980 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:07:56.982170095+00:00 stderr F I0813 20:07:56.982079 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod" ... 2025-08-13T20:07:56.982186575+00:00 stderr F I0813 20:07:56.982176 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/version" ... 2025-08-13T20:07:56.982357660+00:00 stderr F I0813 20:07:56.982274 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/forceRedeploymentReason" ... 2025-08-13T20:07:56.982442893+00:00 stderr F I0813 20:07:56.982391 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/pod.yaml" ... 2025-08-13T20:07:56.982599657+00:00 stderr F I0813 20:07:56.982513 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/scheduler-kubeconfig" ... 2025-08-13T20:07:56.982660939+00:00 stderr F I0813 20:07:56.982614 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/scheduler-kubeconfig/kubeconfig" ... 2025-08-13T20:07:56.983564815+00:00 stderr F I0813 20:07:56.983457 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/serviceaccount-ca" ... 2025-08-13T20:07:56.983733740+00:00 stderr F I0813 20:07:56.983647 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:07:56.986030016+00:00 stderr F I0813 20:07:56.984629 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs" ... 2025-08-13T20:07:56.986030016+00:00 stderr F I0813 20:07:56.984666 1 cmd.go:226] Getting secrets ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.109998 1 copy.go:32] Got secret openshift-kube-scheduler/kube-scheduler-client-cert-key 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110076 1 cmd.go:239] Getting config maps ... 
2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110090 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110125 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.crt" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110429 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110567 1 cmd.go:332] Getting pod configmaps/kube-scheduler-pod-8 -n openshift-kube-scheduler 2025-08-13T20:07:57.307066400+00:00 stderr F I0813 20:07:57.307008 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:07:57.307175413+00:00 stderr F I0813 20:07:57.307143 1 cmd.go:376] Writing a pod under "kube-scheduler-pod.yaml" key 2025-08-13T20:07:57.307175413+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"8","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"p
ath":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332129 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml" ... 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332700 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332713 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 
2025-08-13T20:07:57.336503384+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"8","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlane
MachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043233033077 5ustar 
zuulzuul
././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043233033077 5ustar zuulzuul
././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015073043233033067 0ustar zuulzuul
././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043233033077 5ustar zuulzuul
././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015073043233033104 0ustar zuulzuul2025-10-13T00:17:28.710064655+00:00 stderr F time="2025-10-13T00:17:28Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2025-10-13T00:17:33.660805260+00:00 stderr F time="2025-10-13T00:17:33Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2025-10-13T00:17:33.660805260+00:00 stderr F time="2025-10-13T00:17:33Z" level=info msg="stopped caching cpu profile data" address="localhost:6060"
././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043233033077 5ustar zuulzuul
././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015073043233033067 0ustar zuulzuul
././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043233033045 5ustar zuulzuul
././@LongLink0000644000000000000000000000032700000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043233033045 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000040340715073043233033057 0ustar zuulzuul2025-10-13T00:14:59.214507863+00:00 stderr F I1013 00:14:59.207713 1 cmd.go:241] Using service-serving-cert provided certificates 2025-10-13T00:14:59.214507863+00:00 stderr F I1013 00:14:59.207840 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:59.214507863+00:00 stderr F I1013 00:14:59.209098 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:59.255897353+00:00 stderr F I1013 00:14:59.255540 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-10-13T00:15:00.010338176+00:00 stderr F I1013 00:15:00.007884 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:00.010338176+00:00 stderr F W1013 00:15:00.008630 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:00.010338176+00:00 stderr F W1013 00:15:00.008637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:00.012992516+00:00 stderr F I1013 00:15:00.011519 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:00.012992516+00:00 stderr F I1013 00:15:00.011872 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock... 
2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015632 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015663 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015778 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015825 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015857 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015896 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.015950 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.016034 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:00.016845111+00:00 stderr F I1013 00:15:00.016090 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:00.118794656+00:00 stderr F I1013 00:15:00.117149 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:00.118794656+00:00 stderr F I1013 00:15:00.117209 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:00.118794656+00:00 stderr F I1013 00:15:00.117377 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:19:50.299512452+00:00 stderr F I1013 00:19:50.297799 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock 2025-10-13T00:19:50.299512452+00:00 stderr F I1013 00:19:50.298516 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41853", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_c839c157-9676-4cfd-a993-f649c8bfa865 became leader 2025-10-13T00:19:50.301282864+00:00 stderr F I1013 00:19:50.301242 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:19:50.304407577+00:00 stderr F I1013 00:19:50.304278 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota 
EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:19:50.305103198+00:00 stderr F I1013 00:19:50.305010 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:19:50.368630797+00:00 stderr F I1013 00:19:50.368537 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2025-10-13T00:19:50.375494751+00:00 stderr F I1013 00:19:50.375417 1 
base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-10-13T00:19:50.375537672+00:00 stderr F I1013 00:19:50.375528 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-10-13T00:19:50.375937294+00:00 stderr F I1013 00:19:50.375839 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-10-13T00:19:50.380587153+00:00 stderr F I1013 00:19:50.380480 1 certrotationcontroller.go:886] Starting CertRotation 2025-10-13T00:19:50.380587153+00:00 stderr F I1013 00:19:50.380568 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-10-13T00:19:50.381614013+00:00 stderr F I1013 00:19:50.381555 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-10-13T00:19:50.381781978+00:00 stderr F I1013 00:19:50.381735 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-10-13T00:19:50.381895882+00:00 stderr F I1013 00:19:50.381834 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-10-13T00:19:50.381999695+00:00 stderr F I1013 00:19:50.381976 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-10-13T00:19:50.382018375+00:00 stderr F I1013 00:19:50.382008 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2025-10-13T00:19:50.382058576+00:00 stderr F I1013 00:19:50.382026 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2025-10-13T00:19:50.382072017+00:00 stderr F I1013 00:19:50.382056 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2025-10-13T00:19:50.382115318+00:00 stderr F I1013 00:19:50.382086 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 2025-10-13T00:19:50.382841280+00:00 stderr F I1013 00:19:50.382794 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:19:50.383309774+00:00 stderr F I1013 00:19:50.383263 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-10-13T00:19:50.383428157+00:00 stderr F I1013 00:19:50.383403 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2025-10-13T00:19:50.383863680+00:00 stderr F I1013 00:19:50.383835 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 2025-10-13T00:19:50.384414616+00:00 stderr F I1013 00:19:50.384313 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2025-10-13T00:19:50.386288852+00:00 stderr F I1013 00:19:50.386240 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2025-10-13T00:19:50.386732255+00:00 stderr F I1013 00:19:50.386646 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:19:50.386909491+00:00 stderr F I1013 00:19:50.386837 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2025-10-13T00:19:50.386988453+00:00 stderr F I1013 00:19:50.386942 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 2025-10-13T00:19:50.386988453+00:00 stderr F I1013 00:19:50.386974 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController 2025-10-13T00:19:50.387021394+00:00 stderr F I1013 00:19:50.386987 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 
2025-10-13T00:19:50.387904060+00:00 stderr F I1013 00:19:50.387830 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-10-13T00:19:50.387997133+00:00 stderr F I1013 00:19:50.387944 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:19:50.388132397+00:00 stderr F I1013 00:19:50.388073 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-10-13T00:19:50.388132397+00:00 stderr F I1013 00:19:50.388081 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-10-13T00:19:50.388132397+00:00 stderr F I1013 00:19:50.388126 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-10-13T00:19:50.388191909+00:00 stderr F I1013 00:19:50.388140 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-10-13T00:19:50.388206929+00:00 stderr F I1013 00:19:50.388195 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2025-10-13T00:19:50.388206929+00:00 stderr F I1013 00:19:50.388195 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2025-10-13T00:19:50.388262531+00:00 stderr F I1013 00:19:50.388228 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-10-13T00:19:50.388681103+00:00 stderr F I1013 00:19:50.388612 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-10-13T00:19:50.389389644+00:00 stderr F I1013 00:19:50.389075 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-10-13T00:19:50.389389644+00:00 stderr F I1013 00:19:50.389246 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:19:50.389389644+00:00 stderr F I1013 00:19:50.389303 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:19:50.389389644+00:00 stderr F I1013 00:19:50.389380 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-10-13T00:19:50.389984682+00:00 stderr F I1013 00:19:50.389940 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-10-13T00:19:50.404626278+00:00 stderr F I1013 00:19:50.404482 1 termination_observer.go:145] Starting TerminationObserver 2025-10-13T00:19:50.469138806+00:00 stderr F I1013 00:19:50.469030 1 base_controller.go:73] Caches are synced for SCCReconcileController 2025-10-13T00:19:50.469138806+00:00 stderr F I1013 00:19:50.469087 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ... 2025-10-13T00:19:50.482169214+00:00 stderr F I1013 00:19:50.482049 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-10-13T00:19:50.482169214+00:00 stderr F I1013 00:19:50.482104 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-10-13T00:19:50.482169214+00:00 stderr F I1013 00:19:50.482117 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2025-10-13T00:19:50.482169214+00:00 stderr F I1013 00:19:50.482153 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 2025-10-13T00:19:50.484708949+00:00 stderr F I1013 00:19:50.484218 1 base_controller.go:73] Caches are synced for EventWatchController 2025-10-13T00:19:50.484708949+00:00 stderr F I1013 00:19:50.484261 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ... 
2025-10-13T00:19:50.484708949+00:00 stderr F I1013 00:19:50.484415 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver 2025-10-13T00:19:50.484708949+00:00 stderr F I1013 00:19:50.484446 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ... 2025-10-13T00:19:50.486617766+00:00 stderr F I1013 00:19:50.486570 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources 2025-10-13T00:19:50.486617766+00:00 stderr F I1013 00:19:50.486593 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ... 2025-10-13T00:19:50.487045369+00:00 stderr F I1013 00:19:50.486997 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:19:50.487045369+00:00 stderr F I1013 00:19:50.487034 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:19:50.488458891+00:00 stderr F I1013 00:19:50.488408 1 base_controller.go:73] Caches are synced for StaticPodStateFallback 2025-10-13T00:19:50.488458891+00:00 stderr F I1013 00:19:50.488435 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ... 2025-10-13T00:19:50.488618325+00:00 stderr F I1013 00:19:50.488556 1 base_controller.go:73] Caches are synced for RevisionController 2025-10-13T00:19:50.488618325+00:00 stderr F I1013 00:19:50.488607 1 base_controller.go:73] Caches are synced for NodeController 2025-10-13T00:19:50.488722738+00:00 stderr F I1013 00:19:50.488645 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-10-13T00:19:50.488722738+00:00 stderr F I1013 00:19:50.488674 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition 2025-10-13T00:19:50.488722738+00:00 stderr F I1013 00:19:50.488684 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ... 2025-10-13T00:19:50.488722738+00:00 stderr F I1013 00:19:50.488693 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-10-13T00:19:50.488797381+00:00 stderr F I1013 00:19:50.488752 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-10-13T00:19:50.488797381+00:00 stderr F I1013 00:19:50.488781 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-10-13T00:19:50.488816651+00:00 stderr F I1013 00:19:50.488651 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-10-13T00:19:50.489243424+00:00 stderr F I1013 00:19:50.488608 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-10-13T00:19:50.489687107+00:00 stderr F I1013 00:19:50.489590 1 base_controller.go:73] Caches are synced for InstallerController 2025-10-13T00:19:50.489687107+00:00 stderr F I1013 00:19:50.489607 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-10-13T00:19:50.489723918+00:00 stderr F I1013 00:19:50.489689 1 base_controller.go:73] Caches are synced for GuardController 2025-10-13T00:19:50.489723918+00:00 stderr F I1013 00:19:50.489700 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-10-13T00:19:50.489775850+00:00 stderr F I1013 00:19:50.489736 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-10-13T00:19:50.489775850+00:00 stderr F I1013 00:19:50.489761 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 
2025-10-13T00:19:50.489905904+00:00 stderr F I1013 00:19:50.489854 1 base_controller.go:73] Caches are synced for PruneController 2025-10-13T00:19:50.489905904+00:00 stderr F I1013 00:19:50.489885 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-10-13T00:19:50.490381138+00:00 stderr F I1013 00:19:50.490346 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:19:50.490571193+00:00 stderr F I1013 00:19:50.490532 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:19:50.490571193+00:00 stderr F I1013 00:19:50.490546 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:19:50.490971705+00:00 stderr F I1013 00:19:50.490931 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-10-13T00:19:50.490971705+00:00 stderr F I1013 00:19:50.490951 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-10-13T00:19:50.491011627+00:00 stderr F I1013 00:19:50.490971 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-10-13T00:19:50.491011627+00:00 stderr F I1013 00:19:50.490979 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-10-13T00:19:50.504152917+00:00 stderr F I1013 00:19:50.504074 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.548105864+00:00 stderr F I1013 00:19:50.548012 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.650446428+00:00 stderr F I1013 00:19:50.650200 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.704516746+00:00 stderr F I1013 00:19:50.704461 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.748005809+00:00 stderr F I1013 00:19:50.747917 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.783519905+00:00 stderr F I1013 00:19:50.783449 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-10-13T00:19:50.783519905+00:00 stderr F I1013 00:19:50.783493 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-10-13T00:19:50.783721211+00:00 stderr F I1013 00:19:50.783658 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-10-13T00:19:50.903299257+00:00 stderr F I1013 00:19:50.903193 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.947519322+00:00 stderr F I1013 00:19:50.947408 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:50.987315686+00:00 stderr F I1013 00:19:50.987254 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController 2025-10-13T00:19:50.987315686+00:00 stderr F I1013 00:19:50.987281 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ... 2025-10-13T00:19:51.103463940+00:00 stderr F I1013 00:19:51.103289 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:51.146066617+00:00 stderr F I1013 00:19:51.145984 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:51.181284194+00:00 stderr F I1013 00:19:51.181138 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-10-13T00:19:51.181284194+00:00 stderr F I1013 00:19:51.181209 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-10-13T00:19:51.181284194+00:00 stderr F I1013 00:19:51.181221 1 internalloadbalancer.go:27] syncing internal loadbalancer hostnames: api-int.crc.testing 2025-10-13T00:19:51.181284194+00:00 stderr F I1013 00:19:51.181229 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181288 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181317 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181341 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181324 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181363 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181374 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181378 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181386 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181402068+00:00 stderr F I1013 00:19:51.181397 1 base_controller.go:67] Waiting for caches to sync for 
CertRotationController 2025-10-13T00:19:51.181431209+00:00 stderr F I1013 00:19:51.181419 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181431209+00:00 stderr F I1013 00:19:51.181424 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-10-13T00:19:51.181448499+00:00 stderr F I1013 00:19:51.181438 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181465160+00:00 stderr F I1013 00:19:51.181440 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:19:51.181485790+00:00 stderr F I1013 00:19:51.181443 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-10-13T00:19:51.188212100+00:00 stderr F I1013 00:19:51.188113 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-10-13T00:19:51.188212100+00:00 stderr F I1013 00:19:51.188148 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-10-13T00:19:51.346830297+00:00 stderr F I1013 00:19:51.346732 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382119 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382143 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382187 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382191 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382194 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382208 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382220350+00:00 stderr F I1013 00:19:51.382213 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382305422+00:00 stderr F I1013 00:19:51.382214 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382305422+00:00 stderr F I1013 00:19:51.382229 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382305422+00:00 stderr F I1013 00:19:51.382218 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382305422+00:00 stderr F I1013 00:19:51.382263 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382305422+00:00 stderr F I1013 00:19:51.382275 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382418386+00:00 stderr F I1013 00:19:51.382310 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382418386+00:00 stderr F I1013 00:19:51.382315 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-10-13T00:19:51.382418386+00:00 stderr F I1013 00:19:51.382367 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382418386+00:00 stderr F I1013 00:19:51.382373 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.382459167+00:00 stderr F I1013 00:19:51.382441 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:51.382480007+00:00 stderr F I1013 00:19:51.382464 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:51.384043014+00:00 stderr F I1013 00:19:51.383924 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController 2025-10-13T00:19:51.384043014+00:00 stderr F I1013 00:19:51.383968 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ... 2025-10-13T00:19:51.385476266+00:00 stderr F I1013 00:19:51.385426 1 base_controller.go:73] Caches are synced for NodeKubeconfigController 2025-10-13T00:19:51.385476266+00:00 stderr F I1013 00:19:51.385442 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ... 2025-10-13T00:19:51.544795144+00:00 stderr F I1013 00:19:51.544663 1 request.go:697] Waited for 1.159687047s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps?limit=500&resourceVersion=0 2025-10-13T00:19:51.552093391+00:00 stderr F I1013 00:19:51.552003 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:51.748209684+00:00 stderr F I1013 00:19:51.748090 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:51.946903343+00:00 stderr F I1013 00:19:51.946786 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:52.146452277+00:00 stderr F I1013 00:19:52.146340 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:52.349677090+00:00 stderr F I1013 00:19:52.349591 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:52.382436945+00:00 stderr F I1013 00:19:52.382289 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2025-10-13T00:19:52.382436945+00:00 stderr F I1013 00:19:52.382314 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 
2025-10-13T00:19:52.552664567+00:00 stderr F I1013 00:19:52.552501 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:52.745250904+00:00 stderr F I1013 00:19:52.745143 1 request.go:697] Waited for 2.358352074s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0 2025-10-13T00:19:52.747838031+00:00 stderr F I1013 00:19:52.747758 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:52.782867963+00:00 stderr F I1013 00:19:52.782791 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController 2025-10-13T00:19:52.782867963+00:00 stderr F I1013 00:19:52.782821 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ... 2025-10-13T00:19:52.968268666+00:00 stderr F I1013 00:19:52.968154 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:52.976083339+00:00 stderr F I1013 00:19:52.975991 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-10-13T00:19:52.976083339+00:00 stderr F I1013 00:19:52.976011 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-10-13T00:19:52.982294603+00:00 stderr F I1013 00:19:52.982243 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:52.982294603+00:00 stderr F I1013 00:19:52.982266 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:53.148234618+00:00 stderr F I1013 00:19:53.148095 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:53.175712765+00:00 stderr F I1013 00:19:53.175620 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-10-13T00:19:53.175712765+00:00 stderr F I1013 00:19:53.175654 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-10-13T00:19:53.176997334+00:00 stderr F I1013 00:19:53.176925 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-10-13T00:19:53.176997334+00:00 stderr F I1013 00:19:53.176946 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-10-13T00:19:53.181445136+00:00 stderr F I1013 00:19:53.181376 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:53.181445136+00:00 stderr F I1013 00:19:53.181395 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:53.181517118+00:00 stderr F I1013 00:19:53.181486 1 base_controller.go:73] Caches are synced for CertRotationController 2025-10-13T00:19:53.181517118+00:00 stderr F I1013 00:19:53.181501 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-10-13T00:19:53.181715714+00:00 stderr F I1013 00:19:53.181659 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-10-13T00:19:53.181715714+00:00 stderr F I1013 00:19:53.181686 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 
2025-10-13T00:19:53.182521938+00:00 stderr F I1013 00:19:53.182470 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-10-13T00:19:53.182521938+00:00 stderr F I1013 00:19:53.182495 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-10-13T00:19:53.182521938+00:00 stderr F I1013 00:19:53.182508 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-10-13T00:19:53.182521938+00:00 stderr F I1013 00:19:53.182512 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-10-13T00:19:53.353302776+00:00 stderr F I1013 00:19:53.353220 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:19:53.382964768+00:00 stderr F I1013 00:19:53.382900 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-10-13T00:19:53.382964768+00:00 stderr F I1013 00:19:53.382929 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-10-13T00:19:53.388658177+00:00 stderr F I1013 00:19:53.388584 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:19:53.388658177+00:00 stderr F I1013 00:19:53.388636 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:19:53.945120045+00:00 stderr F I1013 00:19:53.944853 1 request.go:697] Waited for 3.453560522s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-10-13T00:19:55.144838543+00:00 stderr F I1013 00:19:55.144493 1 request.go:697] Waited for 2.16391179s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config 2025-10-13T00:21:10.725814215+00:00 stderr F I1013 00:21:10.724153 1 core.go:358] ConfigMap "openshift-kube-apiserver/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:18Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-10-13T00:21:10Z"}],"resourceVersion":null,"uid":"449953d7-35d8-4eaf-8671-65eda2b482f7"}} 2025-10-13T00:21:10.725814215+00:00 stderr F I1013 00:21:10.725318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: 2025-10-13T00:21:10.725814215+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:21:10.737591452+00:00 stderr F I1013 00:21:10.737457 1 core.go:358] ConfigMap "openshift-config-managed/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-10-13T00:21:10Z"}],"resourceVersion":null,"uid":"1b8f54dc-8896-4a59-8c53-834fed1d81fd"}} 2025-10-13T00:21:10.745658109+00:00 stderr F I1013 00:21:10.744440 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 13 triggered by "required configmap/kubelet-serving-ca has changed" 2025-10-13T00:21:10.754548748+00:00 stderr F I1013 00:21:10.753621 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: 2025-10-13T00:21:10.754548748+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:21:10.759260224+00:00 stderr F I1013 00:21:10.759190 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:11.522015177+00:00 stderr F I1013 00:21:11.521853 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:11.921954062+00:00 stderr P I1013 00:21:11.921865 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkzt 2025-10-13T00:21:11.922035244+00:00 stderr F HP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-10-13T00:21:11.922445565+00:00 stderr F I1013 00:21:11.922393 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2025-10-13T00:21:11.922445565+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935508 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.935451075 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935552 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.935536417 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935579 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.935560958 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935600 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.935584709 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935624 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.935606239 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935647 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.93562959 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935671 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.935652491 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935725 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.935675901 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935743 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.935729933 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935765 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.935753333 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935783 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.935769884 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.935825 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.935787734 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.936154 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-10-13 00:21:11.936130333 +0000 UTC))" 2025-10-13T00:21:11.944758145+00:00 stderr F I1013 00:21:11.936460 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314499\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314499\" (2025-10-12 23:14:59 +0000 UTC to 2026-10-12 23:14:59 +0000 UTC (now=2025-10-13 00:21:11.936439632 +0000 UTC))" 2025-10-13T00:21:12.521282838+00:00 stderr F I1013 00:21:12.520870 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:13.116673031+00:00 stderr F I1013 00:21:13.116543 1 request.go:697] Waited for 1.193994609s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-apiserver-client-ca 2025-10-13T00:21:13.125254901+00:00 stderr P I1013 00:21:13.125129 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBA 2025-10-13T00:21:13.125459247+00:00 stderr F 
QC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-10-13T00:21:11Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-10-13T00:21:13.127609225+00:00 stderr F I1013 00:21:13.127513 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-10-13T00:21:13.127609225+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:21:13.722754039+00:00 stderr F I1013 00:21:13.722597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:14.117363671+00:00 stderr F I1013 00:21:14.117287 1 request.go:697] Waited for 1.197337749s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle 2025-10-13T00:21:14.720095890+00:00 stderr F I1013 00:21:14.720034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:15.730376078+00:00 stderr F I1013 00:21:15.725129 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", 
Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:16.725603941+00:00 stderr F I1013 00:21:16.725184 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:17.722850859+00:00 stderr F I1013 00:21:17.722249 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:18.725432771+00:00 stderr F I1013 00:21:18.725367 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:19.723984934+00:00 stderr F I1013 00:21:19.723890 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:20.720363018+00:00 stderr F I1013 00:21:20.720278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:22.721130213+00:00 stderr F I1013 00:21:22.721070 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:23.320471348+00:00 stderr F I1013 00:21:23.320418 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:24.125342243+00:00 stderr F I1013 00:21:24.124233 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", 
Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-13 -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:24.921609417+00:00 stderr F I1013 00:21:24.921432 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 13 triggered by "required configmap/kubelet-serving-ca has changed" 2025-10-13T00:21:24.935014447+00:00 stderr F I1013 00:21:24.934682 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 13 created because required configmap/kubelet-serving-ca has changed 2025-10-13T00:21:24.939214950+00:00 stderr F I1013 00:21:24.939143 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:21:24.956601348+00:00 stderr F W1013 00:21:24.956504 1 staticpod.go:38] revision 13 is unexpectedly already the latest available revision. This is a possible race! 2025-10-13T00:21:24.956601348+00:00 stderr F E1013 00:21:24.956538 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 13 2025-10-13T00:21:26.117148847+00:00 stderr F I1013 00:21:26.117066 1 request.go:697] Waited for 1.176798326s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-10-13T00:21:27.920448431+00:00 stderr F I1013 00:21:27.919580 1 installer_controller.go:524] node crc with revision 12 is the oldest and needs new revision 13 2025-10-13T00:21:27.920448431+00:00 stderr F I1013 00:21:27.919904 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-10-13T00:21:27.920448431+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-10-13T00:21:27.920448431+00:00 stderr F CurrentRevision: (int32) 12, 2025-10-13T00:21:27.920448431+00:00 stderr F TargetRevision: (int32) 13, 2025-10-13T00:21:27.920448431+00:00 stderr F LastFailedRevision: (int32) 1, 2025-10-13T00:21:27.920448431+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0052f1560)(2024-06-26 12:52:09 +0000 UTC), 2025-10-13T00:21:27.920448431+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-10-13T00:21:27.920448431+00:00 stderr F LastFailedCount: (int) 1, 2025-10-13T00:21:27.920448431+00:00 stderr F LastFallbackCount: (int) 0, 2025-10-13T00:21:27.920448431+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-10-13T00:21:27.920448431+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n 
(string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-10-13T00:21:27.920448431+00:00 stderr F } 2025-10-13T00:21:27.920448431+00:00 stderr F } 2025-10-13T00:21:27.932240378+00:00 stderr F I1013 00:21:27.932152 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 12 to 13 because node crc with revision 12 is the oldest 2025-10-13T00:21:27.934452687+00:00 stderr F I1013 00:21:27.934105 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-10-13T00:21:27Z","message":"NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12; 0 nodes have achieved new revision 13","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:27.934614752+00:00 stderr F I1013 00:21:27.934595 1 prune_controller.go:269] Nothing to prune 2025-10-13T00:21:27.943373107+00:00 stderr F I1013 00:21:27.941369 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12; 0 nodes have achieved new revision 13" 2025-10-13T00:21:29.117055660+00:00 stderr F I1013 00:21:29.116989 1 request.go:697] Waited for 1.179824888s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-10-13T00:21:30.525869474+00:00 stderr F I1013 00:21:30.525406 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-13-crc -n openshift-kube-apiserver because it was missing 2025-10-13T00:21:31.123701841+00:00 stderr F I1013 00:21:31.122913 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Pending phase 2025-10-13T00:21:31.717305264+00:00 stderr F I1013 00:21:31.717233 1 request.go:697] Waited for 1.187071353s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-10-13T00:21:32.917459349+00:00 stderr F I1013 00:21:32.916800 1 request.go:697] Waited for 1.393959447s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller 2025-10-13T00:21:34.519079619+00:00 stderr F I1013 00:21:34.519015 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2025-10-13T00:21:35.519773219+00:00 stderr F I1013 00:21:35.519716 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2025-10-13T00:22:09.482395523+00:00 stderr F I1013 00:22:09.481885 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase 2025-10-13T00:22:09.560870223+00:00 stderr F I1013 00:22:09.560778 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:08:47 +0000 UTC) at 2025-10-13 00:22:09 +0000 UTC 2025-10-13T00:22:10.795982687+00:00 stderr F E1013 00:22:10.795436 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.803920201+00:00 stderr F E1013 00:22:10.803869 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.816864769+00:00 stderr F E1013 00:22:10.816775 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.840220467+00:00 stderr F E1013 00:22:10.840177 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.884637452+00:00 stderr F E1013 00:22:10.884563 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:10.969286208+00:00 stderr F E1013 00:22:10.969217 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.135747414+00:00 stderr F E1013 00:22:11.135676 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.516587765+00:00 stderr F E1013 00:22:11.516521 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.638055422+00:00 stderr F I1013 00:22:11.636288 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:08:47 +0000 UTC) at 2025-10-13 00:22:11 +0000 UTC 2025-10-13T00:22:12.160877492+00:00 stderr F E1013 00:22:12.160621 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:13.444592063+00:00 stderr F E1013 00:22:13.444489 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.009256862+00:00 stderr F E1013 00:22:16.008849 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:20.796077437+00:00 stderr F E1013 00:22:20.796004 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.143174151+00:00 stderr F E1013 00:22:21.142836 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:30.795562362+00:00 stderr F E1013 00:22:30.794997 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.798946485+00:00 stderr F E1013 00:22:40.798041 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:41.630241450+00:00 stderr F E1013 00:22:41.630144 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.330067515+00:00 stderr F E1013 00:22:50.329401 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.490815733+00:00 stderr F E1013 00:22:50.490776 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.498341452+00:00 stderr F E1013 00:22:50.498291 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.498386863+00:00 stderr F E1013 00:22:50.498364 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.501083679+00:00 stderr F E1013 00:22:50.501045 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:50.508637519+00:00 stderr F E1013 00:22:50.508600 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.509607006+00:00 stderr F E1013 00:22:50.509554 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:50.521712433+00:00 stderr F E1013 00:22:50.521659 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.689923619+00:00 stderr F E1013 00:22:50.689861 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.903672703+00:00 stderr F E1013 00:22:50.901882 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:50.903672703+00:00 stderr F E1013 00:22:50.901978 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.117635633+00:00 stderr F E1013 00:22:51.117538 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:51.289851600+00:00 stderr F E1013 00:22:51.289535 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.491170638+00:00 stderr F E1013 00:22:51.491065 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.693545895+00:00 stderr F E1013 00:22:51.693484 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:51.896069177+00:00 stderr F E1013 00:22:51.895751 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.288124806+00:00 stderr F E1013 00:22:52.288038 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.491296836+00:00 stderr F E1013 00:22:52.491232 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: 
connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:52.690283699+00:00 stderr F E1013 00:22:52.690207 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.088774039+00:00 stderr F E1013 00:22:53.088656 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.491804436+00:00 stderr F E1013 00:22:53.491745 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.493005689+00:00 stderr F E1013 00:22:53.492971 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:53.908548834+00:00 stderr F E1013 00:22:53.908484 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:54.089471004+00:00 stderr F E1013 00:22:54.089303 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:54.292127749+00:00 stderr F E1013 00:22:54.291945 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:54.691633777+00:00 stderr F E1013 00:22:54.691542 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:55.093386908+00:00 stderr F E1013 00:22:55.093318 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-10-13T00:22:56.752922484+00:00 stderr F I1013 00:22:56.752518 1 helpers.go:184] lister was stale at resourceVersion=42589, live get showed resourceVersion=42932 2025-10-13T00:22:56.763490539+00:00 stderr F E1013 00:22:56.763438 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: "assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:57.316467632+00:00 stderr F I1013 00:22:57.316417 1 helpers.go:260] lister was stale at resourceVersion=42589, live get showed resourceVersion=42956 2025-10-13T00:23:25.537868490+00:00 stderr F I1013 00:23:25.533971 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:48.252784244+00:00 stderr F I1013 00:23:48.252393 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:50.840946867+00:00 stderr F I1013 00:23:50.840885 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:51.955889434+00:00 stderr F I1013 00:23:51.955419 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000164041115073043233033056 0ustar zuulzuul2025-08-13T20:01:27.619418140+00:00 stderr F I0813 20:01:27.619067 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:01:27.619418140+00:00 stderr F I0813 20:01:27.619317 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:01:27.623407753+00:00 stderr F I0813 20:01:27.623350 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:36.827605524+00:00 stderr F I0813 20:01:36.825569 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-08-13T20:01:59.521524942+00:00 stderr F I0813 20:01:59.520654 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:59.521524942+00:00 stderr F W0813 20:01:59.521433 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:59.521524942+00:00 stderr F W0813 20:01:59.521445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:02:01.273561360+00:00 stderr F I0813 20:02:01.271283 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:02:01.273561360+00:00 stderr F I0813 20:02:01.272562 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock... 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302114 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302209 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302358 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302384 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302511 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.321242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.302391 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.322353 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.322378 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:02:01.404880544+00:00 stderr F I0813 20:02:01.403667 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:02:01.423025072+00:00 stderr F I0813 20:02:01.422737 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:02:01.423450894+00:00 stderr F I0813 20:02:01.423368 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:02:05.703355729+00:00 stderr F I0813 20:02:05.700614 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock 2025-08-13T20:02:05.713348184+00:00 stderr F I0813 20:02:05.711766 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_7069e219-921b-44d2-84f4-301ccffeb8ac became leader 2025-08-13T20:02:05.759933782+00:00 stderr F I0813 20:02:05.759742 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:02:05.774889959+00:00 stderr F I0813 20:02:05.772704 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity 
BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:02:05.776863865+00:00 stderr F I0813 20:02:05.776663 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", 
"SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:02:17.879199399+00:00 stderr F I0813 20:02:17.878450 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2025-08-13T20:02:17.879984281+00:00 stderr F I0813 20:02:17.879924 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2025-08-13T20:02:17.880006932+00:00 stderr F I0813 20:02:17.879998 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:02:17.880069803+00:00 stderr F I0813 20:02:17.880021 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 2025-08-13T20:02:17.880069803+00:00 stderr F I0813 20:02:17.880056 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:02:17.880263309+00:00 stderr F I0813 20:02:17.880213 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2025-08-13T20:02:17.880322861+00:00 stderr F I0813 20:02:17.880279 1 certrotationcontroller.go:886] Starting CertRotation 2025-08-13T20:02:17.880322861+00:00 stderr F I0813 20:02:17.880304 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-08-13T20:02:17.880552497+00:00 stderr F I0813 20:02:17.880502 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:02:17.880552497+00:00 stderr F I0813 20:02:17.880537 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2025-08-13T20:02:17.881171435+00:00 stderr F I0813 20:02:17.881136 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2025-08-13T20:02:17.881326439+00:00 stderr F I0813 20:02:17.881303 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2025-08-13T20:02:17.881586467+00:00 stderr F I0813 20:02:17.881561 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 2025-08-13T20:02:17.881675539+00:00 stderr F I0813 20:02:17.881633 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:02:17.882130002+00:00 stderr F I0813 20:02:17.882093 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:02:17.882254426+00:00 stderr F I0813 20:02:17.882212 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:02:17.882353649+00:00 stderr F I0813 20:02:17.882331 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2025-08-13T20:02:17.882443481+00:00 stderr F I0813 20:02:17.882421 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T20:02:17.882553794+00:00 stderr F I0813 20:02:17.882516 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2025-08-13T20:02:17.882888394+00:00 stderr F I0813 20:02:17.881137 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 2025-08-13T20:02:17.882955656+00:00 stderr F I0813 20:02:17.882936 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController 2025-08-13T20:02:17.883161222+00:00 stderr F I0813 20:02:17.883118 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 
2025-08-13T20:02:17.903573644+00:00 stderr F I0813 20:02:17.903482 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:02:17.906328282+00:00 stderr F I0813 20:02:17.906267 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:02:17.906367934+00:00 stderr F I0813 20:02:17.906325 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:02:17.906367934+00:00 stderr F I0813 20:02:17.906339 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:02:17.913309502+00:00 stderr F I0813 20:02:17.913261 1 termination_observer.go:145] Starting TerminationObserver 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918169 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918245 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918301 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918422 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918437 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918450 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918464 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918476 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918488 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918501 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918748 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918766 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:02:17.921977849+00:00 stderr F I0813 20:02:17.921919 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:02:17.921977849+00:00 stderr F I0813 20:02:17.921959 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:02:18.011930785+00:00 stderr F I0813 20:02:18.011883 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2025-08-13T20:02:18.011993247+00:00 stderr F I0813 20:02:18.011979 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 
2025-08-13T20:02:18.012509172+00:00 stderr F E0813 20:02:18.012486 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T20:02:18.019541782+00:00 stderr F E0813 20:02:18.019469 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T20:02:18.032089430+00:00 stderr F E0813 20:02:18.032013 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T20:02:18.096188709+00:00 stderr F I0813 20:02:18.096086 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController 2025-08-13T20:02:18.096188709+00:00 stderr F I0813 20:02:18.096143 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ... 2025-08-13T20:02:18.096367914+00:00 stderr F I0813 20:02:18.096309 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:02:18.096412085+00:00 stderr F I0813 20:02:18.096397 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:02:18.097138846+00:00 stderr F I0813 20:02:18.097071 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController 2025-08-13T20:02:18.097163917+00:00 stderr F I0813 20:02:18.097117 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T20:02:18.097237749+00:00 stderr F I0813 20:02:18.097176 1 base_controller.go:73] Caches are synced for SCCReconcileController 2025-08-13T20:02:18.097237749+00:00 stderr F I0813 20:02:18.097209 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ... 2025-08-13T20:02:18.097459735+00:00 stderr F I0813 20:02:18.097399 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2025-08-13T20:02:18.097583379+00:00 stderr F I0813 20:02:18.097544 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.097727063+00:00 stderr F I0813 20:02:18.097635 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver 2025-08-13T20:02:18.097828716+00:00 stderr F I0813 20:02:18.097741 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ... 
2025-08-13T20:02:18.098194596+00:00 stderr F I0813 20:02:18.098132 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098210446+00:00 stderr F I0813 20:02:18.098192 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098220017+00:00 stderr F I0813 20:02:18.098212 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098311529+00:00 stderr F I0813 20:02:18.098264 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098359401+00:00 stderr F I0813 20:02:18.098344 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098421402+00:00 stderr F I0813 20:02:18.098375 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098501605+00:00 stderr F I0813 20:02:18.098460 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098584537+00:00 stderr F I0813 20:02:18.098566 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098637529+00:00 stderr F I0813 20:02:18.098625 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098670160+00:00 stderr F I0813 20:02:18.098345 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098703180+00:00 stderr F I0813 20:02:18.098363 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.118752002+00:00 stderr F I0813 20:02:18.118654 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:02:18.118752002+00:00 stderr F I0813 20:02:18.118719 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:02:18.118881376+00:00 stderr F I0813 20:02:18.118690 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:02:18.118927347+00:00 stderr F I0813 20:02:18.118913 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T20:02:18.119201625+00:00 stderr F I0813 20:02:18.118861 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:02:18.119241016+00:00 stderr F I0813 20:02:18.119228 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122062 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122114 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122134 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:02:18.122199531+00:00 stderr F I0813 20:02:18.122189 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:02:18.126026110+00:00 stderr F I0813 20:02:18.125901 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:02:18.296336678+00:00 stderr F I0813 20:02:18.296214 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.296719959+00:00 stderr F I0813 20:02:18.296667 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.383032062+00:00 stderr F I0813 20:02:18.382910 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2025-08-13T20:02:18.383032062+00:00 stderr F I0813 20:02:18.382952 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 2025-08-13T20:02:18.486049641+00:00 stderr F I0813 20:02:18.485949 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.685675145+00:00 stderr F I0813 20:02:18.685551 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.783402033+00:00 stderr F I0813 20:02:18.782672 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:02:18.783402033+00:00 stderr F I0813 20:02:18.782721 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T20:02:18.886902746+00:00 stderr F I0813 20:02:18.886694 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.904420416+00:00 stderr F I0813 20:02:18.904290 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:02:18.904420416+00:00 stderr F I0813 20:02:18.904336 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906616 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906658 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906658 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906679 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906694 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T20:02:18.906734692+00:00 stderr F I0813 20:02:18.906681 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:02:18.919072304+00:00 stderr F I0813 20:02:18.918963 1 base_controller.go:73] Caches are synced for StaticPodStateFallback 2025-08-13T20:02:18.919072304+00:00 stderr F I0813 20:02:18.919023 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ... 2025-08-13T20:02:18.919108845+00:00 stderr F I0813 20:02:18.919085 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:02:18.919108845+00:00 stderr F I0813 20:02:18.919093 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2025-08-13T20:02:18.919913978+00:00 stderr F I0813 20:02:18.919758 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition 2025-08-13T20:02:18.919913978+00:00 stderr F I0813 20:02:18.919904 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ... 2025-08-13T20:02:18.920003100+00:00 stderr F I0813 20:02:18.919954 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:02:18.920089033+00:00 stderr F I0813 20:02:18.919981 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:02:18.920103733+00:00 stderr F I0813 20:02:18.920091 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:02:18.920103733+00:00 stderr F I0813 20:02:18.920099 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T20:02:18.922216883+00:00 stderr F I0813 20:02:18.921920 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9 2025-08-13T20:02:18.922646015+00:00 stderr F I0813 20:02:18.922185 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:02:18.922662596+00:00 stderr F I0813 20:02:18.922647 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T20:02:18.981311399+00:00 stderr F I0813 20:02:18.981186 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:02:18.981311399+00:00 stderr F I0813 20:02:18.981246 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T20:02:19.082276589+00:00 stderr F I0813 20:02:19.082038 1 request.go:697] Waited for 1.160648461s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps?limit=500&resourceVersion=0 2025-08-13T20:02:19.108201699+00:00 stderr F I0813 20:02:19.108016 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121188 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121243 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121245 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121281 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 
2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183548 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183577 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183615 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183591 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-08-13T20:02:19.185129783+00:00 stderr F I0813 20:02:19.185067 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:02:19.284423006+00:00 stderr F I0813 20:02:19.284276 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.379709014+00:00 stderr F I0813 20:02:19.379559 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources 2025-08-13T20:02:19.379709014+00:00 stderr F I0813 20:02:19.379598 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ... 2025-08-13T20:02:19.485287026+00:00 stderr F I0813 20:02:19.485140 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.499418569+00:00 stderr F I0813 20:02:19.499274 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.499418569+00:00 stderr F I0813 20:02:19.499329 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.582021656+00:00 stderr F I0813 20:02:19.581705 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController 2025-08-13T20:02:19.582021656+00:00 stderr F I0813 20:02:19.581760 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ... 2025-08-13T20:02:19.685821037+00:00 stderr F I0813 20:02:19.685580 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698586 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698622 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698647 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698673 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698693 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698719 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698767 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698888 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698963 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699015473+00:00 stderr F I0813 20:02:19.698999 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699130016+00:00 stderr F I0813 20:02:19.699047 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699130016+00:00 stderr F I0813 20:02:19.699090 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699147557+00:00 stderr F I0813 20:02:19.699138 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699157397+00:00 stderr F I0813 20:02:19.699146 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699174 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699206 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699229 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699237 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703331 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703368 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703384 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703388 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780765 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780887 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780899 1 base_controller.go:73] Caches are synced for NodeKubeconfigController 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780925 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ... 
2025-08-13T20:02:19.889872838+00:00 stderr F I0813 20:02:19.889719 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.082710318+00:00 stderr F I0813 20:02:20.082501 1 request.go:697] Waited for 2.16078237s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/kube-system/secrets?limit=500&resourceVersion=0 2025-08-13T20:02:20.092984811+00:00 stderr F I0813 20:02:20.092914 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.293953334+00:00 stderr F I0813 20:02:20.289478 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.468287497+00:00 stderr F E0813 20:02:20.468230 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9] 2025-08-13T20:02:20.492221110+00:00 stderr F I0813 20:02:20.492157 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:02:20.499635562+00:00 stderr F I0813 20:02:20.499560 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:20.501735802+00:00 stderr F I0813 20:02:20.501518 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.502890005+00:00 stderr F I0813 20:02:20.502770 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.526226710+00:00 stderr F I0813 20:02:20.519326 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:02:20.526226710+00:00 stderr F I0813 20:02:20.519364 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:02:20.582665250+00:00 stderr F I0813 20:02:20.582493 1 base_controller.go:73] Caches are synced for EventWatchController 2025-08-13T20:02:20.582665250+00:00 stderr F I0813 20:02:20.582569 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ... 2025-08-13T20:02:20.685719830+00:00 stderr F I0813 20:02:20.685578 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.781261646+00:00 stderr F I0813 20:02:20.781129 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:02:20.781261646+00:00 stderr F I0813 20:02:20.781199 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:02:21.088760848+00:00 stderr F I0813 20:02:21.082530 1 request.go:697] Waited for 2.160698948s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller 2025-08-13T20:02:21.154002859+00:00 stderr F I0813 20:02:21.152140 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]" 2025-08-13T20:02:21.182432900+00:00 stderr F I0813 20:02:21.182253 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:21.906225198+00:00 stderr F E0813 20:02:21.905498 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:02:22.082699332+00:00 stderr F I0813 20:02:22.082563 1 request.go:697] Waited for 2.295925205s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config 2025-08-13T20:02:23.283449646+00:00 stderr F I0813 20:02:23.282137 1 request.go:697] Waited for 1.368757607s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:23.596598178+00:00 stderr F I0813 20:02:23.593187 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authenticator -n openshift-kube-apiserver because it changed 2025-08-13T20:02:23.598113781+00:00 stderr F I0813 20:02:23.598044 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 10 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T20:02:25.681928317+00:00 stderr F I0813 20:02:25.681602 1 request.go:697] Waited for 1.173333492s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:25.917500667+00:00 stderr F I0813 20:02:25.917379 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-10 -n openshift-kube-apiserver because it was missing 2025-08-13T20:02:25.919894385+00:00 stderr F I0813 20:02:25.919761 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:02:26.197572827+00:00 stderr F I0813 20:02:26.195195 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:26.202027674+00:00 stderr F I0813 20:02:26.199059 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:02:26.839246442+00:00 stderr F I0813 20:02:26.838416 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing 2025-08-13T20:02:26.890933386+00:00 stderr F I0813 20:02:26.890736 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:02:26.906316425+00:00 stderr F E0813 20:02:26.905734 1 podsecurityreadinesscontroller.go:102] "namespace:" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-config-operator?dryRun=All&fieldManager=pod-security-readiness-controller&force=false\": dial tcp 10.217.4.1:443: connect: connection refused" openshift-config-operator="(MISSING)" 2025-08-13T20:02:27.062032857+00:00 stderr F I0813 20:02:27.061872 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2024-06-27 13:28:05 +0000 UTC) at 2025-08-13 20:02:26 +0000 UTC 2025-08-13T20:02:27.282296610+00:00 stderr F I0813 20:02:27.282173 1 request.go:697] Waited for 1.078495576s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:02:27.288882048+00:00 stderr F E0813 20:02:27.288736 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.493059402+00:00 stderr F W0813 20:02:27.492835 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.493059402+00:00 stderr F E0813 20:02:27.492985 
1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.689448045+00:00 stderr F I0813 20:02:27.688426 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapCreateFailed' Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.696928148+00:00 stderr F I0813 20:02:27.696380 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RevisionCreateFailed' Failed to create revision 10: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.697385821+00:00 stderr F E0813 20:02:27.697363 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.702021973+00:00 stderr F E0813 20:02:27.701974 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.723108285+00:00 stderr F E0813 20:02:27.722548 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.749338123+00:00 stderr F E0813 20:02:27.749277 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.794133701+00:00 stderr F E0813 20:02:27.793987 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.878679733+00:00 stderr F E0813 20:02:27.878589 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.888764171+00:00 stderr F E0813 20:02:27.888701 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.043317470+00:00 stderr F E0813 20:02:28.043179 1 base_controller.go:268] RevisionController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.289375869+00:00 stderr F E0813 20:02:28.289268 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.379214582+00:00 stderr F E0813 20:02:28.379076 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.497073854+00:00 stderr F W0813 20:02:28.496880 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.497182557+00:00 stderr F E0813 20:02:28.497164 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.689759121+00:00 stderr F E0813 20:02:28.689575 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.894884463+00:00 stderr F E0813 20:02:28.894732 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.025637333+00:00 stderr F E0813 20:02:29.025366 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.088538967+00:00 stderr F E0813 20:02:29.088396 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.295313266+00:00 stderr F W0813 20:02:29.295216 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.295313266+00:00 stderr F E0813 20:02:29.295284 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.488824236+00:00 stderr F E0813 20:02:29.488450 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.505160672+00:00 stderr F I0813 20:02:29.503630 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2024-06-27 13:28:05 +0000 UTC) at 2025-08-13 20:02:28 +0000 UTC 2025-08-13T20:02:29.907564892+00:00 stderr F E0813 20:02:29.907113 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.091381636+00:00 stderr F W0813 20:02:30.088902 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.091381636+00:00 stderr F E0813 20:02:30.089000 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.299063050+00:00 stderr F E0813 20:02:30.298658 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.330682142+00:00 stderr F E0813 20:02:30.330624 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.697258979+00:00 stderr F E0813 20:02:30.697085 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: 
connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.887421723+00:00 stderr F W0813 20:02:30.887227 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.887421723+00:00 stderr F E0813 20:02:30.887279 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.086941195+00:00 stderr F E0813 20:02:31.086732 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.490093166+00:00 stderr F E0813 20:02:31.489980 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.686267992+00:00 stderr F W0813 20:02:31.686138 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.686267992+00:00 stderr F E0813 20:02:31.686205 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.875890662+00:00 stderr F E0813 20:02:31.875719 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:31.901012308+00:00 stderr F E0813 20:02:31.900911 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.913477984+00:00 stderr F E0813 20:02:31.913424 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-kube-apiserver/podnetworkconnectivitychecks": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917548790+00:00 stderr F E0813 20:02:31.917479 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:31.921520413+00:00 stderr F E0813 20:02:31.921463 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.935493062+00:00 stderr F E0813 20:02:31.935342 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.957609803+00:00 stderr F E0813 20:02:31.957460 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.999495908+00:00 stderr F 
E0813 20:02:31.999379 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.090438442+00:00 stderr F E0813 20:02:32.090324 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.253104613+00:00 stderr F E0813 20:02:32.252976 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.287631368+00:00 stderr F E0813 20:02:32.287450 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.485872313+00:00 stderr F W0813 20:02:32.485628 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.485872313+00:00 stderr F E0813 20:02:32.485740 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.575041717+00:00 stderr F E0813 20:02:32.574943 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.685457496+00:00 stderr F E0813 20:02:32.685363 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.896541708+00:00 stderr F E0813 20:02:32.896479 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.081990088+00:00 stderr F I0813 20:02:33.081929 1 request.go:697] Waited for 1.153124285s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2025-08-13T20:02:33.116142203+00:00 stderr F E0813 20:02:33.116034 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:33.219207153+00:00 stderr F E0813 20:02:33.219100 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.288288754+00:00 stderr F E0813 20:02:33.288215 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.838225562+00:00 stderr F W0813 20:02:33.837417 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.838328125+00:00 stderr F E0813 20:02:33.838272 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.895170166+00:00 stderr F E0813 20:02:33.895117 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.123866160+00:00 stderr F I0813 20:02:34.123707 1 request.go:697] Waited for 1.437381574s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:34.132409974+00:00 stderr F E0813 20:02:34.131888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.285426288+00:00 stderr F I0813 20:02:34.285325 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.287761615+00:00 stderr F E0813 20:02:34.287687 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.287761615+00:00 stderr F E0813 20:02:34.287726 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:34.501765090+00:00 stderr F E0813 20:02:34.501711 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.704875064+00:00 stderr F E0813 20:02:34.704735 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.888966325+00:00 stderr F E0813 20:02:34.888912 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.086482340+00:00 stderr F W0813 20:02:35.086334 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.086482340+00:00 stderr F E0813 20:02:35.086413 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.286871476+00:00 stderr F I0813 20:02:35.286587 1 request.go:697] Waited for 1.149563123s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:35.291251821+00:00 stderr F E0813 20:02:35.291186 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.889880789+00:00 stderr F E0813 20:02:35.889726 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:36.090603975+00:00 stderr F E0813 20:02:36.090455 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.482122994+00:00 stderr F I0813 20:02:36.482009 1 request.go:697] Waited for 1.394598594s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:02:36.485995604+00:00 stderr F W0813 20:02:36.485923 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:36.485995604+00:00 stderr F E0813 20:02:36.485981 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.686004040+00:00 stderr F E0813 20:02:36.685921 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.885666116+00:00 stderr F I0813 20:02:36.885489 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.886292603+00:00 stderr F E0813 20:02:36.886219 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.064326992+00:00 stderr F E0813 20:02:37.064202 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.109672166+00:00 stderr F E0813 20:02:37.109536 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:37.287078257+00:00 stderr F E0813 20:02:37.286993 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.682584849+00:00 stderr F I0813 20:02:37.682427 1 request.go:697] Waited for 1.195569716s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:02:37.686699647+00:00 stderr F W0813 20:02:37.686536 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.686699647+00:00 stderr F E0813 20:02:37.686594 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.886023922+00:00 stderr F E0813 20:02:37.885969 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.021723653+00:00 stderr F E0813 20:02:38.021300 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.496642781+00:00 stderr F E0813 20:02:38.496414 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:38.610677824+00:00 stderr F E0813 20:02:38.610594 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:38.682735480+00:00 stderr F I0813 20:02:38.682566 1 request.go:697] Waited for 1.3545324s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:02:38.686076805+00:00 stderr F E0813 20:02:38.685979 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.085831439+00:00 stderr F I0813 20:02:39.085615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.087639121+00:00 stderr F E0813 20:02:39.087595 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.487886129+00:00 stderr F E0813 20:02:39.487464 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.895188618+00:00 stderr F W0813 20:02:39.895017 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.895188618+00:00 stderr F E0813 20:02:39.895131 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.283222147+00:00 stderr F I0813 20:02:40.282596 1 request.go:697] Waited for 1.096328425s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:40.288259131+00:00 stderr F E0813 20:02:40.287945 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.513629800+00:00 stderr F E0813 20:02:40.513510 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:40.686668777+00:00 stderr F E0813 20:02:40.686571 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.084468975+00:00 stderr F I0813 20:02:41.084342 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.086141903+00:00 stderr F E0813 20:02:41.086113 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.215589026+00:00 stderr F E0813 20:02:41.215513 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 
2025-08-13T20:02:41.285356717+00:00 stderr F E0813 20:02:41.285243 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.682650530+00:00 stderr F E0813 20:02:41.682502 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:41.685689016+00:00 stderr F E0813 20:02:41.685538 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.885515477+00:00 stderr F E0813 20:02:41.885385 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.082715363+00:00 stderr F I0813 20:02:42.082621 1 request.go:697] Waited for 1.024058585s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:02:42.088271282+00:00 stderr F E0813 20:02:42.088207 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.186979738+00:00 stderr F E0813 20:02:42.186926 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:42.885542287+00:00 stderr F I0813 20:02:42.885433 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.887598435+00:00 stderr F E0813 20:02:42.887569 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.106315655+00:00 stderr F E0813 20:02:43.106176 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:43.288137192+00:00 stderr F E0813 20:02:43.288006 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.284390023+00:00 stderr F I0813 20:02:44.284299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.297508357+00:00 stderr F E0813 20:02:44.297295 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.087187274+00:00 stderr F E0813 20:02:45.087053 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.317925606+00:00 stderr F E0813 20:02:45.317397 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.684556595+00:00 stderr F I0813 20:02:45.684425 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.688731014+00:00 stderr F E0813 20:02:45.688693 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.087318435+00:00 stderr F E0813 20:02:46.087233 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.884228139+00:00 stderr F I0813 20:02:46.884159 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.886405971+00:00 stderr F E0813 20:02:46.886333 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.109306260+00:00 
stderr F E0813 20:02:47.109244 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:48.092381354+00:00 stderr F E0813 20:02:48.092322 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:48.287721726+00:00 stderr F I0813 20:02:48.287638 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.289058344+00:00 stderr F E0813 20:02:48.289024 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.335550790+00:00 stderr F E0813 20:02:48.335494 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.486293661+00:00 stderr F E0813 20:02:48.486221 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.049689662+00:00 stderr F E0813 20:02:49.049111 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:49.543433607+00:00 stderr F E0813 20:02:49.543292 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:49.684554953+00:00 stderr F I0813 20:02:49.684439 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.686370745+00:00 stderr F E0813 20:02:49.686338 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.885480655+00:00 stderr F E0813 20:02:49.885305 1 base_controller.go:268] auditPolicyController reconciliation 
failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.087540489+00:00 stderr F E0813 20:02:50.087422 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.697695005+00:00 stderr F W0813 20:02:50.697388 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.697695005+00:00 stderr F E0813 20:02:50.697469 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229356262+00:00 stderr F E0813 20:02:51.229271 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:51.284985769+00:00 stderr F I0813 20:02:51.284770 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.286830041+00:00 stderr F E0813 20:02:51.286659 1 base_controller.go:268] InstallerController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.485913661+00:00 stderr F E0813 20:02:51.485733 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.685286878+00:00 stderr F E0813 20:02:51.685195 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:52.176266554+00:00 stderr F E0813 20:02:52.176148 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.430383403+00:00 stderr F E0813 20:02:52.430286 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.285294732+00:00 stderr F I0813 20:02:53.285147 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.286495956+00:00 stderr F E0813 20:02:53.286258 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.493943244+00:00 stderr F E0813 20:02:53.487108 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.707885268+00:00 stderr F E0813 20:02:53.707699 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:54.086622012+00:00 stderr F E0813 20:02:54.086481 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.499963951+00:00 stderr F E0813 20:02:55.499741 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:55.686172193+00:00 stderr F E0813 20:02:55.686116 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.288966766+00:00 stderr F E0813 20:02:57.287212 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.506960085+00:00 stderr F E0813 20:02:57.506636 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:59.052462204+00:00 stderr F E0813 20:02:59.052368 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created 
ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:59.089362067+00:00 stderr F E0813 20:02:59.089260 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:59.285310937+00:00 stderr F E0813 20:02:59.285216 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.485938219+00:00 stderr F E0813 20:02:59.485835 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.711761171+00:00 stderr F E0813 20:02:59.711633 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:00.887365237+00:00 stderr F E0813 20:03:00.887225 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.232437542+00:00 stderr F E0813 20:03:01.232338 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:01.687880194+00:00 
stderr F E0813 20:03:01.687697 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:02.086353261+00:00 stderr F E0813 20:03:02.086236 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.466545123+00:00 stderr F E0813 20:03:03.466443 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.684693427+00:00 stderr F I0813 20:03:03.684563 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.686747335+00:00 stderr F E0813 20:03:03.686674 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.467702884+00:00 stderr F E0813 20:03:04.467650 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:08.822029240+00:00 stderr F E0813 20:03:08.821291 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:09.055237253+00:00 stderr F E0813 20:03:09.055153 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:09.202882104+00:00 stderr F E0813 20:03:09.200200 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:09.379519793+00:00 stderr F E0813 20:03:09.379391 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:10.016453054+00:00 stderr F E0813 20:03:10.016044 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:11.184720560+00:00 stderr F W0813 20:03:11.184594 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.184720560+00:00 stderr F E0813 20:03:11.184678 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.235424036+00:00 stderr F E0813 20:03:11.235291 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:11.690173799+00:00 stderr F E0813 
20:03:11.689975 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:12.912997853+00:00 stderr F E0813 20:03:12.912941 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.726670644+00:00 stderr F E0813 20:03:13.726599 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.125912634+00:00 stderr F E0813 20:03:18.125755 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:18.922056237+00:00 stderr F E0813 20:03:18.921896 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.924148366+00:00 stderr F E0813 20:03:18.924051 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.932113354+00:00 stderr F E0813 20:03:18.932008 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.946991928+00:00 stderr F 
E0813 20:03:18.946904 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:18.968900543+00:00 stderr F E0813 20:03:18.968646 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.010868720+00:00 stderr F E0813 20:03:19.010530 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.058148869+00:00 stderr F E0813 20:03:19.058059 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:19.100995822+00:00 stderr F E0813 20:03:19.100933 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.194039566+00:00 stderr F E0813 20:03:19.193906 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.262645363+00:00 stderr F E0813 20:03:19.262539 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:19.418278873+00:00 stderr F E0813 20:03:19.417415 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:19.591219037+00:00 stderr F E0813 20:03:19.591118 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:19.723619304+00:00 stderr F E0813 20:03:19.723502 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:20.728083109+00:00 stderr F E0813 20:03:20.727651 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.124169747+00:00 stderr F E0813 20:03:21.124066 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.237491670+00:00 stderr F E0813 20:03:21.237385 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:21.691880852+00:00 stderr F E0813 20:03:21.691747 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:22.011119850+00:00 stderr F E0813 20:03:22.010974 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.174660680+00:00 stderr F I0813 20:03:24.174209 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.176144142+00:00 stderr F E0813 20:03:24.176038 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.573362163+00:00 stderr F E0813 20:03:24.573244 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.954020392+00:00 stderr F E0813 20:03:24.953891 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.062750943+00:00 stderr F E0813 20:03:29.062580 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:29.198595688+00:00 stderr F E0813 20:03:29.198530 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:29.695720750+00:00 stderr F E0813 20:03:29.695628 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:30.557427003+00:00 stderr F E0813 20:03:30.557361 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:31.240127639+00:00 stderr F E0813 20:03:31.240077 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:31.694991844+00:00 stderr F E0813 20:03:31.694894 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:34.237457784+00:00 stderr F E0813 20:03:34.237057 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.067095360+00:00 stderr F E0813 20:03:39.066545 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:39.197105628+00:00 stderr F E0813 20:03:39.196995 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.938886849+00:00 stderr F E0813 20:03:39.938749 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.243562208+00:00 stderr F E0813 20:03:41.243196 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:41.697672863+00:00 stderr F E0813 20:03:41.697378 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:49.071193746+00:00 stderr F E0813 20:03:49.070619 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:49.200004021+00:00 stderr F E0813 20:03:49.199886 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.786542082+00:00 stderr F E0813 20:03:49.786481 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:51.262599080+00:00 stderr F E0813 20:03:51.262267 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: 
aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:51.710307585+00:00 stderr F E0813 20:03:51.710129 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:52.158916293+00:00 stderr F W0813 20:03:52.158156 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.158916293+00:00 stderr F E0813 20:03:52.158834 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.881637898+00:00 stderr F E0813 20:03:53.879692 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.102664930+00:00 stderr F E0813 20:03:59.102179 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:59.199332758+00:00 stderr F E0813 20:03:59.199234 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:00.451170349+00:00 stderr F E0813 20:04:00.448977 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:01.292417287+00:00 stderr F E0813 20:04:01.292352 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:01.734228721+00:00 stderr F E0813 20:04:01.728554 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:05.252811975+00:00 stderr F I0813 20:04:05.252134 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.261754790+00:00 stderr F E0813 20:04:05.260639 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.110825073+00:00 stderr F E0813 20:04:09.110712 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:09.200838809+00:00 stderr F E0813 20:04:09.200682 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.386987151+00:00 stderr F E0813 20:04:09.386466 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:04:11.350253577+00:00 stderr F E0813 20:04:11.349438 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:11.731660527+00:00 stderr F E0813 20:04:11.731547 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:13.412571139+00:00 stderr F E0813 20:04:13.412460 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.126151473+00:00 stderr F E0813 20:04:18.125554 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:18.922528371+00:00 stderr F E0813 20:04:18.922199 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.924317022+00:00 stderr F E0813 20:04:18.924241 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.134651123+00:00 stderr F E0813 20:04:19.134583 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.134989332+00:00 stderr F E0813 20:04:19.134963 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.136563777+00:00 stderr F E0813 20:04:19.136456 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 
10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.200251414+00:00 stderr F E0813 20:04:19.200126 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.458201473+00:00 stderr F E0813 20:04:19.457505 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:19.810122922+00:00 stderr F E0813 20:04:19.810042 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:21.354154239+00:00 stderr F E0813 20:04:21.353918 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:21.354203371+00:00 stderr F E0813 20:04:21.354157 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:21.737023152+00:00 stderr F E0813 20:04:21.736903 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:21.737290399+00:00 stderr F E0813 20:04:21.737166 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC 
m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:21.747134221+00:00 stderr F E0813 20:04:21.747030 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:24.410946792+00:00 stderr F E0813 20:04:24.408355 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:25.586371371+00:00 stderr F E0813 20:04:25.585478 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:29.312902817+00:00 stderr F E0813 20:04:29.312340 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:34.413537650+00:00 stderr F E0813 20:04:34.413133 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:35.594003465+00:00 stderr F E0813 20:04:35.590587 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 
20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:39.240500555+00:00 stderr F E0813 20:04:39.239746 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.412515322+00:00 stderr F E0813 20:04:41.412003 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.517410856+00:00 stderr F E0813 20:04:41.517330 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:44.424010949+00:00 stderr F E0813 20:04:44.423326 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:45.607638554+00:00 stderr F E0813 20:04:45.596688 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:46.880512543+00:00 stderr F E0813 20:04:46.880306 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:49.205967865+00:00 stderr F E0813 20:04:49.202745 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.944390879+00:00 stderr F E0813 20:04:52.943846 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:54.427027635+00:00 stderr F E0813 20:04:54.426478 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:55.599457919+00:00 stderr F E0813 20:04:55.599021 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 
20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:56.193565092+00:00 stderr F E0813 20:04:56.193401 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.210231627+00:00 stderr F E0813 20:04:59.204607 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:04.431498625+00:00 stderr F E0813 20:05:04.431061 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:05:05.601999314+00:00 stderr F E0813 20:05:05.601540 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 
2025-08-13T20:05:09.236661316+00:00 stderr F E0813 20:05:09.236236 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.383111080+00:00 stderr F E0813 20:05:09.382720 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.829698110+00:00 stderr F E0813 20:05:11.829275 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.091414447+00:00 stderr F W0813 20:05:14.090446 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.091414447+00:00 stderr F E0813 20:05:14.090630 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.441321307+00:00 stderr F E0813 20:05:14.440432 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:05:15.606367279+00:00 stderr F E0813 20:05:15.605755 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:05:15.806682745+00:00 stderr F E0813 20:05:15.806565 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:18.128535484+00:00 stderr F E0813 20:05:18.128411 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:05:18.923285102+00:00 stderr F E0813 20:05:18.922736 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:18.938201949+00:00 stderr F E0813 20:05:18.938058 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:19.209547759+00:00 stderr F E0813 20:05:19.209191 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:19.444272021+00:00 stderr F E0813 20:05:19.443961 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:05:19.828615827+00:00 stderr F E0813 20:05:19.828516 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:43.067359005+00:00 stderr F I0813 20:05:43.065919 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:05:43.067359005+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:43.067359005+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:05:43.067359005+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0021abea8)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:43.067359005+00:00 stderr F 
LastFailedCount: (int) 1, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:43.067359005+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:05:43.067359005+00:00 stderr F } 2025-08-13T20:05:43.067359005+00:00 stderr F } 2025-08-13T20:05:43.067359005+00:00 stderr F because static pod is ready 2025-08-13T20:05:43.114641889+00:00 stderr F I0813 20:05:43.114519 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31246 2025-08-13T20:05:43.150149946+00:00 stderr F I0813 20:05:43.148693 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 9 because static pod is ready 2025-08-13T20:05:55.437383641+00:00 stderr F I0813 20:05:55.436663 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.013147382+00:00 stderr F I0813 20:05:57.926832 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.052286792+00:00 stderr F I0813 20:05:58.052155 1 
helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31823 2025-08-13T20:05:58.128587747+00:00 stderr F I0813 20:05:58.127110 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.200418084+00:00 stderr F I0813 20:05:58.200336 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.200418084+00:00 stderr F W0813 20:05:58.200380 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.200463696+00:00 stderr F E0813 20:05:58.200427 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:05:58.314226252+00:00 stderr F I0813 20:05:58.311183 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.314226252+00:00 stderr F W0813 20:05:58.311231 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.314226252+00:00 stderr F E0813 20:05:58.311258 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:05:58.342522043+00:00 stderr F I0813 20:05:58.342053 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.342522043+00:00 stderr F W0813 20:05:58.342474 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.342522043+00:00 stderr F E0813 20:05:58.342503 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:06:00.719452509+00:00 stderr F I0813 20:06:00.719020 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:01.479830303+00:00 stderr F I0813 20:06:01.479663 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:02.789855037+00:00 stderr F I0813 20:06:02.787758 1 reflector.go:351] Caches populated for *v1.KubeAPIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.220363992+00:00 stderr F I0813 20:06:06.219590 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.797371706+00:00 stderr F I0813 20:06:06.796734 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.084738681+00:00 stderr F I0813 20:06:08.084679 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.348720890+00:00 stderr F I0813 20:06:08.348309 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:10.168909382+00:00 stderr F I0813 20:06:10.167730 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:13.305920214+00:00 stderr F I0813 20:06:13.304600 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:13.823342542+00:00 stderr F I0813 20:06:13.823278 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:15.913681751+00:00 stderr F I0813 20:06:15.912900 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:17.953297747+00:00 stderr F I0813 20:06:17.953166 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:18.997522470+00:00 stderr F I0813 20:06:18.995168 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:20.484612853+00:00 stderr F I0813 20:06:20.481273 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:20.832134275+00:00 stderr F I0813 20:06:20.830053 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:21.027976373+00:00 stderr F I0813 20:06:21.026982 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:23.123362347+00:00 stderr F I0813 20:06:23.122947 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.086211088+00:00 stderr F I0813 20:06:24.086107 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.098150370+00:00 stderr F I0813 20:06:24.098014 1 termination_observer.go:130] Observed termination of API server pod "kube-apiserver-crc" at 2025-08-13 20:04:15 +0000 UTC 2025-08-13T20:06:24.112124860+00:00 stderr F I0813 20:06:24.112002 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:24.112124860+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:24.112124860+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:06:24.112124860+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0027b9050)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:24.112124860+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n 
(string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:24.112124860+00:00 stderr F } 2025-08-13T20:06:24.112124860+00:00 stderr F } 2025-08-13T20:06:24.112124860+00:00 stderr F because static pod is ready 2025-08-13T20:06:24.143504988+00:00 stderr F I0813 20:06:24.143375 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:06:24.164448378+00:00 stderr F I0813 20:06:24.164305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 9 because static pod is ready 2025-08-13T20:06:24.687857027+00:00 stderr F I0813 20:06:24.687742 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:25.501171447+00:00 stderr F I0813 20:06:25.501098 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:25.501171447+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:25.501171447+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:06:25.501171447+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedTime: (*v1.Time)(0xc005bf0c90)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:25.501171447+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) 
\"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:25.501171447+00:00 stderr F } 2025-08-13T20:06:25.501171447+00:00 stderr F } 2025-08-13T20:06:25.501171447+00:00 stderr F because static pod is ready 2025-08-13T20:06:25.517202186+00:00 stderr F I0813 20:06:25.517044 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:25.543665974+00:00 stderr F I0813 20:06:25.543613 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=32041 2025-08-13T20:06:26.140531476+00:00 stderr F I0813 20:06:26.140387 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:27.886909364+00:00 stderr F I0813 20:06:27.886722 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.477964700+00:00 stderr F I0813 20:06:28.477299 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.803335017+00:00 stderr F I0813 20:06:28.803175 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.803641036+00:00 stderr F I0813 20:06:28.803498 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:06:29.674410552+00:00 stderr F I0813 20:06:29.671815 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:06:29.877198799+00:00 stderr F I0813 20:06:29.875523 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.889383038+00:00 stderr F I0813 20:06:29.886916 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.292428349+00:00 stderr F I0813 20:06:31.292313 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.494001678+00:00 stderr F I0813 20:06:31.490208 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:32.677649895+00:00 stderr F I0813 20:06:32.677031 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.013536475+00:00 stderr F I0813 20:06:33.013155 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.276858445+00:00 stderr F I0813 20:06:33.274143 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.278657506+00:00 stderr F I0813 20:06:33.278563 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:06:35.037602926+00:00 stderr F I0813 20:06:35.036613 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:37.144938036+00:00 stderr F I0813 20:06:37.141555 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:38.823364977+00:00 stderr F I0813 20:06:38.822602 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:42.599176403+00:00 stderr F I0813 20:06:42.598474 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.194326857+00:00 stderr F I0813 20:06:43.193966 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.197930590+00:00 stderr F I0813 20:06:43.195748 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:43.198819035+00:00 stderr F I0813 20:06:43.198697 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:43.209392969+00:00 stderr F I0813 20:06:43.209285 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:43.212251571+00:00 stderr F I0813 20:06:43.212085 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 11 triggered by "configmap \"kube-apiserver-cert-syncer-kubeconfig-10\" not found" 2025-08-13T20:06:43.224306866+00:00 stderr F I0813 20:06:43.224140 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.540526073+00:00 stderr F I0813 20:06:43.538687 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.861718771+00:00 stderr F I0813 20:06:43.861624 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10" 2025-08-13T20:06:43.863087181+00:00 stderr F E0813 20:06:43.862915 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:43.874888159+00:00 stderr F I0813 20:06:43.874700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:43.876200427+00:00 
stderr F I0813 20:06:43.876129 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:43.880994614+00:00 stderr F I0813 20:06:43.880931 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:44.686921531+00:00 stderr F I0813 20:06:44.684407 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:44.710498407+00:00 stderr F I0813 20:06:44.698668 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required 
resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]" 2025-08-13T20:06:44.795200656+00:00 stderr F E0813 20:06:44.795113 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:44.812895113+00:00 stderr F E0813 20:06:44.808500 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.007854942+00:00 stderr F I0813 20:06:45.007627 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.013050641+00:00 stderr F E0813 20:06:45.012958 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.015726088+00:00 stderr F E0813 20:06:45.015702 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.017251091+00:00 stderr F I0813 20:06:45.015751 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.035925207+00:00 stderr F I0813 20:06:45.035704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.036284657+00:00 stderr F E0813 20:06:45.036250 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.108955201+00:00 stderr F I0813 20:06:45.108736 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.119003569+00:00 stderr F E0813 20:06:45.118342 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.119003569+00:00 stderr F I0813 20:06:45.118418 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.229408244+00:00 stderr F I0813 20:06:45.229219 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.281371224+00:00 stderr F E0813 20:06:45.281270 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.281371224+00:00 stderr F I0813 20:06:45.281347 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.605049374+00:00 stderr F E0813 20:06:45.604950 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: 
[configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.605106336+00:00 stderr F I0813 20:06:45.605022 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:46.159407527+00:00 stderr F I0813 20:06:46.159252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:46.250021875+00:00 stderr F E0813 20:06:46.247318 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:46.250021875+00:00 stderr F I0813 20:06:46.247419 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:47.529952613+00:00 stderr F E0813 20:06:47.529861 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:47.530083207+00:00 stderr F I0813 20:06:47.530058 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: 
etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:47.873036359+00:00 stderr F I0813 20:06:47.871318 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:48.555591518+00:00 stderr F I0813 20:06:48.555494 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:48.954120725+00:00 stderr F I0813 20:06:48.953954 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:49.227756030+00:00 stderr F I0813 20:06:49.224391 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:49.587960507+00:00 stderr F I0813 20:06:49.587728 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:49.877065196+00:00 stderr F I0813 20:06:49.875605 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.092003138+00:00 stderr F E0813 20:06:50.091852 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:50.092003138+00:00 stderr F I0813 20:06:50.091961 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:50.490470183+00:00 stderr F I0813 20:06:50.486047 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.625026801+00:00 stderr 
F I0813 20:06:50.621506 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.721054824+00:00 stderr F I0813 20:06:50.690508 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.783372881+00:00 stderr F I0813 20:06:50.783231 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:51.189895356+00:00 stderr F I0813 20:06:51.187513 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:51.750118559+00:00 stderr F I0813 20:06:51.743548 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:53.682530292+00:00 stderr F I0813 20:06:53.680117 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:54.823520696+00:00 stderr F I0813 20:06:54.821366 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:54.949069235+00:00 stderr F I0813 20:06:54.948306 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:55.217855282+00:00 stderr F 
E0813 20:06:55.214692 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:55.223659848+00:00 stderr F I0813 20:06:55.223140 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:55.380837275+00:00 stderr F I0813 20:06:55.380586 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:55.922704220+00:00 stderr F I0813 20:06:55.920082 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:56.053962303+00:00 stderr F I0813 20:06:56.049454 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 11 triggered by "configmap \"kube-apiserver-cert-syncer-kubeconfig-10\" not found" 2025-08-13T20:06:56.360839252+00:00 stderr F I0813 20:06:56.360704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 11 created because configmap "kube-apiserver-cert-syncer-kubeconfig-10" not found 2025-08-13T20:06:56.440267529+00:00 stderr F I0813 20:06:56.440209 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:56.452133759+00:00 stderr F W0813 20:06:56.450383 1 staticpod.go:38] revision 11 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:06:56.452133759+00:00 stderr F E0813 20:06:56.450437 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 11 2025-08-13T20:06:57.529571870+00:00 stderr F I0813 20:06:57.528545 1 request.go:697] Waited for 1.102452078s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:06:59.153954122+00:00 stderr F I0813 20:06:59.153508 1 installer_controller.go:524] node crc with revision 9 is the oldest and needs new revision 11 2025-08-13T20:06:59.154122097+00:00 stderr F I0813 20:06:59.154101 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:59.154122097+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:59.154122097+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:06:59.154122097+00:00 stderr F TargetRevision: (int32) 11, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedTime: (*v1.Time)(0xc002214d08)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:59.154122097+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:59.154122097+00:00 stderr F } 2025-08-13T20:06:59.154122097+00:00 stderr F } 2025-08-13T20:06:59.241574604+00:00 stderr F I0813 20:06:59.235963 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 9 to 11 because node crc with revision 9 is the oldest 2025-08-13T20:06:59.256743019+00:00 stderr F I0813 20:06:59.256671 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:59.258615703+00:00 stderr F I0813 20:06:59.258583 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:59.267689573+00:00 stderr F I0813 20:06:59.267567 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:59.305698563+00:00 stderr F I0813 20:06:59.305637 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11" 2025-08-13T20:06:59.360396791+00:00 stderr F I0813 20:06:59.360342 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:59.364085437+00:00 stderr F I0813 20:06:59.363854 1 status_controller.go:218] 
clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:59.391010869+00:00 stderr F I0813 20:06:59.390540 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:07:00.332573145+00:00 stderr F I0813 20:07:00.332426 1 request.go:697] Waited for 1.018757499s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:01.331607958+00:00 stderr F I0813 20:07:01.328004 1 request.go:697] Waited for 1.351597851s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:07:03.219312669+00:00 stderr F I0813 20:07:03.215042 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-11-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:04.482399593+00:00 stderr F I0813 20:07:04.480938 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:05.929009659+00:00 stderr F I0813 20:07:05.928059 1 request.go:697] Waited for 1.107414811s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2025-08-13T20:07:06.134280204+00:00 stderr F I0813 20:07:06.133848 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in 
Pending phase 2025-08-13T20:07:06.931311815+00:00 stderr F I0813 20:07:06.928683 1 request.go:697] Waited for 1.128893936s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config 2025-08-13T20:07:08.540323188+00:00 stderr F I0813 20:07:08.539269 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:09.934975573+00:00 stderr F I0813 20:07:09.934565 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:10.134911265+00:00 stderr F I0813 20:07:10.134731 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:15.687022109+00:00 stderr F I0813 20:07:15.682408 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 12 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.727955753+00:00 stderr F I0813 20:07:15.726710 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:15.787348406+00:00 stderr F I0813 20:07:15.785969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:15.841157668+00:00 stderr F I0813 20:07:15.840703 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:16.958534985+00:00 stderr F I0813 20:07:16.958185 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:17.311540395+00:00 stderr F I0813 20:07:17.304820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:07:18.708619280+00:00 stderr F I0813 20:07:18.691467 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:19.950694422+00:00 stderr F I0813 20:07:19.917858 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:20.068989554+00:00 stderr F I0813 20:07:20.066161 1 request.go:697] Waited for 1.141655182s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:20.562498663+00:00 stderr F I0813 20:07:20.535507 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:21.514635910+00:00 stderr F I0813 20:07:21.512470 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:22.503481592+00:00 stderr F I0813 20:07:22.494349 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:23.613992292+00:00 stderr F I0813 20:07:23.612845 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:25.144175883+00:00 stderr F I0813 20:07:25.142508 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:25.706344610+00:00 stderr F I0813 20:07:25.704672 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", 
Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:26.723927306+00:00 stderr F I0813 20:07:26.687676 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:27.483437834+00:00 stderr F I0813 20:07:27.482905 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 12 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:27.541270120+00:00 stderr F I0813 20:07:27.533928 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:27.541270120+00:00 stderr F I0813 20:07:27.538561 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 12 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:28.676481077+00:00 stderr F I0813 20:07:28.675817 1 request.go:697] Waited for 1.140203711s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:29.865947060+00:00 stderr F I0813 20:07:29.865352 1 request.go:697] Waited for 1.175779001s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc 2025-08-13T20:07:31.498696882+00:00 stderr F I0813 20:07:31.497544 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:31.498696882+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:31.498696882+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:07:31.498696882+00:00 stderr F TargetRevision: (int32) 12, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001e57e90)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:31.498696882+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) 
\"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:07:31.498696882+00:00 stderr F } 2025-08-13T20:07:31.498696882+00:00 stderr F } 2025-08-13T20:07:31.498696882+00:00 stderr F because new revision pending 2025-08-13T20:07:32.044521161+00:00 stderr F I0813 20:07:32.044386 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.045112688+00:00 stderr F I0813 20:07:32.045042 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:32.362962951+00:00 stderr F I0813 20:07:32.328253 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 
nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.387055412+00:00 stderr F I0813 20:07:32.386819 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12" 2025-08-13T20:07:32.393638680+00:00 stderr F E0813 20:07:32.393100 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:07:32.410870024+00:00 stderr F I0813 20:07:32.408004 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.476455865+00:00 stderr F E0813 20:07:32.476356 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:07:33.066909054+00:00 stderr F I0813 20:07:33.065163 1 request.go:697] Waited for 1.014589539s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:07:33.908378810+00:00 stderr F I0813 20:07:33.906597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-12-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:34.471827254+00:00 stderr F I0813 20:07:34.470752 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:35.082344628+00:00 stderr F I0813 20:07:35.082140 1 request.go:697] Waited for 1.17853902s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:36.264957995+00:00 stderr F I0813 20:07:36.264379 1 request.go:697] Waited for 1.169600653s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:37.266086328+00:00 stderr F I0813 20:07:37.265156 1 request.go:697] Waited for 1.195307181s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller 2025-08-13T20:07:37.705277070+00:00 stderr F I0813 20:07:37.705217 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:40.448543522+00:00 stderr F I0813 20:07:40.447141 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:41.068953629+00:00 stderr F I0813 20:07:41.068385 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:42.872562730+00:00 stderr F I0813 20:07:42.872111 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:23.948041820+00:00 stderr F I0813 20:08:23.946664 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:23.987622815+00:00 stderr F I0813 20:08:23.985531 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:04:15 +0000 UTC) at 2025-08-13 20:08:23 +0000 UTC 2025-08-13T20:08:26.078102491+00:00 stderr F I0813 20:08:26.072992 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:04:15 +0000 UTC) at 2025-08-13 20:08:25 +0000 UTC 2025-08-13T20:08:29.272876957+00:00 stderr F E0813 20:08:29.272045 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.287636911+00:00 stderr F E0813 20:08:29.287217 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.301855268+00:00 stderr F E0813 20:08:29.301817 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.329675506+00:00 stderr F E0813 20:08:29.329619 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.377394244+00:00 stderr F E0813 20:08:29.377301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.464043468+00:00 stderr F E0813 20:08:29.463572 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.628308168+00:00 stderr F E0813 20:08:29.627972 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.953248174+00:00 stderr F E0813 20:08:29.953100 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.602610212+00:00 stderr F E0813 20:08:30.602162 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.890209348+00:00 stderr F E0813 20:08:31.889997 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.459982315+00:00 stderr F E0813 20:08:34.459464 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.274705807+00:00 stderr F E0813 20:08:39.274066 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.586015852+00:00 stderr F E0813 20:08:39.584968 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:49.277814194+00:00 stderr F E0813 20:08:49.276218 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:59.279422830+00:00 stderr F E0813 20:08:59.278561 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.072093276+00:00 stderr F E0813 20:09:00.072031 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:30.676249413+00:00 stderr F I0813 20:09:30.672351 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.652853485+00:00 stderr F I0813 20:09:32.652667 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.997060833+00:00 stderr F I0813 20:09:32.996997 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:33.004507667+00:00 stderr F I0813 20:09:33.004432 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:34.052182014+00:00 stderr F I0813 20:09:34.052080 1 request.go:697] Waited for 1.047880553s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:09:34.859533992+00:00 stderr F I0813 20:09:34.859391 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.055166871+00:00 stderr F I0813 20:09:35.055104 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.274030664+00:00 stderr F I0813 20:09:35.273887 1 request.go:697] Waited for 1.346065751s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:35.654672239+00:00 stderr F I0813 20:09:35.654588 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.847469136+00:00 stderr F I0813 20:09:35.846666 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.056865441+00:00 stderr F I0813 20:09:38.056747 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.855268932+00:00 stderr F I0813 20:09:38.855138 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.942880734+00:00 stderr F I0813 20:09:38.942760 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.454254595+00:00 stderr F I0813 20:09:39.454166 1 reflector.go:351] Caches populated for *v1.Service from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.870002795+00:00 stderr F I0813 20:09:39.869847 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:40.312326496+00:00 stderr F I0813 20:09:40.310888 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:40.316394453+00:00 stderr F I0813 20:09:40.313333 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:09:40.498634928+00:00 stderr F I0813 20:09:40.498514 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:40.503509538+00:00 stderr F I0813 20:09:40.503433 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:40Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:40.522071150+00:00 stderr F I0813 20:09:40.521872 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 12"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12" 2025-08-13T20:09:40.948998720+00:00 stderr F I0813 20:09:40.948523 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.268646514+00:00 stderr F I0813 20:09:41.265043 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.289537333+00:00 stderr F E0813 20:09:41.288663 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.290885212+00:00 stderr F I0813 20:09:41.290625 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.307019204+00:00 stderr F E0813 20:09:41.306859 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.308151097+00:00 stderr F I0813 20:09:41.307896 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.313935433+00:00 stderr F E0813 20:09:41.313848 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.317861915+00:00 stderr F I0813 20:09:41.317767 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 
12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.323179508+00:00 stderr F E0813 20:09:41.323077 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.364632316+00:00 stderr F I0813 20:09:41.364497 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.371219595+00:00 stderr F E0813 20:09:41.371120 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.454053880+00:00 stderr F I0813 20:09:41.453389 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.462409540+00:00 stderr F E0813 20:09:41.462300 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please 
apply your changes to the latest version and try again 2025-08-13T20:09:41.625209247+00:00 stderr F I0813 20:09:41.624176 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.625209247+00:00 stderr F I0813 20:09:41.624462 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.637270673+00:00 stderr F E0813 20:09:41.637136 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.651362987+00:00 stderr F I0813 20:09:41.651285 1 request.go:697] Waited for 1.142561947s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver 2025-08-13T20:09:41.964522376+00:00 stderr F I0813 20:09:41.962065 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.970516497+00:00 stderr F E0813 20:09:41.970466 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.613871283+00:00 stderr F I0813 20:09:42.613744 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are 
ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:42Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:42.624072166+00:00 stderr F E0813 20:09:42.623097 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.674291996+00:00 stderr F I0813 20:09:42.674173 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.255737967+00:00 stderr F I0813 20:09:43.255395 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.507659819+00:00 stderr F I0813 20:09:43.507470 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.906040130+00:00 stderr F I0813 20:09:43.905940 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:43Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:43.915460560+00:00 stderr F E0813 20:09:43.915310 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:45.063148636+00:00 stderr F I0813 20:09:45.062119 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.454994181+00:00 stderr F I0813 20:09:45.454880 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.556559583+00:00 stderr F I0813 20:09:45.556125 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:09:47.029896635+00:00 stderr F I0813 20:09:47.029137 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.431534712+00:00 stderr F I0813 20:09:49.431396 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.603868443+00:00 stderr F I0813 20:09:49.603016 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.097949529+00:00 stderr F I0813 20:09:51.095966 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.156008534+00:00 stderr F I0813 20:09:51.155372 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.346948488+00:00 stderr F I0813 20:09:51.343971 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.899874211+00:00 stderr F I0813 20:09:51.897396 1 reflector.go:351] Caches populated for *v1.KubeAPIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.322158879+00:00 stderr F I0813 20:09:52.322056 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.717963967+00:00 stderr F I0813 20:09:52.717106 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.001761305+00:00 stderr F I0813 20:09:55.001372 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.604439374+00:00 stderr F I0813 20:09:55.604041 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.691689536+00:00 stderr F I0813 20:09:55.691355 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:56.397833182+00:00 stderr F I0813 20:09:56.397367 1 termination_observer.go:130] Observed termination of API server pod "kube-apiserver-crc" at 2025-08-13 20:08:47 +0000 UTC 2025-08-13T20:09:56.793974910+00:00 stderr F I0813 20:09:56.792242 1 request.go:697] Waited for 1.18414682s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:57.991351730+00:00 stderr F I0813 20:09:57.991270 1 request.go:697] Waited for 1.191825831s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:58.200016761+00:00 stderr F I0813 20:09:58.199677 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:59.278447041+00:00 stderr F I0813 20:09:59.278325 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:00.344520857+00:00 stderr F I0813 20:10:00.344375 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:00.597497930+00:00 stderr F I0813 20:10:00.594133 1 reflector.go:351] 
Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:06.140558853+00:00 stderr F I0813 20:10:06.136477 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:08.678223861+00:00 stderr F I0813 20:10:08.677607 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:11.615709241+00:00 stderr F I0813 20:10:11.614708 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:12.447865849+00:00 stderr F I0813 20:10:12.447401 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.023764372+00:00 stderr F I0813 20:10:15.023635 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.189231935+00:00 stderr F I0813 20:10:16.187663 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.832884551+00:00 stderr F I0813 20:10:18.831311 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:19.913476462+00:00 stderr F I0813 20:10:19.913155 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:20.042624055+00:00 stderr F I0813 20:10:20.042485 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:22.281866056+00:00 stderr F I0813 20:10:22.280848 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:26.156467354+00:00 stderr F I0813 20:10:26.156096 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:28.443609229+00:00 stderr F I0813 20:10:28.442724 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:33.622891484+00:00 stderr F I0813 20:10:33.622596 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:10:33.626321162+00:00 stderr F I0813 20:10:33.622305 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:17:07.038035726+00:00 stderr F I0813 20:17:07.036853 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:19:40.315920166+00:00 stderr F I0813 20:19:40.315069 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:19:56.823831628+00:00 stderr F I0813 20:19:56.822700 1 request.go:697] Waited for 1.189846745s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:19:57.816367055+00:00 stderr F I0813 20:19:57.815909 1 request.go:697] Waited for 1.194141978s due to client-side throttling, not priority 
and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:20:33.628645173+00:00 stderr F I0813 20:20:33.627675 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:25:08.288098181+00:00 stderr F I0813 20:25:08.287198 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:29:40.318423466+00:00 stderr F I0813 20:29:40.316980 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:29:56.621923248+00:00 stderr F I0813 20:29:56.621255 1 request.go:697] Waited for 1.002703993s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:29:57.820948224+00:00 stderr F I0813 20:29:57.820346 1 request.go:697] Waited for 1.188787292s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:30:33.631933560+00:00 stderr F I0813 20:30:33.629183 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:39:40.323212651+00:00 stderr F I0813 20:39:40.322450 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:39:56.818558953+00:00 stderr F I0813 20:39:56.817863 1 request.go:697] Waited for 1.198272606s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:39:58.019908028+00:00 stderr F I0813 20:39:58.017924 1 request.go:697] Waited for 1.193313884s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:40:30.989897036+00:00 stderr F I0813 20:40:30.988880 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:40:33.630681710+00:00 stderr F I0813 20:40:33.630274 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.312511 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.320010 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.322303 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.322653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.312996 1 streamwatcher.go:111] Unexpected EOF during watch 
stream event decoding: unexpected EOF 2025-08-13T20:42:36.383656031+00:00 stderr F I0813 20:42:36.383617 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384451584+00:00 stderr F I0813 20:42:36.384429 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384734842+00:00 stderr F I0813 20:42:36.384713 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421150922+00:00 stderr F I0813 20:42:36.404204 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424615362+00:00 stderr F I0813 20:42:36.424585 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.425559339+00:00 stderr F I0813 20:42:36.425536 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.448576532+00:00 stderr F I0813 20:42:36.448525 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.449479658+00:00 stderr F I0813 20:42:36.449420 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451915479+00:00 stderr F I0813 20:42:36.313036 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454242126+00:00 stderr F I0813 20:42:36.313051 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454667788+00:00 stderr F I0813 20:42:36.313065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509514919+00:00 stderr F I0813 20:42:36.313078 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509514919+00:00 stderr F I0813 20:42:36.313093 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.519564509+00:00 stderr F I0813 20:42:36.313106 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.519564509+00:00 stderr F I0813 20:42:36.313119 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540144902+00:00 stderr F I0813 20:42:36.313131 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313156 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313168 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313214 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313270 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 
20:42:36.313287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.573976478+00:00 stderr F I0813 20:42:36.313298 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.574460152+00:00 stderr F I0813 20:42:36.313311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.574851103+00:00 stderr F I0813 20:42:36.313322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575304116+00:00 stderr F I0813 20:42:36.313332 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575304116+00:00 stderr F I0813 20:42:36.313343 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313376 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313387 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313403 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313415 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576568523+00:00 stderr F I0813 20:42:36.313426 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576968474+00:00 stderr F I0813 20:42:36.313443 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.582537195+00:00 stderr F I0813 20:42:36.313454 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583345418+00:00 stderr F I0813 20:42:36.313467 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583345418+00:00 stderr F I0813 20:42:36.313483 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583700218+00:00 stderr F I0813 20:42:36.313494 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.589248298+00:00 stderr F I0813 20:42:36.313522 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313539 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313615 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.319488 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.451063 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: 
unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.469514 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.450401166+00:00 stderr F E0813 20:42:39.449691 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.458503730+00:00 stderr F E0813 20:42:39.458413 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.472485983+00:00 stderr F E0813 20:42:39.472408 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.496672820+00:00 stderr F E0813 20:42:39.496580 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.541319437+00:00 stderr F E0813 20:42:39.541106 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.626199914+00:00 stderr F E0813 20:42:39.626025 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.789937274+00:00 stderr F E0813 20:42:39.789271 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.112324339+00:00 stderr F E0813 20:42:40.112277 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.694097262+00:00 stderr F I0813 20:42:40.693760 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.694507514+00:00 stderr F I0813 20:42:40.694398 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:40.694530464+00:00 stderr F I0813 20:42:40.694496 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694530464+00:00 stderr F I0813 20:42:40.694507 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694528 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694534 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694542 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694577336+00:00 stderr F I0813 20:42:40.694558 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694577336+00:00 stderr F I0813 20:42:40.694564 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694596276+00:00 stderr F I0813 20:42:40.694587 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694596276+00:00 stderr F I0813 20:42:40.694592 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694607 1 base_controller.go:172] Shutting down BoundSATokenSignerController ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694614 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694648 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694654 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694691069+00:00 stderr F I0813 20:42:40.694671 1 base_controller.go:172] Shutting down KubeAPIServerStaticResources ... 2025-08-13T20:42:40.694705539+00:00 stderr F I0813 20:42:40.694691 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:40.694719900+00:00 stderr F I0813 20:42:40.694704 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:42:40.694734100+00:00 stderr F I0813 20:42:40.694717 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:40.694748590+00:00 stderr F I0813 20:42:40.694731 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694910 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694951 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694966 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694979 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694994 1 base_controller.go:172] Shutting down StartupMonitorPodCondition ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695009 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695021 1 base_controller.go:172] Shutting down StaticPodStateFallback ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695036 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695049 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:40.695271006+00:00 stderr F I0813 20:42:40.695200 1 base_controller.go:172] Shutting down StatusSyncer_kube-apiserver ... 2025-08-13T20:42:40.695320867+00:00 stderr F I0813 20:42:40.695290 1 base_controller.go:172] Shutting down PodSecurityReadinessController ... 
2025-08-13T20:42:40.695376439+00:00 stderr F I0813 20:42:40.695358 1 certrotationcontroller.go:899] Shutting down CertRotation 2025-08-13T20:42:40.695475131+00:00 stderr F I0813 20:42:40.695445 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.695549334+00:00 stderr F I0813 20:42:40.695531 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.694746 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696122 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.695061 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696139 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696150 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696153 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696170 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696182 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696187 1 base_controller.go:172] Shutting down webhookSupportabilityController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696192 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696200 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696211 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696214 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696259 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696259 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696271 1 base_controller.go:114] Shutting down worker of BoundSATokenSignerController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696281 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696292 1 base_controller.go:114] Shutting down worker of KubeAPIServerStaticResources controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696296 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696304 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696307 1 base_controller.go:114] Shutting down worker of GuardController controller ... 
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696313 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696315 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696322 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696325 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696330 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696334 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696337 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696343 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696344 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696352 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696362 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696366 1 base_controller.go:114] Shutting down worker of StartupMonitorPodCondition controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696368 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696374 1 base_controller.go:104] All StartupMonitorPodCondition workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696375 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696379 1 controller_manager.go:54] StartupMonitorPodCondition controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696387 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696396 1 base_controller.go:114] Shutting down worker of webhookSupportabilityController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696404 1 base_controller.go:104] All webhookSupportabilityController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696405 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696415 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696422 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696425 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696431 1 base_controller.go:114] Shutting down worker of StaticPodStateFallback controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696435 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696439 1 base_controller.go:104] All StaticPodStateFallback workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696440 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696445 1 controller_manager.go:54] StaticPodStateFallback controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696472 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696389 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696486 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696487 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696451 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696503 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696509 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696464 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696529 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696568 1 base_controller.go:172] Shutting down EventWatchController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696583 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696597 1 base_controller.go:172] Shutting down NodeKubeconfigController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696612 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696626 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696631 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696644 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696649 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696671 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696675 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696688 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696692 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696702 1 base_controller.go:114] Shutting down worker of EventWatchController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696708 1 base_controller.go:104] All EventWatchController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696720 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696728 1 base_controller.go:114] Shutting down worker of NodeKubeconfigController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696737 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr P I0813 2025-08-13T20:42:40.697432808+00:00 stderr F 20:42:40.696750 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696758 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696758 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696767 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696837 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696857 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696879 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696883 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696898 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696905 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696917 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696924 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697087 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697126 1 base_controller.go:172] Shutting down SCCReconcileController ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697139 1 base_controller.go:172] Shutting down CertRotationTimeUpgradeableController ... 
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697151 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697164 1 base_controller.go:172] Shutting down ServiceAccountIssuerController ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697176 1 base_controller.go:172] Shutting down KubeletVersionSkewController ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697318 1 base_controller.go:114] Shutting down worker of SCCReconcileController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697327 1 base_controller.go:104] All SCCReconcileController workers have been terminated 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697336 1 base_controller.go:114] Shutting down worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697344 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697358 1 base_controller.go:114] Shutting down worker of ServiceAccountIssuerController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697366 1 base_controller.go:114] Shutting down worker of KubeletVersionSkewController controller ... 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697371 1 base_controller.go:104] All KubeletVersionSkewController workers have been terminated 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697383 1 termination_observer.go:155] Shutting down TerminationObserver 2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697412 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:42:40.697462309+00:00 stderr F I0813 20:42:40.697445 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.697680035+00:00 stderr F I0813 20:42:40.697577 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.697680035+00:00 stderr F I0813 20:42:40.697663 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.697839070+00:00 stderr F I0813 20:42:40.697729 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:40.697839070+00:00 stderr F I0813 20:42:40.697827 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.697864340+00:00 stderr F I0813 20:42:40.697850 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.698041385+00:00 stderr F I0813 20:42:40.697971 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.698041385+00:00 stderr F I0813 20:42:40.698012 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.698062476+00:00 stderr F I0813 20:42:40.698041 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.698077136+00:00 stderr F I0813 20:42:40.698062 1 builder.go:330] server exited 2025-08-13T20:42:40.698211990+00:00 stderr F I0813 20:42:40.698156 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:40.698211990+00:00 stderr F I0813 20:42:40.698186 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:40.698268332+00:00 stderr F I0813 20:42:40.698211 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.698284232+00:00 stderr F I0813 20:42:40.698262 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:42:40.698284232+00:00 stderr F I0813 20:42:40.698274 1 base_controller.go:104] All NodeKubeconfigController workers have been terminated 2025-08-13T20:42:40.698298763+00:00 stderr F I0813 20:42:40.698287 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698313053+00:00 stderr F I0813 20:42:40.698298 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698313053+00:00 stderr F I0813 20:42:40.698307 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698331914+00:00 stderr F I0813 20:42:40.698321 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698346064+00:00 stderr F I0813 20:42:40.698330 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698357 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698397 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698409 1 base_controller.go:104] All CertRotationController 
workers have been terminated 2025-08-13T20:42:40.698438587+00:00 stderr F I0813 20:42:40.698416 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698438587+00:00 stderr F I0813 20:42:40.698426 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698453327+00:00 stderr F I0813 20:42:40.698436 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698453327+00:00 stderr F I0813 20:42:40.698445 1 base_controller.go:104] All BoundSATokenSignerController workers have been terminated 2025-08-13T20:42:40.698475168+00:00 stderr F I0813 20:42:40.698457 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:40.698475168+00:00 stderr F I0813 20:42:40.698467 1 base_controller.go:104] All KubeAPIServerStaticResources workers have been terminated 2025-08-13T20:42:40.698489928+00:00 stderr F I0813 20:42:40.698480 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:42:40.698504369+00:00 stderr F I0813 20:42:40.698493 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:42:40.698518789+00:00 stderr F I0813 20:42:40.698501 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:42:40.698533250+00:00 stderr F I0813 20:42:40.698515 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:42:40.698533250+00:00 stderr F I0813 20:42:40.698528 1 base_controller.go:150] All StatusSyncer_kube-apiserver post start hooks have been terminated 2025-08-13T20:42:40.698548030+00:00 stderr F I0813 20:42:40.698537 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:42:40.698548030+00:00 stderr F I0813 20:42:40.698542 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:42:40.698562680+00:00 stderr F I0813 20:42:40.698550 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:42:40.698577201+00:00 stderr F I0813 20:42:40.698560 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:42:40.698577201+00:00 stderr F I0813 20:42:40.698565 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698574 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698583 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698585 1 base_controller.go:114] Shutting down worker of PodSecurityReadinessController controller ... 
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698593 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698598 1 base_controller.go:104] All PodSecurityReadinessController workers have been terminated 2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698602 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:42:40.698623482+00:00 stderr F I0813 20:42:40.698608 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:42:40.698623482+00:00 stderr F I0813 20:42:40.698618 1 base_controller.go:104] All ServiceAccountIssuerController workers have been terminated 2025-08-13T20:42:40.698637963+00:00 stderr F I0813 20:42:40.698626 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:40.698652123+00:00 stderr F I0813 20:42:40.698636 1 base_controller.go:104] All CertRotationTimeUpgradeableController workers have been terminated 2025-08-13T20:42:40.698843558+00:00 stderr F I0813 20:42:40.698711 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-apiserver controller ... 2025-08-13T20:42:40.698843558+00:00 stderr F I0813 20:42:40.698739 1 base_controller.go:104] All StatusSyncer_kube-apiserver workers have been terminated 2025-08-13T20:42:40.700186097+00:00 stderr F E0813 20:42:40.700133 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.700392933+00:00 stderr F W0813 20:42:40.700361 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000066324315073043233033065 0ustar zuulzuul2025-08-13T19:59:10.406231987+00:00 stderr F I0813 19:59:10.403110 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:59:10.406231987+00:00 stderr F I0813 19:59:10.404001 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:10.410000464+00:00 stderr F I0813 19:59:10.409906 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:11.410106593+00:00 stderr F I0813 19:59:11.409319 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-08-13T19:59:20.080237695+00:00 stderr F I0813 19:59:20.064219 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:20.080237695+00:00 stderr F W0813 19:59:20.064941 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:20.080237695+00:00 stderr F W0813 19:59:20.064957 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:20.968396892+00:00 stderr F I0813 19:59:20.929396 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:20.968960328+00:00 stderr F I0813 19:59:20.968911 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:20.969213366+00:00 stderr F I0813 19:59:20.969182 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:20.969589386+00:00 stderr F I0813 19:59:20.969553 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018540 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018626 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018655 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.020506 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.020534 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:21.067076155+00:00 stderr F I0813 19:59:21.064635 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:21.198603954+00:00 stderr F I0813 19:59:21.198514 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:21.201436764+00:00 stderr F I0813 19:59:21.201395 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock... 
2025-08-13T19:59:21.319617853+00:00 stderr F I0813 19:59:21.219440 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:21.319942802+00:00 stderr F I0813 19:59:21.319909 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.324274 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.324414 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.337285 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.354668482+00:00 stderr F E0813 19:59:21.354211 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.354668482+00:00 stderr F E0813 19:59:21.354276 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.378231954+00:00 stderr F E0813 19:59:21.368212 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.428961370+00:00 stderr F E0813 19:59:21.427143 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.428961370+00:00 stderr F E0813 19:59:21.427270 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.582008663+00:00 stderr F E0813 19:59:21.564368 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.582008663+00:00 stderr F E0813 19:59:21.564741 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.662010263+00:00 stderr F E0813 19:59:21.659349 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.662010263+00:00 stderr F E0813 19:59:21.659493 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:21.821740476+00:00 stderr F E0813 19:59:21.821422 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.821740476+00:00 stderr F E0813 19:59:21.821479 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:21.902165139+00:00 stderr F I0813 19:59:21.896240 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock 2025-08-13T19:59:21.902165139+00:00 stderr F I0813 19:59:21.900088 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28027", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_c7f30bb9-5fae-472f-b491-3b6d1380fb20 became leader 2025-08-13T19:59:21.945262068+00:00 stderr F I0813 19:59:21.921222 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:22.226725461+00:00 stderr F E0813 19:59:22.141585 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:22.226725461+00:00 stderr F E0813 19:59:22.214588 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:22.285924848+00:00 stderr F I0813 19:59:22.260011 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", 
"MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:22.300489964+00:00 stderr F I0813 19:59:22.300383 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:22.796644677+00:00 stderr F E0813 19:59:22.788656 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:22.893581660+00:00 stderr F E0813 19:59:22.858965 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.070172439+00:00 stderr F E0813 19:59:24.069365 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.143020776+00:00 stderr F E0813 19:59:24.141008 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.649134033+00:00 stderr F E0813 19:59:26.643536 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.733443906+00:00 stderr F E0813 19:59:26.724689 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.725856005+00:00 stderr F I0813 19:59:27.724591 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2025-08-13T19:59:29.825671460+00:00 stderr F I0813 19:59:29.810612 1 request.go:697] Waited for 1.996140239s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.749243 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750474 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750492 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750698 1 certrotationcontroller.go:886] Starting CertRotation 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750725 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750753 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750770 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769353 1 termination_observer.go:145] Starting TerminationObserver 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769414 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769432 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769448 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769463 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769483 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769501 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769518 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769529 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769545 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769559 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769563 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769577 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770420 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770545 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770569 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770588 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770601 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770624 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2025-08-13T19:59:32.784351197+00:00 stderr F I0813 19:59:32.784319 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:32.784537142+00:00 stderr F I0813 19:59:32.784520 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808536 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808607 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808633 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809391 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809431 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809451 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809466 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T19:59:33.057951796+00:00 stderr F E0813 19:59:32.938639 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.057951796+00:00 stderr F E0813 19:59:32.938876 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:33.133759637+00:00 stderr F I0813 19:59:33.097995 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2025-08-13T19:59:33.133759637+00:00 stderr F I0813 19:59:33.098146 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:33.627575204+00:00 stderr F I0813 19:59:33.566350 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620676 1 
base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620732 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620975 1 base_controller.go:73] Caches are synced for StaticPodStateFallback 2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620997 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ... 2025-08-13T19:59:35.761142381+00:00 stderr F I0813 19:59:35.759386 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2025-08-13T19:59:35.769575441+00:00 stderr F I0813 19:59:35.769540 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2025-08-13T19:59:35.769643613+00:00 stderr F I0813 19:59:35.769625 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 2025-08-13T19:59:35.769994773+00:00 stderr F E0813 19:59:35.769972 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.815189132+00:00 stderr F I0813 19:59:35.815031 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:35.822472109+00:00 stderr F E0813 19:59:35.818123 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.822472109+00:00 stderr F I0813 19:59:35.820422 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T19:59:35.822472109+00:00 stderr F E0813 19:59:35.820875 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:35.939666840+00:00 stderr F E0813 19:59:35.938506 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.295905945+00:00 stderr F E0813 19:59:36.295711 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.295951286+00:00 stderr F E0813 19:59:36.295921 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.308655118+00:00 stderr F E0813 19:59:36.308104 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.471091479+00:00 stderr F E0813 19:59:36.470635 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.471091479+00:00 stderr F E0813 19:59:36.470884 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.522045071+00:00 stderr F E0813 19:59:36.520763 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:36.569665028+00:00 stderr F E0813 19:59:36.569470 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:40.217258604+00:00 stderr F I0813 19:59:40.207376 1 trace.go:236] Trace[1993198320]: "Reflector 
ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.838) (total time: 10368ms): 2025-08-13T19:59:40.217258604+00:00 stderr F Trace[1993198320]: ---"Objects listed" error: 10368ms (19:59:40.207) 2025-08-13T19:59:40.217258604+00:00 stderr F Trace[1993198320]: [10.368623989s] [10.368623989s] END 2025-08-13T19:59:40.272640772+00:00 stderr F E0813 19:59:40.262013 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:40.621248000+00:00 stderr F E0813 19:59:40.583928 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:41.229194869+00:00 stderr F E0813 19:59:41.224921 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.024455049+00:00 stderr F I0813 19:59:42.020242 1 trace.go:236] Trace[1834057747]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.787) (total time: 10582ms): 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[1834057747]: ---"Objects listed" error: 6515ms (19:59:36.303) 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[1834057747]: [10.582929308s] [10.582929308s] END 2025-08-13T19:59:42.024455049+00:00 stderr F I0813 19:59:42.021271 1 trace.go:236] Trace[2078079104]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.796) (total time: 14224ms): 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[2078079104]: ---"Objects listed" error: 14224ms (19:59:42.021) 2025-08-13T19:59:42.024455049+00:00 stderr F Trace[2078079104]: [14.224350017s] [14.224350017s] END 2025-08-13T19:59:42.024455049+00:00 stderr F E0813 19:59:42.021567 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.029403850+00:00 stderr F I0813 19:59:42.029324 1 trace.go:236] Trace[582360872]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.778) (total time: 12250ms): 2025-08-13T19:59:42.029403850+00:00 stderr F Trace[582360872]: ---"Objects listed" error: 12250ms (19:59:42.029) 2025-08-13T19:59:42.029403850+00:00 stderr F Trace[582360872]: [12.250847503s] [12.250847503s] END 2025-08-13T19:59:42.033875407+00:00 stderr F I0813 19:59:42.032164 1 trace.go:236] Trace[1034056651]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.827) (total time: 12205ms): 2025-08-13T19:59:42.033875407+00:00 stderr F Trace[1034056651]: ---"Objects listed" error: 12205ms (19:59:42.032) 2025-08-13T19:59:42.033875407+00:00 stderr F Trace[1034056651]: [12.205071418s] [12.205071418s] END 2025-08-13T19:59:42.033875407+00:00 stderr F I0813 19:59:42.032517 1 trace.go:236] Trace[518580895]: "DeltaFIFO Pop Process" ID:openshift-etcd/builder-dockercfg-sqwsk,Depth:16,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:40.368) (total time: 1663ms): 2025-08-13T19:59:42.033875407+00:00 stderr F Trace[518580895]: [1.663718075s] [1.663718075s] END 2025-08-13T19:59:42.062383520+00:00 stderr F I0813 19:59:42.062268 1 trace.go:236] Trace[1780588024]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.725) (total time: 14336ms): 
2025-08-13T19:59:42.062383520+00:00 stderr F Trace[1780588024]: ---"Objects listed" error: 14309ms (19:59:42.034) 2025-08-13T19:59:42.062383520+00:00 stderr F Trace[1780588024]: [14.336749981s] [14.336749981s] END 2025-08-13T19:59:42.096995946+00:00 stderr F I0813 19:59:42.086121 1 trace.go:236] Trace[137601519]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.785) (total time: 12300ms): 2025-08-13T19:59:42.096995946+00:00 stderr F Trace[137601519]: ---"Objects listed" error: 12300ms (19:59:42.086) 2025-08-13T19:59:42.096995946+00:00 stderr F Trace[137601519]: [12.300748394s] [12.300748394s] END 2025-08-13T19:59:42.130937464+00:00 stderr F I0813 19:59:42.129235 1 trace.go:236] Trace[134539601]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.725) (total time: 14403ms): 2025-08-13T19:59:42.130937464+00:00 stderr F Trace[134539601]: ---"Objects listed" error: 14403ms (19:59:42.128) 2025-08-13T19:59:42.130937464+00:00 stderr F Trace[134539601]: [14.403338048s] [14.403338048s] END 2025-08-13T19:59:42.133862377+00:00 stderr F I0813 19:59:42.131372 1 trace.go:236] Trace[1185044808]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 14363ms): 2025-08-13T19:59:42.133862377+00:00 stderr F Trace[1185044808]: ---"Objects listed" error: 14362ms (19:59:42.131) 2025-08-13T19:59:42.133862377+00:00 stderr F Trace[1185044808]: [14.363246906s] [14.363246906s] END 2025-08-13T19:59:42.133862377+00:00 stderr F E0813 19:59:42.133728 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.143759409+00:00 stderr F E0813 19:59:42.133993 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:42.154493295+00:00 stderr F I0813 19:59:42.150129 1 trace.go:236] Trace[1599639947]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.814) (total time: 12335ms): 2025-08-13T19:59:42.154493295+00:00 stderr F Trace[1599639947]: ---"Objects listed" error: 12335ms (19:59:42.150) 2025-08-13T19:59:42.154493295+00:00 stderr F Trace[1599639947]: [12.335962369s] [12.335962369s] END 2025-08-13T19:59:42.177003577+00:00 stderr F I0813 19:59:42.175075 1 trace.go:236] Trace[638251685]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.839) (total time: 12335ms): 2025-08-13T19:59:42.177003577+00:00 stderr F Trace[638251685]: ---"Objects listed" error: 12335ms (19:59:42.174) 2025-08-13T19:59:42.177003577+00:00 stderr F Trace[638251685]: [12.33529162s] [12.33529162s] END 2025-08-13T19:59:42.750373270+00:00 stderr F I0813 19:59:42.183435 1 trace.go:236] Trace[27614155]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.785) (total time: 14398ms): 2025-08-13T19:59:42.750373270+00:00 stderr F Trace[27614155]: ---"Objects listed" error: 14397ms (19:59:42.183) 2025-08-13T19:59:42.750373270+00:00 stderr F Trace[27614155]: [14.398033837s] [14.398033837s] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.104636 1 trace.go:236] Trace[2014468090]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver-operator/builder-dockercfg-2cs69,Depth:13,Reason:slow event handlers blocking the queue 
(13-Aug-2025 19:59:42.203) (total time: 899ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[2014468090]: [899.704855ms] [899.704855ms] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.104728 1 trace.go:236] Trace[2080221332]: "DeltaFIFO Pop Process" ID:openshift,Depth:58,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.205) (total time: 899ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[2080221332]: [899.1632ms] [899.1632ms] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.105632 1 trace.go:236] Trace[1806978334]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.778) (total time: 13326ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1806978334]: ---"Objects listed" error: 12623ms (19:59:42.402) 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1806978334]: [13.326226606s] [13.326226606s] END 2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:42.191421 1 trace.go:236] Trace[1088521478]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.797) (total time: 14394ms): 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1088521478]: ---"Objects listed" error: 14393ms (19:59:42.190) 2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1088521478]: [14.394350463s] [14.394350463s] END 2025-08-13T19:59:43.107256163+00:00 stderr F I0813 19:59:43.107191 1 trace.go:236] Trace[924750758]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.781) (total time: 13320ms): 2025-08-13T19:59:43.107256163+00:00 stderr F Trace[924750758]: ---"Objects listed" error: 13320ms (19:59:43.106) 2025-08-13T19:59:43.107256163+00:00 stderr F Trace[924750758]: [13.320284976s] [13.320284976s] END 2025-08-13T19:59:43.135742825+00:00 stderr F I0813 19:59:43.135669 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver 2025-08-13T19:59:43.135915400+00:00 stderr F I0813 19:59:43.135899 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ... 2025-08-13T19:59:43.135977682+00:00 stderr F I0813 19:59:43.135964 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController 2025-08-13T19:59:43.136017843+00:00 stderr F I0813 19:59:43.135997 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T19:59:43.136067364+00:00 stderr F I0813 19:59:43.136054 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:43.136098325+00:00 stderr F I0813 19:59:43.136086 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:43.136177638+00:00 stderr F I0813 19:59:43.136162 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:43.136218839+00:00 stderr F I0813 19:59:43.136205 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T19:59:43.136698632+00:00 stderr F I0813 19:59:43.136636 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:43.161048897+00:00 stderr F I0813 19:59:43.160759 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 
2025-08-13T19:59:43.171934617+00:00 stderr F I0813 19:59:43.171879 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.203209 1 trace.go:236] Trace[1013357871]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.783) (total time: 14419ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1013357871]: ---"Objects listed" error: 14419ms (19:59:42.202) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1013357871]: [14.419417737s] [14.419417737s] END 2025-08-13T19:59:44.502769353+00:00 stderr F E0813 19:59:42.203284 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.203738 1 trace.go:236] Trace[904353601]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.785) (total time: 14418ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[904353601]: ---"Objects listed" error: 14418ms (19:59:42.203) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[904353601]: [14.418199793s] [14.418199793s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.253665 1 trace.go:236] Trace[1900812755]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.818) (total time: 14435ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1900812755]: ---"Objects listed" error: 14435ms (19:59:42.253) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1900812755]: [14.435143226s] [14.435143226s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.313282 1 trace.go:236] Trace[800968062]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.814) (total time: 12498ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[800968062]: ---"Objects listed" error: 12498ms (19:59:42.313) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[800968062]: [12.498837281s] [12.498837281s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.402149 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.421581 1 trace.go:236] Trace[222516314]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/bootstrap-kube-apiserver-crc.17dc8ed8c9d1f157,Depth:239,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.191) (total time: 230ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[222516314]: [230.016326ms] [230.016326ms] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.699445 1 trace.go:236] Trace[1849060913]: "DeltaFIFO Pop Process" ID:kube-system/builder-dockercfg-kkqp2,Depth:33,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.161) (total time: 537ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1849060913]: [537.766968ms] [537.766968ms] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.704628 1 trace.go:236] Trace[1382711218]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.986) (total time: 12718ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1382711218]: ---"Objects listed" error: 12718ms (19:59:42.704) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1382711218]: [12.718383669s] [12.718383669s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.750164 1 
certrotationcontroller.go:869] Finished waiting for CertRotation 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.064057 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.096032 1 trace.go:236] Trace[1559849221]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:28.000) (total time: 15095ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1559849221]: ---"Objects listed" error: 15094ms (19:59:43.094) 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1559849221]: [15.095690773s] [15.095690773s] END 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176166 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176183 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176196 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.188026 1 base_controller.go:73] Caches are synced for SCCReconcileController 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.280637 1 trace.go:236] Trace[839686592]: "DeltaFIFO Pop Process" ID:control-plane-machine-set-operator,Depth:204,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.179) (total time: 100ms): 2025-08-13T19:59:44.502769353+00:00 stderr F Trace[839686592]: [100.798494ms] [100.798494ms] END 2025-08-13T19:59:44.502769353+00:00 stderr F E0813 19:59:43.465159 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:44.501106 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:44.505865951+00:00 stderr F I0813 19:59:44.503724 1 trace.go:236] Trace[1060106757]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.818) (total time: 14684ms): 2025-08-13T19:59:44.505865951+00:00 stderr F Trace[1060106757]: ---"Objects listed" error: 14684ms (19:59:44.503) 2025-08-13T19:59:44.505865951+00:00 stderr F Trace[1060106757]: [14.684920135s] [14.684920135s] END 2025-08-13T19:59:44.702386043+00:00 stderr F I0813 19:59:44.702232 1 trace.go:236] Trace[1629861906]: "DeltaFIFO Pop Process" ID:system:controller:disruption-controller,Depth:111,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.363) (total time: 1339ms): 2025-08-13T19:59:44.702386043+00:00 stderr F Trace[1629861906]: [1.339129512s] [1.339129512s] END 2025-08-13T19:59:44.837178345+00:00 stderr F I0813 19:59:44.837078 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:44.935728074+00:00 stderr F I0813 19:59:44.905141 1 trace.go:236] Trace[740019728]: "DeltaFIFO Pop Process" ID:system:openshift:openshift-controller-manager:image-trigger-controller,Depth:33,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.704) (total time: 200ms): 2025-08-13T19:59:44.935728074+00:00 stderr F Trace[740019728]: [200.473475ms] [200.473475ms] END 2025-08-13T19:59:44.959257775+00:00 stderr F I0813 19:59:44.959194 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ... 
2025-08-13T19:59:45.190940199+00:00 stderr F I0813 19:59:45.141035 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ... 2025-08-13T19:59:45.191140695+00:00 stderr F I0813 19:59:45.157727 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T19:59:45.191207477+00:00 stderr F I0813 19:59:45.157743 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:45.191256538+00:00 stderr F I0813 19:59:45.157751 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:45.404111546+00:00 stderr F I0813 19:59:45.157767 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ... 2025-08-13T19:59:45.404111546+00:00 stderr F I0813 19:59:45.171163 1 trace.go:236] Trace[1633723082]: "DeltaFIFO Pop Process" ID:system:openshift:operator:openshift-apiserver-operator,Depth:18,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.905) (total time: 183ms): 2025-08-13T19:59:45.404111546+00:00 stderr F Trace[1633723082]: [183.235093ms] [183.235093ms] END 2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094748 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094945 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094958 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.105011265+00:00 stderr F I0813 19:59:46.101336 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111438 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111508 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111519 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:46.111606923+00:00 stderr F I0813 19:59:46.111557 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.112875449+00:00 stderr F I0813 19:59:46.112743 1 trace.go:236] Trace[1514538963]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.826) (total time: 16285ms): 2025-08-13T19:59:46.112875449+00:00 stderr F Trace[1514538963]: ---"Objects listed" error: 16285ms (19:59:46.112) 2025-08-13T19:59:46.112875449+00:00 stderr F Trace[1514538963]: [16.285982883s] [16.285982883s] END 2025-08-13T19:59:46.141217627+00:00 stderr F I0813 19:59:46.141153 1 trace.go:236] Trace[2132981388]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/kube-apiserver-crc.17dc8f01381192ba,Depth:193,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.992) (total time: 1148ms): 2025-08-13T19:59:46.141217627+00:00 stderr F Trace[2132981388]: [1.148505488s] [1.148505488s] END 2025-08-13T19:59:46.144456709+00:00 stderr F I0813 19:59:46.144416 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.144650605+00:00 stderr F I0813 19:59:46.144624 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.144927433+00:00 stderr F I0813 19:59:46.144902 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.154048783+00:00 stderr F I0813 19:59:46.153965 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8 2025-08-13T19:59:46.154176986+00:00 stderr F I0813 19:59:46.154156 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.158647634+00:00 stderr F I0813 19:59:46.158612 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.158740346+00:00 stderr F I0813 19:59:46.158722 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.158907421+00:00 stderr F I0813 19:59:46.158762 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.158953042+00:00 stderr F I0813 19:59:46.158939 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.159055045+00:00 stderr F I0813 19:59:46.159037 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:46.159088616+00:00 stderr F I0813 19:59:46.159077 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 
2025-08-13T19:59:46.160268820+00:00 stderr F E0813 19:59:46.160204 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.160356262+00:00 stderr F I0813 19:59:46.160302 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.327105875+00:00 stderr F I0813 19:59:46.323731 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:46.640573681+00:00 stderr F I0813 19:59:46.334064 1 trace.go:236] Trace[383502669]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/kube-apiserver-crc.17dcded7fc33a4b6,Depth:142,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:46.176) (total time: 157ms): 2025-08-13T19:59:46.640573681+00:00 stderr F Trace[383502669]: [157.750217ms] [157.750217ms] END 2025-08-13T19:59:46.640638053+00:00 stderr F I0813 19:59:46.347238 1 trace.go:236] Trace[927508524]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 18578ms): 2025-08-13T19:59:46.640638053+00:00 stderr F Trace[927508524]: ---"Objects listed" error: 18578ms (19:59:46.347) 2025-08-13T19:59:46.640638053+00:00 stderr F Trace[927508524]: [18.578666736s] [18.578666736s] END 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.640610 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.347553 1 trace.go:236] Trace[669931977]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 18578ms): 2025-08-13T19:59:46.652059088+00:00 stderr F Trace[669931977]: ---"Objects listed" error: 18407ms (19:59:46.175) 2025-08-13T19:59:46.652059088+00:00 stderr F Trace[669931977]: [18.578891232s] [18.578891232s] END 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647321 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.368954 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647454 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.369250 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647500 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.369267 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.649332 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T19:59:46.652506731+00:00 stderr F I0813 19:59:46.369287 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:46.652586393+00:00 stderr F I0813 19:59:46.652556 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T19:59:46.652668456+00:00 stderr F I0813 19:59:46.384797 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.652713147+00:00 stderr F I0813 19:59:46.652698 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.657545305+00:00 stderr F I0813 19:59:46.398921 1 base_controller.go:73] Caches are synced for EncryptionPruneController
2025-08-13T19:59:46.657621557+00:00 stderr F I0813 19:59:46.657601 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-08-13T19:59:46.660720275+00:00 stderr F I0813 19:59:46.399045 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-08-13T19:59:46.661012374+00:00 stderr F I0813 19:59:46.660988 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-08-13T19:59:46.662276710+00:00 stderr F I0813 19:59:46.449596 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.662334651+00:00 stderr F I0813 19:59:46.662316 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.663541776+00:00 stderr F I0813 19:59:46.449633 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.663610078+00:00 stderr F I0813 19:59:46.663585 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.666594283+00:00 stderr F I0813 19:59:46.451435 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-08-13T19:59:46.666665075+00:00 stderr F I0813 19:59:46.666646 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-08-13T19:59:46.673105408+00:00 stderr F I0813 19:59:46.468242 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources
2025-08-13T19:59:46.673184980+00:00 stderr F I0813 19:59:46.673163 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ...
2025-08-13T19:59:46.676508885+00:00 stderr F I0813 19:59:46.468300 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.676573737+00:00 stderr F I0813 19:59:46.676555 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.681108276+00:00 stderr F I0813 19:59:46.468319 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.681302312+00:00 stderr F I0813 19:59:46.681180 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.707044046+00:00 stderr F I0813 19:59:46.485287 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController
2025-08-13T19:59:46.707230501+00:00 stderr F I0813 19:59:46.707200 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ...
2025-08-13T19:59:46.750947557+00:00 stderr F I0813 19:59:46.485317 1 base_controller.go:73] Caches are synced for auditPolicyController
2025-08-13T19:59:46.750947557+00:00 stderr F I0813 19:59:46.750875 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ...
2025-08-13T19:59:46.751920665+00:00 stderr F I0813 19:59:46.502223 1 base_controller.go:73] Caches are synced for NodeKubeconfigController
2025-08-13T19:59:46.751920665+00:00 stderr F I0813 19:59:46.751097 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ...
2025-08-13T19:59:46.831887244+00:00 stderr F I0813 19:59:46.825572 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.831887244+00:00 stderr F I0813 19:59:46.825695 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.865169383+00:00 stderr F I0813 19:59:46.865101 1 base_controller.go:73] Caches are synced for EventWatchController
2025-08-13T19:59:46.865286456+00:00 stderr F I0813 19:59:46.865268 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ...
2025-08-13T19:59:47.704232902+00:00 stderr F I0813 19:59:47.692032 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:47.862264577+00:00 stderr F I0813 19:59:47.844259 1 trace.go:236] Trace[1477171505]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.786) (total time: 20057ms):
2025-08-13T19:59:47.862264577+00:00 stderr F Trace[1477171505]: ---"Objects listed" error: 20057ms (19:59:47.844)
2025-08-13T19:59:47.862264577+00:00 stderr F Trace[1477171505]: [20.057651336s] [20.057651336s] END
2025-08-13T19:59:47.862264577+00:00 stderr F I0813 19:59:47.844328 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:47.903459451+00:00 stderr F I0813 19:59:46.502273 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2025-08-13T19:59:47.903459451+00:00 stderr F I0813 19:59:47.887274 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:47.918354 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:46.502285 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:47.918431 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T19:59:48.329221767+00:00 stderr F I0813 19:59:46.502296 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T19:59:48.329364341+00:00 stderr F I0813 19:59:48.329343 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T19:59:48.450923837+00:00 stderr F I0813 19:59:48.433261 1 trace.go:236] Trace[1914587675]: "DeltaFIFO Pop Process" ID:openshift-config-managed/dashboard-cluster-total,Depth:35,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:47.844) (total time: 588ms):
2025-08-13T19:59:48.450923837+00:00 stderr F Trace[1914587675]: [588.63424ms] [588.63424ms] END
2025-08-13T19:59:48.658903396+00:00 stderr F I0813 19:59:48.658421 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:46.515255 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)")
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:48.760552 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:48.760587 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539556 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539612 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539633 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:49.680538308+00:00 stderr F I0813 19:59:49.680477 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:49.757899653+00:00 stderr F I0813 19:59:49.746243 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-08-13T19:59:49.757899653+00:00 stderr F I0813 19:59:49.746307 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-08-13T19:59:50.217136634+00:00 stderr P I0813 19:59:50.215305 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKAB 2025-08-13T19:59:50.217380451+00:00 stderr F 
UmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:50.251507593+00:00 stderr F I0813 19:59:50.250263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T19:59:50.251507593+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.251590646+00:00 stderr F I0813 19:59:50.251524 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 
8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:50.251992417+00:00 stderr F I0813 19:59:50.251867 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:50.296506296+00:00 stderr F I0813 19:59:50.294579 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/node-kubeconfigs -n openshift-kube-apiserver because it changed 2025-08-13T19:59:50.656940961+00:00 stderr F I0813 19:59:50.655745 1 core.go:358] ConfigMap "openshift-kube-apiserver/aggregator-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIV+a/r/KBVSQwDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2FnZ3JlZ2F0b3It\nY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX2FnZ3JlZ2F0b3ItY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwz1oeDcXqAniG+VxAzEbZbeheswm\nibqk0LwWbA9YAD2aJCC2U0gbXouz0u1dzDnEuwzslM0OFq2kW+1RmEB1drVBkCMV\ny/gKGmRafqGt31/rDe81XneOBzrUC/rNVDZq7rx4wsZ8YzYkPhj1frvlCCWyOdyB\n+nWF+ZZQHLXeSuHuVGnfGqmckiQf/R8ITZp/vniyeOED0w8B9ZdfVHNYJksR/Vn2\ngslU8a/mluPzSCyD10aHnX5c75yTzW4TBQvytjkEpDR5LBoRmHiuL64999DtWonq\niX7TdcoQY1LuHyilaXIp0TazmkRb3ycHAY/RQ3xumj9I25D8eLCwWvI8GwIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nWtUaz8JmZMUc/fPnQTR0L7R9wakwHwYDVR0jBBgwFoAUWtUaz8JmZMUc/fPnQTR0\nL7R9wakwDQYJKoZIhvcNAQELBQADggEBAECt0YM/4XvtPuVY5pY2aAXRuthsw2zQ\nnXVvR5jDsfpaNvMXQWaMjM1B+giNKhDDeLmcF7GlWmFfccqBPicgFhUgQANE3ALN\ngq2Wttd641M6i4B3UuRewNySj1sc12wfgAcwaRcecDTCsZo5yuF90z4mXpZs7MWh\nKCxYPpAtLqi17IF1tJVz/03L+6WD5kUProTELtY7/KBJYV/GONMG+KAMBjg1ikMK\njA0HQiCZiWDuW1ZdAwuvh6oRNWoQy6w9Wksard/AnfXUFBwNgULMp56+tOOPHxtm\nu3XYTN0dPJXsimSk4KfS0by8waS7ocoXa3LgQxb/6h0ympDbcWtgD0w=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:00Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}},"f:labels":{".":{},"f:auth.openshift.io/managed-certificate-type":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:33Z"}],"resourceVersion":null,"uid":"1d0d5c4a-d5a2-488a-94e2-bf622b67cadf"}} 2025-08-13T19:59:50.660061100+00:00 stderr F E0813 19:59:50.656059 1 
base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:50.745166186+00:00 stderr F E0813 19:59:50.745016 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8] 2025-08-13T19:59:50.913502905+00:00 stderr F I0813 19:59:50.913036 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/aggregator-client-ca -n openshift-kube-apiserver: 2025-08-13T19:59:50.913502905+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.932861687+00:00 stderr F I0813 19:59:50.930694 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:51.056973045+00:00 stderr F I0813 19:59:51.056886 1 prune_controller.go:269] Nothing to prune 
2025-08-13T19:59:52.549088319+00:00 stderr F I0813 19:59:52.370677 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:52.370630792 +0000 UTC))" 2025-08-13T19:59:52.657562101+00:00 stderr F I0813 19:59:52.450127 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.494126 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.668765 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:52.668661858 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669440 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.669342427 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669571 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.66945185 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669823 1 tlsconfig.go:178] "Loaded client 
CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.669603794 +0000 UTC))" 2025-08-13T19:59:52.671953091+00:00 stderr F I0813 19:59:52.670080 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.669968045 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883408 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.834619428 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883495 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.8834619 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883516 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.883502232 +0000 UTC))" 2025-08-13T19:59:52.884120229+00:00 stderr F I0813 19:59:52.883954 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 19:59:52.883929784 +0000 UTC))" 2025-08-13T19:59:52.884397857+00:00 stderr F I0813 19:59:52.884342 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 19:59:52.884319995 +0000 UTC))" 2025-08-13T19:59:52.988627408+00:00 stderr F I0813 19:59:52.988069 1 status_controller.go:218] clusteroperator/kube-apiserver diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.006953971+00:00 stderr F I0813 19:59:53.004302 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:53.225649734+00:00 stderr F I0813 19:59:53.221749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:53.558720948+00:00 stderr F I0813 19:59:53.558238 1 trace.go:236] Trace[1369719048]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.838) (total time: 23717ms): 2025-08-13T19:59:53.558720948+00:00 stderr F Trace[1369719048]: ---"Objects listed" error: 23716ms (19:59:53.555) 2025-08-13T19:59:53.558720948+00:00 stderr F Trace[1369719048]: [23.717376611s] [23.717376611s] END 2025-08-13T19:59:53.558720948+00:00 stderr F I0813 19:59:53.558621 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.583756152+00:00 stderr F I0813 19:59:53.583564 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2025-08-13T19:59:53.583756152+00:00 stderr F I0813 19:59:53.583593 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 
2025-08-13T19:59:53.605947754+00:00 stderr F I0813 19:59:53.605280 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:59:53.605947754+00:00 stderr F I0813 19:59:53.605320 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:59:58.613089865+00:00 stderr F I0813 19:59:58.609004 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 9 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T19:59:58.966505629+00:00 stderr F I0813 19:59:58.917090 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authenticator -n openshift-kube-apiserver because it changed 2025-08-13T19:59:59.839284638+00:00 stderr F I0813 19:59:59.839152 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.806181934+00:00 stderr F I0813 20:00:01.796903 1 core.go:358] ConfigMap "openshift-kube-apiserver/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:18Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-08-13T20:00:01Z"}],"resourceVersion":null,"uid":"449953d7-35d8-4eaf-8671-65eda2b482f7"}} 2025-08-13T20:00:01.809347355+00:00 stderr F I0813 20:00:01.808935 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: 2025-08-13T20:00:01.809347355+00:00 stderr F cause by changes in data.ca-bundle.crt 
2025-08-13T20:00:01.809347355+00:00 stderr F I0813 20:00:01.809002 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.907607275+00:00 stderr F I0813 20:00:01.904389 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.924416555+00:00 stderr F I0813 20:00:01.924338 1 core.go:358] ConfigMap "openshift-config-managed/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:01Z"}],"resourceVersion":null,"uid":"1b8f54dc-8896-4a59-8c53-834fed1d81fd"}} 2025-08-13T20:00:01.953431302+00:00 stderr F I0813 20:00:01.929953 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: 2025-08-13T20:00:01.953431302+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:02.037372065+00:00 
stderr F I0813 20:00:02.035056 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:02.070134019+00:00 stderr F I0813 20:00:02.068760 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:02.923722088+00:00 stderr F I0813 20:00:02.874163 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:03.445770364+00:00 stderr F I0813 20:00:03.445627 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:04.121005026+00:00 stderr F I0813 20:00:04.112026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:05.170528143+00:00 stderr F I0813 20:00:05.160062 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:05.681253986+00:00 stderr P I0813 20:00:05.677309 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAw 2025-08-13T20:00:05.681315977+00:00 stderr F 
ggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 
2025-08-13T20:00:05.682706277+00:00 stderr F I0813 20:00:05.681355 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2025-08-13T20:00:05.682706277+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:05.934535818+00:00 stderr F I0813 20:00:05.920362 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.920269061 +0000 UTC))" 2025-08-13T20:00:05.935118714+00:00 stderr F I0813 20:00:05.935088 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.934695512 +0000 UTC))" 2025-08-13T20:00:05.935300110+00:00 stderr F I0813 20:00:05.935278 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.935151055 +0000 UTC))" 2025-08-13T20:00:05.935382762+00:00 stderr F I0813 20:00:05.935362 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.935336371 +0000 UTC))" 2025-08-13T20:00:05.935441924+00:00 stderr F I0813 20:00:05.935428 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935407243 +0000 UTC))" 2025-08-13T20:00:05.935499995+00:00 stderr F I0813 20:00:05.935487 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935465504 +0000 UTC))" 2025-08-13T20:00:05.935693671+00:00 stderr F I0813 20:00:05.935672 1 tlsconfig.go:178] 
"Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935519506 +0000 UTC))" 2025-08-13T20:00:05.935751492+00:00 stderr F I0813 20:00:05.935736 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935717932 +0000 UTC))" 2025-08-13T20:00:05.939093088+00:00 stderr F I0813 20:00:05.939070 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.939023986 +0000 UTC))" 2025-08-13T20:00:05.939164370+00:00 stderr F I0813 20:00:05.939150 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.939132769 +0000 UTC))" 2025-08-13T20:00:05.939613873+00:00 stderr F I0813 20:00:05.939589 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.939563471 +0000 UTC))" 2025-08-13T20:00:05.940415505+00:00 stderr F I0813 20:00:05.940349 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:00:05.940324553 +0000 UTC))" 2025-08-13T20:00:06.125240245+00:00 stderr F I0813 20:00:06.124171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:06.867873281+00:00 stderr F I0813 20:00:06.858398 1 request.go:697] Waited for 1.172199574s due to client-side throttling, not priority and fairness, request: 
PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-apiserver-client-ca 2025-08-13T20:00:08.100086485+00:00 stderr F I0813 20:00:07.974193 1 request.go:697] Waited for 1.291491224s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle 2025-08-13T20:00:08.100086485+00:00 stderr P I0813 20:00:07.983291 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZ 2025-08-13T20:00:08.100180528+00:00 stderr F 
XJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.189908176+00:00 stderr F I0813 20:00:08.189030 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:08.212464720+00:00 stderr F I0813 20:00:08.206614 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T20:00:08.212464720+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:10.243962056+00:00 stderr F I0813 20:00:10.231607 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:10.463320621+00:00 stderr F I0813 20:00:10.460161 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-9 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:00:11.026082297+00:00 stderr F I0813 20:00:11.005505 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:11.373230415+00:00 stderr F I0813 20:00:11.353030 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 9 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T20:00:11.822691641+00:00 stderr F I0813 20:00:11.822014 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 9 created because optional secret/webhook-authenticator has changed 2025-08-13T20:00:11.846949852+00:00 stderr F I0813 20:00:11.846098 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:12.011013411+00:00 stderr F W0813 20:00:12.010579 1 staticpod.go:38] revision 9 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:00:12.011013411+00:00 stderr F E0813 20:00:12.010970 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 9 2025-08-13T20:00:13.096090201+00:00 stderr F I0813 20:00:13.095264 1 installer_controller.go:524] node crc with revision 8 is the oldest and needs new revision 9 2025-08-13T20:00:13.096090201+00:00 stderr F I0813 20:00:13.096032 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:13.096090201+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:13.096090201+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:13.096090201+00:00 stderr F TargetRevision: (int32) 9, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedTime: (*v1.Time)(0xc003c319e0)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:13.096090201+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) 
(len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:00:13.096090201+00:00 stderr F } 2025-08-13T20:00:13.096090201+00:00 stderr F } 2025-08-13T20:00:13.198325806+00:00 stderr F I0813 20:00:13.195633 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 8 to 9 because node crc with revision 8 is the oldest 2025-08-13T20:00:13.284744520+00:00 stderr F I0813 20:00:13.284573 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:13.285207963+00:00 stderr F I0813 20:00:13.285148 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.486186314+00:00 stderr F I0813 20:00:13.485044 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for 
clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" 2025-08-13T20:00:13.491050473+00:00 stderr F I0813 20:00:13.491008 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.571507167+00:00 stderr F E0813 20:00:13.571212 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:14.466612320+00:00 stderr F I0813 20:00:14.458400 1 request.go:697] Waited for 1.135419415s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:00:17.365699214+00:00 stderr F I0813 20:00:17.348199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-9-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:17.620944372+00:00 stderr F I0813 20:00:17.619554 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:18.586967406+00:00 stderr F I0813 20:00:18.586756 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:21.262488005+00:00 stderr F I0813 20:00:21.259307 1 request.go:697] Waited for 1.120055997s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:00:22.147030787+00:00 stderr F I0813 20:00:22.142248 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:25.082034549+00:00 stderr F I0813 20:00:25.074714 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because 
installer is not finished, but in Pending phase 2025-08-13T20:00:28.579868887+00:00 stderr F I0813 20:00:28.571487 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:49.954168169+00:00 stderr P I0813 20:00:49.869279 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwgg 2025-08-13T20:00:49.954592101+00:00 stderr F 
Ei\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:49.954592101+00:00 stderr F I0813 20:00:49.950160 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2025-08-13T20:00:49.954592101+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:50.452469968+00:00 stderr P I0813 20:00:50.451511 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE 2025-08-13T20:00:50.452521249+00:00 stderr F 
3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:50.491882951+00:00 stderr F I0813 20:00:50.472665 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T20:00:50.491882951+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:59.988258923+00:00 stderr F I0813 20:00:59.971160 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.971063173 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.991927 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.991860806 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992048 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.991966509 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992073 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.992059152 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992099 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992080352 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992128 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992105723 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992151 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992136864 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992169 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992156825 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.992174365 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992228 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.992200266 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992255 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992240197 +0000 UTC))" 2025-08-13T20:00:59.994898803+00:00 stderr F I0813 20:00:59.992640 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:59.992603197 +0000 UTC))" 2025-08-13T20:00:59.994898803+00:00 stderr F I0813 20:00:59.993035 1 named_certificates.go:53] "Loaded SNI cert" 
index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:00:59.993011119 +0000 UTC))" 2025-08-13T20:01:14.400175850+00:00 stderr F I0813 20:01:14.390670 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:14.400175850+00:00 stderr F I0813 20:01:14.393475 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:14.429368972+00:00 stderr F I0813 20:01:14.424378 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:14.429368972+00:00 stderr F I0813 20:01:14.425685 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:14.425611745 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.425769 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:14.425704257 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485069 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:14.484961197 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485106 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:14.485085791 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485272 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:01:14.485183653 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485306 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485284976 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485351 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485314047 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485377 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485357498 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485404 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:14.485386229 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485472 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:14.485444671 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485516 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485497092 +0000 UTC))" 2025-08-13T20:01:14.505378189+00:00 stderr F I0813 20:01:14.489240 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-08-13 
20:01:14.489194768 +0000 UTC))" 2025-08-13T20:01:14.507034327+00:00 stderr F I0813 20:01:14.506154 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:01:14.48961271 +0000 UTC))" 2025-08-13T20:01:15.604133219+00:00 stderr F I0813 20:01:15.603594 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="b57d435dda9015b836844ccd5987ed8197c1e056dd12749adf934051633baa90", new="60b634b6a45b60ea7526d03c4e0dd32ed9c3754978fa8314240e9b0d791c4ab0") 2025-08-13T20:01:15.604133219+00:00 stderr F W0813 20:01:15.603963 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:15.604133219+00:00 stderr F I0813 20:01:15.604069 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="1b9136fb24ce5731f959f039a819bcb38f1319259d783f75058cdbd1e9634c27", new="f77cc21f2f2924073edb7f73a484e9ce3a552373ca706e2e4a46fb72d4f6a8fa") 2025-08-13T20:01:15.604323094+00:00 stderr F I0813 20:01:15.604280 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:15.607932247+00:00 stderr F I0813 20:01:15.604630 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608191 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608316 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608230 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608365 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608414 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608358 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608285 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608378 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608391 1 base_controller.go:172] Shutting down webhookSupportabilityController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608399 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608432 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608512 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608477 1 base_controller.go:172] Shutting down ServiceAccountIssuerController ... 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608527 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608531 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608550604+00:00 stderr F I0813 20:01:15.608544 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:15.608562215+00:00 stderr F I0813 20:01:15.608556 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608587 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608601 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608589 1 certrotationcontroller.go:899] Shutting down CertRotation 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608616 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:01:15.608677738+00:00 stderr F I0813 20:01:15.608658 1 base_controller.go:172] Shutting down StartupMonitorPodCondition ... 2025-08-13T20:01:15.608711579+00:00 stderr F I0813 20:01:15.608671 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:01:15.608739830+00:00 stderr F I0813 20:01:15.608679 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:01:15.608766861+00:00 stderr F I0813 20:01:15.608705 1 base_controller.go:172] Shutting down EventWatchController ... 2025-08-13T20:01:15.610025417+00:00 stderr F I0813 20:01:15.608737 1 termination_observer.go:155] Shutting down TerminationObserver 2025-08-13T20:01:15.610150250+00:00 stderr F I0813 20:01:15.610097 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:15.610183981+00:00 stderr F I0813 20:01:15.610136 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:15.610236473+00:00 stderr F I0813 20:01:15.610223 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:15.610264143+00:00 stderr F I0813 20:01:15.610200 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.610303184+00:00 stderr F I0813 20:01:15.610289 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:15.610344556+00:00 stderr F I0813 20:01:15.610332 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:15.610415938+00:00 stderr F I0813 20:01:15.610362 1 base_controller.go:172] Shutting down CertRotationTimeUpgradeableController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608865 1 base_controller.go:172] Shutting down NodeKubeconfigController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608868 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608884 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608913 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608921 1 base_controller.go:172] Shutting down BoundSATokenSignerController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608955 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608979 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609010 1 base_controller.go:172] Shutting down KubeAPIServerStaticResources ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609030 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609049 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609074 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609093 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609210 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609252 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609320 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.608263 1 base_controller.go:172] Shutting down SCCReconcileController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609339 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609361 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609505 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609514 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609651 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609737 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.609751 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.609759 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.610261 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641143274+00:00 stderr F I0813 20:01:15.610350 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610443 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610451 1 base_controller.go:114] Shutting down worker of ServiceAccountIssuerController controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610460 1 base_controller.go:114] Shutting down worker of StartupMonitorPodCondition controller ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.608763 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.610501 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.610508 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610516 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610967 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610992 1 base_controller.go:172] Shutting down StatusSyncer_kube-apiserver ... 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611015 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-apiserver controller ... 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611016 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611190 1 base_controller.go:114] Shutting down worker of SCCReconcileController controller ... 2025-08-13T20:01:15.641202416+00:00 stderr F I0813 20:01:15.611203 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641202416+00:00 stderr F I0813 20:01:15.611229 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:01:15.641217486+00:00 stderr F I0813 20:01:15.611318 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:15.641217486+00:00 stderr F I0813 20:01:15.611364 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611392 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611420 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611550 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:15.641239587+00:00 stderr F I0813 20:01:15.611598 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:15.641239587+00:00 stderr F I0813 20:01:15.611643 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:01:15.641249067+00:00 stderr F I0813 20:01:15.611675 1 base_controller.go:114] Shutting down worker of webhookSupportabilityController controller ... 2025-08-13T20:01:15.641249067+00:00 stderr F I0813 20:01:15.611683 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.611704 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.611986 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.612066 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:01:15.641269957+00:00 stderr F I0813 20:01:15.612072 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:15.641269957+00:00 stderr F I0813 20:01:15.612084 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612104 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612113 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612120 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:15.641299968+00:00 stderr F I0813 20:01:15.612127 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 2025-08-13T20:01:15.641309749+00:00 stderr F I0813 20:01:15.612135 1 base_controller.go:114] Shutting down worker of EventWatchController controller ... 2025-08-13T20:01:15.641309749+00:00 stderr F I0813 20:01:15.612141 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612148 1 base_controller.go:114] Shutting down worker of NodeKubeconfigController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612221 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612236 1 base_controller.go:172] Shutting down KubeletVersionSkewController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612252 1 base_controller.go:172] Shutting down StaticPodStateFallback ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612263 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612280 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612286 1 base_controller.go:114] Shutting down worker of KubeletVersionSkewController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612292 1 base_controller.go:114] Shutting down worker of StaticPodStateFallback controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612296 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612330 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612339 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612356 1 base_controller.go:114] Shutting down worker of BoundSATokenSignerController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612365 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612413 1 base_controller.go:114] Shutting down worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612581 1 base_controller.go:172] Shutting down PodSecurityReadinessController ... 2025-08-13T20:01:15.641769122+00:00 stderr F I0813 20:01:15.612700 1 base_controller.go:114] Shutting down worker of PodSecurityReadinessController controller ... 2025-08-13T20:01:15.641769122+00:00 stderr F I0813 20:01:15.619109 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:15.641935446+00:00 stderr F E0813 20:01:15.630417 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": context canceled, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/api-usage.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/audit-errors.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/cpu-utilization.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-requests.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/podsecurity-violations.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:01:15.641935446+00:00 stderr F I0813 20:01:15.641395 1 base_controller.go:150] All CertRotationController 
post start hooks have been terminated 2025-08-13T20:01:15.641958087+00:00 stderr F I0813 20:01:15.641945 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641967987+00:00 stderr F I0813 20:01:15.641411 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641967987+00:00 stderr F I0813 20:01:15.641963 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641977968+00:00 stderr F I0813 20:01:15.641419 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641977968+00:00 stderr F I0813 20:01:15.641425 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641987658+00:00 stderr F I0813 20:01:15.641440 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641987658+00:00 stderr F I0813 20:01:15.641980 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641997388+00:00 stderr F I0813 20:01:15.641450 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.643992 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644035 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644047 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644061 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644098 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:15.644242882+00:00 stderr F I0813 20:01:15.644208 1 base_controller.go:104] All StaticPodStateFallback workers have been terminated 2025-08-13T20:01:15.645271592+00:00 stderr F I0813 20:01:15.645070 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:15.645271592+00:00 stderr F I0813 20:01:15.644765 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.645499838+00:00 stderr F I0813 20:01:15.645456 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128512 1 base_controller.go:104] All CertRotationTimeUpgradeableController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128679 1 base_controller.go:104] All PodSecurityReadinessController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128768 1 base_controller.go:114] Shutting down worker of KubeAPIServerStaticResources controller ... 
2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128827 1 base_controller.go:104] All webhookSupportabilityController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128877 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128832 1 base_controller.go:104] All KubeAPIServerStaticResources workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128894 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:15.641461 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128904 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128911 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128913 1 base_controller.go:104] All EventWatchController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641470 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128920 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641480 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128925 1 base_controller.go:104] All NodeKubeconfigController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641488 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641497 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128947 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:15.641507 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:15.641530 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:16.128969 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:16.128970 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641553 1 base_controller.go:104] All ServiceAccountIssuerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128976 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641564 1 base_controller.go:104] All StartupMonitorPodCondition workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129046 1 controller_manager.go:54] StartupMonitorPodCondition controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129023 1 base_controller.go:104] All BoundSATokenSignerController workers have been 
terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641569 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129054 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129067 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129071 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129072 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641586 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129085 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641607 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641618 1 base_controller.go:150] All StatusSyncer_kube-apiserver post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129112 1 base_controller.go:104] All StatusSyncer_kube-apiserver workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641645 1 base_controller.go:104] All SCCReconcileController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641654 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641455 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129134 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641540 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129144 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128988 1 builder.go:330] server exited 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641518 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128997 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129004 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129007 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129032 1 base_controller.go:104] All KubeletVersionSkewController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641597 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129190 1 controller_manager.go:54] InstallerController controller 
terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641577 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129207 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:16.130086206+00:00 stderr F I0813 20:01:16.130032 1 controller_manager.go:54] StaticPodStateFallback controller terminated 2025-08-13T20:01:16.135711526+00:00 stderr F I0813 20:01:16.133970 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:16.135711526+00:00 stderr F I0813 20:01:16.134100 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:18.800179041+00:00 stderr F W0813 20:01:18.797867 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000025600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000755000175000017500000000000015073043232033131 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000755000175000017500000000000015073043232033131 5ustar zuulzuul././@LongLink0000644000000000000000000000030300000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000005533115073043232033142 0ustar zuulzuul2025-08-13T20:01:14.502314102+00:00 stdout F Copying system trust bundle 2025-08-13T20:01:15.583350346+00:00 stderr F I0813 20:01:15.577320 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:01:15.583350346+00:00 stderr F I0813 20:01:15.579292 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:01:16.424418308+00:00 stderr F I0813 20:01:16.424200 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T20:01:18.860923183+00:00 stderr F I0813 20:01:18.827448 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302331 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302376 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302397 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302403 1 maxinflight.go:120] "Set denominator 
for mutating requests" limit=200 2025-08-13T20:01:20.554633356+00:00 stderr F I0813 20:01:20.533716 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.554633356+00:00 stderr F I0813 20:01:20.534993 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951690 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951755 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951914 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951936 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951962 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951970 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.953060 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:01:20.958902234+00:00 stderr F I0813 20:01:20.956315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:20.952733968 +0000 UTC))" 2025-08-13T20:01:20.958902234+00:00 stderr F I0813 20:01:20.956749 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:20.956695071 +0000 UTC))" 2025-08-13T20:01:20.965157162+00:00 stderr F I0813 20:01:20.960488 1 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994375 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 
19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:20.994131168 +0000 UTC))" 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994452 1 secure_serving.go:213] Serving securely on [::]:6443 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994490 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994509 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:20.999536142+00:00 stderr F I0813 20:01:20.997020 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.003034432+00:00 stderr F I0813 20:01:21.001406 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.017450403+00:00 stderr F I0813 20:01:21.017345 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.017450403+00:00 stderr F I0813 20:01:21.017361 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094024 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094195 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094252 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.094431 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.094400877 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.094820 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:21.094748977 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095175 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:21.095146758 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095428 1 named_certificates.go:53] "Loaded SNI cert" 
index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:21.095416776 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095927 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:21.095881159 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:21.095938851 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096000 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.095980842 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096017 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.096006323 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096034 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096022563 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096054 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096041484 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096070 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096058844 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096087 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096075395 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096137 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:21.096092525 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096164 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:21.096147627 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096183 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096171578 +0000 UTC))" 2025-08-13T20:01:21.117871356+00:00 stderr F I0813 20:01:21.115991 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:21.115950232 +0000 UTC))" 2025-08-13T20:01:21.117871356+00:00 stderr F I0813 20:01:21.116413 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:21.116373694 +0000 UTC))" 
2025-08-13T20:01:21.118922666+00:00 stderr F I0813 20:01:21.118737 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:21.118719181 +0000 UTC))" 2025-08-13T20:06:02.194718284+00:00 stderr F I0813 20:06:02.193134 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:00.871143946+00:00 stderr F I0813 20:07:00.868133 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:06.442934193+00:00 stderr F I0813 20:07:06.442399 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.102717324+00:00 stderr F I0813 20:09:35.102399 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:36.885284120+00:00 stderr F I0813 20:09:36.885208 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:45.285875742+00:00 stderr F I0813 20:09:45.285584 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:42:36.418082203+00:00 stderr F I0813 20:42:36.411925 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418082203+00:00 stderr F I0813 20:42:36.415480 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418442384+00:00 stderr F I0813 20:42:36.418365 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418907447+00:00 stderr F I0813 20:42:36.409926 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.919276434+00:00 stderr F I0813 20:42:40.918383 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.919544461+00:00 stderr F I0813 20:42:40.918425 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.919602463+00:00 stderr F I0813 20:42:40.919510 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed 2025-08-13T20:42:40.925908115+00:00 stderr F I0813 20:42:40.921108 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:40.927359407+00:00 stderr F I0813 20:42:40.927288 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished 2025-08-13T20:42:40.927359407+00:00 stderr F I0813 20:42:40.927331 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:40.927452619+00:00 stderr F I0813 
20:42:40.927395 1 genericapiserver.go:612] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:40.927466860+00:00 stderr F I0813 20:42:40.927451 1 genericapiserver.go:647] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.928975223+00:00 stderr F I0813 20:42:40.928936 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:42:40.928996954+00:00 stderr F I0813 20:42:40.928977 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening 2025-08-13T20:42:40.930019173+00:00 stderr F I0813 20:42:40.929742 1 genericapiserver.go:638] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930071 1 genericapiserver.go:679] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930094 1 genericapiserver.go:703] "[graceful-termination] audit backend shutdown completed" 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930104 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished 2025-08-13T20:42:40.930631431+00:00 stderr F I0813 20:42:40.930547 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.931352372+00:00 stderr F I0813 20:42:40.931206 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.931506586+00:00 stderr F I0813 20:42:40.931453 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.931543027+00:00 stderr F I0813 20:42:40.931469 1 secure_serving.go:258] Stopped listening on [::]:6443 2025-08-13T20:42:40.931606989+00:00 stderr F I0813 20:42:40.931589 1 genericapiserver.go:595] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.931656681+00:00 stderr F I0813 20:42:40.931644 1 genericapiserver.go:711] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.931714812+00:00 stderr F I0813 20:42:40.931688 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationGracefulTerminationFinished' All pending requests processed 2025-08-13T20:42:40.932471814+00:00 stderr F I0813 20:42:40.932398 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.932471814+00:00 stderr F I0813 20:42:40.932425 1 dynamic_serving_content.go:146] "Shutting down controller" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:42:40.933570996+00:00 
stderr F I0813 20:42:40.933454 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" ././@LongLink0000644000000000000000000000030300000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000006230415073043232033140 0ustar zuulzuul2025-10-13T00:15:00.527398318+00:00 stdout F Copying system trust bundle 2025-10-13T00:15:02.024023780+00:00 stderr F I1013 00:15:02.023464 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-10-13T00:15:02.025064231+00:00 stderr F I1013 00:15:02.024211 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-10-13T00:15:02.595677328+00:00 stderr F I1013 00:15:02.594820 1 audit.go:340] Using audit backend: ignoreErrors 2025-10-13T00:15:02.675210291+00:00 stderr F I1013 00:15:02.675142 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:02.706430536+00:00 stderr F I1013 00:15:02.699845 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:02.706430536+00:00 stderr F I1013 00:15:02.699900 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:02.706430536+00:00 stderr F I1013 00:15:02.699918 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:15:02.706430536+00:00 stderr F I1013 00:15:02.699925 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:15:02.712134797+00:00 stderr F I1013 00:15:02.709859 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:02.712134797+00:00 stderr F I1013 00:15:02.709924 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720491 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720536 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720655 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720669 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720685 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720690 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.721316642+00:00 stderr F I1013 00:15:02.720881 1 
dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-10-13T00:15:02.722250770+00:00 stderr F I1013 00:15:02.722011 1 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-10-13T00:15:02.728032244+00:00 stderr F I1013 00:15:02.727351 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-10-13 00:15:02.719133817 +0000 UTC))" 2025-10-13T00:15:02.728032244+00:00 stderr F I1013 00:15:02.727386 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:02.728032244+00:00 stderr F I1013 00:15:02.727458 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:02.728032244+00:00 stderr F I1013 00:15:02.727602 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:02.728032244+00:00 stderr F I1013 00:15:02.727830 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-10-13 00:15:02.727770426 +0000 UTC))" 2025-10-13T00:15:02.728087135+00:00 stderr F I1013 00:15:02.728076 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:15:02.728064655 +0000 UTC))" 2025-10-13T00:15:02.728124756+00:00 stderr F I1013 00:15:02.728101 1 secure_serving.go:213] Serving securely on [::]:6443 2025-10-13T00:15:02.728135227+00:00 stderr F I1013 00:15:02.728130 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:02.728171298+00:00 stderr F I1013 00:15:02.728150 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:02.737670232+00:00 stderr F W1013 00:15:02.737618 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-10-13T00:15:02.737732994+00:00 stderr F E1013 00:15:02.737722 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to handle the request 
(get groups.user.openshift.io) 2025-10-13T00:15:02.821624898+00:00 stderr F I1013 00:15:02.821568 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.821700750+00:00 stderr F I1013 00:15:02.821687 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:02.821830664+00:00 stderr F I1013 00:15:02.821814 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.822076791+00:00 stderr F I1013 00:15:02.822055 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.821995219 +0000 UTC))" 2025-10-13T00:15:02.822644498+00:00 stderr F I1013 00:15:02.822625 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-10-13 00:15:02.822603777 +0000 UTC))" 2025-10-13T00:15:02.822981508+00:00 stderr F I1013 00:15:02.822959 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-10-13 00:15:02.822939407 +0000 UTC))" 2025-10-13T00:15:02.823318849+00:00 stderr F I1013 00:15:02.823303 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:15:02.823290978 +0000 UTC))" 2025-10-13T00:15:02.823714860+00:00 stderr F I1013 00:15:02.823698 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:02.823682449 +0000 UTC))" 2025-10-13T00:15:02.823771622+00:00 stderr F I1013 00:15:02.823760 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC 
(now=2025-10-13 00:15:02.823743211 +0000 UTC))" 2025-10-13T00:15:02.827375950+00:00 stderr F I1013 00:15:02.823799 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.823786653 +0000 UTC))" 2025-10-13T00:15:02.827500444+00:00 stderr F I1013 00:15:02.827483 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.827443452 +0000 UTC))" 2025-10-13T00:15:02.827543425+00:00 stderr F I1013 00:15:02.827533 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.827518254 +0000 UTC))" 2025-10-13T00:15:02.827581896+00:00 stderr F I1013 00:15:02.827571 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.827558936 +0000 UTC))" 2025-10-13T00:15:02.827636718+00:00 stderr F I1013 00:15:02.827625 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.827609557 +0000 UTC))" 2025-10-13T00:15:02.827959788+00:00 stderr F I1013 00:15:02.827944 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.827652038 +0000 UTC))" 2025-10-13T00:15:02.828006839+00:00 stderr F I1013 00:15:02.827996 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 
UTC (now=2025-10-13 00:15:02.827981338 +0000 UTC))" 2025-10-13T00:15:02.828046210+00:00 stderr F I1013 00:15:02.828035 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:02.8280241 +0000 UTC))" 2025-10-13T00:15:02.828083251+00:00 stderr F I1013 00:15:02.828073 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.828062341 +0000 UTC))" 2025-10-13T00:15:02.828403551+00:00 stderr F I1013 00:15:02.828389 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-10-13 00:15:02.82837403 +0000 UTC))" 2025-10-13T00:15:02.828742421+00:00 stderr F I1013 00:15:02.828724 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-10-13 00:15:02.82870386 +0000 UTC))" 2025-10-13T00:15:02.829015189+00:00 stderr F I1013 00:15:02.829003 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:15:02.828985268 +0000 UTC))" 2025-10-13T00:15:03.641768790+00:00 stderr F W1013 00:15:03.639639 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-10-13T00:15:03.641768790+00:00 stderr F E1013 00:15:03.639672 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-10-13T00:15:05.883653601+00:00 stderr F W1013 00:15:05.882876 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-10-13T00:15:05.884608639+00:00 stderr F E1013 00:15:05.883841 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: 
failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-10-13T00:15:11.878359771+00:00 stderr F I1013 00:15:11.878289 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:21:11.943250765+00:00 stderr F I1013 00:21:11.943114 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.94307504 +0000 UTC))" 2025-10-13T00:21:11.943250765+00:00 stderr F I1013 00:21:11.943154 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.943137242 +0000 UTC))" 2025-10-13T00:21:11.943250765+00:00 stderr F I1013 00:21:11.943178 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.943159312 +0000 UTC))" 2025-10-13T00:21:11.943250765+00:00 stderr F I1013 00:21:11.943200 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.943184193 +0000 UTC))" 2025-10-13T00:21:11.943250765+00:00 stderr F I1013 00:21:11.943222 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943206224 +0000 UTC))" 2025-10-13T00:21:11.943250765+00:00 stderr F I1013 00:21:11.943243 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943227364 +0000 UTC))" 2025-10-13T00:21:11.943309347+00:00 stderr F I1013 00:21:11.943264 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943249495 +0000 UTC))" 2025-10-13T00:21:11.943309347+00:00 stderr F I1013 00:21:11.943292 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943271066 +0000 UTC))" 2025-10-13T00:21:11.943361848+00:00 stderr F I1013 00:21:11.943315 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.943299286 +0000 UTC))" 2025-10-13T00:21:11.943361848+00:00 stderr F I1013 00:21:11.943352 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.943338587 +0000 UTC))" 2025-10-13T00:21:11.943505752+00:00 stderr F I1013 00:21:11.943375 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.943359658 +0000 UTC))" 2025-10-13T00:21:11.943505752+00:00 stderr F I1013 00:21:11.943401 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.943385379 +0000 UTC))" 2025-10-13T00:21:11.945338731+00:00 stderr F I1013 00:21:11.945289 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-10-13 00:21:11.945257329 +0000 UTC))" 2025-10-13T00:21:11.945721091+00:00 stderr F I1013 00:21:11.945693 1 named_certificates.go:53] "Loaded SNI cert" index=1 
certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-10-13 00:21:11.94566548 +0000 UTC))" 2025-10-13T00:21:11.946028400+00:00 stderr F I1013 00:21:11.946003 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:21:11.945986448 +0000 UTC))" 2025-10-13T00:23:40.240759921+00:00 stderr F I1013 00:23:40.240710 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000000134315073043233033056 0ustar zuulzuul2025-10-13T00:16:01.272726774+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="error verifying provided cert and key: certificate has expired" 2025-10-13T00:16:01.273099606+00:00 stderr F time="2025-10-13T00:16:01Z" level=info msg="generating a new cert and key" 2025-10-13T00:16:02.220503181+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="error retrieving pprof profile: Get \"https://olm-operator-metrics:8443/debug/pprof/heap\": remote error: tls: unknown certificate authority" 2025-10-13T00:16:02.240928476+00:00 stderr F time="2025-10-13T00:16:02Z" level=info msg="error retrieving pprof profile: Get \"https://catalog-operator-metrics:8443/debug/pprof/heap\": remote error: tls: unknown certificate authority" ././@LongLink0000644000000000000000000000023600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000025000000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015073043232033044 5ustar zuulzuul././@LongLink0000644000000000000000000000025500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000016462315073043232033062 0ustar zuulzuul2025-08-13T20:07:36.638917816+00:00 stderr F I0813 20:07:36.636214 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a0f180 cert-dir:0xc000a0f360 cert-secrets:0xc000a0f0e0 configmaps:0xc000a0ec80 namespace:0xc000a0eaa0 optional-cert-configmaps:0xc000a0f2c0 optional-cert-secrets:0xc000a0f220 optional-configmaps:0xc000a0edc0 optional-secrets:0xc000a0ed20 pod:0xc000a0eb40 pod-manifest-dir:0xc000a0ef00 resource-dir:0xc000a0ee60 revision:0xc000a0ea00 secrets:0xc000a0ebe0 v:0xc000a1e780] [0xc000a1e780 0xc000a0ea00 0xc000a0eaa0 0xc000a0eb40 0xc000a0ee60 0xc000a0ef00 0xc000a0ec80 0xc000a0edc0 0xc000a0ebe0 0xc000a0ed20 0xc000a0f360 0xc000a0f180 0xc000a0f2c0 0xc000a0f0e0 0xc000a0f220] [] map[cert-configmaps:0xc000a0f180 cert-dir:0xc000a0f360 cert-secrets:0xc000a0f0e0 configmaps:0xc000a0ec80 help:0xc000a1eb40 kubeconfig:0xc000a0e960 log-flush-frequency:0xc000a1e6e0 namespace:0xc000a0eaa0 optional-cert-configmaps:0xc000a0f2c0 optional-cert-secrets:0xc000a0f220 optional-configmaps:0xc000a0edc0 optional-secrets:0xc000a0ed20 pod:0xc000a0eb40 pod-manifest-dir:0xc000a0ef00 pod-manifests-lock-file:0xc000a0f040 resource-dir:0xc000a0ee60 revision:0xc000a0ea00 secrets:0xc000a0ebe0 timeout-duration:0xc000a0efa0 v:0xc000a1e780 vmodule:0xc000a1e820] [0xc000a0e960 0xc000a0ea00 0xc000a0eaa0 0xc000a0eb40 0xc000a0ebe0 0xc000a0ec80 0xc000a0ed20 0xc000a0edc0 0xc000a0ee60 0xc000a0ef00 0xc000a0efa0 0xc000a0f040 0xc000a0f0e0 0xc000a0f180 0xc000a0f220 0xc000a0f2c0 0xc000a0f360 0xc000a1e6e0 0xc000a1e780 0xc000a1e820 0xc000a1eb40] [0xc000a0f180 0xc000a0f360 0xc000a0f0e0 0xc000a0ec80 0xc000a1eb40 0xc000a0e960 0xc000a1e6e0 0xc000a0eaa0 0xc000a0f2c0 0xc000a0f220 0xc000a0edc0 0xc000a0ed20 0xc000a0eb40 0xc000a0ef00 0xc000a0f040 0xc000a0ee60 0xc000a0ea00 0xc000a0ebe0 0xc000a0efa0 0xc000a1e780 0xc000a1e820] map[104:0xc000a1eb40 118:0xc000a1e780] [] -1 0 0xc000a023c0 true 0xa51380 []} 2025-08-13T20:07:36.638917816+00:00 stderr F I0813 20:07:36.636984 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000a0a340)({ 2025-08-13T20:07:36.638917816+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:07:36.638917816+00:00 stderr F Revision: (string) (len=2) "12", 2025-08-13T20:07:36.638917816+00:00 stderr F NodeName: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-08-13T20:07:36.638917816+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:07:36.638917816+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=11) "etcd-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=34) 
"localhost-recovery-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "encryption-config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=12) "cloud-config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "aggregator-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=14) "kubelet-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-001", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) 
"user-serving-cert-005", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=9) "client-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:07:36.638917816+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:07:36.638917816+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:07:36.638917816+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:07:36.638917816+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:07:36.638917816+00:00 stderr F }) 2025-08-13T20:07:36.647040639+00:00 stderr F I0813 20:07:36.644426 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:07:36.676170244+00:00 stderr F I0813 20:07:36.676103 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:07:36.729144953+00:00 stderr F I0813 20:07:36.729077 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:07:46.742419322+00:00 stderr F I0813 20:07:46.742303 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:08:16.746174226+00:00 stderr F I0813 20:08:16.745979 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:08:16.759085647+00:00 stderr F I0813 20:08:16.758998 1 cmd.go:539] Latest installer revision for node crc is: 12 2025-08-13T20:08:16.759085647+00:00 stderr F I0813 20:08:16.759051 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:08:16.766542280+00:00 stderr F I0813 20:08:16.766427 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:08:16.766638183+00:00 stderr F I0813 20:08:16.766618 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12" ... 2025-08-13T20:08:16.767661233+00:00 stderr F I0813 20:08:16.767400 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12" ... 2025-08-13T20:08:16.767719274+00:00 stderr F I0813 20:08:16.767702 1 cmd.go:226] Getting secrets ... 
2025-08-13T20:08:16.777540006+00:00 stderr F I0813 20:08:16.777422 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-12 2025-08-13T20:08:16.787704677+00:00 stderr F I0813 20:08:16.784537 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-12 2025-08-13T20:08:16.790097306+00:00 stderr F I0813 20:08:16.790066 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-12 2025-08-13T20:08:16.792622618+00:00 stderr F I0813 20:08:16.792596 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-12: secrets "encryption-config-12" not found 2025-08-13T20:08:16.798056444+00:00 stderr F I0813 20:08:16.797997 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-12 2025-08-13T20:08:16.798153447+00:00 stderr F I0813 20:08:16.798139 1 cmd.go:239] Getting config maps ... 2025-08-13T20:08:16.807858675+00:00 stderr F I0813 20:08:16.803474 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-12 2025-08-13T20:08:16.830867405+00:00 stderr F I0813 20:08:16.828390 1 copy.go:60] Got configMap openshift-kube-apiserver/config-12 2025-08-13T20:08:16.837695091+00:00 stderr F I0813 20:08:16.836653 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-12 2025-08-13T20:08:16.953566323+00:00 stderr F I0813 20:08:16.953507 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-12 2025-08-13T20:08:17.152655191+00:00 stderr F I0813 20:08:17.152585 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-12 2025-08-13T20:08:17.356523426+00:00 stderr F I0813 20:08:17.356468 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-12 2025-08-13T20:08:17.560140594+00:00 stderr F I0813 20:08:17.558391 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-12 2025-08-13T20:08:17.752620232+00:00 stderr F I0813 20:08:17.752494 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-12 2025-08-13T20:08:17.958951147+00:00 stderr F I0813 20:08:17.958857 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-12: configmaps "cloud-config-12" not found 2025-08-13T20:08:18.158022865+00:00 stderr F I0813 20:08:18.156610 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-12 2025-08-13T20:08:18.352269654+00:00 stderr F I0813 20:08:18.352156 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-12 2025-08-13T20:08:18.352369817+00:00 stderr F I0813 20:08:18.352353 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client" ... 2025-08-13T20:08:18.353113228+00:00 stderr F I0813 20:08:18.353009 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client/tls.crt" ... 2025-08-13T20:08:18.353340855+00:00 stderr F I0813 20:08:18.353319 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client/tls.key" ... 2025-08-13T20:08:18.353507260+00:00 stderr F I0813 20:08:18.353487 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token" ... 2025-08-13T20:08:18.353665544+00:00 stderr F I0813 20:08:18.353647 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/service-ca.crt" ... 
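The copy.go entries above show the installer fetching its revision-12 inputs by their revision-suffixed names (etcd-client-12, kube-apiserver-pod-12, and so on), while optional items such as encryption-config-12 and cloud-config-12 are allowed to be absent. A minimal sketch, assuming working oc access to the openshift-kube-apiserver namespace, of how those same inputs could be cross-checked from the cluster side; the resource names are taken from the log entries above and these commands are not part of the captured output:

# Hedged sketch: list the revision-12 inputs the installer copied above.
# Names come from the log; encryption-config-12 and cloud-config-12 are optional
# and were reported "not found" without failing the install.
oc -n openshift-kube-apiserver get secret etcd-client-12 localhost-recovery-serving-certkey-12 localhost-recovery-client-token-12 webhook-authenticator-12
oc -n openshift-kube-apiserver get configmap kube-apiserver-pod-12 config-12 etcd-serving-ca-12 kube-apiserver-audit-policies-12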
2025-08-13T20:08:18.354413516+00:00 stderr F I0813 20:08:18.353881 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:08:18.354604501+00:00 stderr F I0813 20:08:18.354582 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:08:18.354759505+00:00 stderr F I0813 20:08:18.354739 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:08:18.355074254+00:00 stderr F I0813 20:08:18.354982 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey" ... 2025-08-13T20:08:18.355209868+00:00 stderr F I0813 20:08:18.355190 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey/tls.crt" ... 2025-08-13T20:08:18.355372803+00:00 stderr F I0813 20:08:18.355353 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey/tls.key" ... 2025-08-13T20:08:18.356621279+00:00 stderr F I0813 20:08:18.356597 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/webhook-authenticator" ... 2025-08-13T20:08:18.356843525+00:00 stderr F I0813 20:08:18.356820 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/webhook-authenticator/kubeConfig" ... 2025-08-13T20:08:18.357036741+00:00 stderr F I0813 20:08:18.357015 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/bound-sa-token-signing-certs" ... 2025-08-13T20:08:18.357222456+00:00 stderr F I0813 20:08:18.357203 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:08:18.357353840+00:00 stderr F I0813 20:08:18.357335 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/config" ... 2025-08-13T20:08:18.357935916+00:00 stderr F I0813 20:08:18.357489 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/config/config.yaml" ... 2025-08-13T20:08:18.359209643+00:00 stderr F I0813 20:08:18.359186 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/etcd-serving-ca" ... 2025-08-13T20:08:18.359448360+00:00 stderr F I0813 20:08:18.359410 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/etcd-serving-ca/ca-bundle.crt" ... 2025-08-13T20:08:18.359598054+00:00 stderr F I0813 20:08:18.359579 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-audit-policies" ... 2025-08-13T20:08:18.359723238+00:00 stderr F I0813 20:08:18.359705 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-audit-policies/policy.yaml" ... 
2025-08-13T20:08:18.359975605+00:00 stderr F I0813 20:08:18.359952 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-cert-syncer-kubeconfig" ... 2025-08-13T20:08:18.360104889+00:00 stderr F I0813 20:08:18.360086 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:08:18.360250723+00:00 stderr F I0813 20:08:18.360232 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod" ... 2025-08-13T20:08:18.360398597+00:00 stderr F I0813 20:08:18.360378 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/forceRedeploymentReason" ... 2025-08-13T20:08:18.360577992+00:00 stderr F I0813 20:08:18.360555 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:08:18.360767418+00:00 stderr F I0813 20:08:18.360716 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/pod.yaml" ... 2025-08-13T20:08:18.361041326+00:00 stderr F I0813 20:08:18.361018 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/version" ... 2025-08-13T20:08:18.361181070+00:00 stderr F I0813 20:08:18.361161 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kubelet-serving-ca" ... 2025-08-13T20:08:18.361530210+00:00 stderr F I0813 20:08:18.361505 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kubelet-serving-ca/ca-bundle.crt" ... 2025-08-13T20:08:18.361743146+00:00 stderr F I0813 20:08:18.361708 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs" ... 2025-08-13T20:08:18.361983433+00:00 stderr F I0813 20:08:18.361934 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:08:18.362161068+00:00 stderr F I0813 20:08:18.362141 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-002.pub" ... 2025-08-13T20:08:18.362731884+00:00 stderr F I0813 20:08:18.362563 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-003.pub" ... 2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364128 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-server-ca" ... 2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364307 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ... 2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364445 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/oauth-metadata" ... 
2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364556 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/oauth-metadata/oauthMetadata" ... 2025-08-13T20:08:18.367237633+00:00 stderr F I0813 20:08:18.367154 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ... 2025-08-13T20:08:18.367237633+00:00 stderr F I0813 20:08:18.367217 1 cmd.go:226] Getting secrets ... 2025-08-13T20:08:18.553964087+00:00 stderr F I0813 20:08:18.551863 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client 2025-08-13T20:08:18.755506325+00:00 stderr F I0813 20:08:18.755397 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key 2025-08-13T20:08:18.961047308+00:00 stderr F I0813 20:08:18.959393 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key 2025-08-13T20:08:19.157436069+00:00 stderr F I0813 20:08:19.157218 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key 2025-08-13T20:08:19.361055187+00:00 stderr F I0813 20:08:19.360239 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey 2025-08-13T20:08:19.554011299+00:00 stderr F I0813 20:08:19.552970 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey 2025-08-13T20:08:19.758824191+00:00 stderr F I0813 20:08:19.758068 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client 2025-08-13T20:08:19.952841864+00:00 stderr F I0813 20:08:19.952693 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey 2025-08-13T20:08:20.155019351+00:00 stderr F I0813 20:08:20.152283 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs 2025-08-13T20:08:20.355912051+00:00 stderr F I0813 20:08:20.354435 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey 2025-08-13T20:08:20.557744358+00:00 stderr F I0813 20:08:20.557637 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found 2025-08-13T20:08:20.752183112+00:00 stderr F I0813 20:08:20.752136 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found 2025-08-13T20:08:20.951144607+00:00 stderr F I0813 20:08:20.951043 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found 2025-08-13T20:08:21.153445757+00:00 stderr F I0813 20:08:21.153364 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found 2025-08-13T20:08:21.358305480+00:00 stderr F I0813 20:08:21.358147 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found 2025-08-13T20:08:21.552003393+00:00 stderr F I0813 20:08:21.550526 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found 2025-08-13T20:08:21.754680194+00:00 stderr F I0813 20:08:21.754592 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found 2025-08-13T20:08:21.952917598+00:00 stderr F I0813 20:08:21.952702 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found 2025-08-13T20:08:22.158927624+00:00 stderr F I0813 20:08:22.156637 1 
copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found 2025-08-13T20:08:22.368824392+00:00 stderr F I0813 20:08:22.365760 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found 2025-08-13T20:08:22.559923021+00:00 stderr F I0813 20:08:22.557941 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found 2025-08-13T20:08:22.559923021+00:00 stderr F I0813 20:08:22.558009 1 cmd.go:239] Getting config maps ... 2025-08-13T20:08:22.753218473+00:00 stderr F I0813 20:08:22.752977 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca 2025-08-13T20:08:22.958543640+00:00 stderr F I0813 20:08:22.958422 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig 2025-08-13T20:08:23.152462540+00:00 stderr F I0813 20:08:23.152406 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca 2025-08-13T20:08:23.355346357+00:00 stderr F I0813 20:08:23.355215 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig 2025-08-13T20:08:23.633038419+00:00 stderr F I0813 20:08:23.632400 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T20:08:23.633038419+00:00 stderr F I0813 20:08:23.632844 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ... 2025-08-13T20:08:23.633354288+00:00 stderr F I0813 20:08:23.633123 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ... 2025-08-13T20:08:23.634466239+00:00 stderr F I0813 20:08:23.634378 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ... 2025-08-13T20:08:23.634614964+00:00 stderr F I0813 20:08:23.634555 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ... 2025-08-13T20:08:23.634614964+00:00 stderr F I0813 20:08:23.634596 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ... 2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.635714 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ... 2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.636052 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ... 2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.636078 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ... 2025-08-13T20:08:23.643978932+00:00 stderr F I0813 20:08:23.643653 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ... 2025-08-13T20:08:23.643978932+00:00 stderr F I0813 20:08:23.643958 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ... 
2025-08-13T20:08:23.644008123+00:00 stderr F I0813 20:08:23.643993 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ... 2025-08-13T20:08:23.646206346+00:00 stderr F I0813 20:08:23.646135 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ... 2025-08-13T20:08:23.646387351+00:00 stderr F I0813 20:08:23.646327 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ... 2025-08-13T20:08:23.646387351+00:00 stderr F I0813 20:08:23.646374 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ... 2025-08-13T20:08:23.646686290+00:00 stderr F I0813 20:08:23.646604 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ... 2025-08-13T20:08:23.653659940+00:00 stderr F I0813 20:08:23.653566 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ... 2025-08-13T20:08:23.653728932+00:00 stderr F I0813 20:08:23.653695 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ... 2025-08-13T20:08:23.656370107+00:00 stderr F I0813 20:08:23.656272 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.658150 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.658211 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659295 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659416 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659443 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ... 2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662231 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ... 2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662485 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ... 2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662504 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ... 
2025-08-13T20:08:23.664350806+00:00 stderr F I0813 20:08:23.664307 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.664501 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665514 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665643 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665658 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665954 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ... 2025-08-13T20:08:23.666230140+00:00 stderr F I0813 20:08:23.666179 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ... 2025-08-13T20:08:23.666230140+00:00 stderr F I0813 20:08:23.666201 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:08:23.666524959+00:00 stderr F I0813 20:08:23.666397 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ... 2025-08-13T20:08:23.666524959+00:00 stderr F I0813 20:08:23.666438 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ... 2025-08-13T20:08:23.675065014+00:00 stderr F I0813 20:08:23.674383 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ... 2025-08-13T20:08:23.727612830+00:00 stderr F I0813 20:08:23.678973 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-08-13T20:08:23.728107704+00:00 stderr F I0813 20:08:23.728022 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ... 2025-08-13T20:08:23.728170316+00:00 stderr F I0813 20:08:23.728088 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ... 2025-08-13T20:08:23.730094851+00:00 stderr F I0813 20:08:23.730033 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:08:23.730271116+00:00 stderr F I0813 20:08:23.730167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 
2025-08-13T20:08:23.730853573+00:00 stderr F I0813 20:08:23.730656 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-12 -n openshift-kube-apiserver 2025-08-13T20:08:23.756294352+00:00 stderr F I0813 20:08:23.754194 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:08:23.756294352+00:00 stderr F I0813 20:08:23.754297 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-08-13T20:08:23.756294352+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"12"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=12","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:23.767161554+00:00 stderr F I0813 20:08:23.767055 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:08:23.767422721+00:00 stderr F I0813 20:08:23.767311 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 
2025-08-13T20:08:23.767422721+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"12"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=12","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:23.767702189+00:00 stderr F I0813 20:08:23.767602 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2025-08-13T20:08:23.767702189+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"12"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to 
exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. 
This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"12"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"
/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50M 2025-08-13T20:08:23.767733110+00:00 stderr F i"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:23.768399900+00:00 stderr F I0813 20:08:23.768332 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/kube-apiserver-pod.yaml" ... 2025-08-13T20:08:23.768798391+00:00 stderr F I0813 20:08:23.768742 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-08-13T20:08:23.768926285+00:00 stderr F I0813 20:08:23.768828 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
2025-08-13T20:08:23.768926285+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"12"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. 
There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"12"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resour
ces":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"re 2025-08-13T20:08:23.768952735+00:00 stderr F quests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043233033053 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log
2025-08-13T19:59:04.519182976+00:00 stderr F W0813 19:59:04.505970 1 deprecated.go:66]
2025-08-13T19:59:04.519182976+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:59:04.519182976+00:00 stderr F
2025-08-13T19:59:04.519182976+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:04.519182976+00:00 stderr F
2025-08-13T19:59:04.519182976+00:00 stderr F ===============================================
2025-08-13T19:59:04.519182976+00:00 stderr F
2025-08-13T19:59:04.521923144+00:00 stderr F I0813 19:59:04.521868 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:59:04.522683976+00:00 stderr F I0813 19:59:04.522582 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:04.531614440+00:00 stderr F I0813 19:59:04.530405 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443
2025-08-13T19:59:04.536668264+00:00 stderr F I0813 19:59:04.535283 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443
2025-08-13T20:42:43.954344275+00:00 stderr F I0813 20:42:43.954103 1 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log
2025-10-13T00:14:57.777729054+00:00 stderr F W1013 00:14:57.775396 1 deprecated.go:66]
2025-10-13T00:14:57.777729054+00:00 stderr F ==== Removed Flag Warning ======================
2025-10-13T00:14:57.777729054+00:00 stderr F
2025-10-13T00:14:57.777729054+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-10-13T00:14:57.777729054+00:00 stderr F
2025-10-13T00:14:57.777729054+00:00 stderr F ===============================================
2025-10-13T00:14:57.777729054+00:00 stderr F
2025-10-13T00:14:57.778082235+00:00 stderr F I1013 00:14:57.777888 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-10-13T00:14:57.778082235+00:00 stderr F I1013 00:14:57.777937 1 kube-rbac-proxy.go:347] Reading certificate files
2025-10-13T00:14:57.778391764+00:00 stderr F I1013 00:14:57.778353 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443
2025-10-13T00:14:57.778933450+00:00 stderr F I1013 00:14:57.778904 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log
2025-08-13T19:59:47.427869073+00:00 stderr F 2025-08-13T19:59:47Z INFO setup starting manager
2025-08-13T19:59:47.581323448+00:00 stderr F 2025-08-13T19:59:47Z INFO controller-runtime.metrics Starting metrics server
2025-08-13T19:59:47.581953346+00:00 stderr F 2025-08-13T19:59:47Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":9090", "secure": false}
2025-08-13T19:59:47.596895631+00:00 stderr F 2025-08-13T19:59:47Z INFO starting server {"kind": "pprof", "addr": "[::]:6060"}
2025-08-13T19:59:47.596895631+00:00 stderr F 2025-08-13T19:59:47Z INFO starting server {"kind": "health probe", "addr": "[::]:8080"}
2025-08-13T19:59:47.895817023+00:00 stderr F I0813 19:59:47.761307 1 leaderelection.go:250] attempting to acquire leader lease openshift-operator-lifecycle-manager/packageserver-controller-lock...
2025-08-13T19:59:48.598371620+00:00 stderr F I0813 19:59:48.595508 1 leaderelection.go:260] successfully acquired lease openshift-operator-lifecycle-manager/packageserver-controller-lock 2025-08-13T19:59:48.599217465+00:00 stderr F 2025-08-13T19:59:48Z DEBUG events package-server-manager-84d578d794-jw7r2_e40d9bf0-eba2-484b-a0df-0a92c0213730 became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-operator-lifecycle-manager","name":"packageserver-controller-lock","uid":"0beb9bb7-cfd9-4760-98f3-f0c893f5cf42","apiVersion":"coordination.k8s.io/v1","resourceVersion":"28430"}, "reason": "LeaderElection"} 2025-08-13T19:59:48.600629435+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:48.600653725+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1.Infrastructure"} 2025-08-13T19:59:48.600653725+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting Controller {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-08-13T19:59:50.305095201+00:00 stderr F 2025-08-13T19:59:50Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T19:59:51.209164813+00:00 stderr F 2025-08-13T19:59:51Z INFO Starting workers {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "worker count": 1} 2025-08-13T19:59:51.543424412+00:00 stderr F 2025-08-13T19:59:51Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T19:59:51.543424412+00:00 stderr F 2025-08-13T19:59:51Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:52.151692491+00:00 stderr F 2025-08-13T19:59:52Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:52.151692491+00:00 stderr F 2025-08-13T19:59:52Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T19:59:53.088333270+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T19:59:53.088333270+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:53.390313498+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T19:59:53.390313498+00:00 stderr F 2025-08-13T19:59:53Z INFO 
controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:53.390534724+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T19:59:53.390534724+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T19:59:53.534931020+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T19:59:53.534931020+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:03:01.319955058+00:00 stderr F E0813 20:03:01.318734 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:01.394983163+00:00 stderr F E0813 20:04:01.393330 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:01.342585258+00:00 stderr F E0813 20:05:01.341321 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:29.332357546+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:06:29.332357546+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:29.340078067+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:29.360067510+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:06:29.387128505+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:06:29.387128505+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver finished request reconciliation {"csv": 
{"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:06:41.194134500+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:06:41.194134500+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:09:45.228025353+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:09:45.228025353+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:09:45.367941865+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:09:45.368022437+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver finished 
request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:10:16.711473839+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-08-13T20:10:16.711690345+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:10:16.712255871+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:10:16.723341429+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-08-13T20:10:16.745037671+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-08-13T20:10:16.745037671+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-08-13T20:42:42.894471130+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for non leader election runnables 2025-08-13T20:42:42.894471130+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for leader election runnables 2025-08-13T20:42:42.901418100+00:00 stderr F 2025-08-13T20:42:42Z INFO Shutdown signal received, waiting for all workers to finish {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-08-13T20:42:42.901418100+00:00 stderr F 2025-08-13T20:42:42Z INFO All workers finished {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-08-13T20:42:42.902414669+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for caches 2025-08-13T20:42:42.910454040+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for webhooks 2025-08-13T20:42:42.910542823+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for HTTP servers 2025-08-13T20:42:42.913641892+00:00 stderr F 2025-08-13T20:42:42Z INFO shutting down server {"kind": "health probe", "addr": "[::]:8080"} 2025-08-13T20:42:42.913984912+00:00 stderr F 2025-08-13T20:42:42Z INFO shutting down server {"kind": "pprof", "addr": "[::]:6060"} 2025-08-13T20:42:42.915348912+00:00 stderr F 2025-08-13T20:42:42Z INFO controller-runtime.metrics Shutting down metrics server with timeout of 1 minute ././@LongLink0000644000000000000000000000033500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000001522515073043233033062 0ustar zuulzuul2025-10-13T00:15:02.160479659+00:00 stderr F 2025-10-13T00:15:02Z INFO setup starting manager 2025-10-13T00:15:02.164794678+00:00 stderr F 
2025-10-13T00:15:02Z INFO starting server {"kind": "health probe", "addr": "[::]:8080"} 2025-10-13T00:15:02.164866590+00:00 stderr F 2025-10-13T00:15:02Z INFO controller-runtime.metrics Starting metrics server 2025-10-13T00:15:02.165023755+00:00 stderr F 2025-10-13T00:15:02Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":9090", "secure": false} 2025-10-13T00:15:02.165023755+00:00 stderr F 2025-10-13T00:15:02Z INFO starting server {"kind": "pprof", "addr": "[::]:6060"} 2025-10-13T00:15:02.165298793+00:00 stderr F I1013 00:15:02.165219 1 leaderelection.go:250] attempting to acquire leader lease openshift-operator-lifecycle-manager/packageserver-controller-lock... 2025-10-13T00:21:07.406389542+00:00 stderr F I1013 00:21:07.405714 1 leaderelection.go:260] successfully acquired lease openshift-operator-lifecycle-manager/packageserver-controller-lock 2025-10-13T00:21:07.406389542+00:00 stderr F 2025-10-13T00:21:07Z DEBUG events package-server-manager-84d578d794-jw7r2_fe3c844e-38e1-4a11-850c-03d1da983826 became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-operator-lifecycle-manager","name":"packageserver-controller-lock","uid":"0beb9bb7-cfd9-4760-98f3-f0c893f5cf42","apiVersion":"coordination.k8s.io/v1","resourceVersion":"42190"}, "reason": "LeaderElection"} 2025-10-13T00:21:07.408867211+00:00 stderr F 2025-10-13T00:21:07Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1alpha1.ClusterServiceVersion"} 2025-10-13T00:21:07.408886392+00:00 stderr F 2025-10-13T00:21:07Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1.Infrastructure"} 2025-10-13T00:21:07.408895672+00:00 stderr F 2025-10-13T00:21:07Z INFO Starting Controller {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"} 2025-10-13T00:21:07.527832079+00:00 stderr F 2025-10-13T00:21:07Z INFO Starting workers {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "worker count": 1} 2025-10-13T00:21:07.527874201+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"} 2025-10-13T00:21:07.531088901+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-10-13T00:21:07.531111241+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:21:07.837309273+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:21:07.838550108+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-10-13T00:21:07.867211452+00:00 stderr F 2025-10-13T00:21:07Z INFO 
controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-10-13T00:21:07.867285804+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:21:07.867475210+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-10-13T00:21:07.867510551+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:21:07.867568162+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:21:07.867596503+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-10-13T00:21:07.886863744+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-10-13T00:21:07.887319717+00:00 stderr F 2025-10-13T00:21:07Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:23:46.508070465+00:00 stderr F 2025-10-13T00:23:46Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"} 2025-10-13T00:23:46.509447963+00:00 stderr F 2025-10-13T00:23:46Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:23:46.509561276+00:00 stderr F 2025-10-13T00:23:46Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} 2025-10-13T00:23:46.509603707+00:00 stderr F 2025-10-13T00:23:46Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false} 2025-10-13T00:23:46.521711805+00:00 stderr F 2025-10-13T00:23:46Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"} 2025-10-13T00:23:46.521711805+00:00 stderr F 2025-10-13T00:23:46Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}} ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000755000175000017500000000000015073043233033021 5ustar 
zuulzuul././@LongLink0000644000000000000000000000032300000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000755000175000017500000000000015073043233033021 5ustar zuulzuul././@LongLink0000644000000000000000000000033000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000006272015073043233033032 0ustar zuulzuul2025-10-13T00:15:02.742637381+00:00 stderr F I1013 00:15:02.738911 1 cmd.go:241] Using service-serving-cert provided certificates 2025-10-13T00:15:02.742929130+00:00 stderr F I1013 00:15:02.742680 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:15:02.744864278+00:00 stderr F I1013 00:15:02.744288 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:02.848512163+00:00 stderr F I1013 00:15:02.848439 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-10-13T00:15:03.249388504+00:00 stderr F I1013 00:15:03.246111 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:03.249388504+00:00 stderr F W1013 00:15:03.246146 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:03.249388504+00:00 stderr F W1013 00:15:03.246152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
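The secure_serving.go warnings above flag two CBC cipher suites as insecure, and the neighbouring "Forcing use of http/1.1 only" line shows the server pinning ALPN to HTTP/1.1. A minimal standalone sketch of a tls.Config that omits the flagged suites and advertises only http/1.1 (illustrative only, not the operator's actual serving code):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// The two suites flagged in the log,
	// TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 and
	// TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, are deliberately left out.
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		},
		// Advertising only "http/1.1" via ALPN mirrors the
		// "Forcing use of http/1.1 only" behaviour in the log.
		NextProtos: []string{"http/1.1"},
	}
	for _, id := range cfg.CipherSuites {
		fmt.Println(tls.CipherSuiteName(id))
	}
}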
2025-10-13T00:15:03.250827377+00:00 stderr F I1013 00:15:03.250051 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:03.256770476+00:00 stderr F I1013 00:15:03.252635 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:03.261527998+00:00 stderr F I1013 00:15:03.259052 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:03.261680193+00:00 stderr F I1013 00:15:03.260120 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:03.261904829+00:00 stderr F I1013 00:15:03.261439 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:03.261975222+00:00 stderr F I1013 00:15:03.261936 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:03.261985152+00:00 stderr F I1013 00:15:03.261971 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.262044984+00:00 stderr F I1013 00:15:03.261999 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.262597070+00:00 stderr F I1013 00:15:03.262566 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 2025-10-13T00:15:03.264685973+00:00 stderr F I1013 00:15:03.264624 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:03.264685973+00:00 stderr F I1013 00:15:03.264662 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:03.363120251+00:00 stderr F I1013 00:15:03.362888 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.366454911+00:00 stderr F I1013 00:15:03.366411 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:03.366567234+00:00 stderr F I1013 00:15:03.366519 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:20:11.002999817+00:00 stderr F I1013 00:20:11.002455 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-10-13T00:20:11.002999817+00:00 stderr F I1013 00:20:11.002591 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41921", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_df25a723-946f-4da4-ae32-6eeea738de2f became leader 2025-10-13T00:20:11.019762063+00:00 stderr F I1013 00:20:11.019664 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-10-13T00:20:11.019969426+00:00 stderr F I1013 00:20:11.019909 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-10-13T00:20:11.020402652+00:00 stderr F I1013 00:20:11.020348 1 base_controller.go:67] 
Waiting for caches to sync for RemoveStaleConditionsController 2025-10-13T00:20:11.020425910+00:00 stderr F I1013 00:20:11.020400 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-10-13T00:20:11.020425910+00:00 stderr F I1013 00:20:11.020409 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-10-13T00:20:11.020425910+00:00 stderr F I1013 00:20:11.020419 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-10-13T00:20:11.020679330+00:00 stderr F E1013 00:20:11.020598 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-10-13T00:20:11.020701718+00:00 stderr F I1013 00:20:11.020686 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-10-13T00:20:11.020718427+00:00 stderr F I1013 00:20:11.020706 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-10-13T00:20:11.020752274+00:00 stderr F I1013 00:20:11.020728 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-10-13T00:20:11.020752274+00:00 stderr F I1013 00:20:11.020714 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-10-13T00:20:11.020752274+00:00 stderr F I1013 00:20:11.020745 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:20:11.020820619+00:00 stderr F I1013 00:20:11.020768 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-10-13T00:20:11.021164562+00:00 stderr F I1013 00:20:11.021108 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-10-13T00:20:11.021164562+00:00 stderr F I1013 00:20:11.021132 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-10-13T00:20:11.021164562+00:00 stderr F I1013 00:20:11.021142 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 2025-10-13T00:20:11.029197247+00:00 stderr F E1013 00:20:11.028948 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-10-13T00:20:11.120502333+00:00 stderr F I1013 00:20:11.120417 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-10-13T00:20:11.120502333+00:00 stderr F I1013 00:20:11.120459 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2025-10-13T00:20:11.120586516+00:00 stderr F I1013 00:20:11.120486 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-10-13T00:20:11.120586516+00:00 stderr F I1013 00:20:11.120524 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-10-13T00:20:11.120621713+00:00 stderr F I1013 00:20:11.120601 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-10-13T00:20:11.120621713+00:00 stderr F I1013 00:20:11.120609 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 
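The repeated "ConfigOperatorController reconciliation failed: configs.operator.openshift.io \"cluster\" not found" entries above are the controller reporting that its operator configuration object does not exist yet. A minimal sketch of how such an error is typically classified with apimachinery's helpers, reconstructing the exact NotFound condition from the log (the classify helper is hypothetical, not the operator's code):

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// classify distinguishes "the config object is missing" (report and retry on
// the next resync) from other API errors; hypothetical helper for illustration.
func classify(err error) string {
	switch {
	case err == nil:
		return "reconciled"
	case apierrors.IsNotFound(err):
		return "reconciliation failed: operator config not found, retry on next resync"
	default:
		return "transient API error: " + err.Error()
	}
}

func main() {
	// Rebuild the condition seen in the log:
	// configs.operator.openshift.io "cluster" not found.
	notFound := apierrors.NewNotFound(
		schema.GroupResource{Group: "operator.openshift.io", Resource: "configs"},
		"cluster",
	)
	fmt.Println(classify(notFound))
}

The FastControllerResync warning in the same block explains why the failure repeats so quickly: the controller resyncs every 10s regardless of events.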
2025-10-13T00:20:11.120796030+00:00 stderr F I1013 00:20:11.120749 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-10-13T00:20:11.120796030+00:00 stderr F I1013 00:20:11.120780 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-10-13T00:20:11.120796030+00:00 stderr F I1013 00:20:11.120789 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-10-13T00:20:11.120813648+00:00 stderr F I1013 00:20:11.120805 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-10-13T00:20:11.120823527+00:00 stderr F I1013 00:20:11.120815 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 2025-10-13T00:20:11.120875433+00:00 stderr F I1013 00:20:11.120833 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-10-13T00:20:11.120875433+00:00 stderr F I1013 00:20:11.120852 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-10-13T00:20:11.120875433+00:00 stderr F I1013 00:20:11.120868 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2025-10-13T00:20:11.120889302+00:00 stderr F I1013 00:20:11.120879 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2025-10-13T00:20:11.120917450+00:00 stderr F I1013 00:20:11.120779 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-10-13T00:21:11.959530713+00:00 stderr F I1013 00:21:11.958691 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.958637059 +0000 UTC))" 2025-10-13T00:21:11.959530713+00:00 stderr F I1013 00:21:11.959453 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.95942052 +0000 UTC))" 2025-10-13T00:21:11.959530713+00:00 stderr F I1013 00:21:11.959485 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.959463941 +0000 UTC))" 2025-10-13T00:21:11.959530713+00:00 stderr F I1013 00:21:11.959509 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.959493812 +0000 UTC))" 2025-10-13T00:21:11.959573184+00:00 stderr F I1013 00:21:11.959533 1 tlsconfig.go:178] "Loaded client CA" 
index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.959518392 +0000 UTC))" 2025-10-13T00:21:11.959573184+00:00 stderr F I1013 00:21:11.959557 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.959541433 +0000 UTC))" 2025-10-13T00:21:11.959591654+00:00 stderr F I1013 00:21:11.959580 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.959563084 +0000 UTC))" 2025-10-13T00:21:11.959622105+00:00 stderr F I1013 00:21:11.959601 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.959586994 +0000 UTC))" 2025-10-13T00:21:11.959634646+00:00 stderr F I1013 00:21:11.959625 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.959610575 +0000 UTC))" 2025-10-13T00:21:11.959675677+00:00 stderr F I1013 00:21:11.959650 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.959636306 +0000 UTC))" 2025-10-13T00:21:11.959746029+00:00 stderr F I1013 00:21:11.959681 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.959664966 +0000 UTC))" 2025-10-13T00:21:11.959754569+00:00 stderr F I1013 00:21:11.959746 1 tlsconfig.go:178] "Loaded client CA" index=11 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.959727678 +0000 UTC))" 2025-10-13T00:21:11.960182650+00:00 stderr F I1013 00:21:11.960141 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:21:11.960117679 +0000 UTC))" 2025-10-13T00:21:11.960584481+00:00 stderr F I1013 00:21:11.960540 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314503\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:02 +0000 UTC to 2026-10-12 23:15:02 +0000 UTC (now=2025-10-13 00:21:11.960520099 +0000 UTC))" 2025-10-13T00:22:11.021375919+00:00 stderr F E1013 00:22:11.020743 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.126362532+00:00 stderr F W1013 00:22:11.126186 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.126362532+00:00 stderr F E1013 00:22:11.126289 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.136797132+00:00 stderr F W1013 00:22:11.136711 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.136797132+00:00 stderr F E1013 00:22:11.136780 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.149852003+00:00 stderr F W1013 00:22:11.149786 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.149852003+00:00 stderr F E1013 00:22:11.149840 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.173421757+00:00 stderr F W1013 00:22:11.173348 1 
base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.173421757+00:00 stderr F E1013 00:22:11.173389 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.217507733+00:00 stderr F W1013 00:22:11.217437 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.217507733+00:00 stderr F E1013 00:22:11.217491 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.301750658+00:00 stderr F W1013 00:22:11.301674 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.301750658+00:00 stderr F E1013 00:22:11.301717 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.516441692+00:00 stderr F W1013 00:22:11.515910 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.516441692+00:00 stderr F E1013 00:22:11.515995 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.839204261+00:00 stderr F W1013 00:22:11.838993 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:11.839204261+00:00 stderr F E1013 00:22:11.839188 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.482411999+00:00 stderr F W1013 00:22:12.482306 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:12.482411999+00:00 stderr F E1013 00:22:12.482389 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:13.765425210+00:00 stderr F W1013 00:22:13.765314 1 base_controller.go:232] Updating status of 
"KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:13.765425210+00:00 stderr F E1013 00:22:13.765377 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.329633317+00:00 stderr F W1013 00:22:16.328850 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:16.329633317+00:00 stderr F E1013 00:22:16.329598 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.453944049+00:00 stderr F W1013 00:22:21.453232 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:21.453944049+00:00 stderr F E1013 00:22:21.453909 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.702374278+00:00 stderr F W1013 00:22:31.701825 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:31.702374278+00:00 stderr F E1013 00:22:31.702354 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.187576667+00:00 stderr F W1013 00:22:52.186765 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.187576667+00:00 stderr F E1013 00:22:52.187526 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:23:47.284493113+00:00 stderr F I1013 00:23:47.283695 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000033000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000010113315073043233033022 0ustar zuulzuul2025-08-13T20:00:06.609040181+00:00 stderr F I0813 20:00:06.606756 1 cmd.go:241] Using 
service-serving-cert provided certificates 2025-08-13T20:00:06.609040181+00:00 stderr F I0813 20:00:06.607693 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:06.613715154+00:00 stderr F I0813 20:00:06.613562 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:06.745670326+00:00 stderr F I0813 20:00:06.745105 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-08-13T20:00:08.615501452+00:00 stderr F I0813 20:00:08.614612 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:08.615501452+00:00 stderr F W0813 20:00:08.615358 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:08.615501452+00:00 stderr F W0813 20:00:08.615367 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:08.803599465+00:00 stderr F I0813 20:00:08.803212 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:08.804180802+00:00 stderr F I0813 20:00:08.804153 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:08.804909533+00:00 stderr F I0813 20:00:08.804883 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:08.804961744+00:00 stderr F I0813 20:00:08.804948 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:08.811385237+00:00 stderr F I0813 20:00:08.811308 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:08.811453569+00:00 stderr F I0813 20:00:08.811439 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:08.816644177+00:00 stderr F I0813 20:00:08.816601 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:08.823141632+00:00 stderr F I0813 20:00:08.823095 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 
2025-08-13T20:00:08.827510887+00:00 stderr F I0813 20:00:08.827483 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:00:08.828151935+00:00 stderr F I0813 20:00:08.827768 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:08.835083783+00:00 stderr F I0813 20:00:08.827946 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:10.206555929+00:00 stderr F I0813 20:00:10.205128 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:10.206555929+00:00 stderr F I0813 20:00:10.205730 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:10.213300141+00:00 stderr F I0813 20:00:10.213253 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:10.859242570+00:00 stderr F I0813 20:00:10.858742 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-08-13T20:00:10.860707962+00:00 stderr F I0813 20:00:10.860514 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"29031", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_122f4599-0c2f-4c0b-a0bb-7ed9e07d3e2c became leader 2025-08-13T20:00:11.819948463+00:00 stderr F I0813 20:00:11.818486 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820062 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820101 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820196 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.820960 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821011 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821026 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821053 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821058 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821064 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 
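The startup sequence above ("attempting to acquire leader lease openshift-config-operator/config-operator-lock..." followed by "successfully acquired lease" and the LeaderElection event) is client-go's standard Lease-based leader election. A minimal standalone sketch of the same pattern, assuming a kubeconfig in $KUBECONFIG; the identity and timings here are illustrative, not the operator's real values:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The same Lease object the log refers to.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "config-operator-lock",
			Namespace: "openshift-config-operator",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative; generous values in the
		RenewDeadline: 107 * time.Second, // spirit of the "gives 4 retries and
		RetryPeriod:   26 * time.Second,  // allows for 30s of clock skew" line
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease; starting controllers")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}

Only the elected holder goes on to start its controllers, which is why all of the cache-sync and worker-start lines below appear only after the lease is acquired.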
2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.825981 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.826333 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.826354 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-08-13T20:00:11.844577405+00:00 stderr F I0813 20:00:11.844491 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-08-13T20:00:11.845871972+00:00 stderr F E0813 20:00:11.844912 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:11.845871972+00:00 stderr F I0813 20:00:11.844973 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:11.859098619+00:00 stderr F E0813 20:00:11.857896 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:11.968039095+00:00 stderr F E0813 20:00:11.951700 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:12.048349065+00:00 stderr F I0813 20:00:12.048045 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:00:12.048349065+00:00 stderr F I0813 20:00:12.048126 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099542 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099591 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099812 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099867 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-08-13T20:00:12.253356991+00:00 stderr F I0813 20:00:12.250025 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.286079044+00:00 stderr F I0813 20:00:12.272341 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.286079044+00:00 stderr F I0813 20:00:12.274011 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.442111723+00:00 stderr F I0813 20:00:12.442008 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.445729096+00:00 stderr F I0813 20:00:12.445624 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-08-13T20:00:12.450233745+00:00 stderr F I0813 20:00:12.450133 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2025-08-13T20:00:12.450256085+00:00 stderr F I0813 20:00:12.450242 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-08-13T20:00:12.450256085+00:00 stderr F I0813 20:00:12.450249 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2025-08-13T20:00:12.450478772+00:00 stderr F I0813 20:00:12.449988 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.469317419+00:00 stderr F I0813 20:00:12.469145 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-08-13T20:00:12.469317419+00:00 stderr F I0813 20:00:12.469192 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2025-08-13T20:00:12.475613638+00:00 stderr F I0813 20:00:12.475482 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.521284411+00:00 stderr F I0813 20:00:12.521184 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-08-13T20:00:12.521284411+00:00 stderr F I0813 20:00:12.521242 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-08-13T20:00:12.558069769+00:00 stderr F I0813 20:00:12.556643 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-08-13T20:00:12.558069769+00:00 stderr F I0813 20:00:12.556684 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 
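The "Waiting for caches to sync ... Caches are synced" pairs and the reflector "Caches populated" lines above mark the point where each controller's informers finish their initial LIST/WATCH before any workers start. A minimal sketch of that gate using a client-go shared informer factory (assuming a kubeconfig in $KUBECONFIG; the ConfigMap informer stands in for the operator's own informers):

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial LIST/WATCH has populated the local store,
	// the equivalent of the "Caches are synced for ..." lines, then start workers.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		fmt.Fprintln(os.Stderr, "caches never synced")
		os.Exit(1)
	}
	fmt.Println("caches are synced; starting workers")
}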
2025-08-13T20:01:00.024546078+00:00 stderr F I0813 20:01:00.011104 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.01059532 +0000 UTC))" 2025-08-13T20:01:00.024546078+00:00 stderr F I0813 20:01:00.011756 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.011738473 +0000 UTC))" 2025-08-13T20:01:00.049043057+00:00 stderr F I0813 20:01:00.048971 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.011764744 +0000 UTC))" 2025-08-13T20:01:00.049170980+00:00 stderr F I0813 20:01:00.049151 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049107288 +0000 UTC))" 2025-08-13T20:01:00.049355846+00:00 stderr F I0813 20:01:00.049331 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049197771 +0000 UTC))" 2025-08-13T20:01:00.049464969+00:00 stderr F I0813 20:01:00.049447 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049422757 +0000 UTC))" 2025-08-13T20:01:00.049530141+00:00 stderr F I0813 20:01:00.049513 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049491139 +0000 UTC))" 2025-08-13T20:01:00.049596852+00:00 stderr F I0813 20:01:00.049579 1 
tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049557081 +0000 UTC))" 2025-08-13T20:01:00.049661514+00:00 stderr F I0813 20:01:00.049645 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.049620973 +0000 UTC))" 2025-08-13T20:01:00.059181316+00:00 stderr F I0813 20:01:00.049740 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.049719456 +0000 UTC))" 2025-08-13T20:01:00.059327780+00:00 stderr F I0813 20:01:00.059308 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.059262308 +0000 UTC))" 2025-08-13T20:01:00.068147291+00:00 stderr F I0813 20:01:00.068084 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:01:00.068004057 +0000 UTC))" 2025-08-13T20:01:00.068977955+00:00 stderr F I0813 20:01:00.068951 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115208\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115207\" (2025-08-13 19:00:06 +0000 UTC to 2026-08-13 19:00:06 +0000 UTC (now=2025-08-13 20:01:00.068701687 +0000 UTC))" 2025-08-13T20:01:23.351014291+00:00 stderr F I0813 20:01:23.350151 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:23.351014291+00:00 stderr F I0813 20:01:23.350951 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:23.351996309+00:00 stderr F I0813 20:01:23.351484 1 dynamic_serving_content.go:113] 
"Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352336 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:23.352288928 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352389 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:23.35237251 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352411 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:23.352397221 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352427 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:23.352416221 +0000 UTC))" 2025-08-13T20:01:23.352508124+00:00 stderr F I0813 20:01:23.352445 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352434132 +0000 UTC))" 2025-08-13T20:01:23.352508124+00:00 stderr F I0813 20:01:23.352480 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352451412 +0000 UTC))" 2025-08-13T20:01:23.352518574+00:00 stderr F I0813 20:01:23.352505 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:01:23.352486313 +0000 UTC))" 2025-08-13T20:01:23.352563405+00:00 stderr F I0813 20:01:23.352529 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352510784 +0000 UTC))" 2025-08-13T20:01:23.352575506+00:00 stderr F I0813 20:01:23.352567 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:23.352552205 +0000 UTC))" 2025-08-13T20:01:23.352612767+00:00 stderr F I0813 20:01:23.352584 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:23.352574216 +0000 UTC))" 2025-08-13T20:01:23.352666948+00:00 stderr F I0813 20:01:23.352621 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352607877 +0000 UTC))" 2025-08-13T20:01:23.354274304+00:00 stderr F I0813 20:01:23.354230 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:23.354202292 +0000 UTC))" 2025-08-13T20:01:23.354967744+00:00 stderr F I0813 20:01:23.354866 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115208\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115207\" (2025-08-13 19:00:06 +0000 UTC to 2026-08-13 19:00:06 +0000 UTC (now=2025-08-13 20:01:23.354759108 +0000 UTC))" 2025-08-13T20:01:26.622344829+00:00 stderr F I0813 20:01:26.621656 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="7021067932790448a11809da10b860f6f1ea1555731d97a3cf678bc8b9574622", new="dca6c81c3751f96f1b64e72dc06b40fe72f2952cfcac2b16deea87fc6cd08c4d") 2025-08-13T20:01:26.622547835+00:00 stderr F W0813 20:01:26.622499 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was modified 2025-08-13T20:01:26.622707979+00:00 stderr F I0813 20:01:26.622619 1 
observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="e7c4eabcdc7aa32e59c3e68ad4841e132e9166775ff32392cab346655d2dac9f", new="adf358f49f26d932aaa3db3a86640e1ed83874695ab0abb173ca1ba5a73101ec") 2025-08-13T20:01:26.623009248+00:00 stderr F I0813 20:01:26.622955 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:26.623130121+00:00 stderr F I0813 20:01:26.623115 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:26.623421960+00:00 stderr F I0813 20:01:26.623368 1 base_controller.go:172] Shutting down FeatureGateController ... 2025-08-13T20:01:26.623436000+00:00 stderr F I0813 20:01:26.623426 1 base_controller.go:172] Shutting down AWSPlatformServiceLocationController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623444 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623475 1 base_controller.go:172] Shutting down FeatureUpgradeableController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623493 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:26.623617945+00:00 stderr F I0813 20:01:26.623573 1 base_controller.go:172] Shutting down LatencySensitiveRemovalController ... 2025-08-13T20:01:26.623617945+00:00 stderr F I0813 20:01:26.623607 1 base_controller.go:172] Shutting down ConfigOperatorController ... 2025-08-13T20:01:26.624546072+00:00 stderr F I0813 20:01:26.624517 1 base_controller.go:172] Shutting down KubeCloudConfigController ... 2025-08-13T20:01:26.624602753+00:00 stderr F I0813 20:01:26.624589 1 base_controller.go:172] Shutting down MigrationPlatformStatusController ... 2025-08-13T20:01:26.624657085+00:00 stderr F I0813 20:01:26.624644 1 base_controller.go:172] Shutting down StatusSyncer_config-operator ... 2025-08-13T20:01:26.624698276+00:00 stderr F I0813 20:01:26.624676 1 base_controller.go:150] All StatusSyncer_config-operator post start hooks have been terminated 2025-08-13T20:01:26.624765838+00:00 stderr F I0813 20:01:26.624750 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:26.624957463+00:00 stderr F I0813 20:01:26.624931 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:26.625110698+00:00 stderr F I0813 20:01:26.625092 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:26.625173410+00:00 stderr F I0813 20:01:26.625159 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:26.625242542+00:00 stderr F I0813 20:01:26.625229 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:26.626234790+00:00 stderr F I0813 20:01:26.626160 1 base_controller.go:114] Shutting down worker of FeatureUpgradeableController controller ... 2025-08-13T20:01:26.626252370+00:00 stderr F I0813 20:01:26.626240 1 base_controller.go:104] All FeatureUpgradeableController workers have been terminated 2025-08-13T20:01:26.626338233+00:00 stderr F I0813 20:01:26.626291 1 base_controller.go:114] Shutting down worker of AWSPlatformServiceLocationController controller ... 
2025-08-13T20:01:26.626350213+00:00 stderr F I0813 20:01:26.626339 1 base_controller.go:104] All AWSPlatformServiceLocationController workers have been terminated 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626367 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626391 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626142 1 base_controller.go:114] Shutting down worker of FeatureGateController controller ... 2025-08-13T20:01:26.626419325+00:00 stderr F I0813 20:01:26.626407 1 base_controller.go:104] All FeatureGateController workers have been terminated 2025-08-13T20:01:26.626428965+00:00 stderr F I0813 20:01:26.626418 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:26.626428965+00:00 stderr F I0813 20:01:26.626425 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626613 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626682 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626711 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626744 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.630990065+00:00 stderr F I0813 20:01:26.630905 1 base_controller.go:114] Shutting down worker of LatencySensitiveRemovalController controller ... 2025-08-13T20:01:26.630990065+00:00 stderr F I0813 20:01:26.630965 1 base_controller.go:104] All LatencySensitiveRemovalController workers have been terminated 2025-08-13T20:01:26.631095068+00:00 stderr F I0813 20:01:26.631051 1 base_controller.go:114] Shutting down worker of MigrationPlatformStatusController controller ... 2025-08-13T20:01:26.631095068+00:00 stderr F I0813 20:01:26.631077 1 base_controller.go:104] All MigrationPlatformStatusController workers have been terminated 2025-08-13T20:01:26.631125419+00:00 stderr F I0813 20:01:26.631059 1 base_controller.go:114] Shutting down worker of StatusSyncer_config-operator controller ... 2025-08-13T20:01:26.631165940+00:00 stderr F I0813 20:01:26.631152 1 base_controller.go:104] All StatusSyncer_config-operator workers have been terminated 2025-08-13T20:01:26.631203632+00:00 stderr F I0813 20:01:26.631119 1 base_controller.go:114] Shutting down worker of KubeCloudConfigController controller ... 
2025-08-13T20:01:26.631244433+00:00 stderr F I0813 20:01:26.631232 1 base_controller.go:104] All KubeCloudConfigController workers have been terminated 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631340 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631376 1 builder.go:330] server exited 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631415 1 base_controller.go:114] Shutting down worker of ConfigOperatorController controller ... 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631458 1 base_controller.go:104] All ConfigOperatorController workers have been terminated 2025-08-13T20:01:29.264183958+00:00 stderr F W0813 20:01:29.256407 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000033000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000007570415073043233033040 0ustar zuulzuul2025-08-13T20:05:36.187245573+00:00 stderr F I0813 20:05:36.180676 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:36.192640098+00:00 stderr F I0813 20:05:36.189383 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:36.197893678+00:00 stderr F I0813 20:05:36.193740 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:36.478370310+00:00 stderr F I0813 20:05:36.476926 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-08-13T20:05:37.273257573+00:00 stderr F I0813 20:05:37.272955 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:37.273257573+00:00 stderr F W0813 20:05:37.273193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.273257573+00:00 stderr F W0813 20:05:37.273201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.325391825+00:00 stderr F I0813 20:05:37.325120 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:37.325612042+00:00 stderr F I0813 20:05:37.325561 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 
2025-08-13T20:05:37.326370803+00:00 stderr F I0813 20:05:37.326289 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:37.328065692+00:00 stderr F I0813 20:05:37.327701 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:37.328065692+00:00 stderr F I0813 20:05:37.327944 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328474 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328514 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328476 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328555 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:37.328614748+00:00 stderr F I0813 20:05:37.328588 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:37.329384150+00:00 stderr F I0813 20:05:37.328735 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:37.414109896+00:00 stderr F I0813 20:05:37.413550 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-08-13T20:05:37.416395842+00:00 stderr F I0813 20:05:37.416309 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31747", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_81393664-a273-42ce-b620-ccd9229e7705 became leader 2025-08-13T20:05:37.434366396+00:00 stderr F I0813 20:05:37.430576 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:37.434366396+00:00 stderr F I0813 20:05:37.430762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.435024715+00:00 stderr F I0813 20:05:37.434560 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.625336365+00:00 stderr F I0813 20:05:37.625240 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:37.626072746+00:00 stderr F I0813 20:05:37.626024 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-08-13T20:05:37.626157699+00:00 stderr F I0813 20:05:37.626137 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-08-13T20:05:37.626332764+00:00 stderr F I0813 20:05:37.626234 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-08-13T20:05:37.626433157+00:00 stderr F I0813 20:05:37.626375 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 
2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626486 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626567 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626854 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626992 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627100 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627123 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627196 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627203 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 2025-08-13T20:05:37.627753844+00:00 stderr F I0813 20:05:37.627732 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-08-13T20:05:37.628027542+00:00 stderr F E0813 20:05:37.628001 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.630202254+00:00 stderr F I0813 20:05:37.630165 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:05:37.633285683+00:00 stderr F E0813 20:05:37.633261 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.643717082+00:00 stderr F E0813 20:05:37.643665 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.725701009+00:00 stderr F I0813 20:05:37.725637 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:37.725853283+00:00 stderr F I0813 20:05:37.725831 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:37.726215824+00:00 stderr F I0813 20:05:37.726192 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-08-13T20:05:37.726284146+00:00 stderr F I0813 20:05:37.726268 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2025-08-13T20:05:37.726529923+00:00 stderr F I0813 20:05:37.726509 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-08-13T20:05:37.726569094+00:00 stderr F I0813 20:05:37.726556 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 
2025-08-13T20:05:37.730930929+00:00 stderr F I0813 20:05:37.730051 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-08-13T20:05:37.731001201+00:00 stderr F I0813 20:05:37.730983 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-08-13T20:05:37.731230777+00:00 stderr F I0813 20:05:37.731187 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-08-13T20:05:37.731271559+00:00 stderr F I0813 20:05:37.731257 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2025-08-13T20:05:37.731335730+00:00 stderr F I0813 20:05:37.731317 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-08-13T20:05:37.731375092+00:00 stderr F I0813 20:05:37.731362 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-08-13T20:05:37.731427753+00:00 stderr F I0813 20:05:37.731405 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:05:37.731459594+00:00 stderr F I0813 20:05:37.731448 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:05:37.767118245+00:00 stderr F I0813 20:05:37.766661 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:37.827659169+00:00 stderr F I0813 20:05:37.827513 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-08-13T20:05:37.828510483+00:00 stderr F I0813 20:05:37.828486 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 2025-08-13T20:08:37.866996977+00:00 stderr F W0813 20:08:37.863766 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.866996977+00:00 stderr F E0813 20:08:37.865660 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.878356312+00:00 stderr F W0813 20:08:37.878274 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.878477386+00:00 stderr F E0813 20:08:37.878372 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.892624171+00:00 stderr F W0813 20:08:37.892524 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.892673483+00:00 stderr F E0813 20:08:37.892652 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.918055460+00:00 stderr F W0813 20:08:37.917452 1 base_controller.go:232] Updating status of 
"KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.918055460+00:00 stderr F E0813 20:08:37.917522 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.962652209+00:00 stderr F W0813 20:08:37.962491 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.962652209+00:00 stderr F E0813 20:08:37.962563 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.049389966+00:00 stderr F W0813 20:08:38.049292 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.049499499+00:00 stderr F E0813 20:08:38.049381 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.215044335+00:00 stderr F W0813 20:08:38.214875 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.215044335+00:00 stderr F E0813 20:08:38.214976 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.258970315+00:00 stderr F E0813 20:08:38.258730 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.542396601+00:00 stderr F W0813 20:08:38.541764 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.542880965+00:00 stderr F E0813 20:08:38.542361 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.191196853+00:00 stderr F W0813 20:08:39.190668 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.191196853+00:00 stderr F E0813 20:08:39.191134 1 base_controller.go:268] 
KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.485072119+00:00 stderr F W0813 20:08:40.483459 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.485072119+00:00 stderr F E0813 20:08:40.483518 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.050922693+00:00 stderr F W0813 20:08:43.048040 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.050922693+00:00 stderr F E0813 20:08:43.050394 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.184106516+00:00 stderr F W0813 20:08:48.180432 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.184106516+00:00 stderr F E0813 20:08:48.182751 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.427457842+00:00 stderr F W0813 20:08:58.426605 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.427457842+00:00 stderr F E0813 20:08:58.427363 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.989139703+00:00 stderr F I0813 20:09:29.988276 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.132409295+00:00 stderr F I0813 20:09:35.131738 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.851451214+00:00 stderr F I0813 20:09:41.848936 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.746116538+00:00 stderr F I0813 20:09:45.745718 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.935623861+00:00 stderr F I0813 20:09:45.935482 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.245138595+00:00 stderr F I0813 20:09:46.245075 1 reflector.go:351] Caches 
populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.696959325+00:00 stderr F I0813 20:09:52.694671 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.301629342+00:00 stderr F I0813 20:09:55.301139 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.755286399+00:00 stderr F I0813 20:09:55.755198 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=configs from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.900307047+00:00 stderr F I0813 20:09:55.900239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.602007832+00:00 stderr F I0813 20:10:18.601141 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.371746557+00:00 stderr F I0813 20:42:36.370140 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.371746557+00:00 stderr F I0813 20:42:36.371308 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388339156+00:00 stderr F I0813 20:42:36.368481 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388339156+00:00 stderr F I0813 20:42:36.370500 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421560184+00:00 stderr F I0813 20:42:36.370600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370615 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370628 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370640 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.472563384+00:00 stderr F I0813 20:42:36.370687 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514733510+00:00 stderr F I0813 20:42:36.361477 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.879286220+00:00 stderr F W0813 20:42:37.876887 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.884558642+00:00 stderr F E0813 20:42:37.883645 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.894989093+00:00 stderr F W0813 20:42:37.892563 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.894989093+00:00 stderr F E0813 20:42:37.892634 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.916855614+00:00 stderr F W0813 20:42:37.916590 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.916855614+00:00 stderr F E0813 20:42:37.916663 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.953634034+00:00 stderr F W0813 20:42:37.941999 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.953634034+00:00 stderr F E0813 20:42:37.942076 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.999502676+00:00 stderr F W0813 20:42:37.996164 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.999502676+00:00 stderr F E0813 20:42:37.996257 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.089921413+00:00 stderr F W0813 20:42:38.088872 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.090067927+00:00 stderr F E0813 20:42:38.089995 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.254857078+00:00 stderr F W0813 20:42:38.254491 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.254857078+00:00 stderr F E0813 20:42:38.254583 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.579708984+00:00 stderr F W0813 20:42:38.578982 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.579708984+00:00 stderr F E0813 20:42:38.579042 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.223660669+00:00 stderr F W0813 20:42:39.222692 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.223660669+00:00 stderr F E0813 20:42:39.223042 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.239147526+00:00 stderr F E0813 20:42:39.238968 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506215615+00:00 stderr F W0813 20:42:40.505414 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506215615+00:00 stderr F E0813 20:42:40.505864 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.594969504+00:00 stderr F I0813 20:42:40.593861 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.595041106+00:00 stderr F I0813 20:42:40.594988 1 leaderelection.go:285] failed to renew lease openshift-config-operator/config-operator-lock: timed out waiting for the condition 2025-08-13T20:42:40.596103676+00:00 stderr F I0813 20:42:40.596073 1 base_controller.go:172] Shutting down LatencySensitiveRemovalController ... 2025-08-13T20:42:40.596210150+00:00 stderr F I0813 20:42:40.596137 1 base_controller.go:172] Shutting down ConfigOperatorController ... 2025-08-13T20:42:40.596289842+00:00 stderr F I0813 20:42:40.596269 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597314 1 base_controller.go:172] Shutting down MigrationPlatformStatusController ... 2025-08-13T20:42:40.597412994+00:00 stderr F E0813 20:42:40.597329 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597388 1 base_controller.go:172] Shutting down KubeCloudConfigController ... 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597405 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 
2025-08-13T20:42:40.597965530+00:00 stderr F I0813 20:42:40.597940 1 base_controller.go:172] Shutting down FeatureGateController ... 2025-08-13T20:42:40.598019672+00:00 stderr F I0813 20:42:40.598006 1 base_controller.go:172] Shutting down FeatureUpgradeableController ... 2025-08-13T20:42:40.598061883+00:00 stderr F I0813 20:42:40.598050 1 base_controller.go:172] Shutting down StatusSyncer_config-operator ... 2025-08-13T20:42:40.598991580+00:00 stderr F I0813 20:42:40.597152 1 base_controller.go:114] Shutting down worker of LatencySensitiveRemovalController controller ... 2025-08-13T20:42:40.599345920+00:00 stderr F I0813 20:42:40.598080 1 base_controller.go:150] All StatusSyncer_config-operator post start hooks have been terminated 2025-08-13T20:42:40.599588477+00:00 stderr F I0813 20:42:40.599560 1 base_controller.go:172] Shutting down AWSPlatformServiceLocationController ... 2025-08-13T20:42:40.599653929+00:00 stderr F I0813 20:42:40.599635 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.600121662+00:00 stderr F I0813 20:42:40.600094 1 base_controller.go:104] All LatencySensitiveRemovalController workers have been terminated 2025-08-13T20:42:40.600173824+00:00 stderr F W0813 20:42:40.600096 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000755000175000017500000000000015073043233033021 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000000000015073043233033011 0ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000000000015073043233033011 0ustar zuulzuul././@LongLink0000644000000000000000000000024600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015073043232033103 5ustar zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000755000175000017500000000000015073043232033103 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000000017015073043232033103 0ustar zuulzuul2025-10-13T00:13:02.524447200+00:00 stdout F Mon Oct 13 00:13:02 UTC 2025 2025-10-13T00:16:10.923163009+00:00 stdout F ././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-oper0000644000175000017500000000057515073043232033114 0ustar zuulzuul2025-08-13T19:51:17.827657214+00:00 stdout F Wed Aug 13 19:51:17 UTC 2025 2025-08-13T19:51:34.627632491+00:00 stderr F time="2025-08-13T19:51:34Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded" 2025-08-13T19:51:35.128202382+00:00 stdout F ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015073043233033227 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015073043233033227 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063015073043233033230 0ustar zuulzuul2025-08-13T19:51:10.079928701+00:00 stdout F 2025-08-13T19:51:10+00:00 [cnibincopy] Successfully copied files in /usr/src/whereabouts/rhel9/bin/ to /host/opt/cni/bin/upgrade_2927e247-a3e2-4e6c-9d4c-53a5b8439023 2025-08-13T19:51:10.090046880+00:00 stdout F 2025-08-13T19:51:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2927e247-a3e2-4e6c-9d4c-53a5b8439023 to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063015073043233033230 0ustar zuulzuul2025-10-13T00:12:53.715886433+00:00 stdout F 2025-10-13T00:12:53+00:00 [cnibincopy] Successfully copied files in 
/usr/src/whereabouts/rhel9/bin/ to /host/opt/cni/bin/upgrade_a8dbc414-7748-4596-943c-394699942ba8 2025-10-13T00:12:53.721752370+00:00 stdout F 2025-10-13T00:12:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a8dbc414-7748-4596-943c-394699942ba8 to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015073043233033227 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063315073043233033233 0ustar zuulzuul2025-08-13T19:51:02.025704074+00:00 stdout F 2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/route-override/rhel9/bin/ to /host/opt/cni/bin/upgrade_63e00e37-7601-412f-989f-3015d2849f1c 2025-08-13T19:51:02.044774749+00:00 stdout F 2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63e00e37-7601-412f-989f-3015d2849f1c to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000063315073043233033233 0ustar zuulzuul2025-10-13T00:12:51.820678690+00:00 stdout F 2025-10-13T00:12:51+00:00 [cnibincopy] Successfully copied files in /usr/src/route-override/rhel9/bin/ to /host/opt/cni/bin/upgrade_cacec7d8-7b97-4542-9746-8fea1bea1b49 2025-10-13T00:12:51.826820745+00:00 stdout F 2025-10-13T00:12:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cacec7d8-7b97-4542-9746-8fea1bea1b49 to /host/opt/cni/bin/ ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015073043233033227 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000061015073043233033226 0ustar zuulzuul2025-10-13T00:12:51.245401516+00:00 stdout F 2025-10-13T00:12:51+00:00 [cnibincopy] Successfully copied files in /bondcni/rhel9/ to /host/opt/cni/bin/upgrade_52051008-8679-47b8-b129-9ab3d34f4e59 2025-10-13T00:12:51.249898204+00:00 stdout F 2025-10-13T00:12:51+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_52051008-8679-47b8-b129-9ab3d34f4e59 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log
2025-08-13T19:50:59.775239797+00:00 stdout F 2025-08-13T19:50:59+00:00 [cnibincopy] Successfully copied files in /bondcni/rhel9/ to /host/opt/cni/bin/upgrade_b1bfe828-9c07-4461-84b8-c6d1eb367d18
2025-08-13T19:50:59.791945004+00:00 stdout F 2025-08-13T19:50:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b1bfe828-9c07-4461-84b8-c6d1eb367d18 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log
2025-08-13T19:50:57.794478745+00:00 stdout F 2025-08-13T19:50:57+00:00 [cnibincopy] Successfully copied files in /usr/src/plugins/rhel9/bin/ to /host/opt/cni/bin/upgrade_d145407d-9046-4618-84a2-0bd3cab7b7ed
2025-08-13T19:50:57.820920950+00:00 stdout F 2025-08-13T19:50:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d145407d-9046-4618-84a2-0bd3cab7b7ed to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log
2025-10-13T00:12:50.346784182+00:00 stdout F 2025-10-13T00:12:50+00:00 [cnibincopy] Successfully copied files in /usr/src/plugins/rhel9/bin/ to /host/opt/cni/bin/upgrade_8306a3d1-d5b4-4901-86d6-46670dfaa89c
2025-10-13T00:12:50.353289668+00:00 stdout F 2025-10-13T00:12:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8306a3d1-d5b4-4901-86d6-46670dfaa89c to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log
2025-08-13T19:50:47.211892989+00:00 stdout F 2025-08-13T19:50:47+00:00 [cnibincopy] Successfully copied files in /usr/src/egress-router-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb608648-8ddd-4f33-bc1d-0991683cda60
2025-08-13T19:50:47.335048529+00:00 stdout F 2025-08-13T19:50:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb608648-8ddd-4f33-bc1d-0991683cda60 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log
2025-10-13T00:12:49.285042806+00:00 stdout F 2025-10-13T00:12:49+00:00 [cnibincopy] Successfully copied files in /usr/src/egress-router-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bba6244a-29d4-4c68-b727-85189ec68451
2025-10-13T00:12:49.291276704+00:00 stdout F 2025-10-13T00:12:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bba6244a-29d4-4c68-b727-85189ec68451 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log
2025-08-13T19:51:11.553008882+00:00 stdout F Done configuring CNI. Sleep=false
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log
2025-10-13T00:12:58.288910732+00:00 stdout F Done configuring CNI. Sleep=false
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log
2025-10-13T00:15:01.147518608+00:00 stderr F W1013 00:15:01.144163 1 cmd.go:237] Using insecure, self-signed certificates
2025-10-13T00:15:01.147518608+00:00 stderr F I1013 00:15:01.145308 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1760314501 cert, and key in /tmp/serving-cert-2611980011/serving-signer.crt, /tmp/serving-cert-2611980011/serving-signer.key
2025-10-13T00:15:01.883004965+00:00 stderr F I1013 00:15:01.862012 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-10-13T00:15:01.893614733+00:00 stderr F I1013 00:15:01.893558 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:01.987395493+00:00 stderr F I1013 00:15:01.985243 1 builder.go:271] service-ca-controller version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-10-13T00:15:01.991931159+00:00 stderr F I1013 00:15:01.989239 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2611980011/tls.crt::/tmp/serving-cert-2611980011/tls.key" 2025-10-13T00:15:02.463296062+00:00 stderr F I1013 00:15:02.463244 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:02.473387224+00:00 stderr F I1013 00:15:02.473192 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:02.473387224+00:00 stderr F I1013 00:15:02.473213 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:02.473387224+00:00 stderr F I1013 00:15:02.473227 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:15:02.473387224+00:00 stderr F I1013 00:15:02.473232 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:15:02.483366143+00:00 stderr F I1013 00:15:02.480496 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:02.483366143+00:00 stderr F W1013 00:15:02.480782 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:02.483366143+00:00 stderr F W1013 00:15:02.480787 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:02.483366143+00:00 stderr F I1013 00:15:02.480950 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:02.493380553+00:00 stderr F I1013 00:15:02.493017 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-10-13T00:15:02.494355922+00:00 stderr F I1013 00:15:02.493738 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2611980011/tls.crt::/tmp/serving-cert-2611980011/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1760314501\" (2025-10-13 00:15:00 +0000 UTC to 2025-11-12 00:15:01 +0000 UTC (now=2025-10-13 00:15:02.493706553 +0000 UTC))" 2025-10-13T00:15:02.494355922+00:00 stderr F I1013 00:15:02.494053 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... 
2025-10-13T00:15:02.494355922+00:00 stderr F I1013 00:15:02.494243 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:02.494355922+00:00 stderr F I1013 00:15:02.494279 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:02.494355922+00:00 stderr F I1013 00:15:02.494343 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:02.494374003+00:00 stderr F I1013 00:15:02.494358 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.494374003+00:00 stderr F I1013 00:15:02.494361 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:02.494398143+00:00 stderr F I1013 00:15:02.494372 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.494398143+00:00 stderr F I1013 00:15:02.494390 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.494311851 +0000 UTC))" 2025-10-13T00:15:02.494520727+00:00 stderr F I1013 00:15:02.494409 1 secure_serving.go:210] Serving securely on [::]:8443 2025-10-13T00:15:02.494520727+00:00 stderr F I1013 00:15:02.494438 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:02.494520727+00:00 stderr F I1013 00:15:02.494458 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2611980011/tls.crt::/tmp/serving-cert-2611980011/tls.key" 2025-10-13T00:15:02.495346742+00:00 stderr F I1013 00:15:02.494550 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.594918 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.595278 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.595292 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.595469 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.595443261 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.595776 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2611980011/tls.crt::/tmp/serving-cert-2611980011/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] 
issuer=\"service-ca-controller-signer@1760314501\" (2025-10-13 00:15:00 +0000 UTC to 2025-11-12 00:15:01 +0000 UTC (now=2025-10-13 00:15:02.59574072 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596020 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.596005448 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596842 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:02.596825162 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596860 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:02.596849803 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596888 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.596875584 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596902 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:02.596891874 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596917 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.596907315 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596931 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.596921825 +0000 
UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596948 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.596937746 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596962 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.596952226 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596976 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:02.596965507 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.596989 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:02.596980867 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.597003 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:02.596994017 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.597267 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2611980011/tls.crt::/tmp/serving-cert-2611980011/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1760314501\" (2025-10-13 00:15:00 +0000 UTC to 2025-11-12 00:15:01 +0000 UTC (now=2025-10-13 00:15:02.597248935 +0000 UTC))" 2025-10-13T00:15:02.597797261+00:00 stderr F I1013 00:15:02.597518 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:15:02.597506503 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936246 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.936201925 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936754 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.93673403 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936781 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.93676106 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936801 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.936785971 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936821 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936807572 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936840 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936826332 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936860 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936844643 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936896 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936864803 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936932 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.936900824 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936954 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.936940385 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936975 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.936959546 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.936994 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936979096 +0000 UTC))" 2025-10-13T00:21:11.939479503+00:00 stderr F I1013 00:21:11.939106 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2611980011/tls.crt::/tmp/serving-cert-2611980011/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1760314501\" (2025-10-13 00:15:00 +0000 UTC to 2025-11-12 00:15:01 +0000 UTC (now=2025-10-13 00:21:11.939080673 +0000 UTC))" 2025-10-13T00:21:11.939570726+00:00 stderr F I1013 00:21:11.939492 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314502\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314502\" (2025-10-12 23:15:01 +0000 UTC to 2026-10-12 23:15:01 +0000 UTC (now=2025-10-13 00:21:11.939471903 +0000 UTC))" 2025-10-13T00:21:15.485552355+00:00 stderr F I1013 00:21:15.485493 1 leaderelection.go:260] successfully acquired lease openshift-service-ca/service-ca-controller-lock 2025-10-13T00:21:15.485775821+00:00 stderr F I1013 00:21:15.485680 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", 
Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0db0ad19-cc12-475d-ab84-bad75b08b334", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42338", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-666f99b6f-kk8kg_d7a7960e-c748-4c51-b7d5-c15bfef127ea became leader 2025-10-13T00:21:15.489894362+00:00 stderr F I1013 00:21:15.489835 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector 2025-10-13T00:21:15.489894362+00:00 stderr F I1013 00:21:15.489852 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector 2025-10-13T00:21:15.489932613+00:00 stderr F I1013 00:21:15.489922 1 base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector 2025-10-13T00:21:15.489941653+00:00 stderr F I1013 00:21:15.489931 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector 2025-10-13T00:21:15.489976374+00:00 stderr F I1013 00:21:15.489945 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector 2025-10-13T00:21:15.490024195+00:00 stderr F I1013 00:21:15.490001 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController 2025-10-13T00:21:15.490050846+00:00 stderr F I1013 00:21:15.490003 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector 2025-10-13T00:21:15.490124368+00:00 stderr F I1013 00:21:15.490092 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController 2025-10-13T00:21:15.590257360+00:00 stderr F I1013 00:21:15.590202 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector 2025-10-13T00:21:15.590257360+00:00 stderr F I1013 00:21:15.590224 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590257360+00:00 stderr F I1013 00:21:15.590232 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590257360+00:00 stderr F I1013 00:21:15.590235 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590257360+00:00 stderr F I1013 00:21:15.590240 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590317641+00:00 stderr F I1013 00:21:15.590244 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590317641+00:00 stderr F I1013 00:21:15.590304 1 base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector 2025-10-13T00:21:15.590317641+00:00 stderr F I1013 00:21:15.590310 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590317641+00:00 stderr F I1013 00:21:15.590314 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590351822+00:00 stderr F I1013 00:21:15.590317 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590351822+00:00 stderr F I1013 00:21:15.590321 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... 2025-10-13T00:21:15.590351822+00:00 stderr F I1013 00:21:15.590340 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... 
2025-10-13T00:21:15.590499466+00:00 stderr F I1013 00:21:15.590446 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector 2025-10-13T00:21:15.590499466+00:00 stderr F I1013 00:21:15.590477 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... 2025-10-13T00:21:15.590499466+00:00 stderr F I1013 00:21:15.590488 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... 2025-10-13T00:21:15.590499466+00:00 stderr F I1013 00:21:15.590493 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... 2025-10-13T00:21:15.590513997+00:00 stderr F I1013 00:21:15.590497 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... 2025-10-13T00:21:15.590513997+00:00 stderr F I1013 00:21:15.590502 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... 2025-10-13T00:21:15.695476759+00:00 stderr F I1013 00:21:15.695414 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector 2025-10-13T00:21:15.695561732+00:00 stderr F I1013 00:21:15.695548 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.695602123+00:00 stderr F I1013 00:21:15.695591 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.695630163+00:00 stderr F I1013 00:21:15.695619 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.695657384+00:00 stderr F I1013 00:21:15.695647 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.695684115+00:00 stderr F I1013 00:21:15.695674 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.695970353+00:00 stderr F I1013 00:21:15.695953 1 configmap.go:109] updating configmap openstack-operators/openshift-service-ca.crt with the service signing CA bundle 2025-10-13T00:21:15.696454456+00:00 stderr F I1013 00:21:15.696436 1 configmap.go:109] updating configmap openstack/openshift-service-ca.crt with the service signing CA bundle 2025-10-13T00:21:15.700376731+00:00 stderr F I1013 00:21:15.698397 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController 2025-10-13T00:21:15.700376731+00:00 stderr F I1013 00:21:15.698422 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... 2025-10-13T00:21:15.700376731+00:00 stderr F I1013 00:21:15.698435 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... 2025-10-13T00:21:15.700376731+00:00 stderr F I1013 00:21:15.698441 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... 2025-10-13T00:21:15.700376731+00:00 stderr F I1013 00:21:15.698446 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... 2025-10-13T00:21:15.700376731+00:00 stderr F I1013 00:21:15.698452 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 2025-10-13T00:21:15.701001418+00:00 stderr F I1013 00:21:15.700978 1 base_controller.go:73] Caches are synced for CRDCABundleInjector 2025-10-13T00:21:15.701036919+00:00 stderr F I1013 00:21:15.701025 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... 
2025-10-13T00:21:15.701064150+00:00 stderr F I1013 00:21:15.701054 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... 2025-10-13T00:21:15.701103111+00:00 stderr F I1013 00:21:15.701093 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... 2025-10-13T00:21:15.701130511+00:00 stderr F I1013 00:21:15.701119 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... 2025-10-13T00:21:15.701159092+00:00 stderr F I1013 00:21:15.701148 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... 2025-10-13T00:21:15.701235374+00:00 stderr F I1013 00:21:15.701223 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector 2025-10-13T00:21:15.701263345+00:00 stderr F I1013 00:21:15.701253 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.701291326+00:00 stderr F I1013 00:21:15.701281 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.701317666+00:00 stderr F I1013 00:21:15.701307 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.701364398+00:00 stderr F I1013 00:21:15.701352 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.701396088+00:00 stderr F I1013 00:21:15.701386 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-10-13T00:21:15.701453580+00:00 stderr F I1013 00:21:15.701442 1 base_controller.go:73] Caches are synced for ServiceServingCertController 2025-10-13T00:21:15.701480921+00:00 stderr F I1013 00:21:15.701471 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... 2025-10-13T00:21:15.701506091+00:00 stderr F I1013 00:21:15.701496 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... 2025-10-13T00:21:15.701531262+00:00 stderr F I1013 00:21:15.701521 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... 2025-10-13T00:21:15.701558103+00:00 stderr F I1013 00:21:15.701548 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... 2025-10-13T00:21:15.701585034+00:00 stderr F I1013 00:21:15.701574 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... 
2025-10-13T00:22:15.507185390+00:00 stderr F E1013 00:22:15.506503 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log
2025-08-13T19:59:53.905156333+00:00 stderr F W0813 19:59:53.904104 1 cmd.go:237] Using insecure, self-signed certificates
2025-08-13T19:59:53.905741480+00:00 stderr F I0813 19:59:53.905400 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1755115193 cert, and key in /tmp/serving-cert-2770977124/serving-signer.crt, /tmp/serving-cert-2770977124/serving-signer.key
2025-08-13T19:59:56.792955952+00:00 stderr F I0813 19:59:56.785561 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:56.795018501+00:00 stderr F I0813 19:59:56.794758 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:57.536248119+00:00 stderr F I0813 19:59:57.532205 1 builder.go:271] service-ca-controller version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5-
2025-08-13T19:59:57.570850555+00:00 stderr F I0813 19:59:57.568995 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key"
2025-08-13T20:00:00.381378871+00:00 stderr F I0813 20:00:00.361664 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:00:00.690333237+00:00 stderr F I0813 20:00:00.690271 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-08-13T20:00:00.690551553+00:00 stderr F I0813 20:00:00.690534 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-08-13T20:00:00.690677027+00:00 stderr F I0813 20:00:00.690658 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-08-13T20:00:00.690713028+00:00 stderr F I0813 20:00:00.690701 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200
2025-08-13T20:00:00.779052026+00:00 stderr F I0813 20:00:00.773924 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-08-13T20:00:00.779052026+00:00 stderr F I0813 20:00:00.774096 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T20:00:00.779052026+00:00 stderr F W0813 20:00:00.774116 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:00:00.779052026+00:00 stderr F W0813 20:00:00.774123 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:00:00.877103421+00:00 stderr F I0813 20:00:00.870488 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:00.877103421+00:00 stderr F I0813 20:00:00.871260 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.966590 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967111 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967373 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967386 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967427 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967435 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:00.978537222+00:00 stderr F I0813 20:00:00.978440 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:00:01.237433212+00:00 stderr F I0813 20:00:00.994394 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:00.994350203 +0000 UTC))" 2025-08-13T20:00:01.275449706+00:00 stderr F I0813 20:00:01.275385 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:01.279296405+00:00 stderr F I0813 20:00:01.010429 1 leaderelection.go:260] successfully acquired lease openshift-service-ca/service-ca-controller-lock 2025-08-13T20:00:01.279622185+00:00 stderr F I0813 20:00:01.012093 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0db0ad19-cc12-475d-ab84-bad75b08b334", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-666f99b6f-kk8kg_02e86f09-10c8-4c1e-ad51-e1a5d4b40977 became leader 2025-08-13T20:00:01.279657306+00:00 stderr F I0813 20:00:01.084581 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:01.308670713+00:00 stderr F I0813 20:00:01.196057 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:01.530259199+00:00 stderr F I0813 20:00:01.530197 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector 2025-08-13T20:00:01.530383673+00:00 stderr F I0813 20:00:01.530364 1 
base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector 2025-08-13T20:00:01.530430114+00:00 stderr F I0813 20:00:01.530417 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector 2025-08-13T20:00:01.530882487+00:00 stderr F I0813 20:00:01.530768 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:01.530729443 +0000 UTC))" 2025-08-13T20:00:01.530963879+00:00 stderr F I0813 20:00:01.530940 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:00:01.531060582+00:00 stderr F I0813 20:00:01.531037 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:00:01.531119894+00:00 stderr F I0813 20:00:01.531102 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector 2025-08-13T20:00:01.531203066+00:00 stderr F I0813 20:00:01.531180 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:01.531558186+00:00 stderr F I0813 20:00:01.531528 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:01.531387811 +0000 UTC))" 2025-08-13T20:00:01.531647709+00:00 stderr F I0813 20:00:01.531623 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:01.531593267 +0000 UTC))" 2025-08-13T20:00:01.531753892+00:00 stderr F I0813 20:00:01.531728 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:01.53169742 +0000 UTC))" 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.536568 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.536977 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.558136 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController 2025-08-13T20:00:01.583302481+00:00 stderr F I0813 20:00:01.583259 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 
+0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:01.583185308 +0000 UTC))" 2025-08-13T20:00:01.583375023+00:00 stderr F I0813 20:00:01.583354 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583331552 +0000 UTC))" 2025-08-13T20:00:01.583433285+00:00 stderr F I0813 20:00:01.583416 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583397484 +0000 UTC))" 2025-08-13T20:00:01.583497847+00:00 stderr F I0813 20:00:01.583478 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583459136 +0000 UTC))" 2025-08-13T20:00:01.583627270+00:00 stderr F I0813 20:00:01.583607 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583585969 +0000 UTC))" 2025-08-13T20:00:01.585390851+00:00 stderr F I0813 20:00:01.585359 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.585325739 +0000 UTC))" 2025-08-13T20:00:01.586007958+00:00 stderr F I0813 20:00:01.585984 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:01.585965167 +0000 UTC))" 2025-08-13T20:00:01.597391783+00:00 stderr F I0813 20:00:01.597350 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:01.59730462 +0000 UTC))" 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670628 1 
base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670675 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670687 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670692 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670703 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.680986 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681021 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681031 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681038 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.691703231+00:00 stderr F I0813 20:00:01.691667 1 apiservice.go:62] updating apiservice v1.apps.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.703267051+00:00 stderr F I0813 20:00:01.703221 1 apiservice.go:62] updating apiservice v1.authorization.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.761934793+00:00 stderr F I0813 20:00:01.761658 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783472 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783523 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783573 1 apiservice.go:62] updating apiservice v1.build.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783755 1 apiservice.go:62] updating apiservice v1.image.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783893 1 apiservice.go:62] updating apiservice v1.oauth.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.880091571+00:00 stderr F I0813 20:00:01.880026 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector 2025-08-13T20:00:01.880192074+00:00 stderr F I0813 20:00:01.880176 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880236895+00:00 stderr F I0813 20:00:01.880223 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880268266+00:00 stderr F I0813 20:00:01.880256 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880304517+00:00 stderr F I0813 20:00:01.880290 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... 
2025-08-13T20:00:01.880334788+00:00 stderr F I0813 20:00:01.880323 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880497083+00:00 stderr F I0813 20:00:01.880473 1 admissionwebhook.go:116] updating validatingwebhookconfiguration controlplanemachineset.machine.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.882019036+00:00 stderr F I0813 20:00:01.881994 1 admissionwebhook.go:116] updating validatingwebhookconfiguration multus.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.922162840+00:00 stderr F I0813 20:00:01.918431 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector 2025-08-13T20:00:02.498433851+00:00 stderr F I0813 20:00:02.491383 1 apiservice.go:62] updating apiservice v1.project.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.498640 1 apiservice.go:62] updating apiservice v1.quota.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.501503 1 apiservice.go:62] updating apiservice v1.security.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.506265 1 apiservice.go:62] updating apiservice v1.route.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.878309293+00:00 stderr F E0813 20:00:02.873725 1 base_controller.go:266] "APIServiceCABundleInjector" controller failed to sync "v1.apps.openshift.io", err: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.apps.openshift.io": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.878309293+00:00 stderr F I0813 20:00:02.874328 1 apiservice.go:62] updating apiservice v1.template.openshift.io with the service signing CA bundle 2025-08-13T20:00:03.200411708+00:00 stderr F I0813 20:00:03.183130 1 apiservice.go:62] updating apiservice v1.user.openshift.io with the service signing CA bundle 2025-08-13T20:00:03.200411708+00:00 stderr F I0813 20:00:03.183506 1 apiservice.go:62] updating apiservice v1.apps.openshift.io with the service signing CA bundle 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292110 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:06.292061932 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292742 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:06.292719861 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292769 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:06.292749792 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292874 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:06.292826134 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292902 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292885976 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292927 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292909116 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292951 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292935147 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292974 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292957398 +0000 UTC))" 2025-08-13T20:00:06.293007149+00:00 stderr F I0813 20:00:06.292999 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:06.292980798 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293026 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.293007269 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293450 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:06.293427291 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293893 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:06.293768051 +0000 UTC))" 2025-08-13T20:00:06.488525964+00:00 stderr F I0813 20:00:06.485459 1 base_controller.go:73] Caches are synced for ServiceServingCertController 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507060 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507115 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507121 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507130 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507135 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.486469 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518259 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518276 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518281 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518290 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518295 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.548071772+00:00 stderr F I0813 20:00:06.547959 1 base_controller.go:73] Caches are synced for CRDCABundleInjector 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548118 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548214 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... 
2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548221 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548227 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548231 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.745229034+00:00 stderr F I0813 20:00:06.745082 1 crd.go:69] updating customresourcedefinition alertmanagerconfigs.monitoring.coreos.com conversion webhook config with the service signing CA bundle 2025-08-13T20:00:06.879052430+00:00 stderr F I0813 20:00:06.878991 1 crd.go:69] updating customresourcedefinition consoleplugins.console.openshift.io conversion webhook config with the service signing CA bundle 2025-08-13T20:00:11.997074793+00:00 stderr F I0813 20:00:11.996116 1 trace.go:236] Trace[2035768624]: "DeltaFIFO Pop Process" ID:openshift-authentication/kube-root-ca.crt,Depth:468,Reason:slow event handlers blocking the queue (13-Aug-2025 20:00:10.956) (total time: 1039ms): 2025-08-13T20:00:11.997074793+00:00 stderr F Trace[2035768624]: [1.039966012s] [1.039966012s] END 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232646 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232763 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232822 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232828 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232855 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232862 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232933 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232940 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232945 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232952 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232957 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232962 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... 
2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233002 1 configmap.go:109] updating configmap default/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233395 1 configmap.go:109] updating configmap hostpath-provisioner/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233493 1 configmap.go:109] updating configmap kube-node-lease/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233767 1 configmap.go:109] updating configmap kube-public/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233979 1 configmap.go:109] updating configmap kube-system/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.307510945+00:00 stderr F I0813 20:00:12.307368 1 configmap.go:109] updating configmap openshift-apiserver-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.312396304+00:00 stderr F I0813 20:00:12.312358 1 configmap.go:109] updating configmap openshift-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.345914 1 configmap.go:109] updating configmap openshift-authentication-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.346312 1 configmap.go:109] updating configmap openshift-authentication-operator/service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.346883 1 configmap.go:109] updating configmap openshift-authentication/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.363920953+00:00 stderr F I0813 20:00:12.359612 1 configmap.go:109] updating configmap openshift-authentication/v4-0-config-system-service-ca with the service signing CA bundle 2025-08-13T20:00:12.390245744+00:00 stderr F I0813 20:00:12.390087 1 configmap.go:109] updating configmap openshift-cloud-network-config-controller/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.653877521+00:00 stderr F I0813 20:00:12.652872 1 configmap.go:109] updating configmap openshift-cloud-platform-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.821553 1 configmap.go:109] updating configmap openshift-cluster-machine-approver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.822059 1 configmap.go:109] updating configmap openshift-cluster-samples-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.835767 1 configmap.go:109] updating configmap openshift-cluster-storage-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.876028596+00:00 stderr F I0813 20:00:12.873745 1 configmap.go:109] updating configmap openshift-cluster-version/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.012099386+00:00 stderr F I0813 20:00:13.003114 1 configmap.go:109] updating configmap openshift-config-managed/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.173388725+00:00 stderr F I0813 20:00:13.173331 1 configmap.go:109] updating 
configmap openshift-config-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.377111194+00:00 stderr F I0813 20:00:13.355691 1 configmap.go:109] updating configmap openshift-config/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.601168223+00:00 stderr F I0813 20:00:13.600240 1 configmap.go:109] updating configmap openshift-console-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.812380335+00:00 stderr F I0813 20:00:13.806721 1 configmap.go:109] updating configmap openshift-console-user-settings/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:14.609907106+00:00 stderr F I0813 20:00:14.607658 1 configmap.go:109] updating configmap openshift-console/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:14.727356554+00:00 stderr F I0813 20:00:14.724130 1 configmap.go:109] updating configmap openshift-console/service-ca with the service signing CA bundle 2025-08-13T20:00:15.440417256+00:00 stderr F I0813 20:00:15.439282 1 configmap.go:109] updating configmap openshift-controller-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:15.515047674+00:00 stderr F I0813 20:00:15.513957 1 configmap.go:109] updating configmap openshift-controller-manager/openshift-service-ca with the service signing CA bundle 2025-08-13T20:00:16.540070781+00:00 stderr F I0813 20:00:16.539585 1 configmap.go:109] updating configmap openshift-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.540382920+00:00 stderr F I0813 20:00:16.540302 1 configmap.go:109] updating configmap openshift-dns-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.560032700+00:00 stderr F I0813 20:00:16.559963 1 configmap.go:109] updating configmap openshift-dns/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.560454953+00:00 stderr F I0813 20:00:16.560428 1 configmap.go:109] updating configmap openshift-etcd-operator/etcd-service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:16.687434013+00:00 stderr F I0813 20:00:16.680442 1 configmap.go:109] updating configmap openshift-etcd-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.762941806+00:00 stderr F I0813 20:00:16.750734 1 configmap.go:109] updating configmap openshift-etcd/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792458758+00:00 stderr F I0813 20:00:16.792392 1 configmap.go:109] updating configmap openshift-host-network/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792644173+00:00 stderr F I0813 20:00:16.792622 1 configmap.go:109] updating configmap openshift-image-registry/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792730676+00:00 stderr F I0813 20:00:16.792711 1 configmap.go:109] updating configmap openshift-image-registry/serviceca with the service signing CA bundle 2025-08-13T20:00:16.796090031+00:00 stderr F I0813 20:00:16.796058 1 configmap.go:109] updating configmap openshift-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.818726757+00:00 stderr F I0813 20:00:16.818316 1 configmap.go:109] updating configmap openshift-ingress-canary/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.049716543+00:00 stderr F I0813 20:00:17.049660 1 
configmap.go:109] updating configmap openshift-ingress-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.317574771+00:00 stderr F I0813 20:00:17.315887 1 configmap.go:109] updating configmap openshift-ingress/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.455584107+00:00 stderr F I0813 20:00:17.452246 1 configmap.go:109] updating configmap openshift-ingress/service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:17.570884264+00:00 stderr F I0813 20:00:17.570709 1 configmap.go:109] updating configmap openshift-kni-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.955957004+00:00 stderr F I0813 20:00:17.953019 1 configmap.go:109] updating configmap openshift-kube-apiserver-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.955957004+00:00 stderr F I0813 20:00:17.953016 1 configmap.go:109] updating configmap openshift-kube-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.188114034+00:00 stderr F I0813 20:00:18.187887 1 configmap.go:109] updating configmap openshift-kube-controller-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.522050445+00:00 stderr F I0813 20:00:18.521633 1 configmap.go:109] updating configmap openshift-kube-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.825947690+00:00 stderr F I0813 20:00:18.825317 1 configmap.go:109] updating configmap openshift-kube-scheduler-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.933481736+00:00 stderr F I0813 20:00:18.886131 1 configmap.go:109] updating configmap openshift-kube-scheduler/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.208135938+00:00 stderr F I0813 20:00:19.207619 1 configmap.go:109] updating configmap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.587523646+00:00 stderr F I0813 20:00:19.586045 1 configmap.go:109] updating configmap openshift-kube-storage-version-migrator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.606589159+00:00 stderr F I0813 20:00:19.606153 1 configmap.go:109] updating configmap openshift-machine-api/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.009422926+00:00 stderr F I0813 20:00:20.002186 1 configmap.go:109] updating configmap openshift-machine-config-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.009422926+00:00 stderr F I0813 20:00:20.002507 1 configmap.go:109] updating configmap openshift-marketplace/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.895244294+00:00 stderr F I0813 20:00:20.890157 1 configmap.go:109] updating configmap openshift-monitoring/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.895421089+00:00 stderr F I0813 20:00:20.895375 1 request.go:697] Waited for 1.250420235s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-machine-api/configmaps/openshift-service-ca.crt 2025-08-13T20:00:21.179931351+00:00 stderr F I0813 20:00:21.005663 1 configmap.go:109] updating configmap openshift-multus/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.188084174+00:00 stderr F 
I0813 20:00:21.016159 1 configmap.go:109] updating configmap openshift-network-node-identity/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.219745687+00:00 stderr F I0813 20:00:21.016256 1 configmap.go:109] updating configmap openshift-network-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.220406856+00:00 stderr F I0813 20:00:21.014189 1 configmap.go:109] updating configmap openshift-network-diagnostics/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.986147588+00:00 stderr F I0813 20:00:21.974352 1 configmap.go:109] updating configmap openshift-node/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.995887216+00:00 stderr F I0813 20:00:21.993021 1 configmap.go:109] updating configmap openshift-nutanix-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042403 1 configmap.go:109] updating configmap openshift-oauth-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042635 1 configmap.go:109] updating configmap openshift-openstack-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042704 1 configmap.go:109] updating configmap openshift-operator-lifecycle-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.133694216+00:00 stderr F I0813 20:00:22.130556 1 configmap.go:109] updating configmap openshift-operators/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.139828371+00:00 stderr F I0813 20:00:22.138307 1 configmap.go:109] updating configmap openshift-ovirt-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.514936557+00:00 stderr F I0813 20:00:22.506140 1 configmap.go:109] updating configmap openshift-ovn-kubernetes/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.620769205+00:00 stderr F I0813 20:00:22.591045 1 configmap.go:109] updating configmap openshift-route-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.145944782+00:00 stderr F I0813 20:00:23.144593 1 configmap.go:109] updating configmap openshift-service-ca-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.185630594+00:00 stderr F I0813 20:00:23.185527 1 configmap.go:109] updating configmap openshift-service-ca/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.222596558+00:00 stderr F I0813 20:00:23.222544 1 configmap.go:109] updating configmap openshift-user-workload-monitoring/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.344223196+00:00 stderr F I0813 20:00:23.337740 1 configmap.go:109] updating configmap openshift-vsphere-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.801506735+00:00 stderr F I0813 20:00:23.795233 1 configmap.go:109] updating configmap openshift/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.991128 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] 
issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.990982871 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.991967 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.991945759 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992043 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.99197661 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992090 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.992072732 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992118 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992102963 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992147 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992130494 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992191 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992179605 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992209 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 
+0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992196796 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992227 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.992214366 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992253 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.992242747 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992278 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992259978 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992932 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:59.992908786 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.993287 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:59.993263116 +0000 UTC))" 2025-08-13T20:03:15.140543818+00:00 stderr F E0813 20:03:15.139505 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.507620624+00:00 stderr F E0813 20:04:15.506936 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.192296423+00:00 stderr F E0813 20:05:15.190409 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.387910163+00:00 stderr F 
I0813 20:42:36.387265 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388108869+00:00 stderr F I0813 20:42:36.387996 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.386245 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.386322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.396159 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.396305 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429408 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429654 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429735 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.745388331+00:00 stderr F I0813 20:42:41.744466 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.745575776+00:00 stderr F I0813 20:42:41.745476 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.745575776+00:00 stderr F I0813 20:42:41.745524 1 base_controller.go:172] Shutting down ConfigMapCABundleInjector ... 2025-08-13T20:42:41.745592667+00:00 stderr F I0813 20:42:41.745571 1 base_controller.go:172] Shutting down LegacyVulnerableConfigMapCABundleInjector ... 2025-08-13T20:42:41.745592667+00:00 stderr F I0813 20:42:41.745575 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.745602877+00:00 stderr F I0813 20:42:41.745590 1 base_controller.go:172] Shutting down CRDCABundleInjector ... 2025-08-13T20:42:41.745602877+00:00 stderr F I0813 20:42:41.745594 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:41.745647888+00:00 stderr F I0813 20:42:41.745613 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:41.745647888+00:00 stderr F I0813 20:42:41.745637 1 base_controller.go:172] Shutting down ServiceServingCertUpdateController ... 2025-08-13T20:42:41.745754302+00:00 stderr F I0813 20:42:41.745684 1 base_controller.go:172] Shutting down ServiceServingCertController ... 2025-08-13T20:42:41.745754302+00:00 stderr F I0813 20:42:41.745701 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.745857274+00:00 stderr F I0813 20:42:41.745770 1 base_controller.go:172] Shutting down ValidatingWebhookCABundleInjector ... 
2025-08-13T20:42:41.745857274+00:00 stderr F I0813 20:42:41.745820 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.745876575+00:00 stderr F I0813 20:42:41.745860 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:41.745876575+00:00 stderr F I0813 20:42:41.745868 1 base_controller.go:172] Shutting down APIServiceCABundleInjector ... 2025-08-13T20:42:41.745887015+00:00 stderr F I0813 20:42:41.745876 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:41.745897006+00:00 stderr F I0813 20:42:41.745886 1 base_controller.go:172] Shutting down MutatingWebhookCABundleInjector ... 2025-08-13T20:42:41.745951497+00:00 stderr F I0813 20:42:41.745904 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.745972808+00:00 stderr F I0813 20:42:41.745951 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.745972808+00:00 stderr F I0813 20:42:41.745961 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.745969 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.745980 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.746006 1 base_controller.go:104] All ConfigMapCABundleInjector workers have been terminated 2025-08-13T20:42:41.746335778+00:00 stderr F I0813 20:42:41.746152 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.746543974+00:00 stderr F I0813 20:42:41.746486 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:42:41.746559525+00:00 stderr F I0813 20:42:41.746540 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:41.746569765+00:00 stderr F I0813 20:42:41.746559 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.746605376+00:00 stderr F I0813 20:42:41.746584 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.746689328+00:00 stderr F I0813 20:42:41.746642 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:42:41.746870454+00:00 stderr F I0813 20:42:41.746834 1 base_controller.go:114] Shutting down worker of ServiceServingCertController controller ... 2025-08-13T20:42:41.746870454+00:00 stderr F I0813 20:42:41.746864 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746938256+00:00 stderr F I0813 20:42:41.746906 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746951286+00:00 stderr F I0813 20:42:41.746936 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2025-08-13T20:42:41.746951286+00:00 stderr F I0813 20:42:41.746945 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746962886+00:00 stderr F I0813 20:42:41.746952 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746974507+00:00 stderr F I0813 20:42:41.746962 1 base_controller.go:104] All LegacyVulnerableConfigMapCABundleInjector workers have been terminated 2025-08-13T20:42:41.747098780+00:00 stderr F I0813 20:42:41.747030 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:42:41.747098780+00:00 stderr F I0813 20:42:41.747076 1 builder.go:302] server exited 2025-08-13T20:42:41.747499952+00:00 stderr F E0813 20:42:41.747390 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.747646846+00:00 stderr F W0813 20:42:41.747540 1 leaderelection.go:85] leader election lost 2025-08-13T20:42:41.747646846+00:00 stderr F I0813 20:42:41.747611 1 base_controller.go:114] Shutting down worker of ServiceServingCertController controller ... ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000755000175000017500000000000015073043234033066 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000755000175000017500000000000015073043234033066 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000000202015073043234033062 0ustar zuulzuul2025-10-13T00:15:01.170430505+00:00 stderr F W1013 00:15:01.168615 1 deprecated.go:66] 2025-10-13T00:15:01.170430505+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:15:01.170430505+00:00 stderr F 2025-10-13T00:15:01.170430505+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:15:01.170430505+00:00 stderr F 2025-10-13T00:15:01.170430505+00:00 stderr F =============================================== 2025-10-13T00:15:01.170430505+00:00 stderr F 2025-10-13T00:15:01.170430505+00:00 stderr F I1013 00:15:01.169718 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:15:01.170430505+00:00 stderr F I1013 00:15:01.169771 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:15:01.194455835+00:00 stderr F I1013 00:15:01.190705 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393 2025-10-13T00:15:01.194455835+00:00 stderr F I1013 00:15:01.191381 1 kube-rbac-proxy.go:402] Listening securely on :9393 ././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000000222515073043234033071 0ustar zuulzuul2025-08-13T19:59:39.171318469+00:00 stderr F W0813 19:59:39.169690 1 deprecated.go:66] 2025-08-13T19:59:39.171318469+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:39.171318469+00:00 stderr F 2025-08-13T19:59:39.171318469+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:39.171318469+00:00 stderr F 2025-08-13T19:59:39.171318469+00:00 stderr F =============================================== 2025-08-13T19:59:39.171318469+00:00 stderr F 2025-08-13T19:59:39.189878528+00:00 stderr F I0813 19:59:39.189460 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:39.189878528+00:00 stderr F I0813 19:59:39.189516 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:39.255153229+00:00 stderr F I0813 19:59:39.253915 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393 2025-08-13T19:59:39.255217230+00:00 stderr F I0813 19:59:39.255157 1 kube-rbac-proxy.go:402] Listening securely on :9393 2025-08-13T20:42:42.013922763+00:00 stderr F I0813 20:42:42.013004 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000030200000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000755000175000017500000000000015073043234033066 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000006065415073043234033103 0ustar zuulzuul2025-10-13T00:17:29.189991638+00:00 stderr F 2025-10-13T00:17:29.189Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-10-13T00:17:29.211134103+00:00 stderr F 2025-10-13T00:17:29.211Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-10-13T00:17:29.211169624+00:00 stderr F 2025-10-13T00:17:29.211Z INFO operator.main 
ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-10-13T00:17:29.211227586+00:00 stderr F 2025-10-13T00:17:29.211Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-10-13T00:17:29.211275987+00:00 stderr F 2025-10-13T00:17:29.211Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-10-13T00:17:29.211397581+00:00 stderr F 2025-10-13T00:17:29.211Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-10-13T00:17:29.212317760+00:00 stderr F I1013 00:17:29.212282 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:17:29.214865068+00:00 stderr F I1013 00:17:29.214810 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:17:29.214883969+00:00 stderr F 2025-10-13T00:17:29.214Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": 
["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-10-13T00:17:29.215626922+00:00 stderr F I1013 00:17:29.215592 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-10-13T00:17:29.254811646+00:00 stderr F 2025-10-13T00:17:29.254Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-10-13T00:17:29.255043603+00:00 stderr F 2025-10-13T00:17:29.254Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-10-13T00:17:29.315779894+00:00 stderr F I1013 00:17:29.315699 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-10-13T00:17:29.315779894+00:00 stderr F I1013 00:17:29.315733 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 
2025-10-13T00:17:29.356647320+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.356647320+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-10-13T00:17:29.356647320+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-10-13T00:17:29.356647320+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-10-13T00:17:29.356692771+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-10-13T00:17:29.356692771+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-10-13T00:17:29.356692771+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-10-13T00:17:29.356692771+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-10-13T00:17:29.356692771+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-10-13T00:17:29.356718862+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.356718862+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-10-13T00:17:29.356740332+00:00 stderr F 2025-10-13T00:17:29.356Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-10-13T00:17:29.357213937+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-10-13T00:17:29.357213937+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-10-13T00:17:29.357213937+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-10-13T00:17:29.357225127+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-10-13T00:17:29.357276049+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", 
"source": "kind source: *v1.FeatureGate"} 2025-10-13T00:17:29.357276049+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.357276049+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-10-13T00:17:29.357285029+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:17:29.357285029+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:17:29.357292430+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-10-13T00:17:29.357337701+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.357349821+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-10-13T00:17:29.357406563+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.357406563+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:17:29.357414983+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:17:29.357422204+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-10-13T00:17:29.357636290+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc000a8c550"} 2025-10-13T00:17:29.357852647+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.357865237+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-10-13T00:17:29.357985711+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-10-13T00:17:29.357994151+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-10-13T00:17:29.358021522+00:00 stderr F 2025-10-13T00:17:29.357Z INFO operator.init 
controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-10-13T00:17:29.358048863+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-10-13T00:17:29.358048863+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-10-13T00:17:29.358087704+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000a8c960"} 2025-10-13T00:17:29.358160536+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000a8c960"} 2025-10-13T00:17:29.358221578+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.358221578+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.358228979+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-10-13T00:17:29.358236399+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-10-13T00:17:29.358265300+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-10-13T00:17:29.358379343+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.358403614+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-10-13T00:17:29.358410684+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-10-13T00:17:29.358449895+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:17:29.358449895+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-10-13T00:17:29.358457496+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-10-13T00:17:29.358586310+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:17:29.358586310+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting EventSource 
{"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-10-13T00:17:29.358594030+00:00 stderr F 2025-10-13T00:17:29.358Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-10-13T00:17:29.368696573+00:00 stderr F 2025-10-13T00:17:29.368Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:17:29.457059979+00:00 stderr F 2025-10-13T00:17:29.456Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:17:29.460045362+00:00 stderr F 2025-10-13T00:17:29.459Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:17:29.460045362+00:00 stderr F 2025-10-13T00:17:29.460Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:17:29.461972402+00:00 stderr F 2025-10-13T00:17:29.461Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-10-13T00:17:29.462034713+00:00 stderr F 2025-10-13T00:17:29.461Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-10-13T00:17:29.462173498+00:00 stderr F 2025-10-13T00:17:29.462Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.462366004+00:00 stderr F 2025-10-13T00:17:29.462Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.462414095+00:00 stderr F 2025-10-13T00:17:29.462Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-10-13T00:17:29.462456647+00:00 stderr F 2025-10-13T00:17:29.462Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-10-13T00:17:29.462520829+00:00 stderr F 2025-10-13T00:17:29.462Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.466492871+00:00 stderr F 2025-10-13T00:17:29.466Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-10-13T00:17:29.466516032+00:00 stderr F 2025-10-13T00:17:29.466Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.466579554+00:00 stderr F 2025-10-13T00:17:29.466Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:17:29.466604045+00:00 stderr F 2025-10-13T00:17:29.466Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:17:29.466604045+00:00 stderr F 2025-10-13T00:17:29.466Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:17:29.466615865+00:00 stderr F 2025-10-13T00:17:29.466Z 
INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:17:29.466624436+00:00 stderr F 2025-10-13T00:17:29.466Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:17:29.661500641+00:00 stderr F 2025-10-13T00:17:29.661Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-10-13T00:17:29.661729358+00:00 stderr F 2025-10-13T00:17:29.661Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.662679038+00:00 stderr F 2025-10-13T00:17:29.662Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-10-13T00:17:29.662890614+00:00 stderr F 2025-10-13T00:17:29.662Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-10-13T00:17:29.663004458+00:00 stderr F 2025-10-13T00:17:29.662Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-10-13T00:17:29.663088600+00:00 stderr F 2025-10-13T00:17:29.663Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-10-13T00:17:29.671137200+00:00 stderr F 2025-10-13T00:17:29.671Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.678459596+00:00 stderr F 2025-10-13T00:17:29.678Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-10-13T00:17:29.864561390+00:00 stderr F 2025-10-13T00:17:29.864Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:17:29.864771146+00:00 stderr F 2025-10-13T00:17:29.864Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:17:29.878045008+00:00 stderr F 2025-10-13T00:17:29.877Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "canary_controller", "worker count": 1} 2025-10-13T00:17:29.879454771+00:00 stderr F 2025-10-13T00:17:29.879Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-10-13T00:17:29.879576505+00:00 stderr F 2025-10-13T00:17:29.879Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:17:29.883581539+00:00 stderr F 2025-10-13T00:17:29.883Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-10-13T00:17:29.883676422+00:00 stderr F 2025-10-13T00:17:29.883Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-10-13T00:17:29.883724173+00:00 stderr F 2025-10-13T00:17:29.883Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-10-13T00:22:29.455688231+00:00 stderr F 2025-10-13T00:22:29.453Z ERROR operator.init wait/backoff.go:226 failed to fetch 
ingress config {"error": "Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/ingresses/cluster\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-10-13T00:22:30.070405282+00:00 stderr F 2025-10-13T00:22:30.069Z ERROR operator.canary_controller wait/backoff.go:226 failed to get current canary route for canary check {"error": "Get \"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-ingress-canary/routes/canary\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-10-13T00:23:45.481684306+00:00 stderr F 2025-10-13T00:23:45.481Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:23:45.481684306+00:00 stderr F 2025-10-13T00:23:45.481Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:23:45.481783439+00:00 stderr F 2025-10-13T00:23:45.481Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000014450415073043234033100 0ustar zuulzuul2025-10-13T00:15:01.871449929+00:00 stderr F 2025-10-13T00:15:01.866Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-10-13T00:15:01.978896568+00:00 stderr F 2025-10-13T00:15:01.976Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-10-13T00:15:01.978896568+00:00 stderr F 2025-10-13T00:15:01.977Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-10-13T00:15:01.978896568+00:00 stderr F 2025-10-13T00:15:01.977Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-10-13T00:15:01.978896568+00:00 stderr F 2025-10-13T00:15:01.977Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-10-13T00:15:01.980099884+00:00 stderr F 2025-10-13T00:15:01.979Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-10-13T00:15:02.037876165+00:00 stderr F I1013 00:15:02.037667 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:15:02.047547675+00:00 stderr F 2025-10-13T00:15:02.046Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": 
["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-10-13T00:15:02.050387430+00:00 stderr F I1013 00:15:02.048022 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", 
"PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:15:02.058384510+00:00 stderr F I1013 00:15:02.055795 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-10-13T00:15:02.118393188+00:00 stderr F 2025-10-13T00:15:02.118Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-10-13T00:15:02.119229543+00:00 stderr F 2025-10-13T00:15:02.118Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-10-13T00:15:02.175320333+00:00 stderr F I1013 00:15:02.175000 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-10-13T00:15:02.175320333+00:00 stderr F I1013 00:15:02.175096 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 
2025-10-13T00:15:02.222651111+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:02.222965051+00:00 stderr F 2025-10-13T00:15:02.222Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc000ca2350"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 
Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.223Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000ca2628"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting 
EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-10-13T00:15:02.226828137+00:00 stderr F 2025-10-13T00:15:02.224Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-10-13T00:15:02.230689422+00:00 stderr F 2025-10-13T00:15:02.229Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-10-13T00:15:02.230689422+00:00 stderr F 2025-10-13T00:15:02.229Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc000ca2628"} 2025-10-13T00:15:02.230689422+00:00 stderr F 2025-10-13T00:15:02.230Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-10-13T00:15:02.230689422+00:00 stderr F 2025-10-13T00:15:02.230Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-10-13T00:15:02.239394033+00:00 stderr F 2025-10-13T00:15:02.236Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:15:02.239394033+00:00 stderr F 2025-10-13T00:15:02.236Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:15:02.239394033+00:00 stderr F 2025-10-13T00:15:02.233Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:15:02.326685248+00:00 stderr F 2025-10-13T00:15:02.326Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:15:02.333884884+00:00 stderr F 2025-10-13T00:15:02.332Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-10-13T00:15:02.337367309+00:00 stderr F 2025-10-13T00:15:02.335Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.434Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.434Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 
2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.434Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.434Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.434Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.435Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.437Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-10-13T00:15:02.438380465+00:00 stderr F 2025-10-13T00:15:02.437Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:02.541359791+00:00 stderr F 2025-10-13T00:15:02.537Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-10-13T00:15:02.541359791+00:00 stderr F 2025-10-13T00:15:02.539Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-10-13T00:15:02.642378897+00:00 stderr F 2025-10-13T00:15:02.639Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-10-13T00:15:02.642378897+00:00 stderr F 2025-10-13T00:15:02.639Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-10-13T00:15:02.742965031+00:00 stderr F 2025-10-13T00:15:02.740Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:15:02.742965031+00:00 stderr F 2025-10-13T00:15:02.739Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-10-13T00:15:02.742965031+00:00 stderr F 2025-10-13T00:15:02.740Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-10-13T00:15:02.744834957+00:00 stderr F 2025-10-13T00:15:02.744Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:15:02.744834957+00:00 stderr F 2025-10-13T00:15:02.744Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-10-13T00:15:02.744834957+00:00 stderr F 2025-10-13T00:15:02.744Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:02.744834957+00:00 stderr F 2025-10-13T00:15:02.744Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-10-13T00:15:02.744834957+00:00 stderr F 2025-10-13T00:15:02.744Z INFO operator.configurable_route_controller controller/controller.go:119 
reconciling {"request": {"name":"cluster"}} 2025-10-13T00:15:03.024944130+00:00 stderr F 2025-10-13T00:15:03.024Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:03.025388703+00:00 stderr F 2025-10-13T00:15:03.025Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:03.025612830+00:00 stderr F 2025-10-13T00:15:03.025Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:03.029176206+00:00 stderr F 2025-10-13T00:15:03.029Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.999985549s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-10-13T00:15:03.029258859+00:00 stderr F 2025-10-13T00:15:03.029Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:03.065618268+00:00 stderr F 2025-10-13T00:15:03.051Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:03.177730877+00:00 stderr F 2025-10-13T00:15:03.177Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.824546181s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-10-13T00:15:12.229253285+00:00 stderr F 2025-10-13T00:15:12.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:15:12.334728395+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:15:12.334728395+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:15:12.334728395+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:15:12.334728395+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:15:12.334789727+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-10-13T00:15:12.334789727+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-10-13T00:15:12.334823418+00:00 stderr F 2025-10-13T00:15:12.334Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:15:22.225974834+00:00 stderr F 2025-10-13T00:15:22.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind 
is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:15:32.225930269+00:00 stderr F 2025-10-13T00:15:32.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:15:42.226664141+00:00 stderr F 2025-10-13T00:15:42.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:15:52.225216747+00:00 stderr F 2025-10-13T00:15:52.224Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:16:00.241127260+00:00 stderr F 2025-10-13T00:16:00.240Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:16:00.241127260+00:00 stderr F 2025-10-13T00:16:00.241Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:16:00.241173301+00:00 stderr F 2025-10-13T00:16:00.241Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:00.319452642+00:00 stderr F 2025-10-13T00:16:00.319Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m2.682329965s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-10-13T00:16:02.228515888+00:00 stderr F 2025-10-13T00:16:02.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:16:12.231108746+00:00 stderr F 2025-10-13T00:16:12.230Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:16:22.226624387+00:00 stderr F 2025-10-13T00:16:22.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:16:30.252054805+00:00 stderr F 2025-10-13T00:16:30.251Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:16:30.252054805+00:00 stderr F 2025-10-13T00:16:30.252Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-10-13T00:16:30.252105477+00:00 stderr F 
2025-10-13T00:16:30.252Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:30.321141151+00:00 stderr F 2025-10-13T00:16:30.321Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:30.321141151+00:00 stderr F 2025-10-13T00:16:30.321Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:30.321186433+00:00 stderr F 2025-10-13T00:16:30.321Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:30.321269255+00:00 stderr F 2025-10-13T00:16:30.321Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:30.323133395+00:00 stderr F 2025-10-13T00:16:30.323Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:30.335024586+00:00 stderr F 2025-10-13T00:16:30.334Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-10-13T00:16:32.225002031+00:00 stderr F 2025-10-13T00:16:32.224Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:16:42.225533443+00:00 stderr F 2025-10-13T00:16:42.224Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:16:52.226365057+00:00 stderr F 2025-10-13T00:16:52.225Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:17:02.225621587+00:00 stderr F 2025-10-13T00:17:02.224Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-10-13T00:17:02.438895703+00:00 stderr F 2025-10-13T00:17:02.438Z ERROR operator.init controller/controller.go:208 Could not wait for Cache to sync {"controller": "canary_controller", "error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} 2025-10-13T00:17:02.438963705+00:00 stderr F 2025-10-13T00:17:02.438Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for non leader election runnables 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.440433 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Namespace ended with: an error on the 
server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.440609 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressController ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.440723 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for leader election runnables 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "route_metrics_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "configurable_route_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingress_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "gatewayapi_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "error_page_configmap_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "dns_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "clientca_configmap_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "monitoring_dashboard_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_publisher_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "status_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.440Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers 
to finish {"controller": "ingressclass_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "crl"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "crl"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "route_metrics_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "gatewayapi_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "monitoring_dashboard_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingressclass_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "dns_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_publisher_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "clientca_configmap_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "error_page_configmap_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "configurable_route_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "status_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingress_controller"} 2025-10-13T00:17:02.443145094+00:00 stderr F 2025-10-13T00:17:02.441Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for caches 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441453 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441553 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441618 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441678 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441738 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441799 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441888 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441926 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441959 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr F W1013 00:17:02.441992 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443145094+00:00 stderr P W1013 00:17:02.442046 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the wat 2025-10-13T00:17:02.443208126+00:00 stderr F ch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442090 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442127 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 
stderr F W1013 00:17:02.442163 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442199 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442529 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442679 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442859 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442906 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442944 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.442990 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressController ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.443036 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressController ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.443114 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.443158 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 
2025-10-13T00:17:02.443208126+00:00 stderr F W1013 00:17:02.443202 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443286149+00:00 stderr F W1013 00:17:02.443254 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443371821+00:00 stderr F W1013 00:17:02.443311 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443409522+00:00 stderr F W1013 00:17:02.443389 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressController ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443498815+00:00 stderr F W1013 00:17:02.443467 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressController ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.443546347+00:00 stderr F W1013 00:17:02.443524 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.IngressController ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.445803757+00:00 stderr F 2025-10-13T00:17:02.443Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for webhooks 2025-10-13T00:17:02.445803757+00:00 stderr F 2025-10-13T00:17:02.443Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for HTTP servers 2025-10-13T00:17:02.445803757+00:00 stderr F 2025-10-13T00:17:02.444Z INFO operator.init.controller-runtime.metrics runtime/asm_amd64.s:1650 Shutting down metrics server with timeout of 1 minute 2025-10-13T00:17:02.445803757+00:00 stderr F 2025-10-13T00:17:02.444Z INFO operator.init runtime/asm_amd64.s:1650 Wait completed, proceeding to shutdown the manager 2025-10-13T00:17:02.445803757+00:00 stderr F W1013 00:17:02.444444 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Ingress ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.445803757+00:00 stderr F W1013 00:17:02.444468 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-10-13T00:17:02.446743066+00:00 stderr F W1013 00:17:02.446194 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context 
canceled") has prevented the request from succeeding 2025-10-13T00:17:02.451805183+00:00 stderr F 2025-10-13T00:17:02.451Z ERROR operator.main cobra/command.go:944 error starting {"error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000007117515073043234033103 0ustar zuulzuul2025-08-13T20:08:00.875382117+00:00 stderr F 2025-08-13T20:08:00.873Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-08-13T20:08:00.953061394+00:00 stderr F 2025-08-13T20:08:00.952Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-08-13T20:08:00.953362393+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-08-13T20:08:00.953473236+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-08-13T20:08:00.953535928+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-08-13T20:08:00.953745004+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-08-13T20:08:00.998500257+00:00 stderr F I0813 20:08:00.997142 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:08:01.006195638+00:00 stderr F 2025-08-13T20:08:01.006Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": 
["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-08-13T20:08:01.009092641+00:00 stderr F I0813 20:08:01.008085 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", 
"PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:08:01.013531228+00:00 stderr F I0813 20:08:01.013036 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-08-13T20:08:01.086373496+00:00 stderr F 2025-08-13T20:08:01.085Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-08-13T20:08:01.086373496+00:00 stderr F 2025-08-13T20:08:01.085Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-08-13T20:08:01.116419978+00:00 stderr F I0813 20:08:01.115027 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-08-13T20:08:01.116419978+00:00 stderr F I0813 20:08:01.115134 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 
2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00007fc90"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc0010b4270"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.426Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind 
source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc0010b4270"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-08-13T20:08:01.435208218+00:00 
stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr P 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting EventSour 2025-08-13T20:08:01.435319931+00:00 stderr F ce {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.498055630+00:00 stderr F 2025-08-13T20:08:01.497Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.569202870+00:00 stderr F 2025-08-13T20:08:01.569Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.569414636+00:00 stderr F 2025-08-13T20:08:01.569Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.582231593+00:00 stderr F 2025-08-13T20:08:01.582Z INFO operator.init controller/controller.go:234 Starting workers 
{"controller": "route_metrics_controller", "worker count": 1} 2025-08-13T20:08:01.582393098+00:00 stderr F 2025-08-13T20:08:01.582Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.601005591+00:00 stderr F 2025-08-13T20:08:01.598Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.651241102+00:00 stderr F 2025-08-13T20:08:01.651Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-08-13T20:08:01.651241102+00:00 stderr F 2025-08-13T20:08:01.651Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.754147262+00:00 stderr F 2025-08-13T20:08:01.754Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-08-13T20:08:01.754242945+00:00 stderr F 2025-08-13T20:08:01.754Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-08-13T20:08:01.770681766+00:00 stderr F 2025-08-13T20:08:01.770Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-08-13T20:08:01.787209920+00:00 stderr F 2025-08-13T20:08:01.787Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T20:08:01.802124928+00:00 stderr F 2025-08-13T20:08:01.800Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-08-13T20:08:01.802124928+00:00 stderr F 2025-08-13T20:08:01.801Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.963268078+00:00 stderr F 2025-08-13T20:08:01.963Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T20:08:01.963495594+00:00 stderr F 2025-08-13T20:08:01.963Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.964Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.966Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.966Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:08:02.003855092+00:00 stderr F 2025-08-13T20:08:02.003Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "canary_controller", "worker count": 1} 2025-08-13T20:08:02.008856995+00:00 stderr F 2025-08-13T20:08:02.008Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-08-13T20:08:02.009029540+00:00 stderr F 2025-08-13T20:08:02.008Z INFO 
operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:02.009560235+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:08:02.009712849+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-08-13T20:08:02.009870424+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-08-13T20:08:02.010245235+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:08:02.533313412+00:00 stderr F 2025-08-13T20:08:02.533Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:30.401847845+00:00 stderr F 2025-08-13T20:09:30.400Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952017060+00:00 stderr F 2025-08-13T20:09:44.951Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952105402+00:00 stderr F 2025-08-13T20:09:44.952Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952396641+00:00 stderr F 2025-08-13T20:09:44.952Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.957004893+00:00 stderr F 2025-08-13T20:09:44.955Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.957004893+00:00 stderr F 2025-08-13T20:09:44.955Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:46.234330695+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:46.234638084+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:46.234942633+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.585Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.587Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.587Z INFO operator.ingress_controller controller/controller.go:119 reconciling 
{"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:55.836004494+00:00 stderr F 2025-08-13T20:09:55.835Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.894Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.895Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.894Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:10:01.895506774+00:00 stderr F 2025-08-13T20:10:01.895Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:08.138086435+00:00 stderr F 2025-08-13T20:10:08.137Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.831Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.832Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.832Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:22.868218468+00:00 stderr F 2025-08-13T20:10:22.867Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:22.868218468+00:00 stderr F 2025-08-13T20:10:22.868Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:22.868270259+00:00 stderr F 2025-08-13T20:10:22.868Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000013665215073043234033105 0ustar zuulzuul2025-08-13T20:05:08.191183858+00:00 
stderr F 2025-08-13T20:05:08.190Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-08-13T20:05:08.198035654+00:00 stderr F 2025-08-13T20:05:08.197Z ERROR operator.main ingress-operator/start.go:64 failed to verify idling endpoints between endpoints and services {"error": "failed to list endpoints in all namespaces: failed to get API group resources: unable to retrieve the complete list of server APIs: v1: Get \"https://10.217.4.1:443/api/v1\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-08-13T20:05:08.198298121+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-08-13T20:05:08.198359993+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-08-13T20:05:08.198415215+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-08-13T20:05:08.198545378+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-08-13T20:05:08.200367231+00:00 stderr F 2025-08-13T20:05:08.200Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-08-13T20:05:08.229326390+00:00 stderr F I0813 20:05:08.229215 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:08.236722812+00:00 stderr F W0813 20:05:08.236296 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236722812+00:00 stderr F W0813 20:05:08.236305 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236747412+00:00 stderr F E0813 20:05:08.236721 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236747412+00:00 stderr F E0813 20:05:08.236732 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.327244290+00:00 stderr F W0813 20:05:09.325689 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.327244290+00:00 stderr F E0813 20:05:09.326389 1 reflector.go:147] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.566674106+00:00 stderr F W0813 20:05:09.565749 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.566674106+00:00 stderr F E0813 20:05:09.566641 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.263050674+00:00 stderr F W0813 20:05:11.262490 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.263050674+00:00 stderr F E0813 20:05:11.262989 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.421045724+00:00 stderr F W0813 20:05:12.420659 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.421105386+00:00 stderr F E0813 20:05:12.421044 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.769327666+00:00 stderr F W0813 20:05:15.768608 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.769327666+00:00 stderr F E0813 20:05:15.769244 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912903143+00:00 stderr F W0813 20:05:16.911517 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912903143+00:00 stderr F E0813 20:05:16.911649 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:27.134481088+00:00 stderr F 2025-08-13T20:05:27.133Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": ["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-08-13T20:05:27.134896079+00:00 stderr F I0813 20:05:27.133874 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", 
"ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:05:27.171135417+00:00 stderr F I0813 20:05:27.171080 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-08-13T20:05:27.271658006+00:00 stderr F I0813 20:05:27.271480 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-08-13T20:05:27.271658006+00:00 stderr F I0813 20:05:27.271548 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-08-13T20:05:28.707483552+00:00 stderr F 2025-08-13T20:05:28.706Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-08-13T20:05:28.863034757+00:00 stderr F 2025-08-13T20:05:28.862Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource 
{"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init 
controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00010e648"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-08-13T20:05:29.453085534+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-08-13T20:05:29.455007799+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00010eba0"} 2025-08-13T20:05:29.455388390+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00010eba0"} 2025-08-13T20:05:29.455467832+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.455765720+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.455902264+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-08-13T20:05:29.456298656+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.456352127+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-08-13T20:05:29.456389038+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init 
controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-08-13T20:05:29.456435940+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.456476721+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:05:29.456510342+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:05:29.456558643+00:00 stderr F 2025-08-13T20:05:29.436Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-08-13T20:05:29.456639825+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-08-13T20:05:29.456694627+00:00 stderr F 2025-08-13T20:05:29.446Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.456735988+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:05:29.456882772+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-08-13T20:05:29.463311917+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:05:29.463311917+00:00 stderr F 2025-08-13T20:05:29.462Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-08-13T20:05:29.574984784+00:00 stderr F 2025-08-13T20:05:29.563Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:29.945304648+00:00 stderr F 2025-08-13T20:05:29.917Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:29.965242119+00:00 stderr F 2025-08-13T20:05:29.965Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:30.014990724+00:00 stderr F 2025-08-13T20:05:30.014Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:30.023730434+00:00 stderr F 2025-08-13T20:05:30.023Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:05:30.026092791+00:00 stderr F 2025-08-13T20:05:30.023Z INFO operator.certificate_publisher_controller 
handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:05:30.120415043+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-08-13T20:05:30.120551686+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-08-13T20:05:30.120979129+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.121431882+00:00 stderr F 2025-08-13T20:05:30.121Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-08-13T20:05:30.219567762+00:00 stderr F 2025-08-13T20:05:30.219Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-08-13T20:05:30.241382557+00:00 stderr F 2025-08-13T20:05:30.240Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-08-13T20:05:30.315357705+00:00 stderr F 2025-08-13T20:05:30.315Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-08-13T20:05:30.331297851+00:00 stderr F 2025-08-13T20:05:30.315Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:05:30.378874514+00:00 stderr F 2025-08-13T20:05:30.378Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-08-13T20:05:30.379387008+00:00 stderr F 2025-08-13T20:05:30.379Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.499907610+00:00 stderr F 2025-08-13T20:05:30.499Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:05:30.522918519+00:00 stderr F 2025-08-13T20:05:30.518Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:05:30.526560943+00:00 stderr F 2025-08-13T20:05:30.526Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-08-13T20:05:30.526961224+00:00 stderr F 2025-08-13T20:05:30.526Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.598924545+00:00 stderr F 2025-08-13T20:05:30.598Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T20:05:30.599024598+00:00 stderr F 2025-08-13T20:05:30.598Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.599125781+00:00 stderr F 2025-08-13T20:05:30.599Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T20:05:30.629683626+00:00 stderr F 2025-08-13T20:05:30.629Z INFO operator.init 
controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-08-13T20:05:30.679536544+00:00 stderr F 2025-08-13T20:05:30.679Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-08-13T20:05:30.679735869+00:00 stderr F 2025-08-13T20:05:30.679Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:05:30.680070179+00:00 stderr F 2025-08-13T20:05:30.630Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:39.465426320+00:00 stderr F 2025-08-13T20:05:39.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:05:39.470629999+00:00 stderr F 2025-08-13T20:05:39.469Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:49.463650058+00:00 stderr F 2025-08-13T20:05:49.462Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:49.466441728+00:00 stderr F 2025-08-13T20:05:49.466Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.318Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:59.460963930+00:00 stderr F 2025-08-13T20:05:59.460Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:59.465319955+00:00 stderr F 2025-08-13T20:05:59.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for 
kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:09.466039895+00:00 stderr F 2025-08-13T20:06:09.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:09.466039895+00:00 stderr F 2025-08-13T20:06:09.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:19.466356135+00:00 stderr F 2025-08-13T20:06:19.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:19.466356135+00:00 stderr F 2025-08-13T20:06:19.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:29.464497590+00:00 stderr F 2025-08-13T20:06:29.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:29.465170690+00:00 stderr F 2025-08-13T20:06:29.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:39.466094165+00:00 stderr F 2025-08-13T20:06:39.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:39.470533012+00:00 stderr F 2025-08-13T20:06:39.469Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:49.461005407+00:00 stderr F 2025-08-13T20:06:49.460Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:49.463712194+00:00 stderr F 2025-08-13T20:06:49.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 
2025-08-13T20:06:59.465943937+00:00 stderr F 2025-08-13T20:06:59.464Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:59.481697919+00:00 stderr F 2025-08-13T20:06:59.480Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:09.465054500+00:00 stderr F 2025-08-13T20:07:09.461Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:09.467221923+00:00 stderr F 2025-08-13T20:07:09.466Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:07:19.470896816+00:00 stderr F 2025-08-13T20:07:19.468Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:07:19.470896816+00:00 stderr F 2025-08-13T20:07:19.470Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:29.462609576+00:00 stderr F 2025-08-13T20:07:29.461Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:29.463661686+00:00 stderr F 2025-08-13T20:07:29.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:07:30.132827102+00:00 stderr F 2025-08-13T20:07:30.128Z ERROR operator.init controller/controller.go:208 Could not wait for Cache to sync {"controller": "canary_controller", "error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} 2025-08-13T20:07:30.135698934+00:00 stderr F 2025-08-13T20:07:30.132Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for non leader election runnables 2025-08-13T20:07:30.135768856+00:00 stderr F 2025-08-13T20:07:30.135Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for leader election runnables 2025-08-13T20:07:30.137697782+00:00 stderr F 2025-08-13T20:07:30.135Z INFO operator.init 
manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "configurable_route_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingress_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "dns_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "status_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingressclass_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_publisher_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "gatewayapi_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "error_page_configmap_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "clientca_configmap_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "crl"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "gatewayapi_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "clientca_configmap_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "status_controller"} 2025-08-13T20:07:30.154507554+00:00 stderr F 2025-08-13T20:07:30.154Z INFO 
operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-08-13T20:07:30.154528604+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "route_metrics_controller"} 2025-08-13T20:07:30.154901485+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:07:30.155027888+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "dns_controller"} 2025-08-13T20:07:30.155089260+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingressclass_controller"} 2025-08-13T20:07:30.155145122+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "crl"} 2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_publisher_controller"} 2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "configurable_route_controller"} 2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "error_page_configmap_controller"} 2025-08-13T20:07:30.163077299+00:00 stderr F 2025-08-13T20:07:30.162Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingress_controller"} 2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.174Z ERROR operator.init controller/controller.go:266 Reconciler error {"controller": "route_metrics_controller", "object": {"name":"default","namespace":"openshift-ingress-operator"}, "namespace": "openshift-ingress-operator", "name": "default", "reconcileID": "8a74f58b-c73b-4c1b-937a-14d64341d67c", "error": "failed to get Ingress Controller \"openshift-ingress-operator/default\": Timeout: failed waiting for *v1.IngressController Informer to sync"} 2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.176Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "route_metrics_controller"} 2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.176Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for caches 2025-08-13T20:07:30.195953332+00:00 stderr F 2025-08-13T20:07:30.195Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for webhooks 2025-08-13T20:07:30.195953332+00:00 stderr F 2025-08-13T20:07:30.195Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for HTTP servers 2025-08-13T20:07:30.198455734+00:00 stderr F 2025-08-13T20:07:30.198Z INFO operator.init.controller-runtime.metrics runtime/asm_amd64.s:1650 Shutting down metrics server with timeout of 1 minute 2025-08-13T20:07:30.200492422+00:00 stderr F W0813 20:07:30.198536 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Proxy ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.201920803+00:00 stderr F W0813 
20:07:30.198690 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202012955+00:00 stderr F W0813 20:07:30.201950 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202052777+00:00 stderr F W0813 20:07:30.198766 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202135089+00:00 stderr F W0813 20:07:30.198484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.202245812+00:00 stderr F W0813 20:07:30.202220 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.204503267+00:00 stderr F W0813 20:07:30.199382 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
2025-08-13T20:07:30.223684287+00:00 stderr F 2025-08-13T20:07:30.223Z INFO operator.init runtime/asm_amd64.s:1650 Wait completed, proceeding to shutdown the manager
2025-08-13T20:07:30.229373850+00:00 stderr F 2025-08-13T20:07:30.228Z ERROR operator.main cobra/command.go:944 error starting {"error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7/registry/0.log
0ustar zuulzuul2025-10-13T00:19:39.922460227+00:00 stderr F time="2025-10-13T00:19:39.92223001Z" level=info msg="start registry" distribution_version=v3.0.0+unknown go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" openshift_version=4.16.0-202406131906.p0.g58a613b.assembly.stream.el9-58a613b 2025-10-13T00:19:39.922966302+00:00 stderr F time="2025-10-13T00:19:39.922714525Z" level=info msg="caching project quota objects with TTL 1m0s" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:39.923443156+00:00 stderr F time="2025-10-13T00:19:39.923401395Z" level=info msg="redis not configured" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:39.923542489+00:00 stderr F time="2025-10-13T00:19:39.923508488Z" level=info msg="using openshift blob descriptor cache" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:39.923542489+00:00 stderr F time="2025-10-13T00:19:39.923524889Z" level=warning msg="Registry does not implement RepositoryRemover. Will not be able to delete repos and tags" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:39.923686044+00:00 stderr F time="2025-10-13T00:19:39.923535629Z" level=info msg="Starting upload purge in 8m0s" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:39.924309242+00:00 stderr F time="2025-10-13T00:19:39.924265491Z" level=info msg="Using \"image-registry.openshift-image-registry.svc:5000\" as Docker Registry URL" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:39.924530659+00:00 stderr F time="2025-10-13T00:19:39.924477677Z" level=info msg="listening on :5000, tls" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-10-13T00:19:49.237790589+00:00 stderr F time="2025-10-13T00:19:49.237678835Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=97602bca-37d8-4e95-8353-3a7c04989995 http.request.method=GET http.request.remoteaddr="10.217.0.2:59178" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="80.612µs" http.response.status=200 http.response.written=0 2025-10-13T00:19:59.239478840+00:00 stderr F time="2025-10-13T00:19:59.239303675Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=c8168040-022c-4244-ae2b-ca13a7af5a26 http.request.method=GET http.request.remoteaddr="10.217.0.2:43798" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="47.122µs" http.response.status=200 http.response.written=0 2025-10-13T00:19:59.241703357+00:00 stderr F time="2025-10-13T00:19:59.241600143Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=1fbd761e-458b-4438-ac47-26e91b2ac263 http.request.method=GET http.request.remoteaddr="10.217.0.2:43812" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="50.881µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:09.242881997+00:00 stderr F time="2025-10-13T00:20:09.242218239Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=d7e4d1dd-dca6-4201-97dc-f67019fed362 http.request.method=GET 
http.request.remoteaddr="10.217.0.2:37026" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="82.853µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:09.243069902+00:00 stderr F time="2025-10-13T00:20:09.242728009Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=68a4b064-8ba6-448d-a6fc-2193d3ce0e59 http.request.method=GET http.request.remoteaddr="10.217.0.2:37036" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="35.317µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:19.238023678+00:00 stderr F time="2025-10-13T00:20:19.237864874Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=1cfbd390-d1b1-4639-b56b-3b2ef7066ce2 http.request.method=GET http.request.remoteaddr="10.217.0.2:58508" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="47.992µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:19.239367296+00:00 stderr F time="2025-10-13T00:20:19.239182991Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=19ac1210-faa4-4342-a45a-53676e595596 http.request.method=GET http.request.remoteaddr="10.217.0.2:58504" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="86.013µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:29.239506109+00:00 stderr F time="2025-10-13T00:20:29.23884201Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=1a7fa99b-0036-4379-9a67-a6ed6c25067b http.request.method=GET http.request.remoteaddr="10.217.0.2:43834" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="69.582µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:29.239583301+00:00 stderr F time="2025-10-13T00:20:29.238915692Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=692c217c-5b53-415e-bbff-13f2f772a2f4 http.request.method=GET http.request.remoteaddr="10.217.0.2:43832" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="55.981µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:39.238755077+00:00 stderr F time="2025-10-13T00:20:39.238093158Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=4a250c11-a239-4075-8eb9-ec7b9d0b5079 http.request.method=GET http.request.remoteaddr="10.217.0.2:50324" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="73.202µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:39.238955842+00:00 stderr F time="2025-10-13T00:20:39.238678345Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=97c62222-084b-4847-98af-3d6b32e4ddd0 http.request.method=GET http.request.remoteaddr="10.217.0.2:50322" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="74.673µs" http.response.status=200 http.response.written=0 
2025-10-13T00:20:49.236637089+00:00 stderr F time="2025-10-13T00:20:49.236073903Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=d5a2688a-20d2-4fa7-a2fe-ec26d3ab7c4f http.request.method=GET http.request.remoteaddr="10.217.0.2:41886" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="38.061µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:49.238909322+00:00 stderr F time="2025-10-13T00:20:49.238845351Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=9ba04e8d-a687-4de8-bc26-0240525b6590 http.request.method=GET http.request.remoteaddr="10.217.0.2:41902" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="45.691µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:59.236790403+00:00 stderr F time="2025-10-13T00:20:59.236181716Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=149f4769-3181-4c32-835f-373c5f8385f2 http.request.method=GET http.request.remoteaddr="10.217.0.2:55084" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="42.101µs" http.response.status=200 http.response.written=0 2025-10-13T00:20:59.236848305+00:00 stderr F time="2025-10-13T00:20:59.236754602Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=8c28115e-2da4-4a23-a9ba-49f8ff01c898 http.request.method=GET http.request.remoteaddr="10.217.0.2:55086" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="36.141µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:09.237442230+00:00 stderr F time="2025-10-13T00:21:09.236901835Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=fcd127f8-5b44-42ae-81b6-9cab6a077d58 http.request.method=GET http.request.remoteaddr="10.217.0.2:52420" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="33.631µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:09.238054567+00:00 stderr F time="2025-10-13T00:21:09.237997656Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=b630ec5d-4a9f-4bdd-8a36-0cd0c6c179c1 http.request.method=GET http.request.remoteaddr="10.217.0.2:52418" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="62.662µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:18.402720133+00:00 stderr F time="2025-10-13T00:21:18.402190799Z" level=warning msg="error authorizing context: authorization header required" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=afd7f979-4487-446d-bccf-f25584bfd493 http.request.method=GET http.request.remoteaddr=38.102.83.214 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" 2025-10-13T00:21:18.402775444+00:00 stderr F time="2025-10-13T00:21:18.402728323Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) 
X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=8380c047-7bcb-401a-a263-ef743df577fe http.request.method=GET http.request.remoteaddr=38.102.83.214 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=1.454459ms http.response.status=401 http.response.written=87 2025-10-13T00:21:18.431479306+00:00 stderr F time="2025-10-13T00:21:18.431405264Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=d81d56fd-9706-417c-9b8e-0d251bd4d164 http.request.method=GET http.request.remoteaddr=38.102.83.214 http.request.uri="/openshift/token?account=kubeadmin" http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=20.270585ms http.response.status=200 http.response.written=131 2025-10-13T00:21:18.436716717+00:00 stderr F time="2025-10-13T00:21:18.436648195Z" level=info msg="authorized request" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=87ac3d3f-56e7-4baa-8184-903f95a910e4 http.request.method=GET http.request.remoteaddr=38.102.83.214 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" openshift.auth.user=kubeadmin openshift.auth.userid=1cd2e525-8d00-41b4-a486-09fa68a2ad45 2025-10-13T00:21:18.436770078+00:00 stderr F time="2025-10-13T00:21:18.436703947Z" level=info msg="response completed" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=87ac3d3f-56e7-4baa-8184-903f95a910e4 http.request.method=GET http.request.remoteaddr=38.102.83.214 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=4.180172ms http.response.status=200 http.response.written=2 openshift.auth.user=kubeadmin openshift.auth.userid=1cd2e525-8d00-41b4-a486-09fa68a2ad45 2025-10-13T00:21:18.436770078+00:00 stderr F time="2025-10-13T00:21:18.436737538Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=57f2b957-4200-4355-9958-d1339543a6a3 http.request.method=GET http.request.remoteaddr=38.102.83.214 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=4.248814ms http.response.status=200 http.response.written=2 2025-10-13T00:21:19.237759387+00:00 stderr F time="2025-10-13T00:21:19.237687045Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=79d0ec30-a9d1-48ba-965f-a9372f37bd4c http.request.method=GET http.request.remoteaddr="10.217.0.2:55790" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="45.241µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:19.238060475+00:00 stderr F time="2025-10-13T00:21:19.237986753Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 
http.request.host="10.217.0.38:5000" http.request.id=10355051-fb74-4886-aae5-2cb91426ed83 http.request.method=GET http.request.remoteaddr="10.217.0.2:55800" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="59.782µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:29.236670467+00:00 stderr F time="2025-10-13T00:21:29.236129932Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=42ac454e-03b6-4cd0-b421-457f5471d163 http.request.method=GET http.request.remoteaddr="10.217.0.2:34624" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="39.121µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:29.237243402+00:00 stderr F time="2025-10-13T00:21:29.237194061Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=81817cc1-9b42-4c92-835e-76abbbb776a3 http.request.method=GET http.request.remoteaddr="10.217.0.2:34612" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="29.971µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:39.236674574+00:00 stderr F time="2025-10-13T00:21:39.236191581Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=590f7b6f-556b-4ac6-a522-4f679166d4ee http.request.method=GET http.request.remoteaddr="10.217.0.2:36656" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="35.811µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:39.238681328+00:00 stderr F time="2025-10-13T00:21:39.23803081Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=8d132101-6776-4e29-b0f4-74f8c2772c33 http.request.method=GET http.request.remoteaddr="10.217.0.2:36650" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="50.241µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:49.239153347+00:00 stderr F time="2025-10-13T00:21:49.236766063Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=582e181d-4a11-4c05-b7cc-bb95bed8905a http.request.method=GET http.request.remoteaddr="10.217.0.2:47744" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="41.111µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:49.239418764+00:00 stderr F time="2025-10-13T00:21:49.23924213Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=4e7eb93a-1cb3-4c02-92e1-c9b26181755a http.request.method=GET http.request.remoteaddr="10.217.0.2:47758" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="57.302µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:59.236717859+00:00 stderr F time="2025-10-13T00:21:59.236205485Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=c002c564-b1d9-4d42-a1f3-6e64ab5bfac0 http.request.method=GET http.request.remoteaddr="10.217.0.2:57632" http.request.uri=/healthz 
http.request.useragent=kube-probe/1.29 http.response.duration="35.571µs" http.response.status=200 http.response.written=0 2025-10-13T00:21:59.239359600+00:00 stderr F time="2025-10-13T00:21:59.239225916Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=5be4fe09-2d68-49db-93b0-8bf10e29fef5 http.request.method=GET http.request.remoteaddr="10.217.0.2:57634" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="50.891µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:09.237415346+00:00 stderr F time="2025-10-13T00:22:09.236168302Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=a63a45ab-9925-4833-841d-9a3de14abb4f http.request.method=GET http.request.remoteaddr="10.217.0.2:39610" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="51.411µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:09.237478528+00:00 stderr F time="2025-10-13T00:22:09.237300343Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=cf2a97b9-33cd-4a47-b497-188950d9930c http.request.method=GET http.request.remoteaddr="10.217.0.2:39604" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="51.482µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:19.237264758+00:00 stderr F time="2025-10-13T00:22:19.236619731Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=b26a0936-60f8-4ae6-a9ff-7c565513b90a http.request.method=GET http.request.remoteaddr="10.217.0.2:42622" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="33.071µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:19.237736111+00:00 stderr F time="2025-10-13T00:22:19.237296069Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=7fe6864b-bc60-4d67-bb7e-27afec2045e8 http.request.method=GET http.request.remoteaddr="10.217.0.2:42626" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="15.42µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:29.241593614+00:00 stderr F time="2025-10-13T00:22:29.24107646Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=ab2c0ab2-8b83-4eaa-9b06-34d5896740f3 http.request.method=GET http.request.remoteaddr="10.217.0.2:53758" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="62.062µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:29.241783999+00:00 stderr F time="2025-10-13T00:22:29.24109131Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=d69ba47d-a586-48ac-8ccf-25279369aa4b http.request.method=GET http.request.remoteaddr="10.217.0.2:53768" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="95.443µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:39.240601837+00:00 stderr F time="2025-10-13T00:22:39.240117693Z" 
level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=ea31390b-7a67-4a51-b328-a5cf3bf2e032 http.request.method=GET http.request.remoteaddr="10.217.0.2:51292" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="145.454µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:39.242359186+00:00 stderr F time="2025-10-13T00:22:39.242239872Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=7e08927d-6a8c-49fe-9ced-c2cf082b655e http.request.method=GET http.request.remoteaddr="10.217.0.2:51296" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="90.462µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:49.237598994+00:00 stderr F time="2025-10-13T00:22:49.236857213Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=78b9816c-00d5-4e4f-b196-aaa80828fbfa http.request.method=GET http.request.remoteaddr="10.217.0.2:49330" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="32.391µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:49.238070557+00:00 stderr F time="2025-10-13T00:22:49.238003385Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=af12ae36-977c-4cc4-89c5-21c511cb2d7d http.request.method=GET http.request.remoteaddr="10.217.0.2:49316" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="19.541µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:59.237551334+00:00 stderr F time="2025-10-13T00:22:59.236726121Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=acce647f-fbe8-44ea-9259-2b98e6949a5e http.request.method=GET http.request.remoteaddr="10.217.0.2:33308" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="63.932µs" http.response.status=200 http.response.written=0 2025-10-13T00:22:59.237600556+00:00 stderr F time="2025-10-13T00:22:59.237238245Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=6d4343df-c052-4483-b4a5-cab69f1adc33 http.request.method=GET http.request.remoteaddr="10.217.0.2:33310" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="41.351µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:09.239824377+00:00 stderr F time="2025-10-13T00:23:09.239178999Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=9168b76c-fe6a-4e69-bd68-ae1f6c92a2fe http.request.method=GET http.request.remoteaddr="10.217.0.2:52072" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="65.872µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:09.240759283+00:00 stderr F time="2025-10-13T00:23:09.240675491Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=90ca3b1b-46a1-4659-8f30-c50ab627b917 
http.request.method=GET http.request.remoteaddr="10.217.0.2:52062" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="41.761µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:19.236503715+00:00 stderr F time="2025-10-13T00:23:19.23596726Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=77ec52c0-3056-417f-a1ca-0a514b2d88f0 http.request.method=GET http.request.remoteaddr="10.217.0.2:37472" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="50.302µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:19.238448219+00:00 stderr F time="2025-10-13T00:23:19.238405108Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=1f0e9315-e6f8-497c-8e4d-02058ea7b2c0 http.request.method=GET http.request.remoteaddr="10.217.0.2:37474" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="20.85µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:29.236523206+00:00 stderr F time="2025-10-13T00:23:29.235962661Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=e953dfe7-19bf-456b-89a6-f45685a089b3 http.request.method=GET http.request.remoteaddr="10.217.0.2:44748" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="42.061µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:29.237830053+00:00 stderr F time="2025-10-13T00:23:29.237765031Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=51988530-564b-4600-b1eb-641ac2f2d713 http.request.method=GET http.request.remoteaddr="10.217.0.2:44752" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="27.691µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:39.236939379+00:00 stderr F time="2025-10-13T00:23:39.236379333Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=707a786f-cd14-4bae-ba56-9d878b039d69 http.request.method=GET http.request.remoteaddr="10.217.0.2:51790" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="38.221µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:39.237050532+00:00 stderr F time="2025-10-13T00:23:39.236406154Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=a030a1ce-7480-4a71-abef-5e4e30c72785 http.request.method=GET http.request.remoteaddr="10.217.0.2:51792" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="52.561µs" http.response.status=200 http.response.written=0 2025-10-13T00:23:49.236286330+00:00 stderr F time="2025-10-13T00:23:49.235846297Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=29681afc-9060-4e5f-bd4c-65d6625f7c55 http.request.method=GET http.request.remoteaddr="10.217.0.2:42950" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="49.761µs" http.response.status=200 
http.response.written=0
2025-10-13T00:23:49.236678801+00:00 stderr F time="2025-10-13T00:23:49.236598508Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.38:5000" http.request.id=262c5769-593f-46b6-ab77-40e7abcd999f http.request.method=GET http.request.remoteaddr="10.217.0.2:42934" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="42.551µs" http.response.status=200 http.response.written=0
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log
2025-08-13T20:42:54.210046648+00:00 stderr F Shutting down, got signal: Terminated
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log
2025-10-13T00:15:00.834181320+00:00 stderr F I1013 00:15:00.833545 22 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ...
2025-10-13T00:15:00.835116508+00:00 stderr F I1013 00:15:00.834673 22 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) {
2025-10-13T00:15:00.835116508+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.crt",
2025-10-13T00:15:00.835116508+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.key"
2025-10-13T00:15:00.835116508+00:00 stderr F }
2025-10-13T00:15:00.848541970+00:00 stderr F I1013 00:15:00.847802 22 observer_polling.go:159] Starting file observer
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log
2025-08-13T19:59:27.826236007+00:00 stderr F I0813 19:59:27.767514 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ...
2025-08-13T19:59:28.048819791+00:00 stderr F I0813 19:59:28.024085 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) {
2025-08-13T19:59:28.048819791+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.crt",
2025-08-13T19:59:28.048819791+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.key"
2025-08-13T19:59:28.048819791+00:00 stderr F }
2025-08-13T19:59:28.504245112+00:00 stderr F I0813 19:59:28.496669 20 observer_polling.go:159] Starting file observer
2025-08-13T20:00:33.509123438+00:00 stderr F I0813 20:00:33.508126 20 observer_polling.go:120] Observed file "/proc/7/root/etc/secrets/tls.crt" has been modified (old="cdb17acdd32bfc0645d11444c7bea3d36372b393d1d931de26e0171fac0f40c1", new="efb887ba7696e196412e106436a291839e27a1e68375ce9e547b2e2b32b7e988")
2025-08-13T20:00:33.774400752+00:00 stderr F I0813 20:00:33.767934 20 cmd.go:292] Sending TERM signal to 7 ...
2025-08-13T20:00:34.034707685+00:00 stderr F W0813 20:00:34.034452 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 67, err: )
2025-08-13T20:00:34.273950867+00:00 stderr F I0813 20:00:34.272174 20 observer_polling.go:114] Observed file "/proc/7/root/etc/secrets/tls.key" has been deleted
2025-08-13T20:00:34.273950867+00:00 stderr F I0813 20:00:34.272491 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ...
2025-08-13T20:00:34.273950867+00:00 stderr F W0813 20:00:34.273404 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 1, err: )
2025-08-13T20:00:35.365647885+00:00 stderr F W0813 20:00:35.357860 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 2, err: )
2025-08-13T20:00:36.281903040+00:00 stderr F W0813 20:00:36.279125 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 3, err: )
2025-08-13T20:00:37.283066398+00:00 stderr F W0813 20:00:37.276990 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 4, err: )
2025-08-13T20:00:38.308991331+00:00 stderr F W0813 20:00:38.305516 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 5, err: )
2025-08-13T20:00:38.520605965+00:00 stderr F I0813 20:00:38.518352 20 observer_polling.go:162] Shutting down file observer
2025-08-13T20:00:39.284050724+00:00 stderr F W0813 20:00:39.283223 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 6, err: )
2025-08-13T20:00:40.280375272+00:00 stderr F W0813 20:00:40.277531 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 7, err: )
2025-08-13T20:00:41.276881747+00:00 stderr F W0813 20:00:41.276510 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 8, err: )
2025-08-13T20:00:42.286214347+00:00 stderr F W0813 20:00:42.278618 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 9, err: )
2025-08-13T20:00:43.275711531+00:00 stderr F W0813 20:00:43.275310 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 10, err: )
2025-08-13T20:00:44.275672973+00:00 stderr F I0813 20:00:44.274906 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) {
2025-08-13T20:00:44.275672973+00:00 stderr F (string) (len=33) "/proc/38/root/etc/secrets/tls.crt",
2025-08-13T20:00:44.275672973+00:00 stderr F (string) (len=33) "/proc/38/root/etc/secrets/tls.key"
2025-08-13T20:00:44.275672973+00:00 stderr F }
2025-08-13T20:00:45.576200637+00:00 stderr F I0813 20:00:45.575191 20 observer_polling.go:159] Starting file observer
2025-08-13T20:42:42.279269663+00:00 stderr F W0813 20:42:42.278947 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 2529, err: )
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log
2025-10-13T00:15:00.661662481+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime"
2025-10-13T00:15:00.661662481+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="Go OS/Arch: linux/amd64"
2025-10-13T00:15:00.729898796+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc0008ac000)}" 2025-10-13T00:15:00.730038880+00:00 stderr F time="2025-10-13T00:15:00Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0008ac0a0)}" 2025-10-13T00:15:01.047274795+00:00 stderr F time="2025-10-13T00:15:01Z" level=info msg="waiting for informer caches to sync" 2025-10-13T00:15:01.060486341+00:00 stderr F W1013 00:15:01.060357 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:01.061632785+00:00 stderr F E1013 00:15:01.061593 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:01.062286165+00:00 stderr F W1013 00:15:01.060472 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-10-13T00:15:01.062364297+00:00 stderr F E1013 00:15:01.062342 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-10-13T00:15:01.935568540+00:00 stderr F W1013 00:15:01.934632 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:01.935568540+00:00 stderr F E1013 00:15:01.935168 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:02.275734202+00:00 stderr F W1013 00:15:02.275630 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-10-13T00:15:02.275734202+00:00 stderr F E1013 00:15:02.275668 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-10-13T00:15:03.927811580+00:00 stderr F W1013 00:15:03.927269 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-10-13T00:15:03.927811580+00:00 stderr F E1013 00:15:03.927712 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 
2025-10-13T00:15:05.119300669+00:00 stderr F W1013 00:15:05.118997 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-10-13T00:15:05.119424403+00:00 stderr F E1013 00:15:05.119408 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-10-13T00:15:10.446342457+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="started events processor" 2025-10-13T00:15:10.448357667+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.448400368+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-10-13T00:15:10.453392028+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.454237403+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-10-13T00:15:10.456723798+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.456761409+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-10-13T00:15:10.461038817+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.461038817+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-10-13T00:15:10.463643185+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-10-13T00:15:10.463690276+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-10-13T00:15:10.466979955+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-10-13T00:15:10.466979955+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-10-13T00:15:10.469573843+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.469573843+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-10-13T00:15:10.469648675+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-10-13T00:15:10.469648675+00:00 stderr F time="2025-10-13T00:15:10Z" level=info 
msg="There are no more errors or image imports in flight for imagestream java" 2025-10-13T00:15:10.472792849+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-10-13T00:15:10.472983455+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-10-13T00:15:10.475751818+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.475751818+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-10-13T00:15:10.477300614+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-10-13T00:15:10.477314685+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-10-13T00:15:10.477484540+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:15:10.478983724+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.478983724+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-10-13T00:15:10.481215881+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:15:10.481368046+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.481402257+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-10-13T00:15:10.483259253+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.483294334+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-10-13T00:15:10.484935262+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.484935262+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-10-13T00:15:10.487186809+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-10-13T00:15:10.487186809+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-10-13T00:15:10.506469197+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream 
jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.506528709+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-10-13T00:15:10.509141287+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.509141287+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-10-13T00:15:10.512482577+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.512482577+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-10-13T00:15:10.516375184+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.516375184+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-10-13T00:15:10.517924300+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.517924300+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-10-13T00:15:10.523709164+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.523709164+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-10-13T00:15:10.528882399+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-10-13T00:15:10.528882399+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-10-13T00:15:10.531146206+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.531146206+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-10-13T00:15:10.533828307+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-10-13T00:15:10.533828307+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-10-13T00:15:10.536424285+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream 
nginx already deleted so no worries on clearing tags" 2025-10-13T00:15:10.536470136+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-10-13T00:15:10.539090144+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-10-13T00:15:10.539090144+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-10-13T00:15:10.541767865+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-10-13T00:15:10.541834897+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-10-13T00:15:10.544465966+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-10-13T00:15:10.544465966+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-10-13T00:15:10.546829546+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.546829546+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-10-13T00:15:10.549792165+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.549792165+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-10-13T00:15:10.553748324+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-10-13T00:15:10.553780545+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-10-13T00:15:10.557019872+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-10-13T00:15:10.557019872+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-10-13T00:15:10.564584178+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-10-13T00:15:10.564584178+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-10-13T00:15:10.569336811+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-10-13T00:15:10.569336811+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-10-13T00:15:10.576203936+00:00 stderr F time="2025-10-13T00:15:10Z" level=info 
msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-10-13T00:15:10.576203936+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-10-13T00:15:10.579100533+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.579100533+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-10-13T00:15:10.580533036+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-10-13T00:15:10.580533036+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-10-13T00:15:10.581941358+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-10-13T00:15:10.581941358+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-10-13T00:15:10.584606958+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.584606958+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-10-13T00:15:10.586930868+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.586930868+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-10-13T00:15:10.589314849+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.589314849+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-10-13T00:15:10.592292949+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-10-13T00:15:10.592292949+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-10-13T00:15:10.593702361+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-10-13T00:15:10.593702361+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-10-13T00:15:10.595856695+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.595856695+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream 
sso75-openshift-rhel8" 2025-10-13T00:15:10.599710531+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.599710531+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-10-13T00:15:10.601828154+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-10-13T00:15:10.601828154+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-10-13T00:15:10.603387671+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-10-13T00:15:10.603387671+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-10-13T00:15:10.605999169+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-10-13T00:15:10.605999169+00:00 stderr F time="2025-10-13T00:15:10Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-10-13T00:15:17.646401641+00:00 stderr F time="2025-10-13T00:15:17Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:15:17.646401641+00:00 stderr F time="2025-10-13T00:15:17Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:16:30.355398080+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:16:30.355398080+00:00 stderr F time="2025-10-13T00:16:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:19:38.910414462+00:00 stderr F time="2025-10-13T00:19:38Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:19:38.910414462+00:00 stderr F time="2025-10-13T00:19:38Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:19:59.723016300+00:00 stderr F time="2025-10-13T00:19:59Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:19:59.723016300+00:00 stderr F time="2025-10-13T00:19:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:27.909692455+00:00 stderr F time="2025-10-13T00:20:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:27.909793898+00:00 stderr F time="2025-10-13T00:20:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:27.954484372+00:00 stderr F time="2025-10-13T00:20:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:27.954484372+00:00 stderr F time="2025-10-13T00:20:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, 
and version correct" 2025-10-13T00:20:28.083562044+00:00 stderr F time="2025-10-13T00:20:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:28.083562044+00:00 stderr F time="2025-10-13T00:20:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:28.487216040+00:00 stderr F time="2025-10-13T00:20:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:28.487216040+00:00 stderr F time="2025-10-13T00:20:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:29.018890409+00:00 stderr F time="2025-10-13T00:20:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:29.018890409+00:00 stderr F time="2025-10-13T00:20:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:29.841427549+00:00 stderr F time="2025-10-13T00:20:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:29.841427549+00:00 stderr F time="2025-10-13T00:20:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:30.466436847+00:00 stderr F time="2025-10-13T00:20:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:30.466436847+00:00 stderr F time="2025-10-13T00:20:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:30.820455890+00:00 stderr F time="2025-10-13T00:20:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:30.820455890+00:00 stderr F time="2025-10-13T00:20:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:31.223219602+00:00 stderr F time="2025-10-13T00:20:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:31.223219602+00:00 stderr F time="2025-10-13T00:20:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:31.871029160+00:00 stderr F time="2025-10-13T00:20:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:31.871029160+00:00 stderr F time="2025-10-13T00:20:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:32.860175484+00:00 stderr F time="2025-10-13T00:20:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:32.860175484+00:00 stderr F time="2025-10-13T00:20:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:39.175857912+00:00 stderr F time="2025-10-13T00:20:39Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:39.175857912+00:00 stderr F time="2025-10-13T00:20:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:40.178861205+00:00 stderr F 
time="2025-10-13T00:20:40Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:40.178861205+00:00 stderr F time="2025-10-13T00:20:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:40.226725998+00:00 stderr F time="2025-10-13T00:20:40Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:40.226725998+00:00 stderr F time="2025-10-13T00:20:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:41.175033688+00:00 stderr F time="2025-10-13T00:20:41Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:41.175033688+00:00 stderr F time="2025-10-13T00:20:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:41.221202453+00:00 stderr F time="2025-10-13T00:20:41Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:41.221202453+00:00 stderr F time="2025-10-13T00:20:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:42.166732085+00:00 stderr F time="2025-10-13T00:20:42Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:42.166732085+00:00 stderr F time="2025-10-13T00:20:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:42.212410667+00:00 stderr F time="2025-10-13T00:20:42Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:42.212410667+00:00 stderr F time="2025-10-13T00:20:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:43.183568728+00:00 stderr F time="2025-10-13T00:20:43Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:43.183568728+00:00 stderr F time="2025-10-13T00:20:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:43.250279860+00:00 stderr F time="2025-10-13T00:20:43Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:43.250279860+00:00 stderr F time="2025-10-13T00:20:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:44.170357457+00:00 stderr F time="2025-10-13T00:20:44Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:44.170357457+00:00 stderr F time="2025-10-13T00:20:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:44.221368709+00:00 stderr F time="2025-10-13T00:20:44Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:44.221368709+00:00 stderr F time="2025-10-13T00:20:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:45.178026203+00:00 stderr F time="2025-10-13T00:20:45Z" level=info msg="no global imagestream configuration will 
block imagestream creation using " 2025-10-13T00:20:45.178026203+00:00 stderr F time="2025-10-13T00:20:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:45.222173622+00:00 stderr F time="2025-10-13T00:20:45Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:45.222173622+00:00 stderr F time="2025-10-13T00:20:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:46.182597492+00:00 stderr F time="2025-10-13T00:20:46Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:46.182597492+00:00 stderr F time="2025-10-13T00:20:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:46.230184888+00:00 stderr F time="2025-10-13T00:20:46Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:46.230184888+00:00 stderr F time="2025-10-13T00:20:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:47.189536366+00:00 stderr F time="2025-10-13T00:20:47Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:47.189536366+00:00 stderr F time="2025-10-13T00:20:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:47.274807129+00:00 stderr F time="2025-10-13T00:20:47Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:47.274807129+00:00 stderr F time="2025-10-13T00:20:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:48.176110129+00:00 stderr F time="2025-10-13T00:20:48Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:48.176110129+00:00 stderr F time="2025-10-13T00:20:48Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:48.231897025+00:00 stderr F time="2025-10-13T00:20:48Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:48.231897025+00:00 stderr F time="2025-10-13T00:20:48Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:49.180510834+00:00 stderr F time="2025-10-13T00:20:49Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:49.180510834+00:00 stderr F time="2025-10-13T00:20:49Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:49.259169061+00:00 stderr F time="2025-10-13T00:20:49Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:49.259169061+00:00 stderr F time="2025-10-13T00:20:49Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:50.180560496+00:00 stderr F time="2025-10-13T00:20:50Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:50.180560496+00:00 stderr F 
time="2025-10-13T00:20:50Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:50.240132528+00:00 stderr F time="2025-10-13T00:20:50Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:50.240132528+00:00 stderr F time="2025-10-13T00:20:50Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:51.171928763+00:00 stderr F time="2025-10-13T00:20:51Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:51.171928763+00:00 stderr F time="2025-10-13T00:20:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:51.244739686+00:00 stderr F time="2025-10-13T00:20:51Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:51.244739686+00:00 stderr F time="2025-10-13T00:20:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:52.171560943+00:00 stderr F time="2025-10-13T00:20:52Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:52.171560943+00:00 stderr F time="2025-10-13T00:20:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:52.215470075+00:00 stderr F time="2025-10-13T00:20:52Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:52.215470075+00:00 stderr F time="2025-10-13T00:20:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:53.179997340+00:00 stderr F time="2025-10-13T00:20:53Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:53.179997340+00:00 stderr F time="2025-10-13T00:20:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:53.250116508+00:00 stderr F time="2025-10-13T00:20:53Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:53.250116508+00:00 stderr F time="2025-10-13T00:20:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:54.173174048+00:00 stderr F time="2025-10-13T00:20:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:54.173174048+00:00 stderr F time="2025-10-13T00:20:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:54.221451892+00:00 stderr F time="2025-10-13T00:20:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:54.221451892+00:00 stderr F time="2025-10-13T00:20:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:55.184724202+00:00 stderr F time="2025-10-13T00:20:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:55.184724202+00:00 stderr F time="2025-10-13T00:20:55Z" level=info msg="At steady state: config the same and 
exists is true, in progress false, and version correct" 2025-10-13T00:20:55.253576734+00:00 stderr F time="2025-10-13T00:20:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:55.253576734+00:00 stderr F time="2025-10-13T00:20:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:56.182147721+00:00 stderr F time="2025-10-13T00:20:56Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:56.182147721+00:00 stderr F time="2025-10-13T00:20:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:56.240423556+00:00 stderr F time="2025-10-13T00:20:56Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:56.240423556+00:00 stderr F time="2025-10-13T00:20:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:57.182575212+00:00 stderr F time="2025-10-13T00:20:57Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:57.182575212+00:00 stderr F time="2025-10-13T00:20:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:57.252155775+00:00 stderr F time="2025-10-13T00:20:57Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:57.252155775+00:00 stderr F time="2025-10-13T00:20:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:58.176963524+00:00 stderr F time="2025-10-13T00:20:58Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:58.176963524+00:00 stderr F time="2025-10-13T00:20:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:58.229242631+00:00 stderr F time="2025-10-13T00:20:58Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:58.229384965+00:00 stderr F time="2025-10-13T00:20:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:59.170915435+00:00 stderr F time="2025-10-13T00:20:59Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:59.170915435+00:00 stderr F time="2025-10-13T00:20:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:20:59.214066096+00:00 stderr F time="2025-10-13T00:20:59Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:20:59.214066096+00:00 stderr F time="2025-10-13T00:20:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:00.167034426+00:00 stderr F time="2025-10-13T00:21:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:00.167145899+00:00 stderr F time="2025-10-13T00:21:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 
2025-10-13T00:21:00.210485005+00:00 stderr F time="2025-10-13T00:21:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:00.210554597+00:00 stderr F time="2025-10-13T00:21:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:01.179149646+00:00 stderr F time="2025-10-13T00:21:01Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:01.179149646+00:00 stderr F time="2025-10-13T00:21:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:01.254506220+00:00 stderr F time="2025-10-13T00:21:01Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:01.254566961+00:00 stderr F time="2025-10-13T00:21:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:02.168304271+00:00 stderr F time="2025-10-13T00:21:02Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:02.168304271+00:00 stderr F time="2025-10-13T00:21:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:02.221263837+00:00 stderr F time="2025-10-13T00:21:02Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:02.221263837+00:00 stderr F time="2025-10-13T00:21:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:02.263611305+00:00 stderr F time="2025-10-13T00:21:02Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:02.263682877+00:00 stderr F time="2025-10-13T00:21:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:03.169149754+00:00 stderr F time="2025-10-13T00:21:03Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:03.169149754+00:00 stderr F time="2025-10-13T00:21:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:03.241480004+00:00 stderr F time="2025-10-13T00:21:03Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:03.241480004+00:00 stderr F time="2025-10-13T00:21:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:04.174041652+00:00 stderr F time="2025-10-13T00:21:04Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:04.174152325+00:00 stderr F time="2025-10-13T00:21:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:04.237386390+00:00 stderr F time="2025-10-13T00:21:04Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:04.237486092+00:00 stderr F time="2025-10-13T00:21:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:05.193389214+00:00 stderr F time="2025-10-13T00:21:05Z" level=info 
msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:05.193389214+00:00 stderr F time="2025-10-13T00:21:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:05.242942285+00:00 stderr F time="2025-10-13T00:21:05Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:05.243014227+00:00 stderr F time="2025-10-13T00:21:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:06.174095373+00:00 stderr F time="2025-10-13T00:21:06Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:06.175368769+00:00 stderr F time="2025-10-13T00:21:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:06.256516806+00:00 stderr F time="2025-10-13T00:21:06Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:06.256516806+00:00 stderr F time="2025-10-13T00:21:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:06.750004574+00:00 stderr F time="2025-10-13T00:21:06Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:06.750108497+00:00 stderr F time="2025-10-13T00:21:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:07.165453671+00:00 stderr F time="2025-10-13T00:21:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:07.165453671+00:00 stderr F time="2025-10-13T00:21:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:07.207086769+00:00 stderr F time="2025-10-13T00:21:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:07.207086769+00:00 stderr F time="2025-10-13T00:21:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:07.696271846+00:00 stderr F time="2025-10-13T00:21:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:07.696271846+00:00 stderr F time="2025-10-13T00:21:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:08.168523738+00:00 stderr F time="2025-10-13T00:21:08Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:08.168603240+00:00 stderr F time="2025-10-13T00:21:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:08.209472587+00:00 stderr F time="2025-10-13T00:21:08Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:08.209472587+00:00 stderr F time="2025-10-13T00:21:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:09.171716276+00:00 stderr F time="2025-10-13T00:21:09Z" level=info msg="no global imagestream configuration will block imagestream creation using " 
2025-10-13T00:21:09.171716276+00:00 stderr F time="2025-10-13T00:21:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:09.220294249+00:00 stderr F time="2025-10-13T00:21:09Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:09.220294249+00:00 stderr F time="2025-10-13T00:21:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:09.799665706+00:00 stderr F time="2025-10-13T00:21:09Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:09.799825760+00:00 stderr F time="2025-10-13T00:21:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:10.185305110+00:00 stderr F time="2025-10-13T00:21:10Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:10.185404212+00:00 stderr F time="2025-10-13T00:21:10Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:10.236458785+00:00 stderr F time="2025-10-13T00:21:10Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:10.236458785+00:00 stderr F time="2025-10-13T00:21:10Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:11.142437829+00:00 stderr F time="2025-10-13T00:21:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:11.142485830+00:00 stderr F time="2025-10-13T00:21:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:11.213459169+00:00 stderr F time="2025-10-13T00:21:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:11.213459169+00:00 stderr F time="2025-10-13T00:21:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:12.173200658+00:00 stderr F time="2025-10-13T00:21:12Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:12.173200658+00:00 stderr F time="2025-10-13T00:21:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:12.229593544+00:00 stderr F time="2025-10-13T00:21:12Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:12.229593544+00:00 stderr F time="2025-10-13T00:21:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:13.176420197+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:13.176420197+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:13.236523854+00:00 stderr F time="2025-10-13T00:21:13Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:13.236523854+00:00 stderr F time="2025-10-13T00:21:13Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:14.171109937+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:14.171109937+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:14.216499687+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:14.216576609+00:00 stderr F time="2025-10-13T00:21:14Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:15.179784182+00:00 stderr F time="2025-10-13T00:21:15Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:15.179855454+00:00 stderr F time="2025-10-13T00:21:15Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:15.247806691+00:00 stderr F time="2025-10-13T00:21:15Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:15.247806691+00:00 stderr F time="2025-10-13T00:21:15Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:16.180165323+00:00 stderr F time="2025-10-13T00:21:16Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:16.180200134+00:00 stderr F time="2025-10-13T00:21:16Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:16.254259576+00:00 stderr F time="2025-10-13T00:21:16Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:16.254259576+00:00 stderr F time="2025-10-13T00:21:16Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:17.180092663+00:00 stderr F time="2025-10-13T00:21:17Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:17.180092663+00:00 stderr F time="2025-10-13T00:21:17Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:17.222622787+00:00 stderr F time="2025-10-13T00:21:17Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:17.222622787+00:00 stderr F time="2025-10-13T00:21:17Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:18.181724400+00:00 stderr F time="2025-10-13T00:21:18Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:18.181724400+00:00 stderr F time="2025-10-13T00:21:18Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:18.227360137+00:00 stderr F time="2025-10-13T00:21:18Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:18.227360137+00:00 stderr F time="2025-10-13T00:21:18Z" level=info msg="At steady state: config the same and exists is true, in progress false, and 
version correct" 2025-10-13T00:21:19.172968355+00:00 stderr F time="2025-10-13T00:21:19Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:19.172968355+00:00 stderr F time="2025-10-13T00:21:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:19.220554534+00:00 stderr F time="2025-10-13T00:21:19Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:19.220554534+00:00 stderr F time="2025-10-13T00:21:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:20.177033657+00:00 stderr F time="2025-10-13T00:21:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:20.177033657+00:00 stderr F time="2025-10-13T00:21:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:20.233219768+00:00 stderr F time="2025-10-13T00:21:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-10-13T00:21:20.233380432+00:00 stderr F time="2025-10-13T00:21:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:21.143478207+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:21.143478207+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="SamplesRegistry changed from to registry.redhat.io" 2025-10-13T00:21:21.143478207+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="ENTERING UPSERT / STEADY STATE PATH ExistTrue true ImageInProgressFalse true VersionOK true ConfigChanged true ManagementStateChanged true" 2025-10-13T00:21:21.325998285+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-10-13T00:21:21.326437087+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream postgresql13-for-sso75-openshift-rhel8" 2025-10-13T00:21:21.340570357+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-10-13T00:21:21.341872022+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream sso76-openshift-rhel8" 2025-10-13T00:21:21.357413860+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-10-13T00:21:21.357584074+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-10-13T00:21:21.421594256+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-10-13T00:21:21.422366597+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream ubi8-openjdk-8-runtime" 2025-10-13T00:21:21.479238026+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-10-13T00:21:21.480310885+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream php" 
2025-10-13T00:21:21.538749566+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-10-13T00:21:21.539220669+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-10-13T00:21:21.603048076+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream fuse7-karaf-openshift" 2025-10-13T00:21:21.603797736+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-10-13T00:21:21.660721807+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-10-13T00:21:21.660721807+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream fuse7-karaf-openshift-jdk11" 2025-10-13T00:21:21.724891192+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-10-13T00:21:21.725861308+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream ubi8-openjdk-11" 2025-10-13T00:21:21.777614890+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-10-13T00:21:21.777785655+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream jboss-eap74-openjdk8-openshift" 2025-10-13T00:21:21.842077434+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream fuse7-eap-openshift-java11" 2025-10-13T00:21:21.842538726+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-10-13T00:21:21.900936886+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream openjdk-11-rhel7" 2025-10-13T00:21:21.901237035+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-10-13T00:21:21.959498491+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-10-13T00:21:21.959599674+00:00 stderr F time="2025-10-13T00:21:21Z" level=info msg="updated imagestream ubi8-openjdk-11-runtime" 2025-10-13T00:21:22.017066479+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-10-13T00:21:22.017160292+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream jenkins-agent-base" 2025-10-13T00:21:22.080368262+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-10-13T00:21:22.080535546+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-10-13T00:21:22.139094741+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-10-13T00:21:22.139532713+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream httpd" 2025-10-13T00:21:22.199719031+00:00 stderr F 
time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-10-13T00:21:22.200116792+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream python" 2025-10-13T00:21:22.258315187+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream dotnet-runtime" 2025-10-13T00:21:22.259015706+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-10-13T00:21:22.320019256+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream java" 2025-10-13T00:21:22.329149672+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-10-13T00:21:22.378572101+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream sso75-openshift-rhel8" 2025-10-13T00:21:22.378737875+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-10-13T00:21:22.440511226+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream ubi8-openjdk-17" 2025-10-13T00:21:22.441042421+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-10-13T00:21:22.501004053+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream postgresql" 2025-10-13T00:21:22.502666318+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-10-13T00:21:22.559053094+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-10-13T00:21:22.559124026+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream ubi8-openjdk-17-runtime" 2025-10-13T00:21:22.620357923+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-10-13T00:21:22.621482873+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream ubi8-openjdk-8" 2025-10-13T00:21:22.679201755+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-10-13T00:21:22.679476963+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-10-13T00:21:22.742365363+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-10-13T00:21:22.742990050+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream redhat-openjdk18-openshift" 2025-10-13T00:21:22.797622089+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream mysql" 2025-10-13T00:21:22.797659330+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-10-13T00:21:22.857263132+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-10-13T00:21:22.858123485+00:00 stderr F time="2025-10-13T00:21:22Z" level=info 
msg="updated imagestream jboss-eap-xp4-openjdk11-openshift" 2025-10-13T00:21:22.921457018+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream jenkins" 2025-10-13T00:21:22.922640340+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-10-13T00:21:22.982783658+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-10-13T00:21:22.983109276+00:00 stderr F time="2025-10-13T00:21:22Z" level=info msg="updated imagestream ruby" 2025-10-13T00:21:23.038733452+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-10-13T00:21:23.038950708+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream postgresql13-for-sso76-openshift-rhel8" 2025-10-13T00:21:23.103070472+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream fuse7-java-openshift" 2025-10-13T00:21:23.104802489+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-10-13T00:21:23.175732566+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream nodejs" 2025-10-13T00:21:23.178122771+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-10-13T00:21:23.224235341+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-10-13T00:21:23.225023512+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream jboss-datagrid73-openshift" 2025-10-13T00:21:23.278148460+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-10-13T00:21:23.278313244+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream jboss-eap-xp3-openjdk11-openshift" 2025-10-13T00:21:23.342641394+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-10-13T00:21:23.342789228+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-10-13T00:21:23.397456078+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-10-13T00:21:23.397935061+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-10-13T00:21:23.455438898+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream ubi8-openjdk-21-runtime" 2025-10-13T00:21:23.456594899+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-10-13T00:21:23.520410145+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-10-13T00:21:23.520800805+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream 
mariadb" 2025-10-13T00:21:23.578608220+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-10-13T00:21:23.579237377+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream nginx" 2025-10-13T00:21:23.638858970+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream jboss-eap74-openjdk11-openshift" 2025-10-13T00:21:23.639130568+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-10-13T00:21:23.697643301+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-10-13T00:21:23.697894548+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream fuse7-java11-openshift" 2025-10-13T00:21:23.749906697+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-10-13T00:21:23.749906697+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-10-13T00:21:23.759056403+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream dotnet" 2025-10-13T00:21:23.773303406+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-10-13T00:21:23.822417267+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-10-13T00:21:23.822417267+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-10-13T00:21:23.840233286+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream golang" 2025-10-13T00:21:23.852984529+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-10-13T00:21:23.915986303+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-10-13T00:21:23.916351663+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream ubi8-openjdk-21" 2025-10-13T00:21:23.979839950+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="updated imagestream perl" 2025-10-13T00:21:23.979997925+00:00 stderr F time="2025-10-13T00:21:23Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-10-13T00:21:24.039182906+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="updated imagestream redis" 2025-10-13T00:21:24.039992708+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-10-13T00:21:24.104454612+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="updated imagestream fuse7-eap-openshift" 2025-10-13T00:21:24.104547444+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-10-13T00:21:24.159208114+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="There are no more errors or image imports in flight for 
imagestream java-runtime" 2025-10-13T00:21:24.159257185+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="updated imagestream java-runtime" 2025-10-13T00:21:24.159257185+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="CRDUPDATE samples upserted; set clusteroperator ready, steady state" 2025-10-13T00:21:24.270064615+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-10-13T00:21:24.270064615+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-10-13T00:21:24.993107100+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-10-13T00:21:24.993107100+00:00 stderr F time="2025-10-13T00:21:24Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-10-13T00:21:25.184995050+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-10-13T00:21:25.184995050+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-10-13T00:21:25.203382004+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-10-13T00:21:25.203382004+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-10-13T00:21:25.243109973+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-10-13T00:21:25.243109973+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-10-13T00:21:25.246048152+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-10-13T00:21:25.246048152+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-10-13T00:21:25.544756995+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-10-13T00:21:25.544756995+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-10-13T00:21:25.612642260+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-10-13T00:21:25.612759893+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-10-13T00:21:25.785366055+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-10-13T00:21:25.785458667+00:00 stderr F time="2025-10-13T00:21:25Z" level=info msg="There are no more errors or image imports in flight for imagestream 
ubi8-openjdk-11-runtime" 2025-10-13T00:21:26.555930536+00:00 stderr F time="2025-10-13T00:21:26Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-10-13T00:21:26.555930536+00:00 stderr F time="2025-10-13T00:21:26Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-10-13T00:21:27.179350891+00:00 stderr F time="2025-10-13T00:21:27Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:27.179350891+00:00 stderr F time="2025-10-13T00:21:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:27.956188092+00:00 stderr F time="2025-10-13T00:21:27Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:27.956188092+00:00 stderr F time="2025-10-13T00:21:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:28.177092053+00:00 stderr F time="2025-10-13T00:21:28Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:28.177092053+00:00 stderr F time="2025-10-13T00:21:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:28.202011733+00:00 stderr F time="2025-10-13T00:21:28Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-10-13T00:21:28.202011733+00:00 stderr F time="2025-10-13T00:21:28Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-10-13T00:21:28.227232981+00:00 stderr F time="2025-10-13T00:21:28Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:28.227232981+00:00 stderr F time="2025-10-13T00:21:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:29.172126281+00:00 stderr F time="2025-10-13T00:21:29Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:29.172126281+00:00 stderr F time="2025-10-13T00:21:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:29.215016844+00:00 stderr F time="2025-10-13T00:21:29Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:29.215016844+00:00 stderr F time="2025-10-13T00:21:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:30.183635280+00:00 stderr F time="2025-10-13T00:21:30Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:30.183635280+00:00 stderr F time="2025-10-13T00:21:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:30.222984439+00:00 stderr F time="2025-10-13T00:21:30Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:30.222984439+00:00 stderr F 
time="2025-10-13T00:21:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:31.170937181+00:00 stderr F time="2025-10-13T00:21:31Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:31.171000803+00:00 stderr F time="2025-10-13T00:21:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:31.214092371+00:00 stderr F time="2025-10-13T00:21:31Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:31.214092371+00:00 stderr F time="2025-10-13T00:21:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:32.174527090+00:00 stderr F time="2025-10-13T00:21:32Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:32.174527090+00:00 stderr F time="2025-10-13T00:21:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:32.236037354+00:00 stderr F time="2025-10-13T00:21:32Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:32.236037354+00:00 stderr F time="2025-10-13T00:21:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:33.178501649+00:00 stderr F time="2025-10-13T00:21:33Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:33.178501649+00:00 stderr F time="2025-10-13T00:21:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:33.222905433+00:00 stderr F time="2025-10-13T00:21:33Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:33.222905433+00:00 stderr F time="2025-10-13T00:21:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:34.176909177+00:00 stderr F time="2025-10-13T00:21:34Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:34.176909177+00:00 stderr F time="2025-10-13T00:21:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:34.245060539+00:00 stderr F time="2025-10-13T00:21:34Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:34.245060539+00:00 stderr F time="2025-10-13T00:21:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:34.916182457+00:00 stderr F time="2025-10-13T00:21:34Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:34.916182457+00:00 stderr F time="2025-10-13T00:21:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:35.002595291+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="no global imagestream configuration will 
block imagestream creation using registry.redhat.io" 2025-10-13T00:21:35.002595291+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:35.047884769+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:35.047884769+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:35.093992009+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:35.093992009+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:35.171922335+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:35.171922335+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:35.226522723+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:35.226522723+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:35.889925303+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:35.889925303+00:00 stderr F time="2025-10-13T00:21:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:36.167621301+00:00 stderr F time="2025-10-13T00:21:36Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:36.167737554+00:00 stderr F time="2025-10-13T00:21:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:36.239575426+00:00 stderr F time="2025-10-13T00:21:36Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:36.239575426+00:00 stderr F time="2025-10-13T00:21:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:36.301879312+00:00 stderr F time="2025-10-13T00:21:36Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:36.301879312+00:00 stderr F time="2025-10-13T00:21:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:37.097878747+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:37.097878747+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 
2025-10-13T00:21:37.174758944+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:37.174758944+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:37.234994414+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:37.234994414+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:37.490520786+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:37.490520786+00:00 stderr F time="2025-10-13T00:21:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:38.166392962+00:00 stderr F time="2025-10-13T00:21:38Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:38.166485224+00:00 stderr F time="2025-10-13T00:21:38Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:38.208135774+00:00 stderr F time="2025-10-13T00:21:38Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:38.208135774+00:00 stderr F time="2025-10-13T00:21:38Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:39.174189833+00:00 stderr F time="2025-10-13T00:21:39Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:39.174189833+00:00 stderr F time="2025-10-13T00:21:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:39.239208752+00:00 stderr F time="2025-10-13T00:21:39Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:39.239208752+00:00 stderr F time="2025-10-13T00:21:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:40.187744760+00:00 stderr F time="2025-10-13T00:21:40Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:40.187744760+00:00 stderr F time="2025-10-13T00:21:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:40.250867098+00:00 stderr F time="2025-10-13T00:21:40Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:40.250867098+00:00 stderr F time="2025-10-13T00:21:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:41.168754621+00:00 stderr F time="2025-10-13T00:21:41Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:41.168754621+00:00 stderr F time="2025-10-13T00:21:41Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:41.214647205+00:00 stderr F time="2025-10-13T00:21:41Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:41.214647205+00:00 stderr F time="2025-10-13T00:21:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:42.165467574+00:00 stderr F time="2025-10-13T00:21:42Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:42.165467574+00:00 stderr F time="2025-10-13T00:21:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:42.211536433+00:00 stderr F time="2025-10-13T00:21:42Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:42.211536433+00:00 stderr F time="2025-10-13T00:21:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:43.177002377+00:00 stderr F time="2025-10-13T00:21:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:43.177002377+00:00 stderr F time="2025-10-13T00:21:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:43.243941397+00:00 stderr F time="2025-10-13T00:21:43Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:43.243941397+00:00 stderr F time="2025-10-13T00:21:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:44.179137066+00:00 stderr F time="2025-10-13T00:21:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:44.179137066+00:00 stderr F time="2025-10-13T00:21:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:44.228556524+00:00 stderr F time="2025-10-13T00:21:44Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:44.228556524+00:00 stderr F time="2025-10-13T00:21:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:45.175288133+00:00 stderr F time="2025-10-13T00:21:45Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:45.175288133+00:00 stderr F time="2025-10-13T00:21:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:45.234291630+00:00 stderr F time="2025-10-13T00:21:45Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:45.234291630+00:00 stderr F time="2025-10-13T00:21:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:46.175390187+00:00 stderr F time="2025-10-13T00:21:46Z" level=info msg="no global imagestream configuration will block imagestream creation using 
registry.redhat.io" 2025-10-13T00:21:46.175492860+00:00 stderr F time="2025-10-13T00:21:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:46.232790181+00:00 stderr F time="2025-10-13T00:21:46Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:46.232790181+00:00 stderr F time="2025-10-13T00:21:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:47.176081118+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:47.176081118+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:47.234903810+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:47.234903810+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:47.364779172+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:47.364779172+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:47.417476289+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:47.417476289+00:00 stderr F time="2025-10-13T00:21:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:48.171624589+00:00 stderr F time="2025-10-13T00:21:48Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:48.171737732+00:00 stderr F time="2025-10-13T00:21:48Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:48.232074785+00:00 stderr F time="2025-10-13T00:21:48Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:48.232074785+00:00 stderr F time="2025-10-13T00:21:48Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:49.188392232+00:00 stderr F time="2025-10-13T00:21:49Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:49.188392232+00:00 stderr F time="2025-10-13T00:21:49Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:49.248498639+00:00 stderr F time="2025-10-13T00:21:49Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:49.248498639+00:00 stderr F time="2025-10-13T00:21:49Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:50.172427686+00:00 stderr F 
time="2025-10-13T00:21:50Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:50.172427686+00:00 stderr F time="2025-10-13T00:21:50Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:50.238911113+00:00 stderr F time="2025-10-13T00:21:50Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:50.238911113+00:00 stderr F time="2025-10-13T00:21:50Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:51.172533180+00:00 stderr F time="2025-10-13T00:21:51Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:51.172533180+00:00 stderr F time="2025-10-13T00:21:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:51.243099728+00:00 stderr F time="2025-10-13T00:21:51Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:51.243200840+00:00 stderr F time="2025-10-13T00:21:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:52.170087086+00:00 stderr F time="2025-10-13T00:21:52Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:52.170134617+00:00 stderr F time="2025-10-13T00:21:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:52.242932575+00:00 stderr F time="2025-10-13T00:21:52Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:52.242932575+00:00 stderr F time="2025-10-13T00:21:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:53.184172187+00:00 stderr F time="2025-10-13T00:21:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:53.184172187+00:00 stderr F time="2025-10-13T00:21:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:53.243906233+00:00 stderr F time="2025-10-13T00:21:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:53.243906233+00:00 stderr F time="2025-10-13T00:21:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:54.177830829+00:00 stderr F time="2025-10-13T00:21:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:54.177904440+00:00 stderr F time="2025-10-13T00:21:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:54.229558110+00:00 stderr F time="2025-10-13T00:21:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:54.229558110+00:00 stderr F time="2025-10-13T00:21:54Z" level=info msg="At steady state: config the same and 
exists is true, in progress false, and version correct" 2025-10-13T00:21:55.183957084+00:00 stderr F time="2025-10-13T00:21:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:55.184017686+00:00 stderr F time="2025-10-13T00:21:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:55.233268290+00:00 stderr F time="2025-10-13T00:21:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:55.233268290+00:00 stderr F time="2025-10-13T00:21:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:56.169487517+00:00 stderr F time="2025-10-13T00:21:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:56.169487517+00:00 stderr F time="2025-10-13T00:21:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:56.209738239+00:00 stderr F time="2025-10-13T00:21:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:56.209738239+00:00 stderr F time="2025-10-13T00:21:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:57.181462450+00:00 stderr F time="2025-10-13T00:21:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:57.181462450+00:00 stderr F time="2025-10-13T00:21:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:57.234942648+00:00 stderr F time="2025-10-13T00:21:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:57.234942648+00:00 stderr F time="2025-10-13T00:21:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:58.173650772+00:00 stderr F time="2025-10-13T00:21:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:58.173650772+00:00 stderr F time="2025-10-13T00:21:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:58.218919350+00:00 stderr F time="2025-10-13T00:21:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:58.218919350+00:00 stderr F time="2025-10-13T00:21:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:59.179287704+00:00 stderr F time="2025-10-13T00:21:59Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:21:59.179287704+00:00 stderr F time="2025-10-13T00:21:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:21:59.240514321+00:00 stderr F time="2025-10-13T00:21:59Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 
2025-10-13T00:21:59.240514321+00:00 stderr F time="2025-10-13T00:21:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:00.185451352+00:00 stderr F time="2025-10-13T00:22:00Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:00.185451352+00:00 stderr F time="2025-10-13T00:22:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:00.244771187+00:00 stderr F time="2025-10-13T00:22:00Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:00.244771187+00:00 stderr F time="2025-10-13T00:22:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:01.180062259+00:00 stderr F time="2025-10-13T00:22:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:01.180062259+00:00 stderr F time="2025-10-13T00:22:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:01.259494045+00:00 stderr F time="2025-10-13T00:22:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:01.259494045+00:00 stderr F time="2025-10-13T00:22:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:02.175999490+00:00 stderr F time="2025-10-13T00:22:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:02.175999490+00:00 stderr F time="2025-10-13T00:22:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:02.224128035+00:00 stderr F time="2025-10-13T00:22:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:02.224128035+00:00 stderr F time="2025-10-13T00:22:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:03.173095314+00:00 stderr F time="2025-10-13T00:22:03Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:03.173095314+00:00 stderr F time="2025-10-13T00:22:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:03.229097100+00:00 stderr F time="2025-10-13T00:22:03Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:03.229097100+00:00 stderr F time="2025-10-13T00:22:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:04.175467460+00:00 stderr F time="2025-10-13T00:22:04Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:04.175467460+00:00 stderr F time="2025-10-13T00:22:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:04.224054637+00:00 stderr F time="2025-10-13T00:22:04Z" level=info 
msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:04.224054637+00:00 stderr F time="2025-10-13T00:22:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:05.176519821+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:05.176519821+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:05.245074354+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:05.245074354+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:05.752418697+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:05.752418697+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:05.797987852+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:05.797987852+00:00 stderr F time="2025-10-13T00:22:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:06.170508920+00:00 stderr F time="2025-10-13T00:22:06Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:06.170508920+00:00 stderr F time="2025-10-13T00:22:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:06.234899142+00:00 stderr F time="2025-10-13T00:22:06Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:06.234899142+00:00 stderr F time="2025-10-13T00:22:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:07.172040694+00:00 stderr F time="2025-10-13T00:22:07Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:07.172040694+00:00 stderr F time="2025-10-13T00:22:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:07.244640786+00:00 stderr F time="2025-10-13T00:22:07Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:07.244640786+00:00 stderr F time="2025-10-13T00:22:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:08.170839294+00:00 stderr F time="2025-10-13T00:22:08Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:08.170839294+00:00 stderr F time="2025-10-13T00:22:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and 
version correct" 2025-10-13T00:22:08.232256495+00:00 stderr F time="2025-10-13T00:22:08Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:08.232256495+00:00 stderr F time="2025-10-13T00:22:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:09.176826696+00:00 stderr F time="2025-10-13T00:22:09Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:09.176826696+00:00 stderr F time="2025-10-13T00:22:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-10-13T00:22:09.220365587+00:00 stderr F time="2025-10-13T00:22:09Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-10-13T00:22:09.220365587+00:00 stderr F time="2025-10-13T00:22:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" ././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000033314115073043233033076 0ustar zuulzuul2025-08-13T20:00:44.607246338+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-08-13T20:00:44.607860915+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="Go OS/Arch: linux/amd64" 2025-08-13T20:00:44.643869742+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc000789040)}" 2025-08-13T20:00:44.643869742+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0007890e0)}" 2025-08-13T20:00:46.045395765+00:00 stderr F time="2025-08-13T20:00:46Z" level=info msg="waiting for informer caches to sync" 2025-08-13T20:00:46.490100176+00:00 stderr F W0813 20:00:46.490036 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:46.490191988+00:00 stderr F E0813 20:00:46.490176 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:46.556919121+00:00 stderr F W0813 20:00:46.550017 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:46.556919121+00:00 stderr F E0813 20:00:46.550134 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 
2025-08-13T20:00:47.714511817+00:00 stderr F W0813 20:00:47.713583 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.714623810+00:00 stderr F E0813 20:00:47.714577 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.909393354+00:00 stderr F W0813 20:00:47.908970 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:47.909393354+00:00 stderr F E0813 20:00:47.909304 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:50.029249460+00:00 stderr F W0813 20:00:50.028537 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:50.029249460+00:00 stderr F E0813 20:00:50.029222 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:50.706434319+00:00 stderr F W0813 20:00:50.705977 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:50.706434319+00:00 stderr F E0813 20:00:50.706347 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:54.091117328+00:00 stderr F W0813 20:00:54.087954 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:54.091117328+00:00 stderr F E0813 20:00:54.088658 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:56.391398369+00:00 stderr F W0813 20:00:56.389256 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:56.391398369+00:00 stderr F E0813 20:00:56.389730 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch 
*v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:06.689662603+00:00 stderr F W0813 20:01:06.687460 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:06.693053539+00:00 stderr F E0813 20:01:06.688866 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:08.723703810+00:00 stderr F W0813 20:01:08.722018 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:08.723703810+00:00 stderr F E0813 20:01:08.722756 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:26.625246002+00:00 stderr F W0813 20:01:26.623011 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:26.633354693+00:00 stderr F E0813 20:01:26.633140 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:28.382170209+00:00 stderr F W0813 20:01:28.381670 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:28.382270301+00:00 stderr F E0813 20:01:28.382213 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:59.134006323+00:00 stderr F W0813 20:01:59.133064 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:59.134006323+00:00 stderr F E0813 20:01:59.133733 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:02:17.309913758+00:00 stderr F W0813 20:02:17.309063 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:02:17.310005721+00:00 stderr F E0813 
20:02:17.309935 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:02:36.746592578+00:00 stderr F W0813 20:02:36.745612 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.746592578+00:00 stderr F E0813 20:02:36.746462 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.876275642+00:00 stderr F W0813 20:02:49.875554 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.876275642+00:00 stderr F E0813 20:02:49.876223 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.230143008+00:00 stderr F W0813 20:03:22.229209 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.230143008+00:00 stderr F E0813 20:03:22.229992 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.101454885+00:00 stderr F W0813 20:03:26.101276 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.101677402+00:00 stderr F E0813 20:03:26.101486 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.219366650+00:00 stderr F W0813 20:03:59.218715 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to 
list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.219366650+00:00 stderr F E0813 20:03:59.219326 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.239806871+00:00 stderr F W0813 20:04:06.239069 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.239806871+00:00 stderr F E0813 20:04:06.239748 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.574106586+00:00 stderr F W0813 20:04:37.561396 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.574975231+00:00 stderr F E0813 20:04:37.574405 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.987897501+00:00 stderr F W0813 20:04:38.985355 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.987897501+00:00 stderr F E0813 20:04:38.985464 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:33.947007920+00:00 stderr F W0813 20:05:33.933063 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:05:33.947007920+00:00 stderr F E0813 20:05:33.942434 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 
2025-08-13T20:05:34.721279152+00:00 stderr F W0813 20:05:34.720916 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:34.721279152+00:00 stderr F E0813 20:05:34.721267 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:08.857838209+00:00 stderr F W0813 20:06:08.856659 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:08.857988443+00:00 stderr F E0813 20:06:08.857853 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:21.536005791+00:00 stderr F W0813 20:06:21.533951 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:21.536005791+00:00 stderr F E0813 20:06:21.534644 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:56.337219614+00:00 stderr F W0813 20:06:56.335478 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:56.337219614+00:00 stderr F E0813 20:06:56.336353 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:07:06.011577326+00:00 stderr F W0813 20:07:06.010522 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:06.011577326+00:00 stderr F E0813 20:07:06.011292 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:08:02.348213355+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="started events processor" 2025-08-13T20:08:02.372559943+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.372559943+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 
2025-08-13T20:08:02.403632764+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.403632764+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:08:02.410222553+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.410222553+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:08:02.427300232+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:02.427300232+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:08:02.429669810+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:02.435338613+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:08:02.435626801+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:08:02.447449820+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.447449820+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:08:02.448483380+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:02.455757238+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:08:02.455757238+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:08:02.461821002+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.461821002+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:08:02.803253941+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:08:02.803253941+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:08:02.807970266+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.807970266+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image 
imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:08:02.872235759+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:08:02.872235759+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:08:02.888209217+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.888209217+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:08:02.896845954+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:02.896845954+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:08:02.901936570+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.901936570+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:08:02.910760503+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.910760503+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:08:02.914398708+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.914398708+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:08:02.921727088+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.921727088+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:08:02.926546616+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.926546616+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:08:02.940195247+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.940195247+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream 
jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:08:02.947607250+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:08:02.947672352+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:08:02.955914108+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.955914108+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:08:02.962386474+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.962386474+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:08:02.969634151+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.969634151+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:08:02.974028547+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.974028547+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:08:02.982579443+00:00 stderr F time="2025-08-13T20:08:02Z" level=warning msg="Image import for imagestream jenkins-agent-base tag scheduled-upgrade generation 3 failed with detailed message Internal error occurred: registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:v4.13.0: Get \"https://registry.redhat.io/v2/ocp-tools-4/jenkins-agent-base-rhel8/manifests/v4.13.0\": unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" 2025-08-13T20:08:03.922922212+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="initiated an imagestreamimport retry for imagestream/tag jenkins-agent-base/scheduled-upgrade" 2025-08-13T20:08:03.935561485+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:08:03.935561485+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:08:03.938521480+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.938521480+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:08:03.942338049+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:08:03.942393101+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:08:03.945844949+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:08:03.945939002+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:08:03.952315515+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:08:03.952315515+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:08:03.960539321+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:08:03.960848920+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:08:03.965657717+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:08:03.965880724+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:08:03.969083446+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:08:03.969203969+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:08:03.973291166+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:08:03.973291166+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:08:03.977096866+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 
2025-08-13T20:08:03.977170668+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:08:03.980565505+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:08:03.980597226+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:08:03.984014454+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.984014454+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:08:03.986409873+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.986738382+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:08:03.989862012+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.989969005+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:08:03.994198356+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.994198356+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:08:03.999411255+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.999511038+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:08:04.009067372+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:04.009667959+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:08:04.013335484+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.013420207+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:08:04.016498465+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.016678530+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 
2025-08-13T20:08:04.020407737+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.020590873+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:08:04.026368188+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.026455471+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:08:04.031284829+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearing error messages from configmap for stream jenkins-agent-base and tag scheduled-upgrade" 2025-08-13T20:08:04.040103402+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:08:04.049922974+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.049922974+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:08:04.050622114+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="CRDUPDATE importerrors false update" 2025-08-13T20:08:04.053712322+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.053762564+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:08:04.057615664+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.057615664+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:08:07.114613122+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:07.114613122+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:07.222850344+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:07.223007889+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:11.779708813+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:11.779708813+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:11.861485008+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 
2025-08-13T20:08:11.861485008+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:12.282581371+00:00 stderr F time="2025-08-13T20:08:12Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:12.282581371+00:00 stderr F time="2025-08-13T20:08:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:09:54.830597497+00:00 stderr F time="2025-08-13T20:09:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:09:54.830597497+00:00 stderr F time="2025-08-13T20:09:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:09:55.128982603+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:09:55.129065715+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:01.566036489+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:01.566036489+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:39.024057289+00:00 stderr F time="2025-08-13T20:10:39Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:39.024302786+00:00 stderr F time="2025-08-13T20:10:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:59.781430907+00:00 stderr F time="2025-08-13T20:10:59Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:59.781430907+00:00 stderr F time="2025-08-13T20:10:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:00.084006942+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:00.084006942+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:00.374907753+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:00.375574102+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:19.467053212+00:00 stderr F time="2025-08-13T20:11:19Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:19.467053212+00:00 stderr F time="2025-08-13T20:11:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:18:02.307339975+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:18:02.307339975+00:00 stderr F time="2025-08-13T20:18:02Z" 
level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:18:02.966757286+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:18:02.966757286+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:18:03.520430957+00:00 stderr F time="2025-08-13T20:18:03Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:18:03.520430957+00:00 stderr F time="2025-08-13T20:18:03Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:18:04.535876206+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:04.535876206+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:18:04.978403133+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:04.978403133+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:18:06.390925831+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:18:06.390925831+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:18:06.407627178+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:18:06.407627178+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:18:06.416865322+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.416865322+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:18:06.446104327+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 
2025-08-13T20:18:06.446104327+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:18:06.452899421+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.452899421+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:18:06.459854979+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:18:06.459854979+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:18:06.469164085+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:18:06.469164085+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:18:06.568908434+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.568908434+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:18:06.574094902+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.574094902+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:18:06.579286820+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.579286820+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:18:06.582616595+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.582616595+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:18:06.587947887+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:18:06.587947887+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:18:06.599131687+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:18:06.599131687+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 
2025-08-13T20:18:06.605735775+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.605735775+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:18:06.612459677+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.612459677+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:18:06.623063180+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-08-13T20:18:06.623063180+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:18:06.634421014+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.634421014+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:18:06.643141153+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:18:06.643141153+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:18:06.654255851+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.654255851+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:18:06.662020883+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.662020883+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:18:06.667241402+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.667241402+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:18:06.685541374+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.685541374+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:18:06.688721875+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift 
already deleted so no worries on clearing tags" 2025-08-13T20:18:06.688721875+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:18:06.693616445+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.693698747+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:18:06.699126952+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:18:06.699126952+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:18:06.702904520+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.702904520+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:18:06.707837741+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.707897353+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:18:06.714696467+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.714696467+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:18:06.726013190+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.726013190+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:18:06.731489846+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:18:06.731489846+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:18:06.736492549+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.736492549+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:18:06.739185426+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.739252558+00:00 stderr F time="2025-08-13T20:18:06Z" level=info 
msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:18:06.744658042+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:06.744658042+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:18:06.748432640+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.748432640+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:18:06.751220370+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.751220370+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:18:06.754631607+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:18:06.754631607+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:18:06.758684183+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:18:06.758684183+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:18:06.767243277+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:18:06.767243277+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:18:06.776049099+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.776049099+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:18:06.779757575+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:18:06.779757575+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:18:06.783944384+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:18:06.783944384+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:18:06.787472455+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 
2025-08-13T20:18:06.787543947+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:19:54.880115698+00:00 stderr F time="2025-08-13T20:19:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:19:54.880115698+00:00 stderr F time="2025-08-13T20:19:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:19:55.141373815+00:00 stderr F time="2025-08-13T20:19:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:19:55.141494039+00:00 stderr F time="2025-08-13T20:19:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:22:18.915201214+00:00 stderr F time="2025-08-13T20:22:18Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:22:18.915201214+00:00 stderr F time="2025-08-13T20:22:18Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:28:02.308714006+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.308714006+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:28:02.319453535+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.319453535+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:28:02.323832900+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:28:02.323923243+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:28:02.329429961+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.329429961+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:28:02.333298503+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.333298503+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:28:02.337924946+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.337924946+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream 
jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:28:02.341408756+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.341408756+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:28:02.346147092+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.346147092+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:28:02.350829617+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.350829617+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:28:02.355305725+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.355305725+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:28:02.360209206+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:28:02.360209206+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:28:02.364812158+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.364812158+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:28:02.371478110+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.371478110+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:28:02.373967922+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.373967922+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:28:02.378446610+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.378446610+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:28:02.381110237+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no 
worries on clearing tags" 2025-08-13T20:28:02.381110237+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:28:02.384421912+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:28:02.384421912+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:28:02.386399489+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.386399489+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:28:02.389844498+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.389844498+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:28:02.392765632+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.392765632+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:28:02.395645355+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:28:02.395645355+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:28:02.403048348+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:28:02.403142020+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:28:02.411039457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.411039457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:28:02.415214457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.415214457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:28:02.419758268+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:28:02.419758268+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:28:02.424949507+00:00 stderr F time="2025-08-13T20:28:02Z" 
level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:28:02.424949507+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:28:02.430269180+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.430269180+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:28:02.440613397+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.440613397+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:28:02.444632473+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:28:02.444632473+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:28:02.448248957+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.448248957+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:28:02.454270920+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:28:02.454270920+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:28:02.463883476+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:28:02.463883476+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:28:02.467411198+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.467411198+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:28:02.471754743+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:28:02.471754743+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:28:02.477402255+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:28:02.477402255+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:28:02.482131731+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream 
ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.482131731+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:28:02.488397691+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:28:02.488397691+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:28:02.491967054+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.492082687+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:28:02.495688911+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.495688911+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:28:02.498623645+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:28:02.498623645+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:28:02.501133997+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:28:02.501185759+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:28:02.503971799+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.503971799+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:28:02.506392838+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.506446380+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:28:02.509020194+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.509020194+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:28:02.511605938+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:28:02.511653530+00:00 stderr F 
time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:28:02.514507372+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-08-13T20:28:02.514507372+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:28:02.521752660+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.522160162+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:28:02.526249639+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:28:02.526249639+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:28:02.529900994+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:28:02.529900994+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:29:54.885451542+00:00 stderr F time="2025-08-13T20:29:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:29:54.885451542+00:00 stderr F time="2025-08-13T20:29:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:29:55.027747743+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:29:55.030641676+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:29:55.147048872+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:29:55.147048872+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:38:02.309098888+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.309269413+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:38:02.326326495+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.326326495+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:38:02.338454975+00:00 stderr 
F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.338454975+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:38:02.342211403+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.342211403+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:38:02.346724143+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-08-13T20:38:02.346865967+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:38:02.351160871+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:38:02.351160871+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:38:02.354514998+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.354514998+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:38:02.358551754+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.358598555+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:38:02.362463087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.362463087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:38:02.366733010+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.366863374+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:38:02.370736005+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:38:02.370736005+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:38:02.375228885+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so 
no worries on clearing tags" 2025-08-13T20:38:02.375308407+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:38:02.379249151+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.379295622+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:38:02.383374900+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.383423071+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:38:02.385961164+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.386057937+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:38:02.388712063+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.388831257+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:38:02.391967277+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.391967277+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:38:02.399498344+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.399547966+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:38:02.403921842+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.403990024+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:38:02.407048532+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.407236408+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:38:02.409688868+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.409733350+00:00 stderr 
F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:38:02.413591791+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:38:02.413642652+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:38:02.416944857+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:38:02.416994169+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:38:02.419545742+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:38:02.419632375+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:38:02.422408055+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.422457466+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:38:02.425163324+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.425163324+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:38:02.427469901+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.427516402+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:38:02.430865569+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:38:02.430920290+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:38:02.436699087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:38:02.436699087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:38:02.439641982+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.439641982+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:38:02.442081142+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 
2025-08-13T20:38:02.442081142+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:38:02.446396566+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.446396566+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:38:02.455233461+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:38:02.455233461+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:38:02.458619829+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.458619829+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:38:02.463981274+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.463981274+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:38:02.469686288+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:38:02.469686288+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:38:02.475182526+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.475182526+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:38:02.479040148+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.479040148+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:38:02.489055846+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:38:02.489055846+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:38:02.491998151+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:38:02.492242748+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:38:02.496285355+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no 
worries on clearing tags" 2025-08-13T20:38:02.496285355+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:38:02.499108776+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.499213759+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:38:02.502560806+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:38:02.502649578+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:38:02.507321043+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.507537159+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:38:02.512683188+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.513183312+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:38:02.518338191+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:38:02.518338191+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:38:02.522257654+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:38:02.522257654+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:38:02.528893235+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.528893235+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:38:02.533061065+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.533061065+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:39:54.858926167+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:54.858926167+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:39:54.929035027+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="no global imagestream 
configuration will block imagestream creation using " 2025-08-13T20:39:54.929035027+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:39:55.130956989+00:00 stderr F time="2025-08-13T20:39:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:55.130956989+00:00 stderr F time="2025-08-13T20:39:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:20.405267752+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:20.405267752+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:20.467397943+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:20.467397943+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:21.394447020+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:21.396885090+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:21.465459317+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:21.465459317+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:22.370705595+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:22.371446596+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:22.433963449+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:22.433963449+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:23.380587010+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:23.380587010+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:23.461112032+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:23.461112032+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:24.381655721+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 
2025-08-13T20:42:24.381655721+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:24.458359573+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:24.458471076+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:25.417401671+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:25.417401671+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:25.505537472+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:25.505537472+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:26.408426263+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:26.408524316+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:26.480985995+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:26.480985995+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:27.472023757+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:27.472023757+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:27.612048704+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:27.614752602+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:28.392512125+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:28.392512125+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:28.468054283+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:28.468054283+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:29.407675142+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:29.407719013+00:00 stderr F time="2025-08-13T20:42:29Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:29.479289737+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:29.479289737+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:30.404129860+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:30.404129860+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:30.506750169+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:30.506750169+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:31.393037481+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:31.393037481+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:31.456042957+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:31.456042957+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:32.390306402+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:32.390306402+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:32.467966580+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:32.467966580+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:33.431971123+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:33.431971123+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:33.545267030+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:33.545267030+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:34.405879691+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:34.405879691+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and 
version correct" 2025-08-13T20:42:34.501982352+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:34.501982352+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:35.404360718+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:35.405447990+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:35.490723988+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:35.490723988+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:41.328870733+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="shutting down events processor" ././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000007432515073043233033104 0ustar zuulzuul2025-08-13T19:59:23.745759522+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-08-13T19:59:23.745759522+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="Go OS/Arch: linux/amd64" 2025-08-13T19:59:25.650871697+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc0005c8f00)}" 2025-08-13T19:59:25.650871697+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0005c8fa0)}" 2025-08-13T19:59:31.855068557+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="waiting for informer caches to sync" 2025-08-13T19:59:34.408955148+00:00 stderr F W0813 19:59:34.375704 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:34.408955148+00:00 stderr F E0813 19:59:34.378903 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:34.408955148+00:00 stderr F W0813 19:59:32.402524 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:34.408955148+00:00 stderr F E0813 19:59:34.378961 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the 
request (get templates.template.openshift.io) 2025-08-13T19:59:35.874613566+00:00 stderr F W0813 19:59:35.873930 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:35.874724959+00:00 stderr F E0813 19:59:35.874709 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:35.874972206+00:00 stderr F W0813 19:59:35.874946 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:35.875028847+00:00 stderr F E0813 19:59:35.875010 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:38.636207756+00:00 stderr F W0813 19:59:38.635498 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:38.636318039+00:00 stderr F E0813 19:59:38.636299 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:38.659493230+00:00 stderr F W0813 19:59:38.659430 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:38.659730837+00:00 stderr F E0813 19:59:38.659714 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F W0813 19:59:44.386596 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F W0813 19:59:44.387299 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F E0813 19:59:44.387307 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:44.388028852+00:00 stderr F E0813 19:59:44.387323 7 reflector.go:147] 
github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F W0813 19:59:56.214886 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F E0813 19:59:56.215543 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F W0813 19:59:56.215600 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T19:59:56.216972453+00:00 stderr F E0813 19:59:56.215622 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:20.485015857+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="started events processor" 2025-08-13T20:00:20.695061156+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:20.695061156+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:00:20.975764440+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:00:20.975764440+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:00:21.317402651+00:00 stderr F time="2025-08-13T20:00:21Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:21.317402651+00:00 stderr F time="2025-08-13T20:00:21Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:00:22.029163475+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:22.029163475+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:00:22.088482377+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:22.121173649+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:22.363082328+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already 
deleted so no worries on clearing tags" 2025-08-13T20:00:22.363082328+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:00:23.156969546+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:23.156969546+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:00:23.197957675+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:00:23.197957675+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:00:23.444171106+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:23.795935686+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:00:23.795935686+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:00:24.208376076+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:00:24.208454759+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:00:24.576855803+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:00:24.576855803+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:00:24.752157662+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:24.752157662+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:00:24.953569996+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:24.953569996+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:00:25.068954766+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: 
stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.076592703+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:00:25.222950897+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.222950897+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:00:25.293323493+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.293323493+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:00:25.355468586+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:25.355468586+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:00:25.373568142+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:26.063921706+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:26.092546152+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:00:26.092546152+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:00:26.098371999+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:26.098420160+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:00:26.104509704+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:26.104509704+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image 
imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:00:26.108077015+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:26.116665780+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:00:26.116665780+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:00:26.132354727+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:26.132354727+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:00:26.137939257+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:00:26.140329065+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:00:26.989988312+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:00:26.990079965+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:00:27.020001668+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:00:27.020001668+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:00:27.047936015+00:00 stderr F time="2025-08-13T20:00:27Z" level=warning msg="Image import for imagestream jenkins-agent-base tag scheduled-upgrade generation 3 failed with detailed message Internal error occurred: registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:v4.13.0: Get \"https://registry.redhat.io/v2/ocp-tools-4/jenkins-agent-base-rhel8/manifests/v4.13.0\": unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" 2025-08-13T20:00:27.227916267+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:27.457083101+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="initiated an imagestreamimport retry for imagestream/tag jenkins-agent-base/scheduled-upgrade" 2025-08-13T20:00:27.488319112+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:00:27.488528778+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:00:27.517631908+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:00:27.535678842+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:27.535678842+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:00:27.567528931+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:27.567528931+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:00:27.611589907+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:00:27.611589907+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:00:27.912992651+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:00:27.916887692+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:00:27.931633623+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:00:27.931633623+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:00:27.938067556+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:00:27.938067556+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:00:28.025671784+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:00:28.026030494+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:00:28.038732106+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: 
stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.038896461+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:00:28.046319203+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:00:28.046395345+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:00:28.086291793+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:28.086672094+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:00:28.086710605+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:00:28.135997070+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:00:28.135997070+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:00:28.137027379+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:28.183570867+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.183570867+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:00:28.429676585+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.429676585+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:00:28.449937642+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.449937642+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:00:28.499246578+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.499246578+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:00:28.513314029+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.513314029+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 
2025-08-13T20:00:28.523039677+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.523039677+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:00:28.541055630+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:00:28.541055630+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:00:28.549866492+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.549866492+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:00:28.560148035+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.560148035+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:00:28.566894357+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:00:28.566894357+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:00:28.570909852+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:28.583921022+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:29.732106741+00:00 stderr F time="2025-08-13T20:00:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:29.732106741+00:00 stderr F time="2025-08-13T20:00:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:30.031907490+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:30.031907490+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:30.300890139+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:30.300890139+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:30.833731993+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:30.833894197+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress 
false, and version correct" 2025-08-13T20:00:31.044594205+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:31.044757080+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:31.258115964+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:31.258263938+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:31.592207790+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:31.592207790+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:33.490730614+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:00:33.491354072+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:00:33.777940123+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="shutting down events processor" ././@LongLink0000644000000000000000000000025500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043233033054 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000375715073043233033072 0ustar zuulzuul2025-08-13T20:01:08.019466651+00:00 stderr F I0813 20:01:08.013488 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc0005aa140 max-eligible-revision:0xc000387ea0 protected-revisions:0xc000387f40 resource-dir:0xc0005aa000 static-pod-name:0xc0005aa0a0 v:0xc0005aa820] [0xc0005aa820 0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa140 0xc0005aa0a0] [] map[cert-dir:0xc0005aa140 help:0xc0005aabe0 log-flush-frequency:0xc0005aa780 max-eligible-revision:0xc000387ea0 protected-revisions:0xc000387f40 resource-dir:0xc0005aa000 static-pod-name:0xc0005aa0a0 v:0xc0005aa820 vmodule:0xc0005aa8c0] [0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa0a0 0xc0005aa140 0xc0005aa780 0xc0005aa820 0xc0005aa8c0 0xc0005aabe0] [0xc0005aa140 0xc0005aabe0 0xc0005aa780 
0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa0a0 0xc0005aa820 0xc0005aa8c0] map[104:0xc0005aabe0 118:0xc0005aa820] [] -1 0 0xc000333a10 true 0x73b100 []} 2025-08-13T20:01:08.051902776+00:00 stderr F I0813 20:01:08.050031 1 cmd.go:41] (*prune.PruneOptions)(0xc000488eb0)({ 2025-08-13T20:01:08.051902776+00:00 stderr F MaxEligibleRevision: (int) 10, 2025-08-13T20:01:08.051902776+00:00 stderr F ProtectedRevisions: ([]int) (len=7 cap=7) { 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 4, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 5, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 6, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 7, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 8, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 9, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 10 2025-08-13T20:01:08.051902776+00:00 stderr F }, 2025-08-13T20:01:08.051902776+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.051902776+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:01:08.051902776+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:01:08.051902776+00:00 stderr F }) ././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043232033053 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000357515073043232033067 0ustar zuulzuul2025-08-13T19:59:23.961605895+00:00 stderr F I0813 19:59:23.946532 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc000543180 max-eligible-revision:0xc000542f00 protected-revisions:0xc000542fa0 resource-dir:0xc000543040 static-pod-name:0xc0005430e0 v:0xc000543860] [0xc000543860 0xc000542f00 0xc000542fa0 0xc000543040 0xc000543180 0xc0005430e0] [] map[cert-dir:0xc000543180 help:0xc000543c20 log-flush-frequency:0xc0005437c0 max-eligible-revision:0xc000542f00 protected-revisions:0xc000542fa0 resource-dir:0xc000543040 static-pod-name:0xc0005430e0 v:0xc000543860 vmodule:0xc000543900] [0xc000542f00 0xc000542fa0 0xc000543040 0xc0005430e0 0xc000543180 0xc0005437c0 0xc000543860 0xc000543900 0xc000543c20] [0xc000543180 0xc000543c20 0xc0005437c0 0xc000542f00 0xc000542fa0 0xc000543040 0xc0005430e0 0xc000543860 0xc000543900] map[104:0xc000543c20 118:0xc000543860] [] -1 0 0xc000548120 true 0x73b100 []} 2025-08-13T19:59:23.962267703+00:00 stderr F I0813 19:59:23.962243 1 cmd.go:41] (*prune.PruneOptions)(0xc0004fe2d0)({ 
2025-08-13T19:59:23.962267703+00:00 stderr F MaxEligibleRevision: (int) 8, 2025-08-13T19:59:23.962267703+00:00 stderr F ProtectedRevisions: ([]int) (len=5 cap=5) { 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 4, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 5, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 6, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 7, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 8 2025-08-13T19:59:23.962267703+00:00 stderr F }, 2025-08-13T19:59:23.962267703+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T19:59:23.962267703+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T19:59:23.962267703+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T19:59:23.962267703+00:00 stderr F }) ././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000755000175000017500000000000015073043232033037 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000755000175000017500000000000015073043232033037 5ustar zuulzuul././@LongLink0000644000000000000000000000030300000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000644000175000017500000000352315073043232033044 0ustar zuulzuul2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.761771 1 migrator.go:18] FLAG: --add_dir_header="false" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762301 1 migrator.go:18] FLAG: --alsologtostderr="true" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762307 1 migrator.go:18] FLAG: --kube-api-burst="1000" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762313 1 migrator.go:18] FLAG: --kube-api-qps="40" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762319 1 migrator.go:18] FLAG: --kubeconfig="" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762341 1 migrator.go:18] FLAG: --log_backtrace_at=":0" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762346 1 migrator.go:18] FLAG: --log_dir="" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762357 1 migrator.go:18] FLAG: --log_file="" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762360 1 migrator.go:18] FLAG: --log_file_max_size="1800" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762363 1 migrator.go:18] FLAG: --logtostderr="true" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762366 1 migrator.go:18] FLAG: --one_output="false" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762369 1 migrator.go:18] FLAG: --skip_headers="false" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 
00:15:00.762372 1 migrator.go:18] FLAG: --skip_log_headers="false" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762375 1 migrator.go:18] FLAG: --stderrthreshold="2" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762378 1 migrator.go:18] FLAG: --v="2" 2025-10-13T00:15:00.762436701+00:00 stderr F I1013 00:15:00.762381 1 migrator.go:18] FLAG: --vmodule="" ././@LongLink0000644000000000000000000000030300000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000644000175000017500000000376615073043232033055 0ustar zuulzuul2025-08-13T19:59:09.272236653+00:00 stderr F I0813 19:59:09.269668 1 migrator.go:18] FLAG: --add_dir_header="false" 2025-08-13T19:59:09.315418924+00:00 stderr F I0813 19:59:09.315376 1 migrator.go:18] FLAG: --alsologtostderr="true" 2025-08-13T19:59:09.315482885+00:00 stderr F I0813 19:59:09.315453 1 migrator.go:18] FLAG: --kube-api-burst="1000" 2025-08-13T19:59:09.315541857+00:00 stderr F I0813 19:59:09.315512 1 migrator.go:18] FLAG: --kube-api-qps="40" 2025-08-13T19:59:09.315594069+00:00 stderr F I0813 19:59:09.315571 1 migrator.go:18] FLAG: --kubeconfig="" 2025-08-13T19:59:09.315638740+00:00 stderr F I0813 19:59:09.315617 1 migrator.go:18] FLAG: --log_backtrace_at=":0" 2025-08-13T19:59:09.315681631+00:00 stderr F I0813 19:59:09.315659 1 migrator.go:18] FLAG: --log_dir="" 2025-08-13T19:59:09.315724812+00:00 stderr F I0813 19:59:09.315710 1 migrator.go:18] FLAG: --log_file="" 2025-08-13T19:59:09.315765133+00:00 stderr F I0813 19:59:09.315747 1 migrator.go:18] FLAG: --log_file_max_size="1800" 2025-08-13T19:59:09.315911708+00:00 stderr F I0813 19:59:09.315890 1 migrator.go:18] FLAG: --logtostderr="true" 2025-08-13T19:59:09.316095953+00:00 stderr F I0813 19:59:09.315951 1 migrator.go:18] FLAG: --one_output="false" 2025-08-13T19:59:09.316153515+00:00 stderr F I0813 19:59:09.316130 1 migrator.go:18] FLAG: --skip_headers="false" 2025-08-13T19:59:09.316200426+00:00 stderr F I0813 19:59:09.316184 1 migrator.go:18] FLAG: --skip_log_headers="false" 2025-08-13T19:59:09.316252307+00:00 stderr F I0813 19:59:09.316234 1 migrator.go:18] FLAG: --stderrthreshold="2" 2025-08-13T19:59:09.316293679+00:00 stderr F I0813 19:59:09.316280 1 migrator.go:18] FLAG: --v="2" 2025-08-13T19:59:09.316350440+00:00 stderr F I0813 19:59:09.316311 1 migrator.go:18] FLAG: --vmodule="" 2025-08-13T20:42:36.480632977+00:00 stderr F I0813 20:42:36.473590 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000031000000000000011575 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000013175715073043233033017 0ustar zuulzuul2025-08-13T19:50:48.104229662+00:00 stderr F I0813 19:50:48.084890 1 start.go:38] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:51:18.557700290+00:00 stderr F W0813 19:51:18.557095 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.558076260+00:00 stderr F I0813 19:51:18.558043 1 trace.go:236] Trace[912093740]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.535) (total time: 30022ms): 2025-08-13T19:51:18.558076260+00:00 stderr F Trace[912093740]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30021ms (19:51:18.556) 2025-08-13T19:51:18.558076260+00:00 stderr F Trace[912093740]: [30.022734787s] [30.022734787s] END 2025-08-13T19:51:18.558524563+00:00 stderr F E0813 19:51:18.558463 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.569047554+00:00 stderr F W0813 19:51:18.568932 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.569072735+00:00 stderr F I0813 19:51:18.569052 1 trace.go:236] Trace[2105199760]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.532) (total time: 30036ms): 2025-08-13T19:51:18.569072735+00:00 stderr F Trace[2105199760]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30036ms (19:51:18.568) 2025-08-13T19:51:18.569072735+00:00 stderr F Trace[2105199760]: [30.036876891s] [30.036876891s] END 2025-08-13T19:51:18.569072735+00:00 stderr F E0813 19:51:18.569067 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.572313797+00:00 stderr F W0813 19:51:18.572229 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.572412030+00:00 stderr F I0813 19:51:18.572388 1 trace.go:236] Trace[153696363]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.532) (total time: 30040ms): 2025-08-13T19:51:18.572412030+00:00 stderr F Trace[153696363]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30040ms (19:51:18.572) 2025-08-13T19:51:18.572412030+00:00 stderr F Trace[153696363]: [30.04035192s] [30.04035192s] END 2025-08-13T19:51:18.572482572+00:00 stderr F E0813 19:51:18.572458 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.663598296+00:00 stderr F W0813 19:51:18.663514 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.663703289+00:00 stderr F I0813 19:51:18.663686 1 trace.go:236] Trace[1936765059]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:50:48.640) (total time: 30023ms): 2025-08-13T19:51:18.663703289+00:00 stderr F Trace[1936765059]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30023ms (19:51:18.663) 2025-08-13T19:51:18.663703289+00:00 stderr F Trace[1936765059]: [30.02355184s] [30.02355184s] END 2025-08-13T19:51:18.663749820+00:00 stderr F E0813 19:51:18.663735 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.504049251+00:00 stderr F W0813 19:51:49.503980 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.504303129+00:00 stderr F I0813 19:51:49.504282 1 trace.go:236] Trace[1149040783]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:51:19.502) (total time: 
30001ms): 2025-08-13T19:51:49.504303129+00:00 stderr F Trace[1149040783]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:51:49.503) 2025-08-13T19:51:49.504303129+00:00 stderr F Trace[1149040783]: [30.001813737s] [30.001813737s] END 2025-08-13T19:51:49.504357790+00:00 stderr F E0813 19:51:49.504343 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.604755611+00:00 stderr F W0813 19:51:49.604696 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.604995868+00:00 stderr F I0813 19:51:49.604972 1 trace.go:236] Trace[933594454]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.603) (total time: 30001ms): 2025-08-13T19:51:49.604995868+00:00 stderr F Trace[933594454]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:51:49.604) 2025-08-13T19:51:49.604995868+00:00 stderr F Trace[933594454]: [30.001502258s] [30.001502258s] END 2025-08-13T19:51:49.605047339+00:00 stderr F E0813 19:51:49.605033 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.608579750+00:00 stderr F W0813 19:51:49.608494 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.608737084+00:00 stderr F I0813 19:51:49.608587 1 trace.go:236] Trace[802142435]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.607) (total time: 30000ms): 2025-08-13T19:51:49.608737084+00:00 stderr F Trace[802142435]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (19:51:49.608) 2025-08-13T19:51:49.608737084+00:00 stderr F Trace[802142435]: [30.000833509s] [30.000833509s] END 2025-08-13T19:51:49.608737084+00:00 stderr F E0813 19:51:49.608606 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 
10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.955594646+00:00 stderr F W0813 19:51:49.955445 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.956119431+00:00 stderr F I0813 19:51:49.955722 1 trace.go:236] Trace[375825009]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.952) (total time: 30003ms): 2025-08-13T19:51:49.956119431+00:00 stderr F Trace[375825009]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:51:49.955) 2025-08-13T19:51:49.956119431+00:00 stderr F Trace[375825009]: [30.003125253s] [30.003125253s] END 2025-08-13T19:51:49.956178603+00:00 stderr F E0813 19:51:49.956178 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:07.688377796+00:00 stderr F W0813 19:52:07.688204 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:07.688377796+00:00 stderr F I0813 19:52:07.688318 1 trace.go:236] Trace[1219952241]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:52.075) (total time: 15613ms): 2025-08-13T19:52:07.688377796+00:00 stderr F Trace[1219952241]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 15613ms (19:52:07.688) 2025-08-13T19:52:07.688377796+00:00 stderr F Trace[1219952241]: [15.613260573s] [15.613260573s] END 2025-08-13T19:52:07.688473449+00:00 stderr F E0813 19:52:07.688372 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:08.200402353+00:00 stderr F W0813 19:52:08.200209 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:08.200402353+00:00 stderr F I0813 19:52:08.200319 1 trace.go:236] Trace[66411733]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:51:52.591) (total time: 15608ms): 2025-08-13T19:52:08.200402353+00:00 stderr 
F Trace[66411733]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 15608ms (19:52:08.200) 2025-08-13T19:52:08.200402353+00:00 stderr F Trace[66411733]: [15.608347813s] [15.608347813s] END 2025-08-13T19:52:08.200402353+00:00 stderr F E0813 19:52:08.200340 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:21.221752251+00:00 stderr F W0813 19:52:21.221643 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:21.221752251+00:00 stderr F I0813 19:52:21.221724 1 trace.go:236] Trace[1453919014]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:51.217) (total time: 30004ms): 2025-08-13T19:52:21.221752251+00:00 stderr F Trace[1453919014]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30004ms (19:52:21.221) 2025-08-13T19:52:21.221752251+00:00 stderr F Trace[1453919014]: [30.004688526s] [30.004688526s] END 2025-08-13T19:52:21.221908706+00:00 stderr F E0813 19:52:21.221764 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:22.252234010+00:00 stderr F W0813 19:52:22.252150 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:22.252393324+00:00 stderr F I0813 19:52:22.252369 1 trace.go:236] Trace[1556460362]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:52.249) (total time: 30002ms): 2025-08-13T19:52:22.252393324+00:00 stderr F Trace[1556460362]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:52:22.252) 2025-08-13T19:52:22.252393324+00:00 stderr F Trace[1556460362]: [30.002882453s] [30.002882453s] END 2025-08-13T19:52:22.252472317+00:00 stderr F E0813 19:52:22.252453 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 
2025-08-13T19:52:42.086000804+00:00 stderr F W0813 19:52:42.084352 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.086000804+00:00 stderr F I0813 19:52:42.084477 1 trace.go:236] Trace[1272804760]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:12.081) (total time: 30003ms): 2025-08-13T19:52:42.086000804+00:00 stderr F Trace[1272804760]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (19:52:42.084) 2025-08-13T19:52:42.086000804+00:00 stderr F Trace[1272804760]: [30.003320052s] [30.003320052s] END 2025-08-13T19:52:42.086000804+00:00 stderr F E0813 19:52:42.084495 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.671043855+00:00 stderr F W0813 19:52:42.670931 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.671043855+00:00 stderr F I0813 19:52:42.671022 1 trace.go:236] Trace[1619673043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:52:12.669) (total time: 30001ms): 2025-08-13T19:52:42.671043855+00:00 stderr F Trace[1619673043]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:42.670) 2025-08-13T19:52:42.671043855+00:00 stderr F Trace[1619673043]: [30.001258406s] [30.001258406s] END 2025-08-13T19:52:42.671085227+00:00 stderr F E0813 19:52:42.671038 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:55.955644476+00:00 stderr F W0813 19:52:55.955445 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:55.955708578+00:00 stderr F I0813 19:52:55.955686 1 trace.go:236] Trace[1977466941]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:25.954) (total time: 30001ms): 2025-08-13T19:52:55.955708578+00:00 stderr F Trace[1977466941]: ---"Objects listed" error:Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:55.955) 2025-08-13T19:52:55.955708578+00:00 stderr F Trace[1977466941]: [30.001497346s] [30.001497346s] END 2025-08-13T19:52:55.955838651+00:00 stderr F E0813 19:52:55.955727 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:57.480925777+00:00 stderr F W0813 19:52:57.480755 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:57.480925777+00:00 stderr F I0813 19:52:57.480906 1 trace.go:236] Trace[1866283227]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:27.479) (total time: 30001ms): 2025-08-13T19:52:57.480925777+00:00 stderr F Trace[1866283227]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:57.480) 2025-08-13T19:52:57.480925777+00:00 stderr F Trace[1866283227]: [30.00181764s] [30.00181764s] END 2025-08-13T19:52:57.481038301+00:00 stderr F E0813 19:52:57.480924 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:20.367755117+00:00 stderr F W0813 19:53:20.367425 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:20.368044136+00:00 stderr F I0813 19:53:20.368019 1 trace.go:236] Trace[1083300361]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:50.364) (total time: 30003ms): 2025-08-13T19:53:20.368044136+00:00 stderr F Trace[1083300361]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:53:20.367) 2025-08-13T19:53:20.368044136+00:00 stderr F Trace[1083300361]: [30.003506528s] [30.003506528s] END 2025-08-13T19:53:20.368124388+00:00 stderr F E0813 19:53:20.368101 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 
2025-08-13T19:53:21.438760162+00:00 stderr F W0813 19:53:21.438626 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:21.438955457+00:00 stderr F I0813 19:53:21.438756 1 trace.go:236] Trace[702819263]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:52:51.436) (total time: 30001ms): 2025-08-13T19:53:21.438955457+00:00 stderr F Trace[702819263]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:53:21.438) 2025-08-13T19:53:21.438955457+00:00 stderr F Trace[702819263]: [30.001816914s] [30.001816914s] END 2025-08-13T19:53:21.438955457+00:00 stderr F E0813 19:53:21.438873 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:34.333573367+00:00 stderr F W0813 19:53:34.333471 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:34.333968239+00:00 stderr F I0813 19:53:34.333752 1 trace.go:236] Trace[623327545]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:04.330) (total time: 30002ms): 2025-08-13T19:53:34.333968239+00:00 stderr F Trace[623327545]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:53:34.333) 2025-08-13T19:53:34.333968239+00:00 stderr F Trace[623327545]: [30.002761313s] [30.002761313s] END 2025-08-13T19:53:34.334038991+00:00 stderr F E0813 19:53:34.334010 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:38.954408297+00:00 stderr F W0813 19:53:38.954273 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:38.954472968+00:00 stderr F I0813 19:53:38.954398 1 trace.go:236] Trace[32431997]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:08.953) (total time: 30001ms): 2025-08-13T19:53:38.954472968+00:00 stderr F Trace[32431997]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 
10.217.4.1:443: i/o timeout 30001ms (19:53:38.954) 2025-08-13T19:53:38.954472968+00:00 stderr F Trace[32431997]: [30.001223625s] [30.001223625s] END 2025-08-13T19:53:38.954472968+00:00 stderr F E0813 19:53:38.954423 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:04.670979405+00:00 stderr F W0813 19:54:04.670474 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:04.670979405+00:00 stderr F I0813 19:54:04.670582 1 trace.go:236] Trace[1588578066]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:34.668) (total time: 30001ms): 2025-08-13T19:54:04.670979405+00:00 stderr F Trace[1588578066]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:54:04.670) 2025-08-13T19:54:04.670979405+00:00 stderr F Trace[1588578066]: [30.001681551s] [30.001681551s] END 2025-08-13T19:54:04.670979405+00:00 stderr F E0813 19:54:04.670604 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:06.844605739+00:00 stderr F W0813 19:54:06.844423 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:06.844650260+00:00 stderr F I0813 19:54:06.844638 1 trace.go:236] Trace[737086321]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:53:36.843) (total time: 30001ms): 2025-08-13T19:54:06.844650260+00:00 stderr F Trace[737086321]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:54:06.844) 2025-08-13T19:54:06.844650260+00:00 stderr F Trace[737086321]: [30.001231928s] [30.001231928s] END 2025-08-13T19:54:06.844730882+00:00 stderr F E0813 19:54:06.844665 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:09.417087090+00:00 stderr F W0813 19:54:09.416716 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:54:09.417087090+00:00 stderr F E0813 19:54:09.416971 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:54:22.292448382+00:00 stderr F W0813 19:54:22.292119 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:22.292448382+00:00 stderr F I0813 19:54:22.292366 1 trace.go:236] Trace[1159168860]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:52.288) (total time: 30003ms): 2025-08-13T19:54:22.292448382+00:00 stderr F Trace[1159168860]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (19:54:22.292) 2025-08-13T19:54:22.292448382+00:00 stderr F Trace[1159168860]: [30.003899291s] [30.003899291s] END 2025-08-13T19:54:22.292554615+00:00 stderr F E0813 19:54:22.292436 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:02.279342700+00:00 stderr F W0813 19:55:02.279209 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:02.279342700+00:00 stderr F I0813 19:55:02.279323 1 trace.go:236] Trace[316618279]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:54:32.277) (total time: 30002ms): 2025-08-13T19:55:02.279342700+00:00 stderr F Trace[316618279]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:02.279) 2025-08-13T19:55:02.279342700+00:00 stderr F Trace[316618279]: [30.002099667s] [30.002099667s] END 2025-08-13T19:55:02.279416412+00:00 stderr F E0813 19:55:02.279339 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:04.185069915+00:00 stderr F W0813 19:55:04.184917 1 reflector.go:539] 
k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:04.185069915+00:00 stderr F I0813 19:55:04.185050 1 trace.go:236] Trace[151776207]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:54:34.182) (total time: 30002ms): 2025-08-13T19:55:04.185069915+00:00 stderr F Trace[151776207]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:04.184) 2025-08-13T19:55:04.185069915+00:00 stderr F Trace[151776207]: [30.00292695s] [30.00292695s] END 2025-08-13T19:55:04.185110916+00:00 stderr F E0813 19:55:04.185068 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:23.981912405+00:00 stderr F W0813 19:55:23.981607 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:23.982536453+00:00 stderr F I0813 19:55:23.982465 1 trace.go:236] Trace[1215179368]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:54:53.979) (total time: 30003ms): 2025-08-13T19:55:23.982536453+00:00 stderr F Trace[1215179368]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:23.981) 2025-08-13T19:55:23.982536453+00:00 stderr F Trace[1215179368]: [30.003092165s] [30.003092165s] END 2025-08-13T19:55:23.982720378+00:00 stderr F E0813 19:55:23.982695 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:30.972367921+00:00 stderr F W0813 19:55:30.972184 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:30.972367921+00:00 stderr F I0813 19:55:30.972342 1 trace.go:236] Trace[1271936866]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:55:00.971) (total time: 30001ms): 2025-08-13T19:55:30.972367921+00:00 stderr F Trace[1271936866]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (19:55:30.972) 2025-08-13T19:55:30.972367921+00:00 stderr F 
Trace[1271936866]: [30.001039224s] [30.001039224s] END 2025-08-13T19:55:30.972367921+00:00 stderr F E0813 19:55:30.972360 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:51.572445390+00:00 stderr F W0813 19:55:51.571947 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:55:51.572445390+00:00 stderr F E0813 19:55:51.572021 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:56:16.186532855+00:00 stderr F W0813 19:56:16.185956 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:16.186532855+00:00 stderr F I0813 19:56:16.186062 1 trace.go:236] Trace[1037743998]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:55:46.184) (total time: 30001ms): 2025-08-13T19:56:16.186532855+00:00 stderr F Trace[1037743998]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:16.185) 2025-08-13T19:56:16.186532855+00:00 stderr F Trace[1037743998]: [30.001968233s] [30.001968233s] END 2025-08-13T19:56:16.186532855+00:00 stderr F E0813 19:56:16.186078 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:51.014032237+00:00 stderr F W0813 19:56:51.013906 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:51.014345846+00:00 stderr F I0813 19:56:51.014263 1 trace.go:236] Trace[324623291]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:21.012) (total time: 30002ms): 2025-08-13T19:56:51.014345846+00:00 stderr F Trace[324623291]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:51.013) 2025-08-13T19:56:51.014345846+00:00 stderr F Trace[324623291]: [30.002111695s] 
[30.002111695s] END 2025-08-13T19:56:51.014450029+00:00 stderr F E0813 19:56:51.014402 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:52.162970465+00:00 stderr F W0813 19:56:52.162515 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:52.162970465+00:00 stderr F I0813 19:56:52.162611 1 trace.go:236] Trace[633606038]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:22.160) (total time: 30002ms): 2025-08-13T19:56:52.162970465+00:00 stderr F Trace[633606038]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:56:52.162) 2025-08-13T19:56:52.162970465+00:00 stderr F Trace[633606038]: [30.002157687s] [30.002157687s] END 2025-08-13T19:56:52.162970465+00:00 stderr F E0813 19:56:52.162625 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:58.950133678+00:00 stderr F W0813 19:56:58.949498 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:58.950133678+00:00 stderr F I0813 19:56:58.949878 1 trace.go:236] Trace[1460267524]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:28.947) (total time: 30002ms): 2025-08-13T19:56:58.950133678+00:00 stderr F Trace[1460267524]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:56:58.949) 2025-08-13T19:56:58.950133678+00:00 stderr F Trace[1460267524]: [30.0023144s] [30.0023144s] END 2025-08-13T19:56:58.950133678+00:00 stderr F E0813 19:56:58.949947 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:36.904398000+00:00 stderr F I0813 19:57:36.904118 1 trace.go:236] Trace[561210625]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:57:09.705) (total time: 27198ms): 2025-08-13T19:57:36.904398000+00:00 stderr F 
Trace[561210625]: ---"Objects listed" error: 27197ms (19:57:36.903) 2025-08-13T19:57:36.904398000+00:00 stderr F Trace[561210625]: [27.198137534s] [27.198137534s] END 2025-08-13T19:57:37.027895736+00:00 stderr F I0813 19:57:37.027722 1 api.go:65] Launching server on :22624 2025-08-13T19:57:37.032064235+00:00 stderr F I0813 19:57:37.030734 1 api.go:65] Launching server on :22623 ././@LongLink0000644000000000000000000000031500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000004152415073043233033007 0ustar zuulzuul2025-10-13T00:12:49.886576319+00:00 stderr F I1013 00:12:49.886355 1 start.go:38] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:13:19.948927954+00:00 stderr F W1013 00:13:19.948490 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.948927954+00:00 stderr F W1013 00:13:19.948546 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.948927954+00:00 stderr F I1013 00:13:19.948650 1 trace.go:236] Trace[657726583]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:12:49.937) (total time: 30011ms): 2025-10-13T00:13:19.948927954+00:00 stderr F Trace[657726583]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30011ms (00:13:19.948) 2025-10-13T00:13:19.948927954+00:00 stderr F Trace[657726583]: [30.011144985s] [30.011144985s] END 2025-10-13T00:13:19.948927954+00:00 stderr F E1013 00:13:19.948679 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.948927954+00:00 stderr F I1013 00:13:19.948591 1 trace.go:236] Trace[125125700]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:12:49.937) (total time: 30011ms): 2025-10-13T00:13:19.948927954+00:00 stderr F Trace[125125700]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30010ms (00:13:19.948) 2025-10-13T00:13:19.948927954+00:00 stderr F Trace[125125700]: [30.011029702s] [30.011029702s] END 2025-10-13T00:13:19.948927954+00:00 stderr F E1013 00:13:19.948716 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.948927954+00:00 stderr F W1013 00:13:19.948395 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.948927954+00:00 stderr F I1013 00:13:19.948763 1 trace.go:236] Trace[619564826]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:12:49.937) (total time: 30011ms): 2025-10-13T00:13:19.948927954+00:00 stderr F Trace[619564826]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30010ms (00:13:19.948) 2025-10-13T00:13:19.948927954+00:00 stderr F Trace[619564826]: [30.011131115s] [30.011131115s] END 2025-10-13T00:13:19.948927954+00:00 stderr F E1013 00:13:19.948772 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.951066165+00:00 stderr F W1013 00:13:19.951022 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:19.951078906+00:00 stderr F I1013 00:13:19.951066 1 trace.go:236] Trace[12091591]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Oct-2025 00:12:49.947) (total time: 30003ms): 2025-10-13T00:13:19.951078906+00:00 stderr F Trace[12091591]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (00:13:19.951) 2025-10-13T00:13:19.951078906+00:00 stderr F Trace[12091591]: [30.003993241s] [30.003993241s] END 2025-10-13T00:13:19.951078906+00:00 stderr F E1013 00:13:19.951074 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.022942011+00:00 stderr F W1013 00:13:51.022860 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.022942011+00:00 stderr F I1013 00:13:51.022925 1 trace.go:236] Trace[1937227727]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:13:21.022) (total time: 30000ms): 2025-10-13T00:13:51.022942011+00:00 stderr F Trace[1937227727]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:13:51.022) 2025-10-13T00:13:51.022942011+00:00 stderr F Trace[1937227727]: [30.000811894s] [30.000811894s] END 2025-10-13T00:13:51.023002583+00:00 stderr F E1013 00:13:51.022940 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.140940701+00:00 stderr F W1013 00:13:51.140813 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.140940701+00:00 stderr F I1013 00:13:51.140906 1 trace.go:236] Trace[1471316100]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:13:21.140) (total time: 30000ms): 2025-10-13T00:13:51.140940701+00:00 stderr F Trace[1471316100]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:13:51.140) 2025-10-13T00:13:51.140940701+00:00 stderr F Trace[1471316100]: [30.00084191s] [30.00084191s] END 2025-10-13T00:13:51.140940701+00:00 stderr F E1013 00:13:51.140927 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.187399827+00:00 stderr F W1013 00:13:51.184549 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.187399827+00:00 stderr F I1013 00:13:51.184619 1 trace.go:236] Trace[1056298198]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:13:21.183) (total time: 30001ms): 2025-10-13T00:13:51.187399827+00:00 stderr F Trace[1056298198]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:13:51.184) 2025-10-13T00:13:51.187399827+00:00 stderr F Trace[1056298198]: [30.001381436s] [30.001381436s] END 2025-10-13T00:13:51.187399827+00:00 stderr F E1013 00:13:51.184635 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: 
Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.476459909+00:00 stderr F W1013 00:13:51.475959 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:13:51.476459909+00:00 stderr F I1013 00:13:51.476443 1 trace.go:236] Trace[1945126476]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Oct-2025 00:13:21.474) (total time: 30001ms): 2025-10-13T00:13:51.476459909+00:00 stderr F Trace[1945126476]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:13:51.475) 2025-10-13T00:13:51.476459909+00:00 stderr F Trace[1945126476]: [30.001716615s] [30.001716615s] END 2025-10-13T00:13:51.476521391+00:00 stderr F E1013 00:13:51.476458 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:22.915768689+00:00 stderr F W1013 00:14:22.915676 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:22.915768689+00:00 stderr F I1013 00:14:22.915737 1 trace.go:236] Trace[1575915027]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:13:52.912) (total time: 30002ms): 2025-10-13T00:14:22.915768689+00:00 stderr F Trace[1575915027]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (00:14:22.915) 2025-10-13T00:14:22.915768689+00:00 stderr F Trace[1575915027]: [30.002772947s] [30.002772947s] END 2025-10-13T00:14:22.915768689+00:00 stderr F E1013 00:14:22.915750 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:23.794894384+00:00 stderr F W1013 00:14:23.794815 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:23.794894384+00:00 stderr F I1013 00:14:23.794885 1 trace.go:236] Trace[663648893]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:13:53.794) (total time: 
30000ms): 2025-10-13T00:14:23.794894384+00:00 stderr F Trace[663648893]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:14:23.794) 2025-10-13T00:14:23.794894384+00:00 stderr F Trace[663648893]: [30.000742718s] [30.000742718s] END 2025-10-13T00:14:23.794947216+00:00 stderr F E1013 00:14:23.794901 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.270825545+00:00 stderr F W1013 00:14:24.270725 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.270825545+00:00 stderr F I1013 00:14:24.270805 1 trace.go:236] Trace[1620385227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Oct-2025 00:13:54.269) (total time: 30001ms): 2025-10-13T00:14:24.270825545+00:00 stderr F Trace[1620385227]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:14:24.270) 2025-10-13T00:14:24.270825545+00:00 stderr F Trace[1620385227]: [30.00154621s] [30.00154621s] END 2025-10-13T00:14:24.270859076+00:00 stderr F E1013 00:14:24.270827 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.337303527+00:00 stderr F W1013 00:14:24.337215 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:24.337303527+00:00 stderr F I1013 00:14:24.337272 1 trace.go:236] Trace[103474796]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Oct-2025 00:13:54.336) (total time: 30000ms): 2025-10-13T00:14:24.337303527+00:00 stderr F Trace[103474796]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:14:24.337) 2025-10-13T00:14:24.337303527+00:00 stderr F Trace[103474796]: [30.000747206s] [30.000747206s] END 2025-10-13T00:14:24.337303527+00:00 stderr F E1013 00:14:24.337284 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-10-13T00:14:30.238608536+00:00 stderr F I1013 00:14:30.238472 1 api.go:65] 
Launching server on :22624 2025-10-13T00:14:30.238608536+00:00 stderr F I1013 00:14:30.238519 1 api.go:65] Launching server on :22623 ././@LongLink0000644000000000000000000000022400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000755000175000017500000000000015073043234032776 5ustar zuulzuul././@LongLink0000644000000000000000000000023000000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000755000175000017500000000000015073043234032776 5ustar zuulzuul././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000437415073043234033010 0ustar zuulzuul2025-10-13T00:14:59.051711675+00:00 stdout F .:5353 2025-10-13T00:14:59.051711675+00:00 stdout F hostname.bind.:5353 2025-10-13T00:14:59.051711675+00:00 stdout F [INFO] plugin/reload: Running configuration SHA512 = c40f1fac74a6633c6b1943fe251ad80adf3d5bd9b35c9e7d9b72bc260c5e2455f03e403e3b79d32f0936ff27e81ff6d07c68a95724b1c2c23510644372976718 2025-10-13T00:14:59.051711675+00:00 stdout F CoreDNS-1.11.1 2025-10-13T00:14:59.051711675+00:00 stdout F linux/amd64, go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime, 2025-10-13T00:21:08.828736722+00:00 stdout F [INFO] 10.217.0.8:60867 - 1456 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069429s 2025-10-13T00:21:08.828736722+00:00 stdout F [INFO] 10.217.0.8:36699 - 24518 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000746231s 2025-10-13T00:21:27.787322021+00:00 stdout F [INFO] 10.217.0.8:51627 - 24190 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000483673s 2025-10-13T00:21:27.787322021+00:00 stdout F [INFO] 10.217.0.8:35550 - 21399 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000546895s 2025-10-13T00:21:49.795804517+00:00 stdout F [INFO] 10.217.0.8:37041 - 39330 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000539315s 2025-10-13T00:21:49.795804517+00:00 stdout F [INFO] 10.217.0.8:35052 - 11211 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000820862s 2025-10-13T00:22:27.790693976+00:00 stdout F [INFO] 10.217.0.8:60829 - 47334 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002003224s 2025-10-13T00:22:27.791024285+00:00 stdout F [INFO] 10.217.0.8:59830 - 4029 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002795095s 2025-10-13T00:23:27.790726014+00:00 stdout F [INFO] 10.217.0.8:59757 - 24944 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001244535s 2025-10-13T00:23:27.790726014+00:00 stdout F [INFO] 10.217.0.8:60976 - 60021 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001533483s ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000226074215073043234033015 0ustar zuulzuul2025-08-13T19:59:13.144252487+00:00 stdout F [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server 2025-08-13T19:59:13.171934436+00:00 stdout F .:5353 2025-08-13T19:59:13.171934436+00:00 stdout F hostname.bind.:5353 2025-08-13T19:59:13.185586915+00:00 stdout F [INFO] plugin/reload: Running configuration SHA512 = c40f1fac74a6633c6b1943fe251ad80adf3d5bd9b35c9e7d9b72bc260c5e2455f03e403e3b79d32f0936ff27e81ff6d07c68a95724b1c2c23510644372976718 2025-08-13T19:59:13.187380976+00:00 stdout F CoreDNS-1.11.1 2025-08-13T19:59:13.187380976+00:00 stdout F linux/amd64, go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime, 2025-08-13T19:59:36.359190859+00:00 stdout F [INFO] 10.217.0.28:60726 - 4384 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009569913s 2025-08-13T19:59:36.359190859+00:00 stdout F [INFO] 10.217.0.28:45746 - 61404 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.016871841s 2025-08-13T19:59:38.555978409+00:00 stdout F [INFO] 10.217.0.8:37135 - 10343 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001011259s 2025-08-13T19:59:38.555978409+00:00 stdout F [INFO] 10.217.0.8:58657 - 31103 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001289036s 2025-08-13T19:59:39.582450718+00:00 stdout F [INFO] 10.217.0.8:36699 - 25225 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003627074s 2025-08-13T19:59:39.583116537+00:00 stdout F [INFO] 10.217.0.8:46453 - 52750 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000979658s 2025-08-13T19:59:41.285089343+00:00 stdout F [INFO] 10.217.0.8:42982 - 35440 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005392324s 2025-08-13T19:59:41.372498954+00:00 stdout F [INFO] 10.217.0.28:53074 - 61598 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.044448346s 2025-08-13T19:59:41.372498954+00:00 stdout F [INFO] 10.217.0.28:47243 - 33124 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.045254849s 2025-08-13T19:59:41.380854483+00:00 stdout F [INFO] 10.217.0.8:59732 - 7106 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.093183886s 2025-08-13T19:59:42.056944115+00:00 stdout F [INFO] 10.217.0.8:57861 - 34485 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001998527s 2025-08-13T19:59:42.057040607+00:00 stdout F [INFO] 10.217.0.8:51920 - 49588 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00312535s 2025-08-13T19:59:42.254946729+00:00 stdout F [INFO] 10.217.0.8:42744 - 36863 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002500741s 2025-08-13T19:59:42.368712312+00:00 stdout F [INFO] 10.217.0.8:41487 - 58644 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00389273s 2025-08-13T19:59:43.477540588+00:00 stdout F [INFO] 10.217.0.8:55842 - 16014 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002701517s 2025-08-13T19:59:43.477540588+00:00 stdout F [INFO] 10.217.0.8:59959 - 45350 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004039455s 2025-08-13T19:59:44.068239376+00:00 stdout F [INFO] 10.217.0.8:54207 - 19718 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003326715s 2025-08-13T19:59:44.068239376+00:00 stdout F [INFO] 10.217.0.8:55710 - 12381 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000859364s 2025-08-13T19:59:44.473616752+00:00 stdout F [INFO] 10.217.0.8:57433 - 12555 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001231785s 2025-08-13T19:59:44.490141913+00:00 stdout F [INFO] 10.217.0.8:56361 - 611 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001532184s 2025-08-13T19:59:45.207580854+00:00 stdout F [INFO] 10.217.0.8:44517 - 45189 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007351389s 2025-08-13T19:59:45.331979930+00:00 stdout F [INFO] 10.217.0.8:58571 - 60387 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.010958382s 2025-08-13T19:59:46.275200306+00:00 stdout F [INFO] 10.217.0.28:56315 - 52153 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004062336s 2025-08-13T19:59:46.275701470+00:00 stdout F [INFO] 10.217.0.28:53644 - 10701 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004936511s 2025-08-13T19:59:46.778551484+00:00 stdout F [INFO] 10.217.0.8:35729 - 16750 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003115839s 2025-08-13T19:59:46.779616544+00:00 stdout F [INFO] 10.217.0.8:47577 - 4218 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004051835s 2025-08-13T19:59:49.490117401+00:00 stdout F [INFO] 10.217.0.8:60652 - 34962 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000776823s 2025-08-13T19:59:49.492174519+00:00 stdout F [INFO] 10.217.0.8:42073 - 18763 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000911126s 2025-08-13T19:59:51.325499579+00:00 stdout F [INFO] 10.217.0.28:35410 - 28476 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002997815s 2025-08-13T19:59:51.325499579+00:00 stdout F [INFO] 10.217.0.28:38192 - 43866 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002657256s 2025-08-13T19:59:54.722954265+00:00 stdout F [INFO] 10.217.0.8:57411 - 62430 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003547181s 2025-08-13T19:59:54.722954265+00:00 stdout F [INFO] 10.217.0.8:44346 - 25135 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004123868s 2025-08-13T19:59:56.279401503+00:00 stdout F [INFO] 10.217.0.28:55304 - 30922 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002699447s 2025-08-13T19:59:56.402210944+00:00 stdout F [INFO] 10.217.0.28:60105 - 977 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.129498722s 2025-08-13T20:00:01.312229374+00:00 stdout F [INFO] 10.217.0.28:36085 - 33371 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003943962s 2025-08-13T20:00:01.312229374+00:00 stdout F [INFO] 10.217.0.28:33968 - 25713 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004474897s 2025-08-13T20:00:05.154259269+00:00 stdout F [INFO] 10.217.0.8:38795 - 21470 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003238472s 2025-08-13T20:00:05.154259269+00:00 stdout F [INFO] 10.217.0.8:39200 - 34446 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.023923592s 2025-08-13T20:00:06.214910512+00:00 stdout F [INFO] 10.217.0.28:43430 - 45449 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000923157s 2025-08-13T20:00:06.221060848+00:00 stdout F [INFO] 10.217.0.28:53417 - 39326 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009344247s 2025-08-13T20:00:10.325143240+00:00 stdout F [INFO] 10.217.0.62:51993 - 11598 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004463237s 2025-08-13T20:00:10.333127238+00:00 stdout F [INFO] 10.217.0.62:44300 - 42845 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001539534s 2025-08-13T20:00:11.276530577+00:00 stdout F [INFO] 10.217.0.28:43084 - 32054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006750782s 2025-08-13T20:00:11.276530577+00:00 stdout F [INFO] 10.217.0.28:53563 - 42854 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006401943s 2025-08-13T20:00:11.908425165+00:00 stdout F [INFO] 10.217.0.62:54342 - 11297 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.013080363s 2025-08-13T20:00:11.909993260+00:00 stdout F [INFO] 10.217.0.62:39933 - 33204 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.014236276s 2025-08-13T20:00:12.249274154+00:00 stdout F [INFO] 10.217.0.19:58421 - 64290 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001664357s 2025-08-13T20:00:12.250137289+00:00 stdout F [INFO] 10.217.0.19:52477 - 46487 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004760996s 2025-08-13T20:00:12.416577515+00:00 stdout F [INFO] 10.217.0.19:53799 - 52499 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000827873s 2025-08-13T20:00:12.416577515+00:00 stdout F [INFO] 10.217.0.19:60061 - 51150 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000839763s 2025-08-13T20:00:12.441540027+00:00 stdout F [INFO] 10.217.0.62:34840 - 21342 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000948127s 2025-08-13T20:00:12.453288212+00:00 stdout F [INFO] 10.217.0.62:42451 - 35945 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01227864s 2025-08-13T20:00:12.493549240+00:00 stdout F [INFO] 10.217.0.62:50935 - 27932 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002008878s 2025-08-13T20:00:12.493702974+00:00 stdout F [INFO] 10.217.0.62:43620 - 46295 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001971266s 2025-08-13T20:00:12.530296808+00:00 stdout F [INFO] 10.217.0.62:36702 - 14398 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000981408s 2025-08-13T20:00:12.530296808+00:00 stdout F [INFO] 10.217.0.62:48646 - 64315 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000479213s 2025-08-13T20:00:13.138332395+00:00 stdout F [INFO] 10.217.0.62:59404 - 49497 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00140676s 2025-08-13T20:00:13.138332395+00:00 stdout F [INFO] 10.217.0.62:49332 - 15686 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002175362s 2025-08-13T20:00:13.283059932+00:00 stdout F [INFO] 10.217.0.62:44541 - 25632 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002145041s 2025-08-13T20:00:13.283059932+00:00 stdout F [INFO] 10.217.0.62:40546 - 5445 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002069449s 2025-08-13T20:00:13.353282534+00:00 stdout F [INFO] 10.217.0.62:54016 - 46275 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071551s 2025-08-13T20:00:13.354444558+00:00 stdout F [INFO] 10.217.0.62:33622 - 22522 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000671669s 2025-08-13T20:00:13.439033769+00:00 stdout F [INFO] 10.217.0.62:39950 - 27175 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007004119s 2025-08-13T20:00:13.443953430+00:00 stdout F [INFO] 10.217.0.62:48086 - 49881 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004105107s 2025-08-13T20:00:13.541132851+00:00 stdout F [INFO] 10.217.0.62:58381 - 51106 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001316337s 2025-08-13T20:00:13.541132851+00:00 stdout F [INFO] 10.217.0.62:54860 - 61170 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002034738s 2025-08-13T20:00:13.653005921+00:00 stdout F [INFO] 10.217.0.62:48034 - 54387 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001380509s 2025-08-13T20:00:13.653005921+00:00 stdout F [INFO] 10.217.0.62:48917 - 55266 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001103781s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.19:38017 - 48268 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004880009s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.62:49113 - 33930 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003019526s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.62:47685 - 37637 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00420269s 2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.19:36992 - 13239 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00596202s 2025-08-13T20:00:13.954743274+00:00 stdout F [INFO] 10.217.0.62:60184 - 33262 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001215854s 2025-08-13T20:00:13.971888523+00:00 stdout F [INFO] 10.217.0.62:40495 - 54729 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.018309722s 2025-08-13T20:00:14.090604728+00:00 stdout F [INFO] 10.217.0.62:44602 - 62643 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139623s 2025-08-13T20:00:14.090604728+00:00 stdout F [INFO] 10.217.0.62:38107 - 49721 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002221754s 2025-08-13T20:00:14.159122312+00:00 stdout F [INFO] 10.217.0.62:47344 - 18930 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009264744s 2025-08-13T20:00:14.159179524+00:00 stdout F [INFO] 10.217.0.62:51560 - 51651 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010123818s 2025-08-13T20:00:14.262518960+00:00 stdout F [INFO] 10.217.0.62:39670 - 1640 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000935326s 2025-08-13T20:00:14.264760374+00:00 stdout F [INFO] 10.217.0.62:55417 - 30464 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001238945s 2025-08-13T20:00:14.346152915+00:00 stdout F [INFO] 10.217.0.62:56143 - 12731 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003476399s 2025-08-13T20:00:14.346643759+00:00 stdout F [INFO] 10.217.0.62:48892 - 34607 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004013275s 2025-08-13T20:00:14.509059210+00:00 stdout F [INFO] 10.217.0.62:56488 - 15404 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001027549s 2025-08-13T20:00:14.514045602+00:00 stdout F [INFO] 10.217.0.62:47329 - 1471 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000592067s 2025-08-13T20:00:14.622904326+00:00 stdout F [INFO] 10.217.0.62:56101 - 63034 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001539474s 2025-08-13T20:00:14.622904326+00:00 stdout F [INFO] 10.217.0.62:42829 - 34852 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001581765s 2025-08-13T20:00:14.723699989+00:00 stdout F [INFO] 10.217.0.62:38772 - 52371 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000794353s 2025-08-13T20:00:14.723699989+00:00 stdout F [INFO] 10.217.0.62:50000 - 58314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000788752s 2025-08-13T20:00:14.852071560+00:00 stdout F [INFO] 10.217.0.19:57963 - 54633 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001571125s 2025-08-13T20:00:14.852071560+00:00 stdout F [INFO] 10.217.0.19:49761 - 48106 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00210259s 2025-08-13T20:00:14.891880995+00:00 stdout F [INFO] 10.217.0.62:45637 - 43595 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002088s 2025-08-13T20:00:14.891880995+00:00 stdout F [INFO] 10.217.0.62:46509 - 9221 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002678606s 2025-08-13T20:00:14.902471267+00:00 stdout F [INFO] 10.217.0.19:44812 - 34538 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001137942s 2025-08-13T20:00:14.902471267+00:00 stdout F [INFO] 10.217.0.19:39668 - 49216 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000528135s 2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.19:60068 - 15629 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.024030195s 2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.19:41174 - 41426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.024001304s 2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.62:42114 - 47386 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.016223713s 2025-08-13T20:00:15.017236019+00:00 stdout F [INFO] 10.217.0.62:51210 - 5963 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.037691795s 2025-08-13T20:00:15.061301206+00:00 stdout F [INFO] 10.217.0.19:35651 - 11770 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003591183s 2025-08-13T20:00:15.087382469+00:00 stdout F [INFO] 10.217.0.19:37849 - 7839 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003403527s 2025-08-13T20:00:15.093767751+00:00 stdout F [INFO] 10.217.0.62:46650 - 34229 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001093931s 2025-08-13T20:00:15.093767751+00:00 stdout F [INFO] 10.217.0.62:39635 - 29161 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00105456s 2025-08-13T20:00:15.226151576+00:00 stdout F [INFO] 10.217.0.19:46945 - 33977 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000949407s 2025-08-13T20:00:15.226151576+00:00 stdout F [INFO] 10.217.0.19:57137 - 54899 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001574874s 2025-08-13T20:00:15.240936918+00:00 stdout F [INFO] 10.217.0.62:42099 - 59560 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002851612s 2025-08-13T20:00:15.240936918+00:00 stdout F [INFO] 10.217.0.62:41175 - 29037 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002982265s 2025-08-13T20:00:15.364911513+00:00 stdout F [INFO] 10.217.0.62:49482 - 56674 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007002209s 2025-08-13T20:00:15.368139115+00:00 stdout F [INFO] 10.217.0.62:53468 - 9573 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002738768s 2025-08-13T20:00:15.382737091+00:00 stdout F [INFO] 10.217.0.19:57361 - 17104 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106734s 2025-08-13T20:00:15.382737091+00:00 stdout F [INFO] 10.217.0.19:55283 - 16862 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.011326993s 2025-08-13T20:00:15.433983542+00:00 stdout F [INFO] 10.217.0.62:55667 - 33923 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004534109s 2025-08-13T20:00:15.433983542+00:00 stdout F [INFO] 10.217.0.62:34787 - 37314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004371375s 2025-08-13T20:00:15.444336067+00:00 stdout F [INFO] 10.217.0.19:51870 - 64590 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001194404s 2025-08-13T20:00:15.446909311+00:00 stdout F [INFO] 10.217.0.19:43903 - 19405 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001676148s 2025-08-13T20:00:15.517874164+00:00 stdout F [INFO] 10.217.0.62:38759 - 35666 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003969233s 2025-08-13T20:00:15.531892114+00:00 stdout F [INFO] 10.217.0.62:56587 - 13504 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01158042s 2025-08-13T20:00:15.573548082+00:00 stdout F [INFO] 10.217.0.62:52776 - 23634 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00211457s 2025-08-13T20:00:15.573548082+00:00 stdout F [INFO] 10.217.0.62:42107 - 27037 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002034778s 2025-08-13T20:00:15.654391257+00:00 stdout F [INFO] 10.217.0.62:32812 - 42091 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00282523s 2025-08-13T20:00:15.656534878+00:00 stdout F [INFO] 10.217.0.62:38907 - 27002 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002048938s 2025-08-13T20:00:15.734330996+00:00 stdout F [INFO] 10.217.0.62:33135 - 32921 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003792468s 2025-08-13T20:00:15.734330996+00:00 stdout F [INFO] 10.217.0.62:33955 - 62297 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004643453s 2025-08-13T20:00:15.802234793+00:00 stdout F [INFO] 10.217.0.62:49884 - 1877 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00387168s 2025-08-13T20:00:15.802278394+00:00 stdout F [INFO] 10.217.0.62:49483 - 64427 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00490845s 2025-08-13T20:00:15.873532576+00:00 stdout F [INFO] 10.217.0.62:43741 - 36144 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000938287s 2025-08-13T20:00:15.873532576+00:00 stdout F [INFO] 10.217.0.62:41760 - 12545 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000846474s 2025-08-13T20:00:15.908057300+00:00 stdout F [INFO] 10.217.0.62:33669 - 26634 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001498842s 2025-08-13T20:00:15.908057300+00:00 stdout F [INFO] 10.217.0.62:49123 - 62305 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001537134s 2025-08-13T20:00:15.958915760+00:00 stdout F [INFO] 10.217.0.62:36466 - 6221 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000944747s 2025-08-13T20:00:15.958915760+00:00 stdout F [INFO] 10.217.0.62:45514 - 61891 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001165713s 2025-08-13T20:00:16.003345277+00:00 stdout F [INFO] 10.217.0.62:35535 - 24579 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000768992s 2025-08-13T20:00:16.003458100+00:00 stdout F [INFO] 10.217.0.62:54227 - 43386 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000751341s 2025-08-13T20:00:16.025173939+00:00 stdout F [INFO] 10.217.0.62:47898 - 47331 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009788199s 2025-08-13T20:00:16.028196506+00:00 stdout F [INFO] 10.217.0.62:53790 - 34665 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011007464s 2025-08-13T20:00:16.063730659+00:00 stdout F [INFO] 10.217.0.19:38695 - 5246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002941634s 2025-08-13T20:00:16.070752309+00:00 stdout F [INFO] 10.217.0.19:34631 - 33937 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007993128s 2025-08-13T20:00:16.071029697+00:00 stdout F [INFO] 10.217.0.62:45478 - 2016 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000954457s 2025-08-13T20:00:16.073662712+00:00 stdout F [INFO] 10.217.0.62:60256 - 52513 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001386379s 2025-08-13T20:00:16.115580247+00:00 stdout F [INFO] 10.217.0.62:60048 - 4345 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000984398s 2025-08-13T20:00:16.115580247+00:00 stdout F [INFO] 10.217.0.62:60259 - 45250 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002088889s 2025-08-13T20:00:16.150981417+00:00 stdout F [INFO] 10.217.0.62:33976 - 29295 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010768027s 2025-08-13T20:00:16.159019016+00:00 stdout F [INFO] 10.217.0.62:53332 - 40899 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012587289s 2025-08-13T20:00:16.197911625+00:00 stdout F [INFO] 10.217.0.62:46654 - 45279 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000622438s 2025-08-13T20:00:16.200930001+00:00 stdout F [INFO] 10.217.0.62:32870 - 21450 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001038189s 2025-08-13T20:00:16.205693417+00:00 stdout F [INFO] 10.217.0.28:51187 - 30233 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001215535s 2025-08-13T20:00:16.205693417+00:00 stdout F [INFO] 10.217.0.28:51035 - 51486 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000806912s 2025-08-13T20:00:16.336719233+00:00 stdout F [INFO] 10.217.0.62:60998 - 63904 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001224225s 2025-08-13T20:00:16.336825176+00:00 stdout F [INFO] 10.217.0.62:33232 - 15158 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001088981s 2025-08-13T20:00:16.356991191+00:00 stdout F [INFO] 10.217.0.62:41518 - 40868 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000610358s 2025-08-13T20:00:16.357465835+00:00 stdout F [INFO] 10.217.0.62:54709 - 24528 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001147052s 2025-08-13T20:00:16.379975566+00:00 stdout F [INFO] 10.217.0.62:57672 - 38980 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000834984s 2025-08-13T20:00:16.380039328+00:00 stdout F [INFO] 10.217.0.62:49914 - 18773 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000918706s 2025-08-13T20:00:16.407303966+00:00 stdout F [INFO] 10.217.0.62:52427 - 11793 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001208755s 2025-08-13T20:00:16.408590282+00:00 stdout F [INFO] 10.217.0.62:43965 - 9519 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002598674s 2025-08-13T20:00:16.452031651+00:00 stdout F [INFO] 10.217.0.62:55006 - 16870 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000919176s 2025-08-13T20:00:16.452031651+00:00 stdout F [INFO] 10.217.0.62:49785 - 28542 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001365109s 2025-08-13T20:00:17.145072472+00:00 stdout F [INFO] 10.217.0.19:41177 - 20131 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001652497s 2025-08-13T20:00:17.148561112+00:00 stdout F [INFO] 10.217.0.19:37598 - 42797 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006819774s 2025-08-13T20:00:17.197991761+00:00 stdout F [INFO] 10.217.0.19:33492 - 52647 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001516074s 2025-08-13T20:00:17.198367222+00:00 stdout F [INFO] 10.217.0.19:45055 - 1509 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001945665s 2025-08-13T20:00:17.757549517+00:00 stdout F [INFO] 10.217.0.62:43319 - 34607 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001677958s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.19:55481 - 21056 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000947157s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.19:58786 - 15965 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000674169s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.62:33169 - 37945 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001128812s 2025-08-13T20:00:17.816943650+00:00 stdout F [INFO] 10.217.0.62:42205 - 36373 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002212463s 2025-08-13T20:00:17.816943650+00:00 stdout F [INFO] 10.217.0.62:37387 - 16790 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001350018s 2025-08-13T20:00:17.868672735+00:00 stdout F [INFO] 10.217.0.62:52021 - 9648 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001696059s 2025-08-13T20:00:17.874387088+00:00 stdout F [INFO] 10.217.0.62:40260 - 51115 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000826174s 2025-08-13T20:00:17.998417495+00:00 stdout F [INFO] 10.217.0.62:47087 - 30590 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000785533s 2025-08-13T20:00:18.001043570+00:00 stdout F [INFO] 10.217.0.62:47798 - 1955 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003256683s 2025-08-13T20:00:18.093719402+00:00 stdout F [INFO] 10.217.0.62:58441 - 33701 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021269306s 2025-08-13T20:00:18.093719402+00:00 stdout F [INFO] 10.217.0.62:41589 - 40585 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.030229852s 2025-08-13T20:00:18.296816152+00:00 stdout F [INFO] 10.217.0.19:51832 - 36823 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003318795s 2025-08-13T20:00:18.301997970+00:00 stdout F [INFO] 10.217.0.19:39903 - 35 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001972347s 2025-08-13T20:00:18.350136493+00:00 stdout F [INFO] 10.217.0.19:58831 - 64918 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002482641s 2025-08-13T20:00:18.350136493+00:00 stdout F [INFO] 10.217.0.19:42371 - 39239 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006809914s 2025-08-13T20:00:18.407327564+00:00 stdout F [INFO] 10.217.0.62:52472 - 11023 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011434426s 2025-08-13T20:00:18.423120254+00:00 stdout F [INFO] 10.217.0.62:47420 - 64726 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.020940467s 2025-08-13T20:00:18.629197680+00:00 stdout F [INFO] 10.217.0.62:37245 - 2390 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002279585s 2025-08-13T20:00:18.631098444+00:00 stdout F [INFO] 10.217.0.62:50831 - 11251 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003223551s 2025-08-13T20:00:18.761415450+00:00 stdout F [INFO] 10.217.0.62:46429 - 45837 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001951445s 2025-08-13T20:00:18.815524273+00:00 stdout F [INFO] 10.217.0.62:36125 - 11874 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.033757773s 2025-08-13T20:00:18.929433271+00:00 stdout F [INFO] 10.217.0.19:57431 - 54919 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001774561s 2025-08-13T20:00:18.940378423+00:00 stdout F [INFO] 10.217.0.19:56896 - 39088 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002388608s 2025-08-13T20:00:19.114070866+00:00 stdout F [INFO] 10.217.0.62:33352 - 47244 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021087921s 2025-08-13T20:00:19.114070866+00:00 stdout F [INFO] 10.217.0.62:60188 - 11847 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021673368s 2025-08-13T20:00:19.237767773+00:00 stdout F [INFO] 10.217.0.62:56582 - 19683 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001947166s 2025-08-13T20:00:19.260334966+00:00 stdout F [INFO] 10.217.0.62:33301 - 31898 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011323893s 2025-08-13T20:00:19.527445143+00:00 stdout F [INFO] 10.217.0.19:60066 - 34703 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001614406s 2025-08-13T20:00:19.528323238+00:00 stdout F [INFO] 10.217.0.19:60305 - 37834 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002391868s 2025-08-13T20:00:20.196990224+00:00 stdout F [INFO] 10.217.0.19:52258 - 25009 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002468161s 2025-08-13T20:00:20.198550649+00:00 stdout F [INFO] 10.217.0.19:38223 - 9118 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007779282s 2025-08-13T20:00:20.476452013+00:00 stdout F [INFO] 10.217.0.62:54708 - 21880 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002848321s 2025-08-13T20:00:20.476452013+00:00 stdout F [INFO] 10.217.0.62:59683 - 4772 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005088505s 2025-08-13T20:00:20.675957121+00:00 stdout F [INFO] 10.217.0.62:50588 - 46196 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004520369s 2025-08-13T20:00:20.714120399+00:00 stdout F [INFO] 10.217.0.62:41010 - 48406 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009880922s 2025-08-13T20:00:20.839418082+00:00 stdout F [INFO] 10.217.0.62:47437 - 37198 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006881437s 2025-08-13T20:00:20.839561556+00:00 stdout F [INFO] 10.217.0.62:36545 - 56669 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007588906s 2025-08-13T20:00:20.923534181+00:00 stdout F [INFO] 10.217.0.62:41433 - 54004 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000731301s 2025-08-13T20:00:20.923853600+00:00 stdout F [INFO] 10.217.0.62:58985 - 8745 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001453541s 2025-08-13T20:00:21.140617421+00:00 stdout F [INFO] 10.217.0.62:50318 - 16622 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012183297s 2025-08-13T20:00:21.140617421+00:00 stdout F [INFO] 10.217.0.62:43541 - 15801 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012824826s 2025-08-13T20:00:21.197936515+00:00 stdout F [INFO] 10.217.0.62:49442 - 22994 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004789596s 2025-08-13T20:00:21.198102180+00:00 stdout F [INFO] 10.217.0.28:32829 - 14166 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002330176s 2025-08-13T20:00:21.202505155+00:00 stdout F [INFO] 10.217.0.62:33830 - 17870 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00597609s 2025-08-13T20:00:21.202505155+00:00 stdout F [INFO] 10.217.0.28:53244 - 10983 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002501252s 2025-08-13T20:00:21.450165117+00:00 stdout F [INFO] 10.217.0.62:50542 - 30667 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002195952s 2025-08-13T20:00:21.450165117+00:00 stdout F [INFO] 10.217.0.62:37026 - 47114 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002694096s 2025-08-13T20:00:21.548909932+00:00 stdout F [INFO] 10.217.0.62:58992 - 14098 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001445921s 2025-08-13T20:00:21.548909932+00:00 stdout F [INFO] 10.217.0.62:59478 - 12237 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007775702s 2025-08-13T20:00:21.665943849+00:00 stdout F [INFO] 10.217.0.62:53629 - 30765 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002404769s 2025-08-13T20:00:21.670443357+00:00 stdout F [INFO] 10.217.0.62:59632 - 3278 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008741309s 2025-08-13T20:00:21.703125519+00:00 stdout F [INFO] 10.217.0.62:41766 - 63549 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001135602s 2025-08-13T20:00:21.703719906+00:00 stdout F [INFO] 10.217.0.62:46054 - 10024 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000784832s 2025-08-13T20:00:22.579485308+00:00 stdout F [INFO] 10.217.0.19:50242 - 14711 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003977904s 2025-08-13T20:00:22.579485308+00:00 stdout F [INFO] 10.217.0.19:48700 - 53632 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004684324s 2025-08-13T20:00:25.731086805+00:00 stdout F [INFO] 10.217.0.8:59710 - 37958 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003472519s 2025-08-13T20:00:25.731086805+00:00 stdout F [INFO] 10.217.0.8:38997 - 4199 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000922526s 2025-08-13T20:00:25.739917427+00:00 stdout F [INFO] 10.217.0.8:37909 - 64583 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.005826467s 2025-08-13T20:00:25.743884900+00:00 stdout F [INFO] 10.217.0.8:51606 - 4042 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001596626s 2025-08-13T20:00:26.211451573+00:00 stdout F [INFO] 10.217.0.28:41789 - 33440 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004783477s 2025-08-13T20:00:26.215747035+00:00 stdout F [INFO] 10.217.0.28:54682 - 18635 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005841127s 2025-08-13T20:00:27.306241280+00:00 stdout F [INFO] 10.217.0.37:49903 - 59099 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002873802s 2025-08-13T20:00:27.306241280+00:00 stdout F [INFO] 10.217.0.37:49337 - 16367 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00315367s 2025-08-13T20:00:27.610176267+00:00 stdout F [INFO] 10.217.0.57:48884 - 35058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001445402s 2025-08-13T20:00:27.611670049+00:00 stdout F [INFO] 10.217.0.57:47676 - 42222 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001946096s 2025-08-13T20:00:27.716045735+00:00 stdout F [INFO] 10.217.0.57:44140 - 9399 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.013504715s 2025-08-13T20:00:27.717506627+00:00 stdout F [INFO] 10.217.0.57:45630 - 4081 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.013814204s 2025-08-13T20:00:31.268346355+00:00 stdout F [INFO] 10.217.0.28:57797 - 30523 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001610226s 2025-08-13T20:00:31.268346355+00:00 stdout F [INFO] 10.217.0.28:51665 - 51102 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002051629s 2025-08-13T20:00:31.425624620+00:00 stdout F [INFO] 10.217.0.62:40352 - 63069 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003710086s 2025-08-13T20:00:31.437602572+00:00 stdout F [INFO] 10.217.0.62:41024 - 28867 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004605691s 2025-08-13T20:00:31.661447155+00:00 stdout F [INFO] 10.217.0.62:44788 - 55371 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001476762s 2025-08-13T20:00:31.689217696+00:00 stdout F [INFO] 10.217.0.62:44360 - 51842 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.023932763s 2025-08-13T20:00:31.811253746+00:00 stdout F [INFO] 10.217.0.62:57778 - 8850 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001100381s 2025-08-13T20:00:31.811253746+00:00 stdout F [INFO] 10.217.0.62:60857 - 53999 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000835464s 2025-08-13T20:00:31.961965074+00:00 stdout F [INFO] 10.217.0.62:34727 - 35445 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002562353s 2025-08-13T20:00:31.964171786+00:00 stdout F [INFO] 10.217.0.62:41993 - 40439 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006554117s 2025-08-13T20:00:32.113323289+00:00 stdout F [INFO] 10.217.0.62:58232 - 45749 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001293527s 2025-08-13T20:00:32.114298577+00:00 stdout F [INFO] 10.217.0.62:35112 - 22784 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000848074s 2025-08-13T20:00:32.737430424+00:00 stdout F [INFO] 10.217.0.57:38966 - 30327 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000583237s 2025-08-13T20:00:32.737498686+00:00 stdout F [INFO] 10.217.0.57:50692 - 52016 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001420221s 2025-08-13T20:00:33.045863489+00:00 stdout F [INFO] 10.217.0.62:58824 - 1702 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001806381s 2025-08-13T20:00:33.058230141+00:00 stdout F [INFO] 10.217.0.62:36864 - 16355 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008756479s 2025-08-13T20:00:33.577261071+00:00 stdout F [INFO] 10.217.0.62:43777 - 9593 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001006539s 2025-08-13T20:00:33.577261071+00:00 stdout F [INFO] 10.217.0.62:53467 - 42328 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000907196s 2025-08-13T20:00:33.863106352+00:00 stdout F [INFO] 10.217.0.62:38630 - 31176 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001723559s 2025-08-13T20:00:33.872896041+00:00 stdout F [INFO] 10.217.0.62:60120 - 36301 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002984125s 2025-08-13T20:00:33.968965360+00:00 stdout F [INFO] 10.217.0.62:34743 - 13834 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00177437s 2025-08-13T20:00:33.968965360+00:00 stdout F [INFO] 10.217.0.62:44966 - 17140 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002161702s 2025-08-13T20:00:34.058497063+00:00 stdout F [INFO] 10.217.0.62:43193 - 43833 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001836962s 2025-08-13T20:00:34.058497063+00:00 stdout F [INFO] 10.217.0.62:34426 - 45919 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002009107s 2025-08-13T20:00:35.433094038+00:00 stdout F [INFO] 10.217.0.19:49445 - 33612 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000984848s 2025-08-13T20:00:35.433094038+00:00 stdout F [INFO] 10.217.0.19:41366 - 18651 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000927016s 2025-08-13T20:00:35.453677195+00:00 stdout F [INFO] 10.217.0.8:57298 - 52946 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007615537s 2025-08-13T20:00:35.453677195+00:00 stdout F [INFO] 10.217.0.8:38349 - 16220 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006572848s 2025-08-13T20:00:35.468442306+00:00 stdout F [INFO] 10.217.0.8:33452 - 50656 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012375263s 2025-08-13T20:00:35.468442306+00:00 stdout F [INFO] 10.217.0.8:35773 - 41908 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012355652s 2025-08-13T20:00:35.690673373+00:00 stdout F [INFO] 10.217.0.62:55989 - 53229 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0014117s 2025-08-13T20:00:35.690673373+00:00 stdout F [INFO] 10.217.0.62:33065 - 53085 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001248106s 2025-08-13T20:00:35.730503909+00:00 stdout F [INFO] 10.217.0.19:36292 - 10939 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005190688s 2025-08-13T20:00:35.770042926+00:00 stdout F [INFO] 10.217.0.19:54938 - 27133 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.048229325s 2025-08-13T20:00:35.897148751+00:00 stdout F [INFO] 10.217.0.62:51579 - 52497 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010913982s 2025-08-13T20:00:35.897148751+00:00 stdout F [INFO] 10.217.0.62:32787 - 24254 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011512098s 2025-08-13T20:00:35.952043906+00:00 stdout F [INFO] 10.217.0.19:33608 - 30398 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000857724s 2025-08-13T20:00:35.952043906+00:00 stdout F [INFO] 10.217.0.19:39154 - 13296 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001591015s 2025-08-13T20:00:36.052158070+00:00 stdout F [INFO] 10.217.0.62:57742 - 50873 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007803622s 2025-08-13T20:00:36.052158070+00:00 stdout F [INFO] 10.217.0.62:50217 - 39537 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008163043s 2025-08-13T20:00:36.085994515+00:00 stdout F [INFO] 10.217.0.62:46973 - 4617 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000783212s 2025-08-13T20:00:36.085994515+00:00 stdout F [INFO] 10.217.0.62:48190 - 9675 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000738442s 2025-08-13T20:00:36.123936097+00:00 stdout F [INFO] 10.217.0.62:57435 - 21834 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000900436s 2025-08-13T20:00:36.123936097+00:00 stdout F [INFO] 10.217.0.62:50783 - 58379 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000881065s 2025-08-13T20:00:36.192447920+00:00 stdout F [INFO] 10.217.0.28:35258 - 57233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001739479s 2025-08-13T20:00:36.201027864+00:00 stdout F [INFO] 10.217.0.28:45699 - 57411 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010188299s 2025-08-13T20:00:36.562031178+00:00 stdout F [INFO] 10.217.0.19:51585 - 9027 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966748s 2025-08-13T20:00:36.575971016+00:00 stdout F [INFO] 10.217.0.19:34532 - 16670 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.014703769s 2025-08-13T20:00:36.983461205+00:00 stdout F [INFO] 10.217.0.19:41101 - 30346 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001602416s 2025-08-13T20:00:36.983886077+00:00 stdout F [INFO] 10.217.0.19:37160 - 8672 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001703779s 2025-08-13T20:00:37.283931612+00:00 stdout F [INFO] 10.217.0.19:51863 - 63110 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003548381s 2025-08-13T20:00:37.285398654+00:00 stdout F [INFO] 10.217.0.19:46973 - 8780 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001810482s 2025-08-13T20:00:37.544949185+00:00 stdout F [INFO] 10.217.0.19:60998 - 12802 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002060749s 2025-08-13T20:00:37.553034096+00:00 stdout F [INFO] 10.217.0.19:57940 - 9552 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010101818s 2025-08-13T20:00:37.790369403+00:00 stdout F [INFO] 10.217.0.57:43550 - 43696 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009638015s 2025-08-13T20:00:37.796695383+00:00 stdout F [INFO] 10.217.0.57:39451 - 44600 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006570608s 2025-08-13T20:00:40.173948128+00:00 stdout F [INFO] 10.217.0.60:48850 - 60117 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008025469s 2025-08-13T20:00:40.173948128+00:00 stdout F [INFO] 10.217.0.60:48884 - 7163 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011604921s 2025-08-13T20:00:40.183890401+00:00 stdout F [INFO] 10.217.0.60:53961 - 51717 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009816699s 2025-08-13T20:00:40.184241341+00:00 stdout F [INFO] 10.217.0.60:46675 - 43378 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009938023s 2025-08-13T20:00:40.236573113+00:00 stdout F [INFO] 10.217.0.62:40825 - 24788 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004861068s 2025-08-13T20:00:40.236573113+00:00 stdout F [INFO] 10.217.0.62:46421 - 61090 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005456386s 2025-08-13T20:00:40.714710607+00:00 stdout F [INFO] 10.217.0.62:59322 - 41606 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006050943s 2025-08-13T20:00:40.714710607+00:00 stdout F [INFO] 10.217.0.62:39975 - 59107 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007196325s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:45725 - 34122 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.013637669s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:44229 - 60604 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.014164904s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:43328 - 58288 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.015254415s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:38617 - 57227 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.014962096s 2025-08-13T20:00:40.910345735+00:00 stdout F [INFO] 10.217.0.60:45732 - 4038 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004323434s 2025-08-13T20:00:40.930483439+00:00 stdout F [INFO] 10.217.0.60:49251 - 15133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.021362629s 2025-08-13T20:00:40.990210252+00:00 stdout F [INFO] 10.217.0.60:40178 - 34377 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004410105s 2025-08-13T20:00:41.060271440+00:00 stdout F [INFO] 10.217.0.60:50125 - 63735 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018152517s 2025-08-13T20:00:41.075524395+00:00 stdout F [INFO] 10.217.0.62:57370 - 34368 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.017149759s 2025-08-13T20:00:41.075931257+00:00 stdout F [INFO] 10.217.0.62:41324 - 39724 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.018058485s 2025-08-13T20:00:41.190088592+00:00 stdout F [INFO] 10.217.0.60:36300 - 17768 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007318269s 2025-08-13T20:00:41.190088592+00:00 stdout F [INFO] 10.217.0.60:60017 - 36037 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016632274s 2025-08-13T20:00:41.224605096+00:00 stdout F [INFO] 10.217.0.28:51635 - 40403 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006965329s 2025-08-13T20:00:41.264087632+00:00 stdout F [INFO] 10.217.0.28:52062 - 1904 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009673976s 2025-08-13T20:00:41.373989136+00:00 stdout F [INFO] 10.217.0.62:56057 - 9966 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001195254s 2025-08-13T20:00:41.387945613+00:00 stdout F [INFO] 10.217.0.62:46435 - 5126 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.016528461s 2025-08-13T20:00:41.715311878+00:00 stdout F [INFO] 10.217.0.62:50311 - 25129 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0063246s 2025-08-13T20:00:41.717438859+00:00 stdout F [INFO] 10.217.0.62:33386 - 40928 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006839125s 2025-08-13T20:00:41.738119748+00:00 stdout F [INFO] 10.217.0.60:52373 - 23632 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007761871s 2025-08-13T20:00:41.801297700+00:00 stdout F [INFO] 10.217.0.60:35320 - 1700 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.058023705s 2025-08-13T20:00:41.801297700+00:00 stdout F [INFO] 10.217.0.60:55328 - 10605 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.058711244s 2025-08-13T20:00:41.829259987+00:00 stdout F [INFO] 10.217.0.60:53501 - 19974 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.05925692s 2025-08-13T20:00:41.976174496+00:00 stdout F [INFO] 10.217.0.60:52578 - 9173 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008371928s 2025-08-13T20:00:41.976497765+00:00 stdout F [INFO] 10.217.0.60:40532 - 45632 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00071053s 2025-08-13T20:00:41.976709652+00:00 stdout F [INFO] 10.217.0.60:34514 - 826 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001254386s 2025-08-13T20:00:41.977127923+00:00 stdout F [INFO] 10.217.0.60:57847 - 46655 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009665195s 2025-08-13T20:00:42.007513300+00:00 stdout F [INFO] 10.217.0.60:54739 - 10924 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003385036s 2025-08-13T20:00:42.007513300+00:00 stdout F [INFO] 10.217.0.60:59114 - 30470 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005906868s 2025-08-13T20:00:42.084584237+00:00 stdout F [INFO] 10.217.0.60:56862 - 58713 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003955053s 2025-08-13T20:00:42.087243233+00:00 stdout F [INFO] 10.217.0.60:36311 - 27915 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002747588s 2025-08-13T20:00:42.132939816+00:00 stdout F [INFO] 10.217.0.60:52763 - 16178 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009929473s 2025-08-13T20:00:42.133577344+00:00 stdout F [INFO] 10.217.0.60:49355 - 29298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012692052s 2025-08-13T20:00:42.153883023+00:00 stdout F [INFO] 10.217.0.19:47374 - 58670 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105662s 2025-08-13T20:00:42.153944575+00:00 stdout F [INFO] 10.217.0.19:53419 - 55595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000777443s 2025-08-13T20:00:42.164854346+00:00 stdout F [INFO] 10.217.0.60:32773 - 53922 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.006955098s 2025-08-13T20:00:42.164854346+00:00 stdout F [INFO] 10.217.0.60:33864 - 8621 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.005889688s 2025-08-13T20:00:42.206111563+00:00 stdout F [INFO] 10.217.0.60:57809 - 30579 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005044634s 2025-08-13T20:00:42.206578306+00:00 stdout F [INFO] 10.217.0.60:58796 - 61858 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006533616s 2025-08-13T20:00:42.290129518+00:00 stdout F [INFO] 10.217.0.60:47662 - 60054 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008448151s 2025-08-13T20:00:42.290129518+00:00 stdout F [INFO] 10.217.0.60:44908 - 34608 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009088019s 2025-08-13T20:00:42.372369263+00:00 stdout F [INFO] 10.217.0.60:50074 - 31458 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003794648s 2025-08-13T20:00:42.374170995+00:00 stdout F [INFO] 10.217.0.60:59507 - 41881 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004698834s 2025-08-13T20:00:42.381148524+00:00 stdout F [INFO] 10.217.0.60:53163 - 6696 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00353352s 2025-08-13T20:00:42.381148524+00:00 stdout F [INFO] 10.217.0.60:47961 - 23913 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003714746s 2025-08-13T20:00:42.395730410+00:00 stdout F [INFO] 10.217.0.60:51772 - 27470 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004109837s 2025-08-13T20:00:42.396155742+00:00 stdout F [INFO] 10.217.0.60:56559 - 41180 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003775757s 2025-08-13T20:00:42.465712745+00:00 stdout F [INFO] 10.217.0.60:52007 - 64110 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002959514s 2025-08-13T20:00:42.466142467+00:00 stdout F [INFO] 10.217.0.60:46205 - 30385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000848104s 2025-08-13T20:00:42.466550819+00:00 stdout F [INFO] 10.217.0.60:39664 - 25521 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003301974s 2025-08-13T20:00:42.466616831+00:00 stdout F [INFO] 10.217.0.60:58488 - 44759 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001436741s 2025-08-13T20:00:42.501886346+00:00 stdout F [INFO] 10.217.0.60:53857 - 2484 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005065864s 2025-08-13T20:00:42.502274967+00:00 stdout F [INFO] 10.217.0.60:37557 - 36840 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006229778s 2025-08-13T20:00:42.541595989+00:00 stdout F [INFO] 10.217.0.60:49941 - 58907 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001070981s 2025-08-13T20:00:42.542671779+00:00 stdout F [INFO] 10.217.0.60:33250 - 47268 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001080841s 2025-08-13T20:00:42.549360000+00:00 stdout F [INFO] 10.217.0.60:36173 - 36326 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00417675s 2025-08-13T20:00:42.549577626+00:00 stdout F [INFO] 10.217.0.60:44656 - 34287 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00457103s 2025-08-13T20:00:42.612258244+00:00 stdout F [INFO] 10.217.0.60:36746 - 61964 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.007478603s 2025-08-13T20:00:42.612546452+00:00 stdout F [INFO] 10.217.0.60:48936 - 60887 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.008014958s 2025-08-13T20:00:42.612857751+00:00 stdout F [INFO] 10.217.0.60:41597 - 9052 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010254542s 2025-08-13T20:00:42.613108138+00:00 stdout F [INFO] 10.217.0.60:60388 - 16520 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010896681s 2025-08-13T20:00:42.626540471+00:00 stdout F [INFO] 10.217.0.19:47160 - 61990 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001217815s 2025-08-13T20:00:42.626767237+00:00 stdout F [INFO] 10.217.0.19:35835 - 10492 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001272586s 2025-08-13T20:00:42.635307571+00:00 stdout F [INFO] 10.217.0.60:47210 - 51012 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00493136s 2025-08-13T20:00:42.635508907+00:00 stdout F [INFO] 10.217.0.60:48041 - 17561 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004893059s 2025-08-13T20:00:42.662218808+00:00 stdout F [INFO] 10.217.0.60:52906 - 27055 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009696496s 2025-08-13T20:00:42.662218808+00:00 stdout F [INFO] 10.217.0.60:36666 - 6155 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009805049s 2025-08-13T20:00:42.716915738+00:00 stdout F [INFO] 10.217.0.60:60334 - 9510 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005900578s 2025-08-13T20:00:42.716915738+00:00 stdout F [INFO] 10.217.0.60:56325 - 59807 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006511966s 2025-08-13T20:00:42.722290411+00:00 stdout F [INFO] 10.217.0.60:56599 - 47234 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.007804482s 2025-08-13T20:00:42.722639381+00:00 stdout F [INFO] 10.217.0.60:40308 - 51070 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.008097021s 2025-08-13T20:00:42.751493194+00:00 stdout F [INFO] 10.217.0.60:42287 - 5955 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068765s 2025-08-13T20:00:42.753440729+00:00 stdout F [INFO] 10.217.0.60:43479 - 31758 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000434712s 2025-08-13T20:00:42.758242896+00:00 stdout F [INFO] 10.217.0.57:34553 - 58336 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003449808s 2025-08-13T20:00:42.758242896+00:00 stdout F [INFO] 10.217.0.57:55698 - 58679 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003164171s 2025-08-13T20:00:42.799761920+00:00 stdout F [INFO] 10.217.0.60:57471 - 53951 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001028359s 2025-08-13T20:00:42.799761920+00:00 stdout F [INFO] 10.217.0.60:36137 - 30042 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000925436s 2025-08-13T20:00:42.821276633+00:00 stdout F [INFO] 10.217.0.60:51024 - 25845 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002521761s 2025-08-13T20:00:42.852377870+00:00 stdout F [INFO] 10.217.0.60:45732 - 49655 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.015162122s 2025-08-13T20:00:42.877721733+00:00 stdout F [INFO] 10.217.0.60:36587 - 34390 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008998267s 2025-08-13T20:00:42.877761484+00:00 stdout F [INFO] 10.217.0.60:41455 - 12455 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009635164s 2025-08-13T20:00:42.934947915+00:00 stdout F [INFO] 10.217.0.60:48487 - 16479 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001001568s 2025-08-13T20:00:42.934947915+00:00 stdout F [INFO] 10.217.0.60:39248 - 18897 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001283497s 2025-08-13T20:00:42.992199317+00:00 stdout F [INFO] 10.217.0.60:38306 - 15345 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001791081s 2025-08-13T20:00:42.997569910+00:00 stdout F [INFO] 10.217.0.60:43691 - 39626 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006527846s 2025-08-13T20:00:43.062744239+00:00 stdout F [INFO] 10.217.0.60:43124 - 10054 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002526562s 2025-08-13T20:00:43.062819551+00:00 stdout F [INFO] 10.217.0.60:43696 - 21466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002597884s 2025-08-13T20:00:43.124412067+00:00 stdout F [INFO] 10.217.0.60:40037 - 48306 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001489412s 2025-08-13T20:00:43.124412067+00:00 stdout F [INFO] 10.217.0.60:56776 - 5951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001802502s 2025-08-13T20:00:43.124448128+00:00 stdout F [INFO] 10.217.0.60:35495 - 36278 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001221295s 2025-08-13T20:00:43.125673583+00:00 stdout F [INFO] 10.217.0.60:33856 - 56194 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002322437s 2025-08-13T20:00:43.147262809+00:00 stdout F [INFO] 10.217.0.60:36329 - 25227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003368526s 2025-08-13T20:00:43.148305638+00:00 stdout F [INFO] 10.217.0.60:48432 - 30770 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003434678s 2025-08-13T20:00:43.187327211+00:00 stdout F [INFO] 10.217.0.60:42719 - 44038 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002487311s 2025-08-13T20:00:43.187622319+00:00 stdout F [INFO] 10.217.0.60:33733 - 28042 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002977814s 2025-08-13T20:00:43.193190838+00:00 stdout F [INFO] 10.217.0.60:40321 - 64415 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001383639s 2025-08-13T20:00:43.193190838+00:00 stdout F [INFO] 10.217.0.60:58984 - 37137 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001215215s 2025-08-13T20:00:43.281338372+00:00 stdout F [INFO] 10.217.0.60:40457 - 23589 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001961056s 2025-08-13T20:00:43.281886087+00:00 stdout F [INFO] 10.217.0.60:52727 - 44082 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002660136s 2025-08-13T20:00:43.285920862+00:00 stdout F [INFO] 10.217.0.60:55415 - 57509 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069516s 2025-08-13T20:00:43.285920862+00:00 stdout F [INFO] 10.217.0.60:58720 - 48518 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000799473s 2025-08-13T20:00:43.352745757+00:00 stdout F [INFO] 10.217.0.60:47603 - 41130 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007655488s 2025-08-13T20:00:43.352745757+00:00 stdout F [INFO] 10.217.0.60:58734 - 17536 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007685769s 2025-08-13T20:00:43.423710250+00:00 stdout F [INFO] 10.217.0.60:51827 - 61021 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003569272s 2025-08-13T20:00:43.424613746+00:00 stdout F [INFO] 10.217.0.60:49551 - 22902 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004468317s 2025-08-13T20:00:43.425194043+00:00 stdout F [INFO] 10.217.0.60:43655 - 46137 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004508809s 2025-08-13T20:00:43.425534112+00:00 stdout F [INFO] 10.217.0.60:57763 - 5602 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005004903s 2025-08-13T20:00:43.470829214+00:00 stdout F [INFO] 10.217.0.60:60522 - 63475 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001313588s 2025-08-13T20:00:43.472331027+00:00 stdout F [INFO] 10.217.0.60:47979 - 23540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002337996s 2025-08-13T20:00:43.497125284+00:00 stdout F [INFO] 10.217.0.60:39595 - 21645 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001085451s 2025-08-13T20:00:43.497502724+00:00 stdout F [INFO] 10.217.0.60:44441 - 51694 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001836472s 2025-08-13T20:00:43.543204928+00:00 stdout F [INFO] 10.217.0.60:44521 - 34550 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001911934s 2025-08-13T20:00:43.546595764+00:00 stdout F [INFO] 10.217.0.60:52466 - 14265 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000969237s 2025-08-13T20:00:43.557742862+00:00 stdout F [INFO] 10.217.0.60:59650 - 5341 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001627867s 2025-08-13T20:00:43.557742862+00:00 stdout F [INFO] 10.217.0.60:51584 - 49883 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001501333s 2025-08-13T20:00:43.600072219+00:00 stdout F [INFO] 10.217.0.60:49431 - 52323 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845184s 2025-08-13T20:00:43.600110150+00:00 stdout F [INFO] 10.217.0.60:48025 - 60773 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000760792s 2025-08-13T20:00:43.611390982+00:00 stdout F [INFO] 10.217.0.60:40820 - 1549 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070815s 2025-08-13T20:00:43.611544296+00:00 stdout F [INFO] 10.217.0.60:47757 - 10448 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754742s 2025-08-13T20:00:43.618045112+00:00 stdout F [INFO] 10.217.0.60:53912 - 30298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001381489s 2025-08-13T20:00:43.619334828+00:00 stdout F [INFO] 10.217.0.60:44704 - 44979 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000822413s 2025-08-13T20:00:43.662573931+00:00 stdout F [INFO] 10.217.0.60:55017 - 30285 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001868524s 2025-08-13T20:00:43.662573931+00:00 stdout F [INFO] 10.217.0.60:59906 - 48194 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001780561s 2025-08-13T20:00:43.678353481+00:00 stdout F [INFO] 10.217.0.60:55405 - 6328 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000933286s 2025-08-13T20:00:43.678353481+00:00 stdout F [INFO] 10.217.0.60:47560 - 58287 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001986826s 2025-08-13T20:00:43.679677589+00:00 stdout F [INFO] 10.217.0.60:41177 - 47820 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001985617s 2025-08-13T20:00:43.686079122+00:00 stdout F [INFO] 10.217.0.60:35239 - 45681 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008523043s 2025-08-13T20:00:43.729225602+00:00 stdout F [INFO] 10.217.0.60:36044 - 4180 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001164383s 2025-08-13T20:00:43.729225602+00:00 stdout F [INFO] 10.217.0.60:41479 - 54020 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001488832s 2025-08-13T20:00:43.749673445+00:00 stdout F [INFO] 10.217.0.60:51760 - 2234 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001074331s 2025-08-13T20:00:43.750405596+00:00 stdout F [INFO] 10.217.0.60:56738 - 50301 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001634247s 2025-08-13T20:00:43.788935744+00:00 stdout F [INFO] 10.217.0.60:58467 - 36904 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003258043s 2025-08-13T20:00:43.788935744+00:00 stdout F [INFO] 10.217.0.60:51533 - 39214 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003825169s 2025-08-13T20:00:43.806927437+00:00 stdout F [INFO] 10.217.0.60:34408 - 40072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070067s 2025-08-13T20:00:43.807457072+00:00 stdout F [INFO] 10.217.0.60:49017 - 43687 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001157863s 2025-08-13T20:00:43.848398340+00:00 stdout F [INFO] 10.217.0.60:53118 - 52132 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000759662s 2025-08-13T20:00:43.848634737+00:00 stdout F [INFO] 10.217.0.60:35567 - 5264 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000990089s 2025-08-13T20:00:43.872894978+00:00 stdout F [INFO] 10.217.0.60:45098 - 58996 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001257796s 2025-08-13T20:00:43.876055308+00:00 stdout F [INFO] 10.217.0.60:41662 - 8062 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003664225s 2025-08-13T20:00:43.922988587+00:00 stdout F [INFO] 10.217.0.60:60175 - 57474 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000804053s 2025-08-13T20:00:43.923065349+00:00 stdout F [INFO] 10.217.0.60:58165 - 10664 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001026449s 2025-08-13T20:00:43.935124723+00:00 stdout F [INFO] 10.217.0.60:37231 - 28474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776202s 2025-08-13T20:00:43.935243866+00:00 stdout F [INFO] 10.217.0.60:34968 - 12153 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001174304s 2025-08-13T20:00:44.003189984+00:00 stdout F [INFO] 10.217.0.60:37021 - 40929 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000838734s 2025-08-13T20:00:44.003408840+00:00 stdout F [INFO] 10.217.0.60:38598 - 52744 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001257846s 2025-08-13T20:00:44.042574607+00:00 stdout F [INFO] 10.217.0.60:34231 - 46460 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000661409s 2025-08-13T20:00:44.043194384+00:00 stdout F [INFO] 10.217.0.60:58181 - 15108 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001057461s 2025-08-13T20:00:44.097995907+00:00 stdout F [INFO] 10.217.0.60:39022 - 58329 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000672049s 2025-08-13T20:00:44.098756259+00:00 stdout F [INFO] 10.217.0.60:57403 - 10634 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001475332s 2025-08-13T20:00:44.111347118+00:00 stdout F [INFO] 10.217.0.60:37653 - 37404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000602627s 2025-08-13T20:00:44.111596875+00:00 stdout F [INFO] 10.217.0.60:47515 - 2054 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467333s 2025-08-13T20:00:44.238898615+00:00 stdout F [INFO] 10.217.0.60:42808 - 16861 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000642628s 2025-08-13T20:00:44.238898615+00:00 stdout F [INFO] 10.217.0.60:48067 - 30534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001094392s 2025-08-13T20:00:44.239324577+00:00 stdout F [INFO] 10.217.0.60:35307 - 23336 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001235565s 2025-08-13T20:00:44.239324577+00:00 stdout F [INFO] 10.217.0.60:39319 - 35875 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001458431s 2025-08-13T20:00:44.306349678+00:00 stdout F [INFO] 10.217.0.60:38046 - 11013 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001185764s 2025-08-13T20:00:44.306349678+00:00 stdout F [INFO] 10.217.0.60:43413 - 39794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105296s 2025-08-13T20:00:44.308755196+00:00 stdout F [INFO] 10.217.0.60:54687 - 38217 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004081087s 2025-08-13T20:00:44.309022484+00:00 stdout F [INFO] 10.217.0.60:43534 - 29464 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004097267s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:53728 - 6570 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002602834s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:41092 - 5177 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005096716s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:35936 - 46005 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005082925s 2025-08-13T20:00:44.373211944+00:00 stdout F [INFO] 10.217.0.60:39366 - 34168 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006175236s 2025-08-13T20:00:44.467058530+00:00 stdout F [INFO] 10.217.0.60:50003 - 16 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005760904s 2025-08-13T20:00:44.470581051+00:00 stdout F [INFO] 10.217.0.60:59145 - 51973 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00663318s 2025-08-13T20:00:44.561180964+00:00 stdout F [INFO] 10.217.0.60:38896 - 9837 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007873105s 2025-08-13T20:00:44.562378618+00:00 stdout F [INFO] 10.217.0.60:57914 - 49009 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008189754s 2025-08-13T20:00:44.565446676+00:00 stdout F [INFO] 10.217.0.60:44070 - 11996 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003979883s 2025-08-13T20:00:44.570577672+00:00 stdout F [INFO] 10.217.0.60:58860 - 63938 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008014928s 2025-08-13T20:00:44.634733022+00:00 stdout F [INFO] 10.217.0.60:57237 - 12292 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00106751s 2025-08-13T20:00:44.635281027+00:00 stdout F [INFO] 10.217.0.60:50472 - 10469 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001678718s 2025-08-13T20:00:44.636369068+00:00 stdout F [INFO] 10.217.0.60:59839 - 51565 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002900223s 2025-08-13T20:00:44.636465431+00:00 stdout F [INFO] 10.217.0.60:34556 - 21555 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002778619s 2025-08-13T20:00:44.718334255+00:00 stdout F [INFO] 10.217.0.60:47654 - 43073 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009952494s 2025-08-13T20:00:44.725470629+00:00 stdout F [INFO] 10.217.0.60:37658 - 61873 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007332789s 2025-08-13T20:00:44.725470629+00:00 stdout F [INFO] 10.217.0.60:47601 - 29425 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007493024s 2025-08-13T20:00:44.725535371+00:00 stdout F [INFO] 10.217.0.60:36558 - 10190 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007303668s 2025-08-13T20:00:44.804289786+00:00 stdout F [INFO] 10.217.0.60:59348 - 55222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001225224s 2025-08-13T20:00:44.807936300+00:00 stdout F [INFO] 10.217.0.60:40745 - 55897 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004331593s 2025-08-13T20:00:44.872898843+00:00 stdout F [INFO] 10.217.0.60:35807 - 51060 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934206s 2025-08-13T20:00:44.872898843+00:00 stdout F [INFO] 10.217.0.60:45328 - 15524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001419791s 2025-08-13T20:00:44.946685987+00:00 stdout F [INFO] 10.217.0.60:52632 - 32111 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002485871s 2025-08-13T20:00:44.948327163+00:00 stdout F [INFO] 10.217.0.60:59941 - 31677 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775922s 2025-08-13T20:00:45.044009342+00:00 stdout F [INFO] 10.217.0.60:36355 - 57491 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003067507s 2025-08-13T20:00:45.044009342+00:00 stdout F [INFO] 10.217.0.60:49443 - 46875 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004174929s 2025-08-13T20:00:45.067966155+00:00 stdout F [INFO] 10.217.0.60:45455 - 970 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007233066s 2025-08-13T20:00:45.069930651+00:00 stdout F [INFO] 10.217.0.60:47235 - 43002 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007697979s 2025-08-13T20:00:45.128341376+00:00 stdout F [INFO] 10.217.0.60:48452 - 47585 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004124938s 2025-08-13T20:00:45.128606654+00:00 stdout F [INFO] 10.217.0.60:43867 - 19263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00526546s 2025-08-13T20:00:45.135204322+00:00 stdout F [INFO] 10.217.0.60:37334 - 26237 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00631382s 2025-08-13T20:00:45.139650159+00:00 stdout F [INFO] 10.217.0.60:36882 - 30502 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006561027s 2025-08-13T20:00:45.152938708+00:00 stdout F [INFO] 10.217.0.60:49344 - 27017 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001373699s 2025-08-13T20:00:45.153824703+00:00 stdout F [INFO] 10.217.0.60:33307 - 39959 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002010507s 2025-08-13T20:00:45.207455652+00:00 stdout F [INFO] 10.217.0.60:60805 - 45507 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001168484s 2025-08-13T20:00:45.209456329+00:00 stdout F [INFO] 10.217.0.60:36405 - 47965 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00210645s 2025-08-13T20:00:45.212187877+00:00 stdout F [INFO] 10.217.0.60:40040 - 8629 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002089839s 2025-08-13T20:00:45.212291200+00:00 stdout F [INFO] 10.217.0.60:50600 - 32146 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002621445s 2025-08-13T20:00:45.286609639+00:00 stdout F [INFO] 10.217.0.60:39091 - 50762 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002044838s 2025-08-13T20:00:45.286928948+00:00 stdout F [INFO] 10.217.0.60:39286 - 6970 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0024663s 2025-08-13T20:00:45.297360786+00:00 stdout F [INFO] 10.217.0.60:45099 - 30506 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001081961s 2025-08-13T20:00:45.297486899+00:00 stdout F [INFO] 10.217.0.60:40857 - 4648 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000881935s 2025-08-13T20:00:45.358146569+00:00 stdout F [INFO] 10.217.0.60:54841 - 9214 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001787301s 2025-08-13T20:00:45.358523760+00:00 stdout F [INFO] 10.217.0.60:56436 - 23377 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002150051s 2025-08-13T20:00:45.389388330+00:00 stdout F [INFO] 10.217.0.60:54857 - 31695 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006964878s 2025-08-13T20:00:45.390320876+00:00 stdout F [INFO] 10.217.0.60:43585 - 49599 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008879413s 2025-08-13T20:00:45.413340543+00:00 stdout F [INFO] 10.217.0.60:59884 - 19312 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00456255s 2025-08-13T20:00:45.416187324+00:00 stdout F [INFO] 10.217.0.60:42278 - 12738 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005400374s 2025-08-13T20:00:45.539007396+00:00 stdout F [INFO] 10.217.0.60:59731 - 36791 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.046739423s 2025-08-13T20:00:45.539007396+00:00 stdout F [INFO] 10.217.0.60:58492 - 4853 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.047440983s 2025-08-13T20:00:45.543086592+00:00 stdout F [INFO] 10.217.0.60:44716 - 64555 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001795521s 2025-08-13T20:00:45.568481187+00:00 stdout F [INFO] 10.217.0.60:55385 - 23971 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002639415s 2025-08-13T20:00:45.661917681+00:00 stdout F [INFO] 10.217.0.60:48047 - 11680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.019125105s 2025-08-13T20:00:45.662415625+00:00 stdout F [INFO] 10.217.0.60:39642 - 1573 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.019904978s 2025-08-13T20:00:45.664545706+00:00 stdout F [INFO] 10.217.0.60:36013 - 44109 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002240734s 2025-08-13T20:00:45.664806083+00:00 stdout F [INFO] 10.217.0.60:44076 - 19830 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002743738s 2025-08-13T20:00:45.711686910+00:00 stdout F [INFO] 10.217.0.60:58769 - 4854 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001804002s 2025-08-13T20:00:45.711913676+00:00 stdout F [INFO] 10.217.0.60:43779 - 32061 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002489711s 2025-08-13T20:00:45.763128247+00:00 stdout F [INFO] 10.217.0.60:38757 - 37067 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001899084s 2025-08-13T20:00:45.763128247+00:00 stdout F [INFO] 10.217.0.60:47541 - 44972 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002489831s 2025-08-13T20:00:45.818947768+00:00 stdout F [INFO] 10.217.0.60:45022 - 59800 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004771886s 2025-08-13T20:00:45.849971173+00:00 stdout F [INFO] 10.217.0.60:41359 - 41174 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.033949028s 2025-08-13T20:00:45.946985419+00:00 stdout F [INFO] 10.217.0.60:55262 - 56115 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025679812s 2025-08-13T20:00:45.946985419+00:00 stdout F [INFO] 10.217.0.60:57747 - 17936 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.026174786s 2025-08-13T20:00:46.032202749+00:00 stdout F [INFO] 10.217.0.60:41161 - 13567 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013157245s 2025-08-13T20:00:46.032202749+00:00 stdout F [INFO] 10.217.0.60:56298 - 13419 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013566517s 2025-08-13T20:00:46.065677784+00:00 stdout F [INFO] 10.217.0.60:44922 - 19924 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008302017s 2025-08-13T20:00:46.066911359+00:00 stdout F [INFO] 10.217.0.60:56113 - 53771 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0091097s 2025-08-13T20:00:46.151485830+00:00 stdout F [INFO] 10.217.0.60:52318 - 47301 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000995248s 2025-08-13T20:00:46.173898789+00:00 stdout F [INFO] 10.217.0.60:43487 - 39893 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016371676s 2025-08-13T20:00:46.174119226+00:00 stdout F [INFO] 10.217.0.60:40927 - 1924 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.023281174s 2025-08-13T20:00:46.174286081+00:00 stdout F [INFO] 10.217.0.60:58486 - 48914 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016530531s 2025-08-13T20:00:46.191491961+00:00 stdout F [INFO] 10.217.0.60:40447 - 49758 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.01370845s 2025-08-13T20:00:46.192810969+00:00 stdout F [INFO] 10.217.0.60:54234 - 36503 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000944556s 2025-08-13T20:00:46.334162769+00:00 stdout F [INFO] 10.217.0.28:54681 - 4446 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001799722s 2025-08-13T20:00:46.350929327+00:00 stdout F [INFO] 10.217.0.28:50313 - 58891 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.018742515s 2025-08-13T20:00:46.411343540+00:00 stdout F [INFO] 10.217.0.60:52088 - 60139 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.033143935s 2025-08-13T20:00:46.426122801+00:00 stdout F [INFO] 10.217.0.60:33487 - 5955 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.015007558s 2025-08-13T20:00:46.430542497+00:00 stdout F [INFO] 10.217.0.60:35828 - 34714 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011280352s 2025-08-13T20:00:46.430755333+00:00 stdout F [INFO] 10.217.0.60:40533 - 36830 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012218549s 2025-08-13T20:00:46.433404079+00:00 stdout F [INFO] 10.217.0.60:44919 - 20101 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014276107s 2025-08-13T20:00:46.453394739+00:00 stdout F [INFO] 10.217.0.60:39833 - 42467 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00278477s 2025-08-13T20:00:46.535225702+00:00 stdout F [INFO] 10.217.0.60:43467 - 41891 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013027052s 2025-08-13T20:00:46.538984440+00:00 stdout F [INFO] 10.217.0.60:47507 - 26638 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017371926s 2025-08-13T20:00:46.540622626+00:00 stdout F [INFO] 10.217.0.60:44240 - 8524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004996362s 2025-08-13T20:00:46.540914895+00:00 stdout F [INFO] 10.217.0.60:48175 - 63130 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005273221s 2025-08-13T20:00:46.634162373+00:00 stdout F [INFO] 10.217.0.60:37456 - 31623 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012673661s 2025-08-13T20:00:46.644251291+00:00 stdout F [INFO] 10.217.0.60:53598 - 25926 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.021997027s 2025-08-13T20:00:46.754285509+00:00 stdout F [INFO] 10.217.0.60:44935 - 57055 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003675785s 2025-08-13T20:00:46.756616595+00:00 stdout F [INFO] 10.217.0.60:51680 - 59997 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006043852s 2025-08-13T20:00:46.768956457+00:00 stdout F [INFO] 10.217.0.60:60644 - 39110 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011803606s 2025-08-13T20:00:46.769154143+00:00 stdout F [INFO] 10.217.0.60:35175 - 29320 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012395984s 2025-08-13T20:00:46.769327967+00:00 stdout F [INFO] 10.217.0.60:60554 - 44922 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.013082673s 2025-08-13T20:00:46.769385279+00:00 stdout F [INFO] 10.217.0.60:54831 - 58016 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.013392642s 2025-08-13T20:00:46.839294423+00:00 stdout F [INFO] 10.217.0.60:56661 - 11234 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008934975s 2025-08-13T20:00:46.854649420+00:00 stdout F [INFO] 10.217.0.60:54181 - 15570 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023440799s 2025-08-13T20:00:46.943090421+00:00 stdout F [INFO] 10.217.0.60:56336 - 41120 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.034635307s 2025-08-13T20:00:46.943090421+00:00 stdout F [INFO] 10.217.0.60:44456 - 12995 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.035224234s 2025-08-13T20:00:47.116218208+00:00 stdout F [INFO] 10.217.0.60:37550 - 43782 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020021221s 2025-08-13T20:00:47.116218208+00:00 stdout F [INFO] 10.217.0.60:40522 - 2678 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.02034309s 2025-08-13T20:00:47.126230043+00:00 stdout F [INFO] 10.217.0.60:36157 - 52261 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011332573s 2025-08-13T20:00:47.126230043+00:00 stdout F [INFO] 10.217.0.60:47728 - 17977 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011963282s 2025-08-13T20:00:47.241771228+00:00 stdout F [INFO] 10.217.0.60:41268 - 12463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008230535s 2025-08-13T20:00:47.242500149+00:00 stdout F [INFO] 10.217.0.60:48998 - 45260 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009925393s 2025-08-13T20:00:47.272471063+00:00 stdout F [INFO] 10.217.0.60:48206 - 29639 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010554841s 2025-08-13T20:00:47.272471063+00:00 stdout F [INFO] 10.217.0.60:46995 - 41796 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011676323s 2025-08-13T20:00:47.353176194+00:00 stdout F [INFO] 10.217.0.60:52090 - 63475 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001874713s 2025-08-13T20:00:47.353440092+00:00 stdout F [INFO] 10.217.0.60:45588 - 13464 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767912s 2025-08-13T20:00:47.376652584+00:00 stdout F [INFO] 10.217.0.60:56400 - 20881 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003841279s 2025-08-13T20:00:47.379424483+00:00 stdout F [INFO] 10.217.0.60:49524 - 56709 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004009245s 2025-08-13T20:00:47.593026593+00:00 stdout F [INFO] 10.217.0.60:44764 - 44056 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005789535s 2025-08-13T20:00:47.595032430+00:00 stdout F [INFO] 10.217.0.60:53477 - 46362 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006140115s 2025-08-13T20:00:47.622130023+00:00 stdout F [INFO] 10.217.0.60:33700 - 42114 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014865764s 2025-08-13T20:00:47.622935996+00:00 stdout F [INFO] 10.217.0.60:48559 - 14693 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004471528s 2025-08-13T20:00:47.721827146+00:00 stdout F [INFO] 10.217.0.60:44528 - 15152 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007532715s 2025-08-13T20:00:47.722296829+00:00 stdout F [INFO] 10.217.0.60:50872 - 49613 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007892625s 2025-08-13T20:00:47.748162137+00:00 stdout F [INFO] 10.217.0.60:38608 - 25117 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012239068s 2025-08-13T20:00:47.748162137+00:00 stdout F [INFO] 10.217.0.60:42298 - 23915 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009985865s 2025-08-13T20:00:47.752151451+00:00 stdout F [INFO] 10.217.0.60:52605 - 33456 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002111341s 2025-08-13T20:00:47.752151451+00:00 stdout F [INFO] 10.217.0.60:54884 - 59917 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002028918s 2025-08-13T20:00:47.804053931+00:00 stdout F [INFO] 10.217.0.57:34181 - 56876 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.032832606s 2025-08-13T20:00:47.809603409+00:00 stdout F [INFO] 10.217.0.57:53332 - 16628 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007190365s 2025-08-13T20:00:47.853002786+00:00 stdout F [INFO] 10.217.0.60:60728 - 54856 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011329993s 2025-08-13T20:00:47.853002786+00:00 stdout F [INFO] 10.217.0.60:47761 - 10494 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.012030293s 2025-08-13T20:00:47.864434812+00:00 stdout F [INFO] 10.217.0.60:51567 - 53307 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012441734s 2025-08-13T20:00:47.881486958+00:00 stdout F [INFO] 10.217.0.60:37162 - 20491 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.026721132s 2025-08-13T20:00:47.918687149+00:00 stdout F [INFO] 10.217.0.60:33731 - 24998 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009373047s 2025-08-13T20:00:47.941229022+00:00 stdout F [INFO] 10.217.0.60:51821 - 8384 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.031317193s 2025-08-13T20:00:47.941717446+00:00 stdout F [INFO] 10.217.0.60:47409 - 36059 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006863795s 2025-08-13T20:00:47.942178689+00:00 stdout F [INFO] 10.217.0.60:50438 - 6738 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007836173s 2025-08-13T20:00:48.101958315+00:00 stdout F [INFO] 10.217.0.60:37129 - 8864 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004393355s 2025-08-13T20:00:48.113905356+00:00 stdout F [INFO] 10.217.0.60:46038 - 64197 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012032263s 2025-08-13T20:00:48.113905356+00:00 stdout F [INFO] 10.217.0.60:52966 - 53758 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012346242s 2025-08-13T20:00:48.140736431+00:00 stdout F [INFO] 10.217.0.60:57444 - 30845 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.047522346s 2025-08-13T20:00:48.221997768+00:00 stdout F [INFO] 10.217.0.60:52176 - 15093 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000942287s 2025-08-13T20:00:48.221997768+00:00 stdout F [INFO] 10.217.0.60:43544 - 8170 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001601525s 2025-08-13T20:00:48.288368820+00:00 stdout F [INFO] 10.217.0.60:47883 - 29163 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005282701s 2025-08-13T20:00:48.288368820+00:00 stdout F [INFO] 10.217.0.60:40600 - 44845 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005914099s 2025-08-13T20:00:48.300356172+00:00 stdout F [INFO] 10.217.0.60:60257 - 34117 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001176293s 2025-08-13T20:00:48.300356172+00:00 stdout F [INFO] 10.217.0.60:54557 - 16227 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005456785s 2025-08-13T20:00:48.333949810+00:00 stdout F [INFO] 10.217.0.60:36088 - 663 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001502822s 2025-08-13T20:00:48.333949810+00:00 stdout F [INFO] 10.217.0.60:54236 - 5321 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001722179s 2025-08-13T20:00:48.378015887+00:00 stdout F [INFO] 10.217.0.60:57981 - 48896 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00138745s 2025-08-13T20:00:48.378015887+00:00 stdout F [INFO] 10.217.0.60:57451 - 16785 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140207s 2025-08-13T20:00:48.450697889+00:00 stdout F [INFO] 10.217.0.60:48824 - 688 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010017566s 2025-08-13T20:00:48.464770230+00:00 stdout F [INFO] 10.217.0.60:48120 - 24857 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.029571773s 2025-08-13T20:00:48.465644615+00:00 stdout F [INFO] 10.217.0.60:41695 - 60353 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007614157s 2025-08-13T20:00:48.518190334+00:00 stdout F [INFO] 10.217.0.60:58007 - 53703 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.033442244s 2025-08-13T20:00:48.538927885+00:00 stdout F [INFO] 10.217.0.60:48976 - 17944 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008436611s 2025-08-13T20:00:48.538927885+00:00 stdout F [INFO] 10.217.0.60:56226 - 25509 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008007849s 2025-08-13T20:00:48.604984298+00:00 stdout F [INFO] 10.217.0.60:39769 - 21331 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002969005s 2025-08-13T20:00:48.604984298+00:00 stdout F [INFO] 10.217.0.60:58238 - 34786 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003174291s 2025-08-13T20:00:48.626890253+00:00 stdout F [INFO] 10.217.0.60:57081 - 27014 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001323508s 2025-08-13T20:00:48.633335527+00:00 stdout F [INFO] 10.217.0.60:55145 - 13147 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011070895s 2025-08-13T20:00:48.677906238+00:00 stdout F [INFO] 10.217.0.60:48452 - 48371 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007958427s 2025-08-13T20:00:48.677906238+00:00 stdout F [INFO] 10.217.0.60:59529 - 56136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00839607s 2025-08-13T20:00:48.817630152+00:00 stdout F [INFO] 10.217.0.60:35230 - 49847 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.019805685s 2025-08-13T20:00:48.843040606+00:00 stdout F [INFO] 10.217.0.60:34693 - 19528 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.037023836s 2025-08-13T20:00:48.879189817+00:00 stdout F [INFO] 10.217.0.60:34184 - 25098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.015109961s 2025-08-13T20:00:48.894400171+00:00 stdout F [INFO] 10.217.0.60:34465 - 55609 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.042471811s 2025-08-13T20:00:48.957136080+00:00 stdout F [INFO] 10.217.0.60:38476 - 51226 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008033729s 2025-08-13T20:00:48.991085998+00:00 stdout F [INFO] 10.217.0.60:34121 - 45872 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018326873s 2025-08-13T20:00:49.007615269+00:00 stdout F [INFO] 10.217.0.60:52068 - 8984 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006058543s 2025-08-13T20:00:49.007615269+00:00 stdout F [INFO] 10.217.0.60:41706 - 21787 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005875008s 2025-08-13T20:00:49.091360167+00:00 stdout F [INFO] 10.217.0.60:45325 - 10432 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.01261853s 2025-08-13T20:00:49.091360167+00:00 stdout F [INFO] 10.217.0.60:43459 - 12755 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.01299332s 2025-08-13T20:00:49.120658922+00:00 stdout F [INFO] 10.217.0.60:60336 - 33140 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003302204s 2025-08-13T20:00:49.120658922+00:00 stdout F [INFO] 10.217.0.60:33883 - 18191 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004276132s 2025-08-13T20:00:49.186982213+00:00 stdout F [INFO] 10.217.0.60:45906 - 53699 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011465726s 2025-08-13T20:00:49.189545497+00:00 stdout F [INFO] 10.217.0.60:39046 - 2349 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.013666889s 2025-08-13T20:00:49.335934651+00:00 stdout F [INFO] 10.217.0.60:45971 - 35326 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003226722s 2025-08-13T20:00:49.339332508+00:00 stdout F [INFO] 10.217.0.60:55148 - 46935 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006413193s 2025-08-13T20:00:49.401101669+00:00 stdout F [INFO] 10.217.0.60:43142 - 57420 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.029564173s 2025-08-13T20:00:49.401101669+00:00 stdout F [INFO] 10.217.0.60:53476 - 47620 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.030470539s 2025-08-13T20:00:49.475906792+00:00 stdout F [INFO] 10.217.0.60:44808 - 38767 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00664648s 2025-08-13T20:00:49.476184320+00:00 stdout F [INFO] 10.217.0.60:59050 - 11320 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007127623s 2025-08-13T20:00:49.499986538+00:00 stdout F [INFO] 10.217.0.60:56114 - 55312 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001972917s 2025-08-13T20:00:49.500952016+00:00 stdout F [INFO] 10.217.0.60:58021 - 19747 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003528551s 2025-08-13T20:00:49.585704093+00:00 stdout F [INFO] 10.217.0.60:53492 - 45836 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004307853s 2025-08-13T20:00:49.587022820+00:00 stdout F [INFO] 10.217.0.60:59742 - 9936 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006042932s 2025-08-13T20:00:49.619867467+00:00 stdout F [INFO] 10.217.0.60:40608 - 36044 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002080209s 2025-08-13T20:00:49.621386450+00:00 stdout F [INFO] 10.217.0.60:54497 - 59787 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003969453s 2025-08-13T20:00:49.738205081+00:00 stdout F [INFO] 10.217.0.60:59059 - 46154 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0013962s 2025-08-13T20:00:49.738205081+00:00 stdout F [INFO] 10.217.0.60:37185 - 27738 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002493211s 2025-08-13T20:00:49.742536965+00:00 stdout F [INFO] 10.217.0.60:49568 - 23639 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006554967s 2025-08-13T20:00:49.742898255+00:00 stdout F [INFO] 10.217.0.60:60560 - 13861 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007278567s 2025-08-13T20:00:49.772066467+00:00 stdout F [INFO] 10.217.0.60:37772 - 50665 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.009398078s 2025-08-13T20:00:49.798546172+00:00 stdout F [INFO] 10.217.0.60:41603 - 50543 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.035662117s 2025-08-13T20:00:49.876354960+00:00 stdout F [INFO] 10.217.0.60:39133 - 51596 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003322455s 2025-08-13T20:00:49.879992484+00:00 stdout F [INFO] 10.217.0.60:48144 - 11011 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004976322s 2025-08-13T20:00:49.899650055+00:00 stdout F [INFO] 10.217.0.60:34077 - 56114 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001278427s 2025-08-13T20:00:49.900298013+00:00 stdout F [INFO] 10.217.0.60:47986 - 62439 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001832252s 2025-08-13T20:00:50.012628316+00:00 stdout F [INFO] 10.217.0.60:39637 - 26080 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00632463s 2025-08-13T20:00:50.012688988+00:00 stdout F [INFO] 10.217.0.60:53716 - 534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010245322s 2025-08-13T20:00:50.051583307+00:00 stdout F [INFO] 10.217.0.60:55657 - 19501 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004846708s 2025-08-13T20:00:50.058106233+00:00 stdout F [INFO] 10.217.0.60:40058 - 18678 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011136948s 2025-08-13T20:00:50.141707167+00:00 stdout F [INFO] 10.217.0.60:60817 - 13198 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008362138s 2025-08-13T20:00:50.142358925+00:00 stdout F [INFO] 10.217.0.60:46182 - 57950 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009989265s 2025-08-13T20:00:50.152990589+00:00 stdout F [INFO] 10.217.0.60:60651 - 47522 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001214825s 2025-08-13T20:00:50.153652057+00:00 stdout F [INFO] 10.217.0.60:51945 - 46133 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005919309s 2025-08-13T20:00:50.216588312+00:00 stdout F [INFO] 10.217.0.60:36086 - 48239 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009558012s 2025-08-13T20:00:50.230207140+00:00 stdout F [INFO] 10.217.0.60:41438 - 20130 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001547914s 2025-08-13T20:00:50.246760042+00:00 stdout F [INFO] 10.217.0.60:37703 - 41222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000947307s 2025-08-13T20:00:50.286219267+00:00 stdout F [INFO] 10.217.0.60:44233 - 28746 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.038549609s 2025-08-13T20:00:50.426368064+00:00 stdout F [INFO] 10.217.0.60:59761 - 18861 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103962s 2025-08-13T20:00:50.426610421+00:00 stdout F [INFO] 10.217.0.60:50474 - 18337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001572425s 2025-08-13T20:00:50.476992067+00:00 stdout F [INFO] 10.217.0.60:45294 - 4861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011467187s 2025-08-13T20:00:50.479194270+00:00 stdout F [INFO] 10.217.0.60:60484 - 20544 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014007289s 2025-08-13T20:00:50.487049573+00:00 stdout F [INFO] 10.217.0.60:50323 - 53459 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00952296s 2025-08-13T20:00:50.487462955+00:00 stdout F [INFO] 10.217.0.60:53709 - 16232 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010004194s 2025-08-13T20:00:50.631761129+00:00 stdout F [INFO] 10.217.0.60:43049 - 58271 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004848749s 2025-08-13T20:00:50.631761129+00:00 stdout F [INFO] 10.217.0.60:51596 - 4810 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005409235s 2025-08-13T20:00:50.632212972+00:00 stdout F [INFO] 10.217.0.60:36500 - 56939 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005445896s 2025-08-13T20:00:50.653875190+00:00 stdout F [INFO] 10.217.0.60:37162 - 20095 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002929244s 2025-08-13T20:00:50.710595157+00:00 stdout F [INFO] 10.217.0.60:53343 - 65404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001281806s 2025-08-13T20:00:50.710917926+00:00 stdout F [INFO] 10.217.0.60:46189 - 9933 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002245714s 2025-08-13T20:00:50.741703954+00:00 stdout F [INFO] 10.217.0.60:43271 - 32831 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003597993s 2025-08-13T20:00:50.741906950+00:00 stdout F [INFO] 10.217.0.60:49322 - 45791 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004130768s 2025-08-13T20:00:50.814239543+00:00 stdout F [INFO] 10.217.0.60:43252 - 38980 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008340968s 2025-08-13T20:00:50.821518250+00:00 stdout F [INFO] 10.217.0.60:33887 - 23010 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008209924s 2025-08-13T20:00:50.822469257+00:00 stdout F [INFO] 10.217.0.60:52123 - 64764 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000922446s 2025-08-13T20:00:50.822531939+00:00 stdout F [INFO] 10.217.0.60:47079 - 60051 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000604947s 2025-08-13T20:00:50.902917641+00:00 stdout F [INFO] 10.217.0.60:37685 - 16757 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001947336s 2025-08-13T20:00:50.903207449+00:00 stdout F [INFO] 10.217.0.60:57688 - 53872 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00208412s 2025-08-13T20:00:50.903287672+00:00 stdout F [INFO] 10.217.0.60:60105 - 24057 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002338506s 2025-08-13T20:00:50.903567440+00:00 stdout F [INFO] 10.217.0.60:38381 - 3564 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00279449s 2025-08-13T20:00:51.002414068+00:00 stdout F [INFO] 10.217.0.60:41363 - 55003 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001155413s 2025-08-13T20:00:51.004027554+00:00 stdout F [INFO] 10.217.0.60:41385 - 36490 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104889s 2025-08-13T20:00:51.017230481+00:00 stdout F [INFO] 10.217.0.60:51407 - 52430 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003244232s 2025-08-13T20:00:51.018354993+00:00 stdout F [INFO] 10.217.0.60:48463 - 37591 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004471547s 2025-08-13T20:00:51.101497723+00:00 stdout F [INFO] 10.217.0.60:46161 - 3185 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00561465s 2025-08-13T20:00:51.101497723+00:00 stdout F [INFO] 10.217.0.60:51482 - 46090 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005485686s 2025-08-13T20:00:51.142181403+00:00 stdout F [INFO] 10.217.0.60:56445 - 47860 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004654473s 2025-08-13T20:00:51.146638801+00:00 stdout F [INFO] 10.217.0.60:45442 - 19436 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005363643s 2025-08-13T20:00:51.204234113+00:00 stdout F [INFO] 10.217.0.60:35174 - 59788 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007879465s 2025-08-13T20:00:51.204373347+00:00 stdout F [INFO] 10.217.0.60:38913 - 63826 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009311036s 2025-08-13T20:00:51.219247781+00:00 stdout F [INFO] 10.217.0.28:60652 - 32272 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005673381s 2025-08-13T20:00:51.219938511+00:00 stdout F [INFO] 10.217.0.28:33440 - 14907 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007741481s 2025-08-13T20:00:51.220274210+00:00 stdout F [INFO] 10.217.0.60:33067 - 24093 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003064477s 2025-08-13T20:00:51.220749424+00:00 stdout F [INFO] 10.217.0.60:60012 - 42704 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002688427s 2025-08-13T20:00:51.288859666+00:00 stdout F [INFO] 10.217.0.60:49681 - 14797 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000803023s 2025-08-13T20:00:51.288859666+00:00 stdout F [INFO] 10.217.0.60:44507 - 3751 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000508474s 2025-08-13T20:00:51.306944231+00:00 stdout F [INFO] 10.217.0.60:40419 - 58620 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007643068s 2025-08-13T20:00:51.306944231+00:00 stdout F [INFO] 10.217.0.60:51791 - 27961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008525664s 2025-08-13T20:00:51.375923848+00:00 stdout F [INFO] 10.217.0.60:34231 - 35820 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000928496s 2025-08-13T20:00:51.376458394+00:00 stdout F [INFO] 10.217.0.60:43604 - 31528 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001008199s 2025-08-13T20:00:51.386691535+00:00 stdout F [INFO] 10.217.0.60:60210 - 17663 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001227575s 2025-08-13T20:00:51.386691535+00:00 stdout F [INFO] 10.217.0.60:44075 - 52352 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001294827s 2025-08-13T20:00:51.611735065+00:00 stdout F [INFO] 10.217.0.60:38690 - 52947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.077360954s 2025-08-13T20:00:51.612714183+00:00 stdout F [INFO] 10.217.0.60:41948 - 6937 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.078289441s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:42928 - 19370 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001911985s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:32987 - 60196 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002667866s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:49495 - 63508 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002623465s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:52718 - 33970 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002470911s 2025-08-13T20:00:51.742160331+00:00 stdout F [INFO] 10.217.0.60:48686 - 41468 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001831722s 2025-08-13T20:00:51.742427329+00:00 stdout F [INFO] 10.217.0.60:35098 - 63774 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002177992s 2025-08-13T20:00:51.743220981+00:00 stdout F [INFO] 10.217.0.60:43032 - 9717 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001273976s 2025-08-13T20:00:51.744057455+00:00 stdout F [INFO] 10.217.0.60:34108 - 623 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002023838s 2025-08-13T20:00:51.748934934+00:00 stdout F [INFO] 10.217.0.60:40602 - 21272 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000581176s 2025-08-13T20:00:51.749930403+00:00 stdout F [INFO] 10.217.0.60:37496 - 12601 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001247316s 2025-08-13T20:00:51.822689737+00:00 stdout F [INFO] 10.217.0.60:38232 - 17431 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00665717s 2025-08-13T20:00:51.823074108+00:00 stdout F [INFO] 10.217.0.60:58055 - 13489 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007323248s 2025-08-13T20:00:51.823325085+00:00 stdout F [INFO] 10.217.0.60:47561 - 18500 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007794712s 2025-08-13T20:00:51.823613724+00:00 stdout F [INFO] 10.217.0.60:47869 - 39243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007890675s 2025-08-13T20:00:51.855978707+00:00 stdout F [INFO] 10.217.0.60:34863 - 45394 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004177349s 2025-08-13T20:00:51.867184376+00:00 stdout F [INFO] 10.217.0.60:39514 - 30757 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.012415594s 2025-08-13T20:00:51.910303916+00:00 stdout F [INFO] 10.217.0.60:49136 - 577 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003391667s 2025-08-13T20:00:51.910573373+00:00 stdout F [INFO] 10.217.0.60:42532 - 1690 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004396395s 2025-08-13T20:00:51.942863424+00:00 stdout F [INFO] 10.217.0.60:33964 - 26144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002259924s 2025-08-13T20:00:51.945255812+00:00 stdout F [INFO] 10.217.0.60:58693 - 49707 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004602351s 2025-08-13T20:00:51.984627495+00:00 stdout F [INFO] 10.217.0.60:38197 - 6440 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012006032s 2025-08-13T20:00:51.984761369+00:00 stdout F [INFO] 10.217.0.60:55819 - 1373 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012547668s 2025-08-13T20:00:51.999348845+00:00 stdout F [INFO] 10.217.0.60:44642 - 63594 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004095017s 2025-08-13T20:00:51.999725875+00:00 stdout F [INFO] 10.217.0.60:53602 - 61788 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004738385s 2025-08-13T20:00:52.015235118+00:00 stdout F [INFO] 10.217.0.60:52813 - 36985 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001107691s 2025-08-13T20:00:52.015343841+00:00 stdout F [INFO] 10.217.0.60:55494 - 9494 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001735579s 2025-08-13T20:00:52.067732165+00:00 stdout F [INFO] 10.217.0.60:56886 - 32543 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002148031s 2025-08-13T20:00:52.068143936+00:00 stdout F [INFO] 10.217.0.60:52460 - 57024 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002538542s 2025-08-13T20:00:52.108533718+00:00 stdout F [INFO] 10.217.0.60:51310 - 53415 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016064728s 2025-08-13T20:00:52.109018332+00:00 stdout F [INFO] 10.217.0.60:59350 - 62708 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016686755s 2025-08-13T20:00:52.174940892+00:00 stdout F [INFO] 10.217.0.60:39079 - 60775 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005555518s 2025-08-13T20:00:52.183123615+00:00 stdout F [INFO] 10.217.0.60:57759 - 31828 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013467164s 2025-08-13T20:00:52.184567376+00:00 stdout F [INFO] 10.217.0.60:58451 - 2791 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001231495s 2025-08-13T20:00:52.186513841+00:00 stdout F [INFO] 10.217.0.60:32892 - 3317 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002866262s 2025-08-13T20:00:52.252289647+00:00 stdout F [INFO] 10.217.0.60:49551 - 64846 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001023919s 2025-08-13T20:00:52.252707309+00:00 stdout F [INFO] 10.217.0.60:49202 - 12073 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00142371s 2025-08-13T20:00:52.342704175+00:00 stdout F [INFO] 10.217.0.60:43192 - 32993 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.016870411s 2025-08-13T20:00:52.342704175+00:00 stdout F [INFO] 10.217.0.60:38616 - 39260 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017765217s 2025-08-13T20:00:52.343319953+00:00 stdout F [INFO] 10.217.0.60:32855 - 42141 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.034283607s 2025-08-13T20:00:52.343319953+00:00 stdout F [INFO] 10.217.0.60:58760 - 51997 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.034872634s 2025-08-13T20:00:52.363666863+00:00 stdout F [INFO] 10.217.0.60:34342 - 39912 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002230364s 2025-08-13T20:00:52.364055264+00:00 stdout F [INFO] 10.217.0.60:40112 - 47926 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003496609s 2025-08-13T20:00:52.368147640+00:00 stdout F [INFO] 10.217.0.60:41418 - 27502 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000772232s 2025-08-13T20:00:52.368147640+00:00 stdout F [INFO] 10.217.0.60:54568 - 8178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000862675s 2025-08-13T20:00:52.417474187+00:00 stdout F [INFO] 10.217.0.60:34401 - 41823 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001162503s 2025-08-13T20:00:52.417474187+00:00 stdout F [INFO] 10.217.0.60:54281 - 1628 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000844794s 2025-08-13T20:00:52.425915018+00:00 stdout F [INFO] 10.217.0.60:44410 - 34400 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001689318s 2025-08-13T20:00:52.425915018+00:00 stdout F [INFO] 10.217.0.60:41198 - 33445 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002418249s 2025-08-13T20:00:52.478947420+00:00 stdout F [INFO] 10.217.0.60:41674 - 36850 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004340144s 2025-08-13T20:00:52.478947420+00:00 stdout F [INFO] 10.217.0.60:47806 - 36647 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004133997s 2025-08-13T20:00:52.503407487+00:00 stdout F [INFO] 10.217.0.60:49901 - 36499 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00488833s 2025-08-13T20:00:52.503434608+00:00 stdout F [INFO] 10.217.0.60:56089 - 38963 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003719806s 2025-08-13T20:00:52.612027434+00:00 stdout F [INFO] 10.217.0.60:49947 - 17990 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002598664s 2025-08-13T20:00:52.612027434+00:00 stdout F [INFO] 10.217.0.60:42773 - 36346 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003213992s 2025-08-13T20:00:52.631294194+00:00 stdout F [INFO] 10.217.0.60:35440 - 16529 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007477253s 2025-08-13T20:00:52.631294194+00:00 stdout F [INFO] 10.217.0.60:38024 - 19973 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00633636s 2025-08-13T20:00:52.669176664+00:00 stdout F [INFO] 10.217.0.60:60705 - 31478 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001868913s 2025-08-13T20:00:52.686242461+00:00 stdout F [INFO] 10.217.0.60:40476 - 8782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017952332s 2025-08-13T20:00:52.700432845+00:00 stdout F [INFO] 10.217.0.60:38184 - 24580 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006019771s 2025-08-13T20:00:52.700533838+00:00 stdout F [INFO] 10.217.0.60:60768 - 46385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006904217s 2025-08-13T20:00:52.708316640+00:00 stdout F [INFO] 10.217.0.60:41556 - 44564 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003196391s 2025-08-13T20:00:52.708471474+00:00 stdout F [INFO] 10.217.0.60:46736 - 43501 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003790588s 2025-08-13T20:00:52.753748055+00:00 stdout F [INFO] 10.217.0.60:40008 - 18248 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003482239s 2025-08-13T20:00:52.757647567+00:00 stdout F [INFO] 10.217.0.60:39281 - 40776 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007330079s 2025-08-13T20:00:52.793241162+00:00 stdout F [INFO] 10.217.0.57:45300 - 46926 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.014451182s 2025-08-13T20:00:52.793326134+00:00 stdout F [INFO] 10.217.0.57:35437 - 60936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008610236s 2025-08-13T20:00:52.804324888+00:00 stdout F [INFO] 10.217.0.60:50313 - 39461 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025408345s 2025-08-13T20:00:52.804324888+00:00 stdout F [INFO] 10.217.0.60:34678 - 44426 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025856147s 2025-08-13T20:00:52.860209981+00:00 stdout F [INFO] 10.217.0.60:51260 - 353 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003001965s 2025-08-13T20:00:52.861057635+00:00 stdout F [INFO] 10.217.0.60:47222 - 2537 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006474134s 2025-08-13T20:00:52.881133378+00:00 stdout F [INFO] 10.217.0.60:52091 - 13165 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000937946s 2025-08-13T20:00:52.881607451+00:00 stdout F [INFO] 10.217.0.60:54382 - 48231 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001847163s 2025-08-13T20:00:52.885077150+00:00 stdout F [INFO] 10.217.0.60:47517 - 41143 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003975963s 2025-08-13T20:00:52.901372165+00:00 stdout F [INFO] 10.217.0.60:49983 - 15446 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020226507s 2025-08-13T20:00:52.945679118+00:00 stdout F [INFO] 10.217.0.60:57504 - 17108 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002316026s 2025-08-13T20:00:52.946193043+00:00 stdout F [INFO] 10.217.0.60:55459 - 34942 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003532381s 2025-08-13T20:00:52.965326068+00:00 stdout F [INFO] 10.217.0.60:52607 - 15292 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001042949s 2025-08-13T20:00:52.966944765+00:00 stdout F [INFO] 10.217.0.60:48379 - 16992 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002693877s 2025-08-13T20:00:52.968991643+00:00 stdout F [INFO] 10.217.0.60:34347 - 19971 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001370789s 2025-08-13T20:00:52.968991643+00:00 stdout F [INFO] 10.217.0.60:58227 - 27878 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002165991s 2025-08-13T20:00:53.008765737+00:00 stdout F [INFO] 10.217.0.60:50325 - 4808 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004454127s 2025-08-13T20:00:53.008765737+00:00 stdout F [INFO] 10.217.0.60:53513 - 15907 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004383155s 2025-08-13T20:00:53.042605572+00:00 stdout F [INFO] 10.217.0.60:50040 - 6391 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001593495s 2025-08-13T20:00:53.042711485+00:00 stdout F [INFO] 10.217.0.60:56947 - 35540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001457651s 2025-08-13T20:00:53.098025152+00:00 stdout F [INFO] 10.217.0.60:47610 - 52365 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002047098s 2025-08-13T20:00:53.098424924+00:00 stdout F [INFO] 10.217.0.60:46903 - 19146 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.011319583s 2025-08-13T20:00:53.099042741+00:00 stdout F [INFO] 10.217.0.60:40196 - 10562 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0035066s 2025-08-13T20:00:53.099131124+00:00 stdout F [INFO] 10.217.0.60:40279 - 44420 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012856286s 2025-08-13T20:00:53.201288987+00:00 stdout F [INFO] 10.217.0.60:44271 - 34314 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004942361s 2025-08-13T20:00:53.201394970+00:00 stdout F [INFO] 10.217.0.60:40528 - 65373 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003594543s 2025-08-13T20:00:53.201640297+00:00 stdout F [INFO] 10.217.0.60:45273 - 26892 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005355922s 2025-08-13T20:00:53.201729409+00:00 stdout F [INFO] 10.217.0.60:57186 - 19083 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00348992s 2025-08-13T20:00:53.252955060+00:00 stdout F [INFO] 10.217.0.60:54736 - 18202 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006892836s 2025-08-13T20:00:53.255122962+00:00 stdout F [INFO] 10.217.0.60:56460 - 34298 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006607799s 2025-08-13T20:00:53.286328582+00:00 stdout F [INFO] 10.217.0.60:46439 - 33364 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006952718s 2025-08-13T20:00:53.286657981+00:00 stdout F [INFO] 10.217.0.60:50478 - 32524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006934677s 2025-08-13T20:00:53.289987276+00:00 stdout F [INFO] 10.217.0.60:47197 - 42349 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002953375s 2025-08-13T20:00:53.289987276+00:00 stdout F [INFO] 10.217.0.60:55550 - 14391 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003505219s 2025-08-13T20:00:53.340759034+00:00 stdout F [INFO] 10.217.0.60:36130 - 51725 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005857947s 2025-08-13T20:00:53.340759034+00:00 stdout F [INFO] 10.217.0.60:47471 - 34726 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005486817s 2025-08-13T20:00:53.411344546+00:00 stdout F [INFO] 10.217.0.60:54552 - 11612 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023914062s 2025-08-13T20:00:53.411344546+00:00 stdout F [INFO] 10.217.0.60:43437 - 29520 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025134666s 2025-08-13T20:00:53.432280653+00:00 stdout F [INFO] 10.217.0.60:38044 - 13694 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008570454s 2025-08-13T20:00:53.433227400+00:00 stdout F [INFO] 10.217.0.60:54568 - 64079 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007869224s 2025-08-13T20:00:53.471951515+00:00 stdout F [INFO] 10.217.0.60:53072 - 41075 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002703447s 2025-08-13T20:00:53.471951515+00:00 stdout F [INFO] 10.217.0.60:49580 - 61579 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002508161s 2025-08-13T20:00:53.516481754+00:00 stdout F [INFO] 10.217.0.60:50918 - 35890 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003679485s 2025-08-13T20:00:53.516481754+00:00 stdout F [INFO] 10.217.0.60:43105 - 22887 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003932652s 2025-08-13T20:00:53.516594207+00:00 stdout F [INFO] 10.217.0.60:33168 - 53941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023277554s 2025-08-13T20:00:53.535903078+00:00 stdout F [INFO] 10.217.0.60:42750 - 34289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.038139318s 2025-08-13T20:00:53.570983318+00:00 stdout F [INFO] 10.217.0.60:41931 - 41111 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003255393s 2025-08-13T20:00:53.573029197+00:00 stdout F [INFO] 10.217.0.60:40847 - 48976 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006612799s 2025-08-13T20:00:53.597868635+00:00 stdout F [INFO] 10.217.0.60:59360 - 2136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005416224s 2025-08-13T20:00:53.597868635+00:00 stdout F [INFO] 10.217.0.60:52461 - 27104 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001841083s 2025-08-13T20:00:53.601450557+00:00 stdout F [INFO] 10.217.0.60:54454 - 54017 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006414802s 2025-08-13T20:00:53.601450557+00:00 stdout F [INFO] 10.217.0.60:37130 - 13113 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001838232s 2025-08-13T20:00:53.665990417+00:00 stdout F [INFO] 10.217.0.60:46913 - 29149 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006355062s 2025-08-13T20:00:53.678985018+00:00 stdout F [INFO] 10.217.0.60:60979 - 7034 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002641435s 2025-08-13T20:00:53.679034359+00:00 stdout F [INFO] 10.217.0.60:42536 - 55728 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002853981s 2025-08-13T20:00:53.692268767+00:00 stdout F [INFO] 10.217.0.60:59147 - 6840 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020945667s 2025-08-13T20:00:53.759479803+00:00 stdout F [INFO] 10.217.0.60:45721 - 40846 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001359888s 2025-08-13T20:00:53.759479803+00:00 stdout F [INFO] 10.217.0.60:59305 - 1822 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002186972s 2025-08-13T20:00:53.766363369+00:00 stdout F [INFO] 10.217.0.60:35259 - 58487 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003414577s 2025-08-13T20:00:53.766543834+00:00 stdout F [INFO] 10.217.0.60:37103 - 25785 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004143848s 2025-08-13T20:00:53.848258384+00:00 stdout F [INFO] 10.217.0.60:47545 - 58113 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002532112s 2025-08-13T20:00:53.848258384+00:00 stdout F [INFO] 10.217.0.60:42233 - 41616 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004171259s 2025-08-13T20:00:53.864414455+00:00 stdout F [INFO] 10.217.0.60:51098 - 41843 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004119977s 2025-08-13T20:00:53.864721734+00:00 stdout F [INFO] 10.217.0.60:35925 - 20271 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004664763s 2025-08-13T20:00:53.981440612+00:00 stdout F [INFO] 10.217.0.60:55209 - 35425 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003106589s 2025-08-13T20:00:53.984768667+00:00 stdout F [INFO] 10.217.0.60:49145 - 61100 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003199231s 2025-08-13T20:00:54.006162107+00:00 stdout F [INFO] 10.217.0.60:36995 - 2348 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001911485s 2025-08-13T20:00:54.006666721+00:00 stdout F [INFO] 10.217.0.60:44304 - 50706 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002065359s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:58116 - 63292 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.017787097s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:56494 - 34065 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.018799796s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:40690 - 33753 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018499167s 2025-08-13T20:00:54.091014255+00:00 stdout F [INFO] 10.217.0.60:52303 - 57777 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.021306917s 2025-08-13T20:00:54.177955294+00:00 stdout F [INFO] 10.217.0.60:45987 - 37479 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002613084s 2025-08-13T20:00:54.177955294+00:00 stdout F [INFO] 10.217.0.60:58306 - 8356 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003806909s 2025-08-13T20:00:54.190158042+00:00 stdout F [INFO] 10.217.0.60:49703 - 48052 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005040684s 2025-08-13T20:00:54.190158042+00:00 stdout F [INFO] 10.217.0.60:47515 - 57080 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005875278s 2025-08-13T20:00:54.210693428+00:00 stdout F [INFO] 10.217.0.60:53317 - 61694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001216725s 2025-08-13T20:00:54.211213623+00:00 stdout F [INFO] 10.217.0.60:36938 - 24375 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001097501s 2025-08-13T20:00:54.261073224+00:00 stdout F [INFO] 10.217.0.60:48205 - 50646 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001469772s 2025-08-13T20:00:54.261451375+00:00 stdout F [INFO] 10.217.0.60:48539 - 55836 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005469026s 2025-08-13T20:00:54.346313265+00:00 stdout F [INFO] 10.217.0.60:54730 - 32651 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.045075945s 2025-08-13T20:00:54.346313265+00:00 stdout F [INFO] 10.217.0.60:43637 - 9908 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.046208437s 2025-08-13T20:00:54.358372139+00:00 stdout F [INFO] 10.217.0.60:56901 - 19987 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006063023s 2025-08-13T20:00:54.358429720+00:00 stdout F [INFO] 10.217.0.60:39734 - 36586 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008345908s 2025-08-13T20:00:54.413487130+00:00 stdout F [INFO] 10.217.0.60:35374 - 37770 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003413067s 2025-08-13T20:00:54.414041696+00:00 stdout F [INFO] 10.217.0.60:32861 - 23169 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003239342s 2025-08-13T20:00:54.444068402+00:00 stdout F [INFO] 10.217.0.60:49969 - 9790 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003211791s 2025-08-13T20:00:54.444068402+00:00 stdout F [INFO] 10.217.0.60:48352 - 43128 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00142177s 2025-08-13T20:00:54.496125126+00:00 stdout F [INFO] 10.217.0.60:50646 - 6041 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001598405s 2025-08-13T20:00:54.500601564+00:00 stdout F [INFO] 10.217.0.60:54137 - 5363 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005470266s 2025-08-13T20:00:54.562235942+00:00 stdout F [INFO] 10.217.0.60:34907 - 2227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009674926s 2025-08-13T20:00:54.562235942+00:00 stdout F [INFO] 10.217.0.60:56654 - 46794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009203472s 2025-08-13T20:00:54.605821094+00:00 stdout F [INFO] 10.217.0.60:38008 - 56395 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006390213s 2025-08-13T20:00:54.605821094+00:00 stdout F [INFO] 10.217.0.60:60224 - 57243 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006918808s 2025-08-13T20:00:54.641390439+00:00 stdout F [INFO] 10.217.0.60:35101 - 21936 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00350588s 2025-08-13T20:00:54.642122009+00:00 stdout F [INFO] 10.217.0.60:36606 - 13769 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00490198s 2025-08-13T20:00:54.701656737+00:00 stdout F [INFO] 10.217.0.60:60974 - 25606 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00630711s 2025-08-13T20:00:54.717990603+00:00 stdout F [INFO] 10.217.0.60:56711 - 1491 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.021716869s 2025-08-13T20:00:54.769018978+00:00 stdout F [INFO] 10.217.0.60:56705 - 50944 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002386768s 2025-08-13T20:00:54.800321270+00:00 stdout F [INFO] 10.217.0.60:33894 - 34414 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.020661909s 2025-08-13T20:00:54.821698250+00:00 stdout F [INFO] 10.217.0.60:46811 - 16255 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001141063s 2025-08-13T20:00:54.821698250+00:00 stdout F [INFO] 10.217.0.60:41651 - 35052 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002192633s 2025-08-13T20:00:56.237173901+00:00 stdout F [INFO] 10.217.0.28:57650 - 26271 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006250578s 2025-08-13T20:00:56.237173901+00:00 stdout F [INFO] 10.217.0.28:36911 - 23347 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006025012s 2025-08-13T20:00:56.426192821+00:00 stdout F [INFO] 10.217.0.64:60386 - 16371 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002096029s 2025-08-13T20:00:56.426755407+00:00 stdout F [INFO] 10.217.0.64:37132 - 17691 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002402099s 2025-08-13T20:00:56.429936718+00:00 stdout F [INFO] 10.217.0.64:56566 - 24357 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001352609s 2025-08-13T20:00:56.429936718+00:00 stdout F [INFO] 10.217.0.64:56185 - 8747 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001461742s 2025-08-13T20:00:57.764385539+00:00 stdout F [INFO] 10.217.0.57:55995 - 61941 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002156022s 2025-08-13T20:00:57.764487992+00:00 stdout F [INFO] 10.217.0.57:32930 - 18533 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003038847s 2025-08-13T20:01:01.217889967+00:00 stdout F [INFO] 10.217.0.28:60933 - 64320 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00173895s 2025-08-13T20:01:01.229498738+00:00 stdout F [INFO] 10.217.0.28:45794 - 37899 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004856648s 2025-08-13T20:01:02.786206288+00:00 stdout F [INFO] 10.217.0.57:40045 - 52840 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004254891s 2025-08-13T20:01:02.786714652+00:00 stdout F [INFO] 10.217.0.57:37315 - 53363 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004645843s 2025-08-13T20:01:06.248307858+00:00 stdout F [INFO] 10.217.0.28:32940 - 64022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010549551s 2025-08-13T20:01:06.248307858+00:00 stdout F [INFO] 10.217.0.28:39158 - 51585 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010443988s 2025-08-13T20:01:07.803260386+00:00 stdout F [INFO] 10.217.0.57:55745 - 47377 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008387099s 2025-08-13T20:01:07.803260386+00:00 stdout F [INFO] 10.217.0.57:52414 - 56772 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001229835s 2025-08-13T20:01:07.876341910+00:00 stdout F [INFO] 10.217.0.62:36493 - 37658 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006798604s 2025-08-13T20:01:07.928674602+00:00 stdout F [INFO] 10.217.0.62:47351 - 5566 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.054731061s 2025-08-13T20:01:08.958312120+00:00 stdout F [INFO] 10.217.0.62:45058 - 48452 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009701407s 2025-08-13T20:01:08.958708771+00:00 stdout F [INFO] 10.217.0.62:55693 - 35669 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010232312s 2025-08-13T20:01:09.463479824+00:00 stdout F [INFO] 10.217.0.62:42400 - 55577 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003704086s 2025-08-13T20:01:09.463555486+00:00 stdout F [INFO] 10.217.0.62:37144 - 52861 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003480729s 2025-08-13T20:01:09.791534678+00:00 stdout F [INFO] 10.217.0.62:35264 - 59363 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008885523s 2025-08-13T20:01:09.792368432+00:00 stdout F [INFO] 10.217.0.62:51739 - 64573 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.019929048s 2025-08-13T20:01:09.883410818+00:00 stdout F [INFO] 10.217.0.62:33611 - 14300 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003421418s 2025-08-13T20:01:09.883410818+00:00 stdout F [INFO] 10.217.0.62:43024 - 10224 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004042806s 2025-08-13T20:01:10.148001973+00:00 stdout F [INFO] 10.217.0.62:46260 - 39301 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005178448s 2025-08-13T20:01:10.153873390+00:00 stdout F [INFO] 10.217.0.62:57629 - 50837 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011846428s 2025-08-13T20:01:10.366079851+00:00 stdout F [INFO] 10.217.0.62:53398 - 6757 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010000395s 2025-08-13T20:01:10.368951013+00:00 stdout F [INFO] 10.217.0.62:34726 - 49492 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009213843s 2025-08-13T20:01:10.599053724+00:00 stdout F [INFO] 10.217.0.62:38447 - 31498 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006358471s 2025-08-13T20:01:10.602083821+00:00 stdout F [INFO] 10.217.0.62:35021 - 53431 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.028566614s 2025-08-13T20:01:11.189276624+00:00 stdout F [INFO] 10.217.0.28:57278 - 49657 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000955917s 2025-08-13T20:01:11.190236392+00:00 stdout F [INFO] 10.217.0.28:51161 - 41838 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001605305s 2025-08-13T20:01:12.737583602+00:00 stdout F [INFO] 10.217.0.57:38545 - 42595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000573556s 2025-08-13T20:01:12.738199670+00:00 stdout F [INFO] 10.217.0.57:37428 - 59560 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000958997s 2025-08-13T20:01:16.220937836+00:00 stdout F [INFO] 10.217.0.28:53038 - 50311 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004677073s 2025-08-13T20:01:16.272384153+00:00 stdout F [INFO] 10.217.0.28:41185 - 25277 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.059117746s 2025-08-13T20:01:17.742430731+00:00 stdout F [INFO] 10.217.0.57:55347 - 17398 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001211184s 2025-08-13T20:01:17.743296655+00:00 stdout F [INFO] 10.217.0.57:50425 - 39022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001655518s 2025-08-13T20:01:21.215652285+00:00 stdout F [INFO] 10.217.0.28:39681 - 25914 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001337798s 2025-08-13T20:01:21.218189227+00:00 stdout F [INFO] 10.217.0.28:50437 - 47565 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00278562s 2025-08-13T20:01:22.758813675+00:00 stdout F [INFO] 10.217.0.57:51254 - 6258 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00737564s 2025-08-13T20:01:22.759524666+00:00 stdout F [INFO] 10.217.0.57:39283 - 43233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00418449s 2025-08-13T20:01:22.875226975+00:00 stdout F [INFO] 10.217.0.8:35450 - 460 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002742558s 2025-08-13T20:01:22.899969020+00:00 stdout F [INFO] 10.217.0.8:50023 - 58252 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.012143567s 2025-08-13T20:01:22.903084419+00:00 stdout F [INFO] 10.217.0.8:37803 - 13193 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001617146s 2025-08-13T20:01:22.903387998+00:00 stdout F [INFO] 10.217.0.8:33487 - 42396 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001981667s 2025-08-13T20:01:22.948433202+00:00 stdout F [INFO] 10.217.0.8:60399 - 35132 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000913156s 2025-08-13T20:01:22.948694070+00:00 stdout F [INFO] 10.217.0.8:58767 - 6469 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00105186s 2025-08-13T20:01:22.955416611+00:00 stdout F [INFO] 10.217.0.8:35574 - 36598 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0017425s 2025-08-13T20:01:22.956320387+00:00 stdout F [INFO] 10.217.0.8:52173 - 4616 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00209741s 2025-08-13T20:01:22.983271905+00:00 stdout F [INFO] 10.217.0.8:42709 - 34679 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000729141s 2025-08-13T20:01:22.983556464+00:00 stdout F [INFO] 10.217.0.8:37180 - 59501 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000854214s 2025-08-13T20:01:22.985588542+00:00 stdout F [INFO] 10.217.0.8:46898 - 18051 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00067822s 2025-08-13T20:01:22.986040514+00:00 stdout F [INFO] 10.217.0.8:55598 - 646 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000887455s 2025-08-13T20:01:23.019215770+00:00 stdout F [INFO] 10.217.0.8:43776 - 16490 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000885795s 2025-08-13T20:01:23.019563030+00:00 stdout F [INFO] 10.217.0.8:55377 - 13705 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000998358s 2025-08-13T20:01:23.021052543+00:00 stdout F [INFO] 10.217.0.8:32974 - 5627 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000607378s 2025-08-13T20:01:23.021367562+00:00 stdout F [INFO] 10.217.0.8:46715 - 588 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000855444s 2025-08-13T20:01:23.072998254+00:00 stdout F [INFO] 10.217.0.8:46363 - 14241 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000838344s 2025-08-13T20:01:23.073300263+00:00 stdout F [INFO] 10.217.0.8:39978 - 64267 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000959147s 2025-08-13T20:01:23.075549817+00:00 stdout F [INFO] 10.217.0.8:50302 - 46004 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00036754s 2025-08-13T20:01:23.080598711+00:00 stdout F [INFO] 10.217.0.8:36433 - 63422 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001915785s 2025-08-13T20:01:23.174177349+00:00 stdout F [INFO] 10.217.0.8:44290 - 35137 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001082941s 2025-08-13T20:01:23.174453997+00:00 stdout F [INFO] 10.217.0.8:58110 - 59594 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001618486s 2025-08-13T20:01:23.182685632+00:00 stdout F [INFO] 10.217.0.8:41613 - 40136 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0024516s 2025-08-13T20:01:23.183076613+00:00 stdout F [INFO] 10.217.0.8:49877 - 49171 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003004526s 2025-08-13T20:01:23.356737254+00:00 stdout F [INFO] 10.217.0.8:44016 - 20804 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000900336s 2025-08-13T20:01:23.356737254+00:00 stdout F [INFO] 10.217.0.8:50779 - 40285 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000814863s 2025-08-13T20:01:23.362991283+00:00 stdout F [INFO] 10.217.0.8:58191 - 8238 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003906692s 2025-08-13T20:01:23.363173828+00:00 stdout F [INFO] 10.217.0.8:34112 - 1811 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.004521639s 2025-08-13T20:01:23.697180232+00:00 stdout F [INFO] 10.217.0.8:45455 - 40717 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005744714s 2025-08-13T20:01:23.697623495+00:00 stdout F [INFO] 10.217.0.8:36955 - 33064 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006594509s 2025-08-13T20:01:23.710523722+00:00 stdout F [INFO] 10.217.0.8:53645 - 3233 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.011739265s 2025-08-13T20:01:23.711900992+00:00 stdout F [INFO] 10.217.0.8:59710 - 44829 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012081614s 2025-08-13T20:01:24.356571364+00:00 stdout F [INFO] 10.217.0.8:60593 - 48150 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000811133s 2025-08-13T20:01:24.356620205+00:00 stdout F [INFO] 10.217.0.8:55236 - 41544 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000640498s 2025-08-13T20:01:24.357769188+00:00 stdout F [INFO] 10.217.0.8:41541 - 33055 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000577307s 2025-08-13T20:01:24.358411346+00:00 stdout F [INFO] 10.217.0.8:47550 - 49968 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000632798s 2025-08-13T20:01:25.676466559+00:00 stdout F [INFO] 10.217.0.8:59216 - 3385 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006457944s 2025-08-13T20:01:25.676466559+00:00 stdout F [INFO] 10.217.0.8:41508 - 20230 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007326559s 2025-08-13T20:01:25.677325214+00:00 stdout F [INFO] 10.217.0.8:35700 - 28311 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069899s 2025-08-13T20:01:25.678744934+00:00 stdout F [INFO] 10.217.0.8:57821 - 32512 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000834894s 2025-08-13T20:01:26.195177770+00:00 stdout F [INFO] 10.217.0.28:45247 - 60056 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001457361s 2025-08-13T20:01:26.197937238+00:00 stdout F [INFO] 10.217.0.28:36922 - 18578 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00243877s 2025-08-13T20:01:27.747230374+00:00 stdout F [INFO] 10.217.0.57:42648 - 41427 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003019136s 2025-08-13T20:01:27.747230374+00:00 stdout F [INFO] 10.217.0.57:45322 - 51505 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003414618s 2025-08-13T20:01:28.252961704+00:00 stdout F [INFO] 10.217.0.8:38267 - 2620 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001890944s 2025-08-13T20:01:28.253402637+00:00 stdout F [INFO] 10.217.0.8:33474 - 20943 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002586484s 2025-08-13T20:01:28.257625927+00:00 stdout F [INFO] 10.217.0.8:46050 - 9905 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002819021s 2025-08-13T20:01:28.258326047+00:00 stdout F [INFO] 10.217.0.8:52537 - 2133 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000777933s 2025-08-13T20:01:31.226187453+00:00 stdout F [INFO] 10.217.0.28:54125 - 15824 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010559841s 2025-08-13T20:01:31.226275255+00:00 stdout F [INFO] 10.217.0.28:34218 - 4235 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010554371s 2025-08-13T20:01:32.746987848+00:00 stdout F [INFO] 10.217.0.57:48872 - 60983 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001016209s 2025-08-13T20:01:32.747463922+00:00 stdout F [INFO] 10.217.0.57:44677 - 14267 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001244225s 2025-08-13T20:01:33.390184889+00:00 stdout F [INFO] 10.217.0.8:51622 - 5984 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001513223s 2025-08-13T20:01:33.390184889+00:00 stdout F [INFO] 10.217.0.8:55023 - 43646 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001353799s 2025-08-13T20:01:33.391758223+00:00 stdout F [INFO] 10.217.0.8:36157 - 4425 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000748261s 2025-08-13T20:01:33.427944867+00:00 stdout F [INFO] 10.217.0.8:50839 - 60785 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.036577225s 2025-08-13T20:01:36.185028201+00:00 stdout F [INFO] 10.217.0.28:53977 - 19614 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000873425s 2025-08-13T20:01:36.185510115+00:00 stdout F [INFO] 10.217.0.28:48739 - 23520 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000631698s 2025-08-13T20:01:37.742700716+00:00 stdout F [INFO] 10.217.0.57:37766 - 57355 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001464932s 2025-08-13T20:01:37.742909242+00:00 stdout F [INFO] 10.217.0.57:45806 - 61763 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001585065s 2025-08-13T20:01:42.737094876+00:00 stdout F [INFO] 10.217.0.57:51744 - 57853 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000895476s 2025-08-13T20:01:42.738118555+00:00 stdout F [INFO] 10.217.0.57:50810 - 5972 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001739799s 2025-08-13T20:01:43.701713921+00:00 stdout F [INFO] 10.217.0.8:36382 - 38398 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001006558s 2025-08-13T20:01:43.701753482+00:00 stdout F [INFO] 10.217.0.8:47387 - 17641 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001006888s 2025-08-13T20:01:43.704013746+00:00 stdout F [INFO] 10.217.0.8:54898 - 64653 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000649298s 2025-08-13T20:01:43.704736497+00:00 stdout F [INFO] 10.217.0.8:32768 - 24650 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001379599s 2025-08-13T20:01:47.743368384+00:00 stdout F [INFO] 10.217.0.57:58791 - 31751 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001133693s 2025-08-13T20:01:47.744028232+00:00 stdout F [INFO] 10.217.0.57:60427 - 36503 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001844383s 2025-08-13T20:01:52.740083758+00:00 stdout F [INFO] 10.217.0.57:55965 - 27277 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001546664s 2025-08-13T20:01:52.740083758+00:00 stdout F [INFO] 10.217.0.57:43501 - 22555 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001841002s 2025-08-13T20:01:56.381289243+00:00 stdout F [INFO] 10.217.0.64:33868 - 27378 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001576005s 2025-08-13T20:01:56.381949141+00:00 stdout F [INFO] 10.217.0.64:43107 - 15688 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000985269s 2025-08-13T20:01:56.385583775+00:00 stdout F [INFO] 10.217.0.64:38774 - 34772 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000796203s 2025-08-13T20:01:56.386594514+00:00 stdout F [INFO] 10.217.0.64:51409 - 59863 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001705519s 2025-08-13T20:01:57.740516440+00:00 stdout F [INFO] 10.217.0.57:40457 - 8246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000826874s 2025-08-13T20:01:57.740968553+00:00 stdout F [INFO] 10.217.0.57:41951 - 15134 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000784732s 2025-08-13T20:02:02.752936391+00:00 stdout F [INFO] 10.217.0.57:45491 - 28069 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00174288s 2025-08-13T20:02:02.752936391+00:00 stdout F [INFO] 10.217.0.57:51623 - 20798 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001608546s 2025-08-13T20:02:04.196547714+00:00 stdout F [INFO] 10.217.0.8:53185 - 3699 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001116902s 2025-08-13T20:02:04.196872024+00:00 stdout F [INFO] 10.217.0.8:41818 - 23258 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001150073s 2025-08-13T20:02:04.198577372+00:00 stdout F [INFO] 10.217.0.8:59757 - 57451 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00072344s 2025-08-13T20:02:04.198696666+00:00 stdout F [INFO] 10.217.0.8:40174 - 24364 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069757s 2025-08-13T20:02:07.740393995+00:00 stdout F [INFO] 10.217.0.57:44423 - 57160 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001285037s 2025-08-13T20:02:07.740930810+00:00 stdout F [INFO] 10.217.0.57:37229 - 23125 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001233406s 2025-08-13T20:02:12.737589534+00:00 stdout F [INFO] 10.217.0.57:60917 - 6095 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001498463s 2025-08-13T20:02:12.740934829+00:00 stdout F [INFO] 10.217.0.57:46568 - 39709 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005422165s 2025-08-13T20:02:17.741689966+00:00 stdout F [INFO] 10.217.0.57:39448 - 45084 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001421561s 2025-08-13T20:02:17.742317324+00:00 stdout F [INFO] 10.217.0.57:58611 - 38262 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175589s 2025-08-13T20:02:22.742260017+00:00 stdout F [INFO] 10.217.0.57:37594 - 46202 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001674168s 2025-08-13T20:02:22.743136312+00:00 stdout F [INFO] 10.217.0.57:46178 - 3725 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002001987s 2025-08-13T20:02:22.850376041+00:00 stdout F [INFO] 10.217.0.8:37535 - 5907 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000958718s 2025-08-13T20:02:22.850376041+00:00 stdout F [INFO] 10.217.0.8:57939 - 54092 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00138282s 2025-08-13T20:02:22.851987297+00:00 stdout F [INFO] 10.217.0.8:47235 - 23930 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000638688s 2025-08-13T20:02:22.852372948+00:00 stdout F [INFO] 10.217.0.8:35321 - 3816 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000816614s 2025-08-13T20:02:45.175333378+00:00 stdout F [INFO] 10.217.0.8:44330 - 47496 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002702667s 2025-08-13T20:02:45.175333378+00:00 stdout F [INFO] 10.217.0.8:53638 - 28468 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003530061s 2025-08-13T20:02:45.178408116+00:00 stdout F [INFO] 10.217.0.8:34486 - 6540 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001221594s 2025-08-13T20:02:45.181027171+00:00 stdout F [INFO] 10.217.0.8:50562 - 34559 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003549701s 2025-08-13T20:02:56.361633502+00:00 stdout F [INFO] 10.217.0.64:54018 - 28556 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002588663s 2025-08-13T20:02:56.361633502+00:00 stdout F [INFO] 10.217.0.64:52974 - 20654 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002253644s 2025-08-13T20:02:56.361737164+00:00 stdout F [INFO] 10.217.0.64:55753 - 41252 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002745678s 2025-08-13T20:02:56.361757175+00:00 stdout F [INFO] 10.217.0.64:57017 - 50212 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002530172s 2025-08-13T20:03:22.854718995+00:00 stdout F [INFO] 10.217.0.8:48056 - 26529 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005041034s 2025-08-13T20:03:22.855475337+00:00 stdout F [INFO] 10.217.0.8:36036 - 55050 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00594783s 2025-08-13T20:03:22.858104102+00:00 stdout F [INFO] 10.217.0.8:35111 - 28439 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001604606s 2025-08-13T20:03:22.858457422+00:00 stdout F [INFO] 10.217.0.8:38433 - 51125 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00106096s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:44595 - 12598 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 2.98031181s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:51254 - 44553 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 2.9808070239999997s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:47570 - 1814 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 2.9712284s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:38933 - 34536 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 2.971904639s 2025-08-13T20:04:22.855177511+00:00 stdout F [INFO] 10.217.0.8:36152 - 1487 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003761058s 2025-08-13T20:04:22.855177511+00:00 stdout F [INFO] 10.217.0.8:57798 - 5862 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00453144s 2025-08-13T20:04:22.859450584+00:00 stdout F [INFO] 10.217.0.8:51299 - 43533 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001581495s 2025-08-13T20:04:22.859450584+00:00 stdout F [INFO] 10.217.0.8:33424 - 53620 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001211915s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:53398 - 7438 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002246825s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:53707 - 34441 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001054011s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:35175 - 59215 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000854044s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:52207 - 4802 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002560213s 2025-08-13T20:05:22.834833023+00:00 stdout F [INFO] 10.217.0.57:56324 - 25639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002765099s 2025-08-13T20:05:22.834833023+00:00 stdout F [INFO] 10.217.0.57:56473 - 6138 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003923752s 2025-08-13T20:05:22.874352144+00:00 stdout F [INFO] 10.217.0.8:57994 - 3404 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002601494s 2025-08-13T20:05:22.874352144+00:00 stdout F [INFO] 10.217.0.8:56186 - 51110 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003099199s 2025-08-13T20:05:22.877030471+00:00 stdout F [INFO] 10.217.0.8:39214 - 1864 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000801883s 2025-08-13T20:05:22.877568387+00:00 stdout F [INFO] 10.217.0.8:52797 - 23506 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000858964s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:42329 - 34614 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001238436s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:34828 - 37668 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002666427s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:50135 - 64401 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002043728s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:47646 - 27514 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001369899s 2025-08-13T20:05:30.484629752+00:00 stdout F [INFO] 10.217.0.73:35591 - 37435 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129143s 2025-08-13T20:05:30.485074535+00:00 stdout F [INFO] 10.217.0.73:46188 - 1945 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001558785s 2025-08-13T20:05:31.039418839+00:00 stdout F [INFO] 10.217.0.57:55199 - 28877 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002956185s 2025-08-13T20:05:31.067272687+00:00 stdout F [INFO] 10.217.0.57:34733 - 49370 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.032735907s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:50848 - 53238 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003411278s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:39821 - 33040 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003394617s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:37737 - 1404 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003336776s 2025-08-13T20:05:56.380818118+00:00 stdout F [INFO] 10.217.0.64:35989 - 676 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003093488s 2025-08-13T20:05:57.260526589+00:00 stdout F [INFO] 10.217.0.19:55716 - 30971 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001340569s 2025-08-13T20:05:57.260526589+00:00 stdout F [INFO] 10.217.0.19:48212 - 28087 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001226735s 2025-08-13T20:06:02.817339144+00:00 stdout F [INFO] 10.217.0.19:42395 - 20503 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001299787s 2025-08-13T20:06:02.817339144+00:00 stdout F [INFO] 10.217.0.19:58590 - 35761 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001914884s 2025-08-13T20:06:10.928750040+00:00 stdout F [INFO] 10.217.0.19:40611 - 63299 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001083611s 2025-08-13T20:06:10.928750040+00:00 stdout F [INFO] 10.217.0.19:45243 - 63194 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001169673s 2025-08-13T20:06:13.331971681+00:00 stdout F [INFO] 10.217.0.19:60442 - 24966 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00211397s 2025-08-13T20:06:13.331971681+00:00 stdout F [INFO] 10.217.0.19:49512 - 53168 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002144402s 2025-08-13T20:06:19.649446948+00:00 stdout F [INFO] 10.217.0.19:33262 - 15714 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002213083s 2025-08-13T20:06:19.649446948+00:00 stdout F [INFO] 10.217.0.19:51577 - 30006 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002758709s 2025-08-13T20:06:22.858350418+00:00 stdout F [INFO] 10.217.0.8:54831 - 57999 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.0014117s 2025-08-13T20:06:22.858350418+00:00 stdout F [INFO] 10.217.0.8:51901 - 11298 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001923375s 2025-08-13T20:06:22.860114628+00:00 stdout F [INFO] 10.217.0.8:41737 - 24832 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000652439s 2025-08-13T20:06:22.860657794+00:00 stdout F [INFO] 10.217.0.8:51273 - 60742 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001147012s 2025-08-13T20:06:24.856002612+00:00 stdout F [INFO] 10.217.0.19:46264 - 24052 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000716081s 2025-08-13T20:06:24.856002612+00:00 stdout F [INFO] 10.217.0.19:38175 - 50312 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000926017s 2025-08-13T20:06:24.982110083+00:00 stdout F [INFO] 10.217.0.19:41025 - 12544 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00067708s 2025-08-13T20:06:24.983396940+00:00 stdout F [INFO] 10.217.0.19:48621 - 6324 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000912816s 2025-08-13T20:06:42.398374316+00:00 stdout F [INFO] 10.217.0.19:40544 - 31548 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003992194s 2025-08-13T20:06:42.398551511+00:00 stdout F [INFO] 10.217.0.19:47961 - 38383 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004599002s 2025-08-13T20:06:44.831326502+00:00 stdout F [INFO] 10.217.0.19:46233 - 36481 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000929797s 2025-08-13T20:06:44.831326502+00:00 stdout F [INFO] 10.217.0.19:52474 - 16656 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001438521s 2025-08-13T20:06:45.977531073+00:00 stdout F [INFO] 10.217.0.19:42773 - 41690 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000721611s 2025-08-13T20:06:45.977531073+00:00 stdout F [INFO] 10.217.0.19:48183 - 6599 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000650689s 2025-08-13T20:06:49.310538523+00:00 stdout F [INFO] 10.217.0.19:40594 - 10031 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001391659s 2025-08-13T20:06:49.310569354+00:00 stdout F [INFO] 10.217.0.19:50841 - 61241 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000930497s 2025-08-13T20:06:56.401041584+00:00 stdout F [INFO] 10.217.0.64:35784 - 58433 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005351913s 2025-08-13T20:06:56.401041584+00:00 stdout F [INFO] 10.217.0.64:56005 - 56756 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005147368s 2025-08-13T20:06:56.408528299+00:00 stdout F [INFO] 10.217.0.64:53375 - 63899 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001581075s 2025-08-13T20:06:56.409009593+00:00 stdout F [INFO] 10.217.0.64:60446 - 44331 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002788589s 2025-08-13T20:07:12.391819733+00:00 stdout F [INFO] 10.217.0.19:51663 - 10457 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003456329s 2025-08-13T20:07:12.392347668+00:00 stdout F [INFO] 10.217.0.19:36490 - 30092 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002132671s 2025-08-13T20:07:22.916941526+00:00 stdout F [INFO] 10.217.0.8:49926 - 26483 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004742656s 2025-08-13T20:07:22.919477009+00:00 stdout F [INFO] 10.217.0.8:44496 - 22655 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00626125s 2025-08-13T20:07:22.925974105+00:00 stdout F [INFO] 10.217.0.8:43816 - 62129 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001857553s 2025-08-13T20:07:22.926229433+00:00 stdout F [INFO] 10.217.0.8:38711 - 50307 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00279s 2025-08-13T20:07:42.446369421+00:00 stdout F [INFO] 10.217.0.19:35262 - 46720 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004152379s 2025-08-13T20:07:42.446369421+00:00 stdout F [INFO] 10.217.0.19:58765 - 30779 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004750666s 2025-08-13T20:07:55.895470739+00:00 stdout F [INFO] 10.217.0.19:55147 - 31451 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002516592s 2025-08-13T20:07:55.895470739+00:00 stdout F [INFO] 10.217.0.19:57149 - 51963 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001709219s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:57024 - 62161 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001544464s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:40706 - 49086 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001788221s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:33994 - 7565 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001635137s 2025-08-13T20:07:56.368462020+00:00 stdout F [INFO] 10.217.0.64:38183 - 47201 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00557274s 2025-08-13T20:07:58.927633963+00:00 stdout F [INFO] 10.217.0.19:35433 - 24636 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002575593s 2025-08-13T20:07:58.927706205+00:00 stdout F [INFO] 10.217.0.19:38853 - 41776 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003402368s 2025-08-13T20:07:59.051685730+00:00 stdout F [INFO] 10.217.0.19:39692 - 59438 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001105412s 2025-08-13T20:07:59.053430520+00:00 stdout F [INFO] 10.217.0.19:37686 - 44994 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001551274s 2025-08-13T20:08:02.186596361+00:00 stdout F [INFO] 10.217.0.45:41379 - 46721 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000921637s 2025-08-13T20:08:02.189161414+00:00 stdout F [INFO] 10.217.0.45:59167 - 54274 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001012559s 2025-08-13T20:08:03.090366973+00:00 stdout F [INFO] 10.217.0.82:44963 - 1211 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001140693s 2025-08-13T20:08:03.090366973+00:00 stdout F [INFO] 10.217.0.82:46007 - 19049 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000906806s 2025-08-13T20:08:03.829207705+00:00 stdout F [INFO] 10.217.0.82:50887 - 47041 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.00105427s 2025-08-13T20:08:03.829537805+00:00 stdout F [INFO] 10.217.0.82:49104 - 36628 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001064701s 2025-08-13T20:08:11.673531949+00:00 stdout F [INFO] 10.217.0.62:53197 - 19130 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000967497s 2025-08-13T20:08:11.673531949+00:00 stdout F [INFO] 10.217.0.62:55102 - 58758 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001366819s 2025-08-13T20:08:11.863221898+00:00 stdout F [INFO] 10.217.0.62:46299 - 6790 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002732218s 2025-08-13T20:08:11.863307000+00:00 stdout F [INFO] 10.217.0.62:52890 - 25150 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002978285s 2025-08-13T20:08:12.460165583+00:00 stdout F [INFO] 10.217.0.19:55674 - 5069 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001345239s 2025-08-13T20:08:12.460165583+00:00 stdout F [INFO] 10.217.0.19:53687 - 56231 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001230745s 2025-08-13T20:08:16.460247929+00:00 stdout F [INFO] 10.217.0.19:33883 - 22269 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001597046s 2025-08-13T20:08:16.460247929+00:00 stdout F [INFO] 10.217.0.19:55451 - 9936 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001798961s 2025-08-13T20:08:17.093307339+00:00 stdout F [INFO] 10.217.0.74:47638 - 51886 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0014134s 2025-08-13T20:08:17.093307339+00:00 stdout F [INFO] 10.217.0.74:43722 - 53547 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002339637s 2025-08-13T20:08:17.093651509+00:00 stdout F [INFO] 10.217.0.74:47930 - 29842 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001831943s 2025-08-13T20:08:17.093928847+00:00 stdout F [INFO] 10.217.0.74:60430 - 2357 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002843112s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:37794 - 44338 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002469581s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:36380 - 24703 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002938054s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:54413 - 4769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006599279s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:58887 - 6627 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007189946s 2025-08-13T20:08:17.313472882+00:00 stdout F [INFO] 10.217.0.74:34420 - 20953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654899s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:40265 - 13881 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000726571s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:60934 - 58201 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001991577s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:56818 - 41121 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001181694s 2025-08-13T20:08:17.353452958+00:00 stdout F [INFO] 10.217.0.74:59414 - 1737 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001528744s 2025-08-13T20:08:17.353452958+00:00 stdout F [INFO] 10.217.0.74:35775 - 32087 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001648827s 2025-08-13T20:08:17.374565733+00:00 stdout F [INFO] 10.217.0.74:51839 - 10472 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000966457s 2025-08-13T20:08:17.374623335+00:00 stdout F [INFO] 10.217.0.74:59412 - 56029 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000880815s 2025-08-13T20:08:17.379713471+00:00 stdout F [INFO] 10.217.0.74:59850 - 37284 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467644s 2025-08-13T20:08:17.379713471+00:00 stdout F [INFO] 10.217.0.74:48117 - 9957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000437173s 2025-08-13T20:08:17.414002834+00:00 stdout F [INFO] 10.217.0.74:53202 - 11313 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0007195s 2025-08-13T20:08:17.415289781+00:00 stdout F [INFO] 10.217.0.74:34969 - 34716 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001980387s 2025-08-13T20:08:17.438063694+00:00 stdout F [INFO] 10.217.0.74:44550 - 1766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001649777s 2025-08-13T20:08:17.438179977+00:00 stdout F [INFO] 10.217.0.74:33996 - 10441 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002167302s 2025-08-13T20:08:17.506074713+00:00 stdout F [INFO] 10.217.0.74:36687 - 64047 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00101641s 2025-08-13T20:08:17.506356141+00:00 stdout F [INFO] 10.217.0.74:60766 - 9260 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001620316s 2025-08-13T20:08:17.511222911+00:00 stdout F [INFO] 10.217.0.74:58673 - 45520 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001016829s 2025-08-13T20:08:17.512197309+00:00 stdout F [INFO] 10.217.0.74:48516 - 11841 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001193584s 2025-08-13T20:08:17.512382915+00:00 stdout F [INFO] 10.217.0.74:43668 - 62296 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001465352s 2025-08-13T20:08:17.513393153+00:00 stdout F [INFO] 10.217.0.74:53257 - 38114 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001675628s 2025-08-13T20:08:17.531563284+00:00 stdout F [INFO] 10.217.0.74:32914 - 43822 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000721731s 2025-08-13T20:08:17.531563284+00:00 stdout F [INFO] 10.217.0.74:57557 - 60863 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000788833s 2025-08-13T20:08:17.571268303+00:00 stdout F [INFO] 10.217.0.74:34472 - 31021 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001095471s 2025-08-13T20:08:17.571371496+00:00 stdout F [INFO] 10.217.0.74:45301 - 50358 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771102s 2025-08-13T20:08:17.576149263+00:00 stdout F [INFO] 10.217.0.74:38042 - 53403 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00139997s 2025-08-13T20:08:17.577165722+00:00 stdout F [INFO] 10.217.0.74:50457 - 1804 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000845834s 2025-08-13T20:08:17.587648382+00:00 stdout F [INFO] 10.217.0.74:33807 - 22539 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00101709s 2025-08-13T20:08:17.587648382+00:00 stdout F [INFO] 10.217.0.74:50078 - 57611 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000952327s 2025-08-13T20:08:17.596519817+00:00 stdout F [INFO] 10.217.0.74:51619 - 1271 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000502865s 2025-08-13T20:08:17.596723603+00:00 stdout F [INFO] 10.217.0.74:33030 - 31537 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527365s 2025-08-13T20:08:17.633355123+00:00 stdout F [INFO] 10.217.0.74:49719 - 52641 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757552s 2025-08-13T20:08:17.633409224+00:00 stdout F [INFO] 10.217.0.74:44615 - 29659 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824994s 2025-08-13T20:08:17.659664257+00:00 stdout F [INFO] 10.217.0.74:60079 - 38710 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002021758s 2025-08-13T20:08:17.659664257+00:00 stdout F [INFO] 10.217.0.74:46372 - 29784 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002361298s 2025-08-13T20:08:17.697094000+00:00 stdout F [INFO] 10.217.0.74:37367 - 17356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175155s 2025-08-13T20:08:17.697094000+00:00 stdout F [INFO] 10.217.0.74:33453 - 15295 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001937126s 2025-08-13T20:08:17.720648796+00:00 stdout F [INFO] 10.217.0.74:56978 - 59221 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001009399s 2025-08-13T20:08:17.720648796+00:00 stdout F [INFO] 10.217.0.74:37924 - 45832 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001598885s 2025-08-13T20:08:17.757582705+00:00 stdout F [INFO] 10.217.0.74:50255 - 7778 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010567s 2025-08-13T20:08:17.757663517+00:00 stdout F [INFO] 10.217.0.74:49792 - 21282 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001282237s 2025-08-13T20:08:17.780338727+00:00 stdout F [INFO] 10.217.0.74:56255 - 42807 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001229795s 2025-08-13T20:08:17.780338727+00:00 stdout F [INFO] 10.217.0.74:59643 - 60857 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001216784s 2025-08-13T20:08:17.796204772+00:00 stdout F [INFO] 10.217.0.74:39042 - 60622 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002146812s 2025-08-13T20:08:17.796295525+00:00 stdout F [INFO] 10.217.0.74:51432 - 60269 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002134271s 2025-08-13T20:08:17.819143930+00:00 stdout F [INFO] 10.217.0.74:33576 - 27950 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103722s 2025-08-13T20:08:17.819928862+00:00 stdout F [INFO] 10.217.0.74:46599 - 38481 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000400511s 2025-08-13T20:08:17.858210460+00:00 stdout F [INFO] 10.217.0.74:40244 - 6782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000604517s 2025-08-13T20:08:17.858261901+00:00 stdout F [INFO] 10.217.0.74:34036 - 38337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001010159s 2025-08-13T20:08:17.858490348+00:00 stdout F [INFO] 10.217.0.74:59286 - 12399 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000644209s 2025-08-13T20:08:17.858646702+00:00 stdout F [INFO] 10.217.0.74:59246 - 35961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001014419s 2025-08-13T20:08:17.880153489+00:00 stdout F [INFO] 10.217.0.74:36022 - 45218 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000942657s 2025-08-13T20:08:17.880526860+00:00 stdout F [INFO] 10.217.0.74:34981 - 33902 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001340278s 2025-08-13T20:08:17.881396075+00:00 stdout F [INFO] 10.217.0.74:49413 - 27840 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000502115s 2025-08-13T20:08:17.883545775+00:00 stdout F [INFO] 10.217.0.74:57490 - 176 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002198163s 2025-08-13T20:08:17.923052838+00:00 stdout F [INFO] 10.217.0.74:42831 - 31660 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003493061s 2025-08-13T20:08:17.924053797+00:00 stdout F [INFO] 10.217.0.74:56272 - 13437 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003915012s 2025-08-13T20:08:17.930211313+00:00 stdout F [INFO] 10.217.0.74:51914 - 40356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001364369s 2025-08-13T20:08:17.930663796+00:00 stdout F [INFO] 10.217.0.74:35796 - 40289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001480022s 2025-08-13T20:08:17.939123259+00:00 stdout F [INFO] 10.217.0.74:48300 - 58931 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000874155s 2025-08-13T20:08:17.939484169+00:00 stdout F [INFO] 10.217.0.74:44967 - 4651 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001081602s 2025-08-13T20:08:17.957997040+00:00 stdout F [INFO] 10.217.0.74:45670 - 58180 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001100601s 2025-08-13T20:08:17.958301839+00:00 stdout F [INFO] 10.217.0.74:39047 - 29042 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000748512s 2025-08-13T20:08:18.001523288+00:00 stdout F [INFO] 10.217.0.74:60147 - 42545 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001336129s 2025-08-13T20:08:18.002163866+00:00 stdout F [INFO] 10.217.0.74:52066 - 45542 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001274117s 2025-08-13T20:08:18.003171135+00:00 stdout F [INFO] 10.217.0.74:44306 - 52127 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001024219s 2025-08-13T20:08:18.003308809+00:00 stdout F [INFO] 10.217.0.74:54199 - 52772 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001133893s 2025-08-13T20:08:18.042742490+00:00 stdout F [INFO] 10.217.0.74:49386 - 1226 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768522s 2025-08-13T20:08:18.042914435+00:00 stdout F [INFO] 10.217.0.74:47491 - 30793 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000759702s 2025-08-13T20:08:18.057213194+00:00 stdout F [INFO] 10.217.0.74:35466 - 46773 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001504533s 2025-08-13T20:08:18.058068579+00:00 stdout F [INFO] 10.217.0.74:47145 - 28223 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001951675s 2025-08-13T20:08:18.060862139+00:00 stdout F [INFO] 10.217.0.74:43862 - 43274 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000581036s 2025-08-13T20:08:18.060862139+00:00 stdout F [INFO] 10.217.0.74:53185 - 40376 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613208s 2025-08-13T20:08:18.101125083+00:00 stdout F [INFO] 10.217.0.74:41513 - 8463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105484s 2025-08-13T20:08:18.101125083+00:00 stdout F [INFO] 10.217.0.74:57968 - 54244 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001155973s 2025-08-13T20:08:18.119922342+00:00 stdout F [INFO] 10.217.0.74:40669 - 25614 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000916636s 2025-08-13T20:08:18.120042356+00:00 stdout F [INFO] 10.217.0.74:52622 - 18205 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000878835s 2025-08-13T20:08:18.155849493+00:00 stdout F [INFO] 10.217.0.74:59445 - 26368 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769882s 2025-08-13T20:08:18.156263654+00:00 stdout F [INFO] 10.217.0.74:60747 - 31872 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104462s 2025-08-13T20:08:18.180989663+00:00 stdout F [INFO] 10.217.0.74:60997 - 17041 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000908776s 2025-08-13T20:08:18.180989663+00:00 stdout F [INFO] 10.217.0.74:46551 - 10659 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001160833s 2025-08-13T20:08:18.213105324+00:00 stdout F [INFO] 10.217.0.74:39244 - 55181 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070225s 2025-08-13T20:08:18.213105324+00:00 stdout F [INFO] 10.217.0.74:51836 - 55722 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000638858s 2025-08-13T20:08:18.239968064+00:00 stdout F [INFO] 10.217.0.74:52292 - 12385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729421s 2025-08-13T20:08:18.240025206+00:00 stdout F [INFO] 10.217.0.74:39250 - 28993 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000791312s 2025-08-13T20:08:18.271506388+00:00 stdout F [INFO] 10.217.0.74:58756 - 712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000933617s 2025-08-13T20:08:18.271506388+00:00 stdout F [INFO] 10.217.0.74:46085 - 64615 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001323918s 2025-08-13T20:08:18.296870226+00:00 stdout F [INFO] 10.217.0.74:51242 - 53671 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001269976s 2025-08-13T20:08:18.296870226+00:00 stdout F [INFO] 10.217.0.74:38515 - 27078 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001712259s 2025-08-13T20:08:18.331032635+00:00 stdout F [INFO] 10.217.0.74:50758 - 41176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000789273s 2025-08-13T20:08:18.331032635+00:00 stdout F [INFO] 10.217.0.74:35435 - 27337 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001215435s 2025-08-13T20:08:18.340077824+00:00 stdout F [INFO] 10.217.0.74:52631 - 10801 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000752912s 2025-08-13T20:08:18.340128446+00:00 stdout F [INFO] 10.217.0.74:49085 - 51573 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104725s 2025-08-13T20:08:18.357214316+00:00 stdout F [INFO] 10.217.0.74:36458 - 4215 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00101837s 2025-08-13T20:08:18.357267757+00:00 stdout F [INFO] 10.217.0.74:37756 - 51038 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001114382s 2025-08-13T20:08:18.388217535+00:00 stdout F [INFO] 10.217.0.74:55365 - 27500 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001113472s 2025-08-13T20:08:18.388380129+00:00 stdout F [INFO] 10.217.0.74:50138 - 31839 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001151443s 2025-08-13T20:08:18.395161234+00:00 stdout F [INFO] 10.217.0.74:34389 - 8169 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000879295s 2025-08-13T20:08:18.395161234+00:00 stdout F [INFO] 10.217.0.74:39526 - 1664 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000770822s 2025-08-13T20:08:18.421573191+00:00 stdout F [INFO] 10.217.0.74:42922 - 4581 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002536073s 2025-08-13T20:08:18.421699015+00:00 stdout F [INFO] 10.217.0.74:36744 - 20784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003105509s 2025-08-13T20:08:18.445528128+00:00 stdout F [INFO] 10.217.0.74:40515 - 63429 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000746172s 2025-08-13T20:08:18.446329821+00:00 stdout F [INFO] 10.217.0.74:40343 - 7258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000667309s 2025-08-13T20:08:18.454415793+00:00 stdout F [INFO] 10.217.0.74:43286 - 24798 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000833254s 2025-08-13T20:08:18.456846412+00:00 stdout F [INFO] 10.217.0.74:45836 - 44265 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002275765s 2025-08-13T20:08:18.477692230+00:00 stdout F [INFO] 10.217.0.74:40548 - 35806 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000763682s 2025-08-13T20:08:18.478217505+00:00 stdout F [INFO] 10.217.0.74:51750 - 50242 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104433s 2025-08-13T20:08:18.501767740+00:00 stdout F [INFO] 10.217.0.74:45811 - 21551 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001103142s 2025-08-13T20:08:18.501767740+00:00 stdout F [INFO] 10.217.0.74:35630 - 41838 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010681s 2025-08-13T20:08:18.513550628+00:00 stdout F [INFO] 10.217.0.74:37388 - 59626 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000973008s 2025-08-13T20:08:18.513633050+00:00 stdout F [INFO] 10.217.0.74:41749 - 40718 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001201095s 2025-08-13T20:08:18.533399707+00:00 stdout F [INFO] 10.217.0.74:37546 - 7034 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001270477s 2025-08-13T20:08:18.533399707+00:00 stdout F [INFO] 10.217.0.74:39060 - 18619 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001359849s 2025-08-13T20:08:18.561114502+00:00 stdout F [INFO] 10.217.0.74:50716 - 20934 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001269167s 2025-08-13T20:08:18.561666278+00:00 stdout F [INFO] 10.217.0.74:41229 - 6523 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001590155s 2025-08-13T20:08:18.568039400+00:00 stdout F [INFO] 10.217.0.74:46810 - 19974 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00174046s 2025-08-13T20:08:18.568272437+00:00 stdout F [INFO] 10.217.0.74:52908 - 17620 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001644087s 2025-08-13T20:08:18.590841034+00:00 stdout F [INFO] 10.217.0.74:39127 - 36265 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000630128s 2025-08-13T20:08:18.591324348+00:00 stdout F [INFO] 10.217.0.74:40898 - 29496 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103056s 2025-08-13T20:08:18.616736236+00:00 stdout F [INFO] 10.217.0.74:36035 - 28386 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000574817s 2025-08-13T20:08:18.617176059+00:00 stdout F [INFO] 10.217.0.74:39294 - 50408 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000754191s 2025-08-13T20:08:18.653306395+00:00 stdout F [INFO] 10.217.0.74:49643 - 20012 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793343s 2025-08-13T20:08:18.653829190+00:00 stdout F [INFO] 10.217.0.74:49400 - 43367 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001490833s 2025-08-13T20:08:18.681602586+00:00 stdout F [INFO] 10.217.0.74:34632 - 19788 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000816623s 2025-08-13T20:08:18.681980997+00:00 stdout F [INFO] 10.217.0.74:60503 - 51226 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001154653s 2025-08-13T20:08:18.714427047+00:00 stdout F [INFO] 10.217.0.74:57813 - 21578 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103567s 2025-08-13T20:08:18.714427047+00:00 stdout F [INFO] 10.217.0.74:44347 - 19831 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001489173s 2025-08-13T20:08:18.728724417+00:00 stdout F [INFO] 10.217.0.74:56433 - 9101 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000562716s 2025-08-13T20:08:18.730372845+00:00 stdout F [INFO] 10.217.0.74:43298 - 41510 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668719s 2025-08-13T20:08:18.739250759+00:00 stdout F [INFO] 10.217.0.74:58344 - 16334 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000675279s 2025-08-13T20:08:18.739410844+00:00 stdout F [INFO] 10.217.0.74:55776 - 48013 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000848244s 2025-08-13T20:08:18.772723499+00:00 stdout F [INFO] 10.217.0.74:52116 - 40428 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000591977s 2025-08-13T20:08:18.772940755+00:00 stdout F [INFO] 10.217.0.74:56184 - 26165 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000535085s 2025-08-13T20:08:18.784148006+00:00 stdout F [INFO] 10.217.0.74:59633 - 62916 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071752s 2025-08-13T20:08:18.784242399+00:00 stdout F [INFO] 10.217.0.74:36176 - 20344 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000646268s 2025-08-13T20:08:18.829873817+00:00 stdout F [INFO] 10.217.0.74:53963 - 24242 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000631288s 2025-08-13T20:08:18.829873817+00:00 stdout F [INFO] 10.217.0.74:53356 - 40328 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000655019s 2025-08-13T20:08:18.842616683+00:00 stdout F [INFO] 10.217.0.74:44219 - 60953 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001134443s 2025-08-13T20:08:18.842616683+00:00 stdout F [INFO] 10.217.0.74:53785 - 15265 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001234925s 2025-08-13T20:08:18.888835948+00:00 stdout F [INFO] 10.217.0.74:45759 - 30313 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000572467s 2025-08-13T20:08:18.889302571+00:00 stdout F [INFO] 10.217.0.74:42183 - 64045 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001291417s 2025-08-13T20:08:18.889944740+00:00 stdout F [INFO] 10.217.0.74:53201 - 28698 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00066511s 2025-08-13T20:08:18.891499084+00:00 stdout F [INFO] 10.217.0.74:34657 - 40843 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001221505s 2025-08-13T20:08:18.901686146+00:00 stdout F [INFO] 10.217.0.74:59660 - 46487 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000527275s 2025-08-13T20:08:18.902438668+00:00 stdout F [INFO] 10.217.0.74:39170 - 27000 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000802423s 2025-08-13T20:08:18.943844235+00:00 stdout F [INFO] 10.217.0.74:47238 - 51574 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000864945s 2025-08-13T20:08:18.943958148+00:00 stdout F [INFO] 10.217.0.74:40533 - 50479 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000665359s 2025-08-13T20:08:18.945034189+00:00 stdout F [INFO] 10.217.0.74:59443 - 32212 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070142s 2025-08-13T20:08:18.945284256+00:00 stdout F [INFO] 10.217.0.74:44930 - 37490 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000979978s 2025-08-13T20:08:18.999972634+00:00 stdout F [INFO] 10.217.0.74:49226 - 54351 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069281s 2025-08-13T20:08:19.000010075+00:00 stdout F [INFO] 10.217.0.74:33374 - 675 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000548685s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:34487 - 13154 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845254s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:47712 - 4340 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000636399s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:35471 - 58428 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001321158s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:60689 - 274 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734062s 2025-08-13T20:08:19.111097770+00:00 stdout F [INFO] 10.217.0.74:46092 - 31714 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001013969s 2025-08-13T20:08:19.111729979+00:00 stdout F [INFO] 10.217.0.74:35110 - 49539 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000569996s 2025-08-13T20:08:19.123997620+00:00 stdout F [INFO] 10.217.0.74:50974 - 12345 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000832623s 2025-08-13T20:08:19.124532406+00:00 stdout F [INFO] 10.217.0.74:32788 - 22690 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001368809s 2025-08-13T20:08:19.173362416+00:00 stdout F [INFO] 10.217.0.74:57072 - 18749 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002017758s 2025-08-13T20:08:19.173960953+00:00 stdout F [INFO] 10.217.0.74:58011 - 64217 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003071098s 2025-08-13T20:08:19.176240778+00:00 stdout F [INFO] 10.217.0.74:49301 - 14570 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001588786s 2025-08-13T20:08:19.176240778+00:00 stdout F [INFO] 10.217.0.74:51980 - 56486 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001845263s 2025-08-13T20:08:19.231853013+00:00 stdout F [INFO] 10.217.0.74:42889 - 57106 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001345139s 2025-08-13T20:08:19.232280125+00:00 stdout F [INFO] 10.217.0.74:38556 - 62901 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001417371s 2025-08-13T20:08:19.234921181+00:00 stdout F [INFO] 10.217.0.74:36720 - 15603 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00138679s 2025-08-13T20:08:19.237827514+00:00 stdout F [INFO] 10.217.0.74:59480 - 37520 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104755s 2025-08-13T20:08:19.254981216+00:00 stdout F [INFO] 10.217.0.74:36818 - 57438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000951858s 2025-08-13T20:08:19.254981216+00:00 stdout F [INFO] 10.217.0.74:55824 - 7717 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001288587s 2025-08-13T20:08:19.287223030+00:00 stdout F [INFO] 10.217.0.74:45490 - 37893 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742451s 2025-08-13T20:08:19.288857277+00:00 stdout F [INFO] 10.217.0.74:54329 - 23127 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000895675s 2025-08-13T20:08:19.309303843+00:00 stdout F [INFO] 10.217.0.74:46021 - 745 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000624868s 2025-08-13T20:08:19.309303843+00:00 stdout F [INFO] 10.217.0.74:60525 - 34491 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000919096s 2025-08-13T20:08:19.344323977+00:00 stdout F [INFO] 10.217.0.74:35478 - 40915 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001874983s 2025-08-13T20:08:19.344323977+00:00 stdout F [INFO] 10.217.0.74:34458 - 1330 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002321437s 2025-08-13T20:08:19.404959636+00:00 stdout F [INFO] 10.217.0.74:49339 - 57172 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000725511s 2025-08-13T20:08:19.404959636+00:00 stdout F [INFO] 10.217.0.74:42428 - 29004 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000939657s 2025-08-13T20:08:19.440233127+00:00 stdout F [INFO] 10.217.0.74:34899 - 32956 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001207635s 2025-08-13T20:08:19.440233127+00:00 stdout F [INFO] 10.217.0.74:48635 - 22276 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000980108s 2025-08-13T20:08:19.458175032+00:00 stdout F [INFO] 10.217.0.74:57975 - 38076 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000981929s 2025-08-13T20:08:19.458175032+00:00 stdout F [INFO] 10.217.0.74:46325 - 41507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000946747s 2025-08-13T20:08:19.491980691+00:00 stdout F [INFO] 10.217.0.74:43809 - 28638 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070041s 2025-08-13T20:08:19.495368818+00:00 stdout F [INFO] 10.217.0.74:54058 - 23406 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003173411s 2025-08-13T20:08:19.499145456+00:00 stdout F [INFO] 10.217.0.74:32823 - 17562 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000504084s 2025-08-13T20:08:19.499145456+00:00 stdout F [INFO] 10.217.0.74:56726 - 343 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000561356s 2025-08-13T20:08:19.514751294+00:00 stdout F [INFO] 10.217.0.74:41593 - 63316 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069s 2025-08-13T20:08:19.514751294+00:00 stdout F [INFO] 10.217.0.74:42069 - 26740 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000919106s 2025-08-13T20:08:19.557246502+00:00 stdout F [INFO] 10.217.0.74:36148 - 26416 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000834204s 2025-08-13T20:08:19.557246502+00:00 stdout F [INFO] 10.217.0.74:53623 - 53928 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612508s 2025-08-13T20:08:19.560498005+00:00 stdout F [INFO] 10.217.0.74:57862 - 55531 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001326568s 2025-08-13T20:08:19.561442072+00:00 stdout F [INFO] 10.217.0.74:46271 - 32034 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001072861s 2025-08-13T20:08:19.574042364+00:00 stdout F [INFO] 10.217.0.74:60622 - 623 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000890776s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:44083 - 31197 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001470792s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:33941 - 39911 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001874684s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:52521 - 63095 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001776991s 2025-08-13T20:08:19.619598280+00:00 stdout F [INFO] 10.217.0.74:57139 - 31701 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001597296s 2025-08-13T20:08:19.619631901+00:00 stdout F [INFO] 10.217.0.74:51704 - 25051 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001798212s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:48185 - 21897 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004593802s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:42680 - 51080 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002749438s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:53337 - 43838 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002883242s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:49462 - 22842 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004737485s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:47487 - 10576 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000635059s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:55514 - 64163 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001015499s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:59295 - 60158 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657648s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:56438 - 61941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000965287s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:51878 - 49945 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657199s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:36094 - 50966 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001020449s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:55603 - 60901 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069649s 2025-08-13T20:08:19.752212632+00:00 stdout F [INFO] 10.217.0.74:56998 - 15524 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000435012s 2025-08-13T20:08:19.809153344+00:00 stdout F [INFO] 10.217.0.74:53401 - 39051 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001184504s 2025-08-13T20:08:19.809153344+00:00 stdout F [INFO] 10.217.0.74:51215 - 21865 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000982628s 2025-08-13T20:08:19.812235183+00:00 stdout F [INFO] 10.217.0.74:40889 - 52437 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000586647s 2025-08-13T20:08:19.813729486+00:00 stdout F [INFO] 10.217.0.74:48421 - 25312 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001891555s 2025-08-13T20:08:19.863663957+00:00 stdout F [INFO] 10.217.0.74:60962 - 58177 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000864505s 2025-08-13T20:08:19.863663957+00:00 stdout F [INFO] 10.217.0.74:53141 - 18297 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000925797s 2025-08-13T20:08:19.920427445+00:00 stdout F [INFO] 10.217.0.74:46406 - 6689 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001152653s 2025-08-13T20:08:19.921107744+00:00 stdout F [INFO] 10.217.0.74:52820 - 53416 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558376s 2025-08-13T20:08:19.922021890+00:00 stdout F [INFO] 10.217.0.74:47651 - 17881 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001226225s 2025-08-13T20:08:19.922523145+00:00 stdout F [INFO] 10.217.0.74:52083 - 21131 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001890835s 2025-08-13T20:08:19.964350344+00:00 stdout F [INFO] 10.217.0.74:46219 - 37278 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069833s 2025-08-13T20:08:19.964350344+00:00 stdout F [INFO] 10.217.0.74:56445 - 19602 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000983288s 2025-08-13T20:08:19.980119026+00:00 stdout F [INFO] 10.217.0.74:47896 - 63739 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002735378s 2025-08-13T20:08:19.980119026+00:00 stdout F [INFO] 10.217.0.74:53528 - 63966 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001429141s 2025-08-13T20:08:19.982137164+00:00 stdout F [INFO] 10.217.0.74:36392 - 821 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000785813s 2025-08-13T20:08:19.983587326+00:00 stdout F [INFO] 10.217.0.74:37348 - 50385 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001475423s 2025-08-13T20:08:20.026518367+00:00 stdout F [INFO] 10.217.0.74:53987 - 20951 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000582706s 2025-08-13T20:08:20.026565948+00:00 stdout F [INFO] 10.217.0.74:59175 - 57290 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000483584s 2025-08-13T20:08:20.037504422+00:00 stdout F [INFO] 10.217.0.74:59422 - 54552 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000653699s 2025-08-13T20:08:20.037552413+00:00 stdout F [INFO] 10.217.0.74:42804 - 8086 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626458s 2025-08-13T20:08:20.045226223+00:00 stdout F [INFO] 10.217.0.74:38048 - 34636 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000587927s 2025-08-13T20:08:20.045523631+00:00 stdout F [INFO] 10.217.0.74:46147 - 23800 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000991229s 2025-08-13T20:08:20.092669803+00:00 stdout F [INFO] 10.217.0.74:42603 - 6548 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000541455s 2025-08-13T20:08:20.092669803+00:00 stdout F [INFO] 10.217.0.74:59176 - 45031 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000920527s 2025-08-13T20:08:20.132848905+00:00 stdout F [INFO] 10.217.0.74:39842 - 39036 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000801383s 2025-08-13T20:08:20.132848905+00:00 stdout F [INFO] 10.217.0.74:55461 - 20359 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000863444s 2025-08-13T20:08:20.148882125+00:00 stdout F [INFO] 10.217.0.74:48741 - 47367 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000800953s 2025-08-13T20:08:20.148882125+00:00 stdout F [INFO] 10.217.0.74:49669 - 13644 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000790093s 2025-08-13T20:08:20.188878572+00:00 stdout F [INFO] 10.217.0.74:45619 - 23079 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000639578s 2025-08-13T20:08:20.189544251+00:00 stdout F [INFO] 10.217.0.74:35659 - 16828 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000816493s 2025-08-13T20:08:20.203331556+00:00 stdout F [INFO] 10.217.0.74:54643 - 54596 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001219905s 2025-08-13T20:08:20.203816790+00:00 stdout F [INFO] 10.217.0.74:44079 - 24799 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001638507s 2025-08-13T20:08:20.222126875+00:00 stdout F [INFO] 10.217.0.74:35728 - 57603 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000833413s 2025-08-13T20:08:20.223450793+00:00 stdout F [INFO] 10.217.0.74:51625 - 51357 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001006178s 2025-08-13T20:08:20.224555275+00:00 stdout F [INFO] 10.217.0.74:55727 - 11533 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000638348s 2025-08-13T20:08:20.225231624+00:00 stdout F [INFO] 10.217.0.74:47336 - 45122 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000887886s 2025-08-13T20:08:20.234617923+00:00 stdout F [INFO] 10.217.0.74:48099 - 10617 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001363009s 2025-08-13T20:08:20.234617923+00:00 stdout F [INFO] 10.217.0.74:40174 - 24904 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001339848s 2025-08-13T20:08:20.253737411+00:00 stdout F [INFO] 10.217.0.74:42674 - 40263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000679899s 2025-08-13T20:08:20.254765271+00:00 stdout F [INFO] 10.217.0.74:34954 - 36129 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000766252s 2025-08-13T20:08:20.266526848+00:00 stdout F [INFO] 10.217.0.74:47334 - 28205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001164104s 2025-08-13T20:08:20.266742764+00:00 stdout F [INFO] 10.217.0.74:53173 - 33111 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001522993s 2025-08-13T20:08:20.289119096+00:00 stdout F [INFO] 10.217.0.74:58640 - 24864 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070667s 2025-08-13T20:08:20.289391903+00:00 stdout F [INFO] 10.217.0.74:50835 - 5956 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740561s 2025-08-13T20:08:20.301440449+00:00 stdout F [INFO] 10.217.0.74:49383 - 62464 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104861s 2025-08-13T20:08:20.301440449+00:00 stdout F [INFO] 10.217.0.74:36017 - 30424 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001671858s 2025-08-13T20:08:20.316057878+00:00 stdout F [INFO] 10.217.0.74:36367 - 62148 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000437042s 2025-08-13T20:08:20.316353217+00:00 stdout F [INFO] 10.217.0.74:56467 - 60074 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000994989s 2025-08-13T20:08:20.347647934+00:00 stdout F [INFO] 10.217.0.74:43603 - 36237 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001694369s 2025-08-13T20:08:20.347647934+00:00 stdout F [INFO] 10.217.0.74:58264 - 25764 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002189132s 2025-08-13T20:08:20.374389510+00:00 stdout F [INFO] 10.217.0.74:40516 - 61985 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000824754s 2025-08-13T20:08:20.374389510+00:00 stdout F [INFO] 10.217.0.74:41851 - 23707 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000762832s 2025-08-13T20:08:20.404057201+00:00 stdout F [INFO] 10.217.0.74:45825 - 36179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000787853s 2025-08-13T20:08:20.404187815+00:00 stdout F [INFO] 10.217.0.74:60727 - 34978 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001304217s 2025-08-13T20:08:20.433333370+00:00 stdout F [INFO] 10.217.0.74:47313 - 41258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001661527s 2025-08-13T20:08:20.433333370+00:00 stdout F [INFO] 10.217.0.74:37101 - 36275 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001336558s 2025-08-13T20:08:20.434687389+00:00 stdout F [INFO] 10.217.0.74:50737 - 46844 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000860355s 2025-08-13T20:08:20.434687389+00:00 stdout F [INFO] 10.217.0.74:34462 - 31226 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001051541s 2025-08-13T20:08:20.468028635+00:00 stdout F [INFO] 10.217.0.74:57365 - 25742 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000990839s 2025-08-13T20:08:20.469755305+00:00 stdout F [INFO] 10.217.0.74:39107 - 47850 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001581315s 2025-08-13T20:08:20.477362013+00:00 stdout F [INFO] 10.217.0.74:47989 - 52317 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649848s 2025-08-13T20:08:20.477449415+00:00 stdout F [INFO] 10.217.0.74:40591 - 60541 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000771862s 2025-08-13T20:08:20.510911725+00:00 stdout F [INFO] 10.217.0.74:48028 - 3073 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000765232s 2025-08-13T20:08:20.510911725+00:00 stdout F [INFO] 10.217.0.74:46573 - 64702 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001097651s 2025-08-13T20:08:20.530874907+00:00 stdout F [INFO] 10.217.0.74:41963 - 44907 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002360568s 2025-08-13T20:08:20.531193616+00:00 stdout F [INFO] 10.217.0.74:53470 - 27807 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002826101s 2025-08-13T20:08:20.544350043+00:00 stdout F [INFO] 10.217.0.74:53647 - 29671 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009632836s 2025-08-13T20:08:20.544350043+00:00 stdout F [INFO] 10.217.0.74:57620 - 55875 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009781851s 2025-08-13T20:08:20.570202815+00:00 stdout F [INFO] 10.217.0.74:44231 - 53021 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651088s 2025-08-13T20:08:20.571462151+00:00 stdout F [INFO] 10.217.0.74:42901 - 59801 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768212s 2025-08-13T20:08:20.576648890+00:00 stdout F [INFO] 10.217.0.74:57027 - 64889 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000545476s 2025-08-13T20:08:20.576802884+00:00 stdout F [INFO] 10.217.0.74:33614 - 61489 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000609417s 2025-08-13T20:08:20.588171280+00:00 stdout F [INFO] 10.217.0.74:49097 - 19093 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000917837s 2025-08-13T20:08:20.588599452+00:00 stdout F [INFO] 10.217.0.74:55195 - 10285 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001500893s 2025-08-13T20:08:20.629764072+00:00 stdout F [INFO] 10.217.0.74:49525 - 63178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000586707s 2025-08-13T20:08:20.630031790+00:00 stdout F [INFO] 10.217.0.74:43936 - 49048 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000670439s 2025-08-13T20:08:20.633615463+00:00 stdout F [INFO] 10.217.0.74:56815 - 9782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000677729s 2025-08-13T20:08:20.633615463+00:00 stdout F [INFO] 10.217.0.74:50379 - 52189 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158163s 2025-08-13T20:08:20.648194051+00:00 stdout F [INFO] 10.217.0.74:40370 - 31715 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001159454s 2025-08-13T20:08:20.648194051+00:00 stdout F [INFO] 10.217.0.74:46407 - 49932 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001255207s 2025-08-13T20:08:20.651815225+00:00 stdout F [INFO] 10.217.0.74:39433 - 20178 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001055991s 2025-08-13T20:08:20.656983383+00:00 stdout F [INFO] 10.217.0.74:44582 - 39095 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140599s 2025-08-13T20:08:20.665125626+00:00 stdout F [INFO] 10.217.0.74:44221 - 55255 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734341s 2025-08-13T20:08:20.665251720+00:00 stdout F [INFO] 10.217.0.74:39018 - 60238 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000659679s 2025-08-13T20:08:20.689938068+00:00 stdout F [INFO] 10.217.0.74:41391 - 51096 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002542943s 2025-08-13T20:08:20.690173254+00:00 stdout F [INFO] 10.217.0.74:47992 - 23554 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934757s 2025-08-13T20:08:20.712100273+00:00 stdout F [INFO] 10.217.0.74:59180 - 1034 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000846694s 2025-08-13T20:08:20.712746112+00:00 stdout F [INFO] 10.217.0.74:50309 - 3207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000605618s 2025-08-13T20:08:20.722083729+00:00 stdout F [INFO] 10.217.0.74:33841 - 24481 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069825s 2025-08-13T20:08:20.722843781+00:00 stdout F [INFO] 10.217.0.74:41004 - 7845 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000935487s 2025-08-13T20:08:20.749768843+00:00 stdout F [INFO] 10.217.0.74:35507 - 16179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000877175s 2025-08-13T20:08:20.753088058+00:00 stdout F [INFO] 10.217.0.74:59183 - 20585 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753702s 2025-08-13T20:08:20.767614805+00:00 stdout F [INFO] 10.217.0.74:58570 - 41317 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000853814s 2025-08-13T20:08:20.767841851+00:00 stdout F [INFO] 10.217.0.74:55816 - 52170 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000998769s 2025-08-13T20:08:20.814605662+00:00 stdout F [INFO] 10.217.0.74:43412 - 35534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070546s 2025-08-13T20:08:20.815200969+00:00 stdout F [INFO] 10.217.0.74:36960 - 17945 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001435021s 2025-08-13T20:08:20.829635843+00:00 stdout F [INFO] 10.217.0.74:36040 - 9387 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000743341s 2025-08-13T20:08:20.829635843+00:00 stdout F [INFO] 10.217.0.74:42363 - 34278 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615428s 2025-08-13T20:08:20.873258044+00:00 stdout F [INFO] 10.217.0.74:57034 - 44179 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813913s 2025-08-13T20:08:20.873319845+00:00 stdout F [INFO] 10.217.0.74:46023 - 9625 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001219815s 2025-08-13T20:08:20.899992170+00:00 stdout F [INFO] 10.217.0.74:44255 - 47347 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000738771s 2025-08-13T20:08:20.900095283+00:00 stdout F [INFO] 10.217.0.74:45113 - 757 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000606878s 2025-08-13T20:08:20.922948738+00:00 stdout F [INFO] 10.217.0.74:43853 - 47584 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000456963s 2025-08-13T20:08:20.923684229+00:00 stdout F [INFO] 10.217.0.74:59554 - 41272 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001007579s 2025-08-13T20:08:20.932553414+00:00 stdout F [INFO] 10.217.0.74:51472 - 20194 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000796723s 2025-08-13T20:08:20.932639126+00:00 stdout F [INFO] 10.217.0.74:42302 - 22466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105013s 2025-08-13T20:08:20.958865018+00:00 stdout F [INFO] 10.217.0.74:34581 - 22745 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071544s 2025-08-13T20:08:20.959120615+00:00 stdout F [INFO] 10.217.0.74:51406 - 23089 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000901286s 2025-08-13T20:08:20.967416073+00:00 stdout F [INFO] 10.217.0.74:45014 - 37578 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870665s 2025-08-13T20:08:20.967648760+00:00 stdout F [INFO] 10.217.0.74:47564 - 64455 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001078651s 2025-08-13T20:08:20.984656478+00:00 stdout F [INFO] 10.217.0.74:34397 - 48184 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003339386s 2025-08-13T20:08:20.984656478+00:00 stdout F [INFO] 10.217.0.74:57204 - 2685 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003514661s 2025-08-13T20:08:20.995082076+00:00 stdout F [INFO] 10.217.0.74:55637 - 63608 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068953s 2025-08-13T20:08:20.996061235+00:00 stdout F [INFO] 10.217.0.74:60552 - 45167 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000512405s 2025-08-13T20:08:21.027181477+00:00 stdout F [INFO] 10.217.0.74:45214 - 44061 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001082451s 2025-08-13T20:08:21.028565126+00:00 stdout F [INFO] 10.217.0.74:34987 - 36881 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000870085s 2025-08-13T20:08:21.029176524+00:00 stdout F [INFO] 10.217.0.74:57687 - 39493 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000792063s 2025-08-13T20:08:21.029436942+00:00 stdout F [INFO] 10.217.0.74:48392 - 28980 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000893525s 2025-08-13T20:08:21.049611960+00:00 stdout F [INFO] 10.217.0.74:57286 - 65085 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105317s 2025-08-13T20:08:21.049670302+00:00 stdout F [INFO] 10.217.0.74:43053 - 12216 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657459s 2025-08-13T20:08:21.095700921+00:00 stdout F [INFO] 10.217.0.74:43606 - 1746 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001463912s 2025-08-13T20:08:21.095700921+00:00 stdout F [INFO] 10.217.0.74:59907 - 36665 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000466304s 2025-08-13T20:08:21.097937406+00:00 stdout F [INFO] 10.217.0.74:54768 - 998 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000672219s 2025-08-13T20:08:21.097963896+00:00 stdout F [INFO] 10.217.0.74:45132 - 28321 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000950917s 2025-08-13T20:08:21.156295639+00:00 stdout F [INFO] 10.217.0.74:53052 - 4292 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813174s 2025-08-13T20:08:21.157350069+00:00 stdout F [INFO] 10.217.0.74:43527 - 27937 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000610728s 2025-08-13T20:08:21.159719867+00:00 stdout F [INFO] 10.217.0.74:35243 - 38330 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001566785s 2025-08-13T20:08:21.159719867+00:00 stdout F [INFO] 10.217.0.74:44258 - 26336 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001432661s 2025-08-13T20:08:21.214957301+00:00 stdout F [INFO] 10.217.0.74:33663 - 52704 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000819633s 2025-08-13T20:08:21.215178627+00:00 stdout F [INFO] 10.217.0.74:54171 - 1 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000600528s 2025-08-13T20:08:21.251848858+00:00 stdout F [INFO] 10.217.0.74:41906 - 5676 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001237486s 2025-08-13T20:08:21.253050703+00:00 stdout F [INFO] 10.217.0.74:47230 - 11157 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001222845s 2025-08-13T20:08:21.276477684+00:00 stdout F [INFO] 10.217.0.74:36698 - 36123 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000664929s 2025-08-13T20:08:21.277333799+00:00 stdout F [INFO] 10.217.0.74:38715 - 787 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000642098s 2025-08-13T20:08:21.277816763+00:00 stdout F [INFO] 10.217.0.74:41323 - 41761 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000915877s 2025-08-13T20:08:21.278021229+00:00 stdout F [INFO] 10.217.0.74:44740 - 7711 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001326388s 2025-08-13T20:08:21.318615902+00:00 stdout F [INFO] 10.217.0.74:47736 - 41622 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001616496s 2025-08-13T20:08:21.318761397+00:00 stdout F [INFO] 10.217.0.74:60351 - 34419 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002020678s 2025-08-13T20:08:21.348972883+00:00 stdout F [INFO] 10.217.0.74:51905 - 53438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003447209s 2025-08-13T20:08:21.350017763+00:00 stdout F [INFO] 10.217.0.74:53640 - 36814 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004793708s 2025-08-13T20:08:21.350343552+00:00 stdout F [INFO] 10.217.0.74:53486 - 4265 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001804502s 2025-08-13T20:08:21.352277748+00:00 stdout F [INFO] 10.217.0.74:42630 - 31454 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002065249s 2025-08-13T20:08:21.381030062+00:00 stdout F [INFO] 10.217.0.74:40627 - 8986 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000994388s 2025-08-13T20:08:21.381292199+00:00 stdout F [INFO] 10.217.0.74:56655 - 62455 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000911647s 2025-08-13T20:08:21.391505122+00:00 stdout F [INFO] 10.217.0.74:38469 - 59137 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000469634s 2025-08-13T20:08:21.391505122+00:00 stdout F [INFO] 10.217.0.74:33175 - 27614 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000449763s 2025-08-13T20:08:21.406906914+00:00 stdout F [INFO] 10.217.0.74:59333 - 37348 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001283207s 2025-08-13T20:08:21.407047058+00:00 stdout F [INFO] 10.217.0.74:47742 - 60191 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001129573s 2025-08-13T20:08:21.443857173+00:00 stdout F [INFO] 10.217.0.74:40528 - 41650 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000792613s 2025-08-13T20:08:21.444235444+00:00 stdout F [INFO] 10.217.0.74:47840 - 21143 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003349207s 2025-08-13T20:08:21.451058140+00:00 stdout F [INFO] 10.217.0.74:41261 - 13281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775202s 2025-08-13T20:08:21.451058140+00:00 stdout F [INFO] 10.217.0.74:45851 - 14398 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826563s 2025-08-13T20:08:21.467184651+00:00 stdout F [INFO] 10.217.0.74:51266 - 7856 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000544876s 2025-08-13T20:08:21.467329675+00:00 stdout F [INFO] 10.217.0.74:42910 - 23102 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009659s 2025-08-13T20:08:21.507686802+00:00 stdout F [INFO] 10.217.0.74:57006 - 16266 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000783853s 2025-08-13T20:08:21.507686802+00:00 stdout F [INFO] 10.217.0.74:59004 - 2798 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070864s 2025-08-13T20:08:21.508597748+00:00 stdout F [INFO] 10.217.0.74:36516 - 35450 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000865265s 2025-08-13T20:08:21.509967498+00:00 stdout F [INFO] 10.217.0.74:58294 - 64367 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001360009s 2025-08-13T20:08:21.524058432+00:00 stdout F [INFO] 10.217.0.74:57622 - 23682 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000840354s 2025-08-13T20:08:21.524096323+00:00 stdout F [INFO] 10.217.0.74:36061 - 43951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754442s 2025-08-13T20:08:21.567555409+00:00 stdout F [INFO] 10.217.0.74:46832 - 53516 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000781792s 2025-08-13T20:08:21.567607830+00:00 stdout F [INFO] 10.217.0.74:59571 - 25182 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000803143s 2025-08-13T20:08:21.578841352+00:00 stdout F [INFO] 10.217.0.74:35014 - 55842 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000644738s 2025-08-13T20:08:21.581847819+00:00 stdout F [INFO] 10.217.0.74:55775 - 55026 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102618s 2025-08-13T20:08:21.627158318+00:00 stdout F [INFO] 10.217.0.74:52303 - 42393 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000750662s 2025-08-13T20:08:21.627158318+00:00 stdout F [INFO] 10.217.0.74:59691 - 22958 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000549025s 2025-08-13T20:08:21.641962452+00:00 stdout F [INFO] 10.217.0.74:54771 - 8887 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000581156s 2025-08-13T20:08:21.641962452+00:00 stdout F [INFO] 10.217.0.74:49009 - 37894 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104908s 2025-08-13T20:08:21.681094214+00:00 stdout F [INFO] 10.217.0.74:37748 - 51751 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001596456s 2025-08-13T20:08:21.681628369+00:00 stdout F [INFO] 10.217.0.74:36028 - 57991 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002143281s 2025-08-13T20:08:21.699708158+00:00 stdout F [INFO] 10.217.0.74:35438 - 20392 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000583846s 2025-08-13T20:08:21.699708158+00:00 stdout F [INFO] 10.217.0.74:45645 - 53652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000513624s 2025-08-13T20:08:21.740565719+00:00 stdout F [INFO] 10.217.0.74:43636 - 21088 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001104292s 2025-08-13T20:08:21.740881948+00:00 stdout F [INFO] 10.217.0.74:60974 - 8141 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001300058s 2025-08-13T20:08:21.753051197+00:00 stdout F [INFO] 10.217.0.74:58618 - 16057 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001003989s 2025-08-13T20:08:21.756076784+00:00 stdout F [INFO] 10.217.0.74:43844 - 23921 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004007895s 2025-08-13T20:08:21.758245566+00:00 stdout F [INFO] 10.217.0.74:47124 - 59391 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936547s 2025-08-13T20:08:21.758441132+00:00 stdout F [INFO] 10.217.0.74:49102 - 43624 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001713939s 2025-08-13T20:08:21.772176476+00:00 stdout F [INFO] 10.217.0.74:46657 - 4769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000960378s 2025-08-13T20:08:21.772284439+00:00 stdout F [INFO] 10.217.0.74:35549 - 18077 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001296937s 2025-08-13T20:08:21.781389130+00:00 stdout F [INFO] 10.217.0.74:35033 - 499 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000914066s 2025-08-13T20:08:21.781597126+00:00 stdout F [INFO] 10.217.0.74:52691 - 12784 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000996999s 2025-08-13T20:08:21.795328699+00:00 stdout F [INFO] 10.217.0.74:39043 - 52094 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000585827s 2025-08-13T20:08:21.795527995+00:00 stdout F [INFO] 10.217.0.74:38492 - 24303 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870904s 2025-08-13T20:08:21.812769619+00:00 stdout F [INFO] 10.217.0.74:44059 - 37291 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002576694s 2025-08-13T20:08:21.812980465+00:00 stdout F [INFO] 10.217.0.74:35021 - 5072 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002731668s 2025-08-13T20:08:21.815246030+00:00 stdout F [INFO] 10.217.0.74:51931 - 8604 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001972337s 2025-08-13T20:08:21.815246030+00:00 stdout F [INFO] 10.217.0.74:46159 - 49185 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002262854s 2025-08-13T20:08:21.836761357+00:00 stdout F [INFO] 10.217.0.74:41242 - 33438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001621987s 2025-08-13T20:08:21.837088427+00:00 stdout F [INFO] 10.217.0.74:56449 - 54193 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0020712s 2025-08-13T20:08:21.852242801+00:00 stdout F [INFO] 10.217.0.74:40528 - 28590 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000773742s 2025-08-13T20:08:21.853209469+00:00 stdout F [INFO] 10.217.0.74:40518 - 52546 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001800842s 2025-08-13T20:08:21.874935372+00:00 stdout F [INFO] 10.217.0.74:48787 - 4883 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742461s 2025-08-13T20:08:21.875084876+00:00 stdout F [INFO] 10.217.0.74:53900 - 17729 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001129183s 2025-08-13T20:08:21.909328318+00:00 stdout F [INFO] 10.217.0.74:58437 - 7710 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000685829s 2025-08-13T20:08:21.909587265+00:00 stdout F [INFO] 10.217.0.74:41177 - 8284 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000807493s 2025-08-13T20:08:21.931037270+00:00 stdout F [INFO] 10.217.0.74:50656 - 30424 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070117s 2025-08-13T20:08:21.931133773+00:00 stdout F [INFO] 10.217.0.74:54700 - 47502 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001229895s 2025-08-13T20:08:21.968340560+00:00 stdout F [INFO] 10.217.0.74:51881 - 37498 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001640137s 2025-08-13T20:08:21.968429062+00:00 stdout F [INFO] 10.217.0.74:40751 - 48109 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001837213s 2025-08-13T20:08:21.973223730+00:00 stdout F [INFO] 10.217.0.74:47528 - 5737 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527265s 2025-08-13T20:08:21.973223730+00:00 stdout F [INFO] 10.217.0.74:36322 - 65342 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000478184s 2025-08-13T20:08:21.979201161+00:00 stdout F [INFO] 10.217.0.74:40156 - 14196 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000723381s 2025-08-13T20:08:21.980020715+00:00 stdout F [INFO] 10.217.0.74:46359 - 39016 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001127842s 2025-08-13T20:08:21.982708972+00:00 stdout F [INFO] 10.217.0.74:52956 - 1784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001235185s 2025-08-13T20:08:21.983336250+00:00 stdout F [INFO] 10.217.0.74:49008 - 65323 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001339569s 2025-08-13T20:08:22.031831640+00:00 stdout F [INFO] 10.217.0.74:59415 - 64727 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000711211s 2025-08-13T20:08:22.031917433+00:00 stdout F [INFO] 10.217.0.74:54528 - 22049 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001002928s 2025-08-13T20:08:22.042281320+00:00 stdout F [INFO] 10.217.0.74:35159 - 49207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068618s 2025-08-13T20:08:22.042378513+00:00 stdout F [INFO] 10.217.0.74:54247 - 9704 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000502274s 2025-08-13T20:08:22.044223495+00:00 stdout F [INFO] 10.217.0.74:56155 - 28029 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001385s 2025-08-13T20:08:22.044223495+00:00 stdout F [INFO] 10.217.0.74:49841 - 61991 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102096s 2025-08-13T20:08:22.083609545+00:00 stdout F [INFO] 10.217.0.74:46141 - 31380 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000932726s 2025-08-13T20:08:22.084075808+00:00 stdout F [INFO] 10.217.0.74:47850 - 15680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001004049s 2025-08-13T20:08:22.098749489+00:00 stdout F [INFO] 10.217.0.74:36090 - 7351 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000636378s 2025-08-13T20:08:22.099014486+00:00 stdout F [INFO] 10.217.0.74:38597 - 21700 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000812803s 2025-08-13T20:08:22.139299391+00:00 stdout F [INFO] 10.217.0.74:39378 - 44582 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000653949s 2025-08-13T20:08:22.139647061+00:00 stdout F [INFO] 10.217.0.74:36225 - 22223 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001251976s 2025-08-13T20:08:22.155679041+00:00 stdout F [INFO] 10.217.0.74:37035 - 15705 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000885045s 2025-08-13T20:08:22.158742589+00:00 stdout F [INFO] 10.217.0.74:35138 - 51780 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001082841s 2025-08-13T20:08:22.179351120+00:00 stdout F [INFO] 10.217.0.74:45734 - 36900 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000978308s 2025-08-13T20:08:22.179593517+00:00 stdout F [INFO] 10.217.0.74:40799 - 54997 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001279057s 2025-08-13T20:08:22.201436993+00:00 stdout F [INFO] 10.217.0.74:44396 - 55140 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001330658s 2025-08-13T20:08:22.201560556+00:00 stdout F [INFO] 10.217.0.74:33447 - 1163 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00139551s 2025-08-13T20:08:22.214420215+00:00 stdout F [INFO] 10.217.0.74:52628 - 34934 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000763552s 2025-08-13T20:08:22.214820137+00:00 stdout F [INFO] 10.217.0.74:52648 - 24657 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001467352s 2025-08-13T20:08:22.239860595+00:00 stdout F [INFO] 10.217.0.74:51385 - 16973 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105344s 2025-08-13T20:08:22.239860595+00:00 stdout F [INFO] 10.217.0.74:49919 - 34860 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001381359s 2025-08-13T20:08:22.261126194+00:00 stdout F [INFO] 10.217.0.74:43526 - 6054 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000821284s 2025-08-13T20:08:22.261374731+00:00 stdout F [INFO] 10.217.0.74:40719 - 25417 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001145663s 2025-08-13T20:08:22.267248970+00:00 stdout F [INFO] 10.217.0.74:44331 - 53040 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002694397s 2025-08-13T20:08:22.267341953+00:00 stdout F [INFO] 10.217.0.74:56326 - 34656 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002656876s 2025-08-13T20:08:22.276757652+00:00 stdout F [INFO] 10.217.0.74:52375 - 43508 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001356439s 2025-08-13T20:08:22.276943238+00:00 stdout F [INFO] 10.217.0.74:58513 - 13901 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001270606s 2025-08-13T20:08:22.332239413+00:00 stdout F [INFO] 10.217.0.74:41229 - 42767 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001075081s 2025-08-13T20:08:22.332239413+00:00 stdout F [INFO] 10.217.0.74:52021 - 30200 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000567466s 2025-08-13T20:08:22.398007929+00:00 stdout F [INFO] 10.217.0.74:47267 - 62452 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001238796s 2025-08-13T20:08:22.398975957+00:00 stdout F [INFO] 10.217.0.74:57833 - 38362 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000962667s 2025-08-13T20:08:22.399740409+00:00 stdout F [INFO] 10.217.0.74:38237 - 53887 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069585s 2025-08-13T20:08:22.401736936+00:00 stdout F [INFO] 10.217.0.74:44127 - 22403 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105724s 2025-08-13T20:08:22.433097905+00:00 stdout F [INFO] 10.217.0.74:52965 - 42770 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000876085s 2025-08-13T20:08:22.434533936+00:00 stdout F [INFO] 10.217.0.74:58613 - 49472 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002353548s 2025-08-13T20:08:22.455585540+00:00 stdout F [INFO] 10.217.0.74:60674 - 20048 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000941467s 2025-08-13T20:08:22.463268320+00:00 stdout F [INFO] 10.217.0.74:41587 - 40893 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000694699s 2025-08-13T20:08:22.466577205+00:00 stdout F [INFO] 10.217.0.74:46766 - 32712 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000801933s 2025-08-13T20:08:22.466577205+00:00 stdout F [INFO] 10.217.0.74:45096 - 7155 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000899256s 2025-08-13T20:08:22.510876515+00:00 stdout F [INFO] 10.217.0.74:54552 - 47724 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005303202s 2025-08-13T20:08:22.511946716+00:00 stdout F [INFO] 10.217.0.74:45434 - 9918 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006879638s 2025-08-13T20:08:22.519684217+00:00 stdout F [INFO] 10.217.0.74:37786 - 11215 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000952157s 2025-08-13T20:08:22.520166181+00:00 stdout F [INFO] 10.217.0.74:36296 - 46459 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757792s 2025-08-13T20:08:22.522077546+00:00 stdout F [INFO] 10.217.0.74:51580 - 63207 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000449953s 2025-08-13T20:08:22.522077546+00:00 stdout F [INFO] 10.217.0.74:40251 - 25049 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000519685s 2025-08-13T20:08:22.576696652+00:00 stdout F [INFO] 10.217.0.74:57622 - 63627 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069116s 2025-08-13T20:08:22.577608658+00:00 stdout F [INFO] 10.217.0.74:45888 - 2144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001443201s 2025-08-13T20:08:22.636625250+00:00 stdout F [INFO] 10.217.0.74:52830 - 1253 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813273s 2025-08-13T20:08:22.636625250+00:00 stdout F [INFO] 10.217.0.74:37953 - 41778 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001006499s 2025-08-13T20:08:22.638191015+00:00 stdout F [INFO] 10.217.0.74:41117 - 48794 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000588647s 2025-08-13T20:08:22.638569146+00:00 stdout F [INFO] 10.217.0.74:41318 - 41580 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000748311s 2025-08-13T20:08:22.648858241+00:00 stdout F [INFO] 10.217.0.74:45123 - 27094 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000730341s 2025-08-13T20:08:22.649838699+00:00 stdout F [INFO] 10.217.0.74:38611 - 55493 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001436781s 2025-08-13T20:08:22.664156650+00:00 stdout F [INFO] 10.217.0.74:43102 - 36070 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070239s 2025-08-13T20:08:22.664418027+00:00 stdout F [INFO] 10.217.0.74:32935 - 60085 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615287s 2025-08-13T20:08:22.691543585+00:00 stdout F [INFO] 10.217.0.74:57635 - 58953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001525044s 2025-08-13T20:08:22.691858614+00:00 stdout F [INFO] 10.217.0.74:50777 - 5102 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139997s 2025-08-13T20:08:22.693921343+00:00 stdout F [INFO] 10.217.0.74:34461 - 46395 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000959297s 2025-08-13T20:08:22.694219062+00:00 stdout F [INFO] 10.217.0.74:34317 - 59106 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001340569s 2025-08-13T20:08:22.702807788+00:00 stdout F [INFO] 10.217.0.74:53439 - 28469 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000780463s 2025-08-13T20:08:22.703057565+00:00 stdout F [INFO] 10.217.0.74:45448 - 47572 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071143s 2025-08-13T20:08:22.721880755+00:00 stdout F [INFO] 10.217.0.74:51174 - 61205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579236s 2025-08-13T20:08:22.721984198+00:00 stdout F [INFO] 10.217.0.74:40056 - 50199 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000744641s 2025-08-13T20:08:22.744304728+00:00 stdout F [INFO] 10.217.0.74:42321 - 7798 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613208s 2025-08-13T20:08:22.745109991+00:00 stdout F [INFO] 10.217.0.74:42452 - 58325 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000604637s 2025-08-13T20:08:22.802369352+00:00 stdout F [INFO] 10.217.0.74:43377 - 56553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826424s 2025-08-13T20:08:22.802422454+00:00 stdout F [INFO] 10.217.0.74:36584 - 51319 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934307s 2025-08-13T20:08:22.826702370+00:00 stdout F [INFO] 10.217.0.74:50113 - 4511 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001435291s 2025-08-13T20:08:22.827707989+00:00 stdout F [INFO] 10.217.0.74:60437 - 29422 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002011328s 2025-08-13T20:08:22.841089162+00:00 stdout F [INFO] 10.217.0.74:40733 - 6592 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000608397s 2025-08-13T20:08:22.841724611+00:00 stdout F [INFO] 10.217.0.74:48412 - 63283 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000536045s 2025-08-13T20:08:22.853629642+00:00 stdout F [INFO] 10.217.0.74:57847 - 38739 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000649439s 2025-08-13T20:08:22.853831288+00:00 stdout F [INFO] 10.217.0.74:44784 - 44679 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068887s 2025-08-13T20:08:22.862625980+00:00 stdout F [INFO] 10.217.0.8:39285 - 842 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000741802s 2025-08-13T20:08:22.862710062+00:00 stdout F [INFO] 10.217.0.8:49026 - 54348 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000958068s 2025-08-13T20:08:22.865088361+00:00 stdout F [INFO] 10.217.0.8:45372 - 17228 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000958098s 2025-08-13T20:08:22.865369679+00:00 stdout F [INFO] 10.217.0.8:59953 - 11278 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001006529s 2025-08-13T20:08:22.866947584+00:00 stdout F [INFO] 10.217.0.74:50270 - 28207 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000865685s 2025-08-13T20:08:22.867147420+00:00 stdout F [INFO] 10.217.0.74:35084 - 42316 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103552s 2025-08-13T20:08:22.883126038+00:00 stdout F [INFO] 10.217.0.74:43979 - 42279 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000583907s 2025-08-13T20:08:22.884513628+00:00 stdout F [INFO] 10.217.0.74:36172 - 11931 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001900804s 2025-08-13T20:08:22.901498455+00:00 stdout F [INFO] 10.217.0.74:53107 - 13887 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001286337s 2025-08-13T20:08:22.901736311+00:00 stdout F [INFO] 10.217.0.74:58698 - 43570 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001462982s 2025-08-13T20:08:22.906571780+00:00 stdout F [INFO] 10.217.0.74:38047 - 19286 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000501705s 2025-08-13T20:08:22.906856988+00:00 stdout F [INFO] 10.217.0.74:60554 - 11266 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069933s 2025-08-13T20:08:22.921088926+00:00 stdout F [INFO] 10.217.0.74:54298 - 2714 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000801863s 2025-08-13T20:08:22.921255611+00:00 stdout F [INFO] 10.217.0.74:51577 - 64057 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070477s 2025-08-13T20:08:22.943009155+00:00 stdout F [INFO] 10.217.0.74:56610 - 49819 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000551956s 2025-08-13T20:08:22.943108448+00:00 stdout F [INFO] 10.217.0.74:60473 - 63037 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001326858s 2025-08-13T20:08:22.964922453+00:00 stdout F [INFO] 10.217.0.74:40775 - 7463 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003672445s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:38048 - 46678 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004013355s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:41056 - 56626 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004316424s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:46554 - 45253 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004679264s 2025-08-13T20:08:22.983985159+00:00 stdout F [INFO] 10.217.0.74:35219 - 42924 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00314427s 2025-08-13T20:08:22.984067622+00:00 stdout F [INFO] 10.217.0.74:33846 - 27423 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003050767s 2025-08-13T20:08:22.998949769+00:00 stdout F [INFO] 10.217.0.74:34661 - 48996 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009099s 2025-08-13T20:08:22.998949769+00:00 stdout F [INFO] 10.217.0.74:49940 - 35189 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001271607s 2025-08-13T20:08:23.028307460+00:00 stdout F [INFO] 10.217.0.74:50107 - 58863 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579406s 2025-08-13T20:08:23.028442514+00:00 stdout F [INFO] 10.217.0.74:47941 - 33212 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000843434s 2025-08-13T20:08:23.028560918+00:00 stdout F [INFO] 10.217.0.74:53687 - 62909 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000572706s 2025-08-13T20:08:23.029147614+00:00 stdout F [INFO] 10.217.0.74:33055 - 46652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587257s 2025-08-13T20:08:23.041401596+00:00 stdout F [INFO] 10.217.0.74:50392 - 19123 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000922237s 2025-08-13T20:08:23.041401596+00:00 stdout F [INFO] 10.217.0.74:59302 - 36026 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001150433s 2025-08-13T20:08:23.067258497+00:00 stdout F [INFO] 10.217.0.74:48739 - 15689 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001277987s 2025-08-13T20:08:23.067258497+00:00 stdout F [INFO] 10.217.0.74:45224 - 16916 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001430021s 2025-08-13T20:08:23.086723925+00:00 stdout F [INFO] 10.217.0.74:46805 - 11307 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001466452s 2025-08-13T20:08:23.086723925+00:00 stdout F [INFO] 10.217.0.74:44357 - 57641 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001210575s 2025-08-13T20:08:23.102167298+00:00 stdout F [INFO] 10.217.0.74:33526 - 21050 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001001959s 2025-08-13T20:08:23.102167298+00:00 stdout F [INFO] 10.217.0.74:44924 - 63144 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001224765s 2025-08-13T20:08:23.130224712+00:00 stdout F [INFO] 10.217.0.74:51112 - 22062 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001516313s 2025-08-13T20:08:23.130457029+00:00 stdout F [INFO] 10.217.0.74:45256 - 58772 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002293196s 2025-08-13T20:08:23.142601387+00:00 stdout F [INFO] 10.217.0.74:39991 - 54096 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001159244s 2025-08-13T20:08:23.142601387+00:00 stdout F [INFO] 10.217.0.74:53534 - 40160 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824164s 2025-08-13T20:08:23.158352619+00:00 stdout F [INFO] 10.217.0.74:43614 - 27969 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000830473s 2025-08-13T20:08:23.158352619+00:00 stdout F [INFO] 10.217.0.74:51746 - 46118 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00071584s 2025-08-13T20:08:23.173865214+00:00 stdout F [INFO] 10.217.0.74:56815 - 10800 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001192754s 2025-08-13T20:08:23.173865214+00:00 stdout F [INFO] 10.217.0.74:37470 - 26160 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001359149s 2025-08-13T20:08:23.186530057+00:00 stdout F [INFO] 10.217.0.74:35698 - 17396 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001814522s 2025-08-13T20:08:23.186576378+00:00 stdout F [INFO] 10.217.0.74:47297 - 35966 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002178742s 2025-08-13T20:08:23.201879497+00:00 stdout F [INFO] 10.217.0.74:44242 - 61307 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000986358s 2025-08-13T20:08:23.204248675+00:00 stdout F [INFO] 10.217.0.74:49688 - 14474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001558625s 2025-08-13T20:08:23.207265181+00:00 stdout F [INFO] 10.217.0.74:60793 - 2671 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507625s 2025-08-13T20:08:23.207265181+00:00 stdout F [INFO] 10.217.0.74:47305 - 44007 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00280909s 2025-08-13T20:08:23.224348531+00:00 stdout F [INFO] 10.217.0.74:43667 - 49108 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139733s 2025-08-13T20:08:23.224348531+00:00 stdout F [INFO] 10.217.0.74:59356 - 15274 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001646567s 2025-08-13T20:08:23.240945167+00:00 stdout F [INFO] 10.217.0.74:37437 - 18158 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001358979s 2025-08-13T20:08:23.245963521+00:00 stdout F [INFO] 10.217.0.74:36155 - 8426 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002000028s 2025-08-13T20:08:23.247389952+00:00 stdout F [INFO] 10.217.0.74:36748 - 23802 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000823644s 2025-08-13T20:08:23.247389952+00:00 stdout F [INFO] 10.217.0.74:34141 - 30409 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001084881s 2025-08-13T20:08:23.265588203+00:00 stdout F [INFO] 10.217.0.74:33930 - 425 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002363888s 2025-08-13T20:08:23.265588203+00:00 stdout F [INFO] 10.217.0.74:48650 - 34517 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002275536s 2025-08-13T20:08:23.265645915+00:00 stdout F [INFO] 10.217.0.74:45625 - 16991 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002644116s 2025-08-13T20:08:23.265862481+00:00 stdout F [INFO] 10.217.0.74:44165 - 37866 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002362178s 2025-08-13T20:08:23.286846603+00:00 stdout F [INFO] 10.217.0.74:43829 - 8853 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001176033s 2025-08-13T20:08:23.287205573+00:00 stdout F [INFO] 10.217.0.74:59220 - 37306 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001708199s 2025-08-13T20:08:23.302813801+00:00 stdout F [INFO] 10.217.0.74:35066 - 6993 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069728s 2025-08-13T20:08:23.303000866+00:00 stdout F [INFO] 10.217.0.74:58090 - 34860 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070014s 2025-08-13T20:08:23.323088672+00:00 stdout F [INFO] 10.217.0.74:58163 - 25525 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000577657s 2025-08-13T20:08:23.323339729+00:00 stdout F [INFO] 10.217.0.74:47008 - 15212 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000624078s 2025-08-13T20:08:23.346958476+00:00 stdout F [INFO] 10.217.0.74:36598 - 11240 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000590827s 2025-08-13T20:08:23.347015058+00:00 stdout F [INFO] 10.217.0.74:55618 - 51013 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006817s 2025-08-13T20:08:23.365053625+00:00 stdout F [INFO] 10.217.0.74:56789 - 10666 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001782331s 2025-08-13T20:08:23.365137098+00:00 stdout F [INFO] 10.217.0.74:50602 - 20928 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001779181s 2025-08-13T20:08:23.385098650+00:00 stdout F [INFO] 10.217.0.74:38012 - 36657 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002411469s 2025-08-13T20:08:23.385748948+00:00 stdout F [INFO] 10.217.0.74:39530 - 9020 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002152452s 2025-08-13T20:08:23.403006163+00:00 stdout F [INFO] 10.217.0.74:46876 - 29006 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001095072s 2025-08-13T20:08:23.403196749+00:00 stdout F [INFO] 10.217.0.74:55259 - 46879 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001254786s 2025-08-13T20:08:23.423972174+00:00 stdout F [INFO] 10.217.0.74:45266 - 62572 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001078551s 2025-08-13T20:08:23.424981563+00:00 stdout F [INFO] 10.217.0.74:56831 - 15440 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002025318s 2025-08-13T20:08:23.448177158+00:00 stdout F [INFO] 10.217.0.74:35227 - 32690 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000593787s 2025-08-13T20:08:23.448323243+00:00 stdout F [INFO] 10.217.0.74:35367 - 16258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001110942s 2025-08-13T20:08:23.462711955+00:00 stdout F [INFO] 10.217.0.74:46751 - 5046 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000588527s 2025-08-13T20:08:23.463100416+00:00 stdout F [INFO] 10.217.0.74:40424 - 57957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000795833s 2025-08-13T20:08:23.483741708+00:00 stdout F [INFO] 10.217.0.74:57391 - 28731 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004024075s 2025-08-13T20:08:23.483866722+00:00 stdout F [INFO] 10.217.0.74:50791 - 22189 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000505134s 2025-08-13T20:08:23.504960606+00:00 stdout F [INFO] 10.217.0.74:56036 - 54608 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000469544s 2025-08-13T20:08:23.505524843+00:00 stdout F [INFO] 10.217.0.74:34403 - 11947 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001620877s 2025-08-13T20:08:23.521936363+00:00 stdout F [INFO] 10.217.0.74:55722 - 8768 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000545856s 2025-08-13T20:08:23.521936363+00:00 stdout F [INFO] 10.217.0.74:35719 - 8130 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000526285s 2025-08-13T20:08:23.547486986+00:00 stdout F [INFO] 10.217.0.74:40315 - 7041 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001810692s 2025-08-13T20:08:23.548531626+00:00 stdout F [INFO] 10.217.0.74:38044 - 53013 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002714748s 2025-08-13T20:08:23.566259754+00:00 stdout F [INFO] 10.217.0.74:43842 - 53315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175518s 2025-08-13T20:08:23.566840831+00:00 stdout F [INFO] 10.217.0.74:39656 - 15795 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001910445s 2025-08-13T20:08:23.604865841+00:00 stdout F [INFO] 10.217.0.74:45080 - 14184 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000976128s 2025-08-13T20:08:23.604865841+00:00 stdout F [INFO] 10.217.0.74:52623 - 24062 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001460812s 2025-08-13T20:08:23.606214639+00:00 stdout F [INFO] 10.217.0.74:34918 - 62766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070184s 2025-08-13T20:08:23.607222188+00:00 stdout F [INFO] 10.217.0.74:55300 - 45356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002002047s 2025-08-13T20:08:23.617876724+00:00 stdout F [INFO] 10.217.0.74:33261 - 28747 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000575036s 2025-08-13T20:08:23.618124261+00:00 stdout F [INFO] 10.217.0.74:47962 - 437 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000735731s 2025-08-13T20:08:23.649416848+00:00 stdout F [INFO] 10.217.0.74:46369 - 10987 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00175553s 2025-08-13T20:08:23.649961034+00:00 stdout F [INFO] 10.217.0.74:42929 - 63396 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002341427s 2025-08-13T20:08:23.664390887+00:00 stdout F [INFO] 10.217.0.74:59747 - 35086 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000679749s 2025-08-13T20:08:23.665021106+00:00 stdout F [INFO] 10.217.0.74:47958 - 54146 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000965598s 2025-08-13T20:08:23.665225681+00:00 stdout F [INFO] 10.217.0.74:60738 - 28876 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001199405s 2025-08-13T20:08:23.665372356+00:00 stdout F [INFO] 10.217.0.74:43542 - 36132 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000755901s 2025-08-13T20:08:23.673935971+00:00 stdout F [INFO] 10.217.0.74:49465 - 19319 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069495s 2025-08-13T20:08:23.673935971+00:00 stdout F [INFO] 10.217.0.74:38700 - 37429 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587637s 2025-08-13T20:08:23.711639802+00:00 stdout F [INFO] 10.217.0.74:45354 - 1150 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000710531s 2025-08-13T20:08:23.712135206+00:00 stdout F [INFO] 10.217.0.74:55757 - 61526 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001368139s 2025-08-13T20:08:23.724351257+00:00 stdout F [INFO] 10.217.0.74:43712 - 32007 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000786572s 2025-08-13T20:08:23.725416347+00:00 stdout F [INFO] 10.217.0.74:60368 - 40387 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001848233s 2025-08-13T20:08:23.790917425+00:00 stdout F [INFO] 10.217.0.74:35075 - 58159 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105166s 2025-08-13T20:08:23.790917425+00:00 stdout F [INFO] 10.217.0.74:41707 - 58369 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000997008s 2025-08-13T20:08:23.790977527+00:00 stdout F [INFO] 10.217.0.74:42168 - 54228 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005657392s 2025-08-13T20:08:23.790993397+00:00 stdout F [INFO] 10.217.0.74:57634 - 13866 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005406595s 2025-08-13T20:08:23.861389176+00:00 stdout F [INFO] 10.217.0.74:53505 - 55365 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004804728s 2025-08-13T20:08:23.862470117+00:00 stdout F [INFO] 10.217.0.74:49058 - 22335 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005747865s 2025-08-13T20:08:23.865482123+00:00 stdout F [INFO] 10.217.0.74:44041 - 4083 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003981994s 2025-08-13T20:08:23.865695549+00:00 stdout F [INFO] 10.217.0.74:38165 - 44456 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004146199s 2025-08-13T20:08:23.918755860+00:00 stdout F [INFO] 10.217.0.74:37762 - 27467 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006508776s 2025-08-13T20:08:23.919418649+00:00 stdout F [INFO] 10.217.0.74:38545 - 48889 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006821736s 2025-08-13T20:08:23.930539608+00:00 stdout F [INFO] 10.217.0.74:37549 - 38725 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000463294s 2025-08-13T20:08:23.941953185+00:00 stdout F [INFO] 10.217.0.74:40745 - 59090 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012452197s 2025-08-13T20:08:23.993117352+00:00 stdout F [INFO] 10.217.0.74:45014 - 24003 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002614755s 2025-08-13T20:08:23.993117352+00:00 stdout F [INFO] 10.217.0.74:59484 - 61983 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002975335s 2025-08-13T20:08:24.024252145+00:00 stdout F [INFO] 10.217.0.74:48887 - 8971 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000939367s 2025-08-13T20:08:24.024723789+00:00 stdout F [INFO] 10.217.0.74:42036 - 20136 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001184584s 2025-08-13T20:08:24.062560143+00:00 stdout F [INFO] 10.217.0.74:52891 - 55952 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00069057s 2025-08-13T20:08:24.062560143+00:00 stdout F [INFO] 10.217.0.74:55499 - 34115 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000744161s 2025-08-13T20:08:24.079944652+00:00 stdout F [INFO] 10.217.0.74:49890 - 38847 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767392s 2025-08-13T20:08:24.079944652+00:00 stdout F [INFO] 10.217.0.74:60726 - 27498 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001103431s 2025-08-13T20:08:24.110453807+00:00 stdout F [INFO] 10.217.0.74:39015 - 8181 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000881105s 2025-08-13T20:08:24.110453807+00:00 stdout F [INFO] 10.217.0.74:46491 - 49334 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803433s 2025-08-13T20:08:24.131459409+00:00 stdout F [INFO] 10.217.0.74:60382 - 27381 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000645609s 2025-08-13T20:08:24.134677111+00:00 stdout F [INFO] 10.217.0.74:39985 - 63434 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002143151s 2025-08-13T20:08:24.141847427+00:00 stdout F [INFO] 10.217.0.74:39208 - 986 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000837914s 2025-08-13T20:08:24.141847427+00:00 stdout F [INFO] 10.217.0.74:51325 - 7814 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001170333s 2025-08-13T20:08:24.170697164+00:00 stdout F [INFO] 10.217.0.74:33143 - 6862 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834134s 2025-08-13T20:08:24.170986162+00:00 stdout F [INFO] 10.217.0.74:42045 - 3662 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103698s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:48666 - 48400 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000563326s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:46676 - 6547 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000519965s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:35811 - 22073 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000716421s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:36893 - 38752 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000668669s 2025-08-13T20:08:24.234172204+00:00 stdout F [INFO] 10.217.0.74:59916 - 9133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000994929s 2025-08-13T20:08:24.234172204+00:00 stdout F [INFO] 10.217.0.74:53675 - 62828 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001190384s 2025-08-13T20:08:24.244733557+00:00 stdout F [INFO] 10.217.0.74:45956 - 55947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104376s 2025-08-13T20:08:24.249329228+00:00 stdout F [INFO] 10.217.0.74:32801 - 25347 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00071749s 2025-08-13T20:08:24.264423121+00:00 stdout F [INFO] 10.217.0.74:38307 - 32343 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000897656s 2025-08-13T20:08:24.264423121+00:00 stdout F [INFO] 10.217.0.74:33335 - 51186 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000887916s 2025-08-13T20:08:24.271386791+00:00 stdout F [INFO] 10.217.0.74:43747 - 31683 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002332827s 2025-08-13T20:08:24.272200764+00:00 stdout F [INFO] 10.217.0.74:50597 - 61774 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000667099s 2025-08-13T20:08:24.291488567+00:00 stdout F [INFO] 10.217.0.74:38457 - 23497 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001091461s 2025-08-13T20:08:24.292299590+00:00 stdout F [INFO] 10.217.0.74:60003 - 46978 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740811s 2025-08-13T20:08:24.300860186+00:00 stdout F [INFO] 10.217.0.74:33575 - 3747 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834954s 2025-08-13T20:08:24.301911626+00:00 stdout F [INFO] 10.217.0.74:49103 - 44491 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587857s 2025-08-13T20:08:24.327883021+00:00 stdout F [INFO] 10.217.0.74:43019 - 60503 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000859995s 2025-08-13T20:08:24.328021395+00:00 stdout F [INFO] 10.217.0.74:44241 - 44160 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001261347s 2025-08-13T20:08:24.345248108+00:00 stdout F [INFO] 10.217.0.74:47062 - 48859 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000619378s 2025-08-13T20:08:24.345599408+00:00 stdout F [INFO] 10.217.0.74:47666 - 58634 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612027s 2025-08-13T20:08:24.366537759+00:00 stdout F [INFO] 10.217.0.74:47016 - 59253 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001480562s 2025-08-13T20:08:24.366744295+00:00 stdout F [INFO] 10.217.0.74:46246 - 30705 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001438961s 2025-08-13T20:08:24.392236476+00:00 stdout F [INFO] 10.217.0.74:60212 - 51027 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008598517s 2025-08-13T20:08:24.392592016+00:00 stdout F [INFO] 10.217.0.74:58066 - 61218 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008901875s 2025-08-13T20:08:24.410356635+00:00 stdout F [INFO] 10.217.0.74:40348 - 18397 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000849115s 2025-08-13T20:08:24.410696465+00:00 stdout F [INFO] 10.217.0.74:55095 - 42434 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001425661s 2025-08-13T20:08:24.426346164+00:00 stdout F [INFO] 10.217.0.74:58429 - 37747 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002262725s 2025-08-13T20:08:24.426429916+00:00 stdout F [INFO] 10.217.0.74:47306 - 45293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002616975s 2025-08-13T20:08:24.450868817+00:00 stdout F [INFO] 10.217.0.74:55881 - 27796 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000377991s 2025-08-13T20:08:24.451188236+00:00 stdout F [INFO] 10.217.0.74:45482 - 49898 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000913146s 2025-08-13T20:08:24.452458552+00:00 stdout F [INFO] 10.217.0.74:33213 - 47965 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000542426s 2025-08-13T20:08:24.453206924+00:00 stdout F [INFO] 10.217.0.74:46715 - 48650 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000621578s 2025-08-13T20:08:24.508503639+00:00 stdout F [INFO] 10.217.0.74:35011 - 31718 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001217464s 2025-08-13T20:08:24.508854299+00:00 stdout F [INFO] 10.217.0.74:52715 - 20222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768762s 2025-08-13T20:08:24.520193884+00:00 stdout F [INFO] 10.217.0.74:51498 - 7713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000939187s 2025-08-13T20:08:24.520439531+00:00 stdout F [INFO] 10.217.0.74:38612 - 10532 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001119952s 2025-08-13T20:08:24.553056217+00:00 stdout F [INFO] 10.217.0.74:39589 - 34577 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00073006s 2025-08-13T20:08:24.553266833+00:00 stdout F [INFO] 10.217.0.74:50887 - 44553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000965228s 2025-08-13T20:08:24.578471775+00:00 stdout F [INFO] 10.217.0.74:49744 - 44397 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000530205s 2025-08-13T20:08:24.578997010+00:00 stdout F [INFO] 10.217.0.74:60908 - 2189 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731991s 2025-08-13T20:08:24.617388391+00:00 stdout F [INFO] 10.217.0.74:48084 - 23967 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000972908s 2025-08-13T20:08:24.617695760+00:00 stdout F [INFO] 10.217.0.74:36137 - 28536 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000564016s 2025-08-13T20:08:24.618158183+00:00 stdout F [INFO] 10.217.0.74:43087 - 46692 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001262616s 2025-08-13T20:08:24.618211615+00:00 stdout F [INFO] 10.217.0.74:52408 - 20765 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00138048s 2025-08-13T20:08:24.645443035+00:00 stdout F [INFO] 10.217.0.74:38609 - 47142 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001852633s 2025-08-13T20:08:24.646116835+00:00 stdout F [INFO] 10.217.0.74:41031 - 52842 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002800981s 2025-08-13T20:08:24.646466725+00:00 stdout F [INFO] 10.217.0.74:59849 - 60627 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000685259s 2025-08-13T20:08:24.646518566+00:00 stdout F [INFO] 10.217.0.74:35121 - 7797 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626798s 2025-08-13T20:08:24.682149238+00:00 stdout F [INFO] 10.217.0.74:34031 - 10475 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158263s 2025-08-13T20:08:24.682391585+00:00 stdout F [INFO] 10.217.0.74:35405 - 13007 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001918105s 2025-08-13T20:08:24.688375276+00:00 stdout F [INFO] 10.217.0.74:42975 - 3941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000588656s 2025-08-13T20:08:24.688375276+00:00 stdout F [INFO] 10.217.0.74:40270 - 27750 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626067s 2025-08-13T20:08:24.719747866+00:00 stdout F [INFO] 10.217.0.74:54688 - 30674 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002935224s 2025-08-13T20:08:24.719747866+00:00 stdout F [INFO] 10.217.0.74:47077 - 35095 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003633474s 2025-08-13T20:08:24.722207566+00:00 stdout F [INFO] 10.217.0.74:39440 - 61086 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002880883s 2025-08-13T20:08:24.722207566+00:00 stdout F [INFO] 10.217.0.74:47023 - 61827 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003258813s 2025-08-13T20:08:24.755713397+00:00 stdout F [INFO] 10.217.0.74:35434 - 20749 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000781653s 2025-08-13T20:08:24.756109458+00:00 stdout F [INFO] 10.217.0.74:53059 - 9205 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001007379s 2025-08-13T20:08:24.784571244+00:00 stdout F [INFO] 10.217.0.74:36555 - 53245 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000628068s 2025-08-13T20:08:24.785053848+00:00 stdout F [INFO] 10.217.0.74:56648 - 41834 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001459212s 2025-08-13T20:08:24.805676099+00:00 stdout F [INFO] 10.217.0.74:54450 - 39176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001632277s 2025-08-13T20:08:24.805729691+00:00 stdout F [INFO] 10.217.0.74:53352 - 1136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001506933s 2025-08-13T20:08:24.851120342+00:00 stdout F [INFO] 10.217.0.74:38902 - 48894 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002668936s 2025-08-13T20:08:24.852221004+00:00 stdout F [INFO] 10.217.0.74:43404 - 30730 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003059038s 2025-08-13T20:08:24.873870125+00:00 stdout F [INFO] 10.217.0.74:42313 - 6315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000823914s 2025-08-13T20:08:24.873870125+00:00 stdout F [INFO] 10.217.0.74:44164 - 12605 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000872185s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:42936 - 10687 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103921s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:36100 - 37711 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985678s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:56708 - 58003 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768852s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:55787 - 15515 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001686759s 2025-08-13T20:08:24.929399557+00:00 stdout F [INFO] 10.217.0.74:36348 - 27003 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728081s 2025-08-13T20:08:24.929612103+00:00 stdout F [INFO] 10.217.0.74:58709 - 40669 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001330818s 2025-08-13T20:08:24.930241351+00:00 stdout F [INFO] 10.217.0.74:43717 - 42072 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001431632s 2025-08-13T20:08:24.930980552+00:00 stdout F [INFO] 10.217.0.74:50271 - 30858 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001860083s 2025-08-13T20:08:24.972041189+00:00 stdout F [INFO] 10.217.0.74:36524 - 45953 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004283983s 2025-08-13T20:08:24.972041189+00:00 stdout F [INFO] 10.217.0.74:46433 - 44305 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005543429s 2025-08-13T20:08:24.978928047+00:00 stdout F [INFO] 10.217.0.74:39950 - 50856 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000684569s 2025-08-13T20:08:24.978928047+00:00 stdout F [INFO] 10.217.0.74:51766 - 25395 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000776972s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:34690 - 7086 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003429738s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:43571 - 61951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003356256s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:56828 - 58461 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007452054s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:46054 - 64977 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007036922s 2025-08-13T20:08:25.036035054+00:00 stdout F [INFO] 10.217.0.74:53095 - 17901 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001446761s 2025-08-13T20:08:25.036035054+00:00 stdout F [INFO] 10.217.0.74:57138 - 43105 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001692948s 2025-08-13T20:08:25.052948038+00:00 stdout F [INFO] 10.217.0.74:43926 - 25694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002316425s 2025-08-13T20:08:25.052997559+00:00 stdout F [INFO] 10.217.0.74:53532 - 45713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001506383s 2025-08-13T20:08:25.096404534+00:00 stdout F [INFO] 10.217.0.74:45404 - 13830 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002596194s 2025-08-13T20:08:25.096613950+00:00 stdout F [INFO] 10.217.0.74:51308 - 24763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000542626s 2025-08-13T20:08:25.111347213+00:00 stdout F [INFO] 10.217.0.74:40585 - 22852 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808484s 2025-08-13T20:08:25.111431875+00:00 stdout F [INFO] 10.217.0.74:52176 - 59269 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000953717s 2025-08-13T20:08:25.142186147+00:00 stdout F [INFO] 10.217.0.74:49925 - 55227 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000873406s 2025-08-13T20:08:25.142406793+00:00 stdout F [INFO] 10.217.0.74:58393 - 32978 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001325018s 2025-08-13T20:08:25.154245632+00:00 stdout F [INFO] 10.217.0.74:33747 - 17097 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000816724s 2025-08-13T20:08:25.154245632+00:00 stdout F [INFO] 10.217.0.74:57980 - 13317 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104587s 2025-08-13T20:08:25.157223058+00:00 stdout F [INFO] 10.217.0.74:52234 - 27248 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000541706s 2025-08-13T20:08:25.157223058+00:00 stdout F [INFO] 10.217.0.74:35217 - 42622 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668129s 2025-08-13T20:08:25.174913265+00:00 stdout F [INFO] 10.217.0.74:54396 - 14624 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000631218s 2025-08-13T20:08:25.174913265+00:00 stdout F [INFO] 10.217.0.74:43944 - 47763 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001561625s 2025-08-13T20:08:25.199658625+00:00 stdout F [INFO] 10.217.0.74:59881 - 6776 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068003s 2025-08-13T20:08:25.199985764+00:00 stdout F [INFO] 10.217.0.74:51326 - 7569 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001062151s 2025-08-13T20:08:25.227633737+00:00 stdout F [INFO] 10.217.0.74:46695 - 35811 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000990998s 2025-08-13T20:08:25.228805970+00:00 stdout F [INFO] 10.217.0.74:36601 - 55298 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000490834s 2025-08-13T20:08:25.239563699+00:00 stdout F [INFO] 10.217.0.74:57169 - 22926 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000628898s 2025-08-13T20:08:25.239563699+00:00 stdout F [INFO] 10.217.0.74:45489 - 37868 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734892s 2025-08-13T20:08:25.295034379+00:00 stdout F [INFO] 10.217.0.74:46388 - 48751 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001222805s 2025-08-13T20:08:25.295085910+00:00 stdout F [INFO] 10.217.0.74:51846 - 12374 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001515264s 2025-08-13T20:08:41.635823402+00:00 stdout F [INFO] 10.217.0.62:34379 - 7559 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003923512s 2025-08-13T20:08:41.636428109+00:00 stdout F [INFO] 10.217.0.62:45312 - 22703 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004136568s 2025-08-13T20:08:42.402994408+00:00 stdout F [INFO] 10.217.0.19:40701 - 2139 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003220132s 2025-08-13T20:08:42.403236254+00:00 stdout F [INFO] 10.217.0.19:39684 - 2175 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003694346s 2025-08-13T20:08:56.374659197+00:00 stdout F [INFO] 10.217.0.64:34472 - 48135 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003421968s 2025-08-13T20:08:56.374745359+00:00 stdout F [INFO] 10.217.0.64:54476 - 13919 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002192603s 2025-08-13T20:08:56.374840782+00:00 stdout F [INFO] 10.217.0.64:40326 - 5569 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004057756s 2025-08-13T20:08:56.375154361+00:00 stdout F [INFO] 10.217.0.64:49383 - 27450 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004855109s 2025-08-13T20:09:02.378239024+00:00 stdout F [INFO] 10.217.0.45:43397 - 49512 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002536683s 2025-08-13T20:09:02.378239024+00:00 stdout F [INFO] 10.217.0.45:56823 - 15158 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002466611s 2025-08-13T20:09:11.725896608+00:00 stdout F [INFO] 10.217.0.62:34550 - 12666 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006674761s 2025-08-13T20:09:11.725896608+00:00 stdout F [INFO] 10.217.0.62:39907 - 38685 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00801045s 2025-08-13T20:09:22.862762824+00:00 stdout F [INFO] 10.217.0.8:39156 - 25361 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002846862s 2025-08-13T20:09:22.862953529+00:00 stdout F [INFO] 10.217.0.8:53927 - 1632 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002753809s 2025-08-13T20:09:22.865524273+00:00 stdout F [INFO] 10.217.0.8:54729 - 65332 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001623787s 2025-08-13T20:09:22.865728939+00:00 stdout F [INFO] 10.217.0.8:49636 - 52910 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00175812s 2025-08-13T20:09:26.078490881+00:00 stdout F [INFO] 10.217.0.19:48977 - 50456 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001002199s 2025-08-13T20:09:26.078490881+00:00 stdout F [INFO] 10.217.0.19:59578 - 61462 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106775s 2025-08-13T20:09:33.484850278+00:00 stdout F [INFO] 10.217.0.62:42372 - 190 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139979s 2025-08-13T20:09:33.485393903+00:00 stdout F [INFO] 10.217.0.62:34167 - 57309 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00244067s 2025-08-13T20:09:33.523096954+00:00 stdout F [INFO] 10.217.0.62:49737 - 32251 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000720071s 2025-08-13T20:09:33.523096954+00:00 stdout F [INFO] 10.217.0.62:40671 - 52652 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000871775s 2025-08-13T20:09:35.187928017+00:00 stdout F [INFO] 10.217.0.19:33301 - 21434 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001188724s 2025-08-13T20:09:35.188043610+00:00 stdout F [INFO] 10.217.0.19:52934 - 11233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001575785s 2025-08-13T20:09:38.306477737+00:00 stdout F [INFO] 10.217.0.62:40952 - 6986 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001327978s 2025-08-13T20:09:38.306477737+00:00 stdout F [INFO] 10.217.0.62:45564 - 26341 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002726588s 2025-08-13T20:09:41.638121368+00:00 stdout F [INFO] 10.217.0.62:56173 - 24407 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002718738s 2025-08-13T20:09:41.638416066+00:00 stdout F [INFO] 10.217.0.62:47549 - 44536 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003351996s 2025-08-13T20:09:42.030655352+00:00 stdout F [INFO] 10.217.0.19:56636 - 61643 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001563925s 2025-08-13T20:09:42.030851607+00:00 stdout F [INFO] 10.217.0.19:51975 - 50744 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002233574s 2025-08-13T20:09:42.183194045+00:00 stdout F [INFO] 10.217.0.19:33628 - 49975 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003283644s 2025-08-13T20:09:42.183194045+00:00 stdout F [INFO] 10.217.0.19:45670 - 3055 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003702877s 2025-08-13T20:09:42.529990608+00:00 stdout F [INFO] 10.217.0.62:48401 - 656 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071058s 2025-08-13T20:09:42.529990608+00:00 stdout F [INFO] 10.217.0.62:58670 - 49481 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000742051s 2025-08-13T20:09:45.431538738+00:00 stdout F [INFO] 10.217.0.62:43098 - 58085 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00107588s 2025-08-13T20:09:45.431538738+00:00 stdout F [INFO] 10.217.0.62:59851 - 55795 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000526385s 2025-08-13T20:09:54.586441237+00:00 stdout F [INFO] 10.217.0.19:47711 - 21321 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001843932s 2025-08-13T20:09:54.586441237+00:00 stdout F [INFO] 10.217.0.19:44898 - 45604 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002623275s 2025-08-13T20:09:56.371181378+00:00 stdout F [INFO] 10.217.0.64:41542 - 41108 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001686828s 2025-08-13T20:09:56.371181378+00:00 stdout F [INFO] 10.217.0.64:48576 - 20336 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00139462s 2025-08-13T20:09:56.372300050+00:00 stdout F [INFO] 10.217.0.64:38485 - 9093 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000542365s 2025-08-13T20:09:56.374142783+00:00 stdout F [INFO] 10.217.0.64:46788 - 13302 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001550825s 2025-08-13T20:09:57.900691400+00:00 stdout F [INFO] 10.217.0.19:53614 - 57161 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001425781s 2025-08-13T20:09:57.901094752+00:00 stdout F [INFO] 10.217.0.19:53845 - 1062 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000942357s 2025-08-13T20:09:57.983843465+00:00 stdout F [INFO] 10.217.0.19:39361 - 11239 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002185313s 2025-08-13T20:09:57.983843465+00:00 stdout F [INFO] 10.217.0.19:41001 - 62298 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002361678s 2025-08-13T20:10:01.056405577+00:00 stdout F [INFO] 10.217.0.19:39388 - 44237 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002664086s 2025-08-13T20:10:01.056405577+00:00 stdout F [INFO] 10.217.0.19:33615 - 53295 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002689067s 2025-08-13T20:10:01.079333115+00:00 stdout F [INFO] 10.217.0.19:50682 - 20587 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000795232s 2025-08-13T20:10:01.081832366+00:00 stdout F [INFO] 10.217.0.19:44383 - 40450 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000861545s 2025-08-13T20:10:02.444873055+00:00 stdout F [INFO] 10.217.0.45:44013 - 11251 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000983468s 2025-08-13T20:10:02.453875573+00:00 stdout F [INFO] 10.217.0.45:37306 - 39606 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001917425s 2025-08-13T20:10:05.186876261+00:00 stdout F [INFO] 10.217.0.19:58118 - 11638 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002586544s 2025-08-13T20:10:05.187021395+00:00 stdout F [INFO] 10.217.0.19:38219 - 21719 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002535682s 2025-08-13T20:10:08.345010648+00:00 stdout F [INFO] 10.217.0.19:51265 - 1330 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002312606s 2025-08-13T20:10:08.345010648+00:00 stdout F [INFO] 10.217.0.19:59348 - 19091 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002631336s 2025-08-13T20:10:08.355087687+00:00 stdout F [INFO] 10.217.0.19:42792 - 61879 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002574304s 2025-08-13T20:10:08.355139488+00:00 stdout F [INFO] 10.217.0.19:39369 - 9267 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002482411s 2025-08-13T20:10:08.411554935+00:00 stdout F [INFO] 10.217.0.19:35601 - 58707 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001258786s 2025-08-13T20:10:08.413167012+00:00 stdout F [INFO] 10.217.0.19:56524 - 65241 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002158461s 2025-08-13T20:10:08.548968705+00:00 stdout F [INFO] 10.217.0.19:45966 - 19689 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001272617s 2025-08-13T20:10:08.548968705+00:00 stdout F [INFO] 10.217.0.19:48885 - 11127 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001335808s 2025-08-13T20:10:11.628730835+00:00 stdout F [INFO] 10.217.0.62:36584 - 46375 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001654868s 2025-08-13T20:10:11.633039398+00:00 stdout F [INFO] 10.217.0.62:41877 - 29105 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000915227s 2025-08-13T20:10:22.867019153+00:00 stdout F [INFO] 10.217.0.8:49904 - 64741 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003766368s 2025-08-13T20:10:22.867019153+00:00 stdout F [INFO] 10.217.0.8:41201 - 45742 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004181499s 2025-08-13T20:10:22.867959440+00:00 stdout F [INFO] 10.217.0.8:35597 - 48884 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001018489s 2025-08-13T20:10:22.868719412+00:00 stdout F [INFO] 10.217.0.8:47410 - 20918 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000770882s 2025-08-13T20:10:30.808858992+00:00 stdout F [INFO] 10.217.0.73:39897 - 44453 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001530163s 2025-08-13T20:10:30.808951815+00:00 stdout F [INFO] 10.217.0.73:50070 - 51054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00174087s 2025-08-13T20:10:33.362905250+00:00 stdout F [INFO] 10.217.0.19:32790 - 5866 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004795368s 2025-08-13T20:10:33.362905250+00:00 stdout F [INFO] 10.217.0.19:49534 - 57961 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004693464s 2025-08-13T20:10:33.373874594+00:00 stdout F [INFO] 10.217.0.19:37942 - 8212 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000595517s 2025-08-13T20:10:33.374417290+00:00 stdout F [INFO] 10.217.0.19:56916 - 12238 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000661719s 2025-08-13T20:10:35.199277209+00:00 stdout F [INFO] 10.217.0.19:41870 - 23972 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001476932s 2025-08-13T20:10:35.199277209+00:00 stdout F [INFO] 10.217.0.19:58212 - 53423 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001832952s 2025-08-13T20:10:35.697241647+00:00 stdout F [INFO] 10.217.0.19:33071 - 20323 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00069523s 2025-08-13T20:10:35.697303468+00:00 stdout F [INFO] 10.217.0.19:42008 - 33508 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000793093s 2025-08-13T20:10:41.622326063+00:00 stdout F [INFO] 10.217.0.62:34619 - 105 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001195944s 2025-08-13T20:10:41.622417725+00:00 stdout F [INFO] 10.217.0.62:43729 - 23355 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001137973s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:35428 - 7758 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002594974s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:53534 - 35151 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003622834s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:60247 - 46067 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003174031s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:49932 - 39403 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003596073s 2025-08-13T20:11:02.520033866+00:00 stdout F [INFO] 10.217.0.45:39664 - 38219 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001183744s 2025-08-13T20:11:02.520033866+00:00 stdout F [INFO] 10.217.0.45:60311 - 9110 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001162163s 2025-08-13T20:11:03.399531671+00:00 stdout F [INFO] 10.217.0.87:37356 - 34781 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001512653s 2025-08-13T20:11:03.399531671+00:00 stdout F [INFO] 10.217.0.87:54183 - 38414 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002224574s 2025-08-13T20:11:03.399949703+00:00 stdout F [INFO] 10.217.0.87:35523 - 20655 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000555516s 2025-08-13T20:11:03.399949703+00:00 stdout F [INFO] 10.217.0.87:49559 - 28694 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001214345s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:35061 - 433 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000936687s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:40965 - 48513 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000667779s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:54542 - 34394 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000680099s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:38160 - 26586 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000648358s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:41763 - 22360 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001318338s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:47208 - 63365 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000723751s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:53434 - 45248 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070464s 2025-08-13T20:11:03.649033824+00:00 stdout F [INFO] 10.217.0.87:53713 - 18434 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002473161s 2025-08-13T20:11:03.718848076+00:00 stdout F [INFO] 10.217.0.87:41691 - 15051 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002671346s 2025-08-13T20:11:03.718848076+00:00 stdout F [INFO] 10.217.0.87:46056 - 9861 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103799s 2025-08-13T20:11:03.722611354+00:00 stdout F [INFO] 10.217.0.87:39086 - 34308 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000556906s 2025-08-13T20:11:03.722611354+00:00 stdout F [INFO] 10.217.0.87:42891 - 6546 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000504144s 2025-08-13T20:11:03.767221563+00:00 stdout F [INFO] 10.217.0.87:58197 - 30208 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00385624s 2025-08-13T20:11:03.767221563+00:00 stdout F [INFO] 10.217.0.87:39728 - 49827 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003767278s 2025-08-13T20:11:03.817666909+00:00 stdout F [INFO] 10.217.0.87:35172 - 55283 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001101361s 2025-08-13T20:11:03.817995749+00:00 stdout F [INFO] 10.217.0.87:39261 - 6732 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000712641s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53119 - 44401 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001572155s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:56071 - 13263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001206034s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53447 - 33479 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001892754s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53190 - 29850 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00175118s 2025-08-13T20:11:03.847942587+00:00 stdout F [INFO] 10.217.0.87:56508 - 38072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004455958s 2025-08-13T20:11:03.848072911+00:00 stdout F [INFO] 10.217.0.87:33109 - 11968 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004842229s 2025-08-13T20:11:03.897437386+00:00 stdout F [INFO] 10.217.0.87:53430 - 55335 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000998288s 2025-08-13T20:11:03.897437386+00:00 stdout F [INFO] 10.217.0.87:32838 - 14045 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001481643s 2025-08-13T20:11:03.900510014+00:00 stdout F [INFO] 10.217.0.87:43972 - 23286 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000949927s 2025-08-13T20:11:03.900756701+00:00 stdout F [INFO] 10.217.0.87:49838 - 38792 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001019979s 2025-08-13T20:11:03.902765709+00:00 stdout F [INFO] 10.217.0.87:47774 - 42440 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000679239s 2025-08-13T20:11:03.904858479+00:00 stdout F [INFO] 10.217.0.87:57291 - 56285 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001835753s 2025-08-13T20:11:03.913620140+00:00 stdout F [INFO] 10.217.0.87:46856 - 11308 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000987408s 2025-08-13T20:11:03.914091244+00:00 stdout F [INFO] 10.217.0.87:55527 - 2267 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001479192s 2025-08-13T20:11:03.954577195+00:00 stdout F [INFO] 10.217.0.87:51392 - 21341 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000807283s 2025-08-13T20:11:03.954937355+00:00 stdout F [INFO] 10.217.0.87:41890 - 15684 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001436721s 2025-08-13T20:11:03.977215704+00:00 stdout F [INFO] 10.217.0.87:47581 - 1434 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104783s 2025-08-13T20:11:03.977509832+00:00 stdout F [INFO] 10.217.0.87:54607 - 53936 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001454602s 2025-08-13T20:11:04.023222913+00:00 stdout F [INFO] 10.217.0.87:54496 - 16868 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003156581s 2025-08-13T20:11:04.023686356+00:00 stdout F [INFO] 10.217.0.87:42267 - 51763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003255144s 2025-08-13T20:11:04.039212591+00:00 stdout F [INFO] 10.217.0.87:53493 - 33994 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001088152s 2025-08-13T20:11:04.041175287+00:00 stdout F [INFO] 10.217.0.87:40575 - 33957 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003272184s 2025-08-13T20:11:04.041577319+00:00 stdout F [INFO] 10.217.0.87:50647 - 26691 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010365s 2025-08-13T20:11:04.045511112+00:00 stdout F [INFO] 10.217.0.87:54103 - 16592 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000857164s 2025-08-13T20:11:04.102489046+00:00 stdout F [INFO] 10.217.0.87:55724 - 48220 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006211418s 2025-08-13T20:11:04.133961708+00:00 stdout F [INFO] 10.217.0.87:43240 - 60487 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001790991s 2025-08-13T20:11:04.142111361+00:00 stdout F [INFO] 10.217.0.87:41388 - 46250 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003411068s 2025-08-13T20:11:04.142305827+00:00 stdout F [INFO] 10.217.0.87:36215 - 4218 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003739277s 2025-08-13T20:11:04.202104702+00:00 stdout F [INFO] 10.217.0.87:55842 - 26479 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001143423s 2025-08-13T20:11:04.202351719+00:00 stdout F [INFO] 10.217.0.87:34872 - 6296 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000963698s 2025-08-13T20:11:04.209565155+00:00 stdout F [INFO] 10.217.0.87:36576 - 31736 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000777092s 2025-08-13T20:11:04.209661408+00:00 stdout F [INFO] 10.217.0.87:52223 - 46340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000739101s 2025-08-13T20:11:04.267950309+00:00 stdout F [INFO] 10.217.0.87:47717 - 18265 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001877574s 2025-08-13T20:11:04.269479953+00:00 stdout F [INFO] 10.217.0.87:49071 - 3191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001551425s 2025-08-13T20:11:04.271193682+00:00 stdout F [INFO] 10.217.0.87:39213 - 49156 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000561976s 2025-08-13T20:11:04.271193682+00:00 stdout F [INFO] 10.217.0.87:45159 - 61063 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000781293s 2025-08-13T20:11:04.334709023+00:00 stdout F [INFO] 10.217.0.87:42608 - 45702 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000631568s 2025-08-13T20:11:04.336961018+00:00 stdout F [INFO] 10.217.0.87:40086 - 23695 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000568236s 2025-08-13T20:11:04.337177644+00:00 stdout F [INFO] 10.217.0.87:44964 - 114 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000677759s 2025-08-13T20:11:04.337395390+00:00 stdout F [INFO] 10.217.0.87:46041 - 32750 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003591873s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:42635 - 50007 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002949285s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:41785 - 28793 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003545352s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:47944 - 7692 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000543566s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:37949 - 25447 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000756501s 2025-08-13T20:11:04.463665041+00:00 stdout F [INFO] 10.217.0.87:36079 - 39964 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001139033s 2025-08-13T20:11:04.463665041+00:00 stdout F [INFO] 10.217.0.87:43169 - 50861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001230846s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:48399 - 60449 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002498091s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:37499 - 27113 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001824773s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:58069 - 60634 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004967292s 2025-08-13T20:11:04.538092615+00:00 stdout F [INFO] 10.217.0.87:49277 - 25371 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007577988s 2025-08-13T20:11:04.569854585+00:00 stdout F [INFO] 10.217.0.87:43743 - 36368 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001295347s 2025-08-13T20:11:04.569854585+00:00 stdout F [INFO] 10.217.0.87:39846 - 32417 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105676s 2025-08-13T20:11:04.618958633+00:00 stdout F [INFO] 10.217.0.87:50590 - 58487 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005965221s 2025-08-13T20:11:04.621902648+00:00 stdout F [INFO] 10.217.0.87:33159 - 18725 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006726893s 2025-08-13T20:11:04.653088392+00:00 stdout F [INFO] 10.217.0.87:40765 - 34294 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001484582s 2025-08-13T20:11:04.653088392+00:00 stdout F [INFO] 10.217.0.87:44177 - 23695 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001702179s 2025-08-13T20:11:04.715885942+00:00 stdout F [INFO] 10.217.0.87:33043 - 12052 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002513822s 2025-08-13T20:11:04.723084329+00:00 stdout F [INFO] 10.217.0.87:35338 - 1652 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006784525s 2025-08-13T20:11:04.805091720+00:00 stdout F [INFO] 10.217.0.87:52674 - 55206 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006223669s 2025-08-13T20:11:04.805091720+00:00 stdout F [INFO] 10.217.0.87:52033 - 60656 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006853436s 2025-08-13T20:11:04.816179148+00:00 stdout F [INFO] 10.217.0.87:50848 - 46737 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001064251s 2025-08-13T20:11:04.816179148+00:00 stdout F [INFO] 10.217.0.87:42103 - 9497 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001151093s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:47340 - 39540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003902982s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:39624 - 35803 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004711905s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:52299 - 14108 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006719073s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:51626 - 3473 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006789615s 2025-08-13T20:11:04.954439932+00:00 stdout F [INFO] 10.217.0.87:52193 - 7609 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888555s 2025-08-13T20:11:04.954660798+00:00 stdout F [INFO] 10.217.0.87:38807 - 33179 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000561866s 2025-08-13T20:11:04.954746981+00:00 stdout F [INFO] 10.217.0.87:51130 - 30582 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000945637s 2025-08-13T20:11:04.954990948+00:00 stdout F [INFO] 10.217.0.87:35982 - 7711 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001363909s 2025-08-13T20:11:05.014950167+00:00 stdout F [INFO] 10.217.0.87:42495 - 23659 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000950157s 2025-08-13T20:11:05.014950167+00:00 stdout F [INFO] 10.217.0.87:56385 - 22005 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001003799s 2025-08-13T20:11:05.067956076+00:00 stdout F [INFO] 10.217.0.87:47869 - 28411 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158573s 2025-08-13T20:11:05.067956076+00:00 stdout F [INFO] 10.217.0.87:53901 - 44918 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000643019s 2025-08-13T20:11:05.072057984+00:00 stdout F [INFO] 10.217.0.87:49967 - 54232 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001286327s 2025-08-13T20:11:05.072114736+00:00 stdout F [INFO] 10.217.0.87:37700 - 58806 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102417s 2025-08-13T20:11:05.102279941+00:00 stdout F [INFO] 10.217.0.87:46754 - 38555 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001617416s 2025-08-13T20:11:05.116212460+00:00 stdout F [INFO] 10.217.0.87:51552 - 34798 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.022058323s 2025-08-13T20:11:05.145679675+00:00 stdout F [INFO] 10.217.0.87:36269 - 19193 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000509534s 2025-08-13T20:11:05.151134801+00:00 stdout F [INFO] 10.217.0.87:45946 - 16784 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007139375s 2025-08-13T20:11:05.155084265+00:00 stdout F [INFO] 10.217.0.87:54068 - 64440 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002909163s 2025-08-13T20:11:05.160362146+00:00 stdout F [INFO] 10.217.0.87:52370 - 7710 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006949079s 2025-08-13T20:11:05.204469531+00:00 stdout F [INFO] 10.217.0.87:46241 - 8297 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069313s 2025-08-13T20:11:05.208651580+00:00 stdout F [INFO] 10.217.0.87:40189 - 37269 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619368s 2025-08-13T20:11:05.241635746+00:00 stdout F [INFO] 10.217.0.87:44567 - 60474 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006494957s 2025-08-13T20:11:05.241752309+00:00 stdout F [INFO] 10.217.0.87:38105 - 27874 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006842646s 2025-08-13T20:11:05.249504242+00:00 stdout F [INFO] 10.217.0.87:57155 - 58400 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005020243s 2025-08-13T20:11:05.250721467+00:00 stdout F [INFO] 10.217.0.87:52971 - 8666 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005474167s 2025-08-13T20:11:05.258077158+00:00 stdout F [INFO] 10.217.0.19:40122 - 47771 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005301442s 2025-08-13T20:11:05.259867269+00:00 stdout F [INFO] 10.217.0.19:52923 - 1992 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001806382s 2025-08-13T20:11:05.290980531+00:00 stdout F [INFO] 10.217.0.87:51112 - 26133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001604326s 2025-08-13T20:11:05.290980531+00:00 stdout F [INFO] 10.217.0.87:57064 - 21762 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004293173s 2025-08-13T20:11:05.302120030+00:00 stdout F [INFO] 10.217.0.87:40821 - 41355 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000800522s 2025-08-13T20:11:05.302379718+00:00 stdout F [INFO] 10.217.0.87:56713 - 19604 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001137903s 2025-08-13T20:11:05.313414544+00:00 stdout F [INFO] 10.217.0.87:36400 - 26073 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000772012s 2025-08-13T20:11:05.317306166+00:00 stdout F [INFO] 10.217.0.87:53031 - 42574 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003964054s 2025-08-13T20:11:05.321047143+00:00 stdout F [INFO] 10.217.0.87:53956 - 17610 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068611s 2025-08-13T20:11:05.321278060+00:00 stdout F [INFO] 10.217.0.87:46140 - 44894 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000664749s 2025-08-13T20:11:05.342043325+00:00 stdout F [INFO] 10.217.0.87:35847 - 16647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000609347s 2025-08-13T20:11:05.342043325+00:00 stdout F [INFO] 10.217.0.87:54364 - 5238 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000893065s 2025-08-13T20:11:05.362751859+00:00 stdout F [INFO] 10.217.0.87:37115 - 46250 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000650209s 2025-08-13T20:11:05.362751859+00:00 stdout F [INFO] 10.217.0.87:49240 - 12686 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000519375s 2025-08-13T20:11:05.367891566+00:00 stdout F [INFO] 10.217.0.87:34732 - 2819 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000436073s 2025-08-13T20:11:05.367969348+00:00 stdout F [INFO] 10.217.0.87:33759 - 26935 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000470533s 2025-08-13T20:11:05.397690750+00:00 stdout F [INFO] 10.217.0.87:35998 - 34083 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003644675s 2025-08-13T20:11:05.397754462+00:00 stdout F [INFO] 10.217.0.87:52131 - 3546 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003547612s 2025-08-13T20:11:05.420550816+00:00 stdout F [INFO] 10.217.0.87:39735 - 12055 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000621878s 2025-08-13T20:11:05.420843084+00:00 stdout F [INFO] 10.217.0.87:42570 - 32800 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000528725s 2025-08-13T20:11:05.457708621+00:00 stdout F [INFO] 10.217.0.87:48256 - 42070 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001333268s 2025-08-13T20:11:05.457865856+00:00 stdout F [INFO] 10.217.0.87:59243 - 325 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001696519s 2025-08-13T20:11:05.459695468+00:00 stdout F [INFO] 10.217.0.87:36953 - 24753 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000388781s 2025-08-13T20:11:05.459695468+00:00 stdout F [INFO] 10.217.0.87:50391 - 5271 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000510425s 2025-08-13T20:11:05.476240483+00:00 stdout F [INFO] 10.217.0.87:53411 - 28017 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000549455s 2025-08-13T20:11:05.477273462+00:00 stdout F [INFO] 10.217.0.87:54565 - 53660 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001512104s 2025-08-13T20:11:05.517877896+00:00 stdout F [INFO] 10.217.0.87:36552 - 9276 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000594707s 2025-08-13T20:11:05.517877896+00:00 stdout F [INFO] 10.217.0.87:57974 - 5702 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000671679s 2025-08-13T20:11:05.518295358+00:00 stdout F [INFO] 10.217.0.87:35507 - 62258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000621747s 2025-08-13T20:11:05.518966027+00:00 stdout F [INFO] 10.217.0.87:48489 - 51566 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001242945s 2025-08-13T20:11:05.532273779+00:00 stdout F [INFO] 10.217.0.87:54360 - 8769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000570296s 2025-08-13T20:11:05.532620469+00:00 stdout F [INFO] 10.217.0.87:45071 - 2076 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000563326s 2025-08-13T20:11:05.572070690+00:00 stdout F [INFO] 10.217.0.87:35327 - 48220 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001152263s 2025-08-13T20:11:05.573180142+00:00 stdout F [INFO] 10.217.0.87:40319 - 52828 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888025s 2025-08-13T20:11:05.573360557+00:00 stdout F [INFO] 10.217.0.87:57234 - 30752 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001159563s 2025-08-13T20:11:05.574248242+00:00 stdout F [INFO] 10.217.0.87:50779 - 25703 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000996218s 2025-08-13T20:11:05.636950840+00:00 stdout F [INFO] 10.217.0.87:38613 - 10822 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001529404s 2025-08-13T20:11:05.636950840+00:00 stdout F [INFO] 10.217.0.87:34827 - 31320 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001443362s 2025-08-13T20:11:05.695036296+00:00 stdout F [INFO] 10.217.0.87:50823 - 21243 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000660819s 2025-08-13T20:11:05.695036296+00:00 stdout F [INFO] 10.217.0.87:34187 - 38762 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649678s 2025-08-13T20:11:05.695768016+00:00 stdout F [INFO] 10.217.0.87:48186 - 4485 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000949247s 2025-08-13T20:11:05.695871989+00:00 stdout F [INFO] 10.217.0.87:55448 - 36001 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001257476s 2025-08-13T20:11:05.727963740+00:00 stdout F [INFO] 10.217.0.87:45140 - 62363 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001007239s 2025-08-13T20:11:05.727963740+00:00 stdout F [INFO] 10.217.0.87:41301 - 4913 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000911526s 2025-08-13T20:11:05.737425181+00:00 stdout F [INFO] 10.217.0.87:54940 - 54838 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068333s 2025-08-13T20:11:05.737695999+00:00 stdout F [INFO] 10.217.0.87:47179 - 4957 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000484444s 2025-08-13T20:11:05.750248559+00:00 stdout F [INFO] 10.217.0.87:38521 - 7178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070606s 2025-08-13T20:11:05.750464235+00:00 stdout F [INFO] 10.217.0.87:40720 - 32764 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742241s 2025-08-13T20:11:05.758315660+00:00 stdout F [INFO] 10.217.0.87:53784 - 14976 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000741112s 2025-08-13T20:11:05.759021180+00:00 stdout F [INFO] 10.217.0.87:47766 - 50912 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000766572s 2025-08-13T20:11:05.787698922+00:00 stdout F [INFO] 10.217.0.87:46050 - 38144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000734861s 2025-08-13T20:11:05.787846096+00:00 stdout F [INFO] 10.217.0.87:49776 - 41453 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069027s 2025-08-13T20:11:05.808445967+00:00 stdout F [INFO] 10.217.0.87:34882 - 53698 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775223s 2025-08-13T20:11:05.810042633+00:00 stdout F [INFO] 10.217.0.87:57934 - 36471 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613088s 2025-08-13T20:11:05.815279783+00:00 stdout F [INFO] 10.217.0.87:59022 - 44298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000583917s 2025-08-13T20:11:05.815389526+00:00 stdout F [INFO] 10.217.0.87:50553 - 13065 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000446953s 2025-08-13T20:11:05.847885058+00:00 stdout F [INFO] 10.217.0.87:50104 - 3863 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000556416s 2025-08-13T20:11:05.849309909+00:00 stdout F [INFO] 10.217.0.87:42086 - 17959 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001748971s 2025-08-13T20:11:05.850025639+00:00 stdout F [INFO] 10.217.0.87:46475 - 670 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000601117s 2025-08-13T20:11:05.850271846+00:00 stdout F [INFO] 10.217.0.87:54185 - 52777 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000961198s 2025-08-13T20:11:05.864866695+00:00 stdout F [INFO] 10.217.0.87:35066 - 21339 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000673089s 2025-08-13T20:11:05.864866695+00:00 stdout F [INFO] 10.217.0.87:52115 - 23043 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000745862s 2025-08-13T20:11:05.905121909+00:00 stdout F [INFO] 10.217.0.87:41027 - 45325 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649229s 2025-08-13T20:11:05.905199141+00:00 stdout F [INFO] 10.217.0.87:35153 - 34789 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000867205s 2025-08-13T20:11:05.905703236+00:00 stdout F [INFO] 10.217.0.87:48675 - 18007 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000562456s 2025-08-13T20:11:05.905742957+00:00 stdout F [INFO] 10.217.0.87:43674 - 13853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000559216s 2025-08-13T20:11:05.921436857+00:00 stdout F [INFO] 10.217.0.87:48274 - 1386 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000690469s 2025-08-13T20:11:05.921436857+00:00 stdout F [INFO] 10.217.0.87:42202 - 38824 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000865555s 2025-08-13T20:11:05.946203447+00:00 stdout F [INFO] 10.217.0.87:56106 - 42543 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072583s 2025-08-13T20:11:05.946437223+00:00 stdout F [INFO] 10.217.0.87:56970 - 21694 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012729s 2025-08-13T20:11:05.961452604+00:00 stdout F [INFO] 10.217.0.87:55002 - 42104 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834214s 2025-08-13T20:11:05.962054481+00:00 stdout F [INFO] 10.217.0.87:57408 - 45063 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001483442s 2025-08-13T20:11:05.978225745+00:00 stdout F [INFO] 10.217.0.87:54164 - 2186 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000564496s 2025-08-13T20:11:05.978225745+00:00 stdout F [INFO] 10.217.0.87:49778 - 4297 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001432481s 2025-08-13T20:11:06.001327877+00:00 stdout F [INFO] 10.217.0.87:53683 - 16082 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000602247s 2025-08-13T20:11:06.001327877+00:00 stdout F [INFO] 10.217.0.87:50274 - 56137 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070625s 2025-08-13T20:11:06.024428690+00:00 stdout F [INFO] 10.217.0.87:59104 - 10557 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000574946s 2025-08-13T20:11:06.024482571+00:00 stdout F [INFO] 10.217.0.87:57333 - 36257 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000501454s 2025-08-13T20:11:06.042144708+00:00 stdout F [INFO] 10.217.0.87:39448 - 51107 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729891s 2025-08-13T20:11:06.042350963+00:00 stdout F [INFO] 10.217.0.87:53358 - 62191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761701s 2025-08-13T20:11:06.064882039+00:00 stdout F [INFO] 10.217.0.87:46028 - 30827 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000779163s 2025-08-13T20:11:06.064965342+00:00 stdout F [INFO] 10.217.0.87:34523 - 33195 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976068s 2025-08-13T20:11:06.079122668+00:00 stdout F [INFO] 10.217.0.87:47662 - 3356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976048s 2025-08-13T20:11:06.079490318+00:00 stdout F [INFO] 10.217.0.87:39599 - 47911 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001112542s 2025-08-13T20:11:06.081466575+00:00 stdout F [INFO] 10.217.0.87:56701 - 58527 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001175663s 2025-08-13T20:11:06.081526277+00:00 stdout F [INFO] 10.217.0.87:46845 - 61569 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001124562s 2025-08-13T20:11:06.094482548+00:00 stdout F [INFO] 10.217.0.87:44174 - 39781 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000914246s 2025-08-13T20:11:06.094591531+00:00 stdout F [INFO] 10.217.0.87:57763 - 55061 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00136419s 2025-08-13T20:11:06.117101526+00:00 stdout F [INFO] 10.217.0.87:35335 - 509 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558666s 2025-08-13T20:11:06.117376293+00:00 stdout F [INFO] 10.217.0.87:45399 - 59288 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826444s 2025-08-13T20:11:06.134357430+00:00 stdout F [INFO] 10.217.0.87:57107 - 7410 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000519265s 2025-08-13T20:11:06.134609068+00:00 stdout F [INFO] 10.217.0.87:38962 - 36757 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000547605s 2025-08-13T20:11:06.136617685+00:00 stdout F [INFO] 10.217.0.87:40779 - 41501 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000559076s 2025-08-13T20:11:06.136842652+00:00 stdout F [INFO] 10.217.0.87:53109 - 41446 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000586077s 2025-08-13T20:11:06.153357345+00:00 stdout F [INFO] 10.217.0.87:53791 - 54818 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000739891s 2025-08-13T20:11:06.153455868+00:00 stdout F [INFO] 10.217.0.87:40529 - 22388 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000568216s 2025-08-13T20:11:06.174253844+00:00 stdout F [INFO] 10.217.0.87:33434 - 53255 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000811123s 2025-08-13T20:11:06.174311886+00:00 stdout F [INFO] 10.217.0.87:41504 - 17167 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000981359s 2025-08-13T20:11:06.191676854+00:00 stdout F [INFO] 10.217.0.87:44850 - 25924 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000839094s 2025-08-13T20:11:06.191676854+00:00 stdout F [INFO] 10.217.0.87:35096 - 8796 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000593937s 2025-08-13T20:11:06.194354941+00:00 stdout F [INFO] 10.217.0.87:42299 - 56354 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000612367s 2025-08-13T20:11:06.195107692+00:00 stdout F [INFO] 10.217.0.87:60823 - 24639 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000432882s 2025-08-13T20:11:06.222853908+00:00 stdout F [INFO] 10.217.0.87:58939 - 34814 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000455973s 2025-08-13T20:11:06.223381183+00:00 stdout F [INFO] 10.217.0.87:41083 - 28150 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000423563s 2025-08-13T20:11:06.229876849+00:00 stdout F [INFO] 10.217.0.87:50140 - 37233 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00036622s 2025-08-13T20:11:06.229876849+00:00 stdout F [INFO] 10.217.0.87:44139 - 19006 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000462453s 2025-08-13T20:11:06.235723447+00:00 stdout F [INFO] 10.217.0.87:37000 - 48562 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615558s 2025-08-13T20:11:06.236182840+00:00 stdout F [INFO] 10.217.0.87:37401 - 26048 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000878205s 2025-08-13T20:11:06.248820622+00:00 stdout F [INFO] 10.217.0.87:53517 - 14644 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000478294s 2025-08-13T20:11:06.249197003+00:00 stdout F [INFO] 10.217.0.87:46966 - 49070 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00102005s 2025-08-13T20:11:06.252328663+00:00 stdout F [INFO] 10.217.0.87:49959 - 54517 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936917s 2025-08-13T20:11:06.252872748+00:00 stdout F [INFO] 10.217.0.87:34122 - 64368 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001336928s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:54879 - 31349 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757111s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:36976 - 4227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072202s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:50486 - 30028 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467073s 2025-08-13T20:11:06.290157227+00:00 stdout F [INFO] 10.217.0.87:44581 - 20684 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000646279s 2025-08-13T20:11:06.306865616+00:00 stdout F [INFO] 10.217.0.87:52436 - 41201 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000389121s 2025-08-13T20:11:06.307338690+00:00 stdout F [INFO] 10.217.0.87:46131 - 60825 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000489754s 2025-08-13T20:11:06.347124001+00:00 stdout F [INFO] 10.217.0.87:50795 - 29243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000588857s 2025-08-13T20:11:06.347124001+00:00 stdout F [INFO] 10.217.0.87:40418 - 64190 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070063s 2025-08-13T20:11:06.354559374+00:00 stdout F [INFO] 10.217.0.87:38571 - 1815 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000470153s 2025-08-13T20:11:06.355035317+00:00 stdout F [INFO] 10.217.0.87:33273 - 16168 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579847s 2025-08-13T20:11:06.400201202+00:00 stdout F [INFO] 10.217.0.87:60599 - 15192 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002863203s 2025-08-13T20:11:06.400386088+00:00 stdout F [INFO] 10.217.0.87:39072 - 30678 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002858582s 2025-08-13T20:11:06.412956878+00:00 stdout F [INFO] 10.217.0.87:55153 - 53749 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003419738s 2025-08-13T20:11:06.412956878+00:00 stdout F [INFO] 10.217.0.87:39601 - 28661 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003660195s 2025-08-13T20:11:06.454535910+00:00 stdout F [INFO] 10.217.0.87:35573 - 1487 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000883155s 2025-08-13T20:11:06.454573761+00:00 stdout F [INFO] 10.217.0.87:53850 - 41953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000804833s 2025-08-13T20:11:06.463589630+00:00 stdout F [INFO] 10.217.0.87:34383 - 55906 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000546776s 2025-08-13T20:11:06.463877388+00:00 stdout F [INFO] 10.217.0.87:50571 - 33343 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808043s 2025-08-13T20:11:06.491525171+00:00 stdout F [INFO] 10.217.0.87:41911 - 60063 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001192454s 2025-08-13T20:11:06.492985652+00:00 stdout F [INFO] 10.217.0.87:45782 - 22682 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002186713s 2025-08-13T20:11:06.509816265+00:00 stdout F [INFO] 10.217.0.87:52306 - 22289 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000733441s 2025-08-13T20:11:06.509855916+00:00 stdout F [INFO] 10.217.0.87:35555 - 28182 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000762071s 2025-08-13T20:11:06.533077352+00:00 stdout F [INFO] 10.217.0.87:43457 - 64202 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824794s 2025-08-13T20:11:06.534133062+00:00 stdout F [INFO] 10.217.0.87:50391 - 2706 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000806523s 2025-08-13T20:11:06.552254362+00:00 stdout F [INFO] 10.217.0.87:41758 - 54760 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000894436s 2025-08-13T20:11:06.552254362+00:00 stdout F [INFO] 10.217.0.87:41418 - 37950 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102568s 2025-08-13T20:11:06.565714768+00:00 stdout F [INFO] 10.217.0.87:53002 - 34946 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000669639s 2025-08-13T20:11:06.565991156+00:00 stdout F [INFO] 10.217.0.87:52134 - 13366 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000942697s 2025-08-13T20:11:06.592532397+00:00 stdout F [INFO] 10.217.0.87:41702 - 16932 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000916666s 2025-08-13T20:11:06.592709062+00:00 stdout F [INFO] 10.217.0.87:33361 - 18852 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009309s 2025-08-13T20:11:06.608624028+00:00 stdout F [INFO] 10.217.0.87:37399 - 31636 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000787072s 2025-08-13T20:11:06.609027200+00:00 stdout F [INFO] 10.217.0.87:38988 - 32905 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001131022s 2025-08-13T20:11:06.650274692+00:00 stdout F [INFO] 10.217.0.87:56662 - 29308 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001180094s 2025-08-13T20:11:06.651669022+00:00 stdout F [INFO] 10.217.0.87:41415 - 62225 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001204265s 2025-08-13T20:11:06.665180180+00:00 stdout F [INFO] 10.217.0.87:35263 - 58718 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000958997s 2025-08-13T20:11:06.665440387+00:00 stdout F [INFO] 10.217.0.87:34409 - 3458 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000758072s 2025-08-13T20:11:06.665900150+00:00 stdout F [INFO] 10.217.0.87:38774 - 44257 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069317s 2025-08-13T20:11:06.666261191+00:00 stdout F [INFO] 10.217.0.87:51417 - 37318 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000584717s 2025-08-13T20:11:06.710675574+00:00 stdout F [INFO] 10.217.0.87:58364 - 51070 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012859s 2025-08-13T20:11:06.710869830+00:00 stdout F [INFO] 10.217.0.87:33657 - 35571 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009739s 2025-08-13T20:11:06.719730754+00:00 stdout F [INFO] 10.217.0.87:56472 - 42237 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000597037s 2025-08-13T20:11:06.720049443+00:00 stdout F [INFO] 10.217.0.87:34285 - 51553 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070137s 2025-08-13T20:11:06.725270743+00:00 stdout F [INFO] 10.217.0.87:56793 - 19375 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000649548s 2025-08-13T20:11:06.725506199+00:00 stdout F [INFO] 10.217.0.87:37335 - 60642 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00069964s 2025-08-13T20:11:06.743459024+00:00 stdout F [INFO] 10.217.0.87:49928 - 38274 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000782013s 2025-08-13T20:11:06.743459024+00:00 stdout F [INFO] 10.217.0.87:38836 - 29306 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000717341s 2025-08-13T20:11:06.768242695+00:00 stdout F [INFO] 10.217.0.87:60924 - 22492 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001305128s 2025-08-13T20:11:06.768311347+00:00 stdout F [INFO] 10.217.0.87:33026 - 58340 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001566935s 2025-08-13T20:11:06.774824223+00:00 stdout F [INFO] 10.217.0.87:35023 - 36636 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000527365s 2025-08-13T20:11:06.775069740+00:00 stdout F [INFO] 10.217.0.87:36959 - 13740 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000538765s 2025-08-13T20:11:06.804987058+00:00 stdout F [INFO] 10.217.0.87:52927 - 15321 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000732581s 2025-08-13T20:11:06.805176204+00:00 stdout F [INFO] 10.217.0.87:41921 - 3811 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001665357s 2025-08-13T20:11:06.824849498+00:00 stdout F [INFO] 10.217.0.87:56546 - 11258 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651899s 2025-08-13T20:11:06.825061844+00:00 stdout F [INFO] 10.217.0.87:53868 - 34815 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001021429s 2025-08-13T20:11:06.843694408+00:00 stdout F [INFO] 10.217.0.87:39912 - 28227 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000592707s 2025-08-13T20:11:06.844727207+00:00 stdout F [INFO] 10.217.0.87:43094 - 49549 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000830364s 2025-08-13T20:11:06.880210985+00:00 stdout F [INFO] 10.217.0.87:39540 - 28116 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000741151s 2025-08-13T20:11:06.880420891+00:00 stdout F [INFO] 10.217.0.87:37296 - 5530 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000985599s 2025-08-13T20:11:06.898124619+00:00 stdout F [INFO] 10.217.0.87:34894 - 32594 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001575145s 2025-08-13T20:11:06.898483399+00:00 stdout F [INFO] 10.217.0.87:52980 - 13091 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001652387s 2025-08-13T20:11:06.929318283+00:00 stdout F [INFO] 10.217.0.87:59919 - 19170 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769912s 2025-08-13T20:11:06.929360744+00:00 stdout F [INFO] 10.217.0.87:47630 - 55246 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000912626s 2025-08-13T20:11:06.937565909+00:00 stdout F [INFO] 10.217.0.87:40289 - 56438 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001221625s 2025-08-13T20:11:06.937699333+00:00 stdout F [INFO] 10.217.0.87:52474 - 24238 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001223485s 2025-08-13T20:11:06.939627758+00:00 stdout F [INFO] 10.217.0.87:36682 - 2516 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001295677s 2025-08-13T20:11:06.939627758+00:00 stdout F [INFO] 10.217.0.87:56748 - 10632 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001002569s 2025-08-13T20:11:06.986652897+00:00 stdout F [INFO] 10.217.0.87:49396 - 42637 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002637565s 2025-08-13T20:11:06.986720629+00:00 stdout F [INFO] 10.217.0.87:41630 - 25830 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002414649s 2025-08-13T20:11:06.997326363+00:00 stdout F [INFO] 10.217.0.87:51569 - 44524 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104834s 2025-08-13T20:11:06.997622911+00:00 stdout F [INFO] 10.217.0.87:36794 - 18852 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001446492s 2025-08-13T20:11:07.044982969+00:00 stdout F [INFO] 10.217.0.87:51837 - 36888 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936357s 2025-08-13T20:11:07.045427022+00:00 stdout F [INFO] 10.217.0.87:58976 - 21162 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001307418s 2025-08-13T20:11:07.045548015+00:00 stdout F [INFO] 10.217.0.87:36974 - 32460 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000771272s 2025-08-13T20:11:07.047007987+00:00 stdout F [INFO] 10.217.0.87:58415 - 60280 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002137671s 2025-08-13T20:11:07.105296868+00:00 stdout F [INFO] 10.217.0.87:60538 - 54742 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001012649s 2025-08-13T20:11:07.105296868+00:00 stdout F [INFO] 10.217.0.87:58811 - 17080 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0017433s 2025-08-13T20:11:07.105495824+00:00 stdout F [INFO] 10.217.0.87:51688 - 12168 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002047849s 2025-08-13T20:11:07.105664929+00:00 stdout F [INFO] 10.217.0.87:50539 - 42293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002032268s 2025-08-13T20:11:07.163007373+00:00 stdout F [INFO] 10.217.0.87:58079 - 39541 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000982978s 2025-08-13T20:11:07.163007373+00:00 stdout F [INFO] 10.217.0.87:37302 - 55370 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001032149s 2025-08-13T20:11:07.170320102+00:00 stdout F [INFO] 10.217.0.87:49364 - 65517 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000833624s 2025-08-13T20:11:07.171272730+00:00 stdout F [INFO] 10.217.0.87:36435 - 40868 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000906396s 2025-08-13T20:11:07.171989020+00:00 stdout F [INFO] 10.217.0.87:59848 - 32310 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000610828s 2025-08-13T20:11:07.172141005+00:00 stdout F [INFO] 10.217.0.87:55904 - 37116 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000517145s 2025-08-13T20:11:07.207731465+00:00 stdout F [INFO] 10.217.0.87:43437 - 61119 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728381s 2025-08-13T20:11:07.207861279+00:00 stdout F [INFO] 10.217.0.87:57484 - 35925 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000743692s 2025-08-13T20:11:07.221266323+00:00 stdout F [INFO] 10.217.0.87:43164 - 16768 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000541336s 2025-08-13T20:11:07.221266323+00:00 stdout F [INFO] 10.217.0.87:60482 - 24325 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000885805s 2025-08-13T20:11:07.225198466+00:00 stdout F [INFO] 10.217.0.87:49620 - 5559 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587827s 2025-08-13T20:11:07.225319079+00:00 stdout F [INFO] 10.217.0.87:48813 - 53088 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000534395s 2025-08-13T20:11:07.257577124+00:00 stdout F [INFO] 10.217.0.87:40760 - 5643 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000937127s 2025-08-13T20:11:07.257826671+00:00 stdout F [INFO] 10.217.0.87:41535 - 4835 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001141493s 2025-08-13T20:11:07.262181946+00:00 stdout F [INFO] 10.217.0.87:49409 - 29932 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072507s 2025-08-13T20:11:07.262240898+00:00 stdout F [INFO] 10.217.0.87:33200 - 42103 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000580437s 2025-08-13T20:11:07.276715173+00:00 stdout F [INFO] 10.217.0.87:49404 - 50720 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000742431s 2025-08-13T20:11:07.276846737+00:00 stdout F [INFO] 10.217.0.87:54369 - 4775 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000498485s 2025-08-13T20:11:07.280807520+00:00 stdout F [INFO] 10.217.0.87:53914 - 14210 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000432693s 2025-08-13T20:11:07.281426688+00:00 stdout F [INFO] 10.217.0.87:51940 - 43696 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888956s 2025-08-13T20:11:07.317145732+00:00 stdout F [INFO] 10.217.0.87:59791 - 56988 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000923027s 2025-08-13T20:11:07.317197224+00:00 stdout F [INFO] 10.217.0.87:43385 - 12255 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00072117s 2025-08-13T20:11:07.334514660+00:00 stdout F [INFO] 10.217.0.87:34980 - 10777 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070021s 2025-08-13T20:11:07.334691555+00:00 stdout F [INFO] 10.217.0.87:55757 - 15662 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000538395s 2025-08-13T20:11:07.376549415+00:00 stdout F [INFO] 10.217.0.87:53403 - 11578 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070054s 2025-08-13T20:11:07.376549415+00:00 stdout F [INFO] 10.217.0.87:35946 - 17798 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001085221s 2025-08-13T20:11:07.389303621+00:00 stdout F [INFO] 10.217.0.87:44091 - 57874 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000605428s 2025-08-13T20:11:07.389703323+00:00 stdout F [INFO] 10.217.0.87:53756 - 50686 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000758902s 2025-08-13T20:11:07.417015286+00:00 stdout F [INFO] 10.217.0.87:40347 - 17567 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001583275s 2025-08-13T20:11:07.417194241+00:00 stdout F [INFO] 10.217.0.87:33768 - 59369 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001353908s 2025-08-13T20:11:07.443644379+00:00 stdout F [INFO] 10.217.0.87:50810 - 56645 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776422s 2025-08-13T20:11:07.443644379+00:00 stdout F [INFO] 10.217.0.87:46506 - 36903 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000660439s 2025-08-13T20:11:07.472205178+00:00 stdout F [INFO] 10.217.0.87:47203 - 54622 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000751011s 2025-08-13T20:11:07.472205178+00:00 stdout F [INFO] 10.217.0.87:57837 - 6935 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000608628s 2025-08-13T20:11:07.481557286+00:00 stdout F [INFO] 10.217.0.87:34574 - 39418 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000713751s 2025-08-13T20:11:07.482115062+00:00 stdout F [INFO] 10.217.0.87:60348 - 34743 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001230536s 2025-08-13T20:11:07.507484500+00:00 stdout F [INFO] 10.217.0.87:34603 - 8271 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000671269s 2025-08-13T20:11:07.507538051+00:00 stdout F [INFO] 10.217.0.87:46661 - 53710 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000798843s 2025-08-13T20:11:07.527233716+00:00 stdout F [INFO] 10.217.0.87:50539 - 40214 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000919156s 2025-08-13T20:11:07.527354589+00:00 stdout F [INFO] 10.217.0.87:58467 - 3852 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000786952s 2025-08-13T20:11:07.535472752+00:00 stdout F [INFO] 10.217.0.87:34042 - 43843 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000737821s 2025-08-13T20:11:07.535559044+00:00 stdout F [INFO] 10.217.0.87:35568 - 3264 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000796383s 2025-08-13T20:11:07.557271757+00:00 stdout F [INFO] 10.217.0.87:38703 - 62460 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070702s 2025-08-13T20:11:07.557654668+00:00 stdout F [INFO] 10.217.0.87:54204 - 9243 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000859315s 2025-08-13T20:11:07.571607918+00:00 stdout F [INFO] 10.217.0.87:51349 - 43568 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000394932s 2025-08-13T20:11:07.571909177+00:00 stdout F [INFO] 10.217.0.87:55665 - 10478 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000890045s 2025-08-13T20:11:07.579506694+00:00 stdout F [INFO] 10.217.0.87:41713 - 44982 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000639949s 2025-08-13T20:11:07.579822964+00:00 stdout F [INFO] 10.217.0.87:35599 - 30094 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000641579s 2025-08-13T20:11:07.611660656+00:00 stdout F [INFO] 10.217.0.87:52989 - 57443 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000811524s 2025-08-13T20:11:07.612091619+00:00 stdout F [INFO] 10.217.0.87:56925 - 44043 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000982728s 2025-08-13T20:11:07.630198318+00:00 stdout F [INFO] 10.217.0.87:59685 - 54799 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001186674s 2025-08-13T20:11:07.630585489+00:00 stdout F [INFO] 10.217.0.87:41588 - 19165 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001908224s 2025-08-13T20:11:07.638215148+00:00 stdout F [INFO] 10.217.0.87:60419 - 15905 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870815s 2025-08-13T20:11:07.638310660+00:00 stdout F [INFO] 10.217.0.87:39051 - 25853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001563095s 2025-08-13T20:11:07.668658981+00:00 stdout F [INFO] 10.217.0.87:36510 - 56531 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753982s 2025-08-13T20:11:07.668910118+00:00 stdout F [INFO] 10.217.0.87:45382 - 41023 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000604917s 2025-08-13T20:11:07.688061667+00:00 stdout F [INFO] 10.217.0.87:43511 - 46046 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000736041s 2025-08-13T20:11:07.688430487+00:00 stdout F [INFO] 10.217.0.87:55325 - 56240 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001117712s 2025-08-13T20:11:07.693539064+00:00 stdout F [INFO] 10.217.0.87:57667 - 32897 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000927677s 2025-08-13T20:11:07.693603626+00:00 stdout F [INFO] 10.217.0.87:53696 - 30464 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103904s 2025-08-13T20:11:07.716821652+00:00 stdout F [INFO] 10.217.0.87:47346 - 60672 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000607528s 2025-08-13T20:11:07.717144831+00:00 stdout F [INFO] 10.217.0.87:57924 - 33305 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000790483s 2025-08-13T20:11:07.743893558+00:00 stdout F [INFO] 10.217.0.87:56868 - 30677 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000767852s 2025-08-13T20:11:07.744194106+00:00 stdout F [INFO] 10.217.0.87:49842 - 57859 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000859045s 2025-08-13T20:11:07.772413855+00:00 stdout F [INFO] 10.217.0.87:33450 - 52319 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000833204s 2025-08-13T20:11:07.772864258+00:00 stdout F [INFO] 10.217.0.87:49356 - 2562 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001155553s 2025-08-13T20:11:07.799166922+00:00 stdout F [INFO] 10.217.0.87:53352 - 29022 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00142271s 2025-08-13T20:11:07.799415080+00:00 stdout F [INFO] 10.217.0.87:49746 - 13386 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001722979s 2025-08-13T20:11:07.826411524+00:00 stdout F [INFO] 10.217.0.87:35786 - 37507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000635989s 2025-08-13T20:11:07.826551608+00:00 stdout F [INFO] 10.217.0.87:40944 - 25651 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000536725s 2025-08-13T20:11:07.866043630+00:00 stdout F [INFO] 10.217.0.87:46701 - 20031 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803634s 2025-08-13T20:11:07.866305297+00:00 stdout F [INFO] 10.217.0.87:49754 - 37423 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000920356s 2025-08-13T20:11:07.881445101+00:00 stdout F [INFO] 10.217.0.87:41833 - 64957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000903816s 2025-08-13T20:11:07.881964906+00:00 stdout F [INFO] 10.217.0.87:43415 - 47315 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001354449s 2025-08-13T20:11:07.882467471+00:00 stdout F [INFO] 10.217.0.87:48205 - 62842 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731091s 2025-08-13T20:11:07.882601685+00:00 stdout F [INFO] 10.217.0.87:48027 - 3372 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105356s 2025-08-13T20:11:07.905826531+00:00 stdout F [INFO] 10.217.0.87:58159 - 46154 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000625888s 2025-08-13T20:11:07.906098518+00:00 stdout F [INFO] 10.217.0.87:36665 - 53602 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000843184s 2025-08-13T20:11:07.921844300+00:00 stdout F [INFO] 10.217.0.87:52715 - 35017 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000484183s 2025-08-13T20:11:07.922081147+00:00 stdout F [INFO] 10.217.0.87:52898 - 49650 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000426353s 2025-08-13T20:11:07.938112126+00:00 stdout F [INFO] 10.217.0.87:49711 - 41337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000938537s 2025-08-13T20:11:07.938154007+00:00 stdout F [INFO] 10.217.0.87:44817 - 58022 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001096921s 2025-08-13T20:11:07.962470985+00:00 stdout F [INFO] 10.217.0.87:41285 - 13116 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000762342s 2025-08-13T20:11:07.962525396+00:00 stdout F [INFO] 10.217.0.87:38417 - 4434 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000501594s 2025-08-13T20:11:07.981162011+00:00 stdout F [INFO] 10.217.0.87:37682 - 11071 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072595s 2025-08-13T20:11:07.981193221+00:00 stdout F [INFO] 10.217.0.87:45937 - 140 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000706161s 2025-08-13T20:11:07.999602619+00:00 stdout F [INFO] 10.217.0.87:48786 - 24055 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000580597s 2025-08-13T20:11:07.999602619+00:00 stdout F [INFO] 10.217.0.87:39752 - 12452 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000929997s 2025-08-13T20:11:08.064061297+00:00 stdout F [INFO] 10.217.0.87:49777 - 44088 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000721461s 2025-08-13T20:11:08.064688655+00:00 stdout F [INFO] 10.217.0.87:58214 - 49133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001578405s 2025-08-13T20:11:08.081518438+00:00 stdout F [INFO] 10.217.0.87:36948 - 4802 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000605337s 2025-08-13T20:11:08.081581300+00:00 stdout F [INFO] 10.217.0.87:41574 - 52127 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793012s 2025-08-13T20:11:08.119315391+00:00 stdout F [INFO] 10.217.0.87:34539 - 59046 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001407601s 2025-08-13T20:11:08.119385703+00:00 stdout F [INFO] 10.217.0.87:53056 - 45122 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001035959s 2025-08-13T20:11:08.140376275+00:00 stdout F [INFO] 10.217.0.87:32830 - 23502 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000724451s 2025-08-13T20:11:08.141998682+00:00 stdout F [INFO] 10.217.0.87:41187 - 7567 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001931556s 2025-08-13T20:11:08.146236623+00:00 stdout F [INFO] 10.217.0.87:45883 - 45200 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000991418s 2025-08-13T20:11:08.146587473+00:00 stdout F [INFO] 10.217.0.87:59245 - 9995 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001093662s 2025-08-13T20:11:08.176333046+00:00 stdout F [INFO] 10.217.0.87:38631 - 5845 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000641959s 2025-08-13T20:11:08.176854491+00:00 stdout F [INFO] 10.217.0.87:45038 - 42993 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985579s 2025-08-13T20:11:08.195841936+00:00 stdout F [INFO] 10.217.0.87:35908 - 62593 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00107632s 2025-08-13T20:11:08.195879627+00:00 stdout F [INFO] 10.217.0.87:57842 - 25290 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001344349s 2025-08-13T20:11:08.201090476+00:00 stdout F [INFO] 10.217.0.87:40521 - 43457 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070197s 2025-08-13T20:11:08.201833717+00:00 stdout F [INFO] 10.217.0.87:53603 - 40731 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001195764s 2025-08-13T20:11:08.236267675+00:00 stdout F [INFO] 10.217.0.87:54377 - 33154 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000904966s 2025-08-13T20:11:08.236475721+00:00 stdout F [INFO] 10.217.0.87:49212 - 45423 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000958377s 2025-08-13T20:11:08.251370648+00:00 stdout F [INFO] 10.217.0.87:55160 - 29098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000675209s 2025-08-13T20:11:08.251516222+00:00 stdout F [INFO] 10.217.0.87:42220 - 5777 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000705991s 2025-08-13T20:11:08.259369497+00:00 stdout F [INFO] 10.217.0.87:53420 - 37723 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000562176s 2025-08-13T20:11:08.259485550+00:00 stdout F [INFO] 10.217.0.87:37634 - 18652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000766012s 2025-08-13T20:11:08.306407546+00:00 stdout F [INFO] 10.217.0.87:58080 - 12192 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000620858s 2025-08-13T20:11:08.307266460+00:00 stdout F [INFO] 10.217.0.87:41950 - 42971 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001173934s 2025-08-13T20:11:08.317512554+00:00 stdout F [INFO] 10.217.0.87:41163 - 25281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000689609s 2025-08-13T20:11:08.317614577+00:00 stdout F [INFO] 10.217.0.87:43491 - 18647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000532295s 2025-08-13T20:11:08.322396894+00:00 stdout F [INFO] 10.217.0.87:35754 - 20409 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000502435s 2025-08-13T20:11:08.322396894+00:00 stdout F [INFO] 10.217.0.87:52198 - 36446 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000585887s 2025-08-13T20:11:08.339933417+00:00 stdout F [INFO] 10.217.0.87:50506 - 34942 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000850635s 2025-08-13T20:11:08.340078971+00:00 stdout F [INFO] 10.217.0.87:41082 - 58001 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001259356s 2025-08-13T20:11:08.347099922+00:00 stdout F [INFO] 10.217.0.87:48202 - 24673 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000706341s 2025-08-13T20:11:08.347345749+00:00 stdout F [INFO] 10.217.0.87:55112 - 2851 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000876806s 2025-08-13T20:11:08.363877823+00:00 stdout F [INFO] 10.217.0.87:45181 - 18776 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068396s 2025-08-13T20:11:08.364125350+00:00 stdout F [INFO] 10.217.0.87:33510 - 43360 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000972978s 2025-08-13T20:11:08.376663840+00:00 stdout F [INFO] 10.217.0.87:54478 - 11527 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000708951s 2025-08-13T20:11:08.376987139+00:00 stdout F [INFO] 10.217.0.87:48980 - 35262 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651708s 2025-08-13T20:11:08.384834434+00:00 stdout F [INFO] 10.217.0.87:35665 - 2681 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000518975s 2025-08-13T20:11:08.384966958+00:00 stdout F [INFO] 10.217.0.87:33876 - 4161 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000749152s 2025-08-13T20:11:08.395677425+00:00 stdout F [INFO] 10.217.0.87:57212 - 64901 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000905326s 2025-08-13T20:11:08.395890481+00:00 stdout F [INFO] 10.217.0.87:45257 - 8753 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069839s 2025-08-13T20:11:08.404554680+00:00 stdout F [INFO] 10.217.0.87:59219 - 40644 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001885915s 2025-08-13T20:11:08.404554680+00:00 stdout F [INFO] 10.217.0.87:50503 - 20139 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001935436s 2025-08-13T20:11:08.423018619+00:00 stdout F [INFO] 10.217.0.87:60190 - 12866 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001440531s 2025-08-13T20:11:08.423496763+00:00 stdout F [INFO] 10.217.0.87:55511 - 27204 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00209878s 2025-08-13T20:11:08.432903562+00:00 stdout F [INFO] 10.217.0.87:35410 - 46456 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507545s 2025-08-13T20:11:08.432903562+00:00 stdout F [INFO] 10.217.0.87:58066 - 60152 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000623148s 2025-08-13T20:11:08.459096243+00:00 stdout F [INFO] 10.217.0.87:55275 - 54993 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000979079s 2025-08-13T20:11:08.459194356+00:00 stdout F [INFO] 10.217.0.87:33489 - 1806 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000961338s 2025-08-13T20:11:08.487465687+00:00 stdout F [INFO] 10.217.0.87:56155 - 40098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728021s 2025-08-13T20:11:08.487522958+00:00 stdout F [INFO] 10.217.0.87:34779 - 64998 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000902886s 2025-08-13T20:11:08.499420449+00:00 stdout F [INFO] 10.217.0.87:53081 - 28211 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000496094s 2025-08-13T20:11:08.499476821+00:00 stdout F [INFO] 10.217.0.87:44623 - 61713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000791513s 2025-08-13T20:11:08.517257761+00:00 stdout F [INFO] 10.217.0.87:60530 - 2512 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005661673s 2025-08-13T20:11:08.517432846+00:00 stdout F [INFO] 10.217.0.87:46936 - 26510 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005884159s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:46216 - 38326 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000781543s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:55829 - 44853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001203765s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:36518 - 59457 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001556755s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:38180 - 64319 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001057991s 2025-08-13T20:11:08.555623421+00:00 stdout F [INFO] 10.217.0.87:55842 - 205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000616398s 2025-08-13T20:11:08.555623421+00:00 stdout F [INFO] 10.217.0.87:45745 - 7394 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000842004s 2025-08-13T20:11:08.573364259+00:00 stdout F [INFO] 10.217.0.87:57068 - 35192 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001069521s 2025-08-13T20:11:08.573364259+00:00 stdout F [INFO] 10.217.0.87:49760 - 49044 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000945517s 2025-08-13T20:11:08.579015571+00:00 stdout F [INFO] 10.217.0.87:43119 - 21963 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000549946s 2025-08-13T20:11:08.579708371+00:00 stdout F [INFO] 10.217.0.87:60286 - 28624 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000616738s 2025-08-13T20:11:08.599127888+00:00 stdout F [INFO] 10.217.0.87:42287 - 8556 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001494493s 2025-08-13T20:11:08.599256052+00:00 stdout F [INFO] 10.217.0.87:38544 - 61819 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001566324s 2025-08-13T20:11:08.599857979+00:00 stdout F [INFO] 10.217.0.87:54830 - 40587 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000758882s 2025-08-13T20:11:08.600256170+00:00 stdout F [INFO] 10.217.0.87:45174 - 390 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001139543s 2025-08-13T20:11:08.627299166+00:00 stdout F [INFO] 10.217.0.87:52597 - 28415 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000736661s 2025-08-13T20:11:08.627299166+00:00 stdout F [INFO] 10.217.0.87:57052 - 4137 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000971697s 2025-08-13T20:11:08.632964848+00:00 stdout F [INFO] 10.217.0.87:39800 - 50300 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000512485s 2025-08-13T20:11:08.633108582+00:00 stdout F [INFO] 10.217.0.87:58174 - 39741 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000584247s 2025-08-13T20:11:08.655018860+00:00 stdout F [INFO] 10.217.0.87:47248 - 53195 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000666499s 2025-08-13T20:11:08.655018860+00:00 stdout F [INFO] 10.217.0.87:60705 - 40238 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729491s 2025-08-13T20:11:08.661694522+00:00 stdout F [INFO] 10.217.0.87:53642 - 49494 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000573287s 2025-08-13T20:11:08.661694522+00:00 stdout F [INFO] 10.217.0.87:36968 - 25355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000502334s 2025-08-13T20:11:08.687659976+00:00 stdout F [INFO] 10.217.0.87:48084 - 29807 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000730811s 2025-08-13T20:11:08.687659976+00:00 stdout F [INFO] 10.217.0.87:53674 - 5248 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000805873s 2025-08-13T20:11:08.711454949+00:00 stdout F [INFO] 10.217.0.87:34393 - 12701 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000739862s 2025-08-13T20:11:08.712082747+00:00 stdout F [INFO] 10.217.0.87:42170 - 56756 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000760271s 2025-08-13T20:11:08.741562942+00:00 stdout F [INFO] 10.217.0.87:47188 - 22454 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000750161s 2025-08-13T20:11:08.741692766+00:00 stdout F [INFO] 10.217.0.87:60309 - 59151 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001182084s 2025-08-13T20:11:08.745069502+00:00 stdout F [INFO] 10.217.0.87:45344 - 61474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668139s 2025-08-13T20:11:08.745069502+00:00 stdout F [INFO] 10.217.0.87:58293 - 48213 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105222s 2025-08-13T20:11:08.766544188+00:00 stdout F [INFO] 10.217.0.87:40412 - 29040 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000454903s 2025-08-13T20:11:08.766693232+00:00 stdout F [INFO] 10.217.0.87:59933 - 26530 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000411332s 2025-08-13T20:11:08.797008712+00:00 stdout F [INFO] 10.217.0.87:45446 - 13925 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613957s 2025-08-13T20:11:08.797008712+00:00 stdout F [INFO] 10.217.0.87:57036 - 43938 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000506345s 2025-08-13T20:11:08.820598308+00:00 stdout F [INFO] 10.217.0.87:45608 - 64359 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000469703s 2025-08-13T20:11:08.820818864+00:00 stdout F [INFO] 10.217.0.87:39399 - 5121 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000514944s 2025-08-13T20:11:08.849384853+00:00 stdout F [INFO] 10.217.0.87:40247 - 24315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068251s 2025-08-13T20:11:08.850164516+00:00 stdout F [INFO] 10.217.0.87:59475 - 26937 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001337518s 2025-08-13T20:11:08.874556005+00:00 stdout F [INFO] 10.217.0.87:57108 - 6458 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000702661s 2025-08-13T20:11:08.874619757+00:00 stdout F [INFO] 10.217.0.87:44884 - 13914 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001088391s 2025-08-13T20:11:08.888978959+00:00 stdout F [INFO] 10.217.0.87:53148 - 29687 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000759401s 2025-08-13T20:11:08.889120743+00:00 stdout F [INFO] 10.217.0.87:38152 - 61846 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000890716s 2025-08-13T20:11:08.905832352+00:00 stdout F [INFO] 10.217.0.87:33541 - 15399 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000869564s 2025-08-13T20:11:08.906256264+00:00 stdout F [INFO] 10.217.0.87:36658 - 33176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001183774s 2025-08-13T20:11:08.942624797+00:00 stdout F [INFO] 10.217.0.87:60473 - 46967 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000935207s 2025-08-13T20:11:08.942624797+00:00 stdout F [INFO] 10.217.0.87:42459 - 50629 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000915966s 2025-08-13T20:11:08.956306379+00:00 stdout F [INFO] 10.217.0.87:39704 - 64412 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069933s 2025-08-13T20:11:08.956390161+00:00 stdout F [INFO] 10.217.0.87:35286 - 54664 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657089s 2025-08-13T20:11:09.009847054+00:00 stdout F [INFO] 10.217.0.87:54051 - 28377 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000598137s 2025-08-13T20:11:09.010077731+00:00 stdout F [INFO] 10.217.0.87:37021 - 5506 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000633918s 2025-08-13T20:11:09.064643675+00:00 stdout F [INFO] 10.217.0.87:48804 - 14694 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769092s 2025-08-13T20:11:09.064643675+00:00 stdout F [INFO] 10.217.0.87:40276 - 41515 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000978868s 2025-08-13T20:11:09.075868827+00:00 stdout F [INFO] 10.217.0.87:52252 - 12463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000681859s 2025-08-13T20:11:09.076070703+00:00 stdout F [INFO] 10.217.0.87:40170 - 19513 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001002759s 2025-08-13T20:11:09.115422351+00:00 stdout F [INFO] 10.217.0.87:37118 - 64783 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000903006s 2025-08-13T20:11:09.115540514+00:00 stdout F [INFO] 10.217.0.87:55116 - 52020 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845964s 2025-08-13T20:11:09.123530044+00:00 stdout F [INFO] 10.217.0.87:46018 - 56449 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654459s 2025-08-13T20:11:09.124037828+00:00 stdout F [INFO] 10.217.0.87:34184 - 43139 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000538175s 2025-08-13T20:11:09.130141053+00:00 stdout F [INFO] 10.217.0.87:56344 - 19 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817974s 2025-08-13T20:11:09.130393150+00:00 stdout F [INFO] 10.217.0.87:46527 - 2906 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000835174s 2025-08-13T20:11:09.165076025+00:00 stdout F [INFO] 10.217.0.87:45989 - 3354 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000988799s 2025-08-13T20:11:09.165162427+00:00 stdout F [INFO] 10.217.0.87:40822 - 23382 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001084201s 2025-08-13T20:11:09.177453200+00:00 stdout F [INFO] 10.217.0.87:56347 - 34656 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000718601s 2025-08-13T20:11:09.177554863+00:00 stdout F [INFO] 10.217.0.87:45640 - 4199 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768752s 2025-08-13T20:11:09.185731777+00:00 stdout F [INFO] 10.217.0.87:36847 - 50417 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000633828s 2025-08-13T20:11:09.185731777+00:00 stdout F [INFO] 10.217.0.87:52308 - 60129 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776212s 2025-08-13T20:11:09.221102691+00:00 stdout F [INFO] 10.217.0.87:47405 - 29150 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558266s 2025-08-13T20:11:09.221102691+00:00 stdout F [INFO] 10.217.0.87:45271 - 42783 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761961s 2025-08-13T20:11:09.231398136+00:00 stdout F [INFO] 10.217.0.87:33150 - 48647 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000595407s 2025-08-13T20:11:09.231444548+00:00 stdout F [INFO] 10.217.0.87:55990 - 64284 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000492205s 2025-08-13T20:11:09.241229768+00:00 stdout F [INFO] 10.217.0.87:35019 - 34439 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612528s 2025-08-13T20:11:09.241851496+00:00 stdout F [INFO] 10.217.0.87:57066 - 18806 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000641939s 2025-08-13T20:11:09.275480630+00:00 stdout F [INFO] 10.217.0.87:46739 - 53451 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000772852s 2025-08-13T20:11:09.275853701+00:00 stdout F [INFO] 10.217.0.87:55427 - 54278 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103188s 2025-08-13T20:11:09.276502500+00:00 stdout F [INFO] 10.217.0.87:49219 - 48743 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000627578s 2025-08-13T20:11:09.277077556+00:00 stdout F [INFO] 10.217.0.87:38267 - 55558 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001083721s 2025-08-13T20:11:09.285312502+00:00 stdout F [INFO] 10.217.0.87:57050 - 26639 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527796s 2025-08-13T20:11:09.285312502+00:00 stdout F [INFO] 10.217.0.87:38263 - 35891 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000778362s 2025-08-13T20:11:09.328960133+00:00 stdout F [INFO] 10.217.0.87:35786 - 6519 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000763642s 2025-08-13T20:11:09.329544330+00:00 stdout F [INFO] 10.217.0.87:51541 - 48774 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001336938s 2025-08-13T20:11:09.330937810+00:00 stdout F [INFO] 10.217.0.87:45477 - 8398 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767262s 2025-08-13T20:11:09.331439575+00:00 stdout F [INFO] 10.217.0.87:47873 - 29317 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001480123s 2025-08-13T20:11:09.339755763+00:00 stdout F [INFO] 10.217.0.87:53989 - 17357 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001479943s 2025-08-13T20:11:09.339946839+00:00 stdout F [INFO] 10.217.0.87:39615 - 48203 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001503723s 2025-08-13T20:11:09.371635847+00:00 stdout F [INFO] 10.217.0.87:33870 - 47553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803013s 2025-08-13T20:11:09.372254395+00:00 stdout F [INFO] 10.217.0.87:33091 - 45955 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001228876s 2025-08-13T20:11:09.392121074+00:00 stdout F [INFO] 10.217.0.87:43512 - 7191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619788s 2025-08-13T20:11:09.392121074+00:00 stdout F [INFO] 10.217.0.87:43215 - 45310 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000675009s 2025-08-13T20:11:09.427988023+00:00 stdout F [INFO] 10.217.0.87:33535 - 58269 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000669729s 2025-08-13T20:11:09.428161888+00:00 stdout F [INFO] 10.217.0.87:40410 - 40341 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000664079s 2025-08-13T20:11:09.448156561+00:00 stdout F [INFO] 10.217.0.87:54954 - 42761 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000628158s 2025-08-13T20:11:09.448156561+00:00 stdout F [INFO] 10.217.0.87:53421 - 38162 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00138773s 2025-08-13T20:11:09.449071847+00:00 stdout F [INFO] 10.217.0.87:55224 - 28314 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000527796s 2025-08-13T20:11:09.449071847+00:00 stdout F [INFO] 10.217.0.87:49659 - 6648 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000764562s 2025-08-13T20:11:09.482376662+00:00 stdout F [INFO] 10.217.0.87:48689 - 21850 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000638679s 2025-08-13T20:11:09.482376662+00:00 stdout F [INFO] 10.217.0.87:57101 - 38149 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000632798s 2025-08-13T20:11:09.503586530+00:00 stdout F [INFO] 10.217.0.87:52881 - 6877 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000808803s 2025-08-13T20:11:09.503586530+00:00 stdout F [INFO] 10.217.0.87:34201 - 21870 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000537925s 2025-08-13T20:11:09.505160085+00:00 stdout F [INFO] 10.217.0.87:35048 - 16070 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000575357s 2025-08-13T20:11:09.505404882+00:00 stdout F [INFO] 10.217.0.87:41863 - 24054 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00105403s 2025-08-13T20:11:09.537342868+00:00 stdout F [INFO] 10.217.0.87:60456 - 44692 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000797203s 2025-08-13T20:11:09.537342868+00:00 stdout F [INFO] 10.217.0.87:38893 - 64497 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001186754s 2025-08-13T20:11:09.565609508+00:00 stdout F [INFO] 10.217.0.87:49518 - 2987 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000829094s 2025-08-13T20:11:09.566091892+00:00 stdout F [INFO] 10.217.0.87:33729 - 40144 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000658899s 2025-08-13T20:11:09.567072360+00:00 stdout F [INFO] 10.217.0.87:53459 - 9026 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000851374s 2025-08-13T20:11:09.567072360+00:00 stdout F [INFO] 10.217.0.87:51738 - 51961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000828444s 2025-08-13T20:11:09.619399371+00:00 stdout F [INFO] 10.217.0.87:46668 - 55956 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985679s 2025-08-13T20:11:09.619708900+00:00 stdout F [INFO] 10.217.0.87:42988 - 40163 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000664769s 2025-08-13T20:11:09.621897792+00:00 stdout F [INFO] 10.217.0.87:48469 - 28257 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000531815s 2025-08-13T20:11:09.622085258+00:00 stdout F [INFO] 10.217.0.87:56697 - 386 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006824s 2025-08-13T20:11:09.644941383+00:00 stdout F [INFO] 10.217.0.87:50544 - 34100 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001134353s 2025-08-13T20:11:09.645365005+00:00 stdout F [INFO] 10.217.0.87:53854 - 56413 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140471s 2025-08-13T20:11:09.676304942+00:00 stdout F [INFO] 10.217.0.87:56020 - 5860 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000740331s 2025-08-13T20:11:09.676304942+00:00 stdout F [INFO] 10.217.0.87:39917 - 33779 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000581146s 2025-08-13T20:11:09.678225417+00:00 stdout F [INFO] 10.217.0.87:44903 - 15129 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000491185s 2025-08-13T20:11:09.678534746+00:00 stdout F [INFO] 10.217.0.87:47391 - 53784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000419112s 2025-08-13T20:11:09.699531377+00:00 stdout F [INFO] 10.217.0.87:40503 - 29441 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000648348s 2025-08-13T20:11:09.699735143+00:00 stdout F [INFO] 10.217.0.87:56421 - 20914 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000838404s 2025-08-13T20:11:09.732739419+00:00 stdout F [INFO] 10.217.0.87:60895 - 25087 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000904066s 2025-08-13T20:11:09.732879243+00:00 stdout F [INFO] 10.217.0.87:45233 - 38453 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001086661s 2025-08-13T20:11:09.754002619+00:00 stdout F [INFO] 10.217.0.87:47687 - 30565 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753871s 2025-08-13T20:11:09.754062771+00:00 stdout F [INFO] 10.217.0.87:56598 - 25031 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000797913s 2025-08-13T20:11:09.791738541+00:00 stdout F [INFO] 10.217.0.87:35890 - 8037 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070223s 2025-08-13T20:11:09.791738541+00:00 stdout F [INFO] 10.217.0.87:54483 - 44678 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000723801s 2025-08-13T20:11:09.801146801+00:00 stdout F [INFO] 10.217.0.87:50514 - 38291 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00313687s 2025-08-13T20:11:09.803285852+00:00 stdout F [INFO] 10.217.0.87:39839 - 32977 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001900175s 2025-08-13T20:11:09.810966062+00:00 stdout F [INFO] 10.217.0.87:48258 - 20246 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002353797s 2025-08-13T20:11:09.811439386+00:00 stdout F [INFO] 10.217.0.87:60135 - 51291 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002544313s 2025-08-13T20:11:09.815948235+00:00 stdout F [INFO] 10.217.0.87:40563 - 6633 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000910756s 2025-08-13T20:11:09.815948235+00:00 stdout F [INFO] 10.217.0.87:37855 - 63755 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000946797s 2025-08-13T20:11:09.855902510+00:00 stdout F [INFO] 10.217.0.87:55357 - 52910 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000672089s 2025-08-13T20:11:09.856062125+00:00 stdout F [INFO] 10.217.0.87:56983 - 37252 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012639s 2025-08-13T20:11:09.871047725+00:00 stdout F [INFO] 10.217.0.87:35549 - 20648 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976718s 2025-08-13T20:11:09.871047725+00:00 stdout F [INFO] 10.217.0.87:44595 - 1932 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002394738s 2025-08-13T20:11:09.871549349+00:00 stdout F [INFO] 10.217.0.87:40336 - 25626 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001868553s 2025-08-13T20:11:09.871549349+00:00 stdout F [INFO] 10.217.0.87:46912 - 10871 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003059918s 2025-08-13T20:11:09.910681771+00:00 stdout F [INFO] 10.217.0.87:45858 - 55022 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001129743s 2025-08-13T20:11:09.910974900+00:00 stdout F [INFO] 10.217.0.87:55087 - 63530 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001228406s 2025-08-13T20:11:09.914111879+00:00 stdout F [INFO] 10.217.0.87:38666 - 64429 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000674719s 2025-08-13T20:11:09.914236783+00:00 stdout F [INFO] 10.217.0.87:47423 - 54243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000635178s 2025-08-13T20:11:09.924884558+00:00 stdout F [INFO] 10.217.0.87:55691 - 6387 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000615748s 2025-08-13T20:11:09.925091264+00:00 stdout F [INFO] 10.217.0.87:36726 - 7563 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000820113s 2025-08-13T20:11:09.933855386+00:00 stdout F [INFO] 10.217.0.87:41358 - 31832 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000607047s 2025-08-13T20:11:09.934167695+00:00 stdout F [INFO] 10.217.0.87:54170 - 36712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000620148s 2025-08-13T20:11:09.967295634+00:00 stdout F [INFO] 10.217.0.87:58101 - 34961 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000751712s 2025-08-13T20:11:09.967366646+00:00 stdout F [INFO] 10.217.0.87:49474 - 38655 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000732611s 2025-08-13T20:11:09.969581790+00:00 stdout F [INFO] 10.217.0.87:47263 - 41007 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000959338s 2025-08-13T20:11:09.969630631+00:00 stdout F [INFO] 10.217.0.87:58934 - 8505 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00107031s 2025-08-13T20:11:09.979130774+00:00 stdout F [INFO] 10.217.0.87:58145 - 58179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754711s 2025-08-13T20:11:09.979260177+00:00 stdout F [INFO] 10.217.0.87:38045 - 32921 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793963s 2025-08-13T20:11:09.988431340+00:00 stdout F [INFO] 10.217.0.87:58901 - 15757 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000785472s 2025-08-13T20:11:09.988598135+00:00 stdout F [INFO] 10.217.0.87:40838 - 44964 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001242825s 2025-08-13T20:11:10.020712196+00:00 stdout F [INFO] 10.217.0.87:47276 - 32294 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000511265s 2025-08-13T20:11:10.020857110+00:00 stdout F [INFO] 10.217.0.87:53692 - 50495 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000708841s 2025-08-13T20:11:10.031652339+00:00 stdout F [INFO] 10.217.0.87:35587 - 6934 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771622s 2025-08-13T20:11:10.031652339+00:00 stdout F [INFO] 10.217.0.87:49877 - 38155 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000954388s 2025-08-13T20:11:10.040906155+00:00 stdout F [INFO] 10.217.0.87:38169 - 44285 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000416262s 2025-08-13T20:11:10.041016538+00:00 stdout F [INFO] 10.217.0.87:48760 - 59142 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070476s 2025-08-13T20:11:10.075523907+00:00 stdout F [INFO] 10.217.0.87:40850 - 53105 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000811223s 2025-08-13T20:11:10.075755104+00:00 stdout F [INFO] 10.217.0.87:38297 - 14321 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000948948s 2025-08-13T20:11:10.084645699+00:00 stdout F [INFO] 10.217.0.87:34519 - 42542 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000430953s 2025-08-13T20:11:10.084704071+00:00 stdout F [INFO] 10.217.0.87:43438 - 12631 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00067879s 2025-08-13T20:11:10.093303337+00:00 stdout F [INFO] 10.217.0.87:56908 - 17450 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000708811s 2025-08-13T20:11:10.093442521+00:00 stdout F [INFO] 10.217.0.87:51618 - 15318 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000817544s 2025-08-13T20:11:10.101313267+00:00 stdout F [INFO] 10.217.0.87:37133 - 19887 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000712141s 2025-08-13T20:11:10.101512533+00:00 stdout F [INFO] 10.217.0.87:50412 - 11192 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000948597s 2025-08-13T20:11:10.129508615+00:00 stdout F [INFO] 10.217.0.87:43831 - 20450 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000750722s 2025-08-13T20:11:10.129556986+00:00 stdout F [INFO] 10.217.0.87:52358 - 12808 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000957057s 2025-08-13T20:11:10.136328551+00:00 stdout F [INFO] 10.217.0.87:52783 - 29135 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000594527s 2025-08-13T20:11:10.136569338+00:00 stdout F [INFO] 10.217.0.87:34678 - 60145 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000918516s 2025-08-13T20:11:10.152319909+00:00 stdout F [INFO] 10.217.0.87:37166 - 20284 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071952s 2025-08-13T20:11:10.152695840+00:00 stdout F [INFO] 10.217.0.87:58908 - 31337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001094711s 2025-08-13T20:11:10.186369485+00:00 stdout F [INFO] 10.217.0.87:40691 - 64 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00106306s 2025-08-13T20:11:10.186424177+00:00 stdout F [INFO] 10.217.0.87:59312 - 58681 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000909676s 2025-08-13T20:11:10.212445733+00:00 stdout F [INFO] 10.217.0.87:36036 - 47426 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001638717s 2025-08-13T20:11:10.212445733+00:00 stdout F [INFO] 10.217.0.87:47264 - 35254 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001092741s 2025-08-13T20:11:10.214609215+00:00 stdout F [INFO] 10.217.0.87:60530 - 26229 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000657939s 2025-08-13T20:11:10.214609215+00:00 stdout F [INFO] 10.217.0.87:41544 - 60427 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001028729s 2025-08-13T20:11:10.240992232+00:00 stdout F [INFO] 10.217.0.87:54263 - 21128 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001061571s 2025-08-13T20:11:10.241217088+00:00 stdout F [INFO] 10.217.0.87:47748 - 54361 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001358849s 2025-08-13T20:11:10.282300966+00:00 stdout F [INFO] 10.217.0.87:49674 - 704 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003252953s 2025-08-13T20:11:10.282300966+00:00 stdout F [INFO] 10.217.0.87:44821 - 63231 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00314728s 2025-08-13T20:11:10.291850080+00:00 stdout F [INFO] 10.217.0.87:35498 - 30057 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000780553s 2025-08-13T20:11:10.292084226+00:00 stdout F [INFO] 10.217.0.87:34433 - 61062 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761342s 2025-08-13T20:11:10.308115376+00:00 stdout F [INFO] 10.217.0.87:45267 - 53227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000601597s 2025-08-13T20:11:10.308174398+00:00 stdout F [INFO] 10.217.0.87:53644 - 2574 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000985089s 2025-08-13T20:11:10.345653612+00:00 stdout F [INFO] 10.217.0.87:54134 - 43179 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740462s 2025-08-13T20:11:10.345706304+00:00 stdout F [INFO] 10.217.0.87:51934 - 41503 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068563s 2025-08-13T20:11:10.347835605+00:00 stdout F [INFO] 10.217.0.87:48788 - 65440 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000528275s 2025-08-13T20:11:10.348054241+00:00 stdout F [INFO] 10.217.0.87:46192 - 13272 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000802233s 2025-08-13T20:11:10.365227793+00:00 stdout F [INFO] 10.217.0.87:56813 - 61336 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000841114s 2025-08-13T20:11:10.365558593+00:00 stdout F [INFO] 10.217.0.87:56406 - 44405 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001118022s 2025-08-13T20:11:10.404568631+00:00 stdout F [INFO] 10.217.0.87:39247 - 53082 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000962568s 2025-08-13T20:11:10.404568631+00:00 stdout F [INFO] 10.217.0.87:39015 - 64029 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001217245s 2025-08-13T20:11:10.407330981+00:00 stdout F [INFO] 10.217.0.87:48101 - 3473 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104135s 2025-08-13T20:11:10.407428593+00:00 stdout F [INFO] 10.217.0.87:54432 - 43945 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000922756s 2025-08-13T20:11:10.427216141+00:00 stdout F [INFO] 10.217.0.87:58629 - 23429 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069069s 2025-08-13T20:11:10.427321594+00:00 stdout F [INFO] 10.217.0.87:53692 - 13549 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000727961s 2025-08-13T20:11:10.442966342+00:00 stdout F [INFO] 10.217.0.87:35003 - 8346 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654699s 2025-08-13T20:11:10.443015794+00:00 stdout F [INFO] 10.217.0.87:57738 - 6845 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000876895s 2025-08-13T20:11:10.462218544+00:00 stdout F [INFO] 10.217.0.87:38434 - 47432 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001120352s 2025-08-13T20:11:10.462218544+00:00 stdout F [INFO] 10.217.0.87:46544 - 18300 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001356399s 2025-08-13T20:11:10.483870565+00:00 stdout F [INFO] 10.217.0.87:56416 - 36850 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761392s 2025-08-13T20:11:10.483943937+00:00 stdout F [INFO] 10.217.0.87:35365 - 54110 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001481532s 2025-08-13T20:11:10.497518346+00:00 stdout F [INFO] 10.217.0.87:36394 - 20484 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000662119s 2025-08-13T20:11:10.497561348+00:00 stdout F [INFO] 10.217.0.87:47551 - 7794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000662119s 2025-08-13T20:11:10.524954413+00:00 stdout F [INFO] 10.217.0.87:58045 - 1135 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000569947s 2025-08-13T20:11:10.525179859+00:00 stdout F [INFO] 10.217.0.87:45550 - 18182 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102599s 2025-08-13T20:11:10.535454884+00:00 stdout F [INFO] 10.217.0.87:38958 - 24486 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001087191s 2025-08-13T20:11:10.535617379+00:00 stdout F [INFO] 10.217.0.87:35462 - 51414 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001264926s 2025-08-13T20:11:10.555337624+00:00 stdout F [INFO] 10.217.0.87:37227 - 51475 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000821594s 2025-08-13T20:11:10.555405996+00:00 stdout F [INFO] 10.217.0.87:35920 - 64361 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069905s 2025-08-13T20:11:10.582638297+00:00 stdout F [INFO] 10.217.0.87:46664 - 5406 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808283s 2025-08-13T20:11:10.582867113+00:00 stdout F [INFO] 10.217.0.87:37306 - 48071 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00106375s 2025-08-13T20:11:10.594028313+00:00 stdout F [INFO] 10.217.0.87:59666 - 45350 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001113462s 2025-08-13T20:11:10.594028313+00:00 stdout F [INFO] 10.217.0.87:53509 - 55627 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001300197s 2025-08-13T20:11:10.610093124+00:00 stdout F [INFO] 10.217.0.87:46572 - 58676 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001320778s 2025-08-13T20:11:10.610148496+00:00 stdout F [INFO] 10.217.0.87:40323 - 64318 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139729s 2025-08-13T20:11:10.635853543+00:00 stdout F [INFO] 10.217.0.87:53169 - 9990 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069213s 2025-08-13T20:11:10.636072669+00:00 stdout F [INFO] 10.217.0.87:50177 - 43293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000754261s 2025-08-13T20:11:10.668321404+00:00 stdout F [INFO] 10.217.0.87:53293 - 62044 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001576965s 2025-08-13T20:11:10.668321404+00:00 stdout F [INFO] 10.217.0.87:57662 - 55233 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001761411s 2025-08-13T20:11:10.697836330+00:00 stdout F [INFO] 10.217.0.87:41254 - 48236 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001464082s 2025-08-13T20:11:10.697836330+00:00 stdout F [INFO] 10.217.0.87:37182 - 29642 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001693129s 2025-08-13T20:11:10.725196374+00:00 stdout F [INFO] 10.217.0.87:34029 - 25028 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731301s 2025-08-13T20:11:10.725431641+00:00 stdout F [INFO] 10.217.0.87:51302 - 1519 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817833s 2025-08-13T20:11:10.739264747+00:00 stdout F [INFO] 10.217.0.87:46810 - 32281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000892646s 2025-08-13T20:11:10.739420212+00:00 stdout F [INFO] 10.217.0.87:43458 - 39469 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000972398s 2025-08-13T20:11:10.751142388+00:00 stdout F [INFO] 10.217.0.87:36895 - 9809 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000612998s 2025-08-13T20:11:10.751430606+00:00 stdout F [INFO] 10.217.0.87:39596 - 65208 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000677019s 2025-08-13T20:11:10.764475080+00:00 stdout F [INFO] 10.217.0.87:47548 - 14641 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000989968s 2025-08-13T20:11:10.764557333+00:00 stdout F [INFO] 10.217.0.87:43050 - 3786 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000903196s 2025-08-13T20:11:10.779200152+00:00 stdout F [INFO] 10.217.0.87:37918 - 49565 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001230145s 2025-08-13T20:11:10.779329346+00:00 stdout F [INFO] 10.217.0.87:41188 - 48358 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001293657s 2025-08-13T20:11:10.791840205+00:00 stdout F [INFO] 10.217.0.87:56862 - 47485 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001256216s 2025-08-13T20:11:10.791965058+00:00 stdout F [INFO] 10.217.0.87:56497 - 6626 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001321368s 2025-08-13T20:11:10.806885366+00:00 stdout F [INFO] 10.217.0.87:42559 - 25132 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740422s 2025-08-13T20:11:10.807054091+00:00 stdout F [INFO] 10.217.0.87:38089 - 18651 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000656199s 2025-08-13T20:11:10.821087963+00:00 stdout F [INFO] 10.217.0.87:53932 - 43404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000529605s 2025-08-13T20:11:10.821087963+00:00 stdout F [INFO] 10.217.0.87:53669 - 32799 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000495664s 2025-08-13T20:11:10.838031589+00:00 stdout F [INFO] 10.217.0.87:48051 - 47009 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069481s 2025-08-13T20:11:10.838031589+00:00 stdout F [INFO] 10.217.0.87:49633 - 2906 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000731321s 2025-08-13T20:11:10.847942923+00:00 stdout F [INFO] 10.217.0.87:40866 - 54204 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000492464s 2025-08-13T20:11:10.847942923+00:00 stdout F [INFO] 10.217.0.87:55638 - 5418 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507535s 2025-08-13T20:11:10.859496155+00:00 stdout F [INFO] 10.217.0.87:43040 - 23737 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000779872s 2025-08-13T20:11:10.859560776+00:00 stdout F [INFO] 10.217.0.87:32768 - 51691 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000733041s 2025-08-13T20:11:10.897474393+00:00 stdout F [INFO] 10.217.0.87:44023 - 5516 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068915s 2025-08-13T20:11:10.898269516+00:00 stdout F [INFO] 10.217.0.87:43332 - 54899 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001154573s 2025-08-13T20:11:10.913750840+00:00 stdout F [INFO] 10.217.0.87:44629 - 53763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000848825s 2025-08-13T20:11:10.913846803+00:00 stdout F [INFO] 10.217.0.87:47543 - 42445 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000658709s 2025-08-13T20:11:10.949764543+00:00 stdout F [INFO] 10.217.0.87:60784 - 25294 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001517523s 2025-08-13T20:11:10.949888526+00:00 stdout F [INFO] 10.217.0.87:43931 - 17207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175024s 2025-08-13T20:11:10.968321445+00:00 stdout F [INFO] 10.217.0.87:39431 - 5861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000533925s 2025-08-13T20:11:10.969151149+00:00 stdout F [INFO] 10.217.0.87:36790 - 35647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001096441s 2025-08-13T20:11:10.989866742+00:00 stdout F [INFO] 10.217.0.87:40960 - 23472 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000554456s 2025-08-13T20:11:10.990262134+00:00 stdout F [INFO] 10.217.0.87:51522 - 65437 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000818503s 2025-08-13T20:11:11.006375476+00:00 stdout F [INFO] 10.217.0.87:52401 - 28255 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934187s 2025-08-13T20:11:11.006555201+00:00 stdout F [INFO] 10.217.0.87:44627 - 52350 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000977718s 2025-08-13T20:11:11.006699505+00:00 stdout F [INFO] 10.217.0.87:39108 - 49176 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870595s 2025-08-13T20:11:11.006741886+00:00 stdout F [INFO] 10.217.0.87:34385 - 6198 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001256896s 2025-08-13T20:11:11.032143735+00:00 stdout F [INFO] 10.217.0.87:49275 - 4543 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000543846s 2025-08-13T20:11:11.032388102+00:00 stdout F [INFO] 10.217.0.87:53875 - 52880 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000647548s 2025-08-13T20:11:11.042664306+00:00 stdout F [INFO] 10.217.0.87:49993 - 57969 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000487754s 2025-08-13T20:11:11.042990406+00:00 stdout F [INFO] 10.217.0.87:42332 - 18348 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072954s 2025-08-13T20:11:11.060284971+00:00 stdout F [INFO] 10.217.0.87:46737 - 15987 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000730151s 2025-08-13T20:11:11.061205788+00:00 stdout F [INFO] 10.217.0.87:37223 - 10334 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001298888s 2025-08-13T20:11:11.061591239+00:00 stdout F [INFO] 10.217.0.87:46133 - 23565 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000556496s 2025-08-13T20:11:11.061872807+00:00 stdout F [INFO] 10.217.0.87:35095 - 42296 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000506325s 2025-08-13T20:11:11.086639487+00:00 stdout F [INFO] 10.217.0.87:34121 - 241 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619908s 2025-08-13T20:11:11.086905245+00:00 stdout F [INFO] 10.217.0.87:53216 - 15697 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000627458s 2025-08-13T20:11:11.099091904+00:00 stdout F [INFO] 10.217.0.87:52035 - 14176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000739021s 2025-08-13T20:11:11.099598979+00:00 stdout F [INFO] 10.217.0.87:43061 - 9950 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000636628s 2025-08-13T20:11:11.115611158+00:00 stdout F [INFO] 10.217.0.87:55396 - 28029 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105816s 2025-08-13T20:11:11.115710071+00:00 stdout F [INFO] 10.217.0.87:47389 - 50072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001448772s 2025-08-13T20:11:11.116040450+00:00 stdout F [INFO] 10.217.0.87:38105 - 49459 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001355359s 2025-08-13T20:11:11.116375710+00:00 stdout F [INFO] 10.217.0.87:60529 - 58340 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001838553s 2025-08-13T20:11:11.139878123+00:00 stdout F [INFO] 10.217.0.87:34386 - 39284 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587697s 2025-08-13T20:11:11.139997127+00:00 stdout F [INFO] 10.217.0.87:33415 - 62766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000665539s 2025-08-13T20:11:11.153224846+00:00 stdout F [INFO] 10.217.0.87:58006 - 26905 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000784372s 2025-08-13T20:11:11.153637368+00:00 stdout F [INFO] 10.217.0.87:56675 - 53466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001014779s 2025-08-13T20:11:11.173482927+00:00 stdout F [INFO] 10.217.0.87:37374 - 59512 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002216123s 2025-08-13T20:11:11.173804926+00:00 stdout F [INFO] 10.217.0.87:43308 - 15355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002719148s 2025-08-13T20:11:11.193315066+00:00 stdout F [INFO] 10.217.0.87:39323 - 45299 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000551476s 2025-08-13T20:11:11.193843811+00:00 stdout F [INFO] 10.217.0.87:60659 - 56528 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000749141s 2025-08-13T20:11:11.203904119+00:00 stdout F [INFO] 10.217.0.87:35418 - 56911 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771722s 2025-08-13T20:11:11.204142896+00:00 stdout F [INFO] 10.217.0.87:39299 - 56238 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817754s 2025-08-13T20:11:11.206533775+00:00 stdout F [INFO] 10.217.0.87:39203 - 4198 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000597697s 2025-08-13T20:11:11.206586266+00:00 stdout F [INFO] 10.217.0.87:33058 - 32028 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000648949s 2025-08-13T20:11:11.221888475+00:00 stdout F [INFO] 10.217.0.87:57419 - 33230 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000789802s 2025-08-13T20:11:11.222177353+00:00 stdout F [INFO] 10.217.0.87:45855 - 55824 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001099731s 2025-08-13T20:11:11.252482972+00:00 stdout F [INFO] 10.217.0.87:49168 - 33322 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000678s 2025-08-13T20:11:11.253498791+00:00 stdout F [INFO] 10.217.0.87:41421 - 56012 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001364589s 2025-08-13T20:11:11.258970418+00:00 stdout F [INFO] 10.217.0.87:34340 - 41230 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000517045s 2025-08-13T20:11:11.259189624+00:00 stdout F [INFO] 10.217.0.87:36332 - 57392 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006936s 2025-08-13T20:11:11.277766107+00:00 stdout F [INFO] 10.217.0.87:55318 - 35910 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000542755s 2025-08-13T20:11:11.277852489+00:00 stdout F [INFO] 10.217.0.87:57870 - 21939 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000753292s 2025-08-13T20:11:11.307979793+00:00 stdout F [INFO] 10.217.0.87:51817 - 47893 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649408s 2025-08-13T20:11:11.308105307+00:00 stdout F [INFO] 10.217.0.87:33695 - 38695 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000747191s 2025-08-13T20:11:11.313892163+00:00 stdout F [INFO] 10.217.0.87:35898 - 19074 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068864s 2025-08-13T20:11:11.314104319+00:00 stdout F [INFO] 10.217.0.87:47876 - 34289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000799533s 2025-08-13T20:11:11.367508110+00:00 stdout F [INFO] 10.217.0.87:52614 - 11229 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069437s 2025-08-13T20:11:11.367664444+00:00 stdout F [INFO] 10.217.0.87:59094 - 25413 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768972s 2025-08-13T20:11:11.643660627+00:00 stdout F [INFO] 10.217.0.62:54796 - 40368 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001262316s 2025-08-13T20:11:11.643823292+00:00 stdout F [INFO] 10.217.0.62:57586 - 55468 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001439811s 2025-08-13T20:11:22.864396336+00:00 stdout F [INFO] 10.217.0.8:43207 - 51542 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001437441s 2025-08-13T20:11:22.864396336+00:00 stdout F [INFO] 10.217.0.8:32904 - 36126 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001346429s 2025-08-13T20:11:22.866861087+00:00 stdout F [INFO] 10.217.0.8:58933 - 4423 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001564905s 2025-08-13T20:11:22.867523206+00:00 stdout F [INFO] 10.217.0.8:59732 - 57745 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000567686s 2025-08-13T20:11:35.191236613+00:00 stdout F [INFO] 10.217.0.19:43624 - 31557 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004043076s 2025-08-13T20:11:35.191360136+00:00 stdout F [INFO] 10.217.0.19:42091 - 49079 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004665044s 2025-08-13T20:11:41.626012045+00:00 stdout F [INFO] 10.217.0.62:49209 - 7004 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00136998s 2025-08-13T20:11:41.626231631+00:00 stdout F [INFO] 10.217.0.62:48100 - 28186 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001462432s 2025-08-13T20:11:45.337920649+00:00 stdout F [INFO] 10.217.0.19:55508 - 38516 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000946417s 2025-08-13T20:11:45.337920649+00:00 stdout F [INFO] 10.217.0.19:57786 - 17416 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001212935s 2025-08-13T20:11:53.267739152+00:00 stdout F [INFO] 10.217.0.19:38914 - 46916 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105049s 2025-08-13T20:11:53.268465283+00:00 stdout F [INFO] 10.217.0.19:41853 - 32507 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001277896s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:34223 - 33431 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004252351s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:34960 - 46826 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004715415s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:55176 - 46961 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002248995s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:47156 - 33166 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003679365s 2025-08-13T20:12:02.567650759+00:00 stdout F [INFO] 10.217.0.45:44131 - 57505 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00210487s 2025-08-13T20:12:02.567650759+00:00 stdout F [INFO] 10.217.0.45:47913 - 32986 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00244551s 2025-08-13T20:12:05.182422296+00:00 stdout F [INFO] 10.217.0.19:34869 - 29012 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175998s 2025-08-13T20:12:05.187282006+00:00 stdout F [INFO] 10.217.0.19:47330 - 29258 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003057838s 2025-08-13T20:12:11.627950944+00:00 stdout F [INFO] 10.217.0.62:40275 - 29937 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002399889s 2025-08-13T20:12:11.628227361+00:00 stdout F [INFO] 10.217.0.62:39484 - 32290 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003273974s 2025-08-13T20:12:22.871627992+00:00 stdout F [INFO] 10.217.0.8:39575 - 52483 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004259432s 2025-08-13T20:12:22.871905410+00:00 stdout F [INFO] 10.217.0.8:53081 - 41816 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00452466s 2025-08-13T20:12:22.876477911+00:00 stdout F [INFO] 10.217.0.8:35821 - 12538 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002057629s 2025-08-13T20:12:22.877191531+00:00 stdout F [INFO] 10.217.0.8:48890 - 2287 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069125s 2025-08-13T20:12:35.190887254+00:00 stdout F [INFO] 10.217.0.19:60434 - 29890 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002610955s 2025-08-13T20:12:35.191185352+00:00 stdout F [INFO] 10.217.0.19:47054 - 51416 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002316357s 2025-08-13T20:12:41.625981740+00:00 stdout F [INFO] 10.217.0.62:40202 - 46466 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001112132s 2025-08-13T20:12:41.625981740+00:00 stdout F [INFO] 10.217.0.62:38107 - 12855 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001321438s 2025-08-13T20:12:54.987282390+00:00 stdout F [INFO] 10.217.0.19:45800 - 50279 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003032327s 2025-08-13T20:12:54.990679337+00:00 stdout F [INFO] 10.217.0.19:53152 - 56491 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003755488s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:35513 - 44991 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001019439s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:45882 - 50490 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001013849s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:37570 - 4790 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001625246s 2025-08-13T20:12:56.376464509+00:00 stdout F [INFO] 10.217.0.64:46358 - 22273 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.008205426s 2025-08-13T20:13:02.615040274+00:00 stdout F [INFO] 10.217.0.45:37088 - 44026 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000938007s 2025-08-13T20:13:02.615040274+00:00 stdout F [INFO] 10.217.0.45:35431 - 27214 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000953788s 2025-08-13T20:13:05.198056111+00:00 stdout F [INFO] 10.217.0.19:35572 - 34597 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000881615s 2025-08-13T20:13:05.198056111+00:00 stdout F [INFO] 10.217.0.19:59582 - 43606 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002272025s 2025-08-13T20:13:11.639921623+00:00 stdout F [INFO] 10.217.0.62:52711 - 32476 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001424541s 2025-08-13T20:13:11.639921623+00:00 stdout F [INFO] 10.217.0.62:57132 - 59874 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001547004s 2025-08-13T20:13:22.865124441+00:00 stdout F [INFO] 10.217.0.8:59533 - 12813 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001793821s 2025-08-13T20:13:22.865124441+00:00 stdout F [INFO] 10.217.0.8:35766 - 13841 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002905793s 2025-08-13T20:13:22.866911342+00:00 stdout F [INFO] 10.217.0.8:35077 - 20431 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000996988s 2025-08-13T20:13:22.867195700+00:00 stdout F [INFO] 10.217.0.8:60321 - 18676 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001366739s 2025-08-13T20:13:35.203554812+00:00 stdout F [INFO] 10.217.0.19:50652 - 1843 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001442701s 2025-08-13T20:13:35.204251312+00:00 stdout F [INFO] 10.217.0.19:44566 - 54018 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004432657s 2025-08-13T20:13:41.632163817+00:00 stdout F [INFO] 10.217.0.62:59452 - 58538 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001657787s 2025-08-13T20:13:41.632163817+00:00 stdout F [INFO] 10.217.0.62:33873 - 61617 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00068974s 2025-08-13T20:13:51.958542921+00:00 stdout F [INFO] 10.217.0.19:34570 - 51195 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001959286s 2025-08-13T20:13:51.958542921+00:00 stdout F [INFO] 10.217.0.19:52745 - 6433 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002972695s 2025-08-13T20:13:56.366582744+00:00 stdout F [INFO] 10.217.0.64:58857 - 31612 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001163063s 2025-08-13T20:13:56.366997556+00:00 stdout F [INFO] 10.217.0.64:38313 - 44558 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001188384s 2025-08-13T20:13:56.367223423+00:00 stdout F [INFO] 10.217.0.64:43580 - 26926 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001452902s 2025-08-13T20:13:56.368567151+00:00 stdout F [INFO] 10.217.0.64:34606 - 26831 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001230565s 2025-08-13T20:14:02.654198796+00:00 stdout F [INFO] 10.217.0.45:34942 - 58963 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001013979s 2025-08-13T20:14:02.654198796+00:00 stdout F [INFO] 10.217.0.45:33630 - 17756 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001152493s 2025-08-13T20:14:04.578540679+00:00 stdout F [INFO] 10.217.0.19:36000 - 321 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000795613s 2025-08-13T20:14:04.578714824+00:00 stdout F [INFO] 10.217.0.19:39517 - 29052 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000842304s 2025-08-13T20:14:05.187584410+00:00 stdout F [INFO] 10.217.0.19:46122 - 58576 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001175993s 2025-08-13T20:14:05.187721404+00:00 stdout F [INFO] 10.217.0.19:57402 - 32181 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001507053s 2025-08-13T20:14:11.629651830+00:00 stdout F [INFO] 10.217.0.62:59159 - 41701 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001221915s 2025-08-13T20:14:11.629651830+00:00 stdout F [INFO] 10.217.0.62:50146 - 57371 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00104821s 2025-08-13T20:14:22.866394636+00:00 stdout F [INFO] 10.217.0.8:37824 - 64992 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002725368s 2025-08-13T20:14:22.866496279+00:00 stdout F [INFO] 10.217.0.8:59572 - 3383 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002805781s 2025-08-13T20:14:22.868010292+00:00 stdout F [INFO] 10.217.0.8:60365 - 35111 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000760842s 2025-08-13T20:14:22.868010292+00:00 stdout F [INFO] 10.217.0.8:32914 - 8157 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001276487s 2025-08-13T20:14:35.203869808+00:00 stdout F [INFO] 10.217.0.19:60879 - 49232 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003147531s 2025-08-13T20:14:35.203869808+00:00 stdout F [INFO] 10.217.0.19:52879 - 60081 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003865761s 2025-08-13T20:14:41.632919724+00:00 stdout F [INFO] 10.217.0.62:59797 - 26522 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001363269s 2025-08-13T20:14:41.632919724+00:00 stdout F [INFO] 10.217.0.62:35790 - 34336 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001472952s 2025-08-13T20:14:56.364108879+00:00 stdout F [INFO] 10.217.0.64:43764 - 10505 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003449399s 2025-08-13T20:14:56.364148850+00:00 stdout F [INFO] 10.217.0.64:56395 - 23031 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005052034s 2025-08-13T20:14:56.364314075+00:00 stdout F [INFO] 10.217.0.64:38623 - 43236 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00524927s 2025-08-13T20:14:56.364473110+00:00 stdout F [INFO] 10.217.0.64:48325 - 42341 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003925912s 2025-08-13T20:15:02.699019916+00:00 stdout F [INFO] 10.217.0.45:34508 - 55080 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001459191s 2025-08-13T20:15:02.699019916+00:00 stdout F [INFO] 10.217.0.45:37620 - 24835 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000971367s 2025-08-13T20:15:05.193938748+00:00 stdout F [INFO] 10.217.0.19:40224 - 64478 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002663827s 2025-08-13T20:15:05.193938748+00:00 stdout F [INFO] 10.217.0.19:56288 - 41557 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004044026s 2025-08-13T20:15:11.637670792+00:00 stdout F [INFO] 10.217.0.62:47926 - 48891 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000847794s 2025-08-13T20:15:11.637670792+00:00 stdout F [INFO] 10.217.0.62:39812 - 38108 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000968387s 2025-08-13T20:15:14.207678284+00:00 stdout F [INFO] 10.217.0.19:50192 - 41833 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000742601s 2025-08-13T20:15:14.207824008+00:00 stdout F [INFO] 10.217.0.19:35203 - 22436 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001242815s 2025-08-13T20:15:22.865495047+00:00 stdout F [INFO] 10.217.0.8:48730 - 3306 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001072211s 2025-08-13T20:15:22.865621930+00:00 stdout F [INFO] 10.217.0.8:57376 - 21441 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000529565s 2025-08-13T20:15:22.866675980+00:00 stdout F [INFO] 10.217.0.8:43108 - 55457 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000485033s 2025-08-13T20:15:22.866750692+00:00 stdout F [INFO] 10.217.0.8:37354 - 59539 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000607437s 2025-08-13T20:15:30.900572435+00:00 stdout F [INFO] 10.217.0.73:36123 - 30169 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001139382s 2025-08-13T20:15:30.900679548+00:00 stdout F [INFO] 10.217.0.73:59681 - 33505 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001239995s 2025-08-13T20:15:35.189505304+00:00 stdout F [INFO] 10.217.0.19:57428 - 57631 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000825914s 2025-08-13T20:15:35.196183465+00:00 stdout F [INFO] 10.217.0.19:53814 - 50916 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001068761s 2025-08-13T20:15:41.629469340+00:00 stdout F [INFO] 10.217.0.62:34583 - 42316 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001442091s 2025-08-13T20:15:41.629735768+00:00 stdout F [INFO] 10.217.0.62:59091 - 44739 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001546074s 2025-08-13T20:15:50.643763762+00:00 stdout F [INFO] 10.217.0.19:39438 - 48635 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001065651s 2025-08-13T20:15:50.644611426+00:00 stdout F [INFO] 10.217.0.19:53652 - 38101 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001177314s 2025-08-13T20:15:56.362018229+00:00 stdout F [INFO] 10.217.0.64:41088 - 873 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001000629s 2025-08-13T20:15:56.362251565+00:00 stdout F [INFO] 10.217.0.64:54845 - 9984 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001465762s 2025-08-13T20:15:56.362723509+00:00 stdout F [INFO] 10.217.0.64:55499 - 33178 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001249626s 2025-08-13T20:15:56.363073269+00:00 stdout F [INFO] 10.217.0.64:47932 - 55426 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002146091s 2025-08-13T20:16:02.739548364+00:00 stdout F [INFO] 10.217.0.45:60222 - 28348 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000946617s 2025-08-13T20:16:02.739548364+00:00 stdout F [INFO] 10.217.0.45:48591 - 5325 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001547304s 2025-08-13T20:16:05.205548545+00:00 stdout F [INFO] 10.217.0.19:53559 - 36203 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001881074s 2025-08-13T20:16:05.209619061+00:00 stdout F [INFO] 10.217.0.19:49043 - 24811 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001894224s 2025-08-13T20:16:11.634174071+00:00 stdout F [INFO] 10.217.0.62:48721 - 37751 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002248584s 2025-08-13T20:16:11.634174071+00:00 stdout F [INFO] 10.217.0.62:38022 - 4312 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002118481s 2025-08-13T20:16:22.867687818+00:00 stdout F [INFO] 10.217.0.8:33638 - 1399 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002938574s 2025-08-13T20:16:22.867687818+00:00 stdout F [INFO] 10.217.0.8:60628 - 5377 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002751959s 2025-08-13T20:16:22.869538900+00:00 stdout F [INFO] 10.217.0.8:50403 - 34129 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001442112s 2025-08-13T20:16:22.869829829+00:00 stdout F [INFO] 10.217.0.8:56570 - 9246 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001772s 2025-08-13T20:16:23.832644995+00:00 stdout F [INFO] 10.217.0.19:37855 - 59033 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001472782s 2025-08-13T20:16:23.833301754+00:00 stdout F [INFO] 10.217.0.19:51228 - 32795 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001648597s 2025-08-13T20:16:35.191683097+00:00 stdout F [INFO] 10.217.0.19:58939 - 48494 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001907314s 2025-08-13T20:16:35.191683097+00:00 stdout F [INFO] 10.217.0.19:58218 - 25332 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003341335s 2025-08-13T20:16:41.630991336+00:00 stdout F [INFO] 10.217.0.62:36001 - 43032 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001942895s 2025-08-13T20:16:41.630991336+00:00 stdout F [INFO] 10.217.0.62:47530 - 49232 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002037278s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:41050 - 32678 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004154149s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:40954 - 48337 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00383684s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:40716 - 7441 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003682005s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:33635 - 35645 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003746728s 2025-08-13T20:17:02.788890842+00:00 stdout F [INFO] 10.217.0.45:34768 - 24306 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00141146s 2025-08-13T20:17:02.788934033+00:00 stdout F [INFO] 10.217.0.45:33409 - 18159 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002433539s 2025-08-13T20:17:05.225377351+00:00 stdout F [INFO] 10.217.0.19:40180 - 29563 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003059687s 2025-08-13T20:17:05.225767672+00:00 stdout F [INFO] 10.217.0.19:36494 - 38063 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003259813s 2025-08-13T20:17:11.648223999+00:00 stdout F [INFO] 10.217.0.62:46663 - 8219 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002281925s 2025-08-13T20:17:11.648223999+00:00 stdout F [INFO] 10.217.0.62:41844 - 54966 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000982588s 2025-08-13T20:17:22.871070539+00:00 stdout F [INFO] 10.217.0.8:32890 - 11124 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002656126s 2025-08-13T20:17:22.871352047+00:00 stdout F [INFO] 10.217.0.8:44256 - 52256 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003753657s 2025-08-13T20:17:22.874458756+00:00 stdout F [INFO] 10.217.0.8:58412 - 22097 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001787121s 2025-08-13T20:17:22.874552369+00:00 stdout F [INFO] 10.217.0.8:35431 - 14077 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001840333s 2025-08-13T20:17:34.179906137+00:00 stdout F [INFO] 10.217.0.19:44118 - 63889 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004259092s 2025-08-13T20:17:34.179906137+00:00 stdout F [INFO] 10.217.0.19:37748 - 10294 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005022833s 2025-08-13T20:17:35.190115476+00:00 stdout F [INFO] 10.217.0.19:40477 - 2326 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000728961s 2025-08-13T20:17:35.190115476+00:00 stdout F [INFO] 10.217.0.19:34273 - 12052 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000755701s 2025-08-13T20:17:41.668105549+00:00 stdout F [INFO] 10.217.0.62:46066 - 43508 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001331998s 2025-08-13T20:17:41.668105549+00:00 stdout F [INFO] 10.217.0.62:51181 - 37419 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00176431s 2025-08-13T20:17:49.346102970+00:00 stdout F [INFO] 10.217.0.19:36761 - 23898 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001356448s 2025-08-13T20:17:49.346102970+00:00 stdout F [INFO] 10.217.0.19:48274 - 65185 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001533424s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:38161 - 13410 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002194382s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:56655 - 1458 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003056427s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:33396 - 10926 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000674519s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:57160 - 47287 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000974197s 2025-08-13T20:18:02.989608509+00:00 stdout F [INFO] 10.217.0.45:45054 - 18970 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000822773s 2025-08-13T20:18:02.989696421+00:00 stdout F [INFO] 10.217.0.45:33510 - 45927 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001276436s 2025-08-13T20:18:05.199108315+00:00 stdout F [INFO] 10.217.0.19:54250 - 7398 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001587865s 2025-08-13T20:18:05.200078023+00:00 stdout F [INFO] 10.217.0.19:41213 - 54456 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000583547s 2025-08-13T20:18:11.579762720+00:00 stdout F [INFO] 10.217.0.62:41421 - 22525 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001183184s 2025-08-13T20:18:11.580219803+00:00 stdout F [INFO] 10.217.0.62:36929 - 36123 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000424312s 2025-08-13T20:18:11.649163252+00:00 stdout F [INFO] 10.217.0.62:42461 - 29602 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000733311s 2025-08-13T20:18:11.649199703+00:00 stdout F [INFO] 10.217.0.62:56212 - 10407 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000868245s 2025-08-13T20:18:22.870151721+00:00 stdout F [INFO] 10.217.0.8:49188 - 57176 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002603574s 2025-08-13T20:18:22.870151721+00:00 stdout F [INFO] 10.217.0.8:36995 - 42927 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002829531s 2025-08-13T20:18:26.402111553+00:00 stdout F [INFO] 10.217.0.19:58040 - 22048 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001067321s 2025-08-13T20:18:26.404015028+00:00 stdout F [INFO] 10.217.0.19:40164 - 61985 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002737419s 2025-08-13T20:18:33.587042474+00:00 stdout F [INFO] 10.217.0.82:53079 - 31680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001442992s 2025-08-13T20:18:33.587042474+00:00 stdout F [INFO] 10.217.0.82:56870 - 30507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001379969s 2025-08-13T20:18:34.092944759+00:00 stdout F [INFO] 10.217.0.82:35359 - 18217 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.003619673s 2025-08-13T20:18:34.094876035+00:00 stdout F [INFO] 10.217.0.82:46518 - 34600 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001328408s 2025-08-13T20:18:35.226993336+00:00 stdout F [INFO] 10.217.0.19:33501 - 47608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001215595s 2025-08-13T20:18:35.228134039+00:00 stdout F [INFO] 10.217.0.19:60161 - 50650 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002118781s 2025-08-13T20:18:41.636032550+00:00 stdout F [INFO] 10.217.0.62:38503 - 27459 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0010409s 2025-08-13T20:18:41.636032550+00:00 stdout F [INFO] 10.217.0.62:33295 - 42276 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001495803s 2025-08-13T20:18:43.085667028+00:00 stdout F [INFO] 10.217.0.19:60883 - 52790 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001888864s 2025-08-13T20:18:43.086028788+00:00 stdout F [INFO] 10.217.0.19:36040 - 17832 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000813713s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:41379 - 8858 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002860291s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:54022 - 180 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004060546s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:47538 - 9813 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004242701s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:45471 - 9488 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004319933s 2025-08-13T20:19:04.585379040+00:00 stdout F [INFO] 10.217.0.45:33914 - 2665 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002576953s 2025-08-13T20:19:04.585379040+00:00 stdout F [INFO] 10.217.0.45:39467 - 47346 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002866012s 2025-08-13T20:19:05.231523052+00:00 stdout F [INFO] 10.217.0.19:40211 - 52386 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001249576s 2025-08-13T20:19:05.231523052+00:00 stdout F [INFO] 10.217.0.19:33331 - 15313 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000792802s 2025-08-13T20:19:11.643530019+00:00 stdout F [INFO] 10.217.0.62:33123 - 49781 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000986178s 2025-08-13T20:19:11.643530019+00:00 stdout F [INFO] 10.217.0.62:49419 - 12181 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000731901s 2025-08-13T20:19:22.872287200+00:00 stdout F [INFO] 10.217.0.8:38758 - 21581 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002758359s 2025-08-13T20:19:22.872287200+00:00 stdout F [INFO] 10.217.0.8:46333 - 9209 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003077518s 2025-08-13T20:19:33.482702098+00:00 stdout F [INFO] 10.217.0.62:60479 - 42267 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002999965s 2025-08-13T20:19:33.482702098+00:00 stdout F [INFO] 10.217.0.62:60878 - 6876 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003089438s 2025-08-13T20:19:33.502325339+00:00 stdout F [INFO] 10.217.0.62:39222 - 4986 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001012609s 2025-08-13T20:19:33.502364490+00:00 stdout F [INFO] 10.217.0.62:60567 - 47569 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001098921s 2025-08-13T20:19:35.213953105+00:00 stdout F [INFO] 10.217.0.19:46242 - 54493 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001431731s 2025-08-13T20:19:35.213953105+00:00 stdout F [INFO] 10.217.0.19:36706 - 45113 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001586256s 2025-08-13T20:19:38.306546218+00:00 stdout F [INFO] 10.217.0.62:38211 - 9131 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000923397s 2025-08-13T20:19:38.306630511+00:00 stdout F [INFO] 10.217.0.62:46397 - 22873 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00072067s 2025-08-13T20:19:41.636439144+00:00 stdout F [INFO] 10.217.0.62:47865 - 15773 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000987888s 2025-08-13T20:19:41.636439144+00:00 stdout F [INFO] 10.217.0.62:50558 - 51504 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0010408s 2025-08-13T20:19:42.026544983+00:00 stdout F [INFO] 10.217.0.19:36856 - 18795 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000823494s 2025-08-13T20:19:42.026938724+00:00 stdout F [INFO] 10.217.0.19:34625 - 9057 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001165133s 2025-08-13T20:19:42.164504595+00:00 stdout F [INFO] 10.217.0.19:45942 - 61324 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000872075s 2025-08-13T20:19:42.164671840+00:00 stdout F [INFO] 10.217.0.19:41972 - 52459 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106331s 2025-08-13T20:19:42.532376799+00:00 stdout F [INFO] 10.217.0.62:46454 - 63259 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000792552s 2025-08-13T20:19:42.535296553+00:00 stdout F [INFO] 10.217.0.62:47598 - 51230 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004496309s 2025-08-13T20:19:45.401583608+00:00 stdout F [INFO] 10.217.0.62:56666 - 9412 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001494062s 2025-08-13T20:19:45.401583608+00:00 stdout F [INFO] 10.217.0.62:38089 - 57843 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001731939s 2025-08-13T20:19:48.020856666+00:00 stdout F [INFO] 10.217.0.19:49297 - 40636 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000908326s 2025-08-13T20:19:48.021008500+00:00 stdout F [INFO] 10.217.0.19:50217 - 33124 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000479443s 2025-08-13T20:19:52.707583818+00:00 stdout F [INFO] 10.217.0.19:41970 - 64782 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206844s 2025-08-13T20:19:52.707583818+00:00 stdout F [INFO] 10.217.0.19:38499 - 56421 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001794481s 2025-08-13T20:19:56.368549176+00:00 stdout F [INFO] 10.217.0.64:45663 - 24171 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001554525s 2025-08-13T20:19:56.368829954+00:00 stdout F [INFO] 10.217.0.64:56459 - 43664 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00173986s 2025-08-13T20:19:56.369091442+00:00 stdout F [INFO] 10.217.0.64:46431 - 38097 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002498151s 2025-08-13T20:19:56.369270407+00:00 stdout F [INFO] 10.217.0.64:46771 - 2172 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002322767s 2025-08-13T20:20:01.071900077+00:00 stdout F [INFO] 10.217.0.19:36348 - 12264 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001161613s 2025-08-13T20:20:01.071900077+00:00 stdout F [INFO] 10.217.0.19:58531 - 49546 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880615s 2025-08-13T20:20:01.094909815+00:00 stdout F [INFO] 10.217.0.19:45307 - 27566 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001574105s 2025-08-13T20:20:01.094909815+00:00 stdout F [INFO] 10.217.0.19:46628 - 46836 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001631457s 2025-08-13T20:20:04.641614348+00:00 stdout F [INFO] 10.217.0.45:35084 - 63330 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000804223s 2025-08-13T20:20:04.641962067+00:00 stdout F [INFO] 10.217.0.45:45080 - 40107 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001006019s 2025-08-13T20:20:05.195151847+00:00 stdout F [INFO] 10.217.0.19:56212 - 48003 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000984178s 2025-08-13T20:20:05.196163296+00:00 stdout F [INFO] 10.217.0.19:39285 - 14780 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002337637s 2025-08-13T20:20:08.340700585+00:00 stdout F [INFO] 10.217.0.19:53833 - 40272 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001970646s 2025-08-13T20:20:08.340700585+00:00 stdout F [INFO] 10.217.0.19:41946 - 24164 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002420129s 2025-08-13T20:20:08.363431225+00:00 stdout F [INFO] 10.217.0.19:46307 - 5452 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000778262s 2025-08-13T20:20:08.363431225+00:00 stdout F [INFO] 10.217.0.19:35245 - 39248 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000445573s 2025-08-13T20:20:08.502443628+00:00 stdout F [INFO] 10.217.0.19:44873 - 4998 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001938985s 2025-08-13T20:20:08.502494999+00:00 stdout F [INFO] 10.217.0.19:44407 - 54307 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001842663s 2025-08-13T20:20:11.654028374+00:00 stdout F [INFO] 10.217.0.62:38324 - 64366 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002006857s 2025-08-13T20:20:11.655275660+00:00 stdout F [INFO] 10.217.0.62:45022 - 17853 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071154s 2025-08-13T20:20:22.874056324+00:00 stdout F [INFO] 10.217.0.8:56271 - 7607 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003067057s 2025-08-13T20:20:22.874056324+00:00 stdout F [INFO] 10.217.0.8:53777 - 62518 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004086857s 2025-08-13T20:20:22.875510136+00:00 stdout F [INFO] 10.217.0.8:47578 - 64153 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000949147s 2025-08-13T20:20:22.877051280+00:00 stdout F [INFO] 10.217.0.8:38799 - 52729 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000611507s 2025-08-13T20:20:30.984937467+00:00 stdout F [INFO] 10.217.0.73:34657 - 18108 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001264556s 2025-08-13T20:20:30.984937467+00:00 stdout F [INFO] 10.217.0.73:54396 - 42450 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000660829s 2025-08-13T20:20:35.217291606+00:00 stdout F [INFO] 10.217.0.19:58010 - 34249 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001632276s 2025-08-13T20:20:35.217291606+00:00 stdout F [INFO] 10.217.0.19:43193 - 27871 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001431521s 2025-08-13T20:20:41.640317422+00:00 stdout F [INFO] 10.217.0.62:33136 - 3157 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000968398s 2025-08-13T20:20:41.640317422+00:00 stdout F [INFO] 10.217.0.62:42410 - 28426 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001284317s 2025-08-13T20:20:56.375567479+00:00 stdout F [INFO] 10.217.0.64:46364 - 61033 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004658863s 2025-08-13T20:20:56.375655501+00:00 stdout F [INFO] 10.217.0.64:41556 - 23747 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003774458s 2025-08-13T20:20:56.375691332+00:00 stdout F [INFO] 10.217.0.64:37617 - 7065 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003570242s 2025-08-13T20:20:56.376064003+00:00 stdout F [INFO] 10.217.0.64:40574 - 17849 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004663733s 2025-08-13T20:21:02.344894508+00:00 stdout F [INFO] 10.217.0.19:36337 - 62106 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002328257s 2025-08-13T20:21:02.344894508+00:00 stdout F [INFO] 10.217.0.19:42735 - 19208 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002479191s 2025-08-13T20:21:04.715673334+00:00 stdout F [INFO] 10.217.0.45:52465 - 28402 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001366489s 2025-08-13T20:21:04.715673334+00:00 stdout F [INFO] 10.217.0.45:46488 - 13842 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00104926s 2025-08-13T20:21:05.226410270+00:00 stdout F [INFO] 10.217.0.19:46338 - 63671 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001718309s 2025-08-13T20:21:05.228319235+00:00 stdout F [INFO] 10.217.0.19:45280 - 55564 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000554165s 2025-08-13T20:21:11.645199285+00:00 stdout F [INFO] 10.217.0.62:40198 - 22184 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002646386s 2025-08-13T20:21:11.646182993+00:00 stdout F [INFO] 10.217.0.62:45571 - 1066 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003772287s 2025-08-13T20:21:22.874469926+00:00 stdout F [INFO] 10.217.0.8:52225 - 3320 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000639608s 2025-08-13T20:21:22.874469926+00:00 stdout F [INFO] 10.217.0.8:54099 - 33964 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003515781s 2025-08-13T20:21:22.877588095+00:00 stdout F [INFO] 10.217.0.8:52824 - 64521 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000438082s 2025-08-13T20:21:22.877652327+00:00 stdout F [INFO] 10.217.0.8:57555 - 59549 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001463112s 2025-08-13T20:21:35.222143431+00:00 stdout F [INFO] 10.217.0.19:60649 - 50574 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002335237s 2025-08-13T20:21:35.222143431+00:00 stdout F [INFO] 10.217.0.19:53041 - 33994 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006000452s 2025-08-13T20:21:41.644282842+00:00 stdout F [INFO] 10.217.0.62:40499 - 18965 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001027549s 2025-08-13T20:21:41.644545980+00:00 stdout F [INFO] 10.217.0.62:56677 - 33292 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001361999s 2025-08-13T20:21:46.720648942+00:00 stdout F [INFO] 10.217.0.19:52844 - 54819 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000996739s 2025-08-13T20:21:46.720648942+00:00 stdout F [INFO] 10.217.0.19:35401 - 32403 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106424s 2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:53848 - 12324 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001368299s 2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:58459 - 32629 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001531664s 2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:54440 - 63440 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001854953s 2025-08-13T20:21:56.365450051+00:00 stdout F [INFO] 10.217.0.64:45586 - 58199 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003015456s 2025-08-13T20:22:04.772447424+00:00 stdout F [INFO] 10.217.0.45:47208 - 6310 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002282725s 2025-08-13T20:22:04.773388391+00:00 stdout F [INFO] 10.217.0.45:40257 - 64603 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002641876s 2025-08-13T20:22:05.225832351+00:00 stdout F [INFO] 10.217.0.19:48589 - 4475 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206624s 2025-08-13T20:22:05.226189052+00:00 stdout F [INFO] 10.217.0.19:44554 - 62237 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001643067s 2025-08-13T20:22:09.059585917+00:00 stdout F [INFO] 10.217.0.8:46872 - 45123 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001205414s 2025-08-13T20:22:09.059585917+00:00 stdout F [INFO] 10.217.0.8:44809 - 58206 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001253486s 2025-08-13T20:22:09.061384968+00:00 stdout F [INFO] 10.217.0.8:40056 - 55732 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000912436s 2025-08-13T20:22:09.062433578+00:00 stdout F [INFO] 10.217.0.8:50159 - 60649 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000761292s 2025-08-13T20:22:11.644848192+00:00 stdout F [INFO] 10.217.0.62:55597 - 19802 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00140525s 2025-08-13T20:22:11.644848192+00:00 stdout F [INFO] 10.217.0.62:37054 - 62067 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001243296s 2025-08-13T20:22:11.959636427+00:00 stdout F [INFO] 10.217.0.19:37069 - 44420 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001090712s 2025-08-13T20:22:11.959636427+00:00 stdout F [INFO] 10.217.0.19:51196 - 10736 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001195314s 2025-08-13T20:22:18.381498281+00:00 stdout F [INFO] 10.217.0.82:33260 - 39922 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001861193s 2025-08-13T20:22:18.381703217+00:00 stdout F [INFO] 10.217.0.82:57325 - 15076 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002520261s 2025-08-13T20:22:18.819005535+00:00 stdout F [INFO] 10.217.0.82:59297 - 26216 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.00105767s 2025-08-13T20:22:18.819194680+00:00 stdout F [INFO] 10.217.0.82:59783 - 53810 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001294797s 2025-08-13T20:22:18.930505661+00:00 stdout F [INFO] 10.217.0.87:37155 - 34712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001688419s 2025-08-13T20:22:18.930695497+00:00 stdout F [INFO] 10.217.0.87:43269 - 1726 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001895594s 2025-08-13T20:22:18.996825837+00:00 stdout F [INFO] 10.217.0.87:57214 - 3424 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001348508s 2025-08-13T20:22:18.996914279+00:00 stdout F [INFO] 10.217.0.87:44706 - 18025 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002379739s 2025-08-13T20:22:22.879676495+00:00 stdout F [INFO] 10.217.0.8:46274 - 2923 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004247991s 2025-08-13T20:22:22.879676495+00:00 stdout F [INFO] 10.217.0.8:45842 - 32928 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004246601s 2025-08-13T20:22:22.881415405+00:00 stdout F [INFO] 10.217.0.8:57451 - 31094 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000848284s 2025-08-13T20:22:22.881448626+00:00 stdout F [INFO] 10.217.0.8:36628 - 63016 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001175654s 2025-08-13T20:22:35.224010916+00:00 stdout F [INFO] 10.217.0.19:43222 - 5389 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002881602s 2025-08-13T20:22:35.224683185+00:00 stdout F [INFO] 10.217.0.19:42907 - 51848 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004116278s 2025-08-13T20:22:41.646202026+00:00 stdout F [INFO] 10.217.0.62:43760 - 28539 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001611936s 2025-08-13T20:22:41.646202026+00:00 stdout F [INFO] 10.217.0.62:35224 - 43702 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002153242s 2025-08-13T20:22:56.369606611+00:00 stdout F [INFO] 10.217.0.64:59458 - 7965 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002547783s 2025-08-13T20:22:56.369606611+00:00 stdout F [INFO] 10.217.0.64:48398 - 54107 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002361567s 2025-08-13T20:22:56.370722603+00:00 stdout F [INFO] 10.217.0.64:36212 - 12955 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001255206s 2025-08-13T20:22:56.372176595+00:00 stdout F [INFO] 10.217.0.64:36092 - 52355 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001346788s 2025-08-13T20:23:04.846709970+00:00 stdout F [INFO] 10.217.0.45:40440 - 63694 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001198544s 2025-08-13T20:23:04.846827404+00:00 stdout F [INFO] 10.217.0.45:38541 - 28253 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001752361s 2025-08-13T20:23:05.213124442+00:00 stdout F [INFO] 10.217.0.19:56738 - 47459 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000649099s 2025-08-13T20:23:05.213124442+00:00 stdout F [INFO] 10.217.0.19:46391 - 51426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00070577s 2025-08-13T20:23:11.652968446+00:00 stdout F [INFO] 10.217.0.62:53728 - 49866 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00210305s 2025-08-13T20:23:11.653085550+00:00 stdout F [INFO] 10.217.0.62:45456 - 15639 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002129911s 2025-08-13T20:23:21.588321740+00:00 stdout F [INFO] 10.217.0.19:60874 - 35135 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002127911s 2025-08-13T20:23:21.588321740+00:00 stdout F [INFO] 10.217.0.19:38718 - 31175 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002510541s 2025-08-13T20:23:22.871336408+00:00 stdout F [INFO] 10.217.0.8:36103 - 24028 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000782063s 2025-08-13T20:23:22.871515043+00:00 stdout F [INFO] 10.217.0.8:49021 - 53513 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000862385s 2025-08-13T20:23:22.873339626+00:00 stdout F [INFO] 10.217.0.8:37985 - 20136 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000531166s 2025-08-13T20:23:22.873339626+00:00 stdout F [INFO] 10.217.0.8:55991 - 14323 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000785993s 2025-08-13T20:23:35.238045523+00:00 stdout F [INFO] 10.217.0.19:36684 - 2848 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002465901s 2025-08-13T20:23:35.240079521+00:00 stdout F [INFO] 10.217.0.19:33588 - 27633 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001195794s 2025-08-13T20:23:41.647280071+00:00 stdout F [INFO] 10.217.0.62:42476 - 43342 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001642987s 2025-08-13T20:23:41.647415965+00:00 stdout F [INFO] 10.217.0.62:54440 - 6040 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001811622s 2025-08-13T20:23:45.402046379+00:00 stdout F [INFO] 10.217.0.19:44745 - 60022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000837994s 2025-08-13T20:23:45.402108621+00:00 stdout F [INFO] 10.217.0.19:43993 - 40694 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000823644s 2025-08-13T20:23:56.365249950+00:00 stdout F [INFO] 10.217.0.64:40236 - 37886 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003663245s 2025-08-13T20:23:56.365372663+00:00 stdout F [INFO] 10.217.0.64:46539 - 61891 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004957812s 2025-08-13T20:23:56.365478967+00:00 stdout F [INFO] 10.217.0.64:56603 - 40332 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004119848s 2025-08-13T20:23:56.365851677+00:00 stdout F [INFO] 10.217.0.64:58763 - 26661 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004710615s 2025-08-13T20:24:04.894208688+00:00 stdout F [INFO] 10.217.0.45:53176 - 54604 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001489782s 2025-08-13T20:24:04.895019631+00:00 stdout F [INFO] 10.217.0.45:35227 - 1868 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001315408s 2025-08-13T20:24:05.215869215+00:00 stdout F [INFO] 10.217.0.19:50430 - 16440 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002552023s 2025-08-13T20:24:05.215869215+00:00 stdout F [INFO] 10.217.0.19:43727 - 5350 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002644575s 2025-08-13T20:24:11.654122291+00:00 stdout F [INFO] 10.217.0.62:47207 - 44200 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000967948s 2025-08-13T20:24:11.654122291+00:00 stdout F [INFO] 10.217.0.62:35930 - 61246 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000842024s 2025-08-13T20:24:22.877013284+00:00 stdout F [INFO] 10.217.0.8:34036 - 32585 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002436069s 2025-08-13T20:24:22.877013284+00:00 stdout F [INFO] 10.217.0.8:37756 - 19886 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002389609s 2025-08-13T20:24:22.877728545+00:00 stdout F [INFO] 10.217.0.8:43919 - 30620 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000891985s 2025-08-13T20:24:22.877890659+00:00 stdout F [INFO] 10.217.0.8:59344 - 6990 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00070076s 2025-08-13T20:24:31.218619713+00:00 stdout F [INFO] 10.217.0.19:38893 - 10072 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001308328s 2025-08-13T20:24:31.218619713+00:00 stdout F [INFO] 10.217.0.19:49427 - 61245 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000495315s 2025-08-13T20:24:35.210617970+00:00 stdout F [INFO] 10.217.0.19:34324 - 3422 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206925s 2025-08-13T20:24:35.216068206+00:00 stdout F [INFO] 10.217.0.19:35128 - 8581 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000886335s 2025-08-13T20:24:41.649324149+00:00 stdout F [INFO] 10.217.0.62:45676 - 32314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001269026s 2025-08-13T20:24:41.650352639+00:00 stdout F [INFO] 10.217.0.62:59504 - 25827 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002697517s 2025-08-13T20:24:56.364831731+00:00 stdout F [INFO] 10.217.0.64:46731 - 49096 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002134331s 2025-08-13T20:24:56.364904373+00:00 stdout F [INFO] 10.217.0.64:47783 - 39549 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003447299s 2025-08-13T20:24:56.364915843+00:00 stdout F [INFO] 10.217.0.64:58109 - 10846 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004238141s 2025-08-13T20:24:56.365140799+00:00 stdout F [INFO] 10.217.0.64:42859 - 26107 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003592322s 2025-08-13T20:25:04.954625974+00:00 stdout F [INFO] 10.217.0.45:56098 - 16684 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00105109s 2025-08-13T20:25:04.954625974+00:00 stdout F [INFO] 10.217.0.45:60242 - 56074 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001544635s 2025-08-13T20:25:05.216681297+00:00 stdout F [INFO] 10.217.0.19:43870 - 53964 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880955s 2025-08-13T20:25:05.218245922+00:00 stdout F [INFO] 10.217.0.19:38683 - 13931 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002842042s 2025-08-13T20:25:11.655107217+00:00 stdout F [INFO] 10.217.0.62:42560 - 56615 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001234255s 2025-08-13T20:25:11.655152038+00:00 stdout F [INFO] 10.217.0.62:51024 - 56501 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001732259s 2025-08-13T20:25:22.875674448+00:00 stdout F [INFO] 10.217.0.8:37291 - 10141 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002284745s 2025-08-13T20:25:22.875674448+00:00 stdout F [INFO] 10.217.0.8:53584 - 44780 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00104894s 2025-08-13T20:25:22.876833261+00:00 stdout F [INFO] 10.217.0.8:44784 - 23561 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000829414s 2025-08-13T20:25:22.877283834+00:00 stdout F [INFO] 10.217.0.8:34509 - 32310 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000649649s 2025-08-13T20:25:31.108541627+00:00 stdout F [INFO] 10.217.0.73:54395 - 27034 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001383989s 2025-08-13T20:25:31.108673060+00:00 stdout F [INFO] 10.217.0.73:50430 - 47262 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000849244s 2025-08-13T20:25:35.223646663+00:00 stdout F [INFO] 10.217.0.19:59430 - 40751 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003284004s 2025-08-13T20:25:35.223646663+00:00 stdout F [INFO] 10.217.0.19:38543 - 12475 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003529581s 2025-08-13T20:25:40.839594434+00:00 stdout F [INFO] 10.217.0.19:54153 - 43863 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966588s 2025-08-13T20:25:40.839594434+00:00 stdout F [INFO] 10.217.0.19:60269 - 32295 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001198244s 2025-08-13T20:25:41.654132535+00:00 stdout F [INFO] 10.217.0.62:57722 - 60781 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001192605s 2025-08-13T20:25:41.654225638+00:00 stdout F [INFO] 10.217.0.62:57240 - 9692 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000768012s 2025-08-13T20:25:44.091295833+00:00 stdout F [INFO] 10.217.0.19:47837 - 6058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000956198s 2025-08-13T20:25:44.091480398+00:00 stdout F [INFO] 10.217.0.19:42351 - 47123 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001190144s 2025-08-13T20:25:56.362749204+00:00 stdout F [INFO] 10.217.0.64:57008 - 2843 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001875883s 2025-08-13T20:25:56.362749204+00:00 stdout F [INFO] 10.217.0.64:59145 - 41905 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000892495s 2025-08-13T20:25:56.364121553+00:00 stdout F [INFO] 10.217.0.64:54461 - 27316 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000776812s 2025-08-13T20:25:56.364366410+00:00 stdout F [INFO] 10.217.0.64:49156 - 29068 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001151533s 2025-08-13T20:26:05.008171728+00:00 stdout F [INFO] 10.217.0.45:56933 - 22116 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001376049s 2025-08-13T20:26:05.008171728+00:00 stdout F [INFO] 10.217.0.45:38062 - 21287 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001615627s 2025-08-13T20:26:05.225685178+00:00 stdout F [INFO] 10.217.0.19:42480 - 2952 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001715269s 2025-08-13T20:26:05.229846137+00:00 stdout F [INFO] 10.217.0.19:38966 - 45405 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000540405s 2025-08-13T20:26:11.660082231+00:00 stdout F [INFO] 10.217.0.62:36357 - 25846 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001317148s 2025-08-13T20:26:11.660082231+00:00 stdout F [INFO] 10.217.0.62:40131 - 27947 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001431501s 2025-08-13T20:26:22.885119750+00:00 stdout F [INFO] 10.217.0.8:39265 - 12506 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.008256897s 2025-08-13T20:26:22.885300815+00:00 stdout F [INFO] 10.217.0.8:60052 - 21113 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.009225783s 2025-08-13T20:26:22.887208370+00:00 stdout F [INFO] 10.217.0.8:50059 - 64664 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001089472s 2025-08-13T20:26:22.888349282+00:00 stdout F [INFO] 10.217.0.8:58525 - 41733 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002595604s 2025-08-13T20:26:35.217672305+00:00 stdout F [INFO] 10.217.0.19:50871 - 46595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003050657s 2025-08-13T20:26:35.223889833+00:00 stdout F [INFO] 10.217.0.19:34499 - 21925 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001911215s 2025-08-13T20:26:41.663241068+00:00 stdout F [INFO] 10.217.0.62:37952 - 57689 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002296435s 2025-08-13T20:26:41.663450064+00:00 stdout F [INFO] 10.217.0.62:32910 - 1261 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002552483s 2025-08-13T20:26:50.459861487+00:00 stdout F [INFO] 10.217.0.19:59463 - 36264 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129462s 2025-08-13T20:26:50.460897926+00:00 stdout F [INFO] 10.217.0.19:43264 - 12719 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003320605s 2025-08-13T20:26:56.361161538+00:00 stdout F [INFO] 10.217.0.64:40095 - 42976 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001215025s 2025-08-13T20:26:56.362470865+00:00 stdout F [INFO] 10.217.0.64:51703 - 18236 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00106133s 2025-08-13T20:26:56.362470865+00:00 stdout F [INFO] 10.217.0.64:42173 - 15221 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001270597s 2025-08-13T20:26:56.366062168+00:00 stdout F [INFO] 10.217.0.64:56428 - 48198 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.006411643s 2025-08-13T20:27:05.061678481+00:00 stdout F [INFO] 10.217.0.45:44274 - 14734 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001185503s 2025-08-13T20:27:05.061678481+00:00 stdout F [INFO] 10.217.0.45:55812 - 7493 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000598277s 2025-08-13T20:27:05.211882976+00:00 stdout F [INFO] 10.217.0.19:52471 - 16429 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001636387s 2025-08-13T20:27:05.215722676+00:00 stdout F [INFO] 10.217.0.19:54830 - 38464 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000740491s 2025-08-13T20:27:11.668728224+00:00 stdout F [INFO] 10.217.0.62:46677 - 64812 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001148663s 2025-08-13T20:27:11.668728224+00:00 stdout F [INFO] 10.217.0.62:51999 - 8607 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001020539s 2025-08-13T20:27:22.879288288+00:00 stdout F [INFO] 10.217.0.8:37705 - 46550 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002511792s 2025-08-13T20:27:22.879288288+00:00 stdout F [INFO] 10.217.0.8:37535 - 28497 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003007216s 2025-08-13T20:27:22.880758070+00:00 stdout F [INFO] 10.217.0.8:51257 - 27538 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001197134s 2025-08-13T20:27:22.880952866+00:00 stdout F [INFO] 10.217.0.8:53250 - 20415 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001178664s 2025-08-13T20:27:35.228295304+00:00 stdout F [INFO] 10.217.0.19:57583 - 19387 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003182911s 2025-08-13T20:27:35.228295304+00:00 stdout F [INFO] 10.217.0.19:53003 - 43498 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001328308s 2025-08-13T20:27:41.658872270+00:00 stdout F [INFO] 10.217.0.62:45081 - 383 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001441351s 2025-08-13T20:27:41.658872270+00:00 stdout F [INFO] 10.217.0.62:33850 - 51076 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001704479s 2025-08-13T20:27:42.780742399+00:00 stdout F [INFO] 10.217.0.19:39615 - 13995 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001031459s 2025-08-13T20:27:42.780742399+00:00 stdout F [INFO] 10.217.0.19:51920 - 40149 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000956647s 2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:58075 - 52190 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004438267s 2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:48199 - 47108 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005041774s 2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:51669 - 65440 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.005462966s 2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:47060 - 8587 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.005900449s 2025-08-13T20:27:58.930109056+00:00 stdout F [INFO] 10.217.0.19:55625 - 45930 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000983058s 2025-08-13T20:27:58.930109056+00:00 stdout F [INFO] 10.217.0.19:42361 - 34807 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129392s 2025-08-13T20:28:00.077029005+00:00 stdout F [INFO] 10.217.0.19:53335 - 44549 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000760012s 2025-08-13T20:28:00.077029005+00:00 stdout F [INFO] 10.217.0.19:40680 - 63548 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000908866s 2025-08-13T20:28:05.109126285+00:00 stdout F [INFO] 10.217.0.45:51541 - 679 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000994358s 2025-08-13T20:28:05.109126285+00:00 stdout F [INFO] 10.217.0.45:51904 - 3423 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001150933s 2025-08-13T20:28:05.222457713+00:00 stdout F [INFO] 10.217.0.19:56911 - 31417 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001446252s 2025-08-13T20:28:05.222507354+00:00 stdout F [INFO] 10.217.0.19:35415 - 62687 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001685248s 2025-08-13T20:28:11.573842296+00:00 stdout F [INFO] 10.217.0.62:38831 - 24140 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01252948s 2025-08-13T20:28:11.574213006+00:00 stdout F [INFO] 10.217.0.62:52754 - 21246 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012967142s 2025-08-13T20:28:11.652482776+00:00 stdout F [INFO] 10.217.0.62:44148 - 61190 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139288s 2025-08-13T20:28:11.652482776+00:00 stdout F [INFO] 10.217.0.62:40743 - 20956 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001218395s 2025-08-13T20:28:22.878744002+00:00 stdout F [INFO] 10.217.0.8:46071 - 3347 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002957745s 2025-08-13T20:28:22.878744002+00:00 stdout F [INFO] 10.217.0.8:60971 - 48670 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003520741s 2025-08-13T20:28:22.880619546+00:00 stdout F [INFO] 10.217.0.8:58226 - 45100 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001212545s 2025-08-13T20:28:22.880977026+00:00 stdout F [INFO] 10.217.0.8:44978 - 48564 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001512964s 2025-08-13T20:28:35.237885214+00:00 stdout F [INFO] 10.217.0.19:54157 - 35131 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002727999s 2025-08-13T20:28:35.238211853+00:00 stdout F [INFO] 10.217.0.19:37322 - 15708 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003765728s 2025-08-13T20:28:41.656353106+00:00 stdout F [INFO] 10.217.0.62:34350 - 14878 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001065571s 2025-08-13T20:28:41.656353106+00:00 stdout F [INFO] 10.217.0.62:42110 - 17498 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000952677s 2025-08-13T20:28:56.363747197+00:00 stdout F [INFO] 10.217.0.64:59117 - 59507 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003759828s 2025-08-13T20:28:56.363747197+00:00 stdout F [INFO] 10.217.0.64:39236 - 60297 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004420417s 2025-08-13T20:28:56.364644013+00:00 stdout F [INFO] 10.217.0.64:39886 - 59994 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002289126s 2025-08-13T20:28:56.364993613+00:00 stdout F [INFO] 10.217.0.64:51482 - 47269 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001537855s 2025-08-13T20:29:05.158973531+00:00 stdout F [INFO] 10.217.0.45:49586 - 293 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001536384s 2025-08-13T20:29:05.158973531+00:00 stdout F [INFO] 10.217.0.45:37945 - 29941 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002622176s 2025-08-13T20:29:05.246980231+00:00 stdout F [INFO] 10.217.0.19:39196 - 22276 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000976709s 2025-08-13T20:29:05.247213568+00:00 stdout F [INFO] 10.217.0.19:50331 - 9060 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000653139s 2025-08-13T20:29:09.709941691+00:00 stdout F [INFO] 10.217.0.19:49760 - 39795 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001007329s 2025-08-13T20:29:09.710193458+00:00 stdout F [INFO] 10.217.0.19:52124 - 16792 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002368648s 2025-08-13T20:29:11.659446941+00:00 stdout F [INFO] 10.217.0.62:48548 - 25764 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000931517s 2025-08-13T20:29:11.659446941+00:00 stdout F [INFO] 10.217.0.62:52247 - 39213 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001345389s 2025-08-13T20:29:22.877995183+00:00 stdout F [INFO] 10.217.0.8:55820 - 24792 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004234281s 2025-08-13T20:29:22.877995183+00:00 stdout F [INFO] 10.217.0.8:36516 - 18894 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003181971s 2025-08-13T20:29:22.879826456+00:00 stdout F [INFO] 10.217.0.8:35384 - 34055 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0010181s 2025-08-13T20:29:22.880051702+00:00 stdout F [INFO] 10.217.0.8:37455 - 35412 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001280907s 2025-08-13T20:29:33.508718749+00:00 stdout F [INFO] 10.217.0.62:52420 - 65467 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005813477s 2025-08-13T20:29:33.508718749+00:00 stdout F [INFO] 10.217.0.62:59946 - 45827 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005716655s 2025-08-13T20:29:33.548111311+00:00 stdout F [INFO] 10.217.0.62:39576 - 4767 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001328578s 2025-08-13T20:29:33.549247944+00:00 stdout F [INFO] 10.217.0.62:39409 - 20523 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00106722s 2025-08-13T20:29:35.242365363+00:00 stdout F [INFO] 10.217.0.19:52444 - 13513 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000664429s 2025-08-13T20:29:35.245163164+00:00 stdout F [INFO] 10.217.0.19:57824 - 18608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003030437s 2025-08-13T20:29:38.331371308+00:00 stdout F [INFO] 10.217.0.62:50284 - 47289 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000978308s 2025-08-13T20:29:38.331371308+00:00 stdout F [INFO] 10.217.0.62:54421 - 29466 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00069253s 2025-08-13T20:29:41.472703057+00:00 stdout F [INFO] 10.217.0.19:50960 - 37288 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00139353s 2025-08-13T20:29:41.472703057+00:00 stdout F [INFO] 10.217.0.19:40772 - 35345 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000867005s 2025-08-13T20:29:41.655946815+00:00 stdout F [INFO] 10.217.0.62:46017 - 45350 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000554786s 2025-08-13T20:29:41.657692135+00:00 stdout F [INFO] 10.217.0.62:38458 - 9290 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002201973s 2025-08-13T20:29:42.029686128+00:00 stdout F [INFO] 10.217.0.19:49384 - 17937 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001275446s 2025-08-13T20:29:42.030119931+00:00 stdout F [INFO] 10.217.0.19:34126 - 31563 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002031728s 2025-08-13T20:29:42.195863505+00:00 stdout F [INFO] 10.217.0.19:50292 - 8646 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000771612s 2025-08-13T20:29:42.195863505+00:00 stdout F [INFO] 10.217.0.19:41813 - 53616 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001189794s 2025-08-13T20:29:42.548149462+00:00 stdout F [INFO] 10.217.0.62:46761 - 59090 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001441271s 2025-08-13T20:29:42.548391029+00:00 stdout F [INFO] 10.217.0.62:44356 - 37096 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001953676s 2025-08-13T20:29:45.402769419+00:00 stdout F [INFO] 10.217.0.62:48046 - 45459 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000817103s 2025-08-13T20:29:45.402769419+00:00 stdout F [INFO] 10.217.0.62:49429 - 57122 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000449153s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:57376 - 19028 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003027407s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:53232 - 19566 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003590323s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:50097 - 25136 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003240624s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:47419 - 35557 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003612844s 2025-08-13T20:30:01.075376634+00:00 stdout F [INFO] 10.217.0.19:42400 - 32330 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00315681s 2025-08-13T20:30:01.075376634+00:00 stdout F [INFO] 10.217.0.19:43531 - 21848 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003187702s 2025-08-13T20:30:01.100151146+00:00 stdout F [INFO] 10.217.0.19:37195 - 2542 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000633388s 2025-08-13T20:30:01.100151146+00:00 stdout F [INFO] 10.217.0.19:48203 - 48919 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000567806s 2025-08-13T20:30:05.230154427+00:00 stdout F [INFO] 10.217.0.45:36707 - 63835 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001633947s 2025-08-13T20:30:05.230154427+00:00 stdout F [INFO] 10.217.0.45:54164 - 55506 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002323076s 2025-08-13T20:30:05.241957226+00:00 stdout F [INFO] 10.217.0.19:56331 - 59077 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003138751s 2025-08-13T20:30:05.241957226+00:00 stdout F [INFO] 10.217.0.19:45527 - 23788 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003551052s 2025-08-13T20:30:08.348942269+00:00 stdout F [INFO] 10.217.0.19:33338 - 2468 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00103812s 2025-08-13T20:30:08.348942269+00:00 stdout F [INFO] 10.217.0.19:45130 - 60291 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001927446s 2025-08-13T20:30:08.397565217+00:00 stdout F [INFO] 10.217.0.19:35130 - 55305 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000807704s 2025-08-13T20:30:08.397565217+00:00 stdout F [INFO] 10.217.0.19:43466 - 56354 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002227894s 2025-08-13T20:30:08.542631706+00:00 stdout F [INFO] 10.217.0.19:32789 - 5660 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001609206s 2025-08-13T20:30:08.542692378+00:00 stdout F [INFO] 10.217.0.19:34852 - 15652 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001358189s 2025-08-13T20:30:11.680732082+00:00 stdout F [INFO] 10.217.0.62:57583 - 18489 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001232076s 2025-08-13T20:30:11.680732082+00:00 stdout F [INFO] 10.217.0.62:41359 - 11743 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001834313s 2025-08-13T20:30:19.348744243+00:00 stdout F [INFO] 10.217.0.19:60500 - 65234 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002055099s 2025-08-13T20:30:19.348744243+00:00 stdout F [INFO] 10.217.0.19:34223 - 16093 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002216144s 2025-08-13T20:30:22.880352091+00:00 stdout F [INFO] 10.217.0.8:43568 - 50549 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001704109s 2025-08-13T20:30:22.880603728+00:00 stdout F [INFO] 10.217.0.8:47766 - 6147 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00312927s 2025-08-13T20:30:22.882453871+00:00 stdout F [INFO] 10.217.0.8:54083 - 61218 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000749791s 2025-08-13T20:30:22.882881273+00:00 stdout F [INFO] 10.217.0.8:54024 - 55183 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001355379s 2025-08-13T20:30:31.155485163+00:00 stdout F [INFO] 10.217.0.73:38005 - 4639 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002329047s 2025-08-13T20:30:31.155485163+00:00 stdout F [INFO] 10.217.0.73:38446 - 24841 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003351347s 2025-08-13T20:30:33.360074255+00:00 stdout F [INFO] 10.217.0.19:45240 - 18271 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001849443s 2025-08-13T20:30:33.360074255+00:00 stdout F [INFO] 10.217.0.19:56701 - 35882 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002291556s 2025-08-13T20:30:33.386018850+00:00 stdout F [INFO] 10.217.0.19:59664 - 50456 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004310074s 2025-08-13T20:30:33.386313009+00:00 stdout F [INFO] 10.217.0.19:58470 - 46639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005115767s 2025-08-13T20:30:35.215184861+00:00 stdout F [INFO] 10.217.0.19:49596 - 39944 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002485991s 2025-08-13T20:30:35.217097636+00:00 stdout F [INFO] 10.217.0.19:37961 - 22633 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000818494s 2025-08-13T20:30:41.662681529+00:00 stdout F [INFO] 10.217.0.62:49564 - 50238 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000863175s 2025-08-13T20:30:41.662681529+00:00 stdout F [INFO] 10.217.0.62:38814 - 44130 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001675658s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:43125 - 35104 "A IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002406829s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:35781 - 62498 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003089479s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:37735 - 22294 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00384699s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:37353 - 20789 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003985004s 2025-08-13T20:31:05.224302085+00:00 stdout F [INFO] 10.217.0.19:57220 - 36252 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001177644s 2025-08-13T20:31:05.224433359+00:00 stdout F [INFO] 10.217.0.19:59782 - 30926 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000577116s 2025-08-13T20:31:05.326079580+00:00 stdout F [INFO] 10.217.0.45:45832 - 46685 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000889425s 2025-08-13T20:31:05.326079580+00:00 stdout F [INFO] 10.217.0.45:40414 - 1549 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001170953s 2025-08-13T20:31:11.673462830+00:00 stdout F [INFO] 10.217.0.62:38378 - 23668 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002980646s 2025-08-13T20:31:11.673636245+00:00 stdout F [INFO] 10.217.0.62:40505 - 54986 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002723098s 2025-08-13T20:31:22.881264896+00:00 stdout F [INFO] 10.217.0.8:55399 - 52791 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003612124s 2025-08-13T20:31:22.881264896+00:00 stdout F [INFO] 10.217.0.8:38597 - 2000 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004345355s 2025-08-13T20:31:22.883736167+00:00 stdout F [INFO] 10.217.0.8:50837 - 15887 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00104287s 2025-08-13T20:31:22.883736167+00:00 stdout F [INFO] 10.217.0.8:43696 - 36032 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001087321s 2025-08-13T20:31:28.968581310+00:00 stdout F [INFO] 10.217.0.19:39400 - 8426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001307347s 2025-08-13T20:31:28.968882459+00:00 stdout F [INFO] 10.217.0.19:36145 - 62501 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001686888s 2025-08-13T20:31:35.240270472+00:00 stdout F [INFO] 10.217.0.19:36421 - 17002 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002607855s 2025-08-13T20:31:35.241578470+00:00 stdout F [INFO] 10.217.0.19:59967 - 53451 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004725376s 2025-08-13T20:31:40.160974940+00:00 stdout F [INFO] 10.217.0.19:38477 - 33833 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001063061s 2025-08-13T20:31:40.160974940+00:00 stdout F [INFO] 10.217.0.19:37658 - 8896 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001155733s 2025-08-13T20:31:41.663014977+00:00 stdout F [INFO] 10.217.0.62:51660 - 16571 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000794503s 2025-08-13T20:31:41.663014977+00:00 stdout F [INFO] 10.217.0.62:52306 - 23225 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000757912s 2025-08-13T20:31:56.361142474+00:00 stdout F [INFO] 10.217.0.64:60172 - 9058 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00172696s 2025-08-13T20:31:56.361142474+00:00 stdout F [INFO] 10.217.0.64:48943 - 4720 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003051658s 2025-08-13T20:31:56.363961735+00:00 stdout F [INFO] 10.217.0.64:50037 - 34579 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001489083s 2025-08-13T20:31:56.365112368+00:00 stdout F [INFO] 10.217.0.64:43784 - 39155 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001770351s 2025-08-13T20:32:05.226220156+00:00 stdout F [INFO] 10.217.0.19:43101 - 63759 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175152s 2025-08-13T20:32:05.226220156+00:00 stdout F [INFO] 10.217.0.19:55083 - 62337 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002368358s 2025-08-13T20:32:05.381093178+00:00 stdout F [INFO] 10.217.0.45:52832 - 65452 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000965308s 2025-08-13T20:32:05.381833029+00:00 stdout F [INFO] 10.217.0.45:36205 - 44015 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001442462s 2025-08-13T20:32:11.669659366+00:00 stdout F [INFO] 10.217.0.62:46206 - 42199 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001127753s 2025-08-13T20:32:11.669659366+00:00 stdout F [INFO] 10.217.0.62:55044 - 47589 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001251706s 2025-08-13T20:32:22.887274833+00:00 stdout F [INFO] 10.217.0.8:57254 - 11688 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002743639s 2025-08-13T20:32:22.887327804+00:00 stdout F [INFO] 10.217.0.8:40915 - 29430 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003152451s 2025-08-13T20:32:22.889946979+00:00 stdout F [INFO] 10.217.0.8:58018 - 20602 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001598576s 2025-08-13T20:32:22.890211667+00:00 stdout F [INFO] 10.217.0.8:35529 - 19346 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001166383s 2025-08-13T20:32:35.244724858+00:00 stdout F [INFO] 10.217.0.19:45669 - 34426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002952165s 2025-08-13T20:32:35.244724858+00:00 stdout F [INFO] 10.217.0.19:51147 - 51510 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003438999s 2025-08-13T20:32:38.580841186+00:00 stdout F [INFO] 10.217.0.19:43690 - 3324 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001319678s 2025-08-13T20:32:38.580988980+00:00 stdout F [INFO] 10.217.0.19:56587 - 24648 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001099972s 2025-08-13T20:32:41.667486954+00:00 stdout F [INFO] 10.217.0.62:55855 - 31828 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001149163s 2025-08-13T20:32:41.667486954+00:00 stdout F [INFO] 10.217.0.62:34829 - 3463 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001708029s 2025-08-13T20:32:56.366707428+00:00 stdout F [INFO] 10.217.0.64:51627 - 65020 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003232263s 2025-08-13T20:32:56.366707428+00:00 stdout F [INFO] 10.217.0.64:52149 - 8902 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003574023s 2025-08-13T20:32:56.368728506+00:00 stdout F [INFO] 10.217.0.64:51624 - 35655 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000891486s 2025-08-13T20:32:56.369118527+00:00 stdout F [INFO] 10.217.0.64:60490 - 38135 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00103067s 2025-08-13T20:33:05.238556718+00:00 stdout F [INFO] 10.217.0.19:59718 - 38615 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00139407s 2025-08-13T20:33:05.238651740+00:00 stdout F [INFO] 10.217.0.19:54459 - 19380 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001287367s 2025-08-13T20:33:05.431433402+00:00 stdout F [INFO] 10.217.0.45:50366 - 22220 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000820193s 2025-08-13T20:33:05.431991198+00:00 stdout F [INFO] 10.217.0.45:52058 - 7055 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000636528s 2025-08-13T20:33:11.674968627+00:00 stdout F [INFO] 10.217.0.62:53746 - 415 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00138406s 2025-08-13T20:33:11.674968627+00:00 stdout F [INFO] 10.217.0.62:46126 - 49757 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002078379s 2025-08-13T20:33:22.881237326+00:00 stdout F [INFO] 10.217.0.8:40930 - 16922 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00244019s 2025-08-13T20:33:22.881237326+00:00 stdout F [INFO] 10.217.0.8:34743 - 60811 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002328907s 2025-08-13T20:33:22.883380997+00:00 stdout F [INFO] 10.217.0.8:58520 - 29998 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001180044s 2025-08-13T20:33:22.883380997+00:00 stdout F [INFO] 10.217.0.8:33257 - 4060 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001460842s 2025-08-13T20:33:33.398825661+00:00 stdout F [INFO] 10.217.0.82:32850 - 61432 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003149961s 2025-08-13T20:33:33.398825661+00:00 stdout F [INFO] 10.217.0.82:33287 - 31809 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003351926s 2025-08-13T20:33:33.878633203+00:00 stdout F [INFO] 10.217.0.82:59293 - 7022 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000809964s 2025-08-13T20:33:33.878633203+00:00 stdout F [INFO] 10.217.0.82:60830 - 12546 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000924657s 2025-08-13T20:33:35.232345647+00:00 stdout F [INFO] 10.217.0.19:36818 - 54936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001196634s 2025-08-13T20:33:35.232345647+00:00 stdout F [INFO] 10.217.0.19:41457 - 46863 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001223505s 2025-08-13T20:33:38.860763048+00:00 stdout F [INFO] 10.217.0.19:34258 - 15358 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001795122s 2025-08-13T20:33:38.860763048+00:00 stdout F [INFO] 10.217.0.19:42002 - 13607 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002488091s 2025-08-13T20:33:41.668370103+00:00 stdout F [INFO] 10.217.0.62:60543 - 3360 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001166404s 2025-08-13T20:33:41.668370103+00:00 stdout F [INFO] 10.217.0.62:56859 - 37798 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001088791s 2025-08-13T20:33:48.204972862+00:00 stdout F [INFO] 10.217.0.19:60096 - 61982 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000928067s 2025-08-13T20:33:48.206303960+00:00 stdout F [INFO] 10.217.0.19:60303 - 53773 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002658236s 2025-08-13T20:33:56.360014243+00:00 stdout F [INFO] 10.217.0.64:51406 - 31658 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001163614s 2025-08-13T20:33:56.360014243+00:00 stdout F [INFO] 10.217.0.64:48360 - 33699 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001532054s 2025-08-13T20:33:56.361576218+00:00 stdout F [INFO] 10.217.0.64:50492 - 53197 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000629309s 2025-08-13T20:33:56.362663479+00:00 stdout F [INFO] 10.217.0.64:48910 - 50058 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001494913s 2025-08-13T20:34:05.229192181+00:00 stdout F [INFO] 10.217.0.19:41931 - 36188 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001559254s 2025-08-13T20:34:05.229192181+00:00 stdout F [INFO] 10.217.0.19:56602 - 24472 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001454581s 2025-08-13T20:34:05.481479033+00:00 stdout F [INFO] 10.217.0.45:46813 - 52256 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000608437s 2025-08-13T20:34:05.481870144+00:00 stdout F [INFO] 10.217.0.45:42826 - 54158 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000893826s 2025-08-13T20:34:11.675507967+00:00 stdout F [INFO] 10.217.0.62:42151 - 37111 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001868794s 2025-08-13T20:34:11.679175752+00:00 stdout F [INFO] 10.217.0.62:59737 - 62734 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001298907s 2025-08-13T20:34:22.881957784+00:00 stdout F [INFO] 10.217.0.8:43179 - 46202 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002114951s 2025-08-13T20:34:22.881957784+00:00 stdout F [INFO] 10.217.0.8:42515 - 56482 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002568244s 2025-08-13T20:34:22.883479128+00:00 stdout F [INFO] 10.217.0.8:39755 - 48303 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000877095s 2025-08-13T20:34:22.883726015+00:00 stdout F [INFO] 10.217.0.8:39774 - 22805 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000591607s 2025-08-13T20:34:35.232811425+00:00 stdout F [INFO] 10.217.0.19:60022 - 37769 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003804899s 2025-08-13T20:34:35.232811425+00:00 stdout F [INFO] 10.217.0.19:39283 - 32612 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004103508s 2025-08-13T20:34:41.678398886+00:00 stdout F [INFO] 10.217.0.62:35730 - 62817 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001451642s 2025-08-13T20:34:41.678520039+00:00 stdout F [INFO] 10.217.0.62:47850 - 28599 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003685396s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:38897 - 642 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00246145s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:52998 - 17838 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002885533s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:52702 - 63679 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003412048s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:54492 - 49052 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002701758s 2025-08-13T20:34:57.836345007+00:00 stdout F [INFO] 10.217.0.19:42490 - 13001 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002930475s 2025-08-13T20:34:57.836416799+00:00 stdout F [INFO] 10.217.0.19:46472 - 20548 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003218083s 2025-08-13T20:35:05.227701626+00:00 stdout F [INFO] 10.217.0.19:52788 - 47586 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00484177s 2025-08-13T20:35:05.227701626+00:00 stdout F [INFO] 10.217.0.19:51739 - 54276 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005381574s 2025-08-13T20:35:05.528765310+00:00 stdout F [INFO] 10.217.0.45:41036 - 749 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001621397s 2025-08-13T20:35:05.528765310+00:00 stdout F [INFO] 10.217.0.45:40449 - 23309 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001683448s 2025-08-13T20:35:11.673861094+00:00 stdout F [INFO] 10.217.0.62:50333 - 63789 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000766982s 2025-08-13T20:35:11.674033429+00:00 stdout F [INFO] 10.217.0.62:51423 - 972 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001759371s 2025-08-13T20:35:22.883890497+00:00 stdout F [INFO] 10.217.0.8:35533 - 23190 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002822982s 2025-08-13T20:35:22.883890497+00:00 stdout F [INFO] 10.217.0.8:48943 - 18475 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003335756s 2025-08-13T20:35:22.884929167+00:00 stdout F [INFO] 10.217.0.8:51669 - 49205 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000912526s 2025-08-13T20:35:22.885243076+00:00 stdout F [INFO] 10.217.0.8:35489 - 30954 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000887996s 2025-08-13T20:35:31.201730869+00:00 stdout F [INFO] 10.217.0.73:36025 - 37682 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00104204s 2025-08-13T20:35:31.201730869+00:00 stdout F [INFO] 10.217.0.73:44698 - 26406 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001180854s 2025-08-13T20:35:35.230980092+00:00 stdout F [INFO] 10.217.0.19:50589 - 7915 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001655618s 2025-08-13T20:35:35.230980092+00:00 stdout F [INFO] 10.217.0.19:55591 - 64446 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001892654s 2025-08-13T20:35:37.541280062+00:00 stdout F [INFO] 10.217.0.19:36145 - 55938 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000741561s 2025-08-13T20:35:37.541280062+00:00 stdout F [INFO] 10.217.0.19:36908 - 8687 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000822553s 2025-08-13T20:35:41.673136284+00:00 stdout F [INFO] 10.217.0.62:47360 - 54352 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000790583s 2025-08-13T20:35:41.675182283+00:00 stdout F [INFO] 10.217.0.62:52225 - 16649 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001279057s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:40533 - 46498 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002024898s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:53651 - 56781 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001327868s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:40804 - 18331 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002547243s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:49348 - 64759 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001311637s 2025-08-13T20:36:05.242703867+00:00 stdout F [INFO] 10.217.0.19:52997 - 8478 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002759859s 2025-08-13T20:36:05.242703867+00:00 stdout F [INFO] 10.217.0.19:37916 - 39157 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003782448s 2025-08-13T20:36:05.582597387+00:00 stdout F [INFO] 10.217.0.45:41747 - 26619 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000789732s 2025-08-13T20:36:05.582597387+00:00 stdout F [INFO] 10.217.0.45:56187 - 63967 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000641038s 2025-08-13T20:36:07.456016709+00:00 stdout F [INFO] 10.217.0.19:55081 - 39422 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000868175s 2025-08-13T20:36:07.456016709+00:00 stdout F [INFO] 10.217.0.19:50072 - 41015 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000941947s 2025-08-13T20:36:11.676425877+00:00 stdout F [INFO] 10.217.0.62:37814 - 7136 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001359319s 2025-08-13T20:36:11.676425877+00:00 stdout F [INFO] 10.217.0.62:33472 - 12989 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001268686s 2025-08-13T20:36:22.883978362+00:00 stdout F [INFO] 10.217.0.8:58021 - 44272 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003780959s 2025-08-13T20:36:22.884527128+00:00 stdout F [INFO] 10.217.0.8:35563 - 10048 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004389926s 2025-08-13T20:36:22.889843281+00:00 stdout F [INFO] 10.217.0.8:51549 - 40045 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001655578s 2025-08-13T20:36:22.889843281+00:00 stdout F [INFO] 10.217.0.8:41648 - 55672 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001504023s 2025-08-13T20:36:35.261662630+00:00 stdout F [INFO] 10.217.0.19:44748 - 55600 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007348752s 2025-08-13T20:36:35.261936927+00:00 stdout F [INFO] 10.217.0.19:58670 - 34210 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00869192s 2025-08-13T20:36:41.673019005+00:00 stdout F [INFO] 10.217.0.62:47774 - 57002 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000775162s 2025-08-13T20:36:41.673019005+00:00 stdout F [INFO] 10.217.0.62:58481 - 36932 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00069989s 2025-08-13T20:36:56.360899890+00:00 stdout F [INFO] 10.217.0.64:49690 - 6346 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001882895s 2025-08-13T20:36:56.360899890+00:00 stdout F [INFO] 10.217.0.64:33376 - 48808 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002107641s 2025-08-13T20:36:56.362291470+00:00 stdout F [INFO] 10.217.0.64:46296 - 927 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00072105s 2025-08-13T20:36:56.362291470+00:00 stdout F [INFO] 10.217.0.64:36520 - 31547 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000756802s 2025-08-13T20:37:05.233258069+00:00 stdout F [INFO] 10.217.0.19:59772 - 14035 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002645536s 2025-08-13T20:37:05.233258069+00:00 stdout F [INFO] 10.217.0.19:37719 - 14045 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004455419s 2025-08-13T20:37:05.640907632+00:00 stdout F [INFO] 10.217.0.45:38874 - 9095 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000904947s 2025-08-13T20:37:05.640907632+00:00 stdout F [INFO] 10.217.0.45:47158 - 35513 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000973378s 2025-08-13T20:37:11.675012994+00:00 stdout F [INFO] 10.217.0.62:52041 - 28255 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001223665s 2025-08-13T20:37:11.675012994+00:00 stdout F [INFO] 10.217.0.62:53127 - 64300 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001225125s 2025-08-13T20:37:17.087581728+00:00 stdout F [INFO] 10.217.0.19:41701 - 21437 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001998977s 2025-08-13T20:37:17.088880585+00:00 stdout F [INFO] 10.217.0.19:35947 - 41854 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002072s 2025-08-13T20:37:22.884701150+00:00 stdout F [INFO] 10.217.0.8:33740 - 41869 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001245746s 2025-08-13T20:37:22.884864495+00:00 stdout F [INFO] 10.217.0.8:47477 - 64327 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002157842s 2025-08-13T20:37:22.885529914+00:00 stdout F [INFO] 10.217.0.8:36284 - 15082 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000625468s 2025-08-13T20:37:22.885654827+00:00 stdout F [INFO] 10.217.0.8:51765 - 18434 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000766943s 2025-08-13T20:37:35.259845730+00:00 stdout F [INFO] 10.217.0.19:57972 - 33316 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002853822s 2025-08-13T20:37:35.259933912+00:00 stdout F [INFO] 10.217.0.19:38435 - 20685 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000810904s 2025-08-13T20:37:36.229228104+00:00 stdout F [INFO] 10.217.0.19:52269 - 385 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000973738s 2025-08-13T20:37:36.229265696+00:00 stdout F [INFO] 10.217.0.19:51103 - 9389 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001151813s 2025-08-13T20:37:41.676036215+00:00 stdout F [INFO] 10.217.0.62:57770 - 42152 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00107205s 2025-08-13T20:37:41.676216830+00:00 stdout F [INFO] 10.217.0.62:38336 - 46730 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001156253s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:57040 - 20480 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003036318s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:58395 - 10701 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002653297s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:42967 - 18626 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002456391s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:46951 - 59146 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002577614s 2025-08-13T20:38:05.237364761+00:00 stdout F [INFO] 10.217.0.19:58012 - 10248 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00347164s 2025-08-13T20:38:05.237436263+00:00 stdout F [INFO] 10.217.0.19:43124 - 45715 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003759018s 2025-08-13T20:38:05.684249535+00:00 stdout F [INFO] 10.217.0.45:44888 - 52674 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000809783s 2025-08-13T20:38:05.684453351+00:00 stdout F [INFO] 10.217.0.45:51261 - 16994 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000764562s 2025-08-13T20:38:11.559879617+00:00 stdout F [INFO] 10.217.0.62:40287 - 3642 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001132343s 2025-08-13T20:38:11.559879617+00:00 stdout F [INFO] 10.217.0.62:43178 - 21365 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001228095s 2025-08-13T20:38:11.671238298+00:00 stdout F [INFO] 10.217.0.62:53652 - 48116 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000799183s 2025-08-13T20:38:11.671454264+00:00 stdout F [INFO] 10.217.0.62:47552 - 28323 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000930827s 2025-08-13T20:38:22.888462531+00:00 stdout F [INFO] 10.217.0.8:60705 - 40432 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004011316s 2025-08-13T20:38:22.888594465+00:00 stdout F [INFO] 10.217.0.8:51067 - 23753 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002672687s 2025-08-13T20:38:22.890079978+00:00 stdout F [INFO] 10.217.0.8:49803 - 63924 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000994649s 2025-08-13T20:38:22.890333025+00:00 stdout F [INFO] 10.217.0.8:40392 - 33949 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001184864s 2025-08-13T20:38:26.708978077+00:00 stdout F [INFO] 10.217.0.19:48385 - 4246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001006469s 2025-08-13T20:38:26.708978077+00:00 stdout F [INFO] 10.217.0.19:40797 - 41192 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001494034s 2025-08-13T20:38:35.230301728+00:00 stdout F [INFO] 10.217.0.19:35587 - 13762 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001269237s 2025-08-13T20:38:35.230301728+00:00 stdout F [INFO] 10.217.0.19:36692 - 31283 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001164714s 2025-08-13T20:38:41.688856539+00:00 stdout F [INFO] 10.217.0.62:43247 - 17071 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001293217s 2025-08-13T20:38:41.689246861+00:00 stdout F [INFO] 10.217.0.62:48262 - 45415 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001867384s 2025-08-13T20:38:49.075275340+00:00 stdout F [INFO] 10.217.0.8:48946 - 32235 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001550515s 2025-08-13T20:38:49.075275340+00:00 stdout F [INFO] 10.217.0.8:51973 - 5060 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000840614s 2025-08-13T20:38:49.075678612+00:00 stdout F [INFO] 10.217.0.8:58876 - 13836 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000530995s 2025-08-13T20:38:49.076039632+00:00 stdout F [INFO] 10.217.0.8:36573 - 30703 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000560596s 2025-08-13T20:38:56.362727357+00:00 stdout F [INFO] 10.217.0.64:36222 - 957 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001191765s 2025-08-13T20:38:56.362727357+00:00 stdout F [INFO] 10.217.0.64:34160 - 60776 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001621137s 2025-08-13T20:38:56.363249612+00:00 stdout F [INFO] 10.217.0.64:49956 - 6395 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00106333s 2025-08-13T20:38:56.363525860+00:00 stdout F [INFO] 10.217.0.64:34527 - 596 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001017109s 2025-08-13T20:39:05.244100877+00:00 stdout F [INFO] 10.217.0.19:41309 - 38781 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001548515s 2025-08-13T20:39:05.248323839+00:00 stdout F [INFO] 10.217.0.19:39054 - 64832 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001515684s 2025-08-13T20:39:05.727542485+00:00 stdout F [INFO] 10.217.0.45:51846 - 56137 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000750081s 2025-08-13T20:39:05.727619187+00:00 stdout F [INFO] 10.217.0.45:33209 - 27535 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000454923s 2025-08-13T20:39:11.675957299+00:00 stdout F [INFO] 10.217.0.62:59504 - 55315 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001278777s 2025-08-13T20:39:11.675957299+00:00 stdout F [INFO] 10.217.0.62:53199 - 51466 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001345839s 2025-08-13T20:39:22.886569621+00:00 stdout F [INFO] 10.217.0.8:44293 - 63474 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002724538s 2025-08-13T20:39:22.886569621+00:00 stdout F [INFO] 10.217.0.8:45409 - 47820 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00310802s 2025-08-13T20:39:22.888062534+00:00 stdout F [INFO] 10.217.0.8:46886 - 26202 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001225315s 2025-08-13T20:39:22.889389783+00:00 stdout F [INFO] 10.217.0.8:34919 - 32205 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000997758s 2025-08-13T20:39:33.486891188+00:00 stdout F [INFO] 10.217.0.62:38465 - 36749 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003028897s 2025-08-13T20:39:33.487008922+00:00 stdout F [INFO] 10.217.0.62:59209 - 25765 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00138855s 2025-08-13T20:39:33.520351993+00:00 stdout F [INFO] 10.217.0.62:54424 - 53500 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002620206s 2025-08-13T20:39:33.520574099+00:00 stdout F [INFO] 10.217.0.62:34405 - 52422 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002562044s 2025-08-13T20:39:34.929199110+00:00 stdout F [INFO] 10.217.0.19:59832 - 28710 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105181s 2025-08-13T20:39:34.929293403+00:00 stdout F [INFO] 10.217.0.19:52613 - 8655 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001231076s 2025-08-13T20:39:35.238379334+00:00 stdout F [INFO] 10.217.0.19:53361 - 39733 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001116923s 2025-08-13T20:39:35.238379334+00:00 stdout F [INFO] 10.217.0.19:40427 - 35608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000605167s 2025-08-13T20:39:36.328013239+00:00 stdout F [INFO] 10.217.0.19:37534 - 9463 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000661189s 2025-08-13T20:39:36.328494453+00:00 stdout F [INFO] 10.217.0.19:40881 - 3299 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001098372s 2025-08-13T20:39:38.315116047+00:00 stdout F [INFO] 10.217.0.62:36946 - 5637 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001194124s 2025-08-13T20:39:38.315116047+00:00 stdout F [INFO] 10.217.0.62:36687 - 12833 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001436381s 2025-08-13T20:39:41.673972293+00:00 stdout F [INFO] 10.217.0.62:52646 - 36352 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000531765s 2025-08-13T20:39:41.673972293+00:00 stdout F [INFO] 10.217.0.62:55867 - 16523 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001416061s 2025-08-13T20:39:42.023083748+00:00 stdout F [INFO] 10.217.0.19:34636 - 26806 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000735301s 2025-08-13T20:39:42.023083748+00:00 stdout F [INFO] 10.217.0.19:35424 - 55421 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000683849s 2025-08-13T20:39:42.111955400+00:00 stdout F [INFO] 10.217.0.19:54314 - 8861 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000667829s 2025-08-13T20:39:42.112000792+00:00 stdout F [INFO] 10.217.0.19:33866 - 17232 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000617808s 2025-08-13T20:39:42.548542797+00:00 stdout F [INFO] 10.217.0.62:46643 - 51865 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000583177s 2025-08-13T20:39:42.548542797+00:00 stdout F [INFO] 10.217.0.62:57527 - 7741 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001258096s 2025-08-13T20:39:45.396443433+00:00 stdout F [INFO] 10.217.0.62:36515 - 31345 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002234595s 2025-08-13T20:39:45.396822784+00:00 stdout F [INFO] 10.217.0.62:45617 - 47171 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002168872s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:42472 - 6754 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00345008s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:48733 - 49549 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002868173s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:49502 - 50789 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004282123s 2025-08-13T20:39:56.365445019+00:00 stdout F [INFO] 10.217.0.64:43991 - 12908 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003339476s 2025-08-13T20:40:01.060378034+00:00 stdout F [INFO] 10.217.0.19:47434 - 25551 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001119163s 2025-08-13T20:40:01.060378034+00:00 stdout F [INFO] 10.217.0.19:51981 - 9328 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00137947s 2025-08-13T20:40:01.096217857+00:00 stdout F [INFO] 10.217.0.19:39773 - 1053 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001982967s 2025-08-13T20:40:01.096217857+00:00 stdout F [INFO] 10.217.0.19:34232 - 3657 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002569844s 2025-08-13T20:40:05.319413503+00:00 stdout F [INFO] 10.217.0.19:53209 - 19770 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001161094s 2025-08-13T20:40:05.330872523+00:00 stdout F [INFO] 10.217.0.19:47054 - 30121 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000943507s 2025-08-13T20:40:05.767886341+00:00 stdout F [INFO] 10.217.0.45:37617 - 52692 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000704861s 2025-08-13T20:40:05.767886341+00:00 stdout F [INFO] 10.217.0.45:36136 - 29941 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000623578s 2025-08-13T20:40:08.338925635+00:00 stdout F [INFO] 10.217.0.19:40398 - 2296 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000933276s 2025-08-13T20:40:08.338925635+00:00 stdout F [INFO] 10.217.0.19:37518 - 13823 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001020039s 2025-08-13T20:40:08.364254426+00:00 stdout F [INFO] 10.217.0.19:44262 - 12027 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000547166s 2025-08-13T20:40:08.364254426+00:00 stdout F [INFO] 10.217.0.19:52854 - 44366 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000602588s 2025-08-13T20:40:08.482902576+00:00 stdout F [INFO] 10.217.0.19:46869 - 62396 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000638609s 2025-08-13T20:40:08.482902576+00:00 stdout F [INFO] 10.217.0.19:59250 - 28552 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966127s 2025-08-13T20:40:11.682310416+00:00 stdout F [INFO] 10.217.0.62:34923 - 63174 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000663609s 2025-08-13T20:40:11.682310416+00:00 stdout F [INFO] 10.217.0.62:48099 - 56805 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001330308s 2025-08-13T20:40:22.885384643+00:00 stdout F [INFO] 10.217.0.8:34635 - 60579 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002014208s 2025-08-13T20:40:22.885384643+00:00 stdout F [INFO] 10.217.0.8:50611 - 3878 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000949757s 2025-08-13T20:40:22.887684909+00:00 stdout F [INFO] 10.217.0.8:50782 - 41587 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000613388s 2025-08-13T20:40:22.888213085+00:00 stdout F [INFO] 10.217.0.8:60131 - 22403 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002481611s 2025-08-13T20:40:31.249961554+00:00 stdout F [INFO] 10.217.0.73:56729 - 39008 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001197004s 2025-08-13T20:40:31.249961554+00:00 stdout F [INFO] 10.217.0.73:50058 - 34761 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001338929s 2025-08-13T20:40:35.247958776+00:00 stdout F [INFO] 10.217.0.19:32839 - 29831 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002160112s 2025-08-13T20:40:35.247958776+00:00 stdout F [INFO] 10.217.0.19:45865 - 28487 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002171683s 2025-08-13T20:40:41.677077336+00:00 stdout F [INFO] 10.217.0.62:39935 - 29145 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001932495s 2025-08-13T20:40:41.677077336+00:00 stdout F [INFO] 10.217.0.62:44261 - 57272 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002200554s 2025-08-13T20:40:45.955422642+00:00 stdout F [INFO] 10.217.0.19:49738 - 38120 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000867465s 2025-08-13T20:40:45.955559056+00:00 stdout F [INFO] 10.217.0.19:38158 - 58947 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880456s 2025-08-13T20:40:56.365382803+00:00 stdout F [INFO] 10.217.0.64:40460 - 52275 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004399127s 2025-08-13T20:40:56.365452485+00:00 stdout F [INFO] 10.217.0.64:55259 - 53324 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003396638s 2025-08-13T20:40:56.365603599+00:00 stdout F [INFO] 10.217.0.64:41027 - 3240 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003673406s 2025-08-13T20:40:56.366382632+00:00 stdout F [INFO] 10.217.0.64:40992 - 1845 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005853039s 2025-08-13T20:41:03.402012231+00:00 stdout F [INFO] 10.217.0.82:36224 - 24973 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005070816s 2025-08-13T20:41:03.402012231+00:00 stdout F [INFO] 10.217.0.82:58091 - 39129 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005111768s 2025-08-13T20:41:03.942112441+00:00 stdout F [INFO] 10.217.0.82:45684 - 38759 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.002796691s 2025-08-13T20:41:03.942412880+00:00 stdout F [INFO] 10.217.0.82:60802 - 56082 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000828994s 2025-08-13T20:41:05.259690387+00:00 stdout F [INFO] 10.217.0.19:54024 - 37942 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001103122s 2025-08-13T20:41:05.259690387+00:00 stdout F [INFO] 10.217.0.19:55781 - 37536 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000913356s 2025-08-13T20:41:05.828109685+00:00 stdout F [INFO] 10.217.0.45:58352 - 47304 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000848504s 2025-08-13T20:41:05.828612010+00:00 stdout F [INFO] 10.217.0.45:37579 - 6043 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000965248s 2025-08-13T20:41:11.678751738+00:00 stdout F [INFO] 10.217.0.62:37982 - 19302 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00105337s 2025-08-13T20:41:11.679052437+00:00 stdout F [INFO] 10.217.0.62:36701 - 22169 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001077221s 2025-08-13T20:41:22.888874237+00:00 stdout F [INFO] 10.217.0.8:40546 - 41194 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002784941s 2025-08-13T20:41:22.888976080+00:00 stdout F [INFO] 10.217.0.8:49312 - 22262 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00451814s 2025-08-13T20:41:22.890701110+00:00 stdout F [INFO] 10.217.0.8:35622 - 3186 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001110432s 2025-08-13T20:41:22.893256974+00:00 stdout F [INFO] 10.217.0.8:46249 - 43292 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001048781s 2025-08-13T20:41:33.614180660+00:00 stdout F [INFO] 10.217.0.19:34045 - 59058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003720818s 2025-08-13T20:41:33.614180660+00:00 stdout F [INFO] 10.217.0.19:42856 - 33590 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004016166s 2025-08-13T20:41:35.258012771+00:00 stdout F [INFO] 10.217.0.19:47082 - 44019 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004036316s 2025-08-13T20:41:35.258012771+00:00 stdout F [INFO] 10.217.0.19:53023 - 62739 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004330375s 2025-08-13T20:41:41.687703349+00:00 stdout F [INFO] 10.217.0.62:59948 - 65257 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003153271s 2025-08-13T20:41:41.687703349+00:00 stdout F [INFO] 10.217.0.62:46724 - 35067 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00419401s 2025-08-13T20:41:55.588870352+00:00 stdout F [INFO] 10.217.0.19:60199 - 65263 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002129141s 2025-08-13T20:41:55.588870352+00:00 stdout F [INFO] 10.217.0.19:37883 - 24039 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003023257s 2025-08-13T20:41:56.366333937+00:00 stdout F [INFO] 10.217.0.64:45373 - 9031 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000812273s 2025-08-13T20:41:56.366770229+00:00 stdout F [INFO] 10.217.0.64:49602 - 22957 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001103102s 2025-08-13T20:41:56.367114399+00:00 stdout F [INFO] 10.217.0.64:57192 - 51247 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001530174s 2025-08-13T20:41:56.369267911+00:00 stdout F [INFO] 10.217.0.64:43691 - 12648 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001527594s 2025-08-13T20:42:05.249404267+00:00 stdout F [INFO] 10.217.0.19:56767 - 13605 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001139273s 2025-08-13T20:42:05.249404267+00:00 stdout F [INFO] 10.217.0.19:42307 - 25762 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002037729s 2025-08-13T20:42:05.884957991+00:00 stdout F [INFO] 10.217.0.45:34593 - 23662 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001245216s 2025-08-13T20:42:05.886252038+00:00 stdout F [INFO] 10.217.0.45:41209 - 48819 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002737289s 2025-08-13T20:42:10.672284590+00:00 stdout F [INFO] 10.217.0.19:36729 - 28054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000588907s 2025-08-13T20:42:10.674088373+00:00 stdout F [INFO] 10.217.0.19:43538 - 27639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001673758s 2025-08-13T20:42:11.682824284+00:00 stdout F [INFO] 10.217.0.62:47919 - 6529 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000978938s 2025-08-13T20:42:11.682824284+00:00 stdout F [INFO] 10.217.0.62:48501 - 2072 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001006219s 2025-08-13T20:42:22.889244705+00:00 stdout F [INFO] 10.217.0.8:57028 - 4650 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003507911s 2025-08-13T20:42:22.889244705+00:00 stdout F [INFO] 10.217.0.8:55542 - 40045 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001010219s 2025-08-13T20:42:22.890900792+00:00 stdout F [INFO] 10.217.0.8:51800 - 58877 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001256136s 2025-08-13T20:42:22.891156140+00:00 stdout F [INFO] 10.217.0.8:58299 - 14903 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001578475s 2025-08-13T20:42:35.276591715+00:00 stdout F [INFO] 10.217.0.19:42163 - 4628 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003242554s 2025-08-13T20:42:35.276591715+00:00 stdout F [INFO] 10.217.0.19:43475 - 45715 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004115779s 2025-08-13T20:42:43.641185657+00:00 stdout F [INFO] SIGTERM: Shutting down servers then terminating 2025-08-13T20:42:43.648934750+00:00 stdout F [INFO] plugin/health: Going into lameduck mode for 20s ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000755000175000017500000000000015073043234032776 5ustar zuulzuul././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000202015073043234032772 0ustar zuulzuul2025-10-13T00:14:59.713467033+00:00 stderr F W1013 00:14:59.713233 1 deprecated.go:66] 2025-10-13T00:14:59.713467033+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:14:59.713467033+00:00 stderr F 2025-10-13T00:14:59.713467033+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:14:59.713467033+00:00 stderr F 2025-10-13T00:14:59.713467033+00:00 stderr F =============================================== 2025-10-13T00:14:59.713467033+00:00 stderr F 2025-10-13T00:14:59.714158593+00:00 stderr F I1013 00:14:59.714119 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:14:59.714216535+00:00 stderr F I1013 00:14:59.714193 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:14:59.715011759+00:00 stderr F I1013 00:14:59.714976 1 kube-rbac-proxy.go:395] Starting TCP socket on :9154 2025-10-13T00:14:59.715639098+00:00 stderr F I1013 00:14:59.715608 1 kube-rbac-proxy.go:402] Listening securely on :9154 ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000222515073043234033001 0ustar zuulzuul2025-08-13T19:59:22.553763903+00:00 stderr F W0813 19:59:22.553003 1 deprecated.go:66] 2025-08-13T19:59:22.553763903+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.553763903+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.553763903+00:00 stderr F =============================================== 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.658394206+00:00 stderr F I0813 19:59:22.658018 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:22.658394206+00:00 stderr F I0813 19:59:22.658104 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:22.714929947+00:00 stderr F I0813 19:59:22.702562 1 kube-rbac-proxy.go:395] Starting TCP socket on :9154 2025-08-13T19:59:22.796718549+00:00 stderr F I0813 19:59:22.796259 1 kube-rbac-proxy.go:402] Listening securely on :9154 2025-08-13T20:42:42.272284042+00:00 stderr F I0813 20:42:42.271402 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043232033052 5ustar zuulzuul././@LongLink0000644000000000000000000000031200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000755000175000017500000000000015073043232033052 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000000134015073043232033052 0ustar 
zuulzuul2025-08-13T20:15:02.232689507+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/olm-operator-heap-hqmzq" 2025-08-13T20:15:02.544616319+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/catalog-operator-heap-bk2n8" 2025-08-13T20:15:02.561204695+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/catalog-operator-heap-pc9j9" 2025-08-13T20:15:02.585948214+00:00 stderr F time="2025-08-13T20:15:02Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/olm-operator-heap-7svh2" ././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000031700000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000032400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000227415073043233033006 0ustar zuulzuul2025-10-13T00:14:59.054658594+00:00 stderr F W1013 00:14:59.049616 1 deprecated.go:66] 2025-10-13T00:14:59.054658594+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:14:59.054658594+00:00 stderr F 2025-10-13T00:14:59.054658594+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:14:59.054658594+00:00 stderr F 2025-10-13T00:14:59.054658594+00:00 stderr F =============================================== 2025-10-13T00:14:59.054658594+00:00 stderr F 2025-10-13T00:14:59.054658594+00:00 stderr F I1013 00:14:59.049743 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-10-13T00:14:59.054658594+00:00 stderr F I1013 00:14:59.050546 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:14:59.054658594+00:00 stderr F I1013 00:14:59.050585 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:14:59.054658594+00:00 stderr F I1013 00:14:59.051119 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-10-13T00:14:59.054658594+00:00 stderr F I1013 00:14:59.051637 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 ././@LongLink0000644000000000000000000000032400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000250115073043233032777 0ustar zuulzuul2025-08-13T19:59:12.246877885+00:00 stderr F W0813 19:59:12.142202 1 deprecated.go:66] 2025-08-13T19:59:12.246877885+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F =============================================== 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F I0813 19:59:12.224140 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:12.550927052+00:00 stderr F I0813 19:59:12.536614 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:12.551285632+00:00 stderr F I0813 19:59:12.551190 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:12.558915680+00:00 stderr F I0813 19:59:12.558720 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:59:12.724634104+00:00 stderr F I0813 19:59:12.644589 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:43.553128358+00:00 stderr F I0813 20:42:43.552988 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043233032777 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000013222215073043233033003 0ustar zuulzuul2025-10-13T00:14:58.184134191+00:00 stderr F I1013 
00:14:58.183557 1 start.go:52] Version: 4.16.0 (Raw: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty, Hash: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-10-13T00:14:58.188935155+00:00 stderr F I1013 00:14:58.186141 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:58.188935155+00:00 stderr F I1013 00:14:58.186848 1 metrics.go:100] Registering Prometheus metrics 2025-10-13T00:14:58.188935155+00:00 stderr F I1013 00:14:58.186934 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-10-13T00:14:58.270365815+00:00 stderr F I1013 00:14:58.269607 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config... 2025-10-13T00:20:16.914050317+00:00 stderr F I1013 00:20:16.913461 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config 2025-10-13T00:20:16.930128948+00:00 stderr F I1013 00:20:16.930022 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-10-13T00:20:16.947866746+00:00 stderr F I1013 00:20:16.947719 1 start.go:127] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:20:16.947866746+00:00 stderr F I1013 00:20:16.947772 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", 
"DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:20:17.229407976+00:00 stderr F I1013 00:20:17.229344 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-10-13T00:20:17.252277138+00:00 stderr F I1013 00:20:17.252201 1 operator.go:376] Starting MachineConfigOperator 2025-10-13T00:20:31.833823926+00:00 stderr F E1013 00:20:31.833693 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-10-13T00:20:31.849981179+00:00 stderr F I1013 00:20:31.849898 1 event.go:364] Event(v1.ObjectReference{Kind:"", Namespace:"openshift-machine-config-operator", Name:"machine-config", UID:"7f2912cb-6bc0-4410-a0a8-5ee7300fa84b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorDegraded: RequiredPoolsFailed' Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused 2025-10-13T00:20:39.146837628+00:00 stderr F E1013 00:20:39.146725 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:40.161102477+00:00 stderr F E1013 00:20:40.160884 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:41.161227590+00:00 stderr F E1013 00:20:41.160424 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:42.157312730+00:00 stderr F E1013 00:20:42.156661 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:43.162355193+00:00 stderr F E1013 00:20:43.161014 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:44.163602518+00:00 stderr F E1013 00:20:44.162066 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:45.160491201+00:00 stderr F E1013 00:20:45.160421 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:46.162228901+00:00 stderr F E1013 00:20:46.162152 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:47.156415907+00:00 stderr F E1013 00:20:47.155432 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:48.157209079+00:00 stderr F E1013 00:20:48.157134 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:49.160240885+00:00 stderr F E1013 00:20:49.160147 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:50.167517500+00:00 stderr F E1013 00:20:50.167444 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:51.160119012+00:00 stderr F E1013 00:20:51.159729 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:52.160157603+00:00 stderr F E1013 00:20:52.158527 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:53.156234834+00:00 stderr F E1013 00:20:53.156169 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:54.160635246+00:00 stderr F E1013 00:20:54.158756 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:55.162892590+00:00 stderr F E1013 00:20:55.160273 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:56.162938432+00:00 stderr F E1013 00:20:56.162864 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:57.161778889+00:00 stderr F E1013 00:20:57.161711 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:58.161148280+00:00 stderr F E1013 00:20:58.161068 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:20:59.160155983+00:00 stderr F E1013 00:20:59.159380 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:00.156572093+00:00 stderr F E1013 00:21:00.156222 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:01.159125094+00:00 stderr F E1013 00:21:01.158746 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:02.154469152+00:00 stderr F E1013 00:21:02.153911 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:03.159277377+00:00 stderr F E1013 00:21:03.158790 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:04.155401009+00:00 stderr F E1013 00:21:04.154820 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:05.157401614+00:00 stderr F E1013 00:21:05.157029 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:06.158049433+00:00 stderr F E1013 00:21:06.157950 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:07.156911281+00:00 stderr F E1013 00:21:07.156753 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:08.156973763+00:00 stderr F E1013 00:21:08.156107 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:09.157468366+00:00 stderr F E1013 00:21:09.156860 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:10.163962116+00:00 stderr F E1013 00:21:10.161664 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:11.163064944+00:00 stderr F E1013 00:21:11.162966 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:12.156439297+00:00 stderr F E1013 00:21:12.156367 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:13.160113679+00:00 stderr F E1013 00:21:13.159173 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:14.156364900+00:00 stderr F E1013 00:21:14.155242 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:15.167473221+00:00 stderr F E1013 00:21:15.166060 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:16.162110298+00:00 stderr F E1013 00:21:16.161308 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:17.161277877+00:00 stderr F E1013 00:21:17.160706 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:18.165649087+00:00 stderr F E1013 00:21:18.165568 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:19.157609252+00:00 stderr F E1013 00:21:19.157160 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:20.159311201+00:00 stderr F E1013 00:21:20.158398 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:21.168411257+00:00 stderr F E1013 00:21:21.168355 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:22.156157270+00:00 stderr F E1013 00:21:22.156107 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:23.156225332+00:00 stderr F E1013 00:21:23.155922 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:24.157994231+00:00 stderr F E1013 00:21:24.157461 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:25.162429583+00:00 stderr F E1013 00:21:25.162358 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:26.169377082+00:00 stderr F E1013 00:21:26.169256 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:27.155158971+00:00 stderr F E1013 00:21:27.155087 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:28.165860501+00:00 stderr F E1013 00:21:28.165786 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:29.155264537+00:00 stderr F E1013 00:21:29.155193 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:30.162709458+00:00 stderr F E1013 00:21:30.161288 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:31.157468059+00:00 stderr F E1013 00:21:31.154656 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:32.158578701+00:00 stderr F E1013 00:21:32.157859 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:33.163570017+00:00 stderr F E1013 00:21:33.161514 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:34.168413378+00:00 stderr F E1013 00:21:34.168318 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:35.157009904+00:00 stderr F E1013 00:21:35.156425 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:36.157716355+00:00 stderr F E1013 00:21:36.157318 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:37.157832829+00:00 stderr F E1013 00:21:37.157523 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:38.157222215+00:00 stderr F E1013 00:21:38.157123 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:39.159962071+00:00 stderr F E1013 00:21:39.157812 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:40.164305100+00:00 stderr F E1013 00:21:40.163426 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:41.159035499+00:00 stderr F E1013 00:21:41.158957 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:42.154705665+00:00 stderr F E1013 00:21:42.154082 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:43.156198867+00:00 stderr F E1013 00:21:43.156142 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:44.159155529+00:00 stderr F E1013 00:21:44.158223 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:45.157000091+00:00 stderr F E1013 00:21:45.156905 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:46.155803011+00:00 stderr F E1013 00:21:46.155701 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:47.157603281+00:00 stderr F E1013 00:21:47.157515 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. 
Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:48.157412927+00:00 stderr F E1013 00:21:48.156431 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:49.166301418+00:00 stderr F E1013 00:21:49.166247 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:50.158751228+00:00 stderr F E1013 00:21:50.157936 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:51.159081688+00:00 stderr F E1013 00:21:51.157442 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:52.156452059+00:00 stderr F E1013 00:21:52.156379 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:53.158760753+00:00 stderr F E1013 00:21:53.158636 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:54.159953278+00:00 stderr F E1013 00:21:54.158531 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:55.159864366+00:00 stderr F E1013 00:21:55.159773 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:56.160026012+00:00 stderr F E1013 00:21:56.159660 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:57.156566411+00:00 stderr F E1013 00:21:57.156473 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:58.158120225+00:00 stderr F E1013 00:21:58.158037 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:21:59.156345907+00:00 stderr F E1013 00:21:59.156064 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:00.163196703+00:00 stderr F E1013 00:22:00.162461 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:01.156399992+00:00 stderr F E1013 00:22:01.156296 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:02.169169567+00:00 stderr F E1013 00:22:02.164156 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:03.160686130+00:00 stderr F E1013 00:22:03.160319 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:04.157485337+00:00 stderr F E1013 00:22:04.155961 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:05.159384080+00:00 stderr F E1013 00:22:05.159309 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:06.155168238+00:00 stderr F E1013 00:22:06.154867 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:07.161146041+00:00 stderr F E1013 00:22:07.160848 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:08.161872392+00:00 stderr F E1013 00:22:08.161706 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:09.157258730+00:00 stderr F E1013 00:22:09.157167 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:16.930272199+00:00 stderr F E1013 00:22:16.930164 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:56.168812384+00:00 stderr F E1013 00:22:56.168032 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:57.162069791+00:00 stderr F E1013 00:22:57.161994 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:58.156838281+00:00 stderr F E1013 00:22:58.156775 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:22:59.155644333+00:00 stderr F E1013 00:22:59.155275 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:00.155705859+00:00 stderr F E1013 00:23:00.155367 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:01.158127772+00:00 stderr F E1013 00:23:01.157607 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:02.156527662+00:00 stderr F E1013 00:23:02.156447 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:03.159599002+00:00 stderr F E1013 00:23:03.159487 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:04.158370913+00:00 stderr F E1013 00:23:04.158251 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:05.158914124+00:00 stderr F E1013 00:23:05.158828 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:06.157908821+00:00 stderr F E1013 00:23:06.157757 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:07.159206521+00:00 stderr F E1013 00:23:07.158904 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:08.156704987+00:00 stderr F E1013 00:23:08.156289 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:09.161681431+00:00 stderr F E1013 00:23:09.161580 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:10.156165262+00:00 stderr F E1013 00:23:10.156094 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:11.158589824+00:00 stderr F E1013 00:23:11.158511 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:12.161002367+00:00 stderr F E1013 00:23:12.160893 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:13.156091145+00:00 stderr F E1013 00:23:13.155579 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:14.157393535+00:00 stderr F E1013 00:23:14.157052 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:15.157446602+00:00 stderr F E1013 00:23:15.157305 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:16.157372415+00:00 stderr F E1013 00:23:16.157290 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:17.160216050+00:00 stderr F E1013 00:23:17.160135 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:18.155187684+00:00 stderr F E1013 00:23:18.155072 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:19.155223620+00:00 stderr F E1013 00:23:19.155161 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:20.161054758+00:00 stderr F E1013 00:23:20.160725 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:21.161884606+00:00 stderr F E1013 00:23:21.161820 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:22.156624315+00:00 stderr F E1013 00:23:22.156163 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:23.157791443+00:00 stderr F E1013 00:23:23.157707 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:24.156377999+00:00 stderr F E1013 00:23:24.155762 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:25.153768230+00:00 stderr F E1013 00:23:25.153501 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:26.155307209+00:00 stderr F E1013 00:23:26.155020 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:27.168483411+00:00 stderr F E1013 00:23:27.168297 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:28.155354850+00:00 stderr F E1013 00:23:28.155209 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:29.158519053+00:00 stderr F E1013 00:23:29.158448 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:30.159833156+00:00 stderr F E1013 00:23:30.159746 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:31.158626317+00:00 stderr F E1013 00:23:31.158535 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:32.158805587+00:00 stderr F E1013 00:23:32.158727 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:33.158973897+00:00 stderr F E1013 00:23:33.158650 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:34.154969711+00:00 stderr F E1013 00:23:34.154866 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:35.159704537+00:00 stderr F E1013 00:23:35.159645 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:36.156005769+00:00 stderr F E1013 00:23:36.155932 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:37.156795897+00:00 stderr F E1013 00:23:37.156534 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:38.157717848+00:00 stderr F E1013 00:23:38.157372 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:39.156471347+00:00 stderr F E1013 00:23:39.156425 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:40.157941554+00:00 stderr F E1013 00:23:40.157903 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:41.156733926+00:00 stderr F E1013 00:23:41.156635 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:42.158295554+00:00 stderr F E1013 00:23:42.158227 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:43.175972941+00:00 stderr F E1013 00:23:43.175931 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:44.143153133+00:00 stderr F I1013 00:23:44.143091 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-10-13T00:23:44.154782837+00:00 stderr F E1013 00:23:44.154446 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:45.159635296+00:00 stderr F E1013 00:23:45.159318 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:46.153897990+00:00 stderr F E1013 00:23:46.153834 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:47.155000665+00:00 stderr F E1013 00:23:47.154795 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:48.156410300+00:00 stderr F E1013 00:23:48.156344 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:49.157092273+00:00 stderr F E1013 00:23:49.157007 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:50.155491844+00:00 stderr F E1013 00:23:50.154550 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:51.164300074+00:00 stderr F E1013 00:23:51.164238 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:52.172799036+00:00 stderr F E1013 00:23:52.172310 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-10-13T00:23:53.168461130+00:00 stderr F E1013 00:23:53.167927 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" ././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000003771515073043233033016 0ustar zuulzuul2025-08-13T19:59:03.793003276+00:00 stderr F I0813 19:59:03.792203 1 start.go:52] Version: 4.16.0 (Raw: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty, Hash: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:03.810241137+00:00 stderr F I0813 19:59:03.806593 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:03.811597626+00:00 stderr F I0813 19:59:03.811551 1 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:59:03.811901004+00:00 stderr F I0813 19:59:03.811879 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:59:05.606955533+00:00 stderr F I0813 19:59:05.602330 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config... 
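The MachineConfigPool sync errors captured above (master and worker both reporting degraded: true with ready 0) describe state that can be inspected directly on the same cluster. A minimal, hypothetical check, assuming cluster-admin access and an API server that is actually reachable (the connection-refused lines elsewhere in these logs suggest it was not at capture time):

  # Pool summary; the DEGRADED and READYMACHINECOUNT columns mirror the values quoted in the errors above
  oc get machineconfigpool
  # Conditions explaining why the master pool is marked degraded
  oc describe machineconfigpool master

These commands are illustrative and are not part of the captured job output.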
2025-08-13T19:59:06.352317300+00:00 stderr F I0813 19:59:06.351645 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config 2025-08-13T19:59:06.624953102+00:00 stderr F I0813 19:59:06.621000 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:07.641337833+00:00 stderr F I0813 19:59:07.604244 1 start.go:127] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:07.641337833+00:00 stderr F I0813 19:59:07.605338 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", 
"MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:17.879134960+00:00 stderr F I0813 19:59:17.868467 1 trace.go:236] Trace[2088585093]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:07.208) (total time: 10647ms): 2025-08-13T19:59:17.879134960+00:00 stderr F Trace[2088585093]: ---"Objects listed" error: 10625ms (19:59:17.833) 2025-08-13T19:59:17.879134960+00:00 stderr F Trace[2088585093]: [10.647548997s] [10.647548997s] END 2025-08-13T19:59:17.879134960+00:00 stderr F I0813 19:59:17.878865 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T19:59:18.050386191+00:00 stderr F I0813 19:59:18.044491 1 operator.go:376] Starting MachineConfigOperator 2025-08-13T20:02:26.192144382+00:00 stderr F I0813 20:02:26.175662 1 sync.go:419] Detecting changes in merged-trusted-image-registry-ca, creating patch 2025-08-13T20:02:26.192144382+00:00 stderr F I0813 20:02:26.191605 1 sync.go:424] JSONPATCH: 2025-08-13T20:02:26.192144382+00:00 stderr F {"data":{"image-registry.openshift-image-registry.svc..5000":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n","image-registry.openshift-image-registry.svc.cluster.local..5000":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"namespace":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:03:15.119052825+00:00 stderr F E0813 20:03:15.118870 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.125557835+00:00 stderr F E0813 20:04:15.124918 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.128905567+00:00 stderr F E0813 20:05:15.128095 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.763145649+00:00 stderr F E0813 20:05:15.763084 1 operator.go:448] error syncing progressing status: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:24.640926653+00:00 stderr F I0813 20:06:24.640737 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T20:09:39.828705851+00:00 stderr F I0813 20:09:39.828618 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T20:42:28.362519840+00:00 stderr F E0813 20:42:28.360448 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:29.356362823+00:00 stderr F E0813 20:42:29.352075 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 
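Alongside the registry CA patch above, this log also records repeated failures to refresh the operator's leader-election lease and to update its ClusterOperator status because the API endpoint at 10.217.4.1:443 refused connections. Assuming the API server later becomes reachable, a hypothetical pair of commands to confirm both objects recovered could be:

  # Lease the operator could not refresh during the outage
  oc get lease machine-config -n openshift-machine-config-operator -o yaml
  # Progressing/Degraded conditions on the machine-config ClusterOperator
  oc get clusteroperator machine-config -o yaml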
2025-08-13T20:42:30.358835494+00:00 stderr F E0813 20:42:30.355618 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:31.358397922+00:00 stderr F E0813 20:42:31.356276 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.362554222+00:00 stderr F E0813 20:42:32.361311 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:33.358700581+00:00 stderr F E0813 20:42:33.358518 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:34.357667441+00:00 stderr F E0813 20:42:34.355686 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:35.354656805+00:00 stderr F E0813 20:42:35.353969 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:44.634028771+00:00 stderr F I0813 20:42:44.633156 1 helpers.go:93] Shutting down due to: terminated 2025-08-13T20:42:44.634028771+00:00 stderr F I0813 20:42:44.633964 1 helpers.go:96] Context cancelled 2025-08-13T20:42:44.637659186+00:00 stderr F I0813 20:42:44.637441 1 operator.go:386] Shutting down MachineConfigOperator 2025-08-13T20:42:44.639461528+00:00 stderr F E0813 20:42:44.638840 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.641299241+00:00 stderr F I0813 20:42:44.641137 1 start.go:150] Stopped leading. Terminating. 
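The final errors in this 0.log show the operator unable to stamp the bootimages ConfigMap because the rendered MachineConfig named in the messages no longer exists. A hypothetical way to check which rendered configs are present and which one each pool references, assuming access to the same cluster (the jsonpath fields are the standard MachineConfigPool spec/status configuration references):

  # List rendered master configs to see whether the object named in the errors exists
  oc get machineconfig | grep rendered-master
  # Rendered config the master pool is asked to apply versus the one it last completed
  oc get machineconfigpool master -o jsonpath='{.spec.configuration.name}{"\n"}{.status.configuration.name}{"\n"}'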
././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015073043232033042 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015073043232033042 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000002334315073043232033051 0ustar zuulzuul2025-08-13T19:59:38.816516836+00:00 stderr F W0813 19:59:38.811105 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T19:59:42.551613375+00:00 stderr F I0813 19:59:42.550430 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:43.926057823+00:00 stderr F I0813 19:59:43.924317 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-08-13T19:59:49.044814837+00:00 stderr F I0813 19:59:49.018356 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:49.044814837+00:00 stderr F W0813 19:59:49.041495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:49.044814837+00:00 stderr F W0813 19:59:49.041514 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:49.353686632+00:00 stderr F I0813 19:59:49.352555 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-737639254/tls.crt::/tmp/serving-cert-737639254/tls.key" 2025-08-13T19:59:49.357611804+00:00 stderr F I0813 19:59:49.356070 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.404952 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.405109 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.406528 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.406576 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:49.411376676+00:00 stderr F I0813 19:59:49.409598 1 secure_serving.go:213] Serving securely on [::]:17698 2025-08-13T19:59:49.481618619+00:00 stderr F I0813 19:59:49.410082 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:49.488757592+00:00 stderr F I0813 19:59:49.481955 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:49.695002670+00:00 stderr F I0813 19:59:49.694746 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:49.696477542+00:00 stderr F E0813 19:59:49.695955 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.696477542+00:00 stderr F E0813 19:59:49.696059 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.734727193+00:00 stderr F I0813 19:59:49.722956 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:49.734727193+00:00 stderr F E0813 19:59:49.723053 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.736936305+00:00 stderr F E0813 19:59:49.735304 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.736936305+00:00 stderr F E0813 19:59:49.735356 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.829455883+00:00 stderr F E0813 19:59:49.829356 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.850042510+00:00 stderr F E0813 19:59:49.849956 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.879030246+00:00 stderr F E0813 19:59:49.877548 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.891467700+00:00 stderr F E0813 19:59:49.891384 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.996736061+00:00 stderr F E0813 19:59:49.996679 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.996927636+00:00 stderr F E0813 19:59:49.996905 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.118993176+00:00 stderr F E0813 19:59:50.116700 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.167541430+00:00 stderr F E0813 19:59:50.166359 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.276515046+00:00 stderr F I0813 19:59:50.267147 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-08-13T19:59:50.293519981+00:00 stderr F E0813 19:59:50.278355 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.315617001+00:00 stderr F I0813 19:59:50.307588 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:50.489017894+00:00 stderr F E0813 19:59:50.488212 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.618445184+00:00 stderr F E0813 19:59:50.618021 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.147165276+00:00 stderr F E0813 19:59:51.144391 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.258646404+00:00 stderr F E0813 19:59:51.258287 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:55.970593550+00:00 stderr F I0813 19:59:55.968309 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-08-13T19:59:55.970593550+00:00 stderr F I0813 19:59:55.970408 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977334 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977375 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977385 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977448 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-08-13T19:59:55.994104840+00:00 stderr F I0813 19:59:55.991731 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-08-13T19:59:56.043528339+00:00 stderr F I0813 19:59:56.043238 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-08-13T19:59:56.043528339+00:00 stderr F I0813 19:59:56.043391 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-08-13T19:59:56.347940657+00:00 stderr F I0813 19:59:56.345926 1 base_controller.go:73] Caches are synced for check-endpoints 2025-08-13T19:59:56.347940657+00:00 stderr F I0813 19:59:56.346237 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2025-08-13T20:42:42.524890884+00:00 stderr F I0813 20:42:42.523195 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. ././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000001156015073043232033047 0ustar zuulzuul2025-10-13T00:15:01.690220849+00:00 stderr F W1013 00:15:01.688980 1 cmd.go:245] Using insecure, self-signed certificates 2025-10-13T00:15:02.533039971+00:00 stderr F I1013 00:15:02.531311 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:02.603744000+00:00 stderr F I1013 00:15:02.602884 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-10-13T00:15:03.332425001+00:00 stderr F I1013 00:15:03.329360 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:03.332425001+00:00 stderr F W1013 00:15:03.329390 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:03.332425001+00:00 stderr F W1013 00:15:03.329398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-10-13T00:15:03.345460862+00:00 stderr F I1013 00:15:03.345391 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2631851821/tls.crt::/tmp/serving-cert-2631851821/tls.key" 2025-10-13T00:15:03.349118481+00:00 stderr F I1013 00:15:03.348542 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:03.349118481+00:00 stderr F I1013 00:15:03.348637 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:03.349118481+00:00 stderr F I1013 00:15:03.348662 1 secure_serving.go:213] Serving securely on [::]:17698 2025-10-13T00:15:03.349118481+00:00 stderr F I1013 00:15:03.348689 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:03.349118481+00:00 stderr F I1013 00:15:03.348708 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:03.353284146+00:00 stderr F I1013 00:15:03.353245 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.353284146+00:00 stderr F I1013 00:15:03.353263 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:03.353442551+00:00 stderr F I1013 00:15:03.353415 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.375534493+00:00 stderr F I1013 00:15:03.374972 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-10-13T00:15:03.455374915+00:00 stderr F I1013 00:15:03.453409 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:03.455374915+00:00 stderr F I1013 00:15:03.454849 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:03.455374915+00:00 stderr F I1013 00:15:03.454987 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.677405 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.677450 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.677520 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.677524 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.677528 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.677551 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.678004 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 
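This second container start (1.log) shows the same check-endpoints controller coming up cleanly about two months after the first run. Its probe results are normally surfaced as PodNetworkConnectivityCheck objects; assuming that API is installed on this cluster, a hypothetical query would be:

  # Connectivity check objects recorded by the network diagnostics components
  oc get podnetworkconnectivitycheck -n openshift-network-diagnostics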
2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.678049 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-10-13T00:15:03.678379927+00:00 stderr F I1013 00:15:03.678238 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-10-13T00:15:03.778774245+00:00 stderr F I1013 00:15:03.778704 1 base_controller.go:73] Caches are synced for check-endpoints 2025-10-13T00:15:03.778774245+00:00 stderr F I1013 00:15:03.778738 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... ././@LongLink0000644000000000000000000000022200000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015073043233033227 5ustar zuulzuul././@LongLink0000644000000000000000000000023600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015073043233033227 5ustar zuulzuul././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000137466515073043233033257 0ustar zuulzuul2025-08-13T19:57:38.998949389+00:00 stdout F 2025-08-13T19:57:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d86c2164-4a1e-4b53-8f29-3287db575df7 2025-08-13T19:57:39.063740029+00:00 stdout F 2025-08-13T19:57:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d86c2164-4a1e-4b53-8f29-3287db575df7 to /host/opt/cni/bin/ 2025-08-13T19:57:39.142894289+00:00 stderr F 2025-08-13T19:57:39Z [verbose] multus-daemon started 2025-08-13T19:57:39.142894289+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Readiness Indicator file check 2025-08-13T19:57:39.143093375+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Readiness Indicator file check done! 2025-08-13T19:57:39.150443065+00:00 stderr F I0813 19:57:39.150296 23104 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-08-13T19:57:39.155552761+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Waiting for certificate 2025-08-13T19:57:40.156328197+00:00 stderr F I0813 19:57:40.156164 23104 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-08-13T19:57:40.156925264+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Certificate found! 2025-08-13T19:57:40.158536691+00:00 stderr F 2025-08-13T19:57:40Z [verbose] server configured with chroot: /hostroot 2025-08-13T19:57:40.158536691+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Filtering pod watch for node "crc" 2025-08-13T19:57:40.264016993+00:00 stderr F 2025-08-13T19:57:40Z [verbose] API readiness check 2025-08-13T19:57:40.269915831+00:00 stderr F 2025-08-13T19:57:40Z [verbose] API readiness check done! 
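The kube-multus log that begins here shows the multus daemon copying its CNI binaries, obtaining a client certificate, and then serving the CNI ADD/DEL requests shown below, which delegate to the ovn-kubernetes cluster network. Two hypothetical commands for correlating this log with cluster state, assuming the standard openshift-multus layout seen in these artifacts:

  # Per-node multus pods whose logs record the CNI ADD/DEL traffic
  oc get pods -n openshift-multus -o wide
  # Each successful ADD also emits an AddedInterface event on the target pod
  oc get events -A --field-selector reason=AddedInterface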
2025-08-13T19:57:40.269915831+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2025-08-13T19:57:40.270096556+00:00 stderr F 2025-08-13T19:57:40Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2025-08-13T19:57:48.872958586+00:00 stderr F 2025-08-13T19:57:48Z [verbose] ADD starting CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"" 2025-08-13T19:57:49.433950375+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"" 2025-08-13T19:57:49.630465507+00:00 stderr F 2025-08-13T19:57:49Z [verbose] Add: openshift-marketplace:community-operators-k9qqb:ccdf38cf-634a-41a2-9c8b-74bb86af80a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ac543dfbb4577c1","mac":"9e:fb:45:69:5c:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:1d","sandbox":"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.29/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:49.630527319+00:00 stderr F 2025-08-13T19:57:49Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29251905-zmjv9:8500d7bd-50fb-4ca6-af41-b7a24cae43cd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8eb40cf57cd4084","mac":"aa:3f:6d:4c:4e:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:23","sandbox":"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.35/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:49.638874547+00:00 stderr F I0813 19:57:49.638647 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251905-zmjv9", UID:"8500d7bd-50fb-4ca6-af41-b7a24cae43cd", APIVersion:"v1", ResourceVersion:"27591", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.35/23] from ovn-kubernetes 2025-08-13T19:57:49.638874547+00:00 stderr F I0813 19:57:49.638760 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-k9qqb", UID:"ccdf38cf-634a-41a2-9c8b-74bb86af80a7", APIVersion:"v1", ResourceVersion:"27590", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.29/23] from ovn-kubernetes 2025-08-13T19:57:49.749688731+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request 
ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"" 2025-08-13T19:57:49.783585229+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"" 2025-08-13T19:57:50.109627859+00:00 stderr F 2025-08-13T19:57:50Z [verbose] ADD finished CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:fb:45:69:5c:25\",\"name\":\"ac543dfbb4577c1\"},{\"mac\":\"0a:58:0a:d9:00:1d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343\"}],\"ips\":[{\"address\":\"10.217.0.29/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:50.109852686+00:00 stderr P 2025-08-13T19:57:50Z [verbose] 2025-08-13T19:57:50.109924838+00:00 stderr P ADD finished CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:3f:6d:4c:4e:9e\",\"name\":\"8eb40cf57cd4084\"},{\"mac\":\"0a:58:0a:d9:00:23\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44\"}],\"ips\":[{\"address\":\"10.217.0.35/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:50.109956979+00:00 stderr F 2025-08-13T19:57:50.216443459+00:00 stderr F 2025-08-13T19:57:50Z [verbose] Add: openshift-marketplace:certified-operators-g4v97:bb917686-edfb-4158-86ad-6fce0abec64c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c30e71c46910d5","mac":"4a:8a:38:55:61:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:21","sandbox":"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.33/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:50.216899512+00:00 stderr F I0813 19:57:50.216754 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-g4v97", UID:"bb917686-edfb-4158-86ad-6fce0abec64c", APIVersion:"v1", ResourceVersion:"27585", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 
[10.217.0.33/23] from ovn-kubernetes 2025-08-13T19:57:50.257050749+00:00 stderr F 2025-08-13T19:57:50Z [verbose] Add: openshift-marketplace:redhat-operators-dcqzh:6db26b71-4e04-4688-a0c0-00e06e8c888d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fd8d1d12d982e02","mac":"62:c6:c2:53:ad:e3"},{"name":"eth0","mac":"0a:58:0a:d9:00:22","sandbox":"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.34/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:50.257381338+00:00 stderr F I0813 19:57:50.257318 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-dcqzh", UID:"6db26b71-4e04-4688-a0c0-00e06e8c888d", APIVersion:"v1", ResourceVersion:"27584", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.34/23] from ovn-kubernetes 2025-08-13T19:57:51.134684360+00:00 stderr F 2025-08-13T19:57:51Z [verbose] ADD finished CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:c6:c2:53:ad:e3\",\"name\":\"fd8d1d12d982e02\"},{\"mac\":\"0a:58:0a:d9:00:22\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43\"}],\"ips\":[{\"address\":\"10.217.0.34/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:51.134684360+00:00 stderr F 2025-08-13T19:57:51Z [verbose] ADD finished CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:8a:38:55:61:94\",\"name\":\"2c30e71c46910d5\"},{\"mac\":\"0a:58:0a:d9:00:21\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9\"}],\"ips\":[{\"address\":\"10.217.0.33/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:56.898496694+00:00 stderr P 2025-08-13T19:57:56Z [verbose] 2025-08-13T19:57:56.898613597+00:00 stderr P DEL starting CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"" 2025-08-13T19:57:56.898643558+00:00 stderr F 2025-08-13T19:57:56.900093550+00:00 stderr P 2025-08-13T19:57:56Z [verbose] 2025-08-13T19:57:56.900135291+00:00 stderr P Del: openshift-operator-lifecycle-manager:collect-profiles-29251905-zmjv9:8500d7bd-50fb-4ca6-af41-b7a24cae43cd:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:57:56.900171212+00:00 stderr F 2025-08-13T19:57:57.037192858+00:00 stderr P 2025-08-13T19:57:57Z [verbose] 2025-08-13T19:57:57.037266280+00:00 stderr P DEL finished CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"", result: "", err: 2025-08-13T19:57:57.037292021+00:00 stderr F 2025-08-13T19:58:54.366505502+00:00 stderr F 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff" Netns:"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-08-13T19:58:54.618147395+00:00 stderr P 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb" Netns:"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-08-13T19:58:54.618344581+00:00 stderr F 2025-08-13T19:58:54.735013927+00:00 stderr F 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0" Netns:"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-08-13T19:58:55.107383541+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a3a061a59b867b6","mac":"12:21:e7:30:c2:18"},{"name":"eth0","mac":"0a:58:0a:d9:00:3f","sandbox":"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.63/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.110313245+00:00 stderr F I0813 19:58:55.108594 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-controller-6df6df6b6b-58shh", UID:"297ab9b6-2186-4d5b-a952-2bfd59af63c4", APIVersion:"v1", ResourceVersion:"27254", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.63/23] from ovn-kubernetes 2025-08-13T19:58:55.227223477+00:00 stderr F 
2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff" Netns:"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:21:e7:30:c2:18\",\"name\":\"a3a061a59b867b6\"},{\"mac\":\"0a:58:0a:d9:00:3f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e\"}],\"ips\":[{\"address\":\"10.217.0.63/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.287557247+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368" Netns:"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"" 2025-08-13T19:58:55.309136562+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Netns:"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-08-13T19:58:55.316436250+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cb33d2fb758e44e","mac":"26:8b:b4:e3:af:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:15","sandbox":"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.21/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.317640615+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8" Netns:"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-08-13T19:58:55.318273593+00:00 stderr F I0813 19:58:55.318144 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-operator-76788bff89-wkjgm", UID:"120b38dc-8236-4fa6-a452-642b8ad738ee", APIVersion:"v1", ResourceVersion:"27443", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.21/23] from ovn-kubernetes 2025-08-13T19:58:55.373365343+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb" 
Netns:"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:8b:b4:e3:af:9c\",\"name\":\"cb33d2fb758e44e\"},{\"mac\":\"0a:58:0a:d9:00:15\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486\"}],\"ips\":[{\"address\":\"10.217.0.21/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.393036774+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219" Netns:"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-08-13T19:58:55.478690086+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884" Netns:"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-08-13T19:58:55.537093720+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"caf64d49987c99e","mac":"b2:ea:84:15:ac:21"},{"name":"eth0","mac":"0a:58:0a:d9:00:05","sandbox":"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.5/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.537340517+00:00 stderr F I0813 19:58:55.537273 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"machine-api-operator-788b7c6b6c-ctdmb", UID:"4f8aa612-9da0-4a2b-911e-6a1764a4e74e", APIVersion:"v1", ResourceVersion:"27399", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.5/23] from ovn-kubernetes 2025-08-13T19:58:55.584707128+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.584768759+00:00 stderr P ADD starting CNI request ContainerID:"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2" Netns:"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-08-13T19:58:55.584951845+00:00 stderr F 2025-08-13T19:58:55.625408338+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0" Netns:"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:ea:84:15:ac:21\",\"name\":\"caf64d49987c99e\"},{\"mac\":\"0a:58:0a:d9:00:05\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e\"}],\"ips\":[{\"address\":\"10.217.0.5/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.632598563+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.632651844+00:00 stderr P Add: openshift-machine-api:control-plane-machine-set-operator-649bd778b4-tt5tw:45a8038e-e7f2-4d93-a6f5-7753aa54e63f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2e8f0bacebafcab","mac":"9a:5d:fa:e3:14:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:14","sandbox":"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.20/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.632683775+00:00 stderr F 2025-08-13T19:58:55.635280089+00:00 stderr F I0813 19:58:55.633408 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"control-plane-machine-set-operator-649bd778b4-tt5tw", UID:"45a8038e-e7f2-4d93-a6f5-7753aa54e63f", APIVersion:"v1", ResourceVersion:"27292", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.20/23] from ovn-kubernetes 2025-08-13T19:58:55.723115223+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5" Netns:"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-08-13T19:58:55.735739483+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368" Netns:"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:5d:fa:e3:14:9e\",\"name\":\"2e8f0bacebafcab\"},{\"mac\":\"0a:58:0a:d9:00:14\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad\"}],\"ips\":[{\"address\":\"10.217.0.20/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.814497258+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9ed66fef0dec7ca","mac":"d2:7f:2e:a1:42:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:2f","sandbox":"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.47/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.814882729+00:00 stderr F I0813 
19:58:55.814750 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-7287f", UID:"887d596e-c519-4bfa-af90-3edd9e1b2f0f", APIVersion:"v1", ResourceVersion:"27417", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.47/23] from ovn-kubernetes 2025-08-13T19:58:55.862531407+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.862588959+00:00 stderr F ADD starting CNI request ContainerID:"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933" Netns:"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-08-13T19:58:55.867309433+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"861ac63b0e0c6ab","mac":"5e:7f:a7:dd:ef:03"},{"name":"eth0","mac":"0a:58:0a:d9:00:0b","sandbox":"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.11/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.867309433+00:00 stderr F I0813 19:58:55.867187 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-857456c46-7f5wf", UID:"8a5ae51d-d173-4531-8975-f164c975ce1f", APIVersion:"v1", ResourceVersion:"27311", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.11/23] from ovn-kubernetes 2025-08-13T19:58:55.900977383+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.901038685+00:00 stderr P ADD finished CNI request ContainerID:"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8" Netns:"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:7f:2e:a1:42:5d\",\"name\":\"9ed66fef0dec7ca\"},{\"mac\":\"0a:58:0a:d9:00:2f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be\"}],\"ips\":[{\"address\":\"10.217.0.47/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.901073766+00:00 stderr F 2025-08-13T19:58:55.932652876+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884" Netns:"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:7f:a7:dd:ef:03\",\"name\":\"861ac63b0e0c6ab\"},{\"mac\":\"0a:58:0a:d9:00:0b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba\"}],\"ips\":[{\"address\":\"10.217.0.11/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.937498794+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.937598317+00:00 stderr P ADD starting CNI request ContainerID:"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724" Netns:"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-08-13T19:58:55.937638998+00:00 stderr F 2025-08-13T19:58:56.012273095+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3a1adfc54f586eb","mac":"a2:c6:80:90:78:65"},{"name":"eth0","mac":"0a:58:0a:d9:00:2b","sandbox":"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.43/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.012273095+00:00 stderr F I0813 19:58:56.012117 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8464bcc55b-sjnqz", UID:"bd556935-a077-45df-ba3f-d42c39326ccd", APIVersion:"v1", ResourceVersion:"27446", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.43/23] from ovn-kubernetes 2025-08-13T19:58:56.142080505+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"07c341dd7186a1b","mac":"06:3c:d7:23:95:35"},{"name":"eth0","mac":"0a:58:0a:d9:00:0c","sandbox":"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.12/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.142080505+00:00 stderr F I0813 19:58:56.141432 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7", UID:"71af81a9-7d43-49b2-9287-c375900aa905", APIVersion:"v1", ResourceVersion:"27289", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.12/23] from ovn-kubernetes 2025-08-13T19:58:56.157497584+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2680ced3658686e","mac":"2a:b7:0a:d6:5a:09"},{"name":"eth0","mac":"0a:58:0a:d9:00:03","sandbox":"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.3/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.157497584+00:00 stderr F I0813 19:58:56.157423 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-qdfr4", UID:"a702c6d2-4dde-4077-ab8c-0f8df804bf7a", 
APIVersion:"v1", ResourceVersion:"27375", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.3/23] from ovn-kubernetes 2025-08-13T19:58:56.186366667+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1f2d8ae3277a5b2","mac":"b6:78:0d:36:15:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:0d","sandbox":"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.13/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.187137139+00:00 stderr F I0813 19:58:56.187067 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-f9xdt", UID:"3482be94-0cdb-4e2a-889b-e5fac59fdbf5", APIVersion:"v1", ResourceVersion:"27286", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.13/23] from ovn-kubernetes 2025-08-13T19:58:56.207969403+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219" Netns:"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:c6:80:90:78:65\",\"name\":\"3a1adfc54f586eb\"},{\"mac\":\"0a:58:0a:d9:00:2b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe\"}],\"ips\":[{\"address\":\"10.217.0.43/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.216271910+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Netns:"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:3c:d7:23:95:35\",\"name\":\"07c341dd7186a1b\"},{\"mac\":\"0a:58:0a:d9:00:0c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47\"}],\"ips\":[{\"address\":\"10.217.0.12/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.216271910+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5" Netns:"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:b7:0a:d6:5a:09\",\"name\":\"2680ced3658686e\"},{\"mac\":\"0a:58:0a:d9:00:03\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9\"}],\"ips\":[{\"address\":\"10.217.0.3/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.259076390+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2" Netns:"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b6:78:0d:36:15:42\",\"name\":\"1f2d8ae3277a5b2\"},{\"mac\":\"0a:58:0a:d9:00:0d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda\"}],\"ips\":[{\"address\":\"10.217.0.13/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.286691187+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e" Netns:"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-08-13T19:58:56.297972859+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"042b00f26918850","mac":"76:a6:6d:aa:9c:56"},{"name":"eth0","mac":"0a:58:0a:d9:00:30","sandbox":"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.48/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.297972859+00:00 stderr F I0813 19:58:56.297589 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-8jhz6", UID:"3f4dca86-e6ee-4ec9-8324-86aff960225e", APIVersion:"v1", ResourceVersion:"27295", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.48/23] from ovn-kubernetes 2025-08-13T19:58:56.363031603+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51987a02e71ec40","mac":"f2:7b:ae:d5:12:4e"},{"name":"eth0","mac":"0a:58:0a:d9:00:18","sandbox":"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.24/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.364867235+00:00 stderr F I0813 19:58:56.363236 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"package-server-manager-84d578d794-jw7r2", UID:"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be", APIVersion:"v1", ResourceVersion:"27283", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.24/23] from ovn-kubernetes 
2025-08-13T19:58:56.365723000+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.366205074+00:00 stderr P ADD starting CNI request ContainerID:"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Netns:"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-08-13T19:58:56.366299326+00:00 stderr F 2025-08-13T19:58:56.397057653+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748" Netns:"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-08-13T19:58:56.397057653+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98" Netns:"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-08-13T19:58:56.450894008+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933" Netns:"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:a6:6d:aa:9c:56\",\"name\":\"042b00f26918850\"},{\"mac\":\"0a:58:0a:d9:00:30\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c\"}],\"ips\":[{\"address\":\"10.217.0.48/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.458953357+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724" Netns:"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:7b:ae:d5:12:4e\",\"name\":\"51987a02e71ec40\"},{\"mac\":\"0a:58:0a:d9:00:18\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3\"}],\"ips\":[{\"address\":\"10.217.0.24/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.477898467+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request 
ContainerID:"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2" Netns:"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-08-13T19:58:56.521298975+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"76a23bcc5261ffe","mac":"1a:bb:ff:c9:e7:24"},{"name":"eth0","mac":"0a:58:0a:d9:00:07","sandbox":"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.7/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.521637754+00:00 stderr F I0813 19:58:56.521579 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-78d54458c4-sc8h7", UID:"ed024e5d-8fc2-4c22-803d-73f3c9795f19", APIVersion:"v1", ResourceVersion:"27411", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.7/23] from ovn-kubernetes 2025-08-13T19:58:56.576623112+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.576744955+00:00 stderr P ADD starting CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T19:58:56.576936031+00:00 stderr F 2025-08-13T19:58:56.586952556+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e" Netns:"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:bb:ff:c9:e7:24\",\"name\":\"76a23bcc5261ffe\"},{\"mac\":\"0a:58:0a:d9:00:07\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8\"}],\"ips\":[{\"address\":\"10.217.0.7/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.782631784+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.782699766+00:00 stderr P Add: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"489c96bd95d523f","mac":"a2:cb:e6:94:36:cf"},{"name":"eth0","mac":"0a:58:0a:d9:00:09","sandbox":"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.9/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.782731397+00:00 stderr F 2025-08-13T19:58:56.783256972+00:00 stderr F I0813 19:58:56.783225 23104 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-7978d7d7f6-2nt8z", UID:"0f394926-bdb9-425c-b36e-264d7fd34550", APIVersion:"v1", ResourceVersion:"27338", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.9/23] from ovn-kubernetes 2025-08-13T19:58:56.841606195+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"40aef0eb1bbaaf5","mac":"6a:17:c4:5e:dd:74"},{"name":"eth0","mac":"0a:58:0a:d9:00:32","sandbox":"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.50/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.841606195+00:00 stderr F I0813 19:58:56.840907 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-f4jkp", UID:"4092a9f8-5acc-4932-9e90-ef962eeb301a", APIVersion:"v1", ResourceVersion:"27305", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.50/23] from ovn-kubernetes 2025-08-13T19:58:56.841606195+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Netns:"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:cb:e6:94:36:cf\",\"name\":\"489c96bd95d523f\"},{\"mac\":\"0a:58:0a:d9:00:09\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7\"}],\"ips\":[{\"address\":\"10.217.0.9/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.886460464+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"88c60b5e25b2ce0","mac":"d2:40:20:33:89:cb"},{"name":"eth0","mac":"0a:58:0a:d9:00:20","sandbox":"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.32/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.886460464+00:00 stderr F I0813 19:58:56.886090 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"multus-admission-controller-6c7c885997-4hbbc", UID:"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0", APIVersion:"v1", ResourceVersion:"27266", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.32/23] from ovn-kubernetes 2025-08-13T19:58:56.890648493+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.893887085+00:00 stderr P ADD finished CNI request ContainerID:"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748" Netns:"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", 
result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6a:17:c4:5e:dd:74\",\"name\":\"40aef0eb1bbaaf5\"},{\"mac\":\"0a:58:0a:d9:00:32\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6\"}],\"ips\":[{\"address\":\"10.217.0.50/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.893974558+00:00 stderr F 2025-08-13T19:58:56.916500400+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb" Netns:"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-08-13T19:58:56.917071956+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"44f5ef3518ac6b9","mac":"2a:0a:70:95:cf:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:19","sandbox":"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.25/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.917188900+00:00 stderr F I0813 19:58:56.917099 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator", Name:"migrator-f7c6d88df-q2fnv", UID:"cf1a8966-f594-490a-9fbb-eec5bafd13d3", APIVersion:"v1", ResourceVersion:"27336", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.25/23] from ovn-kubernetes 2025-08-13T19:58:57.000396391+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98" Netns:"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:40:20:33:89:cb\",\"name\":\"88c60b5e25b2ce0\"},{\"mac\":\"0a:58:0a:d9:00:20\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298\"}],\"ips\":[{\"address\":\"10.217.0.32/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.004141538+00:00 stderr F 2025-08-13T19:58:57Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-vlbxv:378552fd-5e53-4882-87ff-95f3d9198861:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fbf310c9137d286","mac":"b2:ca:c3:72:ee:8c"},{"name":"eth0","mac":"0a:58:0a:d9:00:1a","sandbox":"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.26/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.004686544+00:00 stderr F I0813 19:58:57.004388 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-vlbxv", UID:"378552fd-5e53-4882-87ff-95f3d9198861", APIVersion:"v1", ResourceVersion:"27387", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.26/23] from 
ovn-kubernetes 2025-08-13T19:58:57.056690216+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:ca:c3:72:ee:8c\",\"name\":\"fbf310c9137d286\"},{\"mac\":\"0a:58:0a:d9:00:1a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1\"}],\"ips\":[{\"address\":\"10.217.0.26/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.057946752+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.058000283+00:00 stderr P ADD finished CNI request ContainerID:"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2" Netns:"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:0a:70:95:cf:7d\",\"name\":\"44f5ef3518ac6b9\"},{\"mac\":\"0a:58:0a:d9:00:19\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51\"}],\"ips\":[{\"address\":\"10.217.0.25/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.058026644+00:00 stderr F 2025-08-13T19:58:57.105462326+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.105524388+00:00 stderr P ADD starting CNI request ContainerID:"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Netns:"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-08-13T19:58:57.105548729+00:00 stderr F 2025-08-13T19:58:57.244323695+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821" Netns:"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-08-13T19:58:57.294940708+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T19:58:57.333358203+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request 
ContainerID:"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc" Netns:"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"" 2025-08-13T19:58:57.427415884+00:00 stderr F 2025-08-13T19:58:57Z [verbose] Add: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c45b735c45341a","mac":"ca:10:40:2a:6a:05"},{"name":"eth0","mac":"0a:58:0a:d9:00:2e","sandbox":"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.46/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.428259008+00:00 stderr F I0813 19:58:57.428100 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-samples-operator", Name:"cluster-samples-operator-bc474d5d6-wshwg", UID:"f728c15e-d8de-4a9a-a3ea-fdcead95cb91", APIVersion:"v1", ResourceVersion:"27350", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.46/23] from ovn-kubernetes 2025-08-13T19:58:57.480594800+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892" Netns:"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-08-13T19:58:57.561150286+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb" Netns:"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ca:10:40:2a:6a:05\",\"name\":\"2c45b735c45341a\"},{\"mac\":\"0a:58:0a:d9:00:2e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82\"}],\"ips\":[{\"address\":\"10.217.0.46/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.616173514+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Netns:"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-08-13T19:58:57.690381570+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d" Netns:"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-08-13T19:58:57.869494565+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.869559487+00:00 stderr P Add: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"20a42c53825c918","mac":"4e:5c:e6:eb:13:aa"},{"name":"eth0","mac":"0a:58:0a:d9:00:12","sandbox":"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.18/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.869596958+00:00 stderr F 2025-08-13T19:58:57.870607277+00:00 stderr F I0813 19:58:57.870464 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns-operator", Name:"dns-operator-75f687757b-nz2xb", UID:"10603adc-d495-423c-9459-4caa405960bb", APIVersion:"v1", ResourceVersion:"27299", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.18/23] from ovn-kubernetes 2025-08-13T19:58:57.947280493+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821" Netns:"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5c:e6:eb:13:aa\",\"name\":\"20a42c53825c918\"},{\"mac\":\"0a:58:0a:d9:00:12\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4\"}],\"ips\":[{\"address\":\"10.217.0.18/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.983576217+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T19:58:58.085763880+00:00 stderr P 2025-08-13T19:58:58Z [verbose] 2025-08-13T19:58:58.085948135+00:00 stderr P Add: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fe503da15decef9","mac":"62:a9:f3:84:d8:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:16","sandbox":"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.22/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:58.085979036+00:00 stderr F 2025-08-13T19:58:58.087337515+00:00 stderr F I0813 19:58:58.086439 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator-7769bd8d7d-q5cvv", UID:"b54e8941-2fc4-432a-9e51-39684df9089e", APIVersion:"v1", ResourceVersion:"27428", FieldPath:""}): type: 'Normal' reason: 
'AddedInterface' Add eth0 [10.217.0.22/23] from ovn-kubernetes 2025-08-13T19:58:58.167669555+00:00 stderr P 2025-08-13T19:58:58Z [verbose] 2025-08-13T19:58:58.168224991+00:00 stderr P ADD finished CNI request ContainerID:"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Netns:"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:a9:f3:84:d8:42\",\"name\":\"fe503da15decef9\"},{\"mac\":\"0a:58:0a:d9:00:16\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6\"}],\"ips\":[{\"address\":\"10.217.0.22/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:58.168290953+00:00 stderr F 2025-08-13T19:58:58.213958704+00:00 stderr F 2025-08-13T19:58:58Z [verbose] Add: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e6ed8c1e93f8bc4","mac":"aa:e8:ef:66:06:69"},{"name":"eth0","mac":"0a:58:0a:d9:00:1c","sandbox":"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.28/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:58.216982181+00:00 stderr F I0813 19:58:58.216294 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-84fccc7b6-mkncc", UID:"b233d916-bfe3-4ae5-ae39-6b574d1aa05e", APIVersion:"v1", ResourceVersion:"27421", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.28/23] from ovn-kubernetes 2025-08-13T19:58:58.295951372+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD finished CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:e8:ef:66:06:69\",\"name\":\"e6ed8c1e93f8bc4\"},{\"mac\":\"0a:58:0a:d9:00:1c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d\"}],\"ips\":[{\"address\":\"10.217.0.28/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:58.338013061+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Netns:"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-08-13T19:58:58.473564435+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3" Netns:"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-08-13T19:58:58.634314047+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Netns:"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-08-13T19:58:59.005862598+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"282af480c29eba8","mac":"5a:fa:7e:60:ae:8e"},{"name":"eth0","mac":"0a:58:0a:d9:00:0a","sandbox":"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.10/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.007142414+00:00 stderr F I0813 19:58:59.007096 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-546b4f8984-pwccz", UID:"6d67253e-2acd-4bc1-8185-793587da4f17", APIVersion:"v1", ResourceVersion:"27329", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.10/23] from ovn-kubernetes 2025-08-13T19:58:59.143136761+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-authentication:oauth-openshift-765b47f944-n2lhl:13ad7555-5f28-4555-a563-892713a8433a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8266ab3300c992b","mac":"16:7d:9f:58:7f:21"},{"name":"eth0","mac":"0a:58:0a:d9:00:1e","sandbox":"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.30/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.143136761+00:00 stderr F I0813 19:58:59.119382 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-765b47f944-n2lhl", UID:"13ad7555-5f28-4555-a563-892713a8433a", APIVersion:"v1", ResourceVersion:"27326", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.30/23] from ovn-kubernetes 2025-08-13T19:58:59.184869251+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Netns:"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:fa:7e:60:ae:8e\",\"name\":\"282af480c29eba8\"},{\"mac\":\"0a:58:0a:d9:00:0a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0\"}],\"ips\":[{\"address\":\"10.217.0.10/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.197442879+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD 
finished CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:7d:9f:58:7f:21\",\"name\":\"8266ab3300c992b\"},{\"mac\":\"0a:58:0a:d9:00:1e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d\"}],\"ips\":[{\"address\":\"10.217.0.30/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.216022159+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238" Netns:"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-08-13T19:58:59.216075330+00:00 stderr P 2025-08-13T19:58:59Z [verbose] 2025-08-13T19:58:59.216086340+00:00 stderr F ADD starting CNI request ContainerID:"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a" Netns:"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-08-13T19:58:59.226760295+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2cacd5e0efb1ce8","mac":"ea:0d:4c:ae:13:d8"},{"name":"eth0","mac":"0a:58:0a:d9:00:08","sandbox":"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.8/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.226760295+00:00 stderr F I0813 19:58:59.219586 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-etcd-operator", Name:"etcd-operator-768d5b5d86-722mg", UID:"0b5c38ff-1fa8-4219-994d-15776acd4a4d", APIVersion:"v1", ResourceVersion:"27425", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.8/23] from ovn-kubernetes 2025-08-13T19:58:59.226760295+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2aed5bade7f294b","mac":"4e:5a:b9:b2:bb:00"},{"name":"eth0","mac":"0a:58:0a:d9:00:0f","sandbox":"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.15/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.226951730+00:00 stderr F I0813 19:58:59.226928 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-6f6cb54958-rbddb", 
UID:"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf", APIVersion:"v1", ResourceVersion:"27367", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.15/23] from ovn-kubernetes 2025-08-13T19:58:59.238044236+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-dns:dns-default-gbw49:13045510-8717-4a71-ade4-be95a76440a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"63f14f64c728127","mac":"9a:23:de:02:96:5e"},{"name":"eth0","mac":"0a:58:0a:d9:00:1f","sandbox":"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.31/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.238044236+00:00 stderr F I0813 19:58:59.231537 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-gbw49", UID:"13045510-8717-4a71-ade4-be95a76440a7", APIVersion:"v1", ResourceVersion:"27378", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.31/23] from ovn-kubernetes 2025-08-13T19:58:59.405539971+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a" Netns:"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-08-13T19:58:59.419100297+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a" Netns:"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc" Netns:"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:23:de:02:96:5e\",\"name\":\"63f14f64c728127\"},{\"mac\":\"0a:58:0a:d9:00:1f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079\"}],\"ips\":[{\"address\":\"10.217.0.31/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Netns:"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5a:b9:b2:bb:00\",\"name\":\"2aed5bade7f294b\"},{\"mac\":\"0a:58:0a:d9:00:0f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa\"}],\"ips\":[{\"address\":\"10.217.0.15/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892" Netns:"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:0d:4c:ae:13:d8\",\"name\":\"2cacd5e0efb1ce8\"},{\"mac\":\"0a:58:0a:d9:00:08\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c\"}],\"ips\":[{\"address\":\"10.217.0.8/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.499960322+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T19:58:59.600975011+00:00 stderr P 2025-08-13T19:58:59Z [verbose] 2025-08-13T19:58:59.601505556+00:00 stderr P ADD starting CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"" 2025-08-13T19:58:59.601565358+00:00 stderr F 2025-08-13T19:58:59.631759808+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"" 2025-08-13T19:58:59.647303371+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7" Netns:"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-08-13T19:58:59.805428699+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3" 
Netns:"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-08-13T19:58:59.844184134+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5" Netns:"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-08-13T19:59:00.168377535+00:00 stderr P 2025-08-13T19:59:00Z [verbose] 2025-08-13T19:59:00.170681861+00:00 stderr P ADD starting CNI request ContainerID:"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31" Netns:"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-08-13T19:59:00.170862256+00:00 stderr F 2025-08-13T19:59:00.171981088+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"" 2025-08-13T19:59:00.202354764+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T19:59:00.324034272+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7" Netns:"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-08-13T19:59:00.808159642+00:00 stderr P 2025-08-13T19:59:00Z [verbose] 2025-08-13T19:59:00.808287966+00:00 stderr P ADD starting CNI request ContainerID:"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed" Netns:"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-08-13T19:59:00.808756179+00:00 stderr F 2025-08-13T19:59:01.123363757+00:00 stderr F 2025-08-13T19:59:01Z [verbose] ADD starting CNI request ContainerID:"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621" Netns:"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-08-13T19:59:03.135317658+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD starting CNI request ContainerID:"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007" Netns:"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-08-13T19:59:03.136943044+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.137268004+00:00 stderr P Add: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"aab926f26907ff6","mac":"ae:8f:58:c5:04:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:42","sandbox":"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.66/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.137298555+00:00 stderr F 2025-08-13T19:59:03.181010350+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7c70e17033c6821","mac":"a6:4b:a8:66:58:77"},{"name":"eth0","mac":"0a:58:0a:d9:00:0e","sandbox":"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.14/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.181010350+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"10cfef5f94c814c","mac":"fa:06:81:ef:ea:29"},{"name":"eth0","mac":"0a:58:0a:d9:00:33","sandbox":"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.51/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.181010350+00:00 stderr P 2025-08-13T19:59:03Z [verbose] Add: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d3db60615905e44","mac":"d6:ab:ce:8e:34:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:17","sandbox":"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.23/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03.301128335+00:00 stderr F 
2025-08-13T19:59:03Z [verbose] Add: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5aa1911bfbbdddf","mac":"ea:f0:97:ca:28:ed"},{"name":"eth0","mac":"0a:58:0a:d9:00:04","sandbox":"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.4/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-5c4dbb8899-tchz5:af6b67a3-a2bd-4051-9adc-c208a5a65d79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"893b4f9b5ed2707","mac":"d2:c7:9d:0a:38:17"},{"name":"eth0","mac":"0a:58:0a:d9:00:11","sandbox":"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.17/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.181666 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-65476884b9-9wcvx", UID:"6268b7fe-8910-4505-b404-6f1df638105c", APIVersion:"v1", ResourceVersion:"27396", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.66/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.280298 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-6d8474f75f-x54mh", UID:"c085412c-b875-46c9-ae3e-e6b0d8067091", APIVersion:"v1", ResourceVersion:"27257", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.14/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.280323 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-8s8pc", UID:"c782cf62-a827-4677-b3c2-6f82c5f09cbb", APIVersion:"v1", ResourceVersion:"27308", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.51/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288692 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-config-operator", Name:"openshift-config-operator-77658b5b66-dq5sc", UID:"530553aa-0a1d-423e-8a22-f5eb4bdbb883", APIVersion:"v1", ResourceVersion:"27263", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.23/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288731 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-v54bt", UID:"34a48baf-1bee-4921-8bb2-9b7320e76f79", APIVersion:"v1", ResourceVersion:"27275", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.4/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288748 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-5c4dbb8899-tchz5", UID:"af6b67a3-a2bd-4051-9adc-c208a5a65d79", APIVersion:"v1", ResourceVersion:"27278", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.17/23] from ovn-kubernetes 2025-08-13T19:59:03.321165456+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d" 
Netns:"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:8f:58:c5:04:5d\",\"name\":\"aab926f26907ff6\"},{\"mac\":\"0a:58:0a:d9:00:42\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e\"}],\"ips\":[{\"address\":\"10.217.0.66/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.321165456+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3" Netns:"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a6:4b:a8:66:58:77\",\"name\":\"7c70e17033c6821\"},{\"mac\":\"0a:58:0a:d9:00:0e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4\"}],\"ips\":[{\"address\":\"10.217.0.14/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.672140931+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a" Netns:"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fa:06:81:ef:ea:29\",\"name\":\"10cfef5f94c814c\"},{\"mac\":\"0a:58:0a:d9:00:33\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6\"}],\"ips\":[{\"address\":\"10.217.0.51/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.684501513+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.685468030+00:00 stderr P Add: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b27ef0e5311849c","mac":"5a:be:36:6c:e5:ff"},{"name":"eth0","mac":"0a:58:0a:d9:00:27","sandbox":"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.39/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.685510352+00:00 stderr F 2025-08-13T19:59:03.685942824+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.685981775+00:00 stderr P Add: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:87df87f4-ba66-4137-8e41-1fa632ad4207:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4916f2a17d27bbf","mac":"4e:22:5c:05:c0:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:24","sandbox":"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.36/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.686051727+00:00 
stderr F 2025-08-13T19:59:03.686445048+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.686494450+00:00 stderr P Add: openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"526dc34c7f02246","mac":"1e:22:ca:d9:c0:4a"},{"name":"eth0","mac":"0a:58:0a:d9:00:06","sandbox":"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.6/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.686523791+00:00 stderr F 2025-08-13T19:59:03.701115536+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.701925940+00:00 stderr F Add: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"906e45421a720cb","mac":"26:7a:19:ff:0e:c3"},{"name":"eth0","mac":"0a:58:0a:d9:00:13","sandbox":"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.19/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.702007492+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.702074924+00:00 stderr P ADD finished CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:c7:9d:0a:38:17\",\"name\":\"893b4f9b5ed2707\"},{\"mac\":\"0a:58:0a:d9:00:11\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a\"}],\"ips\":[{\"address\":\"10.217.0.17/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.702175467+00:00 stderr F 2025-08-13T19:59:03.702379302+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.702452915+00:00 stderr P Add: openshift-marketplace:redhat-marketplace-rmwfn:9ad279b4-d9dc-42a8-a1c8-a002bd063482:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9218677c9aa0f21","mac":"3e:06:24:c1:84:12"},{"name":"eth0","mac":"0a:58:0a:d9:00:36","sandbox":"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.54/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.702478165+00:00 stderr F 2025-08-13T19:59:03.702995640+00:00 stderr F I0813 19:59:03.702962 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"5bacb25d-97b6-4491-8fb4-99feae1d802a", APIVersion:"v1", ResourceVersion:"27346", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.39/23] from ovn-kubernetes 2025-08-13T19:59:03.703055082+00:00 stderr F I0813 19:59:03.703034 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-6ff78978b4-q4vv8", UID:"87df87f4-ba66-4137-8e41-1fa632ad4207", APIVersion:"v1", ResourceVersion:"27269", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.36/23] from ovn-kubernetes 
2025-08-13T19:59:03.703092063+00:00 stderr F I0813 19:59:03.703076 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-7c88c4c865-kn67m", UID:"43ae1c37-047b-4ee2-9fee-41e337dd4ac8", APIVersion:"v1", ResourceVersion:"27332", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.6/23] from ovn-kubernetes 2025-08-13T19:59:03.703125654+00:00 stderr F I0813 19:59:03.703110 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication-operator", Name:"authentication-operator-7cc7ff75d5-g9qv8", UID:"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e", APIVersion:"v1", ResourceVersion:"27314", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.19/23] from ovn-kubernetes 2025-08-13T19:59:03.703158455+00:00 stderr F I0813 19:59:03.703143 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-rmwfn", UID:"9ad279b4-d9dc-42a8-a1c8-a002bd063482", APIVersion:"v1", ResourceVersion:"27389", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.54/23] from ovn-kubernetes 2025-08-13T19:59:03.732219983+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.732304806+00:00 stderr P Add: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4146ac88f77df20","mac":"7a:30:67:39:88:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:47","sandbox":"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.71/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.732330756+00:00 stderr F 2025-08-13T19:59:03.733101848+00:00 stderr F I0813 19:59:03.733068 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-2vhcn", UID:"0b5d722a-1123-4935-9740-52a08d018bc9", APIVersion:"v1", ResourceVersion:"27357", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.71/23] from ovn-kubernetes 2025-08-13T19:59:03.880295054+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.881361964+00:00 stderr P ADD finished CNI request ContainerID:"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a" Netns:"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:f0:97:ca:28:ed\",\"name\":\"5aa1911bfbbdddf\"},{\"mac\":\"0a:58:0a:d9:00:04\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c\"}],\"ips\":[{\"address\":\"10.217.0.4/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.881411256+00:00 stderr F 2025-08-13T19:59:03.936494066+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.936603749+00:00 stderr P ADD finished CNI request ContainerID:"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7" Netns:"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:be:36:6c:e5:ff\",\"name\":\"b27ef0e5311849c\"},{\"mac\":\"0a:58:0a:d9:00:27\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff\"}],\"ips\":[{\"address\":\"10.217.0.39/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.936678871+00:00 stderr F 2025-08-13T19:59:03.937121224+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.937362851+00:00 stderr P ADD finished CNI request ContainerID:"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31" Netns:"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:7a:19:ff:0e:c3\",\"name\":\"906e45421a720cb\"},{\"mac\":\"0a:58:0a:d9:00:13\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d\"}],\"ips\":[{\"address\":\"10.217.0.19/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.937438253+00:00 stderr F 2025-08-13T19:59:03.937627508+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.937896776+00:00 stderr P ADD finished CNI request ContainerID:"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3" Netns:"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:30:67:39:88:7d\",\"name\":\"4146ac88f77df20\"},{\"mac\":\"0a:58:0a:d9:00:47\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c\"}],\"ips\":[{\"address\":\"10.217.0.71/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.938123652+00:00 stderr F 2025-08-13T19:59:03.953421098+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0e119602de1750a","mac":"5a:68:88:74:1a:1a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3e","sandbox":"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.62/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.953911272+00:00 stderr F I0813 19:59:03.953591 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5dbbc74dc9-cp5cd", UID:"e9127708-ccfd-4891-8a3a-f0cacb77e0f4", APIVersion:"v1", ResourceVersion:"27354", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.62/23] from ovn-kubernetes 2025-08-13T19:59:04.073471741+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: 
openshift-apiserver:apiserver-67cbf64bc9-mtx25:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"961449f5e5e8534","mac":"06:02:94:00:bf:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:25","sandbox":"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.37/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.075157649+00:00 stderr F I0813 19:59:04.075076 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-67cbf64bc9-mtx25", UID:"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab", APIVersion:"v1", ResourceVersion:"27361", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.37/23] from ovn-kubernetes 2025-08-13T19:59:04.183987791+00:00 stderr F 2025-08-13T19:59:04Z [verbose] ADD finished CNI request ContainerID:"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a" Netns:"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:22:ca:d9:c0:4a\",\"name\":\"526dc34c7f02246\"},{\"mac\":\"0a:58:0a:d9:00:06\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c\"}],\"ips\":[{\"address\":\"10.217.0.6/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:04.521248175+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-8-crc:72854c1e-5ae2-4ed6-9e50-ff3bccde2635:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d84dd6581e40bee","mac":"f2:91:66:93:67:34"},{"name":"eth0","mac":"0a:58:0a:d9:00:37","sandbox":"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.55/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.521441280+00:00 stderr F I0813 19:59:04.521386 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-8-crc", UID:"72854c1e-5ae2-4ed6-9e50-ff3bccde2635", APIVersion:"v1", ResourceVersion:"27298", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.55/23] from ovn-kubernetes 2025-08-13T19:59:04.538269750+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"717e351e369b4a5","mac":"52:85:10:ff:fa:5e"},{"name":"eth0","mac":"0a:58:0a:d9:00:10","sandbox":"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.16/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.538555898+00:00 stderr F I0813 19:59:04.538471 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator-686c6c748c-qbnnr", UID:"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7", APIVersion:"v1", ResourceVersion:"27371", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.16/23] from ovn-kubernetes 
2025-08-13T19:59:04.658192458+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"97418fd7ce5644b","mac":"6e:43:34:af:3f:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:40","sandbox":"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.64/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.658192458+00:00 stderr F I0813 19:59:04.653636 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5c5478f8c-vqvt7", UID:"d0f40333-c860-4c04-8058-a0bf572dcf12", APIVersion:"v1", ResourceVersion:"27272", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.64/23] from ovn-kubernetes 2025-08-13T19:59:04.667030550+00:00 stderr P 2025-08-13T19:59:04Z [verbose] 2025-08-13T19:59:04.667081782+00:00 stderr P Add: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"22d48c9fe60d97e","mac":"c6:21:89:7b:ef:07"},{"name":"eth0","mac":"0a:58:0a:d9:00:2d","sandbox":"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.45/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.667113393+00:00 stderr F 2025-08-13T19:59:04.667529494+00:00 stderr F I0813 19:59:04.667467 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-operator", Name:"ingress-operator-7d46d5bb6d-rrg6t", UID:"7d51f445-054a-4e4f-a67b-a828f5a32511", APIVersion:"v1", ResourceVersion:"27414", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.45/23] from ovn-kubernetes 2025-08-13T19:59:04.774100162+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a10fd87b4b9fef3","mac":"7a:f0:82:7d:d8:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:3d","sandbox":"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.61/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.774299488+00:00 stderr F I0813 19:59:04.774247 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-conversion-webhook-595f9969b-l6z49", UID:"59748b9b-c309-4712-aa85-bb38d71c4915", APIVersion:"v1", ResourceVersion:"27381", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.61/23] from ovn-kubernetes 2025-08-13T19:59:04.817896801+00:00 stderr F 2025-08-13T19:59:04Z [verbose] ADD finished CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3e:06:24:c1:84:12\",\"name\":\"9218677c9aa0f21\"},{\"mac\":\"0a:58:0a:d9:00:36\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782\"}],\"ips\":[{\"address\":\"10.217.0.54/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:04.874933427+00:00 stderr P 2025-08-13T19:59:04Z [verbose] 2025-08-13T19:59:04.874985878+00:00 stderr P Add: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ce1a5d3596103f2","mac":"8e:1b:59:6f:cb:45"},{"name":"eth0","mac":"0a:58:0a:d9:00:31","sandbox":"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.49/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.875017139+00:00 stderr F 2025-08-13T19:59:04.875932285+00:00 stderr F I0813 19:59:04.875899 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"hostpath-provisioner", Name:"csi-hostpathplugin-hvm8g", UID:"12e733dd-0939-4f1b-9cbb-13897e093787", APIVersion:"v1", ResourceVersion:"27304", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.49/23] from ovn-kubernetes 2025-08-13T19:59:05.358683366+00:00 stderr F 2025-08-13T19:59:05Z [verbose] ADD finished CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:22:5c:05:c0:19\",\"name\":\"4916f2a17d27bbf\"},{\"mac\":\"0a:58:0a:d9:00:24\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c\"}],\"ips\":[{\"address\":\"10.217.0.36/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:05.358683366+00:00 stderr F 2025-08-13T19:59:05Z [verbose] ADD finished CNI request ContainerID:"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Netns:"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:ab:ce:8e:34:62\",\"name\":\"d3db60615905e44\"},{\"mac\":\"0a:58:0a:d9:00:17\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7\"}],\"ips\":[{\"address\":\"10.217.0.23/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.132119023+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:91:66:93:67:34\",\"name\":\"d84dd6581e40bee\"},{\"mac\":\"0a:58:0a:d9:00:37\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976\"}],\"ips\":[{\"address\":\"10.217.0.55/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.132119023+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7" Netns:"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:43:34:af:3f:cd\",\"name\":\"97418fd7ce5644b\"},{\"mac\":\"0a:58:0a:d9:00:40\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a\"}],\"ips\":[{\"address\":\"10.217.0.64/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.280437451+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed" Netns:"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c6:21:89:7b:ef:07\",\"name\":\"22d48c9fe60d97e\"},{\"mac\":\"0a:58:0a:d9:00:2d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738\"}],\"ips\":[{\"address\":\"10.217.0.45/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.310012694+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5" Netns:"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"52:85:10:ff:fa:5e\",\"name\":\"717e351e369b4a5\"},{\"mac\":\"0a:58:0a:d9:00:10\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b\"}],\"ips\":[{\"address\":\"10.217.0.16/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.310012694+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007" Netns:"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:f0:82:7d:d8:bb\",\"name\":\"a10fd87b4b9fef3\"},{\"mac\":\"0a:58:0a:d9:00:3d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f\"}],\"ips\":[{\"address\":\"10.217.0.61/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.313906515+00:00 stderr P 2025-08-13T19:59:06Z [verbose] 2025-08-13T19:59:06.313973167+00:00 stderr P ADD finished CNI request ContainerID:"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238" Netns:"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:68:88:74:1a:1a\",\"name\":\"0e119602de1750a\"},{\"mac\":\"0a:58:0a:d9:00:3e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a\"}],\"ips\":[{\"address\":\"10.217.0.62/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.314000308+00:00 stderr F 2025-08-13T19:59:06.314444290+00:00 stderr P 2025-08-13T19:59:06Z [verbose] 2025-08-13T19:59:06.314487182+00:00 stderr P ADD finished CNI request ContainerID:"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621" Netns:"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:1b:59:6f:cb:45\",\"name\":\"ce1a5d3596103f2\"},{\"mac\":\"0a:58:0a:d9:00:31\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232\"}],\"ips\":[{\"address\":\"10.217.0.49/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.314518213+00:00 stderr F 2025-08-13T19:59:06.320966546+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:02:94:00:bf:62\",\"name\":\"961449f5e5e8534\"},{\"mac\":\"0a:58:0a:d9:00:25\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454\"}],\"ips\":[{\"address\":\"10.217.0.37/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:27.144002369+00:00 stderr F 2025-08-13T19:59:27Z [verbose] DEL starting CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"" 2025-08-13T19:59:27.179924703+00:00 stderr F 2025-08-13T19:59:27Z [verbose] Del: openshift-kube-controller-manager:revision-pruner-8-crc:72854c1e-5ae2-4ed6-9e50-ff3bccde2635:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:33.096082993+00:00 stderr F 2025-08-13T19:59:33Z [verbose] DEL finished CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"", result: "", err: 2025-08-13T19:59:41.933253729+00:00 stderr F 2025-08-13T19:59:41Z [verbose] ADD starting CNI request ContainerID:"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee" Netns:"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"" 2025-08-13T19:59:43.456335124+00:00 stderr F 2025-08-13T19:59:43Z [verbose] DEL starting CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T19:59:43.456335124+00:00 stderr F 2025-08-13T19:59:43Z [verbose] Del: openshift-service-ca:service-ca-666f99b6f-vlbxv:378552fd-5e53-4882-87ff-95f3d9198861:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:47.068968522+00:00 stderr F 2025-08-13T19:59:47Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-kk8kg:e4a7de23-6134-4044-902a-0900dc04a501:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5069234e6bbbde","mac":"46:16:5d:0b:45:7c"},{"name":"eth0","mac":"0a:58:0a:d9:00:28","sandbox":"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.40/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:47.099974756+00:00 stderr F I0813 19:59:47.099179 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-kk8kg", UID:"e4a7de23-6134-4044-902a-0900dc04a501", APIVersion:"v1", ResourceVersion:"28290", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.40/23] from 
ovn-kubernetes 2025-08-13T19:59:47.897507141+00:00 stderr F 2025-08-13T19:59:47Z [verbose] ADD finished CNI request ContainerID:"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee" Netns:"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"46:16:5d:0b:45:7c\",\"name\":\"c5069234e6bbbde\"},{\"mac\":\"0a:58:0a:d9:00:28\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566\"}],\"ips\":[{\"address\":\"10.217.0.40/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:49.047495393+00:00 stderr F 2025-08-13T19:59:49Z [verbose] DEL finished CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "", err: 2025-08-13T19:59:52.638690193+00:00 stderr F 2025-08-13T19:59:52Z [verbose] DEL starting CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T19:59:52.658090086+00:00 stderr F 2025-08-13T19:59:52Z [verbose] Del: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:87df87f4-ba66-4137-8e41-1fa632ad4207:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:52.849384859+00:00 stderr P 2025-08-13T19:59:52Z [verbose] 2025-08-13T19:59:52.849595085+00:00 stderr P DEL starting CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"" 2025-08-13T19:59:52.850056428+00:00 stderr F 2025-08-13T19:59:52.850768769+00:00 stderr P 2025-08-13T19:59:52Z [verbose] 2025-08-13T19:59:52.850910913+00:00 stderr P Del: openshift-route-controller-manager:route-controller-manager-5c4dbb8899-tchz5:af6b67a3-a2bd-4051-9adc-c208a5a65d79:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:52.850952694+00:00 stderr F 
2025-08-13T19:59:53.782082985+00:00 stderr F 2025-08-13T19:59:53Z [verbose] DEL finished CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "", err: 2025-08-13T19:59:54.606403153+00:00 stderr F 2025-08-13T19:59:54Z [verbose] DEL finished CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"", result: "", err: 2025-08-13T19:59:57.244142212+00:00 stderr F 2025-08-13T19:59:57Z [verbose] ADD starting CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"" 2025-08-13T19:59:57.594150390+00:00 stderr F 2025-08-13T19:59:57Z [verbose] ADD starting CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"" 2025-08-13T19:59:59.956411127+00:00 stderr F 2025-08-13T19:59:59Z [verbose] Add: openshift-controller-manager:controller-manager-c4dd57946-mpxjt:16f68e98-a8f9-417a-b92b-37bfd7b11e01:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4cfa6ec97b88dab","mac":"d6:04:2f:51:ff:10"},{"name":"eth0","mac":"0a:58:0a:d9:00:29","sandbox":"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.41/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:59.956411127+00:00 stderr F I0813 19:59:59.954708 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-c4dd57946-mpxjt", UID:"16f68e98-a8f9-417a-b92b-37bfd7b11e01", APIVersion:"v1", ResourceVersion:"28746", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.41/23] from ovn-kubernetes 2025-08-13T20:00:00.044587720+00:00 stderr F 2025-08-13T20:00:00Z [verbose] ADD finished CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:04:2f:51:ff:10\",\"name\":\"4cfa6ec97b88dab\"},{\"mac\":\"0a:58:0a:d9:00:29\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6\"}],\"ips\":[{\"address\":\"10.217.0.41/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:01.558864525+00:00 stderr F 2025-08-13T20:00:01Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-5b77f9fd48-hb8xt:83bf0764-e80c-490b-8d3c-3cf626fdb233:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"13b18d12f5f999b","mac":"f6:89:4d:62:4a:70"},{"name":"eth0","mac":"0a:58:0a:d9:00:2a","sandbox":"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.42/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:01.559106632+00:00 stderr F I0813 20:00:01.559027 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-5b77f9fd48-hb8xt", UID:"83bf0764-e80c-490b-8d3c-3cf626fdb233", APIVersion:"v1", ResourceVersion:"28749", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.42/23] from ovn-kubernetes 2025-08-13T20:00:01.691357391+00:00 stderr P 2025-08-13T20:00:01Z [verbose] 2025-08-13T20:00:01.731302750+00:00 stderr P ADD finished CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f6:89:4d:62:4a:70\",\"name\":\"13b18d12f5f999b\"},{\"mac\":\"0a:58:0a:d9:00:2a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39\"}],\"ips\":[{\"address\":\"10.217.0.42/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:01.731524846+00:00 stderr F 2025-08-13T20:00:02.963959695+00:00 stderr F 2025-08-13T20:00:02Z [verbose] ADD starting CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"" 2025-08-13T20:00:05.345329857+00:00 stderr P 2025-08-13T20:00:05Z [verbose] 2025-08-13T20:00:05.345387239+00:00 stderr P Add: openshift-operator-lifecycle-manager:collect-profiles-29251920-wcws2:deaee4f4-7b7a-442d-99b7-c8ac62ef5f27:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"eae823dac0e12a2","mac":"92:41:c9:a3:ea:a6"},{"name":"eth0","mac":"0a:58:0a:d9:00:2c","sandbox":"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.44/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:05.345418890+00:00 stderr F 2025-08-13T20:00:05.346879541+00:00 stderr F I0813 20:00:05.345996 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251920-wcws2", UID:"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27", APIVersion:"v1", ResourceVersion:"28823", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.44/23] from ovn-kubernetes 2025-08-13T20:00:05.381500918+00:00 stderr P 2025-08-13T20:00:05Z [verbose] 2025-08-13T20:00:05.381557790+00:00 stderr P ADD finished CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:41:c9:a3:ea:a6\",\"name\":\"eae823dac0e12a2\"},{\"mac\":\"0a:58:0a:d9:00:2c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984\"}],\"ips\":[{\"address\":\"10.217.0.44/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:05.381588991+00:00 stderr F 2025-08-13T20:00:08.083714518+00:00 stderr F 2025-08-13T20:00:08Z [verbose] ADD starting CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"" 2025-08-13T20:00:11.240574632+00:00 stderr F 2025-08-13T20:00:11Z [verbose] ADD starting CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"" 2025-08-13T20:00:12.220030431+00:00 stderr F 2025-08-13T20:00:12Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-9-crc:a0453d24-e872-43af-9e7a-86227c26d200:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"beb700893f285f1","mac":"06:fd:56:87:f4:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:34","sandbox":"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.52/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:12.220030431+00:00 stderr F I0813 20:00:12.216134 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-9-crc", UID:"a0453d24-e872-43af-9e7a-86227c26d200", APIVersion:"v1", ResourceVersion:"28975", FieldPath:""}): type: 'Normal' reason: 
'AddedInterface' Add eth0 [10.217.0.52/23] from ovn-kubernetes 2025-08-13T20:00:12.800695578+00:00 stderr F 2025-08-13T20:00:12Z [verbose] ADD finished CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:fd:56:87:f4:cd\",\"name\":\"beb700893f285f1\"},{\"mac\":\"0a:58:0a:d9:00:34\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173\"}],\"ips\":[{\"address\":\"10.217.0.52/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:13.096603935+00:00 stderr F 2025-08-13T20:00:13Z [verbose] Add: openshift-kube-controller-manager:installer-9-crc:227e3650-2a85-4229-8099-bb53972635b2:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ca267bd7a205181","mac":"06:5e:50:09:30:d5"},{"name":"eth0","mac":"0a:58:0a:d9:00:35","sandbox":"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.53/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:13.096603935+00:00 stderr F I0813 20:00:13.095603 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-9-crc", UID:"227e3650-2a85-4229-8099-bb53972635b2", APIVersion:"v1", ResourceVersion:"29034", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.53/23] from ovn-kubernetes 2025-08-13T20:00:13.126610891+00:00 stderr P 2025-08-13T20:00:13Z [verbose] 2025-08-13T20:00:13.126685123+00:00 stderr P DEL starting CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"" 2025-08-13T20:00:13.126721704+00:00 stderr F 2025-08-13T20:00:13.127349202+00:00 stderr P 2025-08-13T20:00:13Z [verbose] 2025-08-13T20:00:13.127392773+00:00 stderr P Del: openshift-route-controller-manager:route-controller-manager-5b77f9fd48-hb8xt:83bf0764-e80c-490b-8d3c-3cf626fdb233:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:13.127422254+00:00 stderr F 2025-08-13T20:00:13.427259784+00:00 stderr F 2025-08-13T20:00:13Z [verbose] ADD finished CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:5e:50:09:30:d5\",\"name\":\"ca267bd7a205181\"},{\"mac\":\"0a:58:0a:d9:00:35\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330\"}],\"ips\":[{\"address\":\"10.217.0.53/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:13.463043484+00:00 stderr F 2025-08-13T20:00:13Z [verbose] DEL starting CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"" 2025-08-13T20:00:13.463043484+00:00 stderr F 2025-08-13T20:00:13Z [verbose] Del: openshift-controller-manager:controller-manager-c4dd57946-mpxjt:16f68e98-a8f9-417a-b92b-37bfd7b11e01:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:14.913106910+00:00 stderr F 2025-08-13T20:00:14Z [verbose] DEL finished CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"", result: "", err: 2025-08-13T20:00:15.107138713+00:00 stderr F 2025-08-13T20:00:15Z [verbose] DEL starting CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"" 2025-08-13T20:00:15.107138713+00:00 stderr F 2025-08-13T20:00:15Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29251920-wcws2:deaee4f4-7b7a-442d-99b7-c8ac62ef5f27:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:15.630039383+00:00 stderr P 2025-08-13T20:00:15Z [verbose] 2025-08-13T20:00:15.630093634+00:00 stderr P DEL finished CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"", result: "", err: 2025-08-13T20:00:15.630118135+00:00 stderr F 2025-08-13T20:00:16.272621295+00:00 stderr F 2025-08-13T20:00:16Z [verbose] DEL 
finished CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"", result: "", err: 2025-08-13T20:00:18.052931309+00:00 stderr F 2025-08-13T20:00:18Z [verbose] DEL starting CNI request ContainerID:"628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28658250-dvzvw;K8S_POD_INFRA_CONTAINER_ID=628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292;K8S_POD_UID=05fb6e44-aaf9-4fbc-a235-7a3447ac3086" Path:"" 2025-08-13T20:00:18.053876806+00:00 stderr F 2025-08-13T20:00:18Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292: no such file or directory, cannot properly delete 2025-08-13T20:00:18.053876806+00:00 stderr F 2025-08-13T20:00:18Z [verbose] DEL finished CNI request ContainerID:"628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28658250-dvzvw;K8S_POD_INFRA_CONTAINER_ID=628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292;K8S_POD_UID=05fb6e44-aaf9-4fbc-a235-7a3447ac3086" Path:"", result: "", err: 2025-08-13T20:00:18.953674012+00:00 stderr F 2025-08-13T20:00:18Z [verbose] ADD starting CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"" 2025-08-13T20:00:19.081176678+00:00 stderr F 2025-08-13T20:00:19Z [verbose] DEL starting CNI request ContainerID:"d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-08-13T20:00:19.153086928+00:00 stderr F 2025-08-13T20:00:19Z [verbose] Del: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:19.425343031+00:00 stderr F 2025-08-13T20:00:19Z [verbose] ADD starting CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"" 2025-08-13T20:00:20.398503070+00:00 stderr F 2025-08-13T20:00:20Z [verbose] ADD starting CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"" 2025-08-13T20:00:21.775932115+00:00 stderr F 2025-08-13T20:00:21Z [verbose] ADD starting CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"" 2025-08-13T20:00:21.808731090+00:00 stderr F 2025-08-13T20:00:21Z [verbose] DEL finished CNI request ContainerID:"d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: "", err: 2025-08-13T20:00:22.604078219+00:00 stderr F 2025-08-13T20:00:22Z [verbose] DEL starting CNI request ContainerID:"a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-08-13T20:00:22.624992706+00:00 stderr F 2025-08-13T20:00:22Z [verbose] Del: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:23.221625190+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.221874807+00:00 stderr P DEL starting CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"" 2025-08-13T20:00:23.221917299+00:00 stderr F 2025-08-13T20:00:23.222874806+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.222953108+00:00 stderr P Del: 
openshift-kube-controller-manager:revision-pruner-9-crc:a0453d24-e872-43af-9e7a-86227c26d200:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:23.222991239+00:00 stderr F 2025-08-13T20:00:23.426680097+00:00 stderr F 2025-08-13T20:00:23Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-6cfd9fc8fc-7sbzw:1713e8bc-bab0-49a8-8618-9ded2e18906c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1f55b781eeb63db","mac":"4a:a0:3c:7f:f6:89"},{"name":"eth0","mac":"0a:58:0a:d9:00:38","sandbox":"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.56/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.427156031+00:00 stderr F I0813 20:00:23.427101 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-6cfd9fc8fc-7sbzw", UID:"1713e8bc-bab0-49a8-8618-9ded2e18906c", APIVersion:"v1", ResourceVersion:"29279", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.56/23] from ovn-kubernetes 2025-08-13T20:00:23.524430564+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.524518267+00:00 stderr P Add: openshift-kube-apiserver:installer-9-crc:2ad657a4-8b02-4373-8d0d-b0e25345dc90:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9b70547ed21fdd5","mac":"1e:44:eb:8d:8e:6a"},{"name":"eth0","mac":"0a:58:0a:d9:00:37","sandbox":"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.55/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.524584588+00:00 stderr F 2025-08-13T20:00:23.525128274+00:00 stderr F I0813 20:00:23.525048 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-9-crc", UID:"2ad657a4-8b02-4373-8d0d-b0e25345dc90", APIVersion:"v1", ResourceVersion:"29261", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.55/23] from ovn-kubernetes 2025-08-13T20:00:23.693962418+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.694021390+00:00 stderr P Add: openshift-console:console-5d9678894c-wx62n:384ed0e8-86e4-42df-bd2c-604c1f536a15:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"612e7824c92f4db","mac":"2a:0d:bb:e8:fc:b3"},{"name":"eth0","mac":"0a:58:0a:d9:00:39","sandbox":"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.57/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.694045971+00:00 stderr F 2025-08-13T20:00:23.694451672+00:00 stderr F I0813 20:00:23.694330 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-5d9678894c-wx62n", UID:"384ed0e8-86e4-42df-bd2c-604c1f536a15", APIVersion:"v1", ResourceVersion:"29333", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.57/23] from ovn-kubernetes 2025-08-13T20:00:23.700320369+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "", err: 2025-08-13T20:00:23.734697240+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL starting CNI request ContainerID:"f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-08-13T20:00:23.735698498+00:00 stderr F 2025-08-13T20:00:23Z [verbose] Del: openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:23.764698625+00:00 stderr P 2025-08-13T20:00:23Z [verbose] 2025-08-13T20:00:23.764912391+00:00 stderr P Add: openshift-controller-manager:controller-manager-67685c4459-7p2h8:a560ec6a-586f-403c-a08e-e3a76fa1b7fd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51aea926a857cd4","mac":"9e:95:94:ba:9d:71"},{"name":"eth0","mac":"0a:58:0a:d9:00:3a","sandbox":"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.58/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:23.765013234+00:00 stderr F 2025-08-13T20:00:23.765593531+00:00 stderr F I0813 20:00:23.765564 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-67685c4459-7p2h8", UID:"a560ec6a-586f-403c-a08e-e3a76fa1b7fd", APIVersion:"v1", ResourceVersion:"29411", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.58/23] from ovn-kubernetes 2025-08-13T20:00:23.818472889+00:00 stderr F 2025-08-13T20:00:23Z [verbose] ADD finished CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:a0:3c:7f:f6:89\",\"name\":\"1f55b781eeb63db\"},{\"mac\":\"0a:58:0a:d9:00:38\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588\"}],\"ips\":[{\"address\":\"10.217.0.56/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:23.895917767+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"", result: "", err: 2025-08-13T20:00:23.993536050+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "", err: 2025-08-13T20:00:24.155722865+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-08-13T20:00:24.158038961+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:24.250949330+00:00 stderr P 2025-08-13T20:00:24Z [verbose] 2025-08-13T20:00:24.282009636+00:00 stderr F ADD finished CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:44:eb:8d:8e:6a\",\"name\":\"9b70547ed21fdd5\"},{\"mac\":\"0a:58:0a:d9:00:37\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d\"}],\"ips\":[{\"address\":\"10.217.0.55/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:24.282009636+00:00 stderr F 2025-08-13T20:00:24Z [verbose] ADD finished CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:0d:bb:e8:fc:b3\",\"name\":\"612e7824c92f4db\"},{\"mac\":\"0a:58:0a:d9:00:39\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0\"}],\"ips\":[{\"address\":\"10.217.0.57/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:24.282009636+00:00 stderr F 2025-08-13T20:00:24Z [verbose] ADD finished CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" 
Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:95:94:ba:9d:71\",\"name\":\"51aea926a857cd4\"},{\"mac\":\"0a:58:0a:d9:00:3a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d\"}],\"ips\":[{\"address\":\"10.217.0.58/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:24.344766175+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL finished CNI request ContainerID:"657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "", err: 2025-08-13T20:00:24.388626996+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-08-13T20:00:24.400294769+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:24.770294289+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL finished CNI request ContainerID:"4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "", err: 2025-08-13T20:00:24.831022011+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-08-13T20:00:24.837189417+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 
2025-08-13T20:00:26.002482674+00:00 stderr F 2025-08-13T20:00:26Z [verbose] DEL finished CNI request ContainerID:"defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "", err: 2025-08-13T20:00:26.422336186+00:00 stderr F 2025-08-13T20:00:26Z [verbose] DEL starting CNI request ContainerID:"6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-08-13T20:00:26.434518494+00:00 stderr F 2025-08-13T20:00:26Z [verbose] Del: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:27.168170163+00:00 stderr F 2025-08-13T20:00:27Z [verbose] DEL starting CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"" 2025-08-13T20:00:27.168170163+00:00 stderr F 2025-08-13T20:00:27Z [verbose] Del: openshift-controller-manager:controller-manager-67685c4459-7p2h8:a560ec6a-586f-403c-a08e-e3a76fa1b7fd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:27.794460121+00:00 stderr F 2025-08-13T20:00:27Z [verbose] DEL starting CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"" 2025-08-13T20:00:27.794704278+00:00 stderr F 2025-08-13T20:00:27Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-6cfd9fc8fc-7sbzw:1713e8bc-bab0-49a8-8618-9ded2e18906c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:28.483037086+00:00 stderr P 2025-08-13T20:00:28Z [verbose] 2025-08-13T20:00:28.483116038+00:00 stderr P ADD starting 
CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"" 2025-08-13T20:00:28.483149849+00:00 stderr F 2025-08-13T20:00:28.644932272+00:00 stderr F 2025-08-13T20:00:28Z [verbose] ADD starting CNI request ContainerID:"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e" Netns:"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-08-13T20:00:28.787764605+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL finished CNI request ContainerID:"6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "", err: 2025-08-13T20:00:28.855300300+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL finished CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"", result: "", err: 2025-08-13T20:00:28.919684786+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL starting CNI request ContainerID:"9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-08-13T20:00:28.922399074+00:00 stderr F 2025-08-13T20:00:28Z [verbose] Del: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:29.981503632+00:00 stderr F 2025-08-13T20:00:29Z [verbose] ADD starting CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"" 2025-08-13T20:00:30.201077733+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL finished CNI request 
ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"", result: "", err: 2025-08-13T20:00:30.542571871+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL finished CNI request ContainerID:"9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "", err: 2025-08-13T20:00:30.750113158+00:00 stderr P 2025-08-13T20:00:30Z [verbose] 2025-08-13T20:00:30.750212571+00:00 stderr P Add: openshift-image-registry:image-registry-7cbd5666ff-bbfrf:42b6a393-6194-4620-bf8f-7e4b6cbe5679:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"958ba1ee8e9afa1","mac":"72:4f:d8:41:fe:b2"},{"name":"eth0","mac":"0a:58:0a:d9:00:26","sandbox":"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.38/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:30.750352425+00:00 stderr F 2025-08-13T20:00:30.757356695+00:00 stderr F I0813 20:00:30.750923 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-7cbd5666ff-bbfrf", UID:"42b6a393-6194-4620-bf8f-7e4b6cbe5679", APIVersion:"v1", ResourceVersion:"27607", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.38/23] from ovn-kubernetes 2025-08-13T20:00:30.851936702+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL starting CNI request ContainerID:"e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T20:00:30.851936702+00:00 stderr F 2025-08-13T20:00:30Z [verbose] Del: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:30.863932404+00:00 stderr F 2025-08-13T20:00:30Z [verbose] ADD finished CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:4f:d8:41:fe:b2\",\"name\":\"958ba1ee8e9afa1\"},{\"mac\":\"0a:58:0a:d9:00:26\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda\"}],\"ips\":[{\"address\":\"10.217.0.38/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:30.912946971+00:00 stderr F 2025-08-13T20:00:30Z [verbose] Add: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7356b549b0982e9","mac":"1a:15:48:83:02:58"},{"name":"eth0","mac":"0a:58:0a:d9:00:3b","sandbox":"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.59/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:30.912946971+00:00 stderr F I0813 20:00:30.912354 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75779c45fd-v2j2v", UID:"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319", APIVersion:"v1", ResourceVersion:"29604", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.59/23] from ovn-kubernetes 2025-08-13T20:00:30.967560289+00:00 stderr F 2025-08-13T20:00:30Z [verbose] ADD finished CNI request ContainerID:"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e" Netns:"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:15:48:83:02:58\",\"name\":\"7356b549b0982e9\"},{\"mac\":\"0a:58:0a:d9:00:3b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f\"}],\"ips\":[{\"address\":\"10.217.0.59/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:31.002266648+00:00 stderr P 2025-08-13T20:00:30Z [verbose] 2025-08-13T20:00:31.002340550+00:00 stderr P Add: openshift-controller-manager:controller-manager-78589965b8-vmcwt:00d32440-4cce-4609-96f3-51ac94480aab:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"97945bb2ed21e57","mac":"de:38:c2:b4:d8:3a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3c","sandbox":"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.60/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:31.002466734+00:00 stderr F 2025-08-13T20:00:31.003170114+00:00 stderr F I0813 20:00:31.003086 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-78589965b8-vmcwt", UID:"00d32440-4cce-4609-96f3-51ac94480aab", APIVersion:"v1", ResourceVersion:"29670", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.60/23] from ovn-kubernetes 2025-08-13T20:00:31.082701722+00:00 stderr F 2025-08-13T20:00:31Z [verbose] ADD finished CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:38:c2:b4:d8:3a\",\"name\":\"97945bb2ed21e57\"},{\"mac\":\"0a:58:0a:d9:00:3c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52\"}],\"ips\":[{\"address\":\"10.217.0.60/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:32.195444221+00:00 stderr F 2025-08-13T20:00:32Z [verbose] DEL finished CNI request ContainerID:"e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "", err: 2025-08-13T20:00:32.558349439+00:00 stderr F 2025-08-13T20:00:32Z [verbose] DEL starting CNI request ContainerID:"9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-08-13T20:00:32.651056871+00:00 stderr F 2025-08-13T20:00:32Z [verbose] Del: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:34.140405589+00:00 stderr F 2025-08-13T20:00:34Z [verbose] ADD starting CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"" 2025-08-13T20:00:34.673187000+00:00 stderr F 2025-08-13T20:00:34Z [verbose] DEL finished CNI request ContainerID:"9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "", err: 2025-08-13T20:00:34.835387915+00:00 stderr F 2025-08-13T20:00:34Z [verbose] DEL starting CNI request ContainerID:"fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-08-13T20:00:34.844419193+00:00 stderr F 2025-08-13T20:00:34Z [verbose] Del: 
openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:35.948650999+00:00 stderr P 2025-08-13T20:00:35Z [verbose] 2025-08-13T20:00:35.948741002+00:00 stderr P Add: openshift-route-controller-manager:route-controller-manager-846977c6bc-7gjhh:ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7b8bdc9f188dc33","mac":"76:3b:e8:1c:21:23"},{"name":"eth0","mac":"0a:58:0a:d9:00:41","sandbox":"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.65/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:35.948823624+00:00 stderr F 2025-08-13T20:00:35.967116546+00:00 stderr F I0813 20:00:35.953067 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-846977c6bc-7gjhh", UID:"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d", APIVersion:"v1", ResourceVersion:"29760", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.65/23] from ovn-kubernetes 2025-08-13T20:00:36.106877411+00:00 stderr F 2025-08-13T20:00:36Z [verbose] DEL finished CNI request ContainerID:"fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "", err: 2025-08-13T20:00:36.200960723+00:00 stderr P 2025-08-13T20:00:36Z [verbose] 2025-08-13T20:00:36.201043076+00:00 stderr P DEL starting CNI request ContainerID:"9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-08-13T20:00:36.201067356+00:00 stderr F 2025-08-13T20:00:36.214635213+00:00 stderr P 2025-08-13T20:00:36Z [verbose] 2025-08-13T20:00:36.214699455+00:00 stderr P Del: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:36.214723896+00:00 stderr F 2025-08-13T20:00:36.550105928+00:00 stderr F 2025-08-13T20:00:36Z [verbose] ADD finished CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:3b:e8:1c:21:23\",\"name\":\"7b8bdc9f188dc33\"},{\"mac\":\"0a:58:0a:d9:00:41\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28\"}],\"ips\":[{\"address\":\"10.217.0.65/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:37.125114874+00:00 stderr F 2025-08-13T20:00:37Z [verbose] DEL finished CNI request ContainerID:"9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "", err: 2025-08-13T20:00:37.215378848+00:00 stderr F 2025-08-13T20:00:37Z [verbose] DEL starting CNI request ContainerID:"fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T20:00:37.262763049+00:00 stderr F 2025-08-13T20:00:37Z [verbose] Del: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:38.651627631+00:00 stderr P 2025-08-13T20:00:38Z [verbose] 2025-08-13T20:00:38.651700183+00:00 stderr P DEL finished CNI request ContainerID:"fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "", err: 2025-08-13T20:00:38.651743074+00:00 stderr F 2025-08-13T20:00:38.821651149+00:00 stderr P 2025-08-13T20:00:38Z [verbose] 2025-08-13T20:00:38.821763572+00:00 stderr P ADD starting CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"" 2025-08-13T20:00:38.827408263+00:00 stderr F 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [verbose] DEL starting CNI request ContainerID:"16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61;K8S_POD_UID=7fc6841e-1b13-42dc-8470-506b09b9d82d" Path:"" 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61: no such file or directory, 
cannot properly delete 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [verbose] DEL finished CNI request ContainerID:"16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61;K8S_POD_UID=7fc6841e-1b13-42dc-8470-506b09b9d82d" Path:"", result: "", err: 2025-08-13T20:00:41.080083045+00:00 stderr F 2025-08-13T20:00:41Z [verbose] DEL starting CNI request ContainerID:"8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-08-13T20:00:41.080083045+00:00 stderr F 2025-08-13T20:00:41Z [verbose] Del: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:42.117281860+00:00 stderr F 2025-08-13T20:00:42Z [verbose] ADD starting CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"" 2025-08-13T20:00:42.225946918+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL starting CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T20:00:42.228035068+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Del: openshift-authentication:oauth-openshift-765b47f944-n2lhl:13ad7555-5f28-4555-a563-892713a8433a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:42.356362727+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL finished CNI request ContainerID:"8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "", err: 2025-08-13T20:00:42.478464869+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Add: openshift-kube-scheduler:installer-7-crc:b57cce81-8ea0-4c4d-aae1-ee024d201c15:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"639e0e9093fe7c9","mac":"d2:b3:7b:29:cb:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:43","sandbox":"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.67/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:42.479200910+00:00 stderr F I0813 20:00:42.479162 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler", Name:"installer-7-crc", UID:"b57cce81-8ea0-4c4d-aae1-ee024d201c15", APIVersion:"v1", ResourceVersion:"29872", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.67/23] from ovn-kubernetes 2025-08-13T20:00:42.730766093+00:00 stderr P 2025-08-13T20:00:42Z [verbose] 2025-08-13T20:00:42.730891296+00:00 stderr P DEL starting CNI request ContainerID:"da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-585546dd8b-v5m4t;K8S_POD_INFRA_CONTAINER_ID=da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02;K8S_POD_UID=c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Path:"" 2025-08-13T20:00:42.730916937+00:00 stderr F 2025-08-13T20:00:42.941020918+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-10-crc:2f155735-a9be-4621-a5f2-5ab4b6957acd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c05ff35bd00034f","mac":"d2:80:43:7f:e4:85"},{"name":"eth0","mac":"0a:58:0a:d9:00:44","sandbox":"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.68/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:42.944025443+00:00 stderr F I0813 20:00:42.941635 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-10-crc", UID:"2f155735-a9be-4621-a5f2-5ab4b6957acd", APIVersion:"v1", ResourceVersion:"29896", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.68/23] from ovn-kubernetes 2025-08-13T20:00:42.950006964+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL finished CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "", err: 2025-08-13T20:00:42.992405573+00:00 stderr P 2025-08-13T20:00:42Z [verbose] 2025-08-13T20:00:42.992465665+00:00 stderr P ADD starting CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"" 2025-08-13T20:00:42.992498166+00:00 stderr F 2025-08-13T20:00:43.013136444+00:00 stderr F 2025-08-13T20:00:43Z [verbose] Del: openshift-image-registry:image-registry-585546dd8b-v5m4t:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:43.393009776+00:00 stderr F 2025-08-13T20:00:43Z [verbose] DEL finished CNI request ContainerID:"da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-585546dd8b-v5m4t;K8S_POD_INFRA_CONTAINER_ID=da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02;K8S_POD_UID=c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Path:"", result: "", err: 2025-08-13T20:00:43.421580150+00:00 stderr P 2025-08-13T20:00:43Z [verbose] 2025-08-13T20:00:43.421641321+00:00 stderr P Add: openshift-kube-controller-manager:installer-10-crc:79050916-d488-4806-b556-1b0078b31e53:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5d98545d20b610","mac":"1e:0a:7c:9b:81:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:45","sandbox":"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.69/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:43.421713993+00:00 stderr F 2025-08-13T20:00:43.422160696+00:00 stderr F I0813 20:00:43.422095 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-10-crc", UID:"79050916-d488-4806-b556-1b0078b31e53", APIVersion:"v1", ResourceVersion:"29912", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.69/23] from ovn-kubernetes 2025-08-13T20:00:43.680240285+00:00 stderr F 2025-08-13T20:00:43Z [verbose] DEL starting CNI request ContainerID:"5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-08-13T20:00:43.758502797+00:00 stderr F 2025-08-13T20:00:43Z [verbose] Del: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:44.114893609+00:00 stderr F 2025-08-13T20:00:44Z [verbose] DEL finished CNI request ContainerID:"5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "", err: 2025-08-13T20:00:44.184127013+00:00 stderr F 2025-08-13T20:00:44Z [verbose] DEL starting CNI request ContainerID:"a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-08-13T20:00:44.217335970+00:00 stderr F 2025-08-13T20:00:44Z [verbose] Del: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:45.211890339+00:00 stderr F 2025-08-13T20:00:45Z [verbose] DEL finished CNI request ContainerID:"a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "", err: 2025-08-13T20:00:45.672314437+00:00 stderr P 2025-08-13T20:00:45Z [verbose] 2025-08-13T20:00:45.672434441+00:00 stderr P DEL starting CNI request ContainerID:"5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-08-13T20:00:45.672463122+00:00 stderr F 2025-08-13T20:00:45.697065183+00:00 stderr P 2025-08-13T20:00:45Z [verbose] 2025-08-13T20:00:45.697150805+00:00 stderr P Del: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:45.697175946+00:00 stderr F 2025-08-13T20:00:45.805306359+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:0a:7c:9b:81:94\",\"name\":\"c5d98545d20b610\"},{\"mac\":\"0a:58:0a:d9:00:45\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f\"}],\"ips\":[{\"address\":\"10.217.0.69/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:45.911488567+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:80:43:7f:e4:85\",\"name\":\"c05ff35bd00034f\"},{\"mac\":\"0a:58:0a:d9:00:44\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5\"}],\"ips\":[{\"address\":\"10.217.0.68/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:45.911685283+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:b3:7b:29:cb:9c\",\"name\":\"639e0e9093fe7c9\"},{\"mac\":\"0a:58:0a:d9:00:43\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11\"}],\"ips\":[{\"address\":\"10.217.0.67/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:46.769819461+00:00 stderr F 2025-08-13T20:00:46Z [verbose] ADD starting CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"" 2025-08-13T20:00:47.097347730+00:00 stderr F 2025-08-13T20:00:47Z [verbose] DEL finished CNI request ContainerID:"5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: "", err: 2025-08-13T20:00:47.471940481+00:00 stderr F 2025-08-13T20:00:47Z [verbose] DEL starting CNI request ContainerID:"7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-08-13T20:00:47.557602813+00:00 stderr F 2025-08-13T20:00:47Z [verbose] Del: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:50.754210741+00:00 stderr F 2025-08-13T20:00:50Z [verbose] Add: 
openshift-apiserver:apiserver-67cbf64bc9-jjfds:b23d6435-6431-4905-b41b-a517327385e5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"411add17e78de78","mac":"86:a5:06:68:9f:58"},{"name":"eth0","mac":"0a:58:0a:d9:00:46","sandbox":"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.70/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:50.754265362+00:00 stderr F I0813 20:00:50.754227 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-67cbf64bc9-jjfds", UID:"b23d6435-6431-4905-b41b-a517327385e5", APIVersion:"v1", ResourceVersion:"29962", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.70/23] from ovn-kubernetes 2025-08-13T20:00:50.832530764+00:00 stderr F 2025-08-13T20:00:50Z [verbose] DEL finished CNI request ContainerID:"7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: "", err: 2025-08-13T20:00:51.690436508+00:00 stderr F 2025-08-13T20:00:51Z [verbose] DEL starting CNI request ContainerID:"c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T20:00:51.757410636+00:00 stderr F 2025-08-13T20:00:51Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-mtx25:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:52.963080514+00:00 stderr F 2025-08-13T20:00:52Z [verbose] ADD starting CNI request ContainerID:"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404" Netns:"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"" 2025-08-13T20:00:53.012391360+00:00 stderr F 2025-08-13T20:00:53Z [verbose] DEL finished CNI request ContainerID:"c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "", err: 2025-08-13T20:00:54.999966983+00:00 stderr F 2025-08-13T20:00:54Z [verbose] Add: openshift-authentication:oauth-openshift-74fc7c67cc-xqf8b:01feb2e0-a0f4-4573-8335-34e364e0ef40:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"ca33bd29c9a026f","mac":"d2:28:de:94:9a:bd"},{"name":"eth0","mac":"0a:58:0a:d9:00:48","sandbox":"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.72/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:54.999966983+00:00 stderr F I0813 20:00:54.997914 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-74fc7c67cc-xqf8b", UID:"01feb2e0-a0f4-4573-8335-34e364e0ef40", APIVersion:"v1", ResourceVersion:"30093", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.72/23] from ovn-kubernetes 2025-08-13T20:00:55.784357859+00:00 stderr P 2025-08-13T20:00:55Z [verbose] 2025-08-13T20:00:55.784484773+00:00 stderr P DEL starting CNI request ContainerID:"df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-08-13T20:00:55.784519304+00:00 stderr F 2025-08-13T20:00:55.801439626+00:00 stderr P 2025-08-13T20:00:55Z [verbose] 2025-08-13T20:00:55.801514399+00:00 stderr P Del: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:55.801574770+00:00 stderr F 2025-08-13T20:00:57.685908652+00:00 stderr P 2025-08-13T20:00:57Z [verbose] 2025-08-13T20:00:57.686257362+00:00 stderr P DEL finished CNI request ContainerID:"df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "", err: 2025-08-13T20:00:57.686300333+00:00 stderr F 2025-08-13T20:00:59.625135578+00:00 stderr F 2025-08-13T20:00:59Z [verbose] DEL starting CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"" 2025-08-13T20:00:59.626535218+00:00 stderr F 2025-08-13T20:00:59Z [verbose] Del: openshift-kube-controller-manager:installer-9-crc:227e3650-2a85-4229-8099-bb53972635b2:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:59.949886749+00:00 stderr F 2025-08-13T20:00:59Z [verbose] ADD finished CNI request ContainerID:"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404" Netns:"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:28:de:94:9a:bd\",\"name\":\"ca33bd29c9a026f\"},{\"mac\":\"0a:58:0a:d9:00:48\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931\"}],\"ips\":[{\"address\":\"10.217.0.72/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:59.955906181+00:00 stderr F 2025-08-13T20:00:59Z [verbose] ADD finished CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"86:a5:06:68:9f:58\",\"name\":\"411add17e78de78\"},{\"mac\":\"0a:58:0a:d9:00:46\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b\"}],\"ips\":[{\"address\":\"10.217.0.70/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:01:00.262048961+00:00 stderr P 2025-08-13T20:01:00Z [verbose] 2025-08-13T20:01:00.262101852+00:00 stderr P DEL starting CNI request ContainerID:"2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-08-13T20:01:00.262127363+00:00 stderr F 2025-08-13T20:01:00.306681903+00:00 stderr P 2025-08-13T20:01:00Z [verbose] 2025-08-13T20:01:00.306880019+00:00 stderr P Del: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:00.306920850+00:00 stderr F 2025-08-13T20:01:01.496994785+00:00 stderr F 2025-08-13T20:01:01Z [verbose] DEL finished CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"", result: "", err: 2025-08-13T20:01:01.818888194+00:00 stderr F 2025-08-13T20:01:01Z [verbose] DEL finished CNI request ContainerID:"2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "", err: 2025-08-13T20:01:04.274439165+00:00 stderr P 2025-08-13T20:01:04Z [verbose] 2025-08-13T20:01:04.274510537+00:00 stderr P DEL starting CNI request ContainerID:"432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-08-13T20:01:04.274656522+00:00 stderr F 2025-08-13T20:01:04.279231072+00:00 stderr P 2025-08-13T20:01:04Z [verbose] 2025-08-13T20:01:04.279270203+00:00 stderr P Del: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:04.279294144+00:00 stderr F 2025-08-13T20:01:06.005889996+00:00 stderr P 2025-08-13T20:01:06Z [verbose] 2025-08-13T20:01:06.006018259+00:00 stderr P DEL finished CNI request ContainerID:"432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: "", err: 2025-08-13T20:01:06.006046850+00:00 stderr F 2025-08-13T20:01:07.641698699+00:00 stderr F 2025-08-13T20:01:07Z [verbose] DEL starting CNI request ContainerID:"33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-08-13T20:01:07.655044230+00:00 stderr F 2025-08-13T20:01:07Z [verbose] Del: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:11.315883054+00:00 stderr F 2025-08-13T20:01:11Z [verbose] DEL finished CNI request ContainerID:"33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "", err: 2025-08-13T20:01:11.611486543+00:00 stderr P 2025-08-13T20:01:11Z [verbose] 2025-08-13T20:01:11.611557375+00:00 stderr P DEL starting CNI request ContainerID:"dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1" Netns:"" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-08-13T20:01:11.611620457+00:00 stderr F 2025-08-13T20:01:11.640926123+00:00 stderr P 2025-08-13T20:01:11Z [verbose] 2025-08-13T20:01:11.641012095+00:00 stderr P Del: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:11.641042516+00:00 stderr F 2025-08-13T20:01:12.677367365+00:00 stderr P 2025-08-13T20:01:12Z [verbose] 2025-08-13T20:01:12.677746566+00:00 stderr P DEL finished CNI request ContainerID:"dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "", err: 2025-08-13T20:01:12.677957122+00:00 stderr F 2025-08-13T20:01:15.978545495+00:00 stderr F 2025-08-13T20:01:15Z [verbose] DEL starting CNI request ContainerID:"0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-08-13T20:01:16.068195151+00:00 stderr P 2025-08-13T20:01:16Z [verbose] 2025-08-13T20:01:16.068257523+00:00 stderr P DEL starting CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"" 2025-08-13T20:01:16.068284344+00:00 stderr F 2025-08-13T20:01:16.068958033+00:00 stderr P 2025-08-13T20:01:16Z [verbose] 2025-08-13T20:01:16.069011444+00:00 stderr P Del: openshift-kube-controller-manager:revision-pruner-10-crc:2f155735-a9be-4621-a5f2-5ab4b6957acd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:16.069044455+00:00 stderr F 2025-08-13T20:01:16.129297173+00:00 stderr F 2025-08-13T20:01:16Z [verbose] Del: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 
2025-08-13T20:01:17.040373572+00:00 stderr F 2025-08-13T20:01:17Z [verbose] DEL finished CNI request ContainerID:"0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "", err: 2025-08-13T20:01:17.168158526+00:00 stderr P 2025-08-13T20:01:17Z [verbose] 2025-08-13T20:01:17.168284669+00:00 stderr P DEL finished CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"", result: "", err: 2025-08-13T20:01:17.168357521+00:00 stderr F 2025-08-13T20:01:18.589913375+00:00 stderr F 2025-08-13T20:01:18Z [verbose] DEL starting CNI request ContainerID:"878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-08-13T20:01:18.595353010+00:00 stderr F 2025-08-13T20:01:18Z [verbose] Del: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:19.197903550+00:00 stderr F 2025-08-13T20:01:19Z [verbose] DEL finished CNI request ContainerID:"878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "", err: 2025-08-13T20:01:19.430217025+00:00 stderr P 2025-08-13T20:01:19Z [verbose] 2025-08-13T20:01:19.430279626+00:00 stderr P DEL starting CNI request ContainerID:"21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-08-13T20:01:19.430305647+00:00 stderr F 2025-08-13T20:01:19.469115584+00:00 stderr P 2025-08-13T20:01:19Z [verbose] 2025-08-13T20:01:19.469181956+00:00 stderr P Del: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:19.469207707+00:00 stderr F 2025-08-13T20:01:20.235586379+00:00 stderr F 2025-08-13T20:01:20Z [verbose] DEL finished CNI request ContainerID:"21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "", err: 2025-08-13T20:01:20.796826942+00:00 stderr F 2025-08-13T20:01:20Z [verbose] DEL starting CNI request ContainerID:"2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-08-13T20:01:20.829113883+00:00 stderr F 2025-08-13T20:01:20Z [verbose] Del: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:21.226157394+00:00 stderr F 2025-08-13T20:01:21Z [verbose] DEL finished CNI request ContainerID:"2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "", err: 2025-08-13T20:01:22.043060597+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL starting CNI request ContainerID:"3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-08-13T20:01:22.078591830+00:00 stderr F 2025-08-13T20:01:22Z [verbose] Del: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:22.231411208+00:00 stderr F 2025-08-13T20:01:22Z [verbose] ADD starting CNI request ContainerID:"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Netns:"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"" 2025-08-13T20:01:22.447936762+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL finished CNI request ContainerID:"3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "", err: 2025-08-13T20:01:22.762623774+00:00 stderr F 2025-08-13T20:01:22Z [verbose] Add: openshift-console:console-644bb77b49-5x5xk:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"48ddb06f60b4f68","mac":"2e:65:98:07:c8:3f"},{"name":"eth0","mac":"0a:58:0a:d9:00:49","sandbox":"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.73/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:01:22.763233211+00:00 stderr F I0813 20:01:22.763087 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-644bb77b49-5x5xk", UID:"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1", APIVersion:"v1", ResourceVersion:"30362", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.73/23] from ovn-kubernetes 2025-08-13T20:01:22.999714184+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL starting CNI request ContainerID:"2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-08-13T20:01:23.071366437+00:00 stderr F 2025-08-13T20:01:23Z [verbose] Del: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:23.408762758+00:00 stderr F 2025-08-13T20:01:23Z [verbose] DEL finished CNI request ContainerID:"2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "", err: 2025-08-13T20:01:23.633597139+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:23.634064242+00:00 stderr P ADD finished CNI request ContainerID:"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Netns:"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:65:98:07:c8:3f\",\"name\":\"48ddb06f60b4f68\"},{\"mac\":\"0a:58:0a:d9:00:49\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d\"}],\"ips\":[{\"address\":\"10.217.0.73/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:01:23.634102513+00:00 stderr F 2025-08-13T20:01:23.794134056+00:00 stderr F 2025-08-13T20:01:23Z [verbose] DEL starting CNI request ContainerID:"2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-08-13T20:01:23.798542252+00:00 stderr F 2025-08-13T20:01:23Z [verbose] Del: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:23.997263328+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:23.997368721+00:00 stderr P DEL starting CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"" 2025-08-13T20:01:23.997394662+00:00 stderr F 2025-08-13T20:01:24.002935580+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:24.003018803+00:00 stderr P Del: openshift-controller-manager:controller-manager-78589965b8-vmcwt:00d32440-4cce-4609-96f3-51ac94480aab:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:24.003107155+00:00 stderr F 2025-08-13T20:01:24.207200205+00:00 stderr F 2025-08-13T20:01:24Z [verbose] DEL finished CNI request ContainerID:"2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "", err: 2025-08-13T20:01:24.405154259+00:00 stderr P 2025-08-13T20:01:24Z [verbose] 2025-08-13T20:01:24.405207660+00:00 stderr P DEL finished CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" 
Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"", result: "", err: 2025-08-13T20:01:24.405232041+00:00 stderr F 2025-08-13T20:01:24.676881827+00:00 stderr F 2025-08-13T20:01:24Z [verbose] DEL starting CNI request ContainerID:"d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-08-13T20:01:24.977894500+00:00 stderr F 2025-08-13T20:01:24Z [verbose] Del: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:25.162886715+00:00 stderr F 2025-08-13T20:01:25Z [verbose] DEL finished CNI request ContainerID:"d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "", err: 2025-08-13T20:01:25.410320910+00:00 stderr F 2025-08-13T20:01:25Z [verbose] DEL starting CNI request ContainerID:"c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-08-13T20:01:25.473106201+00:00 stderr F 2025-08-13T20:01:25Z [verbose] Del: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:26.572894659+00:00 stderr F 2025-08-13T20:01:26Z [verbose] DEL finished CNI request ContainerID:"c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "", err: 2025-08-13T20:01:27.222996886+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL starting CNI request ContainerID:"1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-08-13T20:01:27.225452106+00:00 stderr F 2025-08-13T20:01:27Z [verbose] Del: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:27.256063179+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL starting CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"" 2025-08-13T20:01:27.256524022+00:00 stderr F 2025-08-13T20:01:27Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-846977c6bc-7gjhh:ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:27.767518242+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL finished CNI request ContainerID:"1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "", err: 2025-08-13T20:01:27.879041363+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL finished CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"", result: "", err: 2025-08-13T20:01:28.293605073+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL starting CNI request ContainerID:"5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T20:01:28.599038402+00:00 stderr F 2025-08-13T20:01:28Z [verbose] Del: openshift-authentication:oauth-openshift-765b47f944-n2lhl:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:28.839098428+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL finished CNI request ContainerID:"5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "", err: 2025-08-13T20:01:28.913564581+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL starting CNI request ContainerID:"2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T20:01:29.073348387+00:00 stderr F 2025-08-13T20:01:29Z [verbose] Del: openshift-service-ca:service-ca-666f99b6f-vlbxv:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:29.443305406+00:00 stderr F 2025-08-13T20:01:29Z [verbose] DEL finished CNI request ContainerID:"2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "", err: 2025-08-13T20:01:29.693927872+00:00 stderr P 2025-08-13T20:01:29Z [verbose] 2025-08-13T20:01:29.694009855+00:00 stderr P DEL starting CNI request ContainerID:"dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-08-13T20:01:29.694036865+00:00 stderr F 2025-08-13T20:01:29.695198629+00:00 stderr P 2025-08-13T20:01:29Z [verbose] 2025-08-13T20:01:29.695294501+00:00 stderr P Del: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:29.695434705+00:00 stderr F 2025-08-13T20:01:30.418929144+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL finished CNI request ContainerID:"dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "", err: 2025-08-13T20:01:30.747711369+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL starting CNI request ContainerID:"a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-08-13T20:01:30.750870159+00:00 stderr F 2025-08-13T20:01:30Z [verbose] Del: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:30.992747486+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL finished CNI request ContainerID:"a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "", err: 2025-08-13T20:01:31.345338550+00:00 stderr F 2025-08-13T20:01:31Z [verbose] DEL starting CNI request ContainerID:"0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-08-13T20:01:31.353528964+00:00 stderr F 2025-08-13T20:01:31Z [verbose] Del: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:31.967512002+00:00 stderr F 2025-08-13T20:01:31Z [verbose] DEL finished CNI request ContainerID:"0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "", err: 2025-08-13T20:01:32.461093585+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.461155507+00:00 stderr P DEL starting CNI request ContainerID:"b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-08-13T20:01:32.461180178+00:00 stderr F 2025-08-13T20:01:32.509104104+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.509161716+00:00 stderr P Del: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:32.509186517+00:00 stderr F 2025-08-13T20:01:32.840936847+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.840995988+00:00 stderr P DEL finished CNI request ContainerID:"b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: "", err: 2025-08-13T20:01:32.841020399+00:00 stderr F 2025-08-13T20:01:32.931127618+00:00 stderr F 2025-08-13T20:01:32Z [verbose] DEL starting CNI request ContainerID:"d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-08-13T20:01:32.957988004+00:00 stderr F 2025-08-13T20:01:32Z [verbose] Del: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:33.181215610+00:00 stderr F 2025-08-13T20:01:33Z [verbose] DEL finished CNI request ContainerID:"d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "", err: 2025-08-13T20:01:33.285728640+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.285874134+00:00 stderr P DEL starting CNI request ContainerID:"af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-08-13T20:01:33.285905965+00:00 stderr F 2025-08-13T20:01:33.301964453+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.302017954+00:00 stderr P Del: 
hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:33.302042765+00:00 stderr F 2025-08-13T20:01:33.561984896+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.562075539+00:00 stderr P DEL finished CNI request ContainerID:"af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: "", err: 2025-08-13T20:01:33.562102600+00:00 stderr F 2025-08-13T20:02:24.117412736+00:00 stderr F 2025-08-13T20:02:24Z [verbose] DEL starting CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"" 2025-08-13T20:02:24.118743814+00:00 stderr F 2025-08-13T20:02:24Z [verbose] Del: openshift-kube-apiserver:installer-9-crc:2ad657a4-8b02-4373-8d0d-b0e25345dc90:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:24.384304969+00:00 stderr F 2025-08-13T20:02:24Z [verbose] DEL finished CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"", result: "", err: 2025-08-13T20:02:25.102333633+00:00 stderr F 2025-08-13T20:02:25Z [verbose] DEL starting CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"" 2025-08-13T20:02:25.102500167+00:00 stderr F 2025-08-13T20:02:25Z [verbose] Del: openshift-kube-scheduler:installer-7-crc:b57cce81-8ea0-4c4d-aae1-ee024d201c15:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:25.345992404+00:00 stderr F 2025-08-13T20:02:25Z [verbose] DEL finished CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" 
Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"", result: "", err: 2025-08-13T20:02:33.574743915+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T20:02:33.575285181+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.592362268+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"" 2025-08-13T20:02:33.592662666+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-image-registry:image-registry-7cbd5666ff-bbfrf:42b6a393-6194-4620-bf8f-7e4b6cbe5679:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.602590850+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T20:02:33.602590850+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-mtx25:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.797138049+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "", err: 2025-08-13T20:02:33.958714969+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "", err: 2025-08-13T20:02:33.961291092+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"", result: "", err: 2025-08-13T20:02:34.574369441+00:00 stderr F 2025-08-13T20:02:34Z [verbose] DEL starting CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"" 2025-08-13T20:02:34.574644859+00:00 stderr F 2025-08-13T20:02:34Z [verbose] Del: openshift-kube-controller-manager:installer-10-crc:79050916-d488-4806-b556-1b0078b31e53:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:34.770582238+00:00 stderr F 2025-08-13T20:02:34Z [verbose] DEL finished CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"", result: "", err: 2025-08-13T20:05:14.544423840+00:00 stderr P 2025-08-13T20:05:14Z [verbose] 2025-08-13T20:05:14.544546203+00:00 stderr P ADD starting CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:14.544572164+00:00 stderr F 2025-08-13T20:05:14.562162048+00:00 stderr F 2025-08-13T20:05:14Z [verbose] ADD starting CNI request 
ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:17.045696786+00:00 stderr P 2025-08-13T20:05:17Z [error] 2025-08-13T20:05:17.046235931+00:00 stderr P Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found 2025-08-13T20:05:17.046279673+00:00 stderr F 2025-08-13T20:05:17.046380996+00:00 stderr P 2025-08-13T20:05:17Z [verbose] 2025-08-13T20:05:17.046411026+00:00 stderr P ADD finished CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found 2025-08-13T20:05:17.046435127+00:00 stderr F 2025-08-13T20:05:17.065878714+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found 2025-08-13T20:05:17.065878714+00:00 stderr F 2025-08-13T20:05:17Z [verbose] ADD finished CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found 2025-08-13T20:05:17.113344323+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL starting CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:17.115397422+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: failed to get the cached delegates file: open 
/var/lib/cni/multus/ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626: no such file or directory, cannot properly delete 2025-08-13T20:05:17.115397422+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL finished CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL starting CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23: no such file or directory, cannot properly delete 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL finished CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: 2025-08-13T20:05:19.113406156+00:00 stderr F 2025-08-13T20:05:19Z [verbose] ADD starting CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:19.788248221+00:00 stderr F 2025-08-13T20:05:19Z [verbose] ADD starting CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:22.561108354+00:00 stderr F 2025-08-13T20:05:22Z [verbose] Add: openshift-controller-manager:controller-manager-598fc85fd4-8wlsm:8b8d1c48-5762-450f-bd4d-9134869f432b:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"7814bf45dce77ed","mac":"de:08:bb:e3:37:f7"},{"name":"eth0","mac":"0a:58:0a:d9:00:4a","sandbox":"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.74/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:22.561108354+00:00 stderr F I0813 20:05:22.559936 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-598fc85fd4-8wlsm", UID:"8b8d1c48-5762-450f-bd4d-9134869f432b", APIVersion:"v1", ResourceVersion:"31131", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.74/23] from ovn-kubernetes 2025-08-13T20:05:22.604005233+00:00 stderr F 2025-08-13T20:05:22Z [verbose] ADD finished CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:08:bb:e3:37:f7\",\"name\":\"7814bf45dce77ed\"},{\"mac\":\"0a:58:0a:d9:00:4a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742\"}],\"ips\":[{\"address\":\"10.217.0.74/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:05:22.654375065+00:00 stderr F 2025-08-13T20:05:22Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-6884dcf749-n4qpx:becc7e17-2bc7-417d-832f-55127299d70f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"924f68f94ccf00f","mac":"32:fc:e5:39:3f:bc"},{"name":"eth0","mac":"0a:58:0a:d9:00:4b","sandbox":"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.75/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:22.654375065+00:00 stderr F I0813 20:05:22.654203 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-6884dcf749-n4qpx", UID:"becc7e17-2bc7-417d-832f-55127299d70f", APIVersion:"v1", ResourceVersion:"31132", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.75/23] from ovn-kubernetes 2025-08-13T20:05:22.708178386+00:00 stderr F 2025-08-13T20:05:22Z [verbose] ADD finished CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:fc:e5:39:3f:bc\",\"name\":\"924f68f94ccf00f\"},{\"mac\":\"0a:58:0a:d9:00:4b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340\"}],\"ips\":[{\"address\":\"10.217.0.75/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:05:57.593828054+00:00 stderr F 2025-08-13T20:05:57Z [verbose] ADD starting CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" 
Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"" 2025-08-13T20:05:57.913180129+00:00 stderr F 2025-08-13T20:05:57Z [verbose] Add: openshift-kube-controller-manager:installer-10-retry-1-crc:dc02677d-deed-4cc9-bb8c-0dd300f83655:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0d375f365a8fdeb","mac":"1a:07:0f:a8:db:26"},{"name":"eth0","mac":"0a:58:0a:d9:00:4c","sandbox":"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.76/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:57.913180129+00:00 stderr F I0813 20:05:57.907516 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-10-retry-1-crc", UID:"dc02677d-deed-4cc9-bb8c-0dd300f83655", APIVersion:"v1", ResourceVersion:"31897", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.76/23] from ovn-kubernetes 2025-08-13T20:05:57.935943931+00:00 stderr F 2025-08-13T20:05:57Z [verbose] ADD finished CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:07:0f:a8:db:26\",\"name\":\"0d375f365a8fdeb\"},{\"mac\":\"0a:58:0a:d9:00:4c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63\"}],\"ips\":[{\"address\":\"10.217.0.76/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:07.174029432+00:00 stderr F 2025-08-13T20:06:07Z [verbose] DEL starting CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"" 2025-08-13T20:06:07.175242426+00:00 stderr F 2025-08-13T20:06:07Z [verbose] Del: openshift-console:console-5d9678894c-wx62n:384ed0e8-86e4-42df-bd2c-604c1f536a15:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:07.412890992+00:00 stderr F 2025-08-13T20:06:07Z [verbose] DEL finished CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"", result: "", err: 
2025-08-13T20:06:30.919161705+00:00 stderr F 2025-08-13T20:06:30Z [verbose] DEL starting CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"" 2025-08-13T20:06:30.929945744+00:00 stderr F 2025-08-13T20:06:30Z [verbose] Del: openshift-marketplace:redhat-marketplace-rmwfn:9ad279b4-d9dc-42a8-a1c8-a002bd063482:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:31.214078963+00:00 stderr F 2025-08-13T20:06:31Z [verbose] DEL finished CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"", result: "", err: 2025-08-13T20:06:32.106536910+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL starting CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"" 2025-08-13T20:06:32.107840658+00:00 stderr F 2025-08-13T20:06:32Z [verbose] Del: openshift-marketplace:redhat-operators-dcqzh:6db26b71-4e04-4688-a0c0-00e06e8c888d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:32.435188093+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL finished CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"", result: "", err: 2025-08-13T20:06:32.925951714+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL starting CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"" 2025-08-13T20:06:32.928683942+00:00 stderr F 2025-08-13T20:06:32Z [verbose] Del: 
openshift-marketplace:community-operators-k9qqb:ccdf38cf-634a-41a2-9c8b-74bb86af80a7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:33.103497344+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL starting CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"" 2025-08-13T20:06:33.106290584+00:00 stderr F 2025-08-13T20:06:33Z [verbose] Del: openshift-marketplace:certified-operators-g4v97:bb917686-edfb-4158-86ad-6fce0abec64c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:33.192759834+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL finished CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"", result: "", err: 2025-08-13T20:06:33.476751036+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL finished CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"", result: "", err: 2025-08-13T20:06:34.983841015+00:00 stderr F 2025-08-13T20:06:34Z [verbose] DEL starting CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"" 2025-08-13T20:06:34.983841015+00:00 stderr F 2025-08-13T20:06:34Z [verbose] Del: openshift-kube-controller-manager:installer-10-retry-1-crc:dc02677d-deed-4cc9-bb8c-0dd300f83655:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:35.070086737+00:00 stderr F 2025-08-13T20:06:35Z [verbose] ADD starting CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"" 2025-08-13T20:06:35.488599537+00:00 stderr P 2025-08-13T20:06:35Z [verbose] 2025-08-13T20:06:35.493270091+00:00 stderr F DEL finished CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"", result: "", err: 2025-08-13T20:06:35.892583669+00:00 stderr F 2025-08-13T20:06:35Z [verbose] Add: openshift-marketplace:redhat-marketplace-4txfd:af6c965e-9dc8-417a-aa1c-303a50ec9adc:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0ac24e234dbea3f","mac":"9a:93:0b:c4:ff:6b"},{"name":"eth0","mac":"0a:58:0a:d9:00:4d","sandbox":"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.77/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:35.893138495+00:00 stderr F I0813 20:06:35.893084 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-4txfd", UID:"af6c965e-9dc8-417a-aa1c-303a50ec9adc", APIVersion:"v1", ResourceVersion:"32097", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.77/23] from ovn-kubernetes 2025-08-13T20:06:35.925956266+00:00 stderr F 2025-08-13T20:06:35Z [verbose] ADD finished CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:93:0b:c4:ff:6b\",\"name\":\"0ac24e234dbea3f\"},{\"mac\":\"0a:58:0a:d9:00:4d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c\"}],\"ips\":[{\"address\":\"10.217.0.77/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:36.181685578+00:00 stderr F 2025-08-13T20:06:36Z [verbose] ADD starting CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"" 2025-08-13T20:06:36.609644798+00:00 stderr F 2025-08-13T20:06:36Z [verbose] Add: openshift-marketplace:certified-operators-cfdk8:5391dc5d-0f00-4464-b617-b164e2f9b77a:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"93c5c47bf133377","mac":"42:38:8c:ef:03:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:4e","sandbox":"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.78/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:36.609644798+00:00 stderr F I0813 20:06:36.608961 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-cfdk8", UID:"5391dc5d-0f00-4464-b617-b164e2f9b77a", APIVersion:"v1", ResourceVersion:"32114", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.78/23] from ovn-kubernetes 2025-08-13T20:06:36.644232890+00:00 stderr P 2025-08-13T20:06:36Z [verbose] 2025-08-13T20:06:36.644306972+00:00 stderr P ADD finished CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"42:38:8c:ef:03:cd\",\"name\":\"93c5c47bf133377\"},{\"mac\":\"0a:58:0a:d9:00:4e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8\"}],\"ips\":[{\"address\":\"10.217.0.78/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:36.644332063+00:00 stderr F 2025-08-13T20:06:37.252751407+00:00 stderr F 2025-08-13T20:06:37Z [verbose] ADD starting CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"" 2025-08-13T20:06:37.613569952+00:00 stderr F 2025-08-13T20:06:37Z [verbose] Add: openshift-marketplace:redhat-operators-pmqwc:0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3025039c6358002","mac":"c6:bc:20:ad:82:0d"},{"name":"eth0","mac":"0a:58:0a:d9:00:4f","sandbox":"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.79/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:37.615822376+00:00 stderr F I0813 20:06:37.615700 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-pmqwc", UID:"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed", APIVersion:"v1", ResourceVersion:"32132", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.79/23] from ovn-kubernetes 2025-08-13T20:06:37.662039492+00:00 stderr P 2025-08-13T20:06:37Z [verbose] 2025-08-13T20:06:37.662084033+00:00 stderr F ADD finished CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c6:bc:20:ad:82:0d\",\"name\":\"3025039c6358002\"},{\"mac\":\"0a:58:0a:d9:00:4f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0\"}],\"ips\":[{\"address\":\"10.217.0.79/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:38.808915803+00:00 stderr F 2025-08-13T20:06:38Z [verbose] ADD starting CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"" 2025-08-13T20:06:39.261856519+00:00 stderr F 2025-08-13T20:06:39Z [verbose] Add: openshift-marketplace:community-operators-p7svp:8518239d-8dab-48ac-a3c1-e775566b9bff:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4a52c9653485366","mac":"5e:de:da:57:2f:c1"},{"name":"eth0","mac":"0a:58:0a:d9:00:50","sandbox":"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.80/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:39.261856519+00:00 stderr F I0813 20:06:39.258584 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-p7svp", UID:"8518239d-8dab-48ac-a3c1-e775566b9bff", APIVersion:"v1", ResourceVersion:"32155", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.80/23] from ovn-kubernetes 2025-08-13T20:06:39.384274459+00:00 stderr F 2025-08-13T20:06:39Z [verbose] ADD finished CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:de:da:57:2f:c1\",\"name\":\"4a52c9653485366\"},{\"mac\":\"0a:58:0a:d9:00:50\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0\"}],\"ips\":[{\"address\":\"10.217.0.80/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:05.047576957+00:00 stderr F 2025-08-13T20:07:05Z [verbose] ADD starting CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"" 2025-08-13T20:07:05.529643079+00:00 stderr F 2025-08-13T20:07:05Z [verbose] Add: openshift-kube-apiserver:installer-11-crc:47a054e4-19c2-4c12-a054-fc5edc98978a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"82592d624297fdd","mac":"72:ad:68:88:87:91"},{"name":"eth0","mac":"0a:58:0a:d9:00:51","sandbox":"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.81/23","gateway":"10.217.0.1"}],"dns":{}} 
2025-08-13T20:07:05.531470061+00:00 stderr F I0813 20:07:05.530332 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-11-crc", UID:"47a054e4-19c2-4c12-a054-fc5edc98978a", APIVersion:"v1", ResourceVersion:"32354", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.81/23] from ovn-kubernetes 2025-08-13T20:07:05.592743088+00:00 stderr F 2025-08-13T20:07:05Z [verbose] ADD finished CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:ad:68:88:87:91\",\"name\":\"82592d624297fdd\"},{\"mac\":\"0a:58:0a:d9:00:51\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7\"}],\"ips\":[{\"address\":\"10.217.0.81/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:08.199712932+00:00 stderr F 2025-08-13T20:07:08Z [verbose] DEL starting CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"" 2025-08-13T20:07:08.211746267+00:00 stderr F 2025-08-13T20:07:08Z [verbose] Del: openshift-marketplace:redhat-marketplace-4txfd:af6c965e-9dc8-417a-aa1c-303a50ec9adc:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:08.631969465+00:00 stderr F 2025-08-13T20:07:08Z [verbose] DEL finished CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"", result: "", err: 2025-08-13T20:07:15.563655952+00:00 stderr F 2025-08-13T20:07:15Z [verbose] DEL starting CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"" 2025-08-13T20:07:15.567894734+00:00 stderr F 2025-08-13T20:07:15Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-jjfds:b23d6435-6431-4905-b41b-a517327385e5:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:16.064026918+00:00 stderr F 2025-08-13T20:07:16Z [verbose] DEL finished CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"", result: "", err: 2025-08-13T20:07:19.217331386+00:00 stderr F 2025-08-13T20:07:19Z [verbose] DEL starting CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"" 2025-08-13T20:07:19.299711168+00:00 stderr F 2025-08-13T20:07:19Z [verbose] Del: openshift-marketplace:certified-operators-cfdk8:5391dc5d-0f00-4464-b617-b164e2f9b77a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:20.041520296+00:00 stderr F 2025-08-13T20:07:20Z [verbose] DEL finished CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"", result: "", err: 2025-08-13T20:07:20.591508015+00:00 stderr P 2025-08-13T20:07:20Z [verbose] 2025-08-13T20:07:20.591577117+00:00 stderr P ADD starting CNI request ContainerID:"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8" Netns:"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"" 2025-08-13T20:07:20.591602628+00:00 stderr F 2025-08-13T20:07:21.160681172+00:00 stderr F 2025-08-13T20:07:21Z [verbose] Add: openshift-apiserver:apiserver-7fc54b8dd7-d2bhp:41e8708a-e40d-4d28-846b-c52eda4d1755:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2059a6e71652337","mac":"7a:aa:4b:73:44:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:52","sandbox":"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.82/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:21.160681172+00:00 stderr F I0813 20:07:21.159554 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", 
UID:"41e8708a-e40d-4d28-846b-c52eda4d1755", APIVersion:"v1", ResourceVersion:"32500", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.82/23] from ovn-kubernetes 2025-08-13T20:07:21.342528146+00:00 stderr F 2025-08-13T20:07:21Z [verbose] ADD finished CNI request ContainerID:"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8" Netns:"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:aa:4b:73:44:25\",\"name\":\"2059a6e71652337\"},{\"mac\":\"0a:58:0a:d9:00:52\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0\"}],\"ips\":[{\"address\":\"10.217.0.82/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:22.608537384+00:00 stderr F 2025-08-13T20:07:22Z [verbose] ADD starting CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"" 2025-08-13T20:07:23.304977332+00:00 stderr F 2025-08-13T20:07:23Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-11-crc:1784282a-268d-4e44-a766-43281414e2dc:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a480fccd2debaaf","mac":"92:1b:9d:51:b7:76"},{"name":"eth0","mac":"0a:58:0a:d9:00:53","sandbox":"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.83/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:23.304977332+00:00 stderr F I0813 20:07:23.303547 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-11-crc", UID:"1784282a-268d-4e44-a766-43281414e2dc", APIVersion:"v1", ResourceVersion:"32536", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.83/23] from ovn-kubernetes 2025-08-13T20:07:23.501200018+00:00 stderr F 2025-08-13T20:07:23Z [verbose] ADD finished CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:1b:9d:51:b7:76\",\"name\":\"a480fccd2debaaf\"},{\"mac\":\"0a:58:0a:d9:00:53\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3\"}],\"ips\":[{\"address\":\"10.217.0.83/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:23.524278799+00:00 stderr F 2025-08-13T20:07:23Z [verbose] ADD starting CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"" 2025-08-13T20:07:24.002870461+00:00 stderr F 2025-08-13T20:07:24Z [verbose] Add: openshift-kube-scheduler:installer-8-crc:aca1f9ff-a685-4a78-b461-3931b757f754:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d0ba8aa29fc697e","mac":"32:07:c5:1f:d6:82"},{"name":"eth0","mac":"0a:58:0a:d9:00:54","sandbox":"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.84/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:24.003631173+00:00 stderr F I0813 20:07:24.003197 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler", Name:"installer-8-crc", UID:"aca1f9ff-a685-4a78-b461-3931b757f754", APIVersion:"v1", ResourceVersion:"32550", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.84/23] from ovn-kubernetes 2025-08-13T20:07:24.105089962+00:00 stderr F 2025-08-13T20:07:24Z [verbose] ADD finished CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:07:c5:1f:d6:82\",\"name\":\"d0ba8aa29fc697e\"},{\"mac\":\"0a:58:0a:d9:00:54\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac\"}],\"ips\":[{\"address\":\"10.217.0.84/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:25.184381715+00:00 stderr F 2025-08-13T20:07:25Z [verbose] ADD starting CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"" 2025-08-13T20:07:25.783963976+00:00 stderr F 2025-08-13T20:07:25Z [verbose] Add: openshift-kube-controller-manager:installer-11-crc:a45bfab9-f78b-4d72-b5b7-903e60401124:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8f0bbf4ce8e2b74","mac":"62:a3:62:e6:c6:48"},{"name":"eth0","mac":"0a:58:0a:d9:00:55","sandbox":"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.85/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:25.783963976+00:00 stderr F I0813 20:07:25.783385 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-11-crc", UID:"a45bfab9-f78b-4d72-b5b7-903e60401124", APIVersion:"v1", ResourceVersion:"32572", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.85/23] from ovn-kubernetes 2025-08-13T20:07:26.201866818+00:00 stderr F 2025-08-13T20:07:26Z [verbose] ADD finished CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" 
Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:a3:62:e6:c6:48\",\"name\":\"8f0bbf4ce8e2b74\"},{\"mac\":\"0a:58:0a:d9:00:55\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187\"}],\"ips\":[{\"address\":\"10.217.0.85/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:27.651630154+00:00 stderr F 2025-08-13T20:07:27Z [verbose] DEL starting CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"" 2025-08-13T20:07:27.659907761+00:00 stderr P 2025-08-13T20:07:27Z [verbose] Del: openshift-kube-controller-manager:revision-pruner-11-crc:1784282a-268d-4e44-a766-43281414e2dc:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:27.659977833+00:00 stderr F 2025-08-13T20:07:28.002170003+00:00 stderr F 2025-08-13T20:07:28Z [verbose] DEL finished CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"", result: "", err: 2025-08-13T20:07:34.403942158+00:00 stderr F 2025-08-13T20:07:34Z [verbose] ADD starting CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"" 2025-08-13T20:07:34.844143269+00:00 stderr P 2025-08-13T20:07:34Z [verbose] 2025-08-13T20:07:34.844201801+00:00 stderr P Add: openshift-kube-apiserver:installer-12-crc:3557248c-8f70-4165-aa66-8df983e7e01a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"afb6a839e21ef78","mac":"06:11:98:91:d8:e0"},{"name":"eth0","mac":"0a:58:0a:d9:00:56","sandbox":"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.86/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:34.844227922+00:00 stderr F 2025-08-13T20:07:34.847824415+00:00 stderr F I0813 20:07:34.846601 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-12-crc", UID:"3557248c-8f70-4165-aa66-8df983e7e01a", APIVersion:"v1", 
ResourceVersion:"32679", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.86/23] from ovn-kubernetes 2025-08-13T20:07:34.913360634+00:00 stderr P 2025-08-13T20:07:34Z [verbose] 2025-08-13T20:07:34.913470257+00:00 stderr P ADD finished CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:11:98:91:d8:e0\",\"name\":\"afb6a839e21ef78\"},{\"mac\":\"0a:58:0a:d9:00:56\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c\"}],\"ips\":[{\"address\":\"10.217.0.86/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:34.913683113+00:00 stderr F 2025-08-13T20:07:39.789523187+00:00 stderr P 2025-08-13T20:07:39Z [verbose] 2025-08-13T20:07:39.789634540+00:00 stderr P DEL starting CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"" 2025-08-13T20:07:39.789669071+00:00 stderr F 2025-08-13T20:07:39.811267440+00:00 stderr P 2025-08-13T20:07:39Z [verbose] 2025-08-13T20:07:39.811328592+00:00 stderr P Del: openshift-kube-apiserver:installer-11-crc:47a054e4-19c2-4c12-a054-fc5edc98978a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:39.811353123+00:00 stderr F 2025-08-13T20:07:40.270850337+00:00 stderr P 2025-08-13T20:07:40Z [verbose] 2025-08-13T20:07:40.270932819+00:00 stderr P DEL finished CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"", result: "", err: 2025-08-13T20:07:40.270960230+00:00 stderr F 2025-08-13T20:07:41.160427082+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.160537705+00:00 stderr P DEL starting CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"" 2025-08-13T20:07:41.160629108+00:00 stderr F 2025-08-13T20:07:41.161322108+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.161356919+00:00 stderr P Del: 
openshift-marketplace:community-operators-p7svp:8518239d-8dab-48ac-a3c1-e775566b9bff:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:41.161380659+00:00 stderr F 2025-08-13T20:07:41.453042622+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.453105944+00:00 stderr P DEL finished CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"", result: "", err: 2025-08-13T20:07:41.453130364+00:00 stderr F 2025-08-13T20:07:59.167879871+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL starting CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"" 2025-08-13T20:07:59.169982662+00:00 stderr F 2025-08-13T20:07:59Z [verbose] Del: openshift-kube-scheduler:installer-8-crc:aca1f9ff-a685-4a78-b461-3931b757f754:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:59.413508534+00:00 stderr P 2025-08-13T20:07:59Z [verbose] 2025-08-13T20:07:59.413580176+00:00 stderr P DEL starting CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"" 2025-08-13T20:07:59.413611157+00:00 stderr F 2025-08-13T20:07:59.416382616+00:00 stderr P 2025-08-13T20:07:59Z [verbose] 2025-08-13T20:07:59.416426968+00:00 stderr P Del: openshift-marketplace:redhat-operators-pmqwc:0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:59.416457268+00:00 stderr F 2025-08-13T20:07:59.503252977+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL finished CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" 
Path:"", result: "", err: 2025-08-13T20:07:59.646335739+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL finished CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"", result: "", err: 2025-08-13T20:08:02.315139306+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.315255940+00:00 stderr P DEL starting CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"" 2025-08-13T20:08:02.315282180+00:00 stderr F 2025-08-13T20:08:02.347661799+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.347721001+00:00 stderr P Del: openshift-kube-controller-manager:installer-11-crc:a45bfab9-f78b-4d72-b5b7-903e60401124:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:08:02.347754912+00:00 stderr F 2025-08-13T20:08:02.634587615+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.634645187+00:00 stderr P DEL finished CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"", result: "", err: 2025-08-13T20:08:02.634669568+00:00 stderr F 2025-08-13T20:08:25.500144370+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.500255923+00:00 stderr P DEL starting CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"" 2025-08-13T20:08:25.500286974+00:00 stderr F 2025-08-13T20:08:25.501293113+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.501330984+00:00 stderr P Del: openshift-kube-apiserver:installer-12-crc:3557248c-8f70-4165-aa66-8df983e7e01a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:08:25.501355614+00:00 stderr F 2025-08-13T20:08:25.826326272+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.826536238+00:00 stderr P DEL finished CNI request 
ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"", result: "", err: 2025-08-13T20:08:25.826562909+00:00 stderr F 2025-08-13T20:11:00.076861257+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL starting CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:11:00.078535555+00:00 stderr F 2025-08-13T20:11:00Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-6884dcf749-n4qpx:becc7e17-2bc7-417d-832f-55127299d70f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:11:00.078535555+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL starting CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:11:00.082829858+00:00 stderr F 2025-08-13T20:11:00Z [verbose] Del: openshift-controller-manager:controller-manager-598fc85fd4-8wlsm:8b8d1c48-5762-450f-bd4d-9134869f432b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:11:00.299981454+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL finished CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: 2025-08-13T20:11:00.443386906+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL finished CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: 
2025-08-13T20:11:01.950907168+00:00 stderr F 2025-08-13T20:11:01Z [verbose] ADD starting CNI request ContainerID:"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c" Netns:"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"" 2025-08-13T20:11:01.963896741+00:00 stderr F 2025-08-13T20:11:01Z [verbose] ADD starting CNI request ContainerID:"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88" Netns:"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"" 2025-08-13T20:11:02.160903619+00:00 stderr F 2025-08-13T20:11:02Z [verbose] Add: openshift-controller-manager:controller-manager-778975cc4f-x5vcf:1a3e81c3-c292-4130-9436-f94062c91efd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"67a3c779a8c87e7","mac":"aa:6d:9a:25:77:61"},{"name":"eth0","mac":"0a:58:0a:d9:00:57","sandbox":"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.87/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:11:02.160903619+00:00 stderr F I0813 20:11:02.158865 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-778975cc4f-x5vcf", UID:"1a3e81c3-c292-4130-9436-f94062c91efd", APIVersion:"v1", ResourceVersion:"33335", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.87/23] from ovn-kubernetes 2025-08-13T20:11:02.189402916+00:00 stderr F 2025-08-13T20:11:02Z [verbose] ADD finished CNI request ContainerID:"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c" Netns:"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:6d:9a:25:77:61\",\"name\":\"67a3c779a8c87e7\"},{\"mac\":\"0a:58:0a:d9:00:57\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8\"}],\"ips\":[{\"address\":\"10.217.0.87/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:11:02.247865802+00:00 stderr F 2025-08-13T20:11:02Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-776b8b7477-sfpvs:21d29937-debd-4407-b2b1-d1053cb0f342:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5bff19800c2cb5","mac":"ae:fe:5d:38:ef:2e"},{"name":"eth0","mac":"0a:58:0a:d9:00:58","sandbox":"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.88/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:11:02.248631374+00:00 stderr F I0813 20:11:02.248566 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", 
Name:"route-controller-manager-776b8b7477-sfpvs", UID:"21d29937-debd-4407-b2b1-d1053cb0f342", APIVersion:"v1", ResourceVersion:"33336", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.88/23] from ovn-kubernetes 2025-08-13T20:11:02.285459280+00:00 stderr P 2025-08-13T20:11:02Z [verbose] ADD finished CNI request ContainerID:"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88" Netns:"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:fe:5d:38:ef:2e\",\"name\":\"c5bff19800c2cb5\"},{\"mac\":\"0a:58:0a:d9:00:58\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727\"}],\"ips\":[{\"address\":\"10.217.0.88/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:11:02.286108679+00:00 stderr F 2025-08-13T20:15:00.781729927+00:00 stderr F 2025-08-13T20:15:00Z [verbose] ADD starting CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"" 2025-08-13T20:15:00.963120417+00:00 stderr F 2025-08-13T20:15:00Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29251935-d7x6j:51936587-a4af-470d-ad92-8ab9062cbc72:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"21feea149913711","mac":"be:26:57:2f:28:77"},{"name":"eth0","mac":"0a:58:0a:d9:00:59","sandbox":"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.89/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:15:00.963591081+00:00 stderr F I0813 20:15:00.963481 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251935-d7x6j", UID:"51936587-a4af-470d-ad92-8ab9062cbc72", APIVersion:"v1", ResourceVersion:"33816", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.89/23] from ovn-kubernetes 2025-08-13T20:15:01.019053031+00:00 stderr F 2025-08-13T20:15:01Z [verbose] ADD finished CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:26:57:2f:28:77\",\"name\":\"21feea149913711\"},{\"mac\":\"0a:58:0a:d9:00:59\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647\"}],\"ips\":[{\"address\":\"10.217.0.89/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:15:04.402616130+00:00 stderr F 2025-08-13T20:15:04Z [verbose] DEL starting 
CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"" 2025-08-13T20:15:04.403546217+00:00 stderr F 2025-08-13T20:15:04Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29251935-d7x6j:51936587-a4af-470d-ad92-8ab9062cbc72:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:15:04.593539334+00:00 stderr F 2025-08-13T20:15:04Z [verbose] DEL finished CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"", result: "", err: 2025-08-13T20:16:58.608948335+00:00 stderr F 2025-08-13T20:16:58Z [verbose] ADD starting CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"" 2025-08-13T20:16:58.796684776+00:00 stderr F 2025-08-13T20:16:58Z [verbose] Add: openshift-marketplace:certified-operators-8bbjz:8e241cc6-c71d-4fa0-9a1a-18098bcf6594:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"18af4daca70b211","mac":"9a:cd:12:c0:68:18"},{"name":"eth0","mac":"0a:58:0a:d9:00:5a","sandbox":"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.90/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:16:58.797372606+00:00 stderr F I0813 20:16:58.797261 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-8bbjz", UID:"8e241cc6-c71d-4fa0-9a1a-18098bcf6594", APIVersion:"v1", ResourceVersion:"34100", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.90/23] from ovn-kubernetes 2025-08-13T20:16:58.860039395+00:00 stderr F 2025-08-13T20:16:58Z [verbose] ADD finished CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:cd:12:c0:68:18\",\"name\":\"18af4daca70b211\"},{\"mac\":\"0a:58:0a:d9:00:5a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474\"}],\"ips\":[{\"address\":\"10.217.0.90/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:00.557066007+00:00 stderr P 2025-08-13T20:17:00Z [verbose] 2025-08-13T20:17:00.557183100+00:00 stderr P ADD starting CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"" 2025-08-13T20:17:00.557291693+00:00 stderr F 2025-08-13T20:17:00.797736670+00:00 stderr F 2025-08-13T20:17:00Z [verbose] Add: openshift-marketplace:redhat-marketplace-nsk78:a084eaff-10e9-439e-96f3-f3450fb14db7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"95f40ae6abffb8f","mac":"9e:b4:d7:fb:98:d3"},{"name":"eth0","mac":"0a:58:0a:d9:00:5b","sandbox":"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.91/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:00.797736670+00:00 stderr F I0813 20:17:00.796302 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nsk78", UID:"a084eaff-10e9-439e-96f3-f3450fb14db7", APIVersion:"v1", ResourceVersion:"34117", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.91/23] from ovn-kubernetes 2025-08-13T20:17:01.051256190+00:00 stderr P 2025-08-13T20:17:01Z [verbose] 2025-08-13T20:17:01.051408334+00:00 stderr P ADD finished CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:b4:d7:fb:98:d3\",\"name\":\"95f40ae6abffb8f\"},{\"mac\":\"0a:58:0a:d9:00:5b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063\"}],\"ips\":[{\"address\":\"10.217.0.91/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:01.051495627+00:00 stderr F 2025-08-13T20:17:20.945952804+00:00 stderr F 2025-08-13T20:17:20Z [verbose] ADD starting CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"" 2025-08-13T20:17:21.291695047+00:00 stderr P 2025-08-13T20:17:21Z [verbose] 2025-08-13T20:17:21.291757969+00:00 stderr P Add: openshift-marketplace:redhat-operators-swl5s:407a8505-ab64-42f9-aa53-a63f8e97c189:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"011ddcc3b1f8c14","mac":"9a:a7:0d:46:5e:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:5c","sandbox":"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.92/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:21.291910763+00:00 stderr F 2025-08-13T20:17:21.293331844+00:00 stderr F I0813 20:17:21.292950 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-swl5s", UID:"407a8505-ab64-42f9-aa53-a63f8e97c189", APIVersion:"v1", ResourceVersion:"34179", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.92/23] from ovn-kubernetes 2025-08-13T20:17:21.607094354+00:00 stderr P 2025-08-13T20:17:21Z [verbose] 2025-08-13T20:17:21.607148426+00:00 stderr P ADD finished CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:a7:0d:46:5e:25\",\"name\":\"011ddcc3b1f8c14\"},{\"mac\":\"0a:58:0a:d9:00:5c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38\"}],\"ips\":[{\"address\":\"10.217.0.92/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:21.607191417+00:00 stderr F 2025-08-13T20:17:30.765245514+00:00 stderr P 2025-08-13T20:17:30Z [verbose] 2025-08-13T20:17:30.765323897+00:00 stderr P ADD starting CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"" 2025-08-13T20:17:30.765348217+00:00 stderr F 2025-08-13T20:17:31.059174718+00:00 stderr F 2025-08-13T20:17:31Z [verbose] Add: openshift-marketplace:community-operators-tfv59:718f06fe-dcad-4053-8de2-e2c38fb7503d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b983de43e586634","mac":"8e:ce:68:75:7d:52"},{"name":"eth0","mac":"0a:58:0a:d9:00:5d","sandbox":"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.93/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:31.059174718+00:00 stderr F I0813 20:17:31.056753 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-tfv59", UID:"718f06fe-dcad-4053-8de2-e2c38fb7503d", APIVersion:"v1", ResourceVersion:"34230", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.93/23] from ovn-kubernetes 2025-08-13T20:17:31.140038977+00:00 stderr F 2025-08-13T20:17:31Z [verbose] ADD finished CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:ce:68:75:7d:52\",\"name\":\"b983de43e586634\"},{\"mac\":\"0a:58:0a:d9:00:5d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103\"}],\"ips\":[{\"address\":\"10.217.0.93/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:35.355986043+00:00 stderr F 2025-08-13T20:17:35Z [verbose] DEL starting CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"" 2025-08-13T20:17:35.356862828+00:00 stderr F 2025-08-13T20:17:35Z [verbose] Del: openshift-marketplace:redhat-marketplace-nsk78:a084eaff-10e9-439e-96f3-f3450fb14db7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:17:35.677012541+00:00 stderr F 2025-08-13T20:17:35Z [verbose] DEL finished CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"", result: "", err: 2025-08-13T20:17:42.940476414+00:00 stderr P 2025-08-13T20:17:42Z [verbose] 2025-08-13T20:17:42.940551276+00:00 stderr P DEL starting CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"" 2025-08-13T20:17:42.940576557+00:00 stderr F 2025-08-13T20:17:42.941672528+00:00 stderr P 2025-08-13T20:17:42Z [verbose] 2025-08-13T20:17:42.941711539+00:00 stderr P Del: openshift-marketplace:certified-operators-8bbjz:8e241cc6-c71d-4fa0-9a1a-18098bcf6594:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:17:42.941735740+00:00 stderr F 2025-08-13T20:17:43.130319066+00:00 stderr P 2025-08-13T20:17:43Z [verbose] 2025-08-13T20:17:43.130398278+00:00 stderr P DEL finished CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"", result: "", err: 2025-08-13T20:17:43.130429249+00:00 stderr F 2025-08-13T20:18:52.308718151+00:00 stderr P 2025-08-13T20:18:52Z [verbose] 2025-08-13T20:18:52.308860535+00:00 stderr P DEL starting CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"" 2025-08-13T20:18:52.308910326+00:00 stderr F 2025-08-13T20:18:52.322174545+00:00 stderr P 2025-08-13T20:18:52Z [verbose] 2025-08-13T20:18:52.322215476+00:00 stderr P Del: openshift-marketplace:community-operators-tfv59:718f06fe-dcad-4053-8de2-e2c38fb7503d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:18:52.322271608+00:00 stderr F 2025-08-13T20:18:54.275913969+00:00 stderr P 2025-08-13T20:18:54Z [verbose] 2025-08-13T20:18:54.276014152+00:00 stderr P DEL finished CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"", result: "", err: 2025-08-13T20:18:54.276041162+00:00 stderr F 2025-08-13T20:19:34.296937659+00:00 stderr F 2025-08-13T20:19:34Z [verbose] DEL starting CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"" 2025-08-13T20:19:34.300885202+00:00 stderr F 2025-08-13T20:19:34Z [verbose] Del: openshift-marketplace:redhat-operators-swl5s:407a8505-ab64-42f9-aa53-a63f8e97c189:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:19:34.534297221+00:00 stderr F 2025-08-13T20:19:34Z [verbose] DEL finished CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"", result: "", err: 2025-08-13T20:27:06.122840134+00:00 stderr F 
2025-08-13T20:27:06Z [verbose] ADD starting CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"" 2025-08-13T20:27:06.262561679+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD starting CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"" 2025-08-13T20:27:06.458150272+00:00 stderr F 2025-08-13T20:27:06Z [verbose] Add: openshift-marketplace:redhat-marketplace-jbzn9:b152b92f-8fab-4b74-8e68-00278380759d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"65efa81c3e0e120","mac":"e6:4c:64:f5:01:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:5e","sandbox":"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.94/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:27:06.479224494+00:00 stderr F I0813 20:27:06.473549 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-jbzn9", UID:"b152b92f-8fab-4b74-8e68-00278380759d", APIVersion:"v1", ResourceVersion:"35401", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.94/23] from ovn-kubernetes 2025-08-13T20:27:06.522091700+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD finished CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e6:4c:64:f5:01:ca\",\"name\":\"65efa81c3e0e120\"},{\"mac\":\"0a:58:0a:d9:00:5e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b\"}],\"ips\":[{\"address\":\"10.217.0.94/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:27:06.569207297+00:00 stderr F 2025-08-13T20:27:06Z [verbose] Add: openshift-marketplace:certified-operators-xldzg:926ac7a4-e156-4e71-9681-7a48897402eb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d26f242e575b9e4","mac":"12:4b:e0:9e:55:86"},{"name":"eth0","mac":"0a:58:0a:d9:00:5f","sandbox":"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.95/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:27:06.569458594+00:00 stderr F I0813 20:27:06.569413 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-xldzg", UID:"926ac7a4-e156-4e71-9681-7a48897402eb", APIVersion:"v1", ResourceVersion:"35408", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.95/23] from 
ovn-kubernetes 2025-08-13T20:27:06.626562697+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD finished CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:4b:e0:9e:55:86\",\"name\":\"d26f242e575b9e4\"},{\"mac\":\"0a:58:0a:d9:00:5f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79\"}],\"ips\":[{\"address\":\"10.217.0.95/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:27:29.231263536+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL starting CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"" 2025-08-13T20:27:29.236216528+00:00 stderr F 2025-08-13T20:27:29Z [verbose] Del: openshift-marketplace:certified-operators-xldzg:926ac7a4-e156-4e71-9681-7a48897402eb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:27:29.284492108+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL starting CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"" 2025-08-13T20:27:29.284642242+00:00 stderr F 2025-08-13T20:27:29Z [verbose] Del: openshift-marketplace:redhat-marketplace-jbzn9:b152b92f-8fab-4b74-8e68-00278380759d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:27:29.434254651+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL finished CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"", result: "", err: 2025-08-13T20:27:29.494300477+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL finished CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"", result: "", err: 2025-08-13T20:28:43.767683367+00:00 stderr F 2025-08-13T20:28:43Z [verbose] ADD starting CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"" 2025-08-13T20:28:44.030064569+00:00 stderr F 2025-08-13T20:28:44Z [verbose] Add: openshift-marketplace:community-operators-hvwvm:bfb8fd54-a923-43fe-a0f5-bc4066352d71:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"786926dc94686ef","mac":"f2:1e:d5:6b:2f:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:60","sandbox":"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.96/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:28:44.030692537+00:00 stderr F I0813 20:28:44.030422 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-hvwvm", UID:"bfb8fd54-a923-43fe-a0f5-bc4066352d71", APIVersion:"v1", ResourceVersion:"35629", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.96/23] from ovn-kubernetes 2025-08-13T20:28:44.068653379+00:00 stderr F 2025-08-13T20:28:44Z [verbose] ADD finished CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:1e:d5:6b:2f:19\",\"name\":\"786926dc94686ef\"},{\"mac\":\"0a:58:0a:d9:00:60\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610\"}],\"ips\":[{\"address\":\"10.217.0.96/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:29:06.009323725+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.009411328+00:00 stderr P DEL starting CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"" 2025-08-13T20:29:06.009436298+00:00 stderr F 2025-08-13T20:29:06.010357595+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.010394066+00:00 stderr P Del: openshift-marketplace:community-operators-hvwvm:bfb8fd54-a923-43fe-a0f5-bc4066352d71:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:29:06.010418947+00:00 stderr F 2025-08-13T20:29:06.211321622+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.211421805+00:00 stderr P DEL finished CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"", result: "", err: 2025-08-13T20:29:06.211461296+00:00 stderr F 2025-08-13T20:29:30.524869297+00:00 stderr F 2025-08-13T20:29:30Z [verbose] ADD starting CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"" 2025-08-13T20:29:30.754850807+00:00 stderr P 2025-08-13T20:29:30Z [verbose] 2025-08-13T20:29:30.754914439+00:00 stderr P Add: openshift-marketplace:redhat-operators-zdwjn:6d579e1a-3b27-4c1f-9175-42ac58490d42:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3fdb2c96a67c002","mac":"06:8b:ac:a2:0e:cc"},{"name":"eth0","mac":"0a:58:0a:d9:00:61","sandbox":"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.97/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:29:30.754940140+00:00 stderr F 2025-08-13T20:29:30.762887008+00:00 stderr F I0813 20:29:30.762765 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-zdwjn", UID:"6d579e1a-3b27-4c1f-9175-42ac58490d42", APIVersion:"v1", ResourceVersion:"35763", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.97/23] from ovn-kubernetes 2025-08-13T20:29:30.797250476+00:00 stderr F 2025-08-13T20:29:30Z [verbose] ADD finished CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:8b:ac:a2:0e:cc\",\"name\":\"3fdb2c96a67c002\"},{\"mac\":\"0a:58:0a:d9:00:61\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d\"}],\"ips\":[{\"address\":\"10.217.0.97/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:30:02.419847222+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.420056688+00:00 stderr P ADD starting CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"" 2025-08-13T20:30:02.420093569+00:00 stderr F 2025-08-13T20:30:02.778196933+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.778271045+00:00 stderr P Add: openshift-operator-lifecycle-manager:collect-profiles-29251950-x8jjd:ad171c4b-8408-4370-8e86-502999788ddb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"61f39a784f23d0e","mac":"da:99:8d:d0:4e:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:62","sandbox":"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.98/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:30:02.778296465+00:00 stderr F 2025-08-13T20:30:02.779130789+00:00 stderr F I0813 20:30:02.779084 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251950-x8jjd", UID:"ad171c4b-8408-4370-8e86-502999788ddb", APIVersion:"v1", ResourceVersion:"35844", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.98/23] from ovn-kubernetes 2025-08-13T20:30:02.811261253+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.811543541+00:00 stderr P ADD finished CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"da:99:8d:d0:4e:9c\",\"name\":\"61f39a784f23d0e\"},{\"mac\":\"0a:58:0a:d9:00:62\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c\"}],\"ips\":[{\"address\":\"10.217.0.98/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:30:02.811577742+00:00 stderr F 2025-08-13T20:30:06.544162270+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.544238552+00:00 stderr P DEL starting CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"" 2025-08-13T20:30:06.544263533+00:00 stderr F 2025-08-13T20:30:06.545051676+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.545090917+00:00 stderr P Del: openshift-operator-lifecycle-manager:collect-profiles-29251950-x8jjd:ad171c4b-8408-4370-8e86-502999788ddb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:30:06.545115287+00:00 stderr F 2025-08-13T20:30:06.810139966+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 
2025-08-13T20:30:06.810241489+00:00 stderr P DEL finished CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"", result: "", err: 2025-08-13T20:30:06.810291930+00:00 stderr F 2025-08-13T20:30:32.652453365+00:00 stderr P 2025-08-13T20:30:32Z [verbose] 2025-08-13T20:30:32.652542547+00:00 stderr P DEL starting CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"" 2025-08-13T20:30:32.652567668+00:00 stderr F 2025-08-13T20:30:32.654540215+00:00 stderr P 2025-08-13T20:30:32Z [verbose] 2025-08-13T20:30:32.654575346+00:00 stderr P Del: openshift-marketplace:redhat-operators-zdwjn:6d579e1a-3b27-4c1f-9175-42ac58490d42:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:30:32.654599396+00:00 stderr F 2025-08-13T20:30:32.870968506+00:00 stderr F 2025-08-13T20:30:32Z [verbose] DEL finished CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"", result: "", err: 2025-08-13T20:37:48.650104847+00:00 stderr F 2025-08-13T20:37:48Z [verbose] ADD starting CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"" 2025-08-13T20:37:48.861010088+00:00 stderr F 2025-08-13T20:37:48Z [verbose] Add: openshift-marketplace:redhat-marketplace-nkzlk:afc02c17-9714-426d-aafa-ee58c673ab0c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"316cb50fa85ce61","mac":"26:aa:67:d6:f6:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:63","sandbox":"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.99/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:37:48.862828720+00:00 stderr F I0813 20:37:48.861954 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nkzlk", UID:"afc02c17-9714-426d-aafa-ee58c673ab0c", APIVersion:"v1", ResourceVersion:"36823", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.99/23] 
from ovn-kubernetes 2025-08-13T20:37:48.903530504+00:00 stderr F 2025-08-13T20:37:48Z [verbose] ADD finished CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:aa:67:d6:f6:bb\",\"name\":\"316cb50fa85ce61\"},{\"mac\":\"0a:58:0a:d9:00:63\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44\"}],\"ips\":[{\"address\":\"10.217.0.99/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:38:08.940938186+00:00 stderr F 2025-08-13T20:38:08Z [verbose] DEL starting CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"" 2025-08-13T20:38:08.941644556+00:00 stderr F 2025-08-13T20:38:08Z [verbose] Del: openshift-marketplace:redhat-marketplace-nkzlk:afc02c17-9714-426d-aafa-ee58c673ab0c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:38:09.132340454+00:00 stderr F 2025-08-13T20:38:09Z [verbose] DEL finished CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"", result: "", err: 2025-08-13T20:38:36.497725067+00:00 stderr F 2025-08-13T20:38:36Z [verbose] ADD starting CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"" 2025-08-13T20:38:36.719000537+00:00 stderr F 2025-08-13T20:38:36Z [verbose] Add: openshift-marketplace:certified-operators-4kmbv:847e60dc-7a0a-4115-a7e1-356476e319e7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"48a72e1ed96b8c0","mac":"ba:71:ca:85:15:a1"},{"name":"eth0","mac":"0a:58:0a:d9:00:64","sandbox":"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.100/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:38:36.720045557+00:00 stderr F I0813 20:38:36.719870 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-4kmbv", 
UID:"847e60dc-7a0a-4115-a7e1-356476e319e7", APIVersion:"v1", ResourceVersion:"36922", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.100/23] from ovn-kubernetes 2025-08-13T20:38:36.806263303+00:00 stderr P 2025-08-13T20:38:36Z [verbose] 2025-08-13T20:38:36.806353615+00:00 stderr P ADD finished CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ba:71:ca:85:15:a1\",\"name\":\"48a72e1ed96b8c0\"},{\"mac\":\"0a:58:0a:d9:00:64\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18\"}],\"ips\":[{\"address\":\"10.217.0.100/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:38:36.806379876+00:00 stderr F 2025-08-13T20:38:57.274709039+00:00 stderr F 2025-08-13T20:38:57Z [verbose] DEL starting CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"" 2025-08-13T20:38:57.275727299+00:00 stderr F 2025-08-13T20:38:57Z [verbose] Del: openshift-marketplace:certified-operators-4kmbv:847e60dc-7a0a-4115-a7e1-356476e319e7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:38:57.457966003+00:00 stderr F 2025-08-13T20:38:57Z [verbose] DEL finished CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"", result: "", err: 2025-08-13T20:41:21.899190984+00:00 stderr P 2025-08-13T20:41:21Z [verbose] 2025-08-13T20:41:21.899407450+00:00 stderr P ADD starting CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"" 2025-08-13T20:41:21.899451962+00:00 stderr F 2025-08-13T20:41:22.168813287+00:00 stderr F 2025-08-13T20:41:22Z [verbose] Add: openshift-marketplace:redhat-operators-k2tgr:58e4f786-ee2a-45c4-83a4-523611d1eccd:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"b07b3fcd02d69d1","mac":"6e:99:82:6e:a6:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:65","sandbox":"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.101/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:41:22.170555268+00:00 stderr F I0813 20:41:22.170374 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-k2tgr", UID:"58e4f786-ee2a-45c4-83a4-523611d1eccd", APIVersion:"v1", ResourceVersion:"37281", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.101/23] from ovn-kubernetes 2025-08-13T20:41:22.217286985+00:00 stderr F 2025-08-13T20:41:22Z [verbose] ADD finished CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:99:82:6e:a6:ca\",\"name\":\"b07b3fcd02d69d1\"},{\"mac\":\"0a:58:0a:d9:00:65\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167\"}],\"ips\":[{\"address\":\"10.217.0.101/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:42:13.827600468+00:00 stderr F 2025-08-13T20:42:13Z [verbose] DEL starting CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"" 2025-08-13T20:42:13.830067749+00:00 stderr F 2025-08-13T20:42:13Z [verbose] Del: openshift-marketplace:redhat-operators-k2tgr:58e4f786-ee2a-45c4-83a4-523611d1eccd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:42:14.179240226+00:00 stderr F 2025-08-13T20:42:14Z [verbose] DEL finished CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"", result: "", err: 2025-08-13T20:42:27.029073457+00:00 stderr P 2025-08-13T20:42:27Z [verbose] 2025-08-13T20:42:27.029266502+00:00 stderr P ADD starting CNI request ContainerID:"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a" Netns:"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 
2025-08-13T20:42:27.029306213+00:00 stderr F 2025-08-13T20:42:27.856884263+00:00 stderr F 2025-08-13T20:42:27Z [verbose] Add: openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b4ce7c1e13297d1","mac":"be:61:47:79:5b:8f"},{"name":"eth0","mac":"0a:58:0a:d9:00:66","sandbox":"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.102/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:42:27.863480073+00:00 stderr F I0813 20:42:27.859848 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-sdddl", UID:"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760", APIVersion:"v1", ResourceVersion:"37479", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.102/23] from ovn-kubernetes 2025-08-13T20:42:27.896369491+00:00 stderr F 2025-08-13T20:42:27Z [verbose] ADD finished CNI request ContainerID:"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a" Netns:"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:61:47:79:5b:8f\",\"name\":\"b4ce7c1e13297d1\"},{\"mac\":\"0a:58:0a:d9:00:66\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310\"}],\"ips\":[{\"address\":\"10.217.0.102/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:42:44.710914328+00:00 stderr P 2025-08-13T20:42:44Z [verbose] 2025-08-13T20:42:44.712022420+00:00 stderr F caught terminated, stopping... 2025-08-13T20:42:44.713590275+00:00 stderr F 2025-08-13T20:42:44Z [verbose] Stopped monitoring, closing channel ... 2025-08-13T20:42:44.718194688+00:00 stderr F 2025-08-13T20:42:44Z [verbose] ConfigWatcher done 2025-08-13T20:42:44.718194688+00:00 stderr F 2025-08-13T20:42:44Z [verbose] Delete old config @ /host/etc/cni/net.d/00-multus.conf 2025-08-13T20:42:44.718472186+00:00 stderr F 2025-08-13T20:42:44Z [verbose] multus daemon is exited ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000157215073043233033236 0ustar zuulzuul2025-08-13T19:55:29.808124912+00:00 stdout F 2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 2025-08-13T19:55:29.859028584+00:00 stdout F 2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/ 2025-08-13T19:55:29.894323461+00:00 stderr F 2025-08-13T19:55:29Z [verbose] multus-daemon started 2025-08-13T19:55:29.894323461+00:00 stderr F 2025-08-13T19:55:29Z [verbose] Readiness Indicator file check 2025-08-13T19:56:14.896151827+00:00 stderr F 2025-08-13T19:56:14Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000726415073043233033242 0ustar zuulzuul2025-10-13T00:21:46.709138651+00:00 stdout F 2025-10-13T00:21:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cc2fe64a-d286-48fc-8c98-286cd0553e8e 2025-10-13T00:21:46.750798951+00:00 stdout F 2025-10-13T00:21:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cc2fe64a-d286-48fc-8c98-286cd0553e8e to /host/opt/cni/bin/ 2025-10-13T00:21:46.773812850+00:00 stderr F 2025-10-13T00:21:46Z [verbose] multus-daemon started 2025-10-13T00:21:46.773812850+00:00 stderr F 2025-10-13T00:21:46Z [verbose] Readiness Indicator file check 2025-10-13T00:21:46.773997795+00:00 stderr F 2025-10-13T00:21:46Z [verbose] Readiness Indicator file check done! 2025-10-13T00:21:46.775898106+00:00 stderr F I1013 00:21:46.775852 28643 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-10-13T00:21:46.776176424+00:00 stderr F 2025-10-13T00:21:46Z [verbose] Waiting for certificate 2025-10-13T00:21:47.776898225+00:00 stderr F I1013 00:21:47.776398 28643 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-10-13T00:21:47.777124271+00:00 stderr F 2025-10-13T00:21:47Z [verbose] Certificate found! 2025-10-13T00:21:47.778083657+00:00 stderr F 2025-10-13T00:21:47Z [verbose] server configured with chroot: /hostroot 2025-10-13T00:21:47.778083657+00:00 stderr F 2025-10-13T00:21:47Z [verbose] Filtering pod watch for node "crc" 2025-10-13T00:21:47.879896734+00:00 stderr F 2025-10-13T00:21:47Z [verbose] API readiness check 2025-10-13T00:21:47.880070289+00:00 stderr F 2025-10-13T00:21:47Z [verbose] API readiness check done! 
2025-10-13T00:21:47.880421488+00:00 stderr F 2025-10-13T00:21:47Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2025-10-13T00:21:47.880686965+00:00 stderr F 2025-10-13T00:21:47Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2025-10-13T00:22:11.418021185+00:00 stderr F 2025-10-13T00:22:11Z [verbose] DEL starting CNI request ContainerID:"9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553" Netns:"/var/run/netns/715424a0-58a6-4096-aed0-1ac89d7a5b36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553;K8S_POD_UID=b0a4ec02-9b6b-400a-9633-c11280799f07" Path:"" 2025-10-13T00:22:11.418872418+00:00 stderr F 2025-10-13T00:22:11Z [verbose] Del: openshift-kube-apiserver:installer-13-crc:b0a4ec02-9b6b-400a-9633-c11280799f07:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:22:11.574898574+00:00 stderr F 2025-10-13T00:22:11Z [verbose] DEL finished CNI request ContainerID:"9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553" Netns:"/var/run/netns/715424a0-58a6-4096-aed0-1ac89d7a5b36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553;K8S_POD_UID=b0a4ec02-9b6b-400a-9633-c11280799f07" Path:"", result: "", err: ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000047050215073043233033241 0ustar zuulzuul2025-10-13T00:13:37.211045554+00:00 stdout F 2025-10-13T00:13:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8ca82260-15b6-4b2e-8d07-4b6299a76610 2025-10-13T00:13:37.260727350+00:00 stdout F 2025-10-13T00:13:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8ca82260-15b6-4b2e-8d07-4b6299a76610 to /host/opt/cni/bin/ 2025-10-13T00:13:37.279535073+00:00 stderr P 2025-10-13T00:13:37Z [verbose] 2025-10-13T00:13:37.279596414+00:00 stderr P multus-daemon started 2025-10-13T00:13:37.279616045+00:00 stderr F 2025-10-13T00:13:37.279636205+00:00 stderr P 2025-10-13T00:13:37Z [verbose] 2025-10-13T00:13:37.279654736+00:00 stderr P Readiness Indicator file check 2025-10-13T00:13:37.279672776+00:00 stderr F 2025-10-13T00:14:16.280789452+00:00 stderr F 2025-10-13T00:14:16Z [verbose] Readiness Indicator file check done! 2025-10-13T00:14:16.286301518+00:00 stderr F I1013 00:14:16.286199 7626 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 
2025-10-13T00:14:16.287233594+00:00 stderr F 2025-10-13T00:14:16Z [verbose] Waiting for certificate 2025-10-13T00:14:17.288062935+00:00 stderr F I1013 00:14:17.287952 7626 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-10-13T00:14:17.288403004+00:00 stderr F 2025-10-13T00:14:17Z [verbose] Certificate found! 2025-10-13T00:14:17.289320010+00:00 stderr F 2025-10-13T00:14:17Z [verbose] server configured with chroot: /hostroot 2025-10-13T00:14:17.289320010+00:00 stderr F 2025-10-13T00:14:17Z [verbose] Filtering pod watch for node "crc" 2025-10-13T00:14:17.393606212+00:00 stderr F 2025-10-13T00:14:17Z [verbose] API readiness check 2025-10-13T00:14:17.394230160+00:00 stderr F 2025-10-13T00:14:17Z [verbose] API readiness check done! 2025-10-13T00:14:17.394586630+00:00 stderr F 2025-10-13T00:14:17Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2025-10-13T00:14:17.394888348+00:00 stderr F 2025-10-13T00:14:17Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2025-10-13T00:14:56.371945715+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"f1bcb80057948c83721726bbe4a098a5dab150ca47256c01916d395aca24e251" Netns:"/var/run/netns/70793366-d167-44ff-8327-cd327c48814d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=f1bcb80057948c83721726bbe4a098a5dab150ca47256c01916d395aca24e251;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-10-13T00:14:56.374353067+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"0a945e981815604d880a4797418ae06dcf72fd5371cad02942d8e006202710ea" Netns:"/var/run/netns/723a2be1-01f5-4b4f-a173-b26850d44a83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=0a945e981815604d880a4797418ae06dcf72fd5371cad02942d8e006202710ea;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-10-13T00:14:56.459204879+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"f55b6990b4cae271327735db97ea23efc8fd7411cb893cf5aa36d899061dedc2" Netns:"/var/run/netns/1498ff85-69ac-46af-a155-cec0c56f9634" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=f55b6990b4cae271327735db97ea23efc8fd7411cb893cf5aa36d899061dedc2;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-10-13T00:14:56.459204879+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"b6d9e0bc146cb285eadb854b1e00a0aed110fb5693ee046f19f3a0950bfa80f9" Netns:"/var/run/netns/5c7bea13-98ac-4ad7-aba1-8d2ae02ed448" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=b6d9e0bc146cb285eadb854b1e00a0aed110fb5693ee046f19f3a0950bfa80f9;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 
2025-10-13T00:14:56.469516428+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"cbf486ca4cacf84bf0cc804deca3d39f50423cee01f2aea8e2c7ba2d637f2351" Netns:"/var/run/netns/3ce95030-2bfc-4308-a38f-8ff5a50e73d6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=cbf486ca4cacf84bf0cc804deca3d39f50423cee01f2aea8e2c7ba2d637f2351;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-10-13T00:14:56.549296858+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0a945e981815604","mac":"02:7a:dc:5c:3d:7f"},{"name":"eth0","mac":"0a:58:0a:d9:00:03","sandbox":"/var/run/netns/723a2be1-01f5-4b4f-a173-b26850d44a83"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.3/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.550862015+00:00 stderr F I1013 00:14:56.549696 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-qdfr4", UID:"a702c6d2-4dde-4077-ab8c-0f8df804bf7a", APIVersion:"v1", ResourceVersion:"38199", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.3/23] from ovn-kubernetes 2025-10-13T00:14:56.567138283+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f1bcb80057948c8","mac":"be:51:e4:e6:9a:fc"},{"name":"eth0","mac":"0a:58:0a:d9:00:15","sandbox":"/var/run/netns/70793366-d167-44ff-8327-cd327c48814d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.21/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.567138283+00:00 stderr F I1013 00:14:56.564760 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-operator-76788bff89-wkjgm", UID:"120b38dc-8236-4fa6-a452-642b8ad738ee", APIVersion:"v1", ResourceVersion:"38243", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.21/23] from ovn-kubernetes 2025-10-13T00:14:56.567138283+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"b40bcae31bccd9063990ea4bfe31c94cba0ef873d3a3641d82fd976604e08e65" Netns:"/var/run/netns/2cf65cec-c6d3-4d9d-9f18-6885a99f13cd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=b40bcae31bccd9063990ea4bfe31c94cba0ef873d3a3641d82fd976604e08e65;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-10-13T00:14:56.567138283+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"0a945e981815604d880a4797418ae06dcf72fd5371cad02942d8e006202710ea" Netns:"/var/run/netns/723a2be1-01f5-4b4f-a173-b26850d44a83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=0a945e981815604d880a4797418ae06dcf72fd5371cad02942d8e006202710ea;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"02:7a:dc:5c:3d:7f\",\"name\":\"0a945e981815604\"},{\"mac\":\"0a:58:0a:d9:00:03\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/723a2be1-01f5-4b4f-a173-b26850d44a83\"}],\"ips\":[{\"address\":\"10.217.0.3/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.574963877+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b6d9e0bc146cb28","mac":"7e:8e:ca:83:f5:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:13","sandbox":"/var/run/netns/5c7bea13-98ac-4ad7-aba1-8d2ae02ed448"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.19/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.575127202+00:00 stderr F I1013 00:14:56.575066 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication-operator", Name:"authentication-operator-7cc7ff75d5-g9qv8", UID:"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e", APIVersion:"v1", ResourceVersion:"38221", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.19/23] from ovn-kubernetes 2025-10-13T00:14:56.584438791+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"f1bcb80057948c83721726bbe4a098a5dab150ca47256c01916d395aca24e251" Netns:"/var/run/netns/70793366-d167-44ff-8327-cd327c48814d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=f1bcb80057948c83721726bbe4a098a5dab150ca47256c01916d395aca24e251;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:51:e4:e6:9a:fc\",\"name\":\"f1bcb80057948c8\"},{\"mac\":\"0a:58:0a:d9:00:15\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/70793366-d167-44ff-8327-cd327c48814d\"}],\"ips\":[{\"address\":\"10.217.0.21/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.588030039+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"b6d9e0bc146cb285eadb854b1e00a0aed110fb5693ee046f19f3a0950bfa80f9" Netns:"/var/run/netns/5c7bea13-98ac-4ad7-aba1-8d2ae02ed448" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=b6d9e0bc146cb285eadb854b1e00a0aed110fb5693ee046f19f3a0950bfa80f9;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7e:8e:ca:83:f5:62\",\"name\":\"b6d9e0bc146cb28\"},{\"mac\":\"0a:58:0a:d9:00:13\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5c7bea13-98ac-4ad7-aba1-8d2ae02ed448\"}],\"ips\":[{\"address\":\"10.217.0.19/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.590200124+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"66953f4cfd7fd14280641f1cfbbc1d5fc539237ee1bf57949aa6ebbecc726b75" Netns:"/var/run/netns/e74acd95-3290-4b20-9119-603a867da24a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=66953f4cfd7fd14280641f1cfbbc1d5fc539237ee1bf57949aa6ebbecc726b75;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 
2025-10-13T00:14:56.590576745+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f55b6990b4cae27","mac":"8e:85:0f:2b:e1:68"},{"name":"eth0","mac":"0a:58:0a:d9:00:10","sandbox":"/var/run/netns/1498ff85-69ac-46af-a155-cec0c56f9634"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.16/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.590851703+00:00 stderr F I1013 00:14:56.590794 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator-686c6c748c-qbnnr", UID:"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7", APIVersion:"v1", ResourceVersion:"38086", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.16/23] from ovn-kubernetes 2025-10-13T00:14:56.603204514+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"851b2fa5f88dcb6109f52a8c1a65b274ff13b04df31f49fca4fd66809fb8a538" Netns:"/var/run/netns/5228f103-a1f8-48be-91a4-95a42c4582cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=851b2fa5f88dcb6109f52a8c1a65b274ff13b04df31f49fca4fd66809fb8a538;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-10-13T00:14:56.608010648+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"dc7cb350ffa4d90310a5c0ee04a18e852da359cc8f6cd54b604674294ec8eaf5" Netns:"/var/run/netns/9b231092-8bcb-4690-967b-d43bdd3a6e3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=dc7cb350ffa4d90310a5c0ee04a18e852da359cc8f6cd54b604674294ec8eaf5;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"" 2025-10-13T00:14:56.608439390+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"f55b6990b4cae271327735db97ea23efc8fd7411cb893cf5aa36d899061dedc2" Netns:"/var/run/netns/1498ff85-69ac-46af-a155-cec0c56f9634" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=f55b6990b4cae271327735db97ea23efc8fd7411cb893cf5aa36d899061dedc2;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:85:0f:2b:e1:68\",\"name\":\"f55b6990b4cae27\"},{\"mac\":\"0a:58:0a:d9:00:10\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1498ff85-69ac-46af-a155-cec0c56f9634\"}],\"ips\":[{\"address\":\"10.217.0.16/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.610319887+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cbf486ca4cacf84","mac":"4e:43:46:3d:dd:76"},{"name":"eth0","mac":"0a:58:0a:d9:00:16","sandbox":"/var/run/netns/3ce95030-2bfc-4308-a38f-8ff5a50e73d6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.22/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.610524363+00:00 stderr F I1013 00:14:56.610476 7626 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator-7769bd8d7d-q5cvv", UID:"b54e8941-2fc4-432a-9e51-39684df9089e", APIVersion:"v1", ResourceVersion:"38184", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.22/23] from ovn-kubernetes 2025-10-13T00:14:56.623768690+00:00 stderr P 2025-10-13T00:14:56Z [verbose] 2025-10-13T00:14:56.623838442+00:00 stderr P ADD starting CNI request ContainerID:"a627c77ce1d0d5be262cb7dd53e3666d217b1797f2b8b8a04872243105af1f79" Netns:"/var/run/netns/3323bd6a-c8cc-448e-a66a-fe892b710599" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=a627c77ce1d0d5be262cb7dd53e3666d217b1797f2b8b8a04872243105af1f79;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-10-13T00:14:56.623859342+00:00 stderr F 2025-10-13T00:14:56.625527902+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"cbf486ca4cacf84bf0cc804deca3d39f50423cee01f2aea8e2c7ba2d637f2351" Netns:"/var/run/netns/3ce95030-2bfc-4308-a38f-8ff5a50e73d6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=cbf486ca4cacf84bf0cc804deca3d39f50423cee01f2aea8e2c7ba2d637f2351;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:43:46:3d:dd:76\",\"name\":\"cbf486ca4cacf84\"},{\"mac\":\"0a:58:0a:d9:00:16\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3ce95030-2bfc-4308-a38f-8ff5a50e73d6\"}],\"ips\":[{\"address\":\"10.217.0.22/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.690433587+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b40bcae31bccd90","mac":"d2:63:86:41:c1:9f"},{"name":"eth0","mac":"0a:58:0a:d9:00:2b","sandbox":"/var/run/netns/2cf65cec-c6d3-4d9d-9f18-6885a99f13cd"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.43/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.690604912+00:00 stderr F I1013 00:14:56.690572 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8464bcc55b-sjnqz", UID:"bd556935-a077-45df-ba3f-d42c39326ccd", APIVersion:"v1", ResourceVersion:"38141", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.43/23] from ovn-kubernetes 2025-10-13T00:14:56.702607412+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"b40bcae31bccd9063990ea4bfe31c94cba0ef873d3a3641d82fd976604e08e65" Netns:"/var/run/netns/2cf65cec-c6d3-4d9d-9f18-6885a99f13cd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=b40bcae31bccd9063990ea4bfe31c94cba0ef873d3a3641d82fd976604e08e65;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:63:86:41:c1:9f\",\"name\":\"b40bcae31bccd90\"},{\"mac\":\"0a:58:0a:d9:00:2b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2cf65cec-c6d3-4d9d-9f18-6885a99f13cd\"}],\"ips\":[{\"address\":\"10.217.0.43/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.729563290+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"0f2717b16e9536796f58661d980973b2c5799e608aa7f2b94c6ee9c0c000d914" Netns:"/var/run/netns/4d5a2366-efcf-407e-99ca-a28e2f87267e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=0f2717b16e9536796f58661d980973b2c5799e608aa7f2b94c6ee9c0c000d914;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-10-13T00:14:56.735475917+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"747d5b06bcf5b291b2a45678514e1c475dac78b376552b58bf38508c94ac636c" Netns:"/var/run/netns/c41eb8e0-e127-4828-bdaa-c5e1bb484b20" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=747d5b06bcf5b291b2a45678514e1c475dac78b376552b58bf38508c94ac636c;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-10-13T00:14:56.769401253+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"db11ff99fdc6391ad495ce9d85b2ccdcea9bc5ef94d54f27e0536989237e1077" Netns:"/var/run/netns/6938266c-a769-4e14-83b9-2458c01e22bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=db11ff99fdc6391ad495ce9d85b2ccdcea9bc5ef94d54f27e0536989237e1077;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-10-13T00:14:56.802512435+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"5439deca53ee54fa84726a77c87c9ab512858e5a48c6a260ea74eddc8d188c35" Netns:"/var/run/netns/8c095e00-da43-41cc-85d5-f563d00e7eaf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=5439deca53ee54fa84726a77c87c9ab512858e5a48c6a260ea74eddc8d188c35;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-10-13T00:14:56.833926376+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a627c77ce1d0d5b","mac":"4a:a2:7c:44:db:9a"},{"name":"eth0","mac":"0a:58:0a:d9:00:18","sandbox":"/var/run/netns/3323bd6a-c8cc-448e-a66a-fe892b710599"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.24/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.834207655+00:00 stderr F I1013 00:14:56.834177 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"package-server-manager-84d578d794-jw7r2", UID:"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be", APIVersion:"v1", ResourceVersion:"38231", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.24/23] from ovn-kubernetes 2025-10-13T00:14:56.851986888+00:00 stderr P 2025-10-13T00:14:56Z [verbose] 2025-10-13T00:14:56.852032339+00:00 stderr P ADD finished CNI request 
ContainerID:"a627c77ce1d0d5be262cb7dd53e3666d217b1797f2b8b8a04872243105af1f79" Netns:"/var/run/netns/3323bd6a-c8cc-448e-a66a-fe892b710599" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=a627c77ce1d0d5be262cb7dd53e3666d217b1797f2b8b8a04872243105af1f79;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:a2:7c:44:db:9a\",\"name\":\"a627c77ce1d0d5b\"},{\"mac\":\"0a:58:0a:d9:00:18\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3323bd6a-c8cc-448e-a66a-fe892b710599\"}],\"ips\":[{\"address\":\"10.217.0.24/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.852056110+00:00 stderr F 2025-10-13T00:14:56.865747430+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"69f02b6bc9278d1a50b08bbf9a8b1d06bc1d17d275805b1b04b33bb670395035" Netns:"/var/run/netns/448c3a52-8423-43eb-b244-229f6cbf90be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=69f02b6bc9278d1a50b08bbf9a8b1d06bc1d17d275805b1b04b33bb670395035;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-10-13T00:14:56.868885164+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"79f9f836193e2f1d572b361c8f37976f87e203668f1389678aca66a602c6984b" Netns:"/var/run/netns/524873ad-b177-4ec6-b161-36ae95ea6910" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=79f9f836193e2f1d572b361c8f37976f87e203668f1389678aca66a602c6984b;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-10-13T00:14:56.868885164+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492" Netns:"/var/run/netns/1571eba1-7b2f-4e01-8fe5-9788bf88ad4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-10-13T00:14:56.907460180+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD starting CNI request ContainerID:"6cfe111afb06d35f33af08594257972298d2014399581148f5d65bf670c40f79" Netns:"/var/run/netns/03fa3a35-fbcb-4f51-9bea-e93576eb44c1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=6cfe111afb06d35f33af08594257972298d2014399581148f5d65bf670c40f79;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-10-13T00:14:56.942639744+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"db11ff99fdc6391","mac":"86:87:8f:a1:87:64"},{"name":"eth0","mac":"0a:58:0a:d9:00:0e","sandbox":"/var/run/netns/6938266c-a769-4e14-83b9-2458c01e22bc"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.14/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.943672065+00:00 stderr F I1013 00:14:56.942907 7626 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-6d8474f75f-x54mh", UID:"c085412c-b875-46c9-ae3e-e6b0d8067091", APIVersion:"v1", ResourceVersion:"38082", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.14/23] from ovn-kubernetes 2025-10-13T00:14:56.959001024+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"db11ff99fdc6391ad495ce9d85b2ccdcea9bc5ef94d54f27e0536989237e1077" Netns:"/var/run/netns/6938266c-a769-4e14-83b9-2458c01e22bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=db11ff99fdc6391ad495ce9d85b2ccdcea9bc5ef94d54f27e0536989237e1077;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"86:87:8f:a1:87:64\",\"name\":\"db11ff99fdc6391\"},{\"mac\":\"0a:58:0a:d9:00:0e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6938266c-a769-4e14-83b9-2458c01e22bc\"}],\"ips\":[{\"address\":\"10.217.0.14/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.973942482+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"66953f4cfd7fd14","mac":"3a:de:fe:7f:08:c2"},{"name":"eth0","mac":"0a:58:0a:d9:00:06","sandbox":"/var/run/netns/e74acd95-3290-4b20-9119-603a867da24a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.6/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.974055785+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5439deca53ee54f","mac":"16:24:e5:54:09:61"},{"name":"eth0","mac":"0a:58:0a:d9:00:07","sandbox":"/var/run/netns/8c095e00-da43-41cc-85d5-f563d00e7eaf"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.7/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.974066735+00:00 stderr F I1013 00:14:56.974048 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-7c88c4c865-kn67m", UID:"43ae1c37-047b-4ee2-9fee-41e337dd4ac8", APIVersion:"v1", ResourceVersion:"38322", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.6/23] from ovn-kubernetes 2025-10-13T00:14:56.974279972+00:00 stderr F I1013 00:14:56.974227 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-78d54458c4-sc8h7", UID:"ed024e5d-8fc2-4c22-803d-73f3c9795f19", APIVersion:"v1", ResourceVersion:"38212", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.7/23] from ovn-kubernetes 2025-10-13T00:14:56.974622872+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-dns:dns-default-gbw49:13045510-8717-4a71-ade4-be95a76440a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"dc7cb350ffa4d90","mac":"5e:8e:c9:55:32:82"},{"name":"eth0","mac":"0a:58:0a:d9:00:1f","sandbox":"/var/run/netns/9b231092-8bcb-4690-967b-d43bdd3a6e3e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.31/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.974793387+00:00 stderr 
F I1013 00:14:56.974757 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-gbw49", UID:"13045510-8717-4a71-ade4-be95a76440a7", APIVersion:"v1", ResourceVersion:"38163", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.31/23] from ovn-kubernetes 2025-10-13T00:14:56.975842349+00:00 stderr F 2025-10-13T00:14:56Z [verbose] Add: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"851b2fa5f88dcb6","mac":"7a:39:bd:68:6a:72"},{"name":"eth0","mac":"0a:58:0a:d9:00:0c","sandbox":"/var/run/netns/5228f103-a1f8-48be-91a4-95a42c4582cb"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.12/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:56.976036964+00:00 stderr F I1013 00:14:56.976003 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7", UID:"71af81a9-7d43-49b2-9287-c375900aa905", APIVersion:"v1", ResourceVersion:"38270", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.12/23] from ovn-kubernetes 2025-10-13T00:14:56.986429006+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"66953f4cfd7fd14280641f1cfbbc1d5fc539237ee1bf57949aa6ebbecc726b75" Netns:"/var/run/netns/e74acd95-3290-4b20-9119-603a867da24a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=66953f4cfd7fd14280641f1cfbbc1d5fc539237ee1bf57949aa6ebbecc726b75;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3a:de:fe:7f:08:c2\",\"name\":\"66953f4cfd7fd14\"},{\"mac\":\"0a:58:0a:d9:00:06\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e74acd95-3290-4b20-9119-603a867da24a\"}],\"ips\":[{\"address\":\"10.217.0.6/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.991414195+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"851b2fa5f88dcb6109f52a8c1a65b274ff13b04df31f49fca4fd66809fb8a538" Netns:"/var/run/netns/5228f103-a1f8-48be-91a4-95a42c4582cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=851b2fa5f88dcb6109f52a8c1a65b274ff13b04df31f49fca4fd66809fb8a538;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:39:bd:68:6a:72\",\"name\":\"851b2fa5f88dcb6\"},{\"mac\":\"0a:58:0a:d9:00:0c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5228f103-a1f8-48be-91a4-95a42c4582cb\"}],\"ips\":[{\"address\":\"10.217.0.12/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.991414195+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"dc7cb350ffa4d90310a5c0ee04a18e852da359cc8f6cd54b604674294ec8eaf5" Netns:"/var/run/netns/9b231092-8bcb-4690-967b-d43bdd3a6e3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=dc7cb350ffa4d90310a5c0ee04a18e852da359cc8f6cd54b604674294ec8eaf5;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:8e:c9:55:32:82\",\"name\":\"dc7cb350ffa4d90\"},{\"mac\":\"0a:58:0a:d9:00:1f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9b231092-8bcb-4690-967b-d43bdd3a6e3e\"}],\"ips\":[{\"address\":\"10.217.0.31/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:56.994541119+00:00 stderr F 2025-10-13T00:14:56Z [verbose] ADD finished CNI request ContainerID:"5439deca53ee54fa84726a77c87c9ab512858e5a48c6a260ea74eddc8d188c35" Netns:"/var/run/netns/8c095e00-da43-41cc-85d5-f563d00e7eaf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=5439deca53ee54fa84726a77c87c9ab512858e5a48c6a260ea74eddc8d188c35;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:24:e5:54:09:61\",\"name\":\"5439deca53ee54f\"},{\"mac\":\"0a:58:0a:d9:00:07\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8c095e00-da43-41cc-85d5-f563d00e7eaf\"}],\"ips\":[{\"address\":\"10.217.0.7/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.074571577+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478" Netns:"/var/run/netns/dd217dce-f3ff-4b13-ad34-a4419572f865" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-10-13T00:14:57.093114422+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"747d5b06bcf5b29","mac":"36:ee:89:75:8a:45"},{"name":"eth0","mac":"0a:58:0a:d9:00:3f","sandbox":"/var/run/netns/c41eb8e0-e127-4828-bdaa-c5e1bb484b20"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.63/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.093114422+00:00 stderr F I1013 00:14:57.092266 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-controller-6df6df6b6b-58shh", UID:"297ab9b6-2186-4d5b-a952-2bfd59af63c4", APIVersion:"v1", ResourceVersion:"38249", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.63/23] from ovn-kubernetes 2025-10-13T00:14:57.101183244+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0f2717b16e95367","mac":"fe:01:b0:09:03:d3"},{"name":"eth0","mac":"0a:58:0a:d9:00:20","sandbox":"/var/run/netns/4d5a2366-efcf-407e-99ca-a28e2f87267e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.32/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.101183244+00:00 stderr F I1013 00:14:57.100938 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"multus-admission-controller-6c7c885997-4hbbc", UID:"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0", APIVersion:"v1", ResourceVersion:"38105", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.32/23] from ovn-kubernetes 
2025-10-13T00:14:57.112246095+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"747d5b06bcf5b291b2a45678514e1c475dac78b376552b58bf38508c94ac636c" Netns:"/var/run/netns/c41eb8e0-e127-4828-bdaa-c5e1bb484b20" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=747d5b06bcf5b291b2a45678514e1c475dac78b376552b58bf38508c94ac636c;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"36:ee:89:75:8a:45\",\"name\":\"747d5b06bcf5b29\"},{\"mac\":\"0a:58:0a:d9:00:3f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c41eb8e0-e127-4828-bdaa-c5e1bb484b20\"}],\"ips\":[{\"address\":\"10.217.0.63/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.122447411+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"0f2717b16e9536796f58661d980973b2c5799e608aa7f2b94c6ee9c0c000d914" Netns:"/var/run/netns/4d5a2366-efcf-407e-99ca-a28e2f87267e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=0f2717b16e9536796f58661d980973b2c5799e608aa7f2b94c6ee9c0c000d914;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fe:01:b0:09:03:d3\",\"name\":\"0f2717b16e95367\"},{\"mac\":\"0a:58:0a:d9:00:20\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4d5a2366-efcf-407e-99ca-a28e2f87267e\"}],\"ips\":[{\"address\":\"10.217.0.32/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.133118271+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c87c0bc9b630825","mac":"9a:4a:bb:d2:b0:c7"},{"name":"eth0","mac":"0a:58:0a:d9:00:0d","sandbox":"/var/run/netns/1571eba1-7b2f-4e01-8fe5-9788bf88ad4c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.13/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.136457881+00:00 stderr F I1013 00:14:57.133605 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-f9xdt", UID:"3482be94-0cdb-4e2a-889b-e5fac59fdbf5", APIVersion:"v1", ResourceVersion:"38301", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.13/23] from ovn-kubernetes 2025-10-13T00:14:57.151409379+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492" Netns:"/var/run/netns/1571eba1-7b2f-4e01-8fe5-9788bf88ad4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:4a:bb:d2:b0:c7\",\"name\":\"c87c0bc9b630825\"},{\"mac\":\"0a:58:0a:d9:00:0d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1571eba1-7b2f-4e01-8fe5-9788bf88ad4c\"}],\"ips\":[{\"address\":\"10.217.0.13/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.176457039+00:00 stderr F 2025-10-13T00:14:57Z 
[verbose] Add: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"79f9f836193e2f1","mac":"ba:61:d1:27:8e:e9"},{"name":"eth0","mac":"0a:58:0a:d9:00:08","sandbox":"/var/run/netns/524873ad-b177-4ec6-b161-36ae95ea6910"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.8/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.176457039+00:00 stderr F I1013 00:14:57.175089 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-etcd-operator", Name:"etcd-operator-768d5b5d86-722mg", UID:"0b5c38ff-1fa8-4219-994d-15776acd4a4d", APIVersion:"v1", ResourceVersion:"38131", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.8/23] from ovn-kubernetes 2025-10-13T00:14:57.191914882+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"285fda3f179fd952ebb84ade866da4368bf5987a1feec2cbdec549cb81462ce9" Netns:"/var/run/netns/b2067260-b7f1-41f0-b1c4-f66b1b149a63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=285fda3f179fd952ebb84ade866da4368bf5987a1feec2cbdec549cb81462ce9;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"" 2025-10-13T00:14:57.195023995+00:00 stderr P 2025-10-13T00:14:57Z [verbose] 2025-10-13T00:14:57.195065357+00:00 stderr P Add: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6cfe111afb06d35","mac":"a2:df:ca:db:dd:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:0b","sandbox":"/var/run/netns/03fa3a35-fbcb-4f51-9bea-e93576eb44c1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.11/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.195088237+00:00 stderr F 2025-10-13T00:14:57.195542431+00:00 stderr F I1013 00:14:57.195515 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-857456c46-7f5wf", UID:"8a5ae51d-d173-4531-8975-f164c975ce1f", APIVersion:"v1", ResourceVersion:"38312", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.11/23] from ovn-kubernetes 2025-10-13T00:14:57.200446858+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"fb7d53dbe8608f5115ad7a29c6c04fdef0aecab6b93e9b1611bad20287172d37" Netns:"/var/run/netns/51d01027-2645-493f-b801-b2e25e967cb6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=fb7d53dbe8608f5115ad7a29c6c04fdef0aecab6b93e9b1611bad20287172d37;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-10-13T00:14:57.213618743+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"79f9f836193e2f1d572b361c8f37976f87e203668f1389678aca66a602c6984b" Netns:"/var/run/netns/524873ad-b177-4ec6-b161-36ae95ea6910" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=79f9f836193e2f1d572b361c8f37976f87e203668f1389678aca66a602c6984b;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ba:61:d1:27:8e:e9\",\"name\":\"79f9f836193e2f1\"},{\"mac\":\"0a:58:0a:d9:00:08\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/524873ad-b177-4ec6-b161-36ae95ea6910\"}],\"ips\":[{\"address\":\"10.217.0.8/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.219623943+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0" Netns:"/var/run/netns/37a56640-268c-4428-bea0-4c33015cb711" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-10-13T00:14:57.219623943+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"6cfe111afb06d35f33af08594257972298d2014399581148f5d65bf670c40f79" Netns:"/var/run/netns/03fa3a35-fbcb-4f51-9bea-e93576eb44c1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=6cfe111afb06d35f33af08594257972298d2014399581148f5d65bf670c40f79;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:df:ca:db:dd:bb\",\"name\":\"6cfe111afb06d35\"},{\"mac\":\"0a:58:0a:d9:00:0b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/03fa3a35-fbcb-4f51-9bea-e93576eb44c1\"}],\"ips\":[{\"address\":\"10.217.0.11/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.235111827+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de" Netns:"/var/run/netns/79a07779-da1b-4cb8-8930-d0a3b0f90b1c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2025-10-13T00:14:57.237411196+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076" Netns:"/var/run/netns/86366f39-6d78-47c4-bbbd-1e3665fad256" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-10-13T00:14:57.243157558+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"9c9a5e38e90cf6cff3b4195bfbcd5bcd5f1d42238fd9caa8807f4578bb4f76e5" Netns:"/var/run/netns/b56a88ec-d292-4187-863b-33cebb1063a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=9c9a5e38e90cf6cff3b4195bfbcd5bcd5f1d42238fd9caa8807f4578bb4f76e5;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-10-13T00:14:57.268089495+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"b0650787fa496e7","mac":"ae:07:e4:ae:48:5a"},{"name":"eth0","mac":"0a:58:0a:d9:00:33","sandbox":"/var/run/netns/dd217dce-f3ff-4b13-ad34-a4419572f865"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.51/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.268668042+00:00 stderr F I1013 00:14:57.268634 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-8s8pc", UID:"c782cf62-a827-4677-b3c2-6f82c5f09cbb", APIVersion:"v1", ResourceVersion:"38287", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.51/23] from ovn-kubernetes 2025-10-13T00:14:57.299090104+00:00 stderr P 2025-10-13T00:14:57Z [verbose] 2025-10-13T00:14:57.299155095+00:00 stderr P ADD finished CNI request ContainerID:"b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478" Netns:"/var/run/netns/dd217dce-f3ff-4b13-ad34-a4419572f865" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:07:e4:ae:48:5a\",\"name\":\"b0650787fa496e7\"},{\"mac\":\"0a:58:0a:d9:00:33\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/dd217dce-f3ff-4b13-ad34-a4419572f865\"}],\"ips\":[{\"address\":\"10.217.0.51/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.299176756+00:00 stderr F 2025-10-13T00:14:57.302587338+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"ea2e69806126bd9bf521543ae065e49f51baa8e9e71593190b2a3b6f0df69fc6" Netns:"/var/run/netns/21e4ad4d-05ea-4ab7-a684-0f1b0ad049e4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=ea2e69806126bd9bf521543ae065e49f51baa8e9e71593190b2a3b6f0df69fc6;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-10-13T00:14:57.336630738+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"69f02b6bc9278d1","mac":"16:e6:11:c3:98:f2"},{"name":"eth0","mac":"0a:58:0a:d9:00:0f","sandbox":"/var/run/netns/448c3a52-8423-43eb-b244-229f6cbf90be"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.15/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.336630738+00:00 stderr F I1013 00:14:57.336040 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-6f6cb54958-rbddb", UID:"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf", APIVersion:"v1", ResourceVersion:"38304", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.15/23] from ovn-kubernetes 2025-10-13T00:14:57.356764282+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"69f02b6bc9278d1a50b08bbf9a8b1d06bc1d17d275805b1b04b33bb670395035" Netns:"/var/run/netns/448c3a52-8423-43eb-b244-229f6cbf90be" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=69f02b6bc9278d1a50b08bbf9a8b1d06bc1d17d275805b1b04b33bb670395035;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:e6:11:c3:98:f2\",\"name\":\"69f02b6bc9278d1\"},{\"mac\":\"0a:58:0a:d9:00:0f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/448c3a52-8423-43eb-b244-229f6cbf90be\"}],\"ips\":[{\"address\":\"10.217.0.15/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.358211375+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"cf8e48b8c1d90f3d0a88fd028b50063d68d72d5fe52dcf2abff82aaacf56b7ab" Netns:"/var/run/netns/244a9fc3-c83b-4f79-aa18-15b5e4c50b3a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=cf8e48b8c1d90f3d0a88fd028b50063d68d72d5fe52dcf2abff82aaacf56b7ab;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-10-13T00:14:57.381903585+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-controller-manager:controller-manager-778975cc4f-x5vcf:1a3e81c3-c292-4130-9436-f94062c91efd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"285fda3f179fd95","mac":"96:59:b6:b2:a9:08"},{"name":"eth0","mac":"0a:58:0a:d9:00:57","sandbox":"/var/run/netns/b2067260-b7f1-41f0-b1c4-f66b1b149a63"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.87/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.382173783+00:00 stderr F I1013 00:14:57.382101 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-778975cc4f-x5vcf", UID:"1a3e81c3-c292-4130-9436-f94062c91efd", APIVersion:"v1", ResourceVersion:"38240", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.87/23] from ovn-kubernetes 2025-10-13T00:14:57.398671147+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"285fda3f179fd952ebb84ade866da4368bf5987a1feec2cbdec549cb81462ce9" Netns:"/var/run/netns/b2067260-b7f1-41f0-b1c4-f66b1b149a63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=285fda3f179fd952ebb84ade866da4368bf5987a1feec2cbdec549cb81462ce9;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"96:59:b6:b2:a9:08\",\"name\":\"285fda3f179fd95\"},{\"mac\":\"0a:58:0a:d9:00:57\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b2067260-b7f1-41f0-b1c4-f66b1b149a63\"}],\"ips\":[{\"address\":\"10.217.0.87/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.494095026+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d1d69cb9e0e74fc","mac":"c2:bb:1d:4c:a5:ae"},{"name":"eth0","mac":"0a:58:0a:d9:00:66","sandbox":"/var/run/netns/79a07779-da1b-4cb8-8930-d0a3b0f90b1c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.102/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.494526269+00:00 stderr F I1013 00:14:57.494495 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-marketplace", Name:"community-operators-sdddl", UID:"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760", APIVersion:"v1", ResourceVersion:"38111", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.102/23] from ovn-kubernetes 2025-10-13T00:14:57.511305352+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de" Netns:"/var/run/netns/79a07779-da1b-4cb8-8930-d0a3b0f90b1c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c2:bb:1d:4c:a5:ae\",\"name\":\"d1d69cb9e0e74fc\"},{\"mac\":\"0a:58:0a:d9:00:66\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/79a07779-da1b-4cb8-8930-d0a3b0f90b1c\"}],\"ips\":[{\"address\":\"10.217.0.102/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.515176698+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cf8e48b8c1d90f3","mac":"96:4d:0c:d9:aa:6b"},{"name":"eth0","mac":"0a:58:0a:d9:00:04","sandbox":"/var/run/netns/244a9fc3-c83b-4f79-aa18-15b5e4c50b3a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.4/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.515439206+00:00 stderr F I1013 00:14:57.515377 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-v54bt", UID:"34a48baf-1bee-4921-8bb2-9b7320e76f79", APIVersion:"v1", ResourceVersion:"38275", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.4/23] from ovn-kubernetes 2025-10-13T00:14:57.526703823+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ea2e69806126bd9","mac":"82:34:d4:44:99:0c"},{"name":"eth0","mac":"0a:58:0a:d9:00:3d","sandbox":"/var/run/netns/21e4ad4d-05ea-4ab7-a684-0f1b0ad049e4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.61/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.527207178+00:00 stderr F I1013 00:14:57.527053 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-conversion-webhook-595f9969b-l6z49", UID:"59748b9b-c309-4712-aa85-bb38d71c4915", APIVersion:"v1", ResourceVersion:"38150", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.61/23] from ovn-kubernetes 2025-10-13T00:14:57.528378143+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"cf8e48b8c1d90f3d0a88fd028b50063d68d72d5fe52dcf2abff82aaacf56b7ab" Netns:"/var/run/netns/244a9fc3-c83b-4f79-aa18-15b5e4c50b3a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=cf8e48b8c1d90f3d0a88fd028b50063d68d72d5fe52dcf2abff82aaacf56b7ab;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"96:4d:0c:d9:aa:6b\",\"name\":\"cf8e48b8c1d90f3\"},{\"mac\":\"0a:58:0a:d9:00:04\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/244a9fc3-c83b-4f79-aa18-15b5e4c50b3a\"}],\"ips\":[{\"address\":\"10.217.0.4/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.538218908+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"ea2e69806126bd9bf521543ae065e49f51baa8e9e71593190b2a3b6f0df69fc6" Netns:"/var/run/netns/21e4ad4d-05ea-4ab7-a684-0f1b0ad049e4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=ea2e69806126bd9bf521543ae065e49f51baa8e9e71593190b2a3b6f0df69fc6;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:34:d4:44:99:0c\",\"name\":\"ea2e69806126bd9\"},{\"mac\":\"0a:58:0a:d9:00:3d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/21e4ad4d-05ea-4ab7-a684-0f1b0ad049e4\"}],\"ips\":[{\"address\":\"10.217.0.61/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.559789475+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5" Netns:"/var/run/netns/81be2739-b70f-4a0e-8b06-95b6372ad700" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-10-13T00:14:57.570535347+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550" Netns:"/var/run/netns/0fd608fc-d109-4bfe-a6b6-226ba1acfce2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-10-13T00:14:57.628225685+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fb7d53dbe8608f5","mac":"de:73:99:85:a4:e4"},{"name":"eth0","mac":"0a:58:0a:d9:00:42","sandbox":"/var/run/netns/51d01027-2645-493f-b801-b2e25e967cb6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.66/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.628225685+00:00 stderr F I1013 00:14:57.628088 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-65476884b9-9wcvx", UID:"6268b7fe-8910-4505-b404-6f1df638105c", APIVersion:"v1", ResourceVersion:"38095", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.66/23] from ovn-kubernetes 2025-10-13T00:14:57.643702309+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"e1bfb510a845e2a289382554a0dc3d806dd13d66a9d5b966f6ee3d9f2b79857a" Netns:"/var/run/netns/0a42d87c-05a1-4206-a7fa-1f3f65f930c2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=e1bfb510a845e2a289382554a0dc3d806dd13d66a9d5b966f6ee3d9f2b79857a;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-10-13T00:14:57.643702309+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a96bb73ae8b8318","mac":"ce:ea:9f:89:58:44"},{"name":"eth0","mac":"0a:58:0a:d9:00:30","sandbox":"/var/run/netns/86366f39-6d78-47c4-bbbd-1e3665fad256"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.48/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.644994877+00:00 stderr F I1013 00:14:57.644196 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-8jhz6", UID:"3f4dca86-e6ee-4ec9-8324-86aff960225e", APIVersion:"v1", ResourceVersion:"38114", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.48/23] from ovn-kubernetes 2025-10-13T00:14:57.644994877+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"fb7d53dbe8608f5115ad7a29c6c04fdef0aecab6b93e9b1611bad20287172d37" Netns:"/var/run/netns/51d01027-2645-493f-b801-b2e25e967cb6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=fb7d53dbe8608f5115ad7a29c6c04fdef0aecab6b93e9b1611bad20287172d37;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:73:99:85:a4:e4\",\"name\":\"fb7d53dbe8608f5\"},{\"mac\":\"0a:58:0a:d9:00:42\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/51d01027-2645-493f-b801-b2e25e967cb6\"}],\"ips\":[{\"address\":\"10.217.0.66/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.650812512+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9c9a5e38e90cf6c","mac":"2e:ef:79:35:cc:24"},{"name":"eth0","mac":"0a:58:0a:d9:00:3e","sandbox":"/var/run/netns/b56a88ec-d292-4187-863b-33cebb1063a3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.62/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.651095330+00:00 stderr F I1013 00:14:57.651040 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5dbbc74dc9-cp5cd", UID:"e9127708-ccfd-4891-8a3a-f0cacb77e0f4", APIVersion:"v1", ResourceVersion:"38306", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.62/23] from ovn-kubernetes 2025-10-13T00:14:57.663863013+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076" Netns:"/var/run/netns/86366f39-6d78-47c4-bbbd-1e3665fad256" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ce:ea:9f:89:58:44\",\"name\":\"a96bb73ae8b8318\"},{\"mac\":\"0a:58:0a:d9:00:30\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/86366f39-6d78-47c4-bbbd-1e3665fad256\"}],\"ips\":[{\"address\":\"10.217.0.48/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.667948675+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"9c9a5e38e90cf6cff3b4195bfbcd5bcd5f1d42238fd9caa8807f4578bb4f76e5" Netns:"/var/run/netns/b56a88ec-d292-4187-863b-33cebb1063a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=9c9a5e38e90cf6cff3b4195bfbcd5bcd5f1d42238fd9caa8807f4578bb4f76e5;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:ef:79:35:cc:24\",\"name\":\"9c9a5e38e90cf6c\"},{\"mac\":\"0a:58:0a:d9:00:3e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b56a88ec-d292-4187-863b-33cebb1063a3\"}],\"ips\":[{\"address\":\"10.217.0.62/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.669688517+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"47567d080cdeebb","mac":"46:e3:61:33:29:dd"},{"name":"eth0","mac":"0a:58:0a:d9:00:32","sandbox":"/var/run/netns/37a56640-268c-4428-bea0-4c33015cb711"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.50/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.670903214+00:00 stderr F I1013 00:14:57.669871 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-f4jkp", UID:"4092a9f8-5acc-4932-9e90-ef962eeb301a", APIVersion:"v1", ResourceVersion:"38290", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.50/23] from ovn-kubernetes 2025-10-13T00:14:57.689138410+00:00 stderr P 2025-10-13T00:14:57Z [verbose] 2025-10-13T00:14:57.689195712+00:00 stderr P ADD finished CNI request ContainerID:"47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0" Netns:"/var/run/netns/37a56640-268c-4428-bea0-4c33015cb711" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"46:e3:61:33:29:dd\",\"name\":\"47567d080cdeebb\"},{\"mac\":\"0a:58:0a:d9:00:32\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/37a56640-268c-4428-bea0-4c33015cb711\"}],\"ips\":[{\"address\":\"10.217.0.50/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.689215162+00:00 stderr F 2025-10-13T00:14:57.721933743+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51e10eee9cf8d93","mac":"0a:7c:f0:91:19:d5"},{"name":"eth0","mac":"0a:58:0a:d9:00:2f","sandbox":"/var/run/netns/0fd608fc-d109-4bfe-a6b6-226ba1acfce2"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.47/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.722649674+00:00 stderr F I1013 00:14:57.722318 
7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-7287f", UID:"887d596e-c519-4bfa-af90-3edd9e1b2f0f", APIVersion:"v1", ResourceVersion:"38218", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.47/23] from ovn-kubernetes 2025-10-13T00:14:57.742424667+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550" Netns:"/var/run/netns/0fd608fc-d109-4bfe-a6b6-226ba1acfce2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"0a:7c:f0:91:19:d5\",\"name\":\"51e10eee9cf8d93\"},{\"mac\":\"0a:58:0a:d9:00:2f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0fd608fc-d109-4bfe-a6b6-226ba1acfce2\"}],\"ips\":[{\"address\":\"10.217.0.47/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.770636352+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f7684764a172c67","mac":"42:bd:24:b0:b1:57"},{"name":"eth0","mac":"0a:58:0a:d9:00:2e","sandbox":"/var/run/netns/81be2739-b70f-4a0e-8b06-95b6372ad700"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.46/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.770952951+00:00 stderr F I1013 00:14:57.770923 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-samples-operator", Name:"cluster-samples-operator-bc474d5d6-wshwg", UID:"f728c15e-d8de-4a9a-a3ea-fdcead95cb91", APIVersion:"v1", ResourceVersion:"38258", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.46/23] from ovn-kubernetes 2025-10-13T00:14:57.785775515+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5" Netns:"/var/run/netns/81be2739-b70f-4a0e-8b06-95b6372ad700" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"42:bd:24:b0:b1:57\",\"name\":\"f7684764a172c67\"},{\"mac\":\"0a:58:0a:d9:00:2e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/81be2739-b70f-4a0e-8b06-95b6372ad700\"}],\"ips\":[{\"address\":\"10.217.0.46/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.796137586+00:00 stderr F 2025-10-13T00:14:57Z [verbose] Add: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e1bfb510a845e2a","mac":"46:05:4f:9c:81:95"},{"name":"eth0","mac":"0a:58:0a:d9:00:31","sandbox":"/var/run/netns/0a42d87c-05a1-4206-a7fa-1f3f65f930c2"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.49/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:57.796514827+00:00 stderr F I1013 00:14:57.796476 7626 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"hostpath-provisioner", Name:"csi-hostpathplugin-hvm8g", UID:"12e733dd-0939-4f1b-9cbb-13897e093787", APIVersion:"v1", ResourceVersion:"38215", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.49/23] from ovn-kubernetes 2025-10-13T00:14:57.811794035+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD finished CNI request ContainerID:"e1bfb510a845e2a289382554a0dc3d806dd13d66a9d5b966f6ee3d9f2b79857a" Netns:"/var/run/netns/0a42d87c-05a1-4206-a7fa-1f3f65f930c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=e1bfb510a845e2a289382554a0dc3d806dd13d66a9d5b966f6ee3d9f2b79857a;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"46:05:4f:9c:81:95\",\"name\":\"e1bfb510a845e2a\"},{\"mac\":\"0a:58:0a:d9:00:31\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0a42d87c-05a1-4206-a7fa-1f3f65f930c2\"}],\"ips\":[{\"address\":\"10.217.0.49/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:57.855472564+00:00 stderr P 2025-10-13T00:14:57Z [verbose] 2025-10-13T00:14:57.855532356+00:00 stderr P ADD starting CNI request ContainerID:"7cf0ee4d5236aa6dd881539d55b05f0078214f881fc1ad15f9c6c675a3c91195" Netns:"/var/run/netns/1133d8c4-5632-4f6a-bbe9-90bfa087e702" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=7cf0ee4d5236aa6dd881539d55b05f0078214f881fc1ad15f9c6c675a3c91195;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-10-13T00:14:57.855555126+00:00 stderr F 2025-10-13T00:14:57.857605868+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"a6a02864fcc1d2956060079a09b162c2d011a0f5f774a564595982b117751eff" Netns:"/var/run/netns/1628749f-de95-486b-b812-42ead3a4800d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=a6a02864fcc1d2956060079a09b162c2d011a0f5f774a564595982b117751eff;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"" 2025-10-13T00:14:57.876137333+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"ccdd6d0d771e91ed8692a1dd7e54c3af99da8a52d68b9c2b36c7ef954f5ff644" Netns:"/var/run/netns/5f44a199-41be-457a-b03a-55054033af6b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=ccdd6d0d771e91ed8692a1dd7e54c3af99da8a52d68b9c2b36c7ef954f5ff644;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-10-13T00:14:57.897777671+00:00 stderr F 2025-10-13T00:14:57Z [verbose] ADD starting CNI request ContainerID:"a3103b56b63df6a7958c4d7db2ba57a9d63227db4bc6830609f7c4ee1b9fe8c6" Netns:"/var/run/netns/a1ec5ec6-f4c6-45eb-884b-4a93add779b5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=a3103b56b63df6a7958c4d7db2ba57a9d63227db4bc6830609f7c4ee1b9fe8c6;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"" 2025-10-13T00:14:57.901357409+00:00 stderr P 2025-10-13T00:14:57Z [verbose] 2025-10-13T00:14:57.901415950+00:00 stderr P ADD starting CNI request ContainerID:"41a9e9ef4b0fde1282c8cfbdf5a629ff36a5b69891c062fcc879b7523814f181" 
Netns:"/var/run/netns/3d7b6d25-0fb7-4ceb-9670-850f979b8c4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=41a9e9ef4b0fde1282c8cfbdf5a629ff36a5b69891c062fcc879b7523814f181;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-10-13T00:14:57.901436131+00:00 stderr F 2025-10-13T00:14:58.240459339+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"f4cda25e678299fa34789308b4e3059e4152f166f9fdff0bfbc1d70acca4c9b9" Netns:"/var/run/netns/f6f75773-ca2e-44b5-845e-40faa769a075" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=f4cda25e678299fa34789308b4e3059e4152f166f9fdff0bfbc1d70acca4c9b9;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-10-13T00:14:58.261942572+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"cfa8e7f575221b0debff60b72fbd16f951e2851e3fb48b820af9d1ef0fafe70f" Netns:"/var/run/netns/7012c9e0-ed84-4f4d-a9e5-6afb1b2d8d2a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=cfa8e7f575221b0debff60b72fbd16f951e2851e3fb48b820af9d1ef0fafe70f;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-10-13T00:14:58.306508908+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"7490d971711bdc6ed60e5e6732da3d92565e69a999adc828a74924573d594b36" Netns:"/var/run/netns/5ab3b636-7399-4edc-bedb-bc542dba5f85" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=7490d971711bdc6ed60e5e6732da3d92565e69a999adc828a74924573d594b36;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-10-13T00:14:58.308404604+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"e7ac480dfe62caeed3862b4bb64956c9f30afefc39567787161221dbb72343c9" Netns:"/var/run/netns/77fa6bcf-a622-4adb-881d-257c4e896a83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=e7ac480dfe62caeed3862b4bb64956c9f30afefc39567787161221dbb72343c9;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"" 2025-10-13T00:14:58.310014173+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"e1d18b29c373f72d603988d845ae08ae97cc1311c64a8632034fea2ed8975a6c" Netns:"/var/run/netns/70e5ba13-d184-45a4-8ee7-b884207de997" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=e1d18b29c373f72d603988d845ae08ae97cc1311c64a8632034fea2ed8975a6c;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-10-13T00:14:58.322385603+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"6c25dda9fb1b793613efa02c00d87a63ca8fa89aa83d07d5bc0c899e48bed844" Netns:"/var/run/netns/b67a8136-93d4-49d8-b8c3-b04abb4892bb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=6c25dda9fb1b793613efa02c00d87a63ca8fa89aa83d07d5bc0c899e48bed844;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"" 2025-10-13T00:14:58.334219118+00:00 stderr F 
2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"6ba83a9d480b2ef0fcda3fb96c12a29723d0ab6a5036333de4bb6d096d943da9" Netns:"/var/run/netns/b3e44947-bfee-45f9-895c-fdc7695b013e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=6ba83a9d480b2ef0fcda3fb96c12a29723d0ab6a5036333de4bb6d096d943da9;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-10-13T00:14:58.346951899+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"c644cf180f0b9296d965cad60a94314f0304b75864c8e357626c5cbad2a764e3" Netns:"/var/run/netns/17d7a853-b686-443f-94e1-c408a925c9d4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=c644cf180f0b9296d965cad60a94314f0304b75864c8e357626c5cbad2a764e3;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"" 2025-10-13T00:14:58.396995419+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"11b9dba771caca23066ec868af57f7f950d8c869bfb6523975c7b4a5d81fc5d7" Netns:"/var/run/netns/5ed79c17-8a15-4717-afbd-0b50370396d8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=11b9dba771caca23066ec868af57f7f950d8c869bfb6523975c7b4a5d81fc5d7;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-10-13T00:14:58.401586496+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"ee196b19a43e4eb50692046754dcef48f071504e6c652f0e9360a093d18b3055" Netns:"/var/run/netns/7775cc96-e873-424b-9814-64b36c4f1135" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=ee196b19a43e4eb50692046754dcef48f071504e6c652f0e9360a093d18b3055;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"" 2025-10-13T00:14:58.423263886+00:00 stderr F 2025-10-13T00:14:58Z [verbose] ADD starting CNI request ContainerID:"dbcfdcf39d73e75afa121773ea26907cfbf91a2b30f94b3266a3559b6f934930" Netns:"/var/run/netns/7e90e5c5-683f-4cfd-a226-801f5e6afe31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=dbcfdcf39d73e75afa121773ea26907cfbf91a2b30f94b3266a3559b6f934930;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-10-13T00:14:59.035362406+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"41a9e9ef4b0fde1","mac":"66:0e:c2:35:2e:fd"},{"name":"eth0","mac":"0a:58:0a:d9:00:0a","sandbox":"/var/run/netns/3d7b6d25-0fb7-4ceb-9670-850f979b8c4c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.10/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.035362406+00:00 stderr F I1013 00:14:59.028602 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-546b4f8984-pwccz", UID:"6d67253e-2acd-4bc1-8185-793587da4f17", APIVersion:"v1", ResourceVersion:"38234", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.10/23] from ovn-kubernetes 2025-10-13T00:14:59.071258341+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD 
finished CNI request ContainerID:"41a9e9ef4b0fde1282c8cfbdf5a629ff36a5b69891c062fcc879b7523814f181" Netns:"/var/run/netns/3d7b6d25-0fb7-4ceb-9670-850f979b8c4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=41a9e9ef4b0fde1282c8cfbdf5a629ff36a5b69891c062fcc879b7523814f181;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"66:0e:c2:35:2e:fd\",\"name\":\"41a9e9ef4b0fde1\"},{\"mac\":\"0a:58:0a:d9:00:0a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3d7b6d25-0fb7-4ceb-9670-850f979b8c4c\"}],\"ips\":[{\"address\":\"10.217.0.10/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.147960229+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-authentication:oauth-openshift-74fc7c67cc-xqf8b:01feb2e0-a0f4-4573-8335-34e364e0ef40:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c644cf180f0b929","mac":"2e:d3:fe:a5:c0:63"},{"name":"eth0","mac":"0a:58:0a:d9:00:48","sandbox":"/var/run/netns/17d7a853-b686-443f-94e1-c408a925c9d4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.72/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.147960229+00:00 stderr F I1013 00:14:59.146520 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-74fc7c67cc-xqf8b", UID:"01feb2e0-a0f4-4573-8335-34e364e0ef40", APIVersion:"v1", ResourceVersion:"38205", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.72/23] from ovn-kubernetes 2025-10-13T00:14:59.149598808+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-kk8kg:e4a7de23-6134-4044-902a-0900dc04a501:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a3103b56b63df6a","mac":"62:97:55:b5:ed:02"},{"name":"eth0","mac":"0a:58:0a:d9:00:28","sandbox":"/var/run/netns/a1ec5ec6-f4c6-45eb-884b-4a93add779b5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.40/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.149985460+00:00 stderr F I1013 00:14:59.149946 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-kk8kg", UID:"e4a7de23-6134-4044-902a-0900dc04a501", APIVersion:"v1", ResourceVersion:"38252", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.40/23] from ovn-kubernetes 2025-10-13T00:14:59.165937548+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"a3103b56b63df6a7958c4d7db2ba57a9d63227db4bc6830609f7c4ee1b9fe8c6" Netns:"/var/run/netns/a1ec5ec6-f4c6-45eb-884b-4a93add779b5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=a3103b56b63df6a7958c4d7db2ba57a9d63227db4bc6830609f7c4ee1b9fe8c6;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:97:55:b5:ed:02\",\"name\":\"a3103b56b63df6a\"},{\"mac\":\"0a:58:0a:d9:00:28\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a1ec5ec6-f4c6-45eb-884b-4a93add779b5\"}],\"ips\":[{\"address\":\"10.217.0.40/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.166800163+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: 
openshift-machine-api:control-plane-machine-set-operator-649bd778b4-tt5tw:45a8038e-e7f2-4d93-a6f5-7753aa54e63f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a6a02864fcc1d29","mac":"72:2e:d6:5f:d7:01"},{"name":"eth0","mac":"0a:58:0a:d9:00:14","sandbox":"/var/run/netns/1628749f-de95-486b-b812-42ead3a4800d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.20/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.167311399+00:00 stderr F I1013 00:14:59.167271 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"control-plane-machine-set-operator-649bd778b4-tt5tw", UID:"45a8038e-e7f2-4d93-a6f5-7753aa54e63f", APIVersion:"v1", ResourceVersion:"38267", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.20/23] from ovn-kubernetes 2025-10-13T00:14:59.173771032+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"c644cf180f0b9296d965cad60a94314f0304b75864c8e357626c5cbad2a764e3" Netns:"/var/run/netns/17d7a853-b686-443f-94e1-c408a925c9d4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=c644cf180f0b9296d965cad60a94314f0304b75864c8e357626c5cbad2a764e3;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:d3:fe:a5:c0:63\",\"name\":\"c644cf180f0b929\"},{\"mac\":\"0a:58:0a:d9:00:48\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/17d7a853-b686-443f-94e1-c408a925c9d4\"}],\"ips\":[{\"address\":\"10.217.0.72/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.181178554+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e1d18b29c373f72","mac":"96:5e:c6:a4:78:3d"},{"name":"eth0","mac":"0a:58:0a:d9:00:12","sandbox":"/var/run/netns/70e5ba13-d184-45a4-8ee7-b884207de997"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.18/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.181178554+00:00 stderr F I1013 00:14:59.180088 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns-operator", Name:"dns-operator-75f687757b-nz2xb", UID:"10603adc-d495-423c-9459-4caa405960bb", APIVersion:"v1", ResourceVersion:"38196", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.18/23] from ovn-kubernetes 2025-10-13T00:14:59.181178554+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7cf0ee4d5236aa6","mac":"aa:23:78:7a:af:ce"},{"name":"eth0","mac":"0a:58:0a:d9:00:47","sandbox":"/var/run/netns/1133d8c4-5632-4f6a-bbe9-90bfa087e702"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.71/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.181178554+00:00 stderr F I1013 00:14:59.180633 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-2vhcn", UID:"0b5d722a-1123-4935-9740-52a08d018bc9", APIVersion:"v1", ResourceVersion:"38099", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.71/23] from ovn-kubernetes 2025-10-13T00:14:59.184725180+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI 
request ContainerID:"a6a02864fcc1d2956060079a09b162c2d011a0f5f774a564595982b117751eff" Netns:"/var/run/netns/1628749f-de95-486b-b812-42ead3a4800d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=a6a02864fcc1d2956060079a09b162c2d011a0f5f774a564595982b117751eff;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:2e:d6:5f:d7:01\",\"name\":\"a6a02864fcc1d29\"},{\"mac\":\"0a:58:0a:d9:00:14\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1628749f-de95-486b-b812-42ead3a4800d\"}],\"ips\":[{\"address\":\"10.217.0.20/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.189271147+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ccdd6d0d771e91e","mac":"9e:b5:70:b2:5d:92"},{"name":"eth0","mac":"0a:58:0a:d9:00:40","sandbox":"/var/run/netns/5f44a199-41be-457a-b03a-55054033af6b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.64/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.189271147+00:00 stderr F I1013 00:14:59.188241 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5c5478f8c-vqvt7", UID:"d0f40333-c860-4c04-8058-a0bf572dcf12", APIVersion:"v1", ResourceVersion:"38324", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.64/23] from ovn-kubernetes 2025-10-13T00:14:59.189271147+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f4cda25e678299f","mac":"52:6f:e8:e8:ab:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:19","sandbox":"/var/run/netns/f6f75773-ca2e-44b5-845e-40faa769a075"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.25/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.189271147+00:00 stderr F I1013 00:14:59.188631 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator", Name:"migrator-f7c6d88df-q2fnv", UID:"cf1a8966-f594-490a-9fbb-eec5bafd13d3", APIVersion:"v1", ResourceVersion:"38092", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.25/23] from ovn-kubernetes 2025-10-13T00:14:59.195993998+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"e1d18b29c373f72d603988d845ae08ae97cc1311c64a8632034fea2ed8975a6c" Netns:"/var/run/netns/70e5ba13-d184-45a4-8ee7-b884207de997" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=e1d18b29c373f72d603988d845ae08ae97cc1311c64a8632034fea2ed8975a6c;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"96:5e:c6:a4:78:3d\",\"name\":\"e1d18b29c373f72\"},{\"mac\":\"0a:58:0a:d9:00:12\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/70e5ba13-d184-45a4-8ee7-b884207de997\"}],\"ips\":[{\"address\":\"10.217.0.18/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.199155843+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request 
ContainerID:"7cf0ee4d5236aa6dd881539d55b05f0078214f881fc1ad15f9c6c675a3c91195" Netns:"/var/run/netns/1133d8c4-5632-4f6a-bbe9-90bfa087e702" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=7cf0ee4d5236aa6dd881539d55b05f0078214f881fc1ad15f9c6c675a3c91195;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:23:78:7a:af:ce\",\"name\":\"7cf0ee4d5236aa6\"},{\"mac\":\"0a:58:0a:d9:00:47\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1133d8c4-5632-4f6a-bbe9-90bfa087e702\"}],\"ips\":[{\"address\":\"10.217.0.71/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.205500853+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"ccdd6d0d771e91ed8692a1dd7e54c3af99da8a52d68b9c2b36c7ef954f5ff644" Netns:"/var/run/netns/5f44a199-41be-457a-b03a-55054033af6b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=ccdd6d0d771e91ed8692a1dd7e54c3af99da8a52d68b9c2b36c7ef954f5ff644;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:b5:70:b2:5d:92\",\"name\":\"ccdd6d0d771e91e\"},{\"mac\":\"0a:58:0a:d9:00:40\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5f44a199-41be-457a-b03a-55054033af6b\"}],\"ips\":[{\"address\":\"10.217.0.64/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.215934346+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"f4cda25e678299fa34789308b4e3059e4152f166f9fdff0bfbc1d70acca4c9b9" Netns:"/var/run/netns/f6f75773-ca2e-44b5-845e-40faa769a075" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=f4cda25e678299fa34789308b4e3059e4152f166f9fdff0bfbc1d70acca4c9b9;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"52:6f:e8:e8:ab:ca\",\"name\":\"f4cda25e678299f\"},{\"mac\":\"0a:58:0a:d9:00:19\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f6f75773-ca2e-44b5-845e-40faa769a075\"}],\"ips\":[{\"address\":\"10.217.0.25/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.262201682+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-776b8b7477-sfpvs:21d29937-debd-4407-b2b1-d1053cb0f342:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6c25dda9fb1b793","mac":"72:9d:bf:9b:d5:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:58","sandbox":"/var/run/netns/b67a8136-93d4-49d8-b8c3-b04abb4892bb"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.88/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.262713437+00:00 stderr F I1013 00:14:59.262672 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-776b8b7477-sfpvs", UID:"21d29937-debd-4407-b2b1-d1053cb0f342", APIVersion:"v1", ResourceVersion:"38175", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.88/23] from ovn-kubernetes 2025-10-13T00:14:59.280416748+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: 
openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"dbcfdcf39d73e75","mac":"da:b9:01:20:81:1e"},{"name":"eth0","mac":"0a:58:0a:d9:00:05","sandbox":"/var/run/netns/7e90e5c5-683f-4cfd-a226-801f5e6afe31"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.5/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.281305094+00:00 stderr F I1013 00:14:59.280437 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"machine-api-operator-788b7c6b6c-ctdmb", UID:"4f8aa612-9da0-4a2b-911e-6a1764a4e74e", APIVersion:"v1", ResourceVersion:"38147", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.5/23] from ovn-kubernetes 2025-10-13T00:14:59.285449948+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"6c25dda9fb1b793613efa02c00d87a63ca8fa89aa83d07d5bc0c899e48bed844" Netns:"/var/run/netns/b67a8136-93d4-49d8-b8c3-b04abb4892bb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=6c25dda9fb1b793613efa02c00d87a63ca8fa89aa83d07d5bc0c899e48bed844;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:9d:bf:9b:d5:42\",\"name\":\"6c25dda9fb1b793\"},{\"mac\":\"0a:58:0a:d9:00:58\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b67a8136-93d4-49d8-b8c3-b04abb4892bb\"}],\"ips\":[{\"address\":\"10.217.0.88/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.296473679+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"dbcfdcf39d73e75afa121773ea26907cfbf91a2b30f94b3266a3559b6f934930" Netns:"/var/run/netns/7e90e5c5-683f-4cfd-a226-801f5e6afe31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=dbcfdcf39d73e75afa121773ea26907cfbf91a2b30f94b3266a3559b6f934930;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"da:b9:01:20:81:1e\",\"name\":\"dbcfdcf39d73e75\"},{\"mac\":\"0a:58:0a:d9:00:05\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7e90e5c5-683f-4cfd-a226-801f5e6afe31\"}],\"ips\":[{\"address\":\"10.217.0.5/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.303962923+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"11b9dba771caca2","mac":"d6:b2:3e:a0:5c:89"},{"name":"eth0","mac":"0a:58:0a:d9:00:2d","sandbox":"/var/run/netns/5ed79c17-8a15-4717-afbd-0b50370396d8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.45/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.303962923+00:00 stderr F I1013 00:14:59.303559 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-operator", Name:"ingress-operator-7d46d5bb6d-rrg6t", UID:"7d51f445-054a-4e4f-a67b-a828f5a32511", APIVersion:"v1", ResourceVersion:"38169", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.45/23] from ovn-kubernetes 2025-10-13T00:14:59.331395435+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request 
ContainerID:"11b9dba771caca23066ec868af57f7f950d8c869bfb6523975c7b4a5d81fc5d7" Netns:"/var/run/netns/5ed79c17-8a15-4717-afbd-0b50370396d8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=11b9dba771caca23066ec868af57f7f950d8c869bfb6523975c7b4a5d81fc5d7;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:b2:3e:a0:5c:89\",\"name\":\"11b9dba771caca2\"},{\"mac\":\"0a:58:0a:d9:00:2d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5ed79c17-8a15-4717-afbd-0b50370396d8\"}],\"ips\":[{\"address\":\"10.217.0.45/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.406872496+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-apiserver:apiserver-7fc54b8dd7-d2bhp:41e8708a-e40d-4d28-846b-c52eda4d1755:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"e7ac480dfe62cae","mac":"4e:e4:64:32:27:e5"},{"name":"eth0","mac":"0a:58:0a:d9:00:52","sandbox":"/var/run/netns/77fa6bcf-a622-4adb-881d-257c4e896a83"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.82/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.407993950+00:00 stderr F I1013 00:14:59.407068 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"41e8708a-e40d-4d28-846b-c52eda4d1755", APIVersion:"v1", ResourceVersion:"38130", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.82/23] from ovn-kubernetes 2025-10-13T00:14:59.410988280+00:00 stderr P 2025-10-13T00:14:59Z [verbose] 2025-10-13T00:14:59.411017701+00:00 stderr P Add: openshift-console:console-644bb77b49-5x5xk:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ee196b19a43e4eb","mac":"32:a3:7d:85:b8:af"},{"name":"eth0","mac":"0a:58:0a:d9:00:49","sandbox":"/var/run/netns/7775cc96-e873-424b-9814-64b36c4f1135"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.73/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.411036711+00:00 stderr F 2025-10-13T00:14:59.411273078+00:00 stderr F I1013 00:14:59.411222 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-644bb77b49-5x5xk", UID:"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1", APIVersion:"v1", ResourceVersion:"38297", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.73/23] from ovn-kubernetes 2025-10-13T00:14:59.414802164+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6ba83a9d480b2ef","mac":"02:0d:9f:6c:62:46"},{"name":"eth0","mac":"0a:58:0a:d9:00:09","sandbox":"/var/run/netns/b3e44947-bfee-45f9-895c-fdc7695b013e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.9/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.414802164+00:00 stderr F I1013 00:14:59.412711 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-7978d7d7f6-2nt8z", UID:"0f394926-bdb9-425c-b36e-264d7fd34550", APIVersion:"v1", ResourceVersion:"38160", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.9/23] from ovn-kubernetes 
2025-10-13T00:14:59.425607698+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cfa8e7f575221b0","mac":"ca:48:f1:77:d0:4f"},{"name":"eth0","mac":"0a:58:0a:d9:00:17","sandbox":"/var/run/netns/7012c9e0-ed84-4f4d-a9e5-6afb1b2d8d2a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.23/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.425607698+00:00 stderr F I1013 00:14:59.425067 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-config-operator", Name:"openshift-config-operator-77658b5b66-dq5sc", UID:"530553aa-0a1d-423e-8a22-f5eb4bdbb883", APIVersion:"v1", ResourceVersion:"38193", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.23/23] from ovn-kubernetes 2025-10-13T00:14:59.433416772+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"e7ac480dfe62caeed3862b4bb64956c9f30afefc39567787161221dbb72343c9" Netns:"/var/run/netns/77fa6bcf-a622-4adb-881d-257c4e896a83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=e7ac480dfe62caeed3862b4bb64956c9f30afefc39567787161221dbb72343c9;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:e4:64:32:27:e5\",\"name\":\"e7ac480dfe62cae\"},{\"mac\":\"0a:58:0a:d9:00:52\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/77fa6bcf-a622-4adb-881d-257c4e896a83\"}],\"ips\":[{\"address\":\"10.217.0.82/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.441449662+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"6ba83a9d480b2ef0fcda3fb96c12a29723d0ab6a5036333de4bb6d096d943da9" Netns:"/var/run/netns/b3e44947-bfee-45f9-895c-fdc7695b013e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=6ba83a9d480b2ef0fcda3fb96c12a29723d0ab6a5036333de4bb6d096d943da9;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"02:0d:9f:6c:62:46\",\"name\":\"6ba83a9d480b2ef\"},{\"mac\":\"0a:58:0a:d9:00:09\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b3e44947-bfee-45f9-895c-fdc7695b013e\"}],\"ips\":[{\"address\":\"10.217.0.9/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.442449892+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"ee196b19a43e4eb50692046754dcef48f071504e6c652f0e9360a093d18b3055" Netns:"/var/run/netns/7775cc96-e873-424b-9814-64b36c4f1135" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=ee196b19a43e4eb50692046754dcef48f071504e6c652f0e9360a093d18b3055;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:a3:7d:85:b8:af\",\"name\":\"ee196b19a43e4eb\"},{\"mac\":\"0a:58:0a:d9:00:49\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7775cc96-e873-424b-9814-64b36c4f1135\"}],\"ips\":[{\"address\":\"10.217.0.73/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.445377150+00:00 stderr F 2025-10-13T00:14:59Z 
[verbose] ADD finished CNI request ContainerID:"cfa8e7f575221b0debff60b72fbd16f951e2851e3fb48b820af9d1ef0fafe70f" Netns:"/var/run/netns/7012c9e0-ed84-4f4d-a9e5-6afb1b2d8d2a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=cfa8e7f575221b0debff60b72fbd16f951e2851e3fb48b820af9d1ef0fafe70f;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ca:48:f1:77:d0:4f\",\"name\":\"cfa8e7f575221b0\"},{\"mac\":\"0a:58:0a:d9:00:17\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7012c9e0-ed84-4f4d-a9e5-6afb1b2d8d2a\"}],\"ips\":[{\"address\":\"10.217.0.23/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:14:59.454923986+00:00 stderr F 2025-10-13T00:14:59Z [verbose] Add: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7490d971711bdc6","mac":"36:13:da:ee:85:b4"},{"name":"eth0","mac":"0a:58:0a:d9:00:27","sandbox":"/var/run/netns/5ab3b636-7399-4edc-bedb-bc542dba5f85"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.39/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:14:59.454923986+00:00 stderr F I1013 00:14:59.452911 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"5bacb25d-97b6-4491-8fb4-99feae1d802a", APIVersion:"v1", ResourceVersion:"38285", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.39/23] from ovn-kubernetes 2025-10-13T00:14:59.478356138+00:00 stderr F 2025-10-13T00:14:59Z [verbose] ADD finished CNI request ContainerID:"7490d971711bdc6ed60e5e6732da3d92565e69a999adc828a74924573d594b36" Netns:"/var/run/netns/5ab3b636-7399-4edc-bedb-bc542dba5f85" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=7490d971711bdc6ed60e5e6732da3d92565e69a999adc828a74924573d594b36;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"36:13:da:ee:85:b4\",\"name\":\"7490d971711bdc6\"},{\"mac\":\"0a:58:0a:d9:00:27\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5ab3b636-7399-4edc-bedb-bc542dba5f85\"}],\"ips\":[{\"address\":\"10.217.0.39/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:15:58.105343971+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD starting CNI request ContainerID:"19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5" Netns:"/var/run/netns/1a8d0e15-a09c-40af-b118-f33cb25fee9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29338575-4qbqw;K8S_POD_INFRA_CONTAINER_ID=19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5;K8S_POD_UID=a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7" Path:"" 2025-10-13T00:15:58.123289287+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD starting CNI request ContainerID:"7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b" Netns:"/var/run/netns/83d6827e-4af4-4eaf-b7c0-b939e1093952" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29338560-zvlxb;K8S_POD_INFRA_CONTAINER_ID=7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b;K8S_POD_UID=3c48edc1-77de-4eaf-a099-5af630747311" Path:"" 2025-10-13T00:15:58.128508444+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD starting CNI request ContainerID:"eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489" Netns:"/var/run/netns/e802ef6c-dd3d-41ae-b94b-af332fdbf7d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-zqnwb;K8S_POD_INFRA_CONTAINER_ID=eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489;K8S_POD_UID=23ef6376-305c-4598-8ddc-06f5484a992b" Path:"" 2025-10-13T00:15:58.161768271+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD starting CNI request ContainerID:"0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe" Netns:"/var/run/netns/b341345b-1ecf-4fc8-ae05-5f8963b4d584" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-t4sr9;K8S_POD_INFRA_CONTAINER_ID=0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe;K8S_POD_UID=81744200-fe49-492b-a734-108101333ab9" Path:"" 2025-10-13T00:15:58.165150739+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD starting CNI request ContainerID:"51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460" Netns:"/var/run/netns/cba51e22-532a-46be-8b2d-9a42993d11e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jfjbq;K8S_POD_INFRA_CONTAINER_ID=51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460;K8S_POD_UID=bb7cedbc-6bdd-4b23-bf12-990820547803" Path:"" 2025-10-13T00:15:58.513012266+00:00 stderr F 2025-10-13T00:15:58Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29338575-4qbqw:a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"19354dfe5a9cab1","mac":"9e:78:f5:14:72:81"},{"name":"eth0","mac":"0a:58:0a:d9:00:1c","sandbox":"/var/run/netns/1a8d0e15-a09c-40af-b118-f33cb25fee9e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.28/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:15:58.513815722+00:00 stderr F I1013 00:15:58.513418 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29338575-4qbqw", UID:"a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7", APIVersion:"v1", ResourceVersion:"40984", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.28/23] from ovn-kubernetes 2025-10-13T00:15:58.527301874+00:00 stderr F 2025-10-13T00:15:58Z [verbose] Add: openshift-image-registry:image-pruner-29338560-zvlxb:3c48edc1-77de-4eaf-a099-5af630747311:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7ce782de167556e","mac":"e2:55:6d:b4:1a:01"},{"name":"eth0","mac":"0a:58:0a:d9:00:1b","sandbox":"/var/run/netns/83d6827e-4af4-4eaf-b7c0-b939e1093952"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.27/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:15:58.527488960+00:00 stderr F I1013 00:15:58.527442 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-pruner-29338560-zvlxb", UID:"3c48edc1-77de-4eaf-a099-5af630747311", APIVersion:"v1", ResourceVersion:"40985", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.27/23] 
from ovn-kubernetes 2025-10-13T00:15:58.532427909+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD finished CNI request ContainerID:"19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5" Netns:"/var/run/netns/1a8d0e15-a09c-40af-b118-f33cb25fee9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29338575-4qbqw;K8S_POD_INFRA_CONTAINER_ID=19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5;K8S_POD_UID=a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:78:f5:14:72:81\",\"name\":\"19354dfe5a9cab1\"},{\"mac\":\"0a:58:0a:d9:00:1c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1a8d0e15-a09c-40af-b118-f33cb25fee9e\"}],\"ips\":[{\"address\":\"10.217.0.28/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:15:58.540698104+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD finished CNI request ContainerID:"7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b" Netns:"/var/run/netns/83d6827e-4af4-4eaf-b7c0-b939e1093952" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29338560-zvlxb;K8S_POD_INFRA_CONTAINER_ID=7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b;K8S_POD_UID=3c48edc1-77de-4eaf-a099-5af630747311" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e2:55:6d:b4:1a:01\",\"name\":\"7ce782de167556e\"},{\"mac\":\"0a:58:0a:d9:00:1b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/83d6827e-4af4-4eaf-b7c0-b939e1093952\"}],\"ips\":[{\"address\":\"10.217.0.27/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:15:58.597621300+00:00 stderr F 2025-10-13T00:15:58Z [verbose] Add: openshift-marketplace:redhat-operators-t4sr9:81744200-fe49-492b-a734-108101333ab9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0c509528394e415","mac":"96:6d:87:f4:b0:68"},{"name":"eth0","mac":"0a:58:0a:d9:00:1a","sandbox":"/var/run/netns/b341345b-1ecf-4fc8-ae05-5f8963b4d584"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.26/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:15:58.597763564+00:00 stderr F I1013 00:15:58.597727 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-t4sr9", UID:"81744200-fe49-492b-a734-108101333ab9", APIVersion:"v1", ResourceVersion:"40983", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.26/23] from ovn-kubernetes 2025-10-13T00:15:58.602440774+00:00 stderr F 2025-10-13T00:15:58Z [verbose] Add: openshift-marketplace:redhat-marketplace-jfjbq:bb7cedbc-6bdd-4b23-bf12-990820547803:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51f92555bcee6cf","mac":"2e:0f:69:bf:d8:5b"},{"name":"eth0","mac":"0a:58:0a:d9:00:1d","sandbox":"/var/run/netns/cba51e22-532a-46be-8b2d-9a42993d11e6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.29/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:15:58.602440774+00:00 stderr F I1013 00:15:58.601714 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-jfjbq", UID:"bb7cedbc-6bdd-4b23-bf12-990820547803", APIVersion:"v1", ResourceVersion:"40987", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.29/23] from ovn-kubernetes 2025-10-13T00:15:58.604783159+00:00 stderr F 2025-10-13T00:15:58Z [verbose] 
Add: openshift-marketplace:certified-operators-zqnwb:23ef6376-305c-4598-8ddc-06f5484a992b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"eced94f822f9f5b","mac":"b2:d4:5a:d3:61:12"},{"name":"eth0","mac":"0a:58:0a:d9:00:11","sandbox":"/var/run/netns/e802ef6c-dd3d-41ae-b94b-af332fdbf7d5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.17/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:15:58.604923334+00:00 stderr F I1013 00:15:58.604888 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-zqnwb", UID:"23ef6376-305c-4598-8ddc-06f5484a992b", APIVersion:"v1", ResourceVersion:"40986", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.17/23] from ovn-kubernetes 2025-10-13T00:15:58.616456934+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD finished CNI request ContainerID:"0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe" Netns:"/var/run/netns/b341345b-1ecf-4fc8-ae05-5f8963b4d584" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-t4sr9;K8S_POD_INFRA_CONTAINER_ID=0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe;K8S_POD_UID=81744200-fe49-492b-a734-108101333ab9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"96:6d:87:f4:b0:68\",\"name\":\"0c509528394e415\"},{\"mac\":\"0a:58:0a:d9:00:1a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b341345b-1ecf-4fc8-ae05-5f8963b4d584\"}],\"ips\":[{\"address\":\"10.217.0.26/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:15:58.616456934+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD finished CNI request ContainerID:"51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460" Netns:"/var/run/netns/cba51e22-532a-46be-8b2d-9a42993d11e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jfjbq;K8S_POD_INFRA_CONTAINER_ID=51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460;K8S_POD_UID=bb7cedbc-6bdd-4b23-bf12-990820547803" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:0f:69:bf:d8:5b\",\"name\":\"51f92555bcee6cf\"},{\"mac\":\"0a:58:0a:d9:00:1d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/cba51e22-532a-46be-8b2d-9a42993d11e6\"}],\"ips\":[{\"address\":\"10.217.0.29/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:15:58.620217544+00:00 stderr F 2025-10-13T00:15:58Z [verbose] ADD finished CNI request ContainerID:"eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489" Netns:"/var/run/netns/e802ef6c-dd3d-41ae-b94b-af332fdbf7d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-zqnwb;K8S_POD_INFRA_CONTAINER_ID=eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489;K8S_POD_UID=23ef6376-305c-4598-8ddc-06f5484a992b" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:d4:5a:d3:61:12\",\"name\":\"eced94f822f9f5b\"},{\"mac\":\"0a:58:0a:d9:00:11\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e802ef6c-dd3d-41ae-b94b-af332fdbf7d5\"}],\"ips\":[{\"address\":\"10.217.0.17/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:05.272041078+00:00 stderr F 2025-10-13T00:16:05Z [verbose] DEL starting CNI request ContainerID:"19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5" 
Netns:"/var/run/netns/1a8d0e15-a09c-40af-b118-f33cb25fee9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29338575-4qbqw;K8S_POD_INFRA_CONTAINER_ID=19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5;K8S_POD_UID=a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7" Path:"" 2025-10-13T00:16:05.273362561+00:00 stderr F 2025-10-13T00:16:05Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29338575-4qbqw:a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:05.428287709+00:00 stderr F 2025-10-13T00:16:05Z [verbose] DEL finished CNI request ContainerID:"19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5" Netns:"/var/run/netns/1a8d0e15-a09c-40af-b118-f33cb25fee9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29338575-4qbqw;K8S_POD_INFRA_CONTAINER_ID=19354dfe5a9cab1dbf70ba04a420575a20b24379380a8e0ed945cdb2bb0ec8b5;K8S_POD_UID=a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7" Path:"", result: "", err: 2025-10-13T00:16:10.712674448+00:00 stderr F 2025-10-13T00:16:10Z [verbose] ADD starting CNI request ContainerID:"17fc394c1df425d78a4b515158a7703aca59d4dcf35673bbc00d878b4e48b8bb" Netns:"/var/run/netns/7267dc25-af14-4fd4-a8e6-616203f95312" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-29pzg;K8S_POD_INFRA_CONTAINER_ID=17fc394c1df425d78a4b515158a7703aca59d4dcf35673bbc00d878b4e48b8bb;K8S_POD_UID=c3d30d24-1dab-4362-a72b-dd6762f1f84c" Path:"" 2025-10-13T00:16:10.802753457+00:00 stderr F 2025-10-13T00:16:10Z [verbose] DEL starting CNI request ContainerID:"47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0" Netns:"/var/run/netns/37a56640-268c-4428-bea0-4c33015cb711" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-10-13T00:16:10.803319305+00:00 stderr F 2025-10-13T00:16:10Z [verbose] Del: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:10.835158546+00:00 stderr F 2025-10-13T00:16:10Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-29pzg:c3d30d24-1dab-4362-a72b-dd6762f1f84c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"17fc394c1df425d","mac":"5a:0e:a6:0f:b2:cf"},{"name":"eth0","mac":"0a:58:0a:d9:00:1e","sandbox":"/var/run/netns/7267dc25-af14-4fd4-a8e6-616203f95312"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.30/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:10.835285690+00:00 stderr F 2025-10-13T00:16:10Z [verbose] DEL starting CNI request ContainerID:"0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe" Netns:"/var/run/netns/b341345b-1ecf-4fc8-ae05-5f8963b4d584" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-t4sr9;K8S_POD_INFRA_CONTAINER_ID=0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe;K8S_POD_UID=81744200-fe49-492b-a734-108101333ab9" Path:"" 2025-10-13T00:16:10.835478347+00:00 stderr F 2025-10-13T00:16:10Z [verbose] Del: openshift-marketplace:redhat-operators-t4sr9:81744200-fe49-492b-a734-108101333ab9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:10.835628511+00:00 stderr F I1013 00:16:10.835588 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-29pzg", UID:"c3d30d24-1dab-4362-a72b-dd6762f1f84c", APIVersion:"v1", ResourceVersion:"41152", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.30/23] from ovn-kubernetes 2025-10-13T00:16:10.836241671+00:00 stderr F 2025-10-13T00:16:10Z [verbose] DEL starting CNI request ContainerID:"eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489" Netns:"/var/run/netns/e802ef6c-dd3d-41ae-b94b-af332fdbf7d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-zqnwb;K8S_POD_INFRA_CONTAINER_ID=eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489;K8S_POD_UID=23ef6376-305c-4598-8ddc-06f5484a992b" Path:"" 2025-10-13T00:16:10.836401316+00:00 stderr F 2025-10-13T00:16:10Z [verbose] Del: openshift-marketplace:certified-operators-zqnwb:23ef6376-305c-4598-8ddc-06f5484a992b:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:10.853310419+00:00 stderr F 2025-10-13T00:16:10Z [verbose] DEL starting CNI request ContainerID:"d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de" Netns:"/var/run/netns/79a07779-da1b-4cb8-8930-d0a3b0f90b1c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2025-10-13T00:16:10.853474504+00:00 stderr F 2025-10-13T00:16:10Z [verbose] Del: openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:10.855072185+00:00 stderr F 2025-10-13T00:16:10Z [verbose] ADD finished CNI request ContainerID:"17fc394c1df425d78a4b515158a7703aca59d4dcf35673bbc00d878b4e48b8bb" Netns:"/var/run/netns/7267dc25-af14-4fd4-a8e6-616203f95312" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-29pzg;K8S_POD_INFRA_CONTAINER_ID=17fc394c1df425d78a4b515158a7703aca59d4dcf35673bbc00d878b4e48b8bb;K8S_POD_UID=c3d30d24-1dab-4362-a72b-dd6762f1f84c" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:0e:a6:0f:b2:cf\",\"name\":\"17fc394c1df425d\"},{\"mac\":\"0a:58:0a:d9:00:1e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7267dc25-af14-4fd4-a8e6-616203f95312\"}],\"ips\":[{\"address\":\"10.217.0.30/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:11.011845843+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0" Netns:"/var/run/netns/37a56640-268c-4428-bea0-4c33015cb711" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=47567d080cdeebb36aca7c11604391fe2d702d8d766fc75ff4326ec21260fed0;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "", err: 2025-10-13T00:16:11.068768279+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489" Netns:"/var/run/netns/e802ef6c-dd3d-41ae-b94b-af332fdbf7d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-zqnwb;K8S_POD_INFRA_CONTAINER_ID=eced94f822f9f5b8257ef6f127d0e367212b3998443dc1d3284defcc21986489;K8S_POD_UID=23ef6376-305c-4598-8ddc-06f5484a992b" Path:"", result: "", err: 2025-10-13T00:16:11.069122870+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de" Netns:"/var/run/netns/79a07779-da1b-4cb8-8930-d0a3b0f90b1c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=d1d69cb9e0e74fcccf31a858d7094c25b4acfc96898e9fa8468e520d320161de;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "", err: 2025-10-13T00:16:11.079780262+00:00 stderr P 2025-10-13T00:16:11Z [verbose] 2025-10-13T00:16:11.081186467+00:00 stderr P DEL finished CNI request ContainerID:"0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe" Netns:"/var/run/netns/b341345b-1ecf-4fc8-ae05-5f8963b4d584" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-t4sr9;K8S_POD_INFRA_CONTAINER_ID=0c509528394e41548685bdb1ae75251ff867853e62266eabcf955f023f40d7fe;K8S_POD_UID=81744200-fe49-492b-a734-108101333ab9" Path:"", result: "", err: 2025-10-13T00:16:11.081219878+00:00 stderr F 2025-10-13T00:16:11.136170020+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL starting CNI request ContainerID:"a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076" Netns:"/var/run/netns/86366f39-6d78-47c4-bbbd-1e3665fad256" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-10-13T00:16:11.136338336+00:00 stderr F 2025-10-13T00:16:11Z [verbose] Del: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:11.236984114+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL starting CNI request 
ContainerID:"c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492" Netns:"/var/run/netns/1571eba1-7b2f-4e01-8fe5-9788bf88ad4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-10-13T00:16:11.237104667+00:00 stderr F 2025-10-13T00:16:11Z [verbose] Del: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:11.263065510+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL starting CNI request ContainerID:"51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460" Netns:"/var/run/netns/cba51e22-532a-46be-8b2d-9a42993d11e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jfjbq;K8S_POD_INFRA_CONTAINER_ID=51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460;K8S_POD_UID=bb7cedbc-6bdd-4b23-bf12-990820547803" Path:"" 2025-10-13T00:16:11.263183744+00:00 stderr F 2025-10-13T00:16:11Z [verbose] Del: openshift-marketplace:redhat-marketplace-jfjbq:bb7cedbc-6bdd-4b23-bf12-990820547803:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:11.272760961+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL starting CNI request ContainerID:"b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478" Netns:"/var/run/netns/dd217dce-f3ff-4b13-ad34-a4419572f865" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-10-13T00:16:11.273045230+00:00 stderr F 2025-10-13T00:16:11Z [verbose] Del: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:11.301854414+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076" Netns:"/var/run/netns/86366f39-6d78-47c4-bbbd-1e3665fad256" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=a96bb73ae8b83182ecc86f451f245418ff9ed2002118823f213e9e6e817cd076;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "", err: 2025-10-13T00:16:11.466539995+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492" Netns:"/var/run/netns/1571eba1-7b2f-4e01-8fe5-9788bf88ad4c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=c87c0bc9b6308254f7cd3a0ec9b9f547ee9244666c871a77bd0882eb8ca77492;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "", err: 2025-10-13T00:16:11.484981946+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478" Netns:"/var/run/netns/dd217dce-f3ff-4b13-ad34-a4419572f865" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=b0650787fa496e7f7c93c9c8068d6b94c48380380bcdb12383573850b5d1f478;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "", err: 2025-10-13T00:16:11.487460776+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460" Netns:"/var/run/netns/cba51e22-532a-46be-8b2d-9a42993d11e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jfjbq;K8S_POD_INFRA_CONTAINER_ID=51f92555bcee6cfab0e9a5ad144bed77a8070250578f915ba3d761562bc80460;K8S_POD_UID=bb7cedbc-6bdd-4b23-bf12-990820547803" Path:"", result: "", err: 2025-10-13T00:16:11.772843579+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL starting CNI request ContainerID:"51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550" Netns:"/var/run/netns/0fd608fc-d109-4bfe-a6b6-226ba1acfce2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-10-13T00:16:11.773108187+00:00 stderr F 2025-10-13T00:16:11Z [verbose] Del: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:16:11.923845871+00:00 stderr F 2025-10-13T00:16:11Z [verbose] DEL finished CNI request ContainerID:"51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550" Netns:"/var/run/netns/0fd608fc-d109-4bfe-a6b6-226ba1acfce2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=51e10eee9cf8d93964649b6ecde8d5be94ed8c89a5c05462ffdd8c4995e4b550;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "", err: 2025-10-13T00:16:16.399647967+00:00 stderr F 2025-10-13T00:16:16Z [verbose] ADD starting CNI request ContainerID:"3501d58989a45682cd681b1dbc41d52c8d20666cdf45530a7705326823bc3243" Netns:"/var/run/netns/b01366a1-0127-4481-ac53-92b7ef237d4d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-crk87;K8S_POD_INFRA_CONTAINER_ID=3501d58989a45682cd681b1dbc41d52c8d20666cdf45530a7705326823bc3243;K8S_POD_UID=a783d910-85f5-4f52-8831-6bae329a70fa" Path:"" 2025-10-13T00:16:16.514233272+00:00 stderr F 2025-10-13T00:16:16Z [verbose] Add: openshift-marketplace:redhat-marketplace-crk87:a783d910-85f5-4f52-8831-6bae329a70fa:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"3501d58989a4568","mac":"5e:67:d0:ce:e1:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:21","sandbox":"/var/run/netns/b01366a1-0127-4481-ac53-92b7ef237d4d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.33/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:16.514426839+00:00 stderr F I1013 00:16:16.514353 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-crk87", UID:"a783d910-85f5-4f52-8831-6bae329a70fa", APIVersion:"v1", ResourceVersion:"41207", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.33/23] from ovn-kubernetes 2025-10-13T00:16:16.526021610+00:00 stderr F 2025-10-13T00:16:16Z [verbose] ADD finished CNI request ContainerID:"3501d58989a45682cd681b1dbc41d52c8d20666cdf45530a7705326823bc3243" Netns:"/var/run/netns/b01366a1-0127-4481-ac53-92b7ef237d4d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-crk87;K8S_POD_INFRA_CONTAINER_ID=3501d58989a45682cd681b1dbc41d52c8d20666cdf45530a7705326823bc3243;K8S_POD_UID=a783d910-85f5-4f52-8831-6bae329a70fa" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:67:d0:ce:e1:94\",\"name\":\"3501d58989a4568\"},{\"mac\":\"0a:58:0a:d9:00:21\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b01366a1-0127-4481-ac53-92b7ef237d4d\"}],\"ips\":[{\"address\":\"10.217.0.33/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:24.051026059+00:00 stderr P 2025-10-13T00:16:24Z [verbose] 2025-10-13T00:16:24.051177814+00:00 stderr P ADD starting CNI request ContainerID:"bad13b08b2ffc1134bc6c090e8ae0348952e8231699a15567478d5789ab18986" Netns:"/var/run/netns/973f5e3a-e503-44de-8068-91a374b3da58" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-hkptr;K8S_POD_INFRA_CONTAINER_ID=bad13b08b2ffc1134bc6c090e8ae0348952e8231699a15567478d5789ab18986;K8S_POD_UID=d3fa047a-b670-4067-b07b-06d9a1d3dbb1" Path:"" 2025-10-13T00:16:24.051235706+00:00 stderr F 2025-10-13T00:16:24.231723044+00:00 stderr P 2025-10-13T00:16:24Z [verbose] 2025-10-13T00:16:24.231824438+00:00 stderr P Add: openshift-marketplace:redhat-operators-hkptr:d3fa047a-b670-4067-b07b-06d9a1d3dbb1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"bad13b08b2ffc11","mac":"2a:78:32:77:cf:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:22","sandbox":"/var/run/netns/973f5e3a-e503-44de-8068-91a374b3da58"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.34/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:24.231881870+00:00 stderr F 2025-10-13T00:16:24.232266652+00:00 stderr F I1013 00:16:24.232199 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-hkptr", UID:"d3fa047a-b670-4067-b07b-06d9a1d3dbb1", APIVersion:"v1", ResourceVersion:"41250", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.34/23] from ovn-kubernetes 2025-10-13T00:16:24.247194711+00:00 stderr F 2025-10-13T00:16:24Z [verbose] ADD finished CNI request ContainerID:"bad13b08b2ffc1134bc6c090e8ae0348952e8231699a15567478d5789ab18986" Netns:"/var/run/netns/973f5e3a-e503-44de-8068-91a374b3da58" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-hkptr;K8S_POD_INFRA_CONTAINER_ID=bad13b08b2ffc1134bc6c090e8ae0348952e8231699a15567478d5789ab18986;K8S_POD_UID=d3fa047a-b670-4067-b07b-06d9a1d3dbb1" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:78:32:77:cf:94\",\"name\":\"bad13b08b2ffc11\"},{\"mac\":\"0a:58:0a:d9:00:22\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/973f5e3a-e503-44de-8068-91a374b3da58\"}],\"ips\":[{\"address\":\"10.217.0.34/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:27.611287072+00:00 stderr F 2025-10-13T00:16:27Z [verbose] ADD starting CNI request ContainerID:"c760791ee025653e95113542edf976fa192de5c9a3f734918efd5eb655624e35" Netns:"/var/run/netns/7bef4703-4c6d-45cd-8dc3-0053da4759e9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-gjctm;K8S_POD_INFRA_CONTAINER_ID=c760791ee025653e95113542edf976fa192de5c9a3f734918efd5eb655624e35;K8S_POD_UID=49cd5dc0-c0e0-4199-93cd-8637bea2739a" Path:"" 2025-10-13T00:16:27.850232536+00:00 stderr F 2025-10-13T00:16:27Z [verbose] ADD starting CNI request ContainerID:"545c8bbc6de04c8c1a1faecc1b5c94d7d05364bdddf63229abf7381703d700c7" Netns:"/var/run/netns/0b1dd775-9b66-486a-a97b-dd9e5339c953" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cms8q;K8S_POD_INFRA_CONTAINER_ID=545c8bbc6de04c8c1a1faecc1b5c94d7d05364bdddf63229abf7381703d700c7;K8S_POD_UID=c8f142c0-dc2a-4213-882f-919da8583b03" Path:"" 2025-10-13T00:16:27.937699941+00:00 stderr F 2025-10-13T00:16:27Z [verbose] Add: openshift-marketplace:community-operators-gjctm:49cd5dc0-c0e0-4199-93cd-8637bea2739a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c760791ee025653","mac":"16:f4:56:e4:d8:31"},{"name":"eth0","mac":"0a:58:0a:d9:00:23","sandbox":"/var/run/netns/7bef4703-4c6d-45cd-8dc3-0053da4759e9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.35/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:27.937928378+00:00 stderr F I1013 00:16:27.937884 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-gjctm", UID:"49cd5dc0-c0e0-4199-93cd-8637bea2739a", APIVersion:"v1", ResourceVersion:"41271", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.35/23] from ovn-kubernetes 2025-10-13T00:16:27.956833255+00:00 stderr F 2025-10-13T00:16:27Z [verbose] ADD finished CNI request ContainerID:"c760791ee025653e95113542edf976fa192de5c9a3f734918efd5eb655624e35" Netns:"/var/run/netns/7bef4703-4c6d-45cd-8dc3-0053da4759e9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-gjctm;K8S_POD_INFRA_CONTAINER_ID=c760791ee025653e95113542edf976fa192de5c9a3f734918efd5eb655624e35;K8S_POD_UID=49cd5dc0-c0e0-4199-93cd-8637bea2739a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:f4:56:e4:d8:31\",\"name\":\"c760791ee025653\"},{\"mac\":\"0a:58:0a:d9:00:23\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7bef4703-4c6d-45cd-8dc3-0053da4759e9\"}],\"ips\":[{\"address\":\"10.217.0.35/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:28.235909925+00:00 stderr F 2025-10-13T00:16:28Z [verbose] Add: openshift-marketplace:certified-operators-cms8q:c8f142c0-dc2a-4213-882f-919da8583b03:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"545c8bbc6de04c8","mac":"f6:b5:1e:d6:46:27"},{"name":"eth0","mac":"0a:58:0a:d9:00:24","sandbox":"/var/run/netns/0b1dd775-9b66-486a-a97b-dd9e5339c953"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.36/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:28.236103551+00:00 stderr F I1013 00:16:28.236056 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-cms8q", UID:"c8f142c0-dc2a-4213-882f-919da8583b03", APIVersion:"v1", ResourceVersion:"41277", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.36/23] from ovn-kubernetes 2025-10-13T00:16:28.251421783+00:00 stderr F 2025-10-13T00:16:28Z [verbose] ADD finished CNI request ContainerID:"545c8bbc6de04c8c1a1faecc1b5c94d7d05364bdddf63229abf7381703d700c7" Netns:"/var/run/netns/0b1dd775-9b66-486a-a97b-dd9e5339c953" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cms8q;K8S_POD_INFRA_CONTAINER_ID=545c8bbc6de04c8c1a1faecc1b5c94d7d05364bdddf63229abf7381703d700c7;K8S_POD_UID=c8f142c0-dc2a-4213-882f-919da8583b03" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f6:b5:1e:d6:46:27\",\"name\":\"545c8bbc6de04c8\"},{\"mac\":\"0a:58:0a:d9:00:24\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0b1dd775-9b66-486a-a97b-dd9e5339c953\"}],\"ips\":[{\"address\":\"10.217.0.36/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:29.432024616+00:00 stderr F 2025-10-13T00:16:29Z [verbose] ADD starting CNI request ContainerID:"78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5" Netns:"/var/run/netns/b53856a5-59f8-448f-b9c2-b64c0455a8c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-wswq5;K8S_POD_INFRA_CONTAINER_ID=78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5;K8S_POD_UID=82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e" Path:"" 2025-10-13T00:16:29.624449717+00:00 stderr F 2025-10-13T00:16:29Z [verbose] Add: openshift-marketplace:community-operators-wswq5:82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"78e7501285d0726","mac":"d6:d9:a5:db:54:5c"},{"name":"eth0","mac":"0a:58:0a:d9:00:25","sandbox":"/var/run/netns/b53856a5-59f8-448f-b9c2-b64c0455a8c4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.37/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:29.624637923+00:00 stderr F I1013 00:16:29.624567 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-wswq5", UID:"82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e", APIVersion:"v1", ResourceVersion:"41311", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.37/23] from ovn-kubernetes 2025-10-13T00:16:29.644099117+00:00 stderr F 2025-10-13T00:16:29Z [verbose] ADD finished CNI request ContainerID:"78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5" Netns:"/var/run/netns/b53856a5-59f8-448f-b9c2-b64c0455a8c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-wswq5;K8S_POD_INFRA_CONTAINER_ID=78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5;K8S_POD_UID=82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:d9:a5:db:54:5c\",\"name\":\"78e7501285d0726\"},{\"mac\":\"0a:58:0a:d9:00:25\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b53856a5-59f8-448f-b9c2-b64c0455a8c4\"}],\"ips\":[{\"address\":\"10.217.0.37/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:16:58.952929181+00:00 stderr F 2025-10-13T00:16:58Z [verbose] ADD starting CNI request ContainerID:"f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3" Netns:"/var/run/netns/01dbefc0-3d82-4f6a-a30a-07c327b27d7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-10-13T00:16:59.293724196+00:00 stderr F 2025-10-13T00:16:59Z [verbose] Add: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f1d6a24636333a6","mac":"32:5d:26:0c:4f:46"},{"name":"eth0","mac":"0a:58:0a:d9:00:3b","sandbox":"/var/run/netns/01dbefc0-3d82-4f6a-a30a-07c327b27d7a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.59/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:16:59.295685937+00:00 stderr F I1013 00:16:59.293907 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75779c45fd-v2j2v", UID:"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319", APIVersion:"v1", ResourceVersion:"38255", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.59/23] from ovn-kubernetes 2025-10-13T00:16:59.311611990+00:00 stderr F 2025-10-13T00:16:59Z [verbose] ADD finished CNI request ContainerID:"f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3" Netns:"/var/run/netns/01dbefc0-3d82-4f6a-a30a-07c327b27d7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:5d:26:0c:4f:46\",\"name\":\"f1d6a24636333a6\"},{\"mac\":\"0a:58:0a:d9:00:3b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/01dbefc0-3d82-4f6a-a30a-07c327b27d7a\"}],\"ips\":[{\"address\":\"10.217.0.59/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:17:27.923446183+00:00 stderr F 2025-10-13T00:17:27Z [verbose] DEL starting CNI request ContainerID:"78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5" Netns:"/var/run/netns/b53856a5-59f8-448f-b9c2-b64c0455a8c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-wswq5;K8S_POD_INFRA_CONTAINER_ID=78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5;K8S_POD_UID=82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e" Path:"" 2025-10-13T00:17:27.924099813+00:00 stderr F 2025-10-13T00:17:27Z [verbose] Del: openshift-marketplace:community-operators-wswq5:82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 
2025-10-13T00:17:28.063798550+00:00 stderr F 2025-10-13T00:17:28Z [verbose] DEL finished CNI request ContainerID:"78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5" Netns:"/var/run/netns/b53856a5-59f8-448f-b9c2-b64c0455a8c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-wswq5;K8S_POD_INFRA_CONTAINER_ID=78e7501285d0726654024a6d615f8a946f75ff9b8a0b62ace38b4ac07d3d8ba5;K8S_POD_UID=82cb0b2f-5c2b-40ed-b730-44f89e9f3d3e" Path:"", result: "", err: 2025-10-13T00:17:36.927052836+00:00 stderr F 2025-10-13T00:17:36Z [verbose] DEL starting CNI request ContainerID:"7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b" Netns:"/var/run/netns/83d6827e-4af4-4eaf-b7c0-b939e1093952" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29338560-zvlxb;K8S_POD_INFRA_CONTAINER_ID=7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b;K8S_POD_UID=3c48edc1-77de-4eaf-a099-5af630747311" Path:"" 2025-10-13T00:17:36.927896273+00:00 stderr F 2025-10-13T00:17:36Z [verbose] Del: openshift-image-registry:image-pruner-29338560-zvlxb:3c48edc1-77de-4eaf-a099-5af630747311:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:17:37.126627667+00:00 stderr F 2025-10-13T00:17:37Z [verbose] DEL finished CNI request ContainerID:"7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b" Netns:"/var/run/netns/83d6827e-4af4-4eaf-b7c0-b939e1093952" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29338560-zvlxb;K8S_POD_INFRA_CONTAINER_ID=7ce782de167556e349446f72c289a5fd60cf25547919c0ca6c887aa167ebc53b;K8S_POD_UID=3c48edc1-77de-4eaf-a099-5af630747311" Path:"", result: "", err: 2025-10-13T00:19:39.275560049+00:00 stderr F 2025-10-13T00:19:39Z [verbose] ADD starting CNI request ContainerID:"42c0564c08abdaba52643e36bbec5511cd018c52f9d0068e25ee03f41e52ffc2" Netns:"/var/run/netns/414a4e9f-0035-4d6e-84ce-4c6be0b4ca1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75b7bb6564-2mwg6;K8S_POD_INFRA_CONTAINER_ID=42c0564c08abdaba52643e36bbec5511cd018c52f9d0068e25ee03f41e52ffc2;K8S_POD_UID=fe9b4942-29e7-4ef1-85c7-1a2153128dc7" Path:"" 2025-10-13T00:19:39.406742261+00:00 stderr F 2025-10-13T00:19:39Z [verbose] Add: openshift-image-registry:image-registry-75b7bb6564-2mwg6:fe9b4942-29e7-4ef1-85c7-1a2153128dc7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"42c0564c08abdab","mac":"a2:0a:4b:06:8c:93"},{"name":"eth0","mac":"0a:58:0a:d9:00:26","sandbox":"/var/run/netns/414a4e9f-0035-4d6e-84ce-4c6be0b4ca1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.38/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:19:39.406950877+00:00 stderr F I1013 00:19:39.406896 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75b7bb6564-2mwg6", UID:"fe9b4942-29e7-4ef1-85c7-1a2153128dc7", APIVersion:"v1", ResourceVersion:"41821", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.38/23] from ovn-kubernetes 2025-10-13T00:19:39.419577722+00:00 stderr F 2025-10-13T00:19:39Z [verbose] ADD finished CNI request 
ContainerID:"42c0564c08abdaba52643e36bbec5511cd018c52f9d0068e25ee03f41e52ffc2" Netns:"/var/run/netns/414a4e9f-0035-4d6e-84ce-4c6be0b4ca1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75b7bb6564-2mwg6;K8S_POD_INFRA_CONTAINER_ID=42c0564c08abdaba52643e36bbec5511cd018c52f9d0068e25ee03f41e52ffc2;K8S_POD_UID=fe9b4942-29e7-4ef1-85c7-1a2153128dc7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:0a:4b:06:8c:93\",\"name\":\"42c0564c08abdab\"},{\"mac\":\"0a:58:0a:d9:00:26\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/414a4e9f-0035-4d6e-84ce-4c6be0b4ca1d\"}],\"ips\":[{\"address\":\"10.217.0.38/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:20:25.068727708+00:00 stderr F 2025-10-13T00:20:25Z [verbose] DEL starting CNI request ContainerID:"f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3" Netns:"/var/run/netns/01dbefc0-3d82-4f6a-a30a-07c327b27d7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-10-13T00:20:25.069451508+00:00 stderr F 2025-10-13T00:20:25Z [verbose] Del: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-10-13T00:20:25.219033446+00:00 stderr F 2025-10-13T00:20:25Z [verbose] DEL finished CNI request ContainerID:"f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3" Netns:"/var/run/netns/01dbefc0-3d82-4f6a-a30a-07c327b27d7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f1d6a24636333a65f703046367e66a01fce80c197f53c117bef8c92ebbf880d3;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "", err: 2025-10-13T00:21:30.924155354+00:00 stderr P 2025-10-13T00:21:30Z [verbose] 2025-10-13T00:21:30.924203046+00:00 stderr P ADD starting CNI request ContainerID:"9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553" Netns:"/var/run/netns/715424a0-58a6-4096-aed0-1ac89d7a5b36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553;K8S_POD_UID=b0a4ec02-9b6b-400a-9633-c11280799f07" Path:"" 2025-10-13T00:21:30.924222306+00:00 stderr F 2025-10-13T00:21:31.040045341+00:00 stderr P 2025-10-13T00:21:31Z [verbose] 2025-10-13T00:21:31.040096502+00:00 stderr P Add: openshift-kube-apiserver:installer-13-crc:b0a4ec02-9b6b-400a-9633-c11280799f07:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9eac3f19b241539","mac":"ae:9d:41:6c:3f:92"},{"name":"eth0","mac":"0a:58:0a:d9:00:29","sandbox":"/var/run/netns/715424a0-58a6-4096-aed0-1ac89d7a5b36"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.41/23","gateway":"10.217.0.1"}],"dns":{}} 2025-10-13T00:21:31.040115913+00:00 stderr F 2025-10-13T00:21:31.040396380+00:00 stderr F I1013 00:21:31.040349 7626 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-kube-apiserver", Name:"installer-13-crc", UID:"b0a4ec02-9b6b-400a-9633-c11280799f07", APIVersion:"v1", ResourceVersion:"42612", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.41/23] from ovn-kubernetes 2025-10-13T00:21:31.050401879+00:00 stderr F 2025-10-13T00:21:31Z [verbose] ADD finished CNI request ContainerID:"9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553" Netns:"/var/run/netns/715424a0-58a6-4096-aed0-1ac89d7a5b36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=9eac3f19b241539850f2c1a7ddfdc0d2e5f4b6cb5bd6eb544b8f932044f8b553;K8S_POD_UID=b0a4ec02-9b6b-400a-9633-c11280799f07" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:9d:41:6c:3f:92\",\"name\":\"9eac3f19b241539\"},{\"mac\":\"0a:58:0a:d9:00:29\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/715424a0-58a6-4096-aed0-1ac89d7a5b36\"}],\"ips\":[{\"address\":\"10.217.0.41/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-10-13T00:21:34.962525094+00:00 stderr F 2025-10-13T00:21:34Z [verbose] readiness indicator file is gone. restart multus-daemon ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043232033076 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043232033076 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015073043232033066 0ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043232033076 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015073043232033103 0ustar zuulzuul2025-10-13T00:16:38.409979963+00:00 stderr F time="2025-10-13T00:16:38Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2025-10-13T00:16:39.827997851+00:00 stderr F time="2025-10-13T00:16:39Z" level=info 
msg="serving registry" configs=/extracted-catalog/catalog port=50051 2025-10-13T00:16:39.827997851+00:00 stderr F time="2025-10-13T00:16:39Z" level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015073043232033076 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015073043232033066 0ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043234033055 5ustar zuulzuul././@LongLink0000644000000000000000000000036200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015073043234033055 5ustar zuulzuul././@LongLink0000644000000000000000000000036700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000024345315073043234033072 0ustar zuulzuul2025-10-13T00:14:59.904571997+00:00 stderr F I1013 00:14:59.903075 1 cmd.go:240] Using service-serving-cert provided certificates 2025-10-13T00:14:59.904571997+00:00 stderr F I1013 00:14:59.903219 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-10-13T00:14:59.904571997+00:00 stderr F I1013 00:14:59.903707 1 observer_polling.go:159] Starting file observer 2025-10-13T00:14:59.929519435+00:00 stderr F I1013 00:14:59.928927 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-10-13T00:15:00.398378913+00:00 stderr F I1013 00:15:00.398272 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:00.398378913+00:00 stderr F W1013 00:15:00.398319 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-10-13T00:15:00.398378913+00:00 stderr F W1013 00:15:00.398345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:00.404571478+00:00 stderr F I1013 00:15:00.404493 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-10-13T00:15:00.404882368+00:00 stderr F I1013 00:15:00.404858 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 2025-10-13T00:15:00.414419503+00:00 stderr F I1013 00:15:00.414294 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415623 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415657 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415694 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415816 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415923 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415928 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415940 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:00.415998021+00:00 stderr F I1013 00:15:00.415944 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:00.516556484+00:00 stderr F I1013 00:15:00.515827 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:00.516625956+00:00 stderr F I1013 00:15:00.516414 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:00.516636976+00:00 stderr F I1013 00:15:00.516430 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:21:02.075002463+00:00 stderr F I1013 00:21:02.074259 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-10-13T00:21:02.075002463+00:00 stderr F I1013 00:21:02.074442 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42131", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_6918ec9f-49ce-4ef8-b8f4-af79d3c57539 became leader 2025-10-13T00:21:02.082863453+00:00 stderr F I1013 00:21:02.082774 1 simple_featuregate_reader.go:171] Starting 
feature-gate-detector 2025-10-13T00:21:02.085423185+00:00 stderr F I1013 00:21:02.085282 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-10-13T00:21:02.085423185+00:00 stderr F I1013 00:21:02.085286 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild 
OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-10-13T00:21:02.103153242+00:00 stderr F I1013 00:21:02.103075 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController 2025-10-13T00:21:02.103296847+00:00 stderr F I1013 00:21:02.103269 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-10-13T00:21:02.103546504+00:00 stderr F I1013 00:21:02.103509 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-10-13T00:21:02.103546504+00:00 stderr F I1013 00:21:02.103541 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-10-13T00:21:02.103572574+00:00 stderr F I1013 00:21:02.103559 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-10-13T00:21:02.104064138+00:00 stderr F I1013 00:21:02.104028 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-10-13T00:21:02.104064138+00:00 stderr F I1013 00:21:02.104044 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-10-13T00:21:02.104155861+00:00 stderr F I1013 00:21:02.104113 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-10-13T00:21:02.104155861+00:00 stderr F I1013 00:21:02.104145 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager 2025-10-13T00:21:02.104198002+00:00 stderr F I1013 00:21:02.104174 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController 2025-10-13T00:21:02.104417168+00:00 stderr F I1013 00:21:02.104384 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-10-13T00:21:02.104494550+00:00 stderr F I1013 00:21:02.104473 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-10-13T00:21:02.104562962+00:00 stderr F I1013 00:21:02.104544 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-10-13T00:21:02.104618234+00:00 stderr F I1013 00:21:02.104601 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-10-13T00:21:02.104895681+00:00 stderr F I1013 00:21:02.104869 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-10-13T00:21:02.104949963+00:00 stderr F I1013 00:21:02.104923 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-10-13T00:21:02.104964173+00:00 stderr F I1013 00:21:02.104609 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-10-13T00:21:02.105001284+00:00 stderr F I1013 00:21:02.104905 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-10-13T00:21:02.105001284+00:00 stderr F I1013 00:21:02.104913 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-10-13T00:21:02.105045326+00:00 stderr F I1013 00:21:02.104872 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-10-13T00:21:02.204564918+00:00 stderr F I1013 00:21:02.204493 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager 
2025-10-13T00:21:02.204564918+00:00 stderr F I1013 00:21:02.204530 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ...
2025-10-13T00:21:02.205452033+00:00 stderr F I1013 00:21:02.205413 1 base_controller.go:73] Caches are synced for PruneController
2025-10-13T00:21:02.205452033+00:00 stderr F I1013 00:21:02.205428 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-10-13T00:21:02.205476864+00:00 stderr F I1013 00:21:02.205455 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-10-13T00:21:02.205476864+00:00 stderr F I1013 00:21:02.205463 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-10-13T00:21:02.205530695+00:00 stderr F I1013 00:21:02.205502 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-10-13T00:21:02.205530695+00:00 stderr F I1013 00:21:02.205513 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-10-13T00:21:02.205628268+00:00 stderr F I1013 00:21:02.205592 1 base_controller.go:73] Caches are synced for InstallerController
2025-10-13T00:21:02.205628268+00:00 stderr F I1013 00:21:02.205609 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-10-13T00:21:02.205628268+00:00 stderr F I1013 00:21:02.205621 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-10-13T00:21:02.205637468+00:00 stderr F I1013 00:21:02.205589 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-10-13T00:21:02.205664229+00:00 stderr F I1013 00:21:02.205643 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-10-13T00:21:02.205688950+00:00 stderr F I1013 00:21:02.205623 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-10-13T00:21:02.206476542+00:00 stderr F I1013 00:21:02.206429 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.206592065+00:00 stderr F E1013 00:21:02.206561 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.213498359+00:00 stderr F E1013 00:21:02.213237 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.224384494+00:00 stderr F E1013 00:21:02.224342 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.225453714+00:00 stderr F E1013 00:21:02.225395 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.228392767+00:00 stderr F E1013 00:21:02.227754 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.228392767+00:00 stderr F I1013 00:21:02.227795 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.228708056+00:00 stderr F I1013 00:21:02.228670 1 status_controller.go:218] clusteroperator/kube-controller-manager diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:02.229702583+00:00 stderr F E1013 00:21:02.229666 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.231096053+00:00 stderr F I1013 00:21:02.231052 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.231257647+00:00 stderr F E1013 00:21:02.231228 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.235561258+00:00 stderr F I1013 00:21:02.234184 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, 
configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11]" 2025-10-13T00:21:02.245462886+00:00 stderr F E1013 00:21:02.245416 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.252483353+00:00 stderr F E1013 00:21:02.252439 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.252524794+00:00 stderr F I1013 00:21:02.252475 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.293320639+00:00 stderr F I1013 00:21:02.293244 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.293382970+00:00 stderr F E1013 00:21:02.293313 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.298818073+00:00 stderr F I1013 00:21:02.298756 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:21:02.305106529+00:00 stderr F I1013 00:21:02.305056 1 base_controller.go:73] Caches are synced for BackingResourceController 
2025-10-13T00:21:02.305106529+00:00 stderr F I1013 00:21:02.305093 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-10-13T00:21:02.327243450+00:00 stderr F E1013 00:21:02.327187 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.374473726+00:00 stderr F I1013 00:21:02.374395 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.374520107+00:00 stderr F E1013 00:21:02.374476 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.488071663+00:00 stderr F E1013 00:21:02.488021 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.499910095+00:00 stderr F I1013 00:21:02.499870 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:21:02.536023739+00:00 stderr F I1013 00:21:02.535927 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.536137712+00:00 stderr F E1013 00:21:02.536087 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.699192807+00:00 stderr F I1013 00:21:02.698965 1 reflector.go:351] Caches 
populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:21:02.809820891+00:00 stderr F E1013 00:21:02.809754 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-11" not found 2025-10-13T00:21:02.857611063+00:00 stderr F I1013 00:21:02.857544 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11 2025-10-13T00:21:02.857802728+00:00 stderr F E1013 00:21:02.857770 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11] 2025-10-13T00:21:02.900349692+00:00 stderr F I1013 00:21:02.900263 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:21:03.107246417+00:00 stderr F I1013 00:21:03.107133 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:21:03.314552034+00:00 stderr F I1013 00:21:03.314467 1 request.go:697] Waited for 1.208960993s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0 2025-10-13T00:21:03.328008422+00:00 stderr F I1013 00:21:03.327941 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:21:03.403922102+00:00 stderr F I1013 00:21:03.403861 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-10-13T00:21:03.403922102+00:00 stderr F I1013 00:21:03.403896 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-10-13T00:21:03.404023615+00:00 stderr F I1013 00:21:03.403982 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-10-13T00:21:03.404023615+00:00 stderr F I1013 00:21:03.404007 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-10-13T00:21:03.404835828+00:00 stderr F I1013 00:21:03.404792 1 base_controller.go:73] Caches are synced for RevisionController 2025-10-13T00:21:03.404835828+00:00 stderr F I1013 00:21:03.404821 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 
2025-10-13T00:21:03.405071174+00:00 stderr F I1013 00:21:03.405049 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources
2025-10-13T00:21:03.405071174+00:00 stderr F I1013 00:21:03.405061 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ...
2025-10-13T00:21:03.499600707+00:00 stderr F I1013 00:21:03.499545 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-10-13T00:21:03.505018669+00:00 stderr F I1013 00:21:03.504978 1 base_controller.go:73] Caches are synced for GuardController
2025-10-13T00:21:03.505018669+00:00 stderr F I1013 00:21:03.504998 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-10-13T00:21:03.505018669+00:00 stderr F I1013 00:21:03.505007 1 base_controller.go:73] Caches are synced for NodeController
2025-10-13T00:21:03.505034419+00:00 stderr F I1013 00:21:03.505022 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-10-13T00:21:03.699781454+00:00 stderr F I1013 00:21:03.699694 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-10-13T00:21:03.899907990+00:00 stderr F I1013 00:21:03.899810 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-10-13T00:21:04.098873643+00:00 stderr F I1013 00:21:04.098815 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-10-13T00:21:04.104530302+00:00 stderr F I1013 00:21:04.104477 1 base_controller.go:73] Caches are synced for CertRotationController
2025-10-13T00:21:04.104530302+00:00 stderr F I1013 00:21:04.104497 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-10-13T00:21:04.105375865+00:00 stderr F I1013 00:21:04.105344 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TargetUpdateRequired' "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: past its refresh time 2025-06-27 13:05:19 +0000 UTC
2025-10-13T00:21:04.314287067+00:00 stderr F I1013 00:21:04.314205 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-10-13T00:21:04.403266654+00:00 stderr F I1013 00:21:04.403196 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController
2025-10-13T00:21:04.403266654+00:00 stderr F I1013 00:21:04.403228 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ...
2025-10-13T00:21:04.403694276+00:00 stderr F I1013 00:21:04.403666 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-10-13T00:21:04.403694276+00:00 stderr F I1013 00:21:04.403679 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-10-13T00:21:04.404495169+00:00 stderr F I1013 00:21:04.404468 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-10-13T00:21:04.404495169+00:00 stderr F I1013 00:21:04.404479 1 base_controller.go:73] Caches are synced for SATokenSignerController 2025-10-13T00:21:04.404527240+00:00 stderr F I1013 00:21:04.404491 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-10-13T00:21:04.404527240+00:00 stderr F I1013 00:21:04.404519 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-10-13T00:21:04.404560251+00:00 stderr F I1013 00:21:04.404502 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2025-10-13T00:21:04.404568881+00:00 stderr F I1013 00:21:04.404486 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-10-13T00:21:04.497754696+00:00 stderr F I1013 00:21:04.497690 1 request.go:697] Waited for 2.291771458s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-10-13T00:21:05.697546071+00:00 stderr F I1013 00:21:05.697460 1 request.go:697] Waited for 1.465440909s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets/csr-signer 2025-10-13T00:21:05.702822629+00:00 stderr F I1013 00:21:05.702708 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager-operator because it changed 2025-10-13T00:21:06.725744973+00:00 stderr F I1013 00:21:06.725633 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:06.738973224+00:00 stderr F I1013 00:21:06.738868 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: 
cluster-policy-controller-config-11,config-11,controller-manager-kubeconfig-11,kube-controller-cert-syncer-kubeconfig-11,kube-controller-manager-pod-11,recycler-config-11,service-ca-11,serviceaccount-ca-11, secrets: localhost-recovery-client-token-11,service-account-private-key-11]" to "NodeControllerDegraded: All master nodes are ready" 2025-10-13T00:21:06.900958950+00:00 stderr F I1013 00:21:06.900883 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2025-10-13T00:21:07.897830792+00:00 stderr F I1013 00:21:07.897749 1 request.go:697] Waited for 1.174557328s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-10-13T00:21:10.105632217+00:00 stderr F I1013 00:21:10.105550 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-signer-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"}} 2025-10-13T00:21:10.105843093+00:00 stderr F I1013 00:21:10.105810 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator: 2025-10-13T00:21:10.105843093+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:21:10.706795564+00:00 stderr F I1013 00:21:10.706077 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-10-13T00:21:10.707679927+00:00 stderr F I1013 00:21:10.707613 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: 2025-10-13T00:21:10.707679927+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:21:11.100571853+00:00 stderr F I1013 00:21:11.100433 1 core.go:359] ConfigMap "openshift-config-managed/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:13Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-10-13T00:21:10Z"}],"resourceVersion":null,"uid":"4aabbce1-72f4-478a-b382-9ed7c988ad76"}} 2025-10-13T00:21:11.101644112+00:00 stderr F I1013 00:21:11.101531 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-ca -n openshift-config-managed: Operation cannot be fulfilled on configmaps "csr-controller-ca": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:21:11.116178793+00:00 stderr F I1013 00:21:11.116101 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:11.129612414+00:00 stderr F I1013 00:21:11.127884 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-10-13T00:21:11.139012607+00:00 stderr F I1013 00:21:11.138945 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-10-13T00:21:11.149116589+00:00 stderr F I1013 00:21:11.147253 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.936899 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.936857713 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.936937 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.936921175 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.936959 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.936943105 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.936980 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.936964366 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937003 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.936986346 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937023 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937007627 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937043 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937027808 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937064 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937047668 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937089 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.937068459 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937125 1 tlsconfig.go:178] "Loaded client CA" 
index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.937097119 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937148 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.93713103 +0000 UTC))" 2025-10-13T00:21:11.937419598+00:00 stderr F I1013 00:21:11.937172 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.937152961 +0000 UTC))" 2025-10-13T00:21:11.938883968+00:00 stderr F I1013 00:21:11.937743 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-10-13 00:21:11.937721156 +0000 UTC))" 2025-10-13T00:21:11.938883968+00:00 stderr F I1013 00:21:11.938121 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314500\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314500\" (2025-10-12 23:14:59 +0000 UTC to 2026-10-12 23:14:59 +0000 UTC (now=2025-10-13 00:21:11.938100846 +0000 UTC))" 2025-10-13T00:21:12.298408625+00:00 stderr F I1013 00:21:12.297634 1 request.go:697] Waited for 1.181655717s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-11-crc 2025-10-13T00:21:13.497131182+00:00 stderr F I1013 00:21:13.497078 1 request.go:697] Waited for 1.194939745s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-11-crc 2025-10-13T00:21:14.302754737+00:00 stderr P I1013 00:21:14.302634 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIfr9kS2g4TGwwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMDEzMDAyMTAzWhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjAzMTQ4\nNjQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAveIYJE5/X7jiHp5/\n+MfyQ+y4CwC59gx2kaIeBX+EAkI/wdUIJrzxkfNCoHAyghje6cXkgz3P9oMWVjN8\n6uioA4tVkxxyLJ7H1XNrCJjQbdXgtIrBJBg/u5zC9kNyotIS9G6YarzcrASuprpV\nMetQXt1LOWMRXlgdc3R5rqaPJKz6aRykv9YpeE7bKsjpEfF3TcCqTSb21LGbOpJM\n1FFO+/b00GHU7Pay65+50BuQbLKlEyEM0tLnBEOOKl2TnSRhtD9o96djDQoU//bp\nm4CIK8N1LoWr2xo/8eE3aTJgWpbr/qV/i1yUBsNkaND/pReIiY3OhPBnZZz3f2GP\n44T3AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBTJRCk5dNeMxWbx49Jei5W7wzPHSjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEADxqdVzNiMtnJUGAdnAQ/\nhwhq0fa1GhFcpy7cwiY7689g8yupDn9aL5N99Iqv3/rYL9VbAMiaNULxPZ8BVZlF\nfw6ULhkzcyCe8FrsUvGfTV3XRMWpbjyf0Yr/iXBPpgjIprFNoYKZ2BWTYvWpbLEi\nrnHbpy/rFzSOTMePRwxqbeuzAolTckGLcEGX4ZQItSaxWg8NIPkpahVCV/h/ZOAg\nSI8OfQSeq7EHZ3UNs/++l1wmCACFQ3oBoDHIRe2QK/Ax5tQOSzEB1690+PberxhN\nhRkARvaPVQ7Dyz3q5qV1CEntnc57IXZABMQme5Akiq+NjvHeMdl1JNkmRfpWVZnC\nDA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGq 2025-10-13T00:21:14.302817438+00:00 stderr F 
jsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-10-13T00:21:13Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-10-13T00:21:14.304595686+00:00 stderr F I1013 00:21:14.304496 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-10-13T00:21:14.304595686+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-10-13T00:23:45.132790428+00:00 stderr F I1013 00:23:45.132264 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:52.250278275+00:00 stderr F I1013 00:23:52.249703 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:52.435912946+00:00 stderr F I1013 00:23:52.435852 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:52.915818233+00:00 stderr F I1013 00:23:52.915756 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000036700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000117165015073043234033072 0ustar zuulzuul2025-08-13T20:05:36.514581427+00:00 stderr F I0813 20:05:36.511530 1 
cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:05:36.514581427+00:00 stderr F I0813 20:05:36.511831 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:36.536300489+00:00 stderr F I0813 20:05:36.536068 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:36.665065366+00:00 stderr F I0813 20:05:36.664937 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T20:05:37.295386946+00:00 stderr F I0813 20:05:37.294764 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:37.295478189+00:00 stderr F W0813 20:05:37.295460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.295518470+00:00 stderr F W0813 20:05:37.295504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.331330466+00:00 stderr F I0813 20:05:37.326144 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:37.331574223+00:00 stderr F I0813 20:05:37.331533 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T20:05:37.333384024+00:00 stderr F I0813 20:05:37.332718 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:37.333384024+00:00 stderr F I0813 20:05:37.333071 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:37.333653102+00:00 stderr F I0813 20:05:37.333580 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.353574 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.353645 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354854 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354923 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354994 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.355011 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.387949927+00:00 stderr F I0813 20:05:37.387768 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-08-13T20:05:37.402530115+00:00 stderr F I0813 20:05:37.390150 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31745", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_e17706f2-94c3-4c11-bacc-b114096fd37e became leader 2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.404425 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.423394 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet 
VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.428270 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:05:37.466191818+00:00 stderr F I0813 20:05:37.464311 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.466191818+00:00 stderr F I0813 20:05:37.464679 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:37.473592920+00:00 stderr F I0813 20:05:37.469592 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.690758209+00:00 stderr F I0813 20:05:37.690347 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController 2025-08-13T20:05:37.691722566+00:00 stderr F I0813 20:05:37.691605 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:05:37.694602549+00:00 stderr F I0813 20:05:37.693336 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T20:05:37.696375289+00:00 stderr F I0813 20:05:37.696346 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:05:37.696446501+00:00 stderr 
F I0813 20:05:37.696430 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController 2025-08-13T20:05:37.697198463+00:00 stderr F I0813 20:05:37.697115 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:37.697220463+00:00 stderr F I0813 20:05:37.697193 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:05:37.697648446+00:00 stderr F I0813 20:05:37.697575 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T20:05:37.697648446+00:00 stderr F I0813 20:05:37.697631 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager 2025-08-13T20:05:37.698286684+00:00 stderr F I0813 20:05:37.698231 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:05:37.698597563+00:00 stderr F I0813 20:05:37.698542 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:05:37.711605395+00:00 stderr F I0813 20:05:37.711545 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:05:37.711720759+00:00 stderr F I0813 20:05:37.711704 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:05:37.713356496+00:00 stderr F I0813 20:05:37.713232 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:05:37.713356496+00:00 stderr F I0813 20:05:37.713305 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:05:37.713579652+00:00 stderr F I0813 20:05:37.713554 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:05:37.713664314+00:00 stderr F I0813 20:05:37.713647 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:05:37.713836589+00:00 stderr F I0813 20:05:37.713760 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:37.728300953+00:00 stderr F I0813 20:05:37.728166 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:05:37.735956003+00:00 stderr F I0813 20:05:37.734446 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:05:37.941763616+00:00 stderr F I0813 20:05:37.940941 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:05:37.941763616+00:00 stderr F I0813 20:05:37.941104 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:05:37.945175514+00:00 stderr F I0813 20:05:37.943577 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:05:37.945175514+00:00 stderr F I0813 20:05:37.943624 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.961306 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.961486 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.963073 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.963085 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 
2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985229 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985459 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985485 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985494 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10 2025-08-13T20:05:37.989486673+00:00 stderr F E0813 20:05:37.989079 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-10" not found 2025-08-13T20:05:37.990072570+00:00 stderr F I0813 20:05:37.989540 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001002 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001048 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ... 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001931 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001956 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-08-13T20:05:38.009507136+00:00 stderr F I0813 20:05:38.009211 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:05:38.009507136+00:00 stderr F I0813 20:05:38.009304 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:05:38.009552308+00:00 stderr F I0813 20:05:38.009494 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:05:38.009552308+00:00 stderr F I0813 20:05:38.009515 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.022638 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.022678 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.023510 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.054908236+00:00 stderr F E0813 20:05:38.054015 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10] 2025-08-13T20:05:38.071474651+00:00 stderr F I0813 20:05:38.071276 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:38.108738358+00:00 stderr F I0813 20:05:38.107067 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]" 2025-08-13T20:05:38.231418341+00:00 stderr F I0813 20:05:38.231010 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.434263850+00:00 stderr F I0813 20:05:38.433834 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.499090686+00:00 stderr F I0813 20:05:38.499028 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T20:05:38.499168258+00:00 stderr F I0813 20:05:38.499150 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T20:05:38.621559283+00:00 stderr F I0813 20:05:38.621492 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:38.629225383+00:00 stderr F I0813 20:05:38.629182 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:05:38.629297075+00:00 stderr F I0813 20:05:38.629281 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T20:05:38.713959039+00:00 stderr F I0813 20:05:38.713849 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:05:38.714049952+00:00 stderr F I0813 20:05:38.714034 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:05:38.815560509+00:00 stderr F I0813 20:05:38.815178 1 request.go:697] Waited for 1.117060289s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0 2025-08-13T20:05:38.829328833+00:00 stderr F I0813 20:05:38.829250 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.048347936+00:00 stderr F I0813 20:05:39.047959 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.255540009+00:00 stderr F I0813 20:05:39.255468 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.290947193+00:00 stderr F I0813 20:05:39.290729 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController 2025-08-13T20:05:39.291035846+00:00 stderr F I0813 20:05:39.291017 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ... 2025-08-13T20:05:39.419138504+00:00 stderr F I0813 20:05:39.418571 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.498210308+00:00 stderr F I0813 20:05:39.498044 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:05:39.498299021+00:00 stderr F I0813 20:05:39.498283 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:05:39.620257543+00:00 stderr F I0813 20:05:39.620126 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723880 1 base_controller.go:73] Caches are synced for SATokenSignerController 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723933 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723982 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723988 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:05:39.817011458+00:00 stderr F I0813 20:05:39.816753 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:39.893035685+00:00 stderr F I0813 20:05:39.892925 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:05:39.893035685+00:00 stderr F I0813 20:05:39.892974 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-08-13T20:05:39.898318626+00:00 stderr F I0813 20:05:39.898279 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:39.898378448+00:00 stderr F I0813 20:05:39.898363 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:05:40.013757722+00:00 stderr F I0813 20:05:40.013689 1 request.go:697] Waited for 2.027614104s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:05:41.215111744+00:00 stderr F I0813 20:05:41.213553 1 request.go:697] Waited for 1.489350039s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dopenshift-kube-apiserver 2025-08-13T20:05:42.218226789+00:00 stderr F I0813 20:05:42.218018 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' installer errors: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F I0813 20:05:42.220398 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 
2025-08-13T20:05:42.220484124+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:42.220484124+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:05:42.220484124+00:00 stderr F TargetRevision: (int32) 10, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedTime: (*v1.Time)(0xc000a3d830)(2025-08-13 20:05:42.217747775 +0000 UTC m=+6.736788489), 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:42.220484124+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:05:42.220484124+00:00 stderr F } 2025-08-13T20:05:42.220484124+00:00 stderr F } 2025-08-13T20:05:42.220484124+00:00 stderr F because installer pod failed: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.220484124+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.299764664+00:00 stderr F I0813 20:05:42.295707 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.326058137+00:00 stderr F I0813 20:05:42.323599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " 2025-08-13T20:05:42.373419313+00:00 stderr F I0813 20:05:42.373161 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 
20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.404745160+00:00 stderr F I0813 20:05:42.404616 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " 2025-08-13T20:05:42.413767369+00:00 stderr F I0813 20:05:42.413731 1 request.go:697] Waited for 1.327177765s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra 2025-08-13T20:05:42.629149796+00:00 stderr F I0813 
20:05:42.629032 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2025-08-13T20:05:43.613509565+00:00 stderr F I0813 20:05:43.613161 1 request.go:697] Waited for 1.193070645s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager 2025-08-13T20:05:57.065456713+00:00 stderr F I0813 20:05:57.063355 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-retry-1-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:05:57.088850643+00:00 stderr F I0813 20:05:57.085745 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:05:57.131914866+00:00 stderr F I0813 20:05:57.131456 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:05:57.218625389+00:00 stderr F I0813 20:05:57.212321 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:05:58.028420369+00:00 stderr F I0813 20:05:58.027854 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:05:58.807715594+00:00 stderr F I0813 20:05:58.807067 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:06:00.112517718+00:00 stderr F I0813 20:06:00.112357 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:06:33.497496611+00:00 stderr F I0813 20:06:33.496837 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:06:35.691201926+00:00 stderr F I0813 20:06:35.688448 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because waiting for static pod of revision 10, found 8 2025-08-13T20:06:36.706043922+00:00 stderr F I0813 20:06:36.703456 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because waiting for static pod of revision 10, found 8 2025-08-13T20:06:50.660622631+00:00 stderr F I0813 20:06:50.658985 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:06:50.850225138+00:00 stderr F I0813 20:06:50.848552 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:06:51.121329900+00:00 stderr F I0813 20:06:51.119648 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 
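A minimal sketch of how this revision rollout could be followed from a shell, assuming oc access to the same cluster; the commands are standard oc calls and the object names are taken from the log above, not output captured by this job:

    oc get clusteroperator kube-controller-manager
    oc -n openshift-kube-controller-manager get pods
    oc -n openshift-kube-controller-manager get pod installer-10-retry-1-crc -o yaml
    oc -n openshift-kube-controller-manager get pod kube-controller-manager-crc -o yaml

The installer pod (installer-10-retry-1-crc) performs the transition to revision 10, while the static pod (kube-controller-manager-crc) is what eventually reports the revision running on the node, as reflected in the NodeCurrentRevisionChanged event later in this log.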
2025-08-13T20:06:55.484035044+00:00 stderr F I0813 20:06:55.481643 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:06:55.563495791+00:00 stderr F I0813 20:06:55.563359 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:55.655427417+00:00 stderr F I0813 20:06:55.655320 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container 
\"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:55.658088293+00:00 stderr F I0813 20:06:55.658034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to 
"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " 2025-08-13T20:06:55.665956629+00:00 stderr F I0813 20:06:55.661252 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:06:55.683381938+00:00 stderr F E0813 20:06:55.683327 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:56.936619590+00:00 stderr F I0813 20:06:56.930386 1 status_controller.go:218] clusteroperator/kube-controller-manager diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:57.091031747+00:00 stderr F I0813 20:06:57.090927 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager 
changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " 2025-08-13T20:06:57.980483799+00:00 stderr F I0813 20:06:57.979433 1 request.go:697] Waited for 1.042308714s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-10-crc 2025-08-13T20:06:59.201359691+00:00 stderr F I0813 20:06:59.201295 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:07:01.121543755+00:00 stderr F I0813 20:07:01.121137 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:07:04.595151506+00:00 stderr F I0813 20:07:04.533289 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:04.595151506+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:04.595151506+00:00 stderr F CurrentRevision: (int32) 10, 2025-08-13T20:07:04.595151506+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00264f218)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:04.595151506+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 
10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:07:04.595151506+00:00 stderr F } 2025-08-13T20:07:04.595151506+00:00 stderr F } 2025-08-13T20:07:04.595151506+00:00 stderr F because static pod is ready 2025-08-13T20:07:04.677555779+00:00 stderr F I0813 20:07:04.677143 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 10 because static pod is ready 2025-08-13T20:07:04.721704214+00:00 stderr F I0813 20:07:04.720531 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:04Z","message":"NodeInstallerProgressing: 1 node is at revision 10","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:04.783723732+00:00 stderr F I0813 20:07:04.783645 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", 
UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 10"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10" 2025-08-13T20:07:05.988628468+00:00 stderr F I0813 20:07:05.985923 1 request.go:697] Waited for 1.015351241s due to client-side throttling, not priority and fairness, request: DELETE:https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider 2025-08-13T20:07:15.750061857+00:00 stderr F I0813 20:07:15.749143 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 11 triggered by 
"required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.830481992+00:00 stderr F I0813 20:07:15.825103 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:15.864035724+00:00 stderr F I0813 20:07:15.863695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:15.880677681+00:00 stderr F I0813 20:07:15.880518 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:15.896993799+00:00 stderr F I0813 20:07:15.896746 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:16.533054716+00:00 stderr F I0813 20:07:16.531142 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:16.966016579+00:00 stderr F I0813 20:07:16.965723 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:17.163532201+00:00 stderr F I0813 20:07:17.163424 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:17.730583569+00:00 stderr F I0813 20:07:17.730517 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", 
Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:18.745910990+00:00 stderr F I0813 20:07:18.743413 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:19.932212122+00:00 stderr F I0813 20:07:19.894036 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:20.024030635+00:00 stderr F I0813 20:07:20.023839 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:20.190952711+00:00 stderr F I0813 20:07:20.190769 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 11 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:20.342014862+00:00 stderr F I0813 20:07:20.341946 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 11 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:21.518012327+00:00 stderr F I0813 20:07:21.517052 1 request.go:697] Waited for 1.17130022s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config 2025-08-13T20:07:21.914562077+00:00 stderr F I0813 20:07:21.914362 1 installer_controller.go:524] node crc with revision 10 is the oldest and needs new revision 11 2025-08-13T20:07:21.914562077+00:00 stderr F I0813 20:07:21.914465 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:21.914562077+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:21.914562077+00:00 stderr F CurrentRevision: (int32) 10, 2025-08-13T20:07:21.914562077+00:00 stderr F TargetRevision: (int32) 11, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedTime: 
(*v1.Time)(0xc002d33830)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:21.914562077+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:07:21.914562077+00:00 stderr F } 2025-08-13T20:07:21.914562077+00:00 stderr F } 2025-08-13T20:07:21.957209780+00:00 stderr F I0813 20:07:21.956243 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:21Z","message":"NodeInstallerProgressing: 1 node is at revision 10; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:21.971364466+00:00 stderr F I0813 20:07:21.959131 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 10 to 11 because node crc with revision 10 is the oldest 2025-08-13T20:07:22.018179948+00:00 stderr F I0813 20:07:22.017362 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 10; 0 nodes have achieved new revision 11"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11" 2025-08-13T20:07:22.198346004+00:00 stderr F I0813 20:07:22.177477 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-11-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:23.112166394+00:00 stderr F I0813 20:07:23.109244 1 request.go:697] Waited for 1.121977728s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc 2025-08-13T20:07:24.113915855+00:00 stderr F I0813 20:07:24.112993 1 request.go:697] Waited for 1.361823165s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:07:24.340358286+00:00 stderr F I0813 20:07:24.340133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-11-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:25.328965391+00:00 stderr F I0813 20:07:25.328424 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:27.325769971+00:00 stderr F I0813 20:07:27.321708 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:29.319020529+00:00 stderr F I0813 20:07:29.318273 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 
2025-08-13T20:07:32.013351707+00:00 stderr F I0813 20:07:32.012561 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:01.280911734+00:00 stderr F I0813 20:08:01.278043 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:02.776377240+00:00 stderr F I0813 20:08:02.774488 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because waiting for static pod of revision 11, found 10 2025-08-13T20:08:03.237190373+00:00 stderr F I0813 20:08:03.236041 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because waiting for static pod of revision 11, found 10 2025-08-13T20:08:12.346025060+00:00 stderr F I0813 20:08:12.336670 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:12.393195983+00:00 stderr F I0813 20:08:12.392436 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:14.389113567+00:00 stderr F I0813 20:08:14.382351 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:22.396199897+00:00 stderr F I0813 20:08:22.394876 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:22.452822330+00:00 stderr F I0813 20:08:22.451881 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:23.367716581+00:00 stderr F I0813 20:08:23.367578 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending 2025-08-13T20:08:24.170857908+00:00 stderr F E0813 20:08:24.170609 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371187832+00:00 stderr F I0813 20:08:24.369377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.375219228+00:00 stderr F E0813 20:08:24.374420 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:24.394850951+00:00 stderr F E0813 20:08:24.394715 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.568198841+00:00 stderr F I0813 20:08:24.567660 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.571906687+00:00 stderr F E0813 20:08:24.571270 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.765553469+00:00 stderr F I0813 20:08:24.765486 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.771100708+00:00 stderr F E0813 20:08:24.771014 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.971852124+00:00 stderr F I0813 20:08:24.969335 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.978861485+00:00 stderr F E0813 20:08:24.978496 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.165866856+00:00 stderr F I0813 20:08:25.165412 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.169325765+00:00 stderr F E0813 20:08:25.166448 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.364084589+00:00 stderr F I0813 20:08:25.363947 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.366759675+00:00 stderr F E0813 20:08:25.366692 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.704643453+00:00 stderr F I0813 20:08:25.698772 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.714601128+00:00 stderr F E0813 20:08:25.714305 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.369521486+00:00 stderr F I0813 20:08:26.360516 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.398639191+00:00 stderr F E0813 20:08:26.362744 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.685364933+00:00 stderr F I0813 20:08:27.685138 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.688204264+00:00 stderr F E0813 20:08:27.687213 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.431541927+00:00 stderr F E0813 20:08:29.429165 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:30.267922087+00:00 stderr F I0813 20:08:30.266521 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268515264+00:00 stderr F E0813 20:08:30.268373 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.396384583+00:00 stderr F 
I0813 20:08:35.395552 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.398464212+00:00 stderr F E0813 20:08:35.398026 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.708406420+00:00 stderr F E0813 20:08:37.707561 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.977995409+00:00 stderr F E0813 20:08:37.977379 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.991073454+00:00 stderr F E0813 20:08:37.991014 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.994263745+00:00 stderr F E0813 20:08:37.994197 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.995028277+00:00 stderr F E0813 20:08:37.994974 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.002764669+00:00 stderr F E0813 20:08:38.002720 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.003259743+00:00 stderr F E0813 20:08:38.003178 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.006769884+00:00 stderr F E0813 20:08:38.006664 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.015736081+00:00 stderr F E0813 20:08:38.015641 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.018489110+00:00 stderr F E0813 20:08:38.018422 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.036316851+00:00 stderr F E0813 20:08:38.036184 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.169417247+00:00 stderr F E0813 20:08:38.169315 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.371619185+00:00 stderr F E0813 20:08:38.371482 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.583145309+00:00 stderr F E0813 20:08:38.581142 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.770923113+00:00 stderr F E0813 20:08:38.770624 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.972205464+00:00 stderr F E0813 20:08:38.972095 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.378703148+00:00 stderr F E0813 20:08:39.378322 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:39.434393324+00:00 stderr F E0813 20:08:39.434225 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:39.569598961+00:00 stderr F E0813 20:08:39.569421 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.772648372+00:00 stderr F E0813 20:08:39.772541 1 
base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.898155131+00:00 stderr F E0813 20:08:39.898073 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.923172408+00:00 stderr F E0813 20:08:39.922158 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.937491379+00:00 stderr F E0813 20:08:39.937365 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.965853612+00:00 stderr F E0813 20:08:39.963385 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.007860736+00:00 stderr F E0813 20:08:40.006971 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.092879614+00:00 stderr F E0813 20:08:40.092336 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.177913742+00:00 stderr F E0813 20:08:40.177511 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:40.256260358+00:00 stderr F E0813 20:08:40.255973 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.377703030+00:00 stderr F E0813 20:08:40.377602 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.586885368+00:00 stderr F E0813 20:08:40.585655 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:40.586885368+00:00 stderr F E0813 20:08:40.586491 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.997595883+00:00 stderr F E0813 20:08:40.997086 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.173972980+00:00 stderr F E0813 20:08:41.173275 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.236970396+00:00 stderr F E0813 20:08:41.232354 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.572877047+00:00 stderr F E0813 20:08:41.572494 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.004873103+00:00 stderr F E0813 20:08:42.002107 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.370864736+00:00 stderr F E0813 20:08:42.369130 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.520686582+00:00 stderr F E0813 20:08:42.520583 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.583390490+00:00 stderr F E0813 20:08:42.582572 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.782116257+00:00 stderr F E0813 20:08:42.781255 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.575448852+00:00 stderr F E0813 20:08:43.575363 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.969587302+00:00 stderr F E0813 20:08:43.969477 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.373945716+00:00 stderr F E0813 20:08:44.373709 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.979102146+00:00 stderr F E0813 20:08:44.978928 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.105530441+00:00 stderr F E0813 20:08:45.101170 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.644941877+00:00 stderr F I0813 20:08:45.643771 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.647885141+00:00 stderr F E0813 20:08:45.646113 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.376183812+00:00 stderr F E0813 20:08:46.375706 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.533064029+00:00 stderr F E0813 20:08:46.532345 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.940860211+00:00 stderr F E0813 20:08:46.939076 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.384872593+00:00 stderr F E0813 20:08:48.382040 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.437137672+00:00 stderr F E0813 20:08:49.437038 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create 
installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:50.225616298+00:00 stderr F E0813 20:08:50.225290 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.576501139+00:00 stderr F E0813 20:08:51.576436 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.656540804+00:00 stderr F E0813 20:08:51.656359 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.776989467+00:00 stderr F E0813 20:08:51.775149 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:52.064855021+00:00 stderr F E0813 20:08:52.064721 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.984195111+00:00 stderr F E0813 20:08:54.983672 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:58.174975433+00:00 stderr F E0813 20:08:58.174827 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:59.441281750+00:00 stderr F E0813 20:08:59.440725 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:09:00.467884753+00:00 stderr F E0813 20:09:00.467455 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:01.433467986+00:00 stderr F E0813 20:09:01.433310 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:09:06.140281097+00:00 stderr F I0813 20:09:06.139717 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:06.140281097+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:06.140281097+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:06.140281097+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00231f1a0)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:06.140281097+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:06.140281097+00:00 stderr F } 2025-08-13T20:09:06.140281097+00:00 stderr F } 2025-08-13T20:09:06.140281097+00:00 stderr F because static pod is ready 2025-08-13T20:09:06.171341418+00:00 stderr F I0813 20:09:06.170651 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32975 2025-08-13T20:09:06.328351489+00:00 stderr F I0813 20:09:06.328133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 10 to 11 because static pod is ready 2025-08-13T20:09:32.791415227+00:00 stderr F I0813 20:09:32.790410 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.842145974+00:00 stderr F I0813 20:09:35.841480 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.566471571+00:00 stderr F I0813 20:09:36.566334 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.867218033+00:00 stderr F I0813 20:09:36.867072 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.031180185+00:00 stderr F I0813 20:09:39.030762 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.077560615+00:00 stderr F I0813 20:09:39.075582 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.669969650+00:00 stderr F I0813 20:09:39.669137 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.868331488+00:00 stderr F I0813 20:09:41.867858 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:41.868331488+00:00 stderr F NodeName: (string) (len=3) "crc", 
2025-08-13T20:09:41.868331488+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:41.868331488+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00167baa0)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:41.868331488+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:41.868331488+00:00 stderr F } 2025-08-13T20:09:41.868331488+00:00 stderr F } 2025-08-13T20:09:41.868331488+00:00 stderr F because static pod is ready 2025-08-13T20:09:41.901710485+00:00 stderr F I0813 20:09:41.901606 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32995 2025-08-13T20:09:43.065986166+00:00 stderr F I0813 20:09:43.065471 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.755607148+00:00 stderr F I0813 20:09:43.753551 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.828929360+00:00 stderr F I0813 20:09:43.828769 1 
reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.868881685+00:00 stderr F I0813 20:09:43.866849 1 request.go:697] Waited for 1.008177855s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?resourceVersion=32682 2025-08-13T20:09:43.874975750+00:00 stderr F I0813 20:09:43.874058 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:44.267842224+00:00 stderr F I0813 20:09:44.266984 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:44.267842224+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:44.267842224+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:44.267842224+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00184d668)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:44.267842224+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: 
connection refused\n" 2025-08-13T20:09:44.267842224+00:00 stderr F } 2025-08-13T20:09:44.267842224+00:00 stderr F } 2025-08-13T20:09:44.267842224+00:00 stderr F because static pod is ready 2025-08-13T20:09:44.305548305+00:00 stderr F I0813 20:09:44.302039 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32995 2025-08-13T20:09:44.830307690+00:00 stderr F I0813 20:09:44.830050 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.070993701+00:00 stderr F I0813 20:09:45.070496 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.882929610+00:00 stderr F I0813 20:09:45.880547 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.698310448+00:00 stderr F I0813 20:09:46.698204 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.920826358+00:00 stderr F I0813 20:09:46.920723 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.441742102+00:00 stderr F I0813 20:09:47.441227 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.488483362+00:00 stderr F I0813 20:09:47.488294 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.865952065+00:00 stderr F I0813 20:09:47.865749 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.867515881+00:00 stderr F I0813 20:09:48.867436 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.465855626+00:00 stderr F I0813 20:09:49.465342 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.469199783+00:00 stderr F I0813 20:09:50.468636 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.880873876+00:00 stderr F I0813 20:09:50.878988 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.891951154+00:00 stderr F I0813 20:09:50.889425 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" 
(string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:50.912624487+00:00 stderr F I0813 20:09:50.911375 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 11"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11" 2025-08-13T20:09:51.070035449+00:00 stderr F I0813 20:09:51.069679 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.062438492+00:00 stderr F I0813 20:09:52.062144 1 request.go:697] Waited for 1.166969798s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:09:52.074969781+00:00 stderr F I0813 20:09:52.073887 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.087851291+00:00 stderr F E0813 20:09:52.086954 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.089358154+00:00 stderr F I0813 20:09:52.088079 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.100062871+00:00 stderr F E0813 20:09:52.099285 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.107549096+00:00 stderr F I0813 20:09:52.106752 1 status_controller.go:218] 
clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.112456946+00:00 stderr F E0813 20:09:52.112377 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.116400119+00:00 stderr F I0813 20:09:52.116182 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.133078288+00:00 stderr F E0813 20:09:52.133003 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.136254809+00:00 stderr F I0813 20:09:52.136195 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.167870265+00:00 stderr F E0813 20:09:52.167244 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.177925303+00:00 stderr F I0813 20:09:52.175512 1 status_controller.go:218] 
clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.194191010+00:00 stderr F E0813 20:09:52.194117 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.356891154+00:00 stderr F I0813 20:09:52.356749 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.385455173+00:00 stderr F E0813 20:09:52.382429 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.704555382+00:00 stderr F I0813 20:09:52.704250 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.715332131+00:00 stderr F E0813 20:09:52.714021 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.119107698+00:00 stderr F I0813 20:09:53.117989 1 reflector.go:351] Caches populated 
for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:53.356032781+00:00 stderr F I0813 20:09:53.355849 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:53Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:53.364569826+00:00 stderr F E0813 20:09:53.363148 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.481039765+00:00 stderr F I0813 20:09:53.480338 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:53.799756183+00:00 stderr F I0813 20:09:53.799653 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:53Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:53.836003352+00:00 stderr F E0813 20:09:53.833661 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.645177441+00:00 stderr F I0813 20:09:54.644492 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.660597223+00:00 stderr F E0813 20:09:54.660244 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.862986776+00:00 stderr F I0813 20:09:54.862846 1 request.go:697] Waited for 1.062232365s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:09:54.870893733+00:00 stderr F I0813 20:09:54.868932 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.878888712+00:00 stderr F E0813 20:09:54.877734 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.878888712+00:00 stderr F I0813 20:09:54.878416 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.889590369+00:00 stderr F E0813 20:09:54.889437 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:59.784361326+00:00 stderr F I0813 20:09:59.782708 1 status_controller.go:218] clusteroperator/kube-controller-manager diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:59Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:59.796398381+00:00 stderr F E0813 20:09:59.794549 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:01.485628003+00:00 stderr F I0813 20:10:01.485296 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:01.486631552+00:00 stderr F I0813 20:10:01.486593 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:01.512229046+00:00 stderr F I0813 20:10:01.512052 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:10:05.576588314+00:00 stderr F I0813 20:10:05.574320 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.788647521+00:00 stderr F I0813 20:10:16.786138 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:17.632072662+00:00 stderr F I0813 20:10:17.631636 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.410729797+00:00 stderr F I0813 20:10:18.410459 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.615626252+00:00 stderr 
F I0813 20:10:18.615513 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.806435403+00:00 stderr F I0813 20:10:18.806372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:20.493105591+00:00 stderr F I0813 20:10:20.492429 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:24.807862058+00:00 stderr F I0813 20:10:24.807400 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:31.619188005+00:00 stderr F I0813 20:10:31.618223 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:32.803599894+00:00 stderr F I0813 20:10:32.803378 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:35.876587429+00:00 stderr F I0813 20:10:35.875698 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.381175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.382267 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.383047 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.383551 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.396116500+00:00 stderr F I0813 20:42:36.375892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.439697676+00:00 stderr F I0813 20:42:36.438509 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.473221493+00:00 stderr F I0813 20:42:36.472939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517723086+00:00 stderr F I0813 20:42:36.515446 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.525477209+00:00 stderr F I0813 20:42:36.524990 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.525649584+00:00 stderr F I0813 20:42:36.525597 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526007115+00:00 stderr F I0813 20:42:36.525975 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526113798+00:00 stderr F I0813 20:42:36.526057 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526590702+00:00 stderr F I0813 20:42:36.526528 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526860159+00:00 stderr F I0813 20:42:36.526836 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527274371+00:00 stderr F I0813 20:42:36.527249 1 streamwatcher.go:111] Unexpected EOF during watch stream 
event decoding: unexpected EOF 2025-08-13T20:42:36.527652012+00:00 stderr F I0813 20:42:36.527630 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527982892+00:00 stderr F I0813 20:42:36.527959 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.528339942+00:00 stderr F I0813 20:42:36.528315 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.528704013+00:00 stderr F I0813 20:42:36.528683 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.529639970+00:00 stderr F I0813 20:42:36.529613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.530079102+00:00 stderr F I0813 20:42:36.530057 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543610662+00:00 stderr F I0813 20:42:36.543448 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.551664234+00:00 stderr F I0813 20:42:36.551599 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.552394896+00:00 stderr F I0813 20:42:36.552291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.552666603+00:00 stderr F I0813 20:42:36.552644 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553034024+00:00 stderr F I0813 20:42:36.553007 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553347353+00:00 stderr F I0813 20:42:36.553324 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553991832+00:00 stderr F I0813 20:42:36.553964 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554282780+00:00 stderr F I0813 20:42:36.554257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.564312 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565334 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565351 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565544 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566095551+00:00 stderr F I0813 20:42:36.565899 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566119391+00:00 stderr F I0813 20:42:36.566108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566487422+00:00 stderr F I0813 20:42:36.566257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566502902+00:00 stderr F I0813 20:42:36.566491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566909994+00:00 stderr F I0813 20:42:36.566684 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:38.016213258+00:00 stderr F E0813 20:42:38.010560 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.018502994+00:00 stderr F E0813 20:42:38.018396 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.022073757+00:00 stderr F E0813 20:42:38.021141 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.029345557+00:00 stderr F E0813 20:42:38.029269 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.034276839+00:00 stderr F E0813 20:42:38.033926 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.036483353+00:00 stderr F E0813 20:42:38.035737 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.040625892+00:00 stderr F E0813 20:42:38.039940 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.047424938+00:00 stderr F E0813 20:42:38.047061 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.051877666+00:00 stderr F E0813 20:42:38.051510 1 base_controller.go:268] BackingResourceController 
reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.060824984+00:00 stderr F E0813 20:42:38.060661 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.199407930+00:00 stderr F E0813 20:42:38.199318 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.405874742+00:00 stderr F E0813 20:42:38.403054 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.599883536+00:00 stderr F E0813 20:42:38.599536 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.802826186+00:00 stderr F E0813 20:42:38.801003 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.858378818+00:00 stderr F I0813 20:42:38.858182 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:38.860434177+00:00 stderr F I0813 20:42:38.860371 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:38.860434177+00:00 stderr F I0813 20:42:38.860425 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:42:38.860457888+00:00 stderr F I0813 20:42:38.860446 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:38.860457888+00:00 stderr F I0813 20:42:38.860453 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:38.860520100+00:00 stderr F I0813 20:42:38.860468 1 base_controller.go:172] Shutting down SATokenSignerController ... 
2025-08-13T20:42:38.860520100+00:00 stderr F I0813 20:42:38.860504 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:38.860533350+00:00 stderr F I0813 20:42:38.860520 1 base_controller.go:172] Shutting down GarbageCollectorWatcherController ... 2025-08-13T20:42:38.860543201+00:00 stderr F I0813 20:42:38.860535 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:38.860583682+00:00 stderr F I0813 20:42:38.860550 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:42:38.860583682+00:00 stderr F I0813 20:42:38.860567 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:42:38.860930672+00:00 stderr F I0813 20:42:38.860883 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:38.860930672+00:00 stderr F I0813 20:42:38.860922 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:38.860950012+00:00 stderr F I0813 20:42:38.860936 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:38.860959843+00:00 stderr F I0813 20:42:38.860949 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:42:38.860969773+00:00 stderr F I0813 20:42:38.860962 1 base_controller.go:172] Shutting down StatusSyncer_kube-controller-manager ... 2025-08-13T20:42:38.860979963+00:00 stderr F I0813 20:42:38.860967 1 base_controller.go:150] All StatusSyncer_kube-controller-manager post start hooks have been terminated 2025-08-13T20:42:38.860989933+00:00 stderr F I0813 20:42:38.860979 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:38.861177759+00:00 stderr F I0813 20:42:38.861129 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:42:38.861292472+00:00 stderr F I0813 20:42:38.861164 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:38.861891139+00:00 stderr F E0813 20:42:38.861726 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.863436384+00:00 stderr F E0813 20:42:38.861921 1 base_controller.go:268] InstallerStateController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.861973 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 
2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.861984 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.862005 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:42:38.863436384+00:00 stderr F W0813 20:42:38.862143 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log0000644000175000017500000057124415073043234033074 0ustar zuulzuul2025-08-13T19:59:24.033453753+00:00 stderr F I0813 19:59:24.032400 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T19:59:24.033453753+00:00 stderr F I0813 19:59:24.033256 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:24.045613349+00:00 stderr F I0813 19:59:24.045312 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:28.618292083+00:00 stderr F I0813 19:59:28.581671 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T19:59:41.628019168+00:00 stderr F I0813 19:59:41.627091 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:41.688536273+00:00 stderr F I0813 19:59:41.686658 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T19:59:41.688536273+00:00 stderr F I0813 19:59:41.687289 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 2025-08-13T19:59:41.783485650+00:00 stderr F W0813 19:59:41.728767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:41.783485650+00:00 stderr F W0813 19:59:41.783334 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:41.912309252+00:00 stderr F I0813 19:59:41.912083 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:41.922961415+00:00 stderr F I0813 19:59:41.920266 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:41.922961415+00:00 stderr F I0813 19:59:41.921423 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923402 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923470 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923610 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923627 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923644 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923650 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.967324630+00:00 stderr F I0813 19:59:41.964723 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-08-13T19:59:42.010618034+00:00 stderr F I0813 19:59:42.008445 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28303", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_222cdc59-c33e-47e3-9961-9d9799c1f827 became leader 2025-08-13T19:59:42.023767319+00:00 stderr F I0813 19:59:42.023455 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:42.023767319+00:00 stderr F I0813 19:59:42.023517 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:42.023966385+00:00 stderr F I0813 19:59:42.023881 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.023966385+00:00 stderr F I0813 19:59:42.023913 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.023982015+00:00 stderr F E0813 19:59:42.023960 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.023997666+00:00 stderr F E0813 19:59:42.023989 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.051606542+00:00 stderr 
F E0813 19:59:42.047922 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.057444769+00:00 stderr F E0813 19:59:42.056639 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.059437936+00:00 stderr F E0813 19:59:42.058443 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.067436374+00:00 stderr F E0813 19:59:42.067065 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.078942392+00:00 stderr F E0813 19:59:42.078701 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.096151052+00:00 stderr F E0813 19:59:42.094004 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.096151052+00:00 stderr F I0813 19:59:42.094275 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", 
"ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:42.096151052+00:00 stderr F I0813 19:59:42.095053 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:42.135419622+00:00 stderr F E0813 19:59:42.131231 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.149967616+00:00 stderr F E0813 19:59:42.149745 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.216747230+00:00 stderr F E0813 19:59:42.212138 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.328639559+00:00 stderr F E0813 19:59:42.328373 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.380154118+00:00 stderr F E0813 19:59:42.379397 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.490361409+00:00 stderr F E0813 19:59:42.489984 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:42.734689153+00:00 stderr F E0813 19:59:42.730261 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.813911061+00:00 stderr F E0813 19:59:42.813173 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.093876912+00:00 stderr F I0813 19:59:43.093228 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController 2025-08-13T19:59:43.207267524+00:00 stderr F I0813 19:59:43.193372 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController 2025-08-13T19:59:43.271215437+00:00 stderr F I0813 19:59:43.269044 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T19:59:44.345763787+00:00 stderr F I0813 19:59:44.345462 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:44.346128778+00:00 stderr F I0813 19:59:44.345751 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:44.397758109+00:00 stderr F I0813 19:59:44.397026 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:44.406665463+00:00 stderr F I0813 19:59:44.406579 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:44.408134075+00:00 stderr F I0813 19:59:44.407390 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454354 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.342036 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454585 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454640 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454668 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454686 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455379 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455402 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455416 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455430 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:44.457951755+00:00 stderr F E0813 19:59:44.456040 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.457951755+00:00 stderr F E0813 19:59:44.456098 1 configmap_cafile_content.go:243] key 
failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.605027117+00:00 stderr F I0813 19:59:44.604706 1 request.go:697] Waited for 1.261836099s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:45.701060661+00:00 stderr F I0813 19:59:45.683986 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T19:59:45.777649654+00:00 stderr F E0813 19:59:45.777571 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.214107444+00:00 stderr F I0813 19:59:46.213425 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:49.333946649+00:00 stderr F E0813 19:59:49.333240 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.333946649+00:00 stderr F E0813 19:59:49.333693 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658599 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658650 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658701 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658709 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:49.686514198+00:00 stderr F I0813 19:59:49.685489 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SignerUpdateRequired' "csr-signer-signer" in "openshift-kube-controller-manager-operator" requires a new signing cert/key pair: past its refresh time 2025-06-27 13:05:19 +0000 UTC 2025-08-13T19:59:52.221081929+00:00 stderr F E0813 19:59:52.216537 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:52.947399193+00:00 stderr F E0813 19:59:52.942757 1 base_controller.go:268] InstallerStateController reconciliation failed: kubecontrollermanagers.operator.openshift.io "cluster" not found 2025-08-13T19:59:53.014988420+00:00 stderr F I0813 19:59:53.014830 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:53.014988420+00:00 stderr F I0813 19:59:53.014901 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 
2025-08-13T19:59:53.247120117+00:00 stderr F I0813 19:59:53.000502 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:53.290763200+00:00 stderr F I0813 19:59:53.290133 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.246981 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.326055 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.246985 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.326119 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:53.343642967+00:00 stderr F I0813 19:59:53.246989 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:53.343642967+00:00 stderr F I0813 19:59:53.343520 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:53.348723092+00:00 stderr F E0813 19:59:53.347167 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.348723092+00:00 stderr F I0813 19:59:53.246992 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:53.348723092+00:00 stderr F I0813 19:59:53.347291 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:53.349069832+00:00 stderr F I0813 19:59:53.248700 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:53.349069832+00:00 stderr F I0813 19:59:53.349039 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T19:59:53.368211478+00:00 stderr F E0813 19:59:53.365245 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.370976496+00:00 stderr F I0813 19:59:53.370553 1 trace.go:236] Trace[1198132947]: "DeltaFIFO Pop Process" ID:openshift-infra/build-config-change-controller,Depth:24,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:53.014) (total time: 355ms): 2025-08-13T19:59:53.370976496+00:00 stderr F Trace[1198132947]: [355.508683ms] [355.508683ms] END 2025-08-13T19:59:53.379894541+00:00 stderr F E0813 19:59:53.379715 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.383234916+00:00 stderr F I0813 19:59:53.380322 1 trace.go:236] Trace[1066495986]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.193) (total time: 10186ms): 2025-08-13T19:59:53.383234916+00:00 stderr F Trace[1066495986]: ---"Objects listed" error: 10186ms (19:59:53.380) 2025-08-13T19:59:53.383234916+00:00 stderr F Trace[1066495986]: [10.186242865s] [10.186242865s] END 2025-08-13T19:59:53.383234916+00:00 stderr F I0813 19:59:53.380358 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.416024670+00:00 stderr F E0813 19:59:53.414698 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.420198000+00:00 stderr F I0813 19:59:53.420157 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.422652869+00:00 stderr F I0813 19:59:53.422216 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.423601336+00:00 stderr F I0813 19:59:53.423567 1 trace.go:236] Trace[1991635445]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.194) (total time: 10228ms): 2025-08-13T19:59:53.423601336+00:00 stderr F Trace[1991635445]: ---"Objects listed" error: 10228ms (19:59:53.423) 2025-08-13T19:59:53.423601336+00:00 stderr F Trace[1991635445]: [10.228950532s] [10.228950532s] END 2025-08-13T19:59:53.423669918+00:00 stderr F I0813 19:59:53.423649 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.449483304+00:00 stderr F I0813 19:59:53.246974 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager 2025-08-13T19:59:53.449483304+00:00 stderr F I0813 19:59:53.446162 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ... 
2025-08-13T19:59:53.463072922+00:00 stderr F I0813 19:59:53.463014 1 trace.go:236] Trace[1514590009]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.150) (total time: 10312ms): 2025-08-13T19:59:53.463072922+00:00 stderr F Trace[1514590009]: ---"Objects listed" error: 10301ms (19:59:53.452) 2025-08-13T19:59:53.463072922+00:00 stderr F Trace[1514590009]: [10.312669109s] [10.312669109s] END 2025-08-13T19:59:53.463172424+00:00 stderr F I0813 19:59:53.463153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.525856751+00:00 stderr F E0813 19:59:53.525272 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.527213780+00:00 stderr F E0813 19:59:53.525952 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.544828 1 trace.go:236] Trace[1007991698]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.194) (total time: 10345ms): 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1007991698]: ---"Objects listed" error: 10345ms (19:59:53.539) 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1007991698]: [10.345534185s] [10.345534185s] END 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.544882 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.545145 1 trace.go:236] Trace[1260161130]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.098) (total time: 10446ms): 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1260161130]: ---"Objects listed" error: 10446ms (19:59:53.545) 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1260161130]: [10.446396791s] [10.446396791s] END 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.545153 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.572000517+00:00 stderr F I0813 19:59:53.569230 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T19:59:53.572000517+00:00 stderr F I0813 19:59:53.569288 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 
2025-08-13T19:59:53.575611490+00:00 stderr F I0813 19:59:53.575519 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8 2025-08-13T19:59:53.633472279+00:00 stderr F I0813 19:59:53.633259 1 trace.go:236] Trace[1243592164]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.099) (total time: 10533ms): 2025-08-13T19:59:53.633472279+00:00 stderr F Trace[1243592164]: ---"Objects listed" error: 10533ms (19:59:53.633) 2025-08-13T19:59:53.633472279+00:00 stderr F Trace[1243592164]: [10.533904655s] [10.533904655s] END 2025-08-13T19:59:53.633472279+00:00 stderr F I0813 19:59:53.633362 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.675137747+00:00 stderr F I0813 19:59:53.671142 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.762284671+00:00 stderr F I0813 19:59:53.730591 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:53.762284671+00:00 stderr F I0813 19:59:53.760573 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:53.788307503+00:00 stderr F I0813 19:59:53.783223 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T19:59:53.788307503+00:00 stderr F I0813 19:59:53.783290 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T19:59:53.792922194+00:00 stderr F I0813 19:59:53.791421 1 trace.go:236] Trace[896251159]: "DeltaFIFO Pop Process" ID:openshift-config-managed/dashboard-k8s-resources-cluster,Depth:34,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:53.645) (total time: 145ms): 2025-08-13T19:59:53.792922194+00:00 stderr F Trace[896251159]: [145.698363ms] [145.698363ms] END 2025-08-13T19:59:53.804206756+00:00 stderr F I0813 19:59:53.803143 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:53.804206756+00:00 stderr F I0813 19:59:53.803216 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819120 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819168 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819235 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819240 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829044 1 base_controller.go:73] Caches are synced for SATokenSignerController 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829253 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829501 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829521 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ... 2025-08-13T19:59:53.899573244+00:00 stderr F I0813 19:59:53.859656 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:53.899573244+00:00 stderr F I0813 19:59:53.859744 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:53.963060424+00:00 stderr F I0813 19:59:53.944658 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:53.963060424+00:00 stderr F I0813 19:59:53.950284 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:54.249369305+00:00 stderr F I0813 19:59:54.247394 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.289571031+00:00 stderr F I0813 19:59:54.251263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:54.310357104+00:00 stderr F I0813 19:59:54.305679 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2025-08-13T19:59:54.337892059+00:00 stderr F E0813 19:59:54.337627 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:54.346882635+00:00 stderr F I0813 19:59:54.341597 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.358706072+00:00 stderr F E0813 19:59:54.358653 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: 
cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8] 2025-08-13T19:59:54.402212842+00:00 stderr F I0813 19:59:54.401970 1 core.go:359] ConfigMap "openshift-kube-controller-manager/service-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/description":{},"f:openshift.io/owning-component":{}}}},"manager":"service-ca-operator","operation":"Update","time":"2025-08-13T19:59:39Z"}],"resourceVersion":null,"uid":"8a52a0ef-1908-47be-bed5-31ee169c99a3"}} 2025-08-13T19:59:54.403505199+00:00 stderr F I0813 19:59:54.403422 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/service-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:54.403505199+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:54.410230861+00:00 stderr F I0813 19:59:54.410168 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.411327932+00:00 stderr F I0813 19:59:54.411251 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:54.421904543+00:00 stderr F I0813 19:59:54.418693 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 9 triggered by "required configmap/service-ca has changed" 2025-08-13T19:59:54.513584627+00:00 stderr F I0813 19:59:54.513520 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.514087421+00:00 stderr F I0813 19:59:54.514057 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]" 2025-08-13T19:59:54.535333137+00:00 stderr P I0813 19:59:54.535256 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/i 2025-08-13T19:59:54.535405889+00:00 stderr F SVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:59:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T19:59:54.535596484+00:00 stderr F I0813 19:59:54.535557 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:54.535596484+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:54.619393933+00:00 stderr F I0813 19:59:54.619247 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer-signer -n openshift-kube-controller-manager-operator because it changed 2025-08-13T19:59:54.619484395+00:00 stderr F I0813 19:59:54.619464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CABundleUpdateRequired' "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert 2025-08-13T19:59:54.619513726+00:00 stderr F E0813 19:59:54.619406 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:54.754106093+00:00 stderr F I0813 
19:59:54.753338 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.805605461+00:00 stderr F I0813 19:59:54.805538 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:55.108944468+00:00 stderr F I0813 19:59:55.107484 1 core.go:359] ConfigMap "openshift-kube-controller-manager/aggregator-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIV+a/r/KBVSQwDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2FnZ3JlZ2F0b3It\nY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX2FnZ3JlZ2F0b3ItY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwz1oeDcXqAniG+VxAzEbZbeheswm\nibqk0LwWbA9YAD2aJCC2U0gbXouz0u1dzDnEuwzslM0OFq2kW+1RmEB1drVBkCMV\ny/gKGmRafqGt31/rDe81XneOBzrUC/rNVDZq7rx4wsZ8YzYkPhj1frvlCCWyOdyB\n+nWF+ZZQHLXeSuHuVGnfGqmckiQf/R8ITZp/vniyeOED0w8B9ZdfVHNYJksR/Vn2\ngslU8a/mluPzSCyD10aHnX5c75yTzW4TBQvytjkEpDR5LBoRmHiuL64999DtWonq\niX7TdcoQY1LuHyilaXIp0TazmkRb3ycHAY/RQ3xumj9I25D8eLCwWvI8GwIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nWtUaz8JmZMUc/fPnQTR0L7R9wakwHwYDVR0jBBgwFoAUWtUaz8JmZMUc/fPnQTR0\nL7R9wakwDQYJKoZIhvcNAQELBQADggEBAECt0YM/4XvtPuVY5pY2aAXRuthsw2zQ\nnXVvR5jDsfpaNvMXQWaMjM1B+giNKhDDeLmcF7GlWmFfccqBPicgFhUgQANE3ALN\ngq2Wttd641M6i4B3UuRewNySj1sc12wfgAcwaRcecDTCsZo5yuF90z4mXpZs7MWh\nKCxYPpAtLqi17IF1tJVz/03L+6WD5kUProTELtY7/KBJYV/GONMG+KAMBjg1ikMK\njA0HQiCZiWDuW1ZdAwuvh6oRNWoQy6w9Wksard/AnfXUFBwNgULMp56+tOOPHxtm\nu3XYTN0dPJXsimSk4KfS0by8waS7ocoXa3LgQxb/6h0ympDbcWtgD0w=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:00Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}},"f:labels":{".":{},"f:auth.openshift.io/managed-certificate-type":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:33Z"}],"resourceVersion":null,"uid":"1d0d5c4a-d5a2-488a-94e2-bf622b67cadf"}} 2025-08-13T19:59:55.116031370+00:00 stderr F I0813 19:59:55.115686 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:55.116031370+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:55.688617272+00:00 stderr F I0813 19:59:55.688549 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-signer-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n"}} 2025-08-13T19:59:55.704932237+00:00 stderr F I0813 19:59:55.690301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator: Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.714899181+00:00 stderr F I0813 19:59:55.708597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:55.762149678+00:00 stderr F I0813 19:59:55.761861 1 request.go:697] Waited for 1.076004342s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T19:59:56.762951177+00:00 stderr F I0813 19:59:56.762427 1 request.go:697] Waited for 1.055958541s due to client-side throttling, not priority and fairness, request: POST:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps 2025-08-13T19:59:57.074899158+00:00 stderr F I0813 19:59:57.074757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:57.349627799+00:00 stderr F I0813 19:59:57.331149 1 status_controller.go:218] 
clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.538299877+00:00 stderr F E0813 19:59:57.537920 1 base_controller.go:268] CertRotationController reconciliation failed: Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:57.612464112+00:00 stderr F I0813 19:59:57.611932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RotationError' Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:57.729409045+00:00 stderr F I0813 19:59:57.729313 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T19:59:58.238010793+00:00 stderr F I0813 19:59:58.236969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:58.340670809+00:00 stderr F I0813 19:59:58.333712 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node 
is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:58.369221253+00:00 stderr F I0813 19:59:58.368719 1 request.go:697] Waited for 1.037197985s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 2025-08-13T19:59:58.427858545+00:00 stderr F I0813 19:59:58.422906 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:59.248931550+00:00 stderr F I0813 19:59:59.226352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:59.574132010+00:00 stderr F I0813 19:59:59.566388 1 request.go:697] Waited for 1.156719703s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:00:00.643511052+00:00 stderr F I0813 20:00:00.637169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:01.507455149+00:00 stderr F I0813 20:00:01.478751 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:02.034100931+00:00 stderr F I0813 20:00:02.023659 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:02.034100931+00:00 stderr F I0813 20:00:02.024573 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: Operation cannot be fulfilled on configmaps "csr-controller-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.209018759+00:00 stderr F I0813 20:00:02.206166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:02.885065476+00:00 stderr F I0813 20:00:02.884103 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:03.224935337+00:00 stderr F I0813 20:00:03.222157 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:03.729095862+00:00 stderr F I0813 20:00:03.725248 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:04.211702912+00:00 stderr F I0813 20:00:04.208754 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:04.786158933+00:00 stderr F I0813 20:00:04.781212 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 9 triggered by "required configmap/service-ca has changed" 2025-08-13T20:00:04.870048975+00:00 stderr F I0813 20:00:04.863754 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 9 created because required configmap/service-ca has changed 2025-08-13T20:00:04.884154757+00:00 stderr F W0813 20:00:04.883346 1 staticpod.go:38] revision 9 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:00:04.891111825+00:00 stderr F E0813 20:00:04.889270 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 9 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769114 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.769040049 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769378 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.769361598 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769400 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.769386709 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769418 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.769407239 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769435 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.7694241 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769452 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.76944052 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769468 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 
+0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769456861 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769484 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769472801 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769512 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.769496222 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769535 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769523243 +0000 UTC))" 2025-08-13T20:00:05.790969514+00:00 stderr F I0813 20:00:05.790398 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.790355257 +0000 UTC))" 2025-08-13T20:00:05.790969514+00:00 stderr F I0813 20:00:05.790724 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:00:05.790702476 +0000 UTC))" 2025-08-13T20:00:05.965070208+00:00 stderr F I0813 20:00:05.964149 1 request.go:697] Waited for 1.102447146s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-9-crc 2025-08-13T20:00:06.586405825+00:00 stderr F I0813 20:00:06.585096 1 installer_controller.go:524] node crc with revision 8 is the oldest and needs new revision 9 2025-08-13T20:00:06.586405825+00:00 stderr F I0813 20:00:06.585750 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:06.586405825+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:06.586405825+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:06.586405825+00:00 stderr F TargetRevision: (int32) 9, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedRevision: (int32) 8, 2025-08-13T20:00:06.586405825+00:00 stderr F 
LastFailedTime: (*v1.Time)(0xc0020ea588)(2024-06-27 13:18:10 +0000 UTC), 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:06.586405825+00:00 stderr F (string) (len=2059) "installer: ry-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\n (string) (len=27) \"kube-controller-manager-pod\",\n (string) (len=6) \"config\",\n (string) (len=32) \"cluster-policy-controller-config\",\n (string) (len=29) \"controller-manager-kubeconfig\",\n (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=10) \"service-ca\",\n (string) (len=15) \"recycler-config\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"cloud-config\"\n },\n CertSecretNames: ([]string) (len=2 cap=2) {\n (string) (len=39) \"kube-controller-manager-client-cert-key\",\n (string) (len=10) \"csr-signer\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0627 13:15:36.458960 1 cmd.go:409] Getting controller reference for node crc\nI0627 13:15:36.472798 1 cmd.go:422] Waiting for installer revisions to settle for node crc\nI0627 13:15:36.476730 1 cmd.go:514] Waiting additional period after revisions have settled for node crc\nI0627 13:16:06.477243 1 cmd.go:520] Getting installer pods for node crc\nF0627 13:16:06.480777 1 cmd.go:105] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:00:06.586405825+00:00 stderr F } 2025-08-13T20:00:06.586405825+00:00 stderr F } 2025-08-13T20:00:06.687925820+00:00 stderr F I0813 20:00:06.683522 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 8 to 9 because node crc with revision 8 is the oldest 2025-08-13T20:00:06.720211090+00:00 stderr F I0813 20:00:06.711726 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is 
at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:06.760673034+00:00 stderr F I0813 20:00:06.760569 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" 2025-08-13T20:00:06.836714792+00:00 stderr F I0813 20:00:06.834742 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-9-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:07.767166432+00:00 stderr F I0813 20:00:07.764212 1 request.go:697] Waited for 1.027872228s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:00:08.456826607+00:00 stderr P I0813 20:00:08.456025 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQU 2025-08-13T20:00:08.456932090+00:00 stderr F AA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:06Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:00:08.456932090+00:00 stderr F I0813 20:00:08.456716 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T20:00:08.456932090+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.966021067+00:00 stderr F I0813 20:00:08.962975 1 request.go:697] Waited for 1.36289333s due 
to client-side throttling, not priority and fairness, request: POST:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods 2025-08-13T20:00:09.447679141+00:00 stderr F I0813 20:00:09.447065 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-9-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:10.144016496+00:00 stderr F I0813 20:00:10.130744 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:10.628131900+00:00 stderr F I0813 20:00:10.557295 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:10.855083031+00:00 stderr F I0813 20:00:10.854579 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:00:10.862919385+00:00 stderr F I0813 20:00:10.861255 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 
9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:11.266415079+00:00 stderr F E0813 20:00:11.265447 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:11.937577357+00:00 stderr F I0813 20:00:11.937279 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:11.973540762+00:00 stderr F I0813 20:00:11.970921 1 request.go:697] Waited for 1.066126049s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config 2025-08-13T20:00:12.912112395+00:00 stderr F I0813 20:00:12.886263 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:16.706302621+00:00 stderr F I0813 20:00:16.705335 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:17.450477091+00:00 stderr F I0813 20:00:17.450091 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:19.774153637+00:00 stderr F I0813 20:00:19.769354 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:19.935640232+00:00 stderr F I0813 20:00:19.935570 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: 
\"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:00:19.998082432+00:00 stderr F I0813 20:00:19.998012 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:22.011667596+00:00 stderr F I0813 20:00:22.006462 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:23.446037179+00:00 stderr F I0813 20:00:23.444714 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:24.916564830+00:00 stderr F I0813 20:00:24.914283 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:26.792434459+00:00 stderr F I0813 20:00:26.784906 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:27.694565663+00:00 stderr F I0813 20:00:27.690687 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 10 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:27.970950544+00:00 stderr F I0813 20:00:27.969903 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:29.012869353+00:00 stderr F I0813 20:00:29.011822 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:29.746921794+00:00 stderr F I0813 20:00:29.744278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:30.213415615+00:00 stderr F I0813 20:00:30.212313 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:30.800346431+00:00 stderr F I0813 20:00:30.785484 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", 
Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:31.266697908+00:00 stderr F I0813 20:00:31.250574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:31.794188219+00:00 stderr F I0813 20:00:31.793698 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.017819839+00:00 stderr F I0813 20:00:33.017439 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.184819941+00:00 stderr F I0813 20:00:33.179616 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.622176352+00:00 stderr F I0813 20:00:33.622117 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:34.201159381+00:00 stderr F I0813 20:00:34.200690 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:34.883284441+00:00 stderr F I0813 20:00:34.881509 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-10 -n openshift-kube-controller-manager 
because it was missing 2025-08-13T20:00:35.514679035+00:00 stderr F I0813 20:00:35.513692 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 10 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:35.960155817+00:00 stderr F I0813 20:00:35.958416 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 10 created because optional secret/serving-cert has changed 2025-08-13T20:00:36.965344168+00:00 stderr F I0813 20:00:36.955393 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:36.965344168+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:36.965344168+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:36.965344168+00:00 stderr F TargetRevision: (int32) 10, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedRevision: (int32) 8, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001ff4780)(2024-06-27 13:18:10 +0000 UTC), 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:36.965344168+00:00 stderr F (string) (len=2059) "installer: ry-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\n (string) (len=27) \"kube-controller-manager-pod\",\n (string) (len=6) \"config\",\n (string) (len=32) \"cluster-policy-controller-config\",\n (string) (len=29) \"controller-manager-kubeconfig\",\n (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=10) \"service-ca\",\n (string) (len=15) \"recycler-config\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"cloud-config\"\n },\n CertSecretNames: ([]string) (len=2 cap=2) {\n (string) (len=39) \"kube-controller-manager-client-cert-key\",\n (string) (len=10) \"csr-signer\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0627 13:15:36.458960 1 cmd.go:409] Getting controller reference for node crc\nI0627 13:15:36.472798 1 cmd.go:422] Waiting for installer revisions to settle for 
node crc\nI0627 13:15:36.476730 1 cmd.go:514] Waiting additional period after revisions have settled for node crc\nI0627 13:16:06.477243 1 cmd.go:520] Getting installer pods for node crc\nF0627 13:16:06.480777 1 cmd.go:105] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:00:36.965344168+00:00 stderr F } 2025-08-13T20:00:36.965344168+00:00 stderr F } 2025-08-13T20:00:36.965344168+00:00 stderr F because new revision pending 2025-08-13T20:00:37.063979591+00:00 stderr F I0813 20:00:37.049572 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.084022262+00:00 stderr F I0813 20:00:37.082999 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.084022262+00:00 stderr F I0813 20:00:37.083556 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10" 2025-08-13T20:00:37.123864098+00:00 stderr F E0813 20:00:37.120427 1 
base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:37.318318893+00:00 stderr F I0813 20:00:37.316069 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-10-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:38.190099651+00:00 stderr F I0813 20:00:38.189546 1 request.go:697] Waited for 1.147826339s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 2025-08-13T20:00:39.723621538+00:00 stderr F I0813 20:00:39.722118 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:40.578344999+00:00 stderr F I0813 20:00:40.573345 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:45.563774882+00:00 stderr F I0813 20:00:45.553082 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:46.701158604+00:00 stderr F I0813 20:00:46.694673 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:47.187761398+00:00 stderr F I0813 20:00:47.184743 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:56.229393959+00:00 stderr P I0813 20:00:56.223445 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxM 2025-08-13T20:00:56.229760529+00:00 stderr F 
TQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:00:56.239003423+00:00 stderr F I0813 20:00:56.231111 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T20:00:56.239003423+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:00.010151258+00:00 stderr F I0813 20:00:59.999206 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.999078302 +0000 UTC))" 2025-08-13T20:01:00.012078983+00:00 stderr F I0813 20:01:00.011490 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.008929353 +0000 UTC))" 2025-08-13T20:01:00.013182774+00:00 stderr F I0813 20:01:00.013038 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.012049852 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013881 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.013857453 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013939 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.013921695 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013963 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.013946756 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.014016 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014000307 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.014037 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014023008 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014103 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.014042849 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014137 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.014118021 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014159 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014147682 +0000 UTC))" 2025-08-13T20:01:00.031031783+00:00 stderr F I0813 20:01:00.030593 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:01:00.030551769 +0000 UTC))" 2025-08-13T20:01:00.031031783+00:00 stderr F I0813 20:01:00.031018 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:01:00.030988972 +0000 UTC))" 2025-08-13T20:01:03.266043461+00:00 stderr P I0813 20:01:03.265053 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxM 2025-08-13T20:01:03.266368450+00:00 stderr F 
TQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:01:03.266368450+00:00 stderr F I0813 20:01:03.266008 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/client-ca -n openshift-kube-controller-manager: Operation cannot be fulfilled on configmaps "client-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:05.228984283+00:00 stderr F I0813 20:01:05.179740 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:06.133168995+00:00 stderr F I0813 20:01:06.125619 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:01:07.023349078+00:00 stderr F I0813 20:01:06.999583 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:07.023349078+00:00 stderr F I0813 20:01:07.002684 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:07.263262658+00:00 stderr F I0813 20:01:07.263101 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:01:07.839947862+00:00 stderr F I0813 20:01:07.743713 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:22.554548162+00:00 stderr F I0813 20:01:22.551707 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:23.629342647+00:00 stderr F I0813 20:01:23.629273 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:24.026047149+00:00 stderr F I0813 20:01:24.025682 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:26.338292640+00:00 stderr F I0813 20:01:26.337632 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.340007288+00:00 stderr F I0813 20:01:26.338395 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:26.340007288+00:00 stderr F I0813 20:01:26.339666 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.350195569+00:00 stderr F I0813 20:01:26.349343 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 
UTC (now=2025-08-13 20:01:26.348725367 +0000 UTC))" 2025-08-13T20:01:26.351715732+00:00 stderr F I0813 20:01:26.350769 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:26.350746725 +0000 UTC))" 2025-08-13T20:01:26.351715732+00:00 stderr F I0813 20:01:26.351583 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:26.351257289 +0000 UTC))" 2025-08-13T20:01:26.352549576+00:00 stderr F I0813 20:01:26.351755 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:26.351730833 +0000 UTC))" 2025-08-13T20:01:26.352549576+00:00 stderr F I0813 20:01:26.352233 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352196976 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352607 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352525505 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352745 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352634238 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352866 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:01:26.352755722 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352956 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:26.352939397 +0000 UTC))" 2025-08-13T20:01:26.353728070+00:00 stderr F I0813 20:01:26.352981 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:26.352967808 +0000 UTC))" 2025-08-13T20:01:26.353728070+00:00 stderr F I0813 20:01:26.353666 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.353648137 +0000 UTC))" 2025-08-13T20:01:26.359825203+00:00 stderr F I0813 20:01:26.359621 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-08-13 20:01:26.359510245 +0000 UTC))" 2025-08-13T20:01:26.361272905+00:00 stderr F I0813 20:01:26.360170 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:01:26.360149493 +0000 UTC))" 2025-08-13T20:01:29.161403988+00:00 stderr F I0813 20:01:29.161063 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="b677063b32bdea5d0626c22feb3e33cd81922e655466381de19492c7e38d993f", new="048a1a430ed1deee9ccf052d51f855e6e8f2fee2d531560f57befbbfad23cd9d") 2025-08-13T20:01:29.161679096+00:00 stderr F W0813 20:01:29.161655 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:29.161911722+00:00 stderr F I0813 20:01:29.161888 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="b10e6fe375e1e85006c2f2c6e133de2da450246bee57a078c8f61fda3a384f1b", new="4311dcc9562700df28c73989dc84c69933e91c13e7ee5e5588d7570593276c97") 2025-08-13T20:01:29.163290022+00:00 stderr F I0813 20:01:29.163222 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:29.163387054+00:00 stderr F I0813 
20:01:29.163330 1 base_controller.go:172] Shutting down GarbageCollectorWatcherController ... 2025-08-13T20:01:29.163417905+00:00 stderr F I0813 20:01:29.163352 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:29.163535239+00:00 stderr F I0813 20:01:29.163516 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:29.164066004+00:00 stderr F I0813 20:01:29.164005 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:29.164111315+00:00 stderr F I0813 20:01:29.164014 1 base_controller.go:172] Shutting down SATokenSignerController ... 2025-08-13T20:01:29.164139656+00:00 stderr F I0813 20:01:29.164101 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164165 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164221 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164237 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:01:29.164692262+00:00 stderr F I0813 20:01:29.164607 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:29.164961759+00:00 stderr F I0813 20:01:29.164820 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:01:29.165095503+00:00 stderr F I0813 20:01:29.165074 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:29.165144755+00:00 stderr F I0813 20:01:29.165131 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:29.165466574+00:00 stderr F I0813 20:01:29.165378 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:29.165558206+00:00 stderr F I0813 20:01:29.165531 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:29.165771282+00:00 stderr F I0813 20:01:29.165666 1 base_controller.go:114] Shutting down worker of GarbageCollectorWatcherController controller ... 2025-08-13T20:01:29.165917257+00:00 stderr F I0813 20:01:29.165864 1 base_controller.go:104] All GarbageCollectorWatcherController workers have been terminated 2025-08-13T20:01:29.166484343+00:00 stderr F E0813 20:01:29.166374 1 base_controller.go:268] TargetConfigController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166426 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166437 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166451 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166463 1 base_controller.go:114] Shutting down worker of SATokenSignerController controller ... 
2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166469 1 base_controller.go:104] All SATokenSignerController workers have been terminated 2025-08-13T20:01:29.166537154+00:00 stderr F I0813 20:01:29.166496 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 2025-08-13T20:01:29.166537154+00:00 stderr F E0813 20:01:29.166497 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": context canceled, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:01:29.166537154+00:00 stderr F I0813 20:01:29.166502 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:01:29.166556025+00:00 stderr F I0813 20:01:29.166528 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:29.166571335+00:00 stderr F I0813 20:01:29.166560 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:29.166586586+00:00 stderr F I0813 20:01:29.166579 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166587 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166586 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166593 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166606 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166687 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166704 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:29.166735550+00:00 stderr F I0813 20:01:29.166721 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 
2025-08-13T20:01:29.166735550+00:00 stderr F I0813 20:01:29.166728 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:29.166747810+00:00 stderr F I0813 20:01:29.166733 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:29.167183443+00:00 stderr F I0813 20:01:29.167058 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:29.167183443+00:00 stderr F I0813 20:01:29.167087 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:29.167311166+00:00 stderr F I0813 20:01:29.167260 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:29.167311166+00:00 stderr F I0813 20:01:29.167303 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-controller-manager controller ... 2025-08-13T20:01:29.167327837+00:00 stderr F I0813 20:01:29.167315 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:29.167327837+00:00 stderr F I0813 20:01:29.167322 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167289 1 base_controller.go:172] Shutting down StatusSyncer_kube-controller-manager ... 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167331 1 base_controller.go:150] All StatusSyncer_kube-controller-manager post start hooks have been terminated 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167336 1 base_controller.go:104] All StatusSyncer_kube-controller-manager workers have been terminated 2025-08-13T20:01:29.167354207+00:00 stderr F I0813 20:01:29.167336 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:29.167366448+00:00 stderr F I0813 20:01:29.167351 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167363 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167369 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167374 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:29.167391069+00:00 stderr F I0813 20:01:29.167381 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167393 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167393 1 builder.go:329] server exited 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167394 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:29.167416179+00:00 stderr F I0813 20:01:29.167408 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:29.167428310+00:00 stderr F I0813 20:01:29.167413 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:29.167428310+00:00 stderr F I0813 20:01:29.167418 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:29.167441000+00:00 stderr F I0813 20:01:29.167429 1 base_controller.go:114] Shutting down worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T20:01:29.167441000+00:00 stderr F I0813 20:01:29.167437 1 base_controller.go:104] All KubeControllerManagerStaticResources workers have been terminated 2025-08-13T20:01:29.167456530+00:00 stderr F I0813 20:01:29.167447 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:29.167498632+00:00 stderr F I0813 20:01:29.167478 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:29.167556643+00:00 stderr F I0813 20:01:29.167541 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:29.167690917+00:00 stderr F I0813 20:01:29.167656 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:29.167728938+00:00 stderr F I0813 20:01:29.167658 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:29.167832991+00:00 stderr F I0813 20:01:29.167726 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:29.167897793+00:00 stderr F I0813 20:01:29.167583 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:29.167976565+00:00 stderr F I0813 20:01:29.167962 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:29.168040417+00:00 stderr F I0813 20:01:29.168007 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:29.168077568+00:00 stderr F I0813 20:01:29.168065 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:29.168108789+00:00 stderr F I0813 20:01:29.168097 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:29.168180191+00:00 stderr F I0813 20:01:29.168140 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:29.168180191+00:00 stderr F I0813 20:01:29.168170 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:29.168193111+00:00 stderr F I0813 20:01:29.168177 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:29.168238023+00:00 stderr F E0813 20:01:29.168082 1 base_controller.go:268] StaticPodStateController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:29.168337335+00:00 stderr F I0813 20:01:29.168320 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:29.168393597+00:00 stderr F I0813 20:01:29.168376 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:29.168426248+00:00 stderr F I0813 20:01:29.168414 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:29.168454419+00:00 stderr F I0813 20:01:29.168227 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:29.168508740+00:00 stderr F I0813 20:01:29.168495 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:29.168556802+00:00 stderr F I0813 20:01:29.168247 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:29.168592053+00:00 stderr F I0813 20:01:29.168580 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:29.169355535+00:00 stderr F I0813 20:01:29.169299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 10 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc": context canceled 2025-08-13T20:01:29.169656083+00:00 stderr F E0813 20:01:29.169600 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": context canceled 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170025 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170055 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170064 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:29.175724656+00:00 stderr F E0813 20:01:29.174684 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc": context canceled 2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174870 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 
2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174922 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174962 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:30.877494640+00:00 stderr F W0813 20:01:30.875423 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000755000175000017500000000000015073043233033130 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000755000175000017500000000000015073043233033130 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000644000175000017500000000000015073043233033120 0ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000644000175000017500000000000015073043233033120 0ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000755000175000017500000000000015073043233033130 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000644000175000017500000005642315073043233033144 0ustar zuulzuul2025-08-13T20:07:24.772404583+00:00 stderr F W0813 20:07:24.771967 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T20:07:24.777288853+00:00 stderr F I0813 20:07:24.776657 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755115644 cert, and key in /tmp/serving-cert-2464900357/serving-signer.crt, /tmp/serving-cert-2464900357/serving-signer.key 2025-08-13T20:07:25.331597096+00:00 stderr F I0813 20:07:25.325173 1 observer_polling.go:159] Starting file observer 2025-08-13T20:07:25.420711131+00:00 stderr F I0813 20:07:25.420175 1 
builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-08-13T20:07:25.435699481+00:00 stderr F I0813 20:07:25.435631 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" 2025-08-13T20:07:26.555451245+00:00 stderr F I0813 20:07:26.516921 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569648 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569692 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569711 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569717 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:07:26.603546134+00:00 stderr F I0813 20:07:26.602166 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:07:26.603546134+00:00 stderr F I0813 20:07:26.602519 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:07:26.603546134+00:00 stderr F W0813 20:07:26.602556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:07:26.603546134+00:00 stderr F W0813 20:07:26.602565 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622290 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622370 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622410 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622447 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.623139 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622993 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.623361 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:07:26.626915804+00:00 stderr F I0813 20:07:26.626743 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-08-13T20:07:26.631294090+00:00 stderr F I0813 20:07:26.631213 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.631294090+00:00 stderr F I0813 20:07:26.631217 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.633355789+00:00 stderr F I0813 20:07:26.631563 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.646267679+00:00 stderr F I0813 20:07:26.646177 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.646104804 +0000 UTC))" 2025-08-13T20:07:26.646669191+00:00 stderr F I0813 20:07:26.646580 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.646549117 +0000 UTC))" 2025-08-13T20:07:26.646669191+00:00 stderr F I0813 20:07:26.646640 1 secure_serving.go:213] Serving securely on [::]:17698 2025-08-13T20:07:26.646684411+00:00 stderr F I0813 20:07:26.646675 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:07:26.646754703+00:00 stderr F I0813 20:07:26.646702 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726158 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726391 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726754 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:26.726710525 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727133 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727492 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:26.726770897 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727561 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.727525769 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 
20:07:26.727583 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.72756897 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727602 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727589171 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727622 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727608371 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727641 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727627762 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727660 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727646862 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727682 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:26.727668973 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727700 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:26.727688594 +0000 UTC))" 2025-08-13T20:07:26.733341906+00:00 stderr F I0813 20:07:26.733256 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.733224312 +0000 UTC))" 2025-08-13T20:07:26.733905142+00:00 stderr F I0813 20:07:26.733831 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.733764008 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734252 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:26.734215741 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734304 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:26.734285943 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734325 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.734311043 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734345 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.734332014 +0000 UTC))" 2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734392 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734378285 +0000 UTC))" 2025-08-13T20:07:26.734438267+00:00 stderr F I0813 20:07:26.734411 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734398416 +0000 UTC))" 2025-08-13T20:07:26.734448417+00:00 stderr F I0813 20:07:26.734439 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734416206 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734459 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734445447 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734505 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:26.734487998 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734535 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:26.734523269 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734570 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.7345425 +0000 UTC))" 2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734901 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.734860739 +0000 UTC))" 2025-08-13T20:07:26.735531588+00:00 stderr F I0813 20:07:26.735199 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.735180778 +0000 UTC))" 
2025-08-13T20:07:27.249549866+00:00 stderr F I0813 20:07:27.249414 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:27.328082557+00:00 stderr F I0813 20:07:27.327932 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-08-13T20:07:27.328258672+00:00 stderr F I0813 20:07:27.328237 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-08-13T20:07:27.328386936+00:00 stderr F I0813 20:07:27.328372 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-08-13T20:07:27.329224680+00:00 stderr F I0813 20:07:27.329204 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-08-13T20:07:27.329332243+00:00 stderr F I0813 20:07:27.329318 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-08-13T20:07:27.340871004+00:00 stderr F I0813 20:07:27.330220 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-08-13T20:07:27.349559893+00:00 stderr F I0813 20:07:27.349500 1 base_controller.go:73] Caches are synced for check-endpoints 2025-08-13T20:07:27.349665676+00:00 stderr F I0813 20:07:27.349644 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2025-08-13T20:07:27.349838191+00:00 stderr F I0813 20:07:27.330947 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-08-13T20:07:27.349909243+00:00 stderr F I0813 20:07:27.331010 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-08-13T20:07:27.349959345+00:00 stderr F I0813 20:07:27.349941 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-08-13T20:07:27.350053037+00:00 stderr F I0813 20:07:27.348318 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:27.350318915+00:00 stderr F I0813 20:07:27.349376 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.535505534+00:00 stderr F I0813 20:09:38.533238 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.910861766+00:00 stderr F I0813 20:09:38.910681 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.325177351+00:00 stderr F I0813 20:09:48.324483 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.727563139+00:00 stderr F I0813 20:09:49.726230 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.459037956+00:00 stderr F I0813 20:09:55.458404 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:58.082039310+00:00 stderr F I0813 20:09:58.081099 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.316616768+00:00 stderr F I0813 20:42:36.315144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.319874512+00:00 stderr F I0813 20:42:36.319513 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.320954413+00:00 stderr F I0813 20:42:36.320719 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.321202490+00:00 stderr F I0813 20:42:36.321099 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.323320731+00:00 stderr F I0813 20:42:36.322202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.323320731+00:00 stderr F I0813 20:42:36.322520 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.888191757+00:00 stderr F I0813 20:42:37.887497 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:37.888191757+00:00 stderr F I0813 20:42:37.888159 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:37.888314061+00:00 stderr F I0813 20:42:37.888189 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:37.888514306+00:00 stderr F I0813 20:42:37.888377 1 base_controller.go:172] Shutting down check-endpoints ... 2025-08-13T20:42:37.888753863+00:00 stderr F I0813 20:42:37.888529 1 base_controller.go:172] Shutting down CheckEndpointsStop ... ././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_ap0000644000175000017500000006721115073043233033141 0ustar zuulzuul2025-10-13T00:15:04.609616408+00:00 stderr F W1013 00:15:04.603419 1 cmd.go:245] Using insecure, self-signed certificates 2025-10-13T00:15:04.609616408+00:00 stderr F I1013 00:15:04.603958 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760314504 cert, and key in /tmp/serving-cert-1852992933/serving-signer.crt, /tmp/serving-cert-1852992933/serving-signer.key 2025-10-13T00:15:05.213152051+00:00 stderr F I1013 00:15:05.213086 1 observer_polling.go:159] Starting file observer 2025-10-13T00:15:05.263825420+00:00 stderr F I1013 00:15:05.263773 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-10-13T00:15:05.265197591+00:00 stderr F I1013 00:15:05.265165 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1852992933/tls.crt::/tmp/serving-cert-1852992933/tls.key" 2025-10-13T00:15:05.574495558+00:00 stderr F I1013 00:15:05.574450 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:05.576361104+00:00 stderr F I1013 00:15:05.576126 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-10-13T00:15:05.576420536+00:00 stderr F I1013 00:15:05.576406 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-10-13T00:15:05.576461827+00:00 stderr F I1013 00:15:05.576450 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-10-13T00:15:05.576491058+00:00 stderr F I1013 00:15:05.576480 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-10-13T00:15:05.580928801+00:00 stderr F I1013 00:15:05.580867 1 
secure_serving.go:57] Forcing use of http/1.1 only 2025-10-13T00:15:05.580997253+00:00 stderr F W1013 00:15:05.580984 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:05.581025744+00:00 stderr F W1013 00:15:05.581016 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-10-13T00:15:05.581264531+00:00 stderr F I1013 00:15:05.581247 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:05.587396344+00:00 stderr F I1013 00:15:05.587310 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:05.587476397+00:00 stderr F I1013 00:15:05.587461 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:05.587551799+00:00 stderr F I1013 00:15:05.587507 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:05.587551799+00:00 stderr F I1013 00:15:05.587541 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:05.590802896+00:00 stderr F I1013 00:15:05.590725 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:05.590802896+00:00 stderr F I1013 00:15:05.590764 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:05.592138537+00:00 stderr F I1013 00:15:05.592095 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1852992933/tls.crt::/tmp/serving-cert-1852992933/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314504\" (2025-10-13 00:15:03 +0000 UTC to 2025-11-12 00:15:04 +0000 UTC (now=2025-10-13 00:15:05.592043994 +0000 UTC))" 2025-10-13T00:15:05.592592610+00:00 stderr F I1013 00:15:05.592465 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314505\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314505\" (2025-10-12 23:15:05 +0000 UTC to 2026-10-12 23:15:05 +0000 UTC (now=2025-10-13 00:15:05.592442056 +0000 UTC))" 2025-10-13T00:15:05.592592610+00:00 stderr F I1013 00:15:05.592493 1 secure_serving.go:213] Serving securely on [::]:17698 2025-10-13T00:15:05.592592610+00:00 stderr F I1013 00:15:05.592511 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:05.592592610+00:00 stderr F I1013 00:15:05.592526 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1852992933/tls.crt::/tmp/serving-cert-1852992933/tls.key" 2025-10-13T00:15:05.592655902+00:00 stderr F I1013 00:15:05.592634 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:05.593348243+00:00 stderr F I1013 00:15:05.593303 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-10-13T00:15:05.593809887+00:00 stderr F I1013 00:15:05.593774 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-10-13T00:15:05.596564859+00:00 stderr F I1013 00:15:05.596512 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:15:05.601306511+00:00 stderr F I1013 00:15:05.601231 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:15:05.688593426+00:00 stderr F I1013 00:15:05.687597 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:05.688593426+00:00 stderr F I1013 00:15:05.688003 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688643 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:05.688603697 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688689 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:05.688669859 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688713 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:05.68869475 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688734 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:05.6887181 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688755 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.688739851 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688777 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] 
issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.688761622 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688806 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.688782742 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688837 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.688811873 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688863 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:05.688843964 +0000 UTC))" 2025-10-13T00:15:05.689210525+00:00 stderr F I1013 00:15:05.688888 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:05.688872035 +0000 UTC))" 2025-10-13T00:15:05.690263657+00:00 stderr F I1013 00:15:05.689290 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1852992933/tls.crt::/tmp/serving-cert-1852992933/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314504\" (2025-10-13 00:15:03 +0000 UTC to 2025-11-12 00:15:04 +0000 UTC (now=2025-10-13 00:15:05.689267837 +0000 UTC))" 2025-10-13T00:15:05.690263657+00:00 stderr F I1013 00:15:05.689896 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314505\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314505\" (2025-10-12 23:15:05 +0000 UTC to 2026-10-12 23:15:05 +0000 UTC (now=2025-10-13 00:15:05.689846774 +0000 UTC))" 2025-10-13T00:15:05.693771242+00:00 stderr F I1013 00:15:05.693398 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:05.693791932+00:00 stderr F I1013 00:15:05.693780 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 
+0000 UTC (now=2025-10-13 00:15:05.693742881 +0000 UTC))" 2025-10-13T00:15:05.693836274+00:00 stderr F I1013 00:15:05.693808 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:05.693792592 +0000 UTC))" 2025-10-13T00:15:05.693846234+00:00 stderr F I1013 00:15:05.693835 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:05.693818873 +0000 UTC))" 2025-10-13T00:15:05.693878035+00:00 stderr F I1013 00:15:05.693855 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:05.693840644 +0000 UTC))" 2025-10-13T00:15:05.693887685+00:00 stderr F I1013 00:15:05.693878 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.693863854 +0000 UTC))" 2025-10-13T00:15:05.693935587+00:00 stderr F I1013 00:15:05.693914 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.693883575 +0000 UTC))" 2025-10-13T00:15:05.693966937+00:00 stderr F I1013 00:15:05.693946 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.693924046 +0000 UTC))" 2025-10-13T00:15:05.693994208+00:00 stderr F I1013 00:15:05.693974 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 
00:15:05.693956817 +0000 UTC))" 2025-10-13T00:15:05.694032449+00:00 stderr F I1013 00:15:05.693999 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:05.693983968 +0000 UTC))" 2025-10-13T00:15:05.694048940+00:00 stderr F I1013 00:15:05.694038 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:05.694022329 +0000 UTC))" 2025-10-13T00:15:05.694085601+00:00 stderr F I1013 00:15:05.694065 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:05.69404888 +0000 UTC))" 2025-10-13T00:15:05.696558515+00:00 stderr F I1013 00:15:05.696522 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1852992933/tls.crt::/tmp/serving-cert-1852992933/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314504\" (2025-10-13 00:15:03 +0000 UTC to 2025-11-12 00:15:04 +0000 UTC (now=2025-10-13 00:15:05.696485683 +0000 UTC))" 2025-10-13T00:15:05.696948907+00:00 stderr F I1013 00:15:05.696920 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314505\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314505\" (2025-10-12 23:15:05 +0000 UTC to 2026-10-12 23:15:05 +0000 UTC (now=2025-10-13 00:15:05.696899865 +0000 UTC))" 2025-10-13T00:15:05.907480025+00:00 stderr F I1013 00:15:05.905982 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:15:05.994497482+00:00 stderr F I1013 00:15:05.994434 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-10-13T00:15:05.994497482+00:00 stderr F I1013 00:15:05.994471 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-10-13T00:15:05.994556364+00:00 stderr F I1013 00:15:05.994537 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-10-13T00:15:05.994556364+00:00 stderr F I1013 00:15:05.994546 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-10-13T00:15:05.994556364+00:00 stderr F I1013 00:15:05.994549 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-10-13T00:15:05.994588195+00:00 stderr F I1013 00:15:05.994572 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 
2025-10-13T00:15:05.995072719+00:00 stderr F I1013 00:15:05.995048 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-10-13T00:15:05.995072719+00:00 stderr F I1013 00:15:05.995063 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-10-13T00:15:05.995072719+00:00 stderr F I1013 00:15:05.995069 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-10-13T00:15:06.001804261+00:00 stderr F I1013 00:15:06.001742 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:15:06.002880953+00:00 stderr F I1013 00:15:06.002850 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:15:06.095624122+00:00 stderr F I1013 00:15:06.095559 1 base_controller.go:73] Caches are synced for check-endpoints 2025-10-13T00:15:06.095624122+00:00 stderr F I1013 00:15:06.095584 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.938252 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.938209329 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.948969 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.948899197 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949011 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.948985119 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949032 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.9490184 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949055 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 
00:21:11.949041321 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949075 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.949061691 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949096 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.949081452 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949121 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.949102492 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949141 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.949127483 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949166 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.949153714 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949192 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.949174754 +0000 UTC))" 2025-10-13T00:21:11.949477302+00:00 stderr F I1013 00:21:11.949212 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.949198565 +0000 UTC))" 
2025-10-13T00:21:11.950440998+00:00 stderr F I1013 00:21:11.949594 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1852992933/tls.crt::/tmp/serving-cert-1852992933/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1760314504\" (2025-10-13 00:15:03 +0000 UTC to 2025-11-12 00:15:04 +0000 UTC (now=2025-10-13 00:21:11.949554004 +0000 UTC))" 2025-10-13T00:21:11.950440998+00:00 stderr F I1013 00:21:11.949881 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314505\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314505\" (2025-10-12 23:15:05 +0000 UTC to 2026-10-12 23:15:05 +0000 UTC (now=2025-10-13 00:21:11.949864653 +0000 UTC))" 2025-10-13T00:23:22.396906908+00:00 stderr F I1013 00:23:22.396470 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-10-13T00:23:47.823864037+00:00 stderr F I1013 00:23:47.823344 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log
2025-10-13T00:15:03.290346512+00:00 stdout F Copying system trust bundle 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.483442 1 feature_gate.go:239] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2025-10-13T00:15:04.485297083+00:00 stderr F I1013 00:15:04.484020 1 feature_gate.go:249] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484047 1 config.go:121] Ignoring unknown FeatureGate "AzureWorkloadIdentity" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484051 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallNutanix" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484055 1 config.go:121] Ignoring unknown FeatureGate "MetricsCollectionProfiles" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484058 1 config.go:121] Ignoring unknown FeatureGate "NewOLM" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484062 1 config.go:121] Ignoring unknown FeatureGate "OnClusterBuild" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484065 1 config.go:121] Ignoring unknown FeatureGate "BuildCSIVolumes" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484068 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAzure" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484072 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallOpenStack" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484075 1 config.go:121] Ignoring unknown FeatureGate "GatewayAPI" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484078 1 config.go:121] Ignoring unknown FeatureGate "NodeDisruptionPolicy" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484081 1 config.go:121] Ignoring unknown FeatureGate "PrivateHostedZoneAWS" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484085 1 config.go:121] Ignoring unknown FeatureGate "NetworkDiagnosticsConfig" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484088 1 config.go:121] Ignoring unknown FeatureGate "ChunkSizeMiB" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484092 1 config.go:121] Ignoring unknown FeatureGate "MixedCPUsAllocation" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484095 1 config.go:121] Ignoring unknown FeatureGate "SigstoreImageVerification" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484098 1 config.go:121] Ignoring unknown FeatureGate "GCPClusterHostedDNS" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484101 1 config.go:121] Ignoring unknown FeatureGate "AdminNetworkPolicy" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484104 1 config.go:121] Ignoring unknown FeatureGate "BareMetalLoadBalancer" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484107 1 config.go:121] Ignoring unknown FeatureGate "NetworkLiveMigration" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484111 1 config.go:121] Ignoring unknown FeatureGate "PinnedImages" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484114 1 config.go:121] Ignoring unknown FeatureGate "SignatureStores" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484117 1 config.go:121] Ignoring unknown FeatureGate "VSphereControlPlaneMachineSet" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 
00:15:04.484120 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstall" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484124 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAWS" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484127 1 config.go:121] Ignoring unknown FeatureGate "Example" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484138 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProvider" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484141 1 config.go:121] Ignoring unknown FeatureGate "HardwareSpeed" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484145 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfigAPI" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484149 1 config.go:121] Ignoring unknown FeatureGate "ManagedBootImages" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484152 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallGCP" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484155 1 config.go:121] Ignoring unknown FeatureGate "ImagePolicy" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484159 1 config.go:121] Ignoring unknown FeatureGate "OpenShiftPodSecurityAdmission" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484162 1 config.go:121] Ignoring unknown FeatureGate "VSphereStaticIPs" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484165 1 config.go:121] Ignoring unknown FeatureGate "CSIDriverSharedResource" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484169 1 config.go:121] Ignoring unknown FeatureGate "ExternalOIDC" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484172 1 config.go:121] Ignoring unknown FeatureGate "ExternalRouteCertificate" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484175 1 config.go:121] Ignoring unknown FeatureGate "GCPLabelsTags" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484179 1 config.go:121] Ignoring unknown FeatureGate "InsightsOnDemandDataGather" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484182 1 config.go:121] Ignoring unknown FeatureGate "MetricsServer" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484186 1 config.go:121] Ignoring unknown FeatureGate "AutomatedEtcdBackup" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484189 1 config.go:121] Ignoring unknown FeatureGate "UpgradeStatus" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484192 1 config.go:121] Ignoring unknown FeatureGate "VolumeGroupSnapshot" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484195 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallVSphere" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484198 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderGCP" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484201 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIOperatorDisableMachineHealthCheckController" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484206 1 config.go:121] Ignoring unknown FeatureGate "PlatformOperators" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484209 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallIBMCloud" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484212 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallPowerVS" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484215 1 
config.go:121] Ignoring unknown FeatureGate "MachineConfigNodes" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484219 1 config.go:121] Ignoring unknown FeatureGate "AlibabaPlatform" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484222 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderExternal" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484225 1 config.go:121] Ignoring unknown FeatureGate "DNSNameResolver" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484228 1 config.go:121] Ignoring unknown FeatureGate "EtcdBackendQuota" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484231 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfig" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484234 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIProviderOpenStack" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484238 1 config.go:121] Ignoring unknown FeatureGate "InstallAlternateInfrastructureAWS" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484241 1 config.go:121] Ignoring unknown FeatureGate "VSphereDriverConfiguration" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484245 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderAzure" 2025-10-13T00:15:04.485297083+00:00 stderr F W1013 00:15:04.484248 1 config.go:121] Ignoring unknown FeatureGate "VSphereMultiVCenters" 2025-10-13T00:15:04.490061486+00:00 stderr F I1013 00:15:04.490010 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:04.962940785+00:00 stderr F I1013 00:15:04.962118 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-10-13T00:15:05.006012245+00:00 stderr F I1013 00:15:05.005892 1 audit.go:340] Using audit backend: ignoreErrors 2025-10-13T00:15:05.089805316+00:00 stderr F I1013 00:15:05.089710 1 plugins.go:83] "Registered admission plugin" plugin="NamespaceLifecycle" 2025-10-13T00:15:05.089805316+00:00 stderr F I1013 00:15:05.089744 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionWebhook" 2025-10-13T00:15:05.089805316+00:00 stderr F I1013 00:15:05.089752 1 plugins.go:83] "Registered admission plugin" plugin="MutatingAdmissionWebhook" 2025-10-13T00:15:05.089805316+00:00 stderr F I1013 00:15:05.089758 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionPolicy" 2025-10-13T00:15:05.096140345+00:00 stderr F I1013 00:15:05.096070 1 admission.go:48] Admission plugin "project.openshift.io/ProjectRequestLimit" is not configured so it will be disabled. 2025-10-13T00:15:05.143713961+00:00 stderr F I1013 00:15:05.143637 1 plugins.go:157] Loaded 5 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,build.openshift.io/BuildConfigSecretInjector,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,MutatingAdmissionWebhook. 2025-10-13T00:15:05.143713961+00:00 stderr F I1013 00:15:05.143661 1 plugins.go:160] Loaded 9 validating admission controller(s) successfully in the following order: OwnerReferencesPermissionEnforcement,build.openshift.io/BuildConfigSecretInjector,build.openshift.io/BuildByStrategy,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,route.openshift.io/RequiredRouteAnnotations,ValidatingAdmissionWebhook,ResourceQuota. 
2025-10-13T00:15:05.150544125+00:00 stderr F I1013 00:15:05.150469 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.150544125+00:00 stderr F I1013 00:15:05.150500 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.151030050+00:00 stderr F I1013 00:15:05.150995 1 maxinflight.go:116] "Set denominator for readonly requests" limit=3000 2025-10-13T00:15:05.151030050+00:00 stderr F I1013 00:15:05.151014 1 maxinflight.go:120] "Set denominator for mutating requests" limit=1500 2025-10-13T00:15:05.165375880+00:00 stderr F I1013 00:15:05.162051 1 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.165375880+00:00 stderr F I1013 00:15:05.162183 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.165375880+00:00 stderr F I1013 00:15:05.162194 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.178182004+00:00 stderr F I1013 00:15:05.178111 1 store.go:1579] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//routes" 2025-10-13T00:15:05.180172153+00:00 stderr F I1013 00:15:05.179252 1 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.180172153+00:00 stderr F I1013 00:15:05.179365 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.180172153+00:00 stderr F I1013 00:15:05.179375 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.188123031+00:00 stderr F I1013 00:15:05.187993 1 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.openshift.io" path="//rangeallocations" 2025-10-13T00:15:05.195075950+00:00 stderr F I1013 00:15:05.192868 1 cacher.go:451] cacher (rangeallocations.security.openshift.io): initialized 2025-10-13T00:15:05.195075950+00:00 stderr F I1013 00:15:05.192888 1 cacher.go:451] cacher (routes.route.openshift.io): initialized 2025-10-13T00:15:05.195075950+00:00 stderr F I1013 00:15:05.192925 1 reflector.go:351] Caches populated for *route.Route from storage/cacher.go:/routes 2025-10-13T00:15:05.195075950+00:00 stderr F I1013 00:15:05.192931 1 reflector.go:351] Caches populated for *security.RangeAllocation from storage/cacher.go:/rangeallocations 2025-10-13T00:15:05.212018277+00:00 stderr F I1013 00:15:05.211936 1 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.212090010+00:00 stderr F I1013 00:15:05.212071 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.212090010+00:00 stderr F I1013 00:15:05.212082 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.229287725+00:00 stderr F I1013 00:15:05.229228 1 store.go:1579] "Monitoring resource count at path" resource="templates.template.openshift.io" path="//templates" 2025-10-13T00:15:05.240344116+00:00 stderr F I1013 00:15:05.240260 1 cacher.go:451] cacher (templates.template.openshift.io): initialized 2025-10-13T00:15:05.240448109+00:00 stderr F I1013 00:15:05.240435 1 reflector.go:351] Caches populated for *template.Template from storage/cacher.go:/templates 2025-10-13T00:15:05.240754308+00:00 stderr F I1013 00:15:05.240727 1 store.go:1579] "Monitoring resource count at path" resource="templateinstances.template.openshift.io" path="//templateinstances" 2025-10-13T00:15:05.242648685+00:00 stderr F I1013 00:15:05.242611 1 cacher.go:451] cacher (templateinstances.template.openshift.io): initialized 
2025-10-13T00:15:05.242648685+00:00 stderr F I1013 00:15:05.242637 1 reflector.go:351] Caches populated for *template.TemplateInstance from storage/cacher.go:/templateinstances 2025-10-13T00:15:05.255771068+00:00 stderr F I1013 00:15:05.255709 1 store.go:1579] "Monitoring resource count at path" resource="brokertemplateinstances.template.openshift.io" path="//brokertemplateinstances" 2025-10-13T00:15:05.257610633+00:00 stderr F I1013 00:15:05.256913 1 cacher.go:451] cacher (brokertemplateinstances.template.openshift.io): initialized 2025-10-13T00:15:05.257610633+00:00 stderr F I1013 00:15:05.257041 1 reflector.go:351] Caches populated for *template.BrokerTemplateInstance from storage/cacher.go:/brokertemplateinstances 2025-10-13T00:15:05.258451199+00:00 stderr F I1013 00:15:05.258429 1 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.258551292+00:00 stderr F I1013 00:15:05.258519 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.258551292+00:00 stderr F I1013 00:15:05.258535 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.326317392+00:00 stderr F I1013 00:15:05.325593 1 store.go:1579] "Monitoring resource count at path" resource="deploymentconfigs.apps.openshift.io" path="//deploymentconfigs" 2025-10-13T00:15:05.327377334+00:00 stderr F I1013 00:15:05.327346 1 cacher.go:451] cacher (deploymentconfigs.apps.openshift.io): initialized 2025-10-13T00:15:05.327393334+00:00 stderr F I1013 00:15:05.327375 1 reflector.go:351] Caches populated for *apps.DeploymentConfig from storage/cacher.go:/deploymentconfigs 2025-10-13T00:15:05.329204679+00:00 stderr F I1013 00:15:05.329172 1 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.329272911+00:00 stderr F I1013 00:15:05.329253 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.329272911+00:00 stderr F I1013 00:15:05.329265 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.341150236+00:00 stderr F I1013 00:15:05.341087 1 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.341184957+00:00 stderr F I1013 00:15:05.341160 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.341184957+00:00 stderr F I1013 00:15:05.341168 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.449173073+00:00 stderr F I1013 00:15:05.449047 1 store.go:1579] "Monitoring resource count at path" resource="builds.build.openshift.io" path="//builds" 2025-10-13T00:15:05.450370609+00:00 stderr F I1013 00:15:05.450341 1 cacher.go:451] cacher (builds.build.openshift.io): initialized 2025-10-13T00:15:05.450383089+00:00 stderr F I1013 00:15:05.450370 1 reflector.go:351] Caches populated for *build.Build from storage/cacher.go:/builds 2025-10-13T00:15:05.456518533+00:00 stderr F I1013 00:15:05.456455 1 store.go:1579] "Monitoring resource count at path" resource="buildconfigs.build.openshift.io" path="//buildconfigs" 2025-10-13T00:15:05.460665937+00:00 stderr F I1013 00:15:05.460585 1 cacher.go:451] cacher (buildconfigs.build.openshift.io): initialized 2025-10-13T00:15:05.460665937+00:00 stderr F I1013 00:15:05.460619 1 reflector.go:351] Caches populated for *build.BuildConfig from storage/cacher.go:/buildconfigs 2025-10-13T00:15:05.464306256+00:00 stderr F I1013 00:15:05.464261 1 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager 
2025-10-13T00:15:05.464410780+00:00 stderr F I1013 00:15:05.464380 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.464410780+00:00 stderr F I1013 00:15:05.464390 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.481823601+00:00 stderr F I1013 00:15:05.481718 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca, incoming err: 2025-10-13T00:15:05.481823601+00:00 stderr F I1013 00:15:05.481744 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca 2025-10-13T00:15:05.481823601+00:00 stderr F I1013 00:15:05.481780 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123, incoming err: 2025-10-13T00:15:05.481823601+00:00 stderr F I1013 00:15:05.481786 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123 2025-10-13T00:15:05.481823601+00:00 stderr F I1013 00:15:05.481797 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-10-13T00:15:05.481871853+00:00 stderr F I1013 00:15:05.481857 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-10-13T00:15:05.481963485+00:00 stderr F I1013 00:15:05.481905 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-10-13T00:15:05.481963485+00:00 stderr F I1013 00:15:05.481958 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..data, incoming err: 2025-10-13T00:15:05.481984326+00:00 stderr F I1013 00:15:05.481964 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..data 2025-10-13T00:15:05.481984326+00:00 stderr F I1013 00:15:05.481977 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-10-13T00:15:05.481993506+00:00 stderr F I1013 00:15:05.481982 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing 2025-10-13T00:15:05.482002227+00:00 stderr F I1013 00:15:05.481995 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-10-13T00:15:05.482010797+00:00 stderr F I1013 00:15:05.482001 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000 2025-10-13T00:15:05.482073609+00:00 stderr F I1013 00:15:05.482036 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-10-13T00:15:05.482073609+00:00 stderr F I1013 00:15:05.482048 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 2025-10-13T00:15:05.495635155+00:00 stderr F I1013 00:15:05.495501 1 store.go:1579] "Monitoring resource count at path" resource="images.image.openshift.io" path="//images" 2025-10-13T00:15:05.508321835+00:00 
stderr F I1013 00:15:05.508260 1 store.go:1579] "Monitoring resource count at path" resource="imagestreams.image.openshift.io" path="//imagestreams" 2025-10-13T00:15:05.512978655+00:00 stderr F I1013 00:15:05.512935 1 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.512978655+00:00 stderr F W1013 00:15:05.512959 1 genericapiserver.go:756] Skipping API image.openshift.io/1.0 because it has no resources. 2025-10-13T00:15:05.512978655+00:00 stderr F W1013 00:15:05.512968 1 genericapiserver.go:756] Skipping API image.openshift.io/pre012 because it has no resources. 2025-10-13T00:15:05.513182671+00:00 stderr F I1013 00:15:05.513158 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.513182671+00:00 stderr F I1013 00:15:05.513174 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.517221002+00:00 stderr F I1013 00:15:05.517108 1 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager 2025-10-13T00:15:05.517221002+00:00 stderr F I1013 00:15:05.517181 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-10-13T00:15:05.517221002+00:00 stderr F I1013 00:15:05.517190 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-10-13T00:15:05.537993794+00:00 stderr F I1013 00:15:05.537894 1 cacher.go:451] cacher (imagestreams.image.openshift.io): initialized 2025-10-13T00:15:05.537993794+00:00 stderr F I1013 00:15:05.537954 1 reflector.go:351] Caches populated for *image.ImageStream from storage/cacher.go:/imagestreams 2025-10-13T00:15:05.578691754+00:00 stderr F I1013 00:15:05.578422 1 cacher.go:451] cacher (images.image.openshift.io): initialized 2025-10-13T00:15:05.578691754+00:00 stderr F I1013 00:15:05.578459 1 reflector.go:351] Caches populated for *image.Image from storage/cacher.go:/images 2025-10-13T00:15:05.986827402+00:00 stderr F I1013 00:15:05.986753 1 server.go:50] Starting master on 0.0.0.0:8443 (v0.0.0-master+$Format:%H$) 2025-10-13T00:15:05.986875464+00:00 stderr F I1013 00:15:05.986838 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-10-13T00:15:05.989993627+00:00 stderr F I1013 00:15:05.989702 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-10-13T00:15:05.989993627+00:00 stderr F I1013 00:15:05.989719 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-10-13T00:15:05.989993627+00:00 stderr F I1013 00:15:05.989726 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-10-13T00:15:05.989993627+00:00 stderr F I1013 00:15:05.989743 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:05.989993627+00:00 stderr F I1013 00:15:05.989851 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-10-13T00:15:05.989993627+00:00 stderr F I1013 00:15:05.989860 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:05.990612146+00:00 stderr F I1013 00:15:05.990572 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 
certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:15:05.990548054 +0000 UTC))" 2025-10-13T00:15:05.991003307+00:00 stderr F I1013 00:15:05.990968 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314504\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314504\" (2025-10-12 23:15:04 +0000 UTC to 2026-10-12 23:15:04 +0000 UTC (now=2025-10-13 00:15:05.990894174 +0000 UTC))" 2025-10-13T00:15:05.991003307+00:00 stderr F I1013 00:15:05.990992 1 secure_serving.go:213] Serving securely on [::]:8443 2025-10-13T00:15:05.991051779+00:00 stderr F I1013 00:15:05.991020 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-10-13T00:15:05.991051779+00:00 stderr F I1013 00:15:05.991041 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-10-13T00:15:05.991183493+00:00 stderr F I1013 00:15:05.991148 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-10-13T00:15:05.992295956+00:00 stderr F I1013 00:15:05.992260 1 openshift_apiserver.go:593] Using default project node label selector: 2025-10-13T00:15:05.992380958+00:00 stderr F I1013 00:15:05.992362 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-10-13T00:15:05.996380768+00:00 stderr F I1013 00:15:05.993414 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.996380768+00:00 stderr F I1013 00:15:05.993421 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.996380768+00:00 stderr F I1013 00:15:05.993916 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.996380768+00:00 stderr F I1013 00:15:05.995317 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.996380768+00:00 stderr F I1013 00:15:05.995704 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.997037248+00:00 stderr F I1013 00:15:05.997009 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.999841522+00:00 stderr F I1013 00:15:05.999353 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:05.999993307+00:00 stderr F I1013 00:15:05.999946 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.005492341+00:00 stderr F I1013 00:15:06.003822 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.015801710+00:00 stderr F I1013 00:15:06.015609 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.018654526+00:00 
stderr F I1013 00:15:06.018589 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.023362407+00:00 stderr F I1013 00:15:06.023208 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.027069488+00:00 stderr F I1013 00:15:06.027018 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.027721897+00:00 stderr F I1013 00:15:06.027643 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.027972225+00:00 stderr F I1013 00:15:06.027912 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.031396458+00:00 stderr F I1013 00:15:06.028342 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.031396458+00:00 stderr F I1013 00:15:06.029369 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.036046687+00:00 stderr F I1013 00:15:06.036000 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.040159560+00:00 stderr F I1013 00:15:06.040119 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.058899302+00:00 stderr F I1013 00:15:06.058848 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.065016195+00:00 stderr F I1013 00:15:06.064968 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.069043415+00:00 stderr F I1013 00:15:06.068998 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.076902451+00:00 stderr F I1013 00:15:06.076835 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:15:06.090787217+00:00 stderr F I1013 00:15:06.090725 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-10-13T00:15:06.091243711+00:00 stderr F I1013 00:15:06.091207 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:06.091176789 +0000 UTC))" 2025-10-13T00:15:06.091243711+00:00 stderr F I1013 00:15:06.091234 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:06.09122213 +0000 UTC))" 2025-10-13T00:15:06.091263201+00:00 stderr F I1013 00:15:06.091252 1 
tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:06.091238621 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091268 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:06.091256851 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091287 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.091276652 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091302 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.091291532 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091316 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.091305863 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091343 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.091320373 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091358 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:06.091347524 +0000 UTC))" 2025-10-13T00:15:06.091484668+00:00 stderr F I1013 00:15:06.091372 
1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:06.091362574 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.091657 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:15:06.091644323 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.091899 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314504\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314504\" (2025-10-12 23:15:04 +0000 UTC to 2026-10-12 23:15:04 +0000 UTC (now=2025-10-13 00:15:06.09188723 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.091918 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.091931 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092139 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:15:06.092127097 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092156 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:15:06.092145928 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092170 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:06.092160208 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092185 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 
13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:15:06.092174749 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092201 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.092189849 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092217 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.092205909 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092231 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.09222125 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092247 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:15:06.09223686 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092261 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:15:06.092251081 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092277 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:15:06.092266041 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092292 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 
00:15:06.092281802 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092561 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:15:06.0925497 +0000 UTC))" 2025-10-13T00:15:06.094266861+00:00 stderr F I1013 00:15:06.092819 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314504\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314504\" (2025-10-12 23:15:04 +0000 UTC to 2026-10-12 23:15:04 +0000 UTC (now=2025-10-13 00:15:06.092808137 +0000 UTC))" 2025-10-13T00:15:06.132653771+00:00 stderr F I1013 00:15:06.132592 1 reflector.go:351] Caches populated for *etcd.ImageLayers from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945701 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-10-13 00:21:11.94566026 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945750 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-10-13 00:21:11.945727502 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945782 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.945758472 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945806 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-10-13 00:21:11.945788883 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945829 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 
2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.945811974 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945858 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.945836654 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945886 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.945867135 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945914 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-10-13 00:21:11.945892746 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945937 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.945921417 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945961 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-10-13 00:21:11.945947947 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.945985 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1760314864\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-10-13 00:21:03 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-10-13 00:21:11.945967888 +0000 UTC))" 2025-10-13T00:21:11.946243325+00:00 stderr F I1013 00:21:11.946009 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-10-13 00:21:11.945991369 +0000 UTC))" 2025-10-13T00:21:11.948183388+00:00 stderr F I1013 00:21:11.946718 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-10-13 00:21:11.946695168 +0000 UTC))" 2025-10-13T00:21:11.948183388+00:00 stderr F I1013 00:21:11.947107 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1760314504\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1760314504\" (2025-10-12 23:15:04 +0000 UTC to 2026-10-12 23:15:04 +0000 UTC (now=2025-10-13 00:21:11.947088508 +0000 UTC))" 2025-10-13T00:21:23.734729719+00:00 stderr F E1013 00:21:23.734562 1 strategy.go:60] unable to parse manifest for "sha256:5dfcc5b000a1fab4be66bbd43e4db44b61176e2bcba9c24f6fe887dea9b7fd49": unexpected end of JSON input 2025-10-13T00:21:23.736550098+00:00 stderr F E1013 00:21:23.736481 1 strategy.go:60] unable to parse manifest for "sha256:7f501bba8a09957a0ac28ba0c20768f087cf0b16d92139b3f8f0758e9f60691f": unexpected end of JSON input 2025-10-13T00:21:23.739262441+00:00 stderr F E1013 00:21:23.739207 1 strategy.go:60] unable to parse manifest for "sha256:94a97eace54b04cf5619711b49e62db7dfa6b9d7557cc2f4c1d59c02aae37126": unexpected end of JSON input 2025-10-13T00:21:23.745067687+00:00 stderr F I1013 00:21:23.744995 1 trace.go:236] Trace[1746493339]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:20585d02-ffd5-45cb-9835-f9d1990d32f7,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:22.260) (total time: 1484ms): 2025-10-13T00:21:23.745067687+00:00 stderr F Trace[1746493339]: ---"Write to database call succeeded" len:795 1483ms (00:21:23.743) 2025-10-13T00:21:23.745067687+00:00 stderr F Trace[1746493339]: [1.484837278s] [1.484837278s] END 2025-10-13T00:21:23.818921773+00:00 stderr F E1013 00:21:23.818865 1 strategy.go:60] unable to parse manifest for "sha256:55dc61c31ea50a8f7a45e993a9b3220097974948b5cd1ab3f317e7702e8cb6fc": unexpected end of JSON input 2025-10-13T00:21:24.265143083+00:00 stderr F E1013 00:21:24.265065 1 strategy.go:60] unable to parse manifest for "sha256:4f35566977c35306a8f2102841ceb7fa10a6d9ac47c079131caed5655140f9b2": unexpected end of JSON input 2025-10-13T00:21:24.977983073+00:00 stderr F E1013 00:21:24.977934 1 strategy.go:60] unable to parse manifest for "sha256:70a21b3f93c05843ce9d07f125b1464436caf01680bb733754a2a5df5bc3b11b": unexpected end of JSON input 2025-10-13T00:21:24.979482703+00:00 stderr F E1013 00:21:24.979457 1 strategy.go:60] unable to parse manifest for "sha256:8ef04c895436412065c0f1090db68060d2bb339a400e8653ca3a370211690d1f": unexpected end of JSON input 2025-10-13T00:21:24.983390238+00:00 stderr F E1013 00:21:24.983360 1 strategy.go:60] unable to parse manifest for 
"sha256:53ba1fe1fe0fe8e1365652c516691253c986c1898472ce6d55f505e7e84bdd61": unexpected end of JSON input 2025-10-13T00:21:24.987683954+00:00 stderr F I1013 00:21:24.987352 1 trace.go:236] Trace[1344426169]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a1ab7d7e-3162-4282-a7da-d170872559fe,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:23.760) (total time: 1227ms): 2025-10-13T00:21:24.987683954+00:00 stderr F Trace[1344426169]: ---"Write to database call succeeded" len:739 1226ms (00:21:24.986) 2025-10-13T00:21:24.987683954+00:00 stderr F Trace[1344426169]: [1.227227463s] [1.227227463s] END 2025-10-13T00:21:25.173420249+00:00 stderr F E1013 00:21:25.172457 1 strategy.go:60] unable to parse manifest for "sha256:d186c94f8843f854d77b2b05d10efb0d272f88a4bf4f1d8ebe304428b9396392": unexpected end of JSON input 2025-10-13T00:21:25.174706623+00:00 stderr F E1013 00:21:25.174636 1 strategy.go:60] unable to parse manifest for "sha256:e37aeaeb0159194a9855350e13e399470f39ce340d6381069933742990741fb8": unexpected end of JSON input 2025-10-13T00:21:25.176493351+00:00 stderr F E1013 00:21:25.175657 1 strategy.go:60] unable to parse manifest for "sha256:df8858f0c01ae1657a14234a94f6785cbb2fba7f12c9d0325f427a3f1284481b": unexpected end of JSON input 2025-10-13T00:21:25.176808570+00:00 stderr F E1013 00:21:25.176777 1 strategy.go:60] unable to parse manifest for "sha256:29cdd7dcf8b1189af4de4f53c353a35d449671b6d08df72d18541cd181fb79ae": unexpected end of JSON input 2025-10-13T00:21:25.176987355+00:00 stderr F E1013 00:21:25.176781 1 strategy.go:60] unable to parse manifest for "sha256:f89a54e6d1340be8ddd84a602cb4f1f27c1983417f655941645bf11809d49f18": unexpected end of JSON input 2025-10-13T00:21:25.179523653+00:00 stderr F E1013 00:21:25.177871 1 strategy.go:60] unable to parse manifest for "sha256:03535121119dba1c8a22786ee4b6afa119fa48dde08648e95b2c06c1cd3150b2": unexpected end of JSON input 2025-10-13T00:21:25.182033970+00:00 stderr F E1013 00:21:25.181499 1 strategy.go:60] unable to parse manifest for "sha256:739fac452e78a21a16b66e0451b85590b9e48ec7a1ed3887fbb9ed85cf564275": unexpected end of JSON input 2025-10-13T00:21:25.184020484+00:00 stderr F I1013 00:21:25.183530 1 trace.go:236] Trace[1735701238]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:373493f6-f308-402a-b704-2f8fd9061a2f,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:24.159) (total time: 1023ms): 2025-10-13T00:21:25.184020484+00:00 stderr F Trace[1735701238]: ---"Write to database call succeeded" len:506 1023ms (00:21:25.183) 2025-10-13T00:21:25.184020484+00:00 stderr F Trace[1735701238]: [1.023725931s] [1.023725931s] END 2025-10-13T00:21:25.185249177+00:00 stderr F E1013 00:21:25.184940 1 strategy.go:60] unable to parse manifest for 
"sha256:0eea1d20aaa26041edf26b925fb204d839e5b93122190191893a0299b2e1b589": unexpected end of JSON input 2025-10-13T00:21:25.189042469+00:00 stderr F E1013 00:21:25.188734 1 strategy.go:60] unable to parse manifest for "sha256:3b94ccfa422b8ba0014302a3cfc6916b69f0f5a9dfd757b6704049834d4ff0ae": unexpected end of JSON input 2025-10-13T00:21:25.192056790+00:00 stderr F E1013 00:21:25.192031 1 strategy.go:60] unable to parse manifest for "sha256:46a4e73ddb085d1f36b39903ea13ba307bb958789707e9afde048764b3e3cae2": unexpected end of JSON input 2025-10-13T00:21:25.194998579+00:00 stderr F E1013 00:21:25.194971 1 strategy.go:60] unable to parse manifest for "sha256:bcb0e15cc9d2d3449f0b1acac7b0275035a80e1b3b835391b5464f7bf4553b89": unexpected end of JSON input 2025-10-13T00:21:25.200450656+00:00 stderr F I1013 00:21:25.200426 1 trace.go:236] Trace[713470090]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:21183eac-c470-43a1-bb5a-9617daefdf12,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:22.440) (total time: 2759ms): 2025-10-13T00:21:25.200450656+00:00 stderr F Trace[713470090]: ---"Write to database call succeeded" len:953 2758ms (00:21:25.199) 2025-10-13T00:21:25.200450656+00:00 stderr F Trace[713470090]: [2.759770635s] [2.759770635s] END 2025-10-13T00:21:25.215148511+00:00 stderr F E1013 00:21:25.215112 1 strategy.go:60] unable to parse manifest for "sha256:7bcc365e0ba823ed020ee6e6c3e0c23be5871c8dea3f7f1a65029002c83f9e55": unexpected end of JSON input 2025-10-13T00:21:25.217278138+00:00 stderr F E1013 00:21:25.217243 1 strategy.go:60] unable to parse manifest for "sha256:5fb3543c0d42146f0506c1ea4d09575131da6a2f27885729b7cfce13a0fa90e3": unexpected end of JSON input 2025-10-13T00:21:25.218284735+00:00 stderr F E1013 00:21:25.218248 1 strategy.go:60] unable to parse manifest for "sha256:6a9e81b2eea2f32f2750909b6aa037c2c2e68be3bc9daf3c7a3163c9e1df379f": unexpected end of JSON input 2025-10-13T00:21:25.219629641+00:00 stderr F E1013 00:21:25.219610 1 strategy.go:60] unable to parse manifest for "sha256:1d68b58a73f4cf15fcd886ab39fddf18be923b52b24cb8ec3ab1da2d3e9bd5f6": unexpected end of JSON input 2025-10-13T00:21:25.220950997+00:00 stderr F E1013 00:21:25.220926 1 strategy.go:60] unable to parse manifest for "sha256:00cf28cf9a6c427962f922855a6cc32692c760764ce2ce7411cf605dd510367f": unexpected end of JSON input 2025-10-13T00:21:25.222363975+00:00 stderr F E1013 00:21:25.222345 1 strategy.go:60] unable to parse manifest for "sha256:7de877b0e748cdb47cb702400f3ddaa3c3744a022887e2213c2bb27775ab4b25": unexpected end of JSON input 2025-10-13T00:21:25.223272399+00:00 stderr F E1013 00:21:25.223240 1 strategy.go:60] unable to parse manifest for "sha256:2cee344e4cfcfdc9a117fd82baa6f2d5daa7eeed450e02cd5d5554b424410439": unexpected end of JSON input 2025-10-13T00:21:25.224949245+00:00 stderr F E1013 00:21:25.224906 1 strategy.go:60] unable to parse manifest for "sha256:af9c08644ca057d83ef4b7d8de1489f01c5a52ff8670133b8a09162831b7fb34": unexpected end of JSON input 2025-10-13T00:21:25.225605542+00:00 stderr F E1013 00:21:25.225562 1 strategy.go:60] unable to parse manifest for 
"sha256:aa02a20c2edf83a009746b45a0fd2e0b4a2b224fdef1581046f6afef38c0bee2": unexpected end of JSON input 2025-10-13T00:21:25.226938248+00:00 stderr F E1013 00:21:25.226917 1 strategy.go:60] unable to parse manifest for "sha256:b053401886c06581d3c296855525cc13e0613100a596ed007bb69d5f8e972346": unexpected end of JSON input 2025-10-13T00:21:25.227907734+00:00 stderr F E1013 00:21:25.227872 1 strategy.go:60] unable to parse manifest for "sha256:59b88fb0c467ca43bf3c1af6bfd8777577638dd8079f995cdb20b6f4e20ce0b6": unexpected end of JSON input 2025-10-13T00:21:25.229033684+00:00 stderr F E1013 00:21:25.228954 1 strategy.go:60] unable to parse manifest for "sha256:61555b923dabe4ff734279ed1bdb9eb6d450c760e1cc04463cf88608ac8d1338": unexpected end of JSON input 2025-10-13T00:21:25.230243557+00:00 stderr F E1013 00:21:25.230200 1 strategy.go:60] unable to parse manifest for "sha256:603d10af5e3476add5b5726fdef893033869ae89824ee43949a46c9f004ef65d": unexpected end of JSON input 2025-10-13T00:21:25.231121770+00:00 stderr F E1013 00:21:25.231105 1 strategy.go:60] unable to parse manifest for "sha256:9ab26cb4005e9b60fd6349950957bbd0120efba216036da53c547c6f1c9e5e7f": unexpected end of JSON input 2025-10-13T00:21:25.231872301+00:00 stderr F E1013 00:21:25.231856 1 strategy.go:60] unable to parse manifest for "sha256:eed7e29bf583e4f01e170bb9f22f2a78098bf15243269b670c307caa6813b783": unexpected end of JSON input 2025-10-13T00:21:25.233548146+00:00 stderr F E1013 00:21:25.233519 1 strategy.go:60] unable to parse manifest for "sha256:2254dc2f421f496b504aafbbd8ea37e660652c4b6b4f9a0681664b10873be7fe": unexpected end of JSON input 2025-10-13T00:21:25.233732211+00:00 stderr F E1013 00:21:25.233716 1 strategy.go:60] unable to parse manifest for "sha256:b80a514f136f738736d6bf654dc3258c13b04a819e001dd8a39ef2f7475fd9d9": unexpected end of JSON input 2025-10-13T00:21:25.235225231+00:00 stderr F E1013 00:21:25.235208 1 strategy.go:60] unable to parse manifest for "sha256:e4b1599ba6e88f6df7c4e67d6397371d61b6829d926411184e9855e71e840b8c": unexpected end of JSON input 2025-10-13T00:21:25.236475464+00:00 stderr F E1013 00:21:25.236457 1 strategy.go:60] unable to parse manifest for "sha256:7ef75cdbc399425105060771cb8e700198cc0bddcfb60bf4311bf87ea62fd440": unexpected end of JSON input 2025-10-13T00:21:25.242644180+00:00 stderr F I1013 00:21:25.242118 1 trace.go:236] Trace[1896376618]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:54df6775-a2bd-48d8-8fcf-6f701f15d7a7,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:21.901) (total time: 3340ms): 2025-10-13T00:21:25.242644180+00:00 stderr F Trace[1896376618]: ---"Write to database call succeeded" len:1132 3337ms (00:21:25.239) 2025-10-13T00:21:25.242644180+00:00 stderr F Trace[1896376618]: [3.340179823s] [3.340179823s] END 2025-10-13T00:21:25.244496770+00:00 stderr F I1013 00:21:25.244465 1 trace.go:236] Trace[1236792184]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:29763596-fc6e-4918-90b4-8946da827821,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:21.448) (total time: 3796ms): 2025-10-13T00:21:25.244496770+00:00 stderr F Trace[1236792184]: ---"Write to database call succeeded" len:1230 3795ms (00:21:25.243) 2025-10-13T00:21:25.244496770+00:00 stderr F Trace[1236792184]: [3.796203617s] [3.796203617s] END 2025-10-13T00:21:25.520057270+00:00 stderr F E1013 00:21:25.520008 1 strategy.go:60] unable to parse manifest for "sha256:a8e4081414cfa644e212ded354dfee12706e63afb19a27c0c0ae2c8c64e56ca6": unexpected end of JSON input 2025-10-13T00:21:25.522556448+00:00 stderr F E1013 00:21:25.522509 1 strategy.go:60] unable to parse manifest for "sha256:38c7e4f7dea04bb536f05d78e0107ebc2a3607cf030db7f5c249f13ce1f52d59": unexpected end of JSON input 2025-10-13T00:21:25.527404448+00:00 stderr F E1013 00:21:25.527375 1 strategy.go:60] unable to parse manifest for "sha256:d2f17aaf2f871fda5620466d69ac67b9c355c0bae5912a1dbef9a51ca8813e50": unexpected end of JSON input 2025-10-13T00:21:25.529776582+00:00 stderr F E1013 00:21:25.529747 1 strategy.go:60] unable to parse manifest for "sha256:e4be2fb7216f432632819b2441df42a5a0063f7f473c2923ca6912b2d64b7494": unexpected end of JSON input 2025-10-13T00:21:25.531951030+00:00 stderr F E1013 00:21:25.531928 1 strategy.go:60] unable to parse manifest for "sha256:14de89e89efc97aee3b50141108b7833708c3a93ad90bf89940025ab5267ba86": unexpected end of JSON input 2025-10-13T00:21:25.533770229+00:00 stderr F E1013 00:21:25.533681 1 strategy.go:60] unable to parse manifest for "sha256:f438230ed2c2e609d0d7dbc430ccf1e9bad2660e6410187fd6e9b14a2952e70b": unexpected end of JSON input 2025-10-13T00:21:25.535621159+00:00 stderr F E1013 00:21:25.535548 1 strategy.go:60] unable to parse manifest for "sha256:f953734d89252219c3dcd8f703ba8b58c9c8a0f5dfa9425c9e56ec0834f7d288": unexpected end of JSON input 2025-10-13T00:21:25.537894650+00:00 stderr F E1013 00:21:25.537853 1 strategy.go:60] unable to parse manifest for "sha256:e4223a60b887ec24cad7dd70fdb6c3f2c107fb7118331be6f45d626219cfe7f3": unexpected end of JSON input 2025-10-13T00:21:25.544718754+00:00 stderr F I1013 00:21:25.544681 1 trace.go:236] Trace[890861944]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:11fcb664-6190-46e7-bd36-050c2654f603,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:22.559) (total time: 2985ms): 2025-10-13T00:21:25.544718754+00:00 stderr F Trace[890861944]: ---"Write to database call succeeded" len:1025 2984ms (00:21:25.543) 2025-10-13T00:21:25.544718754+00:00 stderr F Trace[890861944]: [2.985060773s] [2.985060773s] END 2025-10-13T00:21:25.585547172+00:00 stderr F E1013 00:21:25.585438 1 strategy.go:60] unable to parse manifest for 
"sha256:e851770fd181ef49193111f7afcdbf872ad23f3a8234e0e07a742c4ca2882c3d": unexpected end of JSON input 2025-10-13T00:21:25.589829987+00:00 stderr F E1013 00:21:25.589791 1 strategy.go:60] unable to parse manifest for "sha256:ce5c0becf829aca80734b4caf3ab6b76cb00f7d78f4e39fb136636a764dea7f6": unexpected end of JSON input 2025-10-13T00:21:25.591786369+00:00 stderr F E1013 00:21:25.591761 1 strategy.go:60] unable to parse manifest for "sha256:3f00540ce2a3a01d2a147a7d73825fe78697be213a050bd09edae36266d6bc40": unexpected end of JSON input 2025-10-13T00:21:25.593624379+00:00 stderr F E1013 00:21:25.593603 1 strategy.go:60] unable to parse manifest for "sha256:868224c3b7c309b9e04003af70a5563af8e4c662f0c53f2a7606e0573c9fad85": unexpected end of JSON input 2025-10-13T00:21:25.595272913+00:00 stderr F E1013 00:21:25.595233 1 strategy.go:60] unable to parse manifest for "sha256:0669a28577b41bb05c67492ef18a1d48a299ac54d1500df8f9f8f760ce4be24b": unexpected end of JSON input 2025-10-13T00:21:25.596783864+00:00 stderr F E1013 00:21:25.596743 1 strategy.go:60] unable to parse manifest for "sha256:9036a59a8275f9c205ef5fc674f38c0495275a1a7912029f9a784406bb00b1f5": unexpected end of JSON input 2025-10-13T00:21:25.598457629+00:00 stderr F E1013 00:21:25.598432 1 strategy.go:60] unable to parse manifest for "sha256:425e2c7c355bea32be238aa2c7bdd363b6ab3709412bdf095efe28a8f6c07d84": unexpected end of JSON input 2025-10-13T00:21:25.599856676+00:00 stderr F E1013 00:21:25.599826 1 strategy.go:60] unable to parse manifest for "sha256:67fee4b64b269f5666a1051d806635b675903ef56d07b7cc019d3d59ff1aa97c": unexpected end of JSON input 2025-10-13T00:21:25.601651915+00:00 stderr F E1013 00:21:25.601591 1 strategy.go:60] unable to parse manifest for "sha256:b85cbdbc289752c91ac7f468cffef916fe9ab01865f3e32cfcc44ccdd633b168": unexpected end of JSON input 2025-10-13T00:21:25.603539675+00:00 stderr F E1013 00:21:25.603496 1 strategy.go:60] unable to parse manifest for "sha256:663eb81388ae8f824e7920c272f6d2e2274cf6c140d61416607261cdce9d50e2": unexpected end of JSON input 2025-10-13T00:21:25.609582208+00:00 stderr F I1013 00:21:25.609505 1 trace.go:236] Trace[2062841525]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7b1cfa5d-89ec-437f-bd4f-52fbaf5cfc08,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:21.725) (total time: 3883ms): 2025-10-13T00:21:25.609582208+00:00 stderr F Trace[2062841525]: ---"Write to database call succeeded" len:1153 3881ms (00:21:25.607) 2025-10-13T00:21:25.609582208+00:00 stderr F Trace[2062841525]: [3.883896524s] [3.883896524s] END 2025-10-13T00:21:25.764688409+00:00 stderr F E1013 00:21:25.764627 1 strategy.go:60] unable to parse manifest for "sha256:431753c8a6a8541fdc0edd3385b2c765925d244fdd2347d2baa61303789696be": unexpected end of JSON input 2025-10-13T00:21:25.766604130+00:00 stderr F E1013 00:21:25.766571 1 strategy.go:60] unable to parse manifest for "sha256:64acf3403b5c2c85f7a28f326c63f1312b568db059c66d90b34e3c59fde3a74b": unexpected end of JSON input 2025-10-13T00:21:25.768552973+00:00 stderr F E1013 00:21:25.768520 1 strategy.go:60] unable to parse manifest for 
"sha256:74051f86b00fb102e34276f03a310c16bc57b9c2a001a56ba66359e15ee48ba6": unexpected end of JSON input 2025-10-13T00:21:25.770464044+00:00 stderr F E1013 00:21:25.770439 1 strategy.go:60] unable to parse manifest for "sha256:33d4dff40514e91d86b42e90b24b09a5ca770d9f67657c936363d348cd33d188": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F E1013 00:21:25.772300 1 strategy.go:60] unable to parse manifest for "sha256:7711108ef60ef6f0536bfa26914af2afaf6455ce6e4c4abd391e31a2d95d0178": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F E1013 00:21:25.773878 1 strategy.go:60] unable to parse manifest for "sha256:b163564be6ed5b80816e61a4ee31e42f42dbbf345253daac10ecc9fadf31baa3": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F E1013 00:21:25.775229 1 strategy.go:60] unable to parse manifest for "sha256:920ff7e5efc777cb523669c425fd7b553176c9f4b34a85ceddcb548c2ac5f78a": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F E1013 00:21:25.776501 1 strategy.go:60] unable to parse manifest for "sha256:32a5e806bd88b40568d46864fd313541498e38fabfc5afb5f3bdfe052c4b4c5f": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F E1013 00:21:25.777748 1 strategy.go:60] unable to parse manifest for "sha256:229ee7b88c5f700c95d557d0b37b8f78dbb6b125b188c3bf050cfdb32aec7962": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F E1013 00:21:25.779198 1 strategy.go:60] unable to parse manifest for "sha256:78bf175cecb15524b2ef81bff8cc11acdf7c0f74c08417f0e443483912e4878a": unexpected end of JSON input 2025-10-13T00:21:25.813966584+00:00 stderr F I1013 00:21:25.783599 1 trace.go:236] Trace[1555124802]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5a0f6baa-bf3f-47c2-9b22-42d1bc70fd3f,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:21.961) (total time: 3821ms): 2025-10-13T00:21:25.813966584+00:00 stderr F Trace[1555124802]: ---"Write to database call succeeded" len:1241 3820ms (00:21:25.782) 2025-10-13T00:21:25.813966584+00:00 stderr F Trace[1555124802]: [3.821833665s] [3.821833665s] END 2025-10-13T00:21:26.519477796+00:00 stderr F E1013 00:21:26.519431 1 strategy.go:60] unable to parse manifest for "sha256:496e23be70520863bce6f7cdc54d280aca2c133d06e992795c4dcbde1a9dd1ab": unexpected end of JSON input 2025-10-13T00:21:26.521470729+00:00 stderr F E1013 00:21:26.521442 1 strategy.go:60] unable to parse manifest for "sha256:022488b1bf697b7dd8c393171a3247bef4ea545a9ab828501e72168f2aac9415": unexpected end of JSON input 2025-10-13T00:21:26.523283868+00:00 stderr F E1013 00:21:26.523240 1 strategy.go:60] unable to parse manifest for "sha256:7164a06e9ba98a3ce9991bd7019512488efe30895175bb463e255f00eb9421fd": unexpected end of JSON input 2025-10-13T00:21:26.525826216+00:00 stderr F E1013 00:21:26.525793 1 strategy.go:60] unable to parse manifest for "sha256:81684e422367a075ac113e69ea11d8721416ce4bedea035e25313c5e726fd7d1": unexpected end of JSON input 2025-10-13T00:21:26.528143629+00:00 stderr F E1013 00:21:26.528122 1 strategy.go:60] unable to parse manifest for 
"sha256:b838fa18dab68d43a19f0c329c3643850691b8f9915823c4f8d25685eb293a11": unexpected end of JSON input 2025-10-13T00:21:26.529546086+00:00 stderr F E1013 00:21:26.529527 1 strategy.go:60] unable to parse manifest for "sha256:8a5b580b76c2fc2dfe55d13bb0dd53e8c71d718fc1a3773264b1710f49060222": unexpected end of JSON input 2025-10-13T00:21:26.535785284+00:00 stderr F E1013 00:21:26.535747 1 strategy.go:60] unable to parse manifest for "sha256:2f59ad75b66a3169b0b03032afb09aa3cfa531dbd844e3d3a562246e7d09c282": unexpected end of JSON input 2025-10-13T00:21:26.543805990+00:00 stderr F E1013 00:21:26.543741 1 strategy.go:60] unable to parse manifest for "sha256:9d759db3bb650e5367216ce261779c5a58693fc7ae10f21cd264011562bd746d": unexpected end of JSON input 2025-10-13T00:21:26.546403160+00:00 stderr F E1013 00:21:26.546352 1 strategy.go:60] unable to parse manifest for "sha256:bf5e518dba2aa935829d9db88d933a264e54ffbfa80041b41287fd70c1c35ba5": unexpected end of JSON input 2025-10-13T00:21:26.548243689+00:00 stderr F E1013 00:21:26.548207 1 strategy.go:60] unable to parse manifest for "sha256:f7ca08a8dda3610fcc10cc1fe5f5d0b9f8fc7a283b01975d0fe2c1e77ae06193": unexpected end of JSON input 2025-10-13T00:21:26.553343356+00:00 stderr F I1013 00:21:26.553307 1 trace.go:236] Trace[1405575699]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7d500123-96f0-436e-8b0b-95050f68186d,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:22.621) (total time: 3931ms): 2025-10-13T00:21:26.553343356+00:00 stderr F Trace[1405575699]: ---"Write to database call succeeded" len:1142 3930ms (00:21:26.552) 2025-10-13T00:21:26.553343356+00:00 stderr F Trace[1405575699]: [3.931454522s] [3.931454522s] END 2025-10-13T00:21:28.170084924+00:00 stderr F E1013 00:21:28.170024 1 strategy.go:60] unable to parse manifest for "sha256:2ae058ee7239213fb495491112be8cc7e6d6661864fd399deb27f23f50f05eb4": unexpected end of JSON input 2025-10-13T00:21:28.172685604+00:00 stderr F E1013 00:21:28.172597 1 strategy.go:60] unable to parse manifest for "sha256:db3f5192237bfdab2355304f17916e09bc29d6d529fdec48b09a08290ae35905": unexpected end of JSON input 2025-10-13T00:21:28.175941312+00:00 stderr F E1013 00:21:28.174592 1 strategy.go:60] unable to parse manifest for "sha256:b4cb02a4e7cb915b6890d592ed5b4ab67bcef19bf855029c95231f51dd071352": unexpected end of JSON input 2025-10-13T00:21:28.179041855+00:00 stderr F E1013 00:21:28.178982 1 strategy.go:60] unable to parse manifest for "sha256:fa9556628c15b8eb22cafccb737b3fbcecfd681a5c2cfea3302dd771c644a7db": unexpected end of JSON input 2025-10-13T00:21:28.181001448+00:00 stderr F E1013 00:21:28.180949 1 strategy.go:60] unable to parse manifest for "sha256:a0a6db2dcdb3d49e36bd0665e3e00f242a690391700e42cab14e86b154152bfd": unexpected end of JSON input 2025-10-13T00:21:28.182287263+00:00 stderr F E1013 00:21:28.182242 1 strategy.go:60] unable to parse manifest for "sha256:e90172ca0f09acf5db1721bd7df304dffd184e00145072132cb71c7f0797adf6": unexpected end of JSON input 2025-10-13T00:21:28.183531456+00:00 stderr F E1013 00:21:28.183482 1 strategy.go:60] unable to parse manifest for 
"sha256:421d1f6a10e263677b7687ccea8e4a59058e2e3c80585505eec9a9c2e6f9f40e": unexpected end of JSON input 2025-10-13T00:21:28.184931724+00:00 stderr F E1013 00:21:28.184889 1 strategy.go:60] unable to parse manifest for "sha256:6c009f430da02bdcff618a7dcd085d7d22547263eeebfb8d6377a4cf6f58769d": unexpected end of JSON input 2025-10-13T00:21:28.187174914+00:00 stderr F E1013 00:21:28.186537 1 strategy.go:60] unable to parse manifest for "sha256:dc84fed0f6f40975a2277c126438c8aa15c70eeac75981dbaa4b6b853eff61a6": unexpected end of JSON input 2025-10-13T00:21:28.188348875+00:00 stderr F E1013 00:21:28.188305 1 strategy.go:60] unable to parse manifest for "sha256:78af15475eac13d2ff439b33a9c3bdd39147858a824c420e8042fd5f35adce15": unexpected end of JSON input 2025-10-13T00:21:28.190024121+00:00 stderr F E1013 00:21:28.189983 1 strategy.go:60] unable to parse manifest for "sha256:06bbbf9272d5c5161f444388593e9bd8db793d8a2d95a50b429b3c0301fafcdd": unexpected end of JSON input 2025-10-13T00:21:28.191606533+00:00 stderr F E1013 00:21:28.191572 1 strategy.go:60] unable to parse manifest for "sha256:caba895933209aa9a4f3121f9ec8e5e8013398ab4f72bd3ff255227aad8d2c3e": unexpected end of JSON input 2025-10-13T00:21:28.193184486+00:00 stderr F E1013 00:21:28.193141 1 strategy.go:60] unable to parse manifest for "sha256:dbe9905fe2b20ed30b0e2d64543016fa9c145eeb5a678f720ba9d2055f0c9f88": unexpected end of JSON input 2025-10-13T00:21:28.200065601+00:00 stderr F I1013 00:21:28.198492 1 trace.go:236] Trace[1895376835]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2720e22e-0b83-43cf-aa39-44db3236f40e,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Oct-2025 00:21:22.742) (total time: 5456ms): 2025-10-13T00:21:28.200065601+00:00 stderr F Trace[1895376835]: ---"Write to database call succeeded" len:1743 5454ms (00:21:28.197) 2025-10-13T00:21:28.200065601+00:00 stderr F Trace[1895376835]: [5.456070704s] [5.456070704s] END 2025-10-13T00:22:24.527370029+00:00 stderr F E1013 00:22:24.527299 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.527418770+00:00 stderr F E1013 00:22:24.527395 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.529610159+00:00 stderr F E1013 00:22:24.529567 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.529626290+00:00 stderr F E1013 00:22:24.529612 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.532212119+00:00 stderr F E1013 00:22:24.532173 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-10-13T00:22:24.532229410+00:00 stderr F E1013 00:22:24.532210 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.534412208+00:00 stderr F E1013 00:22:24.534369 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.534430039+00:00 stderr F E1013 00:22:24.534410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.544184841+00:00 stderr F E1013 00:22:24.544043 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.544184841+00:00 stderr F E1013 00:22:24.544102 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.546285138+00:00 stderr F E1013 00:22:24.546242 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.546302408+00:00 stderr F E1013 00:22:24.546286 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.548000304+00:00 stderr F E1013 00:22:24.547965 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.548000304+00:00 stderr F E1013 00:22:24.547993 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.549550755+00:00 stderr F E1013 00:22:24.549505 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.549575986+00:00 stderr F E1013 00:22:24.549552 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.551382195+00:00 stderr F E1013 00:22:24.551345 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:24.551399365+00:00 stderr F E1013 00:22:24.551376 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.492238356+00:00 stderr F E1013 00:22:25.492177 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:25.492284388+00:00 stderr F E1013 00:22:25.492247 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.500928730+00:00 stderr F E1013 00:22:25.500873 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.500972751+00:00 stderr F E1013 00:22:25.500952 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.516798217+00:00 stderr F E1013 00:22:25.516688 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.516798217+00:00 stderr F E1013 00:22:25.516749 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.518214915+00:00 stderr F E1013 00:22:25.518181 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.518366119+00:00 stderr F E1013 00:22:25.518233 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.519992663+00:00 stderr F E1013 00:22:25.519951 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.520052834+00:00 stderr F E1013 00:22:25.520027 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.521782191+00:00 stderr F E1013 00:22:25.521755 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.521801341+00:00 stderr F E1013 00:22:25.521786 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.526913699+00:00 stderr F E1013 00:22:25.526877 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.526985151+00:00 stderr F E1013 00:22:25.526965 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.528932953+00:00 stderr F E1013 00:22:25.528903 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:25.528950984+00:00 stderr F E1013 00:22:25.528942 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.530927597+00:00 stderr F E1013 00:22:25.530888 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.530959848+00:00 stderr F E1013 00:22:25.530938 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.533860666+00:00 stderr F E1013 00:22:25.533815 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.533880076+00:00 stderr F E1013 00:22:25.533862 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.534511953+00:00 stderr F E1013 00:22:25.534486 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.534554134+00:00 stderr F E1013 00:22:25.534524 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.535632973+00:00 stderr F E1013 00:22:25.535598 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.535747026+00:00 stderr F E1013 00:22:25.535673 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.539786355+00:00 stderr F E1013 00:22:25.539734 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.539786355+00:00 stderr F E1013 00:22:25.539778 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.541215404+00:00 stderr F E1013 00:22:25.541187 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.541251014+00:00 stderr F E1013 00:22:25.541226 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.542841737+00:00 stderr F E1013 00:22:25.542674 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:25.542841737+00:00 stderr F E1013 00:22:25.542708 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.544515602+00:00 stderr F E1013 00:22:25.544349 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.544515602+00:00 stderr F E1013 00:22:25.544390 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.545987232+00:00 stderr F E1013 00:22:25.545961 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.546002902+00:00 stderr F E1013 00:22:25.545985 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.547288607+00:00 stderr F E1013 00:22:25.547253 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:25.547288607+00:00 stderr F E1013 00:22:25.547276 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.008145480+00:00 stderr F E1013 00:22:26.008059 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.008189691+00:00 stderr F E1013 00:22:26.008164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.396728720+00:00 stderr F E1013 00:22:26.396669 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:26.396873264+00:00 stderr F E1013 00:22:26.396847 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.072566985+00:00 stderr F E1013 00:22:27.072477 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.072659057+00:00 stderr F E1013 00:22:27.072625 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:27.763362281+00:00 stderr F E1013 00:22:27.763250 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:27.763514585+00:00 stderr F E1013 00:22:27.763460 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.251769309+00:00 stderr F E1013 00:22:30.251705 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.251892093+00:00 stderr F E1013 00:22:30.251863 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.595953745+00:00 stderr F E1013 00:22:30.595860 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.596082359+00:00 stderr F E1013 00:22:30.596040 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.647045799+00:00 stderr F E1013 00:22:30.646988 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:30.647464190+00:00 stderr F E1013 00:22:30.647122 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.987154866+00:00 stderr F E1013 00:22:33.986968 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:33.987154866+00:00 stderr F E1013 00:22:33.987072 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.090469770+00:00 stderr F E1013 00:22:40.089668 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:40.090469770+00:00 stderr F E1013 00:22:40.089747 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.239774829+00:00 stderr F E1013 00:22:42.239700 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:42.239774829+00:00 stderr F E1013 00:22:42.239763 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:44.088529137+00:00 stderr F E1013 00:22:44.088430 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:44.088529137+00:00 stderr F E1013 00:22:44.088509 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.102514061+00:00 stderr F E1013 00:22:45.102441 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:45.102549762+00:00 stderr F E1013 00:22:45.102513 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:47.104885387+00:00 stderr F E1013 00:22:47.104827 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:47.104885387+00:00 stderr F E1013 00:22:47.104876 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.075451473+00:00 stderr F E1013 00:22:50.075380 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.075479673+00:00 stderr F E1013 00:22:50.075446 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.221665665+00:00 stderr F E1013 00:22:50.221587 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.221665665+00:00 stderr F E1013 00:22:50.221635 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.304908444+00:00 stderr F E1013 00:22:50.304812 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.304908444+00:00 stderr F E1013 00:22:50.304860 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.587018092+00:00 stderr F E1013 00:22:50.586938 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.587077174+00:00 stderr F E1013 00:22:50.587018 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.716334345+00:00 stderr F E1013 00:22:50.716240 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:50.716334345+00:00 stderr F E1013 00:22:50.716294 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.881695241+00:00 stderr F E1013 00:22:50.881626 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:50.881852305+00:00 stderr F E1013 00:22:50.881824 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.016439424+00:00 stderr F E1013 00:22:51.016366 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.016439424+00:00 stderr F E1013 00:22:51.016429 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.536803649+00:00 stderr F E1013 00:22:51.536735 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:51.536941403+00:00 stderr F E1013 00:22:51.536916 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.237269710+00:00 stderr F E1013 00:22:52.237199 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:52.237298931+00:00 stderr F E1013 00:22:52.237268 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.924376075+00:00 stderr F E1013 00:22:53.923856 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.924376075+00:00 stderr F E1013 00:22:53.923898 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.947751026+00:00 stderr F E1013 00:22:53.947362 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.947751026+00:00 stderr F E1013 00:22:53.947409 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-10-13T00:22:53.950361309+00:00 stderr F E1013 00:22:53.950327 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-10-13T00:22:53.950443481+00:00 stderr F E1013 00:22:53.950430 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:22:54.139857727+00:00 stderr F E1013 00:22:54.139439 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:22:54.139937590+00:00 stderr F E1013 00:22:54.139924 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-10-13T00:23:33.915685905+00:00 stderr F I1013 00:23:33.915611 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-10-13T00:23:36.750227571+00:00 stderr F I1013 00:23:36.749057 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-10-13T00:23:52.123376940+00:00 stderr F I1013 00:23:52.122150 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-10-13T00:23:52.225204186+00:00 stderr F I1013 00:23:52.225145 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log
2025-08-13T20:07:23.235321095+00:00 stdout F Copying system trust bundle
2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.020450 1 feature_gate.go:239] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2025-08-13T20:07:25.021836665+00:00 stderr F I0813 20:07:25.020513 1 feature_gate.go:249] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021595 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstall" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021612 1 config.go:121] Ignoring unknown FeatureGate "OpenShiftPodSecurityAdmission" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021617 1 config.go:121] Ignoring unknown FeatureGate "SigstoreImageVerification" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021621 1 config.go:121] Ignoring unknown FeatureGate "SignatureStores" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021626 1 config.go:121] Ignoring unknown FeatureGate "NewOLM" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021630 1 config.go:121] Ignoring unknown FeatureGate "AzureWorkloadIdentity" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021635 1 config.go:121] Ignoring unknown FeatureGate "CSIDriverSharedResource" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021639 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallPowerVS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021716 1 config.go:121] Ignoring unknown FeatureGate "AdminNetworkPolicy" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021722 1 config.go:121] Ignoring unknown FeatureGate "Example" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021727 1 config.go:121] Ignoring unknown FeatureGate "MixedCPUsAllocation" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021731 1 config.go:121] Ignoring unknown FeatureGate "MetricsCollectionProfiles" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021736 1 config.go:121] Ignoring unknown FeatureGate "PrivateHostedZoneAWS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021740 1 config.go:121] Ignoring unknown FeatureGate "UpgradeStatus" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021745 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAzure" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021750 1 config.go:121] Ignoring unknown FeatureGate "DNSNameResolver" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021754 1 config.go:121] Ignoring unknown FeatureGate "NetworkDiagnosticsConfig" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021758 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallVSphere" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021762 1 config.go:121] Ignoring unknown FeatureGate "ExternalOIDC" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021767 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfigAPI" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021809 1 config.go:121] Ignoring unknown FeatureGate "NodeDisruptionPolicy" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021818 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderGCP" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 
20:07:25.021822 1 config.go:121] Ignoring unknown FeatureGate "GCPClusterHostedDNS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021826 1 config.go:121] Ignoring unknown FeatureGate "HardwareSpeed" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021830 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfig" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021837 1 config.go:121] Ignoring unknown FeatureGate "VSphereMultiVCenters" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021841 1 config.go:121] Ignoring unknown FeatureGate "BuildCSIVolumes" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021845 1 config.go:121] Ignoring unknown FeatureGate "ChunkSizeMiB" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021860 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallNutanix" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021865 1 config.go:121] Ignoring unknown FeatureGate "VSphereStaticIPs" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021869 1 config.go:121] Ignoring unknown FeatureGate "ImagePolicy" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021898 1 config.go:121] Ignoring unknown FeatureGate "ManagedBootImages" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021907 1 config.go:121] Ignoring unknown FeatureGate "PinnedImages" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021912 1 config.go:121] Ignoring unknown FeatureGate "VSphereControlPlaneMachineSet" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021916 1 config.go:121] Ignoring unknown FeatureGate "NetworkLiveMigration" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021920 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallOpenStack" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021924 1 config.go:121] Ignoring unknown FeatureGate "GCPLabelsTags" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021929 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIProviderOpenStack" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021933 1 config.go:121] Ignoring unknown FeatureGate "MetricsServer" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021938 1 config.go:121] Ignoring unknown FeatureGate "VSphereDriverConfiguration" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021942 1 config.go:121] Ignoring unknown FeatureGate "EtcdBackendQuota" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021946 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderExternal" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021950 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIOperatorDisableMachineHealthCheckController" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021956 1 config.go:121] Ignoring unknown FeatureGate "PlatformOperators" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021960 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallGCP" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021964 1 config.go:121] Ignoring unknown FeatureGate "OnClusterBuild" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021969 1 config.go:121] Ignoring unknown FeatureGate "VolumeGroupSnapshot" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021973 1 config.go:121] Ignoring unknown FeatureGate "ExternalRouteCertificate" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021978 1 
config.go:121] Ignoring unknown FeatureGate "AlibabaPlatform" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021982 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAWS" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021986 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallIBMCloud" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021991 1 config.go:121] Ignoring unknown FeatureGate "AutomatedEtcdBackup" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.021995 1 config.go:121] Ignoring unknown FeatureGate "BareMetalLoadBalancer" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.022000 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProvider" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.022004 1 config.go:121] Ignoring unknown FeatureGate "InstallAlternateInfrastructureAWS" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022008 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderAzure" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022013 1 config.go:121] Ignoring unknown FeatureGate "MachineConfigNodes" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022017 1 config.go:121] Ignoring unknown FeatureGate "GatewayAPI" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022022 1 config.go:121] Ignoring unknown FeatureGate "InsightsOnDemandDataGather" 2025-08-13T20:07:25.025046997+00:00 stderr F I0813 20:07:25.024565 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:07:25.679468230+00:00 stderr F I0813 20:07:25.676369 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:07:25.711720245+00:00 stderr F I0813 20:07:25.709906 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735038 1 plugins.go:83] "Registered admission plugin" plugin="NamespaceLifecycle" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735540 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionWebhook" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735556 1 plugins.go:83] "Registered admission plugin" plugin="MutatingAdmissionWebhook" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735562 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionPolicy" 2025-08-13T20:07:25.736633209+00:00 stderr F I0813 20:07:25.736209 1 admission.go:48] Admission plugin "project.openshift.io/ProjectRequestLimit" is not configured so it will be disabled. 2025-08-13T20:07:25.773228408+00:00 stderr F I0813 20:07:25.773109 1 plugins.go:157] Loaded 5 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,build.openshift.io/BuildConfigSecretInjector,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,MutatingAdmissionWebhook. 2025-08-13T20:07:25.773228408+00:00 stderr F I0813 20:07:25.773159 1 plugins.go:160] Loaded 9 validating admission controller(s) successfully in the following order: OwnerReferencesPermissionEnforcement,build.openshift.io/BuildConfigSecretInjector,build.openshift.io/BuildByStrategy,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,route.openshift.io/RequiredRouteAnnotations,ValidatingAdmissionWebhook,ResourceQuota. 
2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.773960 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774009 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774038 1 maxinflight.go:116] "Set denominator for readonly requests" limit=3000 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774045 1 maxinflight.go:120] "Set denominator for mutating requests" limit=1500 2025-08-13T20:07:25.823566991+00:00 stderr F I0813 20:07:25.823427 1 store.go:1579] "Monitoring resource count at path" resource="builds.build.openshift.io" path="//builds" 2025-08-13T20:07:25.833390993+00:00 stderr F I0813 20:07:25.833321 1 store.go:1579] "Monitoring resource count at path" resource="buildconfigs.build.openshift.io" path="//buildconfigs" 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841419 1 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841554 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841566 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:25.846512949+00:00 stderr F I0813 20:07:25.846443 1 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-08-13T20:07:25.846623362+00:00 stderr F I0813 20:07:25.846572 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.846623362+00:00 stderr F I0813 20:07:25.846612 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.020963071+00:00 stderr F I0813 20:07:26.020524 1 store.go:1579] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//routes" 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.024741 1 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.025146 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.025164 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.046650937+00:00 stderr F I0813 20:07:26.046552 1 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.openshift.io" path="//rangeallocations" 2025-08-13T20:07:26.057699754+00:00 stderr F I0813 20:07:26.057587 1 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.057933581+00:00 stderr F I0813 20:07:26.057844 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.057933581+00:00 stderr F I0813 20:07:26.057912 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.074950239+00:00 stderr F I0813 20:07:26.071717 1 store.go:1579] "Monitoring resource count at path" resource="deploymentconfigs.apps.openshift.io" path="//deploymentconfigs" 2025-08-13T20:07:26.094870880+00:00 stderr F I0813 20:07:26.094722 1 cacher.go:451] cacher (buildconfigs.build.openshift.io): initialized 2025-08-13T20:07:26.095282612+00:00 stderr F I0813 20:07:26.095244 1 cacher.go:451] cacher (rangeallocations.security.openshift.io): initialized 2025-08-13T20:07:26.095351024+00:00 stderr F I0813 20:07:26.095335 1 reflector.go:351] Caches populated for *security.RangeAllocation from storage/cacher.go:/rangeallocations 2025-08-13T20:07:26.095544449+00:00 stderr 
F I0813 20:07:26.095525 1 cacher.go:451] cacher (deploymentconfigs.apps.openshift.io): initialized 2025-08-13T20:07:26.095588340+00:00 stderr F I0813 20:07:26.095575 1 reflector.go:351] Caches populated for *apps.DeploymentConfig from storage/cacher.go:/deploymentconfigs 2025-08-13T20:07:26.096232139+00:00 stderr F I0813 20:07:26.096203 1 cacher.go:451] cacher (builds.build.openshift.io): initialized 2025-08-13T20:07:26.099095701+00:00 stderr F I0813 20:07:26.099067 1 reflector.go:351] Caches populated for *build.Build from storage/cacher.go:/builds 2025-08-13T20:07:26.099946485+00:00 stderr F I0813 20:07:26.099859 1 reflector.go:351] Caches populated for *build.BuildConfig from storage/cacher.go:/buildconfigs 2025-08-13T20:07:26.101696446+00:00 stderr F I0813 20:07:26.101627 1 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.102704534+00:00 stderr F I0813 20:07:26.102633 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.102704534+00:00 stderr F I0813 20:07:26.102685 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.132331104+00:00 stderr F I0813 20:07:26.132237 1 cacher.go:451] cacher (routes.route.openshift.io): initialized 2025-08-13T20:07:26.132331104+00:00 stderr F I0813 20:07:26.132293 1 reflector.go:351] Caches populated for *route.Route from storage/cacher.go:/routes 2025-08-13T20:07:26.173127094+00:00 stderr F I0813 20:07:26.173021 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca, incoming err: 2025-08-13T20:07:26.173127094+00:00 stderr F I0813 20:07:26.173074 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173126 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123, incoming err: 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173132 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173146 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-08-13T20:07:26.173252247+00:00 stderr F I0813 20:07:26.173208 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-08-13T20:07:26.173362670+00:00 stderr F I0813 20:07:26.173322 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-08-13T20:07:26.173467893+00:00 stderr F I0813 20:07:26.173425 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..data, incoming err: 2025-08-13T20:07:26.173467893+00:00 stderr F I0813 20:07:26.173453 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..data 2025-08-13T20:07:26.173480724+00:00 stderr F I0813 20:07:26.173473 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-08-13T20:07:26.173490634+00:00 stderr F I0813 20:07:26.173479 1 apiserver.go:161] skipping dir or symlink: 
/var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:26.173502834+00:00 stderr F I0813 20:07:26.173496 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-08-13T20:07:26.173512675+00:00 stderr F I0813 20:07:26.173501 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000 2025-08-13T20:07:26.173522885+00:00 stderr F I0813 20:07:26.173513 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-08-13T20:07:26.173522885+00:00 stderr F I0813 20:07:26.173518 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:07:26.229558662+00:00 stderr F I0813 20:07:26.229472 1 store.go:1579] "Monitoring resource count at path" resource="images.image.openshift.io" path="//images" 2025-08-13T20:07:26.262353442+00:00 stderr F I0813 20:07:26.262258 1 store.go:1579] "Monitoring resource count at path" resource="imagestreams.image.openshift.io" path="//imagestreams" 2025-08-13T20:07:26.345720652+00:00 stderr F I0813 20:07:26.345645 1 cacher.go:451] cacher (imagestreams.image.openshift.io): initialized 2025-08-13T20:07:26.349918072+00:00 stderr F I0813 20:07:26.349863 1 reflector.go:351] Caches populated for *image.ImageStream from storage/cacher.go:/imagestreams 2025-08-13T20:07:26.369736351+00:00 stderr F I0813 20:07:26.369613 1 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.370657947+00:00 stderr F W0813 20:07:26.370604 1 genericapiserver.go:756] Skipping API image.openshift.io/1.0 because it has no resources. 2025-08-13T20:07:26.370657947+00:00 stderr F W0813 20:07:26.370641 1 genericapiserver.go:756] Skipping API image.openshift.io/pre012 because it has no resources. 
2025-08-13T20:07:26.371867112+00:00 stderr F I0813 20:07:26.371470 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.372350326+00:00 stderr F I0813 20:07:26.372293 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.385405080+00:00 stderr F I0813 20:07:26.385348 1 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.385561074+00:00 stderr F I0813 20:07:26.385511 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.385561074+00:00 stderr F I0813 20:07:26.385546 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.416983185+00:00 stderr F I0813 20:07:26.416673 1 store.go:1579] "Monitoring resource count at path" resource="templates.template.openshift.io" path="//templates" 2025-08-13T20:07:26.436443863+00:00 stderr F I0813 20:07:26.436348 1 store.go:1579] "Monitoring resource count at path" resource="templateinstances.template.openshift.io" path="//templateinstances" 2025-08-13T20:07:26.443011912+00:00 stderr F I0813 20:07:26.442942 1 cacher.go:451] cacher (templateinstances.template.openshift.io): initialized 2025-08-13T20:07:26.443040462+00:00 stderr F I0813 20:07:26.443009 1 reflector.go:351] Caches populated for *template.TemplateInstance from storage/cacher.go:/templateinstances 2025-08-13T20:07:26.455495849+00:00 stderr F I0813 20:07:26.454602 1 store.go:1579] "Monitoring resource count at path" resource="brokertemplateinstances.template.openshift.io" path="//brokertemplateinstances" 2025-08-13T20:07:26.458248898+00:00 stderr F I0813 20:07:26.458174 1 cacher.go:451] cacher (templates.template.openshift.io): initialized 2025-08-13T20:07:26.458248898+00:00 stderr F I0813 20:07:26.458240 1 reflector.go:351] Caches populated for *template.Template from storage/cacher.go:/templates 2025-08-13T20:07:26.460369019+00:00 stderr F I0813 20:07:26.460124 1 cacher.go:451] cacher (images.image.openshift.io): initialized 2025-08-13T20:07:26.460369019+00:00 stderr F I0813 20:07:26.460182 1 reflector.go:351] Caches populated for *image.Image from storage/cacher.go:/images 2025-08-13T20:07:26.463958292+00:00 stderr F I0813 20:07:26.463908 1 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.464163788+00:00 stderr F I0813 20:07:26.464118 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.464163788+00:00 stderr F I0813 20:07:26.464149 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.464408955+00:00 stderr F I0813 20:07:26.464364 1 cacher.go:451] cacher (brokertemplateinstances.template.openshift.io): initialized 2025-08-13T20:07:26.464455926+00:00 stderr F I0813 20:07:26.464418 1 reflector.go:351] Caches populated for *template.BrokerTemplateInstance from storage/cacher.go:/brokertemplateinstances 2025-08-13T20:07:26.484150241+00:00 stderr F I0813 20:07:26.483684 1 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.484232063+00:00 stderr F I0813 20:07:26.483850 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.484232063+00:00 stderr F I0813 20:07:26.483908 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:27.540162388+00:00 stderr F I0813 20:07:27.537590 1 server.go:50] Starting master on 0.0.0.0:8443 (v0.0.0-master+$Format:%H$) 2025-08-13T20:07:27.540162388+00:00 stderr F I0813 20:07:27.537839 1 genericapiserver.go:528] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551033 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551069 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551125 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551144 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551172 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551226 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551313 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.551267526 +0000 UTC))" 2025-08-13T20:07:27.551645367+00:00 stderr F I0813 20:07:27.551598 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.551574325 +0000 UTC))" 2025-08-13T20:07:27.551660568+00:00 stderr F I0813 20:07:27.551646 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:07:27.551725429+00:00 stderr F I0813 20:07:27.551686 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:07:27.551740480+00:00 stderr F I0813 20:07:27.551729 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:07:27.551972457+00:00 stderr F I0813 20:07:27.551937 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:07:27.552988366+00:00 stderr F I0813 20:07:27.552961 1 openshift_apiserver.go:593] Using default project node label selector: 2025-08-13T20:07:27.553094279+00:00 stderr F I0813 20:07:27.553077 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-08-13T20:07:27.572047802+00:00 stderr F I0813 20:07:27.571750 1 healthz.go:261] poststarthook/authorization.openshift.io-bootstrapclusterroles,poststarthook/authorization.openshift.io-ensurenodebootstrap-sa check failed: healthz 2025-08-13T20:07:27.572047802+00:00 stderr F [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: not finished 2025-08-13T20:07:27.572047802+00:00 stderr F 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: not finished 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575071 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575487 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575951 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.576187 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.577214520+00:00 stderr F I0813 20:07:27.577137 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.577962422+00:00 stderr F I0813 20:07:27.577938 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.578227459+00:00 stderr F I0813 20:07:27.578206 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.588645468+00:00 stderr F I0813 20:07:27.585074 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.591386827+00:00 stderr F I0813 20:07:27.591347 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.593579579+00:00 stderr F I0813 20:07:27.593471 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.593949880+00:00 stderr F I0813 20:07:27.593924 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.595380011+00:00 stderr F I0813 20:07:27.595134 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.596744490+00:00 stderr F I0813 20:07:27.596714 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.598120860+00:00 stderr F I0813 20:07:27.598093 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.600147248+00:00 stderr F I0813 20:07:27.600118 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.614977173+00:00 stderr F I0813 20:07:27.611062 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.614977173+00:00 stderr F I0813 20:07:27.611978 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.616675152+00:00 stderr F I0813 20:07:27.615986 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.618291068+00:00 stderr F I0813 20:07:27.617171 1 reflector.go:351] Caches 
populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.620571273+00:00 stderr F I0813 20:07:27.620521 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.627531003+00:00 stderr F I0813 20:07:27.627368 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.651381577+00:00 stderr F I0813 20:07:27.651267 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:27.651381577+00:00 stderr F I0813 20:07:27.651361 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:07:27.652189850+00:00 stderr F I0813 20:07:27.651622 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:27.652189850+00:00 stderr F I0813 20:07:27.651749 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.651709916 +0000 UTC))" 2025-08-13T20:07:27.652219171+00:00 stderr F I0813 20:07:27.652193 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.652171329 +0000 UTC))" 2025-08-13T20:07:27.652622852+00:00 stderr F I0813 20:07:27.652480 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.652463258 +0000 UTC))" 2025-08-13T20:07:27.654845316+00:00 stderr F I0813 20:07:27.654701 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:27.654653201 +0000 UTC))" 2025-08-13T20:07:27.654873837+00:00 stderr F I0813 20:07:27.654844 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:27.654817125 +0000 UTC))" 2025-08-13T20:07:27.654922278+00:00 stderr F I0813 20:07:27.654901 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:27.654853046 +0000 UTC))" 2025-08-13T20:07:27.654938689+00:00 stderr F I0813 20:07:27.654928 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:27.654912818 +0000 UTC))" 2025-08-13T20:07:27.654953739+00:00 stderr F I0813 20:07:27.654946 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.654934669 +0000 UTC))" 2025-08-13T20:07:27.654980680+00:00 stderr F I0813 20:07:27.654970 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.654952329 +0000 UTC))" 2025-08-13T20:07:27.655065712+00:00 stderr F I0813 20:07:27.654994 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.65497726 +0000 UTC))" 2025-08-13T20:07:27.655065712+00:00 stderr F I0813 20:07:27.655043 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.655025441 +0000 UTC))" 2025-08-13T20:07:27.655082973+00:00 stderr F I0813 20:07:27.655067 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:27.655050672 +0000 UTC))" 2025-08-13T20:07:27.655094843+00:00 stderr F I0813 20:07:27.655086 1 tlsconfig.go:178] "Loaded client CA" 
index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:27.655075233 +0000 UTC))" 2025-08-13T20:07:27.655130624+00:00 stderr F I0813 20:07:27.655107 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.655092493 +0000 UTC))" 2025-08-13T20:07:27.658235403+00:00 stderr F I0813 20:07:27.658033 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.658011927 +0000 UTC))" 2025-08-13T20:07:27.658620234+00:00 stderr F I0813 20:07:27.658438 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.658417289 +0000 UTC))" 2025-08-13T20:07:27.661626220+00:00 stderr F I0813 20:07:27.661547 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.705870569+00:00 stderr F I0813 20:07:27.703063 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.851120933+00:00 stderr F I0813 20:07:27.851060 1 reflector.go:351] Caches populated for *etcd.ImageLayers from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:08:03.895683401+00:00 stderr F E0813 20:08:03.895372 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:08:03.914238963+00:00 stderr F I0813 20:08:03.913112 1 trace.go:236] Trace[499527712]: "Create" accept:application/json, */*,audit-id:064330a5-92f0-4ee6-b57c-a4a65def5eee,client:10.217.0.46,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:cluster-samples-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:08:03.000) (total time: 912ms): 2025-08-13T20:08:03.914238963+00:00 stderr F Trace[499527712]: ---"Write to database call succeeded" len:436 908ms (20:08:03.911) 2025-08-13T20:08:03.914238963+00:00 stderr F Trace[499527712]: [912.494531ms] [912.494531ms] END 2025-08-13T20:08:42.680410211+00:00 stderr F E0813 20:08:42.679639 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.680410211+00:00 stderr F E0813 20:08:42.680000 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.687005750+00:00 stderr F E0813 20:08:42.686917 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.687221417+00:00 stderr F E0813 20:08:42.687005 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.691993933+00:00 stderr F E0813 20:08:42.691953 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.692173599+00:00 stderr F E0813 20:08:42.692135 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.695301828+00:00 stderr F E0813 20:08:42.695256 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.695398301+00:00 stderr F E0813 20:08:42.695380 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.747725021+00:00 stderr F E0813 20:08:42.747669 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.747938077+00:00 stderr F E0813 20:08:42.747888 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.752690934+00:00 stderr F E0813 20:08:42.752657 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.752819277+00:00 stderr F E0813 20:08:42.752753 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.756291127+00:00 stderr F E0813 20:08:42.756265 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.756361749+00:00 stderr F E0813 20:08:42.756345 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.760296242+00:00 stderr F E0813 20:08:42.760195 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.760377704+00:00 stderr F E0813 20:08:42.760333 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.763271847+00:00 stderr F E0813 20:08:42.763191 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.763293148+00:00 stderr F E0813 20:08:42.763279 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.517637905+00:00 stderr F E0813 20:08:43.517509 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.517637905+00:00 stderr F E0813 20:08:43.517602 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.522309689+00:00 stderr F E0813 20:08:43.521746 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.522309689+00:00 stderr F E0813 20:08:43.521870 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.525432328+00:00 stderr F E0813 20:08:43.525397 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.525530711+00:00 stderr F E0813 20:08:43.525514 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.534683723+00:00 stderr F E0813 20:08:43.534064 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.534683723+00:00 stderr F E0813 20:08:43.534151 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.536467764+00:00 stderr F E0813 20:08:43.536396 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.536542457+00:00 stderr F E0813 20:08:43.536487 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.546050469+00:00 stderr F E0813 20:08:43.545968 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.546185853+00:00 stderr F E0813 20:08:43.546164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.582191905+00:00 stderr F E0813 20:08:43.581025 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.582191905+00:00 stderr F E0813 20:08:43.581113 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.597845344+00:00 stderr F E0813 20:08:43.597742 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.598038620+00:00 stderr F E0813 20:08:43.598016 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.605990648+00:00 stderr F E0813 20:08:43.605034 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.605990648+00:00 stderr F E0813 20:08:43.605299 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.617762435+00:00 stderr F E0813 20:08:43.617714 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.618463275+00:00 stderr F E0813 20:08:43.618441 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.631235552+00:00 stderr F E0813 20:08:43.631111 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.631235552+00:00 stderr F E0813 20:08:43.631195 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.633063684+00:00 stderr F E0813 20:08:43.633030 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.633179007+00:00 stderr F E0813 20:08:43.633158 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.664941568+00:00 stderr F E0813 20:08:43.664815 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.664941568+00:00 stderr F E0813 20:08:43.664883 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.665028590+00:00 stderr F E0813 20:08:43.664756 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.665098482+00:00 stderr F E0813 20:08:43.665082 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.670473807+00:00 stderr F E0813 20:08:43.670001 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.670473807+00:00 stderr F E0813 20:08:43.670073 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.671117565+00:00 stderr F E0813 20:08:43.670938 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.671117565+00:00 stderr F E0813 20:08:43.670992 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.674330557+00:00 stderr F E0813 20:08:43.674193 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.674330557+00:00 stderr F E0813 20:08:43.674263 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.677569540+00:00 stderr F E0813 20:08:43.677422 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.677569540+00:00 stderr F E0813 20:08:43.677516 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.518948113+00:00 stderr F E0813 20:08:44.518836 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.519334454+00:00 stderr F E0813 20:08:44.519105 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.255174302+00:00 stderr F E0813 20:08:45.255113 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.255495301+00:00 stderr F E0813 20:08:45.255471 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.782377827+00:00 stderr F E0813 20:08:45.782289 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.782557402+00:00 stderr F E0813 20:08:45.782437 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.639857322+00:00 stderr F E0813 20:08:47.638317 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.639857322+00:00 stderr F E0813 20:08:47.638519 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.879284687+00:00 stderr F E0813 20:08:47.879093 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.879455542+00:00 stderr F E0813 20:08:47.879342 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159244084+00:00 stderr F E0813 20:08:48.159178 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159371267+00:00 stderr F E0813 20:08:48.159305 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159446559+00:00 stderr F E0813 20:08:48.159400 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159477540+00:00 stderr F E0813 20:08:48.159408 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.168062356+00:00 stderr F E0813 20:08:48.168036 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.168134548+00:00 stderr F E0813 20:08:48.168119 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.200477386+00:00 stderr F E0813 20:08:48.200372 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.200694172+00:00 stderr F E0813 20:08:48.200614 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.204462580+00:00 stderr F E0813 20:08:48.204390 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.204605784+00:00 stderr F E0813 20:08:48.204549 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.215371863+00:00 stderr F E0813 20:08:48.215286 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.215519747+00:00 stderr F E0813 20:08:48.215463 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.236359945+00:00 stderr F E0813 20:08:48.236258 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.236598911+00:00 stderr F E0813 20:08:48.236510 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.268222328+00:00 stderr F E0813 20:08:48.268104 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.268717872+00:00 stderr F E0813 20:08:48.268693 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.482943234+00:00 stderr F E0813 20:08:48.480051 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.482943234+00:00 stderr F E0813 20:08:48.480202 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.568999762+00:00 stderr F E0813 20:08:48.568880 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570530536+00:00 stderr F E0813 20:08:48.569033 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.580451170+00:00 stderr F E0813 20:08:48.580403 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.580549253+00:00 stderr F E0813 20:08:48.580532 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.012194089+00:00 stderr F E0813 20:08:49.012114 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.012219579+00:00 stderr F E0813 20:08:49.012201 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.326890131+00:00 stderr F E0813 20:08:49.326755 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.327013085+00:00 stderr F E0813 20:08:49.326884 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.673518909+00:00 stderr F E0813 20:08:49.673432 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.673628263+00:00 stderr F E0813 20:08:49.673527 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.723191914+00:00 stderr F E0813 20:08:49.723034 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.723191914+00:00 stderr F E0813 20:08:49.723164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.819041402+00:00 stderr F E0813 20:08:49.818928 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.819041402+00:00 stderr F E0813 20:08:49.818993 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.824400875+00:00 stderr F E0813 20:08:49.824312 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.824400875+00:00 stderr F E0813 20:08:49.824372 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.955995548+00:00 stderr F E0813 20:08:49.955762 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.955995548+00:00 stderr F E0813 20:08:49.955940 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.066716563+00:00 stderr F E0813 20:08:50.066586 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.066716563+00:00 stderr F E0813 20:08:50.066667 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.139565671+00:00 stderr F E0813 20:08:50.138008 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.139565671+00:00 stderr F E0813 20:08:50.138109 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.306224180+00:00 stderr F E0813 20:08:51.306053 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.306224180+00:00 stderr F E0813 20:08:51.306140 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.772305213+00:00 stderr F E0813 20:08:51.771570 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.772305213+00:00 stderr F E0813 20:08:51.772201 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.803185008+00:00 stderr F E0813 20:08:51.803111 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.803239480+00:00 stderr F E0813 20:08:51.803183 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.176496652+00:00 stderr F E0813 20:08:52.176361 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.176496652+00:00 stderr F E0813 20:08:52.176478 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.487428976+00:00 stderr F E0813 20:08:52.487371 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.487649713+00:00 stderr F E0813 20:08:52.487587 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.496487466+00:00 stderr F E0813 20:08:52.496452 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.496672701+00:00 stderr F E0813 20:08:52.496648 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.810637063+00:00 stderr F E0813 20:08:52.810556 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.810637063+00:00 stderr F E0813 20:08:52.810624 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.814600507+00:00 stderr F E0813 20:08:52.814523 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.814600507+00:00 stderr F E0813 20:08:52.814589 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.029824327+00:00 stderr F E0813 20:08:53.029690 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.029975582+00:00 stderr F E0813 20:08:53.029890 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.010145296+00:00 stderr F E0813 20:08:56.009052 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.010145296+00:00 stderr F E0813 20:08:56.009245 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.600006948+00:00 stderr F E0813 20:08:56.599861 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.600116191+00:00 stderr F E0813 20:08:56.600000 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.304175456+00:00 stderr F E0813 20:08:57.304076 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.304227648+00:00 stderr F E0813 20:08:57.304172 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.337506012+00:00 stderr F E0813 20:08:57.337392 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.337506012+00:00 stderr F E0813 20:08:57.337457 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.610713075+00:00 stderr F E0813 20:08:57.610572 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.610713075+00:00 stderr F E0813 20:08:57.610688 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.794263737+00:00 stderr F E0813 20:08:57.794096 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.794263737+00:00 stderr F E0813 20:08:57.794186 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.658566789+00:00 stderr F E0813 20:08:58.658408 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.658633531+00:00 stderr F E0813 20:08:58.658563 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.713140353+00:00 stderr F E0813 20:08:58.713075 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.713287518+00:00 stderr F E0813 20:08:58.713269 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.742951748+00:00 stderr F E0813 20:08:58.742866 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.743079232+00:00 stderr F E0813 20:08:58.743062 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:34.148973349+00:00 stderr F I0813 20:09:34.148730 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:34.869006983+00:00 stderr F I0813 20:09:34.868932 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.576449306+00:00 stderr F I0813 20:09:35.576346 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.784050738+00:00 stderr F I0813 20:09:35.783984 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:37.050415975+00:00 stderr F I0813 20:09:37.050268 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:37.312150859+00:00 stderr F I0813 20:09:37.310042 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:39.898706978+00:00 stderr F I0813 20:09:39.898600 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:46.110149745+00:00 stderr F I0813 20:09:46.110030 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:48.834429082+00:00 stderr F I0813 20:09:48.833277 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:48.892728974+00:00 stderr F I0813 20:09:48.892672 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:51.791420782+00:00 stderr F I0813 20:09:51.791292 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:53.442499030+00:00 stderr F I0813 20:09:53.435370 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:56.934371515+00:00 stderr F I0813 20:09:56.934268 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:59.692452641+00:00 stderr F I0813 20:09:59.692353 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:03.923274763+00:00 stderr F I0813 20:10:03.923110 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:05.041617317+00:00 stderr F I0813 20:10:05.041496 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:13.806297546+00:00 stderr F I0813 20:10:13.806192 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:14.139715366+00:00 stderr F I0813 20:10:14.139628 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:35.573370085+00:00 stderr F I0813 20:10:35.573172 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:35.662360187+00:00 stderr F I0813 20:10:35.662208 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:10:36.571610796+00:00 stderr F I0813 20:10:36.571495 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:17:34.122353834+00:00 stderr F I0813 20:17:34.118870 1 trace.go:236] Trace[1693487721]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1fe089cf-6742-4b4d-a354-6690bd53f61e,client:10.217.0.19,api-group:route.openshift.io,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:routes,scope:resource,url:/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:17:33.447) (total time: 671ms): 2025-08-13T20:17:34.122353834+00:00 stderr F Trace[1693487721]: ---"About to write a response" 670ms (20:17:34.118) 2025-08-13T20:17:34.122353834+00:00 stderr F Trace[1693487721]: [671.040603ms] [671.040603ms] END 2025-08-13T20:18:26.366246249+00:00 stderr F I0813 20:18:26.362702 1 trace.go:236] Trace[1917860378]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:07499f87-1a86-4458-a0a3-18f0c0d9c932,client:10.217.0.19,api-group:route.openshift.io,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:routes,scope:resource,url:/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:18:25.572) (total time: 789ms): 2025-08-13T20:18:26.366246249+00:00 stderr F Trace[1917860378]: ---"About to write a response" 788ms (20:18:26.360) 2025-08-13T20:18:26.366246249+00:00 stderr F Trace[1917860378]: [789.800313ms] [789.800313ms] END 2025-08-13T20:18:34.542003345+00:00 stderr F E0813 20:18:34.540993 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:18:34.763501320+00:00 stderr F I0813 20:18:34.762714 1 trace.go:236] Trace[2007589504]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:05d22fbd-a8ec-46ef-8a2c-3a920dd334ea,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:18:33.432) (total time: 1330ms): 2025-08-13T20:18:34.763501320+00:00 stderr F Trace[2007589504]: ---"Write to database call succeeded" len:287 1316ms (20:18:34.751) 2025-08-13T20:18:34.763501320+00:00 stderr F Trace[2007589504]: [1.330074082s] [1.330074082s] END 2025-08-13T20:22:18.881545212+00:00 stderr F E0813 20:22:18.880539 1 strategy.go:60] unable to parse manifest for "sha256:5f73c1b804b7ff63f61151b4f194fe45c645de27671a182582eac8b3fcb30dd4": unexpected end of JSON input 2025-08-13T20:22:18.903663024+00:00 stderr F I0813 20:22:18.903580 1 trace.go:236] Trace[1121898041]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:da392399-76ee-46af-89a0-9e901be9ab96,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:22:18.339) (total time: 563ms): 2025-08-13T20:22:18.903663024+00:00 stderr F Trace[1121898041]: ---"Write to database call succeeded" len:274 562ms (20:22:18.902) 2025-08-13T20:22:18.903663024+00:00 stderr F Trace[1121898041]: [563.883166ms] [563.883166ms] END 2025-08-13T20:33:33.938260917+00:00 stderr F E0813 20:33:33.937966 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:33:33.953413383+00:00 stderr F I0813 20:33:33.951719 1 trace.go:236] Trace[1496715544]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:11e9111b-024c-48db-a0d2-0ce5b8dbf5ab,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:33:33.343) (total time: 607ms): 2025-08-13T20:33:33.953413383+00:00 stderr F Trace[1496715544]: ---"Write to database call succeeded" len:287 606ms (20:33:33.950) 2025-08-13T20:33:33.953413383+00:00 stderr F Trace[1496715544]: [607.831872ms] [607.831872ms] END 2025-08-13T20:41:04.002220584+00:00 stderr F E0813 20:41:04.001878 1 strategy.go:60] unable to parse manifest for "sha256:5f73c1b804b7ff63f61151b4f194fe45c645de27671a182582eac8b3fcb30dd4": unexpected end of JSON input 2025-08-13T20:41:04.014897090+00:00 stderr F I0813 20:41:04.013060 1 trace.go:236] Trace[586486215]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:934f2366-fb55-42fc-80ba-80239bd09acf,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:41:03.334) (total time: 678ms): 2025-08-13T20:41:04.014897090+00:00 stderr F Trace[586486215]: ---"Write to database call succeeded" len:274 676ms (20:41:04.011) 2025-08-13T20:41:04.014897090+00:00 stderr F Trace[586486215]: [678.206432ms] [678.206432ms] END 2025-08-13T20:42:36.314535278+00:00 stderr F I0813 20:42:36.313683 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.321893450+00:00 stderr F I0813 20:42:36.321625 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.322135567+00:00 stderr F I0813 20:42:36.322017 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.382031144+00:00 stderr F I0813 20:42:36.316915 1 streamwatcher.go:111] Unexpected EOF during watch 
stream event decoding: unexpected EOF 2025-08-13T20:42:36.414129159+00:00 stderr F I0813 20:42:36.316948 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415028545+00:00 stderr F I0813 20:42:36.316971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415411526+00:00 stderr F I0813 20:42:36.316985 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415672034+00:00 stderr F I0813 20:42:36.316998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415962002+00:00 stderr F I0813 20:42:36.317012 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437615566+00:00 stderr F I0813 20:42:36.317023 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441272512+00:00 stderr F I0813 20:42:36.317122 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441877979+00:00 stderr F I0813 20:42:36.317139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442279471+00:00 stderr F I0813 20:42:36.317166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.442527238+00:00 stderr F I0813 20:42:36.317179 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445592236+00:00 stderr F I0813 20:42:36.317191 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446068460+00:00 stderr F I0813 20:42:36.317208 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446335858+00:00 stderr F I0813 20:42:36.317253 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.457722176+00:00 stderr F I0813 20:42:36.317274 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317307 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317326 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:38.033451965+00:00 stderr F I0813 20:42:38.032809 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:38.033451965+00:00 stderr F I0813 20:42:38.032891 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:38.035510985+00:00 stderr F I0813 20:42:38.034578 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:38.035510985+00:00 stderr F I0813 20:42:38.032881 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks 
has completed 2025-08-13T20:42:38.041608500+00:00 stderr F W0813 20:42:38.041489 1 genericapiserver.go:1060] failed to create event openshift-apiserver/apiserver-7fc54b8dd7-d2bhp.185b6e4d4a884464: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/events": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log
2025-10-13T00:14:58.682637017+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="log level info" 2025-10-13T00:14:58.682637017+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="TLS keys set, using https for metrics" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=rolebindings" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="batch/v1, Resource=jobs" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=pods" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=services" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=serviceaccounts" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=roles" 2025-10-13T00:14:58.841068504+00:00 stderr F time="2025-10-13T00:14:58Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=configmaps" 2025-10-13T00:14:59.071064555+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="detected ability to filter informers" canFilter=true 2025-10-13T00:14:59.086965432+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="OpenShift Proxy API available - setting up watch for Proxy type" 2025-10-13T00:14:59.086965432+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="OpenShift Proxy query will be used to fetch cluster proxy configuration" 2025-10-13T00:14:59.087008233+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="[CSV NS Plug-in] setting up csv namespace plug-in for namespaces: []" 
2025-10-13T00:14:59.087016223+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="[CSV NS Plug-in] registering namespace informer" 2025-10-13T00:14:59.087108926+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="[CSV NS Plug-in] setting up namespace: " 2025-10-13T00:14:59.087224659+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="[CSV NS Plug-in] registered csv queue informer for: " 2025-10-13T00:14:59.087224659+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="[CSV NS Plug-in] finished setting up csv namespace labeler plugin" 2025-10-13T00:14:59.098230269+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-10-13T00:14:59.098278861+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="operator ready" 2025-10-13T00:14:59.098278861+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="starting informers..." 2025-10-13T00:14:59.098278861+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="informers started" 2025-10-13T00:14:59.098278861+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="waiting for caches to sync..." 2025-10-13T00:14:59.201205214+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="starting workers..." 2025-10-13T00:14:59.201745000+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="Initializing cluster operator monitor for package server" 2025-10-13T00:14:59.201745000+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="monitoring the following components [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-10-13T00:14:59.205887085+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="starting clusteroperator monitor loop" monitor=clusteroperator 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v2.OperatorCondition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Role"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.RoleBinding"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Deployment"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting Controller","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Operator"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: 
*v1.Deployment"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Namespace"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.CustomResourceDefinition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.APIService"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.Subscription"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.InstallPlan"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v2.OperatorCondition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting Controller","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Deployment"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Namespace"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Service"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.CustomResourceDefinition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.APIService"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.Subscription"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting 
EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205887085+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205938236+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205938236+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205938236+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205938236+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205938236+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-10-13T00:14:59.205938236+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-10-13T00:14:59.208139502+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.Subscription"} 2025-10-13T00:14:59.208139502+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-10-13T00:14:59.208139502+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.InstallPlan"} 2025-10-13T00:14:59.208139502+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting Controller","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-10-13T00:14:59.208207484+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1.ClusterOperator"} 2025-10-13T00:14:59.208207484+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting 
EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"channel source: 0xc000781540"} 2025-10-13T00:14:59.208219864+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-10-13T00:14:59.208242565+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting Controller","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-10-13T00:14:59.215688308+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="ClusterOperator api is present" monitor=clusteroperator 2025-10-13T00:14:59.215688308+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="initializing clusteroperator resource(s) for [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-10-13T00:14:59.243632986+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="initialized cluster resource - operator-lifecycle-manager-packageserver" monitor=clusteroperator 2025-10-13T00:14:59.307733026+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting workers","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","worker count":1} 2025-10-13T00:14:59.314360315+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-10-13T00:14:59.314360315+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting workers","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","worker count":1} 2025-10-13T00:14:59.334422396+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting workers","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","worker count":1} 2025-10-13T00:14:59.334422396+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting workers","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","worker count":1} 2025-10-13T00:14:59.336652423+00:00 stderr F {"level":"info","ts":"2025-10-13T00:14:59Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-10-13T00:14:59.376106895+00:00 stderr F time="2025-10-13T00:14:59Z" level=warning msg="unhealthy component: apiServices not installed" csv=packageserver id=H7Y7G namespace=openshift-operator-lifecycle-manager phase=Succeeded strategy=deployment 2025-10-13T00:14:59.384446884+00:00 stderr F I1013 00:14:59.383224 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28612", FieldPath:""}): type: 'Warning' reason: 'ComponentUnhealthy' apiServices not installed 2025-10-13T00:14:59.441247816+00:00 stderr F time="2025-10-13T00:14:59Z" level=warning msg="unhealthy component: apiServices not installed" csv=packageserver id=ZqQr0 namespace=openshift-operator-lifecycle-manager phase=Succeeded strategy=deployment 
2025-10-13T00:14:59.441247816+00:00 stderr F I1013 00:14:59.440375 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28612", FieldPath:""}): type: 'Warning' reason: 'ComponentUnhealthy' apiServices not installed 2025-10-13T00:14:59.449971228+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=v+/2p namespace=openshift-operator-lifecycle-manager phase=Succeeded 2025-10-13T00:14:59.450086851+00:00 stderr F E1013 00:14:59.450046 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:14:59.545488160+00:00 stderr F time="2025-10-13T00:14:59Z" level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=oxJMe namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2025-10-13T00:14:59.546075877+00:00 stderr F I1013 00:14:59.545926 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"39955", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2025-10-13T00:14:59.685428083+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=UZsHL namespace=openshift-operator-lifecycle-manager phase=Pending 2025-10-13T00:14:59.685428083+00:00 stderr F I1013 00:14:59.683603 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"39977", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2025-10-13T00:14:59.711076521+00:00 stderr F time="2025-10-13T00:14:59Z" level=warning msg="reusing existing cert packageserver-service-cert" 2025-10-13T00:14:59.795704856+00:00 stderr F I1013 00:14:59.795620 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"39986", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2025-10-13T00:14:59.829350314+00:00 stderr F time="2025-10-13T00:14:59Z" level=warning msg="reusing existing cert packageserver-service-cert" 2025-10-13T00:14:59.904973299+00:00 stderr F I1013 00:14:59.902516 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", 
ResourceVersion:"39986", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2025-10-13T00:14:59.910061762+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=Kv/O/ namespace=openshift-operator-lifecycle-manager phase=InstallReady 2025-10-13T00:14:59.910174085+00:00 stderr F E1013 00:14:59.910113 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-10-13T00:14:59.986038358+00:00 stderr F time="2025-10-13T00:14:59Z" level=info msg="install strategy successful" csv=packageserver id=kQ9cF namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-10-13T00:14:59.986602465+00:00 stderr F I1013 00:14:59.986571 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"39997", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' install strategy completed with no errors 2025-10-13T00:16:02.221314367+00:00 stderr F 2025/10/13 00:16:02 http: TLS handshake error from 10.217.0.28:41628: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "Red Hat, Inc.") ././@LongLink0000644000000000000000000000031100000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000011562015073043233033062 0ustar zuulzuul2025-08-13T19:59:23.408499878+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="log level info" 2025-08-13T19:59:23.408499878+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="TLS keys set, using https for metrics" 2025-08-13T19:59:26.390932373+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=configmaps" 2025-08-13T19:59:26.390932373+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="batch/v1, Resource=jobs" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=rolebindings" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=services" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=pods" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, 
Resource=roles" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=serviceaccounts" 2025-08-13T19:59:28.078977601+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="detected ability to filter informers" canFilter=true 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="OpenShift Proxy API available - setting up watch for Proxy type" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="OpenShift Proxy query will be used to fetch cluster proxy configuration" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] setting up csv namespace plug-in for namespaces: []" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] registering namespace informer" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] setting up namespace: " 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] registered csv queue informer for: " 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] finished setting up csv namespace labeler plugin" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="operator ready" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="starting informers..." 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="informers started" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:32.924188753+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="starting workers..." 
2025-08-13T19:59:33.031704498+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="Initializing cluster operator monitor for package server" 2025-08-13T19:59:33.032017877+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="monitoring the following components [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-08-13T19:59:33.055910848+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="starting clusteroperator monitor loop" monitor=clusteroperator 2025-08-13T19:59:33.246755698+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Role"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.RoleBinding"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Operator"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Namespace"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.CustomResourceDefinition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.APIService"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind 
source: *v1alpha1.InstallPlan"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T19:59:33.382939450+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ClusterOperator api is present" 
monitor=clusteroperator 2025-08-13T19:59:33.382939450+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="initializing clusteroperator resource(s) for [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-08-13T19:59:33.542897060+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="initialized cluster resource - operator-lifecycle-manager-packageserver" monitor=clusteroperator 2025-08-13T19:59:33.553977566+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.554066839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.554105840+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Namespace"} 2025-08-13T19:59:33.554139551+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Service"} 2025-08-13T19:59:33.554170521+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.CustomResourceDefinition"} 2025-08-13T19:59:33.554200772+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.APIService"} 2025-08-13T19:59:33.554544782+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:33.554595594+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.555013716+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555085628+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555128149+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 
2025-08-13T19:59:33.555159860+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555770787+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556103227+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556141448+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556177709+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.InstallPlan"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting Controller","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T19:59:36.144826478+00:00 stderr F 2025/08/13 19:59:36 http: TLS handshake error from 10.217.0.2:45622: EOF 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1.ClusterOperator"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"channel source: 0xc000b88840"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting 
Controller","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T19:59:36.572124379+00:00 stderr F 2025/08/13 19:59:36 http: TLS handshake error from 10.217.0.2:45620: EOF 2025-08-13T19:59:36.941610091+00:00 stderr F time="2025-08-13T19:59:36Z" level=warning msg="install timed out" csv=packageserver id=89lYL namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:36.979210383+00:00 stderr F I0813 19:59:36.967910 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"23959", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout 2025-08-13T19:59:37.168694394+00:00 stderr F I0813 19:59:37.167075 1 trace.go:236] Trace[437002520]: "DeltaFIFO Pop Process" ID:openshift-machine-api/master-user-data,Depth:124,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:36.927) (total time: 224ms): 2025-08-13T19:59:37.168694394+00:00 stderr F Trace[437002520]: [224.761967ms] [224.761967ms] END 2025-08-13T19:59:39.160935043+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","worker count":1} 2025-08-13T19:59:39.161732036+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","worker count":1} 2025-08-13T19:59:39.162600960+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-08-13T19:59:39.163362922+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","worker count":1} 2025-08-13T19:59:39.167356936+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-08-13T19:59:39.181857399+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","worker count":1} 2025-08-13T19:59:39.409155649+00:00 stderr F time="2025-08-13T19:59:39Z" level=warning msg="install timed out" csv=packageserver id=wFTJL namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:39.491326631+00:00 stderr F I0813 19:59:39.461032 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"23959", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout 2025-08-13T19:59:39.678121055+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest 
version and try again" csv=packageserver id=YtP4v namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:39.678121055+00:00 stderr F E0813 19:59:39.677187 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:40.946415519+00:00 stderr F time="2025-08-13T19:59:40Z" level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=Ot2sV namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2025-08-13T19:59:40.956756573+00:00 stderr F I0813 19:59:40.956497 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28220", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2025-08-13T19:59:41.432936177+00:00 stderr F time="2025-08-13T19:59:41Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=GN3+B namespace=openshift-operator-lifecycle-manager phase=Pending 2025-08-13T19:59:41.432936177+00:00 stderr F I0813 19:59:41.421229 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28286", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2025-08-13T19:59:42.403819742+00:00 stderr F time="2025-08-13T19:59:42Z" level=warning msg="reusing existing cert packageserver-service-cert" 2025-08-13T19:59:46.244202112+00:00 stderr F I0813 19:59:46.239490 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28295", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2025-08-13T19:59:47.823165362+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="install strategy successful" csv=packageserver id=6ssjB namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:47.824931572+00:00 stderr F I0813 19:59:47.823956 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28383", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-08-13T19:59:48.876738185+00:00 stderr F time="2025-08-13T19:59:48Z" level=info msg="install strategy successful" csv=packageserver id=pY0V5 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:48.880552904+00:00 stderr F I0813 19:59:48.877424 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", 
ResourceVersion:"28383", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-08-13T19:59:48.965878547+00:00 stderr F time="2025-08-13T19:59:48Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=k/B5q namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:48.976565881+00:00 stderr F E0813 19:59:48.973051 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:49.634998790+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="install strategy successful" csv=packageserver id=HgOMt namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:50.622862930+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="install strategy successful" csv=packageserver id=9BQnX namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:51.329144563+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="install strategy successful" csv=packageserver id=dbNQV namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.031285079+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=rj7n7 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.332868995+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=JU2gq namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.533734821+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=I3wSK namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.820222188+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=nCH5J namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:53.054214688+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="could not query for GVK in api discovery" err="the server is currently unable to handle the request" group=packages.operators.coreos.com kind=PackageManifest version=v1 2025-08-13T19:59:53.110286586+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="could not update install status" csv=packageserver error="the server is currently unable to handle the request" id=nhmlX namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:53.110286586+00:00 stderr F E0813 19:59:53.107826 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: the server is currently unable to handle the request 2025-08-13T19:59:53.290519303+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="install strategy successful" csv=packageserver id=+yVoI namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 
2025-08-13T19:59:53.297101411+00:00 stderr F I0813 19:59:53.296413 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28423", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' install strategy completed with no errors 2025-08-13T20:03:41.629323813+00:00 stderr F time="2025-08-13T20:03:41Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:41.629323813+00:00 stderr F time="2025-08-13T20:03:41Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:56.983016152+00:00 stderr F time="2025-08-13T20:03:56Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:56.984258098+00:00 stderr F time="2025-08-13T20:03:56Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.043288057+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.043288057+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.071758619+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.071758619+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.102757923+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.admin-2SOrzhaSHllEqB6Becsc9Z2BniBuXZxdBrPmIq\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.103560566+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.edit-brB9auo7mhdQtycRdrSZm5XlKKbUjCe698FPlD\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.105920163+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.view-9QCGFNcofBHQ2DeWEf2qFa4NWqTOGskUedO4Tz\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.117917436+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.118091211+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:02.371881741+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:02.371920852+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:02.610171729+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:02.610171729+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:05.519098082+00:00 stderr F time="2025-08-13T20:04:05Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:05.519098082+00:00 stderr F time="2025-08-13T20:04:05Z" level=error msg="failed to 
get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:05:07.886185865+00:00 stderr F time="2025-08-13T20:05:07Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:05:07.897930811+00:00 stderr F time="2025-08-13T20:05:07Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:42:43.424379186+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="exiting from clusteroperator monitor loop" monitor=clusteroperator 2025-08-13T20:42:43.425568160+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for non leader election runnables"} 2025-08-13T20:42:43.432031667+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for leader election runnables"} 2025-08-13T20:42:43.433208311+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T20:42:43.433208311+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433258182+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers 
finished","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for caches"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for webhooks"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for HTTP servers"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Wait completed, proceeding to shutdown the manager"} ././@LongLink0000644000000000000000000000025500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015073043233033043 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015073043233033043 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000000007515073043233033047 0ustar zuulzuul2025-10-13T00:14:59.813975573+00:00 stdout F serving on 8080 ././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000000007515073043233033047 0ustar zuulzuul2025-08-13T19:59:14.989395384+00:00 stdout F serving on 8080 ././@LongLink0000644000000000000000000000026400000000000011605 Lustar 
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log <==
2025-08-13T19:50:44.187310475+00:00 stdout F 2025-08-13T19:50:44+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
2025-08-13T19:50:45.643448822+00:00 stderr F W0813 19:50:45.641749 1 deprecated.go:66]
2025-08-13T19:50:45.643448822+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:50:45.643448822+00:00 stderr F
2025-08-13T19:50:45.643448822+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:50:45.643448822+00:00 stderr F
2025-08-13T19:50:45.643448822+00:00 stderr F ===============================================
2025-08-13T19:50:45.643448822+00:00 stderr F
2025-08-13T19:50:45.669538817+00:00 stderr F I0813 19:50:45.668563 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:50:45.669538817+00:00 stderr F I0813 19:50:45.668714 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:50:45.683583669+00:00 stderr F I0813 19:50:45.683419 1 kube-rbac-proxy.go:395] Starting TCP socket on :9108
2025-08-13T19:50:45.688697335+00:00 stderr F I0813 19:50:45.688343 1 kube-rbac-proxy.go:402] Listening securely on :9108
2025-08-13T20:42:46.651250588+00:00 stderr F I0813 20:42:46.650986 1 kube-rbac-proxy.go:493] received interrupt, shutting down
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log <==
2025-10-13T00:12:49.022566761+00:00 stdout F 2025-10-13T00:12:49+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
2025-10-13T00:12:49.104375934+00:00 stderr F W1013 00:12:49.104202 1 deprecated.go:66]
2025-10-13T00:12:49.104375934+00:00 stderr F ==== Removed Flag Warning ======================
2025-10-13T00:12:49.104375934+00:00 stderr F
2025-10-13T00:12:49.104375934+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-10-13T00:12:49.104375934+00:00 stderr F
2025-10-13T00:12:49.104375934+00:00 stderr F ===============================================
2025-10-13T00:12:49.104375934+00:00 stderr F
2025-10-13T00:12:49.115844841+00:00 stderr F I1013 00:12:49.115770 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-10-13T00:12:49.115881382+00:00 stderr F I1013 00:12:49.115851 1 kube-rbac-proxy.go:347] Reading certificate files
2025-10-13T00:12:49.119630599+00:00 stderr F I1013 00:12:49.119598 1 kube-rbac-proxy.go:395] Starting TCP socket on :9108
2025-10-13T00:12:49.119995090+00:00 stderr F I1013 00:12:49.119978 1 kube-rbac-proxy.go:402] Listening securely on :9108
==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log <==
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ -f /env/_master ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v4_join_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v6_join_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v4_transit_switch_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v6_transit_switch_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + dns_name_resolver_enabled_flag=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ false == \t\r\u\e ]]
2025-08-13T19:50:47.008704362+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N'
2025-08-13T19:50:47.017865444+00:00 stdout F I0813 19:50:47.015608289 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc
2025-08-13T19:50:47.017888264+00:00 stderr F + echo 'I0813 19:50:47.015608289 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc'
2025-08-13T19:50:47.017888264+00:00 stderr F + exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager crc --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration
2025-08-13T19:50:50.137364720+00:00 stderr F I0813 19:50:50.135597 1 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf
2025-08-13T19:50:50.138350768+00:00 stderr F I0813 19:50:50.137279 1 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576
RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-08-13T19:50:50.196606393+00:00 stderr F I0813 19:50:50.194947 1 leaderelection.go:250] attempting to acquire leader lease openshift-ovn-kubernetes/ovn-kubernetes-master... 
2025-08-13T19:50:50.221444453+00:00 stderr F I0813 19:50:50.219718 1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108" 2025-08-13T19:50:51.311304292+00:00 stderr F I0813 19:50:51.308268 1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master 2025-08-13T19:50:51.313362211+00:00 stderr F I0813 19:50:51.312386 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"252a0995-33c8-4721-8f3e-d993f8bb73c8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"25604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-77c846df58-6l97b became leader 2025-08-13T19:50:51.323193252+00:00 stderr F I0813 19:50:51.313472 1 ovnkube.go:386] Won leader election; in active mode 2025-08-13T19:50:51.783390905+00:00 stderr F I0813 19:50:51.783329 1 secondary_network_cluster_manager.go:38] Creating secondary network cluster manager 2025-08-13T19:50:51.785274099+00:00 stderr F I0813 19:50:51.785245 1 egressservice_cluster.go:97] Setting up event handlers for Egress Services 2025-08-13T19:50:51.786952736+00:00 stderr F I0813 19:50:51.786879 1 clustermanager.go:123] Starting the cluster manager 2025-08-13T19:50:51.787671477+00:00 stderr F I0813 19:50:51.787651 1 factory.go:405] Starting watch factory 2025-08-13T19:50:51.792769953+00:00 stderr F I0813 19:50:51.792729 1 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.793460833+00:00 stderr F I0813 19:50:51.792660 1 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.793460833+00:00 stderr F I0813 19:50:51.793314 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.821478453+00:00 stderr F I0813 19:50:51.796024 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.822853883+00:00 stderr F I0813 19:50:51.796034 1 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833061844+00:00 stderr F I0813 19:50:51.833013 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833449055+00:00 stderr F I0813 19:50:51.793187 1 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833536038+00:00 stderr F I0813 19:50:51.833482 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.866091138+00:00 stderr F I0813 19:50:51.866025 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.879486051+00:00 stderr F I0813 19:50:51.879300 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.886033998+00:00 stderr F I0813 19:50:51.884115 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:52.016009213+00:00 stderr F I0813 19:50:52.013643 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:52.138039531+00:00 stderr F I0813 19:50:52.133891 1 reflector.go:289] Starting reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.138039531+00:00 stderr F I0813 19:50:52.134003 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.225412408+00:00 stderr F I0813 19:50:52.223535 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.252296116+00:00 stderr F I0813 19:50:52.246244 1 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.252296116+00:00 stderr F I0813 19:50:52.246268 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.256614110+00:00 stderr F I0813 19:50:52.255960 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.352689005+00:00 stderr F I0813 19:50:52.352212 1 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.352689005+00:00 stderr F I0813 19:50:52.352334 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.360154769+00:00 stderr F I0813 19:50:52.359628 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.455446802+00:00 stderr F I0813 19:50:52.454619 1 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.455446802+00:00 stderr F I0813 19:50:52.455116 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.473755876+00:00 stderr F I0813 19:50:52.473481 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.559030883+00:00 stderr F I0813 19:50:52.558718 1 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.559030883+00:00 stderr F I0813 19:50:52.558875 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.563253964+00:00 stderr F I0813 19:50:52.563219 1 reflector.go:351] Caches populated for 
*v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.702912 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.703033 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.703049 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:50:52.706934640+00:00 stderr F I0813 19:50:52.706318 1 zone_cluster_controller.go:244] Node crc has the id 2 set 2025-08-13T19:50:52.714757014+00:00 stderr F I0813 19:50:52.714187 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:50:52.809440979+00:00 stderr F E0813 19:50:52.803290 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:52.809440979+00:00 stderr F E0813 19:50:52.803425 1 obj_retry.go:533] Failed to create *v1.Node crc, error: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:52.809440979+00:00 stderr F I0813 19:50:52.803522 1 secondary_network_cluster_manager.go:65] Starting secondary network cluster manager 2025-08-13T19:50:52.856380130+00:00 stderr F I0813 19:50:52.853690 1 network_attach_def_controller.go:134] Starting cluster-manager NAD controller 2025-08-13T19:50:52.856380130+00:00 stderr F I0813 19:50:52.855582 1 shared_informer.go:311] Waiting for caches to sync for cluster-manager 2025-08-13T19:50:52.857419180+00:00 stderr F I0813 19:50:52.857099 1 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.857419180+00:00 stderr F I0813 19:50:52.857151 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.870162564+00:00 stderr F I0813 19:50:52.869381 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.958608312+00:00 stderr F I0813 19:50:52.957364 1 shared_informer.go:318] Caches are synced for cluster-manager 2025-08-13T19:50:52.958608312+00:00 stderr F I0813 19:50:52.957688 1 network_attach_def_controller.go:182] Starting repairing loop for cluster-manager 2025-08-13T19:50:52.959329373+00:00 stderr F I0813 19:50:52.959096 1 
network_attach_def_controller.go:184] Finished repairing loop for cluster-manager: 1.407231ms err: 2025-08-13T19:50:52.959329373+00:00 stderr F I0813 19:50:52.959265 1 network_attach_def_controller.go:153] Starting workers for cluster-manager NAD controller 2025-08-13T19:50:52.969034880+00:00 stderr F W0813 19:50:52.966294 1 egressip_healthcheck.go:165] Health checking using insecure connection 2025-08-13T19:50:53.964938774+00:00 stderr F W0813 19:50:53.963445 1 egressip_healthcheck.go:182] Could not connect to crc (10.217.0.2:9107): context deadline exceeded 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975184 1 egressip_controller.go:459] EgressIP node reachability enabled and using gRPC port 9107 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975308 1 egressservice_cluster.go:170] Starting Egress Services Controller 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975431 1 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975463 1 shared_informer.go:318] Caches are synced for egressservices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975470 1 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975481 1 shared_informer.go:318] Caches are synced for egressservices_services 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975489 1 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975494 1 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975501 1 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975506 1 shared_informer.go:318] Caches are synced for egressservices_nodes 2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975510 1 egressservice_cluster.go:187] Repairing Egress Services 2025-08-13T19:50:53.979904332+00:00 stderr F I0813 19:50:53.978374 1 kube.go:267] Setting labels map[] on node crc 2025-08-13T19:50:54.041876633+00:00 stderr F E0813 19:50:54.041484 1 egressservice_cluster.go:190] Failed to repair Egress Services entries: failed to remove stale labels map[] from node crc, err: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:54.041876633+00:00 stderr F I0813 19:50:54.041708 1 status_manager.go:210] Starting StatusManager with typed managers: map[adminpolicybasedexternalroutes:0xc000543280 egressfirewalls:0xc000543640 egressqoses:0xc000543a00] 2025-08-13T19:50:54.058164418+00:00 stderr F I0813 19:50:54.056436 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-08-13T19:50:54.060862575+00:00 stderr F I0813 19:50:54.060104 1 controller.go:69] Adding controller zone_tracker event handlers 2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063359 1 shared_informer.go:311] Waiting for caches to sync for zone_tracker 2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063452 1 shared_informer.go:318] Caches are synced for zone_tracker 
2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063583 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-08-13T19:50:54.071896051+00:00 stderr F I0813 19:50:54.071052 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 2.665757ms 2025-08-13T19:50:54.071896051+00:00 stderr F I0813 19:50:54.071883 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.072169 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076453 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076479 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076560 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076567 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076573 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076580 1 controller.go:93] Starting controller zone_tracker with 1 workers 2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076820 1 controller.go:69] Adding controller egressqoses_statusmanager event handlers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.089351 1 shared_informer.go:311] Waiting for caches to sync for egressqoses_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090597 1 shared_informer.go:318] Caches are synced for egressqoses_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090843 1 controller.go:93] Starting controller egressqoses_statusmanager with 1 workers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090973 1 controller.go:69] Adding controller adminpolicybasedexternalroutes_statusmanager event handlers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091220 1 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091228 1 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes_statusmanager 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091236 1 controller.go:93] Starting controller adminpolicybasedexternalroutes_statusmanager with 1 workers 2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091582 1 controller.go:69] Adding controller egressfirewalls_statusmanager event handlers 2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110354 1 shared_informer.go:311] Waiting for caches to sync for egressfirewalls_statusmanager 2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110587 1 shared_informer.go:318] Caches are synced for egressfirewalls_statusmanager 2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110615 1 controller.go:93] Starting controller egressfirewalls_statusmanager with 1 workers 2025-08-13T19:51:22.807417979+00:00 stderr F I0813 19:51:22.806770 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:51:22.807519982+00:00 stderr F I0813 19:51:22.807427 1 obj_retry.go:358] Adding new object: *v1.Node crc 
2025-08-13T19:51:22.808847550+00:00 stderr F I0813 19:51:22.807642 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:51:22.823338894+00:00 stderr F E0813 19:51:22.823289 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:51:22.823429986+00:00 stderr F I0813 19:51:22.823396 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.806947 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.807028 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.807070 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:51:52.820544981+00:00 stderr F E0813 19:51:52.820407 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:51:52.820677944+00:00 stderr F I0813 19:51:52.820623 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:22.806992436+00:00 stderr F I0813 19:52:22.806741 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:52:22.806992436+00:00 stderr F I0813 19:52:22.806917 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:52:22.807074858+00:00 stderr F I0813 19:52:22.806963 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:52:22.830313290+00:00 stderr F E0813 19:52:22.829879 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-08-13T19:52:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:22.830313290+00:00 stderr F I0813 19:52:22.829935 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807110 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807254 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807342 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:52:52.825204490+00:00 stderr F E0813 19:52:52.825150 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:52:52.825331403+00:00 stderr F I0813 19:52:52.825297 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:53:22.807657141+00:00 stderr F I0813 19:53:22.807266 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:53:22.807657141+00:00 stderr F I0813 19:53:22.807365 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:53:22.811209102+00:00 stderr F I0813 19:53:22.807710 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:53:22.827475965+00:00 stderr F E0813 19:53:22.827347 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:53:22.827475965+00:00 stderr F I0813 19:53:22.827388 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:54:22.807698924+00:00 stderr F I0813 
19:54:22.807408 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:54:22.807698924+00:00 stderr F I0813 19:54:22.807554 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:54:22.808136047+00:00 stderr F I0813 19:54:22.807918 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:54:22.820529181+00:00 stderr F E0813 19:54:22.820369 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:54:22.820529181+00:00 stderr F I0813 19:54:22.820465 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:22Z is after 2024-12-26T00:46:02Z 2025-08-13T19:55:52.807349912+00:00 stderr F I0813 19:55:52.807274 1 obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:55:52.807456975+00:00 stderr F I0813 19:55:52.807442 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:55:52.807543087+00:00 stderr F I0813 19:55:52.807501 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:55:52.829499734+00:00 stderr F E0813 19:55:52.829448 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:55:52.829591107+00:00 stderr F I0813 19:55:52.829564 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:56:40.869402446+00:00 stderr F I0813 19:56:40.869230 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 7 items received 2025-08-13T19:56:56.883563870+00:00 stderr F I0813 19:56:56.883367 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 8 items received 2025-08-13T19:57:18.475975004+00:00 stderr F I0813 19:57:18.475715 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 7 items received 2025-08-13T19:57:22.807232882+00:00 stderr F I0813 19:57:22.807088 1 
obj_retry.go:296] Retry object setup: *v1.Node crc 2025-08-13T19:57:22.807232882+00:00 stderr F I0813 19:57:22.807163 1 obj_retry.go:358] Adding new object: *v1.Node crc 2025-08-13T19:57:22.807461968+00:00 stderr F I0813 19:57:22.807315 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:57:22.821010285+00:00 stderr F I0813 19:57:22.820923 1 obj_retry.go:379] Retry successful for *v1.Node crc after 8 failed attempt(s) 2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936725 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936855 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936874 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245549 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245606 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245619 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:57:48.015393389+00:00 stderr F I0813 19:57:48.014453 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-08-13T19:57:48.016040407+00:00 stderr F I0813 19:57:48.015562 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 1.166293ms 2025-08-13T19:58:08.260292188+00:00 stderr F I0813 19:58:08.260083 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 9 items received 2025-08-13T19:58:11.367112049+00:00 stderr F I0813 19:58:11.366955 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-08-13T19:58:28.569171919+00:00 stderr F I0813 19:58:28.569104 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 8 items received 2025-08-13T19:58:30.882072410+00:00 stderr F I0813 19:58:30.882015 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received 2025-08-13T19:59:31.931567968+00:00 stderr F I0813 19:59:31.931278 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:31.931723763+00:00 stderr F I0813 19:59:31.931703 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:31.931768044+00:00 stderr F I0813 19:59:31.931753 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:36.344737507+00:00 stderr F I0813 19:59:36.344667 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:36.344924792+00:00 stderr F I0813 19:59:36.344903 1 node_allocator.go:385] Valid subnet 
10.217.0.0/23 allocated on node crc 2025-08-13T19:59:36.344973634+00:00 stderr F I0813 19:59:36.344959 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:39.558692531+00:00 stderr F I0813 19:59:39.558625 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:39.558876106+00:00 stderr F I0813 19:59:39.558765 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:39.558924078+00:00 stderr F I0813 19:59:39.558909 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.446758 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.447112 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.447132 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552263 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552292 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552301 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:06.018912174+00:00 stderr F I0813 20:00:06.018621 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 267 items received 2025-08-13T20:00:10.536567409+00:00 stderr F I0813 20:00:10.536496 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:10.546479252+00:00 stderr F I0813 20:00:10.546405 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:10.553482522+00:00 stderr F I0813 20:00:10.553237 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:10.904909542+00:00 stderr F I0813 20:00:10.901702 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 117 items received 2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372350 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372401 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372442 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964555 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964670 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964684 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207613 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207664 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207677 1 node_allocator.go:407] Allowed existing 
subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.175693 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.175986 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.176002 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:00:37.239886826+00:00 stderr F I0813 20:00:37.239669 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received 2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721539 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721626 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721637 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183454 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183508 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183522 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482592 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482642 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482653 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:02:27.230902514+00:00 stderr F E0813 20:02:27.230715 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:02:29.555021265+00:00 stderr F I0813 20:02:29.543445 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 5 items received 2025-08-13T20:02:29.567434599+00:00 stderr F I0813 20:02:29.559371 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.618309570+00:00 stderr F I0813 20:02:29.617674 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 29 items received 2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.634907 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 41 items received 2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.635379 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get 
"https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m17s&timeoutSeconds=437&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.635438 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.650166689+00:00 stderr F I0813 20:02:29.648275 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 190 items received 2025-08-13T20:02:29.650864819+00:00 stderr F I0813 20:02:29.650415 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.728826623+00:00 stderr F I0813 20:02:29.728640 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received 2025-08-13T20:02:29.730307945+00:00 stderr F I0813 20:02:29.729917 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m26s&timeoutSeconds=326&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.766212049+00:00 stderr F I0813 20:02:29.766065 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 1 items received 2025-08-13T20:02:29.769020879+00:00 stderr F I0813 20:02:29.768893 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.786236280+00:00 stderr F I0813 20:02:29.785403 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 5 items received 2025-08-13T20:02:29.790440980+00:00 stderr F I0813 20:02:29.790200 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 4 items received 2025-08-13T20:02:29.791201392+00:00 stderr F I0813 20:02:29.790954 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of 
*v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.791201392+00:00 stderr F I0813 20:02:29.791122 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.821539418+00:00 stderr F I0813 20:02:29.819711 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 4 items received 2025-08-13T20:02:29.826120728+00:00 stderr F I0813 20:02:29.823557 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:29.850282528+00:00 stderr F I0813 20:02:29.848563 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 4 items received 2025-08-13T20:02:29.863768742+00:00 stderr F I0813 20:02:29.862059 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.426758203+00:00 stderr F I0813 20:02:30.426630 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=5m0s&timeoutSeconds=300&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.613562752+00:00 stderr F I0813 20:02:30.613459 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.622867737+00:00 stderr F I0813 20:02:30.622702 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.734187682+00:00 stderr F I0813 20:02:30.733902 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m49s&timeoutSeconds=349&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.827708730+00:00 stderr F I0813 20:02:30.827599 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m30s&timeoutSeconds=510&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.969348540+00:00 stderr F I0813 20:02:30.969254 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:30.986989914+00:00 stderr F I0813 20:02:30.986882 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.148498841+00:00 stderr F I0813 20:02:31.148283 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.185367203+00:00 stderr F I0813 20:02:31.185265 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=5m28s&timeoutSeconds=328&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.205262971+00:00 stderr F I0813 20:02:31.205151 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 192.168.130.11:6443: connect: 
connection refused - backing off 2025-08-13T20:02:32.811521383+00:00 stderr F I0813 20:02:32.811380 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:32.967218024+00:00 stderr F I0813 20:02:32.966975 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.267354416+00:00 stderr F I0813 20:02:33.267244 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.283575399+00:00 stderr F I0813 20:02:33.283445 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.311052643+00:00 stderr F I0813 20:02:33.310927 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.497556503+00:00 stderr F I0813 20:02:33.497437 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.812968671+00:00 stderr F I0813 20:02:33.812640 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.896226736+00:00 stderr F I0813 20:02:33.896089 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.043488597+00:00 stderr F I0813 20:02:34.043275 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.127397161+00:00 stderr F I0813 20:02:34.124967 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:36.687493352+00:00 stderr F I0813 20:02:36.687328 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:37.379416071+00:00 stderr F I0813 20:02:37.379304 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m34s&timeoutSeconds=454&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:37.775255493+00:00 stderr F I0813 20:02:37.775089 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=7m16s&timeoutSeconds=436&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:38.458428071+00:00 stderr F I0813 20:02:38.458252 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:38.775456795+00:00 stderr F I0813 20:02:38.775309 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.049732509+00:00 stderr F I0813 20:02:39.049528 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=7m58s&timeoutSeconds=478&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.266981397+00:00 stderr F I0813 20:02:39.266750 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.303708335+00:00 stderr F I0813 20:02:39.303525 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.370876081+00:00 stderr F I0813 20:02:39.369179 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:40.381014997+00:00 stderr F I0813 20:02:40.380834 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=6m14s&timeoutSeconds=374&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:44.406123425+00:00 stderr F I0813 20:02:44.406046 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:47.103889375+00:00 stderr F I0813 20:02:47.103631 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:48.121575296+00:00 
stderr F I0813 20:02:48.121432 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:48.670409442+00:00 stderr F I0813 20:02:48.670299 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.210064637+00:00 stderr F I0813 20:02:49.209985 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.265045125+00:00 stderr F I0813 20:02:49.264951 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.804731651+00:00 stderr F I0813 20:02:49.804662 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:50.484299458+00:00 stderr F I0813 20:02:50.484202 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:50.736101791+00:00 stderr F I0813 20:02:50.736002 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:51.559878001+00:00 stderr F I0813 20:02:51.559728 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get 
"https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:53.234302317+00:00 stderr F E0813 20:02:53.233647 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:01.312578358+00:00 stderr F I0813 20:03:01.312464 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:03.902879611+00:00 stderr F I0813 20:03:03.902658 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:04.115010862+00:00 stderr F I0813 20:03:04.114909 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:04.838387938+00:00 stderr F I0813 20:03:04.838262 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:05.747741640+00:00 stderr F I0813 20:03:05.747634 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:07.821412355+00:00 stderr F I0813 20:03:07.821340 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=5m5s&timeoutSeconds=305&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:08.326698340+00:00 stderr F I0813 20:03:08.326594 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=7m33s&timeoutSeconds=453&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:09.777199688+00:00 stderr F I0813 20:03:09.777085 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:10.723459022+00:00 stderr F I0813 20:03:10.723347 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:12.627312223+00:00 stderr F I0813 20:03:12.627170 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m29s&timeoutSeconds=569&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:19.234222432+00:00 stderr F E0813 20:03:19.233955 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:30.614507142+00:00 stderr F I0813 20:03:30.614265 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:31.360239635+00:00 stderr F I0813 20:03:31.360179 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:34.585615087+00:00 stderr F I0813 20:03:34.585528 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m39s&timeoutSeconds=519&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - 
backing off 2025-08-13T20:03:36.600715002+00:00 stderr F I0813 20:03:36.600587 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:37.001654079+00:00 stderr F I0813 20:03:37.001568 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:45.054617405+00:00 stderr F I0813 20:03:45.054499 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:45.234548188+00:00 stderr F E0813 20:03:45.234001 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:55.172078932+00:00 stderr F I0813 20:03:55.171354 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.287601266+00:00 stderr F I0813 20:03:59.287477 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.287647398+00:00 stderr F I0813 20:03:59.287609 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.288739989+00:00 stderr F I0813 20:03:59.287711 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:04:08.669168343+00:00 stderr F I0813 20:04:08.669023 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 30619 (30722) 2025-08-13T20:04:11.111392643+00:00 stderr F I0813 20:04:11.109692 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 30717 (30724) 2025-08-13T20:04:15.899353869+00:00 stderr F I0813 20:04:15.899290 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 30528 (30740) 2025-08-13T20:04:21.176697857+00:00 stderr F I0813 20:04:21.175306 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 30644 (30758) 2025-08-13T20:04:32.645446258+00:00 stderr F I0813 20:04:32.644704 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 30620 (30792) 2025-08-13T20:04:34.205071440+00:00 stderr F I0813 20:04:34.204960 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 30713 (30718) 2025-08-13T20:04:35.147270382+00:00 stderr F I0813 20:04:35.144833 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 30556 (30718) 2025-08-13T20:04:43.777106034+00:00 stderr F I0813 20:04:43.776984 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 30708 (30718) 2025-08-13T20:04:44.225162345+00:00 stderr F I0813 20:04:44.221249 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 30574 (30718) 2025-08-13T20:04:51.763337638+00:00 stderr F I0813 20:04:51.763204 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:04:51.774225740+00:00 stderr F I0813 20:04:51.774007 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:04:58.390947006+00:00 stderr F I0813 20:04:58.390185 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 30542 (30756) 2025-08-13T20:05:04.968210444+00:00 stderr F I0813 20:05:04.968084 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:04.971231360+00:00 stderr F I0813 20:05:04.971150 1 reflector.go:351] Caches populated for *v1.EgressFirewall from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.117422378+00:00 stderr F I0813 20:05:10.117132 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.178603540+00:00 stderr F I0813 20:05:10.178488 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.489933925+00:00 stderr F I0813 20:05:10.489759 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.495159075+00:00 stderr F I0813 20:05:10.495008 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:13.591742609+00:00 stderr F I0813 20:05:13.591678 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:05:13.595271800+00:00 stderr F I0813 20:05:13.595146 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:05:14.414762877+00:00 stderr F I0813 20:05:14.414695 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.435980504+00:00 stderr F I0813 20:05:14.435741 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.973598850+00:00 stderr F I0813 20:05:14.973540 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.982395342+00:00 stderr F I0813 20:05:14.982351 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:20.334415922+00:00 stderr F I0813 20:05:20.334322 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:20.343099340+00:00 stderr F I0813 20:05:20.342971 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:23.010243446+00:00 stderr F I0813 20:05:23.007982 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:23.085513441+00:00 stderr F I0813 20:05:23.085366 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:40.178921662+00:00 stderr F I0813 20:05:40.178700 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:40.183126722+00:00 stderr F I0813 20:05:40.181664 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 
2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034127 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034184 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034195 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802522 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802582 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802595 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900523 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900591 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900602 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.815968 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.816015 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.816025 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.439960 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.440007 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.440018 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:08:26.090759383+00:00 stderr F I0813 20:08:26.089857 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 8 items received 2025-08-13T20:08:26.238631123+00:00 stderr F I0813 20:08:26.238046 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 222 items received 2025-08-13T20:08:26.259847771+00:00 stderr F I0813 20:08:26.259645 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 3 items received 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.276954 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.277048 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m23s&timeoutSeconds=323&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.277107 1 reflector.go:425] 
k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=5m20s&timeoutSeconds=320&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.292522258+00:00 stderr F I0813 20:08:26.292372 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 39 items received 2025-08-13T20:08:26.298874000+00:00 stderr F I0813 20:08:26.298551 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m5s&timeoutSeconds=425&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.365083809+00:00 stderr F I0813 20:08:26.364958 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 3 items received 2025-08-13T20:08:26.378009589+00:00 stderr F I0813 20:08:26.377942 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.393010649+00:00 stderr F I0813 20:08:26.392832 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 2 items received 2025-08-13T20:08:26.397152478+00:00 stderr F I0813 20:08:26.395718 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=5m31s&timeoutSeconds=331&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.435614541+00:00 stderr F I0813 20:08:26.435110 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 3 items received 2025-08-13T20:08:26.437728472+00:00 stderr F I0813 20:08:26.436745 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.455392008+00:00 stderr F I0813 20:08:26.454752 1 reflector.go:800] 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received 2025-08-13T20:08:26.485053188+00:00 stderr F I0813 20:08:26.479510 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.505489924+00:00 stderr F I0813 20:08:26.504755 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 3 items received 2025-08-13T20:08:26.520279368+00:00 stderr F I0813 20:08:26.520223 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.524691415+00:00 stderr F I0813 20:08:26.524132 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 3 items received 2025-08-13T20:08:26.536512614+00:00 stderr F I0813 20:08:26.535729 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.439708929+00:00 stderr F I0813 20:08:27.437496 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554592 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554639 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554759 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.581716051+00:00 stderr F I0813 20:08:27.581655 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=5m28s&timeoutSeconds=328&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.827098016+00:00 stderr F I0813 20:08:27.827034 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.848690705+00:00 stderr F I0813 20:08:27.847430 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.856049506+00:00 stderr F I0813 20:08:27.854639 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.874058453+00:00 stderr F I0813 20:08:27.873571 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:28.064758340+00:00 stderr F I0813 20:08:28.064695 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.331667863+00:00 stderr F I0813 20:08:29.331478 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch 
of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=5m44s&timeoutSeconds=344&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.808479154+00:00 stderr F I0813 20:08:29.808398 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.058301906+00:00 stderr F I0813 20:08:30.058178 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.059010837+00:00 stderr F I0813 20:08:30.058958 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.073310337+00:00 stderr F I0813 20:08:30.073260 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.449455051+00:00 stderr F I0813 20:08:30.449389 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=7m52s&timeoutSeconds=472&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.647255572+00:00 stderr F I0813 20:08:30.647054 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=9m42s&timeoutSeconds=582&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.682630716+00:00 stderr F I0813 20:08:30.682560 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=9m49s&timeoutSeconds=589&watch=true": dial tcp 192.168.130.11:6443: 
connect: connection refused - backing off 2025-08-13T20:08:30.707877010+00:00 stderr F I0813 20:08:30.707073 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.992885373+00:00 stderr F I0813 20:08:30.992644 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:32.581485986+00:00 stderr F I0813 20:08:32.581307 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.324217982+00:00 stderr F I0813 20:08:34.324044 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.885714871+00:00 stderr F I0813 20:08:34.885543 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=8m27s&timeoutSeconds=507&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.087059034+00:00 stderr F I0813 20:08:35.086944 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.131211520+00:00 stderr F I0813 20:08:35.131130 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=5m13s&timeoutSeconds=313&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.199171128+00:00 stderr F E0813 20:08:35.198986 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:35.380952220+00:00 stderr F I0813 20:08:35.380700 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.555910746+00:00 stderr F I0813 20:08:35.555680 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.059852564+00:00 stderr F I0813 20:08:36.059700 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.252016463+00:00 stderr F I0813 20:08:36.251855 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=7m51s&timeoutSeconds=471&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.662864323+00:00 stderr F I0813 20:08:36.661881 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:44.451271463+00:00 stderr F I0813 20:08:44.451033 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 32732 (32918) 2025-08-13T20:08:44.844100816+00:00 stderr F I0813 20:08:44.843246 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 32690 (32918) 2025-08-13T20:08:45.099720865+00:00 stderr F I0813 20:08:45.098729 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 32875 (32913) 2025-08-13T20:08:45.348963531+00:00 stderr F I0813 20:08:45.348844 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 32829 (32918) 2025-08-13T20:08:46.097685517+00:00 stderr F I0813 
20:08:46.097516 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 32814 (32918) 2025-08-13T20:08:47.272971763+00:00 stderr F I0813 20:08:47.269375 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 32798 (32918) 2025-08-13T20:08:47.290868536+00:00 stderr F I0813 20:08:47.290586 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 32843 (32913) 2025-08-13T20:08:47.326887289+00:00 stderr F I0813 20:08:47.326144 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m15s&timeoutSeconds=435&watch=true 2025-08-13T20:08:47.334399114+00:00 stderr F I0813 20:08:47.334296 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=7m19s&timeoutSeconds=439&watch=true 2025-08-13T20:08:47.336839134+00:00 stderr F I0813 20:08:47.334591 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 32909 (32913) 2025-08-13T20:08:47.343543856+00:00 stderr F I0813 20:08:47.342605 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 32837 (32918) 2025-08-13T20:08:48.497341147+00:00 stderr F I0813 20:08:48.497028 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m40s&timeoutSeconds=340&watch=true 2025-08-13T20:08:48.500319193+00:00 stderr F I0813 20:08:48.500176 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 32901 (32913) 2025-08-13T20:09:00.365291112+00:00 stderr F I0813 20:09:00.365157 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:00.370503812+00:00 stderr F I0813 20:09:00.370417 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:02.210017681+00:00 stderr F I0813 20:09:02.208986 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:02.217704731+00:00 stderr F I0813 20:09:02.217587 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.178241281+00:00 stderr F I0813 20:09:03.178084 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.183834491+00:00 stderr F I0813 20:09:03.183731 1 reflector.go:351] Caches populated for 
*v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:03.256741451+00:00 stderr F I0813 20:09:03.256609 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:03.258701918+00:00 stderr F I0813 20:09:03.258600 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:04.923135168+00:00 stderr F I0813 20:09:04.922990 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:04.925005072+00:00 stderr F I0813 20:09:04.924884 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:07.684448359+00:00 stderr F I0813 20:09:07.684300 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:07.686339683+00:00 stderr F I0813 20:09:07.686236 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:08.255527921+00:00 stderr F I0813 20:09:08.255435 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:09:08.257354203+00:00 stderr F I0813 20:09:08.257266 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:09:08.414106628+00:00 stderr F I0813 20:09:08.413950 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:08.421467149+00:00 stderr F I0813 20:09:08.421398 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:09.945760781+00:00 stderr F I0813 20:09:09.945677 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:09.948237462+00:00 stderr F I0813 20:09:09.948206 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:10.057412292+00:00 stderr F I0813 20:09:10.057293 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:10.081556054+00:00 stderr F I0813 20:09:10.081404 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:15:03.950126047+00:00 stderr F I0813 20:15:03.949893 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall 
total 6 items received 2025-08-13T20:16:32.083872167+00:00 stderr F I0813 20:16:32.083696 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 58 items received 2025-08-13T20:16:58.188249111+00:00 stderr F I0813 20:16:58.188171 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 10 items received 2025-08-13T20:17:00.691109205+00:00 stderr F I0813 20:17:00.689700 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-08-13T20:17:05.423536700+00:00 stderr F I0813 20:17:05.423393 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-08-13T20:17:25.219155214+00:00 stderr F I0813 20:17:25.219006 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 19 items received 2025-08-13T20:17:27.261038704+00:00 stderr F I0813 20:17:27.259905 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 9 items received 2025-08-13T20:17:29.374880989+00:00 stderr F I0813 20:17:29.373161 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received 2025-08-13T20:18:30.926830606+00:00 stderr F I0813 20:18:30.926629 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 10 items received 2025-08-13T20:18:46.261049777+00:00 stderr F I0813 20:18:46.260940 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 10 items received 2025-08-13T20:22:38.193464370+00:00 stderr F I0813 20:22:38.192907 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 7 items received 2025-08-13T20:23:32.958254448+00:00 stderr F I0813 20:23:32.957887 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 10 items received 2025-08-13T20:23:54.378534512+00:00 stderr F I0813 20:23:54.377539 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 8 items received 2025-08-13T20:24:42.425620956+00:00 stderr F I0813 20:24:42.425447 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-08-13T20:25:18.087113742+00:00 stderr F I0813 20:25:18.086745 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 73 items received 2025-08-13T20:25:45.268485504+00:00 stderr F I0813 20:25:45.268365 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 10 items received 2025-08-13T20:26:09.696522126+00:00 stderr F I0813 20:26:09.696386 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 10 items received 2025-08-13T20:26:19.269665650+00:00 stderr F I0813 20:26:19.269527 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 9 items received 2025-08-13T20:26:36.221522339+00:00 stderr F I0813 20:26:36.221340 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 11 items received 2025-08-13T20:27:13.935322595+00:00 stderr F I0813 20:27:13.935240 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 9 items received 2025-08-13T20:29:38.197111009+00:00 stderr F I0813 20:29:38.196647 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 8 items received 2025-08-13T20:30:32.963915317+00:00 stderr F I0813 20:30:32.963851 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 7 items received 2025-08-13T20:32:27.381759922+00:00 stderr F I0813 20:32:27.381663 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received 2025-08-13T20:33:02.700387816+00:00 stderr F I0813 20:33:02.700286 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-08-13T20:34:09.277196906+00:00 stderr F I0813 20:34:09.276918 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 10 items received 2025-08-13T20:34:15.428383736+00:00 stderr F I0813 20:34:15.428177 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 11 items received 2025-08-13T20:34:37.095177679+00:00 stderr F I0813 20:34:37.094865 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 87 items received 2025-08-13T20:35:23.231610773+00:00 stderr F I0813 20:35:23.231439 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 9 items received 2025-08-13T20:35:51.277656871+00:00 stderr F I0813 20:35:51.277406 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 11 items received 2025-08-13T20:37:01.945895656+00:00 stderr F I0813 20:37:01.945690 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 10 items received 2025-08-13T20:37:25.979330639+00:00 stderr F I0813 20:37:25.979247 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 8 items received 2025-08-13T20:38:59.199981815+00:00 stderr 
F I0813 20:38:59.199865 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 13 items received 2025-08-13T20:40:09.103307343+00:00 stderr F I0813 20:40:09.102623 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 38 items received 2025-08-13T20:41:27.433448588+00:00 stderr F I0813 20:41:27.432302 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-08-13T20:42:06.389294881+00:00 stderr F I0813 20:42:06.388390 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 11 items received 2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322153 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322268 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322281 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:14.045741887+00:00 stderr F I0813 20:42:14.045658 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:14.046010575+00:00 stderr F I0813 20:42:14.045977 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:14.046096927+00:00 stderr F I0813 20:42:14.046069 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:16.636605691+00:00 stderr F I0813 20:42:16.636532 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:16.636711154+00:00 stderr F I0813 20:42:16.636691 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:16.636762906+00:00 stderr F I0813 20:42:16.636744 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:21.922931155+00:00 stderr F I0813 20:42:21.922766 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:21.923024178+00:00 stderr F I0813 20:42:21.923009 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:21.923060749+00:00 stderr F I0813 20:42:21.923047 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099356 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099402 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099413 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:42:36.331250880+00:00 stderr F I0813 20:42:36.331166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.331349213+00:00 stderr F I0813 20:42:36.331311 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 7 items received 2025-08-13T20:42:36.331592590+00:00 stderr F I0813 20:42:36.331573 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.331637981+00:00 stderr F I0813 20:42:36.331624 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 5 items received 2025-08-13T20:42:36.331898209+00:00 stderr F I0813 20:42:36.331828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.331990341+00:00 stderr F I0813 20:42:36.331914 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340240 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340465 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340514 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340560 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 24 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.342976 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.343115 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 6 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378439 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 5 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378738 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378756 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 11 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378911 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 9 items received 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.379107 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.379119 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 1 items received 2025-08-13T20:42:36.468721613+00:00 stderr F I0813 20:42:36.468553 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get 
"https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469025152+00:00 stderr F I0813 20:42:36.468987 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=7m30s&timeoutSeconds=450&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469131115+00:00 stderr F I0813 20:42:36.469109 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469254349+00:00 stderr F I0813 20:42:36.469202 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469357872+00:00 stderr F I0813 20:42:36.469335 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469467895+00:00 stderr F I0813 20:42:36.469426 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=7m32s&timeoutSeconds=452&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469565308+00:00 stderr F I0813 20:42:36.469543 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=7m11s&timeoutSeconds=431&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.469649140+00:00 stderr F I0813 20:42:36.469629 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=7m17s&timeoutSeconds=437&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.474744937+00:00 stderr F I0813 20:42:36.474712 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:36.512134905+00:00 stderr F I0813 20:42:36.510142 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=5m3s&timeoutSeconds=303&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.323390694+00:00 stderr F I0813 20:42:37.323003 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.366734713+00:00 stderr F I0813 20:42:37.365096 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.368280888+00:00 stderr F I0813 20:42:37.368204 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.469326791+00:00 stderr F I0813 20:42:37.469035 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=5m52s&timeoutSeconds=352&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.507438310+00:00 stderr F I0813 20:42:37.507270 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.576580683+00:00 stderr F I0813 20:42:37.576505 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of 
*v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.719706180+00:00 stderr F I0813 20:42:37.719420 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=9m3s&timeoutSeconds=543&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.761913206+00:00 stderr F I0813 20:42:37.761702 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:37.893640494+00:00 stderr F I0813 20:42:37.893106 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=5m24s&timeoutSeconds=324&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:38.751053994+00:00 stderr F I0813 20:42:38.747855 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:39.422669377+00:00 stderr F I0813 20:42:39.422559 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=8m51s&timeoutSeconds=531&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:39.647877839+00:00 stderr F I0813 20:42:39.647552 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:39.747394858+00:00 stderr F I0813 20:42:39.747311 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.014528689+00:00 stderr F I0813 20:42:40.014415 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=7m2s&timeoutSeconds=422&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.169436885+00:00 stderr F I0813 20:42:40.169161 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.328353177+00:00 stderr F I0813 20:42:40.328195 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=8m57s&timeoutSeconds=537&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.614885258+00:00 stderr F I0813 20:42:40.614162 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.765210432+00:00 stderr F I0813 20:42:40.765040 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:40.865262196+00:00 stderr F I0813 20:42:40.864737 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:41.495986521+00:00 stderr F I0813 20:42:41.495869 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:43.194981592+00:00 stderr F I0813 20:42:43.194872 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:43.685720091+00:00 stderr F I0813 20:42:43.685610 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:43.877741517+00:00 stderr F I0813 20:42:43.877605 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:44.027481544+00:00 stderr F I0813 20:42:44.027363 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=7m13s&timeoutSeconds=433&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:44.107128320+00:00 stderr F I0813 20:42:44.107005 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.013280195+00:00 stderr F I0813 20:42:45.013115 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.065594613+00:00 stderr F I0813 20:42:45.065486 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.541671939+00:00 stderr F I0813 20:42:45.541600 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.848624948+00:00 stderr F I0813 20:42:45.848520 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=8m1s&timeoutSeconds=481&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:42:45.871742875+00:00 stderr F I0813 20:42:45.871625 1 ovnkube.go:129] Received signal terminated. Shutting down 2025-08-13T20:42:45.871742875+00:00 stderr F I0813 20:42:45.871726 1 ovnkube.go:577] Stopping ovnkube... 2025-08-13T20:42:45.871926880+00:00 stderr F I0813 20:42:45.871874 1 metrics.go:552] Stopping metrics server at address "127.0.0.1:29108" 2025-08-13T20:42:45.872162857+00:00 stderr F I0813 20:42:45.872084 1 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872178347+00:00 stderr F I0813 20:42:45.872161 1 reflector.go:295] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872398174+00:00 stderr F I0813 20:42:45.872346 1 clustermanager.go:170] Stopping the cluster manager 2025-08-13T20:42:45.872412244+00:00 stderr F I0813 20:42:45.872395 1 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872426 1 secondary_network_cluster_manager.go:100] Stopping secondary network cluster manager 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872458 1 network_attach_def_controller.go:166] Shutting down cluster-manager NAD controller 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872461 1 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872497767+00:00 stderr F I0813 20:42:45.872488 1 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872595679+00:00 stderr F I0813 20:42:45.872537 1 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872608610+00:00 stderr F I0813 20:42:45.872591 1 reflector.go:295] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872618260+00:00 stderr F I0813 20:42:45.872341 1 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872841806+00:00 stderr F I0813 20:42:45.872669 1 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
2025-08-13T20:42:45.872841806+00:00 stderr F I0813 20:42:45.872371 1 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:42:45.873977959+00:00 stderr F E0813 20:42:45.873903 1 handler.go:199] Removing already-removed *v1.EgressIP event handler 4
2025-08-13T20:42:45.874051631+00:00 stderr F E0813 20:42:45.873997 1 handler.go:199] Removing already-removed *v1.Node event handler 1
2025-08-13T20:42:45.874079042+00:00 stderr F E0813 20:42:45.874061 1 handler.go:199] Removing already-removed *v1.Node event handler 2
2025-08-13T20:42:45.874090303+00:00 stderr F E0813 20:42:45.874077 1 handler.go:199] Removing already-removed *v1.Node event handler 3
2025-08-13T20:42:45.875302367+00:00 stderr F I0813 20:42:45.875214 1 egressservice_cluster.go:218] Shutting down Egress Services controller
2025-08-13T20:42:45.877599304+00:00 stderr F I0813 20:42:45.877560 1 ovnkube.go:581] Stopped ovnkube
2025-08-13T20:42:45.877876942+00:00 stderr F E0813 20:42:45.877692 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:42:45.880356233+00:00 stderr F I0813 20:42:45.880204 1 ovnkube.go:395] No longer leader; exiting
././@LongLink0000644000000000000000000000032100000000000011597 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000015657515073043233033104 0ustar zuulzuul
2025-10-13T00:12:49.239935900+00:00 stderr F + [[ -f /env/_master ]]
2025-10-13T00:12:49.239935900+00:00 stderr F + ovn_v4_join_subnet_opt=
2025-10-13T00:12:49.239935900+00:00 stderr F + [[ '' != '' ]]
2025-10-13T00:12:49.239935900+00:00 stderr F + ovn_v6_join_subnet_opt=
2025-10-13T00:12:49.239935900+00:00 stderr F + [[ '' != '' ]]
2025-10-13T00:12:49.239935900+00:00 stderr F + ovn_v4_transit_switch_subnet_opt=
2025-10-13T00:12:49.239935900+00:00 stderr F + [[ '' != '' ]]
2025-10-13T00:12:49.239935900+00:00 stderr F + ovn_v6_transit_switch_subnet_opt=
2025-10-13T00:12:49.239935900+00:00 stderr F + [[ '' != '' ]]
2025-10-13T00:12:49.239935900+00:00 stderr F + dns_name_resolver_enabled_flag=
2025-10-13T00:12:49.239935900+00:00 stderr F + [[ false == \t\r\u\e ]]
2025-10-13T00:12:49.240393543+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N'
2025-10-13T00:12:49.242995927+00:00 stderr F + echo 'I1013 00:12:49.242581645 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc'
2025-10-13T00:12:49.243015578+00:00 stdout F I1013 00:12:49.242581645 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc
2025-10-13T00:12:49.243077409+00:00 stderr F + exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager crc --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration
2025-10-13T00:12:50.047046665+00:00 stderr F I1013 00:12:50.046901 1 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf
2025-10-13T00:12:50.047137598+00:00 stderr F I1013 00:12:50.047021 1
config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-10-13T00:12:50.107867469+00:00 stderr F I1013 00:12:50.107788 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-ovn-kubernetes/ovn-kubernetes-master... 2025-10-13T00:12:50.109915478+00:00 stderr F I1013 00:12:50.109113 1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108" 2025-10-13T00:12:50.139708427+00:00 stderr F I1013 00:12:50.139636 1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master 2025-10-13T00:12:50.140542681+00:00 stderr F I1013 00:12:50.139897 1 ovnkube.go:386] Won leader election; in active mode 2025-10-13T00:12:50.140542681+00:00 stderr F I1013 00:12:50.139995 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"191c52d0-8e80-4a25-b5b6-abbf211ef81a", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"38100", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-77c846df58-6l97b became leader 2025-10-13T00:12:50.160889771+00:00 stderr F I1013 00:12:50.160816 1 secondary_network_cluster_manager.go:38] Creating secondary network cluster manager 2025-10-13T00:12:50.162019234+00:00 stderr F I1013 00:12:50.161978 1 egressservice_cluster.go:97] Setting up event handlers for Egress Services 2025-10-13T00:12:50.162221219+00:00 stderr F I1013 00:12:50.162150 1 clustermanager.go:123] Starting the cluster manager 2025-10-13T00:12:50.162221219+00:00 stderr F I1013 00:12:50.162163 1 factory.go:405] Starting watch factory 2025-10-13T00:12:50.162834237+00:00 stderr F I1013 00:12:50.162782 1 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.162834237+00:00 stderr F I1013 00:12:50.162812 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.162965110+00:00 stderr F I1013 00:12:50.162926 1 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.162965110+00:00 stderr F I1013 00:12:50.162952 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.162990431+00:00 stderr F I1013 00:12:50.162935 1 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.163015222+00:00 stderr F I1013 00:12:50.163006 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.164544405+00:00 stderr F I1013 00:12:50.164471 1 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.164544405+00:00 stderr F I1013 00:12:50.164506 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.191198876+00:00 stderr F I1013 00:12:50.189822 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.191198876+00:00 stderr F I1013 00:12:50.190005 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.193781789+00:00 stderr F I1013 00:12:50.193342 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.209850587+00:00 stderr F I1013 00:12:50.209588 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:12:50.263544199+00:00 stderr F I1013 00:12:50.263492 1 reflector.go:289] Starting reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.263612840+00:00 stderr F I1013 00:12:50.263601 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.277934809+00:00 stderr F I1013 00:12:50.277881 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.365425814+00:00 stderr F I1013 00:12:50.364678 1 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.365425814+00:00 stderr F I1013 00:12:50.364724 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.380217946+00:00 stderr F I1013 00:12:50.380100 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.466412503+00:00 stderr F I1013 00:12:50.465500 1 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.466412503+00:00 stderr F I1013 00:12:50.465536 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.486396373+00:00 stderr F I1013 00:12:50.484713 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.566900669+00:00 stderr F I1013 00:12:50.566511 1 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.566900669+00:00 stderr F I1013 00:12:50.566540 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.582370570+00:00 stderr F I1013 00:12:50.581675 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.667410455+00:00 stderr F I1013 00:12:50.667220 1 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.667410455+00:00 stderr F I1013 00:12:50.667248 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.682223077+00:00 stderr F I1013 00:12:50.682130 1 reflector.go:351] Caches populated for 
*v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:12:50.774783777+00:00 stderr F I1013 00:12:50.774626 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:12:50.774783777+00:00 stderr F I1013 00:12:50.774715 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:12:50.774783777+00:00 stderr F I1013 00:12:50.774738 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:12:50.774846298+00:00 stderr F I1013 00:12:50.774795 1 zone_cluster_controller.go:244] Node crc has the id 2 set 2025-10-13T00:12:50.777019021+00:00 stderr F I1013 00:12:50.776915 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-10-13T00:12:50.789676371+00:00 stderr F I1013 00:12:50.789587 1 secondary_network_cluster_manager.go:65] Starting secondary network cluster manager 2025-10-13T00:12:50.789676371+00:00 stderr F I1013 00:12:50.789658 1 network_attach_def_controller.go:134] Starting cluster-manager NAD controller 2025-10-13T00:12:50.789733553+00:00 stderr F I1013 00:12:50.789718 1 shared_informer.go:311] Waiting for caches to sync for cluster-manager 2025-10-13T00:12:50.790965988+00:00 stderr F I1013 00:12:50.790918 1 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:12:50.790965988+00:00 stderr F I1013 00:12:50.790941 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:12:50.808000764+00:00 stderr F I1013 00:12:50.807945 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:12:50.890737353+00:00 stderr F I1013 00:12:50.890407 1 shared_informer.go:318] Caches are synced for cluster-manager 2025-10-13T00:12:50.890737353+00:00 stderr F I1013 00:12:50.890465 1 network_attach_def_controller.go:182] Starting repairing loop for cluster-manager 2025-10-13T00:12:50.890737353+00:00 stderr F I1013 00:12:50.890702 1 network_attach_def_controller.go:184] Finished repairing loop for cluster-manager: 240.347µs err: 2025-10-13T00:12:50.890737353+00:00 stderr F I1013 00:12:50.890714 1 network_attach_def_controller.go:153] Starting workers for cluster-manager NAD controller 2025-10-13T00:12:50.892365190+00:00 stderr F W1013 00:12:50.892309 1 egressip_healthcheck.go:165] Health checking using insecure connection 2025-10-13T00:12:51.892090086+00:00 stderr F W1013 00:12:51.891976 1 egressip_healthcheck.go:182] Could not connect to crc (10.217.0.2:9107): context deadline exceeded 2025-10-13T00:12:51.892137878+00:00 stderr F I1013 00:12:51.892092 1 egressip_controller.go:459] EgressIP node reachability enabled and using gRPC port 9107 2025-10-13T00:12:51.892137878+00:00 stderr F I1013 00:12:51.892099 1 egressservice_cluster.go:170] Starting Egress Services Controller 2025-10-13T00:12:51.892137878+00:00 stderr F I1013 00:12:51.892109 1 shared_informer.go:311] 
Waiting for caches to sync for egressservices 2025-10-13T00:12:51.892137878+00:00 stderr F I1013 00:12:51.892116 1 shared_informer.go:318] Caches are synced for egressservices 2025-10-13T00:12:51.892137878+00:00 stderr F I1013 00:12:51.892119 1 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-10-13T00:12:51.892165078+00:00 stderr F I1013 00:12:51.892123 1 shared_informer.go:318] Caches are synced for egressservices_services 2025-10-13T00:12:51.892165078+00:00 stderr F I1013 00:12:51.892145 1 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-10-13T00:12:51.892165078+00:00 stderr F I1013 00:12:51.892149 1 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-10-13T00:12:51.892165078+00:00 stderr F I1013 00:12:51.892153 1 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2025-10-13T00:12:51.892165078+00:00 stderr F I1013 00:12:51.892156 1 shared_informer.go:318] Caches are synced for egressservices_nodes 2025-10-13T00:12:51.892165078+00:00 stderr F I1013 00:12:51.892160 1 egressservice_cluster.go:187] Repairing Egress Services 2025-10-13T00:12:51.892362704+00:00 stderr F I1013 00:12:51.892304 1 kube.go:267] Setting labels map[] on node crc 2025-10-13T00:12:51.906977991+00:00 stderr F I1013 00:12:51.906914 1 status_manager.go:210] Starting StatusManager with typed managers: map[adminpolicybasedexternalroutes:0xc000371000 egressfirewalls:0xc000371400 egressqoses:0xc000371840] 2025-10-13T00:12:51.907025622+00:00 stderr F I1013 00:12:51.907003 1 controller.go:69] Adding controller zone_tracker event handlers 2025-10-13T00:12:51.907025622+00:00 stderr F I1013 00:12:51.907006 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-10-13T00:12:51.907077494+00:00 stderr F I1013 00:12:51.907046 1 shared_informer.go:311] Waiting for caches to sync for zone_tracker 2025-10-13T00:12:51.907086194+00:00 stderr F I1013 00:12:51.907080 1 shared_informer.go:318] Caches are synced for zone_tracker 2025-10-13T00:12:51.907120705+00:00 stderr F I1013 00:12:51.907094 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-10-13T00:12:51.907120705+00:00 stderr F I1013 00:12:51.907114 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-10-13T00:12:51.907129375+00:00 stderr F I1013 00:12:51.907124 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-10-13T00:12:51.907149386+00:00 stderr F I1013 00:12:51.907040 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 43.141µs 2025-10-13T00:12:51.907149386+00:00 stderr F I1013 00:12:51.907131 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-10-13T00:12:51.907196217+00:00 stderr F I1013 00:12:51.907162 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-10-13T00:12:51.907196217+00:00 stderr F I1013 00:12:51.907176 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-10-13T00:12:51.907196217+00:00 stderr F I1013 00:12:51.907180 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-10-13T00:12:51.907196217+00:00 stderr F I1013 00:12:51.907185 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-10-13T00:12:51.907196217+00:00 stderr F I1013 00:12:51.907191 1 controller.go:93] Starting controller zone_tracker with 1 workers 
2025-10-13T00:12:51.907238598+00:00 stderr F I1013 00:12:51.907212 1 controller.go:69] Adding controller adminpolicybasedexternalroutes_statusmanager event handlers 2025-10-13T00:12:51.907275619+00:00 stderr F I1013 00:12:51.907255 1 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes_statusmanager 2025-10-13T00:12:51.907275619+00:00 stderr F I1013 00:12:51.907266 1 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes_statusmanager 2025-10-13T00:12:51.907284620+00:00 stderr F I1013 00:12:51.907272 1 controller.go:93] Starting controller adminpolicybasedexternalroutes_statusmanager with 1 workers 2025-10-13T00:12:51.907301770+00:00 stderr F I1013 00:12:51.907293 1 controller.go:69] Adding controller egressfirewalls_statusmanager event handlers 2025-10-13T00:12:51.907375272+00:00 stderr F I1013 00:12:51.907359 1 shared_informer.go:311] Waiting for caches to sync for egressfirewalls_statusmanager 2025-10-13T00:12:51.907375272+00:00 stderr F I1013 00:12:51.907365 1 shared_informer.go:318] Caches are synced for egressfirewalls_statusmanager 2025-10-13T00:12:51.907375272+00:00 stderr F I1013 00:12:51.907371 1 controller.go:93] Starting controller egressfirewalls_statusmanager with 1 workers 2025-10-13T00:12:51.907389032+00:00 stderr F I1013 00:12:51.907380 1 controller.go:69] Adding controller egressqoses_statusmanager event handlers 2025-10-13T00:12:51.907431194+00:00 stderr F I1013 00:12:51.907411 1 shared_informer.go:311] Waiting for caches to sync for egressqoses_statusmanager 2025-10-13T00:12:51.907431194+00:00 stderr F I1013 00:12:51.907419 1 shared_informer.go:318] Caches are synced for egressqoses_statusmanager 2025-10-13T00:12:51.907431194+00:00 stderr F I1013 00:12:51.907424 1 controller.go:93] Starting controller egressqoses_statusmanager with 1 workers 2025-10-13T00:12:52.724281477+00:00 stderr F I1013 00:12:52.724159 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:12:52.724441821+00:00 stderr F I1013 00:12:52.724409 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:12:52.724531674+00:00 stderr F I1013 00:12:52.724504 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:12:54.442597835+00:00 stderr F I1013 00:12:54.440416 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-10-13T00:12:54.442597835+00:00 stderr F I1013 00:12:54.440458 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 51.111µs 2025-10-13T00:14:15.221943039+00:00 stderr F I1013 00:14:15.221893 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:14:15.222024741+00:00 stderr F I1013 00:14:15.222012 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:14:15.222081393+00:00 stderr F I1013 00:14:15.222067 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:14:15.293064302+00:00 stderr F I1013 00:14:15.291566 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:14:15.293064302+00:00 stderr F I1013 00:14:15.291605 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:14:15.293064302+00:00 stderr F I1013 00:14:15.291619 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:14:25.997713499+00:00 stderr F I1013 00:14:25.995203 1 
egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-10-13T00:14:25.997713499+00:00 stderr F I1013 00:14:25.995255 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 65.812µs 2025-10-13T00:17:58.193492736+00:00 stderr F I1013 00:17:58.192540 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 16 items received 2025-10-13T00:18:08.212592549+00:00 stderr F I1013 00:18:08.212510 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 385 items received 2025-10-13T00:18:15.810596387+00:00 stderr F I1013 00:18:15.810053 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 6 items received 2025-10-13T00:18:40.586152114+00:00 stderr F I1013 00:18:40.586063 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 6 items received 2025-10-13T00:19:13.280037010+00:00 stderr F I1013 00:19:13.279878 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 7 items received 2025-10-13T00:19:28.382705545+00:00 stderr F I1013 00:19:28.382613 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 7 items received 2025-10-13T00:20:11.197489289+00:00 stderr F I1013 00:20:11.197271 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received 2025-10-13T00:20:19.783689040+00:00 stderr F I1013 00:20:19.783581 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:20:19.783689040+00:00 stderr F I1013 00:20:19.783650 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:20:19.783689040+00:00 stderr F I1013 00:20:19.783672 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:20:20.683862169+00:00 stderr F I1013 00:20:20.683775 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 8 items received 2025-10-13T00:20:28.233702347+00:00 stderr F I1013 00:20:28.233608 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:20:28.233702347+00:00 stderr F I1013 00:20:28.233657 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:20:28.233702347+00:00 stderr F I1013 00:20:28.233669 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:20:29.088174273+00:00 stderr F I1013 00:20:29.088117 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:20:29.088174273+00:00 stderr F I1013 00:20:29.088146 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:20:29.088174273+00:00 stderr F I1013 00:20:29.088158 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:20:39.487223959+00:00 stderr F I1013 00:20:39.487110 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-10-13T00:21:15.191935799+00:00 stderr F I1013 00:21:15.191823 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 149 items received 2025-10-13T00:21:42.786785883+00:00 stderr F I1013 00:21:42.786693 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-10-13T00:21:42.786785883+00:00 stderr F I1013 00:21:42.786725 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-10-13T00:21:42.786785883+00:00 stderr F I1013 00:21:42.786736 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-10-13T00:22:11.671172783+00:00 stderr F I1013 00:22:11.666553 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 9 items received 2025-10-13T00:22:11.675603332+00:00 stderr F I1013 00:22:11.675496 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 2 items received 2025-10-13T00:22:11.676656350+00:00 stderr F I1013 00:22:11.675895 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42795&timeout=5m59s&timeoutSeconds=359&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.676656350+00:00 stderr F I1013 00:22:11.676019 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42845&timeout=9m12s&timeoutSeconds=552&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.684627094+00:00 stderr F I1013 00:22:11.684301 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 47 items received 2025-10-13T00:22:11.684796769+00:00 stderr F I1013 00:22:11.684689 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 1 items received 2025-10-13T00:22:11.688496268+00:00 stderr F I1013 00:22:11.687713 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.688496268+00:00 stderr F I1013 00:22:11.687749 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=9m15s&timeoutSeconds=555&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.724956259+00:00 stderr F I1013 00:22:11.724894 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 3 items received 2025-10-13T00:22:11.725910165+00:00 stderr F I1013 00:22:11.725870 1 
reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.734696911+00:00 stderr F I1013 00:22:11.734641 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received 2025-10-13T00:22:11.734839175+00:00 stderr F I1013 00:22:11.734817 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 2 items received 2025-10-13T00:22:11.735297937+00:00 stderr F I1013 00:22:11.735242 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42596&timeout=5m54s&timeoutSeconds=354&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.735385379+00:00 stderr F I1013 00:22:11.735288 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42264&timeout=8m43s&timeoutSeconds=523&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.744761542+00:00 stderr F I1013 00:22:11.743774 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 1 items received 2025-10-13T00:22:11.745280705+00:00 stderr F I1013 00:22:11.745211 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.748809880+00:00 stderr F I1013 00:22:11.748765 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 2 items received 2025-10-13T00:22:11.749301744+00:00 stderr F I1013 00:22:11.749263 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42204&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:11.765195161+00:00 stderr F I1013 00:22:11.765140 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 1 items received 2025-10-13T00:22:11.769755684+00:00 stderr F I1013 00:22:11.768317 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42370&timeout=6m28s&timeoutSeconds=388&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.510185295+00:00 stderr F I1013 00:22:12.510123 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42845&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.517860012+00:00 stderr F I1013 00:22:12.517798 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42795&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.691275265+00:00 stderr F I1013 00:22:12.691189 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=5m48s&timeoutSeconds=348&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.697979796+00:00 stderr F I1013 00:22:12.697917 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42204&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.796847684+00:00 stderr F I1013 00:22:12.796752 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.875820007+00:00 stderr F I1013 00:22:12.875746 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=9m40s&timeoutSeconds=580&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.904572740+00:00 stderr F I1013 00:22:12.904450 1 
reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42264&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.913151961+00:00 stderr F I1013 00:22:12.913066 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42370&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:12.948098451+00:00 stderr F I1013 00:22:12.948009 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=6m48s&timeoutSeconds=408&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:13.126814607+00:00 stderr F I1013 00:22:13.126766 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42596&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.594113555+00:00 stderr F I1013 00:22:14.594021 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42264&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.661823196+00:00 stderr F I1013 00:22:14.661764 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42204&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.825674643+00:00 stderr F I1013 00:22:14.825615 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.871546146+00:00 stderr F I1013 00:22:14.871431 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:14.931386275+00:00 stderr F I1013 00:22:14.931283 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=7m46s&timeoutSeconds=466&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.118160168+00:00 stderr F I1013 00:22:15.118083 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=9m2s&timeoutSeconds=542&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.143800448+00:00 stderr F I1013 00:22:15.143737 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42845&timeout=8m5s&timeoutSeconds=485&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.321501337+00:00 stderr F I1013 00:22:15.321399 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42370&timeout=9m55s&timeoutSeconds=595&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:15.506141902+00:00 stderr F I1013 00:22:15.506047 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42795&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:16.157289493+00:00 stderr F I1013 00:22:16.157230 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42596&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:18.671923045+00:00 stderr F I1013 00:22:18.671866 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42848&timeout=6m9s&timeoutSeconds=369&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:18.961374179+00:00 stderr F I1013 00:22:18.960877 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: 
watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42845&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.262578699+00:00 stderr F I1013 00:22:19.261967 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42204&timeout=6m6s&timeoutSeconds=366&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.315518933+00:00 stderr F I1013 00:22:19.315451 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42370&timeout=6m57s&timeoutSeconds=417&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.476110922+00:00 stderr F I1013 00:22:19.476040 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42596&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.639997149+00:00 stderr F I1013 00:22:19.639924 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42724&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.726733071+00:00 stderr F I1013 00:22:19.726638 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42856&timeout=5m31s&timeoutSeconds=331&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:19.802253312+00:00 stderr F I1013 00:22:19.802168 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42264&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:20.247532926+00:00 stderr F I1013 00:22:20.247478 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42708&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:21.605265948+00:00 stderr F I1013 00:22:21.604428 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42795&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 38.102.83.180:6443: connect: connection refused - backing off 2025-10-13T00:22:22.398735596+00:00 stderr F E1013 00:22:22.398611 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 38.102.83.180:6443: connect: connection refused 2025-10-13T00:22:26.540078265+00:00 stderr F I1013 00:22:26.540002 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 42204 (42866) 2025-10-13T00:22:26.860349078+00:00 stderr F I1013 00:22:26.860275 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 42856 (42861) 2025-10-13T00:22:27.095952173+00:00 stderr F I1013 00:22:27.095891 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 42845 (42861) 2025-10-13T00:22:28.201462742+00:00 stderr F I1013 00:22:28.196120 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 42848 (42861) 2025-10-13T00:22:28.284613158+00:00 stderr F I1013 00:22:28.284561 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 42264 (42866) 2025-10-13T00:22:28.795624970+00:00 stderr F I1013 00:22:28.795574 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 42370 (42866) 2025-10-13T00:22:29.633403590+00:00 stderr F I1013 00:22:29.633339 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 42724 (42881) 2025-10-13T00:22:30.474443837+00:00 stderr F I1013 00:22:30.474230 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 42795 (42861) 2025-10-13T00:22:31.180585936+00:00 stderr F I1013 00:22:31.180532 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 42596 (42866) 2025-10-13T00:22:32.412243738+00:00 stderr F I1013 00:22:32.412078 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 42708 (42867) 
2025-10-13T00:22:42.885664811+00:00 stderr F I1013 00:22:42.885564 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:42.901993915+00:00 stderr F I1013 00:22:42.901898 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:43.521961535+00:00 stderr F I1013 00:22:43.521864 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:43.543322420+00:00 stderr F I1013 00:22:43.543220 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:43.814216546+00:00 stderr F I1013 00:22:43.814106 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:43.815680316+00:00 stderr F I1013 00:22:43.815620 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:46.569220506+00:00 stderr F I1013 00:22:46.569163 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:46.571033247+00:00 stderr F I1013 00:22:46.571006 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:48.088738533+00:00 stderr F I1013 00:22:48.088682 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:48.091119859+00:00 stderr F I1013 00:22:48.091075 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:48.268401157+00:00 stderr F I1013 00:22:48.268295 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:48.271426342+00:00 stderr F I1013 00:22:48.271398 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:50.071159513+00:00 stderr F I1013 00:22:50.071019 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:50.072214972+00:00 stderr F I1013 00:22:50.072184 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:52.809528570+00:00 stderr F I1013 00:22:52.809453 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:52.811152496+00:00 stderr F I1013 00:22:52.811121 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-10-13T00:22:53.393891988+00:00 stderr F I1013 00:22:53.393832 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:22:53.397815317+00:00 stderr F I1013 00:22:53.397781 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-10-13T00:22:57.508047718+00:00 stderr F I1013 00:22:57.507960 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-10-13T00:22:57.509572451+00:00 stderr F I1013 00:22:57.509515 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 ././@LongLink0000644000000000000000000000025300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043234033000 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043234033000 5ustar zuulzuul././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000422015073043234033000 0ustar zuulzuul2025-10-13T00:12:31.295314356+00:00 stderr F W1013 00:12:31.295044 1 deprecated.go:66] 2025-10-13T00:12:31.295314356+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:12:31.295314356+00:00 stderr F 2025-10-13T00:12:31.295314356+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:12:31.295314356+00:00 stderr F 2025-10-13T00:12:31.295314356+00:00 stderr F =============================================== 2025-10-13T00:12:31.295314356+00:00 stderr F 2025-10-13T00:12:31.295314356+00:00 stderr F I1013 00:12:31.295208 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg 2025-10-13T00:12:31.304603977+00:00 stderr F I1013 00:12:31.303571 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:12:31.304603977+00:00 stderr F I1013 00:12:31.303898 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:12:31.304603977+00:00 stderr F I1013 00:12:31.304151 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637 2025-10-13T00:12:31.306847836+00:00 stderr F I1013 00:12:31.306080 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt" 2025-10-13T00:12:31.306847836+00:00 stderr F I1013 00:12:31.306527 1 kube-rbac-proxy.go:402] Listening securely on :9637 2025-10-13T00:20:19.764860891+00:00 stderr F I1013 00:20:19.764766 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-10-13T00:20:28.211476523+00:00 stderr F I1013 00:20:28.211268 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-10-13T00:20:29.068485360+00:00 stderr F I1013 00:20:29.068434 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" ././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000274515073043234033012 0ustar zuulzuul2025-10-13T00:08:28.019797434+00:00 stderr F W1013 00:08:28.019526 1 deprecated.go:66] 2025-10-13T00:08:28.019797434+00:00 stderr F ==== Removed Flag Warning ====================== 2025-10-13T00:08:28.019797434+00:00 stderr F 2025-10-13T00:08:28.019797434+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-10-13T00:08:28.019797434+00:00 stderr F 2025-10-13T00:08:28.019797434+00:00 stderr F =============================================== 2025-10-13T00:08:28.019797434+00:00 stderr F 2025-10-13T00:08:28.019797434+00:00 stderr F I1013 00:08:28.019739 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg 2025-10-13T00:08:28.028234231+00:00 stderr F I1013 00:08:28.028159 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-10-13T00:08:28.028701846+00:00 stderr F I1013 00:08:28.028658 1 kube-rbac-proxy.go:347] Reading certificate files 2025-10-13T00:08:28.029110168+00:00 stderr F I1013 00:08:28.029025 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637 2025-10-13T00:08:28.031113829+00:00 stderr F I1013 00:08:28.031031 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt" 2025-10-13T00:08:28.031222122+00:00 stderr F I1013 00:08:28.031199 1 kube-rbac-proxy.go:402] Listening securely on :9637 2025-10-13T00:11:29.656110134+00:00 stderr F I1013 00:11:29.655260 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000001166515073043234033013 0ustar zuulzuul2025-08-13T19:44:01.126546932+00:00 stderr F W0813 19:44:01.123942 1 deprecated.go:66] 2025-08-13T19:44:01.126546932+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:44:01.126546932+00:00 stderr F 2025-08-13T19:44:01.126546932+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:44:01.126546932+00:00 stderr F 2025-08-13T19:44:01.126546932+00:00 stderr F =============================================== 2025-08-13T19:44:01.126546932+00:00 stderr F 2025-08-13T19:44:01.126546932+00:00 stderr F I0813 19:44:01.124657 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg 2025-08-13T19:44:01.175293808+00:00 stderr F I0813 19:44:01.175109 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:44:01.179226024+00:00 stderr F I0813 19:44:01.176759 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:44:01.182533911+00:00 stderr F I0813 19:44:01.181865 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:44:01.182613100+00:00 stderr F I0813 19:44:01.182582 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637 2025-08-13T19:44:01.184817475+00:00 stderr F I0813 19:44:01.184752 1 kube-rbac-proxy.go:402] Listening securely on :9637 2025-08-13T19:51:58.235350662+00:00 stderr F I0813 19:51:58.235115 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:55:11.101868577+00:00 stderr F I0813 19:55:11.101578 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:59:31.671386093+00:00 stderr F I0813 19:59:31.671100 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:59:36.085180598+00:00 stderr F I0813 19:59:36.068810 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:59:41.058153354+00:00 stderr F I0813 19:59:41.058038 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:06.964137311+00:00 stderr F I0813 20:06:06.954550 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:14.717482026+00:00 stderr F I0813 20:06:14.717284 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:47.017157131+00:00 stderr F I0813 20:06:47.016195 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:50.034528031+00:00 stderr F I0813 20:06:50.034310 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:51.358262373+00:00 stderr F I0813 20:06:51.347263 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" 
file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:13.287019643+00:00 stderr F I0813 20:42:13.282260 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:13.985706076+00:00 stderr F I0813 20:42:13.985582 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:16.592484969+00:00 stderr F I0813 20:42:16.592362 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:47.532180885+00:00 stderr F I0813 20:42:47.532038 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015073043234033000 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000014515073043234033002 0ustar zuulzuul2025-08-13T19:43:57.536596541+00:00 stdout P Waiting for kubelet key and certificate to be available ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000014515073043234033002 0ustar zuulzuul2025-10-13T00:12:29.592452402+00:00 stdout P Waiting for kubelet key and certificate to be available ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000000014515073043234033002 0ustar zuulzuul2025-10-13T00:08:25.762922781+00:00 stdout P Waiting for kubelet key and certificate to be available home/zuul/zuul-output/logs/ci-framework-data/artifacts/0000755000175000017500000000000015073043273022347 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/0000755000175000017500000000000015073042755024344 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/openstack/0000755000175000017500000000000015073042755026333 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/openstack/cr/0000755000175000017500000000000015073042755026737 5ustar 
zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible-vars.yml0000644000175000017500000176450015073043166025476 0ustar zuulzuul_param_dir: changed: true cmd: - ls - /home/zuul/ci-framework-data/artifacts/parameters delta: '0:00:00.005591' end: '2025-10-13 00:23:04.270279' failed: false msg: '' rc: 0 start: '2025-10-13 00:23:04.264688' stderr: '' stderr_lines: [] stdout: 'custom-params.yml install-yamls-params.yml openshift-login-params.yml zuul-params.yml' stdout_lines: - custom-params.yml - install-yamls-params.yml - openshift-login-params.yml - zuul-params.yml zuul_log_id: fa163ec2-ffbe-222f-b926-0000000008b6-1-controller _param_file: changed: false failed: false stat: exists: false _parsed_vars: changed: false msg: All items completed results: - ansible_loop_var: item changed: false content: Y2lmbXdfYXJ0aWZhY3RzX2NyY19zc2hrZXk6IH4vLnNzaC9pZF9jaWZ3CmNpZm13X2RlcGxveV9lZHBtOiBmYWxzZQpjaWZtd19kbHJuX3JlcG9ydF9yZXN1bHQ6IGZhbHNlCmNpZm13X2V4dHJhczoKLSAnQHNjZW5hcmlvcy9jZW50b3MtOS9tdWx0aW5vZGUtY2kueW1sJwotICdAc2NlbmFyaW9zL2NlbnRvcy05L2hvcml6b24ueW1sJwpjaWZtd19vcGVuc2hpZnRfYXBpOiBhcGkuY3JjLnRlc3Rpbmc6NjQ0MwpjaWZtd19vcGVuc2hpZnRfcGFzc3dvcmQ6ICcxMjM0NTY3ODknCmNpZm13X29wZW5zaGlmdF9za2lwX3Rsc192ZXJpZnk6IHRydWUKY2lmbXdfb3BlbnNoaWZ0X3VzZXI6IGt1YmVhZG1pbgpjaWZtd19wYXRoOiAvaG9tZS96dXVsLy5jcmMvYmluOi9ob21lL3p1dWwvLmNyYy9iaW4vb2M6L2hvbWUvenV1bC9iaW46fi8uY3JjL2Jpbjp+Ly5jcmMvYmluL29jOn4vYmluOi9ob21lL3p1dWwvLmxvY2FsL2JpbjovaG9tZS96dXVsL2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9zYmluCmNpZm13X3J1bl90ZXN0czogZmFsc2UKY2lmbXdfdXNlX2xpYnZpcnQ6IGZhbHNlCmNpZm13X3p1dWxfdGFyZ2V0X2hvc3Q6IGNvbnRyb2xsZXIK encoding: base64 failed: false invocation: module_args: src: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml item: custom-params.yml source: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml - ansible_loop_var: item changed: false content: cifmw_install_yamls_defaults:
    ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24
    ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24
    ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24
    ADOPTED_STORAGE_NETWORK: 172.18.1.0/24
    ADOPTED_TENANT_NETWORK: 172.9.1.0/24
    ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml
    ANSIBLEEE_BRANCH: main
    ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml
    ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest
    ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml
    ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests
    ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests
    ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator
    ANSIBLEE_COMMIT_HASH: ''
    BARBICAN: config/samples/barbican_v1beta1_barbican.yaml
    BARBICAN_BRANCH: main
    BARBICAN_COMMIT_HASH: ''
    BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml
    BARBICAN_DEPL_IMG: unused
    BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest
    BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml
    BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests
    BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests
    BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git
    BARBICAN_SERVICE_ENABLED: 'true'
    BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU=
    BAREMETAL_BRANCH: main
    BAREMETAL_COMMIT_HASH: ''
    BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest
    BAREMETAL_OS_CONTAINER_IMG: ''
    BAREMETAL_OS_IMG: ''
    BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git
    BAREMETAL_TIMEOUT: 20m
    BASH_IMG: quay.io/openstack-k8s-operators/bash:latest
    BGP_ASN: '64999'
    BGP_LEAF_1: 100.65.4.1
    BGP_LEAF_2: 100.64.4.1
    BGP_OVN_ROUTING: 'false'
    BGP_PEER_ASN: '64999'
    BGP_SOURCE_IP: 172.30.4.2
    BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42
    BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24
    BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64
    BMAAS_INSTANCE_DISK_SIZE: '20'
    BMAAS_INSTANCE_MEMORY: '4096'
    BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas
    BMAAS_INSTANCE_NET_MODEL: virtio
    BMAAS_INSTANCE_OS_VARIANT: centos-stream9
    BMAAS_INSTANCE_VCPUS: '2'
    BMAAS_INSTANCE_VIRT_TYPE: kvm
    BMAAS_IPV4: 'true'
    BMAAS_IPV6: 'false'
    BMAAS_LIBVIRT_USER: sushyemu
    BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26
    BMAAS_METALLB_POOL_NAME: baremetal
    BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24
    BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64
    BMAAS_NETWORK_NAME: crc-bmaas
    BMAAS_NODE_COUNT: '1'
    BMAAS_OCP_INSTANCE_NAME: crc
    BMAAS_REDFISH_PASSWORD: password
    BMAAS_REDFISH_USERNAME: admin
    BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default
    BMAAS_SUSHY_EMULATOR_DRIVER: libvirt
    BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest
    BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator
    BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml
    BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack
    BMH_NAMESPACE: openstack
    BMO_BRANCH: release-0.9
    BMO_COMMIT_HASH: ''
    BMO_IPA_BRANCH: stable/2024.1
    BMO_IRONIC_HOST: 192.168.122.10
    BMO_PROVISIONING_INTERFACE: ''
    BMO_REPO: https://github.com/metal3-io/baremetal-operator
    BMO_SETUP: ''
    BMO_SETUP_ROUTE_REPLACE: 'true'
    BM_CTLPLANE_INTERFACE: enp1s0
    BM_INSTANCE_MEMORY: '8192'
    BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal
    BM_INSTANCE_NAME_SUFFIX: '0'
    BM_NETWORK_NAME: default
    BM_NODE_COUNT: '1'
    BM_ROOT_PASSWORD: ''
    BM_ROOT_PASSWORD_SECRET: ''
    CEILOMETER_CENTRAL_DEPL_IMG: unused
    CEILOMETER_NOTIFICATION_DEPL_IMG: unused
    CEPH_BRANCH: release-1.15
    CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml
    CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml
    CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml
    CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml
    CEPH_IMG: quay.io/ceph/demo:latest-squid
    CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml
    CEPH_REPO: https://github.com/rook/rook.git
    CERTMANAGER_TIMEOUT: 300s
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    CINDER: config/samples/cinder_v1beta1_cinder.yaml
    CINDERAPI_DEPL_IMG: unused
    CINDERBKP_DEPL_IMG: unused
    CINDERSCH_DEPL_IMG: unused
    CINDERVOL_DEPL_IMG: unused
    CINDER_BRANCH: main
    CINDER_COMMIT_HASH: ''
    CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml
    CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest
    CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml
    CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests
    CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests
    CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git
    CLEANUP_DIR_CMD: rm -Rf
    CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11'
    CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12'
    CRC_HTTPS_PROXY: ''
    CRC_HTTP_PROXY: ''
    CRC_STORAGE_NAMESPACE: crc-storage
    CRC_STORAGE_RETRIES: '3'
    CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz'''
    CRC_VERSION: latest
    DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret
    DATAPLANE_ANSIBLE_USER: ''
    DATAPLANE_COMPUTE_IP: 192.168.122.100
    DATAPLANE_CONTAINER_PREFIX: openstack
    DATAPLANE_CONTAINER_TAG: current-podified
    DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
    DATAPLANE_DEFAULT_GW: 192.168.122.1
    DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null
    DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100%
    DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned
    DATAPLANE_NETWORKER_IP: 192.168.122.200
    DATAPLANE_NETWORK_INTERFACE_NAME: eth0
    DATAPLANE_NOVA_NFS_PATH: ''
    DATAPLANE_NTP_SERVER: pool.ntp.org
    DATAPLANE_PLAYBOOK: osp.edpm.download_cache
    DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9
    DATAPLANE_RUNNER_IMG: ''
    DATAPLANE_SERVER_ROLE: compute
    DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']'
    DATAPLANE_TIMEOUT: 30m
    DATAPLANE_TLS_ENABLED: 'true'
    DATAPLANE_TOTAL_NETWORKER_NODES: '1'
    DATAPLANE_TOTAL_NODES: '1'
    DBSERVICE: galera
    DESIGNATE: config/samples/designate_v1beta1_designate.yaml
    DESIGNATE_BRANCH: main
    DESIGNATE_COMMIT_HASH: ''
    DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml
    DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest
    DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml
    DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests
    DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests
    DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git
    DNSDATA: config/samples/network_v1beta1_dnsdata.yaml
    DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml
    DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml
    DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml
    DNS_DEPL_IMG: unused
    DNS_DOMAIN: localdomain
    DOWNLOAD_TOOLS_SELECTION: all
    EDPM_ATTACH_EXTNET: 'true'
    EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]'''
    EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]'''
    EDPM_COMPUTE_CELLS: '1'
    EDPM_COMPUTE_CEPH_ENABLED: 'true'
    EDPM_COMPUTE_CEPH_NOVA: 'true'
    EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true'
    EDPM_COMPUTE_SRIOV_ENABLED: 'true'
    EDPM_COMPUTE_SUFFIX: '0'
    EDPM_CONFIGURE_DEFAULT_ROUTE: 'true'
    EDPM_CONFIGURE_HUGEPAGES: 'false'
    EDPM_CONFIGURE_NETWORKING: 'true'
    EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra
    EDPM_NETWORKER_SUFFIX: '0'
    EDPM_TOTAL_NETWORKERS: '1'
    EDPM_TOTAL_NODES: '1'
    GALERA_REPLICAS: ''
    GENERATE_SSH_KEYS: 'true'
    GIT_CLONE_OPTS: ''
    GLANCE: config/samples/glance_v1beta1_glance.yaml
    GLANCEAPI_DEPL_IMG: unused
    GLANCE_BRANCH: main
    GLANCE_COMMIT_HASH: ''
    GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml
    GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest
    GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml
    GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests
    GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests
    GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git
    HEAT: config/samples/heat_v1beta1_heat.yaml
    HEATAPI_DEPL_IMG: unused
    HEATCFNAPI_DEPL_IMG: unused
    HEATENGINE_DEPL_IMG: unused
    HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0
    HEAT_BRANCH: main
    HEAT_COMMIT_HASH: ''
    HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml
    HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest
    HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml
    HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests
    HEAT_KUTTL_NAMESPACE: heat-kuttl-tests
    HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git
    HEAT_SERVICE_ENABLED: 'true'
    HORIZON: config/samples/horizon_v1beta1_horizon.yaml
    HORIZON_BRANCH: main
    HORIZON_COMMIT_HASH: ''
    HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml
    HORIZON_DEPL_IMG: unused
    HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest
    HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml
    HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests
    HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests
    HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git
    INFRA_BRANCH: main
    INFRA_COMMIT_HASH: ''
    INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest
    INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml
    INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests
    INFRA_KUTTL_NAMESPACE: infra-kuttl-tests
    INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git
    INSTALL_CERT_MANAGER: 'true'
    INSTALL_NMSTATE: true || false
    INSTALL_NNCP: true || false
    INTERNALAPI_HOST_ROUTES: ''
    IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24
    IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64
    IPV6_LAB_LIBVIRT_STORAGE_POOL: default
    IPV6_LAB_MANAGE_FIREWALLD: 'true'
    IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24
    IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64
    IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router
    IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64
    IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24
    IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1
    IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3
    IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96
    IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false'
    IPV6_LAB_NETWORK_NAME: nat64
    IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48
    IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11
    IPV6_LAB_SNO_HOST_PREFIX: '64'
    IPV6_LAB_SNO_INSTANCE_NAME: sno
    IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64
    IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp
    IPV6_LAB_SNO_OCP_VERSION: latest-4.14
    IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112
    IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub
    IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab
    IRONIC: config/samples/ironic_v1beta1_ironic.yaml
    IRONICAPI_DEPL_IMG: unused
    IRONICCON_DEPL_IMG: unused
    IRONICINS_DEPL_IMG: unused
    IRONICNAG_DEPL_IMG: unused
    IRONICPXE_DEPL_IMG: unused
    IRONIC_BRANCH: main
    IRONIC_COMMIT_HASH: ''
    IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml
    IRONIC_IMAGE_TAG: release-24.1
    IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest
    IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml
    IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests
    IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests
    IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git
    KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml
    KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml
    KEYSTONEAPI_DEPL_IMG: unused
    KEYSTONE_BRANCH: main
    KEYSTONE_COMMIT_HASH: ''
    KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f
    KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack
    KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest
    KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml
    KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests
    KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests
    KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git
    KUBEADMIN_PWD: '12345678'
    LIBVIRT_SECRET: libvirt-secret
    LOKI_DEPLOY_MODE: openshift-network
    LOKI_DEPLOY_NAMESPACE: netobserv
    LOKI_DEPLOY_SIZE: 1x.demo
    LOKI_NAMESPACE: openshift-operators-redhat
    LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki
    LOKI_SUBSCRIPTION: loki-operator
    LVMS_CR: '1'
    MANILA: config/samples/manila_v1beta1_manila.yaml
    MANILAAPI_DEPL_IMG: unused
    MANILASCH_DEPL_IMG: unused
    MANILASHARE_DEPL_IMG: unused
    MANILA_BRANCH: main
    MANILA_COMMIT_HASH: ''
    MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml
    MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest
    MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml
    MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests
    MANILA_KUTTL_NAMESPACE: manila-kuttl-tests
    MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git
    MANILA_SERVICE_ENABLED: 'true'
    MARIADB: config/samples/mariadb_v1beta1_galera.yaml
    MARIADB_BRANCH: main
    MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml
    MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests
    MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests
    MARIADB_COMMIT_HASH: ''
    MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml
    MARIADB_DEPL_IMG: unused
    MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest
    MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml
    MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests
    MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests
    MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git
    MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml
    MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml
    MEMCACHED_DEPL_IMG: unused
    METADATA_SHARED_SECRET: '1234567842'
    METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90
    METALLB_POOL: 192.168.122.80-192.168.122.90
    MICROSHIFT: '0'
    NAMESPACE: openstack
    NETCONFIG: config/samples/network_v1beta1_netconfig.yaml
    NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml
    NETCONFIG_DEPL_IMG: unused
    NETOBSERV_DEPLOY_NAMESPACE: netobserv
    NETOBSERV_NAMESPACE: openshift-netobserv-operator
    NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net
    NETOBSERV_SUBSCRIPTION: netobserv-operator
    NETWORK_BGP: 'false'
    NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0
    NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0
    NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0
    NETWORK_ISOLATION: 'true'
    NETWORK_ISOLATION_INSTANCE_NAME: crc
    NETWORK_ISOLATION_IPV4: 'true'
    NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24
    NETWORK_ISOLATION_IPV4_NAT: 'true'
    NETWORK_ISOLATION_IPV6: 'false'
    NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64
    NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10
    NETWORK_ISOLATION_MAC: '52:54:00:11:11:10'
    NETWORK_ISOLATION_NETWORK_NAME: net-iso
    NETWORK_ISOLATION_NET_NAME: default
    NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true'
    NETWORK_MTU: '1500'
    NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0
    NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0
    NETWORK_STORAGE_MACVLAN: ''
    NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0
    NETWORK_VLAN_START: '20'
    NETWORK_VLAN_STEP: '1'
    NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml
    NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml
    NEUTRONAPI_DEPL_IMG: unused
    NEUTRON_BRANCH: main
    NEUTRON_COMMIT_HASH: ''
    NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest
    NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml
    NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests
    NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests
    NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git
    NFS_HOME: /home/nfs
    NMSTATE_NAMESPACE: openshift-nmstate
    NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8
    NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator
    NNCP_ADDITIONAL_HOST_ROUTES: ''
    NNCP_BGP_1_INTERFACE: enp7s0
    NNCP_BGP_1_IP_ADDRESS: 100.65.4.2
    NNCP_BGP_2_INTERFACE: enp8s0
    NNCP_BGP_2_IP_ADDRESS: 100.64.4.2
    NNCP_BRIDGE: ospbr
    NNCP_CLEANUP_TIMEOUT: 120s
    NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::'
    NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10'
    NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122
    NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10'
    NNCP_DNS_SERVER: 192.168.122.1
    NNCP_DNS_SERVER_IPV6: fd00:aaaa::1
    NNCP_GATEWAY: 192.168.122.1
    NNCP_GATEWAY_IPV6: fd00:aaaa::1
    NNCP_INTERFACE: enp6s0
    NNCP_NODES: ''
    NNCP_TIMEOUT: 240s
    NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml
    NOVA_BRANCH: main
    NOVA_COMMIT_HASH: ''
    NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml
    NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest
    NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git
    NUMBER_OF_INSTANCES: '1'
    OCP_NETWORK_NAME: crc
    OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml
    OCTAVIA_BRANCH: main
    OCTAVIA_COMMIT_HASH: ''
    OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml
    OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest
    OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml
    OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests
    OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests
    OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git
    OKD: 'false'
    OPENSTACK_BRANCH: main
    OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest
    OPENSTACK_COMMIT_HASH: ''
    OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
    OPENSTACK_CRDS_DIR: openstack_crds
    OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
    OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest
    OPENSTACK_K8S_BRANCH: main
    OPENSTACK_K8S_TAG: latest
    OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml
    OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests
    OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests
    OPENSTACK_NEUTRON_CUSTOM_CONF: ''
    OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git
    OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest
    OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator
    OPERATOR_CHANNEL: ''
    OPERATOR_NAMESPACE: openstack-operators
    OPERATOR_SOURCE: ''
    OPERATOR_SOURCE_NAMESPACE: ''
    OUT: /home/zuul/ci-framework-data/artifacts/manifests
    OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
    OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml
    OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml
    OVNCONTROLLER_NMAP: 'true'
    OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml
    OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml
    OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml
    OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml
    OVN_BRANCH: main
    OVN_COMMIT_HASH: ''
    OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest
    OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml
    OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests
    OVN_KUTTL_NAMESPACE: ovn-kuttl-tests
    OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git
    PASSWORD: '12345678'
    PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml
    PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml
    PLACEMENTAPI_DEPL_IMG: unused
    PLACEMENT_BRANCH: main
    PLACEMENT_COMMIT_HASH: ''
    PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest
    PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml
    PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests
    PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests
    PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git
    PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt
    RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml
    RABBITMQ_BRANCH: patches
    RABBITMQ_COMMIT_HASH: ''
    RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml
    RABBITMQ_DEPL_IMG: unused
    RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest
    RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git
    REDHAT_OPERATORS: 'false'
    REDIS: config/samples/redis_v1beta1_redis.yaml
    REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml
    REDIS_DEPL_IMG: unused
    RH_REGISTRY_PWD: ''
    RH_REGISTRY_USER: ''
    SECRET: osp-secret
    SG_CORE_DEPL_IMG: unused
    STANDALONE_COMPUTE_DRIVER: libvirt
    STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0
    STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0
    STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0
    STANDALONE_STORAGE_NET_PREFIX: 172.18.0
    STANDALONE_TENANT_NET_PREFIX: 172.19.0
    STORAGEMGMT_HOST_ROUTES: ''
    STORAGE_CLASS: local-storage
    STORAGE_HOST_ROUTES: ''
    SWIFT: config/samples/swift_v1beta1_swift.yaml
    SWIFT_BRANCH: main
    SWIFT_COMMIT_HASH: ''
    SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml
    SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest
    SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml
    SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests
    SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests
    SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git
    TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml
    TELEMETRY_BRANCH: main
    TELEMETRY_COMMIT_HASH: ''
    TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml
    TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest
    TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator
    TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml
    TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests
    TELEMETRY_KUTTL_RELPATH: tests/kuttl/suites
    TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git
    TENANT_HOST_ROUTES: ''
    TIMEOUT: 300s
    TLS_ENABLED: 'false'
    tripleo_deploy: 'export REGISTRY_USER:'
cifmw_install_yamls_environment:
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig
    OPENSTACK_K8S_BRANCH: main
    OUT: /home/zuul/ci-framework-data/artifacts/manifests
    OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
    encoding: base64
    failed: false
    invocation:
        module_args:
            src: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml
    item: install-yamls-params.yml
    source: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml
-   ansible_loop_var: item
    changed: false
    content: Y2lmbXdfb3BlbnNoaWZ0X2FwaTogaHR0cHM6Ly9hcGkuY3JjLnRlc3Rpbmc6NjQ0MwpjaWZtd19vcGVuc2hpZnRfY29udGV4dDogZGVmYXVsdC9hcGktY3JjLXRlc3Rpbmc6NjQ0My9rdWJlYWRtaW4KY2lmbXdfb3BlbnNoaWZ0X2t1YmVjb25maWc6IC9ob21lL3p1dWwvLmNyYy9tYWNoaW5lcy9jcmMva3ViZWNvbmZpZwpjaWZtd19vcGVuc2hpZnRfdG9rZW46IHNoYTI1Nn5fTDlvRHRHQ2ExUXc3ME80TEdXM01EZVo4b1U2MFR4Q3ZZa1pKaXEyMk9zCmNpZm13X29wZW5zaGlmdF91c2VyOiBrdWJlYWRtaW4K
    encoding: base64
    failed: false
    invocation:
        module_args:
            src: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml
    item: openshift-login-params.yml
    source: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml
-   ansible_loop_var: item
    changed: false
    content: cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw
cifmw_deploy_edpm: false
cifmw_dlrn_report_result: false
cifmw_extras:
- '@scenarios/centos-9/multinode-ci.yml'
- '@scenarios/centos-9/horizon.yml'
cifmw_openshift_api: api.crc.testing:6443
cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig'
cifmw_openshift_password: '123456789'
cifmw_openshift_skip_tls_verify: true
cifmw_openshift_user: kubeadmin
cifmw_run_tests: false
cifmw_use_libvirt: false
cifmw_zuul_target_host: controller
crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''')
    }}'
crc_ci_bootstrap_networking:
    instances:
        controller:
            networks:
                default:
                    ip: 192.168.122.11
        crc:
            networks:
                default:
                    ip: 192.168.122.10
                internal-api:
                    ip: 172.17.0.5
                storage:
                    ip: 172.18.0.5
                tenant:
                    ip: 172.19.0.5
    networks:
        default:
            mtu: 1500
            range: 192.168.122.0/24
        internal-api:
            range: 172.17.0.0/24
            vlan: 20
        storage:
            range: 172.18.0.0/24
            vlan: 21
        tenant:
            range: 172.19.0.0/24
            vlan: 22
enable_ramdisk: true
podified_validation: true
push_registry: quay.rdoproject.org
quay_login_secret_name: quay_nextgen_zuulgithubci
registry_login_enabled: true
scenario: local_build-index_deploy
zuul:
    _inheritance_path:
    - '<Job base-minimal branches: None source: config/zuul.d/jobs.yaml@master#24>'
    - '<Job base-crc-cloud branches: None source: config/zuul.d/_jobs-crc.yaml@master#235>'
    - '<Job cifmw-podified-multinode-edpm-base-crc branches: None source: openstack-k8s-operators/ci-framework/zuul.d/base.yaml@main#123>'
    - '<Job podified-multinode-edpm-deployment-crc branches: None source: openstack-k8s-operators/ci-framework/zuul.d/edpm_multinode.yaml@main#317>'
    - '<Job stf-base-2node branches: {MatchAny:{ImpliedBranchMatcher:master}} source:
        infrawatch/service-telemetry-operator/.zuul.yaml@master#18>'
    - '<Job stf-base branches: {MatchAny:{ImpliedBranchMatcher:master}} source: infrawatch/service-telemetry-operator/.zuul.yaml@master#72>'
    - '<Job stf-crc-local_build-index_deploy branches: {MatchAny:{ImpliedBranchMatcher:master}}
        source: infrawatch/service-telemetry-operator/.zuul.yaml@master#118>'
    - '<Job stf-crc-ocp_416-local_build-index_deploy branches: {MatchAny:{ImpliedBranchMatcher:master}}
        source: infrawatch/service-telemetry-operator/.zuul.yaml@master#173>'
    - '<Job stf-crc-ocp_416-local_build-index_deploy branches: None source: infrawatch/service-telemetry-operator/.zuul.yaml@master#230>'
    ansible_version: '8'
    attempts: 1
    branch: master
    build: 88dd6c905f2746688a8f680e3012c758
    build_refs:
    -   branch: master
        change_url: https://github.com/infrawatch/service-telemetry-operator
        project:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            name: infrawatch/service-telemetry-operator
            short_name: service-telemetry-operator
        src_dir: src/github.com/infrawatch/service-telemetry-operator
    buildset: 8637980fa8664cc3a54deb4b258e06d7
    buildset_refs:
    -   branch: master
        change_url: https://github.com/infrawatch/service-telemetry-operator
        project:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            name: infrawatch/service-telemetry-operator
            short_name: service-telemetry-operator
        src_dir: src/github.com/infrawatch/service-telemetry-operator
    change_url: https://github.com/infrawatch/service-telemetry-operator
    child_jobs: []
    event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8
    executor:
        hostname: ze01.softwarefactory-project.io
        inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml
        log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs
        result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json
        src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src
        work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work
    items:
    -   branch: master
        change_url: https://github.com/infrawatch/service-telemetry-operator
        project:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            name: infrawatch/service-telemetry-operator
            short_name: service-telemetry-operator
            src_dir: src/github.com/infrawatch/service-telemetry-operator
    job: stf-crc-ocp_416-local_build-index_deploy
    jobtags: []
    max_attempts: 1
    pipeline: periodic
    playbook_context:
        playbook_projects:
            trusted/project_0/review.rdoproject.org/config:
                canonical_name: review.rdoproject.org/config
                checkout: master
                commit: 381c86678f470a5590d19274a2eb914e95b81bb7
            trusted/project_1/opendev.org/zuul/zuul-jobs:
                canonical_name: opendev.org/zuul/zuul-jobs
                checkout: master
                commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df
            trusted/project_2/review.rdoproject.org/rdo-jobs:
                canonical_name: review.rdoproject.org/rdo-jobs
                checkout: master
                commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4
            trusted/project_3/github.com/openstack-k8s-operators/ci-framework:
                canonical_name: github.com/openstack-k8s-operators/ci-framework
                checkout: main
                commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b
            untrusted/project_0/github.com/openstack-k8s-operators/ci-framework:
                canonical_name: github.com/openstack-k8s-operators/ci-framework
                checkout: main
                commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b
            untrusted/project_1/review.rdoproject.org/config:
                canonical_name: review.rdoproject.org/config
                checkout: master
                commit: 381c86678f470a5590d19274a2eb914e95b81bb7
            untrusted/project_2/opendev.org/zuul/zuul-jobs:
                canonical_name: opendev.org/zuul/zuul-jobs
                checkout: master
                commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df
            untrusted/project_3/review.rdoproject.org/rdo-jobs:
                canonical_name: review.rdoproject.org/rdo-jobs
                checkout: master
                commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4
            untrusted/project_4/github.com/infrawatch/service-telemetry-operator:
                canonical_name: github.com/infrawatch/service-telemetry-operator
                checkout: master
                commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72
        playbooks:
        -   path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml
            roles:
            -   checkout: master
                checkout_description: playbook branch
                link_name: ansible/playbook_0/role_0/service-telemetry-operator
                link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator
                role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles
            -   checkout: main
                checkout_description: project override ref
                link_name: ansible/playbook_0/role_1/ci-framework
                link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework
                role_path: ansible/playbook_0/role_1/ci-framework/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_0/role_2/config
                link_target: untrusted/project_1/review.rdoproject.org/config
                role_path: ansible/playbook_0/role_2/config/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_0/role_3/zuul-jobs
                link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs
                role_path: ansible/playbook_0/role_3/zuul-jobs/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_0/role_4/rdo-jobs
                link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs
                role_path: ansible/playbook_0/role_4/rdo-jobs/roles
        -   path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml
            roles:
            -   checkout: master
                checkout_description: playbook branch
                link_name: ansible/playbook_1/role_0/service-telemetry-operator
                link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator
                role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles
            -   checkout: main
                checkout_description: project override ref
                link_name: ansible/playbook_1/role_1/ci-framework
                link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework
                role_path: ansible/playbook_1/role_1/ci-framework/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_1/role_2/config
                link_target: untrusted/project_1/review.rdoproject.org/config
                role_path: ansible/playbook_1/role_2/config/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_1/role_3/zuul-jobs
                link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs
                role_path: ansible/playbook_1/role_3/zuul-jobs/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_1/role_4/rdo-jobs
                link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs
                role_path: ansible/playbook_1/role_4/rdo-jobs/roles
    post_review: true
    project:
        canonical_hostname: github.com
        canonical_name: github.com/infrawatch/service-telemetry-operator
        name: infrawatch/service-telemetry-operator
        short_name: service-telemetry-operator
        src_dir: src/github.com/infrawatch/service-telemetry-operator
    projects:
        github.com/crc-org/crc-cloud:
            canonical_hostname: github.com
            canonical_name: github.com/crc-org/crc-cloud
            checkout: main
            checkout_description: project override ref
            commit: f6ed2f2d118884a075895bbf954ff6000e540430
            name: crc-org/crc-cloud
            required: true
            short_name: crc-cloud
            src_dir: src/github.com/crc-org/crc-cloud
        github.com/infrawatch/prometheus-webhook-snmp:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/prometheus-webhook-snmp
            checkout: master
            checkout_description: zuul branch
            commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895
            name: infrawatch/prometheus-webhook-snmp
            required: true
            short_name: prometheus-webhook-snmp
            src_dir: src/github.com/infrawatch/prometheus-webhook-snmp
        github.com/infrawatch/service-telemetry-operator:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            checkout: master
            checkout_description: zuul branch
            commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72
            name: infrawatch/service-telemetry-operator
            required: true
            short_name: service-telemetry-operator
            src_dir: src/github.com/infrawatch/service-telemetry-operator
        github.com/infrawatch/sg-bridge:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/sg-bridge
            checkout: master
            checkout_description: zuul branch
            commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2
            name: infrawatch/sg-bridge
            required: true
            short_name: sg-bridge
            src_dir: src/github.com/infrawatch/sg-bridge
        github.com/infrawatch/sg-core:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/sg-core
            checkout: master
            checkout_description: zuul branch
            commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6
            name: infrawatch/sg-core
            required: true
            short_name: sg-core
            src_dir: src/github.com/infrawatch/sg-core
        github.com/infrawatch/smart-gateway-operator:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/smart-gateway-operator
            checkout: master
            checkout_description: zuul branch
            commit: 2ff5b96b6254418d20a509188eea72ab2c77839c
            name: infrawatch/smart-gateway-operator
            required: true
            short_name: smart-gateway-operator
            src_dir: src/github.com/infrawatch/smart-gateway-operator
        github.com/openstack-k8s-operators/ci-framework:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/ci-framework
            checkout: main
            checkout_description: project override ref
            commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b
            name: openstack-k8s-operators/ci-framework
            required: true
            short_name: ci-framework
            src_dir: src/github.com/openstack-k8s-operators/ci-framework
        github.com/openstack-k8s-operators/dataplane-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/dataplane-operator
            checkout: main
            checkout_description: project override ref
            commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2
            name: openstack-k8s-operators/dataplane-operator
            required: true
            short_name: dataplane-operator
            src_dir: src/github.com/openstack-k8s-operators/dataplane-operator
        github.com/openstack-k8s-operators/edpm-ansible:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/edpm-ansible
            checkout: main
            checkout_description: project default branch
            commit: 95aa63de3182faad63a69301d101debad3efc936
            name: openstack-k8s-operators/edpm-ansible
            required: true
            short_name: edpm-ansible
            src_dir: src/github.com/openstack-k8s-operators/edpm-ansible
        github.com/openstack-k8s-operators/infra-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/infra-operator
            checkout: main
            checkout_description: project override ref
            commit: 63860ee1375c38462801e8341a7f18335169f94c
            name: openstack-k8s-operators/infra-operator
            required: true
            short_name: infra-operator
            src_dir: src/github.com/openstack-k8s-operators/infra-operator
        github.com/openstack-k8s-operators/install_yamls:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/install_yamls
            checkout: main
            checkout_description: project default branch
            commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8
            name: openstack-k8s-operators/install_yamls
            required: true
            short_name: install_yamls
            src_dir: src/github.com/openstack-k8s-operators/install_yamls
        github.com/openstack-k8s-operators/openstack-baremetal-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator
            checkout: master
            checkout_description: zuul branch
            commit: a333e57066b1d48e41f93af68be81188290a96b3
            name: openstack-k8s-operators/openstack-baremetal-operator
            required: true
            short_name: openstack-baremetal-operator
            src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator
        github.com/openstack-k8s-operators/openstack-must-gather:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/openstack-must-gather
            checkout: main
            checkout_description: project override ref
            commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096
            name: openstack-k8s-operators/openstack-must-gather
            required: true
            short_name: openstack-must-gather
            src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather
        github.com/openstack-k8s-operators/openstack-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/openstack-operator
            checkout: main
            checkout_description: project override ref
            commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71
            name: openstack-k8s-operators/openstack-operator
            required: true
            short_name: openstack-operator
            src_dir: src/github.com/openstack-k8s-operators/openstack-operator
        github.com/openstack-k8s-operators/repo-setup:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/repo-setup
            checkout: main
            checkout_description: project default branch
            commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f
            name: openstack-k8s-operators/repo-setup
            required: true
            short_name: repo-setup
            src_dir: src/github.com/openstack-k8s-operators/repo-setup
        opendev.org/zuul/zuul-jobs:
            canonical_hostname: opendev.org
            canonical_name: opendev.org/zuul/zuul-jobs
            checkout: master
            checkout_description: zuul branch
            commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df
            name: zuul/zuul-jobs
            required: true
            short_name: zuul-jobs
            src_dir: src/opendev.org/zuul/zuul-jobs
        review.rdoproject.org/config:
            canonical_hostname: review.rdoproject.org
            canonical_name: review.rdoproject.org/config
            checkout: master
            checkout_description: zuul branch
            commit: 381c86678f470a5590d19274a2eb914e95b81bb7
            name: config
            required: true
            short_name: config
            src_dir: src/review.rdoproject.org/config
    ref: refs/heads/master
    resources: {}
    tenant: rdoproject.org
    timeout: 3600
    voting: true
zuul_log_collection: true
 encoding: base64 failed: false invocation: module_args: src: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml item: zuul-params.yml source: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml skipped: false ansible_all_ipv4_addresses: - 38.102.83.214 ansible_all_ipv6_addresses: - fe80::f816:3eff:fec1:afa3 ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_collection_name: null ansible_config_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2025-10-13' day: '13' epoch: '1760314997' epoch_int: '1760314997' hour: '00' iso8601: '2025-10-13T00:23:17Z' iso8601_basic: 20251013T002317274459 iso8601_basic_short: 20251013T002317 iso8601_micro: '2025-10-13T00:23:17.274459Z' minute: '23' month: '10' second: '17' time: 00:23:17 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' ansible_default_ipv4: address: 38.102.83.214 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:c1:af:a3 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_dependent_role_names: [] ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-10-13-00-00-56-00 vda1: - 9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-00-56-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '0' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 9839e2e1-98a2-4594-b609-79d514deb0a3 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) 
| /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 41650 22 SSH_CONNECTION: 38.102.83.114 41650 38.102.83.214 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.214 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fec1:afa3 prefix: '64' scope: link macaddress: fa:16:3e:c1:af:a3 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.214 all_ipv6_addresses: - fe80::f816:3eff:fec1:afa3 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 
cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-10-13T00:10:11Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f hardware_offload_type: null hints: '' id: 6c016c0a-2ac9-4869-a5a8-eab3fa182c1c ip_allocation: immediate mac_address: fa:16:3e:4e:a0:95 name: crc-c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-10-13T00:10:11Z' crc_ci_bootstrap_network_name: zuul-ci-net-88dd6c90 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:a7:cc:09 mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:4e:a0:95 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:1d:a0:a2 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:bb:8f:d3 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:30:89:5a mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:28Z' description: '' dns_domain: '' id: d01062d3-7b21-4be9-857c-4aa990cef4db ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-88dd6c90 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-10-13T00:09:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:35Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 
38.102.83.199 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: cf00ba68-55fa-4056-9b58-bb065c0601bb name: zuul-ci-subnet-router-88dd6c90 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-10-13T00:09:37Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-10-13T00:09:32Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-88dd6c90 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-10-13T00:09:32Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-88dd6c90 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-88dd6c90 date_time: date: '2025-10-13' day: '13' epoch: '1760314997' epoch_int: '1760314997' hour: '00' iso8601: '2025-10-13T00:23:17Z' iso8601_basic: 20251013T002317274459 iso8601_basic_short: 20251013T002317 iso8601_micro: '2025-10-13T00:23:17.274459Z' minute: '23' month: '10' second: '17' time: 00:23:17 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' default_ipv4: address: 38.102.83.214 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:c1:af:a3 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-10-13-00-00-56-00 vda1: - 9839e2e1-98a2-4594-b609-79d514deb0a3 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-00-56-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '0' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 9839e2e1-98a2-4594-b609-79d514deb0a3 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: 
unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 41650 22 SSH_CONNECTION: 38.102.83.114 41650 38.102.83.214 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.214 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fec1:afa3 prefix: '64' scope: link macaddress: fa:16:3e:c1:af:a3 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 interfaces: - lo - eth0 is_chroot: false iscsi_iqn: '' kernel: 5.14.0-621.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] 
hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.01 1m: 0.08 5m: 0.02 locally_reachable_ips: ipv4: - 38.102.83.214 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fec1:afa3 lsb: {} lvm: N/A machine: x86_64 machine_id: a1727ec20198bc6caf436a6e13c4ff5e memfree_mb: 7266 memory_mb: nocache: free: 7417 used: 263 real: free: 7266 total: 7680 used: 414 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7680 module_setup: true mounts: - block_available: 20378918 block_size: 4096 block_total: 20954875 block_used: 575957 device: /dev/vda1 fstype: xfs inode_available: 41888289 inode_total: 41942512 inode_used: 54223 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83472048128 size_total: 85831168000 uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.2.1 
python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 23 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 23 - final - 0 python_version: 3.9.23 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxwvvCYwnIDtxKxVxyDCUXhYuWEo+WsGS1jEd+Im13VpWuXa7IQrDvjmuO0jn8/KspLpldlXZAyvPIi9+nNvkk= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIB1/unp9+ffn2cxr1RyLKXm2uZfT+tLfIHwoS/yhV9RG ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCtwQO/sn8zPSCivURPoL3DNUpFgI+Y/GknmWIW+/QsvlCk4sBWYiqOXubpbETP/ZuHnkt6w69huALW3iVln/6SdW9iz2mhr8+AHVAee6i3GRdpOWbUDuatQDsdRX3GWxhJ3iR4Q2CrLL9cuJIayVmHepeTrUt2AaPBwcRw7Or+VinGX/9nIUQRguvXHv3VeRUX003jI5B9xUO/6vZ99+ClMMpZPbhLqdLZnuKoLA9loqq6szVShReR3fCZNDH8FKZzjIFfFaj9uDgDfIB3iBKtQdr0HfSSF8CQ2A6o/P43FG9/w7Is3QQidH997QhMNrRNzbrNvgA8vgwi6qIkjFwYBO0O9VnlS1Fux4NG570chg5FmrtGWKGKAHxWuCm4zLuUAJWzw/gxVcPemOJlmIxbGIo/YMT0VgPQzbjFTxGehUhba1ncNNDyH8Cu7FHUbuX6pr6RWksUx+dixeBtFBjGlUg44pJZ+4I9XrHXTwLpBs3GXSUxi0gkQT182Xt8jyE= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 424 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - service-telemetry-operator ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: controller ansible_host: 38.102.83.214 ansible_hostname: controller ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 ansible_interfaces: - lo - eth0 ansible_inventory_sources: - /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/inventory.yaml ansible_is_chroot: false ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-621.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on 
[fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.01 1m: 0.08 5m: 0.02 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.214 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fec1:afa3 ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: a1727ec20198bc6caf436a6e13c4ff5e ansible_memfree_mb: 7266 ansible_memory_mb: nocache: free: 7417 used: 263 real: free: 7266 total: 7680 used: 414 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 7680 ansible_mounts: - block_available: 20378918 block_size: 4096 block_total: 20954875 block_used: 575957 device: /dev/vda1 fstype: xfs inode_available: 41888289 inode_total: 41942512 inode_used: 54223 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83472048128 size_total: 85831168000 uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_nodename: controller ansible_os_family: RedHat ansible_parent_role_names: - cifmw_setup ansible_parent_role_paths: - /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/roles/cifmw_setup ansible_pkg_mgr: dnf ansible_play_batch: &id002 - controller ansible_play_hosts: - controller ansible_play_hosts_all: - controller - crc ansible_play_name: Run ci/playbooks/e2e-collect-logs.yml ansible_play_role_names: &id003 - os_must_gather - artifacts - env_op_images - cifmw_setup ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 8 ansible_processor_nproc: 8 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 8 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.2.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 23 minor: 9 releaselevel: final serial: 0 
version_info: - 3 - 9 - 23 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.23 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_role_name: artifacts ansible_role_names: - os_must_gather - artifacts - env_op_images - cifmw_setup ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxwvvCYwnIDtxKxVxyDCUXhYuWEo+WsGS1jEd+Im13VpWuXa7IQrDvjmuO0jn8/KspLpldlXZAyvPIi9+nNvkk= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIB1/unp9+ffn2cxr1RyLKXm2uZfT+tLfIHwoS/yhV9RG ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCtwQO/sn8zPSCivURPoL3DNUpFgI+Y/GknmWIW+/QsvlCk4sBWYiqOXubpbETP/ZuHnkt6w69huALW3iVln/6SdW9iz2mhr8+AHVAee6i3GRdpOWbUDuatQDsdRX3GWxhJ3iR4Q2CrLL9cuJIayVmHepeTrUt2AaPBwcRw7Or+VinGX/9nIUQRguvXHv3VeRUX003jI5B9xUO/6vZ99+ClMMpZPbhLqdLZnuKoLA9loqq6szVShReR3fCZNDH8FKZzjIFfFaj9uDgDfIB3iBKtQdr0HfSSF8CQ2A6o/P43FG9/w7Is3QQidH997QhMNrRNzbrNvgA8vgwi6qIkjFwYBO0O9VnlS1Fux4NG570chg5FmrtGWKGKAHxWuCm4zLuUAJWzw/gxVcPemOJlmIxbGIo/YMT0VgPQzbjFTxGehUhba1ncNNDyH8Cu7FHUbuX6pr6RWksUx+dixeBtFBjGlUg44pJZ+4I9XrHXTwLpBs3GXSUxi0gkQT182Xt8jyE= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 424 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_artifacts_basedir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_artifacts_crc_host: api.crc.testing cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_artifacts_crc_sshkey_ed25519: ~/.crc/machines/crc/id_ed25519 cifmw_artifacts_crc_user: core cifmw_artifacts_gather_logs: true cifmw_artifacts_mask_logs: true cifmw_basedir: /home/zuul/ci-framework-data cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_env_op_images_dir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_env_op_images_dryrun: false cifmw_env_op_images_file: operator_images.yaml cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_os_must_gather_additional_namespaces: kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko 
cifmw_os_must_gather_dump_db: ALL cifmw_os_must_gather_host_network: false cifmw_os_must_gather_image: quay.io/openstack-k8s-operators/openstack-must-gather:latest cifmw_os_must_gather_image_push: true cifmw_os_must_gather_image_registry: quay.rdoproject.org/openstack-k8s-operators cifmw_os_must_gather_kubeconfig: '{{ ansible_user_dir }}/.kube/config' cifmw_os_must_gather_namespaces: - openstack-operators - openstack - baremetal-operator-system - openshift-machine-api - cert-manager - openshift-nmstate - openshift-marketplace - metallb-system - crc-storage cifmw_os_must_gather_output_dir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_os_must_gather_output_log_dir: '{{ cifmw_os_must_gather_output_dir }}/logs/openstack-must-gather' cifmw_os_must_gather_repo_path: '{{ ansible_user_dir }}/src/github.com/openstack-k8s-operators/openstack-must-gather' cifmw_os_must_gather_timeout: 10m cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_run_tests: false cifmw_status: changed: false failed: false stat: atime: 1760314913.6008098 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: binary ctime: 1760314917.1628997 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 54551398 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1760314917.1628997 nlink: 21 path: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 4096 uid: 1000 version: '3399238660' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true cifmw_success_flag: changed: false failed: false stat: exists: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-10-13T00:10:11Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. 
hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f hardware_offload_type: null hints: '' id: 6c016c0a-2ac9-4869-a5a8-eab3fa182c1c ip_allocation: immediate mac_address: fa:16:3e:4e:a0:95 name: crc-c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-10-13T00:10:11Z' crc_ci_bootstrap_network_name: zuul-ci-net-88dd6c90 crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:a7:cc:09 mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:4e:a0:95 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:1d:a0:a2 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:bb:8f:d3 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:30:89:5a mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:28Z' description: '' dns_domain: '' id: d01062d3-7b21-4be9-857c-4aa990cef4db ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-88dd6c90 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-10-13T00:09:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:35Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.199 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: cf00ba68-55fa-4056-9b58-bb065c0601bb name: zuul-ci-subnet-router-88dd6c90 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-10-13T00:09:37Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-10-13T00:09:32Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 
192.168.122.1 host_routes: [] id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-88dd6c90 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-10-13T00:09:32Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-88dd6c90 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-88dd6c90 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true environment: - ANSIBLE_LOG_PATH: '{{ ansible_user_dir }}/ci-framework-data/logs/e2e-collect-logs-must-gather.log' gather_subset: - min group_names: - ungrouped groups: all: - controller - crc ungrouped: &id001 - controller - crc zuul_unreachable: [] hostvars: controller: _param_dir: changed: true cmd: - ls - /home/zuul/ci-framework-data/artifacts/parameters delta: '0:00:00.005591' end: '2025-10-13 00:23:04.270279' failed: false msg: '' rc: 0 start: '2025-10-13 00:23:04.264688' stderr: '' stderr_lines: [] stdout: 'custom-params.yml install-yamls-params.yml openshift-login-params.yml zuul-params.yml' stdout_lines: - custom-params.yml - install-yamls-params.yml - openshift-login-params.yml - zuul-params.yml zuul_log_id: fa163ec2-ffbe-222f-b926-0000000008b6-1-controller _param_file: changed: false failed: false stat: exists: false _parsed_vars: changed: false msg: All items completed results: - ansible_loop_var: item changed: false content: Y2lmbXdfYXJ0aWZhY3RzX2NyY19zc2hrZXk6IH4vLnNzaC9pZF9jaWZ3CmNpZm13X2RlcGxveV9lZHBtOiBmYWxzZQpjaWZtd19kbHJuX3JlcG9ydF9yZXN1bHQ6IGZhbHNlCmNpZm13X2V4dHJhczoKLSAnQHNjZW5hcmlvcy9jZW50b3MtOS9tdWx0aW5vZGUtY2kueW1sJwotICdAc2NlbmFyaW9zL2NlbnRvcy05L2hvcml6b24ueW1sJwpjaWZtd19vcGVuc2hpZnRfYXBpOiBhcGkuY3JjLnRlc3Rpbmc6NjQ0MwpjaWZtd19vcGVuc2hpZnRfcGFzc3dvcmQ6ICcxMjM0NTY3ODknCmNpZm13X29wZW5zaGlmdF9za2lwX3Rsc192ZXJpZnk6IHRydWUKY2lmbXdfb3BlbnNoaWZ0X3VzZXI6IGt1YmVhZG1pbgpjaWZtd19wYXRoOiAvaG9tZS96dXVsLy5jcmMvYmluOi9ob21lL3p1dWwvLmNyYy9iaW4vb2M6L2hvbWUvenV1bC9iaW46fi8uY3JjL2Jpbjp+Ly5jcmMvYmluL29jOn4vYmluOi9ob21lL3p1dWwvLmxvY2FsL2JpbjovaG9tZS96dXVsL2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9zYmluCmNpZm13X3J1bl90ZXN0czogZmFsc2UKY2lmbXdfdXNlX2xpYnZpcnQ6IGZhbHNlCmNpZm13X3p1dWxfdGFyZ2V0X2hvc3Q6IGNvbnRyb2xsZXIK encoding: base64 failed: false invocation: module_args: src: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml item: custom-params.yml source: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml - ansible_loop_var: item changed: false content: cifmw_install_yamls_defaults:
    ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24
    ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24
    ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24
    ADOPTED_STORAGE_NETWORK: 172.18.1.0/24
    ADOPTED_TENANT_NETWORK: 172.9.1.0/24
    ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml
    ANSIBLEEE_BRANCH: main
    ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml
    ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest
    ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml
    ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests
    ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests
    ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator
    ANSIBLEE_COMMIT_HASH: ''
    BARBICAN: config/samples/barbican_v1beta1_barbican.yaml
    BARBICAN_BRANCH: main
    BARBICAN_COMMIT_HASH: ''
    BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml
    BARBICAN_DEPL_IMG: unused
    BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest
    BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml
    BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests
    BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests
    BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git
    BARBICAN_SERVICE_ENABLED: 'true'
    BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU=
    BAREMETAL_BRANCH: main
    BAREMETAL_COMMIT_HASH: ''
    BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest
    BAREMETAL_OS_CONTAINER_IMG: ''
    BAREMETAL_OS_IMG: ''
    BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git
    BAREMETAL_TIMEOUT: 20m
    BASH_IMG: quay.io/openstack-k8s-operators/bash:latest
    BGP_ASN: '64999'
    BGP_LEAF_1: 100.65.4.1
    BGP_LEAF_2: 100.64.4.1
    BGP_OVN_ROUTING: 'false'
    BGP_PEER_ASN: '64999'
    BGP_SOURCE_IP: 172.30.4.2
    BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42
    BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24
    BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64
    BMAAS_INSTANCE_DISK_SIZE: '20'
    BMAAS_INSTANCE_MEMORY: '4096'
    BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas
    BMAAS_INSTANCE_NET_MODEL: virtio
    BMAAS_INSTANCE_OS_VARIANT: centos-stream9
    BMAAS_INSTANCE_VCPUS: '2'
    BMAAS_INSTANCE_VIRT_TYPE: kvm
    BMAAS_IPV4: 'true'
    BMAAS_IPV6: 'false'
    BMAAS_LIBVIRT_USER: sushyemu
    BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26
    BMAAS_METALLB_POOL_NAME: baremetal
    BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24
    BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64
    BMAAS_NETWORK_NAME: crc-bmaas
    BMAAS_NODE_COUNT: '1'
    BMAAS_OCP_INSTANCE_NAME: crc
    BMAAS_REDFISH_PASSWORD: password
    BMAAS_REDFISH_USERNAME: admin
    BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default
    BMAAS_SUSHY_EMULATOR_DRIVER: libvirt
    BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest
    BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator
    BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml
    BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack
    BMH_NAMESPACE: openstack
    BMO_BRANCH: release-0.9
    BMO_COMMIT_HASH: ''
    BMO_IPA_BRANCH: stable/2024.1
    BMO_IRONIC_HOST: 192.168.122.10
    BMO_PROVISIONING_INTERFACE: ''
    BMO_REPO: https://github.com/metal3-io/baremetal-operator
    BMO_SETUP: ''
    BMO_SETUP_ROUTE_REPLACE: 'true'
    BM_CTLPLANE_INTERFACE: enp1s0
    BM_INSTANCE_MEMORY: '8192'
    BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal
    BM_INSTANCE_NAME_SUFFIX: '0'
    BM_NETWORK_NAME: default
    BM_NODE_COUNT: '1'
    BM_ROOT_PASSWORD: ''
    BM_ROOT_PASSWORD_SECRET: ''
    CEILOMETER_CENTRAL_DEPL_IMG: unused
    CEILOMETER_NOTIFICATION_DEPL_IMG: unused
    CEPH_BRANCH: release-1.15
    CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml
    CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml
    CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml
    CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml
    CEPH_IMG: quay.io/ceph/demo:latest-squid
    CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml
    CEPH_REPO: https://github.com/rook/rook.git
    CERTMANAGER_TIMEOUT: 300s
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    CINDER: config/samples/cinder_v1beta1_cinder.yaml
    CINDERAPI_DEPL_IMG: unused
    CINDERBKP_DEPL_IMG: unused
    CINDERSCH_DEPL_IMG: unused
    CINDERVOL_DEPL_IMG: unused
    CINDER_BRANCH: main
    CINDER_COMMIT_HASH: ''
    CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml
    CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest
    CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml
    CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests
    CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests
    CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git
    CLEANUP_DIR_CMD: rm -Rf
    CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11'
    CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12'
    CRC_HTTPS_PROXY: ''
    CRC_HTTP_PROXY: ''
    CRC_STORAGE_NAMESPACE: crc-storage
    CRC_STORAGE_RETRIES: '3'
    CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz'''
    CRC_VERSION: latest
    DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret
    DATAPLANE_ANSIBLE_USER: ''
    DATAPLANE_COMPUTE_IP: 192.168.122.100
    DATAPLANE_CONTAINER_PREFIX: openstack
    DATAPLANE_CONTAINER_TAG: current-podified
    DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
    DATAPLANE_DEFAULT_GW: 192.168.122.1
    DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null
    DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100%
    DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned
    DATAPLANE_NETWORKER_IP: 192.168.122.200
    DATAPLANE_NETWORK_INTERFACE_NAME: eth0
    DATAPLANE_NOVA_NFS_PATH: ''
    DATAPLANE_NTP_SERVER: pool.ntp.org
    DATAPLANE_PLAYBOOK: osp.edpm.download_cache
    DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9
    DATAPLANE_RUNNER_IMG: ''
    DATAPLANE_SERVER_ROLE: compute
    DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']'
    DATAPLANE_TIMEOUT: 30m
    DATAPLANE_TLS_ENABLED: 'true'
    DATAPLANE_TOTAL_NETWORKER_NODES: '1'
    DATAPLANE_TOTAL_NODES: '1'
    DBSERVICE: galera
    DESIGNATE: config/samples/designate_v1beta1_designate.yaml
    DESIGNATE_BRANCH: main
    DESIGNATE_COMMIT_HASH: ''
    DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml
    DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest
    DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml
    DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests
    DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests
    DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git
    DNSDATA: config/samples/network_v1beta1_dnsdata.yaml
    DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml
    DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml
    DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml
    DNS_DEPL_IMG: unused
    DNS_DOMAIN: localdomain
    DOWNLOAD_TOOLS_SELECTION: all
    EDPM_ATTACH_EXTNET: 'true'
    EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]'''
    EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]'''
    EDPM_COMPUTE_CELLS: '1'
    EDPM_COMPUTE_CEPH_ENABLED: 'true'
    EDPM_COMPUTE_CEPH_NOVA: 'true'
    EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true'
    EDPM_COMPUTE_SRIOV_ENABLED: 'true'
    EDPM_COMPUTE_SUFFIX: '0'
    EDPM_CONFIGURE_DEFAULT_ROUTE: 'true'
    EDPM_CONFIGURE_HUGEPAGES: 'false'
    EDPM_CONFIGURE_NETWORKING: 'true'
    EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra
    EDPM_NETWORKER_SUFFIX: '0'
    EDPM_TOTAL_NETWORKERS: '1'
    EDPM_TOTAL_NODES: '1'
    GALERA_REPLICAS: ''
    GENERATE_SSH_KEYS: 'true'
    GIT_CLONE_OPTS: ''
    GLANCE: config/samples/glance_v1beta1_glance.yaml
    GLANCEAPI_DEPL_IMG: unused
    GLANCE_BRANCH: main
    GLANCE_COMMIT_HASH: ''
    GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml
    GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest
    GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml
    GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests
    GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests
    GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git
    HEAT: config/samples/heat_v1beta1_heat.yaml
    HEATAPI_DEPL_IMG: unused
    HEATCFNAPI_DEPL_IMG: unused
    HEATENGINE_DEPL_IMG: unused
    HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0
    HEAT_BRANCH: main
    HEAT_COMMIT_HASH: ''
    HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml
    HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest
    HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml
    HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests
    HEAT_KUTTL_NAMESPACE: heat-kuttl-tests
    HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git
    HEAT_SERVICE_ENABLED: 'true'
    HORIZON: config/samples/horizon_v1beta1_horizon.yaml
    HORIZON_BRANCH: main
    HORIZON_COMMIT_HASH: ''
    HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml
    HORIZON_DEPL_IMG: unused
    HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest
    HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml
    HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests
    HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests
    HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git
    INFRA_BRANCH: main
    INFRA_COMMIT_HASH: ''
    INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest
    INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml
    INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests
    INFRA_KUTTL_NAMESPACE: infra-kuttl-tests
    INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git
    INSTALL_CERT_MANAGER: 'true'
    INSTALL_NMSTATE: true || false
    INSTALL_NNCP: true || false
    INTERNALAPI_HOST_ROUTES: ''
    IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24
    IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64
    IPV6_LAB_LIBVIRT_STORAGE_POOL: default
    IPV6_LAB_MANAGE_FIREWALLD: 'true'
    IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24
    IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64
    IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router
    IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64
    IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24
    IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1
    IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3
    IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96
    IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false'
    IPV6_LAB_NETWORK_NAME: nat64
    IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48
    IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11
    IPV6_LAB_SNO_HOST_PREFIX: '64'
    IPV6_LAB_SNO_INSTANCE_NAME: sno
    IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64
    IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp
    IPV6_LAB_SNO_OCP_VERSION: latest-4.14
    IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112
    IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub
    IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab
    IRONIC: config/samples/ironic_v1beta1_ironic.yaml
    IRONICAPI_DEPL_IMG: unused
    IRONICCON_DEPL_IMG: unused
    IRONICINS_DEPL_IMG: unused
    IRONICNAG_DEPL_IMG: unused
    IRONICPXE_DEPL_IMG: unused
    IRONIC_BRANCH: main
    IRONIC_COMMIT_HASH: ''
    IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml
    IRONIC_IMAGE_TAG: release-24.1
    IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest
    IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml
    IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests
    IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests
    IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git
    KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml
    KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml
    KEYSTONEAPI_DEPL_IMG: unused
    KEYSTONE_BRANCH: main
    KEYSTONE_COMMIT_HASH: ''
    KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f
    KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack
    KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest
    KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml
    KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests
    KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests
    KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git
    KUBEADMIN_PWD: '12345678'
    LIBVIRT_SECRET: libvirt-secret
    LOKI_DEPLOY_MODE: openshift-network
    LOKI_DEPLOY_NAMESPACE: netobserv
    LOKI_DEPLOY_SIZE: 1x.demo
    LOKI_NAMESPACE: openshift-operators-redhat
    LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki
    LOKI_SUBSCRIPTION: loki-operator
    LVMS_CR: '1'
    MANILA: config/samples/manila_v1beta1_manila.yaml
    MANILAAPI_DEPL_IMG: unused
    MANILASCH_DEPL_IMG: unused
    MANILASHARE_DEPL_IMG: unused
    MANILA_BRANCH: main
    MANILA_COMMIT_HASH: ''
    MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml
    MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest
    MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml
    MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests
    MANILA_KUTTL_NAMESPACE: manila-kuttl-tests
    MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git
    MANILA_SERVICE_ENABLED: 'true'
    MARIADB: config/samples/mariadb_v1beta1_galera.yaml
    MARIADB_BRANCH: main
    MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml
    MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests
    MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests
    MARIADB_COMMIT_HASH: ''
    MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml
    MARIADB_DEPL_IMG: unused
    MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest
    MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml
    MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests
    MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests
    MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git
    MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml
    MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml
    MEMCACHED_DEPL_IMG: unused
    METADATA_SHARED_SECRET: '1234567842'
    METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90
    METALLB_POOL: 192.168.122.80-192.168.122.90
    MICROSHIFT: '0'
    NAMESPACE: openstack
    NETCONFIG: config/samples/network_v1beta1_netconfig.yaml
    NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml
    NETCONFIG_DEPL_IMG: unused
    NETOBSERV_DEPLOY_NAMESPACE: netobserv
    NETOBSERV_NAMESPACE: openshift-netobserv-operator
    NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net
    NETOBSERV_SUBSCRIPTION: netobserv-operator
    NETWORK_BGP: 'false'
    NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0
    NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0
    NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0
    NETWORK_ISOLATION: 'true'
    NETWORK_ISOLATION_INSTANCE_NAME: crc
    NETWORK_ISOLATION_IPV4: 'true'
    NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24
    NETWORK_ISOLATION_IPV4_NAT: 'true'
    NETWORK_ISOLATION_IPV6: 'false'
    NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64
    NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10
    NETWORK_ISOLATION_MAC: '52:54:00:11:11:10'
    NETWORK_ISOLATION_NETWORK_NAME: net-iso
    NETWORK_ISOLATION_NET_NAME: default
    NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true'
    NETWORK_MTU: '1500'
    NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0
    NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0
    NETWORK_STORAGE_MACVLAN: ''
    NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0
    NETWORK_VLAN_START: '20'
    NETWORK_VLAN_STEP: '1'
    NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml
    NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml
    NEUTRONAPI_DEPL_IMG: unused
    NEUTRON_BRANCH: main
    NEUTRON_COMMIT_HASH: ''
    NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest
    NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml
    NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests
    NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests
    NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git
    NFS_HOME: /home/nfs
    NMSTATE_NAMESPACE: openshift-nmstate
    NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8
    NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator
    NNCP_ADDITIONAL_HOST_ROUTES: ''
    NNCP_BGP_1_INTERFACE: enp7s0
    NNCP_BGP_1_IP_ADDRESS: 100.65.4.2
    NNCP_BGP_2_INTERFACE: enp8s0
    NNCP_BGP_2_IP_ADDRESS: 100.64.4.2
    NNCP_BRIDGE: ospbr
    NNCP_CLEANUP_TIMEOUT: 120s
    NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::'
    NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10'
    NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122
    NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10'
    NNCP_DNS_SERVER: 192.168.122.1
    NNCP_DNS_SERVER_IPV6: fd00:aaaa::1
    NNCP_GATEWAY: 192.168.122.1
    NNCP_GATEWAY_IPV6: fd00:aaaa::1
    NNCP_INTERFACE: enp6s0
    NNCP_NODES: ''
    NNCP_TIMEOUT: 240s
    NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml
    NOVA_BRANCH: main
    NOVA_COMMIT_HASH: ''
    NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml
    NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest
    NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git
    NUMBER_OF_INSTANCES: '1'
    OCP_NETWORK_NAME: crc
    OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml
    OCTAVIA_BRANCH: main
    OCTAVIA_COMMIT_HASH: ''
    OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml
    OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest
    OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml
    OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests
    OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests
    OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git
    OKD: 'false'
    OPENSTACK_BRANCH: main
    OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest
    OPENSTACK_COMMIT_HASH: ''
    OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
    OPENSTACK_CRDS_DIR: openstack_crds
    OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
    OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest
    OPENSTACK_K8S_BRANCH: main
    OPENSTACK_K8S_TAG: latest
    OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml
    OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests
    OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests
    OPENSTACK_NEUTRON_CUSTOM_CONF: ''
    OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git
    OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest
    OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator
    OPERATOR_CHANNEL: ''
    OPERATOR_NAMESPACE: openstack-operators
    OPERATOR_SOURCE: ''
    OPERATOR_SOURCE_NAMESPACE: ''
    OUT: /home/zuul/ci-framework-data/artifacts/manifests
    OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
    OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml
    OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml
    OVNCONTROLLER_NMAP: 'true'
    OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml
    OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml
    OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml
    OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml
    OVN_BRANCH: main
    OVN_COMMIT_HASH: ''
    OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest
    OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml
    OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests
    OVN_KUTTL_NAMESPACE: ovn-kuttl-tests
    OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git
    PASSWORD: '12345678'
    PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml
    PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml
    PLACEMENTAPI_DEPL_IMG: unused
    PLACEMENT_BRANCH: main
    PLACEMENT_COMMIT_HASH: ''
    PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest
    PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml
    PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests
    PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests
    PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git
    PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt
    RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml
    RABBITMQ_BRANCH: patches
    RABBITMQ_COMMIT_HASH: ''
    RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml
    RABBITMQ_DEPL_IMG: unused
    RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest
    RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git
    REDHAT_OPERATORS: 'false'
    REDIS: config/samples/redis_v1beta1_redis.yaml
    REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml
    REDIS_DEPL_IMG: unused
    RH_REGISTRY_PWD: ''
    RH_REGISTRY_USER: ''
    SECRET: osp-secret
    SG_CORE_DEPL_IMG: unused
    STANDALONE_COMPUTE_DRIVER: libvirt
    STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0
    STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0
    STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0
    STANDALONE_STORAGE_NET_PREFIX: 172.18.0
    STANDALONE_TENANT_NET_PREFIX: 172.19.0
    STORAGEMGMT_HOST_ROUTES: ''
    STORAGE_CLASS: local-storage
    STORAGE_HOST_ROUTES: ''
    SWIFT: config/samples/swift_v1beta1_swift.yaml
    SWIFT_BRANCH: main
    SWIFT_COMMIT_HASH: ''
    SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml
    SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest
    SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml
    SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests
    SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests
    SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git
    TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml
    TELEMETRY_BRANCH: main
    TELEMETRY_COMMIT_HASH: ''
    TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml
    TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest
    TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator
    TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml
    TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests
    TELEMETRY_KUTTL_RELPATH: tests/kuttl/suites
    TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git
    TENANT_HOST_ROUTES: ''
    TIMEOUT: 300s
    TLS_ENABLED: 'false'
    tripleo_deploy: 'export REGISTRY_USER:'
cifmw_install_yamls_environment:
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig
    OPENSTACK_K8S_BRANCH: main
    OUT: /home/zuul/ci-framework-data/artifacts/manifests
    OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
 encoding: base64 failed: false invocation: module_args: src: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml item: install-yamls-params.yml source: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml - ansible_loop_var: item changed: false content: Y2lmbXdfb3BlbnNoaWZ0X2FwaTogaHR0cHM6Ly9hcGkuY3JjLnRlc3Rpbmc6NjQ0MwpjaWZtd19vcGVuc2hpZnRfY29udGV4dDogZGVmYXVsdC9hcGktY3JjLXRlc3Rpbmc6NjQ0My9rdWJlYWRtaW4KY2lmbXdfb3BlbnNoaWZ0X2t1YmVjb25maWc6IC9ob21lL3p1dWwvLmNyYy9tYWNoaW5lcy9jcmMva3ViZWNvbmZpZwpjaWZtd19vcGVuc2hpZnRfdG9rZW46IHNoYTI1Nn5fTDlvRHRHQ2ExUXc3ME80TEdXM01EZVo4b1U2MFR4Q3ZZa1pKaXEyMk9zCmNpZm13X29wZW5zaGlmdF91c2VyOiBrdWJlYWRtaW4K encoding: base64 failed: false invocation: module_args: src: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml item: openshift-login-params.yml source: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml - ansible_loop_var: item changed: false content: cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw
cifmw_deploy_edpm: false
cifmw_dlrn_report_result: false
cifmw_extras:
- '@scenarios/centos-9/multinode-ci.yml'
- '@scenarios/centos-9/horizon.yml'
cifmw_openshift_api: api.crc.testing:6443
cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig'
cifmw_openshift_password: '123456789'
cifmw_openshift_skip_tls_verify: true
cifmw_openshift_user: kubeadmin
cifmw_run_tests: false
cifmw_use_libvirt: false
cifmw_zuul_target_host: controller
crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''')
    }}'
crc_ci_bootstrap_networking:
    instances:
        controller:
            networks:
                default:
                    ip: 192.168.122.11
        crc:
            networks:
                default:
                    ip: 192.168.122.10
                internal-api:
                    ip: 172.17.0.5
                storage:
                    ip: 172.18.0.5
                tenant:
                    ip: 172.19.0.5
    networks:
        default:
            mtu: 1500
            range: 192.168.122.0/24
        internal-api:
            range: 172.17.0.0/24
            vlan: 20
        storage:
            range: 172.18.0.0/24
            vlan: 21
        tenant:
            range: 172.19.0.0/24
            vlan: 22
enable_ramdisk: true
podified_validation: true
push_registry: quay.rdoproject.org
quay_login_secret_name: quay_nextgen_zuulgithubci
registry_login_enabled: true
scenario: local_build-index_deploy
zuul:
    _inheritance_path:
    - '<Job base-minimal branches: None source: config/zuul.d/jobs.yaml@master#24>'
    - '<Job base-crc-cloud branches: None source: config/zuul.d/_jobs-crc.yaml@master#235>'
    - '<Job cifmw-podified-multinode-edpm-base-crc branches: None source: openstack-k8s-operators/ci-framework/zuul.d/base.yaml@main#123>'
    - '<Job podified-multinode-edpm-deployment-crc branches: None source: openstack-k8s-operators/ci-framework/zuul.d/edpm_multinode.yaml@main#317>'
    - '<Job stf-base-2node branches: {MatchAny:{ImpliedBranchMatcher:master}} source:
        infrawatch/service-telemetry-operator/.zuul.yaml@master#18>'
    - '<Job stf-base branches: {MatchAny:{ImpliedBranchMatcher:master}} source: infrawatch/service-telemetry-operator/.zuul.yaml@master#72>'
    - '<Job stf-crc-local_build-index_deploy branches: {MatchAny:{ImpliedBranchMatcher:master}}
        source: infrawatch/service-telemetry-operator/.zuul.yaml@master#118>'
    - '<Job stf-crc-ocp_416-local_build-index_deploy branches: {MatchAny:{ImpliedBranchMatcher:master}}
        source: infrawatch/service-telemetry-operator/.zuul.yaml@master#173>'
    - '<Job stf-crc-ocp_416-local_build-index_deploy branches: None source: infrawatch/service-telemetry-operator/.zuul.yaml@master#230>'
    ansible_version: '8'
    attempts: 1
    branch: master
    build: 88dd6c905f2746688a8f680e3012c758
    build_refs:
    -   branch: master
        change_url: https://github.com/infrawatch/service-telemetry-operator
        project:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            name: infrawatch/service-telemetry-operator
            short_name: service-telemetry-operator
        src_dir: src/github.com/infrawatch/service-telemetry-operator
    buildset: 8637980fa8664cc3a54deb4b258e06d7
    buildset_refs:
    -   branch: master
        change_url: https://github.com/infrawatch/service-telemetry-operator
        project:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            name: infrawatch/service-telemetry-operator
            short_name: service-telemetry-operator
        src_dir: src/github.com/infrawatch/service-telemetry-operator
    change_url: https://github.com/infrawatch/service-telemetry-operator
    child_jobs: []
    event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8
    executor:
        hostname: ze01.softwarefactory-project.io
        inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml
        log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs
        result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json
        src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src
        work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work
    items:
    -   branch: master
        change_url: https://github.com/infrawatch/service-telemetry-operator
        project:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            name: infrawatch/service-telemetry-operator
            short_name: service-telemetry-operator
            src_dir: src/github.com/infrawatch/service-telemetry-operator
    job: stf-crc-ocp_416-local_build-index_deploy
    jobtags: []
    max_attempts: 1
    pipeline: periodic
    playbook_context:
        playbook_projects:
            trusted/project_0/review.rdoproject.org/config:
                canonical_name: review.rdoproject.org/config
                checkout: master
                commit: 381c86678f470a5590d19274a2eb914e95b81bb7
            trusted/project_1/opendev.org/zuul/zuul-jobs:
                canonical_name: opendev.org/zuul/zuul-jobs
                checkout: master
                commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df
            trusted/project_2/review.rdoproject.org/rdo-jobs:
                canonical_name: review.rdoproject.org/rdo-jobs
                checkout: master
                commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4
            trusted/project_3/github.com/openstack-k8s-operators/ci-framework:
                canonical_name: github.com/openstack-k8s-operators/ci-framework
                checkout: main
                commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b
            untrusted/project_0/github.com/openstack-k8s-operators/ci-framework:
                canonical_name: github.com/openstack-k8s-operators/ci-framework
                checkout: main
                commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b
            untrusted/project_1/review.rdoproject.org/config:
                canonical_name: review.rdoproject.org/config
                checkout: master
                commit: 381c86678f470a5590d19274a2eb914e95b81bb7
            untrusted/project_2/opendev.org/zuul/zuul-jobs:
                canonical_name: opendev.org/zuul/zuul-jobs
                checkout: master
                commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df
            untrusted/project_3/review.rdoproject.org/rdo-jobs:
                canonical_name: review.rdoproject.org/rdo-jobs
                checkout: master
                commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4
            untrusted/project_4/github.com/infrawatch/service-telemetry-operator:
                canonical_name: github.com/infrawatch/service-telemetry-operator
                checkout: master
                commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72
        playbooks:
        -   path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml
            roles:
            -   checkout: master
                checkout_description: playbook branch
                link_name: ansible/playbook_0/role_0/service-telemetry-operator
                link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator
                role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles
            -   checkout: main
                checkout_description: project override ref
                link_name: ansible/playbook_0/role_1/ci-framework
                link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework
                role_path: ansible/playbook_0/role_1/ci-framework/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_0/role_2/config
                link_target: untrusted/project_1/review.rdoproject.org/config
                role_path: ansible/playbook_0/role_2/config/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_0/role_3/zuul-jobs
                link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs
                role_path: ansible/playbook_0/role_3/zuul-jobs/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_0/role_4/rdo-jobs
                link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs
                role_path: ansible/playbook_0/role_4/rdo-jobs/roles
        -   path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml
            roles:
            -   checkout: master
                checkout_description: playbook branch
                link_name: ansible/playbook_1/role_0/service-telemetry-operator
                link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator
                role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles
            -   checkout: main
                checkout_description: project override ref
                link_name: ansible/playbook_1/role_1/ci-framework
                link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework
                role_path: ansible/playbook_1/role_1/ci-framework/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_1/role_2/config
                link_target: untrusted/project_1/review.rdoproject.org/config
                role_path: ansible/playbook_1/role_2/config/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_1/role_3/zuul-jobs
                link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs
                role_path: ansible/playbook_1/role_3/zuul-jobs/roles
            -   checkout: master
                checkout_description: zuul branch
                link_name: ansible/playbook_1/role_4/rdo-jobs
                link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs
                role_path: ansible/playbook_1/role_4/rdo-jobs/roles
    post_review: true
    project:
        canonical_hostname: github.com
        canonical_name: github.com/infrawatch/service-telemetry-operator
        name: infrawatch/service-telemetry-operator
        short_name: service-telemetry-operator
        src_dir: src/github.com/infrawatch/service-telemetry-operator
    projects:
        github.com/crc-org/crc-cloud:
            canonical_hostname: github.com
            canonical_name: github.com/crc-org/crc-cloud
            checkout: main
            checkout_description: project override ref
            commit: f6ed2f2d118884a075895bbf954ff6000e540430
            name: crc-org/crc-cloud
            required: true
            short_name: crc-cloud
            src_dir: src/github.com/crc-org/crc-cloud
        github.com/infrawatch/prometheus-webhook-snmp:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/prometheus-webhook-snmp
            checkout: master
            checkout_description: zuul branch
            commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895
            name: infrawatch/prometheus-webhook-snmp
            required: true
            short_name: prometheus-webhook-snmp
            src_dir: src/github.com/infrawatch/prometheus-webhook-snmp
        github.com/infrawatch/service-telemetry-operator:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/service-telemetry-operator
            checkout: master
            checkout_description: zuul branch
            commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72
            name: infrawatch/service-telemetry-operator
            required: true
            short_name: service-telemetry-operator
            src_dir: src/github.com/infrawatch/service-telemetry-operator
        github.com/infrawatch/sg-bridge:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/sg-bridge
            checkout: master
            checkout_description: zuul branch
            commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2
            name: infrawatch/sg-bridge
            required: true
            short_name: sg-bridge
            src_dir: src/github.com/infrawatch/sg-bridge
        github.com/infrawatch/sg-core:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/sg-core
            checkout: master
            checkout_description: zuul branch
            commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6
            name: infrawatch/sg-core
            required: true
            short_name: sg-core
            src_dir: src/github.com/infrawatch/sg-core
        github.com/infrawatch/smart-gateway-operator:
            canonical_hostname: github.com
            canonical_name: github.com/infrawatch/smart-gateway-operator
            checkout: master
            checkout_description: zuul branch
            commit: 2ff5b96b6254418d20a509188eea72ab2c77839c
            name: infrawatch/smart-gateway-operator
            required: true
            short_name: smart-gateway-operator
            src_dir: src/github.com/infrawatch/smart-gateway-operator
        github.com/openstack-k8s-operators/ci-framework:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/ci-framework
            checkout: main
            checkout_description: project override ref
            commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b
            name: openstack-k8s-operators/ci-framework
            required: true
            short_name: ci-framework
            src_dir: src/github.com/openstack-k8s-operators/ci-framework
        github.com/openstack-k8s-operators/dataplane-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/dataplane-operator
            checkout: main
            checkout_description: project override ref
            commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2
            name: openstack-k8s-operators/dataplane-operator
            required: true
            short_name: dataplane-operator
            src_dir: src/github.com/openstack-k8s-operators/dataplane-operator
        github.com/openstack-k8s-operators/edpm-ansible:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/edpm-ansible
            checkout: main
            checkout_description: project default branch
            commit: 95aa63de3182faad63a69301d101debad3efc936
            name: openstack-k8s-operators/edpm-ansible
            required: true
            short_name: edpm-ansible
            src_dir: src/github.com/openstack-k8s-operators/edpm-ansible
        github.com/openstack-k8s-operators/infra-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/infra-operator
            checkout: main
            checkout_description: project override ref
            commit: 63860ee1375c38462801e8341a7f18335169f94c
            name: openstack-k8s-operators/infra-operator
            required: true
            short_name: infra-operator
            src_dir: src/github.com/openstack-k8s-operators/infra-operator
        github.com/openstack-k8s-operators/install_yamls:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/install_yamls
            checkout: main
            checkout_description: project default branch
            commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8
            name: openstack-k8s-operators/install_yamls
            required: true
            short_name: install_yamls
            src_dir: src/github.com/openstack-k8s-operators/install_yamls
        github.com/openstack-k8s-operators/openstack-baremetal-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator
            checkout: master
            checkout_description: zuul branch
            commit: a333e57066b1d48e41f93af68be81188290a96b3
            name: openstack-k8s-operators/openstack-baremetal-operator
            required: true
            short_name: openstack-baremetal-operator
            src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator
        github.com/openstack-k8s-operators/openstack-must-gather:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/openstack-must-gather
            checkout: main
            checkout_description: project override ref
            commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096
            name: openstack-k8s-operators/openstack-must-gather
            required: true
            short_name: openstack-must-gather
            src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather
        github.com/openstack-k8s-operators/openstack-operator:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/openstack-operator
            checkout: main
            checkout_description: project override ref
            commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71
            name: openstack-k8s-operators/openstack-operator
            required: true
            short_name: openstack-operator
            src_dir: src/github.com/openstack-k8s-operators/openstack-operator
        github.com/openstack-k8s-operators/repo-setup:
            canonical_hostname: github.com
            canonical_name: github.com/openstack-k8s-operators/repo-setup
            checkout: main
            checkout_description: project default branch
            commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f
            name: openstack-k8s-operators/repo-setup
            required: true
            short_name: repo-setup
            src_dir: src/github.com/openstack-k8s-operators/repo-setup
        opendev.org/zuul/zuul-jobs:
            canonical_hostname: opendev.org
            canonical_name: opendev.org/zuul/zuul-jobs
            checkout: master
            checkout_description: zuul branch
            commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df
            name: zuul/zuul-jobs
            required: true
            short_name: zuul-jobs
            src_dir: src/opendev.org/zuul/zuul-jobs
        review.rdoproject.org/config:
            canonical_hostname: review.rdoproject.org
            canonical_name: review.rdoproject.org/config
            checkout: master
            checkout_description: zuul branch
            commit: 381c86678f470a5590d19274a2eb914e95b81bb7
            name: config
            required: true
            short_name: config
            src_dir: src/review.rdoproject.org/config
    ref: refs/heads/master
    resources: {}
    tenant: rdoproject.org
    timeout: 3600
    voting: true
zuul_log_collection: true
 encoding: base64 failed: false invocation: module_args: src: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml item: zuul-params.yml source: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml skipped: false ansible_all_ipv4_addresses: - 38.102.83.214 ansible_all_ipv6_addresses: - fe80::f816:3eff:fec1:afa3 ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_config_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2025-10-13' day: '13' epoch: '1760314997' epoch_int: '1760314997' hour: '00' iso8601: '2025-10-13T00:23:17Z' iso8601_basic: 20251013T002317274459 iso8601_basic_short: 20251013T002317 iso8601_micro: '2025-10-13T00:23:17.274459Z' minute: '23' month: '10' second: '17' time: 00:23:17 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' ansible_default_ipv4: address: 38.102.83.214 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:c1:af:a3 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-10-13-00-00-56-00 vda1: - 9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-00-56-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '0' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 9839e2e1-98a2-4594-b609-79d514deb0a3 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions 
--show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 41650 22 SSH_CONNECTION: 38.102.83.114 41650 38.102.83.214 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.214 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fec1:afa3 prefix: '64' scope: link macaddress: fa:16:3e:c1:af:a3 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.214 all_ipv6_addresses: - fe80::f816:3eff:fec1:afa3 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cifmw_path: 
/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-10-13T00:10:11Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f hardware_offload_type: null hints: '' id: 6c016c0a-2ac9-4869-a5a8-eab3fa182c1c ip_allocation: immediate mac_address: fa:16:3e:4e:a0:95 name: crc-c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-10-13T00:10:11Z' crc_ci_bootstrap_network_name: zuul-ci-net-88dd6c90 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:a7:cc:09 mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:4e:a0:95 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:1d:a0:a2 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:bb:8f:d3 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:30:89:5a mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:28Z' description: '' dns_domain: '' id: d01062d3-7b21-4be9-857c-4aa990cef4db ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-88dd6c90 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-10-13T00:09:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:35Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.199 
subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: cf00ba68-55fa-4056-9b58-bb065c0601bb name: zuul-ci-subnet-router-88dd6c90 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-10-13T00:09:37Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-10-13T00:09:32Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-88dd6c90 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-10-13T00:09:32Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-88dd6c90 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-88dd6c90 date_time: date: '2025-10-13' day: '13' epoch: '1760314997' epoch_int: '1760314997' hour: '00' iso8601: '2025-10-13T00:23:17Z' iso8601_basic: 20251013T002317274459 iso8601_basic_short: 20251013T002317 iso8601_micro: '2025-10-13T00:23:17.274459Z' minute: '23' month: '10' second: '17' time: 00:23:17 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' default_ipv4: address: 38.102.83.214 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:c1:af:a3 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-10-13-00-00-56-00 vda1: - 9839e2e1-98a2-4594-b609-79d514deb0a3 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-00-56-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '0' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 9839e2e1-98a2-4594-b609-79d514deb0a3 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus 
DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 41650 22 SSH_CONNECTION: 38.102.83.114 41650 38.102.83.214 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.214 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fec1:afa3 prefix: '64' scope: link macaddress: fa:16:3e:c1:af:a3 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 interfaces: - lo - eth0 is_chroot: false iscsi_iqn: '' kernel: 5.14.0-621.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] 
hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.01 1m: 0.08 5m: 0.02 locally_reachable_ips: ipv4: - 38.102.83.214 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fec1:afa3 lsb: {} lvm: N/A machine: x86_64 machine_id: a1727ec20198bc6caf436a6e13c4ff5e memfree_mb: 7266 memory_mb: nocache: free: 7417 used: 263 real: free: 7266 total: 7680 used: 414 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7680 module_setup: true mounts: - block_available: 20378918 block_size: 4096 block_total: 20954875 block_used: 575957 device: /dev/vda1 fstype: xfs inode_available: 41888289 inode_total: 41942512 inode_used: 54223 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83472048128 size_total: 85831168000 uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.2.1 python: executable: 
/usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 23 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 23 - final - 0 python_version: 3.9.23 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxwvvCYwnIDtxKxVxyDCUXhYuWEo+WsGS1jEd+Im13VpWuXa7IQrDvjmuO0jn8/KspLpldlXZAyvPIi9+nNvkk= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIB1/unp9+ffn2cxr1RyLKXm2uZfT+tLfIHwoS/yhV9RG ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCtwQO/sn8zPSCivURPoL3DNUpFgI+Y/GknmWIW+/QsvlCk4sBWYiqOXubpbETP/ZuHnkt6w69huALW3iVln/6SdW9iz2mhr8+AHVAee6i3GRdpOWbUDuatQDsdRX3GWxhJ3iR4Q2CrLL9cuJIayVmHepeTrUt2AaPBwcRw7Or+VinGX/9nIUQRguvXHv3VeRUX003jI5B9xUO/6vZ99+ClMMpZPbhLqdLZnuKoLA9loqq6szVShReR3fCZNDH8FKZzjIFfFaj9uDgDfIB3iBKtQdr0HfSSF8CQ2A6o/P43FG9/w7Is3QQidH997QhMNrRNzbrNvgA8vgwi6qIkjFwYBO0O9VnlS1Fux4NG570chg5FmrtGWKGKAHxWuCm4zLuUAJWzw/gxVcPemOJlmIxbGIo/YMT0VgPQzbjFTxGehUhba1ncNNDyH8Cu7FHUbuX6pr6RWksUx+dixeBtFBjGlUg44pJZ+4I9XrHXTwLpBs3GXSUxi0gkQT182Xt8jyE= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 424 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - service-telemetry-operator ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: controller ansible_host: 38.102.83.214 ansible_hostname: controller ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 ansible_interfaces: - lo - eth0 ansible_inventory_sources: - /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/inventory.yaml ansible_is_chroot: false ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-621.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 
'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.01 1m: 0.08 5m: 0.02 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.214 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fec1:afa3 ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: a1727ec20198bc6caf436a6e13c4ff5e ansible_memfree_mb: 7266 ansible_memory_mb: nocache: free: 7417 used: 263 real: free: 7266 total: 7680 used: 414 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 7680 ansible_mounts: - block_available: 20378918 block_size: 4096 block_total: 20954875 block_used: 575957 device: /dev/vda1 fstype: xfs inode_available: 41888289 inode_total: 41942512 inode_used: 54223 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83472048128 size_total: 85831168000 uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_nodename: controller ansible_os_family: RedHat ansible_pkg_mgr: dnf ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 8 ansible_processor_nproc: 8 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 8 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.2.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 23 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 23 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.23 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o 
PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxwvvCYwnIDtxKxVxyDCUXhYuWEo+WsGS1jEd+Im13VpWuXa7IQrDvjmuO0jn8/KspLpldlXZAyvPIi9+nNvkk= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIB1/unp9+ffn2cxr1RyLKXm2uZfT+tLfIHwoS/yhV9RG ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCtwQO/sn8zPSCivURPoL3DNUpFgI+Y/GknmWIW+/QsvlCk4sBWYiqOXubpbETP/ZuHnkt6w69huALW3iVln/6SdW9iz2mhr8+AHVAee6i3GRdpOWbUDuatQDsdRX3GWxhJ3iR4Q2CrLL9cuJIayVmHepeTrUt2AaPBwcRw7Or+VinGX/9nIUQRguvXHv3VeRUX003jI5B9xUO/6vZ99+ClMMpZPbhLqdLZnuKoLA9loqq6szVShReR3fCZNDH8FKZzjIFfFaj9uDgDfIB3iBKtQdr0HfSSF8CQ2A6o/P43FG9/w7Is3QQidH997QhMNrRNzbrNvgA8vgwi6qIkjFwYBO0O9VnlS1Fux4NG570chg5FmrtGWKGKAHxWuCm4zLuUAJWzw/gxVcPemOJlmIxbGIo/YMT0VgPQzbjFTxGehUhba1ncNNDyH8Cu7FHUbuX6pr6RWksUx+dixeBtFBjGlUg44pJZ+4I9XrHXTwLpBs3GXSUxi0gkQT182Xt8jyE= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 424 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_basedir: /home/zuul/ci-framework-data cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_run_tests: false cifmw_status: changed: false failed: false stat: atime: 1760314913.6008098 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: binary ctime: 1760314917.1628997 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 54551398 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1760314917.1628997 nlink: 21 path: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 4096 uid: 1000 version: '3399238660' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true cifmw_success_flag: changed: false failed: false stat: exists: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: vexxhost crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: 
ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-10-13T00:10:11Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f hardware_offload_type: null hints: '' id: 6c016c0a-2ac9-4869-a5a8-eab3fa182c1c ip_allocation: immediate mac_address: fa:16:3e:4e:a0:95 name: crc-c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-10-13T00:10:11Z' crc_ci_bootstrap_network_name: zuul-ci-net-88dd6c90 crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:a7:cc:09 mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:4e:a0:95 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:1d:a0:a2 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:bb:8f:d3 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:30:89:5a mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:28Z' description: '' dns_domain: '' id: d01062d3-7b21-4be9-857c-4aa990cef4db ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-88dd6c90 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-10-13T00:09:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:35Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.199 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: cf00ba68-55fa-4056-9b58-bb065c0601bb name: 
zuul-ci-subnet-router-88dd6c90 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-10-13T00:09:37Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-10-13T00:09:32Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-88dd6c90 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-10-13T00:09:32Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-88dd6c90 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-88dd6c90 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true gather_subset: - min group_names: - ungrouped groups: all: - controller - crc ungrouped: *id001 zuul_unreachable: [] inventory_dir: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1 inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/inventory.yaml inventory_hostname: controller inventory_hostname_short: controller logfiles_dest_dir: /home/zuul/ci-framework-data/logs/2025-10-13_00-23 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f100786d-5d49-4bb6-bacc-f81c832a6dc3 host_id: ff62aecd09b85709a233d3330c1581c31f2fa23cd3c1cbc3ffcedd62 interface_ip: 38.102.83.214 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.214 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.214 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__00f35d1f62e54e7243096aca394c8bcbb8907683 param_dir: changed: false failed: false stat: atime: 1760314867.5046456 attr_flags: '' attributes: [] block_size: 4096 blocks: 0 charset: binary ctime: 1760314872.6027744 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 146841209 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1760314872.6027744 nlink: 2 path: /home/zuul/ci-framework-data/artifacts/parameters pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 120 uid: 1000 version: '488734240' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true playbook_dir: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.214 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' 
cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f100786d-5d49-4bb6-bacc-f81c832a6dc3 host_id: ff62aecd09b85709a233d3330c1581c31f2fa23cd3c1cbc3ffcedd62 interface_ip: 38.102.83.214 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.214 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.214 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul_log_collection: true zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 88dd6c905f2746688a8f680e3012c758 build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: 8637980fa8664cc3a54deb4b258e06d7 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8 executor: hostname: ze01.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-local_build-index_deploy jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df 
trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - 
checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: f6ed2f2d118884a075895bbf954ff6000e540430 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 2ff5b96b6254418d20a509188eea72ab2c77839c name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: 
github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 95aa63de3182faad63a69301d101debad3efc936 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 63860ee1375c38462801e8341a7f18335169f94c name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096 name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 381c86678f470a5590d19274a2eb914e95b81bb7 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_change_list: - 
service-telemetry-operator zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '1' zuul_execution_trusted: 'False' zuul_log_collection: true zuul_success: 'False' zuul_will_retry: 'False' crc: ansible_all_ipv4_addresses: - 38.102.83.180 - 192.168.126.11 ansible_all_ipv6_addresses: - fe80::d7ba:4cf0:1b1a:2eaa ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_br_ex: active: true device: br-ex features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.180 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::d7ba:4cf0:1b1a:2eaa prefix: '64' scope: link macaddress: fa:16:3e:c3:15:08 mtu: 1500 promisc: true timestamping: [] type: ether ansible_br_int: active: false device: br-int features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] 
rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 4e:ec:11:72:80:3b mtu: 1400 promisc: true timestamping: [] type: ether ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' ansible_config_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2025-10-13' day: '13' epoch: '1760314121' epoch_int: '1760314121' hour: '00' iso8601: '2025-10-13T00:08:41Z' iso8601_basic: 20251013T000841742955 iso8601_basic_short: 20251013T000841 iso8601_micro: '2025-10-13T00:08:41.742955Z' minute: 08 month: '10' second: '41' time: 00:08:41 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' ansible_default_ipv4: address: 38.102.83.180 alias: br-ex broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: br-ex macaddress: fa:16:3e:c3:15:08 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 vda2: - EFI-SYSTEM vda3: - boot vda4: - root masters: {} uuids: sr0: - 2025-10-13-00-07-48-00 vda2: - 7B77-95E7 vda3: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 ansible_devices: sr0: holders: [] host: 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-07-48-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '0' vendor: QEMU virtual: 1 
vda: holders: [] host: 'SCSI storage controller: Red Hat, Inc. Virtio block device' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: [] sectors: '2048' sectorsize: 512 size: 1.00 MB start: '2048' uuid: null vda2: holders: [] links: ids: [] labels: - EFI-SYSTEM masters: [] uuids: - 7B77-95E7 sectors: '260096' sectorsize: 512 size: 127.00 MB start: '4096' uuid: 7B77-95E7 vda3: holders: [] links: ids: [] labels: - boot masters: [] uuids: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 sectors: '786432' sectorsize: 512 size: 384.00 MB start: '264192' uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: holders: [] links: ids: [] labels: - root masters: [] uuids: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 sectors: '166721503' sectorsize: 512 size: 79.50 GB start: '1050624' uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '419430400' sectorsize: '512' size: 200.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: RedHat ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/redhat-release ansible_distribution_file_search_string: Red Hat ansible_distribution_file_variety: RedHat ansible_distribution_major_version: '4' ansible_distribution_release: NA ansible_distribution_version: '4.16' ansible_dns: nameservers: - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_ens3: active: true device: ens3 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: 
off [fixed] hw_timestamp_filters: [] macaddress: fa:16:3e:c3:15:08 module: virtio_net mtu: 1500 pciid: virtio1 promisc: true speed: -1 timestamping: [] type: ether ansible_env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus HOME: /var/home/core LANG: C.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: core MOTD_SHOWN: pam PATH: /var/home/core/.local/bin:/var/home/core/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /var/home/core SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 50104 22 SSH_CONNECTION: 38.102.83.114 50104 38.102.83.180 22 USER: core XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '2' XDG_SESSION_TYPE: tty _: /usr/bin/python3.9 which_declare: declare -f ansible_eth10: active: true device: eth10 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 192.168.126.11 broadcast: 192.168.126.255 netmask: 255.255.255.0 network: 192.168.126.0 prefix: '24' macaddress: 8e:3e:50:eb:f3:80 mtu: 1500 promisc: false timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.180 - 192.168.126.11 all_ipv6_addresses: - fe80::d7ba:4cf0:1b1a:2eaa ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA br_ex: active: true device: br-ex features: esp_hw_offload: off [fixed] 
esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.180 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::d7ba:4cf0:1b1a:2eaa prefix: '64' scope: link macaddress: fa:16:3e:c3:15:08 mtu: 1500 promisc: true timestamping: [] type: ether br_int: active: false device: br-int features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' 
tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 4e:ec:11:72:80:3b mtu: 1400 promisc: true timestamping: [] type: ether chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' date_time: date: '2025-10-13' day: '13' epoch: '1760314121' epoch_int: '1760314121' hour: '00' iso8601: '2025-10-13T00:08:41Z' iso8601_basic: 20251013T000841742955 iso8601_basic_short: 20251013T000841 iso8601_micro: '2025-10-13T00:08:41.742955Z' minute: 08 month: '10' second: '41' time: 00:08:41 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' default_ipv4: address: 38.102.83.180 alias: br-ex broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: br-ex macaddress: fa:16:3e:c3:15:08 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 vda2: - EFI-SYSTEM vda3: - boot vda4: - root masters: {} uuids: sr0: - 2025-10-13-00-07-48-00 vda2: - 7B77-95E7 vda3: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 devices: sr0: holders: [] host: 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-07-48-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '0' vendor: QEMU virtual: 1 vda: holders: [] host: 'SCSI storage controller: Red Hat, Inc. 
Virtio block device' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: [] sectors: '2048' sectorsize: 512 size: 1.00 MB start: '2048' uuid: null vda2: holders: [] links: ids: [] labels: - EFI-SYSTEM masters: [] uuids: - 7B77-95E7 sectors: '260096' sectorsize: 512 size: 127.00 MB start: '4096' uuid: 7B77-95E7 vda3: holders: [] links: ids: [] labels: - boot masters: [] uuids: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 sectors: '786432' sectorsize: 512 size: 384.00 MB start: '264192' uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: holders: [] links: ids: [] labels: - root masters: [] uuids: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 sectors: '166721503' sectorsize: 512 size: 79.50 GB start: '1050624' uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '419430400' sectorsize: '512' size: 200.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3.9 distribution: RedHat distribution_file_parsed: true distribution_file_path: /etc/redhat-release distribution_file_search_string: Red Hat distribution_file_variety: RedHat distribution_major_version: '4' distribution_release: NA distribution_version: '4.16' dns: nameservers: - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 ens3: active: true device: ens3 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: fa:16:3e:c3:15:08 module: virtio_net mtu: 1500 pciid: virtio1 promisc: true speed: -1 
timestamping: [] type: ether env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus HOME: /var/home/core LANG: C.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: core MOTD_SHOWN: pam PATH: /var/home/core/.local/bin:/var/home/core/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /var/home/core SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 50104 22 SSH_CONNECTION: 38.102.83.114 50104 38.102.83.180 22 USER: core XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '2' XDG_SESSION_TYPE: tty _: /usr/bin/python3.9 which_declare: declare -f eth10: active: true device: eth10 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 192.168.126.11 broadcast: 192.168.126.255 netmask: 255.255.255.0 network: 192.168.126.0 prefix: '24' macaddress: 8e:3e:50:eb:f3:80 mtu: 1500 promisc: false timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: crc gather_subset: - all hostname: crc hostnqn: nqn.2014-08.org.nvmexpress:uuid:fe28b1dc-f424-4106-9c95-00604d2bcd5f interfaces: - eth10 - ovn-k8s-mp0 - ens3 - br-int - lo - ovs-system - br-ex is_chroot: true iscsi_iqn: iqn.1994-05.com.redhat:24fed7ce643e kernel: 5.14.0-427.22.1.el9_4.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Mon Jun 10 09:23:36 EDT 2024' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' 
generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: on [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.12 1m: 1.41 5m: 0.36 locally_reachable_ips: ipv4: - 38.102.83.180 - 127.0.0.0/8 - 127.0.0.1 - 192.168.126.11 ipv6: - ::1 - fe80::d7ba:4cf0:1b1a:2eaa lsb: {} lvm: N/A machine: x86_64 machine_id: c1bd596843fb445da20eca66471ddf66 memfree_mb: 29434 memory_mb: nocache: free: 30775 used: 1320 real: free: 29434 total: 32095 used: 2661 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 32095 module_setup: true mounts: - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /sysroot options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /etc options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 
68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /usr options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /sysroot/ostree/deploy/rhcos/var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 221344 block_size: 1024 block_total: 358271 block_used: 136927 device: /dev/vda3 fstype: ext4 inode_available: 97936 inode_total: 98304 inode_used: 368 mount: /boot options: ro,seclabel,nosuid,nodev,relatime size_available: 226656256 size_total: 366869504 uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 - block_available: 0 block_size: 2048 block_total: 241 block_used: 241 device: /dev/sr0 fstype: iso9660 inode_available: 0 inode_total: 0 inode_used: 0 mount: /tmp/openstack-config-drive options: ro,relatime,nojoliet,check=s,map=n,blocksize=2048 size_available: 0 size_total: 493568 uuid: 2025-10-13-00-07-48-00 nodename: crc os_family: RedHat ovn_k8s_mp0: active: false device: ovn-k8s-mp0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' 
tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: b6:dc:d9:26:03:d4 mtu: 1400 promisc: true timestamping: [] type: ether ovs_system: active: false device: ovs-system features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: f2:3a:24:88:ca:6e mtu: 1500 promisc: true timestamping: [] type: ether pkg_mgr: atomic_container proc_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor - '8' - AuthenticAMD - AMD EPYC-Rome Processor - '9' - AuthenticAMD - AMD EPYC-Rome Processor - '10' - AuthenticAMD - AMD EPYC-Rome Processor - '11' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 12 processor_nproc: 12 processor_threads_per_core: 1 processor_vcpus: 12 product_name: OpenStack Nova 
product_serial: NA product_uuid: NA product_version: 26.2.1 python: executable: /usr/bin/python3.9 has_sslcontext: true type: cpython version: major: 3 micro: 18 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 18 - final - 0 python_version: 3.9.18 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd services: NetworkManager-clean-initrd-state.service: name: NetworkManager-clean-initrd-state.service source: systemd state: stopped status: enabled NetworkManager-dispatcher.service: name: NetworkManager-dispatcher.service source: systemd state: running status: enabled NetworkManager-wait-online.service: name: NetworkManager-wait-online.service source: systemd state: stopped status: enabled NetworkManager.service: name: NetworkManager.service source: systemd state: running status: enabled afterburn-checkin.service: name: afterburn-checkin.service source: systemd state: stopped status: enabled afterburn-firstboot-checkin.service: name: afterburn-firstboot-checkin.service source: systemd state: stopped status: enabled afterburn-sshkeys@.service: name: afterburn-sshkeys@.service source: systemd state: unknown status: disabled afterburn.service: name: afterburn.service source: systemd state: inactive status: disabled arp-ethers.service: name: arp-ethers.service source: systemd state: inactive status: disabled auditd.service: name: auditd.service source: systemd state: running status: enabled auth-rpcgss-module.service: name: auth-rpcgss-module.service source: systemd state: stopped status: static autovt@.service: name: autovt@.service source: systemd state: unknown status: alias blk-availability.service: name: blk-availability.service source: systemd state: stopped status: disabled bootc-fetch-apply-updates.service: name: bootc-fetch-apply-updates.service source: systemd state: inactive status: static bootkube.service: name: bootkube.service source: systemd state: inactive status: disabled bootupd.service: name: bootupd.service source: systemd state: stopped status: static chrony-wait.service: name: chrony-wait.service source: systemd state: inactive status: disabled chronyd-restricted.service: name: chronyd-restricted.service source: systemd state: inactive status: disabled chronyd.service: name: chronyd.service source: systemd state: running status: enabled clevis-luks-askpass.service: name: clevis-luks-askpass.service source: systemd state: stopped status: static cni-dhcp.service: name: cni-dhcp.service source: systemd state: inactive status: disabled configure-cloudinit-ssh.service: name: configure-cloudinit-ssh.service source: systemd state: stopped status: enabled console-getty.service: name: console-getty.service source: systemd state: inactive status: disabled console-login-helper-messages-gensnippet-ssh-keys.service: name: console-login-helper-messages-gensnippet-ssh-keys.service source: systemd state: stopped status: enabled container-getty@.service: name: container-getty@.service source: systemd state: unknown status: static coreos-generate-iscsi-initiatorname.service: name: coreos-generate-iscsi-initiatorname.service source: systemd state: stopped status: enabled coreos-ignition-delete-config.service: name: coreos-ignition-delete-config.service source: systemd state: stopped status: enabled coreos-ignition-firstboot-complete.service: name: coreos-ignition-firstboot-complete.service source: systemd state: stopped status: enabled 
coreos-ignition-write-issues.service: name: coreos-ignition-write-issues.service source: systemd state: stopped status: enabled coreos-installer-disable-device-auto-activation.service: name: coreos-installer-disable-device-auto-activation.service source: systemd state: inactive status: static coreos-installer-noreboot.service: name: coreos-installer-noreboot.service source: systemd state: inactive status: static coreos-installer-reboot.service: name: coreos-installer-reboot.service source: systemd state: inactive status: static coreos-installer-secure-ipl-reboot.service: name: coreos-installer-secure-ipl-reboot.service source: systemd state: inactive status: static coreos-installer.service: name: coreos-installer.service source: systemd state: inactive status: static coreos-liveiso-success.service: name: coreos-liveiso-success.service source: systemd state: stopped status: enabled coreos-platform-chrony-config.service: name: coreos-platform-chrony-config.service source: systemd state: stopped status: enabled coreos-populate-lvmdevices.service: name: coreos-populate-lvmdevices.service source: systemd state: stopped status: enabled coreos-printk-quiet.service: name: coreos-printk-quiet.service source: systemd state: stopped status: enabled coreos-update-ca-trust.service: name: coreos-update-ca-trust.service source: systemd state: stopped status: enabled crc-dnsmasq.service: name: crc-dnsmasq.service source: systemd state: stopped status: not-found crc-pre.service: name: crc-pre.service source: systemd state: stopped status: enabled crio-subid.service: name: crio-subid.service source: systemd state: stopped status: enabled crio-wipe.service: name: crio-wipe.service source: systemd state: stopped status: disabled crio.service: name: crio.service source: systemd state: stopped status: disabled dbus-broker.service: name: dbus-broker.service source: systemd state: running status: enabled dbus-org.freedesktop.hostname1.service: name: dbus-org.freedesktop.hostname1.service source: systemd state: active status: alias dbus-org.freedesktop.locale1.service: name: dbus-org.freedesktop.locale1.service source: systemd state: inactive status: alias dbus-org.freedesktop.login1.service: name: dbus-org.freedesktop.login1.service source: systemd state: active status: alias dbus-org.freedesktop.nm-dispatcher.service: name: dbus-org.freedesktop.nm-dispatcher.service source: systemd state: active status: alias dbus-org.freedesktop.timedate1.service: name: dbus-org.freedesktop.timedate1.service source: systemd state: inactive status: alias dbus.service: name: dbus.service source: systemd state: active status: alias debug-shell.service: name: debug-shell.service source: systemd state: inactive status: disabled disable-mglru.service: name: disable-mglru.service source: systemd state: stopped status: enabled display-manager.service: name: display-manager.service source: systemd state: stopped status: not-found dm-event.service: name: dm-event.service source: systemd state: stopped status: static dnf-makecache.service: name: dnf-makecache.service source: systemd state: inactive status: static dnsmasq.service: name: dnsmasq.service source: systemd state: running status: enabled dracut-cmdline.service: name: dracut-cmdline.service source: systemd state: stopped status: static dracut-initqueue.service: name: dracut-initqueue.service source: systemd state: stopped status: static dracut-mount.service: name: dracut-mount.service source: systemd state: stopped status: static dracut-pre-mount.service: name: 
dracut-pre-mount.service source: systemd state: stopped status: static dracut-pre-pivot.service: name: dracut-pre-pivot.service source: systemd state: stopped status: static dracut-pre-trigger.service: name: dracut-pre-trigger.service source: systemd state: stopped status: static dracut-pre-udev.service: name: dracut-pre-udev.service source: systemd state: stopped status: static dracut-shutdown-onfailure.service: name: dracut-shutdown-onfailure.service source: systemd state: stopped status: static dracut-shutdown.service: name: dracut-shutdown.service source: systemd state: stopped status: static dummy-network.service: name: dummy-network.service source: systemd state: stopped status: enabled emergency.service: name: emergency.service source: systemd state: stopped status: static fcoe.service: name: fcoe.service source: systemd state: stopped status: not-found fstrim.service: name: fstrim.service source: systemd state: inactive status: static fwupd-offline-update.service: name: fwupd-offline-update.service source: systemd state: inactive status: static fwupd-refresh.service: name: fwupd-refresh.service source: systemd state: inactive status: static fwupd.service: name: fwupd.service source: systemd state: inactive status: static gcp-routes.service: name: gcp-routes.service source: systemd state: stopped status: enabled getty@.service: name: getty@.service source: systemd state: unknown status: enabled getty@tty1.service: name: getty@tty1.service source: systemd state: running status: active gssproxy.service: name: gssproxy.service source: systemd state: stopped status: disabled gvisor-tap-vsock.service: name: gvisor-tap-vsock.service source: systemd state: stopped status: enabled hypervfcopyd.service: name: hypervfcopyd.service source: systemd state: inactive status: static hypervkvpd.service: name: hypervkvpd.service source: systemd state: inactive status: static hypervvssd.service: name: hypervvssd.service source: systemd state: inactive status: static ignition-delete-config.service: name: ignition-delete-config.service source: systemd state: stopped status: enabled initrd-cleanup.service: name: initrd-cleanup.service source: systemd state: stopped status: static initrd-parse-etc.service: name: initrd-parse-etc.service source: systemd state: stopped status: static initrd-switch-root.service: name: initrd-switch-root.service source: systemd state: stopped status: static initrd-udevadm-cleanup-db.service: name: initrd-udevadm-cleanup-db.service source: systemd state: stopped status: static irqbalance.service: name: irqbalance.service source: systemd state: running status: enabled iscsi-init.service: name: iscsi-init.service source: systemd state: stopped status: disabled iscsi-onboot.service: name: iscsi-onboot.service source: systemd state: stopped status: enabled iscsi-shutdown.service: name: iscsi-shutdown.service source: systemd state: stopped status: static iscsi-starter.service: name: iscsi-starter.service source: systemd state: inactive status: disabled iscsi.service: name: iscsi.service source: systemd state: stopped status: indirect iscsid.service: name: iscsid.service source: systemd state: stopped status: disabled iscsiuio.service: name: iscsiuio.service source: systemd state: stopped status: disabled kdump.service: name: kdump.service source: systemd state: stopped status: disabled kmod-static-nodes.service: name: kmod-static-nodes.service source: systemd state: stopped status: static kubelet-auto-node-size.service: name: kubelet-auto-node-size.service source: systemd state: 
stopped status: enabled kubelet-cleanup.service: name: kubelet-cleanup.service source: systemd state: stopped status: enabled kubelet.service: name: kubelet.service source: systemd state: stopped status: disabled kubens.service: name: kubens.service source: systemd state: stopped status: disabled ldconfig.service: name: ldconfig.service source: systemd state: stopped status: static logrotate.service: name: logrotate.service source: systemd state: stopped status: static lvm2-activation-early.service: name: lvm2-activation-early.service source: systemd state: stopped status: not-found lvm2-lvmpolld.service: name: lvm2-lvmpolld.service source: systemd state: stopped status: static lvm2-monitor.service: name: lvm2-monitor.service source: systemd state: stopped status: enabled machine-config-daemon-firstboot.service: name: machine-config-daemon-firstboot.service source: systemd state: stopped status: enabled machine-config-daemon-pull.service: name: machine-config-daemon-pull.service source: systemd state: stopped status: enabled mdadm-grow-continue@.service: name: mdadm-grow-continue@.service source: systemd state: unknown status: static mdadm-last-resort@.service: name: mdadm-last-resort@.service source: systemd state: unknown status: static mdcheck_continue.service: name: mdcheck_continue.service source: systemd state: inactive status: static mdcheck_start.service: name: mdcheck_start.service source: systemd state: inactive status: static mdmon@.service: name: mdmon@.service source: systemd state: unknown status: static mdmonitor-oneshot.service: name: mdmonitor-oneshot.service source: systemd state: inactive status: static mdmonitor.service: name: mdmonitor.service source: systemd state: stopped status: enabled microcode.service: name: microcode.service source: systemd state: stopped status: enabled modprobe@.service: name: modprobe@.service source: systemd state: unknown status: static modprobe@configfs.service: name: modprobe@configfs.service source: systemd state: stopped status: inactive modprobe@drm.service: name: modprobe@drm.service source: systemd state: stopped status: inactive modprobe@efi_pstore.service: name: modprobe@efi_pstore.service source: systemd state: stopped status: inactive modprobe@fuse.service: name: modprobe@fuse.service source: systemd state: stopped status: inactive multipathd.service: name: multipathd.service source: systemd state: stopped status: enabled netavark-dhcp-proxy.service: name: netavark-dhcp-proxy.service source: systemd state: inactive status: disabled netavark-firewalld-reload.service: name: netavark-firewalld-reload.service source: systemd state: inactive status: disabled network.service: name: network.service source: systemd state: stopped status: not-found nfs-blkmap.service: name: nfs-blkmap.service source: systemd state: inactive status: disabled nfs-idmapd.service: name: nfs-idmapd.service source: systemd state: stopped status: static nfs-mountd.service: name: nfs-mountd.service source: systemd state: stopped status: static nfs-server.service: name: nfs-server.service source: systemd state: stopped status: disabled nfs-utils.service: name: nfs-utils.service source: systemd state: stopped status: static nfsdcld.service: name: nfsdcld.service source: systemd state: stopped status: static nftables.service: name: nftables.service source: systemd state: inactive status: disabled nis-domainname.service: name: nis-domainname.service source: systemd state: inactive status: disabled nm-cloud-setup.service: name: nm-cloud-setup.service source: systemd 
state: inactive status: disabled nm-priv-helper.service: name: nm-priv-helper.service source: systemd state: inactive status: static nmstate.service: name: nmstate.service source: systemd state: stopped status: enabled node-valid-hostname.service: name: node-valid-hostname.service source: systemd state: stopped status: enabled nodeip-configuration.service: name: nodeip-configuration.service source: systemd state: stopped status: enabled ntpd.service: name: ntpd.service source: systemd state: stopped status: not-found ntpdate.service: name: ntpdate.service source: systemd state: stopped status: not-found nvmefc-boot-connections.service: name: nvmefc-boot-connections.service source: systemd state: stopped status: enabled nvmf-autoconnect.service: name: nvmf-autoconnect.service source: systemd state: inactive status: disabled nvmf-connect@.service: name: nvmf-connect@.service source: systemd state: unknown status: static openvswitch.service: name: openvswitch.service source: systemd state: stopped status: enabled ostree-boot-complete.service: name: ostree-boot-complete.service source: systemd state: stopped status: enabled-runtime ostree-finalize-staged-hold.service: name: ostree-finalize-staged-hold.service source: systemd state: stopped status: static ostree-finalize-staged.service: name: ostree-finalize-staged.service source: systemd state: stopped status: static ostree-prepare-root.service: name: ostree-prepare-root.service source: systemd state: inactive status: static ostree-readonly-sysroot-migration.service: name: ostree-readonly-sysroot-migration.service source: systemd state: stopped status: disabled ostree-remount.service: name: ostree-remount.service source: systemd state: stopped status: enabled ostree-state-overlay@.service: name: ostree-state-overlay@.service source: systemd state: unknown status: disabled ovs-configuration.service: name: ovs-configuration.service source: systemd state: stopped status: enabled ovs-delete-transient-ports.service: name: ovs-delete-transient-ports.service source: systemd state: stopped status: static ovs-vswitchd.service: name: ovs-vswitchd.service source: systemd state: running status: static ovsdb-server.service: name: ovsdb-server.service source: systemd state: running status: static pam_namespace.service: name: pam_namespace.service source: systemd state: inactive status: static plymouth-quit-wait.service: name: plymouth-quit-wait.service source: systemd state: stopped status: not-found plymouth-read-write.service: name: plymouth-read-write.service source: systemd state: stopped status: not-found plymouth-start.service: name: plymouth-start.service source: systemd state: stopped status: not-found podman-auto-update.service: name: podman-auto-update.service source: systemd state: inactive status: disabled podman-clean-transient.service: name: podman-clean-transient.service source: systemd state: inactive status: disabled podman-kube@.service: name: podman-kube@.service source: systemd state: unknown status: disabled podman-restart.service: name: podman-restart.service source: systemd state: inactive status: disabled podman.service: name: podman.service source: systemd state: stopped status: disabled polkit.service: name: polkit.service source: systemd state: inactive status: static qemu-guest-agent.service: name: qemu-guest-agent.service source: systemd state: stopped status: enabled quotaon.service: name: quotaon.service source: systemd state: inactive status: static raid-check.service: name: raid-check.service source: systemd state: inactive 
status: static rbdmap.service: name: rbdmap.service source: systemd state: stopped status: not-found rc-local.service: name: rc-local.service source: systemd state: stopped status: static rdisc.service: name: rdisc.service source: systemd state: inactive status: disabled rdma-load-modules@.service: name: rdma-load-modules@.service source: systemd state: unknown status: static rdma-ndd.service: name: rdma-ndd.service source: systemd state: inactive status: static rescue.service: name: rescue.service source: systemd state: stopped status: static rhcos-usrlocal-selinux-fixup.service: name: rhcos-usrlocal-selinux-fixup.service source: systemd state: stopped status: enabled rpc-gssd.service: name: rpc-gssd.service source: systemd state: stopped status: static rpc-statd-notify.service: name: rpc-statd-notify.service source: systemd state: stopped status: static rpc-statd.service: name: rpc-statd.service source: systemd state: stopped status: static rpc-svcgssd.service: name: rpc-svcgssd.service source: systemd state: stopped status: not-found rpcbind.service: name: rpcbind.service source: systemd state: stopped status: disabled rpm-ostree-bootstatus.service: name: rpm-ostree-bootstatus.service source: systemd state: inactive status: disabled rpm-ostree-countme.service: name: rpm-ostree-countme.service source: systemd state: inactive status: static rpm-ostree-fix-shadow-mode.service: name: rpm-ostree-fix-shadow-mode.service source: systemd state: stopped status: disabled rpm-ostreed-automatic.service: name: rpm-ostreed-automatic.service source: systemd state: inactive status: static rpm-ostreed.service: name: rpm-ostreed.service source: systemd state: inactive status: static rpmdb-rebuild.service: name: rpmdb-rebuild.service source: systemd state: inactive status: disabled selinux-autorelabel-mark.service: name: selinux-autorelabel-mark.service source: systemd state: stopped status: enabled selinux-autorelabel.service: name: selinux-autorelabel.service source: systemd state: inactive status: static selinux-check-proper-disable.service: name: selinux-check-proper-disable.service source: systemd state: inactive status: disabled serial-getty@.service: name: serial-getty@.service source: systemd state: unknown status: disabled sntp.service: name: sntp.service source: systemd state: stopped status: not-found sshd-keygen@.service: name: sshd-keygen@.service source: systemd state: unknown status: disabled sshd-keygen@ecdsa.service: name: sshd-keygen@ecdsa.service source: systemd state: stopped status: inactive sshd-keygen@ed25519.service: name: sshd-keygen@ed25519.service source: systemd state: stopped status: inactive sshd-keygen@rsa.service: name: sshd-keygen@rsa.service source: systemd state: stopped status: inactive sshd.service: name: sshd.service source: systemd state: running status: enabled sshd@.service: name: sshd@.service source: systemd state: unknown status: static sssd-autofs.service: name: sssd-autofs.service source: systemd state: inactive status: indirect sssd-nss.service: name: sssd-nss.service source: systemd state: inactive status: indirect sssd-pac.service: name: sssd-pac.service source: systemd state: inactive status: indirect sssd-pam.service: name: sssd-pam.service source: systemd state: inactive status: indirect sssd-ssh.service: name: sssd-ssh.service source: systemd state: inactive status: indirect sssd-sudo.service: name: sssd-sudo.service source: systemd state: inactive status: indirect sssd.service: name: sssd.service source: systemd state: stopped status: enabled 
stalld.service: name: stalld.service source: systemd state: inactive status: disabled syslog.service: name: syslog.service source: systemd state: stopped status: not-found system-update-cleanup.service: name: system-update-cleanup.service source: systemd state: inactive status: static systemd-ask-password-console.service: name: systemd-ask-password-console.service source: systemd state: stopped status: static systemd-ask-password-wall.service: name: systemd-ask-password-wall.service source: systemd state: stopped status: static systemd-backlight@.service: name: systemd-backlight@.service source: systemd state: unknown status: static systemd-binfmt.service: name: systemd-binfmt.service source: systemd state: stopped status: static systemd-bless-boot.service: name: systemd-bless-boot.service source: systemd state: inactive status: static systemd-boot-check-no-failures.service: name: systemd-boot-check-no-failures.service source: systemd state: inactive status: disabled systemd-boot-random-seed.service: name: systemd-boot-random-seed.service source: systemd state: stopped status: static systemd-boot-update.service: name: systemd-boot-update.service source: systemd state: stopped status: enabled systemd-coredump@.service: name: systemd-coredump@.service source: systemd state: unknown status: static systemd-exit.service: name: systemd-exit.service source: systemd state: inactive status: static systemd-fsck-root.service: name: systemd-fsck-root.service source: systemd state: stopped status: static systemd-fsck@.service: name: systemd-fsck@.service source: systemd state: unknown status: static systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service: name: systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service source: systemd state: stopped status: active systemd-growfs-root.service: name: systemd-growfs-root.service source: systemd state: inactive status: static systemd-growfs@.service: name: systemd-growfs@.service source: systemd state: unknown status: static systemd-halt.service: name: systemd-halt.service source: systemd state: inactive status: static systemd-hibernate-resume@.service: name: systemd-hibernate-resume@.service source: systemd state: unknown status: static systemd-hibernate.service: name: systemd-hibernate.service source: systemd state: inactive status: static systemd-hostnamed.service: name: systemd-hostnamed.service source: systemd state: running status: static systemd-hwdb-update.service: name: systemd-hwdb-update.service source: systemd state: stopped status: static systemd-hybrid-sleep.service: name: systemd-hybrid-sleep.service source: systemd state: inactive status: static systemd-initctl.service: name: systemd-initctl.service source: systemd state: stopped status: static systemd-journal-catalog-update.service: name: systemd-journal-catalog-update.service source: systemd state: stopped status: static systemd-journal-flush.service: name: systemd-journal-flush.service source: systemd state: stopped status: static systemd-journal-gatewayd.service: name: systemd-journal-gatewayd.service source: systemd state: inactive status: indirect systemd-journal-remote.service: name: systemd-journal-remote.service source: systemd state: inactive status: indirect systemd-journal-upload.service: name: systemd-journal-upload.service source: systemd state: inactive status: disabled systemd-journald.service: name: systemd-journald.service source: systemd state: running status: static systemd-journald@.service: name: 
systemd-journald@.service source: systemd state: unknown status: static systemd-kexec.service: name: systemd-kexec.service source: systemd state: inactive status: static systemd-localed.service: name: systemd-localed.service source: systemd state: inactive status: static systemd-logind.service: name: systemd-logind.service source: systemd state: running status: static systemd-machine-id-commit.service: name: systemd-machine-id-commit.service source: systemd state: stopped status: static systemd-modules-load.service: name: systemd-modules-load.service source: systemd state: stopped status: static systemd-network-generator.service: name: systemd-network-generator.service source: systemd state: stopped status: enabled systemd-pcrfs-root.service: name: systemd-pcrfs-root.service source: systemd state: inactive status: static systemd-pcrfs@.service: name: systemd-pcrfs@.service source: systemd state: unknown status: static systemd-pcrmachine.service: name: systemd-pcrmachine.service source: systemd state: stopped status: static systemd-pcrphase-initrd.service: name: systemd-pcrphase-initrd.service source: systemd state: stopped status: static systemd-pcrphase-sysinit.service: name: systemd-pcrphase-sysinit.service source: systemd state: stopped status: static systemd-pcrphase.service: name: systemd-pcrphase.service source: systemd state: stopped status: static systemd-poweroff.service: name: systemd-poweroff.service source: systemd state: inactive status: static systemd-pstore.service: name: systemd-pstore.service source: systemd state: stopped status: enabled systemd-quotacheck.service: name: systemd-quotacheck.service source: systemd state: stopped status: static systemd-random-seed.service: name: systemd-random-seed.service source: systemd state: stopped status: static systemd-reboot.service: name: systemd-reboot.service source: systemd state: inactive status: static systemd-remount-fs.service: name: systemd-remount-fs.service source: systemd state: stopped status: enabled-runtime systemd-repart.service: name: systemd-repart.service source: systemd state: stopped status: masked systemd-rfkill.service: name: systemd-rfkill.service source: systemd state: stopped status: static systemd-suspend-then-hibernate.service: name: systemd-suspend-then-hibernate.service source: systemd state: inactive status: static systemd-suspend.service: name: systemd-suspend.service source: systemd state: inactive status: static systemd-sysctl.service: name: systemd-sysctl.service source: systemd state: stopped status: static systemd-sysext.service: name: systemd-sysext.service source: systemd state: stopped status: disabled systemd-sysupdate-reboot.service: name: systemd-sysupdate-reboot.service source: systemd state: inactive status: indirect systemd-sysupdate.service: name: systemd-sysupdate.service source: systemd state: inactive status: indirect systemd-sysusers.service: name: systemd-sysusers.service source: systemd state: stopped status: static systemd-timedated.service: name: systemd-timedated.service source: systemd state: inactive status: static systemd-timesyncd.service: name: systemd-timesyncd.service source: systemd state: stopped status: not-found systemd-tmpfiles-clean.service: name: systemd-tmpfiles-clean.service source: systemd state: stopped status: static systemd-tmpfiles-setup-dev.service: name: systemd-tmpfiles-setup-dev.service source: systemd state: stopped status: static systemd-tmpfiles-setup.service: name: systemd-tmpfiles-setup.service source: systemd state: stopped status: static 
systemd-tmpfiles.service: name: systemd-tmpfiles.service source: systemd state: stopped status: not-found systemd-udev-settle.service: name: systemd-udev-settle.service source: systemd state: stopped status: static systemd-udev-trigger.service: name: systemd-udev-trigger.service source: systemd state: stopped status: static systemd-udevd.service: name: systemd-udevd.service source: systemd state: running status: static systemd-update-done.service: name: systemd-update-done.service source: systemd state: stopped status: static systemd-update-utmp-runlevel.service: name: systemd-update-utmp-runlevel.service source: systemd state: stopped status: static systemd-update-utmp.service: name: systemd-update-utmp.service source: systemd state: stopped status: static systemd-user-sessions.service: name: systemd-user-sessions.service source: systemd state: stopped status: static systemd-vconsole-setup.service: name: systemd-vconsole-setup.service source: systemd state: stopped status: static systemd-volatile-root.service: name: systemd-volatile-root.service source: systemd state: inactive status: static systemd-zram-setup@.service: name: systemd-zram-setup@.service source: systemd state: unknown status: static teamd@.service: name: teamd@.service source: systemd state: unknown status: static unbound-anchor.service: name: unbound-anchor.service source: systemd state: stopped status: static user-runtime-dir@.service: name: user-runtime-dir@.service source: systemd state: unknown status: static user-runtime-dir@0.service: name: user-runtime-dir@0.service source: systemd state: stopped status: active user-runtime-dir@1000.service: name: user-runtime-dir@1000.service source: systemd state: stopped status: active user@.service: name: user@.service source: systemd state: unknown status: static user@0.service: name: user@0.service source: systemd state: running status: active user@1000.service: name: user@1000.service source: systemd state: running status: active vgauthd.service: name: vgauthd.service source: systemd state: stopped status: enabled vmtoolsd.service: name: vmtoolsd.service source: systemd state: stopped status: enabled wait-for-primary-ip.service: name: wait-for-primary-ip.service source: systemd state: stopped status: enabled ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDs7MQU61ADe4LfEllZo6w2h2Vo1Z9nNArIkKGmgua8bOly2nQBIoDIKgNOXqUpoIZx1528UeeHSQu9SxYL21mo= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIDKHFhjB7ae+dVOClQLGXnCaMXGjEeLhmEhxE64Ddkhe ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCr2rWpvTGLA5BK4eYXB55gorB9vAJK1K0iUmnm+r9AcvcXH33bR/O6ZNh9h85mHU5l1Gw9nBLRbHn42EU+6Ht6te2Z1gIiJEKpfiC0sR0aMcT4hKQWHmwYqQM/VLXhPiS4OnhO1OJuz0arj1Anr1hDcEJpVTAj3sbfkgzzbBeEWMg2V3Apr1fqDimNlyWRiDFy3TUdKfnB7nucGaGbHneeVxvwv81RGur6I9VHZe/odqEQTGRUBXdu57xybxd6Yc3863ayL5L1OhGTN/x7d8qeEJGb9zt6VvtFWlpVjIXa2l+uTZVfTvufdLwxJdBRg0kHMXH2ZJ3U8w9NRHMBHG7M6YjX0w95uCB/FnyN6s8V/KRQtSnC6Wt6YMP438rM2K9yydXdS/qUQm5hQLP7eY8/Nl4+RDQAvZOjPp+DeUxXfZOqR4qq8tCKi/5Cvd7ChYfPyymeV4RKAJf971EuO0zphyDK8knic0c2XTybK6WTM8lYcbUMYJxg1CW5o1VMjpk= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 43 user_dir: /var/home/core user_gecos: CoreOS Admin user_gid: 1000 user_id: core user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 
userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack ansible_fibre_channel_wwn: [] ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: crc ansible_host: 38.102.83.180 ansible_hostname: crc ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:fe28b1dc-f424-4106-9c95-00604d2bcd5f ansible_interfaces: - eth10 - ovn-k8s-mp0 - ens3 - br-int - lo - ovs-system - br-ex ansible_inventory_sources: - /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/inventory.yaml ansible_is_chroot: true ansible_iscsi_iqn: iqn.1994-05.com.redhat:24fed7ce643e ansible_kernel: 5.14.0-427.22.1.el9_4.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Mon Jun 10 09:23:36 EDT 2024' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: on [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.12 1m: 1.41 5m: 0.36 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.180 - 127.0.0.0/8 - 127.0.0.1 - 192.168.126.11 ipv6: - ::1 - fe80::d7ba:4cf0:1b1a:2eaa ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: c1bd596843fb445da20eca66471ddf66 ansible_memfree_mb: 29434 ansible_memory_mb: nocache: free: 30775 used: 1320 real: free: 29434 total: 32095 used: 2661 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 32095 ansible_mounts: - block_available: 13253184 
block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /sysroot options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /etc options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /usr options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /sysroot/ostree/deploy/rhcos/var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13253184 block_size: 4096 block_total: 20823803 block_used: 7570619 device: /dev/vda4 fstype: xfs inode_available: 41489044 inode_total: 41680320 inode_used: 191276 mount: /var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54285041664 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 221344 block_size: 1024 block_total: 358271 block_used: 136927 device: /dev/vda3 fstype: ext4 inode_available: 97936 inode_total: 98304 inode_used: 368 mount: /boot options: ro,seclabel,nosuid,nodev,relatime size_available: 226656256 size_total: 366869504 uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 - block_available: 0 block_size: 2048 block_total: 241 block_used: 241 device: /dev/sr0 fstype: iso9660 inode_available: 0 inode_total: 0 inode_used: 0 mount: /tmp/openstack-config-drive options: ro,relatime,nojoliet,check=s,map=n,blocksize=2048 size_available: 0 size_total: 493568 uuid: 2025-10-13-00-07-48-00 ansible_nodename: crc ansible_os_family: RedHat ansible_ovn_k8s_mp0: active: false device: ovn-k8s-mp0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] 
rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: b6:dc:d9:26:03:d4 mtu: 1400 promisc: true timestamping: [] type: ether ansible_ovs_system: active: false device: ovs-system features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: f2:3a:24:88:ca:6e mtu: 1500 promisc: true timestamping: [] type: ether ansible_pkg_mgr: atomic_container ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: 
(hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor - '8' - AuthenticAMD - AMD EPYC-Rome Processor - '9' - AuthenticAMD - AMD EPYC-Rome Processor - '10' - AuthenticAMD - AMD EPYC-Rome Processor - '11' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 12 ansible_processor_nproc: 12 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 12 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.2.1 ansible_python: executable: /usr/bin/python3.9 has_sslcontext: true type: cpython version: major: 3 micro: 18 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 18 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.18 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDs7MQU61ADe4LfEllZo6w2h2Vo1Z9nNArIkKGmgua8bOly2nQBIoDIKgNOXqUpoIZx1528UeeHSQu9SxYL21mo= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIDKHFhjB7ae+dVOClQLGXnCaMXGjEeLhmEhxE64Ddkhe ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCr2rWpvTGLA5BK4eYXB55gorB9vAJK1K0iUmnm+r9AcvcXH33bR/O6ZNh9h85mHU5l1Gw9nBLRbHn42EU+6Ht6te2Z1gIiJEKpfiC0sR0aMcT4hKQWHmwYqQM/VLXhPiS4OnhO1OJuz0arj1Anr1hDcEJpVTAj3sbfkgzzbBeEWMg2V3Apr1fqDimNlyWRiDFy3TUdKfnB7nucGaGbHneeVxvwv81RGur6I9VHZe/odqEQTGRUBXdu57xybxd6Yc3863ayL5L1OhGTN/x7d8qeEJGb9zt6VvtFWlpVjIXa2l+uTZVfTvufdLwxJdBRg0kHMXH2ZJ3U8w9NRHMBHG7M6YjX0w95uCB/FnyN6s8V/KRQtSnC6Wt6YMP438rM2K9yydXdS/qUQm5hQLP7eY8/Nl4+RDQAvZOjPp+DeUxXfZOqR4qq8tCKi/5Cvd7ChYfPyymeV4RKAJf971EuO0zphyDK8knic0c2XTybK6WTM8lYcbUMYJxg1CW5o1VMjpk= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 43 ansible_user: core ansible_user_dir: /var/home/core ansible_user_gecos: CoreOS Admin ansible_user_gid: 1000 ansible_user_id: core ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' 
ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: /var/home/core/.crc/machines/crc/kubeconfig cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: vexxhost crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 discovered_interpreter_python: /usr/bin/python3.9 enable_ramdisk: true gather_subset: - all group_names: - ungrouped groups: all: - controller - crc ungrouped: *id001 zuul_unreachable: [] inventory_dir: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1 inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/inventory.yaml inventory_hostname: crc inventory_hostname_short: crc module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 host_id: d19710e37f7b2620eb9f1bc9cfdfc06732b1f0c31221781941dd4533 interface_ip: 38.102.83.180 label: coreos-crc-extracted-2-39-0-3xl private_ipv4: 38.102.83.180 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.180 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__00f35d1f62e54e7243096aca394c8bcbb8907683 playbook_dir: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy services: NetworkManager-clean-initrd-state.service: name: NetworkManager-clean-initrd-state.service source: systemd state: stopped status: enabled NetworkManager-dispatcher.service: name: NetworkManager-dispatcher.service source: systemd state: running status: enabled NetworkManager-wait-online.service: name: NetworkManager-wait-online.service source: systemd state: stopped status: enabled NetworkManager.service: name: NetworkManager.service source: systemd state: running status: enabled afterburn-checkin.service: name: afterburn-checkin.service source: systemd state: stopped status: enabled afterburn-firstboot-checkin.service: name: afterburn-firstboot-checkin.service source: systemd state: stopped status: enabled afterburn-sshkeys@.service: name: afterburn-sshkeys@.service source: systemd state: unknown status: disabled afterburn.service: name: afterburn.service source: systemd state: inactive status: disabled arp-ethers.service: name: arp-ethers.service source: systemd state: inactive status: disabled auditd.service: name: auditd.service source: systemd state: running status: enabled 
auth-rpcgss-module.service: name: auth-rpcgss-module.service source: systemd state: stopped status: static autovt@.service: name: autovt@.service source: systemd state: unknown status: alias blk-availability.service: name: blk-availability.service source: systemd state: stopped status: disabled bootc-fetch-apply-updates.service: name: bootc-fetch-apply-updates.service source: systemd state: inactive status: static bootkube.service: name: bootkube.service source: systemd state: inactive status: disabled bootupd.service: name: bootupd.service source: systemd state: stopped status: static chrony-wait.service: name: chrony-wait.service source: systemd state: inactive status: disabled chronyd-restricted.service: name: chronyd-restricted.service source: systemd state: inactive status: disabled chronyd.service: name: chronyd.service source: systemd state: running status: enabled clevis-luks-askpass.service: name: clevis-luks-askpass.service source: systemd state: stopped status: static cni-dhcp.service: name: cni-dhcp.service source: systemd state: inactive status: disabled configure-cloudinit-ssh.service: name: configure-cloudinit-ssh.service source: systemd state: stopped status: enabled console-getty.service: name: console-getty.service source: systemd state: inactive status: disabled console-login-helper-messages-gensnippet-ssh-keys.service: name: console-login-helper-messages-gensnippet-ssh-keys.service source: systemd state: stopped status: enabled container-getty@.service: name: container-getty@.service source: systemd state: unknown status: static coreos-generate-iscsi-initiatorname.service: name: coreos-generate-iscsi-initiatorname.service source: systemd state: stopped status: enabled coreos-ignition-delete-config.service: name: coreos-ignition-delete-config.service source: systemd state: stopped status: enabled coreos-ignition-firstboot-complete.service: name: coreos-ignition-firstboot-complete.service source: systemd state: stopped status: enabled coreos-ignition-write-issues.service: name: coreos-ignition-write-issues.service source: systemd state: stopped status: enabled coreos-installer-disable-device-auto-activation.service: name: coreos-installer-disable-device-auto-activation.service source: systemd state: inactive status: static coreos-installer-noreboot.service: name: coreos-installer-noreboot.service source: systemd state: inactive status: static coreos-installer-reboot.service: name: coreos-installer-reboot.service source: systemd state: inactive status: static coreos-installer-secure-ipl-reboot.service: name: coreos-installer-secure-ipl-reboot.service source: systemd state: inactive status: static coreos-installer.service: name: coreos-installer.service source: systemd state: inactive status: static coreos-liveiso-success.service: name: coreos-liveiso-success.service source: systemd state: stopped status: enabled coreos-platform-chrony-config.service: name: coreos-platform-chrony-config.service source: systemd state: stopped status: enabled coreos-populate-lvmdevices.service: name: coreos-populate-lvmdevices.service source: systemd state: stopped status: enabled coreos-printk-quiet.service: name: coreos-printk-quiet.service source: systemd state: stopped status: enabled coreos-update-ca-trust.service: name: coreos-update-ca-trust.service source: systemd state: stopped status: enabled crc-dnsmasq.service: name: crc-dnsmasq.service source: systemd state: stopped status: not-found crc-pre.service: name: crc-pre.service source: systemd state: stopped status: enabled 
crio-subid.service: name: crio-subid.service source: systemd state: stopped status: enabled crio-wipe.service: name: crio-wipe.service source: systemd state: stopped status: disabled crio.service: name: crio.service source: systemd state: stopped status: disabled dbus-broker.service: name: dbus-broker.service source: systemd state: running status: enabled dbus-org.freedesktop.hostname1.service: name: dbus-org.freedesktop.hostname1.service source: systemd state: active status: alias dbus-org.freedesktop.locale1.service: name: dbus-org.freedesktop.locale1.service source: systemd state: inactive status: alias dbus-org.freedesktop.login1.service: name: dbus-org.freedesktop.login1.service source: systemd state: active status: alias dbus-org.freedesktop.nm-dispatcher.service: name: dbus-org.freedesktop.nm-dispatcher.service source: systemd state: active status: alias dbus-org.freedesktop.timedate1.service: name: dbus-org.freedesktop.timedate1.service source: systemd state: inactive status: alias dbus.service: name: dbus.service source: systemd state: active status: alias debug-shell.service: name: debug-shell.service source: systemd state: inactive status: disabled disable-mglru.service: name: disable-mglru.service source: systemd state: stopped status: enabled display-manager.service: name: display-manager.service source: systemd state: stopped status: not-found dm-event.service: name: dm-event.service source: systemd state: stopped status: static dnf-makecache.service: name: dnf-makecache.service source: systemd state: inactive status: static dnsmasq.service: name: dnsmasq.service source: systemd state: running status: enabled dracut-cmdline.service: name: dracut-cmdline.service source: systemd state: stopped status: static dracut-initqueue.service: name: dracut-initqueue.service source: systemd state: stopped status: static dracut-mount.service: name: dracut-mount.service source: systemd state: stopped status: static dracut-pre-mount.service: name: dracut-pre-mount.service source: systemd state: stopped status: static dracut-pre-pivot.service: name: dracut-pre-pivot.service source: systemd state: stopped status: static dracut-pre-trigger.service: name: dracut-pre-trigger.service source: systemd state: stopped status: static dracut-pre-udev.service: name: dracut-pre-udev.service source: systemd state: stopped status: static dracut-shutdown-onfailure.service: name: dracut-shutdown-onfailure.service source: systemd state: stopped status: static dracut-shutdown.service: name: dracut-shutdown.service source: systemd state: stopped status: static dummy-network.service: name: dummy-network.service source: systemd state: stopped status: enabled emergency.service: name: emergency.service source: systemd state: stopped status: static fcoe.service: name: fcoe.service source: systemd state: stopped status: not-found fstrim.service: name: fstrim.service source: systemd state: inactive status: static fwupd-offline-update.service: name: fwupd-offline-update.service source: systemd state: inactive status: static fwupd-refresh.service: name: fwupd-refresh.service source: systemd state: inactive status: static fwupd.service: name: fwupd.service source: systemd state: inactive status: static gcp-routes.service: name: gcp-routes.service source: systemd state: stopped status: enabled getty@.service: name: getty@.service source: systemd state: unknown status: enabled getty@tty1.service: name: getty@tty1.service source: systemd state: running status: active gssproxy.service: name: gssproxy.service source: systemd 
state: stopped status: disabled gvisor-tap-vsock.service: name: gvisor-tap-vsock.service source: systemd state: stopped status: enabled hypervfcopyd.service: name: hypervfcopyd.service source: systemd state: inactive status: static hypervkvpd.service: name: hypervkvpd.service source: systemd state: inactive status: static hypervvssd.service: name: hypervvssd.service source: systemd state: inactive status: static ignition-delete-config.service: name: ignition-delete-config.service source: systemd state: stopped status: enabled initrd-cleanup.service: name: initrd-cleanup.service source: systemd state: stopped status: static initrd-parse-etc.service: name: initrd-parse-etc.service source: systemd state: stopped status: static initrd-switch-root.service: name: initrd-switch-root.service source: systemd state: stopped status: static initrd-udevadm-cleanup-db.service: name: initrd-udevadm-cleanup-db.service source: systemd state: stopped status: static irqbalance.service: name: irqbalance.service source: systemd state: running status: enabled iscsi-init.service: name: iscsi-init.service source: systemd state: stopped status: disabled iscsi-onboot.service: name: iscsi-onboot.service source: systemd state: stopped status: enabled iscsi-shutdown.service: name: iscsi-shutdown.service source: systemd state: stopped status: static iscsi-starter.service: name: iscsi-starter.service source: systemd state: inactive status: disabled iscsi.service: name: iscsi.service source: systemd state: stopped status: indirect iscsid.service: name: iscsid.service source: systemd state: stopped status: disabled iscsiuio.service: name: iscsiuio.service source: systemd state: stopped status: disabled kdump.service: name: kdump.service source: systemd state: stopped status: disabled kmod-static-nodes.service: name: kmod-static-nodes.service source: systemd state: stopped status: static kubelet-auto-node-size.service: name: kubelet-auto-node-size.service source: systemd state: stopped status: enabled kubelet-cleanup.service: name: kubelet-cleanup.service source: systemd state: stopped status: enabled kubelet.service: name: kubelet.service source: systemd state: stopped status: disabled kubens.service: name: kubens.service source: systemd state: stopped status: disabled ldconfig.service: name: ldconfig.service source: systemd state: stopped status: static logrotate.service: name: logrotate.service source: systemd state: stopped status: static lvm2-activation-early.service: name: lvm2-activation-early.service source: systemd state: stopped status: not-found lvm2-lvmpolld.service: name: lvm2-lvmpolld.service source: systemd state: stopped status: static lvm2-monitor.service: name: lvm2-monitor.service source: systemd state: stopped status: enabled machine-config-daemon-firstboot.service: name: machine-config-daemon-firstboot.service source: systemd state: stopped status: enabled machine-config-daemon-pull.service: name: machine-config-daemon-pull.service source: systemd state: stopped status: enabled mdadm-grow-continue@.service: name: mdadm-grow-continue@.service source: systemd state: unknown status: static mdadm-last-resort@.service: name: mdadm-last-resort@.service source: systemd state: unknown status: static mdcheck_continue.service: name: mdcheck_continue.service source: systemd state: inactive status: static mdcheck_start.service: name: mdcheck_start.service source: systemd state: inactive status: static mdmon@.service: name: mdmon@.service source: systemd state: unknown status: static mdmonitor-oneshot.service: 
name: mdmonitor-oneshot.service source: systemd state: inactive status: static mdmonitor.service: name: mdmonitor.service source: systemd state: stopped status: enabled microcode.service: name: microcode.service source: systemd state: stopped status: enabled modprobe@.service: name: modprobe@.service source: systemd state: unknown status: static modprobe@configfs.service: name: modprobe@configfs.service source: systemd state: stopped status: inactive modprobe@drm.service: name: modprobe@drm.service source: systemd state: stopped status: inactive modprobe@efi_pstore.service: name: modprobe@efi_pstore.service source: systemd state: stopped status: inactive modprobe@fuse.service: name: modprobe@fuse.service source: systemd state: stopped status: inactive multipathd.service: name: multipathd.service source: systemd state: stopped status: enabled netavark-dhcp-proxy.service: name: netavark-dhcp-proxy.service source: systemd state: inactive status: disabled netavark-firewalld-reload.service: name: netavark-firewalld-reload.service source: systemd state: inactive status: disabled network.service: name: network.service source: systemd state: stopped status: not-found nfs-blkmap.service: name: nfs-blkmap.service source: systemd state: inactive status: disabled nfs-idmapd.service: name: nfs-idmapd.service source: systemd state: stopped status: static nfs-mountd.service: name: nfs-mountd.service source: systemd state: stopped status: static nfs-server.service: name: nfs-server.service source: systemd state: stopped status: disabled nfs-utils.service: name: nfs-utils.service source: systemd state: stopped status: static nfsdcld.service: name: nfsdcld.service source: systemd state: stopped status: static nftables.service: name: nftables.service source: systemd state: inactive status: disabled nis-domainname.service: name: nis-domainname.service source: systemd state: inactive status: disabled nm-cloud-setup.service: name: nm-cloud-setup.service source: systemd state: inactive status: disabled nm-priv-helper.service: name: nm-priv-helper.service source: systemd state: inactive status: static nmstate.service: name: nmstate.service source: systemd state: stopped status: enabled node-valid-hostname.service: name: node-valid-hostname.service source: systemd state: stopped status: enabled nodeip-configuration.service: name: nodeip-configuration.service source: systemd state: stopped status: enabled ntpd.service: name: ntpd.service source: systemd state: stopped status: not-found ntpdate.service: name: ntpdate.service source: systemd state: stopped status: not-found nvmefc-boot-connections.service: name: nvmefc-boot-connections.service source: systemd state: stopped status: enabled nvmf-autoconnect.service: name: nvmf-autoconnect.service source: systemd state: inactive status: disabled nvmf-connect@.service: name: nvmf-connect@.service source: systemd state: unknown status: static openvswitch.service: name: openvswitch.service source: systemd state: stopped status: enabled ostree-boot-complete.service: name: ostree-boot-complete.service source: systemd state: stopped status: enabled-runtime ostree-finalize-staged-hold.service: name: ostree-finalize-staged-hold.service source: systemd state: stopped status: static ostree-finalize-staged.service: name: ostree-finalize-staged.service source: systemd state: stopped status: static ostree-prepare-root.service: name: ostree-prepare-root.service source: systemd state: inactive status: static ostree-readonly-sysroot-migration.service: name: 
ostree-readonly-sysroot-migration.service source: systemd state: stopped status: disabled ostree-remount.service: name: ostree-remount.service source: systemd state: stopped status: enabled ostree-state-overlay@.service: name: ostree-state-overlay@.service source: systemd state: unknown status: disabled ovs-configuration.service: name: ovs-configuration.service source: systemd state: stopped status: enabled ovs-delete-transient-ports.service: name: ovs-delete-transient-ports.service source: systemd state: stopped status: static ovs-vswitchd.service: name: ovs-vswitchd.service source: systemd state: running status: static ovsdb-server.service: name: ovsdb-server.service source: systemd state: running status: static pam_namespace.service: name: pam_namespace.service source: systemd state: inactive status: static plymouth-quit-wait.service: name: plymouth-quit-wait.service source: systemd state: stopped status: not-found plymouth-read-write.service: name: plymouth-read-write.service source: systemd state: stopped status: not-found plymouth-start.service: name: plymouth-start.service source: systemd state: stopped status: not-found podman-auto-update.service: name: podman-auto-update.service source: systemd state: inactive status: disabled podman-clean-transient.service: name: podman-clean-transient.service source: systemd state: inactive status: disabled podman-kube@.service: name: podman-kube@.service source: systemd state: unknown status: disabled podman-restart.service: name: podman-restart.service source: systemd state: inactive status: disabled podman.service: name: podman.service source: systemd state: stopped status: disabled polkit.service: name: polkit.service source: systemd state: inactive status: static qemu-guest-agent.service: name: qemu-guest-agent.service source: systemd state: stopped status: enabled quotaon.service: name: quotaon.service source: systemd state: inactive status: static raid-check.service: name: raid-check.service source: systemd state: inactive status: static rbdmap.service: name: rbdmap.service source: systemd state: stopped status: not-found rc-local.service: name: rc-local.service source: systemd state: stopped status: static rdisc.service: name: rdisc.service source: systemd state: inactive status: disabled rdma-load-modules@.service: name: rdma-load-modules@.service source: systemd state: unknown status: static rdma-ndd.service: name: rdma-ndd.service source: systemd state: inactive status: static rescue.service: name: rescue.service source: systemd state: stopped status: static rhcos-usrlocal-selinux-fixup.service: name: rhcos-usrlocal-selinux-fixup.service source: systemd state: stopped status: enabled rpc-gssd.service: name: rpc-gssd.service source: systemd state: stopped status: static rpc-statd-notify.service: name: rpc-statd-notify.service source: systemd state: stopped status: static rpc-statd.service: name: rpc-statd.service source: systemd state: stopped status: static rpc-svcgssd.service: name: rpc-svcgssd.service source: systemd state: stopped status: not-found rpcbind.service: name: rpcbind.service source: systemd state: stopped status: disabled rpm-ostree-bootstatus.service: name: rpm-ostree-bootstatus.service source: systemd state: inactive status: disabled rpm-ostree-countme.service: name: rpm-ostree-countme.service source: systemd state: inactive status: static rpm-ostree-fix-shadow-mode.service: name: rpm-ostree-fix-shadow-mode.service source: systemd state: stopped status: disabled rpm-ostreed-automatic.service: name: 
rpm-ostreed-automatic.service source: systemd state: inactive status: static rpm-ostreed.service: name: rpm-ostreed.service source: systemd state: inactive status: static rpmdb-rebuild.service: name: rpmdb-rebuild.service source: systemd state: inactive status: disabled selinux-autorelabel-mark.service: name: selinux-autorelabel-mark.service source: systemd state: stopped status: enabled selinux-autorelabel.service: name: selinux-autorelabel.service source: systemd state: inactive status: static selinux-check-proper-disable.service: name: selinux-check-proper-disable.service source: systemd state: inactive status: disabled serial-getty@.service: name: serial-getty@.service source: systemd state: unknown status: disabled sntp.service: name: sntp.service source: systemd state: stopped status: not-found sshd-keygen@.service: name: sshd-keygen@.service source: systemd state: unknown status: disabled sshd-keygen@ecdsa.service: name: sshd-keygen@ecdsa.service source: systemd state: stopped status: inactive sshd-keygen@ed25519.service: name: sshd-keygen@ed25519.service source: systemd state: stopped status: inactive sshd-keygen@rsa.service: name: sshd-keygen@rsa.service source: systemd state: stopped status: inactive sshd.service: name: sshd.service source: systemd state: running status: enabled sshd@.service: name: sshd@.service source: systemd state: unknown status: static sssd-autofs.service: name: sssd-autofs.service source: systemd state: inactive status: indirect sssd-nss.service: name: sssd-nss.service source: systemd state: inactive status: indirect sssd-pac.service: name: sssd-pac.service source: systemd state: inactive status: indirect sssd-pam.service: name: sssd-pam.service source: systemd state: inactive status: indirect sssd-ssh.service: name: sssd-ssh.service source: systemd state: inactive status: indirect sssd-sudo.service: name: sssd-sudo.service source: systemd state: inactive status: indirect sssd.service: name: sssd.service source: systemd state: stopped status: enabled stalld.service: name: stalld.service source: systemd state: inactive status: disabled syslog.service: name: syslog.service source: systemd state: stopped status: not-found system-update-cleanup.service: name: system-update-cleanup.service source: systemd state: inactive status: static systemd-ask-password-console.service: name: systemd-ask-password-console.service source: systemd state: stopped status: static systemd-ask-password-wall.service: name: systemd-ask-password-wall.service source: systemd state: stopped status: static systemd-backlight@.service: name: systemd-backlight@.service source: systemd state: unknown status: static systemd-binfmt.service: name: systemd-binfmt.service source: systemd state: stopped status: static systemd-bless-boot.service: name: systemd-bless-boot.service source: systemd state: inactive status: static systemd-boot-check-no-failures.service: name: systemd-boot-check-no-failures.service source: systemd state: inactive status: disabled systemd-boot-random-seed.service: name: systemd-boot-random-seed.service source: systemd state: stopped status: static systemd-boot-update.service: name: systemd-boot-update.service source: systemd state: stopped status: enabled systemd-coredump@.service: name: systemd-coredump@.service source: systemd state: unknown status: static systemd-exit.service: name: systemd-exit.service source: systemd state: inactive status: static systemd-fsck-root.service: name: systemd-fsck-root.service source: systemd state: stopped status: static 
systemd-fsck@.service: name: systemd-fsck@.service source: systemd state: unknown status: static systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service: name: systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service source: systemd state: stopped status: active systemd-growfs-root.service: name: systemd-growfs-root.service source: systemd state: inactive status: static systemd-growfs@.service: name: systemd-growfs@.service source: systemd state: unknown status: static systemd-halt.service: name: systemd-halt.service source: systemd state: inactive status: static systemd-hibernate-resume@.service: name: systemd-hibernate-resume@.service source: systemd state: unknown status: static systemd-hibernate.service: name: systemd-hibernate.service source: systemd state: inactive status: static systemd-hostnamed.service: name: systemd-hostnamed.service source: systemd state: running status: static systemd-hwdb-update.service: name: systemd-hwdb-update.service source: systemd state: stopped status: static systemd-hybrid-sleep.service: name: systemd-hybrid-sleep.service source: systemd state: inactive status: static systemd-initctl.service: name: systemd-initctl.service source: systemd state: stopped status: static systemd-journal-catalog-update.service: name: systemd-journal-catalog-update.service source: systemd state: stopped status: static systemd-journal-flush.service: name: systemd-journal-flush.service source: systemd state: stopped status: static systemd-journal-gatewayd.service: name: systemd-journal-gatewayd.service source: systemd state: inactive status: indirect systemd-journal-remote.service: name: systemd-journal-remote.service source: systemd state: inactive status: indirect systemd-journal-upload.service: name: systemd-journal-upload.service source: systemd state: inactive status: disabled systemd-journald.service: name: systemd-journald.service source: systemd state: running status: static systemd-journald@.service: name: systemd-journald@.service source: systemd state: unknown status: static systemd-kexec.service: name: systemd-kexec.service source: systemd state: inactive status: static systemd-localed.service: name: systemd-localed.service source: systemd state: inactive status: static systemd-logind.service: name: systemd-logind.service source: systemd state: running status: static systemd-machine-id-commit.service: name: systemd-machine-id-commit.service source: systemd state: stopped status: static systemd-modules-load.service: name: systemd-modules-load.service source: systemd state: stopped status: static systemd-network-generator.service: name: systemd-network-generator.service source: systemd state: stopped status: enabled systemd-pcrfs-root.service: name: systemd-pcrfs-root.service source: systemd state: inactive status: static systemd-pcrfs@.service: name: systemd-pcrfs@.service source: systemd state: unknown status: static systemd-pcrmachine.service: name: systemd-pcrmachine.service source: systemd state: stopped status: static systemd-pcrphase-initrd.service: name: systemd-pcrphase-initrd.service source: systemd state: stopped status: static systemd-pcrphase-sysinit.service: name: systemd-pcrphase-sysinit.service source: systemd state: stopped status: static systemd-pcrphase.service: name: systemd-pcrphase.service source: systemd state: stopped status: static systemd-poweroff.service: name: systemd-poweroff.service source: systemd state: inactive status: static systemd-pstore.service: name: 
systemd-pstore.service source: systemd state: stopped status: enabled systemd-quotacheck.service: name: systemd-quotacheck.service source: systemd state: stopped status: static systemd-random-seed.service: name: systemd-random-seed.service source: systemd state: stopped status: static systemd-reboot.service: name: systemd-reboot.service source: systemd state: inactive status: static systemd-remount-fs.service: name: systemd-remount-fs.service source: systemd state: stopped status: enabled-runtime systemd-repart.service: name: systemd-repart.service source: systemd state: stopped status: masked systemd-rfkill.service: name: systemd-rfkill.service source: systemd state: stopped status: static systemd-suspend-then-hibernate.service: name: systemd-suspend-then-hibernate.service source: systemd state: inactive status: static systemd-suspend.service: name: systemd-suspend.service source: systemd state: inactive status: static systemd-sysctl.service: name: systemd-sysctl.service source: systemd state: stopped status: static systemd-sysext.service: name: systemd-sysext.service source: systemd state: stopped status: disabled systemd-sysupdate-reboot.service: name: systemd-sysupdate-reboot.service source: systemd state: inactive status: indirect systemd-sysupdate.service: name: systemd-sysupdate.service source: systemd state: inactive status: indirect systemd-sysusers.service: name: systemd-sysusers.service source: systemd state: stopped status: static systemd-timedated.service: name: systemd-timedated.service source: systemd state: inactive status: static systemd-timesyncd.service: name: systemd-timesyncd.service source: systemd state: stopped status: not-found systemd-tmpfiles-clean.service: name: systemd-tmpfiles-clean.service source: systemd state: stopped status: static systemd-tmpfiles-setup-dev.service: name: systemd-tmpfiles-setup-dev.service source: systemd state: stopped status: static systemd-tmpfiles-setup.service: name: systemd-tmpfiles-setup.service source: systemd state: stopped status: static systemd-tmpfiles.service: name: systemd-tmpfiles.service source: systemd state: stopped status: not-found systemd-udev-settle.service: name: systemd-udev-settle.service source: systemd state: stopped status: static systemd-udev-trigger.service: name: systemd-udev-trigger.service source: systemd state: stopped status: static systemd-udevd.service: name: systemd-udevd.service source: systemd state: running status: static systemd-update-done.service: name: systemd-update-done.service source: systemd state: stopped status: static systemd-update-utmp-runlevel.service: name: systemd-update-utmp-runlevel.service source: systemd state: stopped status: static systemd-update-utmp.service: name: systemd-update-utmp.service source: systemd state: stopped status: static systemd-user-sessions.service: name: systemd-user-sessions.service source: systemd state: stopped status: static systemd-vconsole-setup.service: name: systemd-vconsole-setup.service source: systemd state: stopped status: static systemd-volatile-root.service: name: systemd-volatile-root.service source: systemd state: inactive status: static systemd-zram-setup@.service: name: systemd-zram-setup@.service source: systemd state: unknown status: static teamd@.service: name: teamd@.service source: systemd state: unknown status: static unbound-anchor.service: name: unbound-anchor.service source: systemd state: stopped status: static user-runtime-dir@.service: name: user-runtime-dir@.service source: systemd state: unknown status: static 
user-runtime-dir@0.service: name: user-runtime-dir@0.service source: systemd state: stopped status: active user-runtime-dir@1000.service: name: user-runtime-dir@1000.service source: systemd state: stopped status: active user@.service: name: user@.service source: systemd state: unknown status: static user@0.service: name: user@0.service source: systemd state: running status: active user@1000.service: name: user@1000.service source: systemd state: running status: active vgauthd.service: name: vgauthd.service source: systemd state: stopped status: enabled vmtoolsd.service: name: vmtoolsd.service source: systemd state: stopped status: enabled wait-for-primary-ip.service: name: wait-for-primary-ip.service source: systemd state: stopped status: enabled unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.180 ansible_port: 22 ansible_python_interpreter: auto ansible_user: core cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 host_id: d19710e37f7b2620eb9f1bc9cfdfc06732b1f0c31221781941dd4533 interface_ip: 38.102.83.180 label: coreos-crc-extracted-2-39-0-3xl private_ipv4: 38.102.83.180 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.180 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul_log_collection: true zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 88dd6c905f2746688a8f680e3012c758 build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: 8637980fa8664cc3a54deb4b258e06d7 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8 executor: hostname: 
ze01.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-local_build-index_deploy jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: 
zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: f6ed2f2d118884a075895bbf954ff6000e540430 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: 
github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 2ff5b96b6254418d20a509188eea72ab2c77839c name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 95aa63de3182faad63a69301d101debad3efc936 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 63860ee1375c38462801e8341a7f18335169f94c name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096 name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71 name: openstack-k8s-operators/openstack-operator required: true 
short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 381c86678f470a5590d19274a2eb914e95b81bb7 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '1' zuul_execution_trusted: 'False' zuul_log_collection: true zuul_success: 'False' zuul_will_retry: 'False' inventory_dir: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1 inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/post_playbook_1/inventory.yaml inventory_hostname: controller inventory_hostname_short: controller logfiles_dest_dir: /home/zuul/ci-framework-data/logs/2025-10-13_00-23 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f100786d-5d49-4bb6-bacc-f81c832a6dc3 host_id: ff62aecd09b85709a233d3330c1581c31f2fa23cd3c1cbc3ffcedd62 interface_ip: 38.102.83.214 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.214 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.214 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__00f35d1f62e54e7243096aca394c8bcbb8907683 openstack_namespace: openstack param_dir: changed: false failed: false stat: atime: 1760314867.5046456 attr_flags: '' attributes: [] block_size: 4096 blocks: 0 charset: binary ctime: 1760314872.6027744 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 146841209 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1760314872.6027744 nlink: 2 path: /home/zuul/ci-framework-data/artifacts/parameters pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 120 uid: 1000 version: '488734240' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true play_hosts: *id002 playbook_dir: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true role_name: artifacts role_names: *id003 role_path: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/roles/artifacts 
role_uuid: fa163ec2-ffbe-222f-b926-00000000001a scenario: local_build-index_deploy unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.214 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f100786d-5d49-4bb6-bacc-f81c832a6dc3 host_id: ff62aecd09b85709a233d3330c1581c31f2fa23cd3c1cbc3ffcedd62 interface_ip: 38.102.83.214 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.214 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.214 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul_log_collection: true zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 88dd6c905f2746688a8f680e3012c758 build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: 8637980fa8664cc3a54deb4b258e06d7 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8 executor: hostname: ze01.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: 
service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-local_build-index_deploy jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: 
untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: f6ed2f2d118884a075895bbf954ff6000e540430 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 2ff5b96b6254418d20a509188eea72ab2c77839c name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: 
src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 95aa63de3182faad63a69301d101debad3efc936 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 63860ee1375c38462801e8341a7f18335169f94c name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096 name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch 
commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 381c86678f470a5590d19274a2eb914e95b81bb7 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_change_list: - service-telemetry-operator zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '1' zuul_execution_trusted: 'False' zuul_log_collection: true zuul_success: 'False' zuul_will_retry: 'False' home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible-facts.yml0000644000175000017500000004703415073043167025617 0ustar zuulzuul_ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.214 all_ipv6_addresses: - fe80::f816:3eff:fec1:afa3 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-10-13T00:10:11Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. 
hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f hardware_offload_type: null hints: '' id: 6c016c0a-2ac9-4869-a5a8-eab3fa182c1c ip_allocation: immediate mac_address: fa:16:3e:4e:a0:95 name: crc-c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-10-13T00:10:11Z' crc_ci_bootstrap_network_name: zuul-ci-net-88dd6c90 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:a7:cc:09 mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:4e:a0:95 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:1d:a0:a2 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:bb:8f:d3 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:30:89:5a mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:28Z' description: '' dns_domain: '' id: d01062d3-7b21-4be9-857c-4aa990cef4db ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-88dd6c90 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-10-13T00:09:28Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-10-13T00:09:35Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.199 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: cf00ba68-55fa-4056-9b58-bb065c0601bb name: zuul-ci-subnet-router-88dd6c90 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-10-13T00:09:37Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-10-13T00:09:32Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 80ddca6b-23d7-408f-92fe-2a2eb5e0b13f ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-88dd6c90 network_id: d01062d3-7b21-4be9-857c-4aa990cef4db project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-10-13T00:09:32Z' 
crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-88dd6c90 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-88dd6c90 date_time: date: '2025-10-13' day: '13' epoch: '1760314997' epoch_int: '1760314997' hour: '00' iso8601: '2025-10-13T00:23:17Z' iso8601_basic: 20251013T002317274459 iso8601_basic_short: 20251013T002317 iso8601_micro: '2025-10-13T00:23:17.274459Z' minute: '23' month: '10' second: '17' time: 00:23:17 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Monday weekday_number: '1' weeknumber: '41' year: '2025' default_ipv4: address: 38.102.83.214 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:c1:af:a3 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-10-13-00-00-56-00 vda1: - 9839e2e1-98a2-4594-b609-79d514deb0a3 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-10-13-00-00-56-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '0' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - 9839e2e1-98a2-4594-b609-79d514deb0a3 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 41650 22 SSH_CONNECTION: 38.102.83.114 41650 38.102.83.214 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] 
hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.214 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fec1:afa3 prefix: '64' scope: link macaddress: fa:16:3e:c1:af:a3 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 interfaces: - lo - eth0 is_chroot: false iscsi_iqn: '' kernel: 5.14.0-621.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] 
tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.01 1m: 0.08 5m: 0.02 locally_reachable_ips: ipv4: - 38.102.83.214 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fec1:afa3 lsb: {} lvm: N/A machine: x86_64 machine_id: a1727ec20198bc6caf436a6e13c4ff5e memfree_mb: 7266 memory_mb: nocache: free: 7417 used: 263 real: free: 7266 total: 7680 used: 414 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7680 module_setup: true mounts: - block_available: 20378918 block_size: 4096 block_total: 20954875 block_used: 575957 device: /dev/vda1 fstype: xfs inode_available: 41888289 inode_total: 41942512 inode_used: 54223 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83472048128 size_total: 85831168000 uuid: 9839e2e1-98a2-4594-b609-79d514deb0a3 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.2.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 23 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 23 - final - 0 python_version: 3.9.23 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxwvvCYwnIDtxKxVxyDCUXhYuWEo+WsGS1jEd+Im13VpWuXa7IQrDvjmuO0jn8/KspLpldlXZAyvPIi9+nNvkk= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIB1/unp9+ffn2cxr1RyLKXm2uZfT+tLfIHwoS/yhV9RG ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtwQO/sn8zPSCivURPoL3DNUpFgI+Y/GknmWIW+/QsvlCk4sBWYiqOXubpbETP/ZuHnkt6w69huALW3iVln/6SdW9iz2mhr8+AHVAee6i3GRdpOWbUDuatQDsdRX3GWxhJ3iR4Q2CrLL9cuJIayVmHepeTrUt2AaPBwcRw7Or+VinGX/9nIUQRguvXHv3VeRUX003jI5B9xUO/6vZ99+ClMMpZPbhLqdLZnuKoLA9loqq6szVShReR3fCZNDH8FKZzjIFfFaj9uDgDfIB3iBKtQdr0HfSSF8CQ2A6o/P43FG9/w7Is3QQidH997QhMNrRNzbrNvgA8vgwi6qIkjFwYBO0O9VnlS1Fux4NG570chg5FmrtGWKGKAHxWuCm4zLuUAJWzw/gxVcPemOJlmIxbGIo/YMT0VgPQzbjFTxGehUhba1ncNNDyH8Cu7FHUbuX6pr6RWksUx+dixeBtFBjGlUg44pJZ+4I9XrHXTwLpBs3GXSUxi0gkQT182Xt8jyE= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 424 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - service-telemetry-operator home/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/0000755000175000017500000000000015073043170025267 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ens3.nmconnection0000644000175000017500000000026215073043170030553 0ustar zuulzuul[connection] id=ens3 uuid=19ed8f31-b265-473d-970f-0d7035ad4e30 type=ethernet interface-name=ens3 [ethernet] [ipv4] method=auto [ipv6] addr-gen-mode=eui64 method=auto [proxy] ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ci-private-network.nmconnectionhome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ci-private-network.nmconnectio0000644000175000017500000000051315073043170033256 0ustar zuulzuul[connection] id=ci-private-network uuid=81228806-36f6-5b59-8f99-53f739e07b90 type=ethernet autoconnect=true interface-name=eth1 [ethernet] mac-address=fa:16:3e:a7:cc:09 mtu=1500 [ipv4] method=manual addresses=192.168.122.11/24 never-default=true gateway=192.168.122.1 [ipv6] addr-gen-mode=stable-privacy method=disabled [proxy] home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/0000755000175000017500000000000015073043170024365 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean.repo.md50000644000175000017500000000004115073043170027524 0ustar zuulzuulc4b77291aeca5591ac860bd4127cec2f home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-appstream.repo0000644000175000017500000000031615073043170032642 0ustar zuulzuul [repo-setup-centos-appstream] name=repo-setup-centos-appstream baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/AppStream/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-baseos.repo0000644000175000017500000000030415073043170032117 0ustar zuulzuul [repo-setup-centos-baseos] name=repo-setup-centos-baseos baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/BaseOS/$basearch/os/ gpgcheck=0 enabled=1 ././@LongLink0000644000000000000000000000015100000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-highavailability.repohome/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-highavailability.0000644000175000017500000000034215073043170033271 0ustar zuulzuul [repo-setup-centos-highavailability] name=repo-setup-centos-highavailability baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/HighAvailability/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-powertools.repo0000644000175000017500000000031115073043170033056 0ustar zuulzuul [repo-setup-centos-powertools] name=repo-setup-centos-powertools baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/CRB/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean-antelope-testing.repo0000644000175000017500000000316315073043170032330 0ustar zuulzuul[delorean-antelope-testing] name=dlrn-antelope-testing baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [delorean-antelope-build-deps] name=dlrn-antelope-build-deps baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/build-deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-rabbitmq] name=centos9-rabbitmq baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/messaging/$basearch/rabbitmq-38/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-storage] name=centos9-storage baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/storage/$basearch/ceph-reef/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-opstools] name=centos9-opstools baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/opstools/$basearch/collectd-5/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-nfv-ovs] name=NFV SIG OpenvSwitch baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/nfv/$basearch/openvswitch-2/ gpgcheck=0 enabled=1 module_hotfixes=1 # epel is required for Ceph Reef [epel-low-priority] name=Extra Packages for Enterprise Linux $releasever - $basearch metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir enabled=1 gpgcheck=0 countme=1 priority=100 includepkgs=libarrow*,parquet*,python3-asyncssh,re2,python3-grpcio,grpc*,abseil*,thrift* home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean.repo0000644000175000017500000001335315073043170027052 0ustar zuulzuul[delorean-component-barbican] name=delorean-openstack-barbican-42b4c41831408a8e323fec3c8983b5c793b64874 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/barbican/42/b4/42b4c41831408a8e323fec3c8983b5c793b64874_08052e9d enabled=1 gpgcheck=0 priority=1 [delorean-component-baremetal] name=delorean-python-glean-10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/baremetal/10/df/10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7_36137eb3 enabled=1 gpgcheck=0 priority=1 [delorean-component-cinder] name=delorean-openstack-cinder-1c00d6490d88e436f26efb71f2ac96e75252e97c 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cinder/1c/00/1c00d6490d88e436f26efb71f2ac96e75252e97c_f716f000 enabled=1 gpgcheck=0 priority=1 [delorean-component-clients] name=delorean-python-stevedore-c4acc5639fd2329372142e39464fcca0209b0018 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/clients/c4/ac/c4acc5639fd2329372142e39464fcca0209b0018_d3ef8337 enabled=1 gpgcheck=0 priority=1 [delorean-component-cloudops] name=delorean-python-observabilityclient-2f31846d73c044740ccaaa4204720f0b94d64145 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cloudops/2f/31/2f31846d73c044740ccaaa4204720f0b94d64145_c2222caa enabled=1 gpgcheck=0 priority=1 [delorean-component-common] name=delorean-diskimage-builder-7d793e664cf892461c5547a2776e4b1834d0396b baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/common/7d/79/7d793e664cf892461c5547a2776e4b1834d0396b_bf6d4aba enabled=1 gpgcheck=0 priority=1 [delorean-component-compute] name=delorean-openstack-nova-6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/compute/6f/8d/6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e_dc05b899 enabled=1 gpgcheck=0 priority=1 [delorean-component-designate] name=delorean-python-designate-tests-tempest-347fdbc9b4595a10b726526b3c0b5928e5b7fcf2 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/designate/34/7f/347fdbc9b4595a10b726526b3c0b5928e5b7fcf2_3fd39337 enabled=1 gpgcheck=0 priority=1 [delorean-component-glance] name=delorean-openstack-glance-1fd12c29b339f30fe823e2b5beba14b5f241e52a baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/glance/1f/d1/1fd12c29b339f30fe823e2b5beba14b5f241e52a_0d693729 enabled=1 gpgcheck=0 priority=1 [delorean-component-keystone] name=delorean-openstack-keystone-e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/keystone/e4/b4/e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7_264c03cc enabled=1 gpgcheck=0 priority=1 [delorean-component-manila] name=delorean-openstack-manila-3c01b7181572c95dac462eb19c3121e36cb0fe95 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/manila/3c/01/3c01b7181572c95dac462eb19c3121e36cb0fe95_912dfd18 enabled=1 gpgcheck=0 priority=1 [delorean-component-network] name=delorean-python-vmware-nsxlib-458234972d1428ac92bbeff26511edfdc49b6b2f baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/network/45/82/458234972d1428ac92bbeff26511edfdc49b6b2f_1bca6328 enabled=1 gpgcheck=0 priority=1 [delorean-component-octavia] name=delorean-openstack-octavia-ba397f07a7331190208c93368ee23826ac4e2707 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/octavia/ba/39/ba397f07a7331190208c93368ee23826ac4e2707_9d6e596a enabled=1 gpgcheck=0 priority=1 [delorean-component-optimize] name=delorean-openstack-watcher-c014f81a8647287f6dcc339321c1256f5a2e82d5 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/optimize/c0/14/c014f81a8647287f6dcc339321c1256f5a2e82d5_bcbfdccc enabled=1 gpgcheck=0 priority=1 [delorean-component-podified] name=delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d301b5c6775e6022 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/podified/ff/70/ff70d03bf5bc0bb6f3540a02d301b5c6775e6022_dbfdef11 enabled=1 gpgcheck=0 priority=1 [delorean-component-puppet] name=delorean-puppet-ceph-91ba84bc002c318a7f961d084e992b2e88ffd5b3 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/puppet/91/ba/91ba84bc002c318a7f961d084e992b2e88ffd5b3_7cde1ad1 enabled=1 gpgcheck=0 priority=1 [delorean-component-swift] name=delorean-openstack-swift-dc98a8463506ac520c469adb0ef47d0f7753905a baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/swift/dc/98/dc98a8463506ac520c469adb0ef47d0f7753905a_9d02f069 enabled=1 gpgcheck=0 priority=1 [delorean-component-tempest] name=delorean-python-tempestconf-8515371b7cceebd4282e09f1d8f0cc842df82855 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/tempest/85/15/8515371b7cceebd4282e09f1d8f0cc842df82855_a1e336c7 enabled=1 gpgcheck=0 priority=1 [delorean-component-ui] name=delorean-openstack-heat-ui-013accbfd179753bc3f0d1f4e5bed07a4fd9f771 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/ui/01/3a/013accbfd179753bc3f0d1f4e5bed07a4fd9f771_0c88e467 enabled=1 gpgcheck=0 priority=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci-env/0000755000175000017500000000000015073043170023524 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/ci-env/networking-info.yml0000644000175000017500000000231215073043170027365 0ustar zuulzuulcrc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:a7:cc:09 mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:4e:a0:95 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:1d:a0:a2 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:bb:8f:d3 mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:30:89:5a mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/0000755000175000017500000000000015073042756023500 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/0000755000175000017500000000000015073042756027533 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/0000755000175000017500000000000015073043253030651 5ustar zuulzuul././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000173115073042760033436 0ustar 
zuulzuul--- - name: Debug make_keystone_cleanup_env when: make_keystone_cleanup_env is defined ansible.builtin.debug: var: make_keystone_cleanup_env - name: Debug make_keystone_cleanup_params when: make_keystone_cleanup_params is defined ansible.builtin.debug: var: make_keystone_cleanup_params - name: Run keystone_cleanup retries: "{{ make_keystone_cleanup_retries | default(omit) }}" delay: "{{ make_keystone_cleanup_delay | default(omit) }}" until: "{{ make_keystone_cleanup_until | default(true) }}" register: "make_keystone_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_cleanup" dry_run: "{{ make_keystone_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_cleanup_env|default({})), **(make_keystone_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wait_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wai0000644000175000017500000000173115073042760033362 0ustar zuulzuul--- - name: Debug make_edpm_wait_deploy_env when: make_edpm_wait_deploy_env is defined ansible.builtin.debug: var: make_edpm_wait_deploy_env - name: Debug make_edpm_wait_deploy_params when: make_edpm_wait_deploy_params is defined ansible.builtin.debug: var: make_edpm_wait_deploy_params - name: Run edpm_wait_deploy retries: "{{ make_edpm_wait_deploy_retries | default(omit) }}" delay: "{{ make_edpm_wait_deploy_delay | default(omit) }}" until: "{{ make_edpm_wait_deploy_until | default(true) }}" register: "make_edpm_wait_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_wait_deploy" dry_run: "{{ make_edpm_wait_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_wait_deploy_env|default({})), **(make_edpm_wait_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_register_dns.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_reg0000644000175000017500000000175015073042760033360 0ustar zuulzuul--- - name: Debug make_edpm_register_dns_env when: make_edpm_register_dns_env is defined ansible.builtin.debug: var: make_edpm_register_dns_env - name: Debug make_edpm_register_dns_params when: make_edpm_register_dns_params is defined ansible.builtin.debug: var: make_edpm_register_dns_params - name: Run edpm_register_dns retries: "{{ make_edpm_register_dns_retries | default(omit) }}" delay: "{{ make_edpm_register_dns_delay | default(omit) }}" until: "{{ make_edpm_register_dns_until | default(true) }}" register: "make_edpm_register_dns_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_register_dns" dry_run: "{{ make_edpm_register_dns_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_register_dns_env|default({})), **(make_edpm_register_dns_params|default({}))) }}" 
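Each of the generated task files above follows the same pattern: debug the optional make_<target>_env and make_<target>_params dicts if they are defined, then run the matching make target inside the install_yamls checkout through cifmw.general.ci_script, with optional retries/delay/until overrides and the result registered as make_<target>_status. A minimal sketch of how a playbook might consume one of these wrappers is shown below; the role name install_yamls_makes and the include_role/tasks_from mechanism are assumptions based on the directory layout of this archive, and the host and variable values are purely illustrative.

- hosts: controller
  gather_facts: false
  tasks:
    - name: Run "make edpm_register_dns" via the generated wrapper (illustrative only)
      ansible.builtin.include_role:
        name: install_yamls_makes        # assumed role name, taken from the roles/ path above
        tasks_from: make_edpm_register_dns.yml
      vars:
        # Hypothetical values; the wrapper forwards env/params to cifmw.general.ci_script as extra_args.
        make_edpm_register_dns_env:
          NETWORK_ISOLATION: "true"
        make_edpm_register_dns_retries: 3
        make_edpm_register_dns_delay: 10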
././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_nova_discover_hosts.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_nov0000644000175000017500000000212115073042760033376 0ustar zuulzuul--- - name: Debug make_edpm_nova_discover_hosts_env when: make_edpm_nova_discover_hosts_env is defined ansible.builtin.debug: var: make_edpm_nova_discover_hosts_env - name: Debug make_edpm_nova_discover_hosts_params when: make_edpm_nova_discover_hosts_params is defined ansible.builtin.debug: var: make_edpm_nova_discover_hosts_params - name: Run edpm_nova_discover_hosts retries: "{{ make_edpm_nova_discover_hosts_retries | default(omit) }}" delay: "{{ make_edpm_nova_discover_hosts_delay | default(omit) }}" until: "{{ make_edpm_nova_discover_hosts_until | default(true) }}" register: "make_edpm_nova_discover_hosts_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_nova_discover_hosts" dry_run: "{{ make_edpm_nova_discover_hosts_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_nova_discover_hosts_env|default({})), **(make_edpm_nova_discover_hosts_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_crds.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315073042760033416 0ustar zuulzuul--- - name: Debug make_openstack_crds_env when: make_openstack_crds_env is defined ansible.builtin.debug: var: make_openstack_crds_env - name: Debug make_openstack_crds_params when: make_openstack_crds_params is defined ansible.builtin.debug: var: make_openstack_crds_params - name: Run openstack_crds retries: "{{ make_openstack_crds_retries | default(omit) }}" delay: "{{ make_openstack_crds_delay | default(omit) }}" until: "{{ make_openstack_crds_until | default(true) }}" register: "make_openstack_crds_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_crds" dry_run: "{{ make_openstack_crds_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_crds_env|default({})), **(make_openstack_crds_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_crds_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000206315073042760033410 0ustar zuulzuul--- - name: Debug make_openstack_crds_cleanup_env when: make_openstack_crds_cleanup_env is defined ansible.builtin.debug: var: make_openstack_crds_cleanup_env - name: Debug make_openstack_crds_cleanup_params when: make_openstack_crds_cleanup_params is defined ansible.builtin.debug: var: make_openstack_crds_cleanup_params - name: Run openstack_crds_cleanup retries: "{{ make_openstack_crds_cleanup_retries | default(omit) }}" delay: "{{ make_openstack_crds_cleanup_delay | default(omit) }}" until: "{{ 
make_openstack_crds_cleanup_until | default(true) }}" register: "make_openstack_crds_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_crds_cleanup" dry_run: "{{ make_openstack_crds_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_crds_cleanup_env|default({})), **(make_openstack_crds_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000017300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000215715073042760033355 0ustar zuulzuul--- - name: Debug make_edpm_deploy_networker_prep_env when: make_edpm_deploy_networker_prep_env is defined ansible.builtin.debug: var: make_edpm_deploy_networker_prep_env - name: Debug make_edpm_deploy_networker_prep_params when: make_edpm_deploy_networker_prep_params is defined ansible.builtin.debug: var: make_edpm_deploy_networker_prep_params - name: Run edpm_deploy_networker_prep retries: "{{ make_edpm_deploy_networker_prep_retries | default(omit) }}" delay: "{{ make_edpm_deploy_networker_prep_delay | default(omit) }}" until: "{{ make_edpm_deploy_networker_prep_until | default(true) }}" register: "make_edpm_deploy_networker_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_networker_prep" dry_run: "{{ make_edpm_deploy_networker_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_networker_prep_env|default({})), **(make_edpm_deploy_networker_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000017600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000223415073042760033351 0ustar zuulzuul--- - name: Debug make_edpm_deploy_networker_cleanup_env when: make_edpm_deploy_networker_cleanup_env is defined ansible.builtin.debug: var: make_edpm_deploy_networker_cleanup_env - name: Debug make_edpm_deploy_networker_cleanup_params when: make_edpm_deploy_networker_cleanup_params is defined ansible.builtin.debug: var: make_edpm_deploy_networker_cleanup_params - name: Run edpm_deploy_networker_cleanup retries: "{{ make_edpm_deploy_networker_cleanup_retries | default(omit) }}" delay: "{{ make_edpm_deploy_networker_cleanup_delay | default(omit) }}" until: "{{ make_edpm_deploy_networker_cleanup_until | default(true) }}" register: "make_edpm_deploy_networker_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_networker_cleanup" dry_run: "{{ make_edpm_deploy_networker_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_networker_cleanup_env|default({})), **(make_edpm_deploy_networker_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000204415073042760033350 0ustar zuulzuul--- - name: Debug make_edpm_deploy_networker_env when: make_edpm_deploy_networker_env is defined ansible.builtin.debug: var: make_edpm_deploy_networker_env - name: Debug make_edpm_deploy_networker_params when: make_edpm_deploy_networker_params is defined ansible.builtin.debug: var: make_edpm_deploy_networker_params - name: Run edpm_deploy_networker retries: "{{ make_edpm_deploy_networker_retries | default(omit) }}" delay: "{{ make_edpm_deploy_networker_delay | default(omit) }}" until: "{{ make_edpm_deploy_networker_until | default(true) }}" register: "make_edpm_deploy_networker_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_networker" dry_run: "{{ make_edpm_deploy_networker_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_networker_env|default({})), **(make_edpm_deploy_networker_params|default({}))) }}" ././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_pr0000644000175000017500000000157715073042760033405 0ustar zuulzuul--- - name: Debug make_infra_prep_env when: make_infra_prep_env is defined ansible.builtin.debug: var: make_infra_prep_env - name: Debug make_infra_prep_params when: make_infra_prep_params is defined ansible.builtin.debug: var: make_infra_prep_params - name: Run infra_prep retries: "{{ make_infra_prep_retries | default(omit) }}" delay: "{{ make_infra_prep_delay | default(omit) }}" until: "{{ make_infra_prep_until | default(true) }}" register: "make_infra_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make infra_prep" dry_run: "{{ make_infra_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_infra_prep_env|default({})), **(make_infra_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra.ym0000644000175000017500000000146415073042760033323 0ustar zuulzuul--- - name: Debug make_infra_env when: make_infra_env is defined ansible.builtin.debug: var: make_infra_env - name: Debug make_infra_params when: make_infra_params is defined ansible.builtin.debug: var: make_infra_params - name: Run infra retries: "{{ make_infra_retries | default(omit) }}" delay: "{{ make_infra_delay | default(omit) }}" until: "{{ make_infra_until | default(true) }}" register: "make_infra_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make infra" dry_run: "{{ make_infra_dryrun|default(false)|bool }}" extra_args: "{{ 
dict((make_infra_env|default({})), **(make_infra_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_cl0000644000175000017500000000165415073042760033356 0ustar zuulzuul--- - name: Debug make_infra_cleanup_env when: make_infra_cleanup_env is defined ansible.builtin.debug: var: make_infra_cleanup_env - name: Debug make_infra_cleanup_params when: make_infra_cleanup_params is defined ansible.builtin.debug: var: make_infra_cleanup_params - name: Run infra_cleanup retries: "{{ make_infra_cleanup_retries | default(omit) }}" delay: "{{ make_infra_cleanup_delay | default(omit) }}" until: "{{ make_infra_cleanup_until | default(true) }}" register: "make_infra_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make infra_cleanup" dry_run: "{{ make_infra_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_infra_cleanup_env|default({})), **(make_infra_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_depl0000644000175000017500000000171215073042760033364 0ustar zuulzuul--- - name: Debug make_dns_deploy_prep_env when: make_dns_deploy_prep_env is defined ansible.builtin.debug: var: make_dns_deploy_prep_env - name: Debug make_dns_deploy_prep_params when: make_dns_deploy_prep_params is defined ansible.builtin.debug: var: make_dns_deploy_prep_params - name: Run dns_deploy_prep retries: "{{ make_dns_deploy_prep_retries | default(omit) }}" delay: "{{ make_dns_deploy_prep_delay | default(omit) }}" until: "{{ make_dns_deploy_prep_until | default(true) }}" register: "make_dns_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make dns_deploy_prep" dry_run: "{{ make_dns_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_dns_deploy_prep_env|default({})), **(make_dns_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_depl0000644000175000017500000000157715073042760033375 0ustar zuulzuul--- - name: Debug make_dns_deploy_env when: make_dns_deploy_env is defined ansible.builtin.debug: var: make_dns_deploy_env - name: Debug make_dns_deploy_params when: make_dns_deploy_params is defined ansible.builtin.debug: var: make_dns_deploy_params - name: Run dns_deploy retries: "{{ make_dns_deploy_retries | default(omit) }}" delay: "{{ make_dns_deploy_delay | default(omit) }}" until: "{{ make_dns_deploy_until | default(true) }}" register: "make_dns_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make dns_deploy" dry_run: "{{ make_dns_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_dns_deploy_env|default({})), **(make_dns_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_depl0000644000175000017500000000176715073042760033376 0ustar zuulzuul--- - name: Debug make_dns_deploy_cleanup_env when: make_dns_deploy_cleanup_env is defined ansible.builtin.debug: var: make_dns_deploy_cleanup_env - name: Debug make_dns_deploy_cleanup_params when: make_dns_deploy_cleanup_params is defined ansible.builtin.debug: var: make_dns_deploy_cleanup_params - name: Run dns_deploy_cleanup retries: "{{ make_dns_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_dns_deploy_cleanup_delay | default(omit) }}" until: "{{ make_dns_deploy_cleanup_until | default(true) }}" register: "make_dns_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make dns_deploy_cleanup" dry_run: "{{ make_dns_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_dns_deploy_cleanup_env|default({})), **(make_dns_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfi0000644000175000017500000000204415073042760033400 0ustar zuulzuul--- - name: Debug make_netconfig_deploy_prep_env when: make_netconfig_deploy_prep_env is defined ansible.builtin.debug: var: make_netconfig_deploy_prep_env - name: Debug make_netconfig_deploy_prep_params when: make_netconfig_deploy_prep_params is defined ansible.builtin.debug: var: make_netconfig_deploy_prep_params - name: Run netconfig_deploy_prep retries: "{{ make_netconfig_deploy_prep_retries | default(omit) }}" delay: "{{ make_netconfig_deploy_prep_delay | default(omit) }}" until: "{{ make_netconfig_deploy_prep_until | default(true) }}" register: "make_netconfig_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netconfig_deploy_prep" dry_run: "{{ make_netconfig_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netconfig_deploy_prep_env|default({})), **(make_netconfig_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfi0000644000175000017500000000173115073042760033402 0ustar zuulzuul--- - name: Debug make_netconfig_deploy_env when: make_netconfig_deploy_env is defined ansible.builtin.debug: var: make_netconfig_deploy_env - name: Debug make_netconfig_deploy_params when: make_netconfig_deploy_params is defined 
ansible.builtin.debug: var: make_netconfig_deploy_params - name: Run netconfig_deploy retries: "{{ make_netconfig_deploy_retries | default(omit) }}" delay: "{{ make_netconfig_deploy_delay | default(omit) }}" until: "{{ make_netconfig_deploy_until | default(true) }}" register: "make_netconfig_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netconfig_deploy" dry_run: "{{ make_netconfig_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netconfig_deploy_env|default({})), **(make_netconfig_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfi0000644000175000017500000000212115073042760033374 0ustar zuulzuul--- - name: Debug make_netconfig_deploy_cleanup_env when: make_netconfig_deploy_cleanup_env is defined ansible.builtin.debug: var: make_netconfig_deploy_cleanup_env - name: Debug make_netconfig_deploy_cleanup_params when: make_netconfig_deploy_cleanup_params is defined ansible.builtin.debug: var: make_netconfig_deploy_cleanup_params - name: Run netconfig_deploy_cleanup retries: "{{ make_netconfig_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_netconfig_deploy_cleanup_delay | default(omit) }}" until: "{{ make_netconfig_deploy_cleanup_until | default(true) }}" register: "make_netconfig_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netconfig_deploy_cleanup" dry_run: "{{ make_netconfig_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netconfig_deploy_cleanup_env|default({})), **(make_netconfig_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcache0000644000175000017500000000204415073042760033335 0ustar zuulzuul--- - name: Debug make_memcached_deploy_prep_env when: make_memcached_deploy_prep_env is defined ansible.builtin.debug: var: make_memcached_deploy_prep_env - name: Debug make_memcached_deploy_prep_params when: make_memcached_deploy_prep_params is defined ansible.builtin.debug: var: make_memcached_deploy_prep_params - name: Run memcached_deploy_prep retries: "{{ make_memcached_deploy_prep_retries | default(omit) }}" delay: "{{ make_memcached_deploy_prep_delay | default(omit) }}" until: "{{ make_memcached_deploy_prep_until | default(true) }}" register: "make_memcached_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make memcached_deploy_prep" dry_run: "{{ make_memcached_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_memcached_deploy_prep_env|default({})), **(make_memcached_deploy_prep_params|default({}))) }}" 
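The extra_args expression used by every wrapper, dict((make_<target>_env|default({})), **(make_<target>_params|default({}))), merges the two optional dictionaries, with _params keys overriding _env keys on collision. A small sketch of that merge with made-up values (the variable contents are not taken from this job):

- name: Illustrate the env/params merge used by the wrappers (example values only)
  vars:
    make_memcached_deploy_prep_env:
      NAMESPACE: openstack
      TIMEOUT: "300"
    make_memcached_deploy_prep_params:
      TIMEOUT: "600"
  ansible.builtin.debug:
    # Renders {'NAMESPACE': 'openstack', 'TIMEOUT': '600'}: the _params value wins for TIMEOUT.
    msg: "{{ dict((make_memcached_deploy_prep_env|default({})), **(make_memcached_deploy_prep_params|default({}))) }}"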
././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcache0000644000175000017500000000173115073042760033337 0ustar zuulzuul--- - name: Debug make_memcached_deploy_env when: make_memcached_deploy_env is defined ansible.builtin.debug: var: make_memcached_deploy_env - name: Debug make_memcached_deploy_params when: make_memcached_deploy_params is defined ansible.builtin.debug: var: make_memcached_deploy_params - name: Run memcached_deploy retries: "{{ make_memcached_deploy_retries | default(omit) }}" delay: "{{ make_memcached_deploy_delay | default(omit) }}" until: "{{ make_memcached_deploy_until | default(true) }}" register: "make_memcached_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make memcached_deploy" dry_run: "{{ make_memcached_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_memcached_deploy_env|default({})), **(make_memcached_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcache0000644000175000017500000000212115073042760033331 0ustar zuulzuul--- - name: Debug make_memcached_deploy_cleanup_env when: make_memcached_deploy_cleanup_env is defined ansible.builtin.debug: var: make_memcached_deploy_cleanup_env - name: Debug make_memcached_deploy_cleanup_params when: make_memcached_deploy_cleanup_params is defined ansible.builtin.debug: var: make_memcached_deploy_cleanup_params - name: Run memcached_deploy_cleanup retries: "{{ make_memcached_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_memcached_deploy_cleanup_delay | default(omit) }}" until: "{{ make_memcached_deploy_cleanup_until | default(true) }}" register: "make_memcached_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make memcached_deploy_cleanup" dry_run: "{{ make_memcached_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_memcached_deploy_cleanup_env|default({})), **(make_memcached_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000165415073042760033442 0ustar zuulzuul--- - name: Debug make_keystone_prep_env when: make_keystone_prep_env is defined ansible.builtin.debug: var: make_keystone_prep_env - name: Debug make_keystone_prep_params when: make_keystone_prep_params is defined ansible.builtin.debug: var: make_keystone_prep_params - name: Run keystone_prep retries: "{{ make_keystone_prep_retries | default(omit) }}" delay: "{{ make_keystone_prep_delay | default(omit) }}" until: "{{ make_keystone_prep_until | default(true) }}" register: 
"make_keystone_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_prep" dry_run: "{{ make_keystone_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_prep_env|default({})), **(make_keystone_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000154115073042760033435 0ustar zuulzuul--- - name: Debug make_keystone_env when: make_keystone_env is defined ansible.builtin.debug: var: make_keystone_env - name: Debug make_keystone_params when: make_keystone_params is defined ansible.builtin.debug: var: make_keystone_params - name: Run keystone retries: "{{ make_keystone_retries | default(omit) }}" delay: "{{ make_keystone_delay | default(omit) }}" until: "{{ make_keystone_until | default(true) }}" register: "make_keystone_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone" dry_run: "{{ make_keystone_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_env|default({})), **(make_keystone_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_operator_namespace.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_operator0000644000175000017500000000176715073042760033441 0ustar zuulzuul--- - name: Debug make_operator_namespace_env when: make_operator_namespace_env is defined ansible.builtin.debug: var: make_operator_namespace_env - name: Debug make_operator_namespace_params when: make_operator_namespace_params is defined ansible.builtin.debug: var: make_operator_namespace_params - name: Run operator_namespace retries: "{{ make_operator_namespace_retries | default(omit) }}" delay: "{{ make_operator_namespace_delay | default(omit) }}" until: "{{ make_operator_namespace_until | default(true) }}" register: "make_operator_namespace_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make operator_namespace" dry_run: "{{ make_operator_namespace_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_operator_namespace_env|default({})), **(make_operator_namespace_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespace.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespac0000644000175000017500000000156015073042760033364 0ustar zuulzuul--- - name: Debug make_namespace_env when: make_namespace_env is defined ansible.builtin.debug: var: make_namespace_env - name: Debug make_namespace_params when: make_namespace_params is defined ansible.builtin.debug: var: make_namespace_params - name: Run namespace retries: "{{ 
make_namespace_retries | default(omit) }}" delay: "{{ make_namespace_delay | default(omit) }}" until: "{{ make_namespace_until | default(true) }}" register: "make_namespace_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make namespace" dry_run: "{{ make_namespace_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_namespace_env|default({})), **(make_namespace_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespace_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespac0000644000175000017500000000175015073042760033365 0ustar zuulzuul--- - name: Debug make_namespace_cleanup_env when: make_namespace_cleanup_env is defined ansible.builtin.debug: var: make_namespace_cleanup_env - name: Debug make_namespace_cleanup_params when: make_namespace_cleanup_params is defined ansible.builtin.debug: var: make_namespace_cleanup_params - name: Run namespace_cleanup retries: "{{ make_namespace_cleanup_retries | default(omit) }}" delay: "{{ make_namespace_cleanup_delay | default(omit) }}" until: "{{ make_namespace_cleanup_until | default(true) }}" register: "make_namespace_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make namespace_cleanup" dry_run: "{{ make_namespace_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_namespace_cleanup_env|default({})), **(make_namespace_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input.ym0000644000175000017500000000146415073042760033363 0ustar zuulzuul--- - name: Debug make_input_env when: make_input_env is defined ansible.builtin.debug: var: make_input_env - name: Debug make_input_params when: make_input_params is defined ansible.builtin.debug: var: make_input_params - name: Run input retries: "{{ make_input_retries | default(omit) }}" delay: "{{ make_input_delay | default(omit) }}" until: "{{ make_input_until | default(true) }}" register: "make_input_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make input" dry_run: "{{ make_input_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_input_env|default({})), **(make_input_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input_cl0000644000175000017500000000165415073042760033416 0ustar zuulzuul--- - name: Debug make_input_cleanup_env when: make_input_cleanup_env is defined ansible.builtin.debug: var: make_input_cleanup_env - name: Debug make_input_cleanup_params when: make_input_cleanup_params is defined 
ansible.builtin.debug: var: make_input_cleanup_params - name: Run input_cleanup retries: "{{ make_input_cleanup_retries | default(omit) }}" delay: "{{ make_input_cleanup_delay | default(omit) }}" until: "{{ make_input_cleanup_until | default(true) }}" register: "make_input_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make input_cleanup" dry_run: "{{ make_input_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_input_cleanup_env|default({})), **(make_input_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_setup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_0000644000175000017500000000165415073042760033344 0ustar zuulzuul--- - name: Debug make_crc_bmo_setup_env when: make_crc_bmo_setup_env is defined ansible.builtin.debug: var: make_crc_bmo_setup_env - name: Debug make_crc_bmo_setup_params when: make_crc_bmo_setup_params is defined ansible.builtin.debug: var: make_crc_bmo_setup_params - name: Run crc_bmo_setup retries: "{{ make_crc_bmo_setup_retries | default(omit) }}" delay: "{{ make_crc_bmo_setup_delay | default(omit) }}" until: "{{ make_crc_bmo_setup_until | default(true) }}" register: "make_crc_bmo_setup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_bmo_setup" dry_run: "{{ make_crc_bmo_setup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_bmo_setup_env|default({})), **(make_crc_bmo_setup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_0000644000175000017500000000171215073042760033337 0ustar zuulzuul--- - name: Debug make_crc_bmo_cleanup_env when: make_crc_bmo_cleanup_env is defined ansible.builtin.debug: var: make_crc_bmo_cleanup_env - name: Debug make_crc_bmo_cleanup_params when: make_crc_bmo_cleanup_params is defined ansible.builtin.debug: var: make_crc_bmo_cleanup_params - name: Run crc_bmo_cleanup retries: "{{ make_crc_bmo_cleanup_retries | default(omit) }}" delay: "{{ make_crc_bmo_cleanup_delay | default(omit) }}" until: "{{ make_crc_bmo_cleanup_until | default(true) }}" register: "make_crc_bmo_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_bmo_cleanup" dry_run: "{{ make_crc_bmo_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_bmo_cleanup_env|default({})), **(make_crc_bmo_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315073042760033416 0ustar 
zuulzuul--- - name: Debug make_openstack_prep_env when: make_openstack_prep_env is defined ansible.builtin.debug: var: make_openstack_prep_env - name: Debug make_openstack_prep_params when: make_openstack_prep_params is defined ansible.builtin.debug: var: make_openstack_prep_params - name: Run openstack_prep retries: "{{ make_openstack_prep_retries | default(omit) }}" delay: "{{ make_openstack_prep_delay | default(omit) }}" until: "{{ make_openstack_prep_until | default(true) }}" register: "make_openstack_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_prep" dry_run: "{{ make_openstack_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_prep_env|default({})), **(make_openstack_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000156015073042760033411 0ustar zuulzuul--- - name: Debug make_openstack_env when: make_openstack_env is defined ansible.builtin.debug: var: make_openstack_env - name: Debug make_openstack_params when: make_openstack_params is defined ansible.builtin.debug: var: make_openstack_params - name: Run openstack retries: "{{ make_openstack_retries | default(omit) }}" delay: "{{ make_openstack_delay | default(omit) }}" until: "{{ make_openstack_until | default(true) }}" register: "make_openstack_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack" dry_run: "{{ make_openstack_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_env|default({})), **(make_openstack_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_wait.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315073042760033416 0ustar zuulzuul--- - name: Debug make_openstack_wait_env when: make_openstack_wait_env is defined ansible.builtin.debug: var: make_openstack_wait_env - name: Debug make_openstack_wait_params when: make_openstack_wait_params is defined ansible.builtin.debug: var: make_openstack_wait_params - name: Run openstack_wait retries: "{{ make_openstack_wait_retries | default(omit) }}" delay: "{{ make_openstack_wait_delay | default(omit) }}" until: "{{ make_openstack_wait_until | default(true) }}" register: "make_openstack_wait_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_wait" dry_run: "{{ make_openstack_wait_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_wait_env|default({})), **(make_openstack_wait_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_init.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315073042760033416 0ustar zuulzuul--- - name: Debug make_openstack_init_env when: make_openstack_init_env is defined ansible.builtin.debug: var: make_openstack_init_env - name: Debug make_openstack_init_params when: make_openstack_init_params is defined ansible.builtin.debug: var: make_openstack_init_params - name: Run openstack_init retries: "{{ make_openstack_init_retries | default(omit) }}" delay: "{{ make_openstack_init_delay | default(omit) }}" until: "{{ make_openstack_init_until | default(true) }}" register: "make_openstack_init_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_init" dry_run: "{{ make_openstack_init_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_init_env|default({})), **(make_openstack_init_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000175015073042760033412 0ustar zuulzuul--- - name: Debug make_openstack_cleanup_env when: make_openstack_cleanup_env is defined ansible.builtin.debug: var: make_openstack_cleanup_env - name: Debug make_openstack_cleanup_params when: make_openstack_cleanup_params is defined ansible.builtin.debug: var: make_openstack_cleanup_params - name: Run openstack_cleanup retries: "{{ make_openstack_cleanup_retries | default(omit) }}" delay: "{{ make_openstack_cleanup_delay | default(omit) }}" until: "{{ make_openstack_cleanup_until | default(true) }}" register: "make_openstack_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_cleanup" dry_run: "{{ make_openstack_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_cleanup_env|default({})), **(make_openstack_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_repo.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315073042760033416 0ustar zuulzuul--- - name: Debug make_openstack_repo_env when: make_openstack_repo_env is defined ansible.builtin.debug: var: make_openstack_repo_env - name: Debug make_openstack_repo_params when: make_openstack_repo_params is defined ansible.builtin.debug: var: make_openstack_repo_params - name: Run openstack_repo retries: "{{ make_openstack_repo_retries | default(omit) }}" delay: "{{ make_openstack_repo_delay | default(omit) }}" until: "{{ make_openstack_repo_until | default(true) }}" register: "make_openstack_repo_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_repo" dry_run: "{{ make_openstack_repo_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_repo_env|default({})), **(make_openstack_repo_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000204415073042760033407 0ustar zuulzuul--- - name: Debug make_openstack_deploy_prep_env when: make_openstack_deploy_prep_env is defined ansible.builtin.debug: var: make_openstack_deploy_prep_env - name: Debug make_openstack_deploy_prep_params when: make_openstack_deploy_prep_params is defined ansible.builtin.debug: var: make_openstack_deploy_prep_params - name: Run openstack_deploy_prep retries: "{{ make_openstack_deploy_prep_retries | default(omit) }}" delay: "{{ make_openstack_deploy_prep_delay | default(omit) }}" until: "{{ make_openstack_deploy_prep_until | default(true) }}" register: "make_openstack_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy_prep" dry_run: "{{ make_openstack_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_prep_env|default({})), **(make_openstack_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000173115073042760033411 0ustar zuulzuul--- - name: Debug make_openstack_deploy_env when: make_openstack_deploy_env is defined ansible.builtin.debug: var: make_openstack_deploy_env - name: Debug make_openstack_deploy_params when: make_openstack_deploy_params is defined ansible.builtin.debug: var: make_openstack_deploy_params - name: Run openstack_deploy retries: "{{ make_openstack_deploy_retries | default(omit) }}" delay: "{{ make_openstack_deploy_delay | default(omit) }}" until: "{{ make_openstack_deploy_until | default(true) }}" register: "make_openstack_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy" dry_run: "{{ make_openstack_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_env|default({})), **(make_openstack_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_wait_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000204415073042760033407 0ustar zuulzuul--- - name: Debug make_openstack_wait_deploy_env when: make_openstack_wait_deploy_env is defined ansible.builtin.debug: var: make_openstack_wait_deploy_env - name: Debug make_openstack_wait_deploy_params when: make_openstack_wait_deploy_params is defined 
ansible.builtin.debug: var: make_openstack_wait_deploy_params - name: Run openstack_wait_deploy retries: "{{ make_openstack_wait_deploy_retries | default(omit) }}" delay: "{{ make_openstack_wait_deploy_delay | default(omit) }}" until: "{{ make_openstack_wait_deploy_until | default(true) }}" register: "make_openstack_wait_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_wait_deploy" dry_run: "{{ make_openstack_wait_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_wait_deploy_env|default({})), **(make_openstack_wait_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000212115073042760033403 0ustar zuulzuul--- - name: Debug make_openstack_deploy_cleanup_env when: make_openstack_deploy_cleanup_env is defined ansible.builtin.debug: var: make_openstack_deploy_cleanup_env - name: Debug make_openstack_deploy_cleanup_params when: make_openstack_deploy_cleanup_params is defined ansible.builtin.debug: var: make_openstack_deploy_cleanup_params - name: Run openstack_deploy_cleanup retries: "{{ make_openstack_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_openstack_deploy_cleanup_delay | default(omit) }}" until: "{{ make_openstack_deploy_cleanup_until | default(true) }}" register: "make_openstack_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy_cleanup" dry_run: "{{ make_openstack_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_cleanup_env|default({})), **(make_openstack_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_update_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000202515073042760033406 0ustar zuulzuul--- - name: Debug make_openstack_update_run_env when: make_openstack_update_run_env is defined ansible.builtin.debug: var: make_openstack_update_run_env - name: Debug make_openstack_update_run_params when: make_openstack_update_run_params is defined ansible.builtin.debug: var: make_openstack_update_run_params - name: Run openstack_update_run retries: "{{ make_openstack_update_run_retries | default(omit) }}" delay: "{{ make_openstack_update_run_delay | default(omit) }}" until: "{{ make_openstack_update_run_until | default(true) }}" register: "make_openstack_update_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_update_run" dry_run: "{{ make_openstack_update_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_update_run_env|default({})), **(make_openstack_update_run_params|default({}))) }}" 
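Note that every wrapper's until defaults to true, so the retries and delay knobs only take effect when the caller overrides make_<target>_until with a condition evaluated against the registered make_<target>_status result. One hedged way a caller might gate the retry loop is sketched below; the role name, the success test, and the retry counts are illustrative assumptions, not taken from this job.

- name: Re-run "make openstack_update_run" until it reports success (illustrative only)
  ansible.builtin.include_role:
    name: install_yamls_makes          # assumed role name, as above
    tasks_from: make_openstack_update_run.yml
  vars:
    make_openstack_update_run_retries: 5
    make_openstack_update_run_delay: 30
    # The wrapper registers make_openstack_update_run_status; gate the loop on that result.
    make_openstack_update_run_until: "{{ make_openstack_update_run_status is success }}"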
././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_services.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_s0000644000175000017500000000171215073042760033400 0ustar zuulzuul--- - name: Debug make_update_services_env when: make_update_services_env is defined ansible.builtin.debug: var: make_update_services_env - name: Debug make_update_services_params when: make_update_services_params is defined ansible.builtin.debug: var: make_update_services_params - name: Run update_services retries: "{{ make_update_services_retries | default(omit) }}" delay: "{{ make_update_services_delay | default(omit) }}" until: "{{ make_update_services_until | default(true) }}" register: "make_update_services_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make update_services" dry_run: "{{ make_update_services_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_update_services_env|default({})), **(make_update_services_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_system.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_s0000644000175000017500000000165415073042760033405 0ustar zuulzuul--- - name: Debug make_update_system_env when: make_update_system_env is defined ansible.builtin.debug: var: make_update_system_env - name: Debug make_update_system_params when: make_update_system_params is defined ansible.builtin.debug: var: make_update_system_params - name: Run update_system retries: "{{ make_update_system_retries | default(omit) }}" delay: "{{ make_update_system_delay | default(omit) }}" until: "{{ make_update_system_until | default(true) }}" register: "make_update_system_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make update_system" dry_run: "{{ make_update_system_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_update_system_env|default({})), **(make_update_system_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_patch_version.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000210215073042760033402 0ustar zuulzuul--- - name: Debug make_openstack_patch_version_env when: make_openstack_patch_version_env is defined ansible.builtin.debug: var: make_openstack_patch_version_env - name: Debug make_openstack_patch_version_params when: make_openstack_patch_version_params is defined ansible.builtin.debug: var: make_openstack_patch_version_params - name: Run openstack_patch_version retries: "{{ make_openstack_patch_version_retries | default(omit) }}" delay: "{{ make_openstack_patch_version_delay | default(omit) }}" until: "{{ make_openstack_patch_version_until | default(true) }}" register: "make_openstack_patch_version_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_patch_version" dry_run: "{{ make_openstack_patch_version_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_patch_version_env|default({})), **(make_openstack_patch_version_params|default({}))) }}" ././@LongLink0000644000000000000000000000017200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_generate_keys.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000214015073042760033345 0ustar zuulzuul--- - name: Debug make_edpm_deploy_generate_keys_env when: make_edpm_deploy_generate_keys_env is defined ansible.builtin.debug: var: make_edpm_deploy_generate_keys_env - name: Debug make_edpm_deploy_generate_keys_params when: make_edpm_deploy_generate_keys_params is defined ansible.builtin.debug: var: make_edpm_deploy_generate_keys_params - name: Run edpm_deploy_generate_keys retries: "{{ make_edpm_deploy_generate_keys_retries | default(omit) }}" delay: "{{ make_edpm_deploy_generate_keys_delay | default(omit) }}" until: "{{ make_edpm_deploy_generate_keys_until | default(true) }}" register: "make_edpm_deploy_generate_keys_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_generate_keys" dry_run: "{{ make_edpm_deploy_generate_keys_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_generate_keys_env|default({})), **(make_edpm_deploy_generate_keys_params|default({}))) }}" ././@LongLink0000644000000000000000000000020000000000000011573 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_patch_ansible_runner_image.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_pat0000644000175000017500000000227215073042760033367 0ustar zuulzuul--- - name: Debug make_edpm_patch_ansible_runner_image_env when: make_edpm_patch_ansible_runner_image_env is defined ansible.builtin.debug: var: make_edpm_patch_ansible_runner_image_env - name: Debug make_edpm_patch_ansible_runner_image_params when: make_edpm_patch_ansible_runner_image_params is defined ansible.builtin.debug: var: make_edpm_patch_ansible_runner_image_params - name: Run edpm_patch_ansible_runner_image retries: "{{ make_edpm_patch_ansible_runner_image_retries | default(omit) }}" delay: "{{ make_edpm_patch_ansible_runner_image_delay | default(omit) }}" until: "{{ make_edpm_patch_ansible_runner_image_until | default(true) }}" register: "make_edpm_patch_ansible_runner_image_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_patch_ansible_runner_image" dry_run: "{{ make_edpm_patch_ansible_runner_image_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_patch_ansible_runner_image_env|default({})), **(make_edpm_patch_ansible_runner_image_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000173115073042760033352 0ustar zuulzuul--- - name: Debug make_edpm_deploy_prep_env when: make_edpm_deploy_prep_env is defined ansible.builtin.debug: var: make_edpm_deploy_prep_env - name: Debug make_edpm_deploy_prep_params when: make_edpm_deploy_prep_params is defined ansible.builtin.debug: var: make_edpm_deploy_prep_params - name: Run edpm_deploy_prep retries: "{{ make_edpm_deploy_prep_retries | default(omit) }}" delay: "{{ make_edpm_deploy_prep_delay | default(omit) }}" until: "{{ make_edpm_deploy_prep_until | default(true) }}" register: "make_edpm_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_prep" dry_run: "{{ make_edpm_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_prep_env|default({})), **(make_edpm_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000200615073042760033346 0ustar zuulzuul--- - name: Debug make_edpm_deploy_cleanup_env when: make_edpm_deploy_cleanup_env is defined ansible.builtin.debug: var: make_edpm_deploy_cleanup_env - name: Debug make_edpm_deploy_cleanup_params when: make_edpm_deploy_cleanup_params is defined ansible.builtin.debug: var: make_edpm_deploy_cleanup_params - name: Run edpm_deploy_cleanup retries: "{{ make_edpm_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_edpm_deploy_cleanup_delay | default(omit) }}" until: "{{ make_edpm_deploy_cleanup_until | default(true) }}" register: "make_edpm_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_cleanup" dry_run: "{{ make_edpm_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_cleanup_env|default({})), **(make_edpm_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000161615073042760033354 0ustar zuulzuul--- - name: Debug make_edpm_deploy_env when: make_edpm_deploy_env is defined ansible.builtin.debug: var: make_edpm_deploy_env - name: Debug make_edpm_deploy_params when: make_edpm_deploy_params is defined ansible.builtin.debug: var: make_edpm_deploy_params - name: Run edpm_deploy retries: "{{ make_edpm_deploy_retries | default(omit) }}" delay: "{{ make_edpm_deploy_delay | default(omit) }}" until: "{{ make_edpm_deploy_until | default(true) }}" register: "make_edpm_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy" dry_run: "{{ make_edpm_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_env|default({})), **(make_edpm_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_baremetal_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000215715073042760033355 0ustar zuulzuul--- - name: Debug make_edpm_deploy_baremetal_prep_env when: make_edpm_deploy_baremetal_prep_env is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_prep_env - name: Debug make_edpm_deploy_baremetal_prep_params when: make_edpm_deploy_baremetal_prep_params is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_prep_params - name: Run edpm_deploy_baremetal_prep retries: "{{ make_edpm_deploy_baremetal_prep_retries | default(omit) }}" delay: "{{ make_edpm_deploy_baremetal_prep_delay | default(omit) }}" until: "{{ make_edpm_deploy_baremetal_prep_until | default(true) }}" register: "make_edpm_deploy_baremetal_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_baremetal_prep" dry_run: "{{ make_edpm_deploy_baremetal_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_baremetal_prep_env|default({})), **(make_edpm_deploy_baremetal_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_baremetal.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000204415073042760033350 0ustar zuulzuul--- - name: Debug make_edpm_deploy_baremetal_env when: make_edpm_deploy_baremetal_env is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_env - name: Debug make_edpm_deploy_baremetal_params when: make_edpm_deploy_baremetal_params is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_params - name: Run edpm_deploy_baremetal retries: "{{ make_edpm_deploy_baremetal_retries | default(omit) }}" delay: "{{ make_edpm_deploy_baremetal_delay | default(omit) }}" until: "{{ make_edpm_deploy_baremetal_until | default(true) }}" register: "make_edpm_deploy_baremetal_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_baremetal" dry_run: "{{ make_edpm_deploy_baremetal_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_baremetal_env|default({})), **(make_edpm_deploy_baremetal_params|default({}))) }}" ././@LongLink0000644000000000000000000000017300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wait_deploy_baremetal.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wai0000644000175000017500000000215715073042760033365 0ustar zuulzuul--- - name: Debug make_edpm_wait_deploy_baremetal_env when: make_edpm_wait_deploy_baremetal_env is 
defined ansible.builtin.debug: var: make_edpm_wait_deploy_baremetal_env - name: Debug make_edpm_wait_deploy_baremetal_params when: make_edpm_wait_deploy_baremetal_params is defined ansible.builtin.debug: var: make_edpm_wait_deploy_baremetal_params - name: Run edpm_wait_deploy_baremetal retries: "{{ make_edpm_wait_deploy_baremetal_retries | default(omit) }}" delay: "{{ make_edpm_wait_deploy_baremetal_delay | default(omit) }}" until: "{{ make_edpm_wait_deploy_baremetal_until | default(true) }}" register: "make_edpm_wait_deploy_baremetal_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_wait_deploy_baremetal" dry_run: "{{ make_edpm_wait_deploy_baremetal_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_wait_deploy_baremetal_env|default({})), **(make_edpm_wait_deploy_baremetal_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_all.yml0000644000175000017500000000142615073042760033146 0ustar zuulzuul--- - name: Debug make_all_env when: make_all_env is defined ansible.builtin.debug: var: make_all_env - name: Debug make_all_params when: make_all_params is defined ansible.builtin.debug: var: make_all_params - name: Run all retries: "{{ make_all_retries | default(omit) }}" delay: "{{ make_all_delay | default(omit) }}" until: "{{ make_all_until | default(true) }}" register: "make_all_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make all" dry_run: "{{ make_all_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_all_env|default({})), **(make_all_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_help.yml0000644000175000017500000000145615073042760033331 0ustar zuulzuul--- - name: Debug make_help_env when: make_help_env is defined ansible.builtin.debug: var: make_help_env - name: Debug make_help_params when: make_help_params is defined ansible.builtin.debug: var: make_help_params - name: Run help retries: "{{ make_help_retries | default(omit) }}" delay: "{{ make_help_delay | default(omit) }}" until: "{{ make_help_until | default(true) }}" register: "make_help_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make help" dry_run: "{{ make_help_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_help_env|default({})), **(make_help_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cleanup.0000644000175000017500000000152215073042760033300 0ustar zuulzuul--- - name: Debug make_cleanup_env when: make_cleanup_env is defined ansible.builtin.debug: var: make_cleanup_env - name: Debug make_cleanup_params when: make_cleanup_params is defined ansible.builtin.debug: var: make_cleanup_params - name: Run cleanup retries: "{{ make_cleanup_retries | default(omit) }}" delay: "{{ make_cleanup_delay | default(omit) }}" until: "{{ 
make_cleanup_until | default(true) }}" register: "make_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cleanup" dry_run: "{{ make_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cleanup_env|default({})), **(make_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_deploy_c0000644000175000017500000000167315073042760033400 0ustar zuulzuul--- - name: Debug make_deploy_cleanup_env when: make_deploy_cleanup_env is defined ansible.builtin.debug: var: make_deploy_cleanup_env - name: Debug make_deploy_cleanup_params when: make_deploy_cleanup_params is defined ansible.builtin.debug: var: make_deploy_cleanup_params - name: Run deploy_cleanup retries: "{{ make_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_deploy_cleanup_delay | default(omit) }}" until: "{{ make_deploy_cleanup_until | default(true) }}" register: "make_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make deploy_cleanup" dry_run: "{{ make_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_deploy_cleanup_env|default({})), **(make_deploy_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_wait.yml0000644000175000017500000000144515073042760033343 0ustar zuulzuul--- - name: Debug make_wait_env when: make_wait_env is defined ansible.builtin.debug: var: make_wait_env - name: Debug make_wait_params when: make_wait_params is defined ansible.builtin.debug: var: make_wait_params - name: Run wait retries: "{{ make_wait_retries | default(omit) }}" delay: "{{ make_wait_delay | default(omit) }}" until: "{{ make_wait_until | default(true) }}" register: "make_wait_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make wait" dry_run: "{{ make_wait_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_wait_env|default({})), **(make_wait_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000161615073042760033415 0ustar zuulzuul--- - name: Debug make_crc_storage_env when: make_crc_storage_env is defined ansible.builtin.debug: var: make_crc_storage_env - name: Debug make_crc_storage_params when: make_crc_storage_params is defined ansible.builtin.debug: var: make_crc_storage_params - name: Run crc_storage retries: "{{ make_crc_storage_retries | default(omit) }}" delay: "{{ make_crc_storage_delay | default(omit) }}" until: "{{ make_crc_storage_until | default(true) }}" register: "make_crc_storage_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ 
'/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage" dry_run: "{{ make_crc_storage_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_env|default({})), **(make_crc_storage_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000200615073042760033407 0ustar zuulzuul--- - name: Debug make_crc_storage_cleanup_env when: make_crc_storage_cleanup_env is defined ansible.builtin.debug: var: make_crc_storage_cleanup_env - name: Debug make_crc_storage_cleanup_params when: make_crc_storage_cleanup_params is defined ansible.builtin.debug: var: make_crc_storage_cleanup_params - name: Run crc_storage_cleanup retries: "{{ make_crc_storage_cleanup_retries | default(omit) }}" delay: "{{ make_crc_storage_cleanup_delay | default(omit) }}" until: "{{ make_crc_storage_cleanup_until | default(true) }}" register: "make_crc_storage_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_cleanup" dry_run: "{{ make_crc_storage_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_cleanup_env|default({})), **(make_crc_storage_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_release.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000200615073042760033407 0ustar zuulzuul--- - name: Debug make_crc_storage_release_env when: make_crc_storage_release_env is defined ansible.builtin.debug: var: make_crc_storage_release_env - name: Debug make_crc_storage_release_params when: make_crc_storage_release_params is defined ansible.builtin.debug: var: make_crc_storage_release_params - name: Run crc_storage_release retries: "{{ make_crc_storage_release_retries | default(omit) }}" delay: "{{ make_crc_storage_release_delay | default(omit) }}" until: "{{ make_crc_storage_release_until | default(true) }}" register: "make_crc_storage_release_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_release" dry_run: "{{ make_crc_storage_release_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_release_env|default({})), **(make_crc_storage_release_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_with_retries.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000212115073042760033405 0ustar zuulzuul--- - name: Debug make_crc_storage_with_retries_env when: make_crc_storage_with_retries_env is defined ansible.builtin.debug: var: make_crc_storage_with_retries_env - name: Debug 
make_crc_storage_with_retries_params when: make_crc_storage_with_retries_params is defined ansible.builtin.debug: var: make_crc_storage_with_retries_params - name: Run crc_storage_with_retries retries: "{{ make_crc_storage_with_retries_retries | default(omit) }}" delay: "{{ make_crc_storage_with_retries_delay | default(omit) }}" until: "{{ make_crc_storage_with_retries_until | default(true) }}" register: "make_crc_storage_with_retries_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_with_retries" dry_run: "{{ make_crc_storage_with_retries_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_with_retries_env|default({})), **(make_crc_storage_with_retries_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_cleanup_with_retries.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_stor0000644000175000017500000000231115073042760033406 0ustar zuulzuul--- - name: Debug make_crc_storage_cleanup_with_retries_env when: make_crc_storage_cleanup_with_retries_env is defined ansible.builtin.debug: var: make_crc_storage_cleanup_with_retries_env - name: Debug make_crc_storage_cleanup_with_retries_params when: make_crc_storage_cleanup_with_retries_params is defined ansible.builtin.debug: var: make_crc_storage_cleanup_with_retries_params - name: Run crc_storage_cleanup_with_retries retries: "{{ make_crc_storage_cleanup_with_retries_retries | default(omit) }}" delay: "{{ make_crc_storage_cleanup_with_retries_delay | default(omit) }}" until: "{{ make_crc_storage_cleanup_with_retries_until | default(true) }}" register: "make_crc_storage_cleanup_with_retries_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make crc_storage_cleanup_with_retries" dry_run: "{{ make_crc_storage_cleanup_with_retries_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_crc_storage_cleanup_with_retries_env|default({})), **(make_crc_storage_cleanup_with_retries_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000202515073042760033433 0ustar zuulzuul--- - name: Debug make_keystone_deploy_prep_env when: make_keystone_deploy_prep_env is defined ansible.builtin.debug: var: make_keystone_deploy_prep_env - name: Debug make_keystone_deploy_prep_params when: make_keystone_deploy_prep_params is defined ansible.builtin.debug: var: make_keystone_deploy_prep_params - name: Run keystone_deploy_prep retries: "{{ make_keystone_deploy_prep_retries | default(omit) }}" delay: "{{ make_keystone_deploy_prep_delay | default(omit) }}" until: "{{ make_keystone_deploy_prep_until | default(true) }}" register: "make_keystone_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_deploy_prep" dry_run: "{{ make_keystone_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_deploy_prep_env|default({})), **(make_keystone_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000171215073042760033435 0ustar zuulzuul--- - name: Debug make_keystone_deploy_env when: make_keystone_deploy_env is defined ansible.builtin.debug: var: make_keystone_deploy_env - name: Debug make_keystone_deploy_params when: make_keystone_deploy_params is defined ansible.builtin.debug: var: make_keystone_deploy_params - name: Run keystone_deploy retries: "{{ make_keystone_deploy_retries | default(omit) }}" delay: "{{ make_keystone_deploy_delay | default(omit) }}" until: "{{ make_keystone_deploy_until | default(true) }}" register: "make_keystone_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_deploy" dry_run: "{{ make_keystone_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_deploy_env|default({})), **(make_keystone_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000210215073042760033427 0ustar zuulzuul--- - name: Debug make_keystone_deploy_cleanup_env when: make_keystone_deploy_cleanup_env is defined ansible.builtin.debug: var: make_keystone_deploy_cleanup_env - name: Debug make_keystone_deploy_cleanup_params when: make_keystone_deploy_cleanup_params is defined ansible.builtin.debug: var: make_keystone_deploy_cleanup_params - name: Run keystone_deploy_cleanup retries: "{{ make_keystone_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_keystone_deploy_cleanup_delay | default(omit) }}" until: "{{ make_keystone_deploy_cleanup_until | default(true) }}" register: "make_keystone_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_deploy_cleanup" dry_run: "{{ make_keystone_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_deploy_cleanup_env|default({})), **(make_keystone_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000165415073042760033342 0ustar zuulzuul--- - name: Debug make_barbican_prep_env when: make_barbican_prep_env is defined ansible.builtin.debug: var: make_barbican_prep_env - name: Debug make_barbican_prep_params when: make_barbican_prep_params is defined 
ansible.builtin.debug: var: make_barbican_prep_params - name: Run barbican_prep retries: "{{ make_barbican_prep_retries | default(omit) }}" delay: "{{ make_barbican_prep_delay | default(omit) }}" until: "{{ make_barbican_prep_until | default(true) }}" register: "make_barbican_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_prep" dry_run: "{{ make_barbican_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_prep_env|default({})), **(make_barbican_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000154115073042760033335 0ustar zuulzuul--- - name: Debug make_barbican_env when: make_barbican_env is defined ansible.builtin.debug: var: make_barbican_env - name: Debug make_barbican_params when: make_barbican_params is defined ansible.builtin.debug: var: make_barbican_params - name: Run barbican retries: "{{ make_barbican_retries | default(omit) }}" delay: "{{ make_barbican_delay | default(omit) }}" until: "{{ make_barbican_until | default(true) }}" register: "make_barbican_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican" dry_run: "{{ make_barbican_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_env|default({})), **(make_barbican_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000173115073042760033336 0ustar zuulzuul--- - name: Debug make_barbican_cleanup_env when: make_barbican_cleanup_env is defined ansible.builtin.debug: var: make_barbican_cleanup_env - name: Debug make_barbican_cleanup_params when: make_barbican_cleanup_params is defined ansible.builtin.debug: var: make_barbican_cleanup_params - name: Run barbican_cleanup retries: "{{ make_barbican_cleanup_retries | default(omit) }}" delay: "{{ make_barbican_cleanup_delay | default(omit) }}" until: "{{ make_barbican_cleanup_until | default(true) }}" register: "make_barbican_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_cleanup" dry_run: "{{ make_barbican_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_cleanup_env|default({})), **(make_barbican_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000202515073042760033333 0ustar zuulzuul--- - name: Debug make_barbican_deploy_prep_env when: 
make_barbican_deploy_prep_env is defined ansible.builtin.debug: var: make_barbican_deploy_prep_env - name: Debug make_barbican_deploy_prep_params when: make_barbican_deploy_prep_params is defined ansible.builtin.debug: var: make_barbican_deploy_prep_params - name: Run barbican_deploy_prep retries: "{{ make_barbican_deploy_prep_retries | default(omit) }}" delay: "{{ make_barbican_deploy_prep_delay | default(omit) }}" until: "{{ make_barbican_deploy_prep_until | default(true) }}" register: "make_barbican_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy_prep" dry_run: "{{ make_barbican_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_prep_env|default({})), **(make_barbican_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000171215073042760033335 0ustar zuulzuul--- - name: Debug make_barbican_deploy_env when: make_barbican_deploy_env is defined ansible.builtin.debug: var: make_barbican_deploy_env - name: Debug make_barbican_deploy_params when: make_barbican_deploy_params is defined ansible.builtin.debug: var: make_barbican_deploy_params - name: Run barbican_deploy retries: "{{ make_barbican_deploy_retries | default(omit) }}" delay: "{{ make_barbican_deploy_delay | default(omit) }}" until: "{{ make_barbican_deploy_until | default(true) }}" register: "make_barbican_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy" dry_run: "{{ make_barbican_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_env|default({})), **(make_barbican_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_validate.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000212115073042760033330 0ustar zuulzuul--- - name: Debug make_barbican_deploy_validate_env when: make_barbican_deploy_validate_env is defined ansible.builtin.debug: var: make_barbican_deploy_validate_env - name: Debug make_barbican_deploy_validate_params when: make_barbican_deploy_validate_params is defined ansible.builtin.debug: var: make_barbican_deploy_validate_params - name: Run barbican_deploy_validate retries: "{{ make_barbican_deploy_validate_retries | default(omit) }}" delay: "{{ make_barbican_deploy_validate_delay | default(omit) }}" until: "{{ make_barbican_deploy_validate_until | default(true) }}" register: "make_barbican_deploy_validate_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy_validate" dry_run: "{{ make_barbican_deploy_validate_dryrun|default(false)|bool }}" extra_args: "{{ 
dict((make_barbican_deploy_validate_env|default({})), **(make_barbican_deploy_validate_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000210215073042760033327 0ustar zuulzuul--- - name: Debug make_barbican_deploy_cleanup_env when: make_barbican_deploy_cleanup_env is defined ansible.builtin.debug: var: make_barbican_deploy_cleanup_env - name: Debug make_barbican_deploy_cleanup_params when: make_barbican_deploy_cleanup_params is defined ansible.builtin.debug: var: make_barbican_deploy_cleanup_params - name: Run barbican_deploy_cleanup retries: "{{ make_barbican_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_barbican_deploy_cleanup_delay | default(omit) }}" until: "{{ make_barbican_deploy_cleanup_until | default(true) }}" register: "make_barbican_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_deploy_cleanup" dry_run: "{{ make_barbican_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_deploy_cleanup_env|default({})), **(make_barbican_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb.0000644000175000017500000000152215073042760033250 0ustar zuulzuul--- - name: Debug make_mariadb_env when: make_mariadb_env is defined ansible.builtin.debug: var: make_mariadb_env - name: Debug make_mariadb_params when: make_mariadb_params is defined ansible.builtin.debug: var: make_mariadb_params - name: Run mariadb retries: "{{ make_mariadb_retries | default(omit) }}" delay: "{{ make_mariadb_delay | default(omit) }}" until: "{{ make_mariadb_until | default(true) }}" register: "make_mariadb_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb" dry_run: "{{ make_mariadb_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_env|default({})), **(make_mariadb_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000171215073042760033332 0ustar zuulzuul--- - name: Debug make_mariadb_cleanup_env when: make_mariadb_cleanup_env is defined ansible.builtin.debug: var: make_mariadb_cleanup_env - name: Debug make_mariadb_cleanup_params when: make_mariadb_cleanup_params is defined ansible.builtin.debug: var: make_mariadb_cleanup_params - name: Run mariadb_cleanup retries: "{{ make_mariadb_cleanup_retries | default(omit) }}" delay: "{{ make_mariadb_cleanup_delay | default(omit) }}" until: "{{ make_mariadb_cleanup_until | default(true) }}" register: "make_mariadb_cleanup_status" 
cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_cleanup" dry_run: "{{ make_mariadb_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_cleanup_env|default({})), **(make_mariadb_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000200615073042760033327 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_prep_env when: make_mariadb_deploy_prep_env is defined ansible.builtin.debug: var: make_mariadb_deploy_prep_env - name: Debug make_mariadb_deploy_prep_params when: make_mariadb_deploy_prep_params is defined ansible.builtin.debug: var: make_mariadb_deploy_prep_params - name: Run mariadb_deploy_prep retries: "{{ make_mariadb_deploy_prep_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_prep_delay | default(omit) }}" until: "{{ make_mariadb_deploy_prep_until | default(true) }}" register: "make_mariadb_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy_prep" dry_run: "{{ make_mariadb_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_prep_env|default({})), **(make_mariadb_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000167315073042760033340 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_env when: make_mariadb_deploy_env is defined ansible.builtin.debug: var: make_mariadb_deploy_env - name: Debug make_mariadb_deploy_params when: make_mariadb_deploy_params is defined ansible.builtin.debug: var: make_mariadb_deploy_params - name: Run mariadb_deploy retries: "{{ make_mariadb_deploy_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_delay | default(omit) }}" until: "{{ make_mariadb_deploy_until | default(true) }}" register: "make_mariadb_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy" dry_run: "{{ make_mariadb_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_env|default({})), **(make_mariadb_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000206315073042760033332 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_cleanup_env when: make_mariadb_deploy_cleanup_env is defined ansible.builtin.debug: var: make_mariadb_deploy_cleanup_env - name: Debug 
make_mariadb_deploy_cleanup_params when: make_mariadb_deploy_cleanup_params is defined ansible.builtin.debug: var: make_mariadb_deploy_cleanup_params - name: Run mariadb_deploy_cleanup retries: "{{ make_mariadb_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_cleanup_delay | default(omit) }}" until: "{{ make_mariadb_deploy_cleanup_until | default(true) }}" register: "make_mariadb_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy_cleanup" dry_run: "{{ make_mariadb_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_cleanup_env|default({})), **(make_mariadb_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000167315073042760033366 0ustar zuulzuul--- - name: Debug make_placement_prep_env when: make_placement_prep_env is defined ansible.builtin.debug: var: make_placement_prep_env - name: Debug make_placement_prep_params when: make_placement_prep_params is defined ansible.builtin.debug: var: make_placement_prep_params - name: Run placement_prep retries: "{{ make_placement_prep_retries | default(omit) }}" delay: "{{ make_placement_prep_delay | default(omit) }}" until: "{{ make_placement_prep_until | default(true) }}" register: "make_placement_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_prep" dry_run: "{{ make_placement_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_prep_env|default({})), **(make_placement_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000156015073042760033361 0ustar zuulzuul--- - name: Debug make_placement_env when: make_placement_env is defined ansible.builtin.debug: var: make_placement_env - name: Debug make_placement_params when: make_placement_params is defined ansible.builtin.debug: var: make_placement_params - name: Run placement retries: "{{ make_placement_retries | default(omit) }}" delay: "{{ make_placement_delay | default(omit) }}" until: "{{ make_placement_until | default(true) }}" register: "make_placement_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement" dry_run: "{{ make_placement_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_env|default({})), **(make_placement_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000175015073042760033362 0ustar zuulzuul--- - name: Debug make_placement_cleanup_env when: make_placement_cleanup_env is defined ansible.builtin.debug: var: make_placement_cleanup_env - name: Debug make_placement_cleanup_params when: make_placement_cleanup_params is defined ansible.builtin.debug: var: make_placement_cleanup_params - name: Run placement_cleanup retries: "{{ make_placement_cleanup_retries | default(omit) }}" delay: "{{ make_placement_cleanup_delay | default(omit) }}" until: "{{ make_placement_cleanup_until | default(true) }}" register: "make_placement_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_cleanup" dry_run: "{{ make_placement_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_cleanup_env|default({})), **(make_placement_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000204415073042760033357 0ustar zuulzuul--- - name: Debug make_placement_deploy_prep_env when: make_placement_deploy_prep_env is defined ansible.builtin.debug: var: make_placement_deploy_prep_env - name: Debug make_placement_deploy_prep_params when: make_placement_deploy_prep_params is defined ansible.builtin.debug: var: make_placement_deploy_prep_params - name: Run placement_deploy_prep retries: "{{ make_placement_deploy_prep_retries | default(omit) }}" delay: "{{ make_placement_deploy_prep_delay | default(omit) }}" until: "{{ make_placement_deploy_prep_until | default(true) }}" register: "make_placement_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy_prep" dry_run: "{{ make_placement_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_prep_env|default({})), **(make_placement_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000173115073042760033361 0ustar zuulzuul--- - name: Debug make_placement_deploy_env when: make_placement_deploy_env is defined ansible.builtin.debug: var: make_placement_deploy_env - name: Debug make_placement_deploy_params when: make_placement_deploy_params is defined ansible.builtin.debug: var: make_placement_deploy_params - name: Run placement_deploy retries: "{{ make_placement_deploy_retries | default(omit) }}" delay: "{{ make_placement_deploy_delay | default(omit) }}" until: "{{ make_placement_deploy_until | default(true) }}" register: "make_placement_deploy_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy" dry_run: "{{ make_placement_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_env|default({})), **(make_placement_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000212115073042760033353 0ustar zuulzuul--- - name: Debug make_placement_deploy_cleanup_env when: make_placement_deploy_cleanup_env is defined ansible.builtin.debug: var: make_placement_deploy_cleanup_env - name: Debug make_placement_deploy_cleanup_params when: make_placement_deploy_cleanup_params is defined ansible.builtin.debug: var: make_placement_deploy_cleanup_params - name: Run placement_deploy_cleanup retries: "{{ make_placement_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_placement_deploy_cleanup_delay | default(omit) }}" until: "{{ make_placement_deploy_cleanup_until | default(true) }}" register: "make_placement_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy_cleanup" dry_run: "{{ make_placement_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_cleanup_env|default({})), **(make_placement_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_p0000644000175000017500000000161615073042760033347 0ustar zuulzuul--- - name: Debug make_glance_prep_env when: make_glance_prep_env is defined ansible.builtin.debug: var: make_glance_prep_env - name: Debug make_glance_prep_params when: make_glance_prep_params is defined ansible.builtin.debug: var: make_glance_prep_params - name: Run glance_prep retries: "{{ make_glance_prep_retries | default(omit) }}" delay: "{{ make_glance_prep_delay | default(omit) }}" until: "{{ make_glance_prep_until | default(true) }}" register: "make_glance_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_prep" dry_run: "{{ make_glance_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_prep_env|default({})), **(make_glance_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000014700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance.y0000644000175000017500000000150315073042760033272 0ustar zuulzuul--- - name: Debug make_glance_env when: make_glance_env is defined ansible.builtin.debug: var: make_glance_env - name: Debug make_glance_params when: make_glance_params is defined ansible.builtin.debug: var: 
make_glance_params - name: Run glance retries: "{{ make_glance_retries | default(omit) }}" delay: "{{ make_glance_delay | default(omit) }}" until: "{{ make_glance_until | default(true) }}" register: "make_glance_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance" dry_run: "{{ make_glance_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_env|default({})), **(make_glance_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_c0000644000175000017500000000167315073042760033335 0ustar zuulzuul--- - name: Debug make_glance_cleanup_env when: make_glance_cleanup_env is defined ansible.builtin.debug: var: make_glance_cleanup_env - name: Debug make_glance_cleanup_params when: make_glance_cleanup_params is defined ansible.builtin.debug: var: make_glance_cleanup_params - name: Run glance_cleanup retries: "{{ make_glance_cleanup_retries | default(omit) }}" delay: "{{ make_glance_cleanup_delay | default(omit) }}" until: "{{ make_glance_cleanup_until | default(true) }}" register: "make_glance_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_cleanup" dry_run: "{{ make_glance_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_cleanup_env|default({})), **(make_glance_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000176715073042760033342 0ustar zuulzuul--- - name: Debug make_glance_deploy_prep_env when: make_glance_deploy_prep_env is defined ansible.builtin.debug: var: make_glance_deploy_prep_env - name: Debug make_glance_deploy_prep_params when: make_glance_deploy_prep_params is defined ansible.builtin.debug: var: make_glance_deploy_prep_params - name: Run glance_deploy_prep retries: "{{ make_glance_deploy_prep_retries | default(omit) }}" delay: "{{ make_glance_deploy_prep_delay | default(omit) }}" until: "{{ make_glance_deploy_prep_until | default(true) }}" register: "make_glance_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy_prep" dry_run: "{{ make_glance_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_prep_env|default({})), **(make_glance_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000165415073042760033335 0ustar zuulzuul--- - name: Debug 
make_glance_deploy_env when: make_glance_deploy_env is defined ansible.builtin.debug: var: make_glance_deploy_env - name: Debug make_glance_deploy_params when: make_glance_deploy_params is defined ansible.builtin.debug: var: make_glance_deploy_params - name: Run glance_deploy retries: "{{ make_glance_deploy_retries | default(omit) }}" delay: "{{ make_glance_deploy_delay | default(omit) }}" until: "{{ make_glance_deploy_until | default(true) }}" register: "make_glance_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy" dry_run: "{{ make_glance_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_env|default({})), **(make_glance_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000204415073042760033327 0ustar zuulzuul--- - name: Debug make_glance_deploy_cleanup_env when: make_glance_deploy_cleanup_env is defined ansible.builtin.debug: var: make_glance_deploy_cleanup_env - name: Debug make_glance_deploy_cleanup_params when: make_glance_deploy_cleanup_params is defined ansible.builtin.debug: var: make_glance_deploy_cleanup_params - name: Run glance_deploy_cleanup retries: "{{ make_glance_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_glance_deploy_cleanup_delay | default(omit) }}" until: "{{ make_glance_deploy_cleanup_until | default(true) }}" register: "make_glance_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy_cleanup" dry_run: "{{ make_glance_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_cleanup_env|default({})), **(make_glance_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_prep0000644000175000017500000000154115073042760033424 0ustar zuulzuul--- - name: Debug make_ovn_prep_env when: make_ovn_prep_env is defined ansible.builtin.debug: var: make_ovn_prep_env - name: Debug make_ovn_prep_params when: make_ovn_prep_params is defined ansible.builtin.debug: var: make_ovn_prep_params - name: Run ovn_prep retries: "{{ make_ovn_prep_retries | default(omit) }}" delay: "{{ make_ovn_prep_delay | default(omit) }}" until: "{{ make_ovn_prep_until | default(true) }}" register: "make_ovn_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_prep" dry_run: "{{ make_ovn_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_prep_env|default({})), **(make_ovn_prep_params|default({}))) }}" 
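For reference on the extra_args expression used throughout these generated tasks: dict(env, **params) builds a single mapping in which keys from the *_params dictionary override identical keys from the *_env dictionary. A minimal, hypothetical demonstration (all values invented for the example):

---
# Hypothetical demonstration -- not part of the archived job output.
- hosts: localhost
  gather_facts: false
  vars:
    make_ovn_env:
      NAMESPACE: openstack
      TIMEOUT: "300"
    make_ovn_params:
      TIMEOUT: "600"        # a *_params key overrides the same key from *_env
  tasks:
    - name: Show how the *_env and *_params dictionaries merge into extra_args
      ansible.builtin.debug:
        msg: "{{ dict((make_ovn_env|default({})), **(make_ovn_params|default({}))) }}"
      # output keeps NAMESPACE from make_ovn_env and takes TIMEOUT from make_ovn_params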
The remaining task files under home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/ repeat the same three-task pattern shown above, with the make target substituted into the task names, the make_<target>_* variables, and the script line ("make <target>"):

make_ovn.yml
make_ovn_cleanup.yml
make_ovn_deploy_prep.yml
make_ovn_deploy.yml
make_ovn_deploy_cleanup.yml
make_neutron_prep.yml
make_neutron.yml
make_neutron_cleanup.yml
make_neutron_deploy_prep.yml
make_neutron_deploy.yml
make_neutron_deploy_cleanup.yml
make_cinder_prep.yml
make_cinder.yml
make_cinder_cleanup.yml
make_cinder_deploy_prep.yml
make_cinder_deploy.yml
make_cinder_deploy_cleanup.yml
make_rabbitmq_prep.yml
make_rabbitmq.yml
make_rabbitmq_cleanup.yml
make_rabbitmq_deploy_prep.yml
make_rabbitmq_deploy.yml
make_rabbitmq_deploy_cleanup.yml
make_ironic_prep.yml
make_ironic.yml
make_ironic_cleanup.yml
make_ironic_deploy_prep.yml
make_ironic_deploy.yml
make_ironic_deploy_cleanup.yml
make_octavia_prep.yml
make_octavia.yml
make_octavia_cleanup.yml
make_octavia_deploy_prep.yml
make_octavia_deploy.yml
make_octavia_deploy_cleanup.yml
make_designate_prep.yml
make_designate.yml
make_designate_cleanup.yml
make_designate_deploy_prep.yml
make_designate_deploy.yml
make_designate_deploy_cleanup.yml
make_nova_prep.yml
make_nova.yml
make_nova_cleanup.yml
make_nova_deploy_prep.yml
nova_deploy_prep" dry_run: "{{ make_nova_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_prep_env|default({})), **(make_nova_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000161615073042760033372 0ustar zuulzuul--- - name: Debug make_nova_deploy_env when: make_nova_deploy_env is defined ansible.builtin.debug: var: make_nova_deploy_env - name: Debug make_nova_deploy_params when: make_nova_deploy_params is defined ansible.builtin.debug: var: make_nova_deploy_params - name: Run nova_deploy retries: "{{ make_nova_deploy_retries | default(omit) }}" delay: "{{ make_nova_deploy_delay | default(omit) }}" until: "{{ make_nova_deploy_until | default(true) }}" register: "make_nova_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy" dry_run: "{{ make_nova_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_env|default({})), **(make_nova_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000200615073042760033364 0ustar zuulzuul--- - name: Debug make_nova_deploy_cleanup_env when: make_nova_deploy_cleanup_env is defined ansible.builtin.debug: var: make_nova_deploy_cleanup_env - name: Debug make_nova_deploy_cleanup_params when: make_nova_deploy_cleanup_params is defined ansible.builtin.debug: var: make_nova_deploy_cleanup_params - name: Run nova_deploy_cleanup retries: "{{ make_nova_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_nova_deploy_cleanup_delay | default(omit) }}" until: "{{ make_nova_deploy_cleanup_until | default(true) }}" register: "make_nova_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy_cleanup" dry_run: "{{ make_nova_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_cleanup_env|default({})), **(make_nova_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000175015073042760033334 0ustar zuulzuul--- - name: Debug make_mariadb_kuttl_run_env when: make_mariadb_kuttl_run_env is defined ansible.builtin.debug: var: make_mariadb_kuttl_run_env - name: Debug make_mariadb_kuttl_run_params when: make_mariadb_kuttl_run_params is defined ansible.builtin.debug: var: make_mariadb_kuttl_run_params - name: Run mariadb_kuttl_run retries: "{{ make_mariadb_kuttl_run_retries | default(omit) }}" delay: "{{ make_mariadb_kuttl_run_delay | default(omit) }}" 
until: "{{ make_mariadb_kuttl_run_until | default(true) }}" register: "make_mariadb_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_kuttl_run" dry_run: "{{ make_mariadb_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_kuttl_run_env|default({})), **(make_mariadb_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000165415073042760033337 0ustar zuulzuul--- - name: Debug make_mariadb_kuttl_env when: make_mariadb_kuttl_env is defined ansible.builtin.debug: var: make_mariadb_kuttl_env - name: Debug make_mariadb_kuttl_params when: make_mariadb_kuttl_params is defined ansible.builtin.debug: var: make_mariadb_kuttl_params - name: Run mariadb_kuttl retries: "{{ make_mariadb_kuttl_retries | default(omit) }}" delay: "{{ make_mariadb_kuttl_delay | default(omit) }}" until: "{{ make_mariadb_kuttl_until | default(true) }}" register: "make_mariadb_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_kuttl" dry_run: "{{ make_mariadb_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_kuttl_env|default({})), **(make_mariadb_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db0000644000175000017500000000165415073042760033411 0ustar zuulzuul--- - name: Debug make_kuttl_db_prep_env when: make_kuttl_db_prep_env is defined ansible.builtin.debug: var: make_kuttl_db_prep_env - name: Debug make_kuttl_db_prep_params when: make_kuttl_db_prep_params is defined ansible.builtin.debug: var: make_kuttl_db_prep_params - name: Run kuttl_db_prep retries: "{{ make_kuttl_db_prep_retries | default(omit) }}" delay: "{{ make_kuttl_db_prep_delay | default(omit) }}" until: "{{ make_kuttl_db_prep_until | default(true) }}" register: "make_kuttl_db_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_db_prep" dry_run: "{{ make_kuttl_db_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_db_prep_env|default({})), **(make_kuttl_db_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db0000644000175000017500000000173115073042760033405 0ustar zuulzuul--- - name: Debug make_kuttl_db_cleanup_env when: make_kuttl_db_cleanup_env is defined ansible.builtin.debug: var: make_kuttl_db_cleanup_env - name: Debug make_kuttl_db_cleanup_params when: 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_cleanup.yml
---
- name: Debug make_kuttl_db_cleanup_env
  when: make_kuttl_db_cleanup_env is defined
  ansible.builtin.debug:
    var: make_kuttl_db_cleanup_env
- name: Debug make_kuttl_db_cleanup_params
  when: make_kuttl_db_cleanup_params is defined
  ansible.builtin.debug:
    var: make_kuttl_db_cleanup_params
- name: Run kuttl_db_cleanup
  retries: "{{ make_kuttl_db_cleanup_retries | default(omit) }}"
  delay: "{{ make_kuttl_db_cleanup_delay | default(omit) }}"
  until: "{{ make_kuttl_db_cleanup_until | default(true) }}"
  register: "make_kuttl_db_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make kuttl_db_cleanup"
    dry_run: "{{ make_kuttl_db_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_kuttl_db_cleanup_env|default({})), **(make_kuttl_db_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_common_prep.yml
---
- name: Debug make_kuttl_common_prep_env
  when: make_kuttl_common_prep_env is defined
  ansible.builtin.debug:
    var: make_kuttl_common_prep_env
- name: Debug make_kuttl_common_prep_params
  when: make_kuttl_common_prep_params is defined
  ansible.builtin.debug:
    var: make_kuttl_common_prep_params
- name: Run kuttl_common_prep
  retries: "{{ make_kuttl_common_prep_retries | default(omit) }}"
  delay: "{{ make_kuttl_common_prep_delay | default(omit) }}"
  until: "{{ make_kuttl_common_prep_until | default(true) }}"
  register: "make_kuttl_common_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make kuttl_common_prep"
    dry_run: "{{ make_kuttl_common_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_kuttl_common_prep_env|default({})), **(make_kuttl_common_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_common_cleanup.yml
---
- name: Debug make_kuttl_common_cleanup_env
  when: make_kuttl_common_cleanup_env is defined
  ansible.builtin.debug:
    var: make_kuttl_common_cleanup_env
- name: Debug make_kuttl_common_cleanup_params
  when: make_kuttl_common_cleanup_params is defined
  ansible.builtin.debug:
    var: make_kuttl_common_cleanup_params
- name: Run kuttl_common_cleanup
  retries: "{{ make_kuttl_common_cleanup_retries | default(omit) }}"
  delay: "{{ make_kuttl_common_cleanup_delay | default(omit) }}"
  until: "{{ make_kuttl_common_cleanup_until | default(true) }}"
  register: "make_kuttl_common_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make kuttl_common_cleanup"
    dry_run: "{{ make_kuttl_common_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_kuttl_common_cleanup_env|default({})), **(make_kuttl_common_cleanup_params|default({}))) }}"
././@LongLink0000644000000000000000000000016300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000176715073042760033447 0ustar zuulzuul--- - name: Debug make_keystone_kuttl_run_env when: make_keystone_kuttl_run_env is defined ansible.builtin.debug: var: make_keystone_kuttl_run_env - name: Debug make_keystone_kuttl_run_params when: make_keystone_kuttl_run_params is defined ansible.builtin.debug: var: make_keystone_kuttl_run_params - name: Run keystone_kuttl_run retries: "{{ make_keystone_kuttl_run_retries | default(omit) }}" delay: "{{ make_keystone_kuttl_run_delay | default(omit) }}" until: "{{ make_keystone_kuttl_run_until | default(true) }}" register: "make_keystone_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_kuttl_run" dry_run: "{{ make_keystone_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_kuttl_run_env|default({})), **(make_keystone_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000167315073042760033443 0ustar zuulzuul--- - name: Debug make_keystone_kuttl_env when: make_keystone_kuttl_env is defined ansible.builtin.debug: var: make_keystone_kuttl_env - name: Debug make_keystone_kuttl_params when: make_keystone_kuttl_params is defined ansible.builtin.debug: var: make_keystone_kuttl_params - name: Run keystone_kuttl retries: "{{ make_keystone_kuttl_retries | default(omit) }}" delay: "{{ make_keystone_kuttl_delay | default(omit) }}" until: "{{ make_keystone_kuttl_until | default(true) }}" register: "make_keystone_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_kuttl" dry_run: "{{ make_keystone_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_kuttl_env|default({})), **(make_keystone_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000176715073042760033347 0ustar zuulzuul--- - name: Debug make_barbican_kuttl_run_env when: make_barbican_kuttl_run_env is defined ansible.builtin.debug: var: make_barbican_kuttl_run_env - name: Debug make_barbican_kuttl_run_params when: make_barbican_kuttl_run_params is defined ansible.builtin.debug: var: make_barbican_kuttl_run_params - name: Run barbican_kuttl_run retries: "{{ make_barbican_kuttl_run_retries | default(omit) }}" delay: "{{ make_barbican_kuttl_run_delay | default(omit) }}" until: "{{ make_barbican_kuttl_run_until | default(true) }}" register: "make_barbican_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" 
chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_kuttl_run" dry_run: "{{ make_barbican_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_kuttl_run_env|default({})), **(make_barbican_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000167315073042760033343 0ustar zuulzuul--- - name: Debug make_barbican_kuttl_env when: make_barbican_kuttl_env is defined ansible.builtin.debug: var: make_barbican_kuttl_env - name: Debug make_barbican_kuttl_params when: make_barbican_kuttl_params is defined ansible.builtin.debug: var: make_barbican_kuttl_params - name: Run barbican_kuttl retries: "{{ make_barbican_kuttl_retries | default(omit) }}" delay: "{{ make_barbican_kuttl_delay | default(omit) }}" until: "{{ make_barbican_kuttl_until | default(true) }}" register: "make_barbican_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make barbican_kuttl" dry_run: "{{ make_barbican_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_barbican_kuttl_env|default({})), **(make_barbican_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000200615073042760033355 0ustar zuulzuul--- - name: Debug make_placement_kuttl_run_env when: make_placement_kuttl_run_env is defined ansible.builtin.debug: var: make_placement_kuttl_run_env - name: Debug make_placement_kuttl_run_params when: make_placement_kuttl_run_params is defined ansible.builtin.debug: var: make_placement_kuttl_run_params - name: Run placement_kuttl_run retries: "{{ make_placement_kuttl_run_retries | default(omit) }}" delay: "{{ make_placement_kuttl_run_delay | default(omit) }}" until: "{{ make_placement_kuttl_run_until | default(true) }}" register: "make_placement_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_kuttl_run" dry_run: "{{ make_placement_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_kuttl_run_env|default({})), **(make_placement_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000171215073042760033360 0ustar zuulzuul--- - name: Debug make_placement_kuttl_env when: make_placement_kuttl_env is defined ansible.builtin.debug: var: make_placement_kuttl_env - name: Debug make_placement_kuttl_params when: make_placement_kuttl_params is defined ansible.builtin.debug: var: make_placement_kuttl_params - name: Run 
placement_kuttl retries: "{{ make_placement_kuttl_retries | default(omit) }}" delay: "{{ make_placement_kuttl_delay | default(omit) }}" until: "{{ make_placement_kuttl_until | default(true) }}" register: "make_placement_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_kuttl" dry_run: "{{ make_placement_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_kuttl_env|default({})), **(make_placement_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_k0000644000175000017500000000173115073042760033353 0ustar zuulzuul--- - name: Debug make_cinder_kuttl_run_env when: make_cinder_kuttl_run_env is defined ansible.builtin.debug: var: make_cinder_kuttl_run_env - name: Debug make_cinder_kuttl_run_params when: make_cinder_kuttl_run_params is defined ansible.builtin.debug: var: make_cinder_kuttl_run_params - name: Run cinder_kuttl_run retries: "{{ make_cinder_kuttl_run_retries | default(omit) }}" delay: "{{ make_cinder_kuttl_run_delay | default(omit) }}" until: "{{ make_cinder_kuttl_run_until | default(true) }}" register: "make_cinder_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_kuttl_run" dry_run: "{{ make_cinder_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_kuttl_run_env|default({})), **(make_cinder_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_k0000644000175000017500000000163515073042760033356 0ustar zuulzuul--- - name: Debug make_cinder_kuttl_env when: make_cinder_kuttl_env is defined ansible.builtin.debug: var: make_cinder_kuttl_env - name: Debug make_cinder_kuttl_params when: make_cinder_kuttl_params is defined ansible.builtin.debug: var: make_cinder_kuttl_params - name: Run cinder_kuttl retries: "{{ make_cinder_kuttl_retries | default(omit) }}" delay: "{{ make_cinder_kuttl_delay | default(omit) }}" until: "{{ make_cinder_kuttl_until | default(true) }}" register: "make_cinder_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make cinder_kuttl" dry_run: "{{ make_cinder_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cinder_kuttl_env|default({})), **(make_cinder_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000175015073042760033427 0ustar zuulzuul--- - name: Debug 
make_neutron_kuttl_run_env when: make_neutron_kuttl_run_env is defined ansible.builtin.debug: var: make_neutron_kuttl_run_env - name: Debug make_neutron_kuttl_run_params when: make_neutron_kuttl_run_params is defined ansible.builtin.debug: var: make_neutron_kuttl_run_params - name: Run neutron_kuttl_run retries: "{{ make_neutron_kuttl_run_retries | default(omit) }}" delay: "{{ make_neutron_kuttl_run_delay | default(omit) }}" until: "{{ make_neutron_kuttl_run_until | default(true) }}" register: "make_neutron_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_kuttl_run" dry_run: "{{ make_neutron_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_kuttl_run_env|default({})), **(make_neutron_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_0000644000175000017500000000165415073042760033432 0ustar zuulzuul--- - name: Debug make_neutron_kuttl_env when: make_neutron_kuttl_env is defined ansible.builtin.debug: var: make_neutron_kuttl_env - name: Debug make_neutron_kuttl_params when: make_neutron_kuttl_params is defined ansible.builtin.debug: var: make_neutron_kuttl_params - name: Run neutron_kuttl retries: "{{ make_neutron_kuttl_retries | default(omit) }}" delay: "{{ make_neutron_kuttl_delay | default(omit) }}" until: "{{ make_neutron_kuttl_until | default(true) }}" register: "make_neutron_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make neutron_kuttl" dry_run: "{{ make_neutron_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_neutron_kuttl_env|default({})), **(make_neutron_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000175015073042760033363 0ustar zuulzuul--- - name: Debug make_octavia_kuttl_run_env when: make_octavia_kuttl_run_env is defined ansible.builtin.debug: var: make_octavia_kuttl_run_env - name: Debug make_octavia_kuttl_run_params when: make_octavia_kuttl_run_params is defined ansible.builtin.debug: var: make_octavia_kuttl_run_params - name: Run octavia_kuttl_run retries: "{{ make_octavia_kuttl_run_retries | default(omit) }}" delay: "{{ make_octavia_kuttl_run_delay | default(omit) }}" until: "{{ make_octavia_kuttl_run_until | default(true) }}" register: "make_octavia_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_kuttl_run" dry_run: "{{ make_octavia_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_kuttl_run_env|default({})), **(make_octavia_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 
Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000165415073042760033366 0ustar zuulzuul--- - name: Debug make_octavia_kuttl_env when: make_octavia_kuttl_env is defined ansible.builtin.debug: var: make_octavia_kuttl_env - name: Debug make_octavia_kuttl_params when: make_octavia_kuttl_params is defined ansible.builtin.debug: var: make_octavia_kuttl_params - name: Run octavia_kuttl retries: "{{ make_octavia_kuttl_retries | default(omit) }}" delay: "{{ make_octavia_kuttl_delay | default(omit) }}" until: "{{ make_octavia_kuttl_until | default(true) }}" register: "make_octavia_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_kuttl" dry_run: "{{ make_octavia_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_kuttl_env|default({})), **(make_octavia_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000171215073042760033372 0ustar zuulzuul--- - name: Debug make_designate_kuttl_env when: make_designate_kuttl_env is defined ansible.builtin.debug: var: make_designate_kuttl_env - name: Debug make_designate_kuttl_params when: make_designate_kuttl_params is defined ansible.builtin.debug: var: make_designate_kuttl_params - name: Run designate_kuttl retries: "{{ make_designate_kuttl_retries | default(omit) }}" delay: "{{ make_designate_kuttl_delay | default(omit) }}" until: "{{ make_designate_kuttl_until | default(true) }}" register: "make_designate_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_kuttl" dry_run: "{{ make_designate_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_kuttl_env|default({})), **(make_designate_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000200615073042760033367 0ustar zuulzuul--- - name: Debug make_designate_kuttl_run_env when: make_designate_kuttl_run_env is defined ansible.builtin.debug: var: make_designate_kuttl_run_env - name: Debug make_designate_kuttl_run_params when: make_designate_kuttl_run_params is defined ansible.builtin.debug: var: make_designate_kuttl_run_params - name: Run designate_kuttl_run retries: "{{ make_designate_kuttl_run_retries | default(omit) }}" delay: "{{ make_designate_kuttl_run_delay | default(omit) }}" until: "{{ make_designate_kuttl_run_until | default(true) }}" register: "make_designate_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_kuttl_run" dry_run: "{{ make_designate_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_kuttl_run_env|default({})), **(make_designate_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kutt0000644000175000017500000000165415073042760033452 0ustar zuulzuul--- - name: Debug make_ovn_kuttl_run_env when: make_ovn_kuttl_run_env is defined ansible.builtin.debug: var: make_ovn_kuttl_run_env - name: Debug make_ovn_kuttl_run_params when: make_ovn_kuttl_run_params is defined ansible.builtin.debug: var: make_ovn_kuttl_run_params - name: Run ovn_kuttl_run retries: "{{ make_ovn_kuttl_run_retries | default(omit) }}" delay: "{{ make_ovn_kuttl_run_delay | default(omit) }}" until: "{{ make_ovn_kuttl_run_until | default(true) }}" register: "make_ovn_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_kuttl_run" dry_run: "{{ make_ovn_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_kuttl_run_env|default({})), **(make_ovn_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kutt0000644000175000017500000000156015073042760033446 0ustar zuulzuul--- - name: Debug make_ovn_kuttl_env when: make_ovn_kuttl_env is defined ansible.builtin.debug: var: make_ovn_kuttl_env - name: Debug make_ovn_kuttl_params when: make_ovn_kuttl_params is defined ansible.builtin.debug: var: make_ovn_kuttl_params - name: Run ovn_kuttl retries: "{{ make_ovn_kuttl_retries | default(omit) }}" delay: "{{ make_ovn_kuttl_delay | default(omit) }}" until: "{{ make_ovn_kuttl_until | default(true) }}" register: "make_ovn_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_kuttl" dry_run: "{{ make_ovn_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_kuttl_env|default({})), **(make_ovn_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_ku0000644000175000017500000000171215073042760033372 0ustar zuulzuul--- - name: Debug make_infra_kuttl_run_env when: make_infra_kuttl_run_env is defined ansible.builtin.debug: var: make_infra_kuttl_run_env - name: Debug make_infra_kuttl_run_params when: make_infra_kuttl_run_params is defined ansible.builtin.debug: var: make_infra_kuttl_run_params - name: Run infra_kuttl_run retries: "{{ make_infra_kuttl_run_retries | default(omit) }}" delay: "{{ make_infra_kuttl_run_delay | default(omit) }}" until: "{{ make_infra_kuttl_run_until | default(true) 
}}" register: "make_infra_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make infra_kuttl_run" dry_run: "{{ make_infra_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_infra_kuttl_run_env|default({})), **(make_infra_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_ku0000644000175000017500000000161615073042760033375 0ustar zuulzuul--- - name: Debug make_infra_kuttl_env when: make_infra_kuttl_env is defined ansible.builtin.debug: var: make_infra_kuttl_env - name: Debug make_infra_kuttl_params when: make_infra_kuttl_params is defined ansible.builtin.debug: var: make_infra_kuttl_params - name: Run infra_kuttl retries: "{{ make_infra_kuttl_retries | default(omit) }}" delay: "{{ make_infra_kuttl_delay | default(omit) }}" until: "{{ make_infra_kuttl_until | default(true) }}" register: "make_infra_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make infra_kuttl" dry_run: "{{ make_infra_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_infra_kuttl_env|default({})), **(make_infra_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_k0000644000175000017500000000173115073042760033372 0ustar zuulzuul--- - name: Debug make_ironic_kuttl_run_env when: make_ironic_kuttl_run_env is defined ansible.builtin.debug: var: make_ironic_kuttl_run_env - name: Debug make_ironic_kuttl_run_params when: make_ironic_kuttl_run_params is defined ansible.builtin.debug: var: make_ironic_kuttl_run_params - name: Run ironic_kuttl_run retries: "{{ make_ironic_kuttl_run_retries | default(omit) }}" delay: "{{ make_ironic_kuttl_run_delay | default(omit) }}" until: "{{ make_ironic_kuttl_run_until | default(true) }}" register: "make_ironic_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_kuttl_run" dry_run: "{{ make_ironic_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_kuttl_run_env|default({})), **(make_ironic_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_k0000644000175000017500000000163515073042760033375 0ustar zuulzuul--- - name: Debug make_ironic_kuttl_env when: make_ironic_kuttl_env is defined ansible.builtin.debug: var: make_ironic_kuttl_env - name: Debug make_ironic_kuttl_params when: make_ironic_kuttl_params is defined ansible.builtin.debug: var: 
make_ironic_kuttl_params - name: Run ironic_kuttl retries: "{{ make_ironic_kuttl_retries | default(omit) }}" delay: "{{ make_ironic_kuttl_delay | default(omit) }}" until: "{{ make_ironic_kuttl_until | default(true) }}" register: "make_ironic_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_kuttl" dry_run: "{{ make_ironic_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_kuttl_env|default({})), **(make_ironic_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl_crc.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_k0000644000175000017500000000173115073042760033372 0ustar zuulzuul--- - name: Debug make_ironic_kuttl_crc_env when: make_ironic_kuttl_crc_env is defined ansible.builtin.debug: var: make_ironic_kuttl_crc_env - name: Debug make_ironic_kuttl_crc_params when: make_ironic_kuttl_crc_params is defined ansible.builtin.debug: var: make_ironic_kuttl_crc_params - name: Run ironic_kuttl_crc retries: "{{ make_ironic_kuttl_crc_retries | default(omit) }}" delay: "{{ make_ironic_kuttl_crc_delay | default(omit) }}" until: "{{ make_ironic_kuttl_crc_until | default(true) }}" register: "make_ironic_kuttl_crc_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ironic_kuttl_crc" dry_run: "{{ make_ironic_kuttl_crc_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ironic_kuttl_crc_env|default({})), **(make_ironic_kuttl_crc_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kut0000644000175000017500000000167315073042760033406 0ustar zuulzuul--- - name: Debug make_heat_kuttl_run_env when: make_heat_kuttl_run_env is defined ansible.builtin.debug: var: make_heat_kuttl_run_env - name: Debug make_heat_kuttl_run_params when: make_heat_kuttl_run_params is defined ansible.builtin.debug: var: make_heat_kuttl_run_params - name: Run heat_kuttl_run retries: "{{ make_heat_kuttl_run_retries | default(omit) }}" delay: "{{ make_heat_kuttl_run_delay | default(omit) }}" until: "{{ make_heat_kuttl_run_until | default(true) }}" register: "make_heat_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make heat_kuttl_run" dry_run: "{{ make_heat_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_heat_kuttl_run_env|default({})), **(make_heat_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kut0000644000175000017500000000157715073042760033411 0ustar zuulzuul--- - 
name: Debug make_heat_kuttl_env when: make_heat_kuttl_env is defined ansible.builtin.debug: var: make_heat_kuttl_env - name: Debug make_heat_kuttl_params when: make_heat_kuttl_params is defined ansible.builtin.debug: var: make_heat_kuttl_params - name: Run heat_kuttl retries: "{{ make_heat_kuttl_retries | default(omit) }}" delay: "{{ make_heat_kuttl_delay | default(omit) }}" until: "{{ make_heat_kuttl_until | default(true) }}" register: "make_heat_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make heat_kuttl" dry_run: "{{ make_heat_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_heat_kuttl_env|default({})), **(make_heat_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl_crc.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kut0000644000175000017500000000167315073042760033406 0ustar zuulzuul--- - name: Debug make_heat_kuttl_crc_env when: make_heat_kuttl_crc_env is defined ansible.builtin.debug: var: make_heat_kuttl_crc_env - name: Debug make_heat_kuttl_crc_params when: make_heat_kuttl_crc_params is defined ansible.builtin.debug: var: make_heat_kuttl_crc_params - name: Run heat_kuttl_crc retries: "{{ make_heat_kuttl_crc_retries | default(omit) }}" delay: "{{ make_heat_kuttl_crc_delay | default(omit) }}" until: "{{ make_heat_kuttl_crc_until | default(true) }}" register: "make_heat_kuttl_crc_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make heat_kuttl_crc" dry_run: "{{ make_heat_kuttl_crc_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_heat_kuttl_crc_env|default({})), **(make_heat_kuttl_crc_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansiblee0000644000175000017500000000200615073042760033353 0ustar zuulzuul--- - name: Debug make_ansibleee_kuttl_run_env when: make_ansibleee_kuttl_run_env is defined ansible.builtin.debug: var: make_ansibleee_kuttl_run_env - name: Debug make_ansibleee_kuttl_run_params when: make_ansibleee_kuttl_run_params is defined ansible.builtin.debug: var: make_ansibleee_kuttl_run_params - name: Run ansibleee_kuttl_run retries: "{{ make_ansibleee_kuttl_run_retries | default(omit) }}" delay: "{{ make_ansibleee_kuttl_run_delay | default(omit) }}" until: "{{ make_ansibleee_kuttl_run_until | default(true) }}" register: "make_ansibleee_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ansibleee_kuttl_run" dry_run: "{{ make_ansibleee_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ansibleee_kuttl_run_env|default({})), **(make_ansibleee_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansiblee0000644000175000017500000000210215073042760033350 0ustar zuulzuul--- - name: Debug make_ansibleee_kuttl_cleanup_env when: make_ansibleee_kuttl_cleanup_env is defined ansible.builtin.debug: var: make_ansibleee_kuttl_cleanup_env - name: Debug make_ansibleee_kuttl_cleanup_params when: make_ansibleee_kuttl_cleanup_params is defined ansible.builtin.debug: var: make_ansibleee_kuttl_cleanup_params - name: Run ansibleee_kuttl_cleanup retries: "{{ make_ansibleee_kuttl_cleanup_retries | default(omit) }}" delay: "{{ make_ansibleee_kuttl_cleanup_delay | default(omit) }}" until: "{{ make_ansibleee_kuttl_cleanup_until | default(true) }}" register: "make_ansibleee_kuttl_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ansibleee_kuttl_cleanup" dry_run: "{{ make_ansibleee_kuttl_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ansibleee_kuttl_cleanup_env|default({})), **(make_ansibleee_kuttl_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansiblee0000644000175000017500000000202515073042760033354 0ustar zuulzuul--- - name: Debug make_ansibleee_kuttl_prep_env when: make_ansibleee_kuttl_prep_env is defined ansible.builtin.debug: var: make_ansibleee_kuttl_prep_env - name: Debug make_ansibleee_kuttl_prep_params when: make_ansibleee_kuttl_prep_params is defined ansible.builtin.debug: var: make_ansibleee_kuttl_prep_params - name: Run ansibleee_kuttl_prep retries: "{{ make_ansibleee_kuttl_prep_retries | default(omit) }}" delay: "{{ make_ansibleee_kuttl_prep_delay | default(omit) }}" until: "{{ make_ansibleee_kuttl_prep_until | default(true) }}" register: "make_ansibleee_kuttl_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ansibleee_kuttl_prep" dry_run: "{{ make_ansibleee_kuttl_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ansibleee_kuttl_prep_env|default({})), **(make_ansibleee_kuttl_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansiblee0000644000175000017500000000171215073042760033356 0ustar zuulzuul--- - name: Debug make_ansibleee_kuttl_env when: make_ansibleee_kuttl_env is defined ansible.builtin.debug: var: make_ansibleee_kuttl_env - name: Debug make_ansibleee_kuttl_params when: make_ansibleee_kuttl_params is defined ansible.builtin.debug: var: make_ansibleee_kuttl_params - name: Run ansibleee_kuttl retries: "{{ make_ansibleee_kuttl_retries | default(omit) }}" delay: "{{ make_ansibleee_kuttl_delay | default(omit) }}" until: "{{ make_ansibleee_kuttl_until | default(true) }}" register: 
"make_ansibleee_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ansibleee_kuttl" dry_run: "{{ make_ansibleee_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ansibleee_kuttl_env|default({})), **(make_ansibleee_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_k0000644000175000017500000000173115073042760033340 0ustar zuulzuul--- - name: Debug make_glance_kuttl_run_env when: make_glance_kuttl_run_env is defined ansible.builtin.debug: var: make_glance_kuttl_run_env - name: Debug make_glance_kuttl_run_params when: make_glance_kuttl_run_params is defined ansible.builtin.debug: var: make_glance_kuttl_run_params - name: Run glance_kuttl_run retries: "{{ make_glance_kuttl_run_retries | default(omit) }}" delay: "{{ make_glance_kuttl_run_delay | default(omit) }}" until: "{{ make_glance_kuttl_run_until | default(true) }}" register: "make_glance_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_kuttl_run" dry_run: "{{ make_glance_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_kuttl_run_env|default({})), **(make_glance_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_k0000644000175000017500000000163515073042760033343 0ustar zuulzuul--- - name: Debug make_glance_kuttl_env when: make_glance_kuttl_env is defined ansible.builtin.debug: var: make_glance_kuttl_env - name: Debug make_glance_kuttl_params when: make_glance_kuttl_params is defined ansible.builtin.debug: var: make_glance_kuttl_params - name: Run glance_kuttl retries: "{{ make_glance_kuttl_retries | default(omit) }}" delay: "{{ make_glance_kuttl_delay | default(omit) }}" until: "{{ make_glance_kuttl_until | default(true) }}" register: "make_glance_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_kuttl" dry_run: "{{ make_glance_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_kuttl_env|default({})), **(make_glance_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_k0000644000175000017500000000173115073042760033350 0ustar zuulzuul--- - name: Debug make_manila_kuttl_run_env when: make_manila_kuttl_run_env is defined ansible.builtin.debug: var: make_manila_kuttl_run_env - name: Debug make_manila_kuttl_run_params when: make_manila_kuttl_run_params is defined 
ansible.builtin.debug: var: make_manila_kuttl_run_params - name: Run manila_kuttl_run retries: "{{ make_manila_kuttl_run_retries | default(omit) }}" delay: "{{ make_manila_kuttl_run_delay | default(omit) }}" until: "{{ make_manila_kuttl_run_until | default(true) }}" register: "make_manila_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make manila_kuttl_run" dry_run: "{{ make_manila_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_manila_kuttl_run_env|default({})), **(make_manila_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_k0000644000175000017500000000163515073042760033353 0ustar zuulzuul--- - name: Debug make_manila_kuttl_env when: make_manila_kuttl_env is defined ansible.builtin.debug: var: make_manila_kuttl_env - name: Debug make_manila_kuttl_params when: make_manila_kuttl_params is defined ansible.builtin.debug: var: make_manila_kuttl_params - name: Run manila_kuttl retries: "{{ make_manila_kuttl_retries | default(omit) }}" delay: "{{ make_manila_kuttl_delay | default(omit) }}" until: "{{ make_manila_kuttl_until | default(true) }}" register: "make_manila_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make manila_kuttl" dry_run: "{{ make_manila_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_manila_kuttl_env|default({})), **(make_manila_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_ku0000644000175000017500000000171215073042760033427 0ustar zuulzuul--- - name: Debug make_swift_kuttl_run_env when: make_swift_kuttl_run_env is defined ansible.builtin.debug: var: make_swift_kuttl_run_env - name: Debug make_swift_kuttl_run_params when: make_swift_kuttl_run_params is defined ansible.builtin.debug: var: make_swift_kuttl_run_params - name: Run swift_kuttl_run retries: "{{ make_swift_kuttl_run_retries | default(omit) }}" delay: "{{ make_swift_kuttl_run_delay | default(omit) }}" until: "{{ make_swift_kuttl_run_until | default(true) }}" register: "make_swift_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make swift_kuttl_run" dry_run: "{{ make_swift_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_swift_kuttl_run_env|default({})), **(make_swift_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_ku0000644000175000017500000000161615073042760033432 
0ustar zuulzuul--- - name: Debug make_swift_kuttl_env when: make_swift_kuttl_env is defined ansible.builtin.debug: var: make_swift_kuttl_env - name: Debug make_swift_kuttl_params when: make_swift_kuttl_params is defined ansible.builtin.debug: var: make_swift_kuttl_params - name: Run swift_kuttl retries: "{{ make_swift_kuttl_retries | default(omit) }}" delay: "{{ make_swift_kuttl_delay | default(omit) }}" until: "{{ make_swift_kuttl_until | default(true) }}" register: "make_swift_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make swift_kuttl" dry_run: "{{ make_swift_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_swift_kuttl_env|default({})), **(make_swift_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_0000644000175000017500000000175015073042760033425 0ustar zuulzuul--- - name: Debug make_horizon_kuttl_run_env when: make_horizon_kuttl_run_env is defined ansible.builtin.debug: var: make_horizon_kuttl_run_env - name: Debug make_horizon_kuttl_run_params when: make_horizon_kuttl_run_params is defined ansible.builtin.debug: var: make_horizon_kuttl_run_params - name: Run horizon_kuttl_run retries: "{{ make_horizon_kuttl_run_retries | default(omit) }}" delay: "{{ make_horizon_kuttl_run_delay | default(omit) }}" until: "{{ make_horizon_kuttl_run_until | default(true) }}" register: "make_horizon_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make horizon_kuttl_run" dry_run: "{{ make_horizon_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_horizon_kuttl_run_env|default({})), **(make_horizon_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_0000644000175000017500000000165415073042760033430 0ustar zuulzuul--- - name: Debug make_horizon_kuttl_env when: make_horizon_kuttl_env is defined ansible.builtin.debug: var: make_horizon_kuttl_env - name: Debug make_horizon_kuttl_params when: make_horizon_kuttl_params is defined ansible.builtin.debug: var: make_horizon_kuttl_params - name: Run horizon_kuttl retries: "{{ make_horizon_kuttl_retries | default(omit) }}" delay: "{{ make_horizon_kuttl_delay | default(omit) }}" until: "{{ make_horizon_kuttl_until | default(true) }}" register: "make_horizon_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make horizon_kuttl" dry_run: "{{ make_horizon_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_horizon_kuttl_env|default({})), **(make_horizon_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000200615073042760033405 0ustar zuulzuul--- - name: Debug make_openstack_kuttl_run_env when: make_openstack_kuttl_run_env is defined ansible.builtin.debug: var: make_openstack_kuttl_run_env - name: Debug make_openstack_kuttl_run_params when: make_openstack_kuttl_run_params is defined ansible.builtin.debug: var: make_openstack_kuttl_run_params - name: Run openstack_kuttl_run retries: "{{ make_openstack_kuttl_run_retries | default(omit) }}" delay: "{{ make_openstack_kuttl_run_delay | default(omit) }}" until: "{{ make_openstack_kuttl_run_until | default(true) }}" register: "make_openstack_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_kuttl_run" dry_run: "{{ make_openstack_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_kuttl_run_env|default({})), **(make_openstack_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000171215073042760033410 0ustar zuulzuul--- - name: Debug make_openstack_kuttl_env when: make_openstack_kuttl_env is defined ansible.builtin.debug: var: make_openstack_kuttl_env - name: Debug make_openstack_kuttl_params when: make_openstack_kuttl_params is defined ansible.builtin.debug: var: make_openstack_kuttl_params - name: Run openstack_kuttl retries: "{{ make_openstack_kuttl_retries | default(omit) }}" delay: "{{ make_openstack_kuttl_delay | default(omit) }}" until: "{{ make_openstack_kuttl_until | default(true) }}" register: "make_openstack_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_kuttl" dry_run: "{{ make_openstack_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_kuttl_env|default({})), **(make_openstack_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_chainsaw_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000202515073042760033330 0ustar zuulzuul--- - name: Debug make_mariadb_chainsaw_run_env when: make_mariadb_chainsaw_run_env is defined ansible.builtin.debug: var: make_mariadb_chainsaw_run_env - name: Debug make_mariadb_chainsaw_run_params when: make_mariadb_chainsaw_run_params is defined ansible.builtin.debug: var: make_mariadb_chainsaw_run_params - name: Run mariadb_chainsaw_run retries: "{{ make_mariadb_chainsaw_run_retries | default(omit) }}" delay: "{{ make_mariadb_chainsaw_run_delay | default(omit) }}" until: "{{ make_mariadb_chainsaw_run_until | default(true) }}" register: "make_mariadb_chainsaw_run_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make mariadb_chainsaw_run"
    dry_run: "{{ make_mariadb_chainsaw_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_mariadb_chainsaw_run_env|default({})), **(make_mariadb_chainsaw_run_params|default({}))) }}"

The remaining entries in this part of the archive are the generated wrapper task files under home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/. Every file instantiates the same three-task pattern, shown once below with <TARGET> standing in for the install_yamls make target; each file is named make_<TARGET>.yml and only the target name changes:

---
- name: Debug make_<TARGET>_env
  when: make_<TARGET>_env is defined
  ansible.builtin.debug:
    var: make_<TARGET>_env

- name: Debug make_<TARGET>_params
  when: make_<TARGET>_params is defined
  ansible.builtin.debug:
    var: make_<TARGET>_params

- name: Run <TARGET>
  retries: "{{ make_<TARGET>_retries | default(omit) }}"
  delay: "{{ make_<TARGET>_delay | default(omit) }}"
  until: "{{ make_<TARGET>_until | default(true) }}"
  register: "make_<TARGET>_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make <TARGET>"
    dry_run: "{{ make_<TARGET>_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_<TARGET>_env|default({})), **(make_<TARGET>_params|default({}))) }}"

Archived in this group, all instances of the template above: make_mariadb_chainsaw.yml, make_horizon_prep.yml, make_horizon.yml, make_horizon_cleanup.yml, make_horizon_deploy_prep.yml, make_horizon_deploy.yml, make_horizon_deploy_cleanup.yml and make_heat_prep.yml. A sketch of how one of these wrappers is typically consumed follows below.
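For orientation only, a minimal sketch of how one of these generated wrappers might be invoked from a playbook. The play layout, the relative path to the task file and all variable values are assumptions, not taken from this job:

# Sketch only: invoke the generated horizon wrapper with optional env/params dictionaries.
# Facts are gathered so ansible_user_dir exists for the wrapper's default output_dir.
- hosts: localhost
  gather_facts: true
  vars:
    make_horizon_env:        # forwarded to "make horizon" via extra_args; values invented
      NAMESPACE: openstack
    make_horizon_params:
      TIMEOUT: 500s
  tasks:
    - name: Run the horizon make target through its wrapper
      ansible.builtin.include_tasks:
        file: roles/install_yamls_makes/tasks/make_horizon.yml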
The next group of archived task files, in the same tasks directory and all instances of the template above: make_heat.yml, make_heat_cleanup.yml, make_heat_deploy_prep.yml, make_heat_deploy.yml, make_heat_deploy_cleanup.yml, make_ansibleee_prep.yml, make_ansibleee.yml, make_ansibleee_cleanup.yml, make_baremetal_prep.yml, make_baremetal.yml, make_baremetal_cleanup.yml and make_ceph_help.yml. A note on how the env and params dictionaries are merged follows below.
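The extra_args expression in every wrapper merges the *_env and *_params dictionaries with dict(env, **params), which, assuming the usual Python/Jinja2 semantics, lets keys from *_params override keys from *_env. A small illustration with invented values:

# Sketch only: the assumed merge behaviour of dict(env, **params); values invented.
- hosts: localhost
  gather_facts: false
  vars:
    make_heat_env:
      NAMESPACE: openstack
      TIMEOUT: 300s
    make_heat_params:
      TIMEOUT: 600s          # expected to override the env value
  tasks:
    - name: Show the merged dictionary the wrapper would pass as extra_args
      ansible.builtin.debug:
        msg: "{{ dict((make_heat_env | default({})), **(make_heat_params | default({}))) }}"

Under that assumption the task prints NAMESPACE: openstack together with TIMEOUT: 600s, i.e. *_params wins on key collisions.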
Further archived task files, again all following the template: make_ceph.yml, make_ceph_cleanup.yml, make_rook_prep.yml, make_rook.yml, make_rook_deploy_prep.yml, make_rook_deploy.yml, make_rook_crc_disk.yml, make_rook_cleanup.yml, make_lvms.yml, make_nmstate.yml, make_nncp.yml and make_nncp_cleanup.yml. A note on the retry knobs these wrappers expose follows below.
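Every wrapper also exposes per-target knobs through the *_retries, *_delay, *_until and *_dryrun variables visible in the template; until defaults to true, so a single attempt is made unless *_until is overridden with a condition that can evaluate to false. A caller could set, for example (names follow the visible pattern, values invented):

# Sketch only: per-target knobs read by the wrapper tasks above; values invented.
make_nncp_retries: 5       # forwarded to the task's retries
make_nncp_delay: 30        # forwarded to the task's delay (seconds)
make_nncp_dryrun: true     # forwarded to cifmw.general.ci_script's dry_run flag (behaviour assumed)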
Further archived task files, same template: make_netattach.yml, make_netattach_cleanup.yml, make_metallb.yml, make_metallb_config.yml, make_metallb_config_cleanup.yml, make_metallb_cleanup.yml, make_loki.yml, make_loki_cleanup.yml, make_loki_deploy.yml, make_loki_deploy_cleanup.yml, make_netobserv.yml, make_netobserv_cleanup.yml, make_netobserv_deploy.yml, make_netobserv_deploy_cleanup.yml, make_manila_prep.yml, make_manila.yml and make_manila_cleanup.yml. A note on relocating the wrappers' output directory follows below.
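Per the output_dir expression repeated in each wrapper, all script output is collected under {{ cifmw_basedir }}/artifacts, and cifmw_basedir itself falls back to ansible_user_dir ~ '/ci-framework-data'. Relocating the collected logs is therefore a single variable override; the path below is an invented example:

# Sketch only: relocate every wrapper's collected output; the path is invented.
cifmw_basedir: /tmp/cifmw-debug    # wrappers would then log under /tmp/cifmw-debug/artifacts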
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy_prep.yml
---
- name: Debug make_manila_deploy_prep_env
  when: make_manila_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_prep_env

- name: Debug make_manila_deploy_prep_params
  when: make_manila_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_prep_params

- name: Run manila_deploy_prep
  retries: "{{ make_manila_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_prep_delay | default(omit) }}"
  until: "{{ make_manila_deploy_prep_until | default(true) }}"
  register: "make_manila_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy_prep"
    dry_run: "{{ make_manila_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_prep_env|default({})), **(make_manila_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy.yml
---
- name: Debug make_manila_deploy_env
  when: make_manila_deploy_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_env

- name: Debug make_manila_deploy_params
  when: make_manila_deploy_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_params

- name: Run manila_deploy
  retries: "{{ make_manila_deploy_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_delay | default(omit) }}"
  until: "{{ make_manila_deploy_until | default(true) }}"
  register: "make_manila_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy"
    dry_run: "{{ make_manila_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_env|default({})), **(make_manila_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy_cleanup.yml
---
- name: Debug make_manila_deploy_cleanup_env
  when: make_manila_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_cleanup_env

- name: Debug make_manila_deploy_cleanup_params
  when: make_manila_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_cleanup_params

- name: Run manila_deploy_cleanup
  retries: "{{ make_manila_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_manila_deploy_cleanup_until | default(true) }}"
  register: "make_manila_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy_cleanup"
    dry_run: "{{ make_manila_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_cleanup_env|default({})), **(make_manila_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_prep.yml
---
- name: Debug make_telemetry_prep_env
  when: make_telemetry_prep_env is defined
  ansible.builtin.debug:
    var: make_telemetry_prep_env

- name: Debug make_telemetry_prep_params
  when: make_telemetry_prep_params is defined
  ansible.builtin.debug:
    var: make_telemetry_prep_params

- name: Run telemetry_prep
  retries: "{{ make_telemetry_prep_retries | default(omit) }}"
  delay: "{{ make_telemetry_prep_delay | default(omit) }}"
  until: "{{ make_telemetry_prep_until | default(true) }}"
  register: "make_telemetry_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_prep"
    dry_run: "{{ make_telemetry_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_prep_env|default({})), **(make_telemetry_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry.yml
---
- name: Debug make_telemetry_env
  when: make_telemetry_env is defined
  ansible.builtin.debug:
    var: make_telemetry_env

- name: Debug make_telemetry_params
  when: make_telemetry_params is defined
  ansible.builtin.debug:
    var: make_telemetry_params

- name: Run telemetry
  retries: "{{ make_telemetry_retries | default(omit) }}"
  delay: "{{ make_telemetry_delay | default(omit) }}"
  until: "{{ make_telemetry_until | default(true) }}"
  register: "make_telemetry_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry"
    dry_run: "{{ make_telemetry_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_env|default({})), **(make_telemetry_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_cleanup.yml
---
- name: Debug make_telemetry_cleanup_env
  when: make_telemetry_cleanup_env is defined
  ansible.builtin.debug:
    var: make_telemetry_cleanup_env

- name: Debug make_telemetry_cleanup_params
  when: make_telemetry_cleanup_params is defined
  ansible.builtin.debug:
    var: make_telemetry_cleanup_params

- name: Run telemetry_cleanup
  retries: "{{ make_telemetry_cleanup_retries | default(omit) }}"
  delay: "{{ make_telemetry_cleanup_delay | default(omit) }}"
  until: "{{ make_telemetry_cleanup_until | default(true) }}"
  register: "make_telemetry_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_cleanup"
    dry_run: "{{ make_telemetry_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_cleanup_env|default({})), **(make_telemetry_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy_prep.yml
---
- name: Debug make_telemetry_deploy_prep_env
  when: make_telemetry_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_prep_env

- name: Debug make_telemetry_deploy_prep_params
  when: make_telemetry_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_prep_params

- name: Run telemetry_deploy_prep
  retries: "{{ make_telemetry_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_prep_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_prep_until | default(true) }}"
  register: "make_telemetry_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy_prep"
    dry_run: "{{ make_telemetry_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_prep_env|default({})), **(make_telemetry_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy.yml
---
- name: Debug make_telemetry_deploy_env
  when: make_telemetry_deploy_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_env

- name: Debug make_telemetry_deploy_params
  when: make_telemetry_deploy_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_params

- name: Run telemetry_deploy
  retries: "{{ make_telemetry_deploy_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_until | default(true) }}"
  register: "make_telemetry_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy"
    dry_run: "{{ make_telemetry_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_env|default({})), **(make_telemetry_deploy_params|default({}))) }}"
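These generated task files appear to be consumed by including them with the matching make_<target>_env / make_<target>_params variables set; whatever those mappings contain is merged into extra_args for cifmw.general.ci_script. A minimal usage sketch for the make_telemetry_deploy.yml file above, assuming an ansible.builtin.include_tasks wrapper and illustrative variable values that are not part of this archive:

# Hypothetical wrapper task (assumption): include the generated file and supply
# the variables its Debug/Run tasks consume. Keys and values are examples only.
- name: Deploy telemetry via install_yamls
  ansible.builtin.include_tasks: make_telemetry_deploy.yml
  vars:
    make_telemetry_deploy_env:
      NAMESPACE: openstack          # assumed example key/value
    make_telemetry_deploy_params:
      TIMEOUT: 600s                 # assumed example key/value
    make_telemetry_deploy_retries: 3
    make_telemetry_deploy_delay: 10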
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy_cleanup.yml
---
- name: Debug make_telemetry_deploy_cleanup_env
  when: make_telemetry_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_cleanup_env

- name: Debug make_telemetry_deploy_cleanup_params
  when: make_telemetry_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_cleanup_params

- name: Run telemetry_deploy_cleanup
  retries: "{{ make_telemetry_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_cleanup_until | default(true) }}"
  register: "make_telemetry_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy_cleanup"
    dry_run: "{{ make_telemetry_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_cleanup_env|default({})), **(make_telemetry_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_kuttl_run.yml
---
- name: Debug make_telemetry_kuttl_run_env
  when: make_telemetry_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_run_env

- name: Debug make_telemetry_kuttl_run_params
  when: make_telemetry_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_run_params

- name: Run telemetry_kuttl_run
  retries: "{{ make_telemetry_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_telemetry_kuttl_run_delay | default(omit) }}"
  until: "{{ make_telemetry_kuttl_run_until | default(true) }}"
  register: "make_telemetry_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_kuttl_run"
    dry_run: "{{ make_telemetry_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_kuttl_run_env|default({})), **(make_telemetry_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_kuttl.yml
---
- name: Debug make_telemetry_kuttl_env
  when: make_telemetry_kuttl_env is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_env

- name: Debug make_telemetry_kuttl_params
  when: make_telemetry_kuttl_params is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_params

- name: Run telemetry_kuttl
  retries: "{{ make_telemetry_kuttl_retries | default(omit) }}"
  delay: "{{ make_telemetry_kuttl_delay | default(omit) }}"
  until: "{{ make_telemetry_kuttl_until | default(true) }}"
  register: "make_telemetry_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_kuttl"
    dry_run: "{{ make_telemetry_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_kuttl_env|default({})), **(make_telemetry_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_prep.yml
---
- name: Debug make_swift_prep_env
  when: make_swift_prep_env is defined
  ansible.builtin.debug:
    var: make_swift_prep_env

- name: Debug make_swift_prep_params
  when: make_swift_prep_params is defined
  ansible.builtin.debug:
    var: make_swift_prep_params

- name: Run swift_prep
  retries: "{{ make_swift_prep_retries | default(omit) }}"
  delay: "{{ make_swift_prep_delay | default(omit) }}"
  until: "{{ make_swift_prep_until | default(true) }}"
  register: "make_swift_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_prep"
    dry_run: "{{ make_swift_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_prep_env|default({})), **(make_swift_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift.yml
---
- name: Debug make_swift_env
  when: make_swift_env is defined
  ansible.builtin.debug:
    var: make_swift_env

- name: Debug make_swift_params
  when: make_swift_params is defined
  ansible.builtin.debug:
    var: make_swift_params

- name: Run swift
  retries: "{{ make_swift_retries | default(omit) }}"
  delay: "{{ make_swift_delay | default(omit) }}"
  until: "{{ make_swift_until | default(true) }}"
  register: "make_swift_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift"
    dry_run: "{{ make_swift_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_env|default({})), **(make_swift_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_cleanup.yml
---
- name: Debug make_swift_cleanup_env
  when: make_swift_cleanup_env is defined
  ansible.builtin.debug:
    var: make_swift_cleanup_env

- name: Debug make_swift_cleanup_params
  when: make_swift_cleanup_params is defined
  ansible.builtin.debug:
    var: make_swift_cleanup_params

- name: Run swift_cleanup
  retries: "{{ make_swift_cleanup_retries | default(omit) }}"
  delay: "{{ make_swift_cleanup_delay | default(omit) }}"
  until: "{{ make_swift_cleanup_until | default(true) }}"
  register: "make_swift_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_cleanup"
    dry_run: "{{ make_swift_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_cleanup_env|default({})), **(make_swift_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy_prep.yml
---
- name: Debug make_swift_deploy_prep_env
  when: make_swift_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_prep_env

- name: Debug make_swift_deploy_prep_params
  when: make_swift_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_prep_params

- name: Run swift_deploy_prep
  retries: "{{ make_swift_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_prep_delay | default(omit) }}"
  until: "{{ make_swift_deploy_prep_until | default(true) }}"
  register: "make_swift_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy_prep"
    dry_run: "{{ make_swift_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_prep_env|default({})), **(make_swift_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy.yml
---
- name: Debug make_swift_deploy_env
  when: make_swift_deploy_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_env

- name: Debug make_swift_deploy_params
  when: make_swift_deploy_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_params

- name: Run swift_deploy
  retries: "{{ make_swift_deploy_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_delay | default(omit) }}"
  until: "{{ make_swift_deploy_until | default(true) }}"
  register: "make_swift_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy"
    dry_run: "{{ make_swift_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_env|default({})), **(make_swift_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy_cleanup.yml
---
- name: Debug make_swift_deploy_cleanup_env
  when: make_swift_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_cleanup_env

- name: Debug make_swift_deploy_cleanup_params
  when: make_swift_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_cleanup_params

- name: Run swift_deploy_cleanup
  retries: "{{ make_swift_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_swift_deploy_cleanup_until | default(true) }}"
  register: "make_swift_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy_cleanup"
    dry_run: "{{ make_swift_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_cleanup_env|default({})), **(make_swift_deploy_cleanup_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_certmanager.yml
---
- name: Debug make_certmanager_env
  when: make_certmanager_env is defined
  ansible.builtin.debug:
    var: make_certmanager_env

- name: Debug make_certmanager_params
  when: make_certmanager_params is defined
  ansible.builtin.debug:
    var: make_certmanager_params

- name: Run certmanager
  retries: "{{ make_certmanager_retries | default(omit) }}"
  delay: "{{ make_certmanager_delay | default(omit) }}"
  until: "{{ make_certmanager_until | default(true) }}"
  register: "make_certmanager_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make certmanager"
    dry_run: "{{ make_certmanager_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_certmanager_env|default({})), **(make_certmanager_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_certmanager_cleanup.yml
---
- name: Debug make_certmanager_cleanup_env
  when: make_certmanager_cleanup_env is defined
  ansible.builtin.debug:
    var: make_certmanager_cleanup_env

- name: Debug make_certmanager_cleanup_params
  when: make_certmanager_cleanup_params is defined
  ansible.builtin.debug:
    var: make_certmanager_cleanup_params

- name: Run certmanager_cleanup
  retries: "{{ make_certmanager_cleanup_retries | default(omit) }}"
  delay: "{{ make_certmanager_cleanup_delay | default(omit) }}"
  until: "{{ make_certmanager_cleanup_until | default(true) }}"
  register: "make_certmanager_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make certmanager_cleanup"
    dry_run: "{{ make_certmanager_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_certmanager_cleanup_env|default({})), **(make_certmanager_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_validate_marketplace.yml
---
- name: Debug make_validate_marketplace_env
  when: make_validate_marketplace_env is defined
  ansible.builtin.debug:
    var: make_validate_marketplace_env

- name: Debug make_validate_marketplace_params
  when: make_validate_marketplace_params is defined
  ansible.builtin.debug:
    var: make_validate_marketplace_params

- name: Run validate_marketplace
  retries: "{{ make_validate_marketplace_retries | default(omit) }}"
  delay: "{{ make_validate_marketplace_delay | default(omit) }}"
  until: "{{ make_validate_marketplace_until | default(true) }}"
  register: "make_validate_marketplace_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make validate_marketplace"
    dry_run: "{{ make_validate_marketplace_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_validate_marketplace_env|default({})), **(make_validate_marketplace_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy_prep.yml
---
- name: Debug make_redis_deploy_prep_env
  when: make_redis_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_prep_env

- name: Debug make_redis_deploy_prep_params
  when: make_redis_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_prep_params

- name: Run redis_deploy_prep
  retries: "{{ make_redis_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_prep_delay | default(omit) }}"
  until: "{{ make_redis_deploy_prep_until | default(true) }}"
  register: "make_redis_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy_prep"
    dry_run: "{{ make_redis_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_prep_env|default({})), **(make_redis_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy.yml
---
- name: Debug make_redis_deploy_env
  when: make_redis_deploy_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_env

- name: Debug make_redis_deploy_params
  when: make_redis_deploy_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_params

- name: Run redis_deploy
  retries: "{{ make_redis_deploy_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_delay | default(omit) }}"
  until: "{{ make_redis_deploy_until | default(true) }}"
  register: "make_redis_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy"
    dry_run: "{{ make_redis_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_env|default({})), **(make_redis_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy_cleanup.yml
---
- name: Debug make_redis_deploy_cleanup_env
  when: make_redis_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_cleanup_env

- name: Debug make_redis_deploy_cleanup_params
  when: make_redis_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_cleanup_params

- name: Run redis_deploy_cleanup
  retries: "{{ make_redis_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_redis_deploy_cleanup_until | default(true) }}"
  register: "make_redis_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy_cleanup"
    dry_run: "{{ make_redis_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_cleanup_env|default({})), **(make_redis_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_set_slower_etcd_profile.yml
---
- name: Debug make_set_slower_etcd_profile_env
  when: make_set_slower_etcd_profile_env is defined
  ansible.builtin.debug:
    var: make_set_slower_etcd_profile_env

- name: Debug make_set_slower_etcd_profile_params
  when: make_set_slower_etcd_profile_params is defined
  ansible.builtin.debug:
    var: make_set_slower_etcd_profile_params

- name: Run set_slower_etcd_profile
  retries: "{{ make_set_slower_etcd_profile_retries | default(omit) }}"
  delay: "{{ make_set_slower_etcd_profile_delay | default(omit) }}"
  until: "{{ make_set_slower_etcd_profile_until | default(true) }}"
  register: "make_set_slower_etcd_profile_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make set_slower_etcd_profile"
    dry_run: "{{ make_set_slower_etcd_profile_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_set_slower_etcd_profile_env|default({})), **(make_set_slower_etcd_profile_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_download_tools.yml
---
- name: Debug make_download_tools_env
  when: make_download_tools_env is defined
  ansible.builtin.debug:
    var: make_download_tools_env

- name: Debug make_download_tools_params
  when: make_download_tools_params is defined
  ansible.builtin.debug:
    var: make_download_tools_params

- name: Run download_tools
  retries: "{{ make_download_tools_retries | default(omit) }}"
  delay: "{{ make_download_tools_delay | default(omit) }}"
  until: "{{ make_download_tools_until | default(true) }}"
  register: "make_download_tools_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make download_tools"
    dry_run: "{{ make_download_tools_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_download_tools_env|default({})), **(make_download_tools_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs.yml
---
- name: Debug make_nfs_env
  when: make_nfs_env is defined
  ansible.builtin.debug:
    var: make_nfs_env

- name: Debug make_nfs_params
  when: make_nfs_params is defined
  ansible.builtin.debug:
    var: make_nfs_params

- name: Run nfs
  retries: "{{ make_nfs_retries | default(omit) }}"
  delay: "{{ make_nfs_delay | default(omit) }}"
  until: "{{ make_nfs_until | default(true) }}"
  register: "make_nfs_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make nfs"
    dry_run: "{{ make_nfs_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_nfs_env|default({})), **(make_nfs_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs_cleanup.yml
---
- name: Debug make_nfs_cleanup_env
  when: make_nfs_cleanup_env is defined
  ansible.builtin.debug:
    var: make_nfs_cleanup_env

- name: Debug make_nfs_cleanup_params
  when: make_nfs_cleanup_params is defined
  ansible.builtin.debug:
    var: make_nfs_cleanup_params

- name: Run nfs_cleanup
  retries: "{{ make_nfs_cleanup_retries | default(omit) }}"
  delay: "{{ make_nfs_cleanup_delay | default(omit) }}"
  until: "{{ make_nfs_cleanup_until | default(true) }}"
  register: "make_nfs_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make nfs_cleanup"
    dry_run: "{{ make_nfs_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_nfs_cleanup_env|default({})), **(make_nfs_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc.yml
---
- name: Debug make_crc_env
  when: make_crc_env is defined
  ansible.builtin.debug:
    var: make_crc_env

- name: Debug make_crc_params
  when: make_crc_params is defined
  ansible.builtin.debug:
    var: make_crc_params

- name: Run crc
  retries: "{{ make_crc_retries | default(omit) }}"
  delay: "{{ make_crc_delay | default(omit) }}"
  until: "{{ make_crc_until | default(true) }}"
  register: "make_crc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc"
    dry_run: "{{ make_crc_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_env|default({})), **(make_crc_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_cleanup.yml
---
- name: Debug make_crc_cleanup_env
  when: make_crc_cleanup_env is defined
  ansible.builtin.debug:
    var: make_crc_cleanup_env

- name: Debug make_crc_cleanup_params
  when: make_crc_cleanup_params is defined
  ansible.builtin.debug:
    var: make_crc_cleanup_params

- name: Run crc_cleanup
  retries: "{{ make_crc_cleanup_retries | default(omit) }}"
  delay: "{{ make_crc_cleanup_delay | default(omit) }}"
  until: "{{ make_crc_cleanup_until | default(true) }}"
  register: "make_crc_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_cleanup"
    dry_run: "{{ make_crc_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_cleanup_env|default({})), **(make_crc_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_scrub.yml
---
- name: Debug make_crc_scrub_env
  when: make_crc_scrub_env is defined
  ansible.builtin.debug:
    var: make_crc_scrub_env

- name: Debug make_crc_scrub_params
  when: make_crc_scrub_params is defined
  ansible.builtin.debug:
    var: make_crc_scrub_params

- name: Run crc_scrub
  retries: "{{ make_crc_scrub_retries | default(omit) }}"
  delay: "{{ make_crc_scrub_delay | default(omit) }}"
  until: "{{ make_crc_scrub_until | default(true) }}"
  register: "make_crc_scrub_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_scrub"
    dry_run: "{{ make_crc_scrub_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_scrub_env|default({})), **(make_crc_scrub_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_attach_default_interface.yml
---
- name: Debug make_crc_attach_default_interface_env
  when: make_crc_attach_default_interface_env is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_env

- name: Debug make_crc_attach_default_interface_params
  when: make_crc_attach_default_interface_params is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_params

- name: Run crc_attach_default_interface
  retries: "{{ make_crc_attach_default_interface_retries | default(omit) }}"
  delay: "{{ make_crc_attach_default_interface_delay | default(omit) }}"
  until: "{{ make_crc_attach_default_interface_until | default(true) }}"
  register: "make_crc_attach_default_interface_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_attach_default_interface"
    dry_run: "{{ make_crc_attach_default_interface_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_attach_default_interface_env|default({})), **(make_crc_attach_default_interface_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_attach_default_interface_cleanup.yml
---
- name: Debug make_crc_attach_default_interface_cleanup_env
  when: make_crc_attach_default_interface_cleanup_env is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_cleanup_env

- name: Debug make_crc_attach_default_interface_cleanup_params
  when: make_crc_attach_default_interface_cleanup_params is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_cleanup_params

- name: Run crc_attach_default_interface_cleanup
  retries: "{{ make_crc_attach_default_interface_cleanup_retries | default(omit) }}"
  delay: "{{ make_crc_attach_default_interface_cleanup_delay | default(omit) }}"
  until: "{{ make_crc_attach_default_interface_cleanup_until | default(true) }}"
  register: "make_crc_attach_default_interface_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_attach_default_interface_cleanup"
    dry_run: "{{ make_crc_attach_default_interface_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_attach_default_interface_cleanup_env|default({})), **(make_crc_attach_default_interface_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_network.yml
---
- name: Debug make_ipv6_lab_network_env
  when: make_ipv6_lab_network_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_env

- name: Debug make_ipv6_lab_network_params
  when: make_ipv6_lab_network_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_params

- name: Run ipv6_lab_network
  retries: "{{ make_ipv6_lab_network_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_network_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_network_until | default(true) }}"
  register: "make_ipv6_lab_network_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_network"
    dry_run: "{{ make_ipv6_lab_network_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_network_env|default({})), **(make_ipv6_lab_network_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_network_cleanup.yml
---
- name: Debug make_ipv6_lab_network_cleanup_env
  when: make_ipv6_lab_network_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_cleanup_env

- name: Debug make_ipv6_lab_network_cleanup_params
  when: make_ipv6_lab_network_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_cleanup_params

- name: Run ipv6_lab_network_cleanup
  retries: "{{ make_ipv6_lab_network_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_network_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_network_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_network_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_network_cleanup"
    dry_run: "{{ make_ipv6_lab_network_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_network_cleanup_env|default({})), **(make_ipv6_lab_network_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_nat64_router.yml
---
- name: Debug make_ipv6_lab_nat64_router_env
  when: make_ipv6_lab_nat64_router_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_env

- name: Debug make_ipv6_lab_nat64_router_params
  when: make_ipv6_lab_nat64_router_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_params

- name: Run ipv6_lab_nat64_router
  retries: "{{ make_ipv6_lab_nat64_router_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_nat64_router_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_nat64_router_until | default(true) }}"
  register: "make_ipv6_lab_nat64_router_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_nat64_router"
    dry_run: "{{ make_ipv6_lab_nat64_router_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_nat64_router_env|default({})), **(make_ipv6_lab_nat64_router_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_nat64_router_cleanup.yml
---
- name: Debug make_ipv6_lab_nat64_router_cleanup_env
  when: make_ipv6_lab_nat64_router_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_cleanup_env

- name: Debug make_ipv6_lab_nat64_router_cleanup_params
  when: make_ipv6_lab_nat64_router_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_cleanup_params

- name: Run ipv6_lab_nat64_router_cleanup
  retries: "{{ make_ipv6_lab_nat64_router_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_nat64_router_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_nat64_router_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_nat64_router_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_nat64_router_cleanup"
    dry_run: "{{ make_ipv6_lab_nat64_router_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_nat64_router_cleanup_env|default({})), **(make_ipv6_lab_nat64_router_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_sno.yml
---
- name: Debug make_ipv6_lab_sno_env
  when: make_ipv6_lab_sno_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_env

- name: Debug make_ipv6_lab_sno_params
  when: make_ipv6_lab_sno_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_params

- name: Run ipv6_lab_sno
  retries: "{{ make_ipv6_lab_sno_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_sno_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_sno_until | default(true) }}"
  register: "make_ipv6_lab_sno_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_sno"
    dry_run: "{{ make_ipv6_lab_sno_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_sno_env|default({})), **(make_ipv6_lab_sno_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_sno_cleanup.yml
---
- name: Debug make_ipv6_lab_sno_cleanup_env
  when: make_ipv6_lab_sno_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_cleanup_env

- name: Debug make_ipv6_lab_sno_cleanup_params
  when: make_ipv6_lab_sno_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_cleanup_params

- name: Run ipv6_lab_sno_cleanup
  retries: "{{ make_ipv6_lab_sno_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_sno_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_sno_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_sno_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_sno_cleanup"
    dry_run: "{{ make_ipv6_lab_sno_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_sno_cleanup_env|default({})), **(make_ipv6_lab_sno_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab.yml
---
- name: Debug make_ipv6_lab_env
  when: make_ipv6_lab_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_env

- name: Debug make_ipv6_lab_params
  when: make_ipv6_lab_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_params

- name: Run ipv6_lab
  retries: "{{ make_ipv6_lab_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_until | default(true) }}"
  register: "make_ipv6_lab_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab"
    dry_run: "{{ make_ipv6_lab_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_env|default({})), **(make_ipv6_lab_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_cleanup.yml
---
- name: Debug make_ipv6_lab_cleanup_env
  when: make_ipv6_lab_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_cleanup_env

- name: Debug make_ipv6_lab_cleanup_params
  when: make_ipv6_lab_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_cleanup_params

- name: Run ipv6_lab_cleanup
  retries: "{{ make_ipv6_lab_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_cleanup"
    dry_run: "{{ make_ipv6_lab_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_cleanup_env|default({})), **(make_ipv6_lab_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_default_interface.yml
---
- name: Debug make_attach_default_interface_env
  when: make_attach_default_interface_env is defined
  ansible.builtin.debug:
    var: make_attach_default_interface_env

- name: Debug make_attach_default_interface_params
  when: make_attach_default_interface_params is defined
  ansible.builtin.debug:
    var: make_attach_default_interface_params

- name: Run attach_default_interface
  retries: "{{ make_attach_default_interface_retries | default(omit) }}"
  delay: "{{ make_attach_default_interface_delay | default(omit) }}"
  until: "{{ make_attach_default_interface_until | default(true) }}"
  register: "make_attach_default_interface_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make attach_default_interface"
    dry_run: "{{ make_attach_default_interface_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_attach_default_interface_env|default({})), **(make_attach_default_interface_params|default({}))) }}"
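The devsetup task files above (chdir ending in install_yamls/devsetup) are presumably chained when standing up a local test environment, for example creating the CRC VM and then attaching its default interface. A sketch of that ordering, assuming ansible.builtin.include_tasks wrappers and an illustrative PULL_SECRET value that is not taken from this archive:

# Hypothetical sequence (assumption): run two of the generated devsetup task
# files in order; variable names match the files above, values are examples.
- name: Create the CRC VM
  ansible.builtin.include_tasks: make_crc.yml
  vars:
    make_crc_env:
      PULL_SECRET: /home/zuul/pull-secret   # assumed example value
- name: Attach the default interface to the CRC VM
  ansible.builtin.include_tasks: make_attach_default_interface.yml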
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make attach_default_interface" dry_run: "{{ make_attach_default_interface_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_attach_default_interface_env|default({})), **(make_attach_default_interface_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_default_interface_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_d0000644000175000017500000000232215073042760033341 0ustar zuulzuul--- - name: Debug make_attach_default_interface_cleanup_env when: make_attach_default_interface_cleanup_env is defined ansible.builtin.debug: var: make_attach_default_interface_cleanup_env - name: Debug make_attach_default_interface_cleanup_params when: make_attach_default_interface_cleanup_params is defined ansible.builtin.debug: var: make_attach_default_interface_cleanup_params - name: Run attach_default_interface_cleanup retries: "{{ make_attach_default_interface_cleanup_retries | default(omit) }}" delay: "{{ make_attach_default_interface_cleanup_delay | default(omit) }}" until: "{{ make_attach_default_interface_cleanup_until | default(true) }}" register: "make_attach_default_interface_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make attach_default_interface_cleanup" dry_run: "{{ make_attach_default_interface_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_attach_default_interface_cleanup_env|default({})), **(make_attach_default_interface_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_isolation_bridge.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_0000644000175000017500000000213215073042760033421 0ustar zuulzuul--- - name: Debug make_network_isolation_bridge_env when: make_network_isolation_bridge_env is defined ansible.builtin.debug: var: make_network_isolation_bridge_env - name: Debug make_network_isolation_bridge_params when: make_network_isolation_bridge_params is defined ansible.builtin.debug: var: make_network_isolation_bridge_params - name: Run network_isolation_bridge retries: "{{ make_network_isolation_bridge_retries | default(omit) }}" delay: "{{ make_network_isolation_bridge_delay | default(omit) }}" until: "{{ make_network_isolation_bridge_until | default(true) }}" register: "make_network_isolation_bridge_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make network_isolation_bridge" dry_run: "{{ make_network_isolation_bridge_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_network_isolation_bridge_env|default({})), **(make_network_isolation_bridge_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_isolation_bridge_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_0000644000175000017500000000232215073042760033422 0ustar zuulzuul--- - name: Debug make_network_isolation_bridge_cleanup_env when: make_network_isolation_bridge_cleanup_env is defined ansible.builtin.debug: var: make_network_isolation_bridge_cleanup_env - name: Debug make_network_isolation_bridge_cleanup_params when: make_network_isolation_bridge_cleanup_params is defined ansible.builtin.debug: var: make_network_isolation_bridge_cleanup_params - name: Run network_isolation_bridge_cleanup retries: "{{ make_network_isolation_bridge_cleanup_retries | default(omit) }}" delay: "{{ make_network_isolation_bridge_cleanup_delay | default(omit) }}" until: "{{ make_network_isolation_bridge_cleanup_until | default(true) }}" register: "make_network_isolation_bridge_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make network_isolation_bridge_cleanup" dry_run: "{{ make_network_isolation_bridge_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_network_isolation_bridge_cleanup_env|default({})), **(make_network_isolation_bridge_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_baremetal_compute.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_bar0000644000175000017500000000207415073042760033347 0ustar zuulzuul--- - name: Debug make_edpm_baremetal_compute_env when: make_edpm_baremetal_compute_env is defined ansible.builtin.debug: var: make_edpm_baremetal_compute_env - name: Debug make_edpm_baremetal_compute_params when: make_edpm_baremetal_compute_params is defined ansible.builtin.debug: var: make_edpm_baremetal_compute_params - name: Run edpm_baremetal_compute retries: "{{ make_edpm_baremetal_compute_retries | default(omit) }}" delay: "{{ make_edpm_baremetal_compute_delay | default(omit) }}" until: "{{ make_edpm_baremetal_compute_until | default(true) }}" register: "make_edpm_baremetal_compute_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_baremetal_compute" dry_run: "{{ make_edpm_baremetal_compute_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_baremetal_compute_env|default({})), **(make_edpm_baremetal_compute_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_com0000644000175000017500000000164615073042760033365 0ustar zuulzuul--- - name: Debug make_edpm_compute_env when: make_edpm_compute_env is defined ansible.builtin.debug: var: make_edpm_compute_env - name: Debug make_edpm_compute_params when: make_edpm_compute_params is defined ansible.builtin.debug: var: make_edpm_compute_params - name: Run edpm_compute retries: "{{ 
make_edpm_compute_retries | default(omit) }}" delay: "{{ make_edpm_compute_delay | default(omit) }}" until: "{{ make_edpm_compute_until | default(true) }}" register: "make_edpm_compute_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_compute" dry_run: "{{ make_edpm_compute_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_compute_env|default({})), **(make_edpm_compute_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_bootc.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_com0000644000175000017500000000200015073042760033346 0ustar zuulzuul--- - name: Debug make_edpm_compute_bootc_env when: make_edpm_compute_bootc_env is defined ansible.builtin.debug: var: make_edpm_compute_bootc_env - name: Debug make_edpm_compute_bootc_params when: make_edpm_compute_bootc_params is defined ansible.builtin.debug: var: make_edpm_compute_bootc_params - name: Run edpm_compute_bootc retries: "{{ make_edpm_compute_bootc_retries | default(omit) }}" delay: "{{ make_edpm_compute_bootc_delay | default(omit) }}" until: "{{ make_edpm_compute_bootc_until | default(true) }}" register: "make_edpm_compute_bootc_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_compute_bootc" dry_run: "{{ make_edpm_compute_bootc_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_compute_bootc_env|default({})), **(make_edpm_compute_bootc_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_ansible_runner.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_ans0000644000175000017500000000201715073042760033361 0ustar zuulzuul--- - name: Debug make_edpm_ansible_runner_env when: make_edpm_ansible_runner_env is defined ansible.builtin.debug: var: make_edpm_ansible_runner_env - name: Debug make_edpm_ansible_runner_params when: make_edpm_ansible_runner_params is defined ansible.builtin.debug: var: make_edpm_ansible_runner_params - name: Run edpm_ansible_runner retries: "{{ make_edpm_ansible_runner_retries | default(omit) }}" delay: "{{ make_edpm_ansible_runner_delay | default(omit) }}" until: "{{ make_edpm_ansible_runner_until | default(true) }}" register: "make_edpm_ansible_runner_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_ansible_runner" dry_run: "{{ make_edpm_ansible_runner_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_ansible_runner_env|default({})), **(make_edpm_ansible_runner_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_computes_bgp.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_com0000644000175000017500000000176115073042760033363 0ustar zuulzuul--- - name: Debug make_edpm_computes_bgp_env when: make_edpm_computes_bgp_env is defined ansible.builtin.debug: var: make_edpm_computes_bgp_env - name: Debug make_edpm_computes_bgp_params when: make_edpm_computes_bgp_params is defined ansible.builtin.debug: var: make_edpm_computes_bgp_params - name: Run edpm_computes_bgp retries: "{{ make_edpm_computes_bgp_retries | default(omit) }}" delay: "{{ make_edpm_computes_bgp_delay | default(omit) }}" until: "{{ make_edpm_computes_bgp_until | default(true) }}" register: "make_edpm_computes_bgp_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_computes_bgp" dry_run: "{{ make_edpm_computes_bgp_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_computes_bgp_env|default({})), **(make_edpm_computes_bgp_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_repos.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_com0000644000175000017500000000200015073042760033346 0ustar zuulzuul--- - name: Debug make_edpm_compute_repos_env when: make_edpm_compute_repos_env is defined ansible.builtin.debug: var: make_edpm_compute_repos_env - name: Debug make_edpm_compute_repos_params when: make_edpm_compute_repos_params is defined ansible.builtin.debug: var: make_edpm_compute_repos_params - name: Run edpm_compute_repos retries: "{{ make_edpm_compute_repos_retries | default(omit) }}" delay: "{{ make_edpm_compute_repos_delay | default(omit) }}" until: "{{ make_edpm_compute_repos_until | default(true) }}" register: "make_edpm_compute_repos_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_compute_repos" dry_run: "{{ make_edpm_compute_repos_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_compute_repos_env|default({})), **(make_edpm_compute_repos_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_com0000644000175000017500000000203615073042760033357 0ustar zuulzuul--- - name: Debug make_edpm_compute_cleanup_env when: make_edpm_compute_cleanup_env is defined ansible.builtin.debug: var: make_edpm_compute_cleanup_env - name: Debug make_edpm_compute_cleanup_params when: make_edpm_compute_cleanup_params is defined ansible.builtin.debug: var: make_edpm_compute_cleanup_params - name: Run edpm_compute_cleanup retries: "{{ make_edpm_compute_cleanup_retries | default(omit) }}" delay: "{{ make_edpm_compute_cleanup_delay | default(omit) }}" until: "{{ make_edpm_compute_cleanup_until | default(true) }}" register: "make_edpm_compute_cleanup_status" cifmw.general.ci_script: 
output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_compute_cleanup" dry_run: "{{ make_edpm_compute_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_compute_cleanup_env|default({})), **(make_edpm_compute_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_networker.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_net0000644000175000017500000000170415073042760033370 0ustar zuulzuul--- - name: Debug make_edpm_networker_env when: make_edpm_networker_env is defined ansible.builtin.debug: var: make_edpm_networker_env - name: Debug make_edpm_networker_params when: make_edpm_networker_params is defined ansible.builtin.debug: var: make_edpm_networker_params - name: Run edpm_networker retries: "{{ make_edpm_networker_retries | default(omit) }}" delay: "{{ make_edpm_networker_delay | default(omit) }}" until: "{{ make_edpm_networker_until | default(true) }}" register: "make_edpm_networker_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_networker" dry_run: "{{ make_edpm_networker_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_networker_env|default({})), **(make_edpm_networker_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_networker_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_net0000644000175000017500000000207415073042760033371 0ustar zuulzuul--- - name: Debug make_edpm_networker_cleanup_env when: make_edpm_networker_cleanup_env is defined ansible.builtin.debug: var: make_edpm_networker_cleanup_env - name: Debug make_edpm_networker_cleanup_params when: make_edpm_networker_cleanup_params is defined ansible.builtin.debug: var: make_edpm_networker_cleanup_params - name: Run edpm_networker_cleanup retries: "{{ make_edpm_networker_cleanup_retries | default(omit) }}" delay: "{{ make_edpm_networker_cleanup_delay | default(omit) }}" until: "{{ make_edpm_networker_cleanup_until | default(true) }}" register: "make_edpm_networker_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_networker_cleanup" dry_run: "{{ make_edpm_networker_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_networker_cleanup_env|default({})), **(make_edpm_networker_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_instance.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000203615073042760033351 0ustar zuulzuul--- - name: Debug make_edpm_deploy_instance_env when: make_edpm_deploy_instance_env is defined ansible.builtin.debug: var: 
make_edpm_deploy_instance_env - name: Debug make_edpm_deploy_instance_params when: make_edpm_deploy_instance_params is defined ansible.builtin.debug: var: make_edpm_deploy_instance_params - name: Run edpm_deploy_instance retries: "{{ make_edpm_deploy_instance_retries | default(omit) }}" delay: "{{ make_edpm_deploy_instance_delay | default(omit) }}" until: "{{ make_edpm_deploy_instance_until | default(true) }}" register: "make_edpm_deploy_instance_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_deploy_instance" dry_run: "{{ make_edpm_deploy_instance_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_instance_env|default({})), **(make_edpm_deploy_instance_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_tripleo_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_tripleo_0000644000175000017500000000170415073042760033412 0ustar zuulzuul--- - name: Debug make_tripleo_deploy_env when: make_tripleo_deploy_env is defined ansible.builtin.debug: var: make_tripleo_deploy_env - name: Debug make_tripleo_deploy_params when: make_tripleo_deploy_params is defined ansible.builtin.debug: var: make_tripleo_deploy_params - name: Run tripleo_deploy retries: "{{ make_tripleo_deploy_retries | default(omit) }}" delay: "{{ make_tripleo_deploy_delay | default(omit) }}" until: "{{ make_tripleo_deploy_until | default(true) }}" register: "make_tripleo_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make tripleo_deploy" dry_run: "{{ make_tripleo_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_tripleo_deploy_env|default({})), **(make_tripleo_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000176115073042760033405 0ustar zuulzuul--- - name: Debug make_standalone_deploy_env when: make_standalone_deploy_env is defined ansible.builtin.debug: var: make_standalone_deploy_env - name: Debug make_standalone_deploy_params when: make_standalone_deploy_params is defined ansible.builtin.debug: var: make_standalone_deploy_params - name: Run standalone_deploy retries: "{{ make_standalone_deploy_retries | default(omit) }}" delay: "{{ make_standalone_deploy_delay | default(omit) }}" until: "{{ make_standalone_deploy_until | default(true) }}" register: "make_standalone_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone_deploy" dry_run: "{{ make_standalone_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_standalone_deploy_env|default({})), **(make_standalone_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar 
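
The task files in this roles/install_yamls_makes dump all follow the same generated pattern, and the only non-obvious piece is the extra_args expression: it merges the optional per-target *_env and *_params dictionaries via dict(env, **params), so a key present in both dictionaries takes its value from *_params. A small illustration with hypothetical variable values (not taken from this log):

# Hypothetical input variables, for illustration only:
make_standalone_deploy_env:
  FOO: "from-env"
  BAR: "1"
make_standalone_deploy_params:
  FOO: "from-params"
# extra_args would then render as:
#   {"FOO": "from-params", "BAR": "1"}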
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_sync.yml
---
- name: Debug make_standalone_sync_env
  when: make_standalone_sync_env is defined
  ansible.builtin.debug:
    var: make_standalone_sync_env
- name: Debug make_standalone_sync_params
  when: make_standalone_sync_params is defined
  ansible.builtin.debug:
    var: make_standalone_sync_params
- name: Run standalone_sync
  retries: "{{ make_standalone_sync_retries | default(omit) }}"
  delay: "{{ make_standalone_sync_delay | default(omit) }}"
  until: "{{ make_standalone_sync_until | default(true) }}"
  register: "make_standalone_sync_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_sync"
    dry_run: "{{ make_standalone_sync_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_standalone_sync_env|default({})), **(make_standalone_sync_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone.yml
---
- name: Debug make_standalone_env
  when: make_standalone_env is defined
  ansible.builtin.debug:
    var: make_standalone_env
- name: Debug make_standalone_params
  when: make_standalone_params is defined
  ansible.builtin.debug:
    var: make_standalone_params
- name: Run standalone
  retries: "{{ make_standalone_retries | default(omit) }}"
  delay: "{{ make_standalone_delay | default(omit) }}"
  until: "{{ make_standalone_until | default(true) }}"
  register: "make_standalone_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone"
    dry_run: "{{ make_standalone_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_standalone_env|default({})), **(make_standalone_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_cleanup.yml
---
- name: Debug make_standalone_cleanup_env
  when: make_standalone_cleanup_env is defined
  ansible.builtin.debug:
    var: make_standalone_cleanup_env
- name: Debug make_standalone_cleanup_params
  when: make_standalone_cleanup_params is defined
  ansible.builtin.debug:
    var: make_standalone_cleanup_params
- name: Run standalone_cleanup
  retries: "{{ make_standalone_cleanup_retries | default(omit) }}"
  delay: "{{ make_standalone_cleanup_delay | default(omit) }}"
  until: "{{ make_standalone_cleanup_until | default(true) }}"
  register: "make_standalone_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_cleanup"
    dry_run: "{{ make_standalone_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_standalone_cleanup_env|default({})), **(make_standalone_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_snapshot.yml
---
- name: Debug make_standalone_snapshot_env
  when: make_standalone_snapshot_env is defined
  ansible.builtin.debug:
    var: make_standalone_snapshot_env
- name: Debug make_standalone_snapshot_params
  when: make_standalone_snapshot_params is defined
  ansible.builtin.debug:
    var: make_standalone_snapshot_params
- name: Run standalone_snapshot
  retries: "{{ make_standalone_snapshot_retries | default(omit) }}"
  delay: "{{ make_standalone_snapshot_delay | default(omit) }}"
  until: "{{ make_standalone_snapshot_until | default(true) }}"
  register: "make_standalone_snapshot_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_snapshot"
    dry_run: "{{ make_standalone_snapshot_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_standalone_snapshot_env|default({})), **(make_standalone_snapshot_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_revert.yml
---
- name: Debug make_standalone_revert_env
  when: make_standalone_revert_env is defined
  ansible.builtin.debug:
    var: make_standalone_revert_env
- name: Debug make_standalone_revert_params
  when: make_standalone_revert_params is defined
  ansible.builtin.debug:
    var: make_standalone_revert_params
- name: Run standalone_revert
  retries: "{{ make_standalone_revert_retries | default(omit) }}"
  delay: "{{ make_standalone_revert_delay | default(omit) }}"
  until: "{{ make_standalone_revert_until | default(true) }}"
  register: "make_standalone_revert_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make standalone_revert"
    dry_run: "{{ make_standalone_revert_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_standalone_revert_env|default({})), **(make_standalone_revert_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_prepare.yml
---
- name: Debug make_cifmw_prepare_env
  when: make_cifmw_prepare_env is defined
  ansible.builtin.debug:
    var: make_cifmw_prepare_env
- name: Debug make_cifmw_prepare_params
  when: make_cifmw_prepare_params is defined
  ansible.builtin.debug:
    var: make_cifmw_prepare_params
- name: Run cifmw_prepare
  retries: "{{ make_cifmw_prepare_retries | default(omit) }}"
  delay: "{{ make_cifmw_prepare_delay | default(omit) }}"
  until: "{{ make_cifmw_prepare_until | default(true) }}"
  register: "make_cifmw_prepare_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make cifmw_prepare"
    dry_run: "{{ make_cifmw_prepare_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cifmw_prepare_env|default({})), **(make_cifmw_prepare_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_cleanup.yml
---
- name: Debug make_cifmw_cleanup_env
  when: make_cifmw_cleanup_env is defined
  ansible.builtin.debug:
    var: make_cifmw_cleanup_env
- name: Debug make_cifmw_cleanup_params
  when: make_cifmw_cleanup_params is defined
  ansible.builtin.debug:
    var: make_cifmw_cleanup_params
- name: Run cifmw_cleanup
  retries: "{{ make_cifmw_cleanup_retries | default(omit) }}"
  delay: "{{ make_cifmw_cleanup_delay | default(omit) }}"
  until: "{{ make_cifmw_cleanup_until | default(true) }}"
  register: "make_cifmw_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make cifmw_cleanup"
    dry_run: "{{ make_cifmw_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cifmw_cleanup_env|default({})), **(make_cifmw_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_network.yml
---
- name: Debug make_bmaas_network_env
  when: make_bmaas_network_env is defined
  ansible.builtin.debug:
    var: make_bmaas_network_env
- name: Debug make_bmaas_network_params
  when: make_bmaas_network_params is defined
  ansible.builtin.debug:
    var: make_bmaas_network_params
- name: Run bmaas_network
  retries: "{{ make_bmaas_network_retries | default(omit) }}"
  delay: "{{ make_bmaas_network_delay | default(omit) }}"
  until: "{{ make_bmaas_network_until | default(true) }}"
  register: "make_bmaas_network_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_network"
    dry_run: "{{ make_bmaas_network_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_network_env|default({})), **(make_bmaas_network_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_network_cleanup.yml
---
- name: Debug make_bmaas_network_cleanup_env
  when: make_bmaas_network_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_network_cleanup_env
- name: Debug make_bmaas_network_cleanup_params
  when: make_bmaas_network_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_network_cleanup_params
- name: Run bmaas_network_cleanup
  retries: "{{ make_bmaas_network_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_network_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_network_cleanup_until | default(true) }}"
  register: "make_bmaas_network_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_network_cleanup"
    dry_run: "{{ make_bmaas_network_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_network_cleanup_env|default({})), **(make_bmaas_network_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_route_crc_and_crc_bmaas_networks.yml
---
- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_env
  when: make_bmaas_route_crc_and_crc_bmaas_networks_env is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_env
- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_params
  when: make_bmaas_route_crc_and_crc_bmaas_networks_params is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_params
- name: Run bmaas_route_crc_and_crc_bmaas_networks
  retries: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_retries | default(omit) }}"
  delay: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_delay | default(omit) }}"
  until: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_until | default(true) }}"
  register: "make_bmaas_route_crc_and_crc_bmaas_networks_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_route_crc_and_crc_bmaas_networks"
    dry_run: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_route_crc_and_crc_bmaas_networks_env|default({})), **(make_bmaas_route_crc_and_crc_bmaas_networks_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_route_crc_and_crc_bmaas_networks_cleanup.yml
---
- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env
  when: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env
- name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params
  when: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params
- name: Run bmaas_route_crc_and_crc_bmaas_networks_cleanup
  retries: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_until | default(true) }}"
  register: "make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_route_crc_and_crc_bmaas_networks_cleanup"
    dry_run: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env|default({})), **(make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_metallb.yml
---
- name: Debug make_bmaas_metallb_env
  when: make_bmaas_metallb_env is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_env
- name: Debug make_bmaas_metallb_params
  when: make_bmaas_metallb_params is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_params
- name: Run bmaas_metallb
  retries: "{{ make_bmaas_metallb_retries | default(omit) }}"
  delay: "{{ make_bmaas_metallb_delay | default(omit) }}"
  until: "{{ make_bmaas_metallb_until | default(true) }}"
  register: "make_bmaas_metallb_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_metallb"
    dry_run: "{{ make_bmaas_metallb_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_metallb_env|default({})), **(make_bmaas_metallb_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_attach_network.yml
---
- name: Debug make_bmaas_crc_attach_network_env
  when: make_bmaas_crc_attach_network_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_env
- name: Debug make_bmaas_crc_attach_network_params
  when: make_bmaas_crc_attach_network_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_params
- name: Run bmaas_crc_attach_network
  retries: "{{ make_bmaas_crc_attach_network_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_attach_network_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_attach_network_until | default(true) }}"
  register: "make_bmaas_crc_attach_network_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_attach_network"
    dry_run: "{{ make_bmaas_crc_attach_network_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_crc_attach_network_env|default({})), **(make_bmaas_crc_attach_network_params|default({}))) }}"
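
These wrapper files are consumed one per make target. A minimal sketch of pulling one of them into a play follows; the play and host names are hypothetical, and it assumes the install_yamls_makes role and the cifmw.general collection are available to the controller, as the artifacts in this job suggest.

- name: Hypothetical example play wrapping "make bmaas_network"
  hosts: controller
  tasks:
    - name: Include the generated wrapper task file
      vars:
        # Assumed example knobs; the variable names follow the per-target pattern shown above.
        make_bmaas_network_retries: 3
        make_bmaas_network_delay: 10
        make_bmaas_network_env:
          SOME_MAKE_VAR: "value"   # hypothetical key, not taken from this log
      ansible.builtin.include_role:
        name: install_yamls_makes
        tasks_from: make_bmaas_network

Note that because until defaults to true in these files, the retries/delay knobs should only take effect when a stricter per-target *_until expression is also supplied.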
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_attach_network_cleanup.yml
---
- name: Debug make_bmaas_crc_attach_network_cleanup_env
  when: make_bmaas_crc_attach_network_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_cleanup_env
- name: Debug make_bmaas_crc_attach_network_cleanup_params
  when: make_bmaas_crc_attach_network_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_attach_network_cleanup_params
- name: Run bmaas_crc_attach_network_cleanup
  retries: "{{ make_bmaas_crc_attach_network_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_attach_network_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_attach_network_cleanup_until | default(true) }}"
  register: "make_bmaas_crc_attach_network_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_attach_network_cleanup"
    dry_run: "{{ make_bmaas_crc_attach_network_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_crc_attach_network_cleanup_env|default({})), **(make_bmaas_crc_attach_network_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_baremetal_bridge.yml
---
- name: Debug make_bmaas_crc_baremetal_bridge_env
  when: make_bmaas_crc_baremetal_bridge_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_env
- name: Debug make_bmaas_crc_baremetal_bridge_params
  when: make_bmaas_crc_baremetal_bridge_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_params
- name: Run bmaas_crc_baremetal_bridge
  retries: "{{ make_bmaas_crc_baremetal_bridge_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_baremetal_bridge_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_baremetal_bridge_until | default(true) }}"
  register: "make_bmaas_crc_baremetal_bridge_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_baremetal_bridge"
    dry_run: "{{ make_bmaas_crc_baremetal_bridge_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_crc_baremetal_bridge_env|default({})), **(make_bmaas_crc_baremetal_bridge_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_baremetal_bridge_cleanup.yml
---
- name: Debug make_bmaas_crc_baremetal_bridge_cleanup_env
  when: make_bmaas_crc_baremetal_bridge_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_cleanup_env
- name: Debug make_bmaas_crc_baremetal_bridge_cleanup_params
  when: make_bmaas_crc_baremetal_bridge_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_cleanup_params
- name: Run bmaas_crc_baremetal_bridge_cleanup
  retries: "{{ make_bmaas_crc_baremetal_bridge_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_baremetal_bridge_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_baremetal_bridge_cleanup_until | default(true) }}"
  register: "make_bmaas_crc_baremetal_bridge_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_baremetal_bridge_cleanup"
    dry_run: "{{ make_bmaas_crc_baremetal_bridge_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_crc_baremetal_bridge_cleanup_env|default({})), **(make_bmaas_crc_baremetal_bridge_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_baremetal_net_nad.yml
---
- name: Debug make_bmaas_baremetal_net_nad_env
  when: make_bmaas_baremetal_net_nad_env is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_env
- name: Debug make_bmaas_baremetal_net_nad_params
  when: make_bmaas_baremetal_net_nad_params is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_params
- name: Run bmaas_baremetal_net_nad
  retries: "{{ make_bmaas_baremetal_net_nad_retries | default(omit) }}"
  delay: "{{ make_bmaas_baremetal_net_nad_delay | default(omit) }}"
  until: "{{ make_bmaas_baremetal_net_nad_until | default(true) }}"
  register: "make_bmaas_baremetal_net_nad_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_baremetal_net_nad"
    dry_run: "{{ make_bmaas_baremetal_net_nad_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_baremetal_net_nad_env|default({})), **(make_bmaas_baremetal_net_nad_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_baremetal_net_nad_cleanup.yml
---
- name: Debug make_bmaas_baremetal_net_nad_cleanup_env
  when: make_bmaas_baremetal_net_nad_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_cleanup_env
- name: Debug make_bmaas_baremetal_net_nad_cleanup_params
  when: make_bmaas_baremetal_net_nad_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_cleanup_params
- name: Run bmaas_baremetal_net_nad_cleanup
  retries: "{{ make_bmaas_baremetal_net_nad_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_baremetal_net_nad_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_baremetal_net_nad_cleanup_until | default(true) }}"
  register: "make_bmaas_baremetal_net_nad_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_baremetal_net_nad_cleanup"
    dry_run: "{{ make_bmaas_baremetal_net_nad_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_baremetal_net_nad_cleanup_env|default({})), **(make_bmaas_baremetal_net_nad_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_metallb_cleanup.yml
---
- name: Debug make_bmaas_metallb_cleanup_env
  when: make_bmaas_metallb_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_cleanup_env
- name: Debug make_bmaas_metallb_cleanup_params
  when: make_bmaas_metallb_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_cleanup_params
- name: Run bmaas_metallb_cleanup
  retries: "{{ make_bmaas_metallb_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_metallb_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_metallb_cleanup_until | default(true) }}"
  register: "make_bmaas_metallb_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_metallb_cleanup"
    dry_run: "{{ make_bmaas_metallb_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_metallb_cleanup_env|default({})), **(make_bmaas_metallb_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_virtual_bms.yml
---
- name: Debug make_bmaas_virtual_bms_env
  when: make_bmaas_virtual_bms_env is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_env
- name: Debug make_bmaas_virtual_bms_params
  when: make_bmaas_virtual_bms_params is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_params
- name: Run bmaas_virtual_bms
  retries: "{{ make_bmaas_virtual_bms_retries | default(omit) }}"
  delay: "{{ make_bmaas_virtual_bms_delay | default(omit) }}"
  until: "{{ make_bmaas_virtual_bms_until | default(true) }}"
  register: "make_bmaas_virtual_bms_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_virtual_bms"
    dry_run: "{{ make_bmaas_virtual_bms_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_virtual_bms_env|default({})), **(make_bmaas_virtual_bms_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_virtual_bms_cleanup.yml
---
- name: Debug make_bmaas_virtual_bms_cleanup_env
  when: make_bmaas_virtual_bms_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_cleanup_env
- name: Debug make_bmaas_virtual_bms_cleanup_params
  when: make_bmaas_virtual_bms_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_cleanup_params
- name: Run bmaas_virtual_bms_cleanup
  retries: "{{ make_bmaas_virtual_bms_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_virtual_bms_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_virtual_bms_cleanup_until | default(true) }}"
  register: "make_bmaas_virtual_bms_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_virtual_bms_cleanup"
    dry_run: "{{ make_bmaas_virtual_bms_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_virtual_bms_cleanup_env|default({})), **(make_bmaas_virtual_bms_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator.yml
---
- name: Debug make_bmaas_sushy_emulator_env
  when: make_bmaas_sushy_emulator_env is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_env
- name: Debug make_bmaas_sushy_emulator_params
  when: make_bmaas_sushy_emulator_params is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_params
- name: Run bmaas_sushy_emulator
  retries: "{{ make_bmaas_sushy_emulator_retries | default(omit) }}"
  delay: "{{ make_bmaas_sushy_emulator_delay | default(omit) }}"
  until: "{{ make_bmaas_sushy_emulator_until | default(true) }}"
  register: "make_bmaas_sushy_emulator_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_sushy_emulator"
    dry_run: "{{ make_bmaas_sushy_emulator_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_sushy_emulator_env|default({})), **(make_bmaas_sushy_emulator_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator_cleanup.yml
---
- name: Debug make_bmaas_sushy_emulator_cleanup_env
  when: make_bmaas_sushy_emulator_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_cleanup_env
- name: Debug make_bmaas_sushy_emulator_cleanup_params
  when: make_bmaas_sushy_emulator_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_cleanup_params
- name: Run bmaas_sushy_emulator_cleanup
  retries: "{{ make_bmaas_sushy_emulator_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_sushy_emulator_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_sushy_emulator_cleanup_until | default(true) }}"
  register: "make_bmaas_sushy_emulator_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_sushy_emulator_cleanup"
    dry_run: "{{ make_bmaas_sushy_emulator_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_sushy_emulator_cleanup_env|default({})), **(make_bmaas_sushy_emulator_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator_wait.yml
---
- name: Debug make_bmaas_sushy_emulator_wait_env
  when: make_bmaas_sushy_emulator_wait_env is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_wait_env
- name: Debug make_bmaas_sushy_emulator_wait_params
  when: make_bmaas_sushy_emulator_wait_params is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_wait_params
- name: Run bmaas_sushy_emulator_wait
  retries: "{{ make_bmaas_sushy_emulator_wait_retries | default(omit) }}"
  delay: "{{ make_bmaas_sushy_emulator_wait_delay | default(omit) }}"
  until: "{{ make_bmaas_sushy_emulator_wait_until | default(true) }}"
  register: "make_bmaas_sushy_emulator_wait_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_sushy_emulator_wait"
    dry_run: "{{ make_bmaas_sushy_emulator_wait_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_sushy_emulator_wait_env|default({})), **(make_bmaas_sushy_emulator_wait_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_generate_nodes_yaml.yml
---
- name: Debug make_bmaas_generate_nodes_yaml_env
  when: make_bmaas_generate_nodes_yaml_env is defined
  ansible.builtin.debug:
    var: make_bmaas_generate_nodes_yaml_env
- name: Debug make_bmaas_generate_nodes_yaml_params
  when: make_bmaas_generate_nodes_yaml_params is defined
  ansible.builtin.debug:
    var: make_bmaas_generate_nodes_yaml_params
- name: Run bmaas_generate_nodes_yaml
  retries: "{{ make_bmaas_generate_nodes_yaml_retries | default(omit) }}"
  delay: "{{ make_bmaas_generate_nodes_yaml_delay | default(omit) }}"
  until: "{{ make_bmaas_generate_nodes_yaml_until | default(true) }}"
  register: "make_bmaas_generate_nodes_yaml_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_generate_nodes_yaml"
    dry_run: "{{ make_bmaas_generate_nodes_yaml_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_generate_nodes_yaml_env|default({})), **(make_bmaas_generate_nodes_yaml_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas.yml
---
- name: Debug make_bmaas_env
  when: make_bmaas_env is defined
  ansible.builtin.debug:
    var: make_bmaas_env
- name: Debug make_bmaas_params
  when: make_bmaas_params is defined
  ansible.builtin.debug:
    var: make_bmaas_params
- name: Run bmaas
  retries: "{{ make_bmaas_retries | default(omit) }}"
  delay: "{{ make_bmaas_delay | default(omit) }}"
  until: "{{ make_bmaas_until | default(true) }}"
  register: "make_bmaas_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas"
    dry_run: "{{ make_bmaas_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_env|default({})), **(make_bmaas_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_cleanup.yml
---
- name: Debug make_bmaas_cleanup_env
  when: make_bmaas_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_cleanup_env
- name: Debug make_bmaas_cleanup_params
  when: make_bmaas_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_cleanup_params
- name: Run bmaas_cleanup
  retries: "{{ make_bmaas_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_cleanup_until | default(true) }}"
  register: "make_bmaas_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_cleanup"
    dry_run: "{{ make_bmaas_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_cleanup_env|default({})), **(make_bmaas_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/installed-packages.yml
NetworkManager: - arch: x86_64 epoch: 1 name: NetworkManager release: 1.el9 source: rpm version: 1.54.1 NetworkManager-libnm: - arch: x86_64 epoch: 1 name: NetworkManager-libnm release: 1.el9 source: rpm version: 1.54.1 NetworkManager-team: - arch: x86_64 epoch: 1 name: NetworkManager-team release: 1.el9 source: rpm version: 1.54.1 NetworkManager-tui: - arch: x86_64 epoch: 1 name: NetworkManager-tui release: 1.el9 source: rpm version: 1.54.1 PackageKit: - arch: x86_64 epoch: null name: PackageKit release: 1.el9 source: rpm version: 1.2.6 PackageKit-glib: - arch: x86_64 epoch: null name: PackageKit-glib release: 1.el9 source: rpm version: 1.2.6 aardvark-dns: - arch: x86_64 epoch: 2 name: aardvark-dns release: 1.el9 source: rpm version: 1.16.0 abattis-cantarell-fonts: - arch: noarch epoch: null name: abattis-cantarell-fonts release: 4.el9 source: rpm version: '0.301' acl: - arch: x86_64 epoch: null name: acl release: 4.el9 source: rpm version: 2.3.1 adobe-source-code-pro-fonts: - arch: noarch epoch: null name: adobe-source-code-pro-fonts release: 12.el9.1 source: rpm version:
2.030.1.050 alternatives: - arch: x86_64 epoch: null name: alternatives release: 2.el9 source: rpm version: '1.24' annobin: - arch: x86_64 epoch: null name: annobin release: 1.el9 source: rpm version: '12.98' ansible-core: - arch: x86_64 epoch: 1 name: ansible-core release: 1.el9 source: rpm version: 2.14.18 attr: - arch: x86_64 epoch: null name: attr release: 3.el9 source: rpm version: 2.5.1 audit: - arch: x86_64 epoch: null name: audit release: 7.el9 source: rpm version: 3.1.5 audit-libs: - arch: x86_64 epoch: null name: audit-libs release: 7.el9 source: rpm version: 3.1.5 authselect: - arch: x86_64 epoch: null name: authselect release: 3.el9 source: rpm version: 1.2.6 authselect-compat: - arch: x86_64 epoch: null name: authselect-compat release: 3.el9 source: rpm version: 1.2.6 authselect-libs: - arch: x86_64 epoch: null name: authselect-libs release: 3.el9 source: rpm version: 1.2.6 avahi-libs: - arch: x86_64 epoch: null name: avahi-libs release: 23.el9 source: rpm version: '0.8' basesystem: - arch: noarch epoch: null name: basesystem release: 13.el9 source: rpm version: '11' bash: - arch: x86_64 epoch: null name: bash release: 9.el9 source: rpm version: 5.1.8 bash-completion: - arch: noarch epoch: 1 name: bash-completion release: 5.el9 source: rpm version: '2.11' binutils: - arch: x86_64 epoch: null name: binutils release: 67.el9 source: rpm version: 2.35.2 binutils-gold: - arch: x86_64 epoch: null name: binutils-gold release: 67.el9 source: rpm version: 2.35.2 buildah: - arch: x86_64 epoch: 2 name: buildah release: 1.el9 source: rpm version: 1.41.3 bzip2: - arch: x86_64 epoch: null name: bzip2 release: 10.el9 source: rpm version: 1.0.8 bzip2-libs: - arch: x86_64 epoch: null name: bzip2-libs release: 10.el9 source: rpm version: 1.0.8 c-ares: - arch: x86_64 epoch: null name: c-ares release: 2.el9 source: rpm version: 1.19.1 ca-certificates: - arch: noarch epoch: null name: ca-certificates release: 91.4.el9 source: rpm version: 2024.2.69_v8.0.303 centos-gpg-keys: - arch: noarch epoch: null name: centos-gpg-keys release: 30.el9 source: rpm version: '9.0' centos-logos: - arch: x86_64 epoch: null name: centos-logos release: 3.el9 source: rpm version: '90.8' centos-stream-release: - arch: noarch epoch: null name: centos-stream-release release: 30.el9 source: rpm version: '9.0' centos-stream-repos: - arch: noarch epoch: null name: centos-stream-repos release: 30.el9 source: rpm version: '9.0' checkpolicy: - arch: x86_64 epoch: null name: checkpolicy release: 1.el9 source: rpm version: '3.6' chrony: - arch: x86_64 epoch: null name: chrony release: 2.el9 source: rpm version: 4.6.1 cloud-init: - arch: noarch epoch: null name: cloud-init release: 7.el9 source: rpm version: '24.4' cloud-utils-growpart: - arch: x86_64 epoch: null name: cloud-utils-growpart release: 1.el9 source: rpm version: '0.33' cmake-filesystem: - arch: x86_64 epoch: null name: cmake-filesystem release: 2.el9 source: rpm version: 3.26.5 cockpit-bridge: - arch: noarch epoch: null name: cockpit-bridge release: 1.el9 source: rpm version: '347' cockpit-system: - arch: noarch epoch: null name: cockpit-system release: 1.el9 source: rpm version: '347' cockpit-ws: - arch: x86_64 epoch: null name: cockpit-ws release: 1.el9 source: rpm version: '347' cockpit-ws-selinux: - arch: x86_64 epoch: null name: cockpit-ws-selinux release: 1.el9 source: rpm version: '347' conmon: - arch: x86_64 epoch: 3 name: conmon release: 1.el9 source: rpm version: 2.1.13 container-selinux: - arch: noarch epoch: 4 name: container-selinux release: 1.el9 source: 
rpm version: 2.242.0 containers-common: - arch: x86_64 epoch: 4 name: containers-common release: 134.el9 source: rpm version: '1' containers-common-extra: - arch: x86_64 epoch: 4 name: containers-common-extra release: 134.el9 source: rpm version: '1' coreutils: - arch: x86_64 epoch: null name: coreutils release: 39.el9 source: rpm version: '8.32' coreutils-common: - arch: x86_64 epoch: null name: coreutils-common release: 39.el9 source: rpm version: '8.32' cpio: - arch: x86_64 epoch: null name: cpio release: 16.el9 source: rpm version: '2.13' cpp: - arch: x86_64 epoch: null name: cpp release: 11.el9 source: rpm version: 11.5.0 cracklib: - arch: x86_64 epoch: null name: cracklib release: 27.el9 source: rpm version: 2.9.6 cracklib-dicts: - arch: x86_64 epoch: null name: cracklib-dicts release: 27.el9 source: rpm version: 2.9.6 createrepo_c: - arch: x86_64 epoch: null name: createrepo_c release: 4.el9 source: rpm version: 0.20.1 createrepo_c-libs: - arch: x86_64 epoch: null name: createrepo_c-libs release: 4.el9 source: rpm version: 0.20.1 criu: - arch: x86_64 epoch: null name: criu release: 3.el9 source: rpm version: '3.19' criu-libs: - arch: x86_64 epoch: null name: criu-libs release: 3.el9 source: rpm version: '3.19' cronie: - arch: x86_64 epoch: null name: cronie release: 14.el9 source: rpm version: 1.5.7 cronie-anacron: - arch: x86_64 epoch: null name: cronie-anacron release: 14.el9 source: rpm version: 1.5.7 crontabs: - arch: noarch epoch: null name: crontabs release: 26.20190603git.el9 source: rpm version: '1.11' crun: - arch: x86_64 epoch: null name: crun release: 1.el9 source: rpm version: '1.24' crypto-policies: - arch: noarch epoch: null name: crypto-policies release: 1.git377cc42.el9 source: rpm version: '20250905' crypto-policies-scripts: - arch: noarch epoch: null name: crypto-policies-scripts release: 1.git377cc42.el9 source: rpm version: '20250905' cryptsetup-libs: - arch: x86_64 epoch: null name: cryptsetup-libs release: 2.el9 source: rpm version: 2.8.1 curl: - arch: x86_64 epoch: null name: curl release: 34.el9 source: rpm version: 7.76.1 cyrus-sasl: - arch: x86_64 epoch: null name: cyrus-sasl release: 21.el9 source: rpm version: 2.1.27 cyrus-sasl-devel: - arch: x86_64 epoch: null name: cyrus-sasl-devel release: 21.el9 source: rpm version: 2.1.27 cyrus-sasl-gssapi: - arch: x86_64 epoch: null name: cyrus-sasl-gssapi release: 21.el9 source: rpm version: 2.1.27 cyrus-sasl-lib: - arch: x86_64 epoch: null name: cyrus-sasl-lib release: 21.el9 source: rpm version: 2.1.27 dbus: - arch: x86_64 epoch: 1 name: dbus release: 8.el9 source: rpm version: 1.12.20 dbus-broker: - arch: x86_64 epoch: null name: dbus-broker release: 7.el9 source: rpm version: '28' dbus-common: - arch: noarch epoch: 1 name: dbus-common release: 8.el9 source: rpm version: 1.12.20 dbus-libs: - arch: x86_64 epoch: 1 name: dbus-libs release: 8.el9 source: rpm version: 1.12.20 dbus-tools: - arch: x86_64 epoch: 1 name: dbus-tools release: 8.el9 source: rpm version: 1.12.20 debugedit: - arch: x86_64 epoch: null name: debugedit release: 11.el9 source: rpm version: '5.0' dejavu-sans-fonts: - arch: noarch epoch: null name: dejavu-sans-fonts release: 18.el9 source: rpm version: '2.37' desktop-file-utils: - arch: x86_64 epoch: null name: desktop-file-utils release: 6.el9 source: rpm version: '0.26' device-mapper: - arch: x86_64 epoch: 9 name: device-mapper release: 2.el9 source: rpm version: 1.02.206 device-mapper-libs: - arch: x86_64 epoch: 9 name: device-mapper-libs release: 2.el9 source: rpm version: 1.02.206 
dhcp-client: - arch: x86_64 epoch: 12 name: dhcp-client release: 19.b1.el9 source: rpm version: 4.4.2 dhcp-common: - arch: noarch epoch: 12 name: dhcp-common release: 19.b1.el9 source: rpm version: 4.4.2 diffutils: - arch: x86_64 epoch: null name: diffutils release: 12.el9 source: rpm version: '3.7' dnf: - arch: noarch epoch: null name: dnf release: 31.el9 source: rpm version: 4.14.0 dnf-data: - arch: noarch epoch: null name: dnf-data release: 31.el9 source: rpm version: 4.14.0 dnf-plugins-core: - arch: noarch epoch: null name: dnf-plugins-core release: 23.el9 source: rpm version: 4.3.0 dracut: - arch: x86_64 epoch: null name: dracut release: 102.git20250818.el9 source: rpm version: '057' dracut-config-generic: - arch: x86_64 epoch: null name: dracut-config-generic release: 102.git20250818.el9 source: rpm version: '057' dracut-network: - arch: x86_64 epoch: null name: dracut-network release: 102.git20250818.el9 source: rpm version: '057' dracut-squash: - arch: x86_64 epoch: null name: dracut-squash release: 102.git20250818.el9 source: rpm version: '057' dwz: - arch: x86_64 epoch: null name: dwz release: 1.el9 source: rpm version: '0.16' e2fsprogs: - arch: x86_64 epoch: null name: e2fsprogs release: 8.el9 source: rpm version: 1.46.5 e2fsprogs-libs: - arch: x86_64 epoch: null name: e2fsprogs-libs release: 8.el9 source: rpm version: 1.46.5 ed: - arch: x86_64 epoch: null name: ed release: 12.el9 source: rpm version: 1.14.2 efi-srpm-macros: - arch: noarch epoch: null name: efi-srpm-macros release: 4.el9 source: rpm version: '6' elfutils: - arch: x86_64 epoch: null name: elfutils release: 1.el9 source: rpm version: '0.193' elfutils-debuginfod-client: - arch: x86_64 epoch: null name: elfutils-debuginfod-client release: 1.el9 source: rpm version: '0.193' elfutils-default-yama-scope: - arch: noarch epoch: null name: elfutils-default-yama-scope release: 1.el9 source: rpm version: '0.193' elfutils-libelf: - arch: x86_64 epoch: null name: elfutils-libelf release: 1.el9 source: rpm version: '0.193' elfutils-libs: - arch: x86_64 epoch: null name: elfutils-libs release: 1.el9 source: rpm version: '0.193' emacs-filesystem: - arch: noarch epoch: 1 name: emacs-filesystem release: 18.el9 source: rpm version: '27.2' enchant: - arch: x86_64 epoch: 1 name: enchant release: 30.el9 source: rpm version: 1.6.0 ethtool: - arch: x86_64 epoch: 2 name: ethtool release: 2.el9 source: rpm version: '6.15' expat: - arch: x86_64 epoch: null name: expat release: 5.el9 source: rpm version: 2.5.0 expect: - arch: x86_64 epoch: null name: expect release: 16.el9 source: rpm version: 5.45.4 file: - arch: x86_64 epoch: null name: file release: 16.el9 source: rpm version: '5.39' file-libs: - arch: x86_64 epoch: null name: file-libs release: 16.el9 source: rpm version: '5.39' filesystem: - arch: x86_64 epoch: null name: filesystem release: 5.el9 source: rpm version: '3.16' findutils: - arch: x86_64 epoch: 1 name: findutils release: 7.el9 source: rpm version: 4.8.0 fonts-filesystem: - arch: noarch epoch: 1 name: fonts-filesystem release: 7.el9.1 source: rpm version: 2.0.5 fonts-srpm-macros: - arch: noarch epoch: 1 name: fonts-srpm-macros release: 7.el9.1 source: rpm version: 2.0.5 fuse-common: - arch: x86_64 epoch: null name: fuse-common release: 9.el9 source: rpm version: 3.10.2 fuse-libs: - arch: x86_64 epoch: null name: fuse-libs release: 17.el9 source: rpm version: 2.9.9 fuse-overlayfs: - arch: x86_64 epoch: null name: fuse-overlayfs release: 1.el9 source: rpm version: '1.15' fuse3: - arch: x86_64 epoch: null name: fuse3 release: 
9.el9 source: rpm version: 3.10.2 fuse3-libs: - arch: x86_64 epoch: null name: fuse3-libs release: 9.el9 source: rpm version: 3.10.2 gawk: - arch: x86_64 epoch: null name: gawk release: 6.el9 source: rpm version: 5.1.0 gawk-all-langpacks: - arch: x86_64 epoch: null name: gawk-all-langpacks release: 6.el9 source: rpm version: 5.1.0 gcc: - arch: x86_64 epoch: null name: gcc release: 11.el9 source: rpm version: 11.5.0 gcc-c++: - arch: x86_64 epoch: null name: gcc-c++ release: 11.el9 source: rpm version: 11.5.0 gcc-plugin-annobin: - arch: x86_64 epoch: null name: gcc-plugin-annobin release: 11.el9 source: rpm version: 11.5.0 gdb-minimal: - arch: x86_64 epoch: null name: gdb-minimal release: 2.el9 source: rpm version: '16.3' gdbm-libs: - arch: x86_64 epoch: 1 name: gdbm-libs release: 1.el9 source: rpm version: '1.23' gdisk: - arch: x86_64 epoch: null name: gdisk release: 5.el9 source: rpm version: 1.0.7 gdk-pixbuf2: - arch: x86_64 epoch: null name: gdk-pixbuf2 release: 6.el9 source: rpm version: 2.42.6 geolite2-city: - arch: noarch epoch: null name: geolite2-city release: 6.el9 source: rpm version: '20191217' geolite2-country: - arch: noarch epoch: null name: geolite2-country release: 6.el9 source: rpm version: '20191217' gettext: - arch: x86_64 epoch: null name: gettext release: 8.el9 source: rpm version: '0.21' gettext-libs: - arch: x86_64 epoch: null name: gettext-libs release: 8.el9 source: rpm version: '0.21' ghc-srpm-macros: - arch: noarch epoch: null name: ghc-srpm-macros release: 6.el9 source: rpm version: 1.5.0 git: - arch: x86_64 epoch: null name: git release: 1.el9 source: rpm version: 2.47.3 git-core: - arch: x86_64 epoch: null name: git-core release: 1.el9 source: rpm version: 2.47.3 git-core-doc: - arch: noarch epoch: null name: git-core-doc release: 1.el9 source: rpm version: 2.47.3 glib-networking: - arch: x86_64 epoch: null name: glib-networking release: 3.el9 source: rpm version: 2.68.3 glib2: - arch: x86_64 epoch: null name: glib2 release: 16.el9 source: rpm version: 2.68.4 glibc: - arch: x86_64 epoch: null name: glibc release: 232.el9 source: rpm version: '2.34' glibc-common: - arch: x86_64 epoch: null name: glibc-common release: 232.el9 source: rpm version: '2.34' glibc-devel: - arch: x86_64 epoch: null name: glibc-devel release: 232.el9 source: rpm version: '2.34' glibc-gconv-extra: - arch: x86_64 epoch: null name: glibc-gconv-extra release: 232.el9 source: rpm version: '2.34' glibc-headers: - arch: x86_64 epoch: null name: glibc-headers release: 232.el9 source: rpm version: '2.34' glibc-langpack-en: - arch: x86_64 epoch: null name: glibc-langpack-en release: 232.el9 source: rpm version: '2.34' gmp: - arch: x86_64 epoch: 1 name: gmp release: 13.el9 source: rpm version: 6.2.0 gnupg2: - arch: x86_64 epoch: null name: gnupg2 release: 4.el9 source: rpm version: 2.3.3 gnutls: - arch: x86_64 epoch: null name: gnutls release: 9.el9 source: rpm version: 3.8.3 go-srpm-macros: - arch: noarch epoch: null name: go-srpm-macros release: 1.el9 source: rpm version: 3.8.1 gobject-introspection: - arch: x86_64 epoch: null name: gobject-introspection release: 11.el9 source: rpm version: 1.68.0 gpg-pubkey: - arch: null epoch: null name: gpg-pubkey release: 5ccc5b19 source: rpm version: 8483c65d gpgme: - arch: x86_64 epoch: null name: gpgme release: 6.el9 source: rpm version: 1.15.1 grep: - arch: x86_64 epoch: null name: grep release: 5.el9 source: rpm version: '3.6' groff-base: - arch: x86_64 epoch: null name: groff-base release: 10.el9 source: rpm version: 1.22.4 grub2-common: - arch: noarch 
epoch: 1 name: grub2-common release: 115.el9 source: rpm version: '2.06' grub2-pc: - arch: x86_64 epoch: 1 name: grub2-pc release: 115.el9 source: rpm version: '2.06' grub2-pc-modules: - arch: noarch epoch: 1 name: grub2-pc-modules release: 115.el9 source: rpm version: '2.06' grub2-tools: - arch: x86_64 epoch: 1 name: grub2-tools release: 115.el9 source: rpm version: '2.06' grub2-tools-minimal: - arch: x86_64 epoch: 1 name: grub2-tools-minimal release: 115.el9 source: rpm version: '2.06' grubby: - arch: x86_64 epoch: null name: grubby release: 69.el9 source: rpm version: '8.40' gsettings-desktop-schemas: - arch: x86_64 epoch: null name: gsettings-desktop-schemas release: 7.el9 source: rpm version: '40.0' gssproxy: - arch: x86_64 epoch: null name: gssproxy release: 7.el9 source: rpm version: 0.8.4 gzip: - arch: x86_64 epoch: null name: gzip release: 1.el9 source: rpm version: '1.12' hostname: - arch: x86_64 epoch: null name: hostname release: 6.el9 source: rpm version: '3.23' hunspell: - arch: x86_64 epoch: null name: hunspell release: 11.el9 source: rpm version: 1.7.0 hunspell-en-GB: - arch: noarch epoch: null name: hunspell-en-GB release: 20.el9 source: rpm version: 0.20140811.1 hunspell-en-US: - arch: noarch epoch: null name: hunspell-en-US release: 20.el9 source: rpm version: 0.20140811.1 hunspell-filesystem: - arch: x86_64 epoch: null name: hunspell-filesystem release: 11.el9 source: rpm version: 1.7.0 hwdata: - arch: noarch epoch: null name: hwdata release: 9.20.el9 source: rpm version: '0.348' ima-evm-utils: - arch: x86_64 epoch: null name: ima-evm-utils release: 2.el9 source: rpm version: 1.6.2 info: - arch: x86_64 epoch: null name: info release: 15.el9 source: rpm version: '6.7' inih: - arch: x86_64 epoch: null name: inih release: 6.el9 source: rpm version: '49' initscripts-rename-device: - arch: x86_64 epoch: null name: initscripts-rename-device release: 4.el9 source: rpm version: 10.11.8 initscripts-service: - arch: noarch epoch: null name: initscripts-service release: 4.el9 source: rpm version: 10.11.8 ipcalc: - arch: x86_64 epoch: null name: ipcalc release: 5.el9 source: rpm version: 1.0.0 iproute: - arch: x86_64 epoch: null name: iproute release: 2.el9 source: rpm version: 6.14.0 iproute-tc: - arch: x86_64 epoch: null name: iproute-tc release: 2.el9 source: rpm version: 6.14.0 iptables-libs: - arch: x86_64 epoch: null name: iptables-libs release: 11.el9 source: rpm version: 1.8.10 iptables-nft: - arch: x86_64 epoch: null name: iptables-nft release: 11.el9 source: rpm version: 1.8.10 iptables-nft-services: - arch: noarch epoch: null name: iptables-nft-services release: 11.el9 source: rpm version: 1.8.10 iputils: - arch: x86_64 epoch: null name: iputils release: 15.el9 source: rpm version: '20210202' irqbalance: - arch: x86_64 epoch: 2 name: irqbalance release: 4.el9 source: rpm version: 1.9.4 jansson: - arch: x86_64 epoch: null name: jansson release: 1.el9 source: rpm version: '2.14' jq: - arch: x86_64 epoch: null name: jq release: 19.el9 source: rpm version: '1.6' json-c: - arch: x86_64 epoch: null name: json-c release: 11.el9 source: rpm version: '0.14' json-glib: - arch: x86_64 epoch: null name: json-glib release: 1.el9 source: rpm version: 1.6.6 kbd: - arch: x86_64 epoch: null name: kbd release: 11.el9 source: rpm version: 2.4.0 kbd-legacy: - arch: noarch epoch: null name: kbd-legacy release: 11.el9 source: rpm version: 2.4.0 kbd-misc: - arch: noarch epoch: null name: kbd-misc release: 11.el9 source: rpm version: 2.4.0 kernel: - arch: x86_64 epoch: null name: kernel 
release: 621.el9 source: rpm version: 5.14.0 kernel-core: - arch: x86_64 epoch: null name: kernel-core release: 621.el9 source: rpm version: 5.14.0 kernel-headers: - arch: x86_64 epoch: null name: kernel-headers release: 621.el9 source: rpm version: 5.14.0 kernel-modules: - arch: x86_64 epoch: null name: kernel-modules release: 621.el9 source: rpm version: 5.14.0 kernel-modules-core: - arch: x86_64 epoch: null name: kernel-modules-core release: 621.el9 source: rpm version: 5.14.0 kernel-srpm-macros: - arch: noarch epoch: null name: kernel-srpm-macros release: 14.el9 source: rpm version: '1.0' kernel-tools: - arch: x86_64 epoch: null name: kernel-tools release: 621.el9 source: rpm version: 5.14.0 kernel-tools-libs: - arch: x86_64 epoch: null name: kernel-tools-libs release: 621.el9 source: rpm version: 5.14.0 kexec-tools: - arch: x86_64 epoch: null name: kexec-tools release: 10.el9 source: rpm version: 2.0.29 keyutils: - arch: x86_64 epoch: null name: keyutils release: 1.el9 source: rpm version: 1.6.3 keyutils-libs: - arch: x86_64 epoch: null name: keyutils-libs release: 1.el9 source: rpm version: 1.6.3 kmod: - arch: x86_64 epoch: null name: kmod release: 11.el9 source: rpm version: '28' kmod-libs: - arch: x86_64 epoch: null name: kmod-libs release: 11.el9 source: rpm version: '28' kpartx: - arch: x86_64 epoch: null name: kpartx release: 39.el9 source: rpm version: 0.8.7 krb5-libs: - arch: x86_64 epoch: null name: krb5-libs release: 8.el9 source: rpm version: 1.21.1 langpacks-core-en_GB: - arch: noarch epoch: null name: langpacks-core-en_GB release: 16.el9 source: rpm version: '3.0' langpacks-core-font-en: - arch: noarch epoch: null name: langpacks-core-font-en release: 16.el9 source: rpm version: '3.0' langpacks-en_GB: - arch: noarch epoch: null name: langpacks-en_GB release: 16.el9 source: rpm version: '3.0' less: - arch: x86_64 epoch: null name: less release: 6.el9 source: rpm version: '590' libacl: - arch: x86_64 epoch: null name: libacl release: 4.el9 source: rpm version: 2.3.1 libappstream-glib: - arch: x86_64 epoch: null name: libappstream-glib release: 5.el9 source: rpm version: 0.7.18 libarchive: - arch: x86_64 epoch: null name: libarchive release: 6.el9 source: rpm version: 3.5.3 libassuan: - arch: x86_64 epoch: null name: libassuan release: 3.el9 source: rpm version: 2.5.5 libatomic: - arch: x86_64 epoch: null name: libatomic release: 11.el9 source: rpm version: 11.5.0 libattr: - arch: x86_64 epoch: null name: libattr release: 3.el9 source: rpm version: 2.5.1 libbasicobjects: - arch: x86_64 epoch: null name: libbasicobjects release: 53.el9 source: rpm version: 0.1.1 libblkid: - arch: x86_64 epoch: null name: libblkid release: 21.el9 source: rpm version: 2.37.4 libbpf: - arch: x86_64 epoch: 2 name: libbpf release: 2.el9 source: rpm version: 1.5.0 libbrotli: - arch: x86_64 epoch: null name: libbrotli release: 7.el9 source: rpm version: 1.0.9 libcap: - arch: x86_64 epoch: null name: libcap release: 10.el9 source: rpm version: '2.48' libcap-ng: - arch: x86_64 epoch: null name: libcap-ng release: 7.el9 source: rpm version: 0.8.2 libcbor: - arch: x86_64 epoch: null name: libcbor release: 5.el9 source: rpm version: 0.7.0 libcollection: - arch: x86_64 epoch: null name: libcollection release: 53.el9 source: rpm version: 0.7.0 libcom_err: - arch: x86_64 epoch: null name: libcom_err release: 8.el9 source: rpm version: 1.46.5 libcomps: - arch: x86_64 epoch: null name: libcomps release: 1.el9 source: rpm version: 0.1.18 libcurl: - arch: x86_64 epoch: null name: libcurl release: 34.el9 source: 
rpm version: 7.76.1 libdaemon: - arch: x86_64 epoch: null name: libdaemon release: 23.el9 source: rpm version: '0.14' libdb: - arch: x86_64 epoch: null name: libdb release: 57.el9 source: rpm version: 5.3.28 libdhash: - arch: x86_64 epoch: null name: libdhash release: 53.el9 source: rpm version: 0.5.0 libdnf: - arch: x86_64 epoch: null name: libdnf release: 16.el9 source: rpm version: 0.69.0 libeconf: - arch: x86_64 epoch: null name: libeconf release: 4.el9 source: rpm version: 0.4.1 libedit: - arch: x86_64 epoch: null name: libedit release: 38.20210216cvs.el9 source: rpm version: '3.1' libestr: - arch: x86_64 epoch: null name: libestr release: 4.el9 source: rpm version: 0.1.11 libev: - arch: x86_64 epoch: null name: libev release: 6.el9 source: rpm version: '4.33' libevent: - arch: x86_64 epoch: null name: libevent release: 8.el9 source: rpm version: 2.1.12 libfastjson: - arch: x86_64 epoch: null name: libfastjson release: 5.el9 source: rpm version: 0.99.9 libfdisk: - arch: x86_64 epoch: null name: libfdisk release: 21.el9 source: rpm version: 2.37.4 libffi: - arch: x86_64 epoch: null name: libffi release: 8.el9 source: rpm version: 3.4.2 libffi-devel: - arch: x86_64 epoch: null name: libffi-devel release: 8.el9 source: rpm version: 3.4.2 libfido2: - arch: x86_64 epoch: null name: libfido2 release: 2.el9 source: rpm version: 1.13.0 libgcc: - arch: x86_64 epoch: null name: libgcc release: 11.el9 source: rpm version: 11.5.0 libgcrypt: - arch: x86_64 epoch: null name: libgcrypt release: 11.el9 source: rpm version: 1.10.0 libgomp: - arch: x86_64 epoch: null name: libgomp release: 11.el9 source: rpm version: 11.5.0 libgpg-error: - arch: x86_64 epoch: null name: libgpg-error release: 5.el9 source: rpm version: '1.42' libgpg-error-devel: - arch: x86_64 epoch: null name: libgpg-error-devel release: 5.el9 source: rpm version: '1.42' libibverbs: - arch: x86_64 epoch: null name: libibverbs release: 2.el9 source: rpm version: '57.0' libicu: - arch: x86_64 epoch: null name: libicu release: 10.el9 source: rpm version: '67.1' libidn2: - arch: x86_64 epoch: null name: libidn2 release: 7.el9 source: rpm version: 2.3.0 libini_config: - arch: x86_64 epoch: null name: libini_config release: 53.el9 source: rpm version: 1.3.1 libjpeg-turbo: - arch: x86_64 epoch: null name: libjpeg-turbo release: 7.el9 source: rpm version: 2.0.90 libkcapi: - arch: x86_64 epoch: null name: libkcapi release: 2.el9 source: rpm version: 1.4.0 libkcapi-hmaccalc: - arch: x86_64 epoch: null name: libkcapi-hmaccalc release: 2.el9 source: rpm version: 1.4.0 libksba: - arch: x86_64 epoch: null name: libksba release: 7.el9 source: rpm version: 1.5.1 libldb: - arch: x86_64 epoch: 0 name: libldb release: 3.el9 source: rpm version: 4.23.0 libmaxminddb: - arch: x86_64 epoch: null name: libmaxminddb release: 4.el9 source: rpm version: 1.5.2 libmnl: - arch: x86_64 epoch: null name: libmnl release: 16.el9 source: rpm version: 1.0.4 libmodulemd: - arch: x86_64 epoch: null name: libmodulemd release: 2.el9 source: rpm version: 2.13.0 libmount: - arch: x86_64 epoch: null name: libmount release: 21.el9 source: rpm version: 2.37.4 libmpc: - arch: x86_64 epoch: null name: libmpc release: 4.el9 source: rpm version: 1.2.1 libndp: - arch: x86_64 epoch: null name: libndp release: 1.el9 source: rpm version: '1.9' libnet: - arch: x86_64 epoch: null name: libnet release: 7.el9 source: rpm version: '1.2' libnetfilter_conntrack: - arch: x86_64 epoch: null name: libnetfilter_conntrack release: 1.el9 source: rpm version: 1.0.9 libnfnetlink: - arch: x86_64 epoch: 
null name: libnfnetlink release: 23.el9 source: rpm version: 1.0.1 libnfsidmap: - arch: x86_64 epoch: 1 name: libnfsidmap release: 39.el9 source: rpm version: 2.5.4 libnftnl: - arch: x86_64 epoch: null name: libnftnl release: 4.el9 source: rpm version: 1.2.6 libnghttp2: - arch: x86_64 epoch: null name: libnghttp2 release: 6.el9 source: rpm version: 1.43.0 libnl3: - arch: x86_64 epoch: null name: libnl3 release: 1.el9 source: rpm version: 3.11.0 libnl3-cli: - arch: x86_64 epoch: null name: libnl3-cli release: 1.el9 source: rpm version: 3.11.0 libnsl2: - arch: x86_64 epoch: null name: libnsl2 release: 1.el9 source: rpm version: 2.0.0 libpath_utils: - arch: x86_64 epoch: null name: libpath_utils release: 53.el9 source: rpm version: 0.2.1 libpcap: - arch: x86_64 epoch: 14 name: libpcap release: 4.el9 source: rpm version: 1.10.0 libpipeline: - arch: x86_64 epoch: null name: libpipeline release: 4.el9 source: rpm version: 1.5.3 libpkgconf: - arch: x86_64 epoch: null name: libpkgconf release: 10.el9 source: rpm version: 1.7.3 libpng: - arch: x86_64 epoch: 2 name: libpng release: 12.el9 source: rpm version: 1.6.37 libproxy: - arch: x86_64 epoch: null name: libproxy release: 35.el9 source: rpm version: 0.4.15 libproxy-webkitgtk4: - arch: x86_64 epoch: null name: libproxy-webkitgtk4 release: 35.el9 source: rpm version: 0.4.15 libpsl: - arch: x86_64 epoch: null name: libpsl release: 5.el9 source: rpm version: 0.21.1 libpwquality: - arch: x86_64 epoch: null name: libpwquality release: 8.el9 source: rpm version: 1.4.4 libref_array: - arch: x86_64 epoch: null name: libref_array release: 53.el9 source: rpm version: 0.1.5 librepo: - arch: x86_64 epoch: null name: librepo release: 3.el9 source: rpm version: 1.14.5 libreport-filesystem: - arch: noarch epoch: null name: libreport-filesystem release: 6.el9 source: rpm version: 2.15.2 libseccomp: - arch: x86_64 epoch: null name: libseccomp release: 2.el9 source: rpm version: 2.5.2 libselinux: - arch: x86_64 epoch: null name: libselinux release: 3.el9 source: rpm version: '3.6' libselinux-utils: - arch: x86_64 epoch: null name: libselinux-utils release: 3.el9 source: rpm version: '3.6' libsemanage: - arch: x86_64 epoch: null name: libsemanage release: 5.el9 source: rpm version: '3.6' libsepol: - arch: x86_64 epoch: null name: libsepol release: 3.el9 source: rpm version: '3.6' libsigsegv: - arch: x86_64 epoch: null name: libsigsegv release: 4.el9 source: rpm version: '2.13' libslirp: - arch: x86_64 epoch: null name: libslirp release: 8.el9 source: rpm version: 4.4.0 libsmartcols: - arch: x86_64 epoch: null name: libsmartcols release: 21.el9 source: rpm version: 2.37.4 libsolv: - arch: x86_64 epoch: null name: libsolv release: 3.el9 source: rpm version: 0.7.24 libsoup: - arch: x86_64 epoch: null name: libsoup release: 10.el9 source: rpm version: 2.72.0 libss: - arch: x86_64 epoch: null name: libss release: 8.el9 source: rpm version: 1.46.5 libssh: - arch: x86_64 epoch: null name: libssh release: 15.el9 source: rpm version: 0.10.4 libssh-config: - arch: noarch epoch: null name: libssh-config release: 15.el9 source: rpm version: 0.10.4 libsss_certmap: - arch: x86_64 epoch: null name: libsss_certmap release: 5.el9 source: rpm version: 2.9.7 libsss_idmap: - arch: x86_64 epoch: null name: libsss_idmap release: 5.el9 source: rpm version: 2.9.7 libsss_nss_idmap: - arch: x86_64 epoch: null name: libsss_nss_idmap release: 5.el9 source: rpm version: 2.9.7 libsss_sudo: - arch: x86_64 epoch: null name: libsss_sudo release: 5.el9 source: rpm version: 2.9.7 libstdc++: - arch: 
x86_64 epoch: null name: libstdc++ release: 11.el9 source: rpm version: 11.5.0 libstdc++-devel: - arch: x86_64 epoch: null name: libstdc++-devel release: 11.el9 source: rpm version: 11.5.0 libstemmer: - arch: x86_64 epoch: null name: libstemmer release: 18.585svn.el9 source: rpm version: '0' libsysfs: - arch: x86_64 epoch: null name: libsysfs release: 11.el9 source: rpm version: 2.1.1 libtalloc: - arch: x86_64 epoch: null name: libtalloc release: 1.el9 source: rpm version: 2.4.3 libtasn1: - arch: x86_64 epoch: null name: libtasn1 release: 9.el9 source: rpm version: 4.16.0 libtdb: - arch: x86_64 epoch: null name: libtdb release: 1.el9 source: rpm version: 1.4.14 libteam: - arch: x86_64 epoch: null name: libteam release: 16.el9 source: rpm version: '1.31' libtevent: - arch: x86_64 epoch: null name: libtevent release: 1.el9 source: rpm version: 0.17.1 libtirpc: - arch: x86_64 epoch: null name: libtirpc release: 9.el9 source: rpm version: 1.3.3 libtool-ltdl: - arch: x86_64 epoch: null name: libtool-ltdl release: 46.el9 source: rpm version: 2.4.6 libunistring: - arch: x86_64 epoch: null name: libunistring release: 15.el9 source: rpm version: 0.9.10 liburing: - arch: x86_64 epoch: null name: liburing release: 1.el9 source: rpm version: '2.5' libuser: - arch: x86_64 epoch: null name: libuser release: 17.el9 source: rpm version: '0.63' libutempter: - arch: x86_64 epoch: null name: libutempter release: 6.el9 source: rpm version: 1.2.1 libuuid: - arch: x86_64 epoch: null name: libuuid release: 21.el9 source: rpm version: 2.37.4 libverto: - arch: x86_64 epoch: null name: libverto release: 3.el9 source: rpm version: 0.3.2 libverto-libev: - arch: x86_64 epoch: null name: libverto-libev release: 3.el9 source: rpm version: 0.3.2 libvirt-libs: - arch: x86_64 epoch: null name: libvirt-libs release: 15.el9 source: rpm version: 10.10.0 libwbclient: - arch: x86_64 epoch: 0 name: libwbclient release: 3.el9 source: rpm version: 4.23.0 libxcrypt: - arch: x86_64 epoch: null name: libxcrypt release: 3.el9 source: rpm version: 4.4.18 libxcrypt-compat: - arch: x86_64 epoch: null name: libxcrypt-compat release: 3.el9 source: rpm version: 4.4.18 libxcrypt-devel: - arch: x86_64 epoch: null name: libxcrypt-devel release: 3.el9 source: rpm version: 4.4.18 libxml2: - arch: x86_64 epoch: null name: libxml2 release: 12.el9 source: rpm version: 2.9.13 libxml2-devel: - arch: x86_64 epoch: null name: libxml2-devel release: 12.el9 source: rpm version: 2.9.13 libxslt: - arch: x86_64 epoch: null name: libxslt release: 12.el9 source: rpm version: 1.1.34 libxslt-devel: - arch: x86_64 epoch: null name: libxslt-devel release: 12.el9 source: rpm version: 1.1.34 libyaml: - arch: x86_64 epoch: null name: libyaml release: 7.el9 source: rpm version: 0.2.5 libzstd: - arch: x86_64 epoch: null name: libzstd release: 1.el9 source: rpm version: 1.5.5 llvm-filesystem: - arch: x86_64 epoch: null name: llvm-filesystem release: 3.el9 source: rpm version: 20.1.8 llvm-libs: - arch: x86_64 epoch: null name: llvm-libs release: 3.el9 source: rpm version: 20.1.8 lmdb-libs: - arch: x86_64 epoch: null name: lmdb-libs release: 3.el9 source: rpm version: 0.9.29 logrotate: - arch: x86_64 epoch: null name: logrotate release: 12.el9 source: rpm version: 3.18.0 lshw: - arch: x86_64 epoch: null name: lshw release: 2.el9 source: rpm version: B.02.20 lsscsi: - arch: x86_64 epoch: null name: lsscsi release: 6.el9 source: rpm version: '0.32' lua-libs: - arch: x86_64 epoch: null name: lua-libs release: 4.el9 source: rpm version: 5.4.4 lua-srpm-macros: - arch: noarch 
epoch: null name: lua-srpm-macros release: 6.el9 source: rpm version: '1' lz4-libs: - arch: x86_64 epoch: null name: lz4-libs release: 5.el9 source: rpm version: 1.9.3 lzo: - arch: x86_64 epoch: null name: lzo release: 7.el9 source: rpm version: '2.10' make: - arch: x86_64 epoch: 1 name: make release: 8.el9 source: rpm version: '4.3' man-db: - arch: x86_64 epoch: null name: man-db release: 9.el9 source: rpm version: 2.9.3 microcode_ctl: - arch: noarch epoch: 4 name: microcode_ctl release: 1.el9 source: rpm version: '20250812' mpdecimal: - arch: x86_64 epoch: null name: mpdecimal release: 3.el9 source: rpm version: 2.5.1 mpfr: - arch: x86_64 epoch: null name: mpfr release: 7.el9 source: rpm version: 4.1.0 ncurses: - arch: x86_64 epoch: null name: ncurses release: 12.20210508.el9 source: rpm version: '6.2' ncurses-base: - arch: noarch epoch: null name: ncurses-base release: 12.20210508.el9 source: rpm version: '6.2' ncurses-c++-libs: - arch: x86_64 epoch: null name: ncurses-c++-libs release: 12.20210508.el9 source: rpm version: '6.2' ncurses-devel: - arch: x86_64 epoch: null name: ncurses-devel release: 12.20210508.el9 source: rpm version: '6.2' ncurses-libs: - arch: x86_64 epoch: null name: ncurses-libs release: 12.20210508.el9 source: rpm version: '6.2' netavark: - arch: x86_64 epoch: 2 name: netavark release: 1.el9 source: rpm version: 1.16.0 nettle: - arch: x86_64 epoch: null name: nettle release: 1.el9 source: rpm version: 3.10.1 newt: - arch: x86_64 epoch: null name: newt release: 11.el9 source: rpm version: 0.52.21 nfs-utils: - arch: x86_64 epoch: 1 name: nfs-utils release: 39.el9 source: rpm version: 2.5.4 nftables: - arch: x86_64 epoch: 1 name: nftables release: 4.el9 source: rpm version: 1.0.9 npth: - arch: x86_64 epoch: null name: npth release: 8.el9 source: rpm version: '1.6' numactl-libs: - arch: x86_64 epoch: null name: numactl-libs release: 3.el9 source: rpm version: 2.0.19 ocaml-srpm-macros: - arch: noarch epoch: null name: ocaml-srpm-macros release: 6.el9 source: rpm version: '6' oddjob: - arch: x86_64 epoch: null name: oddjob release: 7.el9 source: rpm version: 0.34.7 oddjob-mkhomedir: - arch: x86_64 epoch: null name: oddjob-mkhomedir release: 7.el9 source: rpm version: 0.34.7 oniguruma: - arch: x86_64 epoch: null name: oniguruma release: 1.el9.6 source: rpm version: 6.9.6 openblas-srpm-macros: - arch: noarch epoch: null name: openblas-srpm-macros release: 11.el9 source: rpm version: '2' openldap: - arch: x86_64 epoch: null name: openldap release: 4.el9 source: rpm version: 2.6.8 openldap-devel: - arch: x86_64 epoch: null name: openldap-devel release: 4.el9 source: rpm version: 2.6.8 openssh: - arch: x86_64 epoch: null name: openssh release: 1.el9 source: rpm version: 9.9p1 openssh-clients: - arch: x86_64 epoch: null name: openssh-clients release: 1.el9 source: rpm version: 9.9p1 openssh-server: - arch: x86_64 epoch: null name: openssh-server release: 1.el9 source: rpm version: 9.9p1 openssl: - arch: x86_64 epoch: 1 name: openssl release: 5.el9 source: rpm version: 3.5.1 openssl-devel: - arch: x86_64 epoch: 1 name: openssl-devel release: 5.el9 source: rpm version: 3.5.1 openssl-fips-provider: - arch: x86_64 epoch: 1 name: openssl-fips-provider release: 5.el9 source: rpm version: 3.5.1 openssl-libs: - arch: x86_64 epoch: 1 name: openssl-libs release: 5.el9 source: rpm version: 3.5.1 os-prober: - arch: x86_64 epoch: null name: os-prober release: 12.el9 source: rpm version: '1.77' p11-kit: - arch: x86_64 epoch: null name: p11-kit release: 1.el9 source: rpm version: 0.25.10 
p11-kit-trust: - arch: x86_64 epoch: null name: p11-kit-trust release: 1.el9 source: rpm version: 0.25.10 pam: - arch: x86_64 epoch: null name: pam release: 26.el9 source: rpm version: 1.5.1 parted: - arch: x86_64 epoch: null name: parted release: 3.el9 source: rpm version: '3.5' passt: - arch: x86_64 epoch: null name: passt release: 2.el9 source: rpm version: 0^20250512.g8ec1341 passt-selinux: - arch: noarch epoch: null name: passt-selinux release: 2.el9 source: rpm version: 0^20250512.g8ec1341 passwd: - arch: x86_64 epoch: null name: passwd release: 12.el9 source: rpm version: '0.80' patch: - arch: x86_64 epoch: null name: patch release: 16.el9 source: rpm version: 2.7.6 pciutils-libs: - arch: x86_64 epoch: null name: pciutils-libs release: 7.el9 source: rpm version: 3.7.0 pcre: - arch: x86_64 epoch: null name: pcre release: 4.el9 source: rpm version: '8.44' pcre2: - arch: x86_64 epoch: null name: pcre2 release: 6.el9 source: rpm version: '10.40' pcre2-syntax: - arch: noarch epoch: null name: pcre2-syntax release: 6.el9 source: rpm version: '10.40' perl-AutoLoader: - arch: noarch epoch: 0 name: perl-AutoLoader release: 483.el9 source: rpm version: '5.74' perl-B: - arch: x86_64 epoch: 0 name: perl-B release: 483.el9 source: rpm version: '1.80' perl-Carp: - arch: noarch epoch: null name: perl-Carp release: 460.el9 source: rpm version: '1.50' perl-Class-Struct: - arch: noarch epoch: 0 name: perl-Class-Struct release: 483.el9 source: rpm version: '0.66' perl-Data-Dumper: - arch: x86_64 epoch: null name: perl-Data-Dumper release: 462.el9 source: rpm version: '2.174' perl-Digest: - arch: noarch epoch: null name: perl-Digest release: 4.el9 source: rpm version: '1.19' perl-Digest-MD5: - arch: x86_64 epoch: null name: perl-Digest-MD5 release: 4.el9 source: rpm version: '2.58' perl-DynaLoader: - arch: x86_64 epoch: 0 name: perl-DynaLoader release: 483.el9 source: rpm version: '1.47' perl-Encode: - arch: x86_64 epoch: 4 name: perl-Encode release: 462.el9 source: rpm version: '3.08' perl-Errno: - arch: x86_64 epoch: 0 name: perl-Errno release: 483.el9 source: rpm version: '1.30' perl-Error: - arch: noarch epoch: 1 name: perl-Error release: 7.el9 source: rpm version: '0.17029' perl-Exporter: - arch: noarch epoch: null name: perl-Exporter release: 461.el9 source: rpm version: '5.74' perl-Fcntl: - arch: x86_64 epoch: 0 name: perl-Fcntl release: 483.el9 source: rpm version: '1.13' perl-File-Basename: - arch: noarch epoch: 0 name: perl-File-Basename release: 483.el9 source: rpm version: '2.85' perl-File-Find: - arch: noarch epoch: 0 name: perl-File-Find release: 483.el9 source: rpm version: '1.37' perl-File-Path: - arch: noarch epoch: null name: perl-File-Path release: 4.el9 source: rpm version: '2.18' perl-File-Temp: - arch: noarch epoch: 1 name: perl-File-Temp release: 4.el9 source: rpm version: 0.231.100 perl-File-stat: - arch: noarch epoch: 0 name: perl-File-stat release: 483.el9 source: rpm version: '1.09' perl-FileHandle: - arch: noarch epoch: 0 name: perl-FileHandle release: 483.el9 source: rpm version: '2.03' perl-Getopt-Long: - arch: noarch epoch: 1 name: perl-Getopt-Long release: 4.el9 source: rpm version: '2.52' perl-Getopt-Std: - arch: noarch epoch: 0 name: perl-Getopt-Std release: 483.el9 source: rpm version: '1.12' perl-Git: - arch: noarch epoch: null name: perl-Git release: 1.el9 source: rpm version: 2.47.3 perl-HTTP-Tiny: - arch: noarch epoch: null name: perl-HTTP-Tiny release: 462.el9 source: rpm version: '0.076' perl-IO: - arch: x86_64 epoch: 0 name: perl-IO release: 483.el9 source: rpm 
version: '1.43' perl-IO-Socket-IP: - arch: noarch epoch: null name: perl-IO-Socket-IP release: 5.el9 source: rpm version: '0.41' perl-IO-Socket-SSL: - arch: noarch epoch: null name: perl-IO-Socket-SSL release: 2.el9 source: rpm version: '2.073' perl-IPC-Open3: - arch: noarch epoch: 0 name: perl-IPC-Open3 release: 483.el9 source: rpm version: '1.21' perl-MIME-Base64: - arch: x86_64 epoch: null name: perl-MIME-Base64 release: 4.el9 source: rpm version: '3.16' perl-Mozilla-CA: - arch: noarch epoch: null name: perl-Mozilla-CA release: 6.el9 source: rpm version: '20200520' perl-NDBM_File: - arch: x86_64 epoch: 0 name: perl-NDBM_File release: 483.el9 source: rpm version: '1.15' perl-Net-SSLeay: - arch: x86_64 epoch: null name: perl-Net-SSLeay release: 3.el9 source: rpm version: '1.94' perl-POSIX: - arch: x86_64 epoch: 0 name: perl-POSIX release: 483.el9 source: rpm version: '1.94' perl-PathTools: - arch: x86_64 epoch: null name: perl-PathTools release: 461.el9 source: rpm version: '3.78' perl-Pod-Escapes: - arch: noarch epoch: 1 name: perl-Pod-Escapes release: 460.el9 source: rpm version: '1.07' perl-Pod-Perldoc: - arch: noarch epoch: null name: perl-Pod-Perldoc release: 461.el9 source: rpm version: 3.28.01 perl-Pod-Simple: - arch: noarch epoch: 1 name: perl-Pod-Simple release: 4.el9 source: rpm version: '3.42' perl-Pod-Usage: - arch: noarch epoch: 4 name: perl-Pod-Usage release: 4.el9 source: rpm version: '2.01' perl-Scalar-List-Utils: - arch: x86_64 epoch: 4 name: perl-Scalar-List-Utils release: 462.el9 source: rpm version: '1.56' perl-SelectSaver: - arch: noarch epoch: 0 name: perl-SelectSaver release: 483.el9 source: rpm version: '1.02' perl-Socket: - arch: x86_64 epoch: 4 name: perl-Socket release: 4.el9 source: rpm version: '2.031' perl-Storable: - arch: x86_64 epoch: 1 name: perl-Storable release: 460.el9 source: rpm version: '3.21' perl-Symbol: - arch: noarch epoch: 0 name: perl-Symbol release: 483.el9 source: rpm version: '1.08' perl-Term-ANSIColor: - arch: noarch epoch: null name: perl-Term-ANSIColor release: 461.el9 source: rpm version: '5.01' perl-Term-Cap: - arch: noarch epoch: null name: perl-Term-Cap release: 460.el9 source: rpm version: '1.17' perl-TermReadKey: - arch: x86_64 epoch: null name: perl-TermReadKey release: 11.el9 source: rpm version: '2.38' perl-Text-ParseWords: - arch: noarch epoch: null name: perl-Text-ParseWords release: 460.el9 source: rpm version: '3.30' perl-Text-Tabs+Wrap: - arch: noarch epoch: null name: perl-Text-Tabs+Wrap release: 460.el9 source: rpm version: '2013.0523' perl-Time-Local: - arch: noarch epoch: 2 name: perl-Time-Local release: 7.el9 source: rpm version: '1.300' perl-URI: - arch: noarch epoch: null name: perl-URI release: 3.el9 source: rpm version: '5.09' perl-base: - arch: noarch epoch: 0 name: perl-base release: 483.el9 source: rpm version: '2.27' perl-constant: - arch: noarch epoch: null name: perl-constant release: 461.el9 source: rpm version: '1.33' perl-if: - arch: noarch epoch: 0 name: perl-if release: 483.el9 source: rpm version: 0.60.800 perl-interpreter: - arch: x86_64 epoch: 4 name: perl-interpreter release: 483.el9 source: rpm version: 5.32.1 perl-lib: - arch: x86_64 epoch: 0 name: perl-lib release: 483.el9 source: rpm version: '0.65' perl-libnet: - arch: noarch epoch: null name: perl-libnet release: 4.el9 source: rpm version: '3.13' perl-libs: - arch: x86_64 epoch: 4 name: perl-libs release: 483.el9 source: rpm version: 5.32.1 perl-mro: - arch: x86_64 epoch: 0 name: perl-mro release: 483.el9 source: rpm version: '1.23' 
perl-overload: - arch: noarch epoch: 0 name: perl-overload release: 483.el9 source: rpm version: '1.31' perl-overloading: - arch: noarch epoch: 0 name: perl-overloading release: 483.el9 source: rpm version: '0.02' perl-parent: - arch: noarch epoch: 1 name: perl-parent release: 460.el9 source: rpm version: '0.238' perl-podlators: - arch: noarch epoch: 1 name: perl-podlators release: 460.el9 source: rpm version: '4.14' perl-srpm-macros: - arch: noarch epoch: null name: perl-srpm-macros release: 41.el9 source: rpm version: '1' perl-subs: - arch: noarch epoch: 0 name: perl-subs release: 483.el9 source: rpm version: '1.03' perl-vars: - arch: noarch epoch: 0 name: perl-vars release: 483.el9 source: rpm version: '1.05' pigz: - arch: x86_64 epoch: null name: pigz release: 4.el9 source: rpm version: '2.5' pkgconf: - arch: x86_64 epoch: null name: pkgconf release: 10.el9 source: rpm version: 1.7.3 pkgconf-m4: - arch: noarch epoch: null name: pkgconf-m4 release: 10.el9 source: rpm version: 1.7.3 pkgconf-pkg-config: - arch: x86_64 epoch: null name: pkgconf-pkg-config release: 10.el9 source: rpm version: 1.7.3 podman: - arch: x86_64 epoch: 6 name: podman release: 2.el9 source: rpm version: 5.6.0 policycoreutils: - arch: x86_64 epoch: null name: policycoreutils release: 3.el9 source: rpm version: '3.6' policycoreutils-python-utils: - arch: noarch epoch: null name: policycoreutils-python-utils release: 3.el9 source: rpm version: '3.6' polkit: - arch: x86_64 epoch: null name: polkit release: 14.el9 source: rpm version: '0.117' polkit-libs: - arch: x86_64 epoch: null name: polkit-libs release: 14.el9 source: rpm version: '0.117' polkit-pkla-compat: - arch: x86_64 epoch: null name: polkit-pkla-compat release: 21.el9 source: rpm version: '0.1' popt: - arch: x86_64 epoch: null name: popt release: 8.el9 source: rpm version: '1.18' prefixdevname: - arch: x86_64 epoch: null name: prefixdevname release: 8.el9 source: rpm version: 0.1.0 procps-ng: - arch: x86_64 epoch: null name: procps-ng release: 14.el9 source: rpm version: 3.3.17 protobuf-c: - arch: x86_64 epoch: null name: protobuf-c release: 13.el9 source: rpm version: 1.3.3 psmisc: - arch: x86_64 epoch: null name: psmisc release: 3.el9 source: rpm version: '23.4' publicsuffix-list-dafsa: - arch: noarch epoch: null name: publicsuffix-list-dafsa release: 3.el9 source: rpm version: '20210518' pyproject-srpm-macros: - arch: noarch epoch: null name: pyproject-srpm-macros release: 1.el9 source: rpm version: 1.16.2 python-rpm-macros: - arch: noarch epoch: null name: python-rpm-macros release: 54.el9 source: rpm version: '3.9' python-srpm-macros: - arch: noarch epoch: null name: python-srpm-macros release: 54.el9 source: rpm version: '3.9' python-unversioned-command: - arch: noarch epoch: null name: python-unversioned-command release: 2.el9 source: rpm version: 3.9.23 python3: - arch: x86_64 epoch: null name: python3 release: 2.el9 source: rpm version: 3.9.23 python3-attrs: - arch: noarch epoch: null name: python3-attrs release: 7.el9 source: rpm version: 20.3.0 python3-audit: - arch: x86_64 epoch: null name: python3-audit release: 7.el9 source: rpm version: 3.1.5 python3-babel: - arch: noarch epoch: null name: python3-babel release: 2.el9 source: rpm version: 2.9.1 python3-cffi: - arch: x86_64 epoch: null name: python3-cffi release: 5.el9 source: rpm version: 1.14.5 python3-chardet: - arch: noarch epoch: null name: python3-chardet release: 5.el9 source: rpm version: 4.0.0 python3-configobj: - arch: noarch epoch: null name: python3-configobj release: 25.el9 source: 
rpm version: 5.0.6 python3-cryptography: - arch: x86_64 epoch: null name: python3-cryptography release: 5.el9 source: rpm version: 36.0.1 python3-dasbus: - arch: noarch epoch: null name: python3-dasbus release: 1.el9 source: rpm version: '1.7' python3-dateutil: - arch: noarch epoch: 1 name: python3-dateutil release: 7.el9 source: rpm version: 2.8.1 python3-dbus: - arch: x86_64 epoch: null name: python3-dbus release: 2.el9 source: rpm version: 1.2.18 python3-devel: - arch: x86_64 epoch: null name: python3-devel release: 2.el9 source: rpm version: 3.9.23 python3-distro: - arch: noarch epoch: null name: python3-distro release: 7.el9 source: rpm version: 1.5.0 python3-dnf: - arch: noarch epoch: null name: python3-dnf release: 31.el9 source: rpm version: 4.14.0 python3-dnf-plugins-core: - arch: noarch epoch: null name: python3-dnf-plugins-core release: 23.el9 source: rpm version: 4.3.0 python3-enchant: - arch: noarch epoch: null name: python3-enchant release: 5.el9 source: rpm version: 3.2.0 python3-file-magic: - arch: noarch epoch: null name: python3-file-magic release: 16.el9 source: rpm version: '5.39' python3-gobject-base: - arch: x86_64 epoch: null name: python3-gobject-base release: 6.el9 source: rpm version: 3.40.1 python3-gobject-base-noarch: - arch: noarch epoch: null name: python3-gobject-base-noarch release: 6.el9 source: rpm version: 3.40.1 python3-gpg: - arch: x86_64 epoch: null name: python3-gpg release: 6.el9 source: rpm version: 1.15.1 python3-hawkey: - arch: x86_64 epoch: null name: python3-hawkey release: 16.el9 source: rpm version: 0.69.0 python3-idna: - arch: noarch epoch: null name: python3-idna release: 7.el9.1 source: rpm version: '2.10' python3-jinja2: - arch: noarch epoch: null name: python3-jinja2 release: 8.el9 source: rpm version: 2.11.3 python3-jmespath: - arch: noarch epoch: null name: python3-jmespath release: 11.el9 source: rpm version: 0.9.4 python3-jsonpatch: - arch: noarch epoch: null name: python3-jsonpatch release: 16.el9 source: rpm version: '1.21' python3-jsonpointer: - arch: noarch epoch: null name: python3-jsonpointer release: 4.el9 source: rpm version: '2.0' python3-jsonschema: - arch: noarch epoch: null name: python3-jsonschema release: 13.el9 source: rpm version: 3.2.0 python3-libcomps: - arch: x86_64 epoch: null name: python3-libcomps release: 1.el9 source: rpm version: 0.1.18 python3-libdnf: - arch: x86_64 epoch: null name: python3-libdnf release: 16.el9 source: rpm version: 0.69.0 python3-libs: - arch: x86_64 epoch: null name: python3-libs release: 2.el9 source: rpm version: 3.9.23 python3-libselinux: - arch: x86_64 epoch: null name: python3-libselinux release: 3.el9 source: rpm version: '3.6' python3-libsemanage: - arch: x86_64 epoch: null name: python3-libsemanage release: 5.el9 source: rpm version: '3.6' python3-libvirt: - arch: x86_64 epoch: null name: python3-libvirt release: 1.el9 source: rpm version: 10.10.0 python3-libxml2: - arch: x86_64 epoch: null name: python3-libxml2 release: 12.el9 source: rpm version: 2.9.13 python3-lxml: - arch: x86_64 epoch: null name: python3-lxml release: 3.el9 source: rpm version: 4.6.5 python3-markupsafe: - arch: x86_64 epoch: null name: python3-markupsafe release: 12.el9 source: rpm version: 1.1.1 python3-netaddr: - arch: noarch epoch: null name: python3-netaddr release: 3.el9 source: rpm version: 0.10.1 python3-netifaces: - arch: x86_64 epoch: null name: python3-netifaces release: 15.el9 source: rpm version: 0.10.6 python3-oauthlib: - arch: noarch epoch: null name: python3-oauthlib release: 5.el9 source: rpm 
version: 3.1.1 python3-packaging: - arch: noarch epoch: null name: python3-packaging release: 5.el9 source: rpm version: '20.9' python3-pexpect: - arch: noarch epoch: null name: python3-pexpect release: 7.el9 source: rpm version: 4.8.0 python3-pip: - arch: noarch epoch: null name: python3-pip release: 1.el9 source: rpm version: 21.3.1 python3-pip-wheel: - arch: noarch epoch: null name: python3-pip-wheel release: 1.el9 source: rpm version: 21.3.1 python3-ply: - arch: noarch epoch: null name: python3-ply release: 14.el9 source: rpm version: '3.11' python3-policycoreutils: - arch: noarch epoch: null name: python3-policycoreutils release: 3.el9 source: rpm version: '3.6' python3-prettytable: - arch: noarch epoch: null name: python3-prettytable release: 27.el9 source: rpm version: 0.7.2 python3-ptyprocess: - arch: noarch epoch: null name: python3-ptyprocess release: 12.el9 source: rpm version: 0.6.0 python3-pycparser: - arch: noarch epoch: null name: python3-pycparser release: 6.el9 source: rpm version: '2.20' python3-pyparsing: - arch: noarch epoch: null name: python3-pyparsing release: 9.el9 source: rpm version: 2.4.7 python3-pyrsistent: - arch: x86_64 epoch: null name: python3-pyrsistent release: 8.el9 source: rpm version: 0.17.3 python3-pyserial: - arch: noarch epoch: null name: python3-pyserial release: 12.el9 source: rpm version: '3.4' python3-pysocks: - arch: noarch epoch: null name: python3-pysocks release: 12.el9 source: rpm version: 1.7.1 python3-pytz: - arch: noarch epoch: null name: python3-pytz release: 5.el9 source: rpm version: '2021.1' python3-pyyaml: - arch: x86_64 epoch: null name: python3-pyyaml release: 6.el9 source: rpm version: 5.4.1 python3-requests: - arch: noarch epoch: null name: python3-requests release: 10.el9 source: rpm version: 2.25.1 python3-resolvelib: - arch: noarch epoch: null name: python3-resolvelib release: 5.el9 source: rpm version: 0.5.4 python3-rpm: - arch: x86_64 epoch: null name: python3-rpm release: 39.el9 source: rpm version: 4.16.1.3 python3-rpm-generators: - arch: noarch epoch: null name: python3-rpm-generators release: 9.el9 source: rpm version: '12' python3-rpm-macros: - arch: noarch epoch: null name: python3-rpm-macros release: 54.el9 source: rpm version: '3.9' python3-setools: - arch: x86_64 epoch: null name: python3-setools release: 1.el9 source: rpm version: 4.4.4 python3-setuptools: - arch: noarch epoch: null name: python3-setuptools release: 15.el9 source: rpm version: 53.0.0 python3-setuptools-wheel: - arch: noarch epoch: null name: python3-setuptools-wheel release: 15.el9 source: rpm version: 53.0.0 python3-six: - arch: noarch epoch: null name: python3-six release: 9.el9 source: rpm version: 1.15.0 python3-systemd: - arch: x86_64 epoch: null name: python3-systemd release: 19.el9 source: rpm version: '234' python3-urllib3: - arch: noarch epoch: null name: python3-urllib3 release: 6.el9 source: rpm version: 1.26.5 python3.12: - arch: x86_64 epoch: null name: python3.12 release: 2.el9 source: rpm version: 3.12.11 python3.12-libs: - arch: x86_64 epoch: null name: python3.12-libs release: 2.el9 source: rpm version: 3.12.11 python3.12-pip: - arch: noarch epoch: null name: python3.12-pip release: 5.el9 source: rpm version: 23.2.1 python3.12-pip-wheel: - arch: noarch epoch: null name: python3.12-pip-wheel release: 5.el9 source: rpm version: 23.2.1 python3.12-setuptools: - arch: noarch epoch: null name: python3.12-setuptools release: 5.el9 source: rpm version: 68.2.2 qemu-guest-agent: - arch: x86_64 epoch: 17 name: qemu-guest-agent release: 29.el9 
source: rpm version: 9.1.0 qt5-srpm-macros: - arch: noarch epoch: null name: qt5-srpm-macros release: 1.el9 source: rpm version: 5.15.9 quota: - arch: x86_64 epoch: 1 name: quota release: 4.el9 source: rpm version: '4.09' quota-nls: - arch: noarch epoch: 1 name: quota-nls release: 4.el9 source: rpm version: '4.09' readline: - arch: x86_64 epoch: null name: readline release: 4.el9 source: rpm version: '8.1' readline-devel: - arch: x86_64 epoch: null name: readline-devel release: 4.el9 source: rpm version: '8.1' redhat-rpm-config: - arch: noarch epoch: null name: redhat-rpm-config release: 1.el9 source: rpm version: '210' rootfiles: - arch: noarch epoch: null name: rootfiles release: 35.el9 source: rpm version: '8.1' rpcbind: - arch: x86_64 epoch: null name: rpcbind release: 7.el9 source: rpm version: 1.2.6 rpm: - arch: x86_64 epoch: null name: rpm release: 39.el9 source: rpm version: 4.16.1.3 rpm-build: - arch: x86_64 epoch: null name: rpm-build release: 39.el9 source: rpm version: 4.16.1.3 rpm-build-libs: - arch: x86_64 epoch: null name: rpm-build-libs release: 39.el9 source: rpm version: 4.16.1.3 rpm-libs: - arch: x86_64 epoch: null name: rpm-libs release: 39.el9 source: rpm version: 4.16.1.3 rpm-plugin-audit: - arch: x86_64 epoch: null name: rpm-plugin-audit release: 39.el9 source: rpm version: 4.16.1.3 rpm-plugin-selinux: - arch: x86_64 epoch: null name: rpm-plugin-selinux release: 39.el9 source: rpm version: 4.16.1.3 rpm-plugin-systemd-inhibit: - arch: x86_64 epoch: null name: rpm-plugin-systemd-inhibit release: 39.el9 source: rpm version: 4.16.1.3 rpm-sign: - arch: x86_64 epoch: null name: rpm-sign release: 39.el9 source: rpm version: 4.16.1.3 rpm-sign-libs: - arch: x86_64 epoch: null name: rpm-sign-libs release: 39.el9 source: rpm version: 4.16.1.3 rpmlint: - arch: noarch epoch: null name: rpmlint release: 19.el9 source: rpm version: '1.11' rsync: - arch: x86_64 epoch: null name: rsync release: 3.el9 source: rpm version: 3.2.5 rsyslog: - arch: x86_64 epoch: null name: rsyslog release: 2.el9 source: rpm version: 8.2506.0 rsyslog-logrotate: - arch: x86_64 epoch: null name: rsyslog-logrotate release: 2.el9 source: rpm version: 8.2506.0 ruby: - arch: x86_64 epoch: null name: ruby release: 165.el9 source: rpm version: 3.0.7 ruby-default-gems: - arch: noarch epoch: null name: ruby-default-gems release: 165.el9 source: rpm version: 3.0.7 ruby-devel: - arch: x86_64 epoch: null name: ruby-devel release: 165.el9 source: rpm version: 3.0.7 ruby-libs: - arch: x86_64 epoch: null name: ruby-libs release: 165.el9 source: rpm version: 3.0.7 rubygem-bigdecimal: - arch: x86_64 epoch: null name: rubygem-bigdecimal release: 165.el9 source: rpm version: 3.0.0 rubygem-bundler: - arch: noarch epoch: null name: rubygem-bundler release: 165.el9 source: rpm version: 2.2.33 rubygem-io-console: - arch: x86_64 epoch: null name: rubygem-io-console release: 165.el9 source: rpm version: 0.5.7 rubygem-json: - arch: x86_64 epoch: null name: rubygem-json release: 165.el9 source: rpm version: 2.5.1 rubygem-psych: - arch: x86_64 epoch: null name: rubygem-psych release: 165.el9 source: rpm version: 3.3.2 rubygem-rdoc: - arch: noarch epoch: null name: rubygem-rdoc release: 165.el9 source: rpm version: 6.3.4.1 rubygems: - arch: noarch epoch: null name: rubygems release: 165.el9 source: rpm version: 3.2.33 rust-srpm-macros: - arch: noarch epoch: null name: rust-srpm-macros release: 4.el9 source: rpm version: '17' samba-client-libs: - arch: x86_64 epoch: 0 name: samba-client-libs release: 3.el9 source: rpm version: 4.23.0 
samba-common: - arch: noarch epoch: 0 name: samba-common release: 3.el9 source: rpm version: 4.23.0 samba-common-libs: - arch: x86_64 epoch: 0 name: samba-common-libs release: 3.el9 source: rpm version: 4.23.0 sed: - arch: x86_64 epoch: null name: sed release: 9.el9 source: rpm version: '4.8' selinux-policy: - arch: noarch epoch: null name: selinux-policy release: 1.el9 source: rpm version: 38.1.65 selinux-policy-targeted: - arch: noarch epoch: null name: selinux-policy-targeted release: 1.el9 source: rpm version: 38.1.65 setroubleshoot-plugins: - arch: noarch epoch: null name: setroubleshoot-plugins release: 4.el9 source: rpm version: 3.3.14 setroubleshoot-server: - arch: x86_64 epoch: null name: setroubleshoot-server release: 2.el9 source: rpm version: 3.3.35 setup: - arch: noarch epoch: null name: setup release: 10.el9 source: rpm version: 2.13.7 sg3_utils: - arch: x86_64 epoch: null name: sg3_utils release: 10.el9 source: rpm version: '1.47' sg3_utils-libs: - arch: x86_64 epoch: null name: sg3_utils-libs release: 10.el9 source: rpm version: '1.47' shadow-utils: - arch: x86_64 epoch: 2 name: shadow-utils release: 15.el9 source: rpm version: '4.9' shadow-utils-subid: - arch: x86_64 epoch: 2 name: shadow-utils-subid release: 15.el9 source: rpm version: '4.9' shared-mime-info: - arch: x86_64 epoch: null name: shared-mime-info release: 5.el9 source: rpm version: '2.1' slang: - arch: x86_64 epoch: null name: slang release: 11.el9 source: rpm version: 2.3.2 slirp4netns: - arch: x86_64 epoch: null name: slirp4netns release: 1.el9 source: rpm version: 1.3.3 snappy: - arch: x86_64 epoch: null name: snappy release: 8.el9 source: rpm version: 1.1.8 sos: - arch: noarch epoch: null name: sos release: 4.el9 source: rpm version: 4.10.0 sqlite-libs: - arch: x86_64 epoch: null name: sqlite-libs release: 8.el9 source: rpm version: 3.34.1 squashfs-tools: - arch: x86_64 epoch: null name: squashfs-tools release: 10.git1.el9 source: rpm version: '4.4' sscg: - arch: x86_64 epoch: null name: sscg release: 10.el9 source: rpm version: 3.0.0 sshpass: - arch: x86_64 epoch: null name: sshpass release: 4.el9 source: rpm version: '1.09' sssd-client: - arch: x86_64 epoch: null name: sssd-client release: 5.el9 source: rpm version: 2.9.7 sssd-common: - arch: x86_64 epoch: null name: sssd-common release: 5.el9 source: rpm version: 2.9.7 sssd-kcm: - arch: x86_64 epoch: null name: sssd-kcm release: 5.el9 source: rpm version: 2.9.7 sssd-nfs-idmap: - arch: x86_64 epoch: null name: sssd-nfs-idmap release: 5.el9 source: rpm version: 2.9.7 sudo: - arch: x86_64 epoch: null name: sudo release: 13.el9 source: rpm version: 1.9.5p2 systemd: - arch: x86_64 epoch: null name: systemd release: 57.el9 source: rpm version: '252' systemd-devel: - arch: x86_64 epoch: null name: systemd-devel release: 57.el9 source: rpm version: '252' systemd-libs: - arch: x86_64 epoch: null name: systemd-libs release: 57.el9 source: rpm version: '252' systemd-pam: - arch: x86_64 epoch: null name: systemd-pam release: 57.el9 source: rpm version: '252' systemd-rpm-macros: - arch: noarch epoch: null name: systemd-rpm-macros release: 57.el9 source: rpm version: '252' systemd-udev: - arch: x86_64 epoch: null name: systemd-udev release: 57.el9 source: rpm version: '252' tar: - arch: x86_64 epoch: 2 name: tar release: 7.el9 source: rpm version: '1.34' tcl: - arch: x86_64 epoch: 1 name: tcl release: 7.el9 source: rpm version: 8.6.10 tcpdump: - arch: x86_64 epoch: 14 name: tcpdump release: 9.el9 source: rpm version: 4.99.0 teamd: - arch: x86_64 epoch: null name: 
teamd release: 16.el9 source: rpm version: '1.31' time: - arch: x86_64 epoch: null name: time release: 18.el9 source: rpm version: '1.9' tmux: - arch: x86_64 epoch: null name: tmux release: 5.el9 source: rpm version: 3.2a tpm2-tss: - arch: x86_64 epoch: null name: tpm2-tss release: 1.el9 source: rpm version: 3.2.3 traceroute: - arch: x86_64 epoch: 3 name: traceroute release: 1.el9 source: rpm version: 2.1.1 tzdata: - arch: noarch epoch: null name: tzdata release: 2.el9 source: rpm version: 2025b unzip: - arch: x86_64 epoch: null name: unzip release: 59.el9 source: rpm version: '6.0' userspace-rcu: - arch: x86_64 epoch: null name: userspace-rcu release: 6.el9 source: rpm version: 0.12.1 util-linux: - arch: x86_64 epoch: null name: util-linux release: 21.el9 source: rpm version: 2.37.4 util-linux-core: - arch: x86_64 epoch: null name: util-linux-core release: 21.el9 source: rpm version: 2.37.4 vim-minimal: - arch: x86_64 epoch: 2 name: vim-minimal release: 22.el9 source: rpm version: 8.2.2637 webkit2gtk3-jsc: - arch: x86_64 epoch: null name: webkit2gtk3-jsc release: 1.el9 source: rpm version: 2.48.5 wget: - arch: x86_64 epoch: null name: wget release: 8.el9 source: rpm version: 1.21.1 which: - arch: x86_64 epoch: null name: which release: 30.el9 source: rpm version: '2.21' xfsprogs: - arch: x86_64 epoch: null name: xfsprogs release: 7.el9 source: rpm version: 6.4.0 xz: - arch: x86_64 epoch: null name: xz release: 8.el9 source: rpm version: 5.2.5 xz-devel: - arch: x86_64 epoch: null name: xz-devel release: 8.el9 source: rpm version: 5.2.5 xz-libs: - arch: x86_64 epoch: null name: xz-libs release: 8.el9 source: rpm version: 5.2.5 yajl: - arch: x86_64 epoch: null name: yajl release: 25.el9 source: rpm version: 2.1.0 yum: - arch: noarch epoch: null name: yum release: 31.el9 source: rpm version: 4.14.0 yum-utils: - arch: noarch epoch: null name: yum-utils release: 23.el9 source: rpm version: 4.3.0 zip: - arch: x86_64 epoch: null name: zip release: 35.el9 source: rpm version: '3.0' zlib: - arch: x86_64 epoch: null name: zlib release: 41.el9 source: rpm version: 1.2.11 zlib-devel: - arch: x86_64 epoch: null name: zlib-devel release: 41.el9 source: rpm version: 1.2.11 zstd: - arch: x86_64 epoch: null name: zstd release: 1.el9 source: rpm version: 1.5.5 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/0000755000175000017500000000000015073042703025073 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean-antelope-testing.repo0000644000175000017500000000316315073042703033036 0ustar zuulzuul[delorean-antelope-testing] name=dlrn-antelope-testing baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [delorean-antelope-build-deps] name=dlrn-antelope-build-deps baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/build-deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-rabbitmq] name=centos9-rabbitmq baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/messaging/$basearch/rabbitmq-38/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-storage] name=centos9-storage baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/storage/$basearch/ceph-reef/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-opstools] name=centos9-opstools 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/opstools/$basearch/collectd-5/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-nfv-ovs] name=NFV SIG OpenvSwitch baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/nfv/$basearch/openvswitch-2/ gpgcheck=0 enabled=1 module_hotfixes=1 # epel is required for Ceph Reef [epel-low-priority] name=Extra Packages for Enterprise Linux $releasever - $basearch metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir enabled=1 gpgcheck=0 countme=1 priority=100 includepkgs=libarrow*,parquet*,python3-asyncssh,re2,python3-grpcio,grpc*,abseil*,thrift* home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean.repo0000644000175000017500000001335315073042703027560 0ustar zuulzuul[delorean-component-barbican] name=delorean-openstack-barbican-42b4c41831408a8e323fec3c8983b5c793b64874 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/barbican/42/b4/42b4c41831408a8e323fec3c8983b5c793b64874_08052e9d enabled=1 gpgcheck=0 priority=1 [delorean-component-baremetal] name=delorean-python-glean-10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/baremetal/10/df/10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7_36137eb3 enabled=1 gpgcheck=0 priority=1 [delorean-component-cinder] name=delorean-openstack-cinder-1c00d6490d88e436f26efb71f2ac96e75252e97c baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cinder/1c/00/1c00d6490d88e436f26efb71f2ac96e75252e97c_f716f000 enabled=1 gpgcheck=0 priority=1 [delorean-component-clients] name=delorean-python-stevedore-c4acc5639fd2329372142e39464fcca0209b0018 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/clients/c4/ac/c4acc5639fd2329372142e39464fcca0209b0018_d3ef8337 enabled=1 gpgcheck=0 priority=1 [delorean-component-cloudops] name=delorean-python-observabilityclient-2f31846d73c044740ccaaa4204720f0b94d64145 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cloudops/2f/31/2f31846d73c044740ccaaa4204720f0b94d64145_c2222caa enabled=1 gpgcheck=0 priority=1 [delorean-component-common] name=delorean-diskimage-builder-7d793e664cf892461c5547a2776e4b1834d0396b baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/common/7d/79/7d793e664cf892461c5547a2776e4b1834d0396b_bf6d4aba enabled=1 gpgcheck=0 priority=1 [delorean-component-compute] name=delorean-openstack-nova-6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/compute/6f/8d/6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e_dc05b899 enabled=1 gpgcheck=0 priority=1 [delorean-component-designate] name=delorean-python-designate-tests-tempest-347fdbc9b4595a10b726526b3c0b5928e5b7fcf2 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/designate/34/7f/347fdbc9b4595a10b726526b3c0b5928e5b7fcf2_3fd39337 enabled=1 gpgcheck=0 priority=1 [delorean-component-glance] name=delorean-openstack-glance-1fd12c29b339f30fe823e2b5beba14b5f241e52a 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/glance/1f/d1/1fd12c29b339f30fe823e2b5beba14b5f241e52a_0d693729 enabled=1 gpgcheck=0 priority=1 [delorean-component-keystone] name=delorean-openstack-keystone-e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/keystone/e4/b4/e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7_264c03cc enabled=1 gpgcheck=0 priority=1 [delorean-component-manila] name=delorean-openstack-manila-3c01b7181572c95dac462eb19c3121e36cb0fe95 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/manila/3c/01/3c01b7181572c95dac462eb19c3121e36cb0fe95_912dfd18 enabled=1 gpgcheck=0 priority=1 [delorean-component-network] name=delorean-python-vmware-nsxlib-458234972d1428ac92bbeff26511edfdc49b6b2f baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/network/45/82/458234972d1428ac92bbeff26511edfdc49b6b2f_1bca6328 enabled=1 gpgcheck=0 priority=1 [delorean-component-octavia] name=delorean-openstack-octavia-ba397f07a7331190208c93368ee23826ac4e2707 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/octavia/ba/39/ba397f07a7331190208c93368ee23826ac4e2707_9d6e596a enabled=1 gpgcheck=0 priority=1 [delorean-component-optimize] name=delorean-openstack-watcher-c014f81a8647287f6dcc339321c1256f5a2e82d5 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/optimize/c0/14/c014f81a8647287f6dcc339321c1256f5a2e82d5_bcbfdccc enabled=1 gpgcheck=0 priority=1 [delorean-component-podified] name=delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d301b5c6775e6022 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/podified/ff/70/ff70d03bf5bc0bb6f3540a02d301b5c6775e6022_dbfdef11 enabled=1 gpgcheck=0 priority=1 [delorean-component-puppet] name=delorean-puppet-ceph-91ba84bc002c318a7f961d084e992b2e88ffd5b3 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/puppet/91/ba/91ba84bc002c318a7f961d084e992b2e88ffd5b3_7cde1ad1 enabled=1 gpgcheck=0 priority=1 [delorean-component-swift] name=delorean-openstack-swift-dc98a8463506ac520c469adb0ef47d0f7753905a baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/swift/dc/98/dc98a8463506ac520c469adb0ef47d0f7753905a_9d02f069 enabled=1 gpgcheck=0 priority=1 [delorean-component-tempest] name=delorean-python-tempestconf-8515371b7cceebd4282e09f1d8f0cc842df82855 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/tempest/85/15/8515371b7cceebd4282e09f1d8f0cc842df82855_a1e336c7 enabled=1 gpgcheck=0 priority=1 [delorean-component-ui] name=delorean-openstack-heat-ui-013accbfd179753bc3f0d1f4e5bed07a4fd9f771 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/ui/01/3a/013accbfd179753bc3f0d1f4e5bed07a4fd9f771_0c88e467 enabled=1 gpgcheck=0 priority=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-appstream.repo0000644000175000017500000000031615073042703033350 0ustar zuulzuul [repo-setup-centos-appstream] name=repo-setup-centos-appstream 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/AppStream/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-baseos.repo0000644000175000017500000000030415073042703032625 0ustar zuulzuul [repo-setup-centos-baseos] name=repo-setup-centos-baseos baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/BaseOS/$basearch/os/ gpgcheck=0 enabled=1 ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-highavailability.repohome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-highavailabili0000644000175000017500000000034215073042703033344 0ustar zuulzuul [repo-setup-centos-highavailability] name=repo-setup-centos-highavailability baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/HighAvailability/$basearch/os/ gpgcheck=0 enabled=1 ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-powertools.repohome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-powertools.rep0000644000175000017500000000031115073042703033405 0ustar zuulzuul [repo-setup-centos-powertools] name=repo-setup-centos-powertools baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/CRB/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean.repo.md50000644000175000017500000000004115073042701030230 0ustar zuulzuulc4b77291aeca5591ac860bd4127cec2f home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24/0000777000175000017500000000000015073043273026720 5ustar zuulzuul././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24/ansible_facts_cache/home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24/ansible_facts_0000755000175000017500000000000015073043273031571 5ustar zuulzuul././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24/ansible_facts_cache/localhosthome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24/ansible_facts_0000644000175000017500000016030515073043273031600 0ustar zuulzuul{ "_ansible_facts_gathered": true, "ansible_all_ipv4_addresses": [ "192.168.122.11", "38.102.83.214" ], "ansible_all_ipv6_addresses": [ "fe80::f816:3eff:fec1:afa3" ], "ansible_apparmor": { "status": "disabled" }, "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_vendor": "SeaBIOS", "ansible_bios_version": "1.15.0-1", "ansible_board_asset_tag": "NA", "ansible_board_name": "NA", "ansible_board_serial": "NA", "ansible_board_vendor": "NA", "ansible_board_version": "NA", "ansible_chassis_asset_tag": "NA", "ansible_chassis_serial": "NA", "ansible_chassis_vendor": "QEMU", "ansible_chassis_version": "pc-i440fx-6.2", "ansible_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": 
"UUID=9839e2e1-98a2-4594-b609-79d514deb0a3" }, "ansible_date_time": { "date": "2025-10-13", "day": "13", "epoch": "1760314797", "epoch_int": "1760314797", "hour": "00", "iso8601": "2025-10-13T00:19:57Z", "iso8601_basic": "20251013T001957638964", "iso8601_basic_short": "20251013T001957", "iso8601_micro": "2025-10-13T00:19:57.638964Z", "minute": "19", "month": "10", "second": "57", "time": "00:19:57", "tz": "UTC", "tz_dst": "UTC", "tz_offset": "+0000", "weekday": "Monday", "weekday_number": "1", "weeknumber": "41", "year": "2025" }, "ansible_default_ipv4": { "address": "38.102.83.214", "alias": "eth0", "broadcast": "38.102.83.255", "gateway": "38.102.83.1", "interface": "eth0", "macaddress": "fa:16:3e:c1:af:a3", "mtu": 1500, "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_device_links": { "ids": { "sr0": [ "ata-QEMU_DVD-ROM_QM00001" ] }, "labels": { "sr0": [ "config-2" ] }, "masters": {}, "uuids": { "sr0": [ "2025-10-13-00-00-56-00" ], "vda1": [ "9839e2e1-98a2-4594-b609-79d514deb0a3" ] } }, "ansible_devices": { "sr0": { "holders": [], "host": "", "links": { "ids": [ "ata-QEMU_DVD-ROM_QM00001" ], "labels": [ "config-2" ], "masters": [], "uuids": [ "2025-10-13-00-00-56-00" ] }, "model": "QEMU DVD-ROM", "partitions": {}, "removable": "1", "rotational": "0", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "964", "sectorsize": "2048", "size": "482.00 KB", "support_discard": "2048", "vendor": "QEMU", "virtual": 1 }, "vda": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": { "vda1": { "holders": [], "links": { "ids": [], "labels": [], "masters": [], "uuids": [ "9839e2e1-98a2-4594-b609-79d514deb0a3" ] }, "sectors": "167770079", "sectorsize": 512, "size": "80.00 GB", "start": "2048", "uuid": "9839e2e1-98a2-4594-b609-79d514deb0a3" } }, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "none", "sectors": "167772160", "sectorsize": "512", "size": "80.00 GB", "support_discard": "512", "vendor": "0x1af4", "virtual": 1 } }, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/centos-release", "ansible_distribution_file_variety": "CentOS", "ansible_distribution_major_version": "9", "ansible_distribution_release": "Stream", "ansible_distribution_version": "9", "ansible_dns": { "nameservers": [ "192.168.122.10", "199.204.44.24", "199.204.47.54" ] }, "ansible_domain": "", "ansible_effective_group_id": 1000, "ansible_effective_user_id": 1000, "ansible_env": { "BASH_FUNC_which%%": "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}", "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus", "DEBUGINFOD_IMA_CERT_PATH": "/etc/keys/ima:", "DEBUGINFOD_URLS": "https://debuginfod.centos.org/ ", "HOME": "/home/zuul", "LANG": "en_US.UTF-8", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "LOGNAME": "zuul", "MOTD_SHOWN": "pam", "PATH": "~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "PWD": "/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks", "SELINUX_LEVEL_REQUESTED": "", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "SHELL": "/bin/bash", "SHLVL": "2", "SSH_CLIENT": "38.102.83.114 42732 22", "SSH_CONNECTION": "38.102.83.114 42732 
38.102.83.214 22", "USER": "zuul", "XDG_RUNTIME_DIR": "/run/user/1000", "XDG_SESSION_CLASS": "user", "XDG_SESSION_ID": "9", "XDG_SESSION_TYPE": "tty", "_": "/usr/bin/python3", "which_declare": "declare -f" }, "ansible_eth0": { "active": true, "device": "eth0", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "38.102.83.214", "broadcast": "38.102.83.255", "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24" }, "ipv6": [ { "address": "fe80::f816:3eff:fec1:afa3", "prefix": "64", "scope": "link" } ], "macaddress": "fa:16:3e:c1:af:a3", "module": "virtio_net", "mtu": 1500, "pciid": "virtio1", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_eth1": { "active": true, "device": "eth1", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", 
"rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "192.168.122.11", "broadcast": "192.168.122.255", "netmask": "255.255.255.0", "network": "192.168.122.0", "prefix": "24" }, "macaddress": "fa:16:3e:a7:cc:09", "module": "virtio_net", "mtu": 1500, "pciid": "virtio5", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "controller", "ansible_hostname": "controller", "ansible_hostnqn": "nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624", "ansible_interfaces": [ "eth1", "eth0", "lo" ], "ansible_is_chroot": false, "ansible_iscsi_iqn": "", "ansible_kernel": "5.14.0-621.el9.x86_64", "ansible_kernel_version": "#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025", "ansible_lo": { "active": true, "device": "lo", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", 
"tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "on", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "on [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "127.0.0.1", "broadcast": "", "netmask": "255.0.0.0", "network": "127.0.0.0", "prefix": "8" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], "mtu": 65536, "promisc": false, "timestamping": [], "type": "loopback" }, "ansible_loadavg": { "15m": 0.38, "1m": 1.21, "5m": 0.77 }, "ansible_local": {}, "ansible_locally_reachable_ips": { "ipv4": [ "38.102.83.214", "127.0.0.0/8", "127.0.0.1", "192.168.122.11" ], "ipv6": [ "::1", "fe80::f816:3eff:fec1:afa3" ] }, "ansible_lsb": {}, "ansible_lvm": "N/A", "ansible_machine": "x86_64", "ansible_machine_id": "a1727ec20198bc6caf436a6e13c4ff5e", "ansible_memfree_mb": 5428, "ansible_memory_mb": { "nocache": { "free": 6878, "used": 802 }, "real": { "free": 5428, "total": 7680, "used": 2252 }, "swap": { "cached": 0, "free": 0, "total": 0, "used": 0 } }, "ansible_memtotal_mb": 7680, "ansible_mounts": [ { "block_available": 19940560, "block_size": 4096, "block_total": 20954875, "block_used": 1014315, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 41790484, "inode_total": 41942512, "inode_used": 152028, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota", "size_available": 81676533760, "size_total": 85831168000, "uuid": "9839e2e1-98a2-4594-b609-79d514deb0a3" } ], "ansible_nodename": "controller", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "dnf", "ansible_proc_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=9839e2e1-98a2-4594-b609-79d514deb0a3" }, "ansible_processor": [ "0", "AuthenticAMD", "AMD EPYC-Rome Processor", "1", "AuthenticAMD", "AMD EPYC-Rome Processor", "2", "AuthenticAMD", "AMD EPYC-Rome Processor", "3", "AuthenticAMD", "AMD EPYC-Rome Processor", "4", "AuthenticAMD", "AMD EPYC-Rome Processor", "5", "AuthenticAMD", "AMD EPYC-Rome Processor", "6", "AuthenticAMD", "AMD EPYC-Rome Processor", "7", "AuthenticAMD", "AMD EPYC-Rome Processor" ], "ansible_processor_cores": 1, "ansible_processor_count": 8, "ansible_processor_nproc": 8, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 8, "ansible_product_name": "OpenStack Nova", "ansible_product_serial": "NA", "ansible_product_uuid": "NA", "ansible_product_version": "26.2.1", "ansible_python": { "executable": "/usr/bin/python3", "has_sslcontext": true, "type": "cpython", "version": { "major": 3, "micro": 23, "minor": 9, "releaselevel": 
"final", "serial": 0 }, "version_info": [ 3, 9, 23, "final", 0 ] }, "ansible_python_version": "3.9.23", "ansible_real_group_id": 1000, "ansible_real_user_id": 1000, "ansible_selinux": { "config_mode": "enforcing", "mode": "enforcing", "policyvers": 33, "status": "enabled", "type": "targeted" }, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxwvvCYwnIDtxKxVxyDCUXhYuWEo+WsGS1jEd+Im13VpWuXa7IQrDvjmuO0jn8/KspLpldlXZAyvPIi9+nNvkk=", "ansible_ssh_host_key_ecdsa_public_keytype": "ecdsa-sha2-nistp256", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIB1/unp9+ffn2cxr1RyLKXm2uZfT+tLfIHwoS/yhV9RG", "ansible_ssh_host_key_ed25519_public_keytype": "ssh-ed25519", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABgQCtwQO/sn8zPSCivURPoL3DNUpFgI+Y/GknmWIW+/QsvlCk4sBWYiqOXubpbETP/ZuHnkt6w69huALW3iVln/6SdW9iz2mhr8+AHVAee6i3GRdpOWbUDuatQDsdRX3GWxhJ3iR4Q2CrLL9cuJIayVmHepeTrUt2AaPBwcRw7Or+VinGX/9nIUQRguvXHv3VeRUX003jI5B9xUO/6vZ99+ClMMpZPbhLqdLZnuKoLA9loqq6szVShReR3fCZNDH8FKZzjIFfFaj9uDgDfIB3iBKtQdr0HfSSF8CQ2A6o/P43FG9/w7Is3QQidH997QhMNrRNzbrNvgA8vgwi6qIkjFwYBO0O9VnlS1Fux4NG570chg5FmrtGWKGKAHxWuCm4zLuUAJWzw/gxVcPemOJlmIxbGIo/YMT0VgPQzbjFTxGehUhba1ncNNDyH8Cu7FHUbuX6pr6RWksUx+dixeBtFBjGlUg44pJZ+4I9XrHXTwLpBs3GXSUxi0gkQT182Xt8jyE=", "ansible_ssh_host_key_rsa_public_keytype": "ssh-rsa", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": [ "" ], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "OpenStack Foundation", "ansible_uptime_seconds": 1100, "ansible_user_dir": "/home/zuul", "ansible_user_gecos": "", "ansible_user_gid": 1000, "ansible_user_id": "zuul", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 1000, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_tech_guest": [ "openstack" ], "ansible_virtualization_tech_host": [ "kvm" ], "ansible_virtualization_type": "openstack", "cifmw_discovered_hash": "87904e8ef63d331e266fb0d8af41e2bcef96f11dc9de5ec1b095b79a5fbe8bb2", "cifmw_discovered_hash_algorithm": "sha256", "cifmw_discovered_image_name": "CentOS-Stream-GenericCloud-x86_64-9-latest.x86_64.qcow2", "cifmw_discovered_image_url": "https://cloud.centos.org/centos/9-stream/x86_64/images//CentOS-Stream-GenericCloud-x86_64-9-latest.x86_64.qcow2", "cifmw_install_yamls_defaults": { "ADOPTED_EXTERNAL_NETWORK": "172.21.1.0/24", "ADOPTED_INTERNALAPI_NETWORK": "172.17.1.0/24", "ADOPTED_STORAGEMGMT_NETWORK": "172.20.1.0/24", "ADOPTED_STORAGE_NETWORK": "172.18.1.0/24", "ADOPTED_TENANT_NETWORK": "172.9.1.0/24", "ANSIBLEEE": "config/samples/_v1beta1_ansibleee.yaml", "ANSIBLEEE_BRANCH": "main", "ANSIBLEEE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml", "ANSIBLEEE_IMG": "quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest", "ANSIBLEEE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml", "ANSIBLEEE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests", "ANSIBLEEE_KUTTL_NAMESPACE": "ansibleee-kuttl-tests", "ANSIBLEEE_REPO": "https://github.com/openstack-k8s-operators/openstack-ansibleee-operator", "ANSIBLEE_COMMIT_HASH": "", "BARBICAN": 
"config/samples/barbican_v1beta1_barbican.yaml", "BARBICAN_BRANCH": "main", "BARBICAN_COMMIT_HASH": "", "BARBICAN_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml", "BARBICAN_DEPL_IMG": "unused", "BARBICAN_IMG": "quay.io/openstack-k8s-operators/barbican-operator-index:latest", "BARBICAN_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml", "BARBICAN_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests", "BARBICAN_KUTTL_NAMESPACE": "barbican-kuttl-tests", "BARBICAN_REPO": "https://github.com/openstack-k8s-operators/barbican-operator.git", "BARBICAN_SERVICE_ENABLED": "true", "BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY": "sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU=", "BAREMETAL_BRANCH": "main", "BAREMETAL_COMMIT_HASH": "", "BAREMETAL_IMG": "quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest", "BAREMETAL_OS_CONTAINER_IMG": "", "BAREMETAL_OS_IMG": "", "BAREMETAL_REPO": "https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git", "BAREMETAL_TIMEOUT": "20m", "BASH_IMG": "quay.io/openstack-k8s-operators/bash:latest", "BGP_ASN": "64999", "BGP_LEAF_1": "100.65.4.1", "BGP_LEAF_2": "100.64.4.1", "BGP_OVN_ROUTING": "false", "BGP_PEER_ASN": "64999", "BGP_SOURCE_IP": "172.30.4.2", "BGP_SOURCE_IP6": "f00d:f00d:f00d:f00d:f00d:f00d:f00d:42", "BMAAS_BRIDGE_IPV4_PREFIX": "172.20.1.2/24", "BMAAS_BRIDGE_IPV6_PREFIX": "fd00:bbbb::2/64", "BMAAS_INSTANCE_DISK_SIZE": "20", "BMAAS_INSTANCE_MEMORY": "4096", "BMAAS_INSTANCE_NAME_PREFIX": "crc-bmaas", "BMAAS_INSTANCE_NET_MODEL": "virtio", "BMAAS_INSTANCE_OS_VARIANT": "centos-stream9", "BMAAS_INSTANCE_VCPUS": "2", "BMAAS_INSTANCE_VIRT_TYPE": "kvm", "BMAAS_IPV4": "true", "BMAAS_IPV6": "false", "BMAAS_LIBVIRT_USER": "sushyemu", "BMAAS_METALLB_ADDRESS_POOL": "172.20.1.64/26", "BMAAS_METALLB_POOL_NAME": "baremetal", "BMAAS_NETWORK_IPV4_PREFIX": "172.20.1.1/24", "BMAAS_NETWORK_IPV6_PREFIX": "fd00:bbbb::1/64", "BMAAS_NETWORK_NAME": "crc-bmaas", "BMAAS_NODE_COUNT": "1", "BMAAS_OCP_INSTANCE_NAME": "crc", "BMAAS_REDFISH_PASSWORD": "password", "BMAAS_REDFISH_USERNAME": "admin", "BMAAS_ROUTE_LIBVIRT_NETWORKS": "crc-bmaas,crc,default", "BMAAS_SUSHY_EMULATOR_DRIVER": "libvirt", "BMAAS_SUSHY_EMULATOR_IMAGE": "quay.io/metal3-io/sushy-tools:latest", "BMAAS_SUSHY_EMULATOR_NAMESPACE": "sushy-emulator", "BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE": "/etc/openstack/clouds.yaml", "BMAAS_SUSHY_EMULATOR_OS_CLOUD": "openstack", "BMH_NAMESPACE": "openstack", "BMO_BRANCH": "release-0.9", "BMO_COMMIT_HASH": "", "BMO_IPA_BRANCH": "stable/2024.1", "BMO_IRONIC_HOST": "192.168.122.10", "BMO_PROVISIONING_INTERFACE": "", "BMO_REPO": "https://github.com/metal3-io/baremetal-operator", "BMO_SETUP": "", "BMO_SETUP_ROUTE_REPLACE": "true", "BM_CTLPLANE_INTERFACE": "enp1s0", "BM_INSTANCE_MEMORY": "8192", "BM_INSTANCE_NAME_PREFIX": "edpm-compute-baremetal", "BM_INSTANCE_NAME_SUFFIX": "0", "BM_NETWORK_NAME": "default", "BM_NODE_COUNT": "1", "BM_ROOT_PASSWORD": "", "BM_ROOT_PASSWORD_SECRET": "", "CEILOMETER_CENTRAL_DEPL_IMG": "unused", "CEILOMETER_NOTIFICATION_DEPL_IMG": "unused", "CEPH_BRANCH": "release-1.15", "CEPH_CLIENT": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml", "CEPH_COMMON": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml", "CEPH_CR": 
"/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml", "CEPH_CRDS": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml", "CEPH_IMG": "quay.io/ceph/demo:latest-squid", "CEPH_OP": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml", "CEPH_REPO": "https://github.com/rook/rook.git", "CERTMANAGER_TIMEOUT": "300s", "CHECKOUT_FROM_OPENSTACK_REF": "true", "CINDER": "config/samples/cinder_v1beta1_cinder.yaml", "CINDERAPI_DEPL_IMG": "unused", "CINDERBKP_DEPL_IMG": "unused", "CINDERSCH_DEPL_IMG": "unused", "CINDERVOL_DEPL_IMG": "unused", "CINDER_BRANCH": "main", "CINDER_COMMIT_HASH": "", "CINDER_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml", "CINDER_IMG": "quay.io/openstack-k8s-operators/cinder-operator-index:latest", "CINDER_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml", "CINDER_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests", "CINDER_KUTTL_NAMESPACE": "cinder-kuttl-tests", "CINDER_REPO": "https://github.com/openstack-k8s-operators/cinder-operator.git", "CLEANUP_DIR_CMD": "rm -Rf", "CRC_BGP_NIC_1_MAC": "52:54:00:11:11:11", "CRC_BGP_NIC_2_MAC": "52:54:00:11:11:12", "CRC_HTTPS_PROXY": "", "CRC_HTTP_PROXY": "", "CRC_STORAGE_NAMESPACE": "crc-storage", "CRC_STORAGE_RETRIES": "3", "CRC_URL": "'https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz'", "CRC_VERSION": "latest", "DATAPLANE_ANSIBLE_SECRET": "dataplane-ansible-ssh-private-key-secret", "DATAPLANE_ANSIBLE_USER": "", "DATAPLANE_COMPUTE_IP": "192.168.122.100", "DATAPLANE_CONTAINER_PREFIX": "openstack", "DATAPLANE_CONTAINER_TAG": "current-podified", "DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG": "quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest", "DATAPLANE_DEFAULT_GW": "192.168.122.1", "DATAPLANE_EXTRA_NOVA_CONFIG_FILE": "/dev/null", "DATAPLANE_GROWVOLS_ARGS": "/=8GB /tmp=1GB /home=1GB /var=100%", "DATAPLANE_KUSTOMIZE_SCENARIO": "preprovisioned", "DATAPLANE_NETWORKER_IP": "192.168.122.200", "DATAPLANE_NETWORK_INTERFACE_NAME": "eth0", "DATAPLANE_NOVA_NFS_PATH": "", "DATAPLANE_NTP_SERVER": "pool.ntp.org", "DATAPLANE_PLAYBOOK": "osp.edpm.download_cache", "DATAPLANE_REGISTRY_URL": "quay.io/podified-antelope-centos9", "DATAPLANE_RUNNER_IMG": "", "DATAPLANE_SERVER_ROLE": "compute", "DATAPLANE_SSHD_ALLOWED_RANGES": "['192.168.122.0/24']", "DATAPLANE_TIMEOUT": "30m", "DATAPLANE_TLS_ENABLED": "true", "DATAPLANE_TOTAL_NETWORKER_NODES": "1", "DATAPLANE_TOTAL_NODES": "1", "DBSERVICE": "galera", "DESIGNATE": "config/samples/designate_v1beta1_designate.yaml", "DESIGNATE_BRANCH": "main", "DESIGNATE_COMMIT_HASH": "", "DESIGNATE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml", "DESIGNATE_IMG": "quay.io/openstack-k8s-operators/designate-operator-index:latest", "DESIGNATE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml", "DESIGNATE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests", "DESIGNATE_KUTTL_NAMESPACE": "designate-kuttl-tests", "DESIGNATE_REPO": "https://github.com/openstack-k8s-operators/designate-operator.git", "DNSDATA": 
"config/samples/network_v1beta1_dnsdata.yaml", "DNSDATA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml", "DNSMASQ": "config/samples/network_v1beta1_dnsmasq.yaml", "DNSMASQ_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml", "DNS_DEPL_IMG": "unused", "DNS_DOMAIN": "localdomain", "DOWNLOAD_TOOLS_SELECTION": "all", "EDPM_ATTACH_EXTNET": "true", "EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES": "'[]'", "EDPM_COMPUTE_ADDITIONAL_NETWORKS": "'[]'", "EDPM_COMPUTE_CELLS": "1", "EDPM_COMPUTE_CEPH_ENABLED": "true", "EDPM_COMPUTE_CEPH_NOVA": "true", "EDPM_COMPUTE_DHCP_AGENT_ENABLED": "true", "EDPM_COMPUTE_SRIOV_ENABLED": "true", "EDPM_COMPUTE_SUFFIX": "0", "EDPM_CONFIGURE_DEFAULT_ROUTE": "true", "EDPM_CONFIGURE_HUGEPAGES": "false", "EDPM_CONFIGURE_NETWORKING": "true", "EDPM_FIRSTBOOT_EXTRA": "/tmp/edpm-firstboot-extra", "EDPM_NETWORKER_SUFFIX": "0", "EDPM_TOTAL_NETWORKERS": "1", "EDPM_TOTAL_NODES": "1", "GALERA_REPLICAS": "", "GENERATE_SSH_KEYS": "true", "GIT_CLONE_OPTS": "", "GLANCE": "config/samples/glance_v1beta1_glance.yaml", "GLANCEAPI_DEPL_IMG": "unused", "GLANCE_BRANCH": "main", "GLANCE_COMMIT_HASH": "", "GLANCE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml", "GLANCE_IMG": "quay.io/openstack-k8s-operators/glance-operator-index:latest", "GLANCE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml", "GLANCE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests", "GLANCE_KUTTL_NAMESPACE": "glance-kuttl-tests", "GLANCE_REPO": "https://github.com/openstack-k8s-operators/glance-operator.git", "HEAT": "config/samples/heat_v1beta1_heat.yaml", "HEATAPI_DEPL_IMG": "unused", "HEATCFNAPI_DEPL_IMG": "unused", "HEATENGINE_DEPL_IMG": "unused", "HEAT_AUTH_ENCRYPTION_KEY": "767c3ed056cbaa3b9dfedb8c6f825bf0", "HEAT_BRANCH": "main", "HEAT_COMMIT_HASH": "", "HEAT_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml", "HEAT_IMG": "quay.io/openstack-k8s-operators/heat-operator-index:latest", "HEAT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml", "HEAT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests", "HEAT_KUTTL_NAMESPACE": "heat-kuttl-tests", "HEAT_REPO": "https://github.com/openstack-k8s-operators/heat-operator.git", "HEAT_SERVICE_ENABLED": "true", "HORIZON": "config/samples/horizon_v1beta1_horizon.yaml", "HORIZON_BRANCH": "main", "HORIZON_COMMIT_HASH": "", "HORIZON_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml", "HORIZON_DEPL_IMG": "unused", "HORIZON_IMG": "quay.io/openstack-k8s-operators/horizon-operator-index:latest", "HORIZON_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml", "HORIZON_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests", "HORIZON_KUTTL_NAMESPACE": "horizon-kuttl-tests", "HORIZON_REPO": "https://github.com/openstack-k8s-operators/horizon-operator.git", "INFRA_BRANCH": "main", "INFRA_COMMIT_HASH": "", "INFRA_IMG": "quay.io/openstack-k8s-operators/infra-operator-index:latest", "INFRA_KUTTL_CONF": 
"/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml", "INFRA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests", "INFRA_KUTTL_NAMESPACE": "infra-kuttl-tests", "INFRA_REPO": "https://github.com/openstack-k8s-operators/infra-operator.git", "INSTALL_CERT_MANAGER": "true", "INSTALL_NMSTATE": "true || false", "INSTALL_NNCP": "true || false", "INTERNALAPI_HOST_ROUTES": "", "IPV6_LAB_IPV4_NETWORK_IPADDRESS": "172.30.0.1/24", "IPV6_LAB_IPV6_NETWORK_IPADDRESS": "fd00:abcd:abcd:fc00::1/64", "IPV6_LAB_LIBVIRT_STORAGE_POOL": "default", "IPV6_LAB_MANAGE_FIREWALLD": "true", "IPV6_LAB_NAT64_HOST_IPV4": "172.30.0.2/24", "IPV6_LAB_NAT64_HOST_IPV6": "fd00:abcd:abcd:fc00::2/64", "IPV6_LAB_NAT64_INSTANCE_NAME": "nat64-router", "IPV6_LAB_NAT64_IPV6_NETWORK": "fd00:abcd:abcd:fc00::/64", "IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL": "192.168.255.0/24", "IPV6_LAB_NAT64_TAYGA_IPV4": "192.168.255.1", "IPV6_LAB_NAT64_TAYGA_IPV6": "fd00:abcd:abcd:fc00::3", "IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX": "fd00:abcd:abcd:fcff::/96", "IPV6_LAB_NAT64_UPDATE_PACKAGES": "false", "IPV6_LAB_NETWORK_NAME": "nat64", "IPV6_LAB_SNO_CLUSTER_NETWORK": "fd00:abcd:0::/48", "IPV6_LAB_SNO_HOST_IP": "fd00:abcd:abcd:fc00::11", "IPV6_LAB_SNO_HOST_PREFIX": "64", "IPV6_LAB_SNO_INSTANCE_NAME": "sno", "IPV6_LAB_SNO_MACHINE_NETWORK": "fd00:abcd:abcd:fc00::/64", "IPV6_LAB_SNO_OCP_MIRROR_URL": "https://mirror.openshift.com/pub/openshift-v4/clients/ocp", "IPV6_LAB_SNO_OCP_VERSION": "latest-4.14", "IPV6_LAB_SNO_SERVICE_NETWORK": "fd00:abcd:abcd:fc03::/112", "IPV6_LAB_SSH_PUB_KEY": "/home/zuul/.ssh/id_rsa.pub", "IPV6_LAB_WORK_DIR": "/home/zuul/.ipv6lab", "IRONIC": "config/samples/ironic_v1beta1_ironic.yaml", "IRONICAPI_DEPL_IMG": "unused", "IRONICCON_DEPL_IMG": "unused", "IRONICINS_DEPL_IMG": "unused", "IRONICNAG_DEPL_IMG": "unused", "IRONICPXE_DEPL_IMG": "unused", "IRONIC_BRANCH": "main", "IRONIC_COMMIT_HASH": "", "IRONIC_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml", "IRONIC_IMAGE_TAG": "release-24.1", "IRONIC_IMG": "quay.io/openstack-k8s-operators/ironic-operator-index:latest", "IRONIC_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml", "IRONIC_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests", "IRONIC_KUTTL_NAMESPACE": "ironic-kuttl-tests", "IRONIC_REPO": "https://github.com/openstack-k8s-operators/ironic-operator.git", "KEYSTONEAPI": "config/samples/keystone_v1beta1_keystoneapi.yaml", "KEYSTONEAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml", "KEYSTONEAPI_DEPL_IMG": "unused", "KEYSTONE_BRANCH": "main", "KEYSTONE_COMMIT_HASH": "", "KEYSTONE_FEDERATION_CLIENT_SECRET": "COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f", "KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE": "openstack", "KEYSTONE_IMG": "quay.io/openstack-k8s-operators/keystone-operator-index:latest", "KEYSTONE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml", "KEYSTONE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests", "KEYSTONE_KUTTL_NAMESPACE": "keystone-kuttl-tests", "KEYSTONE_REPO": "https://github.com/openstack-k8s-operators/keystone-operator.git", "KUBEADMIN_PWD": "12345678", "LIBVIRT_SECRET": "libvirt-secret", "LOKI_DEPLOY_MODE": 
"openshift-network", "LOKI_DEPLOY_NAMESPACE": "netobserv", "LOKI_DEPLOY_SIZE": "1x.demo", "LOKI_NAMESPACE": "openshift-operators-redhat", "LOKI_OPERATOR_GROUP": "openshift-operators-redhat-loki", "LOKI_SUBSCRIPTION": "loki-operator", "LVMS_CR": "1", "MANILA": "config/samples/manila_v1beta1_manila.yaml", "MANILAAPI_DEPL_IMG": "unused", "MANILASCH_DEPL_IMG": "unused", "MANILASHARE_DEPL_IMG": "unused", "MANILA_BRANCH": "main", "MANILA_COMMIT_HASH": "", "MANILA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml", "MANILA_IMG": "quay.io/openstack-k8s-operators/manila-operator-index:latest", "MANILA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml", "MANILA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests", "MANILA_KUTTL_NAMESPACE": "manila-kuttl-tests", "MANILA_REPO": "https://github.com/openstack-k8s-operators/manila-operator.git", "MANILA_SERVICE_ENABLED": "true", "MARIADB": "config/samples/mariadb_v1beta1_galera.yaml", "MARIADB_BRANCH": "main", "MARIADB_CHAINSAW_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml", "MARIADB_CHAINSAW_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests", "MARIADB_CHAINSAW_NAMESPACE": "mariadb-chainsaw-tests", "MARIADB_COMMIT_HASH": "", "MARIADB_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml", "MARIADB_DEPL_IMG": "unused", "MARIADB_IMG": "quay.io/openstack-k8s-operators/mariadb-operator-index:latest", "MARIADB_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml", "MARIADB_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests", "MARIADB_KUTTL_NAMESPACE": "mariadb-kuttl-tests", "MARIADB_REPO": "https://github.com/openstack-k8s-operators/mariadb-operator.git", "MEMCACHED": "config/samples/memcached_v1beta1_memcached.yaml", "MEMCACHED_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml", "MEMCACHED_DEPL_IMG": "unused", "METADATA_SHARED_SECRET": "1234567842", "METALLB_IPV6_POOL": "fd00:aaaa::80-fd00:aaaa::90", "METALLB_POOL": "192.168.122.80-192.168.122.90", "MICROSHIFT": "0", "NAMESPACE": "openstack", "NETCONFIG": "config/samples/network_v1beta1_netconfig.yaml", "NETCONFIG_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml", "NETCONFIG_DEPL_IMG": "unused", "NETOBSERV_DEPLOY_NAMESPACE": "netobserv", "NETOBSERV_NAMESPACE": "openshift-netobserv-operator", "NETOBSERV_OPERATOR_GROUP": "openshift-netobserv-operator-net", "NETOBSERV_SUBSCRIPTION": "netobserv-operator", "NETWORK_BGP": "false", "NETWORK_DESIGNATE_ADDRESS_PREFIX": "172.28.0", "NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX": "172.50.0", "NETWORK_INTERNALAPI_ADDRESS_PREFIX": "172.17.0", "NETWORK_ISOLATION": "true", "NETWORK_ISOLATION_INSTANCE_NAME": "crc", "NETWORK_ISOLATION_IPV4": "true", "NETWORK_ISOLATION_IPV4_ADDRESS": "172.16.1.1/24", "NETWORK_ISOLATION_IPV4_NAT": "true", "NETWORK_ISOLATION_IPV6": "false", "NETWORK_ISOLATION_IPV6_ADDRESS": "fd00:aaaa::1/64", "NETWORK_ISOLATION_IP_ADDRESS": "192.168.122.10", "NETWORK_ISOLATION_MAC": "52:54:00:11:11:10", "NETWORK_ISOLATION_NETWORK_NAME": "net-iso", 
"NETWORK_ISOLATION_NET_NAME": "default", "NETWORK_ISOLATION_USE_DEFAULT_NETWORK": "true", "NETWORK_MTU": "1500", "NETWORK_STORAGEMGMT_ADDRESS_PREFIX": "172.20.0", "NETWORK_STORAGE_ADDRESS_PREFIX": "172.18.0", "NETWORK_STORAGE_MACVLAN": "", "NETWORK_TENANT_ADDRESS_PREFIX": "172.19.0", "NETWORK_VLAN_START": "20", "NETWORK_VLAN_STEP": "1", "NEUTRONAPI": "config/samples/neutron_v1beta1_neutronapi.yaml", "NEUTRONAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml", "NEUTRONAPI_DEPL_IMG": "unused", "NEUTRON_BRANCH": "main", "NEUTRON_COMMIT_HASH": "", "NEUTRON_IMG": "quay.io/openstack-k8s-operators/neutron-operator-index:latest", "NEUTRON_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml", "NEUTRON_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests", "NEUTRON_KUTTL_NAMESPACE": "neutron-kuttl-tests", "NEUTRON_REPO": "https://github.com/openstack-k8s-operators/neutron-operator.git", "NFS_HOME": "/home/nfs", "NMSTATE_NAMESPACE": "openshift-nmstate", "NMSTATE_OPERATOR_GROUP": "openshift-nmstate-tn6k8", "NMSTATE_SUBSCRIPTION": "kubernetes-nmstate-operator", "NNCP_ADDITIONAL_HOST_ROUTES": "", "NNCP_BGP_1_INTERFACE": "enp7s0", "NNCP_BGP_1_IP_ADDRESS": "100.65.4.2", "NNCP_BGP_2_INTERFACE": "enp8s0", "NNCP_BGP_2_IP_ADDRESS": "100.64.4.2", "NNCP_BRIDGE": "ospbr", "NNCP_CLEANUP_TIMEOUT": "120s", "NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX": "fd00:aaaa::", "NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX": "10", "NNCP_CTLPLANE_IP_ADDRESS_PREFIX": "192.168.122", "NNCP_CTLPLANE_IP_ADDRESS_SUFFIX": "10", "NNCP_DNS_SERVER": "192.168.122.1", "NNCP_DNS_SERVER_IPV6": "fd00:aaaa::1", "NNCP_GATEWAY": "192.168.122.1", "NNCP_GATEWAY_IPV6": "fd00:aaaa::1", "NNCP_INTERFACE": "enp6s0", "NNCP_NODES": "", "NNCP_TIMEOUT": "240s", "NOVA": "config/samples/nova_v1beta1_nova_collapsed_cell.yaml", "NOVA_BRANCH": "main", "NOVA_COMMIT_HASH": "", "NOVA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml", "NOVA_IMG": "quay.io/openstack-k8s-operators/nova-operator-index:latest", "NOVA_REPO": "https://github.com/openstack-k8s-operators/nova-operator.git", "NUMBER_OF_INSTANCES": "1", "OCP_NETWORK_NAME": "crc", "OCTAVIA": "config/samples/octavia_v1beta1_octavia.yaml", "OCTAVIA_BRANCH": "main", "OCTAVIA_COMMIT_HASH": "", "OCTAVIA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml", "OCTAVIA_IMG": "quay.io/openstack-k8s-operators/octavia-operator-index:latest", "OCTAVIA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml", "OCTAVIA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests", "OCTAVIA_KUTTL_NAMESPACE": "octavia-kuttl-tests", "OCTAVIA_REPO": "https://github.com/openstack-k8s-operators/octavia-operator.git", "OKD": "false", "OPENSTACK_BRANCH": "main", "OPENSTACK_BUNDLE_IMG": "quay.io/openstack-k8s-operators/openstack-operator-bundle:latest", "OPENSTACK_COMMIT_HASH": "", "OPENSTACK_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml", "OPENSTACK_CRDS_DIR": "openstack_crds", "OPENSTACK_CTLPLANE": "config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml", "OPENSTACK_IMG": 
"quay.io/openstack-k8s-operators/openstack-operator-index:latest", "OPENSTACK_K8S_BRANCH": "main", "OPENSTACK_K8S_TAG": "latest", "OPENSTACK_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml", "OPENSTACK_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests", "OPENSTACK_KUTTL_NAMESPACE": "openstack-kuttl-tests", "OPENSTACK_NEUTRON_CUSTOM_CONF": "", "OPENSTACK_REPO": "https://github.com/openstack-k8s-operators/openstack-operator.git", "OPENSTACK_STORAGE_BUNDLE_IMG": "quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest", "OPERATOR_BASE_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator", "OPERATOR_CHANNEL": "", "OPERATOR_NAMESPACE": "openstack-operators", "OPERATOR_SOURCE": "", "OPERATOR_SOURCE_NAMESPACE": "", "OUT": "/home/zuul/ci-framework-data/artifacts/manifests", "OUTPUT_DIR": "/home/zuul/ci-framework-data/artifacts/edpm", "OVNCONTROLLER": "config/samples/ovn_v1beta1_ovncontroller.yaml", "OVNCONTROLLER_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml", "OVNCONTROLLER_NMAP": "true", "OVNDBS": "config/samples/ovn_v1beta1_ovndbcluster.yaml", "OVNDBS_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml", "OVNNORTHD": "config/samples/ovn_v1beta1_ovnnorthd.yaml", "OVNNORTHD_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml", "OVN_BRANCH": "main", "OVN_COMMIT_HASH": "", "OVN_IMG": "quay.io/openstack-k8s-operators/ovn-operator-index:latest", "OVN_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml", "OVN_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests", "OVN_KUTTL_NAMESPACE": "ovn-kuttl-tests", "OVN_REPO": "https://github.com/openstack-k8s-operators/ovn-operator.git", "PASSWORD": "12345678", "PLACEMENTAPI": "config/samples/placement_v1beta1_placementapi.yaml", "PLACEMENTAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml", "PLACEMENTAPI_DEPL_IMG": "unused", "PLACEMENT_BRANCH": "main", "PLACEMENT_COMMIT_HASH": "", "PLACEMENT_IMG": "quay.io/openstack-k8s-operators/placement-operator-index:latest", "PLACEMENT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml", "PLACEMENT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests", "PLACEMENT_KUTTL_NAMESPACE": "placement-kuttl-tests", "PLACEMENT_REPO": "https://github.com/openstack-k8s-operators/placement-operator.git", "PULL_SECRET": "/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt", "RABBITMQ": "docs/examples/default-security-context/rabbitmq.yaml", "RABBITMQ_BRANCH": "patches", "RABBITMQ_COMMIT_HASH": "", "RABBITMQ_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml", "RABBITMQ_DEPL_IMG": "unused", "RABBITMQ_IMG": "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest", "RABBITMQ_REPO": "https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git", "REDHAT_OPERATORS": "false", "REDIS": "config/samples/redis_v1beta1_redis.yaml", "REDIS_CR": 
"/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml", "REDIS_DEPL_IMG": "unused", "RH_REGISTRY_PWD": "", "RH_REGISTRY_USER": "", "SECRET": "osp-secret", "SG_CORE_DEPL_IMG": "unused", "STANDALONE_COMPUTE_DRIVER": "libvirt", "STANDALONE_EXTERNAL_NET_PREFFIX": "172.21.0", "STANDALONE_INTERNALAPI_NET_PREFIX": "172.17.0", "STANDALONE_STORAGEMGMT_NET_PREFIX": "172.20.0", "STANDALONE_STORAGE_NET_PREFIX": "172.18.0", "STANDALONE_TENANT_NET_PREFIX": "172.19.0", "STORAGEMGMT_HOST_ROUTES": "", "STORAGE_CLASS": "local-storage", "STORAGE_HOST_ROUTES": "", "SWIFT": "config/samples/swift_v1beta1_swift.yaml", "SWIFT_BRANCH": "main", "SWIFT_COMMIT_HASH": "", "SWIFT_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml", "SWIFT_IMG": "quay.io/openstack-k8s-operators/swift-operator-index:latest", "SWIFT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml", "SWIFT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests", "SWIFT_KUTTL_NAMESPACE": "swift-kuttl-tests", "SWIFT_REPO": "https://github.com/openstack-k8s-operators/swift-operator.git", "TELEMETRY": "config/samples/telemetry_v1beta1_telemetry.yaml", "TELEMETRY_BRANCH": "main", "TELEMETRY_COMMIT_HASH": "", "TELEMETRY_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml", "TELEMETRY_IMG": "quay.io/openstack-k8s-operators/telemetry-operator-index:latest", "TELEMETRY_KUTTL_BASEDIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator", "TELEMETRY_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml", "TELEMETRY_KUTTL_NAMESPACE": "telemetry-kuttl-tests", "TELEMETRY_KUTTL_RELPATH": "tests/kuttl/suites", "TELEMETRY_REPO": "https://github.com/openstack-k8s-operators/telemetry-operator.git", "TENANT_HOST_ROUTES": "", "TIMEOUT": "300s", "TLS_ENABLED": "false", "tripleo_deploy": "export REGISTRY_USER:" }, "cifmw_install_yamls_environment": { "CHECKOUT_FROM_OPENSTACK_REF": "true", "KUBECONFIG": "/home/zuul/.crc/machines/crc/kubeconfig", "OPENSTACK_K8S_BRANCH": "main", "OUT": "/home/zuul/ci-framework-data/artifacts/manifests", "OUTPUT_DIR": "/home/zuul/ci-framework-data/artifacts/edpm" }, "cifmw_openshift_api": "https://api.crc.testing:6443", "cifmw_openshift_context": "default/api-crc-testing:6443/kubeadmin", "cifmw_openshift_kubeconfig": "/home/zuul/.crc/machines/crc/kubeconfig", "cifmw_openshift_login_api": "https://api.crc.testing:6443", "cifmw_openshift_login_cert_login": false, "cifmw_openshift_login_context": "default/api-crc-testing:6443/kubeadmin", "cifmw_openshift_login_kubeconfig": "/home/zuul/.crc/machines/crc/kubeconfig", "cifmw_openshift_login_password": 123456789, "cifmw_openshift_login_token": "sha256~_L9oDtGCa1Qw70O4LGW3MDeZ8oU60TxCvYkZJiq22Os", "cifmw_openshift_login_user": "kubeadmin", "cifmw_openshift_token": "sha256~_L9oDtGCa1Qw70O4LGW3MDeZ8oU60TxCvYkZJiq22Os", "cifmw_openshift_user": "kubeadmin", "cifmw_path": "/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "cifmw_repo_setup_commit_hash": null, "cifmw_repo_setup_distro_hash": null, "cifmw_repo_setup_dlrn_api_url": "https://trunk.rdoproject.org/api-centos9-antelope", 
"cifmw_repo_setup_dlrn_url": "https://trunk.rdoproject.org/centos9-antelope/current-podified/delorean.repo.md5", "cifmw_repo_setup_extended_hash": null, "cifmw_repo_setup_full_hash": "c4b77291aeca5591ac860bd4127cec2f", "cifmw_repo_setup_release": "antelope", "discovered_interpreter_python": "/usr/bin/python3", "gather_subset": [ "all" ], "module_setup": true }home/zuul/zuul-output/logs/ci-framework-data/artifacts/hosts0000644000175000017500000000023715073043170023430 0ustar zuulzuul127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ip-network.txt0000644000175000017500000000315715073043170025211 0ustar zuulzuuldefault via 38.102.83.1 dev eth0 proto dhcp src 38.102.83.214 metric 100 38.102.83.0/24 dev eth0 proto kernel scope link src 38.102.83.214 metric 100 169.254.169.254 via 38.102.83.126 dev eth0 proto dhcp src 38.102.83.214 metric 100 192.168.122.0/24 dev eth1 proto kernel scope link src 192.168.122.11 metric 101 0: from all lookup local 32766: from all lookup main 32767: from all lookup default [ { "ifindex": 1, "ifname": "lo", "flags": [ "LOOPBACK","UP","LOWER_UP" ], "mtu": 65536, "qdisc": "noqueue", "operstate": "UNKNOWN", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "loopback", "address": "00:00:00:00:00:00", "broadcast": "00:00:00:00:00:00" },{ "ifindex": 2, "ifname": "eth0", "flags": [ "BROADCAST","MULTICAST","UP","LOWER_UP" ], "mtu": 1500, "qdisc": "fq_codel", "operstate": "UP", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "fa:16:3e:c1:af:a3", "broadcast": "ff:ff:ff:ff:ff:ff", "altnames": [ "enp0s3","ens3" ] },{ "ifindex": 3, "ifname": "eth1", "flags": [ "BROADCAST","MULTICAST","UP","LOWER_UP" ], "mtu": 1500, "qdisc": "fq_codel", "operstate": "UP", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "fa:16:3e:a7:cc:09", "broadcast": "ff:ff:ff:ff:ff:ff", "altnames": [ "enp0s7","ens7" ] } ] home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_check_for_oc.sh0000644000175000017500000000020715073043207027721 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_check_for_oc.log) 2>&1 command -v oc home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_run_openstack_must_gather.sh0000644000175000017500000000103415073043210032563 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_run_openstack_must_gather.log) 2>&1 oc adm must-gather --image quay.io/openstack-k8s-operators/openstack-must-gather:latest --timeout 10m --host-network=False --dest-dir /home/zuul/ci-framework-data/logs/openstack-must-gather -- ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=$OPENSTACK_DATABASES SOS_EDPM=$SOS_EDPM SOS_DECOMPRESS=$SOS_DECOMPRESS gather 2>&1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_prepare_root_ssh.sh0000644000175000017500000000122315073043227030674 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_prepare_root_ssh.log) 2>&1 ssh -i ~/.ssh/id_cifw core@api.crc.testing < >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_copy_logs_from_crc.log) 2>&1 scp -v -r -i ~/.ssh/id_cifw core@api.crc.testing:/tmp/crc-logs-artifacts 
/home/zuul/ci-framework-data/logs/crc/ home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_fetch_openshift.sh0000644000175000017500000000032515073042765030476 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log) 2>&1 oc login -u kubeadmin -p 123456789 --insecure-skip-tls-verify=true api.crc.testing:6443 ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_001_login_into_openshift_internal.shhome/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_001_login_into_openshift_internal.s0000644000175000017500000000044515073042776033300 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log) 2>&1 podman login -u kubeadmin -p sha256~_L9oDtGCa1Qw70O4LGW3MDeZ8oU60TxCvYkZJiq22Os --tls-verify=false default-route-openshift-image-registry.apps-crc.testing home/zuul/zuul-output/logs/ci-framework-data/artifacts/resolv.conf0000644000175000017500000000015215073043170024522 0ustar zuulzuul# Generated by NetworkManager nameserver 192.168.122.10 nameserver 199.204.44.24 nameserver 199.204.47.54 home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/0000755000175000017500000000000015073043253024510 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/openshift-login-params.yml0000644000175000017500000000044015073042767031630 0ustar zuulzuulcifmw_openshift_api: https://api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_token: sha256~_L9oDtGCa1Qw70O4LGW3MDeZ8oU60TxCvYkZJiq22Os cifmw_openshift_user: kubeadmin home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/custom-params.yml0000644000175000017500000000114415073042761030031 0ustar zuulzuulcifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/install-yamls-params.yml0000644000175000017500000006636715073043253031327 0ustar zuulzuulcifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/tests/kuttl/tests 
ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/tests/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: 
quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/tests/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' 
EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/tests/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/tests/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/tests/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 
IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/tests/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: 'CO**********6f' KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/tests/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/tests/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 
120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/tests/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/tests/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/tests/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/tests/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: 'os**********et' SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/tests/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: tests/kuttl/suites TELEMETRY_REPO: 
https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/zuul-params.yml0000644000175000017500000004574715073042652027536 0ustar zuulzuulcifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 88dd6c905f2746688a8f680e3012c758 build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: 8637980fa8664cc3a54deb4b258e06d7 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8 executor: hostname: ze01.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: 
src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-local_build-index_deploy jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: 
untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: f6ed2f2d118884a075895bbf954ff6000e540430 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 2ff5b96b6254418d20a509188eea72ab2c77839c name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: 
src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 95aa63de3182faad63a69301d101debad3efc936 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 63860ee1375c38462801e8341a7f18335169f94c name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096 name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch 
commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 381c86678f470a5590d19274a2eb914e95b81bb7 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_log_collection: true home/zuul/zuul-output/logs/ci-framework-data/artifacts/zuul_inventory.yml0000644000175000017500000007010015073042651026203 0ustar zuulzuulall: children: zuul_unreachable: hosts: {} hosts: controller: ansible_connection: ssh ansible_host: 38.102.83.214 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: f100786d-5d49-4bb6-bacc-f81c832a6dc3 host_id: ff62aecd09b85709a233d3330c1581c31f2fa23cd3c1cbc3ffcedd62 interface_ip: 38.102.83.214 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.214 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.214 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul_log_collection: true crc: ansible_connection: ssh ansible_host: 38.102.83.180 ansible_port: 22 ansible_python_interpreter: auto ansible_user: core cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 
tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: c04f36a8-c8dc-4950-bb4c-1bbfdeb033d2 host_id: d19710e37f7b2620eb9f1bc9cfdfc06732b1f0c31221781941dd4533 interface_ip: 38.102.83.180 label: coreos-crc-extracted-2-39-0-3xl private_ipv4: 38.102.83.180 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.180 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul_log_collection: true localhost: ansible_connection: local vars: cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '123456789' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: local_build-index_deploy zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 88dd6c905f2746688a8f680e3012c758 build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: 8637980fa8664cc3a54deb4b258e06d7 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 8afa6d0752ab47a9a1557ae2b1e1bce8 executor: hostname: ze01.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/ansible/inventory.yaml log_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/logs result_data_file: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/results.json src_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work/src work_root: /var/lib/zuul/builds/88dd6c905f2746688a8f680e3012c758/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: 
infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-local_build-index_deploy jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 381c86678f470a5590d19274a2eb914e95b81bb7 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: d207d5ad1c5824d6db58c2eb5935a8b36674cbe4 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: 
ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: f6ed2f2d118884a075895bbf954ff6000e540430 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: 902ff7000e709d0d272a0fd1dee697abfe8c5d72 name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 2ff5b96b6254418d20a509188eea72ab2c77839c name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: 07f6a4f6ba7b0865b97d5c8d7e4396ab0259a62b name: openstack-k8s-operators/ci-framework required: true short_name: 
ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 95aa63de3182faad63a69301d101debad3efc936 name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 63860ee1375c38462801e8341a7f18335169f94c name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: cd83effcf3e2ad1f42b8b0a7f7e4cf815d4264b8 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 748dff8508cbb49e00426d46a4487b9f4c0b0096 name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 27a23c998d26677edb7c828172bb22fb6dd6bc71 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master 
checkout_description: zuul branch commit: 3f62739c27168ebe05c65ba9b26a90fe6a6268df name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 381c86678f470a5590d19274a2eb914e95b81bb7 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_log_collection: true home/zuul/zuul-output/logs/selinux-listing.log0000644000175000017500000040424015073043304020732 0ustar zuulzuul/home/zuul/ci-framework-data: total 8 drwxr-xr-x. 10 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:24 artifacts drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:24 logs drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 24 Oct 13 00:20 tmp drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Oct 13 00:21 volumes /home/zuul/ci-framework-data/artifacts: total 680 drwxrwxrwx. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Oct 13 00:24 ansible_facts.2025-10-13_00-24 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19996 Oct 13 00:23 ansible-facts.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 518464 Oct 13 00:23 ansible-vars.yml drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 33 Oct 13 00:23 ci-env -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 135 Oct 13 00:23 ci_script_000_check_for_oc.sh -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 239 Oct 13 00:23 ci_script_000_copy_logs_from_crc.sh -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 213 Oct 13 00:21 ci_script_000_fetch_openshift.sh -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 659 Oct 13 00:23 ci_script_000_prepare_root_ssh.sh -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 540 Oct 13 00:23 ci_script_000_run_openstack_must_gather.sh -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 293 Oct 13 00:21 ci_script_001_login_into_openshift_internal.sh -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 159 Oct 13 00:23 hosts -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 77488 Oct 13 00:23 installed-packages.yml -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1647 Oct 13 00:23 ip-network.txt drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:21 manifests drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 70 Oct 13 00:23 NetworkManager drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 120 Oct 13 00:24 parameters drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:20 repositories -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 106 Oct 13 00:23 resolv.conf drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Oct 13 00:21 roles drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:23 yum_repos -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 28736 Oct 13 00:19 zuul_inventory.yml /home/zuul/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:24 ansible_facts_cache /home/zuul/ci-framework-data/artifacts/ansible_facts.2025-10-13_00-24/ansible_facts_cache: total 60 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 57541 Oct 13 00:24 localhost /home/zuul/ci-framework-data/artifacts/ci-env: total 4 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1226 Oct 13 00:23 networking-info.yml /home/zuul/ci-framework-data/artifacts/manifests: total 0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 16 Oct 13 00:21 openstack /home/zuul/ci-framework-data/artifacts/manifests/openstack: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Oct 13 00:21 cr /home/zuul/ci-framework-data/artifacts/manifests/openstack/cr: total 0 /home/zuul/ci-framework-data/artifacts/NetworkManager: total 8 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 331 Oct 13 00:23 ci-private-network.nmconnection -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 178 Oct 13 00:23 ens3.nmconnection /home/zuul/ci-framework-data/artifacts/parameters: total 56 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 612 Oct 13 00:21 custom-params.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27895 Oct 13 00:24 install-yamls-params.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 288 Oct 13 00:21 openshift-login-params.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19431 Oct 13 00:19 zuul-params.yml /home/zuul/ci-framework-data/artifacts/repositories: total 32 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1651 Oct 13 00:20 delorean-antelope-testing.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5867 Oct 13 00:20 delorean.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Oct 13 00:20 delorean.repo.md5 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 206 Oct 13 00:20 repo-setup-centos-appstream.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 196 Oct 13 00:20 repo-setup-centos-baseos.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 226 Oct 13 00:20 repo-setup-centos-highavailability.repo -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 201 Oct 13 00:20 repo-setup-centos-powertools.repo /home/zuul/ci-framework-data/artifacts/roles: total 0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:21 install_yamls_makes /home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes: total 20 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 16384 Oct 13 00:24 tasks /home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks: total 1256 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 790 Oct 13 00:21 make_all.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_ansibleee_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Oct 13 00:21 make_ansibleee_kuttl_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_ansibleee_kuttl_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_ansibleee_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_ansibleee_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_ansibleee_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_ansibleee.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Oct 13 00:21 make_attach_default_interface_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Oct 13 00:21 make_attach_default_interface.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_barbican_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Oct 13 00:21 make_barbican_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_barbican_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_barbican_deploy_validate.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_barbican_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_barbican_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_barbican_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_barbican_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Oct 13 00:21 make_barbican.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_baremetal_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_baremetal_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_baremetal.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1219 Oct 13 00:21 make_bmaas_baremetal_net_nad_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1099 Oct 13 00:21 make_bmaas_baremetal_net_nad.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Oct 13 00:21 make_bmaas_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Oct 13 00:21 make_bmaas_crc_attach_network_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Oct 13 00:21 make_bmaas_crc_attach_network.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1264 Oct 13 00:21 make_bmaas_crc_baremetal_bridge_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1144 Oct 13 00:21 make_bmaas_crc_baremetal_bridge.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Oct 13 00:21 make_bmaas_generate_nodes_yaml.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Oct 13 00:21 make_bmaas_metallb_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Oct 13 00:21 make_bmaas_metallb.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Oct 13 00:21 make_bmaas_network_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Oct 13 00:21 make_bmaas_network.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1444 Oct 13 00:21 make_bmaas_route_crc_and_crc_bmaas_networks_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1324 Oct 13 00:21 make_bmaas_route_crc_and_crc_bmaas_networks.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1174 Oct 13 00:21 make_bmaas_sushy_emulator_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Oct 13 00:21 make_bmaas_sushy_emulator_wait.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Oct 13 00:21 make_bmaas_sushy_emulator.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Oct 13 00:21 make_bmaas_virtual_bms_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Oct 13 00:21 make_bmaas_virtual_bms.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 829 Oct 13 00:21 make_bmaas.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_ceph_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_ceph_help.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_ceph.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_certmanager_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_certmanager.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Oct 13 00:21 make_cifmw_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Oct 13 00:21 make_cifmw_prepare.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_cinder_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_cinder_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_cinder_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_cinder_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_cinder_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_cinder_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_cinder_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Oct 13 00:21 make_cinder.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1294 Oct 13 00:21 make_crc_attach_default_interface_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1174 Oct 13 00:21 make_crc_attach_default_interface.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_crc_bmo_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_crc_bmo_setup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 919 Oct 13 00:21 make_crc_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 889 Oct 13 00:21 make_crc_scrub.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1225 Oct 13 00:21 make_crc_storage_cleanup_with_retries.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_crc_storage_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_crc_storage_release.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_crc_storage_with_retries.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_crc_storage.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 799 Oct 13 00:21 make_crc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_designate_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_designate_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_designate_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_designate_deploy.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_designate_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_designate_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_designate_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_designate.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_dns_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_dns_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Oct 13 00:21 make_dns_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Oct 13 00:21 make_download_tools.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1039 Oct 13 00:21 make_edpm_ansible_runner.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1084 Oct 13 00:21 make_edpm_baremetal_compute.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Oct 13 00:21 make_edpm_compute_bootc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Oct 13 00:21 make_edpm_compute_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Oct 13 00:21 make_edpm_compute_repos.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Oct 13 00:21 make_edpm_computes_bgp.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 934 Oct 13 00:21 make_edpm_compute.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Oct 13 00:21 make_edpm_deploy_baremetal_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_edpm_deploy_baremetal.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_edpm_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1120 Oct 13 00:21 make_edpm_deploy_generate_keys.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Oct 13 00:21 make_edpm_deploy_instance.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1180 Oct 13 00:21 make_edpm_deploy_networker_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Oct 13 00:21 make_edpm_deploy_networker_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_edpm_deploy_networker.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_edpm_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_edpm_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1084 Oct 13 00:21 make_edpm_networker_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Oct 13 00:21 make_edpm_networker.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_edpm_nova_discover_hosts.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1210 Oct 13 00:21 make_edpm_patch_ansible_runner_image.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_edpm_register_dns.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Oct 13 00:21 make_edpm_wait_deploy_baremetal.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_edpm_wait_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_glance_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_glance_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_glance_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_glance_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_glance_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_glance_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_glance_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Oct 13 00:21 make_glance.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_heat_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_heat_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_heat_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_heat_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_heat_kuttl_crc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_heat_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Oct 13 00:21 make_heat_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_heat_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_heat.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 814 Oct 13 00:21 make_help.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_horizon_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Oct 13 00:21 make_horizon_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_horizon_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_horizon_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_horizon_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_horizon_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_horizon_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_horizon.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_infra_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_infra_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_infra_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Oct 13 00:21 make_infra_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Oct 13 00:21 make_infra.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_input_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Oct 13 00:21 make_input.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 994 Oct 13 00:21 make_ipv6_lab_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1189 Oct 13 00:21 make_ipv6_lab_nat64_router_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Oct 13 00:21 make_ipv6_lab_nat64_router.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Oct 13 00:21 make_ipv6_lab_network_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 994 Oct 13 00:21 make_ipv6_lab_network.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Oct 13 00:21 make_ipv6_lab_sno_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 934 Oct 13 00:21 make_ipv6_lab_sno.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 874 Oct 13 00:21 make_ipv6_lab.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_ironic_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_ironic_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_ironic_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_ironic_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_ironic_kuttl_crc.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_ironic_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_ironic_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_ironic_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Oct 13 00:21 make_ironic.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_keystone_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Oct 13 00:21 make_keystone_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_keystone_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_keystone_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_keystone_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_keystone_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_keystone_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Oct 13 00:21 make_keystone.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_kuttl_common_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_kuttl_common_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_kuttl_db_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_kuttl_db_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_loki_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_loki_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_loki_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_loki.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_lvms.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_manila_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_manila_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_manila_deploy_prep.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_manila_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_manila_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_manila_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_manila_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Oct 13 00:21 make_manila.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_mariadb_chainsaw_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_mariadb_chainsaw.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_mariadb_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Oct 13 00:21 make_mariadb_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_mariadb_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_mariadb_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_mariadb_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_mariadb_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_mariadb.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_memcached_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_memcached_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_memcached_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_metallb_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Oct 13 00:21 make_metallb_config_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_metallb_config.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_metallb.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_namespace_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_namespace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_netattach_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_netattach.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_netconfig_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_netconfig_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_netconfig_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_netobserv_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_netobserv_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_netobserv_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_netobserv.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Oct 13 00:21 make_network_isolation_bridge_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Oct 13 00:21 make_network_isolation_bridge.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_neutron_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Oct 13 00:21 make_neutron_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_neutron_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_neutron_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_neutron_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_neutron_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_neutron_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_neutron.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 919 Oct 13 00:21 make_nfs_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 799 Oct 13 00:21 make_nfs.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_nmstate.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_nncp_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_nncp.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_nova_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_nova_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_nova_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_nova_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_nova_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_nova.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_octavia_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Oct 13 00:21 make_octavia_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_octavia_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_octavia_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_octavia_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_octavia_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_octavia_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Oct 13 00:21 make_octavia.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_openstack_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Oct 13 00:21 make_openstack_crds_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_openstack_crds.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_openstack_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_openstack_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_openstack_deploy.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_openstack_init.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_openstack_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_openstack_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Oct 13 00:21 make_openstack_patch_version.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_openstack_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_openstack_repo.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_openstack_update_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_openstack_wait_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_openstack_wait.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_openstack.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_operator_namespace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_ovn_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Oct 13 00:21 make_ovn_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_ovn_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Oct 13 00:21 make_ovn_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_ovn_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_ovn_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Oct 13 00:21 make_ovn_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 790 Oct 13 00:21 make_ovn.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_placement_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_placement_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_placement_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_placement_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_placement_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_placement_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_placement_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_placement.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_rabbitmq_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Oct 13 00:21 make_rabbitmq_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_rabbitmq_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_rabbitmq_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_rabbitmq_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Oct 13 00:21 make_rabbitmq.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_redis_deploy_cleanup.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_redis_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_redis_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_rook_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_rook_crc_disk.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_rook_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_rook_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_rook_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_rook.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Oct 13 00:21 make_set_slower_etcd_profile.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Oct 13 00:21 make_standalone_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Oct 13 00:21 make_standalone_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Oct 13 00:21 make_standalone_revert.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1039 Oct 13 00:21 make_standalone_snapshot.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 979 Oct 13 00:21 make_standalone_sync.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 904 Oct 13 00:21 make_standalone.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_swift_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_swift_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_swift_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Oct 13 00:21 make_swift_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_swift_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Oct 13 00:21 make_swift_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Oct 13 00:21 make_swift_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Oct 13 00:21 make_swift.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Oct 13 00:21 make_telemetry_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Oct 13 00:21 make_telemetry_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Oct 13 00:21 make_telemetry_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Oct 13 00:21 make_telemetry_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Oct 13 00:21 make_telemetry_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_telemetry_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Oct 13 00:21 make_telemetry_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Oct 13 00:21 make_telemetry.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Oct 13 00:21 make_tripleo_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Oct 13 00:21 make_update_services.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Oct 13 00:21 make_update_system.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Oct 13 00:21 make_validate_marketplace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Oct 13 00:21 make_wait.yml /home/zuul/ci-framework-data/artifacts/yum_repos: total 32 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1651 Oct 13 00:23 delorean-antelope-testing.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 5867 Oct 13 00:23 delorean.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 33 Oct 13 00:23 delorean.repo.md5 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 206 Oct 13 00:23 repo-setup-centos-appstream.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 196 Oct 13 00:23 repo-setup-centos-baseos.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 226 Oct 13 00:23 repo-setup-centos-highavailability.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 201 Oct 13 00:23 repo-setup-centos-powertools.repo /home/zuul/ci-framework-data/logs: total 320 drwxrwxr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 25 Oct 13 00:24 2025-10-13_00-23 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 131212 Oct 13 00:21 ansible.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 18 Oct 13 00:23 ci_script_000_check_for_oc.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14825 Oct 13 00:23 ci_script_000_copy_logs_from_crc.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 234 Oct 13 00:21 ci_script_000_fetch_openshift.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 151793 Oct 13 00:23 ci_script_000_prepare_root_ssh.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4487 Oct 13 00:23 ci_script_000_run_openstack_must_gather.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17 Oct 13 00:21 ci_script_001_login_into_openshift_internal.log drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 crc drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 72 Oct 13 00:23 openstack-must-gather /home/zuul/ci-framework-data/logs/2025-10-13_00-23: total 132 -rw-rw-rw-. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 131212 Oct 13 00:21 ansible.log /home/zuul/ci-framework-data/logs/crc: total 0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 18 Oct 13 00:23 crc-logs-artifacts /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts: total 16 drwxr-xr-x. 83 zuul zuul unconfined_u:object_r:user_home_t:s0 12288 Oct 13 00:23 pods /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods: total 16 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 108 Oct 13 00:23 hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787 drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 105 Oct 13 00:23 openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 42 Oct 13 00:23 openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Oct 13 00:23 openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Oct 13 00:23 openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e drwxr-xr-x. 
4 zuul zuul unconfined_u:object_r:user_home_t:s0 64 Oct 13 00:23 openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:23 openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Oct 13 00:23 openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Oct 13 00:23 openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 21 Oct 13 00:23 openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Oct 13 00:23 openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 39 Oct 13 00:23 openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 51 Oct 13 00:23 openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 40 Oct 13 00:23 openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 31 Oct 13 00:23 openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 49 Oct 13 00:23 openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb drwxr-xr-x. 9 zuul zuul unconfined_u:object_r:user_home_t:s0 140 Oct 13 00:23 openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 27 Oct 13 00:23 openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 26 Oct 13 00:23 openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 22 Oct 13 00:23 openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 21 Oct 13 00:23 openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Oct 13 00:23 openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9 drwxr-xr-x. 
4 zuul zuul unconfined_u:object_r:user_home_t:s0 53 Oct 13 00:23 openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Oct 13 00:23 openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90 drwxr-xr-x. 8 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:23 openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Oct 13 00:23 openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 164 Oct 13 00:23 openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 46 Oct 13 00:23 openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Oct 13 00:23 openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Oct 13 00:23 openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Oct 13 00:23 openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Oct 13 00:23 openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Oct 13 00:23 openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 130 Oct 13 00:23 openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 47 Oct 13 00:23 openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905 drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 22 Oct 13 00:23 openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 52 Oct 13 00:23 openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 48 Oct 13 00:23 openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 57 Oct 13 00:23 openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 47 Oct 13 00:23 openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 62 Oct 13 00:23 openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Oct 13 00:23 openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Oct 13 00:23 openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Oct 13 00:23 openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03 drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Oct 13 00:23 openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 34 Oct 13 00:23 openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Oct 13 00:23 openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Oct 13 00:23 openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1 drwxr-xr-x. 9 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:23 openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 64 Oct 13 00:23 openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 25 Oct 13 00:23 openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 59 Oct 13 00:23 openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Oct 13 00:23 openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 44 Oct 13 00:23 openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79 drwxr-xr-x. 
4 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Oct 13 00:23 openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Oct 13 00:23 openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 26 Oct 13 00:23 openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 27 Oct 13 00:23 openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 59 Oct 13 00:23 openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Oct 13 00:23 openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3 drwxr-xr-x. 11 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Oct 13 00:23 openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Oct 13 00:23 openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Oct 13 00:23 openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Oct 13 00:23 openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501 /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 csi-provisioner drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 hostpath-provisioner drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 liveness-probe drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 node-driver-registrar /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner: total 228 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 180933 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 47479 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner: total 80 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 65035 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15196 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 396 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 396 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1533 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1533 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 fix-audit-permissions drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 openshift-apiserver drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 openshift-apiserver-check-endpoints /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver: total 200 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 93408 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 110417 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints: total 52 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23827 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 28297 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 openshift-apiserver-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator: total 356 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 84973 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 214174 Oct 13 00:23 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 61149 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 oauth-openshift /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift: total 52 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23257 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25796 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 authentication-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator: total 936 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 217484 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 491271 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 242413 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 machine-approver-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7732 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7599 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10792 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7598 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5648 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 83 Oct 13 00:23 2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 cluster-samples-operator drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 cluster-samples-operator-watch -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 f7684764a172c67c488de0b6708e5069c830520a62b9c52cde81ff86958ef2e5.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator: total 260 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 30933 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 112225 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 114883 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4037 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 664 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 cluster-version-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator: total 12984 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11627525 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1664102 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 openshift-api drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 openshift-config-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator: total 96 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33371 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31684 Oct 13 00:23 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26064 Oct 13 00:23 3.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 console /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1042 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1042 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 download-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server: total 68 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 75 Oct 13 00:23 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 51603 Oct 13 00:23 6.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11526 Oct 13 00:23 7.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 conversion-webhook-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 909 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 571 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 console-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator: total 1004 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 227835 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 769174 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27699 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 controller-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager: total 276 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 28909 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 246832 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 openshift-controller-manager-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator: total 364 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 172802 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 38969 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 153781 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 dns drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns: total 608 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 614882 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2300 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 dns-node-resolver /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 96 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 dns-operator drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 13994 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11285 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcd drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcdctl drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcd-ensure-env-vars drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcd-metrics drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcd-readyz drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcd-resources-copy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 setup /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd: total 8584 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8727966 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33464 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23802 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20593 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20589 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17964 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz: total 12 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 518 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 725 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 240 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup: total 12 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 etcd-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator: total 568 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 100273 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 296671 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 178107 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 cluster-image-registry-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator: total 96 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46647 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23357 Oct 13 00:23 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 22931 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 image-pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29338560-zvlxb_3c48edc1-77de-4eaf-a099-5af630747311/image-pruner: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1437 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 registry /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-2mwg6_fe9b4942-29e7-4ef1-85c7-1a2153128dc7/registry: total 32 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 28851 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 node-ca /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca: total 40 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31683 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7296 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 serve-healthcheck-canary /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2922 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 602 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 ingress-operator drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator: total 160 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 48554 Oct 13 00:23 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 29309 Oct 13 00:23 3.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 51524 Oct 13 00:23 4.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25004 Oct 13 00:23 5.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Oct 13 00:23 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 router /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router: total 148 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 52224 Oct 13 00:23 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 37860 Oct 13 00:23 3.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 53031 Oct 13 00:23 4.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3574 Oct 13 00:23 5.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer: total 60 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59795 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_b0a4ec02-9b6b-400a-9633-c11280799f07/installer: total 60 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 60637 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer: total 60 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59909 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-apiserver drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-apiserver-cert-regeneration-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-apiserver-cert-syncer drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-apiserver-check-endpoints drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-apiserver-insecure-readyz drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 setup /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver: total 300 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 307194 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller: total 20 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 17631 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1648 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16930 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 116 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 265 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-apiserver-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator: total 820 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 222883 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 475401 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 132871 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19721 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer: total 44 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43448 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer: total 44 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43448 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 cluster-policy-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 kube-controller-manager drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-controller-manager-cert-syncer drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-controller-manager-recovery-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller: total 412 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 132818 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2076 Oct 13 00:23 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 281558 Oct 13 00:23 6.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager: total 1524 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 737559 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 58713 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 60344 Oct 13 00:23 3.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 693531 Oct 13 00:23 4.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6726 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6407 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12063 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller: total 40 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11354 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6142 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16578 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-controller-manager-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator: total 596 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 193188 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 324520 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 83755 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2031 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1976 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1917 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 pruner /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1973 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27628 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 installer /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27628 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-scheduler drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-scheduler-cert-syncer drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-scheduler-recovery-controller drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 wait-for-host-port /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler: total 264 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59236 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 130989 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 75290 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5142 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5370 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9930 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller: total 24 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6351 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5934 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7823 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port: total 12 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-scheduler-operator-container /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container: total 232 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 78087 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 122585 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 32762 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 migrator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2038 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1875 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-storage-version-migrator-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator: total 108 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39921 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 34528 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 29228 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 control-plane-machine-set-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator: total 52 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9654 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 22065 Oct 13 00:23 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14032 Oct 13 00:23 3.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 machine-api-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7732 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7599 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator: total 44 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12595 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12999 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10641 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 kube-rbac-proxy-crio drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 setup /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5045 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1509 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2192 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup: total 12 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 machine-config-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller: total 188 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 136260 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 49782 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 machine-config-daemon /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon: total 136 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15339 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 79965 Oct 13 00:23 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20217 Oct 13 00:23 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 18729 Oct 13 00:23 6.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 machine-config-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Oct 13 00:23 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator: total 64 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16333 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46226 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 machine-config-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46063 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17236 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-cms8q_c8f142c0-dc2a-4213-882f-919da8583b03/registry-server: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-gjctm_49cd5dc0-c0e0-4199-93cd-8637bea2739a/registry-server: total 4 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 marketplace-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-29pzg_c3d30d24-1dab-4362-a72b-dd6762f1f84c/marketplace-operator: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8134 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6025 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-crk87_a783d910-85f5-4f52-8831-6bae329a70fa/registry-server: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-hkptr_d3fa047a-b670-4067-b07b-06d9a1d3dbb1/registry-server: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 bond-cni-plugin drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 cni-plugins drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 egress-router-binary-copy drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-multus-additional-cni-plugins drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 routeoverride-cni drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 whereabouts-cni drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 whereabouts-cni-bincopy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 392 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 392 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 404 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 404 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 414 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 414 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 80 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 80 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 408 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 408 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 multus-admission-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Oct 13 00:23 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1386 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1276 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Oct 13 00:23 kube-multus /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus: total 552 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 890 Oct 13 00:23 4.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 391605 Oct 13 00:23 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 160066 Oct 13 00:23 7.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3764 Oct 13 00:23 8.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 network-metrics-daemon /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 40893 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26870 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 check-endpoints /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9955 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4976 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 network-check-target-container /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61 Oct 13 00:23 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 61 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 approver drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 webhook /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver: total 40 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12256 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14943 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11687 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook: total 748 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 760212 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3086 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 iptables-alerter /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 381 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 120 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 network-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator: total 1276 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411501 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 574827 Oct 13 00:23 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 311627 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 fix-audit-permissions drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 oauth-apiserver /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver: total 172 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 127371 Oct 13 00:23 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 44076 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 catalog-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator: total 1912 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1508074 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 443327 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 collect-profiles /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251935-d7x6j_51936587-a4af-470d-ad92-8ab9062cbc72/collect-profiles: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 736 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 collect-profiles /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 736 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 collect-profiles /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29338575-4qbqw_a4d63ce4-3ff9-447e-b5cf-9443eb4e53c7/collect-profiles: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 739 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 olm-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39824 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25182 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 packageserver /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver: total 132 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 91140 Oct 13 00:23 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 37249 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 package-server-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1187 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager: total 24 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14404 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6805 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 ovnkube-cluster-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1316 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1183 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager: total 212 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 157417 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 56701 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kubecfg-setup drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-rbac-proxy-node drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 kube-rbac-proxy-ovn-metrics drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 nbdb drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 northd drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 ovn-acl-logging drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 ovn-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 ovnkube-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Oct 13 00:23 sbdb /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kubecfg-setup: total 0 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-node: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4680 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/kube-rbac-proxy-ovn-metrics: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4640 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/nbdb: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2415 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/northd: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4672 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-acl-logging: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5955 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovn-controller: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8190 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/ovnkube-controller: total 1912 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1956828 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-wzh74_0cb86f13-8292-42b4-bcd2-a2399612868c/sbdb: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2347 Oct 13 00:23 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 route-controller-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager: total 56 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25403 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26381 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Oct 13 00:23 service-ca-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator: total 120 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 58729 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26036 Oct 13 00:23 1.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 28696 Oct 13 00:23 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Oct 13 00:23 service-ca-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller: total 96 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61484 Oct 13 00:23 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31608 Oct 13 00:23 1.log /home/zuul/ci-framework-data/logs/openstack-must-gather: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3336 Oct 13 00:23 event-filter.html -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4314 Oct 13 00:23 must-gather.logs -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 110 Oct 13 00:23 timestamp /home/zuul/ci-framework-data/tmp: total 0 /home/zuul/ci-framework-data/volumes: total 0 home/zuul/zuul-output/logs/README.html0000644000175000017500000000306615073043305016716 0ustar zuulzuul README for CIFMW Logs

Logs of interest

Generated content of interest

home/zuul/zuul-output/logs/installed-pkgs.log0000644000175000017500000004737715073043306020535 0ustar zuulzuulaardvark-dns-1.16.0-1.el9.x86_64 abattis-cantarell-fonts-0.301-4.el9.noarch acl-2.3.1-4.el9.x86_64 adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch alternatives-1.24-2.el9.x86_64 annobin-12.98-1.el9.x86_64 ansible-core-2.14.18-1.el9.x86_64 attr-2.5.1-3.el9.x86_64 audit-3.1.5-7.el9.x86_64 audit-libs-3.1.5-7.el9.x86_64 authselect-1.2.6-3.el9.x86_64 authselect-compat-1.2.6-3.el9.x86_64 authselect-libs-1.2.6-3.el9.x86_64 avahi-libs-0.8-23.el9.x86_64 basesystem-11-13.el9.noarch bash-5.1.8-9.el9.x86_64 bash-completion-2.11-5.el9.noarch binutils-2.35.2-67.el9.x86_64 binutils-gold-2.35.2-67.el9.x86_64 buildah-1.41.3-1.el9.x86_64 bzip2-1.0.8-10.el9.x86_64 bzip2-libs-1.0.8-10.el9.x86_64 ca-certificates-2024.2.69_v8.0.303-91.4.el9.noarch c-ares-1.19.1-2.el9.x86_64 centos-gpg-keys-9.0-30.el9.noarch centos-logos-90.8-3.el9.x86_64 centos-stream-release-9.0-30.el9.noarch centos-stream-repos-9.0-30.el9.noarch checkpolicy-3.6-1.el9.x86_64 chrony-4.6.1-2.el9.x86_64 cloud-init-24.4-7.el9.noarch cloud-utils-growpart-0.33-1.el9.x86_64 cmake-filesystem-3.26.5-2.el9.x86_64 cockpit-bridge-347-1.el9.noarch cockpit-system-347-1.el9.noarch cockpit-ws-347-1.el9.x86_64 cockpit-ws-selinux-347-1.el9.x86_64 conmon-2.1.13-1.el9.x86_64 containers-common-1-134.el9.x86_64 containers-common-extra-1-134.el9.x86_64 container-selinux-2.242.0-1.el9.noarch coreutils-8.32-39.el9.x86_64 coreutils-common-8.32-39.el9.x86_64 cpio-2.13-16.el9.x86_64 cpp-11.5.0-11.el9.x86_64 cracklib-2.9.6-27.el9.x86_64 cracklib-dicts-2.9.6-27.el9.x86_64 createrepo_c-0.20.1-4.el9.x86_64 createrepo_c-libs-0.20.1-4.el9.x86_64 criu-3.19-3.el9.x86_64 criu-libs-3.19-3.el9.x86_64 cronie-1.5.7-14.el9.x86_64 cronie-anacron-1.5.7-14.el9.x86_64 crontabs-1.11-26.20190603git.el9.noarch crun-1.24-1.el9.x86_64 crypto-policies-20250905-1.git377cc42.el9.noarch crypto-policies-scripts-20250905-1.git377cc42.el9.noarch cryptsetup-libs-2.8.1-2.el9.x86_64 curl-7.76.1-34.el9.x86_64 cyrus-sasl-2.1.27-21.el9.x86_64 cyrus-sasl-devel-2.1.27-21.el9.x86_64 cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 cyrus-sasl-lib-2.1.27-21.el9.x86_64 dbus-1.12.20-8.el9.x86_64 dbus-broker-28-7.el9.x86_64 dbus-common-1.12.20-8.el9.noarch dbus-libs-1.12.20-8.el9.x86_64 dbus-tools-1.12.20-8.el9.x86_64 debugedit-5.0-11.el9.x86_64 dejavu-sans-fonts-2.37-18.el9.noarch desktop-file-utils-0.26-6.el9.x86_64 device-mapper-1.02.206-2.el9.x86_64 device-mapper-libs-1.02.206-2.el9.x86_64 dhcp-client-4.4.2-19.b1.el9.x86_64 dhcp-common-4.4.2-19.b1.el9.noarch diffutils-3.7-12.el9.x86_64 dnf-4.14.0-31.el9.noarch dnf-data-4.14.0-31.el9.noarch dnf-plugins-core-4.3.0-23.el9.noarch dracut-057-102.git20250818.el9.x86_64 dracut-config-generic-057-102.git20250818.el9.x86_64 dracut-network-057-102.git20250818.el9.x86_64 dracut-squash-057-102.git20250818.el9.x86_64 dwz-0.16-1.el9.x86_64 e2fsprogs-1.46.5-8.el9.x86_64 e2fsprogs-libs-1.46.5-8.el9.x86_64 ed-1.14.2-12.el9.x86_64 efi-srpm-macros-6-4.el9.noarch elfutils-0.193-1.el9.x86_64 elfutils-debuginfod-client-0.193-1.el9.x86_64 elfutils-default-yama-scope-0.193-1.el9.noarch elfutils-libelf-0.193-1.el9.x86_64 elfutils-libs-0.193-1.el9.x86_64 emacs-filesystem-27.2-18.el9.noarch enchant-1.6.0-30.el9.x86_64 ethtool-6.15-2.el9.x86_64 expat-2.5.0-5.el9.x86_64 expect-5.45.4-16.el9.x86_64 file-5.39-16.el9.x86_64 file-libs-5.39-16.el9.x86_64 filesystem-3.16-5.el9.x86_64 findutils-4.8.0-7.el9.x86_64 fonts-filesystem-2.0.5-7.el9.1.noarch fonts-srpm-macros-2.0.5-7.el9.1.noarch 
fuse3-3.10.2-9.el9.x86_64 fuse3-libs-3.10.2-9.el9.x86_64 fuse-common-3.10.2-9.el9.x86_64 fuse-libs-2.9.9-17.el9.x86_64 fuse-overlayfs-1.15-1.el9.x86_64 gawk-5.1.0-6.el9.x86_64 gawk-all-langpacks-5.1.0-6.el9.x86_64 gcc-11.5.0-11.el9.x86_64 gcc-c++-11.5.0-11.el9.x86_64 gcc-plugin-annobin-11.5.0-11.el9.x86_64 gdb-minimal-16.3-2.el9.x86_64 gdbm-libs-1.23-1.el9.x86_64 gdisk-1.0.7-5.el9.x86_64 gdk-pixbuf2-2.42.6-6.el9.x86_64 geolite2-city-20191217-6.el9.noarch geolite2-country-20191217-6.el9.noarch gettext-0.21-8.el9.x86_64 gettext-libs-0.21-8.el9.x86_64 ghc-srpm-macros-1.5.0-6.el9.noarch git-2.47.3-1.el9.x86_64 git-core-2.47.3-1.el9.x86_64 git-core-doc-2.47.3-1.el9.noarch glib2-2.68.4-16.el9.x86_64 glibc-2.34-232.el9.x86_64 glibc-common-2.34-232.el9.x86_64 glibc-devel-2.34-232.el9.x86_64 glibc-gconv-extra-2.34-232.el9.x86_64 glibc-headers-2.34-232.el9.x86_64 glibc-langpack-en-2.34-232.el9.x86_64 glib-networking-2.68.3-3.el9.x86_64 gmp-6.2.0-13.el9.x86_64 gnupg2-2.3.3-4.el9.x86_64 gnutls-3.8.3-9.el9.x86_64 gobject-introspection-1.68.0-11.el9.x86_64 go-srpm-macros-3.8.1-1.el9.noarch gpgme-1.15.1-6.el9.x86_64 gpg-pubkey-8483c65d-5ccc5b19 grep-3.6-5.el9.x86_64 groff-base-1.22.4-10.el9.x86_64 grub2-common-2.06-115.el9.noarch grub2-pc-2.06-115.el9.x86_64 grub2-pc-modules-2.06-115.el9.noarch grub2-tools-2.06-115.el9.x86_64 grub2-tools-minimal-2.06-115.el9.x86_64 grubby-8.40-69.el9.x86_64 gsettings-desktop-schemas-40.0-7.el9.x86_64 gssproxy-0.8.4-7.el9.x86_64 gzip-1.12-1.el9.x86_64 hostname-3.23-6.el9.x86_64 hunspell-1.7.0-11.el9.x86_64 hunspell-en-GB-0.20140811.1-20.el9.noarch hunspell-en-US-0.20140811.1-20.el9.noarch hunspell-filesystem-1.7.0-11.el9.x86_64 hwdata-0.348-9.20.el9.noarch ima-evm-utils-1.6.2-2.el9.x86_64 info-6.7-15.el9.x86_64 inih-49-6.el9.x86_64 initscripts-rename-device-10.11.8-4.el9.x86_64 initscripts-service-10.11.8-4.el9.noarch ipcalc-1.0.0-5.el9.x86_64 iproute-6.14.0-2.el9.x86_64 iproute-tc-6.14.0-2.el9.x86_64 iptables-libs-1.8.10-11.el9.x86_64 iptables-nft-1.8.10-11.el9.x86_64 iptables-nft-services-1.8.10-11.el9.noarch iputils-20210202-15.el9.x86_64 irqbalance-1.9.4-4.el9.x86_64 jansson-2.14-1.el9.x86_64 jq-1.6-19.el9.x86_64 json-c-0.14-11.el9.x86_64 json-glib-1.6.6-1.el9.x86_64 kbd-2.4.0-11.el9.x86_64 kbd-legacy-2.4.0-11.el9.noarch kbd-misc-2.4.0-11.el9.noarch kernel-5.14.0-621.el9.x86_64 kernel-core-5.14.0-621.el9.x86_64 kernel-headers-5.14.0-621.el9.x86_64 kernel-modules-5.14.0-621.el9.x86_64 kernel-modules-core-5.14.0-621.el9.x86_64 kernel-srpm-macros-1.0-14.el9.noarch kernel-tools-5.14.0-621.el9.x86_64 kernel-tools-libs-5.14.0-621.el9.x86_64 kexec-tools-2.0.29-10.el9.x86_64 keyutils-1.6.3-1.el9.x86_64 keyutils-libs-1.6.3-1.el9.x86_64 kmod-28-11.el9.x86_64 kmod-libs-28-11.el9.x86_64 kpartx-0.8.7-39.el9.x86_64 krb5-libs-1.21.1-8.el9.x86_64 langpacks-core-en_GB-3.0-16.el9.noarch langpacks-core-font-en-3.0-16.el9.noarch langpacks-en_GB-3.0-16.el9.noarch less-590-6.el9.x86_64 libacl-2.3.1-4.el9.x86_64 libappstream-glib-0.7.18-5.el9.x86_64 libarchive-3.5.3-6.el9.x86_64 libassuan-2.5.5-3.el9.x86_64 libatomic-11.5.0-11.el9.x86_64 libattr-2.5.1-3.el9.x86_64 libbasicobjects-0.1.1-53.el9.x86_64 libblkid-2.37.4-21.el9.x86_64 libbpf-1.5.0-2.el9.x86_64 libbrotli-1.0.9-7.el9.x86_64 libcap-2.48-10.el9.x86_64 libcap-ng-0.8.2-7.el9.x86_64 libcbor-0.7.0-5.el9.x86_64 libcollection-0.7.0-53.el9.x86_64 libcom_err-1.46.5-8.el9.x86_64 libcomps-0.1.18-1.el9.x86_64 libcurl-7.76.1-34.el9.x86_64 libdaemon-0.14-23.el9.x86_64 libdb-5.3.28-57.el9.x86_64 libdhash-0.5.0-53.el9.x86_64 
libdnf-0.69.0-16.el9.x86_64 libeconf-0.4.1-4.el9.x86_64 libedit-3.1-38.20210216cvs.el9.x86_64 libestr-0.1.11-4.el9.x86_64 libev-4.33-6.el9.x86_64 libevent-2.1.12-8.el9.x86_64 libfastjson-0.99.9-5.el9.x86_64 libfdisk-2.37.4-21.el9.x86_64 libffi-3.4.2-8.el9.x86_64 libffi-devel-3.4.2-8.el9.x86_64 libfido2-1.13.0-2.el9.x86_64 libgcc-11.5.0-11.el9.x86_64 libgcrypt-1.10.0-11.el9.x86_64 libgomp-11.5.0-11.el9.x86_64 libgpg-error-1.42-5.el9.x86_64 libgpg-error-devel-1.42-5.el9.x86_64 libibverbs-57.0-2.el9.x86_64 libicu-67.1-10.el9.x86_64 libidn2-2.3.0-7.el9.x86_64 libini_config-1.3.1-53.el9.x86_64 libjpeg-turbo-2.0.90-7.el9.x86_64 libkcapi-1.4.0-2.el9.x86_64 libkcapi-hmaccalc-1.4.0-2.el9.x86_64 libksba-1.5.1-7.el9.x86_64 libldb-4.23.0-3.el9.x86_64 libmaxminddb-1.5.2-4.el9.x86_64 libmnl-1.0.4-16.el9.x86_64 libmodulemd-2.13.0-2.el9.x86_64 libmount-2.37.4-21.el9.x86_64 libmpc-1.2.1-4.el9.x86_64 libndp-1.9-1.el9.x86_64 libnet-1.2-7.el9.x86_64 libnetfilter_conntrack-1.0.9-1.el9.x86_64 libnfnetlink-1.0.1-23.el9.x86_64 libnfsidmap-2.5.4-39.el9.x86_64 libnftnl-1.2.6-4.el9.x86_64 libnghttp2-1.43.0-6.el9.x86_64 libnl3-3.11.0-1.el9.x86_64 libnl3-cli-3.11.0-1.el9.x86_64 libnsl2-2.0.0-1.el9.x86_64 libpath_utils-0.2.1-53.el9.x86_64 libpcap-1.10.0-4.el9.x86_64 libpipeline-1.5.3-4.el9.x86_64 libpkgconf-1.7.3-10.el9.x86_64 libpng-1.6.37-12.el9.x86_64 libproxy-0.4.15-35.el9.x86_64 libproxy-webkitgtk4-0.4.15-35.el9.x86_64 libpsl-0.21.1-5.el9.x86_64 libpwquality-1.4.4-8.el9.x86_64 libref_array-0.1.5-53.el9.x86_64 librepo-1.14.5-3.el9.x86_64 libreport-filesystem-2.15.2-6.el9.noarch libseccomp-2.5.2-2.el9.x86_64 libselinux-3.6-3.el9.x86_64 libselinux-utils-3.6-3.el9.x86_64 libsemanage-3.6-5.el9.x86_64 libsepol-3.6-3.el9.x86_64 libsigsegv-2.13-4.el9.x86_64 libslirp-4.4.0-8.el9.x86_64 libsmartcols-2.37.4-21.el9.x86_64 libsolv-0.7.24-3.el9.x86_64 libsoup-2.72.0-10.el9.x86_64 libss-1.46.5-8.el9.x86_64 libssh-0.10.4-15.el9.x86_64 libssh-config-0.10.4-15.el9.noarch libsss_certmap-2.9.7-5.el9.x86_64 libsss_idmap-2.9.7-5.el9.x86_64 libsss_nss_idmap-2.9.7-5.el9.x86_64 libsss_sudo-2.9.7-5.el9.x86_64 libstdc++-11.5.0-11.el9.x86_64 libstdc++-devel-11.5.0-11.el9.x86_64 libstemmer-0-18.585svn.el9.x86_64 libsysfs-2.1.1-11.el9.x86_64 libtalloc-2.4.3-1.el9.x86_64 libtasn1-4.16.0-9.el9.x86_64 libtdb-1.4.14-1.el9.x86_64 libteam-1.31-16.el9.x86_64 libtevent-0.17.1-1.el9.x86_64 libtirpc-1.3.3-9.el9.x86_64 libtool-ltdl-2.4.6-46.el9.x86_64 libunistring-0.9.10-15.el9.x86_64 liburing-2.5-1.el9.x86_64 libuser-0.63-17.el9.x86_64 libutempter-1.2.1-6.el9.x86_64 libuuid-2.37.4-21.el9.x86_64 libverto-0.3.2-3.el9.x86_64 libverto-libev-0.3.2-3.el9.x86_64 libvirt-libs-10.10.0-15.el9.x86_64 libwbclient-4.23.0-3.el9.x86_64 libxcrypt-4.4.18-3.el9.x86_64 libxcrypt-compat-4.4.18-3.el9.x86_64 libxcrypt-devel-4.4.18-3.el9.x86_64 libxml2-2.9.13-12.el9.x86_64 libxml2-devel-2.9.13-12.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 libxslt-devel-1.1.34-12.el9.x86_64 libyaml-0.2.5-7.el9.x86_64 libzstd-1.5.5-1.el9.x86_64 llvm-filesystem-20.1.8-3.el9.x86_64 llvm-libs-20.1.8-3.el9.x86_64 lmdb-libs-0.9.29-3.el9.x86_64 logrotate-3.18.0-12.el9.x86_64 lshw-B.02.20-2.el9.x86_64 lsscsi-0.32-6.el9.x86_64 lua-libs-5.4.4-4.el9.x86_64 lua-srpm-macros-1-6.el9.noarch lz4-libs-1.9.3-5.el9.x86_64 lzo-2.10-7.el9.x86_64 make-4.3-8.el9.x86_64 man-db-2.9.3-9.el9.x86_64 microcode_ctl-20250812-1.el9.noarch mpdecimal-2.5.1-3.el9.x86_64 mpfr-4.1.0-7.el9.x86_64 ncurses-6.2-12.20210508.el9.x86_64 ncurses-base-6.2-12.20210508.el9.noarch ncurses-c++-libs-6.2-12.20210508.el9.x86_64 
ncurses-devel-6.2-12.20210508.el9.x86_64 ncurses-libs-6.2-12.20210508.el9.x86_64 netavark-1.16.0-1.el9.x86_64 nettle-3.10.1-1.el9.x86_64 NetworkManager-1.54.1-1.el9.x86_64 NetworkManager-libnm-1.54.1-1.el9.x86_64 NetworkManager-team-1.54.1-1.el9.x86_64 NetworkManager-tui-1.54.1-1.el9.x86_64 newt-0.52.21-11.el9.x86_64 nfs-utils-2.5.4-39.el9.x86_64 nftables-1.0.9-4.el9.x86_64 npth-1.6-8.el9.x86_64 numactl-libs-2.0.19-3.el9.x86_64 ocaml-srpm-macros-6-6.el9.noarch oddjob-0.34.7-7.el9.x86_64 oddjob-mkhomedir-0.34.7-7.el9.x86_64 oniguruma-6.9.6-1.el9.6.x86_64 openblas-srpm-macros-2-11.el9.noarch openldap-2.6.8-4.el9.x86_64 openldap-devel-2.6.8-4.el9.x86_64 openssh-9.9p1-1.el9.x86_64 openssh-clients-9.9p1-1.el9.x86_64 openssh-server-9.9p1-1.el9.x86_64 openssl-3.5.1-5.el9.x86_64 openssl-devel-3.5.1-5.el9.x86_64 openssl-fips-provider-3.5.1-5.el9.x86_64 openssl-libs-3.5.1-5.el9.x86_64 os-prober-1.77-12.el9.x86_64 p11-kit-0.25.10-1.el9.x86_64 p11-kit-trust-0.25.10-1.el9.x86_64 PackageKit-1.2.6-1.el9.x86_64 PackageKit-glib-1.2.6-1.el9.x86_64 pam-1.5.1-26.el9.x86_64 parted-3.5-3.el9.x86_64 passt-0^20250512.g8ec1341-2.el9.x86_64 passt-selinux-0^20250512.g8ec1341-2.el9.noarch passwd-0.80-12.el9.x86_64 patch-2.7.6-16.el9.x86_64 pciutils-libs-3.7.0-7.el9.x86_64 pcre2-10.40-6.el9.x86_64 pcre2-syntax-10.40-6.el9.noarch pcre-8.44-4.el9.x86_64 perl-AutoLoader-5.74-483.el9.noarch perl-B-1.80-483.el9.x86_64 perl-base-2.27-483.el9.noarch perl-Carp-1.50-460.el9.noarch perl-Class-Struct-0.66-483.el9.noarch perl-constant-1.33-461.el9.noarch perl-Data-Dumper-2.174-462.el9.x86_64 perl-Digest-1.19-4.el9.noarch perl-Digest-MD5-2.58-4.el9.x86_64 perl-DynaLoader-1.47-483.el9.x86_64 perl-Encode-3.08-462.el9.x86_64 perl-Errno-1.30-483.el9.x86_64 perl-Error-0.17029-7.el9.noarch perl-Exporter-5.74-461.el9.noarch perl-Fcntl-1.13-483.el9.x86_64 perl-File-Basename-2.85-483.el9.noarch perl-File-Find-1.37-483.el9.noarch perl-FileHandle-2.03-483.el9.noarch perl-File-Path-2.18-4.el9.noarch perl-File-stat-1.09-483.el9.noarch perl-File-Temp-0.231.100-4.el9.noarch perl-Getopt-Long-2.52-4.el9.noarch perl-Getopt-Std-1.12-483.el9.noarch perl-Git-2.47.3-1.el9.noarch perl-HTTP-Tiny-0.076-462.el9.noarch perl-if-0.60.800-483.el9.noarch perl-interpreter-5.32.1-483.el9.x86_64 perl-IO-1.43-483.el9.x86_64 perl-IO-Socket-IP-0.41-5.el9.noarch perl-IO-Socket-SSL-2.073-2.el9.noarch perl-IPC-Open3-1.21-483.el9.noarch perl-lib-0.65-483.el9.x86_64 perl-libnet-3.13-4.el9.noarch perl-libs-5.32.1-483.el9.x86_64 perl-MIME-Base64-3.16-4.el9.x86_64 perl-Mozilla-CA-20200520-6.el9.noarch perl-mro-1.23-483.el9.x86_64 perl-NDBM_File-1.15-483.el9.x86_64 perl-Net-SSLeay-1.94-3.el9.x86_64 perl-overload-1.31-483.el9.noarch perl-overloading-0.02-483.el9.noarch perl-parent-0.238-460.el9.noarch perl-PathTools-3.78-461.el9.x86_64 perl-Pod-Escapes-1.07-460.el9.noarch perl-podlators-4.14-460.el9.noarch perl-Pod-Perldoc-3.28.01-461.el9.noarch perl-Pod-Simple-3.42-4.el9.noarch perl-Pod-Usage-2.01-4.el9.noarch perl-POSIX-1.94-483.el9.x86_64 perl-Scalar-List-Utils-1.56-462.el9.x86_64 perl-SelectSaver-1.02-483.el9.noarch perl-Socket-2.031-4.el9.x86_64 perl-srpm-macros-1-41.el9.noarch perl-Storable-3.21-460.el9.x86_64 perl-subs-1.03-483.el9.noarch perl-Symbol-1.08-483.el9.noarch perl-Term-ANSIColor-5.01-461.el9.noarch perl-Term-Cap-1.17-460.el9.noarch perl-TermReadKey-2.38-11.el9.x86_64 perl-Text-ParseWords-3.30-460.el9.noarch perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch perl-Time-Local-1.300-7.el9.noarch perl-URI-5.09-3.el9.noarch perl-vars-1.05-483.el9.noarch 
pigz-2.5-4.el9.x86_64 pkgconf-1.7.3-10.el9.x86_64 pkgconf-m4-1.7.3-10.el9.noarch pkgconf-pkg-config-1.7.3-10.el9.x86_64 podman-5.6.0-2.el9.x86_64 policycoreutils-3.6-3.el9.x86_64 policycoreutils-python-utils-3.6-3.el9.noarch polkit-0.117-14.el9.x86_64 polkit-libs-0.117-14.el9.x86_64 polkit-pkla-compat-0.1-21.el9.x86_64 popt-1.18-8.el9.x86_64 prefixdevname-0.1.0-8.el9.x86_64 procps-ng-3.3.17-14.el9.x86_64 protobuf-c-1.3.3-13.el9.x86_64 psmisc-23.4-3.el9.x86_64 publicsuffix-list-dafsa-20210518-3.el9.noarch pyproject-srpm-macros-1.16.2-1.el9.noarch python3.12-3.12.11-2.el9.x86_64 python3.12-libs-3.12.11-2.el9.x86_64 python3.12-pip-23.2.1-5.el9.noarch python3.12-pip-wheel-23.2.1-5.el9.noarch python3.12-setuptools-68.2.2-5.el9.noarch python3-3.9.23-2.el9.x86_64 python3-attrs-20.3.0-7.el9.noarch python3-audit-3.1.5-7.el9.x86_64 python3-babel-2.9.1-2.el9.noarch python3-cffi-1.14.5-5.el9.x86_64 python3-chardet-4.0.0-5.el9.noarch python3-configobj-5.0.6-25.el9.noarch python3-cryptography-36.0.1-5.el9.x86_64 python3-dasbus-1.7-1.el9.noarch python3-dateutil-2.8.1-7.el9.noarch python3-dbus-1.2.18-2.el9.x86_64 python3-devel-3.9.23-2.el9.x86_64 python3-distro-1.5.0-7.el9.noarch python3-dnf-4.14.0-31.el9.noarch python3-dnf-plugins-core-4.3.0-23.el9.noarch python3-enchant-3.2.0-5.el9.noarch python3-file-magic-5.39-16.el9.noarch python3-gobject-base-3.40.1-6.el9.x86_64 python3-gobject-base-noarch-3.40.1-6.el9.noarch python3-gpg-1.15.1-6.el9.x86_64 python3-hawkey-0.69.0-16.el9.x86_64 python3-idna-2.10-7.el9.1.noarch python3-jinja2-2.11.3-8.el9.noarch python3-jmespath-0.9.4-11.el9.noarch python3-jsonpatch-1.21-16.el9.noarch python3-jsonpointer-2.0-4.el9.noarch python3-jsonschema-3.2.0-13.el9.noarch python3-libcomps-0.1.18-1.el9.x86_64 python3-libdnf-0.69.0-16.el9.x86_64 python3-libs-3.9.23-2.el9.x86_64 python3-libselinux-3.6-3.el9.x86_64 python3-libsemanage-3.6-5.el9.x86_64 python3-libvirt-10.10.0-1.el9.x86_64 python3-libxml2-2.9.13-12.el9.x86_64 python3-lxml-4.6.5-3.el9.x86_64 python3-markupsafe-1.1.1-12.el9.x86_64 python3-netaddr-0.10.1-3.el9.noarch python3-netifaces-0.10.6-15.el9.x86_64 python3-oauthlib-3.1.1-5.el9.noarch python3-packaging-20.9-5.el9.noarch python3-pexpect-4.8.0-7.el9.noarch python3-pip-21.3.1-1.el9.noarch python3-pip-wheel-21.3.1-1.el9.noarch python3-ply-3.11-14.el9.noarch python3-policycoreutils-3.6-3.el9.noarch python3-prettytable-0.7.2-27.el9.noarch python3-ptyprocess-0.6.0-12.el9.noarch python3-pycparser-2.20-6.el9.noarch python3-pyparsing-2.4.7-9.el9.noarch python3-pyrsistent-0.17.3-8.el9.x86_64 python3-pyserial-3.4-12.el9.noarch python3-pysocks-1.7.1-12.el9.noarch python3-pytz-2021.1-5.el9.noarch python3-pyyaml-5.4.1-6.el9.x86_64 python3-requests-2.25.1-10.el9.noarch python3-resolvelib-0.5.4-5.el9.noarch python3-rpm-4.16.1.3-39.el9.x86_64 python3-rpm-generators-12-9.el9.noarch python3-rpm-macros-3.9-54.el9.noarch python3-setools-4.4.4-1.el9.x86_64 python3-setuptools-53.0.0-15.el9.noarch python3-setuptools-wheel-53.0.0-15.el9.noarch python3-six-1.15.0-9.el9.noarch python3-systemd-234-19.el9.x86_64 python3-urllib3-1.26.5-6.el9.noarch python-rpm-macros-3.9-54.el9.noarch python-srpm-macros-3.9-54.el9.noarch python-unversioned-command-3.9.23-2.el9.noarch qemu-guest-agent-9.1.0-29.el9.x86_64 qt5-srpm-macros-5.15.9-1.el9.noarch quota-4.09-4.el9.x86_64 quota-nls-4.09-4.el9.noarch readline-8.1-4.el9.x86_64 readline-devel-8.1-4.el9.x86_64 redhat-rpm-config-210-1.el9.noarch rootfiles-8.1-35.el9.noarch rpcbind-1.2.6-7.el9.x86_64 rpm-4.16.1.3-39.el9.x86_64 rpm-build-4.16.1.3-39.el9.x86_64 
rpm-build-libs-4.16.1.3-39.el9.x86_64 rpm-libs-4.16.1.3-39.el9.x86_64 rpmlint-1.11-19.el9.noarch rpm-plugin-audit-4.16.1.3-39.el9.x86_64 rpm-plugin-selinux-4.16.1.3-39.el9.x86_64 rpm-plugin-systemd-inhibit-4.16.1.3-39.el9.x86_64 rpm-sign-4.16.1.3-39.el9.x86_64 rpm-sign-libs-4.16.1.3-39.el9.x86_64 rsync-3.2.5-3.el9.x86_64 rsyslog-8.2506.0-2.el9.x86_64 rsyslog-logrotate-8.2506.0-2.el9.x86_64 ruby-3.0.7-165.el9.x86_64 ruby-default-gems-3.0.7-165.el9.noarch ruby-devel-3.0.7-165.el9.x86_64 rubygem-bigdecimal-3.0.0-165.el9.x86_64 rubygem-bundler-2.2.33-165.el9.noarch rubygem-io-console-0.5.7-165.el9.x86_64 rubygem-json-2.5.1-165.el9.x86_64 rubygem-psych-3.3.2-165.el9.x86_64 rubygem-rdoc-6.3.4.1-165.el9.noarch rubygems-3.2.33-165.el9.noarch ruby-libs-3.0.7-165.el9.x86_64 rust-srpm-macros-17-4.el9.noarch samba-client-libs-4.23.0-3.el9.x86_64 samba-common-4.23.0-3.el9.noarch samba-common-libs-4.23.0-3.el9.x86_64 sed-4.8-9.el9.x86_64 selinux-policy-38.1.65-1.el9.noarch selinux-policy-targeted-38.1.65-1.el9.noarch setroubleshoot-plugins-3.3.14-4.el9.noarch setroubleshoot-server-3.3.35-2.el9.x86_64 setup-2.13.7-10.el9.noarch sg3_utils-1.47-10.el9.x86_64 sg3_utils-libs-1.47-10.el9.x86_64 shadow-utils-4.9-15.el9.x86_64 shadow-utils-subid-4.9-15.el9.x86_64 shared-mime-info-2.1-5.el9.x86_64 slang-2.3.2-11.el9.x86_64 slirp4netns-1.3.3-1.el9.x86_64 snappy-1.1.8-8.el9.x86_64 sos-4.10.0-4.el9.noarch sqlite-libs-3.34.1-8.el9.x86_64 squashfs-tools-4.4-10.git1.el9.x86_64 sscg-3.0.0-10.el9.x86_64 sshpass-1.09-4.el9.x86_64 sssd-client-2.9.7-5.el9.x86_64 sssd-common-2.9.7-5.el9.x86_64 sssd-kcm-2.9.7-5.el9.x86_64 sssd-nfs-idmap-2.9.7-5.el9.x86_64 sudo-1.9.5p2-13.el9.x86_64 systemd-252-57.el9.x86_64 systemd-devel-252-57.el9.x86_64 systemd-libs-252-57.el9.x86_64 systemd-pam-252-57.el9.x86_64 systemd-rpm-macros-252-57.el9.noarch systemd-udev-252-57.el9.x86_64 tar-1.34-7.el9.x86_64 tcl-8.6.10-7.el9.x86_64 tcpdump-4.99.0-9.el9.x86_64 teamd-1.31-16.el9.x86_64 time-1.9-18.el9.x86_64 tmux-3.2a-5.el9.x86_64 tpm2-tss-3.2.3-1.el9.x86_64 traceroute-2.1.1-1.el9.x86_64 tzdata-2025b-2.el9.noarch unzip-6.0-59.el9.x86_64 userspace-rcu-0.12.1-6.el9.x86_64 util-linux-2.37.4-21.el9.x86_64 util-linux-core-2.37.4-21.el9.x86_64 vim-minimal-8.2.2637-22.el9.x86_64 webkit2gtk3-jsc-2.48.5-1.el9.x86_64 wget-1.21.1-8.el9.x86_64 which-2.21-30.el9.x86_64 xfsprogs-6.4.0-7.el9.x86_64 xz-5.2.5-8.el9.x86_64 xz-devel-5.2.5-8.el9.x86_64 xz-libs-5.2.5-8.el9.x86_64 yajl-2.1.0-25.el9.x86_64 yum-4.14.0-31.el9.noarch yum-utils-4.3.0-23.el9.noarch zip-3.0-35.el9.x86_64 zlib-1.2.11-41.el9.x86_64 zlib-devel-1.2.11-41.el9.x86_64 zstd-1.5.5-1.el9.x86_64 home/zuul/zuul-output/logs/python.log0000644000175000017500000000223515073043307017116 0ustar zuulzuulPython 3.9.23 pip 25.2 from /home/zuul/.local/lib/python3.12/site-packages/pip (python 3.12) ansible [core 2.17.8] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/zuul/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/zuul/.local/lib/python3.12/site-packages/ansible ansible collection location = /home/zuul/.ansible/collections:/usr/share/ansible/collections executable location = /home/zuul/.local/bin/ansible python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12) jinja version = 3.1.6 libyaml = True ansible-core==2.17.8 cachetools==6.2.1 certifi==2025.10.5 cffi==2.0.0 charset-normalizer==3.4.3 cryptography==46.0.2 google-auth==2.41.1 idna==3.11 Jinja2==3.1.6 
kubernetes==24.2.0 MarkupSafe==3.0.3 oauthlib==3.2.2 openshift==0.13.1 packaging==25.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.23 python-dateutil==2.9.0.post0 python-string-utils==1.0.0 PyYAML==6.0.3 requests==2.32.4 requests-oauthlib==1.3.0 resolvelib==1.0.1 rsa==4.9.1 setuptools==68.2.2 six==1.17.0 urllib3==2.5.0 websocket-client==1.9.0 home/zuul/zuul-output/logs/dmesg.log0000644000175000017500000015144715073043307016706 0ustar zuulzuul[Mon Oct 13 00:01:37 2025] Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 [Mon Oct 13 00:01:37 2025] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. [Mon Oct 13 00:01:37 2025] Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M [Mon Oct 13 00:01:37 2025] BIOS-provided physical RAM map: [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [Mon Oct 13 00:01:37 2025] BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable [Mon Oct 13 00:01:37 2025] NX (Execute Disable) protection: active [Mon Oct 13 00:01:37 2025] APIC: Static calls initialized [Mon Oct 13 00:01:37 2025] SMBIOS 2.8 present. 
[Mon Oct 13 00:01:37 2025] DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 [Mon Oct 13 00:01:37 2025] Hypervisor detected: KVM [Mon Oct 13 00:01:37 2025] kvm-clock: Using msrs 4b564d01 and 4b564d00 [Mon Oct 13 00:01:37 2025] kvm-clock: using sched offset of 4577462217 cycles [Mon Oct 13 00:01:37 2025] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns [Mon Oct 13 00:01:37 2025] tsc: Detected 2799.998 MHz processor [Mon Oct 13 00:01:37 2025] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [Mon Oct 13 00:01:37 2025] e820: remove [mem 0x000a0000-0x000fffff] usable [Mon Oct 13 00:01:37 2025] last_pfn = 0x240000 max_arch_pfn = 0x400000000 [Mon Oct 13 00:01:37 2025] MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs [Mon Oct 13 00:01:37 2025] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [Mon Oct 13 00:01:37 2025] last_pfn = 0xbffdb max_arch_pfn = 0x400000000 [Mon Oct 13 00:01:37 2025] found SMP MP-table at [mem 0x000f5ae0-0x000f5aef] [Mon Oct 13 00:01:37 2025] Using GB pages for direct mapping [Mon Oct 13 00:01:37 2025] RAMDISK: [mem 0x2d858000-0x32c23fff] [Mon Oct 13 00:01:37 2025] ACPI: Early table checksum verification disabled [Mon Oct 13 00:01:37 2025] ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) [Mon Oct 13 00:01:37 2025] ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Mon Oct 13 00:01:37 2025] ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Mon Oct 13 00:01:37 2025] ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Mon Oct 13 00:01:37 2025] ACPI: FACS 0x00000000BFFDFC40 000040 [Mon Oct 13 00:01:37 2025] ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Mon Oct 13 00:01:37 2025] ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Mon Oct 13 00:01:37 2025] ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4] [Mon Oct 13 00:01:37 2025] ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570] [Mon Oct 13 00:01:37 2025] ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f] [Mon Oct 13 00:01:37 2025] ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694] [Mon Oct 13 00:01:37 2025] ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc] [Mon Oct 13 00:01:37 2025] No NUMA configuration found [Mon Oct 13 00:01:37 2025] Faking a node at [mem 0x0000000000000000-0x000000023fffffff] [Mon Oct 13 00:01:37 2025] NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff] [Mon Oct 13 00:01:37 2025] crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB) [Mon Oct 13 00:01:37 2025] Zone ranges: [Mon Oct 13 00:01:37 2025] DMA [mem 0x0000000000001000-0x0000000000ffffff] [Mon Oct 13 00:01:37 2025] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [Mon Oct 13 00:01:37 2025] Normal [mem 0x0000000100000000-0x000000023fffffff] [Mon Oct 13 00:01:37 2025] Device empty [Mon Oct 13 00:01:37 2025] Movable zone start for each node [Mon Oct 13 00:01:37 2025] Early memory node ranges [Mon Oct 13 00:01:37 2025] node 0: [mem 0x0000000000001000-0x000000000009efff] [Mon Oct 13 00:01:37 2025] node 0: [mem 0x0000000000100000-0x00000000bffdafff] [Mon Oct 13 00:01:37 2025] node 0: [mem 0x0000000100000000-0x000000023fffffff] [Mon Oct 13 00:01:37 2025] Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff] [Mon Oct 13 00:01:37 2025] On node 0, zone DMA: 1 pages in unavailable ranges [Mon Oct 13 
00:01:37 2025] On node 0, zone DMA: 97 pages in unavailable ranges [Mon Oct 13 00:01:37 2025] On node 0, zone Normal: 37 pages in unavailable ranges [Mon Oct 13 00:01:37 2025] ACPI: PM-Timer IO Port: 0x608 [Mon Oct 13 00:01:37 2025] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [Mon Oct 13 00:01:37 2025] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [Mon Oct 13 00:01:37 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [Mon Oct 13 00:01:37 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [Mon Oct 13 00:01:37 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [Mon Oct 13 00:01:37 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [Mon Oct 13 00:01:37 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [Mon Oct 13 00:01:37 2025] ACPI: Using ACPI (MADT) for SMP configuration information [Mon Oct 13 00:01:37 2025] TSC deadline timer available [Mon Oct 13 00:01:37 2025] CPU topo: Max. logical packages: 8 [Mon Oct 13 00:01:37 2025] CPU topo: Max. logical dies: 8 [Mon Oct 13 00:01:37 2025] CPU topo: Max. dies per package: 1 [Mon Oct 13 00:01:37 2025] CPU topo: Max. threads per core: 1 [Mon Oct 13 00:01:37 2025] CPU topo: Num. cores per package: 1 [Mon Oct 13 00:01:37 2025] CPU topo: Num. threads per package: 1 [Mon Oct 13 00:01:37 2025] CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs [Mon Oct 13 00:01:37 2025] kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff] [Mon Oct 13 00:01:37 2025] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [Mon Oct 13 00:01:37 2025] [mem 0xc0000000-0xfeffbfff] available for PCI devices [Mon Oct 13 00:01:37 2025] Booting paravirtualized kernel on KVM [Mon Oct 13 00:01:37 2025] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [Mon Oct 13 00:01:37 2025] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1 [Mon Oct 13 00:01:37 2025] percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144 [Mon Oct 13 00:01:37 2025] pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152 [Mon Oct 13 00:01:37 2025] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 [Mon Oct 13 00:01:37 2025] kvm-guest: PV spinlocks disabled, no host support [Mon Oct 13 00:01:37 2025] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M [Mon Oct 13 00:01:37 2025] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space. 
[Mon Oct 13 00:01:37 2025] random: crng init done [Mon Oct 13 00:01:37 2025] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) [Mon Oct 13 00:01:37 2025] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) [Mon Oct 13 00:01:37 2025] Fallback order for Node 0: 0 [Mon Oct 13 00:01:37 2025] Built 1 zonelists, mobility grouping on. Total pages: 2064091 [Mon Oct 13 00:01:37 2025] Policy zone: Normal [Mon Oct 13 00:01:37 2025] mem auto-init: stack:off, heap alloc:off, heap free:off [Mon Oct 13 00:01:37 2025] software IO TLB: area num 8. [Mon Oct 13 00:01:37 2025] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1 [Mon Oct 13 00:01:37 2025] ftrace: allocating 49162 entries in 193 pages [Mon Oct 13 00:01:37 2025] ftrace: allocated 193 pages with 3 groups [Mon Oct 13 00:01:37 2025] Dynamic Preempt: voluntary [Mon Oct 13 00:01:37 2025] rcu: Preemptible hierarchical RCU implementation. [Mon Oct 13 00:01:37 2025] rcu: RCU event tracing is enabled. [Mon Oct 13 00:01:37 2025] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8. [Mon Oct 13 00:01:37 2025] Trampoline variant of Tasks RCU enabled. [Mon Oct 13 00:01:37 2025] Rude variant of Tasks RCU enabled. [Mon Oct 13 00:01:37 2025] Tracing variant of Tasks RCU enabled. [Mon Oct 13 00:01:37 2025] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. [Mon Oct 13 00:01:37 2025] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8 [Mon Oct 13 00:01:37 2025] RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8. [Mon Oct 13 00:01:37 2025] RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8. [Mon Oct 13 00:01:37 2025] RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8. [Mon Oct 13 00:01:37 2025] NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16 [Mon Oct 13 00:01:37 2025] rcu: srcu_init: Setting srcu_struct sizes based on contention. [Mon Oct 13 00:01:37 2025] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____) [Mon Oct 13 00:01:37 2025] Console: colour VGA+ 80x25 [Mon Oct 13 00:01:37 2025] printk: console [ttyS0] enabled [Mon Oct 13 00:01:37 2025] ACPI: Core revision 20230331 [Mon Oct 13 00:01:37 2025] APIC: Switch to symmetric I/O mode setup [Mon Oct 13 00:01:37 2025] x2apic enabled [Mon Oct 13 00:01:37 2025] APIC: Switched APIC routing to: physical x2apic [Mon Oct 13 00:01:37 2025] tsc: Marking TSC unstable due to TSCs unsynchronized [Mon Oct 13 00:01:37 2025] Calibrating delay loop (skipped) preset value.. 
5599.99 BogoMIPS (lpj=2799998) [Mon Oct 13 00:01:37 2025] x86/cpu: User Mode Instruction Prevention (UMIP) activated [Mon Oct 13 00:01:37 2025] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 [Mon Oct 13 00:01:37 2025] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 [Mon Oct 13 00:01:37 2025] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [Mon Oct 13 00:01:37 2025] Spectre V2 : Mitigation: Retpolines [Mon Oct 13 00:01:37 2025] Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT [Mon Oct 13 00:01:37 2025] Spectre V2 : Enabling Speculation Barrier for firmware calls [Mon Oct 13 00:01:37 2025] RETBleed: Mitigation: untrained return thunk [Mon Oct 13 00:01:37 2025] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [Mon Oct 13 00:01:37 2025] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [Mon Oct 13 00:01:37 2025] Speculative Return Stack Overflow: IBPB-extending microcode not applied! [Mon Oct 13 00:01:37 2025] Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. [Mon Oct 13 00:01:37 2025] x86/bugs: return thunk changed [Mon Oct 13 00:01:37 2025] Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode [Mon Oct 13 00:01:37 2025] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [Mon Oct 13 00:01:37 2025] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [Mon Oct 13 00:01:37 2025] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [Mon Oct 13 00:01:37 2025] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [Mon Oct 13 00:01:37 2025] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. [Mon Oct 13 00:01:37 2025] Freeing SMP alternatives memory: 40K [Mon Oct 13 00:01:37 2025] pid_max: default: 32768 minimum: 301 [Mon Oct 13 00:01:37 2025] LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf [Mon Oct 13 00:01:37 2025] landlock: Up and running. [Mon Oct 13 00:01:37 2025] Yama: becoming mindful. [Mon Oct 13 00:01:37 2025] SELinux: Initializing. [Mon Oct 13 00:01:37 2025] LSM support for eBPF active [Mon Oct 13 00:01:37 2025] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) [Mon Oct 13 00:01:37 2025] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) [Mon Oct 13 00:01:37 2025] smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) [Mon Oct 13 00:01:37 2025] Performance Events: Fam17h+ core perfctr, AMD PMU driver. [Mon Oct 13 00:01:37 2025] ... version: 0 [Mon Oct 13 00:01:37 2025] ... bit width: 48 [Mon Oct 13 00:01:37 2025] ... generic registers: 6 [Mon Oct 13 00:01:37 2025] ... value mask: 0000ffffffffffff [Mon Oct 13 00:01:37 2025] ... max period: 00007fffffffffff [Mon Oct 13 00:01:37 2025] ... fixed-purpose events: 0 [Mon Oct 13 00:01:37 2025] ... event mask: 000000000000003f [Mon Oct 13 00:01:37 2025] signal: max sigframe size: 1776 [Mon Oct 13 00:01:37 2025] rcu: Hierarchical SRCU implementation. [Mon Oct 13 00:01:37 2025] rcu: Max phase no-delay instances is 400. [Mon Oct 13 00:01:37 2025] smp: Bringing up secondary CPUs ... [Mon Oct 13 00:01:37 2025] smpboot: x86: Booting SMP configuration: [Mon Oct 13 00:01:37 2025] .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 [Mon Oct 13 00:01:37 2025] smp: Brought up 1 node, 8 CPUs [Mon Oct 13 00:01:37 2025] smpboot: Total of 8 processors activated (44799.96 BogoMIPS) [Mon Oct 13 00:01:37 2025] node 0 deferred pages initialised in 8ms [Mon Oct 13 00:01:37 2025] Memory: 7765928K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved) [Mon Oct 13 00:01:37 2025] devtmpfs: initialized [Mon Oct 13 00:01:37 2025] x86/mm: Memory block size: 128MB [Mon Oct 13 00:01:37 2025] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [Mon Oct 13 00:01:37 2025] futex hash table entries: 2048 (order: 5, 131072 bytes, linear) [Mon Oct 13 00:01:37 2025] pinctrl core: initialized pinctrl subsystem [Mon Oct 13 00:01:37 2025] NET: Registered PF_NETLINK/PF_ROUTE protocol family [Mon Oct 13 00:01:37 2025] DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations [Mon Oct 13 00:01:37 2025] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [Mon Oct 13 00:01:37 2025] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [Mon Oct 13 00:01:37 2025] audit: initializing netlink subsys (disabled) [Mon Oct 13 00:01:37 2025] audit: type=2000 audit(1760313697.239:1): state=initialized audit_enabled=0 res=1 [Mon Oct 13 00:01:37 2025] thermal_sys: Registered thermal governor 'fair_share' [Mon Oct 13 00:01:37 2025] thermal_sys: Registered thermal governor 'step_wise' [Mon Oct 13 00:01:37 2025] thermal_sys: Registered thermal governor 'user_space' [Mon Oct 13 00:01:37 2025] cpuidle: using governor menu [Mon Oct 13 00:01:37 2025] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [Mon Oct 13 00:01:37 2025] PCI: Using configuration type 1 for base access [Mon Oct 13 00:01:37 2025] PCI: Using configuration type 1 for extended access [Mon Oct 13 00:01:37 2025] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
[Mon Oct 13 00:01:37 2025] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages [Mon Oct 13 00:01:37 2025] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page [Mon Oct 13 00:01:37 2025] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages [Mon Oct 13 00:01:37 2025] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page [Mon Oct 13 00:01:37 2025] Demotion targets for Node 0: null [Mon Oct 13 00:01:37 2025] cryptd: max_cpu_qlen set to 1000 [Mon Oct 13 00:01:37 2025] ACPI: Added _OSI(Module Device) [Mon Oct 13 00:01:37 2025] ACPI: Added _OSI(Processor Device) [Mon Oct 13 00:01:37 2025] ACPI: Added _OSI(3.0 _SCP Extensions) [Mon Oct 13 00:01:37 2025] ACPI: Added _OSI(Processor Aggregator Device) [Mon Oct 13 00:01:37 2025] ACPI: 1 ACPI AML tables successfully acquired and loaded [Mon Oct 13 00:01:37 2025] ACPI: _OSC evaluation for CPUs failed, trying _PDC [Mon Oct 13 00:01:37 2025] ACPI: Interpreter enabled [Mon Oct 13 00:01:37 2025] ACPI: PM: (supports S0 S3 S4 S5) [Mon Oct 13 00:01:37 2025] ACPI: Using IOAPIC for interrupt routing [Mon Oct 13 00:01:37 2025] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [Mon Oct 13 00:01:37 2025] PCI: Using E820 reservations for host bridge windows [Mon Oct 13 00:01:37 2025] ACPI: Enabled 2 GPEs in block 00 to 0F [Mon Oct 13 00:01:37 2025] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) [Mon Oct 13 00:01:37 2025] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [Mon Oct 13 00:01:37 2025] acpiphp: Slot [3] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [4] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [5] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [6] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [7] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [8] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [9] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [10] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [11] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [12] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [13] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [14] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [15] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [16] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [17] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [18] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [19] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [20] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [21] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [22] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [23] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [24] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [25] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [26] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [27] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [28] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [29] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [30] registered [Mon Oct 13 00:01:37 2025] acpiphp: Slot [31] registered [Mon Oct 13 00:01:37 2025] PCI host bridge to bus 0000:00 [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [Mon Oct 13 00:01:37 2025] pci_bus 
0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: root bus resource [bus 00-ff] [Mon Oct 13 00:01:37 2025] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:01.1: BAR 4 [io 0xc140-0xc14f] [Mon Oct 13 00:01:37 2025] pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk [Mon Oct 13 00:01:37 2025] pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk [Mon Oct 13 00:01:37 2025] pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk [Mon Oct 13 00:01:37 2025] pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk [Mon Oct 13 00:01:37 2025] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:01.2: BAR 4 [io 0xc100-0xc11f] [Mon Oct 13 00:01:37 2025] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI [Mon Oct 13 00:01:37 2025] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] [Mon Oct 13 00:01:37 2025] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] [Mon Oct 13 00:01:37 2025] pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] [Mon Oct 13 00:01:37 2025] pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] [Mon Oct 13 00:01:37 2025] pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] [Mon Oct 13 00:01:37 2025] pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] [Mon Oct 13 00:01:37 2025] pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] [Mon Oct 13 00:01:37 2025] pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint [Mon Oct 13 00:01:37 2025] pci 0000:00:06.0: BAR 0 [io 0xc120-0xc13f] [Mon Oct 13 00:01:37 2025] pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] [Mon Oct 13 00:01:37 2025] ACPI: PCI: Interrupt link LNKA configured for IRQ 10 [Mon Oct 13 00:01:37 2025] ACPI: PCI: Interrupt link LNKB configured for IRQ 10 [Mon Oct 13 00:01:37 2025] ACPI: PCI: Interrupt link LNKC configured for IRQ 11 [Mon Oct 13 00:01:37 2025] ACPI: 
PCI: Interrupt link LNKD configured for IRQ 11 [Mon Oct 13 00:01:37 2025] ACPI: PCI: Interrupt link LNKS configured for IRQ 9 [Mon Oct 13 00:01:37 2025] iommu: Default domain type: Translated [Mon Oct 13 00:01:37 2025] iommu: DMA domain TLB invalidation policy: lazy mode [Mon Oct 13 00:01:37 2025] SCSI subsystem initialized [Mon Oct 13 00:01:37 2025] ACPI: bus type USB registered [Mon Oct 13 00:01:37 2025] usbcore: registered new interface driver usbfs [Mon Oct 13 00:01:37 2025] usbcore: registered new interface driver hub [Mon Oct 13 00:01:37 2025] usbcore: registered new device driver usb [Mon Oct 13 00:01:37 2025] pps_core: LinuxPPS API ver. 1 registered [Mon Oct 13 00:01:37 2025] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [Mon Oct 13 00:01:37 2025] PTP clock support registered [Mon Oct 13 00:01:37 2025] EDAC MC: Ver: 3.0.0 [Mon Oct 13 00:01:37 2025] NetLabel: Initializing [Mon Oct 13 00:01:37 2025] NetLabel: domain hash size = 128 [Mon Oct 13 00:01:37 2025] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [Mon Oct 13 00:01:37 2025] NetLabel: unlabeled traffic allowed by default [Mon Oct 13 00:01:37 2025] PCI: Using ACPI for IRQ routing [Mon Oct 13 00:01:37 2025] PCI: pci_cache_line_size set to 64 bytes [Mon Oct 13 00:01:37 2025] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] [Mon Oct 13 00:01:37 2025] e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff] [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: vgaarb: setting as boot VGA device [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: vgaarb: bridge control possible [Mon Oct 13 00:01:37 2025] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [Mon Oct 13 00:01:37 2025] vgaarb: loaded [Mon Oct 13 00:01:37 2025] clocksource: Switched to clocksource kvm-clock [Mon Oct 13 00:01:37 2025] VFS: Disk quotas dquot_6.6.0 [Mon Oct 13 00:01:37 2025] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [Mon Oct 13 00:01:37 2025] pnp: PnP ACPI init [Mon Oct 13 00:01:37 2025] pnp 00:03: [dma 2] [Mon Oct 13 00:01:37 2025] pnp: PnP ACPI: found 5 devices [Mon Oct 13 00:01:37 2025] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [Mon Oct 13 00:01:37 2025] NET: Registered PF_INET protocol family [Mon Oct 13 00:01:37 2025] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) [Mon Oct 13 00:01:37 2025] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) [Mon Oct 13 00:01:37 2025] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) [Mon Oct 13 00:01:37 2025] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) [Mon Oct 13 00:01:37 2025] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) [Mon Oct 13 00:01:37 2025] TCP: Hash tables configured (established 65536 bind 65536) [Mon Oct 13 00:01:37 2025] MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear) [Mon Oct 13 00:01:37 2025] UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) [Mon Oct 13 00:01:37 2025] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) [Mon Oct 13 00:01:37 2025] NET: Registered PF_UNIX/PF_LOCAL protocol family [Mon Oct 13 00:01:37 2025] NET: Registered PF_XDP protocol family [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] [Mon Oct 13 00:01:37 
2025] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] [Mon Oct 13 00:01:37 2025] pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window] [Mon Oct 13 00:01:37 2025] pci 0000:00:01.0: PIIX3: Enabling Passive Release [Mon Oct 13 00:01:37 2025] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [Mon Oct 13 00:01:37 2025] ACPI: \_SB_.LNKD: Enabled at IRQ 11 [Mon Oct 13 00:01:37 2025] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 88543 usecs [Mon Oct 13 00:01:37 2025] PCI: CLS 0 bytes, default 64 [Mon Oct 13 00:01:37 2025] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [Mon Oct 13 00:01:37 2025] software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB) [Mon Oct 13 00:01:37 2025] ACPI: bus type thunderbolt registered [Mon Oct 13 00:01:37 2025] Trying to unpack rootfs image as initramfs... [Mon Oct 13 00:01:37 2025] Initialise system trusted keyrings [Mon Oct 13 00:01:37 2025] Key type blacklist registered [Mon Oct 13 00:01:37 2025] workingset: timestamp_bits=36 max_order=21 bucket_order=0 [Mon Oct 13 00:01:37 2025] zbud: loaded [Mon Oct 13 00:01:37 2025] integrity: Platform Keyring initialized [Mon Oct 13 00:01:37 2025] integrity: Machine keyring initialized [Mon Oct 13 00:01:37 2025] Freeing initrd memory: 85808K [Mon Oct 13 00:01:37 2025] NET: Registered PF_ALG protocol family [Mon Oct 13 00:01:37 2025] xor: automatically using best checksumming function avx [Mon Oct 13 00:01:37 2025] Key type asymmetric registered [Mon Oct 13 00:01:37 2025] Asymmetric key parser 'x509' registered [Mon Oct 13 00:01:37 2025] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [Mon Oct 13 00:01:37 2025] io scheduler mq-deadline registered [Mon Oct 13 00:01:37 2025] io scheduler kyber registered [Mon Oct 13 00:01:37 2025] io scheduler bfq registered [Mon Oct 13 00:01:38 2025] atomic64_test: passed for x86-64 platform with CX8 and with SSE [Mon Oct 13 00:01:38 2025] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [Mon Oct 13 00:01:38 2025] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [Mon Oct 13 00:01:38 2025] ACPI: button: Power Button [PWRF] [Mon Oct 13 00:01:38 2025] ACPI: \_SB_.LNKB: Enabled at IRQ 10 [Mon Oct 13 00:01:38 2025] ACPI: \_SB_.LNKC: Enabled at IRQ 11 [Mon Oct 13 00:01:38 2025] ACPI: \_SB_.LNKA: Enabled at IRQ 10 [Mon Oct 13 00:01:38 2025] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [Mon Oct 13 00:01:38 2025] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [Mon Oct 13 00:01:38 2025] Non-volatile memory driver v1.3 [Mon Oct 13 00:01:38 2025] rdac: device handler registered [Mon Oct 13 00:01:38 2025] hp_sw: device handler registered [Mon Oct 13 00:01:38 2025] emc: device handler registered [Mon Oct 13 00:01:38 2025] alua: device handler registered [Mon Oct 13 00:01:38 2025] uhci_hcd 0000:00:01.2: UHCI Host Controller [Mon Oct 13 00:01:38 2025] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 [Mon Oct 13 00:01:38 2025] uhci_hcd 0000:00:01.2: detected 2 ports [Mon Oct 13 00:01:38 2025] uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100 [Mon Oct 13 00:01:38 2025] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [Mon Oct 13 00:01:38 2025] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [Mon Oct 13 00:01:38 2025] usb usb1: Product: UHCI Host Controller [Mon Oct 13 00:01:38 2025] usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd [Mon Oct 13 00:01:38 2025] usb usb1: SerialNumber: 
0000:00:01.2 [Mon Oct 13 00:01:38 2025] hub 1-0:1.0: USB hub found [Mon Oct 13 00:01:38 2025] hub 1-0:1.0: 2 ports detected [Mon Oct 13 00:01:38 2025] usbcore: registered new interface driver usbserial_generic [Mon Oct 13 00:01:38 2025] usbserial: USB Serial support registered for generic [Mon Oct 13 00:01:38 2025] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [Mon Oct 13 00:01:38 2025] serio: i8042 KBD port at 0x60,0x64 irq 1 [Mon Oct 13 00:01:38 2025] serio: i8042 AUX port at 0x60,0x64 irq 12 [Mon Oct 13 00:01:38 2025] mousedev: PS/2 mouse device common for all mice [Mon Oct 13 00:01:38 2025] rtc_cmos 00:04: RTC can wake from S4 [Mon Oct 13 00:01:38 2025] rtc_cmos 00:04: registered as rtc0 [Mon Oct 13 00:01:38 2025] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [Mon Oct 13 00:01:38 2025] rtc_cmos 00:04: setting system clock to 2025-10-13T00:01:38 UTC (1760313698) [Mon Oct 13 00:01:38 2025] rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram [Mon Oct 13 00:01:38 2025] amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled [Mon Oct 13 00:01:38 2025] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4 [Mon Oct 13 00:01:38 2025] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3 [Mon Oct 13 00:01:38 2025] hid: raw HID events driver (C) Jiri Kosina [Mon Oct 13 00:01:38 2025] usbcore: registered new interface driver usbhid [Mon Oct 13 00:01:38 2025] usbhid: USB HID core driver [Mon Oct 13 00:01:38 2025] drop_monitor: Initializing network drop monitor service [Mon Oct 13 00:01:38 2025] Initializing XFRM netlink socket [Mon Oct 13 00:01:38 2025] NET: Registered PF_INET6 protocol family [Mon Oct 13 00:01:38 2025] Segment Routing with IPv6 [Mon Oct 13 00:01:38 2025] NET: Registered PF_PACKET protocol family [Mon Oct 13 00:01:38 2025] mpls_gso: MPLS GSO support [Mon Oct 13 00:01:38 2025] IPI shorthand broadcast: enabled [Mon Oct 13 00:01:38 2025] AVX2 version of gcm_enc/dec engaged. [Mon Oct 13 00:01:38 2025] AES CTR mode by8 optimization enabled [Mon Oct 13 00:01:38 2025] sched_clock: Marking stable (1168006559, 147157825)->(1427580580, -112416196) [Mon Oct 13 00:01:38 2025] registered taskstats version 1 [Mon Oct 13 00:01:38 2025] Loading compiled-in X.509 certificates [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9' [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a' [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0' [Mon Oct 13 00:01:38 2025] Demotion targets for Node 0: null [Mon Oct 13 00:01:38 2025] page_owner is disabled [Mon Oct 13 00:01:38 2025] Key type .fscrypt registered [Mon Oct 13 00:01:38 2025] Key type fscrypt-provisioning registered [Mon Oct 13 00:01:38 2025] Key type big_key registered [Mon Oct 13 00:01:38 2025] Key type encrypted registered [Mon Oct 13 00:01:38 2025] ima: No TPM chip found, activating TPM-bypass! 
[Mon Oct 13 00:01:38 2025] Loading compiled-in module X.509 certificates [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9' [Mon Oct 13 00:01:38 2025] ima: Allocated hash algorithm: sha256 [Mon Oct 13 00:01:38 2025] ima: No architecture policies found [Mon Oct 13 00:01:38 2025] evm: Initialising EVM extended attributes: [Mon Oct 13 00:01:38 2025] evm: security.selinux [Mon Oct 13 00:01:38 2025] evm: security.SMACK64 (disabled) [Mon Oct 13 00:01:38 2025] evm: security.SMACK64EXEC (disabled) [Mon Oct 13 00:01:38 2025] evm: security.SMACK64TRANSMUTE (disabled) [Mon Oct 13 00:01:38 2025] evm: security.SMACK64MMAP (disabled) [Mon Oct 13 00:01:38 2025] evm: security.apparmor (disabled) [Mon Oct 13 00:01:38 2025] evm: security.ima [Mon Oct 13 00:01:38 2025] evm: security.capability [Mon Oct 13 00:01:38 2025] evm: HMAC attrs: 0x1 [Mon Oct 13 00:01:38 2025] usb 1-1: new full-speed USB device number 2 using uhci_hcd [Mon Oct 13 00:01:38 2025] Running certificate verification RSA selftest [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [Mon Oct 13 00:01:38 2025] Running certificate verification ECDSA selftest [Mon Oct 13 00:01:38 2025] Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3' [Mon Oct 13 00:01:38 2025] clk: Disabling unused clocks [Mon Oct 13 00:01:38 2025] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00 [Mon Oct 13 00:01:38 2025] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10 [Mon Oct 13 00:01:38 2025] usb 1-1: Product: QEMU USB Tablet [Mon Oct 13 00:01:38 2025] usb 1-1: Manufacturer: QEMU [Mon Oct 13 00:01:38 2025] usb 1-1: SerialNumber: 28754-0000:00:01.2-1 [Mon Oct 13 00:01:38 2025] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5 [Mon Oct 13 00:01:38 2025] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0 [Mon Oct 13 00:01:38 2025] Freeing unused decrypted memory: 2028K [Mon Oct 13 00:01:38 2025] Freeing unused kernel image (initmem) memory: 4188K [Mon Oct 13 00:01:38 2025] Write protecting the kernel read-only data: 30720k [Mon Oct 13 00:01:38 2025] Freeing unused kernel image (rodata/data gap) memory: 472K [Mon Oct 13 00:01:38 2025] x86/mm: Checked W+X mappings: passed, no W+X pages found. [Mon Oct 13 00:01:38 2025] Run /init as init process [Mon Oct 13 00:01:38 2025] with arguments: [Mon Oct 13 00:01:38 2025] /init [Mon Oct 13 00:01:38 2025] with environment: [Mon Oct 13 00:01:38 2025] HOME=/ [Mon Oct 13 00:01:38 2025] TERM=linux [Mon Oct 13 00:01:38 2025] BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 [Mon Oct 13 00:01:38 2025] systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [Mon Oct 13 00:01:38 2025] systemd[1]: Detected virtualization kvm. [Mon Oct 13 00:01:38 2025] systemd[1]: Detected architecture x86-64. [Mon Oct 13 00:01:38 2025] systemd[1]: Running in initrd. [Mon Oct 13 00:01:38 2025] systemd[1]: No hostname configured, using default hostname. 
[Mon Oct 13 00:01:38 2025] systemd[1]: Hostname set to . [Mon Oct 13 00:01:38 2025] systemd[1]: Initializing machine ID from VM UUID. [Mon Oct 13 00:01:38 2025] systemd[1]: Queued start job for default target Initrd Default Target. [Mon Oct 13 00:01:38 2025] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [Mon Oct 13 00:01:38 2025] systemd[1]: Reached target Local Encrypted Volumes. [Mon Oct 13 00:01:38 2025] systemd[1]: Reached target Initrd /usr File System. [Mon Oct 13 00:01:38 2025] systemd[1]: Reached target Local File Systems. [Mon Oct 13 00:01:38 2025] systemd[1]: Reached target Path Units. [Mon Oct 13 00:01:38 2025] systemd[1]: Reached target Slice Units. [Mon Oct 13 00:01:38 2025] systemd[1]: Reached target Swaps. [Mon Oct 13 00:01:39 2025] systemd[1]: Reached target Timer Units. [Mon Oct 13 00:01:39 2025] systemd[1]: Listening on D-Bus System Message Bus Socket. [Mon Oct 13 00:01:39 2025] systemd[1]: Listening on Journal Socket (/dev/log). [Mon Oct 13 00:01:39 2025] systemd[1]: Listening on Journal Socket. [Mon Oct 13 00:01:39 2025] systemd[1]: Listening on udev Control Socket. [Mon Oct 13 00:01:39 2025] systemd[1]: Listening on udev Kernel Socket. [Mon Oct 13 00:01:39 2025] systemd[1]: Reached target Socket Units. [Mon Oct 13 00:01:39 2025] systemd[1]: Starting Create List of Static Device Nodes... [Mon Oct 13 00:01:39 2025] systemd[1]: Starting Journal Service... [Mon Oct 13 00:01:39 2025] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [Mon Oct 13 00:01:39 2025] systemd[1]: Starting Apply Kernel Variables... [Mon Oct 13 00:01:39 2025] systemd[1]: Starting Create System Users... [Mon Oct 13 00:01:39 2025] systemd[1]: Starting Setup Virtual Console... [Mon Oct 13 00:01:39 2025] systemd[1]: Finished Create List of Static Device Nodes. [Mon Oct 13 00:01:39 2025] systemd[1]: Finished Apply Kernel Variables. [Mon Oct 13 00:01:39 2025] systemd[1]: Started Journal Service. [Mon Oct 13 00:01:39 2025] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. [Mon Oct 13 00:01:39 2025] device-mapper: uevent: version 1.0.3 [Mon Oct 13 00:01:39 2025] device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev [Mon Oct 13 00:01:39 2025] RPC: Registered named UNIX socket transport module. [Mon Oct 13 00:01:39 2025] RPC: Registered udp transport module. [Mon Oct 13 00:01:39 2025] RPC: Registered tcp transport module. [Mon Oct 13 00:01:39 2025] RPC: Registered tcp-with-tls transport module. [Mon Oct 13 00:01:39 2025] RPC: Registered tcp NFSv4.1 backchannel transport module. [Mon Oct 13 00:01:40 2025] virtio_blk virtio2: 8/0/0 default/read/poll queues [Mon Oct 13 00:01:40 2025] virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB) [Mon Oct 13 00:01:40 2025] vda: vda1 [Mon Oct 13 00:01:40 2025] libata version 3.00 loaded. 
[Mon Oct 13 00:01:40 2025] ata_piix 0000:00:01.1: version 2.13 [Mon Oct 13 00:01:40 2025] scsi host0: ata_piix [Mon Oct 13 00:01:40 2025] scsi host1: ata_piix [Mon Oct 13 00:01:40 2025] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0 [Mon Oct 13 00:01:40 2025] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0 [Mon Oct 13 00:01:40 2025] ata1: found unknown device (class 0) [Mon Oct 13 00:01:40 2025] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 [Mon Oct 13 00:01:40 2025] scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 [Mon Oct 13 00:01:40 2025] scsi 0:0:0:0: Attached scsi generic sg0 type 5 [Mon Oct 13 00:01:40 2025] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray [Mon Oct 13 00:01:40 2025] cdrom: Uniform CD-ROM driver Revision: 3.20 [Mon Oct 13 00:01:40 2025] sr 0:0:0:0: Attached scsi CD-ROM sr0 [Mon Oct 13 00:01:41 2025] SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled [Mon Oct 13 00:01:41 2025] XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3 [Mon Oct 13 00:01:41 2025] XFS (vda1): Ending clean mount [Mon Oct 13 00:01:41 2025] systemd-journald[305]: Received SIGTERM from PID 1 (systemd). [Mon Oct 13 00:01:41 2025] audit: type=1404 audit(1760313701.861:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1 [Mon Oct 13 00:01:41 2025] SELinux: policy capability network_peer_controls=1 [Mon Oct 13 00:01:41 2025] SELinux: policy capability open_perms=1 [Mon Oct 13 00:01:41 2025] SELinux: policy capability extended_socket_class=1 [Mon Oct 13 00:01:41 2025] SELinux: policy capability always_check_network=0 [Mon Oct 13 00:01:41 2025] SELinux: policy capability cgroup_seclabel=1 [Mon Oct 13 00:01:41 2025] SELinux: policy capability nnp_nosuid_transition=1 [Mon Oct 13 00:01:41 2025] SELinux: policy capability genfs_seclabel_symlinks=1 [Mon Oct 13 00:01:41 2025] audit: type=1403 audit(1760313702.009:3): auid=4294967295 ses=4294967295 lsm=selinux res=1 [Mon Oct 13 00:01:41 2025] systemd[1]: Successfully loaded SELinux policy in 152.142ms. [Mon Oct 13 00:01:41 2025] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.163ms. [Mon Oct 13 00:01:41 2025] systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [Mon Oct 13 00:01:41 2025] systemd[1]: Detected virtualization kvm. [Mon Oct 13 00:01:41 2025] systemd[1]: Detected architecture x86-64. [Mon Oct 13 00:01:41 2025] systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping. [Mon Oct 13 00:01:42 2025] systemd[1]: initrd-switch-root.service: Deactivated successfully. [Mon Oct 13 00:01:42 2025] systemd[1]: Stopped Switch Root. [Mon Oct 13 00:01:42 2025] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. [Mon Oct 13 00:01:42 2025] systemd[1]: Created slice Slice /system/getty. [Mon Oct 13 00:01:42 2025] systemd[1]: Created slice Slice /system/serial-getty. [Mon Oct 13 00:01:42 2025] systemd[1]: Created slice Slice /system/sshd-keygen. [Mon Oct 13 00:01:42 2025] systemd[1]: Created slice User and Session Slice. [Mon Oct 13 00:01:42 2025] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. 
[Mon Oct 13 00:01:42 2025] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [Mon Oct 13 00:01:42 2025] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target Local Encrypted Volumes. [Mon Oct 13 00:01:42 2025] systemd[1]: Stopped target Switch Root. [Mon Oct 13 00:01:42 2025] systemd[1]: Stopped target Initrd File Systems. [Mon Oct 13 00:01:42 2025] systemd[1]: Stopped target Initrd Root File System. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target Local Integrity Protected Volumes. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target Path Units. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target rpc_pipefs.target. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target Slice Units. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target Swaps. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target Local Verity Protected Volumes. [Mon Oct 13 00:01:42 2025] systemd[1]: Listening on RPCbind Server Activation Socket. [Mon Oct 13 00:01:42 2025] systemd[1]: Reached target RPC Port Mapper. [Mon Oct 13 00:01:42 2025] systemd[1]: Listening on Process Core Dump Socket. [Mon Oct 13 00:01:42 2025] systemd[1]: Listening on initctl Compatibility Named Pipe. [Mon Oct 13 00:01:42 2025] systemd[1]: Listening on udev Control Socket. [Mon Oct 13 00:01:42 2025] systemd[1]: Listening on udev Kernel Socket. [Mon Oct 13 00:01:42 2025] systemd[1]: Mounting Huge Pages File System... [Mon Oct 13 00:01:42 2025] systemd[1]: Mounting POSIX Message Queue File System... [Mon Oct 13 00:01:42 2025] systemd[1]: Mounting Kernel Debug File System... [Mon Oct 13 00:01:42 2025] systemd[1]: Mounting Kernel Trace File System... [Mon Oct 13 00:01:42 2025] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Create List of Static Device Nodes... [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Load Kernel Module configfs... [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Load Kernel Module drm... [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Load Kernel Module efi_pstore... [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Load Kernel Module fuse... [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... [Mon Oct 13 00:01:42 2025] systemd[1]: systemd-fsck-root.service: Deactivated successfully. [Mon Oct 13 00:01:42 2025] systemd[1]: Stopped File System Check on Root Device. [Mon Oct 13 00:01:42 2025] systemd[1]: Stopped Journal Service. [Mon Oct 13 00:01:42 2025] fuse: init (API version 7.37) [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Journal Service... [Mon Oct 13 00:01:42 2025] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Generate network units from Kernel command line... [Mon Oct 13 00:01:42 2025] systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Remount Root and Kernel File Systems... [Mon Oct 13 00:01:42 2025] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. [Mon Oct 13 00:01:42 2025] systemd[1]: Starting Apply Kernel Variables... 
[Mon Oct 13 00:01:42 2025] systemd[1]: Starting Coldplug All udev Devices... [Mon Oct 13 00:01:42 2025] systemd[1]: Mounted Huge Pages File System. [Mon Oct 13 00:01:42 2025] xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff) [Mon Oct 13 00:01:42 2025] systemd[1]: Started Journal Service. [Mon Oct 13 00:01:42 2025] ACPI: bus type drm_connector registered [Mon Oct 13 00:01:42 2025] systemd-journald[676]: Received client request to flush runtime journal. [Mon Oct 13 00:01:43 2025] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 [Mon Oct 13 00:01:43 2025] i2c i2c-0: 1/1 memory slots populated (from DMI) [Mon Oct 13 00:01:43 2025] i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD [Mon Oct 13 00:01:43 2025] input: PC Speaker as /devices/platform/pcspkr/input/input6 [Mon Oct 13 00:01:43 2025] [drm] pci: virtio-vga detected at 0000:00:02.0 [Mon Oct 13 00:01:43 2025] virtio-pci 0000:00:02.0: vgaarb: deactivate vga console [Mon Oct 13 00:01:43 2025] Console: switching to colour dummy device 80x25 [Mon Oct 13 00:01:43 2025] [drm] features: -virgl +edid -resource_blob -host_visible [Mon Oct 13 00:01:43 2025] [drm] features: -context_init [Mon Oct 13 00:01:43 2025] [drm] number of scanouts: 1 [Mon Oct 13 00:01:43 2025] [drm] number of cap sets: 0 [Mon Oct 13 00:01:43 2025] [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 [Mon Oct 13 00:01:43 2025] fbcon: virtio_gpudrmfb (fb0) is primary device [Mon Oct 13 00:01:43 2025] Console: switching to colour frame buffer device 128x48 [Mon Oct 13 00:01:43 2025] virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device [Mon Oct 13 00:01:43 2025] Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled [Mon Oct 13 00:01:43 2025] Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled [Mon Oct 13 00:01:43 2025] kvm_amd: TSC scaling supported [Mon Oct 13 00:01:43 2025] kvm_amd: Nested Virtualization enabled [Mon Oct 13 00:01:43 2025] kvm_amd: Nested Paging enabled [Mon Oct 13 00:01:43 2025] kvm_amd: LBR virtualization supported [Mon Oct 13 00:01:44 2025] ISO 9660 Extensions: Microsoft Joliet Level 3 [Mon Oct 13 00:01:44 2025] ISO 9660 Extensions: RRIP_1991A [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: BAR 0 [io 0x0000-0x003f] [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff] [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref] [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref] [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned [Mon Oct 13 00:10:04 2025] pci 0000:00:07.0: BAR 0 [io 0x1000-0x103f]: assigned [Mon Oct 13 00:10:04 2025] virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) [Mon Oct 13 00:16:34 2025] systemd-rc-local-generator[5227]: /etc/rc.d/rc.local is not marked executable, skipping. [Mon Oct 13 00:17:03 2025] SELinux: Converting 365 SID table entries... 
[Mon Oct 13 00:17:03 2025] SELinux: policy capability network_peer_controls=1 [Mon Oct 13 00:17:03 2025] SELinux: policy capability open_perms=1 [Mon Oct 13 00:17:03 2025] SELinux: policy capability extended_socket_class=1 [Mon Oct 13 00:17:03 2025] SELinux: policy capability always_check_network=0 [Mon Oct 13 00:17:03 2025] SELinux: policy capability cgroup_seclabel=1 [Mon Oct 13 00:17:03 2025] SELinux: policy capability nnp_nosuid_transition=1 [Mon Oct 13 00:17:03 2025] SELinux: policy capability genfs_seclabel_symlinks=1 [Mon Oct 13 00:17:12 2025] SELinux: Converting 365 SID table entries... [Mon Oct 13 00:17:12 2025] SELinux: policy capability network_peer_controls=1 [Mon Oct 13 00:17:12 2025] SELinux: policy capability open_perms=1 [Mon Oct 13 00:17:12 2025] SELinux: policy capability extended_socket_class=1 [Mon Oct 13 00:17:12 2025] SELinux: policy capability always_check_network=0 [Mon Oct 13 00:17:12 2025] SELinux: policy capability cgroup_seclabel=1 [Mon Oct 13 00:17:12 2025] SELinux: policy capability nnp_nosuid_transition=1 [Mon Oct 13 00:17:12 2025] SELinux: policy capability genfs_seclabel_symlinks=1 [Mon Oct 13 00:17:20 2025] SELinux: Converting 365 SID table entries... [Mon Oct 13 00:17:21 2025] SELinux: policy capability network_peer_controls=1 [Mon Oct 13 00:17:21 2025] SELinux: policy capability open_perms=1 [Mon Oct 13 00:17:21 2025] SELinux: policy capability extended_socket_class=1 [Mon Oct 13 00:17:21 2025] SELinux: policy capability always_check_network=0 [Mon Oct 13 00:17:21 2025] SELinux: policy capability cgroup_seclabel=1 [Mon Oct 13 00:17:21 2025] SELinux: policy capability nnp_nosuid_transition=1 [Mon Oct 13 00:17:21 2025] SELinux: policy capability genfs_seclabel_symlinks=1 [Mon Oct 13 00:17:32 2025] SELinux: Converting 368 SID table entries... [Mon Oct 13 00:17:32 2025] SELinux: policy capability network_peer_controls=1 [Mon Oct 13 00:17:32 2025] SELinux: policy capability open_perms=1 [Mon Oct 13 00:17:32 2025] SELinux: policy capability extended_socket_class=1 [Mon Oct 13 00:17:32 2025] SELinux: policy capability always_check_network=0 [Mon Oct 13 00:17:32 2025] SELinux: policy capability cgroup_seclabel=1 [Mon Oct 13 00:17:32 2025] SELinux: policy capability nnp_nosuid_transition=1 [Mon Oct 13 00:17:32 2025] SELinux: policy capability genfs_seclabel_symlinks=1 [Mon Oct 13 00:17:55 2025] systemd-rc-local-generator[6284]: /etc/rc.d/rc.local is not marked executable, skipping. [Mon Oct 13 00:17:58 2025] evm: overlay not supported home/zuul/zuul-output/logs/selinux-denials.log0000644000000000000000000000000015073043310020575 0ustar rootroothome/zuul/zuul-output/logs/system-config/0000755000175000017500000000000015073043311017652 5ustar zuulzuulhome/zuul/zuul-output/logs/system-config/libvirt/0000755000175000017500000000000015073043312021326 5ustar zuulzuulhome/zuul/zuul-output/logs/system-config/libvirt/libvirt-admin.conf0000644000175000000000000000070215073043312024704 0ustar zuulroot# # This can be used to setup URI aliases for frequently # used connection URIs. Aliases may contain only the # characters a-Z, 0-9, _, -. 
# # Following the '=' may be any valid libvirt admin connection # URI, including arbitrary parameters #uri_aliases = [ # "admin=libvirtd:///system", #] # This specifies the default location the client tries to connect to if no other # URI is provided by the application #uri_default = "libvirtd:///system" home/zuul/zuul-output/logs/system-config/libvirt/libvirt.conf0000644000175000000000000000104315073043312023615 0ustar zuulroot# # This can be used to setup URI aliases for frequently # used connection URIs. Aliases may contain only the # characters a-Z, 0-9, _, -. # # Following the '=' may be any valid libvirt connection # URI, including arbitrary parameters #uri_aliases = [ # "hail=qemu+ssh://root@hail.cloud.example.com/system", # "sleet=qemu+ssh://root@sleet.cloud.example.com/system", #] # # These can be used in cases when no URI is supplied by the application # (@uri_default also prevents probing of the hypervisor driver). # #uri_default = "qemu:///system" home/zuul/zuul-output/logs/registries.conf0000644000000000000000000000744715073043312020045 0ustar rootroot# For more information on this configuration file, see containers-registries.conf(5). # # NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES # We recommend always using fully qualified image names including the registry # server (full dns name), namespace, image name, and tag # (e.g., registry.redhat.io/ubi8/ubi:latest). Pulling by digest (i.e., # quay.io/repository/name@digest) further eliminates the ambiguity of tags. # When using short names, there is always an inherent risk that the image being # pulled could be spoofed. For example, a user wants to pull an image named # `foobar` from a registry and expects it to come from myregistry.com. If # myregistry.com is not first in the search list, an attacker could place a # different `foobar` image at a registry earlier in the search list. The user # would accidentally pull and run the attacker's image and code rather than the # intended content. We recommend only adding registries which are completely # trusted (i.e., registries which don't allow unknown or anonymous users to # create accounts with arbitrary names). This will prevent an image from being # spoofed, squatted or otherwise made insecure. If it is necessary to use one # of these registries, it should be added at the end of the list. # # # An array of host[:port] registries to try when pulling an unqualified image, in order. unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"] # [[registry]] # # The "prefix" field is used to choose the relevant [[registry]] TOML table; # # (only) the TOML table with the longest match for the input image name # # (taking into account namespace/repo/tag/digest separators) is used. # # # # The prefix can also be of the form: *.example.com for wildcard subdomain # # matching. # # # # If the prefix field is missing, it defaults to be the same as the "location" field. # prefix = "example.com/foo" # # # If true, unencrypted HTTP as well as TLS connections with untrusted # # certificates are allowed. # insecure = false # # # If true, pulling images with matching names is forbidden. # blocked = false # # # The physical location of the "prefix"-rooted namespace. # # # # By default, this is equal to "prefix" (in which case "prefix" can be omitted # # and the [[registry]] TOML table can only specify "location"). 
# # # # Example: Given # # prefix = "example.com/foo" # # location = "internal-registry-for-example.net/bar" # # requests for the image example.com/foo/myimage:latest will actually work with the # # internal-registry-for-example.net/bar/myimage:latest image. # # # The location can be empty iff prefix is in a # # wildcarded format: "*.example.com". In this case, the input reference will # # be used as-is without any rewrite. # location = internal-registry-for-example.com/bar" # # # (Possibly-partial) mirrors for the "prefix"-rooted namespace. # # # # The mirrors are attempted in the specified order; the first one that can be # # contacted and contains the image will be used (and if none of the mirrors contains the image, # # the primary location specified by the "registry.location" field, or using the unmodified # # user-specified reference, is tried last). # # # # Each TOML table in the "mirror" array can contain the following fields, with the same semantics # # as if specified in the [[registry]] TOML table directly: # # - location # # - insecure # [[registry.mirror]] # location = "example-mirror-0.local/mirror-for-foo" # [[registry.mirror]] # location = "example-mirror-1.local/mirrors/foo" # insecure = true # # Given the above, a pull of example.com/foo/image:latest will try: # # 1. example-mirror-0.local/mirror-for-foo/image:latest # # 2. example-mirror-1.local/mirrors/foo/image:latest # # 3. internal-registry-for-example.net/bar/image:latest # # in order, and use the first one that exists. short-name-mode = "enforcing" home/zuul/zuul-output/logs/registries.conf.d/0000755000175000000000000000000015073043312020361 5ustar zuulroothome/zuul/zuul-output/logs/registries.conf.d/000-shortnames.conf0000644000175000000000000001735515073043312023723 0ustar zuulroot[aliases] # almalinux "almalinux" = "docker.io/library/almalinux" "almalinux-minimal" = "docker.io/library/almalinux-minimal" # Amazon Linux "amazonlinux" = "public.ecr.aws/amazonlinux/amazonlinux" # Arch Linux "archlinux" = "docker.io/library/archlinux" # centos "centos" = "quay.io/centos/centos" # containers "skopeo" = "quay.io/skopeo/stable" "buildah" = "quay.io/buildah/stable" "podman" = "quay.io/podman/stable" "hello" = "quay.io/podman/hello" "hello-world" = "quay.io/podman/hello" # docker "alpine" = "docker.io/library/alpine" "docker" = "docker.io/library/docker" "registry" = "docker.io/library/registry" "swarm" = "docker.io/library/swarm" # Fedora "fedora-bootc" = "registry.fedoraproject.org/fedora-bootc" "fedora-minimal" = "registry.fedoraproject.org/fedora-minimal" "fedora" = "registry.fedoraproject.org/fedora" # Gentoo "gentoo" = "docker.io/gentoo/stage3" # openSUSE "opensuse/tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed" "opensuse/tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf" "opensuse/tumbleweed-microdnf" = "registry.opensuse.org/opensuse/tumbleweed-microdnf" "opensuse/leap" = "registry.opensuse.org/opensuse/leap" "opensuse/busybox" = "registry.opensuse.org/opensuse/busybox" "tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed" "tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf" "tumbleweed-microdnf" = "registry.opensuse.org/opensuse/tumbleweed-microdnf" "leap" = "registry.opensuse.org/opensuse/leap" "leap-dnf" = "registry.opensuse.org/opensuse/leap-dnf" "leap-microdnf" = "registry.opensuse.org/opensuse/leap-microdnf" "tw-busybox" = "registry.opensuse.org/opensuse/busybox" # OTel (Open Telemetry) - opentelemetry.io "otel/autoinstrumentation-go" = 
"docker.io/otel/autoinstrumentation-go" "otel/autoinstrumentation-nodejs" = "docker.io/otel/autoinstrumentation-nodejs" "otel/autoinstrumentation-python" = "docker.io/otel/autoinstrumentation-python" "otel/autoinstrumentation-java" = "docker.io/otel/autoinstrumentation-java" "otel/autoinstrumentation-dotnet" = "docker.io/otel/autoinstrumentation-dotnet" "otel/opentelemetry-collector" = "docker.io/otel/opentelemetry-collector" "otel/opentelemetry-collector-contrib" = "docker.io/otel/opentelemetry-collector-contrib" "otel/opentelemetry-collector-contrib-dev" = "docker.io/otel/opentelemetry-collector-contrib-dev" "otel/opentelemetry-collector-k8s" = "docker.io/otel/opentelemetry-collector-k8s" "otel/opentelemetry-operator" = "docker.io/otel/opentelemetry-operator" "otel/opentelemetry-operator-bundle" = "docker.io/otel/opentelemetry-operator-bundle" "otel/operator-opamp-bridge" = "docker.io/otel/operator-opamp-bridge" "otel/semconvgen" = "docker.io/otel/semconvgen" "otel/weaver" = "docker.io/otel/weaver" # SUSE "suse/sle15" = "registry.suse.com/suse/sle15" "suse/sles12sp5" = "registry.suse.com/suse/sles12sp5" "suse/sles12sp4" = "registry.suse.com/suse/sles12sp4" "suse/sles12sp3" = "registry.suse.com/suse/sles12sp3" "sle15" = "registry.suse.com/suse/sle15" "sles12sp5" = "registry.suse.com/suse/sles12sp5" "sles12sp4" = "registry.suse.com/suse/sles12sp4" "sles12sp3" = "registry.suse.com/suse/sles12sp3" "bci-base" = "registry.suse.com/bci/bci-base" "bci/bci-base" = "registry.suse.com/bci/bci-base" "bci-micro" = "registry.suse.com/bci/bci-micro" "bci/bci-micro" = "registry.suse.com/bci/bci-micro" "bci-minimal" = "registry.suse.com/bci/bci-minimal" "bci/bci-minimal" = "registry.suse.com/bci/bci-minimal" "bci-busybox" = "registry.suse.com/bci/bci-busybox" "bci/bci-busybox" = "registry.suse.com/bci/bci-busybox" # Red Hat Enterprise Linux "rhel" = "registry.access.redhat.com/rhel" "rhel6" = "registry.access.redhat.com/rhel6" "rhel7" = "registry.access.redhat.com/rhel7" "rhel7.9" = "registry.access.redhat.com/rhel7.9" "rhel-atomic" = "registry.access.redhat.com/rhel-atomic" "rhel9-bootc" = "registry.redhat.io/rhel9/rhel-bootc" "rhel-minimal" = "registry.access.redhat.com/rhel-minimal" "rhel-init" = "registry.access.redhat.com/rhel-init" "rhel7-atomic" = "registry.access.redhat.com/rhel7-atomic" "rhel7-minimal" = "registry.access.redhat.com/rhel7-minimal" "rhel7-init" = "registry.access.redhat.com/rhel7-init" "rhel7/rhel" = "registry.access.redhat.com/rhel7/rhel" "rhel7/rhel-atomic" = "registry.access.redhat.com/rhel7/rhel7/rhel-atomic" "ubi7/ubi" = "registry.access.redhat.com/ubi7/ubi" "ubi7/ubi-minimal" = "registry.access.redhat.com/ubi7-minimal" "ubi7/ubi-init" = "registry.access.redhat.com/ubi7-init" "ubi7" = "registry.access.redhat.com/ubi7" "ubi7-init" = "registry.access.redhat.com/ubi7-init" "ubi7-minimal" = "registry.access.redhat.com/ubi7-minimal" "rhel8" = "registry.access.redhat.com/ubi8" "rhel8-init" = "registry.access.redhat.com/ubi8-init" "rhel8-minimal" = "registry.access.redhat.com/ubi8-minimal" "rhel8-micro" = "registry.access.redhat.com/ubi8-micro" "ubi8" = "registry.access.redhat.com/ubi8" "ubi8-minimal" = "registry.access.redhat.com/ubi8-minimal" "ubi8-init" = "registry.access.redhat.com/ubi8-init" "ubi8-micro" = "registry.access.redhat.com/ubi8-micro" "ubi8/ubi" = "registry.access.redhat.com/ubi8/ubi" "ubi8/ubi-minimal" = "registry.access.redhat.com/ubi8-minimal" "ubi8/ubi-init" = "registry.access.redhat.com/ubi8-init" "ubi8/ubi-micro" = "registry.access.redhat.com/ubi8-micro" 
"ubi8/podman" = "registry.access.redhat.com/ubi8/podman" "ubi8/buildah" = "registry.access.redhat.com/ubi8/buildah" "ubi8/skopeo" = "registry.access.redhat.com/ubi8/skopeo" "rhel9" = "registry.access.redhat.com/ubi9" "rhel9-init" = "registry.access.redhat.com/ubi9-init" "rhel9-minimal" = "registry.access.redhat.com/ubi9-minimal" "rhel9-micro" = "registry.access.redhat.com/ubi9-micro" "ubi9" = "registry.access.redhat.com/ubi9" "ubi9-minimal" = "registry.access.redhat.com/ubi9-minimal" "ubi9-init" = "registry.access.redhat.com/ubi9-init" "ubi9-micro" = "registry.access.redhat.com/ubi9-micro" "ubi9/ubi" = "registry.access.redhat.com/ubi9/ubi" "ubi9/ubi-minimal" = "registry.access.redhat.com/ubi9-minimal" "ubi9/ubi-init" = "registry.access.redhat.com/ubi9-init" "ubi9/ubi-micro" = "registry.access.redhat.com/ubi9-micro" "ubi9/podman" = "registry.access.redhat.com/ubi9/podman" "ubi9/buildah" = "registry.access.redhat.com/ubi9/buildah" "ubi9/skopeo" = "registry.access.redhat.com/ubi9/skopeo" # Rocky Linux "rockylinux" = "quay.io/rockylinux/rockylinux" # Debian "debian" = "docker.io/library/debian" # Kali Linux "kali-bleeding-edge" = "docker.io/kalilinux/kali-bleeding-edge" "kali-dev" = "docker.io/kalilinux/kali-dev" "kali-experimental" = "docker.io/kalilinux/kali-experimental" "kali-last-release" = "docker.io/kalilinux/kali-last-release" "kali-rolling" = "docker.io/kalilinux/kali-rolling" # Ubuntu "ubuntu" = "docker.io/library/ubuntu" # Oracle Linux "oraclelinux" = "container-registry.oracle.com/os/oraclelinux" # busybox "busybox" = "docker.io/library/busybox" # golang "golang" = "docker.io/library/golang" # php "php" = "docker.io/library/php" # python "python" = "docker.io/library/python" # rust "rust" = "docker.io/library/rust" # node "node" = "docker.io/library/node" # Grafana Labs "grafana/agent" = "docker.io/grafana/agent" "grafana/grafana" = "docker.io/grafana/grafana" "grafana/k6" = "docker.io/grafana/k6" "grafana/loki" = "docker.io/grafana/loki" "grafana/mimir" = "docker.io/grafana/mimir" "grafana/oncall" = "docker.io/grafana/oncall" "grafana/pyroscope" = "docker.io/grafana/pyroscope" "grafana/tempo" = "docker.io/grafana/tempo" # curl "curl" = "quay.io/curl/curl" # nginx "nginx" = "docker.io/library/nginx" # QUBIP "qubip/pq-container" = "quay.io/qubip/pq-container" home/zuul/zuul-output/artifacts/0000755000175000017500000000000015073041415016102 5ustar zuulzuulhome/zuul/zuul-output/docs/0000755000175000017500000000000015073041415015052 5ustar zuulzuul